Cut-The-Rope: A Game of Stealthy Intrusion

Comments, Follow-up work, Updates and Corrections

For convenience of the reader, and also to ease playing around with variants of the implementation, we provide the full code of CutTheRope in R, with some additional explanations inline in the code, and given separately here where more details may be needed: lines 2 through 26 are basically a preparation of the playground. We compile the graph from a collection of all routes in terms of numeric representatives, i.e., the circled numbers in Figure 1b from the paper. The action sets as1 and as2 for the defender and the attacker are likewise computed from these sets, excluding the target node as a trivial (winning) strategy for the attacker. The for loops in lines 27 and 28 merely iterate over all strategies for the defender and the attacker, while the inner for loop over all adversary types (one per starting point; defined in line 20 of the code) evaluates the sum in equation (3) of the paper.
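To see these preparation steps in isolation, here is a minimal self-contained sketch with toy routes (not the ones from the paper) that mirrors lines 2 to 14 of the listing below:

    toyTarget <- 4
    toyRoutes <- list(c(1, 2, toyTarget), c(1, 3, toyTarget))  # two toy attack paths
    toyV   <- unique(unlist(toyRoutes))   # all nodes appearing on any route: 1 2 4 3
    toyAS1 <- setdiff(toyV, toyTarget)    # the defender may check nodes 1, 2, 3
    toyAS2 <- toyRoutes                   # the attacker picks one of the two routes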

Line 29 just prepares an empty vector to take up the utility distribution \(U\) from eq. (2) in the paper later, and the variable path is the route along which the current adversary type works its way towards \(v_0\). The if in line 32 distinguishes two cases:

Case 1: The adversary (type) \(\theta\) starts from a point on the \(j\)-th route. In this case, we construct its utility by first reducing the path to the part from \(\theta\) until \(v_0\) (line 33), and second, evaluating a Poisson density for a random number \(D\in\left\{0,1,2,\ldots\right\}\) of steps taken forward (line 34). This number of steps is limited by the total distance until \(v_0\), since the adversary won't take more steps than necessary to reach \(v_0\). Line 35 is then just the conditioning on "\(D\leq~\)distance to \(v_0\)", which amounts to a simple renormalization of the density (line 35) evaluated only at the relevant points (line 34). Next, we *cut the rope* by determining whether the current defender's location i is on the route (the which statement in line 36), or whether the attacker can take the full number of steps without running into the defender (note that if i is not on the path, then the which returns an empty vector, causing min to return its second argument automatically as the only numeric input). The actual utility distribution (eq. (2) in the paper) is then evaluated on the so-reduced path \(\pi|_c=\) route[1:cutPoint], by conditioning (i.e., renormalizing) the Poisson density on this part of the route (line 37), and relocating the resulting masses to the nodes of the overall graph in line 39 (putting zero probability mass on all other locations; line 38).
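To make this concrete, here is a small standalone sketch of the truncation and cutting, for the adversary type \(\theta=3\) on the route from line 07 of the listing below, \(\lambda=2\), and the defender checking node 7; the numbers serve illustration only:

    route <- c(3, 6, 7, 10)                    # part of the route from theta = 3 to the target
    pdfD  <- dpois(0:(length(route) - 1), 2)   # Poisson(2) mass for 0, 1, 2, 3 steps forward
    pdfD  <- pdfD / sum(pdfD)                  # condition on D <= distance to v0
    cutPoint <- min(which(route == 7), length(route) + 1) - 1   # defender sits on node 7
    payoffDistr <- pdfD[1:cutPoint] / sum(pdfD[1:cutPoint])     # renormalize on the cut route
    # payoffDistr is (1/3, 2/3): the adversary is located at node 3 or node 6,
    # but never beyond the defender's checkpoint at node 7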

Case 2: The adversary \(\theta\) is on a different route than the currently considered one, in which case there is no contribution to the sum (3) in the paper, simply because the term \(U(i,\theta,\lambda)\) is undefined (the type \(\theta\) attacker does not have the \(j\)-th strategy). Here, a slight technical issue arises, since we cannot plainly use an all-zero vector as a discrete distribution \(U\), as this would not be a proper distribution. We thus emulate the absence of a contribution to the utility by assigning a dominated utility distribution in that case, so that a \(\preceq\)-maximizing attacker, even having the strategy \(j\) in its action space \(AS_2\), would never play that strategy (for it being strictly dominated). Our choice is the degenerate distribution putting mass 1 on the starting point (line 42). This effectively corresponds to the attacker not moving at all (or moving back to the start), a behavior that receives zero probability in any equilibrium.

Having constructed the payoff for the type \(\theta\) adversary, we add it to the overall payoff (eq. (3) in the paper), weighted by \(\Pr_{\Theta}(\theta)=\) Theta[type] (line 44). Once this is done for all adversary types, we embed the utility distribution \(U'\) in an object suitable for the subsequent equilibrium computation (line 46), and add it to the payoff structure for the CutTheRope instance (line 47). The remaining lines 51 and 52 are then only the construction of the game object (by a call to mosg to construct a multi-objective security game), and the computation of a multi-goal security strategy (call to mgss in line 52).
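In formulas, and loosely keeping the notation of the paper, the inner loop over the types thus accumulates \(U' = \sum_{\theta}\Pr_{\Theta}(\theta)\cdot U(i,\theta,\lambda)\), with the uniform prior \(\Pr_{\Theta}(\theta)=1/n\) set up in line 23 of the code.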

Let us leave aside the technical details of how games are constructed for the HyRiM package, as those are of no interest for the moment. The parameters directly correspond to telling the system that we have a discrete distribution (discrete = TRUE), specified as a probability density function (dataType = "pdf") supported on the set \(V\) (supp = range(V), by default assumed to be a set of consecutive integers). The remaining parameters are explained below; the user may safely skip this part without losing anything for the discussion of the resulting PBE.
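As a minimal usage sketch (with toy density values chosen only for illustration, but the same parameters as in line 46 of the listing below), such a distribution object would be created like this:

    library(HyRiM)
    toyU  <- c(0.5, 0.25, 0.25, rep(0, 7))   # a toy density over the 10 node labels
    toyLd <- lossDistribution(toyU, discrete = TRUE, dataType = "pdf",
                              supp = c(1, 10), smoothing = "always", bw = 0.2)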

By convention of the HyRiM package for R, the support of an admissible lossDistribution density consists only of points having positive mass. The package depends on this being satisfied, but our game construction will in many cases assign zero mass to some points in \(V\), since it cuts attack paths at the starting point \(\theta\) and the defender's inspected location \(i\). We can bypass this technical requirement by smoothing the loss distribution \(U'\) with a Gaussian kernel of specified bandwidth bw, so as to assign a little bit of probability mass to all locations in all cases. This is the meaning of the two parameters smoothing = "always" and bw = 0.2. From a game-theoretic perspective, this is nothing else than the addition of a small "tremble" to the adversary's play: as in a *trembling hands equilibrium*, it merely means that we enforce mixed strategies by assuming that, no matter whether the attacker intends to play \(j\), there is always a tiny chance \(\varepsilon>0\) of ending up playing some \(j'\neq j\). This quantity is explicitly assigned by the kernel smoothing within the HyRiM package, and letting \(\varepsilon\to 0\) is the same as specifying a vanishing sequence of values for the parameter bw here.
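To see the tremble interpretation in isolation, the following plain-R sketch (independent of HyRiM; the strategy count and the values of eps are chosen arbitrarily for illustration) mixes an intended pure strategy with a uniform perturbation; shrinking eps recovers the pure strategy in the limit, which mirrors letting bw tend to zero:

    m <- 8                                      # number of attack routes, as in the game above
    intended <- 3                               # the route the attacker intends to play
    pure <- replace(rep(0, m), intended, 1)     # pure strategy: all mass on route 3
    for (eps in c(0.2, 0.05, 0.001)) {
      trembled <- (1 - eps) * pure + eps * rep(1/m, m)   # mix with a uniform tremble
      cat("eps =", eps, " mass on the intended route:", trembled[intended], "\n")
    }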

01  library(HyRiM); library(compare)
02  target <- 10  # remember the target node
03  routes <- list(c( 1, 2, 3, 4, 5, 6, 7, target),
04              c( 1, 2, 3, 6, 7, target),
05              c( 1, 8, 9, 7, target),
06              c( 1, 3, 4, 5, 6, 7, target),
07              c( 1, 3, 6, 7, target),
08              c( 1, 9, 7, target),
09              c( 1, 5, 4, 3, 6, 7, target),
10              c( 1, 5, 6, 7, target))
11  
12  V <- unique(unlist(routes)) # get all nodes from all routes
13  as1 <- setdiff(V, target) # the defender can check everywhere
14  as2 <- routes  # action space for the attacker
15  m <- length(as2) # size of the game
16  n <- length(as1)
17  
18  attackRate <- 2  # parameter lambda from the paper
19  # assume one adversary type per starting point
20  advTypes <- setdiff(V, target)
21  # no a priori information about the type
22  # i.e., a uniform prior
23  Theta <- rep(1/n, times = length(advTypes))  
24  
25  payoffMatrix <- list() # to collect utility distributions
26  
27  for(i in 1:n) {  # i = node inspected by the defender
28    for(j in 1:m) {  # j = index of the attack route
29      U <- rep(0, length(V))  # empty vector to take up the utility distribution U
30      path <- as2[[j]]  # the j-th attack route
31      for(type in advTypes) {  # one adversary type per starting point
32        if (type %in% path) {  # case 1: type theta starts on the j-th route
33          route <- path[which(path == type):length(path)]  # part of the route from theta to the target
34          pdfD <- dpois(x = 0:(length(route) - 1), lambda = attackRate)  # Poisson mass for 0, 1, ... steps forward
35          pdfD <- pdfD / sum(pdfD)  # condition on D <= distance to v0
36          cutPoint <- min(which(route == i), length(route) + 1) - 1  # cut the rope at the defender's location i
37          payoffDistr <- pdfD[1:cutPoint] / sum(pdfD[1:cutPoint])  # renormalize on the cut route
38          L <- rep(0, length(V))  # zero mass on all other locations
39          L[route[1:cutPoint]] <- payoffDistr  # relocate the masses to the graph nodes
40        }
41        else {  # case 2: type theta is not on this route
42          L <- c(1, rep(0, length(V)-1))  # dominated degenerate distribution (mass 1 on the start)
43        }
44        U <- U + Theta[type] * L  # weight by the prior probability of this type
45      }
46      ld <- lossDistribution(U, discrete = TRUE, dataType = "pdf", supp = range(V), smoothing = "always", bw = 0.2)
47      payoffMatrix <- append(payoffMatrix, list(ld))
48    }
49  }
50  # construct and solve the game
51  G <- mosg(n,m,1,losses = payoffMatrix, byrow = TRUE)
52  eq <- mgss(G)
53  print(eq)