2.2 Effective computations

Proposition 1. For all n ≥ 1 we have

    Π^n = [ R^n   y_{n-1} ]
          [  0       1    ]        with y_{n-1} = (I + R + … + R^{n-1}) v        (4)

Proof. This is trivial to establish by induction on n using block matrix multiplications.    □

We hence get the

Corollary 2 (direct p-value). For all n ≥ 1 we have, for all 1 ≤ i ≤ k,

    ℙ(E_n | X_1 = s_i) = [y_{n-2}]_i    and    ℙ(E_n) = Σ_{i=1}^{k} ℙ(X_1 = s_i) [y_{n-2}]_i        (5)

with y_{n-2} computable through the following recurrence relations:

    x_0 = y_0 = v    and, for all j ≥ 0,    x_{j+1} = R x_j    and    y_{j+1} = y_j + x_{j+1}        (6)

Proof. Simply use Proposition 1 to rewrite equations (1) and (3). The recurrence relations are then obvious to establish.    □

And we also get the

Corollary 3 (complementary p-value). For all n ≥ 1 we have, for all 1 ≤ i ≤ k,

    ℙ(Ē_n | X_1 = s_i) = [x_{n-1}]_i    and    ℙ(Ē_n) = Σ_{i=1}^{k} ℙ(X_1 = s_i) [x_{n-1}]_i        (7)

where x_0 is a size L − 1 column vector filled with ones and x_{n-1} = R^{n-1} x_0 is computable through the following recurrence relation:

    for all j ≥ 0,    x_{j+1} = R x_j        (8)

Proof. Π being a stochastic matrix, Π^{n-1} is also stochastic; it is therefore clear that the sum of R^{n-1} over its columns (i.e. its vector of row sums R^{n-1} x_0) gives 1 − y_{n-2}, and the corollary is proved.    □

Using these two corollaries, it is therefore possible to compute accurately the p-value of the event, or of its complement, with a complexity of O(L + ζ) in memory and O(n × ζ) in time, where ζ is the number of nonzero terms in the matrix R. In the worst case ζ = (L − 1)², but the FMCI technique usually leads to a very sparse structure for R. One should note that these dramatic improvements over the naive approach can be pushed even further by exploiting the structure of R itself, but this has to be done specifically for each problem considered. We will give detailed examples of this in both our application parts; for the moment, we focus on the general case, for which we give algorithms.
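Recurrences (6) and (8) translate directly into short matrix-vector routines. The sketch below is illustrative only: the matrix R, the absorption vector v, the starting distribution mu, and the toy 3-state chain are all hypothetical stand-ins for a concrete FMCI, and SciPy's CSR format is one way to keep the announced O(L + ζ) memory and O(ζ) cost per product R x.

```python
import numpy as np
from scipy import sparse

def direct_pvalue(R, v, mu, n):
    """P(E_n) via recurrence (6): x_0 = y_0 = v, then x_{j+1} = R x_j and
    y_{j+1} = y_j + x_{j+1}, so y_{n-2} = (I + R + ... + R^{n-2}) v.
    Assumes n >= 2; each iteration costs O(zeta) when R is sparse."""
    x, y = v.copy(), v.copy()
    for _ in range(n - 2):
        x = R @ x          # x_{j+1} = R x_j
        y = y + x          # y_{j+1} = y_j + x_{j+1}
    return float(mu @ y)   # P(E_n) = sum_i P(X_1 = s_i) [y_{n-2}]_i

def complementary_pvalue(R, mu, n):
    """P(complement of E_n) via recurrence (8), starting from the
    size L-1 column vector of ones: x_{n-1} = R^{n-1} x_0."""
    x = np.ones(R.shape[0])
    for _ in range(n - 1):
        x = R @ x          # x_{j+1} = R x_j
    return float(mu @ x)

# Hypothetical toy chain on 3 states, state 3 final and absorbing:
# Pi = [[0.5, 0.3, 0.2], [0.2, 0.5, 0.3], [0, 0, 1]], so
R = sparse.csr_matrix([[0.5, 0.3], [0.2, 0.5]])
v = np.array([0.2, 0.3])
mu = np.array([1.0, 0.0])   # start in state 1
p = direct_pvalue(R, v, mu, n=3)          # -> 0.39
q = complementary_pvalue(R, mu, n=3)      # -> 0.61
```

As a sanity check, p + q = 1, which is exactly the row-sum identity used in the proof of Corollary 3 (the rows of Π^{n-1} sum to one, hence R^{n-1} x_0 = 1 − y_{n-2}).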