PMC:4996402


{"target":"https://pubannotation.org/docs/sourcedb/PMC/sourceid/4996402","sourcedb":"PMC","sourceid":"4996402","source_url":"https://www.ncbi.nlm.nih.gov/pmc/4996402","text":"4. Alternative NCA-Based Algorithms\nIn this section, we review some alternative NCA-based algorithms that were also recently reported in the literature. Different from the algorithms discussed in Section 3, where mNCA, gNCA, NCAr and gfNCA utilize the NCA algorithm to infer TRNs, the algorithms discussed in this section focus on designing more efficient algorithms to estimate the matrices A and S in the NCA system model Equation (1). These algorithms can be roughly classified into two classes, namely the iterative and the non-iterative class.\n\n4.1. Iterative NCA Algorithms\nAs described in Section 2, NCA adopts the ALS approach to iteratively update matrices A and S. Therefore, NCA, along with all of the algorithms that employ ALS, such as mNCA, gNCA, NCAr and gfNCA, is an iterative method. Another example of iterative methods, referred to as robust NCA (ROBNCA), is reviewed next.\n\nRobust NCA\nROBNCA [29] is a robust NCA-based approach that tries to cope with the possible noise and outliers present in the microarray data due to erroneous measurements and/or the abnormal response of genes [30]. To counteract the presence of outliers, the system model of TRNs is formulated as:(5) X=AS+O+Γ where matrix O models explicitly the presence of outliers. Since typically, only a few outliers exist, the outlier matrix O represents a column-sparse matrix. Accounting for the sparsity of matrix O, ROBNCA aims to solve the following optimization problem:(6) {A^,S^,O^}=argminA,S,O||X−AS−O||F2+λ0||O||0s.t.   A(I)=0, where ||O||0 denotes the number of nonzero columns in O and λ0 is a penalization parameter used to control the extent of sparsity of O. Due to the intractability and high complexity of computing the l0-norm-based optimization problem, the problem Equation (6) is relaxed to:(7) {A^,S^,O^}=argminA,S,O||X−AS−O||F2+λ2||O||2,cs.t.   A(I)=0 where ||O||2,c stands for the column-wise l2-norm sum of O, i.e., ||O||2,c=∑k=1K||ok||2, where ok denotes the k-th column of O. Since the optimization problem Equation (7) is not jointly convex with respect to {A,S,O}, an iterative algorithm is performed in [29] to optimize Equation (7) with respect to one parameter at a time.\nTowards this end, the ROBNCA algorithm at iteration j assumes that the values of A and O from iteration (j−1), i.e., A(j−1) and O(j−1), are known. Defining Y(j)=X−O(j−1), the update of S(j) can be calculated by carrying out the optimization problem:S(j)=argminS||Y(j)−A(j−1)S||F2 which admits a closed-form solution. The next step of ROBNCA at iteration j is to update A(j) while fixing O and S to O(j−1) and S(j), respectively. This can be performed via the following optimization problem:(8) A(j)=argminA||Y(j)−AS(j)||F2.s.t.   A(I)=0\nThe problem Equation (8) was also considered in the original NCA paper [17] in which a closed-form solution was not provided. Since this optimization problem has to be conducted at each iteration, a closed-form solution is derived in ROBNCA using the re-parameterization of variables and the Karush–Kuhn–Tucker (KKT) conditions to reduce the computational complexity and improve the convergence speed of the original NCA algorithm. In the last step, the iterative algorithm estimates the outlier matrix O by using the iterates A(j) and S(j) obtained in the previous steps, i.e., (9) O(j)=argminok||C(j)−O||22+λ2||O||2,c where C(j)=X−A(j)S(j). 
It can be observed that at each iteration, the updates of the matrices A, S and O all assume a closed-form expression, and it is this aspect that significantly reduces the computational complexity of ROBNCA when compared to the original NCA algorithm. In addition, the term $\lambda_2 \|O\|_{2,c}$ guarantees the robustness of the ROBNCA algorithm against outliers. Simulation results in [29] also show that ROBNCA estimates TFAs and the TF-gene connectivity matrix with a much higher accuracy, in terms of normalized mean square error, than FastNCA [31] and non-iterative NCA (NINCA) [32], irrespective of the noise level, the level of correlation and the outliers.

4.2. Non-Iterative NCA Algorithms

This section presents four fundamental non-iterative methods, namely fast NCA (FastNCA) [31], positive NCA (PosNCA) [33], non-negative NCA (nnNCA) [34] and non-iterative NCA (NINCA) [32]. These algorithms employ the subspace separation principle (SSP) and overcome some drawbacks of the existing iterative NCA algorithms. FastNCA utilizes SSP to preprocess the noise in gene expression data and to estimate the required orthogonal projection matrices. On the other hand, in PosNCA, nnNCA and NINCA, the subspace separation principle is adopted to reformulate the estimation of the connectivity matrix as a convex optimization problem. This convex formulation provides the following benefits: (i) it ensures a global solution; (ii) it allows the usage of efficient convex programming techniques, like the interior point method [24]; and (iii) it offers the flexibility of adding additional convex constraints. Since SSP represents the core technique of these non-iterative NCA-based algorithms, this important concept is explained first, in the next subsection.

4.2.1. Subspace Separation Principle

Assume matrix X is decomposed into the sum of two other matrices, X = B + Γ, where $X \in \mathbb{R}^{N \times K}$ (K < N) stands for the observed data, $B \in \mathbb{R}^{N \times K}$ represents the true signal and $\Gamma \in \mathbb{R}^{N \times K}$ denotes the noise matrix. SSP attempts to partition the range space of X into two subspaces, where one subspace is spanned by the source signal and the other is spanned by the noise. One possible way to do this is via the singular value decomposition (SVD). Specifically, the SVD of X takes the form:

$$X = U \Sigma V^T = \sum_{k=1}^{K} \sigma_k u_k v_k^T \tag{10}$$

where the singular values are arranged in descending order, $\sigma_1 \geq \sigma_2 \geq \ldots \geq \sigma_K \geq 0$. When the noise level is low and the signal matrix is not ill-conditioned, the significant singular values (those with larger magnitudes) correspond to the signal subspace, and the remaining negligible singular values correspond to the noise subspace. Under the assumption of keeping L (L < K) singular values as the signal singular values, the SVD in Equation (10) can be decomposed into two components, corresponding to the signal ($X_L$) and the noise ($X_R$), respectively:

$$X = \underbrace{U_L \Sigma_L V_L^T}_{X_L} + \underbrace{U_R \Sigma_R V_R^T}_{X_R} \tag{11}$$

The first term in Equation (11), i.e., $X_L$, is called the rank-L Eckart–Young–Mirsky (EYM) approximation of X and represents a higher signal-to-noise ratio (SNR) representation of X. Matrix $\Sigma_L$ is a diagonal matrix containing the first L singular values, which correspond to the signal component; $U_L$ and $V_L$ contain the corresponding left and right singular vectors, respectively. Similarly, $\Sigma_R$ is a diagonal matrix containing the last K−L singular values, which correspond to the noise part, and $U_R$ and $V_R$ contain the left and right noise singular vectors, respectively. Hence, the space of the observed measurements is approximately decomposed into two separate subspaces: the signal and the noise subspace, respectively. If we further express X as the product of two matrices $A \in \mathbb{R}^{N \times M}$ and $S \in \mathbb{R}^{M \times K}$ plus noise, i.e., X = AS + Γ as in Equation (1), it is shown in [32] that $U_R$ represents a robust approximation of a basis of the left null space of A in the case L = M.
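As a concrete illustration of Equations (10) and (11), the short Python sketch below (ours, with illustrative names) performs the SVD-based separation and verifies on synthetic data that $U_R$ is nearly orthogonal to the columns of A when L = M:

```python
import numpy as np

def subspace_separation(X, L):
    """Split X into signal/noise subspaces via the SVD (Equations (10)-(11)).

    Returns the rank-L EYM approximation X_L, the signal basis U_L and the
    noise basis U_R (an approximate left null space of A when L = M).
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=True)
    U_L, U_R = U[:, :L], U[:, L:]
    X_L = U_L @ np.diag(s[:L]) @ Vt[:L]   # signal component of Equation (11)
    return X_L, U_L, U_R

# Toy usage: X = A S + noise, with M = 3 sources and L = M
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 3))
S = rng.normal(size=(3, 10))
X = A @ S + 0.01 * rng.normal(size=(20, 10))
X_L, U_L, U_R = subspace_separation(X, L=3)
print(np.linalg.norm(U_R.T @ A))  # ~0: U_R approximates the left null space of A
```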
4.2.2. FastNCA

FastNCA [31] provides a closed-form solution to NCA and, at the same time, overcomes the speed limitations of the original NCA. FastNCA employs a series of matrix partitionings and orthogonal projections to estimate the connectivity matrix on a column-by-column basis. Once matrix A is estimated, matrix S is obtained by a direct application of the least-squares principle:

$$S = A^{\dagger} X \tag{12}$$

Next, a detailed explanation of the FastNCA approach to estimate the first column of A, i.e., $a_1$, in both the noiseless and the noisy case is presented. The same analysis can be repeated for the remaining columns, since the columns of A can be re-ordered by appropriately permuting the rows of S.

In the ideal case where no noise exists, the system model in Equation (1) assumes the form:

$$X = AS \tag{13}$$

Without loss of generality, the elements of $a_1$ are rearranged such that the nonzero elements are located at the beginning of the vector and the zero elements are placed at the end:

$$a_1 = \begin{bmatrix} \tilde{a}_1 \\ 0 \end{bmatrix} \tag{14}$$

Then, Equation (13) can be partitioned as:

$$X = \begin{bmatrix} X_c \\ X_r \end{bmatrix} = \begin{bmatrix} \tilde{a}_1 & A_c \\ 0 & A_r \end{bmatrix} \begin{bmatrix} s_1^T \\ S_r \end{bmatrix} = \begin{bmatrix} \tilde{a}_1 s_1^T + A_c S_r \\ A_r S_r \end{bmatrix} \tag{15}$$

Taking the transpose of Equation (15) results in:

$$X_c^T = s_1 \tilde{a}_1^T + S_r^T A_c^T \tag{16}$$

$$X_r^T = S_r^T A_r^T \tag{17}$$

Extracting $\tilde{a}_1$ is possible if the term $S_r^T A_c^T$ in Equation (16) can be eliminated. This can be achieved by means of an orthogonal projection. Denoting by $P^{\perp}_{S_r^T}$ the projection matrix onto the orthogonal complement of the range space of $S_r^T$ and multiplying Equation (16) by $P^{\perp}_{S_r^T}$ leads to:

$$P^{\perp}_{S_r^T} X_c^T = P^{\perp}_{S_r^T} s_1 \tilde{a}_1^T = \tilde{s}_1 \tilde{a}_1^T \tag{18}$$

where $\tilde{s}_1 = P^{\perp}_{S_r^T} s_1$. Therefore, the challenge is to find $P^{\perp}_{S_r^T}$. From Equation (17), the range space of $S_r^T$ coincides with the range space of $X_r^T$, since $S_r^T$ is full column rank (third NCA criterion) and $A_r^T$ is full row rank (first NCA criterion). Hence, $P^{\perp}_{S_r^T} = P^{\perp}_{X_r^T}$, which is computable from the data, and so $P^{\perp}_{S_r^T} X_c^T$ is known. Therefore, a rank-one factorization of $P^{\perp}_{S_r^T} X_c^T$ yields an estimate of $\tilde{a}_1^T$ up to a scalar ambiguity; it is given by the first right singular vector of $P^{\perp}_{S_r^T} X_c^T$.

In the noisy case, as described by Equation (1), FastNCA handles the noise in the gene expression measurements by using the concept of subspace separation. This is done by replacing the noisy observation matrix X with its rank-L EYM approximation $X_L$ (see Equation (11)), so that $X \approx U_L \Sigma_L V_L^T$ and, moreover:

$$W = U_L = X V_L \Sigma_L^{-1} = (AS + \Gamma) V_L \Sigma_L^{-1} = A \tilde{S} + \tilde{\Gamma} \tag{19}$$

where $U_L$ is denoted by W for simplicity, $\tilde{S} = S V_L \Sigma_L^{-1}$ and $\tilde{\Gamma} = \Gamma V_L \Sigma_L^{-1}$.

Partitioning W in the same way as in Equation (15) yields:

$$W = \begin{bmatrix} W_c \\ W_r \end{bmatrix} = \begin{bmatrix} \tilde{a}_1 & A_c \\ 0 & A_r \end{bmatrix} \begin{bmatrix} \tilde{s}_1^T \\ \tilde{S}_r \end{bmatrix} + \begin{bmatrix} \tilde{\Gamma}_c \\ \tilde{\Gamma}_r \end{bmatrix} \tag{20}$$

which further results in:

$$W_c^T = \tilde{s}_1 \tilde{a}_1^T + \tilde{S}_r^T A_c^T + \tilde{\Gamma}_c^T \tag{21}$$

$$W_r^T = \tilde{S}_r^T A_r^T + \tilde{\Gamma}_r^T \tag{22}$$

Due to the noise, a direct repetition of the noiseless-case analysis is not applicable, because $P^{\perp}_{\tilde{S}_r^T} \neq P^{\perp}_{W_r^T}$. The subspace separation principle, however, provides an estimate of $P^{\perp}_{\tilde{S}_r^T}$. Consider the following SVD of $W_r$:

$$W_r = U_1 \Sigma_1 V_1^T + U_0 \Sigma_0 V_0^T \tag{23}$$

where $\Sigma_1$ and $\Sigma_0$ contain the leading M−1 and the last L−M+1 singular values, respectively. Then, an estimate of $P^{\perp}_{\tilde{S}_r^T}$ is given by:

$$\hat{P}^{\perp}_{\tilde{S}_r^T} = V_0 V_0^T \tag{24}$$

Similar to the noiseless case, $\tilde{a}_1^T$ can be obtained by applying a rank-one factorization to $\hat{P}^{\perp}_{\tilde{S}_r^T} W_c^T$.
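The sketch below chains these steps for the noisy case. It assumes, for illustration only, that the support of $a_1$ and the number of TFs M are known and sets L = M; the rank-one factorization is carried out with an SVD, and the recovered column carries the scalar (sign and scale) ambiguity noted above. All names are ours, not FastNCA's.

```python
import numpy as np

def fastnca_first_column(X, nz, M):
    """FastNCA-style estimate of the first column of A (up to scale).

    X  : (N, K) noisy data matrix
    nz : (N,) boolean mask of the rows where a1 is nonzero (assumed known)
    M  : number of TFs; the EYM step of Equation (19) keeps L = M components
    """
    # Subspace separation: replace X by W = U_L (Equation (19))
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    W = U[:, :M]

    Wc, Wr = W[nz], W[~nz]                 # partition as in Equation (20)

    # Estimate the projector onto the orthogonal complement of range(S~_r^T):
    # V0 spans the trailing right singular directions of Wr (Eqs. (23)-(24))
    _, _, Vr_t = np.linalg.svd(Wr, full_matrices=True)
    V0 = Vr_t[M - 1:].T                    # last L - M + 1 right singular vectors
    P = V0 @ V0.T                          # Equation (24)

    # Rank-one factorization of P Wc^T: the top right singular vector is a~1
    _, _, vt = np.linalg.svd(P @ Wc.T, full_matrices=False)
    a1_tilde = vt[0]                       # recovered up to a scalar ambiguity

    a1 = np.zeros(X.shape[0])
    a1[nz] = a1_tilde
    return a1
```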
4.2.3. Positive NCA, Non-Negative NCA and Non-Iterative NCA

PosNCA [33] modifies the original NCA algorithm in two regards. The first pertains to estimating matrix A via convex optimization (instead of ALS, as in the original NCA). The second refers to the addition of positivity constraints on all of the nonzero elements of the connectivity matrix, an assumption that has biological support [35]. The positivity constraint is valid only in situations where all TFs play the same role (i.e., activating or deactivating) for their corresponding target genes. If all of the TFs regulate the genes in a negative way (deactivating), the positivity assumption is maintained by multiplying the activity values in the signal matrix by −1. This positivity assumption is a convex constraint, which integrates perfectly with the convex formulation of the problem.

The essence of the formulation of PosNCA as a convex optimization problem relies on the orthogonality between the range space and the left null space. The challenge, however, is to find a basis for the left null space of A. Let C be a basis for the left null space of A; then, it follows that:

$$C^T A = 0. \tag{25}$$

In the ideal case (X = AS), the range space and the left null space of A coincide with those of X. This is because A is full column rank (first criterion of NCA) and S is full row rank (third criterion of NCA). Therefore, C is obtained directly from X. In contrast to the noiseless case, there is no direct access to C in the noisy case; alternatively, SSP provides a robust approximation of C. Consider the SVD $X = U \Sigma V^T$, and let U be partitioned as $U = [U_L, U_R]$, where $U_L$ is of dimensions N×M and $U_R$ is of dimensions N×(N−M). Then, based on the discussion in Section 4.2.1, $U_R$ represents an approximation of C ($\hat{C} = U_R$). Therefore, A can be estimated by minimizing the Frobenius norm $\|\hat{C}^T A\|_F$, while maintaining both constraints, i.e., the structure of the connectivity matrix and the positivity of all of its nonzero elements. Mathematically, this problem can be formulated as follows:

$$\hat{A} = \operatorname*{arg\,min}_{A}\; \|\hat{C}^T A\|_F \quad \text{s.t.}\ A(I) = 0,\ A(J) \geq c \tag{26}$$

where J stands for the set of indices of the nonzero elements of A and c is a small positive constant. The optimization problem in Equation (26) is convex, since both the objective function and the constraints are convex. The authors of [33] used an interior point-based method [24] to solve Equation (26). After estimating A, the signal matrix is estimated using the traditional least-squares step:

$$S = A^{\dagger} X \tag{27}$$
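Because both the objective and the constraints in Equation (26) separate across the columns of A, each column can be recovered independently as a small bounded least-squares problem. The sketch below illustrates this; it is not the authors' interior-point implementation, and it uses SciPy's lsq_linear as a stand-in solver, with the support mask and the constant c assumed given.

```python
import numpy as np
from scipy.optimize import lsq_linear

def posnca_estimate_A(X, mask, M, c=1e-2):
    """PosNCA-style estimate of A (illustrative sketch of Equation (26)).

    X    : (N, K) data matrix
    mask : (N, M) boolean support of A (True on the set J, False on I)
    M    : number of TFs; c : small positive lower bound on A(J)
    """
    N = X.shape[0]
    U, _, _ = np.linalg.svd(X, full_matrices=True)
    C_hat = U[:, M:]          # C^ = U_R, approximate left null space of A

    A = np.zeros((N, M))
    for j in range(M):
        supp = mask[:, j]
        if supp.any():
            # Column subproblem: minimize ||C^_T[:, supp] a||_2 s.t. a >= c
            res = lsq_linear(C_hat.T[:, supp], np.zeros(N - M),
                             bounds=(c, np.inf))
            A[supp, j] = res.x
    return A

# The signal matrix then follows by least squares, as in Equation (27):
# S = np.linalg.pinv(A) @ X
```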
The authors of [34] proposed nnNCA, which exploits the separable nature of the estimation problem for matrix A in Equation (26) to obtain a computationally-efficient version of their previously-reported algorithm, PosNCA. In PosNCA, matrix A is estimated in one shot by solving the optimization problem in Equation (26). On the other hand, nnNCA estimates the columns of A in parallel, since each column of the connectivity matrix can be estimated independently of the other columns [34].

Later, NINCA [32] was proposed to further improve the computational efficiency and the estimation accuracy of the framework reported in PosNCA. Analogous to nnNCA, NINCA estimates matrix A on a column-by-column basis. In addition, NINCA does not impose a positivity constraint on the nonzero elements of the connectivity matrix; instead, it imposes the constraint $1^T a_i = 1$ on each column of matrix A to avoid the trivial solution. As for the procedure to estimate the TF activity matrix S, instead of the traditional least-squares step adopted in [33] and [34], NINCA employs a total least-squares (TLS) algorithm [36] that not only considers the error in S, but also weighs the error in A.
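A standard SVD-based TLS step in the Golub–Van Loan style, which can play the role of this final S-estimation stage, is sketched below; the specific weighting used by NINCA's TLS variant is detailed in [36], so this should be read as a generic illustration.

```python
import numpy as np

def tls_solve(A, X):
    """Total least-squares solution of A S = X (classic SVD construction).

    Stacks [A X], takes the trailing right singular subspace and reads the
    solution off its blocks: S = -V12 V22^{-1}. Assumes N >= M + K and that
    V22 is invertible (generically true).
    """
    M = A.shape[1]
    _, _, Vt = np.linalg.svd(np.hstack([A, X]), full_matrices=True)
    V = Vt.T
    V12 = V[:M, M:]    # top-right block of V
    V22 = V[M:, M:]    # bottom-right block of V
    return -V12 @ np.linalg.inv(V22)

# In the noiseless case, TLS coincides with ordinary least squares:
rng = np.random.default_rng(1)
A = rng.normal(size=(50, 4))
S = rng.normal(size=(4, 6))
print(np.allclose(tls_solve(A, A @ S), S))  # True
```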