{"target":"https://pubannotation.org/docs/sourcedb/PMC/sourceid/4331678","sourcedb":"PMC","sourceid":"4331678","source_url":"https://www.ncbi.nlm.nih.gov/pmc/4331678","text":"Method\n\nNetwork-based prediction algorithm\nLet Wm∈ℝn×n(m∈{1,2,…,M}) be a weight matrix corresponding to the m-th individual functional association network. Each node of a network corresponds to one of the n proteins, and the entry Wi,jm≥0 is the association (similarity, or reliability of interaction) between proteins i and j in the m-th data source. Among the n proteins, the first l proteins have confirmed annotation, and the functional annotation of the remaining u = n - l proteins needs to be predicted. These annotated proteins have C distinct functions, and each annotated protein has a subset of the C functions. Each of these C functions corresponds to a Gene Ontology (GO) term in one of the three sub-branches (Biological Process, Molecular Function, Cellular Component) of the GO [30]. The functions of the i-th protein is represented as a label vector yi ∈ {0|1}C, where yic = 1 if the i-th protein is confirmed to have the c-th function, otherwise, yic = 0. For an unlabeled protein j, yjc = 0 (l \u003cj ≤ n, 1 ≤ c ≤ C). Here, we denote the predicted likelihood vector of the i-th protein as fi ∈ RC , and fic represents the likelihood that the i-th protein has the c-th function.\nLet W=∑m=1MαmWm be the association matrix of the composite network, where αm ≥ 0 is the weight assigned to the m-th network, representing its relevance towards the prediction task. Many network-based algorithms [13] extend the guilt-by-association rule [31], or exploit the cluster structure of protein complex [4] to predict protein functions using the PPI networks. Inspired by these work, we use a linear neighborhood propagation algorithm [32] on the composite network W to predict protein functions:\n(1) f = arg  min f ∑ i = 1 l | | f i - y i | | 2 2 + ∑ i = 1 n | | f i - ∑ j ∈ N ( i ) W i j f j | | 2 2 s . t . ∑ j = 1 n W i j = 1\nwhere N(i) is the set of proteins that have connections with the i-th protein, 0 ≤ Wij ≤ 1 is the (i, j)-th entry of W , and I is an n × n identity matrix. The first term in Eq. (1) enforces the prediction to be close to the initial annotation of the l proteins, and it is often viewed as the empirical loss term. The minimization of the second term enforces that the functions assigned to an unlabeled protein j are determined by the functions of its connected proteins in W; as such the second term acts as a smoothness loss term [33]. Eq. (1) is motivated by the observation that interacting proteins are more likely to share similar functions [31], and two proteins with similar amino acids often have similar functions [17]. The above equation can be rewritten in matrix notation as:\n(2) F = arg  min F t r ( ( F - Y ) T H ( F - Y ) ) + t r ( F T L F )\nHere, Y = [y1, y2,⋯,yn], F = [f1, f2,⋯,fn] ∈ Rn × C, H is an n × n diagonal matrix with Hii = 1 if i ≤ l, and Hii = 0 otherwise, L = (I - W )T (I - W) and I is an n × n identity matrix, tr(·) and T are the matrix trace and transpose operators, respectively. By taking the differentiation of Eq. (2) to F and setting the differentiation to zero, F can be computed as:\n(3) F = ( H + L ) - 1 H Y\nThe functional labels are organized in a hierarchy (a directed acyclic graph for GO terms, and a tree for MIPS FunCat). The more specific the functional label is in the hierarchy, the fewer member proteins this label has. 
The functional labels are organized in a hierarchy (a directed acyclic graph for GO terms, and a tree for MIPS FunCat). The more specific a functional label is in the hierarchy, the fewer member proteins it has. If a protein has a confirmed functional label, the protein is also annotated with the ancestors of that label in the hierarchy [3,30,34]. Therefore, protein function prediction is an unbalanced classification problem, and to achieve a good prediction it is important to take this issue into account [3,11]. Eq. (1) ignores the unbalanced problem and considers all functional labels as equal. To address this limitation, we modify y_{ic} into ỹ_{ic} = y_{ic} log(N / n_c^+), where N = ∑_{c=1}^{C} n_c^+ and n_c^+ is the number of proteins annotated with the c-th function. The added factor has the effect of putting more emphasis on functional labels that are more specific. This forces the optimizer to focus on the more specific functions, as opposed to the general ones, which have more member proteins. We set Ỹ = [ỹ_1, ..., ỹ_n] and F = (H + L)^{-1} H Ỹ.

Kernel target alignment

Given W_{ij} = ∑_{m=1}^{M} α_m W^m_{ij} and the available functional association networks {W^m}_{m=1}^{M}, the accuracy of protein function prediction is determined by α = [α_1, α_2, ..., α_M]. [24] and [35] have shown that a target-aligned kernel (network) can boost the performance of kernel-based classification and regression. To compute the weights to be assigned to the M individual networks, we resort to a form of kernel target alignment [24] as follows:

(4) \alpha = \arg\min_{\alpha} \; \mathrm{tr}\big((K - W)^T (K - W)\big) \quad \text{s.t.} \quad W = \sum_{m=1}^{M} \alpha_m W^m, \; \alpha_m \ge 0

where K ∈ ℝ^{n×n} is the induced target network of functional labels, defined as K = ∑_{c=1}^{C} K_c, where K_c is the c-th induced target network, computed as:

K_c(i, j) = \begin{cases} (n_c^-)^2 / l^2 & \text{if } y_{ic} = y_{jc} = 1 \\ n_c^+ n_c^- / l^2 & \text{if } y_{ic} y_{jc} = 0 \text{ and } y_{ic} + y_{jc} = 1 \text{ and } i, j \le l \\ 0 & \text{otherwise} \end{cases}

where n_c^- is the number of proteins that are not annotated with the c-th function. Since a functional label often has a relatively small number of member proteins, n_c^+ < n_c^- and (n_c^-)^2 / l^2 > n_c^+ n_c^- / l^2. From this definition, the more functions two proteins have in common, the larger the corresponding entry (the weight of the edge between them) in the target network. This idea was adapted to define the target network [15,16] and to reconstruct the functional association network [36]. Mostafavi et al. [15,16] set the entry corresponding to the edge between two proteins, one having the c-th function and the other not, to -n_c^+ n_c^- / n^2. In contrast, we set this entry to n_c^+ n_c^- / l^2. The reason is that the available GO term annotation of proteins is incomplete, is updated regularly, and suffers from a large research bias [3,21,37]. As such, y_{ic} = 0 should not simply be interpreted to mean that the i-th protein does not have the c-th function. Furthermore, for an unlabeled protein j, if W_{ij} is large, then by the guilt-by-association rule protein j is likely to share some functions with the i-th protein. By minimizing Eq. (4), we aim at assigning larger weights to networks in which highly similar proteins share many functions, and smaller weights to networks in which highly similar proteins share few or no functions. By doing so, we can assign larger weights to networks that are coherent with the functional labels.

Based on the fact that tr(KW) = vec(K)^T vec(W), where vec(K) is the vectorization operator that stacks the columns of K on top of each other, we can rewrite Eq. (4) as a non-negative linear regression problem:

(5) \alpha = \arg\min_{\alpha} \; \mathrm{tr}\big((V_K - V_W \alpha)^T (V_K - V_W \alpha)\big) \quad \text{s.t.} \; \alpha_m \ge 0, \; 1 \le m \le M

where V_K = vec(K) and V_W = [vec(W^1), ..., vec(W^M)] ∈ ℝ^{n² × M}.
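As an illustration of this reduction, the target network K can be assembled from the labels and Eq. (5) solved with an off-the-shelf non-negative least squares routine. This is a sketch under assumed binary labels with the labeled proteins first; the helper names are hypothetical and not from the paper.

import numpy as np
from scipy.optimize import nnls

def target_network(Y, l):
    """Induced target network K = sum_c K_c over the l labeled proteins.
    Y: (n, C) binary label matrix; rows >= l (unlabeled) contribute nothing."""
    n, C = Y.shape
    K = np.zeros((n, n))
    for c in range(C):
        y = Y[:l, c].astype(float)
        n_pos = y.sum()            # n_c^+: proteins annotated with function c
        n_neg = l - n_pos          # n_c^-: labeled proteins without function c
        both = np.outer(y, y)                          # pairs with y_ic = y_jc = 1
        one = np.outer(y, 1 - y) + np.outer(1 - y, y)  # exactly one of the pair annotated
        K[:l, :l] += both * n_neg**2 / l**2 + one * n_pos * n_neg / l**2
    return K

def align_weights(networks, K):
    """Eq. (5): non-negative regression of vec(K) on [vec(W^1), ..., vec(W^M)]."""
    V_W = np.stack([Wm.ravel() for Wm in networks], axis=1)  # shape (n^2, M)
    alpha, _ = nnls(V_W, K.ravel())
    return alpha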
The unified objective function

Mostafavi et al. [15,16] first utilized Eq. (4) to determine α for the individual networks, and then applied a Gaussian random field classifier [33] on the composite network W to predict protein functions. However, the composite network W optimized from the target network K is not necessarily optimal for the classifier, since the optimization of the composite network and that of the classifier are divided into two separate objectives. To avoid this limitation, we integrate these two objectives into a unified function as follows:

(6) \alpha = \arg\min_{\alpha} \; \mathrm{tr}\Big(\big(V_K - \sum_{m=1}^{M} \alpha_m v_{W^m}\big)^T \big(V_K - \sum_{m=1}^{M} \alpha_m v_{W^m}\big)\Big) + \lambda \, \mathrm{tr}\Big(F^T \big(I - \sum_{m=1}^{M} \alpha_m W^m\big)^T \big(I - \sum_{m=1}^{M} \alpha_m W^m\big) F\Big) + \lambda \, \mathrm{tr}\big((F - \tilde{Y})^T H (F - \tilde{Y})\big) \quad \text{s.t.} \; \alpha_m \ge 0, \; 1 \le m \le M

where v_{W^m} is the m-th column vector of V_W. Eq. (6) combines the objectives of network-based classification and of target network alignment. Therefore, we can enforce the composite network to be coherently optimal with respect to both objectives.

The above objective function is convex with respect to F and to α separately. Here, we introduce an EM-style [38] algorithm that iteratively optimizes F (or α) while keeping α (or F) constant. For a fixed α, we can obtain F directly from Eq. (2). For a fixed F, using tr(α_m W) = α_m tr(W) and tr(K - W) = tr(K) - tr(W), Eq. (6) can be rewritten as:

(7) \alpha = \arg\min_{\alpha} \; -2 \alpha^T V_W^T V_K + \alpha^T V_W^T V_W \alpha + \lambda \big(-2 \alpha^T \mu + \alpha^T \Theta \alpha\big) \quad \text{s.t.} \; \alpha_m \ge 0, \; 1 \le m \le M

where Θ is an M × M matrix with Θ(m_1, m_2) = tr(F^T (W^{m_1})^T W^{m_2} F), and μ is an M × 1 vector with μ_m = tr(F^T W^m F). The other terms in Eq. (6) are constant for a fixed F and irrelevant to α, so they are not included in Eq. (7). Taking the partial derivative with respect to α and setting it to zero, we obtain the following solution:

(8) \alpha = (V_W^T V_W + \lambda \Theta)^{-1} (V_W^T V_K + \lambda \mu)

It is easy to see that, if λ = 0, only the kernel target alignment criterion is used to optimize α, and MNet is similar to SW.

The MNet algorithm is presented in Algorithm 1. F^t and α^t are the computed values of F and α at the t-th iteration, and maxIter and θ are the maximum number of iterations and the convergence threshold, respectively.

Algorithm 1 MNet: Integrating Multiple Networks for Protein Function Prediction
Input:
   {W^m}_{m=1}^{M} functional association networks, Ỹ, λ, maxIter, θ
Output:
   Predicted likelihood score vectors {f_j}_{j=l+1}^{n}
1: Initialize α_m^1 = 1 (1 ≤ m ≤ M) and t = 1
2: while t < maxIter and |δ| > θ do
3:    t = t + 1
4:    Compute F^t using Eq. (3)
5:    Compute α^t using Eq. (8)
6:    δ = |α^t - α^{t-1}|
7: end while
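The alternating updates of Algorithm 1 translate directly into code. Below is a minimal sketch with dense numpy arrays; the explicit non-negativity clipping after Eq. (8), the convergence test on max |α^t - α^{t-1}|, and all names are assumptions for illustration, not the authors' implementation. K can be built as in the previous sketch.

import numpy as np

def mnet(networks, K, Y_tilde, l, lam, max_iter=10, theta=1e-4):
    """Algorithm 1: alternate between F (Eq. 3) and alpha (Eq. 8)."""
    n, M = networks[0].shape[0], len(networks)
    H = np.diag((np.arange(n) < l).astype(float))
    I = np.eye(n)
    V_W = np.stack([Wm.ravel() for Wm in networks], axis=1)
    G = V_W.T @ V_W              # V_W^T V_W, precomputed before the iterations
    b = V_W.T @ K.ravel()        # V_W^T V_K, precomputed before the iterations
    alpha = np.ones(M)           # alpha_m^1 = 1
    for _ in range(max_iter):
        W = sum(a * Wm for a, Wm in zip(alpha, networks))
        L = (I - W).T @ (I - W)
        F = np.linalg.solve(H + L, H @ Y_tilde)          # Eq. (3)
        WF = [Wm @ F for Wm in networks]
        Theta = np.array([[np.sum(WF[p] * WF[q])         # tr(F^T (W^p)^T W^q F)
                           for q in range(M)] for p in range(M)])
        mu = np.array([np.sum(F * WFm) for WFm in WF])   # mu_m = tr(F^T W^m F)
        new = np.linalg.solve(G + lam * Theta, b + lam * mu)  # Eq. (8)
        new = np.maximum(new, 0.0)   # enforce alpha_m >= 0 (clipping is our assumption)
        delta = np.abs(new - alpha).max()
        alpha = new
        if delta < theta:
            break
    return F, alpha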
Complexity analysis

Several components in Eq. (8) (i.e., V_W^T V_W ∈ ℝ^{M×M}, V_W^T V_K ∈ ℝ^{M×1}, and (W^{m_1})^T W^{m_2} ∈ ℝ^{n×n}) can be computed before the iterative process. The time complexity of tr(F^T W F) is O(Cn²). Θ is an M × M symmetric matrix, so in each iteration there are M(M + 1)/2 elements to compute, and the time complexity of computing Θ is O(M(M + 1) × Cn²). The complexity of the matrix inverse in Eq. (3) is O(n³). To avoid inverting a large matrix, iterative approximation algorithms (e.g., conjugate gradient) can be applied. Since the matrix inverse in Eq. (8) involves only an M × M matrix, and the complexity of computing μ is smaller than that of Θ, the overall time complexity of MNet is max{O(M²TCn²), O(Tn³)}, where T is the number of iterations needed for convergence. In practice, T is no more than 10. In our study, the association matrices of the individual networks and of the composite network are all sparse, with O(n) non-zero elements. For this reason, the time complexity of the above operations can be greatly reduced. The main difference between MNet and ProMK is that ProMK uses only μ, so its time complexity is max{O(MTCn²), O(Tn³)}. Given that the number of individual networks is much smaller than the number of proteins and functional labels, MNet has complexity similar to that of ProMK.
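To make the sparsity argument concrete, the following sketch solves Eq. (3) with scipy's sparse matrices and conjugate gradient instead of a dense inverse. The toy sizes, the random network, and the small ridge added for numerical safety are all assumptions.

import numpy as np
from scipy.sparse import diags, identity, random as sparse_random
from scipy.sparse.linalg import cg

n, l, C = 1000, 300, 5                          # toy problem sizes (assumed)
rng = np.random.default_rng(0)
W = sparse_random(n, n, density=0.005, format="csr", random_state=0)
row_sums = np.asarray(W.sum(axis=1)).ravel()
row_sums[row_sums == 0] = 1.0
W = diags(1.0 / row_sums) @ W                   # enforce sum_j W_ij = 1

H = diags((np.arange(n) < l).astype(float))
I = identity(n, format="csr")
A = H + (I - W).T @ (I - W) + 1e-8 * I          # H + L, kept sparse; tiny ridge keeps CG well-posed

Y = np.zeros((n, C))
Y[np.arange(l), rng.integers(0, C, size=l)] = 1.0
B = H @ Y

# Column-by-column conjugate gradient: no O(n^3) dense inverse is formed
F = np.column_stack([cg(A, B[:, c])[0] for c in range(C)])
print(F[l:l + 3])                               # scores for a few unlabeled proteins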
:"25707434-24149053-14839079","span":{"begin":5551,"end":5553},"obj":"24149053"},{"id":"25707434-20507895-14839080","span":{"begin":5577,"end":5579},"obj":"20507895"},{"id":"25707434-22522134-14839081","span":{"begin":5922,"end":5924},"obj":"22522134"},{"id":"25707434-20507895-14839082","span":{"begin":6985,"end":6987},"obj":"20507895"}],"attributes":[{"subj":"25707434-10802651-14839071","pred":"source","obj":"2_test"},{"subj":"25707434-11101803-14839072","pred":"source","obj":"2_test"},{"subj":"25707434-19574621-14839073","pred":"source","obj":"2_test"},{"subj":"25707434-11101803-14839074","pred":"source","obj":"2_test"},{"subj":"25707434-14990442-14839075","pred":"source","obj":"2_test"},{"subj":"25707434-10802651-14839076","pred":"source","obj":"2_test"},{"subj":"25707434-20479498-14839077","pred":"source","obj":"2_test"},{"subj":"25707434-20507895-14839078","pred":"source","obj":"2_test"},{"subj":"25707434-24149053-14839079","pred":"source","obj":"2_test"},{"subj":"25707434-20507895-14839080","pred":"source","obj":"2_test"},{"subj":"25707434-22522134-14839081","pred":"source","obj":"2_test"},{"subj":"25707434-20507895-14839082","pred":"source","obj":"2_test"}]}],"config":{"attribute types":[{"pred":"source","value type":"selection","values":[{"id":"2_test","color":"#93ece1","default":true}]}]}}