PMC:4331678 / 40242-43115
{"target":"https://pubannotation.org/docs/sourcedb/PMC/sourceid/4331678","sourcedb":"PMC","sourceid":"4331678","source_url":"https://www.ncbi.nlm.nih.gov/pmc/4331678","text":"Parameter sensitivity\nSome of the algorithms used for comparison need to tune several parameters, and the specification of these parameters affect the performance. These parameters and their suggested ranges are listed in Table 1 of the Additional File 1. The result of MNet depends on λ, whose purpose is to balance the kernel target alignment and the loss of the classifier on the composite network. ProMK relies on the specification of λ2 to determine the weights on individual networks. OMG needs to tune the power size r on the weights and LIG requires to set the number of subnetworks for each input network. To study the parameter sensitive of these algorithms, for MNet, we vary λ in {10-2, 10-1,⋯,105}; for ProMK, we vary λ2 in {100, 101,⋯,107}; for OMG, we vary r in {1.2, 1.5, 2, 3, 4, 5, 6}, and for LIG, we vary C in {1, 5, 10, 20, 30}. For each setting value of the parameter, we execute five-fold cross validation as in the previous experiment, and report the average result. The results of these methods on Yeast (annotated with BP functions) under different values of the parameters are reported in Figure 3. We also provide similar results (Yeast annotated with CC and BP functions, and Human annotated with BP functions) in Figures 7-9 of the Additional File 1.\nFigure 3 Parameter sensitivity of MNet, ProMK, OMG, and LIG on the Yeast dataset annotated with BP functions. When λ is set to a small value (i.e., λ = 10-2), a small emphasis is put on the classification task and a large stress on the kernel target alignment; as such, the results of MNet slightly deteriorate. These results also support our statement that optimizing the kernel target alignment (or composite network) does not necessarily result in the optimal composite network for classification. When λ = 1 or above, MNet has a stable performance and outperforms the other methods. This trend also justifies our motivation to unifying the kernel target alignment and the classifier on the composite network in a combined objective function.\nWhen λ2 is small, only one network can be chosen by ProMK, and therefore ProMK achieves a relatively poor result in this case. When λ2 increases to a value larger than 103, more kernels are used to construct the composite network, and the results of ProMK progressively improve and achieve stability when most of the networks are used to construct the composite one. The best setting of r for OMG is r = 1.5; for larger values, the results worsen and they become stable when r ≤ 3. As for LIG, the values C = 1 and C = 5 often give the best results, and LIG's performance sometimes fluctuates sharply for other settings of C.\nFrom these results, we can draw the conclusion that MNet can select an effective λ's value from a wide range, and MNet is less affected by the parameter selection problem than ProMK and the other competitive algorithms.","divisions":[{"label":"title","span":{"begin":0,"end":21}},{"label":"p","span":{"begin":22,"end":1280}},{"label":"figure","span":{"begin":1281,"end":1391}},{"label":"label","span":{"begin":1281,"end":1289}},{"label":"caption","span":{"begin":1291,"end":1391}},{"label":"p","span":{"begin":1291,"end":1391}},{"label":"p","span":{"begin":1392,"end":2027}},{"label":"p","span":{"begin":2028,"end":2653}}],"tracks":[]}