PMC:7782580 / 56113-58140
Annotations
LitCovid-PubTator
{"project":"LitCovid-PubTator","denotations":[{"id":"361","span":{"begin":521,"end":548},"obj":"Disease"}],"attributes":[{"id":"A361","pred":"tao:has_database_id","subj":"361","obj":"MESH:C537866"}],"namespaces":[{"prefix":"Tax","uri":"https://www.ncbi.nlm.nih.gov/taxonomy/"},{"prefix":"MESH","uri":"https://id.nlm.nih.gov/mesh/"},{"prefix":"Gene","uri":"https://www.ncbi.nlm.nih.gov/gene/"},{"prefix":"CVCL","uri":"https://web.expasy.org/cellosaurus/CVCL_"}],"text":"Based on the four blocks, two frameworks were designed for the classification task and regression task, respectively.Classification framework: The CNNCF consisted of stage I and stage II, as shown in Fig. 3a. Stage I was duplicated Q times in the framework (in this study, Q = 1). It consisted of multiple ResBlock-A with a number of M (in this study, M = 2), one ResBlock-B, and one Control Gate Block. Stage II consisted of multiple ResBlock-A with a number of N (in this study, N = 2) and one ResBlock-B. The weighted cross-entropy loss function was used and was minimized using the SGD optimizer with a learning rate of a1 (in this study, a1 = 0.01). A warm-up strategy58 was used in the initialization of the learning rate for a smooth training start, and a reduction factor of b1 (in this study, b1 = 0.1) was used to reduce the learning rate after every c1 (in this study, c1 = 10) training epochs. The model was trained for d1 (in this study, d1 = 40) epochs, and the model parameters saved in the last epoch was used in the test phase.\nRegression framework: The CNNRF (Fig. 3b) consisted of two parts (stage II and the regressor). The inputs to the regression framework were the images of the lesion areas, and the output was the corresponding vector with five dimensions, representing the five clinical indicators (all clinical indicators were normalized to a range of 0–1). The stage II structure was the same as that in the classification framework, except for some parameters. 
The loss function was the MSE loss function, which was minimized using the SGD optimizer with a learning rate of a2 (in this study, a2 = 0.01). A warm-up strategy was used in the initialization of the learning rate for a smooth training start, and a reduction factor of b2 (in this study, b2 = 0.1) was used to reduce the learning rate after every c2 (in this study, c2 = 50) training epochs. The framework was trained for d2 (in this study, d2 = 200) epochs, and the model parameters saved in the last epoch were used in the test phase."}
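The classification training schedule above (base learning rate a1 = 0.01, reduction factor b1 = 0.1 applied every c1 = 10 epochs, d1 = 40 epochs total) can be sketched as a simple schedule function. Note this is a minimal sketch: the paper only cites a warm-up strategy (ref. 58) without specifying its length or shape, so the linear warm-up over `warmup_epochs` epochs is an assumption, as is applying the step decay from epoch 0 rather than from the end of warm-up.

```python
def lr_at_epoch(epoch, base_lr=0.01, decay_factor=0.1, step_size=10,
                warmup_epochs=5):
    """Learning rate for a given 0-indexed epoch.

    base_lr, decay_factor and step_size mirror the paper's a1 = 0.01,
    b1 = 0.1 and c1 = 10. warmup_epochs (and the linear ramp shape)
    is a hypothetical choice -- the paper cites a warm-up strategy
    but does not give its parameters.
    """
    if epoch < warmup_epochs:
        # Linear ramp up to base_lr for a smooth training start.
        return base_lr * (epoch + 1) / warmup_epochs
    # Step decay: multiply by decay_factor after every step_size epochs.
    return base_lr * decay_factor ** (epoch // step_size)


# Full schedule for the d1 = 40 training epochs.
schedule = [lr_at_epoch(e) for e in range(40)]
```

Under these assumptions the rate ramps to 0.01 during warm-up, then drops to 0.001 at epoch 10, 1e-4 at epoch 20, and 1e-5 at epoch 30.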
LitCovid-sentences
{"project":"LitCovid-sentences","denotations":[{"id":"T421","span":{"begin":0,"end":142},"obj":"Sentence"},{"id":"T422","span":{"begin":143,"end":208},"obj":"Sentence"},{"id":"T423","span":{"begin":209,"end":280},"obj":"Sentence"},{"id":"T424","span":{"begin":281,"end":403},"obj":"Sentence"},{"id":"T425","span":{"begin":404,"end":507},"obj":"Sentence"},{"id":"T426","span":{"begin":508,"end":654},"obj":"Sentence"},{"id":"T427","span":{"begin":655,"end":905},"obj":"Sentence"},{"id":"T428","span":{"begin":906,"end":1044},"obj":"Sentence"},{"id":"T429","span":{"begin":1045,"end":1066},"obj":"Sentence"},{"id":"T430","span":{"begin":1067,"end":1139},"obj":"Sentence"},{"id":"T431","span":{"begin":1140,"end":1384},"obj":"Sentence"},{"id":"T432","span":{"begin":1385,"end":1489},"obj":"Sentence"},{"id":"T433","span":{"begin":1490,"end":1633},"obj":"Sentence"},{"id":"T434","span":{"begin":1634,"end":1882},"obj":"Sentence"},{"id":"T435","span":{"begin":1883,"end":2027},"obj":"Sentence"}],"namespaces":[{"prefix":"_base","uri":"http://pubannotation.org/ontology/tao.owl#"}],"text":"Based on the four blocks, two frameworks were designed for the classification task and regression task, respectively.Classification framework: The CNNCF consisted of stage I and stage II, as shown in Fig. 3a. Stage I was duplicated Q times in the framework (in this study, Q = 1). It consisted of multiple ResBlock-A with a number of M (in this study, M = 2), one ResBlock-B, and one Control Gate Block. Stage II consisted of multiple ResBlock-A with a number of N (in this study, N = 2) and one ResBlock-B. The weighted cross-entropy loss function was used and was minimized using the SGD optimizer with a learning rate of a1 (in this study, a1 = 0.01). A warm-up strategy58 was used in the initialization of the learning rate for a smooth training start, and a reduction factor of b1 (in this study, b1 = 0.1) was used to reduce the learning rate after every c1 (in this study, c1 = 10) training epochs. 
The model was trained for d1 (in this study, d1 = 40) epochs, and the model parameters saved in the last epoch was used in the test phase.\nRegression framework: The CNNRF (Fig. 3b) consisted of two parts (stage II and the regressor). The inputs to the regression framework were the images of the lesion areas, and the output was the corresponding vector with five dimensions, representing the five clinical indicators (all clinical indicators were normalized to a range of 0–1). The stage II structure was the same as that in the classification framework, except for some parameters. The loss function was the MSE loss function, which was minimized using the SGD optimizer with a learning rate of a2 (in this study, a2 = 0.01). A warm-up strategy was used in the initialization of the learning rate for a smooth training start, and a reduction factor of b2 (in this study, b2 = 0.1) was used to reduce the learning rate after every c2 (in this study, c2 = 50) training epochs. The framework was trained for d2 (in this study, d2 = 200) epochs, and the model parameters saved in the last epoch were used in the test phase."}
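The regression target described above — a five-dimensional vector of clinical indicators normalized to 0–1, trained against an MSE loss — can be illustrated with a short sketch. The paper does not state which normalization scheme was used, so min-max scaling over the training values is an assumption here; the MSE function is the standard mean of squared differences.

```python
def minmax_normalize(values):
    """Scale a list of raw indicator values to the 0-1 range.

    Min-max scaling is assumed -- the paper only says the five
    clinical indicators were normalized to a range of 0-1.
    """
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]


def mse(pred, target):
    """Mean squared error between two equal-length vectors,
    e.g. the 5-dimensional regressor output and its target."""
    assert len(pred) == len(target)
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
```

For example, raw indicator values `[2, 4, 6]` normalize to `[0.0, 0.5, 1.0]`, and a perfect prediction gives an MSE of 0.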