As discussed in ref. 57, the residual block is a CNN-based block that allows CNN models to reuse features, thus accelerating training. In this study, we developed a residual block (ResBlock-A) that uses a skip connection to retain features from different layers during forward propagation. This block (Fig. 6a) consists of a multiple-input multiple-output structure with two branches (an upper branch and a bottom branch), where input 1 and input 2 have the same size but possibly different values. Likewise, output 1 and output 2 have the same size, but output 1 does not pass through a ReLU layer.
The upper branch consists of a max-pooling layer (Max-Pooling), a convolution layer (Conv 1 × 1), and a batch normalization layer (BN). The Max-Pooling layer has a 3 × 3 kernel and a stride of 2; it downsamples input 1, retaining its features while ensuring it matches the size of the bottom-branch output before the element-wise add operation.
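The downsampling in both branches follows the standard output-size relation for a layer with kernel size k, stride s, and padding p; the max-pooling padding is not stated in the text, but p = 1 would make the two branches agree:

\[ H_{\text{out}} = \left\lfloor \frac{H_{\text{in}} + 2p - k}{s} \right\rfloor + 1 \]

For example, a 32 × 32 input yields 16 × 16 feature maps from both the 3 × 3, stride-2 max-pooling (with p = 1) and the 3 × 3, stride-2, padding-1 convolution of the bottom branch.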
The Conv 1 × 1 layer consists of multiple 1 × 1 convolution kernels, equal in number to those of the second convolution layer in the bottom branch, so as to adjust the number of channels. The BN layer applies a regulation function so that the input to each layer of the model follows a normal distribution with a mean of 0 and a variance of 1.
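Concretely, this regulation function is the standard batch-normalization transform shown below; the learnable scale γ and shift β and the stabilizing constant ε belong to the usual definition and are assumptions here, as the text describes only the zero-mean, unit-variance normalization:

\[ \hat{x} = \frac{x - \mu_{\mathcal{B}}}{\sqrt{\sigma_{\mathcal{B}}^{2} + \epsilon}}, \qquad y = \gamma \hat{x} + \beta, \]

where \mu_{\mathcal{B}} and \sigma_{\mathcal{B}}^{2} are the mean and variance computed over the current mini-batch.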
The bottom branch consists of two convolution layers, two BN layers, and two ReLU layers. The first convolution layer consists of multiple 3 × 3 convolution kernels with a stride of 2 and a padding of 1, reducing the size of the feature maps while local features are extracted. The second convolution layer consists of multiple 3 × 3 convolution kernels with a stride of 1 and a padding of 1. The ReLU function serves as the activation function, introducing a non-linear relationship between the different layers.
The output of the upper branch and the output of the bottom branch after the second BN are fused by an element-wise add operation. The fused result is output 1, and the fused result after the ReLU layer is output 2.
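The following is a minimal PyTorch sketch of ResBlock-A as described above, not the authors' implementation; the class name ResBlockA, the channel arguments, and the max-pooling padding of 1 (chosen so the two branches produce feature maps of matching spatial size) are assumptions not stated in the text.

import torch
import torch.nn as nn

class ResBlockA(nn.Module):
    """Sketch of ResBlock-A: two same-size inputs, two same-size outputs."""

    def __init__(self, in_channels, out_channels):
        super().__init__()
        # Upper branch: Max-Pooling (3 x 3, stride 2) -> Conv 1 x 1 -> BN.
        # padding=1 is an assumption so the spatial size matches the
        # stride-2 convolution of the bottom branch.
        self.upper = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
            nn.Conv2d(in_channels, out_channels, kernel_size=1),
            nn.BatchNorm2d(out_channels),
        )
        # Bottom branch: Conv 3 x 3 (stride 2, padding 1) -> BN -> ReLU ->
        # Conv 3 x 3 (stride 1, padding 1) -> BN.
        self.bottom = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(out_channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x1, x2):
        # Fuse the branches with an element-wise add; output 1 is the raw
        # sum (no ReLU), output 2 is the sum after the ReLU layer.
        fused = self.upper(x1) + self.bottom(x2)
        return fused, self.relu(fused)

block = ResBlockA(64, 128)
x = torch.randn(1, 64, 32, 32)
out1, out2 = block(x, x)  # both outputs have shape (1, 128, 16, 16)

Returning the pre-activation sum as a separate output lets a downstream block reuse the un-rectified features through the skip connection, which is consistent with the multiple-output structure described above.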
Fig. 6 The four units of the proposed framework. a ResBlock-A architecture, containing two convolution layers with 3 × 3 kernels, one convolution layer with a 1 × 1 kernel, three batch normalization layers, two ReLU layers, and one max-pooling layer with a 3 × 3 kernel. b ResBlock-B architecture; the basic unit is the same as ResBlock-A, except for output 1. c The Control Gate Block has a synaptic-based frontend architecture that controls the direction of the feature-map flow and the overall optimization direction of the framework. d The Regressor architecture is a skip-connection architecture containing one convolution layer with 3 × 3 kernels, one batch normalization layer, one ReLU layer, and three linear layers.