> top > docs > PMC:7796058 > spans > 52791-56255 > annotations

PMC:7796058 / 52791-56255 JSONTXT

Annnotations TAB JSON ListView MergeView

LitCovid-PubTator

Id Subject Object Predicate Lexical cue tao:has_database_id
387 113-118 Species denotes Human Tax:9606
388 272-280 Disease denotes coughing MESH:D003371
389 653-661 Disease denotes COVID-19 MESH:C000657245
390 759-767 Disease denotes coughing MESH:D003371
391 817-825 Disease denotes COVID-19 MESH:C000657245
395 1309-1315 Species denotes people Tax:9606
396 1430-1436 Species denotes people Tax:9606
397 903-911 Disease denotes coughing MESH:D003371
399 2101-2109 Disease denotes COVID-19 MESH:C000657245
403 2480-2488 Disease denotes coughing MESH:D003371
404 2810-2818 Disease denotes coughing MESH:D003371
405 2890-2898 Disease denotes coughing MESH:D003371

LitCovid-PD-HP

Id Subject Object Predicate Lexical cue hp_id
T5 272-280 Phenotype denotes coughing http://purl.obolibrary.org/obo/HP_0012735
T6 759-767 Phenotype denotes coughing http://purl.obolibrary.org/obo/HP_0012735
T7 903-911 Phenotype denotes coughing http://purl.obolibrary.org/obo/HP_0012735
T8 2480-2488 Phenotype denotes coughing http://purl.obolibrary.org/obo/HP_0012735
T9 2810-2818 Phenotype denotes coughing http://purl.obolibrary.org/obo/HP_0012735
T10 2890-2898 Phenotype denotes coughing http://purl.obolibrary.org/obo/HP_0012735

LitCovid-sentences

Id Subject Object Predicate Lexical cue
T376 0-4 Sentence denotes 5.5.
T377 5-41 Sentence denotes Video-Based Risky Behavior Detection
T378 42-112 Sentence denotes Camera stream processing is a popular and quick way to detect objects.
T379 113-226 Sentence denotes Human behaviors and actions can be detected as objects from the video frames using a trained deep learning model.
T380 227-477 Sentence denotes For the detection of risky behaviors such as coughing, hugging, handshaking, and doorknob touching, the You Only Look Once version3 (YOLOv3) which is suitable for real-time behavior detection for online video streams, was trained and applied [63,64].
T381 478-600 Sentence denotes This library classifies and localizes detected objects in one step with a speed of faster than 40 frames per second (FPS).
T382 601-682 Sentence denotes We considered two main types of risky behaviors for COVID-19 indoor transmission:
T383 683-769 Sentence denotes Group risky behaviors (e.g., hugging) and individual risky behaviors (e.g., coughing).
T384 770-877 Sentence denotes Figure 12 illustrates how to train a model for COVID-19 transmission risky behavior detection using YOLOv3.
T385 878-1088 Sentence denotes In total, 603 images for coughing, 634 images for hugging, 608 images for handshaking, and 623 images for door touching were used from COCO dataset [62] for transfer learning for the pre-trained model (YOLOv3).
T386 1089-1167 Sentence denotes These images were taken from free sources found through Google image searches.
T387 1168-1227 Sentence denotes For labelling objects, a semi-automatic method was applied.
T388 1228-1271 Sentence denotes Darknet library was also used for training.
T389 1272-1464 Sentence denotes For individual behaviors, all of the people in images were detected and labelled in a text file whilst the algorithm aggregated intersected bounding boxes of people into a single bounding box.
T390 1465-1572 Sentence denotes As wrong labels might be generated, the images should be manually checked to correct misclassified objects.
T391 1573-1666 Sentence denotes For this step 80 percent of the images were selected for training and 20 percent for testing.
T392 1667-1745 Sentence denotes To increase the accuracy of this model, the configuration in Table 3 was used.
T393 1746-1827 Sentence denotes To increase training accuracy and speed, a transfer learning process was applied.
T394 1828-1945 Sentence denotes The base layer is a pre-trained YOLOv3 that uses the COCO dataset for all of the layers of our model except the last.
T395 1946-2131 Sentence denotes Transfer learning helps with training by exploiting the knowledge of a pre-trained supervised model to address the problems of small training datasets for COVID-19 risky behaviors [65].
T396 2132-2337 Sentence denotes To evaluate the accuracy of the model, we tried to check the results for different video datasets by exporting all of the frames for detection under various circumstances for the metrics listed in Table 4.
T397 2338-2543 Sentence denotes After studying the outcomes, we found that the “hugging” and “handshaking” classes experienced the highest false negative results compared to coughing as the larger dataset was being prepared for training.
T398 2544-2670 Sentence denotes It appeared that hugging and handshaking (grouping actions) were more varied in terms of the types of handshaking and hugging.
T399 2671-2760 Sentence denotes Therefore, training precision could be improved with the preparation of more varied data.
T400 2761-2947 Sentence denotes Moreover, some of the false positive results for coughing showed that in most cases, moving a hand near the face was detected as coughing, regardless whether it had actually taken place.
T401 2948-3026 Sentence denotes Furthermore, the number of false negatives increased in a more populated area.
T402 3027-3112 Sentence denotes Detected touching behavior results demonstrated high numbers of false negative cases.
T403 3113-3217 Sentence denotes About 75 percent of false negative cases occurred when the predictor incorrectly detected small objects.
T404 3218-3335 Sentence denotes Therefore, specifying limitations for box sizes and level of confidence for the predictor can reduce false negatives.
T405 3336-3464 Sentence denotes The results of evaluating precision, recall, F-score, and number of samples for each behavior action class is listed in Table 5.