semrep-sample | | Sample annotation of SemRep, produced by Rindflesch, et al.
Rindflesch, T.C. and Fiszman, M. (2003). The interaction of domain knowledge and linguistic structure in natural language processing: interpreting hypernymic propositions in biomedical text. Journal of Biomedical Informatics, 36(6):462-477. | 11.1 K | 2023-11-29 | Testing | |
metamap-sample | | Sample annotation of MetaMap, produced by Aronson, et al.
Aronson, A.R. and Lang, F.-M. (2010). An overview of MetaMap: historical perspective and recent advances. Journal of the American Medical Informatics Association, 17(3):229-236. | 10.9 K | 2023-11-27 | Testing | |
LitCoin-GeneOrGeneProduct-v2 | | threshold = 0.93 | 8.98 K | 2023-11-29 | | |
Grays_part2_test | | | 8.57 K | 2023-11-29 | Testing | |
bionlp-st-ge-2016-test | | This is the benchmark test data set of the BioNLP-ST 2016 GE task. It includes Genia-style event annotations for 14 full-paper articles about NFκB proteins. For testing purposes, however, all annotations are blinded, which means users cannot see the annotations in this project. Instead, the annotations in any other project can be compared to the hidden annotations in this project, and that project will then be automatically evaluated based on the comparison.
A participant in the GE task can have his/her automatic annotation results evaluated through the following process:
1. Create a new project.
2. Import the documents from the project bionlp-st-ge-2016-test-proteins into your project.
3. Import the annotations from the project bionlp-st-ge-2016-test-proteins into your project. (At this point, you may want to compare your project to this project, the benchmark data set: it will show that the protein annotations in your project are 100% correct, while other annotations, e.g. events, are 0%.)
4. Produce event annotations with your system, on top of the protein annotations.
5. Upload your event annotations to your project.
6. Compare your project to this project to get the evaluation.
The GE 2016 benchmark data set is provided as multi-layer annotations, which include:
bionlp-st-ge-2016-reference: benchmark reference data set
bionlp-st-ge-2016-test: benchmark test data set (this project)
bionlp-st-ge-2016-test-proteins: protein annotation to the benchmark test data set
The following are supporting resources:
bionlp-st-ge-2016-coref: coreference annotation
bionlp-st-ge-2016-uniprot: protein annotation with UniProt IDs
pmc-enju-pas: dependency parsing result produced by Enju
UBERON-AE: annotation for anatomical entities as defined in UBERON
ICD10: annotation for disease names as defined in ICD10
GO-BP: annotation for biological process names as defined in GO
GO-CC: annotation for cellular component names as defined in GO
A SPARQL-driven search interface is provided at http://bionlp.dbcls.jp/sparql. | 7.99 K | 2023-11-29 | Released | |
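The comparison step above scores a user's annotations against the hidden reference annotations. As a rough illustration only (PubAnnotation's actual evaluation service may use different matching criteria, e.g. partial-overlap rules for event arguments), a span-based comparison can be sketched like this, where each denotation is a (begin, end, label) tuple:

```python
# Minimal sketch of a span-based comparison between a user's annotations
# and hidden reference annotations.  Illustration only: the real
# evaluation service may apply different matching criteria.

def evaluate(reference, predicted):
    """Compare two lists of (begin, end, label) denotations and return
    precision, recall and F1 under exact-span, exact-label matching."""
    ref = set(reference)
    pred = set(predicted)
    tp = len(ref & pred)  # denotations that match exactly
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(ref) if ref else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical denotations: the protein span matches, the event span
# is off by two characters and therefore counts as a miss.
reference = [(0, 5, "Protein"), (10, 20, "Positive_regulation")]
predicted = [(0, 5, "Protein"), (12, 20, "Positive_regulation")]
print(evaluate(reference, predicted))  # (0.5, 0.5, 0.5)
```

This mirrors why step 3 reports 100% for proteins (they are imported verbatim from the reference) and 0% for events before any event annotation has been uploaded.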
LitCoin-sentences | | | 4.94 K | 2023-11-29 | Testing | |
LitCoin-GeneOrGeneProduct-v3 | | GeneOrGeneProduct annotations after false-positive control | 4.67 K | 2023-11-24 | | |
bionlp-st-ge-2016-test-proteins | | Protein annotations to the benchmark test data set of the BioNLP-ST 2016 GE task.
A participant in the GE task may import the documents and annotations of this project into his/her own project as a starting point for producing event annotations.
For more details, please refer to the benchmark test data set (bionlp-st-ge-2016-test).
| 4.34 K | 2023-11-27 | Released | |
LitCovid-PD-FMA-UBERON-v1 | | PubDictionaries annotation for anatomy terms, updated 2020-04-20.
Anatomical term annotation based on FMA and Uberon. Version 2020-04-20.
The terms of FMA and Uberon are loaded into PubDictionaries (FMA and Uberon), with which the annotations in this project are produced.
The parameter configurations used for this project are available in PubDictionaries for FMA and for Uberon.
Note that this is an automatically generated dictionary-based annotation. It will be updated periodically as documents are added and the dictionary is improved. | 4.3 K | 2023-11-27 | Released | |
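The core of such dictionary-based annotation is matching dictionary terms against document text. A minimal sketch, assuming exact longest-match lookup over a toy dictionary (the real PubDictionaries service additionally supports fuzzy matching and tunable similarity thresholds, and the term-to-ID mappings below are for illustration only):

```python
# Minimal sketch of dictionary-based annotation: scan the text and emit
# a (begin, end, obj) denotation for each dictionary term found,
# preferring longer terms so that "heart valve" beats "heart".
# Illustration only; no fuzzy matching or word-boundary handling.

def annotate(text, dictionary):
    """Return (begin, end, obj) denotations for dictionary terms in text."""
    denotations = []
    lowered = text.lower()
    terms = sorted(dictionary, key=len, reverse=True)  # longest first
    i = 0
    while i < len(lowered):
        for term in terms:
            if lowered.startswith(term, i):
                denotations.append((i, i + len(term), dictionary[term]))
                i += len(term)
                break
        else:
            i += 1
    return denotations

# Toy dictionary mapping surface terms to ontology IDs (illustrative).
toy_dict = {"heart": "UBERON:0000948", "heart valve": "UBERON:0000946"}
print(annotate("The heart valve was examined.", toy_dict))
# [(4, 15, 'UBERON:0000946')]
```

Because the matching is purely dictionary-driven, regenerating the annotations after a dictionary update is cheap, which is what makes the periodic updates mentioned above practical.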
GlycoBiology-Motifs | | | 4.15 K | 2023-11-29 | | |