Jin-Dong Kim
Collections
Showing collections 11-12 of 12. Each entry gives the collection name, its description, and the date of last update.

Glycosmos6 (updated 2023-11-16)
This collection contains annotation projects that target all PubMed abstracts (as of January 14, 2022) from six glycobiology-related journals: Glycobiology, Glycoconjugate Journal, The Journal of Biological Chemistry, Journal of Proteome Research, Journal of Proteomics, and Carbohydrate Research.

GlyCosmos15 (updated 2024-09-19)
Collection of annotations to the abstracts from the following journals: Analytical_Chemistry, Biochim_Biophys_Acta, Carbohydrate_Research, Cell, Glycobiology, Glycoconjugate_Journal, J_Am_Chem_Soc, Journal_of_Biological_Chemistry, Journal_of_Proteome_Research, Journal_of_Proteomics, Molecular_and_Cellular_Proteomics, Nature_Biotechnology, Nature_Communications, Nature_Methods, Scientific_Reports.

Projects
Showing projects 1-10 of 163. Each entry gives the project name, its description, the number of annotations, the date of last update, and the status.

Anatomy-MAT (778 K annotations; updated 2024-09-19; status: Developing)
Anatomical structures based on Minimal Anatomical Terminology (MAT).

Anatomy-UBERON (2.12 M annotations; updated 2024-09-19; status: Developing)
Anatomical structures based on UBERON.

BioASQ-sample (0 annotations; updated 2023-11-28; status: Testing)
A collection of PubMed articles that appear in the BioASQ sample data set.

bionlp-st-ge-2016-coref (853 annotations; updated 2024-06-17; status: Released)
Coreference annotation to the benchmark data set (reference and test) of the BioNLP-ST 2016 GE task. For detailed information, please refer to the benchmark reference data set (bionlp-st-ge-2016-reference) and the benchmark test data set (bionlp-st-ge-2016-test).

bionlp-st-ge-2016-reference (14.4 K annotations; updated 2023-11-29; status: Released)
The benchmark reference data set of the BioNLP-ST 2016 GE task. It includes Genia-style event annotations to 20 full-paper articles about NFκB proteins. The task is to develop an automatic annotation system that produces annotations as similar as possible to those in this data set. To have its performance evaluated, a participating system needs to produce annotations to the documents in the benchmark test data set (bionlp-st-ge-2016-test).
The GE 2016 benchmark data set is provided as multi-layer annotations, which include:
- bionlp-st-ge-2016-reference: benchmark reference data set (this project)
- bionlp-st-ge-2016-test: benchmark test data set (annotations are blinded)
- bionlp-st-ge-2016-test-proteins: protein annotation to the benchmark test data set
The following are supporting resources:
- bionlp-st-ge-2016-coref: coreference annotation
- bionlp-st-ge-2016-uniprot: protein annotation with UniProt IDs
- pmc-enju-pas: dependency parsing results produced by Enju
- UBERON-AE: annotation for anatomical entities as defined in UBERON
- ICD10: annotation for disease names as defined in ICD10
- GO-BP: annotation for biological process names as defined in GO
- GO-CC: annotation for cellular component names as defined in GO
A SPARQL-driven search interface is provided at http://bionlp.dbcls.jp/sparql (an exploratory query is sketched below).
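
The SPARQL endpoint above can be queried with any SPARQL 1.1 client; nothing in the sketch below depends on the store's actual vocabulary, so it is a safe first probe before writing annotation-specific queries. Only the endpoint URL comes from the description; the query and the HTTP conventions are generic SPARQL 1.1, not a documented API of this particular service.

```python
# Schema-agnostic probe: ask the store which classes it uses and how
# often, via the standard SPARQL 1.1 Protocol (query sent as an HTTP
# GET parameter, JSON results requested through the Accept header).
import requests

ENDPOINT = "http://bionlp.dbcls.jp/sparql"

QUERY = """
SELECT ?class (COUNT(?s) AS ?n)
WHERE { ?s a ?class }
GROUP BY ?class
ORDER BY DESC(?n)
LIMIT 20
"""

resp = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
resp.raise_for_status()
for b in resp.json()["results"]["bindings"]:
    print(b["n"]["value"], b["class"]["value"])
```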

bionlp-st-ge-2016-reference-eval (426 annotations; updated 2023-11-29; status: Testing)

bionlp-st-ge-2016-test (7.99 K annotations; updated 2023-11-29; status: Released)
The benchmark test data set of the BioNLP-ST 2016 GE task. It includes Genia-style event annotations to 14 full-paper articles about NFκB proteins. For testing purposes, however, the annotations are all blinded: users cannot see the annotations in this project. Instead, the annotations in any other project can be compared to the hidden annotations in this project, and the annotations in that project are then automatically evaluated based on the comparison. A participant in the GE task can get their automatic annotation results evaluated through the following process (a sketch of the annotation format follows this entry):
1. Create a new project.
2. Import the documents of the project bionlp-st-ge-2016-test-proteins into your project.
3. Import the annotations of the project bionlp-st-ge-2016-test-proteins into your project. At this point, you may want to compare your project to this project, the benchmark data set: it will show that the protein annotations in your project are 100% correct, but that other annotations, e.g. events, are at 0%.
4. Produce event annotations, using your system, upon the protein annotations.
5. Upload your event annotations to your project.
6. Compare your project to this project to get the evaluation.
The GE 2016 benchmark data set is provided as multi-layer annotations, which include:
- bionlp-st-ge-2016-reference: benchmark reference data set
- bionlp-st-ge-2016-test: benchmark test data set (this project)
- bionlp-st-ge-2016-test-proteins: protein annotation to the benchmark test data set
The following are supporting resources:
- bionlp-st-ge-2016-coref: coreference annotation
- bionlp-st-ge-2016-uniprot: protein annotation with UniProt IDs
- pmc-enju-pas: dependency parsing results produced by Enju
- UBERON-AE: annotation for anatomical entities as defined in UBERON
- ICD10: annotation for disease names as defined in ICD10
- GO-BP: annotation for biological process names as defined in GO
- GO-CC: annotation for cellular component names as defined in GO
A SPARQL-driven search interface is provided at http://bionlp.dbcls.jp/sparql.
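
As an illustration of step 4, here is a minimal sketch that adds toy event-trigger annotations to a document already carrying the imported protein annotations. It assumes PubAnnotation's JSON annotation layout (a text field plus denotations with begin/end character offsets); the file names and the trigger-matching rule are placeholders, not a real event-extraction system.

```python
# Toy stand-in for step 4: mark every occurrence of the word
# "expression" as a Gene_expression trigger, appending to whatever
# denotations (e.g. imported proteins) the document already carries.
# The JSON layout (text / denotations / span.begin / span.end)
# follows PubAnnotation's annotation format.
import json

def annotate_events(doc: dict) -> dict:
    text = doc["text"]
    denotations = doc.setdefault("denotations", [])
    next_id = len(denotations) + 1
    start = 0
    while (pos := text.find("expression", start)) != -1:
        denotations.append({
            "id": f"E{next_id}",
            "span": {"begin": pos, "end": pos + len("expression")},
            "obj": "Gene_expression",
        })
        next_id += 1
        start = pos + 1
    return doc

# "doc.json" is a hypothetical file exported from your project after
# steps 2-3; the annotated copy is what you would upload in step 5.
with open("doc.json") as f:
    doc = annotate_events(json.load(f))
with open("doc.annotated.json", "w") as f:
    json.dump(doc, f, indent=2, ensure_ascii=False)
```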

bionlp-st-ge-2016-test-proteins (4.34 K annotations; updated 2023-11-27; status: Released)
Protein annotations to the benchmark test data set of the BioNLP-ST 2016 GE task. A participant in the GE task may import the documents and annotations of this project into their own project as a starting point for producing event annotations. For more details, please refer to the benchmark test data set (bionlp-st-ge-2016-test).

bionlp-st-ge-2016-uniprot (16.2 K annotations; updated 2023-11-29; status: Beta)
UniProt protein annotation to the benchmark data sets of the BioNLP-ST 2016 GE task: the reference data set (bionlp-st-ge-2016-reference) and the test data set (bionlp-st-ge-2016-test). The annotations were produced based on a dictionary that was semi-automatically compiled for the 34 full-paper articles included in the benchmark data set (20 in the reference data set + 14 in the test data set); a sketch of this kind of dictionary lookup follows this entry. For detailed information about the BioNLP-ST GE 2016 task data sets, please refer to the benchmark reference data set (bionlp-st-ge-2016-reference) and the benchmark test data set (bionlp-st-ge-2016-test).
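
To make the dictionary-based approach concrete, the sketch below performs longest-match lookup against a small name-to-UniProt-ID table and emits PubAnnotation-style denotations. The two dictionary entries are examples chosen for this sketch, and the matching strategy is an assumption; the project's actual semi-automatically compiled dictionary is not reproduced in this description.

```python
# Illustrative dictionary tagging: find known protein names in text and
# annotate them with UniProt IDs. Names are tried longest-first so that
# overlapping entries resolve to the longest match.
import re

UNIPROT_DICT = {  # example entries, not the project's dictionary
    "NF-kappaB": "P19838",
    "IkappaB alpha": "P25963",
}

def tag_proteins(text: str, dictionary: dict) -> list:
    pattern = "|".join(
        re.escape(name) for name in sorted(dictionary, key=len, reverse=True)
    )
    return [
        {
            "id": f"T{i}",
            "span": {"begin": m.start(), "end": m.end()},
            "obj": dictionary[m.group(0)],
        }
        for i, m in enumerate(re.finditer(pattern, text), start=1)
    ]

print(tag_proteins("NF-kappaB binds IkappaB alpha.", UNIPROT_DICT))
```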

CHEMDNER-training-test (29.4 K annotations; updated 2023-11-27; status: Testing)
The training subset of the CHEMDNER corpus.

Automatic annotators
Editors
Showing editor 1 of 1 (name and description).

TextAE
The official stable version of TextAE.