
User 'Jin-Dong Kim'


Collections (1-3 of 3)

Name              | Description                                                                               | Updated at
GlycoBiology      | Annotations made to the titles and abstracts of the journal 'GlycoBiology'                | 2019-03-10
Preeclampsia      | Preeclampsia-related annotations for text mining                                          | 2019-03-10
bionlp-st-ge-2016 | The 2016 edition of the Genia event extraction (GE) task, organized within BioNLP-ST 2016 | 2019-03-11


Projects (21-30 of 48)

Each entry below gives the project name, its description, and (# Ann.; Updated at; Status).
bionlp-st-ge-2016-uniprot: UniProt protein annotation to the benchmark data sets of the BioNLP-ST 2016 GE task: the reference data set (bionlp-st-ge-2016-reference) and the test data set (bionlp-st-ge-2016-test). The annotations were produced based on a dictionary semi-automatically compiled for the 34 full-paper articles included in the benchmark data set (20 in the reference data set + 14 in the test data set). For detailed information about the BioNLP-ST GE 2016 task data sets, please refer to the benchmark reference data set (bionlp-st-ge-2016-reference) and the benchmark test data set (bionlp-st-ge-2016-test). (# Ann.: 16.2 K; Updated: 2016-05-22; Status: Beta)
bionlp-st-ge-2016-test: The benchmark test data set of the BioNLP-ST 2016 GE task. It includes Genia-style event annotations to 14 full-paper articles about NFκB proteins. For testing purposes, however, the annotations are all blinded, which means users cannot see the annotations in this project. Instead, annotations in any other project can be compared to the hidden annotations in this project, and the comparing project is then evaluated automatically based on the comparison. A participant in the GE task can get an evaluation of his/her automatic annotation results through the following process:
1. Create a new project.
2. Import documents from the project bionlp-st-ge-2016-test-proteins into your project.
3. Import annotations from the project bionlp-st-ge-2016-test-proteins into your project.
4. At this point, you may want to compare your project to this project, the benchmark data set. It will show that the protein annotations in your project are 100% correct, but other annotations, e.g., events, are 0%.
5. Produce event annotations with your system, on top of the protein annotations.
6. Upload your event annotations to your project.
7. Compare your project to this project to get an evaluation.
The GE 2016 benchmark data set is provided as multi-layer annotations, which include:
- bionlp-st-ge-2016-reference: benchmark reference data set
- bionlp-st-ge-2016-test: benchmark test data set (this project)
- bionlp-st-ge-2016-test-proteins: protein annotation to the benchmark test data set
The following supporting resources are also available:
- bionlp-st-ge-2016-coref: coreference annotation
- bionlp-st-ge-2016-uniprot: protein annotation with UniProt IDs
- pmc-enju-pas: dependency parsing results produced by Enju
- UBERON-AE: annotation for anatomical entities as defined in UBERON
- ICD10: annotation for disease names as defined in ICD10
- GO-BP: annotation for biological process names as defined in GO
- GO-CC: annotation for cellular component names as defined in GO
A SPARQL-driven search interface is provided at http://bionlp.dbcls.jp/sparql. (# Ann.: 7.99 K; Updated: 2016-05-22; Status: Released)
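Besides the web UI workflow above, PubAnnotation serves each document's annotations as JSON over REST, so comparison inputs can also be fetched programmatically. Below is a minimal sketch of building such a request URL and pulling protein spans out of a returned document; the `protein_spans` helper, the sample document, and the document ID used are illustrative, not part of the benchmark data.

```python
from urllib.parse import quote

BASE = "https://pubannotation.org"

def annotations_url(project, sourcedb, sourceid):
    """Build the PubAnnotation REST URL for one document's annotations."""
    return (f"{BASE}/projects/{quote(project)}/docs/"
            f"sourcedb/{quote(sourcedb)}/sourceid/{quote(sourceid)}/annotations.json")

def protein_spans(doc):
    """Extract (begin, end, surface text) triples for 'Protein' denotations
    from a PubAnnotation JSON document (hypothetical helper)."""
    text = doc["text"]
    return [(d["span"]["begin"], d["span"]["end"],
             text[d["span"]["begin"]:d["span"]["end"]])
            for d in doc.get("denotations", [])
            if d["obj"] == "Protein"]

# A tiny hand-made document in PubAnnotation JSON format (not real project data):
sample = {"text": "NFKB1 binds DNA.",
          "denotations": [{"id": "T1", "span": {"begin": 0, "end": 5}, "obj": "Protein"}]}

print(annotations_url("bionlp-st-ge-2016-test-proteins", "PMC", "1234567"))
print(protein_spans(sample))  # [(0, 5, 'NFKB1')]
```

The same JSON shape (a `text` field plus a list of `denotations` with character spans) is what gets uploaded back when you submit your own event annotations.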
semrep-sample: Sample annotation of SemRep, produced by Rindflesch et al. Reference: Rindflesch, T.C. and Fiszman, M. (2003). The interaction of domain knowledge and linguistic structure in natural language processing: interpreting hypernymic propositions in biomedical text. Journal of Biomedical Informatics, 36(6):462-477. (# Ann.: 11.1 K; Updated: 2017-07-15; Status: Testing)
jdkim-test: (no description) (# Ann.: 1.28 K; Updated: 2019-03-11)
NCBITAXON: Annotation for NCBI taxonomy. (# Ann.: 5.9 K; Updated: 2016-10-11; Status: Testing)
GlycoConjugate-collection: The PubMed entries (titles and abstracts) from the journal 'GlycoConjugate'. (# Ann.: 0; Updated: 2018-02-09; Status: Developing)
example: An example project to demonstrate the functionality of PubAnnotation. (# Ann.: 552; Updated: 2019-02-11; Status: Testing)
pubmed-sentences-benchmark: A benchmark data set for segmentation of text into sentences. The source of the annotation is the GENIA treebank v1.0. The following process was taken:
1. Began with the GENIA treebank v1.0.
2. Sentence annotations were extracted and converted to PubAnnotation JSON.
3. The annotations were uploaded; 12 abstracts failed alignment.
4. Among the 12 failure cases, 4 had a dot ('.') character where there should be a colon (':'). They were manually fixed and then successfully uploaded: 7903907, 8053950, 8508358, 9415639.
5. The other 8 failed abstracts were "250-word truncation" cases. They were manually fixed and successfully uploaded; during the fixing, manual annotations were added for the missing pieces of text.
6. 30 abstracts had extra text at the end, indicating a copyright statement, e.g., "Copyright 1998 Academic Press." Those were annotated as sentences in GTB, but the text no longer exists in PubMed, so the extra text was removed, together with its sentence annotations.
(# Ann.: 18.5 K; Updated: 2017-08-15; Status: Released)
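The conversion step above (GENIA sentence annotations to PubAnnotation JSON) amounts to locating each sentence in the document text and recording its character offsets. Here is a minimal sketch under that reading; the `sentences_to_pubannotation` helper is hypothetical, and a sentence that cannot be located corresponds to the "alignment failure" cases described above.

```python
def sentences_to_pubannotation(text, sentences):
    """Convert an ordered list of sentence strings into PubAnnotation-style
    denotations by locating each sentence in the source text.
    Raises ValueError on an alignment failure."""
    denotations, cursor = [], 0
    for i, sent in enumerate(sentences, start=1):
        begin = text.find(sent, cursor)          # search from the last match onward
        if begin < 0:
            raise ValueError(f"alignment failure: {sent!r}")
        end = begin + len(sent)
        denotations.append({"id": f"T{i}",
                            "span": {"begin": begin, "end": end},
                            "obj": "Sentence"})
        cursor = end
    return {"text": text, "denotations": denotations}

doc = sentences_to_pubannotation(
    "IL-2 is induced. NF-kB is involved.",
    ["IL-2 is induced.", "NF-kB is involved."])
print(doc["denotations"][1]["span"])  # {'begin': 17, 'end': 35}
```

Offsets are zero-based character positions into `text`, which is the span convention PubAnnotation JSON uses.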

Automatic annotators (1-10 of 15)
TextSentencer: Sentence segmentation.
EnjuParser: The Enju HPSG parser, developed by the University of Tokyo.
PD-UBERON-AE-B: Annotates anatomical entities based on the UBERON-AE dictionary on PubDictionaries, with the default threshold of 0.85. It uses batch-mode annotation and may be used to annotate a large number of documents.
PD-UBERON-AE: Annotates anatomical entities based on the UBERON-AE dictionary on PubDictionaries. The threshold is set to 0.85.
PD-GlycoEpitope-B: A batch annotator using PubDictionaries with the dictionary 'GlycoEpitope'.
PubTator-Chemical: Pulls the pre-computed chemical annotation from PubTator.
PubTator-Gene: Pulls the pre-computed gene annotation from PubTator.
PubTator-Species: Pulls the pre-computed species annotation from PubTator.
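The PD-* annotators look up text against a PubDictionaries dictionary and keep matches whose similarity score reaches a threshold (0.85 by default). The sketch below illustrates the idea only, with a toy dictionary and difflib's ratio as the similarity measure; PubDictionaries' actual matching algorithm and dictionary format are not reproduced here.

```python
from difflib import SequenceMatcher

def dictionary_matches(tokens, dictionary, threshold=0.85):
    """Toy threshold-based dictionary lookup: report tokens whose similarity
    to any dictionary label reaches the threshold (illustrative only)."""
    hits = []
    for i, tok in enumerate(tokens):
        for label, ident in dictionary.items():
            score = SequenceMatcher(None, tok.lower(), label.lower()).ratio()
            if score >= threshold:
                hits.append((i, tok, ident, round(score, 2)))
    return hits

# Hypothetical two-entry dictionary mapping labels to UBERON IDs:
dictionary = {"forelimb": "UBERON:0002102", "hindlimb": "UBERON:0002103"}
tokens = ["The", "forelimbs", "develop", "early"]
print(dictionary_matches(tokens, dictionary))  # [(1, 'forelimbs', 'UBERON:0002102', 0.94)]
```

A lower threshold admits more spelling variants ("forelimbs" still matches "forelimb" at 0.94) at the cost of more false positives, which is the trade-off the 0.85 default balances.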


Editors (1-2 of 2)
TextAE-Dev: A development version of TextAE. While this version has richer features, it may contain bugs.
TextAE: A "Text Annotation Editor" developed by DBCLS, built as a model implementation of a PubAnnotation-interoperable viewer/editor.