Jin-Dong Kim
User info
Collections
Name | Description | Updated at
Showing 1-10 of 13
GlycoBiology | Annotations made to the titles and abstracts of the journal 'GlycoBiology' | 2019-03-10
Preeclampsia | Preeclampsia-related annotations for text mining | 2019-03-10
bionlp-st-ge-2016 | The 2016 edition of the Genia event extraction (GE) task organized within BioNLP-ST 2016 | 2019-03-11
GlyCosmos600 | A random collection of 600 PubMed abstracts from 6 glycobiology-related journals: Glycobiology, Glycoconjugate Journal, The Journal of Biological Chemistry, Journal of Proteome Research, Journal of Proteomics, and Carbohydrate Research. The PMIDs were collected on June 11, 2019, and 100 PMIDs were randomly sampled from each journal. | 2021-10-22
LitCovid-v1 | This collection includes the results from the COVID-19 Virtual Hackathon. LitCovid is a comprehensive literature resource on COVID-19 collected by NCBI: https://www.ncbi.nlm.nih.gov/research/coronavirus/. Since the literature dataset was released, several groups have been producing annotations for it. This PubAnnotation collection was set up as a venue for aggregating these valuable resources, which are highly relevant to each other and much more useful when they can be accessed together. It is part of the Covid19-PubAnnotation project. In this collection, the LitCovid-docs project contains all the documents of the LitCovid literature collection, and the other projects are annotation datasets contributed by various groups. It is an open collection: anyone who wants to contribute can do so as follows: take the documents in the LitCovid-docs project, produce annotations to the texts based on your resource, and contribute the annotations back to this collection by creating your own project at PubAnnotation, uploading your annotations to the project (HowTo), and adding the project to this collection. All contributed annotations will become publicly available. Please note that when uploading your annotation data you need not worry about slight changes in the text: PubAnnotation will automatically catch them and adjust the positions appropriately. Should you have any questions, please feel free to mail admin@pubannotation.org. | 2020-11-20
LitCovid-sample | Various annotations to a sample set of LitCovid, demonstrating the potential of harmonizing various annotations. | 2021-01-14
CORD-19-sample-annotation | 2020-04-21
LitCovid | 2021-10-18
LitCoin | 2021-12-14
CORD-19 | CORD-19 (COVID-19 Open Research Dataset) is a free, open resource for the global research community provided by the Allen Institute for AI: https://pages.semanticscholar.org/coronavirus-research. As of 2020-03-20, it contains over 29,000 full-text articles. This CORD-19 collection at PubAnnotation is prepared for collecting annotations to the texts, so that they can be easily accessed and utilized. If you want to contribute your annotations, take the documents in the CORD-19_All_docs project, produce annotations to the texts using your annotation system, and contribute them back to PubAnnotation (HowTo). All contributed annotations will become publicly available. Please note that when uploading your annotation data you need not worry about slight changes in the text: PubAnnotation will automatically catch them and adjust the positions appropriately. Once you have uploaded your annotations, please notify admin@pubannotation.org so that they can be included in this collection, which will make them much more easily findable. Note that as the CORD-19 dataset grows, the documents in this collection will also be updated. IMPORTANT: the CORD-19 license agreement requires that the dataset be used for text and data mining only. | 2020-04-14
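Both collection descriptions above invite contributors to upload annotations to PubAnnotation, which exchanges annotations as JSON pairing the target text with character-offset denotations. The sketch below builds one such document; the example sentence and the "Protein" label are invented for illustration:

```python
import json

# A minimal PubAnnotation-style annotation: the target text plus a list of
# denotations, each pairing a character span with a label ("obj").
annotation = {
    "text": "NFkB activates transcription of target genes.",
    "denotations": [
        {"id": "T1", "span": {"begin": 0, "end": 4}, "obj": "Protein"},
    ],
}

# Span offsets index directly into the text.
d = annotation["denotations"][0]
mention = annotation["text"][d["span"]["begin"]:d["span"]["end"]]
print(mention)                  # NFkB
print(json.dumps(annotation))   # serialized form, ready to save as a .json file
```

Because PubAnnotation realigns spans when its stored text differs slightly from yours, offsets computed against a local copy of a document generally survive upload, as both descriptions note.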
Projects
Name | Description | # Ann. | Updated at | Status
Showing 1-10 of 175
Anatomy-MAT | Anatomical structures based on Minimal Anatomical Terminology (MAT). | 778 K | 2024-09-19 | Developing
Anatomy-UBERON | Anatomical structures based on UBERON. | 2.12 M | 2024-09-19 | Developing
BioASQ-sample | A collection of PubMed articles that appear in the BioASQ sample data set. | 0 | 2023-11-28 | Testing
bionlp-st-ge-2016-coref | Coreference annotations to the benchmark data sets (reference and test) of the BioNLP-ST 2016 GE task. For detailed information, please refer to the benchmark reference data set (bionlp-st-ge-2016-reference) and the benchmark test data set (bionlp-st-ge-2016-test). | 853 | 2024-06-17 | Released
bionlp-st-ge-2016-reference | The benchmark reference data set of the BioNLP-ST 2016 GE task. It includes Genia-style event annotations to 20 full-paper articles about NFκB proteins. The task is to develop an automatic annotation system that produces annotations as similar as possible to those in this data set. To evaluate its performance, a participating system needs to produce annotations for the documents in the benchmark test data set (bionlp-st-ge-2016-test). The GE 2016 benchmark data set is provided as multi-layer annotations: bionlp-st-ge-2016-reference, the benchmark reference data set (this project); bionlp-st-ge-2016-test, the benchmark test data set (annotations are blinded); and bionlp-st-ge-2016-test-proteins, protein annotations to the benchmark test data set. Supporting resources: bionlp-st-ge-2016-coref, coreference annotation; bionlp-st-ge-2016-uniprot, protein annotation with UniProt IDs; pmc-enju-pas, dependency parsing results produced by Enju; UBERON-AE, annotation for anatomical entities as defined in UBERON; ICD10, annotation for disease names as defined in ICD10; GO-BP, annotation for biological process names as defined in GO; GO-CC, annotation for cellular component names as defined in GO. A SPARQL-driven search interface is provided at http://bionlp.dbcls.jp/sparql. | 14.4 K | 2023-11-29 | Released
bionlp-st-ge-2016-reference-eval | 426 | 2023-11-29 | Testing
bionlp-st-ge-2016-test | The benchmark test data set of the BioNLP-ST 2016 GE task. It includes Genia-style event annotations to 14 full-paper articles about NFκB proteins. For testing purposes, however, the annotations are all blinded: users cannot see the annotations in this project. Instead, the annotations in any other project can be compared to the hidden annotations in this project, and that project is then automatically evaluated based on the comparison. A participant in the GE task can get an evaluation of their automatic annotation results through the following process: create a new project; import the documents from the bionlp-st-ge-2016-test-proteins project into your project; import the annotations from the bionlp-st-ge-2016-test-proteins project into your project (at this point, comparing your project to this benchmark will show that the protein annotations in your project are 100% correct, while other annotations, e.g. events, are at 0%); produce event annotations upon the protein annotations using your system; upload your event annotations to your project; and compare your project to this project to get the evaluation. The GE 2016 benchmark data set is provided as multi-layer annotations: bionlp-st-ge-2016-reference, the benchmark reference data set; bionlp-st-ge-2016-test, the benchmark test data set (this project); and bionlp-st-ge-2016-test-proteins, protein annotations to the benchmark test data set. Supporting resources: bionlp-st-ge-2016-coref, coreference annotation; bionlp-st-ge-2016-uniprot, protein annotation with UniProt IDs; pmc-enju-pas, dependency parsing results produced by Enju; UBERON-AE, annotation for anatomical entities as defined in UBERON; ICD10, annotation for disease names as defined in ICD10; GO-BP, annotation for biological process names as defined in GO; GO-CC, annotation for cellular component names as defined in GO. A SPARQL-driven search interface is provided at http://bionlp.dbcls.jp/sparql. | 7.99 K | 2023-11-29 | Released
bionlp-st-ge-2016-test-proteins | Protein annotations to the benchmark test data set of the BioNLP-ST 2016 GE task. A participant in the GE task may import the documents and annotations of this project into their own project as a starting point for producing event annotations. For more details, please refer to the benchmark test data set (bionlp-st-ge-2016-test). | 4.34 K | 2023-11-27 | Released
bionlp-st-ge-2016-uniprot | UniProt protein annotations to the benchmark data sets of the BioNLP-ST 2016 GE task: the reference data set (bionlp-st-ge-2016-reference) and the test data set (bionlp-st-ge-2016-test). The annotations were produced with a dictionary semi-automatically compiled for the 34 full-paper articles included in the benchmark data sets (20 in the reference set + 14 in the test set). For detailed information about the BioNLP-ST 2016 GE task data sets, please refer to the benchmark reference data set (bionlp-st-ge-2016-reference) and the benchmark test data set (bionlp-st-ge-2016-test). | 16.2 K | 2023-11-29 | Beta
CHEMDNER-training-test | The training subset of the CHEMDNER corpus | 29.4 K | 2023-11-27 | Testing
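The blind evaluation described for bionlp-st-ge-2016-test works by comparing a project's annotations against hidden reference annotations. The official GE scorer evaluates full event structures, but the core idea can be sketched with exact-span matching over (begin, end, label) triples; the example spans below are invented for illustration:

```python
def span_prf(predicted, reference):
    """Exact-match precision/recall/F1 over (begin, end, label) triples."""
    pred, ref = set(predicted), set(reference)
    tp = len(pred & ref)  # spans that agree in both position and label
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(ref) if ref else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Two of the three predicted spans match the reference exactly;
# the third has the right label but the wrong end offset.
reference = [(0, 4, "Protein"), (15, 28, "Process"), (32, 44, "Gene")]
predicted = [(0, 4, "Protein"), (15, 28, "Process"), (32, 40, "Gene")]
p, r, f = span_prf(predicted, reference)
print(round(p, 2), round(r, 2), round(f, 2))  # 0.67 0.67 0.67
```

This also illustrates why importing the given protein annotations first yields 100% on proteins but 0% on events: only the layers a project has actually produced can match the hidden reference.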
Automatic annotators
Name | Description
Showing 21-30 of 40
PubTator | PubTator annotation provided by NCBI
PD-CHEBI-B
PD-GlycoGenes20190927-B
PD-GO-BP-B | Biological Processes as defined in GO
PD-CLO-B
PD-HP-PA-B
PD-HP-PA
PD-NCBIGene
PD-GlycoEpitope-B | A batch annotator using PubDictionaries with the dictionary 'GlycoEpitope'
Glycan-Motif-Image
Editors
Name | Description
Showing 1-1 of 1
TextAE | The official stable version of TextAE.