Since the completion of the Kaggle challenge, the community has moved towards repurposing the submitted contributions. Among these contributions are the output review tables from Round 2, which provide a useful overview of research findings (https://www.kaggle.com/covid-19-contributions). These table results have been used to quickly bootstrap QA datasets [48, 87], which will be useful for training COVID-19 QA systems. Early COVID-19 QA systems relied either on existing biomedical QA datasets that do not contain questions specific to COVID-19 (e.g., BioASQ) or on COVID-19 training data bootstrapped through expert annotation, which is expensive and yields only small-scale datasets. These new QA datasets, along with shared tasks like EPIC-QA (Section 5.3), aim to address the lack of domain-specific QA training data.