Discussion

From the start of the COVID-19 pandemic in late 2019 to now, the community has introduced numerous text mining resources and systems aimed at handling the tidal wave of new COVID-19 literature. Over this time, we have iterated through many versions of corpora, models, systems and shared tasks. Though significant progress has been made, many open questions remain. We summarize some learnings and challenges below.

It is helpful to have a centralized corpus of documents, such as CORD-19 or LitCovid, that is maintained and updated regularly. The existence of these corpora frees the community to focus on model and system development, encouraging faster iteration and the development of novel methodology. Intermediate infrastructure for sharing both automatically and manually produced data annotations, such as PubTator or PubAnnotation, increases the reach of annotation efforts. Annotations shared through these platforms can be reused by many downstream applications.

Community shared tasks can be used to pool resources for evaluation and to provide expert assessments of the performance of different systems. For COVID-19, the rapid submission and assessment cycles employed by tasks like Kaggle and TREC-COVID emulate the realistic challenges of rapid system development and deployment. These realistic constraints, though challenging for organizers to implement, may result in more robust systems that can adapt quickly to changing data and user needs.

It is important to engage expert communities early and often, to keep the focus on real-world tasks and user needs. Tasks should be selected to maximize their similarity to relevant workflows, e.g., paper search or systematic review construction. Because these existing workflows are validated and known to be useful, anchoring shared tasks to them is more likely to result in effective systems.

Though much of the infrastructure discussed in this review has existed for decades, the realities of COVID-19 forced us to accelerate the processes around science and research, including dataset development, model development and deployment, evaluation, and publication. Adapting to these changes has produced difficulties along the way. For example, earlier releases of the CORD-19 corpus were unstable, with formats changing from week to week as we adapted to engineering challenges and user requests. Shared tasks also had to adjust accordingly. TREC-COVID, for example, was organized in five rounds, with one-week windows for submission during each round. This required very rapid turnaround from both the participants submitting systems for review and the expert assessors, who are used to working within more relaxed time constraints.

It also takes time to identify how best to involve medical experts in assessment. For TREC-COVID, the task of ad hoc retrieval is well defined and has historically been recognized as a useful and important text mining task. The TREC-COVID assessments, though completed in a narrower time window than typical, were still relatively easy for the expert assessors. In the case of Kaggle, however, the first-round tasks were very open-ended, and submissions were correspondingly diverse and difficult to compare. Medical experts were asked to manually assess more than 500 of these submissions, which was quite time-consuming. As Kaggle converged on a more structured table completion task in Round 2, these assessments became easier and arguably a better use of expert time.