Q: What is the goal of this website?
The Department of Precision Medicine of Maastricht University has created this website to serve as an archive for AI prediction models related to all aspects of COVID-19, including diagnosis, theragnosis (how to treat the patient, risk stratification…), and follow-up (treatment response and complications). The flagship model of this department is MU4, a tool that combines the patient’s age with standard blood analysis results to predict the risk of severe disease. This FAQ focuses mainly on this flagship model. The website also hosts published open-source models that have passed a vetting process performed by the Department of Precision Medicine, which verifies that each model has a publicly available manuscript and has undergone external validation using data from medical centers different from those that supplied the training data.
Q: Who is the target user for the models? Is it only for doctors or also for patients? Can they be used clinically?
The target users of these models are clinicians and researchers with an adequate grasp of the medical complexities associated with COVID-19. All of these models are meant to supplement clinical judgment, not substitute for it. Patients should only use these tools in consultation with their doctors, so that the doctors can correctly interpret and explain the results. The models are not (yet) intended for clinical use and are therefore not restricted by certification requirements, which facilitates swift dissemination (a key requirement in a rapidly evolving field of research such as COVID-19). Whenever the model code has an open-source license, we will aim to make the code accessible to website users; when it is not open source, users will still be able to supply the necessary inputs for a given model and receive the outcome prediction in real time.
Q: What will be the usefulness of these models?
As mentioned above, we aim to provide models that predict a wide range of outcomes capturing the various stages of the COVID-19 patient’s journey: diagnosis, theragnosis (how to treat the patient, risk stratification), and follow-up (treatment response and complications). The flagship model MU4 performs risk stratification, which is essential for triage: who to send home, who to admit for surveillance, and who to fast-track to the ICU.
Q: Are these models always correct?
The accuracy of any AI model is limited by the data it is trained on. We will update our models continuously with prospective data as we gain access to them. Our flagship model MU4 was prospectively validated on six cohorts, yielding sensitivities ranging from 83.7% to 100% and specificities ranging from 41.0% to 95.7%. For every model on our website, we will provide such performance metrics so that clinicians can decide how much weight to give the model prediction when making their decision. We note that RT-PCR, currently the standard way of diagnosing a COVID-19 infection, has a problem with false negatives: its sensitivity can vary widely (50-90%) depending on how the specimen was obtained (oral swab or bronchoalveolar lavage) and the skill of the person administering the test.
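For readers less familiar with these metrics, sensitivity and specificity can be computed directly from confusion-matrix counts. The sketch below is purely illustrative; the counts are hypothetical and are not drawn from the MU4 validation cohorts:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Compute sensitivity (true positive rate) and specificity
    (true negative rate) from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # fraction of true cases correctly flagged
    specificity = tn / (tn + fp)  # fraction of non-cases correctly cleared
    return sensitivity, specificity

# Hypothetical example counts (not from any MU4 cohort):
sens, spec = sensitivity_specificity(tp=42, fn=3, tn=80, fp=20)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
# prints: sensitivity = 93.3%, specificity = 80.0%
```

A highly sensitive model rarely misses a severe case (few false negatives), while a highly specific one rarely raises a false alarm (few false positives); the reported ranges per cohort reflect this trade-off.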
Q: How can a model save lives?
Diagnostic models can save lives by correctly diagnosing patients who might be missed by the current standard of diagnosis (RT-PCR). Theragnostic models (how to treat the patient, risk stratification…), which are our first focus, have the greatest chance of saving lives. Every clinician is aware of the incredible importance of accurate triage. We do not want people sent home from the hospital with the incorrect belief that they are low risk, and we do not want people falsely assessed as high risk, tying up precious ICU resources that are in short supply. Models that deal with follow-up (treatment response and complications) will save lives by identifying early those patients who are not responding to standard treatment, so that specialized approaches can be tried. Some COVID-19 patients are currently dying from a process referred to as a “cytokine storm”. The ability to identify this group early is crucial, and it is almost certain that the way to accomplish that will be through a predictive AI model.
Q: Why not just trust the doctor’s judgement?
The doctor is, and will continue to be, the central figure in patient care. These models are meant to support the doctor’s decision-making process. There are many reasons why such aids are necessary. Firstly, the human mind is incapable of considering more than five factors at once, whereas AI models are specifically designed to handle a large number of inputs; the more data we have, the more inputs we can consider while training the AI. Secondly, doctors are currently working under conditions of enormous stress, with long hours leading to fatigue and exhaustion. A tired mind is a compromised mind; in contrast, AI predictions do not show such variability. Finally, a doctor’s expertise can vary with specialization and years of experience, especially in these times when those who are not specialized in respiratory diseases must still care for COVID-19 patients due to staff shortages. AI support can help reduce this expertise variability.
Q: Which countries have been involved?
Our flagship model MU4 is trained on data from Asian countries and validated on data from Asian and European (Belgium and Italy) countries. We are actively working to include other countries. COVID-19 is a global pandemic, and our models must be globally applicable.
Q: Why have you not analyzed data from The Netherlands, despite this being the initiative of a Dutch university?
We are participating in a Dutch initiative (https://covidpredict.nl/) to pool well-curated data from multiple hospitals within The Netherlands, as well as from any other institutions around the world that can satisfy the same quality requirements. A model is only as good as the data it is trained on, so as we expand the scope of our flagship model, we must maintain data quality. We expect to begin using these resources within a matter of weeks.
Q: Is the paper published? How do I find out more about these models?
The paper that describes the flagship model has been submitted to a highly respected journal for peer review. However, peer review is a slow process that typically takes 3-6 months, and we cannot wait that long to make the model available for public use. We have therefore made the current manuscript available on the bioRxiv repository. The same is true for the manuscripts of all external models. Links to all manuscripts can be found on the Models page.
Q: In your flagship study of MU4, medical imaging does not play a role: why is that?
From our initial studies and literature review, it appears that medical imaging (CT scans) is extremely powerful for diagnosis but not very predictive of the risk of developing severe disease. Since the flagship model is meant for fast triage, and given the resource constraints (a CT scan is not easy to obtain at the time of a hospital visit), there is also a strong clinical incentive to develop AI models that achieve good prediction performance without the need for imaging. We are also involved in other initiatives to further develop imaging-based models, in particular for diagnostic questions, and we strongly believe in approaches that use more advanced quantitative imaging techniques such as radiomics.
Q: If there is treatment or vaccination, will these models still work?
The models will need to be updated, but the overall techniques of model building can be used as they currently are. We possess the expertise to quickly adapt existing models to incorporate the critical information of therapeutic drug usage (which may be relevant in the coming months, particularly for those drugs that are already approved for clinical use for a different disease and only need to be repurposed for COVID-19) and vaccine administration (which will likely be relevant in about a year or two).
Q: Your lab has been working mainly on oncology, why are you involved in COVID-19 studies?
Our lab has a disease-agnostic approach to precision medicine. The AI arsenal that we specialize in contains specific tools that can be applied to any medical question. While the bulk of our publications are in oncology, we have already delved into other domains (e.g., neurological conditions, ophthalmology). The current COVID-19 crisis is a call to arms, and we are excited to help in this fight using our extensive expertise.
Q: Can other centers participate? How?
There are two main ways centers can participate. The first is by providing data for validating the models on the website. If your center possesses the data variables used by a certain model, please get in touch with us so we can establish a data transfer agreement and improve the robustness and scope of applicability of the models. The second is by providing us with new models. The basic requirements are (1) public availability of a manuscript, (2) validation of the model on data from at least one center that was not used for model training, and (3) either open-source code or a way to interface with the hidden code. If you are interested in sharing your model, please get in touch with us immediately; this is a far more efficient way for us to build our model repository than scouring the literature ourselves.
Q: Why should other centers participate? What is the incentive?
We will assist external researchers in successfully incorporating their models into our platform. This will create synergies that are bound to accelerate AI research on COVID-19. It will also ensure that models receive the recognition they deserve and are widely used, instead of gathering dust, as often happens when many publications appear on the same broad theme within a short period (a certainty in the context of COVID-19, given its world-changing nature).
Q: What will be the next steps?
Our flagship model MU4 will be updated continuously with prospective data as we are able to access them. Our models MU1-MU3 will be added as soon as they pass external validation. We are asking everyone to contact us if they are interested in participating in our effort, either by contributing data or by contributing models.
Q: Why have you made an app?
To lower the barrier to using our models in clinical practice.
Q: Are you working on solutions connected to the electronic health record to avoid manual typing and potential mistakes?
Yes, we are.