338. Modelling COVID-19
When there is a serious epidemic or a pandemic such as COVID-19, numerous epidemiological modelling groups around the world get busy. How should these various groups be coordinated to generate the most useful information to guide how an outbreak should be managed?
I’m excited to say that I have a new paper out in Science that addresses this question. In doing this project, I got to rub shoulders (in a virtual sense) with an international team of six modellers and epidemiologists from the US, the UK and China.
I was invited to join the team by the lead author, Katriona Shea, an ecologist from Penn State University who specialises in the management of plant and animal populations and of disease outbreaks. She spent a sabbatical with us in the Centre for Environmental Economics and Policy at UWA in 2018, learning about economics and behaviour.
Katriona found that the sorts of things we do could be useful in her world. Aspects of our proposed modelling process were designed with behaviour change (by the modelling teams) in mind.
Here’s an extract from the official news release from Science. The full release is here.
“A new process to harness multiple disease models for outbreak management has been developed by an international team of researchers. The team describes the process in a paper appearing May 8 in the journal Science and was awarded a Grant for Rapid Response Research (RAPID) from the National Science Foundation to immediately implement the process to help inform policy decisions for the COVID-19 outbreak.
During a disease outbreak, many research groups independently generate models, for example projecting how the disease will spread, which groups will be impacted most severely, or how implementing a particular management action might affect these dynamics. These models help inform public health policy for managing the outbreak.
“While most models have strong scientific underpinnings, they often differ greatly in their projections and policy recommendations,” said Katriona Shea, professor of biology and Alumni Professor in the Biological Sciences, Penn State. “This means that policymakers are forced to rely on consensus when it appears, or on a single trusted source of advice, without confidence that their decisions will be the best possible.”
We designed our process to achieve a number of aims.
- Get the modelling groups working on the issues that will be most helpful for decision making.
- Help decision makers tap into the expertise of the full range of modelling groups. Currently, they sometimes pick a winner and go with the predictions of a single model, ignoring the significant variation between models.
- Foster learning between the groups, so as to maximise the quality of predictions made. Currently, when multiple models are used, the usual approach is to just take an average of their results. Our process requires the modelling groups to discuss the reasons for their differences, and to adjust their models if appropriate once they understand those reasons.
- Reduce bias in the decision process. Likely biases to guard against include dominance effects (agreeing with field “leaders”), starting-point bias or anchoring (focusing on suggestions raised early in the process to the detriment of other ideas), and groupthink (where a psychological desire for cohesiveness causes a group of collaborators to minimize conflict and reach a consensus without sufficient critical evaluation).
- Don’t delay the decision-making process.
- Make it attractive for the modelling groups to participate in the process.
Our process works as follows.
(a) The decision-making body defines the objective (e.g., minimise caseload), specifies the management options to be assessed, and communicates these to multiple modelling teams (Aims 1 and 2).
(b) The teams model the specified management options, working independently to avoid prematurely locking in on a certain way of thinking (Aims 2 and 4).
(c) The decision-making body coordinates a process where the modelling teams discuss their results, providing feedback and ideas to each other, and learning how they might improve their models (Aim 3).
(d) The teams again work independently (Aim 4) to produce another set of model results with their improved models. The full set of results is collated and considered by decision makers, not just the average (Aim 2).
(e) Information from step (b) can be used for initial decision making, without waiting for steps (c) and (d), so no time is lost (Aim 5). If the results from step (d) indicate that the best management response differs from the one initially identified, the response can be adjusted. We’ve seen plenty of adaptations to strategies over time by governments in the current pandemic.
(f) Benefits for the modelling teams themselves (Aim 6) include that they still essentially operate independently and can publish their own work; that the final quality of their model predictions is probably better; and that they can be confident that their results will be explicitly considered by the decision makers.
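To make step (d) a little more concrete, here is a minimal sketch (not from the paper; the teams, management options and projected caseloads are entirely hypothetical) of how a decision-making body might collate results from several independent models for each option, looking at the full between-model spread rather than a single average or a single “winning” model.

```python
# Illustrative sketch only, not code from the Shea et al. paper. The options
# and projected caseloads below are entirely hypothetical.

# Projected caseloads (thousands of cases) from three independent modelling
# teams, for each candidate management option.
projections = {
    "no additional intervention": [820, 1040, 760],
    "school closures":            [610,  790, 540],
    "broad social distancing":    [330,  480, 290],
}

for option, results in projections.items():
    mean = sum(results) / len(results)
    low, high = min(results), max(results)
    # Report the average *and* the between-model spread (Aim 2), rather than
    # collapsing the teams' results into one number or picking one model.
    print(f"{option:30s} mean = {mean:6.1f}k   range = {low}k to {high}k")
```

The only point of the sketch is that the variation between models is carried through to the decision makers; in practice the collated outputs would be far richer (time paths, uncertainty bounds, multiple objectives).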
In some ways, this might seem like a common-sense approach, but in practice, it is rather different from what is currently done, at least in the contexts that the team of authors is aware of.
It is particularly exciting that Katriona has managed to obtain funding to roll out this approach immediately. She is already working with a collection of modelling groups in the US. The team will share results with the U.S. Centers for Disease Control and Prevention as they are generated.
Further reading
Shea, K., Runge, M.C., Pannell, D., Probert, W., Li, S.-L., Tildesley, M. and Ferrari, M. (2020). Harnessing the power of multiple models for outbreak management, Science 368(6491), 577-579. Journal web page
Well done, Dave. Congratulations.
I can recall a couple of experiences where elements of the process you describe were implemented, but we were certainly not thinking about the issue as comprehensively and constructively as your group has.
Not too long after I left BSES, the CSIRO sugarcane modelling group got together the main people who were modelling sugarcane growth, including the young guy BSES appointed to replace me (QCane), the developer of CaneGro from South Africa, and the CSIRO people who developed APSIM Sugarcane. Each group ran their model for one or more situations and the results were compared. As you would expect, there were significant differences in the results. Some models did better than others in some regards. Because of CSIRO’s dominance, their model has been the one to survive, and potentially valuable modifications from the other models have not been incorporated.
But I suspect the experience has not been totally lost. CSIRO was initially very protective of the APSIM software and did not allow others to even see the code, let alone make changes to it. That attitude later changed when the APSRU joint venture managed the software and other developers were encouraged to try out alternative routines for modelling particular aspects of plant growth. If the new method proved as good as or better than the original (as judged by the APSIM software engineers), then it might be incorporated into the model or offered as an alternative routine. I can see the obvious benefit of the whole group looking at these alternative ways of modelling a situation, as your group has suggested.
I guess this points out to me the value of scientific discovery as an iterative process. It would be nice if it moved a bit more quickly in most cases. In my estimation, your group’s contribution would have helped speed up the process of improving crop models if we had known about the full process, rather than just tackling a few of the steps along the way.
Well done Dave, that’s brilliant.
Fantastic – congrats, David! Brings back to mind your valuable contribution now some years ago to the question of how to make agricultural economics research relevant for policy advice.