Three Mistakes Made When Interpreting COVID-19 Numbers

It is easy to misread COVID-19 case counts and draw faulty conclusions about the current state of the outbreak. Below we discuss three common mistakes to be aware of as you track the impact of the COVID-19 pandemic.

Mistake #1: Assuming the Sample Data Represents the Population
Testing data on case counts may not reflect the true state of the pandemic in the broader population. With the number of tests varying from day to day, how do you balance that variation against increases or decreases in the case count? And how representative of the broader population is the daily sample of people tested? Could those getting tested be more likely to test positive than a truly random sample of people? If so, the case totals would partly reflect selection bias: someone showing symptoms is more likely to get tested than someone without symptoms, so the reported numbers would not give a perfectly clear picture of what is going on in the broader population. Until testing is more systematic and complete, this challenge will remain.
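As a rough illustration of this selection effect, the toy calculation below uses entirely hypothetical numbers (the prevalence, symptom rate, and testing rates are assumptions, not actual COVID-19 statistics) to show how testing mostly symptomatic people can make the observed positivity rate overstate true prevalence.

```python
# Toy sketch of selection bias in test positivity. All numbers are hypothetical.

true_prevalence = 0.02           # assumed share of the population infected
p_symptomatic_if_infected = 0.6  # assumed share of infected people with symptoms
p_test_if_symptomatic = 0.50     # assumed testing rate among symptomatic infected people
p_test_otherwise = 0.02          # assumed testing rate for everyone else

population = 1_000_000
infected = true_prevalence * population
symptomatic_infected = infected * p_symptomatic_if_infected
everyone_else = population - symptomatic_infected

# Simplification: symptomatic infected people test positive; everyone else tests negative.
tests_positive = symptomatic_infected * p_test_if_symptomatic
tests_negative = everyone_else * p_test_otherwise
observed_positivity = tests_positive / (tests_positive + tests_negative)

print(f"True prevalence:     {true_prevalence:.1%}")     # 2.0%
print(f"Observed positivity: {observed_positivity:.1%}") # roughly 23%, far higher
```

With these made-up inputs, the positivity rate among people who actually get tested is roughly ten times the true prevalence, which is why positivity alone is a poor proxy for the state of the outbreak.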

Mistake #2: Relying on Averages or Data That Is Not Normalized
Before the summer spike in cases, people were saying that the number of cases in the US was flattening. However, splitting the US into two groups, New York State and the other 49 states, would have shown that the epidemic was on the upswing in large swathes of the country even as it declined in New York. Looking at total or average case counts across the US masked the fact that many states were not seeing cases flatten. More localized case counts provide a more accurate picture and better insight for people trying to understand outbreaks closer to home. To fully discern how an outbreak is changing, it is also key to examine not just total case counts but case counts over different time horizons; given the roughly two-week incubation period for the virus, a trailing 14-day case count can be informative. And if you need to compare two areas, normalizing regional case counts by population helps. Taken together, limiting data to relevant regions and time periods and normalizing it, rather than relying on totals or averages, makes case and death counts more informative.
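As a minimal sketch of the two calculations mentioned above, the snippet below uses pandas to compute a trailing 14-day case total and a per-100,000-resident normalization. The daily counts and the population figure in the usage example are made up for illustration.

```python
import pandas as pd

def trailing_and_per_capita(daily_cases: pd.Series, population: int) -> pd.DataFrame:
    """Rolling 14-day case total and cases per 100,000 residents.

    `daily_cases` is assumed to be indexed by date with one row per day.
    """
    trailing_14d = daily_cases.rolling(window=14, min_periods=1).sum()
    per_100k = trailing_14d / population * 100_000
    return pd.DataFrame({
        "trailing_14d_cases": trailing_14d,
        "cases_per_100k_14d": per_100k,
    })

# Hypothetical usage with made-up numbers:
idx = pd.date_range("2020-06-01", periods=21, freq="D")
cases = pd.Series(range(100, 121), index=idx)  # fabricated toy daily counts
summary = trailing_and_per_capita(cases, population=21_000_000)
print(summary.tail())
```

Comparing the per-100,000 column across regions puts areas with very different populations on the same footing, which a raw total or national average cannot do.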

Mistake #3: Assuming Causation
As certain states started to reopen, incorrect inferences were drawn about the impact of reopening on daily case counts. For example, one narrative in Florida claimed that a phase of the state's reopening caused confirmed cases to spike the next day. There is no causal relationship between those events. In fact, there is roughly a two-week lag between an acute event (say, a city moving into a new phase of COVID-19 restrictions) and when the case data will start to show its effect; a good example is April, when Anthony Fauci noted that the effects of social distancing took approximately two weeks to bend the case curve. This is not to say that cases won't rise if certain municipalities reopen, only that connecting the dots this way is both unhelpful and incorrect.
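One minimal sketch of how that lag might be handled in practice is shown below: rather than looking at cases the day after an event, only examine the case series starting about two weeks later. The event date and the 14-day lag value here are assumptions chosen for the example.

```python
import pandas as pd

# Because of the roughly two-week incubation and reporting lag, look for an
# event's effect in the case series starting ~14 days after the event,
# not the next day. The date and lag below are illustrative assumptions.
LAG_DAYS = 14
reopening_date = pd.Timestamp("2020-06-05")  # hypothetical reopening date

def window_after_event(daily_cases: pd.Series,
                       event: pd.Timestamp,
                       lag_days: int = LAG_DAYS) -> pd.Series:
    """Slice of daily cases where an event's effect could plausibly start to appear."""
    start = event + pd.Timedelta(days=lag_days)
    return daily_cases.loc[start:]
```

Even this window only tells you when an effect could begin to show up; it does not, by itself, establish that the event caused any change in the counts.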

For more information on the impacts of the COVID-19 pandemic, visit the “Economic Impact Studies” page of our website: https://edgeworthanalytics.com/insights-news/economic-impact-studies/
