
Better Metrics to Guide Public Health Policy: Lessons Learned From COVID-19 for Data Systems Improvement

Published on Jan 26, 2023

Abstract

As communities adjust COVID-19 policies, objective ‘metrics’ are needed to guide decisions. But metrics are only as good as their inputs. Most COVID-19 data are based on case surveillance. Individuals’ decisions to be tested depend on concerns about exposure and symptoms, as well as the availability of testing. These factors cause drop-offs at every step in the reporting process, so the numbers of actual cases, as well as deaths and hospitalizations, are far higher than reported, varying over time and among population subgroups.

Metrics are used to compare among population groups and over time, so consistency in data systems is important. The Centers for Disease Control and Prevention (CDC) must provide leadership to standardize public health, hospital, and other data systems, not just case definitions. Public health emergencies are complex phenomena that cannot be summarized in a single indicator, so CDC should develop a balanced portfolio of metrics that together describe the epidemiologic situation and provide information to guide decision-making. Metrics are intended to inform—not decide—policy decisions that balance epidemiologic benefits and social and economic costs, taking into account the current state of the pandemic. Policymakers should avoid hard triggers and consider trends over weeks to months rather than daily numbers.

Going forward, we must complement case counts with population estimation methods such as sampling, excess mortality, and wastewater surveillance. While ‘estimation’ sounds less precise than ‘counting,’ these methods can provide a more comprehensive and accurate assessment of the pandemic’s impact on different populations and how it changes over time.

Keywords: COVID-19, surveillance, metrics, public health policy, data quality, epidemiology


Media Summary

As communities adjust policies to mitigate COVID-19, the focus is on objective ‘metrics’ to guide decisions. But metrics are only as good as their inputs, and three years into the pandemic, problems abound with the data underlying them. Indeed, the United States is still plagued with many of the statistical problems identified in the May 2020 special issue of Harvard Data Science Review on COVID-19. At the beginning of the pandemic, it was necessary and appropriate to make use of the available data, primarily counts of cases and deaths. It is now well past time to improve the data systems needed to guide policy for COVID-19, as well as for future public health emergencies.

Taking a systems perspective, this article summarizes what needs to be monitored to guide policy, describes the main sources of data used to construct metrics, and assesses their strengths and weaknesses. We also show how to make the best of current data systems and identify strategies for improving data systems for future public health emergencies.

Most COVID-19 data are based on case surveillance. Individuals’ decisions to be tested depend on concerns about exposure and symptoms, as well as the availability of testing. These factors cause drop-offs in diagnosis and reporting at every step in the reporting process, so the numbers of actual cases, as well as deaths and hospitalizations, are far higher than reported, varying over time and among population subgroups.

Metrics are used to compare among population groups and over time, so consistency in data systems is more important than complete counts. The Centers for Disease Control and Prevention (CDC) must provide leadership to standardize public health, hospital, and other data systems, not just case definitions. Public health emergencies are complex phenomena that cannot be summarized in a single indicator, so the CDC should develop a balanced portfolio of metrics that together describe the epidemiologic situation and provide information to guide decision-making. Metrics are intended to inform—not decide—policy decisions that balance epidemiologic benefits and social and economic costs, taking into account the current state of the pandemic. Policymakers should avoid hard triggers and consider the ‘big picture’ and trends over weeks to months rather than daily numbers.

Going forward, we must complement case counts with population estimation methods such as sampling, excess mortality, and syndromic and wastewater surveillance. While ‘estimation’ sounds less precise than ‘counting,’ these methods can provide a more comprehensive and accurate assessment of the pandemic’s impact on different populations and how it changes over time.

Aggregating, analyzing, and preparing visualizations of the data needed to monitor public health emergencies is an immense challenge for data science. In addition to ensuring metrics’ validity, data will be coming from a wide variety of public and private sources, reflecting different aspects of COVID-19 infections and their consequences. Public health, hospital, and biosurveillance data will be based on different geographies and reflect different time frames.


1. Introduction

As communities adjust their policies to prevent the spread of COVID-19, the focus is on science-based, objective ‘metrics’ to guide decisions. But metrics are only as good as their inputs, and more than two years into the pandemic, problems abound with the data underlying them. Indeed, the United States is still plagued with many of the statistical problems identified in the May 2020 special issue of Harvard Data Science Review on COVID-19. As it comes out of its ‘zero-COVID’ policy in December 2022, China faces the same problems (Wakabayashi & Fu, 2022). Although data are ubiquitous, the metrics policymakers use to guide decisions suffer from Xiao-Li Meng’s “big data paradox”—the tendency of big data sets to minimize errors due to small sample size but magnify others (Powell, 2021).

Moreover, the messiness of data in the Omicron wave (‘record-high cases! but fewer deaths!’) has deepened our COVID Rashomon (after the 1950 film), in which different communities are telling themselves different stories about what’s going on, and coming to different conclusions about how to lead their lives. That’s true even within populations that, a year ago, were united in their desire to take the pandemic seriously and were outraged by those who refused to do so (Thompson, 2022). Furthermore, dissonance about the metrics can undermine the trust in the public health system that is essential to an effective collective response to the pandemic.

Metrics are a product of public health data systems, so improving the metrics must begin with an understanding of the strengths and limitations of our existing public health data systems and methods. The problems with these systems are well recognized (LaFraniere, 2022), and are the focus of efforts to reform the CDC (Cueto, 2022) and the Improving DATA in Public Health Act (Office of Lauren Underwood, 2022), introduced in July 2022.

Metrics are ‘statistical aggregates,’ described by the National Academies of Sciences, Engineering, and Medicine (NASEM) as “nonidentifiable aggregates, estimates, and statistics, with the goal of using statistical aggregation to create useful information for society and decisionmakers without harming individuals” (NASEM, 2022). Although they are based on reports of individual COVID-19 cases, hospitalizations, and deaths, metrics are expressed as counts, rates, or proportions. Thus, we must consider not only the completeness and accuracy of the individual reports but also the definitions of the metrics and the local, state, and national processes used to calculate and report them.

Taking a systems perspective, this article describes the main sources of data used to construct metrics, and assesses their strengths and weaknesses. We also discuss how the principles of statistics and data science can help us to make the best of current data systems and identify additional types of data that might be the basis of more informative metrics. In particular, we explain why public health data systems must move beyond case counts to population estimation methods such as sampling, excess mortality, and syndromic and wastewater surveillance.

Our analysis draws lessons from the COVID-19 pandemic, but the goal is to improve data systems for future public health emergencies, whatever their cause may be. In addition to the epidemiological metrics that are the focus of this article, much more is needed to track the consequences of the pandemic and to manage public health control strategies, as well as to assess preparedness and evaluate the impact of interventions. Although these other matters are beyond the scope of this article, many of the same principles apply.

2. Where Do the Data Come From?

Most of what we know about COVID-19 in our communities is based on case surveillance, a classic public health approach. Health care providers and labs notify the health department, which then identifies and isolates contacts. This process requires that health officials identify specific individuals with the disease, starting with those with symptoms who are tested for diagnostic purposes. These detailed, individual-level data also facilitate epidemiologic investigations to characterize clinical disease course and risk factors. These data can also help health officials identify local transmission risks (e.g., locations such as meatpacking plants where super-spreader events may occur) so they can take action (e.g., close the location or enhance safety).

But the metrics compiled from case data can be problematic. Individuals’ decisions to be tested and seek health care depend on concerns about exposure to the disease, knowledge and information about the disease, concern about symptoms, and other factors (Carter et al., 2021), and thus on the information they read or hear. People’s willingness to get tested, and thus case counts, also rely on the availability of adequate and accurate testing, case definitions, and health professionals’ judgments and actions. For example, when the new Omicron variant was on the rise in November and December 2021, there was renewed media coverage of COVID-19, and a surge in testing across the country. Multiple news reports characterized the lack of testing availability and rapidly rising raw numbers of cases. In other words, because of this new variant, the holiday travel season, and the perception of potential for being infected, the population was newly motivated to seek testing. Limited information and access to accurate testing and care cause drop-offs in diagnosis and reporting at every step in this complex process (Hamilton et al., 2021). Hence, the number of officially recorded COVID-19 cases in the United States is a dramatic underestimate.

Individual behavior is driven by a combination of social, psychological, and contextual cues, which taken together prompt action or inaction. Social science theories of health behavior change can help explain motivations for an action such as seeking a test. Take, for example, the health belief model, one such explanatory model. In Table 1, we explain how motivations to seek testing vary along theoretically based lines. If some individuals are not motivated to get tested, because of differences in their perceived susceptibility, perceived severity, or the perceived benefits of doing so, they will not be counted in case surveillance. As time has gone on, several of these constructs have shifted: perceptions of the severity of COVID-19 have decreased as vaccination has increased and death rates have gone down; new variants have emerged; and the perceived benefits of testing have decreased as testing is no longer an access point to activities. These shifts influence the desire to be tested and add nuance and confusion to counts.

Case definitions and diagnostic methods typically change over the course of a pandemic as epidemiologists learn more about the pathogen and the disease’s symptoms. For example, at the peak of the outbreak in New York City, tests were not available in sufficient numbers, so individuals who had COVID-19 symptoms were regarded as ‘presumptive cases.’ In fact, there were almost 60% as many presumptive as confirmed cases in March and April 2020 (Goodman & Rashbaum, 2020). The proliferation of rapid home testing in early 2022 means that many positive test results will never be reported to health departments, and thus not counted (Healey & Garcia, 2022). As a result, the official case counts will diverge even further from actual infections (Smith-Schoenwalder, 2022).

Table 1. Theoretical explanation of individual motivation for seeking a test based on the Health Belief Model (Rosenstock et al., 1988).

| Construct | Definition | Application to COVID-19 testing motivation |
| --- | --- | --- |
| Perceived susceptibility | Belief about the likelihood of getting a disease or condition | Perception of COVID-19 being an issue in the individual’s geographic area |
| Perceived severity | Belief about the seriousness of contracting a condition or of leaving it untreated, including physical and social consequences | Perception that the individual is at risk for COVID-19 given age and other demographics |
| Perceived benefits | Beliefs about positive features or advantages of a recommended action to reduce threat, both to oneself and to one’s community | Perception that a test provides beneficial information or is a “treatment” unto itself |
| Perceived barriers | Beliefs about negative features of a recommended action to reduce threat | Perception that a test is not useful or is painful to get |
| Cue to action | Internal or external stimuli that prompt action | Media coverage of testing; knowing others who have been tested; getting sick |
| Self-efficacy | The conviction that one can successfully execute a behavior | Ability to go and get a test |

That case counts underrepresent infections is so common that epidemiologists coined the phrase ‘iceberg effect’ to describe it. But unlike real icebergs, the proportion above the waterline varies—the iceberg ‘bobs.’ For example, cases were likely undercounted by a factor of 2 to 10 (Angulo et al., 2021), with larger ratios early in spring 2020. In addition, variation in testing capacity and practices means that population subgroups may be disproportionately represented (or underrepresented) in case counts—sometimes quite substantially. For example, Reitsma et al. (2021) found that in California, Latinos are more than eight times likelier to live in high exposure–risk households than White people and are overrepresented in cumulative cases by a factor of 3. However, Latinos were 27% less likely to be tested, so the disparity between their relative exposure risk and reported cases is likely partially attributable to less testing. Furthermore, starting in September 2020, Neelon and colleagues (2021) found that states with Republican governors did much less testing than their Democratic counterparts. Simultaneously, rates of new COVID-19 cases were generally higher in Republican than Democratic states; if the resulting undercount in cases were accounted for, the gap would have been even greater.
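
To make the scale of the iceberg concrete, the arithmetic below converts a reported case count into a range of estimated infections under the 2x-to-10x ascertainment ratios cited above. The case count and the choice of ratios are purely illustrative.

```python
# Illustrative only: translate a reported case count into a plausible range
# of actual infections under assumed under-ascertainment ratios. The 2x-10x
# range follows the seroprevalence-based estimates cited above (Angulo et al.,
# 2021); a real analysis would let the ratio vary over time and across
# subgroups, since the iceberg 'bobs'.
reported_cases = 150_000  # hypothetical weekly count for a jurisdiction

for label, ratio in [("low undercount", 2), ("mid-range", 4), ("early-2020-like", 10)]:
    print(f"{label:>16}: ratio {ratio:>2}x -> ~{reported_cases * ratio:,} infections")
```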

Problems with the counts lead to new perceptions of the pandemic. The public and decision makers perceive metrics and in turn act on them. Both numeracy (Peters, 2008) and risk perception (Keller & Siegrist, 2009) are challenging even without a global pandemic, so confusion about metrics is unsurprising. Misunderstanding creates a ripe environment for conspiracy beliefs (Walter et al., 2018) and disinformation campaigns (Lyons et al., 2019), and some believe that actual cases and deaths differ from the reported numbers. Preexisting government mistrust, language and information access barriers, and other communication factors add to widespread confusion about the virus and the needed next steps. And even those who believe the numbers are accurate may disagree with the decisions informed by data.

Metrics and perception also intersect in the case of ‘rare breakthrough infections’ among individuals who have been fully vaccinated but later test positive. Between January and April 2021, the CDC reported 10,262 breakthrough infections among 101 million fully vaccinated individuals (CDC COVID-19 Vaccine Breakthrough Case Investigations Team, 2021). Beginning in May 2021, the CDC stopped monitoring individuals who tested positive following vaccination unless the person was hospitalized or died. The CDC and the media regard these breakthrough infections as ‘rare,’ but that term was not tied to a particular infection rate. As more people were vaccinated, local outbreaks with some cases in vaccinated individuals were documented, raising the question of whether breakthrough infections remain ‘rare.’ Without a baseline metric tracked over time, making sense of breakthrough infection metrics is challenging. Vaccinated individuals who now perceive they may be more likely to have a breakthrough infection may seek testing again, further skewing the positivity rates among vaccinated individuals. While the proportion of vaccinated individuals who are hospitalized or who have died of COVID-19 remains below 0.01%, there is confusion over the perception of breakthrough infection metrics and, as a result, over how to make individual and community decisions.

In 2022, the public’s perception of COVID-19 continues to evolve. Mask mandates have been lifted throughout the country except in health care facilities and on modes of mass transportation, and vaccine requirements have been removed for dining inside or attending sporting events. To mirror this shift, the CDC updated its measurement guidance for localities to focus no longer on cases but on hospital capacity and hospitalization (Centers for Disease Control and Prevention, 2022b; Mandavilli, 2022).

3. We Need More Testing!

Fighting the pandemic can complicate the interpretation of the metrics we use to track it. For instance, because case identification, tracking, and tracing are so essential, early in the pandemic experts called for more testing (Stein, 2020). Throughout 2020, more and different types of tests became available, some less sensitive and specific than the original reverse transcription polymerase chain reaction (rtPCR) tests, but with lower cost and shorter turnaround times (Wu, 2020). Many testing sites were established, serving workers concerned about possible exposures or for whom test results were required, as well as individuals visiting vulnerable relatives, seeking release from travel-related quarantine requirements, or simply wanting ‘peace of mind.’ Testing became an important component of many universities’ efforts to control COVID-19 in the fall of 2020. For example, in addition to symptom-based testing, universities employed a variety of strategies to reduce transmission such as universal entry screening, routine asymptomatic testing, and on-demand testing (Walke et al., 2020). Most recently, in response to the Omicron wave, the Biden administration took steps to encourage use of rapid at-home antigen tests so individuals can self-isolate when infectious (Kaur & Megerian, 2022).

The constantly changing reasons for testing, and the contexts in which it is performed, complicate the interpretation of results (Piltch-Loeb et al., 2021). These changes also create problems for metrics. For example, the number of positive results largely mirrored testing capacity in the spring of 2020, and some suggested that increases did not reflect true rises in incidence. As testing rates fell dramatically in the first half of 2021 (Tompkins & McDonnell Nieto del Rio, 2021), it was difficult to know how much of the drop reflected falling incidence rather than perceptions that testing is no longer needed after vaccination.

With the rise of Omicron, rapid tests were sold out in pharmacies throughout the country. Interest in at-home tests increased, but less so among individuals who self-identified as Black, were aged 75 years or over, had lower incomes, or had a high school–level education or less (Rader, 2022). In some localities, individuals who were able to use rapid tests were told to self-isolate after receiving a positive rapid test result. In other localities, rtPCR testing was encouraged after a rapid test. Local health departments also varied in their capacity to record a positive at-home rapid test. These variations in test availability, in protocols for using a rapid and then an rtPCR test, and in reporting at-home test results meant that comparing the positivity rate in one jurisdiction to another was not comparing apples to apples.

To address this limitation, the ‘test positivity rate’ (the proportion of tests performed that return a positive result) has become popular. But this metric has limitations: changes in the kinds of tests available, disparities in access to testing, and differences in who seeks testing all affect both the size and the composition of the positivity rate’s numerator and denominator in ways that might not accurately reflect transmission of COVID-19 in the population (Ledur et al., 2020). More testing capacity facilitates repeat testing, both in routine testing programs (for example, at universities) and for those seeking confirmation of a positive result or testing for other reasons. When both the numerator and denominator are evolving, interpretation of the rate changes too. While there are legitimate reasons to exclude some tests from the denominator, there is no general agreement about which ones. Consequently, metric definitions—and their interpretation—vary across states.
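
To illustrate how unsettled denominator rules affect this metric, the sketch below computes positivity two ways on the same hypothetical line-level records: counting every test, and counting each person once. The data and the de-duplication rule are invented for illustration; jurisdictions use a variety of such rules.

```python
# Hypothetical line-level test records: (person_id, day, positive?).
# Person 'a' has one infection but three tests around it, a common pattern
# in confirmatory and return-to-work testing.
tests = [
    ("a", 1, True), ("a", 3, True), ("a", 5, True),
    ("b", 2, False),
    ("c", 2, False),
    ("d", 4, False),
]

# Definition 1: every test in the window counts.
pos_all = sum(result for _, _, result in tests) / len(tests)

# Definition 2: one record per person (their first test in the window).
first_by_person = {}
for person, day, result in sorted(tests, key=lambda t: t[1]):
    first_by_person.setdefault(person, result)
pos_dedup = sum(first_by_person.values()) / len(first_by_person)

print(f"all tests:      {pos_all:.0%}")    # 50%
print(f"one per person: {pos_dedup:.0%}")  # 25%
```

The same underlying epidemiology yields a positivity rate twice as high under one defensible definition as under another, which is why the definition must travel with the number.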

4. Hospitalizations: The New Standard?

As treatments improved, vaccines became widely available, and new variants emerged, some suggested that hospitalizations are the best indicator of COVID-19’s impact (Gandhi & Bienen, 2021). Early in the pandemic, hospitalization was less susceptible to shifts in testing rates and thus was a relatively stable proxy for COVID-19 transmission because similar proportions of cases could be expected to be hospitalized across locations. However, the fraction of incident cases that are hospitalized has fallen over time due to vaccination, acquired immunity, treatment, and reductions in the inherent severity of some variants (Beaney et al., 2022; Lewnard et al., 2022). In 2022, the number of people in a jurisdiction who are hospitalized reflects those with more severe disease, decoupling serious morbidity and heightened risk of mortality from the spread of the virus in the community. Thus, hospitalization metrics are becoming less useful for individual members of the public who want to assess their risk of acquiring COVID-19, because the same level of hospitalization now corresponds to substantially more community transmission than it did a year ago.

However, hospitalization metrics remain very useful for public health authorities and policy makers. As COVID-19 approaches something more similar to endemicity, it makes sense to focus less on incident disease and more on the risk of severe morbidity, mortality, and health system strain when deciding what interventions to impose (and when). The share of hospital (and ICU) beds that are filled is an indicator of overall health system capacity and strain and can be helpful for deciding whether nonpharmaceutical interventions are necessary to protect the health care system. Finally, hospitalization likely balances timeliness (it is quicker than mortality) with stability (it is less biased by testing rates than reported diagnoses). Following this logic, in February 2022 the CDC adopted hospital admission rates and hospital bed capacity levels as key indicators to determine its “COVID-19 Community Level” classifications (Centers for Disease Control and Prevention, 2022b).
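
A sketch of that classification logic follows, paraphrasing the cut points in the CDC's scientific rationale (CDC, 2022a) from memory; the exact thresholds should be verified against that document before any real use.

```python
def community_level(cases_per_100k_7d: float,
                    admissions_per_100k_7d: float,
                    pct_beds_covid_7d: float) -> str:
    """COVID-19 Community Level, paraphrasing the February 2022 CDC
    scientific rationale (CDC, 2022a). Inputs are 7-day totals/averages;
    the cut points here are recalled from that document and should be
    verified against it."""
    if cases_per_100k_7d < 200:
        if admissions_per_100k_7d >= 20.0 or pct_beds_covid_7d >= 15.0:
            return "high"
        if admissions_per_100k_7d >= 10.0 or pct_beds_covid_7d >= 10.0:
            return "medium"
        return "low"
    # At higher case rates, the same hospital indicators map one level up.
    if admissions_per_100k_7d >= 10.0 or pct_beds_covid_7d >= 10.0:
        return "high"
    return "medium"

# A jurisdiction with moderate transmission but ample hospital capacity:
print(community_level(150, admissions_per_100k_7d=8.0, pct_beds_covid_7d=4.5))  # low
```

Note how the design embodies the section's argument: case counts still enter the classification, but hospital indicators do most of the work.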

Of course, ‘hospitalization’ represents varying constructs, so decision makers must be aware of distinctions among measures of hospitalization and their intended uses. The share of beds that are filled is a good measure of health system strain but not necessarily a measure of COVID-19 transmission because it depends on the number of beds that exist and trends in other causes of hospitalization. New COVID-19 admissions in many jurisdictions encompass both admissions due to COVID-19 (a proxy for incidence and a measure of severe disease) and admissions due to other causes where the patient is found to also have COVID-19 (which is more complicated to interpret, but still a proxy for incidence). New York, for example, seeks to mitigate this problem by reporting instances where patients were diagnosed with COVID-19 after admission in a disaggregated fashion (New York State Department of Health, n.d.). Other jurisdictions, however, adopted hospitalization metrics that created confusion. For example, the District of Columbia reported the percentage of confirmed cases that were hospitalized as its primary hospitalization indicator for much of the Omicron wave. Using reported cases as the denominator was highly misleading in that this rate fell, and was characterized as “improving,” when, in fact, many more patients were hospitalized (District of Columbia Department of Public Health, 2022). To its credit, the District later deemphasized this indicator (District of Columbia Department of Public Health, n.d.). Finally, jurisdictions vary in how they report hospitalization data. For example, the District of Columbia reports new hospitalizations per 100,000 DC residents although about half of inpatients in Washington hospitals reside outside the District. This makes it difficult to compare rates with other jurisdictions because the CDC presents new hospitalizations without regard to patients’ residency (District of Columbia Department of Public Health, n.d.).

5. Making the Best of Existing COVID-19 Metrics

Managing the COVID-19 pandemic requires valid and objective data, but current metrics can be misleading and do not always tell us what we need to know. It is easy to forget that metrics are designed to measure a specific construct, so a metric that is appropriate in one context may be easily misinterpreted in another. Furthermore, the lack of consensus on what metrics to report and how they are defined also provides an opportunity to pick and choose among the options in support of political aims. So, how do we make sense of the sea of data in which we are tossed? This section presents three lessons from the COVID-19 pandemic about using metrics effectively to guide public health decisions (see Table 2). The following section discusses changes in public health data systems and contributions from data science that can improve our response to this and future public health emergencies.

Table 2. Lessons learned in the COVID-19 pandemic about public health metrics.

| Lessons learned | Implications |
| --- | --- |
| 1. Because metrics are used to compare among population groups and over time, consistency in data systems is more important than obtaining complete case counts. | The Centers for Disease Control and Prevention (CDC) needs to provide leadership to standardize state and local public health, hospital, and other data systems used to produce metrics, not just case definitions. |
| 2. Pandemics and other public health emergencies are complex phenomena that cannot be summarized in a single indicator. | The CDC should develop a balanced portfolio of public health metrics that together describe the epidemiologic situation. |
| 3. Metrics are intended to inform—not decide—public health policy decisions that balance epidemiologic benefits and social and economic costs, taking into account the current state of the pandemic. | Policy makers should avoid hard cut-offs and triggers based on single metrics, consider the ‘big picture’ and trends over weeks to months rather than daily numbers, and use good epidemiologic judgment. |

5.1. Strive for Consistency Rather Than Complete Counts

Metrics rarely stand alone; they are used to analyze trends over time and make comparisons among different geopolitical and sociodemographic groups. These comparisons are difficult to interpret, however, especially when the iceberg bobs, as described above. This means that consistency in data systems is more important than obtaining complete case counts. To the extent possible, therefore, we should seek constant reference populations so that metrics are not overly affected by changes in test availability, public perceptions about the need for testing, and other factors, and to reduce political cherry-picking. Similarly, definitions (e.g., whether a metric uses 7- or 14-day averages) must be clear and consistent over time and among jurisdictions.
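
The sketch below shows why the averaging window must be stated explicitly: on the same invented daily counts, the 7- and 14-day trailing means differ by roughly 25% just after a peak, so two jurisdictions using different windows would report quite different ‘current’ levels.

```python
# Invented daily case counts for a wave that has just peaked.
daily_cases = [40, 42, 45, 50, 60, 75, 95, 120, 150, 180,
               210, 230, 240, 235, 220, 200, 185, 170]

def trailing_mean(series, window):
    """Average of the most recent `window` days."""
    return sum(series[-window:]) / window

print(f"7-day average:  {trailing_mean(daily_cases, 7):.0f}")   # ~211
print(f"14-day average: {trailing_mean(daily_cases, 14):.0f}")  # ~169
```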

Improving consistency begins with the understanding that COVID-19 data and metrics are the output of a complex system. Health care providers, test centers and laboratories, hospital administrators, funeral directors, and many others generate case reports, test results, ICU capacity reports, and death certificates. The resulting data are compiled, processed, analyzed, and published by a network of local, state, and federal public health and other government agencies, each with its own regulations, procedures, and interests. Metrics and other statistics are disseminated by these agencies as well as the media and other platforms, which do their own analysis and visualizations. Because there is so much variability in surveillance systems, data can be difficult to interpret and to compare over time and among population groups.

We need more coordination among data publishers about what measures to report and how metrics are defined. During the winter 2021 surge, for example, California required different levels of community restrictions based on the infection rate and test positivity rate in each county. To ensure consistency, the state health department calculated and published these rates for all counties, making statistical adjustments for the level of testing and for small counties, and incorporating an adjudication process if counties felt the rates were incorrect (California Department of Public Health, n.d.). Early in the pandemic, private groups such as The COVID Tracking Project (n.d.) and COVID Act Now (n.d.) stepped up to create dashboards attempting to provide a consistent national picture. The development of the CDC’s COVID Data Tracker website (CDC, n.d.-a) in early 2021 was an important step, but it takes data as generated rather than enforcing national standards. Although it may not have formal authority, the CDC can and must assert its national public health leadership role to standardize state and local public health, hospital, and other data systems used to produce metrics, not just case definitions.

5.2. Develop a Balanced Portfolio of Public Health Metrics

Although the public sometimes seems to want a single index (‘how bad is it?’ or ‘is it over yet?’), pandemics and other public health emergencies are complex phenomena that cannot be summarized in a single indicator. Cumulative case counts and death tolls grab attention, but decision makers need measures of the current incidence levels and trends to decide when restrictions should be modified. The public needs information on the level of infection in different locations, settings, and populations to help decide what risks to accept. Health system administrators are concerned with health care system capacity and utilization, patient severity and vaccine status, as well as staff, resources, and supplies. Policymakers need information on the economic and social consequences of the pandemic and control measures. And because policies often are set and interpreted at the state and local levels, and by organizations such as worksites and universities, metrics must be disaggregated by geography, community, sociodemographic groups, and other factors.

To address multiple needs, the CDC should develop a balanced portfolio of metrics (Stoto, 2014) that together describe key aspects of the epidemiologic situation without overwhelming numbers of indicators or detail. Each metric in a balanced portfolio has different strengths, weaknesses, and appropriate uses (Currie, 2022). Having multiple measures in the portfolio also enables them to be compared or ‘triangulated.’ For example, knowing the number and types of tests performed is critical for interpreting changes in the number of reported cases. More recently, dramatic surges in cases coupled with much smaller increases in hospitalizations and deaths helped public health researchers understand that the Omicron variant was more transmissible, but less virulent, than earlier variants, and thus required different control strategies.

5.3. Focus on the Big Picture Rather Than Daily Numbers

The media, the public, and often decision makers are drawn to day-to-day changes (e.g., ‘this is more cases in a day than we’ve seen since …’) and major milestones (e.g., one million COVID-19 deaths). However, both the spread of the virus in the population and the effect of control strategies take weeks to months to develop. Moreover, until there is more consistency in reporting and publication systems, short-term changes will be dominated by random fluctuations, weekend effects, and other reporting artifacts. We must, therefore, shift the focus from day-to-day changes to longer trends, comparisons, and analysis.

In the spring of 2022, some states moved from daily to weekly reports. Although some are concerned that a new surge could be missed (Hassan, 2022), a more analytical approach with measures designed to detect the emergence of new variants might actually be more productive. Los Angeles County, for example, recently implemented such a system, which focuses on virologic surveillance, emergency department visits, the case rate in low-income areas of the county, and outbreaks in four specific settings (skilled nursing homes, K-12 schools, homeless shelters, and workplaces) (Lin & Money, 2022).

The data are not an end in themselves; they are intended to inform—not decide—public health policy decisions that balance epidemiologic benefits and social and economic costs, taking into account the current state of the pandemic. However, metrics are imperfect indicators of the epidemiologic situation whose interpretation and utility change over time as the pandemic ebbs and flows, data systems evolve, and policy questions change. For instance, as noted above, the growing availability of vaccines and effective treatments made case counts a less meaningful indicator of the severity of COVID-19. When we see the metric as the end in itself, we focus on the number. When we see it as a decision tool, we focus on the decision. One implication, therefore, is to avoid hard cutoffs and triggers based on single metrics, such as a policy of closing schools if the positivity rate exceeds 5%. A better use of thresholds is as triggers for a more in-depth review and analysis of the data in which good epidemiologic judgment can be brought to bear.

6. Improving Public Health Metrics: Implications for Data System Reform

The U.S. public health data system has weathered many challenges, both before and during the pandemic (DeSalvo et al., 2021). At the beginning of the pandemic, it was necessary and appropriate to make use of the available data, which were primarily counts of cases and deaths. It is now well past time to improve the data systems needed to guide policy for COVID-19, as well as for future public health emergencies. Indeed, the first element of President Biden’s COVID-19 strategy (White House, 2021) introduced in January 2021, was to “restore trust with the American people” with science- and data-based decision-making and public engagement. Since then, the CDC’s COVID Data Tracker website (CDC, n.d.-a) has become far more robust. Other countries have found data-based transparency to be critical. Germany’s focus on collecting and analyzing data and communicating the results to the public, for instance, has led to high levels of trust in the government throughout most of the pandemic (Wieler et al., 2021).

A year after Biden’s COVID-19 plan was issued, however, a group of his former advisors called for a “comprehensive, digital, real-time, integrated data infrastructure for public health” (Emanuel et al., 2022) and “a greatly improved public health infrastructure, including a comprehensive, permanently funded system for testing, surveillance, and mitigation measures” (Michaels et al., 2022). And starting in 2022, the CDC’s Data Modernization Initiative is working with state, tribal, local, and territorial public health jurisdictions and private and public sector partners to create modern, interoperable, and real-time public health data and surveillance systems (CDC, n.d.-b). As noted in the Introduction, efforts to reform the CDC (Cueto, 2022) and the Improving DATA in Public Health Act (Office of Lauren Underwood, 2022) both address many of the same issues.

The thrust of these initiatives, it seems, is to make better use of a wide variety of information that already exists in disparate medical, public health, and other data systems. While there is clearly much that can be accomplished in this way, there are many data science challenges in integrating the volume, velocity, and variety of this information. It is also important to recognize, as emphasized in the earlier sections of this article, that data generated for one purpose in a particular context may not have the same meaning as, or be comparable to, data from other settings. Thus, while current public health data system reforms are important, they may not address the problems with metrics that are the focus of this analysis. Beyond sharing data on known cases, hospitalizations, and deaths, more needs to be known about infections that are never diagnosed or recorded.

One problem with relying on data currently in the health care system is that it does not include individuals who were not tested, did not seek care, and may not even know they were infected with SARS-CoV-2, the virus that causes COVID-19. According to one estimate, the ratio of actual to reported infections in the United States is more than 4, and at times and in some states as much as 10 (Aizenman et al., 2021). As the consequences of an infection have become less serious (at least for vaccinated individuals), and as contact tracing efforts have fallen off, reported cases represent an even smaller fraction of actual infections (Hassan, 2022).

Just as with cases, many COVID-19 deaths are not reported as such. Excess mortality calculations (Fricker, 2021), which are based on comparing the observed number of deaths in the pandemic period to an earlier time, include both direct deaths (caused by a documented COVID-19 infection) and indirect deaths (for example, a heart attack victim unable to get care due to overcrowded emergency rooms), and typically provide the most complete estimates of COVID-19’s impact (Currie, 2022). For instance, The Economist calculates that through March 30, 2022, there were 1.2 to 1.4 million excess deaths in the United States compared to the official count of 977,000, at least a 20% difference ("The Pandemic’s True Death Toll," n.d.). Stokes and colleagues (2021) found that deaths were more likely to be missed in counties with lower average socioeconomic status, with more comorbidities, and located in the South and West. Excess mortality analysis also shows that the pandemic hit Blacks and Hispanics especially hard (Rossen et al., 2021; Stokes et al., 2021). COVID-19 deaths were less likely to be classified as such in rural areas, the South, and counties that supported Donald Trump (Goldhill, 2021). Stoto et al. (2022) have shown that the ratio of estimated to reported COVID-19 deaths varies regionally and over time, from 54% in the West in March–June 2021 to 121% in the Northeast in the same period.
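
A minimal sketch of the excess mortality calculation follows, with all numbers invented for illustration. Expected deaths here come from a simple average of the same month in prior years, whereas published analyses model trend and seasonality and attach uncertainty intervals.

```python
# All-cause deaths for the same calendar month in five pre-pandemic years
# (invented numbers).
deaths_prior_years = [52_000, 53_500, 54_200, 55_100, 55_800]
observed_pandemic_month = 68_400   # all-cause deaths, pandemic month
reported_covid_deaths = 9_700      # deaths certified as COVID-19

expected = sum(deaths_prior_years) / len(deaths_prior_years)
excess = observed_pandemic_month - expected  # captures direct + indirect deaths

print(f"expected: {expected:,.0f}  excess: {excess:,.0f}")
print(f"reported COVID-19 deaths / excess: {reported_covid_deaths / excess:.0%}")
```

In this invented example, certified COVID-19 deaths account for only about two thirds of the excess, the kind of gap the studies cited above found in practice.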

Complete case counts make sense at the beginning of an outbreak, when they facilitate contact tracing and epidemiologic investigations. Public health case reports will always be important for virologic surveillance (e.g., to identify new variants) and sentinel surveillance (in high-risk settings such as nursing homes). But given the limitations of case reports, it is critical to complement them with metrics that are not based on reported cases, hospitalizations, and deaths.

For example, NASEM (2020) shows how population-based statistical estimates can complement case and death counts. While ‘estimation’ sounds less precise than ‘counting,’ and especially if levels of trust are low, estimation methods can provide a more comprehensive assessment of the pandemic’s impact (Stoto & Wynia, 2020). And because case data generated for operational purposes such as contact tracing may not contain demographic descriptors, estimates can provide more detailed information on disparities and social determinants of health (Stoto et al., 2021).

In addition, the United States should conduct seroprevalence surveys in representative samples to determine how many people in a community have been infected (Angulo et al., 2021). The results from these surveys can be broken down by race, ethnicity, and other factors. The REal-time Assessment of Community Transmission-1 (REACT-1) study, which has been monitoring the prevalence of SARS-CoV-2 infection in England since May 2020 (Ward et al., 2020), provides an interesting example. REACT-1 obtains self-administered throat and nose swabs every month from a random sample of 100,000 or more participants aged 5 years and over. Swabs are tested for SARS-CoV-2 infection by rtPCR, and samples testing positive are sent for viral genome sequencing. As an example, an analysis of 297,728 participants with a valid rtPCR test result in round 16 described the rapid spread of the Omicron variant in small geographic regions in November and December 2021 (Elliott et al., 2021).
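
The core estimation step in such a survey is straightforward, as the sketch below shows: a sample proportion with a confidence interval, plus a Rogan–Gladen correction for imperfect test sensitivity and specificity. The sample size and test characteristics are assumed for illustration; REACT-1’s actual analyses also weight for survey design.

```python
from math import sqrt

n, positives = 100_000, 1_500            # assumed survey results
p_raw = positives / n
se = sqrt(p_raw * (1 - p_raw) / n)       # normal-approximation standard error
lo, hi = p_raw - 1.96 * se, p_raw + 1.96 * se

sens, spec = 0.85, 0.995                 # assumed test sensitivity/specificity
p_adj = (p_raw + spec - 1) / (sens + spec - 1)  # Rogan-Gladen estimator

print(f"raw prevalence: {p_raw:.2%} (95% CI {lo:.2%}-{hi:.2%})")
print(f"adjusted for test error: {p_adj:.2%}")
```

Because the sample is drawn at random rather than self-selected, the interval above describes the population, which is exactly what self-selected case counts cannot do.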

Even without random samples, blood samples obtained from individuals in the regular course of medical care can provide useful information. Anand and colleagues (2020) analyzed samples from individuals undergoing kidney dialysis. Continuing a series of studies using blood drawn for routine clinical assessments, Clarke et al. (2022) found that by February 2022, 60% of Americans, including 75% of children, had been exposed to SARS-CoV-2. As long as the reasons for drawing blood do not change or vary from one location to another, these samples can provide useful indicators of changes in temporal or geographical patterns.

While case counting methods are more timely than population estimation methods, COVID-19 has demonstrated the value of ongoing syndromic surveillance efforts, such as the CDC’s Outpatient Influenza-like Illness Surveillance Network (ILINet), which provides near real-time data on visits for influenza-like illness (ILI) (fever and cough and/or sore throat) reported by approximately 2,600 primary care providers, emergency departments, and urgent care centers throughout the United States. Because COVID-19 illness often presents with ILI symptoms, ILINet is used to track trends and allows for comparison with prior influenza seasons. In 2020, the National Syndromic Surveillance Program (NSSP), which tracks emergency department (ED) visits in 47 states, was extended to include COVID-19-like illness (fever and cough or shortness of breath or difficulty breathing) (CDC, 2020). Syndromic surveillance data are not, however, included in the current CDC COVID Data Tracker (CDC, n.d.-a). These data are also not included in the CDC’s COVID-19 Community Levels metrics because the NSSP covers only 71% of U.S. EDs (CDC, 2022a), and perhaps not a representative sample of them.

Because people with COVID-19 shed the virus in their feces, wastewater testing can help monitor COVID-19 in communities without reliance on individuals’ decisions about testing and reporting. Virus levels in wastewater usually increase 4 to 6 days before clinical cases increase, so surveillance results can help communities act quickly to prevent the spread of COVID-19 (Barry-Jester, 2022). In February 2022, the CDC’s COVID Data Tracker released a Wastewater Surveillance tab, which tracks changes and detections of SARS-CoV-2 viral RNA levels at more than 600 testing sites across the country (CDC, n.d.-c). However, despite CDC support for this effort, coverage is incomplete (especially in rural areas), data systems are inconsistent, and the results not always easily accessible to or understood by local decision makers (Keshaviah & Diamond, 2022).
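
That lead-time claim can be checked empirically by lagging one series against the other, as in the sketch below. The data are synthetic, constructed so that the wastewater signal leads reported cases by five days; with real data, the same cross-correlation scan estimates the local lead time.

```python
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(120)
wave = np.exp(-((days - 60) ** 2) / 300)                   # one synthetic wave
wastewater = wave + rng.normal(0, 0.05, days.size)         # shed at infection
cases = np.roll(wave, 5) + rng.normal(0, 0.05, days.size)  # reported ~5 days later

# Scan lags of 1-14 days and keep the one with the highest correlation.
best_lag, best_r = max(
    ((lag, np.corrcoef(wastewater[:-lag], cases[lag:])[0, 1])
     for lag in range(1, 15)),
    key=lambda t: t[1],
)
print(f"wastewater leads cases by ~{best_lag} days (r = {best_r:.2f})")
```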

In addition, questionnaire-based surveys such as the Census Bureau’s Pulse Survey (United States Census Bureau, n.d.) can help us understand the psychological, social, educational, and economic consequences of COVID-19, both in general and in different sociodemographic groups (Monte & Perez-Lopez, 2021). Population-based surveys are critical for understanding issues such as disparities in vaccination uptake and hesitancy (Anderson et al., 2021). Developed rapidly during the pandemic, however, the Pulse Survey made design decisions that traded off quality for speed. Going forward, NASEM (2020) describes how generic survey methods can be developed now so that they are ready to be adapted to a new public health crisis.

Just as with case data, statistical methods vary. Research is needed to specify methodological best practices and harness the federal government’s existing survey infrastructure so that, in times of national emergency, survey instruments can be immediately used to measure the overall impact of the pandemic from both a health and a socioeconomic perspective (NASEM, 2020). Ultimately, this research can also help us to better understand and eventually minimize the historic burden of disparities and inequities faced by the most vulnerable among us (Stoto et al., 2021).

Aggregating, analyzing, and preparing visualizations of the wide variety of data needed to properly monitor COVID-19 or any other public health issue is an immense challenge for data science. In addition to ensuring the validity of the metrics, which is the focus of this article, data will be coming from multiple public and private sources, reflecting different aspects of COVID-19 infections and their consequences. Public health, hospital, and biosurveillance data will be based on different geographies and reflect different time frames.

Italy, which was severely constrained by data problems in the first wave of the pandemic, provides a model of what can be done in the United States. Addressing the problems encountered, it has since developed and implemented an Integrated Surveillance System. This web platform incorporates information from and facilitates data sharing with public health authorities in the autonomous regions, the Department of Civil Protection (which prepares daily COVID-19 case counts), an existing influenza surveillance system (InfluNet), virological surveillance, and other systems. Subsequently, the Ministry of Health established the Epidemic Intelligence Network to coordinate all activities aimed at the early identification of public health risks (especially those that are unusual or unexpected), their validation, evaluation, and investigation, and to facilitate international cooperation (Ministero della Salute & Istituto Superiore di Sanità, 2020).

7. Conclusions

Science-based, objective metrics are essential for informing all public health decisions. COVID-19, however, has drawn attention to weaknesses in U.S. data systems that both complicate decision-making and undermine trust in the public health system and the resulting policies. To rebuild trust for the current pandemic as well as future emergencies, the CDC must provide leadership to move beyond case counts to population-based methods such as representative sampling, excess mortality estimates, and syndromic surveillance, which, together with case reports and deaths, provide a comprehensive picture of the evolving pandemic. In addition, the CDC must work to standardize the system that generates metrics, from case definitions to testing policies.

Finally, the CDC should develop a balanced portfolio of standardized metrics that together describe the epidemiologic situation and are focused on decisions that need to be made. The development of the COVID-19 Community Levels metrics in 2022 (CDC, 2022a), and enhancements to the COVID Data Tracker (CDC, n.d.-a) to include seroprevalence estimates and wastewater surveillance, represent important steps in this direction. Building a balanced portfolio, however, goes beyond adding data streams; it requires selecting a limited number of metrics that together provide an overview of the epidemiologic situation and the information needed to guide decision-making.


Acknowledgments

The authors are grateful for comments from colleagues at Georgetown and the Harvard T.H. Chan School of Public Health, as well as colleagues in public health practice with whom we have spoken about these issues. We also thank Peyton Yee, Samantha Schlageter, and Katrina Dolendo for assistance with the references.

Disclosure Statement

Michael A. Stoto, John D. Kraemer, and Rachael Piltch-Loeb have no financial or non-financial disclosures to share for this article.


References

Aizenman, N., Carlsen, A., & Talbot, R. (2021, February 6). Why the pandemic is 10 times worse than you think. NPR. https://www.npr.org/sections/health-shots/2021/02/06/964527835/why-the-pandemic-is-10-times-worse-than-you-think

Anand, S., Montez-Rath, M., Han, J., Bozeman, J., Kerschmann, R., Beyer, P., Parsonnet, J., & Chertow, G. M. (2020). Prevalence of SARS-CoV-2 antibodies in a large nationwide sample of patients on dialysis in the USA: A cross-sectional study. The Lancet, 396(10259), 1335–1344. https://doi.org/10.1016/S0140-6736(20)32009-2

Anderson, L., File, T., Marshall, J., McElrath, K., & Scherer, Z. (2021, April 14). New tool tracks vaccination and vaccine hesitancy rates across geographies, population groups. U.S. Census Bureau. https://www.census.gov/library/stories/2021/04/how-do-covid-19-vaccination-and-vaccine-hesitancy-rates-vary-over-time.html

Angulo, F. J., Finelli, L., & Swerdlow, D. L. (2021). Estimation of US SARS-CoV-2 infections, symptomatic infections, hospitalizations, and deaths using seroprevalence surveys. JAMA Network Open, 4(1), Article e2033706. https://doi.org/10.1001/jamanetworkopen.2020.33706

Barry-Jester, A. M. (2022, March 21). Poop surveillance proved its worth during COVID-19 pandemic. The Los Angeles Times. https://www.latimes.com/california/story/2022-03-21/sewage-surveillance-covid-infectious-diseases-future

Beaney, T., Neves, A. L., Alboksmaty, A., Ashrafian, H., Flott, K., Fowler, A., Benger, J. R., Aylin, P., Elkin, S., Darzi, A., & Clarke, J. (2022). Trends and associated factors for Covid-19 hospitalisation and fatality risk in 2.3 million adults in England. Nature Communications, 13, Article 2356. https://doi.org/10.1038/s41467-022-29880-7

California Department of Public Health. (n.d.). Blueprint for a safer economy. State of California. Retrieved April 19, 2022, from https://www.cdph.ca.gov/Programs/CID/DCDC/Pages/COVID-19/COVID19CountyMonitoringOverview.aspx

Carter, P., Megnin-Viggars, O., & Rubin, G. J. (2021). What factors influence symptom reporting and access to healthcare during an emerging infectious disease outbreak? Health Security, 19(4), 353–363. https://doi.org/10.1089/hs.2020.0126

Centers for Disease Control and Prevention. (n.d.-a). COVID Data Tracker. Retrieved April 19, 2022, from https://covid.cdc.gov/covid-data-tracker

Centers for Disease Control and Prevention. (n.d.-b). Data Modernization Initiative. Retrieved April 19, 2022, from https://www.cdc.gov/surveillance/data-modernization/index.html

Centers for Disease Control and Prevention. (n.d.-c). Wastewater Surveillance. Retrieved April 19, 2022, from https://covid.cdc.gov/covid-data-tracker/#wastewater-surveillance

Centers for Disease Control and Prevention. (2020, October 16). COVIDView Summary ending on October 10, 2020. https://www.cdc.gov/coronavirus/2019-ncov/covid-data/covidview/past-reports/10162020.html

Centers for Disease Control and Prevention. (2022a, February 25). Indicators for monitoring COVID-19 community levels and COVID-19 and implementing COVID-19 prevention strategies. https://www.cdc.gov/coronavirus/2019-ncov/downloads/science/Scientific-Rationale-summary-COVID-19-Community-Levels.pdf

Centers for Disease Control and Prevention. (2022b, March 24). Community levels. https://www.cdc.gov/coronavirus/2019-ncov/science/community-levels.html

CDC COVID-19 Vaccine Breakthrough Case Investigations Team. (2021). COVID-19 vaccine breakthrough infections reported to CDC - United States, January 1-April 30, 2021. MMWR Morbidity and Mortality Weekly Report, 70(21), 792–793. https://doi.org/10.15585/mmwr.mm7021e3

Clarke, K. E. N., Jones, J. M., Deng, Y., Nycz, E., Lee, A., Iachan, R., Gundlapalli, A. V., Hall, A. J., & MacNeil, A. (2022). Seroprevalence of infection-induced SARS-CoV-2 antibodies — United States, September 2021–February 2022. MMWR Morbidity and Mortality Weekly Report, 71(17), 606–608. http://doi.org/10.15585/mmwr.mm7117e3

COVID Act Now. (n.d.). U.S COVID Tracker. Retrieved July 15, 2022, from https://covidactnow.org/?s=36902444

Cueto, I. (2022, September 23). "Disaster to disaster": Underinvestment in public health systems obstructs response to Covid, monkeypox, Walensky says. STAT. https://www.statnews.com/2022/09/23/disaster-to-disaster-underinvestment-in-public-health-systems-obstructs-response-to-covid-monkeypox-walensky-says

Currie, J., Bassett, M. T., & Raftery, A. (2022). Evaluating COVID-19-related surveillance measures for decision-making. National Academies of Sciences, Engineering, and Medicine. https://doi.org/10.17226/26578

DeSalvo, K., Hughes, B., Bassett, M., Benjamin, G., Fraser, M., Galea, S., Gracia, J. N., & Howard, J. (2021). Public health COVID-19 impact assessment: Lessons learned and compelling needs (Discussion paper). NAM Perspectives. https://doi.org/10.31478/202104c

District of Columbia Department of Public Health. (2022, January 5). Coronavirus data for January 4, 2022. Government of the District of Columbia. https://coronavirus.dc.gov/release/coronavirus-data-january-4-2022

District of Columbia Department of Public Health. (n.d.). Key metrics. Government of the District of Columbia. Retrieved April 19, 2022, from https://coronavirus.dc.gov/key-metrics

Elliott, P., Bodinier, B., Eales, O., Wang, H., Haw, D., Elliott, J., Whitaker, M., Jonnerby, J., Tang, D., Walters, C. E., Atchison, C., Diggle, P. J. Page, A. J., Trotter, A. J., Ashby, D., Barclay, W., Taylor, G., Ward, H., Darzi, A., Cooke, G. S., Chadeau-Hyam, M., & Donnelly, C. A. (2021, December 23). Rapid increase in Omicron infections in England during December 2021: REACT-1 study (Working paper). Imperial College London. https://spiral.imperial.ac.uk/handle/10044/1/93241n

Emanuel, E. J., Osterholm, M., & Gounder, C. R. (2022). A national strategy for the “new normal” of life with COVID. JAMA, 327(3), 211–212. https://doi.org/10.1001/jama.2021.24282

Fricker, R. D. (2021). Covid-19: One year on…. Significance, 18(1), 12–15. https://doi.org/10.1111/1740-9713.01485

Gandhi, M., & Bienen, L. (2021, December 11). Why hospitalizations are now a better indicator of Covid’s impact. The New York Times. https://www.nytimes.com/2021/12/11/opinion/why-hospitalizations-are-now-a-better-indicator-of-covids-impact.html

Goldhill, O. (2021, January 25). Undercounting of Covid-19 deaths is greatest in pro-Trump areas. STAT. https://www.statnews.com/2021/01/25/undercounting-covid-19-deaths-greatest-in-pro-trump-areas-analysis-shows/

Goodman, J. D., & Rashbaum, W. K. (2020, April 14). N.Y.C. death toll soars past 10,000 in revised virus count. The New York Times. https://www.nytimes.com/2020/04/14/nyregion/new-york-coronavirus-deaths.html

Hamilton, J. J., Turner, K., & Lichtenstein Cone, M. (2021). Responding to the pandemic: Challenges with public health surveillance systems and development of a COVID-19 national surveillance case definition to support case-based morbidity surveillance during the early response. Journal of Public Health Management and Practice, 27(Suppl 1), S80–S86. https://doi.org/10.1097/PHH.0000000000001299

Hassan, A. (2022, March 19). Some U.S. states are reducing daily reporting of coronavirus data, raising fears of blind spots. The New York Times. https://www.nytimes.com/2022/03/19/us/covid-reporting-states.html

Healey, J., & Garcia, K. (2022, January 6). If you take an at-home coronavirus test, who keeps track of the results? Probably no one. The Los Angeles Times. https://www.latimes.com/science/story/2022-01-06/if-you-take-an-at-home-coronavirus-test-who-keeps-track-of-the-results-probably-no-one

Kaur, A., & Megerian, C. (2022, January 13). "We’re all frustrated" on COVID: Biden to boost test availability and send military to help hospitals. The Los Angeles Times. https://www.latimes.com/politics/story/2022-01-13/biden-omicron-covid-speech

Keller, C., & Siegrist, M. (2009). Effect of risk communication formats on risk perception depending on numeracy. Medical Decision Making: An International Journal of the Society for Medical Decision Making, 29(4), 483–490. https://doi.org/10.1177/0272989X09333122

Keshaviah, A., & Diamond, M. (2022, June 9). Wastewater monitoring: How to strengthen and sustain an important public health tool. STAT. https://www.statnews.com/2022/06/09/wastewater-monitoring-strengthen-sustain-important-public-health-tool/

LaFraniere, S. (2022, September 20). "Very harmful" lack of data blunts U.S. response to outbreaks. The New York Times. https://www.nytimes.com/2022/09/20/us/politics/covid-data-outbreaks.html?smid=url-share

Ledur, J., Rivera, J. M., & Wang, T. (2020, September 22). Test positivity: So valuable, so easy to misinterpret. The COVID Tracking Project. https://covidtracking.com/analysis-updates/test-positivity

Lewnard, J. A., Hong, V. X., Patel, M. M., Kahn, R., Lipsitch, M., & Tartof, S. Y. (2022). Clinical outcomes associated with SARS-CoV-2 Omicron (B.1.1.529) variant and BA.1/BA.1.1 or BA.2 subvariant infection in southern California. Nature Medicine, 28, 1933–1943. https://doi.org/10.1038/s41591-022-01887-z

Lin, R.-G. II, & Money, L. (2022, April 14). How will officials detect the next surge? The Los Angeles Times. https://enewspaper.latimes.com/infinity/article_share.aspx?guid=c283b122-e475-44d8-901a-768d20c1556b

Lyons, B., Merola, V., & Reifler, J. (2019). Not just asking questions: Effects of implicit and explicit conspiracy information about vaccines and genetic modification. Health Communication, 34(14), 1741–1750. https://doi.org/10.1080/10410236.2018.1530526

Mandavilli, A. (2022, February 25). C.D.C. guidelines suggest 70 percent of U.S. could stop wearing masks. The New York Times. https://www.nytimes.com/2022/02/25/health/cdc-mask-guidance.html?smid=em-share

Michaels, D., Emanuel, E. J., & Bright, R. A. (2022). A national strategy for COVID-19: Testing, surveillance, and mitigation strategies. JAMA, 327(3), 213–214. https://doi.org/10.1001/jama.2021.24168

Ministero della Salute & Istituto Superiore di Sanità. (2020). Prevention and response to COVID-19: Evolution of strategy and planning in the transition phase for the autumn-winter season. Istituto Superiore di Sanità. Retrieved July 15, 2022, from https://www.iss.it/web/iss-en/monographs/-/asset_publisher/yZRmELq8dkmf/content/prevention-and-response-to-covid-19-evolution-of-strategy-and-planning-in-the-transition-phase-for-the-autumn-winter-season.-english-version

Monte, L. M., & Perez-Lopez, D. J. (2021, July 21). COVID-19 pandemic hit Black households harder than White households, even when pre-pandemic socio-economic disparities are taken into account. U.S. Census Bureau. https://www.census.gov/library/stories/2021/07/how-pandemic-affected-black-and-white-households.html

National Academies of Sciences, Engineering, and Medicine. (2020). A framework for assessing mortality and morbidity after large-scale disasters. The National Academies Press. https://doi.org/10.17226/25863

National Academies of Sciences, Engineering, and Medicine. (2022). Toward a 21st century national data infrastructure: Mobilizing information for the common good. The National Academies Press. https://doi.org/10.17226/26688

Neelon, B., Mutiso, F., Mueller, N. T., Pearce, J. L., & Benjamin-Neelon, S. E. (2021). Associations between governor political affiliation and COVID-19 cases, deaths, and testing in the U.S. American Journal of Preventive Medicine, 61(1), 115–119. https://doi.org/10.1016/j.amepre.2021.01.034

New York State Department of Health. (n.d.). Statewide COVID-19 hospitalizations and beds. State of New York. Retrieved April 19, 2022, from https://health.data.ny.gov/Health/New-York-State-Statewide-COVID-19-Hospitalizations/jw46-jpb7

Office of Lauren Underwood. (2022, July 22). Underwood, Bera, Castor, and DeLauro introduce bill to modernize public health data infrastructure. https://underwood.house.gov/media/press-releases/underwood-bera-castor-and-delauro-introduce-bill-modernize-public-health-data

Peters, E. (2008). Numeracy and the perception and communication of risk. Annals of the New York Academy of Sciences, 1128(1), 1–7. https://doi.org/10.1196/annals.1399.001

Piltch-Loeb, R., Jeong, K. Y., Lin, K. W., Kraemer, J., & Stoto, M. A. (2021). Interpreting COVID-19 test results in clinical settings: It depends! Journal of the American Board of Family Medicine: JABFM, 34(Suppl), S233–S243. https://doi.org/10.3122/jabfm.2021.S1.200413

Powell, A. (2021, December 8). Vaccination surveys fell victim to "big data paradox," Harvard researchers say. Harvard Gazette. https://news.harvard.edu/gazette/story/2021/12/vaccination-surveys-fell-victim-to-big-data-paradox-harvard-researchers-say/

Rader, B. (2022). Use of at-home COVID-19 tests—United States, August 23, 2021–March 12, 2022. MMWR Morbidity and Mortality Weekly Report, 71(13), 489–494. https://doi.org/10.15585/mmwr.mm7113e1

Reitsma, M. B., Claypool, A. L., Vargo, J., Shete, P. B., McCorvie, R., Wheeler, W. H., Rocha, D. A., Myers, J. F., Murray, E. L., Bregman, B., Dominguez, D. M., Nguyen, A. D., Porse, C., Fritz, C. L., Jain, S., Watt, J. P., Salomon, J. A., & Goldhaber-Fiebert, J. D. (2021). Racial/ethnic disparities in COVID-19 exposure risk, testing, and cases at the subcounty level in California. Health Affairs, 40(6), 870–878. https://doi.org/10.1377/hlthaff.2021.00098

Rosenstock, I. M., Strecher, V. J., & Becker, M. H. (1988). Social learning theory and the Health Belief Model. Health Education Quarterly, 15(2), 175–183. https://doi.org/10.1177/109019818801500203

Rossen, L. M., Ahmad, F. B., Anderson, R. N., Branum, A. M., Du, C., Krumholz, H. M., Li, S.-X., Lin, Z., Marshall, A., Sutton, P. D., & Faust, J. S. (2021). Disparities in excess mortality associated with COVID-19—United States, 2020. MMWR Morbidity and Mortality Weekly Report, 70(33), 1114–1119. https://doi.org/10.15585/mmwr.mm7033a2

Smith-Schoenwalder, C. (2022, May 20). Latest COVID-19 surge in U.S. is drastically undercounted. U.S. News & World Report. https://www.usnews.com/news/health-news/articles/2022-05-20/latest-covid-19-surge-in-u-s-is-drastically-undercounted

Stein, R. (2020, December 22). Is your state doing enough coronavirus testing? NPR. https://www.npr.org/sections/health-shots/2020/12/22/948085513/vaccines-are-coming-but-the-u-s-still-needs-more-testing-to-stop-the-surge

Stokes, A. C., Lundberg, D. J., Elo, I. T., Hempstead, K., Bor, J., & Preston, S. H. (2021). COVID-19 and excess mortality in the United States: A county-level analysis. PLOS Medicine, 18(5), Article e1003571. https://doi.org/10.1371/journal.pmed.1003571

Stoto, M. A. (2014). Population health measurement: Applying performance measurement concepts in population health settings. EGEMS, 2(4), Article 1132. https://doi.org/10.13063/2327-9214.1132

Stoto, M. A., Rothwell, C., Lichtveld, M., & Wynia, M. K. (2021). A national framework to improve mortality, morbidity, and disparities data for COVID-19 and other large-scale disasters. American Journal of Public Health, 111(S2), S93–S100. https://doi.org/10.2105/AJPH.2021.306334

Stoto, M. A., Schlageter, S., & Kraemer, J. D. (2022). COVID-19 mortality in the United States: It's been two Americas from the start. PLOS ONE, 17(4), Article e0265053. https://doi.org/10.1371/journal.pone.0265053

Stoto, M. A., & Wynia, M. K. (2020, June 25). Assessing morbidity and mortality associated with the COVID-19 pandemic. Health Affairs Forefront. https://www.healthaffairs.org/do/10.1377/forefront.20200622.970112/full/

The COVID Tracking Project. (n.d.). The Atlantic. https://www.theatlantic.com/author/covid-tracking-project/

The pandemic’s true death toll. (n.d.). The Economist. Retrieved April 19, 2022, from https://www.economist.com/graphic-detail/coronavirus-excess-deaths-estimates

Thompson, D. (2022, January 22). Why more Americans are saying they’re “vaxxed and done.” The Atlantic. https://www.theatlantic.com/ideas/archive/2022/01/covid-omicron-vaccination-rashomon/621199/

Tompkins, L., & del Rio, G. M. N. (2021, March 1). Why has coronavirus testing slumped in the U.S.? It’s complicated. The New York Times. https://www.nytimes.com/live/2021/03/01/world/covid-19-coronavirus

United States Census Bureau. (n.d.). Household Pulse Survey: Measuring social and economic impacts during the coronavirus pandemic. U.S. Census Bureau. Retrieved April 19, 2022, from https://www.census.gov/householdpulse

Wakabayashi, D., & Fu, C. (2022, December 14). How will China fare with Covid? "Meaningless" data clouds the picture. The New York Times. https://www.nytimes.com/2022/12/14/business/china-economy-covid.html

Walke, H. T., Honein, M. A., & Redfield, R. R. (2020). Preventing and responding to COVID-19 on college campuses. JAMA, 324(17), 1727–1728. https://doi.org/10.1001/jama.2020.20027

Walter, N., Ball-Rokeach, S. J., Xu, Y., & Broad, G. M. (2018). Communication ecologies: Analyzing adoption of false beliefs in an information-rich environment. Science Communication, 40(5), 650–668. https://doi.org/10.1177/1075547018793427

Ward, H., Cooke, G., Atchison, C., Whitaker, M., Elliott, J., Moshe, M., Brown, J. C., Flower, B., Daunt, A., Ainslie, K., Ashby, D., Donnelly, C., Riley, S., Darzi, A., Barclay, W., Elliott, P., & the REACT Study Team. (2020). Declining prevalence of antibody positivity to SARS-CoV-2: A community study of 365,000 adults. medRxiv. https://doi.org/10.1101/2020.10.26.20219725

White House. (2021, January 21). National strategy for the COVID-19 response and pandemic preparedness. https://www.whitehouse.gov/wp-content/uploads/2021/01/National-Strategy-for-the-COVID-19-Response-and-Pandemic-Preparedness.pdf

Wieler, L. H., Rexroth, U., & Gottschalk, R. (2021, March 20). Emerging COVID-19 success story: Germany’s push to maintain progress. Our World in Data. Retrieved July 15, 2022, from https://ourworldindata.org/covid-exemplar-germany

Wu, K. J. (2020, July 6). A new generation of fast coronavirus tests is coming. The New York Times. https://www.nytimes.com/2020/07/06/health/fast-coronavirus-tests.html


©2023 Michael A. Stoto, John D. Kraemer, and Rachael Piltch-Loeb. This article is licensed under a Creative Commons Attribution (CC BY 4.0) International license, except where otherwise indicated with respect to particular material included in the article.
