Editorial

What did the 2009 H1N1 Pandemic Teach us about Influenza Surveillance Systems?

Pages 829-832 | Published online: 27 Aug 2013

Much has been written about the epidemiology of pH1N1, the pandemic virus that emerged in North America in 2009. To assess and improve public health systems' preparedness for future pandemics, my colleagues and I have been looking closely at how well global surveillance systems performed in this pandemic, in order to identify their strengths and weaknesses Citation[1–3].

The picture that has emerged is mixed – in some areas these surveillance systems performed well, and certainly better than the systems that had been in place before the global investments prompted by concerns about bioterrorism, SARS and influenza pandemics. In other areas, however, gaps in surveillance capabilities were identified or clarified. We found, for instance, that investments in notification systems and enhanced laboratory capacities probably led to earlier detection than would have been possible a decade earlier. There were challenges, however, in characterizing the emerging pathogen's epidemiology and in tracking its spread through the population.

Outbreak detection

Detecting an outbreak as quickly as possible enables an earlier and more effective public health response. Our analysis of efforts in Mexico and the USA to detect the outbreak suggests that investments in surveillance systems made a major difference, enabling a quicker response than would have been possible a decade earlier. These investments include laboratory capacity enhancements in the USA, Mexico and Canada, as well as arrangements that enabled collaboration among these three countries. Perhaps more important were developments in global notification systems – also known as event-based surveillance – such as the Global Public Health Intelligence Network (GPHIN), ProMED Mail and HealthMap, which enabled Mexican officials to 'connect the dots' and realize that the outbreaks they were aware of throughout the country were all manifestations of the pH1N1 virus first isolated in two California children. The expectation, established by the 2005 International Health Regulations (IHR), that countries would report a potential 'public health emergency of international concern' (PHEIC) was also important Citation[3].

Our analysis showed that syndromic surveillance played an important role in detecting the pH1N1 outbreak, but a different role from the one commonly used to justify these systems. Syndromic surveillance systems collect and analyze statistical data on health trends – such as symptoms reported by people seeking care in emergency departments or other healthcare settings, or even sales of prescription or over-the-counter influenza medicines or web searches – and are typically justified as a way to detect outbreaks before conventional surveillance systems do, enabling a rapid public health response Citation[4]. Because pH1N1 emerged during the winter, there were too few cases for it to be detected against the background of the normal influenza season. Rather, increases in influenza-like illness led authorities to conduct active surveillance for severe pneumonia and later confirmed that the virus had spread widely throughout Mexico Citation[3].
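
To make the detection logic concrete, the sketch below shows the kind of statistical exceedance test that syndromic surveillance systems apply to daily counts, loosely patterned on the CDC's EARS C2 algorithm. The influenza-like illness counts, window sizes and threshold are illustrative assumptions, not data or parameters from any system discussed here.

```python
# A minimal exceedance detector for daily syndromic counts, loosely
# patterned on the CDC EARS C2 approach: flag any day whose count exceeds
# the mean of a lagged baseline window by several standard deviations.
from statistics import mean, stdev

def c2_alerts(counts, baseline=7, lag=2, z=3.0):
    """Return indices of days flagged as aberrations."""
    alerts = []
    for t in range(baseline + lag, len(counts)):
        window = counts[t - lag - baseline:t - lag]   # lagged baseline window
        mu, sd = mean(window), stdev(window)
        if counts[t] > mu + z * max(sd, 0.5):         # floor guards sd == 0
            alerts.append(t)
    return alerts

# Hypothetical daily influenza-like illness counts with a jump at the end.
ili = [12, 15, 11, 14, 13, 12, 16, 14, 13, 15, 29, 35, 41]
print(c2_alerts(ili))  # -> [10, 11, 12]: the final three days are flagged
```

As the 2009 experience showed, a detector of this kind flags a deviation from the seasonal baseline; it cannot, by itself, distinguish a novel virus from an unusual seasonal wave.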

Despite this generally good performance, there was a period of 1–2 weeks in April 2009 when Mexican authorities were aware of the outbreak but did not understand the full implications of the evidence. Going forward, it will be important to recognize that even in the best of circumstances, some period of uncertainty of this sort is to be expected and planned for. The IHR, for instance, require countries to report a PHEIC "within 24 hours of assessment of the public health information by the national authority" Citation[5]. In the likely event that future outbreaks resemble 2009 H1N1, the point at which it becomes clear that an event is a PHEIC will not be distinct. A more nuanced process that recognizes the inherent uncertainty and incorporates efforts to obtain more information about the pathogen and its epidemiologic characteristics would be more appropriate. The risk-management approach in WHO's new interim pandemic influenza guidance is an example Citation[101].

Epidemic characterization

Once a new pathogen is identified, it must be characterized in order to develop testing kits and surveillance procedures, to create and manufacture a vaccine and set policies for its use, and to guide interventions such as infection control policies, social distancing and quarantine. In 2009, the enhanced laboratory capacities discussed earlier led to the rapid characterization of the virus itself and the development of a vaccine and PCR testing kits. On the other hand, pH1N1's epidemiological characteristics were harder to identify. As is often the case, underascertainment of infected individuals with less-severe cases led to an initial overestimate of the case–fatality rate, and thus to a mischaracterization of the 'severity' of the virus Citation[6–8]. This was compounded by ambiguity about whether 'severity' referred to virulence or to the ability to spread globally, the latter being the basis for WHO's pandemic phase classification in force at that time Citation[101].
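
The arithmetic behind this bias is worth making explicit. The following sketch, using purely hypothetical numbers, shows how counting only the cases that come to medical attention inflates the apparent case–fatality rate:

```python
# Hypothetical illustration of case-fatality rate (CFR) inflation caused by
# underascertainment of mild cases. All numbers are invented for illustration.
true_infections = 100_000    # all infections in the population
deaths = 50                  # deaths among those infections
severe_cases = 2_000         # severe cases, nearly all detected
mild_ascertainment = 0.05    # only 5% of mild cases come to attention early on

mild_cases = true_infections - severe_cases
reported_cases = severe_cases + mild_ascertainment * mild_cases

true_cfr = deaths / true_infections      # 0.05%
apparent_cfr = deaths / reported_cases   # ~0.72%, roughly 14x too high
print(f"true CFR: {true_cfr:.2%}; apparent CFR: {apparent_cfr:.2%}")
```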

One of the most commonly held perceptions about pH1N1 is that children and young adults were at especially 'high risk'. While children were more likely to become infected with pH1N1 than with seasonal influenza, the case–fatality and hospitalization rates among those infected were lower than in the elderly. Our research suggests that early concerns about children being 'at risk' led to surveillance biases that inflated the reported numbers of cases and deaths in children. In addition, evidence of pre-existing immunity in older individuals led to surveillance biases that deflated the reported numbers of cases and deaths in the elderly. While there is no 'gold standard' evidence about the actual numbers of cases and deaths, the evidence suggests that these biases are due in part to surveillance systems that depend on patients' decisions to seek care and providers' actions to report the cases they see Citation[2].
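
A small numerical sketch, again with invented numbers, shows how differential ascertainment of this kind can exaggerate the apparent age pattern of risk:

```python
# Hypothetical illustration: care-seeking and testing rates that differ by
# age group distort the age distribution of reported cases.
true_infections = {"children": 30_000, "elderly": 10_000}   # 3:1 true ratio
ascertainment   = {"children": 0.20,   "elderly": 0.05}     # assumed rates

for group, n in true_infections.items():
    print(f"{group}: true={n:,}, reported={n * ascertainment[group]:,.0f}")
# Reported counts show a 12:1 ratio (6,000 vs 500), four times the true 3:1
# ratio, even though the underlying infections are identical in both runs.
```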

This problem is complicated by a failure to distinguish between the risk of infection and the risk of a severe case requiring hospitalization or leading to death. While epidemiologists understood this distinction and were aware of the age biases in the surveillance data, it may have been lost on policymakers and the public. Mischaracterization of who was 'at risk' could have led to vaccine priorities focused on children rather than the elderly, and to suboptimal school-closing policies.

Tracking & situational awareness

Once an outbreak has been identified and the pathogen characterized, surveillance systems are needed to track its spread through the population, including geographically. This 'situational awareness' is important for monitoring the effectiveness of disease control policies and interventions and for planning health services.

Despite the many surveillance systems that had been set up in recent years, more needed to be developed during the pandemic to obtain additional data on priority populations such as children (e.g., hospitalization and school-absenteeism surveillance) and to inform local decision-making. Many of the new systems that were developed before and during the pandemic, including Google Flu Trends, social media and other 'big data' approaches, focus on prediagnostic data. Indeed, the 2009 H1N1 experience provides a test of the hypothesis that these systems might be better at providing situational awareness than at outbreak detection Citation[9].

As with risk-group characterization, however, the evidence suggests that all of these systems depend on patient and provider decisions, which in turn depend on what patients and providers hear in the media – and the information environment changes over time. The problem is further complicated by changes in surveillance policies, such as revised recommendations for diagnostic testing after state and local laboratories became overwhelmed with samples, as they did in 2009. One consequence is that the regional differences in the timing of the fall wave of the pandemic in the USA were probably exaggerated Citation[2]. In her unpublished Georgetown doctoral dissertation, Zhang has found similar patterns in Hong Kong and, using a hierarchical Bayesian model, has identified some data series that are more vulnerable to 'information environment' biases than others [Zhang Y, Unpublished Data]. Google Flu Trends and social media suffer from these same problems Citation[10].

Implications for policy & practice

More fundamentally, the 2009 H1N1 experience reminds us that uncertainty is inherent in infectious disease outbreaks, especially those involving emerging pathogens, and so should be expected and planned for Citation[6]. For instance, the first evidence of an outbreak should trigger efforts to learn more, rather than disproportionate control measures based on worst-case scenarios Citation[11,102]. The requirement for better risk clarification in WHO's new interim pandemic influenza guidance is another example Citation[101].

The 2009 H1N1 experience also reminds us that we are overly dependent on case-based surveillance methods that are subject to information environment-related reporting biases, as well as to artifactual differences due to changes in surveillance and other policies. Population-based surveillance methods are needed to address this deficiency. Lipsitch and colleagues, for instance, have suggested identifying well-defined population cohorts at high risk for pH1N1 infection and ensuring that everyone in each cohort is tested, to avoid biases due to physician decisions about who should be tested Citation[12]. Ultimately, a population-based seroprevalence survey, such as those deployed in the UK Citation[13] and Hong Kong Citation[14], would provide the least biased data on who is at risk for infection, as well as on temporal and geographic patterns.
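
To illustrate why serology is attractive, here is a minimal sketch of how an infection attack rate might be estimated from a serosurvey, with an adjustment for imperfect test accuracy using the standard Rogan-Gladen estimator; the survey counts, sensitivity and specificity below are hypothetical, not data from the UK or Hong Kong studies:

```python
# Minimal sketch: estimate an infection attack rate from a serosurvey,
# adjusting for test sensitivity/specificity (Rogan-Gladen estimator).
# All inputs are hypothetical illustrations.
from math import sqrt

def rogan_gladen(positives, n, sensitivity, specificity):
    apparent = positives / n
    adjusted = (apparent + specificity - 1) / (sensitivity + specificity - 1)
    return min(max(adjusted, 0.0), 1.0)   # clamp to [0, 1]

def wald_ci(p, n, z=1.96):
    half = z * sqrt(p * (1 - p) / n)      # simple large-sample interval
    return max(p - half, 0.0), min(p + half, 1.0)

pos, n = 210, 1_500                       # hypothetical survey results
apparent = pos / n                        # 14.0% seropositive
lo, hi = wald_ci(apparent, n)             # ~12.2-15.8%
adjusted = rogan_gladen(pos, n, sensitivity=0.90, specificity=0.98)
print(f"apparent: {apparent:.1%} ({lo:.1%}-{hi:.1%}); adjusted: {adjusted:.1%}")
```

Because every member of the sampled cohort contributes a specimen regardless of whether they sought care, estimates of this kind are largely immune to the care-seeking and reporting biases described above.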

Finally, in assessing how well our public health systems are prepared to deal with future outbreaks, we must learn to measure capabilities rather than capacities. In the USA, for instance, the CDC's and the Trust for America's Health's assessments of public health preparedness focus on ensuring that state and local public health laboratories can respond rapidly, identify or rule out particular known biological agents, and have the workforce and surge capacity to process large numbers of samples during an emergency Citation[103,104]. Globally, the focus has been on ensuring that countries have the surveillance capacities required by their IHR obligations Citation[15]. What we need to know instead is how well public health surveillance systems work in actual incidents to detect, characterize and track disease outbreaks, and whether health officials can 'connect the dots' and understand their implications.

Financial & competing interests disclosure

This article was developed with funding support awarded to the Harvard School of Public Health under cooperative agreements with the US CDC, grant number 5P01TP000307-01. The author has no other relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript apart from those disclosed.

No writing assistance was utilized in the production of this manuscript.


References

  • Zhang Y, May L, Stoto MA. Evaluating syndromic surveillance systems at institutions of higher education (IHEs) during the 2009 H1N1 influenza pandemic. BMC Public Health 11, 591 (2011).
  • Stoto MA. The effectiveness of U.S. public health surveillance systems for situational awareness during the 2009 H1N1 pandemic: a retrospective analysis. PLoS ONE 7(8), e40984 (2012).
  • Zhang Y, Lopez-Gatell H, Alpuche-Aranda CM, Stoto MA. Did advances in global surveillance and notification systems make a difference in the 2009 H1N1 pandemic? A retrospective analysis. PLoS ONE 8(4), e59893 (2013).
  • Stoto MA. Syndromic surveillance in public health practice. In: Global Infectious Disease Surveillance and Detection: Assessing the Challenges – Finding Solutions, Workshop Summary. Lemon SM, Hamburg MA, Sparling PF, Choffnes ER, Mack A (Eds). National Academy Press, Washington, DC, USA, 63–72 (2007).
  • Katz R. Use of revised International Health Regulations during influenza A(H1N1) epidemic, 2009. Emerg. Infect. Dis. 15(8), 1165–1170 (2009).
  • Lipsitch M, Riley S, Cauchemez S et al. Managing and reducing uncertainty in an emerging influenza pandemic. N. Engl. J. Med. 361(2), 112–115 (2009).
  • Garske T, Legrand J, Donnelly CA. Assessing the severity of the novel influenza A/H1N1 pandemic. BMJ 339, b2840 (2009).
  • Wong JY, Kelly H, Ip DK et al. Case fatality risk of influenza A (H1N1pdm09): a systematic review. Epidemiology (2013) (In Press).
  • Lipsitch M, Finelli L, Heffernan RT et al. Improving the evidence base for decision making during a pandemic: the example of 2009 influenza A/H1N1. Biosecur. Bioterror. 9(2), 89–115 (2011).
  • Butler D. When Google got flu wrong. Nature 494(7436), 155–156 (2013).
  • Fineberg HV, Wilson ME. Epidemic science in real time. Science 324(5930), 987 (2009).
  • Lipsitch M, Hayden FG, Cowling BJ et al. How to maintain surveillance for novel influenza A H1N1 when there are too many cases to count. Lancet 374(9696), 1209–1211 (2009).
  • Miller E, Hoschler K, Hardelid P et al. Incidence of 2009 pandemic influenza A H1N1 infection in England: a cross-sectional serological study. Lancet 375(9720), 1100–1108 (2010).
  • Cowling BJ, Chan KH, Fang VJ et al. Comparative epidemiology of pandemic and seasonal influenza A in households. N. Engl. J. Med. 362(23), 2175–2184 (2010).
  • Katz R, Haté V, Kornblet S, Fischer JE. Costing framework for International Health Regulations (2005). Emerg. Infect. Dis. 18(7), 1121–1127 (2012).

