
Swine influenza and vaccines: an alternative approach for decision making about pandemic prevention

Marcello Basili, Silvia Ferrini, Emanuele Montomoli
DOI: http://dx.doi.org/10.1093/eurpub/ckt004. Pages 669–673. First published online: 26 March 2013

Abstract

Background: During the global pandemic of A/H1N1/California/07/2009 (A/H1N1/Cal) influenza, many governments signed contracts with vaccine producers for a universal influenza immunization program and bought hundreds of millions of vaccine doses. We argue that, as Health Ministers assumed the occurrence of the worst possible scenario (generalized pandemic influenza) and followed the strong version of the Precautionary Principle, they undervalued the possibility of a mild or weak pandemic wave.

Methodology: An alternative decision rule, based on the non-extensive entropy principle, is introduced, and a different characterization of the Precautionary Principle is applied. This approach values extreme negative results (catastrophic events) in a different way and predicts more plausible and mild events. It introduces less pessimistic forecasts in the case of uncertain influenza pandemic outbreaks. A simplified application is presented using seasonal morbidity and severity data on influenza-like illness among Italian children for the period 2003–10.

Principal Findings: Established literature results predict an average attack rate of not less than 15% for the next pandemic influenza [Meltzer M, Cox N, Fukuda K. The economic impact of pandemic influenza in the United States: implications for setting priorities for interventions. Emerg Infect Dis 1999;5:659–71; Meltzer M, Cox N, Fukuda K. Modeling the Economic Impact of Pandemic Influenza in the United States: Implications for Setting Priorities for Intervention. Background paper. Atlanta, GA: CDC, 1999. Available at: http://www.cdc.gov/ncidod/eid/vol5no5/melt_back.htm (7 January 2011, date last accessed)]. The strong version of the Precautionary Principle would suggest using this prediction for vaccination campaigns. On the contrary, the non-extensive maximum entropy principle predicts a lower attack rate, which induces a 20% saving in public funding for vaccine doses.

Conclusions: The need for an effective influenza pandemic prevention program, coupled with an efficient use of public funding, calls for a rethinking of the Precautionary Principle. The non-extensive maximum entropy principle, which incorporates the vague and incomplete information available to decision makers, produces a more coherent forecast of a possible influenza pandemic and a more conservative use of public funds.

Introduction

On 14 June 2009, the Director-General of the World Health Organization (WHO) declared a global pandemic of A/H1N1/Cal influenza and suggested the application of the WHO Interim Program (2007) to mitigate the pandemic emergency. The WHO suggested combining vaccination and an extensive campaign of antiviral prophylaxis and treatment. The justification for this approach may have been the combination of the constantly evolving nature of influenza viruses and the possible shortage of vaccine supply due to constrained production capacity.

There was uncertainty about the nature of influenza pandemics and their economic impacts, but all studies (global, continental, regional) on the possible human and economic costs of an influenza pandemic agreed that the global consequences would be catastrophic: innumerable deaths and economic losses of about a trillion US dollars (the World Bank suggested 4.8% of global gross domestic product). Many governments signed pre-pandemic contracts with vaccine producers for a universal influenza immunization program, and hundreds of millions of vaccine doses were bought outright or through option contracts (e.g. in spring 2009, the US government signed contracts for 251 million doses, and the UK and The Netherlands bought vaccines for 30% of their populations).1–5

The containment strategy ended in February 2010, with roughly 18,500 laboratory-confirmed deaths from pandemic influenza A/H1N1/Cal reported across 213 countries. The adopted containment strategy may appear to be a success. However, millions of doses of antivirals with short shelf lives and a large number of pandemic vaccine doses remained on the shelves, as millions of people refused vaccination (e.g. only 60 million Americans had been vaccinated). These undesired outcomes raised critical questions about the role of the WHO in declaring the pandemic alert and about the role of the pharmaceutical industry in managing such emergencies. Cohen and Carter6,7 published the article ‘WHO and the pandemic flu conspiracy’, in which they supported the conspiracy theory charging the WHO with covering up ‘troubling questions about how the WHO managed conflicts of interest among the scientists who advised its pandemic planning, and about the transparency of the science underlying its advice to governments’. Moreover, Cohen and Carter reported doubts about the manner in which the pandemic risk was estimated and communicated. Wilson8 offered a different view and interpreted the difference between the predicted and observed effects of the pandemic A/H1N1/Cal as a problem of ‘how … deaths are estimated, counted and compared’ (p. 7). In this article, we contribute to this discussion by considering the management of the A/H1N1/Cal pandemic as a problem of decision-making rules. We acknowledge that the vaccination choice depends on more factors than those described here (e.g. political decisions and budget constraints), but the core of the article is only about decision principles in pandemic prevention. In the puzzling and ambiguous scenario of the pandemic influenza, the WHO and Health Ministers decided to contain the A/H1N1/Cal influenza by adopting the strong version of the Precautionary Principle (PP), which dictates ‘better safe than sorry’. This decision corresponds to the application of Wald’s ‘maximin criterion’,9 which evaluates acts according to the worst possible scenario and almost always induces the most conservative choice. We argue that the A/H1N1/Cal pandemic could have been managed with a less conservative rule. This article suggests that the application of the fully conservative notion, or strong version, of the PP induces over-reaction. The application of a less conservative PP, based on the notion of the non-extensive entropy principle, is proposed; it yields a reduction in the degree of preventive action required. To demonstrate the application of this principle, a simplified example based on Italian data on influenza-like illness (ILI) among children for the period 2003–10 is presented.

Method: non-extensive entropy maximization

In the global pandemic of influenza10 A/H1N1/Cal, the opinions available to decision makers appeared incomplete and generally uncertain. Nevertheless, because experts’ and scientists’ opinions were expressed as probability measures, densities, mass functions or odds, their probability distributions could have been used to form a ‘consensus distribution’, that is, a combination of all the probability distributions. In general, this consensus distribution can be used to develop a rational decision rule. In standard decision theory, Bayesian pooling methods that combine experts’ opinions and personal judgements exist for eliciting a consensus distribution. These methods face problems (i.e. the arbitrariness of the pooling weights, the dependence between the decision maker’s information and the experts’ information, dependence among experts’ probability distributions or stochastic dependence, and the calibration of experts’ opinions), and crucially they leave no room for ambiguity attitudes. Nevertheless, the ambiguity attitude, that is, the attitude towards the reliability of the available information on the underlying uncertainty, emerges when individuals face vague and incomplete statistical data. Further, ambiguity influences the perception of risky events and induces human beings to elicit probabilities and apply decision rules that violate the standard rational paradigm.

As an alternative to Bayesian pooling methods, this article introduces a decision rule based on ‘non-extensive entropy’, which portrays the concept of generalized entropy as a quantitative criterion for measuring uncertainty in estimations. Decision makers assess extreme negative results (catastrophic events) differently from ordinary results (more plausible events).11,12 Crucially, because extreme events normally lie on the tail of the ordinary probability distribution, the non-extensive entropy principle considers the divergence between the two distributions before forecasting the pandemic events. The ambiguity attitude of the decision maker is represented through non-extensive statistical mechanics, a non-additive generalization of quantum information theory based on the non-extensive entropy, and the maximum entropy solution is defined.

Assume $S = \{s_1, \ldots, s_n\}$ is the set of states of the world, and let $p_i(s)$ denote the probability that the $i$-th expert judgment or piece of available information assigns to the state of the world $s \in S$. Then, for a generic probability distribution $P = (p(s))_{s \in S}$, the non-extensive entropy is defined as follows:

Definition 1.

$S_q(P) = \frac{1}{q-1}\left(1 - \sum_{s \in S} p(s)^q\right)$ is known as the Tsallis entropy, where $q \in \mathbb{R}$, $q \neq 1$, such that if q < 1 or q > 1, super-extensivity or sub-extensivity occurs, respectively. $S_q(P)$ is concave for q > 0 (and convex for q < 0); hence, q-entropy-maximizing distributions, given specific constraints, are uniquely defined for q > 0.

The non-extensive entropy is a parametric entropy and depends on the factor q. The entropic index q represents the degree of dependence among experts’ opinions; in the limit $q \to 1$, $S_q(P)$ reduces to $S_1(P) = -\sum_{s \in S} p(s) \ln p(s)$, the usual concave and extensive Boltzmann–Gibbs–Shannon entropy.13 This form of entropy was first introduced by Tsallis in 1988 as a generalization of the standard entropy, and it has been used in many applications, such as solar wind, high-energy physics and financial markets.
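As a minimal numerical illustration of Definition 1 (the probability vector below is hypothetical, not taken from the paper), the following Python sketch evaluates the Tsallis entropy and shows that as q approaches 1 it recovers the Boltzmann–Gibbs–Shannon value:

```python
# Minimal sketch of Definition 1 (hypothetical probabilities, not the paper's data):
# tsallis_entropy(p, q) returns S_q(P) = (1 - sum_i p_i^q) / (q - 1);
# as q -> 1 it approaches the Boltzmann-Gibbs-Shannon entropy -sum_i p_i ln p_i.
import numpy as np

def tsallis_entropy(p, q):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                                 # zero-probability states contribute nothing
    if np.isclose(q, 1.0):
        return float(-np.sum(p * np.log(p)))     # Shannon limit
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))

p = [0.8, 0.2, 0.0, 0.0, 0.0]                    # e.g. a distribution over five severity levels
for q in (0.5, 0.9, 0.99, 1.0, 1.5):
    print(f"q = {q:>4}:  S_q = {tsallis_entropy(p, q):.4f}")
```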

The ordinary risk distribution (P) and the distribution of the catastrophic event (Q) may be obtained for decision-making purposes. These distributions satisfy the properties in Definition 2 and are used in finding the distribution H as the solution of the following Problem 1.

Definition 2.

Let P and Q be two probability distributions, such that Q is absolutely continuous with respect to P. Then, the Kullback–Leibler or relative entropy of Q and P is $H_{KL}(Q \,\|\, P) = \sum_{s \in S} q(s) \ln \frac{q(s)}{p(s)}$, and it exhibits the divergence between Q and P.

The probability distribution Q represents the excess of randomness over the distribution P, and $H_{KL}$, the Kullback–Leibler distance, measures the maximum feasible divergence between the two probability distributions. It is worth observing that this measure of distance between distributions provides the constraint that has to be satisfied by the distribution H, which is the optimal solution of the following Problem 1.
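As a small illustration of Definition 2, the following Python sketch computes the discrete Kullback–Leibler divergence; the two five-level distributions are illustrative placeholders, not the paper’s data.

```python
# Minimal sketch of Definition 2 (illustrative numbers, not the paper's data):
# H_KL(Q || P) = sum_i q_i ln(q_i / p_i), finite only when Q is absolutely
# continuous with respect to P (p_i = 0 implies q_i = 0).
import numpy as np

def kl_divergence(q, p):
    q, p = np.asarray(q, dtype=float), np.asarray(p, dtype=float)
    mask = q > 0                                  # terms with q_i = 0 contribute nothing
    if np.any(p[mask] == 0):
        return np.inf                             # Q not absolutely continuous w.r.t. P
    return float(np.sum(q[mask] * np.log(q[mask] / p[mask])))

# Ordinary (P) and catastrophic (Q) scenarios over the five severity levels VL..VHi
P = [0.795, 0.195, 0.005, 0.004, 0.001]           # seasonal-like distribution
Q = [0.010, 0.040, 0.250, 0.600, 0.100]           # experts' pandemic-like distribution
print(round(kl_divergence(Q, P), 3))
```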

Problem 1.

$$\max_{H} \; S_q(H) \quad \text{subject to} \quad H_{KL}(H \,\|\, P) \leq \theta, \qquad \sum_{s \in S} H(s) = 1, \quad H(s) \geq 0,$$ where θ is the level of uncertainty fixed by the divergence between Q and P. Crucially, H, the solution of this constrained maximum entropy problem, is ‘the “Rényi” entropy of distribution P with index q minus a linear function of the constraint’.14

Definition 3.

The Rényi entropy is $H_\alpha(P) = \frac{1}{1-\alpha} \ln\left(\sum_{s \in S} p(s)^\alpha\right)$, with $\alpha > 0$ and $\alpha \neq 1$, and it is concave only for $\alpha \leq 1$.
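A correspondingly minimal sketch of Definition 3 (again with placeholder values) is:

```python
# Minimal sketch of Definition 3 (illustrative values only):
# H_alpha(P) = ln(sum_i p_i^alpha) / (1 - alpha), with the Shannon entropy as alpha -> 1.
import numpy as np

def renyi_entropy(p, alpha):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(alpha, 1.0):
        return float(-np.sum(p * np.log(p)))      # Shannon limit
    return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))

print(round(renyi_entropy([0.8, 0.2], alpha=0.5), 4))
```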

Results: an example of the non-extensive entropy in influenza pandemic forecast

The previous framework can be applied once a decision maker faces a set of probability distributions describing the possible attack rate for an infectious disease. In the case of epidemic and pandemic flu, data are generally aggregated by age class, and discrete probability distributions are considered for the expected attack rate. Alternative prevention plans can be assessed by comparing different scenarios under various attack rate distributions. In this application, the experts' influenza pandemic attack rate on the population is drawn from the studies of Meltzer et al.,15,16 and the ordinary seasonal attack rate is obtained from the annual Italian ILI morbidity and severity data.

Let us suppose that, in preparing the pandemic prevention program, the decision maker wants to model all available information and be as neutral as possible about what is unknown. Therefore, calling H the model for the pandemic influenza attack rate, we need to assign a level of severity to the next influenza wave, H(p), to find the least concentrated or most ‘uniform’ decision model, where the parameter p is the level of severity. Observing influenza time-series data, the level of severity may be broadly grouped into five levels, called very low (VL), low (L), medium (M), high (Hi) and very high (VHi). With this information, the first constraint in modelling H is (a) H(VL) + H(L) + H(M) + H(Hi) + H(VHi) = 1, and we can search for a suitable model that obeys this constraint. The constraint is satisfied if model H always predicts that the attack rate is very low, H(VL) = 1, or if the model predicts that VL and L each occur with probability ½, and so forth: infinitely many combinations of events are possible. From previous pandemic events, we know that a very high attack rate can occur, as with the pandemic flu of 1918; in this case we may assume that the total probability is evenly distributed among the events, H(VL) = H(L) = H(M) = H(Hi) = H(VHi) = 1/5, and this model is the ‘most uniform model’, subject to the assumption that all levels of severity can occur. However, further information may be available, for example, that levels VL and L jointly appear 30% of the time. This extra piece of information, together with constraint (a), reduces the number of solutions; however, different models are still consistent with it, and the most uniform model H, which allocates probabilities as evenly as possible given the two constraints, is H(VL) = H(L) = 3/20 and H(M) = H(Hi) = H(VHi) = 7/30 (a worked sketch of this allocation follows below). As further pieces of information are added, the model H becomes more complex, and two problems emerge: what exactly does a ‘uniform’ model mean? And how does one find the most uniform model subject to a set of constraints?
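Before turning to those two questions, the allocation just described can be checked mechanically; in this sketch the level labels and the 30% share are simply the paragraph’s own illustrative numbers.

```python
# Worked check of the 'most uniform model' described above: total probability is 1
# (constraint a) and the levels VL and L jointly account for 30% of the time.
# Spreading each share evenly gives H(VL) = H(L) = 0.30/2 = 3/20 and
# H(M) = H(Hi) = H(VHi) = 0.70/3 = 7/30.
levels = ["VL", "L", "M", "Hi", "VHi"]
low_levels, low_share = ["VL", "L"], 0.30

H = {}
for lev in levels:
    if lev in low_levels:
        H[lev] = low_share / len(low_levels)
    else:
        H[lev] = (1.0 - low_share) / (len(levels) - len(low_levels))

print(H)                # {'VL': 0.15, 'L': 0.15, 'M': 0.2333..., 'Hi': ..., 'VHi': ...}
print(sum(H.values()))  # 1.0, so constraint (a) is satisfied
```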

The maximum entropy method answers both questions, modelling the available information and assuming nothing about what is unknown. Therefore, given a collection of influenza events, such as historical estimates of pandemic influenza attack rates15–19 and the seasonal attack rate, we choose a model that is consistent with all these previously observed events, but otherwise as uniform as possible. This intuitive principle of building the maximum-entropy decision rule is followed in our numerical example, where the distributions P and Q and the uncertainty level θ, as in Problem 1, are known and based on simplified assumptions.

The ordinary attack rates of influenza come from the Italian annual ILI morbidity and severity data, provided by the surveillance system, which collects epidemiological and virological data from national networks.20 Data refer to the period 2003–10 for the following age classes: 0–4, 5–14, 15–64 and >65 years. Data span from the 42nd week of 2003 to the 17th week of 2010 and are based, on average, on >1 million people. For the year 2009, data are also available for Weeks 17–42, defined as the ‘non-ordinary influenza season’. An overview of influenza attack rates, broken down by age class in the ordinary and non-ordinary flu seasons, is reported in table 1.

Table 1

The average Italian ILI rate in the season 2003–10 per 1000 people

ILI season | Children (0–4 years) | Young (5–14 years) | Adult (15–64 years) | Senior (>65 years) | All
2003–04 | 5.33 | 3.77 | 1.74 | 1.07 | 2.04
2004–05 | 8.44 | 7.43 | 3.23 | 2.57 | 3.91
2005–06 | 4.20 | 2.91 | 1.15 | 0.66 | 1.44
2006–07 | 6.91 | 4.62 | 1.73 | 0.93 | 2.23
2007–08 | 7.69 | 5.73 | 2.33 | 1.21 | 2.83
2008–09 | 6.58 | 4.75 | 2.04 | 1.20 | 2.48
Average attack rate in ordinary seasons | 6.52 | 4.87 | 2.04 | 1.27 | 2.49
2009–10 | 8.35 | 9.71 | 2.31 | 0.96 | 3.50
Average attack rate in non-ordinary influenza season | 1.59 | 0.84 | 0.34 | 0.16 | 0.43
Total in swine flu period | 4.97 | 5.30 | 1.33 | 0.57 | 2.00
P-value (H0: ordinary = swine flu, Kruskal–Wallis test) | 0.08 | 0.04 | 0.002 | 0.0001 | 0.01
  • Bold rows represent average attack rates in the ordinary and swine flu seasons; the bold and italic row reports the significance levels of the non-parametric Kruskal–Wallis comparison test.

Among the ordinary seasons, the period 2004–05 presented, on average, the highest attack rate for all age classes, except for the Young group (5–14 years), for which the worst period was 2009–10, with an attack rate of almost 10 per 1000. On average, the swine flu period shows lower attack rates for all age classes except 5–14 years. Similarly, in the ‘non-ordinary influenza season’ the influenza attack rate is lower than in any ordinary season. Contrasting the ordinary and swine flu seasonal attack rates, we find some statistically significant differences, as shown in the last row of table 1. In the swine flu season, the Young group experienced a higher attack rate than in the ordinary seasons (P = 0.04), whereas the Adult and Senior groups show statistically lower attack rates than in the ordinary seasons (P = 0.002 and 0.0001, respectively). No difference is found for the Children group.

This ‘post-pandemic’ estimate suggests very mild effects. However, at the beginning of 2009, the Italian Health Minister could not foresee the impact of swine flu, but had to decide what quantity of vaccines to buy. Following the strong version of the Precautionary Principle (PP) and the experts’ prediction for pandemic influenza, the Health Minister could sign a pre-influenza contract to buy millions of vaccine doses. Alternatively, the maximum entropy rule suggests using the experts’ prediction of the pandemic (Q) and the seasonal influenza attack rate (P) to obtain an alternative attack rate scenario, namely the distribution H.

The influenza attack rate variable is broadly summarized in five levels as follows:

  1. VL (<10/1000 people)

  2. L (11–50/1000 people)

  3. M (51–100/1000 people)

  4. Hi (101–200/1000 people)

  5. VHi (201–350/1000 people)

Therefore, counting the proportion of times that each age class registers an attack rate in one of the five levels in the ordinary influenza season, we get the severity of influenza distributions as in figure 1.

Figure 1

The severity of influenza distribution obtained from the Italian weekly ILI attack rate for age classes (P(x))
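The counting step behind figure 1 can be sketched as follows; the weekly rates below are made-up placeholders standing in for the Italian surveillance series, and the bin edges are the five severity levels defined above.

```python
# Hypothetical weekly ILI attack rates (per 1000 people) for one age class; the real
# input would be the Italian surveillance series for 2003-10.
import numpy as np

weekly_rates = [2.1, 4.5, 8.3, 9.9, 12.4, 6.0, 3.2, 15.8, 7.7, 1.9]

# Severity levels from the text: VL <10, L 11-50, M 51-100, Hi 101-200, VHi 201-350 (per 1000)
bin_edges = [0, 10, 50, 100, 200, 350]
labels = ["VL", "L", "M", "Hi", "VHi"]

counts, _ = np.histogram(weekly_rates, bins=bin_edges)
P = counts / counts.sum()                         # empirical severity distribution P(x)
print(dict(zip(labels, np.round(P, 3))))          # here: 80% VL, 20% L
```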

As expected, for all age classes the seasonal attack rate distribution is concentrated in the first two categories (L and VL), and each of these distributions may be used as a proxy for the distribution P(x) in Problem 1. Problem 1 is formalized focusing on the Children group distribution, which predicts that level VL occurs in 80% of cases and L in 20%; on average, the seasonal attack rate is 10/1000. For the distribution Q(x) of Problem 1, we assume that the experts’ forecast for the next influenza pandemic reflects the findings in Meltzer et al.15,16 and predicts an average attack rate of 15%. Nevertheless, we assume that the experts’ distribution is concentrated between levels M and Hi. The underlying assumption is that the experts’ distribution is mainly concentrated on level Hi, but a few experts are willing to assign a lower level of attack rate (M) (see figure 2 for a graphical description). In this case, the average attack rate available to the Health Minister is 15%, but she observes uncertainty across the experts’ predictions. Applying the strong version of the PP, the government should use the most extreme prediction and buy vaccine doses for 15% of the children population, which, at a cost of €21 per dose, implies spending about €3 million. The decision maker can instead incorporate uncertainty in the decision and weigh the next pandemic influenza differently. The key elements for the maximum entropy decision are as follows:

  • the Children seasonal distribution P(x), as in figure 1,

  • the distribution Q(x) is on the tail of the seasonal or ordinary influenza distribution,

  • and the relative entropy of Q and P is within the range 10–20. This assumption reflects a medium level of uncertainty (θ); in fact, without uncertainty the relative entropy is 0, and with maximum uncertainty it reaches its maximum feasible value.

Figure 2

Smoothed distributions for: Children seasonal influenza attack rates P(x); attack rates expected by the experts Q(x); and the resulting Tsallis distribution H(x) for future influenza attack rates

Solving Problem 1 numerically,21 we obtain the distribution H for the pandemic influenza attack rate. This distribution is largely flat, with heavier tails than the seasonal influenza distribution. On average, the H distribution predicts an attack rate of 12%, which is less pessimistic than the experts’ forecast. Furthermore, by using a Tsallis distribution in decision making for the pandemic influenza, we explicitly take uncertainty into account and acknowledge that the distribution underlying the decision maker’s choice is broad, as in figure 2.
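Reference 21 documents the numerical routine actually used; since its exact formulation is not reproduced in the text, the sketch below only illustrates one plausible reading of Problem 1 under assumptions made here: the Tsallis entropy of the candidate distribution H is maximized subject to a cap θ on its Kullback–Leibler divergence from the seasonal distribution P, using scipy’s SLSQP solver. All numerical values (P, q, θ, band mid-points) are placeholders rather than the paper’s inputs.

```python
# Rough sketch of one reading of Problem 1 (an assumption of this note, not the
# paper's exact formulation): maximize the Tsallis entropy S_q(H) subject to
# sum(H) = 1 and H_KL(H || P_seasonal) <= theta. All numbers are placeholders.
import numpy as np
from scipy.optimize import minimize

labels = ["VL", "L", "M", "Hi", "VHi"]
P_seasonal = np.array([0.795, 0.195, 0.005, 0.004, 0.001])  # assumed seasonal distribution P
q, theta = 0.7, 1.0                                         # assumed entropic index and uncertainty cap

def neg_tsallis(h):
    # negative Tsallis entropy, S_q(H) = (1 - sum h^q) / (q - 1), to be minimized
    return -(1.0 - np.sum(h ** q)) / (q - 1.0)

def kl(h, ref):
    # discrete Kullback-Leibler divergence H_KL(H || ref)
    mask = h > 1e-12
    return np.sum(h[mask] * np.log(h[mask] / ref[mask]))

constraints = [
    {"type": "eq",   "fun": lambda h: np.sum(h) - 1.0},             # probabilities sum to one
    {"type": "ineq", "fun": lambda h: theta - kl(h, P_seasonal)},   # divergence cap
]
bounds = [(1e-9, 1.0)] * len(labels)

res = minimize(neg_tsallis, P_seasonal.copy(), method="SLSQP",
               bounds=bounds, constraints=constraints)
H = res.x / res.x.sum()
print(dict(zip(labels, np.round(H, 3))))

# Assumed band mid-points (per person) used to read off an implied average attack rate
midpoints = np.array([5, 30, 75, 150, 275]) / 1000.0
print("implied average attack rate:", round(float(H @ midpoints), 3))
```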

In conclusion, using this alternative decision rule, the decision maker could rationally decide to buy vaccine doses for 12% of the children population to prevent pandemic influenza. This choice implies a public saving of around €500 000 (17% of expenditure) compared with the strong PP rule. This result reflects only the assumptions used in the numerical example; a more skewed H distribution and a different average pandemic influenza attack rate can be obtained by varying the distributional and uncertainty assumptions (e.g. the experts’ attack rate, the uncertainty factor, etc.).
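The budget comparison is simple arithmetic; the sketch below reproduces it for the stated unit cost of €21 per dose, with a cohort size chosen only so that the strong-PP spend is roughly €3 million as in the text (the resulting saving depends directly on that assumed cohort).

```python
# Back-of-the-envelope comparison of the two decision rules (cohort size is assumed).
children = 950_000          # assumed size of the children cohort covered by the programme
cost_per_dose = 21.0        # euros per dose, as stated in the text

strong_pp_rate = 0.15       # experts' prediction used under the strong Precautionary Principle
max_entropy_rate = 0.12     # attack rate implied by the maximum-entropy (Tsallis) rule

spend_pp = children * strong_pp_rate * cost_per_dose
spend_me = children * max_entropy_rate * cost_per_dose
saving = spend_pp - spend_me

print(f"strong PP rule:       EUR {spend_pp:,.0f}")
print(f"maximum entropy rule: EUR {spend_me:,.0f}")
print(f"saving:               EUR {saving:,.0f} ({saving / spend_pp:.0%} of the strong-PP spend)")
```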

Discussion

The A/H1N1/Cal influenza pandemic shed light on the need for a rational decision rule to assist policy makers and planners with effective health system responses to epidemics. We introduce a decision rule capable of explicitly taking into account the uncertainty about the pandemic infection of A/H1N1/Cal. We considered the selection of the best containment strategy, given uncertainty and incomplete information for the pandemic A/H1N1/Cal, and we concluded that one critical drawback of the containment strategy was the application of a strong version of the PP. In fact, different Health Ministers assumed the occurrence of the worst possible scenario and undervalued the possibility of a mild or weak pandemic wave. This over-pessimistic view resulted in billions of euros/dollars allocated to vaccine stockpiling. We argue that, by applying a different decision rule based on the maximum entropy principle, decision makers may smooth out misevaluation and induce less pessimistic choices.

The pandemic virus of A/H1N1/Cal was first recognized in North America as a reassortant of the influenza A/H1N1 sub-type and then confirmed in Australia and New Zealand in May 2009. This information and the influenza distribution observed in one hemisphere could have been used as a proxy in the maximum entropy setting for predicting the attack rate in the other hemisphere. Given the level of information at the beginning of 2009, if the alternative decision rule for the pandemic influenza distribution, based on constrained Tsallis entropy, had been applied, a targeted vaccination program would have emerged as a more sensible choice than a universal program. In this case, billions of euros/dollars would have been saved.

Conflicts of interest: None declared.

Key points

  • The documented effect of A/H1N1/Cal influenza was mild, and in some cases lower than that of seasonal influenza.

  • We argue that the choices made by governments for influenza prevention reflect a strong version of the Precautionary Principle.

  • An alternative version of the Precautionary Principle is presented through the application of a non-extensive entropy principle.

  • An example shows how this alternative decision method explicitly takes uncertainty into account and forecasts a less pessimistic attack rate.
