
Modelling the risk–benefit impact of H1N1 influenza vaccines

Lawrence D. Phillips, Barbara Fasolo, Nikolaos Zafiropoulous, Hans-Georg Eichler, Falk Ehmann, Veronika Jekerle, Piotr Kramarz, Angus Nicoll, Thomas Lönngren
DOI: http://dx.doi.org/10.1093/eurpub/ckt006. Pages 674–678. First published online: 12 February 2013

Abstract

Background: Shortly after the H1N1 influenza virus reached pandemic status in June 2009, the benefit–risk project team at the European Medicines Agency recognized that this presented a research opportunity for testing the usefulness of a decision analysis model in deliberations about approving vaccines soon, based on limited data, or waiting for more data. Undertaken purely as a research exercise, the model was not connected to the ongoing assessment by the European Medicines Agency, which approved the H1N1 vaccines on 25 September 2009.

Methods: A decision tree model, constructed initially on 1 September 2009 and slightly revised subsequently as new data were obtained, represented an end-of-September or end-of-October approval of vaccines. The model showed combinations of uncertain events, the severity of the disease and the vaccines’ efficacy and safety, leading to estimates of numbers of deaths and serious disabilities. The group based their probability assessments on available information and background knowledge about vaccines and similar pandemics in the past.

Results: Weighting the numbers by their joint probabilities for all paths through the decision tree gave a weighted average for a September decision of 216 500 deaths and serious disabilities, and for a decision delayed to October of 291 547, showing that an early decision was preferable.

Conclusions: The process of constructing the model facilitated communication among the group’s members and led to new insights for several participants, while its robustness built confidence in the decision. These findings suggest that models might be helpful to regulators as they form their preferences during the process of deliberation and debate and, more generally, for public health issues when decision makers face considerable uncertainty.

Introduction

In January 2009, the European Medicines Agency (EMA) established a benefit–risk project team to study regulatory decision making and to examine the potential for decision-analytic and other models1 to clarify difficult regulatory decisions. Shortly after the H1N1 influenza virus reached pandemic status in June 2009,2 the team recognized an opportunity for testing the usefulness of a decision theory–based model in deliberations about approving vaccines and drugs. Regulators were facing a dilemma: either approve the H1N1 influenza vaccines early, based on the limited available data and on previous experience with flu vaccines, or wait for more mature observations and data on the H1N1-strain vaccine and risk the lives of people who could otherwise have been vaccinated and protected. An early decision to approve could save lives in a pandemic situation, assuming the vaccines were effective, but could also be detrimental to public health if safety were poor, adverse side effects occurred and the burden of disease over the coming months turned out to be low. Arguments could be found to support either immediate or delayed action on the part of regulatory authorities, depending on what assumptions were made about the subsequent resolution of uncertainties about the seriousness of the disease and the safety and efficacy of the vaccines.

Applying modelling processes to the influenza vaccine decision purely as a research exercise, with no legal or regulatory implications or formal connection to the ongoing work in the EMA on the H1N1 vaccines, would provide an initial test of the team’s working hypothesis that a more explicit modelling-based approach to regulatory decision making could improve understanding, transparency and communication.

Methods

Approach

Three meetings took place in September 2009 to construct a model and explore its results. This report describes the status of the model, and the information and uncertainty it represented, by the end of September. As the purpose of the exercise was to assess the potential usefulness of modelling to regulators in the state of uncertainty they faced at the time of making a decision, there was no need to revise these figures after the end of September.

The first meeting, on 1 September, was organized as a decision conference3: key people from across the EMA worked together over most of the day. The meeting began with a broad exploration of the issues surrounding the decision to be made and a discussion of the factors that made it difficult. Agreement was quickly reached that the next decision was between two alternatives: (i) early approval (end of September) with minimum data on safety and efficacy; (ii) delayed approval (end of October) to obtain more data.

During this meeting, participants recognized, in particular, the high uncertainty about many subsequent events: the future progression of the disease, the efficacy and safety of any vaccine, the predicted death rate among the 500 million people in Europe from the H1N1 pandemic and the effects of any vaccine on critical sub-populations. As the discussion progressed, it soon became apparent that the problem was dominated by a few main uncertainties, and that a decision tree would be most suitable for modelling the choices and their consequences. Building the model on the spot, with the constructed model displayed on a large screen, provided opportunities for participants to understand the complex relationships between the decisions and the uncertain events, which influenced their assessments of probabilities for the outcomes of those events. Participants who had been working on the problem for many weeks brought information about past pandemics, experience from similar vaccines and current experience. Assumptions were discussed and made explicit, which enabled the consequences of the decisions to be calculated. Probabilities of the possible outcomes of the three uncertain events were assessed by a sub-group of three experts knowledgeable about the current pandemic and past ones; application of expert elicitation methods4 facilitated the formulation of realistic probabilities.

This exercise was intended as a test of our working hypothesis about the potential usefulness of modelling to aid decision making, not as a full model covering all possible scenarios, so many working assumptions were agreed by the group in formulating the model and providing input information. In combination, these assumptions extend the range of scenarios for possible outcomes of the pandemic. The assumptions, as agreed at the time of the first meeting, were as follows:

  1. A second wave is expected early in October. A second wave has been seen in past pandemics (although not in all countries), so this was taken as a worst-case scenario.5

  2. Shortly after approval, all of Europe’s 500 million people are vaccinated. In reality, fewer people would be vaccinated, but while the actual numbers of deaths and serious disabilities (DSDs) depend on the total number of people vaccinated, the relative numbers, and therefore the relative preference for an early or a late decision, do not.

  3. The attack rate of the new pandemic is 30%, independent of seriousness.6

  4. If the decision is to delay 1 month, 37 500 Europeans will die of influenza (30% attack rate, 0.1% death rate).

  5. Data obtained from delaying approval will support a more efficient use of the vaccine (e.g. dosing), potentially providing a better safety and efficacy profile.

  6. Safety of the vaccine is independent of efficacy (the antigenic match of the vaccine to the H1N1 virus, which determines efficacy, is independent of DSDs).

  7. The vaccine will be less efficacious if the disease turns out to be serious.

  8. The death rate is 0.1% for a moderate disease and 1% for a serious disease. These two rates define ‘moderate’ and ‘serious’, and they bracket the world-wide rate of 0.6% from validated official sources as of 16 July 2009.7 Death rates in between can be simulated in the model by changing the probabilities assigned to those two rates (a brief numerical sketch follows this list).

  9. High-risk and low-risk populations and differences between countries are not considered.

  10. All types of vaccine have the same profile.

  11. A serious disability or death caused by the vaccine is considered as important as a death from the pandemic.
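
Assumption 8 implies that death rates between the two defining values are represented not by adding branches but by adjusting the probability weight placed on the ‘moderate’ branch. The following minimal Python sketch (illustrative only; the workshop used an Excel model, not this code) shows how a target overall death rate such as the 0.6% world-wide figure maps onto such a probability.

```python
# Illustrative only: a death rate between 0.1% (moderate) and 1% (serious) is
# represented in the model by the probability assigned to the 'moderate' branch.
MODERATE_RATE = 0.001  # 0.1% death rate defining 'moderate' disease (assumption 8)
SERIOUS_RATE = 0.01    # 1% death rate defining 'serious' disease (assumption 8)

def p_moderate_for_rate(target_rate: float) -> float:
    """Probability of the 'moderate' branch whose expected death rate equals target_rate."""
    return (SERIOUS_RATE - target_rate) / (SERIOUS_RATE - MODERATE_RATE)

# The 0.6% world-wide rate cited in assumption 8 corresponds to a probability of
# roughly 0.44 for 'moderate' (and 0.56 for 'serious').
print(round(p_moderate_for_rate(0.006), 2))  # -> 0.44
```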

On completion of the model, results were examined and subjected to many sensitivity analyses to discover the robustness of the decision to differences of opinion and imprecision in the data. During the meeting, it became clear that the EMA needed additional information about the epidemiology of the H1N1 disease, so the group agreed to consult experts from the European Centre for Disease Prevention and Control (ECDC) after the meeting.

At a second 2-hour meeting, a sub-set of participants from the first meeting met to discuss how the model could be further refined. A third 1-hour meeting engaged the ECDC specialists and helped to update the data.

Finally, we conducted informal non-directive interviews with the three workshop participants who provided the bulk of information for the model to gain their views about the workshop.

The model

The two decision alternatives and the key uncertainties enabled the group to consider 24 different scenarios, represented as a decision tree (created using TreePlan, an Excel add-in), the elements of which are shown in figure 1.

Figure 1

(a) The initial part of the decision tree: a decision node (square) with two decision options, followed by Disease seriousness nodes (circles) with two possible outcomes and their probabilities, which are conditional on the decision. (b) The subsequent events, Efficacy and Safety, and their outcomes. The Safety node attaches at the end of each branch of the Efficacy node, which in turn attaches at the end of each Disease seriousness outcome branch. The triangles at the end of each path receive the number of DSDs appropriate for the outcomes of the uncertain events on that path

The square decision node (figure 1a) shows two decision branches, each of which is followed by an event node representing the disease severity, shown as moderate or severe on the node’s branches. The probability of 0.20 reflects the group’s judgement at the first meeting that the Spanish flu pandemic of 1918, one of the five previous serious flu pandemics, could be considered catastrophic, so that event was taken as a model for the possibility of the current virus mutating to a deadlier strain. The increase of that probability to 0.25 for the later decision reflects the judgement that an early mass vaccination would restrict the potential for the virus to spread and mutate.8 Attached to each of the four paths through this early portion of the tree is the event node for the efficacy of the vaccine, defined as the percentage of people who are protected by the vaccine (without regard for pandemic risk groups such as pregnant women, or for different attack rates, as sufficient data were not available then), with branches for >75%, 50% and <25% (figure 1b, left), assuming rapid and wide-spread vaccination after approval. Participants’ uncertainties about those outcomes depended on the seriousness of the disease and on the decision. Thus, their judged probabilities (table 1), which were based on extrapolations from seasonal flu vaccines for the protected population9 and on some pre-publication data on the immunogenicity of the new vaccines (e.g. reported in the ECDC Daily Update), show greater confidence that efficacy will be high if approval is delayed to October, and if the disease is, on average, moderate rather than severe.

Table 1

Probabilities of vaccine efficacy

Decision option               Disease seriousness   Efficacy >75%   Efficacy 50%   Efficacy <25%
Approve by end of September   Moderate              0.30            0.50           0.20
Approve by end of September   Severe                0.25            0.50           0.25
Approve by end of October     Moderate              0.40            0.50           0.10
Approve by end of October     Severe                0.35            0.50           0.15

  • Probabilities shown in the three right-hand columns depend on the decision option and on the seriousness of the disease, moderate or severe.

Each of the three efficacy branches is followed by an event node representing uncertainty about the safety of the vaccine (figure 1b, right), with two branches for the frequency of DSDs from the vaccine: good, at 1 in 100 000, and poor, at 1 in 10 000 (figure 1c). The lower rate was based on concern about the reports of Guillain–Barré syndrome after the 1976 A/New Jersey influenza vaccination,10 and the higher rate expressed the group’s judgement that new vaccines could potentially be less safe than in the past, although subsequent literature suggests the group was too cautious.11 The probabilities of those branches depend only on the decision: with the information available at the first meeting, participants assessed probabilities of 0.90 and 0.10 for the September decision, but, assuming more information by the end of October, 0.95 and 0.05. These high probabilities for the 1-in-100 000 branch reflect the group’s judgement that vaccines are usually safe, and experience with the H5N1 vaccine supports that view.12 The relevance/influence diagram in table 1 provides a readily communicable summary of how knowledge of events at one node can affect uncertainty about events at another node.13

The complete tree, with all event nodes attached, shows 24 paths: 2 decision branches, 2 disease severity branches, 3 vaccine efficacy branches and 2 vaccine safety branches: 2 × 2 × 3 × 2 = 24 scenarios of decisions and subsequent events. A figure appears at each of those 24 triangular end points, representing the number of DSDs, defined in the workshop as the sum of deaths from influenza plus death and serious disabilities from the vaccine. These figures range from the best case (approve by end of September; disease, moderate; efficacy, >75%; safety, good) with 42 500 DSDs, to the worst case (delay to end of October; disease, serious; efficacy, <25%; safety, poor) with 1 268 750 DSDs, a ratio of worst to best being 26.7 to 1.
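
The text does not spell out the exact efficacy value applied on each branch, so the sketch below is an assumption-laden reconstruction rather than the workshop’s own calculation: it evaluates the >75% branch at exactly 75%, which, together with the stated assumptions, reproduces the reported best-case figure of 42 500 DSDs.

```python
# A hedged reconstruction of one end-point DSD figure from the stated assumptions.
# The efficacy value per branch is an assumption (the text does not give it);
# 0.75 on the '>75%' branch reproduces the reported best case of 42,500 DSDs.
POPULATION = 500_000_000                                # assumption 2
ATTACK_RATE = 0.30                                      # assumption 3
DEATH_RATE = {"moderate": 0.001, "serious": 0.01}       # assumption 8
VACCINE_DSD_RATE = {"good": 1 / 100_000, "poor": 1 / 10_000}
DELAY_DEATHS = 37_500                                   # assumption 4, October decision only

def endpoint_dsds(delayed: bool, seriousness: str, efficacy: float, safety: str) -> float:
    """Influenza deaths among the unprotected plus DSDs caused by the vaccine."""
    unprotected = POPULATION * (1.0 - efficacy)
    flu_deaths = unprotected * ATTACK_RATE * DEATH_RATE[seriousness]
    vaccine_dsds = POPULATION * VACCINE_DSD_RATE[safety]
    return flu_deaths + vaccine_dsds + (DELAY_DEATHS if delayed else 0)

print(endpoint_dsds(delayed=False, seriousness="moderate", efficacy=0.75, safety="good"))
# -> 42500.0, matching the best case described in the text
```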

With the tree complete, a laptop computer ‘rolled the tree back’ by computing expected (weighted average) values at each node, beginning at the far right of the tree. For example, following the best-case path to the Safety node shows 42 500 DSDs if safety is good, with probability 0.9, but 87 500 DSDs if safety is poor, with probability 0.1. Multiplying each of the DSD figures by its associated probability and summing gives an expected (weighted average) value of 47 000 DSDs. This process of taking averages of DSDs with probabilities as the weights is repeated from right to left, leaving two expected DSD figures at the branches of the decision node.
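
As a minimal sketch of this roll-back step (again illustrative Python rather than the TreePlan/Excel model used in the workshop), the expected value at a chance node is simply the probability-weighted average of the values on its branches, applied repeatedly from right to left:

```python
# Roll-back of one chance node: expected DSDs are the probability-weighted average
# of the branch values. Repeating this from the rightmost nodes to the decision node
# leaves one expected DSD figure per decision option.
def expected_value(branches):
    """branches: iterable of (probability, DSD value) pairs at one chance node."""
    return sum(p * v for p, v in branches)

# Safety node on the best-case path, with the figures quoted in the text:
safety_node = [(0.90, 42_500), (0.10, 87_500)]
print(expected_value(safety_node))  # -> 47000.0
```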

Results

The expected DSDs for the September and October decision branches were 216 500 and 291 547, respectively, showing that early approval is the preferred option. Of course, those figures are not the numbers of DSDs ‘expected’ in the ordinary sense of that word; they are simply ‘figures of merit’ that indicate the relative attractiveness of the two options: the larger the difference between the numbers, the more strongly one option should be preferred to the other. In this case, the difference between those numbers, 75 047, suggests a fairly serious penalty for waiting.

The group explored the impact on the model of different assumptions by changing one or more probabilities on the decision tree. The Excel program immediately rolled the tree back to give the new expected DSDs. To facilitate this process, a ‘dashboard’ display was later constructed, with the expected DSDs shown as bar graphs and key probabilities displayed next to scroll bars, which enabled any one probability or combination of probabilities to be changed (figure 2).

Figure 2

The dashboard showing the overall results as bar graphs, with scroll bars enabling a user to change any of the originally assessed probabilities (base values) over their range from 0 to 1.0 (here shown as figures between 0 and 100). A change to any one of the >75% efficacy probabilities causes the 50% and <25% probabilities to change in proportion to their original probabilities, thus ensuring that the probabilities on the three branches always sum to 1
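
The renormalization rule described in the figure 2 caption can be expressed in a few lines; the sketch below is illustrative Python, not the spreadsheet implementation, and the example values come from the September/moderate row of table 1.

```python
# When one branch probability is changed on the dashboard, the untouched branches
# are rescaled in proportion to their original values so that all three still sum to 1.
def renormalize(changed_value, others):
    """Rescale the untouched branch probabilities to absorb the change."""
    remaining = 1.0 - changed_value
    total_others = sum(others)
    return [remaining * p / total_others for p in others]

# Example: raising P(efficacy > 75%) from 0.30 to 0.50 (September, moderate disease)
# rescales the original 0.50 and 0.20 to about 0.357 and 0.143.
print([round(p, 3) for p in renormalize(0.50, [0.50, 0.20])])  # -> [0.357, 0.143]
```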

These sensitivity analyses showed that even with zero probabilities on all the upper branches of the safety and efficacy nodes of the tree, the September option was still favoured. Only with a few extreme probabilities for both safety and efficacy, which could not be defended as they defied all logical explanation, was the October option favoured. However, uncertainty about the seriousness of the disease did matter: if the probability of the disease being moderate, given delay to the end of October, were judged to be higher than 0.84, it would be better to delay the authorization to the end of October. At the time of the September meetings, nobody was prepared to defend a probability that high, as the data available then, case fatality rates of 0.2% for the United Kingdom and Spain reported by Vaillant, La Ruche, Tarantola and Barboza,7 were judged to justify odds of 4 to 1 in favour of moderate, given the September decision, and the group had assigned odds of 3 to 1 for the later decision on the grounds that the pandemic might become more serious. In short, the model proved to be robust to disagreements about the probabilities and to imprecision in the data on which those judgements were based.
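
The mechanism of that one-way sensitivity analysis can be sketched as below. The conditional expected DSDs used here are placeholders, not the workshop’s rolled-back figures, so the crossover printed by this sketch (0.92 with these placeholders) differs from the 0.84 reported above; only the procedure is illustrated.

```python
# Illustrative one-way sensitivity sweep: vary P(moderate | delay to October) and
# find the smallest value at which the October option has lower expected DSDs.
E_SEPTEMBER = 216_500     # expected DSDs for the September branch (from the text)
E_OCT_MODERATE = 150_000  # placeholder conditional expected DSDs, not workshop values
E_OCT_SERIOUS = 900_000   # placeholder conditional expected DSDs, not workshop values

def october_expected(p_moderate: float) -> float:
    """Expected DSDs for the October branch given P(moderate | October)."""
    return p_moderate * E_OCT_MODERATE + (1 - p_moderate) * E_OCT_SERIOUS

for step in range(101):
    p = step / 100
    if october_expected(p) < E_SEPTEMBER:
        print(f"October preferred once P(moderate | October) exceeds about {p:.2f}")
        break
```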

Discussion

As far as we know, no regulatory authority for medicinal products has published an application of a decision theory–based model that might be useful in a regulatory setting. So, did the exercise support our working hypothesis that modelling could be helpful to regulators? The main and most important consequence of the modelling was that participants were no longer divided about the best course of action; all preferred an early decision. Discussion clarified outstanding uncertainties among the participants and created a shared understanding and sense of common purpose based on the information available. The relative insensitivity of the model to uncertainty about safety and efficacy showed that there was a potentially high cost, in expected DSDs, for waiting. The judged reduction in uncertainty that better data from waiting until the end of October would provide was not a sufficient improvement over the state of knowledge that existed in September to justify that cost.

The three workshop interviewees agreed that the model helped to improve communication within the group, and that it brought together higher-level thinking about the project with detailed historical and current data. As one person said, ‘The greatest value of the exercise was that it was group-based. It brought into the same room different people, with different views, expertise and seniority’.

Other reported benefits of the modelling included increased confidence in the efficacy of the vaccine, support for the message of ‘no delay’, development of new insights and the realization that uncertainties about safety and efficacy were less crucial to the decision than uncertainties about the seriousness of the disease. It was apparent from the interviews that the model helped the group to take the problem apart into manageable pieces, focus on one issue at a time, attend only to the facts relevant to that part of the problem, discuss the numerical information available and so provide a discussion that was more objective and less emotional.

One of the founders of decision analysis, Howard Raiffa, described it as a ‘divide and conquer’ strategy for difficult problems.14 The human mind is limited in how many features can be kept in thought all at once,15 and research by psychologists shows that as a consequence, we adopt various heuristics, or ‘rules of thumb’, to simplify complex situations, sometimes resulting in poor decisions.16 This exercise showed how the ‘divide and conquer’ strategy could be useful to regulators, who already decompose a complex problem into its components, consider data and apply expert judgement to the parts. Here, we quantified judgement and let a computer reassemble the pieces by applying the rational combination rules of decision theory. This process is first reductionist, then as it reassembles the pieces, it is constructionist,17 which allows new properties to emerge from looking at the whole, for example, the unexpected insensitivity of the results over ranges of defendable probability judgements about safety and efficacy. The model provided a way to look at a to-be-constructed future, like an architect’s model,18 enabling decision makers and their advisors to form their preferences and so develop an informed decision with confidence.

As stated in the ‘Introduction’ section, the flu pandemic provided an opportunity to test the potential for modelling to improve the understanding, transparency and confidence of regulatory decision making. The model was limited in scope, with its exclusive focus on deaths and life-debilitating effects caused by the pandemic influenza and by the vaccine, respectively. Non–life-threatening morbidity, hospitalizations and other outcomes were not considered, nor were particular populations, such as children and pregnant women, explicitly taken into account. The model is further based on the assumption that vaccination after approval occurs in a timely and broad manner, which in reality does not happen because of differences in vaccine supply and geographical and seasonal differences between health care environments. However, even if this had been taken into account, it is unlikely that the resulting preference for the earlier decision would have changed. If this model had been used to support regulatory decisions, any additional considerations deemed important could have been included.

In addition, if such a model were to be used by a regulatory agency to assist the deliberative process of approving a vaccine or drug, the model would be continually updated as further information became available, right up to the time of a decision. After the first meeting, it became clear that the attack rate of 30% and the death rates for a moderate or serious disease of 0.1 and 1%, respectively, were all too high (hence the large number of DSDs, 42 500, for even our best-case scenario). We did not revise the model with these new values because our intent was to test the usefulness of a model before a decision was made, and so we have reported here the values assumed in early September.

This project has shown the potential usefulness of modelling to help regulators and other decision makers on significant safety issues, and, more broadly on public health issues, to make their decisions more explicit, transparent and auditable, and therefore more acceptable to external stakeholders.

Funding

This project was funded by the EMA through the secondment of LDP and BF from the London School of Economics to the EMA’s benefit/risk methodology project.

Conflicts of interest: None declared.

Key points

  • As far as the authors are aware, this is the first time a regulatory authority has examined the potential for a formal structured approach that could support the decision-making process.

  • This article shows how transparency in regulatory decisions can be facilitated by formal modelling, and how improved communication to affected stakeholders could then follow.

  • Decision modelling could be critically important for situations in which regulators face crucial public health issues.

Acknowledgements

The European Medicines Agency (EMA) provided funding for this study through the secondment of L.D.P. and B.F. from the London School of Economics. The findings and conclusions in this report are those of the authors and do not necessarily represent the official positions of the EMA, the London School of Economics and Political Science or the European Centre for Disease Prevention and Control. We are grateful to the many people in the EMA who contributed their expertise and judgements to the construction and exploration of the model. Our thanks also extend to Dr Desmond Fitzgerald, who provided helpful suggestions for improving the original draft of this report.

L.D.P. facilitated the decision conference and the second and third meetings, developed the model and wrote the report. B.F. assisted in creating the model, wrote notes of the decision conference, facilitated working sessions with data providers and interviewed some of them and assisted with report writing. N.Z. assisted with data interpretation and report writing, and interviewed some of the data providers. H.-G.E. provided senior-level guidance in the first meeting and advised on many matters in writing this report. V.J. and F.E. provided information about past pandemics and data about efficacy and safety of similar vaccines and research on the H1N1 vaccines. P.K. and A.N. gave information and available data for various definitions of disease seriousness. T.L. suggested exploring the usefulness to regulatory authorities of decision-analytic modelling of the H1N1 vaccine recommendation.

The views expressed in this article are the personal views of the authors and may not be understood or quoted as being made on behalf of or reflecting the position of the European Medicines Agency or one of its committees or working parties. The information contained in this article is not intended to be an accurate reflection of submitted data and was not taken into account by the European Medicines Agency’s CHMP during the scientific assessment.
