Information

How is duration of efficacy estimated for vaccines?


Vaccines, especially those given in adulthood, usually have term limits attached, e.g. 10 years for yellow fever or 3 years for typhoid. Since presumably the time course of an immune response is no great respecter of our calendrical conventions, and since there is also presumably a spectrum of responses across the population, how are these durations estimated? What are the criteria for deciding a cutoff time?

Also, do the estimates get revisited and updated as time goes on and new data become available? Do new data become available? Is there continuing follow-up research into this, or does the estimate just get made before the vaccine comes to market and then get taken as gospel?


Duration of efficacy is typically determined by tracking the antibody titers of a cohort of subjects who have received the vaccine, and estimating, from the trajectory of those titers, when they will cross the threshold below which the vaccine no longer confers protective immunity.
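
To make the extrapolation concrete, here is a minimal sketch in Python with made-up numbers; the measurement schedule, titer values, and protective threshold are all illustrative, and real studies establish the threshold (the correlate of protection) separately:

```python
import numpy as np

# Hypothetical follow-up data: geometric mean antibody titers for a vaccinated
# cohort, measured at several points after vaccination (illustrative values).
years = np.array([0.5, 1, 2, 3, 5])
titers = np.array([1200.0, 950.0, 610.0, 420.0, 210.0])

threshold = 40.0  # assumed protective titer (illustrative)

# Antibody decay is often roughly exponential, so fit a line to log-titer.
slope, intercept = np.polyfit(years, np.log(titers), 1)

# Solve log(threshold) = slope * t + intercept for t.
t_cross = (np.log(threshold) - intercept) / slope
print(f"Titers projected to fall below threshold after ~{t_cross:.0f} years")
```

In practice the fitted decay model is more careful than a straight line on the log scale, but the logic (fit the decline, project forward to an assumed threshold) is the same.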

These estimates do get revised and updated as time goes on - you will occasionally see new recommendations for a second or third "booster" dose of a vaccine, which is meant to extend the duration of immunity beyond that conferred by the original vaccination.

Two major types of studies track this over time: Phase IV clinical trials, which are trials required post-licensure, and observational epidemiological studies, which tend to be performed when disease transmission starts to occur in supposedly vaccinated populations.


Mechanical snail has raised the issue of viral evolution, so I'll touch on it briefly. The duration of efficacy discussed above is based on how long a patient's body can mount an immune response to a particular challenge. That's a concern for all vaccines.

For some vaccines, there's a secondary process that's of concern - that of the virus evolving in such a way that the antigens targeted by the vaccine are no longer those on the virus itself. This is only a concern for some viruses, notably those that are particularly fast-evolving, like influenza or HIV, and less of an issue for, say, measles and HPV.

But that's typically not what people are talking about when they say "duration of efficacy" because it's inherently unpredictable, and less a function of the vaccine and more a function of the virus.


I read somewhere (I don't have a citation) that duration of efficacy is limited by the evolution of viral antigens; eventually the population will evolve to the point where the immune system no longer recognizes them effectively. RNA viruses have more volatile genomes and so their antigens drift faster. This explanation makes some sense but it conflicts with @Epigrad's answer.


How CDC Monitors COVID-19 Vaccine Effectiveness

After the U.S. Food and Drug Administration (FDA) authorizes a vaccine for emergency use or approves a vaccine, experts continue to assess how well the vaccine is working in real-world conditions. Clinical trials of the available COVID-19 vaccines are important for authorization and approval, but their results may not reflect how effective the vaccines are once administered beyond the clinical trial. CDC has established methods to monitor vaccine effectiveness in real-world conditions and address ongoing and future questions around how vaccines perform.

CDC uses several methods to evaluate COVID-19 vaccine effectiveness. These methods can contribute different information and build a base of evidence about how COVID-19 vaccines are working. The information collected through this combined approach enables CDC to monitor vaccine effectiveness over time. This can help inform public health action to continue to best protect the population.


How Flu Vaccine Effectiveness and Efficacy are Measured

Two general types of studies are used to determine how well influenza vaccines work: randomized controlled trials and observational studies. These study designs are described below.

Randomized controlled trials (RCTs)

The first type of study design is called a randomized controlled trial (RCT). In an RCT, volunteers are assigned randomly to receive an influenza vaccine or a placebo (e.g., a shot of saline). Vaccine efficacy is measured by comparing the frequency of influenza illness in the vaccinated and the unvaccinated (placebo) groups. The RCT study design minimizes bias that could lead to invalid study results. Bias is an unintended systematic error in the way researchers select study participants, measure outcomes, or analyze data that can lead to inaccurate results. In an RCT, vaccine allocation is usually double-blinded, which means neither the study volunteers nor the researchers know if a given person has received vaccine or placebo. National regulatory authorities, such as the Food and Drug Administration (FDA) in the United States, require RCTs to be conducted and to demonstrate the protective benefits of a new vaccine before the vaccine is licensed for routine use. However, some vaccines are licensed based on RCTs that use antibody response to the vaccine as measured in the laboratory, rather than decreases in influenza disease among people who were vaccinated.
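
As a minimal illustration of the efficacy calculation itself (the counts below are invented, not from any particular trial), VE in an RCT is one minus the ratio of attack rates:

```python
# Vaccine efficacy from an RCT: VE = 1 - (attack rate, vaccinated) /
# (attack rate, placebo). Counts are illustrative only.
cases_vaccinated, n_vaccinated = 30, 10_000
cases_placebo, n_placebo = 100, 10_000

ar_vaccinated = cases_vaccinated / n_vaccinated
ar_placebo = cases_placebo / n_placebo

ve = 1 - ar_vaccinated / ar_placebo
print(f"Vaccine efficacy: {ve:.0%}")  # 70% with these numbers
```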

Observational Studies

The second type of study design is an observational study. There are several types of observational studies, including cohort and case-control studies. Observational studies assess how influenza vaccines work by comparing the occurrence of influenza among people who have been vaccinated with that among people not vaccinated. Vaccine effectiveness is the percent reduction in the frequency of influenza illness among vaccinated people compared to people not vaccinated, usually with adjustment for factors (like presence of chronic medical conditions) that are related to both influenza illness and vaccination. (See below for details.)

How do vaccine effectiveness studies differ from vaccine efficacy studies?

Vaccine efficacy refers to vaccine protection measured in RCTs usually under optimal conditions where vaccine storage and delivery are monitored and participants are usually healthy. Vaccine effectiveness refers to vaccine protection measured in observational studies that include people with underlying medical conditions who have been administered vaccines by different health care providers under real-world conditions.

Once an influenza vaccine has been licensed by FDA, recommendations are typically made by CDC’s Advisory Committee on Immunization Practices (ACIP) for its routine use. For example, ACIP now recommends annual influenza vaccination for all U.S. residents aged 6 months and older. These universal vaccine recommendations make it unethical to perform placebo-controlled RCTs because assigning people to a placebo group could place them at risk for serious complications from influenza. Also, observational studies often are the only option to measure vaccine effectiveness against more severe, less common influenza outcomes, such as hospitalization.

What factors can affect the results of influenza vaccine efficacy and effectiveness studies?

The measurement of influenza vaccine efficacy and effectiveness can be affected by virus and host factors as well as the study methodology used. Therefore, vaccine efficacy/effectiveness point estimates have varied among published studies.

Virus factors

The protective benefits of influenza vaccination are generally lower during flu seasons in which the majority of circulating influenza viruses differ from the influenza viruses used to make the vaccines. Influenza viruses are continuously changing through a natural process known as antigenic drift. (For more information, see How the flu virus can change: Drift and Shift.) However, the degree of antigenic drift and the frequency of drifted viruses in circulation can vary for each of the three or four viruses included in the seasonal flu vaccine. So even when circulating influenza viruses are mildly or moderately drifted in comparison to the vaccine, people may still receive some protective benefit from vaccination; and if other circulating influenza viruses are well matched, the vaccine could still provide protective benefits overall.

Host factors

In addition to virus factors, host factors such as age, underlying medical conditions, history of prior infections and prior vaccinations can affect the benefits received from vaccination.

Study Design Factors

Experts consider RCTs to be the best study design because they are less susceptible to biases. However, as stated above, these studies cannot be conducted when vaccination is recommended in a population, and they are very difficult to conduct for more severe outcomes that are less common. There are several observational study designs, but many programs currently use the test-negative, case-control design. In the test-negative design, people who seek care for an acute respiratory illness are enrolled at ambulatory care settings (such as outpatient clinics, urgent care clinics, and emergency departments) and information is collected about the patients’ influenza vaccination status. All participants are tested for influenza using a highly specific and sensitive test for influenza virus infection, such as reverse transcription polymerase chain reaction (RT-PCR). The ratio of vaccinated to unvaccinated persons (i.e., the odds of influenza vaccination) is then compared for patients with and without laboratory-confirmed influenza. The test-negative design removes selection bias due to health-care seeking behaviors. In addition to the test-negative design, there are additional observational study designs that have been used to estimate vaccine effectiveness.
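
The core test-negative calculation can be sketched in a few lines (the counts are invented, and a real analysis would adjust for age, calendar time, and other covariates):

```python
# Test-negative design: among care-seekers tested for influenza, compare the
# odds of vaccination in test-positive vs. test-negative patients.
# VE = 1 - odds ratio. Counts are illustrative only.
flu_pos_vacc, flu_pos_unvacc = 40, 160   # RT-PCR positive (cases)
flu_neg_vacc, flu_neg_unvacc = 300, 500  # RT-PCR negative (controls)

odds_cases = flu_pos_vacc / flu_pos_unvacc
odds_controls = flu_neg_vacc / flu_neg_unvacc

ve = 1 - odds_cases / odds_controls
print(f"Unadjusted vaccine effectiveness: {ve:.0%}")  # 58% with these numbers
```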

Factors Related to Measuring Specific versus Non-Specific Outcomes

For both RCTs and observational studies, the specificity of the outcome measured in the study is important. Non-specific outcomes, such as pneumonia hospitalizations or influenza-like illness (ILI), can be associated with influenza virus infections as well as infections with other viruses and bacteria. Vaccine efficacy/effectiveness estimates against non-specific outcomes are generally lower, depending on what proportion of the outcome measured is attributable to influenza. For example, a study among healthy adults found that the inactivated influenza vaccine (i.e., the flu shot) was 86% effective against laboratory-confirmed influenza, but only 10% effective against all respiratory illnesses in the same population and season[1]. Laboratory-confirmed influenza virus infections, by RT-PCR or viral culture, are generally the most specific outcomes for vaccine efficacy/effectiveness studies.
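
The dilution follows from simple arithmetic: under the simplifying assumption that the vaccine prevents only the influenza-attributable fraction f of a non-specific outcome, the measured VE scales down by f:

```python
# If influenza causes a fraction f of all respiratory illness among the
# unvaccinated, and the vaccine only prevents the influenza portion, then
# VE_nonspecific = VE_influenza * f (a simplified model).
ve_flu = 0.86

for f in (0.10, 0.25, 0.50):
    print(f"influenza fraction {f:.0%} -> VE vs all respiratory illness "
          f"{ve_flu * f:.0%}")
```

This is consistent with the example above: 86% efficacy against confirmed influenza corresponds to roughly 10% against all respiratory illness when influenza accounts for only about an eighth of that outcome.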

Serologic assays to detect influenza infection (i.e., assays requiring a four-fold rise in antibody titers against influenza viruses detected from paired sera) were often used in past flu VE studies to detect influenza infections before more accurate tests, such as RT-PCR, became widely available. The problem with VE studies that use serology to test for influenza infection is that vaccination elevates antibody levels, similar to infection. New influenza infections could be missed in a vaccinated person because antibodies are already high and a four-fold increase doesn’t develop. Therefore, serologic testing methods can result in biased VE estimates that inflate VE.

Can you describe biases that are important to consider for observational studies measuring vaccine effectiveness?

Observational studies are subject to various forms of bias (see above for definition) to a greater degree than RCTs. Therefore, it is important that bias be minimized with the study design or adjusted for in the analysis. Observational studies of influenza vaccine effectiveness can be subject to three forms of bias: confounding, selection bias, and information bias.

Confounding occurs when the effect of vaccination on the risk of the outcome being measured (e.g., influenza-related hospitalizations confirmed by RT-PCR) is distorted by another factor associated both with vaccination (the exposure) and the outcome. In RCTs, confounding factors are expected to be evenly distributed between vaccinated and unvaccinated groups. This is not true of observational studies. For example, chronic medical conditions can confound the association between influenza vaccination and hospitalization with influenza in observational studies. Chronic medical conditions increase the risk of influenza-related hospitalization and vaccination coverage often is higher among people with chronic medical conditions. Therefore, the presence of a chronic medical condition in a study participant is a potential confounding factor that should be considered in analysis. This is an example of confounding by indication because those at greatest risk for the outcome being measured (i.e., influenza associated hospitalization) are targeted for vaccination, and therefore, they are more likely than those without a chronic medical condition to receive a flu vaccine. Not adjusting for confounders could bias the vaccine effectiveness estimate away from the true estimate. In the example given, the vaccine effectiveness estimate could be biased lower, or towards lower effectiveness.
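
One standard way to handle such a binary confounder is stratified analysis, for example a Mantel-Haenszel pooled odds ratio; the sketch below uses invented counts and is meant only to show the mechanics:

```python
# Mantel-Haenszel pooled odds ratio across strata of a confounder
# (chronic medical condition). Each stratum is a 2x2 table:
# a = vaccinated cases, b = unvaccinated cases,
# c = vaccinated controls, d = unvaccinated controls. Illustrative counts.
strata = [
    dict(a=50, b=60, c=200, d=100),   # with chronic condition
    dict(a=30, b=90, c=400, d=500),   # without chronic condition
]

num = sum(s["a"] * s["d"] / (s["a"] + s["b"] + s["c"] + s["d"]) for s in strata)
den = sum(s["b"] * s["c"] / (s["a"] + s["b"] + s["c"] + s["d"]) for s in strata)
or_mh = num / den

print(f"Adjusted vaccine effectiveness: {1 - or_mh:.0%}")
```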

Selection bias occurs when people with the outcome being measured by the study (i.e., influenza infection) differ from people who do not have the outcome. In observational studies of influenza vaccine effectiveness, people with and without influenza may have different likelihoods of being vaccinated, and this can bias the estimate of vaccine effectiveness. For example, people who visit their health care provider in outpatient settings (e.g., clinics and urgent care) are more likely to be vaccinated than people who do not go to a provider for care. If controls are selected from a different population than the cases (e.g., cases are from a clinic and controls from a community sample) with different health care seeking behaviors, selection bias related to health care seeking (and the likelihood to be vaccinated) may be introduced. The test-negative study design minimizes selection bias related to health care seeking by enrolling patients who seek care for a respiratory illness. This study design is used by many studies globally, including CDC-funded networks that measure vaccine effectiveness.

Information bias occurs if exposures or outcome information are based on different sources of information for people with and without the disease of interest. For example, if researchers obtain information on vaccination for children with influenza from immunization records but ask parents of children without influenza if the child was vaccinated, this difference in data collection procedures could bias the results of the study.

How well do influenza vaccines work during seasons in which the flu vaccine is not well matched to circulating influenza viruses?

As described above, when the virus components of the flu vaccine are not well matched with circulating influenza viruses, the benefits of influenza vaccination may be reduced. However, the degree of antigenic drift from vaccine viruses and the proportion of circulating drifted viruses can vary. As a result, even when circulating influenza viruses are mildly or moderately drifted in comparison to the vaccine, it is still possible that people may receive some protective benefit from influenza vaccination. In addition, even when some circulating influenza viruses are significantly drifted, it is possible for other influenza viruses in circulation to be well matched to the vaccine. It is not possible to predict how well the vaccine and circulating strains will be matched in advance of the influenza season, nor is it possible to predict how this match may affect vaccine effectiveness.

What is the evidence that influenza vaccines work?

Adults 65 years or older

Among older adults, annual influenza vaccination was recommended based on the high burden of influenza-related disease and demonstrated vaccine efficacy among younger adults. One RCT of adults aged 60 years and older relied on serology for confirmation of influenza and reported a vaccine efficacy of 58% (95% confidence interval (CI): 26-77)[2]. However, it is unknown whether infections were missed by serology among the study participants who were vaccinated (and whether the vaccine efficacy estimate is biased upwards; see the previous description of how bias can occur in VE studies that test for influenza using serology). A meta-analysis of observational studies that used the test-negative design provided VE estimates for adults aged >60 years against RT-PCR confirmed influenza infection. This meta-analysis reported significant vaccine effectiveness of 52% (95% CI: 41-61) during seasons when the vaccine and circulating viruses were well matched[3]. During seasons when the circulating viruses were antigenically drifted (not well matched), reported VE was 36% (95% CI: 22-48)[3].

An RCT that compared a high-dose, inactivated influenza vaccine (containing four times the standard amount of influenza antigen) to standard dose vaccine in persons aged 65 years or older during the 2011-12 and 2012-13 influenza seasons found that rates of laboratory-confirmed influenza were 24% lower (95% CI: 10-37) among persons who received high-dose vaccine compared to standard dose influenza vaccine, indicating that high-dose vaccine provided 24% better protection against influenza than standard dose vaccine in this trial.[4]

Several observational studies have reported significant vaccine effectiveness against RT-PCR confirmed influenza-related hospitalization among older adults. A three-year study (2006-07 through 2008-09) in Tennessee that used a test-negative design reported vaccine effectiveness of 61% (95% CI: 18-83) among hospitalized adults >50 years of age[5]. In an analysis that added two seasons, 2010-11 and 2011-12 (excluding 2009-10), VE was 58% (95% CI: 8-81) against RT-PCR confirmed influenza-associated hospitalizations for persons >50 years of age for the five seasons combined[6].

Adults

Several RCTs have been done in healthy adults aged <65 years[7,8,9,10,11,12]. These studies have reported vaccine efficacy estimates ranging from 16% to 75%; the VE of 16% was reported during a season with few influenza infections. An RCT in South Africa among HIV-infected adults reported vaccine efficacy of 76% (95% CI: 9-96)[13]. A meta-analysis that included data from RCTs of licensed inactivated influenza vaccines reported a pooled vaccine efficacy of 59% (95% CI: 51-67) against influenza confirmed by RT-PCR or viral culture[14]. In addition, RCTs of cell-based inactivated influenza vaccines (IIVs) and recombinant trivalent HA protein vaccines have been performed among healthy adults. In general, efficacy estimates for these types of vaccines are similar to those for egg-based inactivated influenza vaccines[15,16,17].

Children

In a four-year RCT of inactivated vaccines among children aged 1-15 years, vaccine efficacy was estimated at 77% against influenza A (H3N2) and 91% against influenza A (H1N1) virus infection[18]. An RCT of children aged 6-24 months reported vaccine efficacy of 66% against laboratory-confirmed influenza in 1999-2000 but no vaccine efficacy during the second year, when there was little influenza activity[19]. During 2010-11, the vaccine efficacy of a quadrivalent inactivated vaccine among children aged 3-8 years was 59% (95% CI: 45%-70%)[20]. In addition, a cluster-randomized trial conducted in Hutterite communities in Canada found that vaccinating children aged 3 to 15 years with trivalent inactivated influenza vaccine before the 2008-09 season reduced RT-PCR confirmed influenza in the entire community by 61% (95% CI: 8-83), including a 59% reduction (95% CI: 5-82) in confirmed influenza among non-vaccinated community members, evidence of the “indirect” effect of influenza vaccination on prevention of disease transmission[21].

Several RCTs of live attenuated influenza vaccines among young children have demonstrated vaccine efficacy against laboratory-confirmed influenza, with estimates ranging from 74%-94%[22,23,24,25]. A study conducted among children aged 12 through 36 months living in Asia during consecutive influenza seasons reported efficacy for live attenuated influenza vaccine of 64%-70%[26].

Pregnant women

An RCT conducted among pregnant women in South Africa during 2011 and 2012 reported vaccine efficacy against RT-PCR confirmed influenza of 50% among HIV-negative women and 58% among HIV-positive women vaccinated during the third trimester[27]. In addition, the trial showed that vaccination reduced the incidence of laboratory-confirmed influenza among infants born to HIV-negative women by 49%; the study was unable to assess vaccine efficacy among infants of HIV-infected women. An observational study in the United States during 2010-11 and 2011-12 using a test-negative design reported vaccine effectiveness of 44% (95% CI: 5 to 67) against influenza among pregnant women[28].

A randomized trial in Bangladesh found that babies born to mothers vaccinated during pregnancy with trivalent inactivated influenza vaccines were significantly less likely to be born small for gestational age and weighed an average of 200g more than babies born to unvaccinated mothers[29,30]. No effect of maternal immunization on infant birth weight was reported in the South African trial described above. Some observational studies in developed and developing countries have found lower risk of prematurity or low birth weight in babies born to vaccinated mothers, but the effect has not been consistently demonstrated[31,32,33,34,35].

How well does the live attenuated influenza vaccine (LAIV) work compared to inactivated influenza vaccine (IIV)?

Children

Three randomized clinical trials comparing live attenuated influenza vaccine to trivalent inactivated influenza vaccine in young children, 2-8 years of age, suggested that live attenuated influenza vaccine had superior efficacy compared to inactivated influenza vaccine[36,37,38]. More recently, several observational studies have suggested that LAIV did not consistently provide better protection against influenza than inactivated vaccine, especially against influenza caused by the 2009 H1N1 pandemic virus[39,40,41]. However, a randomized, school-based study in Canada reported lower rates of confirmed influenza among students vaccinated with live-attenuated vaccine compared to students vaccinated with inactivated influenza vaccine, as well as decreased influenza transmission among family members of students vaccinated with live-attenuated influenza vaccines[42].

Adults

Clinical trials during 2004-05, 2005-06, and 2007-08 that compared inactivated influenza vaccines and live attenuated influenza vaccines to no vaccine among adults suggested that inactivated influenza vaccines provided better protection against influenza than live attenuated influenza vaccines in adults[7,8].

How does CDC monitor vaccine effectiveness?

CDC monitors vaccine effectiveness annually through the Influenza Vaccine Effectiveness (VE) Network, a collaboration with participating institutions in five geographic locations. These institutions enroll patients with respiratory symptoms at ambulatory clinics and test for influenza by RT-PCR. Vaccine effectiveness is estimated using the test-negative design, comparing proportions (odds) of influenza vaccination among patients with and without influenza. Statistical methods are used to account for differences in age, race, and underlying medical conditions that might influence vaccine effectiveness. Estimates are reported annually, and often, an early estimate is reported during the season. Since the match between circulating and vaccine viruses is not known before the season, annual estimates of vaccine effectiveness give a real-world look at how well the vaccine protects against influenza caused by circulating viruses each season.


Peer-reviewed data show high protection for leading COVID vaccines

The peer-reviewed data on both the Moderna and Pfizer-BioNTech COVID vaccines are in, demonstrating 94% to 95% protection from the disease.

The phase 3 clinical trial results for the Moderna COVID-19 vaccine, mRNA-1273, and the Pfizer-BioNTech COVID-19 vaccine, BNT162b2 or Comirnaty, were published late last week in the New England Journal of Medicine (NEJM). When compared with placebos, Moderna's vaccine showed 94.1% efficacy (95% confidence interval [CI], 89.3% to 96.8%), and Pfizer's had 95.0% efficacy (95% CI, 90.3% to 97.6%).

Both rates are for patients who received the two intended doses. Adverse events were uncommon in both studies.

"That the mRNA-1273 Covid-19 and the BNT162b2 Covid-19 vaccines protect with near-identical 94 to 95% vaccine efficacies—and that both vaccines were developed and tested in less than year—are extraordinary scientific and medical triumphs," writes Barton Haynes, MD, in a NEJM editorial on the Moderna study.

Eric Rubin, MD, PhD, and Dan Longo, MD, also use the word triumph to describe the Pfizer vaccine in their editorial, adding that, despite the further areas of research needed, the data are "impressive enough to hold up in any conceivable analysis."

The US Food and Drug Administration authorized the Pfizer vaccine on Dec 11 and the Moderna vaccine on Dec 18. Both make use of messenger RNA in lipid nanoparticles.

The authors of both papers aim to assess similar outcomes in future studies: long-term efficacy, uncommon or slow-to-surface side effects, and effects on asymptomatic infections and transmission rates.

Moderna trial aimed for representative demographics

In Moderna's phase 3 trial, also called COVE (Coronavirus Efficacy and Safety Study), 14,134 adults across 99 sites in the United States received two 100-microgram (mcg) dosages of the vaccine 28 days apart, with follow-ups continuing for a median of 64 days after the second dose.

When compared with 14,073 patients who received placebo, the researchers found 11 (0.1%) patients in the intervention group developed symptomatic COVID-19, compared with 185 (1.3%) in the placebo group. The only cases of severe COVID-19 occurred in the placebo group (0.2%). Data stratification by patients' age, sex, and race still showed consistent vaccine efficacy. The lowest estimate was for those 65 and above (86.4%; 95% CI, 61.4% to 95.2%).
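
As a quick plausibility check, a naive risk ratio from the published case counts reproduces the headline figure (the trial's formal analysis used hazard ratios and person-time, so this is only an approximation):

```python
# Point estimate of VE from the reported case splits, ignoring person-time.
cases_vaccine, n_vaccine = 11, 14_134
cases_placebo, n_placebo = 185, 14_073

ve = 1 - (cases_vaccine / n_vaccine) / (cases_placebo / n_placebo)
print(f"VE ~ {ve:.1%}")  # ~94.1%
```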

"The data suggest protection from severe illness, indicating that the vaccine could have an impact on preventing hospitalizations and deaths, at least in the first several months post-vaccination," says Lindsey Baden, MD, co-principal investigator for the study and lead author, says in a Brigham and Women's Hospital press release. Baden is an infectious disease specialist at Brigham, which was one of the study's locations.

The study's design tried to create a sample pool that reflected US racial demographics, including 20.5% of Hispanic or Latino background, 10.2% of black or African American background, and 4.6% of Asian descent, according to the press release. Additionally, 24.8% of patients were 65 and older, and 16.7% of those younger had comorbidities such as chronic lung disease, diabetes, or obesity.

Common side effects for the vaccine treatment were mild-to-moderate injection-site pain, headache, and fatigue. Overall, severe adverse events occurred in 0.5% of the intervention group and 0.2% of the placebo group, and none were classified as immediately life-threatening or as a cause of death.

People as young as 16 received Pfizer vaccine

In the Pfizer-BioNTech study, 18,508 people 16 years and older received the vaccine in two 30-mcg dosages 21 days apart.

During a follow-up that had a median duration of two months, 9 cases of symptomatic COVID-19 were found in the intervention group (0.05%), whereas 162 of the 18,435 people in the placebo group (0.9%) reported COVID-19 infections. Of the 10 severe COVID-19 cases, 9 occurred in the placebo group. Vaccine efficacy appeared to be consistent across racial backgrounds, age, and comorbidities.

"The study was not designed to assess the efficacy of a single-dose regimen," the researchers wrote. "Nevertheless, in the interval between the first and second doses, the observed vaccine efficacy against Covid-19 was 52%, and in the first 7 days after dose 2, it was 91%, reaching full efficacy against disease with onset at least 7 days after dose 2."

The study cohort was 82.9% white, 9.2% black, 27.9% Hispanic or Latino, and 4.2% Asian. Twenty-one percent had at least one pre-existing comorbidity, and 42.3% were older than 55 years. Almost 77% of the study cohort was located in the United States, but 15.3% were located in Argentina, 6.1% were in Brazil and 2.0% were in South Africa.

Similar to Moderna's vaccine, the most common adverse effects were mild-to-moderate injection-site pain, fatigue, and headaches, as reported by a subgroup of 8,183 people. Those younger than 55 experienced more injection-site pain and systemic symptoms than older patients.

"The frequency of any severe systemic event after the first dose was 0.9% or less. Severe systemic events were reported in less than 2% of vaccine recipients after either dose, except for fatigue (in 3.8%) and headache (in 2.0%) after the second dose," the researchers write. No vaccine-related deaths occurred in any study participant.


Estimation of vaccine efficacy in a repeated measures study under heterogeneity of exposure or susceptibility to infection

Vaccine efficacy (VE) is commonly estimated through proportional hazards modelling of the time to first infection or disease, even when the event of interest can recur. These methods can result in biased estimates when VE is heterogeneous across levels of exposure and susceptibility in subjects. These two factors are important sources of unmeasured heterogeneity, since they vary within and across areas, and often cannot be individually quantified. We propose an estimator of VE per exposure that accounts for heterogeneous susceptibility and exposure for a repeated measures study with binary recurrent outcomes. The estimator requires only information about the probability distribution of environmental exposures. Through simulation studies, we compare the properties of this estimator with proportional hazards estimation under the heterogeneity of exposure. The methods are applied to a reanalysis of a malaria vaccine trial in Brazil.

1. Introduction

Vaccine efficacy (VE) is defined as the per cent reduction in the probability or hazard of disease conferred by the vaccine to an individual. It is typically estimated based on a marginal, or population-based, parameter, which is an average of the individual vaccine effects, specific to a geographically and temporally defined population (Halloran et al. 1991). Commonly, estimates of VE are based on one minus the hazard ratio from a proportional hazards model for time to first event, which can be an infection or disease, even when the disease under study recurs (Alonso et al. 1994; Bojang et al. 2001; Aponte et al. 2007). The use of the proportional hazards model in VE studies is widespread owing to its ease of implementation, interpretation and flexibility regarding the shape of the baseline hazard over time. An important advantage of proportional hazards models is that, in balanced randomized trials and under the proportional hazards assumption, the population VE represents the individual VE (Halloran et al. 1994; Gilbert et al. 1998), which is the causal parameter of interest. This individual VE in trials represents the experimental or biological effect of the vaccine and is useful in selecting or comparing the vaccine candidates. The population VE for the same vaccine could vary for different studies and does not allow an assessment of the efficacy of the vaccine itself.

Under homogeneity of risk factors of infection or disease in the population or of the VE itself, proportionality of the hazards may hold true (Vaupel et al. 1979; Aalen 1988; Hougaard 1995). However, the assumption of homogeneity is often unrealistic because fundamental risk factors for infectious diseases, such as individual exposure levels and susceptibilities to the pathogen pre-vaccination, are likely to be heterogeneous in the population (Struchiner et al. 1994; Halloran et al. 1995). In this paper we will focus on the heterogeneity due to exposure, assuming that the heterogeneous susceptibilities can be minimized in the design or analysis, e.g. by using covariates. Examples of covariates to model heterogeneous susceptibility include age or other markers of previous exposure history, such as time living in the area (Baird et al. 1993).

Exposure intensity varies greatly within and across populations and cannot always be reasonably and accurately quantified through covariates. The intensity of exposure can vary across individual behavioural characteristics such as occupation or sexual behaviour and is likely to fluctuate within geographical regions due to environmental factors (Smith et al. 1995). Under heterogeneity, proportional hazards analysis of time to first event can underestimate the individual VE, owing to unmeasured covariates representing prognostic factors for survival or random heterogeneity (Gail et al. 1984; Struthers & Kalbfleisch 1986; Schumacher et al. 1987; Aalen 1988, 1994; Chastang et al. 1988; Lin & Wei 1989; Omori & Johnson 1993; Schmoor & Schumacher 1997; Henderson & Oman 1999). Basically, unmeasured heterogeneity affects the comparability between vaccinees and non-vaccinees achieved by randomization in the beginning of the study. Intuitively, if the vaccine is effective, higher risk (more exposed) unvaccinated subjects fail faster than vaccinated subjects and are removed from the risk set.

Population heterogeneities in exposure intensities can be modelled using random effects survival methods, including Cox models with random effects (Hougaard 1995; Halloran et al. 1996; Longini & Halloran 1996). When infection or disease does not confer long-lasting immunity and may recur, multivariate survival methods may be used (Hougaard 2000; Therneau & Grambsch 2000). Multiple events may be analysed with Poisson regression or using a more flexible approach, such as the marginal model originally proposed by Anderson & Gill (1982) with variances adjusted for correlations within subjects using robust variance estimates (Lin & Wei 1989). Analyses of multiple events using the Anderson–Gill model are expected to correct for bias due to unmeasured heterogeneity because, since subjects remain in the risk set until the end of the study, vaccinated and unvaccinated subjects remain comparable. However, continuous time survival analysis methods typically require information about the exact failure times, though often in practice we can only distinguish a subject's outcomes in the time interval between consecutive visits (active case detection). Even when studies combine active case detection with passive case detection (detection of the event whenever people seek care), many of their cases are found through active detection and have event times interval censored. Under those circumstances, although the exact failure time could be approximated, a repeated measures analysis for a binary outcome represents a practical alternative to handle the sequential monitoring of subjects, interval-censored observations, recurrent episodes and non-proportional hazards.

In this paper, we present an estimator for VE given one exposure contact in a repeated measures analysis with a generalized estimating equations (GEE) approach, accounting for heterogeneous exposure and susceptibility pre-vaccination. Although the proposed estimator may be applied to different vaccines, we focus our model on estimating efficacy for malaria vaccines. Malaria is a leading cause of child mortality in developing countries (WHO 1995) and is transmitted by a mosquito vector. The transmission intensity of malaria is highly variable across and within regions, and highly seasonal (Smith et al. 1995). Immunity to malaria is partial and does not prevent recurrences of infection or disease. Currently there are several vaccines in the preclinical test phase (Aide et al. 2007). For some malaria vaccines, recent randomized trials have been performed in which the VE is primarily estimated based on the proportional hazards modelling of time to first event (Alonso et al. 2005; Bejon et al. 2006, 2007). It is not feasible to record exposure to infected mosquito bites received by each trial subject, and so exposure information typically relies on mosquito collections in samples of the study area. Case detection methods rely on both passive case detection, when individuals seek medical treatment, and active case detection, when houses are visited and individuals are examined at regular time intervals.

Section 2 presents our general model and estimator of VE. Under some simplifying assumptions, we propose estimation procedures using standard commercial software. In §3 we show that, with heterogeneity of exposure to infection, VE estimates based on the proportional hazards modelling of time to first event yield biased estimates of the individual VE. We also use simulation studies to compare our repeated measures estimator of VE with the proportional hazards model estimator of VE, when the assumption of proportional hazards holds. In §4, we apply the proposed model to a reanalysis of the data from a malaria SPf66 vaccine trial carried out in Brazil.

2. Model description

In a VE study, subjects i, for i=1, …, n, with different susceptibility status are randomized to vaccination (Vi=1) or placebo (Vi=0) at the beginning of the study period. The outcome status (Dij) of each subject (binary) is subsequently recorded at specific time intervals, j. For vaccines targeting recurrent events, a treated subject can, after an event, re-enter the risk set and present the outcome again. The probability of presenting the outcome at each interval j depends on the number of exposures received (Wij), which arrive with intensity λij. Although Wij cannot be measured, λij may be estimated in a separate ecological study or indirectly, allowing one to determine the empirical probability distribution of exposure. In this scenario, an estimator of VE can be defined based on the likelihood of the observed outcomes Dij given the probability distribution of the exposures Wij.

Different functions can be chosen to model f, g and λ, and different probability distributions to model the exposure Wij. If f and g are exponential functions (log link model), we can write the probability of disease for the ith subject at the jth interval given vaccination status as

Pr(Dij = 1 | Vi) = 1 − exp{−λij p exp(βVi)},

where p is the baseline probability that one exposure contact causes the outcome and the exposures Wij are taken to be Poisson with intensity λij.

In the above models, the individual VE, or VE in one exposure contact, may vary across subjects and intervals, and is one minus the ratio of the vaccinated and unvaccinated per-exposure probabilities of disease; under the log link model above it reduces to

VE = 1 − exp(β).

The population VE, or VE marginal on exposure, can be expressed as one minus the ratio of the marginal probabilities of disease in vaccinated and unvaccinated subjects,

VEpop = 1 − [1 − exp(−λij p exp(β))] / [1 − exp(−λij p)].

Straightforward extensions of this model could involve random effects or Markov covariates (Diggle et al. 1994). The appropriateness of each of these approaches is related to the existence of heterogeneity other than that caused by exposure, i.e. heterogeneity in susceptibility not measured by the modelled covariates and heterogeneity in the VE per exposure. Alternative approaches will lead to different interpretation of the VE in one exposure.

Throughout this paper, we will model the marginal probability of disease given exposure. The marginal models assume that event history during the study period does not affect the susceptibility per exposure. To model the repeated measures, we will use GEE methods (Liang & Zeger 1986; Zeger & Liang 1986). When there is no additional random heterogeneity, 1−exp(β) represents the individual VE per exposure.
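
As an illustration of how such a model can be fitted, here is a small simulated example in Python using the statsmodels GEE implementation in place of the S-plus routines used for the analyses in this paper (the CLogLog link class name is as in recent statsmodels releases); the exposure intensity λ, per-exposure probability p, and sample sizes are invented:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Under the model above, cloglog(Pr(D=1)) = log(lam * p) + beta * V, so
# log(lam * p) enters as an offset and VE per exposure = 1 - exp(beta).
n, n_intervals = 2000, 12
lam, p, true_ve = 30.0, 0.002, 0.5   # illustrative values

v = np.repeat((rng.random(n) < 0.5).astype(int), n_intervals)
ids = np.repeat(np.arange(n), n_intervals)
prob = 1 - np.exp(-lam * p * np.exp(np.log(1 - true_ve) * v))
d = (rng.random(n * n_intervals) < prob).astype(int)

df = pd.DataFrame({"d": d, "v": v, "id": ids})
fam = sm.families.Binomial(link=sm.families.links.CLogLog())
res = sm.GEE.from_formula(
    "d ~ v", groups="id", data=df, family=fam,
    offset=np.full(len(df), np.log(lam * p)),
).fit()

print(f"Estimated per-exposure VE: {1 - np.exp(res.params['v']):.0%}")  # ~50%
```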

3. Assessing bias of our repeated measures model and the proportional hazards model under heterogeneous exposure intensities

In this section we show that VE based on one minus the hazard ratio of the first episode is a biased estimate of the VE per exposure (the individual VE or causal parameter of interest), when the assumption of proportional hazards is violated due to heterogeneous exposures to infection. We studied two scenarios of heterogeneous exposure: the first generated by the mixture of two Poisson distributions and the second by a continuous mixture of Poisson distributions. Using simulations, we also compared VE estimated through our repeated measures model to that estimated through proportional hazards under a heterogeneous and continuous intensity of exposure.

In the first exposure heterogeneity scenario, the population was assumed to be subdivided into two groups (Xi=0/1). We allowed these two groups to receive exposures according to Poisson distributions, with means λ1 and λ2, respectively. Under these assumptions, the interarrival times of each disease episode given the number of exposures received in the interval followed an exponential distribution, homogeneous across time, with mean 1/(λ1 p g(βVi)) when Xi=0 and 1/(λ2 p g(βVi)) when Xi=1, where p is the baseline probability that one exposure causes an event and g(βVi) represents the reduction in the probability of an event in one exposure contact conferred by the vaccine. The proportion of individuals in each of the two groups varied from 5 to 95%. Half of the subjects in each group were vaccinated with a vaccine with individual efficacy of 50%, to mimic a perfectly balanced randomized trial. The resulting distribution of exposure contacts was thus a mixture of two Poisson distributions with mean μ = πλ1 + (1−π)λ2 and variance μ + π(1−π)(λ1 − λ2)², where π denotes the mixing proportion.

When the exposure was based on a two-point distribution (figure 1a), the hazard was plotted based on the corresponding two-component mixture of exponential time-to-event distributions.

Figure 1 Population VE based on the hazard ratio comparing vaccinated with unvaccinated subjects as a function of time, for an individual VE of 50%. (a) Exposure was generated assuming a mixture of two Poisson distributions with means λ1=12 and λ2=2, with the probability of occurrence of λ2 ranging from 0.1 to 0.9: triple dot-dashed line, 0.1; dashed line, 0.3; dot-dashed line, 0.5; dotted line, 0.7; solid line, 0.9. (b) Exposure was generated assuming a continuous mixture of Poisson distributions (negative binomial) with different means and variances, chosen to mimic the mean and variance of the two-point distribution: triple dot-dashed line, mean=11, variance=20; dashed line, mean=9, variance=30; dot-dashed line, mean=7, variance=32; dotted line, mean=5, variance=26; solid line, mean=3, variance=12.

In the second exposure heterogeneity scenario, we assumed that half of the subjects were vaccinated with a vaccine of individual efficacy of 50%. The intensity of exposure for each subject (λi) followed a gamma distribution with mean λ and variance λ/γ, leading to an overdispersed Poisson distribution for exposure with mean λ and variance λϕ (ϕ=(γ+1)/γ). The parameters λ and γ were chosen to match the mean and variance of the two-group problems described above. The interarrival times of each disease episode given the number of exposures received in the interval, for subject i, were exponentially distributed with mean 1/(λi p g(βVi)).

When the exposure was based on a mixture of Poisson distributions (negative binomial, figure 1b), the hazard was plotted based on the marginal time-to-event distribution induced by the gamma-distributed exposure intensities.

Figure 1 plots the VE over time based on hazard ratios under these two scenarios. In a study with one year follow-up time, the population VE is always lower than the biological VE or VE per exposure (50%). After two years, VE based on hazard ratios could substantially underestimate the true (and constant over time) effect of the vaccine. The difference between the individual and the population vaccine efficacies is higher when most subjects have higher intensity of exposure (λ) or when the heterogeneity is large. The hazard of the mixing distribution generated by a binary intensity of exposure is similar to that generated by a continuous intensity of exposure, except when the mean of the mixing distribution is low. Similar numerical results were found by Schmoor & Schumacher (1997).

For both exposure scenarios, we generated datasets of 250 simulations with 1200 study subjects. Subjects were followed for 720 days and then censored (type I censoring only). For the repeated measures analysis, subjects were allowed to re-enter the study after each failure. We assigned vaccine to 50% of the study subjects and then sampled exposure using the corresponding distribution. The probability of infection per bite, p, was chosen to average a cumulative probability of disease over the two years in unvaccinated subjects of approximately 40% and VE was chosen to be 50%.

In all simulations, time was subdivided into intervals as if the subjects had been observed every 30 days. A binary outcome random variable was created, Dij, which was equal to 1 if the ith subject developed the outcome during the corresponding time period j. With these data, we fitted proportional hazards models for time to first event, a frailty (random effects) survival model for time to first event (Gaussian frailty), our repeated measures model for all events using a complementary log–log link function and GEE, and a marginal multiple events survival model for continuous time to event, i.e. the Anderson–Gill model. All simulations and analyses were performed in Splus v. 8.0 (Insightful Corporation, Seattle). Estimation for the repeated measures model was implemented using the complementary log–log link and GEEs, assuming an independence working covariance matrix, using the gee function from the correlatedData library. Estimation for all survival models was implemented using the coxph function, with the cluster function for the Anderson–Gill model and with the frailty function for the random effects survival model. Wald CIs were calculated based on the estimated standard errors.
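
A condensed re-creation of this simulation in Python (with illustrative parameter values, not the exact settings above) shows the first-event hazard-ratio VE drifting downward over follow-up even though the per-exposure VE is held constant at 50%:

```python
import numpy as np

rng = np.random.default_rng(0)

# Gamma-mixed Poisson exposure, constant 50% per-exposure VE, time in 30-day
# intervals; a larger sample than the paper's to tame Monte Carlo noise.
n, n_intervals = 20_000, 24
mean_exposure, shape = 5.0, 0.5   # exposures per interval ~ Poisson(gamma)
p, true_ve = 0.004, 0.50          # per-exposure disease probability, VE

vaccinated = rng.random(n) < 0.5
lam = rng.gamma(shape, mean_exposure / shape, size=n)
p_i = np.where(vaccinated, p * (1 - true_ve), p)

events = np.zeros((2, 2))    # [vaccine status][half of follow-up]
persons = np.zeros((2, 2))   # person-intervals at risk
at_risk = np.ones(n, dtype=bool)

for j in range(n_intervals):
    half = j // 12
    w = rng.poisson(lam)                                      # exposures
    event = at_risk & (rng.random(n) < 1 - (1 - p_i) ** w)    # first events
    for v in (0, 1):
        grp = at_risk & (vaccinated == v)
        events[v, half] += event[grp].sum()
        persons[v, half] += grp.sum()
    at_risk &= ~event

hazard = events / persons
for half, label in enumerate(("year 1", "year 2")):
    print(f"{label}: VE from first-event hazard ratio = "
          f"{1 - hazard[1, half] / hazard[0, half]:.0%}")
```

With these settings, the more-exposed unvaccinated subjects are depleted from the risk set first, so the second-year estimate falls visibly below the constant 50% per-exposure VE.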

When heterogeneity in the intensity of exposure varied continuously, the proportional hazards VE estimated without or with a random effect was biased. Our repeated measures estimator of individual efficacy performed substantially better and had negligible bias (figure 2). Results with our estimator and the Anderson–Gill approach were comparable. Discrepancies between our method and the Anderson–Gill approach are likely to be due to the discretization of time. While VE estimated by our method is based on the ratio of cumulative hazards over the specified time period, VE estimated by the Anderson–Gill method is based on the ratio of instantaneous hazards over the time period. Overall, our repeated measures model constituted a valid alternative to the Anderson–Gill approach, and in fact would be a more appropriate method to analyse data in which information about time to event is known in discrete time intervals (interval censoring). Moreover, although our estimator was based on discrete time to event, the half-widths of the 95% Wald CI of the estimator proposed here and that from the Anderson–Gill method were very similar. The difference was negligible in all simulation scenarios and at most 0.001.

Figure 2 Comparison of the per cent bias in VE under heterogeneity of the intensity of exposure, as a function of the expected value of the mixing distribution of exposure, in 250 simulations each with a sample size of 1200 subjects. VE was estimated via modelling of time to first infection/disease (using proportional hazards and Gaussian frailty models), via modelling of time to all infection/disease (using the Anderson–Gill model) and via a repeated measures model with a complementary log–log link and GEE approach. Exposure was generated assuming a continuous mixture of Poisson distributions with the specified mean intensity λ (and variance ϕλ). Dashed line, first-event frailty model; dot-dashed line, repeated measures; dotted line, Anderson–Gill; solid line, first-event proportional hazards.

4. Reanalysis of a malaria vaccine trial

We reanalyse the Brazilian trial of the SPf66 vaccine (Urdaneta et al. 1998) to compare the VE estimated through proportional hazards analysis (of first event only) with the VE estimated through our repeated measures estimator (of first and second events) implemented with a GEE approach. The SPf66 vaccine was expected only to protect vaccinees from disease, without affecting transmission (Graves et al. 1998; Graves & Gelband 2001).

In the Brazilian SPf66 vaccine trial, 58% of the study population had immigrated to the trial area in the two years prior to the trial, suggesting heterogeneous susceptibility to malaria among the study subjects. Although no mosquito surveys were performed in the trial area during the trial period, studies in nearby regions indicated that the intensity of exposure in the region ranged from 0.4 to 2.1 infected mosquito bites/person/night, depending on the season (Klein & Lima 1990; Urdaneta et al. 1996). A total of 800 individuals were randomized to vaccination (400) or placebo (400) and 572 (287 vaccinees versus 285 non-vaccinees) received three doses. As 32 of these individuals were lost to follow-up immediately after the third dose, the final analysis includes 540 study subjects. The study lasted 18 months after the third dose and recorded first and second malaria episodes.

Among the 540 subjects, 161 had one episode of falciparum malaria (the type of malaria with higher morbidity), and of those, 44 presented with a second episode. The original trial analysed time to first infection/disease episode through life-table methods, with the hazard for each group estimated as the ratio of the number of cases at the end of the follow-up period to the person–time at risk. The trial reported a crude VE of 14.1% (95% CI: −17.0% to 36.9%) for the first episode of malaria.

We performed a survival analysis for first episode using proportional hazards models, and a repeated measures analysis for first and second episodes, in monthly intervals, with GEEs using a working independence covariance assumption. In the repeated measures analysis, we assumed that the intensity of exposure (λ) was constant over time and, based on the previous entomological studies done in the area, equal to 30 infected bites/person/month. For this example, we chose three categorical covariates: vaccination, time living in the trial area, and age.

For each model, the estimated vaccine effect was low and did not reach statistical significance (table 1). Individuals who had lived in the area for more than two years had a lower susceptibility or probability of infection per exposure contact than those who were living in the area for two years or less. Neither the main effect of age nor its interaction with VE was significant in any analyses, indicating that baseline susceptibility and VE were relatively homogeneous across age groups. Adjusting for age or time living in the area did not appreciably change the point estimates of VE, suggesting no confounding due to these factors.

Table 1 Point and CI estimates of VE and possible predictors of individual susceptibility, based on a survival analysis with proportional hazards model (using first episode only as outcome), and a repeated measures analysis with GEE of the Brazilian SPf66 malaria vaccine trial. Covariates include vaccination status, years living in the trial area prior to the trial and age.


Statistical Modeling, Causal Inference, and Social Science

The 94.5% efficacy announcement is based on comparing 5 of 15k to 90 of 15k:

On Sunday, an independent monitoring board broke the code to examine 95 infections that were recorded starting two weeks after volunteers’ second dose — and discovered all but five illnesses occurred in participants who got the placebo.

Similar stuff from Pfizer etc., of course.

Unlikely to happen by chance but low baselines.

My [Gaurav’s] guess is that the final numbers will be a lot lower than 95%.

The data: the treatment group is 5 out of 15k and the control group is 90 out of 15k. The base rate (control group) is 0.6%. When the base rate is so low, it is generally hard to be confident about the ratio (1 – (5/95)). But noise is not the same as bias. One reason to think 94.5% is an overestimate is simply that 94.5% is pretty close to the maximum point on the scale.

The other reason to worry about 94.5% is that the efficacy of a flu vaccine is dramatically lower. (There is a difference in the time horizons over which effectiveness is measured for flu and for Covid, with Covid’s being much shorter, but it is useful to take that as a caveat when trying to project the effectiveness of the Covid vaccine.)
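
One way to put numbers on the noise point: conditional on the 95 total cases and (roughly) equal-sized arms, the number of cases in the vaccine arm is binomial, and an exact interval on that split maps into an interval on VE. A sketch, assuming equal person-time in the two arms:

```python
from scipy.stats import beta

# Conditional on n total cases with equal person-time per arm, the vaccine-arm
# count k ~ Binomial(n, pi) with pi = RR / (1 + RR) and VE = 1 - RR.
k, n = 5, 95

# Clopper-Pearson (exact) 95% interval on pi.
pi_lo = beta.ppf(0.025, k, n - k + 1)
pi_hi = beta.ppf(0.975, k + 1, n - k)

for label, pi in [("point", k / n), ("upper", pi_lo), ("lower", pi_hi)]:
    rr = pi / (1 - pi)
    print(f"{label}: VE = {1 - rr:.1%}")
```

Even with only 95 cases, the exact interval keeps VE far above the flu-vaccine range; that is the "unlikely to happen by chance" part, while the "low baselines" part is why the point estimate itself is soft.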

79 Comments

I saw the 90 out of 95 cases too, and began wondering: are these typical sample sizes for vaccine trials? For example, when flu vaccines are trialed, do they get 100 cases in the sample or a lot more?

Most of the people in the trial didn’t get infected, and therefore don’t tell us anything about how effective the vaccine is. This is the advantage of a Human Challenge Trial – deliberately infecting people creates more statistical power. The disadvantage is… well the disadvantage is obvious.

People not getting the virus in the trial certainly do tell us about the vaccine’s efficacy?

From my reading of the interim results, it seems like the vaccine companies really do only care about the infected participants. What could you learn from uninfected people other than the proportion of uninfected in each group, which is just another way of getting the proportion of infected people?

It’s even less useful than that, since the proportion of infected people in a vaccine trial is unlikely to be comparable to the entire population — people who sign up for vaccine trials will be those who take COVID seriously.

Not if people in both the treatment and placebo group fail to get it. When you look at the trial results, for example, only one American Indian/Alaskan native in the placebo got the virus and none in the vaccine group did. Would you be comfortable saying it has 100% efficacy for that group? Certainly not. We need variation to estimate an effect.

To be fair, these are interim results. The number infected will be much higher by the end of the study.

151 cases for the EUA. As Pfizer said when they announced, the recent spike in new cases means targets will be hit more quickly so both firms expect to file for the EUA as soon as the timeframe for the safety aspect of the trial has passed. As I understand it, that is.

Interesting. I would be interested to see where the 151 number comes from.

From the study protocol – https://www.modernatx.com/sites/default/files/mRNA-1273-P301-Protocol.pdf – they say that “a total of 151 COVID-19 cases will provide 90% power to detect a 60% reduction in hazard rate (60% VE), rejecting the null hypothesis H0: VE≤30%”
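
A back-of-envelope check of that power statement, assuming equal person-time in the two arms so that the vaccine-arm case count is binomial given the total number of cases:

```python
from scipy.stats import binom

# With equal person-time, a case lands in the vaccine arm with probability
# pi = (1 - VE) / (2 - VE). Test H0: VE <= 30% one-sided at 2.5%.
n = 151
pi_null = (1 - 0.30) / (2 - 0.30)   # ~0.412 under VE = 30%
pi_alt = (1 - 0.60) / (2 - 0.60)    # ~0.286 under VE = 60%

# Largest critical value c with P(K <= c | pi_null) <= 0.025.
c = int(binom.ppf(0.025, n, pi_null))
if binom.cdf(c, n, pi_null) > 0.025:
    c -= 1

power = binom.cdf(c, n, pi_alt)
print(f"critical value {c}, power ~ {power:.0%}")  # roughly 90%
```

So 151 cases is about what is needed for 90% power against the 30% null when the true VE is 60%; with a true VE above 90%, far fewer cases clear the same bar, which is presumably why the interim analyses could already be so definitive.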

So would they still need 151 since the efficacy seems much higher than 60%?

If the efficacy is really 90%+, then you could rule out the “below 50%” that would prevent FDA approval with smaller sample size, couldn’t you?

I don’t think it is meaningful to compare the COVID vaccine to flu vaccines. Flu vaccines are developed for what are projected to be the most prominent strains among many circulating in the upcoming season. If this projection is wrong, or the most prominent strains are not as prominent as competing strains, the efficacy of the vaccine drops substantially for that year. To the best of my knowledge that is not currently a problem with COVID.

Yeah, flu is a very different situation! I don’t see any reason to expect COVID vaccines to be limited by the same factors as flu vaccines.

I don’t buy that as a statistical argument. One of the things about conditioning on the total number of cases in the analysis is that it removes the base rate parameter. Also, I believe that they may have meant 1 – 5/90 for the ratio. I have an updated post that may be of interest.

Perhaps you can make the case from a biological point of view that the rate will likely come down, though COVID is fundamentally different from the flu and mutates much more slowly.

It will come down. No one under 65 with comorbidities reported symptoms; no one over 65 with comorbidities was included in the study; and only 10-15 healthy 65+ year olds reported symptoms vs 75-80 younger than 65.

In the animal studies of SARS vaccines, the young healthy animals were protected while the aged animals were not.

Again, I think you can make a biological argument, which you do. I don’t think there is any statistical reason to believe this based on the study alone. I would note that only 13% of the population is over 65, so the subgroup of those with co-morbidities can’t move the needle of overall efficacy that much.

Also, just because they didn’t specify the co-morbid counts doesn’t mean that none of the subjects with comorbidities were cases.

40% of the population is obese alone. Apparently none of those included in the study got infected in the placebo group.

I think they would be bragging about it if the stats for comorbid young subjects looked good for the vaccine… Could be all 5 in the vaccine group were comorbid/young.

“Apparently none of them included in the study got infected in the placebo group.”

*Citation needed.* You can’t just say that this is true because they didn’t explicitly say that it wasn’t true. We have no idea how many of the cases in either arm have any of the relevant comorbidities.

The 95 COVID-19 cases included 15 older adults (ages 65+) and 20 participants identifying as being from diverse communities (including 12 Hispanic or LatinX, 4 Black or African Americans, 3 Asian Americans and 1 multiracial).

They mention every other subgroup included in the study, why not the young/comorbid subjects? It is quite a glaring omission to me, especially since that is the only place where I expected to see an issue beforehand.

They did report 11 severe cases in the placebo arm; given what we know about severe COVID infections, it's very unlikely, almost impossible, that all of them were free of comorbidities.

I really do not think there is any reason to expect the risk for the young with comorbidities to be greater than for the old without comorbidities.

In fact, there is pretty strong evidence against it: the observed age disparity in COVID deaths in the US is simply far too great, given how common comorbidities – especially obesity and asthma – are in the younger US population.

So even if *you* specifically were concerned with that group, there’s no reason for the vaccine developers to focus on it or even mention it.

Yes, follow the links in my post. They included well defined subgroups, and talked about all except the subjects with comorbidities in their press release.

A peer reviewer who failed to ask about that would be as incompetent as can be.

Press releases aren’t necessarily peer reviewed…

10-15 elderly people infected out of 90 actually doesn’t seem that low: a vaccine trial population would be expected to exclude people who think the virus is a hoax or “just a cold/flu”, so one would expect the elderly (at much greater risk) to be more cautious overall.

And the young are also more numerous (US median age is 38). Even in a trial of just adults, wouldn’t one expect a majority to be under 65, especially as minimum health standards might also exclude more of the elderly?

Also, I'm not sure how much we can compare SARS vaccine studies from 16-17 years ago to this; the Moderna and Pfizer mRNA vaccines are a rather new technology. Genetic stuff has come a really long way since the early 2000s.

What matters is having few/weak antibodies to the spike protein. Whatever triggered them doesn't matter, other than perhaps the strength of the immune response and the rate of waning. Exposure to a future SARS-3 in a few years is another big risk factor here.

Eh… maybe? But I don’t think there is anything like the certainty you are suggesting that ADE will be a thing for SARS-COV-2, much less what the risk factors for it would be.

Was ADE shown for SARS-1 in vivo, or only in vitro?

And future viruses that haven’t even evolved yet are *by definition* unpredictable!

"Was ADE shown for SARS-1 in vivo, or only in vitro?"

I've linked direct quotes about this multiple times on here:

Am I reading something wrong? The first and third of those *do* seem to be in vitro (cell line) studies.

The second one is in mice, granted, but I am not sure “the vaccine failed to protect aged animals in which augmented immune pathology was also observed, indicating the possibility of the animals being harmed because of the vaccination” is equivalent to “antibody-dependent enhancement did in fact happen”, much less that it would happen in humans.

Yes, it only quotes one of the many in vivo studies.

Look, I’m not claiming any special expertise (which I don’t have). But none of those studies that you are quoting really seem to answer the question I was asking: two of them are not in vivo, and the in vivo one may not really demonstrate ADE.

And I’m not sure that vaccine type is that irrelevant. The RSV vaccine issues Daniel Lakeland mentioned on another thread may have been related to that (the paper I saw on it didn’t seem terribly clear, but that may be because it happened in the 60s and the knowledge of the time was not entirely up to par in terms of understanding what happened).

But that might have had some white-blood-cell involvement rather than being “purely” antibody-caused.

I don't have nearly the expertise to judge this, but I really don't think this is nearly as certain/solid as you suggest.

All I've ever said since February is: repeat the exact same experiments done for SARS and see. It's now mid-November and still nothing.

It seems to me to be pretty likely that that’s not been done because it in fact would not be relevant/useful.

Otherwise one would have to assume that many research groups in many different countries are all making the exact same errors.

I.e., if this is obvious to you, why isn't it obvious to them?

I complain a lot about US drug development/approval issues, but those are fairly specific to the way the FDA does things – a single nation with a specific regulatory structure that creates incentives (not always positive ones). In this case many nations with different structures are involved.

"It seems to me to be pretty likely that that's not been done because it in fact would not be relevant/useful."

It was always considered relevant/useful before COVID. And it doesn't cost much to do the study, given the money being thrown around.

Are you aware of what happened last time hysteria caused a rushed vaccine?

It was only stopped when one of the main proponents publicly vaccinated his grandchildren; one died and the other was paralyzed:
https://www.nytimes.com/1955/05/05/archives/bulbar-polio-kills-doctors-grandson.html

Will the politicians, Bill Gates, and pharma executives/scientists publicly vaccinate their at-risk relatives?

Then what is your explanation for why it hasn’t been done? COVID vaccine efforts are too widespread/decentralized for it to be plausible that everyone is making the same “obvious” mistake.

As for the polio vaccine, I really don’t think problems that happened in *the 1950s* have any relevance. Biological understanding in the 50s was pitifully limited, DNA was just being figured out. That would be like comparing safety of modern aircraft to World War I-era ones.

If ADE was likely to be a real problem with COVID, we’d see a lot more trouble with natural reinfection than we do.

>>What matters is having few/weak antibodies to the spike protein.

For what it's worth, this may be true in mice for SARS but may not carry over to COVID-19:

“one figure to take home is that 90% of the subjects were still seropositive for neutralizing antibodies at the 6 to 8 month time points. The authors point out that in primate studies, even low titers (>1:20) of such neutralizing antibodies were still largely protective, so if humans work similarly, that’s a good sign. An even better sign, though, are the numbers for memory B cells”

If low titers are still protective, the problem may not exist for this disease.
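
Since the quote above cites a roughly 1:20 protective titer, here is a toy version of the standard extrapolation: fit an exponential decay to geometric-mean titers and project when they cross the threshold. Every titer value below is invented for illustration.

import numpy as np

months = np.array([1.0, 3.0, 6.0, 8.0])
titers = np.array([320.0, 250.0, 180.0, 150.0])    # hypothetical GMTs

# Exponential decay: log(titer) is linear in time.
slope, intercept = np.polyfit(months, np.log(titers), 1)
threshold = 20.0                                   # the 1:20 level cited above
print((np.log(threshold) - intercept) / slope)     # ~26 months at these made-up rates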

One thing that's clear is that the baseline case rate assumed when designing the trials is way too low. In the Moderna and AstraZeneca protocols, the base rate is assumed to be 0.7% over six months. It's pretty clear they are seeing that level over just a few weeks, so the base rate is off by a huge margin. If they had assumed, say, a 5% base rate in the design, wouldn't the interim analysis require more cases?

Why? What you need is a sufficient number of cases. If incidence is higher than expected and you can get there in six weeks rather than six months, you are happy to have your results earlier. If they had assumed a higher base rate, maybe they would have enrolled fewer people in the trial (on the other hand, you need lots of people for the safety endpoints anyway, whatever the incidence).

In classical design (no interim analysis), the closer the base rate is to 50%, the higher the required sample size – so when multiplied by a higher base rate, the # of cases would have been higher, not lower.

Isn’t the design just “go until we get N total cases across both arms”… in which case the base rate is just to estimate how many people are needed to get N cases in a reasonable time?

I think this is the same thing Carlos said, so obviously I'm not following. If you could elaborate a little I'd appreciate it.

In the design phase, how do they come up with N total cases? Is that a function of the assumed baseline case rate?

I don't follow you. Forget the interim analysis. If you decide you need 200 cases to look at the split vaccine/placebo and be happy with the inference you make about the vaccine efficacy, why does it matter whether you get those 200 cases in six weeks or six months [1]? Why would you require more cases if 200 are enough? It's also possible that I have misunderstood your previous comment entirely.

[1] Apart from the insight you may get about duration.

You guys are right. I just realized I got the direction flipped.

Carlos: “If you decide you need 200 cases to look at the split vaccine/placebo and be happy with the inference you make…”

But how do they decide they need 200 [or whatever the real number is] cases?

Moderna: “Under the assumption of proportional hazards over time and with 1:1 randomization of mRNA-1273 and placebo, a total of 151 COVID-19 cases will provide 90% power to detect a 60% reduction in hazard rate (60% VE), rejecting the null hypothesis H0: VE ≤ 30%, with 2 IAs at 35% and 70% of the target total number of cases using a 1-sided O’Brien-Fleming boundary for efficacy and a log-rank test statistic with a 1-sided false positive error rate of 0.025.”

Pfizer: “Under the assumption of a true VE rate of ≥60%, after the second dose of investigational product, a target of 164 primary-endpoint cases of confirmed COVID-19 due to SARS-CoV-2 occurring at least 7 days following the second dose of the primary series of the candidate vaccine will be sufficient to provide 90% power to conclude true VE >30% with high probability.”

Janssen: “The study TNE is determined using the following assumptions: a VE for molecularly confirmed, moderate to severe/critical SARS-CoV-2 infection of 60%, approximately 90% power to reject a null hypothesis of H0: VE≤30%, type 1 error rate α = 2.5% to evaluate VE of the vaccine regimen (employing the sequential probability ratio test [SPRT] to perform a fully sequential design analysis detailed in Section 9.5.1), a randomization ratio of 1:1 for active versus placebo. (…) Under the assumptions above, the total TNE to compare the active vaccine versus placebo equals 154, based on events in the active vaccination and placebo group, according to the primary endpoint case definition of moderate to severe/critical COVID-19 (Section 8.1.3.1).”

AstraZeneca: “Approximately 33 000 participants will be screened such that approximately 30 000 participants will be randomized in a 2:1 ratio to receive 2 IM doses of either 5 × 10^10 vp (nominal, ±1.5 × 10^10 vp) AZD1222 (the active group, n = approximately 20 000) or saline placebo (the control group, n = approximately 10 000) 4 weeks apart, on Days 1 and 29. The sample size calculations are based on the primary efficacy endpoint and were derived following a modified Poisson regression approach (Zou 2004). (…) For the primary efficacy analysis, approximately 150 events meeting the primary efficacy endpoint definition within the population of participants who are not seropositive at baseline are required across the active and control groups to detect a VE of 60% with > 90% power. These calculations assume an observed attack rate of approximately 0.8% and are based on a 2-sided test, where the lower bound of the 2-sided 95.10% CI for VE is required to be greater than 30% with an observed point estimate of at least 50%.”

AstraZeneca is the only one that mentions the attack rate (the percentage of an at-risk population that contracts the disease during a specified time interval). It's not really used to determine that 150 cases are required; it provides the link between the 150 cases and the 30,000 participants.
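
As a back-of-envelope check on that link, using the numbers quoted from the AstraZeneca protocol (150 cases, ~0.8% attack rate, 2:1 randomization, 60% assumed VE). The protocol's modified Poisson calculation differs in detail; this is only the rough arithmetic.

target_cases = 150
attack_rate = 0.008                    # assumed attack rate among the unvaccinated
ve_assumed = 0.60
frac_active, frac_placebo = 2/3, 1/3   # 2:1 randomization

# Expected cases per enrolled participant, with VE thinning the active arm.
cases_per_participant = attack_rate * (frac_placebo + frac_active * (1 - ve_assumed))
print(target_cases / cases_per_participant)   # ~31,000, near the 30,000 enrolled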

If we got close to a base rate of 50%, we would be in a very different kind of emergency. Right now NYC is closing schools on a positivity rate of 3% *among those tested*. Unless the bias in testing is that people with infections are less likely to get tested, 50% is very far away. Of course, I keep thinking about the fact that the plague killed 25% of the population of Europe over a number of years. The death rate from COVID in North Dakota is 1/1000 and still going up, and that's just a few months.

The reason flu vaccine efficacy is so much lower is that there are multiple circulating strains of flu, not all of which are targeted by a given vaccine, and any given year’s vaccine is based on epidemiologists’ best guess about which strains will be circulating that year. The molecular targets for flu mutate/adapt more than the spike protein for coronavirus. So it is not an apples to apples comparison, for multiple reasons. Many other vaccines have higher efficacy rates.


A 69-year-old receives a COVID-19 vaccine in a phase 3 trial. It is important to understand how different groups respond to COVID-19 vaccines to establish the most effective deployment strategy.

The elderly and people with comorbidities are at greatest risk of severe coronavirus disease 2019 (COVID-19). A safe and effective vaccine could help to protect these groups in two distinct ways: direct protection, where high-risk groups are vaccinated to prevent disease, and indirect protection, where those in contact with high-risk individuals are vaccinated to reduce transmission. Influenza vaccine campaigns initially targeted the elderly, in an effort at direct protection, but more recently have focused on the general population, in part to enhance indirect protection. Because influenza vaccines induce weaker, shorter-lived immune responses in the elderly than in young adults, increasing indirect protection may be a more effective strategy. It is unknown whether the same is true for COVID-19 vaccines.

For COVID-19, age-structured mathematical models with realistic contact patterns are being used to explore different vaccination plans (1, 2), with the recognition that vaccine doses may be limited at first and so should be deployed strategically. But as supplies grow large enough to contemplate an indirect protection strategy, the recommendations of these models depend on the details of how, and how well, these vaccines work and in which groups of people. How can the evidence needed to inform strategic decisions be generated for COVID-19 vaccines?
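
To illustrate the direct-versus-indirect trade-off these models weigh, here is a toy two-group sketch. It is not any published model: the contact rates, group sizes, and vaccine parameters are all invented, and the vaccine is treated as all-or-nothing.

import numpy as np

def epidemic(cov_young, cov_old, ve=0.9, days=365):
    N = np.array([0.8, 0.2])              # population fractions: young, old
    contact = np.array([[0.30, 0.10],     # made-up transmission rates per day
                        [0.10, 0.05]])
    gamma = 0.1                           # recovery rate (10-day infection)
    S = N * (1 - ve * np.array([cov_young, cov_old]))   # all-or-nothing vaccine
    I = np.array([1e-4, 1e-4])
    cum = I.copy()
    for _ in range(days):
        new_inf = S * (contact @ (I / N))  # force of infection on each group
        S = S - new_inf
        I = I + new_inf - gamma * I
        cum = cum + new_inf
    return cum                             # cumulative infections per group

print(epidemic(0.0, 0.7))   # direct strategy: vaccinate 70% of the old
print(epidemic(0.7, 0.0))   # indirect strategy: vaccinate 70% of the young

With these invented numbers, the indirect strategy can protect the old better than vaccinating them directly, because the young dominate transmission; with other numbers the ranking flips, which is exactly why the details of how, and how well, the vaccines work matter.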

Phase 3 vaccine trials are designed to assess individual-level efficacy and safety. These trials typically focus on a primary endpoint of virologically confirmed, symptomatic disease to capture the direct benefit of the vaccine that forms the basis for regulatory decisions. Secondary endpoints, such as infection or viral shedding, provide supporting data, along with analyses of vaccine efficacy in subgroups. Nonetheless, unanswered questions about COVID-19 vaccine characteristics are likely to remain even after trials are completed. First, trials are typically not powered to establish subgroup-specific efficacy, yet the performance of the vaccine in high-risk groups affects the success of a direct-protection strategy. Second, can vaccines prevent infection or reduce contagiousness? This matters for achieving indirect protection. Expanding ongoing efforts or planning new studies may generate the data needed to address these questions.

For estimating subgroup-specific efficacy, randomized controlled trials can provide early estimates, yet these will have wide confidence intervals, leaving substantial uncertainty about true effects in high-risk subgroups. This uncertainty would be greater in interim analyses that are based on the number of events across the whole trial population and may be exacerbated if high-risk participants are more cautious and have lower exposure to infection, reducing their contribution to the efficacy estimates.
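
A quick sketch of how wide those subgroup intervals can be, reusing the conditional-binomial mapping theta = (1-VE)/(2-VE) under 1:1 randomization. The subgroup split below is hypothetical, chosen only to show the effect of a small case count.

from scipy.stats import beta

def ve_ci(vaccine_cases, total_cases):
    # Exact 95% CI for theta, mapped to VE = 1 - theta / (1 - theta).
    lo = beta.ppf(0.025, vaccine_cases, total_cases - vaccine_cases + 1) if vaccine_cases else 0.0
    hi = beta.ppf(0.975, vaccine_cases + 1, total_cases - vaccine_cases)
    to_ve = lambda th: 1 - th / (1 - th)
    return to_ve(hi), to_ve(lo)

print(ve_ci(5, 95))   # a whole-trial split: roughly (0.87, 0.98)
print(ve_ci(1, 16))   # a hypothetical elderly subgroup: roughly (0.59, 1.0)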

There are several strategies to address subgroup-specific efficacy, some of them already in place. Ensuring that high-risk adults are well represented in the trial population can be achieved by setting minimum enrollment targets for older adults and/or adults with comorbidities. Another consideration relates to the stopping rules for interim analyses in trials. Vaccine trials with early interim analyses that are planning to discontinue randomization and vaccinate placebo participants after declaring efficacy are most prone to subgroup uncertainty. To improve the precision of efficacy estimates in high-risk subgroups, regulators could insist that interim analyses be performed only after a certain number of confirmed disease cases occur in these subgroups, in addition to existing monitoring of the overall number of events in the study.

Trials that maintain blinded follow-up to assess long-term efficacy and safety may also generate more-reliable evidence on age-specific effects. For example, the World Health Organization's Solidarity Vaccines Trial will preserve placebo-controlled follow-up through month 12 or when an effective vaccine is deployed locally (3). However, depending on where the trials are being done and whether the vaccine becomes rapidly available in sufficient quantities after emergency-use authorization in the population undergoing the trial, it may become unethical and/or impractical to ask participants in some subgroups to forego access to an available vaccine. For vaccine candidates evaluated in multiple trials, such as the Oxford-AstraZeneca vaccine being studied in the United Kingdom, South Africa, Brazil, and the United States, meta-analyses can synthesize results across locations to improve precision of subgroup-specific effect estimates.

Ideally, the phase 3 trials in progress will identify more than one safe, effective vaccine for regulatory approval and deployment. Postapproval studies will then take on an important role for continued assessment of vaccine effectiveness. These may include individual- or community-level randomized trials to compare different active vaccines without a control arm, as in the U.S. Department of Defense's individually randomized Pragmatic Assessment of Influenza Vaccine Effectiveness in the DoD (PAIVED) trial, which assesses the relative merits of three licensed influenza vaccines (NCT03734237).

Another approach to amass evidence on subgroup-specific efficacy is post-approval observational studies. This includes active surveillance of high-priority cohorts from, for example, nursing homes or assisted living facilities, as has been done for influenza. This also includes test-negative designs, which are routinely used to assess vaccine effectiveness (4). Symptomatic individuals who test negative for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) function as controls for test-positive cases, and their vaccination status is compared, adjusting for selected confounders. Test-negative designs can be integrated into outpatient testing in the community (5) or use emergency department visits to estimate vaccine effectiveness against severe disease (6). To rapidly establish these systems, researchers can leverage ongoing influenza surveillance. Conveniently, these programs can simultaneously monitor more than one vaccine, enabling assessment of their relative merits.
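
A minimal sketch of the test-negative estimate, with invented counts: vaccine effectiveness is taken as one minus the odds ratio of vaccination among test-positives versus test-negatives (before any confounder adjustment).

# Hypothetical counts among symptomatic patients presenting for testing.
vacc_pos, unvacc_pos = 20, 80    # SARS-CoV-2 test-positive (cases)
vacc_neg, unvacc_neg = 60, 90    # test-negative (controls)

odds_ratio = (vacc_pos / unvacc_pos) / (vacc_neg / unvacc_neg)
print(1 - odds_ratio)            # unadjusted VE estimate: 0.625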

A key limitation of observational studies is confounding. There may be many differences between individuals who do and do not get vaccinated, which may create noncausal correlations between vaccine status and outcomes. Although such biases can threaten any observational study of vaccine effectiveness, there are some approaches to detect such biases and reduce their magnitude (7, 8).

The clearest evidence of indirect protection is from a vaccine that prevents infection entirely, thereby reducing transmission. These data will be generated in efficacy trials that include infection as a secondary endpoint. This endpoint is measured by a specialized assay to distinguish an infection-induced response from a vaccine-induced antibody response. A vaccine can provide indirect protection even if it does not fully prevent infection (see the figure). Vaccines that reduce disease severity can also reduce infectiousness by reducing viral shedding and/or symptoms that increase viral spread (e.g., coughing and sneezing). A worst-case scenario is a vaccine that reduces disease while permitting viral shedding: this could fail to reduce transmission or conceivably even increase transmission if it suppressed symptoms.

To assess a vaccine's impact on infectiousness, some phase 3 trials examine the amount or duration of viral shedding in laboratory-confirmed, symptomatic participants by home collection of saliva samples and frequent polymerase chain reaction (PCR) testing. However, this would not capture any change in viral shedding for asymptomatic participants. Moreover, serology tests detect previous infection and cannot reconstruct shedding during active infection. To measure viral load in both symptomatic and asymptomatic participants, it is necessary to conduct frequent (e.g., weekly) viral testing, irrespective of symptoms, to capture participants during their period of acute infectiousness. The Oxford-AstraZeneca vaccine trial is testing participants in the United Kingdom for the virus weekly regardless of symptoms, but this is not being done in the other trials for which protocols have been released. Even weekly testing will not give detailed information about the effect of the vaccine on viral shedding, and the relationship between viral loads and infectiousness is unknown; nonetheless, this approach is likely to provide some evidence if viral loads are on average lower among vaccinated people. Human challenge vaccine studies, in which individuals in a randomized controlled trial are deliberately exposed to the virus, could generate high-quality data on the effect of vaccines on viral shedding (9).

Vaccines provide direct protection by reducing susceptibility to disease or infection. Vaccines provide indirect protection by reducing the number of people infected in a population or their infectiousness. These vaccine effects can be assessed in clinical trials by measuring the efficacy to prevent disease, to prevent infection, and to reduce infectiousness, as well as in studies to assess indirect effects of the vaccine (15).

Other approaches exist to directly estimate infectiousness without the need to extrapolate from viral load. Add-on household studies can supplement efficacy trials. Investigators can follow household members or other close contacts of infected participants to assess the vaccine's effect on infectiousness, as has been implemented for the respiratory disease pertussis (also called whooping cough) (10). Viral sequencing could be used within the trial to link infector-infectee pairs and better estimate indirect effects (11). Another strategy is to design cluster-randomized trials in which indirect effectiveness is a primary outcome. In influenza vaccine trials, health care workers at nursing homes were cluster-randomized to be offered vaccine or not, and the endpoints were mortality, influenza-like illness, or influenza infection in the patients they cared for (12).

Observational studies may also be helpful, but, in general, measuring indirect effects of vaccines is even harder than detecting direct effects. It is urgent, therefore, to obtain evidence on how each candidate vaccine affects infectiousness either before approval or soon after, when scarcity may justify randomized distribution of a vaccine.

Other open questions about the rapidly developed COVID-19 vaccines include long-term safety (indicating the critical need for pharmacovigilance activities), the duration of vaccine protection, the efficacy of a partial vaccination series or of lower doses (13), the vaccine's level of protection against severe infection and death, efficacy by baseline serostatus, and the potential for the virus to evolve to escape vaccine-induced immunity. The answers to such questions inform the optimal use of any vaccine.

Availability of a COVID-19 vaccine will initially be limited, and so several expert committees are exploring strategic prioritization plans. Health care workers are a common first-tier group (14), which in turn preserves health care systems by protecting those who run them. A next priority is to directly protect those who are at highest risk of death or hospitalization when infected: specifically, those over 65 and people with certain comorbid conditions. This strategy may be optimal for reducing mortality even if the vaccine is somewhat less effective in these groups (2). But if a vaccine offers little to no protection in high-risk groups yet is able to reduce infection or infectiousness in younger adults, an indirect strategy could be preferred as vaccine supplies become large enough (1, 2). A worst-case scenario for an effective vaccine is one that reduces disease in younger adults but provides neither direct nor indirect protection to high-risk groups, leaving the most vulnerable at risk. Knowing these vaccine characteristics is important when evaluating the relative merits of other products. Fortunately, there are many vaccine candidates in development that use a mixture of innovative and existing technologies. Although vaccines may vary in their characteristics, having reliable evidence on direct and indirect protection can help plan how to use these vaccines in a coordinated way.



Modes and Sites of Vaccine Delivery

An increasing number of vaccine vectors have become available to induce potent humoral or cellular immunity. Gene-based delivery of vaccine antigens effectively elicits immune responses by synthesizing proteins within antigen-presenting cells for endogenous presentation on major histocompatibility complex class I and II molecules. DNA-expression vectors, replication-defective viruses, or prime-boost combinations of the two (31-35) have proved to be effective in eliciting broadly neutralizing antibodies, especially for influenza viruses (36,37).

Prime-boost vaccine regimens that use DNA and viral vectors (33) have increased both humoral immunity and memory CD8 T-cell responses (38). For example, a study of a vaccine regimen consisting of a poxvirus vector prime and protein boost (known as the RV144 trial) provided evidence that the vaccine prevented HIV-1 infection among persons in Thailand (39). Eliciting immune responses at portals of infection (e.g., in the respiratory and intestinal epithelial surfaces for pathogens such as influenza virus and rotavirus, respectively) may generate more efficient mucosal immunity. Similarly, waning vaccine responses require periodic boosting at defined times, requiring more integrated management of vaccines at all ages. Immunization in the elderly is of substantial concern because immune senescence can lead to a decrease in the responsiveness to vaccination (40).


Breaking Down What COVID-19 Vaccine Effectiveness Means

The U.S. now has three safe and effective COVID-19 vaccines being shipped around the country and making it into people's arms. All meet the U.S. Food and Drug Administration's threshold for protecting people from COVID-19 disease. Yet two of them are about 95% efficacious, while another is 66% efficacious, which may make it tempting to rank them, and assume that people receiving the 66% efficacious shot are somehow less protected against COVID-19.

That's not the case, however, since it's not really possible to compare the vaccines to each other. Below, we've laid out what goes into that so-called vaccine efficacy number, and how you should evaluate the currently authorized vaccines:

To start, there's a difference between efficacy and effectiveness. "Efficacy" refers to the results for how well a drug or vaccine works based on testing, while "effectiveness" refers to how well these products work in the real world, in a much larger group of people. Most people, however, use them interchangeably.

Next, it's important to understand what these companies were actually measuring to come up with their efficacy numbers. In the case of the COVID-19 vaccines, the researchers were measuring how well their vaccines protected against symptoms of COVID-19. So the vaccine efficacy numbers refer to how well the vaccines lowered people's chance of getting sick with COVID-19. Pfizer-BioNTech's vaccine is 95% efficacious, meaning it reduced vaccinated people's risk of developing COVID-19 symptoms by 95%. It does not mean that 95% of vaccinated people won't get COVID-19 and 5% will.

Similarly, Moderna's vaccine was 94% efficacious in protecting people from symptomatic COVID-19, and Johnson & Johnson's vaccine is 66% efficacious in doing the same.
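
In other words, efficacy is a relative risk reduction. A tiny worked example with invented arm sizes and case counts:

# Hypothetical trial: equal-sized arms, cases observed in each.
n_per_arm = 20000
cases_vaccine, cases_placebo = 8, 160

ve = 1 - (cases_vaccine / n_per_arm) / (cases_placebo / n_per_arm)
print(ve)   # 0.95: a vaccinated person's risk is 5% of an unvaccinated person's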

But that doesn&rsquot tell the whole story, for a few reasons.

First, the vaccines work in different ways. Pfizer-BioNTech and Moderna developed their vaccines using mRNA technology, which involves taking the genetic code for the spike protein of the SARS-CoV-2 virus and encasing it in a fat-based particle that is injected into the body. Once inside cells, those genetic instructions direct the production of copies of the protein, which in turn switches the immune system into action, churning out antibodies, among other activities. If the vaccinated person then gets infected with the actual virus, the body is ready to quickly produce those same antibodies, which can stick to the virus and block it from infecting cells. Johnson & Johnson-Janssen's vaccine uses a different strategy: a weakened cold virus that is reprogrammed to include the code for the spike protein. Once inside the body, the viral genes trigger a similar dedicated response against the virus. Because the vaccines use different ways to alert the immune system, the differing technologies could lead to varying degrees of efficacy.

Second, each vaccine has a different dosing regimen. Pfizer-BioNTech's and Moderna's shots require two doses: in Pfizer-BioNTech's case, the two shots are 21 days apart, and in Moderna's case, they are 28 days apart. Janssen's vaccine is a single shot. Pfizer-BioNTech and Moderna say their shots can stimulate the immune system after a single dose, but the second dose is needed to trigger the maximum response. Janssen's scientists are also testing a booster dose of their vaccine to see if it too might amplify the immune response.

Third, the vaccine companies started recording COVID-19 symptoms at different times after vaccination to see how well their shots could prevent disease. In their respective late-stage studies, Pfizer-BioNTech started recording symptoms seven days after people received the second shot of its vaccine or placebo; Moderna started 14 days after the second dose; and Janssen recorded results from both 14 days and 28 days after its single dose.

Fourth, the vaccines were tested at different times. That's important because of the genetic variants of the virus, some of which are more infectious than the original, that have emerged in recent months, after some companies had already completed initial testing of their vaccines. Pfizer-BioNTech and Moderna both tested their shots through most of 2020, when SARS-CoV-2 had not mutated as much, so most of the study participants who were infected were infected by the same viral strain. Janssen, however, started its large phase 3 trial in September, and included people in the U.K., South Africa and Brazil, the three countries where new, mutated variants of the virus started to spread quickly. Indeed, many of the people in Janssen's trial in those countries were infected with the new variants.

From a scientific point of view, that means Janssen's study provided important clues about how well its vaccine could confront the growing threat of new variants of the virus. That could also explain why the Janssen vaccine's efficacy is lower than those of the Pfizer-BioNTech and Moderna shots. In fact, studies have since shown that among people vaccinated with the Pfizer-BioNTech and Moderna vaccines, the levels of antibodies against the South African variant in particular are up to six-fold lower than the levels against the non-mutated virus.

So for all of these reasons, comparing the three vaccines is a bit like comparing apples to oranges. What's important is that all three meet the FDA's standard of efficacy, which is that vaccines should be at least 50% efficacious in protecting people from disease. And all are extremely effective in protecting people from getting severely ill with or dying from COVID-19, which, ultimately, is what we'd want any vaccine to do.


Newest data suggests second shot provides better protection against variants

Real-world data from the UK posted May 23 by Public Health England showed that Pfizer's and AstraZeneca's COVID-19 vaccines worked better against the variants when two doses were given rather than just one. Both vaccines were 30% effective against symptomatic COVID-19 caused by the Delta variant, first identified in India, three weeks after the first dose.

This rose to between 60% and 88% effectiveness two weeks after the second dose. The two vaccines were 50% effective against symptomatic COVID-19 caused by the Alpha variant, first found in the UK, three weeks after the first dose. This increased to between 66% and 93% two weeks after the second dose.

Dr. Anthony Fauci, President Joe Biden's chief medical advisor, said on June 8 that getting two doses of COVID-19 vaccines would stop the Delta variant from spreading across the US. In the UK, Professor Deborah Dunn-Walters, chair of the British Society for Immunology COVID-19 Taskforce, said in a statement on June 4 that two doses of Pfizer's vaccine were "critical for protection" against emerging strains of the virus.