PCORI News Plus FDA Suggestions on Comparative Effectiveness Research
The Board of Governors of the Patient-Centered Outcomes Research Institute (PCORI), at its 19 November 2012 meeting in Boston, MA, adopted 47 revised methodology standards intended to guide the comparative effectiveness research (CER) funded by PCORI, reported RAPS. At the meeting, the Board also authorized the development of three new CER funding announcements:
- Treatment options for uterine fibroids,
- The safety and benefits of treatment options for severe asthma, and
- Fall prevention in the elderly.
One of the bigger pieces of the Patient Protection and Affordable Care Act (PPACA) was the idea of lowering healthcare costs through comparative effectiveness research. To achieve this goal, PPACA (Sections 6301 and 10602, Public Law 111-148) created PCORI, an independent, non-profit health research organization. PCORI will have an estimated $3 billion over the next decade to fund CER.
PCORI was created to conduct research to provide information about the best available evidence to help patients and their health care providers make more informed decisions. PCORI’s research is intended to give patients a better understanding of the prevention, treatment and care options available, and the science that supports those options. Below is a short summary of some of the revisions made by the PCORI Board.
FDA and Comparative Effectiveness
In addition to the new measures, an interesting post by Robert Temple, MD, Deputy Center Director for Clinical Science at FDA's Center for Drug Evaluation and Research (CDER), noted that "comparative effectiveness approaches [are] not always the most effective." Temple explained that while it is not surprising to want to compare data, "the main difficulty with doing comparative studies is that the effects of most drugs, while valuable, are not very large, so that even showing a difference between the drug and no treatment (a placebo treatment) is not easy."
Moreover, Temple noted that “Showing a difference between two effective drugs, a difference much smaller than the difference between a drug and no treatment, is very challenging and will usually need a very large study.” While comparative data do exist that show advantages for some members of a class over others, “they are not common,” he noted.
He noted that within a class of drugs like antidepressants or antipsychotics, there are relatively few cases where one drug can be said to be better than another, though a few exist. For example, "the antipsychotic drug clozapine is generally thought to be more effective than other drugs in its class; however, it has a toxicity that no other members of the class have. Clozapine causes a marked decrease in the number of certain white blood cells. But it was shown to work in people who did not respond to other drugs. This showing was critical to its approval."
“Some drugs that inhibit platelets in patients with coronary artery disease have been studied in trials that compared the new drugs with an older drug. In some cases, the newer drugs had better effects, reducing the rate of heart attacks, though sometimes causing more bleeding.” Temple also explained that “It is fairly common for a new cancer drug to be more effective than an older therapy.”
“There is a popular class of anti-hypertensive drugs called angiotensin receptor blockers. With considerable effort, and quite large studies, two companies have shown that their drug had a larger effect on blood pressure than other members of the class.”
Temple went on to explain that there is "some hope that instead of doing large, randomized trials to show differences between treatments, observational studies, i.e., epidemiologic data, can be used to do this." There is considerable disagreement on whether this is possible, he explained.
His “belief is that any differences between effective drugs are likely to be very small and can be credibly detected only in randomized trials. Whether pooling the results of multiple controlled trials (meta-analyses) will work is an area for discussion, but to do this you need many trials with the same comparison, a rare occurrence.”
Instead, Temple suggested using “people who have not responded to one drug, and then randomize them back to the drug that didn't work and to another drug. This is particularly interesting when the two drugs are in the same pharmacologic class.” He noted that there are individualized responses to treatments where some people do better on one drug than another very similar drug. Thus, “If it is really true that people who don't respond to one member of a drug class actually respond to another member of the class, we can design the perfect study to test this.”
"For example, you could take people who don't respond to a migraine drug and randomize them back to a new migraine drug, the old migraine drug (the one they didn't respond to), and a placebo. If it's true that there are individual differences that are important, then the new drug ought to be able to show its advantage over the drug that didn't work." Temple noted a few studies that have tried to do this, including clozapine; the first angiotensin-converting-enzyme inhibitor, captopril; and Pfizer's Celebrex.
Finally, Temple noted that an interesting and therapeutically important question to also consider is whether, “if a drug causes a side effect another member of the drug class, or a drug of a different class can be substituted and not cause the side effect.” He noted one experiment in which women who received anti-depressants had poor sexual functions, but when they were switched and randomized, the new drug did not cause the sexual dysfunction.
Standards Associated with Patient-Centeredness
Originally, this standard included only individuals who have the condition or who are at risk of the condition. The revisions now include "other relevant stakeholders," which may include "clinicians, administrators, policy makers, or others involved in health care decision making."
For use of patient-reported outcomes when patients are the best source of information, the Board added that "Caregiver reports may be appropriate if the patient cannot self-report the outcomes of interest. If patient-reported outcome (PRO) measures are not planned for use in the study, justification must be provided." The Board also changed the dissemination measure, now requiring researchers to "Support dissemination and implementation of study results by suggesting strategies, indicating clinical and policy implications, and working with patients or organizations to report results in a manner understandable to each target audience."
Standards for Systematic Reviews
Systematic reviews are used to answer questions based on comprehensive consideration of all the pertinent evidence, and can also identify gaps in the evidence and how they might be resolved. Standards for systematic reviews are currently in use, but credible authorities such as Cochrane and AHRQ vary somewhat in their recommended standards. The Institute of Medicine recently issued standards that draw broadly from available sources. However, not all standards are based on empiric evidence, and reliable systematic reviews may use alternative approaches. The Board added that "The methodology committee endorses these standards but recognizes that there can be flexibility in the application of some of the standards without compromising the validity of the review."
General and Crosscutting Methods for All PCORI Research
The Board added a completely new requirement, Describe Data Linkage Plans, if Applicable. For studies involving linkage of patient data from two or more sources (including registries, data networks, and others), describe (1) each data source and its appropriateness, value, and limitations for addressing specific research aims; (2) any additional requirements that may influence successful linkage, such as information needed to match patients, selection of data elements, and definitions used; and (3) the procedures and algorithm(s) employed in matching patients, including the success, limitations, and any validation of the matching algorithm.
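To make item (3) concrete, below is a minimal sketch of a deterministic matching step between two data sources. The record layouts, field names, and match key are hypothetical illustrations chosen for this example, not anything PCORI prescribes; real linkage would use richer identifiers and a validated algorithm.

```python
# Hypothetical records from two sources; field choices are assumptions for illustration.
registry = [{"id": "R1", "dob": "1950-01-02", "sex": "F", "zip": "02118"},
            {"id": "R2", "dob": "1962-07-30", "sex": "M", "zip": "02139"}]
claims   = [{"id": "C9", "dob": "1950-01-02", "sex": "F", "zip": "02118"},
            {"id": "C7", "dob": "1971-11-15", "sex": "F", "zip": "02446"}]

def link_key(rec):
    """Deterministic match key built from the fields used to match patients."""
    return (rec["dob"], rec["sex"], rec["zip"])

def link(source_a, source_b):
    """Match records across sources and report the success rate of the matching."""
    index = {link_key(r): r["id"] for r in source_b}
    matches = [(r["id"], index[link_key(r)])
               for r in source_a if link_key(r) in index]
    # The match rate is one of the "success" figures the standard asks studies to report.
    return matches, len(matches) / len(source_a)

matches, rate = link(registry, claims)
# matches == [("R1", "C9")], rate == 0.5
```

Reporting the key definition and the resulting match rate, as above, is exactly the kind of disclosure the new standard requires.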
Causal Inference Standards
Define Analysis Population Using Information Available at Study Entry: Decisions about whether patients are included in an analysis should be based on information available at each patient's time of study entry in prospective studies, or on information from a defined time period prior to the exposure in retrospective studies. "For time-varying treatment or exposure regimes, specific time points should be clearly specified and the covariate history up to and not beyond those time points should be used as population descriptors."
Measure Confounders before Start of Exposure. Report data on confounders with study results. In general, variables for use in confounding adjustment (either in the design or analysis) should be ascertained and measured prior to the first exposure to the therapy (or therapies) under study. “If confounders are time varying, specific time points for the analysis of the exposure effect should be clearly specified and the confounder history up to and not beyond those time points should be used in that analysis.”
Standards for Heterogeneity of Treatment Effect (HTE)
State the Goals of HTE Analyses: State the inferential goal of each HTE analysis, specifying how it is related to the topic of the research; translate this into an analytic approach; and highlight the linkages between the two. Identify each analysis as either hypothesis-driven (sometimes denoted confirmatory) or hypothesis-generating (sometimes denoted exploratory).
For all HTE analyses, pre-specify the analysis plan; for hypothesis-driven HTE analyses, pre-specify hypotheses for each subgroup effect. The study protocol should unambiguously pre-specify planned HTE analyses. Pre-specification of hypothesis-driven HTE analyses should include a clear statement of the hypotheses the study will evaluate, including how groups will be defined (e.g., by multivariate score or stratification), the outcome measures, and the direction of the expected treatment effects. The pre-specified hypotheses should be based on prior evidence, which should be described clearly in the study protocol and published paper.
All HTE claims must be based on appropriate statistical contrasts among the groups being compared, such as interaction tests or estimates of differences in treatment effect. A common error in HTE analyses is to claim differences in treatment effect when one group shows a statistically significant treatment effect and another does not. To claim differences in treatment effect among subgroups, appropriate statistical methods must be used to directly contrast them. Such contrasts include, but are not limited to, interaction tests, differences in treatment effect estimates with standard errors, or a variety of approaches to adjusting the estimated subgroup effect, such as Bayesian shrinkage estimates. Within each subgroup level, studies should present the treatment effect estimates and measures of variability.
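As a rough illustration of the contrast the standard calls for, the sketch below (plain NumPy; the simulated data and effect sizes are assumptions for illustration, not from the standard) estimates each subgroup's treatment effect with its standard error and then forms a Wald z-statistic for the *difference* between subgroups, rather than comparing per-subgroup p-values.

```python
import numpy as np

def subgroup_effect(y_treat, y_ctrl):
    """Treatment effect (difference in means) and its standard error within one subgroup."""
    est = y_treat.mean() - y_ctrl.mean()
    se = np.sqrt(y_treat.var(ddof=1) / len(y_treat) + y_ctrl.var(ddof=1) / len(y_ctrl))
    return est, se

def interaction_z(eff_a, se_a, eff_b, se_b):
    """Wald z-statistic for the difference in treatment effects between two subgroups."""
    return (eff_a - eff_b) / np.sqrt(se_a**2 + se_b**2)

rng = np.random.default_rng(0)
# Simulated outcomes: the true treatment effect is 0.5 in BOTH subgroups, but
# subgroup B is much smaller, so its own effect may not reach significance.
a_t, a_c = rng.normal(0.5, 1, 200), rng.normal(0.0, 1, 200)
b_t, b_c = rng.normal(0.5, 1, 30),  rng.normal(0.0, 1, 30)

eff_a, se_a = subgroup_effect(a_t, a_c)
eff_b, se_b = subgroup_effect(b_t, b_c)
z = interaction_z(eff_a, se_a, eff_b, se_b)
# Concluding "the drug works in A but not B" from per-subgroup p-values alone
# is the error the standard warns against; the decision should rest on z.
```

Presenting `eff_a`/`eff_b` with their standard errors alongside the interaction statistic satisfies the requirement to report within-subgroup estimates and measures of variability.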