Grand Rounds June 9, 2023: Emulating Randomized Clinical Trials with Non-randomized Real-world Evidence Studies: Results From The RCT DUPLICATE Initiative (Shirley V. Wang, PhD)

Speaker

Shirley V. Wang
Associate Professor
Division of Pharmacoepidemiology and Pharmacoeconomics
Brigham and Women’s Hospital
Harvard Medical School


Keywords

Pragmatic Clinical Trials; Emulation; Design; Database Study; Replicate


Key Points

  • RCT DUPLICATE is a series of NIH Collaboratory methods trials with the goal of understanding when and how database studies can support regulatory decision making. The first aim of the project is to use healthcare databases to emulate the design of 30 completed trials and to predict the results of 7 ongoing trials, in order to determine whether a database study and a trial lead to the same conclusion. The second aim is to test a transparent and reproducible process with the FDA for evaluating Real World Evidence (RWE) studies. The third aim is to evaluate factors that predict concordance between trial and database study results.
  • The study team applied three predefined binary agreement metrics to each trial emulation and used correlation coefficients to summarize agreement across the emulations (a sketch of metrics of this kind appears after this list).
  • The results showed that the more pragmatic the emulated trial, the better the team was able to closely emulate the trial design and obtain similar results. However, when attempting to emulate a trial with more explanatory design features, such as a run-in period or strict measures to ensure prolonged adherence, it was more difficult to disentangle whether divergence in results was due to design emulation differences (differences between the randomized controlled trial [RCT] and the RWE study) or to bias (differences between the RWE treatment arms).
  • In any trial emulation, researchers encounter design emulation differences in addition to potential sources of bias. The emulation differences in this project included the trial’s inclusion/exclusion criteria, population distribution, quality of the comparator emulation, outcome emulation, placebo control, in-hospital start of medication, loading dose or dose titration protocols during follow-up, delayed effect with a long follow-up window, run-in window, discontinuation of maintenance therapy at randomization, and robustness of findings.
  • The team created an exploratory indicator that split the trials into two categories: those with fewer design emulation differences and those with more substantive ones. For trials with fewer emulation challenges, the agreement metrics showed closer correspondence between trial and emulation results. For trials with more emulation challenges, the correlation was low and results diverged more. The real benefit of database studies is the opportunity to generate evidence that complements trials and to explore relevant clinical questions for which, for a variety of reasons, a trial cannot be conducted.
  • When the team was able to emulate the design and analysis closely, database studies often reached the same conclusions as the trials. Trials designed with many constraints to show effects under ideal conditions are harder to emulate; the team saw greater success when replicating results of trials with more pragmatic design features.
  • When evaluating when and how RWE studies complement RCTs, it is important to consider the target trial design that would best match the needs or questions of the end users.
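
The summary above does not spell out the formulas behind the three binary agreement metrics. As a point of reference, the sketch below shows, in Python, how metrics of this kind are often defined in the trial-emulation literature: significance agreement, estimate agreement, and standardized-difference agreement. The definitions, the function name, and the numbers in the example are illustrative assumptions, not the study team’s code.

    import math

    def agreement_metrics(rct_hr, rct_lo, rct_hi, rwe_hr, rwe_lo, rwe_hi):
        """Compare a trial hazard ratio (point estimate, 95% CI) with its
        database emulation. Returns three binary agreement indicators.
        Illustrative definitions only."""
        log = math.log
        # Back out standard errors from the 95% CIs on the log scale,
        # where the intervals are symmetric.
        se_rct = (log(rct_hi) - log(rct_lo)) / (2 * 1.96)
        se_rwe = (log(rwe_hi) - log(rwe_lo)) / (2 * 1.96)

        def direction(lo, hi):
            # +1 = significant harm, -1 = significant benefit, 0 = non-significant
            return 1 if lo > 1 else (-1 if hi < 1 else 0)

        # 1. Statistical significance agreement: both results are significant
        #    in the same direction, or both are non-significant.
        significance = direction(rct_lo, rct_hi) == direction(rwe_lo, rwe_hi)

        # 2. Estimate agreement: the emulation's point estimate falls within
        #    the 95% CI of the trial estimate.
        estimate = rct_lo <= rwe_hr <= rct_hi

        # 3. Standardized difference agreement: the standardized difference
        #    between the two log hazard ratios is below 1.96.
        z = (log(rct_hr) - log(rwe_hr)) / math.sqrt(se_rct**2 + se_rwe**2)
        std_diff = abs(z) < 1.96

        return significance, estimate, std_diff

    # Made-up numbers: trial HR 0.80 (0.68-0.94), emulation HR 0.85 (0.75-0.96)
    print(agreement_metrics(0.80, 0.68, 0.94, 0.85, 0.75, 0.96))
    # -> (True, True, True)
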

Discussion Themes

What’s next for this initiative? Ongoing work is focused on understanding the methods and validity of trial emulation using EHR data or EHR-linked claims rather than claims data alone, as we used for these 32 trial emulations. We’ve also launched a spin-off project called ENCORE, which is focused on emulating oncology trials using oncology specialty EHR data.

What are you interested in on the analytic front regarding this topic? In general, I feel that design trumps analytics in this case. The work is in getting the study design right and making sure we’re asking the right questions. A lot of these analytics are about dealing with confounding in a better way, but if you can’t get the design right, the analytics aren’t going to get you any closer.

How do we teach our students and colleagues to use the target trial framework and these emulation approaches to advance our comparative effectiveness studies? It’s useful to lay out the parameters, or all of the questions that define the estimand for the target trial, whether hypothetical or real. Then lay out, next to those, the same parameters as you intend to set them with the real-world data. In our case, we color-coded these parameters and decided whether it was feasible to continue based on whether the same question could be answered. Directly comparing the parameters is a great way to see what question you’re truly asking. (A minimal sketch of this side-by-side layout follows below.)
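
As a concrete illustration of that side-by-side layout, the sketch below mimics the process with an entirely invented example: the target trial parameters are listed next to what a claims database could support, and a feasibility flag stands in for the color coding. Every parameter and entry here is hypothetical.

    # All parameters and entries below are invented for illustration.
    target_trial = {
        "eligibility": "Adults with type 2 diabetes, HbA1c 7-10%",
        "treatment":   "Drug A 10 mg daily vs. Drug B 100 mg daily",
        "assignment":  "Randomized, double-blind",
        "outcome":     "Hospitalization for heart failure",
        "follow_up":   "Randomization to 24 months",
    }
    rwd_emulation = {
        "eligibility": "Diagnosis codes only; HbA1c unavailable in claims",
        "treatment":   "New users of Drug A vs. new users of Drug B",
        "assignment":  "Propensity-score adjustment for confounding",
        "outcome":     "Validated claims-based heart failure algorithm",
        "follow_up":   "First dispensing to 24 months",
    }
    # Feasibility flags stand in for the color coding mentioned above.
    feasible = {
        "eligibility": False,  # lab-based criterion cannot be emulated in claims
        "treatment":   True,
        "assignment":  True,
        "outcome":     True,
        "follow_up":   True,
    }

    for param in target_trial:
        flag = "OK " if feasible[param] else "GAP"
        print(f"[{flag}] {param:12} trial: {target_trial[param]}")
        print(f"      {'':12} RWD:   {rwd_emulation[param]}")
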

Tags

#pctGR, @Collaboratory1