Grand Rounds October 6, 2023: Hybrid Studies Should Not Sacrifice Rigorous Methods (David M. Murray, PhD; Moderator: Jonathan Moyer, PhD)

Speakers

Speaker: David M. Murray, PhD
NIH Associate Director for Prevention and Director, NIH Office of Disease Prevention

Moderator: Jonathan C. Moyer, PhD
Statistician, NIH Office of Disease Prevention

Keywords

Implementation; Study design; Hybrid; Clustered; DECIPHeR

Key Points

  • Critics often contend that hybrid designs are not as rigorous as they should be. The term “hybrid design” is unfortunate, as it suggests that implementation research uses different methods than other research and might not be held to the same standards. Instead, we should use the same rigorous methods for implementation research that we use for other research and simply change the focus.
  • The Disparities Elimination through Coordinated Interventions to Prevent and Control Heart and Lung Disease Risk (DECIPHeR) initiative is the National Heart, Lung, and Blood Institute’s (NHLBI) first major effort to conduct implementation research. Through this initiative, 7 Clinical Centers are expected to test an evidence-based, multi-level intervention designed to reduce or eliminate cardiovascular and/or pulmonary health disparities. A key feature of the initiative is that implementation measures serve as the primary outcomes.
  • The NHLBI created the Technical Assistance Workgroup, building on the model established by the NIH Pragmatic Trials Collaboratory, with the goal of helping the Clinical Centers create the strongest possible applications for the UH3 phase. The group worked through each center’s project aims, study design, statistical analysis plan, and power analysis until all aspects were aligned. It also helped review each Clinical Center’s protocol before it went to the Data and Safety Monitoring Board (DSMB) and NHLBI for transition to the UH3 phase.
  • While working across the 7 projects, the Technical Assistance Workgroup encountered several design and analytic issues, including: ensuring emphasis on implementation outcomes; research designs for Type I, II, and III hybrid studies; interventions versus implementation strategies; the need to address clustering (see the sketch after this list), cross-classification, and multiple membership; time-varying intervention effects; data-based parameter estimates; blinding; and adaptations of interventions and implementation strategies.
  • Through its work with the 7 Clinical Centers, the Technical Assistance Workgroup learned that implementation research has its own practices in many areas of design and analysis, and that consensus is lacking in areas such as blinding and adaptation, even within the implementation research community. Researchers outside that community often do not understand the features common to implementation research, so there is real benefit in bringing the two communities together to review proposed studies. Involving methodologists familiar with clustered designs and their analytic and power issues proved to be a key factor in the workgroup’s success.
  • The result of the Technical Assistance Workgroup’s involvement in the Clinical Centers’ development and proposal process was a much stronger set of proposals for the UH3 phase of DECIPHeR.
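
A minimal sketch of the clustering issue, assuming a two-arm, parallel cluster-randomized design and the standard normal-approximation sample-size formula (the function name and all parameter values below are hypothetical, not from DECIPHeR): clustering inflates the variance of the treatment-effect estimate by the design effect, DEFF = 1 + (m − 1) × ICC, where m is the number of members measured per cluster and ICC is the intraclass correlation.

    import math
    from scipy.stats import norm

    def clusters_per_arm(delta, sd, icc, m, alpha=0.05, power=0.8):
        """Approximate clusters per arm for a two-arm, parallel
        cluster-randomized design (normal-approximation formula).

        delta: detectable difference in means
        sd:    total standard deviation of the outcome
        icc:   intraclass correlation within a cluster
        m:     members measured per cluster
        """
        deff = 1 + (m - 1) * icc        # design effect: variance inflation
        z_a = norm.ppf(1 - alpha / 2)   # two-sided critical value
        z_b = norm.ppf(power)           # power quantile
        # Per-arm n for a simple individually randomized trial...
        n_ind = 2 * ((z_a + z_b) * sd / delta) ** 2
        # ...inflated for clustering, then converted to whole clusters.
        return math.ceil(n_ind * deff / m)

    # Hypothetical inputs: 5-point difference, SD = 20, ICC = 0.05,
    # 50 members measured per clinic.
    print(clusters_per_arm(delta=5, sd=20, icc=0.05, m=50))  # -> 18

With these hypothetical inputs the design effect is 1 + 49 × 0.05 = 3.45, so an analysis that ignored clustering would understate the required sample size more than threefold.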

Learn more

Visit the DECIPHeR website.

Discussion Themes

-You previously mentioned that the practice of blinding is extremely common in clinical trials and less so in implementation studies. In implementation studies, how can you blind outcome assessors? Is independent adjudication a possible solution? Yes, that’s one solution. It’s relatively easy to blind outcome assessors, though not bulletproof. First, keep the intervention and implementation staff completely independent from the measurement staff. Second, don’t tell the staff collecting the outcome data which arm the various sites are in. To the extent that you can draw data from electronic health records, that also helps keep staff blinded. Note that it’s more difficult to blind the centers, or actual clusters: it’s important for them to know that they’re getting an intervention, even if they’re not aware which one.

-Could you expand more on what implementation outcomes are? Examples of implementation measures include acceptability, adoption, appropriateness, affordability, cost, feasibility, fidelity, and reach. In most clinical trials, we would measure these as process outcomes; in an implementation trial, they are the measures of greatest interest. An implementation study is generally used when an intervention has already been shown to be effective on health outcomes, so effectiveness is less of a concern. The real interest is in learning how to improve acceptability, adoption, fidelity, and similar measures so that people will use the research we’ve done.
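
As a toy illustration (not from the talk), the sketch below computes three of these measures, reach, adoption, and fidelity, as simple proportions from hypothetical site-level counts; a real study would define and operationalize them using an established framework such as RE-AIM or the Proctor taxonomy.

    # Hypothetical site-level counts; a real study would pull these
    # from study records or electronic health data.
    sites = [
        {"name": "Clinic A", "eligible": 400, "reached": 220,
         "planned": 10, "delivered": 9},
        {"name": "Clinic B", "eligible": 350, "reached": 105,
         "planned": 10, "delivered": 6},
        {"name": "Clinic C", "eligible": 500, "reached": 0,
         "planned": 10, "delivered": 0},
    ]

    # Adoption: share of sites that delivered the program at all.
    adoption = sum(s["reached"] > 0 for s in sites) / len(sites)

    for s in sites:
        reach = s["reached"] / s["eligible"]        # reach per site
        fidelity = s["delivered"] / s["planned"]    # fidelity per site
        print(f"{s['name']}: reach={reach:.0%}, fidelity={fidelity:.0%}")

    print(f"Adoption across sites: {adoption:.0%}")

In an implementation trial these proportions would themselves be the primary outcomes, whereas an effectiveness trial would typically report them only as process measures.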

-This was an impactful collaboration between NIH investigators, study teams, and methodologists. How can this kind of collaboration happen more widely? I wish that those of us involved had enough bandwidth to get involved with every major initiative that NHLBI launches, but that’s unfortunately not possible. This collaboration was a special case with a very good biostatistics and design working group. DECIPHeR benefited greatly from having the Technical Assistance Workgroup, and anytime a study has a coordinating center, a similar group could be an important function of that center. However, I don’t have a great general solution for this issue at this point.

Tags

#pctGR, @Collaboratory1