Clinical Data Registry, Future of Health Care, Performance Improvement, Registry Science, Research

The CDR Advantage: Why Registry Research Minimizes Study Bias for Performance Improvement

January 19, 2016

The Clinical Data Registry is a powerful research tool for improving patient health. But what makes Registry-based study design better than pre-post study design? The answer has far-reaching implications for how we will use data to determine treatment effectiveness in the future, as well as how we will meet the challenge of improving health outcomes.

Research can be built on case-control studies, observational designs, N-of-1 studies, randomized trials or the N-of-1 population study. Most of these approaches, except those facilitated by a Registry, will be limited to small patient samples by the patient selection process. But that is a separate issue we can address in the future. The question here is how Registry research through N-of-1 trials differs from pre-post studies in determining intervention effectiveness. There are plenty of pre-post studies being done, but they are criticized as weak approaches to identifying cause-effect relationships between interventions and outcomes.

Reliable Study Results Require Accurate Measurement

Before we draw distinctions between the usual pre-post or observational studies and the Registry-based N-of-1 population trial, let's review what makes study information most useful.

Since science is about measurement, anything that obscures a measure will lead science astray. Beyond simple technical errors (my first research study had a data entry error rate of nearly 10 percent, forcing us to reenter the data three times before we got it right) and indistinct measures of physiologic (e.g., FEV1) and physical phenomena (e.g., the 6-minute walk test), there are four main reasons why what we measure may be incorrect. These four are most often noted in studies examining subjective outcomes, such as pain, function and fatigue:

  1. The natural history of the clinical condition will vacillate. For example, arthritic pain comes and goes (winter is here, after all), or our satisfaction with life or our physicians can vary with daily experiences. When the natural history of a condition varies over time, any single measure, or even multiple measures, of the condition may be flawed.
  2. Response bias. When my wife asks me how my day was, I may answer, “Good, Dear.” I may answer that way even if it is not a true sentiment, to ease her mind or to avoid a long discussion at that moment. This happens in studies, too; many patients want to give positive results to meet real or imagined expectations, which leads to biased measures of the outcome.
  3. Regression to the mean. Since the natural history of many clinical conditions vacillates, choosing a measurement when the condition is at its worst may lead to incorrect comparisons with a measure taken when the condition improves later on. We may falsely think that what we are doing is the reason for the improvement, when, instead, we have simply mistimed the measurement. This is a common problem with studies that set cut-offs or threshold values for entry. For example, some drug trials withdraw medications and then enter only those people who worsen when the medication is withdrawn. These trials yield damaged measures, because the changes may be nothing more than normal fluctuation (a simple simulation of this appears after this list).
  4. The Placebo Effect. The placebo effect or response is complex. Simply, it is the effect that remains after natural history, response bias and regression to the mean are taken into account. A clever study of sham acupuncture used three study groups: one was a wait-and-watch group, one was a sham done in a businesslike manner, and the third was a sham done in a caring and nurturing manner. As expected, the average response improved for all three groups, despite no effect of the intervention. This study examined not the sham procedure, but the influence of natural history, regression to the mean, response bias and placebo on outcome measures.
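
To make the regression-to-the-mean problem concrete, here is a minimal sketch in Python with entirely made-up numbers. It enrolls simulated patients only when a noisy pain score exceeds a cutoff and then remeasures them with no treatment at all; the group average still "improves," purely because the entry criterion caught many patients at a high point in their fluctuation.

```python
import random

random.seed(0)

TRUE_MEAN = 5.0      # each simulated patient's stable underlying pain level (0-10 scale)
NOISE_SD = 1.5       # day-to-day fluctuation ("natural history")
ENTRY_CUTOFF = 6.5   # only patients scoring above this at screening are enrolled

def observed_score():
    """One noisy measurement around the stable underlying level."""
    return TRUE_MEAN + random.gauss(0, NOISE_SD)

# Screen a large pool of patients and enroll only those who look bad today.
screening = [observed_score() for _ in range(10_000)]
enrolled = [s for s in screening if s > ENTRY_CUTOFF]

# Remeasure the enrolled patients later; nothing has changed, yet the group
# average drifts back toward the true mean, mimicking "improvement."
follow_up = [observed_score() for _ in enrolled]

print(f"Mean at screening (enrolled only): {sum(enrolled) / len(enrolled):.2f}")
print(f"Mean at follow-up (no treatment):  {sum(follow_up) / len(follow_up):.2f}")
```

The same logic applies to thresholds used to launch improvement projects: a measure chosen because it looks bad at one moment will usually look better at the next measurement, treatment or no treatment.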

N-of-1 Population Registry Trials Minimize Measurement Flaws

Any study that does not address these potential errors in measurement may be flawed. The N-of-1 full population trial helps with some of these factors, and perhaps with all of them.

Most often, the N-of-1 trial is used with an individual patient. In other words, the patient is the unit of measurement. This helps with response bias, as the same person is the responder to all interventions. In addition, since neither the patient nor the provider knows the treatment, response bias is equalized across interventions (though not necessarily minimized). Although these sorts of studies have favorable properties, they do not necessarily eliminate concerns about vacillation in the condition's natural history and regression to the mean. For a single patient with variable measurements, N-of-1 trials may require long study durations and multiple crossover treatments to reduce the influence of these two factors, as the sketch below illustrates.
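
Here is a small Python sketch of that point, again with invented numbers: a fixed true treatment effect plus week-to-week fluctuation. With only one A/B comparison, the natural history can swamp the treatment effect; averaging over many alternating crossover blocks recovers it.

```python
import random

random.seed(1)

TRUE_EFFECT = -0.8    # true change in a 0-10 symptom score on treatment (B) vs. control (A)
BASELINE = 6.0
FLUCTUATION_SD = 1.2  # week-to-week natural-history noise

def symptom_score(on_treatment):
    """One weekly measurement: baseline, natural-history drift and treatment effect."""
    drift = random.gauss(0, FLUCTUATION_SD)
    return BASELINE + drift + (TRUE_EFFECT if on_treatment else 0.0)

def estimated_effect(n_blocks):
    """Average B-minus-A difference over n_blocks alternating crossover blocks."""
    diffs = [symptom_score(True) - symptom_score(False) for _ in range(n_blocks)]
    return sum(diffs) / n_blocks

for n in (1, 4, 16, 64):
    print(f"{n:>3} crossover blocks -> estimated effect {estimated_effect(n):+.2f} "
          f"(true effect {TRUE_EFFECT:+.2f})")
```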

For the N-of-1 population Registry trial, the entire practice is the unit of measurement. In contrast to the N-of-1 patient trial, the N-of-1 population Registry trial starts only when the unit of measurement is stable. For example, a group of physician practices decides to measure A1C levels. For the first year, the mean A1C level vacillates: at first, lower average values begin to rise because the practices entered data at different times, and each individual practice's variation in measures for its patients becomes evident. After the first year, however, the averages even out across all practices. One year after beginning the measurement, the practice-wide average A1C and its range of values hold steady. This is an important difference between individual trials and population-based Registry trials: stability in baseline measures. A simple stability check of this kind is sketched below.
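
As a rough illustration of what "stable" might mean in practice, here is a minimal Python sketch with hypothetical monthly A1C averages; the window size and tolerance are arbitrary choices for the example, not a prescription. The baseline is declared stable only once the most recent months stay within a narrow band.

```python
from statistics import mean

# Hypothetical monthly practice-wide average A1C values (%): noisy early on as
# practices enter data at different times, then settling after about a year.
monthly_a1c = [7.1, 7.8, 6.9, 7.6, 7.0, 7.5, 7.3, 7.4, 7.35, 7.4, 7.38, 7.41,
               7.40, 7.39, 7.41, 7.40]

WINDOW = 6        # number of recent months to examine
TOLERANCE = 0.10  # maximum allowed spread (max - min) within the window

def baseline_is_stable(values, window=WINDOW, tolerance=TOLERANCE):
    """True once the last `window` measurements vary by less than `tolerance`."""
    if len(values) < window:
        return False
    recent = values[-window:]
    return (max(recent) - min(recent)) < tolerance

for month in range(1, len(monthly_a1c) + 1):
    if baseline_is_stable(monthly_a1c[:month]):
        recent_mean = mean(monthly_a1c[month - WINDOW:month])
        print(f"Baseline stable at month {month}; recent mean A1C = {recent_mean:.2f}%")
        break
else:
    print("Baseline not yet stable; keep measuring before intervening.")
```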

Measurement Stability Is Key to Actionable Results

This is the point in time to try an intervention. Remember, however, that the intervention must be uniform for all practices. The intervention need not be for all patients; it may be applied, for example, only to patients above some threshold on a measurement. For instance, the practice may want to focus efforts on the sickest patients or those with the highest A1C levels. If a subgroup is the target of the intervention, however, the average and range of values for that subgroup must also be stable, as in the sketch below. If it is not, regression to the mean may thwart any attempt to interpret the improvement effort.
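
The same stability idea extends to a targeted subgroup, again sketched in Python with invented patient-level data. Note that membership is fixed from baseline-period averages rather than re-selected from single high readings, which would invite exactly the regression to the mean described above.

```python
from statistics import mean

# Hypothetical monthly A1C readings (%) per patient over a six-month baseline.
patients = {
    "p1": [9.6, 9.5, 9.7, 9.6, 9.5, 9.6],
    "p2": [9.2, 9.3, 9.1, 9.2, 9.3, 9.2],
    "p3": [7.1, 7.0, 7.2, 7.1, 7.0, 7.1],    # below the subgroup cutoff
    "p4": [10.1, 10.0, 10.2, 10.1, 10.0, 10.1],
}
CUTOFF = 9.0      # subgroup: patients whose baseline-period average exceeds this
TOLERANCE = 0.15  # allowed spread in the subgroup's monthly average

# Membership is fixed from the baseline-period average, not from a single high
# reading, so the subgroup is not selected at a transient worst point.
subgroup = [pid for pid, values in patients.items() if mean(values) > CUTOFF]

n_months = len(next(iter(patients.values())))
monthly_means = [mean(patients[pid][m] for pid in subgroup) for m in range(n_months)]

spread = max(monthly_means) - min(monthly_means)
verdict = "stable enough to intervene" if spread < TOLERANCE else "wait for stability"
print(f"Subgroup: {subgroup}")
print(f"Monthly subgroup means: {[round(v, 2) for v in monthly_means]}")
print(f"Spread = {spread:.2f} -> {verdict}")
```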

In summary, N-of-1 population Registry trials aim to address the problems of the changing natural history of disease, regression to the mean and response bias. The N-of-1 population trial requires a stable baseline measure, so that changes in natural history or regression to the mean are not mistaken for an effect of the intervention. The intervention, applied equally to all patients in the practice, protects against response bias.

Should interventions be randomized? Most often, there is not enough practice volume in a system to minimize differences between groups of physicians. In cluster-randomized trials, for example, we are concerned that practice areas and groups vary dramatically in how they care for patients and in their diligence in making accurate measurements. If there are few practice groups, randomization may fail. Interventions aimed at patients rather than practice groups may also fail if variations among patients cluster within practice groups. It is better, in our view, for a single intervention to be applied to all practices after the baseline measurement is stable. Hence, randomized interventions are rarely needed and may not be ideal.

Many Pre-Post Study Designs Risk Inaccurate Measurement

Many pre-post study designs fail to protect against the conditions that can thwart measurement. I have seen pre-post designs that included different pre- and post-study patient populations, even for stable practice groups. In one quality program evaluation, for example, the number of practices contributing data doubled after the intervention.

I have also seen leaders start pre-post projects in response to worsening levels of some measure, when the worsening was due only to the varied natural history of the condition. For example, one pre-post study of infection rates was initiated in response to higher-than-average rates noted at a single point in time rather than over time. Interventions are also often applied unequally across populations in pre-post designs. In one pre-post quality improvement study of a complex checklist, the assessment was biased because the checklist was applied more effectively later in the study, after staff had been fully trained.

How and when to measure clinical phenomena requires diligent discussion and planning. We need study designs that focus learning on local groups and systems while protecting against biased measures and regression to the mean. In summary, the N-of-1 population trial is one way to advance timely and accurate science for practice groups and patients, by limiting errors of inference caused by regression to the mean, response bias, the placebo effect and neglect of the natural fluctuation of outcome measures. In fact, N-of-1 population-based Registry trials may offer more advantages than randomized trials.

The Clinical Data Registry Will Redefine the Future of Research

Clinical Data Registries have the capacity to create a sea change in the value and accuracy of clinical studies, enabling larger and less biased studies to determine the effectiveness of treatments. As Medicare public health reporting requirements ensure a larger feed of data to CDRs, ICLOPS and other entities will undoubtedly begin to collaborate on and customize clinical studies and expand the Registry infrastructure for managing research.

Transparency of results, combined with providers' access to their own data, has the potential to identify effectiveness and trends in ways that researchers have heretofore only imagined. We'll explore that subject in future posts.

Founded in 2002, ICLOPS has pioneered data registry solutions for improving patient health. Our industry experts provide comprehensive Performance Improvement and Technology Services and ICLOPS Clinical Data Registry Solutions that help you both report and improve your performance. ICLOPS is a CMS Qualified Clinical Data Registry.

Contact ICLOPS for a Discovery Session.

Image Credit: St. Mattox