Despite the intense focus on data and analytics in health care, most start-up ACOs adopt identical population health initiatives, such as intensive case management for high-cost patients. The choice is made not because these initiatives have proven outcomes and lowered costs, but because everyone else is doing the same thing. It’s ironic that we adopt evidence-based performance measures, yet rely on anecdotal results in population health.
With a research-capable Registry and reliable data, we can be smarter about choosing where to spend resources and involve patients. This is called Effectiveness Research.
How Registry-Based Research Works
Two study designs are used in science: observation and experiment. Observational studies use data from populations of patients, some of whom choose a treatment and some of whom do not. The unequal use of treatment among patients is the factor that divides the population for study. A classic example is estrogen use: observational studies showed that women who used estrogen fared better than those who did not.
Experiments differ from observational studies. Treatments are provided systematically to some patients and not others. Experiments include the randomized trial and the consecutive patient, treatment-limited, pre-post clinical trial. The latter is the design used most often in registry-based research. (A classic contest between observational studies and experiments was, again, estrogen use. In experiments, in contrast to findings from observational studies, estrogen use was found to be detrimental.) If we want to study the effect of interventions, we need to conduct experiments rather than rely on observational studies.
To determine the effectiveness of interventions using a Registry, we can randomly assign the population into a treatment group and a non-treatment group, then track both aggregate and patient-specific results. Or, we can start with outcomes and work backwards to determine which interventions and variables are producing effects. The simplest and most illustrative way to depict the change is through run-charts, or trended outcomes over time.
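The random assignment step above can be sketched in a few lines of Python. This is a minimal illustration, not an actual registry API; the function name and patient identifiers are hypothetical.

```python
import random

def assign_groups(patient_ids, seed=42):
    """Randomly split a registry population into a treatment group and a
    non-treatment group (hypothetical helper for illustration only)."""
    rng = random.Random(seed)  # fixed seed keeps the assignment reproducible
    shuffled = list(patient_ids)
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    # First half becomes the treatment group, second half the comparison group
    return shuffled[:midpoint], shuffled[midpoint:]

# Example: six registry patients split into two groups of three
treatment, control = assign_groups(["P1", "P2", "P3", "P4", "P5", "P6"])
```

Fixing the random seed is a deliberate choice here: it makes the group assignment auditable, which matters when aggregate and patient-specific results will be tracked over time.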
Example: Diabetes HgbA1C Levels as Affected by Patient Office Visit Rate
Run-charts are the best way to view the effects of experiments in registry-based research. Below is an example of a run-chart showing the A1C levels for a population of patients with diabetes. (Note that because HgbA1C levels are calculated as an “inverse measure” by Medicare, an increase in performance is equivalent to poorer HgbA1C levels.) An individual physician’s practice, the practice of his/her group, and the practice of the entire organization in which he/she practices are shown. This run-chart, by itself, is not research; it simply displays a trend of worsening patient outcomes for the organization as a whole, while patients of this individual provider have remained stable.
To establish an experiment, such as an intervention that reduces the visit rate for diabetics in this example, we create a population and a research design across the entire organization’s practices. Since the intervention includes the entire population, this is an example of a consecutive patient, treatment-limited, pre-post clinical trial. The timing of the intervention is planned for and “mapped” on the graph: a line is drawn to mark the starting time of the change in visit rates for the practices. That line is the division for the pre-post comparison.
Run-charts establish the baseline outcome performance for a practice, depicted as the percentage of the population who meet the desired performance level. It is best to intervene when the values are stable for the population, which is evident from this graph. Notice that from the start of data collection in 12/2012 to the time of the change in the visit rate (the pre-intervention period), there was a steady rise in the A1C levels for all practices. The reason for this rise was that Registry team members provided feedback to practice administrators regarding incomplete data collection; better, more complete data gave a more accurate picture of the trend across all practices. In a sense, the effort to improve data collection was itself an experiment, in that all administrators received instructions on data collection.
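The "intervene only at steady state" rule can be made concrete with a simple check on the most recent run-chart values. The function and tolerance below are our own illustrative rule of thumb, not a formal statistical test of stability.

```python
def is_steady_state(values, tolerance=2.0):
    """Return True when the most recent run-chart values stay within
    `tolerance` percentage points of each other (illustrative rule of
    thumb, not a formal statistical test)."""
    recent = values[-4:]  # look at the last four reporting periods
    return max(recent) - min(recent) <= tolerance

# Values have leveled off: the last four periods vary by under 1 point
print(is_steady_state([50.0, 54.0, 57.5, 58.0, 58.3, 57.6]))  # → True

# Values still rising steadily: not yet safe to intervene
print(is_steady_state([50.0, 52.0, 54.0, 56.0, 58.0, 60.0]))  # → False
```

In practice the judgment is usually made visually from the run-chart, but an explicit rule like this makes the decision repeatable across practices.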
The intervention to change the visit rate had to be planned for a time of “steady state” A1C levels. This is a crucial aspect of Registry-based research. If the change in the visit rate had been planned simultaneously with efforts to improve data collection, we would not know whether the change in visit rate or the feedback on data collection was the independent contributor to any change in outcome values. Without planning, Registry research may lead to incorrect insights. In this particular situation, changing the frequency of visits did not result in better or worse A1C levels, nor did more patients get to target levels. However, the study did show that utilization, as measured by ambulatory visits, might be reduced without leading to worse A1C levels. In this study, it was also important to verify that the mix of visit intensity (as judged by CPT codes) did not change, and it did not.
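The pre-post comparison itself is mechanically simple: split the run-chart observations at the intervention date and compare the two periods. The sketch below uses invented monthly numbers purely for illustration; they are not the actual study data.

```python
from datetime import date
from statistics import mean

# Hypothetical monthly run-chart values: % of the diabetic population
# at A1C target (illustrative numbers, not the actual study data)
observations = [
    (date(2013, 6, 1), 57.5),
    (date(2013, 7, 1), 58.0),
    (date(2013, 8, 1), 58.3),
    (date(2013, 9, 1), 57.6),   # visit-rate intervention begins here
    (date(2013, 10, 1), 57.9),
    (date(2013, 11, 1), 58.1),
    (date(2013, 12, 1), 57.7),
]

INTERVENTION = date(2013, 9, 1)  # the line drawn on the run-chart

# Divide the observations at the intervention line
pre = [value for d, value in observations if d < INTERVENTION]
post = [value for d, value in observations if d >= INTERVENTION]

# A shift near zero suggests the visit-rate change left outcomes
# unchanged, as in the study described above
shift = mean(post) - mean(pre)
print(round(shift, 2))
```

Note that this only works because the intervention was applied to the whole population at a known date; with a staggered or partial rollout there would be no single line at which to split the data.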
While we do not show this view here, run-charts for all outcome measures may be displayed on a single page. Then, the time when the practice decided to change the visit rate for diabetic patients, as in our example, can be shown on each run-chart. A single action, such as reducing visit rates, might affect some outcome measures and not others. The run-chart view with timed interventions allows for real-time research, which is ideal for a busy practice. The keys are, again: a) a single intervention must be used for the entire, relevant population of patients, and b) there must be steady-state values for outcomes at the time of intervention.
Test the Effectiveness of Population Health Interventions with Research
The Effectiveness Research on the effect of office visits on HgbA1C levels is interesting because it directly addresses an underlying question: is getting people back into the office, a typical tactic but one associated with greater overall costs, actually useful? The use of intensive case management is another good example. It is one of the most implemented, and costly, ACO population health programs, but there is no science that shows its effectiveness on outcomes or cost-benefit compared to other approaches.
When faced with the challenge of starting an Accountable Care Organization, many start with population health initiatives that are big and far-reaching. The temptation to do everything at once is alluring, but the real gains will come from studies pursued outcome-by-outcome and intervention-by-intervention, with proof of results.
Founded in 2002, ICLOPS has pioneered data registry solutions for improving population health. Our industry experts provide comprehensive Population Health with Grand Rounds, ACO Reporting and Population Health and Effectiveness Research Solutions that help you both report and improve your performance. ICLOPS is a CMS Qualified Clinical Data Registry.
Image Credit: Charles and Adrienne Esselt