Fixing Clinical Science Requires a Moonshot
“We choose to go to the moon”
President John Kennedy’s statement instigated a monumental marshaling of resources to achieve a remarkable goal. Those famous words also established a powerful metaphor for aiming high. We need an equally monumental shift in purpose and commitment of resources for how we conduct clinical science. Nothing less than our nation’s health is at stake.
In my view, there are only three possible ways research efforts might proceed:
First, the conduct of research might not change, continuing instead to rely on observational studies and non-generalizable randomized trials (RTs). If so, the populations of subjects included in studies, even large ones, will remain poorly defined and will fail to include large enough groups of people with varying prognoses. As a result, our ability to maximally advance care will languish. To cite but one example, the massive National Lung Screening Trial (NLST) did not recruit systematically or randomly, so its generalizability is a concern.
Second, clinical research might be usurped by artificial intelligence (AI). Some believe studies will become comparative analyses gleaned from large data sets. Comparisons of medical treatments and care plans based on algorithms are already common and growing in volume. Some comparisons are even manufactured under the rubric of “machine learning.” However, these efforts are just a form of observational research, claiming superiority merely on the strength of the sheer size of their data sets.
I do not think AI, as conducted at present, is good research, as I have pointed out previously. Correct comparisons require three components:
- a random sample of a complete population's data, or the complete population of patients;
- outcome data gathered systematically to answer a targeted question; and
- planned interventions uniformly applied to groups being compared.
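The first component above, a reproducible random sample drawn from a complete population registry, can be sketched in a few lines. This is a minimal illustration, not a real registry: the field names and registry contents are hypothetical.

```python
import random

# Hypothetical full-population disease registry: every known case in the
# population, not a convenience sample. Fields are illustrative only.
registry = [
    {"patient_id": i, "stage": ["I", "II", "III", "IV"][i % 4]}
    for i in range(10_000)
]

def draw_study_sample(population, n, seed=42):
    """Draw a reproducible simple random sample (without replacement)
    from the full population registry."""
    rng = random.Random(seed)  # fixed seed so the sample can be audited
    return rng.sample(population, n)

sample = draw_study_sample(registry, 500)
print(len(sample))  # 500
```

Because the seed is fixed, another research group running the same query against the same registry would recover the identical sample, which is part of what makes such a study auditable.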
In my view, if research follows only the AI idea, it may hurt more than help us, especially if we blindly accept the “black-box,” non-transparent production of information. AI programs must face the scrutiny of high-quality research, not serve as a proxy for it.
Third—my hope—would be a shift toward systematic RTs using generalizable random samples of subjects and full population interventions. We have not examined fully enough, in my view, how to harness our advancing data gathering, sharing, and analytic expertise to enhance research.
How to Get There
How might we organize for better research? Here are some ideas:
1. Reorganize the National Institutes of Health to Consolidate Data
I will not recount the organizational chart of the NIH, as its description would fill this blog and dozens of others. The NIH, in all its glory, is a difficult-to-coordinate cacophony of efforts. Don’t get me wrong. Amazing work is done, in both basic and clinical research domains, but if better RTs are the goal and better full population studies are needed, then consolidation of NIH efforts will be required. Presently, in my view, the NIH is inefficiently designed and managed.
One consolidation approach would be to have all clinical research efforts organized by the existing Center for Information Technology (CIT). Each individual institute presently manages both basic and clinical research. If clinical research with RTs becomes systematic, like the Gallup model, there would be no need for each institute to house, maintain, and query any group of research subjects other than those in full population disease registries. These disease registries could be collated and maintained by CIT. Each individual institute might advise on which diseases should become registries, but they would not have to develop their own. For example, if the National Cancer Institute decided to conduct research on breast cancer patients, the CIT disease registry event database would be queried. In other words, if there were a national disease registry database, clinical research efforts for all centers could be consolidated.
2. Encourage Creation of Private Sector Disease Registries
While I still believe that government is the best leadership source for research efforts (the NIH paid for nearly a third of all research conducted, even outside the auspices of the NIH), that is not necessarily the only option. Another model would be private enterprise. If I ran an electronic health record (EHR) company, for example, I would maintain disease registries. This idea is not a foreign one; I have seen EHR/insurance partnerships intent on building disease registries for research with their patients. While private enterprise may hasten the development of disease registries, however, those registries will still need to be linked across disparate vendors' electronic databases to yield full population random samples. Such sharing among competing private enterprises may require government leadership.
3. Foster Tools to Search Large Data Sets for Specific Disease Conditions
A model for full sample query of different datasets across multiple sites of medical care might be accomplished by a “disease research registry app” capable of trawling large data sets for disease domains. This, also, is not a new idea. Our government is now demanding that data be shared, and data sharing models are being developed in response. It’s only a matter of time before some bright young data programmer figures out how to query any electronic data set to find specific disease conditions. I would even wager that machine learning will be pointed at this problem in the future.
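The "disease research registry app" idea above can be sketched as a single query run across disparate EHR extracts. Everything here is an assumption for illustration: the record layout, the source names, and the function itself are hypothetical, though the ICD-10 prefix C50 really does denote malignant neoplasm of the breast.

```python
def find_condition(datasets, icd10_prefix):
    """Scan heterogeneous EHR record sets for diagnoses whose ICD-10
    code starts with the given prefix; return matches tagged by source."""
    matches = []
    for source_name, records in datasets.items():
        for rec in records:
            codes = rec.get("diagnoses", [])  # tolerate records with no codes
            if any(code.startswith(icd10_prefix) for code in codes):
                matches.append({"source": source_name,
                                "patient_id": rec["patient_id"]})
    return matches

# Two hypothetical EHR extracts from different vendors/sites.
ehr_a = [{"patient_id": "A-1", "diagnoses": ["C50.911"]},  # breast cancer
         {"patient_id": "A-2", "diagnoses": ["E11.9"]}]    # type 2 diabetes
ehr_b = [{"patient_id": "B-7", "diagnoses": ["C50.112"]}]  # breast cancer

cases = find_condition({"ehr_a": ehr_a, "ehr_b": ehr_b}, "C50")
print(len(cases))  # 2 matches found across both systems
```

The hard part in practice is not this loop but getting the disparate systems to expose their data in a common shape, which is exactly why the sharing mandates mentioned above matter.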
4. Promote Full Population Studies as the Foundation for Defining Quality of Care
The process of paying for quality of care must evolve beyond the present scientifically unsound efforts to, instead, full population studies. Paying for quality is laudable, but, presently, many are unsure whether quality is actually being measured and whether paying for quality (as currently measured by processes and one-point-in-time measures) actually makes care better for patients. My next post in this series will compare and contrast the present quality payment system versus full patient population studies, like the stock-market model, and demonstrate that we should only be paying for a “prove you improved” model.
Research can be better. Too many studies are wrong and cannot be replicated. There is a reason: we are doing studies poorly. Getting better will require new ideas about assuring random samples of full populations, or full sample research. In addition, we will need to think hard about how we should develop our leaders in both the private and government sectors to foster better research models. It is time to go to the moon for better research.
Founded as ICLOPS in 2002, Roji Health Intelligence guides health care systems, providers and patients on the path to better health through Solutions that help providers improve their value and succeed in Risk. Roji Health Intelligence is a CMS Qualified Clinical Data Registry.
Image: SpaceX