If Not Now, It’s Too Late: Clinical Science Needs Fixing
In 1967, the year I graduated from high school, my family’s television required “rabbit ear” antennae topped with aluminum foil. Our farming family had little time to watch TV, but when we did, the ritual included a side trip to reset the antennae’s angle to ensure good reception. Today, I watch a clear picture on myriad devices, no antennae needed.
In the 1980s, my trips to a library to find medical literature were few. A single trip to the library would take hours and net only a small number of papers. Now, I obtain articles on any topic in a matter of minutes.
In the late 1980s and early 1990s, when I returned to school to learn decision analysis and health services research, my decision models were simple due to constraints imposed by computing power. Only when I completed the laborious number-crunching process could I slowly fashion insights about care. Today, models are complex, sophisticated instruments running at light speed.
These remarkable advances in technology are obvious. But has the science of caring for people made the same meteoric advances?
Clinical Science Is Regressing
I would argue that our ability to help patients through the technology of clinical science is actually regressing. The tools of science are not what impede us; it is the data we study and the weaknesses in our study designs that hold us back.
This may seem a specious claim if one notes only the sheer number of resources for medical information. PubMed claims 29 million citations and indexes 6,000 biomedical journals, while tens of thousands more journals are produced. How can we be stagnant, or even backtracking, given this treasure trove of information?
First, the sheer number of publications is a symptom of the problem, not a solution. Nearly any sort of paper will see the light of day in some journal, somewhere; researchers can, even now, pay to be published. The mass of information is so unstructured and scientifically unsound that it would take an act of a computing God to find what matters floating in the mess.
The biggest problem, then, in my view, is the quality of the clinical science foisted upon us. A faster way to garner superior clinical knowledge is desperately needed; once that remedy is found, the number of poor insights and publications will decline as a byproduct.
How Clinical Science Must Improve to Be Relevant to the Patient
The weakest components of the way we presently conduct the scientific method are under-emphasized and under-appreciated. My opinion is informed by more than 25 years of assessing scientific studies as a medical editor, a teacher of evidence-based medicine, and a researcher and writer. Critiques of evidence usually focus on a study’s “internal validity” (how well the study was carried out). In my view, however, the real problem is applying what we learn from a study to help patients, and present studies are limited in their ability to inform the vagaries of individuals.
To refine how insights from studies may be accurately generalized, the following components need improving. (I only hint at each deficiency here; future blog posts will explore them in depth.)
- Studying the wrong populations in the first place. Our penchant to publish leads to research built on convenience, non-random samples of patients, leaving us uncertain whether study results are generalizable.
- Failure to incorporate important variations in the clinical and personal characteristics of studied populations. We flatten our insights by failing to include, and plan for, variation among the individuals being studied. Hence, it is difficult to extrapolate study results to individuals whose characteristics, or combinations of characteristics, are unstudied.
- Failure to mask intervention and outcome. If a participant or researcher knows the treatment or plan of care, and the outcome variable is subjective in nature, randomization is wasted and no insight is possible.
- Failed randomization when researchers are looking for small effect sizes. Random imbalances in the numbers of patients in prognostic subgroups can negate insights, especially when only small numbers of patients differ in outcome event rates across the studied options (see the simulation sketch after this list).
- Inappropriately using data from comparative observational studies to inform patients. Observational comparative data, including big data, are either wrong or unprovable; acting on them is dangerous.
- Poor presentation of medical evidence to patients. Obfuscation is the norm.
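To make the randomization bullet above concrete, here is a minimal simulation sketch in Python. Every number in it (trial size, high-risk share, event rates, effect size) is an illustrative assumption chosen for this sketch, not a figure from any study; the only point is to show how chance imbalance in a prognostic subgroup can dwarf a small true treatment effect in a small trial.

```python
import random

# Illustrative assumptions (chosen for this sketch, not taken from any study):
# a 60-patient trial in which 20% of patients are "high-risk" (60% event rate)
# and the rest low-risk (10% event rate), and the treatment confers a small
# 3-point absolute risk reduction, the kind of small effect described above.
N = 60
HIGH_RISK_SHARE = 0.20
EVENT_RATE = {"high": 0.60, "low": 0.10}
TRUE_ARR = 0.03            # true absolute risk reduction from treatment
SIMULATIONS = 10_000


def expected_rate(arm, arr):
    """Average expected event rate of an arm, given its subgroup mix."""
    return sum(max(EVENT_RATE[p] - arr, 0.0) for p in arm) / len(arm)


random.seed(1)
swamped = 0

for _ in range(SIMULATIONS):
    # Assemble the cohort, then randomize 1:1 by shuffling and splitting.
    cohort = ["high" if random.random() < HIGH_RISK_SHARE else "low"
              for _ in range(N)]
    random.shuffle(cohort)
    treated, control = cohort[: N // 2], cohort[N // 2:]

    # With perfectly balanced arms, the expected difference equals TRUE_ARR;
    # any deviation comes purely from chance imbalance in high-risk patients.
    observed_diff = expected_rate(control, 0.0) - expected_rate(treated, TRUE_ARR)
    if abs(observed_diff - TRUE_ARR) > TRUE_ARR:
        swamped += 1

print(f"Simulated trials where prognostic imbalance alone exceeds the "
      f"true effect: {swamped / SIMULATIONS:.1%}")
```

Under these made-up numbers, chance imbalance alone is larger than the effect being sought in a large share of simulated trials; the usual counters, larger samples or stratified randomization, are exactly what small studies chasing small effects tend to lack.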
Our present models of information management are outdated, slow and expensive, and information is often irrelevant by the time it reaches the patient’s bedside. We need a better approach to clinical science.
Founded as ICLOPS in 2002, Roji Health Intelligence guides health care systems, providers and patients on the path to better health through Solutions that help providers improve their value and succeed in Risk. Roji Health Intelligence is a CMS Qualified Clinical Data Registry.
Image: feliperizo.co