Every conversation with a patient is an exercise in big data analysis. Your patient’s appearance, changes in mood and expression, and eye contact are data points. The illness narrative is rich in semiotics: pacing, timing, nuances of speech and dialect are influenced by context, background and insight, which, in turn, reflect religion, education, literacy, numeracy, life experiences and peer input. To all this, add personality traits such as recalcitrance, acceptance and personal philosophies.
Taking a history generates a wealth of data. Mix in physical findings of variable reliability, laboratory markers of variable specificity, and imaging bits and bytes, and you have “big data.” Then you mine these data to weigh the probabilities of the potential causes of a complaint, and on that basis you begin to assign value to the numerous options for care.
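To make that probabilistic mining concrete, here is a minimal sketch of the arithmetic it implies: Bayes’ rule applied to one hypothetical complaint and one hypothetical lab marker. The numbers are invented purely for illustration.

```python
# Bayes' rule for a single diagnostic question, with hypothetical numbers.

def posterior(prior, sensitivity, specificity):
    """Probability of the suspected cause given a positive marker."""
    true_pos = prior * sensitivity               # sick and test positive
    false_pos = (1 - prior) * (1 - specificity)  # well but test positive
    return true_pos / (true_pos + false_pos)

# A complaint suggests a cause with a 10% pretest probability; the marker
# is 90% sensitive but only 80% specific (the "variable specificity" above).
print(round(posterior(0.10, 0.90, 0.80), 2))  # 0.33: suggestive, not certain
```

Even a single test leaves the answer probabilistic; a real encounter multiplies many such updates, each resting on the soft data in the history.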
So armed, the physician next needs to factor in the benefits and harms of multiple treatments, derived from study populations that never perfectly reflect the situation of the individual sitting in the chair next to you, your patient. This is the information necessary to empower your patient to make rational choices from the menu of options. That is clinical medicine. That is what we do many times a day, to the best of our ability and to the limits of our stamina.
Take that, Watson. You need a lot more than 90 servers and megawatts of electricity to manage my bedside rounds.
The Doctor-Patient Relationship Cannot Be Quantified
Technical insufficiency compared to our cognitive birthright is not the only reason that Artificial Intelligence (AI) cannot replace the physician. Even if Watson could grow its server brain to match ours, it still could not find measurable quantities for the independent variables captured during a patient encounter, or for the personal values that temper that patient’s choices. Life does not have tidy independent and dependent variables; the things that matter to us sit on both sides of a regression model. Watson would need rules for handling that violation of statistical assumptions, and none that generalize exist.
Somehow, our brains carry a measuring instrument that no data query can find or assess, one we innately understand but cannot communicate. Our brains also seem to understand statistics intuitively: they know that the variations around the regression lines (the residuals) mean more to us than the models themselves. Sure, if there is something discrete to know, like which books we buy, or some other simple, measurable, deterministic item, like the answer to a game show question, Watson will kick most, and maybe all, of our butts.
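Here is a minimal sketch of that claim about residuals, using simulated data; the model and the numbers are invented for illustration.

```python
# Fit a regression line and compare the trend to what it leaves unexplained.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)               # a population-level predictor
y = 2.0 * x + rng.normal(0, 8, size=50)  # plus large individual scatter

slope, intercept = np.polyfit(x, y, 1)   # the "model": a regression line
residuals = y - (slope * x + intercept)  # what the model cannot explain

print(f"fitted slope: {slope:.2f}")                # the population story
print(f"typical residual: {residuals.std():.1f}")  # the individual story
```

The fitted line summarizes the population, but for any one individual the residual can be as large as the trend itself, which is precisely the gap between evidence about groups and the care of a person.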
But, what if what is important to us is neither deterministic nor discrete? What if life is more importantly measured in “when” than “if”? And what if the “when” and “how we feel about the when” are intertwined? What if medical life is not even measured in outcomes, but, instead, in terms of relationships that foster peaceful moments? In this reality, Watson has no place.
Nor Is It an Advantage to Read Every Research Study Ever Published
Ironically, Watson’s Artificial Intelligence is also its Achilles’ heel. For better or, for reasons to be explained, worse, Watson is capable of reading the “World’s Literature.” Our desires and motives to improve the care of individuals are being buried in reams of codependent, biased, unrestricted, marketed, false-positive- or false-negative-laden, and poorly studied information that sees the light of Watson’s day, because Watson can read every report published in the nearly 20,000 biomedical journals. A 60 Minutes report on AI told us so.
According to that report, some 8,000 research reports are published daily. That is Watson’s problem. Watson fails to recognize that it is more important to know what we should not read than to be able to read it all. There is just too much dubious information being foisted on unsuspecting readers, whether those readers have eyes or algorithms.
Science is the glue that holds medical care together, and if science is subjugated, we will be lost. I was an editor for more than 25 years. Let me tell you one story. I fought not to publish a Phase 2 study of a drug for patients with cancer (I am an oncologist). The study was not fatally flawed by design, just premature. My reasoning was biased, I admit: I have seen many Phase 2 studies fail to be replicated after better-designed Phase 3 studies were performed. Science is about accuracy and redundancy and timelessness and process, not expediency. I failed in my attempt to discourage publication, and the paper became highly cited (highly cited papers are a goal for journals), making me look like I lacked sufficient insight into its importance. Sure enough, however, a better-designed Phase 3 study eventually rejected the hypothesis born of the Phase 2 study.
I may have been wrong about the Phase 2 study, but that is not the point. The point is that Watson knows of both studies. You only need to know one of them. How would Watson handle the redundant nature of the studies and their contrary insights? (I always wondered whether that negative study was cited as often as the positive, premature study. I bet Watson would know.)
Artificial Intelligence Is in Dire Need of Phase 1 Testing
But we are perhaps being too tough on AI. I admit that we are not writing about that specific program but, instead, using it as a metaphor for big data analytics and messy regression models. I did look, superficially, to see whether Watson, as a tool, has been subjected to study. I went to PubMed and typed in “Watson artificial intelligence,” and found no pertinent randomized trials. I did see studies trying to match patients to clinical studies, but found no outcome studies.
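For readers who want to repeat that search programmatically, here is a minimal sketch using NCBI’s public E-utilities interface to PubMed. The query term is the one above; the hit count will, of course, change over time.

```python
# Reproduce the PubMed search via NCBI E-utilities (esearch, JSON output).
import json
import urllib.parse
import urllib.request

term = "Watson artificial intelligence"
url = (
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
    "?db=pubmed&retmode=json&term=" + urllib.parse.quote(term)
)
with urllib.request.urlopen(url) as response:
    result = json.load(response)["esearchresult"]

print(result["count"], "matching records")  # total hits for the query
print(result["idlist"][:5])                 # the first few PubMed IDs
```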
Why this is important to me, and why it should be important to you, is that the 60 Minutes episode told of a patient who was treated after a “recommendation” from Watson. I have to assume that the NIH vetted the ethical standards for a Phase 1 study, and that the patient was informed. We are left to assume, also, that the information found by AI was reliable and adequately tested. After all, this unfortunate, Watson-compliant patient succumbed to an “infection” several months after the treatment that Watson uncovered.
I worry about the veracity of the information spewed by the algorithm, and about how on earth the researchers planned to learn anything from the proposed intervention and subsequent treatment. Science requires universal aims and adequate comparisons, so, in my view, any AI solution for patients should be subjected to stringent public and scientific testing. AI, I firmly believe, is in dire need of Phase 1 testing.
Science can be better. Watson will not advance science; scientific inquiry will. Better designs for clinical care, and insights from scientific data, need to be developed and implemented. We do not need massive amounts of data but, rather, small amounts gathered in thoughtfully planned studies. And with better science, we will not need AI. Instead of banking, or breaking the bank, on AI, we should use our remarkable brains to learn via scientific planning, introduce valid scientific insights into the big data dialogue we call the “history,” and do so in the service of what we call “patient care.”
Watson and other systems may do a wonderful job of determining which books I buy, and, from a medical perspective, they might be able to pick a particular antibiotic for a known infection, given the deterministic nature of that task (a toy sketch follows). But treating infection, as an example, is a small-data part of what we do. We help sick people, and for that big data task, Watson will not be sufficiently insightful.
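To show what “deterministic” means here, consider a toy lookup. The pathogen-to-drug pairings are illustrative only and are not clinical guidance.

```python
# A toy illustration of a deterministic clinical task: known input, known
# output. The pairings below are illustrative, not clinical guidance.
FIRST_LINE = {
    "Streptococcus pyogenes": "penicillin",
    "Escherichia coli (uncomplicated UTI)": "nitrofurantoin",
}

def pick_antibiotic(pathogen: str) -> str:
    # No values, context, or relationships enter this decision, which is
    # exactly why a machine can handle it.
    return FIRST_LINE.get(pathogen, "no rule: consult a clinician")

print(pick_antibiotic("Streptococcus pyogenes"))  # -> penicillin
```

Everything else in this essay, the residuals, the values, the relationships, lies outside any such table.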