Artificial Intelligence (AI) advances are big news, but the daily onslaught of AI applications in health care is overwhelming. There’s no question that health care is fertile ground for AI. Health care is expensive, highly technical, complicated, and frustrating for patients and providers alike. It’s also rich in data, a mostly untapped resource for both clinical and performance improvement. These factors make health care a perfect environment for AI, which could lead to better and faster treatment for patients. And there’s a lot of money in the system.
Health care is also struggling with a worsening physician and staffing shortage. It threatens to become a crisis. AI has a solution for that, too, by filling roles previously occupied by humans. But can AI really substitute for physicians and other providers?
As your ACO navigates its path to Value, you will undoubtedly begin to consider AI and how it might benefit your organization. But be careful and smart. AI is pushing the envelope on health care roles, patient services, and data security. It’s showing up, unexpectedly, everywhere. Before you leap into AI applications, you need to understand not only the potential benefits, but also the short- and long-term consequences—for better and worse.
Here’s what’s at stake:
Are Chatbots Really Better than Physicians at Communicating with Patients?
That’s the surprising implication of a recent study, which maintains that AI chatbots not only deliver higher quality responses than physicians in patient communication, but also appear to show greater empathy. In the study, 195 patient questions from Reddit’s r/AskDocs were posed to chatbots and physicians for response. Health care professionals then evaluated the responses for quality and empathy. They rated 78.5 percent of chatbot responses as good or very good quality versus 22.1 percent of physician responses, and 45.1 percent of chatbot responses as empathetic or very empathetic versus 4.6 percent. Ouch! That stings.
These findings seem to have been readily accepted, if you believe the press accounts and various blogs that propose using chatbots for patient education, navigation, coaching, and outreach. One reporter suggests that chatbots could even respond to direct questions in patient portals.
But before unleashing chatbots on patients, here are a few points to consider. First, did anyone ask the patients how they felt about reviewing their treatment plan with chatbots? Second, where is the usual vetting of the findings? The credentials of the health care evaluators notwithstanding, what potential biases does the study incorporate, and what were the constraints on physicians’ responses?
Time is probably one constraint, and venue is certainly another. Physicians were almost certainly cautious in their responses to the patient questions, given liability concerns. And since the study relied on subjective human evaluations of physician and chatbot responses, could the findings reflect the evaluators’ own biases or a preexisting belief that physicians are poor communicators? The full study sits behind the JAMA paywall, limiting many readers’ access to the complete data, so enthusiastic acceptance of the findings most likely rests on headlines alone.
This is the danger with rapid adoption of AI technology. It’s all too easy to be mesmerized by the shiny technology that promises powerful new ways to save money and time. But Artificial Intelligence incorporates the very human biases of its creators. Particularly in health care, there is a real risk that AI can lead to inequities—programmed biases in the form of gender and race-biased algorithms.
This is not to say that chatbots are a detriment to health care. Deploying chatbots to write patient education materials, contribute to health literacy efforts, and otherwise support human roles in medicine could be very effective. But we need to fully understand the implications for physicians and patients, and to evaluate the potential for unintended consequences.
When we consider ceding direct communications to technology so that physicians have more time “to treat patients,” what do we actually mean? Isn’t the intent of a patient portal to converse with your own providers about your condition or treatment? Why would we cede such essential conversations to technology, potentially eroding trust between patient and physician?
Three AI Guidelines for ACOs
Your ACO may not be thinking about AI right now. This spring’s NAACOS conference had no agenda items on AI. But that will change quickly, as pressure to live under Risk forces ACOs to invest in technology and solutions that substantially improve performance.
Additionally, many, including ACO stakeholders, are eager to deploy AI solutions for the work that ACOs do. To avoid getting swept up by the momentum, carefully consider your options, support your participating clinicians in clinical applications that involve AI, and look for opportunities to collaborate.
Here are some guidelines for how AI might strengthen your organization—and weaken it:
1. Do: Use AI to analyze complex data to identify risk, patterns, and variations in health care services and costs.
The proven value of AI is its ability to dig into data. AI can rapidly and effectively analyze data compiled from multiple sources: genetic data, utilization, clinical data, and costs. This aligns well with Value-Based Care. Without AI, using data to develop personalized treatment plans based on multiple patient data points is virtually impossible.
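As a rough illustration of the kind of multi-source analysis described above, here is a minimal sketch that joins claims, clinical, and outcome extracts on a shared patient ID and scores patients with a simple logistic regression. The column names, sample values, and choice of model are hypothetical assumptions for illustration, not a prescribed method.

```python
# Hypothetical sketch: combine claims, clinical, and outcome data, then flag
# high-risk patients with a simple model. Column names are illustrative.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Assume three extracts keyed by a shared patient_id (names are hypothetical).
claims = pd.DataFrame({"patient_id": [1, 2, 3, 4],
                       "ed_visits": [0, 3, 1, 5],
                       "annual_cost": [1200, 18500, 4300, 27800]})
clinical = pd.DataFrame({"patient_id": [1, 2, 3, 4],
                         "a1c": [5.6, 9.1, 7.2, 10.4],
                         "chronic_conditions": [0, 3, 1, 4]})
outcomes = pd.DataFrame({"patient_id": [1, 2, 3, 4],
                         "admitted_next_year": [0, 1, 0, 1]})

# Join the sources into one patient-level table.
data = claims.merge(clinical, on="patient_id").merge(outcomes, on="patient_id")

features = ["ed_visits", "annual_cost", "a1c", "chronic_conditions"]
model = LogisticRegression().fit(data[features], data["admitted_next_year"])

# Score each patient; higher probabilities suggest candidates for outreach.
data["risk_score"] = model.predict_proba(data[features])[:, 1]
print(data[["patient_id", "risk_score"]].sort_values("risk_score", ascending=False))
```

In practice, an ACO would validate any such model against its own population and, per the caveat below, test it for biased impacts on population subgroups before acting on the scores.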
Episodes of Care, for example, as explained in previous articles, are an essential method for ACOs to compare procedure costs and work with specialists to reduce variation, to address persistent poor control in patients with chronic conditions, and to target patients for clinical review based on medication regimens. At the root of Episodes are AI algorithms that process clinical variables across a full patient dataset to identify opportunities for improvement. Without AI-driven Episodes, your ACO’s ability to examine cost drivers, evaluate improvements, and generate data-driven initiatives falls short.
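Below is a similarly hedged sketch of episode-level cost comparison: claims rolled up by episode type and specialist, with a simple flag for above-median average cost. Real episode grouping depends on far richer clinical logic; the episode types, threshold, and data here are illustrative assumptions.

```python
# Hypothetical sketch of episode-level cost comparison across specialists.
import pandas as pd

episodes = pd.DataFrame({
    "episode_type": ["knee_replacement"] * 4 + ["colonoscopy"] * 4,
    "specialist": ["A", "A", "B", "B", "C", "C", "D", "D"],
    "total_cost": [31000, 29500, 41000, 43500, 2100, 1900, 3600, 3900],
})

# Summarize average cost and episode volume per specialist within each episode type.
summary = (episodes.groupby(["episode_type", "specialist"])["total_cost"]
           .agg(["mean", "count"])
           .reset_index())

# Flag specialists whose average cost exceeds the episode-type median by more than 20%.
medians = summary.groupby("episode_type")["mean"].transform("median")
summary["variation_flag"] = summary["mean"] > 1.2 * medians
print(summary)
```

A flagged specialist is a starting point for conversation about practice variation, not a verdict; risk adjustment and clinical context still have to come from humans.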
Here’s another good application. Radiology has used AI to advance dramatically in identifying tumors and other findings on images. AI has the potential to make clinical facts clearer and to enable more timely decisions that could be less costly. Already AI has fostered big improvements in diagnostics and, with deep learning, built capability for precision medicine.
Once again, a caveat: Algorithms and their results must be scrutinized to determine effects on population groups and health equity, so that biases are not baked into ACO solutions.
2. Do: Evaluate the use of AI to create patient materials (for review by clinicians).
ACOs have a responsibility to help patients navigate medical decisions and provide cost transparency. Patients will need factual material to understand their conditions and treatment plans, examine choices and the effects of decisions, and engage family members in the process. Chatbot-created communications, with clinical review, could be a great, efficient way of building the tools that patients need.
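To make the “with clinical review” part concrete, here is a minimal, hypothetical sketch of a review gate in which nothing reaches patients until a clinician approves it. The draft_with_chatbot function is a stand-in for whatever generation tool an ACO might adopt; the names and workflow are assumptions, not a specific product.

```python
# Hypothetical review gate for chatbot-drafted patient materials: drafts are
# held until a clinician signs off, and release is blocked in code otherwise.
from dataclasses import dataclass, field

@dataclass
class PatientMaterial:
    topic: str
    draft_text: str
    approved: bool = False
    reviewer: str = ""
    reviewer_notes: list = field(default_factory=list)

def draft_with_chatbot(topic: str) -> PatientMaterial:
    # Placeholder: in practice this would call your chosen generation service.
    return PatientMaterial(topic=topic, draft_text=f"[Draft education material on {topic}]")

def clinician_review(material: PatientMaterial, reviewer: str, approve: bool, notes: str = "") -> None:
    material.reviewer = reviewer
    if notes:
        material.reviewer_notes.append(notes)
    material.approved = approve

def publish(material: PatientMaterial) -> str:
    if not material.approved:
        raise ValueError("Material cannot be released without clinician approval.")
    return material.draft_text

handout = draft_with_chatbot("managing type 2 diabetes")
clinician_review(handout, reviewer="Dr. Lee", approve=True, notes="Verified A1c targets.")
print(publish(handout))
```

The point of the design is that clinical approval is enforced by the workflow itself rather than left to process discipline.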
3. Don’t: Replace direct patient communication with AI.
Use an iterative, thoughtful approach that assesses the benefits and impacts at each step. Solving staff shortages with AI puts you on a downward spiral toward permanent understaffing. You may be able to judiciously explore AI-driven communications and education, such as tools and feedback for self-management programs, if the program is piloted with human involvement and guidance. Understand, too, that AI-adopted programs will increase, not decrease, your data analysis costs, because you will need to evaluate their effects.
The urgency to use data more wisely will push your ACO toward AI solutions. Be aware that technology is never neutral, and carefully plan your human and non-human resources. AI will create both immense benefits and harms. How you steer AI adoption will help ensure that your ACO reaps more of the former than the latter. Now is the time to learn, so your organization won’t drown in the tidal wave of AI enthusiasm.
Founded in 2002, Roji Health Intelligence guides health care systems, providers and patients on the path to better health through Solutions that help providers improve their value and succeed in Risk.