Arlen Meyers, MD, MBA, is the President and CEO of the Society of Physician Entrepreneurs on Substack and an advisor at Cliexa, MI10, and Thinkhat.
Before stepping into a doctor's office, many patients turn to WebMD or similar online tools to self-diagnose their symptoms. But artificial intelligence is taking this a step further—moving beyond symptom checkers and into hospitals, where it's helping doctors make diagnoses, recommend treatments, and even interact with patients.
Now, imagine a patient arriving at their doctor’s office armed with a list of questions, talking points, and treatment options—generated entirely by AI. But when they bring it up, their doctor replies:
"I'm sorry, but I'm not aware of that treatment and don't know anyone who does it.”
Frustrating? For patients, it can be. And while this scenario isn’t yet commonplace, patients will rely on AI more and more in the coming years, so doctors need to be ready.
While AI’s potential in healthcare is vast, its adoption isn’t without challenges. What’s holding it back? How do physicians and patients really feel about this shift? And what needs to be considered to improve overall adoption?
According to McKinsey & Company, healthcare organizations are increasingly implementing AI, particularly for operational efficiencies, though its role in direct clinical care is still evolving.
The study found that most healthcare institutions are either integrating AI into their operations or testing its potential. Yet, as the study observes, “despite the industry’s general interest in using AI, there is still a consistent portion of respondents without any plans to pursue gen AI or who are maintaining a wait-and-see approach.”
Interestingly, the healthcare organizations that are implementing AI are opting for a co-building, partnering structure to ensure successful implementation and adoption. This suggests that the most successful platforms prioritize both innovation in patient care and the concerns of doctors and healthcare institutions. But what exactly are those concerns?
“The classic analogy I always love to give is that back in the day, Newton sat under a tree and watched an apple fall and thought, What made the apple fall is what makes the planets go around,” Dr. Perli explained. “But these days you don't need this kind of hypothesis-based understanding because you can have a camera sitting in front of the apple, the apple falls down 1000 times, the camera takes pictures, and the camera now can predict when the apple will fall—it might even figure out F=ma.”
And indeed, AI models that are being built to observe and map the world are becoming more and more sophisticated by the day, thus opening the door for not only expedited drug discovery, but also improved diagnostics, the reduction of administrative burden (both in healthcare and outside of it), and beyond.
So what developments in the HLS industries are being most affected by AI, where is its adoption still lagging, and what does the future hold?
Issues still remain. As panelist Shah Nawaz, Vice President of Technology & Digital Transformation at Regeneron, explained: “With any technology, ecosystem-readiness is important.”
For many physicians, the trepidation comes down to trust and privacy. Doctors want assurance that AI use in clinics won’t compromise patient data security and, if they are using AI for diagnostics, that the AI will not have potentially deadly blind spots.
These concerns are reflected in the numbers. A 2024 AMA survey found:
While physicians grapple with trust and privacy issues, patients have their own set of concerns about AI’s role in their care.
When discussing this issue, the patient element often gets lost, yet it is an important linchpin, especially as pre-visit AI tools get implemented and move digitally out of the clinic. It is one thing for a patient to ask an AI to check their symptoms; it is another when doctors ask their patients to interact with a pre-visit AI platform.
Developers and adopters have a number of patient concerns to weigh when building and implementing their platforms. A 2021 study analyzing patient perspectives on AI highlighted several, including:
The authors found that patient acceptance of AI is contingent on mitigating these risks, highlighting just how important safety is when it comes to enthusiasm around AI adoption in healthcare. Neglecting these safeguards could deepen mistrust rather than build confidence in AI-driven care.
With physicians’ concerns and patients’ need for secure and ethical AI, what can be done to prepare for the AI of tomorrow?
As AI continues to evolve in healthcare, its success will depend on collaboration between patients, physicians, and developers. Addressing trust, transparency, and ethical considerations today will pave the way for a more effective and accepted AI-driven future.