This article has been updated since it first appeared in print.
Illustration by Justin Tran
For any industry, keeping pace with technology is as much a marathon as it is a sprint. The boom in artificial intelligence technology in recent years has forced sectors from finance to fast food to consider improvements and overhauls to existing business models. A reckoning with the expansive data-processing tool, whether welcomed or begrudgingly accepted, has hit few industries as hard as health care: Over $30 billion has been invested in health care-focused AI companies in just the past three years, according to a report from the National Library of Medicine.
How these AI models fit into modern medicine varies: Some consumer-facing products already appear in waiting rooms and patient portals, while AI tools aimed at a range of research disciplines are being conceived and implemented rapidly.
Following a global trend, doctors and administrators at Richmond-area health systems have begun integrating AI technologies into their services, balancing the promise of a faster, smarter hospital with still-unanswered questions about AI’s functionality and ethics.
In July 2024, VCU Health hired Alok Chaudhary for a first-of-its-kind role in Richmond: chief data and AI officer. Chaudhary, also a vice president at the health system, oversees the use of AI across research and medical facilities. “The health system started recognizing the fact that this role is so critical to be at the table,” Chaudhary says.
VCU is enlisting AI to compile and analyze medical data, according to Chaudhary, with the aim of allowing medical professionals to spend more time with patients and less time searching for records and notes. But with this benefit come concerns about misinformation, a common problem across AI models that Chaudhary notes requires careful implementation to avoid.
“We have to make sure that whatever goes in front of our clinicians or users, it’s very well vetted,” Chaudhary says. “We are all learning, and we are trying to make sure we have [the] right guardrails in place.” VCU Health’s team of AI professionals — including legal experts, IT managers and medical leaders — ensures all information going in and out of its AI systems is checked, according to Chaudhary. “At the end of the day, what’s paramount for us, which will not change, is patient care.”
Bon Secours Mercy Health, which operates four hospitals and more than a dozen urgent care facilities and emergency rooms in the region, also looks to optimize doctors’ time through the use of conversational AI meant to process patients’ questions and connect them with specialists ahead of a clinic visit. The tool, named “Catherine,” is built for patients seeking orthopedic care to describe symptoms of knee, hip or joint pain and learn about treatments that Richmond-based providers can offer. “It’s navigation through a care journey, but it is not a care assistant,” says Dr. Mark Townsend, chief clinical digital ventures officer at BSMH.
The tool addresses the trend of patients collecting health information from search engines, where AI and unverified sources can fall short in expertise. A model like Catherine is intended to clarify and direct patients to medical professionals with information sourced from and backed by Bon Secours’ clinicians, Townsend explains.
AI integration in telehealth can go beyond patient referrals to suggest possible diagnoses to physicians. Professionals at HCA Healthcare’s hospitals use Viz.ai, a patient portal and communication tool that houses imaging results and symptoms; the app’s FDA-approved AI software also assists doctors with diagnoses, typically surrounding radiology and neurology.
“The app is pretty good at detecting if the patient is going to need the procedure,” says Dr. Kofi-Buaku Atsina, a neurovascular interventional radiologist at HCA’s Radiology Associates of Richmond. “It’s able to detect that, and then it sends a notification, and that enables us to act very quickly.”
Though Viz.ai isn’t accurate in every case, Atsina has found that in successful cases, the app has significantly cut response time, a benefit that is vital to providing high-quality care. “All our hospital systems are trying to minimize how much time is wasted to get the patient to have the treatment that they need, because outcomes in a way depend on time,” he says.
Sam Director (Photo courtesy Sam Director)
Even as the potential of AI advancements in health care grows, the path toward ethical usage is less obvious. Sam Director, a philosopher and professor of leadership studies and of Philosophy, Politics, Economics, and Law at the University of Richmond, studies the ethics of AI in medicine. His research found that “black box” models — AI systems that give information without revealing the reasoning behind the answer — do not conflict with informed consent, even though patients would not receive an explanation for their diagnoses and proposed treatments.
“The issue is that of the foundational pillar of medical ethics, which is respect for patients,” Director says. “Informed consent and autonomy [initially] seem to kind of rule out certain uses of AI, which are the ones that I think would be the most beneficial.”
As these groundbreaking tools continue to rapidly advance, medical leaders in Richmond suggest that getting AI right means finding a balance, staying on the cutting edge for the benefit of patients and doctors while remaining grounded in the values of medical ethics.
“We hear that, in health care, AI will replace physicians and so on. I don’t see that happening,” Chaudhary says. “AI will become better in terms of providing [physicians] an assistive technology. That’s how I see it. ... The tool is only going to become better and better at what it does.”