ORLANDO, Fla.—What sort of potential does artificial intelligence hold for the world of medicine and medical practice? Does it pose a threat to job security, or will it make work better by bolstering access, learning and efficiency? It’s too soon to tell, of course, but in this round of our series based on a panel discussion on AI at Minimally Invasive Surgery Week, experts built on what we do know to speculate on how AI could be used to support physicians, and what sort of pitfalls need to be avoided to make AI a bonus and not a burden in healthcare.

Paul G. Toomey, MD
General Surgeon
Florida Surgical Specialists
Bradenton
Robert B. Lim, MD
General and Metabolic and Bariatric Surgeon
Atrium Health, Carolinas Medical Center
Charlotte, N.C.
Richard M. Satava, MD
Professor Emeritus of Surgery
University of Washington Medical Center
Seattle
Eric A. Singer, MD
The Dave Longaberger Endowed Chair in Urology
Professor of Urology and Bioethics
Chief, Division of Urologic Oncology
Director, Urologic Oncology Fellowship
The Ohio State University Comprehensive Cancer Center
Columbus

Paul G. Toomey, MD: We looked at AI in the literature over the last 20-plus years, and 73% of the articles for the top five specialties were in radiology and pathology, which shows that AI in medicine is more about image processing than anything else for now; everything else is lagging behind. There’s a big push in medicine to implement AI, but in preparation for this discussion, I talked to some of the companies and found the radiology and pathology platforms actually rather underwhelming. Talk to any of the pathologists, and they say, “Yeah, it’s just not there yet; too many false positives.” And the radiologists say the same thing. I also don’t think the general public is buying into it. And it would be difficult to implement because the pathologists’ main concern is that AI will take their jobs. The very specialty driving the technology feels threatened by it. I think the better way to implement it is that AI can be used to make the pathologists and radiologists superhuman, but I don’t know that they see it that way.


Eric A. Singer, MD, MA, MS, FACS, FASCO: I’m very skeptical that AI is going to replace pathologists and radiologists in practice. This may be naive, but I don’t think that’s any more likely than surgeons being replaced anytime soon just by pushing a button on a robot to do a surgery. What I do think about, working in a tertiary referral center, is that we have pathologists who specialize in very narrow things; pathologists and radiologists from community centers have to be competent and up to date in everything and that’s a huge volume—it’s not sustainable. What our pathologists see multiple times per month someone in a community center might see once or twice in their career. How can we avoid missed or delayed diagnoses? I think we’re going to get some benefit from AI by being able to pair community practice and university tertiary referral centers to have the image processing that can flag people to findings that are rare. Where you live, your ZIP code, shouldn’t determine your health outcomes as much as it does right now.

Dr. Toomey: I ran across one radiology AI business where AI flags the more important images to read first. Let’s say AI detects a brain bleed on the CT scan and brings that to the top for the radiologist. At Moffitt Cancer Center, one of the radiologists pointed out that they use eye tracking to improve the flow of the radiologists, for example, trying to get them to spend less time on false positives. So, they’re trying to improve their own radiologists, making them more superhuman. The way I see it, like the anesthesiologists at my hospital who have six CRNAs in six rooms, the pathologists will have six different forms of AI doing their reads, flagging images that are uncertain. I think that will improve the output. Any other thoughts on using AI in medicine?

Robert B. Lim, MD: [I see a role] in education: maybe machine teaching with machine learning. There are great simulation devices, practice machines, that give trainees or students or residents great practice—high fidelity with low consequence. If you send that trainee to practice for an hour, the robot will give a ton of feedback; for example, you were off the screen 25% of the time, or you grasped too hard 50% of the time. What would take a teacher several hours and a number of cases in real patients with potentially disastrous consequences, the student can learn in an hour safely. I may be blurring the lines a little bit—is it AI? But the point is, you can use it to teach.

Richard M. Satava, MD: I think the analogy of pathology is a good example of image recognition and pattern matching. Imagine submitting a slide to the pathology department and using AI/machine learning, like having thousands of pathologists instantaneously looking at that one slide and comparing it with the millions of slides in the Armed Forces Institute of Pathology database, one of the greatest references we have in healthcare. I think of AI as the human brain on steroids, or having access instantaneously to millions of people to answer any question you might have. But it all depends on the database you use. Because of the many terms I’ve noticed being used to describe AI, one that’s missing is “trustworthiness.” Do we trust the answers generated by the prompts we put in? We don’t really have a good handle on that yet.

Dr. Singer: I think your perspective is spot-on. As more companies develop their own proprietary algorithms, we will lose transparency. How do we verify that what they’re doing is clinically appropriate and ethically sound? Think about the biases we have in publication right now, the positivity bias, where positive findings are much more likely to be published. This could just amplify the biases we currently have in our literature. Consider this: There have long been calls for registries of innovative procedures or techniques that people have tried—whether or not they’re successful—so that the next person with a similar idea can see what has already been tried 19 times that didn’t work and then decide to explore something else. AI would be a fantastic way to help tap into that. But if we don’t have a database that includes our unpublished failures, our unsuccessful trials, it’s not going to work. I think about the racial bias built into some of our systems, like calculating glomerular filtration rate, which was largely based on unsound science and has been changed, or pulmonary function tests in Black patients, which make them appear less sick than they actually are, who are thus less likely to get treatment or receive disability insurance payments. We need to be very thoughtful about what’s going in, and we need to keep the physician as the ultimate authority and decision-maker so that we can maintain accountability to our patients and to the communities that we serve.

This article is from the September 2025 print issue.