LAKE BUENA VISTA, Fla.—Artificial intelligence, one of the most rapidly evolving technologies in medicine and myriad other fields, stands on the brink of transforming healthcare, with the potential to gather and distill vast amounts of data, reduce medical error and improve patient care. At Minimally Invasive Surgery Week 2024, hosted by the Society of Laparoscopic and Robotic Surgeons, a panel of surgeons discussed several AI-related topics relevant to minimally invasive and robotic surgery.
In this issue, General Surgery News initiates a series based on that panel discussion, starting with a brief introduction to AI before getting into the first topic: ethical considerations of the technology.
The release of ChatGPT in December 2022 was not the beginning of AI in public awareness; just ask former world chess champion Garry Kasparov about the computer that beat him in 1997. But if you asked Deep Blue, IBM’s victorious machine, what happened that day, you’d hear nothing but crickets.
“It didn’t talk. But now we’re dealing with natural language,” Paul Toomey, MD, said, referring to ChatGPT 3.5 and 4. “I think of AI as a friend who knows everything, always wants to talk about what you want to talk about, and never gets tired.”
As an analogy, AI does what our brains do: It forms answers (the output) to the questions you feed it (the input), but on a massive scale, Richard Satava, MD, said. “For me, AI is the ability to access data well beyond what any individual could possibly do. Consider that an exabyte, 1 × 10^18 bytes, of data is generated each day, and the AI then instantly does the statistical analysis of what the input is questioning.”
Most importantly, the answer that comes out is contingent on the data that go in. At present, at least 61 different industries have their own massive databases, and the type and quality of their data vary. There’s also the matter of explainability. “We still don’t understand how these AI systems are ‘learning’ and what they’re doing,” Dr. Satava said.
With modern AI still in its relative infancy, ethical concerns are sure to shift over time, but some are already apparent. The expert panelists at Minimally Invasive Surgery Week offered their perspectives on these concerns.
Eric A. Singer, MD: How can we use AI to help us and our trainees and our patients figure out the best path forward? As mentioned, what goes in is largely going to influence what comes out. I view our current foray into and exploration of AI as analogous to how we evaluate surgical innovation. As we’re doing things with more complex mathematical modeling and doing them faster, we need to keep asking the key questions: What is our end point, and how are we going to get there?
The American Society of Clinical Oncology’s recent guidelines include six principles to guide our consideration of AI: transparency, informed stakeholders, equity and fairness, accountability, oversight and privacy, and human-centered application. Just as we do with many of our surgical interventions, we need to be thinking about transparency and informed stakeholders. I wrote a paper with co-authors looking at AI and discussion of kidney cancer treatment options, and we used some previous versions of publicly available AI. Usually it was quite good, but it honestly made up some citations. If we’re going to be using AI, we need to make sure that we have accuracy and fidelity. We really need to be focused on what we’re putting in, what we’re asking AI to do, and ultimately trying to use it as clinical decision support, not a replacement for practical knowledge or wisdom.
Dr. Toomey: The other thing that comes to mind—is it ethical to use ChatGPT or one of these other models when we’re writing a manuscript or an abstract? First, is it OK? Secondly, if we do use it, do we have to say that we used it? Or are we at the point where everybody’s going to use it so we might just assume that it was used? What do we think about using it for manuscripts or publications, or even the lawyers who try to use it for closing statements? Should we be disclosing our use of AI for publications?
Dr. Singer: I’m an associate editor for a couple of journals, and there are specific attestations we need to make. If you did use ChatGPT or another program, it should be used as an adjunct, and you need to disclose it. But I think it can be great, especially if you’re publishing a scientific article in a language other than your native language. Having something that helps people from different countries share knowledge is great; so is using it to synthesize something that you’re not willing to do yourself or that isn’t an efficient use of your time. Again, you’d have to check your sources, but I think so much of it comes down to what your intent is and how you are using it.
Dr. Satava: It’s almost like every time an input comes into the AI system, the AI will do a systematic review of all the relevant literature instantaneously. Statistically, through its computational analytics, it’s able to come up with an answer in a way that is not too different from how the human brain works. But I would caution, from the very beginning, from the very first keystroke on the very first computer, there has been an undeniable maxim, and that is ‘Garbage in, garbage out.’ Your answers from AI depend entirely on the database you are searching. Be very careful, because we are seeing now, in the political arena, some pretty nasty databases that are becoming very popular. And they are included within the analysis that this AI machine is doing; it has no idea what is right and what is wrong.
Robert B. Lim, MD: I go back and forth on it. There are practical uses for it, as we heard from Dr. Singer, such as overcoming language barriers. I don’t think it should be excluded, but if you’re using it to get through some of the mundane stuff you’re doing, is it really your work? I have more of a quandary over this than I thought I would.
Dr. Toomey: It makes me feel a little queasy sometimes using AI for writing. If it’s just a matter of changing a few words or adjusting the structure, I think that’s completely appropriate—AI can help make ideas clearer without altering their essence. But when it comes to having AI rewrite something entirely, I start to question the authenticity of the work. At that point, the thoughts might technically be mine, but they’ve been processed through a lens that can’t replicate human nuance, intention or ethical responsibility. It raises the question: Where do we draw the line between assistance and authorship? I believe it’s crucial to maintain a balance, using AI as a tool for support rather than a substitute for genuine intellectual contribution. But these lines will definitely be blurred and difficult to discern.
This article is from the April 2025 print issue.