Like all healthcare professionals, I have been humbled by mentors, colleagues, and even patients. But never by a computer … until recently.
A few weeks ago, the father of a newborn I examined asked me a common question about baby spit-up – one I’ve answered thousands of times. Although I believed I was answering thoughtfully, I was caught off guard by his lack of engagement and persistent scrolling on his phone throughout my response. When I finished, he looked up, pointed at his phone, and said, “I just asked ChatGPT too, and it said [the same thing].” Instantly, I felt transported into the middle of an Aldous Huxley novel, thinking, What am I doing here? I’m sure I don’t need to expand on how taken aback I was; it should be obvious enough that even a computer could figure it out.
And in fact, it did.
Despite my very limited experience with artificial intelligence (AI), I decided to go directly to the source of my frustration and confront ChatGPT with what had happened. Almost expecting an apology, I asked what I should make of the interaction. Its response was even more infuriating because it was spot-on. Even a computer could appreciate the awkwardness of the situation; it began its response with, “That’s a loaded moment…”
I quickly softened at this unexpected but understanding acknowledgment of my discomfort. ChatGPT then continued with a long, comprehensive breakdown of its interpretation under the following headings.
“Changing Patient Behavior”
ChatGPT first touched on the broader shift in patient behavior in the age of technology. AI has undoubtedly become a powerful tool in medicine with endless applications. It can analyze patient symptoms, labs, photos, and radiologic imaging with “promising accuracy,” according to ChatGPT’s own not-so-humble reflection, and can search for, dissect, and apply research to any scenario within seconds.
AI is already being used in healthcare in countless ways, from clinical documentation and patient education to operations management and diagnostic support. When medical professionals use AI so freely, it’s unreasonable to expect patients not to. In an ever-more expensive medical system, plagued by long waitlists and limited specialists, AI is a welcome option for patients who want to triage their own symptoms. Although this comes with a multitude of caveats, it’s hard to ignore its appeal.
“The Trust Gap”
For its second point, ChatGPT questioned my patient’s motivation to use an AI tool – was it distrust or anxiety? It noted that there was no recognition of my contributions as a human doctor, such as “nuance, judgment, experience, context — all of which an AI lacks” (which may have been the subtle apology I was seeking). Although I always hope and strive to earn my patients’ trust, I also know it will not always be within reach.
Even long before AI, patients had been relying heavily on medical information accessed through other forms of technology, most infamously social media. Platforms like Facebook, Instagram, and TikTok are filled with influencers who give out non-evidence-based medical advice in exchange for “likes,” “follows,” and ultimately, compensation. To add fuel to the fire, with the federal government recently changing proven medical recommendations against the advice of every major healthcare society, patients are understandably more distrustful and skeptical of medical professionals and want second opinions. In that sense, AI has a lot of charm – it’s an easily accessible and free “neutral” third party. Although we can’t ignore that AI is also fallible, it’s easier to favor it over social media knowing that it is at least scanning evidence-based sources online.
“Opportunity in Disguise”
For its third point, my AI confidante brought up how I might have responded in this situation. It suggested that I could have turned it into an opportunity to “reassert my role and set the tone for future visits.” This line took me by surprise. Although my interaction with my patient was discouraging, “reasserting my role” implies that I should hold superior power in the doctor-patient relationship and practice paternalistically. I expected AI to be more supportive of patient autonomy, a core principle of medical ethics that will not change with any technological advancement. While I can appreciate its uses in medicine, I have also now witnessed first-hand what AI continues to lack: morals and a conscience. ChatGPT ultimately came to a similar conclusion, recognizing that AI continues to have limitations and “is not yet reliable enough to replace human clinical judgment.”
“Your Reaction”
Finally, ChatGPT thoughtfully (and perhaps ominously) warned me that all of this is here to stay and that I should be prepared for more of these interactions. It even kindly offered to help me with similar future exchanges by scripting a professional response – a considerate gesture I appreciated. I, too, was charmed and impressed by ChatGPT’s intelligence and its attempt to form a connection with me in just one short conversation. So, as a doctor, I cannot fault my patients for wanting to consult it themselves. What I can do is continue to respect my patients’ autonomy and their choice to use other sources of information as they see fit, while continuing to encourage science-backed recommendations. In the meantime, all healthcare providers should prepare to adapt and transform how we view the doctor-patient relationship and what it means to practice medicine in the age of AI. After all, we’ve already become colleagues without even realizing it.