AI and Doctors: Close Enough for Horseshoes and More Time for Golf
May 14, 2024
Burnout is rising across industries, but it is especially acute in medicine. Doctors and other medical professionals face an exceptionally high risk: the daily stress of treating patients, paperwork, wrangling with insurance companies, and resource limitations keeps getting worse. Stat News reports that AI algorithms offer a promising solution for medical professionals, but there are still bugs in the system: “Generative AI Is Supposed To Save Doctors From Burnout. New Data Show It Needs More Training.”
Clinical notes are essential for patient care and ongoing treatment; the downside is that writing them takes a long time. Academic hospitals have become training grounds for generative AI in medicine. Generative AI is a tool with real potential, but it has shown repeatedly that it still needs a lot of work, and large language models applied to medical documentation proved lacking. Is anyone really surprised? Apparently they were:
“Just in the past week, a study at the University of California, San Diego found that use of an LLM to reply to patient messages did not save clinicians time; another study at Mount Sinai found that popular LLMs are lousy at mapping patients’ illnesses to diagnostic codes; and still another study at Mass General Brigham found that an LLM made safety errors in responding to simulated questions from cancer patients. One reply was potentially lethal.”
Why doesn’t common sense prevail in these cases? Yes, generative AI should be tested so the data can back up the logical outcome; it’s called the scientific method for a reason. But why does everyone act surprised? Stop marveling at the obvious shortcomings of lackluster AI tools and focus on making them better. Use these tests to find the bugs, fix them, and turn the tools into practical applications that work. Is that so hard to accomplish?
Whitney Grace, May 14, 2024