

Do I need to see a doctor after hitting my head on a metal bar while running?

Am I likely to go blind after getting bleach splashed in my eye?

A new study led by researchers at UC San Diego explores how artificial intelligence compares to human expertise in the workaday task of dashing off quick responses to routine medical questions.

Published Friday in the medical journal JAMA Internal Medicine, the paper finds that ChatGPT, the world-upending chatbot with a seemingly infinite breadth of training, was able to more than hold its own when its responses were judged by a panel of experts against those made by flesh-and-blood physicians. Evaluators "preferred the chatbot responses to the physician responses" in 78 percent of evaluations made.

What's more, chatbot responses were found to be of a "significantly higher quality" than those from humans. And in terms of empathy, an area where people would intuitively seem to have an edge, silicon again excelled. "Chatbot responses were rated significantly more empathetic than physician responses," the paper states.

Despite the lopsided results, the paper's authors say doctors should be excited by what they show. Ayers, the UCSD computational epidemiologist who led the data collection and analysis process, said he believes artificial intelligence will be a game changer for medicine, with the ability to lighten workloads while simultaneously improving quality for patients. "So many more patients who are now getting no response or a bad response will be able to get answers from an AI-equipped physician who will be able to serve far more patients," Ayers said.

The paper's results, however, test a very specific set of circumstances pertaining to text communications between doctors and patients and do not generalize to clinical settings.

Researchers pulled 195 randomly selected questions from the Ask a Doctor subsection of Reddit, the popular news aggregation and discussion site. The group, which has nearly 500,000 members, allows anyone to publicly ask any question they want of doctors whose qualifications are verified by Reddit. Since the questions and answers are all made in public for anyone on the Internet to read, feeding them to ChatGPT required no particular data wizardry. "Honestly, it's just plug-and-play," Ayers said in an email. "All we did was cut and paste the questions into ChatGPT and save the response." No additional refinement was made, he said, after the chatbot delivered an answer.

Chatbot answers tended to be much more verbose and friendly sounding, while those from doctors were clearly dashed off by a chronically busy person relying on shorthand to be as efficient as possible.
