If you have ever googled a symptom at 2am, you are not alone. Millions of Australians are now turning to AI tools like ChatGPT for health information. But what actually happens when you ask artificial intelligence to diagnose your rash or suggest a treatment plan?
The honest answer is: sometimes it helps, sometimes it misleads you, and sometimes it could put you in real danger.
Doctors who use AI in their practice offer a simple lesson that applies whether you are a patient or a clinician. AI works best as a springboard for conversations with a medical professional, not as a replacement for one.
The good news first. A study published in the New England Journal of Medicine found that AI systems could frequently identify difficult diagnostic cases, though a follow-up comparison with a leading human diagnostician showed a slight human advantage. Some patients report that AI-generated warnings sent them to the emergency room in time. One such patient was eventually diagnosed with immune thrombocytopenic purpura, a rare autoimmune disorder in which the immune system destroys platelets, increasing the risk of serious bleeding.
For doctors themselves, AI has become genuinely transformative. Physicians cite the top benefits of AI as transcription services (48 per cent) and streamlined administrative tasks (46 per cent). When AI handles note-taking during patient visits, doctors can look patients in the eye instead of typing: in an American Medical Association survey, 93 per cent of physicians using AI said it lets them give patients their full attention.
Here is where the caution comes in. In one reported case, an AI tool advised a patient to try the anti-parasitic drug ivermectin as a treatment for testicular cancer. The drug itself probably would not have hurt him; what would have hurt was forgoing appropriate treatment for a cancer that is highly treatable. AI systems can also hallucinate, making confident claims about medical facts that are simply not true.
While specialised AI chatbots offer faster responses and easier access, hallucinations, inconsistent answers and data privacy concerns could limit their potential. These tools also operate outside the privacy laws that govern real doctors. Your data could be used to train the AI systems themselves, raising serious confidentiality issues.
Patients and doctors alike stress that AI is not a replacement for a doctor, and that treating it as one is dangerous. Without clinical oversight, doctors warn, misdiagnosis, misleading advice and plain misunderstanding become significant risks.
In Australia, health regulators are taking this seriously. The Australian Commission on Safety and Quality in Health Care has developed an AI Clinical Use Guide to help clinicians use AI safely in day-to-day practice. Practitioners must apply human judgment to any AI output; approval of a tool by the Therapeutic Goods Administration (TGA) does not lessen that responsibility.
The practical takeaway is straightforward. If you use AI for health questions, treat it as an information-gathering tool that might prompt you to see a doctor, not as a substitute for one. If you are a patient, be open with your clinician about anything an AI suggested. If you are a doctor considering AI tools for your practice, focus on how they can free up your time for actual patient care, not on replacing the human judgment that medicine fundamentally requires.
Large language models are competitive with humans in simulated tests of diagnostic reasoning. But simulated tests are not real patients, and real patients deserve the human judgment, accountability and oversight that only a qualified doctor can provide.