The hidden dangers of treating ChatGPT like an all-knowing expert

By Quynh Nguyen   November 22, 2025 | 09:08 pm PT
When 65-year-old Thuy Ha got a sore throat and cough, she asked ChatGPT for a diagnosis, believing it to be "more objective and thorough" than a doctor.

She was introduced to the AI chatbot in mid-2024 when her children installed the app on her phone. Initially using it for tasks like looking up recipes and writing poems, she gradually began to see the AI as a "know-it-all expert."

Soon she discovered its ability to diagnose illnesses. At first, she would verify the AI's diagnosis, but eventually, she began to fully trust it, using it as a replacement for her doctor. "A doctor just asks for symptoms and gives a prescription anyway," she told her children when they expressed concern. "Using ChatGPT is quicker and saves me the hassle of going to the hospital."

But one time, when she experienced sharp pain in her lower abdomen, ChatGPT suggested the issue was "possibly due to stress or mild digestion issues" and recommended drinking warm water and applying soothing oil. When the pain persisted, she went to a hospital, only to discover she was suffering from a stomach bleed. She doubled down, however, saying: "It's rare for the AI to be wrong. Doctors don't always diagnose correctly either."

This blind trust in AI is not limited to health. Van Hang, 45, from Hanoi, refers to ChatGPT as his "encyclopedia." He consults it for everything, from car prices to real estate advice. When the AI suggested paying VND1 billion (US$37,900) for a 100-square-meter plot of land outside the city, Hang refused to negotiate with the seller, who was asking for VND1.8 billion. The seller mockingly said, "Why don't you get it [the AI] to buy it for you?" But Hang insisted that the AI was correct, and the seller was overcharging.

Hang's 15-year-old son, Gia Bao, also uses ChatGPT, initially as a problem-solving tool, but soon for all his homework. "It’s better than a tutor," Bao says. "An essay that used to take me all day to write now only takes a minute."

AI chatbot apps on a mobile phone. Photo by VnExpress/Luu Quy

Dinh Ngoc Son, a digital transformation expert, highlights how AI is fundamentally changing how people live and work by automating tasks. But this convenience has led many to mistakenly believe that AI can replace human experts. A photo of someone attempting to buy medicine with a prescription generated by ChatGPT recently went viral and sparked widespread debate. "AI is everywhere in life, from finding directions to health consultations," Son says, attributing its popularity to people's desire to reduce their workloads and to the rapid growth of technological infrastructure.

Tech expert Trinh Trung Hoa believes the appeal of AI lies in its 24/7 availability, cost-free access and convenience. "During a tough economy, a free tool that answers any question is definitely preferred," he says. "Meanwhile, getting an appointment with an expert is both time-consuming and costly."

The reluctance to seek expert advice for questions considered "too trivial or sensitive" also drives users to AI, he adds. OpenAI reports that over 200 million people worldwide use ChatGPT weekly. A report from data-intelligence research company Sensor Tower shows that, in the first half of 2025, Vietnamese users spent 283 million hours using generative AI apps over 7.5 billion sessions.

However, this convenience comes with risks. Hoa says AI’s strength lies in its rapid aggregation of information, but its weakness is that "it cannot analyze right or wrong nor make multidimensional judgments." AI processes data mechanically, often based on unverified sources, which is why applications always carry a disclaimer that they "may make mistakes."

International medical experts have also warned that AI lacks practical reasoning and cannot replace the flexibility and ethics of a doctor. A study in the medical journal PLOS One found that ChatGPT's medical diagnosis accuracy was only 49%. Hospitals have reported cases of patients admitted after following AI advice. In April the journal Annals of Internal Medicine published the case of a 60-year-old in the U.K. who suffered bromide poisoning after following ChatGPT's recommendation to replace table salt with sodium bromide.

In Vietnam, Gia An 115 Hospital in HCMC admitted a 42-year-old woman who had stopped her diabetes medication based on AI advice, leading to a spike in blood sugar and near coma. Another patient, 38, with hyperlipidemia, switched from statin medication to herbal remedies based on online advice, causing coronary artery stenosis.

Experts are also concerned about AI’s misuse in education. Cybersecurity expert Ngo Minh Hieu, known as Hieu PC, says students are using AI as a "homework tool" rather than as a "teaching assistant." This habit leads to knowledge gaps and, in the long run, promotes laziness in critical thinking. Gia Bao’s case illustrates this: while using ChatGPT boosted his homework scores, his exam results were consistently among the lowest in his class. "His teacher said my son is now the lowest-performing student in the class," Hang says.

To avoid dependence on AI, Hoa advises users to set clear boundaries and verify all sources actively. "Students can reference AI to learn how to solve problems, but they should not copy answers. They need supervision from family and school."

Son emphasizes that the issue is not the technology itself, but how it is used.

"If used for self-development, AI is a wonderful assistant. If abused, we lose our initiative. AI should not be a ‘crutch’ but a ‘launchpad’ for human development."
