
Always Take That AI Recommendation with a Pinch of Salt

  • Writer: SQ
  • Aug 15
  • 4 min read

Not too long ago, Singapore's Health Promotion Board cautioned us to "trust no tongue" when it comes to salt. Excess sodium can raise blood pressure and increase the risk of heart and kidney disease. A gentleman in the United States shared that same concern and wanted to eliminate table salt (sodium chloride) entirely from his diet. According to a case report published recently in the Annals of Internal Medicine, instead of asking a doctor, he turned to ChatGPT, which stated that chloride can be substituted with bromide.


For cleansing rituals, bromine offers longer lasting effects than chlorine and causes less skin irritation ... when used as sanitizers in pools and hot tubs.

Sodium bromide, for those unfamiliar, is a white crystalline solid that looks deceptively like table salt. Once used as one of the earliest anticonvulsants and sedatives, it was later linked to a toxic condition called bromism, which can impair memory, damage the central nervous system, and cause skin problems. In 1975, the US Food and Drug Administration removed bromides from its “Generally Recognized As Safe” list for human medications. Today, sodium bromide’s main market isn’t the pharmacy. It’s sold as a disinfectant for swimming pools and hot tubs, right alongside its cheaper, more familiar cousin, sodium chloride.


Knowing this now, it seems obvious what this gentleman ought not to have done, but he did it anyway. He presented himself to the emergency department claiming he was being poisoned by his neighbor. Abnormal blood tests led to his admission, during which he deteriorated into psychosis. After intravenous fluids and electrolyte repletion, he stabilised enough to report fatigue, red spots on his skin, excessive thirst, and heavy fluid intake. His history, symptoms, and test results convinced doctors he had bromism, or bromide poisoning.


The Illusion of Information Authority


Large language models (LLMs) like ChatGPT are engineered in ways that make them feel like an authority on almost anything. They can scour and synthesize vast amounts of online information in seconds, packaging it into neatly structured answers. They get most things right, often enough to build a track record of apparent reliability in the user’s mind. And they speak with a calm confidence and refined verbal precision that mirror human expertise. Just last week, OpenAI boasted that its latest model, GPT-5, was a “PhD-level expert in anything.”


These design features, while impressive, create a subtle trap: the more consistent and articulate the AI, the more we may mistake its outputs for verified truth, even when it is confidently wrong.


The Lab Coat Effect, Now Available in Code


Psychology has long shown us that authority, real or perceived, can sway human judgment in troubling ways. In the 1960s, Stanley Milgram’s obedience experiments revealed how ordinary people would administer what they believed were dangerous electric shocks simply because a man in a lab coat told them to. A decade later, Philip Zimbardo’s Stanford Prison Experiment demonstrated how quickly people adopt roles and follow orders in a system that legitimises authority. These studies weren’t about gullibility; they were about the powerful social forces that make us defer to those we think “know better.”


These phenomena still haunt us today, which is why people continue to fall for scams impersonating government officials.


Many of today's perceived authority figures are not people at all, but technological tools. GPS systems have famously led drivers off unfinished roads, into the sea, and onto snowbound, undrivable tracks in the dead of winter. In each case, the calm certainty of the digital directions overrode the driver’s own senses and judgment. And now, we see it in generative AI. The authoritative eloquence and algorithmic accuracy of LLMs make it too easy for users to obey with the same uncritical trust once reserved for human experts.


Corroboration Requires Competence


The irony is that while AI is often seen as a shortcut to expertise, our best safeguard against its pitfalls is still human expertise. We are often warned to check AI responses for errors and fabrications, but doing so successfully requires enough knowledge to recognize what’s wrong in the first place, or at the very least the proficiency to spot suspect claims and verify them against other sources. A trained clinician, engineer, or scientist, by contrast, does more than recall facts. They interpret them in context, ask clarifying questions, and weigh risks against the bigger picture. That’s the kind of thinking today’s AI, for all its fluency, still cannot do. Without the relevant foundation, fact-checking becomes guesswork, and the very knowledge gap we turn to AI to fill becomes the source of our vulnerability.


The growing reputation of AI as an oracle of expertise renders its unreliable recommendations dangerously easy to mistake for truth. We would balk, one hopes, at the prospect of someone lacking any tertiary education in psychology being entrusted to instruct undergraduates on that subject. Yet in our eagerness to embrace quick and easy wins, we often blind ourselves to the demands of critical thought and, in doing so, undermine genuine expertise—putting ourselves, or worse, others, in harm’s way. After all, bad information is worse than no information.


Even AI knows that perception is everything.