
2025, Too Quick To AI

  • Writer: SQ
  • 7 hours ago
  • 4 min read

In true Sesame Street fashion, 2025 was brought to you by the letters A and I. Unless you have been blissfully out of range of corporate rebranding exercises, you’ve likely noticed that everything now claims to be “AI-powered”, from real-time fraud detection, to facial recognition that caught two shoplifters, to an electric toothbrush that “recognizes your brushing style” and “tracks your brushing performance”. The mere presence of an algorithm has become a conspicuous cue for futuristic progress.


Within this suspiciously dot-com-esque bubble lurks an imperfect general-purpose technology, hovering just beneath the surface, eager to automate vast stretches of human activity with promises of speed, scale, and intellectual succor. The appeal is understandable. When work is complex, exhausting, and increasingly unforgiving, anything that offers cognitive relief feels like mercy. We have embraced many such technologies over the years: planes (and now cars) learned to operate themselves; satellite navigation freed us from wrong turns and misread maps; internet search engines crawled billions of pages to retrieve relevant documents in milliseconds. Such empowering automation revolutionized the way we lived, reshaping not just what we did, but how we thought about doing it.



If automation involves reducing or replacing human effort with technology, then artificial intelligence represents the automation of thinking. Image source.

Yet each of these victories also created new dependencies, new forms of fragility, and new expectations about human performance. Satellite navigation has eroded our sense of direction, and many of us feel oddly helpless behind the steering wheel until our GPS has routed the journey. The Google effect describes our tendency to forget information we can easily look up, quietly offloading to search engines what we once committed to memory. Such cautionary tales are well documented in decades of human factors research. So while pundits warn of AI's unexpected impact on human performance, the truth is that very little about it is unexpected at all.


More automation begets more, not less, human involvement - Ironies of Automation by Lisanne Bainbridge (1983)


Routine tasks are quick and easy to automate, leaving humans responsible for rare, high-stakes failures, which, ironically, are precisely the moments when humans need the most help. As we outsource more thinking and decision-making to AI, our role quietly shifts from active problem-solver to passive taskmaster, expected to watch, wait, and intervene only when something goes wrong. The upgraded job of babysitting automation is often boring and demotivating, while also accelerating skill erosion and cognitive atrophy. What feels like efficiency on a good day becomes fragility on a bad one.


Watching is not the same as knowing - The Out-of-the-Loop Performance Problem by Mica Endsley and Esin Kiris (1995)


Being reduced to passive monitors also makes it challenging to maintain situation awareness—our understanding of what is happening, why it is happening, and what is likely to happen next. No longer active participants in the work, humans are asked to re-enter the loop only when automation fails, often with incomplete context, diminished skills, and little time to get up to speed. Imagine an autopilot obediently flying a problematic plane before abruptly handing control back to the pilots (or check out this true story). We automate ourselves out of practice and are then surprised when the practice is needed.


Automation becomes mysterious when it acts without effective communication - The "Problem" of Automation by Don Norman (1990)


The failures we attribute to automation are rarely the result of too much automation, but of automation that fails to communicate. The true problem lies in systems that act intelligently while remaining cognitively opaque, offering little insight into what they are doing and why. When automation compensates silently, masks degradation, or reports results without revealing process, humans are left mentally isolated from the very systems they are meant to partner with and supervise. LLMs speak fluently while revealing little about their hallucination-prone reasoning, inviting misplaced confidence, brittle trust, and untimely surprises. The danger is not that machines are thinking for us, but that they are doing so without letting us follow along.


It's not the technology, it's how we use it - Humans and Automation: Use, Misuse, Disuse, Abuse by Raja Parasuraman and Victor Riley (1997)


What we often label as “too much automation” is more accurately described as automation abuse. Abuse occurs when we automate the wrong things, or push technology beyond what it can safely and responsibly do. Misuse arises when we over-trust imperfect systems, drifting into complacency and relinquishing judgment we should retain. At the other extreme, disuse occurs when early errors erode trust so completely that we abandon even genuinely helpful tools. Burned by a few confident hallucinations, some users swear off ChatGPT altogether, trading calibrated reliance for blanket rejection. These patterns reveal a fragile equilibrium in which AI is either expected to do too much, trusted too blindly, or dismissed too quickly.


Some write-ups feature so much AI, yet still read bland and lack flavor.

Don’t get me wrong: AI is great. LLMs have kept me company through many lonely late-night writing sessions, a dependable sounding board that never pouts when I disagree. And yet, AI slop continues to march confidently into our inboxes and feeds, stuffing them with corporate articles that read like motivational posters, listicles written by autocomplete, and insight pieces with the nutritional value of packing peanuts. It’s the illusion of substance without the inconvenience of actual thought. And somehow, we’re supposed to applaud the innovative chef.


If you made it this far, consider checking out the Chartered Institute of Ergonomics & Human Factors' 2021 white paper on Human Factors and Ergonomics in Healthcare AI.


