A Power User’s Perspective on genAI

Alex Sleeper

A few short months after the public was gripped by a former Google engineer’s extraordinary declaration that Google’s LaMDA had become sentient, OpenAI’s ChatGPT stepped onto the market in November 2022. AI gradually filtered into public discourse, from social media to NPR, until, a year later, it had become the focal point of nearly every conversation about work, communication, and creativity.

A team of scientists and philosophers at Google rebutted Blake Lemoine’s claims, but their protest was a whisper next to the volume of Lemoine’s media appearances and Medium posts. In any case, their response seemed not to matter; everyday people had been primed to accept the fantastical sci-fi plot line that AI understood something about itself, and consequently the world.

From deep learning models that dominate human Go players to computer vision that diagnoses disease from medical imaging, AI inevitably calls to mind Arthur C. Clarke’s third law: “Any sufficiently advanced technology is indistinguishable from magic.”

It’s unsurprising, then, that corporate boardrooms and graduate classrooms alike are populated by AI mystics. Even the experts are awed by the invisible inner workings of seemingly prescient algorithms, as if watching a sideshow séance.

Yes, AI is doing incredible things; AI is a harbinger of scientific innovation; but AI is also not a business miracle.

There is no ghost in the machine 

This AI-generated inflection point is, for me, also a reflection point. I’m reminded of a particular Economist article I read in 2015; at the time, I was a young writer in an entry-level marketing role, and my life revolved around the written word. Flipping through the waxy magazine pages, I landed on an article titled “Rise of the Machines”. I had no practical idea of what AI was, nor could I have predicted how, nearly a decade later, it would become a focal point of my professional life.

Though the article aimed to be an antidote to automation anxieties, the samples of early genAI writing made me fearful: would human writers become obsolete? It was clear to me, even then, that the first commercial applications of AI were going to be creative.

And this, according to Paul Romer, Professor of Economics at Boston College, is where people are getting AI wrong. “The problem with the way most people are framing AI is they’re thinking about autonomous vehicles where you’re taking the human out of the loop; the technology is the replacement for the human. That is not working that well.” 

Dubito, ergo sum

Even on low-stakes tasks like writing a blog article, AI-generated versions pale beside their human counterparts. The reason is obvious: AI doesn’t actually know things. Unlike human consciousness, which can grasp concepts – concrete or abstract – and demonstrate understanding through language, AI only mimics our understanding by mimicking our language.
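For the technically curious, the mimicry is easy to demonstrate. Below is a toy bigram model in Python – a minimal sketch with a corpus I invented for illustration; real LLMs are incomparably larger, but they too generate text from statistics rather than understanding:

```python
# Toy illustration: a bigram "language model."
# It produces fluent-looking text purely from word-following statistics,
# with no model of meaning anywhere in the system.
import random
from collections import defaultdict

# Invented mini-corpus, purely for illustration.
corpus = ("the writer praised the team "
          "the team praised the launch "
          "the launch praised the writer").split()

# "Training": record which word follows which.
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

# "Generation": repeatedly sample a statistically plausible next word.
word, output = "the", ["the"]
for _ in range(10):
    if word not in following:  # dead end: no observed successor
        break
    word = random.choice(following[word])
    output.append(word)

print(" ".join(output))  # grammatical-sounding, meaning-free
```

Scale the same principle up to trillions of words and a transformer architecture and you get ChatGPT’s fluency – still statistics, still no grasp of what the words mean.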

We are now living through the AI boom, and I’m no longer scared it will replace me.  

As Romer noted, driverless cars point to AI’s limitations. Last fall, Cruise’s driverless taxis – including those in my own city of Austin, TX – were pulled from the streets over safety concerns. (I myself was nearly flattened by one of these rogue taxis for the crime of crossing at a crosswalk.) These vehicles operate using supervised machine learning, meaning human-labeled training data teaches the AI to recognize and respond to patterns – like braking for pedestrians crossing the road. The idea is that the system improves over time, eventually outperforming humans. Yeah, “that is not working that well.”
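For readers who want to see the mechanics, here is supervised learning in miniature – a toy perceptron in Python. The feature names, numbers, and braking scenario are all hypothetical, invented for illustration; a production driving stack is incomparably more complex:

```python
# Toy illustration of supervised learning: human-labeled examples
# teach a model to map sensor features to a brake/don't-brake decision.
# All features and labels are invented for this sketch.

# Each example: (object_in_crosswalk, object_speed_toward_car) -> 1 = brake
training_data = [
    ((1.0, 0.2), 1),  # pedestrian in the crosswalk -> brake
    ((1.0, 0.8), 1),
    ((0.0, 0.0), 0),  # clear road -> don't brake
    ((0.0, 0.1), 0),
]

weights = [0.0, 0.0]
bias = 0.0

def predict(features):
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if score > 0 else 0

# Perceptron rule: nudge the weights whenever a prediction is wrong.
for _ in range(20):
    for features, label in training_data:
        error = label - predict(features)
        weights = [w + 0.1 * error * x for w, x in zip(weights, features)]
        bias += 0.1 * error

print(predict((1.0, 0.5)))  # 1: brake for the pedestrian
```

The pattern recognition is only as good as the labeled examples; situations the training data never covered – the edge cases – are exactly where these systems fail.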

Neither is AI writing.  

At scale, hidden deficiencies become unignorable. The mistakes I’ve witnessed range from misspellings to mixed metaphors, but genAI’s truly egregious habit is overusing words and phrases that no flesh-and-blood English speaker would ever utter.

This creates a feedback loop: crappy AI articles are fed back into the LLM as training data, causing it to regurgitate the same unrelatable turns of phrase – even when specifically prompted not to.
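A back-of-the-envelope simulation shows why the loop compounds. Every number here is made up to illustrate the dynamic, not measured:

```python
# Hypothetical feedback-loop simulation: if a model overuses a pet phrase
# relative to its training data, and its output becomes the next round of
# training data, the overuse compounds generation after generation.

phrase_rate = 0.001    # assumed share of human articles using the phrase
amplification = 1.5    # assumed factor by which the model overshoots

for generation in range(1, 9):
    phrase_rate = min(1.0, phrase_rate * amplification)
    print(f"generation {generation}: {phrase_rate:.2%} of articles")
```

Researchers studying this degenerative dynamic have dubbed its end state “model collapse.”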

A key instance of this: unsung heroes. I cannot begin to count the number of articles – generated with elaborate, well-tested prompt chains – that came back riddled with “unsung heroes.” And I’m hardly alone; my colleagues and peers all complain of perfectly decent AI articles marred by this phrase.

A Google Trends search for the phrase shows the impact of genAI’s voice on online content.

Note: There was a movie released in April 2024 titled Unsung Hero, but why that particular phrase has embedded itself into ChatGPT’s vocabulary is unclear.

Artificial intelligence is just that: artificial. It isn’t generating new ideas (yet). However, amongst AI mystics, the belief is that AI can, and should, replace human workers when possible.

I think emergence is largely to blame for this sentiment, as it has led folks to believe that AI might already ‘know’ things. Emergent properties are unexpected skills or behaviors that appear in a model without having been explicitly trained or programmed into it. While the merits of emergence are contested – researchers at Stanford assert it’s a “mirage” – the theory nevertheless suggests current models are crawling toward artificial general intelligence (AGI).

Here’s the rub: AI is not as good as you think it is.

In almost every context, genAI is not performing well enough to operate without a human-in-the-loop. AI’s best application is as an assistant – a tool for getting started, a brainstorming device – not a task executor. Despite captivated executives and academics yearning for AI to operate unmanned, we’re putting the cart of ghostly gimmicks before the horse. AI won’t be an effective challenger to creative workers until we achieve AGI, because only then can it generate new ideas – the true value of creativity.

I’ll leave you with this quote.

“…hallucination is all LLMs do. They are dream machines. We direct their dreams with prompts. The prompts start the dream, and based on the LLM’s hazy recollection of its training documents, most of the time the result goes someplace useful…Hallucination is not a bug, it is LLM’s greatest feature.”  

– Andrej Karpathy 

Extu keeps humans-in-the-loop

Our expertly crafted content for tech, building, and automotive partners bears the fingerprints of our talented writers and editors. We use an optimal blend of automation, AI, and human expertise to provide vendors and partners with high-quality, cutting-edge, and price-conscious marketing materials. Find your perfect product by exploring our solutions here.