AI: Immature and a Bit Unpredictable

June 9, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Writers, artists, programmers, other creative professionals, and workers in jobs ripe for automation worry that AI algorithms are going to replace them. ChatGPT is making headlines for its supposed universality: automating tasks and churning out web content. While ChatGPT cannot write succinct Shakespearean drama yet, it can draft a decent cover letter. Vice News explains why we do not need to fear the AI apocalypse yet: “Scary ‘Emergent’ AI Abilities Are Just A ‘Mirage’ Produced By Researchers, Stanford Study Says.”


Responsible adults (one works at Google and the other at Microsoft) don’t know what to do with their unhappy baby named AI. The image is a product of the MidJourney system, which Getty Images may not find as amusing as I do.

Stanford researchers wrote a paper claiming that so-called “emergent abilities” in AI models, cases where a large model suddenly displays an ability it ostensibly was not designed to possess, are actually a “mirage” produced by researchers. Technology leaders, such as Google CEO Sundar Pichai, perpetuate the claim that large language models like Google Bard are teaching themselves skills not covered in their initial training. For example, Google Bard can supposedly translate Bengali and GPT-4 can solve complex tasks without special assistance, even though neither model’s training data included the relevant information.

When technology leaders tell the public about these supposed abilities, news outlets automatically perpetuate doomsday scenarios, while businesses look to exploit them for profit. The Stanford study explains that AI developers measure outcomes with different metrics and often assume smaller models are incapable of solving complex problems; the apparent leaps in ability track the choice of metric rather than the models themselves. The researchers also claim that AI experts make overblown claims, likely for investment or notoriety. The Stanford researchers encourage their brethren to be more realistic:

“The authors conclude the paper by encouraging other researchers to look at tasks and metrics distinctly, consider the metric’s effect on the error rate, and that the better-suited metric may be different from the automated one. The paper also suggests that other researchers take a step back from being overeager about the abilities of large language models. ‘When making claims about capabilities of large models, including proper controls is critical,’ the authors wrote in the paper.”

It would be awesome if news outlets and technology experts told the world that an AI takeover is still decades away. Will they? Nope. The baby AI wants cash, fame, a clean diaper, and a warm bottle… now.

Whitney Grace, June 9, 2023

