Googzilla: Pointing the Finger of Blame Makes Sense I Guess

June 13, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Here you are: The Thunder Lizard of Search Advertising. Pesky outfits like Microsoft have been quicker than Billy the Kid shooting drunken farmers when it comes to marketing smart software. But the real problem in Deadwood is a bunch of do-gooders turned revolutionaries undermining the granite foundation of the Google. I have this information from an unimpeachable source: an alleged Google professional talking on a podcast. The news release titled “Google Engineer Says Sam Altman-Led OpenAI Set Back AI Research Progress By 5-10 Years: LLMs Have Sucked The Oxygen Out Of The Room” explains that the actions of OpenAI are causing the Thunder Lizard to wobble.


One of the team sets himself apart by blaming OpenAI and his colleagues, not himself. Will the sleek, entitled professionals pay attention to this criticism or just hear “OpenAI”? Thanks, MSFT Copilot. Good enough art.

Consider this statement in the cited news release:

He [an employee of the Thunder Lizard] stated that OpenAI has “single-handedly changed the game” and set back progress towards AGI by a significant number of years. Chollet pointed out that a few years ago, all state-of-the-art results were openly shared and published, but this is no longer the case. He attributed this change to OpenAI’s influence, accusing them of causing a “complete closing down of frontier research publishing.”

I find this interesting. One company, its deal with Microsoft, and that firm’s management meltdown produced a “complete closing down of frontier research publishing.” What about the Dr. Timnit Gebru incident over the “stochastic parrots” paper?

The write up included this gem from the Googley acolyte of the Thunder Lizard of Search Advertising:

He went on to criticize OpenAI for triggering hype around Large Language Models or LLMs, which he believes have diverted resources and attention away from other potential areas of AGI research.

However, DeepMind — apparently the nerve center of the one best way to generate news releases about computational biology — has been generating PR. That does not count because it is real-world smart software, I assume.

But there are metrics to back up the claim that OpenAI is the Great Destroyer. The write up says:

Chollet’s [the Googler, remember?] criticism comes after he and Mike Knoop, [a non-Googler] the co-founder of Zapier, announced the $1 million ARC-AGI Prize. The competition, which Chollet created in 2019, measures AGI’s ability to acquire new skills and solve novel, open-ended problems efficiently. Despite 300 teams attempting ARC-AGI last year, the state-of-the-art (SOTA) score has only increased from 20% at inception to 34% today, while humans score between 85-100%, noted Knoop. [emphasis added, editor]

Let’s assume that the effort and money poured into smart software in the last 12 months boosted one key metric by 14 percentage points. Doesn’t that leave LLMs and smart software in general far, far behind the average humanoid?

But here’s the killer point:

… training ChatGPT on more data will not result in human-level intelligence.

Let’s reflect on the information in the news release.

  1. If the data are accurate, LLM-based smart software has reached a dead end. I am not sure the lawsuits will stop, but perhaps some of the hyperbole will subside?
  2. If these insights into the weaknesses of LLMs are valid, why has Google continued to roll out services based on a dead-end model, suffered assorted problems, and then demonstrated its management prowess by pulling back certain services?
  3. Who is running the Google smart software business? Is it the computationalists combining components of proteins or the group generating blatantly wonky images? A better question is, “Is anyone in charge of non-advertising activities at Google?”

My hunch is that this individual is representing a percentage of a fractionalized segment of Google employees. I do not think a senior manager is willing to say, “Yes, I am responsible.” The most illuminating facet of the article is the clear cultural preference at Google: Just blame OpenAI. Failing that, blame the users, blame the interns, blame another team, but do not blame oneself. Am I close to the pin?

Stephen E Arnold, June 13, 2024

Comments

One Response to “Googzilla: Pointing the Finger of Blame Makes Sense I Guess”

  1. Deborah Samkoff on June 13th, 2024 1:26 pm

    “If these insights into the weaknesses of LLMs [something is missing], why has Google continued to roll out services based on a dead-end model,…”
    Question: what is the something missing?
