Google: Big Is Good. Huge Is Better.

January 15, 2021

I spotted an interesting factoid. The title of the article gives away the “reveal,” as thumbtypers are prone to say. “Google Trained a Trillion-Parameter AI Language Model” does not reference the controversial “draft research paper” by Timnit Gebru, a former Google smart software person. The point at issue is that smart software is trained using available content. Bingo: the smart software reflects the biases in the source content.

Pumping up numbers is interesting and raises the question, “Why is Google shifting into used-car salesperson mode?” The company has never been adept at communicating or marketing in a clear, coherent manner. How many blog posts about Google’s overlapping services have I seen in the last 20 years? The answer is, “A heck of a lot.”

I circled this passage in the write-up:

Google researchers developed and benchmarked techniques they claim enabled them to train a language model containing more than a trillion parameters. They say their 1.6-trillion-parameter model, which appears to be the largest of its size to date, achieved an up to 4 times speedup over the previously largest Google-developed language model (T5-XXL).

Got that?

Like supremacy, the “trillion-parameter AI language model” revolutionizes big.

Google? And what’s with the marketing push for the really expensive, money-losing DeepMind thing? Big numbers there too.

Stephen E Arnold, January 15, 2021

