Google DeepMind Risk Paper: 60 Pages with a Few Googley Hooks

May 22, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved in writing, just a dumb humanoid.

I read the long version of “Ethical and Social Risks of Harm from Language Models.” The paper is mostly assertions backed by footnotes to journal-type articles that conveniently prove the point of each statement. With roughly 25 percent of peer-reviewed research including shaped, faked, or weaponized data, I am not convinced by footnotes. Obviously the DeepMinders believe that footnotes make the case for the Google way. I am not convinced, because the Google has to find a way to control the future of information. Why? Advertising money and hoped-for Mississippis of cash.

The research paper dates from 2021 and is part of Google’s case for being ahead of the AI responsibility game. The “old” paper reinforces the myth that Google is ahead of everyone else in the AI game. The explanation for Sam AI-man’s and Microsoft’s marketing coup is that Google had to go slow because Google knew there were ethical and social risks of harm from the firm’s technology. Google cares about humanity! The old days of “move fast and break things” are very 1998. Today Google is responsible. The wild and crazy dorm days are over. Today’s Google is concerned, careful, judicious, and really worried about its revenues. I think the company worries about legal actions, its management controversies, and its digital duel with the Softies of Redmond.


A young researcher desperately seeking footnotes to support a specious argument. With enough footnotes, one can move the world, it seems. Art generated by the smart software MidJourney.

I want to highlight four facets of the 60-page risks paper that are unlikely to get much, if any, attention from today’s “real” journalists.

Googley hook 1: Google wants to frame the discussion. Google is well positioned to “guide mitigation work.” The examples in the paper are selected for “guiding action to resolve any issues that can be identified in advance.” My comment: How magnanimous of Google. Framing stakes out the Googley territory. Why? Google wants to be Googzilla and reap revenue from its users, licensees, models, synthetic data, applications, and advertisers. You can find the relevant text in the paper on page 6 in the paragraph beginning “Responsible innovation.”

Googley hook 2: Google’s risks paper references fuzzy concepts like “acceptability” and “fair.” Like love, truth, and ethics, the notion of “acceptability” is difficult to define. Some might suggest that it is impossible to define. But Google is up to the task, particularly for application spaces unknown at this time. What happens when one applies “acceptability” to “poor quality information”? One just accepts the judgment of the outfit doing the framing. That’s Google. Game. Set. Match. You can find the discussion of “acceptability” on page 9.

Googley hook 3: Google is not going to make the mistake of Microsoft and its racist bot Tay. No way, José. What’s interesting is that the only company mentioned in the text of the 60-page paper is Microsoft. Furthermore, the toxic aspects of large language models are hard for technologies to detect (page 18). Plus, large language models can infer a person’s private data. So “providing true information is not always beneficial” (page 21). What’s the fix? Use smaller sets of training data… maybe (page 22). But one can fall back on trust — for instance, trust in Google the good — to deal with these challenges. In fact, trust Google to choose training data to deal with some of the downsides of large language models (page 24).

Googley hook 4: Making smart software that relies on large language models and also mitigates risk is expensive. It takes money, smart people who are in short supply, and computing resources. Therefore, one need not focus on the origin point (large language model training and configuration). Direct attention at those downstream. Those users can deal with the 21 identified problems. The Google method puts Google out of the primary line of fire. There are more targets for the aggrieved to seek and shoot at (page 37).

When I step back from the article, which is two years old, it is obvious Google was aware of some potential issues with its approach. Dr. Timnit Gebru was sacrificed on a pyre of spite. (She does warrant a couple of references and a footnote or two, but she’s now a Xoogler.) One side effect was that Dr. Jeff Dean, who was not amused by the stochastic parrot, has been kicked upstairs, and the UK “leader” is now herding the little wizards of Google AI.

The conclusion of the paper echoes the Google-knows-best argument. Google wants a methodological toolkit because that will keep other people busy. Google wants others to figure out “fair,” an approach similar to that of Sam Altman (OpenAI), who begs for regulation of a sector about which much is unknown.

The answer, according to the risk analysis, is “responsible innovation.” I would suggest that this paper, the television interviews, and the PR efforts to get the Google story in as many places as possible are designed to make the sluggish Google a player in the AI game.

Who will be fooled? Will Google catch up in this Silicon Valley, venture-invigorating hill climb? For me, the paper with the footnotes is just part of Google’s PR and marketing effort. Your mileage may vary. May relevance be with you, gentle reader.

Stephen E Arnold, May 22, 2023
