Another Xoogler, Another Repetitive, Sad, Dispiriting Story
March 2, 2023
I will keep this brief. I read “The Maze Is in the Mouse.” The essay is a Xoogler’s lament. The main point is that Google has four issues. The write up identifies these from a first-person point of view:
The way I see it, Google has four core cultural problems. They are all the natural consequences of having a money-printing machine called “Ads” that has kept growing relentlessly every year, hiding all other sins. (1) no mission, (2) no urgency, (3) delusions of exceptionalism, (4) mismanagement.
I agree that “ads” are a big part of the Google challenge. I am not sure about the “mouse” or the “maze.”
Googzilla emerged from an incredible sequence of actions. Taken as a group, these actions made Google the poster child for what smart Silicon Valley brainiacs could accomplish. From the get-go, Google emerged from the Backrub service. Useful research like the CLEVER method was kicking around at some conferences as a breakthrough for determining relevance. The competition was busy trying to become “portals” because the Web indexing thing was expensive and presented what seemed to be an infinite series of programming hoops. Google had zero ways to make money. As I recall, the mom and dad of Googzilla tried to sell the company to those who would listen; for example, the super brainiacs at Yahoo. Then the aha moment. GoTo.com had caused a stir in the Web indexing community by selling traffic. GoTo.com became Overture.com. Yahoo.com (run by super brainiacs, remember) bought Overture. But Yahoo did not have the will, the machinery, or the guts to go big. Yahoo went home. Google went big.
What makes Google the interesting outfit it is, in my opinion, are these points:
- The company was seemingly not above receiving inspiration from the GoTo.com, Overture.com, and ultimately Yahoo.com “pay to play” model. Some people don’t know that Google was built on appropriated innovation and paid money and shares to make Yahoo’s legal eagles fly away. For me, Google embodied intellectual “flexibility” and an ethical compass sensitive to expediency. I may be wrong, but the Google does not strike me as being infused with the higher spirits of truth, justice, and the American way that Superman is. Google’s innovation boils down to borrowing. That’s okay. I borrow, but I try to footnote, not wait until the legal eagles gnaw at my liver.
- Google management, in my experience, was clueless about the broader context of its blend of search and advertising. I don’t think it was a failure of brainiac thinking. The brainiacs did not have a context into which to fit their actions. Larry Page argued with me in 1999 about the value of truncation. (A sketch of what truncation means appears after this list.) He said, “No truncation needed at Google.” Baloney. Google truncates. Google informed a US government agency that Google would not conform to the specifications of the Statement of Work for a major US government search project. A failure to meet the criteria of the Statement of Work made Google ineligible to win that project. What did Google do? Google explained to the government team that the Statement of Work did not apply to Google technology. Well, Statements of Work and procurement work one way. Google did not like that way, so Google complained. Zero context. What Google should have done is address each requirement in a positive manner and turn in the bid. Nope, operating independently of procurement rules, Google just wanted to make up the rules. Period. That’s the way it is now, and that’s the way Google has operated for nearly 25 years.
- Google is not mismanaged from Google’s point of view. Google is just right by definition. The management problems were inherent and obvious from the beginning. Let me give one example: Vendors struggled with the Google accounting system 20 or more years ago. Google blamed the Oracle database. Why? The senior management did not know what they did not know, and they lacked the mental characteristic of understanding that fact. The assumption was that Googlers were brainiacs who, having passed the dorky Google intelligence test, could solve any problem. Wrong. Google has made and continues to make decisions like a high school science club planning an experiment. Nice group, just not athletes, cheerleaders, class officers, or non-nerd advisors. What do you get? You get crazy decisions like dumping Dr. Timnit Gebru (and thereby creating the Stochastic Parrots conference) and letting Microsoft make Bing, a Clippy on steroids, look like a big deal.
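For readers who never wrangled a query language: truncation is the old search feature of matching a word stem against all of its endings, so “librar*” pulls up library, libraries, and librarian. Here is a minimal sketch of the idea, with a made-up index and in no way Google’s implementation:

```python
# Minimal truncation (wildcard stem matching) sketch; the index is made up.
index = ["library", "libraries", "librarian", "libretto", "liberty"]

def truncate_search(stem, terms):
    """Return every indexed term that begins with the query stem."""
    return [term for term in terms if term.startswith(stem)]

print(truncate_search("librar", index))  # ['library', 'libraries', 'librarian']
```

Per the “Google truncates” observation above, something along these lines runs behind the scenes, whatever Mr. Page said in 1999.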
Net net: Ads are important. But Google is Google because of its original and fundamental mental approach to problems: We know better. One thing is sure in my mind: Google does not know itself any better now than it did when it emerged from the Backrub “borrowed” computers and grousing about using too much Stanford bandwidth. Advertising is a symptom of a deeper malady, a mental issue in my opinion.
Stephen E Arnold, March 2, 2023
Google: Share, Googlers, As You Did in Kindergarten. No Spats over Cookies!
March 1, 2023
The 2023 manifestation of the Google is fascinating. There was the Code Red. There’s the Supreme Court and the European Union. There’s the anti-Microsoft Bing thing.
And now we have the kindergarten mantra, “Share, kiddies.” Sorry, I meant, “Share, Googlers.”
I read “Google Cloud Staff Asked to Share Desks in Real Estate Efficiency Drive.” The article reports as absolute real journalism:
Google has reportedly asked employees to begin sharing desks at several sites across the US as part of a “real estate efficiency” drive. Employees at Google’s cloud division will be asked to pair up with colleagues and alternate in-office shift patterns as part of the move…
How will this work in Kirkland and Seattle, Washington, Manhattan, San Francisco, and maybe TC3 or MP1? The write up explains:
“Most Googlers will now share a desk with one other Googler,” the documents state. “Through the matching process, they will agree on a basic desk setup and establish norms with their desk partner and teams to ensure a positive experience in the new shared environment.”
Have you been in a Google, DeepMind, Alphabet, or YouTube meeting? Ah, well, if the answer is “yes,” you will know that reaching agreement is an interesting process. If the answer is “no,” you can replicate the experience by visiting a meeting of the local high school’s science club. Close enough I would suggest.
I remember when:
- Tony Bennett performed in the Google cafeteria
- Odwalla (a killer health drink) filled fridges
- A car wash service was available in the parking lot on Shoreline Drive
Yes, I remember.
In 2023, the Google is showing its age (maybe maturity) after the solving-death and Loon balloon era.
Reducing costs is a cookie cutter solution to management running out of ideas for generating new revenue. How many McKinsey or Booz, Allen consultants did it require to produce the idea of sharing a sleeping bag? A better question is, “How much did Google pay outside consultants to frame the problem and offer several solutions?”
Googzilla is not dead. The beastie is taking steps to make sure it survives after the Microsoft marketing wildfire scorched the tail of the feared online advertising creature that killed relevance.
And Odwalla? Just have a New Coke? Oh, sorry. That’s gone too.
Stephen E Arnold, March 1, 2023
Google: Good at Quantum and Maybe Better at Discarding Intra-Company Messages
February 28, 2023
Google has already declared quantum supremacy. The supremos have outsupremed themselves, if this story in the UK Independent is accurate.
Okay, supremacy but error problems. Supremacy but a significant shift. Then the word “plague.”
The write up states in what strikes me as a Google PR recyclish way:
Google researchers say they have found a way of building the technology so that it corrects those errors. The company says it is a breakthrough on a par with its announcement three years ago that it had reached “quantum supremacy”, and represents a milestone on the way to the functional use of quantum computers.
The write up continues:
Dr Julian Kelly, director of quantum hardware at Google Quantum AI, said: “The engineering constraints (of building a quantum computer) certainly are feasible. “It’s a big challenge – it’s something that we have to work on, but by no means that blocks us from, for example, making a large-scale machine.”
What seems to be a similar challenge appears in “DOJ Seeks Court Sanctions against Google over Intentional Destruction of Chat Logs.” This write up is less a rah rah for the quantum complexity crowd and more about a simpler problem: Retaining employee communications amidst the legal issues through which the Google is wading. The write up says:
Google should face court sanctions over “intentional and repeated destruction” of company chat logs that the US government expected to use in its antitrust case targeting Google’s search business, the Justice Department said Thursday [February 23, 2023]. Despite Google’s promises to preserve internal communications relevant to the suit, for years the company maintained a policy of deleting certain employee chats automatically after 24 hours, DOJ said in a filing in District of Columbia federal court. The practice has harmed the US government’s case against the tech giant, DOJ alleged.
That seems clear, certainly clearer than the assertion that the 49-physical-qubit versus 17-physical-qubit result is on a par with the quantum supremacy claim of several years ago. (My rough sketch of the qubit arithmetic appears below.)
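For context, here is my back-of-the-envelope reading of the claim, not Google’s own arithmetic: the experiment compared a distance-3 surface code patch (17 physical qubits) against a distance-5 patch (49 physical qubits). The textbook heuristic says that once the physical error rate p sits below a threshold p_th, the logical error rate p_L should fall exponentially as the code distance d grows:

```latex
% Heuristic surface-code error suppression (a sketch, not Google's figures):
% p = physical error rate, p_th = threshold, d = code distance, A = a constant
p_L \approx A \left( \frac{p}{p_{\mathrm{th}}} \right)^{(d+1)/2}
```

The advertised milestone is that the bigger patch showed a slightly lower logical error rate than the smaller one, which is what the formula requires if scaling up is ever to work.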
How can one company be adept at manipulating qubits and mal-adept at saving chat messages? Wait! Wait!
Maybe Google is equally adept: Manipulating qubits and manipulating digital information.
Strike the quantum fluff and focus on the manipulation of information. Is that a breakthrough?
Stephen E Arnold, February 28, 2023
Stop ChatGPT Now Because We Are Google!
February 21, 2023
Another week, another jaunt to a foreign country to sound the alarm which says to me: “Stop ChatGPT now! We mean it. We are the Google.”
I wonder if there is a vaudeville poster advertising the show currently playing in Europe and the US. What would that poster look like? Would a smart software system generate a Yugo-sized billboard?
In my opinion, the message, and getting it broadcast via an estimable publication like the Metro.co.uk tabloid-like Web site, is high comedy. No, the reality of the Metro article is different. The article, headlined “Google Issues Urgent Warning to the Millions of People Using ChatGPT,” reports:
A boss at Google has hit out at ChatGPT for giving ‘convincing but completely fictitious’ answers.
And who is the boss? None other than the other half of the management act Sundar and Prabhakar. What’s ChatGPT doing wrong? Getting too much publicity? Lousy search results have been the gold standard since relevance was kicked to the curb. Advertising is the best way to deliver what the user wants because users don’t know what they want. Now we see the Google: Red alert, reactionary, and high school science club antics.
Yep.
And the outfit which touted that it solved protein folding and achieved quantum supremacy cares about technology and people. The write up includes this line about Google’s concern:
This is the only way we will be able to keep the trust of the public.
As I noted in a LinkedIn post in response to a high-class consultant’s comment about smart software, my reply was, “Google trust?”
Several observations:
- Google, like Microsoft, cares about money and market position. The trust thing muddies the waters in my opinion. Microsoft and security? Google and alleged monopoly advertising practices?
- Google is pitching the hallucination angle pretty hard. Does Google mention Forrest Timothy Hayes, who died of a drug overdose in the company of a non-technical Google contractor? See this story. Who at Google is hallucinating?
- Google does not know how to respond to Microsoft’s marketing play. Google’s response is to travel outside the US explaining that the sky is falling. What’s falling, I surmise, is the effectiveness of Google’s marketing about itself.
Net net: My conclusion about Google’s anti-Microsoft ChatGPT marketing play is, “Is this another comedy act being tested on the road before opening in New York City?” This act may knock George Burns and Gracie Allen from top billing. Let’s ask Bard.
Stephen E Arnold, February 21, 2023
When Dumping an Employee Yields a Conference: Unexpected Consequence? Yep
February 20, 2023
The saga of Google’s management of smart people has taken a surprising twist. Dr. Timnit Gebru and some colleagues have declared Friday, March 17, 2023, “Stochastic Parrots Day.” The conference is named after the journal article/research paper about some of the risks certain approaches to smart software generate.
Stochastic parrots created by the smart software Craiyon.com. I assume that Craiyon is the owner of these images and that image rights trolls will be on the prowl for violations of the software’s intellectual property. But I enhanced these stochastic parrots, and I wrote this essay. No smart software writing aids for this dinobaby.
You can download the paper “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” from this link, which raises a paywall; the paywalled ACM version is at this link. The authors of the paper that allowed Dr. Gebru to find her future elsewhere are Emily Bender, Angelina McMillan-Major, and Margaret Mitchell, another Xoogler purged from the online ad outfit. However, there is a useful summary prepared by Tushar Chandra at this link. According to the conference announcement, the co-authors and “various guests” will “reflect on what has happened in the last two years, what the large language model landscape currently looks like, and where we are headed versus where we should be headed.”
In my experience, employees who have the opportunity to find their future elsewhere start poking around for work. A few start companies or non-profits. Very few set up a new conference named after the paper which [a] blew the whistle on some of the AI craziness reported endlessly in TechMeme and other online information services and [b] put a US Army De Oppresso Liber laser dot on Google’s personnel management methods.
Yep, a conference. A free conference, although a registrant can donate to the organizers.
What’s the unexpected consequence or, I should say, consequences? Let me do a little speculation:
- Google amps up the Sundar and Prabhakar routine about how Google wants to be careful, to earn trust, and, of course, demonstrate that Microsoft’s brilliant marketing play is just stupid. (Who is hallucinating? Microsoft’s OpenAI demonstrations or the Google?)
- The conference attracts the attention of a major conference organizer. I am not sure the ACM will have the moxie to create a conference that appeals to those who are not members. Imagine a Stochastic Parrot program held twice a year. I think it might work.
- This event strikes me as one of those quantum moments. Is the parrot dead or alive? It is hard to predict how the conference will interact with the real world and which systems and methods will find themselves under the parrot’s confocal-type differential interference contrast microscope. What will emerge? Recursive methods fed synthetic data? Higher level abstractions shaped by engineers’ biases? Misinformation ingested so that results don’t match other sources and findings? Carelessness infused with cost cutting in the content training process? Sail and Snorkel perhaps?
Net net: What happens if a stochastic parrot conference gets too big? Answer: Perhaps Jeff Dean will become a speaker and set the record straight? Yikes! Code Super Red?
Stephen E Arnold, February 20, 2023
Another Grousing Xoogler: A Case Study Under Construction?
February 20, 2023
Say “Google” to me, and I think of:
[a] Philandering in the Google legal unit. See this story.
[b] A senior manager dead on a yacht with a “special” contractor and alleged concoctions not included in a bright child’s chemistry set. See this story.
[c] Solving death. See this story.
[d] An alleged suicide attempt by a high profile Alphabet professional fond of wearing Google Glass at parties and who suffered post traumatic stress when the love boat crashed. See this story.
[e] Google’s click fraud matter. See this story.
[f] Pundits “forgetting” that Google’s pay-to-play was an idea for which Google’s pre-IPO management paid about $1 billion to avoid an expensive legal hassle over alleged improper use of Yahoo, GoTo, and Overture technology. See this story.
I am not sure what you think about when you hear the word “Google.”
Image of trustworthy people generated by Craiyon.com. A dinobaby wrote this Beyond Search story and the caption for the AI generated image, which I assume is now in for-fee image banks with PicRights’ software protecting everyone’s rights.
“Former Googler Pulls Back the Curtain on a Bureaucratic Maze and Lambastes Bosses and Employees for Losing Sight of What’s Important” suggests that my associations are not comprehensive. A Xoogler wizard named Praveen Seshadri suggested, according to Fortune Magazine:
Google employees don’t go to work each day thinking they serve users or customers. Instead, they serve something internal to Google, be it a process, a technology, a manager, or other employees.
What about promotions, bonuses, and increasing advertising revenue? Not top of mind for Praveen, it seems.
Googlers, he allegedly says, according to Fortune:
Instead, the focus is on potential risk, which is seen in “every line code you change” and “anything you launch,” resulting in layer upon layer of processes, reviews, and approvals.
Ah, ha. Parkinson’s Law applied to high school science club management methods, perhaps?
The Fortune write up states:
… today, Seshadri argues in his essay, there is a “collective delusion” within Google that the company is still exceptional, when in fact most people quietly complain about the overall inefficiency. As a Google employee, “you don’t wake up everyday thinking about how you should be doing better and how your customers deserve better and how you could be working better,” he writes. “Instead, you believe that things you are doing already are so perfect that they are the only way to do it.”
I suppose I should add one more item to my list of associations:
[g] Googlers struggle to perceive the reality their actions have created. See this story.
What happened to Foundem, the French tax forms, and Timnit Gebru? A certain blindness?
Each week appears to bring another installment of the Sundar and Prabhakar team’s comedy act. I look forward to a few laughs from the group now laboring in Code Red mode.
Stephen E Arnold, February 20, 2023
Fixing Bard with a Moma Badge As a Reward
February 17, 2023
I read an interesting news item from CNBC. Yep, CNBC. The story is “Google Asks Employees to Rewrite Bard’s Bad Responses, Says the A.I. Learns Best by Example.” The passage which caught my attention immediately was:
Prabhakar Raghavan, Google’s vice president for search, asked staffers in an email on Wednesday to help the company make sure its new ChatGPT competitor gets answers right. The email, which CNBC viewed, included a link to a do’s and don’ts page with instructions on how employees should fix responses as they test Bard internally.
Hypothetical Moma buttons for right fixes to Google Bard’s off-the-mark answers. Collect them all!
I don’t know much about Googlers, but from what I have observed, the concept “answers right” is fascinating. From my point of view, Googlers must know what is “right.” Therefore, Google can recognize what is wrong. The premise, if the sentence accurately reflects the wisdom of Sundar and Prabhakar, is that Google is all knowing.
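For what it is worth, here is a minimal sketch, my invention and not Google’s internal tooling, of what “learns best by example” amounts to: collect each employee fix as a (prompt, bad answer, corrected answer) record and recycle the corrections as training pairs. The Webb telescope example is hypothetical, echoing the much-reported demo flub:

```python
from dataclasses import dataclass

@dataclass
class Correction:
    """One employee fix: the prompt, Bard's miss, and the rewritten answer."""
    prompt: str
    bad_response: str
    fixed_response: str

# Hypothetical feedback log gathered from internal do's-and-don'ts testing.
feedback_log = [
    Correction(
        prompt="What did the James Webb Space Telescope do first?",
        bad_response="It took the very first pictures of an exoplanet.",
        fixed_response="It did not take the first exoplanet image; "
                       "ground-based telescopes did that years earlier.",
    ),
]

# The model "learns best by example": the fixes become fine-tuning pairs.
training_pairs = [(c.prompt, c.fixed_response) for c in feedback_log]
print(training_pairs)
```

Whether a Moma badge is adequate pay for this labeling work is a separate question.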
Let’s look at one definition of all knowing. The source is the ever-popular scribe and so-so poet John Milton, blind but prolific, whose lines describe the approach Google’s wizards, poobahs, and wonder makers are taking to fixing up the company’s smart software. Milton pointed out his God’s approach to addressing a small problem:
What pleasure I from such obedience paid,
When will and reason (reason also is choice)
Useless and vain, of freedom both despoiled,
Made passive both, had served necessity,
Not me. (3.103-111) [Emphasis added, Editor]
Serving necessity? Question: When the software and systems are flawed, humans must intervene … of necessity?
Will Googlers try to identify “right” information and remediate the bad? Yes.
Can Googlers determine “right” and “bad” information? Consider this: If these Googlers could, how does one explain the flawed software and systems which must be fixed by “necessity”?
I know Google’s senior managers are bright, but this intervention by the lesser angels strikes me as [a] expensive, [b] an engineering mess, and [c] demonstrating some darned wacky reasoning. But the task is hard. In fact, it is a journey:
… CEO Sundar Pichai asked employees to spend two to four hours of their time on Bard, acknowledging that “this will be a long journey for everyone, across the field.”
But the weirdness of the “field” metaphor is nothing compared to this stunning comment, which is allegedly dead accurate:
To incentivize people in his organization to test Bard and provide feedback, Raghavan said contributors will earn a “Moma badge…”
A Moma badge? A Moma badge? Like an “Also Participated” ribbon or a scouting patch for helping an elderly person across Shoreline Drive?
If the CNBC write up is accurately relating what a senior Googler said, Google’s approach manifests arrogance and a bit of mental neuropathy. My view is that the “Moma badge” thing smacks of a group of adolescents in a high school science club deciding to create buttons to award to themselves for setting the chem lab on fire. Good work, kids. Is the Moma badge an example of Google management insight?
I know one thing: I want a Moma badge… now.
Stephen E Arnold, February 17, 2023
Google Pushback: Malik Aforethought?
February 16, 2023
High school reunions will be interesting this year — particularly in a country where youthful relationships persist for life. I read “A Well Known Tech Blogger and Venture Capitalist Says It Might Be Time for Google to Find a New CEO.” The write up includes a sentence I found intriguing about Sundar Pichai, the Google digital leader:
“Google’s board, including the founders, must ask: is Pichai the right guy to run the company, or is it time for Sundar to go? Does the company need a more offense minded CEO? Someone who is not satisfied with status quo, and willing to break some eggs?”
The Microsoft ChatGPT marketing thunderbolt may well put asunder Sundar.
The write up quotes the pundit Om Malik again:
“Google seems to have dragged its feet. The botched demo and lack of action around AI are symptoms of a bigger disease — a company entrapped in its past, inaction, and missed opportunities.”
Imagine. Attending a high school reunion hoedown in Mumbai and having to explain:
- Microsoft’s smart software scorched earth method
- Missing an “opportunity”
- Criticism from one of Silicon Valley’s most loved insiders.
Yep, long evening.
Stephen E Arnold, February 16, 2023
Google Points Out That ChatGPT Has a Core Neural Disorder: LSD or Spoiled Baloney?
February 16, 2023
I am an old-fashioned dinobaby. I have a reasonably good memory for great moments in search and retrieval. I recall when Danny Sullivan told me that search engine optimization improves relevance. In 2006, Prabhakar Raghavan, on a conference call with a Managing Director of a so-so financial outfit, explained that Yahoo had semantic technology that made Google’s pathetic effort look like outdated technology.
Hallucinating pizza courtesy of the super smart AI app Craiyon.com. The art, not the write up it accompanies, was created by smart software. The article is the work of the dinobaby, Stephen E Arnold. Looks like pizza to me. Close enough for horseshoes like so many zippy technologies.
Now that SEO and its spawn are scrambling to find a way to fiddle with increasingly weird methods for making software return the results the search engine optimization crowd’s customers demand, Google’s head of search Prabhakar Raghavan is opining about the oh, so miserable work of OpenAI and its now TikTok-trending ChatGPT. May I remind you, gentle reader, that OpenAI availed itself of some Googley open source smart software and consulted with some Googlers as it ramped up to the tsunami of PR ripples? May I remind you that Microsoft said, “Yo, we’re putting some OpenAI goodies in PowerPoint.” The world rejoiced, and Reddit plus Twitter kicked into rave mode.
Google responded with a nifty roll out in Paris. February is not April, but maybe the event should have happened in April 2023, not in les temps d’hiver?
I read with considerable amusement “Google Vice President Warns That AI Chatbots Are Hallucinating.” The write up states, as rock solid “George Washington, I cannot tell a lie” truth, the following:
Speaking to German newspaper Welt am Sonntag, Raghavan warned that users may be delivered complete nonsense by chatbots, despite answers seeming coherent. “This type of artificial intelligence we’re talking about can sometimes lead to something we call hallucination,” Raghavan told Welt Am Sonntag. “This is then expressed in such a way that a machine delivers a convincing but completely fictitious answer.”
LSD or just the Google code relied upon? Was it the Googlers of whom OpenAI asked questions? Was it reading the gems of wisdom in Google patent documents? Was it coincidence?
I recall that Dr. Timnit Gebru and her co-authors of the Stochastic Parrot paper suggested that life on the Google island was not palm trees and friendly natives. Nope. Disagree with the Google, and your future elsewhere awaits.
Now we have the hallucination issue. The implication is that smart software like Google-infused OpenAI is addled. It imagines things. It hallucinates. It is living in a fantasy land with bean bag chairs, Foosball tables, and memories of Odwalla juice.
I wrote about the after-the-fact yip yap from Google’s Chairperson of the Board. I mentioned the Father of the Darned Internet’s post-ChatGPT PR blasts. Now we have the head of search’s observation about screwed up neural networks.
Yep, someone from Verity should know about flawed software. Yep, someone from Yahoo should be familiar with using PR to mask spectacular failure in search. Yep, someone from Google is definitely in a position to suggest that smart software may be somewhat unreliable because of fundamental flaws in the systems and methods implemented at Google and probably other outfits loving the Tensor T shirts.
Stephen E Arnold, February 16, 2023
Google Wizards: Hey, We Knew But Did Not Intervene. Very Bard Like
February 15, 2023
I read two stories. Each offers a glimpse into what I call backing away and distancing. I think each reveals the failure of Google governance. You may disagree. That’s okay, particularly if the stories are horse feathers. My hunch is that there is a genetically warped turkey under the plumage.
The first item is from the increasingly sensational Insider. The story is “Google Didn’t Think Its Bard AI Was Really Ready for a Product Yet, Says Alphabet Chairman, Days after Its Stock Fell Following the Chatbot’s Very Public Mistake.” The write up pivots on information (allegedly 100 percent dead solid in the bull’s eye) provided by John Hennessy, the chairman of Alphabet. The chair person! What did this captain of the digital titan say? I quote from the write up:
“I think Google was hesitant to productize this because it didn’t think it was really ready for a product yet, but, I think, as a demonstration vehicle, it’s a great piece of technology….” He added Google was slow to introduce Bard because it was still giving wrong answers.
From my point of view, isn’t the role of the Board of Directors, and specifically the Chair, to provide what might be called governance guidance? Since this admission of “giving wrong answers” was made public after the disaster in a city where a great lunch is easy to obtain, I would suggest that the bowl of soupe à l’oignon was prepared from a bag of instant convenience food: Not particularly good but perfect for a high school science club snack.
The second item is from CNet, which has some experience with smart software. The article is “Computing Guru Criticizes ChatGPT AI Tech for Making Things Up.” And who is the computing guru? None other than Vint Cerf, one of the fathers of the Internet, if I remember something I heard at a conference.
The CNet article reported as actual factual:
But, speaking Monday [February 13, 2023] at Celesta Capital’s TechSurge Summit, he did warn about ethical issues of a technology that can generate plausible sounding but incorrect information even when trained on a foundation of factual material. If an executive tried to get him to apply ChatGPT to some business problem, his response would be to call it snake oil, referring to bogus medicines that quacks sold in the 1800s, he said. Another ChatGPT metaphor involved kitchen appliances.
Then this allegedly accurate quotation from the father of the Internet and Google guru:
“It’s like a salad shooter — you know how the lettuce goes all over everywhere,” Cerf said. “The facts are all over everywhere, and it mixes them together because it doesn’t know any better.”
Did the Googlers crafting Bard run the demonstration by Mr. Cerf? Nope. The write up says:
Cerf said he was surprised to learn that ChatGPT could fabricate bogus information from a factual foundation. “I asked it, ‘Write me a biography of Vint Cerf.’ It got a bunch of things wrong,” Cerf said. That’s when he learned the technology’s inner workings — that it uses statistical patterns spotted from huge amounts of training data to construct its response. “It knows how to string a sentence together that’s grammatically likely to be correct,” but it has no true knowledge of what it’s saying, Cerf said. “We are a long way away from the self-awareness we want.”
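Mr. Cerf’s “salad shooter” is easy to demonstrate. Here is a toy sketch, mine and nobody’s production system: a bigram model that records which word follows which in a tiny corpus, then strings together statistically plausible text with no notion of whether it is true:

```python
import random
from collections import defaultdict

# Tiny corpus; the toy model only ever sees word-to-word statistics.
corpus = (
    "vint cerf is a father of the internet . "
    "vint cerf is a father of tcp ip . "
    "the internet is a network of networks ."
).split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)  # record every observed successor word

def generate(start, length=12):
    """String together likely-sounding, truth-blind text from the stats."""
    word, out = start, [start]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])  # pick a statistically seen successor
        out.append(word)
    return " ".join(out)

print(generate("vint"))  # grammatically plausible; facts "all over everywhere"
```

A real large language model swaps the bigram table for a neural network trained on huge amounts of data, but the truth-blindness Mr. Cerf describes is the same in kind.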
It seems to me that if the father of the Internet is on staff, it would make sense to get some inputs.
Let’s recap:
- After the fact, the Chair of the Board points out known problems but does not invoke action based on the need for governance related to product performance. Seems like something slipped betwixt the cup and the lip.
- After the fact, the father of the Internet points out that he was “surprised” that Google technology generated misinformation. Again … after the fact.
Is the company managed by responsible adults or individuals who believe themselves to be in a high school science club? Are Googlers indifferent to the need to get their act together before they take the show on the road?
I think the French could label either Googler’s comments as observations offered in l’esprit de l’escalier. Accurate, but not management.
Stephen E Arnold, February 15, 2023