Angling to Land the Big Google Fish: A Humblebrag Quest to Be CEO?
April 3, 2024
This essay is the work of a dumb dinobaby. No smart software required.
My goodness, the staff and alums of DeepMind have been in the news. Wherever there are big bucks or big buzz opportunities, one will find the DeepMind marketing machinery. Consider “Can Demis Hassabis Save Google?” The headline has two messages for me. The first is that a “real” journalist thinks that Google is in big trouble. Big trouble translates to stakeholder discontent. That discontent means it is time to roll in a new Top Dog. I love poohbahing, but opining that the Google is in trouble strikes me as premature. Sure, it was aced by the Microsoft-OpenAI play not too long ago. But the Softies have moved forward with the Mistral deal and the mysterious Inflection deal. But the Google has money, market share, and might. Jake Paul can say he wants the Mike Tyson death stare. But that’s an opinion until Mr. Tyson hits Mr. Paul in the face.
The second message in the headline is that one of the DeepMind tribe can take over Google, defeat Microsoft, generate new revenues, avoid regulatory purgatory, and dodge the pain of its swinging door approach to online advertising revenue generation; that is, people pay to get in, people pay to get out, and soon will have to subscribe to watch those entering and exiting the company’s advertising machine.
Thanks, MSFT Copilot. Nice fish.
What are the points of the essay which caught my attention other than the headline for those clued in to the Silicon Valley approach to “real” news? Let me highlight a few points.
First, here’s a quote from the write up:
Late on chatbots, rife with naming confusion, and with an embarrassing image generation fiasco just in the rearview mirror, the path forward won’t be simple. But Hassabis has a chance to fix it. To those who know him, have worked alongside him, and still do — all of whom I’ve spoken with for this story — Hassabis just might be the perfect person for the job. “We’re very good at inventing new breakthroughs,” Hassabis tells me. “I think we’ll be the ones at the forefront of doing that again in the future.”
Is the past a predictor of future success? More than lab-to-Android is going to be required. And “good at inventing new breakthroughs” is an assertion, not an evaluation. Google has been in the me-too business for a long time. The company sees itself as a modern Bell Labs and PARC. I think that the company’s perception of itself, its culture, and the comments of its senior executives suggest that the derivative nature of Google is neither remembered nor considered. It’s just “we’re very good.” Sure “we” are.
Second, I noted this statement:
Ironically, a breakthrough within Google — called the transformer model — led to the real leap. OpenAI used transformers to build its GPT models, which eventually powered ChatGPT. Its generative ‘large language’ models employed a form of training called “self-supervised learning,” focused on predicting patterns, and not understanding their environments, as AlphaGo did. OpenAI’s generative models were clueless about the physical world they inhabited, making them a dubious path toward human level intelligence, but would still become extremely powerful. Within DeepMind, generative models weren’t taken seriously enough, according to those inside, perhaps because they didn’t align with Hassabis’s AGI priority, and weren’t close to reinforcement learning. Whatever the rationale, DeepMind fell behind in a key area.
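For readers who wonder what “predicting patterns, and not understanding their environments” boils down to, here is a minimal toy sketch of the self-supervised objective: the training signal is just the next token in the raw text itself. (A real transformer does this with attention and billions of parameters; this bigram counter illustrates the objective only and is not anything Google or OpenAI ships.)

```python
from collections import Counter, defaultdict

# Self-supervision in miniature: the "label" for each token is simply
# the token that follows it in the raw text. No human labels, no model
# of the physical world -- just pattern prediction.
corpus = "the lab invented the transformer and a rival used the transformer"
tokens = corpus.split()

following = defaultdict(Counter)
for current, nxt in zip(tokens, tokens[1:]):
    following[current][nxt] += 1  # count what follows each token

def predict_next(token: str) -> str:
    """Return the most frequent follower: pure statistical prediction."""
    candidates = following.get(token)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))  # -> "transformer", the most common continuation
```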
Google figured something out and then did nothing with the “insight.” There were research papers and chatter. But OpenAI (powered in part by Sam AI-Man) took the Google invention and used it to carpet bomb, mine, and set on fire Google’s presumed lead in anything related to search, retrieval, and smart software. The aftermath of the Microsoft OpenAI PR coup is a continuing story of rehabilitation. From what I have seen, Google needs more time getting its ageing body parts working again. The ad machine produces money, but the company reels from management issue to management issue with alarming frequency. Biased models complement spats with employees. Silicon Valley chutzpah causes neurological spasms among US and EU regulators. Something is broken, and I am not sure a person from inside the company has the perspective, knowledge, and management skills to fix an increasingly peculiar outfit. (Yes, I am thinking of ethnically-incorrect German soldiers loyal to a certain entity on Google’s list of questionable words and phrases.)
And, lastly, let’s look at this statement in the essay:
Many of those who know Hassabis pine for him to become the next CEO, saying so in their conversations with me. But they may have to hold their breath. “I haven’t heard that myself,” Hassabis says after I bring up the CEO talk. He instantly points to how busy he is with research, how much invention is just ahead, and how much he wants to be part of it. Perhaps, given the stakes, that’s right where Google needs him. “I can do management,” he says, ”but it’s not my passion. Put it that way. I always try to optimize for the research and the science.”
I wonder why the author of the essay does not query Jeff Dean, the former head of a big AI unit in Mother Google’s inner sanctum, about Mr. Hassabis. How about querying Mr. Hassabis’ co-founder of DeepMind about Mr. Hassabis’ temperament and decision-making method? What about chasing down former employees of DeepMind and getting those wizards’ perspective on what DeepMind can and cannot accomplish?
Net net: Somewhere in the little-understood universe of big technology, there is an invisible hand pointing at DeepMind and making sure the company appears in scientific publications, the trade press, peer reviewed journals, and LinkedIn funded content. Determining what’s self-delusion, fact, and PR wordsmithing is quite difficult.
Google may need some help. To be frank, I am not sure anyone in the Google starting line up can do the job. I am also not certain that a blue chip consulting firm can do much either. Google, after a quarter century of zero effective regulation, has become larger than most government agencies. Its institutional mythos creates dozens of delusional Ulysses who cannot separate fantasies of the lotus eaters from the gritty reality of the company as one of the contributors to the problems facing youth, smaller businesses, governments, and cultural norms.
Google is Googley. It will resist change.
Stephen E Arnold, April 3, 2024
AI and Stupid Users: A Glimpse of What Is to Come
March 29, 2024
This essay is the work of a dumb dinobaby. No smart software required.
When smart software does not deliver, who is responsible? I don’t have a dog in the AI fight. I am thinking about deployment of smart software in professional environments. When the outputs are wonky or do not deliver the bang of a competing system, what is the customer supposed to do? Is the vendor responsible? Is the customer responsible? Is the person who tried to validate the outputs guilty of putting a finger on the scale of a system whose developers cannot explain exactly how an output was determined? Viewed from one angle, this is the Achilles’ heel of artificial intelligence. Viewed from another angle, determining responsibility is an issue which, in my opinion, will be decided by legal processes. In the meantime, the issue of a system’s not working can have significant consequences. How about those automated systems on aircraft which dive suddenly or vessels which jam a ship channel?
I read a write up which provides a peek at what large outfits pushing smart software will do when challenged about quality, accuracy, or other subjective factors related to AI-imbued systems. Let’s take a quick look at “Customers Complain That Copilot Isn’t As Good as ChatGPT, Microsoft Blames Misunderstanding and Misuse.”
The main idea in the write up strikes me as:
Microsoft is doing absolutely everything it can to force people into using its Copilot AI tools, whether they want to or not. According to a new report, several customers have reported a problem: it doesn’t perform as well as ChatGPT. But Microsoft believes the issue lies with people who aren’t using Copilot correctly or don’t understand the differences between the two products.
Yep, the user is the problem. I can imagine the adjudicator (illustrated as a mother) listening to a large company’s sales professional and a professional certified developer arguing about how the customer went off the rails. Is the original programmer the problem? Is the new manager in charge of AI responsible? Is it the user or users?
Illustration by MSFT Copilot. Good enough, MSFT.
The write up continues:
One complaint that has repeatedly been raised by customers is that Copilot doesn’t compare to ChatGPT. Microsoft says this is because customers don’t understand the differences between the two products: Copilot for Microsoft 365 is built on the Azure OpenAI model, combining OpenAI’s large language models with user data in the Microsoft Graph and the Microsoft 365 apps. Microsoft says this means its tools have more restrictions than ChatGPT, including only temporarily accessing internal data before deleting it after each query.
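The pattern being described — pull the user’s tenant data at query time, ground the model with it, and keep nothing afterward — looks roughly like the sketch below. Both helper functions are hypothetical placeholders, not real Microsoft Graph or Azure OpenAI calls; this is an illustration of the “temporarily access, then delete” flow the quote claims, not Microsoft’s actual implementation.

```python
def fetch_graph_documents(query: str) -> str:
    """Hypothetical stand-in for a Microsoft Graph search over user data."""
    return "Q3 budget memo: travel spend capped at $40k."

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an Azure OpenAI chat completion."""
    return f"(model answer grounded in {len(prompt)} characters of prompt)"

def answer_with_ephemeral_context(question: str) -> str:
    # 1. Pull the relevant internal data for this query only.
    context = fetch_graph_documents(question)
    # 2. Ground one completion with that data.
    answer = call_llm(f"Context:\n{context}\n\nQuestion: {question}")
    # 3. Retain nothing: the context exists only inside this call.
    return answer

print(answer_with_ephemeral_context("What is the travel cap?"))
```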
Here’s another snippet from the cited article:
In addition to blaming customers’ apparent ignorance, Microsoft employees say many users are just bad at writing prompts. “If you don’t ask the right question, it will still do its best to give you the right answer and it can assume things,” one worker said. “It’s a copilot, not an autopilot. You have to work with it,” they added, which sounds like a slogan Microsoft should adopt in its marketing for Copilot. The employee added that Microsoft has hired partner BrainStorm, which offers training for Microsoft 365, to help create instructional videos to help customers create better Copilot prompts.
I will be interested in watching how these “blame games” unfold.
Stephen E Arnold, March 29, 2024
The Many Faces of Zuckbook
March 29, 2024
This essay is the work of a dumb dinobaby. No smart software required.
As evidenced by his business decisions, Mark Zuckerberg seems to be a complicated fellow. For example, a couple of recent articles illustrate this contrast: On one hand is his commitment to support open source software, an apparently benevolent position. On the other, Meta is once again in the crosshairs of EU privacy advocates for what they insist is its disregard for the law.
First, we turn to a section of VentureBeat’s piece, “Inside Meta’s AI Strategy: Zuckerberg Stresses Compute, Open Source, and Training Data.” In it, reporter Sharon Goldman shares highlights from Meta’s Q4 2023 earnings call. She emphasizes Zuckerberg’s continued commitment to open source software, specifically AI software Llama 3 and PyTorch. He touts these products as keys to “innovation across the industry.” Sounds great. But he also states:
“Efficiency improvements and lowering the compute costs also benefit everyone including us. Second, open source software often becomes an industry standard, and when companies standardize on building with our stack, that then becomes easier to integrate new innovations into our products.”
Ah, there it is.
Our next item was apparently meant to be sneaky, but who did Meta think it was fooling? The Register reports, “Meta’s Pay-or-Consent Model Hides ‘Massive Illegal Data Processing Ops’: Lawsuit.” Meta is attempting to “comply” with the EU’s privacy regulations by making users pay to opt in to them. That is not what regulators had in mind. We learn:
“Those of us with aunties on FB or friends on Instagram were asked to say yes to data processing for the purpose of advertising – to ‘choose to continue to use Facebook and Instagram with ads’ – or to pay up for a ‘subscription service with no ads on Facebook and Instagram.’ Meta, of course, made the changes in an attempt to comply with EU law. But privacy rights folks weren’t happy about it from the get-go, with privacy advocacy group noyb (None Of Your Business), for example, sarcastically claiming Meta was proposing you pay it in order to enjoy your fundamental rights under EU law. The group already challenged Meta’s move in November, arguing EU law requires consent for data processing to be given freely, rather than to be offered as an alternative to a fee. Noyb also filed a lawsuit in January this year in which it objected to the inability of users to ‘freely’ withdraw data processing consent they’d already given to Facebook or Instagram.”
And now eight members of the European Consumer Organisation (BEUC) have filed new complaints, insisting Meta’s pay-or-consent tactic violates the EU’s General Data Protection Regulation (GDPR). While that may seem obvious to some, Meta insists it is in compliance with the law. Because of course it does.
Cynthia Murrell, March 29, 2024
My Way or the Highway, Humanoid
March 28, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Curious how “nice” people achieve success? “Playground Bullies Do Prosper – And Go On to Earn More in Middle Age” may have an answer. The write up says:
Children who displayed aggressive behavior at school, such as bullying or temper outbursts, are likely to earn more money in middle age, according to a five-decade study that upends the maxim that bullies do not prosper.
If you want a tip for career success, I would interpret the write up’s information this way: start when young. Also, start small. The Jake Paul approach to making news is to fight the ageing Mike Tyson. Is that for you? I know I would not start by irritating someone who walks with a cane. But, to each his or her own. If there is a small child selling Girl Scout Cookies, one might sharpen his or her leadership skills by knocking the cookie box to the ground and stomping on it. The modest demonstration of power can then be followed with the statement, “Those cookies contain harmful substances. You should be ashamed.” Then, as your skills become more fluid and automatic, move up. I suggest testing one’s bullying expertise on a local branch of a street gang involved in possibly illegal activities.
Thanks MSFT Copilot. I wonder if you used sophisticated techniques when explaining to OpenAI that you were hedging your bets.
The write up quotes an expert as saying:
“We found that those children who teachers felt had problems with attention, peer relationships and emotional instability did end up earning less in the future, as we expected, but we were surprised to find a strong link between aggressive behavior at school and higher earnings in later life,” said Prof Emilia Del Bono, one of the study’s authors.
A bully might respond to this professor and say, “What are you going to do about it?” One response is, “You will earn more, young student.” The write up reports:
Many successful people have had problems of various kinds at school, from Winston Churchill, who was taken out of his primary school, to those who were expelled or suspended.
Will nice guys who are not bullies become the leaders of the post-Covid world? The article quotes another expert as saying:
“We’re also seeing a generational shift where younger generations expect to have a culture of belonging and being treated with fairness, respect and kindness.”
Sounds promising. Has anyone told the companies terminating thousands of workers? What about outfits like IBM which are dumping humans for smart software? Yep, progress just like that made at Google in the last couple of years.
Stephen E Arnold, March 28, 2024
A Single, Glittering Google Gem for 27 March 2024
March 27, 2024
This essay is the work of a dumb dinobaby. No smart software required.
So many choices. But one gem outshines the others. Google’s search generative experience is generating publicity. The old chestnut may be true. Any publicity is good publicity. I would add a footnote. Any publicity about Google’s flawed smart software is probably good for Microsoft and other AI competitors. Google definitely looks as though it has some behaviors that are — how shall I phrase it? — questionable. No, maybe, ill-considered. No, let’s go with bungling. That word has a nice ring to it. Bungling.
I learned about this gem in “Google’s New AI Search Results Promotes Sites Pushing Malware, Scams.” The write up asserts:
Google’s new AI-powered ‘Search Generative Experience’ algorithms recommend scam sites that redirect visitors to unwanted Chrome extensions, fake iPhone giveaways, browser spam subscriptions, and tech support scams.
The technique which gets the user from the quantumly supreme Google to the bad actor goodies is redirects. Some sites use notification functions to pump even more inducements toward the befuddled user. (See, bungling and befuddled. Alliteration.)
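For the curious, here is a minimal sketch of how one can observe such a redirect chain with stock Python. The URL is a placeholder for illustration, not one of the scam sites the article reports.

```python
import urllib.request

def redirect_chain(url: str) -> list[str]:
    """Follow a URL and record every hop in its redirect chain."""
    hops = [url]

    class Tracker(urllib.request.HTTPRedirectHandler):
        def redirect_request(self, req, fp, code, msg, headers, newurl):
            hops.append(newurl)  # record each redirect target
            return super().redirect_request(req, fp, code, msg, headers, newurl)

    urllib.request.build_opener(Tracker()).open(url, timeout=10)
    return hops

# Placeholder URL; a scam chain would show several hops here.
print(redirect_chain("http://example.com"))
```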
Why do users fall for these bad actor gift traps? It seems that Google SGE conversational recommendations sound so darned wonderful, Google users just believe that the GOOG cares about the information it presents to those who “trust” the company.
The write up points out that the DeepMinded Google provided this information about the bumbling SGE:
"We continue to update our advanced spam-fighting systems to keep spam out of Search, and we utilize these anti-spam protections to safeguard SGE," Google told BleepingComputer. "We’ve taken action under our policies to remove the examples shared, which were showing up for uncommon queries."
Isn’t that reassuring? I wonder if the anecdote about this most recent demonstration of the Google’s wizardry will become part of the Sundar & Prabhakar Comedy Act?
This is a gem. It combines Google’s management process, word salad frippery, and smart software into one delightful bouquet. There you have it: Bungling, befuddled, bumbling, and bouquet. I am adding blundering. I do like butterfingered, however.
Stephen E Arnold, March 27, 2024
Xoogler Predicts the Future: China Bad, Xoogler Good
March 26, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Did you know China, when viewed from the vantage point of a former Google executive, is bad? That is a stunning comment. Google tried valiantly to convert China into a money stream. That worked until it didn’t. Now a former Googler (a “Xoogler” in some circles) has changed his tune.
Thanks, MSFT Copilot. Working on security I presume?
“Eric Schmidt’s China Alarm” includes some interesting observations. None of which address Google’s attempt to build a China-acceptable search engine. Oh, well, anyone can forget minor initiatives like that. Let’s look at a couple of comments from the article:
How about this comment about responding to China:
"We have to do whatever it takes."
I wonder if Mr. Schmidt has been watching Dr. Strangelove on YouTube. Someone might pull that viewing history to clarify “whatever it takes.”
Another comment I found interesting is:
China has already become a peer of the U.S. and has a clear plan for how it wants to dominate critical fields, from semiconductors to AI, and clean energy to biotech.
That’s interesting. My thought is that the “clear plan” seems to embrace education; that is, producing more engineers than some other countries, leveraging open source technology, and erecting interesting barriers to prevent US companies from selling some products in the Middle Kingdom. How long has this “clear plan” been chugging along? I spotted portions of the plan in Wuhan in 2007. But I guess now it’s a more significant issue after decades of being front and center.
I noted this comment about artificial intelligence:
Schmidt also said Europe’s proposals on regulating artificial intelligence "need to be re-done," and in general says he is opposed to regulating AI and other advances to solve problems that have yet to appear.
The idea is an interesting one. The UN and numerous NGOs and governmental entities around the world are trying to regulate, tame, direct, or ameliorate the impact of smart software. How’s that going? My answer is, “Nowhere fast.”
The article makes clear that Mr. Schmidt is not just a Xoogler; he is a global statesperson. But in the back of my mind, once a Googler, always a Googler.
Stephen E Arnold, March 26, 2024
AI Job Lawnmowers: Will Your Blooms Be Chopped Off and Put a Rat King in Your Future?
March 25, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I love “you will lose your job to AI” articles. I spotted an interesting one titled “The Job Sectors That Will Be Most Disrupted By AI, Ranked.” This is not so much an article as a billboard for an outfit named Voronoi, “where data tells the story.” That’s interesting because there is no data, no methodology, and no indication of the confidence level for each “nuked job.” Nevertheless, we have a ranking.
Thanks, MSFT Copilot. Will you be sparking human rat kings? I would wager that you will.
As I understand the analysis of 19,000 tasks, here’s what is most likely to be chopped down and converted to AI silage:
IT / programmers: 73 percent of the job will experience a large impact
Finance / bean counters: 70 percent of the jobs will experience a large impact
Customer sales: 67 percent of the job will experience a large impact
Operations (well, that’s a fuzzy category, isn’t it?): 65 percent of the job will experience a large impact
Personnel / HR: 57 percent of the job will experience a large impact
Marketing: 56 percent of the job will experience a large impact
Legal eagles: 46 percent of the job will experience a large impact
Supply chain (another fuzzy wuzzy bucket): 43 percent of the job will experience a large impact
The kicker in the data is that the numbers date from September 2023. Six months in the faerie land of smart software is a long, long time. Let’s assume that the data meet 2024’s gold standard.
Technology, finance, sales, marketing, and lawyering may shatter the future of employees of less value in terms of compensation, cost to the organization, or whatever management legerdemain the top dogs and their consultants whip up. Imagine: eliminating the overhead for humans like office space, health care, retirement baloney, and vacations makes smart software into an attractive “play.”
And what about the fuzzy buckets? My thought is that many people will be trimmed because a chatbot can close a sale for a product without the hassle which humans drag into the office; for example, sexual harassment, mental, drug, and alcohol “issues,” and the unfortunate workplace shooting. I think that a person sitting in a field office to troubleshoot issues related to a state or county contract might fall into the “operations” category even though the employee sees the job as something smart software cannot perform. Ho ho ho.
Several observations:
- A trivial cost analysis of human versus software over a five-year period means humans lose (see the back-of-the-envelope sketch after this list)
- AI systems, which may suck initially, will be improved over time. These initial failures may lull the once alert-to-replacement employee into a false sense of security
- Once displaced, former employees will have to scramble to produce cash. With lots of individuals chasing available work and money plays, life is unlikely to revert to the good old days of the Organization Man. (The world will be Organization AI. No suit and white shirt required.)
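Here is that trivial cost analysis as a back-of-the-envelope script. Every number is a hypothetical assumption chosen for illustration, not data from the Voronoi graphic; swap in your own figures and the conclusion rarely changes.

```python
# Five-year human-versus-software cost comparison. All figures below are
# assumed for illustration only.
YEARS = 5
human_salary = 90_000        # assumed annual salary
overhead_rate = 0.35         # assumed benefits, office space, etc.
ai_subscription = 12_000     # assumed annual per-seat AI tooling cost
ai_oversight = 20_000        # assumed annual human review of AI output

human_total = YEARS * human_salary * (1 + overhead_rate)
software_total = YEARS * (ai_subscription + ai_oversight)

print(f"Human, five years:    ${human_total:,.0f}")     # $607,500
print(f"Software, five years: ${software_total:,.0f}")  # $160,000
```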
Net net: I am glad I am old and not quite as enthralled by efficiency.
Stephen E Arnold, March 25, 2024
Software Failure: Why Problems Abound and Multiply Like Gerbils
March 19, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I read “Why Software Projects Fail” after a lunch at which crappy software and lousy products were a source of amusement. The door fell off what?
What’s interesting about the article is that it contains a number of statements which resonated with me. I recommend the article, but I want to highlight several statements from the essay. These do a good job of explaining why small and large projects go off the rails. Within the last 12 months I witnessed one project get tangled in solving a problem that existed 15 years ago. Today, not so much. The team crafted the equivalent of a Greek Corinthian helmet from the 8th century BCE. Another project, infused with AI and a vision of providing a “new” approach to security, wobbled between and among a telecommunications approach, an email approach, and an SMS approach with bells and whistles only a science fiction fan would appreciate. Both of these examples obtained funding; neither set out to build a clown car. What happened? That’s where “Why Software Projects Fail” becomes relevant.
Thanks, MSFT Copilot. You have that MVP idea nailed with the recent Windows 11 update, don’t you? Good enough, I suppose.
Let’s look at three passages from the essay, shall we?
Belief in One’s Abilities or I Got an Also-Participated Ribbon in Middle School
Here’s the statement from the essay:
One of the things that I’ve noticed is that developers often underestimate not just the complexity of tasks, but there’s a general overconfidence in their abilities, not limited to programming:
- Overconfidence in their coding skills.
- Overconfidence in learning new technologies.
- Overconfidence in our abstractions.
- Overconfidence in external dependencies, e.g., third-party services or some open-source library.
My comment: Spot on. Those ribbons built confidence, but they mean nothing.
Open Source Is Great Unless It Has Been Screwed Up, Become a Malware Delivery Vehicle, or Just Does Not Work
Here’s the statement from the essay:
… anything you do not directly control is a risk of hidden complexity. The assumption that third-party services, libraries, packages, or APIs will work as expected without bugs is a common oversight.
My view is that “complexity” is kicked around as if everyone held a shared understanding of the term. There are quite different types of complexity. For software, there is the complexity of a simple process created in Assembler but essentially impenetrable to a 20-something from a whiz-bang computer science school. There is the complexity of software built over time by attention deficit driven people who do not communicate, coordinate, or care what others are doing, will do, or have done. Toss in the complexity of indifferent, uninformed, or uninterested “management,” and you get an exciting environment in which to “fix up” software. The cherry on top of this confection is that quite a bit of software is assumed to be good. Ho ho ho.
The Real World: It Exists and Permeates
I liked this statement:
Technology that seemed straightforward refuses to cooperate, external competitors launch similar ideas, key partners back out, and internal business stakeholders focus more on the projects that include AI in their name. Things slow down, and as months turn into years, enthusiasm wanes. Then the snowball continues — key members leave, and new people join, each departure a slight shift in direction. New tech lead steps in, eager to leave their mark, steering the project further from its original course. At this point, nobody knows where the project is headed, and nobody wants to admit the project has failed. It’s a tough spot, especially when everyone’s playing it safe, avoiding the embarrassment or penalties of admitting failure.
What are the signals that trouble looms? A fumbled ball at the Google or the Apple car that isn’t can be blinking lights. Staff who go rogue on social media or find an ambulance-chasing law firm can catch some individual’s attention.
The write up contains other helpful observations. Will people take heed? Are you kidding me? Excellence costs money and requires informed judgment and expertise. Who has time for this with AI calendars, the demands of TikTok and Instagram, and hitting the local coffee shop?
Stephen E Arnold, March 19, 2024
A Single Google Gem for March 19, 2024
March 19, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I want to focus on what could be the star sapphire of Googledom. The story appeared on the estimable Murdoch confection Fox News. Its title? “Is Google Too Broken to Be Fixed? Investors Deeply Frustrated and Angry, Former Insider Warns.” The word choice in this Googley headline signals the alert reader that the Foxy folks have a juicy story to share. “Broken,” “Frustrated,” “Angry,” and “Warns” suggest that someone has identified some issues at the beloved Google.
A Google gem. Thanks, MSFT Copilot Bing thing. How’s the staff’s security today?
The write up states:
A former Google executive [David Friedberg] revealed that investors are “deeply frustrated” that the scandal surrounding their Gemini artificial intelligence (AI) model is becoming a “real threat” to the tech company. Google has issued several apologies for Gemini after critics slammed the AI for creating “woke” content.
The Xoogler, in what seems to be tortured prose, allegedly said:
“The real threat to Google is more so, are they in a position to maintain their search monopoly or maintain the chunk of profits that drive the business under the threat of AI? Are they adapting? And less so about the anger around woke and DEI,” Friedberg explained. “Because most of the investors I spoke with aren’t angry about the woke, DEI search engine, they’re angry about the fact that such a blunder happened and that it indicates that Google may not be able to compete effectively and isn’t organized to compete effectively just from a consumer competitiveness perspective,” he continued.
The interesting comment in the write up (which is recycled podcast chatter) seems to be:
Google CEO Sundar Pichai promised the company was working “around the clock” to fix the AI model, calling the images generated “biased” and “completely unacceptable.”
Does the comment attributed to a Big Dog Microsoftie reflect the new perception of the Google? The Hindustan Times, which should have radar tuned to the actions of certain executives with roots entwined in India, reported:
Satya Nadella said that Google “should have been the default winner” of Big Tech’s AI race as the resources available to it are the maximum which would easily make it a frontrunner.
My interpretation of this statement is that Google had a chance to own the AI casino, roulette wheel, and the croupiers. Instead, Google’s senior management ran over the smart squirrel with the Paris demonstration of the fantastic Bard AI system, a series of me-too announcements, and the outputting of US historical scenes with people of color turning up in what I would call surprising places.
Then the PR parade of Google wizards explains the online advertising firm’s innovations in playing games, figuring out health stuff (shades of IBM Watson), and achieving quantum supremacy in everything. Well, everything except smart software. The predicament of the ad giant is illuminated with the burning of billions in market cap coincident with the wizards’ flubs.
Net net: That’s a gem. Google losing a game it allegedly owned. I am waiting for the next podcast about the Sundar & Prabhakar Comedy Tour.
Stephen E Arnold, March 19, 2024
Just One Big Google Zircon Gemstone for March 5, 2024
March 5, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I have a folder stuffed with Google gems for the week of February 26 to March 1, 2024. I have a write up capturing more Australians stranded by following Google Maps’s representation of a territory, Google’s getting tangled in another publisher lawsuit, Google figuring out how to deliver better search even when the user’s network connection sucks, Google’s firing 43 unionized contractors while in the midst of a legal action, and more.
The brilliant and very nice wizard adds, “Yes, we have created a thing which looks valuable, but it is laboratory-generated. And it is a gem, and a deeply flawed one, not something we can use to sell advertising yet.” Thanks, MSFT Copilot Bing thing. Good enough, and I liked the unasked-for ethnic nuance.
But there is just one story: Google nuked billions in market value and created the meme of the week by making many images the heart and soul of diversity. Pundits wanted one half of the Sundar and Prabhakar comedy show yanked off the stage. Check out Stratechery’s view of Google management’s grasp of leading the company in a positive manner in “Gemini and Google’s Culture.” The screw up was so bad that even the world’s favorite expert in aircraft refurbishment and modern gas-filled airships spoke up. (Yep, that’s the estimable Sergey Brin!)
In the aftermath of a brilliant PR move, CNBC ran a story yesterday that summed up the February 26 to March 1 Google experience. The title was “Google Co-Founder Sergey Brin Says in Rare Public Appearance That Company ‘Definitely Messed Up’ Gemini Image Launch.” What an incisive comment from one of the fathers of “clever” methods of determining relevance. The article includes this brilliant analysis:
He also commented on the flawed launch last month of Google’s image generator, which the company pulled after users discovered historical inaccuracies and questionable responses. “We definitely messed up on the image generation,” Brin said on Saturday. “I think it was mostly due to just not thorough testing. It definitely, for good reasons, upset a lot of people.”
That’s the Google “gem.” Amazing.
Stephen E Arnold, March 5, 2024