More Google PR: For an Outfit with an Interesting Past, Chattiness Is Now a Core Competency
May 23, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
How many speeches, public talks, and interviews did Sergey Brin, Larry Page, and Eric Schmidt do? To my recollection, not too many. And what about now? Larry Page is tough to find. Mr. Brin is sort of invisible. Eric Schmidt has backed off his claim that Qwant keeps him up at night. But Sundar Pichai, one half of the Sundar and Prabhakar Comedy Show, is quite visible. AI everywhere keynote speeches, essays about smart software, and now an original “he wrote it himself” essay in the weird salmon-tinted newspaper The Financial Times. Yeah, pinkish.
Smart software provided me with an illustration of a fast talker pitching the future benefits of a new product. Yep, future probabilities. Rock solid. Thank you, MidJourney.
What’s with the spotlight on the current Google big wheel? Gentle reader, the visibility is one way Google is trying to advance its agenda. Before I offer my opinion about the Alphabet Google YouTube agenda, I want to highlight three statements in “Google CEO: building AI Responsibly Is the Only Race That Really Matters.”
Statement from the Google essay #1
At Google, we’ve been bringing AI into our products and services for over a decade and making them available to our users. We care deeply about this. Yet, what matters even more is the race to build AI responsibly and make sure that as a society we get it right.
The theme is that Google has been doing smart software for a long time. Let’s not forget that the GOOG released the Transformer model as open source and sat on its Googley paws while “stuff happened” starting in 2018. Was that responsible? If so, what does Google mean when it uses the word “responsible” as it struggles to cope with the meme “Google is late to the game”? For example, Microsoft pulled off a global PR coup with its Davos smart software announcements. Google responded with the Paris demonstration of Bard, a hoot for many in the information retrieval killing field. That performance of the Sundar and Prabhakar Comedy Show flopped. Meanwhile, Microsoft pushed its “flavor” of AI into its enterprise software and cloud services. My experience is that for every big PR action, there is an equal or greater PR reaction. Google is trying to catch faster race cars with words, not a better, faster, and cheaper machine. The notion that Google “gets it right” means to me one thing: maintaining quasi monopolistic control of its market and generating the ad revenue. Google, after 25 years, is walking the same old Chihuahua in a dog park filled with younger, more agile canines. After 25 years of me too and flopping with projects like solving death, revenue is the ONLY thing that matters to stakeholders. The Sundar and Prabhakar routine is wearing thin.
Statement from the Google essay #2
We have many examples of putting those principles into practice…
The “principles” apply to Google AI implementation. But the word principles is an interesting one. Google is paying fines for ignoring laws and its principles. Google is under the watchful eye of regulators in the European Union due to Google’s principles. China wanted Google to change, and Google beavered away on a China-acceptable search system until the cat was let out of the bag. Google is into equality, a nice principle, which was implemented by firing AI researchers who complained about what Google AI was enabling. Google is not the outfit I would consider the optimal source of enlightenment about principles. High tech in general and Google in particular are viewed with increasing concern by regulators in US states and assorted nation states. Why? The Googley notion of principles is not what others understand the word to denote. In fact, some might say that Google operates in an unprincipled manner. Is that why companies like Foundem and regulatory officials point out behaviors which some might find predatory, mendacious, or illegal? Principles, yes, principles.
Statement from the Google essay #3
AI presents a once-in-a-generation opportunity for the world to reach its climate goals, build sustainable growth, maintain global competitiveness and much more.
Many years ago, I was in a meeting in DC, and the Donald Rumsfeld quote about information was making the rounds. Good appointees loved to cite this Donald. Here’s the quote from 2002:
There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns – the ones we don’t know we don’t know.
I would humbly suggest that smart software is chock full of known unknowns. But humans are not very good at predicting the future. When it comes to acting “responsibly” in the face of unknown unknowns, I dismiss those who dare to suggest that humans can predict the future in order to act in a responsible manner. Humans do not act responsibly with either predictability or reliability. My evidence is part of your mental furniture: Racism, discrimination, continuous war, criminality, prevarication, exaggeration, failure to regulate damaging technologies, ineffectual action against industrial polluters, etc. etc. etc.
I want to point out that the Google essay penned by one half of the Sundar and Prabhakar Comedy Show team could be funny if it were not a synopsis of the digital tragedy of the commons in which we live.
Stephen E Arnold, May 23, 2023
Neeva: Another Death from a Search Crash on the Information Highway
May 22, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
What will forensic search experts find when they examine the remains of Neeva? The “gee, we failed” essay “Next Steps for Neeva” presents one side of what might be an interesting investigation for a bushy tailed and wide eyed Gen Z search influencer. I noted some statements which may have been plucked from speeches at the original Search Engine Conferences ginned up by an outfit in the UK or academic post mortems at the old International Online Meeting once held in the companionable Olympia London.
I noted these statements from the cited document:
Statement 1: The users of a Web search system
We started Neeva with the mission to take search back to its users.
The reality is that 99 percent of people using a Web search engine are happy when sort of accurate information is provided free. Yep, no one wants to pay for search. That’s the reason that when a commercial online service like LexisNexis loses one big client, it is expensive, time consuming, and difficult to replace the revenue. One former LexisNexis big wheel told me when we met in his limousine in the parking lot of the Cherry Hill Mall: “If one of the top 100 law firms goes belly up, we need a minimum of 200 new law firms to sign up for our service and pay for it.”
“Mommy, I failed Search,” says Timmy Neeva. Mrs. Neeva says, “What caused your delusional state, Timmy?” The artwork is a result of the smart software MidJourney.
Users don’t care about for fee search when those users wouldn’t know whether a hit in a results list was right, mostly right, mostly wrong, or stupidly crazy. Free is the fuel that pulls users, and without advertising, there’s no chance a free service will be able to generate enough cash to index, update the index, and develop new features. At the same time, the plumbing is leaking. Plumbing repairs are expensive: New machines, new ways to reduce power consumption, and oodles of new storage devices.
Users want free. Users don’t want to compare the results from a for fee service and a free service. Users want free. After 25 years, the Google is the champion of free search. Like the old Xoogler search system Search2, Neeva’s wizards never figured that most users don’t care about Fancy Dan yip yap about search.
Statement 2: An answer engine.
We rallied the Neeva team around the vision to create an answer engine.
Shades of DR-LINK: Users want answers. In 1981, a former Predicasts’ executive named Paul Owen told me, “Dialog users want answers.” That sounds logical, and to many expert informationists it is the Gospel according to Online. The reality is that users want crunchy, bite sized chunks of information which appear to answer the question, or almost right answers that are “good enough” or “close enough for horseshoes.”
Users cannot differentiate between correct and incorrect information. Heck, some developers of search engines don’t know the difference between weaponized information and content produced by a middle school teacher about the school’s graduation ceremony. Why? Weaponized information is abundant; non-weaponized information may not pass the user’s sniff test. And the middle school graduation ceremony may have a typo about the start time, or the principal of the school changed his mind due to an active shooter situation. Something output from a computer is believed to be credible, accurate, and “right.” An answer engine is what a free Web search engine spits out. The TikTok search spits out answers, and no one wonders if the results lists are shaped by Chinese interests.
Search and retrieval has been defined by Google. The company has a 90 plus percent share of the Web search traffic in North America and Western Europe. (In Denmark, the company has 99 percent of Danish users’ search traffic. People in Denmark are happier, and it is not because Google search delivers better or more accurate results.) Google is free and it answers questions.
The baloney about how it takes one click to change search engines sounds great. The reality is, as Neeva found out, that no one wants to click away from what is perceived to work for them. Neeva’s yip yap about smart software proves that the jazz about artificial intelligence is unlikely to change how free Web search works in Google’s backyard. Samsung did not embrace Bing because users would rebel.
Answer engine. Baloney. Users want something free that will make life easier; for example, a high school student looking for a quick way to crank out a 250 word essay about global warming or how to make a taco. ChatGPT is not answering questions; the application is delivering something that is highly desirable to a lazy student. By the way, at least the lazy student had the get up and go to use a system to spit out a bunch of recycled content that is good enough. But an answer engine? No, an online convenience store is closer to the truth.
Statement 3:
We are actively exploring how we can apply our search and LLM expertise in these settings, and we will provide updates on the future of our work and our team in the next few weeks.
My interpretation of this statement is that a couple of Neeva professionals will become venture centric. Others will become consultants. A few will join the handful of big companies which are feverishly trying to use “smart software” to generate more revenue. Will there be some who end up working at Philz Coffee? Yeah, some. Perhaps another company will buy the “code,” but valuing something that failed is likely to prove tricky. Who remembers who bought Entopia? No one, right?
Net net: The GenZ forensic search failure exercise will produce some spectacular Silicon Valley news reporting. Neeva is explaining its failure, but that failure was presaged when Fast Search & Transfer pivoted from Web search to the enterprise, failed, and was acquired by Microsoft. Where is Fast Search now as the smart Bing is soon to be everywhere? The reality is that Google has had 25 years to cement its search monopoly. Neeva did not read the email. So Neeva sucked up investment bucks with a song and dance about zapping the Big Bad Google with a death ray. Yep, another example of high school science club mentality touched by spreadsheet fever.
Well, the fever broke.
Stephen E Arnold, May 22, 2023
Google DeepMind Risk Paper: 60 Pages with a Few Googley Hooks
May 22, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved in writing, just a dumb humanoid.
I read the long version of “Ethical and Social Risks of Harm from Language Models.” The paper is mostly statements and footnotes to individuals who created journal-type articles which prove the point of each research article. With about 25 percent of peer reviewed research including shaped, faked, or weaponized data, I am not convinced by footnotes. Obviously the DeepMinders believe that footnotes make a case for the Google way. I am not convinced because the Google has to find a way to control the future of information. Why? Advertising money and hoped for Mississippis of cash.
The research paper dates from 2021 and is part of Google’s case for being ahead of the AI responsibility game. The “old” paper reinforces the myth that Google is ahead of everyone else in the AI game. The explanation for Sam AI-man’s and Microsoft’s marketing coup is that Google had to go slow because Google knew that there were ethical and social risks of harm from the firm’s technology. Google cares about humanity! The old days of “move fast and break things” are very 1998. Today Google is responsible. The wild and crazy dorm days are over. Today’s Google is concerned, careful, judicious, and really worried about its revenues. I think the company worries about legal actions, its management controversies, and its interdigital duel with the Softies of Redmond.
A young researcher desperately seeking footnotes to support a specious argument. With enough footnotes, one can move the world it seems. Art generated by the smart software MidJourney.
I want to highlight four facets of the 60 page risks paper which are unlikely to get much, if any, attention from today’s “real” journalists.
Googley hook 1: Google wants to frame the discussion. Google is well positioned to “guide mitigation work.” The examples in the paper are selected to “guiding action to resolve any issues that can be identified in advance.” My comment: How magnanimous of Google. Framing stakes out the Googley territory. Why? Google wants to be Googzilla and reap revenue from its users, licensees, models, synthetic data, applications, and advertisers. You can find the relevant text in the paper on page 6 in the paragraph beginning “Responsible innovation.”
Googley hook 2: Google’s risks paper references fuzzy concepts like “acceptability” and “fair.” Like love, truth, and ethics, the notion of “acceptability” is difficult to define. Some might suggest that it is impossible to define. But Google is up to the task, particularly for application spaces unknown at this time. What happens when you apply “acceptability” to “poor quality information”? One just accepts the judgment of the outfit doing the framing. That’s Google. Game. Set. Match. You can find the discussion of “acceptability” on page 9.
Googley hook 3: Google is not going to make the mistake of Microsoft and its racist bot Tay. No way, José. What’s interesting is that the only company mentioned in the text of the 60 page paper is Microsoft. Furthermore, the toxic aspects of large language models are hard for technologies to detect (page 18). Plus large language models can infer a person’s private data. So “providing true information is not always beneficial” (page 21). What’s the fix? Use smaller sets of training data… maybe (page 22). But one can fall back on trust — for instance, trust in Google the good — to deal with these challenges. In fact, trust Google to choose training data to deal with some of the downsides of large language models (page 24).
Googley hook 4: Making smart software dependent on large language models that mitigates risk is expensive. Money, smart people who are in short supply, and computing resources are expensive. Therefore, one need not focus on the origin point (large language model training and configuration). Direct attention at those downstream. Those users can deal with the identified 21 problems. The Google method puts Google out of the primary line of fire. There are more targets for the aggrieved to seek and shoot at (Page 37).
When I step back from the article which is two years old, it is obvious Google was aware of some potential issues with its approach. Dr. Timnit Gebru was sacrificed on a pyre of spite. (She does warrant a couple of references and a footnote or two. But she’s now a Xoogler.) The one side effect was that Dr. Jeff Dean, who was not amused by the stochastic parrot, has been kicked upstairs, and the UK “leader” is now herding the little wizards of Google AI.
The conclusion of the paper echoes the Google knows best argument. Google wants a methodological toolkit because that will keep other people busy. Google wants others to figure out “fair,” an approach similar to that of Sam Altman (OpenAI), who begs for regulation of a sector about which much is unknown.
The answer, according to the risk analysis, is “responsible innovation.” I would suggest that this paper, the television interviews, and the PR efforts to get the Google story in as many places as possible are designed to make the sluggish Google a player in the AI game.
Who will be fooled? Will Google catch up in this Silicon Valley venture invigorating hill climb? For me the paper with the footnotes is just part of Google’s PR and marketing effort. Your mileage may vary. May relevance be with you, gentle reader.
Stephen E Arnold, May 22, 2023
The Seven Wonders of the Google AI World
May 12, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I read the content at this Google Web page: https://ai.google/responsibility/principles/. I found it darned amazing. In fact, I thought of the original seven wonders of the world. Let’s see how Google’s statements compare with the down-through-time achievements of mere mortals from ancient times.
Let’s imagine two comedians explaining the difference between the two important sets of landmarks in human achievement. Here are the entertainers. These impressive individuals are a product of MidJourney’s smart software. The drawing illustrates the possibilities of artificial intelligence applied to regular intelligence and a certain big ad company’s capabilities. (That’s humor, gentle reader.)
Here are the seven wonders of the world according to the semi reliable National Geographic (I loved those old Nat Geos when I was in the seventh grade in 1956-1957!):
- The pyramids of Giza (tombs or alien machinery, take your pick)
- The hanging gardens of Babylon (a building with a flower show)
- The temple of Artemis (goddess of the hunt for maybe relevant advertising?)
- The statue of Zeus (the thunder god like Googzilla?)
- The mausoleum at Halicarnassus (a tomb)
- The colossus of Rhodes (Greek sun god who inspired Louis XIV and his just-so-hoity toity pals)
- The lighthouse of Alexandria (bright light which baffles some who doubt a fire can cast a bright light to ships at sea)
Now the seven wonders of the Google AI world:
- Socially beneficial AI (how does AI help those who are not advertisers?)
- Avoid creating or reinforcing unfair bias (What’s Dr. Timnit Gebru say about this?)
- Be built and tested for safety? (Will AI address videos on YouTube which provide links to cracked software; e.g. this one?)
- Be accountable to people? (Maybe people who call for Google customer support?)
- Incorporate privacy design principles? (Will the European Commission embrace the Google, not litigate it?)
- Uphold high standards of scientific excellence? (Interesting. What’s “high” mean? What’s scientific about threshold fiddling? What’s “excellence”?)
- AI will be made available for uses that “accord with these principles”. (Is this another “Don’t be evil” moment?)
Now let’s evaluate in broad strokes the two seven wonders. My initial impression is that the ancient seven wonders were fungible, not based on the future tense, the progressive tense, and breathing the exhaust fumes of OpenAI and others in the AI game. After a bit of thought, I am not sure Google’s management will be able to convince me that its personnel policies, its management of its high school science club, and its knee jerk reaction to the Microsoft Davos slam dunk are more than bloviating. Finally, the original seven wonders are either ruins or lost to all but a MidJourney reconstruction or a Bing output. Google is in the “careful” business. Translating: Google is Googley. OpenAI and ChatGPT are delivering blocks and stones for a real wonder of the world.
Net net: The ancient seven wonders represent something to which humans aspired or honored. The Google seven wonders of AI are, in my opinion, marketing via uncoordinated demos. However, Google will make more money than any of the ancient attractions did. The Google list may be perfect for the next Sundar and Prabhakar Comedy Show. Will it play in Paris? The last one there flopped.
Stephen E Arnold, May 12, 2023
The Big Show from the Google: Meh
May 11, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I ran a query on You.com, asking where I could view the Google Big Show* (no Tallulah Bankhead, just Sundar and friends). You replied as the show was airing on YouTube Live, “I don’t know where the program is.” Love that smart software, right? I clicked off because it was not as good as what Microsoft hit the slopes with in Davos. After Paris, I figured the Googlers would enlist its industry leading smart software and the really thrilled merged Google Brain and DeepMind wizards and roll out a killer program. I was thinking a digital Steve Jobs explaining killer innovations and an ending with “one more thing.” Alas, no reality distortion field, just me too, me too, me too.
A sad amateur vaudeville performer holds a tomato thrown at him when his song and dance act flopped. The art was created by the helpful and available MidJourney system. I wanted to use Bing, but I am not comfortable with the alleged surveillance characteristics of Credge.
How do I know my reaction is semi-valid? Today’s Murdochy Wall Street Journal ran the story about the Big Show on page three with the headline “Google Unveils Search Revamped for AI Era.” That’s like a vaudeville billing toward the bottom with the dog act and the phrase “exotic animals.” Page three for the company that ignores the fact that it is selling online advertising with a system that generates oodles of cash yet not enough to keep a full complement of staff? That’s amazing!
I listened — briefly — to the This Week in Google podcast. I can’t understand how a program about Google can beat up on the firm with such gentle punches. I recall the phrase “a lack of strategic vision.” That was it. Navigate away to Lawfare, a program which actually discusses topics with some intellectual body blows.
I spoke with one of my research team. That person’s comment was:
I think Sundar is hitting the applause button and nothing is happening.
I thought Google smart music could generate an applause track. Failing that, why not snip an applause track from one of Steve Jobs’s presentations? I like the one with the computer in the envelope or the roll out of the iPhone. I wonder if the AI infused Google search could not locate the video? You.com couldn’t locate the Google in out or off on program, but that is understandable. It was definitely a “don’t fail to miss it” event.
And where was Prabhakar Raghavan, the head of search? Where was Danny Sullivan, Google’s “we deliver relevant results” voice? Where was the charming head of DeepMind, an executive beloved by his team? Where was Dr. Jeff Dean, the inventor of Chubby and champion of recipes?
I know that OpenAI has been enjoying the Google wizard who explained that Google cannot keep up. See this allegedly accurate report called “Google and OpenAI Will Lose the AI Arms Race to Open-Source Engineers, a Googler Said in a Leaked Document.” Microsoft is probably high fiving and holding Teams meetings with happy faces on the Microsofties who are logged in.
* The Big Show was a big flop for NBC when it aired in the early 1950s. Ah, Tallulah and the endless recycling of Jimmy Durante, snippets of stage plays, and truly memorable performers whose talent is different from today’s rap and pop stars. Here’s a famous quote from Tallulah which may be appropriate for Google’s hurry and catch up approach to innovation:
“There’s less here than meets the eye.”
I love that Tallulah quote.
Stephen E Arnold, May 11, 2023
Google Wobblies: Are Falling Behind and Falling Off Buildings Linked?
May 11, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I read “Google and OpenAI Struggling to Keep Up with Open Source AI, Senior Engineer Warns.” I understand the Google falling behind because big technology outfits are not exactly known for their agile footwork or blazing speed. Let’s face it. Google is not a digital Vinícius Júnior of Real Madrid fame. But OpenAI? The write up states:
Open-source models are faster, more customizable, more private, and pound-for-pound more capable.
Open source? I thought open source had been sucked into the business strategies of Amazon AWS, the Google Cloud, and Microsoft Azure and GitHub. Apparently not.
I think the idea is not “open source,” however. Open source is a phrase which means in my view a heck of a lot of people fooling around with whatever free and low cost generative software is available. What happens when many cooks crowd into a big kitchen? The output is going to be voluminous with some lousy, some okay, and a few dishes spectacular. The more cooks, the greater the chances that something spectacular will emerge. Probability low, but a Bocuse d’Or-grade entrée may pop out of one’s Le Creuset.
Now what about the falling off buildings? I thought that was a Russian thing. If the New York Post’s reporting is spot on in its write up, there are some real-world consequences of Google’s falling behind.
Stephen E Arnold, May 11, 2023
Am I a Moron Because I Use You.com?
May 10, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
“Only Morons Use ChatGPT As a Substitute for Google” is a declarative statement. Three words strike me as important in the title of the Lifehacker (an online publication) article.
First, “morons.” A moron according to TheFreedictionary.com citation is: A city in Eastern Argentina, although it has the accented ó. On to the next definition, which is “A person who is considered foolish or stupid.” I think this is closer to the mark. I am not comfortable invoking the third definition because it aims a denotative punch at a person having a mental age of from seven to 12. I am 78, so let’s go with “foolish or stupid.” I am in that set.
Second, “ChatGPT.” I think the moniker can apply specifically to the for-fee service of OpenAI. It is possible that “ChatGPT” stands for an entire class of generative software. I tried to make a list of a who’s who in generative software and abandoned the task. Quite a few companies are in the game either directly like the aforementioned OpenAI or a bandwagon of companies joyfully tallied by ProductWatch.com and a few LinkedIn contributors. I think the idea is that ChatGPT outputs content which is either derivative (a characteristic of a machine eating other people’s words and images) or hallucinatory (a feature of software which can go off the rails and output like a digital Lewis Carroll galumphing around a park in which young females frolic).
Third, “Google.” My hunch is that the author is an expert online searcher who, like many open source intelligence professionals, relies on the advertising-supported Google search for objective, on-point answers. Oh, my, that’s quite a reliable source of information. I want to point out that Google focuses on revenue generation from advertising. Accuracy of results often has little connection to the user’s query. My interpretation of the word “Google” is that Google is good, probably better than “ChatGPT” in providing answers designed to meet the needs of users who may not read above the 9th grade level, struggle with derivatives, and cannot name the capital of Tasmania. (It is Hobart, by the way.)
I am on the fence with the word “only.” I am not comfortable with categorical affirmatives. Given the context of the article and the fact that Google is the Web search engine of choice (conscious or manipulated) for 90 percent of people in North America and Western Europe, I can understand why the field of view is narrow. An expert with regard to Google knows more and more about less and less.
Why is ChatGPT presented as the yin to Google’s yang? The write up says:
If you can’t fact-check the bot, the answers it gives are useless.
Okay, so people fact-check Google outputs. What does Google present with regard to Larry Page and his alleged interactions with Mr. Epstein, the late Mr. Epstein I should say. Run this query Google Page Epstein and this is what the system presented to me on May 6, 2023:
Are these stories accurate? Is CNBC the most relevant story about a lawsuit involving JPMorgan Chase and Jeffrey Epstein? What about the New York Post article “Virgin Islands Struggles to Subpoena Google’s Larry Page in Jeffrey Epstein Lawsuit against JPMorgan.” Or the tech insider outfit Gizmodo which offers “Google’s Larry Page Could Be Served in Jeffrey Epstein Case.” There are citations to the British newspaper Telegraph and one from Law & Crime. There is a Yahoo story and a Daily Beast article.
Now what does You.com offer me when I query Google Page Epstein?
There have been recent news articles reporting that the US Virgin Islands has issued subpoenas to Google co-founder Larry Page in connection with a lawsuit against JPMorgan Chase over the Jeffrey Epstein sex trafficking case 1. The lawsuit alleges that JPMorgan Chase facilitated Epstein’s trafficking scheme by continuously offering financial support to Epstein despite knowing about the illegal activities conducted at Little St. James, a private island owned by Epstein. The subpoenas seek documents and communications related to JPMorgan’s banking relationship with Epstein. Sergey Brin, another Google co-founder, has also reportedly been subpoenaed. Despite the subpoenas, it appears that the US Virgin Islands is struggling to serve Page with the legal documents. A recent ruling allows Page to be served with the subpoenas via his attorney in California. cnbc1
I would suggest that the Google citations provide a list, with no indication of which source is more or less highly regarded for accuracy. Google wants me to click on one or more of the links, ingest the content of each article, and then synthesize the items of information which strike me as on the money. You.com on the other hand provides me with the bare bones of the alleged involvement with a person who, like Lewis Carroll, may have had an interest in hanging out around a park on a sunny Saturday afternoon. Catching some rays and perhaps coming up with new ideas are interpretations of such an action by a lawyer hired to explain the late and much lamented Mr. Epstein.
So which is it? The harvesting of buckwheat the old-fashioned way or the pellet of information spat out in a second or two?
I think the idea is that morons are going to go the ChatGPT-like route. Wizards and authors of online “real” news articles want to swing that sickle and relive the thrill of the workers in Vincent van Gogh’s “The Harvest.”
The article says:
you can’t tell whether an AI-generated fact is true or not by the way the text looks; it’s designed to look plausible and correct. You have to fact-check it.
Does one need to fact-check what Google spits out? What about the people who follow Google Maps’s instructions and drive off a cliff? What about the links in Google Scholar to papers with non-reproducible results?
Here’s the conclusion to the write up:
So if you want to use ChatGPT to get ideas or brainstorm places to look for more information, fine. But don’t expect it to base its answers on reality. Even for something as innocuous as recommending books based on your favorites, it’s likely to make up books that don’t even exist.
I like that “don’t even exist.” Google Bard would never do that. Google management would never fire a smart software executive who points out that Google’s smart software is biased. Google would never provide search results that explain how to steal copyright protected software. Well, maybe just one time like this:
Oh, no. Wonky software would never ever do that but for Google’s results via YouTube for the query “Magix Vegas crack.” Now who is a moron? Perhaps an apologist for Google?
Stephen E Arnold, May 10, 2023
Microsoft Bing Causes the Google Lights to Flicker
May 10, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
The article “The Updated Bing Chat Leapfrogs ChatGPT in 6 Important New Ways” shakes the synapses of Googzilla. The Sundar & Prabhakar Comedy Show has been updating its scripts and practicing fancy dancing. Now the Redmond software, security, and strategy outfit has dragged fingernails across the chalk board in Google World. Annoying? Yes, indeed.
The write up does not mention Google directly, but the eerie light from the L.E.D.s illuminating the online ad vendor’s logo shine between the words in the article. Here’s an example:
opening up access to all.
None of this “to be” stuff from the GOOG. The Microsofties are making their version of ChatGPT available to “all.” (Obviously the categorical “all” is crazy marketing logic, but the main idea is “here and now,” not a progressive or future tense fantasy land.)
Also, the write up uses jargon to explain what’s new from the skilled professionals who crafted Windows 3.11. Microsoft has focused on the image generation feature and hooking more people who want smart software into the Edge world of a browser.
But between the spaces in the article, one message flickers. Microsoft is pushing product. Google is reorganizing, watching Dr. Jeff Dean with side glances, and running queries to find out what Dr. Hinton is saying about the online ad outfit’s sense of ethical behavior. In short, the Google is passive with synapses jarred by Microsoft marketing plus actual applications of smart software.
Fascinating. Is the flickering of the Google L.E.D.s a sign that power is failing or flawed electrical engineering is causing wobbles?
Stephen E Arnold, May 10, 2023
Vint Cerf: Explaining Why Google Is Scrambling
May 9, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
One thing OpenAI’s ChatGPT legions of cheerleaders cannot do is use Dr. Vint Cerf as the pointy end of a PR stick. I recall the first time I met Dr. Cerf. He was the keynote at an obscure conference about search and retrieval. Indeed he took off his jacket. He then unbuttoned his shirt to display a white T shirt with “I TCP on everything.” The crowd laughed — not a Jack Benny 30 second blast of ebullience — but a warm sound.
Midjourney output this illustration capturing Googzilla in a rocking chair in the midst of the snow storm after the Microsoft asteroid strike at Davos. Does the Google look aged? Does the Google look angry? Does the Google do anything but talk in the future and progressive tenses? Of course not. Google is not an old dinosaur. The Google is the king of online advertising which is the apex of technology.
I thought about that moment when I read “Vint Cerf on the Exhilarating Mix of Thrill and Hazard at the Frontiers of Tech: That’s Always an Exciting Place to Be — A Place Where Nobody’s Ever Been Before.’” The interview is a peculiar mix of ignoring the fact that the Google is elegantly managing wizards (some of whom then terminate themselves by allegedly falling or jumping off buildings), trapped in a conveyor belt of increasing expenses related to its plumbing and the maintenance thereof, and watching the fireworks ignited by the ChatGPT emulators. And Google is watching from a back alley, not the front row, as I write this. The Google may push its way into the prime viewing zone, but it is OpenAI and a handful of other folks who are assembling the sky rockets and aerial bombs, igniting the fuses, and capturing attention.
Yes, that’s an exciting place to be, but at the moment that is not where Google is. Google is doing big time public relations as outfits like Microsoft expand the zing of smart Word, Outlook, PowerPoint, and — believe it or not — Excel. Google is close enough to see the bright lights and hear the applause directed at lesser outfits. Google knows it is not the focus of attention. That’s where Vint Cerf comes into play on the occasion of winning an award for advancing technology (in general, not just online advertising).
Here are a handful of statements I noticed in the TechMeme “Featured Article” conversation with Dr. Cerf. Note, please, that my personal observations are in italic type in a color similar to that used for Alphabet’s Code Red emergency.
Snip 1: “Sergey has come back to do a little bit more on the artificial intelligence side of things…” Interesting. I interpret this as a college student getting a call to come back home to help out an ailing mom in what some health care workers call “sunset mode.” And Mr. Page? Maintaining a lower profile for non-Googley reasons? See the allegedly accurate report “Virgin Islands issued subpoena to Google co-founder Larry Page in lawsuit against JPMorgan Chase over Jeffrey Epstein.”
Snip 2: “a place where nobody’s ever been before.” I interpret this to mean that the Google is behind the eight ball, or between an agile athlete and a team composed of yesterday’s champions, or a helicopter pilot vaguely aware that the opposition is flying a nimble, smart rocket equipped fighter jet. Dinosaurs in rocking chairs watch the snow fall; they do not move to Nice, France.
Snip 3: “Be cautious about going too fast and trying to apply it without figuring out how to put guardrails in place.” How slow did Google go when it was inspired by the GoTo, Overture, and Yahoo ad model, settling for about $1 billion before the IPO? I don’t recall picking up the scent of ceramic brakes applied to the young, frisky, and devil-may-care baby Google. Step on the gas and go faster are the mantras I recall hearing.
Snip 4: “I will say that whenever something gets monetized, you should anticipate there will be emergent properties and possibly unexpected behavior, all driven by greed.” I wonder if the statement is a bit of a Freudian slip. Doesn’t the remark suggest that Google itself has manifested this behavior? It sure does to me, but I am no shrink. Who knew Google’s search-and-advertising business would become the poster reptile for surveillance capitalism?
Snip 5: “I think we are going to have to invest more in provenance and identity in order to evaluate the quality of that which we are experiencing.” Has Mr. Cerf again identified one of the conscious choices made by Google decades ago? That is, Google chose to ignore date and time stamps for when content was first spidered, when it was created, and when it was updated. What is the quality associated with the obfuscation of urls for certain content types, or with removing a user’s ability to display the “content” the user wants; for example, a query for a bound phrase for an entity like “Amanda Rosenberg”? I also wonder about advertisements which link to certain types of content; for example, health care products or apps with gotcha functionalities.
Several observations:
- Google’s attempt to explain that its going slow is a mature business method is amusing. I would recommend that the gag be included in the Sundar and Prabhakar comedy routine.
- The crafted phrases about guardrails and emergent behaviors do not explain why Google is talking and not doing. Furthermore, the talking is delivered not by users of a ChatGPT-infused application but by a person who is no expert in smart software and who, like me, has a few miles on his odometer.
- The remarks ignore the raw fact that Microsoft dominated headlines with its Davos rocket launch. Google’s search wizards were thinking about cost control, legal hassles, and the embarrassing personnel actions related to smart software and intra-company guerilla skirmishes.
Net net: Read the interview and ask, “Where’s Googzilla now?” My answer is, “Prepping for retirement?”
Stephen E Arnold, May 9, 2023
Google Manager Checklist: What an Amazing Approach from the Online Ad Outfit!
May 8, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid. I tagged this write up about the cited story as “News.” I wish I had a suitable term at my disposal because “news” does not capture the essence of the write up in my opinion.
Please, take a moment to read and savor “15 Years Ago, Google Determined the Best Bosses Share These 11 Traits. But 1 Behavior Is Still Missing.” If the title were not a fancy birthday cake, here’s the cherry on top in the form of a subtitle:
While Google’s approach to identifying its best managers is great, it ignores the fact a ‘new’ employee isn’t always new to the company.
Imagine. Google defines new in a way incomprehensible to an observer of outstanding, ethical, exemplary, high-performing commercial enterprises.
What are the traits of a super duper best boss at the Google? In fact, let’s look at each as the traits have been applied in recent Google management actions. You can judge for yourself how the wizards are manifesting “best boss” behavior.
Trait 1. My [Googley] manager gives me “actionable” feedback that helps me improve my performance. Based on my conversations with Google full time employees, communication is not exactly a core competency.
Trait 2. My [Googley] manager does not micro-manage. Based on my personal experience, management of any type is similar to the behavior of the snipe.
Trait 3. My [Googley] manager shows consideration to me as a person. Based on reading about the treatment of folks disagreeing with other Googlers (for instance, Dr. Timnit Gebru), consideration must be defined in a unique Alphabet which I don’t understand.
Trait 4. The actions of [a Googley] manager show that the full time equivalent values the perspective an employee brings to his/her team, even if it is different from his/her own. Wowza. See the Dr. Timnit Gebru reference above or consider the snapshots of Googlers protesting.
Trait 5. [The Googley manager] keeps the team focused on our priority results/deliverables. How about those killed projects, the weird dead end help pages, and the mysteries swirling around ad click fraud allegations?
Trait 6. [The Googley] manager regularly shares relevant information from his/her manager and senior leaders. Yeah, those Friday all-hands meetings now take place when?
Trait 7. [The Googley] manager has had a “meaningful discussion” with me about career development? In my view, terminating people via email when a senior manager gets a $200 million bonus is an outstanding way to stimulate a “meaningful discussion.”
Trait 8. [The Googley] manager communicates clear goals for our team. Absolutely. A good example is the existence of multiple chat apps, the cancellation of some moon shots like solving death, and the fertility of the company’s legal department.
Trait 9. [The Googley manager] has technical expertise to manage a professional. Of course, that’s why a Google professional admitted that the AI software was alive and needed a lawyer. The management move of genius was to terminate the wizard. Mental health counseling? Ho ho ho.
Trait 10. [A Googler] recommends a super duper Googley manager to friends? Certainly. That’s what Glassdoor reviews permit. Also, there are posts on social media and oodles of praise opportunities on LinkedIn. The “secret” photographs at an off site? Those are perfect for a Telegram group.
Trait 11. [A true Googler] sees only greatness in Googley managers. Period.
Trait 12. [A Googler] loves Googley managers who are Googley. There is no such thing as too much Googley goodness.
Trait 13. [A Googley manager] does not change, including such actions as overdosing on a yacht with a “special services contractor” or dodging legal documents from a representative of a court or comparable entity from a non US nation state.
This article appears to be a recycling of either a Google science fiction story or a glitch in the matrix.
What’s remarkable is that a well known publication presents the information as substantive. Amazing. I wonder if this “content” is a product of an early version of smart software.
Stephen E Arnold, May 8, 2023