AI Builders and the Illusions they Promote
May 24, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Why do AI firms insist on calling algorithmic mistakes “hallucinations” instead of errors, malfunctions, or glitches? The Guardian’s Naomi Klein believes AI advocates chose this very human and mystical term to perpetuate a fundamental myth: AI will be humanity’s salvation. And that stance, she insists, demonstrates that “AI Machines Aren’t ‘Hallucinating.’ But Their Makers Are.”
It is true that, in a society built around citizens’ well-being and the Earth’s preservation, AI could help end poverty, eliminate disease, reverse climate change, and facilitate more meaningful lives. But that is not the world we live in. Instead, our systems are set up to exploit both resources and people for the benefit of the rich and powerful. AI is poised to help them do that even more efficiently than before.
The article discusses four specific hallucinations possessing AI proponents. First, the assertion that AI will solve the climate crisis when it is likely to do just the opposite. Then there’s the hope that AI will help politicians and bureaucrats make wiser choices, which assumes those in power base their decisions on the greater good in the first place. Which leads to hallucination number three: that we can trust tech giants “not to break the world.” Those paying attention saw that was a false hope long ago. Finally, there is the belief that AI will eliminate drudgery. Not all work, mind you, just the “boring” stuff. Some go so far as to paint a classic leftist ideal, one where humans work not to survive but to pursue their passions. That might pan out if we were living in a humanist, Star Trek-like society, Klein notes, but instead we are subjects of rapacious capitalism. Those who lose their jobs to algorithms have no societal net to catch them.
So why are the makers of AI promoting these illusions? Klein proposes:
“Here is one hypothesis: they are the powerful and enticing cover stories for what may turn out to be the largest and most consequential theft in human history. Because what we are witnessing is the wealthiest companies in history (Microsoft, Apple, Google, Meta, Amazon …) unilaterally seizing the sum total of human knowledge that exists in digital, scrapable form and walling it off inside proprietary products, many of which will take direct aim at the humans whose lifetime of labor trained the machines without giving permission or consent. This should not be legal. In the case of copyrighted material that we now know trained the models (including this newspaper), various lawsuits have been filed that will argue this was clearly illegal. Why, for instance, should a for-profit company be permitted to feed the paintings, drawings and photographs of living artists into a program like Stable Diffusion or Dall-E 2 so it can then be used to generate doppelganger versions of those very artists’ work, with the benefits flowing to everyone but the artists themselves?”
The answer, of course, is that this should not be permitted. But since innovation moves much faster than legislatures and courts, tech companies have been operating for years on a turbo-charged premise of seeking forgiveness instead of permission. (They call it “disruption,” Klein notes.) Operations like Google’s book-scanning project, Uber’s undermining of the taxi industry, and Facebook’s mishandling of user data, just to name a few, got so far so fast that regulators simply gave in. Now the same thing appears to be happening with generative AI and the data it feeds upon. But there is hope. A group of top experts on AI ethics specifies measures regulators can take. Will they?
Cynthia Murrell, May 24, 2023
More Google PR: For an Outfit with an Interesting Past, Chattiness Is Now a Core Competency
May 23, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
How many speeches, public talks, and interviews did Sergey Brin, Larry Page, and Eric Schmidt do? To my recollection, not too many. And what about now? Larry Page is tough to find. Mr. Brin is sort of invisible. Eric Schmidt has backed off his claim that Qwant kept him up at night. But Sundar Pichai, one half of the Sundar and Prabhakar Comedy Show, is quite visible. AI everywhere keynote speeches, essays about smart software, and now an original “he wrote it himself” essay in the weird salmon-tinted newspaper The Financial Times. Yeah, pinkish.
Smart software provided me with an illustration of a fast talker pitching the future benefits of a new product. Yep, future probabilities. Rock solid. Thank you, MidJourney.
What’s with the spotlight on the current Google big wheel? Gentle reader, the visibility is one way Google is trying to advance its agenda. Before I offer my opinion about the Alphabet Google YouTube agenda, I want to highlight three statements in “Google CEO: Building AI Responsibly Is the Only Race That Really Matters.”
Statement from the Google essay #1
At Google, we’ve been bringing AI into our products and services for over a decade and making them available to our users. We care deeply about this. Yet, what matters even more is the race to build AI responsibly and make sure that as a society we get it right.
The theme is that Google has been doing smart software for a long time. Let’s not forget that the GOOG released the Transformer model as open source and then sat on its Googley paws while “stuff happened” starting in 2018. Was that responsible? If so, what does Google mean by “responsible” as it struggles to cope with the meme “Google is late to the game”? For example, Microsoft pulled off a global PR coup with its Davos smart software announcements. Google responded with the Paris demonstration of Bard, a hoot for many in the information retrieval killing field. That performance of the Sundar and Prabhakar Comedy Show flopped. Meanwhile, Microsoft pushed its “flavor” of AI into its enterprise software and cloud services. My experience is that for every big PR action, there is an equal or greater PR reaction. Google is trying to catch faster race cars with words, not a better, faster, and cheaper machine. The notion that Google “gets it right” means one thing to me: maintaining quasi-monopolistic control of its market and generating the ad revenue. After 25 years of walking the same old Chihuahua in a dog park with younger, more agile canines, and after 25 years of me-too moves and flops like solving death, revenue is the ONLY thing that matters to stakeholders. The Sundar and Prabhakar routine is wearing thin.
Statement from the Google essay #2
We have many examples of putting those principles into practice…
The “principles” apply to Google’s AI implementation. But the word principles is an interesting one. Google is paying fines for ignoring laws and its principles. Google is under the watchful eye of regulators in the European Union due to Google’s principles. Google exited China on principle, then beavered away on a China-acceptable search system until the cat was let out of the bag. Google is into equality, a nice principle, which was implemented by firing AI researchers who complained about what Google AI was enabling. Google is not the outfit I would consider the optimal source of enlightenment about principles. High tech in general and Google in particular are viewed with increasing concern by regulators in US states and assorted nation states. Why? The Googley notion of principles is not what others understand the word to denote. In fact, some might say that Google operates in an unprincipled manner. Is that why companies like Foundem and regulatory officials point out behaviors which some might find predatory, mendacious, or illegal? Principles, yes, principles.
Statement from the Google essay #3
AI presents a once-in-a-generation opportunity for the world to reach its climate goals, build sustainable growth, maintain global competitiveness and much more.
Many years ago, I was in a meeting in DC, and the Donald Rumsfeld quote about information was making the rounds. Good appointees loved to cite this Donald. Here’s the quote from 2002:
There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns – the ones we don’t know we don’t know.
I would humbly suggest that smart software is chock full of known unknowns. But humans are not very good at predicting the future. When it comes to acting “responsibly” in the face of unknown unknowns, I dismiss those who dare to suggest that humans can predict the future in order to act in a responsible manner. Humans do not act responsibly with either predictability or reliability. My evidence is part of your mental furniture: Racism, discrimination, continuous war, criminality, prevarication, exaggeration, failure to regulate damaging technologies, ineffectual action against industrial polluters, etc. etc. etc.
I want to point out that the Google essay penned by one half of the Sundar and Prabhakar Comedy Show team could be funny if it were not a synopsis of the digital tragedy of the commons in which we live.
Stephen E Arnold, May 23, 2023
Please, World, Please, Regulate AI. Oh, Come Now, You Silly Goose
May 23, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
The ageing heart of capitalistic ethicality is beating in what some cardiologists might call arrhythmia. Beating fast and slow means that the coordinating mechanisms are out of whack. What’s the fix? Slam in an electronic gizmo for the humanoid. But what about a Silicon Valley with rhythm problems: Terminating employees, legal woes, annoying elected officials, and teen suicides? The outfits poised to make a Nile River of cash from smart software are doing the “begging” thing.
The Gen X whiz kid asks the smart software robot: “Will the losers fall for the call to regulate artificial intelligence?” The smart software robot responds, “Based on a vector and matrix analysis, there is a 75 to 90 percent probability that one or more nation states will pass laws to regulate us.” The Gen X whiz kid responds, “Great, I hate doing the begging and pleading thing.” The illustration was created by my old pal, MidJourney digital emulators.
“OpenAI Leaders Propose International Regulatory Body for AI” is a good summation of the “please, regulate AI” pitch, even though AI is something most people don’t understand and a technology whose downstream consequences are unknown. The write up states:
…AI isn’t going to manage itself…
We have some first hand experience with Silicon Valley wizards who [a] allow social media technology to destroy the fabric of civil order, [b] control information frames so that hidden hands can cause irrelevant ads to bedevil people looking for a Thai restaurant, [c] ignore the laws of different nation states because the fines are little more than the cost of sandwiches at an off site meeting, and [d] engage in sporty behavior under the cover of attendance at industry conferences (why did a certain Google Glass marketing executive try to kill herself, and what about the yacht incident with a controlled substance and a subsequent death?).
What fascinated me was the idea that an international body should regulate smart software. The international bodies did a bang-up job with the Covid speed bump. The United Nations is definitely on top of the situation in central Africa. And the International Criminal Court? Oh, right, the US is not a party to that organization.
What’s going on with these calls for regulation? In my opinion, there are three vectors for this line of begging, pleading, and whining.
- The begging can be cited as evidence that OpenAI and its fellow travelers tried to do the right thing. That’s an important psychological ploy so the company can go forward and create a Terminator version of Clippy with its partner Microsoft
- The disingenuous “aw, shucks” approach provides a lousy make up artist with an opportunity to put lipstick on a pig. The shoats and hoggets look a little better than some of the smart software champions. Dim light and a few drinks can transform a boarlet into something spectacular in the eyes of woozy venture capitalists
- Those pleading for regulation want to make sure their company has a fighting chance to dominate the burgeoning market for smart software methods. After all, the ageing Googzilla is limping forward with billions of users who will chow down on the deprecated food available in the Google cafeterias.
At least Marie Antoinette avoided the begging until she was beheaded. Apocryphal or not, she held on to her “Let them eat mille-feuille” line. But the blade fell anyway.
PS. There allegedly will be ChatGPT 5.0. Isn’t that prudent?
Stephen E Arnold, May 23, 2023
The Seven Wonders of the Google AI World
May 12, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I read the content at this Google Web page: https://ai.google/responsibility/principles/. I found it darned amazing. In fact, I thought of the original seven wonders of the world. Let’s see how Google’s statements compare with the down-through-time achievements of mere mortals from ancient times.
Let’s imagine two comedians explaining the difference between the two important sets of landmarks in human achievement. Here are the entertainers. These impressive individuals are a product of MidJourney’s smart software. The drawing illustrates the possibilities of artificial intelligence applied to regular intelligence and a certain big ad company’s capabilities. (That’s humor, gentle reader.)
Here are the seven wonders of the world according to the semi-reliable National Geographic (I loved those old Nat Geos when I was in the seventh grade in 1956-1957!):
- The pyramids of Giza (tombs or alien machinery, take your pick)
- The hanging gardens of Babylon (a building with a flower show)
- The temple of Artemis (goddess of the hunt, maybe for relevant advertising?)
- The statue of Zeus (the thunder god like Googzilla?)
- The mausoleum at Halicarnassus (a tomb)
- The colossus of Rhodes (the Greek sun god who inspired Louis XIV and his just-so hoity-toity pals)
- The lighthouse of Alexandria (bright light which baffles some who doubt a fire can cast a bright light to ships at sea)
Now the seven wonders of the Google AI world:
- Socially beneficial AI (how does AI help those who are not advertisers?)
- Avoid creating or reinforcing unfair bias (What’s Dr. Timnit Gebru say about this?)
- Be built and tested for safety? (Will AI address videos on YouTube which provide links to cracked software; e.g., this one?)
- Be accountable to people? (Maybe people who call for Google customer support?)
- Incorporate privacy design principles? (Will the European Commission embrace the Google, not litigate it?)
- Uphold high standards of scientific excellence? (Interesting. What’s “high” mean? What’s scientific about threshold fiddling? What’s “excellence”?)
- AI will be made available for uses that “accord with these principles”. (Is this another “Don’t be evil” moment?)
Now let’s evaluate in broad strokes the two sets of seven wonders. My initial impression is that the ancient seven wonders were tangible, not cast in the future tense and the progressive tense while breathing the exhaust fumes of OpenAI and others in the AI game. After a bit of thought, I am not sure Google’s management will be able to convince me that its personnel policies, its management of its high school science club, and its knee jerk reaction to the Microsoft Davos slam dunk are more than bloviating. Finally, the original seven wonders are either ruins or lost to all but a MidJourney reconstruction or a Bing output. Google is in the “careful” business. Translating: Google is Googley. OpenAI and ChatGPT are delivering blocks and stones for a real wonder of the world.
Net net: The ancient seven wonders represent something to which humans aspired or honored. The Google seven wonders of AI are, in my opinion, marketing via uncoordinated demos. However, Google will make more money than any of the ancient attractions did. The Google list may be perfect for the next Sundar and Prabhakar Comedy Show. Will it play in Paris? The last one there flopped.
Stephen E Arnold, May 12, 2023
Vint Cerf: Explaining Why Google Is Scrambling
May 9, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
One thing OpenAI’s ChatGPT legions of cheerleaders cannot do is use Dr. Vint Cerf as the pointy end of a PR stick. I recall the first time I met Dr. Cerf. He was the keynote speaker at an obscure conference about search and retrieval. During his talk he took off his jacket. He then unbuttoned his shirt to display a white T shirt with “I TCP on everything.” The crowd laughed — not a Jack Benny 30 second blast of ebullience — but a warm sound.
Midjourney output this illustration capturing Googzilla in a rocking chair in the midst of the snow storm after the Microsoft asteroid strike at Davos. Does the Google look aged? Does the Google look angry? Does the Google do anything but talk in the future and progressive tenses? Of course not. Google is not an old dinosaur. The Google is the king of online advertising which is the apex of technology.
I thought about that moment when I read “Vint Cerf on the Exhilarating Mix of Thrill and Hazard at the Frontiers of Tech: ‘That’s Always an Exciting Place to Be — A Place Where Nobody’s Ever Been Before.’” The interview is a peculiar mix of ignoring the fact that the Google is elegantly managing wizards (some of whom then terminate themselves by allegedly falling or jumping off buildings), trapped in a conveyor belt of increasing expenses related to its plumbing and the maintenance thereof, and watching the fireworks ignited by the ChatGPT emulators. And Google is watching from a back alley, not the front row, as I write this. The Google may push its way into the prime viewing zone, but it is OpenAI and a handful of other folks who are assembling the sky rockets and aerial bombs, igniting the fuses, and capturing attention.
Yes, that’s an exciting place to be, but at the moment that is not where Google is. Google is doing big time public relations as outfits like Microsoft expand the zing of smart Word, Outlook, PowerPoint, and — believe it or not — Excel. Google is close enough to see the bright lights and hear the applause directed at lesser outfits. Google knows it is not the focus of attention. That’s where Vint Cerf comes into play, on the occasion of winning an award for advancing technology (in general, not just online advertising).
Here are a handful of statements I noticed in the TechMeme “Featured Article” conversation with Dr. Cerf. Note, please, that my personal observations are in italic type in a color similar to that used for Alphabet’s Code Red emergency.
Snip 1: “Sergey has come back to do a little bit more on the artificial intelligence side of things…” Interesting. I interpret this as a college student getting a call to come back home to help out an ailing mom in what some health care workers call “sunset mode.” And Mr. Page? Maintaining a lower profile for non-Googley reasons? See the allegedly accurate report “Virgin Islands issued subpoena to Google co-founder Larry Page in lawsuit against JPMorgan Chase over Jeffrey Epstein.”
Snip 2: “a place where nobody’s ever been before.” I interpret this to mean that the Google is behind the eight ball, or between an agile athlete and a team composed of yesterday’s champions, or a helicopter pilot vaguely aware that the opposition is flying a nimble, smart-rocket-equipped fighter jet. Dinosaurs in rocking chairs watch the snow fall; they do not move to Nice, France.
Snip 3: “Be cautious about going too fast and trying to apply it without figuring out how to put guardrails in place.” How slow did Google go when it was inspired by the GoTo, Overture, and Yahoo ad model, settling for about $1 billion before the IPO? I don’t recall picking up the scent of ceramic brakes applied to the young, frisky, and devil-may-care baby Google. Step on the gas and go faster are the mantras I recall hearing.
Snip 4: “I will say that whenever something gets monetized, you should anticipate there will be emergent properties and possibly unexpected behavior, all driven by greed.” I wonder if the statement is a bit of a Freudian slip. Doesn’t the remark suggest that Google itself has manifested this behavior? It sure does to me, but I am no shrink. Who knew Google’s search-and-advertising business would become the poster reptile for surveillance capitalism?
Snip 5: “I think we are going to have to invest more in provenance and identity in order to evaluate the quality of that which we are experiencing.” Has Mr. Cerf again identified one of the conscious choices made by Google decades ago; that is, ignoring date and time stamps for when content was first spidered, when it was created, and when it was updated? What is the quality associated with the obfuscation of urls for certain content types, and with removing a user’s ability to display the “content” the user wants; for example, a query for a bound phrase for an entity like “Amanda Rosenberg”? I also wonder about advertisements which link to certain types of content; for example, health care products or apps with gotcha functionalities.
Several observations:
- Google’s attempt to explain that going slow is a mature business method is amusing. I would recommend that the gag be included in the Sundar and Prabhakar comedy routine.
- The crafted phrases about guardrails and emergent behaviors do not explain why Google is talking and not doing. Furthermore, the talking is delivered not by users of a ChatGPT-infused application. The words flow from a person who is no expert in smart software and who has a few miles on his odometer, as do I.
- The remarks ignore the raw fact that Microsoft dominated headlines with its Davos rocket launch. Google’s search wizards were thinking about cost control, legal hassles, and the embarrassing personnel actions related to smart software and intra-company guerrilla skirmishes.
Net net: Read the interview and ask, “Where’s Googzilla now?” My answer is, “Prepping for retirement?”
Stephen E Arnold, May 9, 2023
Google Manager Checklist: What an Amazing Approach from the Online Ad Outfit!
May 8, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid. I tagged this write up about the cited story as “News.” I wish I had a suitable term at my disposal because “news” does not capture the essence of the write up in my opinion.
Please, take a moment to read and savor “15 Years Ago, Google Determined the Best Bosses Share These 11 Traits. But 1 Behavior Is Still Missing.” If the title were not a fancy enough birthday cake, here’s the cherry on top in the form of a subtitle:
While Google’s approach to identifying its best managers is great, it ignores the fact a ‘new’ employee isn’t always new to the company.
Imagine. Google defines new in a way incomprehensible to an observer of outstanding, ethical, exemplary, high-performing commercial enterprises.
What are the traits of a super duper best boss at the Google? Let’s look at how each trait has been applied in recent Google management actions. You can judge for yourself how the wizards are manifesting “best boss” behavior.
Trait 1. My [Googley] manager gives me “actionable” feedback that helps me improve my performance. Based on my conversations with Google full time employees, communication is not exactly a core competency.
Trait 2. My [Googley] manager does not micro-manage. Based on my personal experience, management of any type is similar to the behavior of the snipe.
Trait 3. My [Googley] manager shows consideration to me as a person. Based on reading about the treatment of folks disagreeing with other Googlers (for instance, Dr. Timnit Gebru), consideration must be defined in a unique Alphabet which I don’t understand.
Trait 4. The actions of [a Googley] manager show that the full time equivalent values the perspective an employee brings to his/her team, even if it is different from his/her own. Wowza. See the Dr. Timnit Gebru reference above or consider the snapshots of Googlers protesting.
Trait 5. [The Googley manager] keeps the team focused on our priority results/deliverables. How about those killed projects, the weird dead end help pages, and the mysteries swirling around ad click fraud allegations?
Trait 6. [The Googley] manager regularly shares relevant information from his/her manager and senior leaders. Yeah, those Friday all-hands meetings now take place when?
Trait 7. [The Googley] manager has had a “meaningful discussion” with me about career development? In my view, terminating people via email when a senior manager gets a $200 million bonus is an outstanding way to stimulate a “meaningful discussion.”
Trait 8. [The Googley] manager communicates clear goals for our team. Absolutely. A good example is the existence of multiple chat apps, cancelation of some moon shots like solving death, and the fertility of the company’s legal department.
Trait 9. [The Googley manager] has the technical expertise to manage a professional. Of course; that’s why a Google professional admitted that the AI software was alive and needed a lawyer. The management move of genius was to terminate the wizard. Mental health counseling? Ho ho ho.
Trait 10. [A Googler] recommends a super duper Googley manager to friends? Certainly. That’s what Glassdoor reviews permit. Also, there are posts on social media and oodles of praise opportunities on LinkedIn. The “secret” photographs at an off site? Those are perfect for a Telegram group.
Trait 11. [A true Googler] sees only greatness in Googley managers. Period.
Trait 12. [A Googler] loves Googley managers who are Googley. There is no such thing as too much Googley goodness.
Trait 13. [A Googley manager] does not change, including such actions as overdosing on a yacht with a “special services contractor” or dodging legal documents from a representative of a court or comparable entity from a non US nation state.
This article appears to be a recycling of either a Google science fiction story or a glitch in the matrix.
What’s remarkable is that a well known publication presents the information as substantive. Amazing. I wonder if this “content” is a product of an early version of smart software.
Stephen E Arnold, May 8, 2023
Google: A PR Special Operation Underway
April 25, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
US television on Sunday, April 16, 2023. Assorted blog posts and articles by Google friends like Inc. Magazine. Now the British Guardian newspaper hops on the bandwagon.
Navigate to “Google Chief Warns AI Could Be Harmful If Deployed Wrongly.” Let me highlight a couple of statements in the write up and then offer a handful of observations designed intentionally to cause some humanoids indigestion.
The article includes this statement:
Sundar Pichai also called for a global regulatory framework for AI similar to the treaties used to regulate nuclear arms use, as he warned that the competition to produce advances in the technology could lead to concerns about safety being pushed aside.
Also, this gem:
Pichai added that AI could cause harm through its ability to produce disinformation.
And one more:
Pichai admitted that Google did not fully understand how its AI technology produced certain responses.
Enough. I want to shift to the indigestion inducing portion of this short essay.
First, Google is in Code Red. Why? What were the search wizards under the guidance of Sundar and Prabhakar doing for the last year? Obviously not paying attention to the activity of OpenAI. Microsoft was, and it stole the show at the hoedown in Davos. Now Microsoft has made available a number of smart services designed to surf on its marketing tsunami and provide more reasons for enterprise customers to pay for smart Microsoft software. Neither the Guardian nor Sundar seems willing to talk about the reality of Google finding itself in the position of Alta Vista, Lycos, or WebCrawler in the late 1990s and early 2000s, when Google search delivered relevant results. At least Google did until it was inspired by the Yahoo, GoTo, and Overture approach to making cash. Back to the question: Why ignore the fact that Google is in Code Red? Why not ask one half of the Sundar and Prabhakar Comedy Team how they got aced by a non-headliner act at the smart software vaudeville show?
Second, I loved the “could cause harm.” What about the Android malware issue? What about the ads which link to malware in Google search results? What about the monopolization of online advertising and the pricing of ads beyond the reach of many small businesses? What about the “interesting” videos on YouTube? Google has its eye on the “could” of smart software without paying much attention to the here-and-now downsides of its current business. And disinformation? What is Google doing to scrub that content from its search results? My team identified a distributor of pornography operating in Detroit. That operator’s content can be located with a single Google query. If Google cannot identify porn, how will it flag smart software’s “disinformation”?
Finally, Google for decades has made a big deal of hiring the smartest people in the world. There was a teen whiz kid in Moscow. There was a kid in San Jose with a car service to get him from high school to the Mountain View campus. There is DeepMind with its “deep” team of wizards. Now this outfit with more than 100,000 (more or less) full time geniuses does not know how its software works. How will that type of software be managed by the estimable Google? The answer is, “It won’t.” Google’s ability to manage is evident in heart breaking stories about its human relations and personnel actions. There are smart Googlers who think the software is alive. Does this person have company-paid mental health care? There are small businesses, like an online automobile site, in ruins because a Googler downchecked the site years ago for an unknown reason. The Google is going to manage something well?
My hunch is that Google wants to make sure that it becomes the primary vendor of ready-to-roll training data and microwavable models. The fact that Amazon, Microsoft, and a group of Chinese outfits are on the same information superhighway illustrates one salient fact: The PR tsunami highlights Google’s lack of positive marketing action and the taffy-pull sluggishness of demos that sort of work.
What about the media which ask softball questions and present as substantive the recommendation that the world agree on AI rules? Perhaps Google should offer to take over the United Nations or form a World Court of AI Technology? Maybe Google should just be allowed to put other AI firms out of business and keep trying to build a monopoly based on software the company doesn’t appear to understand?
The good news is that Sundar did not reprise the Paris demonstration of Bard. That only cost the company a few billion when the smart software displayed its ignorance. That was comedic, and I think these PR special operations are fodder for the spring Sundar and Prabhakar tour of major cities.
The T shirts will not feature a dinosaur (Googzilla, I believe) freezing in a heavy snow storm. The art can be produced using Microsoft Bing’s functions too. And that will be quite convenient if Samsung ditches Google search for Bing and its integrated smart software. To add a bit of spice to Googzilla’s catered lunch is the rumor that Apple may just go Bing. Bye, bye billions, baby, bye bye.
If that happens, Google loses: [a] a pickup truck filled with cash, [b] even more technical credibility, and [c] maybe Googzilla’s left paw and a fang. Can Sundar and Prabhakar get applause when doing one-liners with one or two performers wearing casts and sporting a tooth gap?
Stephen E Arnold, April 25, 2023
AI That Sort of, Kind of Did Not Work: Useful Reminders
April 24, 2023
I read “Epic AI Fails. A List of Failed Machine Learning Projects.” My hunch is that a write up suggesting smart software may disappoint in some cases is not going to be a popular topic. I can hear the pooh-poohs now: “The examples used older technology.” And “Our system has been engineered to avoid that problem.” And “Our Large Language Model uses synthetic data which improves performance and the value of system outputs.” And “We have developed a meta-layer of AI which integrates multiple systems in order to produce a more useful response.”
Did I omit any promises other than “The check is in the mail” or “Our customer support team will respond to your call immediately, 24×7, and with an engineer, not a smart chatbot. Because humans, you know”?
The article from Analytics India, an online publication, provides some color on interesting flops; specifically:
- Amazon’s recruitment system. Think discrimination against females.
- Amazon’s Rekognition system and its identification of elected officials as criminals. Wait. Maybe those IDs were accurate?
- Covid 19 models. Moving on.
- Google and the diabetic retinopathy detection system. The marketing sounded fine. Candy for breakfast? Sure, why not?
- OpenAI’s Samantha. Not as crazy as Microsoft Tay but in the ballpark.
- Microsoft Tay. Yeah, famous self instruction in near real time.
- Sentient Investment AI Hedge Fund. Your retirement savings? There are jobs at Wal-Mart I think.
- Watson. Wow. Cognitive computing and Jeopardy.
The author takes a less light-hearted approach than I do. It is a useful list with helpful reminders that it is easier to write tweets and marketing collateral than to deliver smart software that lives up to the sales confections.
Stephen E Arnold, April 24, 2023
Google Panic: Just Three Reasons?
April 20, 2023
I read tweets, heard from colleagues, and received articles emailed to me about Googlers’ Bard disgruntlement. In my opinion, Laptop Magazine’s summary captures the gist of the alleged wizard annoyance: “Bard: 3 Reasons Why the Google Staff Hates the New ChatGPT Rival.”
I want to sidestep the word “hate”. With 100,000 or so employees, a hefty chunk of those living in Google Land will love Bard. Other Google staff won’t care because optimizing a cache function for servers in Brazil is a world apart. The result is a squeaky cart with more squeaky wheels than a steam engine built in 1840.
The three trigger points are, according to the write up:
- Google Bard outputs that are incorrect. The example provided is that Bard explains how to crash a plane when the Bard user wants to land the aircraft safely. So stupid.
- Google (not any employees mind you) is “indifferent to ethical concerns.” The example given references Dr. Timnit Gebru, my favorite Xoogler. I want to point out that Dr. Jeff Dean does not have her on this weekend’s dinner party guest list. So unethical.
- Bard is flawed because Google wizards had to work fast. This is the outcome of the sort of bad judgment which has been the hallmark of Google management for some time. Imagine. Work. Fast. Google. So haste makes waste.
I want to point out that there is one big factor influencing Googzilla’s mindless stumbling and snorting. The headline of the Laptop Magazine article presents the primum mobile. Note the buzzword/sign “ChatGPT.”
Google is used to being — well, Googzilla — and now an outfit which uses some Google goodness is in the headline. Furthermore, the headline calls attention to Google falling behind ChatGPT.
Googzilla is used to winning (whether in patent litigation or in front of incredibly brilliant Congressional questioners). Now even Laptop Magazine explains that Google is not getting the blue ribbon in this particular, over-hyped but widely followed race.
That’s the Code Red. That is why the Paris presentation was a hoot. That is why the Sundar and Prabhakar Comedy Tour generates chuckles when jokes include “will,” “working on,” “coming soon” as part of the routine.
Once again, I am posting this from the 2023 National Cyber Crime Conference. Not one of the examples we present is from Google, its systems, or its assorted innovation / acquisition units.
Googzilla for some is not in the race. And if the company is in the ChatGPT race, Googzilla has yet to cross the finish line.
That’s the Code Red. No PR, no Microsoft marketing tsunami, and no love for what may be a creature caught in a heavy winter storm. Cold, dark, and sluggish.
Stephen E Arnold, April 20, 2023
Sequoia on AI: Is The Essay an Example of What Informed Analysis Will Be in the Future?
April 10, 2023
I read an essay produced by the famed investment outfit Sequoia. Its title: “Generative AI: A Creative New World.” The write up contains buzzwords, charts, a modern version of a list, and this fascinating statement:
This piece was co-written with GPT-3. GPT-3 did not spit out the entire article, but it was responsible for combating writer’s block, generating entire sentences and paragraphs of text, and brainstorming different use cases for generative AI. Writing this piece with GPT-3 was a nice taste of the human-computer co-creation interactions that may form the new normal. We also generated illustrations for this post with Midjourney, which was SO MUCH FUN!
I loved the capital letters and the exclamation mark. Does smart software do that in its outputs?
I noted one other passage which caught my attention; to wit:
The best Generative AI companies can generate a sustainable competitive advantage by executing relentlessly on the flywheel between user engagement/data and model performance.
I understand “relentlessly.” To be honest, I don’t know about a “sustainable competitive advantage” or the user engagement/data and model performance flywheel. I do understand the Amazon flywheel, but my understanding is that it is slowing and maybe wobbling a bit.
My take on the passage (purple, as in purple prose) is that the “best” AI depends not on accuracy, lack of bias, or transparency. Success comes from users and how well the system performs. “Perform” is ambiguous. My hunch is that the Sequoia smart software (only version 3) and the super smart Sequoia humanoids were struggling to express why a venture firm is having “fun” with a bit of B-school teaming: money.
The word “money” does not appear in the write up. The phrase “economic value” appears twice in the introduction to the essay. No reference to “payoff.” No reference to “exit strategy.” No use of the word “financial.”
Interesting. Exactly how does a money-centric firm write about smart software without focusing on the financial upside in a quite interesting economic environment?
I know why smart software misses the boat. It is good with deterministic answers for which enough information is available to train the model to produce what seem like coherent answers. Maybe the smart software used by Sequoia was not clued in to the reports about Sequoia’s explanations of its winners and losers? Maybe the version of the smart software was not up to the tough subject on which the Sequoia MBAs sought guidance?
On the other hand, maybe Sequoia did not think through what should be included in a write up by a financial firm interested in generating big payoffs for itself and its partners.
Either way. The essay seems like a class project which is “good enough.” The creative new world lacks the force that through the green fuse drives the cash.
Stephen E Arnold, April 10, 2023