The Google AI Way: EEAT or Video Injection?

June 5, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Over the weekend, I spotted a couple of signals from the Google marketing factory. The first is the cheerleading by that great champion of objective search results, Danny Sullivan, who wrote with Chris Nelson “Rewarding High Quality Content, However It Is Produced.” The authors pointed out that their essay is on behalf of the Google Search Quality team. This “team” speaks loudly to me when we run test queries. Once in a while — not often, mind you — a relevant result will appear in the first page or two of results.

The subject of this essay by Messrs. Sullivan and Nelson is EEAT. My research team and I think that the fascinating acronym is pronounced like the word “eat” in the sense of ingesting gummy cannabinoids. (One hopes these are not prohibited compounds such as Delta-9 THC.) The idea is to pop something in your mouth and chew. As the compounds (fact and fiction, GPT-generated content and factoids) dissolve and make their way into one’s system, the psychoactive reaction is greater perceived dependence on the Google products. You may not agree, but that’s how I interpret the essay.

So what’s EEAT? I am not sure my team and I are getting with the Google script. The correct and Googley answer is:

Expertise, experience, authoritativeness, and trustworthiness.

The write up says:

Focusing on rewarding quality content has been core to Google since we began. It continues today, including through our ranking systems designed to surface reliable information and our helpful content system. The helpful content system was introduced last year to better ensure those searching get content created primarily for people, rather than for search ranking purposes.

I wonder if this text has been incorporated in the Sundar and Prabhakar Comedy Show? I would suggest that it replace the words about meeting users’ needs.

The meat of the synthetic turkey burger strikes me as:

it’s important to recognize that not all use of automation, including AI generation, is spam. Automation has long been used to generate helpful content, such as sports scores, weather forecasts, and transcripts. AI has the ability to power new levels of expression and creativity, and to serve as a critical tool to help people create great content for the web.

Synthetic or manufactured information, content objects, data, and other outputs are okay with us. We’re Google, of course, and we are equipped with expertise, experience, authoritativeness, and trustworthiness to decide what is quality and what is not.

I can almost visualize a T shirt with the phrase “EEAT It” silkscreened on the back with a cheerful Google logo on the front. Catchy. EEAT It. I want one. Perhaps a pop tune can be sampled and used to generate a synthetic song similar to Michael Jackson’s “Beat It”? Google AI would dodge the Weird Al Yankovic version of the 1983 hit. Google’s version might include the refrain:

Just EEAT it (EEAT it, EEAT it, EEAT it)
EEAT it (EEAT it, EEAT it, ha, ha, ha, ha)
EEAT it (EEAT it, EEAT it)
EEAT it (EEAT it, EEAT it)

If chowing down on this Google information is not to your liking, one can get with the Google program via a direct video injection. Google has been publicizing its free video training program from India to LinkedIn (a Microsoft property to give the social media service its due). Navigate to “Master Generative AI for Free from Google’s Courses.” The free, free courses are obviously advertisements for the Google way of smart software. Remember the key sequence: Expertise, experience, authoritativeness, and trustworthiness.

The courses are:

  1. Introduction to Generative AI
  2. Introduction to Large Language Models
  3. Attention Mechanism
  4. Transformer Models and BERT Model
  5. Introduction to Image Generation
  6. Create Image Captioning Models
  7. Encoder-Decoder Architecture
  8. Introduction to Responsible AI (remember the phrase “Expertise, experience, authoritativeness, and trustworthiness.”)
  9. Introduction to Generative AI Studio
  10. Generative AI Explorer (Vertex AI).
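A note for the curious: course 3, “Attention Mechanism,” boils down to a few lines of linear algebra. Here is a minimal sketch of scaled dot-product attention — my own illustration, not Google course material, and the function name and array shapes are assumptions:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Mix the value vectors V according to query/key similarity."""
    d_k = Q.shape[-1]
    # Similarity of each query to each key, scaled by sqrt(d_k)
    scores = Q @ K.T / np.sqrt(d_k)
    # Row-wise softmax: each query's weights over the keys sum to 1
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V

# With identical (all-zero) scores, every key gets equal weight, so the
# output is the plain average of the value vectors.
out = scaled_dot_product_attention(np.zeros((2, 4)), np.zeros((3, 4)), np.ones((3, 4)))
```

That is roughly the whole trick the “Transformer Models and BERT Model” course builds upon: weighted averaging, dressed up with learned projections.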

Why is Google offering free infomercials about its approach to AI?

The cited article answers the question this way:

By 2030, experts anticipate the generative AI market to reach an impressive $109.3 billion, signifying a promising outlook that is captivating investors across the board. [Emphasis added.]

How will Microsoft respond to the EEAT It positioning?

Just EEAT it (EEAT it, EEAT it, EEAT it)
EEAT it (EEAT it, EEAT it, ha, ha, ha, ha)
EEAT it (EEAT it, EEAT it)
EEAT it (EEAT it, EEAT it)

Stephen E Arnold, June 5, 2023

IBM Dino Baby Unhappy about Being Outed as Dinobaby in the Baby Wizards Sandbox

June 5, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I learned the term “dinobaby” reading blog posts about IBM workers who alleged Big Blue wanted younger workers. After thinking about the term, I embraced it. This blog post features an animated GIF of me dancing in my home office. I try to avoid the following: [a] Millennials, GenX, GenZ, and GenY super wizards; [b] former IBM workers who grouse about growing old and not liking a world without CICS; and [c] individuals with advanced degrees who want to talk with me about “smart software.” I have to admit that I have not been particularly successful in this effort in 2023: Conferences, Zooms, face-to-face meetings, lunches, yada yada. Either I am the most magnetic dinobaby in Harrod’s Creek, or these jejune world changers are clueless. (Maybe I should live in a cave on a mountain and accept acolytes?)

I read “Laid-Off 60-Year-Old Kyndryl Exec Says He Was Told IT Giant Wanted New Blood.” The write up includes a number of interesting statements. Here’s one:

IBM has been sued numerous times for age discrimination since 2018 when it was reported that company leadership carried out a plan to de-age its workforce – charges IBM has consistently denied, despite US Equal Employment Opportunity Commission (EEOC) findings to the contrary and confidential settlements.

Would IBM deny allegations of age discrimination? There are so many ways to terminate employees today. Why use the “you are old, so you are RIF’ed” ploy? In my opinion, it is an example of the lack of management finesse evident in many once high-flying companies today. I term the methods apparently in use at outfits like Twitter, Google, Facebook, and others “high school science club management methods” or H2S2M2. The acronym has not caught on, but I assume that someone with a subscription to ChatGPT will use AI to write a book on the subject soon.

The write up also includes this statement:

Liss-Riordan [an attorney representing the dinobaby] said she has also been told that an algorithm was used to identify those who would lose their jobs, but had no further details to provide with regard to that allegation.

Several observations are warranted:

  1. Discrimination is nothing new. Oldsters will be nuked. No question about it. Why? Old people like me (I am 78) make younger folks nervous because we belong in warehouses for the soon dead, not giving lectures to the leaders of today and tomorrow.
  2. Younger folks do not know what they do not know. Consequently, opportunities exist to [a] make fun of young wizards as I do in this blog Monday through Friday since 2008 and [b] charge these “masters of the universe” money to talk about that which is part of their great unknowing. Billing is rejuvenating.
  3. No one cares. One can sue. One can rage. One can find solace in chemicals, fast cars, or climbing a mountain. But it is important to keep one thing in mind: No one cares.

Net net: Does IBM practice dark arts to rid the firm of those who slow down Zoom meetings, raise questions to which no one knows the answers, and are burdens on benefits plans? My hunch is that IBM-type outfits will do what’s necessary to keep the campground free of old timers. Who wouldn’t?

Stephen E Arnold, June 5, 2023

Smart Software and a Re-Run of Paradise Lost, Joined in Progress

June 5, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I picked up two not-so-faint and definitely not-encrypted signals about the goals of Google and Microsoft for smart software.


Which company will emerge as the one true force in smart software? MidJourney did not pick a winner, just what the top dog will wear to the next quarterly sales report delivered via a neutral Zoom call.

Navigate to the visually thrilling podcast hosted by Lex Fridman, an American MIT wizard. He interviewed the voluble Google wizard Chris Lattner. The subject was the Future of Programming and AI. After listening to the interview, I concluded the following:

  1. Google wants to define and control the “meta” framework for artificial intelligence. What’s this mean? Think a digital version of a happy family: Vishnu, Brahma, and Shiva, among others.
  2. Google has an advantage when it comes to doing smart software because its humanoids have learned what works, what to do, and how to do certain things.
  3. The complexity of Google’s multi-pronged smart software methods, its home-brew programming languages, and its proprietary hardware are nothing more than innovation. Simple? Innovation means no one outside of the Google AI cortex can possibly duplicate, understand, or outperform Googzilla.
  4. Google has money and will continue to spend it to deliver the Vishnu, Brahma, and Shiva experience in my interpretation of programmer speak.

How’s that sound? I assume that the fruit fly start ups are going to ignore the vibrations emitted from Chris Lattner, the voluble Chris Lattner, I want to emphasize. But like those short-lived Diptera, one can derive some insights from the efforts of less well-informed, more dependent, and less well-funded lab experiments.

Okay, that’s signal number one.

Signal number two appears in “Microsoft Signs Deal for AI Computing Power with Nvidia-Backed CoreWeave That Could Be Worth Billions.” This “real news” story asserts:

… Microsoft has agreed to spend potentially billions of dollars over multiple years on cloud computing infrastructure from startup CoreWeave …

CoreWeave? Yep, the company “sells simplified access to Nvidia’s graphics processing units, or GPUs, which are considered the best available on the market for running AI models.” By the way, Nvidia has invested in this outfit. What’s this signal mean to me? Here are the flickering lines on my oscilloscope:

  1. Microsoft wants to put smart software into its widely used enterprise applications in order to make smart software the one true religion. The idea, of course, is to pass the collection plate and convert dead dog software into racing greyhounds.
  2. Microsoft has an advantage because when an MBA does calculations and probably letters to significant others, Excel is the go-to solution. Some people create art in Excel and then sell it. MBAs just get spreadsheet fever and do leveraged buyouts. With smart software the Microsoft alleged monopoly does the billing.
  3. The wild and wonderful world of Azure is going to become smarter because… well, Microsoft does smart things. Imagine the demand for training courses, certification for Microsoft engineers, and how-to YouTube videos.
  4. Microsoft has money and will continue to achieve compulsory attendance at the Church of Redmond.

Net net: Two titans will compete. I am thinking about the battle between John Milton’s protagonist and antagonist in “Paradise Lost.” This will be fun to watch whilst eating chicken korma.

Stephen E Arnold, June 5, 2023

AI Allegedly Doing Its Thing: Let Fake News Fly Free

June 2, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I cannot resist this short item about smart software. Stories have appeared in my newsfeeds about AI which allegedly concluded that, to complete its mission, it had to remove an obstacle — the human operator.

A number of news sources reported as actual factual that a human operator of a smart weapon system was annoying the smart software. The smart software decided that the humanoid was causing a mission to fail. The smart software concluded that the humanoid had to be killed so the smart software could go kill more humanoids.

I collect examples of thought provoking fake news. It’s my new hobby and provides useful material for my “OSINT Blindspots” lectures. (The next big one will be in October 2023 after I return from Europe in late September 2023.)

However, the write up “US Air Force Denies AI Drone Attacked Operator in Test” presents a different angle on the story about evil software. I noted this passage from an informed observer:

Steve Wright, professor of aerospace engineering at the University of the West of England, and an expert in unmanned aerial vehicles, told me jokingly that he had “always been a fan of the Terminator films” when I asked him for his thoughts about the story. “In aircraft control computers there are two things to worry about: ‘do the right thing’ and ‘don’t do the wrong thing’, so this is a classic example of the second,” he said. “In reality we address this by always including a second computer that has been programmed using old-style techniques, and this can pull the plug as soon as the first one does something strange.”

Now the question: Did smart software do the right thing? Did it go after its humanoid partner? In a hypothetical discussion, perhaps. In real life, nope. My hunch is that the US Air Force anecdote is anchored in confusing “what if” thinking with reality. That’s easy for someone younger than me to do, in my experience.

I want to point out that in August 2020, a Heron Systems AI (based on Google technology) killed an Air Force “top gun” in a simulated aerial dog fight. How long did it take the smart software to neutralize the annoying humanoid? About a minute, maybe a minute and a half. See this Janes news item for more information.

My view is that smart software has some interesting capabilities. One scenario of interest to me is a hacked AI-infused weapons system. Pondering this idea opens the door to some intriguing “what if” scenarios.

Stephen E Arnold, June 2, 2023

The TikTok Addition: Has a Fortune Magazine Editor Been Up Swiping?

June 2, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

A colleague called my attention to the Fortune Magazine article boldly titled “Gen Z Teens Are So Unruly in Malls, Fed by Their TikTok Addition, That a Growing Number Are Requiring Chaperones and Supervision.” A few items I noted in this headline:

  1. Malls. I thought those were dead horses. There is a YouTube channel devoted to these real estate gems; for example, Urbex Offlimits and the videos of a creator named Brandon Moretti.
  2. Gen Z. I just looked up how old Gen Zs are. According to Mental Floss, these denizens of empty spaces are 11 to 26 years old. Hmmm. For what purpose are 21 to 25 year olds hanging out in empty malls? (Could that be a story for Fortune?)
  3. The “TikTok addition” gaffe. My spelling checker helps me out too. But I learned from a super-duper former Fortune writer whom I shall label Peter V, “Fortune is meticulous about its thorough research, its fact checking, and its proofreading.” Well, super-duper Peter, not in 2023. Please explain in 25 words or less this image from the write up:


I did notice several factoids and comments in the write up; to wit:

Interesting item one:

“On Friday and Saturdays, it’s just been a madhouse,” she said on a recent Friday night while shopping for Mother’s Day gifts with Jorden and her 4-month-old daughter.

A madhouse, according to the Cambridge Dictionary, is “a place of great disorder and confusion.” I think of malls as places of no people. But Fortune does the great fact checking, according to the attestation of Peter V.

Interesting item two:

Even a Chik-fil-A franchise in southeast Pennsylvania caused a stir with its social media post earlier this year that announced its policy of banning kids under 16 without an adult chaperone, citing unruly behavior.

I thought Chick-fil-A was a saintly, reserved institution with restaurants emulating Medieval monasteries. No longer. No wonder so many cars line up for a chickwich.

Interesting item three:

Cohen [a mall expert] said the restrictions will help boost spending among adults who must now accompany kids but they will also likely reduce the number of trips by teens, so the overall financial impact is unclear.

What these snippets tell me is that there is precious little factual data in the write up. The headline-leading “TikTok addiction” is not the guts of the write up. Maybe the idea that kids who can’t go to the mall will play online games? I think it is more likely that kids and those lost little 21 to 25 year olds will find other interesting things to do with their time.

But malls? Kids can prowl Snapchat and TikTok, but those 21 to 25 year olds? Drink or other chemical activities?

Hey, Fortune, let’s get addicted to the Peter V. baloney: “Fortune is meticulous about its thorough research, its fact checking, and its proofreading.”

Stephen E Arnold, June 2, 2023

The Prospects for Prompt Engineers: English Majors, Rejoice

June 2, 2023

I noted some good news for English majors. I suppose some history and political science types may be twitching with constrained jubilation too.

Navigate to “9 in 10 Companies That Are Currently Hiring Want Workers with ChatGPT Experience.” The write up contains quite a number of factoids. (Are these statistically valid? I believe everything I read on the Internet with statistical data, don’t you?) Well, true or not, I found these statements interesting:

  • 91 percent of the companies in a human resourcey survey want workers with ChatGPT experience. What does “experience” mean? The write up does not deign to elucidate. The question about how to optimize phishing email counts.
  • 75 percent of those surveyed will fire people who are declared redundant, annoying, or too expensive to pay.
  • 30 percent of those in the sample say that hiring a humanoid with ChatGPT experience is “urgent.” Why not root around in the reason for this urgency? Oh, right. That’s research work.
  • 66 percent of the respondents perceive that ChatGPT will deliver a “competitive edge.” What about the link to cost reduction? Oh, I forgot. That’s additional research work.

What work functions will get to say “Hello” to smart software? The report summary identifies seven job categories:

  • Software engineering
  • Customer service
  • Human resources
  • Marketing
  • Data entry
  • Sales
  • Finance

For parents with a 22 to 40 year old working in one of these jobs, my suggestion is to get that spare bedroom ready. The progeny may return to the nest.

Stephen E Arnold, June 2, 2023

The Intellectual Titanic and Sister Ships at Sea: Ethical Ballast and Flawed GPS Aboard

June 1, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “Researchers Retract Over 300 COVID-Era Medical Papers For Scientific Errors, Ethical Concerns.” I ignored the information about the papers allegedly hand crafted with cow outputs. I did note this statement, however:

Gunnveig Grødeland, a senior researcher at the Institute of Immunology at the University of Oslo, said many withdrawn papers during COVID-19 have been the result of ethical shortcomings.

Interesting. I recall hearing that the president of a big time university in Palo Alto was into techno sci-fi paper writing. I also think that the estimable, Jeffrey Epstein-affiliated MIT published some super positive information about the new IBM smart WatsonX. (Doesn’t IBM invest big bucks in MIT?) I also have memory tickles about inventors and entrepreneurs begging to be regulated.


Bad, distorted values chase kids down the Lane of Life. Imagine. These young people and their sense of right and wrong will be trampled by darker motives. Image produced by MidJourney, of course.

What this write up about peer reviewed and allegedly scholarly papers says to me is that ethical research and mental gyroscopes no longer align with what I think of as the common good.

Academics lie. Business executives lie. Entrepreneurs lie. Now what’s that mean for the quaint idea that individuals can be trusted? I can hear the response now:

Senator, thank you, for that question. I will provide the information you desire after this hearing.

I suppose one can look forward to made up information as the increasingly lame smart software marketing demonstrations thrill the uninformed.

Is it possible for flawed ethical concepts and an out-of-kilter moral GPS system to terminate certain types of behavior?

Here’s the answer: Sure looks like it. That’s an interesting gain of function.

Stephen E Arnold, June 1, 2023

Does Jugalbandi Mean De-casting?

June 1, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “Microsoft Launches Jugalbandi: An AI Powered Platform and Chatbot to Bridge Information Gap in India.” India connotes for me spicy food and the caste system. My understanding of this term comes from Wikipedia which says:

The caste system in India is the paradigmatic ethnographic instance of social classification based on castes. It has its origins in ancient India, and was transformed by various ruling elites in medieval, early-modern, and modern India, especially the Mughal Empire and the British Raj.

Like me, the Wikipedia can be incorrect, one-sided, and PR-ish.

The Jugalbandi write up contains some interesting statements which I interpret against my understanding of the Wikipedia article about castes in India. Here’s one example:

Microsoft, a pioneer in the artificial intelligence (AI) field, has made significant strides with its latest venture, Jugalbandi. This generative AI-driven platform and chatbot aim to revolutionize access to information about government initiatives and public programs in India. With nearly 22 official languages and considerable linguistic variations in the country, Jugalbandi seeks to address the challenges in disseminating information effectively.

I wonder if Microsoft’s pioneering smart software (based largely upon the less than open and often confused OpenAI technology) will do much to “address the challenges in disseminating information effectively.”

Wikipedia points out:

In 1948, negative discrimination on the basis of caste was banned by law and further enshrined in the Indian constitution in 1950; however, the system continues to be practiced in parts of India. There are 3,000 castes and 25,000 sub-castes in India, each related to a specific occupation.

If law and every day behavior have not mitigated castes and how these form fences in India and India outposts in London and Silicon Valley, exactly what will Microsoft (the pioneer in AI) accomplish?

My hunch the write up enshrines:

  1. The image of Microsoft as the champion of knocking down barriers and allowing communication to flow. (Why does smart Bing block certain queries?)
  2. Microsoft’s self-professed role as a “pioneer” in smart software. I think a pioneer in clever Davos messaging is closer to the truth.
  3. The write up’s word salad about something that may be quite difficult to accomplish in many social, business, and cultural settings.

Who created the concept of untouchables?

Stephen E Arnold, June 1, 2023

The Death of Digital News Upstarts: Woohoo!

May 31, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

When I worked at a “real” newspaper, I learned that obituaries were cooked; that is, the newspaper reports of death were written whilst the subject was still alive and presumably buying advertisements in the paper or at least subscribing. The Guardian ran its obituary for upstart digital news outfits. No, the opinion writer did not include the word “woohoo.” I just picked up the Hopf vibration with my spidey sense.

The essay is “Vice Is Going Bankrupt, BuzzFeed News Is Dead. What Does It Mean?” I don’t want to be picky, but these are two separate entities and each, as far as I know, is still breathing. There may be life support equipment involved, but neither entity’s online presence delivers a cheerful 404 message… yet.

The essay sails forward with no interest in my online check or the fact that two separate entities do not in my mind comprise an “it”. I am not going to differentiate because if the Guardian sees two identical Lego blocks, that’s the reality.

The write up says via a quote from the “brilliant” Clay Shirky, author and meme generator:

“This is what real revolutions are like. The old stuff gets broken faster than the new stuff is put in its place,” Shirky wrote. And, amid the ensuing chaos, it’s extremely hard to see what’s going next: “The importance of any given experiment isn’t apparent at the moment it appears, big changes stall, small changes spread.”

There are some bright spots; for example, ProPublica, the Gray Lady of Wordle fame, the Bezos news service, and most important, The Guardian, “owned by the Scott Trust and sustained by its endowment” and supported by readers who roll over for the jazzy pop ups in blue and yellow saying, “Give cash.”

Too bad the write up did not include the woohoo.

Stephen E Arnold, May 31, 2023

MBAs and Advisors, Is Your Nuclear Winter Looming?

May 31, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Big time, blue chip consulting firms are quite competent in three areas: [1] Sparking divorces because those who want money marry the firm, [2] Ingesting legions of MBAs to advise clients who are well compensated but insecure, and [3] Finding ways to cut costs and pay the highly productive partners more money. I assume some will disagree, but that’s what kills horses at the Kentucky Derby.

I read but did not think twice about believing every single word in “Amid Mass Layoff, Accenture Identifies 300+ Generative AI Use Cases.” My first mental reaction was this question, “Just 300?”

The write up points out:

Accenture has identified five broad areas where generative AI can be implemented – advising, creating, automation, software creation and protection. The company is also working with a multinational bank to use generative AI to route large numbers of post-trade processing emails and draft responses with recommended actions to reduce manual effort and risk.

With fast food joints replacing humans with robots, what’s an MBA to do? The article does not identify employment opportunities for those who will be replaced with zeros and ones. As a former blue chip worker bee, I would suggest to anyone laboring in the intellectual vineyards to consider a career as an influencer.

Who will get hired and make big bucks at the Bains, the BCGs, the Boozers, and the McKinseys, et al? Here’s my short list:

  1. MBAs or people admitted to a fancy university with super connections. If one’s mom or dad was an ambassador or frequents parties drooled upon by Town & Country Magazine, you may be in the game.
  2. Individuals even if they worked at low rent used car lots who can sell big buck projects. The future at the blue chips is bright indeed.
  3. Individuals who are pals with highly regarded partners.

What about the quality of the work produced by the smart software? That is a good question. The idea is to make the client happy and sell follow on work. The initial work product may be reviewed by a partner or maybe not. The proof of the pudding is in the revenue, costs, and profit figures.

That influencer opportunity looks pretty good, doesn’t it? I think snow is falling. Grab a Ralph Lauren Purple Label before you fire up that video camera.

Stephen E Arnold, May 31, 2023
