Handwaving at Light Speed: Control Smart Software Now!

June 13, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Here is an easy one: Vox ponders, “What Will Stop AI from Flooding the Internet with Fake Images?” “Nothing” is the obvious answer. Nevertheless, tech companies are making a show of effort. Writer Shirin Ghaffary begins by recalling the recent kerfuffle caused by a realistic but fake photo of a Pentagon explosion. The spoof even affected the stock market, though briefly. We are poised to see many more AI-created images swamp the Internet, and they won’t all be so easily fact-checked. The article explains:

“This isn’t an entirely new problem. Online misinformation has existed since the dawn of the internet, and crudely photoshopped images fooled people long before generative AI became mainstream. But recently, tools like ChatGPT, DALL-E, Midjourney, and even new AI feature updates to Photoshop have supercharged the issue by making it easier and cheaper to create hyper realistic fake images, video, and text, at scale. Experts say we can expect to see more fake images like the Pentagon one, especially when they can cause political disruption. One report by Europol, the European Union’s law enforcement agency, predicted that as much as 90 percent of content on the internet could be created or edited by AI by 2026. Already, spammy news sites seemingly generated entirely by AI are popping up. The anti-misinformation platform NewsGuard started tracking such sites and found nearly three times as many as they did a few weeks prior.”

Several ideas are being explored. One is to tag AI-generated images with watermarks, metadata, and disclosure labels, but of course those can be altered or removed. Then there is the tool from Adobe that tracks whether images are edited by AI, tagging each with “content credentials” that supposedly stick with a file forever. Another is to approach from the other direction and stamp content that has been verified as real. The Coalition for Content Provenance and Authenticity (C2PA) has created a specification for this purpose.
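The fragility of these labeling schemes is easy to demonstrate. Below is a toy sketch, in standard-library Python, of the signed-provenance idea behind C2PA-style content credentials. The key and the HMAC scheme are illustrative assumptions on my part; the real C2PA specification embeds a certificate-signed manifest inside the file. Still, it shows both halves of the argument: a credential detects any edit to the bytes it covers, yet survives only as long as it travels with the file.

```python
import hashlib
import hmac

# Hypothetical signing key for illustration; a real system uses X.509
# certificates and a structured, embedded manifest, not a shared secret.
SECRET_KEY = b"publisher-signing-key"

def sign(image_bytes: bytes) -> str:
    # Produce a credential (an HMAC) over the exact image bytes.
    return hmac.new(SECRET_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify(image_bytes: bytes, credential: str) -> bool:
    # An image passes only if its bytes are byte-for-byte unchanged.
    return hmac.compare_digest(sign(image_bytes), credential)

original = b"\x89PNG...pretend image data..."
credential = sign(original)

print(verify(original, credential))              # True: untouched file checks out
print(verify(original + b"edited", credential))  # False: any change voids it
```

Note what the sketch cannot do: if a bad actor simply strips the credential, re-encodes the image, or never signs it in the first place, the verifier has nothing to check. That is exactly the weakness the article describes.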

But even if bad actors could not find ways around such measures, and they can, will audiences care? So far it looks like that is a big no. We already knew confirmation bias trumps facts for many. Watermarks and authenticity seals will hold little sway for those already inclined to take what their filter bubbles feed them at face value.

Cynthia Murrell, June 13, 2023

Bad News for Humanoids: AI Writes Better Pitch Decks But KFC Is Hiring

June 12, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Who would have envisioned a time when MBAs with undergraduate finance majors would be given an opportunity to work at a Kentucky Fried Chicken store? What was the slogan about fingers? I can’t remember.

“If You’re Thinking about Writing Your Own Pitch Decks, Think Again” provides some interesting information. I assume that today’s version of Henry Robinson Luce’s flagship magazine (not the Sports Illustrated swimsuit edition) would shatter the work life of those who create pitch decks. A “pitch deck” is a sonnet for our digital era. The phrase is often associated with a group of PowerPoint slides designed to get a funding source to write a check. That use case, however, is not the only place pitch decks come into play: Academics use them when trying to explain why a research project deserves funding. Ad agencies craft them to win client work or, in some cases, to convince a client not to fire the creative team. (Hello, Bud Light advisors, are you paying attention?) Real estate professionals create them to show to high net worth individuals. The objective is to close a deal for one of those bizarro vacant mansions shown by YouTube explorers. See, for instance, this white elephant lovingly presented by Dark Explorations. And there are more pitch deck applications. That’s why the phrase “Death by PowerPoint is real” is semi-poignant.

What if a pitch deck could be made better? What if pitch decks could be produced quickly? What if pitch decks could be graphically enhanced without fooling around with Fiverr.com artists in Armenia or the professionals with orange and blue hair?

The Fortune article states: The study [funded by Clarify Capital] revealed that machine-generated pitch decks consistently outperformed their human counterparts in terms of quality, thoroughness, and clarity. A staggering 80% of respondents found the GPT-4 decks compelling, while only 39% felt the same way about the human-created decks. [Emphasis added]

The cited article continues:

What’s more, GPT-4-presented ventures were twice as convincing to investors and business owners compared to those backed by human-made pitch decks. In an even more astonishing revelation, GPT-4 proved to be more successful in securing funding in the creative industries than in the tech industry, defying assumptions that machine learning could not match human creativity due to its lack of life experience and emotions. [Emphasis added]


“Would you like regular or crispy?” asks the MBA who wants to write pitch decks for a VC firm whose managing director his father knows. The image emerged from the murky math of MidJourney. Better, faster, and cheaper than a contractor, I might add.

Here’s a link to the KFC.com Web site. Smart software works better, faster, and cheaper. But it has a drawback: At this time, the KFC professional is needed to put those thighs in the fryer.

Stephen E Arnold, June 12, 2023


OpenAI: Someone, Maybe the UN? Take Action Before We Sign Up More Users

June 8, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I wrote about Sam AI-man’s use of language in my humanoid-written essay “Regulate Does Not Mean Regulate. Leave the EU Does Not Mean Leave the EU. Got That?” Now the vocabulary of Mr. AI-man has been enriched. For a recent example, please navigate to “OpenAI CEO Suggests International Agency Like UN’s Nuclear Watchdog Could Oversee AI.” I am loath to quote from the AP (once the “Associated Press”) due to the current entity’s policy related to citing its “real news.”

In the allegedly accurate “real news” story, I learned that Mr. AI-man has floated the idea for a United Nation’s agency to oversee global smart software. Now that is an idea worthy of a college dorm room discussion at Johns Hopkins University’s School of Advanced International Studies in always-intellectually sharp Washington, DC.


UN Representative #1: What exactly is artificial intelligence? UN Representative #2: How can we leverage it for fund raising? UN Representative #3: Does anyone have an idea how we could use smart software to influence our friends in certain difficult nation states? UN Representative #4: Is it time for lunch? Illustration crafted with imagination, love, and care by MidJourney.

The model, as I understand the “real news” story, is that the UN would be the guard dog for bad applications of smart software. Mr. AI-man’s example of UN effectiveness is the entity’s involvement in nuclear power. (How is that working out in Iran?) The write up also references the notion of guard rails. (Are there guard rails on other interesting technology; for example, Instagram’s somewhat relaxed approach to certain information related to youth?)

If we put the “make sure we come together as a globe” statement in the context of Sam AI-man’s other terminology, I wonder if PR and looking good is more important than generating traction and revenue from OpenAI’s innovations.

Of course not. The UN can do it. How about those UN peace keeping actions in Africa? Complete success from Mr. AI-man’s point of view.

Stephen E Arnold, June 8, 2023, 9:29 am US Eastern

The Google AI Way: EEAT or Video Injection?

June 5, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Over the weekend, I spotted a couple of signals from the Google marketing factory. The first is the cheerleading by that great champion of objective search results, Danny Sullivan who wrote with Chris Nelson “Rewarding High Quality Content, However, It Is Produced.” The authors pointed out that their essay is on behalf of the Google Search Quality team. This “team” speaks loudly to me when we run test queries on Google.com. Once in a while — not often, mind you — a relevant result will appear in the first page or two of results.

The subject of this essay by Messrs. Sullivan and Nelson is EEAT. My research team and I think that the fascinating acronym is pronounced like the word “eat” in the sense of ingesting gummy cannabinoids. (One hopes these are not prohibited compounds such as Delta-9 THC.) The idea is to pop something in your mouth and chew. As the compound (fact and fiction, GPT-generated content and factoids) dissolves and makes its way into one’s system, the psychoactive reaction is greater perceived dependence on the Google products. You may not agree, but that’s how I interpret the essay.

So what’s EEAT? I am not sure my team and I are getting with the Google script. The correct and Googley answer is:

Expertise, experience, authoritativeness, and trustworthiness.

The write up says:

Focusing on rewarding quality content has been core to Google since we began. It continues today, including through our ranking systems designed to surface reliable information and our helpful content system. The helpful content system was introduced last year to better ensure those searching get content created primarily for people, rather than for search ranking purposes.

I wonder if this text has been incorporated in the Sundar and Prabhakar Comedy Show? I would suggest that it replace the words about meeting users’ needs.

The meat of the synthetic turkey burger strikes me as:

it’s important to recognize that not all use of automation, including AI generation, is spam. Automation has long been used to generate helpful content, such as sports scores, weather forecasts, and transcripts. AI has the ability to power new levels of expression and creativity, and to serve as a critical tool to help people create great content for the web.

Synthetic or manufactured information, content objects, data, and other outputs are okay with us. We’re Google, of course, and we are equipped with expertise, experience, authoritativeness, and trustworthiness to decide what is quality and what is not.

I can almost visualize a T shirt with the phrase “EEAT It” silkscreened on the back with a cheerful Google logo on the front. Catchy. EEAT It. I want one. Perhaps a pop tune can be sampled and used to generate a synthetic song similar to Michael Jackson’s “Beat It”? Google AI would dodge the Weird Al Yankovic version of the 1983 hit. Google’s version might include the refrain:

Just EEAT it (EEAT it, EEAT it, EEAT it)
EEAT it (EEAT it, EEAT it, ha, ha, ha, ha)
EEAT it (EEAT it, EEAT it)
EEAT it (EEAT it, EEAT it)

If chowing down on this Google information is not to your liking, one can get with the Google program via a direct video injection. Google has been publicizing its free video training program from India to LinkedIn (a Microsoft property to give the social media service its due). Navigate to “Master Generative AI for Free from Google’s Courses.” The free, free courses are obviously advertisements for the Google way of smart software. Remember the key sequence: Expertise, experience, authoritativeness, and trustworthiness.

The courses are:

  1. Introduction to Generative AI
  2. Introduction to Large Language Models
  3. Attention Mechanism
  4. Transformer Models and BERT Model
  5. Introduction to Image Generation
  6. Create Image Captioning Models
  7. Encoder-Decoder Architecture
  8. Introduction to Responsible AI (remember the phrase “Expertise, experience, authoritativeness, and trustworthiness.”)
  9. Introduction to Generative AI Studio
  10. Generative AI Explorer (Vertex AI).

Why is Google offering free infomercials about its approach to AI?

The cited article answers the question this way:

By 2030, experts anticipate the generative AI market to reach an impressive $109.3 billion, signifying a promising outlook that is captivating investors across the board. [Emphasis added.]

How will Microsoft respond to the EEAT It positioning?

Just EEAT it (EEAT it, EEAT it, EEAT it)
EEAT it (EEAT it, EEAT it, ha, ha, ha, ha)
EEAT it (EEAT it, EEAT it)
EEAT it (EEAT it, EEAT it)

Stephen E Arnold, June 5, 2023

Trust in Google and Its Smart Software: What about the Humans at Google?

May 26, 2023

The buzz about Google’s injection of its smart software into its services is crowding out other, more interesting sounds. For example, navigate to “Texas Reaches $8 Million Settlement With Google Over Blatantly False Pixel Ads: Google Settled a Lawsuit Filed by AG Ken Paxton for Alleged False Advertisements for its Google Pixel 4 Smartphone.”

The write up reports:

A press release said Google was confronted with information that it had violated Texas laws against false advertising, but instead of taking steps to correct the issue, the release said, “Google continued its deceptive advertising, prioritizing profits over truthfulness.”

Google is pushing forward with its new mobile devices.

Let’s consider Google’s seven wonders of its software. You can find these at this link or summarized in my article “The Seven Wonders of the Google AI World.”

Let’s consider principle one: Be socially beneficial.

I am wondering how the allegedly deceptive advertising encourages me to trust Google.

Principle 4 is Be accountable to people.

My recollection is that Google works overtime to avoid being held accountable. The company relies upon its lawyers, its lobbyists, and its marketing to float above the annoyances of nation states. In fact, when greeted with substantive actions by the European Union, Google stalls and does not make available its latest and greatest services. The only accountability seems to be a legal action despite Google’s determined lawyerly push back. Avoiding accountability requires intermediaries because Google’s senior executives are busy working on principles.

Kindergarten behavior.


MidJourney captures the thrill of two young children squabbling over a piggy bank. I wonder if MidJourney knows what is going on in the newly merged Google smart software units.

Google approaches some problems like kids squabbling over a piggy bank.

Net net: The Texas fine makes clear that some do not trust Google. The “principles” are marketing hoo hah. But everyone loves Google, including me, my French bulldog, and billions of users worldwide. Everyone will want a new $1800 folding Pixel, which is just great based on the marketing information I have seen. It has so many features and works wonders.

Stephen E Arnold, May 26, 2023

The Return: IBM Watsonx!

May 26, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

It is no surprise IBM’s entry into the recent generative AI hubbub is a version of Watson, the company’s longtime algorithmic representative. TechSpot reports, “IBM Unleashes New AI Strategy with ‘watsonx’.” The new suite of tools was announced at the company’s recent Think conference. Note “watsonx” is not interchangeable with “Watson.” The older name with the capital letter and no trendy “x” is to be used for tools for individuals rather than company-wide software. That won’t be confusing at all. Writer Bob O’Donnell describes the three components of watsonx:

“Watsonx.ai is the core AI toolset through which companies can build, train, validate and deploy foundation models. Notably, companies can use it to create original models or customize existing foundation models. Watsonx.data is a datastore optimized for AI workloads that’s used to gather, organize, clean and feed data sources that go into those models. Finally, watsonx.governance is a tool for tracking the process of the model’s creation, providing an auditable record of all the data going into the model, how it’s created and more. Another part of IBM’s announcement was the debut of several of its own foundation models that can be used with the watsonx toolset or on their own. Not unlike others, IBM is initially unveiling a LLM-based offering for text-based applications, as well as a code generating and reviewing tool. In addition, the company previewed that it intends to create some additional industry and application-specific models, including ones for geospatial, chemistry, and IT operations applications among others. Critically, IBM said that companies can run these models in the cloud as a service, in a customer’s own data center, or in a hybrid model that leverages both. This is an interesting differentiation because, at the moment, most model providers are not yet letting organizations run their models on premises.”

Just to make things confusing, er, offer more options, each of these three applications will have three different model architectures. On top of that, each of these models will be available with varying numbers of parameters. The idea is not, as it might seem, to give companies decision paralysis but to provide flexibility in cost-performance tradeoffs and computing requirements. O’Donnell notes watsonx can also be used with open-source models, which is helpful since many organizations currently lack staff able to build their own models.

The article notes that, despite the announcement’s strategic timing, it is clear watsonx marks a change in IBM’s approach to software that has been in the works for years: generative AI will be front and center for the foreseeable future. Kinda like society as a whole, apparently.

Cynthia Murrell, May 26, 2023

AI Builders and the Illusions they Promote

May 24, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Why do AI firms insist on calling algorithmic mistakes “hallucinations” instead of errors, malfunctions, or glitches? The Guardian‘s Naomi Klein believes AI advocates chose this very human and mystical term to perpetuate a fundamental myth: AI will be humanity’s salvation. And that stance, she insists, demonstrates that “AI Machines Aren’t ‘Hallucinating.’ But their Makers Are.”

It is true that, in a society built around citizens’ well-being and the Earth’s preservation, AI could help end poverty, eliminate disease, reverse climate change, and facilitate more meaningful lives. But that is not the world we live in. Instead, our systems are set up to exploit both resources and people for the benefit of the rich and powerful. AI is poised to help them do that even more efficiently than before.

The article discusses four specific hallucinations possessing AI proponents. First, the assertion AI will solve the climate crisis when it is likely to do just the opposite. Then there’s the hope AI will help politicians and bureaucrats make wiser choices, which assumes those in power base their decisions on the greater good in the first place. Which leads to hallucination number three, that we can trust tech giants “not to break the world.” Those paying attention saw that was a false hope long ago. Finally is the belief AI will eliminate drudgery. Not all work, mind you, just the “boring” stuff. Some go so far as to paint a classic leftist ideal, one where humans work not to survive but to pursue our passions. That might pan out if we were living in a humanist, Star Trek-like society, Klein notes, but instead we are subjects of rapacious capitalism. Those who lose their jobs to algorithms have no societal net to catch them.

So why are the makers of AI promoting these illusions? Klein proposes:

“Here is one hypothesis: they are the powerful and enticing cover stories for what may turn out to be the largest and most consequential theft in human history. Because what we are witnessing is the wealthiest companies in history (Microsoft, Apple, Google, Meta, Amazon …) unilaterally seizing the sum total of human knowledge that exists in digital, scrapable form and walling it off inside proprietary products, many of which will take direct aim at the humans whose lifetime of labor trained the machines without giving permission or consent. This should not be legal. In the case of copyrighted material that we now know trained the models (including this newspaper), various lawsuits have been filed that will argue this was clearly illegal. Why, for instance, should a for-profit company be permitted to feed the paintings, drawings and photographs of living artists into a program like Stable Diffusion or Dall-E 2 so it can then be used to generate doppelganger versions of those very artists’ work, with the benefits flowing to everyone but the artists themselves?”

The answer, of course, is that this should not be permitted. But since innovation moves much faster than legislatures and courts, tech companies have been operating on a turbo-charged premise of seeking forgiveness instead of permission for years. (They call it “disruption,” Klein notes.) Operations like Google’s book-scanning project, Uber’s undermining the taxi industry, and Facebook’s mishandling of user data, just to name a few, got so far so fast regulators simply gave in. Now the same thing appears to be happening with generative AI and the data it feeds upon. But there is hope. A group of top experts on AI ethics specify measures regulators can take. Will they?

Cynthia Murrell, May 24, 2023

More Google PR: For an Outfit with an Interesting Past, Chattiness Is Now a Core Competency

May 23, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

How many speeches, public talks, and interviews did Sergey Brin, Larry Page, and Eric Schmidt do? To my recollection, not too many. And what about now? Larry Page is tough to find. Mr. Brin is sort of invisible. Eric Schmidt has backed off his claim that Qwant keeps him up at night. But Sundar Pichai, one half of the Sundar and Prabhakar Comedy Show, is quite visible: AI-everywhere keynote speeches, essays about smart software, and now an original “he wrote it himself” essay in the weird salmon-tinted newspaper The Financial Times. Yeah, pinkish.


Smart software provided me with an illustration of a fast talker pitching the future benefits of a new product. Yep, future probabilities. Rock solid. Thank you, MidJourney.

What’s with the spotlight on the current Google big wheel? Gentle reader, the visibility is one way Google is trying to advance its agenda. Before I offer my opinion about the Alphabet Google YouTube agenda, I want to highlight three statements in “Google CEO: Building AI Responsibly Is the Only Race That Really Matters.”

Statement from the Google essay #1

At Google, we’ve been bringing AI into our products and services for over a decade and making them available to our users. We care deeply about this. Yet, what matters even more is the race to build AI responsibly and make sure that as a society we get it right.

The theme is that Google has been doing smart software for a long time. Let’s not forget that the GOOG released the Transformer model as open source and sat on its Googley paws while “stuff happened” starting in 2018. Was that responsible? If so, what does Google mean when it uses the word “responsible” as it struggles to cope with the meme “Google is late to the game”? For example, Microsoft pulled off a global PR coup with its Davos smart software announcements. Google responded with the Paris demonstration of Bard, a hoot for many in the information retrieval killing field. That performance of the Sundar and Prabhakar Comedy Show flopped. Meanwhile, Microsoft pushed its “flavor” of AI into its enterprise software and cloud services. My experience is that for every big PR action, there is an equal or greater PR reaction. Google is trying to catch faster race cars with words, not a better, faster, and cheaper machine. The notion that Google “gets it right” means one thing to me: maintaining quasi-monopolistic control of its market and generating the ad revenue. Google has spent 25 years walking the same old Chihuahua in a dog park filled with younger, more agile canines. After 25 years of me-too products and flops like solving death, revenue is the ONLY thing that matters to stakeholders. The Sundar and Prabhakar routine is wearing thin.

Statement from the Google essay #2

We have many examples of putting those principles into practice…

The “principles” apply to Google’s AI implementation. But the word principles is an interesting one. Google is paying fines for ignoring laws and, arguably, its own principles. Google is under the watchful eye of regulators in the European Union due to its principles. China wanted Google to change, and Google then beavered away on a China-acceptable search system until the cat was let out of the bag. Google is into equality, a nice principle, which was implemented by firing AI researchers who complained about what Google AI was enabling. Google is not the outfit I would consider the optimal source of enlightenment about principles. High tech in general and Google in particular are viewed with increasing concern by regulators in US states and assorted nation states. Why? The Googley notion of principles is not what others understand the word to denote. In fact, some might say that Google operates in an unprincipled manner. Is that why companies like Foundem and regulatory officials point out behaviors which some might find predatory, mendacious, or illegal? Principles, yes, principles.

Statement from the Google essay #3

AI presents a once-in-a-generation opportunity for the world to reach its climate goals, build sustainable growth, maintain global competitiveness and much more.

Many years ago, I was in a meeting in DC, and the Donald Rumsfeld quote about information was making the rounds. Good appointees loved to cite this Donald. Here’s the quote from 2002:

There are known knowns; there are things we know we know.  We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns – the ones we don’t know we don’t know.

I would humbly suggest that smart software is chock full of known unknowns. But humans are not very good at predicting the future. When it comes to acting “responsibly” in the face of unknown unknowns, I dismiss those who dare to suggest that humans can predict the future in order to act in a responsible manner. Humans do not act responsibly with either predictability or reliability. My evidence is part of your mental furniture: Racism, discrimination, continuous war, criminality, prevarication, exaggeration, failure to regulate damaging technologies, ineffectual action against industrial polluters, etc. etc. etc.

I want to point out that the Google essay penned by one half of the Sundar and Prabhakar Comedy Show team could be funny if it were not a synopsis of the digital tragedy of the commons in which we live.

Stephen E Arnold, May 23, 2023

Please, World, Please, Regulate AI. Oh, Come Now, You Silly Goose

May 23, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

The ageing heart of capitalistic ethicality is beating in what some cardiologists might call arrhythmia. Beating fast and slow means that the coordinating mechanisms are out of whack. What’s the fix? Slam in an electronic gizmo for the humanoid. But what about a Silicon Valley with rhythm problems: Terminating employees, legal woes, annoying elected officials, and teen suicides? The outfits poised to make a Nile River of cash from smart software are doing the “begging” thing.


The Gen X whiz kid asks the smart software robot: “Will the losers fall for the call to regulate artificial intelligence?” The smart software robot responds, “Based on a vector and matrix analysis, there is a 75 to 90 percent probability that one or more nation states will pass laws to regulate us.” The Gen X whiz kid responds, “Great, I hate doing the begging and pleading thing.” The illustration was created by my old pal, MidJourney digital emulators.

“OpenAI Leaders Propose International Regulatory Body for AI” is a good summation of the “please, regulate AI even though it is something most people don’t understand and a technology whose downstream consequences are unknown” pitch. The write up states:

…AI isn’t going to manage itself…

We have some first-hand experience with Silicon Valley wizards who [a] allow social media technology to destroy the fabric of civil order, [b] control information frames so that hidden hands can cause irrelevant ads to bedevil people looking for a Thai restaurant, [c] ignore laws of different nation states because the fines are little more than the cost of sandwiches at an off-site meeting, and [d] engage in sporty behavior under the cover of attendance at industry conferences (why did a certain Google Glass marketing executive try to kill herself, and what about the yacht incident with a controlled substance and a subsequent death?).

What fascinated me was the idea that an international body should regulate smart software. The international bodies did a bang up job with the Covid speed bump. The United Nations is definitely on top of the situation in central Africa. And the International Criminal Court? Oh, right, the US is not a party to that organization.

What’s going on with these calls for regulation? In my opinion, there are three vectors for this line of begging, pleading, and whining.

  1. The begging can be cited as evidence that OpenAI and its fellow travelers tried to do the right thing. That’s an important psychological ploy so the company can go forward and create a Terminator version of Clippy with its partner Microsoft
  2. The disingenuous “aw, shucks” approach provides a lousy make up artist with an opportunity to put lipstick on a pig. The shoats and hoggets look a little better than some of the smart software champions. Dim light and a few drinks can transform a boarlet into something spectacular in the eyes of woozy venture capitalists
  3. Those pleading for regulation want to make sure their company has a fighting chance to dominate the burgeoning market for smart software methods. After all, the ageing Googzilla is limping forward with billions of users who will chow down on the deprecated food available in the Google cafeterias.

At least Marie Antoinette avoided the begging until she was beheaded. Apocryphal or not, she held on to “Let them eat mille-feuille.” But the blade fell anyway.

PS. There will allegedly be a ChatGPT 5.0. Isn’t that prudent?

Stephen E Arnold, May 23, 2023


The Seven Wonders of the Google AI World

May 12, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read the content at this Google Web page: https://ai.google/responsibility/principles/. I found it darned amazing. In fact, I thought of the original seven wonders of the world. Let’s see how Google’s statements compare with the down-through-time achievements of mere mortals from ancient times.

Let’s imagine two comedians explaining the difference between the two important sets of landmarks in human achievement. Here are the entertainers. These impressive individuals are a product of MidJourney’s smart software. The drawing illustrates the possibilities of artificial intelligence applied to regular intelligence and a certain big ad company’s capabilities. (That’s humor, gentle reader.)


Here are the seven wonders of the world according to the semi-reliable National Geographic (I loved those old Nat Geos when I was in the seventh grade in 1956-1957!):

  1. The pyramids of Giza (tombs or alien machinery, take your pick)
  2. The hanging gardens of Babylon (a building with a flower show)
  3. The temple of Artemis (goddess of the hunt, or maybe of relevant advertising?)
  4. The statue of Zeus (the thunder god like Googzilla?)
  5. The mausoleum at Halicarnassus (a tomb)
  6. The colossus of Rhodes (Greek sun god who inspired Louis XIV and his just-so hoity-toity pals)
  7. The lighthouse of Alexandria (bright light which baffles some who doubt a fire can cast a bright light to ships at sea)

Now the seven wonders of the Google AI world:

  1. Socially beneficial AI (how does AI help those who are not advertisers?)
  2. Avoid creating or reinforcing unfair bias (What’s Dr. Timnit Gebru say about this?)
  3. Be built and tested for safety? (Will AI address videos on YouTube which provide links to cracked software; e.g., this one?)
  4. Be accountable to people? (Maybe people who call for Google customer support?)
  5. Incorporate privacy design principles? (Will the European Commission embrace the Google, not litigate it?)
  6. Uphold high standards of scientific excellence? (Interesting. What’s “high” mean? What’s scientific about threshold fiddling? What’s “excellence”?)
  7. AI will be made available for uses that “accord with these principles.” (Is this another “Don’t be evil” moment?)

Now let’s evaluate in broad strokes the two sets of seven wonders. My initial impression is that the ancient seven wonders were tangible, not based on the future tense, the progressive tense, and breathing the exhaust fumes of OpenAI and others in the AI game. After a bit of thought, I am not sure Google’s management will be able to convince me that its personnel policies, its management of its high school science club, and its knee-jerk reaction to the Microsoft Davos slam dunk are more than bloviating. Finally, the original seven wonders are either ruins or lost to all but a MidJourney reconstruction or a Bing output. Google is in the “careful” business. Translating: Google is Googley. OpenAI and ChatGPT are delivering blocks and stones for a real wonder of the world.

Net net: The ancient seven wonders represent something to which humans aspired or honored. The Google seven wonders of AI are, in my opinion, marketing via uncoordinated demos. However, Google will make more money than any of the ancient attractions did. The Google list may be perfect for the next Sundar and Prabhakar Comedy Show. Will it play in Paris? The last one there flopped.

Stephen E Arnold, May 12, 2023
