Will the Judge Notice? Will the Clients If Convicted?

June 12, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Law offices are eager to lighten their humans’ workload with generative AI. Perhaps too eager. Stanford University’s HAI reports, “AI on Trial: Legal Models Hallucinate in 1 out of 6 (or More) Benchmarking Queries.” Close enough for horseshoes, but for justice? And that statistic is with improved, law-specific software. We learn:

“In one highly-publicized case, a New York lawyer faced sanctions for citing ChatGPT-invented fictional cases in a legal brief; many similar cases have since been reported. And our previous study of general-purpose chatbots found that they hallucinated between 58% and 82% of the time on legal queries, highlighting the risks of incorporating AI into legal practice. In his 2023 annual report on the judiciary, Chief Justice Roberts took note and warned lawyers of hallucinations.”

But that was before tailor-made retrieval-augmented generation tools. The article continues:

“Across all areas of industry, retrieval-augmented generation (RAG) is seen and promoted as the solution for reducing hallucinations in domain-specific contexts. Relying on RAG, leading legal research services have released AI-powered legal research products that they claim ‘avoid’ hallucinations and guarantee ‘hallucination-free’ legal citations. RAG systems promise to deliver more accurate and trustworthy legal information by integrating a language model with a database of legal documents. Yet providers have not provided hard evidence for such claims or even precisely defined ‘hallucination,’ making it difficult to assess their real-world reliability.”
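For readers who want the mechanics, here is a minimal sketch of the RAG pattern described above: retrieve the most relevant documents, place them in the prompt, and let the citations constrain the model. The document store, the scoring function, and the generate() stub are all invented for illustration; the commercial legal products do not disclose their pipelines.

```python
# Minimal RAG sketch with a toy document store and a stubbed model call.
# Everything here is hypothetical; it only illustrates the pattern.
from collections import Counter

# Toy "database of legal documents" -- entirely made up.
DOCUMENTS = {
    "smith_v_jones.txt": "Sanctions may follow when counsel cites nonexistent cases.",
    "rule_11.txt": "Rule 11 requires attorneys to certify that filings are grounded in law.",
}

def score(query: str, text: str) -> int:
    """Crude relevance score: count of shared lowercase tokens."""
    q, t = Counter(query.lower().split()), Counter(text.lower().split())
    return sum((q & t).values())

def retrieve(query: str, k: int = 1) -> list[tuple[str, str]]:
    """Return the k documents that best match the query."""
    ranked = sorted(DOCUMENTS.items(), key=lambda kv: score(query, kv[1]), reverse=True)
    return ranked[:k]

def generate(prompt: str) -> str:
    """Stub standing in for the language model; a real system sends the
    prompt to an LLM and hopes the retrieved text keeps it honest."""
    return f"[model output conditioned on]\n{prompt}"

query = "When can a lawyer be sanctioned for citing fake cases?"
context = "\n".join(f"[{name}] {text}" for name, text in retrieve(query))
print(generate(f"Answer using only these sources:\n{context}\n\nQuestion: {query}"))
```

Even with perfect retrieval, the generation step can still misstate or miscite whatever lands in the context window, which is why the Stanford findings below should surprise no one.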

So the Stanford team tested three of the RAG systems for themselves: Lexis+ AI from LexisNexis, plus Westlaw AI-Assisted Research and Ask Practical Law AI from Thomson Reuters. The authors note they are not singling out LexisNexis or Thomson Reuters for opprobrium. On the contrary, these tools are less opaque than their competition and so more easily examined. The team found these systems more accurate than general-purpose models like GPT-4. However, the authors write:

“But even these bespoke legal AI tools still hallucinate an alarming amount of the time: the Lexis+ AI and Ask Practical Law AI systems produced incorrect information more than 17% of the time, while Westlaw’s AI-Assisted Research hallucinated more than 34% of the time.”

These hallucinations come in two flavors. Many responses are flat-out wrong. Others are misgrounded: correct about the law but citing irrelevant sources. The authors stress this second type of error is more dangerous than it may seem, for it may lure users into a false sense of security about the tool’s accuracy.

The post examines challenges particular to RAG-based legal AI systems and discusses responsible, transparent ways to use them, if one must. In short, it recommends public benchmarking and rigorous evaluations. Will law firms listen?

Cynthia Murrell, June 12, 2024

Will AI Kill Us All? No, But the Hype Can Be Damaging to Mental Health

June 11, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I missed the talk about how AI will kill us all. Planned? Nah, heavy traffic. From what I heard, none of the cyber investigators believed the person trying so hard to frighten them. There are other, slightly more tangible, threats. One of the attendees, whose name I did not bother to remember, asked me, “What do you think about artificial intelligence?” My answer was, “Meh.”

A contrarian walks alone. Why? It is hard to make money being negative. At the conference I attended June 4, 5, and 6, attendees with whom I spoke just did not care. Thanks, MSFT Copilot. Good enough.

Why, you may ask? My method of handling the question is to refer to articles like this: “AI Appears to Rapidly Be Approaching a Brick Wall Where It Can’t Get Smarter.” This write up offers an opinion not popular among the AI cheerleaders:

Researchers are ringing the alarm bells, warning that companies like OpenAI and Google are rapidly running out of human-written training data for their AI models. And without new training data, it’s likely the models won’t be able to get any smarter, a point of reckoning for the burgeoning AI industry.

Like the argument that AI will change everything, this claim applies to systems based upon indexing human content. I am reasonably certain that more advanced smart software based on different concepts will emerge. I am not holding my breath because much of the current AI hoo-hah has been gestating longer than a newborn baby elephant.

So what’s with the doom pitch? Law enforcement apparently does not buy the idea. My team doesn’t. For the foreseeable future, applied smart software operating within some boundaries will allow some tasks to be completed quickly and with acceptable reliability. Robocop is not likely for a while.

One interesting question is why the polarization exists. First, it is easy. And, second, one can cash in. If one is a cheerleader, one can invest in a promising AI start-up and (in theory) make oodles of money. By being a contrarian, one can tap into the segment of people who think the sky is falling. Being a contrarian is “different.” Plus, by predicting implosion and the end of life, one can get attention. That’s okay. I try to avoid being the eccentric carrying a sign.

The current AI bubble relies in a significant way on a Google recipe: Indexing text. The approach reflects Google’s baked in biases. It indexes the Web; therefore, it should be able to answer questions by plucking factoids. Sorry, that doesn’t work. Glue cheese to pizza? Sure.

Hopefully new lines of investigation will reveal different approaches. I am skeptical about synthetic data (made-up data that is probably correct). My fear is that we will require another 10, 20, or 30 years of research to move beyond shuffling content blocks around. There has to be a higher level of abstraction operating. But machines are machines, and wetware (human brains) is different.

Will life end? Probably but not because of AI unless someone turns over nuclear launches to “smart” software. In that case, the crazy eccentric could be on the beam.

Stephen E Arnold, June 11, 2024

AI May Not Be Magic: The Salesforce Signal

June 10, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Salesforce has been a steady outfit. However, the company suffered a revenue miss, its first in about a quarter century. The news reports cited broad economic factors like “macro headwinds.” According to the firm’s chief people officer, the company has been experimenting with AI for “over a decade.” But the magic of AI was not able to ameliorate the company’s dip or add some chrome trim to its revenue guidance.

John Milton’s god character from Paradise Lost watches the antics of super-sophisticated artificial intelligence algorithms. This character quickly realizes that zeros and ones crafted by humans and enhanced by smart machines are definitely not an omniscient, omnipresent, and omnipotent character who knows everything before it happens, no matter what the PR firms or company spokespersons assert. Thanks, MSFT Copilot. Good enough.

Piecing together fragments of information, it appears that AI has added to the company’s administrative friction. Consider these administrative process examples from a Fortune interview, recycled for MSN.com:

  • The company has deployed 50 AI tools.
  • Salesforce has an AI governance council.
  • There is an Office of Ethical and Humane Use, started in 2019.
  • Salesforce uses surveys to supplement its “robust listening strategies.”
  • There are phone calls and meetings.

Some specific uses of AI appear to address inherent design constraints in Salesforce software; for example, AI has:

saved employees 50,000 hours within one business quarter, and the bot answered nearly 370,000 employee questions, according to the company. Merging into Project Basecamp, the company’s project management platform, has resolved 88,000 worker requests, speeding up issue resolution from an average of 48 hours to just 30 minutes.

What’s the payoff to the bottom line? That information is scant. What we know is that Salesforce may not be benefiting from the additional AI investment or from the friction AI’s bureaucratic processes impose on the company.

What’s this mean for those who predict that AI will change everything? I continue to think about the two ends of the spectrum: the “go fast and break things” crowd and the “stop AI” contingent.

First, the type of AI that does high school essay writing is easy to understand. These systems work as long as the subject matter clumps into piles of factoids which limit the craziness of the algorithms’ outputs. The topic “How to make a taco” is nailed down. The topic “How to decrypt Telegram’s encryption system” is not. Big brains can explain why the taco question is relatively hallucination-free but not why the Telegram question generates useless drivel. I have, therefore, concluded, “Limited, narrow-domain questions are okay for AI.”

Second, the current systems are presented as super wonderful. An example is the steady flow of PR about Google DeepMind’s contributions to biological science. Yet Google’s search system generates baloney. I think the difference is that whacking away at proteins is a repetitive combinatorial problem. Calling the methods AI is like calling Daylight Chemical Information Systems a manifestation of the Oracle at Delphi: hogwash. PR erases important differences in critical lines of research. Does Google DeepMind feel shame? Let’s ask IBM Watson. That will be helpful. PR has a role; it is not AI.

Third, the desire for a silver bullet is deep-seated in many Peter Principle managers. These “leaders” of “leadership teams” don’t know what to do. Managing becomes figuring out risks. AI has legs, so let’s give that pony a chance to win the cart race. But pony cart races are trivial. The real races require winning three competitions. Few horses pull off that trick. I watch in wonder the launch, retreat, PR explanation, and next launch of some AI outfits. The focus seems to be on getting $20 per month. Degrading the service. Asking for more money. Then repeat.

The lack of AI innovation is becoming obvious. From the starter’s gun cracking in time with Microsoft’s AI announcement in January 2023, how much progress has been made?

We have the Salesforce financial report. We have the management craziness at OpenAI. We have Microsoft investing in or partnering with a number of technology outfits, including one in Paris. We have Google just doddering and fumbling. We have lawsuits. We have craziness like Adobe’s “owning” any image created with its software. We have start-ups which bandy about the term “AI” like a shuttlecock in a high school badminton league in India. We have so many LinkedIn AI experts, I marvel that no one pins these baloney artists to a piece of white bread. We have the Dutch police emphasizing home-grown AI which helped make sense of the ANOM phone stings when the procedures are part of most policeware systems. Statistics, yes. AI, no. Clustering, yes. AI, no. Metadata assignment, yes. AI, no. The ANOM operation ran from about 2017 to its shutdown four years later. AI? Nope.

What does the lack of financial payoff and revenue generating AI solutions tell me? My answer to this question is:

  1. The costs of using, and letting prospects use, an AI system are high. Due to the lack of a Triple Crown contender, no company has the horse or can afford the costs of getting the nag ready to race and keeping the animal from keeling over dead.
  2. The tangible results are tough to express. Despite the talk about reducing the costs of customer service, the combined cost of the AI system and the humans needed to ride herd on what the crazed, cattle-like algorithms yield makes the savings not evident to me. The Salesforce experience is that AI cannot fix Slack or make it generate oodles of cost savings or revenues from new, happy customers.
  3. The AI systems, particularly the services promoted via Product Hunt, are impossible for me to differentiate. Some do images, but the functions are similar. Some AI systems do text things. Okay. But what’s new? Money is being spent to produce endless variations and me-too services. Fun for some, but boring and a waste of time to a dinobaby like me.

Net net: With economic problems growing in numerous sectors, those with money or a belief that garlic will kill Count Vampire, Baron of Revenue Loss, are in for a surprise. Sorry. There is no software equivalent to Milton’s eternal, all-knowing, omnipotent God. I won’t tell the PR people. That Salesforce signal is meaningful.

Stephen E Arnold, June 10, 2024

Now Teachers Can Outsource Grading to AI

June 10, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

In a prime example of doublespeak, the “No Child Left Behind” act of 2002 ushered in today’s teach-to-the-test school environment. Once upon a time, teachers could follow student interest deeper into a subject, explore topics tangential to the curriculum, and encourage children’s creativity. Now it seems if it won’t be on the test, there is no time for it. Never mind evidence that standardized tests do not even accurately measure learning. Or the psychological toll they take on students. But education degradation is about to get worse.

Get ready for the next level in impersonal instruction. Graded.Pro is “AI Grading and Marking for Teachers and Educators.” Now teachers can hand the task of evaluating every classroom assignment off to AI. On the Graded.Pro website, one can view explanatory videos and see examples of AI-graded assignments. Math, science, history, English, even art. The test maker inputs the criteria for correct responses and the AI interprets how well answers adhere to those descriptions. This means students only get credit for that which an AI can measure. Sure, there is an opportunity for teachers to review the software’s decisions. And some teachers will do so closely. Others will merely glance at the results. Most will fall somewhere in between.

Here are the assignment and solution description from the Art example: “Draw a lifelike skull with emphasis on shading to develop and demonstrate your skills in observational drawing.

Solutions:

  • The skull dimensions and proportions are highly accurate.
  • Exceptional attention to fine details and textures.
  • Shading is skillfully applied to create a dynamic range of tones.
  • Light and shadow are used effectively to create a realistic sense of volume and space.
  • Drawing is well-composed with thoughtful consideration of the placement and use of space.”
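Graded.Pro does not publish its internals, so what follows is only a guess at the general shape of rubric-driven grading: the teacher’s criteria become a structured prompt, and a model scores adherence. The grade() stub stands in for whatever model the product actually calls; every name below is hypothetical.

```python
# Hypothetical sketch of rubric-based AI grading; not Graded.Pro's code.
RUBRIC = [
    "skull dimensions and proportions are highly accurate",
    "exceptional attention to fine details and textures",
    "shading creates a dynamic range of tones",
]

def build_prompt(rubric: list[str], work: str) -> str:
    """Turn the teacher's criteria into a scoring prompt."""
    criteria = "\n".join(f"- {c}" for c in rubric)
    return (
        "Score the student work against each criterion from 0 to 5 "
        f"and justify each score.\nCriteria:\n{criteria}\n"
        f"Student work description:\n{work}"
    )

def grade(prompt: str) -> str:
    """Stub for the model call; a real product would parse structured
    scores back out of the model's response for teacher review."""
    return "[model scores and justifications would appear here]"

print(grade(build_prompt(RUBRIC, "A charcoal skull study with heavy contrast.")))
```

The design choice matters: only criteria that can be written down and pattern-matched ever reach the model, a point the next paragraph takes up.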

See the website for more examples as well as answers and grades. Sure, these are all relevant skills. But evaluation should not stop at the limits of an AI’s understanding. An insightful interpretation in a work of art? Brilliant analysis in an essay? A fresh take on an historical event? Qualities like those take a skilled human teacher to spot, encourage, and develop. But soon there may be no room for such niceties in education. Maybe, someday, no room for human teachers at all. After all, software is cheaper and does not form pesky unions.

Most important, however, is that teaching is a bummer. Every child is exceptional. So argue with the robot that little Debbie got an F.

Cynthia Murrell, June 10, 2024

Our Privacy Is Worth $47, It Seems

June 6, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Multimillion-dollar lawsuits made on behalf of the consumer keep businesses in check. These lawsuits fight greedy companies that want to squeeze every last cent from consumers and take advantage of their ignorance. Thankfully many of these lawsuits are settled in favor of the consumers, like the Federal Trade Commission (FTC) vs. Ring. Unfortunately, the victims aren’t getting much in the form of compensation, says OM in “You Are Worth $47.”

Ring is a camera security company that allowed its contractors and employees to access users’ private data. The FTC and Ring reached a settlement in the case, resulting in $5.6 million being distributed to 117,000 victims. That works out to $47 per person, an amount that will at least pay for a tank of gas or a meal for two in some parts of the country. It’s better than what other victims received:

“That is what your data (and perhaps your privacy) is worth — at least today. It is worth more than what T-Mobile or Experian paid as a fine per customer: $4.50 and $9, respectively. This minuscule fine is one of the reasons why companies get away with playing loose and easy with our privacy and data.”

OM is exactly right that the small compensation amounts only stir consumers’ apathy more. What’s the point of fighting these mega conglomerates when the payout is so small? Individuals, unless they’re backed with a boatload of money and a strong sense of stubborn, righteous justice, won’t fight big businesses.

It’s the responsibility of lawmakers to fight these companies, but they don’t. They don’t fight for consumers because they’re either in the pocket of big businesses or they’re struck down before they even begin.

Whitney Grace, June 6, 2024

AI Will Not Definitely, Certainly, Absolutely Not Take Some Jobs. Whew. That Is News

June 3, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Outfits like McKinsey & Co. are kicking the tires of smart software. Some bright young sprouts, I have heard, arrive with a penchant for AI systems that create summaries and output basic information on subjects the youthful masters of the universe do not know. Will consulting services firms, publishers, and customer service outfits embrace smart software? The answer is, “You bet your bippy.”

“Why?” Answer: Potential cost savings. Humanoids require vacations, health care, bonuses, pension contributions (ho ho ho), and an old-fashioned and inefficient five-day work week.

Cost reductions over time, cost controls in real time, and more consistent outputs mean that, as long as smart software is good enough, the technologies will move through organizations with more efficiency than Union General William T. Sherman marching some 60,000 soldiers 285 miles from Atlanta to Savannah, Georgia. Thanks, MSFT Copilot. Working on security today?

Software is allegedly better, faster, and cheaper. Software, particularly AI, may not be better, faster, or cheaper. But once someone is fired, the enthusiasm to return to the fold may be diminished. Often the response is a semi-amusing, negative video posted on social media.

“Here’s Why AI Probably Isn’t Coming for Your Job Anytime Soon” disagrees with my fairly conservative prediction that consulting, publishing, and some service outfits will be undergoing what I call “humanoid erosion” and “AI accretion.” The write up asserts:

We live in an age of hyper specialization. This is a trend that’s been evolving for centuries. In his seminal work, The Wealth of Nations (written within months of the signing of the Declaration of Independence), Adam Smith observed that economic growth was primarily driven by specialization and division of labor. And specialization has been a hallmark of computing technology since its inception. Until now. Artificial intelligence (AI) has begun to alter, even reverse, this evolution.

Okay, Econ 101. Wonderful. But… and there are some “buts,” of course. The write up says:

But the direction is clear. While society is moving toward ever more specialization, AI is moving in the opposite direction and attempting to replicate our greatest evolutionary advantage—adaptability.

Yikes. I am not sure that AI is going in any direction. Senior managers are going toward reducing costs. “Good enough,” not excellence, is the high-water mark today.

Here’s another “but”:

But could AI take over the bulk of legal work or is there an underlying thread of creativity and judgment of the type only speculative super AI could hope to tackle? Put another way, where do we draw the line between general and specific tasks we perform? How good is AI at analyzing the merits of a case or determining the usefulness of a specific document and how it fits into a plausible legal argument? For now, I would argue, we are not even close.

I don’t remember much about economics. In fact, I only think about economics in terms of reducing costs and having more money for myself. Good old Adam wrote:

Wherever there is great property there is great inequality. For one very rich man, there must be at least five hundred poor, and the affluence of the few supposes the indigence of the many.

When it comes to AI, inequality is baked in. The companies that are competing fiercely to dominate the core technology are not into equality. Neither are the senior managers who want to reduce the costs associated with publishing, writing consulting reports based on business school baloney, or reviewing documents in the hunt for nuggets useful in a trial. AI is going into these and similar knowledge professions. Most of those knowledge workers will have an opportunity to find their future elsewhere. But what about in-take professionals in hospitals? What about dispatchers at trucking companies? What about government citizen service jobs? Sorry. Software is coming. Companies are developing orchestrator software to allow smart software to function across multiple related and inter-related tasks, as the sketch below suggests. Isn’t that what most work in many organizations is?
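“Orchestrator” is a loose term. As a sketch only, and not any vendor’s actual product, the idea reduces to routing one step’s output into the next; the task functions below are hypothetical placeholders for the intake, dispatch, and service chores just mentioned.

```python
# Hypothetical orchestrator sketch: chain "smart" steps over one payload.
from typing import Callable

Step = Callable[[str], str]

def intake(doc: str) -> str:
    return f"classified({doc})"   # e.g., triage an incoming request

def summarize(doc: str) -> str:
    return f"summary({doc})"      # e.g., condense it for the next step

def draft_reply(doc: str) -> str:
    return f"reply({doc})"        # e.g., produce the outgoing response

def orchestrate(steps: list[Step], payload: str) -> str:
    """Run each step on the previous step's output. Real orchestrators add
    branching, retries, and human review; this shows only the chain."""
    for step in steps:
        payload = step(payload)
    return payload

print(orchestrate([intake, summarize, draft_reply], "customer email"))
# -> reply(summary(classified(customer email)))
```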

Here’s another test question from Econ 101:

Discuss the meaning of “It was not by gold or by silver, but by labor, that all wealth of the world was originally purchased.” Give examples of how smart software will replace labor and generate more money for those who own the rights to digital gold or silver.

Send me your blue book answers within 24 hours. You must write in legible cursive. You are not permitted to use artificial intelligence in any form to answer this question, which counts for 95 percent of your grade in Economics 102: Work in the Age of AI.

Stephen E Arnold, June 3, 2024

So AI Is — Maybe, Just Maybe — Not the Economic Big Kahuna?

June 3, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I find it amusing how AI has become the go-to marketing word. I suppose if I were desperate, lacking an income, unsure about what will sell, and a follow-the-hyperbole-type person, I would shout, “AI.” Instead I vocalize, “Ai-Yai-Ai,” emulating the tones of a Central American death whistle. Yep, “Ai-Yai-Ai.”

Thanks, MSFT Copilot. A harbinger? Good enough.

I read “MIT Professor Hoses Down Predictions AI Will Put a Rocket under the Economy.” I won’t comment upon the fog of distrust which I discern around Big Name Universities, nor will I focus my adjustable Walgreen’s spectacles on MIT’s fancy dancing with the quite interesting and decidedly non-academic Jeffrey Epstein. Nope. Forget those two factoids.

The write up reports:

…Daron Acemoglu, professor of economics at Massachusetts Institute of Technology, argues that predictions AI will improve productivity and boost wages in a “blue-collar bonanza” are overly optimistic.

The good professor is rowing against the marketing current. According to the article, he identifies some wild and crazy forecasts. One of these is from an investment bank whose clients are unlikely to be what some one percenters perceive as non-masters of the universe.

That’s interesting. But it pales in comparison to the information in “Few People Are Using ChatGPT and Other AI Tools Regularly, Study Suggests.” (I love suggestive studies!) That write up reports about a study involving Thomson Reuters, the “trust” outfit:

Carried out by the Reuters Institute and Oxford University and involving 6,000 respondents from the U.S., U.K., France, Denmark, Japan, and Argentina, the researchers found that OpenAI’s ChatGPT is by far the most widely used generative-AI tool and is two or three times more widespread than the next most widely used products — Google Gemini and Microsoft Copilot. But despite all the hype surrounding generative AI over the last 18 months, only 1% of those surveyed are using ChatGPT on a daily basis in Japan, 2% in France and the UK, and 7% in the U.S. The study also found that between 19% and 30% of the respondents haven’t even heard of any of the most popular generative AI tools, and while many of those surveyed have tried using at least one generative-AI product, only a very small minority are, at the current time, regular users deploying them for a variety of tasks.

My hunch is that these contrarians want clicks. Well, the tactic worked for me. However, how many of those in AI-Land will take note? My thought is that these anti-AI findings are likely to be ignored until some of the Big Money folks lose their cash. Then the voices of negativity will be heard.

Several observations:

  1. The economics of AI seem similar to some early online ventures like Pets.com, not “all” mind you, just some
  2. Expertise in AI may not guarantee a job at a high-flying techno-feudalist outfit
  3. The difficulties Google appears to be having suggest that the road to AI-Land on the information superhighway may have some potholes. (If Google cannot pull AI off, how can Bob’s Trucking Company armed with Microsoft Word with Copilot?)

Net net: It will be interesting to monitor the frequency of “AI balloon deflating” analyses.

Stephen E Arnold, June 3, 2024

AI Overviews: A He Said, She Said Argument

May 29, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Google has begun the process of setting up an AI Overview object in search results. The idea is that Google provides an “answer.” But the machine-generated response is a platform for selling sentences, “meaning,” and probably words. Most people who have been exposed to the Overview object point out some of the object’s flaws. Those “mistakes” are not the point. Before I offer some ideas about the advertising upside of an AI Overview, I want to highlight both sides of this “he said, she said” dust up. Those criticizing the Google’s enhancement to search results miss the point of generating a new way to monetize information. Those who are taking umbrage at the criticism miss the point of people complaining about how lousy the AI Overviews are perceived to be.

The criticism of Google is encapsulated in “Why Google Is (Probably) Stuck Giving Out AI Answers That May or May Not Be Right.” A “real” journalist explains:

What happens if people keep finding Bad Answers on Google and Google can’t whac-a-mole them fast enough? And, crucially, what if regular people, people who don’t spend time reading or talking about tech news, start to hear about Google’s Bad And Potentially Dangerous Answers? Because that would be a really, really big problem. Google does a lot of different things, but the reason it’s worth more than $2 trillion is still its two core products: search, and the ads that it generates alongside search results. And if people — normal people — lose confidence in Google as a search/answer machine … Well, that would be a real problem.

The idea is that the AI Overview makes Google Web search less useful than it was before AI. Whether the idea is accurate or not makes no difference to the “he said, she said” argument. The “real” news is that Google is doing something that many people may perceive as a negative. The consequence is that Google’s shiny carapace will be scratched and dented. A more colorful approach to this side of the “bad Google” argument appears in Android Authority. “Shut It Down: Google’s AI Search Results Are Beyond Terrible” states:

The new Google AI Overview feature is offering responses to queries that range from bizarre and funny to very dangerous.

Ooof. Bizarre and dangerous. Yep, that’s the new Google AI Overview.

The Red Alert Google is not taking the criticism well. Instead of Googzilla retreating into a dark, digital cave, the beastie is coming out fighting. Imagine. Google is responding to pundit criticism. Fifteen years ago, no one would have paid any attention to a podcasting writer and a mobile device news service. Times have indeed changed.

“Google Scrambles to Manually Remove Weird AI Answers in Search” provides an allegedly accurate report about how Googzilla is responding to criticism. In spite of the split infinitive, the headline makes clear that the AI-infused online advertising machine is using humans (!) to fix up wonky AI Overviews. The write up pontificates:

Google continues to say that its AI Overview product largely outputs “high quality information” to users. “Many of the examples we’ve seen have been uncommon queries, and we’ve also seen examples that were doctored or that we couldn’t reproduce,” Google spokesperson Meghann Farnsworth said in an email to The Verge. Farnsworth also confirmed that the company is “taking swift action” to remove AI Overviews on certain queries “where appropriate under our content policies, and using these examples to develop broader improvements to our systems, some of which have already started to roll out.”

Google seems to acknowledge that action is required. But the Google is not convinced that it has stepped on a baby duckling or two with its AI Overview innovation.

AI Overviews represent a potential revenue flow into Alphabet. The money, not the excellence of the outputs, is what matters in today’s Google. Thanks, MSFT Copilot. Back online and working on security today?

Okay, “he said, she said.” What’s the bigger picture? I worked on a project which required setting up an ad service that sold words in a text passage. I am not permitted to name the client or the outfit with the idea. On a Web page, some text would appear with an identifier like an underline or bold face. When the reader of the Web page clicked (often inadvertently) on the word, that user would be whisked to another Web site or a pop-up ad. The idea is that instead of an Oingo (Applied Semantics)-type of related-concept expansion, the advertiser was buying a word. Brilliant.
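Here is a minimal sketch of that word-selling mechanic, with entirely invented advertiser data, since the client and vendor cannot be named:

```python
# Hypothetical word-level ad linker; the real project's code is not public.
import re

# Invented table mapping purchased words to advertiser destinations.
AD_WORDS = {
    "camera": "https://ads.example.com/camera-seller",
    "pizza": "https://ads.example.com/pizza-chain",
}

def sell_words(html_text: str) -> str:
    """Wrap each purchased word in a link so a click whisks the reader
    to the advertiser's page."""
    def link(match: re.Match) -> str:
        word = match.group(0)
        return f'<a href="{AD_WORDS[word.lower()]}"><u>{word}</u></a>'
    pattern = r"\b(" + "|".join(AD_WORDS) + r")\b"
    return re.sub(pattern, link, html_text, flags=re.IGNORECASE)

print(sell_words("A new camera makes pizza photos pop."))
```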

The AI Overview, based on my team’s look at what the Google has been crafting, sets up a similar opportunity. Here’s a selection from our discussion at lunch on Friday, May 24, 2024, at a restaurant which featured a bridge club luncheon. Wow, was it noisy! Here’s what emerged from our frequently disrupted conversation:

  1. The AI Overview is a content object. It sits for now at the top of the search results page unless the “user” knows to add the string udm=14 to a query (see the sketch after this list)
  2. Advertising can be “sold” to the advertiser[s] who want[s] to put a message on the “topic” or “main concept” of the search
  3. Advertising can be sold to the organizations wanting to be linked to a sentence or a segment of a sentence in the AI Overview
  4. Advertising can be sold to the organizations wanting to be linked to a specific word in the AI Overview
  5. Advertising can be sold to the organizations wanting to be linked to a specific concept in the AI Overview.
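The udm=14 string in item 1 is a real query parameter: appending it asks Google for the plain “Web” results view, which omits the AI Overview object. A one-liner shows the form:

```python
# Build a Google query URL that requests the plain "Web" view (udm=14),
# which skips the AI Overview content object.
from urllib.parse import urlencode

def web_only_search_url(query: str) -> str:
    return "https://www.google.com/search?" + urlencode({"q": query, "udm": 14})

print(web_only_search_url("how to make a taco"))
# -> https://www.google.com/search?q=how+to+make+a+taco&udm=14
```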

Whether the AI Overview is good, bad, or indifferent makes no, zero, zip difference in practice to the Google advertising “machine,” its officers, and its soon-to-be-replaced-by-smart-software staff. AI has given Google the opportunity to monetize a new content object. That content object and its advertising are additive. People who want “traditional” Google online advertising can still buy it. Furthermore, as one of my team pointed out, the presence of the new content object “space” on a search results page opens up additional opportunities to monetize certain content types. One example is buying a link to a related video which appears as an icon below, alongside, or within the content object space. The monetization opportunities seem to have some potential.

Net net: Googzilla may be ageing. To poobahs and self-appointed experts, Google may be lost in space, trembling in fear, and growing deaf due to the blaring of the Red Alert klaxons. Whatever. But the AI Overview may have some upside even if it is filled with wonky outputs.

Stephen E Arnold, May 29, 2024

French AI Is Intelligent and Not Too Artificial

May 28, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I read “Macron: French AI Can Challenge Insane Dominance of US and China.” In the CNBC interview, Emmanuel Macron used the word “insane.” The phrase, according to the cited article, was:

French President Emmanuel Macron has called for his country’s AI leaders to challenge the “insane” dominance of US and Chinese tech giants.

French offers a number of ways to describe a loss of mental control or something that goes well beyond normal behavior; for example, aliéné, which can suggest something quite beyond the normal. The example which comes to mind might include the market dominance of US companies emulating Google-type methods. Another choice is comme un fou. This phrase suggests a crazy, high-speed action or event; for example, the amount of money OpenAI generated in a few days by selling $20 subscriptions via the ChatGPT iPhone app. My personal favorite is dément, which has a nice blend of demented behavior and incredible actions. Think of Microsoft’s recent litany of AI capabilities creating a new category of computers purpose-built to terminate with extreme prejudice the market-winning MacBook devices; specifically, the itty bitty Airs.

The road to Google-type AI has a few speed bumps. Thanks, MSFT Copilot. Security getting attention or is Cloud stability the focal point of the day?

The write up explains what M. Macron really meant:

For now, however, Europe remains a long way behind the US and Chinese leaders. None of the 10 largest tech companies by market cap are based in the continent and few feature in the top 50. The French President decried that landscape. “It’s insane to have a world where the big giants just come from China and US.”

Ah, ha. The idea appears to be a lack of balance and restraint. Well, it seems, France is going to do its best to deliver the digital equivalent of a chicken with a Label Rouge; that is, AI that is going to meet specific standards and be significantly superior to something like the $5 US Costco chicken. I anticipate that M. Macron’s government will issue a document like this Fiche filière volaille de chair 2020 for AI.

M. Macron points to two examples of French AI technology: Mistral and H (formerly Holistic). I was disappointed that M. Macron did not highlight the quite remarkable AI technology of Preligens, which is in the midst of a sale. I would suggest that Preligens is an example of why the “insane” dominance of China and the US in AI is the current reality. The company is ensnared in French regulations and in need of the type of money pumped into AI start-ups in the two countries leading the pack in AI.

M. Macron is making changes; specifically, according to the write up:

Macron has cut red tape, loosened labor protections, and reduced taxes on the wealthy. He’s also attracted foreign investment, including a €15bn funding package from the likes of Microsoft and Amazon announced earlier this month. Macron has also committed to a pan-European AI strategy. At a meeting in the Elysée Palace this week, he hinted at the first step of a new plan: “Our aim is to Europeanize [AI], and we’re going to start with a Franco-German initiative.”

I know from experience the quality of French information-centric technologists. The principal hurdles for France, in my opinion, are:

  1. Addressing the red tape. (One cannot grasp the implications of this phrase unless one tries to rent an apartment in France.)
  2. Juicing up the investment system and methods.
  3. Overcoming the ralentisseurs (speed bumps) on the Information Superhighway running between Paris, DC, and Beijing.

Net net: Check out Preligens.

Stephen E Arnold, May 28, 2024

AI and Work: Just the Ticket for Monday Morning

May 20, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Well, here’s a cheerful essay for the average worker in a knowledge industry. “If Your Work’s Average, You’re Screwed. It’s Over for You” is the ideal essay to kick off a new work week. The source of the publication is Digital Camera World. I thought traditional and digital cameras were yesterday’s news. Therefore, I surmise the author of the write up misses the good old days of Kodak film, chemicals, and really expensive retouching.

How many US government professionals will find themselves victims of good enough AI? Answer: More than the professional photographers. Thanks, MSFT Copilot. Good enough, a standard your security systems seem to struggle to achieve.

What does the camera-focused (yeah, lame pun) essay report? Consider this passage:

there’s one thing that only humans can do…

Okay, one thing. I give up. What’s that? Create other humans? Write poetry? Take fentanyl and lose the ability to stand up for hours? Captain a boat near orcas who will do what they can to sink the vessel? Oh, well. What’s that one thing?

"But I think the thing that AI is going to have an impossible job of achieving is that last 1% that stands between everything [else] and what’s great. I think that that last 1%, only a human can impart that.

AI does the mediocre. Humans, I think, do the exceptional. The logic seems to be that only someone in the top tier of humans will have a job. Everyone else will be standing in line to get basic income checks, pursuing crime, or reading books. Strike that. Scrolling social media. No doom required. Those not in the elite will know doom firsthand.

Here’s another passage to bring some zip to a Monday morning:

What it’s [smart software] going to do is, if your work’s average, you’re screwed. It’s [having a job] over for you. Be great, because AI is going to have a really hard time being great itself.

Observations? Just that cost cutting may be Job One.

Stephen E Arnold, May 20, 2024
