So AI Is — Maybe, Just Maybe — Not the Economic Big Kahuna?

June 3, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I find it amusing how AI has become the go-to marketing word. I suppose if I were desperate, lacking an income, unsure about what will sell, and a follow-the-hyperbole type of person, I would shout, “AI.” Instead I vocalize, “Ai-Yai-Ai,” emulating the tones of a Central American death whistle. Yep, “Ai-Yai-Ai.”


Thanks, MSFT Copilot. A harbinger? Good enough.

I read “MIT Professor Hoses Down Predictions AI Will Put a Rocket under the Economy.” I won’t comment upon the fog of distrust which I discern around Big Name Universities, nor will I focus my adjustable Walgreen’s spectacles on MIT’s fancy dancing with the quite interesting and decidedly non-academic Jeffrey Epstein. Nope. Forget those two factoids.

The write up reports:

…Daron Acemoglu, professor of economics at Massachusetts Institute of Technology, argues that predictions AI will improve productivity and boost wages in a “blue-collar bonanza” are overly optimistic.

The good professor is rowing against the marketing current. According to the article, the good professor identifies some wild and crazy forecasts. One of these is from an investment bank whose clients are unlikely to be what some one percenters perceive as non-masters of the universe.

That’s interesting. But it pales in comparison to the information in “Few People Are Using ChatGPT and Other AI Tools Regularly, Study Suggests.” (I love suggestive studies!) That write up reports about a study involving Thomson Reuters, the “trust” outfit:

Carried out by the Reuters Institute and Oxford University and involving 6,000 respondents from the U.S., U.K., France, Denmark, Japan, and Argentina, the researchers found that OpenAI’s ChatGPT is by far the most widely used generative-AI tool and is two or three times more widespread than the next most widely used products — Google Gemini and Microsoft Copilot. But despite all the hype surrounding generative AI over the last 18 months, only 1% of those surveyed are using ChatGPT on a daily basis in Japan, 2% in France and the UK, and 7% in the U.S. The study also found that between 19% and 30% of the respondents haven’t even heard of any of the most popular generative AI tools, and while many of those surveyed have tried using at least one generative-AI product, only a very small minority are, at the current time, regular users deploying them for a variety of tasks.

My hunch is that these contrarians want clicks. Well, the tactic worked for me. However, how many of those in AI-Land will take note? My thought is that these anti-AI findings are likely to be ignored until some of the Big Money folks lose their cash. Then the voices of negativity will be heard.

Several observations:

  1. The economics of AI seem similar to some early online ventures like, not “all” mind you, just some
  2. Expertise in AI may not guarantee a job at a high-flying techno-feudalist outfit
  3. The difficulties Google appears to be having suggest that the road to AI-Land on the information superhighway may have some potholes. (If Google cannot pull AI off, how can Bob’s Trucking Company armed with Microsoft Word with Copilot?)

Net net: It will be interesting to monitor the frequency of “AI balloon deflating” analyses.

Stephen E Arnold, June 3, 2024


AI Overviews: A He Said, She Said Argument

May 29, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Google has begun the process of setting up an AI Overview object in search results. The idea is that Google provides an “answer.” But the machine-generated response is a platform for selling sentences, “meaning,” and probably words. Most people who have been exposed to the Overview object point out some of the object’s flaws. Those “mistakes” are not the point. Before I offer some ideas about the advertising upside of an AI Overview, I want to highlight both sides of this “he said, she said” dust up. Those criticizing the Google’s enhancement to search results miss the point of generating a new way to monetize information. Those who are taking umbrage at the criticism miss the point of people complaining about how lousy the AI Overviews are perceived to be.

The criticism of Google is encapsulated in “Why Google Is (Probably) Stuck Giving Out AI Answers That May or May Not Be Right.” A “real” journalist explains:

What happens if people keep finding Bad Answers on Google and Google can’t whac-a-mole them fast enough? And, crucially, what if regular people, people who don’t spend time reading or talking about tech news, start to hear about Google’s Bad And Potentially Dangerous Answers? Because that would be a really, really big problem. Google does a lot of different things, but the reason it’s worth more than $2 trillion is still its two core products: search, and the ads that it generates alongside search results. And if people — normal people — lose confidence in Google as a search/answer machine … Well, that would be a real problem.

The idea is that the AI Overview makes Google Web search less useful than it was before AI. Whether the idea is accurate or not makes no difference to the “he said, she said” argument. The “real” news is that Google is doing something that many people may perceive as a negative. The consequence is that Google’s shiny carapace will be scratched and dented. A more colorful approach to this side of the “bad Google” argument appears in Android Authority. “Shut It Down: Google’s AI Search Results Are Beyond Terrible” states:

The new Google AI Overview feature is offering responses to queries that range from bizarre and funny to very dangerous.

Ooof. Bizarre and dangerous. Yep, that’s the new Google AI Overview.

The Red Alert Google is not taking the criticism well. Instead of Googzilla retreating into a dark, digital cave, the beastie is coming out fighting. Imagine. Google is responding to pundit criticism. Fifteen years ago, no one would have paid any attention to a podcaster writer and a mobile device news service. Times have indeed changed.

“Google Scrambles to Manually Remove Weird AI Answers in Search” provides an allegedly accurate report about how Googzilla is responding to criticism. In spite of the split infinitive, the headline makes clear that the AI-infused online advertising machine is using humans (!) to fix up wonky AI Overviews. The write up pontificates:

Google continues to say that its AI Overview product largely outputs “high quality information” to users. “Many of the examples we’ve seen have been uncommon queries, and we’ve also seen examples that were doctored or that we couldn’t reproduce,” Google spokesperson Meghann Farnsworth said in an email to The Verge. Farnsworth also confirmed that the company is “taking swift action” to remove AI Overviews on certain queries “where appropriate under our content policies, and using these examples to develop broader improvements to our systems, some of which have already started to roll out.”

Google seems to acknowledge that action is required. But the Google is not convinced that it has stepped on a baby duckling or two with its AI Overview innovation.


AI Overviews represent a potential revenue flow into Alphabet. The money, not the excellence of the outputs, is what matters in today’s Google. Thanks, MSFT Copilot. Back online and working on security today?

Okay, “he said, she said.” What’s the bigger picture? I worked on a project which required setting up an ad service which sold words in a text passage. I am not permitted to name the client or the outfit with the idea. On a Web page, some text would appear with an identifier like an underline or bold face. When the reader of the Web page clicked (often inadvertently) on the word, that user would be whisked to another Web site or a pop up ad. The idea is that instead of an Oingo (Applied Semantics)-type of related concept expansion, the advertiser was buying a word. Brilliant.
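The mechanics of that word-selling setup can be sketched in a few lines. This is a minimal illustration, not the client's actual system; the sold words, advertiser URLs, and function names are all my invention:

```python
# Illustrative sketch of word-level ad sales: each sold word in a page's
# text becomes an underlined link to the advertiser who bought it.
# The word-to-URL table below is hypothetical.
import re

SOLD_WORDS = {
    "camera": "https://ads.example.com/camera-advertiser",
    "lens": "https://ads.example.com/lens-advertiser",
}

def monetize(text: str, sold_words: dict[str, str]) -> str:
    """Wrap each sold word in a link to its advertiser's landing page."""
    def link(match: re.Match) -> str:
        word = match.group(0)
        url = sold_words[word.lower()]
        return f'<a href="{url}"><u>{word}</u></a>'

    # Match any sold word as a whole word, case-insensitively.
    pattern = re.compile(
        r"\b(" + "|".join(re.escape(w) for w in sold_words) + r")\b",
        re.IGNORECASE,
    )
    return pattern.sub(link, text)

# Both sold words become advertiser links; everything else is untouched.
print(monetize("A new camera with a fast lens.", SOLD_WORDS))
```

The point of the sketch is the business model, not the markup: the advertiser buys the word itself, so every occurrence on a participating page becomes paid inventory.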

The AI Overview, based on my team’s look at what the Google has been crafting, sets up a similar opportunity. Here’s a selection from our discussion at lunch on Friday, May 24, 2024 at a restaurant which featured a bridge club luncheon. Wow, was it noisy! Here’s what emerged from our frequently disrupted conversation:

  1. The AI Overview is a content object. It sits for now at the top of the search results page unless the “user” knows to add the string udm=14 to a query
  2. Advertising can be “sold” to the advertiser[s] who want[s] to put a message on the “topic” or “main concept” of the search
  3. Advertising can be sold to the organizations wanting to be linked to a sentence or a segment of a sentence in the AI Overview
  4. Advertising can be sold to the organizations wanting to be linked to a specific word in the AI Overview
  5. Advertising can be sold to the organizations wanting to be linked to a specific concept in the AI Overview.
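Taken together, these five points describe the AI Overview as a new kind of ad inventory. A toy model, assuming hypothetical slot types, advertisers, and class names of my own invention, might look like this:

```python
# Toy model of the AI Overview as a sellable content object: ad slots can
# target the topic, a sentence, a word, or a concept in the generated
# answer. All slot types, names, and advertisers here are hypothetical.
from dataclasses import dataclass

@dataclass
class AdSlot:
    slot_type: str   # "topic", "sentence", "word", or "concept"
    target: str      # the piece of the overview the advertiser buys
    advertiser: str

class OverviewObject:
    def __init__(self, topic: str, sentences: list[str]):
        self.topic = topic
        self.sentences = sentences
        self.slots: list[AdSlot] = []

    def sell(self, slot: AdSlot) -> None:
        """Record a sold slot against this content object."""
        self.slots.append(slot)

    def ads_for(self, slot_type: str) -> list[str]:
        """List advertisers who bought a given slot type."""
        return [s.advertiser for s in self.slots if s.slot_type == slot_type]

overview = OverviewObject("ab rollers", ["An ab roller works the core."])
overview.sell(AdSlot("topic", "ab rollers", "GymCo"))
overview.sell(AdSlot("word", "core", "FitnessBrand"))
print(overview.ads_for("topic"))  # -> ['GymCo']
```

The design choice worth noting is that each granularity (topic, sentence, word, concept) is a separate, stackable product, which is what makes the overview object additive to existing ad formats rather than a replacement for them.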

Whether the AI Overview is good, bad, or indifferent makes zero difference in practice to the Google advertising “machine,” its officers, and its soon-to-be-replaced-by-smart-software staff. AI has given Google the opportunity to monetize a new content object. That content object and its advertising are additive. People who want “traditional” Google online advertising can still buy it. Furthermore, as one of my team pointed out, the presence of the new content object “space” on a search results page opens up additional opportunities to monetize certain content types. One example is buying a link to a related video which appears as an icon below, alongside, or within the content object space. The monetization opportunities seem to have some potential.

Net net: Googzilla may be ageing. To poobahs and self-appointed experts, Google may be lost in space, trembling in fear, and growing deaf due to the blaring of the Red Alert klaxons. Whatever. But the AI Overview may have some upside even if it is filled with wonky outputs.

Stephen E Arnold, May 29, 2024

French AI Is Intelligent and Not Too Artificial

May 28, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I read “Macron: French AI Can Challenge Insane Dominance of US and China.” In the CNBC interview, Emmanuel Macron used the word “insane.” The phrase, according to the cited article was:

French President Emmanuel Macron has called for his country’s AI leaders to challenge the “insane” dominance of US and Chinese tech giants.

French offers a number of ways to describe a loss of mental control or behavior well beyond the normal; for example, aliéné, which can suggest something quite beyond the normal. The example which comes to mind might be the market dominance of US companies emulating Google-type methods. Another choice is comme un fou. This phrase suggests a crazy, high-speed action or event; for example, the amount of money OpenAI generated by selling $20 subscriptions via the ChatGPT iPhone app in a few days. My personal favorite is dément, which has a nice blend of demented behavior and incredible actions. An example is Microsoft’s recent litany of AI capabilities creating a new category of computers purpose-built to terminate, with extreme prejudice, the market-winning MacBook devices; specifically, the itty bitty Airs.


The road to Google-type AI has a few speed bumps. Thanks, MSFT Copilot. Security getting attention or is Cloud stability the focal point of the day?

The write up explains what M. Macron really meant:

For now, however, Europe remains a long way behind the US and Chinese leaders. None of the 10 largest tech companies by market cap are based in the continent and few feature in the top 50. The French President decried that landscape. “It’s insane to have a world where the big giants just come from China and US.”

Ah, ha. The idea appears to be a lack of balance and restraint. Well, it seems, France is going to do its best to deliver the digital equivalent of a chicken with a Label Rouge; that is, AI that is going to meet specific standards and be significantly superior to something like the $5 US Costco chicken. I anticipate that M. Macron’s government will issue a document like this Fiche filière volaille de chair 2020 for AI.

M. Macron points to two examples of French AI technology: Mistral and H (formerly Holistic). I was disappointed that M. Macron did not highlight the quite remarkable AI technology of Preligens, which is in the midst of a sale. I would suggest that Preligens is an example of why the “insane” dominance of China and the US in AI is the current reality. The company is ensnared in French regulations and in need of the type of money pumped into AI start ups in the two countries leading the pack in AI.

M. Macron is making changes; specifically, according to the write up:

Macron has cut red tape, loosened labor protections, and reduced taxes on the wealthy. He’s also attracted foreign investment, including a €15bn funding package from the likes of Microsoft and Amazon announced earlier this month. Macron has also committed to a pan-European AI strategy. At a meeting in the Élysée Palace this week, he hinted at the first step of a new plan: “Our aim is to Europeanize [AI], and we’re going to start with a Franco-German initiative.”

I know from experience the quality of French information-centric technologists. The principal hurdles for France, in my opinion, are:

  1. Addressing the red tape. (One cannot grasp the implications of this phrase unless one tries to rent an apartment in France.)
  2. Juicing up the investment system and methods.
  3. Overcoming the ralentisseurs (speed bumps) on the Information Superhighway running between Paris, DC, and Beijing.

Net net: Check out Preligens.

Stephen E Arnold, May 28, 2024

AI and Work: Just the Ticket for Monday Morning

May 20, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Well, here’s a cheerful essay for the average worker in a knowledge industry. “If Your Work’s Average, You’re Screwed: It’s Over for You” is the ideal essay to kick off a new work week. The source of the publication is Digital Camera World. I thought traditional and digital cameras were yesterday’s news. Therefore, I surmise the author of the write up misses the good old days of Kodak film, chemicals, and really expensive retouching.


How many US government professionals will find themselves victims of good enough AI? Answer: More than the professional photographers? Thanks, MSFT Copilot. Good enough, a standard your security systems seem to struggle to achieve.

What does the camera-focused (yeah, lame pun) essay report? Consider this passage:

there’s one thing that only humans can do…

Okay, one thing. I give up. What’s that? Create other humans? Write poetry? Take fentanyl and lose the ability to stand up for hours? Captain a boat near orcas who will do what they can to sink the vessel? Oh, well. What’s that one thing?

"But I think the thing that AI is going to have an impossible job of achieving is that last 1% that stands between everything [else] and what’s great. I think that that last 1%, only a human can impart that."

AI does the mediocre. Humans, I think, do the exceptional. The logic seems to be that only someone in the top tier of humans will have a job. Everyone else will be standing in line to get basic income checks, pursuing crime, or reading books. Strike that. Scrolling social media. No doom required. Those not in the elite will know doom first hand.

Here’s another passage to bring some zip to a Monday morning:

What it’s [smart software] going to do is, if your work’s average, you’re screwed. It’s [having a job] over for you. Be great, because AI is going to have a really hard time being great itself.

Observations? Just that cost cutting may be Job One.

Stephen E Arnold, May 20, 2024

Flawed AI Will Still Take Jobs

May 16, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Shocker. Organizations are using smart software which [a] operates in a way its creators cannot explain, [b] makes up information, and [c] appears to be dominated by a handful of “above the law” outfits. Does this characterization seem unfair? If so, stop reading. If it seems anchored in reality, you may find my comments about jobs for GenX, GenY or GenWhy?, millennials, and Alphas (I think this is what marketers call wee lads and lasses) somewhat in line with the IMF’s view of AI.


The answer is, “Your daughter should be very, very intelligent and very, very good at an in-demand skill. If she is not, then it is doom scrolling for sure.” Thanks, MSFT Copilot. Do your part for the good of mankind today.

“Artificial Intelligence Hitting Labour Forces Like a Tsunami – IMF Chief” screws up the metaphor. A tsunami builds, travels, dissipates. I am not sure what the headline writer thinks will dissipate in AI land. Jobs for sure. But AI seems to have some sticking power.

What does the IMF say? Here’s a bit of insight:

Artificial intelligence is likely to impact 60% of jobs in advanced economies and 40% of jobs around the world in the next two years…

So what? The IMF Big Dog adds:

“It could bring tremendous increase in productivity if we manage it well, but it can also lead to more misinformation and, of course, more inequality in our society.”

Could. I think it will, but only for those who know their way around AI and are in the tippy top of smart people. ATM users, TikTok consumers, and those who think school is stupid may not emerge as winners.

I find it interesting to consider what a two-tier society in the US and Western Europe will manifest. What will the people who do not have jobs do? Volunteer to work at the local animal shelter, pick up trash, or just kick back. Yeah, that’s fun.

What if one looks back over the last 50 years? When I grew up, my father had a job. My mother worked at home. I went to school. The text books were passed along year to year. The teachers grouped students by ability and segregated some students into an “advanced” track. My free time was spent outside “playing” or inside reading. When I was 15, I worked as a car hop. No mobile phones. No computer. Just radio, a record player, and a crappy black-and-white television which displayed fuzzy programs. The neighbors knew me and the other “kids.” From my eighth grade class, everyone went to college after high school. In my high school class of 1962, everyone was thinking about an advanced degree. Social was something a church sponsored. Its main feature was ice cream. After getting an advanced degree in 1965 I believe, I got a job because someone heard me give a talk about indexing Latin sermons and said, “We need you.” Easy.

A half century later, what is the landscape? AI is eliminating jobs. Many of these will be intermediating jobs like doing email spam for a PR firm’s client or doing legal research. In the future, knowledge work will move up the Great Chain of Being. Most won’t be able to do the climbing to make it up to a rung with decent pay, some reasonable challenges, and a bit of power.

Let’s go back to the somewhat off-the-mark tsunami metaphor. AI is going to become more reliable. The improvements will continue. Think about what an IBM PC looked like in the 1980s. Now think about the MacBook Air you or your colleague has. They are similar but not equivalent. What happens when AI systems and methods keep improving? That’s tough to predict. What’s obvious is that the improvements and innovations in smart software are not a tsunami.

I liken it more to the continuous pressure in a petroleum cracking facility. Work is placed in contact with smart software, and stuff vaporizes. The first component to be consumed is human jobs. Next, the smart software will transform “work” itself. Most work is busy work; smart software wants “real” work. As long as the electricity stays on, the impact of AI will be on-going. AI will transform. A tsunami crashes, makes a mess, and then is entropified. AI is a different and much hardier development.

The IMF is on the right track; it is just not making clear how much change is now underway.

Stephen E Arnold, May 16, 2024

AdVon: Why So Much Traction and Angst?

May 14, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

AdVon. AdVon. AdVon. Okay, the company is in the news. Consider this write up: “Meet AdVon, the AI-Powered Content Monster Infecting the Media Industry.” So why meet AdVon? The subtitle explains:

Remember that AI company behind Sports Illustrated’s fake writers? We did some digging — and it’s got tendrils into other surprisingly prominent publications.

Let’s consider the question: Why is AdVon getting traction among “prominent publications” or any other outfit wanting content? The answer is not far to seek: Cutting costs, doing more with less, getting more clicks, getting more money. This is not a multiple choice test in a junior college business class. This is common sense. Smart software makes it possible for those with some skill in the alleged art of prompt crafting and automation to sell “stories” to publishers for less than those publishers can produce the stories themselves.


The future continues to arrive. Here, smart software says “Hasta la vista” to the human information generator. The humanoid looks very sad. Neither the AI software nor its owner cares. Revenue and profit are more important, as long as the top dogs get paid big bucks. Thanks, MSFT Copilot. Working on your security systems or polishing the AI today?

Let’s look at the cited article’s peregrination to the obvious: AI can reduce costs of “publishing”. Plus, as AI gets more refined, the publications themselves can be replaced with scripts.

The write up says:

Basically, AdVon engages in what Google calls “site reputation abuse”: it strikes deals with publishers in which it provides huge numbers of extremely low-quality product reviews — often for surprisingly prominent publications — intended to pull in traffic from people Googling things like “best ab roller.” The idea seems to be that these visitors will be fooled into thinking the recommendations were made by the publication’s actual journalists and click one of the articles’ affiliate links, kicking back a little money if they make a purchase. It’s a practice that blurs the line between journalism and advertising to the breaking point, makes the web worse for everybody, and renders basic questions like “is this writer a real person?” fuzzier and fuzzier.

Okay. So what?

In spite of the article being labeled as “AI” in AdVon’s CMS, the Outside Inc spokesperson said the company had no knowledge of the use of AI by AdVon — seemingly contradicting AdVon’s claim that automation was only used with publishers’ knowledge.

Okay, corner cutting is part of AdVon’s business model. But what about the “minimum viable product” or “good enough” approach to everything from self-driving auto baloney to Boeing aircraft doors? Is AI use somehow exempt from what is now standard business practice? Major academic figures take short cuts. Now an outfit with some AI skills is supposed to operate like a hybrid of Joan of Arc and Mother Theresa? Sure.

The write up states:

In fact, it seems that many products only appear in AdVon’s reviews in the first place because their sellers paid AdVon for the publicity. That’s because the founding duo behind AdVon, CEO Ben Faw and president Eric Spurling, also quietly operate another company called SellerRocket, which charges the sellers of Amazon products for coverage in the same publications where AdVon publishes product reviews.

To me, AdVon is using a variant of the Google type of online advertising concept. The bar room door swings both ways. The customer pays to enter, and the customer pays to leave. Am I surprised? Nope. Should anyone be? How about a government consumer protection watchdog? Tip: Don’t hold your breath. New York City tested a chatbot that provided information that violated city laws.

The write up concludes:

At its worst, AI lets unscrupulous profiteers pollute the internet with low-quality work produced at unprecedented scale. It’s a phenomenon which — if platforms like Google and Facebook can’t figure out how to separate the wheat from the chaff — threatens to flood the whole web in an unstoppable deluge of spam. In other words, it’s not surprising to see a company like AdVon turn to AI as a mechanism to churn out lousy content while cutting loose actual writers. But watching trusted publications help distribute that chum is a unique tragedy of the AI era.

The kicker is that the company owning the publication “exposing” AdVon used AdVon.

Let me offer several observations:

  1. The research reveals what will become an increasingly widespread business practice. But the practice of using AI to generate baloney and spam variants is not the future. It is now.
  2. The demand for what appears to be old fashioned information generation is high. The cost of producing this type of information is going to force those who want to generate information to take short cuts. (How do I know? How about the president of Stanford University who took short cuts. That’s how. When a university president muddles forward for years and gets caught by accident, what are students learning? My answer: Cheat better than that.)
  3. AI diffusion is like gerbils. First, you have a couple of cute gerbils in your room. As a nine year old, you think those gerbils are cute. Then you have more gerbils. What do you do? You get rid of the gerbils in your house. What about the gerbils? Yeah, they are still out there. One can see gerbils; it is more difficult to see the AI gerbils. The fix is not the plastic bag filled with gerbils in the garbage can. The AI gerbils are relentless.

Net net: Adapt and accept that AI is here, reproducing rapidly, and evolving. The future means “adapt.” One suggestion: Hire McKinsey & Co. to help your firm make tough decisions. That sometimes works.

Stephen E Arnold, May 14, 2024

Big Tech and Their Software: The Tent Pole Problem

May 1, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I remember a Boy Scout camping trip. I was a Wolf Scout at the time, and my “pack” had the task of setting up our tent for the night. The scout master was Mr. Johnson, and he left it to us. The weather did not cooperate; the tent pegs pulled out in the wind. The center tent pole broke. We stood in the rain. We knew the badge for camping was gone, just like a dry place to sleep. Failure. Whom could we blame? I suggested, “McKinsey & Co.” I had learned that third-parties were usually fall guys. No one knew what I was talking about.


Okay, ChatGPT, good enough.

I thought about the tent pole failure, the miserable camping experience, and the need to blame McKinsey or at least an entity other than ourselves. The memory surfaced as I read “Laws of Software Evolution.” The write up sets forth some ideas which may not be firm guidelines like those articulated by the World Court, but they are about as enforceable.

Let’s look at the laws explicated in the essay.

The first law is that software exists to support a real-world task. As a result (a corollary maybe?), the software has to evolve. That is the old chestnut: “No man ever steps in the same river twice, for it’s not the same river and he’s not the same man.” The problem is change, which consumes money and time. Consequently, original software is wrapped, peppered with calls to snappy new modules designed to fix up or extend the original software.

The second law is that when changes are made, the software construct becomes more complex. Complexity is what humans do. A true master makes certain processes simple. Software has artists, poets, and engineers with vision. Simple may not be a key component of the world the programmer wants to create. Thus, increasing complexity creates surprises like unknown dependencies, sluggish performance, and a giant black hole of costs.
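The first two laws can be sketched in code. This is a deliberately simple illustration with hypothetical names, not anyone's actual system: the original routine is never rewritten, only wrapped, and each wrapper adds another dependency the next maintainer must untangle:

```python
# Sketch of the first two laws: the legacy routine stays untouched while
# "snappy new modules" wrap it, so the dependency chain (complexity) grows.
# All names and the pricing rules are invented for illustration.
def legacy_price(order: dict) -> float:
    """The original routine nobody dares rewrite."""
    return order["quantity"] * order["unit_price"]

def with_tax(price_fn):
    """Years later: a tax module bolted on around the original."""
    def wrapper(order):
        return price_fn(order) * (1 + order.get("tax_rate", 0.0))
    return wrapper

def with_discount(price_fn):
    """Later still: a discount module wrapped around the wrapper."""
    def wrapper(order):
        return price_fn(order) - order.get("discount", 0.0)
    return wrapper

# "Evolution": the original code is still there, under two layers.
price = with_discount(with_tax(legacy_price))
print(price({"quantity": 2, "unit_price": 10.0,
             "tax_rate": 0.25, "discount": 1.0}))
# -> 24.0  (2 x 10 = 20, plus 25% tax = 25, minus 1.0 discount)
```

Each layer works, and no single layer looks complex; the surprise dependencies and cost black hole come from the stack as a whole, which is the second law in miniature.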

The third law is not explicitly called out like Laws One and Two. Here’s my interpretation of the “lurking law,” as I have termed it:

Code can be shaped and built upon.

My reaction to this essay is positive, but the link to evolution eludes me. The one issue I want to raise is that once software is built, deployed, and fiddled with, it is like a river pier built by Roman engineers. Moving the pier or fixing it so it will persist is a very, very difficult task. At some point, even the Roman concrete will weather away. The bridge or structure will fall down. Gravity wins. I am okay with software devolution.

The future, therefore, will be stuffed with software breakdowns. The essay makes a logical statement:

… we should embrace the malleability of code and avoid redesign processes at all costs!

Sorry. Won’t happen. Woulda, shoulda, and coulda cannot do the job.

Stephen E Arnold, May 1, 2024

AI Versus People? That Is Easy. AI

April 25, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I don’t like to include management information in Beyond Search. However, I have noticed more stories about management decisions related to information technology. Here’s an example of my breaking my own editorial policies. Navigate to “SF Exec Defends Brutal Tech Trend: Lay Off Workers to Free Up Cash for AI.” I noted this passage:

Executives want fatter pockets for investing in artificial intelligence.


Okay, Mr. Efficiency and mobile phone betting addict, you have reached a logical decision. Why are there no pictures of friends, family, and achievements in your window office? Oh, that’s MSFT Copilot’s work. What’s that say?

I think this means that “people resources” can be dumped in order to free up cash to place bets on smart software. The write up explains the management decision making this way:

Dropbox’s layoff was largely aimed at freeing up cash to hire more engineers who are skilled in AI.

How expensive is AI for the big technology companies? The write up provides this factoid which comes from the masterful management bastion:

Google AI leader Demis Hassabis said the company would likely spend more than $100 billion developing AI.

Smart software is the next big thing. Big outfits like Amazon, Google, Facebook, and Microsoft believe it. Venture firms appear to be into AI. Software development outfits are beavering away with smart technology to make their already stellar “good enough” products even better.

Money buys innovation until it doesn’t. The reason is that the time from roll out to saturation can be difficult to predict. Look how long it has taken smart phones to become marketing exercises, not technology demonstrations. How significant is saturation? Look at the machinations at Apple or at CPUs that are increasingly difficult to differentiate for a person who wants to use a laptop for business.

There are benefits. These include:

  • Those getting fired can say, “AI RIF’ed me.”
  • Investments in AI can perk up investors.
  • Jargon-savvy consultants can land new clients.
  • Leadership teams can rise above termination because these wise professionals are the deciders.

A few downsides can be identified despite the immaturity of the sector:

  • Outputs can be incorrect, leading to what might be called poor decisions. (Sorry, Ms. Smith, your child died because the smart dosage system malfunctioned.)
  • A large no-man’s land is opening between the fast-moving start-ups that surf on cloud AI services and the behemoths providing access to expensive infrastructure. Who wants to operate in no-man’s land?
  • The lack of controls on smart software guarantees that bad actors will have ample tools with which to innovate.
  • Knock-on effects are difficult to predict.

Net net: AI may be diffusing more quickly and in ways some experts choose to ignore… until they are RIF’ed.

Stephen E Arnold, April 25, 2024

Kicking Cans Down the Street Is Not Violence. Is It a Type of Fraud Perhaps?

April 25, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Ah, spring, when young men’s fancies turn to thoughts of violence. Forget the Iran-Israel dust-up. Forget the Russia special operation. Think about this Bloomberg headline:

Tech’s Cash Crunch Sees Creditors Turn ‘Violent’ With One Another


Thanks, ChatGPT. Good enough.

Will this be drones? Perhaps a missile or two? No. I think it will be marketing hoo hah. News releases may not inflict mortal injury, although someone has probably died from bad publicity. Still, the rhetorical tone seems, how should we phrase it, over the top maybe?

The write up says:

Software and services companies are in the spotlight after issuing almost $30 billion of debt that’s classed as distressed, according to data compiled by Bloomberg, the most in any industry apart from real estate.

How do wizards of finance react to this “risk”? Answer:

“These two phenomena, coupled with the covenant-lite nature of leveraged loans today, have been the primary drivers of the creditor-on-creditor violence we’re seeing,” he [Jason Mudrick, founder of distressed credit investor Mudrick Capital] said.

Shades of the Sydney slashings or vehicle fires in Paris.

Here’s an example:

One increasingly popular maneuver these days, known as non-pro rata uptiering, sees companies cut a deal with a small group of creditors who provide new money to the borrower, pushing others further back in the line to be repaid. In return, they often partake in a bond exchange in which they receive a better swap price than other creditors.

Does this sound like “Let’s kick the can down the road”? Not articulated is the idea, “Let’s see what happens. If we fail, our management team is free to bail out.”

Nifty, right?

Financial engineering is a no harm, no foul game for some. Those who lose money? Yeah, too bad.

Stephen E Arnold, April 25, 2024

Paranoia or Is it Parano-AI? Yes

April 22, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I get a kick out of the information about the future impact of smart software. If those writing about the downstream consequences of artificial intelligence were on the beam, those folks would be camping out in one of those salubrious Las Vegas casinos. They are not. Thus, the prognostications provide more insight into the authors’ fears in my opinion.

OpenAI produced this good enough image of a Top Dog reading reports about AI’s taking jobs from senior executives. Quite a messy desk, which is an indicator of an inferior executive mindset.

Here’s an example: “Even the Boss Is Worried! Hundreds of Chief Executives Fear AI Could Steal Their Jobs Too.” The write up is based on a study conducted by Censuswide for AND Digital. Here we go, fear lovers:

  1. A “jobs apocalypse”: “AI experts have predicted a 50-50 chance machines could take over all our jobs within a century.”
  2. Scared yet? “Nearly half – 43 per cent – of bosses polled admitted they too were worried AI could steal their jobs.”
  3. Ignorance is bliss: “44 per cent of global CEOs did not think their staff were ready to handle AI.”
  4. Die now? “A survey of over 2,700 AI researchers in January meanwhile suggested AI could well be ‘better and cheaper’ than humans in every profession by 2116.”

My view is that the diffusion of certain types of smart software will occur over time. If the technology proves it can cut costs and be good enough, then it will be applied where the benefits are easy to identify and monitor. When something goes off the rails, the smart software will suffer a setback. Changes will be made, and the “Let’s try again” approach will kick in. Can motivated individuals adapt? Sure. The top folks will adjust and continue to perform. The laggards will get an “Also Participated” ribbon and collect money by busking, cleaning houses, or painting houses. The good old Darwinian principles don’t change. A digital panther can kill you just as dead as a real panther.

Exciting? Not for a surviving dinobaby.

Stephen E Arnold, April 22, 2024
