The Secret to Business Success

June 18, 2025

Just a dinobaby and a tiny bit of AI goodness: How horrible is this approach?

I don’t know anything about psychological conditions. I read “Why Peter Thiel Thinks Asperger’s Is A Key to Succeeding in Business.” I did what any semi-hip dinobaby would do. I logged into You.com and asked what the heck Asperger’s was. Here’s what I learned:

  • The term "Asperger’s Syndrome" was introduced in the 1980s by Dr. Lorna Wing, based on earlier work by Hans Asperger. However, the term has become controversial due to revelations about Hans Asperger’s involvement with the Nazi regime.
  • Diagnostic Shift: Asperger’s Syndrome was officially included in the DSM-IV (1994) and ICD-10 (1992) but was retired in the DSM-5 (2013) and ICD-11 (2019). It is now part of the autism spectrum, with severity levels used to indicate the level of support required.


Image appeared with the definition of Asperger’s “issue.” A bit of a You.com bonus for the dinobaby.

These factoids are new to me.

The You.com smart report told me:

Key Characteristics of Asperger’s Syndrome (Now ASD-Level 1)

  1. Social Interaction Challenges:
    • Difficulty understanding social cues, body language, and emotions.
    • Limited facial expressions and awkward social interactions.
    • Conversations may revolve around specific topics of interest, often one-sided.
  2. Restricted and Repetitive Behaviors:
    • Intense focus on narrow interests (e.g., train schedules, specific hobbies).
    • Adherence to routines and resistance to change.
  3. Communication Style:
    • No significant delays in language development, but speech may be formal, monotone, or unusual in tone.
    • Difficulty using language in social contexts, such as understanding humor or sarcasm.
  4. Motor Skills and Sensory Sensitivities:
    • Clumsiness or poor coordination.
    • Sensitivity to sensory stimuli like lights, sounds, or textures.

Now what does the write up say? Mr. Thiel (Palantir Technologies and other interests) believes:

Most of them [people with Asperger’s] have little sense of unspoken social norms or how to conform to them. Instead they develop a more self-directed worldview. Their beliefs on what is or is not possible come more from themselves, and less from what others tell them they can do or cannot do. This causes a lot of anxiety and emotional hardship, but it also gives them more freedom to be different and experiment with new ideas.

The idea is that the alleged disorder allows certain individuals with Asperger’s to change the world.

The write up says:

The truth is that if you want to start something truly new, you almost by definition have to be unconventional and do something that everyone else thinks is crazy. This is inevitably going to mean you face criticism, even for trying it. In Thiel’s view, because those with Aspergers don’t register that criticism as much, they feel freer to make these attempts.

Is it possible for universities with excellent reputations and prestigious MBA programs to create people with the “virtues” of Asperger’s? Do business schools aspire to impart this type of “secret sauce” to their students?

I suppose one could ask a person with the blessing of Asperger’s, but as the You.com report told me, some of these lucky individuals may [a] use speech that is formal, monotone, or unusual in tone and [b] have difficulty using language in social contexts, such as understanding humor or sarcasm.

But if one can change the world, carry on in the spirit of Hans Asperger, and make a great deal of money, it is good to have this unique “skill.”

Stephen E Arnold, June 18, 2025

AI Can Do Code, Right?

June 18, 2025

Developer Jj at Blogmobly deftly rants against AI code assistants in, “The Copilot Delusion.” Jj admits tools like GitHub Copilot and Claude Codex are good at some things, but those tasks are mere starting points for skillful humans to edit or expand upon. Or they should be. Instead, firms turn to bots more than they should in the name of speed. But AI gets its information from random blog posts and comment sections. Those are nowhere near the reasoning and skills of an experienced human coder. What good are lines of code that are briskly generated if they do not solve the problem?

Read the whole post for the strong argument for proficient humans and against overreliance on bots. These paragraphs stuck out to us:

“The real horror isn’t that AI will take our jobs. It’s that it will entice people who never wanted the job to begin with. People who don’t care for quality. It’ll remove the already tiny barrier to entry that at-least required people to try and comprehend control flow. Vampires with SaaS dreams and Web3 in their LinkedIn bio. Empty husks who see the terminal not as a frontier, but as a shovel for digging up VC money. They’ll drool over their GitHub Copilot like it’s the holy spirit of productivity, pumping out React CRUD like it’s oxygen. They’ll fork VS Code yet again, just to sell the same dream to a similarly deluded kid.”

Also:

“And what’s worse, we’ll normalize this mediocrity. Cement it in tooling. Turn it into a best practice. We’ll enshrine this current bloated, sluggish, over-abstracted hellscape as the pinnacle of software. The idea that building something lean and wild and precise, or even squeezing every last drop of performance out of a system, will sound like folklore. If that happens? If the last real programmers are drowned in a sea of button-clicking career-chasers – then I pity the smart outsider kids to come after me. Defer your thinking to the bot, and we all rot.”

Eloquently put: Good enough is now excellence.

Cynthia Murrell, June 18, 2025

Control = Power and Money: Anything Else Is an Annoyance

June 17, 2025

I read “Self-Hosting Your Own Media Considered Harmful.” I worked through about 300 comments on Ycombinator’s Hacker News page. Jeff Geerling, the author of the write up and a YouTube content creator, found himself in the deadfall of a “strike” or “takedown” or whatever unilateral action by Google is called. The essay says:

Apparently self-hosted open source media library management is harmful. Who knew open source software could be so subversive?

Those YCombinator comments make clear that some people understand the Google game. Other comments illustrate the cloud of unknowing that distorts one’s perception of the nature of the Google magic show which has been running longer than the Sundar & Prabhakar Comedy Act.

YouTube, unlike Google AI, is no joke to many people who believe that they can build a life by creating videos without pay and posting them to a service that is what might be called a new version of the “old Hollywood” studio system.

Let’s think about an answer to this subversive question. (Here’s the answer: Content that undermines Google’s power, control, or money flow. But you knew that, right?)

Let’s expand, shall we?

First, Google makes rules, usually without much more than a group of wizards of assorted ages talking online, at Foosball, or (sometimes) in a room with a table, chairs, a whiteboard, and other accoutrements of what business life was like in the 1970s. Management friction is largely absent; sometimes when leadership input is required, leadership avoids getting into the weeds. “Push down” is much better than an old-fashioned, hierarchical “dumb” approach. Therefore, the decisions are organic and usually arbitrary until something “big” happens like Microsoft’s 2023 announcement about its deal with OpenAI. Then leadership does the deciding. Code Red or whatever it was called illustrates the knee-jerk approach to issues that just go critical. Phase change.

Second, the connections between common sense, professional behavior (yes, I am including suicide attempts induced by corporate dalliance and telling customers “they have created a problem”), and consistency are irrelevant. Actions are typically local and context free. The consequence: mysterious and often disastrous notifications of a “violation.” I love it when companies judged to be operating in an illegal manner dole out notices of an “offense.” Keep the idea of “power” in mind, please.

Third, the lack of consistent, informed mechanisms to find out the “rule” an individual allegedly violated is the preferred approach to grousing. If an action, intentional or unintentional, could, might, did, would, or will cause revenue loss, then the perpetrator is guilty. Some are banned. Others, like a former CIA professional, are just told, “Take that video down.”

How does the cited essay handle the topic? Mr. Geerling says:

I was never able to sustain my open source work based on patronage, and content production is the same—just more expensive to maintain to any standard (each video takes between 10-300 hours to produce, and I have a family to feed, and US health insurance companies to fund). YouTube was, and still is, a creative anomaly. I’m hugely thankful to my Patreon, GitHub, and Floatplane supporters—and I hope to have direct funding fully able to support my work someday. But until that time, YouTube’s AdSense revenue and vast reach is a kind of ‘golden handcuff.’ The handcuff has been a bit tarnished of late, however, with Google recently adding AI summaries to videos—which seems to indicate maybe Gemini is slurping up my content and using it in their AI models?

This is an important series of statements. First, YouTube relies on content creators who post their work on YouTube for the same reason people use Telegram or BlueSky: These are free publicity channels that might yield revenue or a paying gig. Content creators trade off control and yield power to these “comms conduits” for the belief that something will come out of the effort. These channels are designed to produce revenue for their owners, not the content creators. The “hope” of a payoff means the content will continue to flow. No grousing, lawyer launch, or blog post is going to change the mechanism that is now entrenched.

Second, open source is now a problematic issue. For the Google, the open source DeepSeek means that it must market its AI prowess more aggressively because it is threatened. For the Google, content that could alienate an advertiser and a revenue stream is, by definition, bad content. That approach will become more widely used and more evident as Google search-based advertising is eroded by rather poor “smart” systems that just deliver answers. Furthermore, figuring out how to pay for smart software is going to lead to increasingly Draconian measures from Google-type outfits to sustain and grow revenue. Money comes from power to deliver information that will lure or force advertisers to buy access. End of story.

Third, Mr. Geerling politely raises the question about Google’s use of YouTube content to make its world-class smart software smarter. The answer to the question, based on what I have learned from my sources, is, “Yes.” Is this a surprise? Not to me. Maybe a content creator thinks that YouTube will set out rules, guidelines, and explanations of how it uses its digital vacuum cleaner to decrease the probability that its AI system will spout stupidity like “Kids, just glue cheese on pizza”? That will not happen because the Google-type of organization does not see additional friction as desirable. Google wants money. It has power.

What’s the payoff for Google? Control. If you want to play, you have to pay. Advertisers provide cash based on a rigged slot machine model. Users provide “data exhaust” to feed into the advertising engine. YouTube creators provide free content to produce clicks, clusters of intent, and digital magnets designed to stimulate interest in that which Google provides.

Mr. Geerling’s essay is pretty good. Using good judgment, he does not work through the blood-drawing brambles of what Google does. That means he operates in a professional manner.

Bad news, Mr. Geerling, that won’t work. The Google has been given control of information flows and that translates to money and power.

Salute the flag, adapt, and just post content that sells ads. Open source is a sub-genre of offensive content. Adapt or be deprived of Googley benefits.

Stephen E Arnold, June 17, 2025

Googley: A Dip Below Good Enough

June 16, 2025

A dinobaby without AI wrote this. Terrible, isn’t it? I did use smart software for the good enough cartoon. See, this dinobaby is adapting.

I was in Washington, DC, from June 9 to 11, 2025. My tracking of important news about the online advertising outfit was disrupted. I have been trying to catch up with new product mist, AI razzle dazzle, and faint signals of importance. The first little beep I noticed appeared in “Google’s Voluntary Buyouts Lead its Internal Restructuring Efforts.” “Ah, ha,” I thought. After decades of recruiting the smartest people in the world, the Google is dumping full time equivalents. Is this a move to become more efficient? Google has indicated that it is into “efficiency”; therefore, has the Google redefined the term? Had Google figured out that the change to tax regulations about research investments sparked a re-think? Is Google so much more advanced than other firms that its leadership can jettison staff who choose to bail with a gentle smile and an enthusiastic wave of leadership’s hand?


The home owner evidences a surge in blood pressure. The handyman explains that the new door has been installed in a “good enough” manner. If it works for service labor, it may work for Google-type outfits too. Thanks, Sam AI-Man. Your ChatGPT came through with a good enough cartoon. (Oh, don’t kill too many dolphins, snail darters, and lady bugs today, please.)

Then I read “Google Cloud Outage Brings Down a Lot of the Internet.” Enticed by the rock solid metrics for the concept of “a lot,” I noticed this statement:

Large swaths of the internet went down on Thursday (June 12, 2025), affecting a range of services, from global cloud platform Cloudflare to popular apps like Spotify. It appears that a Google Cloud outage is at the root of these other service disruptions.

What? Google, the failover champion par excellence, went down. Will the issue be blamed on a faulty upgrade? Will a single engineer who will probably be given an opportunity to find his or her future elsewhere be identified? Will Google be able to figure out what happened?

What are the little beeps my system continuously receives about the Google?

  1. Wikipedia gets fewer clicks than OpenAI’s ChatGPT? Where’s the Google AI in this? Answer: Reorganizing, buying out staff, and experiencing outages.
  2. Google rolls out more Gemini functions for Android devices. Where’s the stability and service availability for these innovations? Answer: I cannot look up the answer. Google is down.
  3. Where’s the revenue from online advertising as traditional Web search presents some thunderclouds? Answer: Well, that is a good question. Maybe revenues from Waymo, a deal with Databricks, or a bump in Pixel phone sales?

My view is that the little beeps may become self-amplifying. The magic of the online advertising model seems to be fading like the allure of Disneyland. When imagineering becomes imitation, more than marketing fairy dust may be required.

But what’s evident from the tiny beeps is that Google is now operating in “good enough” mode. Will it be enough to replace the Yahoo-GoTo-Overture pay-to-play approach to traffic?

Maybe Waymo is the dark horse when the vehicles are not combustible?

Stephen E Arnold, June 16, 2025

Another Vote for the Everything App

June 13, 2025

Just a dinobaby and no AI: How horrible an approach?

An online information service named 9 to 5 Mac published an essay / interview summary titled “Nothing CEO says Apple No Longer Creative; Smartphone Future Is a Single App.” The write up focuses on the “inventor / coordinator” of the OnePlus mobile devices and the Nothing Phone. The key point of the write up is the idea that at some point in the future, one will have a mobile device and a single app, the everything app.

The article quotes a statement Carl Pei (the head of the Nothing Phone) made to another publication; to wit:

I believe that in the future, the entire phone will only have one app—and that will be the OS. The OS will know its user well and will be optimized for that person […] The next step after data-driven personalization, in my opinion, is automation. That is, the system knows you, knows who you are, and knows what you want. For example, the system knows your situation, time, place, and schedule, and it suggests what you should do. Right now, you have to go through a step-by-step process of figuring out for yourself what you want to do, then unlocking your smartphone and going through it step by step. In the future, your phone will suggest what you want to do and then do it automatically for you. So it will be agentic and automated and proactive.

This type of device will arrive in seven to 10 years.

For me, the notion of an everything app or a super app began in 2010, but I am not sure who first mentioned the phrase to me. I know that WeChat, the Chinese everything app, became available in 2011. The Chinese government was aware at some point that an “everything” app would make surveillance, social scoring, and filtering much easier. The “let many approved flowers bloom” approach of the Apple and Google online app stores was inefficient. One app was more direct, and I think the A to B approach to tracking and blocking online activity makes sense to many in the Middle Kingdom. The trade off of convenience for a Really Big Brother was okay with citizens of China. Go along and get along may have informed the uptake of WeChat.

Now the everything app seems like a sure bet. The unknown is which outstanding technology firm will prevail. The candidates are WeChat, Telegram, X.com, Sam Altman’s new venture, or a surprise player. Will other apps (the not everything apps from restaurant menus to car washes) survive? Sure. But if Sam AI-Man’s Ive smart device and his stated goal of buying the Chrome browser from the Google catch on, the winner may be a CEO who was fired by his board, came back, and cleaned out those who did not jump on the AI-Man’s bandwagon.

That’s an interesting thought. It is Friday the 13th, Google. You too Microsoft. And Apple. How could I have forgotten Tim Cook and his team of AI adepts?

Stephen E Arnold, June 13, 2025

Will Amazon Become the Bell Labs of Consumer Products?

June 12, 2025

Just a dinobaby and no AI: How horrible an approach?

I did some work at Bell Labs and then at the Judge Greene crafted Bellcore (Bell Communications Research). My recollection is that the place was quiet, uneventful, and had a lousy cafeteria. The Cherry Hill Mall provided slightly better food, just slightly. Most of the people were normal compared to the nuclear engineers at Halliburton and my crazed colleagues at the blue chip consulting firm dumb enough to hire me before I became a dinobaby. (Did you know that security at the Cherry Hill Mall had a golf cart to help Bell Labs’ employees find their vehicle? The reason? Bell Labs hired staff to deal with this recurring problem. Yes, Howard, Alan, and I lost our car when we went to lunch. I finally started parking in the same place and wrote the door exit and lamp number down in my calendar. Problem solved!)

Is Amazon like that? On a visit to Amazon, I formed an impression somewhat different from Bell Labs, Halliburton, and the consulting firm. The staff were not exactly problematic. I just recall having to repeat and explain things. Amazon struck me as an online retailer with money and challenges in handling traffic. The people with whom I interacted when I visited with several US government professionals were nice and different from the technical professionals at the organizations which paid me cash money.

Is this important? Yes. I don’t think of Amazon as particularly innovative. When it wanted to do open source search, it hired some people from Lucid Imagination, now Lucid Works. Amazon just did what other Lucene/Solr large-scale users did: Index content and allow people to run queries. Not too innovative in my book. Amazon also industrialized back office and warehouse projects. These are jobs that require finding existing products and consultants, asking them to propose “solutions,” picking one, and getting the workflow working. Again, not particularly difficult when compared to the holographic memory craziness at Bell Labs or the consulting firm’s business of inventing consumer products for companies in the Fortune 500 that would sell and get the consulting firm’s staggering fees paid in cash promptly. In terms of the nuclear engineering work, Amazon was and probably still is, not in the game. Some of the rocket people are, but the majority of the Amazon workers are in retail, digital plumbing, and creating dark pattern interfaces. This is “honorable” work, but it is not invention in the sense of slick Monte Carlo code cranked out by Halliburton’s Dr. Julian Steyn or multi-frequency laser technology for jamming more data through a fiber optic connection.

I read “Amazon Taps Xbox Co-Founder to Lead new Team Developing Breakthrough Consumer Products.” I asked myself, “Is Amazon now in the Bell Labs’ concept space? The write up tries to answer my question, stating:

The ZeroOne team is spread across Seattle, San Francisco and Sunnyvale, California, and is focused on both hardware and software projects, according to job postings from the past month. The name is a nod to its mission of developing emerging product ideas from conception to launch, or “zero to one.” Amazon has a checkered history in hardware, with hits including the Kindle e-reader, Echo smart speaker and Fire streaming sticks, as well as flops like the Fire Phone, Halo fitness tracker and Glow kids teleconferencing device. Many of the products emerged from Lab126, Amazon’s hardware research and development unit, which is based in Silicon Valley.

Okay, the Fire Phone (maybe Foney) and the Glow thing for kids? Innovative? I suppose. But to achieve success in raw innovation like the firms at which I was an employee? No, Amazon is not in that concept space. Amazon is more comfortable cutting a deal with Elastic instead of “inventing” something like Google’s Transformer or Claude Shannon’s approach to extracting a signal from noise. Amazon sells books and provides an almost clueless interface to managing those on the Kindle eReader.

The write up says (and I believe everything I read on the Internet):

Amazon has pulled in staffers from other business units that have experience developing innovative technologies, including its Alexa voice assistant, Luna cloud gaming service and Halo sleep tracker, according to LinkedIn profiles of ZeroOne employees. The head of a projection mapping startup called Lightform that Amazon acquired is helping lead the group. While Amazon is expanding this particular corner of its devices group, the company is scaling back other areas of the sprawling devices and services division.

Innovation is a risky business. Amazon sells stuff and provides online access with uptime of 98 or 99 percent. It does not “do” innovation. I wrote a book chapter about Amazon’s blockchain patents. What happened to that technology, some of which struck me as promising and sort of novel given the standards for US patents? The answer, based on the information I have seen since I wrote the book chapter, is, “Not much.” In less time, Telegram dumped out dozens of “inventions.” These have ranged from sticking crypto wallets into every Messenger user’s mini app to refining the bot technology to display third-party, off-Telegram Web sites on the fly for about 900 million Messenger users.

Amazon hit a dead end with Alexa and something called Halo.

When an alleged criminal organization operating as an “Airbnb” outfit with no fixed offices and minimal staff can innovate and Amazon with its warehouses cannot, there’s a useful point of differentiation in my mind.

The write up reports:

Earlier this month, Amazon laid off about 100 of the group’s employees. The job cuts included staffers working on Alexa and Amazon Kids, which develops services for children, as well as Lab126, according to public filings and people familiar with the matter who asked not to be named due to confidentiality. More than 50 employees were laid off at Amazon’s Lab126 facilities in Sunnyvale, according to Worker Adjustment and Retraining Notification (WARN) filings in California.

Okay. Fire up a new unit. Will the approach work? I hope for stakeholders’ and employees’ sake, Amazon hits a home run. But in the back of my mind, innovation is difficult. Quite special people are needed. The correct organizational set up or essentially zero set up is required. Then the odds are usually against innovation, which, if truly novel, evokes resistance. New is threatening.

Can the Bezos bulldozer shift into high gear and do the invention thing? I don’t know but I have some nagging doubts.

Stephen E Arnold, June 12, 2025

Musk, Grok, and Banning: Another Burning Tesla?

June 12, 2025

Just a dinobaby and no AI: How horrible an approach?

“Elon Musk’s Grok Chatbot Banned by a Quarter of European Firms” reports:

A quarter of European organizations have banned Elon Musk’s generative AI chatbot Grok, according to new research from cybersecurity firm Netskope.

I find this interesting because my own experiences with Grok have been underwhelming. My first query to Grok was, “Can you present only Twitter content?” The answer was a bunch of jabber which meant, “Nope.” Subsequent queries were less than stellar, and I moved it out of my rotation for potentially useful AI tools. Did the sample crafted by Netskope have a similar experience?

The write up says:

Grok has been under the spotlight recently for a string of blunders. They include spreading false claims about a “white genocide” in South Africa and raising doubts about Holocaust facts.  Such mishaps have raised concerns about Grok’s security and privacy controls. The report said the chatbot is frequently blocked in favor of “more secure or better-aligned alternatives.”

I did not feel comfortable with Grok because of content exclusion or what I like to call willful or unintentional coverage voids. The easiest way to remove or weaponize content in the commercial database world is to exclude it. When a person searches a for-fee database, the editorial policy for that service should make clear what’s in and what’s out. Filtering out is the easiest way to marginalize a concept, push down a particular entity, or shape an information stream.
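The mechanism is simple to sketch. A minimal, hypothetical illustration (the `BANNED_TOPICS` list, documents, and function names are mine, not from any real system) shows how exclusion at index time leaves no trace for the searcher:

```python
# Hypothetical sketch of editorial exclusion at index time. The exclusion
# list and sample documents are illustrative assumptions, not drawn from
# any actual commercial database or from Grok.
BANNED_TOPICS = {"takedown", "strike"}

documents = [
    {"id": 1, "text": "Video creator hit with a takedown notice"},
    {"id": 2, "text": "New Pixel phone sales figures released"},
    {"id": 3, "text": "Creator discusses the strike system"},
]

def build_index(docs):
    """Build a word-to-document index, silently dropping any document
    that mentions a banned topic. Excluded items never enter the index,
    so no query can surface them; the user sees no gap, only silence."""
    index = {}
    for doc in docs:
        words = set(doc["text"].lower().split())
        if words & BANNED_TOPICS:
            continue  # the coverage void: dropped at ingest, no record kept
        for word in words:
            index.setdefault(word, []).append(doc["id"])
    return index

index = build_index(documents)
print(index.get("takedown"))  # None: the concept is simply not findable
print(index.get("pixel"))
```

The point of the sketch is that the searcher has no way to distinguish "nothing was ever written about this" from "it was filtered out," which is why an editorial policy stating what is in and out matters.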

The cited write up suggests that Grok is including certain content to give it credence, traction, and visibility. Assuming that an electronic information source is comprehensive is a very risky approach to assembling data.

The write up adds another consideration to smart software, which — like it or not — is becoming the new way to become informed or knowledgeable. The information may be shallow, but the notion of relying on weaponized information or systems that spy on the user presents new challenges.

The write up reports:

Stable Diffusion, UK-based Stability AI’s image generator, is the most blocked AI app in Europe, barred by 41% of organizations. The app was often flagged because of concerns around privacy or licensing issues, the report found.

How concerned should users of Grok or any other smart software be? Worries about Grok may be an extension of fear of a burning Tesla or the face of the Grok enterprise. In reality, smart software fosters the illusion of completeness, objectivity, and freshness of the information presented. Users are eager to use a tool that seems to make life easier and them appear more informed.

The risks of reliance on Grok or any other smart software include:

  1. The output is incomplete.
  2. The output is weaponized or shaped by intentional actions or factors beyond the developers’ control.
  3. The output is simply wrong, made up, or hallucinated.
  4. Users may act as though shallow knowledge is sufficient for a decision.

The alleged fact that 25 percent of the Netskope sample have taken steps to marginalize Grok is interesting. That may be a positive step based on my tests of the system. However, I am concerned that the others in the sample are embracing a technology which appears to be delivering the equivalent of a sugar rush after a gym workout.

Smart software is being applied in novel ways in many situations. However, what are the demonstrable benefits other than the rather enthusiastic embrace of systems and methods known to output errors? The rejection of Grok is one interesting factoid if true. But against the blind acceptance of smart software, Grok’s down check may be little more than a person stepping away from a burning Tesla. The broader picture is that the buildings near the immolating vehicle are likely to catch on fire.

Stephen E Arnold, June 12, 2025

LLMs, Dread, and Good Enough Software (Fast and Cheap)

June 11, 2025

Just a dinobaby and no AI: How horrible an approach?

More philosopher programmers have grabbed a keyboard and loosed their inner Plato. A good example is the essay “AI: Accelerated Incompetence” by Doug Slater. I have a hypothesis about this embrace of epistemological excitement, but that will appear at the end of this dinobaby post.

The write up posits:

In software engineering, over-reliance on LLMs accelerates incompetence. LLMs can’t replace human critical thinking.

The driver of the essay is that some believe that programmers should use outputs from large language models to generate software. Doug does not focus on Google and Microsoft. Both companies are convinced that smart software can write good enough code. (Good enough is the new standard of excellence at many firms, including the high-flying, thin-air breathing Googlers and Softies.)

The write up identifies three beliefs, memes, or MBAisms about this use of LLMs. These are:

  • LLMs are my friend. Actually LLMs are part of a push to get more from humanoids involved in things technical. For a believer, time is gained using LLMs. To a person with actual knowledge, LLMs create work in order to catch errors.
  • Humans are unnecessary. This is the goal of the bean counter. The goal of the human is to deliver something that works (mostly). The CFO is supposed to reduce costs and deliver (real or spreadsheet fantasy) profits. Humans, at least for now, are needed when creating software. Programmers know how to do something and usually demonstrate “nuance”; that is, intuitive actions and thoughts.
  • LLMs can do what humans do, especially programmers and probably other technical professionals. As evidence of doing what humans do, the anecdote about the robot dog attacking its owner illustrates that smart software has some glitches. Hallucinations? Yep, those too.

The wrap up to the essay states:

If you had hoped that AI would launch your engineering career to the next level, be warned that it could do the opposite. LLMs can accelerate incompetence. If you’re a skilled, experienced engineer and you fear that AI will make you unemployable, adopt a more nuanced view. LLMs can’t replace human engineering. The business allure of AI is reduced costs through commoditized engineering, but just like offshore engineering talent brings forth mixed fruit, LLMs fall short and open risks. The AI hype cycle will eventually peak. Companies which overuse AI now will inherit a long tail of costs, and they’ll either pivot or go extinct.

As a philosophical essay crafted by a programmer, I think the write up is very good. If I were teaching again, I would award the essay an A minus. I would suggest some concrete examples like “Google suggests gluing cheese on pizza.”

Now what’s the motivation for the write up? My hypothesis is that some professional developers have a Spidey sense that the diffident financial professional will license smart software and fire the humanoids who write code. Is this a prudent decision? For the bean counter, it is self-preservation. He or she does not want to be sent to find a future elsewhere. For the programmer, the drum beat of efficiency and the fife of cost reduction are now loud enough to leak through noise-canceling headphones. Plato did not have an LLM, and he hallucinated with the chairs and rear-view mirror metaphors.

Stephen E Arnold, June 11, 2025

A Decade after WeChat a Marketer Touts OpenAI as the Everything App

June 10, 2025

Just a dinobaby and no AI: How horrible an approach?

Lester thinks OpenAI will become the Internet. Okay, Lester, are you on the everything-app bandwagon? That buggy rolled in China and became one of the little engines that could for social scoring. “How ChatGPT Could Replace the Internet As We Know It” provides quite a bit about Lester. Zipping past the winner prose, I noted this passage:

In fact, according to Khyati Hooda of Keywords Everywhere, ChatGPT handles 54% of queries without using traditional search engines. This alarming stat indicates a shift in how users seek information. As the adoption grows and ChatGPT cements itself as the single source of information, the internet as we know it becomes kinda pointless.

One question: Where does the information originate? From intercepted mobile communications, from nifty listening devices like smart TVs, or from WeChat-style methods? The jump from the Internet to an everything app is a nifty way to state that everything is reducible to bits. Get the bits, get the “information.”

Lester says:

Basically, ChatGPT is cutting out the middleman, but what’s even scarier is that it’s working. ChatGPT reached 1 million users in just 5 days and has 400 million weekly active users as of early 2025, making it the fastest-growing consumer app in history. The platform receives over 5.19 billion visits per month, ranking as the 8th most visited website in the world.

He explains:

What started as a chatbot has become a platform where people book travel, plan meals, write emails, create schedules, and even do homework. Surveys show that around 80% of ChatGPT users leverage it for professional tasks such as drafting emails, creating reports, and generating marketing content. This marks a fundamental shift in how we engage with the internet, where more everyday tasks move from web browsing to a prompt.

How likely is this shift, Lester? Lester responds in a ZDNet-type way:

I wouldn’t be surprised if ChatGPT added a super agent that does tasks autonomously by December of this year. Amazed? Sure. But surprised? Nah. It’s not hard to imagine a near future where ChatGPT doesn’t just replace the internet but OpenAI becomes the foundation for future companies, in the same way that roads became the foundation for civilization.

Lester interprets the shift as mostly good news. Jobs will be created. There are a few minor problems; for instance, retraining and changing business models. Otherwise, Lester does not see too many new problems. In fact, he makes his message clear:

If you stand still, never evolve, never improve your skills, and never push yourself to be better, life will decimate you like a gorilla vs 100 men.

But what if the gorilla is Google? And that Google creature has friends like Microsoft and others. A super human like Elon Musk or Pavel Durov might jump into the fray against the men, presumably from OpenAI.

Convergence and collapsing to an “everything” app is logical. However, humans are not logical. Plus, smart software has a couple of limitations. These include cost, energy requirements, access to information, pushback from humans who cannot be or do not want to be “retrained,” and making stuff up (you know, hallucinations like gluing cheese on pizza).

Net net: Old school search is now wearing a new furry suit, but WeChat and Telegram are existing “everything” apps. Mr. Musk and Sam AI-Man know or sense there is a future in co-opting the idea, bolting on smart software, and hitting the marketing start button. However, envisioning and pulling off are two different things. China allegedly helped WeChat think about its role; Telegram’s founder visited Russia dozens of times prior to his arrest in France. What nation state will husband a Western European or American “everything” app?

Mr. Musk has a city in Texas. Perhaps that’s why he has participated in a shadow dance with Telegram?

Lester, you have identified the “everything” app. Good work. Don’t forget WeChat débuted in 2011. Telegram rolled out in 2013. Now a decade later, the “everything” app is the next big thing. Okay. But who is the “we” in the essay’s title? It is not this dinobaby.

Stephen E Arnold, June 10, 2025

Google Places a Big Bet, and It May Not Pay Off

June 10, 2025

Just a dinobaby and no AI: How horrible an approach?

Each day brings more AI news. I have a video playing in the background called “The AI Math That Left Number Theorists Speechless.” That word “speechless” does not apply because the interlocutor and the math whiz are chatty Cathies. The video runs a little less than two hours. Speechless? No, when it comes to smart software, some people become verbose and excited. I like to be verbose. I don’t like to get excited about artificial intelligence. I am a dinobaby, remember?

I clicked on the first item in my trusty Overflight service and this write up greeted me: “Google Is Burying the Web Alive.” How does one “bury” a digital service? I assumed or inferred that the idea is that the alleged multi-monopoly Google was going to create another monopoly for itself anchored in AI.

The write up says:

[AI Overviews are] Google’s “most powerful AI search, with more advanced reasoning and multimodality, and the ability to go deeper through follow-up questions and helpful links to the web,” the company says, “breaking down your question into subtopics and issuing a multitude of queries simultaneously on your behalf.” It’s available to everyone. It’s a lot like using AI-first chatbots that have search functions, like those from OpenAI, Anthropic, and Perplexity, and Google says it’s destined for greater things than a small tab. “As we get feedback, we’ll graduate many features and capabilities from AI Mode right into the core Search experience,” the company says.

Let’s slow down the buggy. A completely new product or service has some baggage on board. Remember “New Coke”? Quite a few people liked “old Coke.” The company figured it out, innovated, and finally just started buying beverage outfits that were pulling in new customers. Then there is the old chestnut by the buggy stand which says, “Most start-ups fail.” Finally, there is the shadow of impatient stakeholders. Fail to keep those numbers up, and consequences manifest themselves.

The write up gallops forward:

From the very first use, however, AI Mode crystallized something about Google’s priorities and in particular its relationship to the web from which the company has drawn, and returned, many hundreds of billions of dollars of value. AI Overviews demoted links, quite literally pushing content from the web down on the page, and summarizing its contents for digestion without clicking…

Those clicks make Google’s money flow. It does not matter if the user clicks to view a YouTube short or a click to view a Web page about a vacation rental. Clicks equal revenue. Fewer clicks may translate to less revenue. If this is true, then what happens?

The write up suggests an answer: The good old Web is marginalized. Kaput. Dead as a doornail:

…of course, Google is already working on ads for both Overviews and AI Mode. In its drive to embrace AI, Google is further concealing the raw material that fuels it, demoting links as it continues to ingest them for abstraction. Google may still retain plenty of attention to monetize and perhaps keep even more of it for itself, now that it doesn’t need to send people elsewhere; in the process, however, it really is starving the web that supplies it with data on which to train and from which to draw up-to-date details. (Or, one might say, putting it out of its misery.)

As a dinobaby, I quite like the old Web. Again we have a giant company doing something “new” and “different.” How will those bold innovations work out? That’s the $64 question (a rigged game show, my mother told me).

The article concludes:

In any case, the signals from Google — despite its unconvincing suggestions to the contrary — are clear: It’ll do anything to win the AI race. If that means burying the web, then so be it.

Whoa, Nellie!

Let’s think about what the Google is allegedly doing. First, the Google is spending money to index the “Web.” My team tells me that Google is indexing less thoroughly than it was 10 years ago. Google indexes where the traffic is, and quite a bit of that traffic is to Google itself. The losers have been grousing about a lack of traffic for years. I have worked with a consumer Web site since 1993, and the traffic cratered about seven years ago. Why? Google selected sites to boost because of the link between advertiser appetite and clicks. The owner of this consumer Web site cooked up a bit of jargon for what Google was doing; he called it “steering.” The idea is that Google shaped its crawls and “relevance” in order to maximize revenue from known big ad spenders.

Google is not burying anything. The company is selecting to maximize financial benefits. My experience suggests that when Google strays too far from what stakeholders want, the company will be whipped until it gets the horses under control. Second, the AI revolution poses a significant challenge for a number of reasons. Among these is the users’ desire for the information equivalent of a “dumb” mobile phone. The cacophony of digital information is too much and creates a “why bother” attitude. Google wants to respond in the hope that it can come up with a product or service that produces as much money as the old Yahoo Overture GoTo model. Hope, however, is not reality.

As a dinobaby, I think Google has a reasonably good chance of stratifying its “users.” Some will pay. Some will consume the ad-sponsored AI output. Some will find a way to get the restaurant address surrounded by advertisements.

What about AI?

I am not sure that anyone knows. Both Google and Microsoft have to find a way to produce significant and sustainable revenue from the large language model method which has come to be synonymous with smart software. The costs are massive. The use cases usually focus on firing people for cost savings until the AI doesn’t work. Then the AI supporters just hire people again. That’s the Klarna call to think clearly again.

Net net: The Google is making a big bet that it can increase its revenues with smart software. How probable is it that the “new” Google will turn out like the “New Coke”? How much of the AI hype is just l’entreprise parle dans le vide (the company talking into the void)? The hype may be the inverse of reality. Something will be buried, and it may not be the “Web.”

Stephen E Arnold, June 10, 2025
