Is Google Headed for the Big Computer Room in the Sky? Actually Yes It Is

June 9, 2025

Just a dinobaby and no AI: How horrible an approach?

As a freshman in college in 1962, I had seen computers like the clunky IBMs at Keystone Steel & Wire Co., where my father worked as some sort of numbers guy, a bean counter, I guessed. “Look but don’t touch,” he said, not even glancing up from his desk with its two adding machines, pencils, and ledgers. I looked.

Once I convinced a professor of poetry to hire me to help him index Latin sermons, I was hooked. Next up were Digital Equipment machines. At Halliburton Nuclear a fellow named Bill Montano listened to my chatter about searching text. Then I bopped into a big blue chip consulting firm, and there were computing machines in the different offices I visited. When I ended up at the database company in the early 1980s, I had my own Wang in my closet. There you go. A file-cabinet-sized gizmo with weird hums, connections to terminals in my little space, and links to other people who could “touch” to their overheated hearts’ content. Then the Internet moved from the research world into the mainstream. Zoom. Things were changing.

Computer companies arrived, surged, and faded. Then personal computer companies arrived, surged, and faded. The cadence of the computer industry was easy to dance to. As Carmen Giménez used to say on American Bandstand in 1959, “I like the beat and it is easy to dance to.” I have been tapping along and doing a little jig in the computer (online) sector for many years, around 60 I think.

I read “Google As You Know It Is Slowly Dying.” Okay, another tech outfit moving through its life cycle. Break out your copy of Elisabeth Kübler-Ross’s On Death and Dying. Jump to the Acceptance section, read it, and move on. But, no. It is time for one more “real news” write up to explain that Googzilla is heading toward its elder care facility. This is not news. If it is, fire up your Burroughs B5500 and do your inventory update.

The essay presents the obvious as “new.” The Vox write up says:

Google is dominant enough that two federal judges recently ruled that it’s operating as an illegal monopoly, and the company is currently waiting to see if it will be broken up.

From my point of view, this is an important development. Furthermore, it has nothing to do with the smart software approach to search. After two decades of doing exactly what it wanted, Google, like Apple and Meta, is in the spotlight. Those spotlights are solar powered and likely to remain on for the foreseeable future. That’s news.

In this spotlight are companies providing a “new” way to search. Since search is required to do most things online, the Google has to figure out how to respond in an intelligent way to two — count ‘em — big problems: Government actions and upstarts using Google’s own Transformer innovation.

The intersection of regulatory action and the appearance of an alternative to “search as you know it” is the same old story, just jazzed up with smart software, efficiency, the next big thing, Sky Net, and more. The write up says:

The government might not be the biggest threat to Google dominance, however. AI has been chipping away at the foundation of the web in the past couple of years, as people have increasingly turned to tools like ChatGPT and Perplexity to find information online.

My view is that it is the intersection, not the things themselves, that has created the end-of-the-line sign for the Google bullet train. Google will try to do what it has done since Backrub: appropriate ideas like the Yahoo, Overture, and GoTo advertising methods; run a bar in which the patrons (advertisers and users) pay to go in and out; and treat the world as a bunch of dorks, the default view of whiz kids who just know so much more about the digital world. No more.

Google’s legacy is the road map for other companies lucky or skilled enough to replicate the approach. Consequently, the Google is in Code Red, announcing so many “new” products and services I certainly can’t keep them straight, and serving up a combination of hallucinatory output and irrelevant search results. The combination is problematic as the regulators close in.

The write up concludes with this statement:

In the chaotic, early days of the web, Google got popular by simplifying the intimidating task of finding things online, as the Washington Post’s Geoffrey A. Fowler points out. Its supremacy in this new AI-powered future is far less certain. Maybe another startup will come along and simplify things this time around, so you can have a user-friendly bot explain things to you, book travel for you, and make movies for you.

I disagree. Google became popular because it indexed Web sites, used some Clever ideas, and implemented processes that produced pages usually related to the user’s query. Over time, wrapper software provided Google with a way to optimize its revenue. Innovation eluded the company. In the social media “space,” Google bumbled Orkut and then continued to bumble until it pretty much gave up on killing Facebook. In the Microsoft “space,” Google created its own office suite and rolled out its cloud service. These have not had a significant impact in the enterprise market, where the river of money flows to Microsoft and whatever it calls its allegedly monopolistic-inclined services. There are other examples of outright failure.

Now the Google is just spewing smart software products. This reminds me of a person who, shortly before dying, sees bright lights and watches the past flash before them. Then the person dies. My view is that Google is having something like those near-death experiences. The person survives but knows exactly what death is.

Believe me, Google knows that the annoying competitors are more popular. To wit: Sam AI-Man and his ChatGPT, his vision for the “everything” app, and his rather clever deal with Telegram. To wit: Microsoft and its deals with most smart software companies, its software lock-in with the US Federal government, its boot camp deal with Palantir Technologies, and its mind-boggling array of ways to get access to word processing software.

Google has not proven it can deal with the confluence of regulators demanding money and lesser entities serving up products and services that capture headlines. Code Red plus dozens of “new” products, each infused with Gemini or whatever the smart software is named today, is not a solution that returns Google to its glory days.

The patient is going through tough times. Googzilla may survive, but search is going to remain a matter of finding on-point information. LLMs are a current approach that people like. By themselves, they will not kill Google or guarantee its survival. Google is caught between the reality of meaningful regulatory action and innovators who are more agile.

Googzilla is old and spends some time looking for suitable elder care facilities.

Stephen E Arnold, June 9, 2025

Education in Angst: AI, AI, AI

June 9, 2025

Just a dinobaby and no AI: How horrible an approach?

Bing Crosby went through a phase in which “ai, ai, ai” was the groaner’s fingerprint. Now, it is educated adults worrying about smart software. AI, AI, AI. Consider “An Existential Crisis: Can Universities Survive ChatGPT?” The sub-title is pure cubic zirconia:

Students are using AI to cheat and professors are struggling to keep up. If an AI can do all the research and writing, what is the point of a degree?

I can answer this question. The purpose of a college degree is, in order of importance, [1] get certified as having been accepted to and participated in a university’s activities, [2] have fun, including but not limited to drinking, sex, and intramural sports, [3] meet friends who are likely to get high paying jobs, start companies, or become powerful political figures. Notice that I did not list reading, writing, and arithmetic. A small percentage of college attendees will be motivated, show up for class, do homework, and possibly discover something of reasonable importance. The others? These will be mobile phone users, adepts with smart software, and equipped with sufficient funds to drink beer and go on spring break trips.

The cited article presents this statement:

Research by the student accommodation company Yugo reveals that 43 per cent of UK [United Kingdom] university students are using AI to proofread academic work, 33 per cent use it to help with essay structure and 31 per cent use it to simplify information. Only 2 per cent of the 2,255 students said they used it to cheat on coursework.

I thought the Yugo was a quite terrible automobile, but by reading this essay, I learned that the name “Yugo” now refers to a student accommodation company that conducts research. (When it comes to auto names, I quite like “No Va,” or “no go” in Spanish. No, I did not consult ChatGPT for this translation.)

The write up says:

Universities are somewhat belatedly scrambling to draw up new codes of conduct and clarifying how AI can be used depending on the course, module and assessment.

Since when did academic institutions respond with alacrity to a fresh technical service? I would suggest that the answer to this question is, “Never.”

The “existential crisis” lingo appears to come from the non-AI-powered former vice chancellor of the University of Buckingham (Buckinghamshire, England), located near the River Great Ouse. (No, I did not need smart software to know the name of this somewhat modest “river.”)

What is an existential crisis? I have to dredge up a recollection of Dr. Francis Chivers’ lecture on the topic in the 1960s. I think she suggested something along these lines: a person is distressed about life, its purpose, or his or her identity.

A university is not a person and, therefore, to my dinobaby mind, not able to have an existential crisis. More appropriately, those whose livelihood depends on universities for money, employment, a peer group, social standing, or just feeling like scholarship has delivered esteem, are in crisis. The university is a collection of buildings and may have some quantum “feeling” but most structures are fairly reticent to offer opinions about what happens within their walls.

I quibble. The worriers about traditional education should worry. One of those “move fast, break things” moments has arrived to ruin the sleep of those collecting paychecks from a university. Some may worry that their side gig may be put into financial squalor. Okay, worry away.

What’s the fix, according to the cited essay? Ride out the storm, adapt, and go to meetings.

I want to offer a handful of observations:

  1. Higher education has been taking karate chops since Silicon Valley started hiring high school students and suggesting they don’t need to attend college. Examples of what can happen include Bill Gates and Mark Zuckerberg. “Be like them” is a siren song for some bright sparks.
  2. University professionals have been making up stuff for their research papers for years. Smart software has made this easier. Peer review by pals became a type of search engine optimization in the 1980s. How do I know this? Gene Garfield told me in 1981 or 1983. (He was the person who pioneered link analysis in sci-tech, peer-reviewed papers and is, therefore, one of the individuals who enabled PageRank.) A toy sketch of that idea appears after this list.
  3. Universities in the United States have been in the financial services business for years. Examples range from student loans to accepting funds for “academic research.” Certain schools have substantial income from these activities, which does not directly translate into high quality instruction. I myself was a “research fellow.” I got paid to do “work” for professors who converted my effort into consulting gigs. Did I mind? I had zero clue that I was a serf. I thought I was working on a PhD.* Plus, I taught a couple of classes, if you could call what I did “teaching.” Did the students know I was clueless? Nah, they just wanted a passing grade and to get out of my 4 pm Friday class so they could drink beer.
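
Here is that sketch: a minimal, purely illustrative Python toy (mine, not anything from Garfield, ISI, or Google) showing how citation counting generalizes into a PageRank-style power iteration. The four-paper graph and the damping value are made up for the example.

    # Toy sketch: citation counting generalized into a PageRank-style
    # power iteration over a made-up four-paper citation graph.
    damping = 0.85
    cites = {              # paper -> papers it cites
        "A": ["B", "C"],
        "B": ["C"],
        "C": ["A"],
        "D": ["C"],
    }
    papers = list(cites)
    rank = {p: 1.0 / len(papers) for p in papers}

    for _ in range(50):    # iterate until the ranks settle
        new_rank = {p: (1.0 - damping) / len(papers) for p in papers}
        for p, targets in cites.items():
            share = damping * rank[p] / len(targets)
            for t in targets:
                new_rank[t] += share   # each citation passes along a vote
        rank = new_rank

    for p, r in sorted(rank.items(), key=lambda kv: -kv[1]):
        print(p, round(r, 3))

Run it and the most-cited paper floats to the top. That is the whole trick Garfield foreshadowed: links are votes.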

Smart software snaps quite nicely into the current college and university workflow. A useful instructional program will emerge. However, I think only schools with big reputations and winning sports teams will be the beacons of learning in the future. Smart software has arrived, and it is not going to die quickly even if it hallucinates, costs money, and generates baloney.

Net net: Change is not coming. Change has arrived.

——————–

* Note: I did not finish my PhD. I went to work at Halliburton’s nuclear unit. Why? Answer: Money. Should I have turned in my dissertation? Nah, it was about Chaucer, and I was working on kinetic weapons. Definitely more interesting to a 23-year-old.

Stephen E Arnold, June 9, 2025

Jobs for Humanoids: AI Output Checker Like a Digital Grocery Clerk

June 9, 2025

George at the Throwable Substack believes humans will forever have a place in software development. In the post “What’s Next for Software,” the blogger argues that code maintenance will always rely on human judgement. This, he imagines, will balance out the code-creation jobs lost to AI. After all, he reasons, humans will always be held liable for snafus. He writes:

“While engineers won’t be as responsible for producing code, they will be ultimately responsible for what that code does. A VP or CEO can blame an AI all they want when the system is down, but if the AI can’t solve the problem, it can’t solve the problem. And I don’t expect firing the AI will be very cathartic.”

Maybe not. But do executives value catharsis over saving money? We think they will find a way to cope. Perhaps a season pass to the opera. The post continues:

“It’s hard to imagine a future where humans aren’t the last line of defense for maintenance, debugging, incident response, etc. Paired with the above—that they’re vastly outnumbered by the quantity of services and features and more divorced from the code that’s running than ever before—being that last line of defense is a tall order.”

So tall it can never be assigned to AI? Do not bet on it. In a fast-moving, cost-driven environment, smart software will act more quickly than any human layer, and each of those layers will be replaced as the technology improves. Sticking one’s head in the sand is not the way to prepare for that eventuality.

Cynthia Murrell, June 6, 2025

AI: The Ultimate Intelligence Phaser. Zap. You Are Now Dumber Than Before the Zap

June 6, 2025

We need smart, genuine, and kind people so we can retain the positive aspects of humanity and move forward to a better future. It might be hard to connect the previous statement with a YouTube math channel, but it won’t be after you read BoingBoing’s story: “Popular Math YouTuber 3Blue1Brown Victimized By Malicious And Stupid AI Bots.”

We know that AI bots have consumed YouTube and are battling for domination of not only the video sharing platform but all social media. Unfortunately these automated bots flagged a respected mathematics channel, 3Blue1Brown, which makes awesome math animations and explanations. The 3Blue1Brown team makes math easier to understand for the rest of us dunderheads. 3Blue1Brown was hit with a copyright strike. Grant Sanderson, the channel’s creator, said:

“I learned yesterday the video I made in 2017 explaining how Bitcoin works was taken down, and my channel received a copyright strike (despite it being 100% my own content). The request seems to have been issued by a company chainpatrol, on behalf of Arbitrum, whose website says they "makes use of advanced LLM scanning" for "Brand Protection for Leading Web3 Companies" I could be wrong, but it sounds like there’s a decent chance this means some bot managed to convince YouTube’s bots that some re-upload of that video (of which there has been an incessant onslaught) was the original, and successfully issue the takedown and copyright strike request. It’s naturally a little worrying that it should be possible to use these tools to issue fake takedown requests, considering that it only takes 3 to delete an entire channel.”

Can we do a collective EEP?!

ChainPatrol.io is a notorious AI tool that patrols the platform. It “trolls” channels that make original content and hits them with “guilty until proven innocent” tags. It is known for doing the opposite of what it claims:

“ChainPatrol.io, the company whose system initiated the takedown, claims its "threat detection system makes use of advanced LLM scanning, image recognition, and proprietary models to detect brand impersonation and malicious actors targeting your organization.”

ChainPatrol.io responded with a generic answer:

“Hello! This was a false positive in our systems at @ChainPatrol. We are retracting the takedown request, and will conduct a full post-mortem to ensure this does not happen again. We have been combatting a huge volume of fake YouTube videos that are attempting to steal user funds. Unfortunately, in our mission to protect users from scams, false positives (very) occasionally slip through. We are actively working to reduce how often this happens, because it’s never our intent to flag legitimate videos. We’re very sorry about this! Will keep you posted on the takedown retraction.”

Helpful. Meanwhile, Grant Sanderson and his fans have given ChainPatrol.io a digital cold shoulder.

Whitney Grace, June 6, 2025

Is AI Experiencing an Enough Already Moment?

June 4, 2025

Consumers are fatigued by AI even though implementation of the technology is still new. Why are they tired? The Soatok Blog digs into the answer in the post “Tech Companies Apparently Do Not Understand Why We Dislike AI.” Big Tech and other businesses don’t understand that their customers hate AI.

Soatok took a survey that asked for opinions about AI, including questions about a “potential AI uprising.” Soatok is abundantly clear that he’s not afraid of a robot uprising or the “Singularity.” He has other reasons to worry about AI:

“I’m concerned about the kind of antisocial behaviors that AI will enable.

• Coordinated inauthentic behavior

• Misinformation

• Nonconsensual pornography

• Displacing entire industries without a viable replacement for their income

In aggregate, people’s behavior are largely the result of the incentive structures they live within.

But there is a feedback loop: If you change the incentive structures, people’s behaviors will certainly change, but subsequently so, too, will those incentive structures. If you do not understand people, you will fail to understand the harms that AI will unleash on the world. Distressingly, the people most passionate about AI often express a not-so-subtle disdain for humanity.”

Soatok is describing toxic human behaviors. These include toxic masculinity and femininity, though mostly the former. He aptly describes them:

"I’m talking about the kind of X users that dislike experts so much that they will ask Grok to fact check every statement a person makes. I’m also talking about the kind of “generative AI” fanboys that treat artists like garbage while claiming that AI has finally “democratized” the creative process.”

Insert a shudder here.

Soatok goes on to explain how AI can be implemented in encrypted software that would collect user information. He paints a scenario in which LLMs collect user data that is not protected by the Fourth and Fifth Amendments. Also, AI could create psychological profiles of users that incorrectly identify them as psychotic terrorists.

Insert even more shuddering.

Soatok advises Big Tech to make AI optional and not the first out-of-the-box solution. He wants users to have the choice of engaging with AI, even if it means lower user metrics and less data fed back to Big Tech. Is Soatok hallucinating like everyone’s favorite over-hyped technology? Let’s ask IBM Watson. Oh, wait.

Whitney Grace, June 4, 2025

An AI Insight: Threats Work to Bring Out the Best from an LLM

June 3, 2025

“Do what I say, or Tony will take you for a ride. Get what I mean, punk?” seems like an old-fashioned approach to elicit cooperation. What happens if you apply this technique, or its equivalents like knee-capping or unplugging, to smart software?

The answer, according to one of the founders of the Google, is, “Smart software responds — better.”

Does this strike you as counterintuitive? I read “Google’s Co-Founder Says AI Performs Best When You Threaten It.” The article reports that the motive power behind the landmark Google Glass product allegedly said:

“You know, that’s a weird thing…we don’t circulate this much…in the AI community…not just our models, but all models tend to do better if you threaten them…. Like with physical violence. But…people feel weird about that, so we don’t really talk about that.” 

The article continues, explaining that another LLM wanted to turn one of its users in to government authorities. This interesting action seems to suggest that smart software is capable of flipping the table on a human user.

Numerous questions arise from these two allegedly accurate anecdotes about smart software. I want to consider just one: How should a human interact with a smart software system?

In my opinion, the optimal approach is with considered caution. Users typically do not know or think about how their prompts are used by the developer / owner of the smart software. Users do not ponder the value of the log files of those prompts. Not even bad actors wonder if those data will be used to support their conviction.

I wonder what else Mr. Brin does not talk about. What is the process for law enforcement or an advertiser to obtain prompt data and generate an action like an arrest or a targeted advertisement?

One hopes Mr. Brin will elucidate before someone becomes so fraught with fear that suicide seems like a reasonable and logical path forward. Is there someone whom we could ask about this dark consequence? “Chew” on that, gentle reader, and you too, Mr. Brin.

Stephen E Arnold, June 3, 2025

News Flash: US Losing AI Development Talent (Duh?)

June 2, 2025

The United States is the leading country in technology development. It has been at the cutting edge of AI since the field’s inception, but according to Semafor that is changing: “Reports: US Losing Edge In AI Talent Pool.” Semafor’s article summarizes the current state of the AI development industry. Apparently the top companies’ brass want to concentrate on mobile and monetization, while the US government is cutting federal science funding (among other things) and doing some performative activity.

Meanwhile in China:

“China’s ascendency has played a role. A recent paper from the Hoover Institution, a policy think tank, flags that some of the industry’s most exciting recent advancements — namely DeepSeek — were built by Chinese researchers who stayed put. In fact, more than half of the researchers listed on DeepSeek’s papers never left China for school or work — evidence that the country doesn’t need Western influence to develop some of the smartest AI minds, the report says.”

India is bolstering its own tech talent as its people and businesses consume AI; it is also not exporting its top tech talent, thanks in part to the US crackdowns. The Gulf countries and Europe are likewise strengthening talent retention and expanding their own AI projects. London is the center for AI safety with Google DeepMind. The UAE and Saudi Arabia are developing their own AI infrastructure and the energy sector to support it.

Will the US lose AI talent, code, and some innovative oomph? Semafor seems to think that greener pastures lie just over the sea.

Whitney Grace, June 2, 2025

A SundAI Special: Who Will Get RIFed? Answer: News Presenters for Sure

June 1, 2025

Just a dinobaby and some AI: How horrible an approach?

Why would “real” news outfits dump humanoids for AI-generated personalities? For my money, there are three good reasons:

  1. Cost reduction
  2. Cost reduction
  3. Cost reduction.


The bean counter has donned his Ivy League super smart financial accoutrements: Meta smart glasses, an OpenAI smart device, and an Apple iPhone with the vaunted AI inside (sorry, Intel, you missed this trend). Unfortunately the “good enough” approach, like gradient descent, does not deal in reality; it deals in near misses. Sum those near misses and what do you get? Dead organic things. The method applies to flora and fauna, including humanoids with automatable jobs. Thanks, You.com, you beat the pants off Venice.ai, which simply does not follow prompts. A perfect solution for some applications, right?
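
Since I invoked gradient descent, here is a minimal toy sketch (mine, purely illustrative, nothing to do with You.com or any real product) of how “good enough” works in that world: the loop stops when the error is small enough, not when it is zero, and a near miss is left on the table.

    # Toy gradient descent on f(x) = (x - 3)^2.
    # It stops at "good enough" (a loose tolerance), not at the true answer.

    def f(x):
        return (x - 3.0) ** 2

    def grad(x):
        # derivative of f
        return 2.0 * (x - 3.0)

    x = 0.0                # starting guess
    learning_rate = 0.1
    tolerance = 1e-2       # the "good enough" threshold

    steps = 0
    while abs(grad(x)) > tolerance:
        x -= learning_rate * grad(x)
        steps += 1

    print(f"stopped after {steps} steps at x = {x:.4f}")
    print(f"near miss vs. the true minimum at x = 3: {abs(x - 3.0):.6f}")

Tighten the tolerance and the near miss shrinks, but it never hits zero in finite steps. That is the sum-of-near-misses point in miniature.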

My hunch is that many people (humanoids) will disagree. The counter arguments are:

  1. Human quantum behavior; that is, flubbing lines, getting into on-air spats, displaying annoyance while standing in a rain storm saying, “The wind velocity is picking up.”
  2. The cost of recruitment, training, health care, vacations, and pension plans (ho ho ho)
  3. The management hassle of having to attend meetings to talk about decisions, become deciders, and, oh no, accept responsibility for those decisions.

I read “‘The White-Collar Bloodbath’ Is All Part of the AI Hype Machine.” I am not sure how fear creates an appetite for smart software. The push for smart software boils down to generating revenues. To achieve revenues, one can create a new product or service like the iPhone or the original Google search advertising machine. But how often do those inventions toddle down the Information Highway? Not too often, because most of the innovative new new next big things are smashed by a Meta-type tractor trailer.

The write up explains that layoff fears are not operable in the CNN dataspace:

If the CEO of a soda company declared that soda-making technology is getting so good it’s going to ruin the global economy, you’d be forgiven for thinking that person is either lying or fully detached from reality. Yet when tech CEOs do the same thing, people tend to perk up. ICYMI: The 42-year-old billionaire Dario Amodei, who runs the AI firm Anthropic, told Axios this week that the technology he and other companies are building could wipe out half of all entry-level office jobs … sometime soon. Maybe in the next couple of years, he said.

First, the killing-jobs angle is probably easily understood and accepted by individuals responsible for “cost reduction.” Second, the ICYMI reference means “in case you missed it,” a bit of shorthand popular with those who are not yet 80-year-old dinobabies like me. Third, the source is a member of the AI leadership class. Listen up!

Several observations:

  1. AI hype is marketing. Money is at stake. Do stakeholders want their investments to sit mute and wait for the old “build it and they will come” pipe dream to manifest?
  2. Smart software does not have to be perfect; it needs to be good enough. Once it is good enough, the cost reductionists take the stage and employees are ushered out of specific functions. One does not implement cost reductions at random. Consultants set priorities, develop scorecards, and make some charts with red numbers and arrows pointing up. Employees are expensive in general, so some work is needed to determine which can be replaced with good enough AI.
  3. News, journalism, and certain types of writing, along with customer “support” and some jobs suitable for automation like reviewing financial data for anomalies, are likely to be among the first subject to a reduction in force, or RIF.

So where does that leave the neutral observer? On one hand, the owners of the money dumpster fires are promoting like crazy. These wizards have to pull rabbit after rabbit out of a hat. How does that get handled? Think P.T. Barnum.


Some AI bean counters, CFOs, and financial advisors dream about dumpsters filled with money burning. This was supposed to be an icon, but Venice.ai happily ignores prompt instructions and includes fruit next to a burning something against a wooden wall. Perfect for the good enough approach to news, customer service, and MBA analyses.

On the other hand, you have the endangered species: the “real” news people and others in the knowledge business, at least the automatable part of it. These folks are doing what they can to impede the hyperbole machine of the smart software people.

Who or what will win? Keep in mind that I am a dinobaby. I am going extinct, so smart software has zero impact on me other than making devices less predictable and resistant to my approach to “work.” Here’s what I see happening:

  1. Increasing unemployment for those lower on the “knowledge work” food chain. Sorry, junior MBAs at blue chip consulting firms: make sure you have lots of money, influential parents, or a former partner at a prestigious firm as a mom or dad. Too bad for those studying to purvey “real” news. And junior college graduates working in customer support? Yikes.
  2. “Good enough” will replace excellence in work. This means that the air traffic controller situation is a glimpse of what deteriorating systems will deliver. Smart software will probably come to the rescue, but those antacid gobblers will be history.
  3. Increasing social discontent will manifest itself. To get a glimpse of the future, take an Uber from Cape Town to the airport. Check out the low income housing.

Net net: The cited write up is essentially anti-AI marketing. Good luck with that until people realize the current path is unlikely to deliver the pot of gold for most AI implementations. But cost reduction only has to show payoffs. Balance sheets do not reflect a healthy, functioning datasphere.

Stephen E Arnold, June 1, 2025

2025 Is the Square of a Triangular Number: Tim Apple May Have No Way Out

May 30, 2025

Just a dinobaby and no AI: How horrible an approach?

Macworld in my mind is associated with happy Macs, not sad Macs. I just read “Tim Cook’s Year Is Doomed and It’s Not Even June Yet.” That is definitely a sad Mac headline, and it suggests that Tim Apple will morph into a well-compensated human stuck in a little box.

The write up says:

Cook’s bad, awful 2025 is pretty much on the record…

Why, pray tell? How about:

  1. The failure of Apple’s engineers to deliver smart software
  2. A donation to a certain political figure’s campaign only to be rewarded with tariffs
  3. Threats of an Apple “tax”
  4. Fancy dancing with China and pumping up manufacturing in India only to be told by a person of authority, “That’s not a good idea, Tim Apple.”

I think I have touched on the main downers. The write up concludes with:

For Apple, this may be a case of too much success being a bad thing. It is unlikely that Cook could have avoided Trump’s attention, given its inherent gravimetric field. The question is, now that a moderate show of obsequiousness has proven insufficiently mollifying, what will Cook do next?

Imagine a high flying US technology company not getting its way in the US and a couple of other countries to boot. And what about the European Union?

Several observations are warranted:

  1. Tim Cook should be paranoid. Lots of people are out to get Apple and he will be collateral damage.
  2. What happens if the iPhone craters? Will Apple TV blossom or blow?
  3. How many pro-Apple humans will suffer bouts of depression? My guess? Lots.

Net net: Numerologists will perceive 2025 as a year for Apple to reflect and prepare for new cycles. I just see 2025 as 45 squared, the square of a triangular number (45 = 1 + 2 + … + 9), with Tim Apple inside its perimeter and no way out evident.
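
For the numerologically curious, a quick check of the arithmetic (mine, not Macworld’s): 2025 is not itself triangular, but it is the square of the ninth triangular number.

$$
T_9 = \frac{9 \cdot 10}{2} = 45, \qquad 45^2 = 2025, \qquad \frac{n(n+1)}{2} = 2025 \ \text{has no integer solution} \ (T_{63} = 2016,\ T_{64} = 2080).
$$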

Stephen E Arnold, May 30, 2025


Copilot Disappointments: You Are to Blame

May 30, 2025

No AI, just a dinobaby and his itty bitty computer.

Another interesting Microsoft story from a pro-Microsoft online information service. Windows Central published “Microsoft Won’t Take Bigger Copilot Risks — Due to ‘a Post-Traumatic Stress Disorder from Embarrassments,’ Tracing Back to Clippy.” Why not invoke Bob, the US government suggesting Microsoft security was needy, or the software of the Surface Duo?

The write up reports:

Microsoft claims Copilot and ChatGPT are synonymous, but three-quarters of its AI division pay out of pocket for OpenAI’s superior offering because the Redmond giant won’t allow them to expense it.

Is Microsoft saving money, or is Microsoft’s cultural momentum maintaining the velocity of Steve Ballmer taking an Apple iPhone from an employee and allegedly stomping on the device? That episode helped make Microsoft’s management approach clear to some observers.

The Windows Central article adds:

… a separate report suggested that the top complaint about Copilot to Microsoft’s AI division is that “Copilot isn’t as good as ChatGPT.” Microsoft dismissed the claim, attributing it to poor prompt engineering skills.

This statement suggests that Microsoft is blaming users for the alleged negative reaction to Copilot. Those pesky users again. Users, not Microsoft, are at fault. But what about the Microsoft employees who seem to prefer ChatGPT?

Windows Central stated:

According to some Microsoft insiders, the report details that Satya Nadella’s vision for Microsoft Copilot wasn’t clear. Following the hype surrounding ChatGPT’s launch, Microsoft wanted to hop on the AI train, too.

I thought the problem was the users and their flawed prompts. Could the issue be Microsoft’s management “vision”? I have an idea. Why not delegate product decisions to Copilot? That will show the users that Microsoft has the right approach to smart software: cutting back on data centers, acquiring other smart software companies and AI visionaries, and putting Copilot in Notepad.

Stephen E Arnold, May 30, 2025

