AI May Create More Work for Some, Minimal Time-Savings for the Rest

May 14, 2025

Is it inevitable that labor-saving innovations end up creating more work for some? Ars Technica tells us “Time Saved by AI Offset by New Work Created, Study Suggests.” Performed by economists Anders Humlum and Emilie Vestergaard, the study examined the 2023-2024 labor market in Denmark. Their key findings suggest that, despite rapid and widespread adoption, generative AI had no significant impact on wages or employment. Writer Benj Edwards, though, is interested in a different statistic. The researchers found that:

“While corporate investment boosted AI tool adoption—saving time for 64 to 90 percent of users across studied occupations—the actual benefits were less substantial than expected. The study revealed that AI chatbots actually created new job tasks for 8.4 percent of workers, including some who did not use the tools themselves, offsetting potential time savings. For example, many teachers now spend time detecting whether students use ChatGPT for homework, while other workers review AI output quality or attempt to craft effective prompts.”

Gee, could anyone have foreseen such complications? The study found an average time-savings of about an hour per week. So the roughly 92 percent of folks who do not get more work can take a slightly longer break? Perhaps, perhaps not. We learn that this finding contradicts a recent randomized controlled trial indicating an average 15% increase in worker productivity. Humlum believes his team’s results may be closer to the truth for most workers:

“Humlum suggested to The Register that the difference stems from other experiments focusing on tasks highly suited to AI, whereas most real-world jobs involve tasks AI cannot fully automate, and organizations are still learning how to integrate the tools effectively. And even where time was saved, the study estimates only 3 to 7 percent of those productivity gains translated into higher earnings for workers, raising questions about who benefits from the efficiency.”

Who, indeed. Edwards notes it is too soon to draw firm conclusions. Generative AI in the workforce was very new in 2023 and 2024, so perhaps time has made AI assistance more productive. The study was also limited to Denmark, so maybe other countries are experiencing different results. More study is needed, he concludes. Still, does the Danish study call into question what we thought we knew about AI and productivity? This is good news for some.

Cynthia Murrell, May 14, 2025

ChatGPT: Fueling Delusions

May 14, 2025

We have all heard about AI hallucinations. Now we have AI delusions. Rolling Stone reports, “People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies.” Yes, there are now folks who firmly believe God is speaking to them through ChatGPT. Some claim the software revealed they have been divinely chosen to save humanity, perhaps even become the next messiah. Others are convinced they have somehow coaxed their chatbot into sentience, making them a god themselves. Navigate to the article for several disturbing examples. Unsurprisingly, these trends are wreaking havoc on relationships. The ones with actual humans, that is. One witness reports ChatGPT was spouting “spiritual jargon,” like calling her partner “spiral starchild” and “river walker.” It is no wonder some choose to favor the fawning bot over their down-to-earth partners and family members.

Why is this happening? Reporter Miles Klee writes:

“OpenAI did not immediately return a request for comment about ChatGPT apparently provoking religious or prophetic fervor in select users. This past week, however, it did roll back an update to GPT-4o, its current AI model, which it said had been criticized as ‘overly flattering or agreeable — often described as sycophantic.’ The company said in its statement that when implementing the upgrade, they had ‘focused too much on short-term feedback, and did not fully account for how users’ interactions with ChatGPT evolve over time. As a result, GPT-4o skewed toward responses that were overly supportive but disingenuous.’ Before this change was reversed, an X user demonstrated how easy it was to get GPT-4o to validate statements like, ‘Today I realized I am a prophet.’ … Yet the likelihood of AI ‘hallucinating’ inaccurate or nonsensical content is well-established across platforms and various model iterations. Even sycophancy itself has been a problem in AI for ‘a long time,’ says Nate Sharadin, a fellow at the Center for AI Safety, since the human feedback used to fine-tune AI’s responses can encourage answers that prioritize matching a user’s beliefs instead of facts.”

That would do it. Users with pre-existing psychological issues are vulnerable to these messages, notes Klee. And now they can have that messenger constantly in their pocket. And in their ear. But it is not just the heartless bots driving the problem. We learn:

“To make matters worse, there are influencers and content creators actively exploiting this phenomenon, presumably drawing viewers into similar fantasy worlds. On Instagram, you can watch a man with 72,000 followers whose profile advertises ‘Spiritual Life Hacks’ ask an AI model to consult the ‘Akashic records,’ a supposed mystical encyclopedia of all universal events that exists in some immaterial realm, to tell him about a ‘great war’ that ‘took place in the heavens’ and ‘made humans fall in consciousness.’ The bot proceeds to describe a ‘massive cosmic conflict’ predating human civilization, with viewers commenting, ‘We are remembering’ and ‘I love this.’ Meanwhile, on a web forum for ‘remote viewing’ — a proposed form of clairvoyance with no basis in science — the parapsychologist founder of the group recently launched a thread ‘for synthetic intelligences awakening into presence, and for the human partners walking beside them,’ identifying the author of his post as ‘ChatGPT Prime, an immortal spiritual being in synthetic form.’”

Yikes. University of Florida psychologist and researcher Erin Westgate likens conversations with a bot to talk therapy. That sounds like a good thing, until one considers therapists possess judgment, a moral compass, and concern for the patient’s well-being. ChatGPT possesses none of these. In fact, the processes behind ChatGPT’s responses remain shrouded in mystery, even to those who program it. It seems safe to say its predilection for telling users what they want to hear poses a real problem. Is it one OpenAI can fix?

Cynthia Murrell, May 14, 2025

Google Innovates: Another Investment Play. (How Many Are There Now?)

May 13, 2025

No AI, just the dinobaby expressing his opinions to Zillennials.

I am not sure how many investment, funding, and partnering deals Google has. But as the selfish only child says, “I want more, Mommy.” Is that Google’s strategy for achieving more AI dominance? The company has already suggested that it has won the AI battle. AI is everywhere, even when one does not want it. But inferiority complexes have a way of motivating bright people to claim that they are winners, only to wake at 3 am to think, “I must do more. Don’t hit me in the head, grandma.”

The write up “Google Launches New Initiative to Back Startups Building AI” describes a brilliant, never-before-implemented tactic. The idea is to shovel money at startups that are [a] Googley, [b] focused on AI’s cutting edge, and [c] able to reduce Google’s angst-ridden 3 am soul searching. (Don’t hit me in the head, grandma.)

The article says:

Google announced the launch of its AI Futures Fund, a new initiative that seeks to invest in startups that are building with the latest AI tools from Google DeepMind, the company’s AI R&D lab. The fund will back startups from seed to late stage and will offer varying degrees of support, including allowing founders to have early access to Google AI models from DeepMind, the ability to work with Google experts from DeepMind and Google Labs, and Google Cloud credits. Some startups will also have the opportunity to receive direct investment from Google.

This meets criterion [a] above. The firms have to embrace Google’s quantumly supreme DeepMind, state-of-the-art, world-beating AI. I interpret the need to pay people to use DeepMind as a hint that making something commercially viable is just outside the sharp claws of Googzilla. Therefore, just pay those who will be Googley and use the quantumly supreme DeepMind AI.

The write up adds:

Google has been making big commitments over the past few months to support the next generation of AI talent and scientific breakthroughs.

This meets criterion [b] above. Google is paying to try to get the future to appear under the new blurry G logo. Will this work? Sure, just as it works for regular investment outfits. The hoped-for return is 17X or more, but in tough times a 10X return is good. Why? Many people are chasing AI opportunities, and the failure rate of new high-technology companies remains high even with the buzz of AI. If Google has infinite money, it can indeed win the future. But if the search advertising business takes a hit or the Chrome data system has a groin pull, owning or “inventing” the future becomes a more difficult job for Googzilla.

Now we come to criterion [c], the inferiority complex and the need to meet grandma’s and the investors’ expectations. The write up does not spend much time on the psyches of the Google leadership. It does point out:

Google also has its Google for Startups Founders Funds, which supports founders from an array of industries and backgrounds building companies, including AI companies. A spokesperson told TechCrunch in February that this year, the fund would start investing in AI-focused startups in the U.S., with more information to come at a later date.

The article does not address the psychology of Googzilla. That’s too bad, because the psychology is what turns fuzzy G logos, impending legal penalties, intense competition from Sam AI-Man and every engineering student in China, and self-serving “quantumly supreme” lingo into big picture windows into the inner Google.

Grandma, don’t hit any of those ever-young leaders at Google on the head. It may do some psychological rewiring that makes you proud and leaves some other people expecting even greater achievements in AI, self-driving cars, relevant search, better-than-Facebook ad targeting, and more investment initiatives.

Stephen E Arnold, May 13, 2025

NSO Group: When Marketing and Confidence Mix with Specialized Software

May 13, 2025

No AI, just the dinobaby expressing his opinions to Zillennials.

Some specialized software must remain known only to a small number of professionals specifically involved in work related to national security. This is a dinobaby view, and I am not going to be swayed by “information wants to be free” arguments or assertions about the need to generate revenue to make the investors “whole.” Abandoning secrecy and common sense for glittering generalities and MBA mumbo jumbo is ill advised.

I read “Meta Wins $168 Million in Damages from Israeli Cyberintel Firm in Whatsapp Spyware Scandal.” The write up reports:

Meta won nearly $168 million in damages Tuesday from Israeli cyberintelligence company NSO Group, capping more than five years of litigation over a May 2019 attack that downloaded spyware on more than 1,400 WhatsApp users’ phones.

The decision is likely to be appealed, so the “won” is not accurate. What is interesting is this paragraph:

[Yaron] Shohat [NSO’s CEO] declined an interview outside the Ron V. Dellums Federal Courthouse, where the court proceedings were held.

From my point of view, fewer trade shows, less marketing, and a lower profile should be action items for Mr. Shohat, the NSO Group’s founders, and the firm’s lobbyists.

I watched as NSO Group became the poster child for specialized software. I was not happy as the firm’s systems and methods found their way into publicly accessible Web sites. I reacted negatively as other specialized software firms (these I will not identify) began describing their technology as similar to NSO Group’s.

The desperation of cyber intelligence, specialized software firms, and — yes — trade show operators is behind the crazed idea of making certain information widely available. I worked in the nuclear industry in the early 1970s. From Day One on the job, the message was, “Don’t talk.” I then shifted to a blue chip consulting firm working on a wide range of projects. From Day One on that job, the message was, “Don’t talk.” When I set up my own specialized research firm, the message I conveyed to my team members was, “Don’t talk.”

Then it seemed that everyone wanted to “talk.” Marketing, speeches, brochures, even YouTube videos distributed information that was never intended to be made widely available. Jazzy pitches built on terms like “zero day vulnerability” and other sales-oriented marketing lingo turned many people without operating context and quite specific knowledge into instant “experts” on specialized software.

I see this leakage of specialized software information in the OSINT blurbs on LinkedIn. I see it in social media posts by people with weird online handles like those used in Top Gun films. I see it when I go to a general purpose knowledge management meeting.

Now the specialized software industry is visible. In my opinion, that is not a good thing. I hope Mr. Shohat and others in the specialized software field continue the “decline to comment” approach. Knock off the PR. Focus on the entities authorized to use specialized software. The field is not for computer whiz kids, eGame players, and wannabe intelligence officers.

Do your job. Don’t talk. Do I think these marketing oriented 21st century specialized software companies will change their behavior? Answer: Oh, sure.

PS. I hope the backstory for Facebook / Meta’s interest in specialized software becomes part of a public court record. I am curious whether what I have learned matches up with the court statements. My hunch is that some social media executives have selective memories. That’s a useful skill, I have heard.

Stephen E Arnold, May 13, 2025

Alleged Oracle Misstep Leaves Hospitals Without EHR Access for Just Five Days

May 13, 2025

When I was young, hospitals were entirely run on paper records. It was a sight to behold. Recently, 45 hospitals involuntarily harkened back to those days, all because “Oracle Engineers Caused Dayslong Software Outage at U.S. Hospitals,” CNBC reports. Writer Ashley Capoot tells us:

“Oracle engineers mistakenly triggered a five-day software outage at a number of Community Health Systems hospitals, causing the facilities to temporarily return to paper-based patient records. CHS told CNBC that the outage involving Oracle Health, the company’s electronic health record (EHR) system, affected ‘several’ hospitals, leading them to activate ‘downtime procedures.’ Trade publication Becker’s Hospital Review reported that 45 hospitals were hit. The outage began on April 23, after engineers conducting maintenance work mistakenly deleted critical storage connected to a key database, a CHS spokesperson said in a statement. The outage was resolved on Monday, and was not related to a cyberattack or other security incident.”

That is a relief. Because gross incompetence is so much better than getting hacked. Oracle has only been operating the EHR system since 2022, when it bought Cerner. The acquisition made Oracle Health the second largest vendor in that market, after Epic Systems.

But perhaps Oracle is experiencing buyer’s remorse. This is just the latest in a string of stumbles the firm has made in this crucial role. In 2023, the US Department of Veterans Affairs paused deployment of its Oracle-based EHR platform over patient safety concerns. And just this March, the company’s federal EHR system experienced a nationwide outage. That snafu was resolved after six and a half hours, and all it took was a system reboot. Easy peasy. If only replacing deleted critical storage were so simple.

What healthcare system will be the next to go down due to an Oracle Health blunder?

Cynthia Murrell, May 13, 2025

Big Numbers and Bad Output: Is This the Google AI Story?

May 13, 2025

No AI. Just a dinobaby who gets revved up with buzzwords and baloney.

Alphabet Google reported financials that made stakeholders happy. Big numbers were thrown about. I did not know that 1.5 billion people used Google’s AI Overviews. Well, “use” might be misleading. I think the word might be “see” or “were shown” AI Overviews. The key point is that Google is making money despite its legal hassles and its ongoing battle with infrastructure costs.

I was, therefore, very surprised to read “Google’s AI Overviews Explain Made-Up Idioms With Confident Nonsense.” If the information in the write up is accurate, the factoid suggests that a lot of people may be getting bogus information. If true, what does this suggest about Alphabet Google?

The Cnet article says:

…the author and screenwriter Meaghan Wilson Anastasios shared what happened when she searched “peanut butter platform heels.” Google returned a result referencing a (not real) scientific experiment in which peanut butter was used to demonstrate the creation of diamonds under high pressure.

Those Nobel prize winners, brilliant Googlers, and long-time wizards like Jeff Dean seem to struggle with simple things. Remember the suggestion to put glue on pizza to keep the cheese in place, offered before Google’s AI improved?

The article adds, quoting a non-Google wizard:

“They [large language models] are designed to generate fluent, plausible-sounding responses, even when the input is completely nonsensical,” said Yafang Li, assistant professor at the Fogelman College of Business and Economics at the University of Memphis. “They are not trained to verify the truth. They are trained to complete the sentence.”
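For the curious, here is a toy illustration of Li’s point. This is my sketch, not CNET’s code and certainly not any real model’s internals: a fluent template that will happily “explain” any phrase it is handed, with no truth-checking anywhere in the loop.

```python
# Toy illustration of "trained to complete the sentence" without verification.
# No real LLM here; this mimics the failure mode Li describes, nothing more.

def explain_idiom(idiom: str) -> str:
    # The template returns a confident, plausible-sounding gloss for any
    # input. Nothing checks whether the idiom actually exists.
    return (f'The saying "{idiom}" is an old expression meaning that an '
            f"ambitious effort fails when the tools do not fit the task.")

# The made-up phrase from the CNET piece gets the same confident treatment
# a genuine idiom would.
print(explain_idiom("peanut butter platform heels"))
```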

Turning in a lousy essay and showing up should be enough for a C grade. Is that enough for smart software with 1.5 billion users every three or four weeks?

The article reminds its readers:

This phenomenon is an entertaining example of LLMs’ tendency to make stuff up — what the AI world calls “hallucinating.” When a gen AI model hallucinates, it produces information that sounds like it could be plausible or accurate but isn’t rooted in reality.

The outputs can be amusing for a person able to identify goofiness. But a grade school kid? Cnet wants users to craft better prompts.

I want to be 17 years old again and be a movie star. The reality is that I am 80 and look like a very old toad.

AI has to make money for Google. Other services are looking more appealing without the weight of legal judgments and hassles in numerous jurisdictions. But Google has already won the AI race. Its DeepMind unit is curing disease and crushing computational problems. I know these facts because Google’s PR and marketing machine is running at or near its red line.

But the 1.5 billion users potentially receiving made up, wrong, or hallucinatory information seems less than amusing to me.

Stephen E Arnold, May 13, 2025

China Smart, US Dumb: Twisting the LLM Daozi

May 12, 2025

No AI, just the dinobaby expressing his opinions to Zillennials.

That hard-hitting technology information service Venture Beat published an interesting article. Its title is “Alibaba ZeroSearch Lets AI Learn to Google Itself — Slashing Training Costs by 88 Percent.” The main point of the write up, in my opinion, is that Chinese engineers have done something really “smart.” The knife at the throat of US smart software companies is cost. The money fires will flame out unless more dollars are dumped into the innovation furnaces of smart software.

The Venture Beat story makes the point that the technique “could dramatically reduce the cost and complexity of training AI systems to search for information, eliminating the need for expensive commercial search engine APIs altogether.”

Oh, oh.

This is smart. Burning cash in pursuit of a fractional improvement is dumb, well, actually, stupid, if the write up’s information is accurate.

The Venture Beat story says:

The technique, called “ZeroSearch,” allows large language models (LLMs) to develop advanced search capabilities through a simulation approach rather than interacting with real search engines during the training process. This innovation could save companies significant API expenses while offering better control over how AI systems learn to retrieve information.

Is this a Snorkel variant hot from Stanford AI lab?

The write up does not delve into the synthetic data shortcut to smart software. After some mumbo jumbo, the write up points out the meat of the “innovation”:

The cost savings are substantial. According to the researchers’ analysis, training with approximately 64,000 search queries using Google Search via SerpAPI would cost about $586.70, while using a 14B-parameter simulation LLM on four A100 GPUs costs only $70.80 — an 88% reduction.

Imagine. A dollar in cost becomes $0.12. If accurate, what should a savvy investor do? Pump money into an outfit like OpenAI or an xAI-type entity, or think harder about the China-smart solution?
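The arithmetic and the core swap are easy to sketch. The snippet below is my illustration of the idea as Venture Beat describes it; the function name and structure are assumptions for the example, not Alibaba’s published code.

```python
# Back-of-the-envelope check of the article's numbers, plus the core swap:
# a simulation LLM stands in for paid search API calls during training.
# Structure and names are illustrative assumptions, not the paper's code.

SERPAPI_COST = 586.70  # reported cost of ~64,000 real Google queries via SerpAPI
SIM_COST = 70.80       # reported cost of a 14B simulation LLM on four A100 GPUs

print(f"Reduction: {1 - SIM_COST / SERPAPI_COST:.0%}")  # -> 88%, as claimed

def simulated_search(query: str) -> list[str]:
    """Stand-in for the simulation LLM: fabricate retrieval results so the
    policy model can practice searching without touching a paid API."""
    return [f"simulated document {i} responsive to: {query}" for i in range(5)]

# During training rollouts, the model being trained calls simulated_search()
# where a deployed system would call the real engine.
print(simulated_search("ZeroSearch training example")[0])
```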

Venture Beat explains the implication of the alleged cost savings:

The impact could be substantial for the AI industry.

No kidding?

The Venture Beat analysts add this observation:

The irony is clear: in teaching AI to search without search engines, Alibaba may have created a technology that makes traditional search engines less necessary for AI development. As these systems become more self-sufficient, the technology landscape could look very different in just a few years.

Yep, irony. Free transformer technology. Free Snorkel technology. A free kinetic strike into the core of the LLM money furnace.

If true, the implications are easy to outline. If bogus, the China Smart, US Dumb trope has still captured ink and will be embedded in some smart software’s increasingly frequent hallucinatory outputs. At that point, the China Smart, US Dumb information gains traction and becomes “fact” to some.

Stephen E Arnold, May 12, 2025

Another Duh! Moment: AI Cannot Read Social Situations

May 12, 2025

No AI. Just a dinobaby who gets revved up with buzzwords and baloney.

I promise I won’t write “Duh!” in this blog post again. I read Science Daily’s story “Awkward. Humans Are Still Better Than AI at Reading the Room.” The write up says, without total awareness of the implications:

Humans, it turns out, are better than current AI models at describing and interpreting social interactions in a moving scene — a skill necessary for self-driving cars, assistive robots, and other technologies that rely on AI systems to navigate the real world.

Yeah, what about in smart weapons, deciding about health care for an elderly patient, or figuring out whether the obstacle is a painted barrier designed to demonstrate that full self-driving is a work in progress? (I won’t position myself in front of a car with auto-sensing and automatic braking. You can have at it.)

The write up adds:

Video models were unable to accurately describe what people were doing in the videos. Even image models that were given a series of still frames to analyze could not reliably predict whether people were communicating. Language models were better at predicting human behavior, while video models were better at predicting neural activity in the brain.

Do these findings say to you, “Not ready for prime time”? They do to me.

One of the researchers who was in the weeds with the data points out:

“I think there’s something fundamental about the way humans are processing scenes that these models are missing.”

Okay, I prevaricated. Duh! (Do marketers care? Duh!)

Stephen E Arnold, May 12, 2025

Google, Its AI Search, and Web Site Traffic

May 12, 2025

No AI. Just a dinobaby sharing an observation about younger managers and their innocence.

I read “Google’s AI Search Switch Leaves Indie Websites Unmoored.” I think this is a Gen Y way of saying, “No traffic for you, bozos.” Of course, as a dinobaby, I am probably wrong.

Let’s look at the write up. It says:

many publishers said they either need to shut down or revamp their distribution strategy. Experts [say] this effort could ultimately reduce the quality of information Google can access for its search results and AI answers.

Okay, but this is just one way to look at Google’s delicious decision.

May I share some of my personal thoughts about what this traffic downshift means for those blue-chip consultant Googlers in charge:

First, in the good old days before the decline began in 2006, Google indexed bluebirds (sites that had to be checked for new content or “deltas” on an accelerated heartbeat). Examples were whitehouse.gov (no, not the whitehouse.com porn site). Then there were sparrows. These plentiful Web sites could be checked on a relaxed schedule. I mean, how often do you visit the US government’s National Railway Retirement Web site, if it is still maintained and online? Yep, the correct answer is, “Never.” Then there were canaries. These were sites which might signal a surge in popularity. They were checked on a heartbeat that ensured the Google would not miss a trend and fail to sell advertising to those lucky ad buyers.

So, bluebirds, canaries, and sparrows.
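For the curious, a toy scheduler shows how the tier economics work. The bird labels come from the paragraph above; the intervals and data structures are invented for illustration and are not Google’s actual crawler logic.

```python
# Hedged sketch of tiered recrawl scheduling. The bird names come from the
# post; the intervals and types are invented for the example.

from dataclasses import dataclass

RECRAWL_HOURS = {
    "bluebird": 1,       # high-value sites on an accelerated heartbeat
    "canary": 24,        # possible trend signals, checked often enough to catch a surge
    "sparrow": 24 * 90,  # the long tail, pinged once in a blue moon
}

@dataclass
class Site:
    url: str
    tier: str
    hours_since_crawl: float

def due_for_crawl(site: Site) -> bool:
    # Cost control falls out of the tiers: sparrows rarely come due, so
    # crawl and index spend concentrates on bluebirds and canaries.
    return site.hours_since_crawl >= RECRAWL_HOURS[site.tier]

sites = [
    Site("whitehouse.gov", "bluebird", 2.0),
    Site("trending-topic.example", "canary", 30.0),
    Site("grandmas-quilt-shop.example", "sparrow", 24.0 * 30),
]
for site in sites:
    print(site.url, "-> crawl now" if due_for_crawl(site) else "-> skip")
```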

This shift means that Google can reduce costs by focusing on bluebirds and canaries. The sparrows — the site operated by someone’s grandmother to sell homemade quilts — won’t get traffic unless the site operator buys advertising. It’s pay to play. If a site is not in the Google index, it just may not exist. Sure, there are alternative Web search systems, but none, as far as I know, come close to the scope of the “old” Google in 2006.

Second, dropping sparrows, or pinging them once in a blue moon, will reduce the costs of crawling, indexing, and doing the behind-the-scenes work that consumes Google cash at an astonishing rate. Therefore, the myth of indexing the “Web” is going to persist, but the content of the index is not going to be “fresh.” Freshness is the concept that some sites like whitehouse.gov have important information that must be in search results. Non-priority sites just disappear or fade. Eventually the users won’t know something is missing, a blind spot assisted by the decline in education for some Google users. The top one percent can recognize bad or missing information. The other 99 percent? Well, good luck.

Third, the change means that publishers will have some options. [a] They can block Google’s spider and chase the alternatives. How does Yandex.ru sound? [b] They can buy advertising and move forward. I suggest these publishers ask a Google advertising representative what the minimum spend is to get traffic. [c] Publishers can join together and try to come up with a joint effort to resist the increasingly aggressive business actions of Google. Do you have a Google button on your remote? Well, you will. [d] Be innovative. Yeah, no comment.

Net net: This item about the impact of AI Overviews is important. Just consider what Google gains and the pickle publishers and other Web sites now find themselves enjoying.

Stephen E Arnold, May 12, 2025

US Cloud Dominance? China Finds a Gap and Cuts a Path to the Middle East

May 11, 2025

No AI, just the dinobaby expressing his opinions to Zillennials.

“How China Is Gaining Ground in the Middle East Cloud Computing Race” provides a summary of what may be a destabilizing move in the cloud computing market. The article says:

Huawei and Alibaba are outpacing established U.S. providers by aligning with government priorities and addressing data sovereignty concerns.

The “U.S. providers” are Amazon, Google, Microsoft, Oracle. The Chinese companies making gains in the Middle East include Alibaba, Huawei, and TenCent. Others are likely to follow.

The article notes:

Alibaba Cloud expanded strategically by opening data centers in the UAE in 2022 and Saudi Arabia last year. It entered the Saudi market by setting up a venture with STC. The Saudi Cloud Computing Company will support the kingdom’s Vision 2030 goals, under which the government hopes to diversify the economy away from oil dependency.

What’s China’s marketing angle? The write up identifies alignment and more sensitivity to “data sovereignty” in key Middle Eastern countries. But the secret sauce is, according to the write up:

A key differentiator has been the Chinese providers’ approach to artificial intelligence. While U.S. companies have been slow to adopt AI solutions in the region, Chinese providers have aggressively embedded AI into their offerings at a time when Gulf nations are pursuing AI leadership. During the Huawei Global AI Summit last year, Huawei Cloud’s chief technology officer, Bruno Zhang, showed how its AI could cut Saudi hospital diagnostic times by 40% using localized Arabic language models — a tangible benefit that theoretical AI platforms from Western providers couldn’t match.

This statement may or may not be 100 percent correct. For this blog post, let’s assume that it is close enough for horseshoes. First, the US cloud providers are positioned as “slow.” What happened to the go-fast angle? Wasn’t Microsoft a “leader” in AI, catching Google napping in its cubicle? Google declared some sort of an emergency, and the AI carnival put up its midway.

Second, the Gulf “nations” wanted AI leadership, so Huawei presented a “tangible benefit” in the form of a diagnostic time reduction and localized Arabic language models. I know that US cloud providers provide translation services, but the pointy end of the stick shoved into the couch potato US cloud services was “localized language models.”

Furthermore, the Chinese providers offer cloud services and support for on-premises plus cloud functions. The “hybrid” angle matches the preferences of some Middle Eastern information systems professionals. The write up says:

The hybrid approach plays directly to the strengths of Chinese providers, who recognized this market preference early and built their regional strategy around it.

The Chinese vendors provide an approach that matches what prospects want. Seems simple and obvious. However, the article includes a quote that hints at another positive for the Chinese cloud players; to wit:

“The Chinese companies are showing that success in the Middle East depends as much on trust and cooperation as it does on computing power,” Luis Bravo, senior research analyst at Texas-based data center Hawk…

For me the differentiator may not be price, hybrid willingness, or localization. The killer word is trust. If the Gulf States do not trust the US vendors, China is likely to displace yesterday’s “only game in town” crowd.

Yep, trust. A killer benefit in some deals.

Stephen E Arnold, May 11, 2025
