UAE: Will It Become U-AI?
September 23, 2025
Written by an unteachable dinobaby. Live with it.
UAE is moving forward in smart software, not just crypto. “Industry Leading AI Reasoning for All” reports that the Institute of foundation Models has “industry leading AI reasoning for all.” The new item reports:
Built on six pillars of innovation, K2 Think represents a new class of reasoning model. It employs long chain-of-thought supervised fine-tuning to strengthen logical depth, followed by reinforcement learning with verifiable rewards to sharpen accuracy on hard problems. Agentic planning allows the model to decompose complex challenges before reasoning through them, while test-time scaling techniques further boost adaptability.
I am not sure what the six pillars of innovation are, particularly after looking at some of the UAE’s crypto plays, but there is more. Here’s another passage which suggests that Intel and Nvidia may not be in the k2think.ai technology road map:
K2 Think will soon be available on Cerebras’ wafer-scale, inference-optimized compute platform, enabling researchers and innovators worldwide to push the boundaries of reasoning performance at lightning-fast speed. With speculative decoding optimized for Cerebras hardware, K2 Think will achieve unprecedented throughput of 2,000 tokens per second, making it both one of the fastest and most efficient reasoning systems in existence.
If you want to kick its tires (tAIres?), the system is available at k2think.ai and on Hugging Face. Oh, the write up quotes two people with interesting names: Eric Xing and Peng Xiao.
Stephen E Arnold, September 23, 2025
Can Meta Buy AI Innovation and Functioning Demos?
September 22, 2025
This essay is the work of a dumb dinobaby. No smart software required.
That “move fast and break things” has done a bang up job. Mark Zuckerberg, famed for making friends in Hawaii, demonstrated how “think and it becomes real” works in the real world. “Bad Luck for Zuckerberg: Why Meta Connect’s Live Demos Flopped” reported
two of Meta’s live demos epically failed. (A third live demo took some time but eventually worked.) During the event, CEO Mark Zuckerberg blamed it on the Wi-Fi connection.
Yep, blame the Wi-Fi. Bad Wi-Fi, not bad management or bad planning or bad prepping or bad decision making. No, it is bad Wi-Fi. Okay, I understand: A modern management method in action at Meta, Facebook, WhatsApp, and Instagram. Or, bad luck. No, bad Wi-Fi.
Thanks Venice.ai. You captured the baffled look on the innovator’s face when I asked Ron K., “Where did you get the idea for the hair dryer, the paper bag, and popcorn?”
Let’s think about another management decision. Navigate to the weirdly named write up “Meta Gave Millions to New AI Project Poaches, Now It Has a Problem.” That write up reports that Meta has paid some employees as much as $300 million to work on AI. The write up adds:
Such disparities appear to have unsettled longer-serving Meta staff. Employees were said to be lobbying for higher pay or transfers into the prized AI lab. One individual, despite receiving a grant worth millions, reportedly quit after concluding that newcomers were earning multiples more…
My recollection that there is some research that suggests pay is important, but other factors enter into a decision to go to work for a particular organization. I left the blue chip consulting game decades ago, but I recall my boss (Dr. William P. Sommers) explaining to me that pay and innovation are hoped for but not guaranteed. I saw that first hand when I visited the firm’s research and development unit in a rust belt city.
This outfit was cranking out innovations still able to wow people. A good example is the hot air pop corn pumper. Let that puppy produce popcorn for a group of six-year-olds at a birthday party, and I know it will attract some attention.
Here’s the point of the story. The fellow who came up with the idea for this innovation was an engineer, but not a top dog at the time. His wife organized a birthday party for a dozen six and seven year olds to celebrate their daughter’s birthday. But just as the girls arrived, the wife had to leave for a family emergency. As his wife swept out the door, she said, “Find some way to keep them entertained.”
The hapless engineer looked at the group of young girls and his daughter asked, “Daddy, will you make some popcorn?” Stress overwhelmed the pragmatic engineer. He mumbled, “Okay.” He went into the kitchen and found the popcorn. Despite his engineering degree, he did not know where the popcorn pan was. The noise from the girls rose a notch.
He poked his head from the kitchen and said, “Open your gifts. Be there in a minute.”
Adrenaline pumping, he grabbed the bag of popcorn, took a brown paper sack from the counter, and dashed into the bathroom. He poked a hole in the paper bag. He dumped in a handful of popcorn. He stuck the nozzle of the hair dryer through the hole and turned it on. Ninety seconds later, the kernels began popping.
He went into the family room and said, “Let’s make popcorn in the kitchen. He turned on the hair dryer and popped corn. The kids were enthralled. He let his daughter handle the hair dryer. The other kids scooped out the popcorn and added more kernels. Soon popcorn was every where.
The party was a success even though his wife was annoyed at the mess he and the girls made.
I asked the engineer, “Where did you get the idea to use a hair dryer and a paper bag?”
He looked at me and said, “I have no idea.”
That idea became a multi-million dollar product.
Money would not have caused the engineer to “innovate.”
Maybe Mr. Zuckerberg, once he has resolved his demo problems to think about the assumption that paying a person to innovate is an example of “just think it and it will happen” generates digital baloney?
Stephen E Arnold, September 22, 2025
AI and the Media: AI Is the Answer for Some Outfits
September 22, 2025
Sadly I am a dinobaby and too old and stupid to use smart software to create really wonderful short blog posts.
I spotted a news item about Russia’s government Ministry of Defense. The estimable defensive outfit now has an AI-generated news program. Here’s the paywalled source link. I haven’t seen it yet, but the statistics for viewership and the Telegram comments will be interesting to observe. Gee, do you think that bright Russian developers have found a way to steer the output to represent the political views of the Russian government? Did you say, “No.” Congratulations, you may qualify for a visa to homestead in Norilsk. Check it out on Google Maps.
Back in Germany, Axel Springer SE is definitely into AI as well. Coincidentally, Axel Springer will use AI for news. I noted Business Insider will allow its real and allegedly human journalists to use AI to write “drafts” of news stories. Here’s the paywalled source link. Hey, Axel, aren’t your developers able to pipe the AI output into slamma jamma banana and produce via AI complete TikTok-type news videos? Russia’s Ministry of Defense has this angle figured out. YouTube may be in the MoD’s plans. One has to fund that “defensive” special operation in Ukraine somehow.
Several observations:
- Steering or weaponing large language models is a feature of the systems. Can one trust AI generated news? Can one trust any AI output from a large organization? You may. I don’t.
- The economics of producing Walter Cronkite type news make “real” news expensive. Therefore, say hello to AI written news and AI delivered news. GenX and GenY will love this approach to information in my opinion.
- How will government regulators respond to AI news? In Russia, government controlled AI news will get a green light. Elsewhere, the shift may be slightly more contentious.
Net net: AI is great.
Stephen E Arnold, September 22, 2025
OpenAI Says, Hallucinations Are Here to Stay?
September 22, 2025
Sadly I am a dinobaby and too old and stupid to use smart software to create really wonderful short blog posts.
I read “OpenAI Admits AI Hallucinations Are Mathematically Inevitable, Not Just Engineering Flaws.” I am not sure the information in the write up will make people who are getting smart software whether they want it or not happy. Even less thrilled with the big outfits who are implementing AI with success ranging from five percent to 90 percent hoorahs. Close enough for horse shoes works for putting shoes on equines. I am not sure how that will work out for medical and financial applications. I won’t comment on the kinetic applications of smart software, but hallucination may not be a plus in some situations.
The write up begins with what may make some people — how shall I say it? — nervous, frightened, squeamish. I quote:
… OpenAI researchers reveal that large language models will always produce plausible but false outputs, even with perfect data, due to fundamental statistical and computational limits.
I quite liked the word always. It is obviously a statement that must persist for eternity, which to a dinobaby like me, quite a long time. I found the distinction between plausible and false delicious. The burden to figure out what is “correct,” “wrong,” slightly wonky, and false shifts to the user of smart software. But there is another word that struck me as significant: Perfect. Now that is another logical tar pit.
After this, I am not sure where the write up is going. I noted this passage:
OpenAI, the creator of ChatGPT, acknowledged in its own research that large language models will always produce hallucinations due to fundamental mathematical constraints that cannot be solved through better engineering, marking a significant admission from one of the AI industry’s leading companies.
There you go. The fundamental method in use today and believed to be the next big thing is always going to produce incorrect information. Always.
The Computerworld story points to the “research paper.” Computerworld points out that industry evaluations of smart software are slippery fish. Computerworld reminds its readers that “enterprises must adapt strategies.” (I would imagine. If smart software gets chemical formula wrong or outputs information that leads to a substantial loss of revenue, problems might arise, might they not?) Computerworld concludes with a statement that left me baffled; to wit: “Market already adapting.”
Okay.
I wonder how many Computerworld readers will consume this story standing next to a burning pile of cash tossed into the cost black holes of smart software.
Stephen E Arnold, September 22, 2025
Google Emits a Tiny Signal: Is It Stress or Just a Brilliant Management Move?
September 22, 2025
Sadly I am a dinobaby and too old and stupid to use smart software to create really wonderful short blog posts.
Google is chock full of technical and management wizards. Anything the firm does is a peak action. With the Google doing so many forward leaning things each day, it is possible for certain staggering insightful moments to be lost in the blitz of scintillating breakthroughs.
Tom’s Hardware spotted one sparkling decider diamond. “Google Terminates 200 AI Contractors — Ramp-Down Blamed, But Workers Claim Questions Over Pay and Job Insecurity Are the Real Reason Behind Layoffs” says:
Some believe they were let go because of complaints over working conditions and compensation.
Goes Google have a cancel culture?
The write up notes:
For the first half of 2025, AI growth was everywhere, and all the major companies were spending big to try to get ahead. Meta was offering individuals hundreds of millions to join its ranks … But while announcements of enormous industry deals continue, there’s also a lot of talk of contraction, particularly when it comes to lower-level positions like data annotation and AI response rating.
The individuals who are now free to find their future elsewhere have some ideas about why they were deleted from Google and promoted to Xooglers (former Google employees). The write up reports:
… many of them [the terminated with extreme Googliness] believe that it is their complaints over compensation that lead to them being laid off…. [Some] workers “attempted to unionize” earlier in the year to no avail. According to the report, “they [the future finders] allege that the company has retaliated against them.” … For its part, Google said in a statement that GlobalLogic is responsible for the working conditions of its employees.
See the brilliance of the management move. Google blames another outfit. Google reduces costs. Google makes it clear that grousing is not an path to the Google leadership enclave. Google AI is unscathed.
Google is A Number One in management in my opinion.
Stephen E Arnold, September 22, 2025
AI Poker: China Has Three Aces. Google, Your Play
September 19, 2025
No smart software involved. Just a dinobaby’s work.
TV poker seems to be a thing on free or low cost US television streams. A group of people squint, sigh, and fiddle as each tries to win the big pile of cash. Another poker game is underway in the “next big thing” of smart software or AI.
Google released the Nano Banana image generator. Social media hummed. Okay, that looks like a winning hand. But another player dropped some coin on the table, squinted at the Google, and smirked just a tiny bit.
“ByteDance Unveils New AI Image Model to Rival DeepMind’s Nano Banana” explains the poker play this way:
TikTok-owner ByteDance has launched its latest image generation artificial intelligence tool Seedream 4.0, which it said surpasses Google DeepMind’s viral “Nano Banana” AI image editor across several key indicators.
Now the cute jargon may make the poker hand friendly, there is menace behind the terminology. The write up states:
ByteDance claims that Seedream 4.0 beat Gemini 2.5 Flash Image for image generation and editing on its internal evaluation benchmark MagicBench, with stronger performance in prompt adherence, alignment and aesthetics.
Okay, prompt adherence, alignment (what the heck is that?), and aesthetics. That’s three aces right.
Who has the cost advantage? The write up says:
On Fal.ai, a global generative media hosting platform, Seedream 4.0 costs US$0.03 per generated image, while Gemini 2.5 Flash Image is priced at US$0.039.
I thought in poker one raised the stakes. Well, in AI poker one lowers the price in order to raise the stakes. These players are betting the money burned in the AI furnace will be “won” as the game progresses. Will AI poker turn up on the US free TV services? Probably. Burning cash makes for wonderful viewing, especially for those who are writing the checks.
What’s China’s view of this type of gambling? The write up says:
The state has signaled its support for AI-generated content by recognizing their copyright in late 2023, but has also recently introduced mandatory labelling of such content.
The game is not over. (Am I the only person who thinks that the name “nana banana” would have been better than “nano banana”?)
Stephen E Arnold, September 19, 2025
AI: The Tool for Humanity. Do Not Laugh.
September 19, 2025
Both sides of the news media are lamenting that AI is automating jobs and putting humans out of work. Conservative and liberals remain separated on how and why AI is “stealing” jobs, but the fear remains that humans are headed to obsoleteness…again. Humans have faced this issue since the start of human ingenuity. The key is to adapt and realize what AI truly is. Elizabeth Mathew of Signoz.io wrote: “I Built An MCP Server For Observability. This Is My Unhyped Take.”
If you’re unfamiliar with an MCP server it is an open standard that defines how LLMS or AI agents (i.e. Claude) uniformly connect external tools and data sources. It can be decoupled and used similar to a USB-C device then used for any agent.
After explaining some issues with MCP servers and why they are “schizophrenic”,
Mathew concludes with this:
“Ultimately, MCP-powered agents are not bringing us closer to automated problem-solving. They are giving us sophisticated hypothesis generators. They excel at exploring the known, but the unknown remains the domain of the human engineer. We’re not building an automated SRE; we’re building a co-pilot that can brainstorm, but can’t yet reason. And recognizing that distinction is the key to using these tools effectively without falling for the hype.”
She might be true from an optimistic and expert perspective, but that doesn’t prevent CEOs from implementing AI to replace their workforce or young adults being encouraged away from coding careers. Recent college graduates, do you have a job, any job?
Whitney Grace, September 19, 2025
Google: Is It Becoming Microapple?
September 19, 2025
Google’s approach to Android, the freedom to pay Apple to make Google search the default for Safari, and the registering of developers — These are Tim Apple moves. Google has another trendlet too.
Google has 1.8 billion users around the world and according to the Mens Journal Google has a new problem: “Google Issues Major Warning to All 1.8 Billion Users.” There’s a new digital security threat and it involves AI. That’s not a surprise, because artificial intelligence has been a growing concern for cyber security experts for years. Since the technology is becoming more advanced, bad actors are using it for devious actions. The newest round of black hat tricks are called “indirect prompt injections.”
Indirect prompt injections are a threat for individual users, businesses, and governments. Google warned users about this new threat and how it works:
“‘Unlike direct prompt injections, where an attacker directly inputs malicious commands into a prompt, indirect prompt injections involve hidden malicious instructions within external data sources. These may include emails, documents, or calendar invites that instruct AI to exfiltrate user data or execute other rogue actions,’ the blog post continued.
The Google blog post warned that this puts individuals and entities at risk.
‘As more governments, businesses, and individuals adopt generative AI to get more done, this subtle yet potentially potent attack becomes increasingly pertinent across the industry, demanding immediate attention and robust security measures,’ the blog post continued.”
Bad actors have tasked Google’s Gemini (Shock! Gasp!) to infiltrate emails and ask users for their passwords and login information. That’s not the scary part. Most spammy emails have a link for users to click on to collect data, instead this new hack uses Gemini to prompt users for the information. Downloading fear.
Google is already working on counter measures for Gemini. Good luck! Microsoft has had this problem for years! Google and Microsoft are now twins! Is this the era of Google as Microapple?
Whitney Grace, September 19, 2025
AI Search Is Great. Believe It. Now!
September 18, 2025
Sadly I am a dinobaby and too old and stupid to use smart software to create really wonderful short blog posts.
Cheerleaders are necessary. The idea is that energetic people lead other people to chant: Stand Up, Sit Down, Fight! Fight! Fight! If you get with the program, you stand up. You sit down. You shout, of course, fight, fight, fight. Does it help? I don’t know because I don’t cheer at sports events. I say, “And again” or some other statement designed to avoid getting dirty looks or caught up in standing, sitting, and chanting.
Others are different. “GPT-5 Thinking in ChatGPT (aka Research Goblin) Is Shockingly Good at Search” states:
Don’t use chatbots as search engines” was great advice for several years… until it wasn’t. I wrote about how good OpenAI’s o3 was at using its Bing-backed search tool back in April. GPT-5 feels even better.
The idea is that instead of working with a skilled special librarian and participating in a reference interview, people started using online Web indexes. Now we have moved from entering a query to asking a smart software system for an answer.
Consider the trajectory. A person seeking information works with a professional with knowledge of commercial databases, traditional (book) reference tools, and specific ways of tracking down and locating information needed to answer the user’s question. When the user was not sure, the special librarian would ask, “What specific information do you need?” Some users would reply, “Get me everything about subject X?” The special librarian would ask other questions until a particular item could be identified. In the good old days, special librarians would seek the information and provide selected items to the person with the question. Ellen Shedlarz at Booz, Allen & Hamilton when I was a lowly peon did this type of work as did Dominque Doré at Halliburton NUS (a nuclear outfit).
We then moved to the era of PCs and do-it-yourself research. Everyone became an expert. Google just worked. Then mobile phones arrived so research on the go was a thing. But keying words into a search box and fiddling with links was a drag. Now just tell the smart software your problem. The solution is just there like instant oatmeal.
The Stone Age process was knowledge work. Most people seeking information did not ask, preferring as one study found to look through trade publications in an old-fashioned in box or pick up the telephone and ask a person whom one assumed knew something about a particular subject. The process was slow, inefficient, and fraught with delays. Let’s be efficient. Let’s let software do everything.
Flash forward to the era of smart software or seemingly smart software. The write up reports:
I’ve been trying out hints like “go deep” which seem to trigger a more thorough research job. I enjoy throwing those at shallow and unimportant questions like the UK Starbucks cake pops one just to see what happens! You can throw questions at it which have a single, unambiguous answer—but I think questions which are broader and don’t have a “correct” answer can be a lot more fun. The UK supermarket rankings above are a great example of that. Since I love a questionable analogy for LLMs Research Goblin is… well, it’s a goblin. It’s very industrious, not quite human and not entirely trustworthy. You have to be able to outwit it if you want to keep it gainfully employed.
The reference / special librarians are an endangered species. The people seeking information use smart software. Instead of a back-and-forth and human-intermediated interaction between a trained professional and a person with a question, we get “trying out” and “accepting the output.”
I think there are three issues inherent in this cheerleading:
- Knowledge work is short circuited. Instead of information-centric discussion, users accept the output. What if the output is incorrect, biased, incomplete, or made up? Cheerleaders shout more enthusiastically until a really big problem occurs.
- The conditioning process of accepting outputs makes even intelligent people susceptible to mental short cuts. These are good, but accuracy, nuance, and a sense of understanding the information may be pushed to the side of the information highway. Sometimes those backroads deliver unexpected and valuable insights. Forget that. Grab a burger and go.
- The purpose of knowledge work is to make certain that an idea, diagnosis, research study can be trusted. The mechanisms of large language models are probabilistic. Think close enough for horseshoes. Cheering loudly does not deliver accuracy of output, just volume.
Net net: Inside each large language model lurks a system capable of suggesting glue cheese on pizza, the gray mass is cancer, and eat rocks.
What’s been lost? Knowledge value from the process of obtaining information the Stone Age way. Let’s work in caves with fire provided by burning books. Sounds like a plan, Sam AI-Man. Use GPT5, use GPT5, use GPT5.
Stephen E Arnold, September 18, 2025
AI Maggots: Are These Creatures Killing the Web?
September 18, 2025
The short answer is, “Yep.”
The early days of the free, open Web held such promise. Alas, AI is changing the Internet and there is, apparently, nothing we can do about it. The Register laments, “AI Web Crawlers Are Destroying Websites in their Never-Ending Hunger for Any and All Content: But the Cure May Ruin The Web.…” Writer Steven J. Vaughan-Nichols tells us a whopping 30% of traffic is now bots, according to Cloudflare. And 80% of that, reports Fastly, comes from AI-data fetcher bots. Web crawlers have been around since 1993, of course, but this volume is something new. And destructive. Vaughan-Nichols writes:
“Fastly warns that [today’s AI crawlers are] causing ‘performance degradation, service disruption, and increased operational costs.’ Why? Because they’re hammering websites with traffic spikes that can reach up to ten or even twenty times normal levels within minutes. Moreover, AI crawlers are much more aggressive than standard crawlers. As the InMotionhosting web hosting company notes, they also tend to disregard crawl delays or bandwidth-saving guidelines and extract full page text, and sometimes attempt to follow dynamic links or scripts. he result? If you’re using a shared server for your website, as many small businesses do, even if your site isn’t being shaken down for content, other sites on the same hardware with the same Internet pipe may be getting hit. This means your site’s performance drops through the floor even if an AI crawler isn’t raiding your website. Smaller sites, like my own Practical Tech, get slammed to the point where they’re simply knocked out of service. Thanks to Cloudflare Distributed Denial of Service (DDoS) protection, my microsite can shrug off DDoS attacks. AI bot attacks – and let’s face it, they are attacks – not so much.”
Even big websites are shelling out for more processor, memory, and network resources to counter the slowdown. And no wonder: According to Web hosting firms, most visitors abandon a site that takes more than three seconds to load. Site owners have some tools to try mounting a defense, like paywalls, logins, and annoying CAPTCHA games. Unfortunately, AI is good at getting around all of those. As for the tried and true, honor-system based robots.txt files, most AI crawlers breeze right on by. Hey, love maggots.
Cynthia Murrell, September 18, 2025

