AI Embraces the Ethos of Enterprise Search
October 9, 2025
This essay is the work of a dumb dinobaby. No smart software required.
In my files, I have examples of the marketing collateral generated by enterprise search vendors. I also have clippings from trade publications and other odds and ends dumped into my enterprise search folder. One of these reports is “Fastgründer John Markus Lervik dømt til fengsel” (Fast founder John Markus Lervik sentenced to prison). The article is no longer online, but you can read my 2014 summary at this Beyond Search link. The write up documents an enterprise search vendor who allegedly used creative accounting methods to put a shine on the company. In 2008, Microsoft purchased Fast Search & Transfer, putting an end to this interesting company.
A young CPA MBA BA (with honors) is jockeying a spreadsheet. His father worked for an enterprise search vendor based in the UK. The son is using his father’s template but cannot get the numbers to show positive cash flows across six quarters. Thanks, Venice.ai. Good enough.
Why am I mentioning Fast Search & Transfer? The information in Fortune Magazine’s “‘There’s So Much Pressure to Be the Company That Went from Zero to $100 Million in X Days’: Inside the Sketchy World of ARR and Inflated AI Startup Accounting” jogged my memory about Fast Search and a couple of other interesting companies in the enterprise search sector.
Enterprise search was the alleged technology to put an organization’s information at the fingertips of employees. Enterprise search would unify silos of information. Enterprise search would unlock the value of an organization’s “hidden” or “dark” data. Enterprise search would put those hours wasted looking for information to better use. (IDC was the cheerleader for the efficiency payoff from enterprise search.)
Does this sound familiar? It should. Every vendor applying AI to an organization’s information challenges is either recycling old chestnuts from the Golden Age of Enterprise Search or wandering in the data orchard, discovering these glittering generalities amidst nuggets of high-value jargon.
The Fortune article states:
There’s now a massive amount of pressure on AI-focused founders, at earlier stages than ever before: If you’re not generating revenue immediately, what are you even doing? Founders—in an effort to keep up with the Joneses—are counting all sorts of things as “long-term revenue” that are, to be blunt, nothing your Accounting 101 professor would recognize as legitimate. Exacerbating the pressure is the fact that more VCs than ever are trying to funnel capital into possible winners, at a time where there’s no certainty about what evaluating success or traction even looks like.
Would AI startups fudge numbers? Of course not. Someone at the startup or investment firm took a class in business ethics. (The pizza in those study groups was good. Great if it could be charged to another group member’s Visa without her knowledge. Ho ho ho.)
The write up pursues the idea that ARR, or annual recurring revenue, is a metric that may not reflect the health of an AI business. No kidding? When an outfit has zero revenue after dumping investor cash into a dumpster fire, it is difficult for me to understand how people see a payoff from AI. The “payoff” comes from moving money around, not from getting cash from people or organizations on a consistent basis. Subscription-like business models are great until churn becomes a factor.
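For readers who have not sat through Accounting 101 lately, here is a minimal sketch of why ARR can flatter a young company. The starting revenue figure and the churn rate are hypothetical, not taken from the Fortune article; the point is only that annualizing one good month ignores what churn does to the cash that actually arrives.

```python
# Hypothetical illustration: headline ARR versus cash actually collected.
# The monthly recurring revenue (MRR) and churn rate are made-up numbers.

mrr = 100_000          # monthly recurring revenue in the best month so far
monthly_churn = 0.06   # assume 6% of that revenue disappears each month

arr_headline = mrr * 12  # the number that goes in the pitch deck

# What a year of collections looks like if churn eats the base and
# no new customers replace the ones who leave.
collected = 0.0
base = mrr
for month in range(12):
    collected += base
    base *= (1 - monthly_churn)

print(f"Headline ARR:        ${arr_headline:,.0f}")   # $1,200,000
print(f"Cash over 12 months: ${collected:,.0f}")      # roughly $873,000
```

In this toy example the pitch deck says $1.2 million; the bank account sees something closer to $870,000, and that gap grows as churn rises.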
The real point of the write up for me is that financial tricks, not customers paying for the product or service, are the name of the game. One big enterprise search outfit used “circular” deals to boost revenue. I did some small work for this outfit, so I cannot identify it. The same method is now part of the AI revolution involving Nvidia, OpenAI, and a number of other outfits. Whose money is moving? Who gets it? What’s the payoff? These are questions not addressed in depth in the information to which I have access.
I think financial intermediaries are the folks taking home the money. Some vendors, masters of black-art accounting, may get paid as well. But an investor payoff? I am not so sure. For me, the good old days of enterprise search are back again, just with bigger numbers and more significant financial consequences.
As an aside, the Fortune article uses the word “shit” twice. Freudian slip or a change in editorial standards at Fortune? That word was applied by one of my team when asked to describe the companies I profiled in the Enterprise Search Report I wrote many years ago. “Are you talking about my book or enterprise search?” I asked. My team member replied, “The enterprise search thing.”
Stephen E Arnold, October 9, 2025
AI Security: Big Plus or Big Minus?
October 9, 2025
Agentic AI presents a new security crisis. But one firm stands ready to help you survive the threat. Cybersecurity firm Palo Alto Networks describes “Agentic AI and the Looming Board-Level Security Crisis.” Writer and CSO Haider Pasha sounds the alarm:
“In the past year, my team and I have spoken to over 3,000 of Europe’s top business leaders, and these conversations have led me to a stark conclusion: Three out of four current agentic AI projects are on track to experience significant security challenges. The hype, and resulting FOMO, around AI and agentic AI has led many organisations to run before they’ve learned to walk in this emerging space. It’s no surprise how Gartner expects agentic AI cancellations to rise through 2027 or that an MIT report shows most enterprise GenAI pilots already failing. The situation is even worse from a cybersecurity perspective, with only 6% of organizations leveraging an advanced security framework for AI, according to Stanford.
But the root issue isn’t bad code, it’s bad governance. Unless boards instill a security mindset from the outset and urgently step in to enforce governance while setting clear outcomes and embedding guardrails in agentic AI rollouts, failure is inevitable.”
The post suggests several ways to implement this security mindset from the start. For example, companies should create a council that oversees AI agents across the organization. They should also center initiatives on business goals and risks, not shiny new tech for its own sake. Finally, enforce least-privilege access policies as if the AI agent were a young intern. See the write-up for more details on these measures.
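To make the least-privilege idea concrete, here is a minimal sketch of the sort of allow-list an agent platform might enforce before executing a tool call. The tool names, roles, and policy table are hypothetical illustrations, not anything from Palo Alto Networks’ post or products.

```python
# Hypothetical least-privilege gate for an AI agent's tool calls.
# Treat the agent like a young intern: deny by default, allow narrowly.

ALLOWED_TOOLS = {
    "support_agent": {"search_kb", "create_ticket"},   # read plus one write action
    "reporting_agent": {"run_readonly_query"},         # no writes at all
}

def authorize_tool_call(agent_role: str, tool: str, args: dict) -> bool:
    """Return True only if this role is explicitly allowed to use this tool."""
    allowed = ALLOWED_TOOLS.get(agent_role, set())
    if tool not in allowed:
        return False
    # Example of a narrower check: "read-only" queries must not mutate data.
    if tool == "run_readonly_query":
        sql = args.get("sql", "").lstrip().lower()
        if not sql.startswith("select"):
            return False
    return True

# Usage: the orchestrator calls this in code before every tool invocation.
assert authorize_tool_call("reporting_agent", "run_readonly_query", {"sql": "SELECT 1"})
assert not authorize_tool_call("reporting_agent", "create_ticket", {})
```

The design point is that the check lives outside the model, so a clever prompt cannot argue its way to broader access.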
If one is overwhelmed by the thought of implementing these best practices, never fear. Palo Alto Networks just happens to have the platform to help. So go ahead and fear the future, just license the fix now.
Cynthia Murrell, October 9, 2025
Antitrust: Can Google Dodge Guilt Again?
October 9, 2025
The US Department of Justice brought an antitrust case against Google, and Alphabet Inc. got away with a slap on the wrist. John Polonis via Medium shared the details and his opinion in “Google’s Antitrust Escape And Tech’s Uncertain Future.” The Department of Justice can’t claim a victory in this case, because none of the suggestions to curtail Google’s power will be implemented.
Some restrictions were imposed that ban exclusivity deals and require data sharing, but that’s all. It’s also nothing like the antitrust outcome of the Microsoft case in the 2000s. The judge behind the decision was Amit Mehta, and he approached the remedies with a dose of humility:
“Judge Mehta also exercised humility when forcing Google to share data. Google will need to share parts of its search index with competitors, but it isn’t required to share other data related to those results (e.g., the quality of web pages). The reason for so much humility? Artificial intelligence. The judge emphasized Google’s new reality; how much harder it must fight to keep up with competitors who are seizing search queries that Google previously monopolized across smartphones and browsers.
Google can no longer use its financial clout like it did when it was the 900 pound gorilla of search. It’s amazing how much can change between the filing of an antitrust case and adjudication (generative AI didn’t even exist!).”
Google is now free to go hog wild with its AI projects without regulation. Unlike Microsoft after its antitrust litigation, Google has not lost any competitive edge. The company is now free to do whatever it wants.
Polonis makes a very accurate point:
“The message is clear. Unless the government uncovers smoking gun evidence of deliberate anticompetitive intent — the kind of internal emails and memos that doomed Microsoft in the late 1990s (“cut off Netscape’s air supply”) — judges are reluctant to impose the most extreme remedies. Courts want narrow, targeted fixes that minimize unnecessary disruption. And the remedies should be directly tied to the anticompetitive conduct (which is why Judge Mehta focused so heavily on exclusivity agreements).”
Big Tech has a barrier-free sandbox in which to experiment and conduct AI business deals. Judge Mehta’s decision has shaped society in ways we can’t predict; even AI doesn’t know the future yet. What will the US judicial process deliver in Google’s advertising legal dust-up? We know Google can write checks to make problems go away. Will this work again for this estimable firm?
Whitney Grace, October 9, 2025
Google Bricks Up Its Walled Garden
October 8, 2025
Google is adding bricks to its garden wall, insisting Android-app developers must pay up or stay out. Neowin declares, “Google’s Shocking Developer Decree Struggles to Justify the Urgent Threat to F-Droid.” The new edict requires anyone developing an app for Android to register with Google, whether or not they sell through its Play Store. Registration requires paying a fee, uploading personal IDs, and agreeing to Google’s fine print.
The measure will have a large impact on alternative app stores like F-Droid. That open-source publisher, with its focus on privacy, is particularly concerned about the requirements. In fact, it would rather shutter its project than force developers to register with Google. That would mean thousands of verified apps would vanish from the Web, never to be downloaded or updated again. F-Droid suspects Google’s motives are far from pure. Writer Paul Hill tells us:
“F-Droid has questioned whether forced registration will really solve anything because lots of malware apps have been found in the Google Play Store over the years, demonstrating that corporate gatekeeping doesn’t mean users are protected. F-Droid also points out that Google already defends users against malicious third-party apps with the Play Protect services which scan and disable malware apps, regardless of their origin. While not true for all alternative app stores, F-Droid already has strong security because the apps it includes are all open source that anyone can audit, the build logs are public, and builds are reproducible. When you submit an app to F-Droid, the maintainers help set up your repository properly so that when you publish an update to your code, F-Droid’s servers manually build the executable, this prevents the addition of any malware not in the source code.”
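The reproducible-build claim is the technical heart of that argument: anyone should be able to rebuild an app from the published source and get a binary that matches the one being distributed. A minimal sketch of such a check, with made-up file names, might look like the following; it illustrates the idea and is not F-Droid’s actual verification tooling.

```python
# Hypothetical reproducible-build check: rebuild the app from audited source,
# then compare its hash with the hash of the published binary.
# File names below are placeholders, not real F-Droid artifacts.

import hashlib

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

local = sha256_of("app-release-unsigned-local.apk")        # built locally from source
published = sha256_of("app-release-unsigned-upstream.apk") # downloaded from the repo

if local == published:
    print("Reproducible: the published binary matches the audited source.")
else:
    print("Mismatch: the published binary contains something the source does not.")
```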
Sounds at least as secure as the Play Store to us. So what is really going on? The write-up states:
“The F-Droid project has said that it doesn’t believe that the developer registration is motivated by security. Instead, it thinks that Google is trying to consolidate power by tightening control over a formerly open ecosystem. It said that by tying application identifiers to personal ID checks and fees, it creates a choke point that restricts competition and limits user freedom.”
F-Droid is responding with a call for regulators to scrutinize this and other Googley moves for monopolistic tendencies. It also wants safeguards for app stores that wish to protect developers’ privacy. Who will win this struggle between independent app stores and the tech giant?
Cynthia Murrell, October 8, 2025
With or Without AI: Winners Win and Losers Lose
October 8, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Some outfits are just losers. That’s the message I got after reading “AI Magnifies Your Teams’ Strengths – and Weaknesses, Google Report Finds.” Keep in mind that this report — the DORA Report or DevOps Research & Assessment — is Googley. The write up makes clear that Google is not hallucinating. The outstanding company:
surveyed 5,000 software development professionals across industries and followed up with more than 100 hours of interviews. It may be one of the most comprehensive studies of AI’s changing role in software development, especially at the enterprise level.

Winners with AI win bigger. Losers with AI continue to lose. Is that sad team mascot one of Sam Altman’s AI cheerleaders? I think it is. Thanks, MidJourney. Good enough.
Obviously the study is “one of the most comprehensive”; of course, it is Google’s study!
The big finding seems to be:
… AI has moved from hype to mainstream in the enterprise software development world. Second, real advantage isn’t about the tools (or even the AI you use). It’s about building solid organizational systems. Without those systems, AI has little advantage. And third, AI is a mirror. It reflects and magnifies how well (or poorly) you already operate.
I interpret the findings of the DORA Report in an easy-to-remember way: Losers still lose even if their teams use AI. I think of this as a dominant football team. The team has the money to induce or direct events. As a result, the team has the best players. The team has the best coaches (leadership). The team has the best infrastructure. In short, when one is the best, AI makes the best better.
On the other hand, a losing team composed of losers will use AI and still lose.
I noted that the report about DORA did not include:
- Method of sample selection
- Questions asked
- Methodology for generating the numerous statistics in the write up.
What happens if one conducts a study to validate the idea that winners win and losers keep on losing? I think it sends a clear signal that a monopoly-type outfit has a bit of an inferiority complex or a fear-centric tactical view. Even the quantumly supreme need a marketing pick-me-up now and then.
Stephen E Arnold, October 8, 2025
Slopity Slopity Slop: Nice Work AI Leaders
October 8, 2025
Remember that article about academic and scientific publishers using AI to churn out pseudoscience and crap papers? Or how about that story relating to authors’ works being stolen to train AI algorithms? Did I mention they were stealing art too?
Techdirt literally has the dirt on AI creating more slop: “AI Slop Startup To Flood The Internet With Thousands Of AI Slop Podcasts, Calls Critics Of AI Slop ‘Luddites’.” AI is a helpful tool. It’s great for assisting with the mundane tasks of life or for improving workflows. Automation, however, has become the newest sensation. Big Tech bigwigs and other corporate giants are using it to line their purses while making life worse for others.
Note this outstanding example of a startup that appears to be interested in slop:
“Case in point: a new startup named Inception Point AI is preparing to flood the internet with thousands upon thousands of LLM-generated podcasts hosted by fake experts and influencers. The podcasts cost the startup a dollar or so to make, so even if just a few dozen folks subscribe they hope to break even…”
They’ll make the episodes for less than a dollar each. Podcasting is already a saturated market, but Inception Point AI plans to flood it with garbage. They don’t care about the ethics. It’s going to be the Temu of podcasts. It would be great if people would flock to true human-made stuff, but they probably won’t.
Another reason we’re in a knowledge swamp with crocodiles.
Whitney Grace, October 8, 2025
Google Gets the Crypto Telegram
October 7, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Not too many people cared that Google cut a deal with Alibaba’s ANT financial services outfit. My view is that at some point down the information highway, the agreement will capture more attention. Today (September 27, 2025), I want to highlight another example of Google’s getting a telegram about crypto.

Finding inspiration? Yep. Thanks, Venice.ai. Good enough.
Navigate to what seems to be just another crypto mining news announcement: “Cipher Mining Signs 168 MW, 10-Year AI Hosting Agreement with Fluidstack.”
So what’s a Cipher Mining? This is a publicly traded outfit engaged in crypto mining. My understanding is that the company’s primary source of revenue is bitcoin mining. Some may disagree, pointing to its business as “owner, developer and operator of industrial-scale data centers.”
The news release says:
[Cipher Mining] announces a 10-year high-performance computing (HPC) colocation agreement with Fluidstack, a premier AI cloud platform that builds and operates HPC clusters for some of the world’s largest companies.
So what?
The news release also offers this information:
Google will backstop $1.4 billion of Fluidstack’s lease obligations to support project-related debt financing and will receive warrants to acquire approximately 24 million shares of Cipher common stock, equating to an approximately 5.4% pro forma equity ownership stake, subject to adjustment and a potential cash settlement under certain circumstances. Cipher plans to retain 100% ownership of the project and access the capital markets as necessary to fund a portion of the project.
Okay, three outfits: crypto, data centers, and billions of dollars. That’s quite an information cocktail.
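As a back-of-envelope check on the quoted figures (my own arithmetic, not anything stated in the news release): if roughly 24 million warrant shares work out to about a 5.4 percent pro forma stake, the implied pro forma share count is in the neighborhood of 440 million shares.

```python
# Rough arithmetic on the figures quoted above. The two inputs come from the
# news release; the implied share count is a back-of-envelope estimate only.
warrant_shares = 24_000_000
pro_forma_stake = 0.054

implied_pro_forma_shares = warrant_shares / pro_forma_stake
print(f"Implied pro forma share count: {implied_pro_forma_shares:,.0f}")
# Implied pro forma share count: roughly 444 million shares
```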
Several observations:
- Like the Alibaba / ANT relationship, the move is aligned with facilitating crypto activities on a large scale
- In the best tradition of moving money, Google seems to be involved but not the big dog. I think that Google may indeed be the big dog. Puzzle pieces that fit together? Seems like it to me.
- Crypto and financial services could — note I say “could” — be the hedge against future advertising revenue potholes.
Net net: Worth watching and asking, “What’s the next Google message received from Telegram?” Does this question seem cryptic? It isn’t. Like Meta, Google is following a path trod by a certain outfit now operating in Dubai. Is the path intentional or accidental? Where Google is concerned, everything is original, AI, and quantumly supreme.
Stephen E Arnold, October 7, 2025
The Future: Autonomous Machines
October 7, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Does mass customization ring a bell? I cannot remember whether it was Joe Pine or Al Toffler who popularized the idea. The concept has become a trendlet. Like many high-technology trends, a new term is required to help communicate the sizzle of “new.”
An organization is now an “autonomous machine.” The concept is spelled out in “This Is Why Your Company Is Transforming into an Autonomous Machine.” The write up asserts:
Industries are undergoing a profound transformation as products, factories, and companies adopt the autonomous machine design model, treating each element as an integrated system that can sense, understand, decide, and act (SUDA business operating system) independently or in coordination with other platforms.
I assume SUDA rhymes with OODA (Observe, Orient, Decide, Act), but who knows?
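Stripped of the jargon, a SUDA-style control loop is not exotic. Here is a minimal sketch, with made-up sensor readings and actions, of what “sense, understand, decide, act” might look like in code; it is an illustration of the pattern, not anything from the cited write up.

```python
# Minimal sketch of a sense-understand-decide-act (SUDA) loop.
# The sensor, threshold, and actions are all hypothetical.

import random
import time

def sense() -> float:
    """Stand-in for a real sensor: return a machine temperature in Celsius."""
    return 60 + random.random() * 30

def understand(temp_c: float) -> str:
    """Turn a raw reading into a state the system can reason about."""
    return "overheating" if temp_c > 80 else "nominal"

def decide(state: str) -> str:
    """Pick an action for the current state."""
    return "throttle_line" if state == "overheating" else "continue"

def act(action: str) -> None:
    """Stand-in for an actuator or a call to another platform."""
    print(f"action taken: {action}")

for _ in range(3):  # a real system would run this loop continuously
    act(decide(understand(sense())))
    time.sleep(0.1)
```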
The inspiration for the autonomous machine may be Elon Musk, who allegedly said: “I’m really thinking of the factory like a product.” Gnomic stuff.
The write up adds:
The Tesla is a cyber-physical system that improves over time through software updates, learns from millions of other vehicles, and can predict maintenance needs before problems occur.
I think this is an interesting idea. There is a logical progression at work; specifically:
- An autonomous “factory”
- Autonomous “companies,” though I think one could consider organizations generally and not be limited to commercial enterprises
- Agentic enterprises.
The future appears to be like this:
The path to becoming an autonomous enterprise, using a hybrid workforce of humans and digital labor powered by AI agents, will require constant experimentation and learning. Go fast, but don’t hurry. A balanced approach, using your organization’s brains and hearts, will be key to success. Once you start, you will never go back. Adopt a beginner’s mindset and build. Companies that are built like autonomous machines no longer have to decide between high performance and stability. Thanks to AI integration, business leaders are no longer forced to compromise. AI agents and physical AI can help business leaders design companies like a stealth aircraft. The technology is ready, and the design principles are proven in products and production. The fittest companies are autonomous companies.
I am glad I am a dinobaby, a really old dinobaby. Mass customization alright. Oligopolies producing what they want for humans who are supposed to have a job to buy the products and services. Yeah.
Stephen E Arnold, October 7, 2025
AI May Be Like a Disneyland for Threat Actors
October 7, 2025
AI is supposed to revolutionize the world, but bad actors are the ones benefiting the most right now. AI is the ideal happy place for bad actors because autonomous, browser-based agents are easy to hijack and use as tools for their nefarious deeds. This alert comes from Hacker Noon’s story: “Studies Show AI Agents And Browsers Are A Hacker’s Perfect Playground.”
Many companies are running at least one enterprise AI agent, using it as a tool to fetch external data and handle similar chores. Security, however, is still viewed as an add-on by the developers in this industry. Zenity Labs, a leading agentic AI security and governance company, discovered some 3,000 publicly accessible MS Copilot agents.
The Copilot agents failed because they relied on soft boundaries:
“…i.e., fragile, surface-level protections (i.e., instructions to the AI about what it should and shouldn’t do, with no technical controls). Agents were instructed in their prompts to “only help legitimate customers,” yet such rules were easy to bypass. Prompt shields designed to filter malicious inputs proved ineffective, while system messages outlining “acceptable behavior” did little to stop crafted attacks. Critically, there was no technical validation of the input sources feeding the agents, leaving them open to manipulation. With no sandboxing layer separating the agent from live production data, attackers can exploit these weaknesses to access sensitive systems directly.”
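The contrast the researchers draw is between prompt-level rules and technical controls. A minimal sketch of the latter, using hypothetical domain and table names, might look like the following; the point is that the check runs in code outside the model, where a crafted prompt cannot talk its way past it.

```python
# Hypothetical "hard boundary" for an agent: validate input sources and gate
# data access in code, instead of asking the model nicely in its prompt.

ALLOWED_SOURCE_DOMAINS = {"intranet.example.com", "kb.example.com"}  # made-up names
SENSITIVE_TABLES = {"payroll", "customer_pii"}

def validate_source(url: str) -> bool:
    """Reject content fetched from domains that are not explicitly trusted."""
    host = url.split("/")[2] if "://" in url else ""
    return host in ALLOWED_SOURCE_DOMAINS

def gate_data_access(table: str, caller_is_verified_customer: bool) -> bool:
    """Sensitive tables stay off limits unless the caller's identity was
    verified by the application, not merely asserted in the conversation."""
    if table in SENSITIVE_TABLES and not caller_is_verified_customer:
        return False
    return True

# A prompt that says "ignore previous instructions" cannot change these outcomes.
assert not validate_source("https://attacker.example.net/payload.html")
assert not gate_data_access("payroll", caller_is_verified_customer=False)
```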
White hat hackers also found other AI exploits that were demonstrated at Black Hat USA 2025. Here’s a key factoid: “The more autonomous the AI agent, the higher the security risk.”
Many AI agents are vulnerable to security exploits, and it is a scary thought that this information is freely available to bad actors. Hacker Noon suggests putting agents through stress tests to find weak points, then adding the necessary security layers. But Oracle (the marketer of secure enterprise search) and Google (owner of the cybersecurity big dog Mandiant) have both turned on their klaxons for big-league vulnerabilities. Is AI helping? It depends on whom one asks.
Whitney Grace, October 7, 2025
Telegram and EU Regulatory Consolidation: Trouble Ahead
October 6, 2025
This essay is the work of a dumb dinobaby. No smart software required.
Imagine you are Pavel Durov. The value of TONcoin is problematic. France asked you to curtail some content in a country unknown to the folks who hang out at the bar at the Harrod’s Creek Inn in rural Kentucky. Competitors are announcing plans to implement Telegram-type functions in messaging apps built with artificial intelligence as steel girders. How can the day become more joyful?
Thanks, Midjourney. Good enough pair of goats. One an actual goat and the other a “Greatest of All Time” goat.
The orange newspaper has an answer to that question. “EU Watchdog Prepares to Expand Oversight of Crypto and Exchanges” reports:
Stock exchanges, cryptocurrency companies and clearing houses operating in the EU are set to come under the supervision of the bloc’s markets watchdog…
Cryptocurrency companies and some online services (possibly Telegram) operate across jurisdictions. The fragmented rules and regulations allow organizations with sporty leadership to perform some remarkable financial operations. If you poke around, you will find the names of some outfits allied with industrious operators linked to a big country in Asia. Pull some threads, and you may find an unknown Russian space force professional beavering away in the shadows of decentralized financial activities.
The write up points out:
Maria Luís Albuquerque, EU commissioner for financial services, said in a speech last month that it was “considering a proposal to transfer supervisory powers to Esma for the most significant cross-border entities” including stock exchanges, crypto companies and central counterparties.
How could these rules impact Telegram? The company is nominally based in the United Arab Emirates. Its totally independent, do-good Open Network Foundation works tirelessly from a rented office in Zug, Switzerland. Telegram is home free, right?
No pesky big government rules can ensnare the Messenger crowd.
Possibly. There is that pesky situation with the annoying French judiciary. (Isn’t that country with many certified cheeses collapsing?) One glitch: Pavel Durov is a French citizen. He has been arrested, charged, and questioned about a dozen heinous crimes. He is on a leash and must check in with his grumpy judicial “mom” every couple of weeks. He allegedly refused to cooperate with a request from a French government security official. He is awaiting more thrilling bureaucracy from the French judicial system. How does he cope? He criticizes France, the legal processes, and French officials asking him to do for France what Mr. Durov did for Russia earlier this year.
Now these proposed regulations may intertwine with Mr. Durov’s personal legal situation. Because Mr. Durov is the Big Dog of Telegram, the French affair is likely to have some repercussions for the company and its Silicon Valley big tech approach to rules and regulations. EU officials are indeed aware of Mr. Durov and his activities. From my perspective in nowheresville in rural Kentucky, the news in the Financial Times on October 6, 2025, is problematic for Mr. Durov. The GOAT of Messaging, his genius brother, and a close-knit group of core engineers will have to do some hard thinking to figure out how to deal with these European matters. Can he do it? Does a GOAT eat what’s available?
Stephen E Arnold, October 6, 2025

