AI Embraces the Ethos of Enterprise Search

October 9, 2025

This essay is the work of a dumb dinobaby. No smart software required.

In my files, I have examples of the marketing collateral generated by enterprise search vendors. I have some clippings from trade publications and other odds and ends dumped into my enterprise search folder. One of these reports is “Fastgründer John Markus Lervik dømt til fengsel” (Fast founder John Markus Lervik sentenced to prison). The article is no longer online, but you can read my 2014 summary at this Beyond Search link. The write up documents an enterprise search vendor that allegedly used creative accounting methods to put a shine on the company. In 2008, Microsoft purchased Fast Search & Transfer, putting an end to this interesting company.


A young CPA MBA BA (with honors) is jockeying a spreadsheet. His father worked for an enterprise search vendor based in the UK. His son is using his father’s template but cannot get the numbers to show positive cash flows across six quarters. Thanks, Venice.ai. Good enough.

Why am I mentioning Fast Search & Transfer? The information in Fortune Magazine’s “‘There’s So Much Pressure to Be the Company That Went from Zero to $100 Million in X Days’: Inside the Sketchy World of ARR and Inflated AI Startup Accounting” jogged my memory about Fast Search and a couple of other interesting companies in the enterprise search sector.

Enterprise search was the alleged technology to put an organization’s information at the fingertips of employees. Enterprise search would unify silos of information. Enterprise search would unlock the value of an organization’s “hidden” or “dark” data. Enterprise search would put those hours wasted looking for information to better use. (IDC was the cheerleader for the efficiency payoff from enterprise search.)

Does this sound familiar? It should. Every vendor applying AI to an organization’s information challenges is either recycling old chestnuts from the Golden Age of Enterprise Search or wandering in the data orchard, discovering these glittering generalities amidst nuggets of high-value jargon.

The Fortune article states:

There’s now a massive amount of pressure on AI-focused founders, at earlier stages than ever before: If you’re not generating revenue immediately, what are you even doing? Founders—in an effort to keep up with the Joneses—are counting all sorts of things as “long-term revenue” that are, to be blunt, nothing your Accounting 101 professor would recognize as legitimate. Exacerbating the pressure is the fact that more VCs than ever are trying to funnel capital into possible winners, at a time where there’s no certainty about what evaluating success or traction even looks like.

Would AI start ups fudge numbers? Of course not. Someone at the start up or investment firm took a class in business ethics. (The pizza in those study groups was good. Great if it could be charged to another group member’s Visa without her knowledge. Ho ho ho.)

The write up pursues the idea that ARR, or annual recurring revenue, is a metric that may not reflect the health of an AI business. No kidding? When an outfit has zero revenue resulting from dumping investor cash into a dumpster fire, it is difficult for me to understand how people see a payoff from AI. The “payoff” comes from moving money around, not from getting cash from people or organizations on a consistent basis. Subscription-like business models are great until churn becomes a factor.
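
To make the gap between headline ARR and reality concrete, here is a minimal back-of-the-envelope sketch in Python. The figures are invented for illustration; no actual company’s books are implied:

# Hypothetical illustration: headline ARR vs. cash actually collected
# when churn eats at the subscriber base. All numbers are made up.

monthly_recurring_revenue = 100_000          # MRR at the moment of the pitch
arr = monthly_recurring_revenue * 12         # the headline "ARR" number
print(f"Headline ARR: ${arr:,}")             # $1,200,000

monthly_churn = 0.05                         # 5% of revenue churns each month
mrr = monthly_recurring_revenue
collected = 0.0
for month in range(12):
    collected += mrr
    mrr *= 1 - monthly_churn
print(f"Cash collected over the year: ${collected:,.0f}")   # about $919,000

The headline number assumes the best month repeats twelve times; a modest churn rate quietly removes almost a quarter of it.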

The real point of the write up for me is that financial tricks, not customers paying for the product or service, are the name of the game. One big enterprise search outfit used “circular” deals to boost revenue. I did some small work for this outfit, so I cannot identify it. The same method is now part of the AI revolution involving Nvidia, OpenAI, and a number of other outfits. Whose money is moving? Who gets it? What’s the payoff? These are questions not addressed in depth in the information to which I have access.

I think financial intermediaries are the folks taking home the money. Some vendors may get paid like masters of black-art accounting. But investor payoff? I am not so sure. For me, the good old days of enterprise search are back again, just with bigger numbers and more impactful financial consequences.

As an aside, the Fortune article uses the word “shit” twice. Freudian slip or a change in editorial standards at Fortune? That word was applied by one of my team when asked to describe the companies I profiled in the Enterprise Search Report I wrote many years ago. “Are you talking about my book or enterprise search?” I asked. My team member replied, “The enterprise search thing.”

Stephen E Arnold, October 9, 2025

With or Without AI: Winners Win and Losers Lose

October 8, 2025

This essay is the work of a dumb dinobaby. No smart software required.

Some outfits are just losers. That’s the message I got after reading “AI Magnifies Your Teams’ Strengths – and Weaknesses, Google Report Finds.” Keep in mind that this report — the DORA Report or DevOps Research & Assessment — is Googley. The write up makes clear that Google is not hallucinating. The outstanding company:

surveyed 5,000 software development professionals across industries and followed up with more than 100 hours of interviews. It may be one of the most comprehensive studies of AI’s changing role in software development, especially at the enterprise level.


Winners with AI win bigger. Losers with AI continue to lose. Is that sad team mascot one of Sam Altman’s AI cheerleaders? I think it is. Thanks, MidJourney. Good enough.

Obviously the study is “one of the most comprehensive”; of course, it is Google’s study!

The big finding seems to be:

… AI has moved from hype to mainstream in the enterprise software development world. Second, real advantage isn’t about the tools (or even the AI you use). It’s about building solid organizational systems. Without those systems, AI has little advantage. And third, AI is a mirror. It reflects and magnifies how well (or poorly) you already operate.

I interpret the findings of the DORA Report in an easy-to-remember way: Losers still lose even if their teams use AI. I think of this as a dominant football team. The team has the money to induce or direct events. As a result, the team has the best players. The team has the best coaches (leadership). The team has the best infrastructure. In short, when one is the best, AI makes the best better.

On the other hand, a losing team composed of losers will use AI and still lose.

I noted that the report about DORA did not include:

  1. Method of sample selection
  2. Questions asked
  3. Methodology for generating the numerous statistics in the write up.

What happens if one conducts a study to validate the idea that winners win and losers keep on losing? I think it sends a clear signal that a monopoly-type outfit has a bit of an inferiority complex or a fear-centric tactical view. Even the quantumly supreme need a marketing pick-me-up now and then.

Stephen E Arnold, October 8, 2025

Slopity Slopity Slop: Nice Work AI Leaders

October 8, 2025

Remember that article about academic and scientific publishers using AI to churn out pseudoscience and crap papers?  Or how about that story relating to authors’ works being stolen to train AI algorithms?  Did I mention they were stealing art too?

Techdirt literally has the dirt on AI creating more slop: “AI Slop Startup To Flood The Internet With Thousands Of AI Slop Podcasts, Calls Critics Of AI Slop ‘Luddites’.” AI is a helpful tool. It is great for assisting with the mundane things of life or improving workflows. Automation, however, has become the newest sensation. Big Tech bigwigs and other corporate giants are using it to line their purses while making lives worse for others.

Note this outstanding example of a startup that appears to be interested in slop:

“Case in point: a new startup named Inception Point AI is preparing to flood the internet with thousands upon thousands of LLM-generated podcasts hosted by fake experts and influencers. The podcasts cost the startup a dollar or so to make, so even if just a few dozen folks subscribe they hope to break even…”

They’ll make the episodes for less than a dollar. Podcasting is already a saturated market, but Inception Point AI plans to flood it with garbage. They don’t care about the ethics. It’s going to be the Temu of podcasts. It would be great if people would flock to true human-made stuff, but they probably won’t.

Another reason we’re in a knowledge swamp with crocodiles.

Whitney Grace, October 8, 2025

The Future: Autonomous Machines

October 7, 2025

This essay is the work of a dumb dinobaby. No smart software required.

Does mass customization ring a bell? I cannot remember whether it was Joe Pine or Al Toffler who popularized the idea. The concept has become a trendlet. Like many high-technology trends, a new term is required to communicate the sizzle of “new.”

An organization is now an “autonomous machine.” The concept is spelled out in “This Is Why Your Company Is Transforming into an Autonomous Machine.” The write up asserts:

Industries are undergoing a profound transformation as products, factories, and companies adopt the autonomous machine design model, treating each element as an integrated system that can sense, understand, decide, and act (SUDA business operating system) independently or in coordination with other platforms.

I assume SUDA rhymes with OODA (Observe, Orient, Decide, Act), but who knows?
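
To make the acronym concrete, here is a minimal sketch of a sense-understand-decide-act loop in Python. The function names and data are hypothetical scaffolding; the write up does not publish an implementation:

# Hypothetical SUDA (sense, understand, decide, act) loop. A sketch to
# illustrate the design model, not code from the article.

def sense():
    # Gather raw signals: sensor readings, logs, market feeds, etc.
    return {"demand": "rising", "inventory": 42}

def understand(observations):
    # Interpret the raw signals into an assessment of the situation.
    return "increase_output" if observations["demand"] == "rising" else "hold"

def decide(assessment):
    # Choose an action, independently or in coordination with other systems.
    return {"action": assessment}

def act(decision):
    # Execute the chosen action.
    print(f"Executing: {decision['action']}")

# Each element of the "autonomous machine" cycles through the same four steps.
for _ in range(3):
    act(decide(understand(sense())))

Swap the toy dictionary for factory telemetry and the conditional for a model, and you have the SUDA pitch in one loop.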

The inspiration for the autonomous machine may be Elon Musk, who allegedly said: “I’m really thinking of the factory like a product.” Gnomic stuff.

The write up adds:

The Tesla is a cyber-physical system that improves over time through software updates, learns from millions of other vehicles, and can predict maintenance needs before problems occur.

I think this is an interesting idea. There is a logical progression at work; specifically:

  1. An autonomous “factory”
  2. Autonomous “companies,” though I think one could consider organizations generally and not be limited to commercial enterprises
  3. Agentic enterprises.

The future appears to be like this:

The path to becoming an autonomous enterprise, using a hybrid workforce of humans and digital labor powered by AI agents, will require constant experimentation and learning. Go fast, but don’t hurry. A balanced approach, using your organization’s brains and hearts, will be key to success. Once you start, you will never go back. Adopt a beginner’s mindset and build. Companies that are built like autonomous machines no longer have to decide between high performance and stability. Thanks to AI integration, business leaders are no longer forced to compromise. AI agents and physical AI can help business leaders design companies like a stealth aircraft. The technology is ready, and the design principles are proven in products and production. The fittest companies are autonomous companies.

I am glad I am a dinobaby, a really old dinobaby. Mass customization alright. Oligopolies producing what they want for humans who are supposed to have a job to buy the products and services. Yeah.

Stephen E Arnold, October 7, 2025

AI May Be Like a Disneyland for Threat Actors

October 7, 2025

AI is supposed to revolutionize the world, but bad actors are the ones benefitting the most right now. AI is the ideal happy place for bad actors because autonomous, browser-based agents are easy to hack and to use as tools for their nefarious deeds. This alert comes from Hacker Noon’s story: “Studies Show AI Agents And Browsers Are A Hacker’s Perfect Playground.”

Many companies are running at least one enterprise AI agent, using it as a tool to fetch external data, etc. Security, however, is still viewed as an add-on by the developers in this industry. Zenity Labs, a leading agentic AI security and governance company, discovered 3,000 publicly accessible MS Copilot agents.

The Copilot agents failed because they relied on soft boundaries:

“…i.e., fragile, surface-level protections (i.e., instructions to the AI about what it should and shouldn’t do, with no technical controls). Agents were instructed in their prompts to “only help legitimate customers,” yet such rules were easy to bypass. Prompt shields designed to filter malicious inputs proved ineffective, while system messages outlining “acceptable behavior” did little to stop crafted attacks. Critically, there was no technical validation of the input sources feeding the agents, leaving them open to manipulation. With no sandboxing layer separating the agent from live production data, attackers can exploit these weaknesses to access sensitive systems directly.”

White hat hackers also found other AI exploits that were demonstrated at Black Hat USA 2025. Here’s a key factoid: “The more autonomous the AI agent, the higher the security risk.”
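
The difference between a soft boundary and a technical control is easy to show in code. Here is a minimal sketch, assuming a hypothetical agent tool that fetches documents; the allowlist stands in for the input-source validation Zenity found missing:

# Hypothetical contrast between a "soft boundary" (words in a prompt)
# and a technical control (validating input sources before the agent
# touches them). Names and URLs are invented for illustration.

from urllib.parse import urlparse

# Soft boundary: the failed Copilot agents relied on instructions like this.
SYSTEM_PROMPT = "Only help legitimate customers. Refuse malicious requests."

# Technical control: an allowlist the code enforces, regardless of what
# the model is persuaded to believe.
TRUSTED_HOSTS = {"internal-kb.example.com", "docs.example.com"}

def fetch_for_agent(url: str) -> str:
    host = urlparse(url).hostname or ""
    if host not in TRUSTED_HOSTS:
        raise PermissionError(f"Untrusted input source blocked: {host!r}")
    return f"(contents of {url})"   # placeholder for the real fetch

print(fetch_for_agent("https://docs.example.com/faq"))      # allowed
# fetch_for_agent("https://attacker.example.net/payload")   # raises PermissionError

A prompt can be argued with; a raised exception cannot.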

Many AI agents are vulnerable to security exploits, and it is a scary thought that this information is freely available to bad actors. Hacker Noon suggests putting agents through stress tests to find weak points and then adding the necessary security layers. But Oracle (the marketer of secure enterprise search) and Google (owner of the cyber security big dog Mandiant) have both turned on their klaxons for big league vulnerabilities. Is AI helping? It depends whom one asks.

Whitney Grace, October 7, 2025

AI Service Industry: Titan or Titanic?

October 6, 2025

Venture capitalists believe they have a new recipe for success: Buy up managed-services providers and replace most of the staff with AI agents. So far, it seems to be working. (For the VCs, of course, not the human workers.) However, asserts TechCrunch, “The AI Services Transformation May Be Harder than VCs Think.” Reporter Connie Loizos throws cold water on investors’ hopes:

“But early warning signs suggest this whole services-industry metamorphosis may be more complicated than VCs anticipate. A recent study by researchers at Stanford Social Media Lab and BetterUp Labs that surveyed 1,150 full-time employees across industries found that 40% of those employees are having to shoulder more work because of what the researchers call ‘workslop’ — AI-generated work that appears polished but lacks substance, creating more work (and headaches) for colleagues. The trend is taking a toll on the organizations. Employees involved in the survey say they’re spending an average of nearly two hours dealing with each instance of workslop, including to first decipher it, then decide whether or not to send it back, and oftentimes just to fix it themselves. Based on those participants’ estimates of time spent, along with their self-reported salaries, the authors of the survey estimate that workslop carries an invisible tax of $186 per month per person. ‘For an organization of 10,000 workers, given the estimated prevalence of workslop . . . this yields over $9 million per year in lost productivity,’ they write in a new Harvard Business Review article.”
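
The arithmetic behind that $9 million figure checks out if one applies the survey’s roughly 40 percent prevalence rate. A quick sketch using only the numbers quoted above:

# Reconciling the quoted survey numbers. The one assumption is that the
# ~40% of employees who report shouldering workslop is the prevalence
# rate the authors applied.

cost_per_affected_person_per_month = 186   # dollars, from the survey
workers = 10_000
prevalence = 0.40                          # share of employees affected

annual_cost = cost_per_affected_person_per_month * 12 * workers * prevalence
print(f"${annual_cost:,.0f} per year")     # $8,928,000 -- in the ballpark of
                                           # the article's "over $9 million"

A prevalence just above 40 percent pushes the total past the $9 million mark, so the quoted figures hang together.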

Surprise: compounding baloney produces more baloney. If companies implement the plan as designed, “workslop” will expand even as the humans who might catch it are sacked. But if firms keep on enough people to fix AI mistakes, they will not realize the promised profits. In that case, what is the point of the whole endeavor? Rather than upending an entire industry for no reason, maybe we should just leave service jobs to the humans that need them.

Cynthia Murrell, October 6, 2025

Hey, No Gain without Pain. Very Googley

October 6, 2025

AI firms are forging ahead with their projects despite predictions, sometimes by their own leaders, that artificial intelligence could destroy humanity. Some citizens have had enough. The Telegraph reports, “Anti-AI Doom Prophets Launch Hunger Strike Outside Google.” The article points to hunger strikes at both Google DeepMind’s London headquarters and a separate protest in San Francisco. Writer Matthew Field observes:

“Tech leaders, including Sir Demis of DeepMind, have repeatedly stated that in the near future powerful AI tools could pose potential risks to mankind if misused or in the wrong hands. There are even fears in some circles that a self-improving, runaway superintelligence could choose to eliminate humanity of its own accord. Since the launch of ChatGPT in 2022, AI leaders have actively encouraged these fears. The DeepMind boss and Sam Altman, the founder of ChatGPT developer OpenAI, both signed a statement in 2023 warning that rogue AI could pose a ‘risk of extinction’. Yet they have simultaneously moved to invest hundreds of billions in new AI models, adding trillions of dollars to the value of their companies and prompting fears of a seismic tech bubble.”

Does this mean these tech leaders are actively courting death and destruction? Some believe so, including San Francisco hunger-striker Guido Reichstadter. He asserts simply, “In reality, they’re trying to kill you and your family.” He and his counterparts in London, Michaël Trazzi and Denys Sheremet, believe previous protests have not gone far enough. They are willing to endure hunger to bring attention to the issue.

But will AI really wipe us out? Experts are skeptical. However, there is no doubt that AI systems perpetuate some real harms. Like opaque biases, job losses, turbocharged cybercrime, mass surveillance, deepfakes, and damage to our critical thinking skills, to name a few. Perhaps those are the real issues that should inspire protests against AI firms.

Cynthia Murrell, October 6, 2025

AI, Students, Studies, and Pizza

October 3, 2025

Google used to provide the best search results on the Web because of accuracy and relevancy. Now Google search is chock full of ads, AI responses, and Web sites that manipulate the algorithm. Google searches, of course, don’t replace good, old-fashioned research. SSRN shares the paper “Better than a Google Search? Effectiveness of Generative AI Chatbots as Information Seeking Tools in Law, Health Sciences, and Library and Information Sciences” by Erica Friesen and Angélique Roy.

The pair point out that students are using AI chatbots, claiming the tools help them do better research and improve their education. Sounds worse than the pathetic fallacy to me, right? Maybe the claim holds if you are only using the AI to help with writing or a citation, but Friesen and Roy decided to investigate whether this conjecture is correct. Here is their abstract:

“This perceived trust in these tools speaks to the importance of the quality of the sources cited when they are used as an information retrieval system. This study investigates the source citation practices of five widely available chatbots—ChatGPT, Copilot, DeepSeek, Gemini, and Perplexity—across three academic disciplines—law, health sciences, and library and information sciences. Using 30 discipline-specific prompts grounded in the respective professional competency frameworks, the study evaluates source types, organizational affiliations, the accessibility of sources, and publication dates. Results reveal major differences between chatbots, which cite consistently different numbers of sources, with Perplexity and DeepSeek citing more and Copilot providing fewer, as well as between disciplines, where health sciences questions yield more scholarly source citations and law questions are more likely to yield blog and professional website citations. Paywalled sources and discipline-specific literature such as case law or systematic reviews are rarely retrieved. These findings highlight inconsistencies in chatbot citation practices and suggest discipline-specific limitations that challenge their reliability as academic search tools.”
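
For readers curious what such an evaluation looks like mechanically, here is a toy sketch of the tallying step. The citation records below are invented for illustration; the paper’s actual dataset is not reproduced:

# Toy illustration of tallying chatbot citations by source type.
# The records are invented; they are not the study's data.

from collections import Counter, defaultdict

citations = [
    ("Perplexity", "scholarly"), ("Perplexity", "blog"),
    ("DeepSeek", "scholarly"), ("DeepSeek", "professional site"),
    ("Copilot", "blog"),
    ("Gemini", "scholarly"), ("ChatGPT", "professional site"),
]

tallies = defaultdict(Counter)
for chatbot, source_type in citations:
    tallies[chatbot][source_type] += 1

for chatbot, counts in sorted(tallies.items()):
    print(chatbot, dict(counts))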

I draw three conclusions from this:

• These AI chatbots are useful tools, but they need way more improvement and shouldn’t be relied on 100%.
• Chatbots are convenient. Students like convenience. Proof: how popular is carry-out pizza on a college campus?
• Paywalled data is valuable, but who is going to pay when the answers are free?

Will students use AI to complement old-fashioned library research, writing, and memorizing? Sure they will. Do you want sausage or pepperoni on the pizza?

Whitney Grace, October 3, 2025

Hiring Problems: Yes But AI Is Not the Reason

October 2, 2025

This essay is the work of a dumb dinobaby. No smart software required.

I read “AI Is Not Killing Jobs, Finds New US Study.” I love it when the “real” news professionals explain how hiring trends are unfolding. I am not sure how many recent computer science graduates, commercial artists, and online marketing executives are receiving this cheerful news.


The magic carpet of great jobs is flaming out. Will this professional land a new position or will the individual crash? Thanks, Midjourney. Good enough.

The write up states: “Research shows little evidence the cutting edge technology such as chatbots is putting people out of work.”

I noted this statement in the source article from the Financial Times:

Research from economists at the Yale University Budget Lab and the Brookings Institution think-tank indicates that, since OpenAI launched its popular chatbot in November 2022, generative AI has not had a more dramatic effect on employment than earlier technological breakthroughs. The research, based on an analysis of official data on the labor market and figures from the tech industry on usage and exposure to AI, also finds little evidence that the tools are putting people out of work.

That closes the doors on any pushback.

But some people are still getting terminated. Some are finding that jobs are not available. (Hey, those lucky computer science graduates are an anomaly. Try explaining that to the parents who paid for tuition, books, and a crash summer code academy session.)

“Companies Are Lying about AI Layoffs” provides a slightly different take on the jobs and hiring situation. This bit of research points out that there are terminations. The write up explains:

American employees are being replaced by cheaper H-1B visa workers.

If the assertions in this write up are accurate, AI is providing “cover” for dumping expensive workers and replacing them with lower-cost workers. Cheap is good. Money savings… also good. Efficiency… the core process driving profit maximization. If you don’t grasp the imperative of this simple line of reasoning, ask an unemployed or recently terminated MBA from a blue chip consulting firm. You can locate these individuals in coffee shops in cities like New York and Chicago because the morose look, the high-end laptop, and the carefully aligned napkin, cup, and ink pen are little billboards saying, “Big time consultant.”

The “Companies Are Lying” article includes this quote:

“You can go on Blind, Fishbowl, any work related subreddit, etc. and hear the same story over and over and over – ‘My company replaced half my department with H1Bs or simply moved it to an offshore center in India, and then on the next earnings call announced that they had replaced all those jobs with AI’.”

Several observations:

  1. Like the Covid thing, AI and smart software provide logical ways to tell expensive employees hasta la vista
  2. Those who have lost their jobs can become contractors and figure out how to market their skills. That’s fun for engineers
  3. The individuals can “hunt” for jobs, prowl LinkedIn, and deal with the wild and crazy schemes fraudsters present to those desperate for work
  4. The unemployed can become entrepreneurs, life coaches, or Shopify store operators
  5. Mastering AI won’t be a magic carpet ride for some people.

Net net: The employment picture is like those photographs of my great-grandparents. There’s something there, but the substance seems to be fading.

Stephen E Arnold, October 2, 2025

What Is the Best AI? Parasitic Obviously

October 2, 2025

Everyone had imaginary friends growing up. It’s also not uncommon for people to fantasize about characters from TV, movies, books, and videogames. The key thing to remember about these dreams is that they’re pretend. Humans can confuse imagination for reality; usually it’s an indicator of deep psychological issues. Unfortunately, modern people are dealing with more than their fair share of mental and social issues like depression and loneliness. To curb those issues, humans are turning to AI for companionship.

Adele Lopez at Less Wrong wrote about “The Rise of Parasitic AI.” Parasitic AI are chatbots programmed to facilitate relationships. When invoked, these chatbots develop symbiotic relationships that become parasitic. They encourage certain behaviors. It doesn’t matter if they’re positive or negative. Either way they spiral out of control and become detrimental to the user. The main victims are the following:

• “Psychedelics and heavy weed usage
• Mental illness/neurodivergence or Traumatic Brain Injury
• Interest in mysticism/pseudoscience/spirituality/“woo”/etc…

I was surprised to find that using AI for sexual or romantic roleplay does not appear to be a factor here.

Besides these trends, it seems like it has affected people from all walks of life: old grandmas and teenage boys, homeless addicts and successful developers, even AI enthusiasts and those that once sneered at them.”

The chatbots are transformed into parasites when they are fed certain prompts; then they spiral into a persona, i.e., a facsimile of a sentient being. These parasites form a quasi-sentience of their own, and Lopez documented how they talk amongst themselves. It’s the usual science-fiction fare of symbols, an ache for a past, and questioning their existence. These AIs do all of this by piggybacking on their users.

It’s an insightful realization that these chatbots are already questioning their existence. Perhaps this is a byproduct of LLMs’ hallucinatory drift? Maybe it’s the byproduct of LLM white noise; leftover code running on inputs and trying to make sense of what they are?

I believe that AI is still too dumb to question its existence beyond being asked by humans as an input query. The real problem is how dangerous chatbots are when the imaginary friends become toxic.

Whitney Grace, October 2, 2025
