FOGINT: Telegram Gets Some Lipstick to Put on a Very Dangerous Pig

December 23, 2024

Information from the FOGINT research team.

We noted the New York Times article “Under Pressure, Telegram Turns a Profit for the First Time.” The write up reported on December 23, 2024:

Now Telegram is out to show it has found its financial footing so it can move past its legal and regulatory woes, stay independent and eventually hold an initial public offering. It has expanded its content moderation efforts, with more than 750 contractors who police content. It has introduced advertising, subscriptions and video services. And it has used cryptocurrency to pay down its debt and shore up its finances. The result: Telegram is set to be profitable this year for the first time, according to a person with knowledge of the finances who declined to be identified discussing internal figures. Revenue is on track to surpass $1 billion, up from nearly $350 million last year, the person said. Telegram also has about $500 million in cash reserves, not including crypto assets.

The FOGINT team’s viewpoint is different.

  1. Telegram took profit on its crypto holdings and pumped that money into its financials. Like magic, Telegram will be profitable.
  2. The arrest of Mr. Durov has forced the company’s hand, and it is moving forward at warp speed to become the hub for a specific category of crypto transactions.
  3. The French have thrown a monkey wrench into Telegram’s and its associated organizations’ plans for 2025. The manic push to train developers to create click-to-earn games, use the Telegram smart contracts, and ink deals with some very interesting partners illustrates that 2025 may be a turning point in the organizations’ business practices.

The French are moving at the speed of a finely tuned bureaucracy, and it is unlikely that Mr. Durov will shake free of the pressure to deliver names, mobile numbers, and messages of individuals and groups of interest to French authorities.

The New York Times write up references profitability. There are more gears engaging than putting lipstick on a financial report. A cornered Pavel Durov can be a dangerous 40-year-old with money, links to interesting countries, and a desire to create an alternative to the traditional, regulated financial system.

Stephen E Arnold, December 23, 2024

Technology Managers: Do Not Ask for Whom the Bell Tolls

December 18, 2024

This blog post is the work of an authentic dinobaby. No smart software was used.

I read the essay “The Slow Death of the Hands-On Engineering Manager.” On the surface, the essay provides some palliative comments about a programmer who is promoted to manager. On a deeper level, the message I carried from the write up was that smart software is going to change the programmer’s work. As smart software becomes more capable, the need to pay people to do certain work goes down. At some point, some “development” may skip the human completely.


Thanks OpenAI ChatGPT. Good enough.

Another facet of the article concerned a tip for keeping one’s self in the programming game. The example chosen was the use of OpenAI’s ChatGPT to provide “answers” to developers. Thus, instead of asking a person, a coder could just type into the prompt box. What could be better for an introvert who doesn’t want to interact with people or be a manager? The answer is, “Not too much.”

What the essay makes clear is that a good coder may get promoted to be a manager. This is a role which illustrates the Peter Principle. The 1969 book explains why incompetent people can get promoted: because one is a good coder, the organization assumes that person will be a good manager, and people rise until they reach a job they cannot do. Yep, it is a principle still evident in many organizations. One of its side effects is a manager who knows he or she does not deserve the promotion and is absolutely no good at the new job.

The essay unintentionally makes clear that the Peter Principle is operating. The fix is to do useful things like eliminate the need to interact with colleagues when assistance is required.

John Donne in the 17th century wrote a prose meditation (Meditation XVII, often mistaken for a sonnet) which asserted:

No man is an island,
Entire of itself.
Each is a piece of the continent,
A part of the main.

The cited essay provides a way to further that worker isolation.

With AI the top-of-mind thought for most bean counters, the final lines are on point:

Therefore, send not to know
For whom the bell tolls,
It tolls for thee.

My view is that “good enough” has replaced individual excellence in quite important jobs. Is this AI’s “good enough” principle?

Stephen E Arnold, December 18, 2024

Telegram: Edging Forward in Crypto

December 12, 2024

This blog post flowed from the sluggish and infertile mind of a real live dinobaby. If there is art, smart software of some type was probably involved.

Telegram wants to be the one-stop app for anonymous crypto tasks. While we applaud those efforts when they relate to freedom fighting or undermining bad actors, the bad actors use the same tools, and we can’t abide that. Telegram, however, plans to become the API for crypto communication, says Cryptologia in “DWF Labs’ Listing Bot Goes Live On Telegram.”

DWF Labs is a crypto venture capital firm, and it is launching a Listing Bot on Telegram. The bot turns Telegram into a crypto listings feed, because it notifies users of changes on ten major crypto exchanges: Binance, HTX, Gate.io, Bybit, OKX, KuCoin, MEXC, Coinbase Exchange, UpBit, and Bithumb. Users can also watch currency pairs, launchpad announcements, and spot and/or futures listings.

DWF Labs is at the forefront of alternative currency and financial options. It is a lucrative market:

“In a latest interview, Lingling Jiang, a Associate at DWF Labs, mentioned DWF Labs’ place on the forefront of delivering liquidity providers and forging alliances with conventional finance. By offering market-making assist and funding, Jiang stated, DWF Labs provides tasks the infrastructure needed to grasp of tokenized belongings. With the launch of the brand new Itemizing Bot, DWF Labs brings market information nearer to the retail consumer, particularly these on the Telegram (TON) community. Following the introduction of HOT, a non-custodial pockets on TON powered by Chain Signature, DWF Labs’ Itemizing Bot is one other welcome addition to the ecosystem, particularly within the mild of the latest announcement of HOT Labs, HERE Pockets and HAPI’s new joint crypto platform.”

What’s Telegram’s game for 2025? Spring Durov? Join hands with BRICS? Become the new Morgan Stanley? Father more babies?

Whitney Grace, December 12, 2024

Do Not Worry About Tomorrow. Worry About Tod”AI”

December 12, 2024

This blog post flowed from the sluggish and infertile mind of a real live dinobaby. If there is art, smart software of some type was probably involved.

According to deep learning pioneer Yoshua Bengio, we may be headed for utopia—at least if one is a certain wealthy tech-bro type. For the rest of us, not so much. The Byte tells us, “Godfather of AI Warns of Powerful People who Want Humans ‘Replaced by Machines’.” He is not referring to transhumanism, which might ultimately seek to transform humans into machines. No, this position is about taking people out of the equation entirely. Except those at the top, presumably. Reporter Noor Al-Sibai writes:

“In an interview with CNBC, computer science luminary Yoshua Bengio said that members of an elite tech ‘fringe’ want AI to replace humans. The head of the University of Montreal’s Institute for Learning Algorithms, Bengio was among the public signatories of the ‘Right to Warn‘ open letter penned by leading AI researchers at OpenAI who claim they’re being silenced about the technology’s dangers. Along with famed experts Yann LeCun and Geoffrey Hinton, he’s sometimes referred to as one of the ‘Godfathers of AI.’ ‘Intelligence gives power. So who’s going to control that power?’ the preeminent machine learning expert told the outlet during the One Young World Summit in Montreal. ‘There are people who might want to abuse that power, and there are people who might be happy to see humanity replaced by machines,’ Bengio claimed. ‘I mean, it’s a fringe, but these people can have a lot of power, and they can do it unless we put the right guardrails right now.’”

Indeed. This is not the first time the esteemed computer scientist has rung AI alarm bells. As Bengio notes, those who can afford to build AI systems are very, very rich. And money leads to other types of power. Political and military power. Can government regulations catch up to these players? Only if attaining artificial general intelligence takes more than five years, he predicts. The race for the future of humanity is being evaluated by what’s cheaper, not better.

Cynthia Murrell, December 12, 2024

Smart Software Is Coming for You. Yes, You!

December 9, 2024

This write up was created by an actual 80-year-old dinobaby. If there is art, assume that smart software was involved. Just a tip.

“Those smart software companies are not going to be able to create a bot to do what I do.” — A CPA who is awash with clients and money.

Now that is a practical, me-me-me idea. However, the estimable Organization for Economic Co-Operation and Development (OECD, a delightful acronym) has data suggesting a slightly different point of view: Robots will replace workers who believe themselves irreplaceable. (The same idea is often held by head coaches of sports teams losing games.)


Thanks, MidJourney. Good enough.

The report is titled in best organizational group think: Job Creation and Local Economic Development 2024: The Geography of Generative AI.

I noted this statement in the beefy document, presumably written by real, live humanoids and not a ChatGPT type system:

In fact, the finance and insurance industry is the tightest industry in the United States, with 2.5 times more vacancies per filled position than the regional average (1.6 times in the European Union).

I think this means that financial institutions will be eager to implement smart software to become “workers.” If that works, the confident CPA quoted at the beginning of this blog post is going to get a pink slip.

The OECD report believes that AI will have a broad impact. The most interesting assertion / finding in the report is that one-fifth of the tasks a worker handles can be handled by smart software. This figure is interesting because smart software hallucinates and is carrying the hopes and dreams of many venture outfits and forward leaning wizards on its digital shoulders.

And what’s a bureaucratic report without an almost incomprehensible chart like this one from page 145 of the report?

[Chart from page 145 of the OECD report: generative AI exposure by occupation]

Look closely and you will see that sewing machine operators are more likely to retain jobs than insurance clerks.

Like many government reports, the document focuses on the benefits of smart software. These include (cue the theme from Star Wars, please) more efficient operations, employees who do more work and, theoretically, spend less time looking for side gigs, and creating ways for an organization to get work done without old-school humans.

Several observations:

  1. Let’s assume smart software is almost good enough, errors and all. The report makes it clear that it will be grabbed and used for a plethora of reasons. The main one is money. This is an economic development framework for the research.
  2. The future is difficult to predict. After scanning the document, I was thinking that a couple of college interns and an account on You.com would be able to generate a reasonable facsimile of this report.
  3. Agents can gather survey data. One hopes this use case takes hold in some quasi government entities. I won’t trot out my frequently stated concerns about “survey” centric reports.

Stephen E Arnold, December 9, 2024

The Very Expensive AI Horse Race

December 4, 2024

This write up is from a real and still-alive dinobaby. If there is art, smart software has been involved. Dinobabies have many skills, but Gen Z art is not one of them.

One of the academic nemeses of smart software is a professional named Gary Marcus. Among his many intellectual accomplishments is a cameo appearance on a former Jack Benny child star’s podcast. Mr. Marcus contributes his views of smart software to the person who, for a number of years, has been a voice actor on the Simpsons cartoon.


The big four robot stallions are racing to a finish line. Is the finish line moving away from the equines faster than the steeds can run? Thanks, MidJourney. Good enough.

I want to pay attention to Mr. Marcus’ Substack post “A New AI Scaling Law Shell Game?” The main idea is that the scaling law has entered popular computer jargon. Once the lingo of Galileo, “scaling law” now carries the belief that AI, like CPUs, just gets better as it gets bigger.

In this essay, Mr. Marcus asserts that getting bigger may not work unless humanoids (presumably assisted by AI) innovate other enabling processes. Mr. Marcus is aware of the cost of infrastructure, the cost of electricity, and the probable costs of exhausting content.

From my point of view, a bit more empirical “evidence” would be useful. (I am aware of academic research fraud.) Also, Mr. Marcus references me when he says keep your hands on your wallet. I am not sure that a fix is possible. The analogy is the old chestnut about changing a Sopwith Camel’s propeller when the aircraft is in a dogfight and the synchronized machine gun is firing through the propeller.

I want to highlight one passage in Mr. Marcus’ essay and offer a handful of comments. Here’s the passage I noted:

Over the last few weeks, much of the field has been quietly acknowledging that recent (not yet public) large-scale models aren’t as powerful as the putative laws were predicting. The new version is that there is not one scaling law, but three: scaling with how long you train a model (which isn’t really holding anymore), scaling with how long you post-train a model, and scaling with how long you let a given model wrestle with a given problem (or what Satya Nadella called scaling with “inference time compute”).

I think this is a paragraph I will add to my quotes file. The reasons are:

First, investors, would-be entrepreneurs, and giant outfits really want a next big thing. Microsoft fired the opening shot in the smart software war in early 2023. Mr. Nadella suggested that smart software would be the next big thing for Microsoft. The company has invested in making good on this statement. Now Microsoft 365 is infused with smart software, and Azure is burbling with digital glee over its “we’re first” status. However, a number of people have asked, “Where’s the financial payoff?” The answer is standard Silicon Valley catechism: “The payoff is going to be huge. Invest now.” If prayers could power hope, AI will be as hyperbolic as its marketing collateral promises. But it is almost 2025, and those billions have not generated more billions and profit for the Big Dogs of AI. Just sayin’.

Second, the idea that the scaling law is really multiple scaling laws is interesting. But if one scaling law fails to deliver, what happens to the others? The interdependencies among the processes behind the scaling laws might evoke new, hitherto unidentified scaling laws. Will each scaling law require massive investments to deliver? Is it feasible to pay off the investments in these processes with the original concept of the scaling law as applied to AI? I wonder if a reverse Ponzi scheme is emerging: the more money pumped in, the smaller the likelihood of success. Is AI a demonstration of convergence, or is it more like a sequence of fractions with a numerator of 1 and an ever increasing integer denominator, a series which never converges at all? Just askin’.
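The fraction sequence mentioned above (numerator 1, ever larger integer denominators) is the harmonic series, and a quick computation shows why it is a grim model for a payoff: the partial sums never settle. A minimal, purely illustrative check in Python:

```python
# The "fractions with numerator 1 and an increasing integer denominator"
# form the harmonic series 1/1 + 1/2 + 1/3 + ... Its partial sums grow
# without bound, i.e., the series diverges. Purely illustrative.

def harmonic_partial_sum(n: int) -> float:
    """Sum of 1/k for k = 1..n."""
    return sum(1.0 / k for k in range(1, n + 1))

print(round(harmonic_partial_sum(10), 3))      # -> 2.929
print(round(harmonic_partial_sum(10_000), 3))  # about 9.788, still climbing
```

More money in, and the total never converges to a finish line.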

Third, the performance or knowledge payoff I have experienced with my tests of OpenAI and the software available to me on You.com makes clear that the systems cannot handle what I consider routine questions. A recent example was my request to receive a list of the exhibitors at the November 1 Gateway Conference held in Dubai for crypto fans of Telegram’s The Open Network Foundation and TON Social. The systems were unable to deliver the lists. This is just one notable failure which a humanoid on my research team was able to rectify in an expeditious manner. (Did you know the Ku Group was on my researcher’s list?) Just reportin’.

Net net: Will AI repay the billions sunk into the data centers, the legal fees (many still looming), the staff, and the marketing? If you ask an accelerationist, the answer is, “Absolutely.” If you ask a dinobaby, you may hear, “Maybe, but some fundamental innovations are going to be needed.” If you ask an AI will kill us all type like the Xoogler Mo Gawdat, you will hear, “Doom looms.”  Just dinobabyin’.

Stephen E Arnold, December 4, 2024

The Golden Fleecer of the Year: Boeing

November 29, 2024

When I was working in Washington, DC, I had the opportunity to be an “advisor” to the head of the Joint Committee on Atomic Energy. I recall Craig Hosmer (R-California), a retired rear admiral, saying, “Those Air Force guys overpay.” The admiral was correct, but I think that other branches of the US Department of Defense have been snookered a time or two.

In the 1970s and 1980s, Senator William Proxmire (D-Wisconsin) had one of his staff keep an eye on reports about wild and crazy government expenditures. Every year, the Senator reminded people of a chivalric award dating allegedly from the 1400s. Yep, the Middle Ages in DC.

The Order of the Golden Fleece in old timey days of yore meant the recipient received a snazzy chivalric honor intended to promote Christian values and the good-neighbor policies of Burgundy and, later, Spain and Austria. A person with the fleece was important, a bit like a celebrity arriving at a Hollywood Oscar event. (Yawn)


Thanks, Wikipedia. Allegedly an example of a chivalric Golden Fleece. Yes, that is a sheep, possibly dead or getting ready to be dipped.

Reuters, the trusted outfit which tells me it is trusted each time I read one of its “real” news stories, published “Boeing Overcharged Air Force Nearly 8,000% for Soap Dispensers, Watchdog Alleges.” The write up stated in late October 2024:

Boeing overcharged the U.S. Air Force for spare parts for C-17 transport planes, including marking up the price on soap dispensers by 7,943%, according to a report by a Pentagon watchdog. The Department of Defense Office of Inspector General said on Tuesday the Air Force overpaid nearly $1 million for a dozen spare parts, including $149,072 for an undisclosed number of lavatory soap dispensers from the U.S. plane maker and defense contractor.

I have heard that the Department of Defense has not been able to monitor some of its administrative activities or complete an audit of what it does with its allocated funds.

According to the trusted write up:

The Pentagon’s budget is huge, breaking $900 billion last year, making overcharges by defense contractors a regular headache for internal watchdogs, but one that is difficult to detect. The Inspector General also noted it could not determine if the Air Force paid a fair price on $22 million of spare parts because the service did not keep a database of historical prices, obtain supplier quotes or identify commercially similar parts.

My view is that one of the elected officials in Washington, DC, should consider reviving the Proxmire Golden Fleece Award. Boeing may qualify, but there may be other contenders for the award as well.

I quite like the idea of scope changes and engineering change orders for some US government projects. But I have to admit that Senator Proxmire’s identification of a $600 hammer sold to the US Department of Defense now seems almost quaint by comparison.

That 8,000 percent markup is pretty nifty. Oh, on Amazon soap dispensers cost between $20 and $100. Should the Reuters story have mentioned:

  1. Procurement reform
  2. Poor financial controls
  3. Lack of common sense?

Of course not! The trusted outfit does not get mired in silly technicalities. And Boeing? That outfit is doing a bang-up job.
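For what it is worth, the markup arithmetic is easy to check. The sketch below uses the 7,943 percent figure from the Inspector General report and the $20 to $100 Amazon retail range mentioned above; the derived per-unit prices are my own arithmetic, not figures from the report:

```python
# Back-of-the-envelope check of the reported figures. The 7,943 percent
# markup comes from the Inspector General report cited above; the retail
# prices are the $20 to $100 Amazon range mentioned in the post. The
# derived per-unit prices are my arithmetic, not numbers from the report.

MARKUP_PCT = 7_943

def marked_up_price(retail: float, markup_pct: float = MARKUP_PCT) -> float:
    """Price after applying a percentage markup to a retail base price."""
    return retail * (1 + markup_pct / 100)

for retail in (20, 100):
    print(f"${retail} dispenser -> ${marked_up_price(retail):,.0f}")
# -> $20 dispenser -> $1,609
# -> $100 dispenser -> $8,043
```

An 80x multiplier, give or take, on a bathroom fixture.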

Stephen E Arnold, November 29, 2024

AI and Efficiency: What Is the Cost of Change?

November 18, 2024

No smart software. Just a dumb dinobaby. Oh, the art? Yeah, MidJourney.

Companies are embracing smart software. One question which gets, from my point of view, little attention is, “What is the cost of changing an AI system a year or two down the road?” The focus at this time is getting some AI up and running so an organization can “learn” whether AI works or not. A parallel development is taking place among vendors of enterprise and industry-specific specialized software. Examples range from a brand-new AI-powered accounting system to Microsoft “sticking” AI into the ASCII editor Notepad.


Thanks, MidJourney. Good enough.

Let’s tally the costs which an organization faces 24 months after flipping the switch in, for example, a hospital chain which uses smart software to convert a physician’s spoken comments about a patient to data which can be used for analysis to provide insight into evidence based treatment for the hospital’s constituencies.

Here are some costs for staff, consultants, and lawyers:

  1. Paying for the time required to figure out which outputs are on the money and which are not good, or just awful (like dead patients)
  2. The time required to figure out if the present vendor can fix up the problem or a new vendor’s system must be deployed
  3. Going through the smart software recompete or rebid process
  4. Getting the system up and running
  5. The cost of retraining staff
  6. Chasing down dependencies like other third party software for the essential “billing process”
  7. Optimizing the changed or alternative system.

The enthusiasm for smart software makes talking about these future costs fade a little.

I read “AI Makes Tech Debt More Expensive,” and I want to quote one passage from the pretty good essay:

In essence, the goal should be to unblock your AI tools as much as possible. One reliable way to do this is to spend time breaking your system down into cohesive and coherent modules, each interacting through an explicit interface. A useful heuristic for evaluating a set of modules is to use them to explain your core features and data flows in natural language. You should be able to concisely describe current and planned functionality. You might also want to set up visibility and enforcement to make progress toward your desired architecture. A modern development team should work to maintain and evolve a system of well-defined modules which robustly model the needs of their domain. Day-to-day feature work should then be done on top of this foundation with maximum leverage from generative AI tooling.
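The modular approach the quoted passage recommends can be sketched in a few lines. This is a hypothetical illustration, not code from the essay; all names are invented:

```python
from typing import Protocol

# A hypothetical sketch of the quoted advice: cohesive modules that
# interact only through an explicit interface, so generative AI tooling
# (and humans) can work on one module without untangling the rest.
# All names here are invented for illustration.

class RateSource(Protocol):
    def rate_for(self, service_code: str) -> float: ...

class FlatRates:
    """One interchangeable implementation of the RateSource interface."""
    def __init__(self, table: dict[str, float]) -> None:
        self._table = table

    def rate_for(self, service_code: str) -> float:
        return self._table.get(service_code, 0.0)

def invoice_total(source: RateSource, codes: list[str]) -> float:
    # Feature work builds on the interface, not on FlatRates internals,
    # so swapping in another module later does not ripple outward.
    return sum(source.rate_for(c) for c in codes)

rates = FlatRates({"exam": 120.0, "xray": 80.0})
print(invoice_total(rates, ["exam", "xray", "unknown"]))  # -> 200.0
```

A team can then describe each module’s role in a sentence or two, which is the natural-language heuristic the passage suggests.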

Will organizations make this shift? Will the hyperbolic AI marketers acknowledge the future costs of pasting smart software on existing software like circus posters on crumbling walls?

Nope.

Those two-year costs will be interesting for the bean counters when those kicked cans end up in their workspaces.

Stephen E Arnold, November 18, 2024

Let Them Eat Cake or Unplug: The AI Big Tech Bro Effect

November 7, 2024

I spotted a news item which will zip right by some people. The “real” news outfit owned by the lovable Jeff Bezos published “As Data Centers for AI Strain the Power Grid, Bills Rise for Everyday Customers.” The write up tries to explain that AI costs for electric power are being passed along to regular folks. Most of these electricity-dependent people do not take home paychecks with tens of millions of dollars the way the Nadella, Zuckerberg, or Pichai types of breadwinners do. Heck, these AI poohbahs think about buying modular nuclear power plants. (I want to point out that these do not exist and may not for many years.)

The article is not going to thrill the professionals who are experts on utility demand and pricing. Those folks know that the smart software poohbahs have royally screwed up some weekends and vacations for the foreseeable future.

The WaPo article (presumably blessed by St. Jeffrey) says:

The facilities’ extraordinary demand for electricity to power and cool computers inside can drive up the price local utilities pay for energy and require significant improvements to electric grid transmission systems. As a result, costs have already begun going up for customers — or are about to in the near future, according to utility planning documents and energy industry analysts. Some regulators are concerned that the tech companies aren’t paying their fair share, while leaving customers from homeowners to small businesses on the hook.

Okay, typical “real” journospeak. “Costs have already begun going up for customers.” Hey, no kidding. The big AI parade began with the January 2023 announcement that the Softies were going whole hog on AI. The lovable Google immediately flipped into alert mode. I can visualize flashing yellow LEDs and faux red stop lights blinking in the gray corridors in Shoreline Drive facilities if there are people in those offices again. Yeah, ghostly blinking.

The write up points out, rather unsurprisingly:

The tech firms and several of the power companies serving them strongly deny they are burdening others. They say higher utility bills are paying for overdue improvements to the power grid that benefit all customers.

Who wants PEPCO and VEPCO to kill their service? Actually, no one. Imagine life in NoVa, DC, and the ever lovely Maryland without power. Yikes.

From my point of view, informed by some exposure to the utility sector at a nuclear consulting firm and then at a blue chip consulting outfit, here’s the scoop.

The demand planning done with rigor by US utilities took a hit each time the Big Dogs of AI brought more specialized, power-hungry servers online and — here’s the killer, folks — left them on. The way power consumption used to work is that during the day, consumer usage would fall and business/industry usage would rise. The power-hogging steel industry was a 24×7 outfit. But over the last 40 years, manufacturing has wound down and consumer demand has crept upwards. The curves had to be plotted and the demand projected, but, in general, life was not too crazy for the US power generation industry. Sure, there were the costs associated with decommissioning “old” nuclear plants and expanding new non-nuclear facilities with expensive but mandated environmental gewgaws, gadgets, and gizmos plugged in to save the snail darters and the frogs.

Since January 2023, demand has been curving upwards. Power generation outfits don’t want to miss out on revenue. Therefore, some utilities have worked out what I would call sweetheart deals for electricity for AI-centric data centers. Some of these puppies suck more power in a day than a dying city located in Flyover Country in Illinois.

Plus, these data centers are not enough. Each quarter the big AI dogs explain that more billions will be pumped into AI data centers. Keep in mind: These puppies run 24×7. The AI wolves have worked out discount rates.

What do the US power utilities do? First, the models have to be reworked. Second, the relationships to trade, buy, or “borrow” power have to be refined. Third, capacity has to be added. Fourth, the utility rate people create a consumer pricing graph which may look like this:

[Graph: projected electricity demand and consumer costs, before and after the AI build-out]

Guess who will pay? Yep, consumers.

The red line is the projected post-AI power demand from the AI big dogs. For comparison, the blue line shows the demand curve before Microsoft ignited the AI wars. Note that the gray line is the consumer cost, the monthly electricity bill for Bob and Mary Normcore, and the nuclear purple line shows what is happening, and will continue to happen, to those consumer electricity costs.

The graph shows that the cost will be passed to consumers. Why? The sweetheart deals to get the Big Dog power generation contracts mean guaranteed cash flow and a hurdle for a low-ball competitor to lumber over. Power generation utilities are not the Neon Deions of American business.

There will be hand waving by regulators. Some city government types will argue, “We need the data centers.” Podcasts and posts on social media will sprout like weeds in an untended field.

Net net: Bob and Mary Normcore may have to decide between food and electricity. AI is wonderful, right?

Stephen E Arnold, November 7, 2024

Dreaming about Enterprise Search: Hope Springs Eternal…

November 6, 2024

The post is the work of a humanoid who happens to be a dinobaby. GenX, Y, and Z, read at your own risk. If art is included, smart software produces these banal images.

Enterprise search is back, baby. The marketing lingo is very year 2003, however. The jargon has been updated, but the story is the same: We can make an organization’s information accessible. Instead of Autonomy’s Neurolinguistic Programming, we have AI. Instead of “just text,” we have video content processed. Instead of filters, we have access to cloud-stored data.


An executive knows he can crack the problem of finding information instantly. The problem is doing it so that the time and cost of data clean-up do not exceed the price of the Empire State Building. Thanks, Stable Diffusion. Good enough.

A good example of the current approach to selling the utility of an enterprise search and retrieval system is the article / interview in Betanews called “How AI Is Set to Democratize Information.” I want to be upfront. I am mostly aligned with the analysis of information and knowledge presented by Taichi Sakaiya. His The Knowledge Value Revolution or a History of the Future has been a useful work for me since the early 1990s. I was in Osaka, Japan, lecturing at the Kansai Institute of Technology, when I learned of this book from my gracious hosts and the Managing Director of Kinokuniya (my sponsor). Devaluing knowledge by regressing to the fat part of a Gaussian distribution is not something about which I am excited.

However, the senior manager of Pyron (Raleigh, North Carolina), an AI-powered information retrieval company, finds the concept in line with what his firm’s technology provides to its customers.  The article includes this statement:

The concept of AI as a ‘knowledge cloud’ is directly tied to information access and organizational intelligence. It’s essentially an interconnected network of systems of records forming a centralized repository of insights and lessons learned, accessible to individuals and organizations.

The benefit is, according to the Pyron executive:

By breaking down barriers to knowledge, the AI knowledge cloud could eliminate the need for specialized expertise to interpret complex information, providing instant access to a wide range of topics and fields.

The article introduces a fresh spin on the problems of information in organizations:

Knowledge friction is a pervasive issue in modern enterprises, stemming from the lack of an accessible and unified source of information. Historically, organizations have never had a singular repository for all their knowledge and data, akin to libraries in academic or civic communities. Instead, enterprise knowledge is scattered across numerous platforms and systems — each managed by different vendors, operating in silos.

Pyron opened its doors in 2017. After seven years, the company is presenting a vision of what access to enterprise information could, would, and probably should do.

The reality, based on my experience, is different. I am not talking about Pyron now. I am discussing the re-emergence of enterprise search as the killer application for bolting artificial intelligence to information retrieval. If you are in love with AI systems from oligopolists, you may want to stop scanning this blog post. I do not want to be responsible for a stroke or an esophageal spasm. Here we go:

  1. Silos of information are an emergent phenomenon. Knowledge has value. Few want to make their information available without some value returning to them. Therefore, one can talk about breaking silos and democratization, but those silos will be erected and protected. Secret skunk works, mislabeled projects, and squirreling away knowledge nuggets for a winter’s day. In the case of Senator Everett Dirksen, the information was used to get certain items prioritized. That’s why there is a building named after him.
  2. The “value” of information or knowledge depends on another person’s need. A database which contains the antidote to save a child from a household poisoning costs money to access. Why? Desperate people will pay. The “information wants to be free” idea is not one that makes sense to those with information and the knowledge to derive value from what another finds inscrutable. I am not sure that “democratizing information” meshes smoothly with my view.
  3. Enterprise search, with or without AI, hits cost and time problems which have dogged the field for more than 50 years. SMART failed, STAIRS III failed, and the hundreds of followers have failed. Content is messy. The idea that one can process text, spreadsheets, Word files, and email is one thing. Doing it without skipping wonky files, or without the time and cost of repurposing data, remains difficult. Chemical companies deal with formulae; nuclear engineering firms deal with records management and mathematics; and consulting companies deal with highly paid people who lock up their information on a personal laptop. Without these little puddles of information, the “answer” or the “search output” will not be just a hallucination. The answer may be dead wrong.

I understand the need to whip up jargon like “democratize information”, “knowledge friction”, and “RAG frameworks”. The problem is that despite the words, delivering accurate, verifiable, timely on-point search results in response to a query is a difficult problem.

Maybe one of the monopolies will crack the problem. But most of the output is a glimpse of what may be coming in the future. When will the future arrive? Probably when the next PR or marketing write up about search appears. As I have said numerous times, I find it more difficult to locate the information I need than at any time in my more than half a century in online information retrieval.

What’s easy is recycling marketing literature from companies who were far better at describing a “to be” system, not a “here and now” system.

Stephen E Arnold, November 6, 2024
