Another Big Consulting Firm Does Smart Software… Sort Of
September 3, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Will programmers and developers become targets for prosecution when flaws cripple vital computer systems? That may be a good idea because pointing to the “algorithm” as the cause of a problem does not seem to reduce the number of bugs, glitches, and unintended consequences of software. A write up which itself may be a blend of human and smart software suggests change is afoot.
Thanks, MSFT Copilot. Good enough.
“Judge Rules $400 Million Algorithmic System Illegally Denied Thousands of People’s Medicaid Benefits” reports that software crafted by the services firm Deloitte did not work as the State of Tennessee assumed. Yep, assume. A very interesting word.
The article explains:
The TennCare Connect system—built by Deloitte and other contractors for more than $400 million—is supposed to analyze income and health information to automatically determine eligibility for benefits program applicants. But in practice, the system often doesn’t load the appropriate data, assigns beneficiaries to the wrong households, and makes incorrect eligibility determinations, according to the decision from Middle District of Tennessee Judge Waverly Crenshaw Jr.
At one time, Deloitte was an accounting firm. Then it became a consulting outfit a bit like McKinsey. Well, a lot like that firm and other blue-chip consulting outfits. In its current manifestation, Deloitte is into technology, programming, and smart software. Well, maybe the software is smart but the programmers and the quality control seem to be riding in a different school bus from some other firms’ technical professionals.
The write up points out:
Deloitte was a major beneficiary of the nationwide modernization effort, winning contracts to build automated eligibility systems in more than 20 states, including Tennessee and Texas. Advocacy groups have asked the Federal Trade Commission to investigate Deloitte’s practices in Texas, where they say thousands of residents are similarly being inappropriately denied life-saving benefits by the company’s faulty systems.
In 2016, Cathy O’Neil published Weapons of Math Destruction. Her book had a number of interesting examples of what goes wrong when careless people make assumptions about numerical recipes. If she does another book, she may include this Deloitte case.
Several observations:
- The management methods used to create these smart systems require scrutiny. The downstream consequences are harmful.
- The developers and programmers can be fired, but the failure to have remediating processes in place when something unexpected surfaces must be part of the work process.
- Less informed users and more smart software strike me as a combustible mixture. When a system ignites, the impacts may reverberate in other smart systems. What entity is going to fix the problem and accept responsibility? The answer is, “No one,” unless there are significant consequences.
The State of Tennessee’s experience makes clear that a “brand name,” slick talk, an air of confidence, and possibly ill-informed managers can do harm. The opioid misstep was bad. Now imagine that type of thinking in the form of a fast, indifferent, and flawed “system.” Firing a 25-year-old is not the solution.
Stephen E Arnold, September 3, 2024
Consensus: A Gen AI Search Fed on Research, not the Wild Wild Web
September 3, 2024
How does one make an AI search tool that is actually reliable? Maybe start by supplying it with only peer-reviewed papers instead of the whole Internet. Fast Company sings the praises of Consensus in, “Google Who? This New Service Actually Gets AI Search Right.” Writer JR Raphael begins by describing why most AI-powered search engines, including Google, are terrible:
“The problem with most generative AI search services, at the simplest possible level, is that they have no idea what they’re even telling you. By their very nature, the systems that power services like ChatGPT and Gemini simply look at patterns in language without understanding the actual context. And since they include all sorts of random internet rubbish within their source materials, you never know if or how much you can actually trust the info they give you.”
Yep, that pretty much sums it up. So, like us, Raphael was skeptical when he learned of yet another attempt to bring generative AI to search. Once he tried the easy-to-use Consensus, however, he was convinced. He writes:
“In the blink of an eye, Consensus will consult over 200 million scientific research papers and then serve up an ocean of answers for you—with clear context, citations, and even a simple ‘consensus meter’ to show you how much the results vary (because here in the real world, not everything has a simple black-and-white answer!). You can dig deeper into any individual result, too, with helpful features like summarized overviews as well as on-the-fly analyses of each cited study’s quality. Some questions will inevitably result in answers that are more complex than others, but the service does a decent job of trying to simplify as much as possible and put its info into plain English. Consensus provides helpful context on the reliability of every report it mentions.”
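The “consensus meter” idea can be approximated with a simple tally of per-study verdicts. The sketch below is my own illustration, not Consensus’s actual method; the verdict categories and the sample data are assumptions.

```python
from collections import Counter

def consensus_meter(findings):
    """Toy 'consensus meter': tally per-study verdicts and report the
    percentage share of each, mimicking the idea of showing how much
    research results vary. Illustration only; the real Consensus
    service's method is its own and is not published here."""
    counts = Counter(findings)
    total = len(findings)
    return {verdict: round(100 * n / total) for verdict, n in counts.items()}

# Hypothetical per-study verdicts for a question such as
# "Does intermittent fasting reduce blood pressure?"
studies = ["yes", "yes", "yes", "mixed", "no", "yes", "mixed"]
print(consensus_meter(studies))
```

Even this toy version shows why the meter is useful: a 57/29/14 split communicates something a single chatbot answer hides.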
See the post for more on using the web-based app, including a few screenshots. Raphael notes that, if one does not have a specific question in mind, the site has long lists of its top answers for curious users to explore. The basic service is free to search with no query cap, but the creators hope to entice us with an $8.99/month premium plan. Of course, this service is not going to help with every type of search. But if the subject is worthy of academic research, Consensus should have the (correct) answers.
Cynthia Murrell, September 3, 2024
Elastic N.V. Faces a New Search Challenge
September 2, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Elastic N.V. and Shay Banon are what I call search survivors. Gone are Autonomy (mostly), Delphis, Exalead, Fast Search & Transfer (mostly), Vivisimo, and dozens upon dozens of companies who sought to put an organization’s information at an employee’s fingertips. The marketing lingo of these and other now-defunct enterprise search vendors is surprisingly timely. One can copy and paste chunks of Autonomy’s white papers into the “OpenAI ChatGPT search is coming” articles, and few would notice that the assertions and even the word choice were decades old.
Elastic N.V. survived. It rose from a failed search system called Compass. Elastic N.V. recycled the Lucene libraries, released the open source Elasticsearch, and did an IPO. Some people made a lot of money. The question is, “Will that continue?”
I noted the Silicon Angle article “Elastic Shares Plunge 25% on Lower Revenue Projections Amid Slower Customer Commitments.” That write up says:
In its earnings release, Chief Executive Officer Ash Kulkarni started positively, noting that the results in the quarter were solid and outperformed previous guidance, but then comes the catch and the reason why Elastic stock is down so heavily after hours. “We had a slower start to the year with the volume of customer commitments impacted by segmentation changes that we made at the beginning of the year, which are taking longer than expected to settle,” Kulkarni wrote. “We have been taking steps to address this, but it will impact our revenue this year.” With that warning, Elastic said that it expects fiscal second-quarter adjusted earnings per share of 37 to 39 cents on revenue of $353 million to $355 million. The earnings per share forecast was ahead of the 34 cents expected by analysts, but revenue fell short of an expected $360.8 million. It was a similar story for Elastic’s full-year outlook, with the company forecasting earnings per share of $1.52 to $1.56 on revenue of $1.436 billion to $1.444 billion. The earnings per share outlook was ahead of an expected $1.42, but like the second quarter outlook, revenue fell short, as analysts had expected $1.478 billion.
Elastic N.V. makes money via service and for-fee extras. I want to point out that the $300 million or so revenue numbers are good. Elastic N.V. has figured out a business model that has not required [a] fiddling the books, [b] finding a buyer as customers complain about problems with the search software, [c] enduring rage from financing sources about cash burn and lousy revenue, [d] dodging government investigators poking around for tax and other financial irregularities, [e] pricing the software beyond the reach of the licensee, or [f] shipping a system that simply does not search or retrieve what the user wanted or expected.
Elastic N.V. and its management team may have a challenge to overcome. Thanks, OpenAI; the MSFT Copilot thing crashed today.
So what’s the fix?
A partial answer appears in the Elastic N.V. blog post titled “Elasticsearch Is Open Source, Again.” The company states:
The tl;dr is that we will be adding AGPL as another license option next to ELv2 and SSPL in the coming weeks. We never stopped believing and behaving like an open source community after we changed the license. But being able to use the term Open Source, by using AGPL, an OSI approved license, removes any questions, or fud, people might have.
Without slogging through the confusion among what Elastic N.V. sells, the open source version of Elasticsearch, the dust-up with Amazon over its really original approach to search inspired by Elasticsearch, Lucid Imagination’s innovation, and the creaking edifice of A9, Elastic N.V. has released Elasticsearch under an additional open source license. I think that means one can use the software and not pay Elastic N.V. until additional services are needed. In my experience, most enterprise search systems, regardless of how they are explained, need the “owner” of the system to lend a hand. Contrary to the belief that smart software can do enterprise search right now, there are some hurdles to get over.
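For readers unfamiliar with what “using the software and not paying” looks like in practice, here is a minimal sketch of the kind of query-DSL payload a self-hosted Elasticsearch node accepts on its `_search` endpoint. The index name (“enterprise-docs”) and field name (“body”) are made up for the sketch, and the node URL mentioned in the comment assumes a default local install.

```python
import json

# Hypothetical query-DSL payload for a self-hosted Elasticsearch node.
# A default local install listens on http://localhost:9200, and a body
# like this would be POSTed to /enterprise-docs/_search. The index and
# field names here are invented for illustration.
query = {
    "query": {
        "match": {"body": "quarterly revenue forecast"}
    },
    "size": 10,  # return the top ten hits
}
print(json.dumps(query, indent=2))
```

The software itself is free to run; the “lend a hand” part (tuning analyzers, sizing the cluster, securing the node) is where service revenue tends to come from.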
Will “going open source again” work?
Let me offer several observations based on my experience with enterprise search and retrieval which reaches back to the days of punch cards and systems which used wooden rods to “pull” cards with a wanted tag (index term):
- When an enterprise search system loses revenue momentum, the fix is to acquire companies in an adjacent search space and use that revenue to bolster the sales prospects for upsells.
- The company with the downturn gilds the lily and seeks a buyer. One example was the sale of Exalead to Dassault Systèmes, which calculated it was more economical to buy a vendor than to keep paying its then-current supplier, which I think was Autonomy, but I am not sure. Fast Search & Transfer pulled off this type of “exit” as some of the company’s activities were under scrutiny.
- The search vendor can pivot from doing “search” and morph into a business intelligence system. (By the way, that did not work for Grok.)
- The company disappears. One example is Entopia. Poof. Gone.
I hope Elastic N.V. thrives. I hope the “new” open source play works. Search — whether the enterprise or Web variety — is far from a solved problem. People believe they have the answer. Others believe them and license the “new” solution. The reality is that finding information is a difficult challenge. Let’s hope the “downturn” and “negativism” go away.
Stephen E Arnold, September 2, 2024
Social Media Cowboys, the Ranges Are Getting Fences
September 2, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Several recent developments suggest that the wide open and free ranges are being fenced in. How can I justify this statement, pardner? Easy. Check out these recent developments:
- The founder of Telegram is Pavel Durov. He was arrested on Saturday, August 24, 2024, at Le Bourget airport near Paris
- TikTok will stand trial for the harms to children caused by the “algorithm”
- Brazil has put up barbed wire to keep Twitter (now X.com) out of the country.
I am not the smartest dinobaby in the rest home, but even I can figure out that governments are taking action after decades of thinking about more weighty matters than the safety of children, the problems social media causes for parents and teachers, and the importance of taking immediate and direct action against those breaking laws.
A couple of social media ranchers are wondering about the actions of some judicial officials. Thanks, MSFT Copilot. Good enough like most software today.
Several questions seem to be warranted.
First, the actions are uncoordinated. Brazil, France, and the US have reached conclusions about different social media companies and acted without consulting one another. How quickly will other countries consider their particular situations and reach similar conclusions about free range technology outfits?
Second, why have legal authorities and legislators in many countries failed to recognize the issues radiating from social media and related technology operators? Was it the novelty of technology? Was it a lack of technology savvy? Was it moral or financial considerations?
Third, how will the harms be remediated? Is it enough to block a service or change penalties for certain companies?
I am personally not moved by those who say speech must be free and unfettered. Sorry. The obvious harms outweigh that self-serving statement from those who are mesmerized by online life or paid to promote that idea. I understand that a percentage of students will become high achievers with or without traditional reading, writing, and arithmetic. However, my concern is the other 95 percent of students. Structured learning is necessary for a society to function. That’s why there is education.
I don’t have any big ideas about ameliorating the obvious damage done by social media. I am a dinobaby and largely untouched by TikTok-type videos or Facebook-type pressures. I am, however, delighted to be able to cite three examples of long overdue action by Brazilian, French, and US officials. Will some of these wild west digital cowboys end up in jail? I might support that, pardner.
Stephen E Arnold, September 2, 2024
Google Claims It Fixed Gemini’s “Degenerate” People
September 2, 2024
History revision is a problem. It’s been a problem for… well… since the start of recorded history. The Internet and mass media are infamous for being incorrect about historical facts, but image-generating AI, like Google’s Gemini, is even worse. TechCrunch explains what Google did to correct its inaccurate algorithm: “Google Says It’s Fixed Gemini’s People-Generating Feature.”
Google released Gemini (originally Bard) in early 2023, then about a year later paused the chatbot’s people-generating feature for being too “woke,” “politically incorrect,” and “historically inaccurate.” The worst of Gemini’s offending actions: asked to depict a Roman legion, it returned an ethnically diverse group that fit the DEI agenda, while a request for an equally diverse Zulu warrior army returned only brown-skinned people. Only the latter is historically accurate. Google, of course, did not want to offend Western ethnic minorities, as if Europe (where light-skinned people originate) were ethnically diverse centuries ago.
Everything was A-OK until someone invoked Godwin’s Law by asking Gemini to generate (degenerate [sic]) an image of Nazis. Gemini returned an ethnically diverse picture with all types of Nazis, not the historically accurate light-skinned Germans native to Europe.
Google claims it fixed Gemini, and the fix took way longer than planned. The people-generating feature is only available on paid Gemini plans. How does Google plan to make its AI people less degenerative? Here’s how:
“According to the company, Imagen 3, the latest image-generating model built into Gemini, contains mitigations to make the people images Gemini produces more ‘fair.’ For example, Imagen 3 was trained on AI-generated captions designed to ‘improve the variety and diversity of concepts associated with images in [its] training data,’ according to a technical paper shared with TechCrunch. And the model’s training data was filtered for ‘safety,’ plus ‘review[ed] … with consideration to fairness issues,’ claims Google. … ‘We’ve significantly reduced the potential for undesirable responses through extensive internal and external red-teaming testing, collaborating with independent experts to ensure ongoing improvement,’ the spokesperson continued. ‘Our focus has been on rigorously testing people generation before turning it back on.’”
Google will eventually make it work and the company is smart to limit Gemini’s usage to paid subscriptions. Limiting the user pool means Google can better control the chatbot and (if need be) turn it off. It will work until bad actors learn how to abuse the chatbot again for their own sheets and giggles.
Whitney Grace, September 2, 2024
The Seattle Syndrome: Definitely Debilitating
August 30, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I think the film “Sleepless in Seattle” included dialog like this:
“What do they call it when everything intersects?”
“The Bermuda Triangle.”
Seattle has Boeing. The company is in the news not just for doors falling off its aircraft. The outfit has stranded two people in earth orbit and has to let Elon Musk bring them back to earth. And Seattle has Amazon, an outfit that stands behind the products it sells. And I have to include Intel Labs, not too far from the University of Washington, which is famous in its own right for many things.
Two job seekers discuss future opportunities in some of Seattle and environ’s most well-known enterprises. The image of the city seems a bit dark. Thanks, MSFT Copilot. Are you having some dark thoughts about the area, its management talent pool, and its commitment to ethical business activity? That’s a lot of burning cars, but whatever.
Is Seattle a Bermuda Triangle for large companies?
This question invites another; specifically, “Is Microsoft entering Seattle’s Bermuda Triangle?”
The giant outfit has entered a deal with the interesting specialized software and consulting company Palantir Technologies Inc. This firm has a history of ups and downs since its founding 21 years ago. Microsoft has committed to smart software from OpenAI and other outfits. Artificial intelligence will be “in” everything from the Azure Cloud to Windows. Despite concerns about privacy, Microsoft wants each Windows user’s machine to keep screenshots of what the user “does” on that computer.
Microsoft seems to be navigating the Seattle Bermuda Triangle quite nicely. No hints of a flash disaster like the sinking of the sailing yacht Bayesian. Who could have predicted that? (That’s a reminder that fancy math does not deliver 1.000000 outputs on a consistent basis.)
Back to Seattle. I don’t think failure or extreme stress is due to the water. The weather, maybe? I don’t think it is the city government. It is probably not the multi-faceted start up community nor the distinctive vocal tones of its most high profile podcasters.
Why is Seattle emerging as a Bermuda Triangle for certain firms? What forces are intersecting? My observations are:
- Seattle’s business climate is a precursor of broader management issues. I think it is like the pigeons that Greeks examined for clues about their future.
- The individuals who work at Boeing-type outfits go along with business processes modified incrementally to ignore issues. The mental orientation of those employed is either malleable or indifferent to downstream issues. For example, a Windows update killed printing or some other function. The response strikes me as “meh.”
- The management philosophy disconnects from users and focuses on delivering financial results. Those big houses come at a cost. The payoff is personal. The cultural impacts are not on the radar. Hey, those quantum Horse Ridge things make good PR. What about the new desktop processors? Just great.
Net net: I think Seattle is a city playing an important role in defining how businesses operate in 2024 and beyond. I wish I were kidding. But I am bedeviled by reminders of a spacecraft which issues one-way tickets, software glitches, and products which seem to vary from the online images and reviews. (Maybe it is the water? Bermuda Triangle water?)
Stephen E Arnold, August 30, 2024
Pavel Durov: Durable Appeal Despite Crypto, French Allegations, and a Travel Restriction
August 30, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Telegram, a Web3 crypto facilitator, is in the news because its Big Dog is in a French dog house. He can roam free, but he cannot leave the country. I mention Pavel Durov, the brother of Nikolai (who has two PhDs in his brain), because he has appeal. Allegedly he also has more than 100 children. I find Mr. Durov’s fecundity an anomaly if the information in “Men in Crypto Less Attractive to Women Than Cosplayers, Anime Buffs: Survey” is accurate. That story suggests that men in crypto will not be at the front of the line when it comes to fathering five score bambinos.
Thanks, Microsoft Copilot. Nice cosplay. Who is the fellow in the bunny suit?
The write up reports:
Crypto was seen as the ninth-most unattractive hobby for males, the Aug. 24 survey by the Date Psychology blog found, which was a convenience sample of 814 people, 48% of which were female. The authors noted that based on past surveys, their sample population disproportionately includes women of “high social status,” with a high level of education and who are predominately white.
I will not point out that the sample size seems a few cans short of a six pack, nor that an unbiased sample is usually a good idea. But the idea is interesting.
The article continues with what I think are unduly harsh words:
Female respondents were asked if they found a list of 74 hobbies either “attractive” or “unattractive.” Only 23.1% said crypto was an attractive hobby, while around a third found comic books and cosplaying attractive. It left crypto as the second-most unattractive so-called “nerd” hobby to women — behind collecting products from Funko, which makes pop culture and media-based bobblehead figures.
The article includes some interesting data:
The results show that females thought reading was the most attractive hobby for a man (98.2%), followed by knowing or learning a foreign language (95.6%) and playing an instrument (95.4%).
I heard that Pavel Durov, not the brother with the two-PhD brain, has a knack for languages. He allegedly speaks Russian (seems logical; his parents are Russian), French (seems logical; he has French citizenship), “Persian” (seems logical; he has UAE citizenship and lives in quite spartan quarters in Dubai), and the languages of Saint Kitts and Nevis (seems logical that he would speak English and some Creole). Now that he is in France with only a travel restriction, he can attend some anime and cosplay events. It is possible that Parisian crypto enthusiasts will have a “Crypto Night” at a bistro like Le Procope. In order to have more appeal, he may wear a get-up.
I would suggest that his billionaire status and “babes near me” function in Telegram might enhance his appeal. If he has more than 100 Durov bambinos, why not shoot for 200 or more? He is living proof that surveys are not 100 percent reliable.
Stephen E Arnold, August 30, 2024
What Is a Good Example of AI Enhancing Work Processes? Klarna
August 30, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Klarna is a financial firm in Sweden. (Did you know Sweden has a violence problem?) The country also has a company which is quite public about the value of smart software to its operations. “‘Our Chatbots Perform The Tasks Of 700 People’: Buy Now, Pay Later Company Klarna To Axe 2,000 Jobs As AI Takes On More Roles” reports:
Klarna has already cut over 1,000 employees and plans to remove nearly 2,000 more
Yep, that’s the use case. Smart software allows the firm’s leadership to terminate people. (Does that managerial attitude contribute to the crime problem in Sweden? Of course not. The company is just being efficient.)
The write up states:
Klarna claims that its AI-powered chatbot can handle the workload previously managed by 700 full-time customer service agents. The company has reduced the average resolution time for customer service inquiries from 11 minutes to two while maintaining consistent customer satisfaction ratings compared to human agents.
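The claimed numbers invite a back-of-the-envelope check. This sketch is my own arithmetic, not Klarna’s, and it assumes resolution time is the only variable; real staffing math depends on inquiry volume, concurrency, and shift coverage.

```python
# Back-of-envelope check of the Klarna claims. Simplification: this
# treats average resolution time as the only variable; real staffing
# depends on inquiry volume, concurrency, and shift coverage.
old_minutes = 11  # average resolution time with human agents
new_minutes = 2   # average resolution time with the chatbot

speedup = old_minutes / new_minutes
print(f"Per-inquiry speedup: {speedup:.1f}x")

# The "700 agents" figure is a separate claim about total inquiry
# volume, which the article does not supply, so it cannot be derived
# from the speedup alone.
```

A 5.5x per-inquiry speedup is consistent with, but does not prove, the “work of 700 people” claim; that depends on how many inquiries the bot actually absorbs.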
What’s the financial payoff for this leader in AI deployment? The write up says:
Klarna reported a 73 percent increase in average revenue per employee compared to last year.
Klarna, however, is humane. According to the article:
Notably, none of the workforce reductions have been achieved through layoffs. Instead, the company has relied on a combination of natural staff turnover and a hiring freeze implemented last year.
That’s a relief. Some companies would deploy Microsoft software with AI and start getting rid of people. The financial benefits are significant. Plus, as long as the company chugs along in good enough mode, the smart software delivers a win for the firm.
Are there any downsides? None in the write up. There is a financial payoff on the horizon. The article states:
In July [2024], Chrysalis Investments, a major Klarna investor, provided a more recent valuation estimate, suggesting that the fintech firm could achieve a valuation between 15 billion and 20 billion dollars in an initial public offering.
But what if the AI acts like a brake on the firm’s revenue growth and sales? Hey, this is an AI success. Why be negative? AI is wonderful, and Klarna’s customers appear to be thrilled with smart software. I personally love speaking to smart chatbots, don’t you?
Stephen E Arnold, August 30, 2024
New Research about Telegram and Its Technology
August 29, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Next week, my team and I will be presenting a couple of lectures to a group of US government cyber experts. Our topic is Telegram, which has been a focal point of my research team for most of 2024. Much of the information we have included in our talks will be new; that is, it presents a view of Telegram which is novel. However, we have available a public version of the material. Most of our work is delivered via video conferencing with PDFs of selected exhibits provided to those participating in a public version of our research.
For the Telegram project, the public lecture includes:
- A block diagram of the Telegram distributed system, including the crypto and social media components
- A timeline of Telegram innovations with important or high-impact innovations identified
- A flow diagram of the Open Network and its principal components
- Likely “next steps” for the distributed operation.
With the first stage of the French judicial process involving the founder of Telegram completed, our research project has become one of the first operational analyses of a system that remains unfamiliar to many people outside the Russian Federation, Ukraine, and a handful of other countries. Although usage of Telegram in North America is increasing, the service is off the radar of many people.
In fact, knowledge of Telegram’s basic functions is sketchy. Our research revealed:
- Users lack knowledge of Telegram’s approach to encryption
- The role US companies play in keeping the service online and stable
- The automation features of the system
- The reach of certain Telegram dApps (distributed applications) and YouTube, to cite one example.
The public version of our presentation to the US government professionals will be available in mid-September 2024. If you are interested in this lecture, please write benkent2020 at yahoo dot com. One of the Beyond Search team will respond to your inquiry with dates and fees, if applicable.
Stephen E Arnold, August 29, 2024
Yelp Google Legal Matter: A Glimpse of What Is to Come
August 29, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Yelp.com is one of the surviving re-inventions of the Yellow Pages. The online guide includes snapshots of a business, user reviews, and conveniences like classifications of business types. The company has asserted that Google has made the finding service’s life difficult. “Yelp Sues Google in Wake of Landmark Antitrust Ruling on Search” reports:
Yelp has spoken out about what it considers to be Google’s anticompetitive conduct for well over a decade. But the timing of Yelp’s lawsuit, filed just weeks after a Washington federal judge ruled that Google illegally monopolized the search market through exclusive deals, suggests that more companies may be emboldened to take action against the search leader in the coming months.
Thanks, MSFT Copilot. Good enough.
Yelp, like other efforts to build a business in the shadow of Google’s monolith, has pointed out that the online advertising giant has acted in a way that inhibited Yelp’s business. In the years prior to Judge Mehta’s ruling that Google was — hang on now, gentle reader — a monopoly, Yelp’s objections went nowhere. However, since Google learned that Judge Mehta decided against its argument that it was a mom-and-pop business too, Yelp is making another run at Googzilla.
The write up points out:
In its complaint, Yelp recounts how Google at first sought to move users off its search page and out onto the web as quickly as possible, giving rise to a thriving ecosystem of sites like Yelp that sought to provide the information consumers were seeking. But when Google saw just how lucrative it could be to help users find which plumber to hire or which pizza to order, it decided to enter the market itself, Yelp alleges.
The Google has, it appears, used a relatively simple method of surfing on queries for Yelp content. The technique is “self preferencing”; that is, Google just lists its own results above Yelp hits.
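Mechanically, self-preferencing in a results list is trivial to implement, which is part of why it is hard to detect from the outside. The sketch below is my own illustration of the concept, not a claim about Google’s actual ranking code; the domains and titles are invented.

```python
def self_preference(results, own_domains):
    """Toy self-preferencing: stable-partition a ranked result list so
    the platform's own properties come first, preserving the original
    relevance order within each group. Illustration only; not a claim
    about how any real search engine's ranking is implemented."""
    own = [r for r in results if r["domain"] in own_domains]
    rest = [r for r in results if r["domain"] not in own_domains]
    return own + rest

# Invented example: a relevance-ranked list with a competitor on top.
ranked = [
    {"domain": "yelp.com", "title": "Best pizza near me"},
    {"domain": "google.com/maps", "title": "Pizza places"},
    {"domain": "tripadvisor.com", "title": "Top pizza spots"},
]
print(self_preference(ranked, own_domains={"google.com/maps"}))
```

Note that nothing in the relevance scores changes; the platform’s property simply jumps the queue, which is exactly the behavior complainants allege.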
Several observations:
- Yelp has acted quickly, using the information in Judge Mehta’s decision as a surfboard
- Other companies will monitor this Yelp Google matter. If Yelp prevails, other companies which perceive themselves as victims of Google’s business tactics may head to court as well
- Google finds itself in a number of similar legal dust ups which add operating friction to the online advertising vendor’s business processes.
Google may be pinned down, tied up, and neutralized the way Gulliver was in Lilliput. That was satirical fiction; Yelp is operating in actual life.
Stephen E Arnold, August 29, 2024