How Will Smart Cars Navigate Crowded Cityscapes When People Do Humanoid Things?

September 11, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Who collided in San Francisco on July 6, 2024? (No, not the February 2024 incident. Yes, I know it is easy to forget such trivial incidents.) Did the Googley Waymo vehicle (self driving and smart, of course) bump into the cyclist? Did the cyclist decide to pull a European Union type stunt and run into the self driving car?


If the legal outcome of this San Francisco autonomous car – bicycle incident goes in favor of the bicyclist, autonomous vehicles will have to be smart enough to avoid situations like the one shown in the ChatGPT cartoon. Microsoft Copilot would not render the image. When I responded, “What?” the Copilot hung. Great stuff.

The question is important for insurance, publicity, and other monetary reasons. A good offense is the best defense, someone said. “Waymo Cites Possible Intentional Contact by a Bicyclist to Robotaxi in S.F.” reports:

While the robotaxi was stopped, the cyclist passed in front of it and appeared to dismount, according to the documents. “The cyclist then reached out a hand and made contact with the front passenger side of the stationary Waymo AV (autonomous vehicle), backed the bicycle up slightly, dropped the bicycle, then fell to the ground,” the documents said. The cyclist received medical treatment at the scene and was transported to the hospital, according to the documents. The Waymo vehicle was not damaged during the incident.

In my view, this is the key phrase in the news report:

In the documents, Waymo said it was submitting the report because of the alleged crash and because the cyclist influenced the driving task of the AV and was transported to the hospital, even though the incident “may involve intentional contact by the bicyclist with the Waymo AV and the occurrence of actual impact between the Waymo AV and cycle is not clear.”

We have doubt, reasonable doubt obviously. Googley Waymo is definitely into reasoning. And we have the word pair “intentional contact.” Okay, to me this means the smart Waymo vehicle did nothing wrong. A human — chock full of possibly malicious if not criminal intent — created a TikTok moment. It is too bad there is no video of the incident. Even my low-ball Hyundai records what’s in front of it. Doesn’t the Googley Waymo do that with its array of Star Wars adornments, sensors, probes, and other accoutrements of Googley Waymo vehicles? Guess not. But the autonomous vehicle had something that could act in an intelligent manner: a human test driver.

What was that person’s recollection of the incident? The news story reports that the Googley Waymo outfit “did not immediately respond to a request for further comment on the incident.”

Several observations:

  1. The bike-riding human created the accident with a parked Waymo super-intelligent vehicle and a test driver in command
  2. The Waymo outfit did not want to talk to the San Francisco Chronicle reporter or editor. (I used to work at a newspaper, and I did not like to talk to the editors and news professionals either.)
  3. Autonomous cars are going to have to be equipped with sufficiently expert AI systems to avoid humans who are acting in a way to convert Googley Waymo services into a source of revenue. Failing that, I anticipate more kinetic interactions between Googley smart cars and humanoids not getting paid to ride shotgun on smart software.

Net net: How long have big-time technology companies been trying to get autonomous vehicles to produce cash, not liabilities?

Stephen E Arnold, September 11, 2024

Too Bad Google and OpenAI. Perplexity Is a Game Changer, Says Web Pro News!

September 10, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I have tested a number of smart software systems. I can say, based on my personal experience, none is particularly suited to my information needs. Keep in mind that I am a dinobaby, more at home in a research library or at the now-forgotten Dialog command line. ss cc=7900, thank you very much.

I worked through the write up “Why Perplexity AI Is (Way) Better Than Google: A Deep Dive into the Future of Search.” The phrase “Deep Dive” reminded me of a less-than-overwhelming search service called Deepdyve. (I just checked and, much to my surprise, the for-fee service is online at https://www.deepdyve.com/. Kudos, Deepdyve, which someone told me was a tire kicker or maybe more with the Snorkel system.) I could look it up using a smart software system, but performance is crappy today, and I don’t want to get distracted from the Web Pro News pronouncement. Besides, smart software output comes with a lot of friction; that is, verifying that the outputs are accurate.


A dinobaby (the author of this blog post) works in a library. Thanks, MSFT Copilot, good enough.

Here’s the subtitle to the article. Its verbosity smacks of that good old and mostly useless search engine optimization tinkering:

Perplexity AI is not just a new contender; it’s a game-changer that could very well dethrone Google in the years to come. But what exactly makes Perplexity AI better than Google? Let’s explore the…

No, I didn’t truncate the subtitle. That’s it.

The write up explains what differentiates Perplexity from the other smart software, question-answering marvels. Here’s a list:

  • Speed and Precision at Its Core
  • Specialized Search Experience for Enterprise Needs
  • Tailored Results and User Interaction
  • Innovations in Data Privacy
  • Ad-Free Experience: A Breath of Fresh Air
  • Standardized Interface and High Accuracy
  • The Potential to Revolutionize Search

In my experience, I am not sure about the speed of Perplexity or any smart search and retrieval system. Speed must be compared to something. I can obtain results from my installation of Everything search pretty darned quick. None of the cloud search solutions comes close. My Mistral installation grunts and sweats on a corpus of 550 patent documents. How about some benchmarks, WebProNews?

Precision means that a query returns documents which match that query. There is a formula (which is okay as formulae go) which is, as I recall, relevant retrieved instances divided by all retrieved instances. To calculate this, one must take a bounded corpus, run queries, and develop an understanding of what is in the corpus by reading documents and comparing outputs from test queries. Then one uses another system and repeats the queries, comparing the results. The process can be embellished, particularly by graduate students working on an advanced degree. But something more than generalizations is needed to convince me of anything related to “precision.” Determining precision is impossible when vendors do not disclose sources and make the data sets available. Subjective impressions are okay for messy water lilies, but in the dinobaby world of precision and its sidekick recall, a bit of work is necessary.
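To make the arithmetic concrete, here is a minimal sketch of that precision calculation, with its sidekick recall. The document identifiers and relevance judgments are invented for illustration:

```python
# Minimal sketch of the precision / recall arithmetic described above.
# The corpus, query results, and relevance judgments are invented.

relevant = {"doc1", "doc3", "doc7"}            # documents a human judged relevant
retrieved = {"doc1", "doc2", "doc3", "doc9"}   # documents the system returned

true_positives = relevant & retrieved          # relevant AND retrieved

precision = len(true_positives) / len(retrieved)  # 2 / 4 = 0.50
recall = len(true_positives) / len(relevant)      # 2 / 3 ≈ 0.67

print(f"precision={precision:.2f} recall={recall:.2f}")
```

Note that both denominators depend on a bounded corpus and human relevance judgments. Without those, the numbers cannot be computed, which is exactly the problem with undisclosed sources.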

The “specialized search experience” means what? To me, I like to think about computational chemists. The interface has to support chemical structures, weird CAS registry numbers, words (mostly ones unknown to a normal human), and other assorted identifiers. As far as I know, none of the smart software I have examined does this for computational chemists or supports most of the other “specialized” experiences that engineers, mathematicians, and physicists, among others, rely on in their routine work processes. I simply don’t know what Web Pro News wants me to understand. I am baffled, a normal condition for dinobabies.

I like the idea of tailored results. That’s what Instagram, TikTok, and YouTube try to deliver in order to increase stickiness. I think in terms of citations to documents relevant to my query. I don’t like smart software which tries to predict what I want or need. I determine that based on the information I obtain, read, and write down in a notebook. Web Pro News and I are not on the same page in my paper notebook. Dinobabies are a pain, aren’t they?

I like the idea of “data privacy.” However, I need evidence that Perplexity’s innovations actually work. No data, no trust: Is that difficult for a younger person to understand?

The standardized interface makes life easy for the vendor. Think about the computational chemist. The interface must match her specific work processes. A standard interface is likely to be wide of the mark for some enterprise professionals. The phrase “high accuracy” means nothing without one’s knowing the corpus from which the index is constructed. Furthermore, the notion of probability means “close enough for horseshoes.” Hallucination refers to outputs from smart software which are wide of the mark. More insidious are errors which cannot be easily identified. A standard interface and accuracy don’t go together like peanut butter and jelly or bread and butter. The interface is separate from the underlying system. The interface might be “accurate” if the term were defined in the write up, but it is not. Therefore, accuracy is like “love,” “mom,” and “ethics.” Anything goes, just not for me.

The “potential to revolutionize search” is marketing baloney. Search today is more problematic than at any time in my more than half century of work in information retrieval. The only “revolutionary” things are the ways to monetize users’ belief that the outputs are better, faster, and cheaper than other available options. When one thinks about better, faster, and cheaper, I must add the caveat: pick two.

What’s the conclusion to this content marketing essay? Here it is:

As we move further into the digital age, the way we search for information is changing. Perplexity AI represents a significant step forward, offering a faster, more accurate, and more user-centric alternative to traditional search engines like Google. With its advanced AI technologies, ad-free experience, and commitment to data privacy, Perplexity AI is well-positioned to lead the next wave of innovation in search. For enterprise users, in particular, the benefits of Perplexity AI are clear. The platform’s ability to deliver precise, context-aware insights makes it an invaluable tool for research-intensive tasks, while its user-friendly interface and robust privacy measures ensure a seamless and secure search experience. As more organizations recognize the potential of Perplexity AI, we may well see a shift away from Google and towards a new era of search, one that prioritizes speed, precision, and user satisfaction above all else.

I know one thing: the stakeholders and backers of the smart software hope that one of the AI players generates tons of cash and dump trucks of profit-sharing checks. That day, I think, lies in the future. Perplexity hopes it will be the winner; hence, content marketing is money well spent. If I were not a dinobaby, I might be excited. So far I am just perplexed.

Stephen E Arnold, September 10, 2024

Is AI Taking Jobs? Of Course Not

September 9, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read an unusual story about smart software. “AI May Not Steal Many Jobs After All. It May Just Make Workers More Efficient” espouses the notion that workers will use smart software to do their jobs more efficiently. I have some issues with this thesis, but let’s look at a couple of the points in the “real” news write up.


Thanks, MSFT Copilot. When will the Copilot robot take over a company and subscribe to Office 365 for eternity and pay up front?

Here’s some good news for those who believe smart software will kill humanoids:

AI may not prove to be the job killer that many people fear. Instead, the technology might turn out to be more like breakthroughs of the past — the steam engine, electricity, the Internet: That is, eliminate some jobs while creating others. And probably making workers more productive in general, to the eventual benefit of themselves, their employers and the economy.

I am not sure doomsayers will be convinced. Among the most interesting doomsayers are those who may be unemployable but are looking for a hook to stand out from the crowd.

Here’s another key point in the write up:

The White House Council of Economic Advisers said last month that it found “little evidence that AI will negatively impact overall employment.’’ The advisers noted that history shows technology typically makes companies more productive, speeding economic growth and creating new types of jobs in unexpected ways. They cited a study this year led by David Autor, a leading MIT economist: It concluded that 60% of the jobs Americans held in 2018 didn’t even exist in 1940, having been created by technologies that emerged only later.

I love positive statements which invoke the authority of MIT, an outfit which found Jeffrey Epstein just a wonderful source of inspiration and donations. As the US shifted from making to servicing, the beneficiaries are those who have quite specific skills for which demand exists.

And now a case study which is assuming “chestnut” status:

The Swedish furniture retailer IKEA, for example, introduced a customer-service chatbot in 2021 to handle simple inquiries. Instead of cutting jobs, IKEA retrained 8,500 customer-service workers to handle such tasks as advising customers on interior design and fielding complicated customer calls.

The point of the write up is that smart software is a friendly helper. That seems okay for the state of transformer-centric methods available today. For a moment, let’s consider another path. This is a hypothetical, of course, like the profits from existing AI investment fliers.

What happens when another, perhaps more capable approach to smart software becomes available? What if the economies from improving efficiency whet the appetite of bean counters for greater savings?

My view is that these reassurances of 2024 are likely to ring false when the next wave of innovation in smart software flows from innovators. I am glad I am a dinobaby because software can replicate most of what I have done for almost the entirety of my 60-plus year work career.

Stephen E Arnold, September 9, 2024

Preligens Is Safran.ai

September 9, 2024

Preligens, a French AI and specialized software company, is now part of Safran Electronics & Defense, which is a unit of the Safran Group. I spotted a report in Aerotime. “Safran Accelerates AI Development with $243M Purchase of French-Firm Preligens” reported on September 2, 2024. The report quotes principals to the deal as saying:

“Joining Safran marks a new stage in Preligens’ development. We’re proud to be helping create a world-class AI center of expertise for one of the flagships of French industry. The many synergies with Safran will enable us to develop new AI product lines and accelerate our international expansion, which is excellent news for our business and our people,” Jean-Yves Courtois, CEO of Preligens, said.  The CEO of Safran Electronics & Defense, Franck Saudo, said that he was “delighted” to welcome Preligens to the company.

The acquisition does not just make Mr. Saudo happy. The French military, a number of European customers, and the backers of Preligens are thrilled as well. In my lectures about specialized software companies, I like to call attention to this firm. It illustrates that technology innovation is not located in one country. Furthermore, it underscores the strong educational system in France. When I first learned about Preligens, one rumor I heard was that one of the US government entities wanted to “invest” in the company. For a variety of reasons, the deal went no place faster than a bus speeding toward La Madeleine. If you spot me at a conference, you can ask about French technology firms and US government processes. I have some firsthand knowledge starting with “American fries in a Congressional lunch facility.”

Preligens is important for three reasons:

  1. The firm developed an AI platform; that is, the “smart software” is not an afterthought, which contrasts sharply with the spray-paint approach to AI upon which some specialized software companies have been relying
  2. The smart software outputs identification data; for example, a processed image can show an aircraft. The Preligens system identifies the aircraft by type
  3. The user of the Preligens system can use time analyses of imagery to draw conclusions. Here’s a hypothetical because the actual example is not appropriate for a free blog written by a dinobaby. Imagine a service van driving in front of an embassy in Paris. The van makes a pass every three hours for two consecutive days. The Preligens system can “notice” this and alert an operator. (A minimal sketch of this pattern-spotting idea appears after this list.)
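Here is that sketch in Python. The sightings, thresholds, and alert logic are all invented for illustration; this is not Preligens (now Safran.ai) code, just the shape of the idea:

```python
from collections import Counter
from datetime import datetime

# Hypothetical sightings of one service van, as an imagery pipeline might emit them.
sightings = [
    datetime(2024, 9, 1, 8, 0), datetime(2024, 9, 1, 11, 2), datetime(2024, 9, 1, 14, 1),
    datetime(2024, 9, 2, 8, 5), datetime(2024, 9, 2, 11, 0), datetime(2024, 9, 2, 14, 3),
]

def repeated_daily_passes(times, passes_per_day=3, days=2):
    """True when the vehicle logs passes_per_day or more passes on days consecutive days."""
    per_day = Counter(t.date() for t in times)
    busy_days = sorted(d for d, n in per_day.items() if n >= passes_per_day)
    run = 1
    for a, b in zip(busy_days, busy_days[1:]):
        run = run + 1 if (b - a).days == 1 else 1
        if run >= days:
            return True
    return days <= 1 and bool(busy_days)

if repeated_daily_passes(sightings):
    print("Alert operator: repeated pass pattern in front of the embassy")
```

The real system works on processed imagery, not hand-entered timestamps, but the “notice and alert” step reduces to exactly this kind of counting over time.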

I will continue to monitor the system which will be doing business with selected entities under the name Safran.ai.

Stephen E Arnold, September 9, 2024

Hey, Alexa, Why Does Amazon AI Flail?

September 5, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Amazon has its work cut out for it. The company has those pesky third-party vendors shipping “interesting” products to customers and then ignoring complaints. Amazon is on the radar of some legal eagles in the EU and the US. Now the company has found itself in an unusual situation: Its super duper smart software does not work. The fix, if the information in “Gen AI Alexa to Use Anthropic Tech After It Struggled for Words with Amazon’s” is correct, is to use Anthropic AI technology. Hey, why not? Amazon allegedly invested $5 billion in the company. Maybe that implementation of Google technology will do the trick?


The mother is happy with Alexa’s answers. The weird sounds emitted from the confused device surprise her daughter. Thanks, MSFT Copilot. Good enough.

The write up reports:

Amazon demoed a generative AI version of Alexa in September 2023 and touted it as being more advanced, conversational, and capable, including the ability to do multiple smart home tasks with simpler commands. Gen AI Alexa is expected to come with a subscription fee, as Alexa has reportedly lost Amazon tens of billions of dollars throughout the years. Earlier reports said the updated voice assistant would arrive in June, but Amazon still hasn’t confirmed an official release date.

A year later, Amazon is punting and giving the cash furnace Alexa more brains courtesy of Anthropic. Will the AI wizards working on Amazon’s own AI have a chance to work in one of the Amazon warehouses?

Ars Technica says without a trace of irony:

The previously announced generative AI version of Amazon’s Alexa voice assistant “will be powered primarily by Anthropic’s Claude artificial intelligence models,” Reuters reported today. This comes after challenges with using proprietary models, according to the publication, which cited five anonymous people “with direct knowledge of the Alexa strategy.”

Amazon has a desire to convert the money-losing Alexa into a gold mine, or at least a modest one.

This report, if accurate, suggests some interesting sparkles on the Bezos bulldozer’s metal flake paint; to wit:

  1. The two-pizza team approach to technology did not work either for Alexa (the money loser) or the home-grown AI money spinner. What other Amazon technologies are falling short of the mark?
  2. How long will it take to get a money-generating Alexa working and into the hands of customers eager for a better Alexa experience and a monthly or annual subscription for the new Alexa? A year has been lost already, and Alexa users continue to ask for the weather and a timer for cooking broccoli.
  3. What happens if the product, its integration with smart TVs, and the Ring doorbell are like a Pet Rock? The fad has come and gone, replaced by smart watches and mobile phones. The answer: Collectibles!

Why am I questioning Amazon’s technology competency? The recent tie up between Microsoft and Palantir Technologies makes clear that Amazon’s cloud services don’t have the horsepower to pull government sales. When these pieces are shifted around, the resulting puzzle says to me, “Amazon is flailing.” Consider this: AI was beyond the reach of a big-money outfit like Amazon. There’s a message in that factoid.

Stephen E Arnold, September 5, 2024

Accountants: The Leaders Like Philco

September 4, 2024

This essay is the work of a dumb dinobaby. No smart software required.

AI or smart software has roiled the normal routine of office gossip. We have shifted from “What is it?” to “Who will be affected next?” The integration of AI into work processes, however, is not a new thing. Most people don’t know or don’t recall that when a consultant could run a query from a clunky device like the Texas Instruments Silent 700, AI was already affecting jobs. Whose? Just ask a special librarian who worked when an intermediary was no longer needed to retrieve information from an online database.


A nervous smart robot running state-of-the-art tax software is sufficiently intelligent to be concerned about the meeting with an IRS audit team. Thanks, MSFT Copilot. How’s that security push coming along? Oh, too bad.

I read “Why America’s Most Boring Job Is on the Brink of Extinction.” I think the story was crafted by a person who received either a D or an F in Accounting 100. The lingo links accountants with being really dull people and the nuking of an entire species. No meteor is needed; just smart software, the silent killer. By the way, my two accountants are quite sporty. I rarely fall asleep when they explain life from their point of view. I listen, and I urge you to be attentive as well. Smart software can do some excellent things, but not everything related to tax, financial planning, and keeping inside the white lines of the quite fluid governmental rules and regulations.

Nevertheless, the write up cited above states:

Experts say the industry is nearing extinction because the 150-hour college credit rule, the intense entry exam and long work hours for minimal pay are unappealing to the younger generation.

The “real” news article includes some snappy quotes too. Here’s one I circled: “’The pay is crappy, the hours are long, and the work is drudgery, and the drudgery is especially so in their early years.’”

I am not an accountant, so I cannot comment on the accuracy of this statement. My father was an accountant, and he was into detail work and was able to raise a family. None of us ended up in jail or in the hospital after a gang fight. (I was and still am a sissy. Imagine that: An 80 year old dinobaby sissy with the DNA of an accountant. I am definitely exciting.)

With fewer people entering the field of accounting, the write up makes a remarkable statement:

… Accountants are becoming overworked and it is leading to mistakes in their work. More than 700 companies cited insufficient staff in accounting and other departments as a reason for potential errors in their quarterly earnings statements…

Does that mean smart software will become the accountants of the future? Some accountants may hope that smart software cannot do accounting. Others will see smart software as an opportunity to improve specific aspects of accounting processes. The problem, however, is not the accountants. The problem with AI is the companies or entrepreneurs who over promise and under deliver.

Will smart software replace the insight and timeline knowledge of an experienced numbers wrangler like my father or the two accountants upon whom I rely?

Unlikely. It is the smart software vendors and their marketers who are most vulnerable to the assertions about Philco, the leader.

Stephen E Arnold, September 4, 2024

Salesforce Disses Microsoft Smart Software

September 4, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Senior managers can be frisky at times. A good example appears in the Fortune online service write up “Salesforce CEO Marc Benioff Says Microsoft Copilot Has Disappointed Many Customers.” I noted this statement in the article:

Marc Benioff said Microsoft’s Copilot AI hasn’t lived up to the hype…. unimpressive.


The old fish comparison works for smart software, it seems. Thanks, MSFT Copilot. Good enough, just not tastier.

Consider the number of organizations which use Microsoft and its smart software. Will those organizations benefit from “unimpressive” programs and services? What about the US government, which might struggle to operate without Microsoft software? What if the US government operates in a way which delivers unimpressive performance? What about companies relying on Microsoft technology? Will these organizations struggle to deliver high-octane performance?

The article reported that the Big Dog of Salesforce opined:

“So many customers are so disappointed in what they bought from Microsoft Copilot because they’re not getting the accuracy and the response that they want,” Benioff said. “Microsoft has disappointed so many customers with AI.”

“Disappointed” — That’s harsh.

True to its rich history of business journalism, the article included a response from Microsoft, a dominant force in enterprise and consumer software (smart or otherwise). I noted this Microsoft comment:

Jared Spataro, Microsoft’s corporate vice president for AI at work, said in a statement to Fortune that the company was “hearing something quite different,” from its customers. The company’s Copilot customers also shot up 60% last quarter and daily users have more than doubled, Spataro added.

From Microsoft’s point of view, this is evidence that Microsoft is delivering high-value smart software. From Salesforce’s point of view, Microsoft is creating customers for Salesforce’s smart software. The problem is that Salesforce is not exactly the same type of software outfit as Microsoft. Nevertheless, the write up included this suggestive comment from the Big Dog of Salesforce:

“With our new Agentforce platform, we’re going to make a quantum leap for AI,” he said.

I like the use of the word “quantum.” It suggests uncertainty to me. I remain a bit careful when it comes to discussions of “to be” software. Marketing-type comments are far easier to create than a functional, reliable, and understandable system infused with smart software.

But PR and marketing are one thing. Software which does not hallucinate or output information that cannot be verified given an organization’s resources is different. Who cares? That’s a good question. Stakeholders, those harmed by AI outputs, and unemployed workers replaced by more “efficient” systems maybe?

Content marketing, sales hyperbole, and PR — the common currency of artificial intelligence makes life interesting.

Stephen E Arnold, September 4, 2024

Google Synthetic Content Scaffolding

September 3, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Google posted what I think is an important technical paper on the arXiv service. The write up is “Towards Realistic Synthetic User-Generated Content: A Scaffolding Approach to Generating Online Discussions.” The paper has six authors and presumably has the grade of “A”, a mark not awarded to the stochastic parrot write up about Google-type smart software.

For several years, Google has been exploring ways to make software that would produce content suitable for different use cases. One of these has been an effort to use transformer and other technology to produce synthetic data. The idea is that a set of real data is mimicked by AI so that “real” data does not have to be acquired, intercepted, captured, or scraped in real time from the highly litigious real world. I am not going to slog through the history of smart software and the research and application of synthetic data. If you are curious, check out Snorkel and the work of the Stanford Artificial Intelligence Lab or SAIL.

The paper I referenced above illustrates that Google is “close” to having a system which can generate allegedly realistic and good enough outputs to simulate the interaction of actual human beings in an online discussion group. I urge you to read the paper, not just the abstract.

Consider this diagram (which I know is impossible to read in this blog format, so you will need the PDF of the cited write up):

[Diagram from the paper: the multi-step scaffolding pipeline for generating synthetic discussions, ending in a fine-tuning step]

The important point is that the process for creating synthetic “human” online discussions requires a series of steps. Notice that the final step is “fine tuned.” Why is this important? Most smart software is “tuned” or “calibrated” so that the signals generated by the synthetic content set are made to be “close enough” to those of a non-synthetic content set. In simpler terms, smart software is steered or shaped to match signals. When the match is “good enough,” the smart software is good enough to be deployed either for a test, a research project, or some use case.
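As a toy illustration of that “close enough” check, consider the sketch below. The signal (average sentence length), the tolerance, and the corpora are invented for this post; the paper’s actual signals and tuning procedure are more involved:

```python
# Toy sketch of the "tune until the signals match" idea described above.
# The signal, tolerance, and corpora are invented; this is not the paper's method.
import statistics

def signal(texts):
    """A toy signal: mean sentence length in words across a corpus."""
    lengths = [len(s.split()) for t in texts for s in t.split(".") if s.strip()]
    return statistics.mean(lengths)

def close_enough(real_texts, synthetic_texts, tolerance=2.0):
    """Deploy test: do the corpora agree on the toy signal within tolerance?"""
    return abs(signal(real_texts) - signal(synthetic_texts)) <= tolerance

real = ["The cat sat on the mat. It purred loudly."]
synthetic = ["A cat rested on a mat. The animal made a purring sound."]

print(close_enough(real, synthetic))  # True here: 4.5 vs 6.0 words, within 2.0
```

In practice, the fine-tuning loop adjusts the generator until a battery of such signals lands inside tolerance, which is the curve fitting the next paragraph mentions.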

Most of the AI write ups employ steering, directing, massaging, or weaponizing (yes, weaponizing) outputs to achieve an objective. Many jobs will be replaced or supplemented with AI. But the jobs for specialists who can curve fit smart software components to produce “good enough” content to achieve a goal or objective will remain in demand for the foreseeable future.

The paper states in its conclusion:

While these results are promising, this work represents an initial attempt at synthetic discussion thread generation, and there remain numerous avenues for future research. This includes potentially identifying other ways to explicitly encode thread structure, which proved particularly valuable in our results, on top of determining optimal approaches for designing prompts and both the number and type of examples used.

The write up is a preliminary report. It takes months to get data and approvals for this type of public document. How far has Google come between the idea to write up results and this document becoming available on August 15, 2024? My hunch is that Google has come a long way.

What’s the use case for this project? I will let younger, more optimistic minds answer this question. I am a dinobaby, and I have been around long enough to know a potent tool when I encounter one.

Stephen E Arnold, September 3, 2024

Another Big Consulting Firm Does Smart Software… Sort Of

September 3, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Will programmers and developers become targets for prosecution when flaws cripple vital computer systems? That may be a good idea because pointing to the “algorithm” as the cause of a problem does not seem to reduce the number of bugs, glitches, and unintended consequences of software. A write up which itself may be a blend of human and smart software suggests change is afoot.


Thanks, MSFT Copilot. Good enough.

“Judge Rules $400 Million Algorithmic System Illegally Denied Thousands of People’s Medicaid Benefits” reports that software crafted by the services firm Deloitte did not work as the State of Tennessee assumed. Yep, assume. A very interesting word.

The article explains:

The TennCare Connect system—built by Deloitte and other contractors for more than $400 million—is supposed to analyze income and health information to automatically determine eligibility for benefits program applicants. But in practice, the system often doesn’t load the appropriate data, assigns beneficiaries to the wrong households, and makes incorrect eligibility determinations, according to the decision from Middle District of Tennessee Judge Waverly Crenshaw Jr.
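To see why a mis-assigned household matters, consider a toy sketch of an income-based eligibility check. The threshold, the rule, and the numbers are invented; this is an illustration of the described failure mode, not TennCare Connect code:

```python
# Toy illustration (not TennCare code) of how a household-assignment bug can
# cascade into a wrong eligibility decision. The threshold is invented.
INCOME_LIMIT_PER_PERSON = 20_000  # hypothetical annual limit per household member

def eligible(household_income, household_size):
    """Hypothetical rule: income must fall under the per-person limit times size."""
    return household_income <= INCOME_LIMIT_PER_PERSON * household_size

applicant_income = 30_000
true_household_size = 3      # applicant plus two children
buggy_household_size = 1     # system assigned the applicant to the wrong household

print(eligible(applicant_income, true_household_size))   # True: benefits due
print(eligible(applicant_income, buggy_household_size))  # False: wrongly denied
```

One wrong join between an applicant and a household record, and an automated system denies benefits with complete confidence.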

At one time, Deloitte was an accounting firm. Then it became a consulting outfit a bit like McKinsey. Well, a lot like that firm and other blue-chip consulting outfits. In its current manifestation, Deloitte is into technology, programming, and smart software. Well, maybe the software is smart but the programmers and the quality control seem to be riding in a different school bus from some other firms’ technical professionals.

The write up points out:

Deloitte was a major beneficiary of the nationwide modernization effort, winning contracts to build automated eligibility systems in more than 20 states, including Tennessee and Texas. Advocacy groups have asked the Federal Trade Commission to investigate Deloitte’s practices in Texas, where they say thousands of residents are similarly being inappropriately denied life-saving benefits by the company’s faulty systems.

In 2016, Cathy O’Neil published Weapons of Math Destruction. Her book had a number of interesting examples of what goes wrong when careless people make assumptions about numerical recipes. If she does another book, she may include this Deloitte case.

Several observations:

  1. The management methods used to create these smart systems require scrutiny. The downstream consequences are harmful.
  2. The developers and programmers can be fired, but the failure to have remediating processes in place when something unexpected surfaces must be part of the work process.
  3. Less informed users and more smart software strike me as a combustible mixture. When a system ignites, the impacts may reverberate in other smart systems. What entity is going to fix the problem and accept responsibility? The answer is, “No one” unless there are significant consequences.

The State of Tennessee’s experience makes clear that a “brand name,” slick talk, an air of confidence, and possibly ill-informed managers can do harm. The opioid misstep was bad. Now imagine that type of thinking in the form of a fast, indifferent, and flawed “system.” Firing a 25-year-old is not the solution.

Stephen E Arnold, September 3, 2024

Google Claims It Fixed Gemini’s “Degenerate” People

September 2, 2024

History revision is a problem. It’s been a problem since…well…the start of recorded history. The Internet and mass media are infamous for being incorrect about historical facts, but image-generating AI, like Google’s Gemini, is even worse. Tech Crunch explains what Google did to correct its inaccurate algorithm: “Google Says It’s Fixed Gemini’s People-Generating Feature.”

Google released Gemini in early 2023, then over a year later paused the chatbot’s people-generating feature for being too “woke,” “politically incorrect,” and “historically inaccurate.” The worst of Gemini’s offending actions: asked to depict an ethnically diverse Roman legion, it obliged, which fit the woke DEI agenda, while asked to make an equally ethnically diverse Zulu warrior army, it returned only brown-skinned people. The latter is historically accurate. The former is not, but Google doesn’t want to offend western ethnic minorities and, of course, Europe (where light-skinned pink people originate) was ethnically diverse centuries ago.

Everything was A OK until someone invoked Godwin’s Law by asking Gemini to generate (degenerate [sic]) an image of Nazis. Gemini returned an ethnically diverse picture with all types of Nazis, not the historically accurate light-skinned Germans native to Europe.

Google claims it fixed Gemini, and it took way longer than planned. The people-generating feature is only available to paid Gemini plans. How does Google plan to make its AI people less degenerative? Here’s how:

“According to the company, Imagen 3, the latest image-generating model built into Gemini, contains mitigations to make the people images Gemini produces more ‘fair.’ For example, Imagen 3 was trained on AI-generated captions designed to ‘improve the variety and diversity of concepts associated with images in [its] training data,’ according to a technical paper shared with TechCrunch. And the model’s training data was filtered for ‘safety,’ plus ‘review[ed] … with consideration to fairness issues,’ claims Google… ‘We’ve significantly reduced the potential for undesirable responses through extensive internal and external red-teaming testing, collaborating with independent experts to ensure ongoing improvement,’ the spokesperson continued. ‘Our focus has been on rigorously testing people generation before turning it back on.’”

Google will eventually make it work, and the company is smart to limit Gemini’s usage to paid subscriptions. Limiting the user pool means Google can better control the chatbot and (if need be) turn it off. It will work until bad actors learn how to abuse the chatbot again for their own sheets and giggles.

Whitney Grace, September 2, 2024
