Robots, Hard and Soft, Moving Slowly. Very Slooowly. Not to Worry, Humanoids
February 1, 2024
This essay is the work of a dumb dinobaby. No smart software required.
CNN that bastion of “real” journalism published a surprising story: “We May Not Lose Our Jobs to Robots So Quickly, MIT Study Finds.” Wait, isn’t MIT the outfit which had a tie up with the interesting Jeffrey Epstein? Oh, well.

The robots have learned that they can do humanoid jobs quickly and easily. But the robots are stupid, right? Yes, they are, but the managers looking for cost reductions and workforce reductions are not. Thanks, MSFT Copilot Bing thing. How the security of the MSFT email today?
The story presents as actual factual an MIT-linked study which seems to go against the general drift of smart software, smart machines, and smart investors. The story reports:
new research suggests that the economy isn’t ready for machines to put most humans out of work.
The fresh research finds that the impact of AI on the labor market will likely have a much slower adoption than some had previously feared as the AI revolution continues to dominate headlines. This carries hopeful implications for policymakers currently looking at ways to offset the worst of the labor market impacts linked to the recent rise of AI.
The story adds:
One key finding, for example, is that only about 23% of the wages paid to humans right now for jobs that could potentially be done by AI tools would be cost-effective for employers to replace with machines right now. While this could change over time, the overall findings suggest that job disruption from AI will likely unfurl at a gradual pace.
The intriguing facet of the report and the research itself is that it seems to suggest that the present approach to smart stuff is working just fine, thank you very much. Why speed up or slow down? The “unfurling” is a slow process. No need for these professionals to panic as major firms push forward with a range of hard and soft robots:
- Consulting firms. Has MIT checked out Deloitte’s posture toward smart software and soft robots?
- Law firms. Has MIT talked to any of the Top 20 law firms about their use of smart software?
- Academic researchers. Has MIT talked to any of the graduate students or undergraduates about their use of smart software or soft robots to generate bibliographies, summaries of possibly non-reproducible studies, or books mentioning their professor?
- Policeware vendors. Companies like Babel Street and Recorded Future are putting pedal to the metal with regard to smart software.
My hunch is that MIT is not paying attention to the happy robots at Tesla or the bad actors using software robots to poke through the cyber defenses of numerous outfits.
Does CNN ask questions? Not that I noticed. Plus, MIT appears to want good news PR. I would too if I were known to be pals with certain interesting individuals.
Stephen E Arnold, February 1, 2024
AI and SEO: If This Does Not Kill Relevance, Nothing Will
February 1, 2024
This essay is the work of a dumb dinobaby. No smart software required.
The integration of AI into search engines may help consumers better find what they are looking for and reduce or eliminate creepy intrusive ads. More importantly, for readers of Adweek anyway, that dynamic is an opportunity for advertisers. Now they can more finely target ads while charming potential customers with friendly algorithmic rapport. That is the gist of write-up, “3 Major Ways Generative AI Is Redefining Search.” Under the subheadings “Dialogue over monologue,” “Offers not ads,” and “Subjective data over objective data,” writer Christian J. Ward details how marketers can leverage the human-esque qualities of AI interactions to entice consumers. For example, under the first of these “pivotal shifts,” Ward writes:
“With conversational AI as the interface, consumers can share exactly what they want to share, and brands can focus on great responses instead of suboptimal guesses. … When a consumer freely offers details on what they seek and why, the brand can leverage that zero-party data to personalize their experience. Trust is built through dialogues, not infinite monologues algorithmically ranked in search engine results. Most importantly, these AI-driven dialogues open unprecedented opportunities for brands to engage each person individually.”
Yes, trust is built through dialogues. But is that still the case when one party is a fake person? Probably, for many consumers. Ward goes on to describe ways companies can capitalize on these “conversations:”
“Conversations like these build trust and enable the brand to customize an offer that meets the needs of that individual customer. This is the future of offer-based interactions, directly controlled by a dialogue with the customer. Moving from privacy-invasive ad models to trust-centric dialogue models will take time. But for objective questions—which often directly precede conversion and purchase decisions—brands will utilize gen AI aggressively to take back the consumer dialogue from centralized search systems that seek to monetize ad spend.”
Reduce one’s ad budget while using salary-free AI to build lucrative customer rapport? Sounds great. Unless one’s interest is in truly relevant search results, not marketing ploys. Welcome to the next iteration of SEO.
Cynthia Murrell, February 1, 2024
Techno Feudalist Governance: Not a Signal, a Rave Sound Track
January 31, 2024
This essay is the work of a dumb dinobaby. No smart software required.
One of the UK’s watchdog outfits published a 30-page report titled “One Click Away: A Study on the Prevalence of Non-Suicidal Self Injury, Suicide, and Eating Disorder Content Accessible by Search Engines.” I suggest that you download the report if you are interested in what the consequences of poor corporate governance are. I recommend reading the document while watching your young children or grand children playing with their mobile phones or tablet devices.
Let me summarize the document for you because its contents provide some color and context for the upcoming US government hearings with a handful of techno feudalist companies:
Web search engines and social media services are one-click gateways to self-harm and other content some parents and guardians might deem inappropriate.
Does this report convey information relevant to the upcoming testimony of selected large US technology companies in the Senate? I want to say, “Yes.” However, the realistic answer is, “No.”
Techmeme, an online information service, displayed its interest in the testimony with these headlines on January 31, 2024:
Screenshots are often difficult to read. The main story is from the weird orange newspaper whose content is presented under this Techmeme headline:
Ahead of the Senate Hearing, Mark Zuckerberg Calls for Requiring Apple and Google to Verify Ages via App Stores…
Ah, ha, is this a red herring intended to point the finger at outfits not on the hot seat in the true blue Senate hearing room?
The New York Times reports on a popular DC activity: A document reveal:
Ahead of the Senate Hearing, US Lawmakers Release 90 Pages of Internal Meta Emails…
And to remind everyone that an allegedly China linked social media service wants to do the right thing (of course!), Bloomberg’s angle is:
In Prepared Senate Testimony, TikTok CEO Shou Chew Says the Company Plans to Spend $2B+ in 2024 on Trust and Safety Globally…
Therefore, the Senate hearing on January 31, 2024 is moving forward.
What will be the major take-away from today’s event? I would suggest an opportunity for those testifying to say, “Senator, thank you for the question” and “I don’t have that information. I will provide that information when I return to my office.”
And the UK report? What? And the internal governance of certain decisions related to safety in the techno feudal firms? Secondary to generating revenue perhaps?
Stephen E Arnold, January 31, 2024
Journalism Is … Exciting, Maybe Even Thrilling
January 31, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Journalism is a field in an unusual industrial location. It is an important career because journalists are dedicated to sharing current and important information. Journalism, however, is a difficult field because news outlets are fading faster than the Internet’s current meme. Another alarming problem for journalists, especially those who work internationally, is the increasing risk of incarceration. The Committee To Protect Journalists (CPJ) reported that according to a “2023 Prison Census: Jailed Journalists Near Record High; Israel Imprisonments Spike.”
Due to the October 7 terrorist attack by Palestinian-led Hamas and the start of a new war, Israel ranked sixth on the list countries that imprison journalists. Israel ironically tied with Iran and is behind China, Myanmar, Belarus, Russia, and Vietnam. CPJ recorded that 320 journalists were incarcerated in 2023. It’s the second-highest number since CPJ started tracking in 1992. CPJ explained the high number of imprisonments is due to authoritarian regimes silencing the opposition. One hundred sixty-eight, more than half of the journalists, are charged with terrorism for critical coverage and spreading “false news.”
China is one of the worst offenders with Orwellian censorship laws, human rights violations, and a crackdown on pro-democracy protests and news. Myanmar’s coup in 2021 and Belarus’s controversial 2020 election incited massive upheavals and discontentment with citizens. Reporters from these countries are labeled as extremists when they are imprisoned.
Israel ties with Iran in 2023 due to locking up a high number of Palestinian journalists. They’re kept behind bars without cause on the grounds to prevent future crimes. Iran might have less imprisoned journalists than 2022 but the country is still repressing the media. Russia also keeps a high number of journalists jailed due to its war with Ukraine.
Jailed reporters face horrific conditions:
“Prison conditions are harsh in the nations with the worst track records of detaining journalists. Country reports released by the U.S. Department of State in early 2023 found that prisoners in China, Myanmar, Belarus, Russia, and Vietnam typically faced physical and sexual abuse, overcrowding, food and water shortages, and inadequate medical care.”
They still face problems even when they’ve served their sentence:
“Many journalists face curbs on their freedom even after they’ve served their time. This not only affects their livelihoods, but allows repressive governments to continue silencing their voices.”
These actions signify the importance of the US Constitution’s First Amendment. Despite countless attempts for politicians and bad actors to silence journalists abroad and on home soil, the First Amendment is still upheld. It’s so easy to take it for granted.
Whitney Grace, January 31, 2024
A Glimpse of Institutional AI: Patients Sue Over AI Denied Claims
January 31, 2024
This essay is the work of a dumb dinobaby. No smart software required.
AI algorithms are revolutionizing business practices, including whether insurance companies deny or accept medical coverage. Insurance companies are using more on AI algorithms to fast track paperwork. They are, however, over relying on AI to make decisions and it is making huge mistakes by denying coverage. Patients are fed up with their medical treatments being denied and CBS Moneywatch reports that a slew of “Lawsuits Take Aim At Use Of AI Tool By Health Insurance Companies To Process Claims.”
The defendants in the AI insurance lawsuits are Humana and United Healthcare. These companies are using the AI model nHPredict to process insurance claims. On December 12, 2023, a class action lawsuit was filed against Humana, claiming nHPredict denied medically necessary care for elderly and disabled patients under Medicare Advantage. A second lawsuit was filed in November 2023 against United Healthcare. United Healthcare also used nHPredict to process claims. The lawsuit claims the insurance company purposely used the AI knowing it was faulty and about 90% of its denials were overridden.
The AI model is supposed to work:
“NHPredicts is a computer program created by NaviHealth, a subsidiary of United Healthcare, that develops personalized care recommendations for ill or injured patients, based on “real world experience, data and analytics,’ according to its website, which notes that the tool “is not used to deny care or to make coverage determinations.’
But recent litigation is challenging that last claim, alleging that the “nH Predict AI Model determines Medicare Advantage patients’ coverage criteria in post-acute care settings with rigid and unrealistic predictions for recovery.” Both United Healthcare and Humana are being accused of instituting policies to ensure that coverage determinations are made based on output from nHPredicts’ algorithmic decision-making.”
Insurance companies deny coverage whenever they can. Now a patient can talk to an AI customer support system about an AI system’s denying a claim. Will the caller be faced with a voice answering call loop on steroids? Answer: Oh, yeah. We haven’t seen or experienced what’s coming down the cost-cutting information highway. The blip on the horizon is interesting, isn’t it?
Whitney Grace, January 31, 2024
Habba Logic? Is It Something One Can Catch?
January 30, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I don’t know much about lawyering. I have been exposed to some unusual legal performances. Most recently, Alina Habba delivered in impassioned soliloquy after a certain high-profile individual was told, “You have to pay a person whom you profess not to know $83 million.” Ms. Habba explained that the decision was a bit of a problem based on her understanding of New York State law. That’s okay. As a dinobaby, I am wrong on a pretty reliable basis. Once it is about 3 pm, I have difficulty locating my glasses, my note cards about items for this blog, and my bottle of Kroger grape-flavored water. (Did you know the world’s expert on grape flavor was a PhD named Abe Bakal. I worked with him in the 1970s. He influenced me, hence the Bakalized water.)
Habba logic explains many things in the world. If Socrates does not understand, that’s his problem, the young Agonistes Habba in the logic class. Thanks, MSFT Copilot. Good enough. But the eyes are weird.
I did find my notecard about a TechDirt article titled “Cable Giants Insist That Forcing Them to Make Cancellations Easier Violates Their First Amendment Rights.” I once learned that the First Amendment had something to do with free speech. To me, a dinobaby don’t forget, this means I can write a blog post, offer my personal opinions, and mention the event or item which moved me to action. Dinobabies are not known for their swiftness.
The write up explains that cable companies believe that making it difficult for a customer to cancel a subscription to TV, phone, Internet, and other services is a free speech issue. The write up reports:
But the cable and broadband industry, which has a long and proud tradition of whining about every last consumer protection requirement (no matter how basic), is kicking back at the requirement. At a hearing last week, former FCC boss-turned-top-cable-lobbying Mike Powell suggested such a rule wouldn’t be fair, because it might somehow (?) prevent cable companies from informing customers about better deals.
The idea is that the cable companies’ free of speech would be impaired. Okay.
What’s this got to do with the performance by Ms. Habba after her client was slapped with a big monetary award? Answer: Habba logic.
Normal logic says, “If a jury finds a person guilty, that’s what a jury is empowered to do.” I don’t know if describing it in more colorful terms alters what the jury does. But Habba logic is different, and I think it is diffusing from the august legal chambers to a government meeting. I am not certain how to react to Habba logic.
I do know, however, however, that cable companies are having a bit of struggle retaining their customers, amping up their brands, and becoming the equivalent of Winnie the Pooh sweatshirts for kids and adults. Cable companies do not want a customer to cancel and boost the estimable firms’ churn ratio. Cable companies do want to bill every month in order to maintain their cash intake. Cable companies do want to maintain a credit card type of relationship to make it just peachy to send mindless snail mail marketing messages about outstanding services, new set top boxes, and ever faster Internet speeds. (Ho ho ho. Sorry. I can’t help myself.)
Net net: Habba logic is identifiable, and I will be watching for more examples. Dinobabies like watching those who are young at heart behaving in a fascinating manner. Where’s my fake grape water? Oh, next to fake logic.
Stephen E Arnold, January 30, 2024
Google Gems: January 30, 2024
January 30, 2024
This essay is the work of a dumb dinobaby. No smart software required.
The dinobaby wants to share another collection of Google gems. These are high-value actions which provide insight into one of the world’s most successful online advertising companies. Let’s get rolling with the items which I thought were the biggest outputs of behavioral magma movements in the last week, give or take a day or two. For gems, whose keeping track?
The dinobaby is looking for Google gems. There are many. Thanks, MSFT Copilot Bing thing. Good enough, but I think I am more svelt than your depiction of me.
GOOGLE AND REAL INNOVATION
How do some smart people innovate. “Google Settles AI-Related Chip Patent Lawsuit That Sought US$1.67-Billion in Damages” states:
Singular, founded by Massachusetts-based computer scientist Joseph Bates, claimed that Google incorporated his technology into processing units that support AI features in Google Search, Gmail, Google Translate and other Google services. The 2019 lawsuit said that Bates shared his inventions with the company between 2010 and 2014. It argued that Google’s Tensor Processing Units copied Bates’ technology and infringed two patents.
Did Google accidentally borrow intellectual property? I don’t know. But when $1.67 is bandied about as a desired amount and the Google settles right before trial, one can ask, “Does Google do me-too invention?” Of course not. Google is too cutting edge. Plus the invention allegedly touches Google’s equally innovative artificial intelligence set up. But $1.67 billion? Interesting.
A TWO’FER
Two former Googlers have their heads in the clouds (real, not data center clouds). Well, one mostly former Googler and another who has returned to the lair to work on AI. Hey, those are letters which appear in the word lAIr. What a coincidence. Xoogler one is a founder of the estimable company. Xoogler two is a former “adult” at the innovative firm.
Sergey Brin’s, like Icarus, has taken flight. He didn’t. His big balloon has. The Travel reports in “The World’s Largest Airship Is Now A Reality As It Took Flight In California”:
Pathfinder 1, a prototype electric airship designed by LTA Research, is being unveiled to the public as dawn rises over Silicon Valley. The project’s backer, Google co-founder Sergey Brin, expects it will speed the airship’s humanitarian efforts and usher in a new age of eco-friendly air travel. The airship has magnified drone technology, incorporating fly-by-wire controls, electric motors, and lidar sensing, to a scale surpassing that of three Boeing 737s. This enlarged version has the potential to transport substantial cargo across extensive distances. Its distinctive snow-white steampunk appearance is easily discernible from the bustling 101 highway.
The article includes a reference to the newsreel meme The Hindenburg. Helpful? Not so much. Anyway the Brin-aloon is up.
The second item about a Xoogler also involves flight. Business Insider (an outfit in the news itself this week) published “Ex-Google CEO Eric Schmidt Quietly Created a Company Called White Stork, Which Plans to Build AI-Powered Attack Drones, Report Says.” Drones are a booming business. The write up states:
The former Google chief told Wired that occasionally, a new weapon comes to market that “changes things” and that AI could help revolutionize the Department of Defense’s equipment. He said in the Wired interview, “Einstein wrote a letter to Roosevelt in the 1930s saying that there is this new technology — nuclear weapons — that could change war, which it clearly did. I would argue that [AI-powered] autonomy and decentralized, distributed systems are that powerful.”
What if a smart White Stork goes after Pathfinder? Impossible. AI is involved.
WAY FINDING WITH THRILLS
The next major Google gem is about the map product I find almost impossible to use. But I am a dinobaby, and these nifty new products are not tuned to 80-year-old eyes and fingers. I can still type, however. “The Google Maps Effect: Authorities Looking for Ways to Prevent Cars From Going Down Steps” shares this allegedly actual factual functionality:
… beginning in December, several drivers attempted to go down the steps either in small passenger cars or lorries that wouldn’t even fit in the small space between the buildings. Drivers blamed Google Maps on every occasion, claiming they followed the turn-by-turn guidance offered by the application. Google Maps told them to make a turn and attempt to go down the steps, so they eventually got stuck for obvious reasons.
I did a job for the bright fellow who brought WordStar to market. Google Maps wanted me to drive off the highway and into the bay. I turned off the helpful navigation system. I may be old, but dinobabies are not completely stupid. Other drivers relying on good enough Google presumably are.
AI MARKETING HOO-HAH
The Google is tooting its trumpet. Here are some recent “innovations” designed to keep the pesky OpenAI, Mistal, and Zuckbookers at bay:
- Google can make videos using AI. “Google’s New AI Video Generator Looks Incredible” reports that the service is “incredible.” What else from the quantum supremacy crowd? Sure, and it produces cute animals.
- Those Chromebooks are not enough. Google is applying its AI to education. Read more about how an ad company will improve learning in “Google Announces New AI-Powered Features for Education.”
- More Googley AI is coming to ads. If you are into mental manipulation, you will revel in “YouTube Ads Are About to Get Way More Effective with AI-Powered Neuromarketing.” Hey, “way more” sounds like the super smart Waymo Google car thing, doesn’t it?
LITTLE CUBIC ZIRCONIAS
Let me highlight what I call little cubic zirconias of Google goodness. Here we go:
- The New York Post published “Google News Searches Ranked AI-Generated Rip-offs Above Real Articles — Including a Post Exclusive.” The main point is that Google’s estimable system and wizards cannot tell diamonds from the chemical twins produced by non-Googlers. With elections coming, let’s talk about trust in search results, shall we?
- Google’s wizards have created a new color for the Pixel phone. Read about the innovative green at this link.
- TechRadar reported that Google has a Kubernetes “flaw.” Who can exploit it? Allegedly anyone with a Google Gmail account. Details at this Web location.
Before I close this week’s edition of Gems, I want to mention two relatively minor items. Some people may think these molehills are much larger issues. What can I do?
Google has found that firing people is difficult. According to Business Insider, Googlers fired in South Korea won’t leave the company. Okay. Whatever.
Also, New York Magazine, a veritable treasure trove of technical information, reports that Google has ended the human Internet with the upgrade Chrome browser. News flash: The human Internet was killed by search engine optimization years ago.
Watch for more Google Gems next week. I think there will be sparkly items available.
Stephen E Arnold, January 30, 2024
Ho-Hum Write Up with Some Golden Nuggets
January 30, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I read “Anthropic Confirms It Suffered a Data Leak.” I know. I know. Another security breach involving an outfit working with the Bezos bulldozer and Googzilla. Snore. But in the write up, tucked away were a couple of statements I found interesting.
“Hey, pardner, I found an inconsistency.” Two tries for a prospector and a horse. Good enough, MSFT Copilot Bing thing. I won’t ask about your secure email.
Here these items are:
- Microsoft, Amazon and others are being asked by a US government agency “to provide agreements and rationale for collaborations and their implications; analysis of competitive impact; and information on any other government entities requesting information or performing investigations.” Regulatory scrutiny of the techno feudal champions?
- The write up asserts: “Anthropic has made a “long-term commitment” to provide AWS customers with “future generations” of its models through Amazon Bedrock, and will allow them early access to unique features for model customization and fine-tuning purposes.” Love at first sight?
- And a fascinating quote from a Googler. Note: I have put in bold some key words which I found interesting:
“Anthropic and Google Cloud share the same values when it comes to developing AI–it needs to be done in both a bold and responsible way,” Google Cloud CEO Thomas Kurian said in a statement on their relationship. “This expanded partnership with Anthropic, built on years of working together, will bring AI to more people safely and securely, and provides another example of how the most innovative and fastest growing AI startups are building on Google Cloud.”
Yeah, but the article is called “Anthropic Confirms It Suffered a Data Leak.” What’s with the securely?
Ah, regulatory scrutiny and obvious inconsistency. Ho-hum with a good enough tossed in for spice.
Stephen E Arnold, January 30, 2024
AI Coding: Better, Faster, Cheaper. Just Pick Two, Please
January 29, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Visual Studio Magazine is not on my must-read list. Nevertheless, one of my research team told me that I needed to read “New GitHub Copilot Research Finds “Downward Pressure on Code Quality.” I had no idea what “downward pressure” means. I read the article trying to figure out what the plain English meaning of this tortured phrase meant. Was it the downward pressure on the metatarsals when a person is running to a job interview? Was it the deadly downward pressure exerted on the OceanGate submersible? Was it the force illustrated in the YouTube “Hydraulic Press Channel”?
A partner at a venture firms wants his open source recipients to produce more code better, faster, and cheaper. (He does not explain that one must pick two.) Thanks MSFT Copilot Bing thing. Good enough. But the green? Wow.
Wrong.
The writeup is a content marketing piece for a research report. That’s okay. I think a human may have written most of the article. Despite the frippery in the article, I spotted several factoids. If these are indeed verifiable, excitement in the world of machine generated open source software will ensue. Why does this matter? Well, in the words of the SmartNews content engine, “Read on.”
Here are the items of interest to me:
- Bad code is being created and added to the GitHub repositories.
- Code is recycled, despite smart efforts to reduce the copy-paste approach to programming.
- AI is preparing a field in which lousy, flawed, and possible worse software will flourish.
Stephen E Arnold, January 29, 2024
Modern Poison: Models, Data, and Outputs. Worry? Nah.
January 29, 2024
This essay is the work of a dumb dinobaby. No smart software required.
One bad apple does not a failed harvest make. Let’s hope. I read “Poisoned AI Went Rogue During Training and Couldn’t Be Taught to Behave Again in Legitimately Scary Study.” In several of my lectures in 2023 I included a section about poisoned data. When I described the method and provided some examples of content injection, the audience was mostly indifferent. When I delivered a similar talk in October 2023, those in my audience were attentive. The concept of intentionally fooling around with model thresholds, data used for training, and exploiting large language model developers’ efforts to process more current or what some call “real time” data hit home. For each of these lectures, my audience was composed of investigators and intelligence analysts.
How many bad apples are in the spectrum of smart software? Give up. Don’t feel bad. No one knows. Perhaps it is better to ignore the poisoned data problem? There is money to be made and innovators to chase the gold rush. Thanks, MSFT Copilot Bing thing. How is your email security? Oh, good enough, like the illustration with lots of bugs.
Write ups like “Poisoned AI Went Rogue…” add a twist to my tales. Specifically a function chunk of smart software began acting in a manner not only surprising but potentially harmful. The write up in LiveScience asserted:
AI researchers found that widely used safety training techniques failed to remove malicious behavior from large language models — and one technique even backfired, teaching the AI to recognize its triggers and better hide its bad behavior from the researchers.
Interesting. The article noted:
Artificial intelligence (AI) systems that were trained to be secretly malicious resisted state-of-the-art safety methods designed to "purge" them of dishonesty … Researchers programmed various large language models (LLMs) — generative AI systems similar to ChatGPT — to behave maliciously. Then, they tried to remove this behavior by applying several safety training techniques designed to root out deception and ill intent. They found that regardless of the training technique or size of the model, the LLMs continued to misbehave.
Evan Hubinger, an artificial general intelligence safety research scientist at Anthropic, is quoted as saying:
"I think our results indicate that we don’t currently have a good defense against deception in AI systems — either via model poisoning or emergent deception — other than hoping it won’t happen… And since we have really no way of knowing how likely it is for it to happen, that means we have no reliable defense against it. So I think our results are legitimately scary, as they point to a possible hole in our current set of techniques for aligning AI systems."
If you want to read the research paper, you can find it at this link. Note that one of the authors is affiliated with the Amazon- and Google-supported Anthropic AI company.
Net net: We do not have at this time a “good defense” against this type of LLM poisoning. Do I have a clever observation, some words of reassurance, or any ideas for remediation?
Nope.
Stephen E Arnold, January 29, 2024

