Weaponizing AI Information for Rubes with Googley Fakes
December 8, 2023
This essay is the work of a dumb dinobaby. No smart software required.
From the “Hey, rube” department: “Google Admits That a Gemini AI Demo Video Was Staged” reports as actual factual:
There was no voice interaction, nor was the demo happening in real time.
Young Star Wars fans learn the truth behind the scenes which thrill them. Thanks, MSFT Copilot. One try and some work with the speech bubble and I was good to go.
And to what magical event does this mysterious statement refer? The Google Gemini announcement. Yep, 16 Hollywood style videos of “reality.” Engadget asserts:
Google is counting on its very own GPT-4 competitor, Gemini, so much that it staged parts of a recent demo video. In an opinion piece, Bloomberg says Google admits that for its video titled “Hands-on with Gemini: Interacting with multimodal AI,” not only was it edited to speed up the outputs (which was declared in the video description), but the implied voice interaction between the human user and the AI was actually non-existent.
The article makes what I think is a rather gentle statement:
This is far less impressive than the video wants to mislead us into thinking, and worse yet, the lack of disclaimer about the actual input method makes Gemini’s readiness rather questionable.
Hopefully sometime in the near future Googlers can make reality from Hollywood-type fantasies. After all, policeware vendors have been trying to deliver a Minority Report-type of investigative experience for a heck of a lot longer.
What’s the most interesting part of the Google AI achievement? I think it illuminates the thinking of those who live in an ethical galaxy far, far away… if true, of course. Of course. I wonder if the same “fake it til you make it” approach applies to other Google activities?
Stephen E Arnold, December 8, 2023
Google Smart Software Titbits: Post Gemini Edition
December 8, 2023
This essay is the work of a dumb dinobaby. No smart software required.
In the Apple-inspired rollout of Google Gemini, the excitement is palpable. Is your heart palpitating? Ah, no. Neither is mine. Nevertheless, in the aftershock of a blockbuster “me too” the knowledge shrapnel has peppered my dinobaby lair; to wit: Gemini, according to Wired, is a “new breed” of AI. The source? Google’s Demis Hassabis.
What happens when the marketing does not align with the user experience? Tell the hardware wizards to shift into high gear, of course. Then tell the marketing professionals to evolve the story. Thanks, MSFT Copilot. You know I think you enjoyed generating this image.
Navigate to “Google Confirms That Its Cofounder Sergey Brin Played a Key Role in Creating Its ChatGPT Rival.” That’s a clickable headline. The write up asserts: “Google hinted that its cofounder Sergey Brin played a key role in the tech giant’s AI push.”
Interesting. One person involved in both Google and OpenAI. And Google responding to OpenAI after one year? Management brilliance or another high school science club method? The right information at the right time is nine-tenths of any battle. Was Google not processing information? Was the information it received about OpenAI incorrect or weaponized? Now Gemini is a “new breed” of AI. The Verge reports that McDonald’s burger joints will use Google AI to “make sure your fries are fresh.”
Google has been busy in non-AI areas; for instance:
- The Register asserts that a US senator claims Google and Apple reveal push notification data to non-US nation states
- Google has ramped up its donations to universities, according to TechMeme
- Lost files you thought were in Google Drive? Never fear. Google has a software tool you can use to fix your problem. Well, that’s what Engadget says.
So an AI problem? What problem?
Stephen E Arnold, December 8, 2023
Safe AI or Money: Expert Concludes That Money Wins
December 8, 2023
This essay is the work of a dumb dinobaby. No smart software required.
I read “The Frantic Battle over OpenAI Shows That Money Triumphs in the End.” The author, an esteemed wizard in the world of finance and economics, reveals that money is important. Here’s a snippet from the essay which I found truly revolutionary, brilliant, insightful, and truly novel:
The academic wizard has concluded that a ball is indeed round. The world of geometry has been stunned. The ball is not just round. It exists as a sphere. The most shocking insight from the Ivory Tower is that the ball bounces. Thanks for the good enough image, MSFT Copilot.
But ever since OpenAI’s ChatGPT looked to be on its way to achieving the holy grail of tech – an at-scale consumer platform that would generate billions of dollars in profits – its non-profit safety mission has been endangered by big money. Now, big money is on the way to devouring safety.
Who knew?
The essay continues:
Which all goes to show that the real Frankenstein monster of AI is human greed. Private enterprise, motivated by the lure of ever-greater profits, cannot be relied on to police itself against the horrors that an unfettered AI will create. Last week’s frantic battle over OpenAI shows that not even a non-profit board with a capped profit structure for investors can match the power of big tech and Wall Street. Money triumphs in the end.
Oh, my goodness. Plato, Aristotle, and other mere pretenders to genius, you have been put to shame. My heart is palpitating from the revelation that “money triumphs in the end.”
Stephen E Arnold, December 8, 2023
Big Tech, Big Fakes, Bigger Money: What Will AI Kill?
December 7, 2023
This essay is the work of a dumb dinobaby. No smart software required.
I don’t read The Hollywood Reporter. I did one job for a Hollywood big wheel. That was enough for me. I don’t drink. I don’t take drugs unless prescribed by my comic book addicted medical doctor in rural Kentucky. I don’t dress up and wear skin bronzers in the hope that my mobile will buzz. I don’t stay out late. I don’t fancy doing things which make my ethical compass buzz more angrily than my mobile phone. Therefore, The Hollywood Reporter does not speak to me.
One of my research team sent me a link to “The Rise of AI-Powered Stars: Big Money and Risks.” I scanned the write up and then I went through it again. By golly, The Hollywood Reporter hit on an “AI will kill us” angle not getting as much publicity as Sam AI-Man’s minimal substance interview.
Can a techno feudalist generate new content using what looks like “stars” or “well known” people? Probably. A payoff has to be within sight. Otherwise, move on to the next next big thing. Thanks, MSFT Copilot. Good enough cartoon.
Please, read the original and complete article in The Hollywood Reporter. Here’s the passage which rang the insight bell for me:
tech firms are using the power of celebrities to introduce the underlying technology to the masses. “There’s a huge possible business there and I think that’s what YouTube and the music companies see, for better or for worse…”
Let’s think about these statements.
First, the idea of consumerizing AI for the masses is interesting. However, I interpret the insight as having several force vectors:
- Become the plumbing for the next wave of user-generated content (UGC)
- Get paid by users AND impose an advertising tax on the UGC
- Obtain real-time data about the efficacy of specific smart generation features so that resources can be directed to maintain a “moat” from would-be attackers.
Second, by signing deals with people who to me are essentially unknown, the techno giants are digging some trenches and putting somewhat crude asparagus obstacles where the competitors are likely to drive their AI machines. The benefits include:
- First-hand experience with how the stars’ ego systems respond
- Data regarding the cost of signing up a star, payouts, and selling ads against the content
- A sense of what pushback exists [a] among fans and [b] among the historical middlemen who have just been put on notice that they can find their future elsewhere.
Finally, the idea of the upside and the downside for particular entities and companies is interesting. There will be winners and losers. Right now, Hollywood is a loser. TikTok is a winner. The companies identified in The Hollywood Reporter want to be winners — big winners.
I may have to start paying more attention to this publication and its stories. Good stuff. What will AI kill? The cost of some human “talent”?
Stephen E Arnold, December 7, 2023
Will TikTok Go Slow in AI? Well, Sure
December 7, 2023
This essay is the work of a dumb dinobaby. No smart software required.
The AI efforts of non-governmental organizations, government agencies, and international groups are interesting. Many resolutions, proclamations, blog polemics, and the like have been saying, “Slow down AI. Smart software will put people out of work. Destroy humans’ ability to think. Unleash the ‘I’ll be back’ guy.”
Getting those enthusiastic about smart software is a management problem. Thanks, MSFT Copilot. Good enough.
My stance in the midst of this fearmongering has been bemusement. I know that predicting the future perturbations of technology is as difficult as picking a Kentucky Derby winner and not picking a horse that will drop dead during the race. When groups issue proclamations and guidelines without an enforcement mechanism, not much is going to happen in the restraint department.
I submit as partial evidence for my bemusement the article “TikTok Owner ByteDance Joins Generative AI Frenzy with Service for Chatbot Development, Memo Says.” What seems clear, if the write up is mostly on the money, is that a company linked to China is joining “the race to offer AI model development as a service.”
Two quick points:
- Model development allows the provider to get a sneak peek at what the user of the system is trying to do. This means that information flows from customer to provider.
- The company in the “race” is one of some concern to certain governments and their representatives.
The write up says:
ByteDance, the Chinese owner of TikTok, is working on an open platform that will allow users to create their own chatbots, as the company races to catch up in generative artificial intelligence (AI) amid fierce competition that kicked off with last year’s launch of ChatGPT. The “bot development platform” will be launched as a public beta by the end of the month…
The cited article points out:
China’s most valuable unicorn has been known for using some form of AI behind the scenes from day one. Its recommendation algorithms are considered the “secret sauce” behind TikTok’s success. Now it is jumping into an emerging market for offering large language models (LLMs) as a service.
What other countries are beavering away on smart software? Will these drive in the slow lane or the fast lane?
Stephen E Arnold, December 7, 2023
Gemini Twins: Which Is Good? Which Is Evil? Think Hard
December 6, 2023
This essay is the work of a dumb dinobaby. No smart software required.
I received a link to a Google DeepMind marketing demonstration Web page called “Welcome to Gemini.” To me, Gemini means Castor and Pollux. Somewhere along the line a wonky professor, maybe one named Chapman, told my class that these two represented Zeus and Hades. Stated another way, one was a sort of “good” deity with a penchant for non-godlike behavior. The other was downright awful most of the time. I assume that Google knows about Gemini, its mythological baggage, and the duality of one twin playing the Superman role of truth, justice, and the American way, and the other inspiring a range of bad actors. Imagine. Something that is good and bad. That’s smart software, I assume. The good part sells ads; the bad part fails at marketing perhaps?
Two smart Googlers in New York City learn the difference between book learning for a PhD and street learning for a degree from the Institute of Hard Knocks. Thanks, MSFT Copilot. (Are you monitoring Google’s effort to dominate smart software by announcing breakthroughs very few people understand? Are you finding Google’s losses at the AI shell game entertaining?)
Google’s blog post states with rhetorical aplomb:
Gemini is built from the ground up for multimodality — reasoning seamlessly across text, images, video, audio, and code.
Well, any other AI using Google’s previous technology is officially behind the curve. That’s clear to me. I wonder if Sam AI-Man, Microsoft, and the users of ChatGPT are tuned to the Google wavelength? There’s a video, or more accurately more than a dozen of them, but I don’t like video so I skipped them all. There are graphs with minimal data and some that appear to jiggle in “real” time. I skipped those too. There are tables. I did read some of the data and learned that Gemini can do basic arithmetic and “challenging” math like geometry. That is the 3, 4, 5 triangle stuff. I wonder how many people under the age of 18 know how to use a tape measure to determine if a corner is 90 degrees? (If you don’t, why not ask ChatGPT or MSFT Copilot?) I processed the fact that the twins come in three sizes. Do twins come in triples? Sigh. Anyway, one can use Gemini Ultra, Gemini Pro, and Gemini Nano. Okay, but I am hung up on the twins and the three sizes. Sorry. I am a dinobaby. There are more movies. I exited the site and navigated to YCombinator’s Hacker News. Didn’t Sam AI-Man have a brush with that outfit?
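For readers who have forgotten the tape measure trick mentioned above: mark three units along one wall and four along the other; if the diagonal between the marks measures five, the corner is square, because 3² + 4² = 5². A toy check of my own (not anything from Google’s demo):

```python
import math

def corner_is_square(leg_a: float, leg_b: float, diagonal: float) -> bool:
    """A corner is 90 degrees when the legs and the diagonal satisfy a^2 + b^2 = c^2."""
    return math.isclose(leg_a ** 2 + leg_b ** 2, diagonal ** 2, rel_tol=1e-6)

print(corner_is_square(3, 4, 5))  # True: the classic 3-4-5 tape measure check
```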
You can find the comments about Gemini at this link. I want to highlight several quotations I found suggestive. Then I want to offer a few observations based on my conversation with my research team.
Here are some representative statements from the YCombinator’s forum:
- Jansan said: Yes, it [Google] is very successful in replacing useful results with links to shopping sites.
- FrustratedMonkey said: Well, deepmind was doing amazing stuff before OpenAI. AlphaGo, AlphaFold, AlphaStar. They were groundbreaking a long time ago. They just happened to miss the LLM surge.
- Wddkcs said: Googles best work is in the past, their current offerings are underwhelming, even if foundational to the progress of others.
- Foobar said: The whole things reeks of being desperate. Half the video is jerking themselves off that they’ve done AI longer than anyone and they “release” (not actually available in most countries) a model that is only marginally better than the current GPT4 in cherry-picked metrics after nearly a year of lead-time?
- Arson9416 said: Google is playing catchup while pretending that they’ve been at the forefront of this latest AI wave. This translates to a lot of talk and not a lot of action. OpenAI knew that just putting ChatGPT in peoples hands would ignite the internet more than a couple of over-produced marketing videos. Google needs to take a page from OpenAI’s playbook.
Please, work through the more than 600 comments about Gemini and reach your own conclusions. Here are mine:
- The Google is trying to market using rhetorical tricks and big-brain hot buttons. The effort comes across to me as similar to Ford’s marketing of the Edsel.
- Sam AI-Man remains the man in AI. Coups, tension, and chaos — irrelevant. The future for many means ChatGPT.
- The comment about timing is a killer. Google missed the train. The company wants to catch up, but it is not shipping products nor being associated with features that grade school kids and harried marketers with degrees in art history can use now.
Sundar Pichai is not Sam AI-Man. The difference has become clear in the last year. If Sundar and Sam are twins, which represents what?
Stephen E Arnold, December 6, 2023
How about Fear and Paranoia to Advance an Agenda?
December 6, 2023
This essay is the work of a dumb dinobaby. No smart software required.
I thought sex sells. I think I was wrong. Fear seems to be the barn burner at the end of 2023. And why not? We have the shadow of another global pandemic. We have wars galore. We have craziness on US airplanes. We have a Cybertruck which spells the end for anyone hit by the behemoth.
I read (but did not shake like the delightful female in the illustration) “AI and Mass Spying.” The author is a highly regarded “public interest technologist,” an internationally renowned security professional, and a security guru. For me, the key factoid is that he is a fellow at the Berkman Klein Center for Internet & Society at Harvard University and a lecturer in public policy at the Harvard Kennedy School. Mr. Schneier is a board member of the Electronic Frontier Foundation and the most, most interesting organization AccessNow.
Fear speaks clearly to those in retirement communities, elder care facilities, and those who are uninformed. Let’s say, “Grandma, you are going to be watched when you are in the bathroom.” Thanks, MSFT Copilot. I hope you are sending data back to Redmond today.
I don’t want to make too much of the Harvard University connection. I feel it is important to note that the esteemed educational institution got caught with its ethical pants around its ankles, not once, but twice in recent memory. The first misstep involved an ethics expert on the faculty who allegedly made up information. The second is the current hullabaloo about a whistleblower allegation. The AP slapped this headline on that report: “Harvard Muzzled Disinfo Team after $500 Million Zuckerberg Donation.” (I am tempted to mention the Harvard professor who is convinced he has discovered fungible proof of alien technology.)
So what?
The article “AI and Mass Spying” is a baffler to me. The main point of the write up strikes me as:
Summarization is something a modern generative AI system does well. Give it an hourlong meeting, and it will return a one-page summary of what was said. Ask it to search through millions of conversations and organize them by topic, and it’ll do that. Want to know who is talking about what? It’ll tell you.
I interpret the passage to mean that smart software in the hands of law enforcement, intelligence operatives, investigators in one of the badge-and-gun agencies in the US, or a cyber lawyer is really, really bad news. Smart surveillance has arrived. Smart software can process masses of data. Plus the outputs may be wrong. I think this means the sky is falling. The fear one is supposed to feel is going to be the way a chicken feels when it sees the Chick-fil-A butcher truck pull up to the barn.
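To make the quoted capability concrete, here is a minimal sketch of a summarize-and-bucket loop. It is not from Mr. Schneier’s essay; the summarize() and label_topic() helpers are hypothetical stand-ins for calls to whatever language model a system actually uses:

```python
from collections import defaultdict
from typing import Callable, Dict, List

def bucket_conversations(transcripts: List[str],
                         summarize: Callable[[str], str],
                         label_topic: Callable[[str], str]) -> Dict[str, List[str]]:
    """Reduce each transcript to a short summary, then group the summaries by topic."""
    buckets: Dict[str, List[str]] = defaultdict(list)
    for text in transcripts:
        summary = summarize(text)                       # the "one-page summary" step
        buckets[label_topic(summary)].append(summary)   # the "organize them by topic" step
    return buckets
```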
Several observations:
- Let’s assume that smart software grinds through whatever information is available to something like a spying large language model. Are those engaged in law enforcement unaware that smart software generates baloney along with the Kobe beef? Will investigators knock off the verification processes because a new system has been installed at a fusion center? The answer to these questions is, “Fear advances the agenda of using smart software for certain purposes; specifically, enforcement of rules, regulations, and laws.”
- I know that the idea that “all” information can be processed is a jazzy claim. Google made it, and those familiar with Google search results know that Google does not even come close to all. It can barely deliver useful results from the Railway Retirement Board’s Web site. “All” covers a lot of ground, and it is unlikely that a policeware vendor will be able to do much more than process a specific collection of data believed to be related to an investigation. “All” is for fear, not illumination. Save the categorical affirmatives for the marketing collateral, please.
- The computational cost of applying smart software to large domains of data — for example, global intercepts of text messages — is fun to talk about over lunch. But the costs are quite real. The costs of the computational infrastructure have to be paid. Then there is the cost of the downstream systems and the people who have to figure out whether the smart software is hallucinating or delivering something useful. I would suggest that Israel’s surprise at the unhappy events from October 2023 to the present day unfolded despite the baloney about smart security software, a great intelligence apparatus, and the tons of marketing collateral handed out at law enforcement conferences. News flash: The stuff did not work.
In closing, I want to come back to fear. Exactly what is accomplished by using fear as the pointy end of the stick? Is it insecurity about smart software? Are there other messages framed in a different way to alert people to important issues?
Personally, I think fear is a low-level technique for getting one’s point across. But when those affiliated with an outfit tangled in an ethics matter and now a payola approach to information reach for it, how about putting on the big boy pants and selecting a rhetorical trope that does something other than remind people that the Covid thing could have killed us all? Err. No. And what is the agenda fear advances?
So, strike the sex sells trope. Go with fear sells.
Stephen E Arnold, December 6, 2023
AI: Big Ideas Become Money Savers and Cost Cutters
December 6, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Earlier this week (November 28, 2023), the British newspaper The Guardian published “Sports Illustrated Accused of Publishing Articles Written by AI.” The main idea is that dependence on human writers became the focus of a bunch of bean counters. The magazine has a reasonably high profile among a demographic not focused on discerning the difference between machine output and sleek, intellectual, well-groomed New York “real” journalists. Some cared. I didn’t. It’s money ball in the news business.
The day before the Sports Illustrated slick business and PR move, I noted a Murdoch-infused publication’s revelation about smart software. Barron’s published “AI Will Create—and Destroy—Jobs. History Offers a Lesson.” Barron’s wrote about it; Sports Illustrated got snared doing it.
Barron’s said:
That AI technology will come for jobs is certain. The destruction and creation of jobs is a defining characteristic of the Industrial Revolution. Less certain is what kind of new jobs—and how many—will take their place.
Okay, the Industrial Revolution. Exactly how long did that take? What jobs were destroyed? What were the benefits at the beginning, the middle, and the end of the Industrial Revolution? What were the downsides of the disruption which unfolded over time? Decades, wasn’t it?
The AI “revolution” is perceived to be real. Investors, testosterone-charged venture capitalists, and some Type A students are going to make the AI Revolution a reality. Damn the regulators, the copyright complainers, and the dinobabies who want to read, think, and write themselves.
Barron’s noted:
A survey conducted by LinkedIn for the World Economic Forum offers hints about where job growth might come from. Of the five fastest-growing job areas between 2018 and 2022, all but one involve people skills: sales and customer engagement; human resources and talent acquisition; marketing and communications; partnerships and alliances. The other: technology and IT. Even the robots will need their human handlers.
I can think of some interesting jobs. Thanks, MSFT Copilot. You did ingest some 19th century illustrations, didn’t you, you digital delight.
Now those are rock solid sources: Microsoft’s LinkedIn and the charming McKinsey & Company. (I think of McKinsey as the opioid innovators, but that’s just my inexplicable predisposition toward an outstanding bastion of ethical behavior.)
My problem with the Sports Illustrated AI move and the Barron’s essay boils down to the bipolarism which surfaces when a new next big thing appears on the horizon. Predicting what will happen when a technology smashes into business billiard balls is fraught with challenges.
One thing is clear: The balls are rolling, and journalists, paralegals, consultants, and some knowledge workers are going to find themselves in the side pocket. The way out might be making TikToks or selling gadgets on eBay.
Some will say, “AI took our jobs, Billy. Now what?” Yes, now what?
Stephen E Arnold, December 6, 2023
The RAG Snag: Convenience May Undermine Thinking for Millions
December 4, 2023
This essay is the work of a dumb dinobaby. No smart software required.
My understanding is that everyone who is informed about AI knows about RAG. The acronym means Retrieval Augmented Generation. One explanation of RAG appears in the nVidia blog in the essay “What Is Retrieval Augmented Generation aka RAG.” nVidia, in my opinion, loves whatever drives demand for its products.
The idea is to use machine processes to minimize errors in output. The write up states:
Retrieval-augmented generation (RAG) is a technique for enhancing the accuracy and reliability of generative AI models with facts fetched from external sources.
Simplifying the idea, RAG methods gather information and perform reference checks. The checks can be performed by consulting other smart software, the Web, or knowledge bases like an engineering database. nVidia provides a “reference architecture,” which obviously relies on nVidia products.
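For readers who want the mechanics rather than the marketing, here is a minimal sketch of the retrieve-then-generate loop. It is not nVidia’s reference architecture; the embed() and generate() helpers are stand-ins for whichever embedding service and language model one actually plugs in:

```python
from typing import Callable, List

import numpy as np

def build_index(docs: List[str], embed: Callable[[str], np.ndarray]) -> np.ndarray:
    """Embed each document once so queries can be compared against them later."""
    return np.vstack([embed(d) for d in docs])

def retrieve(question: str, docs: List[str], index: np.ndarray,
             embed: Callable[[str], np.ndarray], k: int = 3) -> List[str]:
    """Return the k documents whose embeddings sit closest to the question's embedding."""
    q = embed(question)
    scores = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q) + 1e-9)
    return [docs[i] for i in np.argsort(-scores)[:k]]

def answer(question: str, docs: List[str], index: np.ndarray,
           embed: Callable[[str], np.ndarray], generate: Callable[[str], str]) -> str:
    """Hand the model the retrieved passages so its answer is grounded in fetched text."""
    context = "\n\n".join(retrieve(question, docs, index, embed))
    prompt = (
        "Answer the question using only the sources below.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)
```

The prompt carries the RAG trick: the model is asked to answer from the fetched passages rather than from whatever it memorized during training.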
The write up does an obligatory tour of a couple of search and retrieval systems. Why? Most of the trendy smart software amounts to demonstrations of information access methods wrapped up in tools that think for the “searcher” or person requiring output to answer a question, explain a concept, or help the human to think clearly. (In dinosaur days, these functions were performed by a special librarian or an informed colleague who could ask questions and conduct a reference interview. I hope the dusty concepts did not make you sneeze.)
“Yes, young man. The idea of using multiple sources can result in learning. We now call this RAG, not research.” The young man, stunned by the insight, says, “WTF?” Thanks, MSFT Copilot. I love the dual tassels. The young expert is obviously twice as intelligent as the somewhat more experienced dinobaby with the weird fingers.
The article includes a diagram which I found difficult to read. I think the simple blocks represent the way in which smart software obviates the need for the user to know much about sources, verification, or provenance about the sources used to provide information. Furthermore, the diagram makes the entire process look just like getting the location of a pizza restaurant from an iPhone (no Google Maps for me).
The highlight of the write up is the set of links within the article. An interested reader can follow the links for additional information.
Several observations:
- The emergence of RAG as a replacement for such concepts as “search,” “special librarian,” and “provenance” makes clear that finding information remains an unsolved problem for systems, software, and people. New words make the “old” problem appear “new” again.
- The push for recursive methods to figure out what’s “right” or “correct” will regress to the mean; that is, despite the mathiness of the methods, systems will deliver “acceptable” or “average” outputs. A person who thinks that software will impart genius to a user is believing in a dream. These individuals will not be living the dream.
- Widespread use of smart software and automation means that for most people, critical thinking will become the equivalent of an appendix. Instead of mother knows best, the system will provide the framing, the context, and the implication that the outputs are correct.
RAG opens new doors: those who operate widely adopted smart software systems will have significant control over what people think and, thus, do. If the iPhone shows a pizza joint, what about other pizza joints? Just ask. The system will not show pizza joints not verified in some way. If that “verification” requires the company to advertise in order to be in the data set, well, that’s the set of pizza joints one will see. The others? Invisible, not on the radar, and seemingly doomed to failure.
RAG is significant because it is newspeak and because it marks a dissociation of “knowing” from “accepting” output information as the best and final word on a topic. I want to point out that for a small percentage of humans, their superior cognitive abilities will ensure a different trajectory. The “new elite” will become the individuals who design, shape, control, and deploy these “smart” systems.
Most people will think they are informed because they can obtain information from a device. The mass of humanity will not know how information control influences their understanding and behavior. Am I correct? I don’t know. I do know one thing: This dinobaby prefers to do knowledge acquisition the old fashioned, slow, inefficient, and uncontrolled way.
Stephen E Arnold, December 4, 2023
Health Care and Steerable AI
December 4, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Large language models are powerful tools that can be used for the betterment of humanity. Or, in the hands of for-profit entities, to get away with wringing every last penny out of a system in the most opaque and intractable ways possible. When that system manages the wellbeing of millions and millions of people, the fallout can be tragic. TechDirt charges, “’AI’ Is Supercharging our Broken Healthcare System’s Worst Tendencies.”
Reporter Karl Bode begins by illustrating the bad blend of corporate greed and AI with journalism as an example. Media companies, he writes, were so eager to cut corners and dodge unionized labor they adopted AI technology before it was ready. In that case the results were “plagiarism, bull[pucky], a lower quality product, and chaos.” Those are bad. Mistakes in healthcare are worse. We learn:
“Not to be outdone, the very broken U.S. healthcare industry is similarly trying to layer half-baked AI systems on top of a very broken system. Except here, human lives are at stake. For example UnitedHealthcare, the largest health insurance company in the US, has been using AI to determine whether elderly patients should be cut off from Medicare benefits. If you’ve ever navigated this system on behalf of an elderly loved one, you likely know what a preposterously heartless [poop]whistle this whole system already is long before automation gets involved. But a recent investigation by STAT showed the AI consistently made major errors and cut elderly folks off from needed care prematurely, with little recourse by patients or families. … A recent lawsuit filed in the US District Court for the District of Minnesota alleges that the AI in question was reversed by human review roughly 90 percent of the time.”
And yet, employees were ordered to follow the algorithm’s decisions no matter their inanity. For the few patients who did win hard-fought reversals, those decisions were immediately followed by fresh rejections that kicked them back to square one. Bode writes:
“The company in question insists that the AI’s rulings are only used as a guide. But it seems pretty apparent that, as in most early applications of LLMs, the systems are primarily viewed by executives as a quick and easy way to cut costs and automate systems already rife with problems, frustrated consumers, and underpaid and overtaxed support employees.”
But is there hope these trends will be eventually curtailed? Well, no. The write-up concludes by scoffing at the idea that government regulations or class action lawsuits are any match for corporate greed. Sounds about right.
Cynthia Murrell, December 4, 2023