Amazon: Machine-Generated Content Adds to Overhead Costs

July 7, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

“Amazon Has a Big Problem As AI-Generated Books Flood Kindle Unlimited” makes it clear that Amazon is going to have to re-think how it runs its self-publishing operation and figure out how to deal with machine-generated books from “respected” publishers.

The author of the article is expressing concern about ChatGPT-type outputs being assembled into electronic books. That concern is focused on Amazon and its ageing, arthritic Kindle eBook business. With text-to-voice tools, I suppose one should also think about Audible audiobooks spit out by software. The culprit, however, may be Amazon itself. Paying a person to read a book for seven hours, not screw up, and keep the sound acceptable when the reader has a stuffed nose can be pricey.

A senior Amazon executive thinks to herself, “How can I fix this fake content stuff? I should really update my LinkedIn profile too.” Will the lucky executive charged with fixing the problem identified in the article be allowed to eliminate revenue? Yep, get going on the LinkedIn profile first. Tackle the fake stuff later.

The write up points out:

the mass uploading of AI-generated books could be used to facilitate click-farming, where ‘bots’ click through a book automatically, generating royalties from Amazon Kindle Unlimited, which pays authors by the amount of pages that are read in an eBook.
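
The economics are easy to sketch. Here is a minimal illustration in Python; the per-page rate, page count, and bot volume are all stand-in numbers, not Amazon's actual figures (the Kindle Unlimited KENP payout rate floats from month to month):

    # Stand-in numbers; Amazon's actual per-page (KENP) rate floats monthly.
    per_page_rate = 0.004        # dollars per page read (illustrative)
    pages_per_book = 200         # a typical machine-generated "book"
    bot_reads = 10_000           # bot "read-throughs" of one fake book

    royalty = per_page_rate * pages_per_book * bot_reads
    print(f"Royalty harvested by the click farm: ${royalty:,.2f}")  # $8,000.00

Multiply that by hundreds of uploaded titles and the incentive is obvious.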

And what’s Amazon doing about this quasi-fake content? The article reports:

It [Amazon] didn’t explicitly state that it was making an effort specifically to address the apparent spam-like persistent uploading of nonsensical and incoherent AI-generated books.

Then, the article raises the issues of “quality” and “authenticity.” I am not sure what these two glory words mean. My impression is that a machine-generated book is not as good as one crafted by a subject matter expert or a motivated human author. If I am right, the editors at TechRadar are apparently oblivious to the idea of using XML-structured content and a MarkLogic-type tool to slice-and-dice content. Then the components are assembled into a reference book. I want to point out that this method has been in use by professional publishers for a number of years.

Because I signed a confidentiality agreement, I am not able to identify this outfit. But I still recall the buzz of excitement that rippled through one officer meeting at this outfit when those listening to a presentation realized [a] humanoids could be terminated and a reduced staff could produce more books and [b] the guts of the technology was a database, a technology mostly understood by those with a few technical conferences under their belt. Yippy! No one had to learn anything. Just calculate the financial benefit of dumping humans and figure out how to expense the contractors who could format content from a hovel in a Myanmar-type low-cost location. At night, the executives dreamed about their bonuses for hitting their financial targets and how to start RIF’ing editorial staff, subject matter experts, and assorted specialists who doodled with front matter, footnotes, and fonts.
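
For readers unfamiliar with the method, here is a minimal sketch of the slice-and-dice idea, assuming content stored as tagged XML components. The element names and sample entries are illustrative, not any publisher's actual schema:

    import xml.etree.ElementTree as ET

    # Illustrative component repository; real publishers use richer schemas.
    source = """<repository>
      <entry topic="enzymes"><title>Lipase</title><body>Breaks down fats.</body></entry>
      <entry topic="metals"><title>Copper</title><body>Conducts electricity.</body></entry>
      <entry topic="enzymes"><title>Amylase</title><body>Breaks down starch.</body></entry>
    </repository>"""

    root = ET.fromstring(source)

    # Slice: pull every component tagged with the wanted topic.
    picked = [e for e in root.findall("entry") if e.get("topic") == "enzymes"]

    # Dice and assemble: order the components and emit a "reference book" section.
    for entry in sorted(picked, key=lambda e: e.findtext("title")):
        print(entry.findtext("title"), "-", entry.findtext("body"))

Once the repository exists, no subject matter expert is needed to emit another “book.” That is the point the officers grasped.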

Net net: There is no fix. The write up illustrates a lack of understanding about how large sections of the information industry use technology and about the established procedures for seizing a cost-saving opportunity. Quality means more revenue from those decisions. Authenticity is a marketing job. Amazon has a content problem and has to gear up its tools and business procedures to cope with machine-generated content, whether in product reviews or eBooks.

Stephen E Arnold, July 7, 2023

Step 1: Test AI Writing Stuff. Step 2: Terminate Humanoids. Will Outrage Prevent the Inevitable?

July 5, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I am fascinated by the information (allegedly actual factual) in “Gizmodo and Kotaku Staff Furious After Owner Announces Move to AI Content.” Part of my interest is the subtitle:

God, this is gonna be such a f***ing nightmare.

Ah, for whom, pray tell. Probably not for the owners, who may see a pot of gold at the end of the smart software rainbow; for example, Costs Minus Humans Minus Health Care Minus HR Minus Miscellaneous Humanoid costs like latte makers, office space, and salaries / bonuses. What do these produce? More money (value) for the lucky most senior managers and selected stakeholders. Humanoids lose; software wins.

A humanoid writer sits at a desk and wonders if the smart software will become a pet rock or a creature let loose to ruin her life by those who want a better payoff.

For the humanoids, it is hasta la vista. Assume the quality is worse? Then the analysis requires quantifying “worse.” Software will be cheaper over any reasonable time interval, so the expensive humans lose. Quality is like love and ethics. Money matters; quality becomes good enough.

Will fury or outrage or protests make a difference? Nope.

The write up points out:

“AI content will not replace my work — but it will devalue it, place undue burden on editors, destroy the credibility of my outlet, and further frustrate our audience,” Gizmodo journalist Lin Codega tweeted in response to the news. “AI in any form, only undermines our mission, demoralizes our reporters, and degrades our audience’s trust.”

“Hey! This sucks!” tweeted Kotaku writer Zack Zwiezen. “Please retweet and yell at G/O Media about this! Thanks.”

Much to the delight of her significant others, the “f***ing nightmare” is from the creative, imaginative humanoid Ashley Feinberg.

An ideal candidate for early replacement by a software system and a list of stop words.

Stephen E Arnold, July 5, 2023

Google: Is the Company Engaging in F-U-D?

July 3, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

When I was a wee sprout in 1963, I was asked to attend an IBM presentation at the so-so university I attended. Because I was a late-night baby-sitter for the school’s big, hot, and unreliable mainframe, I was offered a full-day lecture and a free lunch. Of course, I went. I remember one thing more than a half century later. The other attendees from my college were using a word I was hearing but not interpreting very well.

The artistic MidJourney presents a picture showing executives struggling to process Google’s smart software announcements about the future. One seems to be wondering, “These are the quantum supremacy people. They revolutionized protein folding. Now they want us to wait while our competitors are deploying ChatGPT-based services? F-U-D that!”

The word was F-U-D. To make sure I wasn’t confusing the word with a popular epithet, I asked one of the people who worked in the computer center as a supervisor (actually an underpaid graduate student, but superior to my $3-per-hour wage), “What’s F-U-D?”

The fellow explained, “It means fear, uncertainty, and doubt. The idea is that IBM wants us to be afraid of buying something from Burroughs or National Cash Register. The uncertainty means that we have to make sure the competitors’ computers are as good as the IBM machines. And the doubt means that if we buy a Control Data system, we can be fired if it isn’t IBM.”

Yep, F-U-D. The game plan was designed to make people like me cautious about anything not embraced by administrators. New things had to be kept in a sandbox. Really new things had to be part of a Federal research grant which could blow up and destroy a less-than-brilliant researcher’s career but cause no ripple in carpetland.

Why am I thinking about F-U-D?

I read “Here’s Why Google Thinks Its Gemini AI Will Surpass ChatGPT.” The write up makes clear:

“At a high level you can think of Gemini as combining some of the strengths of AlphaGo-type systems with the amazing language capabilities of the large models,” Hassabis told Wired. “We also have some new innovations that are going to be pretty interesting.”

I interpreted this comment in this way:

  1. Be patient, Google has better, faster, cheaper, more wonderful technology for you coming soon, really soon
  2. Google is creating better AI because we are combining great technology with the open source systems and methods we made available to losers like OpenAI
  3. Google is innovative. (Remember, please, that Google equates innovation with complexity.)

Net net: By Gemini, just slow down. Wait for us. We are THE Google, and we do F-U-D.

Stephen E Arnold, July 3, 2023

Accuracy: AI Struggles with the Concept

June 30, 2023

For those who find reading and understanding research papers daunting, algorithms can help. At least according to the write-up, “5 AI Tools for Summarizing a Research Paper” at Cointelegraph. Writer Alice Ivey emphasizes that research articles can be full of jargon, complex ideas, and technical descriptions, making them tricky for anyone outside the researchers’ field. It is AI to the rescue! That is, as long as you don’t mind summaries that contain a few errors. We learn:

“Artificial intelligence (AI)-powered tools that provide support for tackling the complexity of reading research papers can be used to solve this complexity. They can produce succinct summaries, make the language simpler, provide contextualization, extract pertinent data, and provide answers to certain questions. By leveraging these tools, researchers can save time and enhance their understanding of complex papers.

But it’s crucial to keep in mind that AI tools should support human analysis and critical thinking rather than substitute for them. In order to ensure the correctness and reliability of the data collected from research publications, researchers should exercise caution and use their domain experience to check and analyze the outputs generated by AI techniques. … It’s crucial to keep in mind that AI tools may not always accurately capture the context of the original publication, even though they can help summarize research papers.”

So, one must be familiar with the area of study to judge whether the AI got it right. Doesn’t that defeat the purpose? One can imagine scenarios where relying on misinformation could have serious consequences. Or at least some embarrassment.

The article lists ChatGPT, QuillBot, SciSpacy, IBM Watson Discovery, and Semantic Scholar as our handy but potentially inaccurate AI explainers. Some readers may possess the knowledge needed to recognize a faulty summary and think such tools may at least save them a bit of time. It would be nice to know how much one would pay for that convenience, but that small detail is missing from the write-up. ChatGPT Plus, for example, runs $20 per month, or $240 per year. It might be more cost effective to just read the articles for oneself.
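
For those determined to try anyway, here is a minimal sketch of the ChatGPT route, assuming the 2023-era openai Python package and a valid API key; the paper text below is a stand-in, not a real article:

    import openai

    # Assumption: the 0.27-era openai package and an API key from the user.
    openai.api_key = "YOUR_API_KEY"

    paper_text = "..."  # stand-in for the full text of a research paper

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": f"Summarize this paper in three sentences:\n\n{paper_text}",
        }],
    )

    print(response.choices[0].message.content)

Note the catch the write-up flags: nothing in that output tells the reader whether the summary is accurate. The domain expertise still has to come from the human.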

Cynthia Murrell, June 30, 2023

Annoying Humans Bedevil Smart Software

June 29, 2023

Humans are inherently biased. While sexist, ethnic, and socioeconomic prejudices are implied as the cause behind biases, unconscious obliviousness is more likely to be the culprit. Whatever causes us to be biased, AI developers are unfortunately teaching AI algorithms our fallacies. Bloomberg investigates how AI is being taught bad habits in the article, “Humans Are Biased, Generative AI Is Even Worse.”

Stable Diffusion is one of the many AI bots that generates images from text prompts. Based on these prompts, it delivers images that display an inherent bias in favor of white men and discriminates against women and brown-skinned people. Using Stable Diffusion, Bloomberg generated and analyzed 5,000 AI images and found that Stable Diffusion is more racist and sexist than real life.

While Stable Diffusion and other text-to-image AI are entertaining, they are already employed by politicians and corporations. AI-generated images and videos set a dangerous precedent, because they allow bad actors to propagate false information ranging from conspiracy theories to harmful ideologies. Ethical advocates, politicians, and some AI leaders are lobbying for moral guidelines, but a majority of tech leaders and politicians are not concerned:

“Industry researchers have been ringing the alarm for years on the risk of bias being baked into advanced AI models, and now EU lawmakers are considering proposals for safeguards to address some of these issues. Last month, the US Senate held a hearing with panelists including OpenAI CEO Sam Altman that discussed the risks of AI and the need for regulation. More than 31,000 people, including SpaceX CEO Elon Musk and Apple co-founder Steve Wozniak, have signed a petition posted in March calling for a six-month pause in AI research and development to answer questions around regulation and ethics. (Less than a month later, Musk announced he would launch a new AI chatbot.) A spate of corporate layoffs and organizational changes this year affecting AI ethicists may signal that tech companies are becoming less concerned about these risks as competition to launch real products intensifies.”

Biased datasets for AI are not new. AI developers must create more diverse and “clean” data that incorporates a true, real-life depiction. The answer may be synthetic data; that is, data produced with human involvement minimized — except when the system is first set up.

Whitney Grace, June 29, 2023

Harvard University: Ethics and Efficiency in Teaching

June 28, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

You are familiar with Harvard University, the school of broad endowments and a professor who allegedly made up data and criticized colleagues for taking similar liberties with the “truth.” For more color about this esteemed Harvard professional, read “Harvard Behavioral Scientist Who Studies Dishonesty Is Accused of Fabricating Data.”

Now the academic home of William James and many notable experts in ethics, truth, reasoning, and fund raising has made an interesting decision. “Harvard’s New Computer Science Teacher Is a Chatbot.”

A terrified 17-year-old from an affluent family in Brookline asks, “Professor Robot, will my social acceptance score be reduced if I do not understand how to complete the programming assignment?” The inspirational image is an output from the copyright-compliant and ever-helpful MidJourney service.

The article, published in the UK “real” newspaper The Independent, reports:

Harvard University plans to use an AI chatbot similar to ChatGPT as an instructor on its flagship coding course.

The write up adds:

The AI teaching bot will offer feedback to students, helping to find bugs in their code or give feedback on their work…

Once installed and operating, the chatbot will be the equivalent of a human teaching students how to make computers do what the programmer wants? Hmmm.

Several questions:

  1. Will the Harvard chatbot, like a living, breathing Harvard ethics professor, make up answers?
  2. Will the Harvard chatbot be cheaper to operate than a super motivated, thrillingly capable adjunct professor, graduate student, or doddering lecturer close to retirement?
  3. Why does an institution like Harvard lack the infrastructure to teach humans with humans?
  4. Will the use of chatbot output code be considered original work?

But as one maverick professor keeps saying, “Just getting admitted to a prestigious university punches one’s employment ticket.”

That’s the spirit of modern education. As William James, a professor from a long and dusty era, said:

The world we see that seems so insane is the result of a belief system that is not working. To perceive the world differently, we must be willing to change our belief system, let the past slip away, expand our sense of now, and dissolve the fear in our minds.

Should students fear algorithms teaching them how to think?

Stephen E Arnold, June 28, 2023

Digital Work: Pick Up the Rake and Get with the Program

June 27, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

The sky is falling, according to “AI Is Killing the Old Web, And the New Web Struggles to Be Born.” What’s the answer? Read publications like the Verge online, of course. At least, that is the message I received from this essay. (I think I could hear the author whispering, “AI will kill us all, and I will lose my job. But this essay is a rizz. NYT, here I come.”)

This grumpy young person says, “My brother dropped the car keys in the leaves. Now I have to rake — like actually rake — to find them. My brother is a dork and my life is over.” Is there an easy, quick fix? No, the sky — not the leaves — is falling when it comes to finding information, according to the Verge, a Silicon Valley-type “real” news outfit. MidJourney, you have almost captured the dour look of a young person who must do work.

I noted this statement in the essay:

AI-generated misinformation is insidious because it’s often invisible. It’s fluent but not grounded in real-world experience, and so it takes time and expertise to unpick. If machine-generated content supplants human authorship, it would be hard — impossible, even — to fully map the damage. And yes, people are plentiful sources of misinformation, too, but if AI systems also choke out the platforms where human expertise currently thrives, then there will be less opportunity to remedy our collective errors.

Thump. The sky allegedly has fallen. The author, like the teen in the illustration, is faced with work; that is, the task of raking, bagging, and hauling the trash to the burn pit.

What a novel concept! Intellectual work; that is, sifting through information and discarding the garbage. Prior to Gutenberg, one asked around, found a person who knew something, and asked the individual, “How do I make a horseshoe?” After Gutenberg, one had to find, read, and learn information. With online systems, free services are supposed to just cough up the answer. The idea is that the leaves put themselves in the garbage bags and the missing keys appear. It’s magic or one of those Apple tracking devices.

News flash.

Each type of finding tool requires work. Yep, effort. In order to locate information, one has to do work. Does the thumb-typing, TikTok-consuming person want to do work? From my point of view, work is not on the menu at Philz Coffee.

New tools, different finding methods, and effort are required to rake the intellectual leaves and reveal the lawn. In the comments to the article, Barb3d says:

It’s clear from his article that James Vincent is more concerned about his own relevance in an AI-powered future than he is about the state of the web. His alarmist view of AI’s role in the web landscape appears to be driven more by personal fear of obsolescence than by objective analysis.

My view is that the Verge is concerned about its role as a modern Oracle of Delphi. The sky-is-falling pitch is itself click bait. The silliness of the Silicon Valley “real” news outfit vibrates in the write up. I would point out that the article itself is derivative of another article from the online service Tom’s Hardware.

The author allegedly talked to one expert in hiking boots. That’s a good start. The longest journey begins with a single step. But learning how to make a horseshoe and forming an opinion about which boot to purchase are two different tasks. One is instrumental and the other is fashion.

No, software advances won’t kill the Web as “we” know it. As Barb3d says, “Adapt.” Or in my lingo, pick up the rake, quit complaining, and find the keys.

Stephen E Arnold, June 27, 2023

Are AI UIs Really Better?

June 27, 2023

User experience design firm Nielsen Norman Group believes advances in AI define an entirely new way of interacting with computers. Writer and company cofounder Jakob Nielsen asserts, “AI: First New UI Paradigm in 60 Years.” We would like to point out that natural language interfaces are not new, but we acknowledge there are now machine resources and software that make such methods more useful. Do they rise to the level of a shiny new paradigm?

Nielsen begins with a little history lesson. First came batch processing in 1945 — think stacks of punch cards and reams of folded printouts. It was an unwieldy and inconvenient system, to say the least. Then around 1964 command-based interaction took over, evolving through the years from command-line programming to graphical user interfaces. Nielsen describes why AI represents a departure from these methods:

“With the new AI systems, the user no longer tells the computer what to do. Rather, the user tells the computer what outcome they want. Thus, the third UI paradigm, represented by current generative AI, is intent-based outcome specification.”
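
A toy contrast makes the shift concrete; both snippets below are illustrative, not drawn from Nielsen's article:

    # Command-based paradigm: the user spells out each step.
    sales = [("widgets", 120.0), ("gadgets", 80.5)]
    total = sum(amount for _, amount in sales)
    print(f"Total: {total}")

    # Intent-based paradigm: the user states only the desired outcome and
    # leaves the steps to the system (illustrative prompt, not a real API).
    prompt = "Add up my sales figures and tell me the total."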

Defining outcomes instead of steps — sounds great until one asks who’s in control. Not the user. The article continues:

“Do what I mean, not what I say is a seductive UI paradigm — as mentioned, users often order the computer to do the wrong thing. On the other hand, assigning the locus of control entirely to the computer does have downsides, especially with current AI, which is prone to including erroneous information in its results. When users don’t know how something was done, it can be harder for them to identify or correct the problem.”

Yes! Nielsen cites this flaw as a reason he will stick with graphical user interfaces, thank you very much. (Besides, he feels, visual information is easier to understand and interact with than text.) We would add a more sinister consideration: Is the system weaponized or delivering shaped information? Developers’ lack of transparency can hide not only honest mistakes but also biases and even intentional misinformation. We agree with Nielsen: We will stick with GUIs for a bit longer.

Cynthia Murrell, June 27, 2023

Amazon AWS PR: A Signal from a Weakening Heart?

June 26, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “Amazon’s Vision: An AI Model for Everything.” Readers of these essays know that I am uncomfortable with categorical affirmatives like “all”, “every”, and “everything.” The article in Semafor (does the word remind you of a traffic light in Lima, Peru?) is an interview with a vice president of Amazon Web Services. AWS is part of the online bookstore and digital flea market available at Amazon.com. The write up asserts that AWS will offer an “AI model for everything.” Everything? That’s a modest claim for a fast-moving and rapidly changing suite of technologies.

Amazon executives — unlike some high-technology firms’ professionals — are usually less visible. But here is Matt Wood, the VP of AWS, explaining the digital flea market’s approach to smart software manifested in AWS cloud technology. I thought AWS was numero uno in the cloud computing club. Big dogs don’t do much PR, but this is 2023, so adaptation is necessary, I assume. AWS is shadowed by Microsoft, allegedly number two in the Cloud Club. Make no mistake, the Softies and their good-enough software are gunning for the top spot in a small but elite strata of the techno world. The Google, poor Google, is lumbering through a cloud-bedecked market with its user-first, super-duper promises for the future and panting quantum, AI, and Office 365 with each painful step.

In a gym high above the clouds in a skyscraper in the Pacific Northwest, a high-powered denizen of the exclusive Cloud Club experiences chest pain in the rarefied air. After saying, “Hey, I am a-okay,” the sleek and successful member of an exclusive club yelps and grabs his chest. Those in the club express shock and dismay. But one person seems to smile. Is that a Microsoftie or a Googler looking just a little bit happy at the fellow member’s obvious distress? MidJourney cooked up this tasty illustration. Thanks, you plagiarism-free bot, you.

The Semafor interview offers some statements about AWS’s goals. There is no information about AWS and its Byzantine cloud pricing policies, nor is much PR light shed on the yard sale approach to third-party sourced products.

Here are three snippets which caught my attention. (I call these labored statements because each seems as if a committee of lawyers, blue-chip consultants, and interns crafted them, but that’s just my opinion. You may find these gems worthy of writing on a note card and saving for those occasions when you need a snappy quotation.)

Labored statement one

But there’s an old Amazon adage that these things are usually an “and” and not an “or.” So we’re doing both.

Got that? Boolean, isn’t it? Even though Amazon AWS explained its smart software years ago, a fact I documented in an invited lecture I gave in 2019, the company has not delivered on its promise of “off the shelf, ready to run” models, packaged data sets, and easy-to-use methods so AWS customers could deploy smart software easily. Like Amazon’s efforts in blockchain, some ideational confections were in the AWS jungle. A suite of usable and problem-solving services was not. Has AWS pioneered in more than complicated cloud pricing?

Labored statement two

The ability to take that data and then take a foundational model and just contribute additional knowledge and information to it very quickly and very easily, and then put it into production very quickly and very easily, then iterate on it in production very quickly and very easily. That’s kind of the model that we’re seeing.

Ah, ha. I loved the “just.” Easy stuff. Digital Lego blocks. I once stayed in the Lego hotel. On arrival, I watched a team of Lego professionals trying to reassemble one of the Lego sculptures some careless child had knocked over. Little rectangles littered the hotel lobby. Two days later when I checked out, the Lego Star Wars figure was still being reassembled. I thought Lego toys were easy to use. Oh, well. My perception of AWS is that there are many, many components. Licensees can just assemble them as long as they have the time, expertise, and money. Is that the kind of model AWS will deliver or is delivering?

Labored statement three

ChatGPT may be the most successful technology demo since the original iPhone introduction. It puts a dent in the universe.

My immediate reaction: “What about fire, the wheel, printing, the Internet?” And I liked the fact that ChatGPT is a demonstration. Let me describe how Amazon handles its core functions. The anecdote dates from early 2022. I wrote about ordering an AMD Ryzen 5950 and receiving from Amazon a pair of red female-centric underwear.

This red female undergarment arrived after I ordered an AMD Ryzen 5950 CPU. My wife estimated the value of the giant-sized personal item at about $4.00US. The 5950 cost me about $550.00US. I am not sure how a warehouse fulfillment professional or a poorly maintained robot picker could screw up my order. But Amazon pulled it off and then for almost a month insisted the panties were the CPU.

This picture is the product sent to me by Amazon instead of an AMD Ryzen 5950 CPU. For the full story, see “Amazon: Is the Company Losing Control of Essentials?” After three weeks of going back and forth with Amazon’s stellar customer service department, my money was refunded. I was told to keep the underwear, which now hangs on the corner of the computer with the chip. I was able to buy the chip for a lower price from B+H Photo Video. When I opened the package, I saw the AMD box, not a pair of cheap, made-heaven-knows-where panties.

What did that say about Amazon’s ability to drive the Bezos bulldozer now that the founder rides his yacht, lifts weights, and ponders how Elon Musk and SpaceX have become the go-to space outfit? Can Amazon deliver something the customer wants?

Several observations:

First, this PR effort is a signal that Amazon is aware that it is losing ground in the AI battle.

Second, the Amazon approach is unlikely to slow Microsoft’s body slam of commercial customers. Microsoft’s software may be “good enough” to keep Word and SharePoint lovers on the digital ranch.

Third, Amazon’s Bezos bulldozer drivers seem to have lost their GPS signal. May I suggest ordering a functioning GPS from Wal-Mart?

Basics, Amazon, basics, not words. Especially words like “everything.” Do one thing and do it well, please.

Stephen E Arnold, June 26, 2023

Have You Heard the AI Joke about? Yeah, Over and Over Again

June 23, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Developers have been unable to program one key facet of human intelligence into AI: a sense of humor. Oh, ChatGPT has jokes, but its repertoire is limited. And when asked to explain why something is or is not funny, it demonstrates it just doesn’t get it. Ars Technica informs us, “Researchers Discover that ChatGPT Prefers Repeating 25 Jokes Over and Over.”

A young person in the audience says to the standup comedian: “Hey, dude. Your jokes suck. Did an AI write them for you?” Despite my efforts to show the comedian getting bombarded with apple cores, bananas, and tomatoes, MidJourney would only produce this sanitized image. It’s great, right? Thanks, MidJourney.

Reporter Benj Edwards writes:

“Two German researchers, Sophie Jentzsch and Kristian Kersting, released a paper that examines the ability of OpenAI’s ChatGPT-3.5 to understand and generate humor. In particular, they discovered that ChatGPT’s knowledge of jokes is fairly limited: During a test run, 90 percent of 1,008 generations were the same 25 jokes, leading them to conclude that the responses were likely learned and memorized during the AI model’s training rather than being newly generated.”
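
The tally itself is easy to reproduce in spirit. Here is a minimal sketch of that frequency check, run on stand-in strings rather than the researchers' 1,008 actual generations:

    from collections import Counter

    # Stand-in outputs; the researchers collected 1,008 real generations.
    generations = [
        "Why did the scarecrow win an award? He was outstanding in his field.",
        "Why don't scientists trust atoms? They make up everything.",
        "Why did the scarecrow win an award? He was outstanding in his field.",
    ]

    # Count duplicates and measure how much of the sample the top 25 cover.
    top_25 = Counter(generations).most_common(25)
    covered = sum(count for _, count in top_25)
    print(f"Top 25 outputs cover {covered / len(generations):.0%} of the sample")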

See the article, if curious, for the algorithm’s top 10 dad jokes and their frequencies within the 1,008 joke sample. There were a few unique jokes in the sample, but the AI seems to have created them by combining elements of others. And often, those mashups were pure nonsense. We learn:

“The researchers found that the language model’s original creations didn’t always make sense, such as, ‘Why did the man put his money in the blender? He wanted to make time fly.’ When asked to explain each of the 25 most frequent jokes, ChatGPT mostly provided valid explanations according to the researchers’ methodology, indicating an ‘understanding’ of stylistic elements such as wordplay and double meanings. However, it struggled with sequences that didn’t fit into learned patterns and couldn’t tell when a joke wasn’t funny. Instead, it would make up fictional yet plausible-sounding explanations.”

Plausible sounding, perhaps, but gibberish nonetheless. See the write-up for an example. ChatGPT simply does not understand what it means for something to be funny. Humor, after all, is a quintessentially human characteristic. Algorithms may get better at mimicking it, but we must never lose sight of the fact that AI is software, incapable of amusement. Or any other emotion. If we begin thinking of AI as human, we are in danger of forgetting the very real limits of machine learning as a lens on the world.

Cynthia Murrell, June 23, 2023
