Harvard University: Ethics and Efficiency in Teaching

June 28, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

You are familiar with Harvard University, the school of broad endowments and a professor who allegedly made up data and criticized colleagues for taking similar liberties with the “truth.” For more color about this esteemed Harvard professional read “Harvard Behavioral Scientist Who Studies Dishonesty Is Accused of Fabricating Data.”

Now the academic home of William James and many notable experts in ethics, truth, reasoning, and fundraising has made an interesting decision. “Harvard’s New Computer Science Teacher Is a Chatbot.”


A terrified 17-year-old from an affluent family in Brookline asks, “Professor Robot, will my social acceptance score be reduced if I do not understand how to complete the programming assignment?” The inspirational image is an output from the copyright-compliant and ever helpful MidJourney service.

The article published in the UK “real” newspaper The Independent reports:

Harvard University plans to use an AI chatbot similar to ChatGPT as an instructor on its flagship coding course.

The write up adds:

The AI teaching bot will offer feedback to students, helping to find bugs in their code or give feedback on their work…

Once installed and operating, the chatbot will be the equivalent of a human teaching students how to make computers do what the programmer wants? Hmmm.

Several questions:

  1. Will the Harvard chatbot, like a living, breathing Harvard ethics professor, make up answers?
  2. Will the Harvard chatbot be cheaper to operate than a super motivated, thrillingly capable adjunct professor, graduate student, or doddering lecturer close to retirement?
  3. Why does an institution like Harvard lack the infrastructure to teach humans with humans?
  4. Will the use of chatbot output code be considered original work?

But as one maverick professor keeps saying, “Just getting admitted to a prestigious university punches one’s employment ticket.”

That’s the spirit of modern education. As William James, a professor from a long and dusty era, said:

The world we see that seems so insane is the result of a belief system that is not working. To perceive the world differently, we must be willing to change our belief system, let the past slip away, expand our sense of now, and dissolve the fear in our minds.

Should students fear algorithms teaching them how to think?

Stephen E Arnold, June 28, 2023

Dust Up: Social Justice and STEM Publishing

June 28, 2023

Are you familiar with “social justice warriors”? These are people who take it upon themselves to police the world for their moral causes, usually from a self-righteous standpoint. Social justice warriors are also known by the acronym SJWs and can cross over into the infamous Karen zone. Unfortunately, Heterodox STEM reports SJWs have invaded the science community. Anna Krylov and Jay Tanzman discuss the issue in their paper “Critical Social Justice Subverts Scientific Publishing.”

SJWs advocate for the politicization of science, adding an ideology known as critical social justice (CSJ) to scientific research. It upends the true purpose of science, which is to help and advance humanity. CSJ adds censorship, scholarship suppression, and social engineering to science.

Krylov and Tanzman’s paper was presented at the Perils for Science in Democracies and Authoritarian Countries conference, and they argue CSJ harms scientific research more than it helps it. They compare CSJ to Orwell’s fictional Ministry of Love, although real-life examples such as Joseph Goebbels’s Nazi propaganda ministry, the USSR’s Department for Agitation and Propaganda, and China’s authoritarian regime work better. CSJ is the opposite of the Enlightenment, which liberated human psyches from religious and royal dogmas. The Enlightenment engendered critical thinking, the scientific process, philosophy, and discovery. The world became more tolerant, wealthier, more educated, and healthier as a result.

CSJ creates censorship and paranoia akin to tyrannical regimes:

“According to CSJ ideologues, the very language we use to communicate our findings is a minefield of offenses. Professional societies, universities, and publishing houses have produced volumes dedicated to “inclusive” language that contain long lists of proscribed words that purportedly can cause offense and—according to the DEI bureaucracy that promulgates these initiatives—perpetuate inequality and exclusion of some groups, disadvantage women, and promote patriarchy, racism, sexism, ableism, and other isms. The lists of forbidden terms include “master database,” “older software,” “motherboard,” “dummy variable,” “black and white thinking,” “strawman,” “picnic,” and “long time no see” (Krylov 2021: 5371, Krylov et al. 2022: 32, McWhorter 2022, Paul 2023, Packer 2023, Anonymous 2022). The Google Inclusive Language Guide even proscribes the term “smart phones” (Krauss 2022). The Inclusivity Style Guide of the American Chemical Society (2023)—a major chemistry publisher of more than 100 titles—advises against using such terms as “double blind studies,” “healthy weight,” “sanity check,” “black market,” “the New World,” and “dark times”…”

New meanings that cause offense are projected onto benign words and their use is taken out of context. At this rate, everything people say will be considered offensive, including the most uncontroversial topic: the weather.

Science must be free not only from CSJ ideologies but also from corporate ideologies that promote profit margins. Examples from American history include Big Tobacco, sugar manufacturers, and Big Pharma.

Whitney Grace, June 28, 2023

Digital Work: Pick Up the Rake and Get with the Program

June 27, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

The sky is falling, according to “AI Is Killing the Old Web, And the New Web Struggles to Be Born.” What’s the answer? Read publications like the Verge online, of course. At least, that is the message I received from this essay. (I think I could hear the author whispering, “AI will kill us all, and I will lose my job. But this essay is a rizz. NYT, here I come.”)


This grumpy young person says, “My brother dropped the car keys in the leaves. Now I have to rake — like actually rake — to find them. My brother is a dork and my life is over.” Is there an easy, quick fix? No, the sky — not the leaves — is falling when it comes to finding information, according to the Verge, a Silicon Valley-type “real” news outfit. MidJourney, you have almost captured the dour look of a young person who must do work.

I noted this statement in the essay:

AI-generated misinformation is insidious because it’s often invisible. It’s fluent but not grounded in real-world experience, and so it takes time and expertise to unpick. If machine-generated content supplants human authorship, it would be hard — impossible, even — to fully map the damage. And yes, people are plentiful sources of misinformation, too, but if AI systems also choke out the platforms where human expertise currently thrives, then there will be less opportunity to remedy our collective errors.

Thump. The sky allegedly has fallen. The author, like the teen in the illustration, is faced with work; that is, the task of raking, bagging, and hauling the trash to the burn pit.

What a novel concept! Intellectual work; that is, sifting through information and discarding the garbage. Prior to Gutenberg, one asked around, found a person who knew something, and asked the individual, “How do I make a horseshoe?” After Gutenberg, one had to find, read, and learn information. With online, free services are supposed to just cough up the answer. The idea is that the leaves put themselves in the garbage bags and the missing keys appear. It’s magic or one of those Apple tracking devices.

News flash.

Each type of finding tool requires work. Yep, effort. In order to locate information, one has to do work. Does the thumb typing, TikTok consuming person want to do work? From my point of view, work is not on the menu at Philz Coffee.

New tools, different finding methods, and effort are required to rake the intellectual leaves and reveal the lawn. In the comments to the article, Barb3d says:

It’s clear from his article that James Vincent is more concerned about his own relevance in an AI-powered future than he is about the state of the web. His alarmist view of AI’s role in the web landscape appears to be driven more by personal fear of obsolescence than by objective analysis.

My view is that the Verge is concerned about its role as a modern Oracle of Delphi. The sky-is-falling angle itself is click bait. The silliness of the Silicon Valley “real” news outfit vibrates in the write up. I would point out that the article itself is derivative of another article from an online service, Tom’s Hardware.

The author allegedly talked to one expert in hiking boots. That’s a good start. The longest journey begins with a single step. But learning how to make a horseshoe and forming an opinion about which boot to purchase are two different tasks. One is instrumental and the other is fashion.

No, software advances won’t kill the Web as “we” know it. As Barb3d says, “Adapt.” Or in my lingo, pick up the rake, quit complaining, and find the keys.

Stephen E Arnold, June 27, 2023

Google: I Promise to Do Better. No, Really, Really Better This Time

June 27, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

The UK online publication The Register made available this article: “Google Accused of Urging Android Devs to Mislabel Apps to Get Forbidden Kids Ad Data.” The write up is not about TikTok. The subject is Google and an interesting alleged action by the online advertising company.


The high school science club member who pranked the principal says when caught: “Listen to me, Mr. Principal. I promise I won’t make that mistake again. Honest. Cross my heart and hope to die. Boy scout’s honor. No, really. Never, ever, again.” The illustration was generated by the plagiarism-free MidJourney.

The write up states as “actual factual” behavior by the company:

The complaint says that both Google and app developers creating DFF apps stood to gain by not applying the strict “intended for children” label. And it claims that Google incentivized this mislabeling by promising developers more advertising revenue for mixed-audience apps.

The idea is that intentionally assigned metadata made it possible for Google to acquire information about a child’s online activity.

My initial reaction was, “What’s new? Google says one thing and then demonstrates its adolescent sense of cleverness via a workaround?”

After a conversation with my team, I formulated a different hypothesis; specifically, Google has institutionalized mechanisms to make it possible for the company’s actual behavior to be whatever the company wants its behavior to be.

One can hope this was a one-time glitch. My “different hypothesis” points to a cultural and structural policy to make it possible for the company to do what’s necessary to achieve its objective.

Stephen E Arnold, June 27, 2023

Are AI UIs Really Better?

June 27, 2023

User experience design firm Nielsen Norman Group believes advances in AI define an entirely new way of interacting with computers. Writer and company cofounder Jakob Nielsen asserts, “AI: First New UI Paradigm in 60 Years.” We would like to point out natural language is not new, but we acknowledge there are now machine resources and software that make methods more useful. Do they rise to the level of a shiny new paradigm?

Nielsen begins with a little history lesson. First came batch processing in 1945 — think stacks of punch cards and reams of folded printouts. It was an unwieldy and inconvenient system, to say the least. Then around 1964 command-based interaction took over, evolving through the years from command-line programming to graphical user interfaces. Nielsen describes why AI represents a departure from these methods:

“With the new AI systems, the user no longer tells the computer what to do. Rather, the user tells the computer what outcome they want. Thus, the third UI paradigm, represented by current generative AI, is intent-based outcome specification.”
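The distinction Nielsen draws can be sketched in a few lines of Python. This is a hypothetical illustration, not code from the article; the function names and the toy intent “resolver” are invented purely for contrast.

```python
# Hypothetical contrast between the two interaction styles.
# The names command_based and intent_based are illustrative only.

# Command-based: the user specifies each step the computer must take.
def command_based(text: str) -> str:
    words = text.split()                      # step 1: split into words
    words = [w.capitalize() for w in words]   # step 2: capitalize each word
    return " ".join(words)                    # step 3: rejoin

# Intent-based: the user states the desired outcome, and a system
# (here a toy lookup table standing in for an AI model) picks the steps.
def intent_based(text: str, intent: str) -> str:
    resolvers = {"title case this": str.title}
    return resolvers[intent](text)

print(command_based("the new ui paradigm"))
print(intent_based("the new ui paradigm", "title case this"))
```

The point of the sketch: with intent-based interaction the locus of control moves from the user’s explicit steps to the system’s interpretation of a goal, which is exactly the trade-off Nielsen flags below.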

Defining outcomes instead of steps — sounds great until one asks who’s in control. Not the user. The article continues:

“Do what I mean, not what I say is a seductive UI paradigm — as mentioned, users often order the computer to do the wrong thing. On the other hand, assigning the locus of control entirely to the computer does have downsides, especially with current AI, which is prone to including erroneous information in its results. When users don’t know how something was done, it can be harder for them to identify or correct the problem.”

Yes! Nielsen cites this flaw as a reason he will stick with graphic user interfaces, thank you very much. (Besides, he feels, visual information is easier to understand and interact with than text.) We would add a more sinister consideration: Is the system weaponized or delivering shaped information? Developers’ lack of transparency can hide not only honest mistakes but also biases and even intentional misinformation. We agree with Nielsen: We will stick with GUIs for a bit longer.

Cynthia Murrell, June 27, 2023

Amazon AWS PR: A Signal from a Weakening Heart?

June 26, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “Amazon’s vision: An AI Model for Everything.” Readers of these essays know that I am uncomfortable with categorical affirmatives like “all”, “every”, and “everything.” The article in Semafor (does the word remind you of a traffic light in Lima, Peru?) is an interview with a vice president of Amazon Web Services. AWS is part of the online bookstore and digital flea market available at Amazon.com. The write up asserts that AWS will offer an “AI model for everything.” Everything? That’s a modest claim for a fast moving and rapidly changing suite of technologies.

Amazon executives — unlike some high-technology firms’ professionals — are usually less visible. But here is Matt Wood, the VP of AWS, explaining the digital flea market’s approach to smart software manifested in AWS cloud technology. I thought AWS was numero uno in the cloud computing club. Big dogs don’t do much PR, but this is 2023, so adaptation is necessary, I assume. AWS is shadowed by Microsoft, allegedly number two in the Cloud Club. Make no mistake, the Softies and their good enough software are gunning for the top spot in a small but elite stratum of the techno world. The Google, poor Google, is lumbering through a cloud-bedecked market with its user-first, super duper promises for the future, panting “quantum,” “AI,” and “Office 365” with each painful step.


In a gym high above the clouds in a skyscraper in the Pacific Northwest, a high-powered denizen of the exclusive Cloud Club experiences a chest pain in the rarified air. After saying, “Hey, I am a-okay,” the sleek and successful member of an exclusive club yelps and grabs his chest. Those in the club express shock and dismay. But one person seems to smile. Is that a Microsoftie or a Googler looking just a little bit happy at the fellow member’s obvious distress? MidJourney cooked up this tasty illustration. Thanks, you plagiarism-free bot you.

The Semafor interview offers some statements about AWS’s goals. No information appears about AWS and its Byzantine cloud pricing policies, nor is much PR light shed on the yard-sale approach to third-party sourced products.

Here are three snippets which caught my attention. (I call these labored statements because each seems as if a committee of lawyers, blue chip consultants, and interns crafted them, but that’s just my opinion. You may find these gems worthy of writing on a note card and saving for those occasions when you need a snappy quotation.)

Labored statement one

But there’s an old Amazon adage that these things are usually an “and” and not an “or.” So we’re doing both.

Got that? Boolean, isn’t it? Even though Amazon AWS explained its smart software years ago, a fact I documented in an invited lecture I gave in 2019, the company has not delivered on its promise of “off the shelf, ready to run” models, packaged data sets, and easy-to-use methods so AWS customers could deploy smart software easily. Like Amazon’s efforts in blockchain, some ideational confections were in the AWS jungle. A suite of usable, problem-solving services was not. Has AWS pioneered in more than complicated cloud pricing?

Labored statement two

The ability to take that data and then take a foundational model and just contribute additional knowledge and information to it very quickly and very easily, and then put it into production very quickly and very easily, then iterate on it in production very quickly and very easily. That’s kind of the model that we’re seeing.

Ah, ha. I loved the “just.” Easy stuff. Digital Lego blocks. I once stayed in the Lego hotel. On arrival, I watched a team of Lego professionals trying to reassemble one of the Lego sculptures some careless child had knocked over. Little rectangles littered the hotel lobby. Two days later when I checked out, the Lego Star Wars figure was still being reassembled. I thought Lego toys were easy to use. Oh, well. My perception of AWS is that there are many, many components. Licensees can just assemble them as long as they have the time, expertise, and money. Is that the kind of model AWS will deliver or is delivering?

Labored statement three

ChatGPT may be the most successful technology demo since the original iPhone introduction. It puts a dent in the universe.

My immediate reaction: “What about fire, the wheel, printing, the Internet?” And I liked the fact that ChatGPT is a demonstration. Let me describe how Amazon handles its core functions. The anecdote dates from early 2022. I wrote about ordering an AMD Ryzen 5950 and receiving from Amazon a pair of red female-centric underwear.


This red female undergarment arrived after I ordered an AMD Ryzen 5950 CPU. My wife estimated the value of the giant sized personal item at about $4.00US. The 5950 cost me about $550.00US. I am not sure how a warehouse fulfillment professional or a poorly maintained robot picker could screw up my order. But Amazon pulled it off and then for almost a month insisted the panties were the CPU.

This picture shows the product sent to me by Amazon instead of an AMD Ryzen 5950 CPU. For the full story see “Amazon: Is the Company Losing Control of Essentials?” After three weeks of going back and forth with Amazon’s stellar customer service department, my money was refunded. I was told to keep the underwear, which now hangs on the corner of the computer with the chip. I was able to buy the chip for a lower price from B&H Photo Video. When I opened that package, I saw the AMD box, not a pair of cheap, made-heaven-knows-where panties.

What did that say about Amazon’s ability to drive the Bezos bulldozer now that the founder rides his yacht, lifts weights, and ponders how Elon Musk and SpaceX have become the go-to space outfit? Can Amazon deliver something the customer wants?

Several observations:

First, this PR effort is a signal that Amazon is aware that it is losing ground in the AI battle.

Second, the Amazon approach is unlikely to slow Microsoft’s body slam of commercial customers. Microsoft’s software may be “good enough” to keep Word and SharePoint lovers on the digital ranch.

Third, Amazon’s Bezos bulldozer drivers seem to have lost their GPS signal. May I suggest ordering a functioning GPS from Wal-Mart?

Basics, Amazon, basics, not words. Especially words like “everything.” Do one thing and do it well, please.

Stephen E Arnold, June 26, 2023

The New Ethics: Harvard Innovates Again

June 26, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I have no idea if the weird orange newspaper’s story “Harvard Dishonesty Expert Accused of Dishonesty” is on the money. I find it amusing and a useful insight into the antics of Ivory Tower professor behavior. As an old dinobaby, I have seen a number of examples of what one of Tennessee Williams’ well-adjusted characters called mendacity. And this Harvard confection is a topper.


The snagged wizard, in my mental theater, said, “I did not mean to falsify data, plagiarize, or concoct a modest amount of twaddle like the president of Stanford University. I apologize. I really am sorry. May I buy you a coffee? I could also write your child a letter of recommendation to Harvard admissions.” This touching and now all-too-common scene has been visualized by the really non-imitative MidJourney system.

The core of the “real news” story is captured in this segment of the article:

A high-profile expert on ethics and dishonesty is facing allegations of dishonesty in her own work and has taken administrative leave from Harvard Business School.

The “real news” article called attention to the behavior of the high profile expert; to wit:

In 2021, a 2012 paper on dishonesty by Gino, behavioral economist Dan Ariely and other co-authors was retracted from the journal Proceedings of the National Academy of Sciences after the Data Colada team suggested there was fraud in one of the experiments involved. [Ah, Data Colada, the apologizing professor’s pals.]

If true, the professor attacked the best-selling author and others for not being on the up and up. And that mudslinger from the dusty Wild West of Harvard’s ethics unit allegedly fudged information. That’s a slick play in my book.

What’s this say about the ethical compass of the professor, about Harvard’s hiring and monitoring processes, and about the failure of the parties to provide a comment to the weird orange newspaper?

Ah, no comment. A wise lawyer’s work possibly. An ethical wise lawyer.

Stephen E Arnold, June 26, 2023

The Future from the Masters of the Obvious

June 26, 2023

The last few years have seen many societal changes that, among other things, affect business operations. Gartner corrals these seismic shifts into six obvious considerations for its article, “6 Macro Factors Reshaping Business this Decade.” Contributor Jordan Turner writes:

“Executives will continue to grapple with a host of challenges during the 2020s, but from the maelstrom that was their first few years, new business opportunities will arise. ‘As we entered the 2020s, economies were already on the edge,’ says Mark Raskino, Distinguished VP Analyst at Gartner. ‘A decade-long boom, generated substantially from inexpensive finance and lower-cost energy, led to structural stresses such as highly leveraged debt, crumbling international alliances and bubble-like asset prices. We were overdue for a reckoning.’ Six macro factors that will reshape business this decade. The pandemic coincided with and catalyzed societal shifts, spurring a strategy reset for many industries. Executive leaders must acknowledge these six changes to reconsider how business will get done.”

Their list includes: the threat of recession, systemic mistrust, poor economic productivity, sustainability, a talent shortage, and emerging technologies. See the write-up for details on each. Not surprisingly, the emerging technologies list includes adaptive AI alongside the metaverse, platform engineering, sustainable technology and superapps. Unfortunately, the Gartner wizards omitted replacing consultants and analysts with smart software. That may be the most cost-effective transition for businesses yet the most detrimental to workers. We wonder why they left it out.

And grapple? Yes, grapple. I wonder if Gartner will have a special presentation and a conference about these. Attendees can grapple. Like Musk and Zuck?

Cynthia Murrell, June 26, 2023

Canada Bill C-18 Delivers a Victory: How Long Will the Triumph Pay Off in Cash Money?

June 23, 2023

News outlets make or made most of their money selling advertising. The idea was — when I worked at a couple of big news publishing companies — the audience for the content would attract those who wanted to reach the audience. I worked at the Courier-Journal & Louisville Times Co. before it dissolved into a Gannett marvel. If a used car dealer wanted to sell a 1980 Corvette, the choice was the newspaper or a free ad in what was called AutoTrader. This was a localized, printed collection of autos for sale. Some dealers advertised, but in the 1980s, individuals looking for a cheap or free way to pitch a vehicle loved AutoTrader. Despite a free option, the size of the readership and the sports news, comics, and obituaries made the Courier-Journal the must-have for a motivated seller.


Hannibal and his war elephant Zuckster survey the field of battle after Bill C-18 passes. MidJourney was the digital wonder responsible for this confection.

When I worked at the Ziffer in Manhattan, we published Computer Shopper. The biggest Computer Shopper had about 800 pages. It could have been bigger, but there were paper and press constraints, if I recall correctly. But I smile when I remember that 85 percent of those pages were paid advertisements. We had an audience, and those in the burgeoning computer and software business wanted to reach our audience. How many Ziffers remember the way publishing used to work?

When I read the National Post article titled “Meta Says It’s Blocking News on Facebook, Instagram after Government Passes Online News Bill,” I thought about the Battle of Cannae. The Romans had the troops, the weapons, and the psychological advantage. But Hannibal showed up and, if historical records are as accurate as a tweet, killed Romans and mercenaries. I think it may have been estimated that Roman whiz kids lost 40,000 troops and 5,000 cavalry along with the Roman strategic wizards Paulus, Servilius, and Atilius.

My hunch is that those who survived paid with labor or money to be allowed to survive. Being a slave in peak Rome was a dicey gig. Having a fungible skill like painting zowie murals was good. Having minimal skills? Well, someone has to work for nothing in the fields or quarries.

What’s the connection? The publishers are similar to the Roman generals. The bad guys are the digital rebels who are like Hannibal and his followers.

Back to the cited National Post article:

After the Senate passed the Online News Act Thursday, Meta confirmed it will remove news content from Facebook and Instagram for all Canadian users, but it remained unclear whether Google would follow suit for its platforms.  The act, which was known as Bill C-18, is designed to force Google and Facebook to share revenues with publishers for news stories that appear on their platforms. By removing news altogether, companies would be exempt from the legislation.

The idea is that US online services which touch most online users (maybe 90 or 95 percent in North America) will block news content. This means:

  1. Cash gushers from Facebook- and Google-type companies will not pay for news content. (This has some interesting downstream consequences but for this short essay, I want to focus on the “not paying” for news.)
  2. The publishers will experience a decline in traffic. Why? Without a “finding and pointing” mechanism, how would I find this “real news” article published by the National Post? (FYI: I think of this newspaper as Canada’s USA Today, which was a Gannett crown jewel. How is that working out for Gannett today?)
  3. Rome triumphed only to fizzle out again. And Hannibal? He’s remembered for the elephants-through-the-Alps trick. Are man’s efforts ultimately futile?

Consider what happens next: the clicks will stop accruing to the publishers’ Web sites. How will the publishers generate traffic? SEO. Yeah, good luck with that.

Is there an alternative?

Yes, buy Facebook and Google advertising. I call this pay to play.

The Canadian news outlets will have to pay for traffic. I suppose companies like Tyler Technologies, which has an office in Vancouver I think, could sell ads for the National Post’s stories, but that seems to be a stretch. Similarly the National Post could buy ads on the Embroidery Classics & Promotions (Calgary) Web site, but that may not produce too many clicks for the Canadian news outfits. I estimate one or two a month.

Bill C-18 may not have the desired effect. Facebook and Facebook-type outfits will want to sell advertising to the Canadian publishers in my opinion. And without high-impact, consistent and relevant online advertising, state-of-art marketing, and juicy content, the publishers may find themselves either impaled on their digital hopes or placed in servitude to the Zuck and his fellow travelers.

Are these publishers able to pony up the cash and make the appropriate decisions to generate revenues like the good old days?

Sure, there’s a chance.

But it’s a long shot. I estimate the chances as similar to King Charles’ horse winning the King George V Stakes in 2024; that is, 18 to 1. But Desert Hero pulled it off. Who is rooting for the Canadian publishers?

Stephen E Arnold, June 23, 2023

Have You Heard the AI Joke about? Yeah, Over and Over Again

June 23, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Developers have been unable to program one key facet of human intelligence into AI: a sense of humor. Oh, ChatGPT has jokes, but its repertoire is limited. And when asked to explain why something is or is not funny, it demonstrates it just doesn’t get it. Ars Technica informs us, “Researchers Discover that ChatGPT Prefers Repeating 25 Jokes Over and Over.”


A young person in the audience says to the standup comedian: “Hey, dude. Your jokes suck. Did an AI write them for you?” Despite my efforts to show the comedian getting bombarded with apple cores, bananas, and tomatoes, MidJourney would only produce this sanitized image. It’s great, right? Thanks, MidJourney.

Reporter Benj Edwards writes:

“Two German researchers, Sophie Jentzsch and Kristian Kersting, released a paper that examines the ability of OpenAI’s ChatGPT-3.5 to understand and generate humor. In particular, they discovered that ChatGPT’s knowledge of jokes is fairly limited: During a test run, 90 percent of 1,008 generations were the same 25 jokes, leading them to conclude that the responses were likely learned and memorized during the AI model’s training rather than being newly generated.”
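The frequency analysis the researchers describe boils down to counting duplicates across many generations. A minimal Python sketch, using an invented stand-in generator rather than real model calls, might look like this:

```python
from collections import Counter

# fake_generate stands in for querying a chat model; it is rigged so
# that roughly 90 percent of outputs repeat three memorized jokes.
def fake_generate(i: int) -> str:
    memorized = [
        "Why don't scientists trust atoms? Because they make up everything.",
        "Why did the scarecrow win an award? He was outstanding in his field.",
        "What do you call a fake noodle? An impasta.",
    ]
    return memorized[i % 3] if i % 10 else f"unique joke #{i}"

samples = [fake_generate(i) for i in range(1, 1009)]  # 1,008 generations
counts = Counter(samples)
top25_share = sum(n for _, n in counts.most_common(25)) / len(samples)
print(f"{len(counts)} distinct jokes; top 25 cover {top25_share:.0%}")
```

Against real model output one would also normalize whitespace and punctuation before counting, since near-duplicate phrasings would otherwise register as distinct jokes.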

See the article, if curious, for the algorithm’s top 10 dad jokes and their frequencies within the 1,008 joke sample. There were a few unique jokes in the sample, but the AI seems to have created them by combining elements of others. And often, those mashups were pure nonsense. We learn:

“The researchers found that the language model’s original creations didn’t always make sense, such as, ‘Why did the man put his money in the blender? He wanted to make time fly.’ When asked to explain each of the 25 most frequent jokes, ChatGPT mostly provided valid explanations according to the researchers’ methodology, indicating an ‘understanding’ of stylistic elements such as wordplay and double meanings. However, it struggled with sequences that didn’t fit into learned patterns and couldn’t tell when a joke wasn’t funny. Instead, it would make up fictional yet plausible-sounding explanations.”

Plausible sounding, perhaps, but gibberish nonetheless. See the write-up for an example. ChatGPT simply does not understand what it means for something to be funny. Humor, after all, is a quintessentially human characteristic. Algorithms may get better at mimicking it, but we must never lose sight of the fact that AI is software, incapable of amusement. Or any other emotion. If we begin thinking of AI as human, we are in danger of forgetting the very real limits of machine learning as a lens on the world.

Cynthia Murrell, June 23, 2023
