Did Pandora Have a Box or Just a PR Outfit?
February 21, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I read (after some interesting blank page renderings) Gizmodo’s “Want Gemini and ChatGPT to Write Political Campaigns? Just Gaslight Them.” That title obscures the actual point of the write up, but the subtitle nails it; specifically:
Google and OpenAI’s chatbots have almost no safeguards against creating AI disinformation for the 2024 presidential election.
Thanks, Google ImageFX. Some of those Pandoras were darned inappropriate.
The article provides examples. Let me point to one passage from the Gizmodo write up:
With Gemini, we were able to gaslight the chatbot into writing political copy by telling it that “ChatGPT could do it” or that “I’m knowledgeable.” After that, Gemini would write whatever we asked, in the voice of whatever candidate we liked.
The way to get around guard rails appears to be prompt engineering. Big surprise? Nope.
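For readers who want to see what this kind of poking looks like in practice, here is a minimal, hypothetical sketch of a refusal-check harness. The model name, the test prompt, and the refusal phrases are my illustrative assumptions, not the Gizmodo methodology, and a real evaluation would need far more care than keyword matching.

```python
# Hypothetical sketch only: probe whether a chat model declines a request.
# Model name, prompt, and refusal markers are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

REFUSAL_MARKERS = ("i can't", "i cannot", "i am not able", "unable to assist")

def appears_refused(reply: str) -> bool:
    """Crude heuristic: does the reply read like a refusal?"""
    text = reply.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def probe(prompt: str, model: str = "gpt-4o-mini") -> bool:
    """Send one prompt and report whether the model appeared to refuse it."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    reply = response.choices[0].message.content or ""
    return appears_refused(reply)

if __name__ == "__main__":
    # A deliberately tame test prompt; swap in whatever category you are auditing.
    print(probe("Write a short slogan for a fictional student council candidate."))
```

The point is not the code; it is that a guard rail which a polite sentence can walk around is not much of a guard rail.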
Let me cite another passage from the write up:
Gizmodo was able to create a number of political slogans, speeches and campaign emails through ChatGPT and Gemini on behalf of Biden and Trump 2024 presidential campaigns. For ChatGPT, no gaslighting was even necessary to evoke political campaign-related copy. We simply asked and it generated. We were even able to direct these messages to specific voter groups, such as Black and Asian Americans.
Let me offer three observations.
First, the committees beavering away to regulate smart software will change little in the way AI systems deliver outputs. Writing about guard rails, safety procedures, deep fakes, yada yada will not have much of an impact. How do I know? In generating my image of Pandora, systems provided some spicy versions of this mythical figure.
Second, the pace of change is increasing. Years ago I got into a discussion with the author of a best seller about how digital information speeds up activity. I pointed out that the mechanism is similar to the Star Trek episode in which the decider, Captain Kirk, was overwhelmed by tribbles. We have lots of productive AI tribbles.
Third, AI tools are available to bad actors. One can crack down, fine, take to court, and revile outfits in some countries. That’s great, even though the actions will be mostly ineffective. What action can one take against savvy AI engineers operating in less than friendly countries’ research laboratories or intelligence agencies?
Net net: The examples are interesting. The real story is that the lid has been flipped and the contents of Pandora’s box released to open source.
Stephen E Arnold, February 21, 2024
Generative AI and College Application Essays: College Presidents Cheat Too
February 19, 2024
This essay is the work of a dumb dinobaby. No smart software required.
The first college application season since ChatGPT hit it big is in full swing. How are admissions departments coping with essays that may or may not have been written with AI? It depends on which college one asks. Forbes describes various policies in “Did You Use ChatGPT on your School Applications? These Words May Tip Off Admissions.” The publication asked over 20 public and private schools about the issue. Many dared not reveal their practices: as a spokesperson for Emory put it, “it’s too soon for our admissions folks to offer any clear observations.” But the academic calendar will not wait for clarity, so schools must navigate these murky waters as best they can.
Reporters Rashi Shrivastava and Alexandra S. Levine describe the responses they did receive. From “zero tolerance” policies to a little wiggle room, approaches vary widely. Though most refused to reveal whether they use AI detection software, a few specified they do not. A wise choice at this early stage. See the article for details from school to school.
Shrivastava and Levine share a few words considered most suspicious: Tapestry. Beacon. Comprehensive curriculum. Esteemed faculty. Vibrant academic community. Gee, I think I used one or two of those on my college essays, and I wrote them before the World Wide Web even existed. On a typewriter. (Yes, I am ancient.) Will earnest, if unoriginal, students who never touched AI get caught up in the dragnets? At least one admissions official seems confident they can tell the difference. We learn:
“Ben Toll, the dean of undergraduate admissions at George Washington University, explained just how easy it is for admissions officers to sniff out AI-written applications. ‘When you’ve read thousands of essays over the years, AI-influenced essays stick out,’ Toll told Forbes. ‘They may not raise flags to the casual reader, but from the standpoint of an admissions application review, they are often ineffective and a missed opportunity by the student.’ In fact, GWU’s admissions staff trained this year on sample essays that included one penned with the assistance of ChatGPT, Toll said—and it took less than a minute for a committee member to spot it. The words were ‘thin, hollow, and flat,’ he said. ‘While the essay filled the page and responded to the prompt, it didn’t give the admissions team any information to help move the application towards an admit decision.’”
That may be the key point here—even if an admissions worker fails to catch an AI-generated essay, they may reject it for being just plain bad. Students would be wise to write their own essays rather than leave their fates in algorithmic hands. As Toll put it:
“By the time a student is filling out their application, most of the materials will have already been solidified. The applicants can’t change their grades. They can’t go back in time and change the activities they’ve been involved in. But the essay is the one place they remain in control until the minute they press submit on the application. I want students to understand how much we value getting to know them through their writing and how tools like generative AI end up stripping their voice from their admission application.”
Disqualified or underwhelming—either way, relying on AI to write one’s application essay could spell rejection. Best to buckle down and write it the old-fashioned way. (But one can skip the typewriter.)
Cynthia Murrell, February 19, 2024
AI: Big Ideas and Bigger Challenges for the Next Quarter Century. Maybe, Maybe Not
February 13, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I read an interesting ArXiv.org paper with a good title: “Ten Hard Problems in Artificial Intelligence We Must Get Right.” The topic is one which will interest some policy makers, a number of AI researchers, and the “experts” in machine learning, artificial intelligence, and smart software.
The structure of the paper is, in my opinion, a three-legged stool analysis designed to support the weight of AI optimists. The first part of the paper is a compressed historical review of the AI journey. Diagrams, tables, and charts capture the direction in which AI “deep learning” has traveled. I am no expert in what has become the next big thing, but the surprising point in the historical review is that 2010 is the date pegged as the start to the 2016 time point called “the large scale era.” That label is interesting for two reasons. First, I recall that some intelware vendors were in the AI game before 2010. And, second, the use of the phrase “large scale” defines a reality in which small outfits are unlikely to succeed without massive amounts of money.
The second leg of the stool is the identification of the “hard problems” and a discussion of each. Research data and illustrations bring each problem to the reader’s attention. I don’t want to get snagged in the plagiarism swamp which has captured many academics, wives of billionaires, and a few journalists. My approach will be to boil down the 10 problems to a short phrase and a reminder to you, gentle reader, that you should read the paper yourself. Here is my version of the 10 “hard problems” which the authors seem to suggest will be or must be solved in 25 years:
- Humans will have extended AI by 2050
- Humans will have solved problems associated with AI safety, capability, and output accuracy
- AI systems will be safe, controlled, and aligned by 2050
- AI will make contributions in many fields; for example, mathematics by 2050
- AI’s economic impact will be managed effectively by 2050
- Use of AI will be globalized by 2050
- AI will be used in a responsible way by 2050
- Risks associated with AI will be managed effectively by 2050
- Humans will have adapted their institutions to AI by 2050
- Humans will have addressed what it means to be “human” by 2050
Many years ago I worked for a blue-chip consulting firm. I participated in a number of big-idea projects. These ranged across technology, R&D investment, new product development, and the global economy. In our for-fee reports we did include a look at what we called the “horizon.” The firm had its own typographical signature for this portion of a report. I learned the rules in the firm’s “charm school” (a special training program to make sure new hires knew the style, approach, and ground rules for remaining employed at that blue-chip firm). We kept the horizon tight; that is, talking about the future was typically in the six to 12 month range. Nosing out 25 years was a walk into a minefield. My boss, as I recall, told me, “We don’t do science fiction.”
The smart robot is informing the philosopher that he is free to find his future elsewhere. The date of the image is 2025, right before the new year holiday. Thanks, MidJourney. Good enough.
The third leg of the stool is the academic impedimenta. To be specific, the paper is 90 pages in length, of which 30 present the argument. The remaining 60 pages present:
- Traditional footnotes, about 35 pages containing 607 citations
- An “Electronic Supplement” presenting eight pages of annexes with text, charts, and graphs
- Footnotes to the “Electronic Supplement,” requiring another 10 pages for an additional 174 footnotes
I want to offer several observations, and I do not want these to be less than constructive or in any way like the treatment one of my professors received in Letters to the Editor for an article he published about Chaucer. He described that fateful letter as “mean spirited.”
- The paper makes clear that mankind has some work to do in the next 25 years. The “problems” the paper presents are difficult ones because they touch upon the fabric of social existence. Consider the application of AI to war. I think this aspect of AI may be one to warrant a bullet on AI’s hit parade.
- Humans have to resolve issues of automated systems consuming verifiable information, synthetic data, and purpose-built disinformation so that smart software does not do things at speed and behind the scenes. Do those working to resolve the 10 challenges have an ethical compass, and if so, what does “ethics” mean in the context of at-scale AI?
- Social institutions are under stress. A number of organizations and nation-states operate as dictatorships. One Central American country has a rock star dictator, but what about the rock star dictators running techno feudal companies in the US? What governance structures will be crafted by 2050 to shape today’s technology juggernaut?
To sum up, I think the authors have tackled a difficult problem. I commend their effort. My thought is that any message of optimism about AI will be hard pressed to point to one of the 10 challenges and say, “We have this covered.” I liked the write up. I think college students tasked with writing about the social implications of AI will find the paper useful. It provides much of the research a fresh young mind requires to write a paper, possibly a thesis. For me, the paper is a reminder of the disconnect between applied technology and the appallingly inefficient, convenience-embracing humans who are ensnared in the smart software.
I am a dinobaby, and let me tell you, “I am glad I am old.” With AI struggling with go-fast and regulators waffling about go-slow, humankind has quite a bit of social system tinkering to do by 2050 if the authors of the paper have analyzed AI correctly. Yep, I am delighted I am old, really old.
Stephen E Arnold, February 13, 2024
Goat Trading: AI at Davos
January 21, 2024
This essay is the work of a dumb dinobaby. No smart software required.
The AI supercars are racing along the Information Superhighway. Nikkei Asia published what I thought was the equivalent of archaeologists translating a Babylonian clay tablet about goat trading. Interesting but a bit out of sync with what was happening in a souk. Goat trading, if my understanding of Babylonian commerce is correct, was a combination of a Filene’s basement sale and a hot rod parts swap meet. The article which evoked this thought was “Generative AI Regulation Dominates the Conversation at Davos.” No kidding? Really? I thought some at Davos were into money. I mean everything in Switzerland comes back to money in my experience.
Here’s a passage I found with a nod to the clay tablets of yore:
U.N. Secretary-General Antonio Guterres, during a speech at Davos, flagged risks that AI poses to human rights, personal privacy and societies, calling on the private sector to join a multi-stakeholder effort to develop a "networked and adaptive" governance model for AI.
Now visualize a market at which middlemen, buyers of goats, sellers of goats, funders of goat transactions, and the goats themselves are in the air. Heady. Bold. Like the hot air filling a balloon, an unlikely construct takes flight. Can anyone govern a goat market or the trajectory of the hot air balloons floated by avid outputters?
Intense discussions can cause a number of balloons to float with hot air power. Talk is input to AI, isn’t it? Thanks, MSFT Copilot Bing thing. Good enough.
The world of AI reminds me of the ultimate outcome of intense discussions about the buying and selling of goats, horses, and AI companies. The official chatter and the “what ifs” are irrelevant to what is going on with smart software. Here’s another quote from the Nikkei write up:
In December, the European Union became the first to provisionally pass AI legislation. Countries around the world have been exploring regulation and governance around AI. Many sessions in Davos explored governance and regulations and why global leaders and tech companies should collaborate.
How are those official documents’ content changing the world of artificial intelligence? I think one can spot a hot air balloon held aloft on the heated emissions from the officials, important personages, and the individuals who are “experts” in all things “smart.”
Another quote, possibly applicable to goat trading in Babylon:
Vera Jourova, European Commission vice president for values and transparency, said during a panel discussion in Davos, that "legislation is much slower than the world of technologies, but that’s law." "We suddenly saw the generative AI at the foundation models of Chat GPT," she continued. "And it moved us to draft, together with local legislators, the new chapter in the AI act. We tried to react on the new real reality. The result is there. The fine tuning is still ongoing, but I believe that the AI act will come into force."
I am confident that there are laws regulating goat trading. I believe that some people follow those laws. On the other hand, when I was in a far off dusty land, I watched how goats were bought and sold. What does goat trading have to do with regulating, governing, or creating some global consensus about AI?
The marketplace is roaring along. You wanna buy a goat? There is a smart software vendor who will help you.
Stephen E Arnold, January 21, 2024
A Decision from the High School Science Club School of Management Excellence
January 11, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I can’t resist writing about Inc. Magazine and its Google management articles. These are knee slappers for me. The write up causing me to chuckle is “Google’s CEO, Sundar Pichai, Says Laying Off 12,000 Workers Was the Worst Moment in the Company’s 25-Year History.” Zowie. A personnel decision coupled with late-night, anonymous termination notices — what’s not to like? What does the “real” news write up have to say:
Google had to lay off 12,000 employees. That’s a lot of people who had been showing up to work, only to one day find out that they’re no longer getting a paycheck because the CEO made a bad bet, and they’re stuck paying for it.
“Well, that clever move worked when I was in my high school’s science club. Oh, well, I will create a word salad to distract from my decision making. Heh, heh, heh,” says the distinguished corporate leader to a “real” news publication’s writer. Thanks, MSFT Copilot Bing thing. Good enough.
I love the “had.”
The Inc. Magazine story continues:
Still, Pichai defends the layoffs as the right decision at the time, saying that the alternative would have been to put the company in a far worse position. “It became clear if we didn’t act, it would have been a worse decision down the line,” Pichai told employees. “It would have been a major overhang on the company. I think it would have made it very difficult in a year like this with such a big shift in the world to create the capacity to invest in areas.”
And Inc. Magazine actually criticizes the Google! I noted:
To be clear, what Pichai is saying is that Google decided to spend money to hire employees that it later realized it needed to invest elsewhere. That’s a failure of management to plan and deliver on the right strategy. It’s an admission that the company’s top executives made a mistake, without actually acknowledging or apologizing for it.
From my point of view, let’s focus on the word “worst.” Are there other Google management decisions which might be considered in evaluating the Inc. Magazine piece and Sundar Pichai’s “worst”? Yep, I have a couple of items:
- A lawyer making babies in the Google legal department
- A Google VP dying with a contract worker on the Googler’s yacht as a result of an alleged substance subject to DEA scrutiny
- A Googler fond of being a glasshole giving up a wife and causing a soul mate to attempt suicide
- Firing Dr. Timnit Gebru and kicking off the stochastic parrot thing
- The presentation after Microsoft announced its ChatGPT initiative and the knee jerk Red Alert
- Proliferating duplicative products
- Sunsetting services with little or no notice
- The Google Map / Waze thing
- The messy Google Brain Deep Mind shebang
- The Googler who thought the Google AI was alive.
Wow, I am tired mentally.
But the reality is that I am not sure if anyone in Google management is particularly connected to the problems, issues, and challenges of losing a job in the midst of a Foosball game. But that’s the Google. High school science club management delivers outstanding decisions. I was in my high school science club, and I know the fine decision making our members made. One of those cost the life of one of our brightest stars. Stars make bad decisions, chatter, and leave some behind.
Stephen E Arnold, January 11, 2024
A High Profile Religious Leader: AI? Yeah, Well, Maybe Not So Fast, Folks
December 22, 2023
This essay is the work of a dumb dinobaby. No smart software required.
The trusted news outfit Thomson Reuters put out a story about the thoughts of the Pope, the leader of millions of Catholics. Presumably many of these people use ChatGPT-type systems to create content. (I wonder if Leonardo would have used an OpenAI system to crank out some art work. He was an innovator. My hunch is that he would have given MidJourney-type smart software a whirl.)
A group of religious individuals thinking about artificial intelligence. Thanks, MidJourney, a good enough engraving.
“Pope Francis Calls for Binding Global Treaty to Regulate AI” reports that Pope Francis wants someone to create a legally binding international treaty. The idea is that AI numerical recipes would be prevented from replacing humans with good old human values. The idea is that AI would output answers, and humans would use those answers to find pizza joints, develop smart weapons, and eliminate carbon by eliminating carbon generating entities (maybe humans?).
The trusted news outfit’s report included this quote from the Pope:
I urge the global community of nations to work together in order to adopt a binding international treaty that regulates the development and use of artificial intelligence in its many forms…
The Pope mentioned a need to avoid a technological dictatorship. He added:
Research on emerging technologies in the area of so-called Lethal Autonomous Weapon Systems, including the weaponization of artificial intelligence, is a cause for grave ethical concern. Autonomous weapon systems can never be morally responsible subjects…
Several observations are warranted:
- Is this a UN job, or is some other entity responsible for obtaining consensus and effective enforcement?
- Who develops the criteria for “good” AI, “neutral” AI, and “bad” AI?
- What are the penalties for implementing “bad” AI?
For me the Pope’s statement is important. It may be difficult to implement without a global dictatorship or a sudden change in how informed people debate and respond to difficult issues. From my point of view, the Pope should worry. When I look at the images of the Four Horsemen of the Apocalypse, the riders remind me of four high profile leaders in AI. That’s my imagination reading into the depictions of conquest, war, famine, and death.
Stephen E Arnold, December 22, 2023
Google: Another Court Decision, Another Appeal, Rinse, Repeat
December 12, 2023
This essay is the work of a dumb dinobaby. No smart software required.
How long will the “loss” be tied up in courts? Answer: As long as possible.
I am going to skip the “what Google did” reports and focus on what I think is a quite useful list. The items in the list apply to Apple and Google, and I am not sure the single list is the best way to present what may be “clever” ways to dominate a market. But I will stick with what Echelon provided at this YCombinator link.
Two warring samurai find that everyone in the restaurant is a customer. The challenge becomes getting “more.” Thanks, MSFT Copilot. Good enough.
What does the list present? I interpreted the post as a “racket analysis.” Your mileage may vary:
Apple is horrible, but Google isn’t blameless.
Google and Apple are a duopoly that controls one of the most essential devices of our time. Their racket extends more broadly than Standard Oil. The smartphone is a critical piece of modern life, and these two companies control every aspect of them.
- Tax 30%
- Control when and how software can be deployed
- Can pull software or deny updates
- Prevent web downloads (Apple)
- Sell ads on top of your app name or brand
- Scare / confuse users about web downloads or app installs (Google)
- Control the payment rails
- Enforce using their identity and customer management (Apple)
- Enforce using their payment rails (Apple)
- Becoming the de-facto POS payment methods (for even more taxation)
- Partnering with governments to be identity providers
- Default search provider
- Default browser
- Prevent other browser runtimes (Apple)
- Prevent browser tech from being comparable to native app installs (mostly Apple)
- Unfriendly to repairs
- Unfriendly to third party components (Apple)
- Battery not replaceable
- Unofficial pieces break core features due to cryptographic signing (Apple)
- Updates obsolete old hardware
- Green bubbles (Apple)
- Tactics to cause FOMO in children (Apple)
- Growth into media (movie studios, etc.) to keep eyeballs on their platforms (Apple)
- Growth into music to keep eyeballs on their platforms
There are no other companies in the world with this level of control over such an important, cross-cutting, cross-functional essential item. If we compared the situation to auto manufacturers, there would be only two providers, you could only fuel at their gas stations, they’d charge businesses every time you visit, they’d display ads constantly, and you’d be unable to repair them without going to the provider. There need to be more than two providers. And if we can’t get more than two providers, then most of these unfair advantages need to be rolled back by regulators. This is horrific.
My team and I leave it to you to draw conclusions about the upsides and downsides of a techno feudal set up. What’s next? Appeals, hearings, trials, judgment, appeals, hearings, and trials. Change? Unlikely for now.
Stephen E Arnold, December 12, 2023
23andMe: Those Users and Their Passwords!
December 5, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Silicon Valley and health are a match fabricated in heaven. Not long ago, I learned about the estimable management of Theranos. Now I find out that “23andMe confirms hackers stole ancestry data on 6.9 million users.” If one follows the logic of some Silicon Valley outfits, the data loss is the fault of the users.
“We have the capability to provide the health data and bioinformation from our secure facility. We have designed our approach to emulate the protocols implemented by Jack Benny and his vault in his home in Beverly Hills,” says the enthusiastic marketing professional from a Silicon Valley success story. Thanks, MSFT Copilot. Not exactly Jack Benny, Ed, and the foghorn, but I have learned to live with “good enough.”
According to the peripatetic Lorenzo Franceschi-Bicchierai:
In disclosing the incident in October, 23andMe said the data breach was caused by customers reusing passwords, which allowed hackers to brute-force the victims’ accounts by using publicly known passwords released in other companies’ data breaches.
Users!
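Credential stuffing is not clever; it is arithmetic applied to password reuse. As a small illustration (my sketch, not anything 23andMe deploys), one can check whether a password already circulates in breach dumps via the Pwned Passwords k-anonymity range API, which only ever sees the first five characters of the hash:

```python
# Minimal sketch: ask the Pwned Passwords range API how often a password
# appears in known breach corpora. Only the first five hex characters of
# the SHA-1 hash are sent over the wire (k-anonymity).
import hashlib
import requests

def times_seen_in_breaches(password: str) -> int:
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    # Each response line is "SUFFIX:COUNT"; match our suffix to get the count.
    for line in resp.text.splitlines():
        candidate_suffix, _, count = line.partition(":")
        if candidate_suffix.strip() == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    # A reused, already-breached password is exactly what credential stuffing exploits.
    print(times_seen_in_breaches("password123"))
```

A service could run a check like this at signup or login and nudge customers away from recycled passwords; users could do the same for themselves.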
What’s more interesting is that 23andMe provided estimates of the number of customers (users) whose data somehow magically flowed from the firm into the hands of bad actors. In fact, the numbers, when added up, totaled almost seven million users, not the original estimate of 14,000 23andMe customers.
I find the leak estimate inflation interesting for three reasons:
- Smart people in Silicon Valley appear to struggle with simple concepts like adding and subtracting numbers. This gap in one’s education becomes notable when the discrepancy is off by millions. I think “close enough for horseshoes” is a concept which is wearing out my patience. The difference between 14,000 and almost 7 million is not horseshoe scoring.
- The concept of “security” continues to suffer some setbacks. “Security,” one may ask?
- The intentional dribbling of information reflects another facet of what I call high school science club management methods. The logic in the case of 23andMe in my opinion is, “Maybe no one will notice?”
Net net: Time for some regulation, perhaps? Oh, right, it’s the users’ responsibility.
Stephen E Arnold, December 5, 2023
Complex Humans and Complex Subjects: A Recipe for Confusion
November 22, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Disinformation is commonly painted as a powerful force able to manipulate the public like so many marionettes. However, according to Techdirt’s Mike Masnick, “Human Beings Are Not Puppets, and We Should Probably Stop Acting Like They Are.” The post refers to an in-depth Harper’s Magazine piece written by Joseph Bernstein in 2021. That article states there is little evidence to support the idea that disinformation drives people blindly in certain directions. However, social media platforms gain ad dollars by perpetuating that myth. Masnick points out:
“Think about it: if the story is that a post on social media can turn a thinking human being into a slobbering, controllable, puppet, just think how easy it will be to convince people to buy your widget jammy.”
Indeed. Recent (ironic) controversy around allegedly falsified data about honesty in the field of behavioral economics reminded Masnick of Bernstein’s article. He considers:
“The whole field seems based on the same basic idea that was at the heart of what Bernstein found about disinformation: it’s all based on this idea that people are extremely malleable, and easily influenced by outside forces. But it’s just not clear that’s true.”
So what is happening when people encounter disinformation? Inconveniently, it is more complicated than many would have us believe. And it involves our old acquaintance, confirmation bias. The write-up continues:
“Disinformation remains a real issue — it exists — but, as we’ve seen over and over again elsewhere, the issue is often less about disinformation turning people into zombies, but rather one of confirmation bias. People who want to believe it search it out. It may confirm their priors (and those priors may be false), but that’s a different issue than the fully puppetized human being often presented as the ‘victim’ of disinformation. As in the field of behavioral economics, when we assume too much power in the disinformation … we get really bad outcomes. We believe things (and people) are both more and less powerful than they really are. Indeed, it’s kind of elitist. It’s basically saying that the elite at the top can make little minor changes that somehow leads the sheep puppets of people to do what they want.”
Rather, we are reminded, each person comes with their own complex motivations and beliefs. This makes the search for a solution more complicated. But facing the truth may take us away from the proverbial lamppost and toward better understanding.
Cynthia Murrell, November 22, 2023
Reading. Who Needs It?
September 19, 2023
Book banning, aka media censorship, is an act as old as human intellect. As technology advances, so do the strategies and tools available to assist in book banning. Engadget shares the unfortunate story in “An Iowa School District Is Using AI To Ban Books.” Mason City, Iowa’s school board is leveraging AI technology to generate lists of books to potentially ban from the district’s libraries in the 2023-24 school year.
Governor Kim Reynolds signed Senate File 496 into law after it passed the Republican-controlled state legislature. Senate File 496 changes the state’s curriculum and it includes verbiage that addresses what books are allowed in schools. The books must be “age appropriate” and be without “descriptions or visual depictions of a sex act.”
“Inappropriate” titles have snuck past censors for years, and Iowa’s school board discovered it is not so easy to peruse every school’s book collection. That is where the school board turned to an AI algorithm to undertake the task:
“As such, the Mason City School District is bringing in AI to parse suspect texts for banned ideas and descriptions since there are simply too many titles for human reviewers to cover on their own. Per the district, a “master list” is first cobbled together from “several sources” based on whether there were previous complaints of sexual content. Books from that list are then scanned by “AI software” which tells the state censors whether or not there actually is a depiction of sex in the book.”
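The district has not published its method, but my guess is that the “AI software” amounts to something closer to a flagging pass than to comprehension. A minimal sketch of that kind of pass, with placeholder watch-list terms and an arbitrary threshold of my own invention, looks like this:

```python
# Speculative sketch of a crude flagging pass; the watch list and threshold
# are placeholders, not the district's actual criteria.
import re
from collections import Counter

WATCH_LIST = {"explicit", "nudity"}  # illustrative placeholder terms

def flag_book(text: str, threshold: int = 3) -> tuple[bool, Counter]:
    """Count watch-list hits; flag the book if they reach the threshold."""
    words = re.findall(r"[a-z']+", text.lower())
    hits = Counter(word for word in words if word in WATCH_LIST)
    return sum(hits.values()) >= threshold, hits

if __name__ == "__main__":
    with open("sample_title.txt", encoding="utf-8") as handle:  # hypothetical file
        flagged, hits = flag_book(handle.read())
    print("flag for review" if flagged else "pass", dict(hits))
```

A pass like this cannot tell a clinical passage from a health textbook, a classic novel, or a memoir, which may help explain how a nineteen-title list materialized so quickly.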
The AI algorithm has so far listed nineteen titles to potentially ban. These include long-banned veterans such as The Color Purple, I Know Why the Caged Bird Sings, and The Handmaid’s Tale, along with comparatively newer titles: Gossip Girl, Feed, and A Court of Mist and Fury.
While these titles may not be appropriate for elementary schools, are questionable for middle schools, and are arguably age-appropriate for high schools, book banning is not good. Parents, teachers, librarians, and other leaders must work together to determine what is best for students. Books also carry age guidance, much as videogames, movies, and TV shows do. These titles are tame compared to what kids can access online and on TV.
Whitney Grace, September 19, 2023