Reading. Who Needs It?
September 19, 2023
Book banning, aka media censorship, is an act as old as human intellect. As technology advances, so do the strategies and tools available to assist in it. Engadget shares the unfortunate story of how “An Iowa School District Is Using AI To Ban Books.” The school board of Mason City, Iowa, is leveraging AI technology to generate lists of books to potentially ban from the district’s libraries in the 2023-24 school year.
Governor Kim Reynolds signed Senate File 496 into law after it passed the Republican-controlled state legislature. Senate File 496 overhauls the state’s curriculum and includes language governing which books are allowed in schools: titles must be “age appropriate” and contain no “descriptions or visual depictions of a sex act.”
“Inappropriate” titles have slipped past censors for years, and the district discovered it is not so easy to review every school’s book collection. That is where the school board turned to an AI algorithm to undertake the task:
“As such, the Mason City School District is bringing in AI to parse suspect texts for banned ideas and descriptions since there are simply too many titles for human reviewers to cover on their own. Per the district, a “master list” is first cobbled together from “several sources” based on whether there were previous complaints of sexual content. Books from that list are then scanned by “AI software” which tells the state censors whether or not there actually is a depiction of sex in the book.”
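The district has not said which “AI software” it uses, so the details are anyone’s guess. Purely as a hedged illustration of the workflow Engadget describes, here is a minimal Python sketch: the complaint-derived master list and the naive keyword flagger below are invented stand-ins, not Mason City’s actual system.

```python
# Hypothetical sketch of the workflow described above: build a "master
# list" from prior complaints, then flag titles whose text appears to
# contain a depiction of sex. The keyword heuristic is an invented
# stand-in; the district's actual "AI software" is undisclosed.

FLAGGED_TERMS = {"sex act", "explicit"}  # assumed terms, illustrative only

def master_list(complaints):
    """Collect titles that drew prior complaints of sexual content."""
    return {c["title"] for c in complaints if c.get("reason") == "sexual content"}

def trips_filter(text):
    """Return True if the text matches any (assumed) flagged term."""
    lowered = text.lower()
    return any(term in lowered for term in FLAGGED_TERMS)

complaints = [{"title": "Example Novel", "reason": "sexual content"}]
library = {"Example Novel": "an explicit passage appears on page 212"}

for title in sorted(master_list(complaints)):
    if trips_filter(library.get(title, "")):
        print(f"Flag for review: {title}")
```

Even this toy version exposes the weakness: a phrase match says nothing about context, which may help explain why literary classics keep getting swept up.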
The AI algorithm has so far listed nineteen titles for potential banning. These include perennially banned titles such as The Color Purple, I Know Why the Caged Bird Sings, and The Handmaid’s Tale, along with newer ones: Gossip Girl, Feed, and A Court of Mist and Fury.
While these titles may be inappropriate for elementary schools, questionable for middle schools, and arguably age-appropriate for high schools, book banning is not the answer. Parents, teachers, librarians, and other leaders must work together to determine what is best for students. Books carry age guidance the way videogames, movies, and TV shows do, and these titles are tame compared to what kids can access online and on TV.
Whitney Grace, September 19, 2023
Profits Over Promises: IBM Sells Facial Recognition Tech to British Government
September 18, 2023
Just three years after it swore off any involvement in facial recognition software, IBM has made an about-face. The Verge reports, “IBM Promised to Back Off Facial Recognition—Then it Signed a $69.8 Million Contract to Provide It.” Amid the momentous Black Lives Matter protests of 2020, IBM CEO Arvind Krishna wrote a letter to Congress vowing to no longer supply “general purpose” facial recognition tech. However, that appears to be exactly what the company includes in the biometrics platform it just sold to the British government. Reporter Mark Wilding writes:
“The platform will allow photos of individuals to be matched against images stored on a database — what is sometimes known as a ‘one-to-many’ matching system. In September 2020, IBM described such ‘one-to-many’ matching systems as ‘the type of facial recognition technology most likely to be used for mass surveillance, racial profiling, or other violations of human rights.'”
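For readers unfamiliar with the jargon, here is a minimal sketch of what “one-to-many” matching means in practice: one probe image embedding is compared against a database of many enrolled embeddings, and the closest candidates come back. This is a generic illustration using random vectors, not a description of IBM’s platform, whose internals are not public.

```python
import numpy as np

# Generic sketch of "one-to-many" matching: one probe face embedding is
# compared against many enrolled embeddings; the closest matches return.
# Random vectors stand in for real face embeddings; this illustrates the
# concept only, not IBM's (non-public) implementation.

rng = np.random.default_rng(0)
database = rng.normal(size=(10_000, 128))            # 10k enrolled faces
database /= np.linalg.norm(database, axis=1, keepdims=True)

probe = rng.normal(size=128)                         # the face being searched
probe /= np.linalg.norm(probe)

scores = database @ probe                            # cosine similarities
top = np.argsort(scores)[::-1][:5]                   # five best candidates
for idx in top:
    print(f"candidate {idx}: similarity {scores[idx]:.3f}")
```

The scale is the point: scanning one face against thousands of enrolled records is what distinguishes mass-surveillance-capable systems from one-to-one verification, such as unlocking a phone.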
In the face of this lucrative contract, IBM has changed its tune. It now insists one-to-many matching tech does not count as “general purpose” since the intention here is to use it within a narrow scope. But scopes have a nasty habit of widening to fit the available tech. The write-up continues:
“Matt Mahmoudi, PhD, tech researcher at Amnesty International, said: ‘The research across the globe is clear; there is no application of one-to-many facial recognition that is compatible with human rights law, and companies — including IBM — must therefore cease its sale, and honor their earlier statements to sunset these tools, even and especially in the context of law and immigration enforcement where the rights implications are compounding.’ Police use of facial recognition has been linked to wrongful arrests in the US and has been challenged in the UK courts. In 2019, an independent report on the London Metropolitan Police Service’s use of live facial recognition found there was no ‘explicit legal basis’ for the force’s use of the technology and raised concerns that it may have breached human rights law. In August of the following year, the UK’s Court of Appeal ruled that South Wales Police’s use of facial recognition technology breached privacy rights and broke equality laws.”
Wilding notes other companies similarly promised to renounce facial recognition technology in 2020, including Amazon and Microsoft. Will governments also be able to entice them into breaking their vows with tantalizing offers?
Cynthia Murrell, September 18, 2023
Accidental Bias or a Finger on the Scale?
September 18, 2023
Who knew? According to Bezos’ rag The Washington Post, “Chat GPT Leans Liberal, Research Shows.” Writer Gerrit De Vynck cites a study on OpenAI’s ChatGPT from researchers at the University of East Anglia:
“The results showed a ‘significant and systematic political bias toward the Democrats in the U.S., Lula in Brazil, and the Labour Party in the U.K.,’ the researchers wrote, referring to Luiz Inácio Lula da Silva, Brazil’s leftist president.”
Then there’s research from Carnegie Mellon’s Chan Park. That study found Facebook’s LLaMA, trained on older Internet data, and Google’s BERT, trained on books, supplied right-leaning or even authoritarian answers. But Chat GPT-4, trained on the most up-to-date Internet content, is more economically and socially liberal. Why might the younger algorithm, much like younger voters, skew left? There’s one more juicy little detail. We learn:
“Researchers have pointed to the extensive amount of human feedback OpenAI’s bots have gotten compared to their rivals as one of the reasons they surprised so many people with their ability to answer complex questions while avoiding veering into racist or sexist hate speech, as previous chatbots often did. Rewarding the bot during training for giving answers that did not include hate speech, could also be pushing the bot toward giving more liberal answers on social issues, Park said.”
Not exactly a point in conservatives’ favor, we think. Near the bottom, the article concedes this caveat:
“The papers have some inherent shortcomings. Political beliefs are subjective, and ideas about what is liberal or conservative might change depending on the country. Both the University of East Anglia paper and the one from Park’s team that suggested ChatGPT had a liberal bias used questions from the Political Compass, a survey that has been criticized for years as reducing complex ideas to a simple four-quadrant grid.”
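The grid criticism is easy to see in a toy version. The sketch below reduces Likert-scale answers to the two Political Compass axes; the propositions, signs, and weights are invented for illustration, since the real survey’s scoring is not published.

```python
# Toy scorer showing the "four-quadrant grid" reduction: every answer is
# collapsed onto an economic axis (left/right) and a social axis
# (libertarian/authoritarian). The propositions and signs are invented;
# the real Political Compass scoring is not published.

QUESTIONS = [
    # (axis, sign): agreeing pushes the score in `sign` direction
    ("economic", +1),   # e.g. "markets allocate resources best"
    ("economic", -1),   # e.g. "essential services should be public"
    ("social",   +1),   # e.g. "authority should rarely be questioned"
    ("social",   -1),   # e.g. "personal choices are nobody's business"
]

def score(answers):
    """answers: ints in [-2, 2], strongly disagree .. strongly agree."""
    totals = {"economic": 0, "social": 0}
    for (axis, sign), answer in zip(QUESTIONS, answers):
        totals[axis] += sign * answer
    return totals  # the quadrant is just the sign of each total

print(score([2, -1, -2, 2]))  # {'economic': 3, 'social': -4}
```

Collapsing a chatbot’s prose answers onto two signed sums is exactly the kind of reduction the critics object to.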
So does ChatGPT lean left or not? Hard to say from the available studies. But will researchers ever be able to pin down the rapidly evolving AI?
Cynthia Murrell, September 18, 2023
Microsoft: Good Enough Just Is Not
September 18, 2023
Was it the Russian hackers? What about the special Chinese department of bad actors? Was it independent criminals eager to impose ransomware on hapless business customers?
No. No. And no.
The manager points his finger at the intern working the graveyard shift and says, “You did this. You are probably worse than those 1,000 Russian hackers orchestrated by the FSB to attack our beloved software. You are a loser.” The intern is embarrassed. Thanks, MidJourney. You have the hands almost correct… after nine months or so. Gradient descent is your middle name.
“Microsoft Admits Slim Staff and Broken Automation Contributed to Azure Outage” presents an interesting interpretation of another Azure misstep. The report asserts:
Microsoft’s preliminary analysis of an incident that took out its Australia East cloud region last week – and which appears also to have caused trouble for Oracle – attributes the incident in part to insufficient staff numbers on site, slowing recovery efforts.
But not really. The report adds:
The software colossus has blamed the incident on “a utility power sag [that] tripped a subset of the cooling units offline in one datacenter, within one of the Availability Zones.”
Ah, ha. Is the finger of blame like a heat-seeking missile? By golly, it will find something: a hair dryer, fireworks at a wedding where such events are customary, or a passenger aircraft. A great high-tech manager will say, “Oops. Not our fault.”
The Register’s write up points out:
But the document [an official explanation of the misstep] also notes that Microsoft had just three of its own people on site on the night of the outage, and admits that was too few.
Yeah. Work from home? Vacay time? Managerial efficiency planning? Whatever.
My view of this unhappy event is:
- Poor managers making bad decisions
- A drive for efficiency instead of a drive toward excellence
- A Microsoft Bob moment.
More exciting Azure events in the future? Probably. More finger pointing? It is a management method, is it not?
Stephen E Arnold, September 18, 2023
Can Smart Software Get Copyright? Wrong?
September 15, 2023
It is official: copyrights are for humans, not machines. JD Supra brings us up to date on AI and official copyright guidelines in, “Using AI to Create a Work – Copyright Protection and Infringement.” The basic principle goes both ways. Creators cannot copyright AI-generated material unless they have manipulated it enough to render it a creative work. On the other hand, it is a violation to publish AI-generated content that resembles a copyright-protected work. As for feeding algorithms a diet of human-made media, that is not officially against the rules. Yet. We learn:
“To obtain copyright protection for a work containing AI-generated material, the work must have sufficient human input, such as sufficient modification of the AI output or the human selection or arrangement of the AI content. However, copyright protection would be limited to those ‘human-made’ elements. Past, pending, and future copyright applications need to identify explicitly the human element and disclaim the AI-created content if it is more than minor. For existing registrations, a supplementary registration may be necessary. Works created using AI are subject to the same copyright infringement analysis applicable to any work. The issue with using AI to create works is that the sources of the original works may not be identified, so an infringement analysis cannot be conducted until the cease-and-desist letter is received. No court has yet adopted the theory that merely using an AI database means the resulting work is automatically an infringing derivative work if it is not substantially similar to the protectable elements in the copyrighted work.”
The article cites the Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence, 88 Fed. Reg. 16,190 (March 16, 2023). It notes those guidelines were informed by a decision handed down in February involving Zarya of the Dawn, a comic book with AI-generated content. The Copyright Office sliced and diced its elements, specifying:
“… The selection and arrangement of the images and the text were the result of human authorship and thus copyrightable, but the AI-generated images resulting from human prompts were not. The prompts ‘influenced,’ but did not ‘dictate,’ the resulting image, so the applicant was not the ‘mastermind’ and therefore not the author of the images. Further, the applicant’s edits to the images were too minor to be deemed copyrightable.”
Ah, the fine art of splitting hairs. As for training databases packed with protected content, the article points to pending lawsuits by artists against Stability AI, MidJourney, and Deviant Art. We are told those cases may be dismissed on technical grounds, but are advised to watch for similar cases in the future. Stay tuned.
Cynthia Murrell, September 15, 2023
Bankrupting a City: Big Software, Complexity, and Human Shortcomings Do the Trick
September 15, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I have noticed failures in a number of systems. I have no empirical data, just anecdotal observations. In the last few weeks, I have noticed glitches in a local hospital’s computer systems. There have been some fascinating cruise ship problems. And the airlines are flying the flag for system ineptitude. I would be remiss if I did not mention news reports about “near misses” at airports. A popular food chain has suffered six recalls in four or five weeks.
Most of these failures can be traced to software issues. Others are a hot-mess combination of inexperienced staff and fouled-up enterprise resource planning workflows. None of the issues was the result of smart software. To correct that oversight, let me mention the propensity of driverless automobiles to misidentify emergency vehicles or to show indifference to side-street traffic at major intersections.
The information technology manager looks at the collapsing data center and asks, “Who is responsible for this issue?” No one answers. Those with any sense have adopted the van life, set up stalls to sell crafts at local art fairs, or accepted another job. Thanks, MidJourney. I guarantee your slide down the gradient descent is accelerating.
What’s up?
My personal view is that some people do not know how complex software works but depend on it despite that cloud of unknowing. Others simply trust the marketing people and buy whatever seems better, faster, and cheaper than an existing system that requires lots of money to keep chugging along.
Now we have an interesting case example that incorporates a number of management and technical issues. Birmingham, England, is now bankrupt. The reason? The cost of a new system sucked up the cash. My hunch is that King Charles or some other kind soul will keep the city solvent. But the idea of a city going broke because it could not manage a software project is, in my opinion, illustrative of the future.
“Largest Local Government Body in Europe Goes Under amid Oracle Disaster” reports:
Birmingham City Council, the largest local authority in Europe, has declared itself in financial distress after troubled Oracle project costs ballooned from £20 million to around £100 million ($125.5 million).
An extra £80 million would make little difference to an Apple, Google, or Microsoft. To a city in the UK, the cost is a bit of a problem.
Several observations:
- Large-project management expertise does not guarantee functional solutions. How is that air traffic control or IRS system enhancement going?
- Vendors rely on marketing to close deals, then expect engineers to just make the system work. If something is incomplete or not yet coded, the failure rate may be anticipated, right? Nope. What’s anticipated is a scope change and billing more money.
- Government agencies are not known for smooth, efficient technical capabilities. Agencies are good at statements of work that require many interesting and often impossible features. The procurement attorneys cannot spot these issues, but those folks ride herd on the legal lingo. Result? Slips betwixt cup and lip.
Are the names of the companies involved important? Nope. The same situation exists whenever an enterprise software vendor wins a contract based on a wild and woolly statement of work, the project is managed by individuals who are not particularly adept at keeping complex technical work on time and on target, and big outfits let vendors sell via PowerPoints and demonstrations, not engineering realities.
Net net: More of these types of cases will be coming down the pike.
Stephen E Arnold, September 15, 2023
Turn Left at Ethicsville and Go Directly to Immoraland, a New Theme Park
September 14, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Stanford University lost a true icon of scholarship. Why is this individual leaving the august institution, a hot spot of modern ethical and moral discourse? Yeah, the leader apparently confused real and verifiable data with less real and tough-to-verify data. Across the country, an ethics professor, no less, is on leave or parked in an academic rest area over a similar allegation. I will not dwell on the outstanding concept of just using synthetic data to inform decision models, a practice once held in esteem at the Stanford Artificial Intelligence Lab.
“Gasp,” one PhD utters. An audience of scholars reveals shock and maybe horror when a colleague explains that making up, recycling, or discarding data at odds with the “real” data is perfectly reasonable. The brass ring of tenure and maybe a prestigious award for research justify a more hippy dippy approach to accuracy. And what about grants? Absolutely. Money allows top-quality research to be done by graduate assistants. Everyone needs someone to blame. MidJourney, keep on slidin’ down that gradient descent, please.
“Scientist Shocks Peers by Tailoring Climate Study” provides more color for these no-ethics actions by leaders of impressionable youth. I noted this passage:
While supporters applauded Patrick T. Brown for flagging what he called a one-sided climate “narrative” in academic publishing, his move surprised at least one of his co-authors—and angered the editors of leading journal Nature. “I left out the full truth to get my climate change paper published,” read the headline to an article signed by Brown…
Ah, the greater good logic.
The write up continued:
A number of tweets applauded Brown for his “bravery”, “openness” and “transparency”. Others said his move raised ethical questions.
The write up raised just one question I would like answered: “Where has education gone?” Answer: Immoraland, a theme park with installations at Stanford and Harvard with more planned.
Stephen E Arnold, September 14, 2023
What Is More Important: Access to Information or Money?
September 14, 2023
Laws that regulate technology can be outdated because they were written before the technology was invented. While that is true, politicians have updated laws to address situations that arise from advancing technology. Artificial intelligence is causing a flurry of new legislative concerns. The Conversation explains that there are already laws regulating AI on the books, but they are not being followed: “Do We Need A New Law For AI? Sure-But First We Could Try Enforcing The Laws We Already Have.”
In the early days of the Internet and the mass adoption of computers, regulation was a bad word, akin to censoring freedom of speech and to impeding technological progress. AI is changing that idea. Australian Minister for Industry and Science Ed Husic is leading the charge for an end to technology self-regulation, a push that could inspire lawmakers in other countries.
Husic wants his policies to focus on high-risk AI issues and on balancing the relationship between humans and machines. He no longer wants the Internet and technology to be a lawless Wild West. Big tech leaders such as OpenAI Chief Executive Sam Altman have said regulating AI is essential. (OpenAI developed the ChatGPT chatbot/AI assistant.) Altman’s statement comes ten years after Facebook founder Mark Zuckerberg advised the tech industry to move fast and break things. Why are tech giants suddenly changing their tune?
One idea is that tech giants understand the dangers associated with unbridled AI. They realize without proper regulation, AI’s negative consequences could outweigh the positives.
Most countries already have laws that regulate AI, but the laws refer to technology in general:
“Our current laws make clear that no matter what form of technology is used, you cannot engage in deceptive or negligent behavior.
Say you advise people on choosing the best health insurance policy, for example. It doesn’t matter whether you base your advice on an abacus or the most sophisticated form of AI, it’s equally unlawful to take secret commissions or provide negligent advice.”
The article was written by tech leaders at the Human Technology Institute located at the University of Technology Sydney, who are calling for Australia to create a new government role, the AI Commissioner. This new role would be an independent expert advisor to the private and government sector to advise businesses and lawmakers on how to use and enforce AI within Australia’s laws. Compared to North America, the European Union, and many Asian countries, Australia has dragged its heels developing AI laws.
The authors stress that personal privacy must be protected, as it is under laws that already exist in Europe. They also cite examples of how the mass automation of tasks has led to discrimination and bureaucratic nightmares.
An AI Commissioner is a brilliant idea, but it places the responsibility on one person. A small regulatory board, monitored like other government bodies, would be a better idea. Since the idea is logical, the Australian government will fail to implement it. That is not a dig at Australia. Any and all governments fail at implementing logical plans.
Whitney Grace, September 14, 2023
An AI to Help Law Firms Craft More Effective Invoices
September 14, 2023
Think money. That answers many AI questions.
Why are big law firms embracing AI? For a better understanding of the law? Nay. To help clients? No. For better writing? Nope. What then? Why, more fruitful billing, of course. We learn from Above The Law, “Law Firms Struggling with Arcane Billing Guidelines Can Look to AI for Relief.” According to writer and litigator Joe Patrice, law clients rely on labyrinthine billing compliance guidelines to delay paying their invoices. Now AI products like Verify are coming to rescue beleaguered lawyers from penny-pinching clients. Patrice writes:
“Artificial intelligence may not be prepared to solve every legal industry problem, but it might be the perfect fit for this one. ZERO CEO Alex Babin is always talking about developing automation to recover the money lawyers lose doing non-billable tasks, so it’s unsurprising that the company has turned its attention to the industry’s billing fiasco. And when it comes to billing guideline compliance, ZERO estimates that firms can recover millions by introducing AI to the process. Because just ‘following the guidelines’ isn’t always enough. Some guidelines are explicit. Others leave a world of interpretation. Still others are explicit, but no one on the client side actually cares enough to force outside counsel to waste time correcting the issue. Where ZERO’s product comes in is in understanding the guidelines and the history of rejections and appeals surrounding the bills to figure out what the bill needs to look like to get the lawyers paid with the least hassle.”
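ZERO has not published how its product works under the hood, so treat the following as a rough, hypothetical sketch of guideline checking in general: a handful of rules run over each invoice line item. The rule names, fields, and timekeeper list are all invented for the example.

```python
import re

# Invented, illustrative guideline rules. Real client guidelines run to
# dozens of pages, and ZERO's actual product logic is not public; these
# names, fields, and the timekeeper list exist only for the sketch.
GUIDELINES = [
    ("block billing", lambda e: ";" in e["narrative"]),
    ("vague narrative", lambda e: re.fullmatch(
        r"(attention to|review) file\.?", e["narrative"].lower()) is not None),
    ("unapproved timekeeper", lambda e: e["timekeeper"] not in {"JSmith", "APatel"}),
]

def check(entry):
    """Return the names of guidelines this line item appears to violate."""
    return [name for name, broken in GUIDELINES if broken(entry)]

entry = {"timekeeper": "JSmith",
         "narrative": "Review file; draft motion to dismiss",
         "hours": 2.5}
print(check(entry))  # ['block billing']
```

The hard part, per the write-up, is not rules like these but the judgment calls: which guidelines a given client actually enforces, learned from the history of rejections and appeals.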
Verify can even save attorneys from their own noncompliant wording, rewriting their narratives to comply with guidelines. And it can do so while mimicking each lawyer’s writing style. Very handy.
Cynthia Murrell, September 14, 2023
The Best Books: Banned Ones, Of Course
September 14, 2023
Every few years the United States deals with a morality crisis surrounding inappropriate items. In the 1980s and 1990s, it was the satanic panic that demonized role-playing games, heavy metal and rap music, videogames, and everything tied to the horror/supernatural genre. It resulted in the banning of multiple media, including books. Today, teachers, parents, and other adults are worried about kids’ access to so-called pornographic books, specifically books that deal with LGBTQIA+ topics.
The definition of “p*rnography” differs with every individual, but the majority agree that anything describing sex or related topics, nudity, transgenderism, or homosexuality fulfills that definition. While some of the questioned books do depict sex and/or sexual acts, protestors are applying a blanket term to every book they deem inappropriate. Their worst justification for book banning is that they do not need to read a book to know it is “p*rnographic.” [Just avoiding the smart banned word list. Editor]
Thankfully there are states that stand by the First Amendment: “Illinois Passes Bill To Stop Book-Bannings,” says Lit Hub. The Illinois legislature passed House Bill 2789, which states that schools and libraries that remove books from their collections will not receive state grant money. Opponents complain that their taxes are paying for books they do not like. Radical political groups, including the Proud Boys, have supported book banning.
Other book topics deemed inappropriate include racial themes, death, health and wellbeing, violence, suicide, physical abuse, abortion, sexual abuse, and teen pregnancy. Book banning went before the Supreme Court in 1982 in Island Trees School District v. Pico. The plaintiff, student Steven Pico of Long Island, challenged his school district over books it claimed were “just plain filthy.” The outcome was:
“Pico won, and Justice William Brennan wrote in a majority decision that “Local school boards may not remove books from school libraries simply because they dislike the ideas contained in those books and seek by their removal to ‘prescribe what shall be orthodox in politics, nationalism, religion, or other matters of opinion.’”
We are in a new era characterized by the Internet and changing societal values. The threat to freedom of speech and access to books, however, remains the same.
Whitney Grace, September 14, 2023