Publishers Sign Up for the Great Unknown: Risky, Oh, Yeah
June 7, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
OpenAI is paying for content. Why? Maybe to avoid lawsuits? Maybe to get access to “real” news to try to get ahead of its perceived rivals? Maybe because Sam AI-Man pushes forward while his perceived competitors do weird things like add features or launch services which are lousy or which have the taste of the bitter fruit of Zuckus nepenthes.
Publishers are like beavers: they have to do whatever they can to generate cash. Thanks, MSFT Copilot. Good enough. Not a cartoon and not a single dam, but, like MSFT security, good enough is today’s benchmark of excellence.
“Journalists Deeply Troubled by OpenAI’s Content Deals with Vox, The Atlantic” is a good example of the angst Sam AI-Man is causing among “real” news outfits and their Fourth Estate professionals. The write up reports:
“Alarmed” writers unions question transparency of AI training deals with ChatGPT maker.
Oh, oh. An echo of Google’s Code Red am I hearing? No, what I hear is the ka-ching of the bank teller’s deposit system as the “owner” of the Fourth Estate professional business process gets Sam AI-Man’s money. Let’s not confuse “real” news with “real” money, shall we? In the current economic climate, money matters. Today it is difficult to sell advertising unless one is a slam dunk monopoly with an ad sales system that is tough to beat. Today it is tough to get those who consume news via a podcast or a public Web site to subscribe. I think that the number I heard for conversions is something like one or two subscribers per 100 visitors on a really good day. Most days are not really good.
“Real” journalists can be unionized. The idea is that their services have to be protected from the lawyers and bean counters who run many high-profile publishing outfits. The problem with unions is that they seek to limit what the proprietors can do in a largely unregulated capitalist set up like the one operating within the United States. In a long-forgotten pre-digital era, union members fought a dust up in 1921 at Blair Mountain in my favorite state, West Virginia. Today, the union members are more likely to launch social media posts and hook up with a needy lawyering outfit.
Let me be clear. Some of the “real” journalists will find fame as YouTubers, pundits on what’s left of traditional TV or cable news programs, or by writing a book which catches the attention of Netflix. Most, however, will do gig work and migrate to employment adjacent to “real” news. The problem is that in any set of “real” journalists, the top 10 percent will be advantaged. The others may head to favelas, their parents’ basements, or a Sheetz parking lot in my favorite state for some chemical relief. Does that sound scary?
Think about this.
Sam AI-Man, according to the Observer’s story “Sam Altman Says OpenAI Doesn’t Fully Understand How GPT Works Despite Rapid Progress,” admits he does not fully understand his own system. These money-focused publishers are signing up for something that neither they nor the fellow surfing the crazy wave of smart software understands. But taking money now and worrying about the consequences later is how publishing executives in their carpetlands operate. Money in hand is good. Worrying about the future, according to their life coach, is not worth the mental stress. It is go-go in a now-now moment.
I cannot foretell the future. If I could, I would not be an 80-year-old dinobaby sitting in my home office marveling at the downstream consequences of what amounts to a 2024 variant of the DR-LINK technology. I can offer a handful of hypotheses:
- “Real” journalists are going to find that publishers cut deals to get cash without thinking about the “real” journalists or the risks inherent in hopping in a small cabin with Sam AI-Man for a voyage into the unknown.
- Money and cost reductions will fuel selling information to Sam AI-Man and any other Big Tech outfit which comes calling with a checkbook. Money now is better than looking at a graph of advertising sales over the last five years. Money trumps “real” journalists’ complaints when they are offered part-time work or an opportunity to find their future elsewhere.
- Publishing outfits have never been technologically adept, and I think that engineered blindness is now built into the companies’ management processes. Change is going to make publishing an interesting business. That’s good for consultants and bankruptcy specialists. It will not be so good for those who do not have golden parachutes or platinum flying cars.
Net net: What are the options for the “real” journalists’ unions? Lawyers, maybe. Social media posts. Absolutely. Will these prevent publishers from doing what publishers have to do? Nope.
Stephen E Arnold, June 7, 2024
AI in the Newsroom
June 7, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
It seems much of the news we encounter is already, at least in part, generated by AI. Poynter discusses how “AI Is Already Reshaping Newsrooms, AP Study Finds.” The study surveyed 292 respondents from legacy media, public broadcasters, magazines, and other news outlets. Writer Alex Mahadevan summarizes:
“Nearly 70% of newsroom staffers from a variety of backgrounds and organizations surveyed in December say they’re using the technology for crafting social media posts, newsletters and headlines; translation and transcribing interviews; and story drafts, among other uses. One-fifth said they’d used generative AI for multimedia, including social graphics and videos.”
Surely these professionals are only using these tools under meticulous guidelines, right? Well, a few are. We learn:
“The tension between ethics and innovation drove Poynter’s creation of an AI ethics starter kit for newsrooms last month. The AP — which released its own guidelines last August — found less than half of respondents have guidelines in their newsrooms, while about 60% were aware of some guidelines about the use of generative AI.”
The survey found the idea of guidelines was not even on most respondents’ minds. That is unsettling. Mahadevan lists some other interesting results:
“- 54% said they’d ‘maybe’ let AI companies train their models using their content.
- 49% said their workflows have already changed because of generative AI.
- 56% said the AI generation of entire pieces of content should be banned.
- Only 7% of those who responded were worried about AI displacing jobs.
- 18% said lack of training was a big challenge for ethical use of AI. ‘Training is lovely, but time spent on training is time not spent on journalism — and a small organization can’t afford to do that,’ said one respondent.”
That last statement is disturbing, given the gradual deterioration and impoverishment of large news outlets. How can we ensure best practices make their way into this mix, and can it be done before real news becomes indistinguishable from fake news?
Cynthia Murrell, June 7, 2024
Lunch at a Big Time Publisher: Humble Pie and Sour Words
June 4, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Years ago I did some work for a big-time New York City publisher. The firm employed people who used words like “fungible” and “synergy” when talking with me. I took the time to read an article with this title: “So Much for Peer Review — Wiley Shuts Down 19 Science Journals and Retracts 11,000 Gobbledygook Papers.” Was this the staid, conservative, big-vocabulary outfit I remembered?
Yep.
The essay is little more than a wrapper for a Wall Street Journal story with the title “Flood of Fake Science Forces Multiple Journal Closures Tainted by Fraud.” I quite like that title, particularly the operative word “fraud.” What in the world is going on?
The write up explains:
Wiley — a mega publisher of science articles has admitted that 19 journals are so worthless, thanks to potential fraud, that they have to close them down. And the industry is now developing AI tools to catch the AI fakes (makes you feel all warm inside?)
A group of publishing executives becomes the focal point of a Midtown lunch in an upscale restaurant. The titans of publishing are complaining about the taste of humble pie and use secret NYAC gestures to express their disapproval. Thanks, MSFT Copilot. Your security expertise may warrant a special banquet too.
The information in the cited article contains some tasty nuggets which complement humble pie in my opinion; for instance:
- The shutdown of the junk food publications has required two years. If Sillycon Valley outfits can fire thousands via email or Zoom, “Why are those uptown shoes being dragged?” I asked myself.
- Other high-end publishers have been doing the same thing. Sadly there are no names.
- The bogus papers included something called an “AI gobbledygook sandwich.” Interesting. Human reviewers who are experts could not recognize the vernacular of academic and research fraudsters.
- Some in Australia think that the credibility of universities might be compromised. Oh, come now. Just because the president of Stanford had to search for his future elsewhere after some intellectual fancy dancing and the head of the Harvard ethics department demonstrated allegedly sci-fi ethics in published research, what’s the problem? Don’t students just get As and Bs? Professors are engaged in research, chasing consulting gigs, and ginning up grant money. Actual research? Oh, come now.
- Academic journals are, or were, a $30 billion industry.
Observations are warranted:
- In today’s datasphere, I am not surprised. Scams, frauds, and cheats seem to be as common as ants at a picnic. A cultural shift has occurred. Cheating has become the norm.
- Will the online databases, produced by some professional publishers and commercial database companies, be updated to remove or at least flag the baloney? Probably not. That costs money. Spending money is not a modern publishing CEO’s favorite activity. (Hence the two-year draw down of the fake information at the publishing house identified in the cited write up.)
- How many people have died or been put out of work because of specious research data? I am not holding my breath for the peer reviewed journals to provide this information.
Net net: Humiliating and a shame. Quite a cultural mismatch between what some publishers say and what the firm allegedly ordered from the deli. I thought the outfit had a knowledge-based reason to tell me that it takes the high road. It seems that on that road, there are places where a bad humble pie is served.
Stephen E Arnold, June 4, 2024
The Death of the Media: Remember Clay Tablets?
May 24, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Did the home in which you grew from a wee one to a hyperspeed teen have a plaster cast which said “Home sweet home” or “Welcome” hanging on the wall? My mother had those craft sale treasures everywhere. I have none. The point is that the clay tablets from ancient times were not killed, put out of business, or bankrupted because someone wrote on papyrus, sheepskin, or bits of wood. Eliminating a communications medium is difficult. Don’t believe me? Go to an art fair and let me know if you are unable to spot something made of clay with writing or a picture on it.
I mention these older methods of disseminating a message because I read “Publishers Horrified at New Google AI Feature That Could Kill What’s Left of Journalism.” Really?
The write up states:
… preliminary studies on Google’s use of AI in its search engine has the potential to reduce website traffic by 25 percent, The Associated Press reports. That could be billions in revenue lost, according to an interview with Marc McCollum, chief innovation officer for content creator consultancy Raptive, who was interviewed by the AP.
The idea is that “real” journalism depends on Google for revenue. If the revenue from Google’s assorted ad programs tossing pennies to Web sites goes away, so will the “real” journalism on these sites.
If my dinobaby memory is working, the AP (Associated Press) was supported by newspapers. Then the AP was supported by Google. What’s next? I don’t know, but the clay tablet fellows appear to have persisted. The producers of the tablets probably shifted to tableware. Those who wrote on the tablets learned to deal with ink and sheepskin.
Chilling in the room thinking thoughts of doom. Thanks, MSFT Copilot. Keep following your security recipe.
AI seems to be capable of creating stories like those in Smartnews or one of the AI-powered spam outfits. The information is recycled. But it is good enough. Some students today seem incapable of tearing themselves from their mobile devices to read words. The go-to method for getting information is a TikTok-type service. People who write words may be fighting to make the shift to new media.
One thing is reasonably clear: Journalists and media mavens are concerned that a person will simply take an answer produced by a Google-like service and look no further. Typing a query to get information is a “hot medium” thing. Today, kicking back and letting video do the work seems to be a winner.
Google, however, has in my opinion been fiddling with search since it “innovated” in its implementation of the GoTo.com/Overture.com approach to “pay to play” search. If you want traffic, buy ads. The more one spends, the more traffic one’s site gets. That’s simple. There are some variations, but the same Google model will be in effect with or without Google’s little summaries. The lingo may change, but the clicks will remain, and where there are clicks, advertisers will pay to be there.
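To make the “pay to play” mechanics concrete, here is a toy sketch in Python. It is my own illustration, and it rests on a big simplifying assumption: real ad auctions (Google’s included) also weigh quality scores and use second-price rules, so treat this as the cartoon version of “the more one spends, the more traffic one’s site gets.”

# Toy sketch of "pay to play" ranking: the highest bid gets the top slot.
# Assumption: this deliberately ignores quality scores and second-price
# auction mechanics that real ad platforms layer on top of raw bids.
advertisers = [
    {"site": "big-spender.example", "bid_per_click": 4.50},
    {"site": "mid-tier.example", "bid_per_click": 1.25},
    {"site": "small-blog.example", "bid_per_click": 0.10},
]

# Sort by bid, descending: more spend, better placement, more traffic.
ranked = sorted(advertisers, key=lambda ad: ad["bid_per_click"], reverse=True)

for slot, ad in enumerate(ranked, start=1):
    print(f"Slot {slot}: {ad['site']} (${ad['bid_per_click']:.2f} per click)")

Run it and the ordering falls out of the bids alone, which is the point: whatever the lingo, placement follows spend.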
Google can, of course, kill its giant Googzilla mom laying golden eggs. That will take some time. Googzilla is big. My theory is that enterprising people with something to say will find a way to get paid for their content outputs regardless of their form. True, there is a cost to making the shift, but that’s the same hit the clay tablet took thousands of years ago. But those cast plaster and porcelain art objects are probably on sale at an art fair this weekend.
Observations:
- The fear is palpable. Why not direct it to a positive end? Google has had 25 years to do what it wanted to do; griping about it will not make Google change much. Do something to generate money. Complaining is unlikely to produce a result.
- The likelihood Google will shaft a large number of outfits and individuals is nearly 99 percent. Thus, moving in a sprightly manner may be a good idea. Google is not a sprinter, as its reaction to Microsoft’s Davos marketing blitz made clear.
- New things do appear. I am not sure what the next big thing will be. But one must pay attention.
Net net: The sky may be falling. The question is, “How fast?” Another is, “Can you get out of the way?”
Stephen E Arnold, May 24, 2024
Using AI But For Avoiding Dumb Stuff One Hopes
May 1, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I read an interesting essay called “How I Use AI To Help With TechDirt (And, No, It’s Not Writing Articles).” The main point of the write up is that artificial intelligence or smart software (my preferred phrase) can be useful for certain use cases. The article states:
I think the best use of AI is in making people better at their jobs. So I thought I would describe one way in which I’ve been using AI. And, no, it’s not to write articles. It’s basically to help me brainstorm, critique my articles, and make suggestions on how to improve them.
Thanks, MSFT Copilot. Bad grammar and an incorrect use of the apostrophe. Also, I was much dumber looking in the 9th grade. But good enough, the motto of some big software outfits, right?
The idea is that an AI system can function as a partner, research assistant, editor, and interlocutor. That sounds like what Microsoft calls a “copilot.” The article continues:
I initially couldn’t think of anything to ask the AI, so I asked people in Lex’s Discord how they used it. One user sent back a “scorecard” that he had created, which he asked Lex to use to review everything he wrote.
The use case is that smart software functions like Miss Dalton, my English composition teacher at Woodruff High School in 1958. She was a firm believer in diagramming sentences, following the precepts of the Tressler & Christ textbook, and arcane rules such as capitalizing the first word following a colon (correctly used, of course).
I think her approach was intended to force students in 1958 to perform these word and text manipulations automatically. Then when we trooped to the library every month to do “research” on a topic she assigned, we could focus on the content, the logic, and the structural presentation of the information. If you attend one of my lectures, you can see that I am struggling to live up to her ideals.
However, when I plugged in my comments about Telegram as a platform tailored to obfuscated communications, the delivery of malware and X-rated content, and the enforcement of a myth that the entity known as Mr. Durov does not cooperate with certain entities to filter content, the AI systems failed miserably. Not only were the systems lacking content; one (Microsoft Copilot, to be specific) produced no functional content at all. Two other systems balked at the topic of CSAM delivered within a Group’s Channel devoted to paying customers for what is either illegal or extremely unpleasant content.
Several observations are warranted:
- For certain types of content, the systems lack sufficient data to know what the heck I am talking about
- For illegal activities, the systems are either pretending to be really stupid or the developers have added STOP words to the filters to make darned sure no improper output would be presented
- The systems are not up to date; for example, Mr. Durov was interviewed by Tucker Carlson a week before Mr. Durov blocked Ukraine Telegram Groups’ content for Telegram users in Russia.
Is it, therefore, reasonable to depend on a smart software system to provide input on a “newish” topic? Is it possible the smart software systems are fiddled by the developers so that no useful information is delivered to the user (free or paying)?
Net net: I am delighted people are finding smart software useful. For my lectures to law enforcement officers and cyber investigators, smart software is, as of May 1, 2024, not ready for prime time. My concern is that some individuals may not discern the problems with the outputs. Writing about the law and its interpretation is an area about which I am not qualified to comment. But perhaps legal content is different from garden variety criminal operations. No, I won’t ask, “What’s criminal?” I would rather rely on what Miss Dalton taught in 1958. Why? I am a dinobaby and deeply skeptical of probabilistic systems which do not incorporate Kolmogorov-Arnold methods. Hey, that’s my relative’s approach.
Stephen E Arnold, May 1, 2024
NSO Pegasus: No Longer Flying Below the Radar
April 29, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I read “AP Exclusive: Polish Opposition Senator Hacked with Spyware.” I remain fearful of quoting the AP or Associated Press. I think it is a good business move to have an 89-year-old terrified of an American “institution,” don’t you? I think I am okay if I tell you the AP recycled a report from the University of Toronto’s Citizen Lab. Once again, the researchers have documented the use of what I call “intelware” by a nation state. The AP and other “real” news outfits prefer the term “spyware.” I think it has more sizzle, but I am going to put NSO Group’s mobile phone system and method in the category of intelware. The reason is that specialized software like Pegasus gathers information for a nation’s intelligence entities. Well, that’s the theory. The companies producing these platforms and tools want to answer such questions as “Who is going to undermine our interests?” or “What’s the next kinetic action directed at our facilities?” or “Who is involved in money laundering, human trafficking, or arms deals?”
Thanks, MSFT Copilot. Cutting down the cycles for free art, are you?
The problem is that specialized software is no longer secret. The Citizen Lab and the AP have been diligent in explaining how some of the tools work and what type of information can be gathered. My personal view is that information about these tools has been converted into college programming courses, open source software tools, and headline grabbing articles. I know from personal experience that most people do not have a clue how data from an iPhone can be exfiltrated, cross correlated, and used to track down those who would violate the laws of a nation state. But, as the saying goes, information wants to be free. Okay, it’s free. How about that?
The write up contains an interesting statement. I want to note that I am not plagiarizing, undermining advertising sales, or choking off subscriptions. I am offering the information as a peg on which to hang some observations. Here’s the quote:
“My heart sinks with each case we find,” Scott-Railton [a senior researcher at UT’s Citizen Lab] added. “This seems to be confirming our worst fear: Even when used in a democracy, this kind of spyware has an almost immutable abuse potential.”
Okay, we have malware, a command-and-control system, logs, and a variety of delivery mechanisms.
I am baffled because malware is used by both good and bad actors. Exactly what do the University of Toronto and the AP want to happen? The reality is that once secret information is leaked, it becomes the Teflon for rapidly diffusing applications. Does writing about what I view as an “old” story change what’s happening with potent systems and methods? Will government officials join in a kumbaya moment and force the systems and methods to fall into disuse? Endless recycling of an instrumental action by this country or that agency gets us where?
In my opinion, the sensationalizing of behavior does not correlate with responsible use of technologies. I think the Pegasus story is a search for headlines or recognition for saying, “Look what we found. Country X is a problem!” Spare me. Change must occur within institutions. Those engaged in the use of intelware and related technologies are aware of issues. These are, in my experience, not ignored. Improper behavior is rampant in today’s datasphere.
Standing on the sidelines and yelling at a player who let the team down does what exactly? Perhaps a more constructive approach can be identified and offered as a solution beyond “Pegasus again”? Broken record. I know you are “just doing your job.” Fine, but is there a new tune to play?
Stephen E Arnold, April 29, 2024
Fake Books: Will AI Cause Harm or Do Good?
April 24, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I read what I call a “howl” from a person who cares about “good” books. Now “good” is a tricky term to define. It is similar to “quality” or “love.” I am not going to try to define any of these terms. Instead I want to look at one example of smart software creating a problem for humans who create books. Then I want to focus attention on Amazon, the online bookstore. I think about two-thirds of American shoppers have some interaction with Amazon. That percentage is probably low if we narrow to top earners in the US. I want to wrap up with a reminder to those who think about smart software that the diffusion of technology chugs along and then — bang! — phase change. Spoiler: That’s what AI is doing now, and the pace is accelerating.
The Copilot image illustrates how smart software spreads. Cleaning up is a bit of a chore. The tablecloth and the meeting may be ruined. But that’s progress of sorts, isn’t it?
The point of departure is an essay cum “real” news write up about fake books titled “Amazon Is Filled with Garbage Ebooks. Here’s How They Get Made.” These books are generated by smart software and Fiverr-type labor. Dump the content in a word processor, slap on a title, and publish the work on Amazon. I write my books by hand, and I try to label that which I write or pay people to write as “the work of a dumb dinobaby.” Other authors do not follow my practice. Let many flowers bloom.
The write up states:
It’s so difficult for most authors to make a living from their writing that we sometimes lose track of how much money there is to be made from books, if only we could save costs on the laborious, time-consuming process of writing them. The internet, though, has always been a safe harbor for those with plans to innovate that pesky writing part out of the actual book publishing.
This passage explains exactly why fake books are created. The fact of fake books makes clear that AI technology is diffusing; that is, smart software is turning up in places and ways that the math people fiddling the numerical recipes or the engineers hooking up thousands of computing units never envisioned. Why would they? How many mathy types are able to remember their mother’s birthday?
The path for the “fake book” is easy money. The objective is not excellence, sophisticated demonstration of knowledge, or the mindlessness of writing a book “because.” The angst in the cited essay comes from the side of the coin that wants books created the old-fashioned way. Yeah, I get it. But today it is clear that the hand crafted books are going to face some challenges in the marketplace. I anticipate that “quality” fake books will convert the “real” book to the equivalent of a cuneiform tablet. Don’t like this? I am a dinobaby, and I call the trajectory as my experience and research warrants.
Now what about buying fake books on Amazon? Anyone can get an ISBN, but for Amazon, having no ISBN is (based on our tests) no big deal. Amazon has zero incentive to block fake books. If someone wants a hard copy of a fake book, let Amazon’s own instant print service produce the copy. Amazon is set up to generate revenue, not be a grumpy grandmother forcing grandchildren to pick up after themselves. Amazon could invest to squelch fraudulent or suspect behaviors. But here’s a representative Amazon word salad explanation cited in the “Garbage Ebooks” essay:
In a statement, Amazon spokesperson Ashley Vanicek said, “We aim to provide the best possible shopping, reading, and publishing experience, and we are constantly evaluating developments that impact that experience, which includes the rapid evolution and expansion of generative AI tools.”
Yep, I suggest not holding one’s breath until Amazon spends money to address a pervasive issue within its service array.
Now the third topic: Slowly, slowly, then the frog dies. Smart software in one form or another has been around a half century or more. I can date smart software in the policeware / intelware sector to the late 1990s when commercial services were no longer subject to stealth operation or “don’t tell” business practices. For the ChatGPT-type services, NLP has been around longer, but it did not work largely due to computational costs and the hit-and-miss approaches of different research groups. Inference, DR-LINK, or one of the other notable early commercial attempts, anyone?
Okay, now the frog is dead, and everyone knows it. Better yet, navigate to any open source repository or respond to one of those posts on Pinboard or listings in Product Hunt, and you are good to go. Anthropic has released a cookbook, just do-it-yourself ideas for building a start up with Anthropic tools. And if you write Microsoft Excel or Word macros for a living, you are already on the money road.
I am not sure Microsoft’s AI services work particularly well, but the stuff is everywhere. Microsoft is spending big to make sure it is not left out of the AI lunches in Dubai. I won’t describe the impact of the Manhattan chatbot. That’s a hoot. (We cover this slip-up in the AItoAI video pod my son and I do once each month. You can find that information about NYC at this link.)
Net net: The tipping point has been reached. AI is tumbling and its impact will be continuous — at least for a while. And books? Sure, great books like those from Twitter luminaries will sell. To those without a self-promotion rail gun, cloudy days ahead. In fact, essays like “Garbage Ebooks” will be cranked out by smart software. Most people will be none the wiser. We are not facing a dead Internet; we are facing the death of high-value information. When data are synthetic, what’s original thinking got to do with making money?
Stephen E Arnold, April 24, 2024
The Evolution of Study Notes: From Lazy to Downright Slothful
April 22, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Study guides, Cliff Notes, movie versions, comic books, and bribing elder siblings or past students for their old homework and class notes were how kids used to work their way through classes. Then came the Internet, and over the years innovative people have perfected study guides. Some have even made successful businesses from study guides for literature, science, math, foreign language, writing, history, and more.
The quality of these study guides ranges from poor to fantastic. PinkMonkey.com is one of the average study guide websites. It has some free book guides, while others are behind a paywall. There are also educational tips for different grades and advice for college applications. The information is a little dated, but when it is combined with other educational and homework help websites it still has its uses.
PinkMonkey.com describes itself as:
“…a "G" rated study resource for junior high, high school, college students, teachers and home schoolers. What does PinkMonkey offer you? The World’s largest library of free online Literature Summaries, with over 460 Study Guides / Book Notes / Chapter Summaries online currently, and so much more. No more trips to the book store; no more fruitless searching for a booknote that no one ever has in stock! You’ll find it all here, online 24/7!”
YouTube, TikTok, and other platforms are also 24/7. They’re also being powered more and more by AI. It won’t be long before AI is condensing these guides and turning them into consumable videos. There are already channels that make study guides, but homework still requires more than an AI answer.
ChatGPT and other generative AI algorithms are getting smarter by being trained on datasets pulled from the Internet. These datasets include books, videos, and more. In the future, students will be relying on study guides in video format. The question to ask is: how will they look? Will they summarize an entire book in fifteen seconds, take it chapter by chapter, or make movies powered by AI?
Whitney Grace, April 22, 2024
Publishers Not Thrilled with Internet Archive
April 15, 2024
So you are saving the library of an island? So what?
The non-profit Internet Archive (IA) preserves digital history. It also archives a wealth of digital media, including a large number of books, for the public to freely access. Certain major publishers are trying to stop the organization from sharing their books. These firms just scored a win in a New York federal court. However, the IA is not giving up. In its defense, the organization has pointed to the opinions of authors and copyright scholars. Now, Hachette, HarperCollins, John Wiley, and Penguin Random House counter with their own roster of experts. TorrentFreak reports, “Publishers Secure Widespread Support in Landmark Copyright Battle with Internet Archive.” Journalist Ernesto Van der Sar writes:
“The importance of this legal battle is illustrated by the large number of amicus briefs that are filed by third parties. Previously, IA received support from copyright scholars and the Authors Alliance, among others. A few days ago, another round of amicus came in at the Court of Appeals, this time to back the publishers who filed their reply last week. In more than a handful of filings, prominent individuals and organizations urge the Appeals Court not to reverse the district court ruling, arguing that this would severely hamper the interests of copyright holders. The briefs include positions from industry groups such as the MPA, RIAA, IFPI, Copyright Alliance, the Authors Guild, various writers unions, and many others. Legal scholars, professors, and former government officials, also chimed in.”
See the article for more details on those chimes. A couple points to highlight: First, AI is a part of this because of course it is. Several trade groups argue IA makes high-quality texts too readily available for LLMs to train upon, posing an “artificial intelligence” threat. Also of interest are the opinions that differentiate this case from the Google Books precedent. We learn:
“[Scholars of relevant laws] stress that IA’s practice should not be seen as ‘transformative’ fair use, arguing that the library offers a ‘substitution’ for books that are legally offered by the publishers. This sets the case apart from current legal precedents including the Google Books case, where Google’s mass use of copyrighted books was deemed fair use. ‘IA’s exploitation of copyrighted books is thus the polar opposite of the copying that was found to be transformative in Google Books and HathiTrust. IA offers no “utility-expanding” searchable database to its subscribers.’”
Ah, the devilish details. Will these amicus-rich publishers prevail, or will the decision be overturned on IA’s appeal?
Cynthia Murrell, April 15, 2024
The University of Illinois: Unintentional Irony
March 22, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I admit it. I was in the PhD program at the University of Illinois at Champaign-Urbana (aka Chambana). There was nothing like watching a storm build from the upper floors of now departed FAR. I spotted a university news release titled “Americans Struggle to Distinguish Factual Claims from Opinions Amid Partisan Bias.” From my point of view, the paper presents research that says that half of those in the sample cannot distinguish truth from fiction. That’s a fact easily verified by visiting a local chain store, purchasing a product, and asking the clerk to provide the change in a specific way; for example, “May I have two fives and five dimes, please?” Putting data behind personal experience is a time-honored chore in the groves of academe.
Discerning people can determine “real” from “original fakes.” Well, only half the people can, it seems. The problem is defining what’s true and what’s false. Thanks, MSFT Copilot. Keep working on your security. Those breaches are “real.” Half the time is close enough for horseshoes.
Here’s a quote from the write up I noted:
“How can you have productive discourse about issues if you’re not only disagreeing on a basic set of facts, but you’re also disagreeing on the more fundamental nature of what a fact itself is?” — Matthew Mettler, a U. of I. graduate student and co-author of the study with Jeffery J. Mondak, a professor of political science and the James M. Benson Chair in Public Issues and Civic Leadership at Illinois.
The news release about Mettler’s and Mondak’s research contains this statement:
But what we found is that, even before we get to the stage of labeling something misinformation, people often have trouble discerning the difference between statements of fact and opinion…. “What we’re showing here is that people have trouble distinguishing factual claims from opinion, and if we don’t have this shared sense of reality, then standard journalistic fact-checking – which is more curative than preventative – is not going to be a productive way of defanging misinformation,” Mondak said. “How can you have productive discourse about issues if you’re not only disagreeing on a basic set of facts, but you’re also disagreeing on the more fundamental nature of what a fact itself is?”
But the research suggests that highly educated people cannot differentiate made-up data from non-weaponized information. What struck me is that Harvard’s Misinformation Review published this U of I research that provides a road map to fooling peers and publishers. Harvard University, like Stanford University, has found that certain big-time scholars violate academic protocols.
I am delighted that the U of I research is getting published. My concern is that Harvard’s Misinformation Review may not find my laughing at it to its liking. Harvard illustrates that academic transgressions cannot be identified by half of those exposed to the confections of up-market academics.
Should Messrs. Mettler and Mondak have published their research in another journal? That’s a good question, but I am no longer convinced that professional publications have more credibility than the outputs of a content farm. Such is the erosion of once-valued norms. Another peril of thumb typing is present.
Stephen E Arnold, March 22, 2024