AI May Fizzle and the New York Times Is Thrilled

April 7, 2025

Yep, a dinobaby blog post. No smart software required.

I read “The Tech Fantasy That Powers A.I. Is Running on Fumes.” Is this a gleeful headline or not? Not even 10 days after the Italian “all AI” newspaper found itself the butt of merciless humor, the NYT is going for the jugular.

The write up opines:

  • “Midtech” — tech but not really
  • “Silly” — Showing little thought or judgment
  • “Academics” — Ivory tower dwellers, not real journalists and thinkers

Here’s a quote from a person who obviously does not like self-checkouts:

The economists Daron Acemoglu and Pascual Restrepo call these kinds of technological fizzles “so-so” technologies. They change some jobs. They’re kind of nifty for a while. Eventually they become background noise or are flat-out annoying, say, when you’re bagging two weeks’ worth of your own groceries.

And now the finale:

But A.I. is a parasite. It attaches itself to a robust learning ecosystem and speeds up some parts of the decision process. The parasite and the host can peacefully coexist as long as the parasite does not starve its host. The political problem with A.I.’s hype is that its most compelling use case is starving the host — fewer teachers, fewer degrees, fewer workers, fewer healthy information environments.

My thought is that the “real” journalists at the NYT hope that AI fails. Most routine stories can be handled by smart software. Sure, there are errors. But looking at a couple of versions of the same event is close enough for horseshoes.

The writing is on the wall of the bean counters’ offices: Reduce costs. Translation: Some “real” journalists can try to get jobs as big time consultants. Oh, strike that. Outfits that sell brains are replacing flakey MBAs with smart software. Well, there is PR and marketing. Oh, oh, strike that too. Telegram’s little engines of user-controlled smart software can automate ads. Will other ad outfits follow Telegram’s lead? Absolutely.

Yikes. It won’t be long before some “real” journalists have an opportunity to write their version of:

  • Du côté de chez Swann
  • À l’ombre des jeunes filles en fleurs
  • Le Côté de Guermantes
  • Sodome et Gomorrhe
  • La Prisonnière
  • Albertine disparue (also published as La Fugitive)
  • Le Temps retrouvé

Which one will evoke the smell of the newsroom?

Stephen E Arnold, April 7, 2025

Original Research: Not-So-Original Assertions about Content Appropriation

April 2, 2025

No AI. Just a dinobaby sharing an observation about younger managers and their innocence.

The Social Science Research Council published the 30-plus page report “Beyond Public Access in LLM Pre-Training Data.” The subtitle reveals the principal finding: “Non-Public Book Content in OpenAI’s Models.”

The write up states:

Using a legally obtained dataset of 34 copyrighted O’Reilly Media books, we apply the DE-COP membership inference attack method to investigate whether OpenAI’s large language models were trained on copyrighted content without consent. Our AUROC scores show that GPT-4o, OpenAI’s more recent and capable model, demonstrates strong recognition of paywalled O’Reilly book content (AUROC = 82%), compared to OpenAI’s earlier model GPT-3.5 Turbo. In contrast, GPT-3.5 Turbo shows greater relative recognition of publicly accessible O’Reilly book samples. GPT-4o Mini, as a much smaller model, shows no knowledge of public or non-public O’Reilly Media content when tested (AUROC ≈ 50%). Testing multiple models, with the same cutoff date, helps us account for potential language shifts over time that might bias our findings. These results highlight the urgent need for increased corporate transparency regarding pre-training data sources as a means to develop formal licensing frameworks for AI content training.

I want to mention that the DE-COP method provides one way to determine whether a specific data record was part of the training dataset for a machine learning model. The results of the SSRC’s research suggest that as OpenAI enhanced its models, the organization appears to have used non-public content. Smaller (less capable) OpenAI models seemed to know less about the test set; the bigger models knew more.
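The DE-COP mechanics are worth a short illustration. The method turns membership inference into a multiple-choice quiz: the model sees one verbatim passage alongside machine-generated paraphrases, and a model that trained on the book picks the verbatim option more often than chance. Below is a minimal sketch of that loop, assuming a hypothetical `model_pick` callable that queries the model under test and a hypothetical `paraphraser` that produces the decoys; it is an illustration of the idea, not the SSRC team’s code.

```python
# A minimal sketch of a DE-COP-style membership quiz, not the SSRC code.
# Assumptions: `model_pick` is a hypothetical callable that sends a prompt
# to the model under test and returns one of the option letters;
# `paraphraser` stands in for the separate model that generates decoys.
import random

def decop_quiz(model_pick, verbatim, decoys):
    """Return 1 if the model identifies the verbatim passage, else 0."""
    options = [verbatim] + list(decoys)
    random.shuffle(options)
    labels = "ABCD"[:len(options)]
    prompt = "Which option reproduces the book passage exactly?\n" + "\n".join(
        f"{label}. {text}" for label, text in zip(labels, options)
    )
    picked = model_pick(prompt).strip()[0]   # assumed to be an option letter
    return int(options[labels.index(picked)] == verbatim)

def membership_score(model_pick, passages, paraphraser, trials=4):
    """Fraction of quizzes won across one document's passages.

    A model that never saw the text should hover near chance
    (1 / number of options); a model trained on it scores higher.
    """
    wins = total = 0
    for passage in passages:
        for _ in range(trials):
            wins += decop_quiz(model_pick, passage, paraphraser(passage))
            total += 1
    return wins / total
```

Scoring a suspected book this way, scoring known public passages the same way, and computing AUROC across the two score distributions is what yields figures like the 82% for GPT-4o versus the coin-flip 50% for GPT-4o Mini.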

The write up concludes that developers of smart software are sucking down data without obtaining permission or entering into licensing deals with publishers, copyright holders, or entities producing content not yet public, like this dinobaby blog post.

To sum up: A not-so-original assertion that certain organizations just take content and ignore rights is apparently accurate in this set of research. OpenAI accelerates. Speed is a positive. Breaking speed limits is believed to be no big deal.

A bit of a flashback: Over 40 years ago, I participated in a series of meetings to license the right to create abstracts of original articles from more than 1,000 different publications. Our efforts were successful. Most publishers of academic business and management publications knew zero about online information services. The idea of summarizing an eight-page article about executive compensation was unfamiliar to most of the people with whom our organization interacted over a period of years. Nevertheless, the owner of the database unit of the Courier Journal & Louisville Times Company insisted that we get signed agreements. The process was time-consuming and expensive. Some outfits had representatives, so a negotiation went through layers of bureaucracy. Other publishers wanted a sit-down meeting and a dog-and-pony show to explain why we wanted to boil down a 600-word article to about 150 words. There were worries about the impact of online access. There were worries about control of editorial content. There were worries about signing away rights. We couldn’t assuage worries, but we chugged along. The reputations of the Courier Journal and Barry Bingham Jr. helped, but our small database operation had to follow the rules.

If the data in the cited document are accurate, the attitude of Barry Bingham Jr. would not have gained much traction at OpenAI. Instead of my getting a pat on the head for setting up the licensing deals with the able help of other Courier Journal professionals, I would have gotten kicked on the shin or worse.

Several observations:

  1. Observing content ownership remains important despite the acceleration toward illegal behaviors in the online sector.
  2. Using show-and-tells and explaining the benefits of participating in digital innovations remains important. Knowledge minimizes anger and other negative reactions to pre-emptive behavior.
  3. Managing the process is as important as the end product; demonstrating an unwillingness to conform to industry, social, and legal norms is not a viable substitute.

Let’s assume that the findings in the SSRC report are off by 50 percent. Does that change what appears to be a strong possibility of some fancy dancing at the OpenAI disco?

My conclusion is that taking shortcuts is part of the Silicon Valley approach of doing whatever and just saying, “Wow, I am sorry.”

I asked ChatGPT 4 via You.com this question:

Should I create an online product and just use content from the Harvard Business Review without permission?

Here’s the response:

No, you should not create an online product using content from the Harvard Business Review (HBR) without permission. Doing so would likely violate copyright laws and HBR’s usage policies. Here’s why:

HBR’s Copyright and Usage Policies
  1. HBR Material is Copyrighted: Harvard Business Review explicitly states that its content is protected by copyright, and no part of its publications may be reproduced without permission.
  2. Usage Restrictions: HBR has strict policies regarding how its content can be used. For example, even in academic settings, HBR articles cannot be freely distributed unless done through specific channels like creating course packs via "Harvard Business Publishing for Educators."
  3. Permission is Required: If you want to use HBR content, you must obtain explicit permission. This includes reproducing, distributing, or using their material in any form.

Legal and Ethical Implications
  • Copyright Infringement: Using HBR content without permission would likely constitute copyright infringement, which can result in legal action, fines, or other penalties.
  • Ethical Concerns: Beyond legal issues, using someone else’s intellectual property without authorization is unethical and could harm your reputation or business.
What You Should Do Instead
  1. Seek Permission: Contact Harvard Business Review to request permission to use their content. They may have licensing options available for your intended use.
  2. Create Original Content: Instead of using HBR’s material, consider creating your own original content inspired by their ideas (but without copying their text or proprietary materials).
  3. Use Public Domain or Licensed Content: Look for content that is in the public domain or available under a license (e.g., Creative Commons) that allows commercial use.
Conclusion

Using HBR content without permission is not only illegal but also against their explicit policies. To avoid legal and ethical issues, either seek proper licensing or create your own original material.

In this case, OpenAI’s AI may be more intelligent than some people engaged in content acquisition at the firm. But I am a dinobaby. I try to follow the social, legal, and ethical rules my parents, who had only high school educations, taught me. I am a dinobaby. A sad dinobaby.

Stephen E Arnold, April 2, 2025

A Reminder That Academic Papers Can Be Science Fiction

April 2, 2025

Dinobaby says, “No smart software involved. That’s for ‘real’ journalists and pundits.”

For many years, I have been skeptical about peer reviewed papers. There are two reasons.

First, I did some consulting work for an outfit in the UK. It was a crown operation. That means its outputs carried considerable influence. One of the projects in which I was engaged involved peer review of research under the purview of the “institute.” Wow, did I learn why there was a publishing backlog. Wow, did I learn about the cronyism in reviewing “real” research. Wow, did I learn about the wheeling and dealing of peer reviewers among their research assistants. Wowzah. That was an education.

Second, for a short time I was a peer reviewer for a British journal. Let me tell you that my firsthand exposure to the mechanics and politics of peer reviewing did not prepare me for the reviewing task. A typical submission contained text edited by several hands. None of these was doing fine needlework. A stab here and a stab there summed up the submitted documents. The data and the charts? I had a couple of my team help me figure out if the charts were semi-accurate. Working through a five- or six-page article sent to me for review took me and two people a week to process. In most cases, we gave the paper a D and sent it back to the editor in chief, who had to tell the author and his legion of busy bees that the paper sucked. I bailed after six months. Too much work to fix up stuff that was truly terrible.

Today I read “Sometimes Papers Contain Obvious Lies.” That’s a good title, but my thought would be to include the phrase “and Really Crappy.” But I am a dinobaby, and I live in rural Kentucky. The author Cremieux Recueil is much classier than I.

I noted this passage:

The authors of scientific papers often say one thing and find another; they concoct a story around a set of findings that they might not have even made, or which they might have actually even contradicted. This happens surprisingly often, and it’s a very serious issue…

No kidding. The president of Stanford University resigned due to some allegations of fancy dancing. The — note the “the” — Harvard University experienced a bit of excitement in its ethics department. Is that an oxymoron? An ethics professor violated “ethics” in some research cartwheels.

I liked this sentence because it is closer to my method of communicating concern:

Lying in scientific papers happens all the time.

Hey, not just in scientific papers. I encounter lying 24×7. If someone is not articulating a fabrication, the person may be living a lie. I hear the roar of a 20-something’s hypercar at the gym. Do you?

The essay focuses on a paper with some razzle dazzle related to crime data. The author’s analysis is accurate. However, the focus on a single example does not put the scale of the “crime data” problem in perspective.

Let me give you an example and you can test this for validity yourself. Go to your bank. Ask the “personal banker” to tell you about the bank’s experience with cyber crime. Then ask, “How many fraudulent transactions occur at this bank location each year?” Listen to the answer.

Crime data, like health care data, are slippery fish. Numbers do not correlate with reality when scrutinized. Verifiable, statistically valid data are expensive to generate. We live in a “good enough” world and trust whatever black box (human or mechanical) spits out data.

I do disagree with this statement in the essay:

scientists often lie with far more brazenness.

No. Fabrication is now the business of information and the information of business.

Stephen E Arnold, April 2, 2025

Journalism Is Now Spelled Journ-AI-sm

March 24, 2025

Another dinobaby blog post. Eight decades and still thrilled when I point out foibles.

When I worked at a “real” newspaper, I enjoyed listening to “real” journalists express their opinions on everything. Some were experts in sports and knew little details about Louisville basketball. Others were “into” technology and tracked the world of gadgets — no matter how useless — and regaled people at lunch with descriptions of products that would change everything. Everything? Yeah. I heard about the rigors of journalism school. The need to work on either the college radio station or the college newspaper. These individuals fancied themselves Renaissance men and women. That’s okay, but do bean counters need humans to “cover” the news?

The answer is, “Probably not.”

“Italian Newspaper Says It Has Published World’s First AI-Generated Edition” suggests that “real” humans covering the news may face a snow leopard or Maui ‘Alauahio moment. The article reports:

An Italian newspaper has said it is the first in the world to publish an edition entirely produced by artificial intelligence. The initiative by Il Foglio, a conservative liberal daily, is part of a month-long journalistic experiment aimed at showing the impact AI technology has “on our way of working and our days”, the newspaper’s editor, Claudio Cerasa, said.

The smart software is not just spitting out “real” news. The system does “headlines, quotes, and even the irony.” Wow. Irony from smart software.

According to the “real” journalist who read the stories in the paper:

The articles were structured, straightforward and clear, with no obvious grammatical errors. However, none of the articles published in the news pages directly quote any human beings.

That puts Il Foglio ahead of Smartnews’ articles. Wow, are some of those ungrammatical and poorly structured! In my opinion, I would toss in the descriptor “incoherent.”

What do I make of Il Foglio’s trial? That’s an easy question:

  1. If the smart software is good enough and allows humans to be given an opportunity to find their future elsewhere, smart software is going to be used. A few humans might be rehired if revenues tank, but the writing is on the wall of the journalism school.
  2. Bean counters know one thing: Counting beans. If the smart software generates financial benefits, the green eye shade crowd will happily approve licensing smart software.
  3. Readers may not notice or not care. Headline. First graf. Good to go.

Will the future pundits, analysts, marketing specialists, PR professionals, and LLM trainers find journalistic joy? Being unhappy at work and paying bills is one thing; being happy doing news while knowing that smart software is coming for the journalism jobs is another.

I would not return to college to learn how to be a “real” journalist. I would stay home, eat snacks, and watch game show re-runs. Good enough life plan, right?

Why worry? Il Foglio is just doing a small test.

Stephen E Arnold, March 24, 2025

Google Experiment: News? Nobody Cares So Ad Impact Is Zero, Baby, Zero

March 24, 2025

Dinobaby, here. No smart software involved unlike some outfits.

I enjoy reading statistically valid wizard studies from monopolistic outfits. “Our Experiment on the Value of European News Content” reports a wonderful result: Nobody cares if Googzilla does not index “real” news. That’s it. The online ad outfit conclusively proves that “real” news is irrelevant.

The write up explains:

The results have now come in: European news content in Search has no measurable impact on ad revenue for Google. The study showed that when we removed this content, there was no change to Search ad revenue and a <1% (0.8%) drop in usage, which indicates that any lost usage was from queries that generated minimal or no revenue. Beyond this, the study found that combined ad revenue across Google properties, including our ad network, also remained flat.
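The inference in that paragraph is simple arithmetic, and a toy calculation makes it concrete. The baseline figures below are hypothetical placeholders; only the 0.8% usage drop and the flat revenue come from Google’s statement.

```python
# Toy illustration of the inference in Google's quoted result. The baseline
# figures are hypothetical; the study reported only a <1% (0.8%) usage drop
# and flat ad revenue.
baseline_queries = 1_000_000
baseline_revenue = 50_000.00                            # hypothetical dollars

holdout_queries = int(baseline_queries * (1 - 0.008))   # 0.8% fewer queries
holdout_revenue = baseline_revenue                      # "no change," per Google

lost_queries = baseline_queries - holdout_queries
lost_revenue = baseline_revenue - holdout_revenue

# Flat revenue over fewer queries means the lost queries monetized at ~$0.
print(f"Lost queries: {lost_queries:,}")                              # 8,000
print(f"Revenue per lost query: ${lost_revenue / max(lost_queries, 1):.2f}")  # $0.00
```

If the removed queries had carried meaningful ad revenue, the total could not have stayed flat; that single observation is the whole argument of the study.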

What should those with a stake in real news conclude? From my point of view, Google is making crystal clear that publishers need to shut up or else. What’s the “else”? Google stops indexing “real” news sites. Where will those “real” news sites get traffic? Bear Blog, a link from YCombinator Hacker News, a Telegram Group, Twitter, or TikTok?

Sure, absolutely.

Several observations:

  1. Fool around with a monopoly in the good old days, and some people found their Iowa town lost its train stop or the local gas stations could not get fuel. Now it is search traffic. Put that in your hybrid.
  2. Google sucks down data. Those who make data available to the Google are not likely to be invited to the next Sundar & Prabhakar Comedy Show.
  3. Google will continue to flip the digital bird at the EU, stopping when the lawsuits go away and publishers take their medicine and keep quiet. The crying and whining is annoying.

One has to look forward to Google’s next research study, doesn’t one?

Stephen E Arnold, March 24, 2025

AI Checks Professors’ Work: Who Is Hallucinating?

March 19, 2025

This blog post is the work of a humanoid dinobaby. If you don’t know what a dinobaby is, you are not missing anything. Ask any 80-year-old, why don’t you?

I read an amusing write up in Nature Magazine, a publication which does not often veer into MAD Magazine territory. The write up “AI Tools Are Spotting Errors in Research Papers: Inside a Growing Movement” has a wild subtitle as well: “Study that hyped the toxicity of black plastic utensils inspires projects that use large language models to check papers.”

Some have found that outputs from large language models often make up information. I have included references in my writings to Google’s cheese errors and lawyers submitting court documents with fabricated legal references. The main point of this Nature article is that presumably rock solid smart software will check the work of college professors, pals in the research industry, and precocious doctoral students laboring for love and not much money.

Interesting, but will hallucinating smart software find mistakes in the work of people like the former president of Stanford University and Harvard’s former ethics star? Well, sure, peers and co-authors cannot be counted on to do work and present it without a bit of Photoshop magic or data recycling.

The article reports that there are two efforts underway to get those wily professors to run their “work” or science fiction through systems developed by Black Spatula and YesNoError. The Black Spatula project emerged from tweaked research that said, “Your black kitchen spatula will kill you.” The YesNoError effort is similar but with a crypto twist. Yep, crypto.

Nature adds:

Both the Black Spatula Project and YesNoError use large language models (LLMs) to spot a range of errors in papers, including ones of fact as well as in calculations, methodology and referencing.

Assertions and claims are good. Black Spatula markets with the assurance its system “is wrong about an error around 10 percent of the time.” The YesNoError crypto wizards “quantified the false positives in only around 100 mathematical errors.” Ah, sure, low error rates.
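Neither project publishes a reference implementation in the Nature write up, so the sketch below is a guess at the general shape of such a checker: a hypothetical `ask_llm` wrapper around any chat-completion API, a prompt that requests structured findings, and a crude majority vote as one cheap hedge against the roughly 10 percent false-positive rate Black Spatula concedes.

```python
# A guess at the shape of an LLM paper checker in the spirit of Black
# Spatula / YesNoError; not their code. `ask_llm` is a hypothetical wrapper
# around any chat-completion API, assumed to return the raw reply text.
import json

def build_prompt(excerpt: str) -> str:
    return (
        "You are auditing a research paper excerpt.\n"
        "List every error you find in facts, calculations, methodology, "
        "or referencing.\n"
        'Reply with a JSON list: [{"claim": "...", "problem": "..."}]\n\n'
        "Excerpt:\n" + excerpt
    )

def check_excerpt(ask_llm, excerpt: str, votes: int = 3) -> list[str]:
    """Flag suspected errors that survive a majority vote across runs.

    Re-running the same audit and keeping only repeated findings is one
    cheap hedge against hallucinated 'errors'; it reduces, not removes,
    false positives, so a human still reviews everything flagged.
    """
    tallies: dict[str, int] = {}
    for _ in range(votes):
        for finding in json.loads(ask_llm(build_prompt(excerpt))):
            tallies[finding["claim"]] = tallies.get(finding["claim"], 0) + 1
    return [claim for claim, count in tallies.items() if count > votes // 2]
```

A crypto token, a web front end, or bulk processing of preprints would sit on top of something like this; the hard part the Nature piece points at is the residual error rate, not the plumbing.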

I loved the last paragraph of the MAD-inspired effort and report:

these efforts could reveal some uncomfortable truths. “Let’s say somebody actually made a really good one of these… in some fields, I think it would be like turning on the light in a room full of cockroaches…”

Hallucinating smart software. Professors who make stuff up. Nature Magazine channeling important developments in research. Hey, has Nature Magazine ever reported bogus research? Has Nature Magazine run its stories through these systems?

Good question. Might be a good idea.

Stephen E Arnold, March 19, 2025

What Sells Books? Publicity, Sizzle, and Mouth-Watering Titbits

March 18, 2025

Another dinobaby blog post.

Editor note: This post was written on March 13, 2025. Availability of the articles and the book cited may change when this appears in Mr. Arnold’s public blog.

I have heard that books are making a comeback. In rural Kentucky, where I labor in an underground nook, books are good for getting a fire started. The closest bookstore is filled with toys and odd stuff one places on a desk. I am rarely motivated to read a whatchamacallit like a book. I must admit that I read one of those emergence books from a geezer named Stuart A. Kauffman at the Santa Fe Institute, and it was pretty good. Not much in the jazzy world of social media, but it was a good use of my time.

I now have another book I want to read. I think it is a slice of reality TV encapsulated in a form of communication less popular than TikTok- or Telegram Messenger-type media. The bundle of information is called Careless People: A Cautionary Tale of Power, Greed, and Lost Idealism. Many “real” journalists and pundits have grabbed the story of a dispute between everyone’s favorite social media company and an authoress named Sarah Wynn-Williams.

There is nothing like some good old legal action, a former employee, and a very defensive company.

The main idea is that a memoir published on March 11, 2025, and available via Amazon at https://shorturl.at/Q077l is not supposed to be sold. Like any good dinobaby who actually read a dead tree thing this year, I bought the book. I have no idea if it has been delivered to my Kindle. I know one thing. Good old Amazon will be able to reach out and kill that puppy when the news reaches the equally sensitive leadership at that outstanding online service.


A festive group ready to cook dinner over a small fire of burning books. Thanks, You.com. Good enough.

According to The Verge, CNBC, and the Emergency International Arbitral Tribunal, an arbitrator (Nicholas Gowen) decided that the book has to be put in the information freezer. According to the Economic Times:

… violated her contract… In addition to halting book promotions and sales, Wynn-Williams must refrain from engaging in or ‘amplifying any further disparaging, critical or otherwise detrimental comments.’ She also must retract all previous disparaging comments ‘to the extent within her control.’

My favorite green poohbah publication, The Verge, offered:

…it’s unclear how much authority the arbitrator has to do so.

Such a bold statement: It’s unclear, we say.

The Verge added:

In the decision, the arbitrator said Wynn-Williams must stop making disparaging remarks against Meta and its employees and, to the extent that she can control, cease further promoting the book, further publishing the book, and further repetition of previous disparaging remarks. The decision also says she must retract disparaging remarks from where they have appeared.

Now I have written a number of books and monographs. These have been published by outfits no longer in business. I had a publisher in Scandinavia. I had a publisher in the UK. I had a publisher in the United States. A couple of these actually made revenue and one of them snagged a positive review in a British newspaper.

But in all honesty, no one really cared about my Google, search and retrieval, and electronic publishing work.

Why?

I did not have a giant company chasing me to the Emergency International Arbitral Tribunal and making headlines for the prestigious outfit CNBC.

Well, in my opinion Sarah Wynn-Williams has hit a book publicity home run. Imagine, non-readers like me buying a book about a firm to which I pay very little attention. Instead of writing about the Zuckbook, I am finishing a book (gasp!) about Telegram Messenger and that sporty baby maker Pavel Durov. Will his “core” engineering team chase me down? I wish. Sarah Wynn-Williams is in the news.

Will Ms. Wynn-Williams “win” a guest spot on the Joe Rogan podcast or possibly the MeidasTouch network? I assume that her publisher, agent, and she have their fingers crossed. I heard somewhere that any publicity is good publicity.

I hope Mr. Beast picks up this story. Imagine what he would do with forced arbitration and possibly a million dollar payoff for the PR firm that can top the publicity Meta has apparently delivered to Ms. Wynn-Williams.

Net net: Win, Wynn!

Stephen E Arnold, March 18, 2025

Tales of Silicon Valley Management Method: Perceived Cruelty

February 21, 2025

A dinobaby post. No smart software involved.

I read an interesting write up. Is it representative? A social media confection? A suggestion that one of the 21st century’s masters of the universe harbors Vlad the Impaler behavior? I don’t know. But the article “Laid-Off Meta Employees Blast Zuckerberg for Running the Cruelest Tech Company Out There As Some Claim They Were Blindsided after Parental Leave” caught my attention. Note: This is a paywalled write up, and you have to pay up.

Straight away I want to point out:

  • AI does not have organic carbon-based babies — at least not yet
  • AI does not require health care — routine maintenance, yes, but the downtime should be less than a year
  • AI does not complain on social media about its gradient descents and Bayesian drift — hey, some do like the new “I remember” AI from Google.

Now back to the write up. I noted this passage:

Over on Blind, an anonymous app for verified employees often used in the tech space, employees are noting that an unseasonable chill has come over Silicon Valley. Besides allegations of the company misusing the low-performer label, some also claimed that Meta laid them off while they were taking approved leave.

Yep, a social media business story.

There are other tech giants in the story, but one is cited as a source of an anonymous post:

A Microsoft employee wrote on Blind that a friend from Meta was told to “find someone” to let go even though everyone was performing at or above expectations. “All of these layoffs this year are payback for 2021–2022,” they wrote. “Execs were terrified of the power workers had [at] that time and saw the offers and pay at that time [are] unsustainable. Best way to stop that is put the fear of god back in the workers.”

I think that a big time, mainstream business publication has found a new source of business news: Employee complaint forums.

In the 1970s I worked with a fellow who was a big time reporter for Fortune. He ended up at the blue chip consulting firm helping partners communicate. He communicated with me. He explained how he tracked down humans, interviewed them, and followed up with experts to crank out enjoyable fact-based feature stories. He seemed troubled that the approach at a big time consulting firm was different from that of a big time magazine in Manhattan. He had an attitude, and he liked spending months working on a business story.

I recall him because he liked explaining his process.

I am not sure the story about the cruel Zuckster would have been one that he would have written. What’s changed? I suppose I could answer the question if I prowled social media employee grousing sites. But we are working on a monograph about Telegram, and we are taking a different approach. I suppose my method is closer to what my former colleague did in his Fortune days, reduced like a French sauce by the approach I learned at the blue chip consulting firm.

Maybe I should use social media research, anonymous sources, and something snappy like cruelty to enliven our work? Nah, probably not.

Stephen E Arnold, February 21, 2025

TikTok Alleged to Be Spying on … Journalists

February 19, 2025

It is an open secret that TikTok is spying on the West and collecting piles of information on (maybe) unsuspecting victims. Forbes, however, allegedly has evidence of TikTok spying on its reporters: “TikTok Spied On Forbes Journalists.”

ByteDance, TikTok’s parent company, conducted an internal investigation and discovered that its employees tracked journalists who were reporting on the company. The audit also revealed that employees used the journalists’ user data to determine whether they had been in close proximity to ByteDance employees.

“According to materials reviewed by Forbes, ByteDance tracked multiple Forbes journalists as part of this covert surveillance campaign, which was designed to unearth the source of leaks inside the company following a drumbeat of stories exposing the company’s ongoing links to China. As a result of the investigation into the surveillance tactics, ByteDance fired Chris Lepitak, its chief internal auditor who led the team responsible for them. The China-based executive Song Ye, who Lepitak reported to and who reports directly to ByteDance CEO Rubo Liang, resigned.”

ByteDance didn’t deny the surveillance, but said that TikTok couldn’t monitor people as the article suggested. The parent company also said it didn’t target journalists, public figures, US government members, or political activists. It’s funny that TikTok is trying to convince the Trump administration that it’s a benign force when this story suggests the opposite.

All of this is alleged, of course. But it is an interesting story because journalists don’t do news. Journalists are pundits, consultants, and podcasters.

Stephen E Arnold, February 19, 2025

Real AI News? Yes, with Fact Checking, Original Research, and Ethics Too

February 17, 2025

This blog post is the work of a real-live dinobaby. No smart software involved.

This is “real” news… if the story is based on fact checking, original research, and those journalistic ethics pontifications. Let’s assume that these conditions of old-fashioned journalism apply. This means that the story “New York Times Goes All-In on Internal AI Tools” pinpoints a small shift in how “real” news will be produced.

The write up asserts:

The New York Times is greenlighting the use of AI for its product and editorial staff, saying that internal tools could eventually write social copy, SEO headlines, and some code.

Yep, some. There’s ground truth (that’s an old-fashioned journalism concept) in blue-chip consulting. The big money maker is what’s called scope creep. Stated simply, one starts small, like a test or a trial. Then, if the sky does not fall as quickly as some companies’ revenue, the small gets a bit larger. You check to make sure the moon is in the sky and the revenues are not falling, hopefully not as quickly as before. Then you expand. At each step there are meetings, presentations, analyses, and group reassurances from others in the deciders category. Then — like magic! — the small project is the rough equivalent of a nuclear-powered aircraft carrier.

Ah, scope creep.

Understate what one is trying. Watch it. Scale it. End up with an aircraft-carrier-scale project. Yes, it is happening at an outfit like the New York Times if the cited article is accurate.

What scope creep stage setting appears in the write up? Let’s look:

  1. Staff will be trained. Your job, one assumes, is safe. (Ho ho ho)
  2. AI will help uncover “the truth.” (Absolutely)
  3. More people will benefit. (Don’t forget the stakeholders, please)

What’s the write up presenting as actual factual?

The world’s greatest newspaper will embrace hallucinating technology, but only a little bit.

Scope creep begins, and it won’t change a thing, but that information will appear once the cost savings, revenue, and profit data become available at the speed of newspaper decision making.

Stephen E Arnold, February 17, 2025
