What Do Gamers Know about AI? Nothing, Nothing at All
February 20, 2025
Take-Two CEO says, "There’s no such thing" as AI.
Is the head of a major gaming publisher using semantics to downplay the role of generative AI in his industry? PC Gamer reports, "Take-Two CEO Strauss Zelnick Takes a Moment to Remind Us Once Again that ‘There’s No Such Thing’ as Artificial Intelligence." Writer Andy Chalk quotes Zelnick from a recent GamesIndustry.biz interview:
"Artificial intelligence is an oxymoron, there’s no such thing. Machine learning, machines don’t learn. Those are convenient ways to explain to human beings what looks like magic. The bottom line is that these are digital tools and we’ve used digital tools forever. I have no doubt that what is considered AI today will help make our business more efficient and help us do better work, but it won’t reduce employment. To the contrary, the history of digital technology is that technology increases employment, increases productivity, increases GDP and I think that’s what’s going to happen with AI. I think the videogame business will probably be on the leading, if not bleeding, edge of using AI."
So AI, which does not exist, will actually create jobs instead of eliminating them? The write up correctly notes the evidence points to the contrary. On the other hand, Zelnick seems clear-eyed on the topic of copyright violations. AI-on-AI violations, anyway. We learn:
"That’s a mess Zelnick seems eager to avoid. ‘In terms of [AI] guardrails, if you mean not infringing on other people’s intellectual property by poaching their LLMs, yeah, we’re not going to do that,’ he said. ‘Moreover, if we did, we couldn’t protect that, we wouldn’t be able to protect our own IP. So of course, we’re mindful of what technology we use to make sure that it respects others’ intellectual property and allows us to protect our own.’"
Perhaps Zelnick is on to something. It is true that generative AI is just another digital tool—albeit one that tends to put humans out of work. But as we know, hype is more important than reality for those chasing instant fame and riches.
Cynthia Murrell, February 20, 2025
Smart Software and Law Firms: Realities Collide
February 19, 2025
This blog post is the work of a real-live dinobaby. No smart software involved.
TechCrunch published “Legal Tech Startup Luminance, Backed by the Late Mike Lynch, Raises $75 Million.” Good news for Luminance. Now the company just needs to ring the bell for those putting up the money. The write up says:
Claiming to be capable of highly accurate interrogation of legal issues and contracts, Luminance has raised $75 million in a Series C funding round led by Point72 Private Investments. The round is notable because it’s one of the largest capital raises by a pure-play legal AI company in the U.K. and Europe. The company says it has raised over $115 million in the last 12 months, and $165 million in total. Luminance was originally developed by Cambridge-based academics Adam Guthrie (founder and chief technical architect) and Dr. Graham Sills (founder and director of AI).
Why is Luminance different? The method is similar to that used by DeepSeek. With concerns about the cost of AI, a method that might be less expensive to get up and keep running seems like a good bet.
However, Eudia has raised $105 million with backing from people familiar with Relativity’s legal business. Law dot com suggests that Eudia will streamline legal business processes.
The article "Massive Law Firm Gets Caught Hallucinating Cases" offers an interesting anecdote about a large law firm facing sanctions. What did the big boys and girls at the law firm do? Those hard-working Type A professionals cited nine cases to support an argument. There is just one trivial issue perplexing the senior partners: eight of those cases were "nonexistent." That means made up, invented, and spit out by a nifty black box of probabilities and its methods.
I am no lawyer. I did work as an expert witness and picked up some insight about the thought processes of big time lawyers. My observations may not apply to the esteemed organizations to which I linked in this short essay, but I will assume that I am close enough for horseshoes.
- Partners want big pay and juicy bonuses. If AI can help reduce costs and add protein powder to the compensation package, AI is definitely a go-to technology to use.
- Lawyers who are very busy every billable hour, and then some, want to be more efficient. The hyperbole swirling around AI makes it clear that using an AI is a productivity booster. Do lawyers have time to check what the AI system did? Nope. Therefore, hallucination is going to be part of transformer-based methodologies until something better becomes feasible. (Did someone say, "Quantum computers"?)
- The marketers (both the directly compensated and the social media remoras) identify a positive. Then that upside is gilded like Tsar Nicholas’ powder room and repeated until it sure seems true.
The reality for the investors is that AI could be a winner. Go for it. The reality for the lawyers is that the time to figure out what’s in bounds and what’s out of bounds is unlikely to be available. Other professionals will discover what the cancer docs did when using the late, great IBM Watson: AI can do some things reasonably well. Other things can have severe consequences.
Stephen E Arnold, February 19, 2025
Now I Get It: Duct Tape Jobs Are the Problem
February 19, 2025
A dinobaby post. No smart software involved.
"Is Ops a Bullsh&t Job?" appears to address the odd world of fix-it people who work on systems of one sort or another. The focus of the write up is on software, but I think the essay reveals broader insight into work today. First, let’s look at a couple of statements in this very good essay, and, second, turn our attention briefly to work outside the software sector.
I noted this passage attributed to an entity allegedly named Pablo:
Basically, we have two kinds of jobs. One kind involves working on core technologies, solving hard and challenging problems, etc. The other one is taking a bunch of core technologies and applying some duct tape to make them work together. The former is generally seen as useful. The latter is often seen as less useful or even useless, but, in any case, much less gratifying than the first kind. The feeling is probably based on the observation that if core technologies were done properly, there would be little or no need for duct tape.
The distinction strikes me as important. The really good programmers work on the “core” part of a system. A number of companies embrace this stratification of the allegedly most talented programmers and developers. This is a spin on what my seventh grade teacher called a “caste system.” I do remember thinking, “It is very important to get to the top of the pyramid; otherwise, life will be a chore.”
Another passage warranted a blue circle:
A “duct taper” is a role that only exists to solve a problem that ought not to exist in the first place.
The essay then provides some examples. Here are three from the essay:
- “My job was to transfer information about the state’s oil wells into a different set of notebooks than they were currently in.”
- “My day consisted of photocopying veterans’ health records for seven and a half hours a day. Workers were told time and again that it was too costly to buy the machines for digitizing.”
- “I was given one responsibility: watching an in-box that received emails in a certain form from employees in the company asking for tech help, and copy and paste it into a different form.”
Good stuff.
With that as background, here’s what I think the essay suggests.
The reason so many gratuitous changes, lousy basic services, and fights at youth baseball games are evident is a lack of meaningful work. Undertaking a project which a person, and everyone else around that individual, knows is meaningless creates a persistent sense of unease.
How is this internal agitation manifested? Let me identify several examples from my experiences this week. None is directly "technical," but lurking in the background is the application of information to a function. When that information is distorted by the duct tape wrapped around a sensitive area, here is what happens in real life.
First, I had to get a tax sticker for my license plate. The number of clerks at the state agency was limited. More people entered than left. The cycle time for a sticker-issuing professional (SIP) was about 75 minutes. When I reached the desk of the SIP, I presented my documents. I learned that my proof of insurance was a one-page summary of the policy I had on my auto. I was told, "We can only accept insurance cards. This is a sheet of paper, not a card. You come back when you have the card. Next." Nifty. Duct tape wrapped around a procedure that required only a policy number and the name of the insurance provider.
Second, I bought three plastic-wrapped packages of bottled water. I picked up a quart of milk. I put a package of raisins in my basket. I went through the self-checkout because no humans worked at the checkout jobs at the time I visited. I scanned my items and placed them in the "Put purchases here" area. I inserted my credit card, and the system honked and displayed, "Stay here. A manager is coming." Okay, I stayed there and noted that the other three self-checkouts were emitting similar messages and honks. I watched as a harried young person tried to determine whether each of the four customers had stolen items. The fix he implemented was to have the four of us rescan our items. My system honked. My milk was not in the store’s system as a valid product. He asked me to step aside, and he entered the product number manually. Success for him. Utter failure for the grocery store.
Third, I picked up two shirts from the cleaners. I like my shirts with heavy starch. The two shirts had no starch. The young person had no idea what to do. I said, "Send the shirts through the process again and have your colleagues dip them in starch." The young worker told me, "We can’t do that. You have to pay the bill, and then I will create a new work order." Sorry. I paid the bill and went to another company’s store.
I am not sure these are duct tape jobs. If I needed the money, I would certainly do the work and try to do my best. The message in the essay is that there are duct tape jobs. I disagree. The worker sees the job as beneath him or her and does not put physical, emotional, or intellectual effort into providing value to the employer or the customer.
Instead we get silly interface changes in Windows. We get truly stupid explanations about why a policy number cannot be entered from a sheet of paper instead of a "card." We get non-functioning checkout systems and employees who don’t say, "Come to the register. I will get these processed and get you out of here as fast as I can."
Duct tape in the essay is about software. I think duct tape is a mindset issue. Use duct tape to make something better.
Stephen E Arnold, February 19, 2025
Speed Up Your Loss of Critical Thinking. Use AI
February 19, 2025
While the human brain isn’t a muscle, its neurology does need to be exercised to maintain plasticity. When a human brain is rigid, it can’t function in a healthy manner. AI is harming brains by making them not think good, says 404 Media: "Microsoft Study Finds AI Makes Human Cognition ‘Atrophied and Unprepared.’" You can read the complete Microsoft research report at this link. (My hunch is that this type of document would have gone the way of Timnit Gebru and the flying stochastic parrot, but that’s just my opinion, Hank, Advait, Lev, Ian, Sean, Dick, and Nick.)
Carnegie Mellon University and Microsoft researchers released a paper saying that the more humans rely on generative AI, the more it can "result in the deterioration of cognitive faculties that ought to be preserved."
Really? You don’t say! What else does this remind you of? How about watching too much television or playing too many videogames? These passive activities (arguably with videogames) stunt the development of brain gray matter and, in a flight of Mary Shelley rhetoric, make a brain rot! Here is what else the researchers discovered when they studied 319 knowledge workers who self-reported their experiences with generative AI:
“ ‘The data shows a shift in cognitive effort as knowledge workers increasingly move from task execution to oversight when using GenAI,’ the researchers wrote. ‘Surprisingly, while AI can improve efficiency, it may also reduce critical engagement, particularly in routine or lower-stakes tasks in which users simply rely on AI, raising concerns about long-term reliance and diminished independent problem-solving.’”
By the way, we definitely love and absolutely believe data based on self reporting. Think of the mothers who asked their teens, “Where did you go?” The response, “Out.” The mothers ask, “What did you do?” The answer, “Nothing.” Yep, self reporting.
Does this mean generative AI is a bad thing? Yes and no. It’ll stunt the growth of some parts of the brain, but other parts will grow in tandem with the use of new technology. Humans adapt to their environments. As AI becomes more ingrained into society it will change the way humans think but will only make them sort of dumber [sic]. The paper adds:
“ ‘GenAI tools could incorporate features that facilitate user learning, such as providing explanations of AI reasoning, suggesting areas for user refinement, or offering guided critiques,’ the researchers wrote. ‘The tool could help develop specific critical thinking skills, such as analyzing arguments, or cross-referencing facts against authoritative sources. This would align with the motivation enhancing approach of positioning AI as a partner in skill development.’”
The key is to not become overly reliant on AI but also to be aware that the tool won’t go away. Oh, when my mother asked me, "What did you do, Whitney?" I responded in the best self reporting manner, "Nothing, mom, nothing at all."
Whitney Grace, February 19, 2025
TikTok Alleged to Be Spying on … Journalists
February 19, 2025
It is an open secret that TikTok is spying on the West and collecting piles of information on (maybe) unsuspecting victims. Forbes, however, allegedly has evidence of TikTok spying on its reporters: “TikTok Spied On Forbes Journalists.”
ByteDance, TikTok’s parent company, conducted an internal investigation and discovered that its employees tracked journalists who were reporting on the company. The audit also revealed that the employees used the journalists’ user data to determine whether they had been in close proximity to ByteDance employees.
“According to materials reviewed by Forbes, ByteDance tracked multiple Forbes journalists as part of this covert surveillance campaign, which was designed to unearth the source of leaks inside the company following a drumbeat of stories exposing the company’s ongoing links to China. As a result of the investigation into the surveillance tactics, ByteDance fired Chris Lepitak, its chief internal auditor who led the team responsible for them. The China-based executive Song Ye, who Lepitak reported to and who reports directly to ByteDance CEO Rubo Liang, resigned.”
ByteDance didn’t deny the surveillance but said that TikTok couldn’t monitor people in the way the article suggested. The parent company also said it didn’t target journalists, public figures, US government members, or political activists. It’s funny that TikTok is trying to convince the Trump administration that it’s a benign force when this story suggests the opposite.
All of this is alleged, of course. But it is an interesting story because journalists don’t do news. Journalists are pundits, consultants, and podcasters.
Stephen E Arnold, February 19, 2025
Programming: Missing the Message
February 18, 2025
This blog post is the work of a real-live dinobaby. No smart software involved.
I read “New Junior Developers Can’t Actually Code.” The write up is interesting. I think an important point in the essay has been either overlooked or sidestepped. The main point of the article in my opinion is:
The foundational knowledge that used to come from struggling through problems is just… missing. We’re trading deep understanding for quick fixes, and while it feels great in the moment, we’re going to pay for this later.
I agree. The push to create software quickly has shifted development to what I like to describe as a TikTok mindset: the idea that one can do a quick search and get an answer, preferably in less than 30 seconds. I know there are young people who spend time working through problems. We have one of these 12-year-olds in our family. The problem is that I am not sure how many other 12-year-olds have this baked-in desire to work through problems. From what I see and hear, teachers are concerned that students are in TikTok mode, not "work through" mode, particularly in class.
The write up says:
Here’s the reality: The acceleration has begun and there’s nothing we can do about it. Open source models are taking over, and we’ll have AGI running in our pockets before we know it. But that doesn’t mean we have to let it make us worse developers. The future isn’t about whether we use AI—it’s about how we use it. And maybe, just maybe, we can find a way to combine the speed of AI with the depth of understanding that we need to learn.
I agree. Now the “however”:
- Mistakes with older software may not be easily remediated. I am a dinobaby. Dinobabies drop out or die. The time required to figure out why something isn’t working may not be available. That might be tolerable for a small issue. For something larger, like a large bank’s systems, the problem can be a difficult one.
- People with modern skills may not know where to look for an answer. The reference materials, the snippets of code, or the knowledge about a specific programming language may not be available. There are many reasons for this “knowledge loss.” Once gone, it will take time and money to get the information, not a TikTok fix.
- The software itself may be a hack job. We did a project for Bell Labs at the time of the Judge Greene break up. The regional manager running my project asked the people working with me on this minor job, Alan and Howard (my two mainframe IBM CICS specialists), whether they wrote documentation. Howard said, "Ho ho ho. We just use Assembler and make it work." The project manager said, "You can’t do that for this project." Alan said, "How do you propose to get the service you want us to implement to work?" We got the job, and the system is, almost 50 years later, still in service. Okay, young wizard with smart software, fix up our work.
So what? We are reaching a point at which essential computer science knowledge is becoming disconnected from actual implementation in large-scale, mission-critical systems, and that knowledge is being lost. Maybe AI can do what Alan, Howard, and I did to comply with Judge Greene’s order relating to Baby Bell information exchange in the IBM environment.
I am skeptical. That’s a problem with the TikTok approach and smart software. If the model gets it wrong, there may be no fix. TikTok won’t be much help either. (I think Steve Gibson might agree with some of my assertions.) The write up does not flip over the rock. There is some shocking stuff beneath the gray, featureless surface.
Stephen E Arnold, February 18, 2025
Hackers and AI: Of Course, No Hacker Would Use Smart Software
February 18, 2025
This blog post is the work of a real live dinobaby. Believe me, after reading the post, you know that smart software was not involved.
Hackers would never ever use smart software. I mean those clever stealer distributors preying on get-rich-quick stolen credit card users. Nope. Those people using online games to lure kiddies and people with kiddie-level intelligence into providing their parents’ credit card data? Nope and double nope. Those people in computer science classes in Azerbaijan learning how to identify security vulnerabilities while working as contractors for criminals? Nope. Never. Are you crazy? These bad actors know that smart software is most appropriate for Mother Teresa-type activities and creating GoFundMe pages to help those harmed by natural disasters, bad luck, or not having a job except streaming.
I mean everyone knows that bad actors respect the firms providing smart software. It is common knowledge that bad actors play fair. Why would a criminal use smart software to create more efficacious malware payloads, compromise Web sites, or defeat security to trash the data on Data.gov? Oops. Bad example. Data.gov has been changed.
I read “Google Says Hackers Abuse Gemini AI to Empower Their Attacks.” That’s the spirit. Bad actors are using smart software. The value of the systems is evident to criminals. The write up says:
Multiple state-sponsored groups are experimenting with the AI-powered Gemini assistant from Google to increase productivity and to conduct research on potential infrastructure for attacks or for reconnaissance on targets. Google’s Threat Intelligence Group (GTIG) detected government-linked advanced persistent threat (APT) groups using Gemini primarily for productivity gains rather than to develop or conduct novel AI-enabled cyberattacks that can bypass traditional defenses. Threat actors have been trying to leverage AI tools for their attack purposes to various degrees of success as these utilities can at least shorten the preparation period. Google has identified Gemini activity associated with APT groups from more than 20 countries but the most prominent ones were from Iran and China.
Stop the real-time news stream! Who could have imagined that bad actors would be interested in systems and methods that make their behaviors more effective and efficient?
When Microsoft rolled out its marketing gut punch aimed squarely at Googzilla, the big online advertising beast responded. The Code Red and Code Yellow lights flashed. Senior managers held meetings after Foosball games and while hanging out at Philz Coffee.
Did Google management envision the reality of bad actors using Gemini? No. It appears that the Google acquisition Mandiant figured it out. Eventually — it’s been two years and counting since Microsoft caused the AI tsunami — the Eureka! moment arrived.
The write up reports:
Google also mentions having observed cases where the threat actors attempted to use public jailbreaks against Gemini or rephrasing their prompts to bypass the platform’s security measures. These attempts were reportedly unsuccessful.
Of course they were. Do US banks tell their customers when check fraud or other cyber dishonesty relieves people of their funds? Sure they don’t. Therefore, it is only the schlubs who are unfortunate enough to have the breach disclosed. Then the cyber security outfits leap into action and issue fixes. Everything in the cyber security world is buttoned up and buttoned down. Absolutely.
Several observations:
- How has free access without any type of vetting worked out? The question is directed at the big tech outfits beavering away in this technology blast zone.
- What are the providers of free smart software doing to make certain that the method can only produce seventh-grade students’ essays about the transcontinental railroad?
- What exactly is a user of free smart software supposed to do to rein in the actions of nation-states with which most Americans are somewhat familiar? I mean, there is a Chinese restaurant near Harrod’s Creek. Am I to discuss the matter with the waitress?
Why worry? That worked for Mad Magazine until it didn’t. Hey, Google, thanks for the information. Who could have known smart software can be used for nefarious purposes? (Obviously not Google.)
Stephen E Arnold, February 18, 2025
Unified Data Across Governments? How Useful for a Non-Participating Country?
February 18, 2025
A dinobaby post. No smart software involved.
I spoke with a person whom I have known for a long time. The individual lives and works in Washington, DC. He mentioned "disappeared data." I did some poking around and, sure enough, certain US government public-facing information had been "disappeared." Interesting. For a short period of time, I made a few contributions to what was FirstGov.gov, now USA.gov.
For those who don’t remember or don’t know about President Clinton’s Year 2000 initiative, the idea was interesting. At that time, access to public-facing information on US government servers was via the Web search engines. In order to locate a tax form, one would navigate to an available search system. On Google one would just slap in IRS or IRS and the form number.
Most of the US government public-facing Web sites were reasonably straightforward. Others were fairly difficult to use. The US Marine Corps’ Web site had poor response times. I think it was hosted on something called Server Beach, and the would-be recruit would have to wait for the recruitment station data to appear. The Web page worked, but it was slow.
President Clinton, or someone in his administration, wanted the problem fixed with a search system for US government public-facing content. After a bit of work, the system went online in September 2000. The system then morphed into a US government portal, a bit like the Yahoo.com portal model.
I thought about the information in “Oracle’s Ellison Calls for Governments to Unify Data to Feed AI.” The write up reports:
Oracle Corp.’s co-founder and chairman Larry Ellison said governments should consolidate all national data for consumption by artificial intelligence models, calling this step the “missing link” for them to take full advantage of the technology. Fragmented sets of data about a population’s health, agriculture, infrastructure, procurement and borders should be unified into a single, secure database that can be accessed by AI models…
Several questions arise; for instance:
- What country or company provides the technology?
- Who manages what data are added and what data are deleted?
- What are the rules of access?
- What about public data which are not available for public access; for example, the “disappeared” data from US government Web sites?
- What happens to commercial or quasi-commercial government units which repackage public data and sell it at a hefty markup?
Based on my brief brush with the original Clinton project, I think the idea is interesting. But I have one other question in mind: What happens when non-participating countries get access to the aggregated public-facing data? Digital information is a tricky resource to secure. In fact, once data are digitized and connected to a network, they are fair game. Someone, somewhere will figure out how to access, obtain, exfiltrate, and benefit from aggregated data.
The idea is, in my opinion, a bit of grandstanding like Google’s quantum supremacy claims. But US high technology wizards are ready and willing to think big thoughts and take even bigger actions. We live in interesting times, but I am delighted that I am old.
Stephen E Arnold, February 18, 2025
A Vulnerability Bigger Than SolarWinds? Yes.
February 18, 2025
No smart software. Just a dinobaby doing his thing.
I read an interesting article from WatchTowr Labs. (The spelling is what the company uses, so the URL is labs.watchtowr.com.) On February 4, 2025, the company reported that it discovered what one can think of as orphaned or abandoned-but-still-alive Amazon S3 "buckets." The discussion of the firm’s research and what it revealed is presented in "8 Million Requests Later, We Made The SolarWinds Supply Chain Attack Look Amateur."
The company explains that it was curious if what it calls “abandoned infrastructure” on a cloud platform might yield interesting information relevant to security. We worked through the article and created what in the good old days would have been called an abstract for a database like ABI/INFORM. Here’s our summary:
The article from WatchTowr Labs describes a large-scale experiment in which researchers identified and took control of about 150 abandoned Amazon Web Services S3 buckets previously used by various organizations, including governments, militaries, and corporations. Over two months, these buckets received more than eight million requests for software updates, virtual machine images, and sensitive files, exposing a significant vulnerability. WatchTowr explains that bad actors could have injected malicious content and that abandoned infrastructure could be used for supply chain attacks like SolarWinds. Had this happened, the impact would have been significant.
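The mechanics are easy to grasp: when an S3 bucket is deleted, its globally unique name returns to the public pool, and anyone can re-register it and answer every old update URL or configuration file that still points at it. Here is a minimal sketch, assuming Python and the requests library, of how one might audit bucket names still referenced by legacy configs. The bucket names are hypothetical, and this is an illustration of the idea, not WatchTowr’s actual tooling:

```python
# Minimal sketch: classify S3 bucket names still referenced by old configs.
# An unauthenticated HEAD request to the bucket's root distinguishes
# unclaimed names from live ones. Bucket names below are hypothetical.
import requests

REFERENCED_BUCKETS = [
    "legacy-firmware-updates",   # hypothetical name from an old update URL
    "old-build-artifacts",       # hypothetical name from a stale CI config
]

def bucket_status(name: str) -> str:
    """Return a rough status for a bucket based on S3's HTTP responses."""
    resp = requests.head(f"https://{name}.s3.amazonaws.com/", timeout=10)
    if resp.status_code == 404:
        # S3 reports the bucket does not exist: the name is free, so anyone
        # could re-register it and serve content to the old owner's clients.
        return "UNCLAIMED -- hijackable"
    if resp.status_code == 403:
        return "exists (access denied)"
    if resp.status_code == 200:
        return "exists (publicly listable)"
    return f"unexpected status {resp.status_code}"

if __name__ == "__main__":
    for bucket in REFERENCED_BUCKETS:
        print(f"{bucket}: {bucket_status(bucket)}")
```

In the scenario WatchTowr describes, a defender would run something like this against every bucket name found in shipped installers, build scripts, and documentation; any "unclaimed" result is a supply chain attack waiting to happen.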
Several observations are warranted:
- Does Amazon Web Services have administrative functions to identify orphaned “buckets” and take action to minimize the attack surface?
- With companies’ information technology teams abandoning infrastructure, how will these organizations determine whether other infrastructure vulnerabilities exist and remediate them?
- What can cyber security vendors’ software and systems do to identify and neutralize these “shoot yourself in the foot” vulnerabilities?
One of the most compelling statements in the WatchTowr article, in my opinion, is:
… we’d demonstrated just how held-together-by-string the Internet is and at the same time point out the reality that we as an industry seem so excited to demonstrate skills that would allow us to defend civilization from a Neo-from-the-Matrix-tier attacker – while a metaphorical drooling-kid-with-a-fork-tier attacker, in reality, has the power to undermine the world.
Is WatchTowr correct? With government and commercial organizations leaving S3 buckets available, perhaps WatchTowr should have included gum, duct tape, and grade-school white glue in its description of the Internet.
Stephen E Arnold, February 18, 2025
Real AI News? Yes, with Fact Checking, Original Research, and Ethics Too
February 17, 2025
This blog post is the work of a real-live dinobaby. No smart software involved.
This is "real" news… if the story is based on fact checking, original research, and those journalistic ethics pontifications. Let’s assume that these conditions of old-fashioned journalism apply. This means the story "New York Times Goes All-In on Internal AI Tools" pinpoints a small shift in how "real" news will be produced.
The write up asserts:
The New York Times is greenlighting the use of AI for its product and editorial staff, saying that internal tools could eventually write social copy, SEO headlines, and some code.
Yep, some. There’s ground truth (that’s an old-fashioned journalism concept) in blue-chip consulting: the big money maker is what’s called scope creep. Stated simply, one starts small, with a test or a trial. Then, if the sky does not fall as quickly as some companies’ revenue, the small gets a bit larger. You check to make sure the moon is still in the sky and the revenues are not falling, hopefully as quickly as before. Then you expand. At each step there are meetings, presentations, analyses, and group reassurances from others in the deciders category. Then — like magic! — the small project is the rough equivalent of a nuclear-powered aircraft carrier.
Ah, scope creep.
Understate what one is trying to do. Watch it. Scale it. End up with an aircraft-carrier-scale project. Yes, it is happening at an outfit like the New York Times, if the cited article is accurate.
What scope creep stage setting appears in the write up? Let’s look:
- Staff will be trained. Your job, one assumes, is safe. (Ho ho ho)
- AI will help uncover “the truth.” (Absolutely)
- More people will benefit. (Don’t forget the stakeholders, please)
What’s the write up presenting as actual factual?
The world’s greatest newspaper will embrace hallucinating technology, but only a little bit.
Scope creep begins, and it won’t change a thing, but that information will appear once the cost savings, revenue, and profit data become available at the speed of newspaper decision making.
Stephen E Arnold, February 17, 2025