Scattered Spider: Operating Freely Despite OSINT and Specialized Investigative Tools. Why?
July 7, 2025
No smart software to write this essay. This dinobaby is somewhat old fashioned.
I don’t want to create a dust-up in the specialized software sector. I noted the July 2, 2025, article “A Group of Young Cybercriminals Poses the Most Imminent Threat of Cyberattacks Right Now.” That story surprised me. First, the Scattered Spider group was documented (more or less) by Trellix, a specialized software and services firm. You can read the article “Scattered Spider: The Modus Operandi” and get a sense of what Trellix reported. The outfit even has a Wikipedia article about their activities.
Last week I was asked a direct question, “Which of the specialized services firms can provide me with specific information about Telegram Groups and Channels, both public and private?” My answer, “None yet.”
Scattered Spider uses Telegram for some messaging functions. If you want to get a sense of what the outfit does, just fire up your OSINT tools or, better yet, use one of the very expensive specialized services available to government agencies. The young cybercriminals appear to use the alias @ScatteredSpiderERC.
So what? Let’s go back to the question addressed directly to me about firms that have content about Telegram. If we assume the Wikipedia write up is sort of correct, the Scattered Spider entity popped up in 2022 and its activities caught the attention of Trellix. The time between the Trellix post and the Wired story is about two years.
Why hasn’t a specialized services firm provided actionable data to the US government, the Europol investigators, and the dozens of other law enforcement operations around the world? Isn’t it a responsible act to use that access to Telegram data to take down outfits that endanger casinos and other organizations?
Apparently the answer is, “No.”
My hunch is that these specialized software firms talk about having tools to access Telegram. That talk is a heck of a lot easier than finding a reliable way to access private Groups and Channels or to trace a handle back to a real live human being possibly operating in the EU or the US. I would suggest that France tried to use OSINT and the often nine-figure systems to crack Telegram. Will other law enforcement groups realize that the specialized software vendors’ tools fall short of the mark and think about a France-type response?
France seems to have made a dent in Telegram. I would hypothesize that the failure of OSINT and the specialized software tool vendors contributed to France’s decision to just arrest Pavel Durov. Mr. Durov is now ensnared in France’s judicial bureaucracy. Complicating matters, Mr. Durov is a citizen of France and a handful of other countries, including Russia and the United Arab Emirates.
I mention this lack of Telegram cracking capability for three reasons:
- Telegram is in decline and the company is showing some signs of strain
- The changing attitude toward crypto in the US means that Telegram absolutely has to play in that market or face either erosion or decimation of its seven-year push to create alternative financial services based on TONcoin and Pavel Durov’s partners’ systems
- Telegram is facing a new generation of messaging competitors. Like Apple, Telegram is late to the AI party.
One would think that at a critical point like this, the Shadow Server account would be a slam dunk for any licensee of specialized software advertising, “Telegram content.”
Where are those vendors’ webinars, email blasts, and trade show demonstrations? Where are the testimonials that Company Nuco’s specialized software really did work? “Here’s what we used in court because the specialized vendor’s software generated this data for us” is what I want to hear. I would suggest that Telegram remains a bit of a challenge to specialized software vendors. Will I identify these “big hat, no cattle” outfits? Nope.
Just a reminder: marketing and saying what government professionals want to hear are easier than delivering.
Stephen E Arnold, July 2025
Technology Firms: Children of Shoemakers Go Barefoot
July 7, 2025
If even the biggest of Big Tech firms are not safe from cyberattacks, who is? Investor news site Benzinga reveals, “Apple, Google and Facebook Among Services Exposed in Massive Leak of More than 16 Billion Login Records.” The trove represents one of the biggest exposures of personal data ever, writer Murtuza J. Merchant tells us. We learn:
“Cybersecurity researchers have uncovered 30 massive data collections this year alone, each containing tens of millions to over 3.5 billion user credentials, Cybernews reported. These previously unreported datasets were briefly accessible through misconfigured cloud storage or Elasticsearch instances, giving the researchers just enough time to detect them, though not enough to trace their origin. The findings paint a troubling picture of how widespread and organized credential leaks have become, with login information originating from malware known as infostealers. These malicious programs siphon usernames, passwords, and session data from infected machines, usually structured as a combination of a URL, username, and password.”
Ah, advanced infostealers. One of the many handy tools AI has made possible. The write-up continues:
“The leaked credentials span a wide range of services from tech giants like Apple, Facebook, and Google, to platforms such as GitHub, Telegram, and various government portals. Some datasets were explicitly labeled to suggest their source, such as ‘Telegram’ or a reference to the Russian Federation. … Researchers say these leaks are not just a case of old data resurfacing.”
Not only that, the data’s format is cybercriminal-friendly. Merchant writes:
“Many of the records appear recent and structured in ways that make them especially useful for cybercriminals looking to run phishing campaigns, hijack accounts, or compromise corporate systems lacking multi-factor authentication.”
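The record structure the researchers describe (URL, username, password) can be sketched in a few lines. This is a hypothetical illustration of why such dumps are useful at scale to bad actors; the field order and colon separator are assumptions, since real infostealer logs vary widely.

```python
# Hypothetical sketch: grouping leaked credential records by domain.
# Assumes each record looks like "URL:username:password" (an assumption;
# real stealer logs use a variety of separators and orderings).
from collections import Counter
from urllib.parse import urlparse

def parse_record(line: str) -> tuple[str, str, str]:
    # Split from the right so the colon in "https://" survives intact.
    url, user, pwd = line.strip().rsplit(":", 2)
    return url, user, pwd

def domains(records: list[str]) -> Counter:
    # Tally which services appear most often in a dump -- the first thing
    # an attacker (or a researcher) would do with a collection like this.
    counts: Counter = Counter()
    for line in records:
        url, _, _ = parse_record(line)
        counts[urlparse(url).netloc or url] += 1
    return counts

sample = [
    "https://example.com/login:alice:hunter2",
    "https://mail.example.com:bob:pa55word",
]
print(domains(sample))
```

A few lines like this are enough to sort billions of records into per-service target lists, which is why structured, recent data worries the researchers more than stale re-leaks.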
But it is the scale of these datasets that has researchers most concerned. The average collection held 500 million records, while the largest had more than 3.5 billion. What are the chances your credentials are among them? The post suggests the usual, most basic security measures: complex and frequently changed passwords and regular malware scans. But surely our readers are already observing these best practices, right?
Cynthia Murrell, July 7, 2025
Worthless College Degrees. Hey, Where Is Mine?
July 4, 2025
Smart software involved in the graphic, otherwise just an addled dinobaby.
This write up is not about going “beyond search.” Heck, search has just changed adjectives and remains mostly a frustrating and confusing experience for employees. I want to highlight the information (which I assume to be 100 percent dead accurate like other free data on the Internet) in “17 Most Useless College Degrees Employers Don’t Want Today.” Okay, high school seniors, pay attention. According to the estimable Finance Buzz, do not study these subjects and — heaven forbid — expect to get a job when you graduate from an online school, the local college, or a big-time, big-bucks university. I have grouped the write up’s earthworm list into some categories; to wit:
Do gooder work
- Criminal justice
- Education (Who needs an education when there is YouTube?)
Entertainment
- Fashion design
- Film, video, and photographic arts
- Music
- Performing arts
Information
- Advertising
- Creative writing (like Finance Buzz research articles?)
- Communications
- Computer science
- Languages (Emojis and some English are what is needed I assume)
Real losers
- Anthropology and archaeology (I thought these were different until Finance Buzz cleared up my confusion)
- Exercise science
- Religious studies
Waiting tables and working the midnight check in desk
- Culinary arts (Fry cook until the robots arrive)
- Hospitality (Smile and show people their table)
- Tourism (Do not fall into the volcano)
Assume the write up is providing verifiable facts. (I know, I know, this is the era of alternative facts.) If we flash forward five years, the already stretched resources for law enforcement and education will be in an even smaller pickle barrel. Good for the bad actors and the people who don’t want to learn. Perhaps less beneficial to others in society. I assume that one can make TikTok-type videos and generate a really bigly income until the Googlers change the compensation rules or TikTok is banned from the US. With the world awash in information and open source software available, who needs to learn anything? AI will do this work. Who in the heck gets a job in archaeology when one can learn from UnchartedX and Brothers of the Serpent? Exercise? Play football and get a contract when you are in middle school like talented kids in Brazil. And the cruise or specialty restaurant business? Those contracts are for six months for a reason. Plus cruise lines have started enforcing no-video rules on the staff who were trying to make day-in-my-life videos about the wonderful cruise ship environment. (Weren’t these vessels once called “prison ships”?) My hunch is that whoever assembled this stellar research at Finance Buzz was actually but indirectly writing about smart software and robots. These will decimate many jobs in the identified fields.
What should a person study? Nuclear physics, mathematics (applied and theoretical maybe), chemistry, biogenetics, materials science, modern financial management, law (aren’t there enough lawyers?), medicine, and psychology until the DRG codes are restricted.
Excellent way to get a job. And in what field was my degree? Medieval religious literature. Perfect for life-long employment as a dinobaby essayist.
Stephen E Arnold, July 4, 2025
Apple Fix: Just Buy Something That Mostly Works
July 4, 2025
No smart software involved. Just an addled dinobaby.
A year ago Apple announced AI, which means, of course, Apple Intelligence. Well, Apple was “held back.” In 2025, the powerful innovation machine made the iPhone and Macs look a bit like the Windows see-through motif. Okay.
I read “Apple Reportedly Has a Secret Plan to Quickly Gain Ground in the AI Race.” I won’t point out that if information is circulating AND appears in an article, that information is not secret. It is public relations and marketing output. Second, forget the split infinitive. Since few recognize that datum is singular and data is plural or that the word none is singular, I won’t mention it. Obviously few “real” journalists care.
Now to the write up. In my opinion, the big secret revealed and analyzed is …
Sources report that the company is giving serious consideration to bidding for the startup Perplexity AI, which would allow it to transplant a chunk of expertise and ready-made technology into Apple Park and leapfrog many of the obstacles it currently faces. Perplexity runs an AI-powered search engine which can already perform the contextual tricks which Apple advertised ahead of the iPhone 16 launch but hasn’t yet managed to build into Siri.
Analysis of this “secret” is a bit underwhelming. Here’s the paragraph that is supposed to make sense of this non-secret secret:
Historically, Apple has been wary of large acquisitions, whereas rivals, such as Facebook (buying WhatsApp for $22 billion) and Google (acquiring cloud security platform Wiz for $32 billion), have spent big to scoop up companies. It could be a mark of how worried Apple is about the AI situation that it’s considering such a major and out-of-character move. But after a year of headaches and obstacles, it also could pay off in a big way.
Okay, but what about Google acquiring Motorola? What about Microsoft’s clever purchase of Nokia? And there are other examples. Big companies buying other companies can work out or fizzle. Where is Dodgeball now? Orkut?
The actual issue strikes me as Apple’s failure to recognize that smart software — whether it works particularly well or not — was a marketing pony to ride in the technical circus. Microsoft got the message, and it seems that the marketing play triggered Google. But the tie-up seems to be under a bit of stress as of June 2025.
Another problem is that buying AI requires that the purchaser manage the operation, ensure continued innovation of an order slightly more demanding than imitating a Windows interface, and keep the wizard huskies hooked to the dog sled.
What seems to be taking place is a division of the smart software world into three sectors:
- Companies that “do” large language models; for example, Google, OpenAI, and others
- Companies that “wrap” large language models and generate start ups that are presented as AI but are interfaces
- Companies that “integrate” or “glue on” AI to an existing service, platform, or system.
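The “wrap” pattern in the second item can be sketched in a few lines of Python. Everything here is hypothetical (the `make_summarizer` name and the stand-in model function are my inventions, not any vendor’s API); the point is that the startup’s “product” is an interface, not a model.

```python
# Hypothetical sketch of the "wrap" pattern: a startup that owns no model,
# just a prompt template around someone else's LLM.
from typing import Callable

def make_summarizer(llm: Callable[[str], str]) -> Callable[[str], str]:
    # The entire "AI product" is this template plus light post-processing.
    def summarize(text: str) -> str:
        return llm(f"Summarize in one sentence: {text}").strip()
    return summarize

# Stand-in for a vendor model call; no real API is implied.
def fake_llm(prompt: str) -> str:
    return "  [model output for] " + prompt

summarize = make_summarizer(fake_llm)
print(summarize("Apple may buy Perplexity to leapfrog its AI obstacles."))
```

Swap in a real model behind `llm` and nothing else changes, which is why so many “AI companies” in sector two are, underneath, a thin layer like this.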
Apple failed at number one. It hasn’t invented anything in the AI world. (I think I learned about Siri in a Stanford Research Institute presentation many, many years ago. No, it did not work particularly well even in the demo.)
Apple is not too good at wrapping anything. Safari doesn’t wrap. Safari blazes its own weird trail which is okay for those who love Apple software. For someone like me, I find it annoying.
Apple has demonstrated that it could not “glue on” Siri.
Okay, Apple has not scored a home run with either approach one, two, or three.
Thus, the analysis, in my opinion, is that Apple, like some other outfits, now realizes smart software — whether it is 100 percent reliable or not — continues to generate buzz. The task for Apple, therefore, is to figure out how to convert whatever it does into buzz. Skip the cost of invention. Sidestep wrapping AI and look for “partners” to do what department stores did in the 1950s: wrap my holiday gifts. And, three, try to make “glue on” work.
Net net: Will Apple undertake an auto-da-fé and see the light?
Stephen E Arnold, July 4, 2025
Hot Bots Bite
July 3, 2025
No smart software involved. Just an addled dinobaby.
I read “Discord is Threatening to Shutdown BotGhost: The Ensh*ttification of Discord.” (I really hate that “ensh*t” neologism.) The write up is interesting. If you know zero about bots, just skip it. If you do know something about bots in “walled gardens,” take a look. The use of software robots, which are getting smarter and smarter thanks to “artificial intelligence,” will emerge, morph, and become vectors for some very exciting types of online criminal activity. Sure, bots can do “good,” but most people with a make-money-fast idea will find ways to botify online crime. With cryptocurrency scoped to be an important part of “everything” apps, excitement is just around the corner.
However, I want to call attention to the comments section of Hacker News. Several of the observations struck me as germane to my interests in bots purpose built for online criminal activity. Your interests are probably different from mine, but here’s a selection of the remarks I found on point for me:
- throwaway7679 posts: [caps in original] “NEITHER DISCORD NOR ITS AFFILIATES, SUPPLIERS, OR DISTRIBUTORS MAKE ANY SPECIFIC PROMISES ABOUT THE APIs, API DATA, DOCUMENTATION, OR ANY DISCORD SERVICES. The existence of terms like this make any discussion of the other terms look pretty silly. Their policy is simply that they do whatever they want, and that hasn’t changed.”
- sneak posts: “Discord has the plaintext of every single message ever sent via Discord, including all DMs. Can you imagine the value to LLM companies? It’s probably the single largest collection of sexting content outside of WeChat (and Apple’s archive of iCloud Backups that contain all of the iMessages).”
- immibis posts: “Reddit is more evil than Discord IMO – they did this years ago, tried to shut down all bots and unofficial apps, and they heavily manipulate consensus opinion, which Discord doesn’t as far as I know.”
- macspoofing posts: “…For software platforms, this has been a constant. It happened with Twitter, Facebook, Google (Search/Ads, Maps, Chat), Reddit, LinkedIn – basically ever major software platform started off with relatively open APIs that were then closed-off as it gained critical mass and focused on monetization.”
- altairprime posts: “LinkedIn lost a lawsuit about prohibiting third parties tools from accessing its site, Matrix has strong interop, Elite Dangerous offers OAuth API for sign-in and player data download, and so on. There are others but that’s sixty seconds worth of thinking about it. Mastodon metastasized the user store but each site is still a tiny centralized user store. That’s how user stores work. Doesn’t mean they’re automatically monopolistic. Discord’s taking the Reddit-Apollo approach to forcing them offline — half-assed conversations for months followed by an abrupt fuck-you moment with little recourse — which given Discord’s free of charge growth mechanism, means that — just like Reddit — they’re likely going to shutdown anything by that’s providing a valuable service to a significant fraction of their users, either to Sherlock and charge money for it, or simply to terminate what they view as an obstruction.”
Several observations:
- Telegram is not mentioned in the comments I reviewed (more are being added, but I am not keeping track of these additions as of 11:25 am US Eastern on June 25, 2025)
- Bots are a contentious type of software
- The point about the “value” of messages to large language models is accurate.
Stephen E Arnold, July 3, 2025
Read This Essay and Learn Why AI Can Do Programming
July 3, 2025
No AI, just the dinobaby expressing his opinions to Zillennials.
I found, entirely by accident since Web search does not work too well, an essay titled “Ticket-Driven Development: The Fastest Way to Go Nowhere.” I would have used a different title; for example, “Smart Software Can Do Faster and Cheaper Code” or “Skip Computer Science. Be a Plumber.” Despite my lack of good vibe coding from the essay’s title, I did like the information in the write up. The basic idea is that managers just want throughput. This is not news.
The most useful segment of the write up is this passage:
You don’t need a process revolution to fix this. You need permission to care again. Here’s what that looks like:
- Leave the code a little better than you found it — even if no one asked you to.
- Pair up occasionally, not because it’s mandated, but because it helps.
- Ask why. Even if you already know the answer. Especially then.
- Write the extra comment. Rename the method. Delete the dead file.
- Treat the ticket as a boundary, not a blindfold.
Because the real job isn’t closing tickets; it’s building systems that work.
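The dot points above amount to the old “boy scout rule.” A tiny, hypothetical before-and-after in Python shows the scale of change involved: a rename, a comment, and a dead-branch deletion, all outside the ticket’s scope.

```python
# Hypothetical "leave it better" cleanup. The ticket only asked for the
# empty-list bug fix; the rename, docstring, and dead-branch removal are
# the extras no one asked for.

# Before (sketch):
# def calc(d, f):
#     if f == "x":            # dead branch: no caller passes "x"
#         return None
#     return sum(d) / len(d)  # crashes on an empty list

def mean_or_zero(values: list[float]) -> float:
    """Average of values; returns 0.0 for an empty list instead of raising."""
    return sum(values) / len(values) if values else 0.0

print(mean_or_zero([1.0, 2.0, 3.0]))  # 2.0
print(mean_or_zero([]))               # 0.0
```

Five minutes of this kind of care per ticket is exactly what a throughput-only workflow prices at zero.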
I wish to offer several observations:
- Repetitive boring, mindless work is perfect for smart software
- Implementing dot points one to five will result in a reprimand, transfer to a salubrious location, or termination with extreme prejudice
- Spending long hours with an AI version of an old-fashioned psychiatrist will be necessary because you will go crazy.
After reading the essay, I realized that the managerial approach, the “ticket-driven workflow”, and the need for throughput applies to many jobs. Leadership no longer has middle managers who manage. When leadership intervenes, one gets [a] consultants or [b] knee-jerk decisions or mandates.
The crisis is in organizational set up and management. The developers? Sorry, you have been replaced. Say, “hello” to our version of smart software. Her name is No Kidding.
Stephen E Arnold, July 3, 2025
AI Management: Excellence in Distancing Decisions from Consequences
July 2, 2025
Smart software involved in the graphic, otherwise just an addled dinobaby.
This write up “Exclusive: Scale AI’s Spam, Security Woes Plagued the Company While Serving Google” raises two minor issues and one that is not called out in the headline or the subtitle:
$14 billion investment from Meta struggled to contain ‘spammy behavior’ from unqualified contributors as it trained Gemini.
Who can get excited about a workflow and editorial quality issue? What is “quality”? In one of my Google monographs I pointed out that Google used at one time a number of numerical recipes to figure out “quality.” Did that work? Well, it was good enough to help get the Yahoo-inspired Google advertising program off the ground. Then quality became like those good brownies from 1953: stuffed with ingredients no self-respecting Stanford computer science graduate would eat for lunch.
I believe some caution is required when trying to understand a very large and profitable company from someone who is no longer working at the company. Nevertheless, the article presents a couple of interesting assertions and dodges what I consider the big issue.
Consider this statement in the article:
In a statement to Inc., Scale AI spokesperson Joe Osborne said: “This story is filled with so many inaccuracies, it’s hard to keep track. What these documents show, and what we explained to Inc ahead of publishing, is that we had clear safeguards in place to detect and remove spam before anything goes to customers.” [Editor’s Note: “this” means the rumor that Scale cut corners.]
The story is that a process included data that would screw up the neural network.
And the security issue? I noted this passage:
The [spam] episode raises the question of whether or not Google at one point had vital data muddied by workers who lacked the credentials required by the Bulba program. It also calls into question Scale AI’s security and vetting protocols. “It was a mess. They had no authentication at the beginning,” says the former contributor. [Editor’s Note: Bulba means “Bard.”]
A person reading the article might conclude that Scale AI was a corner-cutting outfit. I don’t know. But when big money starts to flow and more can be turned on, some companies just do what’s expedient. The signals in this Scale example are the pedal-to-the-metal approach to process and the information that people knew bad data was getting pumped into Googzilla.
But what’s the big point that’s missing from the write up? In my opinion, Google management made a decision to rely on Scale. Then Google management distanced itself from the operation. In the good old days of US business, when blue-suited, informed middle managers pursued quality, some companies would have spotted the problems and ridden herd on the subcontractor.
Google did not do this in an effective manner.
Now Scale AI is beavering away for Meta which may be an unexpected win for the Google. Will Meta’s smart software begin to make recommendations like “glue your cheese on the pizza”? My personal view is that I now know why Google’s smart software has been more about public relations and marketing, not about delivering something that is crystal clear about its product line up, output reliability, and hallucinatory behaviors.
At least Google management can rely on Deepseek to revolutionize understanding the human genome. Will the company manage in as effective a manner as its marketing department touts its achievements?
Stephen E Arnold, July 2, 2025
Microsoft Innovation: Emulating the Bold Interface Move by Apple?
July 2, 2025
This dinobaby wrote this tiny essay without any help from smart software. Not even hallucinating gradient descents can match these bold innovations.
Bold. Decisive. Innovative. Forward leaning. Have I covered the adjectives used to communicate “real” innovation? I needed these and more to capture my reaction to the information in “Forget the Blue Screen of Death – Windows Is Replacing It with an Even More Terrifying Black Screen of Death.”
Yep, terrifying. I don’t feel terrified when my monitors display a warning. I guess some people do.
The write up reports:
Microsoft is replacing the Windows 11 Blue Screen of Death (BSoD) with a Black Screen of Death, after decades of the latter’s presence on multiple Windows iterations. It apparently wants to provide more clarity and concise information to help troubleshoot user errors easily.
The important aspect of this bold decision to change the color of an alert screen may be Apple color envy.
Apple itself said, “Apple Introduces a Delightful and Elegant New Software Design.” The innovation was… changing colors and channeling Windows Vista.
Let’s recap. Microsoft makes an alert screen black. Apple changes its colors.
Peak innovation. I guess that is what happens when artificial intelligence does not deliver.
Stephen E Arnold, July 2, 2025
Microsoft and OpenAI: An Expensive Sitcom
July 1, 2025
No smart software involved. Just an addled dinobaby.
I remember how clever I thought the book title “Who Says Elephants Can’t Dance?: Leading a Great Enterprise Through Dramatic Change” was. I find the break dancing content between Microsoft and OpenAI even more amusing. Bloomberg “real” news reported that Microsoft is struggling to sell its Copilot solutions. Why? Those Microsoft customers want OpenAI’s ChatGPT. That’s a hoot.
Computerworld adds more Monty Python twists to this side show. “Microsoft and OpenAI: Will They Opt for the Nuclear Option?” (I am not too keen on the use of the word “nuclear.” People bandy it about without understanding exactly what the actual consequences of such an option are. Please, do a bit of homework before suggesting that two enterprises are doing anything remotely similar.)
The estimable Computerworld reports:
Microsoft needs access to OpenAI technologies to keep its worldwide lead in AI and grow its valuation beyond its current more than $3.5 trillion. OpenAI needs Microsoft to sign a deal so the company can go public via an IPO. Without an IPO, the company isn’t likely to keep its highly valued AI researchers — they’ll probably be poached by companies willing to pay hundreds of millions of dollars for the talent.
The problem seems to be that Microsoft is trying to sell its version of smart software. The enterprise customers and even dinobabies like myself prefer the hallucinatory and unpredictable ChatGPT to the downright weirdness of Copilot in Notepad. The Computerworld story says:
Hovering over it all is an even bigger wildcard. Microsoft’s and OpenAI’s existing agreement dramatically curtails Microsoft’s rights to OpenAI technologies if the technologies reach what is called artificial general intelligence (AGI) — the point at which AI becomes capable of human reasoning. AGI wasn’t defined in that agreement. But Altman has said he believes AGI might be reached as early as this year.
People cannot agree over beach rights and school taxes. The smart software (which may remain without regulation for a decade) is a much bigger deal. The dollars at stake are huge. Most people do not know that a Board of Directors for a Fortune 1000 company will spend more time arguing about parking spaces than a $300 million acquisition. The reason? Most humans cannot conceive of the numbers of dollars associated with artificial intelligence. If the AI next big thing does not work, quite a few outfits are going to be selling snake oil from tables at flea markets.
Here’s the humorous twist from my vantage point. Microsoft itself kicked off the AI boom with its announcements a couple of years ago. Google, already wondering how it can keep the money gushing to pay the costs of simply being Google, short circuited and hit the switch for Code Red, Yellow, Orange, and probably the color only five people on earth have ever seen.
And what’s happened? The Google-spawned methods aren’t eliminating hallucinations. The OpenAI methods are not eliminating hallucinations. The improvements are more and more difficult to explain. Meanwhile, startups are doing interesting things with AI systems that are good enough for certain use cases. I particularly like consulting and investment firms using AI to get rid of MBAs.
The punch line for this joke is that the Microsoft version of ChatGPT seems to have more brand deliciousness. Microsoft linked with OpenAI, created its own “line of AI,” and now finds that the frisky money burner OpenAI is more popular and can just define artificial general intelligence to its liking and enjoy the philosophical discussions among AI experts and lawyers.
One cannot make this sequence up. Jack Benny’s radio scripts came close, but I think the Microsoft – OpenAI program is a prize winner.
Stephen E Arnold, July 1, 2025
Publishing for Cash: What Is Here Is Bad. What Is Coming May Be Worse
July 1, 2025
Smart software involved in the graphic, otherwise just an addled dinobaby.
Shocker. Pew Research discovers that most “Americans” do not pay for news. Amazing. Is it possible that the Pew professionals were unaware of the reason newspapers, radio, and television included comic strips, horoscopes, sports scores, and popular music in their “real” news content? I read in the middle of 2025 the research report “Few Americans Pay for News When They Encounter Paywalls.” For a number of years I worked for a large publishing company in Manhattan. I also worked at a privately owned publishing company in fly over country.
The sky looks threatening. Is it clouds, locusts, or the specter of the new Dark Ages? Thanks, you.com. Good enough.
I learned several things. Please, keep in mind that I am a dinobaby and I have zero in common with GenX, Y, Z, or the horrific GenAI. The learnings:
- Publishing companies spend time and money trying to figure out how to convert information into cash. This “problem” extended from the time I took my first real job in 1972 to yesterday when I received an email from a former publisher who is thinking about batteries as the future.
- Information loses its value as it diffuses; that is, if I know something, I can generate money IF I can find the one person who recognizes the value of that information. For anyone else, the information is worthless and probably nonsense because that individual does not have the context to understand the “value” of an item of information.
- Information has a tendency to diffuse. It is a bit like something with a very short half life. Time makes information even more tricky. If the context changes exogenously, the information I have may be rendered valueless without warning.
So what’s the solution? Here are the answers I have encountered in my professional life:
- Convert the “information” into magic and the result of a secret process. This is popular in consulting, certain government entities, and banker types. Believe me, people love the incantations, the jargon talk, and the scent of spontaneous ozone creation.
- Talk about “ideals,” and deliver lowest common denominator content. The idea is that the comix and sports scores will “sell” and the revenue can be used to pursue ideals. (I worked at an outfit like this, and I liked its simple, direct approach to money.)
- Make the information “exclusive” and charge a very few people a whole lot of money to access this “special” information. I am not going to explain how lobbying, insider talk, and trade show receptions facilitate this type of information wheeling and dealing. Just get a LexisNexis-type of account, run some queries, and check out the bill. The approach works for certain scientific and engineering information, financial data, and information people have no idea is available for big bucks.
- Embrace the “if it bleeds, it leads” approach. Believe me this works. Look at YouTube thumbnails. The graphics and word choice make clear that sensationalism, titillation, and jazzification are the order of the day.
Now back to the Pew research. Here’s a passage I noted:
The survey also asked anyone who said they ever come across paywalls what they typically do first when that happens. Just 1% say they pay for access when they come across an article that requires payment. The most common reaction is that people seek the information somewhere else (53%). About a third (32%) say they typically give up on accessing the information.
Stop. That’s the key finding: one percent pay.
Let me suggest:
- Humans will take the easiest path; that is, they will accept what is output or what they hear from their “sources”
- Humans will take “facts” and glue them together to come up with more “facts.” Without context — that is, what used to be viewed as a traditional education and a commitment to lifelong learning — these people will lose the ability to think. Some like this result, of course.
- Humans face a sharper divide between the information “haves” and the information “have nots.”
Net net: The new dark ages are on the horizon. How’s that for a speculative conclusion from the Pew research?
Stephen E Arnold, July 1, 2025

