Starlink: Are You the Only Game in Town? Nope

October 23, 2025

This essay is the work of a dumb dinobaby. No smart software required.

I read “SpaceX Disables More Than 2,000 Starlink Devices Used in Myanmar Scam Compounds.” Interesting from a quite narrow Musk-centric focus. I wonder if this is a PR play or the result of some cooperative government action. The write up says:

Lauren Dreyer, the vice-president of Starlink’s business operations, said in a post on X Tuesday night that the company “proactively identified and disabled over 2,500 Starlink Kits in the vicinity of suspected ‘scam centers’” in Myanmar. She cited the takedowns as an example of how the company takes action when it identifies a violation of its policies, “including working with law enforcement agencies around the world.”

The cyber outfit added:

Myanmar has recently experienced a handful of high-profile raids at scam compounds which have garnered headlines and resulted in the arrest, and in some cases release, of thousands of workers. A crackdown earlier this year at another center near Mandalay resulted in the rescue of 7,000 people. Nonetheless, construction is booming within the compounds around Mandalay, even after raids, Agence France-Presse reported last week. Following a China-led crackdown on scam hubs in the Kokang region in 2023, a Chinese court in September sentenced 11 members of the Ming crime family to death for running operations.


Thanks, Venice.ai. Good enough.

Just one Chinese crime family. Even more interesting.

I want to point out that the write up did not take a tiny extra step; for example, it did not answer this question: “What will prevent the firms listed below from filling the Starlink void (if one actually exists)?” Here are some Starlink alternatives. These may be more expensive, but some surplus cash is spun off from pig butchering, human trafficking, drug brokering, and money laundering. Here’s the list from my files. Remember, please, that I am a dinobaby in a hollow in rural Kentucky. Are my resources more comprehensive than a big cyber security firm’s?

  • AST
  • EchoStar
  • Eutelsat
  • HughesNet
  • Inmarsat
  • NBN Sky Muster
  • SES S.A.
  • Telstra
  • Telesat
  • Viasat

With access to money, cut-outs, front companies, and compensated government officials, will a Starlink “action” make a substantive difference? Again, this is a question not addressed in the original write up. Myanmar is just one country operating in gray zones where government controls are ineffective or do not exist.

Starlink seems to be a pivot point for the write up. What about Starlinks in other “countries” like Lao PDR? What about a Starlink customer carrying his or her Starlink into Cambodia? I wonder if some US cyber security firms keep up with current actions, not those with some dust on the end tables in the marketing living room.

Stephen E Arnold, October 23, 2025

AI: There Is Gold in Them There Enterprises Seeking Efficiency

October 23, 2025

This essay is the work of a dumb dinobaby. No smart software required.

I read a “ride-em-cowboy” write up called “IBM Claims 45% Productivity Gains with Project Bob, Its Multi-Model IDE That Orchestrates LLMs with Full Repository Context.” That, gentle reader, is a mouthful. Let’s take a quick look at what sparked an efflorescence of buzzing jargon.


Thanks, Midjourney. Good enough like some marketing collateral.

I noted this statement about Bob (no, not the famous Microsoft Bob):

Project Bob, an AI-first IDE that orchestrates multiple LLMs to automate application modernization; AgentOps for real-time agent governance; and the first integration of open-source Langflow into Watsonx Orchestrate, IBM’s platform for deploying and managing AI agents. IBM’s announcements represent a three-pronged strategy to address interconnected enterprise AI challenges: modernizing legacy code, governing AI agents in production and bridging the prototype-to-production gap.

Yep, one sentence. The spirit of William Faulkner has permeated IBM’s content marketing team. Why not make a news release that is a single sentence like the 1300 word extravaganza in “Absalom, Absalom!”?

And again:

Project Bob isn’t another vibe coder, it’s an enterprise modernization tool.

I can visualize IBM customers grabbing the enterprise modernization tool and modernizing the enterprise. Yeah, that’s going to become a 100 percent penetration quicker than I can say, “Bob was the precursor to Clippy.” (Oh, sorry. I was confusing Microsoft’s Bob with IBM’s Bob again. Drat!)

Is it Watson making the magic happen with IDEs and enterprise modernization? No, Watson is probably there because, well, that’s IBM. But the brains for Bob come from Anthropic. Now Bob and Claude are really close friends. IBM’s middleware is Watson, actually Watsonx. And the magic of these systems produces … wait for it … AgentOps and Agentic Workflows.

The write up says:

Agentic Workflows handles the orchestration layer, coordinating multiple agents and tools into repeatable enterprise processes.  AgentOps then provides the governance and observability for those running workflows. The new built-in observability layer provides real-time monitoring and policy-based controls across the full agent lifecycle. The governance gap becomes concrete in enterprise scenarios. 

Yep, governance. (I still don’t know what that means exactly.) I wonder if IBM content marketing documents should come with a glossary like the 10 pages of explanations of Telegram’s wild and wonderful crypto freedom jargon.
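
Glossary or no glossary, the underlying pattern is not mysterious. Here is a minimal sketch, in plain Python, of what “orchestrating multiple LLMs with full repository context” plus a thin observability layer can boil down to. Every class, function, and model name below is hypothetical and invented for illustration; this is not IBM’s API, Project Bob, or AgentOps, just the generic route-and-log idea.

```python
import time
from dataclasses import dataclass, field

@dataclass
class AgentEvent:
    """One observability record: which model ran, what task, how long it took."""
    model: str
    task: str
    seconds: float

@dataclass
class ToyOrchestrator:
    """Toy multi-model router with a built-in event log."""
    repo_context: str                      # e.g., retrieved source files or summaries
    log: list = field(default_factory=list)

    def call_model(self, model: str, prompt: str) -> str:
        # Stand-in for a real LLM call; returns a placeholder string.
        return f"[{model} output for prompt of {len(prompt)} chars]"

    def route_task(self, task: str, payload: str) -> str:
        # Crude routing policy: different task types go to different (hypothetical) models.
        model = {"explain": "small-fast-model",
                 "modernize": "large-code-model",
                 "review": "reasoning-model"}.get(task, "default-model")
        prompt = f"Repository context:\n{self.repo_context}\n\nTask ({task}):\n{payload}"
        start = time.time()
        result = self.call_model(model, prompt)
        # The "observability" part: every agent action leaves a record.
        self.log.append(AgentEvent(model, task, time.time() - start))
        return result

bob = ToyOrchestrator(repo_context="// imagine COBOL copybooks and Java EE beans here")
print(bob.route_task("modernize", "Convert this COBOL paragraph to Java."))
print(bob.log)
```

The marketing words are bigger than the mechanism: pick a model, feed it the repository, write down what happened.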

My hunch is that IBM wants to provide the Betty Crocker approach to modernizing an enterprise’s software processes. Betty did wonders for my mother’s chocolate cake. If you want more information, just call IBM. Perhaps the agentic workflow Claude Watson customer service line will be answered by a human who can sell you the deed to a mountain chock full of gold.

Stephen E Arnold, October 23, 2025

Woof! Innovation Is Doomed But Novel Gym Shoes Continue

October 23, 2025

This essay is the work of a dumb dinobaby. No smart software required.

I have worked for commercial and government firms. My exposure ranges from the fun folks at Bell Labs and Bellcore to the less than forward leaning people at a canned fish outfit not far from MIT. Geography is important … sometimes. I have also worked on “innovation teams,” labored in a new product organization, and sat at the hand of the all-time expert of product innovation, Conrad Jones. Ah, you don’t know the name. That ices you out of some important information about innovation. Too bad.

I read “No Science, No Startups: The Innovation Engine We’re Switching Off.” The write up presents a reasonable and somewhat standard view of the “innovation process.” The basic idea is that there is an ecosystem which permits innovation. Think of a fish tank. In the water, we have fish, pet fish to be exact. We have a bubbler and a water feed. We even have toys in the fish tank. The owner of the fish tank is a hobbyist. The professional fish person might be an ichthyologist or a crew member on a North Sea fishing boat. The hobbyist buys live fish from the pet side of the fish business. The ichthyologist studies fish. The fishing boat crew member just hauls them in and enjoys every minute of the activity. Winter is particularly fun. I suppose I could point out other aspects of the fish game. How about fish oil? What about those Asian fish sauces? What about the perfume makers who promise that Ambroxan is just as good as ambergris? Then these outfits in Grasse buy whale stuff for their best concoctions.


Innovation never stops… with or without a research grant. It may not be AI, but it shows a certain type of thinking. Thanks, Venice.ai, good enough.

The fish business is complicated. Innovation, particularly in technology-centric endeavors, is more complex. The “No Science, No Startups” essay makes innovation simple. Is innovation really little more than science theorists, researchers, and engineers moving insights and knowledge through a poorly organized and poorly understood series of activities?

Yes, it is like the fish business. One starts with a herring. Where one ends up can be quite surprising, maybe sufficiently baffling to cause people to say, “No way, José.” Here’s an example: Fish bladders used to remove impurities from wine. Eureka! An invention somewhere in the mists of time. That’s fish. Technology in general and digital technology in particular are more slippery. (Yep, a fish reference.)

The cited essay says the pipeline has several process containers filled with people. (Keep in mind that outfits like Google and DeepSeek want to replace humanoids with smart software. But let’s go with the humans matter approach for this blog post.)

  1. Scientists who find, gather, discover, or stumble upon completely new things. Remember this from grade school, “Come here, Mr. Watson.”
  2. Engineers who recycle, glue together, or apply insight X to problem Y and create something novel as product Z.
  3. MBA-inspired people look and listen (sort of) to what engineers say and experience a Eureka moment. Some moments lead to Pets.com. Others yield a Google-type novelty with help from a National Science Foundation grant. (Check out that PageRank patent.)

The premise is that if the scientific group does not have money, the engineers and the MBA-inspired people will have a tough time coming up with new products, services, applications, or innovations. Is a flawed self-driving system in the family car an innovation or an opportunity to dance with death?

Where is the cited essay going? It is heading toward doom for the US belief that the country is the innovation leader. That’s America’s manifest destiny. The essay says:

Cut U.S. funding, then science will happen in other countries that understand its relationship to making a nation great – like China. National power is derived from investments in Science. Reducing investment in basic and applied science makes America weak.

In general, I think the author of this “No Science, No Startups” essay is on a logical path. However, I am not sure the cited article’s analysis covers all the possibilities of innovation. Let’s go back to fish.

The fish business is complicated and global. The landscape of the fish business changes if an underwater volcano erupts near the fishing areas not too distant from Japan and China. The fish business can take a knock if some once benign microbe does the Darwin thing and rips through the world’s cod. What happens to fish if some countries’ fishing communities eat through the stock of tuna? What if a TikTok video convinces people not to eat fish or to wear articles of clothing fabricated of fish skin? (Yes, it is a thing.)

Innovation, particularly in technology, has as many if not more points of disruption. When disruptions, or to use blue chip consultant speak, exogenous events, occur, humanoids have a history of innovating. Vikings in the sixth century kept warm without lighting fires on their wooden boats made watertight with flammable pine tar. (Yep, like those wooden boat hull churches, the spectacle of a big time fire teaches a harsh lesson.)

If I am correct that discontinuities, disruptions, and events humans cannot control occur, here’s what I think about innovation, spending lots of money, and entrepreneurs.

  1. If Maxwell could innovate, so can theorists and scientists today. Does the government have to fund these people? Possibly, but mom might produce some cash, or the scientist might have a side gig.
  2. Will individuals not now recognized as scientists, engineers, and entrepreneurs come up with novel products and services? The answer is, “Yes.” How do I know? Easy. Someone had to figure out how to make a wheel: No lab, no grants, no money, just a log and a need to move something. Eureka worked then and it will work again.
  3. Is technology itself the reason big bucks are needed? My view is yes. Each technological innovation seems to have a bigger price tag than the previous one. How much did Apple spend making a “new and innovative” orange iPhone? Answer: Too much. Why? Answer: To sell a fashion item. Is this innovation? Answer: Nope. It’s MBA think, and that, gentle reader, is infinitely innovative.

If I think about AI, I find myself gravitating to the AI turmoil at Apple and Meta. Money, smart people, and excuses. OpenAI is embracing racy outputs. That’s innovation at three big outfits. World-changing? Nope, stock and financial wobblies. That’s not how innovation is supposed to work, is it?

Net net: The US is definitely churning out wonky products, students who cannot read or calculate, and research that is bogus. The countries that educate, enforce standards, and put wandering young minds in schools and laboratories will generate new products and services. The difference is that these countries will advance in technological innovation. The countries that embrace lower standards, reduced funding for research, and glorified doom scrolling will become third-world outfits. What countries will be the winners in innovation? The answer is not the country that takes the lead in footwear made of fish skins.

Stephen E Arnold, October 23, 2025


AI and Data Exhaustion: Just Use Synthetic Data and Recycle User Prompts

October 23, 2025

That did not take long. The Independent reports, “AI Has Run Out of Training Data, Warns Data Chief.” Yes, AI models have gobbled up the world’s knowledge in just a few years. Neema Raphael, Goldman Sachs’s chief data officer and head of engineering, made that declaration on a recent podcast. He added that, as a result, AI models will increasingly rely on synthetic data. Get ready for exponential hallucinations. Writer Anthony Cuthbertson quotes Raphael:

“We’ve already run out of data. I think what might be interesting is people might think there might be a creative plateau… If all of the data is synthetically generated, then how much human data could then be incorporated? I think that’ll be an interesting thing to watch from a philosophical perspective.”

Interesting is one word for it. Cuthbertson notes Raphael’s warning did not come out of the blue. He writes:

“An article in the journal Nature in December predicted that a ‘crisis point’ would be reached by 2028. ‘The internet is a vast ocean of human knowledge, but it isn’t infinite,’ the article stated. ‘Artificial intelligence researchers have nearly sucked it dry.’ OpenAI co-founder Ilya Sutskever said last year that the lack of training data would mean that AI’s rapid development ‘will unquestionably end’. The situation is similar to fossil fuels, according to Mr Sutskever, as human-generated content is a finite resource just like oil or coal. ‘We’ve achieved peak data and there’ll be no more,’ he said. ‘We have to deal with the data that we have. There’s only one internet.’”

So AI firms knew this limitation was coming. Did they warn investors? They may have concerns about this “creative plateau.” The write-up suggests the dearth of fresh data may force firms to focus less on LLMs and more on agentic AI. Will that be enough fuel to keep the hype train going? Sure, hype has a life of its own. Now synthetic data? That’s forever.
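
For readers who wonder what “just use synthetic data” looks like in practice, here is a toy sketch of the loop: a model generates text, a filter keeps the survivors, and the survivors go back into the training pot. The generate, quality_score, and train functions are stand-ins, not any vendor’s actual pipeline; the point is only that each round leans more on the previous round’s output, which is where the “creative plateau” worry comes from.

```python
import random

def generate(model_version, n):
    # Stand-in for sampling n documents from the current model.
    return [f"synthetic doc v{model_version}-{i}" for i in range(n)]

def quality_score(doc):
    # Stand-in for a filter: heuristics, a reward model, deduplication, etc.
    return random.random()

def train(model_version, corpus):
    # Stand-in for a fine-tuning pass; returns the "next" model version.
    return model_version + 1

human_corpus = ["the finite pile of human-written text"]  # "there's only one internet"
model = 0

for round_number in range(3):
    kept = [d for d in generate(model, 100) if quality_score(d) > 0.7]
    corpus = human_corpus + kept   # each round the model feeds on its own output
    model = train(model, corpus)
    print(f"round {round_number}: kept {len(kept)} synthetic docs, corpus size {len(corpus)}")
```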

Cynthia Murrell, October 23, 2025

Amazon and its Imperative to Dump Human Workers

October 22, 2025

This essay is the work of a dumb dinobaby. No smart software required.

Everyone loves Amazon. The local merchants thank Amazon for allowing them to find their future elsewhere. The people and companies dependent on Amazon Web Services rejoiced when the AWS system failed and created an opportunity to do some troubleshooting and vendor shopping. Then there is the customer (me) who received a pair of ladies’ underwear instead of an AMD Ryzen 5750X. I enjoyed being the butt of jokes about my red, see-through microprocessor. Was I happy!


Mice discuss Amazon’s elimination of expensive humanoids. Thanks, Venice.ai. Good enough.

However, I read “Amazon Plans to Replace More Than Half a Million Jobs With Robots.” My reaction was that some employees and people in the Amazon job pipeline were not thrilled to learn that Amazon allegedly will dump humans and embrace robots. What a great idea. No health care! No paid leave! No grousing about work rules! No medical costs! No desks! Just silent, efficient, depreciable machines. Of course there will be smart software. What could go wrong? Whoops. Wrong question after taking out an estimated one third of the Internet for a day. How about this question, “Will the stakeholders be happy?” There you go.

The write up cranked out by the Gray Lady, reporting from confidential documents and other sources, says:

Amazon’s U.S. work force has more than tripled since 2018 to almost 1.2 million. But Amazon’s automation team expects the company can avoid hiring more than 160,000 people in the United States it would otherwise need by 2027. That would save about 30 cents on each item that Amazon picks, packs and delivers to customers. Executives told Amazon’s board last year that they hoped robotic automation would allow the company to continue to avoid adding to its U.S. work force in the coming years, even though they expect to sell twice as many products by 2033. That would translate to more than 600,000 people whom Amazon didn’t need to hire.
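
The NYT numbers invite a back-of-envelope check. Using an assumed fully loaded cost per warehouse worker (my figure, not the Times’), the 30-cents-per-item claim and the 160,000 avoided hires are at least mutually plausible:

```python
# Back-of-envelope only. The fully loaded cost per worker is my assumption,
# not a figure from the article; the other two numbers come from the quoted passage.
avoided_hires = 160_000           # hires Amazon expects to avoid by 2027 (article)
savings_per_item = 0.30           # dollars saved per item picked/packed/delivered (article)
assumed_cost_per_worker = 40_000  # assumed annual fully loaded cost per worker, USD

annual_savings = avoided_hires * assumed_cost_per_worker        # ~$6.4 billion
implied_items_per_year = annual_savings / savings_per_item      # ~21 billion items

print(f"~${annual_savings/1e9:.1f}B per year, implying ~{implied_items_per_year/1e9:.0f}B items handled")
```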

Why is Amazon dumping humans? The NYT turns to that institution that found Jeffrey Epstein a font of inspiration. I read this statement in the cited article:

“Nobody else has the same incentive as Amazon to find the way to automate,” said Daron Acemoglu, a professor at the Massachusetts Institute of Technology who studies automation and won the Nobel Prize in economic science last year. “Once they work out how to do this profitably, it will spread to others, too.” If the plans pan out, “one of the biggest employers in the United States will become a net job destroyer, not a net job creator,” Mr. Acemoglu said.

Ah, save money. Keep more money for stakeholders. Who knew? Who could have foreseen this motivation?

What jobs will Amazon provide to humans? Obviously leadership will keep leadership jobs. In my decades of professional work experience, I have never met a CEO who really believes anyone else can do his or her job. Well, the NYT has an answer about what humans will do at Amazon; to wit:

Amazon has said it has a million robots at work around the globe, and it believes the humans who take care of them will be the jobs of the future. Both hourly workers and managers will need to know more about engineering and robotics as Amazon’s facilities operate more like advanced factories.

I wish to close this essay with several observations:

  1. Much of the information in the write up comes from company documents. I am not comfortable with the use of this type of information. It strikes me as a short cut, a bit like Google or a self-made expert saying, “See what I did!”
  2. Many words were used to get one message across: Robots and by extension smart software will put people out of work. Basic income time, right? Why not say that?
  3. The reason Amazon wants to dump people is easy to summarize: Humans are expensive. Cut humans, and costs drop (in theory). But are there social costs? Sure, but why dwell on those?

Net net: Sigh. Did anyone reviewing this story note the Amazon online collapse? Perhaps there is a relationship between cost cutting at Amazon and the company’s stability?

Stephen E Arnold, October 22, 2025

Parents and Screen Time for Their Progeny: A Losing Battle? Yep

October 22, 2025

Sometimes I am glad my child-rearing days are well behind me. With technology a growing part of childhood education and leisure, how do parents stay on top of it all? For over 40%, not as well as they would like. The Pew Research Center examined “How Parents Manage Screen Time for Kids.” The organization surveyed US parents of kids 12 and under about the use of tablets, smartphones, smartwatches, gaming devices, and computers in their daily lives. Some highlights include:

“Tablets and smartphones are common – TV even more so.

[a] Nine-in-ten parents of kids ages 12 and younger say their child ever watches TV, 68% say they use a tablet and 61% say they use a smartphone.

[b] Half say their child uses gaming devices. About four-in-ten say they use desktops or laptops.

AI is part of the mix.

[c] About one-in-ten parents say their 5- to 12-year-old ever uses artificial intelligence chatbots like ChatGPT or Gemini.

[d] Roughly four-in-ten parents with a kid 12 or younger say their child uses a voice assistant like Siri or Alexa. And 11% say their child uses a smartwatch.

Screens start young.

[e] Some of the biggest debates around screen time center on the question: How young is too young?

[f] It’s not just older kids on screens: Vast majorities of parents say their kids ever watch TV – including 82% who say so about a child under 2.

[g] Smartphone use also starts young for some, but how common this is varies by age. About three-quarters of parents say their 11- or 12-year-old ever uses one. A slightly smaller share, roughly two-thirds, say their child age 8 to 10 does so. Majorities say so for kids ages 5 to 7 and ages 2 to 4.

[h] And fewer – but still about four-in-ten – say their child under 2 ever uses or interacts with one.”

YouTube is a big part of kids’ lives, presumably because it is free and provides a “contained environment for kids.” Despite this show of a “child-safe” platform, many have voiced concerns about both child-targeted ads and questionable content. TikTok and other social media are also represented, of course, though a whopping 80% of parents believe those platforms do more harm than good for children.

Parents cite several reasons they allow kids to access screens. Most do so for entertainment and learning. For children under five, keeping them calm is also a motivation. Those who have provided kids with their own phones overwhelmingly did so for ease of contact. On the other hand, those who do not allow smartphones cite safety, developmental concerns, and screen time limits. Their most common reason, though, is concern about inappropriate content. (See this NPR article for a more in-depth discussion of how and why to protect kids from seeing porn online, including ways porn is more harmful than it used to be. Also, your router is your first line of defense.)

It seems parents are not blind to the potential harms of technology. Almost all say managing screen time is a priority, though for most it is not in the top three. See the write-up for more details, including some handy graphs. Bottom line: Parents are fighting a losing battle in many US households.

Cynthia Murrell, October 22, 2025

Apple Can Do AI Fast … for Text That Is

October 22, 2025

Wasn’t Apple supposed to infuse Siri with Apple Intelligence? Yeah, well, Apple has been working on smart software. Unlike Google and Samsung, Apple is still working out some kinks in [a] its leadership, [b] innovation flow, [c] productization, and [d] double talk.

Nevertheless, I learned something by reading “Apple’s New Language Model Can Write Long Texts Incredibly Fast.” That’s excellent. The cited source reports:

In the study, the researchers demonstrate that FS-DFM was able to write full-length passages with just eight quick refinement rounds, matching the quality of diffusion models that required over a thousand steps to achieve a similar result. To achieve that, the researchers take an interesting three-step approach: first, the model is trained to handle different budgets of refinement iterations. Then, they use a guiding “teacher” model to help it make larger, more accurate updates at each iteration without “overshooting” the intended text. And finally, they tweak how each iteration works so the model can reach the final result in fewer, steadier steps.

And if you want proof, just navigate to the archive of research and marketing documents. You can access for free the research document titled “FS-DFM: Fast and Accurate Long Text Generation with Few-Step Diffusion Language Models.” The write up contains equations and helpful illustrations.
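
The few-step idea itself can be caricatured without any of Apple’s machinery. Below is a toy numerical illustration (NumPy, all names mine, nothing to do with FS-DFM’s actual discrete diffusion math): a “sampler” walks a noisy vector toward a target, and the few-step version simply takes larger, teacher-sized strides so eight updates land roughly where a thousand small ones would.

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.normal(size=16)   # stand-in for the "clean" output the sampler should reach
x_many = rng.normal(size=16)   # start from noise
x_few = x_many.copy()

step_many, n_many, n_few = 0.01, 1000, 8
# Pick the few-step stride so 8 updates shrink the error as much as 1000 tiny ones:
# (1 - step_few) ** 8 == (1 - step_many) ** 1000
step_few = 1 - (1 - step_many) ** (n_many / n_few)

for _ in range(n_many):
    x_many += step_many * (target - x_many)   # many small refinements
for _ in range(n_few):
    x_few += step_few * (target - x_few)      # few large, "teacher-sized" refinements

print(np.linalg.norm(target - x_many), np.linalg.norm(target - x_few))  # comparable errors
```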

The research paper is in line with other “be more efficient”-type efforts. At some point, companies in the LLM game will run out of money, power, or improvements. Efforts like Apple’s are helpful. However, as with its earlier debunking of smart software, research alone does not change the fact that Apple is lagging in the AI game.

Net net: Like orange iPhones and branding plays like Apple TV, a bit more attention to the delivery of products might be helpful. Apple did produce a gold thing-a-ma-bob for a world leader. It also reorganizes. Progress of a sort, I surmise.

Stephen E Arnold, October 22, 2025

Moral Police? Not OpenAI, Dude and Not Anywhere in Silicon Valley

October 22, 2025

This essay is the work of a dumb dinobaby. No smart software required.

Coming up with clever stuff is either the warp or the woof of innovation. With the breakthroughs in software that seems intelligent, clever is morphing into societal responsibility. For decades I have asserted that the flow of digital information erodes notional structures. From my Eagleton Lecture in the mid-1980s to the observations in this blog, the accuracy of my observation is verified. What began as disintermediation in the niche of special librarians has become the driving force for the interesting world now visible to most people.


Worrying about morality in 2025 is like using a horse and buggy to commute in Silicon Valley. Thanks, Venice.ai. Good enough.

I can understand the big idea behind Sam AI-Man’s statements as reported in “Sam Altman Says OpenAI Isn’t ‘Moral Police of the World’ after Erotica ChatGPT Post Blows Up.” Technology is — like, you know, so, um — neutral. This means that its instrumental nature appears in applications. Who hassles the fellow who innovated with Trinitrotoluene or electric cars with top speeds measured in hundreds of miles per hour?

The write up says:

OpenAI CEO Sam Altman said Wednesday [October 15, 2025] that the company is “not the elected moral police of the world” after receiving backlash over his decision to loosen restrictions and allow content like erotica within its chatbot ChatGPT. The artificial intelligence startup has expanded its safety controls in recent months as it faced mounting scrutiny over how it protects users, particularly minors. But Altman said Tuesday in a post on X that OpenAI will be able to “safely relax” most restrictions now that it has new tools and has been able to mitigate “serious mental health issues.”

This is a sporty paragraph. It contains highly charged words and a message. The message, as I understand it, is, “We can’t tell people what to do or not to do with our neutral and really good smart software.”

Smart software has become the next big thing for some companies. Sure, many organizations are using AI, but the motors driving the next big thing are parked in structures linked with some large high technology outfits.

What’s a Silicon Valley type outfit supposed to do with this moral frippery? The answer, according to the write up:

On Tuesday [October 13, 2025], OpenAI announced it had assembled a council of eight experts who will provide insight into how AI impacts users’ mental health, emotions and motivation. Altman posted about the company’s aim to loosen restrictions that same day, sparking confusion and swift backlash on social media.

Am I confused about the arrow of time? Sam AI-Man did one thing on the 13th of October and then explained that his firm is not the moral police on the 14th of October. Okay, make a move and then crawfish. That works for me, and I think the approach will become part of the managerial toolkit for many Silicon Valley outfits.

For example, what if AI does not generate enough cash to pay off the really patient, super understanding, and truly kind people who fund the AI effort? What if the “think it and it will become real” approach fizzles? What if AI turns out to be just another utility useful for specific applications like writing high school essays or automating a sales professional’s prospect follow-up letter? What if …? No, I won’t go there.

Several observations:

  1. Silicon Valley-type outfits now have the tools to modify social behavior. Whether it is Peter Thiel as puppet master or Pavel Durov carrying a goat to inspire TONcoin dApp developers, these individuals can control hearts and minds.
  2. Ignoring or imposing philosophical notions with technology was not a problem when an innovation like Tesla’s AC motor was confined to a small sector of industry. But today, the innovations can ripple globally in seconds. It should be no surprise that technology and ideology are for now intertwined.
  3. Control? Not possible. The ink, as the saying goes, has been spilled on the blotter. Out of the bottle. Period.

The waffling is little more than fire fighting. The uncertainty in modern life is a “benefit” of neutral technology. How do you like those real time ads that follow you around from online experience to online experience? Sam AI-Man and others of his ilk are not the moral police. That concept is as outdated as a horse-and-buggy on El Camino Real. Quaint but anachronistic. Just swipe left for another rationalization. It is 2025.

Stephen E Arnold, October 22, 2025

Smart Software: The DNA and Its DORK Sequence

October 22, 2025

This essay is the work of a dumb dinobaby. No smart software required.

I love articles that “prove” something. This is a gem: “Study Proves Being Rude to AI Chatbots Gets Better Results Than Being Nice.” Of course, I believe everything I read online. This write up reports as actual factual:

A new study claims that being rude leads to more accurate results, so don’t be afraid to tell off your chatbot. Researchers at Pennsylvania State University found that “impolite prompts consistently outperform polite ones” when querying large language models such as ChatGPT.
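
If you would rather not take the claim on faith, this kind of study is easy to approximate at home: wrap the same questions in polite and blunt framings, send both to whatever chatbot you use, and score the answers. A minimal sketch follows; ask_model is a stand-in for your API of choice, and the toy questions are mine, not the Penn State test set.

```python
POLITE = "Would you kindly answer the following question? "
RUDE = "Answer this. No rambling, no excuses: "

# Toy questions with known answers; the real study used its own benchmark.
QUESTIONS = [
    ("What is 17 * 23?", "391"),
    ("In what year did Apollo 11 land on the Moon?", "1969"),
]

def ask_model(prompt: str) -> str:
    # Stand-in for a real chatbot call; wire this to whatever API you actually use.
    return f"[placeholder answer to: {prompt}]"

def accuracy(tone_prefix: str) -> float:
    correct = 0
    for question, expected in QUESTIONS:
        answer = ask_model(tone_prefix + question)
        correct += int(expected in answer)
    return correct / len(QUESTIONS)

print("polite accuracy:", accuracy(POLITE))
print("rude accuracy:  ", accuracy(RUDE))
```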

My initial reaction is that I would much prefer providing my inputs about smart software directly to outfits creating these modern confections of a bunch of technologies and snake oil. How about a button on Microsoft Copilot, Google Gemini or whatever it is now, and the others in the Silicon Valley global domination triathlon of deception, money burning, and method recycling? This button would be labeled, “Provide feedback to leadership.” Think that will happen? Unlikely.


Thanks, Venice.ai, not good enough, you inept creation of egomaniacal wizards.

Smart YouTube and smart You.com were both dead for hours. Hey, no problemo. Want to provide feedback? Sure, just write “we care” at either firm. A wizard will jump right on the input.

The write up adds:

Okay, but why does being rude work? Turns out, the authors don’t know, but they have some theories.

Based on my experience with Silicon Valley-type smart software outfits, I have an explanation. The majority of the leadership has a latent protein in their DNA. This DORK sequence ensures that arrogance, indifference to others, and boundless confidence take precedence over other characteristics; for example, an ethical compass aligned with social norms.

Software built by DORKs responds to dorkish behavior because the DORK sequence wakes up and actually attempts to function in a semi-reliable way.

The write up concludes with this gem:

The exact reason isn’t fully understood. Since language models don’t have feelings, the team believes the difference may come down to phrasing, though they admit “more investigation is needed.”

Well, that makes sense. No one is exactly sure how the black boxes churned out by the next big thing outfits work. Therefore, why being a dork to the model works remains a mystery. Can the DORK sequence be modified by CRISPR/Cas9? Is there funding the Pennsylvania State University experts can pursue? I sure hope so.

Stephen E Arnold, October 22, 2025

First WAP? What Is That? Who Let the Cat Out of the Bag?

October 21, 2025

This essay is the work of a dumb dinobaby. No smart software required.

Ageing in rural Kentucky is not a good way to keep up with surveillance technology. I did spot a post on LinkedIn. I will provide a URL for the LinkedIn post, but I have zero clue if anyone reading this blog will be able to view the information. The focus of the LinkedIn post is that some wizards have taken inspiration from NSO Group-type firms and done some innovation. Like any surveillance technology, it has to be applied in a real life situation. Sometimes there is a slight difference between demonstrations, PowerPoint talks, and ease of use. But, hey, that’s the MBA-inspired way to riches or, at least in NSO Group’s situation, infamy.


Letting the cat out of the bag. Who is the individual? The president, an executive, a conference organizer, or a stealthy “real” journalist? One thing is clear: The cat is out of the bag. Thanks, Venice.ai. Good enough.

The LinkedIn post is from an entity using the handle OSINT Industries. Here is the link, dutifully copied from Microsoft’s outstanding social media platform. Don’t blame me if it doesn’t work. Microsoft just blames users, so just look in the mirror and complain: https://www.linkedin.com/posts/osint-industries_your-phone-is-being-tracked-right-now-ugcPost-7384354091293982721-KQWk?utm_source=share&utm_medium=member_desktop&rcm=ACoAAAACYEwBhJbGkTw7Ad0vyN4RcYKj0Su8NUU

How’s that for a link? ShortURL spit out this version: https://shorturl.at/x2Qx9.

So what’s the big deal? Cyber security outfits and an online information service (in the old days a printed magazine) named Mother Jones learned that an outfit called First WAP exploited the SS7 telecom protocol. As I understand this signal switching, SS7 is about 50 years old and much loved by telephony nerds and Bell heads. The system and method act like an old fashioned switchyard operator at a rail yard in the 1920s. Signals are filtered from voice channels. Call connections and other housekeeping are pushed to the SS7 digital switchyard. Instead of being located underground in Manhattan, the SS7 system is digital and operates globally. I have heard but have no first-hand information about its security vulnerabilities. I know that a couple of companies are associated with switching fancy dancing. Do security exploits work? Well, the hoo-hah about First WAP suggests that SS7 exploitation is available.
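
For readers who, like me, think of SS7 as a digital switchyard, here is a purely conceptual toy of why out-of-band signaling is attractive to tracking services. Nothing below is working SS7 or a real telecom API; the class and field names are mine. It only illustrates that a location-style query travels on the signaling side and never touches the subscriber’s handset or voice channel.

```python
from dataclasses import dataclass

@dataclass
class Subscriber:
    msisdn: str        # the phone number
    serving_cell: str  # the cell tower currently handling the handset

class ToySignalingNetwork:
    """Conceptual stand-in for the signaling side of a mobile network (not real SS7)."""
    def __init__(self):
        self.registry = {}  # number -> Subscriber; think of a toy location register

    def register(self, sub: Subscriber) -> None:
        self.registry[sub.msisdn] = sub

    def location_query(self, msisdn: str) -> str:
        # The point: the lookup rides the signaling plane. No call is placed, the
        # handset never rings, and the subscriber sees nothing.
        sub = self.registry.get(msisdn)
        return sub.serving_cell if sub else "unknown"

network = ToySignalingNetwork()
network.register(Subscriber("+00-5550123", serving_cell="CELL-017"))
print(network.location_query("+00-5550123"))  # a silent "where is this phone?" answer
```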

The LinkedIn post says that “The scale [is] 14,000+ phone numbers. 160 countries. Over 1 million location pings.”

A bit more color appears in the Russian information service FrankMedia.ru’s report “First WAP Empire: How Hidden Technology Followed Leaders and Activists.” The article is in Russian, but ever-reliable Google Translate makes short work of one’s language blind spots. Here are some interesting points from Frank Media:

  1. First WAP has been in business for about 17 or 18 years
  2. The system was used to track Google and Raytheon professionals
  3. First WAP relies on resellers of specialized systems and services and does not do too much direct selling. The idea is that the intermediaries are known to the government buyers. A bright engineer from another country is generally viewed as someone who should not be in a meeting with certain government professionals. This is nothing personal, you understand. This is just business.
  4. The system is named Altamides, which may be a variant of a Greek word for “powerful.”

The big reveal in the Russian write up is that a journalist got into the restricted conference, entered into a conversation with an attendee at the restricted conference, and got information which has put First WAP in the running to be the next NSO Group in terms of PR problems. The Frank Media write up does a fine job of identifying two individuals. One is the owner of the firm and the other is the voluble business development person.

Well, everyone gets 15 minutes of fame. Let me provide some additional, old-person information. First, the company’s Web address is www.1rstwap.com. Second, the firm’s alleged full name is First WAP International DMCC. The “DMCC” acronym means that the firm operates from Dubai’s economic zone. Third, the firm sells through intermediaries; for example, an outfit called KCS operating allegedly from the UK. Companies House information is what might be called sparse.

Several questions:

  1. How did a non-LE or intel professional get into the conference?
  2. Why was the company able to operate off the radar for more than a decade?
  3. What benefits does First WAP derive from its nominal base in Indonesia?
  4. What are the specific security vulnerabilities First WAP exploits?
  5. Why do the named First WAP executives suddenly start talking after many years of avoiding an NSO-type PR problem?

Carelessness seems to be the reason this First WAP got its wireless access protocol put in the spotlight. Nice work!

To WAP up, you can download the First WAP encrypted messaging application from… wait for it… the Google Play Store. The Google listing includes this statement, “No data shared with third parties.” Think about that statement.

Stephen E Arnold, October 21, 2025
