Ad Blockers and a Googley Consequence
April 11, 2025
Another dinobaby blog post. Eight decades and still thrilled when I point out foibles.
Motivated individuals are acting in a manner usually associated with Cloudflare-type outfits. The idea of a “man in the middle” is a good one. It works when one buys something from Amazon. The user wants convenience and does not take the time to hunt around for a better or cheaper version of a particular product.
“Block YouTube Ads on AppleTV by Decrypting and Stripping Ads from Protobuf” provides a recipe for dumping advertisements in some streaming services, but the spotlight is on the lovable Google and Apple’s streaming device. (Poor Apple. Like its misfiring AI and definitely interesting glasses, the company caught a bright person’s attention.)
Social media needs two things: beacons that phone home and advertising, because how else is a company going to push products and services? The write up provides step-by-step instructions for chopping out ads from two big outfits.
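To make the mechanics concrete, here is a minimal Python sketch of the ad-stripping step, assuming the intercepted response has already been decrypted and decoded into nested dicts and lists. The ad-marker key names below are hypothetical placeholders for illustration, not fields documented in the write up.

```python
# Hypothetical sketch: after a man-in-the-middle proxy decrypts a response
# and decodes the protobuf into plain Python structures, removing ads can
# be a recursive filter. "adSlotRenderer" and "adPlacements" are invented
# marker names, not documented field names.
AD_MARKERS = {"adSlotRenderer", "adPlacements"}

def strip_ads(node):
    """Recursively drop any dict entries whose key looks ad-related."""
    if isinstance(node, dict):
        return {k: strip_ads(v) for k, v in node.items() if k not in AD_MARKERS}
    if isinstance(node, list):
        cleaned = [strip_ads(item) for item in node]
        # drop list items that were pure ad containers and are now empty
        return [item for item in cleaned if item != {}]
    return node
```

The filtered structure would then be re-encoded and passed along to the client, which is exactly the endless cat-and-mouse surface the rest of this post is about.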
Here’s what I think will happen at the monopolies:
- At least two software people will tackle this “problem”: one from Apple and one from Google.
- One will come up with a “fix” for the work-around.
- The “fix” will be shared with the company that did not come up with an enhancement first.
- The modified method will be deployed.
- The game begins again.
The cat-and-mouse sequence is little more than von Neumann game theory played out in real life with money at stake. It’s too bad Johnny and his pals (some of whom were quite quirky) are not around to work on ad blocking instead of nuclear weapons.
Well, Johnny isn’t around, and I think that game theory does not work when one battles multi-billion-dollar monopolies with lots of reasonably bright people around, provided they aren’t veterans of the Apple AI team or the original Google Glass product.
The write up is interesting. I admire the effort the author put into the blocking. How long will it persist? Good question, but the next iteration will probably be designed to preserve the money flow. Ads and user tracking are the means to the end: Big revenue.
Stephen E Arnold, April 11, 2025
Trapped in the Cyber Security Gym with Broken Gear?
April 11, 2025
As an IT worker, you can fall into more pitfalls than a road that needs repaving. Mac Chaffee shared a new trap on his blog, Mac’s Tech Blog, and how he handled it in “Avoid Building A Security Treadmill.” Chaffee wrote that he received a ticket asking him to stop people from using a GPU service to mine cryptocurrencies. Chaffee used Falco, an eBPF-powered agent that runs on the Kubernetes cluster, to monitor activity and shut down the digital mining.
Chaffee doesn’t mind the complexity of the solution. His biggest issue was with the “security treadmill” that he defines as:
“A security treadmill is a piece of software that, due to a weakness of design, requires constant patching to keep it secure. Isn’t that just all software? Honestly… kinda, yeah, but a true treadmill is self-inflicted. You bought it, assembled it, and put it in your spare bedroom; a device specifically designed to let you walk/run forever without making forward progress.”
One solution suggested to Chaffee was charging people to use the GPU. The idea was that if using the GPU cost more than the cryptocurrency it produced, the mining would stop. That idea was not pursued for reasons Chaffee was not told, so Falco was deployed.
Unfortunately, Falco only detects network traffic to a host when it is directly connected to the IP. The security treadmill was in full swing because users were bypassing the Internet filter monitored by Falco. Falco needs to be upgraded to catch new techniques, including a VPN or a proxy.
Another way to block cryptocurrency mining is blocking all outbound traffic except for destinations on an allowlist. That would also prevent malware attacks, command-and-control servers, and exfiltration attacks. Another point Chaffee noted is that many applications do not need a full POSIX environment. To combat this he suggests:
“Perhaps free-tier users of these GPUs could have been restricted to running specific demos, or restrictive timeouts for GPU processing times, or denying disk write access to prevent downloading miners, or denying the ability to execute files outside of a read-only area.”
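The default-deny egress idea can be sketched in a few lines of Python. The allowlisted hostnames below are invented placeholders; a real deployment would enforce this at the network layer rather than in application code.

```python
# A minimal sketch of the allowlist approach: deny all outbound traffic
# unless the destination (or one of its parent domains) is explicitly
# permitted. The hostnames here are illustrative, not from the post.
ALLOWED_HOSTS = {"pypi.org", "files.pythonhosted.org", "registry.internal"}

def egress_allowed(hostname: str) -> bool:
    """Default-deny: permit a connection only if the exact host or a
    parent domain appears on the allowlist."""
    parts = hostname.lower().rstrip(".").split(".")
    # check "a.b.c", then "b.c", then "c" against the allowlist
    return any(".".join(parts[i:]) in ALLOWED_HOSTS for i in range(len(parts)))
```

Because everything not listed is refused, a miner cannot reach a new mining pool without the operator noticing the blocked connection, which is the point of stepping off the treadmill.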
Chaffee declares it’s time to upgrade legacy applications or make them obsolete to avoid security treadmills. It sounds like there’s a niche for a startup there. What a thought: a Planet Fitness with one functioning treadmill.
Whitney Grace, April 11, 2025
The UK, the Postal Operation, and Computers
April 11, 2025
According to the Post Office Scandal, there’s a new amendment in Parliament that questions how machines work: “Proposed Amendment To Legal Presumption About The Reliability Of Computers.”
Journalist Tom Webb specializes in data protection, and he informed author Nick Wallis about an amendment to the Data (Use and Access) Bill now moving through the British Parliament. The amendment addresses:
“It concerns the legal presumption that “mechanical instruments” (which seems to be taken to include computer networks) are working properly if they look to the user like they’re working properly.”
Wallis has chronicled the problems associated with machines appearing to work properly since barrister Stephen Mason reported the issue to him. Mason is fighting on behalf of victims of the British Post Office scandal (which is another story) against this flawed thinking and its legal implications. Here’s more on what the problem is:
“Although the “mechanical instruments” presumption has never, to the best of my knowledge, been quoted in any civil or criminal proceedings involving a Subpostmaster, it has been said to effectively reverse the burden of proof on anyone who might be convicted using digital evidence. The logic being if the courts are going to assume a computer was working fine at the time an offence allegedly occurred because it looked like it was working fine, it is then down to the defendant to prove that it was not working fine. This can be extremely difficult to do (per the Seema Misra/Lee Castleton cases).”
The proposed amendment uses legal jargon to do the following:
“This amendment overturns the current legal assumption that evidence from computers is always reliable which has contributed to miscarriages of justice including the Horizon Scandal. It enables courts to ask questions of those submitting computer evidence about its reliability.”
This explanation means that just because the little light is blinking and the machine is doing something, those lights do not mean the computer is working correctly. Remarkable.
Whitney Grace, April 11, 2025
Meta a Great Company Lately?
April 10, 2025
Sorry, no AI used to create this item.
Despite Google’s attempt to flood the zone with AI this and AI that, Meta kept popping up in my newsfeed this morning (April 10, 2025). I pushed past the super confidential information from the US District Court of Northern District of California (an amazing and typically incoherent extract of super confidential information) and focused on a non-fiction author.
The Zuck – NSO Group dust up does not make much of a factoid described in considerable detail in Wikipedia. That encyclopedia entry is “Onavo.” In a nutshell, Facebook acquired a company which used techniques not widely known to obtain information about users of an encrypted app. Facebook’s awareness of Onavo took place, according to Wikipedia, prior to 2013 when Facebook purchased Onavo. My thought is that someone in the Facebook organization learned about other Israeli specialized software firms. Due to the high profile NSO Group had as a result of its participation in certain intelligence-related conferences and the relatively small community of specialized software developers in Israel, Facebook may have learned about the Big Kahuna, NSO Group. My personal view is that Facebook, and probably more than a couple of curious engineers, learned how specialized software purpose-built to cope with mobile phone data worked and were more than casually aware of its systems and methods.

The Meta – NSO Group dust up is an interesting case. Perhaps someday someone will write up how the Zuck precipitated a trial which, to an outsider, looks like a confused government-centric firm facing a teenager with a grudge. Will this legal matter turn a playground-type argument about who is on whose team into an international kidney stone for the specialized software sector? For now, I want to pick up the Meta thread and talk about Washington, DC.
The Hill, an interesting publication about interesting institutions, published “Whistleblower Tells Senators That Meta Undermined U.S. Security, Interests.” The whistleblower is a former Zucker who worked as the director of global public policy at Facebook. If memory serves me, she labored at the estimable firm when Zuck was undergoing his political awakening.
The Hill reports:
Wynn-Williams told Hawley’s panel that during her time at Meta: “Company executives lied about what they were doing with the Chinese Communist Party to employees, shareholders, Congress and the American public,” according to a copy of her remarks. Her most explosive claim is that she witnessed Meta executives decide to provide the Chinese Communist Party with access to user data, including the data of Americans. And she says she has the “documents” to back up her accusations.
After the Zuck attempted to block, prevent, thwart, or delete Ms. Wynn-Williams’ book Careless People: A Cautionary Tale of Power, Greed, and Lost Idealism from seeing the light of a Kindle, I purchased the book. Silicon Valley tell-alls are usually somewhat entertaining. It is a mark of distinction for Ms. Wynn-Williams that she crafted a non-fiction write up that made me downright uncomfortable. Too much information about body functions and allegations about sharing information with a country not getting likes from too many people in certain Washington circles made me queasy. Dinobabies are often sensitive creatures unless they grow up to be Googzillas.
The Hill says:
Wynn-Williams testified that Meta started briefing the Chinese Communist party as early as 2015, and provided information about critical emerging technologies and artificial intelligence. “There’s a straight line you can draw from these briefings to the recent revelations that China is developing AI models for military use,” she said.
“But isn’t open source AI software the future?” a voice in my head said.
What adds some zip to the appearance is this factoid from the article:
Wynn-Williams has filed a shareholder resolution asking the company’s board to investigate its activity in China and filed whistleblower complaints with the Securities and Exchange Commission and the Department of Justice.
I find it fascinating that on the West Coast, Facebook is unhappy with intelware being used on a Zuck-purchased service to obtain information about alleged persons of interest. About the same time, on the East Coast, a former Zucker is asserting that the estimable social media company buddied up to a nation-state not particularly supportive of American interests.
Assuming that the Northern District court case is “real” and “actual factual” and that Ms. Wynn-Williams’ statements are “real” and “actual factual,” what can one hypothesize about the estimable Meta outfit? Here are my thoughts:
- Meta generates little windstorms of controversy. It doesn’t need to flood the zone with Google-style “look at us” revelations. Meta just stirs up storms.
- On the surface, Meta seems to have an interesting public posture. On one hand, the company wants to bring people together for good, etc. etc. On the other, the company could be seen as annoyed that another company used its acquired service to do data collection at odds with Meta’s own pristine approach to information.
- The tussles are not confined to tiny spaces. The West Coast matter concerns what I call intelware. When specialized software is no longer “secret,” the entire sector gets a bit of an uncomfortable feeling. Intelware is a global issue. Meta’s approach is in my opinion spilling outside the courtroom. The East Coast matter is another bigly problem. I suppose allegations of fraternization with a nation-state less than thrilled with the US approach to life could be seen as “small.” I think Ms. Wynn-Williams has a semi-large subject in focus.
Net net: [a] NSO Group cannot avoid publicity which could have an impact on a specialized software sector that should have remained in a file cabinet labeled “Secret.” [b] Ms. Wynn-Williams could have avoided sharing what struck me as confidential company information and some personal stuff as well. The book is more than a tell-all; it is a summary of what could be alleged intentional anti-US activity. [c] Online seems to be the core of innovation, finance, politics, and big money. Just forty-five years ago, I wore bunny ears when I gave talks about the impact of online information. I called myself the Data Bunny and, believe it or not, wore white bunny rabbit ears for a cheap laugh and to make the technical information more approachable. Today many know online has impact. From a technical oddity used by fewer than 5,000 people to disruption of the specialized software sector by a much-loved organization chock full of Zuckers.
Stephen E Arnold, April 10, 2025
Extra Effort Required to Find Some Google Information
April 10, 2025
Dinobaby says, “No smart software involved. That’s for ‘real’ journalists and pundits.”
We are plugging along on a little project. As part of our checking assorted publicly accessible sources for being publicly accessible, we were delighted to verify that Exploit Database is alive and kicking. Plus, it appears to be current as of August 2024.
Since we are doing some poking around for information related to the newly-almost-free Pavel Durov, we were interested in the Google Hacking Database. You can locate that list of “Google dorks” at this link. The most recent additions or dorks provide some information about finding files containing passwords.
Here’s the little discovery. None of the almost 8,000 dorks are Telegram specific. However, many of the methods can be applied to Pavel Durov’s interesting outfit. We tried a handful and learned that Google’s index either is filtering Telegram-related content or simply does not make much of an effort to provide pointers to certain types of public Telegram information.
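For readers unfamiliar with dork construction, queries aimed at public Telegram content are generally scoped to Telegram’s t.me link domain. A small Python sketch, with an illustrative keyword list (the keywords and the exact scoping choices are our assumptions, not entries from the Google Hacking Database):

```python
# Sketch: composing dork-style query strings scoped to public Telegram
# pages. t.me is Telegram's public link domain; t.me/s/<channel> serves
# the web preview of a public channel.
def telegram_dorks(keywords):
    """Build Google-dork query strings aimed at public Telegram content."""
    scopes = ["site:t.me", "site:t.me/s"]  # channel links and web previews
    return [f'{scope} "{kw}"' for scope in scopes for kw in keywords]
```

Running a handful of these is roughly the experiment described above; the thin results are what suggest filtering or indexing indifference on Google’s part.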
How does an analyst or researcher locate current, comprehensive information about bots, Groups, Channels, and third-party specialized services for that platform? That is an excellent question, one which leads to some Russian resources (often presented in Russian) and semi-low-profile outfits like Forbidden Stories.
Net net: OSINT professionals depend on Google. However, certain large services engaged in a wide range of activities require pushing beyond the Google and its ever-helpful smart software.
Stephen E Arnold, April 10, 2025
AI Horn Honking: Toot for Refact
April 10, 2025
What is one of the things we were taught in kindergarten? Oh, right. Humility. That, however, doesn’t apply when you’re in a job interview, selling a product, or writing a press release. A press-release-style post on Dev.to announced that its authors’ open source AI agent for programming in the IDE ranked high: “Our AI Agent + 3.7 Sonnet Ranked #1 On Aider’s Polyglot Bench — A 76.4% Score.”
As the title says, the open source AI programming agent scored 76.4%. The agent is called Refact.ai and was upgraded with 3.7 Sonnet. It outperformed other AI agents, including Claude, Deepseek, ChatGPT, GPT-4.5 Preview, and Aider.
Refact.ai does better than the others because it is an intuitive AI agent. It uses a feedback loop to create a self-learning, auto-correcting agent:
• “Writes code: The agent generates code based on the task description.
• Fixes errors: Runs automated checks for issues.
• Iterates: If problems are found, the agent corrects the code, fixes bugs, and re-tests until the task is successfully completed.
• Delivers the result, which will be correct most of the time!”
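The four steps above can be caricatured in a few lines of Python. The `generate`, `run_checks`, and `apply_fixes` callables stand in for model calls and automated test runners; they are assumptions for illustration, not Refact.ai’s actual API.

```python
# A toy rendering of the generate-check-iterate loop: write code, run
# automated checks, patch, and repeat until the checks pass or the
# iteration budget runs out.
def agent_loop(task, generate, run_checks, apply_fixes, max_iters=5):
    """Generate code for a task, then keep fixing it until checks pass."""
    code = generate(task)
    for _ in range(max_iters):
        problems = run_checks(code)
        if not problems:
            return code  # checks pass: deliver the result
        code = apply_fixes(code, problems)
    return code  # best effort after max_iters
```

The benchmark claim amounts to saying this loop, driven by 3.7 Sonnet, converges on passing code more often than competing agents do.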
Refact.ai has good reason to pat itself on the back. Hopefully the team will continue to develop and deliver high-performing AI agents.
Whitney Grace, April 10, 2025
China and AI: Moving Ahead?
April 10, 2025
There’s a longstanding rivalry between the United States and China. The rivalry extends to everything from government and the economy to GDP and technology. There have been some recent technology developments in this heated East-West rivalry, says The Independent in the article “Has China Just Built The World’s First Human-Level AI?”
Deepseek is an AI start-up that’s been compared to OpenAI with its AI models. The clincher is that Deepseek’s models are claimed to be more advanced than OpenAI’s because they perform better and use fewer resources. Another Chinese AI company claims it has made a technology breakthrough called “Manus.” Manus is supposedly the world’s first fully autonomous AI agent that can perform complex tasks without human guidance. These tasks include creating a podcast, buying property, or booking travel plans.
Yichao Ji is the head of Manus’s AI development. He said that Manus is the next AI evolution and that it’s the beginning of artificial general intelligence (AGI), that is, AI that rivals or surpasses human intelligence. Yichao Ji said:
“ ‘This isn’t just another chatbot or workflow, it’s a truly autonomous agent that bridges the gap between conception and execution,’ he said in a video demonstrating the AI’s capabilities. ‘Where other AI stops at generating ideas, Manus delivers results. We see it as the next paradigm of human-machine collaboration.’”
Meanwhile, Dario Amodei, whose company designed Claude, the ChatGPT rival, predicted that AGI could be available as soon as 2026. He wrote an essay in October 2024 with the following statement:
“ ‘It can engage in any actions, communications, or remote operations,’ he wrote, ‘including taking actions on the internet, taking or giving directions to humans, ordering materials, directing experiments, watching videos, making videos, and so on. It does all of these tasks with a skill exceeding that of the most capable humans in the world.’”
These are tasks that Manus can do, according to the AI’s Web site. However, when Manus was tested, users spotted it making mistakes that most humans would catch.
Manus’s team is grateful for the insight into its AI’s flaws and will work to deliver a better AGI. Experts are viewing Manus with a more critical eye because it is not delivering the same results as its American counterparts.
It appears that the US is still developing higher performing AI that will become the basis of AGI. Congratulations to the red, white, and blue!
Whitney Grace, April 10, 2025
Stamping Out Intelligence: Censorship May Work Wonders
April 9, 2025
Sorry, no AI used to create this item.
I live in a state which has some interesting ideas. One of them is that the students are well educated. At this time, I think the state in which I reside holds position 47 out of 50 in terms of reading skills or academic performance. Are the numbers accurate? Probably not, but they indicate that learning is not priority number one in some quarters.
A young student with a gift for mathematics is the class dunce. He has to write on the chalk board, “I will not do linear algebra in class.” Thanks, OpenAI. Know any budding Einsteins in Mississippi?
However, there is a state which performs less well than mine. That state is Mississippi. Should that state hold the rank of the 50th, least academically slick entity in the US? Probably not, but the low ranking does say something to some people.
I thought about this notion of “low academic performance” when I read “Mississippi Libraries Ordered to Delete Academic Research in Response to State Laws.” The write up says:
A state commission scrubbed academic research from a database used by Mississippi libraries and public schools — a move made to comply with recent state laws changing what content can be offered in libraries. The Mississippi Library Commission ordered the deletion of two research collections that might violate state law, a March 31 internal memo obtained by Mississippi Today shows. One of the now deleted research collections focused on “race relations” and the other on “gender studies.”
So what?
I find it interesting that a state holding down the 50th spot in academic slickness assumes that its students will be reading research on these topics, or any topics for that matter.
I did a very brief stint as a teacher. In fact, I invested one year teaching in a quite challenging high school environment about 100 miles south of Chicago. If my students read anything, I was quite happy. I suppose today that I would be terminated because I used the Sunday comics, gas station credit card application forms, job applications for the local Hunt’s Drive In, and a wide range of printed matter. My goal was to provide reading material that was different from the standard text book, a text book I used when I was in high school years before I showed up at my teaching job.
The goal is to get students reading. Today, I assume that removing books and research material is more informed than what I did.
Several observations:
- Taking steps to prevent reading is different from how I would approach the question, “What should be in the school library?”
- The message sent to students who actually learn that books and research materials are being removed from the library seems to me to be, “Hey, don’t read this academic garbage.”
- The anti-intellectualism which this removal seems to underscore means that Mississippi is working hard to nail down its number 50 spot.
I am a dinobaby. I am quite thrilled with this fact. I will probably fall over dead with a book in my hands. Remember: I used outside materials to try to engage my students in reading for that one year of high school teaching. I should have been killed when a library stack fell over when I was in grade school.
These types of decisions are going to get the job done for me I think.
Stephen E Arnold, April 9, 2025
AI: Job Harvesting
April 9, 2025
It is a question that keeps many of us up at night. Commonplace ponders, "Will AI Automate Away Your Job?" The answer: Probably, sooner or later. The when depends on the job. Some workers may be lucky enough to reach retirement age before that happens. Writer Jason Hausenloy explains:
"The key idea where the American worker is concerned is that your job is as automatable as the smallest, fully self-contained task is. For example, call center jobs might be (and are!) very vulnerable to automation, as they consist of a day of 10- to 20-minute or so tasks stacked back-to-back. Ditto for many forms of many types of freelancer services, or paralegals drafting contracts, or journalists rewriting articles. Compare this to a CEO who, even in a day broken up into similar 30-minute activities—a meeting, a decision, a public appearance—each required years of experiential context that a machine can’t yet simply replicate. … This pattern repeats across industries: the shorter the time horizon of your core tasks, the greater your automation risk."
See the post for a more detailed example that compares the jobs of a technical support specialist and an IT systems architect.
Naturally, other factors complicate the matter. For example, Hausenloy notes, blue-collar jobs may be safer longer because physical robots are more complex to program than information software. Also, the more data there is on how to do a job, the better equipped algorithms are to mimic it. That is one reason many companies implement tracking software. Yes, it allows them to micromanage workers. And also it gathers data needed to teach an LLM how to do the job. With every keystroke and mouse click, many workers are actively training their replacements.
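Hausenloy’s time-horizon heuristic, plus the two complicating factors above, can be turned into a toy risk scorer. The weights below are invented for illustration; only the direction of each effect comes from the article.

```python
# A caricature of the article's reasoning: jobs built from short,
# self-contained tasks score higher automation risk; physical work and
# sparse training data lower it. The numbers are arbitrary.
def automation_risk(task_minutes, documented_data=False, physical=False):
    """Rough 0-1 risk score keyed to the typical length of a core task."""
    risk = 1.0 / (1.0 + task_minutes / 60.0)  # shorter horizon -> higher risk
    if documented_data:
        risk = min(1.0, risk * 1.5)  # abundant training data raises risk
    if physical:
        risk *= 0.5  # physical robots lag behind software automation
    return round(risk, 2)
```

On this caricature, a call-center shift of 15-minute tasks scores far above a systems architect’s day of multi-hour efforts, which is the article’s point in one line of arithmetic.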
Ironically, it seems those responsible for unleashing AI on the world may be some of the most replaceable. Schadenfreude, anyone? The article notes:
"The most vulnerable jobs, then, are not those traditionally thought of as threatened by automation—like manufacturing workers or service staff—but the ‘knowledge workers’ once thought to be automation-proof. And most vulnerable of all? The same Silicon Valley engineers and programmers who are building these AI systems. Software engineers whose jobs are based on writing code as discrete, well-documented tasks (often following standardized updates to a central directory) are essentially creating the perfect training data for AI systems to replace them."
In a section titled "Rethinking Work," Hausenloy waxes philosophical on a world in which all of humanity has been fired. Is a universal basic income a viable option? What, besides income, do humans get out of their careers? In what new ways will we address those needs? See the write-up for those thought exercises. Meanwhile, if you do want to remain employed as long as possible, try to make your job depend less on simple, repetitive tasks and more on human connection, experience, and judgement. With luck, you may just reach retirement before AI renders you obsolete.
Cynthia Murrell, April 9, 2025
AI Addicts Are Now a Thing
April 9, 2025
Hey, pal, can you spare a prompt?
Gee, who could have seen this coming? It seems one can become dependent on a chatbot, complete with addiction indicators like preoccupation, withdrawal symptoms, loss of control, and mood modification. "Something Bizarre Is Happening to People Who Use ChatGPT a Lot," reports The Byte. Writer Noor Al-Sibai cites a recent joint study by OpenAI and MIT Media Lab as she writes:
"To get there, the MIT and OpenAI team surveyed thousands of ChatGPT users to glean not only how they felt about the chatbot, but also to study what kinds of ‘affective cues,’ which was defined in a joint summary of the research as ‘aspects of interactions that indicate empathy, affection, or support,’ they used when chatting with it. Though the vast majority of people surveyed didn’t engage emotionally with ChatGPT, those who used the chatbot for longer periods of time seemed to start considering it to be a ‘friend.’ The survey participants who chatted with ChatGPT the longest tended to be lonelier and get more stressed out over subtle changes in the model’s behavior, too. Add it all up, and it’s not good. In this study as in other cases we’ve seen, people tend to become dependent upon AI chatbots when their personal lives are lacking. In other words, the neediest people are developing the deepest parasocial relationship with AI — and where that leads could end up being sad, scary, or somewhere entirely unpredictable."
No kidding. Interestingly, the study found those who use the bot as an emotional or psychological sounding board were less likely to become dependent than those who used it for "non-personal" tasks, like brainstorming. Perhaps because the former are well-adjusted enough to examine their emotions at all? (The privacy risks of sharing such personal details with a chatbot are another issue entirely.) Al-Sibai emphasizes the upshot of the research: The more time one spends using ChatGPT, the more likely one is to become emotionally dependent on it. We think parents, especially, should be aware of this finding.
How many AI outfits will offer free AI? You know. Just give folks a taste.
Cynthia Murrell, April 9, 2025