Another Grousing Googler: These Wizards Need Time to Ponder Ethical Issues
May 1, 2025
No AI. This old dinobaby just plods along, delighted he is old and this craziness will soon be left behind. What about you?
My view of the Google is narrow. Sure, I got money to write about some reports about the outfit’s technology. I just did my job and moved on to more interesting things than explaining the end of relevance and how flows of shaped information destroy social structures.
This Googzilla is weeping because one of the anointed is not happy with the direction the powerful creature is headed. Googzilla asks itself, “How can we replace weak-bodied and weak-minded humans with smart software more quickly?” Thanks, OpenAI. Good enough like much of technology these days.
I still enjoy reading about the “real” Google written by “real” Googlers and Xooglers (former Googlers who now work at wonderfully positive outfits emulating the Google playbook).
The article in front of me this morning (Sunday, April 20, 2025) is titled “I’ve Worked at Google for Decades. I’m Sickened by What It’s Doing.” The subtitle tells me a bit about the ethical spine of the author, but you may find it enervating. As a dinobaby, I am not in tune with the intellectual, ethical, and emotional journeys of Googlers and Xooglers. Here’s the subtitle:
For the first time, I feel driven to speak publicly, because our company is now powering state violence across the globe.
Let’s take a look at what this Googler asserts about the estimable online advertising outfit. Keep in mind that the fun-loving Googzilla has been growing for more than two decades, and the creature is quite spritely despite some legal knocks and Timnit Gebru-type pains. Please, read the full “Sacramentum Paenitentiae.” (I think this is a full cycle of paenitentia, but as a dinobaby, I don’t have the crystalline intelligence of a Googler or Xoogler.)
Here’s statement one I noted. The author contrasts the good old days of St. Paul Buchheit’s “Don’t be evil” enjoinder to the present day’s Sundar & Prabhakar’s Comedy Show this way:
But if my overwhelming feeling back then was pride, my feeling now is a very different one: heartbreak. That’s thanks to years of deeply troubling leadership decisions, from Google’s initial foray into military contracting with Project Maven, to the corporation’s more recent profit-driven partnerships like Project Nimbus, Google and Amazon’s joint $1.2 billion AI and cloud computing contract with the Israeli military that has powered Israel’s ongoing genocide of Palestinians in Gaza.
Yeah, smart software that wants to glue cheese on pizzas now running autonomous weapons strikes me as an interesting concept. At least the Ukrainian smart weapons are home grown and mostly have a human or two in the loop. The Google-type outfits are probably going to find the Ukrainian approach inefficient. The blue chip consulting firm mentality requires that these individuals be allowed to find their future elsewhere.
Here’s another snip I circled with my trusty Retro51 ball point pen:
For years, I have organized internally against Google’s full turn toward war contracting. Along with other coworkers of conscience, we have followed official internal channels to raise concerns in attempts to steer the company in a better direction. Now, for the first time in my more than 20 years of working at Google, I feel driven to speak publicly, because our company is now powering state violence across the globe, and the severity of the harm being done is rapidly escalating.
I find it interesting that it takes decades to make a decision involving morality and ethicality. These are tricky topics and must be considered. St. Augustine of Hippo took about three years (church scholars are not exactly sure and, of course, have been known to hallucinate). But this Google-certified professional required 20 years to figure out some basic concepts. Is this judicious or just an indication of how tough intellectual amorality is to analyze?
Let me wrap up with one final snippet.
To my fellow Google workers, and tech workers at large: If we don’t act now, we will be conscripted into this administration’s fascist and cruel agenda: deporting immigrants and dissidents, stripping people of reproductive rights, rewriting the rules of our government and economy to favor Big Tech billionaires, and continuing to power the genocide of Palestinians. As tech workers, we have a moral responsibility to resist complicity and the militarization of our work before it’s too late.
The evil-that-men-do argument. Now that’s one that will resonate with the “leadership” of Alphabet, Google, Waymo, and whatever weirdly named units Googzilla possesses, controls, and partners with. As that much-loved American thinker Ralph Waldo Emerson allegedly said:
“What lies behind you and what lies in front of you, pales in comparison to what lies inside of you.”
I am not sure I want this Googler, Xoogler, or whatever on my quick recall team. Twenty years to figure out something generally about having an ethical compass and a morality meter seems like a generous amount of time. No wonder Googzilla is rushing to replace its humanoids with smart software. When that code runs on quantum computers, imagine the capabilities of the online advertising giant. It can brush aside criminal indictments. Ignore the mewing and bleating of employees. Manifest itself into one big … self, maybe sick, but is it the Googley destiny?
Stephen E Arnold, May 1, 2025
Honesty and Integrity? Are You Kidding Me?
April 23, 2025
No AI, just the dinobaby himself.
I read a blog post which begins with a commercial and self-promotion. That allowed me to jump to the actual write up which contains a couple of interesting comments. The write up is about hiring a programmer, coder, or developer right now.
The write up is “Tech Hiring: Is This an Inflection Point?” The answer is, “Yes.” Okay, now what is the interesting part of the article? The author identifies methods of “hiring” which includes interviewing and determining expertise which no longer work.
These methods are:
- Coding challenges done at home
- Exercises done remotely
- Posting jobs on LinkedIn
Why don’t these methods work?
The answer is, “Job applicants doing anything remotely and under self-supervision cheat.” Okay, that explains the words “honesty” and “integrity” in the headline to my blog post.
It does not take a rocket scientist or a person who gives one lecture a year to figure out what works. In case you are wondering, the article says, “Real person interviews.” Okay, I understand. That’s the way getting a job worked before the remote working, Zoom interviews, and AI revolutions took place. Also, one must not forget Covid. Okay, I remember. I did not catch Covid, and I did not change anything about my work routine or daily life. But I did wear a quite nifty super duper mask to demonstrate my concern for others. (Keep in mind that I used to work at Halliburton Nuclear, and I am not sure social sensitivity was a must-have for that work.)
Several observations:
- Common sense is presented as a great insight. Sigh.
- Watching a live prospect do work yields high value information. But the observer must not doom scroll or watch TikToks in my opinion.
- Allowing the candidate to speak with other potential colleagues and getting direct feedback delivers another pick up truck of actionable information.
Now what’s the stand out observation in the self-promotional write up?
LinkedIn is losing value.
I find that interesting. I have noticed that the service seems to be struggling to generate interest and engagement. I don’t pay for LinkedIn. I am 80, and I don’t want to bond, interact, or share with individuals whom I will never meet in the short time I have left to bedevil readers of this Beyond Search post.
I think Microsoft is taking the same approach to LinkedIn that it has to the problem of security for its operating systems, the reliability of its updates, and the amazingly weird indifference to flaws in the cloud synchronization service.
That’s useful information. And, no, I won’t be attending the author’s one lecture a year, subscribing to his for-fee newsletter, or listening to his podcast. Stating the obvious is not my cup of tea. But I liked the point about LinkedIn and the implications about honesty and integrity.
Stephen E Arnold, April 23, 2025
JudyRecords: Is It Back or Did It Never Go Away?
April 22, 2025
Believe it or not, no smart software. Just a dumb and skeptical dinobaby.
I was delighted to see that JudyRecords is back online. Here’s what the service says as of April 19, 2025:
Judyrecords is a 100% free nationwide search engine that lets you instantly search hundreds of millions of United States court cases and lawsuits. Judyrecords has over 100x more cases than Google Scholar and 10x more cases than PACER, the official case management system of the United States federal judiciary. As of Jul 2022, judyrecords now features free full-text search of all United States patents from 1/1/1976 to 07/01/2022 — over 8.1 million patents in total.
My thought is that lawyers, law students, and dinobabies like me will find the service quite useful.
The JudyRecords’ Web site adds:
The first 500K results are displayed instead of just the first 2K.
- murder – 926K cases
- fraud – 2.1 million cases
- burglary – 3.7 million cases
- assault – 8.2 million cases
Most people don’t realize that the other “free” search engines limit the number of hits shown to the user. The old-fashioned ideas of precision and recall are not operative with most of the people whom I encounter. At the Googleplex, precision and recall are treated like a snappy joke when the Sundar & Prabhakar Comedy Show appears in major venues like courtrooms.
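For readers who never met these library-school staples: precision is the fraction of returned hits that are relevant; recall is the fraction of all relevant documents the engine actually returned. A minimal sketch with invented numbers (no search engine’s real figures are implied):

```python
# Precision and recall, the "old-fashioned" retrieval measures.
# Hypothetical scenario: the engine shows 500 hits, 350 of them relevant,
# while the collection holds 700 relevant cases in total.

def precision(relevant_returned, total_returned):
    """Share of the displayed results that are actually on point."""
    return relevant_returned / total_returned

def recall(relevant_returned, total_relevant):
    """Share of all relevant documents that the engine surfaced."""
    return relevant_returned / total_relevant

print(precision(350, 500))  # 0.7: 70% of what you see is relevant
print(recall(350, 700))     # 0.5: half the relevant cases never surface
```

The arithmetic makes the dinobaby’s point: capping the displayed results caps recall no matter how clever the ranking is.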
If you want to control the results, JudyRecords provides old-fashioned and definitely unpopular methods such as Boolean logic. I can visualize the GenZs rolling their eyes and mouthing, “Are you crazy, old man?”
Please, check out JudyRecords because the outstanding management visionaries at LexisNexis, Thomson Reuters, and other “professional” publishers will be taking a look themselves.
Stephen E Arnold, April 22, 2025
Meta a Great Company Lately?
April 10, 2025
Sorry, no AI used to create this item.
Despite Google’s attempt to flood the zone with AI this and AI that, Meta kept popping up in my newsfeed this morning (April 10, 2025). I pushed past the super confidential information from the US District Court of Northern District of California (an amazing and typically incoherent extract of super confidential information) and focused on a non-fiction author.
The Zuck – NSO Group dust-up does not make much of a factoid described in considerable detail in Wikipedia. That encyclopedia entry is “Onavo.” In a nutshell, Facebook acquired a company which used techniques not widely known to obtain information about users of an encrypted app. Facebook’s awareness of Onavo took place, according to Wikipedia, prior to 2013 when Facebook purchased Onavo. My thought is that someone in the Facebook organization learned about other Israeli specialized software firms. Due to the high profile NSO Group had as a result of its participation in certain intelligence-related conferences and the relatively small community of specialized software developers in Israel, Facebook may have learned about the Big Kahuna, NSO Group. My personal view is that Facebook and probably more than a couple of curious engineers learned how specialized software purpose-built to cope with mobile phone data worked and were more than casually aware of its systems and methods. The Meta – NSO Group dust-up is an interesting case. Perhaps someday someone will write up how the Zuck precipitated a trial which, to an outsider, looks like a confused government-centric firm facing a teenager with a grudge. Will this legal matter turn a playground-type argument about who is on whose team into an international kidney stone for the specialized software sector? For now, I want to pick up the Meta thread and talk about Washington, DC.
The Hill, an interesting publication about interesting institutions, published “Whistleblower Tells Senators That Meta Undermined U.S. Security, Interests.” The author is a former Zucker who worked as the director of global public policy at Facebook. If memory serves me, she labored at the estimable firm when the Zuck was undergoing his political awakening.
The Hill reports:
Wynn-Williams told Hawley’s panel that during her time at Meta: “Company executives lied about what they were doing with the Chinese Communist Party to employees, shareholders, Congress and the American public,” according to a copy of her remarks. Her most explosive claim is that she witnessed Meta executives decide to provide the Chinese Communist Party with access to user data, including the data of Americans. And she says she has the “documents” to back up her accusations.
After the Zuck attempted to block, prevent, thwart, or delete Ms. Wynn-Williams’ book Careless People: A Cautionary Tale of Power, Greed, and Lost Idealism from seeing the light of a Kindle, I purchased the book. Silicon Valley tell-alls are usually somewhat entertaining. It is a mark of distinction for Ms. Wynn-Williams that she crafted a non-fiction write up that made me downright uncomfortable. Too much information about body functions and allegations about sharing information with a country not getting likes from too many people in certain Washington circles made me queasy. Dinobabies are often sensitive creatures unless they grow up to be Googzillas.
The Hill says:
Wynn-Williams testified that Meta started briefing the Chinese Communist party as early as 2015, and provided information about critical emerging technologies and artificial intelligence. “There’s a straight line you can draw from these briefings to the recent revelations that China is developing AI models for military use,” she said.
“But isn’t open source AI software the future?” a voice in my head asked.
What adds some zip to the appearance is this factoid from the article:
Wynn-Williams has filed a shareholder resolution asking the company’s board to investigate its activity in China and filed whistleblower complaints with the Securities and Exchange Commission and the Department of Justice.
I find it fascinating that on the West Coast, Facebook is unhappy with intelware being used on a Zuck-purchased service to obtain information about alleged persons of interest. About the same time, on the East Coast, a former Zucker is asserting that the estimable social media company buddied up to a nation-state not particularly supportive of American interests.
Assuming that the Northern District court case is “real” and “actual factual” and that Ms. Wynn-Williams’ statements are “real” and “actual factual,” what can one hypothesize about the estimable Meta outfit? Here are my thoughts:
- Meta generates little windstorms of controversy. It doesn’t need to flood the zone with Google-style “look at us” revelations. Meta just stirs up storms.
- On the surface, Meta seems to have an interesting public posture. On one hand, the company wants to bring people together for good, etc. etc. On the other, the company could be seen as annoyed that a company used his acquired service to do data collection at odds with Meta’s own pristine approach to information.
- The tussles are not confined to tiny spaces. The West Coast matter concerns what I call intelware. When specialized software is no longer “secret,” the entire sector gets a bit of an uncomfortable feeling. Intelware is a global issue. Meta’s approach is in my opinion spilling outside the courtroom. The East Coast matter is another bigly problem. I suppose allegations of fraternization with a nation-state less than thrilled with the US approach to life could be seen as “small.” I think Ms. Wynn-Williams has a semi-large subject in focus.
Net net: [a] NSO Group cannot avoid publicity which could have an impact on a specialized software sector that should have remained in a file cabinet labeled “Secret.” [b] Ms. Wynn-Williams could have avoided sharing what struck me as confidential company information and some personal stuff as well. The book is more than a tell-all; it is a summary of what could be alleged intentional anti-US activity. [c] Online seems to be the core of innovation, finance, politics, and big money. Just forty-five years ago, I wore bunny ears when I gave talks about the impact of online information. I called myself the Data Bunny and, believe it or not, wore white bunny rabbit ears for a cheap laugh and to make the technical information more approachable. Today many know online has impact: it has gone from a technical oddity used by fewer than 5,000 people to the stage where a much-loved organization chock full of Zuckers can disrupt the specialized software sector.
Stephen E Arnold, April 10, 2025
A TikTok Use Case: Another How To
April 7, 2025
Another dinobaby blog post. Eight decades and still thrilled when I point out foibles.
Social media services strike me as problematic. As a dinobaby, I marvel at the number of people who view services through a porthole in their personal submarine. Write ups amazed at the negative applications of social media remind me that there are some reasons meaningful regulation of TikTok-type services has not been formulated. Are these negative use cases news? For me, nope.
I read “How TikTok Is Emerging As an Essential Tool for Migrant Smugglers.” The write up explains how a “harmless” service can be used for criminal activities. The article says:
At a time when legal pathways to the U.S. have been slashed and criminal groups are raking in money from migrant smuggling, social media apps like TikTok have become an essential tool for smugglers and migrants alike. The videos—taken to cartoonish extremes—offer a rare look inside a long elusive industry and the narratives used by trafficking networks to fuel migration north.
Yep, TikTok is a marketing tool for people smugglers. Wow! Really?
Is this a surprise? My hunch is that the write up reveals more about the publication and the researchers than it does about human smugglers.
Is this factoid unheard of?
A 2023 study by the United Nations reported that 64% of the migrants they interviewed had access to a smart phone and the internet during their migration to the U.S.
A free service used by millions of people provides a communications fabric. Marketing is the go-to function of organizations, licit and illicit.
Several observations:
- Social media — operating in the US or in countries with different agendas — is a tool. Tools can be used for many purposes. Why wouldn’t bad actors exploit TikTok or any other social media service?
- The intentional use of a social media service for illegal purposes is widespread. LinkedIn includes fake personas; Telegram offers pirated video content; and Facebook — sure, even Facebook — allows individuals to advertise property for sale which may not come with a legitimate sales receipt from the person who found a product on a doorstep in an affluent neighborhood. Social media invites improper activity.
- Regulation in many countries has not kept pace with the diffusion of social media. In 2025, worrying about misuse of these services is not even news.
The big question is, “Have we reached a point of no return with social media?” I have been involved in computers and digital information for more than a half century. The datasphere is the world in which we live.
Will the datasphere evolve? Yes, the intentional use of social media is shifting toward negative applications. For me that means that for every new service, I do not perceive a social benefit. I see opportunities for accelerating improper use of data flows.
What strikes me about the write up is that documenting a single issue is interesting, but it misses what and how flows of information in TikTok-like services operate. Who are the winners? Who are the losers? And, who will own TikTok and the information space for its users?
Stephen E Arnold, April 7, 2025
Original Research: Not-So-Original Assertions about Content Appropriation
April 2, 2025
No AI. Just a dinobaby sharing an observation about younger managers and their innocence.
The Social Science Research Council published the 30-plus page report “Beyond Public Access in LLM Pre-Training Data.” The subtitle reveals the principal finding: “Non-Public Book Content in OpenAI’s Models.”
The write up states:
Using a legally obtained dataset of 34 copyrighted O’Reilly Media books, we apply the DE-COP membership inference attack method to investigate whether OpenAI’s large language models were trained on copyrighted content without consent. Our AUROC scores show that GPT-4o, OpenAI’s more recent and capable model, demonstrates strong recognition of paywalled O’Reilly book content (AUROC = 82%), compared to OpenAI’s earlier model GPT-3.5 Turbo. In contrast, GPT-3.5 Turbo shows greater relative recognition of publicly accessible O’Reilly book samples. GPT-4o Mini, as a much smaller model, shows no knowledge of public or non-public O’Reilly Media content when tested (AUROC ≈ 50%). Testing multiple models, with the same cutoff date, helps us account for potential language shifts over time that might bias our findings. These results highlight the urgent need for increased corporate transparency regarding pre-training data sources as a means to develop formal licensing frameworks for AI content training.
I want to mention that the DE-COP method provides one way to determine whether a specific data record was part of the training dataset for a machine learning model. The results of the SSRC’s research suggest that as OpenAI enhanced its models, the OpenAI organization appears to have used non-public content. Smaller (less capable) OpenAI models seemed to know less about the test set; the bigger models knew more.
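DE-COP itself quizzes a model to pick a verbatim book passage out of paraphrased decoys; AUROC then summarizes how cleanly the resulting recognition scores separate suspected training-set members from non-members. A minimal sketch of the AUROC half only, with invented scores and labels (nothing below comes from the SSRC data):

```python
# Toy illustration of AUROC for membership inference: the probability that
# a randomly chosen "member" (content suspected to be in the training data)
# gets a higher recognition score than a randomly chosen non-member.
# All scores and labels here are made up for illustration.

def auroc(scores, labels):
    """AUROC via pairwise comparison; ties count as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]  # suspected members
    neg = [s for s, y in zip(scores, labels) if y == 0]  # non-members
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical recognition scores from a membership-inference probe:
scores = [0.9, 0.8, 0.75, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0, 0]  # 1 = paywalled sample, 0 = public sample

print(round(auroc(scores, labels), 2))  # 0.89
```

An AUROC near 50% (the GPT-4o Mini result) is a coin flip: the model shows no special recognition of the paywalled material. An AUROC of 82% is a different story.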
The write up concludes that developers of smart software are sucking down data without obtaining permission or entering into licensing deals with publishers, copyright holders, or entities working on content not yet available as public content like this dinobaby blog post.
To sum up: A not-too-original assertion that certain organizations just take content and ignore rights is apparently accurate in this set of research. OpenAI accelerates. Speed is a positive. Breaking speed limits is believed to be no big deal.
A bit of a flashback: Over 40 years ago, I participated in a series of meetings to license the right to create abstracts of original articles from more than 1000 different publications. Our efforts were successful. Most publishers of academic business and management publications knew zero about online information services. The idea of summarizing an eight page article about executive compensation was unfamiliar to most of the people with whom our organization interacted over a period of years. Nevertheless, the owner of the database unit of the Courier Journal & Louisville Times Company insisted that we get signed agreements. The process was time consuming and expensive. Some outfits had representatives. So a negotiation went through layers of bureaucracy. Other publishers wanted a sit down meeting and a dog-and-pony show to explain why we wanted to boil down a 600 word article to about 150 words. There were worries about the impact of online access. There were worries about control of editorial content. There were worries about signing away rights. We couldn’t assuage worries, but we chugged along. The reputations of the Courier Journal and Barry Bingham Jr. helped but our small database operation had to follow the rules.
If the data in the cited document are accurate, the attitude of Barry Bingham Jr. would not have gained much traction at OpenAI. Instead of my getting a pat on the head for setting up the licensing deals with the able help of other Courier Journal professionals, I would have gotten kicked on the shin or worse.
Several observations:
- Observing content ownership remains important despite the acceleration toward illegal behaviors in the online sector
- Using show-and-tells and explaining the benefits of participating in digital innovations remains important. Knowledge minimizes anger and other negative reactions to pre-emptive behavior
- Managing the process is as important as demonstrating a willingness to conform to industry, social, and legal norms.
Let’s assume that the findings in the SSRC report are off by 50 percent. Does that change what appears to be a strong possibility of some fancy dancing at the OpenAI disco?
My conclusion is that taking short cuts is part of the Silicon Valley approach of doing whatever and just saying, “Wow, I am sorry.”
I asked ChatGPT 4 via You.com this question:
Should I create an online product and just use content from the Harvard Business Review without permission?
Here’s the response:
No, you should not create an online product using content from the Harvard Business Review (HBR) without permission. Doing so would likely violate copyright laws and HBR’s usage policies. Here’s why:
HBR’s Copyright and Usage Policies
- HBR Material is Copyrighted: Harvard Business Review explicitly states that its content is protected by copyright, and no part of its publications may be reproduced without permission.
- Usage Restrictions: HBR has strict policies regarding how its content can be used. For example, even in academic settings, HBR articles cannot be freely distributed unless done through specific channels like creating course packs via "Harvard Business Publishing for Educators."
- Permission is Required: If you want to use HBR content, you must obtain explicit permission. This includes reproducing, distributing, or using their material in any form.
Legal and Ethical Implications
- Copyright Infringement: Using HBR content without permission would likely constitute copyright infringement, which can result in legal action, fines, or other penalties.
- Ethical Concerns: Beyond legal issues, using someone else’s intellectual property without authorization is unethical and could harm your reputation or business.
What You Should Do Instead
- Seek Permission: Contact Harvard Business Review to request permission to use their content. They may have licensing options available for your intended use.
- Create Original Content: Instead of using HBR’s material, consider creating your own original content inspired by their ideas (but without copying their text or proprietary materials).
- Use Public Domain or Licensed Content: Look for content that is in the public domain or available under a license (e.g., Creative Commons) that allows commercial use.
Conclusion
Using HBR content without permission is not only illegal but also against their explicit policies. To avoid legal and ethical issues, either seek proper licensing or create your own original material.
In this case, OpenAI’s AI may be more intelligent than some people engaged in content acquisition at the firm. But I am a dinobaby. I try to follow the social, legal, and ethical rules as my parents who had only high school educations taught me. I am a dinobaby. A sad dinobaby.
Stephen E Arnold, April 2, 2025
Digital Marketing: Is It Worse Than Social Media? Yep, in Some Ways
March 26, 2025
Yep, another dinobaby original.
With the US taking an interesting trajectory, I have seen an uptick in articles that tackle the question, “Why are many of society’s functions leaking synthetic oil?”
“How Digital Marketing Broke Society” takes a different analytic path. The culprit is not social media. (Please, visualize the tattoos on 11-year-olds in TikTok- and Facebook-type content.) The bad actor is “digital marketing.” I must admit that I equated “digital marketing” with Google, but, please, don’t take my mental shortcut. You are more informed and neutral than I.
A young Silicon Valley type professional surfing to a digital advertising company on a flood of zeros and ones. He is making six figures and in his mom’s mind, her son is doing good things. He is such a good young man. She then turns her attention to his agentic note: “Love ya, mom.”
Let’s look at what Joan Westenberg (a person about whom I have zero knowledge) asserts.
I noted this passage:
We are, increasingly, a cross-generational society of extremely online, screen-tapping, doom scrolling depressives. And it’s having an impact. More than ever, we are divided by misinformation and disinformation, driven by and vulnerable to ignorance, hate-mongering, and bullshit, reactive more than proactive, caught in what is either a global backslide or a cycle of decay. We’re lonely, scared, and more likely than ever to take it out on total strangers. We read less. We watch more, and what we watch is short-form, viral videos designed to tap into the dopamine-fueled, base parts of our brains.
I think her point comes across, and it resonates with me. Every time I go to the gym I see people in need of serious calorie burning sitting on a machine or a bench fiddling with their mobile phones. I hit the gym at 6 am, and I am astounded that so many people have urgent messages to which they must respond. Hey, do some exercises. But these folks are gym potatoes engaged in doom scrolling, I have concluded.
Ms. Westenberg adds:
An entirely new system of influence came into being, operating largely beneath conscious awareness. Dark patterns increased, making it harder for users to protect their privacy or limit their exposure. Intermittent variable rewards—the psychological mechanism that powers slot machines—became standard practice. The industry successfully reframed this surveillance and manipulation as “personalization” and “enhanced user experience”—linguistic legerdemain that positioned exploitation as a service.
I think this is accurate. (Remember, please, that I am interpreting this as Googley behavior. The company can pay for streaming video mostly for free because it seems to have the knack for using advertising to get more advertising and offering tools to facilitate most of the steps in the money extraction process. You, gentle reader, must be more open minded than I am.)
Another point in Ms. Westenberg’s essay caught my attention; to wit:
Meta’s internal research found that divisive content generates significantly higher engagement, translating directly to advertising revenue. YouTube’s recommendation algorithm precisely drives users toward increasingly extreme content because it maximizes watch time and ad impressions. Marketing technology companies have built their entire business models around the commodification of attention – regardless of its social consequences. Digital marketing is capitalism at its most predatory—a system where deliberately amplifying society’s worst impulses becomes a rational business strategy. The industry has created a machine that converts social discord into shareholder value. In an attention economy, hatred isn’t just profitable; it’s the optimal product. The economic model and the foundational worldview of digital marketing operate on the premise that human consciousness exists primarily as a resource to be mined, refined, and sold. It’s a form of cognitive colonization—claiming and exploiting mental territory that once belonged to human beings. You can read it in the language of the industry: users are “captured,” attention is “harvested,” and engagement is “extracted.”
Yes, data mining works. Predictive analytics works for precisely the reason captured in Eugene Wigner’s 1960 essay “The Unreasonable Effectiveness of Mathematics in the Natural Sciences.” Digital marketing just applies this idea, and, in my opinion, it is indeed highly effective in 21st-century human-centric predictive work.
I want to quote the passage that makes me quite happy with Ms. Westenberg’s essay, and if I may do a bit of predictive assertion, makes me feel warm and fuzzy about her mental orientation; specifically, she writes and I quote:
I spent fifteen years in digital marketing. Building campaigns, funnels, and content. But over the last month, I shut down my marketing business. The decision was both ethical and existential. I can’t continue participating in an industry that has evolved from persuasion – harmful enough – to psychological exploitation and the deliberate destruction of our social order. Every optimization, targeting refinement, and engagement metric has brought us closer to a world where human autonomy is systematically undermined for commercial gain. The techniques I learned and taught others—the psychological triggers, the attention-capturing mechanisms, the behavioral prediction models—have escaped their commercial confines to taint our information ecosystem and devour the foundations of political discourse. Digital marketing is a fundamental threat to human flourishing, cognitive autonomy, and democratic governance. The industry has normalized a state of perpetual surveillance and manipulation that would have been unthinkable just decades ago. Its methodologies have created a world where attention is constantly hijacked, emotions are continuously manipulated, and reality itself is customized in pursuit of extraction value.
I am not sure that the flow of digital or weaponized information will stop. I suppose at some point a power outage or some other exogenous event will knock online services out, but for the foreseeable future, we have to surf on the torrents flooding the information highway.
Net net: I suppose I should recast my thinking and consider the perpetrators as those who are simply following the Google Legacy.
Stephen E Arnold, March 26, 2025
Facebook: Always Giving Families a Boost
March 21, 2025
What parent has not erred on the side of panic? We learn of one mom who turned to Facebook in the search for her teenage daughter, who "vanished" for ten days without explanation. The daughter had last been seen leaving her workplace with a man who, she later revealed, is her boyfriend. The Rakyat Post of Malaysia reports, "Mom’s Missing Teen Alert Backfires: ‘Stop Embarrassing Me, I’m Fine!’" To be fair, it can be hard to distinguish between a kidnapping and a digital cold shoulder. Writer Fernando Fong explains:
"CCTV footage from what’s believed to be the company dormitory showed Pei Ting leaving with a man around 2 PM on the 18th, carrying her bags and luggage. Since then, she has refused to answer calls or reply to WhatsApp messages, leading her mother to worry that someone might be controlling her phone. The mother said neither her elder daughter nor the employer had seen this man."
Such a scenario would alarm many a parent. The post continues:
"Desperate and frantic, the mother turned to social media as her last hope, only to be stunned when her daughter emerged from the digital shadows – not with remorse or understanding, but with embarrassment and indignation at her mother’s public display of concern."
Oops. In the comments of her mother’s worried post, the daughter identified the mystery man as her boyfriend. She also painted a picture of family conflict. Ahh, dirty laundry heaped in the virtual public square. Social media has certainly posed a novel type of challenge for parents.
Cynthia Murrell, March 21, 2025
A Swelling Wave: Internet Shutdowns in Africa
March 18, 2025
Another dinobaby blog post. No AI involved which could be good or bad depending on one’s point of view.
How does a government deal with information it does not like, want, or believe? The question is a pragmatic one. Not long ago, Russia suggested to Telegram that it cut the flow of Messenger content to Chechnya. Telegram has been somewhat more responsive to government requests since Pavel Durov’s detainment in France, but it dragged its digital feet. The fix? The Kremlin worked with service providers to kill off the content flow or at least as much of it as was possible. Similar methods have been used in other semi-enlightened countries.
“Internet Shutdowns at Record High in Africa As Access Weaponised” reports:
A report released by the internet rights group Access Now and #KeepItOn, a coalition of hundreds of civil society organisations worldwide, found there were 21 shutdowns in 15 African countries, surpassing the existing record of 19 shutdowns in 2020 and 2021.
There are workarounds, but some of these are expensive and impractical for the people in Comoros, Guinea-Bissau, Mauritius, Burundi, Ethiopia, Equatorial Guinea, and Kenya. I am not sure the list is complete, but the idea of killing Internet access seems to be an accepted response in some countries.
Several observations:
- Recent announcements about Google making explicit its access to users’ browser histories describe a rich and actionable pool of information. Will these types of data be used to pinpoint a dissident or a problematic individual? Based on my visits to Africa, including the thrilling Zimbabwe, I would suggest that the answer could be, “Absolutely.”
- Online is now pervasive, and due to a lack of meaningful regulation, going online and sharing information carries real downsides. In the late 1980s, I gave a lecture for ASIS at Rutgers University. I pointed out that flows of information work like silica grit in a sand-blasting device used to remove rust in an autobody shop. I can say from personal experience that no one knew what I was talking about. In 40 years, people and governments have figured out how online flows erode structures and social conventions.
- The shutdown tactic is now in the playbook of outfits around the world. Commercial companies can play the game of killing a service too. Certain large US high-technology companies have made clear that their services would be summarily blocked if certain countries did not play ball the US way.
As a dinobaby who has worked in online for decades, I find it interesting that the pigeons are coming home to roost. A failure years ago to recognize and establish rules and regulation for online is the same as having those lovable birds loose in the halls of government. What do pigeons produce? Yep, that’s right. A mess, a potentially deadly one too.
Stephen E Arnold, March 18, 2025
AI Hiring Spoofs: A How To
March 12, 2025
Be aware. A dinobaby wrote this essay. No smart software involved.
The late Robert Steele, one of the first government professionals to hop on the open source information bandwagon, and I worked together for many years. In one of our conversations in the 1980s, Robert explained how he used a fake persona to recruit people to assist him in his work on a US government project. He explained that job interviews were an outstanding source of information about a company or an organization.
“AI Fakers Exposed in Tech Dev Recruitment: Postmortem” is a modern spin on Robert’s approach. Instead of newspaper ads and telephone calls, today’s approach uses AI and video conferencing. The article presents a recipe for a technique that was not widely discussed in the 1980s. Robert learned his approach from colleagues in the US government.
The write up explains that a company wants to hire a professional. Everything hums along and then:
…you discover that two imposters hiding behind deepfake avatars almost succeeded in tricking your startup into hiring them. This may sound like the stuff of fiction, but it really did happen to a startup called Vidoc Security, recently. Fortunately, they caught the AI impostors – and the second time it happened they got video evidence.
The cited article explains how to set up and operate this type of deepfake play. I am not going to present the “how to” in this blog post. If you want the details, head to the original. The penetration tactic requires Microsoft LinkedIn, which gives that platform another use case for certain individuals gathering intelligence.
Several observations:
- Keep in mind that the method works for fake employers looking for “real” employees in order to obtain information from job candidates. (Some candidates are blissfully unaware that the job is a front for obtaining data about an alleged former employer.)
- The best way to avoid AI-centric scams is to do the work the old-fashioned way. Smart software opens up a wealth of opportunities to obtain allegedly actionable information. Unfortunately, the old-fashioned way is slow, expensive, and prone to social engineering tactics.
- As bad actors take advantage of the increased capabilities of smart software, humans who are not actively working with AI adapt slowly. Personnel-related matters are a pain point for many organizations.
To sum up, AI is a tool. It can be used in interesting ways. Is the contractor you hired on Fiverr or via some online service a real person? Is the job a real job, or a way to obtain information via an AI that is a wonderful conversationalist? One final point: The target referenced in the write-up was a cyber security outfit. Did the early alert, proactive, AI-infused system prevent penetration?
Nope.
Stephen E Arnold, March 12, 2025