The National Public Radio Entity Emulates Grandma

April 17, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I can hear my grandmother telling my cousin Larry, “Chew your food. Or… no television for you tonight.” The time was 6:30 pm. The date was March 3, 1956. My cousin and I were being “watched” while our parents were at a political rally and banquet. Grandmother was in charge, and my cousin was edging close to being sent to grandfather for a whack with his wooden paddle. Tough love, I suppose. I was a good boy. I chewed my food and worked to avoid the Wrath of Ma. I did the time travel thing when I read “NPR Suspends Veteran Editor As It Grapples with His Public Criticism.” I avoid begging-for-dollars outfits, so I had no idea what the issue is or was.


“Gea’t haspoy” which means in grandmother speak: “That’s it. No TV for you tonight. In the morning, both of you are going to help Grandpa mow the yard and rake up the grass.” Thanks, NPR. Oh, sorry, thanks MSFT Copilot. You do the censorship thing too, don’t you?

The write up explains:

NPR has formally punished Uri Berliner, the senior editor who publicly argued a week ago that the network had “lost America’s trust” by approaching news stories with a rigidly progressive mindset.

Oh, I get it. NPR allegedly shapes stories. A “real” journalist does not go along with the program. The progressive-leaning outfit ignores the free speech angle. The “real” journalist is punished with five days in a virtual hoosegow. An NPR “real” journalist published an essay critical of NPR and then vented on a podcast.

The article I have cited is an NPR article. I guess self criticism is a progressive trait, maybe? Anyway, the article about the grandma action stated:

In rebuking Berliner, NPR said he had also publicly released proprietary information about audience demographics, which it considers confidential. He said those figures “were essentially marketing material. If they had been really good, they probably would have distributed them and sent them out to the world.”

There is no hint that this “real” journalist shares beliefs believed to be held by Julian Assange or that bold soul Edward Snowden, both of whom have danced with super interesting information.

Several observations:

  1. NPR’s suspending an employee reminds me of my grandmother punishing us for not following her wacky rules
  2. NPR is definitely implementing a type of information shaping; if it were not, what’s the big deal about a grousing employee? How many of these does Google have protesting in a year?
  3. Banning a person who is expressing an opinion strikes me as a tasty blend of grandma’s rules and that master motivator Joe Stalin. But that’s just my dinobaby mind having a walk-about.

Net net: What media are not censoring, muddled, and into acting like grandma?

Stephen E Arnold, April 15, 2024

Google Mandates YouTube AI Content Be Labeled: Accurately? Hmmmm

April 2, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The rules for proper use of AI-generated content are still up in the air, but big tech companies are already being pressured to institute regulations. Neowin reported that “Google Is Requiring YouTube Creators To Post Labels For Realistic AI-Created Content” on videos. This is a smart idea in the age of misinformation, especially when technology can realistically create images and sounds.

Google first announced the new requirement for realistic AI content in November 2023. YouTube’s Creator Studio now includes a tool to label AI content. The new tool is called “Altered content” and asks creators yes-or-no questions. Its simplicity is similar to YouTube’s question about whether a video is intended for children. The “Altered content” label applies to the following:

• “Makes a real person appear to say or do something they didn’t say or do

• Alters footage of a real event or place

• Generates a realistic-looking scene that didn’t actually occur”

The article goes on to say:

“The blog post states that YouTube creators don’t have to label content made by generative AI tools that do not look realistic. One example was “someone riding a unicorn through a fantastical world.” The same applies to the use of AI tools that simply make color or lighting changes to videos, along with effects like background blur and beauty video filters.”
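The disclosure rules described above reduce to a handful of yes-or-no checks: three triggers requiring a label, plus exemptions for clearly unrealistic content and minor cosmetic edits. A minimal sketch of that decision logic (the function name and field names are hypothetical illustrations, not Google’s actual implementation):

```python
# Hypothetical sketch of the "Altered content" disclosure decision as
# described in the blog post. Field names are illustrative only.

def needs_altered_content_label(video: dict) -> bool:
    """Return True if the creator must disclose altered/synthetic content."""
    # Exemptions noted in the post: clearly unrealistic content (e.g. a
    # unicorn ride) and cosmetic tweaks (color, lighting, blur, filters).
    if video.get("clearly_unrealistic"):
        return False
    if video.get("only_cosmetic_edits"):
        return False

    # The three disclosure triggers quoted from YouTube's announcement.
    triggers = (
        video.get("depicts_real_person_fabricated_act"),
        video.get("alters_real_event_or_place"),
        video.get("realistic_scene_that_never_occurred"),
    )
    return any(triggers)

print(needs_altered_content_label({"realistic_scene_that_never_occurred": True}))
```

The yes-or-no structure mirrors the simplicity YouTube already uses for its made-for-kids question.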

Google says it will have enforcement measures if creators consistently fail to label their realistic AI videos, but the consequences are not specified. YouTube also reserves the right to place labels on videos itself. There will also be a reporting system viewers can use to notify YouTube of unlabeled videos. It’s not surprising that Google’s algorithms can’t distinguish realistic AI videos from genuine footage. Perhaps the algorithms are outsmarting their creators.

Whitney Grace, April 2, 2024

Alternative Channels, Superstar Writers, and Content Filtering

February 7, 2024

This essay is the work of a dumb dinobaby. No smart software required.

In this post-Twitter world, a duel of influencers is playing out in the blogosphere. At issue: Substack’s alleged Nazi problem. The kerfuffle began with a piece in The Atlantic by Jonathan M. Katz, but has evolved into a debate between Platformer’s Casey Newton and Jesse Singal of Singal-Minded. Both those blogs are hosted by Substack.

To get up to speed on the controversy, see the original Atlantic article. Newton wrote a couple of posts detailing Substack’s responses and Platformer’s involvement. In “Substack Says It Will Remove Nazi Publications from the Platform,” he writes:

“Substack is removing some publications that express support for Nazis, the company said today. The company said this did not represent a reversal of its previous stance, but rather the result of reconsidering how it interprets its existing policies. As part of the move, the company is also terminating the accounts of several publications that endorse Nazi ideology and that Platformer flagged to the company for review last week.”

How many publications did Platformer flag, and how many of those did Substack remove? Were they significant publications, and did they really violate the rules? These are the burning questions Singal sought to answer. He shares his account in “Platformer’s Reporting on Substack’s Supposed ‘Nazi Problem’ Is Shoddy and Misleading.” But first, he specifies his own perspective on Katz’s Atlantic article:

“In my view, this whole thing is little more than a moral panic. Moreover, Katz cut certain corners to obscure the fact that to the extent there are Nazis on Substack at all, it appears they have almost no following or influence, and make almost no money. In one case, for example, Katz falsely claimed that a white nationalist was making a comfortable living writing on Substack, but even the most cursory bit of research would have revealed that that is completely false.”

Singal says he plans a detailed article supporting that assertion, but first he must pick apart Platformer’s position. Readers are treated to details from an email exchange between the bloggers and reasons Singal feels Newton’s responses are inadequate. One can navigate to that post for those details if one wants to get into the weeds. As of this writing, Newton has not published a response to Singal’s diatribe. Were we better off when such duels took place 280 characters at a time?

One positive about newspapers: An established editorial process kept superstars grounded in reality. Now entitlement, more than content, seems to be in the driver’s seat.

Cynthia Murrell, February 7, 2024

Harvard University: Does Money Influence Academic Research?

December 5, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Harvard University has been on my radar since the ethics misstep. In case your memory is fuzzy, Francesca Gino, a big thinker about ethics and taking shortcuts, was accused of data fraud. The story did not attract much attention in rural Kentucky. Ethics and dishonesty? Come on. Harvard has to do some serious training to catch up with a certain university in Louisville. For a reasonable explanation of the allegations (because, of course, one will never know), navigate to “Harvard Professor Who Studies Dishonesty Is Accused of Falsifying Data” and dig in.


Thanks, MSFT Copilot, you have nailed the depressive void that comes about when philosophers learn that ethics suck.

Why am I thinking about Harvard and ethics? The answer is that I read “Harvard Gutted Initial Team Examining Facebook Files Following $500 Million Donation from Chan Zuckerberg Initiative, Whistleblower Aid Client Reveals.” I have no idea if the write up is spot on, weaponized information, or the work of someone who did not get into one of the university’s numerous money generating certification programs.

The write up asserts:

Harvard University dismantled its prestigious team of online disinformation experts after a foundation run by Facebook’s Mark Zuckerberg and his wife Priscilla Chan donated $500 million to the university, a whistleblower disclosure filed by Whistleblower Aid reveals. Dr. Joan Donovan, one of the world’s leading experts on social media disinformation, says she ran into a wall of institutional resistance and eventual termination after she and her team at Harvard’s Technology and Social Change Research Project (TASC) began analyzing thousands of documents exposing Facebook’s knowledge of how the platform has caused significant public harm.

Let’s assume that the allegation is horse feathers, not to be confused with Intel’s fabulous Horse Ridge. Harvard still has to do some fancy dancing with regard to the ethics professor and expert in dishonesty who is alleged to have violated the esteemed university’s ethics guidelines and was dishonest.

If we assume that the information in Dr. Donovan’s whistleblower declaration is close enough for horse shoes, something equine can be sniffed in the atmosphere of Dr. William James’s beloved institution.

What could Facebook or the Metazuck do which would cause significant public harm? The options include providing tools to disseminate information which sparks body shaming, self harm, and angst among young users. Are old timers possibly affected? I suppose buying interesting merchandise on Facebook Marketplace and experiencing psychological problems as a result of defriending are possibilities too.

If the allegations are proven to be accurate, what are the consequences for the two esteemed organizations? My hunch is zero. Money talks; prestige walks away to put ethics on display for another day.

Stephen E Arnold, December 5, 2023

The Google Magic Editor: Mom Knows Best and Will Ground You, You Goof Off

November 13, 2023

This essay is the work of a dumb humanoid. No smart software required.

What’s better at enforcing rules? The US government with its Declaration of Independence, Constitution, and regulatory authority, or Mother Google? If you think the US government’s legal process for probing Google’s alleged fancy dancing with mere users is opaque, you are correct. The US government needs the Google more than Google Land needs the world’s governments. Who’s in charge of Google? The real authority is Mother Google, a ghost-like iron maiden creating and enforcing, with smart software, many rules and regulations. Think of Mother Google operating from a digital Star Chamber. Banned from YouTube? Mother Google did it. Lost Web site traffic overnight? Mother Google did it. Lost control of your user data? Mother Google did not do that, of course.


A stern mother says, “You cannot make deep fakes involving your gym teacher and your fifth grade teacher. Do you hear me?” Thanks, Microsoft Bing. Great art.

The author of “Google Photos’ Magic Editor Will Refuse to Make These Edits” describes the new limits. The write up states:

Code within the latest version of Google Photos includes specific error messages that highlight the edits that Magic Editor will refuse to do. Magic Editor will refuse to edit photos of ID cards, receipts, images with personally identifiable information, human faces, and body parts. Magic Editor already avoids many of these edits but without specific error messages, leaving users guessing on what is allowed and what is not.

What’s interesting is that users have to discover that which is forbidden by experimenting. My reaction to this assertion is that Google does not want to get in trouble when a crafty teen cranks out fake IDs in order to visit some of the more interesting establishments in town.
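The quoted description amounts to a category blocklist paired with category-specific error messages instead of silent failure. A hypothetical sketch of that pattern (the category keys and message strings are my own illustrations, not taken from the decompiled app):

```python
# Hypothetical sketch of a blocklist check like the one described for
# Magic Editor: refuse certain detected photo categories and return a
# specific error message rather than leaving users guessing.
# All strings here are illustrative assumptions.

BLOCKED = {
    "id_card":   "Magic Editor can't edit photos of ID cards.",
    "receipt":   "Magic Editor can't edit photos of receipts.",
    "pii":       "Magic Editor can't edit personally identifiable information.",
    "face":      "Magic Editor can't edit human faces.",
    "body_part": "Magic Editor can't edit body parts.",
}

def check_edit(detected_categories):
    """Return (allowed, message) for a requested edit."""
    for category in detected_categories:
        if category in BLOCKED:
            return False, BLOCKED[category]
    return True, "Edit allowed."

print(check_edit(["landscape"]))   # permitted
print(check_edit(["face"]))        # refused, with a specific message
```

The design point is the specific message: a silent refusal forces experimentation, while an explicit message documents the rule.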

I have a nagging suspicion or two I would like to share:

  1. The log files identifying which user tried to create what with which prompt would be interesting to review
  2. The list of don’ts is not published because it is adjusted to meet Google’s needs, not the users’
  3. Google wants to be able to say, “See, we are trying to keep the Internet safe, pure, and tidy.”

Net net: What happens when smart software enforces more potent and more subtle controls over the framing and presenting of information? Right, mom?

Stephen E Arnold, November 13, 2023

The Google: Dribs and Drabs of Information Suggest a Frisky Outfit

October 10, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I have been catching up since I returned from a law enforcement conference. One of the items in my “read” file concerned Google’s alleged demonstrations of the firm’s cleverness. Clever is often valued more than intelligence in some organizations, in my experience. I picked up on an item describing the system and method for tweaking a Google query to enhance the results with some special content.

“How Google Alters Search Queries to Get at Your Wallet” appeared on October 2, 2023. By October 6, 2023, the article was disappeared. I want to point out, for you open source intelligence professionals, that the original article remains online.


Two serious and bright knowledge workers look confused when asked about alleged cleverness. One says, “I don’t understand. We are here to help you.” Thanks, Microsoft Bing. Highly original art and diverse too.

Nope. I won’t reveal where or provide a link to it. I read it and formulated three notions in my dinobaby brain:

  1. The author is making darned certain that he/she/it will not be hired by the Google.
  2. The system and method described in the write up is little more than a variation on themes which thread through a number of Google patent documents. I demonstrated in my monograph Google Version 2.0: The Calculating Predator that clever methods work for profiling users and building comprehensive data sets about products.
  3. The idea of editorial curation is alive, just not particularly effective at the “begging for dollars” outfit doing business as Wired Magazine.

Those are my opinions, and I urge you to formulate your own.

I noted several interesting comments on Hacker News about this publish and disappear event. Let me highlight several. You can find the posts at this link, but keep in mind, these also can vaporize without warning. Isn’t being a sysadmin fun?

  1. judge2020: “It’s obvious that they design for you to click ads, but it was fairly rocky suggesting that the backend reaches out to the ad system. This wouldn’t just destroy results, but also run afoul of FCC Ad disclosure requirements….”
  2. techdragon: “I notice it seems like Google had gotten more and more willing to assume unrelated words/concepts are sufficiently interchangeable that it can happily return both in a search query for either … and I’ll be honest here… single behavior is the number one reason I’m on the edge of leaving google search forever…”
  3. TourcanLoucan: “Increasingly the Internet is not for us, it is certainly not by us, it is simply where you go when you are bored, the only remaining third place that people reliably have access to, and in true free market fashion, it is wall-to-wall exploitation.”

I want to point out that online services operate like droplets of mercury. They merge and one has a giant blob of potentially lethal mercury. Is Google a blob of mercury? The disappearing content is interesting as are the comments about the incident. But some kids play with mercury; others use it in industrial processes; and some consume it (willingly or unwillingly) like sailors of yore with a certain disease. They did not know. You know or could know.

Stephen E Arnold, October 10, 2023

    Reading. Who Needs It?

    September 19, 2023

    Book banning, aka media censorship, is an act as old as human intellect. As technology advances, so do the strategies and tools available to assist in book banning. Engadget shares the unfortunate story about how “An Iowa School District Is Using AI To Ban Books.” Mason City, Iowa’s school board is leveraging AI technology to generate lists of books to potentially ban from the district’s libraries in the 2023-24 school year.

    Governor Kim Reynolds signed Senate File 496 into law after it passed the Republican-controlled state legislature. Senate File 496 changes the state’s curriculum and it includes verbiage that addresses what books are allowed in schools. The books must be “age appropriate” and be without “descriptions or visual depictions of a sex act.”

    “Inappropriate” titles have snuck past censors for years and Iowa’s school board discovered it is not so easy to peruse every school’s book collection. That is where the school board turned to an AI algorithm to undertake the task:

    “As such, the Mason City School District is bringing in AI to parse suspect texts for banned ideas and descriptions since there are simply too many titles for human reviewers to cover on their own. Per the district, a “master list” is first cobbled together from “several sources” based on whether there were previous complaints of sexual content. Books from that list are then scanned by “AI software” which tells the state censors whether or not there actually is a depiction of sex in the book.”
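The process quoted above is a two-stage pipeline: a complaint-driven “master list,” then an automated scan of each listed title. A toy sketch of that shape, under the assumption that the unspecified “AI software” behaves like a simple content flagger (all names and the screening vocabulary here are hypothetical):

```python
# Toy sketch of the two-stage review pipeline described above:
# (1) build a "master list" from prior complaints of sexual content,
# (2) scan each listed title's text for flagged material.
# The keyword check stands in for the district's unspecified "AI
# software" and is purely illustrative.

FLAG_TERMS = {"sex act", "explicit"}   # hypothetical screening vocabulary

def build_master_list(complaints):
    """Titles with at least one prior sexual-content complaint."""
    return {c["title"] for c in complaints if c["category"] == "sexual content"}

def scan_text(text):
    """Return True if any flagged term appears in the text."""
    lowered = text.lower()
    return any(term in lowered for term in FLAG_TERMS)

def titles_to_review(complaints, library):
    """Stage 1 (master list) feeds stage 2 (text scan)."""
    master = build_master_list(complaints)
    return sorted(t for t in master if scan_text(library.get(t, "")))

complaints = [{"title": "Example Novel", "category": "sexual content"}]
library = {"Example Novel": "…contains a description of a sex act…"}
print(titles_to_review(complaints, library))
```

Note how the first stage limits what the second stage ever sees: a title with no complaint history is never scanned at all, which is exactly why the district’s output depends so heavily on which complaints arrive.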

    The AI algorithm has so far listed nineteen titles to potentially ban. These include long-banned veteran titles such as The Color Purple, I Know Why the Caged Bird Sings, and The Handmaid’s Tale, as well as “newer” titles: Gossip Girl, Feed, and A Court of Mist and Fury.

    While these titles may not be appropriate for elementary schools, are questionable for middle schools, and are arguably age-appropriate for high schools, book banning is not good. Parents, teachers, librarians, and other leaders must work together to determine what is best for students. Books also have age ratings on them, like videogames, movies, and TV shows. These titles are tame compared to what kids can access online and on TV.

    Whitney Grace, September 19, 2023

    Dust Up: Social Justice and STEM Publishing

    June 28, 2023

    Are you familiar with “social justice warriors”? These are people who take it upon themselves to police the world for their moral causes, usually from a self-righteous standpoint. Social justice warriors are also known by the acronym SJWs and can cross over into the infamous Karen zone. Unfortunately, Heterodox STEM reports SJWs have invaded the science community, and Anna Krylov and Jay Tanzman discussed the issue in their paper “Critical Social Justice Subverts Scientific Publishing.”

    SJWs advocate for the politicization of science, adding an ideology to scientific research known as critical social justice (CSJ). CSJ upends the true purpose of science, which is to help and advance humanity, by adding censorship, scholarship suppression, and social engineering.

    Krylov and Tanzman’s paper was presented at the Perils for Science in Democracies and Authoritarian Countries conference, and they argue CSJ harms scientific research more than helps it. They compare CSJ to Orwell’s fictional Ministry of Love, although real life examples such as Josef Goebbels’s Nazi Ministry of Propaganda, the USSR’s Department for Agitation and Propaganda, and China’s authoritarian regime work better. CSJ is the opposite of the Enlightenment, which liberated human psyches from religious and royal dogmas. The Enlightenment engendered critical thinking, the scientific process, philosophy, and discovery. The world became more tolerant, wealthier, better educated, and healthier as a result.

    CSJ creates censorship and paranoia akin to tyrannical regimes:

    “According to CSJ ideologues, the very language we use to communicate our findings is a minefield of offenses. Professional societies, universities, and publishing houses have produced volumes dedicated to “inclusive” language that contain long lists of proscribed words that purportedly can cause offense and—according to the DEI bureaucracy that promulgates these initiatives—perpetuate inequality and exclusion of some groups, disadvantage women, and promote patriarchy, racism, sexism, ableism, and other isms. The lists of forbidden terms include “master database,” “older software,” “motherboard,” “dummy variable,” “black and white thinking,” “strawman,” “picnic,” and “long time no see” (Krylov 2021: 5371, Krylov et al. 2022: 32, McWhorter 2022, Paul 2023, Packer 2023, Anonymous 2022). The Google Inclusive Language Guide even proscribes the term “smart phones” (Krauss 2022). The Inclusivity Style  Guide of the American Chemical Society (2023)—a major chemistry publisher of more than 100 titles—advises against using such terms as “double blind studies,” “healthy weight,” “sanity check,” “black market,” “the New World,” and “dark times”…”

    New meanings that cause offense are projected onto benign words and their use is taken out of context. At this rate, everything people say will be considered offensive, including the most uncontroversial topic: the weather.

    Science must be free not only from CSJ ideologies but also from corporate ideologies that promote profit margins. Examples from American history include Big Tobacco, sugar manufacturers, and Big Pharma.

    Whitney Grace, June 28, 2023

    Two Polemics about the Same Thing: Info Control

    June 12, 2023

    Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

    Polemics are fun. The term, as I use it, means:

    a speech or piece of writing expressing a strongly critical attack on or controversial opinion about someone or something.

    I took the definition from Google’s presentation of the “Oxford Languages.” I am not sure what that means, but since we are considering two polemics, the definition is close enough for horseshoes. Furthermore, polemics are not into facts, verifiable assertions, or hard data. I think of polemics as blog posts by individuals whom some might consider fanatics, apologists, crusaders, or zealots.

    Ah, you don’t agree? Tough noogies, gentle reader.

    The first document I read and fed into Browserling’s free word frequency tool was Marc Andreessen’s delightful “Why AI Will Save the World.” The document has a repetitive contents listing, which some readers may find useful. For me, the effort to stay on track added duplicate words.

    The second document I read and stuffed into the Browserling tool was the entertaining, and in my opinion fluffy, Areopagitica, made available by Dartmouth.

    The mechanics of the analysis were simple. I compared the frequency of words which I find indicative of a specific rhetorical intent. Mr. Andreessen is probably better known to modern readers than John Milton. Mr. Andreessen’s contribution to polemic literature is arguably more readable. There’s the clumsy organization impedimenta. There are shorter sentences. There are what I would describe as Silicon Valley words. Furthermore, based on Bing, Google, and Yandex searches for the text of the document, one can find Mr. Andreessen’s contribution to the canon in more places than John Milton’s lame effort. I want to point out that Mr. Milton’s polemic is longer than Mr. Andreessen’s by a couple of orders of magnitude. I did what most careless analysts would do: I took the full text of Mr. Andreessen’s screed and snagged the first 8,000 words of Mr. Milton’s writing, a work known to bring tears to the eyes of first-year college students asked to read the prose and write a 500-word analytic essay about Areopagitica. Good training, I believe, for a debate student, a future lawyer, or a person who wants to write for Reader’s Digest magazine.
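The counting itself is simple enough to reproduce offline without the Browserling web tool. A minimal sketch of the comparison (the two text snippets below are placeholder stand-ins, not the actual polemics):

```python
# Minimal reproduction of the word-frequency comparison described above:
# count how often a handful of chosen words appears in each polemic.
import re
from collections import Counter

def word_counts(text):
    """Lowercased word frequencies, punctuation stripped."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def compare(words, text_a, text_b):
    """Side-by-side counts of the selected words in each text."""
    a, b = word_counts(text_a), word_counts(text_b)
    return {w: (a[w], b[w]) for w in words}

# Placeholder snippets; in practice, load the full Andreessen text and
# the first 8,000 words of Milton.
andreessen = "AI should help every person. All of us would benefit."
milton = "All books should be free. Every licenser would object."
print(compare(["all", "every", "should", "would"], andreessen, milton))
```

Because `Counter` returns zero for missing keys, words absent from one text show up as a clean `0` rather than raising an error, which keeps the side-by-side table honest.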

    So what did I find?

    First, both Mr. Andreessen and Mr. Milton needed to speak out for their ideas. Mr. Andreessen is an advocate of smart software. Mr. Milton wanted a censorship free approach to publishing. Both assumed that “they” or people not on their wave length needed convincing about the importance of their ideas. It is safe to say that the audiences for these two polemics are not clued into the subject. Mr. Andreessen is speaking to those who are jazzed on smart software, neglecting to point out that smart software is pretty common in the online advertising sector. Mr. Milton assumed that censorship was a new threat, electing to ignore that religious authorities, educational institutions, and publishers were happily censoring information 24×7. But that’s the world of polemicists.

    Second, what about the words used by each author. Since this is written for my personal blog, I will boil down my findings to a handful of words.

    The table below presents 12 selected words and a count of each:

    [Table not reproduced in this copy.]
    Several observations:

    1. Messrs. Andreessen and Milton share an absolutist approach. The word “all” figures prominently in both polemics.
    2. Mr. Andreessen uses “every” words to make clear that AI is applicable to just about anything one cares to name. Logical? Hey, these are polemics. The logic is internal.
    3. Messrs. Andreessen and Milton share a fondness for adulting. Note the frequency of “should” and “would.”
    4. Mr. Andreessen has an interest in ethical and moral behavior. Mr. Milton writes around these notions.

    Net net: Polemics are designed as marketing collateral. Mr. Andreessen is marketing as is Mr. Milton. Which pitch is better? The answer depends on the criteria one uses to judge polemics. I give the nod to Mr. Milton. His polemic is longer, has freight train scale sentences, and is for a modern college freshman almost unreadable. Mr. Andreessen’s polemic is sportier. It’s about smart software, not censorship directly. However, both polemics boil down to who has his or her hands on the content levers.

    Stephen E Arnold, June 12, 2023

    Has the Interior Magic of Cyber Security Professionals Been Revealed?

    April 14, 2023

    Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

    The idea of “real” secrets is an interesting one. Like much of life today, “real” and “secret” depend on the individual. Observation changes reality; therefore, information is malleable too. I wonder if this sounds too post-Heisenberg for a blog post by a dinobaby? The answer is, “Yes.” However, I don’t care, particularly after reading “40% of IT Security Pros Say They’ve Been Told Not to Report a Data Leak.”

    The write up states:

    According to responses from large companies in the US, EU, and Britain, half of organizations have experienced a data leak in the past year with America faring the worst: three quarters of respondents from that side of the pond said they experienced an intrusion of some kind. To further complicate matters, 40 percent of IT infosec folk polled said they were told to not report security incidents, and that climbs to 70.7 percent in the US, far higher than any other country.

    After reading the article, I thought about the “interior character” of the individuals who cover up cyber security weaknesses. My initial reaction is that individuals are concerned about their own aura of “excellence”: money, the position each holds, the perception of others via a LinkedIn profile. The fact of the breach is secondary to these other, more important considerations. Upon reflection, the failure to talk about flaws may be a desire to prevent miscreants from exploiting what is a factual condition: lousy cyber security.

    What about those marketing assurances from cyber security companies? What about the government oversight groups who are riding herd on appropriate cyber security actions and activities?

    Perhaps the marketing is better than the policies, procedures, software, and people involved in protecting information and systems from bad actors?

    Stephen E Arnold, April 14, 2023
