Cloudflare, What Else Can You Block?

July 11, 2024

I spotted an interesting item in Silicon Angle. The article is “Cloudflare Rolls Out Feature for Blocking AI Companies’ Web Scrapers.” I think this is the main point:

Cloudflare Inc. today debuted a new no-code feature for preventing artificial intelligence developers from scraping website content. The capability is available as part of the company’s flagship CDN, or content delivery network. The platform is used by a sizable percentage of the world’s websites to speed up page loading times for users. According to Cloudflare, the new scraping prevention feature is available in both the free and paid tiers of its CDN.

Cloudflare is what I call an “enabler.” For example, when one tries to do some domain research, one often encounters Cloudflare, not the actual IP address of the service. This year I have been doing some talks for law enforcement and intelligence professionals about Telegram and its Messenger service. Guess what? Telegram is a Cloudflare customer. My team and I have encountered other interesting services which use Cloudflare the way Natty Bumppo’s sidekick used branches to obscure footprints in the forest.

Cloudflare has other capabilities too; for instance, the write up reports:

Cloudflare assigns every website visit that its platform processes a score of 1 to 99. The lower the number, the greater the likelihood that the request was generated by a bot. According to the company, requests made by the bot that collects content for Perplexity AI consistently receive a score under 30.
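The threshold logic the article describes can be sketched in a few lines. This is a hypothetical illustration, not Cloudflare’s actual API; the function name and the default cutoff of 30 are assumptions drawn from the quoted passage.

```python
# Hypothetical sketch of threshold-based bot filtering, following the
# article's description of Cloudflare's 1-99 scoring. Lower scores mean
# a higher likelihood the request came from a bot. Names and the
# default cutoff are illustrative assumptions, not Cloudflare's API.

def classify_request(bot_score: int, block_below: int = 30) -> str:
    """Return "block" for likely-bot scores, "allow" otherwise."""
    if not 1 <= bot_score <= 99:
        raise ValueError("bot score must be between 1 and 99")
    return "block" if bot_score < block_below else "allow"
```

Under this sketch, a Perplexity-style scraper scoring under 30 would be blocked, while a typical human visitor scoring higher would pass through.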

I wonder what less salubrious Web site operators score. Yes, there are some pretty dodgy outfits that may be arguably worse than an AI outfit.

The information in this Silicon Angle write up raises a question, “What other content blocking and gatekeeping services can Cloudflare provide?”

Stephen E Arnold, July 11, 2024

Google Takes Stand — Against Questionable Content. Will AI Get It Right?

May 24, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

The Internet is the ultimate distribution system for illicit material, especially pornography. A simple Google search yields access to billions of lewd items, both free and behind paywalls. Pornography already has people in a tizzy, but the advent of deepfake porn is making things worse. Google is upset about deepfakes and has decided to take a moral stand. ExtremeTech says: “Google Bans Ads For Platforms That Generate Deepfake Pornography.”

Beginning May 30, Google won’t allow platforms that create deepfake porn, explain how to make it, or promote/compare services to place ads through the Google Ads system. Google already has an Inappropriate Content Policy in place. It prohibits the promotion of hate groups, self-harm, violence, conspiracy theories, and sharing explicit images to garner attention. The policy also bans advertising sex work and sexual abuse.

Violating the content policy results in a ban from Google Ads. Google is preparing for future problems as AI becomes better:

“The addition of deepfake pornography to the Inappropriate Content Policy is undoubtedly the result of increasingly accessible and adept generative AI. In 2022, Google banned deepfake training on Colab, its mostly free public computing resource. Even six years ago, Pornhub and Reddit had to go out of their way to ban AI-generated pornography, which often depicts real people (especially celebrities) engaging in sexual acts they didn’t perform or didn’t consent to recording. Whether we’d like to or not, most of us know just how much better AI has gotten at creating fake faces since then. If deepfake pornography looked a bit janky back in 2018, it’s bound to look a heck of a lot more realistic now.”

If it weren’t for the moral center of humanity, Google’s minions would allow lewd material and other illicit content on Google Ads. Porn sells. It always has.

Whitney Grace, May 24, 2024

The National Public Radio Entity Emulates Grandma

April 17, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I can hear my grandmother telling my cousin Larry: “Chew your food. Or… no television for you tonight.” The time was 6:30 pm. The date was March 3, 1956. My cousin and I were being “watched” while our parents were at a political rally and banquet. Grandmother was in charge, and my cousin was edging close to being sent to grandfather for a whack with his wooden paddle. Tough love, I suppose. I was a good boy. I chewed my food and worked to avoid the Wrath of Ma. I did the time travel thing when I read “NPR Suspends Veteran Editor As It Grapples with His Public Criticism.” I avoid begging-for-dollars outfits. I had no idea what the issue is or was.


“Gea’t haspoy” which means in grandmother speak: “That’s it. No TV for you tonight. In the morning, both of you are going to help Grandpa mow the yard and rake up the grass.” Thanks, NPR. Oh, sorry, thanks MSFT Copilot. You do the censorship thing too, don’t you?

The write up explains:

NPR has formally punished Uri Berliner, the senior editor who publicly argued a week ago that the network had “lost America’s trust” by approaching news stories with a rigidly progressive mindset.

Oh, I get it. NPR allegedly shapes stories. A “real” journalist does not go along with the program. The progressive-leaning outfit ignores the free speech angle. The “real” journalist is punished with five days in a virtual hoosegow. An NPR “real” journalist published an essay critical of NPR and then vented on a podcast.

The article I have cited is an NPR article. I guess self-criticism is a progressive trait, maybe? Anyway, the article about the grandma action stated:

In rebuking Berliner, NPR said he had also publicly released proprietary information about audience demographics, which it considers confidential. He said those figures “were essentially marketing material. If they had been really good, they probably would have distributed them and sent them out to the world.”

There is no hint that this “real” journalist shares beliefs believed to be held by Julian Assange or that bold soul Edward Snowden, both of whom have danced with super interesting information.

Several observations:

  1. NPR’s suspending an employee reminds me of my grandmother punishing us for not following her wacky rules
  2. NPR is definitely implementing a type of information shaping; if it were not, what’s the big deal about a grousing employee? How many of these does Google have protesting in a year?
  3. Banning a person who is expressing an opinion strikes me as a tasty blend of X.com and that master motivator Joe Stalin. But that’s just my dinobaby mind having a walk-about.

Net net: What media are not censoring, muddled, and into acting like grandma?

Stephen E Arnold, April 17, 2024

Google Mandates YouTube AI Content Be Labeled: Accurately? Hmmmm

April 2, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The rules for proper use of AI-generated content are still up in the air, but big tech companies are already being pressured to introduce regulations. Neowin reported that “Google Is Requiring YouTube Creators To Post Labels For Realistic AI-Created Content” on videos. This is a smart idea in the age of misinformation, especially when technology can realistically create images and sounds.

Google first announced the new requirement for realistic AI content in November 2023. YouTube’s Creator Studio now has a tool to label AI content. The new tool is called “Altered content” and asks creators yes-or-no questions. Its simplicity is similar to YouTube’s question about whether a video is intended for children. The “Altered content” label applies to the following:

• “Makes a real person appear to say or do something they didn’t say or do

• Alters footage of a real event or place

• Generates a realistic-looking scene that didn’t actually occur”

The article goes on to say:

“The blog post states that YouTube creators don’t have to label content made by generative AI tools that do not look realistic. One example was “someone riding a unicorn through a fantastical world.” The same applies to the use of AI tools that simply make color or lighting changes to videos, along with effects like background blur and beauty video filters.”

Google says it will have enforcement measures if creators consistently fail to label their realistic AI videos, but the consequences are not specified. YouTube also reserves the right to place labels on videos itself. There will be a reporting system viewers can use to notify YouTube of unlabeled videos. It’s not surprising that Google’s algorithms can’t reliably distinguish realistic AI videos from authentic footage. Perhaps the algorithms are outsmarting their creators.

Whitney Grace, April 2, 2024

Alternative Channels, Superstar Writers, and Content Filtering

February 7, 2024

This essay is the work of a dumb dinobaby. No smart software required.

In this post-Twitter world, a duel of influencers is playing out in the blogosphere. At issue: Substack’s alleged Nazi problem. The kerfuffle began with a piece in The Atlantic by Jonathan M. Katz, but has evolved into a debate between Platformer’s Casey Newton and Jesse Singal of Singal-Minded. Both those blogs are hosted by Substack.

To get up to speed on the controversy, see the original Atlantic article. Newton wrote a couple of posts about Substack’s responses, detailing Platformer’s involvement. In “Substack Says It Will Remove Nazi Publications from the Platform,” he writes:

“Substack is removing some publications that express support for Nazis, the company said today. The company said this did not represent a reversal of its previous stance, but rather the result of reconsidering how it interprets its existing policies. As part of the move, the company is also terminating the accounts of several publications that endorse Nazi ideology and that Platformer flagged to the company for review last week.”

How many publications did Platformer flag, and how many of those did Substack remove? Were they significant publications, and did they really violate the rules? These are the burning questions Singal sought to answer. He shares his account in, “Platformer’s Reporting on Substack’s Supposed ‘Nazi Problem’ Is Shoddy and Misleading.” But first, he specifies his own perspective on Katz’ Atlantic article:

“In my view, this whole thing is little more than a moral panic. Moreover, Katz cut certain corners to obscure the fact that to the extent there are Nazis on Substack at all, it appears they have almost no following or influence, and make almost no money. In one case, for example, Katz falsely claimed that a white nationalist was making a comfortable living writing on Substack, but even the most cursory bit of research would have revealed that that is completely false.”

Singal says he plans a detailed article supporting that assertion, but first he must pick apart Platformer’s position. Readers are treated to details from an email exchange between the bloggers and reasons Singal feels Newton’s responses are inadequate. One can navigate to that post for those details if one wants to get into the weeds. As of this writing, Newton has not published a response to Singal’s diatribe. Were we better off when such duels took place 280 characters at a time?

One positive about newspapers: An established editorial process kept superstars grounded in reality. Now entitlement, more than content, seems to be in the driver’s seat.

Cynthia Murrell, February 7, 2024

Harvard University: Does Money Influence Academic Research?

December 5, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Harvard University has been on my radar since the ethics misstep. In case your memory is fuzzy, Francesca Gino, a big thinker about ethics and taking shortcuts, was accused of data fraud. The story did not attract much attention in rural Kentucky. Ethics and dishonesty? Come on. Harvard has to do some serious training to catch up with a certain university in Louisville. For a reasonable explanation of the allegations (because, of course, one will never know), navigate to “Harvard Professor Who Studies Dishonesty Is Accused of Falsifying Data” and dig in.


Thanks, MSFT Copilot, you have nailed the depressive void that comes about when philosophers learn that ethics suck.

Why am I thinking about Harvard and ethics? The answer is that I read “Harvard Gutted Initial Team Examining Facebook Files Following $500 Million Donation from Chan Zuckerberg Initiative, Whistleblower Aid Client Reveals.” I have no idea if the write up is spot on, weaponized information, or the work of someone who did not get into one of the university’s numerous money generating certification programs.

The write up asserts:

Harvard University dismantled its prestigious team of online disinformation experts after a foundation run by Facebook’s Mark Zuckerberg and his wife Priscilla Chan donated $500 million to the university, a whistleblower disclosure filed by Whistleblower Aid reveals. Dr. Joan Donovan, one of the world’s leading experts on social media disinformation, says she ran into a wall of institutional resistance and eventual termination after she and her team at Harvard’s Technology and Social Change Research Project (TASC) began analyzing thousands of documents exposing Facebook’s knowledge of how the platform has caused significant public harm.

Let’s assume that the allegation is horse feathers, not to be confused with Intel’s fabulous Horse Ridge. Harvard still has to do some fancy dancing with regard to the ethics professor and expert in dishonesty who is alleged to have violated the esteemed university’s ethics guidelines and was dishonest.

If we assume that the information in Dr. Donovan’s whistleblower declaration is close enough for horse shoes, something equine can be sniffed in the atmosphere of Dr. William James’s beloved institution.

What could Facebook or the Metazuck do which would cause significant public harm? The options include providing tools to disseminate information which sparks body shaming, self-harm, and angst among young users. Are old timers possibly affected? I suppose buying interesting merchandise on Facebook Marketplace and experiencing psychological problems as a result of defriending are possibilities too.

If the allegations are proven to be accurate, what are the consequences for the two esteemed organizations? My hunch is zero. Money talks; prestige walks away to put ethics on display for another day.

Stephen E Arnold, December 5, 2023

The Google Magic Editor: Mom Knows Best and Will Ground You, You Goof Off

November 13, 2023

This essay is the work of a dumb humanoid. No smart software required.

What’s better at enforcing rules? The US government with its Declaration of Independence, Constitution, and regulatory authority, or Mother Google? If you think the US government’s legal inquiry into Google’s alleged fancy dancing with mere users is opaque, you are correct. The US government needs the Google more than Google Land needs the world’s governments. Who’s in charge of Google? The real authority is Mother Google, a ghost-like iron maiden creating and enforcing, with smart software, many rules and regulations. Think of Mother Google operating from a digital Star Chamber. Banned from YouTube? Mother Google did it. Lost Web site traffic overnight? Mother Google did it. Lost control of your user data? Mother Google did not do that, of course.


A stern mother says, “You cannot make deep fakes involving your gym teacher and your fifth grade teacher. Do you hear me?” Thanks, Microsoft Bing. Great art.

The author of “Google Photos’ Magic Editor Will Refuse to Make These Edits” explains the limits. The write up states:

Code within the latest version of Google Photos includes specific error messages that highlight the edits that Magic Editor will refuse to do. Magic Editor will refuse to edit photos of ID cards, receipts, images with personally identifiable information, human faces, and body parts. Magic Editor already avoids many of these edits but without specific error messages, leaving users guessing on what is allowed and what is not.
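The behavior described, a per-category refusal with its own error message, can be sketched as a simple lookup. This is a hypothetical illustration; the category names and message strings are assumptions, not Google’s actual code or error text.

```python
# Minimal sketch of category-specific edit refusals, as the write up
# describes for Magic Editor. Categories and messages here are invented
# for illustration; Google's actual strings are not public.

REFUSAL_MESSAGES = {
    "id_card": "Editing images of ID cards is not supported.",
    "receipt": "Editing images of receipts is not supported.",
    "face": "Editing human faces is not supported.",
}

def attempt_edit(detected_categories):
    """Return the first applicable refusal message, or None if the
    edit may proceed."""
    for category in detected_categories:
        if category in REFUSAL_MESSAGES:
            return REFUSAL_MESSAGES[category]
    return None
```

The point of the article’s reported change is exactly this mapping: a specific message per forbidden category, instead of a silent failure that leaves users guessing.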

What’s interesting is that users have to discover what is forbidden by experimenting. My reaction to this assertion is that Google does not want to get in trouble when a crafty teen cranks out fake IDs in order to visit some of the more interesting establishments in town.

I have a nagging suspicion or two I would like to share:

  1. The log files identifying which user tried to create what with which prompt would be interesting to review
  2. The list of don’ts is not published because it is adjusted to meet Google’s needs, not the users’
  3. Google wants to be able to say, “See, we are trying to keep the Internet safe, pure, and tidy.”

Net net: What happens when smart software enforces more potent and more subtle controls over the framing and presenting of information? Right, mom?

Stephen E Arnold, November 13, 2023

The Google: Dribs and Drabs of Information Suggest a Frisky Outfit

October 10, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I have been catching up since I returned from a law enforcement conference. One of the items in my “read” file concerned Google’s alleged demonstrations of the firm’s cleverness. Clever is often valued more than intelligence in some organizations, in my experience. I picked up on an item describing the system and method for tweaking a Google query to enhance the results with some special content.

“How Google Alters Search Queries to Get at Your Wallet” appeared on October 2, 2023. By October 6, 2023, the article was disappeared. I want to point out, for you open source intelligence professionals, that the original article remains online.


Two serious and bright knowledge workers look confused when asked about alleged cleverness. One says, “I don’t understand. We are here to help you.” Thanks, Microsoft Bing. Highly original art and diverse too.

Nope. I won’t reveal where or provide a link to it. I read it and formulated three notions in my dinobaby brain:

  1. The author is making darned certain that he/she/it will not be hired by the Google.
  2. The system and method described in the write up is little more than a variation on themes which thread through a number of Google patent documents. I demonstrated in my monograph Google Version 2.0: The Calculating Predator that clever methods work for profiling users and building comprehensive data sets about products.
  3. The idea of editorial curation is alive, just not particularly effective at the “begging for dollars” outfit doing business as Wired Magazine.

Those are my opinions, and I urge you to formulate your own.

I noted several interesting comments on Hacker News about this publish and disappear event. Let me highlight several. You can find the posts at this link, but keep in mind, these also can vaporize without warning. Isn’t being a sysadmin fun?

  1. judge2020: “It’s obvious that they design for you to click ads, but it was fairly rocky suggesting that the backend reaches out to the ad system. This wouldn’t just destroy results, but also run afoul of FCC Ad disclosure requirements….”
  2. techdragon: “I notice it seems like Google had gotten more and more willing to assume unrelated words/concepts are sufficiently interchangeable that it can happily return both in a search query for either … and I’ll be honest here… single behavior is the number one reason I’m on the edge of leaving google search forever…”
  3. TourcanLoucan: “Increasingly the Internet is not for us, it is certainly not by us, it is simply where you go when you are bored, the only remaining third place that people reliably have access to, and in true free market fashion, it is wall-to-wall exploitation.”

I want to point out that online services operate like droplets of mercury. They merge and one has a giant blob of potentially lethal mercury. Is Google a blob of mercury? The disappearing content is interesting as are the comments about the incident. But some kids play with mercury; others use it in industrial processes; and some consume it (willingly or unwillingly) like sailors of yore with a certain disease. They did not know. You know or could know.

Stephen E Arnold, October 10, 2023

Reading. Who Needs It?

September 19, 2023

Book banning, aka media censorship, is an act as old as human intellect. As technology advances, so do the strategies and tools available to assist in book banning. Engadget shares the unfortunate story about how “An Iowa School District Is Using AI To Ban Books.” Mason City, Iowa’s school board is leveraging AI technology to generate lists of books to potentially ban from the district’s libraries in the 2023-24 school year.

Governor Kim Reynolds signed Senate File 496 into law after it passed the Republican-controlled state legislature. Senate File 496 changes the state’s curriculum, and it includes verbiage that addresses what books are allowed in schools. The books must be “age appropriate” and be without “descriptions or visual depictions of a sex act.”

“Inappropriate” titles have snuck past censors for years, and Iowa’s school board discovered it is not so easy to peruse every school’s book collection. That is where the school board turned to an AI algorithm to undertake the task:

“As such, the Mason City School District is bringing in AI to parse suspect texts for banned ideas and descriptions since there are simply too many titles for human reviewers to cover on their own. Per the district, a “master list” is first cobbled together from “several sources” based on whether there were previous complaints of sexual content. Books from that list are then scanned by “AI software” which tells the state censors whether or not there actually is a depiction of sex in the book.”
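The two-stage review quoted above can be sketched as a short pipeline: first narrow the catalog to previously complained-about titles, then scan only those texts. This is an illustrative stand-in; the data shapes and the crude keyword check are assumptions, since the district has not described its actual “AI software.”

```python
# Illustrative sketch of the district's two-stage review: build a
# "master list" from prior complaints, then scan only those titles.
# The keyword check below is a crude stand-in for whatever "AI
# software" the district actually used; all names are assumptions.

def build_master_list(catalog, complaints):
    """Stage 1: keep only titles with at least one prior complaint."""
    complained_titles = {c["title"] for c in complaints}
    return [book for book in catalog if book["title"] in complained_titles]

def scan_for_flags(book, flagged_terms):
    """Stage 2: True if any flagged term appears in the book's text."""
    text = book["text"].lower()
    return any(term.lower() in text for term in flagged_terms)
```

Note what the design implies: a title never complained about is never scanned, and a scan this shallow cannot weigh context, which is precisely the worry critics raise about automating such reviews.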

The AI algorithm has so far listed nineteen titles to potentially ban. These include frequently banned classics such as The Color Purple, I Know Why the Caged Bird Sings, and The Handmaid’s Tale, as well as newer titles: Gossip Girl, Feed, and A Court of Mist and Fury.

While these titles are not appropriate for elementary schools, are questionable for middle schools, and are arguably age-appropriate for high schools, book banning is not good. Parents, teachers, librarians, and other leaders must work together to determine what is best for students. Books also have age ratings on them, like videogames, movies, and TV shows. These titles are tame compared to what kids can access online and on TV.

Whitney Grace, September 19, 2023

Dust Up: Social Justice and STEM Publishing

June 28, 2023

Are you familiar with “social justice warriors”? These are people who take it upon themselves to police the world for their moral causes, usually from a self-righteous standpoint. Social justice warriors are also known by the acronym SJWs and can cross over into the infamous Karen zone. Unfortunately, Heterodox STEM reports SJWs have invaded the science community, and Anna Krylov and Jay Tanzman discussed the issue in their paper: “Critical Social Justice Subverts Scientific Publishing.”

SJWs advocate for the politicization of science, adding an ideology to scientific research known as critical social justice (CSJ). It upends the true purpose of science, which is to help and advance humanity. CSJ adds censorship, scholarship suppression, and social engineering to science.

Krylov and Tanzman’s paper was presented at the Perils for Science in Democracies and Authoritarian Countries conference, and they argue CSJ harms scientific research more than it helps it. They compare CSJ to Orwell’s fictional Ministry of Love, although real-life examples such as Josef Goebbels’s Nazi Ministry of Propaganda, the USSR’s Department for Agitation and Propaganda, and China’s authoritarian regime work better. CSJ is the opposite of the Enlightenment, which liberated human psyches from religious and royal dogmas. The Enlightenment engendered critical thinking, the scientific process, philosophy, and discovery. The world became more tolerant, wealthier, better educated, and healthier as a result.

CSJ creates censorship and paranoia akin to tyrannical regimes:

“According to CSJ ideologues, the very language we use to communicate our findings is a minefield of offenses. Professional societies, universities, and publishing houses have produced volumes dedicated to “inclusive” language that contain long lists of proscribed words that purportedly can cause offense and—according to the DEI bureaucracy that promulgates these initiatives—perpetuate inequality and exclusion of some groups, disadvantage women, and promote patriarchy, racism, sexism, ableism, and other isms. The lists of forbidden terms include “master database,” “older software,” “motherboard,” “dummy variable,” “black and white thinking,” “strawman,” “picnic,” and “long time no see” (Krylov 2021: 5371, Krylov et al. 2022: 32, McWhorter 2022, Paul 2023, Packer 2023, Anonymous 2022). The Google Inclusive Language Guide even proscribes the term “smart phones” (Krauss 2022). The Inclusivity Style Guide of the American Chemical Society (2023)—a major chemistry publisher of more than 100 titles—advises against using such terms as “double blind studies,” “healthy weight,” “sanity check,” “black market,” “the New World,” and “dark times”…”

New meanings that cause offense are projected onto benign words and their use is taken out of context. At this rate, everything people say will be considered offensive, including the most uncontroversial topic: the weather.

Science must be free from CSJ ideologies, but also from corporate ideologies that promote profit margins. Examples from American history include Big Tobacco, sugar manufacturers, and Big Pharma.

Whitney Grace, June 28, 2023
