China Seeks to Curb Algorithmic Influence and Manipulation

December 5, 2024

Someone is finally taking decisive action against unhealthy recommendation algorithms, AI-driven price optimization, and exploitative gig-work systems. That someone is China. “China Sets Deadline for Big Tech to Clear Algorithm Issues, Close ‘Echo Chambers’,” reports the South China Morning Post. Ah, the efficiency of a repressive regime. Writer Hayley Wong informs us:

“Tech operators in China have been given a deadline to rectify issues with recommendation algorithms, as authorities move to revise cybersecurity regulations in place since 2021. A three-month campaign to address ‘typical issues with algorithms’ on online platforms was launched on Sunday, according to a notice from the Communist Party’s commission for cyberspace affairs, the Ministry of Industry and Information Technology, and other relevant departments. The campaign, which will last until February 14, marks the latest effort to curb the influence of Big Tech companies in shaping online views and opinions through algorithms – the technology behind the recommendation functions of most apps and websites. System providers should avoid recommendation algorithms that create ‘echo chambers’ and induce addiction, allow manipulation of trending items, or exploit gig workers’ rights, the notice said.

They should also crack down on unfair pricing and discounts targeting different demographics, ensure ‘healthy content’ for elderly and children, and impose a robust ‘algorithm review mechanism and data security management system’.”

Tech firms operating within China are also ordered to conduct internal investigations and improve algorithms’ security capabilities by the end of the year. What happens if firms fail? Reeducation? A visit to the death van? Or an opportunity to herd sheep in a really nice area near Xi’an? The brief write-up does not specify.

We think there may be a footnote to the new policy; for instance, “Use algos to advance our policies.”

Cynthia Murrell, December 5, 2024

FOGINT: Kenya Throttles Telegram to Protect KCSE Exam Integrity

November 20, 2024

Secondary school students in Kenya need to do well on their all-encompassing final exam if they hope to go to college. Several Telegram services have emerged to assist students through this crucial juncture—by helping them cheat on the test. Authorities caught on to the practice and have restricted Telegram usage during this year’s November exams. As a result, reports Kenyans.co.ke, “NetBlocks Confirms Rising User Frustrations with Telegram Slowdown in Kenya.” Since Telegram is Kenya’s fifth most downloaded social-media platform, that is a lot of unhappy users. Writer Rene Otinga tells us:

“According to an internet observatory, NetBlocks, Telegram was restricted in Kenya with their data showing the app as being down across various internet providers. Users across the country have reported receiving several error messages while trying to interact with the app, including a ‘Connecting’ error when trying to access the Telegram desktop. However, a letter shared online from the Communications Authority of Kenya (CAK) also confirmed the temporary suspension of Telegram services to quell the perpetuation of criminal activities.”

Apparently, the restriction worked. We learn:

“On Friday, Education Principal Secretary Belio Kipsang said only 11 incidents of attempted sneaking of mobile phones were reported across the country. While monitoring examinations in Kiambu County, the PS said this was the fewest number of cheating cases the ministry had experienced in recent times.”

That is good news for honest students in Kenya. But for Telegram, this may be just the beginning of its regulatory challenges. Otinga notes:

“Governments are wary of the app, which they suspect is being used to spread disinformation, spread extremism, and in Kenya, promote examination cheating. European countries are particularly critical of the app, with the likes of Belarus, Russia, Ukraine, Germany, Norway, and Spain restricting or banning the messaging app altogether.”

Encryption can hide a multitude of sins. But when regulators are paying attention, it might not be enough to keep one out of hot water.

Cynthia Murrell, November 20, 2024

But What about the Flip Side of Smart Software Swaying Opinion

September 20, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The Silicon Valley “fight of the century” might be back on. I think I heard, “Let’s scrap” buzzing in the background when I read “Musk Has Turned Twitter into Hyper-Partisan Hobby Horse, Says Nick Clegg.” Here in Harrod’s Creek, Kentucky, them is fightin’ words. When they are delivered by a British luminary educated at Westminster School before going on to study at the University of Cambridge, the pronouncement is particularly grating on certain sensitive technology super heroes.


The Silicon Valley Scrap is ramping up. On one digital horse is the Zuck. On the other steed is Musk. When the two titans collide, who will emerge as the victor? How about the PR and marketing professionals working for each of the possible chevaliers? Thanks, MSFT Copilot. Good enough.

The write up in the Telegraph (a British newspaper which uses a paywall to discourage the riff raff from reading its objective “real news” stories) reports:

Sir Nick, who is now head of global affairs for Facebook-owner Meta, said Mr Musk’s platform, which is now known as X, was used by a tiny group of elite people to “yell at each other” about politics. By contrast, Facebook and Instagram had deprioritized news and politics because people did not want to read it, he said.

Of course, Cambridge University graduates who have studied at the home of the Golden Gophers and the (where is it again?) College of Europe would not “yell.” How plebeian! How nouveau riche! My, my, how déclassé.

The Telegraph reports without a hint of sarcasm:

Meta launched a rival service last year called Threads, but has said it will promote subjects such as sports above news and politics in feeds. Sir Nick, who will next week face a Senate committee about tech companies’ role in elections, said that social media has very little impact on voters’ choices. “People tend to somewhat exaggerate the role that technology plays in how people vote and political behavior,” he said.

As a graduate of a loser school, I wish to humbly direct Sir Nick’s attention to “AI Chatbots Might Be Better at Swaying Conspiracy Theorists Than Humans.” The main idea of the write up of a research project is that:

Experiments in which an AI chatbot engaged in conversations with people who believed at least one conspiracy theory showed that the interaction significantly reduced the strength of those beliefs, even two months later. The secret to its success: the chatbot, with its access to vast amounts of information across an enormous range of topics, could precisely tailor its counterarguments to each individual.

Keeping in mind that I am not the type of person the College of Europe or Golden Gopher U. wants on its campus, I would ask, “Wouldn’t smart software work to increase the power of bad actors or company owners who use AI chatbots to promote opinions favored by the high-technology companies?” If so, Mr. Clegg’s description of X.com as a hobby horse would apply to Sir Nick’s boss, Mark Zuckerberg (aka the Zuck). Surely social media and smart software are able to slice, dice, chop, and cut in multiple directions. Wouldn’t a filter tweaked a certain way provide a powerful tool to define “reality” and cause some users to ramp up their interest in a topic? Could these platforms with a digital finger on the filter controls make some people roll over, pat their tummies, and believe something that the high-technology “leadership” wants?

Which of these outstanding, ethical high-technology social media platforms will win a dust up in Silicon Valley? How much will Ticketmaster charge for a ring-side seat? What other pronouncements will the court jesters for these two highly-regarded companies say?

Stephen E Arnold, September 20, 2024

Why Is the Telegram Übermensch Rolling Over Like a Good Dog?

September 10, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I have been following the story of Pavel Durov’s detainment in France, his hiring of a lawyer with an office on Saint-Germain-des-Prés, and his sudden cooperativeness. I want to offer some observations on this about-face. To begin, let me quote from his public statement at t.me/durov/342:

… we [Pavel and Nikolai] hear voices saying that it’s not enough. Telegram’s abrupt increase in user count to 950M caused growing pains that made it easier for criminals to abuse our platform. That’s why I made it my personal goal to ensure we significantly improve things in this regard. We’ve already started that process internally, and I will share more details on our progress with you very soon.


The Telegram French bulldog flexes his muscles at a meeting with French government officials. Thanks, Microsoft. Good enough, like Recall, I think.

First, the key item of information is the statement “user count to 950M” [million] users. Telegram’s architecture makes it possible for the company to offer a range of advertising services to those with the Telegram “super app” installed. With the financial success of advertising revenue evidenced by the financial reports from Amazon, Facebook, and Google, the brothers Durov, some long-time colleagues, and a handful of alternative currency professionals do not want to leave money on the table. Ideals are one thing; huge piles of cash are quite another.

Second, Telegram’s leadership demonstrated Cirque du Soleil-grade flexibility when doing a flip-flop on censorship. Regardless of the reason, Mr. Durov chatted up a US news personality. In an interview with a former Murdoch luminary, Mr. Durov complained about the US and sang the praises of free speech. Less than two weeks later, Telegram blocked Ukrainian Telegram messages to Russians in Russia about Mr. Putin’s historical “special operation.” After 11 years of pumping free speech, Telegram changed direction. Why? One can speculate, but the free speech era, at least for Ukraine-to-Russia Messenger traffic, ended.

Third, Mr. Durov’s digital empire extends far beyond messaging (whether basic or the incredibly misunderstood “secret” function). As I write this, Mr. Durov’s colleagues, who work at arm’s length from Telegram, have rolled out a 2024 version of VKontakte or VK called TONsocial. The idea is to extend the ecosystem of The Open Network and its TON alternative currency. (Some might use the word crypto, but I will stick with “alternative.”) Even though these entities and their staff operate at arm’s length, TON is integrated into the Telegram super app. Furthermore, clever alternative currency games are attracting millions of users. The TON alternative currency is complemented by Telegram STAR, another alternative currency available within the super app. In the last month, one of these “games” — technically a dApp or distributed application — has amassed over 35 million users and generates revenue with videos on YouTube. The TON Foundation — operating at arm’s length from Telegram — has set up a marketing program, a developer outreach program with hard currency incentives for certain types of work, and YouTube videos which promote Telegram-based distributed applications, the alternative currency, and the benefits of the TON ecosystem.

So what’s causing Mr. Durov to shift from the snarling Sulimov to goofy French bulldog? Telegram wants to pull off an IPO, or initial public offering. In order to do that after the US Securities & Exchange Commission shut down his first TON alternative currency play, the brothers Durov and their colleagues cooked up a much less problematic approach to monetizing the Telegram ecosystem. An IPO would produce money and fame. An IPO could legitimize a system which some have hypothesized retains strong technical and financial ties to some Russian interests.

The conversion from free speech protector with fangs and money to scratch-my-ears French bulldog may be little more than a desire for wealth and fame… maybe power or an IPO. Mr. Durov has an alleged 100 or more children. That’s a lot of college tuition to pay, I imagine. Therefore, I am not surprised: Mr. Durov will:

  • Cooperate with the French
  • Be more careful with his travel operational security in the future
  • Be the individual who can, should he choose, access the metadata and the messages of every one of the 950 million Telegram users (with so darned few in the EU to boot)
  • Sell advertising
  • Cook up a new version of VKontakte
  • Be a popular person among certain influential government professionals in other countries.

But as long as he is rich, he will be okay. He watches what he eats, he exercises, and he allegedly has good cosmetic surgeons at his disposal. He is flexible, obviously. I can hear the French bulldog emitting dulcet sounds now as it sticks out its chest and perks its ears.

Stephen E Arnold, September 10, 2024

When Egos Collide in Brazil

September 10, 2024

Why the Supreme Federal Court of Brazil has Suspended X

It all started when Brazilian Supreme Court judge Alexandre de Moraes issued a court order requiring X to block certain accounts for spewing misinformation and hate speech. Notably, these accounts belonged to right-wing supporters of former Brazilian President Jair Bolsonaro. After taking his ball and going home, Musk responded with some misinformation and hate speech of his own. He published some insulting AI-generated images of de Moraes, because apparently that is a thing he does now. He has also blatantly refused to pay the fines and appoint the legal representative required by the court. Musk’s tantrums would be laughable if his colossal immaturity were not matched by his dangerous wealth and influence.

But De Moraes seems to be up for the fight. The judge has now added Musk to an ongoing investigation into the spread of fake news and has launched a separate probe into the mogul for obstruction of justice and incitement to crime. We turn to Brazil’s Globo for de Moraes’ perspective in the article, “Por Unanimidade, 1a Turma do STF Mantém X Suspenso No Brasil.” Or in English, “Unanimously, 1st Court of the Supreme Federal Court Maintains X Suspension in Brazil.” Reporter Márcio Falcão writes (in Google Translate’s interpretation):

“Moraes also affirmed that Elon Musk confuses freedom of expression with a nonexistent freedom of aggression and deliberately confuses censorship with the constitutional prohibition of hate speech and incitement to antidemocratic acts. The minister said that ‘the criminal instrumentalization of various social networks, especially network X, is also being investigated in other countries.’ He quoted an excerpt from the opinion of Attorney General Paulo Gonet, who agreed with the decision to suspend X. Alexandre de Moraes also affirmed that there have been ‘repeated, conscious, and voluntary failures to comply with judicial orders and non-implementation of daily fines applied, in addition to attempts not to submit to the Brazilian legal system and Judiciary,’ instituting ‘an environment of total impunity and “terra sem lei” [“lawless land”] in Brazilian social networks, including during the 2024 municipal elections.’”

“A nonexistent freedom of aggression” is a particularly good burn. Chef’s kiss. The article also shares viewpoints from the four other judges who joined de Moraes to suspend X. The court also voted to impose huge fines for any Brazilians who continue to access the platform through a VPN, though The Federal Council of Advocates of Brazil asked de Moraes to reconsider that measure. (Here’s Google’s translation of that piece.) What will be next in this dramatic standoff? And what precedent(s) will be set?

Cynthia Murrell, September 10, 2024

Cloudflare, What Else Can You Block?

July 11, 2024

I spotted an interesting item in Silicon Angle. The article is “Cloudflare Rolls Out Feature for Blocking AI Companies’ Web Scrapers.” I think this is the main point:

Cloudflare Inc. today debuted a new no-code feature for preventing artificial intelligence developers from scraping website content. The capability is available as part of the company’s flagship CDN, or content delivery network. The platform is used by a sizable percentage of the world’s websites to speed up page loading times for users. According to Cloudflare, the new scraping prevention feature is available in both the free and paid tiers of its CDN.

Cloudflare is what I call an “enabler.” For example, when one tries to do some domain research, one often encounters Cloudflare, not the actual IP address of the service. This year I have been doing some talks for law enforcement and intelligence professionals about Telegram and its Messenger service. Guess what? Telegram is a Cloudflare customer. My team and I have encountered other interesting services which use Cloudflare the way Natty Bumppo’s sidekick used branches to obscure footprints in the forest.

Cloudflare has other capabilities too; for instance, the write up reports:

Cloudflare assigns every website visit that its platform processes a score of 1 to 99. The lower the number, the greater the likelihood that the request was generated by a bot. According to the company, requests made by the bot that collects content for Perplexity AI consistently receive a score under 30.

I wonder what less salubrious Web site operators score. Yes, there are some pretty dodgy outfits that may be arguably worse than an AI outfit.
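The scoring scheme described in the quoted passage lends itself to simple threshold filtering. Here is a hypothetical sketch of how a site operator might act on such a score; the function name, the "challenge" middle tier, and the 60 cutoff are my own assumptions for illustration, while the below-30 figure mirrors the number quoted for Perplexity's crawler. This is not Cloudflare's actual API.

```python
# Hypothetical sketch: acting on a Cloudflare-style bot score (1-99,
# lower = more likely automated). The block-below-30 cutoff echoes the
# score quoted for Perplexity AI's crawler; the "challenge" tier and the
# 60 threshold are invented for illustration.

def classify_request(bot_score: int, block_below: int = 30) -> str:
    """Return an action for a request given its bot score."""
    if not 1 <= bot_score <= 99:
        raise ValueError("bot score must be between 1 and 99")
    if bot_score < block_below:
        return "block"       # almost certainly a scraper or bot
    if bot_score < 60:
        return "challenge"   # ambiguous: present a CAPTCHA or similar
    return "allow"           # likely human traffic
```

In a real deployment the score would arrive with each request from the CDN layer; the operator's only job is choosing where to draw the lines.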

The information in this Silicon Angle write up raises a question, “What other content blocking and gatekeeping services can Cloudflare provide?”

Stephen E Arnold, July 11, 2024

Google Takes Stand — Against Questionable Content. Will AI Get It Right?

May 24, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

The Internet is the ultimate distribution system for illicit material, especially pornography. A simple Google search yields access to billions of lewd items, both free and behind paywalls. Pornography already has people in a tizzy, but the advent of deepfake porn is making things worse. Google is upset about deepfakes and has decided to take a moral stand. Extreme Tech says: “Google Bans Ads For Platforms That Generate Deepfake Pornography.”

Beginning May 30, Google won’t allow platforms that create deepfake porn, explain how to make it, or promote/compare services to place ads through the Google Ads system. Google already has an Inappropriate Content Policy in place. It prohibits the promotion of hate groups, self-harm, violence, conspiracy theories, and sharing explicit images to garner attention. The policy also bans advertising sex work and sexual abuse.

Violating the content policy results in a ban from Google Ads. Google is preparing for future problems as AI becomes better:

“The addition of deepfake pornography to the Inappropriate Content Policy is undoubtedly the result of increasingly accessible and adept generative AI. In 2022, Google banned deepfake training on Colab, its mostly free public computing resource. Even six years ago, Pornhub and Reddit had to go out of their way to ban AI-generated pornography, which often depicts real people (especially celebrities) engaging in sexual acts they didn’t perform or didn’t consent to recording. Whether we’d like to or not, most of us know just how much better AI has gotten at creating fake faces since then. If deepfake pornography looked a bit janky back in 2018, it’s bound to look a heck of a lot more realistic now.”

If it weren’t for the moral center of humanity, Google’s minions would allow lewd material and other illicit content on Google Ads. Porn sells. It always has.

Whitney Grace, May 24, 2024

The National Public Radio Entity Emulates Grandma

April 17, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I can hear my grandmother telling my cousin Larry: “Chew your food. Or… no television for you tonight.” The time was 6:30 pm. The date was March 3, 1956. My cousin and I were being “watched” while our parents were at a political rally and banquet. Grandmother was in charge, and my cousin was edging close to being sent to grandfather for a whack with his wooden paddle. Tough love, I suppose. I was a good boy. I chewed my food and worked to avoid the Wrath of Ma. I did the time travel thing when I read “NPR Suspends Veteran Editor As It Grapples with His Public Criticism.” I avoid begging-for-dollars outfits. I had no idea what the issue was.


“Gea’t haspoy” which means in grandmother speak: “That’s it. No TV for you tonight. In the morning, both of you are going to help Grandpa mow the yard and rake up the grass.” Thanks, NPR. Oh, sorry, thanks MSFT Copilot. You do the censorship thing too, don’t you?

The write up explains:

NPR has formally punished Uri Berliner, the senior editor who publicly argued a week ago that the network had “lost America’s trust” by approaching news stories with a rigidly progressive mindset.

Oh, I get it. NPR allegedly shapes stories. A “real” journalist does not go along with the program. The progressive-leaning outfit ignores the free speech angle. The “real” journalist is punished with five days in a virtual hoosegow. An NPR “real” journalist published an essay critical of NPR and then vented on a podcast.

The article I have cited is an NPR article. I guess self-criticism is a progressive trait, maybe? Anyway, the article about the grandma action stated:

In rebuking Berliner, NPR said he had also publicly released proprietary information about audience demographics, which it considers confidential. He said those figures “were essentially marketing material. If they had been really good, they probably would have distributed them and sent them out to the world.”

There is no hint that this “real” journalist shares beliefs believed to be held by Julian Assange or that bold soul Edward Snowden, both of whom have danced with super interesting information.

Several observations:

  1. NPR’s suspending an employee reminds me of my grandmother punishing us for not following her wacky rules
  2. NPR is definitely implementing a type of information shaping; if it were not, what’s the big deal about a grousing employee? How many of these does Google have protesting in a year?
  3. Banning a person who is expressing an opinion strikes me as a tasty blend of X.com and that master motivator Joe Stalin. But that’s just my dinobaby mind having a walk-about.

Net net: What media are not censoring, muddled, and into acting like grandma?

Stephen E Arnold, April 17, 2024

Google Mandates YouTube AI Content Be Labeled: Accurately? Hmmmm

April 2, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The rules for proper use of AI-generated content are still up in the air, but big tech companies are already being pressured to adopt regulations. Neowin reported that “Google Is Requiring YouTube Creators To Post Labels For Realistic AI-Created Content” on videos. This is a smart idea in the age of misinformation, especially when technology can realistically create images and sounds.

Google first announced the new requirement for realistic AI content in November 2023. YouTube’s Creator Studio now has a tool to label AI content. The new tool is called “Altered content” and asks creators yes-or-no questions. Its simplicity is similar to YouTube’s question about whether a video is intended for children. The “Altered content” label applies to the following:

• “Makes a real person appear to say or do something they didn’t say or do

• Alters footage of a real event or place

• Generates a realistic-looking scene that didn’t actually occur”

The article goes on to say:

“The blog post states that YouTube creators don’t have to label content made by generative AI tools that do not look realistic. One example was “someone riding a unicorn through a fantastical world.” The same applies to the use of AI tools that simply make color or lighting changes to videos, along with effects like background blur and beauty video filters.”

Google says it will have enforcement measures if creators consistently fail to label their realistic AI videos, but the consequences are not specified. YouTube will also reserve the right to place labels on videos itself. There will also be a reporting system viewers can use to notify YouTube of unlabeled videos. It’s not surprising that Google’s algorithms can’t reliably distinguish realistic AI videos from genuine footage. Perhaps the algorithms are outsmarting their creators.
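The three labeling criteria plus the realism exemption reduce to a simple predicate. Here is a hypothetical sketch of that logic; the function and parameter names are invented for illustration, since the real tool is a yes/no questionnaire inside Creator Studio, not an API.

```python
# Hypothetical sketch of the "Altered content" questionnaire logic.
# Parameter names are invented; they paraphrase the three criteria and
# the exemption for clearly unrealistic content (e.g. the unicorn example).

def needs_altered_content_label(
    real_person_falsified: bool,   # real person appears to say/do something they didn't
    real_event_altered: bool,      # footage of a real event or place is altered
    fabricated_scene: bool,        # a realistic-looking scene that never occurred
    looks_realistic: bool = True,  # fantastical/stylized content is exempt
) -> bool:
    """Return True if the video must carry the 'Altered content' label."""
    if not looks_realistic:
        return False  # exemption also covers color tweaks, blur, beauty filters
    return real_person_falsified or real_event_altered or fabricated_scene
```

Under this reading, a video only needs the label when it both looks realistic and trips at least one of the three criteria, which matches the exemptions quoted above.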

Whitney Grace, April 2, 2024

Alternative Channels, Superstar Writers, and Content Filtering

February 7, 2024

This essay is the work of a dumb dinobaby. No smart software required.

In this post-Twitter world, a duel of influencers is playing out in the blogosphere. At issue: Substack’s alleged Nazi problem. The kerfuffle began with a piece in The Atlantic by Jonathan M. Katz, but has evolved into a debate between Platformer’s Casey Newton and Jesse Singal of Singal-Minded. Both those blogs are hosted by Substack.

To get up to speed on the controversy, see the original Atlantic article. Newton wrote a couple posts about Substack’s responses and detailing Platformer’s involvement. In “Substack Says It Will Remove Nazi Publications from the Platform,” he writes:

“Substack is removing some publications that express support for Nazis, the company said today. The company said this did not represent a reversal of its previous stance, but rather the result of reconsidering how it interprets its existing policies. As part of the move, the company is also terminating the accounts of several publications that endorse Nazi ideology and that Platformer flagged to the company for review last week.”

How many publications did Platformer flag, and how many of those did Substack remove? Were they significant publications, and did they really violate the rules? These are the burning questions Singal sought to answer. He shares his account in, “Platformer’s Reporting on Substack’s Supposed ‘Nazi Problem’ Is Shoddy and Misleading.” But first, he specifies his own perspective on Katz’ Atlantic article:

“In my view, this whole thing is little more than a moral panic. Moreover, Katz cut certain corners to obscure the fact that to the extent there are Nazis on Substack at all, it appears they have almost no following or influence, and make almost no money. In one case, for example, Katz falsely claimed that a white nationalist was making a comfortable living writing on Substack, but even the most cursory bit of research would have revealed that that is completely false.”

Singal says he plans a detailed article supporting that assertion, but first he must pick apart Platformer’s position. Readers are treated to details from an email exchange between the bloggers and reasons Singal feels Newton’s responses are inadequate. One can navigate to that post for those details if one wants to get into the weeds. As of this writing, Newton has not published a response to Singal’s diatribe. Were we better off when such duels took place 280 characters at a time?

One positive about newspapers: An established editorial process kept superstars grounded in reality. Now entitlement, more than content, seems to be in the driver’s seat.

Cynthia Murrell, February 7, 2024
