Ampliganda: A Wonderful Word
October 13, 2021
Let’s try to create a meme. That sounds like fun. How about coining a word? The Atlantic has one to share. It’s ampliganda.
You can read about the word in “It’s Not Misinformation. It’s Amplified Propaganda.” The write up explains as only the Atlantic and the Stanford Internet Observatory can:
Perhaps the best word for this emergent bottom-up dynamic is one that doesn’t exist quite yet: ampliganda, the shaping of perception through amplification. It can originate from an online nobody or an onscreen celebrity. No single person or organization bears responsibility for its transmission. And it is having a profound effect on democracy and society.
Several observations:
- The Stanford Internet Observatory is definitely quick on the meme trigger. It has been a mere two decades since the search engine optimization crowd figured out how to erode relevance.
- A number of the ampliganda outfits have roots at Stanford. Isn’t that something?
- “Voting” for popularity is a thrilling concept. It works for middle school class officer elections. Algorithms can emulate popularity feedback mechanisms.
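If “algorithms can emulate popularity feedback mechanisms” sounds abstract, a minimal sketch makes it concrete: a ranker that surfaces items in proportion to the votes they already have will manufacture a few runaway “winners” out of random noise. The function name and every number below are hypothetical, invented only to illustrate the feedback loop, not drawn from any platform’s actual ranking code.

```python
import random

# Minimal "rich get richer" sketch: items that are already popular get more
# exposure, exposure earns more votes, and the loop amplifies early luck.
# All names and parameters are illustrative, not any platform's real code.

def run_popularity_loop(num_items=10, rounds=1000, seed=42):
    rng = random.Random(seed)
    votes = [1] * num_items  # every item starts with a single seed vote

    for _ in range(rounds):
        # Exposure is proportional to current vote share, so the next vote
        # most likely goes to whatever is already "popular."
        pick = rng.uniform(0, sum(votes))
        running = 0
        for i, v in enumerate(votes):
            running += v
            if pick <= running:
                votes[i] += 1
                break
    return votes

if __name__ == "__main__":
    print(sorted(run_popularity_loop(), reverse=True))
    # Typical output: a couple of items hoard most of the votes; the rest languish.
```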
Who would have known unless Stanford was on the job? Yep, ampliganda. A word for the ages. Like Google maybe?
Stephen E Arnold, October 13, 2021
Stanford Google AI Bond?
October 12, 2021
I read “Peter Norvig: Today’s Most Pressing Questions in AI Are Human-Centered.” It appears, based on the interview, that Mr. Norvig will work at Stanford’s Institute for Human Centered AI.
Here’s the quote I found interesting:
Now that we have a great set of algorithms and tools, the more pressing questions are human-centered: Exactly what do you want to optimize? Whose interests are you serving? Are you being fair to everyone? Is anyone being left out? Is the data you collected inclusive, or is it biased?
These are interesting questions, and ones to which I assume Dr. Timnit Gebru will offer answers.
Will Stanford’s approach to artificial intelligence advance its agenda and address such issues as bias in the Snorkel-type of approach to machine learning? Will Stanford and Google expand their efforts to provide the solutions which Mr. Norvig describes in this way?
You don’t get credit for choosing an especially clever or mathematically sophisticated model, you get credit for solving problems for your users.
Like ads, maybe? Like personnel problems? Like augmenting certain topics for teens? Maybe?
Stephen E Arnold, October 12, 2021
Mistaken Fools Versus Lying Schemers
October 4, 2021
We must distinguish between misinformation born of honest, if foolish, mistakes and deliberate disinformation. Writer Mike Masnick makes that point in, “The Role of Confirmation Bias In Spreading Misinformation” at TechDirt.
If a story supports our existing beliefs we are more likely to believe it without checking the facts. This can be true even for professional journalists, as a recent Rolling Stone article illustrates. That venerable publication relied on a local TV report that made what turned out to be unverifiable claims. Both reported that gunshot victims were turned away from a certain emergency room because ivermectin overdose patients had taken all the beds. The story quickly spread, covered by The Guardian, the BBC, The Hill, and a wealth of foreign papers eager to scoff at the US. Ouch. According to the healthcare system overseeing that hospital, however, they had not treated a single case of ivermectin overdose and had not turned away any emergency-care patients. The original article was based on the word of a doctor who, they say, had not worked at that hospital in over two months. (And, we suspect, never again after all this.) This debacle should serve as a warning to all journalists to do their own fact-checking, no matter how plausible a story sounds to them.
Though such misinformation is a serious issue, Masnick writes, it is a different problem from that of deliberate disinformation. Conflating the two leads to even more problems. He observes:
“However, as we’ve discussed before, when you conflate a mistake with the deliberate bad faith pushing of false information, then that only serves to give more ammunition to those who wish to not just discredit all content from certain publications, but to then look to minimize complaints against ‘news’ organizations that specialize and focus on bad faith propaganda, by simply claiming it’s no different than what the mainstream media does in presenting ‘disinformation.’ But there is a major difference. A mistake is bad, and everyone who fell for this story looks silly for doing so. But without a clear pattern of deliberately pushing misleading or out of context information, it suggests a mere error, as opposed to deliberate bad faith activity. The same cannot be said for all ‘news’ organizations.”
An important distinction indeed.
Cynthia Murrell, October 4, 2021
Researcher Suggests Alternative to Criminalization to Curb Fake News
September 10, 2021
Let us stop treating purveyors of fake news like criminals and instead create an atmosphere where misinformation cannot thrive. That is the idea behind one academic’s proposal, The Register explains in, “Online Disinformation Is an Industry that Needs Regulation, Says Boffin.” (Boffin is British for “scientist or technical expert.”) Dr. Ross Tapsell, director of the Australian National University’s Malaysia Institute, looked at Malaysia’s efforts to address online misinformation by criminalizing its spread. That approach has not gone so well for that nation, one in which much of its civil discourse occurs online. Reporter Laura Dobberstein writes:
“In 2018, Malaysia introduced an anti-fake news bill, the first of its kind in the world. According to the law, those publishing or circulating misleading information could spend up to six years in prison. The law put online service providers on the hook for third-party content and anyone could make an accusation. This is problematic as fake news is often not concrete or definable, existing in an ever-changing grey area. Any fake news regulation brings a whole host of freedom of speech issues with it and raises questions as to how the law might be used nefariously – for example to silence political opponents. … The law was repealed in 2019 after becoming seen as an instrument to suppress political opponents rather than protecting Malaysians from harmful information.”
Earlier this year, though, lawmakers reversed course again in the face of COVID—wielding fines of up to RM100,000 ($23,800 US) and the threat of prison for those who spread false information about the disease. Tapsell urges them to consider an alternate approach. He writes:
“Rather than adopting the common narrative of social media ‘weaponisation’, I will argue that the challenges of a contemporary ‘infodemic’ are part of a growing digital media industry and rapidly shifting information society” that is best addressed “through creating and developing a robust, critical and trustworthy digital media landscape.”
Nice idea. Tapsell points to watchdog agencies, which have already taken over digital campaigns during Malaysian elections, as one way to create this shift. His main push, though, seems to be for big tech companies like Facebook and Twitter to take action. For example, they can publicly call out purveyors of false info. After all, it is harder to retaliate against them than against local researchers and journalists, the researcher notes. He recognizes social media companies have made some efforts to halt coordinated disinformation campaigns and to make them less profitable, but insists there is more they can do. What, specifically, is unclear. We wonder—does Tapsell really mean to leave it to Big Tech to determine which news is real and which is fake? We are not sure that is the best plan.
Cynthia Murrell, September 10, 2021
Another Angle for Protecting Kids Online
September 10, 2021
Nonprofit group Campaign for Accountability has Apple playing defense for seemingly putting kids at risk. MacRumors reports, “Watchdog Investigation Finds ‘Major Weaknesses’ in Apple’s App Store Child Safety Measures.” Writer Joe Rossignol cites the group’s report as he writes:
“As part of its Tech Transparency Project, the watchdog group said it set up an Apple ID for a fictitious 14-year-old user and used it to download and test 75 apps in the App Store across several adult-oriented genres: dating, hookups, online chat, and casino/gambling. Despite all of these apps being designated as 17+ on the App Store, the investigation found the underage user could easily evade the apps’ age restrictions. Among the findings presented included a dating app that presented pornography before asking the user’s age, adult chat apps with explicit images that never asked the user’s age, and a gambling app that allowed the minor to deposit and withdraw money. The investigation also identified broader flaws in Apple’s approach to child safety, claiming that Apple and many apps ‘essentially pass the buck to each other’ when it comes to blocking underage users. The report added that a number of apps design their age verification mechanisms ‘in a way that minimizes the chance of learning the user is underage,’ and claimed that Apple takes no discernible steps to prevent this.”
Ah, buck passing, a time-honored practice. Why does Apple itself not block such content when it knows a user is underaged? That is what the Campaign for Accountability’s executive director would like to know. Curious readers can see more details from the report and the organization’s methodology at its Tech Transparency website.
For its part, Apple points to the parental control features built into iOS and iPadOS. These settings let guardians choose which apps can be downloaded as well as how much time children may spend on each app or website. The Campaign for Accountability did not have these controls activated for its hypothetical 14-year-old. Don’t parents still bear ultimate responsibility for what their kids are exposed to? Trying to outsource that burden to tech companies and app developers is probably a bad idea.
Cynthia Murrell, September 10, 2021
Great Moments in Customer Service: Online May Pose Different Risks
September 6, 2021
No, I am not talking about Yext’s new focus on helping customer service work better via a connected device. No, I am not talking about Amazon’s paying up to $1,000 for a third party product which exhibits interesting behavior; for example, producing unexpected consequences. Yes, I am talking about a non-digital approach.
Navigate to “An Illinois Man Ran Over His Customer after a Botched Drug Sale. Here’s How Long He’ll Spend in Prison.” Note: Prison sentences in the Land of Lincoln can be malleable. Take terms with both salt and furikake.
The write up reports as “real” news:
Macon County Circuit Court Judge Thomas Griffith sentenced Christopher Castelli on Aug. 24 to a maximum of nine years in prison according to the plea agreement he made with the district attorney’s office. Initially, Castelli was charged with reckless homicide, but the charges were dismissed. Instead, he accepted a plea for leaving the scene of an accident resulting in the death of Alisha Gordon, 27.
Interesting. Honest Abe might wonder about this sentence and the dismissed charge. For now, online customer service does not pose this type of risk to customers.
Stephen E Arnold, September 6, 2021
Taliban: Going Dark
September 3, 2021
I spotted a story from the ever reliable Associated Press called “Official Taliban Websites Go Offline, Though Reasons Unknown.” (Note: I am terrified of the AP because quoting is an invitation for this outfit to let loose its legal eagles. I don’t like this type of bird.)
I can, I think, suggest you read the original write up. I recall that the “real” news story revealed some factoids I found interesting; for example:
- Taliban Web sites “protected” by Cloudflare have been disappeared. (What’s that suggest about the Cloudflare Web performance and security capabilities?)
- Facebook has disappeared some Taliban info and maybe accounts.
- The estimable Twitter keeps PR maven Z. Mujahid’s tweets flowing.
I had forgotten that the Taliban is not a terrorist organization. I try to learn something new each day.
Stephen E Arnold, September 3, 2021
It Is Official: Big Tech Outfits Are Empires
August 23, 2021
Who knew? The Electronic Frontier Foundation revealed a factoid which is designed to shock. My position has been that big tech outfits operate like countries. I was wrong. The FAANG-type operations are empires. I stand corrected.
I learned this in “With Great Power Comes Great Responsibility: Platforms Want To Be Utilities, Self-Govern Like Empires.” The write up asserts:
The tech giants argue that they are entitled to run their businesses largely as they see fit: if you don’t like the house rules, just take your business elsewhere.
The write up omits the argument that FAANG-type outfits are not harming the consumer. Plus these organizations operate in accordance with an invisible hand. (I like science fiction, don’t you?)
The problem is that we are now decades into the digital revolution, and the EFF, like some other entities, is beginning to realize that flows of digital information reconstitute the Great Chain of Being. At the top of the chain are the FAANG-type operations.
At the bottom are the thumbtypers. In the middle are experts like those at the EFF, unable to ascend and unwilling to become data serfs.
“Fixes” are the way forward. From my point of view, the problems have been fixed when those lower in the chain complain, upgrade to a new mobile device, suck down some TikToks, and chill with “content.”
The future has arrived, and it is quite difficult to change the status quo and probably an Afghanistanian task to alter the near-term future.
Empires, not countries. Sounds about right.
Stephen E Arnold, August 23, 2021
Stopping Disinformation At The Systemic Level
August 19, 2021
Disinformation has been a problem since humans created the first conspiracy theory, but the spread has gotten worse in the past few years during Trump’s administration and the pandemic. TechDirt describes how difficult it is to stop the spread of disinformation in the article: “Disentangling Disinformation: Not As Easy As It Looks.” Protestors are urging Facebook to ban disinformation super spreaders, and rightly so.
Disinformation about COVID-19 comes from a limited number of Facebook accounts as well as WhatsApp groups, news programs, local communities, and other social media platforms. Facebook does ban misinformation about COVID-19, but the company does not enforce its own rules. While it is easy to identify the misinformation super spreaders, it is difficult to stop them. Disinformation has infected the Internet at a systemic level, which makes it hard to target.
It is hard to decide what actually qualifies as misinformation. What is deemed hard fact and what is dismissed as conspiracy theory changes all the time. For example, homosexuality used to be considered a mental illness, and the chronic illness ME/CFS was only recently deemed real. Another part of the issue is that giving authorities power to determine what is disinformation has downsides, because authorities do not always agree with the public about what is truthful. It is also extremely difficult to enforce rules about disinformation:
“We know that enforcing terms of service and community standards is a difficult task even for the most resourced, even for those with the best of intentions—like, say, a well-respected, well-funded German newspaper. But if a newspaper, with layers of editors, doesn’t always get it right, how can content moderators—who by all accounts are low-wage workers who must moderate a certain amount of content per hour—be expected to do so? And more to the point, how can we expect automated technologies—which already make a staggering amount of errors in moderation—to get it right?”
In other words, companies can do a better job of moderating disinformation, but it is a nearly impossible task. Misinformation spreads around the globe in multiple languages and there is not an easy, universal way to stop all of it. It is even worse when good content gets lost because of misinformation.
Whitney Grace, August 19, 2021
Biased? Abso-Fricken-Lutely
August 16, 2021
To be human is to be biased. Call it a DNA thing or blame it on a virus from a pangolin. In the distant past, few people cared about biases. Do you think those homogeneous nation states emerged because some people just wanted to invent the biathlon?
There’s a reasonably good run down of biases in “A Handy Guide to Cognitive Biases: Short Cuts.” One is able to scan biases via an alphabetical list (a bit of a rarity these days) or by category.
The individual biases may give some readers heartburn; for example, the base rate neglect fallacy. The examples are familiar to some of the people with whom I have worked over the years: these clear thinkers misjudge the probability of an event by ignoring background information. I would use the phrase “ignoring context,” but I defer to the team which aggregated and assembled the online site.
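To make the base rate point concrete, here is a small worked example using Bayes’ theorem. The prevalence and test accuracy figures are invented for illustration; they do not come from the guide.

```python
# Hypothetical base rate neglect example: a "90% accurate" screening test still
# produces mostly false positives when the condition is rare, because the low
# base rate (the background information people ignore) dominates the answer.

def posterior_given_positive(base_rate, true_positive_rate, false_positive_rate):
    """P(condition | positive test) via Bayes' theorem."""
    p_positive = (true_positive_rate * base_rate
                  + false_positive_rate * (1 - base_rate))
    return (true_positive_rate * base_rate) / p_positive

if __name__ == "__main__":
    p = posterior_given_positive(
        base_rate=0.01,            # 1 in 100 people actually has the condition
        true_positive_rate=0.90,   # the test catches 90% of real cases
        false_positive_rate=0.09,  # but also flags 9% of healthy people
    )
    print(f"P(condition | positive test) = {p:.1%}")  # about 9%, not the intuitive 90%
```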
Worth a look. Will most people absorb the info and adjust? Will the mystery of Covid’s origin be resolved in a definitive, verifiable way? Yeah, maybe.
Stephen E Arnold, August 16, 2021