Another Angle for Protecting Kids Online

September 10, 2021

Nonprofit group Campaign for Accountability has Apple playing defense for seemingly putting kids at risk. MacRumors reports, “Watchdog Investigation Finds ‘Major Weaknesses’ in Apple’s App Store Child Safety Measures.” Writer Joe Rossignol cites the group’s report as he writes:

“As part of its Tech Transparency Project, the watchdog group said it set up an Apple ID for a fictitious 14-year-old user and used it to download and test 75 apps in the App Store across several adult-oriented genres: dating, hookups, online chat, and casino/gambling. Despite all of these apps being designated as 17+ on the App Store, the investigation found the underage user could easily evade the apps’ age restrictions. Among the findings presented included a dating app that presented pornography before asking the user’s age, adult chat apps with explicit images that never asked the user’s age, and a gambling app that allowed the minor to deposit and withdraw money. The investigation also identified broader flaws in Apple’s approach to child safety, claiming that Apple and many apps ‘essentially pass the buck to each other’ when it comes to blocking underage users. The report added that a number of apps design their age verification mechanisms ‘in a way that minimizes the chance of learning the user is underage,’ and claimed that Apple takes no discernible steps to prevent this.”

Ah, buck passing, a time-honored practice. Why does Apple itself not block such content when it knows a user is underage? That is what the Campaign for Accountability’s executive director would like to know. Curious readers can see more details from the report and the organization’s methodology at its Tech Transparency website.

For its part, Apple points to the parental control features built into iOS and iPadOS. These settings let guardians choose which apps can be downloaded as well as how much time children may spend on each app or website. The Campaign for Accountability did not have these controls activated for its hypothetical 14-year-old. Don’t parents still bear ultimate responsibility for what their kids are exposed to? Trying to outsource that burden to tech companies and app developers is probably a bad idea.

Cynthia Murrell, September 10, 2021

Great Moments in Customer Service: Online May Pose Different Risks

September 6, 2021

No, I am not talking about Yext’s new focus on helping customer service work better via connected devices. No, I am not talking about Amazon’s paying up to $1,000 for a third-party product which exhibits interesting behavior; for example, producing unexpected consequences. Yes, I am talking about a non-digital approach.

Navigate to “An Illinois Man Ran Over His Customer after a Botched Drug Sale. Here’s How Long He’ll Spend in Prison.” Note: Prison sentences in the Land of Lincoln can be malleable. Take terms with both salt and furikake.

The write up reports as “real” news:

Macon County Circuit Court Judge Thomas Griffith sentenced Christopher Castelli on Aug. 24 to a maximum of nine years in prison according to the plea agreement he made with the district attorney’s office. Initially, Castelli was charged with reckless homicide, but the charges were dismissed. Instead, he accepted a plea for leaving the scene of an accident resulting in the death of Alisha Gordon, 27.

Interesting. Honest Abe might wonder about this sentence and the dismissed homicide charge. For now, online customer service does not pose this type of risk to customers.

Stephen E Arnold, September 6, 2021

Taliban: Going Dark

September 3, 2021

I spotted a story from the ever reliable Associated Press called “Official Taliban Websites Go Offline, Though Reasons Unknown.” (Note: I am terrified of the AP because quoting is an invitation for this outfit to let loose its legal eagles. I don’t like this type of bird.)

I can, I think, suggest you read the original write up. I recall that the “real” news story revealed some factoids I found interesting; for example:

  • Taliban Web sites “protected” by Cloudflare have been disappeared. (What does that suggest about Cloudflare’s Web performance and security capabilities?)
  • Facebook has disappeared some Taliban info and maybe accounts.
  • The estimable Twitter keeps PR maven Z. Mujahid’s tweets flowing.

I had forgotten that the Taliban is not a terrorist organization. I try to learn something new each day.

Stephen E Arnold, September 3, 2021

It Is Official: Big Tech Outfits Are Empires

August 23, 2021

Who knew? The Electronic Frontier Foundation revealed a factoid which is designed to shock. My position has been that big tech outfits operate like countries. I was wrong. The FAANG-type operations are empires. I stand corrected.

I learned this in “With Great Power Comes Great Responsibility: Platforms Want To Be Utilities, Self-Govern Like Empires.” The write up asserts:

The tech giants argue that they are entitled to run their businesses largely as they see fit: if you don’t like the house rules, just take your business elsewhere.

The write up omits that FAANG-type outfits are not harming the consumer. Plus these organizations operate in accordance with an invisible hand. (I like science fiction, don’t you?)

The problem is that we are now decades into the digital revolution, and the EFF, like some other entities, is beginning to realize that flows of digital information reconstitute the Great Chain of Being. At the top of the chain are the FAANG-type operations.

At the bottom are the thumbtypers. In the middle are those unable to ascend and unwilling to become data serfs: experts like those at the EFF.

“Fixes” are the way forward. From my point of view, the problems have been fixed when those lower in the chain complain, upgrade to a new mobile device, suck down some TikToks, and chill with “content.”

The future has arrived, and it is quite difficult to change the status quo and probably an Afghanistanian task to alter the near-term future.

Empires, not countries. Sounds about right.

Stephen E Arnold, August 23, 2021

Stopping Disinformation At The Systemic Level

August 19, 2021

Disinformation has been a problem since humans created the first conspiracy theory, but the spread has gotten worse in the past few years during Trump’s administration and the pandemic. TechDirt describes why it is so difficult to stop the spread of disinformation in the article “Disentangling Disinformation: Not As Easy As It Looks.” Protestors are urging Facebook to ban disinformation super spreaders, and rightly so.

Disinformation about COVID-19 comes from a limited number of Facebook accounts as well as WhatsApp groups, news programs, local communities, and other social media platforms. Facebook does ban misinformation about COVID-19, but the company does not enforce its own rules. While it is easy to identify the misinformation super spreaders, it is difficult to stop them. Disinformation has infected the Internet at a systemic level, and it is hard to target.

It is hard to decide what actually qualifies as misinformation. What is deemed hard fact and what is deemed conspiracy theory changes all the time. For example, homosexuality used to be considered a mental illness, and the chronic illness ME/CFS was only recently deemed real. Another part of the issue is that giving authorities power to determine what is disinformation has downsides, because authorities do not always agree with the public about what is truthful. It is also extremely difficult to enforce rules about disinformation:

“We know that enforcing terms of service and community standards is a difficult task even for the most resourced, even for those with the best of intentions—like, say, a well-respected, well-funded German newspaper. But if a newspaper, with layers of editors, doesn’t always get it right, how can content moderators—who by all accounts are low-wage workers who must moderate a certain amount of content per hour—be expected to do so? And more to the point, how can we expect automated technologies—which already make a staggering amount of errors in moderation—to get it right?”

In other words, companies can do a better job of moderating disinformation, but it is a nearly impossible task. Misinformation spreads around the globe in multiple languages, and there is no easy, universal way to stop all of it. It is even worse when good content gets lost because of misinformation.

Whitney Grace, August 19, 2021

Biased? Abso-Fricken-Lutely

August 16, 2021

To be human is to be biased. Call it a DNA thing or blame it on a virus from a pangolin. In the distant past, few people cared about biases. Do you think those homogeneous nation states emerged because some people just wanted to invent the biathlon?

There’s a reasonably good rundown of biases in “A Handy Guide to Cognitive Biases: Short Cuts.” One is able to scan biases via an alphabetical list (a bit of a rarity these days) or by category.

The individual bias entries may give some readers heartburn; for example, the base rate neglect fallacy. The examples are familiar to some of the people with whom I have worked over the years. These clear thinkers misjudge the probability of an event by ignoring background information. I would use the phrase “ignoring context,” but I defer to the team which aggregated and assembled the online site. A minimal worked example appears below.
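
To make base rate neglect concrete, here is a minimal sketch in Python of the classic screening-test version of the fallacy. The prevalence and accuracy figures are invented for illustration; they do not come from the guide itself:

```python
# Base rate neglect, illustrated with Bayes' theorem.
# All numbers are hypothetical, chosen only for illustration.

prevalence = 0.001          # base rate: 0.1% of people have the condition
sensitivity = 0.99          # P(positive test | condition)
false_positive_rate = 0.01  # P(positive test | no condition)

# Law of total probability: P(positive test)
p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))

# Bayes' theorem: P(condition | positive test)
p_condition_given_positive = sensitivity * prevalence / p_positive

print(f"P(condition | positive test) = {p_condition_given_positive:.1%}")
# Prints about 9.0%. Guessing "around 99%" because the test is
# 99% accurate is the fallacy: it ignores the 0.1% base rate.
```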

Worth a look. Will most people absorb the info and adjust? Will the mystery of Covid’s origin be resolved in a definitive, verifiable way? Yeah, maybe.

Stephen E Arnold, August 16, 2021

Europe: Privacy Footnote

August 11, 2021

If you are not familiar with Chatcontrol, there’s a mostly useful list of resources on the Digital Human Rights blog. The article “Messaging and Chat Control” offers some context as well as a foreshadowing of the possible trajectory of this EU initiative.

The Chatcontrol legislation meshes with Apple’s recent statement that it would be more proactive and transparent about its monitoring activity. You can get a sense of this action in “Expanded Protections for Children.”

A schism exists between those who want whatever content is of interest to flow freely. On the other side of the gap are those who want to put controls on the flow of digital content.

Observations I noted on a flight home from Washington, DC on August 10, 2021, included:

  • Digital content flows accelerate and facilitate some unpleasant facets of human behavior. Vendors have done little since the dawn of “online” to manage corrosive bits. Is it any surprise that, after 50 years, elected officials are trying to take action?
  • The failure to regulate has been a result of a general misunderstanding of the nature of unfettered digital information flows. As I have pointed out, digital content works exactly like glass beads propelled at a rusted fender. Once the rust is gone, keeping the nozzle aimed at the fender blasts the fender away as well. Hence, we have the social fabric in its present and rapidly deteriorating condition.
  • One property of digital information is that those with expertise in it can innovate. Thus, there will be workarounds. Some of these will be deployed more rapidly than the filtering and control mechanisms can be updated. I point this out because once a control system is imposed, it becomes increasingly difficult and expensive to keep in tip-top shape.

Net net: China has been the pace-setter in this approach to digital information. How easy is it to sketch the trajectory of these long-overdue actions? That’s an interesting question to ponder after a half century of stumbling into the school room with a mobile phone and a perception that the online-equipped person is a wizard.

Stephen E Arnold, August 11, 2021

A Tiny Idea: Is a New Governmental Thought Shaper Emerging?

August 11, 2021

I read “China’s Top Propaganda Agencies Want to Limit the Role of Algorithms in Distributing Online Content.” What an interesting idea. De-algorithm certain Fancy Dan smart software. Make a human or humanoids responsible for what gets distributed online. Laws apparently are not getting through to the smart software used for certain technology publishing functions. The fix, according to the article, is:

China’s top state propaganda organs, which decide what people can read and watch in the country, have jointly urged better “culture and art reviews” in China partly by limiting the role of algorithms in content distribution, a policy move that could translate into higher compliance costs for online content providers such as ByteDance and Tencent Holdings. The policy guidelines from the Central Propaganda Department of the Communist Party, the Ministry of Culture and Tourism and the State Administration of Radio and Television as well as the China Federation of Literary and Art Circles and Chinese Writers Association, the two state-backed bodies for state-approved artists and authors, mark the latest effort by Beijing to align online content with the state’s agenda and to rein in the role of capital and technology in shaping the country’s minds and mainstream views.

The value of putting a human or humanoids in the target zone is an explicit acknowledgement that “gee, I’m sorry” and “our algorithms are just so advanced my team does not know what those numerical recipes are doing” will not fly or get to the airport.

I am not too interested in the impact of these rules in the Middle Kingdom. What I want to track is how these rules diffuse to nation states which are counting on a big time rail link or money to fund Chinese partners’ projects.

Net net: Chinese government agencies, where monitoring and internal checks and balances are an art form, possibly will make use of interesting algorithms. Commercial enterprises and organizations grousing about China’s rules and regulations will have fewer degrees of freedom. Maybe no freedom at all. Ideas may not be moving from the US East Coast and the West Coast. Big ideas like clipping algorithmic wings are building in China and heading out. Will the idea catch on?

Stephen E Arnold, August 11, 2021

Online Anonymity: Maybe a Less Than Stellar Idea

July 20, 2021

On one hand, there is a veritable industrial revolution in identifying, tracking, and pinpointing online users. On the other hand, there is the confection of online anonymity. The idea is that by obfuscation, using a fake name, or hijacking an account set up for one’s 75-year-old spinster aunt — a person can be anonymous. And what fun some can have when their online actions are obfuscated by cleverness, Tor cartwheels, or more sophisticated methods using free email and “trial” cloud accounts. I am not a big fan of online anonymity for three reasons:

  1. Online makes it easy for a person to listen to one’s internal demons’ chatter and do incredibly inappropriate things. Anonymity and online, in my opinion, are a bit like reverting to 11-year-old thinking, often with an adult’s suppressed perceptions and assumptions about what’s okay and what’s not okay.
  2. Having a verified identity linked to an online action imposes social constraints. The method may not be the same as a small town watching the actions of frisky teens and intervening, or telling a parent at the grocery that their progeny was making life tough for the small kid with glasses who was studying Lepidoptera, but the effect is similar.
  3. Individuals doing inappropriate things are often exposed, discovered, or revealed by friends, spouses angry about a failure to take out the garbage, or a small investigative team trying to figure out who spray painted the doors of a religious institution.

When I read “Abolishing Online Anonymity Won’t Tackle the Underlying Problems of Racist Abuse,” I agreed. The write up states:

There is an argument that by forcing people to reveal themselves publicly, or giving the platforms access to their identities, they will be “held accountable” for what they write and say on the internet. Though the intentions behind this are understandable, I believe that ID verification proposals are shortsighted. They will give more power to tech companies who already don’t do enough to enforce their existing community guidelines to protect vulnerable users, and, crucially, do little to address the underlying issues that render racial harassment and abuse so ubiquitous.

The observation is on the money.

I would push back a little. Limiting online use to those who verify their identity may curtail some of the crazier behaviors online. At this time, fractious behavior is the norm. The continuous erosion of cultural norms, common courtesies, and routine interactions is destructive.

My thought is that changing the anonymity to real identity might curtail some of the behavior online systems enable.

Stephen E Arnold, July 20, 2021

A Good Question and an Obvious Answer: Maybe Traffic and Money?

July 19, 2021

I read “Euro 2020: Why Is It So Difficult to Track Down Racist Trolls and Remove Hateful Messages on Social Media?” The write up expresses understandable concern about the use of social media to criticize athletes. Some athletes have magnetism, and sponsors want to use that “pull” to sell products and services. I remember a technology conference which featured a former football quarterback who explained how to succeed. He did not reference the athletic expertise of a former high school science club member and officer. As I recall, the pitch was working hard, fighting (!), and overcoming a coach calling a certain athlete (me, for example) a “fat slug.” Relevant to innovating in online databases? Yes, truly inspirational, and an anecdote from the mists of time.

The write up frames its concern about derogatory social media “posts” this way:

Over a quarter of the comments were sent from anonymous private accounts with no posts of their own. But identifying perpetrators of online hate is just one part of the problem.

And the real “problem”? The article states:

It’s impossible to discover through open-source techniques that an account is being operated from a particular country.

Maybe.

Referencing Instagram (a Facebook property), the Sky story notes:

Other users may anonymise their existing accounts so that the comments they post are not traceable to them in the offline world.

Okay, automated systems with smart software don’t do the job. Will another government bill in the UK help?

The write up does everything but comment on the obvious; for example, my view is that online accounts must be linked to a human and verified before posts are permitted.

The smart software thing, the government law thing, and the humans-making-decisions thing are not particularly efficacious. Why? The online systems permit — if not encourage — anonymity because of money, maybe? That’s a question for the Sky Data and Forensics team. It is:

a multi-skilled unit dedicated to providing transparent journalism from Sky News. We gather, analyse and visualise data to tell data-driven stories. We combine traditional reporting skills with advanced analysis of satellite images, social media and other open source information. Through multimedia storytelling we aim to better explain the world while also showing how our journalism is done.

Okay.

Stephen E Arnold, July 19, 2021
