A Xoogler May Question the Google about Responsible and Ethical Smart Software

December 2, 2021

Write a research paper. Get colleagues to provide input. Well, ask colleagues to do that work and what do you get? How about "Looks good." Or "Add more zing to that chart." Or "I'm snowed under, so it will be a while, but I will review it…" Then the paper wends its way to publication, and a senior manager type reads the paper on a flight from one whiz kid town to another whiz kid town and says, "This is bad. Really bad, because the paper points out that we fiddle with the outputs. And what we set up is biased to generate the most money possible from clueless humans under our span of control." Finally, the paper is blocked from publication, and the offending PhD is fired or sent signals that his or her future lies elsewhere.


Will this be a classic arm wrestling match? The winner may control quite a bit of conceptual territory along with knobs and dials to shape information.

Could this happen? Oh, yeah.

"Ex Googler Timnit Gebru Starts Her Own AI Research Center" documents the next step, which may mean that some wizards' undergarments will be sprayed with eau de poison oak for months, maybe years. Here's one of the statements from the Wired article:

“Instead of fighting from the inside, I want to show a model for an independent institution with a different set of incentive structures,” says Gebru, who is founder and executive director of Distributed Artificial Intelligence Research (DAIR). The first part of the name is a reference to her aim to be more inclusive than most AI labs—which skew white, Western, and male—and to recruit people from parts of the world rarely represented in the tech industry. Gebru was ejected from Google after clashing with bosses over a research paper urging caution with new text-processing technology enthusiastically adopted by Google and other tech companies.

The main idea, which Wired and Dr. Gebru delicately sidestep, is that there are allegations of an artificial intelligence or machine learning cabal drifting around some conference hall chatter. On one side is the push for what I call the SAIL approach. The example I use to show how this cost-effective, speedy, and clever shortcut approach works is the work of Dr. Christopher Ré, the captain of the objective craft SAIL. Oh, is the acronym unfamiliar to you? SAIL is the short version of Stanford Artificial Intelligence Laboratory. SAIL fits with the Snorkel content diving gear, I think.

On the other side of the ocean are Dr. Timnit Gebru's fellow travelers. The difference is that Dr. Gebru believes that smart software should not reflect the wit, wisdom, biases, and general bro-ness of the high school science club culture. This culture, in my opinion, has contributed to the fraying of the social fabric in the US, caused harm, and eroded behaviors that are supposed to be subordinated to "just what people do to make a social system function smoothly."

Does the Wired write up identify the alleged cabal? Nope.

Does the write up explain that the Ré / Snorkel methods sacrifice some precision in the rush to generate good enough outputs? (Good enough can be framed in terms of ad revenue, reduced costs, and faster time to market testing in my opinion.) Nope.

Does Dr. Gebru explain how insidious the shortcut training of models is and how it will create systems which actively harm those outside the 60 percent threshold of certain statistical yardsticks? Heck, no.

Perhaps some bright researchers will explain what's happening with a "deep dive"? Oh, right. Deep Dive is the name of a content access company which uses Dr. Ré's methods. Ho, ho, ho. You didn't know?
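For readers who want a concrete sense of what this "shortcut" looks like in practice, here is a minimal, hypothetical sketch of the weak-supervision idea popularized by Snorkel-style systems: instead of paying humans to label every training example, a handful of cheap heuristic "labeling functions" vote on each example, and the votes are combined into noisy, good enough training labels. The function names and rules below are invented for illustration; they are not taken from the Wired article or from Dr. Ré's actual code, and real systems use a probabilistic label model rather than a plain majority vote.

    # Hypothetical sketch of Snorkel-style weak supervision:
    # heuristic labeling functions vote, and a simple majority vote
    # produces noisy "good enough" training labels.

    SPAM, NOT_SPAM, ABSTAIN = 1, 0, -1

    def lf_contains_free(text):
        # Crude keyword heuristic: cheap, fast, and imprecise.
        return SPAM if "free" in text.lower() else ABSTAIN

    def lf_has_link(text):
        return SPAM if "http" in text.lower() else ABSTAIN

    def lf_short_message(text):
        # Very short messages are assumed benign here: a guess, not a fact.
        return NOT_SPAM if len(text.split()) < 4 else ABSTAIN

    LABELING_FUNCTIONS = [lf_contains_free, lf_has_link, lf_short_message]

    def weak_label(text):
        """Combine labeling-function votes by majority; abstain if none vote."""
        votes = [lf(text) for lf in LABELING_FUNCTIONS if lf(text) != ABSTAIN]
        if not votes:
            return ABSTAIN
        return max(set(votes), key=votes.count)

    if __name__ == "__main__":
        examples = [
            "Get a free prize at http://example.com",
            "Lunch at noon?",
            "Your invoice is attached",
        ]
        for text in examples:
            print(weak_label(text), text)

The point of the criticism in this post is that each heuristic is quick and cheap but carries its own blind spots, and whatever bias the heuristics encode flows straight into the trained model.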

Beyond Search believes that Dr. Gebru has important contributions to make to applied smart software. Just hurry up already.

Stephen E Arnold, December 2, 2021

Apple Podcast Ratings: A Different Angle

November 24, 2021

I read "Apple Podcasts App Ratings Flip after the Company Starts Prompting Users." The write up explains that Apple's podcast application was receiving the rough equivalent of a D or D- from its users. How did Apple fix this? Some big monopolies would have just had an intern enter the desired number. This works with search results pages on some Web and enterprise search systems. Not Apple. The write up reports:

The iPhone maker told The Verge that iOS 15.1 started prompting users for ratings and reviews “just like most third-party apps.” However, many people thought they were rating the show they were listening to, not the app — and that led to a flood of scores and reviews for podcasts.

Two points:

  1. Users were confused.
  2. Prompts sparked ratings.

I interpreted this information to mean that, first, users are not too swift even though Apple's high priced products are supposed to appeal to the swift and sure. Second, the prompts caused an immediate reaction from at least some of the app's users.

My takeaway: Online services can shape behavior. Power in the hands of the just and true, or evidence of the impact of digital nudges? Do higher ratings improve the app? Probably not.

Stephen E Arnold, November 24, 2021

OSINT: As Good as Government Intel

November 16, 2021

It is truly amazing how much information private citizens in the OSINT community can now glean from publicly available data. As The Economist puts it, "Open-Source Intelligence Challenges State Monopolies on Information." Complete with intriguing examples, the extensive article details the growth of technologies and networks that have drastically changed the intelligence-gathering game over the last decade. We learn of Geo4Nonpro, a project of the James Martin Centre for Nonproliferation Studies (CNS) at the Middlebury Institute for International Studies at Monterey, California. The write-up reports:

“The CNS is a leader in gathering and analyzing open-source intelligence (OSINT). It has pulled off some dramatic coups with satellite pictures, including on one occasion actually catching the launch of a North Korean missile in an image provided by Planet, a company in San Francisco. Satellite data, though, is only one of the resources feeding a veritable boom in non-state OSINT. There are websites which track all sorts of useful goings-on, including the routes taken by aircraft and ships. There are vast searchable databases. Terabytes of footage from phones are uploaded to social-media sites every day, much of it handily tagged. … And it is not just the data. There are also tools and techniques for working with them—3D modeling packages, for example, which let you work out what sort of object might be throwing the shadow you see in a picture. And there are social media and institutional settings that let this be done collaboratively. Eclectic expertise and experience can easily be leveraged with less-well-versed enthusiasm and curiosity in the service of projects which link academics, activists, journalists and people who mix the attributes of all three groups.”

We recommend reading the whole article for more about those who make a hobby of painstakingly analyzing images and footage. Some of these projects have come to startling conclusions. Government intelligence agencies are understandably wary as capabilities that used to be their purview spread among private OSINT enthusiasts. Not so wary, though, that they will not utilize the results when they prove useful. In fact, the government is a big customer of companies that supply higher-resolution satellite images than one can pull from the Web for free—outfits like American satellite maker Maxar and European aerospace firm Airbus. The article is eye-opening, and we can only wonder what the long-term results of this phenomenon will be.

Cynthia Murrell, November 16, 2021

Ampliganda: A Wonderful Word

October 13, 2021

Let's try to create a meme. That sounds like fun. How about coining a word? The Atlantic has one to share. It's ampliganda.

You can read about the word in “It’s Not Misinformation. It’s Amplified Propaganda.” The write up explains as only the Atlantic and the Stanford Internet Observatory can:

Perhaps the best word for this emergent bottom-up dynamic is one that doesn’t exist quite yet: ampliganda, the shaping of perception through amplification. It can originate from an online nobody or an onscreen celebrity. No single person or organization bears responsibility for its transmission. And it is having a profound effect on democracy and society.

Several observations:

  1. The Stanford Internet Observatory is definitely quick on the meme trigger. It has been a mere two decades since the search engine optimization crowd figured out how to erode relevance.
  2. A number of the ampliganda outfits have roots at Stanford. Isn't that something?
  3. "Voting" for popularity is a thrilling concept. It works for middle school class officer elections. Algorithms can emulate popularity feedback mechanisms.

Who would have known unless Stanford was on the job? Yep, ampliganda. A word for the ages. Like Google maybe?

Stephen E Arnold, October 13, 2021

Stanford Google AI Bond?

October 12, 2021

I read "Peter Norvig: Today's Most Pressing Questions in AI Are Human-Centered." It appears, based on the interview, that Mr. Norvig will work at Stanford's Institute for Human-Centered AI.

Here’s the quote I found interesting:

Now that we have a great set of algorithms and tools, the more pressing questions are human-centered: Exactly what do you want to optimize? Whose interests are you serving? Are you being fair to everyone? Is anyone being left out? Is the data you collected inclusive, or is it biased?

These are interesting questions, and ones to which I assume Dr. Timnit Gebru will offer answers.

Will Stanford's approach to artificial intelligence advance its agenda and address such issues as bias in Snorkel-type approaches to machine learning? Will Stanford and Google expand their efforts to provide the solutions which Mr. Norvig describes in this way?

You don’t get credit for choosing an especially clever or mathematically sophisticated model, you get credit for solving problems for your users.

Like ads, maybe? Like personnel problems? Like augmenting certain topics for teens? Maybe?

Stephen E Arnold, October 12, 2021

Mistaken Fools Versus Lying Schemers

October 4, 2021

We must distinguish between misinformation born of honest, if foolish, mistakes and deliberate disinformation. Writer Mike Masnick makes that point in "The Role of Confirmation Bias In Spreading Misinformation" at TechDirt.

If a story supports our existing beliefs we are more likely to believe it without checking the facts. This can be true even for professional journalists, as a recent Rolling Stone article illustrates. That venerable publication relied on a local TV report that made what turned out to be unverifiable claims. Both reported that gunshot victims were turned away from a certain emergency room because ivermectin overdose patients had taken all the beds. The story quickly spread, covered by The Guardian, the BBC, the Hill, and a wealth of foreign papers eager to scoff at the US. Ouch. According to the healthcare system overseeing that hospital, however, it had not treated a single case of ivermectin overdose and had not turned away any emergency-care patients. The original article was based on the word of a doctor who, they say, had not worked at that hospital in over two months. (And, we suspect, never will again after all this.) This debacle should serve as a warning to all journalists to do their own fact-checking, no matter how plausible a story sounds to them.

Though such misinformation is a serious issue, Masnick writes, it is a different problem from that of deliberate disinformation. Conflating the two leads to even more problems. He observes:

“However, as we’ve discussed before, when you conflate a mistake with the deliberate bad faith pushing of false information, then that only serves to give more ammunition to those who wish to not just discredit all content from certain publications, but to then look to minimize complaints against ‘news’ organizations that specialize and focus on bad faith propaganda, by simply claiming it’s no different than what the mainstream media does in presenting ‘disinformation.’ But there is a major difference. A mistake is bad, and everyone who fell for this story looks silly for doing so. But without a clear pattern of deliberately pushing misleading or out of context information, it suggests a mere error, as opposed to deliberate bad faith activity. The same cannot be said for all ‘news’ organizations.”

An important distinction indeed.

Cynthia Murrell, October 4, 2021

Researcher Suggests Alternative to Criminalization to Curb Fake News

September 10, 2021

Let us stop treating purveyors of fake news like criminals and instead create an atmosphere where misinformation cannot thrive. That is the idea behind one academic’s proposal, The Register explains in, “Online Disinformation Is an Industry that Needs Regulation, Says Boffin.” (Boffin is British for “scientist or technical expert.”) Dr. Ross Tapsell, director of the Australian National University’s Malaysia Institute, looked at Malaysia’s efforts to address online misinformation by criminalizing its spread. That approach has not gone so well for that nation, one in which much of its civil discourse occurs online. Reporter Laura Dobberstein writes:

“In 2018, Malaysia introduced an anti-fake news bill, the first of its kind in the world. According to the law, those publishing or circulating misleading information could spend up to six years in prison. The law put online service providers on the hook for third-party content and anyone could make an accusation. This is problematic as fake news is often not concrete or definable, existing in an ever-changing grey area. Any fake news regulation brings a whole host of freedom of speech issues with it and raises questions as to how the law might be used nefariously – for example to silence political opponents. … The law was repealed in 2019 after becoming seen as an instrument to suppress political opponents rather than protecting Malaysians from harmful information.”

Earlier this year, though, lawmakers reversed course again in the face of COVID—wielding fines of up to RM100,000 ($23,800 US) and the threat of prison for those who spread false information about the disease. Tapsell urges them to consider an alternate approach. He writes:

“Rather than adopting the common narrative of social media ‘weaponisation’, I will argue that the challenges of a contemporary ‘infodemic’ are part of a growing digital media industry and rapidly shifting information society” that is best addressed “through creating and developing a robust, critical and trustworthy digital media landscape.”

Nice idea. Tapsell points to watchdog agencies, which have already taken over digital campaigns during Malaysian elections, as one way to create this shift. His main push, though, seems to be for big tech companies like Facebook and Twitter to take action. For example, they can publicly call out purveyors of false info. After all, it is harder to retaliate against them than against local researchers and journalists, the researcher notes. He recognizes social media companies have made some efforts to halt coordinated disinformation campaigns and to make them less profitable, but insists there is more they can do. What, specifically, is unclear. We wonder—does Tapsell really mean to leave it to Big Tech to determine which news is real and which is fake? We are not sure that is the best plan.

Cynthia Murrell, September 10, 2021

Another Angle for Protecting Kids Online

September 10, 2021

Nonprofit group Campaign for Accountability has Apple playing defense for seemingly putting kids at risk. MacRumors reports, “Watchdog Investigation Finds ‘Major Weaknesses’ in Apple’s App Store Child Safety Measures.” Writer Joe Rossignol cites the group’s report as he writes:

“As part of its Tech Transparency Project, the watchdog group said it set up an Apple ID for a fictitious 14-year-old user and used it to download and test 75 apps in the App Store across several adult-oriented genres: dating, hookups, online chat, and casino/gambling. Despite all of these apps being designated as 17+ on the App Store, the investigation found the underage user could easily evade the apps’ age restrictions. Among the findings presented included a dating app that presented pornography before asking the user’s age, adult chat apps with explicit images that never asked the user’s age, and a gambling app that allowed the minor to deposit and withdraw money. The investigation also identified broader flaws in Apple’s approach to child safety, claiming that Apple and many apps ‘essentially pass the buck to each other’ when it comes to blocking underage users. The report added that a number of apps design their age verification mechanisms ‘in a way that minimizes the chance of learning the user is underage,’ and claimed that Apple takes no discernible steps to prevent this.”

Ah, buck passing, a time-honored practice. Why does Apple itself not block such content when it knows a user is underage? That is what the Campaign for Accountability's executive director would like to know. Curious readers can see more details from the report and the organization's methodology at its Tech Transparency website.

For its part, Apple points to the parental control features built into iOS and iPadOS. These settings let guardians choose which apps can be downloaded as well as the time children may spend on each app or website. The Campaign for Accountability did not have these controls activated for its hypothetical 14-year-old. Don't parents still bear ultimate responsibility for what their kids are exposed to? Trying to outsource that burden to tech companies and app developers is probably a bad idea.

Cynthia Murrell, September 10, 2021

Great Moments in Customer Service: Online May Pose Different Risks

September 6, 2021

No, I am not talking about Yext's new focus on making customer service via a connected device work better. No, I am not talking about Amazon's paying up to $1,000 for a third-party product which exhibits interesting behavior; for example, producing unexpected consequences. Yes, I am talking about a non-digital approach.

Navigate to "An Illinois Man Ran Over His Customer after a Botched Drug Sale. Here's How Long He'll Spend in Prison." Note: Prison sentences in the Land of Lincoln can be malleable. Take the terms with both salt and furikake.

The write up reports as “real” news:

Macon County Circuit Court Judge Thomas Griffith sentenced Christopher Castelli on Aug. 24 to a maximum of nine years in prison according to the plea agreement he made with the district attorney’s office. Initially, Castelli was charged with reckless homicide, but the charges were dismissed. Instead, he accepted a plea for leaving the scene of an accident resulting in the death of Alisha Gordon, 27.

Interesting. Honest Abe might wonder about this sentencing and the dismissal of the original charge. For now, online customer service does not pose this type of risk to customers.

Stephen E Arnold, September 6, 2021

Taliban: Going Dark

September 3, 2021

I spotted a story from the ever-reliable Associated Press called "Official Taliban Websites Go Offline, Though Reasons Unknown." (Note: I am terrified of the AP because quoting is an invitation for this outfit to let loose its legal eagles. I don't like this type of bird.)

I can, I think, suggest you read the original write up. I recall that the “real” news story revealed some factoids I found interesting; for example:

  • Taliban Web sites "protected" by Cloudflare have been disappeared. (What does that suggest about Cloudflare's Web performance and security capabilities?)
  • Facebook has disappeared some Taliban info and maybe accounts.
  • The estimable Twitter keeps PR maven Z. Mujahid's tweets flowing.

I had forgotten that the Taliban is not a terrorist organization. I try to learn something new each day.

Stephen E Arnold, September 3, 2021
