What Google Knows about the Honest You

December 10, 2021

I read this quote in a Kleenex story about Google’s lists of popular searches:

“You’re never as honest as you are with your search engine. You get a sense of what people genuinely care about and genuinely want to know — and not just how they’re presenting themselves to the rest of the world.”

The alleged Googler crafting this statement is a data editor. You can read more about the highly selective and unverified Google search trends in “What Google’s Trending Searches Say about America in 2021.”

For me, the statement allows several observations:

  1. A person acting in an unguarded way reveals information not usually disseminated in “guarded” settings; for example, a job interview.
  2. The word “honest” implies an unvarnished look at the psycho-social factors within a single person.
  3. A collection of data points about the psycho-social aspects of a single person makes it possible to tag, classify, and relate that individual to others. Numerical procedures allow a person or system with access to those data to predict certain behaviors, predispositions, or actions.

Thus, the collection of searches, clicks, and items created by an individual using Google services such as Gmail and YouTube creates a palette of colors from which a data maestro can paint a picture.

Predestination has never been easier, more automatable, or cheaper to convert into an actionable knowledgebase for smart software. Yep, just simple queries. Useful indeed.

Stephen E Arnold, December 10, 2021

More AI Foibles: Inheriting Biases

December 7, 2021

Artificial intelligence algorithms are already implemented in organizations, but the final decisions are still made by humans. It is a fact that algorithms are, unfortunately, programmed with biases against minorities and marginalized communities. It might appear that these biases are purposely built into the AI, but they are not. The problem is that AI designers lack sufficiently diverse data to feed their algorithms. Biases are discussed in The Next Web’s article, “Worried About AI Ethics? Worry About Developers’ Ethics First.”

The article cites Asimov’s famous three laws of robotics and notes that ethics change depending on the situation and the individual. AI systems are unable to distinguish these variables the way humans can, so they must be taught. The question is what ethics AI developers are “teaching” their creations.

Autonomous cars are a great example because they rely on human and AI input to make decisions to avoid accidents. Is there a moral obligation to program autonomous cars to override a driver’s decision to prevent collisions? Medicine is another worrisome field. Doctors still make critical choices, but will AI remove the human factor in the not-too-distant future? There are also weaponized drones and other military robots that could prolong warfare or be hacked.

The philosophical trolley problem is cited, followed by this:

“People often struggle to make decisions that could have a life-changing outcome. When evaluating how we react to such situations, one study reported choices can vary depending on a range of factors including the respondent’s age, gender and culture.

When it comes to AI systems, the algorithms’ training processes are critical to how they will work in the real world. A system developed in one country can be influenced by the views, politics, ethics and morals of that country, making it unsuitable for use in another place and time.

If the system was controlling aircraft, or guiding a missile, you’d want a high level of confidence it was trained with data that’s representative of the environment it’s being used in.”

The United Nations has called for “a comprehensive global standard-setting instrument” for a global ethical AI network. It is a step in the right direction, especially when it comes to ethnic diversity problems. Gaps in AI that does not take into account eye shape, skin color, or other physical features are understandably overlooked by developers who do not share those features. These gaps can be fixed with broadened data collections.

A bigger problem is the differential treatment of the sexes and of socioeconomic backgrounds. Women are treated as less than second-class citizens in many societies, and socioeconomic status determines nearly everything in every country. How are developers going to address these ethical issues? How about a deep dive with a snorkel to investigate?

Whitney Grace, December 7, 2021

Counter Intuitive or Unaware of Costco?

November 30, 2021

I try to sidestep arguments with academics cranking out silly or addled reports that are supposed to be impactful. I read “Shopping Trolleys Save Shoppers Money As Pushing Reduces Spending, Finds New Study.” This research gem asserts:

Psychology research has proven that triceps activation is associated with rejecting things we don’t like – for example when we push or hold something away from us – while biceps activation is associated with things we do like – for example when we pull or hold something close to our body. When testing the newly designed trolley on consumers at a supermarket, report authors Professor Zachary Estes and Mathias Streicher found that those who used shopping trolleys with parallel handles bought more products and spent 25 per cent more money than those using the standard trolley.

A couple of thoughts:

  1. A shopping cart or trolley with square wheels would do the trick too, right?
  2. A shopping cart weighing more than 50 kilos would do the trick, particularly in small shops near retirement facilities?
  3. An ALDI style approach, just with a cart use fee of $100 might inhibit shopping?

But the real proof is a visit to Costco. Here’s a snap of what I see when my wife and I visit our local big box store in rural Kentucky:

image

If the person can’t push it, there are motor driven carts.

Stephen E Arnold, November 30, 2021

Facebook and Smoothing Data

November 26, 2021

I like this headline: “The Thousands of Vulnerable People Harmed by Facebook and Instagram Are Lost in Meta’s Average User Data.” Here’s a passage I noticed:

consider a world in which Instagram has a rich-get-richer and poor-get-poorer effect on the well-being of users. A majority, those already doing well to begin with, find Instagram provides social affirmation and helps them stay connected to friends. A minority, those who are struggling with depression and loneliness, see these posts and wind up feeling worse. If you average them together in a study, you might not see much of a change over time.

The write up points out:

The tendency to ignore harm on the margins isn’t unique to mental health or even the consequences of social media. Allowing the bulk of experience to obscure the fate of smaller groups is a common mistake, and I’d argue that these are often the people society should be most concerned about. It can also be a pernicious tactic. Tobacco companies and scientists alike once argued that premature death among some smokers was not a serious concern because most people who have smoked a cigarette do not die of lung cancer.

I like the word “pernicious.” But the keeper is “cancer.” The idea, it seems to me, is that Facebook — sorry, Meta — is “cancer.” Cancer is a term for diseases in which abnormal cells divide without control and can invade nearby tissues. Cancer evokes a particularly sonorous word too: malignancy. Indeed, the bound phrase when applied to one’s great aunt is particularly memorable; for example, Auntie has a malignant tumor.

Is Facebook — sorry, Meta — smoothing numbers the way the local baker applies icing to a so-so cake laced with trendy substances like cannabutter and cannaoil? My hunch is that dumping outliers, curve fitting, and subsetting data are handy little tools.
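The averaging effect the quoted passage describes is easy to demonstrate. Here is a minimal Python sketch with invented numbers (not data from any Meta study): a large group that improves slightly and a small group that deteriorates sharply average out to roughly zero.

```python
# Hypothetical illustration: averaging two subgroups can hide harm.
# All numbers are invented for this sketch.
majority = [+0.5] * 90   # well-being change for 90 users who feel a bit better
minority = [-4.0] * 10   # change for 10 users who feel much worse

# The pooled average looks like "no effect" . . .
overall = (sum(majority) + sum(minority)) / (len(majority) + len(minority))
print(f"average change: {overall:+.2f}")          # near zero

# . . . while the minority subgroup shows substantial harm.
print(f"minority average: {sum(minority) / len(minority):+.1f}")
```

The point is not the arithmetic; it is that a pooled mean is the wrong summary when the harm is concentrated in a small subgroup.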

What’s the harm?

Stephen E Arnold, November 26, 2021

Survey Says: Facebook Is a Problem

November 11, 2021

I believe everything I read on the Internet. I also have great confidence in surveys conducted by estimable news organizations. A double whammy for me was the SSRS research refined into a CNN study. You can read the big logo version at this link.

The survey reports that Facebook is a problem. Okay, who knew?

Here’s a snippet about the survey:

About one-third of the public — including 44% of Republicans and 27% of Democrats — say both that Facebook is making American society worse and that Facebook itself is more at fault than its users.

Delightful.

Stephen E Arnold, November 11, 2021

The Business Intelligence You Know Is Changing

November 11, 2021

I read “This Is the Future of Intelligence.” I have been keeping my researchers on their toes because I have an upcoming lecture about “intelligence,” and not the kind graded in schools which have discarded Ds and Fs. The talk is about law enforcement and investigator-centric intelligence: persons of interest, events, timelines, and other related topics.

This article references a research report from a mid-tier consulting firm. That may ring your chimes or make you chuckle. Either way, here are three gems from the write up. I leave it to you to discern the wheat from the chaff.

How about this statement:

Prediction 1: By 2025, 10% of F500 companies will incorporate scientific methods and systematic experimentation at scale, resulting in a 50% increase in product development and business planning projects — outpacing peers.

In 36 months, one in ten Fortune 500 companies! I wonder how many of these outfits will be able to pay for the administrative overhead hitting this target will require. Revenue, not hand waving, strikes me as more important.

And this chunky Wheaties flake:

By 2026, 30% of organizations will use forms of behavioral economics and AI/ML-driven insights to nudge employees’ actions, leading to a 60% increase in desired outcomes.

If we look at bellwether outfits like Amazon and Google, I wonder if the employee push back and internal tension will deliver “desired outcomes.” What seems to be delivered are reports of management wonkiness, discrimination, and legal matters.

And finally, a sparkling Sugar Pop pellet:

By 2026, advances in computing will enable 10% of previously unsurmountable problems faced by F100 organizations to be solved by super-exponential advances in complex analytics.

I like the “previously unsurmountable problems” phrase. I don’t know what a super-exponential advance in complex analytics means. Oh, well. The mid tier experts do, I assume.

Read the list of ten findings. I had a good chuckle with a snort thrown in for good measure.

Stephen E Arnold, November 11, 2021

Research? Sure. Accurate? Yeah, Sort Of

October 19, 2021

Facebook is currently under scrutiny unlike any it has seen since the 2018 Cambridge Analytica scandal. Ironically, much of the criticism cites research produced by the company itself. The Verge discusses “Why These Facebook Research Scandals Are Different.” Reporter Casey Newton tells us about a series of stories about Facebook published by The Wall Street Journal collectively known as The Facebook Files. We learn:

“The stories detail an opaque, separate system of government for elite users known as XCheck; provide evidence that Instagram can be harmful to a significant percentage of teenage girls; and reveal that entire political parties have changed their policies in response to changes in the News Feed algorithm. The stories also uncovered massive inequality in how Facebook moderates content in foreign countries compared to the investment it has made in the United States. The stories have galvanized public attention, and members of Congress have announced a probe. And scrutiny is growing as reporters at other outlets contribute material of their own. For instance: MIT Technology Review found that despite Facebook’s significant investment in security, by October 2019, Eastern European troll farms reached 140 million people a month with propaganda — and 75 percent of those users saw it not because they followed a page but because Facebook’s recommendation engine served it to them. ProPublica investigated Facebook Marketplace and found thousands of fake accounts participating in a wide variety of scams. The New York Times revealed that Facebook has sought to improve its reputation in part by pumping pro-Facebook stories into the News Feed, an effort known as ‘Project Amplify.’”

Yes, Facebook is doing everything it can to convince people it is a force for good despite the negative press. This includes implementing “Project Amplify” on its own platform to persuade users its reputation is actually good, despite what they may have heard elsewhere. Pay no attention to the man behind the curtain. We learn the company may also stop producing in-house research that reveals its own harmful nature. Not surprising, though Newton argues Facebook should do more research, not less—transparency would help build trust, he says. Somehow we doubt the company will take that advice.

A legacy of the Cambridge Analytica affair is the realization that social media algorithms, perhaps Facebook’s especially, are reshaping society. And not in a good way. We are still unclear how, and to what extent, each social media company works to curtail false and harmful content. Is Facebook finally facing a reckoning, and will it eventually extend to social media in general? See the article for more discussion.

Cynthia Murrell, October 19, 2021

Money Put to Good Use at MIT

September 22, 2021

The Massachusetts Institute of Technology had a brush with Mr. Epstein, who continues to haunt the “real news” due to that estimable royal, Prince Andrew. And what of the institution which found Mr. Epstein amiable and enthusiastic about education and research?

The MIT experts have published absolutely stunning data about driver-assist technology. “A Model for Naturalistic Glance Behavior around Tesla Autopilot Disengagements” is a title crafted with the skill of the MIT professionals who explained MIT’s interactions with Mr. Epstein.

What’s fascinating is one conclusion from this official research paper, which MIT will sell to a person eager to support this outstanding institution. Here’s the finding I circled:

Visual behavior patterns change before and after AP disengagement. Before disengagement, drivers looked less on road and focused more on non-driving related areas compared to after the transition to manual driving. The higher proportion of off-road glances before disengagement to manual driving were not compensated by longer glances ahead.

What’s this mean to a person in rural Kentucky? Vehicles which “sort of drive themselves” make drivers fiddle with their phones and do stuff not associated with paying attention to driving.

Who knew?

Stephen E Arnold, September 22, 2021

Is Pew Defining News Too Narrowly?

September 21, 2021

I read what looks like another “close enough for horse shoes survey.” The data originate from the Pew Research Center, which has adopted the role of the outfit which says, “This is what’s shaking the digital world.”

The article “News Consumption across Social Media in 2021” reports that “about half of Americans get news on social media at least sometimes, down slightly from 2020.”

But what’s news? I don’t want to dive into the definitional quandary, but news? What’s truth? Ethical behavior? Honor?

There is a factoid tucked into the write up which is interesting because it seems that hot social media properties like Reddit, TikTok, LinkedIn (Microsoft), Snapchat, WhatsApp, and Twitch are not where Americans go for news.

What?

Let’s zoom into Reddit. The majority of the content is news related; that is, the information calls attention to an action or instrumentality. One easy example is the discussion threads related to problems with computers. Isn’t this information news?

What about WhatsApp (Facebook)? With encrypted messaging services becoming the new Dark Web, much of the information in special interest groups focused on possibly illegal activities is, according to my DarkCyber research team, news: who, what, where, when, etc.

Another issue is that anyone with an interest in an event (for instance, a law enforcement professional) may find quite “newsy” items on Facebook and YouTube pages. And the sampling used for the Pew study? Maybe not representative?

Net net: Interesting study, just a slight shading of “news.” The world has changed, and as cartoon characters once said, “Phew, phew.”

Stephen E Arnold, September 21, 2021

Smart Software: Boiling Down to a Binary Decision?

September 9, 2021

I read a write up which contained a nuance which is pretty much a zero or a one; that is, a binary decision. The article is “Amid a Pandemic, a Health Care Algorithm Shows Promise and Peril.” Okay, good news and bad news. The subtitle introduces the transparency issue:

A machine learning-based score designed to aid triage decisions is gaining in popularity — but lacking in transparency.

The good news? A zippy name: The Deterioration Index. I like it.

The idea is that some proprietary smart software includes explicit black boxes. The vendor identifies the basics of the method, but does not disclose the “componentized” or “containerized” features. The analogy I use in my lectures is that no one pays attention to a resistor; it just does its job. Move on.

The write up explains:

The use of algorithms to support clinical decision making isn’t new. But historically, these tools have been put into use only after a rigorous peer review of the raw data and statistical analyses used to develop them. Epic’s Deterioration Index, on the other hand, remains proprietary despite its widespread deployment. Although physicians are provided with a list of the variables used to calculate the index and a rough estimate of each variable’s impact on the score, we aren’t allowed under the hood to evaluate the raw data and calculations.

From my point of view this is now becoming a standard smart software practice. In fact, when I think of “black boxes” I conjure an image of Stanford University and the University of Washington professors, graduate students, and Google-AI types who share these outfits’ DNA. Keep the mushrooms in the cave, not out in the sun’s brilliance. I could be wrong, of course, but I think this write up touches upon a matter that some want to forget.

And what is this marginalized issue?

I call it the Timnit Gebru syndrome. A tiny issue buried deep in a data set or method assumed to be A-Okay may not be. What’s the fix? An ostrich-type reaction, a chuckle from someone with droit de seigneur? Moving forward because regulators and newly-minted government initiatives designed to examine bias in AI are moving with pre-Internet speed?

I think this article provides an interesting case example about zeros and ones. Where’s the judgment? In a black box? Embedded and out of reach.

Stephen E Arnold, September 9, 2021
