Counter Intuitive or Unaware of Costco?

November 30, 2021

I try to sidestep arguments with academics cranking out silly or addled reports that are supposed to be impactful. Nevertheless, I read “Shopping Trolleys Save Shoppers Money As Pushing Reduces Spending, Finds New Study.” This research gem asserts:

Psychology research has proven that triceps activation is associated with rejecting things we don’t like – for example when we push or hold something away from us – while biceps activation is associated with things we do like – for example when we pull or hold something close to our body. When testing the newly designed trolley on consumers at a supermarket, report authors Professor Zachary Estes and Mathias Streicher found that those who used shopping trolleys with parallel handles bought more products and spent 25 per cent more money than those using the standard trolley.

A couple of thoughts:

  1. A shopping cart or trolley with square wheels would do the trick too, right?
  2. A shopping cart weighing more than 50 kilos would also do the trick, particularly in small shops near retirement facilities.
  3. An ALDI-style approach, just with a cart use fee of $100, might inhibit shopping as well.

But the real proof is a visit to Costco. Here’s a snap of what I see when my wife and I visit our local big box store in rural Kentucky:

[image]

If a person can’t push one, there are motor-driven carts.

Stephen E Arnold, November 30, 2021

Facebook and Smoothing Data

November 26, 2021

I like this headline: “The Thousands of Vulnerable People Harmed by Facebook and Instagram Are Lost in Meta’s Average User Data.” Here’s a passage I noticed:

consider a world in which Instagram has a rich-get-richer and poor-get-poorer effect on the well-being of users. A majority, those already doing well to begin with, find Instagram provides social affirmation and helps them stay connected to friends. A minority, those who are struggling with depression and loneliness, see these posts and wind up feeling worse. If you average them together in a study, you might not see much of a change over time.

The write up points out:

The tendency to ignore harm on the margins isn’t unique to mental health or even the consequences of social media. Allowing the bulk of experience to obscure the fate of smaller groups is a common mistake, and I’d argue that these are often the people society should be most concerned about. It can also be a pernicious tactic. Tobacco companies and scientists alike once argued that premature death among some smokers was not a serious concern because most people who have smoked a cigarette do not die of lung cancer.

I like the word “pernicious.” But the keeper is “cancer.” The idea is, it seems to me, that Facebook (sorry, Meta) is “cancer,” a term for diseases in which abnormal cells divide without control and can invade nearby tissues. Cancer evokes a particularly sonorous word too: malignancy. Indeed, the bound phrase is particularly memorable when applied to one’s great aunt; for example, “Auntie has a malignant tumor.”

Is Facebook (sorry, Meta) smoothing numbers the way the local baker applies icing to a so-so cake laced with trendy substances like cannabutter and cannaoil? My hunch is that dumping outliers, curve fitting, and subsetting data are handy little tools.
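To see how averaging can bury a harmed minority, the rich-get-richer, poor-get-poorer pattern quoted above, consider this toy sketch in Python. The numbers are invented for illustration; nothing here is Facebook’s actual data or method:

```python
# Toy illustration -- invented numbers, not Facebook's data.
# Averaging a large "fine" group with a small "harmed" group
# can produce a mean that shows almost no change at all.
import statistics

majority = [+0.5] * 80   # 80 users: slightly better off after using the app
minority = [-2.0] * 20   # 20 users: substantially worse off

everyone = majority + minority
print(statistics.mean(everyone))  # 0.0 -> "on average, no effect"
print(statistics.mean(minority))  # -2.0 -> the harm the average buries
```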

What’s the harm?

Stephen E Arnold, November 26, 2021

Survey Says: Facebook Is a Problem

November 11, 2021

I believe everything I read on the Internet. I also have great confidence in surveys conducted by estimable news organizations. A double whammy for me was a CNN study refined by SSRS Research. You can read the big logo version at this link.

The survey reports that Facebook is a problem. Okay, who knew?

Here’s a snippet about the survey:

About one-third of the public — including 44% of Republicans and 27% of Democrats — say both that Facebook is making American society worse and that Facebook itself is more at fault than its users.

Delightful.

Stephen E Arnold, November 11, 2021

The Business Intelligence You Know Is Changing

November 11, 2021

I read “This Is the Future of Intelligence.” I have been keeping my researchers on their toes because I have an upcoming lecture about “intelligence,” not the kind graded in schools which have discarded Ds and Fs. The talk is about law enforcement and investigator-centric intelligence: persons of interest, events, timelines, and other related topics.

This article references a research report from a mid-tier consulting firm. That may ring your chimes or make you chuckle. Either way, here are three gems from the write up. I leave it to you to discern the wheat and the chaff.

How about this statement:

Prediction 1: By 2025, 10% of F500 companies will incorporate scientific methods and systematic experimentation at scale, resulting in a 50% increase in product development and business planning projects — outpacing peers.

In 36 months, one in ten Fortune 500 companies! I wonder how many of these outfits will be able to pay for the administrative overhead hitting this target will require. Revenue, not hand waving, strikes me as more important.

And this chunky Wheaties flake:

By 2026, 30% of organizations will use forms of behavioral economics and AI/ML-driven insights to nudge employees’ actions, leading to a 60% increase in desired outcomes.

If we look at bellwether outfits like Amazon and Google, I wonder if employee pushback and internal tension will deliver “desired outcomes.” What seems to be delivered instead are reports of management wonkiness, discrimination, and legal matters.

And finally, a sparkling Sugar Pop pellet:

By 2026, advances in computing will enable 10% of previously unsurmountable problems faced by F100 organizations to be solved by super-exponential advances in complex analytics.

I like the “previously unsurmountable problems” phrase. I don’t know what a super-exponential advance in complex analytics means. Oh, well. The mid-tier experts do, I assume.

Read the list of ten findings. I had a good chuckle with a snort thrown in for good measure.

Stephen E Arnold, November 11, 2021

Research? Sure. Accurate? Yeah, Sort Of

October 19, 2021

Facebook is currently under scrutiny unlike any it has seen since the 2018 Cambridge Analytica scandal. Ironically, much of the criticism cites research produced by the company itself. The Verge discusses “Why These Facebook Research Scandals Are Different.” Reporter Casey Newton examines a series of Wall Street Journal stories collectively known as The Facebook Files. We learn:

“The stories detail an opaque, separate system of government for elite users known as XCheck; provide evidence that Instagram can be harmful to a significant percentage of teenage girls; and reveal that entire political parties have changed their policies in response to changes in the News Feed algorithm. The stories also uncovered massive inequality in how Facebook moderates content in foreign countries compared to the investment it has made in the United States. The stories have galvanized public attention, and members of Congress have announced a probe. And scrutiny is growing as reporters at other outlets contribute material of their own. For instance: MIT Technology Review found that despite Facebook’s significant investment in security, by October 2019, Eastern European troll farms reached 140 million people a month with propaganda — and 75 percent of those users saw it not because they followed a page but because Facebook’s recommendation engine served it to them. ProPublica investigated Facebook Marketplace and found thousands of fake accounts participating in a wide variety of scams. The New York Times revealed that Facebook has sought to improve its reputation in part by pumping pro-Facebook stories into the News Feed, an effort known as ‘Project Amplify.’”

Yes, Facebook is doing everything it can to convince people it is a force for good despite the negative press. This includes implementing “Project Amplify” on its own platform to persuade users its reputation is actually good, despite what they may have heard elsewhere. Pay no attention to the man behind the curtain. We learn the company may also stop producing in-house research that reveals its own harmful nature. Not surprising, though Newton argues Facebook should do more research, not less—transparency would help build trust, he says. Somehow we doubt the company will take that advice.

A legacy of the Cambridge Analytica affair is the concept that social media algorithms, perhaps Facebook’s especially, are reshaping society. And not in a good way. We are still unclear how and to what extent each social media company works to curtail false and harmful content. Is Facebook finally facing a reckoning, and will it eventually extend to social media in general? See the article for more discussion.

Cynthia Murrell, October 19, 2021

Money Put to Good Use at MIT

September 22, 2021

The Massachusetts Institute of Technology had a brush with Mr. Epstein, who continues to haunt the “real news” due to that estimable royal, Prince Andrew. And what of the institution which found Mr. Epstein amiable and enthusiastic about education and research?

The MIT experts have published absolutely stunning data about driver-assist technology. “A Model for Naturalistic Glance Behavior around Tesla Autopilot Disengagements” is a title crafted with the skill of the MIT professionals who explained MIT’s interactions with Mr. Epstein.

What’s fascinating is one conclusion from this official research paper, which MIT will sell to a person eager to support this outstanding institution. Here’s the finding I circled:

Visual behavior patterns change before and after AP disengagement. Before disengagement, drivers looked less on road and focused more on non-driving related areas compared to after the transition to manual driving. The higher proportion of off-road glances before disengagement to manual driving were not compensated by longer glances ahead.

What’s this mean to a person in rural Kentucky? Vehicles which “sort of drive themselves” make drivers fiddle with their phones and do stuff not associated with paying attention to driving.

Who knew?

Stephen E Arnold, September 22, 2021

Is Pew Defining News Too Narrowly?

September 21, 2021

I read what looks like another “close enough for horseshoes” survey. The data originate from the Pew Research Center, which has adopted the role of the outfit that says, “This is what’s shaking the digital world.”

The article “News Consumption across Social Media in 2021” reports that “about half of Americans get news on social media at least sometimes, down slightly from 2020.”

But what’s news? I don’t want to dive into the definitional quandary, but news? What’s truth? Ethical behavior? Honor?

A factoid tucked into the write up is interesting: it seems that hot social media properties like Reddit, TikTok, LinkedIn (Microsoft), Snapchat, WhatsApp, and Twitch are not where Americans go for news.

What?

Let’s zoom in on Reddit. The majority of the content is news related; that is, the information calls attention to an action or instrumentality. One easy example: the discussion threads related to problems with computers. Isn’t this information news?

What about WhatsApp (Facebook)? With encrypted messaging services becoming the new Dark Web, much of the information in special interest groups focused on possible illegal activities is, according to my DarkCyber research team, news: who, what, where, when, etc.

Another issue is that anyone with an interest in an event (for instance, a law enforcement professional) may find quite “newsy” items on Facebook and YouTube pages. And the sampling used for the Pew study? Maybe not representative?

Net net: an interesting study, just a slight shading of “news.” The world has changed, and as cartoon characters once said, “Phew, phew.”

Stephen E Arnold, September 21, 2021

Smart Software: Boiling Down to a Binary Decision?

September 9, 2021

I read a write up containing a nuance which is pretty much a zero or a one; that is, a binary decision. The article is “Amid a Pandemic, a Health Care Algorithm Shows Promise and Peril.” Okay, good news and bad news. The subtitle introduces the transparency issue:

A machine learning-based score designed to aid triage decisions is gaining in popularity — but lacking in transparency.

The good news? A zippy name: The Deterioration Index. I like it.

The idea is that some proprietary smart software includes explicit black boxes. The vendor identifies the basics of the method, but does not disclose the “componentized” or “containerized” features. The analogy I use in my lectures is that no one pays attention to a resistor; it just does its job. Move on.

The write up explains:

The use of algorithms to support clinical decision making isn’t new. But historically, these tools have been put into use only after a rigorous peer review of the raw data and statistical analyses used to develop them. Epic’s Deterioration Index, on the other hand, remains proprietary despite its widespread deployment. Although physicians are provided with a list of the variables used to calculate the index and a rough estimate of each variable’s impact on the score, we aren’t allowed under the hood to evaluate the raw data and calculations.
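What that looks like in practice: clinicians get variable names and rough impact estimates, while the exact weights and formula stay hidden. Here is a purely hypothetical sketch in Python. Epic’s actual index is proprietary, and every variable name, weight, and function below is invented for illustration:

```python
# Hypothetical sketch only -- Epic's Deterioration Index is proprietary;
# these variables and weights are invented to show the transparency gap.
import math

# What clinicians reportedly get: variable names and rough impacts.
ROUGH_IMPACTS = {"respiratory_rate": "high", "heart_rate": "medium",
                 "age": "medium", "oxygen_saturation": "high"}

# What stays in the black box: the exact weights and functional form.
_HIDDEN_WEIGHTS = {"respiratory_rate": 0.09, "heart_rate": 0.02,
                   "age": 0.015, "oxygen_saturation": -0.11}
_HIDDEN_BIAS = -1.4

def deterioration_score(vitals: dict) -> float:
    """Return a 0-1 risk score; the calculation is opaque to users."""
    z = _HIDDEN_BIAS + sum(_HIDDEN_WEIGHTS[k] * v for k, v in vitals.items())
    return 1 / (1 + math.exp(-z))

print(round(deterioration_score(
    {"respiratory_rate": 22, "heart_rate": 95, "age": 70,
     "oxygen_saturation": 0.91}), 3))
```

The design point is the resistor analogy above: users see the component’s label, not its internals, and cannot rerun the raw calculations themselves.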

From my point of view this is becoming standard smart software practice. In fact, when I think of “black boxes” I conjure an image of Stanford University and University of Washington professors, graduate students, and Google AI types who share these outfits’ DNA. Keep the mushrooms in the cave, not out in the sun’s brilliance. I could be wrong, of course, but I think this write up touches upon a matter that some want to forget.

And what is this marginalized issue?

I call it the Timnit Gebru syndrome. A tiny issue buried deep in a data set or method assumed to be A-okay may not be okay at all. What’s the fix? An ostrich-type reaction? A chuckle from someone with droit de seigneur? Moving forward because regulators and newly minted government initiatives designed to examine bias in AI are moving at pre-Internet speed?

I think this article provides an interesting case example about zeros and ones. Where’s the judgment? In a black box? Embedded and out of reach.

Stephen E Arnold, September 9, 2021

Techno-Psych: Perception, Remembering a First Date, and Money

September 9, 2021

Navigate to “Investor Memory of Past Performance Is Positively Biased and Predicts Overconfidence.” Download the PDF of the complete technical paper at this link. What will you find? Scientific verification of a truism; specifically, people remember good times and embellish those memories with sprinkles.

The write up explains:

First, we find that investors’ memories for past performance are positively biased. They tend to recall returns as better than achieved and are more likely to recall winners than losers. No published paper has shown these effects with investors. Second, we find that these positive memory biases are associated with overconfidence and trading frequency. Third, we validated a new methodology for reducing overconfidence and trading frequency by exposing investors to their past returns.

The issue at hand is investors who know they are financial poobahs. Mix this distortion of reality with technology and what does one get? My answer to this question is, “NFTs for burned Banksy art.”
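A toy simulation makes the paper’s first finding concrete: if losers are sometimes forgotten and winners embellished, remembered performance beats achieved performance. The returns below are invented for illustration and have nothing to do with the study’s data:

```python
# Toy simulation -- invented returns, not the study's data.
# Forgetting losers and embellishing winners makes remembered
# performance look better than achieved performance.
import random
import statistics

random.seed(7)
actual = [random.gauss(0.01, 0.05) for _ in range(50)]  # true trade returns

recalled = []
for r in actual:
    if r > 0:
        recalled.append(r * 1.2)   # winners recalled, and embellished
    elif random.random() < 0.5:
        recalled.append(r)         # losers recalled only half the time

print(f"actual mean:   {statistics.mean(actual):+.4f}")
print(f"recalled mean: {statistics.mean(recalled):+.4f}")  # rosier
```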

The best line in the academic study, in my view, is:

Overconfidence is hazardous to your wealth.

Who knew? My answer: the 2004 paper called “Overconfidence and the Big Five.” I also think of my 89-year-old great-grandmother, who told me when I was 13, “Don’t be overconfident.”

I wonder if the Facebook artificial intelligence wizards were a bit too overconfident in the company’s smart software. There was, if I recall, a question about metatagging a human as a gorilla.

Stephen E Arnold, September 9, 2021

Not an Onion Report: Handwaving about Swizzled Data

August 24, 2021

At the suggestion of a friend, I read “These Data Are Not Just Excessively Similar. They Are Impossibly Similar.” At first glance, I thought the write up was a column in an Onion-type publication. Nope: someone copied a data set and pasted it into itself.

Here’s what the write up says:

The paper’s Excel spreadsheet of the source data indicated mathematical malfeasance.

Malfeasance. Okay.

But what caught my interest was the inclusion of this name: Dan Ariely. If this is the Dan Ariely who wrote these books, that fact alone is suggestive. If it is a different person, then we are dealing with routine data dumbness or data dishonesty.

[image]

The write up contains what I call academic ducking and covering. You may enjoy this game, but I find it boring. Non-reproducible results, swizzled data, and massaged numerical recipes are the status quo.
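For the curious, here is a minimal sketch of why “impossibly similar” is the right phrase. This is not the investigators’ actual forensic method, and the data are simulated; the point is simply that two samples drawn independently almost never match value for value:

```python
# Minimal sketch -- simulated data, not the investigators' actual method.
import random

random.seed(1)
condition_a = [round(random.gauss(50, 10), 1) for _ in range(100)]
condition_b_honest = [round(random.gauss(50, 10), 1) for _ in range(100)]
condition_b_pasted = list(condition_a)  # the "copy and paste" scenario

def overlap(a, b):
    """Fraction of values in b that also appear somewhere in a."""
    a_set = set(a)
    return sum(x in a_set for x in b) / len(b)

print(overlap(condition_a, condition_b_honest))  # modest, by chance
print(overlap(condition_a, condition_b_pasted))  # 1.0 -- impossibly similar
```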

Is there a fix? Nope, not as long as most people cannot make change or add up the cost of items in a grocery basket. Smart software depends on data. And if those data are like those referenced in this Metafilter article, well: excitement.

Stephen E Arnold, August 24, 2021
