Is Pew Defining News Too Narrowly?

September 21, 2021

I read what looks like another “close enough for horseshoes” survey. The data originate from the Pew Research Center, which has adopted the role of the outfit that says, “This is what’s shaking the digital world.”

The article “News Consumption across Social Media in 2021” reports that “about half of Americans get news on social media at least sometimes, down slightly from 2020.”

But what’s news? I don’t want to dive into the definitional quandary, but news? What’s truth? Ethical behavior? Honor?

There is a factoid tucked into the write up which is interesting because it seems that hot social media properties like Reddit, TikTok, LinkedIn (Microsoft), Snapchat, WhatsApp, and Twitch are not where Americans go for news.

Let’s zoom into Reddit. The majority of the content is news related; that is, the information calls attention to an action or instrumentality. One easy example is the discussion threads related to problems with computers. Isn’t this information news?

What about WhatsApp (Facebook)? With encrypted messaging services becoming the new Dark Web, much of the information on special interest groups focused on possible illegal activities is, according to my DarkCyber research team, news: Who, what, where, when, etc.

Another issue is that anyone with an interest in an event (for instance, a law enforcement professional) may find quite “newsy” items on Facebook and YouTube pages. And the sampling used for the Pew study? Maybe not representative?

Net net: Interesting study just a slight shading of “news.” The world has changed and as cartoon characters once said, “Phew, phew.”

Stephen E Arnold, September 21, 2021

Smart Software: Boiling Down to a Binary Decision?

September 9, 2021

I read a write up which contained a nuance which is pretty much a zero or a one; that is, a binary decision. The article is “Amid a Pandemic, a Health Care Algorithm Shows Promise and Peril.” Okay, good news and bad news. The subtitle introduces the transparency issue:

A machine learning-based score designed to aid triage decisions is gaining in popularity — but lacking in transparency.

The good news? A zippy name: The Deterioration Index. I like it.

The idea is that some proprietary smart software includes explicit black boxes. The vendor identifies the basics of the method, but does not disclose the “componentized” or “containerized” features. The analogy I use in my lectures is that no one pays attention to a resistor; it just does its job. Move on.

The write up explains:

The use of algorithms to support clinical decision making isn’t new. But historically, these tools have been put into use only after a rigorous peer review of the raw data and statistical analyses used to develop them. Epic’s Deterioration Index, on the other hand, remains proprietary despite its widespread deployment. Although physicians are provided with a list of the variables used to calculate the index and a rough estimate of each variable’s impact on the score, we aren’t allowed under the hood to evaluate the raw data and calculations.

From my point of view this is now becoming a standard smart software practice. In fact, when I think of “black boxes” I conjure an image of Stanford University and the University of Washington professors, graduate students, and Google-AI types who share these outfits’ DNA. Keep the mushrooms in the cave, not out in the sun’s brilliance. I could be wrong, of course, but I think this write up touches upon what may be a matter that some want to forget.

And what is this marginalized issue?

I call it the Timnit Gebru syndrome. A tiny issue buried deep in a data set or method assumed to be A-Okay may not be. What’s the fix? An ostrich-type reaction, a chuckle from someone with droit de seigneur? Moving forward because regulators and newly-minted government initiatives designed to examine bias in AI are moving with pre-Internet speed?

I think this article provides an interesting case example about zeros and ones. Where’s the judgment? In a black box? Embedded and out of reach.

Stephen E Arnold, September 9, 2021

Techno-Psych: Perception, Remembering a First Date, and Money

September 9, 2021

Navigate to “Investor Memory of Past Performance Is Positively Biased and Predicts Overconfidence.” Download the PDF of the complete technical paper at this link. What will you find? Scientific verification of a truism; specifically, people remember good times and embellish those memories with sprinkles.

The write up explains:

First, we find that investors’ memories for past performance are positively biased. They tend to recall returns as better than achieved and are more likely to recall winners than losers. No published paper has shown these effects with investors. Second, we find that these positive memory biases are associated with overconfidence and trading frequency. Third, we validated a new methodology for reducing overconfidence and trading frequency by exposing investors to their past returns.

The issue at hand is investors who know they are financial poobahs. Mix this distortion of reality with technology and what does one get? My answer to this question is, “NFTs for burned Banksy art.”

The best line in the academic study, in my view, is:

Overconfidence is hazardous to your wealth.

Who knew? My answer is the 2004 paper called “Overconfidence and the Big Five.” I also think of my 89-year-old great-grandmother, who told me when I was 13, “Don’t be overconfident.”

I wonder if the Facebook artificial intelligence wizards were a bit too overconfident in the company’s smart software. There was, if I recall, a question about metatagging a human as a gorilla.

Stephen E Arnold, September 9, 2021

Not an Onion Report: Handwaving about Swizzled Data

August 24, 2021

I read at the suggestion of a friend “These Data Are Not Just Excessively Similar. They Are Impossibly Similar.” At first glance, I thought the write up was a column in an Onion-type of publication. Nope, someone copied the same data set and pasted it into itself.

Here’s what the write up says:

The paper’s Excel spreadsheet of the source data indicated mathematical malfeasance.

Malfeasance. Okay.

But what caught my interest was the inclusion of this name: Dan Ariely. If this is the Dan Ariely who wrote these books, that fact alone is suggestive. If it is a different person, then we are dealing with routine data dumbness or data dishonesty.

The write up contains what I call academic ducking and covering. You may enjoy this game, but I find it boring. Non-reproducible results, swizzled data, and massaged numerical recipes are the status quo.

Is there a fix? Nope, not as long as most people cannot make change or add up the cost of items in a grocery basket. Smart software depends on data. And if those data are like those referenced in this Metafilter article, well. Excitement.

Stephen E Arnold, August 24, 2021

Apple: Change Is a Constant in the Digital Orchard

August 18, 2021

Do you remember how plans would come together at the last minute when you were in high school? Once the gaggle met up, plans would change again. I do. Who knew what was going on? When my parents asked me, “Where are you going?” I answered directly: “I don’t know yet.”

Apple sparked a moment of déjà vu for me when I read “Apple Alters Planned New System for Detecting Child Sex Abuse Images over Privacy Concerns.” The write up explained that the high school science club members have allowed events to shape their plans.

Even more interesting is what the new course of action will be; to wit:

The tech giant has said the system will now only hunt for images that have been flagged by clearinghouses in multiple countries.

How’s this going to work? Mode, median, mean, row vector value smoothing, other? The write up states:

Apple had declined to say how many matched images on a phone or a computer it would take before the operating system notifies them for a human review and possible reporting to authorities.

Being infused with the teenage high school science club approach to decision making, some give the impression of being confused or disassociated from the less intelligent herd.

I have some questions about how these “clearinghouses in multiple countries” will become part of the Apple method. But as interested as I am in who gets to provide inputs, I am more interested in those thresholds and algorithms.

I don’t have to worry: one of the Apple science club managers apparently believes that the core of the system will return 99 percent or greater accuracy.

That’s pretty accurate because that’s six sigma territory for digital content in digital content land. Amazing.
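Some back-of-the-envelope arithmetic shows why the thresholds matter. The photo count below is my assumption for illustration, not an Apple figure, and I am treating the 99 percent claim as a simple per-image accuracy:

```python
# What "99 percent or greater accuracy" could mean at scale.
# The corpus size is an assumed, illustrative number.

accuracy = 0.99                      # assumed per-image accuracy
false_positive_rate = 1 - accuracy   # 1 percent of benign images misflagged
photos_scanned = 1_000_000_000       # assume one billion photos scanned

false_positives = photos_scanned * false_positive_rate
print(f"{false_positives:,.0f} benign photos misflagged")  # 10,000,000
```

For comparison, an actual six sigma process allows about 3.4 defects per million opportunities, which would be roughly 3,400 misflags per billion, not ten million.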

But that’s the teen spirit which made high school science club decisions about what to do to prank the administrators so much fun. What happens if one chows down on too many digital apples? Oh, oh.

Stephen E Arnold, August 18, 2021

Why Some Outputs from Smart Software Are Wonky

July 26, 2021

Some models work like a champ. Utility rate models are reasonably reliable. When it is hot, use of electricity goes up. Rates are then “adjusted.” Perfect. Other models are less solid; for example, Bayesian systems which are not checked every hour or large neural nets which are “assumed” to be honking along like a well-ordered flight of geese. Why do I offer such Negative Ned observations? Experience, for one thing, and the nifty little concepts tossed out by Ben Kuhn, a Twitter persona. You can locate this string of observations at this link. Well, you could as of July 26, 2021, at 6:30 am US Eastern time. Here’s a selection of what are apparently the highlights of Mr. Kuhn’s conversation with “a former roommate.” That’s provenance enough for me.

Item One:

Most big number theory results are apparently 50-100 page papers where deeply understanding them is ~as hard as a semester-long course. Because of this, ~nobody has time to understand all the results they use—instead they “black-box” many of them without deeply understanding.

Could this be true? How could newly minted “be an expert with our $40 online course” professionals, who use models packaged in downloadable, easy-to-plug-in modules, be unfamiliar with the inner workings of said bundles of brilliance? Impossible? Really?

Item Two:

A lot of number theory is figuring out how to stitch together many different such black boxes to get some new big result. Roommate described this as “flailing around” but also highly effective and endorsed my analogy to copy-pasting code from many different Stack Overflow answers.

Oh, come on. Flailing around. Do developers flail or do they “trust” the outfits who pretend to know how some multi-layered systems work. Fiddling with assumptions, thresholds, and (close your ears) the data themselves are never, ever a way to work around a glitch.

Item Three

Roommate told a story of using a technique to calculate a number and having a high-powered prof go “wow, I didn’t know you could actually do that”

No kidding? That’s impossible in general, and that expression would never be uttered at Amazon-, Facebook-, and Google-type operations, would it?

Will Mr. Kuhn be banned for heresy? [Keep in mind how Wikipedia defines this term: “any belief or theory that is strongly at variance with established beliefs or customs, in particular the accepted beliefs of a church or religious organization.”] Just repeating an idea once would warrant a close encounter with an Iron Maiden or a pile of firewood. Probably not today. Someone might emit a slightly critical tweet, however.

Stephen E Arnold, July 26, 2021

A Google Survey: The Cloud Has Headroom

June 17, 2021

Google sponsored a study. You can read it here. There’s a summary of the report in “Manufacturers Allocate One Third of Overall IT Spend to AI, Survey Shows.”

First, the methodology is presented on the final page of the report. Here’s a snippet:

The survey was conducted online by The Harris Poll on behalf of Google Cloud, from October 15 to November 4, 2020, among 1,154 senior manufacturing executives in France (n=150), Germany (n=200), Italy (n=154), Japan (n=150), South Korea (n=150), the UK (n=150), and the U.S. (n=200) who are employed full-time at a company with more than 500 employees, and who work in the manufacturing industry with a title of director level or higher. The data in each country was weighted by number of employees to bring them into line with actual company size proportions in the population. A global post-weight was applied to ensure equal weight of each country in the global total.

Google apparently wants to make data a singular noun. That’s Googley. Also, there are two references to weighting; however, there are no data for how the weighting factors were calculated nor why the weighting factors were needed for what boils down to a set of countries representing the developed world. I did not spot any information about the actual selection process; for example, mailing out a request to a larger set and then taking those who self select is a practice I have encountered in the past. Was that the method in use here? How much back and forth was there between the Harris unit and the Google managers prior to the crafting of the final report? Does this happen? Sure, those who pay want a flash report and then want to “talk about” the data. Is it possible weighting factors were used to make the numbers flow? I don’t know. The study was conducted in the depths of the Covid crisis. Was that a factor? Were those in the sample producing revenue from their AI infused investments? Sorry, no data available.
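The “global post-weight” the methodology mentions can be sketched. The respondent counts come from the report; the formula below is my guess at a simple equal-country post-weight, since the report publishes no factors:

```python
# Respondent counts per country, as stated in the survey methodology.
country_n = {"France": 150, "Germany": 200, "Italy": 154, "Japan": 150,
             "South Korea": 150, "UK": 150, "US": 200}

total = sum(country_n.values())   # 1,154 respondents in all
target = total / len(country_n)   # equal-weight target per country

# Hypothetical post-weight: scale each country's sample to the target
# so every country contributes equally to the global total.
post_weights = {c: target / n for c, n in country_n.items()}

# A respondent in an oversampled country counts for less:
print(post_weights["Germany"] < post_weights["France"])  # True
```

What this sketch cannot tell us, and what the report also omits, is how the within-country employee-size weights were derived.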

What were the findings?

Surprise, surprise. Artificial intelligence is a hot button in the manufacturing sector. Those who are into smart software are spending a hefty chunk of their “spend” budget for it. If that AI is delivered from the cloud, then bingo, the headroom for growth is darned good.

The bad news is that two thirds of those in the sample are into AI already. The big tech sharks will be swarming to upsell those early adopters and compete ferociously for the remaining one third who have yet to get the message that AI is a big deal.

Guess what countries are leaders in AI. If you said China, wrong. Go for Italy and Germany. The US was in the middle of the pack. The laggards were Japan and Korea. And China? Hey, sorry, I did not see those data in the report. My bad.

Interesting stuff in these sponsored research projects with unexplained weightings which line up with what the Google says it is doing really well.

Stephen E Arnold, June 17, 2021

Search Share, Anyone? Qwant, Swisscows, Yandex, Yippy? (Oh, Sorry, Yippy May Be a Goner)

May 17, 2021

A recent study by marketing firm Adam & Eve DDB examined the impact of search-result placement on brand visibility over the past six years. McLellan Marketing Group summarizes the findings in its post, “Share of Search.” A company’s “share of search” is the percentage of searches for its product category that result in its site popping up near the top. The Google Analytics dashboard helpfully displays organizations’ referrals for specific keywords and phrases, while the Google Keyword Tool reports overall searches for each term or phrase. The study checked out the metrics for three examples. We learn:

“[Adam & Eve DDB’s Les] Binet explored three categories: an expensive considered purchase (automotive), a commodity (gas and electricity) and a lower-priced but very crowded brand segment (mobile phone handsets). The results were very telling. Here are some of the biggest takeaways:

Share of search correlates with market share in all three categories.

Share of search is a leading indicator/predictor of share of market – when share of search goes up, share of market tends to go up, and when share of search goes down, share of market falls.

This long-term prediction can also act as an early warning system for brands in terms of their market share.

Share of voice (advertising) has two effects on share of search: a significant short-term impact that produces a big burst but then fades rapidly, and a smaller, longer-term effect that lingers for a very long time.

The long-term effects build on each other, sustaining and growing over time.

Share of search could also be a new measure for brand strength or health of a brand by measuring the base level of share of search without advertising.

While share of search provides essential quantitative data, brands should also use qualitative research and sentiment analysis to get a more robust picture.”
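The share-of-search metric described above reduces to simple arithmetic. Here is a hypothetical example; the brand names and search counts are invented for illustration:

```python
# Searches for each brand within one product category (invented numbers).
category_searches = {"BrandA": 42_000, "BrandB": 31_000, "BrandC": 27_000}

# Share of search: each brand's searches as a fraction of the category total.
total = sum(category_searches.values())
share_of_search = {b: n / total for b, n in category_searches.items()}

print(f"BrandA share of search: {share_of_search['BrandA']:.1%}")  # 42.0%
```

Tracked over time and compared against market share, this ratio is what Binet treats as a leading indicator.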

We are told that when a brand’s search share surpasses its market share, growth is on the way. Yippee! How can one ensure such a result? Writer Drew McLellan reminds us that relevant content tailored to one’s audience is the key to organic search performance. Or one could just take the shortcut: buying Facebook and Google ads also does the trick. But we wonder—where is the fun in that? Yippy? Yippy? Duck Ducking the search thing?

Cynthia Murrell, May 17, 2021

Digital 2021: Lots of Numbers

April 23, 2021

One of the Beyond Search team called my attention to the We Are Social / Hootsuite “Digital 2021 April Global Statshot Report.” The original link did not resolve. After a bit of clicking around, we did locate the presentation on the outstanding SlideShare service. No, the SlideShare search function did not work for us, but we know that it will return to its glory soon. Real soon, perhaps?

The report with the numbers is located at this link. If that doesn’t work, there is an index located at this link. If these go dead, you can try the We Are Social / Hootsuite explainer at this Datareportal link.

After that bit of housekeeping, what is the “Digital 2021 April Global Statshot Report”? The answer is that it is:

All the latest stats, insights, and trends you need to make sense of how the world uses the internet, mobile, social media, and ecommerce in April 2021. For more reports, including the latest global trends and in-depth local data for more than 240 countries and territories around the world, visit

As readers of this blog have heard, “all” is a trigger word. I want to know how many Dark Web encrypted message services are operated by state actors, not addled college students. Did I find the answer? Nope. So the “all” is baloney.

The report does provide assorted disclaimers and numerous big numbers; for example, 55.1 percent of 7,850,000,000 people are active social media users. Pretty darned exact. When I was on a trip to Wuhan, China, I was told by our government provided guide, “No one is sure how many people live in Wuhan. There are different methods of counting.” If China can’t deal with counting, I am curious how precise numbers are generated for a global report. Eastern Asia (possibly China?) accounts for 25.1 percent of global Internet users by region. Probably doesn’t matter in the context of a 200 page report in PowerPoint format.
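The report’s headline precision is easy to reproduce, which is rather the point: the arithmetic is exact even though the population input is anybody’s guess.

```python
# The report's headline arithmetic: 55.1 percent of 7.85 billion people.
world_population = 7_850_000_000
social_share = 0.551

active_social_users = world_population * social_share
print(f"{active_social_users:,.0f} active social media users")  # 4,325,350,000
```

Ten digits of output from an input no census can pin down.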

Other findings which jumped out at me as I flipped through the deck, which has taken its inspiration from Mary Meeker’s Internet Trends Report, last seen in 2019:

  • Mobile users are 92.8 percent of the total number of Internet users and mobile phones account for 54.18 percent of Web traffic
  • The zippiest Internet is located in the UAE
  • Google’s search market share is 92.4 percent. Qwant, which allegedly caused Eric Schmidt to lose sleep, does not appear in the search engine market share table
  • 98 percent of Internet users visit or use social networks
  • TikTok is the 7th most used social platform but the data come from TikTok, an outfit which is probably the gold standard in reliable information.

The reportal document does not explain what these data mean.

Here’s my take: The data provide many numbers which make clear three points:

  1. Mobile is a big deal
  2. Facebook and Google are bigger deals
  3. Criminal activity within these data ecosystems warrants zero attention.

The reportal’s data are free too.

Stephen E Arnold, April 23, 2021

Artificial Intelligence: Maybe These Numbers Are Artificial?

February 25, 2021

AI this. AI that. Suddenly it’s spring time for algorithmic magic. I read “Worldwide Revenues for AI Skyrocket, Set to Reach $550B by 2024.” That’s an interesting projection. What is “artificial intelligence?” No one has a precise definition. That makes it possible to assert that in 22 months, smart software will be more than half way to a trillion dollar market. That will make the MBA proteins kick into overdrive.

The write up cites the estimable mid tier consulting firm IDC and its Worldwide Semiannual Artificial Intelligence Tracker. I believe that this may be similar to the PC Magazine editorial team sitting around a lunch table generating lists of hot products and numbers about the uptake of Windows 95. There is nothing wrong with projections. And estimates which aim toward a trillion dollar market are energizing in the Age of Rona.

The write up reports that IDC calculated with near infinite precision these outputs:

“the artificial intelligence (AI) market, including software, hardware, and services, are forecast to grow 16.4% year over year in 2021 to $327.5 billion… By 2024, the market is expected to break the $500 billion mark with a five-year compound annual growth rate (CAGR) of 17.5% and total revenues reaching $554.3 billion.”
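A quick check of what the quoted compound annual growth rate implies. The 2019 baseline below is derived from the stated figures, not given in the write up:

```python
# Stated: $554.3B by 2024 at a five-year CAGR of 17.5%.
cagr = 0.175
revenue_2024 = 554.3  # billions of dollars

# Working backwards five years gives the implied starting point.
implied_2019 = revenue_2024 / (1 + cagr) ** 5
print(f"Implied 2019 baseline: ${implied_2019:.1f}B")  # ~$247.5B
```

For what it is worth, compounding the stated 2021 figure of $327.5 billion forward three years at 17.5 percent yields roughly $531 billion, a bit shy of $554.3 billion, so IDC’s five-year window presumably starts before 2021.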

Other findings (aside from the stretchy, bendable, fuzzy definition of “artificial intelligence” as including software, hardware, and services):

  • “Software represented 88% of the total AI market revenues in 2020. However, it is the slowest growing category with a five-year CAGR of 17.3%.”
  • “AI Applications took the largest share of revenue at 50% in 2020.”
  • “The AI Services category grew slower than the overall AI market with 13% annual revenue growth in 2020.”
  • “By 2024, AI Hardware is forecast to be a $30.5 billion market with AI Servers representing an 82% revenue share.”

Is AI a sandbox in which anyone can play? The data allegedly reveal:

In the Business Services for AI market, there were only four companies, Ernst & Young, PwC, Deloitte, and Booz Allen Hamilton, that generated revenues of more than $100 million in 1H 2020.

Okay, okay. Let’s step back:

  1. The definition of AI is nebulous which means that the assumptions are not exactly as solid as those of the new leaning Tower of Pisa in San Francisco
  2. The fuzzing of revenue streams, hardware, software, and the mushrooming of services is confusing at least to me
  3. AI appears to be another of those one percenter sectors.

Net net: AI will use you whether you are ready or not or whether the systems work or not. We could ask IBM Watson but IBM is allegedly trying to sell its fantastic health care AI business. Googlers are busy revealing the flaws in some Googley assumptions about its AI capabilities. Nevertheless, we have big numbers.

VC, consultants, and MBAs, get ready to bill. By the way, these estimates seem similar to those issued by the estimable mid tier consulting firm for the cognitive search market. Not exactly a hole in one as I recall.

Stephen E Arnold, February 25, 2021
