Why Some Outputs from Smart Software Are Wonky

July 26, 2021

Some models work like a champ. Utility rate models are reasonably reliable. When it is hot, use of electricity goes up. Rates are then “adjusted.” Perfect. Other models are less solid; for example, Bayesian systems which are not checked every hour or large neural nets which are “assumed” to be honking along like a well-ordered flight of geese. Why do I offer such Negative Ned observations? Experience, for one thing, and the nifty little concepts tossed out by Ben Kuhn, a Twitter persona. You can locate this string of observations at this link. Well, you could as of July 26, 2021, at 6:30 am US Eastern time. Here’s a selection of what are apparently the highlights of Mr. Kuhn’s conversation with “a former roommate.” That’s provenance enough for me.

Item One:

Most big number theory results are apparently 50-100 page papers where deeply understanding them is ~as hard as a semester-long course. Because of this, ~nobody has time to understand all the results they use—instead they “black-box” many of them without deeply understanding.

Could this be true? How could newly minted, be-an-expert-with-our-$40-online-course professionals who use models packaged in downloadable, easy-to-plug-in modules be unfamiliar with the inner workings of said bundles of brilliance? Impossible? Really?

Item Two:

A lot of number theory is figuring out how to stitch together many different such black boxes to get some new big result. Roommate described this as “flailing around” but also highly effective and endorsed my analogy to copy-pasting code from many different Stack Overflow answers.

Oh, come on. Flailing around? Do developers flail, or do they “trust” the outfits who pretend to know how some multi-layered systems work? Fiddling with assumptions, thresholds, and (close your ears) the data themselves are never, ever a way to work around a glitch.

Item Three:

Roommate told a story of using a technique to calculate a number and having a high-powered prof go “wow, I didn’t know you could actually do that”

No kidding? That’s impossible in general, and that expression would never be uttered at Amazon-, Facebook-, and Google-type operations, would it?

Will Mr. Kuhn be banned for heresy? [Keep in mind how Wikipedia defines this term: heresy “is any belief or theory that is strongly at variance with established beliefs or customs, in particular the accepted beliefs of a church or religious organization.”] Once, just repeating such an idea would have warranted a close encounter with an Iron Maiden or a pile of firewood. Probably not today. Someone might emit a slightly critical tweet, however.

Stephen E Arnold, July 26, 2021

A Google Survey: The Cloud Has Headroom

June 17, 2021

Google sponsored a study. You can read it here. There’s a summary of the report in “Manufacturers Allocate One Third of Overall IT Spend to AI, Survey Shows.”

First, the methodology is presented on the final page of the report. Here’s a snippet:

The survey was conducted online by The Harris Poll on behalf of Google Cloud, from October 15 to November 4, 2020, among 1,154 senior manufacturing executives in France (n=150), Germany (n=200), Italy (n=154), Japan (n=150), South Korea (n=150), the UK (n=150), and the U.S. (n=200) who are employed full-time at a company with more than 500 employees, and who work in the manufacturing industry with a title of director level or higher. The data in each country was weighted by number of employees to bring them into line with actual company size proportions in the population. A global post-weight was applied to ensure equal weight of each country in the global total.

Google apparently wants to make data a singular noun. That’s Googley. Also, there are two references to weighting; however, there are no data for how the weighting factors were calculated nor why weighting was needed for what boils down to a set of countries representing the developed world. I did not spot any information about the actual selection process; for example, mailing a request to a larger set and then taking those who self-select is a practice I have encountered in the past. Was that the method in use here? How much back and forth was there between the Harris unit and the Google managers prior to the crafting of the final report? Does this happen? Sure, those who pay want a flash report and then want to “talk about” the data. Is it possible weighting factors were used to make the numbers flow? I don’t know. The study was conducted in the depths of the Covid crisis. Was that a factor? Were those in the sample producing revenue from their AI-infused investments? Sorry, no data available.
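For what it is worth, the “global post-weight” the methodology mentions is simple arithmetic. Here is a minimal sketch of how such a weight can equalize each country’s contribution, using the sample sizes stated in the methodology section; the formula itself is my assumption, since the report does not disclose how its weights were actually computed.

```python
# Sketch of a "global post-weight" that equalizes each country's
# contribution to the global total. Sample sizes are from the report;
# the weighting formula is an assumption, not the report's own.
samples = {"France": 150, "Germany": 200, "Italy": 154, "Japan": 150,
           "South Korea": 150, "UK": 150, "US": 200}

total = sum(samples.values())     # 1,154 respondents overall
n_countries = len(samples)        # 7 countries

# A respondent's post-weight is (target country share) / (actual share),
# which simplifies to (total / n_countries) / country_sample_size.
post_weights = {c: (total / n_countries) / n for c, n in samples.items()}

# After weighting, every country contributes the same weighted count.
weighted_counts = {c: post_weights[c] * n for c, n in samples.items()}
```

The sketch only shows the mechanics; it says nothing about how the country-level employee-count weights mentioned in the same paragraph were derived, which is exactly the gap noted above.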

What were the findings?

Surprise, surprise. Artificial intelligence is a hot button in the manufacturing sector. Those who are into smart software are spending a hefty chunk of their “spend” budget for it. If that AI is delivered from the cloud, then bingo, the headroom for growth is darned good.

The bad news is that two thirds of those in the sample are into AI already. The big tech sharks will be swarming to upsell those early adopters and compete ferociously for the remaining one third who have yet to get the message that AI is a big deal.

Guess what countries are leaders in AI. If you said China, wrong. Go for Italy and Germany. The US was in the middle of the pack. The laggards were Japan and Korea. And China? Hey, sorry, I did not see those data in the report. My bad.

Interesting stuff in these sponsored research projects with unexplained weightings which line up with what the Google says it is doing really well.

Stephen E Arnold, June 17, 2021

Search Share, Anyone? Qwant, Swisscows, Yandex, Yippy? (Oh, Sorry, Yippy May Be a Goner)

May 17, 2021

A recent study by marketing firm Adam & Eve DDB examined the impact of search-result placement on brand visibility over the past six years. McLellan Marketing Group summarizes the findings in its post, “Share of Search.” A company’s “share of search” is the percentage of searches for its product category that result in its site popping up near the top. The Google Analytics dashboard helpfully displays organizations’ referrals for specific keywords and phrases, while the Google Keyword Tool reports overall searches for each term or phrase. The study checked out the metrics for three examples. We learn:

“[Adam & Eve DDB’s Les] Binet explored three categories: an expensive considered purchase (automotive), a commodity (gas and electricity) and a lower-priced but very crowded brand segment (mobile phone handsets). The results were very telling. Here are some of the biggest takeaways:

Share of search correlates with market share in all three categories.

Share of search is a leading indicator/predictor of share of market – when share of search goes up, share of market tends to go up, and when share of search goes down, share of market falls.

This long-term prediction can also act as an early warning system for brands in terms of their market share.

Share of voice (advertising) has two effects on share of search: a significant short-term impact that produces a big burst but then fades rapidly, and a smaller, longer-term effect that lingers for a very long time.

The long-term effects build on each other, sustaining and growing over time.

Share of search could also be a new measure for brand strength or health of a brand by measuring the base level of share of search without advertising.

While share of search provides essential quantitative data, brands should also use qualitative research and sentiment analysis to get a more robust picture.”

We are told that when a brand’s search share surpasses its market share, growth is on the way. Yippee! How can one ensure such a result? Writer Drew McLellan reminds us that relevant content tailored to one’s audience is the key to organic search performance. Or one could just take the shortcut: buying Facebook and Google ads also does the trick. But we wonder—where is the fun in that? Yippy? Yippy? Duck Ducking the search thing?

Cynthia Murrell, May 17, 2021

Digital 2021: Lots of Numbers

April 23, 2021

One of the Beyond Search team called my attention to the We Are Social / Hootsuite “Digital 2021 April Global Statshot Report.” The original link did not resolve. After a bit of clicking around, we did locate the presentation on the outstanding SlideShare service. No, the SlideShare search function did not work for us, but we know that it will return to its glory soon. Maybe real soon?

The report with the numbers is located at this link. If that doesn’t work, there is an index located at this link. If these go dead, you can try the We Are Social / Hootsuite explainer at this Datareportal link.

After that bit of housekeeping, what is the “Digital 2021 April Global Statshot Report”? The answer is that it is:

All the latest stats, insights, and trends you need to make sense of how the world uses the internet, mobile, social media, and ecommerce in April 2021. For more reports, including the latest global trends and in-depth local data for more than 240 countries and territories around the world, visit https://datareportal.com

As readers of this blog have heard, “all” is a trigger word. I want to know how many Dark Web encrypted message services are operated by state actors, not addled college students. Did I find the answer? Nope. So the “all” is baloney.

The report does provide assorted disclaimers and numerous big numbers; for example, 55.1 percent of 7,850,000,000 people are active social media users. Pretty darned exact. When I was on a trip to Wuhan, China, I was told by our government provided guide, “No one is sure how many people live in Wuhan. There are different methods of counting.” If China can’t deal with counting, I am curious how precise numbers are generated for a global report. Eastern Asia (possibly China?) accounts for 25.1 percent of global Internet users by region. Probably doesn’t matter in the context of a 200 page report in PowerPoint format.

Other findings jumped out at me as I flipped through the deck, which takes its inspiration from Mary Meeker’s Internet Trends Report, last seen in 2019:

  • Mobile users are 92.8 percent of the total number of Internet users and mobile phones account for 54.18 percent of Web traffic
  • The zippiest Internet is located in the UAE
  • Google’s search market share is 92.4 percent. Qwant, which allegedly caused Eric Schmidt to lose sleep, does not appear in the search engine market share table
  • 98 percent of Internet users visit or use social networks
  • TikTok is the 7th most used social platform but the data come from TikTok, an outfit which is probably the gold standard in reliable information.

The reportal document does not explain what these data mean.

Here’s my take: The data provide many numbers which make clear three points:

  1. Mobile is a big deal
  2. Facebook and Google are bigger deals
  3. Criminal activity within these data ecosystems warrants zero attention.

The reportal’s data are free too.

Stephen E Arnold, April 23, 2021

Artificial Intelligence: Maybe These Numbers Are Artificial?

February 25, 2021

AI this. AI that. Suddenly it’s springtime for algorithmic magic. I read “Worldwide Revenues for AI Skyrocket, Set to Reach $550B by 2024.” That’s an interesting projection. What is “artificial intelligence”? No one has a precise definition. That makes it possible to assert that in 22 months, smart software will be more than halfway to a trillion-dollar market. That will make the MBA proteins kick into overdrive.

The write up cites the estimable mid tier consulting firm IDC and its Worldwide Semiannual Artificial Intelligence Tracker. I believe that this may be similar to the PC Magazine editorial team sitting around a lunch table generating lists of hot products and numbers about the uptake of Windows 95. There is nothing wrong with projections. And estimates which aim toward a trillion dollar market are energizing in the Age of Rona.

The write up reports that IDC calculated with near infinite precision these outputs:

“the artificial intelligence (AI) market, including software, hardware, and services, are forecast to grow 16.4% year over year in 2021 to $327.5 billion… By 2024, the market is expected to break the $500 billion mark with a five-year compound annual growth rate (CAGR) of 17.5% and total revenues reaching $554.3 billion.”
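The quoted figures can be cross-checked with the standard CAGR formula, rate = (end / start)^(1/years) − 1. A minimal sketch; the base year of IDC’s five-year window is my assumption, since the article does not state it:

```python
# Sanity-check IDC's figures with the standard CAGR formula:
#   CAGR = (end / start) ** (1 / years) - 1
def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

revenue_2021 = 327.5   # $B in 2021, per the quote
revenue_2024 = 554.3   # $B forecast for 2024

# Growth rate implied by the two quoted endpoints (three years):
implied = cagr(revenue_2021, revenue_2024, 3)   # roughly 19% per year

# Working backward, a 17.5% five-year CAGR ending at $554.3B implies
# a base of roughly $247B, i.e., a window starting around 2019.
implied_base = revenue_2024 / (1 + 0.175) ** 5
```

The three-year implied rate comes out higher than the quoted 17.5% five-year CAGR, which is consistent with the five-year window starting before 2021; the numbers hang together, even if the definitions do not.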

Other findings (aside from the stretchy, bendable, fuzzy definition of “artificial intelligence” as including software, hardware, and services):

  • “Software represented 88% of the total AI market revenues in 2020. However, it is the slowest growing category with a five-year CAGR of 17.3%.”
  • “AI Applications took the largest share of revenue at 50% in 2020.”
  • “The AI Services category grew slower than the overall AI market with 13% annual revenue growth in 2020.”
  • “By 2024, AI Hardware is forecast to be a $30.5 billion market with AI Servers representing an 82% revenue share.”

Is AI a sandbox in which anyone can play? The data allegedly reveal:

In the Business Services for AI market, there were only four companies, Ernst & Young, PwC, Deloitte, and Booz Allen Hamilton, that generated revenues of more than $100 million in 1H 2020.

Okay, okay. Let’s step back:

  1. The definition of AI is nebulous, which means that the assumptions are not exactly as solid as those of the new leaning Tower of Pisa in San Francisco
  2. The fuzzing of revenue streams, hardware, software, and the mushroom of services is confusing at least to me
  3. AI appears to be another of those one percenter sectors.

Net net: AI will use you whether you are ready or not or whether the systems work or not. We could ask IBM Watson but IBM is allegedly trying to sell its fantastic health care AI business. Googlers are busy revealing the flaws in some Googley assumptions about its AI capabilities. Nevertheless, we have big numbers.

VCs, consultants, and MBAs, get ready to bill. By the way, these estimates seem similar to those issued by the estimable mid tier consulting firm for the cognitive search market. Not exactly a hole in one as I recall.

Stephen E Arnold, February 25, 2021

Music Research: Bach, Mozart, and Vivaldi Are Losers

February 18, 2021

Here’s a statement from “Techno Is the Genre Least Effective at Reducing Anxiety.” The statement is simple:

techno, dubstep and 70’s rock anthems [were] the top three types of music that recorded an increase in their blood pressure.

Now read this statement:

Techno, dubstep and classical chill out were also the top three genres to increase heart rates among the volunteers.

Let’s try to figure this out:

  • Dubstep appears in each list
  • Techno appears in each list
  • 70s rock anthems appears in one list
  • Classical chill out appears in one list.

It seems that listening to any one of these types of music will pump up the heart rate and increase blood pressure. But no! Per the lists, 70s rock anthems appear only in the blood pressure results, and classical chill out appears only in the heart rate results.

What do the data say?

“The study was conducted by the Vera Clinic, who also drafted in Doctor Ömer Avlanmış to review the results. Medically they make a lot of sense…”

Sure they do. Bach and Mozart are losers. The music should do more than just raise blood pressure. Is there a Chopin rave happening on Zoom soon? Yep, thumb typing research for the GenXers and Millennials. Sample selection methodology? Confidence? Analytic methods? Term definition? Ho ho ho.

Stephen E Arnold, February 18, 2021

Crazy Research for the Work from Home Crowd

December 16, 2020

I read — despite my inner voice shouting, no, no, no — “Australian Study Shows Working in Pajamas Does Not Hurt Productivity.” One summer session in graduate school, I had a roomie who slept without anything. Nifty, particularly when I had to observe this person sitting at the desk in the dorm before heading to class. Yeah, disgusting then, and the memory is disgusting now.

The write up states:

When the study examined the effects wearing pajamas had on productivity and mental health, it found that wearing pajamas was associated with more frequent reporting of poorer mental health. For 59% of participants who wore pajamas during the day at least one day a week, they admitted their mental health declined while working from home, versus 26% of participants who did not wear pajamas while working from home.

The headline sort of misses the point.

But one of the flaws in the study is that the question, “Do you wear clothing when you sleep?” seems to have been ignored by the journalist and maybe the researchers in Sydney.

Key point: Pretty silly stuff. I want to know what percentage of the sample slept naked and then arose to work in a productive manner with a good mental attitude. Then I want to know that if a partner were present for the naked WFHers, what is the impact of this behavior on anyone able to look at this nude person perched in an Aeron with a laptop scrunched on their chest.

Got the picture?

Stephen E Arnold, December 16, 2020

Want to Manipulate Humans? Try These Hot Buttons

December 3, 2020

Okay, thumb typing marketers, insights from academia. Navigate to “We Are All Behavioral, More or Less: A Taxonomy of Consumer Decision Making.” The write up is available from Dartmouth, home of behavioral economists and psychologists and okay pizza.

The write up is 70 pages in length and chock full of jargon and academic thinking. Nevertheless, the author, one Victor Stango, reveals some suggestive information.

Here are a couple of examples:

Table 3. Correlations among behavioral biases, and between biases and other decision inputs offers insight into pairings of bias factors

Table 5. Rotated 8-factor models and loadings of decision inputs on common factors provides a “look up table” with values to help guide a sales pitch

The list of hot button factors includes:

  • Present bias
  • Choice type
  • Risk biases
  • Confidence
  • Math bias
  • Attention
  • Patience vs. risk aversion
  • Cognitive skills
  • Personality
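What a Table 3-style correlation matrix measures can be illustrated in a few lines: score each consumer on several decision inputs, then compute pairwise correlations. The data below are entirely synthetic; the actual values live in Stango’s tables.

```python
import numpy as np

# Toy illustration of correlations among behavioral "decision inputs."
# The bias scores are simulated, not taken from the paper.
rng = np.random.default_rng(42)
n = 1_000

overconfidence = rng.normal(size=n)
present_bias = 0.5 * overconfidence + rng.normal(scale=0.9, size=n)
math_bias = -0.3 * overconfidence + rng.normal(size=n)

# Rows are variables; corr[i, j] is the correlation between input i
# and input j, the quantity a Table 3-style matrix tabulates.
corr = np.corrcoef([overconfidence, present_bias, math_bias])
```

The point of such a matrix for a marketer is the pairings: inputs that co-occur in the same consumers are the ones worth targeting together.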

Net net: Manipulate biases by combining factors. Launch those online marketing campaigns via social media with confidence, p-value lovers.

Stephen E Arnold, December 3, 2020

Surveys: These Marketing Devices Are Accurate, Right?

November 10, 2020

There’s nothing like a sample, a statistical sample, that is. What’s interesting is that the US polls seem to have been reflecting some interesting but marketing-type trends. The bastion of “real journalism”— the UK Daily Mail — published “…We Did a Good Job: Defiant Pollster Nate Silver Rushes to Defend His Profession after Another Systematic Failure of Polls in the Build-Up to an Election.” Bibliophiles will note that I have omitted the tasteful obscenity. I like to avoid using words likely to irritate the really smart software which edits blog posts.

The write up points out:

FiveThirtyEight founder and editor-in-chief Nate Silver hit back at those slamming the website for being so off with their election predictions.

Let’s think about why FiveThirtyEight and other polls seem to have predicted a reality different from the one generated by humanoids marking ballots.

First, there is the sample. Picking people at random depends on a number of factors: sources, selection bias, humanoids who don’t respond, etc.

Second, there are the humanoids themselves. Some people plug in the “answers” which get the poll over with really fast. I lose interest at the first hint of dark patterns which make it tough to know how many questions I have to answer to get the coupon, pat on the head, or the free shopping sack.

Third, there is counting. Yep, human or machine, things can happen.

Fourth, there is analysis. It is remarkable what one can do when counting or doing “analytics.”
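The first point — who ends up in the sample — is easy to demonstrate with a toy simulation. Every number below is invented for illustration; this is a sketch of differential nonresponse, not a model of any actual poll:

```python
import random

# Toy simulation: if one group answers pollsters less often, the raw
# sample estimate drifts away from the true population value.
random.seed(0)

population = [1] * 480_000 + [0] * 520_000   # true support: 48%
response_rate = {1: 0.04, 0: 0.06}           # supporters respond less

# Each person independently decides whether to answer the poll.
sample = [v for v in population if random.random() < response_rate[v]]
estimate = sum(sample) / len(sample)          # biased low

true_share = sum(population) / len(population)  # 0.48
```

No amount of counting or analytics downstream fixes an estimate that was skewed the moment the sample was drawn; weighting can only patch it if the pollster knows exactly who under-responded.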

The Daily Mail quotes an expert about making polls better:

‘The polling profession needs to reshape and reorganize their questionnaires,’ Luntz [the polling expert] told DailyMail.com. ‘It’s the only way they’ll ever get it right.’

But I keep thinking about the FiveThirtyEight obscenity. Defensive? Eloquent? Subjective? Insightful?

That subjective thing.

Stephen E Arnold, November 10, 2020

Spreadsheet Fever Case Example

October 12, 2020

I have been using the phrase “spreadsheet fever” to describe the impact that fiddling with numbers in Microsoft Excel has on MBAs. With Excel providing the backbone for numerous statistical confections, the sugar hit of magic assumptions cannot be underestimated. The mental structure of a crazed investment analyst brooks no interference from common sense.

“Excel: Why Using Microsoft’s Tool Caused Covid-19 Results to Be Lost” provides a possible case example of what happens when thumbtypers and over-confident innumerates tangle with a digital spreadsheet. No green eyeshades and no pencils needed. Calculators? One can hear a 22 year old ask, “What’s a calculator? I have one on my iPhone.”

The Beeb reports:

PHE [Public Health England, a fine UK entity] had set up an automatic process to pull this data together into Excel templates so that it could then be uploaded to a central system and made available to the NHS Test and Trace team, as well as other government computer dashboards.

And what tool did these over confident wizards use?

Microsoft Excel, the weapon of choice for business and STEM analysis, of course.

How did the experts wander off the information highway into a thicket of errors? The Beeb explains:

The problem is that PHE’s own developers picked an old file format to do this – known as XLS. As a consequence, each template could handle only about 65,000 rows of data rather than the one million-plus rows that Excel is actually capable of. And since each test result created several rows of data, in practice it meant that each template was limited to about 1,400 cases. When that total was reached, further cases were simply left off.
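The Beeb’s numbers imply the arithmetic below. A minimal sketch: the per-case row count (46) is inferred from the article’s own figures (~65,000 rows divided by ~1,400 cases), and the 3,000-case day is hypothetical.

```python
# Sketch of the failure mode: the legacy .xls format caps a worksheet
# at 65,536 rows, so rows past the cap are silently dropped.
XLS_MAX_ROWS = 65_536        # legacy .xls sheet limit
XLSX_MAX_ROWS = 1_048_576    # modern .xlsx limit, ~16x larger

# Rows per test result, inferred from the article's ~65,000 / ~1,400:
rows_per_case = 46

cases_per_template = XLS_MAX_ROWS // rows_per_case   # about 1,400

# A hypothetical day with 3,000 positive results:
cases = 3_000
cases_dropped = max(0, cases - cases_per_template)   # silently lost
```

The sting is in the word “silently”: the template fills to its row cap and everything after it simply never reaches the Test and Trace team, with no error raised.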

The fix? Can kicking perhaps:

But insiders acknowledge that the current clunky system needs to be replaced by something more advanced that excludes Excel, as soon as possible.


Stephen E Arnold, October 12, 2020
