Research: A Suspicious Activity and Deserving of a Big Blinking X?
August 2, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
The Stanford president does it. The Harvard ethics professor does it. Many journal paper authors do it. Why can’t those probing the innards of the company formerly known as Twatter do it?
I suppose those researchers can. The response to research one doesn’t accept can be a simple, “The data processes require review.” But no, no, no. The response elicited from the Twatter is presented in “X Sues Hate Speech Researchers Whose Scare Campaign Spooked Twitter Advertisers.” The headline is loaded with delicious weaponized words in my opinion; for instance, the ever popular “hate speech”, the phrase “scare campaign,” and “spooked.”
MidJourney, after some coaxing, spit out a frightened audience of past, present, and potential Twatter advertisers. I am not sure the smart software captured the reality of an advertiser faced with some brand-injuring information.
Wording aside, the totally objective real news write up reports:
X Corp sued a nonprofit, the Center for Countering Digital Hate (CCDH), for allegedly “actively working to assert false and misleading claims” regarding spiking levels of hate speech on X and successfully “encouraging advertisers to pause investment on the platform,” Twitter’s blog said.
I found this statement interesting:
X is alleging that CCDH is being secretly funded by foreign governments and X competitors to lob this attack on the platform, as well as claiming that CCDH is actively working to censor opposing viewpoints on the platform. Here, X is echoing statements of US Senator Josh Hawley (R-Mo.), who accused the CCDH of being a “foreign dark money group” in 2021—following a CCDH report on 12 social media accounts responsible for 65 percent of COVID-19 vaccine misinformation, Fox Business reported.
Imagine. The Musker questioning research.
Exactly what is “accurate” today? One could query the Stanford president, the Harvard ethicist, Mr. Musk, or the executives of the Center for Countering Digital Hate. Wow. That sounds like work, probably as daunting as reviewing the methodology used for the report.
My moral and ethical compass is squarely tracking lunch today. No attorneys invited. No litigation necessary if my soup is cold. I will be dining in a location far from the spot once dominated by a quite beefy, blinking letter signifying Twatter. You know. I think I misspelled “tweeter.” I will fix it soon. Sorry.
Stephen E Arnold, August 2, 2023
Whom Does One Trust? Surprise!
July 7, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Whom does one trust? The answer — according to the estimable New York Post — is young people. Believe it or not. Just be sure to exclude dinobabies and millennials, of course.
“Millennials Are the Biggest Liars of All Generations, Survey Reveals”:
A new survey found that of all generations, those born between 1981 and 1996 are the biggest culprits of lying in the workplace and on social media.
Am I convinced that the survey is spot on? Nah. Am I confident that millennials are the biggest liars when the cohort is considered? Nah.
MidJourney generated this illustration of an angry sales manager confronting a worker. The employee reported sales as closed when they were pending. Who would do this? A dinobaby, a millennial, or a regular sales professional?
Am I entertained by the idea that dinobabies are not the most prone to prevarication and mendacity? Yes.
Consider this statement:
The findings showed that millennials were the worst offenders, with 13% copping to being dishonest at least once a day.
How many times do dinobabies eject a falsehood?
By contrast, only 2% of baby boomers, those born between 1946 and 1964, fibbed once per day.
One must be aware that just five percent of GenXers engage in “daily deception.”
Where do people take liberties with the truth? Résumés (hello, LinkedIn) and social media. Imagine that! Money and companionship.
Who lies the most? Yep, 26 percent of males lie once a day. Twenty-three percent of females emit deceptive statements once a day. No other genders were considered in the write up, which is an important oversight in my opinion.
And who ran the survey? An outfit named PlayStar. Yes! I wonder if the survey tool was a Survey Monkey-like system.
Stephen E Arnold, July 7, 2023
Moral Decline? Nah, Just Your Perception at Work
June 12, 2023
Here’s a graph from the academic paper “The Illusion of Moral Decline.”
Is it even necessary to read the complete paper after studying the illustration? Of course not. Nevertheless, let’s look at a couple of statements in the write up to get ready for that in-class, blank bluebook semester examination, shall we?
Statement 1 from the write up:
… objective indicators of immorality have decreased significantly over the last few centuries.
Well, there you go. That’s clear. Imagine what life was like before modern day morality kicked in.
Statement 2 from the write up:
… we suggest that one of them has to do with the fact that when two well-established psychological phenomena work in tandem, they can produce an illusion of moral decline.
Okay. Illusion. This morning I drove past people sleeping under an overpass. A police vehicle with lights and siren blaring raced past me as I drove to the gym (a gym which is no longer open 24×7 due to safety concerns). I listened to a report about people struggling amidst the flood water in Ukraine. In short, a typical morning in rural Kentucky. Oh, I forgot to mention the gunfire I could hear as I walked my dog at a local park. I hope it was squirrel hunters, but in this area who knows?
MidJourney created this illustration of the paper’s authors celebrating the publication of their study about the illusion of immorality. The behavior is a manifestation of morality itself, and it is a testament to the importance of crystal clear graphs.
Statement 3 from the write up:
Participants in the foregoing studies believed that morality has declined, and they believed this in every decade and in every nation we studied….About all these things, they were almost certainly mistaken.
My take on the study includes these perceptions (yours hopefully will be more informed than mine):
- The influence of social media gets slight attention
- Large-scale immoral actions get little attention. I am tempted to list examples, but I am afraid of legal eagles and aggrieved academics with time on their hands.
- The impact of intentionally weaponized information on behavior in the US and other nation states which provide an infrastructure suitable to permit wide use of digitally enabled content is ignored.
In order to avoid problems, I will list some common and proper nouns or phrases and invite you to think about these in terms of the glory word “morality”. Have fun with your mental gymnastics:
- Catholic priests and children
- Covid information and pharmaceutical companies
- Epstein, Andrew, and MIT
- Special operation and elementary school children
- Sudan and minerals
- US politicians’ campaign promises.
Wasn’t that fun? I did not have to mention social media, self harm, people between the ages of 10 and 16, and statements like “Senator, thank you for that question…”
I would not do well with a written test watched by attentive journal authors. By the way, isn’t perception reality?
Stephen E Arnold, June 12, 2023
Probability: Who Wants to Dig into What Is Cooking Beneath the Outputs of Smart Software?
May 30, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
The ChatGPT and smart software “revolution” depends on math only a few live and breathe. One drawer in the pigeon hole desk of mathematics is probability. You know the coin flip example. Most computer science types avoid advanced statistics. I know because my great uncle was Vladimir Arnold (yeah, the guy who worked with a so-so mathy type named Andrey Kolmogorov, who was pretty good at mathy stuff and liked hiking in the winter in what my great uncle described as “minimal clothing”).
When it comes to using smart software, the plumbing is kept under the basement floor. What people see are interfaces and application programming interfaces. Letting anyone watch how the sausage is produced is not what the smart software outfits do. What makes the math interesting is that the systems and methods are not really new. What’s new is that memory, processing power, and content are available.
If one pries up a tile on the basement floor, the plumbing is complicated. Within each pipe or workflow process are the mathematics that bedevil many college students: Inferential statistics. Those who dabble in the Fancy Math of smart software are familiar with Markov chains and Martingales. There are garden variety maths as well; for example, the calculations beloved of stochastic parrots.
MidJourney’s idea of complex plumbing. Smart software’s guts are more intricate with many knobs for acolytes to turn and many levers to pull for “users.”
The little secret among the mathy folks who whack together smart software is that humanoids set thresholds, establish boundaries on certain operations, exercise controls like those on an old-fashioned steam engine, and find inspiration with a line of code or a process tweak that arrived in the morning gym routine.
In short, the snazzy interface makes it almost impossible to explain why certain responses appear and others do not. Who knows how the individual humanoid tweaks interact as values (probabilities, for instance) combine with other mathy stuff. Why explain this? Few understand.
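The knob-turning point can be made concrete. Here is a toy sketch, my own illustration rather than any vendor’s actual code, of how two hand-set values, a temperature and a top-k cutoff, decide which outputs a probability-driven system can even produce from the same underlying scores:

```python
import math
import random

random.seed(7)

# Toy next-token scores; a real model scores tens of thousands of tokens.
# All numbers here are invented for illustration.
scores = {"cat": 2.0, "dog": 1.5, "pizza": 0.5, "quasar": -1.0}

def sample(scores, temperature=1.0, top_k=4):
    # A human picked temperature and top_k; both change what can appear.
    top = sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]
    weights = [math.exp(s / temperature) for _, s in top]
    r, acc = random.random() * sum(weights), 0.0
    for (token, _), w in zip(top, weights):
        acc += w
        if r < acc:
            return token
    return top[-1][0]

# Tight cutoff and low temperature: the output is effectively fixed.
print(sample(scores, temperature=0.1, top_k=1))  # always "cat"
# Loose cutoff and high temperature: even "quasar" can surface.
print(sample(scores, temperature=5.0, top_k=4))
```

Turn the temperature from 0.1 to 5.0 and the improbable token becomes reachable; nothing in the interface reveals that a human chose those numbers.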
To get a sense of how contentious certain statistical methods are, I suggest you take a look at “Statistical Modeling, Causal Inference, and Social Science.” I thought the paper should have been called “Why No One at Facebook, Google, OpenAI, and Other Smart Software Outfits Can Explain Why Some Output Showed Up and Some Did Not, Why One Response Looks Reasonable and Another One Seems Like a Line Ripped from Fantasy Magazine.”
In a nutshell, the cited paper makes one point: Those teaching advanced classes in which probability and related operations are taught do not agree on what tools to use, how to apply the procedures, and what impact certain interactions produce.
Net net: Glib explanations are baloney. This mathy stuff is a serious problem, particularly when a major player like Google seeks to control training sets, off-the-shelf models, framing problems, and integrating the firm’s mental orientation to what’s okay and what’s not okay. Are you okay with that? I am too old to worry, but you, gentle reader, may have decades to understand what my great uncle and his sporty pal were doing. What Google type outfits are doing is less easily looked up, documented, and analyzed.
Stephen E Arnold, May 30, 2023
Kiddie Research: Some Guidelines
May 17, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
The practice of performing market research on children will not go away any time soon. It is absolutely vital, after all, that companies be able to target our youth with pinpoint accuracy. In the article “A Guide on Conducting Better Market and User Research with Kids,” Meghan Skapyak of the UX Collective shares some best practices. Apparently these tips can help companies enthrall the most young people while protecting individual study participants. An interesting dichotomy. She writes:
“Kids are a really interesting source of knowledge and insight in the creation of new technology and digital experiences. They’re highly expressive, brutally honest, and have seamlessly integrated technology into their lives while still not fully understanding how it works. They pay close attention to the visual appeal and entertainment-value of an experience, and will very quickly lose interest if a website or app is ‘boring’ or doesn’t look quite right. They’re more prone to error when interacting with a digital experience and way more likely to experiment and play around with elements that aren’t essential to the task at hand. These aspects of children’s interactions with technology make them awesome research participants and testers when researchers structure their sessions correctly. This is no easy task however, as there are lots of methodological, behavioral, structural, and ethical considerations to take in mind while planning out how your team will conduct research with kids in order to achieve the best possible results.”
Skapyak goes on to blend and summarize decades of research on ethical guidelines, structural considerations, and methodological experiments in this field. To her credit, she starts with the command to “keep it ethical” and supplies links to the UN Convention on the Rights of the Child and UNICEF’s Ethical Research Involving Children. Only then does she launch into techniques for wringing the most shrewd insights from youngsters. Examples include turning it into a game, giving kids enough time to get comfortable, and treating them as the experts. See the article for more details on how to better sell stuff to kids and plant ideas in their heads while not violating the rights of test subjects.
Cynthia Murrell, May 17, 2023
Reproducibility: Academics and Smart Software Share a Quirk
January 15, 2023
I can understand why a human fakes data in a journal article or a grant document. Tenure and government money perhaps. I think I understand why smart software exhibits this same flaw. Humans put their thumbs (intentionally or inadvertently) on the button setting thresholds and computational sequences.
The key point is, “Which flaw producer is cheaper and faster: Human or code?” My hunch is that smart software wins because in the long run it cannot sue for discrimination, take vacations, or play table tennis at work. The downstream consequence may be that some humans get sicker or die. Let’s ask a hypothetical smart software engineer this question, “Do you care if your model and system causes harm?” I theorize that at least one of the software engineer wizards I know would say, “Not my problem.” The other would say, “Call 1-8-0-0-Y-O-U-W-I-S-H and file a complaint.”
Wowza.
“The Reproducibility Issues That Haunt Health-Care AI” states:
a data scientist at Harvard Medical School in Boston, Massachusetts, acquired the ten best-performing algorithms and challenged them on a subset of the data used in the original competition. On these data, the algorithms topped out at 60–70% accuracy, Yu says. In some cases, they were effectively coin tosses. “Almost all of these award-winning models failed miserably,” he [Kun-Hsing Yu, Harvard] says. “That was kind of surprising to us.”
Wowza wowza.
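The coin-toss collapse described above has a simple mechanical explanation: a model tuned to win a leaderboard can memorize its competition data. A toy sketch, my own illustration and not the study’s data, shows the pattern:

```python
import random

random.seed(0)

# Hypothetical cases whose labels are pure noise, so nothing
# generalizes; any "skill" on unseen cases is luck.
cases = [(case_id, random.choice([0, 1])) for case_id in range(2000)]
train, test = cases[:1000], cases[1000:]

# A "leaderboard winner" that simply memorizes its training cases.
memory = dict(train)

def predict(case_id):
    # Perfect recall on cases it has seen; a coin flip on everything else.
    return memory.get(case_id, random.choice([0, 1]))

train_acc = sum(predict(i) == y for i, y in train) / len(train)
test_acc = sum(predict(i) == y for i, y in test) / len(test)
print(f"competition accuracy: {train_acc:.2f}")  # 1.00, award-winning
print(f"fresh-data accuracy:  {test_acc:.2f}")   # roughly 0.50, a coin toss
```

The gap between the two numbers is the reproducibility problem in miniature.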
Will smart software get better? Sure. More data. More better. Think of the start ups. Think of the upsides. Think positively.
I want to point out that smart software may raise an interesting issue: Are flaws inherent because of the humans who created the models and selected the data? Or, are the flaws inherent in the algorithmic procedures buried deep in the smart software?
A palpable desire exists to find and implement a technology that creates jobs, rejuices some venture activities, and supports the questionable idea that technology solves problems and does not create new ones.
What’s the quirk humans and smart software share? Being wrong.
Stephen E Arnold, January 15, 2023
Common Sense: A Refreshing Change in Tech Write Ups
December 13, 2022
I want to give a happy quack to this article: “Forget about Algorithms and Models — Learn How to Solve Problems First.” The common sense write up suggests that big data cowboys and cowgirls shore up their problem solving skills before doing the algorithm and model Lego drill. To make this point clear: Put foundations in place before erecting a structure which may fail in interesting ways.
The write up says:
For programmers and data scientists, this means spending time understanding the problem and finding high-level solutions before starting to code.
But in an era of do-your-own-research and thumbtyping, will common sense prevail?
Not often.
The article provides a list of specific steps to follow as part of the foundation for the digital confection. Worth reading; however, the write up tries to be upbeat.
A positive attitude is a plus. Too bad common sense is not particularly abundant in certain fascinating individual and corporate actions; to wit:
- Doing the FTX talkathons
- Installing spyware without legal okays
- Writing marketing copy that asserts a cyber security system will protect a licensee.
You may have your own examples. Common sense? Not abundant in my opinion. That’s why a book like How to Solve It: Modern Heuristics is unlikely to be on many nightstands of some algorithm and data analysts. Do I know this for a fact? Nope, just common sense. Thumbtypers, remember?
Stephen E Arnold, December 13, 2022
The Freakonomics Approach to Decision Making
November 18, 2022
It is predictable an economist like Steven Levitt would apply statistics to the process of making life’s big choices, but one may be surprised at the simplistic solution he has deduced. Levitt, an economist at the University of Chicago, hosts the “Freakonomics” podcast. Freethink explains how a “‘Freakonomics’ Study Offers Simple Strategy for Making Tough Decisions.” The study had each participant make a binary choice, to make a change or not, with a coin toss and report back. Levitt found a trend in the results. Writer Stephen Johnson reports:
“Most surprising were the results on well-being. At both the two and six-month marks, most people who chose change reported feeling happier, better off, and that they had made the correct decision and would make it again. ‘The data from my experiment suggests we would all be better off if we did more quitting,’ Levitt said in a press release. ‘A good rule of thumb in decision making is, whenever you cannot decide what you should do, choose the action that represents a change, rather than continuing the status quo.’ The study had some limitations. One is that its participants weren’t selected randomly. Rather, they opted in to the study after visiting FreakonomicsExperiments.com, which they likely heard about from the podcast or various social media channels associated with it. Another limitation is that participants whose decision didn’t play out well might have been less likely to report back on their status after two and six months. So, the study might be over-representing positive outcomes. Still, the study does suggest that people who are on the margin of a tough decision — that is, people who really can’t decide which option is best — are probably better off going with change.”
Perhaps. Johnson concludes with an old trick for checking your gut instinct that also involves a coin flip: go ahead and toss that coin, then see which side you find yourself hoping it will land on. Will either of these methods really point to the best decision? Is Mr. Musk using them to inform decision making at Twitter? Are the results reproducible?
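The over-representation caveat in the quoted passage is easy to demonstrate. A small simulation with made-up response rates, not Levitt’s actual figures, shows how selective reporting inflates the share of happy outcomes:

```python
import random

random.seed(42)

# Assume outcomes are really 50/50, but unhappy participants
# report back less often (both rates are invented for illustration).
participants = [random.random() < 0.5 for _ in range(10_000)]  # True = happy

def reports_back(happy, p_happy=0.8, p_unhappy=0.4):
    # Happy participants respond to the follow-up more often.
    return random.random() < (p_happy if happy else p_unhappy)

reported = [happy for happy in participants if reports_back(happy)]

true_share = sum(participants) / len(participants)
reported_share = sum(reported) / len(reported)
print(f"true share happy:     {true_share:.2f}")      # about 0.50
print(f"reported share happy: {reported_share:.2f}")  # about 0.67
```

With those invented rates the reported sample skews to roughly two-thirds happy even though the true split is even, which is exactly the limitation the study acknowledges.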
Cynthia Murrell, November 18, 2022
LinkedIn: The Logic of the Greater Good
September 26, 2022
I have accepted two factoids about life online:
First, the range of topics searched from my computer systems available to my research team is broad, diverse, and traverses the regular Web, the Dark Web, and what we call the “ghost Web.” As a result, recommendation systems like those in use by Facebook, Google, and Microsoft are laughable. YouTube’s suggestions that one of my team would like an inappropriate beach fashion show, a fire on a cruise ship, humorous snooker shots, or sounds heard after someone moved to America illustrate the ineffectuality of Google’s smart recommendation software. These recommendations make clear that when smart software cannot identify a pattern, or meets an intentional pattern-disrupting click stream, data poisoning works like a champ. (OSINT fans take note. Data poisoning works, and I am not the only person harboring this factoid.) Key factoid: Recommendation systems don’t work, and the outputs can be poisoned… easily.
Second, profile centric systems like Facebook’s properties or the LinkedIn social network struggle to identify information that is relevant. Thus, we ignore the suggestions for who is hiring people with your profile and the requests to be friends. These are amusing. Here are some anonymized examples. A female in Singapore wanted to connect me with an escort when I was next in Singapore. I interpreted this as a solicitation somewhat ill suited to a 77 year old male who no longer flies to Washington, DC. Forget Singapore. What about a person who is a sales person at a cable company? Or what about a person who does land use planning in Ecuador? What about a person with 19 years of experience as a Google “partner”? You get the idea. Pimps and resellers of services which could be discontinued without warning. Key factoid: Recommendations don’t register that I am retired, give lectures to law enforcement and intelligence professionals, and stay in my office in rural Kentucky, with my lovable computers, a not so lovable French bulldog, and my main squeeze for the last 53 years. (Sorry, Singapore intermediary for escorts.)
I read a write up in the indigestion inducing New York Times. I am never sure if the stories are accurate, motivated by social bias, written by a persistent fame seeker, or just made up by a modern day Jayson Blair. For info, click here. (You will have to pay to view this exciting story about fiction presented as “real” news.)
The story catching my attention today (Saturday, September 24, 2022) has the title “LinkedIn Ran Social Experiments on 20 Million Users over Five Years.” Obviously the author is not familiar with the default security and privacy settings in Windows 10 and that outstanding Windows 11. Data collection, both explicit and implicit, is the tension in the warp and woof of the operating systems’ fabric.
Since Microsoft owns LinkedIn, it did not take me long to conclude that LinkedIn like its precursor Plaxo had to be approached with caution, great caution. The write up reports that some Ivory Tower types figured out that LinkedIn ran and probably still runs tests to determine what can get more users, more clicks, and more advertising dollars for the Softies. An academic stalking horse is usually a good idea.
I did spot several comments in the write up which struck me as amusing. Let’s look at three:
First, consider this statement:
LinkedIn, which is owned by Microsoft, did not directly answer a question about how the company had considered the potential long term consequences of its experiments on users’ employment and economic status.
No kidding. A big tech company being looked at for its allegedly monopolistic behaviors not directly answering a New York Times reporter’s questions. Earth shaking. But the killer gag for me is wanting to know if Microsoft LinkedIn “considered the potential long term consequences of its experiments.” Ho ho ho. Long term at a high tech outfit is measured in 12 week chunks. Sure, there may be a five year plan, but it probably still includes references to Microsoft’s network card business, the outlook for Windows Phone and Nokia, getting the menus and icons in Office 365 to be the same across MSFT applications, and pitching the security of Microsoft Azure and Exchange as bulletproof. (Remember. There is a weapon called the Snipex Alligator, but it is not needed to blast holes through some of Microsoft’s vaunted security systems, I have heard.)
Second, what about this passage from the write up:
Professor Aral of MIT said the deeper significance of the study was that it showed the importance of powerful social networking algorithms — not just in amplifying problems like misinformation but also as fundamental indicators of economic conditions like employment and unemployment.
I think few people understand the corrosive, disintermediating effect that social media information delivered quickly can have. Examples range from flash mob riots to teens killing themselves because social media just does such a bang up job of helping adolescents deal with inputs from strangers and algorithms which highlight the thrill of blue screening oneself. The excitement of asking people who won’t help one find a job is probably less of a downer, but failing to land an interview via LinkedIn might spark binge watching of “Friends.”
Third, I loved this passage:
“… If you want to get more jobs, you should be on LinkedIn more.”
Yeah, that’s what I call psychological triggering: Be on LinkedIn more. Now. Log on. Just don’t bother to ask me to add you to my network of people whom I don’t know because “Stephen E Arnold” on LinkedIn is managed by different members of my team.
Net net: Which is better? The New York Times or Microsoft LinkedIn? You have 10 minutes to craft an answer which you can post on LinkedIn among the self promotions, weird facts, and news about business opportunities like paying some outfit to put you on a company’s Board of Advisors.
Yeah, do it.
Stephen E Arnold, September 26, 2022
Pew Data about Social Media Use: Should I Be Fearful? Answer: Me, No. You? Probably
September 26, 2022
The Pew Research outfit published more data about social media. If you want to look at the factsheet, navigate to this Pew link. I want to focus on one small, probably meaningless item. What interested me was how those in the sample get their news. If I read the snazzy graphics correctly:
- 82 percent of those in the sample use YouTube. (Does that make YouTube a monopoly?) Of those YouTube users, 25 percent get their “news” from the Alphabet Google YouTube DeepMind entity.
- 30 percent of those in the sample use TikTok, that friendly entity linked with the CCP. Of those TikTok adepts, 10 percent get their news from the Middle Kingdom’s information output and usage intake system.
- Other services deliver news, but it is not clear if video is the mechanism. Video interests me because of the Marshall McLuhan hot-cold notion. Video is the digital garden for couch potatoes. Reading is a bit more active, or so the fans of McLuhan would suggest.
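Back-of-envelope arithmetic on the two bullet points above shows how one platform alone pushes into double digit news dependence across the whole sample:

```python
# Figures as reported in the Pew factsheet summary above.
youtube_reach, youtube_news = 0.82, 0.25  # use the platform, get news there
tiktok_reach, tiktok_news = 0.30, 0.10

# Share of the whole sample getting news from each platform:
youtube_share = youtube_reach * youtube_news
tiktok_share = tiktok_reach * tiktok_news

print(f"YouTube news share of sample: {youtube_share:.1%}")  # 20.5%
print(f"TikTok news share of sample:  {tiktok_share:.1%}")   # 3.0%
```

Roughly one in five people in the sample gets news from YouTube; TikTok’s share is far smaller but growing from a much newer base.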
Why am I fearful? How about these thoughts, conceived while consuming a cheese sandwich?
- Potent mechanisms for injecting shaped or weaponized information into consumers of video news are in the hands of two entities focused on achieving their goals. China is into having the US become subservient to the Middle Kingdom and redressing the arrogance Americans have manifested over the years. The AGYD entity wants money and the ability to shape the direction in which it would prefer the users go. My view is that the approach of each entity is the same. The goals are somewhat different.
- Most consumers of video and news are unaware of the functionality of weaponized video information. My view is that it is pretty darned good at tearing down and cultivating certain interesting mental frameworks.
- Weaponization is trivial, particularly when both AGYD and TikTok can use money to incentivize the individuals and firms producing content for the respective services’ audience.
Net net: Once one pushes into double digit content dependence, a tipping point can cause what appears to be a stable structure to collapse. Can digital information break the camel’s back? For sure. Am I fearful? Nah. Others? Probably not, and that increases my concern.
Stephen E Arnold, September 26, 2022