Measuring How Badly Social Media Amplifies Misinformation

October 26, 2022

In its ongoing examination of misinformation online, the New York Times tells us about the Integrity Institute’s quest to measure just how much social media contributes to the problem in, “How Social Media Amplifies Misinformation More than Information.” Reporter Steven Lee Myers writes:

“It is well known that social media amplifies misinformation and other harmful content. The Integrity Institute, an advocacy group, is now trying to measure exactly how much — and on Thursday [October 13] it began publishing results that it plans to update each week through the midterm elections on Nov. 8. The institute’s initial report, posted online, found that a ‘well-crafted lie’ will get more engagements than typical, truthful content and that some features of social media sites and their algorithms contribute to the spread of misinformation.”

In its ongoing investigation, the researchers compare the circulation of posts flagged as false by the International Fact-Checking Network to that of other posts from the same accounts. We learn:

“Twitter, the analysis showed, has what the institute called the greatest misinformation amplification factor, in large part because of its feature allowing people to share, or ‘retweet,’ posts easily. It was followed by TikTok, the Chinese-owned video site, which uses machine-learning models to predict engagement and make recommendations to users. … Facebook, according to the sample that the institute has studied so far, had the most instances of misinformation but amplified such claims to a lesser degree, in part because sharing posts requires more steps. But some of its newer features are more prone to amplify misinformation, the institute found.”

Facebook’s video content spread lies faster than the rest of the platform, we learn, because its features lean more heavily on recommendation algorithms. Instagram showed the lowest amplification rate, while the team did not yet have enough data on YouTube to draw a conclusion. It will be interesting to see how these amplifications do or do not change as the midterms approach. The Integrity Institute shares its findings here.
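The institute has not published its exact formula in the excerpt above, but the basic comparison the Times describes (engagement on fact-checked false posts versus typical posts from the same accounts) can be sketched roughly as follows. The data layout, the field names, and the ratio-of-means definition are my assumptions for illustration, not the institute’s published method:

```python
# Rough sketch of an "amplification factor": for each account, compare
# average engagement on posts flagged false by fact-checkers with average
# engagement on that account's other posts, then average the per-account
# ratios. A result above 1.0 means flagged falsehoods out-performed the
# account's typical content.

def amplification_factor(posts):
    """posts: list of dicts with 'account', 'engagements', 'flagged_false'."""
    by_account = {}
    for p in posts:
        by_account.setdefault(p["account"], []).append(p)

    ratios = []
    for account_posts in by_account.values():
        flagged = [p["engagements"] for p in account_posts if p["flagged_false"]]
        normal = [p["engagements"] for p in account_posts if not p["flagged_false"]]
        if flagged and normal and sum(normal) > 0:
            ratios.append((sum(flagged) / len(flagged)) / (sum(normal) / len(normal)))
    return sum(ratios) / len(ratios) if ratios else None

sample = [
    {"account": "a", "engagements": 300, "flagged_false": True},
    {"account": "a", "engagements": 100, "flagged_false": False},
    {"account": "a", "engagements": 100, "flagged_false": False},
    {"account": "b", "engagements": 50, "flagged_false": True},
    {"account": "b", "engagements": 25, "flagged_false": False},
]

print(amplification_factor(sample))  # account a: 3.0, account b: 2.0 -> 2.5
```

Platform differences like the ones the institute reports would show up as different average ratios across services, not as anything platform-specific in arithmetic this simple.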

Cynthia Murrell, October 26, 2022

Learning Is Supposed to Be Easy. Says Who?

October 26, 2022

I am not sure what a GenZ is. I do know that if I provide cash and change for a bill at a drug store or local grocery store, the person running the cash register looks like a deer in headlights. I have a premonition that if I had my Digital Infrared Thermometer, I could watch the person’s temperature rise. Many of these young people struggle to make change. My wife had a 50-cent piece and gave it to the cashier at the garden center along with some bills. The GenZ or GenX or whatever young person called the manager and asked, “What is this coin?”

I read “Intelligent.com Survey Shows 87 Percent of College Students Think Classes Are Too Difficult, But Most Fail to Study Regularly.” I know little about the sponsor of the research, the sampling methodology, or the statistical procedures used to calculate the data. Caution is advised when “real news” trots out data. Let’s assume that the information is close enough for horseshoes. After all, this is the statistical yardstick for mathematical excellence in use at synthetic data companies, Google-type outfits, and many artificial intelligence experts hot for cheap training data. Yep, close enough is good enough. I should create a T-shirt with this silkscreened on the front. But that’s work, which I don’t do.

The findings reported in the article include some gems which appear to bolster my perception that quite a few GenZ etc. cohort members are not particularly skilled in some facets of information manipulation. I would wager that their TikTok skills are excellent. Other knowledge-based functions may lag. Let’s look at these numbers:

65 percent of respondents say they put a lot of effort into their studies. However, research findings also show that one-third of students who claim to put a lot of effort into their schoolwork spend less than 5 hours a week studying.

This is the academic equivalent of a young MBA saying, “I will have the two pager ready tomorrow morning.” The perception of task completion is sufficient for these young millionaires-to-be. Doing the work is irrelevant because the individual thinks the work will be done. When reminded, the excuses fly. I want to remind you that some high-tech companies trot out the well-worn “the dog ate my homework” excuse when testifying.

And this finding:

Thirty-one percent of respondents spend 1-5 hours, and 37 percent spend 6-10 hours studying for classes each week. Comparatively, 8 percent of students spend 15-20 hours, and 5 percent spend more than 20 hours studying.

I have been working on Hopf fibrations for a couple of years. Sorry, I am not at the finish line yet. Those in the sample equate studying with a few hours a week. Nope, that time commitment is plotted on a flawed timeline, not the real-world timeline for learning and becoming proficient in a subject.

I loved this finding:

Twenty-eight percent of students have asked a professor to change their grade, while 31 percent admit they cheated to get better grades. Almost 50 percent of college students believe a pass or fail system should replace the current academic grading system.

Wow.

Net net: No wonder young people struggle with making change and thinking clearly. Bring back the dinobabies even though there are some dull normals in that set of cohorts as well. But when one learns by watching TikToks what can one expect in the currency recognition department? Answer: Not much.

Stephen E Arnold, October 26, 2022

How Social Media Robs Kids of Essential Sleep

October 18, 2022

Here is yet another way social media is bad for kids. A brief write-up at Insider informs us, “Kids Are Waking Up in the Night to Check their Notifications and Are Losing About 1 Night’s Worth of Sleep a Week, Study Suggests.” The study from De Montfort University Leicester is a wake-up call for parents and guardians. See the university’s post for its methodology. Writer Beatrice Nolan tells us:

“Almost 70% of the 60 children under 12 surveyed by De Montfort University in Leicester, UK, said they used social media for four hours a day or more. Two thirds said they used social media apps in the two hours before going to bed. The study also found that 12.5% of the children surveyed were waking up in the night to check their notifications.  Psychology lecturer John Shaw, who headed up the study, said children were supposed to sleep for between nine to 11 hours a night, per NHS guidelines, but those surveyed reported sleeping an average of 8.7 hours nightly. He said: ‘The fear of missing out, which is driven by social media, is directly affecting their sleep. They want to know what their friends are doing, and if you’re not online when something is happening, it means you’re not taking part in it. And it can be a feedback loop. If you are anxious you are more likely to be on social media, you are more anxious as a result of that. And you’re looking at something, that’s stimulating and delaying sleep.'”

Surprising no one, the study found TikTok was the most-used app, followed closely by Snapchat and more distantly by Instagram. The study also found almost 70% of its young respondents spend four or more hours a day on social media, much of that time just before bedtime. Dr. Shaw emphasizes the importance of sleep routines for children and other humans, sharing his personal policy of turning off his phone an hour before bed. When he does make an exception, he at least turns on his blue-light filter.

Nolan mentions California’s recent law that seeks to shield kids from harm by social media, but the provisions apply more to issues like data collection and privacy than promoting a compulsion to wake up and check one’s phone. That leaves the ball, once again, in the parents’ court. A good practice is to enforce a rule that kids turn off the device an hour before bed and leave it off overnight. Maybe even store it in our own nightstands. Yes, they may fight us on it. But even if we cannot convince them, we know getting adequate sleep is even more important than checking that feed overnight.

Cynthia Murrell, October 18, 2022

LinkedIn: What Is the Flavor Profile of Poisoned Data?

October 6, 2022

I gave a lecture to some law enforcement professionals focused on cyber crime. In that talk, I referenced three OSINT blind spots; specifically:

  1. Machine generated weaponized information
  2. Numeric strings which cause actions within a content processing system
  3. Poisoned data.

This free and public blog is not the place for the examples I presented in my lecture. I can, however, point to the article “Glut of Fake LinkedIn Profiles Pits HR Against the Bots.”

The write up states:

A recent proliferation of phony executive profiles on LinkedIn is creating something of an identity crisis for the business networking site, and for companies that rely on it to hire and screen prospective employees. The fabricated LinkedIn identities — which pair AI-generated profile photos with text lifted from legitimate accounts — are creating major headaches for corporate HR departments and for those managing invite-only LinkedIn groups.

LinkedIn is a Microsoft property, and it — like other Microsoft “solutions” — finds itself unable to cope with certain problems. In this case, I am less interested in “human resources”, chief people officers, or talent manager issues than the issue of poisoning a data set.

LinkedIn is supposed to offer professionals a service for posting biographies, links to articles, and a semi-blog function with a dash of TikTok. For some, whom I shall not name, it has become a way to preen, promote, and pitch.

But are those outputting the allegedly “real” information operating like good little sixth grade students in a 1950s private school?

Nope.

The article suggests three things to me:

  1. Obviously Microsoft LinkedIn is unable to cope with this data poisoning
  2. Humanoid professionals (and probably the smart software scraping LinkedIn for “intelligence”) have no way to discern what’s square and what’s oval
  3. The notion that this is a new problem is interesting because many individuals are pretty tough to track down. Perhaps these folks don’t exist and never did?

Does this matter? Sure, Microsoft / LinkedIn has to do some actual verification work. Wow. Imagine that. Recruiters / solicitors will have to do more than send a LinkedIn message and set up a Zoom call. (Yeah, Teams is a possibility for some I suppose.) What about analysts who use LinkedIn as a source information?

Interesting question.

Stephen E Arnold, October 6, 2022

Quite a Recipe: Zuck-ini with a Bulky Stuffed Sausage

September 28, 2022

Ah, the Zuckbook or the Meta-thing. I can never remember the nomenclature. I thought about the estimable company after I read “Meta Defends Safe Instagram Posts Seen by Molly Russell.” I suppose I should provide a bit of color about Ms. Russell. She was the British school girl who used the digital Zuck-ini’s Instagram recipe for happiness, success, and positive vibes.

However, in Ms. Russell’s case, her journey to community appears to have gone off the rails. Ms. Russell was 14 when she died by suicide. The Meta-thing’s spokesperson for the legal action sparked by Ms. Russell’s demise said:

Ms Lagone told the inquest at North London Coroner’s Court she thought it was “safe for people to be able to express themselves” – but conceded two of the posts shown to the court would have violated Instagram’s policies and offered an apology about some of the content. Responding to questioning, she said: “We are sorry that Molly viewed content that violated our policies and we don’t want that on the platform.”

Move fast and break things was, I believe, a phrase associated with the Zuck-ini’s garden of delights. In Ms. Russell’s case the broken thing was Ms. Russell’s family. That “sorry” strikes me as meaningful, maybe heartfelt. On the other hand, it might be corporate blather.

Macworld does not address Ms. Russell’s death. However, its article “Despite Apple’s Best Efforts, Meta and Google Are Still Out of Control” is relevant. The write up explains that Apple is doing something to slow the stampeding stallions at the Meta-thing and Googzilla.

I noted this passage:

There is a great potential for this [data gathered by certain US high-technology companies] information to be misused and if we in the United States had any sort of functional government, it would have made these sales illegal by now.

My question: What about the combination of a young person’s absorbing content and the systems and methods to display “related” content to a susceptible individual. Might that one-two punch have downsides?

Yep. Is there a fix? Sure, after two decades of inattention, let’s just apply a quick fix or formulate a silver bullet.

But the ultimate move is, of course, to say, “Sorry.” I definitely want to see the Zuck-ini stuffed. Absent a hot iron poker, an Italian sausage will do.

Stephen E Arnold, September 28, 2022

LinkedIn: The Logic of the Greater Good

September 26, 2022

I have accepted two factoids about life online:

First, the range of topics searched from the computer systems available to my research team is broad and diverse, and it traverses the regular Web, the Dark Web, and what we call the “ghost Web.” As a result, recommendation systems like those in use by Facebook, Google, and Microsoft are laughable. Examples: YouTube’s suggesting that one of my team would like an inappropriate beach fashion show here, a fire on a cruise ship here, humorous snooker shots here, or sounds heard after someone moved to America here. These recommendations illustrate the ineffectuality of Google’s smart recommendation software and make clear that when smart software cannot identify a pattern, or encounters an intentionally pattern-disrupting click stream, data poisoning works like a champ. (OSINT fans take note. Data poisoning works, and I am not the only person harboring this factoid.) Key factoid: Recommendation systems don’t work, and the outputs can be poisoned… easily.

Second, profile-centric systems like Facebook’s properties or the LinkedIn social network struggle to identify information that is relevant. Thus, we ignore the suggestions for who is hiring people with your profile and the requests to be friends. These are amusing. Here are some anonymized examples. A female in Singapore wanted to connect me with an escort when I was next in Singapore. I interpreted this as a solicitation somewhat ill-suited to a 77-year-old male who no longer flies to Washington, DC. Forget Singapore. What about a person who is a sales person at a cable company? Or what about a person who does land use planning in Ecuador? What about a person with 19 years’ experience as a Google “partner”? You get the idea. Pimps and resellers of services which could be discontinued without warning. Key factoid: The recommendations don’t reflect that I am retired, give lectures to law enforcement and intelligence professionals, and stay in my office in rural Kentucky with my lovable computers, a not-so-lovable French bulldog, and my main squeeze of the last 53 years. (Sorry, Singapore intermediary for escorts. Sad smile)

I read a write up in the indigestion-inducing New York Times. I am never sure if the stories are accurate, motivated by social bias, written by a persistent fame seeker, or just made up by a modern day Jayson Blair. For info, click here. (You will have to pay to view this exciting story about fiction presented as “real” news.)

The story catching my attention today (Saturday, September 24, 2022) has the title “LinkedIn Ran Social Experiments on 20 Million Users over Five Years.” Obviously the author is not familiar with the default security and privacy settings in Windows 10 and that outstanding Windows 11. Data collection, both explicit and implicit, is the tension in the warp and woof of the operating systems’ fabric.

Since Microsoft owns LinkedIn, it did not take me long to conclude that LinkedIn like its precursor Plaxo had to be approached with caution, great caution. The write up reports that some Ivory Tower types figured out that LinkedIn ran and probably still runs tests to determine what can get more users, more clicks, and more advertising dollars for the Softies. An academic stalking horse is usually a good idea.

I did spot several comments in the write up which struck me as amusing. Let’s look at three:

First, consider this statement:

LinkedIn, which is owned by Microsoft, did not directly answer a question about how the company had considered the potential long term consequences of its experiments on users’ employment and economic status.

No kidding. A big tech company being looked at for its allegedly monopolistic behaviors not directly answering a New York Times reporter’s questions. Earth shaking. But the killer gag for me is wanting to know if Microsoft LinkedIn “considered the potential long term consequences of its experiments.” Ho ho ho. Long term at a high-tech outfit is measured in 12-week chunks. Sure, there may be a five-year plan, but it probably still includes references to Microsoft’s network card business, the outlook for Windows Phone and Nokia, getting the menus and icons in Office 365 to be the same across MSFT applications, and pitching the security of Microsoft Azure and Exchange as bulletproof. (Remember: There is a weapon called the Snipex Alligator, but it is not needed to blast holes through some of Microsoft’s vaunted security systems, I have heard.)

Second, what about this passage from the write up:

Professor Aral of MIT said the deeper significance of the study was that it showed the importance of powerful social networking algorithms — not just in amplifying problems like misinformation but also as fundamental indicators of economic conditions like employment and unemployment.

I think only a few people understand the corrosive, disintermediating effect that social media information delivered quickly can have. Examples range from flash mob riots to teens killing themselves because social media does such a bang up job of helping adolescents deal with inputs from strangers and algorithms which highlight the thrill of blue screening oneself. The excitement of asking people who won’t help one find a job is probably less of a downer, but failing to land an interview via LinkedIn might spark binge watching of “Friends.”

Third, I loved this passage:

“… If you want to get more jobs, you should be on LinkedIn more.”

Yeah, that’s what I call psychological triggering: Be on LinkedIn more. Now. Log on. Just don’t bother to ask me to add you to my network of people whom I don’t know, because “Stephen E Arnold” on LinkedIn is managed by different members of my team.

Net net: Which is better? The New York Times or Microsoft LinkedIn? You have 10 minutes to craft an answer which you can post on LinkedIn among the self promotions, weird facts, and news about business opportunities like paying some outfit to put you on a company’s Board of Advisors.

Yeah, do it.

Stephen E Arnold, September 26, 2022

Facebook: Slow and TikTok: Fast. Can Research Keep Pace with Effects?

September 23, 2022

I read “Facebook Proven to Negatively Impact Mental Health.” The academic analysis spanned about two decades. The conclusion is that Facebook (the poster child for bringing people together) is bad news for happy thoughts.

I noted this passage:

The study was based on data that dates back to the 2004 advent of Facebook at Harvard University, before it took the internet by storm. Facebook was initially accessible only to Harvard students who had a Harvard email address. Quickly spreading to other colleges in and outside the US, the network was made available to the general public in the US and beyond in September 2006. The researchers were able to analyze the impact of social media use by comparing colleges that had access to the platform to colleges that did not. The findings show a rise in the number of students reporting severe depression and anxiety (7% and 20% respectively).

The phrase which caught my attention is “quickly spreading.” Sure, by the standards of yesteryear, Facebook was like Road Runner. My thought is that the velocity of TikTok is different:

  1. Slow ramp and then accelerating user growth
  2. Rapid fire content consumption
  3. Short programs which Marshall McLuhan would be interested in if he were alive
  4. Snappy serve-up algorithms.

Facebook is a rabbit with a bum foot. No lucky charm for the Zuckers. TikTok is a Chinese knock-off of the SR-71.

Perhaps the researchers in Ivory Towerville will address these questions:

  1. What’s the impact of high velocity, short image-centric videos on high school and grade school students?
  2. What can weaponized information accomplish in attitude change on certain issues like body image, perception of reality, and the value of self harm?
  3. What mental changes take place when information is obtained from a TikTok type source?

Do you see the research challenge? Researchers are just now validating what has been evident to many commercial database publishers for many years. With go-go TikTok, how many years will it take to validate the downsides of this outstanding, intellect-enhancing service?

Stephen E Arnold, September 23, 2022

TikTok: A Slick Engine to Output Blackmail Opportunities

September 22, 2022

Some topics are not ready for online experts who think they know how sophisticated data collection and analytics “work.” The article “TikTok’s Algorithms Knew I Was Bi before I Did. I’m Not the Only One” provides a spine-tingling glimpse into what the China-linked TikTok video addiction machine can do. In recent testimony, TikTok’s handwaver explained that the Middle Kingdom’s psychological-profile Ming teapot is nothing more than kid vids.

The write up explains:

On TikTok, the relationship between user and algorithm is uniquely (even sometimes uncannily) intimate.

This sounds promising: Intimate as in pillow talk, secret thoughts, video moments forgotten but not lost to fancy math. The article continues:

There is something about TikTok that feels particularly suited to these journeys of sexual self-discovery and, in the case of women loving women, I don’t think it’s just the prescient algorithm. The short-form video format lends itself to lightning bolt-like jolts of soul-bearing nakedness…

Is it just me or is the article explaining exactly how TikTok can shape and then cause a particular behavior? I learned more:

I hadn’t knowingly been deceiving or hiding this part of me. I’d simply discovered a more appropriate label. But it was like we were speaking different languages.

Weaponizing TikTok probably does not remake an individual. The opportunity the system presents to an admin with information weaponization in mind is to nudge TikTok absorbers into a mind set and make it easier to shape a confused, impressionable, or clueless person to be like Ponce de Leon and explore something new.

None of this makes much sense to a seventh grader watching shuffle dance steps. But the weaponization-of-information angle is what makes blackmail opportunities bloom. What if the author was not open about the TikTok-nudged or induced shift? Could that information, or some other unknown or hidden facet of the past, be used to obtain access credentials or a USB stuffed with an organization’s secrets, or to push a person in a position of trust to advance a particular point of view?

The answer is, “Yep.” Once there is a tool, that tool will be used. Then the tool will be applied to other use cases or opportunities to lure people to an idea like “Hey, that island is just part of China” or something similar.

In my opinion, that’s what the article is struggling to articulate: TikTok means trouble, and the author is “not the only one.”

Stephen E Arnold, September 22, 2022

Mastodon: An Elephant in the Room or a Dwarf Stegodon in the Permafrost?

September 22, 2022

Buzzkill and Crackpot have been pushing Mastodon for years. If you are familiar with the big elephant, you know that mastodons are extinct. If you are not familiar with the distributed Mastodon, that creature is alive and stomping around. (The dwarf stegodon Facebook may become today’s MySpace.)

“Chinese Social Media Users Are Flocking to the Decentralised Mastodon Platform to Find Community amid Crackdown at Home” explains, once one pays up to access the write up:

Mastodon, an open-source microblogging software, was created by German developer Eugen Rochko in 2017 as a decentralised version of Twitter that is difficult to block or censor. It was partially a response to the control over user data exerted by Big Tech platforms, and the source code has since been used for many alternative platforms catering to those disaffected with mainstream options.

Features attractive to those eager to avoid big tech include, says the report:

Older posts are also difficult to resurface, as there is no free text search, only searching for hashtags. This is by design and encourages users to be more comfortable sharing their thoughts in the moment without worrying about how that information will be used in the future. Blocking content is also difficult for the Great Firewall because it is shared across instances. Alive.bar might be blocked, but people on another domain can follow users there.

Will Chinese uptake of Mastodon cause the beast to multiply and go forth? With censorship and online incivility apparently on the rise, yep.

Stephen E Arnold, September 22, 2022

Be an Information Warrior: Fun and Easy Too

September 16, 2022

I spotted an article in Politico. I won’t present the full title because the words in that title will trigger a range of smart software armed with stop words. Here’s the link if you want to access the source to which I shall refer.

I can paraphrase the title, however. Here’s my stab at avoiding digital tripwires: “Counter Propaganda Tailored to Neutralize Putin’s Propaganda.”

The idea is that a “community” has formed to pump out North Atlantic Fellas’ Organization weaponized and targeted information. The source article says:

NAFO “fellas,” as they prefer to be called, emblazon their Twitter accounts with the Shiba Inu avatar. They overlay the image on TikTok-style videos of Ukrainian troops set to dance music soundtracks. They pile onto Russian propaganda via coordinated social media attacks that rely on humor — it’s hard to take a badly-drawn dog meme seriously — to poke fun at the Kremlin and undermine its online messaging.

The idea is that NAFO is “weaponizing meme culture.” The icon for the informal group is Elon Musk’s favorite digital creature.


The image works well with a number of other images in my opinion. The source write up contains a number of examples.

My thought is that if one has relatives or friends in Russia, joining the NAFO outfit might have some knock on consequences.

From my point of view, once-secret and little-known information warfare methods are now like Elon Musk. Everywhere.

Stephen E Arnold, September 16, 2022
