How Social Media Robs Kids of Essential Sleep
October 18, 2022
Here is yet another way social media is bad for kids. A brief write-up at Insider informs us, “Kids Are Waking Up in the Night to Check their Notifications and Are Losing About 1 Night’s Worth of Sleep a Week, Study Suggests.” The study from De Montfort University Leicester is a wake-up call for parents and guardians. See the university’s post for its methodology. Writer Beatrice Nolan tells us:
“Almost 70% of the 60 children under 12 surveyed by De Montfort University in Leicester, UK, said they used social media for four hours a day or more. Two thirds said they used social media apps in the two hours before going to bed. The study also found that 12.5% of the children surveyed were waking up in the night to check their notifications. Psychology lecturer John Shaw, who headed up the study, said children were supposed to sleep for between nine to 11 hours a night, per NHS guidelines, but those surveyed reported sleeping an average of 8.7 hours nightly. He said: ‘The fear of missing out, which is driven by social media, is directly affecting their sleep. They want to know what their friends are doing, and if you’re not online when something is happening, it means you’re not taking part in it. And it can be a feedback loop. If you are anxious you are more likely to be on social media, you are more anxious as a result of that. And you’re looking at something, that’s stimulating and delaying sleep.'”
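The headline’s “one night’s worth of sleep a week” claim roughly checks out against the figures in the quote. Here is my back-of-envelope arithmetic, not the study’s own calculation:

```python
# Back-of-envelope check of the "one night's sleep lost per week" claim,
# using the figures quoted above (not the study's own method).
nhs_low, nhs_high = 9.0, 11.0        # NHS guideline, hours per night
reported = 8.7                        # average reported by the children
midpoint = (nhs_low + nhs_high) / 2   # 10.0 hours

nightly_shortfall = midpoint - reported   # 1.3 hours
weekly_shortfall = nightly_shortfall * 7  # about 9.1 hours
print(f"weekly shortfall: {weekly_shortfall:.1f} hours")
# ~9.1 hours, i.e., roughly one full night of sleep per week
```

Measured against the bottom of the NHS range the shortfall is only about two hours a week, but at the midpoint the headline holds.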
Surprising no one, the study found TikTok was the most-used app, followed closely by Snapchat and more distantly by Instagram. Almost 70% of the young respondents spend over four hours a day online, much of that time just before bedtime. Dr. Shaw emphasizes the importance of sleep routines for children and other humans, sharing his personal policy of turning off his phone an hour before bed. When he does make an exception, he at least turns on his blue-light filter.
Nolan mentions California’s recent law that seeks to shield kids from harm by social media, but its provisions address issues like data collection and privacy rather than the compulsion to wake up and check one’s phone. That leaves the ball, once again, in the parents’ court. A good practice is to enforce a rule that kids turn off the device an hour before bed and leave it off overnight. Maybe even store it in our own nightstands. Yes, they may fight us on it. But even if we cannot convince them, we know getting adequate sleep is even more important than checking that feed overnight.
Cynthia Murrell, October 18, 2022
LinkedIn: What Is the Flavor Profile of Poisoned Data?
October 6, 2022
I gave a lecture to some law enforcement professionals focused on cyber crime. In that talk, I referenced three OSINT blind spots; specifically:
- Machine generated weaponized information
- Numeric strings which cause actions within a content processing system
- Poisoned data
This free and public blog is not the place for the examples I presented in my lecture. I can, however, point to the article “Glut of Fake LinkedIn Profiles Pits HR Against the Bots.”
The write up states:
A recent proliferation of phony executive profiles on LinkedIn is creating something of an identity crisis for the business networking site, and for companies that rely on it to hire and screen prospective employees. The fabricated LinkedIn identities — which pair AI-generated profile photos with text lifted from legitimate accounts — are creating major headaches for corporate HR departments and for those managing invite-only LinkedIn groups.
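That “text lifted from legitimate accounts” detail suggests one cheap screening heuristic: compare a suspect bio’s character shingles against known-legitimate bios and flag near-duplicates. A minimal sketch, assuming nothing about LinkedIn’s actual tooling; the data and threshold are invented:

```python
# Hypothetical screen for lifted profile text: character shingles plus
# Jaccard overlap. Illustrative only; not LinkedIn's actual method.

def shingles(text: str, k: int = 5) -> set:
    """Normalize text, then break it into overlapping k-character pieces."""
    text = " ".join(text.lower().split())
    return {text[i:i + k] for i in range(max(1, len(text) - k + 1))}

def jaccard(a: set, b: set) -> float:
    """Overlap between two shingle sets: 1.0 means identical."""
    return len(a & b) / len(a | b) if a and b else 0.0

# Invented data for the demo.
legitimate = {"jane_doe": "Seasoned logistics executive with 20 years in supply chain."}
suspect = "Seasoned logistics executive with 20 years in supply chain!"

for handle, bio in legitimate.items():
    score = jaccard(shingles(bio), shingles(suspect))
    if score > 0.8:  # near-duplicate: likely lifted text
        print(f"suspect bio nearly duplicates {handle}: {score:.2f}")
```

Techniques like MinHash scale the same idea to millions of profiles; the point is only that the lifted-text half of these fakes is detectable in principle.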
LinkedIn is a Microsoft property, and it — like other Microsoft “solutions” — finds itself unable to cope with certain problems. In this case, I am less interested in “human resources,” chief people officers, or talent management issues than in the issue of poisoning a data set.
LinkedIn is supposed to give professionals a service for posting biographies, links to articles, and a semi-blog function with a dash of TikTok. For some, whom I shall not name, it has become a way to preen, promote, and pitch.
But are those outputting the allegedly “real” information operating like good little sixth grade students in a 1950s private school?
Nope.
The article suggests three things to me:
- Obviously Microsoft LinkedIn is unable to cope with this data poisoning
- Humanoid professionals (and probably the smart software scraping LinkedIn for “intelligence”) have no way to discern what’s square and what’s oval
- The notion that this is a new problem is interesting because many individuals are pretty tough to track down. Perhaps these folks don’t exist and never did?
Does this matter? Sure, Microsoft / LinkedIn has to do some actual verification work. Wow. Imagine that. Recruiters / solicitors will have to do more than send a LinkedIn message and set up a Zoom call. (Yeah, Teams is a possibility for some, I suppose.) What about analysts who use LinkedIn as a source of information?
Interesting question.
Stephen E Arnold, October 6, 2022
Quite a Recipe: Zuck-ini with a Bulky Stuffed Sausage
September 28, 2022
Ah, the Zuckbook or the Meta-thing. I can never remember the nomenclature. I thought about the estimable company after I read “Meta Defends Safe Instagram Posts Seen by Molly Russell.” I suppose I should provide a bit of color about Ms. Russell. She was the British schoolgirl who used the digital Zuck-ini’s Instagram recipe for happiness, success, and positive vibes.
However, in Ms. Russell’s case, the journey to community appears to have gone off the rails. Ms. Russell was 14 when she died by suicide. The Meta-thing’s spokesperson at the inquest sparked by Ms. Russell’s death said:
Ms Lagone told the inquest at North London Coroner’s Court she thought it was “safe for people to be able to express themselves” – but conceded two of the posts shown to the court would have violated Instagram’s policies and offered an apology about some of the content. Responding to questioning, she said: “We are sorry that Molly viewed content that violated our policies and we don’t want that on the platform.”
“Move fast and break things” was, I believe, a phrase associated with the Zuck-ini’s garden of delights. In Ms. Russell’s case the broken thing was Ms. Russell’s family. That “sorry” strikes me as meaningful, maybe heartfelt. On the other hand, it might be corporate blather.
Macworld does not address Ms. Russell’s death. However, the article “Despite Apple’s Best Efforts, Meta and Google Are Still Out of Control” is on point. The write up explains that Apple is doing something to slow the stampeding stallions at the Meta-thing and Googzilla.
I noted this passage:
There is a great potential for this [data gathered by certain US high-technology companies] information to be misused and if we in the United States had any sort of functional government, it would have made these sales illegal by now.
My question: What about the combination of a young person’s absorbing content and the systems and methods that display “related” content to a susceptible individual? Might that one-two punch have downsides?
Yep. Is there a fix? Sure, after two decades of inattention, let’s just apply a quick fix or formulate a silver bullet.
But the ultimate fix is, of course, to say, “Sorry.” I definitely want to see the Zuck-ini stuffed. Absent a hot iron poker, an Italian sausage will do.
Stephen E Arnold, September 28, 2022
LinkedIn: The Logic of the Greater Good
September 26, 2022
I have accepted two factoids about life online:
First, the range of topics searched from the computer systems available to my research team is broad and diverse, traversing the regular Web, the Dark Web, and what we call the “ghost Web.” As a result, recommendation systems like those in use by Facebook, Google, and Microsoft are laughable. YouTube’s suggesting that one of my team would like an inappropriate beach fashion show, a fire on a cruise ship, humorous snooker shots, or sounds heard after someone moved to America illustrates the ineffectuality of Google’s smart recommendation software. These recommendations make clear that when smart software cannot identify a pattern, or meets an intentional pattern-disrupting click stream, data poisoning works like a champ. (OSINT fans take note: data poisoning works, and I am not the only person harboring this factoid.) Key factoid: Recommendation systems don’t work, and the outputs can be poisoned… easily.
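To make the poisoning point concrete, here is a toy sketch. The co-occurrence recommender and the numbers are mine, not any vendor’s; real systems are fancier, but the failure mode is the same:

```python
# Toy co-occurrence recommender, before and after a pattern-disrupting
# click stream. Entirely illustrative; not any vendor's algorithm.
import random
from collections import Counter

random.seed(0)  # make the demo deterministic

def recommend(sessions, item, n=3):
    """Rank items by how often they share a session with `item`."""
    co = Counter()
    for session in sessions:
        if item in session:
            co.update(x for x in session if x != item)
    return [x for x, _ in co.most_common(n)]

clean = [["snooker", "cruise-fire", "beach-show"] for _ in range(20)]
print("clean:   ", recommend(clean, "snooker"))
# -> the genuine co-clicks: cruise-fire, beach-show

# Poison: pad every session with random junk clicks; the junk's
# co-occurrence counts swamp the genuine signal.
poisoned = [s + [f"junk-{random.randrange(4)}" for _ in range(8)] for s in clean]
print("poisoned:", recommend(poisoned, "snooker"))
# -> junk-* items now outrank the real pattern
```

A handful of noise items per session is enough to push the real associations out of the top slots, which is the whole trick.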
Second, profile-centric systems like Facebook’s properties or the LinkedIn social network struggle to identify information that is relevant. Thus, we ignore the suggestions about who is hiring people with a profile like mine and the requests to be friends. These are amusing. Here are some anonymized examples. A female in Singapore wanted to connect me with an escort when I was next in Singapore. I interpreted this as a solicitation somewhat ill suited to a 77 year old male who no longer flies to Washington, DC. Forget Singapore. What about a person who is a salesperson at a cable company? Or a person who does land use planning in Ecuador? Or a person with 19 years’ experience as a Google “partner”? You get the idea: pimps and resellers of services which could be discontinued without warning. Key factoid: The recommendations don’t reflect that I am retired, give lectures to law enforcement and intelligence professionals, and stay in my office in rural Kentucky with my lovable computers, a not so lovable French bulldog, and my main squeeze of the last 53 years. (Sorry, Singapore intermediary for escorts.)
I read a write up in the indigestion inducing New York Times. I am never sure if the stories are accurate, motivated by social bias, written by a persistent fame seeker, or just made up by a modern-day Jayson Blair. For info, click here. (You will have to pay to view this exciting story about fiction presented as “real” news.)
The story catching my attention today (Saturday, September 24, 2022) has the title “LinkedIn Ran Social Experiments on 20 Million Users over Five Years.” Obviously the author is not familiar with the default security and privacy settings in Windows 10 and that outstanding Windows 11. Data collection, both explicit and implicit, is woven into the warp and woof of the operating systems’ fabric.
Since Microsoft owns LinkedIn, it did not take me long to conclude that LinkedIn, like its precursor Plaxo, had to be approached with caution, great caution. The write up reports that some Ivory Tower types figured out that LinkedIn ran, and probably still runs, tests to determine what can get more users, more clicks, and more advertising dollars for the Softies. An academic stalking horse is usually a good idea.
I did spot several comments in the write up which struck me as amusing. Let’s look at three:
First, consider this statement:
LinkedIn, which is owned by Microsoft, did not directly answer a question about how the company had considered the potential long term consequences of its experiments on users’ employment and economic status.
No kidding. A big tech company being looked at for its allegedly monopolistic behaviors did not directly answer a New York Times reporter’s questions. Earth shaking. But the killer gag for me is wanting to know if Microsoft LinkedIn “considered the potential long term consequences of its experiments.” Ho ho ho. Long term at a high tech outfit is measured in 12 week chunks. Sure, there may be a five year plan, but it probably still includes references to Microsoft’s network card business, the outlook for Windows Phone and Nokia, getting the menus and icons in Office 365 to be the same across MSFT applications, and pitching the security of Microsoft Azure and Exchange as bulletproof. (Remember: there is a weapon called the Snipex Alligator, but it is not needed to blast holes through some of Microsoft’s vaunted security systems, I have heard.)
Second, what about this passage from the write up:
Professor Aral of MIT said the deeper significance of the study was that it showed the importance of powerful social networking algorithms — not just in amplifying problems like misinformation but also as fundamental indicators of economic conditions like employment and unemployment.
I think a few people understand that social media information delivered quickly can have a corrosive, disintermediating impact. Examples range from flash mob riots to teens killing themselves because social media does such a bang up job of helping adolescents deal with inputs from strangers and algorithms which highlight the thrill of blue screening oneself. The excitement of asking people who won’t help one find a job is probably less of a downer, but failing to land an interview via LinkedIn might spark binge watching of “Friends.”
Third, I loved this passage:
“… If you want to get more jobs, you should be on LinkedIn more.”
Yeah, that’s what I call psychological triggering: Be on LinkedIn more. Now. Log on. Just don’t bother to ask me to add you to my network of people whom I don’t know, because “Stephen E Arnold” on LinkedIn is managed by different members of my team.
Net net: Which is better, the New York Times or Microsoft LinkedIn? You have 10 minutes to craft an answer, which you can post on LinkedIn among the self promotions, weird facts, and news about business opportunities like paying some outfit to put you on a company’s Board of Advisors.
Yeah, do it.
Stephen E Arnold, September 26, 2022
Facebook: Slow and TikTok: Fast. Can Research Keep Pace with Effects?
September 23, 2022
I read “Facebook Proven to Negatively Impact Mental Health.” The academic analysis spanned about two decades. The conclusion is that Facebook (the poster child for bringing people together) is bad news for happy thoughts.
I noted this passage:
The study was based on data that dates back to the 2004 advent of Facebook at Harvard University, before it took the internet by storm. Facebook was initially accessible only to Harvard students who had a Harvard email address. Quickly spreading to other colleges in and outside the US, the network was made available to the general public in the US and beyond in September 2006. The researchers were able to analyze the impact of social media use by comparing colleges that had access to the platform to colleges that did not. The findings show a rise in the number of students reporting severe depression and anxiety (7% and 20% respectively).
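The article does not name the statistical method, but comparing colleges that had Facebook against colleges that did not yet have it has the shape of a difference-in-differences design. A toy version of that logic, with invented numbers:

```python
# Difference-in-differences sketch with invented numbers; this is my
# reading of the study's design, not its actual data or model.
treated_before, treated_after = 0.10, 0.14  # colleges that got Facebook
control_before, control_after = 0.10, 0.11  # colleges that did not yet

# Subtracting the control group's trend strips out whatever was
# happening to student mental health everywhere at the same time.
effect = (treated_after - treated_before) - (control_after - control_before)
print(f"estimated effect of access: {effect:+.1%}")  # +3.0% with these toy numbers
```

The staggered campus-by-campus rollout is what makes the comparison possible at all; a platform that arrives everywhere at once leaves no control group.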
The phrase which caught my attention is “quickly spreading.” Sure, by the standards of yesteryear, Facebook was like Road Runner. My thought is that the velocity of TikTok is different:
- Slow ramp and then accelerating user growth
- Rapid fire content consumption
- Short programs which Marshall McLuhan would be interested in if he were alive
- Snappy serve-up algorithms.
Facebook is a rabbit with a bum foot. No lucky charm for the Zuckers. TikTok is a Chinese knock-off of the SR-71.
Perhaps the researchers in Ivory Towerville will address these questions:
- What’s the impact of high velocity, short image-centric videos on high school and grade school students?
- What can weaponized information accomplish in attitude change on certain issues like body image, perception of reality, and the value of self-harm?
- What mental changes take place when information is obtained from a TikTok type source?
Do you see the research challenge? Researchers are just now validating what has been evident to many commercial database publishers for many years. With go-go TikTok, how many years will it take to validate the downsides of this outstanding, intellect-enhancing service?
Stephen E Arnold, September 23, 2022
TikTok: A Slick Engine to Output Blackmail Opportunities
September 22, 2022
Some topics are not ready for online experts who think they know how sophisticated data collection and analytics “work.” The article “TikTok’s Algorithms Knew I Was Bi before I Did. I’m Not the Only One” provides a spy-tingling glimpse into what the China-linked TikTok video addiction machine can do. In recent testimony, TikTok’s handwaver explained that the Middle Kingdom’s psychological profiling Ming tea pot is nothing more than kid vids.
The write up explains:
On TikTok, the relationship between user and algorithm is uniquely (even sometimes uncannily) intimate.
This sounds promising: Intimate as in pillow talk, secret thoughts, video moments forgotten but not lost to fancy math. The article continues:
There is something about TikTok that feels particularly suited to these journeys of sexual self-discovery and, in the case of women loving women, I don’t think it’s just the prescient algorithm. The short-form video format lends itself to lightning bolt-like jolts of soul-bearing nakedness…
Is it just me or is the article explaining exactly how TikTok can shape and then cause a particular behavior? I learned more:
I hadn’t knowingly been deceiving or hiding this part of me. I’d simply discovered a more appropriate label. But it was like we were speaking different languages.
Weaponizing TikTok probably does not remake an individual. The opportunity the system presents to an admin with information weaponization in mind is to nudge TikTok absorbers into a mindset, making it easier to shape a confused, impressionable, or clueless person to be like Ponce de Leon and explore something new.
None of this makes much sense to a seventh grader watching shuffle dance steps. But the weaponization of information angle is what makes blackmail opportunities bloom. What if the author was not open about the TikTok-nudged or -induced shift? Could that information, or some other unknown or hidden facet of the past, be used to obtain access credentials, a USB stuffed with an organization’s secrets, or the abuse of a position of trust to advance a particular point of view?
The answer is, “Yep.” Once there is a tool, that tool will be used. Then the tool will be applied to other use cases or opportunities to lure people to an idea like “Hey, that island is just part of China” or something similar.
In my opinion, that’s what the article is struggling to articulate: TikTok means trouble, and the author is “not the only one.”
Stephen E Arnold, September 22, 2022
Mastodon: An Elephant in the Room or a Dwarf Stegodon in the Permafrost?
September 22, 2022
Buzzkill and Crackpot have been pushing Mastodon for years. If you are familiar with the big elephant, you know that mastodons are extinct. If you are not familiar with the distributed Mastodon, that creature is alive and stomping around. (The dwarf stegodon Facebook may become today’s MySpace.)
“Chinese Social Media Users Are Flocking to the Decentralised Mastodon Platform to Find Community amid Crackdown at Home” explains, once one pays up to access the write up:
Mastodon, an open-source microblogging software, was created by German developer Eugen Rochko in 2017 as a decentralised version of Twitter that is difficult to block or censor. It was partially a response to the control over user data exerted by Big Tech platforms, and the source code has since been used for many alternative platforms catering to those disaffected with mainstream options.
Features attractive to those eager to avoid big tech include, says the report:
Older posts are also difficult to resurface, as there is no free text search, only searching for hashtags. This is by design and encourages users to be more comfortable sharing their thoughts in the moment without worrying about how that information will be used in the future. Blocking content is also difficult for the Great Firewall because it is shared across instances. Alive.bar might be blocked, but people on another domain can follow users there.
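The hashtag-only discovery model is visible in Mastodon’s public API: a stock instance exposes a per-hashtag timeline endpoint, while full-text search over other people’s posts is not part of the default setup. A minimal sketch; the instance and hashtag here are illustrative:

```python
# Pull recent public posts for a hashtag from a Mastodon instance.
# The hashtag timeline is part of Mastodon's documented public API;
# the instance URL and tag below are just examples.
import requests

INSTANCE = "https://mastodon.social"
TAG = "osint"

resp = requests.get(
    f"{INSTANCE}/api/v1/timelines/tag/{TAG}",
    params={"limit": 5},
    timeout=10,
)
resp.raise_for_status()
for status in resp.json():
    print(status["account"]["acct"], "-", status["url"])
```

Note that the request goes to one instance; the same tag on another domain is a separate query, which is why blocking a single server accomplishes so little.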
Will Chinese uptake of Mastodon cause the beast to multiply and go forth? With censorship and online incivility apparently on the rise, yep.
Stephen E Arnold, September 22, 2022
Be an Information Warrior: Fun and Easy Too
September 16, 2022
I spotted an article in Politico. I won’t present the full title because the words in that title will trigger a range of smart software armed with stop words. Here’s the link if you want to access the source to which I shall refer.
I can paraphrase the title, however. Here’s my stab at avoiding digital tripwires: “Counter Propaganda Tailored to Neutralize Putin’s Propaganda.”
The idea is that a “community” has formed to pump out weaponized and targeted information under the banner of the North Atlantic Fella Organization. The source article says:
NAFO “fellas,” as they prefer to be called, emblazon their Twitter accounts with the Shiba Inu avatar. They overlay the image on TikTok-style videos of Ukrainian troops set to dance music soundtracks. They pile onto Russian propaganda via coordinated social media attacks that rely on humor — it’s hard to take a badly-drawn dog meme seriously — to poke fun at the Kremlin and undermine its online messaging.
The idea is that NAFO is “weaponizing meme culture.” The icon for the informal group is Elon Musk’s favorite digital creature.
The image works well with a number of other images in my opinion. The source write up contains a number of examples.
My thought is that if one has relatives or friends in Russia, joining the NAFO outfit might have some knock-on consequences.
From my point of view, once-secret and little-known information warfare methods are now like Elon Musk. Everywhere.
Stephen E Arnold, September 16, 2022
False Expertise: Just Share and Feel Empowered in Intellect
September 15, 2022
I read “Sharing on Social Media Makes Us Overconfident in Our Knowledge.” The write up states:
Social media sharers believe that they are knowledgeable about the content they share, even if they have not read it or have only glanced at a headline. Sharing can create this rise in confidence because by putting information online, sharers publicly commit to an expert identity. Doing so shapes their sense of self, helping them to feel just as knowledgeable as their post makes them seem.
If the source were a hippy dippy online marketing outfit, I would have ignored the write up. But the research comes from a cow town university. I believe the write up. Would those cowpokes steer me wrong, pilgrim?
I wonder if the researchers will take time out after a Cowboy Kent Rollins cook out to explore the correlation between the boundless expertise of the Silicon Valley “real news” crowd and this group’s dependence on Twitter and similar output channels?
That would make an interesting study because some of the messaging is wild and crazy like a college professor lost in a college bar on dollar beer night.
Stephen E Arnold, September 15, 2022
Tweet Terror in Some Geographic Areas
September 8, 2022
While Western countries are chided for controversial engagement with LGBTQ groups, that criticism cannot compare to the staunch hatred these groups face in the Middle East. The Middle East is dominated by fundamentalist Islamic governments that criminalize homosexuality and transgender people. Unfortunately, these groups experienced a new wave of hatred, Euro News reported in “Arabic Anti-LGBTQ Campaign Goes On Twitter.”
The anti-LGBTQ campaign is called Fetrah, meaning “human instinct” in Arabic. Three Egyptian marketing professionals experienced in social media campaigns designed it. Fetrah promotes the idea of only two genders, rejects homosexuality, and rallies supporters around a blue and pink flag.
Meta deleted the Fetrah page, but supporters managed to get a different page up on Facebook as well as on Instagram. Unlike other social media platforms, Twitter does not ban hate groups like Fetrah:
“Mahsa Alimardani, a digital rights expert told the Cube that Twitter and other social media platforms should be investing more resources into fighting this harmful campaign. ‘Too much censorship and policing can actually be a problem on some platforms but with Twitter we often find that the reverse is true, especially when it comes to harassment and harmful content targeting vulnerable communities’ said Alimardini. ‘We can see here a prime example of how queer communities in the Middle Eastern and North African regions can be harmed by Twitter’s inaction. The platform has very high threshold when it comes to policing content, which can be harmful,’ she added.”
Western countries have their faults, but many people have a “live and let live” attitude when it comes to LGBTQ people. People in the Middle East are not that different, but hatred is unfortunately promoted by religious governments.
Whitney Grace, September 8, 2022