Gallup on Social Media: Just One, Tiny, Irrelevant Data Point Missing
October 23, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I read “Teens Spend Average of 4.8 Hours on Social Media Per Day.” I like these insights into how intelligence is being whittled away.
Me? A dumb bunny. Thanks, MidJourney. What dumb bunny inspired you?
Three findings caught my attention, and one, tiny, irrelevant data point I noticed was missing. Let’s look at three of the hooks snagging me.
First, the write up reveals:
Across age groups, the average time spent on social media ranges from as low as 4.1 hours per day for 13-year-olds to as high as 5.8 hours per day for 17-year-olds.
Doesn’t that seem like a large chunk of one’s day?
Second, I learned that the research unearthed this insight:
Teens report spending an average of 1.9 hours per day on YouTube and 1.5 hours per day on TikTok
I assume the bright spot is that only two plus hours are invested in reading X.com, Instagram, and encrypted messages.
Third, I learned:
The least conscientious adolescents — those scoring in the bottom quartile on the four items in the survey — spend an average of 1.2 hours more on social media per day than those who are highly conscientious (in the top quartile of the scale). Of the remaining Big 5 personality traits, emotional stability, openness to experience, agreeableness and extroversion are all negatively correlated with social media use, but the associations are weaker compared with conscientiousness.
Does this mean that social media is particularly effective on the most vulnerable youth?
Now let me point out the one item of data I noted was missing:
How much time does this sample spend reading?
I think I know the answer.
Stephen E Arnold, October 23, 2023
Digital Addiction Game Plan: Get Those Kiddies When Young
April 6, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I enjoy research which provides roadmaps for confused digital Hummer drivers. The Hummer weighs more than four tons and costs about the same as one GMLRS rocket. Digital weapons are more effective and less expensive. One does give up a bit of shock and awe, however. Life is full of trade-offs.
The information in “Teens on Screens: Life Online for Children and Young Adults Revealed” is interesting. The analytics wizards have figured out how to hook a young person on zippy new media. I noted this insight:
Children are gravitating to ‘dramatic’ online videos which appear designed to maximize stimulation but require minimal effort and focus…
How does one craft a magnetic video:
Gossip, conflict, controversy, extreme challenges and high stakes – often involving large sums of money – are recurring themes. ‘Commentary’ and ‘reaction’ video formats, particularly those stirring up rivalry between influencers while encouraging viewers to pick sides, were also appealing to participants. These videos, popularized by the likes of Mr Beast, Infinite and JackSucksAtStuff, are often short-form, with a distinct, stimulating, editing style, designed to create maximum dramatic effect. This involves heavy use of choppy, ‘jump-cut’ edits, rapidly changing camera angles, special effects, animations and fast-paced speech.
One interesting item in the article’s summary of the research concerned “split screening.” The term means that one watches more than one short-form video at the same time. (As a dinobaby, I have to work hard to get one thing done. Two things simultaneously. Ho ho ho.)
What can an enterprising person interested in weaponizing information do? Here are some ideas:
- Undermine certain values
- Present shaped information
- Take time from less exciting pursuits like homework and reading books
- Displace self-esteem-building experiences
Who cares? Advertisers, those hostile to the interests of the US, groomers, and probably several other cohorts.
I have to stop now. I need to watch multiple TikToks.
Stephen E Arnold, April 6, 2023
Surprise: TikTok Reveals Its Employees Can View European User Data
December 28, 2022
What a surprise. The Tech Times reports, “TikTok Says Chinese Employees Can Access Data from European Users.” This includes workers not just within China, but also in Brazil, Canada, Israel, Japan, Malaysia, Philippines, Singapore, South Korea, and the United States. According to The Guardian, TikTok revealed the detail in an update to its privacy policy. We are to believe it is all in the interest of improving the users’ experience. Writer Joseph Henry states:
“According to ByteDance, TikTok’s parent firm, accessing the user data can help in improving the algorithm performance on the platform. This would mean that it could help the app to detect bots and malicious accounts. Additionally, this could also give recommendations for content that users want to consume online. Back in July, Shou Zi Chew, a TikTok chief executive clarified via a letter that the data being accessed by foreign staff is a ‘narrow set of non-sensitive’ user data. In short, if the TikTok security team in the US gives a green light for data access, then there’s no problem viewing the data coming from American users. Chew added that the Chinese government officials do not have access to these data so it won’t be a big deal to every consumer.”
Sure they don’t. Despite assurances, some are skeptical. For example, we learn:
“US FCC Commissioner Brendan Carr told Reuters that TikTok should be immediately banned in the US. He added that he was suspicious as to how ByteDance handles all of the US-based data on the app.”
Now just why might he doubt ByteDance’s sincerity? What about consequences? As some Sillycon Valley experts say, “No big deal. Move on.” Dismissive naïveté is helpful, even charming.
Cynthia Murrell, December 28, 2022
SocialFi, a New Type of Mashup
November 22, 2022
Well this is quite an innovation. HackerNoon alerts us to a combination of just wonderful stuff in, “The Rise of SocialFi: A Fusion of Social Media, Web3, and Decentralized Finance.” Short for social finance, SocialFi builds on other -Fi trends: DeFi (decentralized finance) and GameFi (play-to-earn crypto currency games). The goal is to monetize social media through “tokenized achievements.” Writer Samiran Mondal elaborates:
“SocialFi is an umbrella that combines various elements to provide a better social experience through crypto, DeFi, metaverse, NFTs, and Web3. At the heart of SocialFi are monetization and incentivization through social tokens. SocialFi offers many new ways for users, content creators, and app owners to monetarize their engagements. This has perhaps been the most attractive aspect of SocialFi. By introducing the concept of social tokens and in-app utility tokens. Notably, these tokens are not controlled by the platform but by the creator. Token creators have the power to decide how they want their tokens to be utilized, especially by fans and followers.”
This monetization strategy is made possible by more alphabet soup—PFP NFTs, or picture-for-proof non-fungible tokens. These profile pictures identify users, provide proof of NFT ownership, and connect users to specific SocialFi communities. Then there are the DAOs, or decentralized autonomous organizations. These communities make decisions through member votes to prevent the type of unilateral control exercised by companies like Facebook, Twitter, and TikTok. This arrangement provides another feature. Or is it a bug? We learn:
“SocialFi also advocates for freedom of speech since users’ messages or content are not throttled by censorship. Previously, social media platforms were heavily plagued by centralized censorship that limited what users could post to the extent of deleting accounts that content creators and users had poured their hearts and souls into. But with SocialFi, users have the freedom to post content without the constant fear of overreaching moderation or targeted censorship.”
Sounds great, until one realizes what Mondal calls overreach and censorship would include efforts to quell the spread of misinformation and harmful content already bedeviling society. Do those behind SocialFi have any plans to address that dilemma? Sure.
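For readers who want the mechanics in concrete terms, here is a minimal, hypothetical Python sketch of the two ideas Mondal describes: a creator-controlled social token and a token-weighted DAO vote. It is a toy under stated assumptions, not an implementation of any actual SocialFi platform; the class names, reward amounts, and simple majority rule are invented for illustration only.

```python
# Toy illustration only: invented names, no real blockchain, wallets, or NFTs.
from collections import defaultdict


class SocialToken:
    """A creator-issued token; the creator decides how it is handed out."""
    def __init__(self, creator: str, symbol: str):
        self.creator = creator
        self.symbol = symbol
        self.balances = defaultdict(int)

    def reward(self, fan: str, amount: int) -> None:
        """The creator grants tokens to a fan, e.g., for engagement."""
        self.balances[fan] += amount


class ToyDAO:
    """Decisions come from token-weighted member votes, not from a platform."""
    def __init__(self, token: SocialToken):
        self.token = token

    def vote(self, proposal: str, ballots: dict) -> bool:
        """Tally yes/no ballots weighted by each member's token balance."""
        yes = sum(self.token.balances[m] for m, choice in ballots.items() if choice)
        no = sum(self.token.balances[m] for m, choice in ballots.items() if not choice)
        print(f"{proposal}: yes={yes} no={no}")
        return yes > no


# Usage: a creator rewards fans, then the community votes on a rule change.
token = SocialToken(creator="alice", symbol="ALC")
token.reward("bob", 10)
token.reward("carol", 3)
dao = ToyDAO(token)
dao.vote("Allow sponsored posts?", {"bob": False, "carol": True})  # 3 yes vs 10 no -> False
```

The design point the sketch tries to capture is that the token issuer, not the platform, decides who receives tokens, and community decisions then flow from holders’ weighted votes rather than from a central moderator.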
Cynthia Murrell, November 22, 2022
Evolution? Sure, Consider the Future of Humanoids
November 11, 2022
It’s Friday, and everyone deserves a look at what their children’s grandchildren will look like. Let me tell you. These progeny will be appealing folk. “Future Humans Could Have Smaller Brains, New Eyelids and Hunchbacks Thanks to Technology.” Let’s look at some of the real “factoids” in this article from the estimable, rock-solid fact factory, The Daily Sun:
- A tech neck, which looks to me like a baby hunchback
- Smaller brains (evidence of this may be available now; just ask a teen cashier to make change)
- A tech claw. I think this means fingers adapted to thumb typing and clicky keyboards.
I must say that these adaptations seem to be suited to the digital environment. However, what happens if there is no power?
Perhaps Neanderthal characteristics will manifest themselves? Please, check the cited article for an illustration of the future human. My first reaction was, “Someone who looks like that works at one of the high-tech companies near San Francisco.” Silicon Valley may be the cradle of adapted humans at this time. Perhaps a Stanford grad student will undertake a definitive study by observing those visiting Philz Coffee.
Stephen E Arnold, November 11, 2022
The Google: Indexing and Discriminating Are Expensive. So Get Bigger Already
November 9, 2022
It’s Wednesday, November 9, 2022, only a few days until I hit 78. Guess what? Amidst the news of crypto currency vaporization, hand wringing over the adult decisions forced on high school science club members at Facebook and Twitter, and the weirdness about voting — there’s a quite important item of information. This particular datum is likely to be washed away in the flood of digital data about other developments.
What is this gem?
An individual has discovered that the Google is not indexing some Mastodon servers. You can read the story in a Mastodon post at this link. Don’t worry. The page will resolve without trying to figure out how to make Mastodon stomp around in the way you want it to. The link takes you to Snake.club and Stephen Brennan.
The item is that Google does not index every Mastodon server. The Google, according to Mr. Brennan:
has decided that since my Mastodon server is visually similar to other Mastodon servers (hint, it’s supposed to be) that it’s an unsafe forgery? Ugh. Now I get to wait for what will likely be a long manual review cycle, while all the other people using the site see this deceptive, scary banner.
So what?
Mr. Brennan notes:
Seems like El Goog has no problem flagging me in an instant, but can’t cleanup their mistakes quickly.
A few hours later Mr. Brennan reports:
However, the Search Console still insists I have security problems, and the “transparency report” here agrees, though it classifies my threat level as Yellow (it was Red before).
Is the problem resolved? Sort of. Mr. Brennan has concluded:
… maybe I need to start backing up my Google data. I could see their stellar AI/moderation screwing me over, I’ve heard of it before.
Why do I think this single post and thread is important? Four reasons:
- The incident underscores how an individual perceives Google as “the Internet.” Despite the use of a decentralized, distributed system, the mindset of some Mastodon users is that Google is the be-all and end-all. It’s not, of course. But if people forget that there are other quite useful ways of finding information, the desire to please, think, and depend on Google becomes the one true way. Outfits like Mojeek.com don’t have much of a chance of getting traction with those in the Google datasphere.
- Google operates on a close-enough-for-horseshoes or good-enough approach. The objective is to sell ads. This means that big is good. The good-enough principle doesn’t do a great job of indexing Twitter posts, but Twitter is bigger than Mastodon in terms of eyeballs. Therefore, it is a consequence of good-enough methods to shove small and low-traffic content output into an area surrounded by Google’s police tape. Maybe Google wants Mastodon users behind its police tape? Maybe Google does not care today but will if and when Mastodon gets bigger? Plus some Google advertisers may want to reach those reading search results citing Mastodon? Maybe? If so, Mastodon servers will become important to the Google for revenue, not content.
- Google does not index “the world’s information.” The system indexes some information, ideally information that will attract users. In my opinion, the once naive company allegedly wanted to index all the world’s information. Mr. Page and I were on a panel about Web search as I recall. My team and I had sold to CMGI some technology which was incorporated into Lycos. That’s why I was on the panel. Mr. Page rolled out the notion of an “index to the world’s information.” I pointed out that indexing rapidly expanding content and capturing changes to previously indexed content would be increasingly expensive. The costs would be high and quite hard to control without reducing the scope, frequency, and depth of the crawls. (A back-of-the-envelope sketch of this cost argument appears after this list.) But Mr. Page’s big idea excited people. My mundane financial and technical truths were of zero interest to Mr. Page and most in the audience. And today? Google’s management team has to work overtime to try to contain the costs of indexing near-real-time flows of digital information. The expense of maintaining and reindexing backfiles is easier to control. Just reduce the scope of sites indexed, the depth of each crawl, the frequency with which certain sites are reindexed, and how much old content is displayed. If no one looks at these data, why spend money on them? Google is not Mother Teresa and certainly not the Andrew Carnegie library initiative. Mr. Brennan brushed against an automated method that appears to say, “The small is irrelevant because advertisers want to advertise where the eyeballs are.”
- Google exists for two reasons: First, to generate advertising revenue. Why? None of its new ventures have been able to deliver advertising-equivalent revenue. But cash must flow and grow or the Google stumbles. Google is still what a Microsoftie called a “one-trick pony” years ago. The one-trick pony is the star of the Google circus. Performing Mastodons are not in the tent. Second, Google wants very much to dominate cloud computing, off-the-shelf machine learning, and cyber security. This means that the performing Mastodons have to do something that gets the GOOG’s attention.
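Here is the back-of-the-envelope sketch promised in the indexing bullet above. Every figure in it is an invented assumption, not a Google number; the only point is that crawl cost scales roughly with pages times refresh frequency times per-fetch cost, which is why trimming scope, depth, and recrawl rate is the natural cost-control lever.

```python
# Toy crawl-cost model. All inputs are invented assumptions, not Google data;
# the takeaway is the linear scaling, not the dollar totals.
def annual_crawl_cost(pages: float, recrawls_per_year: float, cost_per_fetch: float) -> float:
    """Cost grows with (pages indexed) x (refresh frequency) x (cost per fetch)."""
    return pages * recrawls_per_year * cost_per_fetch


baseline = annual_crawl_cost(pages=50e9, recrawls_per_year=12, cost_per_fetch=1e-4)
trimmed = annual_crawl_cost(pages=25e9, recrawls_per_year=6, cost_per_fetch=1e-4)

print(f"baseline: ${baseline:,.0f} per year")  # $60,000,000 with these made-up inputs
print(f"trimmed:  ${trimmed:,.0f} per year")   # $15,000,000 after halving scope and refresh rate
```

Halving both the scope and the refresh rate cuts the bill to a quarter, which is exactly the lever the bullet above says Google pulls when the eyeballs are not there.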
Net net: I find it interesting to see examples of those younger than I discovering the precise nature of Google. Many of these individuals know only Google. I find that sad and somewhat frightening, perhaps more troubling than Mr. Putin’s nuclear bomb talk. Mr. Putin can be seen and heard. Google controls its datasphere. Like goldfish in a bowl, its users find it tough to understand the world containing that bowl and its inhabitants.
Stephen E Arnold, November 9, 2022
COVID-19 Made Reading And Critical Thinking Skills Worse For US Students
March 31, 2022
The COVID-19 pandemic marked the first time in history that remote learning was implemented at all educational levels. As the guinea pig generation, students were forced to deal with technical interruptions and, unfortunately, not the best education. Higher-achieving and average students will be able to compensate for the last two years of education, but the lower achievers will have trouble. The pandemic, however, exacerbated an existing issue with learning.
Kids have been reading less with each passing year because they spend more time consuming social media and videogames. Kids are reading, except they are absorbing a butchered form of the English language and not engaging their critical thinking skills. Social media and videogames are passive activities. Another factor behind the low reading skills, says The New York Times in “The Pandemic Has Worsened The Reading Crisis In Schools,” is the lack of educators:
“The causes are multifaceted, but many experts point to a shortage of educators trained in phonics and phonemic awareness — the foundational skills of linking the sounds of spoken English to the letters that appear on the page.”
According to the article, remote learning lessened the quality of instruction elementary school students received on reading fundamentals. It is essential for kids to pick up the basics in elementary school; otherwise higher education will be more difficult. Federal funding is being used for assistance programs, but there is a lack of personnel. Trained individuals are leaving public education for the private sector, because it is more lucrative.
Poor reading skills feed into poor critical thinking skills. The Next Web explores the dangers of deep fakes and how easily they fool people in “Deep Fakes Study Finds Doctored Text Is More Manipulative Than Phony Video.” Deep fakes are a dangerous AI threat, but MIT Media Lab scientists discovered that people have a hard time discerning fake sound bites:
“Scientists at the MIT Media Lab showed almost 6,000 people 16 authentic political speeches and 16 that were doctored by AI. The sound bites were presented in permutations of text, video, and audio, such as video with subtitles or only text. The participants were told that half of the content was fake, and asked which snippets they believed were fabricated.”
When the participants were shown only text, they barely discovered the falsehoods, with a 57% success rate, while they were more accurate with video plus subtitles (66%) and best with video and text combined (82%). Participants relied on tone and vocal conveyance to discover the fakes, which makes sense given that is how people detect lying:
“The study authors said the participants relied more on how something was said than the speech content itself: ‘The finding that fabricated videos of political speeches are easier to discern than fabricated text transcripts highlights the need to re-introduce and explain the oft-forgotten second half of the ‘seeing is believing’ adage.’ There is, however, a caveat to their conclusions: their deep fakes weren’t exactly hyper-realistic.”
Low-quality deep fakes are not as dangerous as a single video with high resolution, great audio, and spot-on duplicates of the subjects. Even the smartest people are more likely to be tricked by one high-quality deep fake than by thousands of bad ones.
It is more alarming that participants did not do well with the text-only sound bites. Did they lack the critical thinking and reading skills they should have learned in elementary school, or did the lack of delivery from a human stump them?
Students need to focus on the basics of reading and critical thinking, because these skills underpin their entire education. Nothing is more fundamental.
Whitney Grace, March 31, 2022
Misunderstanding a Zuck Move
February 4, 2022
I read some posts — for instance, “Facebook Just Had Its Most Disappointing Quarter Ever. Mark Zuckerberg’s Response Is the 1 Thing No Leader Should Ever Do” — suggesting that Mark Zuckerberg is at fault for his company’s contrarian financial performance. The Zuck move is a standard operating procedure in a high school science club. When caught with evidence of misbehavior, in my high school science club in 1958, we blamed people in the band. We knew that blaming a mere athlete would result in a difficult situation in the boys’ locker room.
Thus it is obvious that the declining growth, the rise of the Chinese surveillance video machine, and the unfriended Tim Apple are responsible for what might be termed a zuck up. If this reminds you of a phrase used to characterize other snarls like the IRS pickle, you are not thinking the way I am. A “zuck up” is a management action which enables the world to join together. Think of the disparate groups who can find fellow travelers; for example, insecure teens who need mature guidance.
I found this comment out of step with the brilliance of the lean in behavior of Mr. Zuckerberg:
Ultimately, you don’t become more relevant by pointing to your competitors and blaming them for your performance. That’s the one thing no company–or leader–should ever do.
My reasoning is that Mr. Zuckerberg is a manipulator, a helmsman, if you will. Via programmatic methods, he achieved a remarkable blend of human pliability and cash extraction. He achieved success by clever disintermediation of some of his allegedly essential aides de camp. He achieved success by acquiring competitors and hooking these third party mechanisms into the Facebook engine room. He dominated because he understood the hot buttons of Wall Street.
I expect the Zuck, like the mythical phoenix (not the wonderful city in Arizona) to rise from the ashes of what others perceive as a failure. What the Zuck will do is use the brilliant techniques of the adolescent wizards in a high school science club to show “them” who is really smart.
Not a zuck up.
Stephen E Arnold, February 4, 2022
Facebook and Social Media: How a Digital Country Perceives Its Reality
September 17, 2021
I read “Instagram Chief Faces Backlash after Awkward Comparison between Cars and Social Media Safety.” This informed senior manager at Facebook seems to have missed a book on many reading lists. The book is one I have mentioned a number of times in the last 12 years since I have been capturing items of interest to me and putting my personal “abstracts” online.
Jacques Ellul is definitely not going to get a job working on the script for the next Star Wars’ film. He won’t be doing a script for a Super Bowl commercial. Most definitely Dr. Ellul will not be founding a church called “New Technology’s Church of Baloney.”
Dr. Ellul died in 1994, and it is not clear if he knew about online or the Internet. He jabbered at the University of Bordeaux, wrote a number of books about technology, and inspired enough people to set up the International Jacques Ellul Society.
One of his books was The Technological Bluff, published in French as Le bluff technologique.
The article that sparked my thoughts about Dr. Ellul contains this statement:
“We know that more people die than would otherwise because of car accidents, but by and large, cars create way more value in the world than they destroy,” Mosseri said Wednesday on the Recode Media podcast. “And I think social media is similar.”
Dr. Ellul might have raised a question or two about Instagram’s position. Both are technology; both have had unintended consequences. On one hand, the auto created some exciting social changes which can be observed when sitting in traffic: eating in the car, road rage, dead animals on the side of the road, etc. On the other hand, social media is sparking upticks in the personal destruction of young people and perceptual mismatches between what their biomass looks like and what an “influencer” looks like wearing clothing from Buffbunny.
Several observations:
- Facebook is influential, at least sufficiently noteworthy for China to take steps to trim the sails of the motor yacht Zucky
- Facebook’s pattern of shaping reality via its public pronouncements, testimony before legislative groups, and on podcasts generates content that seems to be different from a growing body of evidence that Facebook facts are flexible
- Social media as shaped by the Facebook service, Instagram, and the quite interesting WhatsApp service is perhaps the most powerful information engine created. (I say this fully aware of Google’s influence and Amazon’s control of certain data channels.) Facebook is a digital Major Gérald, just with its own Légion étrangère.
Net net: Regulation time and fines that amount to more than a few hours’ revenue for the firm. Also reading Le bluff technologique and writing an essay called “How technology deconstructs social fabrics.” Blue book, handwritten, and three outside references from peer-reviewed journals about human behavior. Due on Monday, please.
Stephen E Arnold, September 17, 2021
That Online Thing Spawns Emily Post-Type Behavior, Right?
July 21, 2021
Friendly virtual watering holes or platforms for alarmists? PC Magazine reports, “Neighborhood Watch Goes Rogue: The Trouble with Nextdoor and Citizen.” Writer Christopher Smith introduces his analysis:
“Apps like Citizen and Nextdoor, which ostensibly exist to keep us apprised of what’s going on in our neighborhoods, buzz our smartphones at all hours with crime reports, suspected illegal activity, and other complaints. But residents can also weigh in with their own theories and suspicions, however baseless and—in many cases—racist. It begs the question: Where do these apps go wrong, and what are they doing now to regain consumer trust and combat the issues within their platforms?”
Smith recounts several instances in which both community-builder Nextdoor and the more security-focused Citizen hosted problematic actions and discussions. Both apps have made changes in response to criticism. For example, Citizen was named Vigilante when it first launched in 2016 and seemed to encourage users to visit and even take an active role in nearby crime scenes. After Apple pulled it from its App Store within two days, the app relaunched the next year with the friendlier name and warnings against reckless behavior. But Citizen still stirs up discussion by sharing publicly available emergency-services data like 911 calls, sometimes with truly unfortunate results. Though the app says it is now working on stronger moderation to prevent such incidents, it also happens to be ramping up its law-enforcement masquerade. Ironically, Citizen itself cannot seem to keep its users’ data safe.
Then there is Nextdoor. During last year’s protests following the murder of George Floyd, its moderators were caught removing posts announcing protests but allowing ones that advocated violence against protestors. The CEO promised reforms in response, and the company soon axed the “Forward to Police” feature. (That is okay, cops weren’t relying on it much anyway. Go figure.) It has also enacted a version of sensitivity training and language guardrails. Meanwhile, Facebook is making its way into the neighborhood app game. Surely that company’s foresight and conscientiousness are just what this situation needs. Smith concludes:
“In theory, community apps like Citizen, Nextdoor, and Facebook Neighborhoods bring people together at a time when many of us turn to the internet and our devices to make connections. But it’s a fine line between staying on top of what’s going on around us and harassing the people who live and work there with ill-advised posts and even calls to 911. The companies themselves have a financial incentive to keep us engaged (Nextdoor just filed to go public), whether its users are building strong community ties or overreacting to doom-and-gloom notifications. Can we trust them not to lead us into the abyss, or is it on us not to get caught up in neighborhood drama and our baser instincts?”
Absolutely not and, unfortunately, yes.
Cynthia Murrell, July 21, 2021