Surprise: TikTok Reveals Its Employees Can View European User Data

December 28, 2022

What a surprise. The Tech Times reports, “TikTok Says Chinese Employees Can Access Data from European Users.” This includes workers not just within China, but also in Brazil, Canada, Israel, Japan, Malaysia, the Philippines, Singapore, South Korea, and the United States. According to The Guardian, TikTok revealed the detail in an update to its privacy policy. We are to believe it is all in the interest of improving the users’ experience. Writer Joseph Henry states:

“According to ByteDance, TikTok’s parent firm, accessing the user data can help in improving the algorithm performance on the platform. This would mean that it could help the app to detect bots and malicious accounts. Additionally, this could also give recommendations for content that users want to consume online. Back in July, Shou Zi Chew, a TikTok chief executive clarified via a letter that the data being accessed by foreign staff is a ‘narrow set of non-sensitive’ user data. In short, if the TikTok security team in the US gives a green light for data access, then there’s no problem viewing the data coming from American users. Chew added that the Chinese government officials do not have access to these data so it won’t be a big deal to every consumer.”

Sure they don’t. Despite assurances, some are skeptical. For example, we learn:

“US FCC Commissioner Brendan Carr told Reuters that TikTok should be immediately banned in the US. He added that he was suspicious as to how ByteDance handles all of the US-based data on the app.”

Now just why might he doubt ByteDance’s sincerity? What about consequences? As some Sillycon Valley experts say, “No big deal. Move on.” Dismissive naïveté is helpful, even charming.

Cynthia Murrell, December 28, 2022

SocialFi, a New Type of Mashup

November 22, 2022

Well, this is quite an innovation. HackerNoon alerts us to a combination of just wonderful stuff in, “The Rise of SocialFi: A Fusion of Social Media, Web3, and Decentralized Finance.” Short for social finance, SocialFi builds on other -Fi trends: DeFi (decentralized finance) and GameFi (play-to-earn crypto currency games). The goal is to monetize social media through “tokenized achievements.” Writer Samiran Mondal elaborates:

“SocialFi is an umbrella that combines various elements to provide a better social experience through crypto, DeFi, metaverse, NFTs, and Web3. At the heart of SocialFi are monetization and incentivization through social tokens. SocialFi offers many new ways for users, content creators, and app owners to monetarize their engagements. This has perhaps been the most attractive aspect of SocialFi. By introducing the concept of social tokens and in-app utility tokens. Notably, these tokens are not controlled by the platform but by the creator. Token creators have the power to decide how they want their tokens to be utilized, especially by fans and followers.”

This monetization strategy is made possible by more alphabet soup—PFP NFTs, or picture-for-proof non-fungible tokens. These profile pictures identify users, provide proof of NFT ownership, and connect users to specific SocialFi communities. Then there are the DAOs, or decentralized autonomous organizations. These communities make decisions through member votes to prevent the type of unilateral control exercised by companies like Facebook, Twitter, and TikTok. This arrangement provides another feature. Or is it a bug? We learn:

“SocialFi also advocates for freedom of speech since users’ messages or content are not throttled by censorship. Previously, social media platforms were heavily plagued by centralized censorship that limited what users could post to the extent of deleting accounts that content creators and users had poured their hearts and souls into. But with SocialFi, users have the freedom to post content without the constant fear of overreaching moderation or targeted censorship.”

Sounds great, until one realizes what Mondal calls overreach and censorship would include efforts to quell the spread of the misinformation and harmful content already bedeviling society. Do those behind SocialFi have any plans to address that dilemma? Sure.

Cynthia Murrell, November 22, 2022

Evolution? Sure, Consider the Future of Humanoids

November 11, 2022

It’s Friday, and everyone deserves a look at what their children’s grandchildren will look like. Let me tell you. These progeny will be appealing folk. “Future Humans Could Have Smaller Brains, New Eyelids and Hunchbacks Thanks to Technology.” Let’s look at some of the real “factoids” in this article from the estimable, rock solid fact factory, The Daily Sun:

  1. A tech neck, which looks to me like a baby hunchback
  2. Smaller brains (evidence of this may be available now; just ask a teen cashier to make change)
  3. A tech claw. I think this means fingers adapted to thumbtyping and clicky keyboards.

I must say that these adaptations seem to be suited to the digital environment. However, what happens if there is no power?

Perhaps Neanderthal characteristics will manifest themselves? Please, check the cited article for an illustration of the future human. My first reaction was, “Someone who looks like that works at one of the high tech companies near San Francisco.” Silicon Valley may be the cradle of adapted humans at this time. Perhaps a Stanford grad student will undertake a definitive study by observing those visiting Philz Coffee.

Stephen E Arnold, November 11, 2022

The Google: Indexing and Discriminating Are Expensive. So Get Bigger Already

November 9, 2022

It’s Wednesday, November 9, 2022, only a few days until I hit 78. Guess what? Amidst the news of crypto currency vaporization, hand wringing over the adult decisions forced on high school science club members at Facebook and Twitter, and the weirdness about voting — there’s a quite important item of information. This particular datum is likely to be washed away in the flood of digital data about other developments.

What is this gem?

An individual has discovered that the Google is not indexing some Mastodon servers. You can read the story in a Mastodon post at this link. Don’t worry. The page will resolve without your having to figure out how to make Mastodon stomp around the way you want it to. The link takes you to Snake.club and Stephen Brennan.

The item is that Google does not index every Mastodon server. The Google, according to Mr. Brennan:

has decided that since my Mastodon server is visually similar to other Mastodon servers (hint, it’s supposed to be) that it’s an unsafe forgery? Ugh. Now I get to wait for what will likely be a long manual review cycle, while all the other people using the site see this deceptive, scary banner.

So what?

Mr. Brennan notes:

Seems like El Goog has no problem flagging me in an instant, but can’t cleanup their mistakes quickly.

A few hours later Mr. Brennan reports:

However, the Search Console still insists I have security problems, and the “transparency report” here agrees, though it classifies my threat level as Yellow (it was Red before).

Is the problem resolved? Sort of. Mr. Brennan has concluded:

… maybe I need to start backing up my Google data. I could see their stellar AI/moderation screwing me over, I’ve heard of it before.

Why do I think this single post and thread is important? Four reasons:

  1. The incident underscores how an individual perceives Google as “the Internet,” despite the use of a decentralized, distributed system. The mindset of some Mastodon users is that Google is the be-all and end-all. It’s not, of course. But if people forget that there are other quite useful ways of finding information, the desire to please, think, and depend on Google becomes the one true way. Outfits like Mojeek.com don’t have much of a chance of getting traction with those in the Google datasphere.
  2. Google operates on a close-enough-for-horseshoes or good-enough approach. The objective is to sell ads. This means that big is good. The Google doesn’t do a great job of indexing Twitter posts, but Twitter is bigger than Mastodon in terms of eyeballs. Therefore, it is a consequence of good-enough methods to shove small and low-traffic content output into an area surrounded by Google’s police tape. Maybe Google wants Mastodon users behind its police tape? Maybe Google does not care today but will if and when Mastodon gets bigger? Plus some Google advertisers may want to reach those reading search results citing Mastodon? Maybe? If so, Mastodon servers will become important to the Google for revenue, not content.
  3. Google does not index “the world’s information.” The system indexes some information, ideally information that will attract users. In my opinion, the once naive company allegedly wanted to index the world’s information. Mr. Page and I were on a panel about Web search as I recall. My team and I had sold to CMGI some technology which was incorporated into Lycos. That’s why I was on the panel. Mr. Page rolled out the notion of an “index to the world’s information.” I pointed out that indexing rapidly-expanding content and capturing changes to previously indexed content would be increasingly expensive. The costs would be high and quite hard to control without reducing the scope, frequency, and depth of the crawls. But Mr. Page’s big idea excited people. My mundane financial and technical truths were of zero interest to Mr. Page and most in the audience. And today? Google’s management team has to work overtime to try to contain the costs of indexing near-real-time flows of digital information. The expense of maintaining and reindexing backfiles is easier to control. Just reduce the scope of sites indexed, the depth of each crawl, the frequency with which certain sites are reindexed, and how much old content is displayed. If no one looks at these data, why spend money on them? Google is not Mother Teresa and certainly not the Andrew Carnegie library initiative. Mr. Brennan brushed against an automated method that appears to say, “The small is irrelevant because advertisers want to advertise where the eyeballs are.”
  4. Google exists for two reasons: First, to generate advertising revenue. Why? None of its new ventures have been able to deliver advertising-equivalent revenue. But cash must flow and grow or the Google stumbles. Google is still what a Microsoftie called a “one-trick pony” years ago. The one-trick pony is the star of the Google circus. Performing Mastodons are not in the tent. Second, Google wants very much to dominate cloud computing, off-the-shelf machine learning, and cyber security. This means that the performing Mastodons have to do something that gets the GOOG’s attention.

Net net: I find it interesting to find examples of those younger than I discovering the precise nature of Google. Many of these individuals know only Google. I find that sad and somewhat frightening, perhaps more troubling than Mr. Putin’s nuclear bomb talk. Mr. Putin can be seen and heard. Google controls its datasphere. Like goldfish in a bowl, its users find it tough to understand the world containing that bowl and its inhabitants.

Stephen E Arnold, November 9, 2022

COVID-19 Made Reading And Critical Thinking Skills Worse For US Students

March 31, 2022

The COVID-19 pandemic marked the first time in history that remote learning was implemented at all educational levels. As the guinea pig generation, students were forced to deal with technical interruptions and, unfortunately, not the best education. Higher achieving and average students will be able to compensate for the last two years of education, but the lower achievers will have trouble. The pandemic, however, exacerbated an existing issue with learning.

Kids have been reading less with each passing year because they spend more time consuming social media and videogames. Kids are reading, but they are absorbing a butchered form of the English language and not engaging their critical thinking skills. Social media and videogames are passive activities. Another problem behind low reading skills, says The New York Times in “The Pandemic Has Worsened the Reading Crisis in Schools,” is the lack of educators:

“The causes are multifaceted, but many experts point to a shortage of educators trained in phonics and phonemic awareness — the foundational skills of linking the sounds of spoken English to the letters that appear on the page.”

According to the article, remote learning lessened the quality of instruction elementary school students received on reading fundamentals. It is essential for kids to pick up the basics in elementary school; otherwise, higher education will be more difficult. Federal funding is being used for assistance programs, but there is a lack of personnel. Trained individuals are leaving public education for the private sector because it is more lucrative.

Poor reading skills feed into poor critical thinking skills. The Next Web explores the dangers of deep fakes and how easily they fool people: “Deep fakes Study Finds Doctored Text Is More Manipulative Than Phony Video.” Deep fakes are a dangerous AI threat, but MIT Media Lab scientists discovered that people have a hard time discerning fake sound bites:

“Scientists at the MIT Media Lab showed almost 6,000 people 16 authentic political speeches and 16 that were doctored by AI. The sound bites were presented in permutations of text, video, and audio, such as video with subtitles or only text. The participants were told that half of the content was fake, and asked which snippets they believed were fabricated.”

When the participants were shown only text, they barely detected the falsehoods, with a 57% success rate, while they were more accurate with video plus subtitles (66%) and best with video and text combined (82%). Participants relied on tone and vocal delivery to discover the fakes, which makes sense given that is how people detect lying:

“The study authors said the participants relied more on how something was said than the speech content itself: ‘The finding that fabricated videos of political speeches are easier to discern than fabricated text transcripts highlights the need to re-introduce and explain the oft-forgotten second half of the ‘seeing is believing’ adage.’ There is, however, a caveat to their conclusions: their deep fakes weren’t exactly hyper-realistic.”

Low quality deep fakes are not as dangerous as a single video with high resolution, great audio, and spot-on duplicates of the subjects. Even the smartest people are more likely to be tricked by one high quality deep fake than by thousands of bad ones.

It is more alarming that participants did not do well with the text-only sound bites. Did they lack the critical thinking and reading skills they should have learned in elementary school, or did the lack of delivery from a human stump them?

Students need to focus on the basics of reading and critical thinking; these skills are the foundation of their entire education. Nothing is more fundamental.

Whitney Grace, March 31, 2022

Misunderstanding a Zuck Move

February 4, 2022

I read some posts — for instance, “Facebook Just Had Its Most Disappointing Quarter Ever. Mark Zuckerberg’s Response Is the 1 Thing No Leader Should Ever Do” — suggesting that Mark Zuckerberg is at fault for his company’s contrarian financial performance. The Zuck move is a standard operating procedure in a high school science club. When caught with evidence of misbehavior, in my high school science club in 1958, we blamed people in the band. We knew that blaming a mere athlete would result in a difficult situation in the boys’ locker room.

Thus it is obvious that the declining growth, the rise of the Chinese surveillance video machine, and the unfriended Tim Apple are responsible for what might be termed a zuck up. If this reminds you of a phrase used to characterize other snarls like the IRS pickle, you are not thinking the way I am. A “zuck up” is a management action which enables the world to join together. Think of the disparate groups who can find fellow travelers; for example, insecure teens who need mature guidance.

I found this comment out of step with the brilliance of the lean in behavior of Mr. Zuckerberg:

Ultimately, you don’t become more relevant by pointing to your competitors and blaming them for your performance. That’s the one thing no company–or leader–should ever do.

My reasoning is that Mr. Zuckerberg is a manipulator, a helmsman, if you will. Via programmatic methods, he achieved a remarkable blend of human pliability and cash extraction. He achieved success by clever disintermediation of some of his allegedly essential aides de camp. He achieved success by acquiring competitors and hooking these third party mechanisms into the Facebook engine room. He dominated because he understood the hot buttons of Wall Street.

I expect the Zuck, like the mythical phoenix (not the wonderful city in Arizona) to rise from the ashes of what others perceive as a failure. What the Zuck will do is use the brilliant techniques of the adolescent wizards in a high school science club to show “them” who is really smart.

Not a zuck up.

Stephen E Arnold, February 4, 2022

Facebook and Social Media: How a Digital Country Perceives Its Reality

September 17, 2021

I read “Instagram Chief Faces Backlash after Awkward Comparison between Cars and Social Media Safety.” This informed senior manager at Facebook seems to have missed a book on many reading lists. The book is one I have mentioned a number of times in the last 12 years since I have been capturing items of interest to me and putting my personal “abstracts” online.

Jacques Ellul is definitely not going to get a job working on the script for the next Star Wars’ film. He won’t be doing a script for a Super Bowl commercial. Most definitely Dr. Ellul will not be founding a church called “New Technology’s Church of Baloney.”

Dr. Ellul died in 1994, and it is not clear if he knew about online or the Internet. He jabbered at the University of Bordeaux, wrote a number of books about technology, and inspired enough people to set up the International Jacques Ellul Society.

One of his books was The Technological Bluff, or in French, Le bluff technologique.

The article that sparked my thoughts about Dr. Ellul contains this statement:

“We know that more people die than would otherwise because of car accidents, but by and large, cars create way more value in the world than they destroy,” Mosseri said Wednesday on the Recode Media podcast. “And I think social media is similar.”

Dr. Ellul might have raised a question or two about Instagram’s position. Both are technologies; both have had unintended consequences. On one hand, the auto created some exciting social changes which can be observed when sitting in traffic: eating in the car, road rage, dead animals on the side of the road, etc. On the other hand, social media is sparking upticks in the personal destruction of young people and perceptual mismatches between what their biomass looks like and what an “influencer” looks like wearing clothing from Buffbunny.

Several observations:

  • Facebook is influential, at least sufficiently noteworthy for China to take steps to trim the sails of the motor yacht Zucky
  • Facebook’s pattern of shaping reality via its public pronouncements, testimony before legislative groups, and podcasts generates content that seems to be different from a growing body of evidence that Facebook facts are flexible
  • Social media as shaped by the Facebook service, Instagram, and the quite interesting WhatsApp service is perhaps the most powerful information engine created. (I say this fully aware of Google’s influence and Amazon’s control of certain data channels.) Facebook is a digital Major Gérald, just with its own Légion étrangère.

Net net: Regulation time and fines that amount to more than a few hours revenue for the firm. Also reading Le bluff technologique and writing an essay called, “How technology deconstructs social fabrics.” Blue book, handwritten, and three outside references from peer reviewed journals about human behavior. Due on Monday, please.

Stephen E Arnold, September 17, 2021

That Online Thing Spawns Emily Post-Type Behavior, Right?

July 21, 2021

Friendly virtual watering holes or platforms for alarmists? PC Magazine reports, “Neighborhood Watch Goes Rogue: The Trouble with Nextdoor and Citizen.” Writer Christopher Smith introduces his analysis:

“Apps like Citizen and Nextdoor, which ostensibly exist to keep us apprised of what’s going on in our neighborhoods, buzz our smartphones at all hours with crime reports, suspected illegal activity, and other complaints. But residents can also weigh in with their own theories and suspicions, however baseless and—in many cases—racist. It begs the question: Where do these apps go wrong, and what are they doing now to regain consumer trust and combat the issues within their platforms?”

Smith recounts several instances in which both community-builder Nextdoor and the more security-focused Citizen hosted problematic actions and discussions. Both apps have made changes in response to criticism. For example, Citizen was named Vigilante when it first launched in 2016 and seemed to encourage users to visit and even take an active role in nearby crime scenes. After Apple pulled it from the App Store within two days, the app relaunched the next year with the friendlier name and warnings against reckless behavior. But Citizen still stirs up discussion by sharing publicly available emergency-services data like 911 calls, sometimes with truly unfortunate results. Though the app says it is now working on stronger moderation to prevent such incidents, it also happens to be ramping up its law-enforcement masquerade. Ironically, Citizen itself cannot seem to keep its users’ data safe.

Then there is Nextdoor. During last year’s protests following the murder of George Floyd, its moderators were caught removing posts announcing protests but allowing ones that advocated violence against protestors. The CEO promised reforms in response, and the company soon axed the “Forward to Police” feature. (That is okay, cops weren’t relying on it much anyway. Go figure.) It has also enacted a version of sensitivity training and language guardrails. Meanwhile, Facebook is making its way into the neighborhood app game. Surely that company’s foresight and conscientiousness are just what this situation needs. Smith concludes:

“In theory, community apps like Citizen, Nextdoor, and Facebook Neighborhoods bring people together at time when many of us turn to the internet and our devices to make connections. But it’s a fine line between staying on top of what’s going on around us and harassing the people who live and work there with ill-advised posts and even calls to 911. The companies themselves have a financial incentive to keep us engaged (Nextdoor just filed to go public), whether its users are building strong community ties or overreacting to doom-and-gloom notifications. Can we trust them not to lead us into the abyss, or is it on us not to get caught up neighborhood drama and our baser instincts?”

Absolutely not and, unfortunately, yes.

Cynthia Murrell, July 21, 2021

Cheaper Lodgings Correlated with Violence: Stats 101 at Work

July 20, 2021

I don’t have a dog in this fight, but AirBnB- and VRBO-type disruptors do. “AirBnB Listings Lead to Increased Neighborhood Violence, Study Finds” reports:

AirBnB removes social capital from the neighborhood in the form of stable households, weakening the associated community dynamics…

The write up explains:

Researchers at Northeastern University in Boston conducted a statistical analysis of AirBnB listings and data on different types of crime in their city. Covering a period from 2011 to 2017, the team found that the more AirBnB listings were in any given neighborhood, the higher the rates of violence in that neighborhood – but not public social disorder or private conflict.

Who causes the crime? The tourists? Nah, here’s what’s allegedly happening:

the transient population diminishes how communities prevent crime.

Interesting assertion. I have a small sample: One. One home in our neighborhood became an AirBnB-type outfit. No one stayed. The house was sold to a family.

No change in the crime rate, but that may be a result of the police patrols, the work from home people who walk dogs, jog, post to Nextdoor.com, and clean the lenses on their Amazon Ring doorbells.

Insightful.

Stephen E Arnold, July 20, 2021

Real Silicon Valley News Predicts the Future

July 1, 2021

I read “Why Some Biologists and Ecologists Think Social Media Is a Risk to Humanity.” I thought this was an amusing essay because the company publishing it is very much a social media thing. Clicks equal fame, money, and influence. These are potent motivators, and the essay is cheerfully ignorant of the irony of the Apocalypse foretold in the write up.

I learned:

One of the real challenges that we’re facing is that we don’t have a lot of information

But who is “we”? I can name several entities which have quite comprehensive information. Obviously these entities are not part of the royal “we”. I have plenty of information and some of it is proprietary. There are areas about which I would like to know more, but overall, I think I have what I need to critique thumbtyper-infused portents of doom.

Here’s another passage:

Seventeen researchers who specialize in widely different fields, from climate science to philosophy, make the case that academics should treat the study of technology’s large-scale impact on society as a “crisis discipline.” A crisis discipline is a field in which scientists across different fields work quickly to address an urgent societal problem — like how conservation biology tries to protect endangered species or climate science research aims to stop global warming. The paper argues that our lack of understanding about the collective behavioral effects of new technology is a danger to democracy and scientific progress.

I assume the Silicon Valley “real” news outfit and the experts cited in the write up are familiar with the work of J. Ellul? If not, some time invested in reading it might be helpful. As a side note, Google Books thinks that the prescient and insightful analysis of technology is about “religion.” Because Google, of course.

The write up adds:

Most major social media companies work with academics who research their platforms’ effects on society, but the companies restrict and control how much information researchers can use.

Remarkable insight. Why pray tell?

Several observations:

  • Technology is not well understood
  • Flows of information are destructive in many situations
  • Access to information spawns false conclusions
  • Bias distorts logic even among the informed.

Well, this is a pickle barrel and “we” are in it. What is making my sides ache from laughter is that advocates of social media in particular and technology in general are now asking, “Now what?”

Few like China’s approach or that of other authoritarian entities who want to preserve the way it was.

Cue Barbra Streisand’s “The Way We Were.” Oh, right. Blocked by YouTube. Do ecologists and others understand cancer?

Stephen E Arnold, July 1, 2021
