Facebook: Continuous Reality Distortion

September 14, 2021

Facebook CEO Mark Zuckerberg stated in 2019 that WhatsApp was designed around a “privacy-focused vision” for communication, and WhatsApp supposedly offers end-to-end encryption. ProPublica reports that the picture is more complicated in “How Facebook Undermines Privacy Protections For Its 2 Billion WhatsApp Users.” Most WhatsApp messages do remain private, but messages that users flag are reviewed by WhatsApp contractors.

These employees monitor flagged messages for child pornography, terrorist plots, spam, and more. This type of monitoring appears contrary to WhatsApp’s stated mission, but Carl Woog, the company’s director of communications, did not regard it as content moderation, framing it instead as abuse prevention.

WhatsApp reviewers sign NDAs and, if asked, say they work for Accenture. They review more than 600 violation tickets a day, leaving less than a minute for each one, then decide whether to ban the account, put the user on “watch,” or do nothing. ProPublica describes the job this way:

“WhatsApp moderators must make subjective, sensitive and subtle judgments, interviews and documents examined by ProPublica show. They examine a wide range of categories, including “Spam Report,” “Civic Bad Actor” (political hate speech and disinformation), “Terrorism Global Credible Threat,” “CEI” (child exploitative imagery) and “CP” (child pornography). Another set of categories addresses the messaging and conduct of millions of small and large businesses that use WhatsApp to chat with customers and sell their wares. These queues have such titles as “business impersonation prevalence,” “commerce policy probable violators” and “business verification.””

Unlike Facebook’s other platforms, Facebook proper and Instagram, WhatsApp does not release statistics about the data it collects, citing its status as an encrypted service. Facebook also needs WhatsApp to generate a profit, because the company spent $22 billion on it in 2014. Despite its dedication to privacy, WhatsApp does share data with Facebook, and Facebook has already faced fines for violating user privacy. Meanwhile, WhatsApp data has been used to build cases against criminals, and governments want backdoors to access and trace messages. The stated rationale is user safety, but governments can take observation too far.

Whitney Grace, September 14, 2021

Facebook: A Curious Weakness and a Microsoft Strength

September 7, 2021

I read “The Irony of Facebook’s VR Collaboration Debacle,” authored by a wizard whom I associate with IBM. I am not sure why the author’s observations trigger images of Big Blue, mainframes, and the blazing history of Watson.

The angle in this essay is:

Collaboration is a social process where people get together to collectively solve problems. But Facebook sucks at social. A more accurate descriptor is that Facebook is a gossip platform at scale, which has done considerable harm to several countries and put them at considerable existential risk.

Yikes. “Sucks.” “Gossip platform.” And “harm to several countries.”

The write up zips into Zoom-land which Facebook allegedly wants to reimagine as a virtual reality metaverse.

Where is the analysis of “Facebook sucks” heading? Here’s a clue:

Facebook’s Horizon Workrooms is not collaboration. Microsoft Teams would be a better solution for information sharing because you’d see Zuckerberg, not an avatar that looks nothing like him.

I think I have it. The write up is a rah-rah for Teams. I was hoping that the conclusion would point to IBM video services.

Nope, it’s Microsoft, a company which, I presume, does not suck, is not a gossip platform, and has not done harm to several countries?

Stephen E Arnold, September 7, 2021

Facebook: Controlling Behavior Underscores Facebook Insecurity

August 30, 2021

Misinformation was running rampant long before the pandemic hit its stride, and no one knows if the misinformation wave that currently plagues the United States and the world has peaked. Experts like social media researcher Laura Edelson are investigating how misinformation spreads, but social media platforms do not like it, says Vox Recode in “‘People Do Not Trust That Facebook Is Healthy Ecosystem.’” Edelson works at the NYU Ad Observatory and focuses her current research on Facebook’s role in spreading misinformation.

She believes that misinformation encourages COVID anti-vaxxers and is eroding democracy. Unfortunately, Facebook decided to block the Facebook accounts Edelson and her colleagues use to study political advertisements and misinformation. Facebook stated that the Ad Observatory violated users’ privacy through its Ad Observer tool; Edelson replied that only volunteers download the tool.

Lawmakers, free speech advocates, and the FTC condemned Facebook. Edelson states that Facebook wants to bury her research because it exposes the company’s part in spreading misinformation. On Facebook, users share misinformation more than any other content, and the company refuses to disclose who pays for political ads. The episode suggests Facebook wants to stop Edelson’s research because it hurts the bottom line.

Facebook, of course, denies the allegation. Recode points to the larger questions:

“But Facebook’s effective shutdown of the Ad Observatory raises larger questions about whether the company is trying to limit outside interrogation of the company’s business practices in the name of protecting its users’ privacy. At the same time, the social media network has good reason to be worried about privacy as it faces intense regulatory scrutiny for past missteps that led to it having to pay the largest penalty ever imposed by the Federal Trade Commission.”

Edelson states that Facebook is an unhealthy misinformation ecosystem. Facebook and other misinformation platforms could be doing irreparable damage to society. Because this is a current problem, Facebook should be working with Edelson and other researchers who want to combat the misinformation plague.

Facebook and other companies, however, are more concerned about losing control and revenue. The good news is … Wait. There isn’t any for those researching the world’s most stabilizing and refreshing social media system.

Whitney Grace, August 30, 2021

Remember Who May Have Wanted to License Pegasus?

August 20, 2021

Cyber intelligence firm NSO, makers of Pegasus spyware, knows no bounds when it comes to enabling government clients to spy on citizens. Apparently, however, it draws the line at helping Facebook spy on its users. At his Daring Fireball blog, computer scientist John Gruber reports that “Facebook Wanted NSO Spyware to Monitor iOS Users.” We learn that NSO CEO Shalev Hulio has made a legal declaration stating he was approached in 2017 by Facebook reps looking to purchase certain Pegasus capabilities. Gruber quotes Motherboard’s Joseph Cox, who wrote:

“At the time, Facebook was in the early stages of deploying a VPN product called Onavo Protect, which, unbeknownst to some users, analyzed the web traffic of users who downloaded it to see what other apps they were using. According to the court documents, it seems the Facebook representatives were not interested in buying parts of Pegasus as a hacking tool to remotely break into phones, but more as a way to more effectively monitor phones of users who had already installed Onavo. ‘The Facebook representatives stated that Facebook was concerned that its method for gathering user data through Onavo Protect was less effective on Apple devices than on Android devices,’ the court filing reads. ‘The Facebook representatives also stated that Facebook wanted to use purported capabilities of Pegasus to monitor users on Apple devices and were willing to pay for the ability to monitor Onavo Protect users.’”

We are glad to learn NSO has boundaries of any sort. And score one for Apple security. As for Facebook, Gruber asserts this news supports his oft-stated assertion that Facebook is a criminal operation. He bluntly concludes:

“Facebook’s stated intention for this software was to use it for mass surveillance of its own honest users. That is profoundly [messed] up — sociopathic.”

Perhaps.

Cynthia Murrell, August 20, 2021

Facebook Keeps E2EE Goodness Flowing

August 18, 2021

Facebook is a wonderful outfit. One possible example is the helpful company’s end to end encryption for Facebook Messenger. “Facebook Messenger now have End-to-End Encryption for Voice and Video Calls” reports:

The social media giant said that end-to-end encryption for group voice and video calls will soon be a part of Messenger. Encryption is already available in Messenger as Secret Conversation. But Secret Conversation makes many features disable and only can be done with individuals. Facebook is going to change it in the coming weeks. Users will be able to control who can reach your chat lists, who will stay in the requests folder, and who can’t message you at all. In the blog, Facebook also talked about that Instagram is also likely to get end-to-end encryption and one-to-one conversations.

Should Facebook be subject to special oversight?

Stephen E Arnold, August 18, 2021

Facebook: A Force for Good. Now What Does Good Mean?

August 17, 2021

I read Preston Byrne’s essay about the Taliban’s use of WhatsApp. You can find that very good write up at this link. Mr. Byrne asks an important question: Did America just lose Afghanistan because of WhatsApp?

I also read “WhatsApp Can’t Ban the Taliban Because It Can’t Read Their Texts.” The main point of the write up is to point out that Facebook’s encrypted message system makes blocking users really difficult, like impossible almost.

I noted this statement:

the Taliban used Facebook-owned chat app WhatsApp to spread its message and gain favor among local citizens…

Seems obvious, right? Free service. Widely available. Encrypted. Why the heck not?

Here’s a statement in the Vice write up which caught my attention:

The company spokesperson said that WhatsApp complies with U.S. sanctions law, so if it encounters any sanctioned people or organizations using the app, it will take action, including banning the accounts. This obviously depends on identifying who uses WhatsApp, without having access to any of the messages sent through the platform, given that the app uses end-to-end encryption. This would explain why WhatsApp hasn’t taken action against some account spreading the Taliban’s message in Afghanistan.
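The mechanics behind that statement are worth a moment. With end-to-end encryption, the relay server handles only ciphertext, so content-based moderation is off the table by design. Here is a minimal sketch of that property, using a toy one-time pad rather than the Signal protocol WhatsApp actually runs; the names and messages are purely illustrative:

```python
import secrets

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """One-time-pad XOR: the ciphertext reveals nothing without the key."""
    return bytes(k ^ p for k, p in zip(key, plaintext))

decrypt = encrypt  # XOR is its own inverse

# Sender and recipient share the key; the relay server never sees it.
message = b"meet at dawn"
key = secrets.token_bytes(len(message))

ciphertext = encrypt(key, message)          # this is all the server relays
assert ciphertext != message                # the server cannot read the content
assert decrypt(key, ciphertext) == message  # the recipient recovers the text
```

Note what the server can still see: metadata such as who messages whom, when, and how often. That is why account-level bans based on profile information remain possible even when content inspection is not.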

Let me ask a pointed question: Is it time to shut down Facebook, WhatsApp, and Instagram? Failing that, why not use existing laws to bring a measure of control over access, message content, and service availability?

Purposeful action is needed. If Facebook cannot figure out what to do to contain and blunt the corrosive effects of the “free” service, outsource the task to an entity which will make an effort. That approach seems to be what is looming for the NSO Group. Perhaps purposeful action is motivating Apple to try and control the less salubrious uses of the iPhone ecosystem?

Dancing around the Facebook earnings report is fine entertainment. Is it time to add some supervision to the largely unregulated, uncontrolled, and frat boy bash? One can serve a treat like Bore Palaw too.

Stephen E Arnold, August 17, 2021

Facebook, Booze, Youngsters, and Australia: Shaken and Stirred

August 6, 2021

Quite a mixologist’s concoction: Facebook, booze, young people, and the Australian government. The country seems to be uncomfortable with some of Facebook’s alleged practices. I would assume that some Australian citizens who hold shares in the social media outfit are pleased as punch with the company’s financial results.

Others are not amused. “Facebook to Limit Ads Children See after revelations Australian Alcohol Companies Can Reach Teens” reports:

Facebook will impose more control on the types of ads that children as young as 13 are exposed to on Instagram and other platforms, as new research finds Australian alcohol companies are not restricting their social media content from reaching younger users.

How many companies targeted the youngsters down under? The write up asserts:

The paper examined the use of social media age-restriction controls by 195 leading alcohol brands on Instagram and Facebook, and found large numbers were not shielding their content from children. The 195 brands were owned by nine companies, and the research identified 153 Facebook accounts, including 84 based in Australia, and 151 Instagram accounts, of which 77 were Australian-based. The authors found 28% of the Instagram accounts and 5% of Facebook accounts had not activated age-restriction controls.

I did spot a quote attributed to one of the experts doing the research about Facebook, Booze, Youngsters, and Australia; to wit:

it was clear that companies were not complying with the code. “The alcohol industry has demonstrated that it is unable to effectively control its own marketing…

Shocking, that self-regulation. Has anyone alerted the US financial sector?

Stephen E Arnold, August 6, 2021

Facebook Lets Group Admins Designate Experts. Okay!

August 2, 2021

Facebook once again enlists the aid of humans to impede the spread of misinformation, only this time it has found a way to avoid paying anyone for the service. Tech Times reports, “Facebook Adds Feature to Let Admin in Groups Chose ‘Experts’ to Curb Misinformation.” The move also has the handy benefit of shifting responsibility for bad info away from the company. We wonder—what happened to that smart Facebook software? The article does not say. Citing an article from Business Insider, writer Alec G. does tell us:

“The people who run the communities on Facebook now have the authority to promote individuals within its group to gain the title of ‘expert.’ Then, the individuals dubbed as experts can be the voices of which the public can then base their questions and concerns. This is to prevent misinformation plaguing online communities for a while now.”

But will leaving the designation of “expert” up to admins make the problem worse instead of better? The write-up continues:

“The social platform now empowers specific individuals inside groups who are devoted to solely spreading misinformation-related topics. The ‘Stop the Steal’ group, for example, was created in November 2020 with over 365,000 members. They were convinced that the election for the presidency was a fraud. If Facebook didn’t remove the group two days later, it would continue to have negative effects. Facebook explained that the organization talked about ‘the delegitimization of the election process,’ and called for violence, as reported by the BBC. Even before that, other groups within Facebook promoted violence and calls to action that would harm the civility of the governments.”

Very true. We are reminded of the company’s outsourced Oversight Board created in 2018, a similar shift-the-blame approach that has not worked out so well. Facebook’s continued efforts to transfer responsibility for bad content to others fail to shield it from blame. They also do little to solve the problem and may even make it worse. Perhaps it is time for a different (real) solution.

Cynthia Murrell, August 2, 2021

Facebook and NSO Group: An Odd Couple or Squabbling Neighbors?

July 28, 2021

Late in 2019, The Adware Guru published “Facebook Sues NSO Group Spyware Maker Due to Exploitation of WhatsApp Vulnerability.” That write up stated:

The cause of [Facebook’s] lawsuit was WhatsApp’s zero-day vulnerability, which Facebook claims was sold to the NSO Group, and then the company helped use the problem to attack human rights defenders, journalists, political dissidents, diplomats, and governmental officials. According to court documents, more than 1,400 people in Bahrain, the United Arab Emirates, and Mexico suffered a total of 11 days from attacks. Facebook has already sent WhatsApp special messages to everyone affected.

In April 2020, Technadu published “The NSO Group Is Accusing Facebook of Having Tried to License Their Spyware.” That write up stated:

The ‘NSO Group’ is now turning the tables, claiming that they rejected Facebook’s proposal to license Pegasus because they only did it for governments and not private companies. In addition to that, they describe Facebook’s accusations as baseless and even accuse the social media company of failing to prepare the legal paperwork properly, which resulted in legislative procedure problems. NSO says Facebook didn’t have powerful methods to spy on iOS devices in the same way that they did with Android, and they felt like Pegasus could solve this problem for them. Facebook, on the other side, completely dismissed these statements by saying that these allegations had the sole purpose of distracting the court from the real facts.

Technadu added:

even if Facebook wasn’t trying to add Pegasus in Onavo for iOS, they are giving the NSO Group something to hold on to and make allegations that are at least seemingly realistic. At the very least, this development will complicate the legal process by much now.

Jump to the present. The Guardian’s story “Officials Who Are US Allies Among Targets of NSO Malware, Says WhatsApp Chief” reported on July 24, 2021:

Cathcart said that he saw parallels between the attack against WhatsApp users in 2019 – which is now the subject of a lawsuit brought by WhatsApp against NSO – and reports about a massive data leak that are at the centre of the Pegasus project… When WhatsApp says it believes its users were “targeted”, it means the company has evidence that an NSO server attempted to install malware on a user’s device.

The Guardian story includes this statement from the PR savvy NSO Group:

An NSO spokesperson said: “We are doing our best to help creating a safer world. Does Mr Cathcart have other alternatives that enable law enforcement and intelligence agencies to legally detect and prevent malicious acts of pedophiles, terrorists and criminals using end-to-end encryption platforms? If so, we would be happy to hear.”

Are Facebook’s statements credible? Is NSO Group’s version believable? Are these two behaving like the characters in Neil Simon’s “Odd Couple” or like the characters in the 1981 film “Neighbors”? Does each firm have something the other needs?

Stephen E Arnold, July 28, 2021

Does Facebook Kill?

July 22, 2021

I found it interesting that the US government suggested that Facebook information kills. You can refresh your knowledge of this assertion in “Biden: COVID Misinformation on Platforms Like Facebook Is ‘Killing People’”. The statement is an attention grabber. Facebook responded, according to Neowin in “Facebook Refutes Biden’s Blame That It’s “Killing People” with COVID Fake News”:

Facebook clearly took issue with these statements and a company spokesperson responded by saying, “We will not be distracted by accusations which aren’t supported by the facts”.

The US government asserts one thing; Facebook another. Which is the correct interpretation of Facebook: An instrument of death or a really great helper of humanity?

The US is a country, and it has legal tools at its disposal. Facebook is a commercial enterprise operating in the US with a single person controlling what the company does.

Facebook wants to use the laws of the country to advantage itself; for example, Facebook is not too keen on Lina Khan. The company filed a legal document to keep that person from getting involved in matters related to Facebook’s commercial behaviors.

I find the situation amusing. Facebook’s assertions are not going to get a like from me. The US government, on the other hand, is a country. When countries take action — as China did with regard to Jack Ma — consequences can be significant.

The phrase “Facebook kills” is meme-able. That may be a persistent problem for the Zuck and the Zuckers in my opinion.

Stephen E Arnold, July 22, 2021
