Facebook: Friends Are Useful

April 16, 2019

I read “Mark Zuckerberg Leveraged Facebook User Data to Fight Rivals and Help Friends, Leaked Documents Show.” I must admit I was going to write about Alphabet Google YouTube DeepMind’s smart software, which classified the fire in Paris in a way that displayed links to the 9/11 attack. I then thought, “Why not revisit Microsoft’s changing story about how much user information was lost via the email breach?” But I settled on a compromise story. Facebook allegedly loses control of documents. That is a security angle. The documents reveal how high school science club management methods allegedly operate.

According to CNBC:

Facebook would reward favored companies by giving them access to the data of its users. In other cases, it would deny user-data access to rival companies or apps.

If true, the statement does not surprise me. I was in my high school science club, and I have a snapshot of our fine group of outcasts, wizards, and crazy people. Keep in mind: I was one of these high school exemplars.

Let’s put these allegedly true revelations in context:

  1. Facebook has amassed a remarkable track record in the last year
  2. Google, a company which contributed some staff to Facebook, seems to have some interesting behaviors finding their way into the “real news” media; for example, senior management avoiding certain meetings and generally staying out of sight
  3. Microsoft, a firm which dabbled in monopoly power, is trying to figure out how to convert its high school science club management methods in its personnel department to processes which match some employees’ expectations for workplace behavior.

What’s the view from Harrod’s Creek? As with the Lyft IPO and its subsequent stock market performance, the day of reckoning does not arrive with a bang. Nope. The day creeps in on cat’s feet. The whimpering may soon follow.

Stephen E Arnold, April 16, 2019

 

Zuck Hunting: Investors Want to Blast a Clay Pigeon

April 14, 2019

I read “Facebook Investors Desperate to Boot Mark Zuckerberg from Chairmanship.” I wonder if these senior business professionals realize that the Zuck (Mark Zuckerberg) has taken anticipatory steps to remain in control of the Facebook privacy grinder.

The write up reports this sentence from an April 12, 2019, Securities & Exchange Commission filing:

“[Zuckerberg’s] dual-class shareholdings give him approximately 60% of Facebook’s voting shares, leaving the board, even with a lead independent director, with only a limited ability to check Mr. Zuckerberg’s power,” reads the statement supporting the proposal. “We believe this weakens Facebook’s governance and oversight of management.”

The write up summarizes some of the concerns stakeholders have about the Zuck’s decision making.

Zuck (the “face” of Facebook) may not embrace the idea of a small step toward knocking his clay pigeons from the sky.

I learned that “Facebook isn’t a fan” of this idea.

From my vantage point in rural Kentucky, the situation seems to be:

  1. A lack of meaningful regulatory oversight of, and control over, a company which has demonstrated a willingness to say “I am sorry.” Each time I hear these words from a Facebook professional, I think of John Cleese’s character being held upside down out of a window in “A Fish Called Wanda.”
  2. A desire to continue chugging forward in order to maintain what I call “HSSCMM” or high school science club management methods. (If you are not sure what this means, ask a high school science club student at an institution near you.)
  3. Confidence that the company’s team and its users can continue to bring the world together despite some modest evidence that Facebook causes a small amount of disruption.

The pushback strikes me as an example of too little, too late. But at least there is some questioning of the HSSCMM’s efficacy.

I hear the call “pull,” but that clay pigeon is flying free. The prized Zuck sails forward unscathed.

Stephen E Arnold, April 14, 2019

Looking Back: Facebook and Live Streams

April 9, 2019

Many have asked how Facebook could allow it: during the tragic mass shooting in New Zealand on March 15, the alleged perpetrator live-streamed the horror for 17 minutes. Now, CNET shares, “Facebook Explains Why its AI Didn’t Catch New Zealand Gunman’s Livestream.” Writers Erin Carson and Queenie Wong cite a post from Facebook VP Guy Rosen and say the company just wasn’t prepared for such an event. They report:

“In order for AI to recognize something, it has to be trained on what it is and isn’t. For example, you might need thousands of images of nudity or terrorist propaganda to teach the system to identify those things. ‘We will need to provide our systems with large volumes of data of this specific kind of content, something which is difficult as these events are thankfully rare,’ Rosen said in the post. In addition, he noted that it’s a challenge for the system to recognize ‘visually similar’ images that could be harmless like live-streamed video games. ‘AI is an incredibly important part of our fight against terrorist content on our platforms, and while its effectiveness continues to improve, it is never going to be perfect,’ Rosen said. Facebook’s AI challenges also underscore how the social network relies on user reports. The social network didn’t get a user report during the alleged shooter’s live broadcast. That matters, Rosen said, because Facebook prioritizes reports about live videos.”

The first user report about this video came in 12 minutes after the stream ended. The company says fewer than 200 users viewed the video in real time, but that more than 4,000 views occurred before it was taken down.

With no vetting, no time delay, and just smart software, the shooting video was available.
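
Rosen’s “trained on what it is and isn’t” is plain-vanilla supervised classification. Here is a minimal sketch of the idea in Python with scikit-learn; the synthetic features, class sizes, and 0.9 flag threshold are my illustrative assumptions, not Facebook’s actual pipeline.

```python
# Minimal supervised-classification sketch (assumed setup, not
# Facebook's pipeline). Requires numpy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Thousands of labeled examples: 1 = violating content, 0 = benign.
# In production the features would come from a video/image encoder;
# here they are synthetic numbers chosen so the classes are separable.
benign = rng.normal(loc=0.0, scale=1.0, size=(5000, 16))
violating = rng.normal(loc=1.5, scale=1.0, size=(5000, 16))
X = np.vstack([benign, violating])
y = np.array([0] * 5000 + [1] * 5000)

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Score a new upload and flag it only if the model is confident.
new_upload = rng.normal(loc=1.5, scale=1.0, size=(1, 16))
p = clf.predict_proba(new_upload)[0, 1]
print(f"P(violating) = {p:.2f}", "-> flag for review" if p > 0.9 else "-> pass")
```

Note the 5,000 positive examples. That is exactly the pile Rosen says Facebook lacks for events which are, thankfully, rare.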

Rosen does tell us how Facebook plans to address the issue going forward: continue to improve its AI’s matching technology; find a way to get user reports faster; and continue working with the Global Internet Forum to Counter Terrorism. Do these plans seem a bit nebulous to anyone else?

Three of the Five Eyes are taking steps to put sheriffs in the social media territory.

Cynthia Murrell, April 9, 2019

Making, Not Filtering, Disinformation

April 8, 2019

I spotted a link to this article on Sunday (April 7, 2019). The title of the “real news” report was “Facebook Is Asking to Be Regulated but Wants to Choose How.” The write up ostensibly was about Facebook’s realization that regulation would be good for everyone. Mark Zuckerberg wants to be able to do his good work within a legal framework.

I noted this passage in the article:

Facebook has been in the vanguard of creating ways in which both harmful content can be generated and easily sent to anyone in the world, and it has given rise to whole new categories of election meddling. Asking for government regulation of “harmful content” is an interesting proposition in terms of the American constitution, which straight-up forbids Congress from passing any law that interferes with speech under the first amendment.

I also circled this statement:

Facebook went to the extraordinary lengths of taking out “native advertising” in the Daily Telegraph. In other words ran a month of paid-for articles demonstrating the sunnier side of tech, and framing Facebook’s efforts to curb nefarious activities on its own platform. There is nothing wrong with Facebook buying native advertising – indeed, it ran a similar campaign in the Guardian a couple of years ago – but this was the first time that the PR talking points adopted by the company have been used in such a way.

From Mr. Zuckerberg’s point of view, he is sharing his ideas.

From the Guardian’s point of view, he is acting in a slippery manner.

From the point of view of the newspapers reporting about his activities and, in the case of the Washington Post, providing him with an editorial forum, news is news.

But what’s the view from Harrod’s Creek? Let me share a handful of observations:

  1. If a person pays money to a PR firm to get information in a newspaper, that information is “news” even if it sets forth an agenda
  2. Identifying disinformation or weaponized information is difficult, it seems, for humans involved in creating “real news”. No wonder software struggles. Money may cloud judgment.
  3. Information disseminated from seemingly “authoritative” sources is not much different from the info rocks launched from a digital slingshot. Disgruntled tweeters and unhappy Instagramers can make people duck and respond.

For me, disinformation, reformation, misinformation, and probably regular old run-of-the-mill information are unlikely to be objective. Therefore, efforts to identify and filter these payloads, whatever the motivation, are likely to be very difficult.

Stephen E Arnold, April 8, 2019

Singapore Enters the War Against US Social Media

April 7, 2019

Wow, the high school science club companies may have to duke it out with the student council. That’s a metaphor. The Facebooks, Googles, and other social media ad giants may have to deal with politicians. Horror of horrors.

I read “Singapore’s Fake News Laws Upset Tech Giants.” The main point is that a city state which takes a hard line on chewing gum is “adopting tough measures” related to fake news. (I assume the write up in Phys.org is accurate, of course.)

The article noted:

Singapore is among several countries pushing legislation to fight fake news, and the government stressed ordering “corrections” to be placed alongside falsehoods would be the primary response, rather than jail or fines.

The trick is that some person or some algorithm has to spot fake news. If that person is an individual who perceives a misstatement, that may be contentious. If a smart algorithm from the science club crowd misses fake news, that’s probably a thorny path as well.

Facebook and Google are on the scene. I noted this statement in the write up:

Google, Facebook and Twitter have their Asia headquarters in Singapore, a city of 5.6 million which is popular with expats as it is developed, safe and efficient. But there were already signs of tensions with tech companies as the government prepared to unveil the laws. During parliamentary hearings last year about tackling online falsehoods, Google and Facebook urged the government not to introduce new laws.

I interpreted this to mean, “Yikes, lobbying does not work in Singapore as it does in the USA.”

Another tiny step from non-US regulators to get certain firms to abandon some of their more interesting and possibly cavalier and entitled practices. Can the mere government of Singapore deal with the corporate countries US laws have enabled?

Stephen E Arnold, April 7, 2019

Facebook Problems: A Ripple or a Category 5 Alert

March 26, 2019

When hurricanes hit hapless Florida, the devastation is not confined to a single trailer court. Even the big money McMansions can lose their roofs. Fortune Magazine identifies Facebook and its problems in an insightful way in “Facebook Ever-More Vulnerable to Policy Risks, Analysts Warn.”

Financial analysts and politicos see the anti-Facebookism as something different. Different may mean it is time to cash out and distance oneself from the poster child of high school science club management. Unfortunately, the quote round-up from assorted experts takes an understandably narrow focus.

The write up concludes:

Facebook shares gained as much as 1.3 percent on Wednesday. The stock has rallied 25 percent year-to-date, versus a 13 percent gain for the S&P 500, though it has fallen almost 3 percent in the past year, compared to the market’s 4 percent rise.

The negativism has generated some financial upside.

What’s Fortune ignoring?

In my opinion, Facebook is one of those early warning gizmos the IBM Weather Channel uses to explain that the hurricane forming will be terrible. If the hurricane forms and tracks over Florida, the damage is going to be extensive.

The Facebook problem may take out other properties as well. In Wall Street’s environment, big losses could be a bit of a problem.

Stephen E Arnold, March 26, 2019

Forbes Raises Questions about Facebook Encryption

March 25, 2019

I am never sure if a story in Forbes (the capitalist tool) is real journalism or marketing. I was interested in a write up called “Could Facebook Start Mining Decrypted WhatsApp Messages For Ads And Counter-Terrorism?” The main point is that Facebook encryption could permit Facebook to read customers’ messages. The purpose of such access would be to sell ads and provide information to “governments or harvesters.” The write up states:

The problem is that end-to-end encryption only protects a message during transit. The sender’s device typically retains an unencrypted copy of the message, while the recipient’s device necessarily must decrypt the message to display to the user. If either of those two devices have been compromised by spyware, the messages between them can be observed in real-time regardless of how strong the underlying encryption is.

No problem with this description. Intentionally or unintentionally, the statement makes clear why compromising user devices is an important tool in some governments’ investigative and intelligence toolboxes. Why bother decrypting when the bad actor’s mobile device or computer just emails the information to a third party?
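
For readers who want the in-transit versus on-device distinction made concrete, here is a minimal sketch using the PyNaCl library. It stands in for end-to-end encryption generally, not WhatsApp’s actual Signal-protocol implementation; the keys and message are invented.

```python
# Sketch of end-to-end encryption endpoints using PyNaCl
# (pip install pynacl). Generic illustration, not WhatsApp's protocol.
from nacl.public import PrivateKey, Box

sender_key = PrivateKey.generate()
recipient_key = PrivateKey.generate()

# On the sender's device: the plaintext exists here before encryption.
plaintext = b"meet at noon"
wire_message = Box(sender_key, recipient_key.public_key).encrypt(plaintext)

# In transit: only ciphertext is visible to the network.
assert plaintext not in wire_message

# On the recipient's device: decryption restores the plaintext.
received = Box(recipient_key, sender_key.public_key).decrypt(wire_message)
assert received == plaintext

# Spyware, or the messaging app itself, running on either endpoint
# sees `plaintext` / `received` directly, whatever the cipher strength.
```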

I noted this statement as well:

The messaging app itself has access to the clear text message on both the sender and recipient’s devices.

If I understand the assertion, Facebook can read the messages sent by its encrypted service.

The write up asserts:

As its encrypted applications are increasingly used by terrorists and criminals and to share hate speech and horrific content, the company will come under further pressure to peel back the protections of encryption.

Even if Facebook wants to leave encrypted information encrypted, outside pressures may force Facebook to just decrypt and process the information.

The conclusion of the write up is interesting:

Putting this all together, it is a near certainty that Facebook did not propose its grand vision of platform-wide end-to-end encryption without a clear plan in place to ensure it would be able to continue to monetize its users just as effectively as in its pre-encryption era. The most likely scenario is a combination of behavioral affinity inference through unencrypted metadata and on-device content mining. In the end, as end-to-end encryption meets the ad-supported commercial reality of Facebook, it is likely that we will see a dawn of a new era of on-device encrypted message mining in which Facebook is able to mine us more than ever under the guise of keeping us safe.
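
That phrase “behavioral affinity inference through unencrypted metadata” deserves a concrete gloss. Below is a toy sketch of what sender, recipient, and timestamp alone can support; the log format, names, and targeting rule are all invented for illustration.

```python
# Hypothetical metadata-only inference. The log format, names, and
# targeting rule are invented; no message content is used anywhere.
from collections import Counter
from datetime import datetime

# Each record: (sender, recipient, timestamp). Bodies stay encrypted.
log = [
    ("alice", "runners_club_group", datetime(2019, 3, 1, 6, 15)),
    ("alice", "runners_club_group", datetime(2019, 3, 3, 6, 20)),
    ("alice", "bob", datetime(2019, 3, 3, 22, 40)),
    ("alice", "runners_club_group", datetime(2019, 3, 5, 6, 10)),
]

# Who does Alice talk to most, and when is she active?
contacts = Counter(r for s, r, ts in log if s == "alice")
early_riser = sum(ts.hour < 7 for _, _, ts in log) >= 2

top_contact = contacts.most_common(1)[0][0]
print("heaviest contact:", top_contact)
print("target fitness ads:", early_riser and "runners" in top_contact)
```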

Speculation? Part of the capitalist toolkit, it seems. Is there a solution? The write up just invokes Orwell. Fear, uncertainty, doubt. Whatever sells. But news?

Stephen E Arnold, March 25, 2019

Smart or Not So Smart Software?

March 22, 2019

I read “A Further Update on New Zealand Terrorist Attack.” The good news is that the Facebook article did not include the word “sorry” or the phrase “we’ll do better.” The bad news is that the article includes this statement:

AI systems are based on “training data”, which means you need many thousands of examples of content in order to train a system that can detect certain types of text, imagery or video. This approach has worked very well for areas such as nudity, terrorist propaganda and also graphic violence where there is a large number of examples we can use to train our systems. However, this particular video did not trigger our automatic detection systems. To achieve that we will need to provide our systems with large volumes of data of this specific kind of content, something which is difficult as these events are thankfully rare. Another challenge is to automatically discern this content from visually similar, innocuous content – for example if thousands of videos from live-streamed video games are flagged by our systems, our reviewers could miss the important real-world videos where we could alert first responders to get help on the ground.

Violent videos have never before been posted to Facebook? Hmmm.

Smart software, smart employees, smart PR. Sort of. The fix is to process more violent videos. Sounds smart.
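
A back-of-the-envelope calculation shows why “thankfully rare” is the crux. Every number below is invented; the base-rate arithmetic is the point.

```python
# Back-of-the-envelope on the false-positive problem. All numbers
# below are invented; the base-rate effect is what matters.
daily_live_streams = 3_000_000   # assumed volume
true_violating = 10              # "thankfully rare"
false_positive_rate = 0.001      # flags 0.1% of innocuous streams
true_positive_rate = 0.99        # catches 99% of real events

false_flags = (daily_live_streams - true_violating) * false_positive_rate
true_flags = true_violating * true_positive_rate

print(f"flags per day: {false_flags + true_flags:,.0f}")
print(f"share that are real: {true_flags / (false_flags + true_flags):.2%}")
```

Roughly 300 false flags for every real one, even with generous accuracy numbers. That is the haystack the human reviewers face.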

Stephen E Arnold, March 22, 2019

Instagram: Another Facebook Property in the News

March 22, 2019

Instagram (IG or Insta) has become an important social media channel. Here’s a quick example:

My son and his wife have opened another exercise studio in Washington, DC. How was the service promoted? Instagram.

Did the Instagram promotions for the new facility work? Yes, quite well.

The article “Instagram Is the Internet’s New Home for Hate” makes an attempt to explain that Facebook’s Instagram is more than a marketing tool. Instagram is a source of misinformation.

The write up states:

Instagram is teeming with these conspiracy theories, viral misinformation, and extremist memes, all daisy-chained together via a network of accounts with incredible algorithmic reach and millions of collective followers—many of whom, like Alex, are very young. These accounts intersperse TikTok videos and nostalgia memes with anti-vaccination rhetoric, conspiracy theories about George Soros and the Clinton family, and jokes about killing women, Jews, Muslims, and liberals.

We also noted this statement:

The platform is likely where the next great battle against misinformation will be fought, and yet it has largely escaped scrutiny. Part of this is due to its reputation among older users, who generally use it to post personal photos, follow inspirational accounts, and keep in touch with friends. Many teenagers, however, use the platform differently—not only to connect with friends, but to explore their identity, and often to consume information about current events.

Is it time to spend more time on Instagram? How do intelligence-centric software systems index Instagram content? What non-obvious information can be embedded in a picture or a short video? Who or what examines content posted on the service? Can images with hashtags be used to pass information about possibly improper or illegal activities?
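
On the “non-obvious information” question, least-significant-bit steganography is the textbook answer. Here is a toy sketch on a synthetic grayscale image; a naive scheme like this would not survive the JPEG recompression a real Instagram upload undergoes.

```python
# Toy least-significant-bit steganography on a synthetic grayscale
# "image" (a numpy array). JPEG recompression would destroy this.
import numpy as np

def embed(image: np.ndarray, message: bytes) -> np.ndarray:
    """Hide message bits in the low bit of the first len(message)*8 pixels."""
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = image.flatten()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract(image: np.ndarray, length: int) -> bytes:
    """Read length bytes back out of the low bits."""
    bits = image.flatten()[: length * 8] & 1
    return np.packbits(bits).tobytes()

cover = np.random.default_rng(2).integers(0, 256, (32, 32), dtype=np.uint8)
stego = embed(cover, b"meet at pier 9")

assert extract(stego, 14) == b"meet at pier 9"  # hidden payload recovered
assert int(np.abs(stego.astype(int) - cover.astype(int)).max()) <= 1  # looks identical
```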

Stephen E Arnold, March 22, 2019

Facebook: Ripples of Confusion, Denial, and Revisionism

March 18, 2019

Facebook contributed to an interesting headline about the video upload issue related to the bad actor in New Zealand. Here’s the headline I noted as it appeared on Techmeme’s Web page:

[Headline image from Techmeme]

The Reuters story ran a different headline:

[Headline image from Reuters]

What caught my attention is the statement “blocked at upload.” If a video was blocked at upload, was it also counted among the videos removed? If the blocked uploads are subtracted, the number of videos actually taken down drops to 300,000.

This type of information is typical of the coverage of Facebook, a company which has become the embodiment of social media.
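
A technical aside: “blocked at upload” usually implies matching each new upload against fingerprints of known-bad content. Here is a toy average-hash version of the idea; production systems use far more robust perceptual hashing and video fingerprinting, and the threshold below is an assumption.

```python
# Toy average-hash matching; an assumed simplification of upload-time
# blocking, not Facebook's actual fingerprinting system.
import numpy as np

def average_hash(frame: np.ndarray, grid: int = 8) -> int:
    """Block-average a grayscale frame to grid x grid cells, then set
    one bit per cell: 1 if the cell is brighter than the cell mean."""
    h, w = frame.shape
    cells = frame.reshape(grid, h // grid, grid, w // grid).mean(axis=(1, 3))
    bits = (cells > cells.mean()).ravel()
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

# A known-bad frame versus a slightly re-encoded copy of it.
rng = np.random.default_rng(1)
known_bad = rng.integers(0, 256, size=(64, 64)).astype(float)
reupload = known_bad + rng.normal(0, 4, size=known_bad.shape)

# A small Hamming distance means "same content": block the upload.
distance = hamming(average_hash(known_bad), average_hash(reupload))
print("hash distance:", distance, "-> block" if distance <= 10 else "-> allow")
```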

There were two other interesting Facebook stories in my news feed this morning.

The first concerns a high-profile Silicon Valley investor, Marc Andreessen. The write up reports and updates a story whose main point is:

Facebook Board Member May Have Met Cambridge Analytica Whistleblower in 2016.


