Facebook Factoid: Deleting User Content

July 6, 2017

Who knows if this number is accurate. I found the assertion of a specific number of Facebook deletions interesting. Plus, someone took the time to wrap the number in some verbiage about filtering, aka censorship. The factoid appears in “Facebook Deletes 66,000 Posts a Week to Curb Hate Speech, Extremism.”

Here’s the passage with the “data”:

Facebook has said that over the past two months, it has removed roughly 66,000 posts on average per week that were identified as hate speech.

My thought is that the 3.2 million “content objects” figure is neither high nor low. The number arrives without context other than my assumption that Facebook has two billion monthly users. The method used to locate and scrub the data seems to be a mystical process powered by artificial intelligence and humans.
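
A quick back-of-the-envelope calculation hints at where a number in the 3.2 million range might come from; the annualization below is my assumption, since the article only supplies the weekly figure:

```python
# A rough sanity check on the deletion numbers. The annualization is
# my assumption; the article only gives the weekly figure.
weekly_removals = 66_000
print(f"{weekly_removals * 52:,}")          # 3,432,000 across a full year of weeks
print(f"{round(weekly_removals * 8.7):,}")  # ~574,200 over the two months actually measured
```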

One thing is clear to me: Figuring out what to delete seems to be a somewhat challenging task, both for the engineers writing the smart software and for the lucky humans who get paid to identify inappropriate content in the musings of billions of happy Facebookers.

What about those “failures”? Good question. What about that “context”? Another good question. Without context, what do we have with this magical 66,000? Not much, in my opinion. One can’t find information if it has been deleted. That’s another issue to consider.

Stephen E Arnold, July 6, 2017

Facebook to Tackle Terrorism with Increased Monitoring

July 5, 2017

Due to recent PR nightmares involving terrorist organizations, Facebook is revamping its policies and policing of terrorist content within the social media network. A recent article in Digital Trends, “Facebook Fights Against Terrorist Content on Its Site Using A.I., Human Expertise,” explains how Zuckerberg and his team of anti-terrorism experts are changing the game in monitoring Facebook for terrorist activity.

As explained in the article,

To prevent AI from flagging a photo related to terrorism in a post like a news story, human judgment is still required. In order to ensure constant monitoring, the community operations team works 24 hours a day and its members are also skilled in dozens of languages.

Recently, Facebook was in the news for putting its human monitors at risk by accidentally revealing personal information to the terrorists they were investigating on the site. As Facebook increases the number of monitors, it seems the risk to those monitors also increases.

The efforts put forth by Facebook are admirable, yet we cannot help but wonder how, even with its impressive AI/human team, the platform can monitor the sheer volume of live-streamed video as it continues to increase. The threats, terrorist or otherwise, present in social media continue to grow with the technology and will require a much bigger fix than more manpower.

Catherine Lamsfuss, July 5, 2017

Facebook: Search Images by the Objects They Contain

July 3, 2017

Has Facebook attained the holy grail of image search? TechCrunch reports, “Facebook’s AI Unlocks the Ability to Search Photos by What’s in Them.” I imagine this will be helpful to law enforcement.

A platform Facebook originally implemented to help the visually impaired, Lumos (built on top of FBLearner Flow), is now being applied to search functionality across the social network. With this tool, one can search using keywords that describe things in the desired image, rather than relying on tags and captions. Writer John Mannes describes how this works:

Facebook trained an ever-fashionable deep neural network on tens of millions of photos. Facebook’s fortunate in this respect because its platform is already host to billions of captioned images. The model essentially matches search descriptors to features pulled from photos with some degree of probability. After matching terms to images, the model ranks its output using information from both the images and the original search. Facebook also added in weights to prioritize diversity in photo results so you don’t end up with 50 pics of the same thing with small changes in zoom and angle. In practice, all of this should produce more satisfying and relevant results.
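
To make the mechanics a bit more concrete, here is a minimal sketch of that pipeline: score photos against a query embedding, then greedily re-rank so near-duplicate shots get pushed down. The vectors, the MMR-style scoring, and the diversity weight are all illustrative assumptions on my part; Facebook has not published the Lumos internals.

```python
# A minimal sketch of query-to-photo matching with diversity-aware
# re-ranking, as described in the quoted passage. The embeddings,
# scoring function, and weight are invented for illustration; this is
# not Facebook's actual Lumos/FBLearner Flow code.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(query_vec, photo_vecs, top_k=5, diversity_weight=0.5):
    """Greedily pick photos: reward similarity to the query, penalize
    similarity to photos already chosen (so results stay varied)."""
    selected, remaining = [], list(range(len(photo_vecs)))
    while remaining and len(selected) < top_k:
        def score(i):
            relevance = cosine(query_vec, photo_vecs[i])
            redundancy = max((cosine(photo_vecs[i], photo_vecs[j])
                              for j in selected), default=0.0)
            return relevance - diversity_weight * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy demo: random vectors stand in for real image/text embeddings.
rng = np.random.default_rng(0)
photos = rng.normal(size=(100, 64))
query = rng.normal(size=64)
print(search(query, photos))  # indices of the five chosen photos
```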

Facebook expects to extrapolate this technology to the wealth of videos it continues to amass. This could be helpful to a user searching for personal videos, of course, but just consider the marketing potential. The article continues:

Pulling content from photos and videos provides an original vector to improve targeting. Eventually it would be nice to see a fully integrated system where one could pull information, say searching a dress you really liked in a video, and relate it back to something on Marketplace or even connect you directly with an ad-partner to improve customer experiences while keeping revenue growth afloat.

Mannes reminds us Facebook is operating amidst fierce competition in this area. Pinterest, for example, enables users to search images by the objects they contain. Google may be the furthest along, though; that inventive company has developed its own image captioning model that boasts an accuracy rate of over 90% when either identifying objects or classifying actions within images.

Cynthia Murrell, July 3, 2017

Facebook May Be Exploiting Emotions of Young Audiences

June 26, 2017

Open Rights Group, a privacy advocacy organization, is demanding details of a study Facebook conducted on teens, the results of which were reportedly sold to marketing companies. This might be a blatant invasion of privacy and an attempt to capitalize on the emotional distress of teens.

In a press release sent out by the Open Rights Group and titled “Rights Groups Demand More Transparency over Facebook’s ‘Insights’ into Young Users,” the spokesperson says:

It is incumbent upon Facebook as a cultural leader to protect, not exploit, the privacy of young people, especially when their vulnerable emotions are involved.

This is not the first time technology companies have come under heavy criticism from privacy rights groups. Facebook, through its social media platform, collects information and metrics from users, analyzes them, and sells the results to marketing companies. However, Facebook never explicitly tells users that they are being watched. Open Rights Group is demanding that this information be made public. Though there is little hope, will Facebook concede?

Vishal Ingole, June 26, 2017

What to Do about the Powerful Tech Monopolies

June 14, 2017

Traditionally, we as a country have a thing against monopolies—fair competition for the little guy and all that. Have we allowed today’s tech companies to amass too much power? That seems to be the conclusion of SiliconBeat’s article, “Google, Facebook, and Amazon: Monopolies that Should be Broken Up or Regulated?” Writer Ethan Baron summarizes these companies’ massive advantages, and the efforts of regulatory agencies to check them. He cites a New York Times article by Jonathan Taplin:

Taplin, in his op-ed, argued that Google, Facebook and Amazon ‘have stymied innovation on a broad scale.’ With industry giants facing limited competition, incumbent companies have a profound advantage over new entrants, Taplin said. And the tech firms’ explosive growth has caused massive damage to companies already operating, he said. ‘The platforms of Google and Facebook are the point of access to all media for the majority of Americans. While profits at Google, Facebook and Amazon have soared, revenues in media businesses like newspaper publishing or the music business have, since 2001, fallen by 70 percent,’ Taplin said. The rise of Google and Facebook have diverted billions of dollars from content creators to ‘owners of monopoly platforms,’ he said. All content creators dependent on advertising must negotiate with Google or Facebook as aggregator. Taplin proposed that for the three tech behemoths, there are ‘a few obvious regulations to start with.’

Taplin suggests limiting acquisitions as the first step since that is how these companies grow into such behemoths. For Google specifically, he suggests regulating it as a public utility. He also takes aim at the “safe harbor” provision of the federal Digital Millennium Copyright Act, which shields Internet companies from damages associated with intellectual property violations found on their platforms. Since the current political climate is not exactly ripe for regulation, Taplin laments that such efforts will have to wait a few years, by which time these companies will be so large that breaking them up will be the only remedy. We’ll see.

Cynthia Murrell, June 14, 2017

The Power of Context in Advertising

June 9, 2017

How’s it going with those ad-and-query matching algorithms? The Washington Post reports, “For Advertisers, Algorithms Can Lead to Unexpected Exposure on Sites Spewing Hate.” Readers may recall that earlier this year, several prominent companies pulled their advertisements from Google’s AdSense after they found them sharing space with objectionable content. Writers Elizabeth Dwoskin and Craig Timberg cite an investigation by their paper, which found the problem is widespread. (See the article for specifics.) How did we get here? The article explains:

The problem has emerged as Web advertising strategies have evolved. Advertisers sometimes choose to place their ads on particular sites — or avoid sites they dislike — but a growing share of advertising budgets go to what the industry calls ‘programmatic’ buys. These ads are aimed at people whose demographic or consumer profile is receptive to a marketing message, no matter where they browse on the Internet. Algorithms decide where to place ads, based on people’s prior Web usage, across vastly different types of sites.
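
To illustrate the mechanics the Post describes, here is a toy sketch of a programmatic bid: the decision keys entirely on the user profile, and the site hosting the impression never enters the calculation. All names and numbers below are invented for illustration; real ad exchanges are far more elaborate.

```python
# A toy "programmatic" bid: the ad follows the user profile, and the
# host site plays no role in the decision. All fields and values here
# are invented for illustration.
from dataclasses import dataclass

@dataclass
class Impression:
    site: str
    user_profile: dict

def bid(impression, target_profile, max_bid=2.0):
    """Bid in proportion to profile match; note the site is ignored."""
    matches = sum(1 for key, value in target_profile.items()
                  if impression.user_profile.get(key) == value)
    return max_bid * matches / max(len(target_profile), 1)

target = {"age_band": "25-34", "interest": "running"}
profile = {"age_band": "25-34", "interest": "running"}
for imp in (Impression("news-site.example", profile),
            Impression("hate-site.example", profile)):
    print(imp.site, bid(imp, target))  # identical bid for both sites
```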

The technology companies behind ad networks have slowly begun to address the issue, but warn it won’t be easy to solve. They say their algorithms struggle to distinguish between content that is truly offensive and language that is not offensive in context. For example, it can be hard for computers to determine the difference between the use of a racial slur on a white-supremacy site and a website about history.
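
A tiny example shows why the companies say this is hard: a term-level filter cannot tell a history page discussing a slur from a page using it as an attack. The term list and sample texts below are placeholders of my own, not any vendor’s actual blocklist.

```python
# Why term-level filtering misfires: both pages contain the flagged
# word, but only one is hateful. The term list and texts are
# placeholders, not a real vendor blocklist.
FLAGGED_TERMS = {"slur"}  # stand-in for an actual offensive term

def naive_filter(text):
    """Block any page containing a flagged term, regardless of context."""
    return bool(set(text.lower().split()) & FLAGGED_TERMS)

history_page = "a museum essay on how the slur was used to dehumanize people"
hate_page = "a post hurling the slur at a group"

print(naive_filter(history_page))  # True -- a false positive
print(naive_filter(hate_page))     # True -- the intended block
```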

To further complicate the issue, companies employing these algorithms want nothing to do with becoming “arbiters of speech.” After all, not every case is as simple as a post sporting a blatant slur in the headline; the space between hate speech and thoughtful criticism is more of a gradient than a line. Google, Facebook, et al. may not have signed up for this role, but the problem is the direct consequence of the algorithmic ad-placing model. Whether on this issue, the scourge of fake news, or the unwitting promotion of counterfeit goods, tech companies must find ways to correct the widespread consequences of their revenue strategies.

Cynthia Murrell, June 9, 2017

US Still Most Profitable for Alphabet

May 8, 2017

Alphabet, Inc., the parent company of Google, generates the most revenue from the US market. Europe, the Middle East, and Africa combined come in second, with Asia Pacific occupying the third slot.

Recode, in its earnings report titled “Here’s Where Alphabet Makes Its Money,” says:

U.S. revenue increased 25 percent from last year to $11.8 billion. Sales from the Asia-Pacific region rose 29 percent to $3.6 billion. Revenue from Europe, the Middle East, and Africa was up 13 percent to $8.1 billion.

Despite the fact that around 61% of the world’s population lives in the Asia-Pacific region, it is surprising that Google garners most of its revenue from a mere 322 million people. This can be attributed to the fact that China, which accounts for the bulk of Asia’s population, does not have access to Google or its services. India, another emerging market, though open, has yet to fully embrace the digital economy.
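
For a rough sense of scale, the quarterly figures work out to very different per-person numbers by region. The revenue figures come from the quoted Recode numbers; the population estimates are my own rough 2017 assumptions, not figures from the report.

```python
# Back-of-the-envelope per-capita revenue by region. Revenue comes
# from the quoted Recode figures; the populations are my rough 2017
# estimates, not numbers from the report.
regions = {
    # region: (quarterly revenue, USD billions; population, millions)
    "US": (11.8, 322),
    "EMEA": (8.1, 2200),
    "Asia-Pacific": (3.6, 4400),
}
for name, (revenue_billion, pop_million) in regions.items():
    per_capita = revenue_billion * 1e9 / (pop_million * 1e6)
    print(f"{name}: ${per_capita:.2f} per person per quarter")
# US ~$36.65, EMEA ~$3.68, Asia-Pacific ~$0.82
```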

While the chances of the Chinese market opening up for Google are slim, India seems to be high on the radar of not only Google but also other tech majors like Apple, Amazon, Microsoft, and Facebook.

Vishal Ingole, May 8, 2017

Facebook Excitement: The Digital Country and Kids

May 4, 2017

I read “Facebook Admits Oversight after Leak Reveals Internal Research On Vulnerable Children.” The write up reports that an Australian newspaper:

reported that Facebook executives in Australia used algorithms to collect data on more than six million young people in Australia and New Zealand, “indicating moments when young people need a confidence boost.”


The idea one or more Facebook professionals had strikes me as one with potential. If an online service can identify a person’s moment of weakness, that online service could deliver content designed to leverage that insight. The article said:

The data analysis — marked “Confidential: Internal Only” — was intended to reveal when young people feel “worthless” or “insecure,” thus creating a potential opening for specific marketing messages, according to The Australian. The newspaper said this case of data mining could violate Australia’s legal standards for advertising and marketing to children.

Not surprisingly, the “real” journalism reported Facebook’s response:

“Facebook has an established process to review the research we perform,” the statement continued. “This research did not follow that process, and we are reviewing the details to correct the oversight.”

When Facebook seemed to be filtering advertising based on race, Facebook said:

“Discriminatory advertising has no place on Facebook.”

My reaction to this revelation is, “What? This type of content shaping is news?”

My hunch is that some folks forget that when advertisers suggest one has a lousy complexion, particularly a disfiguring rash, the entire point is to dig at insecurities. When I buy the book Flow for a friend, I suddenly get lots of psycho-babble recommendations from Amazon.

Facebook, like any other sales-oriented, ad-hungry outfit, is going to push as many psychological buttons as possible to generate revenue. I have a hypothesis that the dependence some people have on Facebook “success” is part of the online business model.

What’s the fix?

“Fix” is a good word. The answer is, “More social dependence.”

In my experience, drug dealers do not do intervention. The customer keeps coming back until he or she doesn’t.

Enforcement seems to be a hit-and-miss solution. Intervention makes some Hollywood types oodles of money in reality programming. Social welfare programs slump into bureaucratic floundering.

Could it be that online dependence is a cultural phenomenon? Facebook is in the right place at the right time. Technology makes it easy to refine messages for maximum financial value.

Interesting challenge, and the thrashing about for a “fix” will be fascinating to watch. Perhaps the events will be live-streamed on Facebook? That may provide a boost in confidence to Facebook users and to advertisers. Win-win.

Stephen E Arnold, May 4, 2017

Android Introduces In Apps Search

March 20, 2017

Google has announced a new Android search feature, this one specifically for documents and messages within your apps. With this feature, if you want to revisit that great idea you jotted down last Tuesday, you will (eventually) be able to search for it within Evernote using whatever keywords you can recall from your brilliant plan. The brief write-up at Ubergizmo, “Google Introduces ‘In Apps’ Search Feature to Android,” explains the new feature:

According to Google, ‘We use apps to call friends, send messages or listen to music. But sometimes, it’s hard to find exactly what you’re looking for. Today, we’re introducing a new way for you to search for information in your apps on your Android phone. With this new search mode, called In Apps, you can quickly find content from installed apps.’

Basically, searching under the ‘In Apps’ tab in the search bar on your Android phone searches within the apps themselves instead of the web. This is ideal if you’re trying to bring up a particular message, or if you have saved a document and you’re unsure whether you saved it in Evernote, Google Drive, Dropbox, your email, and so on.

So far, In Apps only works with Gmail, Spotify, and YouTube. However, Google plans to incorporate the feature into more apps, including Facebook Messenger, LinkedIn, Evernote, Glide, Todoist, and Google Keep. I expect we will eventually see the feature integrated into nearly every Android app.
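
Conceptually, the feature amounts to fanning one query out over locally indexed app content instead of the web. Here is a toy sketch of that idea; the app names and index layout are invented for illustration, and Google has not published how the on-device index actually works.

```python
# A toy version of the "In Apps" idea: one query searched across
# locally indexed app content, with no web access involved. The index
# layout and entries are invented for illustration.
LOCAL_INDEX = {
    "Gmail":   ["lunch with sam on tuesday", "q2 budget draft"],
    "Spotify": ["running playlist", "rainy day jazz"],
    "YouTube": ["how to fix a bike chain"],
}

def in_apps_search(query):
    """Return (app, item) pairs whose text contains every query term."""
    terms = query.lower().split()
    return [(app, item)
            for app, items in LOCAL_INDEX.items()
            for item in items
            if all(term in item for term in terms)]

print(in_apps_search("budget"))  # [('Gmail', 'q2 budget draft')]
```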

Cynthia Murrell, March 20, 2017

When AI Spreads Propaganda

February 28, 2017

We thought Google was left-leaning, but an article at the Guardian, “How Google’s Search Algorithm Spreads False Information with a Rightwing Bias,” seems to contradict that assessment. The article cites recent research by the Observer, which found neo-Nazi and anti-Semitic views prominently featured in Google search results. The Guardian followed up with its own research and documented more examples of right-leaning misinformation, like climate-change denials, anti-LGBT tirades, and Sandy Hook conspiracy theories. Reporters Olivia Solon and Sam Levin tell us:

The Guardian’s latest findings further suggest that Google’s searches are contributing to the problem. In the past, when a journalist or academic exposes one of these algorithmic hiccups, humans at Google quietly make manual adjustments in a process that’s neither transparent nor accountable.

At the same time, politically motivated third parties including the ‘alt-right’, a far-right movement in the US, use a variety of techniques to trick the algorithm and push propaganda and misinformation higher up Google’s search rankings.

These insidious manipulations – both by Google and by third parties trying to game the system – impact how users of the search engine perceive the world, even influencing the way they vote. This has led some researchers to study Google’s role in the presidential election in the same way that they have scrutinized Facebook.

Robert Epstein from the American Institute for Behavioral Research and Technology has spent four years trying to reverse engineer Google’s search algorithms. He believes, based on systematic research, that Google has the power to rig elections through something he calls the search engine manipulation effect (SEME).

Epstein conducted five experiments in two countries and found that biased rankings in search results can shift the opinions of undecided voters. If Google tweaks its algorithm to show more positive search results for a candidate, the searcher may form a more positive opinion of that candidate.
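
A minimal simulation makes the claimed mechanism easy to see: if reader attention decays with rank, re-ordering the same results changes how much exposure each candidate gets. The 1/rank attention curve and the toy result lists below are my assumptions, not Epstein’s experimental design.

```python
# Toy illustration of ranking bias: attention is assumed to decay as
# 1/rank, so promoting one candidate's results shifts exposure share
# without adding or removing any items.
def exposure(results, candidate):
    """Position-weighted attention for items favoring a candidate."""
    return sum(1.0 / rank
               for rank, item in enumerate(results, start=1)
               if item == candidate)

neutral = ["A", "B", "A", "B", "A", "B"]  # candidates interleaved
biased  = ["A", "A", "A", "B", "B", "B"]  # same items, A pushed up

for label, ordering in (("neutral", neutral), ("biased", biased)):
    share = exposure(ordering, "A") / (exposure(ordering, "A")
                                       + exposure(ordering, "B"))
    print(f"{label}: candidate A gets {share:.0%} of attention")
# neutral ~63%, biased ~75% -- same content, different impression
```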

This does add a whole new, insidious dimension to propaganda. Did Orwell foresee algorithms? Further complicating the matter is the element of filter bubbles, through which many consume only information from homogenous sources, allowing no room for contrary facts. The article delves into how propagandists are gaming the system and describes Google’s response, so interested readers may wish to navigate there for more information.

One particular point gives me chills: Epstein states that research shows the vast majority of readers are not aware that bias exists within search rankings; they have no idea they are being manipulated. Perhaps those of us with some understanding of search algorithms can spread that insight to the rest of the multitude. It seems such education is sorely needed.

Cynthia Murrell, February 28, 2017