Instagram Reins in Trolls

July 21, 2017

Photo-sharing app Instagram has successfully implemented DeepText, a program that can weed out nasty and spammy comments from people’s feeds.

Wired, in an article titled Instagram Unleashes an AI System to Blast Away Nasty Comments, says:

DeepText is based on recent advances in artificial intelligence, and a concept called word embeddings, which means it is designed to mimic the way language works in our brains.

DeepText was initially built by Facebook, Instagram’s parent company, to keep abusers, trolls, and spammers at bay. Buoyed by its success, Facebook soon implemented it on Instagram.

The development process was arduous: for months, a large number of employees and contractors taught the DeepText engine how to identify abusive comments. This was achieved by telling the algorithm which words can be abusive based on their context.
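DeepText’s internals are not public, but the general approach the article describes, word embeddings plus a learned classifier, can be sketched in miniature. In this toy Python example every vector, weight, and word is invented for illustration; a real system learns hundreds of dimensions from billions of labeled comments:

```python
import math

# Toy 3-dimensional word embeddings (invented values for illustration only).
EMBEDDINGS = {
    "great":     [0.9, 0.1, 0.0],
    "photo":     [0.5, 0.4, 0.1],
    "buy":       [0.1, 0.2, 0.9],
    "followers": [0.0, 0.3, 0.9],
    "idiot":     [0.1, 0.9, 0.2],
}

# Hypothetical linear weights: dimension 2 leans abusive, dimension 3 spammy.
WEIGHTS = [-0.2, 1.0, 1.0]
BIAS = -0.8

def score_comment(text):
    """Average the embeddings of known words, then apply a logistic score."""
    vectors = [EMBEDDINGS[w] for w in text.lower().split() if w in EMBEDDINGS]
    if not vectors:
        return 0.0
    avg = [sum(dim) / len(vectors) for dim in zip(*vectors)]
    z = sum(w * x for w, x in zip(WEIGHTS, avg)) + BIAS
    return 1 / (1 + math.exp(-z))  # probability the comment is objectionable

print(score_comment("great photo"))          # low score: benign comment
print(score_comment("buy followers idiot"))  # higher score: flag for review
```

The embedding lookup is what lets such a model judge words “in context”: a comment’s score depends on the whole mix of words, not on any single term appearing on a blocklist.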

At the moment, the tools are being tested and rolled out for a limited number of users in the US and are available only in English. They will subsequently be rolled out to other markets and languages.

Vishal Ingole, July 21, 2017

Software That Detects Sarcasm on Social Media

July 20, 2017

Technion-Israel Institute of Technology Faculty of Industrial Engineering and Management has developed Sarcasm SIGN, a software that can detect sarcasm in social media content. People with learning difficulties will find this tool useful.

According to an article published by Digital Journal titled Software Detects Sarcasm on Social Media:

The primary aim is to interpret sarcastic statements made on social media, be they Facebook comments, tweets or some other form of digital communication.

As we move towards a more digitized world where the majority of our communications happen through digital channels, people with learning disabilities are at a disadvantage. As machine learning advances, so do natural language capabilities. Tools like these will be immensely helpful for people who are unable to understand the undertones of communication.

The same tool can also be utilized by brands to determine who is talking about them in a negative way. Now ain’t that wonderful, Facebook?

Vishal Ingole, July 20, 2017

Facebook Factoid: Deleting User Content

July 6, 2017

Who knows if this number is accurate. I found the assertion of a specific number of Facebook deletions interesting. Plus, someone took the time to wrap the number in some verbiage about filtering, aka censorship. The factoid appears in “Facebook Deletes 66,000 Posts a Week to Curb Hate Speech, Extremism.”

Here’s the passage with the “data”:

Facebook has said that over the past two months, it has removed roughly 66,000 posts on average per week that were identified as hate speech.

My thought is that the 3.2 million “content objects” (roughly 66,000 posts a week sustained over a year) is neither high nor low. The number is without context, other than my assumption that Facebook has two billion users per month. The method used to locate and scrub the data seems to be a mystical process powered by artificial intelligence and humans.

One thing is clear to me: Figuring out what to delete seems to be a somewhat challenging task, both for the engineers writing the smart software and for the lucky humans who get paid to identify inappropriate content in the musings of billions of happy Facebookers.

What about those “failures”? Good question. What about that “context”? Another good question. Without context, what have we with this magical 66,000? Not much, in my opinion. One can’t find information if it has been deleted. That’s another issue to consider.

Stephen E Arnold, July 6, 2017

Facebook to Tackle Terrorism with Increased Monitoring

July 5, 2017

Due to recent PR nightmares involving terrorist organizations, Facebook is revamping its policies and policing of terrorism content within the social media network. A recent article in Digital Trends, Facebook Fights Against Terrorist Content on Its Site Using A.I., Human Expertise, explains how Zuckerberg and his team of anti-terrorism experts are changing the game in monitoring Facebook for terrorism activity.

As explained in the article,

To prevent AI from flagging a photo related to terrorism in a post like a news story, human judgment is still required. In order to ensure constant monitoring, the community operations team works 24 hours a day and its members are also skilled in dozens of languages.

Recently, Facebook was in the news for putting its human monitors at risk by accidentally revealing personal information to the terrorists they were investigating on the site. As Facebook increases the number of monitors, it seems the risk to those monitors also increases.

The efforts put forth by Facebook are admirable, yet we can’t help but wonder how, even with its impressive AI/human team, the platform can monitor the sheer number of live-streaming videos as those numbers continue to increase. The threats, terrorist or otherwise, present in social media continue to grow with the technology and will require a much bigger fix than more manpower.

Catherine Lamsfuss, July 5, 2017

Facebook: Search Images by the Objects They Contain

July 3, 2017

Has Facebook attained the holy grail of image search? Tech Crunch reports, “Facebook’s AI Unlocks the Ability to Search Photos by What’s in Them.” I imagine this will be helpful to law enforcement.

A platform Facebook originally implemented to help the visually impaired, Lumos (built on top of FBLearner Flow), is now being applied to search functionality across the social network. With this tool, one can search using keywords that describe things in the desired image, rather than relying on tags and captions. Writer John Mannes describes how this works:

Facebook trained an ever-fashionable deep neural network on tens of millions of photos. Facebook’s fortunate in this respect because its platform is already host to billions of captioned images. The model essentially matches search descriptors to features pulled from photos with some degree of probability. After matching terms to images, the model ranks its output using information from both the images and the original search. Facebook also added in weights to prioritize diversity in photo results so you don’t end up with 50 pics of the same thing with small changes in zoom and angle. In practice, all of this should produce more satisfying and relevant results.
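The matching-and-ranking pipeline Mannes describes, score photos against the query, then re-rank so near-duplicates don’t crowd the results, can be sketched as follows. This is a hypothetical miniature with invented feature vectors and a simple redundancy penalty, not Facebook’s actual model:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Invented photo feature vectors; a real system would extract these with a
# deep neural network trained on billions of captioned images.
PHOTOS = {
    "beach_1": [0.95, 0.31, 0.0],
    "beach_2": [0.94, 0.34, 0.0],   # near-duplicate of beach_1
    "dog_1":   [0.5, 0.0, 0.866],
}

def search(query_vec, k=2, diversity=0.9):
    """Rank photos by query similarity, penalizing redundant results."""
    selected = []
    candidates = dict(PHOTOS)
    while candidates and len(selected) < k:
        def score(item):
            _, vec = item
            relevance = cosine(query_vec, vec)
            redundancy = max((cosine(vec, PHOTOS[s]) for s in selected),
                             default=0.0)
            return relevance - diversity * redundancy
        best = max(candidates.items(), key=score)[0]
        selected.append(best)
        del candidates[best]
    return selected

query = [1.0, 0.0, 0.0]  # pretend this vector encodes the query "beach"
print(search(query, diversity=0.0))  # pure similarity: two near-identical beaches
print(search(query, diversity=0.9))  # redundancy penalty demotes the duplicate
```

With the diversity weight at zero the two almost-identical beach photos fill both slots; raising it swaps the duplicate for a different subject, which is the “small changes in zoom and angle” problem the article mentions.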

Facebook expects to extrapolate this technology to the wealth of videos it continues to amass. This could be helpful to a user searching for personal videos, of course, but just consider the marketing potential. The article continues:

Pulling content from photos and videos provides an original vector to improve targeting. Eventually it would be nice to see a fully integrated system where one could pull information, say searching a dress you really liked in a video, and relate it back to something on Marketplace or even connect you directly with an ad-partner to improve customer experiences while keeping revenue growth afloat.

Mannes reminds us Facebook is operating amidst fierce competition in this area. Pinterest, for example, enables users to search images by the objects they contain. Google may be the furthest along, though; that inventive company has developed its own image captioning model that boasts an accuracy rate of over 90% when either identifying objects or classifying actions within images.

Cynthia Murrell, July 3, 2017


Facebook May Be Exploiting Emotions of Young Audiences

June 26, 2017

Open Rights Group, a privacy advocacy group, is demanding details of a study Facebook conducted on teens, the results of which were sold to marketing companies. This might be a blatant invasion of privacy and an attempt to capitalize on the emotional distress of teens.

In a press release sent out by the Open Rights Group and titled Rights Groups Demand More Transparency over Facebook’s ‘Insights’ into Young Users, the spokesperson says:

It is incumbent upon Facebook as a cultural leader to protect, not exploit, the privacy of young people, especially when their vulnerable emotions are involved.

This is not the first time technology companies have come under heavy criticism from privacy rights groups. Facebook, through its social media platform, collects information and metrics from users, analyzes them, and sells the results to marketing companies. However, Facebook never explicitly tells users that they are being watched. Open Rights Group is demanding that this information be made public. Though there is little hope, will Facebook concede?

Vishal Ingole, June 26, 2017

What to Do about the Powerful Tech Monopolies

June 14, 2017

Traditionally, we as a country have a thing against monopolies—fair competition for the little guy and all that. Have we allowed today’s tech companies to amass too much power? That seems to be the conclusion of SiliconBeat’s article, “Google, Facebook, and Amazon: Monopolies that Should be Broken Up or Regulated?” Writer Ethan Baron summarizes these companies’ massive advantages, and the efforts of regulatory agencies to check them. He cites a New York Times article by Jonathan Taplin:

Taplin, in his op-ed, argued that Google, Facebook and Amazon ‘have stymied innovation on a broad scale.’ With industry giants facing limited competition, incumbent companies have a profound advantage over new entrants, Taplin said. And the tech firms’ explosive growth has caused massive damage to companies already operating, he said. ‘The platforms of Google and Facebook are the point of access to all media for the majority of Americans. While profits at Google, Facebook and Amazon have soared, revenues in media businesses like newspaper publishing or the music business have, since 2001, fallen by 70 percent,’ Taplin said. The rise of Google and Facebook have diverted billions of dollars from content creators to ‘owners of monopoly platforms,’ he said. All content creators dependent on advertising must negotiate with Google or Facebook as aggregator. Taplin proposed that for the three tech behemoths, there are ‘a few obvious regulations to start with.’

Taplin suggests limiting acquisitions as the first step since that is how these companies grow into such behemoths. For Google specifically, he suggests regulating it as a public utility. He also takes aim at the “safe harbor” provision of the federal Digital Millennium Copyright Act, which shields Internet companies from damages associated with intellectual property violations found on their platforms. Since the current political climate is not exactly ripe for regulation, Taplin laments that such efforts will have to wait a few years, by which time these companies will be so large that breaking them up will be the only remedy. We’ll see.

Cynthia Murrell, June 14, 2017

The Power of Context in Advertising

June 9, 2017

How’s it going with those ad-and-query matching algorithms? The Washington Post reports, “For Advertisers, Algorithms Can Lead to Unexpected Exposure on Sites Spewing Hate.” Readers may recall that earlier this year, several prominent companies pulled their advertisements from Google’s AdSense after they found them sharing space with objectionable content. Writers Elizabeth Dwoskin and Craig Timberg cite an investigation by their paper, which found the problem is widespread. (See the article for specifics.) How did we get here? The article explains:

The problem has emerged as Web advertising strategies have evolved. Advertisers sometimes choose to place their ads on particular sites — or avoid sites they dislike — but a growing share of advertising budgets go to what the industry calls ‘programmatic’ buys. These ads are aimed at people whose demographic or consumer profile is receptive to a marketing message, no matter where they browse on the Internet. Algorithms decide where to place ads, based on people’s prior Web usage, across vastly different types of sites.

The technology companies behind ad networks have slowly begun to address the issue, but they warn it won’t be easy to solve. They say their algorithms struggle to distinguish between content that is truly offensive and language that is not offensive in context. For example, it can be hard for computers to determine the difference between the use of a racial slur on a white-supremacy site and on a website about history.
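The distinction the ad networks describe can be illustrated with a toy brand-safety filter. Everything here is invented for illustration (the word lists, the sample pages, the scoring rule); real systems rely on learned classifiers rather than hand-built keyword sets:

```python
# Toy illustration of the context problem in brand-safety filtering.
BLOCKLIST = {"slur"}  # stand-in for an actual offensive term
HATE_CONTEXT = {"supremacy", "enemy", "purge"}
NEUTRAL_CONTEXT = {"history", "archive", "museum", "scholarship"}

def naive_filter(words):
    """Keyword matching: refuses ads on every page containing a blocklisted term."""
    return not BLOCKLIST & set(words)  # True means "safe to place ads"

def context_filter(words):
    """Weigh the surrounding vocabulary before blocking (the harder approach)."""
    words = set(words)
    if not BLOCKLIST & words:
        return True
    hate = len(HATE_CONTEXT & words)
    neutral = len(NEUTRAL_CONTEXT & words)
    return neutral > hate  # safe only if neutral context dominates

history_page = "the museum archive documents how the slur appears in history".split()
hate_page = "join the purge against the enemy slur supremacy".split()

print(naive_filter(history_page), naive_filter(hate_page))      # False False
print(context_filter(history_page), context_filter(hate_page))  # True False
```

The naive filter blocks the history page and the hate page alike; only the version that looks at surrounding words keeps the legitimate page eligible for ads, which is exactly why the companies say context makes the problem hard.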

To further complicate the issue, companies employing these algorithms want nothing to do with becoming “arbiters of speech.” After all, not every case is as simple as a post sporting a blatant slur in the headline; the space between hate speech and thoughtful criticism is more of a gradient than a line. Google, Facebook, et al. may not have signed up for this role, but the problem is the direct consequence of the algorithmic ad-placing model. Whether on this issue, the scourge of fake news, or the unwitting promotion of counterfeit goods, tech companies must find ways to correct the widespread consequences of their revenue strategies.

Cynthia Murrell, June 9, 2017

US Still Most Profitable for Alphabet

May 8, 2017

Alphabet, Inc., the parent company of Google, generates the most revenue from the US market. Europe, the Middle East, and Africa combined come second, with Asia Pacific occupying the third slot.

Recode, in its earnings report titled Here’s Where Alphabet Makes Its Money, says:

U.S. revenue increased 25 percent from last year to $11.8 billion. Sales from the Asia-Pacific region rose 29 percent to $3.6 billion. Revenue from Europe, the Middle East, and Africa was up 13 percent to $8.1 billion.

Despite the fact that around 61% of the world’s population lives in the Asia-Pacific region, it is surprising that Google garners most of its revenue from a mere 322 million people in the US. This can be attributed to the fact that China, which accounts for the bulk of Asia’s population, does not have access to Google or its services. India, another emerging market, though open, has yet to fully embrace the digital economy.

While the chances of the Chinese market opening up for Google are slim, India seems to be high on the radar of not only Google but also other tech majors like Apple, Amazon, Microsoft, and Facebook.

Vishal Ingole, May 8, 2017

Facebook Excitement: The Digital Country and Kids

May 4, 2017

I read “Facebook Admits Oversight after Leak Reveals Internal Research On Vulnerable Children.” The write up reports that an Australian newspaper:

reported that Facebook executives in Australia used algorithms to collect data on more than six million young people in Australia and New Zealand, “indicating moments when young people need a confidence boost.”


The idea one or more Facebook professionals had strikes me as one with potential. If an online service can identify a person’s moment of weakness, that online service could deliver content designed to leverage that insight. The article said:

The data analysis — marked “Confidential: Internal Only” — was intended to reveal when young people feel “worthless” or “insecure,” thus creating a potential opening for specific marketing messages, according to The Australian. The newspaper said this case of data mining could violate Australia’s legal standards for advertising and marketing to children.

Not surprisingly, the “real” journalism said:

“Facebook has an established process to review the research we perform,” the statement continued. “This research did not follow that process, and we are reviewing the details to correct the oversight.”

When Facebook seemed to be filtering advertising based on race, Facebook said:

“Discriminatory advertising has no place on Facebook.”

My reaction to this revelation is, “What? This type of content shaping is news?”

My hunch is that some folks forget that when advertisers suggest one has a lousy complexion, particularly a disfiguring rash, the entire point is to dig at insecurities. When I buy the book Flow for a friend, I suddenly get lots of psycho-babble recommendations from Amazon.

Facebook, like any other sales-oriented and ad-hungry outfit, is going to push as many psychological buttons as possible to generate revenue. I have a hypothesis that the dependence some people have on Facebook “success” is part of the online business model.

What’s the fix?

“Fix” is a good word. The answer is, “More social dependence.”

In my experience, drug dealers do not do intervention. The customer keeps coming back until he or she doesn’t.

Enforcement seems to be a hit-and-miss solution. Intervention makes some Hollywood types oodles of money in reality programming. Social welfare programs slump into bureaucratic floundering.

Could it be that online dependence is a cultural phenomenon? Facebook is in the right place at the right time. Technology makes it easy to refine messages for maximum financial value.

Interesting challenge, and the thrashing about for a “fix” will be fascinating to watch. Perhaps the events will be live streamed on Facebook? That may provide a boost in confidence to Facebook users and to advertisers. Win-win.

Stephen E Arnold, May 4, 2017
