Wave of Fake News Proves the Need for Humans in Tech
October 20, 2017
We are often the first to praise the ingenious algorithms and tools that utilize big data and search muscle for good. But we are also one of the first to admit when things need to be scaled back a bit. The current news climate makes a perfect argument for that, as we discovered in a fascinating Yahoo! Finance piece, “Fake News is Still Here, Despite Efforts by Google and Facebook.”
The article lays out the ways that search giants like Google and social media outlets like Facebook have failed to stop the flood of fake news. Despite the world’s sharpest algorithms and computer programs, they can’t seem to curb the onslaught of odd news.
The article wisely points out that it is not a computer problem anymore, but, instead, a human one. The solution is proving to be deceptively simple: human interaction.
Facebook said last week that it would hire an extra 1,000 people to help vet ads after it found a Russian agency bought ads meant to influence last year’s election. It’s also subjecting potentially sensitive ads, including political messages, to ‘human review.’
In July, Google revamped guidelines for human workers who help rate search results in order to limit misleading and offensive material. Earlier this year, Google also allowed users to flag so-called ‘featured snippets’ and ‘autocomplete’ suggestions if they found the content harmful.
Bravo, we say. There is a limit to what high-powered search and big data can do. Sometimes it feels as if those horizons are limitless, but there is still a home for humans, and that is a good thing. A balance of big data and beating human hearts seems like the best way to solve the fake news problem and perhaps many others out there.
Patrick Roland, October 20, 2017
Social Media Should Be Social News
October 18, 2017
People are reading news more than ever due to easy access to information on the Internet. While literacy rates soar, where people read news stories has shifted from traditional news outlets to something comparatively newer and quite questionable. According to Pew Research, “News Use Across Social Media Platforms 2017,” people are obtaining their news stories from social media platforms like Facebook, Twitter, Instagram, and others. The Pew Research survey discovered that 67% of Americans get some of their news from social media, up from 62% in 2016. The growth comes from people who are older, nonwhite, and less educated. That is an interesting statistic about American social groups:
Furthermore, about three-quarters of nonwhites (74%) get news on social media sites, up from 64% in 2016. This growth means that nonwhites are now more likely than whites to get news while on social media. And social media news use also increased among those with less than a bachelor’s degree, up nine percentage points from 60% in 2016 to 69% in 2017. Alternatively, among those with at least a college degree, social media news use declined slightly.
The information differs from what Pew Research has recorded in the past, and there are two ways to interpret the data: compare the share of each social media site’s users that get news on that specific Web site, or compare the total percentage of Americans that get news on social media sites. Twitter, Snapchat, and YouTube show significant growth in user-shared news, and these gains directly correspond to investments the companies made in developing their usability. Facebook remains the number one social media Web site that distributes news, while YouTube is a close second. The data also shows that users visit multiple social media sites to read the news, but that they rely on traditional news platforms as well.
Social media is a major component of how people communicate with the world around them. Perhaps traditional news outlets should look at ways to incorporate themselves more into social media. Will Facebook, YouTube, and/or Twitter hire journalists in the future?
Whitney Grace, October 18, 2017
Veteran Web Researcher Speaks on Bias and Misinformation
October 10, 2017
The CTO of semantic search firm Ntent, Dr. Ricardo Baeza-Yates, has been studying the Web since its inception. In its post, “Fake News and the Power of Algorithms: Dr. Ricardo Baeza-Yates Weighs In With Futurezone at the Vienna Gödel Lecture,” Ntent shares his take on biases online by reproducing an interview Baeza-Yates gave Futurezone at the Vienna Gödel Lecture 2017, where he was the featured speaker. When asked about the consequences of false information spread far and wide, the esteemed CTO cited two pivotal events from 2016, Brexit and the US presidential election.
These were manipulated by social media. I do not mean by hackers – which cannot be excluded – but by social biases. The politicians and the media are in the game together. For example, a non-Muslim attack may be less likely to make the front page or earn high viewing ratings. How can we minimize the amount of biased information that appears? It is a problem that affects us all.
One might try to make sure people get a more balanced presentation of information. Currently, it’s often the media and politicians that cry out loudest for truth. But could there be truth in this context at all? Truth should be the basis but there is usually more than one definition of truth. If 80 percent of people see yellow as blue, should we change the term? When it comes to media and politics the majority can create facts. Hence, humans are sometimes like lemmings. Universal values could be a possible common basis, but they are increasingly under pressure from politics, as Theresa May recently stated in her attempt to change the Magna Carta in the name of security. As history already tells us, politicians can be dangerous.
Indeed. The biases that concern Baeza-Yates go beyond those that spread fake news, though. He begins by describing presentation bias—the fact that one’s choices are limited to that which suppliers have, for their own reasons, made available. Online, “filter bubbles” compound this issue. Of course, Web search engines magnify any biases—their top results provide journalists with research fodder, the perceived relevance of which is compounded when that journalist’s work is published; results that appear later in the list get ignored, which pushes them yet further from common consideration.
Ntent is working on ways to bring folks with different viewpoints together on topics on which they do agree; Baeza-Yates admits the approach has its limitations, especially on the big issues. What we really need, he asserts, is journalism that is bias-neutral instead of polarized. How we get there from here, even Baeza-Yates can only speculate.
Cynthia Murrell, October 10, 2017
Addicted Teens! Facebook Help Them!
October 6, 2017
I read “Teens Rebelling Against Social Media, Say Headteachers.” Poor social media giants, one might say. Yeah, right. Real news, real facts, real phase change.
Decide for yourself.
The main point of the write up is that teens need a “detox” and are going cold turkey to cope with withdrawal symptoms.
I noted this passage:
Chris King, chair of the HMC and Headmaster of Leicester Grammar School, said the findings were among “the first indications of a rebellion against social media”. He said they remind us that teenagers “may need help to take breaks from [social media’s] constant demands”. Some 56% of those surveyed said they were on the edge of addiction.
Hmm. Edge of addiction.
I circled this statement which was obviously based on “facts”:
Almost two-thirds of schoolchildren would not mind if social media had never been invented, research suggests.
I wonder if BBC professionals have ripped mobile devices from the addicted clutches of their own children?
Doubtful. Who wants a teen sulking and amping up the annoyance in a modern household?
Not me. Log on. Be happy. See I am not asking questions about methodology, analysis, and statistical validity. Gotta run. I have to check my social media feeds.
Stephen E Arnold, October 6, 2017
The Future of Visual and Voice Search
October 4, 2017
From the perspective of the digital marketers they are, GeoMarketing ponders, “How Will Visual and Voice Search Evolve?” Writer David Kaplan consulted Bing Ads’ Purna Virji on what to expect going forward. For example, though companies are not yet doing much to monetize visual search, Virji says that could change as AIs continue to improve their image-recognition abilities. She also emphasizes the potential of visual search for product discovery—if, for example, someone can locate and buy a pair of shoes just by snapping a picture of a stranger’s feet, sales should benefit handsomely. Virji had this to say about traditional, voice, and image search functionalities working together:
A prediction that Andrew Ng had made when he was still with Baidu was that ‘by 2020, 50 percent of all search will be image or voice.’ Typing will likely never go away. But now, we have more options. Just like mobile didn’t kill the desktop, apps didn’t kill the browser, the mix of visual, voice, and text will combine in ways that are natural extensions of user behavior. We’ll use those tools depending on the specific need and situation at the moment. For example, you could ‘show’ Cortana a picture of a dress in a magazine via your phone camera and say ‘Hey Cortana, I’d love to buy a dress like this,’ and she can go find where to buy it online. In this way, you used voice and images to find what you were looking for.
The interview also touches on the impact of visual search on local marketing and how its growing use in social media offers data analysts a wealth of targeted-advertising potential.
Cynthia Murrell, October 4, 2017
Facebook: A Pioneer in Bro-giveness?
October 2, 2017
The write up “Mark Zuckerberg Asks for Forgiveness from ‘Those I Hurt This Year’ in Yom Kippur Message” surprised me. In my brief encounters with Silicon Valley “bros”, I cannot recall too many apologies or apologetic moments. My first thought was, “Short circuit somewhere.”
The Verge article explained to me: [Mark Zuckerberg, founder of Facebook] publicly asked for forgiveness from “those I hurt this year.”
I thought online companies were like utilities. Who gets excited if a water main break drowns an elderly person’s parakeet? Who laments when a utility pole short circuits a squirrel? Who worries if an algorithm tries to sell me an iPhone when I am an Android-type senior citizen?
I noted this statement:
Zuckerberg acknowledged that Facebook has had a divisive effect on the country, and that he’ll work to do better in the coming year.
I like New Year’s resolutions.
The write up quotes another Silicon Valley source which I sometimes associate with enthusiasm for what’s new and “important”:
Facebook itself needs to do better to improve its efforts in combating the spread of false information and abuse that appears throughout its platform. It and other social media sites have often touted themselves as neutral platforms for all ideas and beliefs, but underestimate how these ideals can be undermined, which led to tangible impacts in the real world. Zuckerberg may be sincere in his intentions, but the company he founded needs to follow through on them.
Follow through? Okay.
I think of this commitment to do better as the Silicon Valley equivalent of the New Yorker’s breezy, “Let’s have lunch.”
Is bro-giveness a disruptive approach to forgiveness? If it is, click the Like button.
Stephen E Arnold, October 2, 2017
Natural Language Processing for Facebook Messenger
September 15, 2017
In its continuing effort to evolve from a basic networking site to a platform for services, Facebook is making Messenger smarter. Silicon reports, “Facebook Bakes Natural Language Processing Into Messenger Platform 2.1.” The inclusion allows developers to create more functionality for organizations that wish to conduct chatbot-based business through Facebook Messenger itself, without having to utilize another site or app. Reporter Roland Moore-Colyer quotes Facebook’s Vivien Tong as he writes:
‘This first version can detect the following entities [within users’ messages]: hello, bye, thanks, date & time, location, amount of money, phone number, email and a URL. This is the first step in bringing NLP capabilities to all developers, enabling brands to scale their experiences on Messenger.’
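In practice, those detections arrive as structured data attached to each incoming message, and the developer’s webhook decides how to respond to them. The snippet below is a rough sketch of that branching logic; the payload layout (a message["nlp"]["entities"] dictionary in Wit.ai’s output style), the entity names, and the confidence threshold are assumptions to be checked against Facebook’s current Messenger Platform documentation rather than a verbatim reproduction of its API.

```python
# Rough sketch: routing a Messenger message using built-in NLP entities.
# The payload shape (message["nlp"]["entities"]) and entity names follow
# Wit.ai-style output as described in press coverage; verify them against
# the current Messenger Platform documentation before relying on them.

def route_message(message: dict) -> str:
    entities = message.get("nlp", {}).get("entities", {})

    def best(name: str, min_confidence: float = 0.8):
        """Return the highest-confidence detection for an entity, if any."""
        candidates = [c for c in entities.get(name, [])
                      if c.get("confidence", 0) >= min_confidence]
        return max(candidates, key=lambda c: c["confidence"], default=None)

    if best("greetings"):
        return "Hi there! How can we help you today?"
    if best("datetime"):
        return "Got it, let me check availability for that time."
    if best("amount_of_money"):
        return "Thanks, passing your billing question to a human agent."
    return "Sorry, I did not catch that. A human will follow up shortly."

# Illustrative webhook fragment with a detected greeting entity.
incoming = {"text": "hello",
            "nlp": {"entities": {"greetings": [{"confidence": 0.99, "value": "true"}]}}}
print(route_message(incoming))
```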
The natural language processing capabilities come courtesy of Wit.ai, a company Facebook acquired back in 2015; its services have been available to developers for some time, but were not made native to the Messenger Platform until its latest iteration. Alongside the built-in natural language processing, the overhauled Messenger Platform contains software development kits that let developers easily integrate payment services into Messenger and make it easier to switch customer conversations from automated chatbots to human customer service agents.
Ah, yes, payment services are crucial, and being able to reach a real person is a sanity-saver (and a client-keeper.) Moore-Colyer notes this development is one in a series of advances for Messenger, and that Facebook’s embrace of smart tech extends to fighting terrorism within its platform.
Cynthia Murrell, September 15, 2017
Instagram Algorithm to Recognize Cruelty and Kindness
September 14, 2017
Instagram is using machine learning to make its platform a kinder place, we learn from the CBS News article, “How Instagram is Filtering Out Hate.” Contributor (and Wired Editor-In-Chief) Nick Thompson interviewed Instagram’s CEO Kevin Systrom, and learned the company is using about 20 humans to teach its algorithm to distinguish naughty from nice. The article relates:
Systrom has made it his mission to make kindness itself the theme of Instagram through two new phases: first, eliminating toxic comments, a feature that launched this summer; and second, elevating nice comments, which will roll out later this year. ‘Our unique situation in the world is that we have this giant community that wants to express themselves,’ Systrom said. ‘Can we have an environment where they feel comfortable to do that?’ Thompson told ‘CBS This Morning’ that the process of ‘machine learning’ involves teaching the program how to decide what comments are mean or ‘toxic’ by feeding in thousands of comments and then rating them.
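What Systrom and Thompson describe is, at bottom, supervised text classification: humans rate example comments, a model learns from those ratings, and new comments are scored against what it has learned. The sketch below illustrates that pattern with scikit-learn and a handful of invented comments; Instagram’s actual training data, features, model, and thresholds are not public.

```python
# Minimal sketch of the human-rated comment classifier described above.
# The comments, labels, and threshold are invented; Instagram's real
# training data, features, and model are not public.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Step 1: humans rate example comments (1 = toxic, 0 = acceptable).
labeled_comments = [
    ("you are a genius, love this photo", 0),
    ("what a beautiful view, thanks for sharing", 0),
    ("nobody wants to see this garbage", 1),
    ("delete your account, you idiot", 1),
]
texts, labels = zip(*labeled_comments)

# Step 2: fit a simple text classifier on the rated examples.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Step 3: score new comments and hide those above a chosen threshold.
def is_toxic(comment: str, threshold: float = 0.5) -> bool:
    return model.predict_proba([comment])[0][1] >= threshold

print(is_toxic("delete this garbage, you idiot"))   # likely flagged
print(is_toxic("thanks for sharing, lovely shot"))  # likely allowed
```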
It is smarter censorship, if you will. Systrom seems comfortable embracing a little censorship in favor of kindness, and we sympathize; “trolls” are a real problem, after all. Still, the technology could, theoretically, be used to delete or elevate certain ideological or political content. The line between censoring and not censoring is a fine and important one, and those who manage social media sites are the ones who must walk it. No pressure.
Cynthia Murrell, September 14, 2017
Let the Tweets Lead Your Marketing, Come What May
September 14, 2017
It seems that sales and marketing departments just can’t keep up with consumer patterns and behaviors. The latest example of this is explained in a DMA article outlining how to utilize social media to reach target leads. As people rely more on their own search and online acumen and less on professionals (IRL), marketing has to adjust.
Aseem Badshah, founder and CEO of Socedo, explains the problem and a possible solution:
Traditionally, B2B marketers created content based on the products they want to promote. Now that so much of the B2B decision making process occurs online, content has to be more customer-centric. The current set of website analytics tools provide some insights, but only on the audience who have already reached your website. Intent data from social media can help you make your content more relevant. By analyzing social media signals and looking at which signals are picking up in volume over time, you can gain new insights into your audience that helps you create more relevant content.
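The “signals picking up in volume over time” idea boils down to counting mentions of each keyword or hashtag per period and flagging the terms whose counts are growing. Here is a minimal sketch of that calculation; the keywords, counts, and growth threshold are invented for illustration, and Socedo’s actual scoring method is not described in the article.

```python
# Minimal sketch of trend detection over social media signals.
# The observations and the 1.5x growth threshold are illustrative only.
from collections import Counter, defaultdict

# (week_number, keyword) pairs harvested from social posts -- hypothetical sample.
observations = [
    (1, "#devops"), (1, "#devops"), (1, "#bigdata"),
    (2, "#devops"), (2, "#bigdata"), (2, "#bigdata"),
    (3, "#devops"), (3, "#bigdata"), (3, "#bigdata"), (3, "#bigdata"),
]

weekly_counts = defaultdict(Counter)
for week, keyword in observations:
    weekly_counts[week][keyword] += 1

def trending(keyword: str, recent_week: int, prior_week: int, factor: float = 1.5) -> bool:
    """Flag a keyword whose recent volume is at least `factor` times the prior week's."""
    recent = weekly_counts[recent_week][keyword]
    prior = weekly_counts[prior_week][keyword]
    return prior > 0 and recent / prior >= factor

for kw in ("#devops", "#bigdata"):
    print(kw, "picking up" if trending(kw, recent_week=3, prior_week=2) else "flat")
```

On this toy data, “#bigdata” is flagged as picking up (two mentions in week 2, three in week 3) while “#devops” stays flat; a marketer would then steer content toward the rising signal.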
While everything Badshah says may be true, one has to ask: is following the masses always a good thing? If a business wants to maintain its integrity in its field, would it be in its best interest to follow the lead of its target demographic’s hashtags, or to work harder at marketing its product or service despite the apparent Twitter-provided disinterest?
Catherine Lamsfuss, September 14, 2017
Google Innovation Convoluted to Many
September 7, 2017
In a race against time, Google seems to be struggling to keep up with Apple in many categories, messaging and video chat to name just two. A recent Phandroid article called out Google on its multiple fails over the years in its quest to overtake Apple.
The primary criticism is Google’s lack of a comparable messaging system. As the article explains,
Right now, Google’s solution for handling messaging for the average user is looking a lot like the early 90s landscape for all those competing messaging services. But at least those services were competing with one another. Google’s messaging services cannibalize one another as Google meanders down its course of attempting to find an iMessage solution in the wake of its upheavals.
Although the folks at Phandroid do make good points about Google’s identity crisis, they leave out many other innovations that, though possibly missteps, are moving things forward. One such development is the introduction of YouTube Messenger, which might seem redundant to many but also answers many of the problems mentioned by Phandroid.
Catherine Lamsfuss, September 7, 2017