Are You Really Beefing up Your Search Skills?

January 18, 2017

Everyone’s New Year’s resolution is usually to lose weight, and by the time January swings around again, that resolution has gone out the door with the spring cleaning.  Exercise can be a challenge, but you can always exercise your search skills by reading Medium’s article, “Google Search Tricks To Become A Search Power User.”  Or at least the article promises to improve your search skills.

Let’s face it, searching on the Web might seem simple, but it requires a little more brainpower than dumping keywords into a search box.  Google makes searching easier and is even the Swiss army knife of answering basic questions.  The Medium article does go a step further by drawing on old-school search tips, such as the asterisk, quotes, parentheses, and others.  These explanations, however, need to be read more than once to understand how the tools work:

My favorite of all, single word followed by a ‘*’ will do wonders. But yeah this will not narrow your results; still it keeps a wider range of search results. You’ll need to fine tune to find exactly what you want. This way is useful in case when you don’t remember more than a word or two but you still you want to search fully of it.
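
For reference, the old-school operators the article alludes to behave roughly as follows (the sample queries here are our own illustrations, not the article’s): wrapping a phrase in quotes, as in “search power user”, forces an exact-phrase match; an asterisk stands in for a missing word, so “google * tricks” matches “google search tricks” as well as “google translation tricks”; a minus sign excludes a term, as in search tips -medium; and OR broadens a query, as in “search tips” OR “search tricks”.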

Having used some of these tips myself, I find they actually make searching more complicated than simply taking a little extra time to read the search results.  I am surprised that the article did not include the traditional Boolean operators that usually work, more or less.  Sometimes search tips cause more trouble than they are worth.

Whitney Grace, January 18, 2017

Big Data Is a Big Mess

January 18, 2017

Big Data and Cloud Computing were supposed to make it easier for the C-suite to make billion-dollar decisions. But it seems things have started to fall apart.

In an article published by Forbes titled “The Data Warehouse Has Failed, Will Cloud Computing Die Next?”, the author says:

A company that sells software tools designed to put intelligence controls into data warehousing environments says that traditional data warehousing approaches are flaky. Is this just a platform to spin WhereScape wares, or does Whitehead have a point?

WhereScape, a key player in data warehousing, is admitting that the buzzwords of the IT industry are fizzling out. Big Data is being generated in abundance, but companies are still unsure what to do with the enormous amount of data they produce.

Large corporations that have already invested heavily in Big Data have yet to see any ROI. As the author points out:

Data led organizations have no idea how good their data is. CEOs have no idea where the data they get actually comes from, who is responsible for it etc. yet they make multi million pound decisions based on it. Big data is making the situation worse not better.

It looks like, after 3D printing, Big Data and Cloud Computing may be the next tech-world buzzwords to fizzle out.

Vishal Ingole, January 18, 2017

Chinese Censorship Agency Declares All News Online Must Be Verified

January 12, 2017

The heavy hand of Chinese censorship has just gotten heavier. The South China Morning Post reports, “All News Stories Must Be Verified, China’s Internet Censor Decrees as it Tightens Grip on Online Media.” The censorship agency now warns websites not to publish news without “proper verification.” Of course, to hear the government tell it, it just wants to cut down on fake news and false information. Reporter Choi Chi-yuk writes:

The instruction, issued by the Cyberspace Administration of China, came only a few days after Xu Lin, formerly the deputy head of the organisation, replaced his boss, Lu Wei, as the top gatekeeper of Chinese internet affairs. Xu is regarded as one of President Xi Jinping’s key supporters.

The cyberspace watchdog said online media could not report any news taken from social media websites without approval. ‘All websites should bear the key responsibility to further streamline the course of reporting and publishing of news, and set up a sound internal monitoring mechanism among all mobile news portals [and the social media chat websites] Weibo or WeChat,’ Xinhua reported the directive as saying. ‘It is forbidden to use hearsay to create news or use conjecture and imagination to distort the facts,’ it said.

We’re told the central agency has directed regional offices to aggressively monitor content and “severely” punish those who post what they consider false news. They also insist that sources be named within posts. Apparently, several popular news portals have been rebuked under the policy, including Sina.com, Ifeng.com, Caijing.com.cn, Qq.com and 163.com.

Cynthia Murrell, January 12, 2017

Google Looks to Curb Hate Speech with Jigsaw

January 6, 2017

No matter how advanced technology becomes, certain questions continue to vex us. For example, where is the line between silencing expression and prohibiting abuse? Wired examines Google’s efforts to walk that line in its article, “Google’s Digital Justice League: How Its Jigsaw Projects are Hunting Down Online Trolls.” Reporter Merjin Hos begins by sketching the growing problem of online harassment and the real-world turmoil it creates, arguing that rampant trolling serves as a sort of censorship — silencing many voices through fear. Jigsaw, a project from Google, aims to automatically filter out online hate speech and harassment. As Jared Cohen, Jigsaw founder and president, put it, “I want to use the best technology we have at our disposal to begin to take on trolling and other nefarious tactics that give hostile voices disproportionate weight, to do everything we can to level the playing field.”

The extensive article also delves into Cohen’s history, the genesis of Jigsaw, how the team is teaching its AI to identify harassment, and problems they have encountered thus far. It is an informative read for anyone interested in the topic.

Hos describes how the Jigsaw team has gone about instructing their algorithm:

The group partnered with The New York Times (NYT), which gave Jigsaw’s engineers 17 million comments from NYT stories, along with data about which of those comments were flagged as inappropriate by moderators.

Jigsaw also worked with the Wikimedia Foundation to parse 130,000 snippets of discussion around Wikipedia pages. It showed those text strings to panels of ten people recruited randomly from the CrowdFlower crowdsourcing service and asked whether they found each snippet to represent a ‘personal attack’ or ‘harassment’. Jigsaw then fed the massive corpus of online conversation and human evaluations into Google’s open source machine learning software, TensorFlow. …

By some measures Jigsaw has now trained Conversation AI to spot toxic language with impressive accuracy. Feed a string of text into its Wikipedia harassment-detection engine and it can, with what Google describes as more than 92 per cent certainty and a ten per cent false-positive rate, come up with a judgment that matches a human test panel as to whether that line represents an attack.
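
The passage above describes a standard supervised text-classification pipeline: gather snippets, have humans label which ones are attacks, and fit a model that maps new text to an attack probability. As a rough sketch only (the comments, labels, and model shape below are invented for illustration and are not Jigsaw’s Conversation AI code), the skeleton looks something like this in TensorFlow’s Keras API:

# Minimal sketch of a toxicity classifier: human-labeled snippets in,
# attack probability out. Data and architecture are illustrative only.
import tensorflow as tf

comments = [
    "thanks for the helpful edit",          # invented benign example
    "you are an idiot, go away",            # invented attack example
    "could you cite a source for this?",
    "nobody wants you here, get lost",
]
labels = [0.0, 1.0, 0.0, 1.0]               # 1 = flagged as a personal attack

# Map raw strings to integer sequences, then learn an embedding over them.
vectorizer = tf.keras.layers.TextVectorization(max_tokens=10000, output_sequence_length=50)
vectorizer.adapt(comments)

model = tf.keras.Sequential([
    vectorizer,
    tf.keras.layers.Embedding(input_dim=10000, output_dim=16),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # probability the text is an attack
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(tf.constant(comments), tf.constant(labels), epochs=20, verbose=0)

# Score a new string; thresholding this probability gives the yes/no judgment
# that gets compared against the human panel.
print(model.predict(tf.constant(["you write like a moron"]))[0][0])

The hard part, of course, is not the model skeleton but the millions of labeled examples and the choice of threshold, which is what the 92 per cent agreement and ten per cent false-positive figures above are describing.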

There is still much to be done, but soon Wikipedia and the New York Times will be implementing Jigsaw, at least on a limited basis. At first, the AI’s judgments will be checked by humans. This is important, partially because the software still returns some false positives—an inadvertent but highly problematic overstep. Though a perfect solution may be impossible, it is encouraging to know Jigsaw’s leader understands how tough it will be to balance protection with freedom of expression. “We don’t claim to have all the answers,” Cohen emphasizes.

Cynthia Murrell, January 6, 2017

In Pursuit of Better News Online

December 20, 2016

Since the death of what we used to call “newspapers,” Facebook and Twitter have been gradually encroaching on the news business. In fact, Facebook recently faced criticism for the ways it has managed its Trending news stories. Now, the two social media firms seem to be taking responsibility for their roles, having joined an alliance of organizations committed to more competent news delivery. The write-up, “Facebook, Twitter Join Coalition to Improve Online News” at Yahoo News informs us about the initiative:

First Draft News, which is backed by Google [specifically Google News Lab], announced Tuesday that some 20 news organizations will be part of its partner network to share information on best practices for journalism in the online age. Jenni Sargent, managing director of First Draft, said the partner network will help advance the organization’s goal of improving news online and on social networks.

‘Filtering out false information can be hard. Even if news organizations only share fact-checked and verified stories, everyone is a publisher and a potential source,’ she said in a blog post. ‘We are not going to solve these problems overnight, but we’re certainly not going to solve them as individual organizations.’

Sargent said the coalition will develop training programs and ‘a collaborative verification platform,’ as well as a voluntary code of practice for online news.

We’re told First Draft has been pursuing several projects since it was launched last year, like working with YouTube to verify user-generated videos. The article shares their list of participants; it includes news organizations from the New York Times to BuzzFeed, as well as other interested parties, like Amnesty International and the International Fact-Checking Network. Will this coalition succeed in restoring the public’s trust in our news sources? We can hope.

Cynthia Murrell, December 20, 2016

Use Google on Itself to Search Your Personal Gmail Account

December 16, 2016

The article titled “9 Secret Google Search Tricks” on Field Guide includes a shortcut to checking on your current and recent deliveries, your flight plans, and your hotels. Google provides this information by pulling keywords from your Gmail account inbox. Perhaps the best one for convenience is searching “my bills” and being reminded of upcoming payments. Of course, this won’t work for bills that you receive via snail mail. The article explains,

Google is your portal to everything out there on the World Wide Web…but also your portal to more and more of your personal stuff, from the location of your phone to the location of your Amazon delivery. If you’re signed into the Google search page, and you use other Google services, here are nine search tricks worth knowing. It probably goes without saying but just in case: only you can see these results.

Yes, search is getting easier. Trust Mother Google. She will hold all your information in her hand and you just need to ask for it. Other tricks include searching “I’ve lost my phone.” Google might not be Find My iPhone, but it can tell you the last place you had your phone, provided that your phone was linked to your Google account. Hotels, events, photos: Google will have your back.

Chelsea Kerwin, December 16, 2016

Algorithmic Selling on Amazon Spells Buyer Beware

December 12, 2016

The article on Science Daily titled “Amazon Might Not Always Be Pitching You the Best Prices, Researchers Find” unveils the stacked deck that Amazon has created for sellers. Amazon rewards sellers who use automated algorithmic pricing by more often featuring those sellers’ items in the buy box, the more prominent and visible display. So what is algorithmic pricing, exactly? The article explains,

For a fee, any one of Amazon’s more than 2 million third-party sellers can easily subscribe to an automated pricing service…They then set up a pricing strategy by choosing from a menu of options like these: Find the lowest price offered and go above it (or below it) by X dollars or Y percentage, find Amazon’s own price for the item and adjust up or down relative to it, and so on. The service does the rest.
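
Those menu options are simple arithmetic rules, which is part of the point: a few lines of code can re-price an entire catalog around the clock. Here is a minimal sketch of one such rule (the function, numbers, and floor-price safeguard are our own illustration, not any actual repricing service’s API):

# Illustrative repricing rule: undercut the lowest rival offer by a fixed
# amount, but never drop below a floor price. Invented example, not a real API.
def reprice(competing_offers, floor_price, undercut_by=0.50):
    lowest_rival = min(competing_offers)
    return max(lowest_rival - undercut_by, floor_price)

# Rivals list the item at $24.99, $22.50, and $27.00; our floor is $20.00.
print(reprice([24.99, 22.50, 27.00], floor_price=20.00))   # prints 22.0

Run a rule like this every few minutes across thousands of listings and prices shift constantly, which helps explain why the first offer a shopper sees is not necessarily the best one.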

For the consumer, this means that searching on Amazon won’t necessarily produce the best value (at first click, anyway). It may be a mere dollar’s difference, but it could also be a more significant gap of between $20 and $60. What is really startling is that although these “algo sellers” make up less than 10% of third-party sellers, they account for close to a third of the best-selling products. If you take anything away from this article, let it be that what Amazon is showing you first might not be the best price, so always do your research!

Chelsea Kerwin, December 12, 2016

The Information Not Accuracy Age

December 7, 2016

The impact of Google on our lives is clear from the company’s name being used colloquially as a verb. However, Quantum Run quantifies that impact in its piece called “All hail Google.” Google owns 80% of the smartphone market, with over a billion Android devices. Gmail tallies 420 million users and Chrome has 800 million. Also, YouTube, which Google owns, has one billion users. An interesting factoid the article pairs with these stats is that 94% of students equate Google with research. The article notes:

The American Medical association voices their concerns over relying on search engines, saying, “Our concern is the accuracy and trustworthiness of content that ranks well in Google and other search engines.” Only 40 percent of teachers say their students are good at assessing the quality and accuracy of information they find via online research. And as for the teachers themselves, only five percent say ‘all/almost all’ of the information they find via search engines is trustworthy — far less than the 28 percent of all adults who say the same.

Apparently, cyberchondria is a thing. The article correctly points to the content housed on the deep web and the Dark Web as untouched by Google. The major question sparked by this article: can we trust the validity of all the fancy numbers Quantum Run has reported?

Megan Feil, December 7, 2016

Google Search Results Are Politically Biased

December 7, 2016

Google search results are supposed to be objective and accurate.  The key word in that sentence is objective, but studies have shown that algorithms can be just as biased as the humans who design them.  One would think that Google, one of the most popular search engines in the world, would have figured out how to program objective algorithms, but according to the International Business Times, “Google Search Results Tend To Have Liberal Bias That Could Influence Public Opinion.”

Did you ever hear Uncle Ben’s advice to Spider-Man, “With great power comes great responsibility”?  This advice rings true for big corporations, such as Google, that influence public opinion.  CanIRank.com conducted a study that discovered searches using political terms displayed more pages with a liberal view than a conservative one.  What does Google have to say about it?

The Alphabet-owned company has denied any bias and told the Wall Street Journal: ‘From the beginning, our approach to search has been to provide the most relevant answers and results to our users, and it would undermine people’s trust in our results, and our company, if we were to change course.’  The company maintains that its search results are based on algorithms using hundreds of factors which reflect the content and information available on the Internet. Google has never made its algorithm for determining search results completely public even though over the years researchers have tried to put their reasoning to it.

This is not the first time Google has been accused of a liberal bias in its search results.  The consensus is that the liberal leanings are unintentional and are simply a reflection of the amount of liberal content on the Web.

What is the truth?  Only the Google gods know.

Whitney Grace, December 7, 2016

Physiognomy for the Modern Age

December 6, 2016

Years ago, when I first learned about the Victorian-age pseudosciences of physiognomy and phrenology, I remember thinking how glad I was that society had evolved past such nonsense. It appears I was mistaken; the basic concept was just waiting for technology to evolve before popping back up, we learn from NakedSecurity’s article, “’Faception’ Software Claims It Can Spot Terrorists, Pedophiles, Great Poker Players.”  Based in Israel, Faception calls its technique “facial personality profiling.” Writer Lisa Vaas reports:

The Israeli startup says it can take one look at you and recognize facial traits undetectable to the human eye: traits that help to identify whether you’ve got the face of an expert poker player, a genius, an academic, a pedophile or a terrorist. The startup sees great potential in machine learning to detect the bad guys, claiming that it’s built 15 classifiers to evaluate certain traits with 80% accuracy. … Faception has reportedly signed a contract with a homeland security agency in the US to help identify terrorists.

The article emphasizes how problematic it can be to rely on AI systems to draw conclusions, citing University of Washington professor and “Master Algorithm” author Pedro Domingos:

As he told The Washington Post, a colleague of his had trained a computer system to tell the difference between dogs and wolves. It did great. It achieved nearly 100% accuracy. But as it turned out, the computer wasn’t sussing out barely perceptible canine distinctions. It was just looking for snow. All of the wolf photos featured snow in the background, whereas none of the dog pictures did. A system, in other words, might come to the right conclusions, for all the wrong reasons.
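
To make Domingos’ point concrete, here is a tiny synthetic sketch of the same failure mode (the data, feature names, and numbers are invented for illustration):

# Toy demonstration of "snow, not wolves": the label is tracked almost
# perfectly by a spurious background feature, so the classifier looks
# accurate while learning nothing about the animal itself.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
animal_traits = rng.normal(size=(n, 5))          # stand-ins for genuine canine features
label = rng.integers(0, 2, size=n)               # 0 = dog, 1 = wolf
snow = label + rng.normal(scale=0.01, size=n)    # "snow in background" follows the label

X = np.column_stack([animal_traits, snow])
clf = LogisticRegression().fit(X, label)
print(clf.score(X, label))                       # near-perfect accuracy...
print(clf.coef_[0])                              # ...with nearly all weight on the snow column

A system like this passes every test built from the same kind of photos, then falls over the moment a wolf shows up on green grass.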

Indeed. Faception suggests that, for this reason, their software would be but one factor among many in any collection of evidence. And, perhaps it would—for most cases, most of the time. We join Vaas in her hope that government agencies will ultimately refuse to buy into this modern twist on Victorian-age pseudoscience.

Cynthia Murrell, December 6, 2016
