Google's Bid for AI Dominance

December 14, 2016

Google's dominance over our digital lives cannot be disputed. The tech giant envisions Artificial Intelligence (AI) as the future of computing, and the search engine leader is all set to dominate it once again.

In a feature article in Arabian Business titled Inside Google’s Brave New World, the author says:

The $500bn technology giant is extending its reach into hardware and artificial intelligence, ultimately aiming to create a sophisticated robot that can communicate with smart-device users to get things done.

These efforts can be seen in the company’s restructuring and in its focus on developing products and hardware that can host its sophisticated AI-powered algorithms. From wearable devices to in-home products like Google Home, the company is not only writing powerful algorithms to answer user queries but is also building the hardware that will seamlessly integrate with the AI.

These advances might mean more revenue for the company and its shareholders, but with Google controlling every aspect of our working lives, the company also needs to address privacy concerns with equal zeal. As the author points out:

However, with this comes huge responsibility and a host of ethical and other policy issues such as data privacy and cybersecurity, which Google says its teams are working to resolve on a day-to-day basis.

Apart from Google, other tech companies like Amazon, Microsoft, Facebook, and Apple are also in the race for AI dominance. However, the privacy concerns remain there too, as the end user never knows how and where the collected data will be used.

Vishal Ingole, December 14, 2016

Smart Software and Bias: Math Is Not Objective, Right?

December 12, 2016

I read “5 Unexpected Sources of Bias in Artificial Intelligence.” Was I surprised? Yep, but the five examples seemed a bit more pop psychology than substantive. In my view, the bias in smart software originates with the flaws or weaknesses in the common algorithms used to build artificially intelligent systems. I have a lecture about the ways in which a content creator can fiddle with algorithms to generate specific results. I call the lecture “Weaponizing Information: Using Words to Fiddle with Algorithms.” (Want to know more? Write benkent2020 at yahoo dot com. Be aware that this is a for-fee presentation.)

This “5 Unexpected…” write up offers these ideas:

  • Data-driven bias. The notion is that Stats 101 injunctions are happily ignored, forgotten, or just worked around. See what I mean? Human intent, not really mathy at its core.
  • Bias through interaction. The idea is that humans interact. If the humans are biased, guess what? The outputs are biased, which dominoes down the line. Key word: Human.
  • Emergent bias. This is the filter bubble. I view this as feedback looping, which is a shortcut to figuring out stuff. I ran across this idea years ago in Minneapolis. A startup there was explaining how letting me do one thing would inform its somewhat dull system about what to present next. Does this sound like Amazon’s method to you?
  • Similarity bias. Now we are getting close to a mathy notion. But the write up wanders back to the feedback notion and does not ask questions about the wonkiness of clustering. Sigh.
  • Conflicting goals bias. Now that puzzled me. I read the paragraphs in the original article and highlighted stereotyping. This struck me as a variant of feedback.

Math is sort of objective, but this write up sticks to some broad and somewhat repetitive ideas. The bias enters when thresholds are set, data are selected, and processes are structured to deliver [a] what the programmer desires, [b] what ze’s boss desires, [c] what can be made to run and sort of work in the time available, or [d] what the developer remembers from a university class, a Hacker News post, or a bit of open source goodness.
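A minimal sketch, with made-up scores and cutoffs of my own, shows how much work that human choice of threshold does. The math never changes; the output does.

```python
# Made-up scores and cutoffs, not from the write up. The "result" depends
# entirely on a threshold a human picked.

applicant_scores = {"Ann": 0.72, "Bob": 0.68, "Cho": 0.55, "Dee": 0.41}

def flag(scores, threshold):
    """Return the names whose score clears the chosen cutoff."""
    return [name for name, score in scores.items() if score >= threshold]

print(flag(applicant_scores, 0.70))  # ['Ann']               -- strict cutoff
print(flag(applicant_scores, 0.50))  # ['Ann', 'Bob', 'Cho'] -- looser cutoff
```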

The key to bias is to keep the key word “human” in mind.

Stephen E Arnold, December 12, 2016

Algorithmic Selling on Amazon Spells Buyer Beware

December 12, 2016

The article on Science Daily titled Amazon Might Not Always Be Pitching You the Best Prices, Researchers Find unveils the stacked deck that Amazon has created for sellers. Amazon rewards sellers who use automated algorithmic pricing by more often featuring those sellers’ items in the buy box, the most prominent and visible display. So what is algorithmic pricing, exactly? The article explains,

For a fee, any one of Amazon’s more than 2 million third-party sellers can easily subscribe to an automated pricing service…They then set up a pricing strategy by choosing from a menu of options like these: Find the lowest price offered and go above it (or below it) by X dollars or Y percentage, find Amazon’s own price for the item and adjust up or down relative to it, and so on. The service does the rest.

For the consumer, this means that searching on Amazon won’t necessarily produce the best value (at first click, anyway). It may be a mere dollar’s difference, but it could also be a more significant price increase of between $20 and $60. What is really startling is that even though these “algo sellers” make up less than 10% of third-party sellers, they account for close to a third of the best-selling products. If you take anything away from this article, let it be that what Amazon shows you first might not be the best price, so always do your research!
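A minimal sketch of the sort of rule the researchers describe ("find the lowest price offered and go above it or below it by X dollars"), with hypothetical prices and parameters of my own; the commercial services wrap rules like this in a subscription dashboard.

```python
# Hypothetical "undercut the lowest rival by X dollars" repricer. Prices,
# floor, and undercut amount are invented for illustration.

def reprice(rival_prices, floor, undercut=0.50):
    """Price just below the cheapest rival offer, but never below the floor."""
    target = min(rival_prices) - undercut
    return max(target, floor)

rivals = [24.99, 26.50, 31.00]
print(round(reprice(rivals, floor=19.99), 2))  # 24.49 -- undercuts the $24.99 offer
print(round(reprice(rivals, floor=24.75), 2))  # 24.75 -- the floor wins instead
```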

Chelsea Kerwin, December 12, 2016

Google Search Results Are Politically Biased

December 7, 2016

Google search results are supposed to be objective and accurate. The key word in that sentence is “objective,” but studies have shown that algorithms can be just as biased as the humans who design them. One would think that Google, one of the most popular search engines in the world, would have figured out how to program objective algorithms, but according to the International Business Times, “Google Search Results Tend To Have Liberal Bias That Could Influence Public Opinion.”

Did you ever hear Uncle Ben’s advice to Spider-Man, “With great power comes great responsibility”? This advice rings true for big corporations, such as Google, that influence public opinion. CanIRank.com conducted a study that discovered searches using political terms displayed more pages with a liberal view than a conservative one. What does Google have to say about it?

The Alphabet-owned company has denied any bias and told the Wall Street Journal: ‘From the beginning, our approach to search has been to provide the most relevant answers and results to our users, and it would undermine people’s trust in our results, and our company, if we were to change course.’  The company maintains that its search results are based on algorithms using hundreds of factors which reflect the content and information available on the Internet. Google has never made its algorithm for determining search results completely public even though over the years researchers have tried to put their reasoning to it.

This is not the first time Google has been accused of a liberal bias in its search results. The consensus is that the liberal leanings are unintentional and are an actual reflection of the amount of liberal content on the Web.

What is the truth?  Only the Google gods know.

Whitney Grace, December 7, 2016

Physiognomy for the Modern Age

December 6, 2016

Years ago, when I first learned about the Victorian-age pseudosciences of physiognomy and phrenology, I remember thinking how glad I was that society had evolved past such nonsense. It appears I was mistaken; the basic concept was just waiting for technology to evolve before popping back up, we learn from NakedSecurity’s article, “’Faception’ Software Claims It Can Spot Terrorists, Pedophiles, Great Poker Players.” Based in Israel, Faception calls its technique “facial personality profiling.” Writer Lisa Vaas reports:

The Israeli startup says it can take one look at you and recognize facial traits undetectable to the human eye: traits that help to identify whether you’ve got the face of an expert poker player, a genius, an academic, a pedophile or a terrorist. The startup sees great potential in machine learning to detect the bad guys, claiming that it’s built 15 classifiers to evaluate certain traits with 80% accuracy. … Faception has reportedly signed a contract with a homeland security agency in the US to help identify terrorists.

The article emphasizes how problematic it can be to rely on AI systems to draw conclusions, citing University of Washington professor and “Master Algorithm” author Pedro Domingos:

As he told The Washington Post, a colleague of his had trained a computer system to tell the difference between dogs and wolves. It did great. It achieved nearly 100% accuracy. But as it turned out, the computer wasn’t sussing out barely perceptible canine distinctions. It was just looking for snow. All of the wolf photos featured snow in the background, whereas none of the dog pictures did. A system, in other words, might come to the right conclusions, for all the wrong reasons.

Indeed. Faception suggests that, for this reason, their software would be but one factor among many in any collection of evidence. And, perhaps it would—for most cases, most of the time. We join Vaas in her hope that government agencies will ultimately refuse to buy into this modern twist on Victorian-age pseudoscience.
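A fabricated, minimal sketch of the dog-versus-wolf trap Domingos describes: when a background feature (snow) tracks the label perfectly, even a one-split classifier latches onto it and calls a snowy dog a wolf. The feature names and numbers are invented; this is no one's real pipeline.

```python
# Toy data in which "snow" is the only feature that perfectly separates the
# training labels, so a depth-1 tree uses it: right answers, wrong reason.
from sklearn.tree import DecisionTreeClassifier

# Columns: [ear_pointiness, snout_length, snow_in_background]
X_train = [[0.6, 0.7, 1], [0.4, 0.9, 1],   # wolves, photographed in snow
           [0.7, 0.5, 0], [0.5, 0.8, 0]]   # dogs, photographed on grass
y_train = ["wolf", "wolf", "dog", "dog"]

stump = DecisionTreeClassifier(max_depth=1).fit(X_train, y_train)

print(stump.predict([[0.7, 0.5, 1]]))  # ['wolf'] -- a dog, but in the snow
print(stump.feature_importances_)      # all the weight sits on the snow column
```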

Cynthia Murrell, December 6, 2016

 

Could AI Spell Doom for Marketers?

December 1, 2016

AI is making inroads into almost every domain; marketing is no different. However, AI’s inability to be creative in the true sense may be a major impediment.

In a feature article titled Marketing Faces Death by Algorithm Unless It Finds a New Code, The Telegraph says:

Artificial intelligence (AI) is one of the most-hyped topics in advertising right now. Brands are increasingly finding that they need to market to intelligent machines in order to reach humans, and this is set to transform the marketing function.

The problem with AI, as most marketers agree, is its inability to imitate true creativity. As the focus of marketing shifts from direct product placement to content marketing, the importance of AI becomes even bigger. For instance, a clothing company cannot on its own analyze vast amounts of Big Data, decipher it, and then create targeted advertising based on it; algorithms will play a crucial role in that work. However, the content creation will ultimately require a human touch and intervention.

As the article makes clear:

While AI can build a creative idea, it’s not creative “in the true sense of the word”, according to Mr Cooper. Machine learning – the driving technology behind how AI can learn – still requires human intelligence to work out how the machine would get there. “It can’t put two seemingly random thoughts together and recognize something new.”

The other school of thought says that what AI lacks is not creativity but processing power and storage, and it seems we are moving closer to bridging this gap. When AI does close it, will most occupations, including creative and technical ones, become obsolete?

Vishal Ingole, December 1, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Microsoft: On the Bandwagon Singing Me Too

November 30, 2016

In my dead tree copy of the November 21, 2016, New York Times (which just reported a modest drop in profits), I read a bit of fluff called “Microsoft Spends Big to Build a Computer Out of Science Fiction.” (If you have to pay to view the source, don’t honk at Beyond Search. Let your favorite national newspaper know directly.)

The main point of the PR piece was to make clear that Microsoft is not lagging behind the Alphabet Google thing in quantum computing. Also, Microsoft is not forking over a measly couple of hundred bucks. Nope, Microsoft is spending “big.” I learned from the write up:

There is a growing optimism in the tech world that quantum computers, super powerful devices that were once the stuff of science fiction, are possible — and may even be practical.

I think “spending” is a nice way to say “betting.”

I learned:

In the exotic world of quantum physics, Microsoft has set itself apart from its competitors by choosing a different path. The company’s approach is based on “braiding” particles known as anyons — which physicists describe as existing in just two dimensions — to form the building blocks of a supercomputer that would exploit the unusual physical properties of subatomic particles.

One problem. The Google D-Wave gizmos are not exactly ready for use in your mobile phone. The Microsoft approach is the anyon, and it is anyone’s guess if the Microsofties can make the gizmo do something useful for opening Word or, like IBM, treat cancer or, like Google, “solve death.”

Where on the journey to the anyon is Microsoft? It seems that this sentence suggests that Microsoft is just about ready to start thinking about planning a trip down computing lane:

“Once we get the first qubit figured out, we have a road map that allows us to go to thousands of qubits in a rather straightforward way,” Mr. Holmdahl [a Microsoftie who has avoided termination] said.

Yep, get those qubits working and then one can solve problems in quantum physics or perhaps get Microsoft Word’s auto numbering system to work. Me too, me too. Do you hear the singing? I do.

Stephen E Arnold, November 30, 2016

Emphasize Data Suitability over Data Quantity

November 30, 2016

It seems obvious to us, but apparently, some folks need a reminder. Harvard Business Review proclaims, “You Don’t Need Big Data, You Need the Right Data.” Perhaps that distinction has gotten lost in the Big Data hype. Writer Maxwell Wessel points to Uber as an example. Though the company does collect a lot of data, the key is in which data it collects, and which it does not. Wessel explains:

In an era before we could summon a vehicle with the push of a button on our smartphones, humans required a thing called taxis. Taxis, while largely unconnected to the internet or any form of formal computer infrastructure, were actually the big data players in rider identification. Why? The taxi system required a network of eyeballs moving around the city scanning for human-shaped figures with their arms outstretched. While it wasn’t Intel and Hewlett-Packard infrastructure crunching the data, the amount of information processed to get the job done was massive. The fact that the computation happened inside of human brains doesn’t change the quantity of data captured and analyzed. Uber’s elegant solution was to stop running a biological anomaly detection algorithm on visual data — and just ask for the right data to get the job done. Who in the city needs a ride and where are they? That critical piece of information let the likes of Uber, Lyft, and Didi Chuxing revolutionize an industry.

In order for businesses to decide which data is worth their attention, the article suggests three guiding questions: “What decisions drive waste in your business?” “Which decisions could you automate to reduce waste?” (Example—Amazon’s pricing algorithms) and “What data would you need to do so?” (Example—Uber requires data on potential riders’ locations to efficiently send out drivers.) See the article for more notes on each of these guidelines.

Cynthia Murrell, November 30, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Fake and Bake: Wikileaks Excitement

November 29, 2016

I love the chatter about fake news. I noted the story “Putting This Out There for Everyone’s Information (Is This for Real?) Before ItsNews 11-18-16… Wikileaks Is Gone.” According to the write up, a person named Jim Stone thinks that Wikileaks is a gone goose. The source cited above “felt drawn to put this [the story about Wikileaks as a disappeared organization] out there.” To step away from the possibility that the story is bogus, the author of the Kauilapele blog crawfishes:

Once more, I do not know for sure if this is true.

What’s the big reveal? Here you go:

The destruction of WikiLeaks was an unprecedented global effort.

There you go. Will accuracy and truthiness algorithms snag the Kauilapele item as possibly incorrect? Run those Bing, Google, and Yandex queries and decide for yourself.

Stephen E Arnold, November 29, 2016

Smart Software Figures Out What Makes Stories Tick

November 28, 2016

I recall sitting in high school when I was 14 years old and listening to our English teacher explain the basic plots used by fiction writers. The teacher was Miss Dalton, and she seemed quite happy to point out that fiction depended upon: man versus man, man versus the environment, man versus himself, man versus belief, and maybe one or two others. I don’t recall the details of a chalkboard session in 1959.

Not to fear.

I read “Fiction Books Narratives Down to Six Emotional Story Lines.” Smart software and some PhDs have cracked the code. Ivory Tower types processed digital versions of 1,327 books of fiction. I learned:

They [the Ivory Tower types] then applied three different natural language processing filters used for sentiment analysis to extract the emotional content of 10,000-word stories. The first filter—dubbed singular value decomposition—reveals the underlying basis of the emotional storyline, the second—referred to as hierarchical clustering—helps differentiate between different groups of emotional storylines, and the third—which is a type of neural network—uses a self-learning approach to sort the actual storylines from the background noise. Used together, these three approaches provide robust findings, as documented on the hedonometer.org website.
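Before any of those three filters can run, each book has to be reduced to a numeric emotional storyline. Here is a minimal sketch of that first step, using a toy sentiment lexicon and a sliding window of my own invention; the actual project worked from the hedonometer.org word list over far longer texts.

```python
# Toy lexicon and window sizes are invented for illustration only.

TOY_LEXICON = {"joy": 1, "love": 1, "triumph": 1,
               "grief": -1, "loss": -1, "ruin": -1}

def emotional_arc(words, window=50, step=25):
    """Score overlapping windows of words to get a coarse emotional storyline."""
    arc = []
    for start in range(0, max(1, len(words) - window + 1), step):
        chunk = words[start:start + window]
        score = sum(TOY_LEXICON.get(w.strip(".,!?").lower(), 0) for w in chunk)
        arc.append(score / len(chunk))
    return arc

text = "Loss and grief gave way to love, joy, and final triumph ..."  # stand-in for a novel
print(emotional_arc(text.split(), window=4, step=2))
# [-0.5, -0.25, 0.5, 0.5, 0.25] -- a fall, then a rise: roughly "man in a hole"
```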

Okay, and what’s the smart software say today that Miss Dalton did not tell me more than 50 years ago?

[The Ivory Tower types] determined that there were six main emotional storylines. These include ‘rags to riches’ (sentiment rises), ‘riches to rags’ (fall), ‘man in a hole’ (fall-rise), ‘Icarus’ (rise-fall), ‘Cinderella’ (rise-fall-rise), ‘Oedipus’ (fall-rise-fall). This approach could, in turn, be used to create compelling stories by gaining a better understanding of what has previously made for great storylines. It could also teach common sense to artificial intelligence systems.

Ah, progress.

Stephen E Arnold, November 28, 2016
