Google and Microsoft AI Missteps

August 14, 2017

I read an interesting article called “Former Microsoft Exec Reveals Why Amazon’s Alexa Voice Assistant Beat Cortana.” The passage I noted as thought provoking was this one:

Qi Lu, formerly a Microsoft wizard and now a guru at Baidu, allegedly said the following, as reported in the Verge’s article:

Lu believes Microsoft and Google “made the same mistake” of focusing on the phone and PC for voice assistants, instead of a dedicated device. “The phone, in my view, is going to be, for the foreseeable future, a finger-first, mobile-first device,” explains Lu. “You need an AI-first device to solidify an emerging base of ecosystems.”

Apparently Lu repeated what I think is a key point:

“The phone, in my view, is going to be, for the foreseeable future, a finger-first, mobile-first device,” explains Lu. “You need an AI-first device to solidify an emerging base of ecosystems.”

Several questions occurred to me:

  1. Do Google and Microsoft share a similar context for evaluating high value technologies? Perhaps these two companies are more alike in how they see the world than either is like Amazon?
  2. Are Google and Microsoft reactive; that is, do the companies act in a reflexive manner when figuring out how to apply a magnetic technology?
  3. Is Amazon’s competitive advantage an ability to think about an interesting technology in terms of the technology’s ability to augment an existing revenue stream and open new revenue streams?

I don’t have the answer to these questions. If Lu is correct, Amazon has done an end run around Google and Microsoft in terms of talking to gizmos. Can Amazon sustain its technological momentum? With Microsoft floundering with Windows 10 and hardware reliability, it is possible that its applied research is mired in the Microsoft management morass. Google, on the other hand, has its hands full with Amazon taking more product search traffic at a time when Google has to figure out how to solve emotional, political, and ideological issues. Need I say “damore”?

Stephen E Arnold, August 14, 2017

A Wonky Analysis of Search Today: The SEO Wizard View

July 24, 2017

I read what one of my goslings described as a “wonky” discussion of search. You will have to judge for yourself, gentle reader. In an era of fake news, I am not sure what to make of a semi-factual, incomplete write up with the title “How Search Reveals the World.” Search does not reveal “the world”; search provides some (note the word “some”) useful information about the behaviors of individuals who run queries or make use of systems like the oh-so-friendly Amazon Alexa.

I learned that there are three types of search, and I have to tell you that these points were not particularly original. Here they are, with a toy classifier sketch after the list:

  • Navigational search queries. Don’t think about Endeca’s “guided navigation.” Think about Google Maps, which is going to morph into a publishing platform, a fact not included in the write up and one which ruffled gosling feathers.
  • Information search queries. Ah, now we’re talking. A human types 2.4 words in a search box and feels lucky or just looks at the first few hits on the first search page. Could these hits be ads unrelated or loosely related to the user’s query? Sure, absolutely.
  • Transactional search queries. I am not sure what this phrase “transactional search queries” means, but that’s not too surprising. The confusion rests with me when I think of looking for a product like a USB C plug on Amazon versus navigating to my bank’s fine, fine Web site and using a fine, fine interface to move money from Point A to Point B. Close enough for horseshoes.
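To make the taxonomy concrete, here is a minimal, rule-based sketch of how a system might bucket queries into the three types. The keyword lists and the classify_query function are invented for illustration; real engines use far richer signals than keyword matching.

```python
# Toy query-intent classifier for the three query types named above.
# The keyword lists are illustrative assumptions, not a description of
# any production system.

NAVIGATIONAL = {"gmail", "youtube", "facebook", "login", "homepage"}
TRANSACTIONAL = {"buy", "order", "price", "coupon", "cheap", "deal"}

def classify_query(query: str) -> str:
    """Bucket a query as navigational, transactional, or informational."""
    tokens = set(query.lower().split())
    if tokens & NAVIGATIONAL:
        return "navigational"   # user wants to reach a specific site
    if tokens & TRANSACTIONAL:
        return "transactional"  # user wants to complete an action
    return "informational"      # default: user wants an answer

for q in ["gmail login", "buy usb c plug", "how do seaplanes take off"]:
    print(q, "->", classify_query(q))
```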


Skimming the surface is good for seaplanes but not a plus for an analysis of search and retrieval.

But the most egregious argument in the write up is that search becomes little more than a rather clumsy manipulative tool for “marketers, advertisers, and business owners.” Why clumsy? The write up is happily silent about Facebook’s alleged gaming of its system for various purposes. Filtering hate speech, for example, seems admirable until someone has to define “hate speech.” Filtering live streaming of a suicide or crime in progress is a bit more problematic. But search is a sissy compared with the alleged Facebook methods. With marketers looking to make a buck, Facebook seems to slip the papier-mâché noose of the write up’s argument.

But there is a far larger omission. One of the most important types of search is “pervasive, predictive search.” The idea is a nifty one. Using various “signals” a system presents information automatically to a user who is online and looking at an output. No specific action on the part of the user is required. The user sees what he or she presumably wants. Search without search! The marketer’s Holy Grail.
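Here is a minimal, hypothetical sketch of the idea: score candidate items against a user’s recent “signals” and push the top match with no query at all. The signal names, weights, and catalog entries are invented for illustration; nothing here comes from the write up.

```python
# Illustrative sketch of "search without search": rank candidate items
# against a user's recent signals and push the best match unprompted.
# All signal names, weights, and items are invented for illustration.

from collections import Counter

user_signals = Counter({"coffee": 5, "seaplane": 2, "usb-c": 1})

catalog = {
    "Burr grinder review": {"coffee": 3},
    "Seaplane charter deals": {"seaplane": 4},
    "USB-C hub roundup": {"usb-c": 2},
}

def push_without_query(signals, items):
    """Return the item whose tags best overlap the user's signals."""
    def score(tags):
        # Counter returns 0 for unseen signals, so missing tags score 0.
        return sum(signals[t] * w for t, w in tags.items())
    return max(items, key=lambda name: score(items[name]))

print(push_without_query(user_signals, catalog))  # -> Burr grinder review
```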

There are some important components of this type of search.

Perhaps an SEO expert will explain them instead of recycling old information and failing to define 33 percent of the bedrock statements. But that may be a bridge too far for those who would try to manipulate the systems and methods of some of the providers of free, ad-supported search systems. The longest journey begins with a single step. Didn’t an SEO expert say that too?

Stephen E Arnold, July 24, 2017

AI Feeling a Little Sentimental

July 24, 2017

Big data was one of the popular buzzwords a couple of years ago, but one conundrum was how organizations were going to use all that mined data. One answer has presented itself: sentiment analysis. Science shares the article “AI In Action: How Algorithms Can Analyze The Mood Of The Masses” about how artificial intelligence is being used to gauge people’s emotions.

Social media presents a constant stream of emotional information about products, services, and places that could be useful to organizations. The problem in the past was that no one knew how to fish all of that useful information out of social media Web sites and make it usable. By using artificial intelligence algorithms and natural language processing, data scientists are finding associations between words, the language used, posting frequency, and more to determine everything from a person’s mood to their personality, income level, and political associations.

‘There’s a revolution going on in the analysis of language and its links to psychology,’ says James Pennebaker, a social psychologist at the University of Texas in Austin. He focuses not on content but style, and has found, for example, that the use of function words in a college admissions essay can predict grades. Articles and prepositions indicate analytical thinking and predict higher grades; pronouns and adverbs indicate narrative thinking and predict lower grades…’Now, we can analyze everything that you’ve ever posted, ever written, and increasingly how you and Alexa talk,’ Pennebaker says. The result: ‘richer and richer pictures of who people are.’
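To make the style-over-content idea concrete, here is a crude sketch in the spirit of what Pennebaker describes: tally function-word categories rather than topics. The tiny word lists below are illustrative stand-ins for the much larger lexicons real tools use.

```python
# Toy Pennebaker-style function-word tally: articles and prepositions as
# "analytic" markers, pronouns and adverbs as "narrative" markers.
# These word lists are tiny illustrative stand-ins for real lexicons.

ANALYTIC = {"a", "an", "the", "of", "in", "on", "to", "with"}
NARRATIVE = {"i", "you", "he", "she", "they", "we", "really", "very"}

def style_profile(text: str) -> dict:
    """Count analytic vs. narrative function words in a text."""
    words = text.lower().split()
    return {
        "analytic": sum(w in ANALYTIC for w in words),
        "narrative": sum(w in NARRATIVE for w in words),
    }

essay = "the analysis of the data rests on a review of the evidence"
print(style_profile(essay))  # {'analytic': 7, 'narrative': 0}
```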

AI algorithms can take a person’s online social media accounts and construct more than a digital fingerprint of that person. The algorithms act like digital mind readers, recreating a person from the data he or she publishes.

Whitney Grace, July 24, 2017

IBM Watson: Predicting the Future

July 12, 2017

I enjoy IBM’s visions of the future, with one exception: the company’s revenue estimates for the Watson product line. I read “IBM Declares AI the Key to Making Unstructured Data Useful.” For me, the “facts” in the write up are a bit like a Payday candy bar: some nuts squished into a squishy core of questionable nutritional value.

I noted this factoid:

80 percent of company data is unstructured, including free-form documents, images, and voice recordings.

I have been interested in the application of the 80-20 rule to certain types of estimates. The problem is that the “principle of factor sparsity” gets disconnected from the underlying data. Generalizations are just so darned fun and easy. The problem is that the mathematical rigor necessary to validate a generalization is just too darned much work. The “hey, I’ve got a meeting” or the more common “I need to check my mobile” gets in the way of figuring out whether the 80-20 statement makes sense.
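The kind of check the paragraph above asks for need not be heavy. A minimal sketch, with per-source byte counts invented for illustration, of how one might test whether the top 20 percent of sources really hold 80 percent of the volume:

```python
# Back-of-the-envelope check of an 80-20 claim: what share of the total
# volume do the top 20 percent of sources actually hold?
# The per-source gigabyte counts below are invented for illustration.

volumes = [950, 400, 120, 80, 60, 40, 30, 20, 10, 5]  # GB per source

volumes.sort(reverse=True)
top_sources = volumes[: max(1, len(volumes) // 5)]  # top 20% of sources
share = sum(top_sources) / sum(volumes)
print(f"Top 20% of sources hold {share:.0%} of the data")  # 79% here
```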

My admittedly inept encounters with data suggest that the volume of unstructured data is high, higher than the 80 percent in the rule. The problem is that today’s systems struggle to:

  • Make sense of massive streams of unstructured data from outfits like YouTube, clear text and encrypted text messages, and the information blasted about on social media
  • Identify the important items of content directly germane to a particular matter
  • Figure out how to convert content processing into useful elements like named entities and relate those entities to code words and synonyms (see the sketch after this list)
  • Perform cost effective indexing of content streams in near real time.
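As one concrete illustration of the named-entity bullet above, here is a minimal sketch using the open source spaCy library. It assumes the small English model has been installed (python -m spacy download en_core_web_sm); the alias table mapping entities to code words is an invented stand-in.

```python
# Minimal named-entity extraction sketch with spaCy (open source).
# Assumes:  pip install spacy  and  python -m spacy download en_core_web_sm
# The ALIASES table is an invented stand-in for a real code-word list.

import spacy

nlp = spacy.load("en_core_web_sm")

ALIASES = {"IBM": ["Big Blue"], "Watson": ["the Watson product line"]}

doc = nlp("IBM says Watson will make unstructured data useful for Macy's.")
for ent in doc.ents:
    # Print each detected entity, its label, and any known aliases.
    print(ent.text, ent.label_, ALIASES.get(ent.text, []))
```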

At this time, systems designed to extract actionable information from relatively small chunks of content are improving. But these systems typically break down when the volume exceeds the budget and computing resources available to those trying to “make sense” of the data in a finite amount of time. This type of problem is difficult due to constraints on the systems. These constraints are financial as in “who has the money available right now to process these streams?” These constraints are problematic when someone asks “what do we do with the data in this dialect from northern Afghanistan?” And there are other questions.

My problem with the IBM approach is that the realities of volume, of interrelating structured and semi-structured data, and of multilingual content (the bumps in the information superhighway along which Watson seems to speed) are absorbed by marketing fluffiness.

I loved this passage:

Chatterjee highlighted Macy’s as an example of an IBM customer that’s using the company’s tools to better personalize customers’ shopping experiences using AI. The Macy’s On Call feature lets customers get information about what’s in stock and other key details about the contents of a retail store, without a human sales associate present. It uses Watson’s natural language understanding capabilities to process user queries and provide answers. Right now, that feature is available as part of a pilot in 10 Macy’s stores.

Yep, I bet that Macy’s is going to hit a home run against the fast ball pitching of Jeff Bezos’ Amazon Prime team. Let’s ask Watson. On the other hand, let’s ask Alexa.

Stephen E Arnold, July 12, 2017

The Voice of Assistance Is Called Snips

June 22, 2017

Siri, Cortana, Google Assistant, and Amazon Alexa are the most well-known digital assistants, but other companies want the same recognition. Snips is a relatively new company with the tagline: “Our Mission Is To Make Technology Disappear By Putting An AI In Every Device.” It is a noble mission to equip all technological devices with tools that make our lives better, easier, and more connected. How did its story begin?

Snips was founded in 2013 as a research lab in AI. Through our projects, we realized that the biggest issue of the next decades was the way humans and machine interact. Indeed, rather than having humans make the effort to use machines, we should use AI to make machines learn to communicate with human. By making this ubiquitous and privacy preserving, we can make technology so intuitive and accessible that it simply disappears from our consciousness.

Snips offers its digital assistant for enterprise systems, and it can also be programmed for other systems that need an on-device voice platform built with state-of-the-art deep learning. Snips offers many features, including on-device natural language understanding, customizable hotwords, on-device automatic speech recognition, and cross-platform support, and it is built using open source technology.
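To make “customizable hotwords” and “on-device natural language understanding” concrete, here is a toy, self-contained sketch of the two steps running locally. This is not Snips code; the wake phrase, intents, and patterns are invented for illustration.

```python
# Toy illustration of a hotword gate plus local intent parsing.
# NOT Snips code: the wake phrase and intent patterns are invented.

import re

HOTWORD = "hey snips"  # a customizable wake phrase

INTENT_PATTERNS = {
    "turn_on_lights": re.compile(r"\bturn on the lights?\b"),
    "get_weather":    re.compile(r"\bweather\b"),
}

def parse_utterance(utterance: str):
    """Return (intent, command) if the hotword is present, else None."""
    text = utterance.lower()
    if not text.startswith(HOTWORD):
        return None  # hotword gate: ignore everything else
    command = text[len(HOTWORD):].strip(" ,")
    for intent, pattern in INTENT_PATTERNS.items():
        if pattern.search(command):
            return intent, command
    return "unknown", command

print(parse_utterance("Hey Snips, turn on the lights"))
# -> ('turn_on_lights', 'turn on the lights')
```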

Snips also has its own unique bragging right: it is the only voice platform that is GDPR compliant. GDPR is a new European regulation meant to protect an individual’s privacy on connected devices. If Snips wants to reach more clients in the European market, it might do well to partner with Spain-based Bitext, a company that specializes in linguistic analytics.

Whitney Grace, June 22, 2017

 

Siri Becomes Smarter and More Human

June 20, 2017

When Apple introduced Siri, it was a shiny, new toy, but the more people used it, the more they realized it was a dumb digital assistant. It is true that Siri can accurately find a place’s location, conduct a Web search, or even call someone in your contact list, but beyond simple tasks “she” cannot do much. TechCrunch reports that Apple realizes there is a flaw in its flagship digital assistant and that, in order to compete with Google Assistant, Amazon Alexa, and even Microsoft’s Cortana, it needs to upgrade Siri’s capabilities: “Siri Gets Language Translation And A More Human Voice.”

Apple decided that Siri would receive a big overhaul with iOS 11. Not only will Siri sound more human, but the digital assistant will also offer both female and male voices, a clearer voice, the ability to answer more complex questions, and, even better, a translation feature:

Apple is bringing translation to Siri so that you can ask the voice assistant how do say a certain English phrase in a variety of languages, including, at launch, Chinese, French, German, Italian and Spanish.

Apple has changed its view of Siri. Instead of treating it as a gimmicky way to communicate with a device, Apple is treating Siri as a general AI that extends a device’s usage. Apple is making the right decision with these changes. For the translation aspect, Apple should leverage tools like Bitext’s DLAP to improve accuracy.

Whitney Grace, June 20, 2017

Will the Smartest Virtual Assistant Please Stand Up?

June 16, 2017

Smart speaker devices are driving sales. However, AI-powered virtual assistants are far from perfect. Alexa, Google Assistant, Siri, and Cortana are good for basic questions about weather, radio stations, and calendars. But when it comes to complicated questions, all of them fail.

MarketWatch, in an article titled “This Is the Smartest Virtual Assistant — and It’s NOT Siri, or Alexa,” says:

A number of factors will shape the market moving forward, including changes in consumers’ comfort over the security and collection of private data, the progress of natural language processing and advances in voice interface functionalities, and regulatory requirements that could alter the market.

A survey revealed that none of the virtual assistants tested could answer 100% of the questions, or even attempt them all, and the questions they did attempt were not always answered correctly. Google Assistant was at the top of the heap, while Siri came in last.

The article also points out that people want complicated questions answered, not just the simple ones these virtual assistants currently handle. It seems the days of perfect virtual assistants are still far away. Till then, the Google search engine is the best bet (the survey says so).

Vishal Ingole, June 16, 2017

How People Really Use Smart Speakers Will Not Shock You

June 13, 2017

Business Insider tells us a story that we already know, but with a new spin: “People Mainly Use Smart Speakers For Simple Requests.” The article begins by asserting that voice computing is the next stage in computer evolution. The hype is that current digital assistants like Alexa, Siri, and Google Assistant will make our lives easier by automating certain tasks and always being at our beck and call.

As is to be expected, and despite digital assistants’ advancements, people use them for the simplest tasks. These include playing music, getting the weather report, and answering questions via Wikipedia. People also buy products on their smart speakers, much to Amazon’s delight:

Voice-based artificial intelligence may not yet live up to its hype, but that’s not much of a surprise. Even Amazon CEO Jeff Bezos said last year that the tech is closer to “the first guy up at bat” than the first inning of its life. But Bezos will surely be happy when more than just 11% of smart speaker owners buy products online through their devices.

Voice-related technology has yet to even touch the horizon of what will be commonplace ten years from now.  Bitext’s computational linguistic analytics platform that teaches computers and digital assistants to speak human is paving the way towards that horizon.

Whitney Grace, June 13, 2017

Privacy Enabled on Digital Assistants

June 8, 2017

One thing that Amazon, Google, and other digital assistant manufacturers gloss over is how enabling voice commands on smart speakers potentially violates a user’s privacy. These devices include both the Google Home and the Amazon Echo. Keeping voice commands continuously enabled allows bad actors to hack into the smart speaker to listen, record, and spy on users in the privacy of their own homes. Yet if voice commands are disabled on smart speakers, their purpose is negated. The Verge reports that one smart technology venture is making an individual’s privacy the top priority: “Essential Home Is an Amazon Echo Competitor That Puts Privacy First.”

Andy Rubin recently released the Essential Home, essentially a digital assistant that responds to voice, touch, or “sight” commands. It is supposed to be an entirely new product in the digital assistant ring, but it borrows most of its ideas from Google’s and Amazon’s innovations. Essential Home just promises to do them better.

Translation: Huh?

What Essential Home is exactly, isn’t clear. Essential has some nice renders showing the concept in action. But we’re not seeing any photos of a working device and nothing in the way of specifications, prices, or delivery dates. We know it’ll act as the interface to your smart home gear but we don’t know which ecosystems will be supported. We know it runs Ambient OS, though details on that are scant. We know it’ll try to alert you of contextually relevant information during the day, but it’s unclear how.

It is compatible with Nest, SmartThings, and HomeKit and it is also supposed to be friendly with Alexa, Google Assistant, and Siri.  The biggest selling feature might be this:

Importantly, we do know that most of the processing will happen locally on the device, not in the cloud, keeping the bulk of your data within the home. This is exactly what you’d expect from a company that’s not in the business of selling ads, or everything else on the planet.

Essentially, keeping user data local might be a bigger market differentiator in the future than we think. The cloud might appeal to more people, however, because it is a popular buzzword. What is curious is how Essential Home will respond to commands other than voice. The company might not be relying on a diamond-in-the-rough concept similar to the one that propelled Bitext to the front of the computational linguistics and machine learning market.

Whitney Grace, June 8, 2017

Make Your Amazon Echo an ASMR Device

June 7, 2017

For people who love simple, soothing sounds, the Internet is a boon. White noise or ambient noise is a technique many people use to relax or fall asleep. Ambient devices used to be sold through catalogs, especially SkyMall, but now any sort of sound can be accessed for free through YouTube or apps. Smart speakers are the next evolution for ambient noise. CNET has a cool article that explains, “How To Turn Your Amazon Echo Into A Noise Machine.”

The article lists several skills that can be downloaded onto the Echo and the Echo Dot. The first two suggestions are music skills: Amazon Prime Music and Spotify. Using these skills, the user can request that Alexa find any variety of nature sounds and play them on a loop. It takes some trial and error to find the perfect sounds for your tastes, but once found, they can be added to a playlist. An easier way, though it might offer less variety, is:

One of the best ways to find ambient noise or nature sounds for Alexa is through skills. Developer Nick Schwab created a family of skills under Ambient Noise. There are currently 12 skills or sounds to choose from:

  • Airplane
  • Babbling Brook
  • Birds
  • City
  • Crickets
  • Fan
  • Fireplace
  • Frogs
  • Ocean waves
  • Rainforest
  • Thunderstorms
  • Train

Normally, you could just say, “Alexa, open Ambient Noise,” to enable the skill, but there are too many similar skills for Alexa to list and let you choose using your voice. Instead, go to alexa.amazon.com or open the iOS or Android app and open the Skills menu. Search for Ambient Noise and click Enable.

This is not a bad start for ambient noise, but the voice command adds its own set of problems. Amazon should consider upgrading its machine learning algorithms to a Bitext-based solution. If you want something with a WHOLE lot more variety, check out YouTube and search for ambient noise or ASMR.

Whitney Grace, June 7, 2017

