Amazon Alexa Enables Shopping Without Computer, Phone, or TV

July 4, 2017

Mail order catalogs, home shopping networks, and online shopping all allowed consumers to buy products from the comfort of their own homes.  Each had its heyday, but now they must share the glory or roll into the grave, because Amazon Alexa is making one-stop shopping a vocal action.  Tom’s Guide explains how this is possible in, “What Is Alexa Voice Shopping, And How Do You Use It?”

Ordering with Amazon Alexa is really simple.  All you do is summon Alexa, ask the digital assistant to order an item, and then wait for the delivery.  The only stipulation is that you need to be an Amazon customer, preferably an Amazon Prime member.  Here is an example scenario:

Let’s just say you’ve been parched all day, and you’re drinking bottle after bottle of Fiji water. Suddenly, you realize you’re all out and you need some more. Rather than drive to the store in the scorching summer heat, you decide to order a case through Amazon and have it delivered to your house.  So, you say, “Alexa, order Fiji Natural Artesian Water.” Alexa will hear that and will respond by telling you that it’s found an option on Amazon for a certain price. Then, Alexa will ask you if it’s OK to order. If you’re happy with the product Alexa found, you can say “yes,” and your order will be placed.  Now, sit back, relax and wait for your water to arrive.

There are some drawbacks: you cannot order multiple different items in the same order, though you can order several units of the same item.  Also, if you are concerned about children buying every toy from their favorite franchise, do not worry: you can set up an optional confirmation code, so an order is only processed once the code is provided.
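
To picture the flow, here is a minimal sketch of the confirmation logic described above; the catalog, price, and four-digit code are hypothetical illustrations, not Amazon’s actual implementation.

```python
# Toy sketch of a voice-ordering confirmation flow (hypothetical, not
# Amazon's code).  Alexa finds one option, quotes the price, and only
# places the order after a "yes" and, optionally, a confirmation code.
CATALOG = {"fiji natural artesian water": 24.99}  # assumed price
CONFIRMATION_CODE = "4321"  # optional child-proofing code

def handle_order(item: str, reply: str, code: str = "") -> str:
    price = CATALOG.get(item.lower())
    if price is None:
        return f"I couldn't find {item} on Amazon."
    if reply.lower() != "yes":
        return "Okay, I won't order it."
    if CONFIRMATION_CODE and code != CONFIRMATION_CODE:
        return "Please say your confirmation code to place the order."
    return f"Ordering {item} for ${price:.2f}. It's on the way."

print(handle_order("Fiji Natural Artesian Water", "yes", "4321"))
```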

It is more than likely that Alexa will occasionally misinterpret orders, so relying on language services like Bitext might help sharpen Alexa’s selling skills.

Whitney Grace, July 4, 2017

 

Alexa Is Deaf to Vocal Skill Search

June 29, 2017

Here is a surprising fact: Amazon does not have a voice search for Alexa skills.  Amazon prides itself on being a top technology developer and retailer, but it fails to allow Alexa users to search for a specific skill.  Sure, it will list the top skills or the newest ones, but it does not allow you to ask for any specifics.  TechCrunch has the news story: “Amazon Rejects AI2’s Alexa Skill Voice-Search Engine.  Will It Build One?”

The Allen Institute for Artificial Intelligence (AI2) decided to take on the task itself and built “Skill Search.”  Skill Search works very simply: users state what they want, and Skill Search lists the skills that can fulfill the request.  When AI2 submitted Skill Search to Amazon, it was rejected on the grounds that Amazon does not want “skills to recommend other skills.”  This is a fairly common business practice, and Amazon does state on its policy page that skills of this nature are barred.  Still, Amazon is missing an opportunity:

It would seem that having this kind of skill search engine would be advantageous to Amazon. It provides a discovery opportunity for skill developers looking to get more users, and highlighting the breadth of skills could make Alexa look more attractive compared to alternatives like Google Home that don’t have as well established of an ecosystem.
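
To see what such a skill search engine involves, here is a minimal sketch that ranks skills against a spoken request by simple word overlap; the skill list and scoring are hypothetical stand-ins, not AI2’s actual method.

```python
# Toy skill search (hypothetical): rank skills by how many words of the
# spoken request appear in each skill's description.
SKILLS = {
    "Capital One": "check your bank balance and recent transactions",
    "Uber": "request a ride to an address",
    "7-Minute Workout": "guide a quick daily workout",
}

def skill_search(request: str, top_n: int = 2):
    words = set(request.lower().split())
    scored = [
        (len(words & set(desc.lower().split())), name)
        for name, desc in SKILLS.items()
    ]
    return [name for score, name in sorted(scored, reverse=True)[:top_n] if score]

print(skill_search("check my bank balance"))  # ['Capital One']
```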

Amazon probably has a voice search skill on its project list and simply does not have enough information to share yet.  Opening voice search would give Amazon another revenue stream for Alexa.  The company is probably working on perfecting the skill’s language comprehension.  Hey Amazon, maybe you should consider Bitext’s language offerings for an Alexa skill search?

Whitney Grace, June 29, 2017

Google Translate Is Constantly Working

June 28, 2017

It seems we are forever just around the corner from creating the universal translation device, but with online translation services that statement is becoming accurate.  Google Translate is one of the most powerful and accurate free translation services on the Internet.  Lackuna shares some “Facts About Google Translate You May Not Know” to help you understand the power behind Google Translate.

Google Translate is built on statistical machine translation (SMT): computers analyze translated documents from the Web to learn languages and find the patterns within them.  From there, the service picks the most probable translation for each query.  Google Translate used to work differently:

However, Google Translate didn’t always work this way. Initially, it used a rule-based system, where rules of grammar and syntax, along with vocabulary for each language, were manually coded into a computer. Google switched to SMT because it enables the service to improve itself as it combs the web adding to its database of text — as opposed to linguists having to identify and code new rules as a language evolves. This provides a much more accurate translation, as well as saving thousands of programming/linguist man-hours.
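
The noisy-channel idea behind SMT fits in a few lines: score each candidate translation by a translation model learned from parallel text multiplied by a language model of fluent output, then keep the highest-scoring candidate.  The phrase pairs and probabilities below are invented purely for illustration.

```python
# Toy statistical machine translation scoring (illustrative numbers only).
# The best English output maximizes P(french | english) * P(english),
# the classic noisy-channel decomposition learned from parallel text.
translation_model = {  # P(french phrase | english phrase), from aligned docs
    ("the house", "la maison"): 0.9,
    ("the home", "la maison"): 0.4,
}
language_model = {"the house": 0.05, "the home": 0.02}  # P(english phrase)

def best_translation(french: str) -> str:
    candidates = [en for (en, fr) in translation_model if fr == french]
    return max(candidates,
               key=lambda en: translation_model[(en, french)] * language_model[en])

print(best_translation("la maison"))  # -> 'the house'
```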

While Google might save time by relying fully on SMT, the human linguist’s touch is still necessary for sentiment and full comprehension of a language.  Companies like Bitext, which build analytics engines on linguistic knowledge combined with machine learning, have a distinct advantage over others.

Meanwhile, Google Translate remains a valuable service.  It currently translates sixty-four languages, offers a chatbot that translates in real time so people can communicate in their native tongues, provides speech-to-speech translation in conversation mode for Android, and even uses photos to translate written language in real time.

Whitney Grace, June 28, 2017

 

Apple Lovers Demand Their Own Talking Speaker

June 27, 2017

Google and Amazon dominate the intelligent speaker market, and it is about to get more crowded.  Marketing Land reports on a recent Morning Consult survey showing that Apple lovers would like their own talking speaker: “Survey: Amazon Echo, Google Home Owners ‘Very Interested’ In Apple HomePod.”  Morning Consult surveyed 2,000 US consumers and discovered that one third of them are interested in the Apple HomePod and 45 percent are Apple users.

Even more surprising among the results is that the consumers most interested in using an Apple HomePod already own the competing devices.  There are more interesting numbers:

According to the survey, the following were the rankings of variables “among those who said [the] feature was ‘very important’ when considering a voice-controlled assistant”:

57% Price

51% Speaker/audio quality

49% Accuracy of device’s voice recognition

44% Compatibility with devices you may already own, such as your smartphone

30% Access to a variety or music streaming services

29% Ability for device to integrate with other services or platforms, such as controlling smart light bulb

29% Brand that manufactures the device

21% Aesthetics or look of the device

Is this an indicator that the Apple cult will win over the home digital assistant market?  It might be, but Amazon is still favored among consumers and might be the biggest contender because of the shopping connection and the price.  The accuracy of the HomePod’s voice recognition will be very important to consumers, especially when Siri fails to understand.  Bitext could improve Apple’s, Google’s, and Amazon’s digital assistants when it comes to natural speech recognition.

Whitney Grace, June 27, 2017

The Voice of Assistance Is Called Snips

June 22, 2017

Siri, Cortana, Google Assistant, and Amazon Alexa are the best-known digital assistants, but other companies want the same recognition.  Snips is a relatively new company with the tagline: “Our Mission Is To Make Technology Disappear By Putting An AI In Every Device.”  It is a noble mission: equip every technological device with tools to make our lives better, easier, and more connected.  How did their story begin?

Snips was founded in 2013 as a research lab in AI. Through our projects, we realized that the biggest issue of the next decades was the way humans and machine interact. Indeed, rather than having humans make the effort to use machines, we should use AI to make machines learn to communicate with human. By making this ubiquitous and privacy preserving, we can make technology so intuitive and accessible that it simply disappears from our consciousness.

Snips offers its digital assistant for enterprise systems, and it can also be programmed for other systems that need an on-device voice platform built on state-of-the-art deep learning.  Snips offers many features, including on-device natural language understanding, customizable hotwords, on-device automatic speech recognition, and cross-platform support, and it is built using open source technology.
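
To make that concrete, here is a minimal sketch of on-device intent parsing; the intents, regex patterns, and output format are hypothetical illustrations, since a real engine like Snips’ relies on trained deep learning models rather than regular expressions.

```python
# Illustrative sketch only: a toy on-device intent matcher in the spirit
# of an embedded NLU engine.  Nothing leaves the device; parsing is local.
import re

INTENTS = {
    "turn_on_lights": re.compile(r"turn (?:on|up) the lights? in the (?P<room>\w+)"),
    "set_temperature": re.compile(r"set the temperature to (?P<degrees>\d+)"),
}

def parse(utterance: str) -> dict:
    """Match an utterance against known intents, entirely on-device."""
    for intent, pattern in INTENTS.items():
        match = pattern.search(utterance.lower())
        if match:
            return {"intent": intent, "slots": match.groupdict()}
    return {"intent": None, "slots": {}}

print(parse("Turn on the lights in the kitchen"))
# {'intent': 'turn_on_lights', 'slots': {'room': 'kitchen'}}
```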

Snips also has a unique bragging right: it claims to be the only voice platform that is GDPR compliant.  GDPR is a new European regulation meant to strengthen individuals’ privacy protections on connected devices.  If Snips wants to reach more clients in the European market, it might do well to partner with Spain-based Bitext, a company that specializes in linguistic analytics.

Whitney Grace, June 22, 2017

 

Instantaneous Language Translation in Your Ear

June 21, 2017

A common technology concept in cartoons and science fiction is an ear device that acts as a universal translator.  The wearer can understand and speak any language in the world.  The universal translator has been one of humanity’s pipe dreams since the Tower of Babel, and as technology improves we could be closer to inventing it.  The Daily Mail shares, “The Earpiece That Promises To Translate Language In Seconds: £140 Will Be Available Next Month.”

International travelers’ new best friend might be Lingmo International’s One2One translator, which is built on IBM Watson’s artificial intelligence system.  Unlike other translation devices, it does not rely on WiFi or Bluetooth connectivity.  It supports eight languages: English, Japanese, French, Italian, Spanish, Brazilian Portuguese, German, and Chinese (does that include Mandarin and Cantonese?).  If the One2One does not rely on the Internet, how will it translate languages?

Instead, it uses IBM Watson’s Natural Language Understanding and Language Translator APIs, which intuitively overcomes many of the contextual challenges associated with common languages, as well as understanding the nuances of local dialects…This allows it to translate what you’re saying, almost in real-time.
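
For developers curious what a Watson-backed translation call looks like, here is a minimal sketch using IBM’s Python SDK (pip install ibm-watson); the API key, service URL, and version date are placeholders, and note that the One2One device itself is designed to work without an Internet connection rather than through this cloud API.

```python
# Minimal sketch: translating text with IBM Watson's Language Translator
# API via IBM's Python SDK.  Credentials below are placeholders.
from ibm_watson import LanguageTranslatorV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_API_KEY")
translator = LanguageTranslatorV3(version="2018-05-01", authenticator=authenticator)
translator.set_service_url("https://api.us-south.language-translator.watson.cloud.ibm.com")

result = translator.translate(text="Where is the train station?",
                              model_id="en-es").get_result()
print(result["translations"][0]["translation"])  # e.g. '¿Dónde está la estación de tren?'
```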

Lingmo might be relying on IBM Watson for its natural language APIs, but it should also consider using Bitext, especially when it comes to sentiment analysis.  Some languages have words with multiple meanings that change based on a voice’s inflection and tone.

The ramifications of this device are endless.  Can you imagine traveling to a foreign country and being able to understand the native tongue?  It is the dream of billions, and it could even defuse some serious conflicts.

Whitney Grace, June 21, 2017

Siri Becomes Smarter and More Human

June 20, 2017

When Apple introduced Siri, it was a shiny new toy, but the more people used it, the more they realized it was a rather dumb digital assistant.  It is true that Siri can accurately find a place’s location, conduct a Web search, or even call someone in your contact list, but beyond simple tasks “she” cannot do much.  TechCrunch reports that Apple realizes there is a flaw in its flagship digital assistant and that, to compete with Google Assistant, Amazon Alexa, and even Microsoft’s Cortana, it needs to upgrade Siri’s capabilities: “Siri Gets Language Translation And A More Human Voice.”

Apple decided that Siri would receive a big overhaul with iOS 11.  Not only will Siri sound more human, but the digital assistant will also offer both a female and a male voice, speak more clearly, answer more complex questions, and, even better, gain a translation feature:

Apple is bringing translation to Siri so that you can ask the voice assistant how to say a certain English phrase in a variety of languages, including, at launch, Chinese, French, German, Italian and Spanish.

Apple has changed its view of Siri.  Instead of treating the assistant as a gimmicky way to communicate with a device, Apple is treating Siri as a general AI that extends a device’s usage.  Apple is making the right decision with these changes.  For the translation aspect, Apple should leverage tools like Bitext’s DLAP to improve accuracy.

Whitney Grace, June 20, 2017

Maybe Trump Speak Pretty One Day

June 15, 2017

US President Donald Trump is not the most popular person in the world.  He is a cherished scapegoat for media outlets, US citizens, and other world leaders.  One favorite point of ridicule is his odd use of the English language.  Trump’s take on the English tongue is so confusing that translators are left scratching their heads, says The Guardian in, “Trump In Translation: President’s Mangled Language Stumps Translators.”  For probably the first time in his presidency, Trump followed proper sentence structure and grammar when he withdrew the US from the Paris Accord.  While the world was in an uproar over the climate change deniers, translators were happy that they could translate his words more easily.

Asian translators are especially worried about what comes out of Trump’s mouth.  Asian languages have different roots than European ones, so direct translations of the colloquial expressions Trump favors are nearly impossible.

Indian translators, for example, have problems rendering Trump in Hindi:

‘Donald Trump is difficult to make sense of, even in English,’ said Anshuman Tiwari, editor of IndiaToday, a Hindi magazine. ‘His speech is unclear, and sometimes he contradicts himself or rambles or goes off on a tangent. Capturing all that confusion in writing, in Hindi, is not easy,’ he added. ‘To get around it, usually we avoid quoting Trump directly. We paraphrase what he has said because conveying those jumps in his speech, the way he talks, is very difficult. Instead, we summarise his ideas and convey his words in simple Hindi that will make sense to our readers.’

Indian translators also do Trump a favor by rendering his words at the same level of rhetoric as Indian politicians, which makes him sound smarter than he appears to English speakers.  Trump needs to learn to trust his speechwriters, but translators should know they can rely on Bitext’s DLAP to supplement their work and better handle local colloquialisms.

Whitney Grace, June 15, 2017

 

AI Decides to Do the Audio Index Dance

June 14, 2017

Did you ever wonder how search engines track down the most minuscule information?  Their power resides in indices that catalog Web sites, images, and books.  Audio content is harder to index because most indices rely on static words and images.  However, Audioburst plans to change that, says VentureBeat in the article, “How Audioburst Is Using AI To Index Audio Broadcasts And Make Them Easy To Find.”

Who exactly is Audioburst?

Founded in 2015, Audioburst touts itself as a “curation and search site for radio,” delivering the smarts to render talk radio in real time, index it, and make it easily accessible through search engines. It does this through “understanding” the meaning behind audio content and transcribes it using natural language processing (NLP). It can then automatically attach metadata so that search terms entered manually by users will surface relevant audio clips, which it calls “bursts.”
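
As a rough illustration of the idea, here is a minimal sketch of indexing transcribed “bursts” with an inverted index; the transcripts and field names are invented, and Audioburst’s real pipeline layers speech recognition, NLP, and metadata tagging on top of something like this.

```python
# Toy inverted index over transcribed audio "bursts" (hypothetical data).
from collections import defaultdict

bursts = [
    {"id": "b1", "transcript": "the fed raised interest rates today"},
    {"id": "b2", "transcript": "today's weather brings record heat"},
]

index = defaultdict(set)
for burst in bursts:
    for word in burst["transcript"].split():
        index[word].add(burst["id"])

def search(term: str):
    """Return ids of bursts whose transcript contains the exact term."""
    return sorted(index.get(term.lower(), set()))

print(search("today"))  # ['b1'] -- "today's" in b2 is a different token
```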

Audioburst recently secured $6.7 million in funding and announced its new API.  The API gives third-party developers access to Audioburst’s content library so they can feature audio-based feeds in their own applications, in-car entertainment systems, and other connected devices.  There is a growing demand for audio content as more people digest online information via sound bites, use voice search, and make use of digital assistants.

It is easy to find “printed” information on the Internet, but finding a specific audio file is not.  Audioburst hopes to revolutionize how people find and use sound.  The company should consider a partnership with Bitext, because indexing audio could benefit from advanced linguistics; Bitext’s technology would make the application more accurate.

Whitney Grace, June 14, 2017

How People Really Use Smart Speakers Will Not Shock You

June 13, 2017

Business Insider tells us a story we already know, but with a new spin: “People Mainly Use Smart Speakers For Simple Requests.”  The article begins by noting that voice computing is the next stage in computer evolution.  The hype is that current digital assistants like Alexa, Siri, and Google Assistant will make our lives easier by automating certain tasks and standing ready to answer our every beck and call.

As expected, and despite digital assistants’ advancements, people use them for the simplest tasks.  These include playing music, getting the weather report, and answering questions via Wikipedia.  People also buy products on their smart speakers, much to Amazon’s delight:

Voice-based artificial intelligence may not yet live up to its hype, but that’s not much of a surprise. Even Amazon CEO Jeff Bezos said last year that the tech is closer to “the first guy up at bat” than the first inning of its life. But Bezos will surely be happy when more than just 11% of smart speaker owners buy products online through their devices.

Voice-related technology has yet to even touch the horizon of what will be commonplace ten years from now.  Bitext’s computational linguistics platform, which teaches computers and digital assistants to speak human, is paving the way toward that horizon.

Whitney Grace, June 13, 2017
