The Voice of Assistance Is Called Snips

June 22, 2017

Siri, Cortana, Google Assistant, and Amazon Alexa are the best-known digital assistants, but other companies want the same recognition. Snips is a relatively new company with the tagline: “Our Mission Is To Make Technology Disappear By Putting An AI In Every Device.” It is a noble mission: equip all technological devices with tools that make our lives better, easier, and more connected. How did their story begin?

Snips was founded in 2013 as a research lab in AI. Through our projects, we realized that the biggest issue of the next decades was the way humans and machines interact. Indeed, rather than having humans make the effort to use machines, we should use AI to make machines learn to communicate with humans. By making this ubiquitous and privacy-preserving, we can make technology so intuitive and accessible that it simply disappears from our consciousness.

Snips offers its digital assistant for enterprise systems, and it can also be programmed for other systems that need an on-device voice platform built with state-of-the-art deep learning. Its features include on-device natural language understanding, customizable hotwords, on-device automatic speech recognition, and cross-platform support, all built on open-source technology.
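For readers curious what that looks like in practice, here is a minimal sketch of on-device intent parsing in the style of Snips’ open-source Python library, snips-nlu. The dataset file and intent name are hypothetical, and the API is shown in simplified form:

```python
import io
import json

from snips_nlu import SnipsNLUEngine

# Load a training dataset in Snips' JSON format (intents, example
# utterances, and entities). "dataset.json" is a hypothetical file.
with io.open("dataset.json") as f:
    dataset = json.load(f)

# Training happens entirely on the device; no audio or text is sent
# to a cloud service.
engine = SnipsNLUEngine()
engine.fit(dataset)

# Parse a query locally and inspect the recognized intent and slots.
parsing = engine.parse("Turn the living room lights on")
print(parsing["intent"]["intentName"])  # e.g. "turnLightOn" (hypothetical)
print(parsing["slots"])                 # extracted slot values
```

Because both training and parsing run locally, no voice data has to leave the user’s device, which is the crux of Snips’ privacy pitch.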

Snips also has a unique bragging right: it claims to be the only voice platform that is GDPR compliant. GDPR is a new European regulation meant to strengthen individuals’ privacy protections on connected devices. If Snips wants to reach more clients in the European market, it might do well partnering with Spain-based Bitext, a company that specializes in linguistic analytics.

Whitney Grace, June 22, 2017

 

Instantaneous Language Translation in Your Ear

June 21, 2017

A common technology concept in cartoons and science-fiction series is an ear device that acts as a universal translator. The wearer would be able to understand and speak any language in the world. The universal translator has been one of humanity’s pipe dreams since the Tower of Babel, and as technology improves, we may be getting closer to inventing one. The Daily Mail shares, “The Earpiece That Promises To Translate Language In Seconds: £140 Will Be Available Next Month.”

International travelers’ new best friend might be Lingmo International’s One2One translator, which is built on IBM Watson’s artificial intelligence system. Unlike other translation devices, it does not rely on Wi-Fi or Bluetooth connectivity. It supports eight languages: English, Japanese, French, Italian, Spanish, Brazilian Portuguese, German, and Chinese (does that include Mandarin and Cantonese?). If the One2One does not rely on the Internet, how will it translate languages?

Instead, it uses IBM Watson’s Natural Language Understanding and Language Translator APIs, which intuitively overcomes many of the contextual challenges associated with common languages, as well as understanding the nuances of local dialects…This allows it to translate what you’re saying, almost in real-time.
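To give a flavor of what building on those Watson APIs involves, here is a minimal sketch using IBM’s Python SDK for the Language Translator service. The API key, service URL, and example sentence are placeholders, and Lingmo’s actual integration is certainly more involved:

```python
from ibm_watson import LanguageTranslatorV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

# Placeholder credentials; a real application would use its own
# IBM Cloud API key and regional service URL.
authenticator = IAMAuthenticator("YOUR_API_KEY")
translator = LanguageTranslatorV3(version="2018-05-01",
                                  authenticator=authenticator)
translator.set_service_url(
    "https://api.us-south.language-translator.watson.cloud.ibm.com")

# Translate an English phrase to Spanish with a prebuilt model.
result = translator.translate(text="Where is the train station?",
                              model_id="en-es").get_result()
print(result["translations"][0]["translation"])
```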

Lingmo may be relying on IBM Watson for its natural language API, but it should also consider using Bitext, especially when it comes to sentiment analysis. Some languages have words with multiple meanings that change based on a speaker’s inflection and tone.

The ramifications of this device are endless. Can you imagine traveling to a foreign country and being able to understand the native tongue? It is the dream of billions, and it might even help end some serious conflicts.

Whitney Grace, June 21, 2017

Siri Becomes Smarter and More Human

June 20, 2017

When Apple introduced Siri, it was a shiny, new toy, but the more people used it, the more they realized it was a limited digital assistant. It is true that Siri can accurately find a place’s location, conduct a Web search, or even call someone in your contact list, but beyond simple tasks “she” cannot do much. TechCrunch reports that Apple realizes there is a flaw in its flagship digital assistant and that, to compete with Google Assistant, Amazon Alexa, and even Microsoft’s Cortana, it needs to upgrade Siri’s capabilities: “Siri Gets Language Translation And A More Human Voice.”

Apple decided that Siri would receive a big overhaul with iOS 11. Not only will Siri sound more human, but the digital assistant will also offer female and male voices, a clearer voice, the ability to answer more complex questions, and, even better, a translation feature:

Apple is bringing translation to Siri so that you can ask the voice assistant how to say a certain English phrase in a variety of languages, including, at launch, Chinese, French, German, Italian and Spanish.

Apple has changed its view of Siri. Instead of treating it as a gimmicky way to communicate with a device, Apple is treating Siri as a general AI that extends a device’s usefulness. Apple is making the right decision with these changes. For the translation aspect, Apple should leverage tools like Bitext’s DLAP to improve accuracy.

Whitney Grace, June 20, 2017

Maybe Trump Speak Pretty One Day

June 15, 2017

US President Donald Trump is not the most popular person in the world. He is a cherished scapegoat for media outlets, US citizens, and other world leaders. One favorite point of ridicule is his odd use of the English language. Trump’s take on the English tongue is so confusing that translators are left scratching their heads, says The Guardian in “Trump In Translation: President’s Mangled Language Stumps Translators.” For probably the first time in his presidency, Trump followed proper sentence structure and grammar when he withdrew the US from the Paris Accord. While the world was in an uproar over the climate change deniers, translators were happy that they could translate his words more easily.

Asian translators are especially worried about what comes out of Trump’s mouth. Asian languages have different roots from European ones, so direct translations of the colloquial expressions Trump favors are nearly impossible.

One editor describes India’s problems translating Trump into Hindi:

“Donald Trump is difficult to make sense of, even in English,” said Anshuman Tiwari, editor of India Today, a Hindi magazine. “His speech is unclear, and sometimes he contradicts himself or rambles or goes off on a tangent. Capturing all that confusion in writing, in Hindi, is not easy,” he added. “To get around it, usually we avoid quoting Trump directly. We paraphrase what he has said because conveying those jumps in his speech, the way he talks, is very difficult. Instead, we summarise his ideas and convey his words in simple Hindi that will make sense to our readers.”

Indian translators also do Trump a favor by rendering his words at the same level of rhetoric as Indian politicians. It makes him sound smarter than he appears to English speakers. Trump needs to learn to trust his speechwriters, but translators should know they can rely on Bitext’s DLAP to supplement their work and better handle local colloquialisms.

Whitney Grace, June 15, 2017

 

AI Decides to Do the Audio Index Dance

June 14, 2017

Did you ever wonder how search engines track down the most minuscule information? Their power resides in indices that catalog Web sites, images, and books. Audio content is harder to index because most indices rely on static words and images. However, Audioburst plans to change that, says VentureBeat in the article “How Audioburst Is Using AI To Index Audio Broadcasts And Make Them Easy To Find.”

Who exactly is Audioburst?

Founded in 2015, Audioburst touts itself as a “curation and search site for radio,” delivering the smarts to render talk radio in real time, index it, and make it easily accessible through search engines. It does this through “understanding” the meaning behind audio content and transcribes it using natural language processing (NLP). It can then automatically attach metadata so that search terms entered manually by users will surface relevant audio clips, which it calls “bursts.”
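Audioburst’s own API is not spelled out in the article, but the pipeline the quote describes (transcribe speech, extract keywords, attach them as searchable metadata) can be sketched generically. In this toy version the transcripts are plain strings; a real system would get them from a speech-to-text engine:

```python
import re
from collections import defaultdict

# Words too common to be useful as search keys.
STOPWORDS = {"the", "a", "an", "is", "of", "and", "to", "in", "on", "for"}

def extract_keywords(transcript):
    """Rough keyword extraction: lowercase tokens minus stopwords."""
    tokens = re.findall(r"[a-z']+", transcript.lower())
    return {t for t in tokens if t not in STOPWORDS}

def index_burst(index, burst_id, transcript):
    """Attach keyword metadata so a text search can surface the clip."""
    for keyword in extract_keywords(transcript):
        index[keyword].append(burst_id)

# A toy inverted index mapping keywords to audio clips ("bursts").
index = defaultdict(list)
index_burst(index, "burst-001", "The Fed raised interest rates today")
index_burst(index, "burst-002", "Interest in electric cars is rising")

print(index["interest"])  # ['burst-001', 'burst-002']
```

A search term then surfaces every “burst” whose transcript mentions it, which is essentially what a conventional search engine does with Web pages.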

Audioburst recently raised $6.7 million in funding and also announced its new API. The API allows third-party developers to tap Audioburst’s content library to feature audio-based feeds in their own applications, in-car entertainment systems, and other connected devices. There is a growing demand for audio content as more people digest online information via sound bites, use voice searches, and rely on digital assistants.

It is easy to find “printed” information on the Internet, but finding a specific audio file is not.  Audioburst hopes to revolutionize how people find and use sound.  They should consider a partnership with Bitext because indexing audio could benefit from advanced linguistics.  Bitext’s technology would make this application more accurate.

Whitney Grace, June 14, 2017

How People Really Use Smart Speakers Will Not Shock You

June 13, 2017

Business Insider tells us a story we already know, but with a new spin: “People Mainly Use Smart Speakers For Simple Requests.” The article begins by asserting that voice computing is the next stage in computer evolution. The hype is that current digital assistants like Alexa, Siri, and Google Assistant will make our lives easier by automating certain tasks and always being at our every beck and call.

As might be expected, and despite digital assistants’ advancements, people use them for the simplest tasks: playing music, getting the weather report, and answering questions via Wikipedia. People also buy products on their smart speakers, much to Amazon’s delight:

Voice-based artificial intelligence may not yet live up to its hype, but that’s not much of a surprise. Even Amazon CEO Jeff Bezos said last year that the tech is closer to “the first guy up at bat” than the first inning of its life. But Bezos will surely be happy when more than just 11% of smart speaker owners buy products online through their devices.

Voice-related technology has yet to even touch the horizon of what will be commonplace ten years from now. Bitext’s computational linguistic analytics platform, which teaches computers and digital assistants to speak human, is paving the way toward that horizon.

Whitney Grace, June 13, 2017

Privacy Enabled on Digital Assistants

June 8, 2017

One thing that Amazon, Google, and other digital assistant manufacturers glaze over is how enabling voice commands on smart speakers can violate a user’s privacy. This applies to both the Google Home and the Amazon Echo. Keeping voice commands continuously enabled allows bad actors to hack into the smart speaker to listen, record, and spy on users in the privacy of their own homes. Yet disabling voice commands on a smart speaker negates its purpose. The Verge reports that one smart technology venture is making individual privacy the top priority: “Essential Home Is An Amazon Echo Competitor That Puts Privacy First.”

Andy Rubin recently released the Essential Home, essentially a digital assistant that responds to voice, touch, or “sight” commands. It is supposed to be an entirely new product in the digital assistant ring, but it borrows most of its ideas from Google’s and Amazon’s innovations. Essential Home just promises to do them better.

Translation: Huh?

What Essential Home is exactly, isn’t clear. Essential has some nice renders showing the concept in action. But we’re not seeing any photos of a working device and nothing in the way of specifications, prices, or delivery dates. We know it’ll act as the interface to your smart home gear but we don’t know which ecosystems will be supported. We know it runs Ambient OS, though details on that are scant. We know it’ll try to alert you of contextually relevant information during the day, but it’s unclear how.

It is compatible with Nest, SmartThings, and HomeKit, and it is also supposed to play nice with Alexa, Google Assistant, and Siri. The biggest selling feature might be this:

Importantly, we do know that most of the processing will happen locally on the device, not in the cloud, keeping the bulk of your data within the home. This is exactly what you’d expect from a company that’s not in the business of selling ads, or everything else on the planet.

Essentially, keeping user data local might be a bigger market differentiator in the future than we think. The cloud might appeal to more people, however, because it is a popular buzzword. What is curious is how Essential Home will respond to commands other than voice. It might not be relying on a diamond-in-the-rough concept like the one that propelled Bitext to the front of the computational linguistics and machine learning market.

Whitney Grace, June 8, 2017

Make Your Amazon Echo an ASMR Device

June 7, 2017

For people who love simple and soothing sounds, the Internet is a boon. White noise or ambient noise is a technique many people use to relax or fall asleep. Ambient devices used to be sold through catalogs, especially SkyMall, but now any sort of sound can be accessed for free through YouTube or apps. Smart speakers are the next evolution of ambient noise. CNET has a cool article that explains “How To Turn Your Amazon Echo Into A Noise Machine.”

The article lists several skills that can be downloaded onto the Echo and the Echo Dot. The first two suggestions are music skills: Amazon Prime Music and Spotify. Using these skills, the user can ask Alexa to find any variety of nature sounds and play them on a loop. It takes some trial and error to find the perfect sounds for your tastes, but once found, they can be added to a playlist. An easier way, though it might offer less variety, is:

One of the best ways to find ambient noise or nature sounds for Alexa is through skills. Developer Nick Schwab created a family of skills under Ambient Noise. There are currently 12 skills or sounds to choose from:

  • Airplane

  • Babbling Brook

  • Birds

  • City

  • Crickets

  • Fan

  • Fireplace

  • Frogs

  • Ocean waves

  • Rainforest

  • Thunderstorms

  • Train

Normally, you could just say, “Alexa, open Ambient Noise,” to enable the skill, but there are too many similar skills for Alexa to list and let you choose using your voice. Instead, go to alexa.amazon.com or open the iOS or Android app and open the Skills menu. Search for Ambient Noise and click Enable.

This is not a bad start for ambient noise, but the voice commands add their own set of problems; Amazon should consider upgrading its machine learning algorithms to a Bitext-based solution. If you want something with a WHOLE lot more variety, check out YouTube and search for ambient noise or ASMR.
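For the curious, here is a rough sketch of how an ambient-noise skill like these might work behind the scenes: an AWS Lambda handler that answers the skill’s launch request with an Alexa AudioPlayer directive. The stream URL is a placeholder, and a production skill would handle more request types:

```python
# Minimal sketch of an Alexa skill backend (AWS Lambda handler) that
# streams an ambient sound. The audio URL is hypothetical; Alexa
# requires HTTPS-hosted audio sources.

OCEAN_STREAM_URL = "https://example.com/audio/ocean-waves.mp3"

def lambda_handler(event, context):
    if event["request"]["type"] == "LaunchRequest":
        return play_stream(OCEAN_STREAM_URL)
    # A real skill would also handle AudioPlayer.PlaybackNearlyFinished
    # and enqueue the stream again to keep the sound looping.
    return {"version": "1.0", "response": {"shouldEndSession": True}}

def play_stream(url):
    # Standard AudioPlayer.Play response envelope from the Alexa
    # Skills Kit request/response format.
    return {
        "version": "1.0",
        "response": {
            "directives": [{
                "type": "AudioPlayer.Play",
                "playBehavior": "REPLACE_ALL",
                "audioItem": {
                    "stream": {
                        "token": "ambient-ocean",
                        "url": url,
                        "offsetInMilliseconds": 0,
                    }
                },
            }],
            "shouldEndSession": True,
        },
    }
```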

Whitney Grace, June 7, 2017

The Next Digital Assistant Is Apple Flavored

June 6, 2017

Amazon Alexa dominated the digital assistant market until Google released Google Assistant. Both assistants are accessible through smart devices, but more readily through smart speakers that react to voice commands. Google and Amazon need to move over, because Apple wants a place on the coffee table. Mashable explores Apple’s latest invention in “Apple’s Answer To The Amazon Echo Could Be Unveiled As Early As June.”

Guess who will be the voice behind Apple’s digital assistant? Why, Siri, of course! While Apple can already hear your groans, the shiny, new smart speaker will distract you. Apple is fantastic at packaging and branding its technology to be chic, minimalist, and trendy. Will the new packaging be enough to gain Siri fans? Apple should consider deploying Bitext’s computational linguistic platform, which renders human speech more comprehensible to computers and even includes sentiment analysis. This is an upgrade Siri desperately needs.

Apple also desperately needs to catch up with the increasing demand for smart home products:

Up until now, people married to the Apple ecosystem haven’t had many smart-home options. That’s because the two dominant players, Echo and Google Home, don’t play nice with Siri. So if people wanted to stick with Apple, they only really had one option: Wait it out.
That’s about to change as the new Essential Home will work with Apple’s voice assistant. And, as an added bonus, the Essential Home looks nice. So nice, in fact, that it could sway Apple fans who are dying to get in on the smart-home game but don’t want to wait any longer for Apple to get its act together.

The new Apple digital assistant will also come with a screen, possibly a way to capture more of the market and compete with the new Amazon Echo Show. However, I thought the point of having a smart speaker was to decrease a user’s dependency on screen-based devices. That’s going to be a hard habit to break, but it’s about time Apple added its flavor to the digital assistant shelf.

Whitney Grace, June 6, 2017

Linguistic Analytics Translate Doctor Scribbles

May 31, 2017

Healthcare is one of the industries people imagine can be revolutionized by new technology. Digital electronic medical records; faster, more accurate diagnostic tools; and doctors able to digest piles of data in minutes are some of the newest and best advances in medicine. Despite all of these wonderful improvements, healthcare still lags behind other fields in transforming its big data into actionable, usable data. Inside Big Data shares the article “How NLP Can Help Healthcare ‘Catchup’,” which discusses how natural language processing can help the healthcare industry make more effective use of its resources.

The reason healthcare lags behind other fields is that most of its data is unstructured:

This large realm of unstructured data includes qualitative information that contributes indispensable context in many different reports in the EHR, such as outside lab results, radiology images, pathology reports, patient feedback and other clinical reports. When combined with claims data this mix of data provides the raw material for healthcare payers and health systems to perform analytics. Outside the clinical setting, patient-reported outcomes can be hugely valuable, especially for life science companies seeking to understand the long-term efficacy and safety of therapeutic products across a wide population.

Natural language processing relies on linguistic algorithms to identify key meanings in unstructured data. Once meaning is assigned to unstructured data, it can be fed into machine learning algorithms. Bitext’s computational linguistics platform does the same with its sentiment analysis algorithms. Healthcare information is never black and white like data in other industries. While the unstructured data differs from patient to patient, there are similarities, and NLP helps machine learning tools quantify what was once unquantifiable.
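As a concrete illustration of that unstructured-to-structured step, here is a small sketch using spaCy’s PhraseMatcher to pull symptom mentions out of a free-text clinical note. The symptom list is a stand-in for a real medical terminology resource, and a production system would use a clinically trained model:

```python
import spacy
from spacy.matcher import PhraseMatcher

# A blank English pipeline is enough for phrase matching; real
# clinical NLP would use a domain-trained model instead.
nlp = spacy.blank("en")

# Tiny hand-made vocabulary standing in for a medical terminology list.
SYMPTOMS = ["chest pain", "shortness of breath", "nausea"]
matcher = PhraseMatcher(nlp.vocab, attr="LOWER")
matcher.add("SYMPTOM", [nlp.make_doc(term) for term in SYMPTOMS])

note = "Patient reports chest pain and shortness of breath after exercise."
doc = nlp(note)

# Convert free-text findings into structured features for downstream
# machine learning.
features = {"symptoms": sorted({doc[start:end].text.lower()
                                for _, start, end in matcher(doc)})}
print(features)  # {'symptoms': ['chest pain', 'shortness of breath']}
```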

Whitney Grace, May 31, 2017
