Privacy Enabled on Digital Assistants

June 8, 2017

One thing that Amazon, Google, and other digital assistant manufacturers gloss over is how enabling voice commands on smart speakers can violate a user’s privacy.  This applies to both the Google Home and the Amazon Echo.  Keeping voice commands continuously enabled allows bad actors to hack into the smart speaker and listen to, record, and spy on users in the privacy of their own homes.  Disabling voice commands, however, negates the speakers’ purpose.  The Verge reports that one smart technology venture is making an individual’s privacy the top priority: “Essential Home Is an Amazon Echo Competitor That Puts Privacy First.”

Andy Rubin recently released the Essential Home, essentially a digital assistant that responds to voice, touch, or “sight” commands.  It is supposed to be an entirely new product in the digital assistant ring, but it borrows most of its ideas from Google’s and Amazon’s innovations.  Essential Home just promises to do them better.

Translation: Huh?

What Essential Home is exactly, isn’t clear. Essential has some nice renders showing the concept in action. But we’re not seeing any photos of a working device and nothing in the way of specifications, prices, or delivery dates. We know it’ll act as the interface to your smart home gear but we don’t know which ecosystems will be supported. We know it runs Ambient OS, though details on that are scant. We know it’ll try to alert you of contextually relevant information during the day, but it’s unclear how.

It is compatible with Nest, SmartThings, and HomeKit and it is also supposed to be friendly with Alexa, Google Assistant, and Siri.  The biggest selling feature might be this:

Importantly, we do know that most of the processing will happen locally on the device, not in the cloud, keeping the bulk of your data within the home. This is exactly what you’d expect from a company that’s not in the business of selling ads, or everything else on the planet.

Essentially, keeping user data local might be a bigger selling point in the future market than we think.  The cloud might appeal to more people, however, because it is a popular buzzword.  What is curious is how Essential Home will respond to commands other than voice.  They might not be relying on a diamond-in-the-rough concept like the one that propelled Bitext to the front of the computational linguistics and machine learning market.

Whitney Grace, June 8, 2017

Make Your Amazon Echo an ASMR Device

June 7, 2017

For people who love simple, soothing sounds, the Internet is a boon.  White noise or ambient noise is a technique many people use to relax or fall asleep.  Ambient devices used to be sold through catalogs, especially SkyMall, but now any sort of sound can be accessed for free through YouTube or apps.  Smart speakers are the next evolution for ambient noise.  CNET has a cool article that explains, “How To Turn Your Amazon Echo Into A Noise Machine.”

The article lists several skills that can be downloaded onto the Echo and the Echo Dot.  The first two suggestions are music skills: Amazon Prime Music and Spotify.  Using these skills, the user can request that Alexa find any variety of nature sounds and then play them on a loop.  It takes some trial and error to find the perfect sounds for your tastes, but once found they can be added to a playlist.  An easier way, though one that may offer less variety, is:

One of the best ways to find ambient noise or nature sounds for Alexa is through skills. Developer Nick Schwab created a family of skills under Ambient Noise. There are currently 12 skills or sounds to choose from:

  • Airplane

  • Babbling Brook

  • Birds

  • City

  • Crickets

  • Fan

  • Fireplace

  • Frogs

  • Ocean waves

  • Rainforest

  • Thunderstorms

  • Train

Normally, you could just say, “Alexa, open Ambient Noise,” to enable the skill, but there are too many similar skills for Alexa to list and let you choose using your voice. Instead, go to alexa.amazon.com or open the iOS or Android app and open the Skills menu. Search for Ambient Noise and click Enable.

This is not a bad start for ambient noises, but the voice command adds its own set of problems.  Amazon should consider upgrading its machine learning algorithms to a Bitext-based solution.  If you want something with a WHOLE lot more variety, check out YouTube and search for ambient noise or ASMR.

Whitney Grace, June 7, 2017

The Next Digital Assistant Is Apple Flavored

June 6, 2017

Amazon Alexa dominated the digital assistant market until Google released Google Assistant.  Both assistants are accessible through smart devices, but more readily through smart speakers that react to vocal commands.  Google and Amazon need to move over, because Apple wants a place on the coffee table.  Mashable explores Apple’s latest invention in, “Apple’s Answer To The Amazon Echo Could Be Unveiled As Early As June.”

Guess who will be the voice behind Apple’s digital assistant?  Why, Siri, of course!  While Apple can already hear your groans, the shiny new smart speaker will distract you.  Apple is fantastically good at packaging and branding its technology to be chic, minimalist, and trendy.  Will the new packaging be enough to gain Siri fans?  Apple should consider deploying Bitext’s computational linguistics platform, which renders human speech more comprehensible to computers and even includes sentiment analysis.  This is an upgrade Siri desperately needs.

Apple is also in desperate need of catching up with the increasing demand for smart home products:

Up until now, people married to the Apple ecosystem haven’t had many smart-home options. That’s because the two dominant players, Echo and Google Home, don’t play nice with Siri. So if people wanted to stick with Apple, they only really had one option: Wait it out.
That’s about to change as the new Essential Home will work with Apple’s voice assistant. And, as an added bonus, the Essential Home looks nice. So nice, in fact, that it could sway Apple fans who are dying to get in on the smart-home game but don’t want to wait any longer for Apple to get its act together.

The new Apple digital assistant will also come with a screen, possibly a way to leverage more of the market and compete with the new Amazon Echo Show.  However, I thought the point of having a smart speaker was to decrease a user’s dependency on screen-related devices.  That’s going to be a hard habit to break, but it’s about time Apple added its flavor to the digital assistant shelf.

Whitney Grace, June 6, 2017

Linguistic Analytics Translate Doctor Scribbles

May 31, 2017

Healthcare is one of the industries that people imagine can be revolutionized by new technology.  Digital electronic medical records; faster, more accurate diagnostic tools; and doctors able to digest piles of data in minutes are some of the newest and best advances in medicine.  Despite all of these wonderful improvements, healthcare still lags behind other fields in transforming its big data into actionable, usable data.  Inside Big Data shares the article “How NLP Can Help Healthcare ‘Catchup’,” which discusses how natural language processing can help the healthcare industry make more effective use of its resources.

The reason healthcare lags behind other fields is that most of its data is unstructured:

This large realm of unstructured data includes qualitative information that contributes indispensable context in many different reports in the EHR, such as outside lab results, radiology images, pathology reports, patient feedback and other clinical reports. When combined with claims data this mix of data provides the raw material for healthcare payers and health systems to perform analytics. Outside the clinical setting, patient-reported outcomes can be hugely valuable, especially for life science companies seeking to understand the long-term efficacy and safety of therapeutic products across a wide population.

Natural language processing relies on linguistic algorithms to identify key meanings in unstructured data.  Once meaning is assigned to unstructured data, it can be fed into machine learning algorithms.  Bitext’s computational linguistics platform does the same with its sentiment analysis algorithm.  Healthcare information is never black and white like data in other industries.  While unstructured data differs from patient to patient, there are similarities, and NLP helps machine learning tools learn how to quantify what was once unquantifiable.
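As a toy illustration of what “giving meaning to unstructured data” looks like, a rule-based extractor can turn a free-text clinical note into structured fields a machine learning model could consume.  This is a minimal sketch, not Bitext’s actual pipeline; the field names and patterns are hypothetical, and a real NLP system would use linguistic analysis rather than hand-written regexes:

```python
import re

# Hypothetical patterns for a toy clinical-note extractor.
PATTERNS = {
    "blood_pressure": re.compile(r"\bBP\s*(\d{2,3})/(\d{2,3})\b"),
    "heart_rate": re.compile(r"\bHR\s*(\d{2,3})\b"),
    "medication": re.compile(r"\bprescribed\s+([A-Za-z]+)\b", re.IGNORECASE),
}

def structure_note(note: str) -> dict:
    """Pull a few structured fields out of unstructured free text."""
    record = {}
    bp = PATTERNS["blood_pressure"].search(note)
    if bp:
        record["systolic"] = int(bp.group(1))
        record["diastolic"] = int(bp.group(2))
    hr = PATTERNS["heart_rate"].search(note)
    if hr:
        record["heart_rate"] = int(hr.group(1))
    med = PATTERNS["medication"].search(note)
    if med:
        record["medication"] = med.group(1).lower()
    return record

note = "Patient seen today. BP 140/90, HR 72. Prescribed lisinopril for hypertension."
print(structure_note(note))
```

Once the note is reduced to numeric and categorical fields like these, it can sit alongside claims data in the analytics the article describes.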

Whitney Grace, May 31, 2017

Amazon Answers Artificial Intelligence Questions

May 24, 2017

One big question about Amazon is how the company is building its artificial intelligence and machine learning programs.  It was the topic of conversation at the recent Internet Association’s annual gala, where Jeff Bezos, Amazon CEO, discussed it.  GeekWire wrote about Bezos’s appearance at the gala in the article, “Jeff Bezos Explained Amazon’s Artificial Intelligence And Machine Learning.”

The discussion Bezos participated in covered a wide range of topics, including the online economy, Amazon’s media coverage, its business principles, and, of course, artificial intelligence.  Bezos compared the time we are living in to the realms of science fiction, with Amazon at the forefront.  Through Amazon Web Services, the company has clients ranging from software developers to corporations.  Amazon’s goal is to make the technology available to everyone, but deployment is a problem, as is finding personnel with the right expertise.

Amazon realizes that the power of its technology comes from behind the curtain:

I would say, a lot of the value that we’re getting from machine learning is actually happening beneath the surface. It is things like improved search results. Improved product recommendations for customers. Improved forecasting for inventory management. Literally hundreds of other things beneath the surface.

This reminds me of Bitext, an analytics software company based in Madrid, Spain.  Bitext’s technology is used to power machine learning beneath many big companies’ software.  Bitext is the real power behind many analytics projects.
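One of the beneath-the-surface uses Bezos mentions, product recommendations, can be sketched with a simple item co-occurrence count over past orders.  This is a generic illustration of the idea, not Amazon’s actual system, and the product names are made up:

```python
from collections import Counter
from itertools import combinations

def build_cooccurrence(orders):
    """Count how often each pair of products appears in the same order."""
    pairs = Counter()
    for order in orders:
        for a, b in combinations(sorted(set(order)), 2):
            pairs[(a, b)] += 1
    return pairs

def recommend(pairs, product, k=2):
    """Rank the products most often bought alongside `product`."""
    scores = Counter()
    for (a, b), n in pairs.items():
        if a == product:
            scores[b] += n
        elif b == product:
            scores[a] += n
    return [item for item, _ in scores.most_common(k)]

orders = [
    ["echo", "batteries", "hdmi cable"],
    ["echo", "batteries"],
    ["kindle", "case"],
    ["echo", "hdmi cable"],
]
print(recommend(build_cooccurrence(orders), "echo"))
```

Real systems use far richer signals, but the principle is the same: value extracted invisibly from everyday transaction data.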

Whitney Grace, May 24, 2017

Catch the Chatbots Chattering Away

May 17, 2017

Chatbots are not self-aware, but the better-programmed ones are so “intelligent” they can hold a real conversation with a human.  While chatbots are meant to engage humans in conversation, have you ever wondered what would happen if two bots are told to speak with each other?  YouTube user winter blessed decided to pit Mitsuku and Cleverbot against one another.  You can view the results in the video, “Mitsuku vs Cleverbot – AI (Artificial Intelligence) Showdown.”

Mitsuku is a female-styled chatbot that can be accessed like a Flash game, while Cleverbot was built using Cleverscript, a SaaS that teaches people how to build their own chatbots.  While both Mitsuku and Cleverbot are highly praised, neither uses Bitext’s analytics platform to help power chats.  They might benefit from incorporating it into their conversations.

Listening to Mitsuku and Cleverbot is an interesting demonstration of how far chatbots have progressed and how limited they still are.  The pair do comprehend each other, but they end up misinterpreting questions and responding incorrectly.  It is like listening to someone who relies strictly on Google Translate to speak a foreign language.  Their conversation is understandable, but devoid of meaning.  Humans are still needed to add meaning behind the words.

Whitney Grace, May 17, 2017

Machine Learning Going Through a Phase

May 10, 2017

People think that machine learning is like an algorithmic magic wand: someone writes the code, pops in the data, and the computer learns how to do a task.  It is not that easy.  The Bitext blog reveals that machine learning needs assistance in the post “How Phrase Structure Can Help Machine Learning For Text Analysis.”

Machine learning techniques used for text analysis are not that accurate.  The post explains that instead of learning the meaning of words in a sentence according to its structure, all the words are tossed into a bag and analyzed individually.  The context and meaning are lost.  A real-world example is Chinese and Japanese, which use logographic characters (kanji, in Japanese) whose meaning changes based on context.  The result is that both languages have a lot of puns and are a nightmare for text analytics.

As you can imagine, there are problems in Germanic and Latin-based languages too:

Ignoring the structure of a sentence can lead to various types of analysis problems. The most common one is incorrectly assigning similarity to two unrelated phrases such as “Social Security in the Media” and “Security in Social Media” just because they use the same words (although with a different structure).

Besides, this approach has stronger effects for certain types of “special” words like “not” or “if”. In a sentence like “I would recommend this phone if the screen was bigger”, we don’t have a recommendation for the phone, but this could be the output of many text analysis tools, given that we have the words “recommendation” and “phone”, and given that the connection between “if” and “recommend” is not detected.

If you rely solely on the “bag of words” approach for text analysis, the problems only get worse.  That is why phrase structure is very important for text and sentiment analysis.  Bitext incorporates phrase structure and other techniques in the analytics platform it provides to a large search engine company and another tech company that likes fruit.
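The first failure mode in the quoted passage can be reproduced in a few lines: a bag-of-words representation makes the two phrases look identical, while even a crude ordered representation (word bigrams) tells them apart.  This is a minimal sketch, not Bitext’s parser; the tiny stopword list stands in for what many bag-of-words pipelines do:

```python
STOPWORDS = {"in", "the"}  # toy stopword list, as bag-of-words pipelines often use

def bag_of_words(text):
    """Unordered set of lowercased content words: sentence structure is discarded."""
    return {w for w in text.lower().split() if w not in STOPWORDS}

def bigrams(text):
    """Ordered adjacent word pairs: a crude stand-in for phrase structure."""
    words = text.lower().split()
    return set(zip(words, words[1:]))

a = "Social Security in the Media"
b = "Security in Social Media"

print(bag_of_words(a) == bag_of_words(b))  # bags collide: the phrases look the same
print(bigrams(a) == bigrams(b))            # word order separates them
```

A full parser captures far more than bigrams (negation, conditionals like the “if the screen was bigger” example), but even this much structure breaks the false match.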

Whitney Grace, May 10, 2017

Voice Recognition Software Has Huge Market Reach

May 3, 2017

Voice recognition software still feels like a futuristic technology, despite its prevalence in our everyday lives.  WhaTech explains how far voice recognition technology has embedded itself into our habits in “Listening To The Voice Recognition Market.”

The biggest example of speech recognition technology is the automated phone system.  Automated phone systems are used across the board, especially in banks, retail chains, restaurants, and office phone directories.  People usually despise them because they cannot understand responses and tend to put people on hold for extended periods of time.

Despite how much we hate automated phone systems, they are useful, they have gotten better at understanding human speech, and the industry applications are endless:

The Global Voice Recognition Systems Sales Market 2017 report by Big Market Research is a comprehensive study of the global voice recognition market. It covers both current and future prospect scenarios, revealing the market’s expected growth rate based on historical data. For products, the report reveals the market’s sales volume, revenue, product price, market share and growth rate, each of which is segmented by artificial intelligence systems and non-artificial intelligence systems. For end-user applications, the report reveals the status for major applications, sales volume, market share and growth rate for each application, with common applications including healthcare, military and aerospace, communications, and automotive.

Key players in the voice recognition software field are Validsoft, Sensory, Biotrust ID, Voicevault, Voicebox Technologies, Lumenvox, M2SYS, Advanced Voice Recognition Systems, and Mmodal.  These companies would benefit from using Bitext’s linguistic-based analytics platform to enhance their technology’s language learning skills.

Whitney Grace, May 3, 2017


Amazon Aims to Ace the Chatbots

April 26, 2017

Amazon aims to insert itself into every aspect of daily life, and the newest way it does so is with the digital assistant Alexa.  Reuters reports that, “Amazon Rolls Out Chatbot Tools In Race To Dominate Voice-Powered Tech,” explaining how Amazon plans to expand Alexa’s development.  The retail giant recently released the technology behind Alexa to developers so they can build chat features into apps.

Amazon is eager to gain dominance in voice-controlled technology.  Apple and Google both reign supreme when it comes to talking computers, chatbots, and natural language processing.  Amazon has a huge reach, perhaps even greater than Apple and Google, because people have come to rely on it for shopping.  Chatbots have a notorious history for being useless and Microsoft’s Tay even turned into a racist, chauvinist program.

The new development tool is called Amazon Lex, which is hosted in the cloud.  Alexa is already deployed in millions of homes, and it is fed a continuous data stream that is crucial to the AI’s learning:

Processing vast quantities of data is key to artificial intelligence, which lets voice assistants decode speech. Amazon will take the text and recordings people send to apps to train Lex – as well as Alexa – to understand more queries.

That could help Amazon catch up in data collection. As popular as Amazon’s Alexa-powered devices are, such as Echo speakers, the company has sold an estimated 10 million or more.

Amazon Alexa is a competent digital assistant, able to respond to voice commands and even offer voice-only shopping via Amazon.  As noted, Alexa’s power rests in its data collection and natural language processing.  Bitext takes a similar approach but relies on trained linguists to build its analytics platform.

Whitney Grace, April 26, 2017
