Audioburst Tackling Search in an Increasingly Audio World

September 5, 2017

With the advent of speech recognition technology, our smart world is slowly becoming voice activated rather than text based. One company, Audioburst, hopes to cash in on this trend with a new way to search that focuses on audio. A recent TechCrunch article examines the need for such technology and how Audioburst is accomplishing the task: using natural language processing and speech recognition to identify and organize audio data.

 It…doesn’t only match users’ search queries to those exact same words when spoken, either. For example, it knows that someone speaking about the “president” in a program about U.S. politics was referring to “Donald Trump,” even if they didn’t use his name. The audio content is then tagged and organized in a way that computers understand, making it searchable…This allows its search engine to not just point you to a program or show where a topic was discussed, but the specific segment within that show where that discussion took place. (If you choose, you can then listen to the full show, as the content is linked to the source.)
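In data-structure terms, the tagging-and-search described above amounts to an inverted index over tagged audio segments rather than over whole shows. Here is a minimal sketch; the show names, timestamps, and tags are invented for illustration, and a real pipeline would derive the tags from speech recognition plus entity linking:

```python
from collections import defaultdict

# Map each tag to the (show, segment start in seconds) pairs where it appears.
index = defaultdict(list)

def tag_segment(show, start_sec, tags):
    """Register an audio segment under every tag assigned to it.

    Entity linking would resolve a mention like "president" in a U.S.
    politics program to the entity "Donald Trump" before indexing.
    """
    for tag in tags:
        index[tag.lower()].append((show, start_sec))

def search(query):
    """Return the specific segments matching the query, not just shows."""
    return index.get(query.lower(), [])

# Hypothetical data: minute 12 of a politics show discusses the president.
tag_segment("Morning Politics", 720, ["Donald Trump", "U.S. politics"])
print(search("donald trump"))  # [('Morning Politics', 720)]
```

Because each hit carries a segment offset, a player can jump straight to the discussion while still linking back to the full show.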

With this technology, users will no longer need to hold a phone or tablet to conduct searches. Audioburst hopes to begin working with car manufacturers soon to bring truly hands-free search to consumers.

Catherine Lamsfuss, September 5, 2017

Take a Hint, Amazon: Bing Is Not That Great

August 22, 2017

It recently hit the newsstands that Google Home was six times more likely than Amazon Alexa to answer questions.  The Inquirer shares more about this development in the article, “Google Home Is Six Times Smarter Than Amazon’s Echo.”

360i conducted a test using its proprietary software that asked Amazon Alexa and Google Home 3,000 questions.  We do not know what the 3,000 questions were, but some of them involved retail information.  Google drew on its Knowledge Graph to answer questions, while Amazon relied on Bing for search.  Amazon currently controls 70% of the voice assistant market and supports many skills from other manufacturers.  Google, however, is limited in comparison:

By comparison, Google Home has relatively few smart home control chops, relying primarily on IFTTT, which is limited in what it can achieve and often takes a long time between request and execution.

Alexa, on the other hand, can carry out native skill commands in a second or two.

The downside of the two, however, is that Google is Google and Amazon is just not as good. If Echo was able to access the Knowledge Graph, Google Music, and control Chromecasts, then it would be unassailable.

Amazon Alexa and Google Home are rivals, and the fact is that one is the better shopper and the other the better searcher.  While 360i has revealed its results, we need to see the test questions to fully understand how it arrived at the “six times smarter” conclusion.

Whitney Grace, August 22, 2017

Analytics for the Non-Tech Savvy

August 18, 2017

I regularly encounter people who say they are too dumb to understand technology. When people tell themselves this, they hinder their ability to learn and to adapt to a society growing ever more dependent on mobile devices, the Internet, and instantaneous information.  This is especially harmful for business entrepreneurs.  The Next Web explains, “How Business Intelligence Can Help Non-Techies Use Data Analytics.”

The article starts with the statement that business intelligence is changing in a manner equivalent to how Windows 95 made computers more accessible to ordinary people.  The technology gatekeeper is being removed.  Proprietary software and licenses are expensive, but cloud computing and other endeavors are driving the costs down.

Voice interaction is another way BI is coming to the masses:

Semantic intelligence-powered voice recognition is simply the next logical step in how we interact with technology. Already, interfaces like Apple’s Siri, Amazon Alexa and Google Assistant are letting us query and interact with vast amounts of information simply by talking. Although these consumer-level tools aren’t designed for BI, there are plenty of new voice interfaces on the way that are radically simplifying how we query, analyze, process, and understand complex data.


One important component here is the idea of the “chatbot,” a software agent that acts as an automated guide and interface between your voice and your data. Chatbots are being engineered to help users identify data and guide them into getting the analysis and insight they need.

I see this as smart people making their technology available to the rest of us, and it could augment or even improve businesses.  We are on the threshold of this technology becoming commonplace, but does it have practicality attached to it?  Many products and services are commonplace, but if they only offer flashing lights and whistles, what good are they?

Whitney Grace, August 18, 2017

Seriously, Siri? When Voice Interface Goes Wrong

July 17, 2017

A Reddit Shower Thoughts thread offers some amusing moments in voice interfaces, mainly related to Siri switching on when least expected. Most of the anecdotes involve the classroom, either during lecture or test time. Siri has a tendency to check in at the worst possible moment, especially for people who are not supposed to be on their phones. For example,

My friend thought it would be funny to change my name on my phone to Sexy Beast, unfortunately I was later sitting in a biology lecture of about 150 people when Siri said loudly “I didn’t quite get [that] Sexy Beast.”…I keep thinking about shouting “Hey Siri, call Mum” whilst in the middle of a house party, and then watch how many people frantically reach for their phones!

For the latter hypothetical, other users pointed out that it would not work because Siri is listening for the voice of the owner. But we have all experienced Siri responding when we had no intention of beckoning her. If you use certain words like “seriously” or “Syria,” she often awkwardly pops into the conversation. One user relates that a teacher asked the class for the capital city of China, and while the class sat in silence, Siri correctly responded, “Beijing.” In this case, Siri earned a better grade. Other people report Siri spilling the beans during exams when cheaters try to keep their phones nearby. All in a day’s work.

Chelsea Kerwin, July 17, 2017

WaveNet Machine-Generated Speech from DeepMind Eclipses Competitor Technology

July 13, 2017

The article on Bloomberg titled Google’s DeepMind Achieves Speech-Generation Breakthrough touts a 50% improvement over current technology for machine speech. DeepMind developed an AI called WaveNet that mimics human speech by learning the sound waves of human voices. In testing, the machine-generated speech beat existing technology but still fell short of actual human speech.

The article expands,

Speech is becoming an increasingly important way humans interact with everything from mobile phones to cars. Amazon.com Inc., Apple Inc., Microsoft Inc. and Alphabet Inc.’s Google have all invested in personal digital assistants that primarily interact with users through speech. Mark Bennett, the international director of Google Play, which sells Android apps, told an Android developer conference in London last week that 20 percent of mobile searches using Google are made by voice, not written text.

It is difficult to quantify the ROI on the $533 million Google spent to acquire DeepMind in 2014, since most of its advancements are not especially commercial. Google did credit DeepMind with the technology that helped slash power needs by 40%. But this breakthrough involves far too much computational power to lend itself to commercial applications just yet. However, Google must love that, with the world watching, DeepMind continues to outperform competitors in AI advancement.

Chelsea Kerwin, July 13, 2017

Alexa Is Deaf to Vocal Skill Search

June 29, 2017

Here is a surprising fact: Amazon does not have a vocal search for Alexa skills.  Amazon prides itself on being a top technology developer and retailer, but it fails to allow Alexa users to search for a specific skill.  Sure, it will list the top skills or the newest ones, but it does not let you ask for any specifics.  TechCrunch has the news story: “Amazon Rejects AI2’s Alexa Skill Voice-Search Engine.  Will It Build One?”

The Allen Institute for Artificial Intelligence decided to take on the task itself and built “Skill Search.”  Skill Search works very simply: users state what they want, and Skill Search lists the skills that can fulfill the request.  When AI2 submitted Skill Search to Amazon, it was rejected on the grounds that Amazon does not want “skills to recommend other skills.”  This is a pretty common business practice, and Amazon does state on its policy page that skills of this nature are barred.  Still, Amazon is missing an opportunity:

It would seem that having this kind of skill search engine would be advantageous to Amazon. It provides a discovery opportunity for skill developers looking to get more users, and highlighting the breadth of skills could make Alexa look more attractive compared to alternatives like Google Home that don’t have as well established of an ecosystem.

Amazon probably has a vocal search skill on its project list and simply does not have enough information to share yet.  Opening up vocal search would give Amazon another revenue stream for Alexa.  It is probably working on perfecting the skill’s language comprehension.  Hey Amazon, maybe you should consider Bitext’s language offerings for an Alexa skill search?

Whitney Grace, June 29, 2017

How People Really Use Smart Speakers Will Not Shock You

June 13, 2017

Business Insider tells us a story we already know, but with a new spin: “People Mainly Use Smart Speakers For Simple Requests.”  The article begins with the claim that voice computing is the next stage in computer evolution.  The hype is that current digital assistants like Alexa, Siri, and Google Assistant will make our lives easier by automating certain tasks and always being ready to answer our every beck and call.

As is to be expected, and despite digital assistants’ advancements, people use them for the simplest tasks.  These include playing music, getting the weather report, and answering questions via Wikipedia.  People also buy products through their smart speakers, much to Amazon’s delight:

Voice-based artificial intelligence may not yet live up to its hype, but that’s not much of a surprise. Even Amazon CEO Jeff Bezos said last year that the tech is closer to “the first guy up at bat” than the first inning of its life. But Bezos will surely be happy when more than just 11% of smart speaker owners buy products online through their devices.

Voice-related technology has yet to even touch the horizon of what will be commonplace ten years from now.  Bitext’s computational linguistic analytics platform that teaches computers and digital assistants to speak human is paving the way towards that horizon.

Whitney Grace, June 13, 2017

The Next Digital Assistant Is Apple Flavored

June 6, 2017

Amazon Alexa dominated the digital assistant market until Google released Google Assistant.  Both assistants are accessible through smart devices, but more readily through smart speakers that react to vocal commands.  Google and Amazon need to move over, because Apple wants a place on the coffee table.  Mashable explores Apple’s latest invention in, “Apple’s Answer To The Amazon Echo Could Be Unveiled As Early As June.”

Guess who will be the voice behind Apple’s digital assistant?  Why, Siri, of course!  While Apple can already hear your groans, the shiny new smart speaker will distract you.  Apple is fantastically good at packaging and branding its technology to be chic, minimalist, and trendy.  Will the new packaging be enough to win Siri fans?  Apple should consider deploying Bitext’s computational linguistic platform, which renders human speech more comprehensible to computers and even includes sentiment analysis.  This is an upgrade Siri desperately needs.

Apple is also in desperate need of an upgrade to meet the increasing demand for smart home products:

Up until now, people married to the Apple ecosystem haven’t had many smart-home options. That’s because the two dominant players, Echo and Google Home, don’t play nice with Siri. So if people wanted to stick with Apple, they only really had one option: Wait it out.
That’s about to change as the new Essential Home will work with Apple’s voice assistant. And, as an added bonus, the Essential Home looks nice. So nice, in fact, that it could sway Apple fans who are dying to get in on the smart-home game but don’t want to wait any longer for Apple to get its act together.

The new Apple digital assistant will also come with a screen, possibly a way to leverage more of the market and compete with the new Amazon Echo Show.  However, I thought the point of having a smart speaker was to decrease a user’s dependency on screen-related devices.  That’s going to be a hard habit to break, but it’s about time Apple added its flavor to the digital assistant shelf.

Whitney Grace, June 6, 2017

Helping Machines Decode the World of Online Content

June 5, 2017

With voice search poised to overtake conventional search, startups like WordLift are creating AI-based algorithms that help machines better understand content created by humans.

The Next Web, in an article titled “WordLift Is Helping Robots Understand What Online Articles Are Really About,” says:

The evolution of today’s search engines and the rapid adoption of personal assistants (PAs) – capable of understanding user intent and behaviors through available data – require an upgrade of the existing editorial workflow for bloggers, independent news providers, and content marketers.

Voice-activated search assistants rely on metadata to understand what content is about. Metadata alone, however, cannot tell the AI what the user intent is. WordLift intends to solve this problem by applying advanced AI to understand the content and make it friendly to voice search engines. Structured data and analysis of textual content are among the strategies WordLift will use.
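Structured data of this kind is typically expressed as schema.org JSON-LD embedded in the page. A rough sketch of what such markup generation looks like follows; the specific properties and entity links are illustrative, not WordLift’s actual output:

```python
import json

def article_jsonld(headline, author, about_entities):
    """Build a minimal schema.org Article markup block.

    Linking "about" entries to external entity pages via "sameAs" is
    what lets an assistant disambiguate what the article discusses.
    """
    return {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "author": {"@type": "Person", "name": author},
        # Each entity the article is "about", disambiguated by a URI.
        "about": [
            {"@type": "Thing", "name": name, "sameAs": uri}
            for name, uri in about_entities
        ],
    }

markup = article_jsonld(
    "Helping Machines Decode the World of Online Content",
    "Vishal Ingole",
    [("Voice search", "https://en.wikipedia.org/wiki/Voice_search")],
)
print(json.dumps(markup, indent=2))
```

A publisher would drop the resulting JSON into a `<script type="application/ld+json">` tag so crawlers and assistants can read the entity links without parsing the prose.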

Vishal Ingole, June 5, 2017

Voice Assistant Apps Have Much Room to Grow

May 31, 2017

Recent excitement around voice assistants is largely based on the idea that, eventually, a thriving app market will develop around them. However, reports Recode, “Alexa and Google Assistant Have a Problem: People Aren’t Sticking with Voice Apps They Try.” Though sales of Amazon’s Alexa and the Google Assistant platforms over the holidays were encouraging, startup VoiceLabs recently issued a report that indicates most apps entice few users to give them a try. Furthermore, those who have dabbled in voice apps have apparently found little to tempt them back. See the article for some statistics or the report for more. Writer Jason Del Rey observes:

The statistics underscore the difficulty Amazon and Google are having in getting Echo and Home owners to discover and use new voice apps on their platforms. Instead, many consumers are sticking to off-the-shelf actions like streaming music, reading audiobooks and controlling lights in their homes.


Those are all good use cases for the voice platforms, but not sufficient to build an ecosystem that will keep software developers engaged and lead to new transformative revenue streams. As a result, the numbers highlight the opportunity for Amazon, Google or others like Apple to stand out by helping both consumers and developers solve these discovery and retention problems.

The founders of VoiceLabs see a niche, and they are jumping right into it. Amazon and Google, thus far, supply only limited usage data to would-be app developers, so VoiceLabs is lending them its own voice analytics tool, VoiceInsights. The company is counting on the app market to pick up and is determined to help it along. So far, the tool is free; VoiceLabs expects to start charging once Amazon and/or Google provide a way to monetize apps. When that happens, developers will already be comfortable with VoiceLabs. Well played. Probably. Founded in May 2016, VoiceLabs is based in San Francisco.

We, too, are paying close attention to the rise of voice assistants and their related apps. Watch for the debut of our new information service, Beyond Alexa.

Cynthia Murrell, May 31, 2017
