Analytics for the Non-Tech Savvy

August 18, 2017

I regularly encounter people who say they are too dumb to understand technology. When people tell themselves this, they hinder their own learning and cannot adapt to a society that is growing more dependent on mobile devices, the Internet, and instantaneous information.  This is especially harmful for business entrepreneurs.  The Next Web explains, “How Business Intelligence Can Help Non-Techies Use Data Analytics.”

The article starts with the statement that business intelligence is changing in a manner equivalent to how Windows 95 made computers more accessible to ordinary people.  The technology gatekeeper is being removed.  Proprietary software and licenses are expensive, but cloud computing and other endeavors are driving the costs down.

Voice interaction is another way BI is coming to the masses:

Semantic intelligence-powered voice recognition is simply the next logical step in how we interact with technology. Already, interfaces like Apple’s Siri, Amazon Alexa and Google Assistant are letting us query and interact with vast amounts of information simply by talking. Although these consumer-level tools aren’t designed for BI, there are plenty of new voice interfaces on the way that are radically simplifying how we query, analyze, process, and understand complex data.


One important component here is the idea of the “chatbot,” a software agent that acts as an automated guide and interface between your voice and your data. Chatbots are being engineered to help users identify data and guide them into getting the analysis and insight they need.
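To make the “chatbot as interface between your voice and your data” idea concrete, here is a minimal sketch in Python. The data set, question patterns, and matching logic are all invented for illustration and bear no relation to any actual BI product’s API:

```python
# Toy chatbot-style interface: map a natural-language question onto a
# small data set with naive keyword matching. Hypothetical data.
SALES = {"january": 120_000, "february": 95_000, "march": 143_000}

def answer(question: str) -> str:
    """Answer a spoken-style question about the SALES data."""
    q = question.lower()
    # If a month is named, report that month's figure.
    for month, total in SALES.items():
        if month in q:
            return f"Sales for {month.title()} were ${total:,}."
    # Otherwise handle a simple superlative query.
    if "best" in q or "highest" in q:
        best = max(SALES, key=SALES.get)
        return f"The best month was {best.title()} at ${SALES[best]:,}."
    return "Sorry, I did not understand the question."

print(answer("What were sales in March?"))
print(answer("Which month was our best?"))
```

A real BI chatbot would replace the keyword matching with intent classification and entity extraction, but the shape of the interaction — question in, data-backed sentence out — is the same.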

I see this as smart people making their technology available to the rest of us, and it could augment or even improve businesses.  We are on the threshold of this technology becoming commonplace, but is there any practicality attached to it?  Many products and services are commonplace, but if they only have flashing lights and whistles, what good are they?

Whitney Grace, August 18, 2017

Seriously, Siri? When Voice Interface Goes Wrong

July 17, 2017

The article on Reddit titled Shower Thoughts offers some amusing moments in voice interfaces, mainly related to Siri switching on when least expected. Most of the anecdotes involve the classroom environment either during lecture or test time. Siri has a tendency to check in at the worst possible time, especially for people who are not supposed to be on their phone. For example,

My friend thought it would be funny to change my name on my phone to Sexy Beast, unfortunately I was later sitting in a biology lecture of about 150 people when Siri said loudly “I didn’t quite get [that] Sexy Beast.”…I keep thinking about shouting “Hey Siri, call Mum” whilst in the middle of a house party, and then watch how many people frantically reach for their phones!

For the latter hypothetical, other users pointed out that it would not work because Siri is listening for the voice of the owner. But we have all experienced Siri responding when we had no intention of beckoning her. If you use certain words like “seriously” or “Syria,” she often awkwardly pops into the conversation. One user relates that a teacher asked the class for the capital city of China, and while the class sat in silence, Siri correctly responded, “Beijing.” In this case, Siri earned a better grade. Other people report Siri spilling the beans during exams when cheaters try to keep their phones nearby. All in a day’s work.

Chelsea Kerwin, July 17, 2017

WaveNet Machine-Generated Speech from DeepMind Eclipses Competitor Technology

July 13, 2017

The article on Bloomberg titled Google’s DeepMind Achieves Speech-Generation Breakthrough touts a 50% improvement over current technology for machine speech. DeepMind developed an AI called WaveNet that mimics human speech by learning the sound waves of human voices. In testing, the machine-generated speech beat existing technology but still falls short of actual human speech.

The article expands,

Speech is becoming an increasingly important way humans interact with everything from mobile phones to cars. Amazon.com Inc., Apple Inc., Microsoft Inc. and Alphabet Inc.’s Google have all invested in personal digital assistants that primarily interact with users through speech. Mark Bennett, the international director of Google Play, which sells Android apps, told an Android developer conference in London last week that 20 percent of mobile searches using Google are made by voice, not written text.

It is difficult to quantify the ROI for the $533M Google spent to acquire DeepMind in 2014, since most of its advancements are not directly commercial. Google did credit DeepMind with technology that helped slash power needs by 40%. But this breakthrough involves far too much computational power to lend itself to commercial applications. Still, Google must love that, with the world watching, DeepMind continues to outperform competitors in AI advancement.

Chelsea Kerwin, July 13, 2017

Alexa Is Deaf to Vocal Skill Search

June 29, 2017

Here is a surprising fact: Amazon does not have a vocal search for Alexa skills.  Amazon prides itself on being a top technology developer and retailer, but it fails to allow Alexa users to search for a specific skill.  Sure, it will list the top skills or the newest ones, but it does not let you ask for specifics.  TechCrunch has the news story: “Amazon Rejects AI2’s Alexa Skill Voice-Search Engine. Will It Build One?”

The Allen Institute for Artificial Intelligence decided to take on the task itself and built “Skill Search.”  Skill Search works very simply: users state what they want, and Skill Search lists skills that can fulfill the request.  When AI2 submitted Skill Search to Amazon, it was rejected on the grounds that Amazon does not want “skills to recommend other skills.”  This is a fairly common business practice, and Amazon did state on its policy page that skills of this nature are barred.  Still, Amazon is missing an opportunity:

It would seem that having this kind of skill search engine would be advantageous to Amazon. It provides a discovery opportunity for skill developers looking to get more users, and highlighting the breadth of skills could make Alexa look more attractive compared to alternatives like Google Home that don’t have as well established of an ecosystem.
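To see what a skill search engine does at its simplest, here is a toy sketch of matching a spoken request against a catalog of skills. The skill names, descriptions, and scoring are invented for illustration and are far cruder than AI2’s actual Skill Search:

```python
# Toy skill search: rank skills by word overlap between the request
# and each skill's description. Catalog entries are hypothetical.
SKILLS = {
    "Daily Horoscope": "astrology zodiac star sign reading",
    "Pizza Ordering": "order pizza delivery food",
    "Sleep Sounds": "relax white noise rain sleep",
}

def skill_search(request: str, top_n: int = 2) -> list:
    """Return the skills whose descriptions best overlap the request."""
    words = set(request.lower().split())
    scored = [(len(words & set(desc.split())), name)
              for name, desc in SKILLS.items()]
    # Keep only skills with at least one matching word, best first.
    ranked = [name for score, name in sorted(scored, reverse=True) if score > 0]
    return ranked[:top_n]

print(skill_search("I want to order a pizza"))
```

A production system would use semantic matching rather than literal word overlap, but the pipeline — parse the request, score the catalog, return the best candidates — is the essence of the feature Amazon rejected.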

Amazon probably has a vocal search skill on its project list but does not yet have enough information to share.  Opening vocal search would give Amazon another revenue stream for Alexa.  It is probably working on perfecting the skill’s language comprehension.  Hey Amazon, maybe you should consider Bitext’s language offerings for an Alexa skill search?

Whitney Grace, June 29, 2017

How People Really Use Smart Speakers Will Not Shock You

June 13, 2017

Business Insider tells us a story we already know, but with a new spin: “People Mainly Use Smart Speakers For Simple Requests.”  The article begins by noting that vocal computing is the next stage in computer evolution.  The hype is that current digital assistants like Alexa, Siri, and Google Assistant will make our lives easier by automating certain tasks and always being ready to answer our every beck and call.

As is to be expected, and despite digital assistants’ advancements, people use them for the simplest tasks.  These include playing music, getting the weather report, and answering questions via Wikipedia.  People also buy products on their smart speakers, much to Amazon’s delight:

Voice-based artificial intelligence may not yet live up to its hype, but that’s not much of a surprise. Even Amazon CEO Jeff Bezos said last year that the tech is closer to “the first guy up at bat” than the first inning of its life. But Bezos will surely be happy when more than just 11% of smart speaker owners buy products online through their devices.

Voice-related technology has yet to even touch the horizon of what will be commonplace ten years from now.  Bitext’s computational linguistic analytics platform that teaches computers and digital assistants to speak human is paving the way towards that horizon.

Whitney Grace, June 13, 2017

The Next Digital Assistant Is Apple Flavored

June 6, 2017

Amazon Alexa dominated the digital assistant market until Google released Google Assistant.  Both assistants are accessible through smart devices, but more readily through smart speakers that react to vocal commands.  Google and Amazon need to move over, because Apple wants a place on the coffee table.  Mashable explores Apple’s latest invention in, “Apple’s Answer To The Amazon Echo Could Be Unveiled As Early As June.”

Guess who will be the voice behind Apple’s digital assistant?  Why Siri, of course!  While Apple can already hear your groans, the shiny, new smart speaker will distract you.  Apple is fantastically good at packaging and branding its technology to be chic, minimalist, and trendy.  Will the new packaging be enough to win Siri fans?  Apple should consider deploying Bitext’s computational linguistic platform, which renders human speech more comprehensible to computers and even includes sentiment analysis.  This is an upgrade Siri desperately needs.

Apple also desperately needs to catch up with the increasing demand for smart home products:

Up until now, people married to the Apple ecosystem haven’t had many smart-home options. That’s because the two dominant players, Echo and Google Home, don’t play nice with Siri. So if people wanted to stick with Apple, they only really had one option: Wait it out.
That’s about to change as the new Essential Home will work with Apple’s voice assistant. And, as an added bonus, the Essential Home looks nice. So nice, in fact, that it could sway Apple fans who are dying to get in on the smart-home game but don’t want to wait any longer for Apple to get its act together.

The new Apple digital assistant will also come with a screen, possibly a way to leverage more of the market and compete with the new Amazon Echo Show.  However, I thought the point of having a smart speaker was to decrease a user’s dependency on screen-related devices.  That’s going to be a hard habit to break, but it’s about time Apple added its flavor to the digital assistant shelf.

Whitney Grace, June 6, 2017

Helping Machines Decode the World of Online Content

June 5, 2017

With voice search poised to overtake conventional search, startups like WordLift are creating AI-based algorithms to help machines better understand content created by humans.

The Next Web, in an article titled WordLift Is Helping Robots Understand What Online Articles Are Really About, says:

The evolution of today’s search engines and the rapid adoption of personal assistants (PAs) – capable of understanding user intent and behaviors through available data – require an upgrade of the existing editorial workflow for bloggers, independent news providers, and content marketers.

Voice-activated search assistants rely on metadata to understand what content is about, but metadata alone cannot tell the AI what the user intends. WordLift aims to solve this problem by applying AI to understand the content and make it friendly to voice search engines. Structured data and analysis of textual content are among the strategies WordLift uses to achieve this.
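To see what “structured data” means in practice, here is a sketch of schema.org JSON-LD markup of the kind tools like WordLift embed in pages. The field values (author name, topics) are invented for illustration; the vocabulary itself is standard schema.org:

```python
import json

# Hypothetical article metadata expressed as schema.org JSON-LD.
# Embedding this in a page tells crawlers and voice assistants the
# article's type, topic, and key entities explicitly.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Helping Machines Decode the World of Online Content",
    "author": {"@type": "Person", "name": "Jane Example"},
    "about": ["voice search", "natural language processing"],
    "datePublished": "2017-06-05",
}

# JSON-LD is conventionally embedded in a <script> tag of this type.
markup = '<script type="application/ld+json">\n%s\n</script>' % json.dumps(
    article, indent=2
)
print(markup)
```

With markup like this, a voice assistant answering “what is this article about?” can read the `about` entities directly instead of guessing from raw text.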

Vishal Ingole, June 5, 2017

Voice Assistant Apps Have Much Room to Grow

May 31, 2017

Recent excitement around voice assistants is largely based on the idea that, eventually, a thriving app market will develop around them. However, reports Recode, “Alexa and Google Assistant Have a Problem: People Aren’t Sticking with Voice Apps They Try.” Though sales of Amazon’s Alexa and the Google Assistant platforms over the holidays were encouraging, startup VoiceLabs recently issued a report that indicates most apps entice few users to give them a try. Furthermore, those who have dabbled in voice apps have apparently found little to tempt them back. See the article for some statistics or the report for more. Writer Jason Del Rey observes:

The statistics underscore the difficulty Amazon and Google are having in getting Echo and Home owners to discover and use new voice apps on their platforms. Instead, many consumers are sticking to off-the-shelf actions like streaming music, reading audiobooks and controlling lights in their homes.


Those are all good use cases for the voice platforms, but not sufficient to build an ecosystem that will keep software developers engaged and lead to new transformative revenue streams. As a result, the numbers highlight the opportunity for Amazon, Google or others like Apple to stand out by helping both consumers and developers solve these discovery and retention problems.

The founders of VoiceLabs see a niche, and they are jumping right into it. Amazon and Google, thus far, supply only limited usage data to would-be app developers, so VoiceLabs is lending them its own voice analytics tool, VoiceInsights. The company is counting on the app market to pick up and is determined to help it along. So far, the tool is free; the company expects to start charging for it once Amazon and/or Google provide a way to monetize apps. When that happens, developers will already be comfortable with VoiceLabs. Well played. Probably. Founded in May 2016, VoiceLabs is based in San Francisco.

We, too, are paying close attention to the rise of voice assistants and their related apps. Watch for the debut of our new information service, Beyond Alexa.

Cynthia Murrell, May 31, 2017

Innovations in Language Understanding

May 25, 2017

AI and robotics have advanced significantly, but machines have yet to achieve a comparable level of sophistication in language understanding. Work is in progress, as these trends indicate.

Abbyy in an eBook titled Killer Language Understanding Innovations says:

Pioneering advances in natural language processing and machine vision are re-defining the computing landscape. And disrupting every single industry in the process.

One major trend is training chatbots to automate customer service end to end. If chatbots become capable of interacting in natural language, they could revolutionize several industries. Another trend is combining geospatial data with language understanding to thwart terrorist threats.

In the corporate domain, decision making can become easier if AI can decipher an organization’s data and provide real-time, actionable input. Similarly, data extraction, which is still largely a manual process, can be expedited with machines’ optical recognition capabilities.

These are a few of the trends dominating language-understanding innovation. You can read more in Abbyy’s eBook.

Vishal Ingole, May 25, 2017

New AI on Personal Digital Assistant Horizon

May 22, 2017

Computer scientists at Princeton University have developed a technology that allows the user to fully edit voice recordings using an intelligent algorithm.

Science Daily in a report titled Technology Edits Voices Like Text says that:

The software, named VoCo, provides an easy means to add or replace a word in an audio recording of a human voice by editing a transcript of the recording. New words are automatically synthesized in the speaker’s voice even if they don’t appear anywhere else in the recording.

The system can recreate the user’s voice using an intelligent algorithm, which makes adding words to pre-recorded audio easier. The same technology can also be used to create a custom robotic voice for digital personal assistants.

Currently available audio-editing software can snip and patch small segments of a recording but cannot add nonexistent words. After analyzing the entire recording, VoCo’s algorithm can synthesize any word without difficulty. At this rate, do we see the current breed of rock and pop artists disappearing?

Vishal Ingole, May 22, 2017
