Talking to Software: Policeware Vendors Ignored This Next Big Thing

June 9, 2018

On the flight from somewhere in Europe to Kentucky, I reflected on the demonstrations, presentations, and sales pitches to which I was exposed at a large international law enforcement and intelligence conference.

I realized that none of the presenters or enthusiastic marketers tried to tell me about chatbots. The term refers to a basket of technologies that allow a user to get actionable information without tapping or keyboarding.

When the flight landed, I noted a link in my feed stream to “Chatbots Were the Next Big Thing: What Happened?” My personal experience from four days of talking to humans and listening to explainers was that chatbots were marginalized, maybe left in the office file cabinet.

The write up states:

…Who would monopolize the field, not whether chatbots would take off in the first place:

“Will a single platform emerge to dominate the chatbot and personal assistant ecosystem?”

One year on, we have an answer to that question.

No.

Because there isn’t even an ecosystem for a platform to dominate.

That seems clear.

The write up points out that chatbots were supposed to marginalize applications. One of the more interesting items of information in the article is a collection of chatbots stuck for an answer.

Net net: Like quantum computing, smart software has potential. But technologies with potential have been just around the corner for many years.

Marketing, confident assertions, and bold predictions are one thing. Delivering high value results remains a different task.

Stephen E Arnold, June 9, 2018

Listening and Voice Search: A Happy Tech Couple

April 26, 2018

Voice search is the next big thing in the search industry. This is a trend widely accepted among tech thinkers. With that in mind, it’s a good time to look at your own personal use and your business uses for search and ask whether you are ready. Chances are, you aren’t. We learned more from a recent article in The Next Web, “By 2020 30% of Search Will Be Voice Conducted. Here’s What That Means for Your Business.”

According to the story:

“I would also invest in trying to get clients to review my restaurant on Yelp and Tripadvisor so that when people click through, they will see relevant and recent information on my restaurant. If I were providing services, I would make an effort to get listed in Yelp and Google My Business to increase my chances of showing up.”

Another big way experts recommend preparing is to think about SEO in a totally different way. The ways we search with our fingertips and with our voices are quite different. In short, we tend to say less than we type when searching, so SEO will have to be even more precise than before.

However, “Amazon’s Alexa Had a Flaw That Let Eavesdroppers Listen In” reminds Beyond Search that in order to answer a question, the devices have to listen. Amazon’s Alexa had a “flaw” which allowed third parties to use the device like an old school “bug.” According to the write up, Amazon fixed this problem.

How many other always on listening devices are just listening, analyzing, and sending data into a federated database?

Toss in online search and cross correlation, and one has an intriguing way to gather intelligence.

Stephen E Arnold, April 26, 2018

Amazon and Google Voice Recognition Easily Fooled

January 31, 2018

Voice recognition technology has vastly improved over the past decade, but it still has a long way to go before it responds like a quick-thinking science-fiction computer.  CNET shares how funny and harmful voice recognition technology can be in the article, “Fooling Amazon and Google’s Voice Recognition Isn’t Hard.”  What exactly is the problem with voice recognition technology?  If someone sounds like you, smart speakers like Google Home or Amazon Echo with Alexa will allow that person to use your credit cards and access your personal information.

The smart speakers can be trained to recognize voices, so that they can respond according to an individual.  For example, families can program the smart speakers to recognize individual members so each person can access his or her personal information.  It is quite easy to fool Alexa’s and Google’s voice recognition.  Purchases can be made vocally and personal information can be exposed.  There are ways to take precautions, such as disabling voice purchasing, and there are features to turn off the broadcasting of your personal information.

In their defense, Google said voice recognition should not be used as a security feature:

Google warns you when you first set up voice recognition that a similar voice might be able to access your info. In response to this story, Kara Stockton on the Google Assistant team offered the following statement over email: ‘Users shouldn’t rely upon Voice Match as a security feature. It is possible for a user to not be identified, or for a guest to be identified as a connected user. Those cases are rare, but they do exist and we’re continuing to work on making the product better.’

Maybe silence is golden after all.  It keeps credit cards and purchases free from vocal stealing.

Whitney Grace, January 31, 2018

AI Predictions for 2018

October 11, 2017

AI just keeps gaining steam, and is positioned to be extremely influential in the year to come. KnowStartup describes “10 Artificial Intelligence (AI) Technologies that Will Rule 2018.” Writer Biplab Ghosh introduces the list:

Artificial Intelligence is changing the way we think of technology. It is radically changing the various aspects of our daily life. Companies are now significantly making investments in AI to boost their future businesses. According to a Narrative Science report, just 38 percent of the companies surveyed used artificial intelligence in 2016—but by 2018, this percentage will increase to 62%. Another study performed by Forrester Research predicted an increase of 300% in investment in AI this year (2017), compared to last year. IDC estimated that the AI market will grow from $8 billion in 2016 to more than $47 billion in 2020. ‘Artificial Intelligence’ today includes a variety of technologies and tools, some time-tested, others relatively new.

We are not surprised that the top three entries are natural language generation, speech recognition, and machine learning platforms, in that order. Next are virtual agents (aka “chatbots” or “bots”), then decision management systems, AI-optimized hardware, deep learning platforms, robotic process automation, text analytics & natural language processing, and biometrics. See the write-up for details on each of these topics, including some top vendors in each space.

Cynthia Murrell, October 11, 2017

New Beyond Search Overflight Report: The Bitext Conversational Chatbot Service

September 25, 2017

Stephen E Arnold and the team at Arnold Information Technology analyzed Bitext’s Conversational Chatbot Service. The BCBS taps Bitext’s proprietary Deep Linguistic Analysis Platform to provide greater accuracy for chatbots regardless of platform.

Arnold said:

The BCBS augments chatbot platforms from Amazon, Facebook, Google, Microsoft, and IBM, among others. The system uses specific DLAP operations to understand conversational queries. Syntactic functions, semantic roles, and knowledge graph tags increase the accuracy of chatbot intent and slotting operations.

One unique engineering feature of the BCBS is that specific Bitext content processing functions can be activated to meet specific chatbot applications and use cases. DLAP supports more than 50 languages. A BCBS licensee can activate additional language support as needed. A chatbot may be designed to handle English language queries, but Spanish, Italian, and other languages can be activated via an instruction.
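To make the idea of “intent and slotting operations” concrete, here is a minimal sketch of what intent detection with slot filling looks like. The intent names, slot labels, and regular-expression approach below are illustrative inventions, not Bitext’s actual API or method:

```python
# Toy sketch of intent detection with slot filling. The intents, slots,
# and regex patterns are hypothetical; a production system like the one
# described would use deep linguistic analysis, not regexes.
import re

INTENT_PATTERNS = {
    # intent name -> pattern whose named groups act as slots
    "book_flight": re.compile(
        r"book .*flight to (?P<destination>\w+)(?: on (?P<date>\w+))?"),
    "check_weather": re.compile(r"weather in (?P<city>\w+)"),
}

def parse_utterance(text: str) -> dict:
    """Return the first matching intent and its filled slots."""
    text = text.lower().strip()
    for intent, pattern in INTENT_PATTERNS.items():
        match = pattern.search(text)
        if match:
            slots = {k: v for k, v in match.groupdict().items() if v}
            return {"intent": intent, "slots": slots}
    return {"intent": "unknown", "slots": {}}

print(parse_utterance("Please book a flight to Madrid on Friday"))
# {'intent': 'book_flight', 'slots': {'destination': 'madrid', 'date': 'friday'}}
```

The point of the syntactic and semantic tagging BCBS adds is precisely that brittle pattern matching like this breaks down on real conversational language.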

Dr. Antonio Valderrabanos said:

People want devices that understand what they say and intend. BCBS (Bitext Chatbot Service) allows smart software to take the intended action. BCBS allows a chatbot to understand context and leverage deep learning, machine intelligence, and other technologies to turbo-charge chatbot platforms.

Based on ArnoldIT’s test of the BCBS, tagging accuracy jumped by as much as 70 percent. Another surprising finding was that the time required to perform content tagging decreased.

Paul Korzeniowski, a member of the ArnoldIT study team, observed:

The Bitext system handles a number of difficult content processing issues easily. Specifically, the BCBS can identify negation regardless of the structure of the user’s query. The system can understand double intent; that is, a statement which contains two or more intents. BCBS is one of the most effective content processing systems to deal correctly  with variability in human statements, instructions, and queries.
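The two behaviors Korzeniowski mentions, negation and double intent, can be illustrated with a toy sketch. The splitting and negation heuristics below are invented stand-ins, not Bitext’s actual linguistic analysis, which handles far more structural variety:

```python
# Toy illustration of "double intent" (one utterance, two requests) and
# negation detection. These heuristics are hypothetical simplifications.
NEGATION_CUES = {"not", "don't", "never", "no"}

def split_intents(utterance: str) -> list[str]:
    """Split a compound utterance into candidate single-intent clauses."""
    clauses = []
    for part in utterance.replace(";", " and ").split(" and "):
        part = part.strip()
        if part:
            clauses.append(part)
    return clauses

def is_negated(clause: str) -> bool:
    """Flag clauses containing a negation cue, wherever it appears."""
    words = {w.strip(".,!?").lower() for w in clause.split()}
    return bool(words & NEGATION_CUES)

utterance = "Cancel my order and don't send a confirmation email"
for clause in split_intents(utterance):
    print(clause, "| negated:", is_negated(clause))
```

Getting the second clause right matters: a system that misses the negation would happily send the email the user just asked it not to send.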

Bitext’s BCBS and DLAP solutions deliver higher accuracy, enable more reliable sentiment analyses, and even output critical actor-action-outcome data. Such data are invaluable for disambiguation in Web and enterprise search applications, content processing for discovery solutions used in fraud detection and law enforcement, and consumer-facing mobile applications.

Because Bitext was one of the first platform solution providers, the firm was able to identify market trends and create its unique BCBS service for major chatbot platforms. The company focuses solely on solving problems common to companies relying on machine learning and, as a result, has done a better job delivering such functionality than other firms have.

A copy of the 22 page Beyond Search Overflight analysis is available directly from Bitext at this link on the Bitext site.

Once again, Bitext has broken through the barriers that block multi-language text analysis. The company’s Deep Linguistics Analysis Platform supports more than 50 languages at a lexical level and more than 20 at a syntactic level, and it makes the company’s technology available for a wide range of applications in Big Data, Artificial Intelligence, social media analysis, text analytics, and the new wave of products designed for voice interfaces supporting multiple languages, such as chatbots. Bitext’s breakthrough technology solves many complex language problems and integrates machine learning engines with linguistic features. The Deep Linguistics Analysis Platform allows seamless integration with commercial, off-the-shelf content processing and text analytics systems. Bitext’s innovative system reduces costs for processing multilingual text for government agencies and commercial enterprises worldwide. The company has offices in Madrid, Spain, and San Francisco, California. For more information, visit www.bitext.com.

Kenny Toth, September 25, 2017

Alexa Gets a Physical Body

September 20, 2017

Alexa did not really get a physical robot body; instead, Bionik Laboratories developed an Alexa skill to control its ARKE lower-body exoskeleton.  The news comes from iReviews’s article, “Amazon’s Alexa Can Control An Exoskeleton With Verbal Instructions.”

This is the first time Alexa has ever been connected to an exoskeleton and it could potentially lead to amazing breakthroughs in prosthetics.  Bionik Laboratories developed the exoskeleton to help older people and those with lower body impairments.  Users can activate the exoskeleton through Alexa with simple commands like, “I’m ready to stand” or “I’m ready to walk.”
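At its core, a voice skill like this maps a small set of recognized phrases onto device commands. The sketch below is a hypothetical simplification; Bionik’s actual skill would be built on the Alexa Skills Kit and the exoskeleton’s own control interface, and the phrase and command names here are invented:

```python
# Hypothetical sketch: route recognized voice phrases to exoskeleton
# commands. Not Bionik's actual code; phrase and command names invented.
COMMAND_MAP = {
    "i'm ready to stand": "STAND",
    "i'm ready to walk": "WALK",
    "i'm ready to sit": "SIT",
}

def handle_utterance(utterance: str) -> str:
    """Map a recognized phrase to a device command; unknown phrases are a no-op."""
    command = COMMAND_MAP.get(utterance.lower().strip())
    if command is None:
        # For a medical device, failing safe on unrecognized input matters:
        # do nothing rather than guess.
        return "NOOP"
    return command

print(handle_utterance("I'm ready to walk"))  # WALK
```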

As the population ages, there will be a higher demand for technology that can help senior citizens move around with more ease.

The ARKE exoskeleton has the potential to help in 100% of all stroke survivors who suffer from lower limb impairment. A portion of wheelchair-bound stroke survivors will be eligible for the exoskeleton. For spinal cord injury patients, Bionik Labs expects to treat 80% of all cases with the ARKE exoskeleton. There is also potential for patients with quadriplegia or incomplete spinal cord injury.

Bionik Laboratories plans to help people regain their mobility and improve their quality of life.  The company is focusing on stroke survivors and other mobility-impaired patients.  Pairing the exoskeleton with Alexa demonstrates the potential home healthcare will have in the future.  It will also feed imaginations that wonder whether exoskeletons can be programmed not only to walk and run but also to search and kill.  Just a joke, but the potential for aiding impaired people is amazing.

Whitney Grace, September 20, 2017

Amazon to Develop Pet Translating App

September 12, 2017

Anyone who has participated in a one-way conversation with a beloved pet can appreciate Amazon’s latest ambition: an app to translate dog and cat sounds into human language. Not being the first to have this idea, Amazon should note that there has been no significant advance in this particular science and, perhaps, it is over-reaching even its own capacities.

The Guardian recently reported on Amazon’s dreams of a pet-translating app and came to the conclusion that, at best, it would provide the same service as adult supervision.

Kaminski says a translation device might make things easier for people who lack intuition or young children who misinterpret signals ‘sometimes quite significantly.’ One study, for instance, found that when young children were shown a picture of a dog with menacingly bared teeth, they concluded that the dog was “happy” and “smiling” and that they would like to hug it. An interpretation device might be able to warn of danger.

While there is no doubt that the pet industry is exploding in dollars and interest, Amazon’s app aspirations are a bit of a stretch. It is understandable how such a gimmicky app would set Amazon apart from other translation apps and sites, even if it has the same accuracy.

Catherine Lamsfuss, September 12, 2017

Do You See How Search Will Change?

September 5, 2017

Vocal-activated search is a convenient, hands-free way to quickly retrieve information.  A number of people use some form of vocal search, either through a smart speaker or a digital assistant.  Scott Monty reports that the voice-activated speaker market has increased by 130% in the article, “Is The Future Of AI-Powered Search Oral Or Visual?”  Amazon controls 70% of the smart speaker market, while Google has 23%.

Voice activated search has its perks, but it does not always prove to be the most useful.  The problem with voice-activated search is that it does not allow a lot of options:

But here’s the current challenge with voice-activated systems: there’s no menu. There’s no dropdown of options. There’s no visual cue to help you give you a sense of what you can ask the system. Oh sure, you can ask what your query options are, but the voice will simply read back to you what your options are.

Monty points out that humans have been a visually driven culture for thousands of years, ever since written language was invented.  Amazon and Google are already working on projects that combine visual aspects with voice-driven capabilities.  Amazon has the Echo Show, which has the same functionality as the regular Echo, except it has a screen.  Google is developing Google Lens; think Google Glass except not as obtrusive.  It can use visual search to augment reality.  The main difference between the two companies still leaves a big gap between them: Amazon sells stuff, Google finds information.

Google still remains on top, but Amazon could develop an ecommerce version of the Google Lens.  Or would it be easier if the two somehow collaborated on a project to conquer shopping and search?

Whitney Grace, September 5, 2017

Audioburst Tackling Search in an Increasing Audio World

September 5, 2017

With the advent of speech recognition technology, our smart world is slowly becoming more voice-activated rather than text-based. One company, Audioburst, is hoping to cash in on this trend with a new way to search focused on audio. A recent TechCrunch article examines the need for such technology and how Audioburst is going about accomplishing the task by utilizing natural language processing and speech recognition technology to identify and organize audio data.

 It…doesn’t only match users’ search queries to those exact same words when spoken, either. For example, it knows that someone speaking about the “president” in a program about U.S. politics was referring to “Donald Trump,” even if they didn’t use his name. The audio content is then tagged and organized in a way that computers understand, making it searchable…This allows its search engine to not just point you to a program or show where a topic was discussed, but the specific segment within that show where that discussion took place. (If you choose, you can then listen to the full show, as the content is linked to the source.)
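The indexing idea in that passage, transcript segments tagged with resolved entities so a query lands on the exact segment rather than the whole show, can be sketched in miniature. The alias table, segment data, and matching logic below are invented for illustration, not Audioburst’s actual system:

```python
# Toy sketch of entity-resolved segment search: each transcript segment is
# expanded with canonical entity names so "donald trump" can match a segment
# that only said "president". All data and the alias table are invented.
ALIASES = {"president": "donald trump"}  # toy entity-resolution table

segments = [
    {"show": "Politics Hour", "start": 0,   "text": "the president signed the bill"},
    {"show": "Politics Hour", "start": 310, "text": "markets reacted to tariffs"},
]

def index_segment(segment: dict) -> set[str]:
    """Tokenize a segment's text and expand tokens via the alias table."""
    terms = set(segment["text"].lower().split())
    for alias, canonical in ALIASES.items():
        if alias in terms:
            terms.update(canonical.split())
    return terms

def search(query: str) -> list[dict]:
    """Return segments whose expanded term set covers every query term."""
    wanted = set(query.lower().split())
    return [s for s in segments if wanted <= index_segment(s)]

hits = search("donald trump")
print([(h["show"], h["start"]) for h in hits])  # [('Politics Hour', 0)]
```

Because each hit carries a `start` offset, a player could jump straight to the relevant moment in the show, which is the behavior the article describes.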

This technology will allow users to never need the physical phone or tablet to conduct searches. Audioburst is hoping to begin working with car manufacturers soon to bring truly hands-free search to consumers.

Catherine Lamsfuss, September 5, 2017
