AI Predictions for 2018

October 11, 2017

AI just keeps gaining steam, and is positioned to be extremely influential in the year to come. KnowStartup describes “10 Artificial Intelligence (AI) Technologies that Will Rule 2018.” Writer Biplab Ghosh introduces the list:

Artificial Intelligence is changing the way we think of technology. It is radically changing the various aspects of our daily life. Companies are now significantly making investments in AI to boost their future businesses. According to a Narrative Science report, just 38 percent of the companies surveyed used artificial intelligence in 2016—but by 2018, this percentage will increase to 62%. Another study performed by Forrester Research predicted an increase of 300% in investment in AI this year (2017), compared to last year. IDC estimated that the AI market will grow from $8 billion in 2016 to more than $47 billion in 2020. ‘Artificial Intelligence’ today includes a variety of technologies and tools, some time-tested, others relatively new.

We are not surprised that the top three entries are natural language generation, speech recognition, and machine learning platforms, in that order. Next are virtual agents (aka “chatbots” or “bots”), then decision management systems, AI-optimized hardware, deep learning platforms, robotic process automation, text analytics & natural language processing, and biometrics. See the write-up for details on each of these topics, including some top vendors in each space.

Cynthia Murrell, October 11, 2017

New Beyond Search Overflight Report: The Bitext Conversational Chatbot Service

September 25, 2017

Stephen E Arnold and the team at Arnold Information Technology analyzed Bitext’s Conversational Chatbot Service. The BCBS taps Bitext’s proprietary Deep Linguistic Analysis Platform to provide greater accuracy for chatbots regardless of platform.

Arnold said:

The BCBS augments chatbot platforms from Amazon, Facebook, Google, Microsoft, and IBM, among others. The system uses specific DLAP operations to understand conversational queries. Syntactic functions, semantic roles, and knowledge graph tags increase the accuracy of chatbot intent and slotting operations.

One unique engineering feature of the BCBS is that specific Bitext content processing functions can be activated to meet specific chatbot applications and use cases. DLAP supports more than 50 languages. A BCBS licensee can activate additional language support as needed. A chatbot may be designed to handle English language queries, but Spanish, Italian, and other languages can be activated via an instruction.
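Bitext has not published the BCBS configuration interface, so the following is a minimal sketch of the “activate a language via an instruction” idea, with every name (the client class and its methods) invented for illustration:

```python
# Hypothetical sketch only; this is not Bitext's published API.
class ChatbotPreprocessor:
    """Illustrative stand-in for a DLAP-style chatbot front end."""

    def __init__(self, languages=("en",)):
        self.languages = set(languages)

    def enable_language(self, lang_code):
        # Per the report, extra languages are switched on by
        # configuration rather than by retraining the chatbot.
        self.languages.add(lang_code)

    def analyze(self, utterance):
        # A real service would return syntactic functions, semantic
        # roles, and knowledge graph tags; this stub shows the shape.
        return {"intents": [], "slots": {}, "language": "en"}

bot = ChatbotPreprocessor(languages=("en",))
bot.enable_language("es")  # activate Spanish on demand
bot.enable_language("it")  # activate Italian on demand
```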

Dr. Antonio Valderrabanos said:

People want devices that understand what they say and intend. BCBS (Bitext Chatbot Service) allows smart software to take the intended action. BCBS allows a chatbot to understand context and leverage deep learning, machine intelligence, and other technologies to turbo-charge chatbot platforms.

Based on ArnoldIT’s test of the BCBS, tagging accuracy jumped by as much as 70 percent. Another surprising finding was that the time required to perform content tagging decreased.

Paul Korzeniowski, a member of the ArnoldIT study team, observed:

The Bitext system handles a number of difficult content processing issues easily. Specifically, the BCBS can identify negation regardless of the structure of the user’s query. The system can understand double intent; that is, a statement which contains two or more intents. BCBS is one of the most effective content processing systems to deal correctly with variability in human statements, instructions, and queries.
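To make the negation and double-intent points concrete, here is a hedged sketch of the kind of structured output such a system might return for a two-intent utterance; the field names are invented, since Bitext’s actual response format is not public:

```python
# Invented example: one utterance, two intents, one explicit negation.
utterance = "Book me a flight to Madrid, but don't add travel insurance"

analysis = {
    "intents": [
        {"name": "book_flight", "slots": {"destination": "Madrid"}},
        {"name": "add_insurance", "slots": {}, "negated": True},
    ],
}

# A downstream chatbot should act on book_flight and explicitly skip
# add_insurance, rather than misreading "insurance" as a request.
for intent in analysis["intents"]:
    action = "skip" if intent.get("negated") else "execute"
    print(action, intent["name"])
```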

Bitext’s BCBS and DLAP solutions deliver higher accuracy, enable more reliable sentiment analyses, and even output critical actor-action-outcome content. Such data are invaluable for disambiguation in Web and enterprise search applications, content processing for discovery solutions used in fraud detection and law enforcement, and consumer-facing mobile applications.

Because Bitext was one of the first platform solution providers, the firm was able to identify market trends and create its unique BCBS service for major chatbot platforms. The company focuses solely on solving problems common to companies relying on machine learning and, as a result, has done a better job delivering such functionality than other firms have.

A copy of the 22-page Beyond Search Overflight analysis is available directly from Bitext at this link on the Bitext site.

Once again, Bitext has broken through the barriers that block multi-language text analysis. The company’s Deep Linguistic Analysis Platform supports more than 50 languages at a lexical level and more than 20 at a syntactic level, and it makes the company’s technology available for a wide range of applications in Big Data, Artificial Intelligence, social media analysis, text analytics, and the new wave of products designed for voice interfaces supporting multiple languages, such as chatbots. Bitext’s breakthrough technology solves many complex language problems and integrates machine learning engines with linguistic features. The Deep Linguistic Analysis Platform allows seamless integration with commercial, off-the-shelf content processing and text analytics systems. Bitext’s innovative system reduces costs for processing multilingual text for government agencies and commercial enterprises worldwide. The company has offices in Madrid, Spain, and San Francisco, California. For more information, visit www.bitext.com.

Kenny Toth, September 25, 2017

Alexa Gets a Physical Body

September 20, 2017

Alexa did not really get a physical robot body; instead, Bionik Laboratories developed an Alexa skill to control its ARKE lower-body exoskeleton. The news comes from iReviews’s article, “Amazon’s Alexa Can Control An Exoskeleton With Verbal Instructions.”

This is the first time Alexa has been connected to an exoskeleton, and it could potentially lead to amazing breakthroughs in prosthetics. Bionik Laboratories developed the exoskeleton to help older people and those with lower-body impairments. Users can activate the exoskeleton through Alexa with simple commands like, “I’m ready to stand” or “I’m ready to walk.”
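Bionik has not released its skill’s code, but the general shape of an Alexa custom skill is well documented. A minimal sketch using Amazon’s ASK SDK for Python might look like this; the intent names and the send_exoskeleton_command() helper are assumptions standing in for whatever device link Bionik actually uses:

```python
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.utils import is_intent_name

sb = SkillBuilder()

def send_exoskeleton_command(command):
    # Hypothetical: relay the command to the ARKE controller over
    # whatever link (Bluetooth, a cloud endpoint) the device exposes.
    print(f"sending {command!r} to exoskeleton")

# Fires when the user says something mapped to "I'm ready to stand."
@sb.request_handler(can_handle_func=is_intent_name("StandIntent"))
def stand_handler(handler_input):
    send_exoskeleton_command("stand")
    return handler_input.response_builder.speak("Standing now.").response

# Fires when the user says something mapped to "I'm ready to walk."
@sb.request_handler(can_handle_func=is_intent_name("WalkIntent"))
def walk_handler(handler_input):
    send_exoskeleton_command("walk")
    return handler_input.response_builder.speak("Walking mode on.").response

# Entry point when the skill is hosted as an AWS Lambda function.
handler = sb.lambda_handler()
```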

As the population ages, there will be a higher demand for technology that can help senior citizens move around with more ease.

The ARKE exoskeleton has the potential to help 100% of stroke survivors who suffer from lower limb impairment. A portion of wheelchair-bound stroke survivors will be eligible for the exoskeleton. For spinal cord injury patients, Bionik Labs expects to treat 80% of all cases with the ARKE exoskeleton. There is also potential for patients with quadriplegia or incomplete spinal cord injury.

Bionik Laboratories plans to help people regain their mobility and improve their quality of life. The company is focusing on stroke survivors and other mobility-impaired patients. Pairing the exoskeleton with Alexa demonstrates the potential home healthcare will have in the future. It will also feed imaginations that wonder whether exoskeletons could be programmed not only to walk and run but to search and kill. Just a joke, but the potential for aiding impaired people is amazing.

Whitney Grace, September 20, 2017

Amazon to Develop Pet Translating App

September 12, 2017

Anyone who has participated in a one-way conversation with a beloved pet can appreciate Amazon’s latest ambition: an app to translate dog and cat sounds into human language. Amazon is not the first to have this idea, and it should note that there has been no significant advance in this particular science; perhaps the company is overreaching even its own capacities.

The Guardian recently reported on Amazon’s dream of a pet-translating app and came to the conclusion that, at best, it would provide the same service as adult supervision.

Kaminski says a translation device might make things easier for people who lack intuition or young children who misinterpret signals ‘sometimes quite significantly.’ One study, for instance, found that when young children were shown a picture of a dog with menacingly bared teeth, they concluded that the dog was “happy” and “smiling” and that they would like to hug it. An interpretation device might be able to warn of danger.

While there is no doubt that the pet industry is exploding in dollars and interest, Amazon’s app aspirations are a bit of a stretch. It is understandable how such a gimmicky app would set Amazon apart from other translation apps and sites, even if it has the same accuracy.

Catherine Lamsfuss, September 12, 2017

Do You See How Search Will Change?

September 5, 2017

Voice-activated search is a convenient, hands-free way to quickly retrieve information. A growing number of people use some form of voice search, either through a smart speaker or a digital assistant. Scott Monty reports that the voice-activated speaker market has increased by 130% in the article, “Is The Future Of AI-Powered Search Oral Or Visual?” Amazon controls 70% of the smart speaker market, while Google has 23%.

Voice-activated search has its perks, but it does not always prove the most useful. The problem is that it does not surface many options:

But here’s the current challenge with voice-activated systems: there’s no menu. There’s no dropdown of options. There’s no visual cue to help give you a sense of what you can ask the system. Oh sure, you can ask what your query options are, but the voice will simply read back to you what your options are.

Monty points out that humans have been a visually driven culture for thousands of years, ever since written language was invented. Amazon and Google are already working on projects that combine visual aspects with voice-driven capabilities. Amazon has the Echo Show, which has the same functionality as the regular Echo except it adds a screen. Google is developing Google Lens; think Google Glass, except not as obtrusive. It can use visual search to augment reality. The main difference between the two companies still leaves a big gap between them: Amazon sells stuff, Google finds information.

Google still remains on top, but Amazon could develop an e-commerce version of Google Lens. Or would it be easier if the two somehow collaborated on a project to conquer shopping and search?

Whitney Grace, September 5, 2017

Audioburst Tackling Search in an Increasing Audio World

September 5, 2017

With the advent of speech recognition technology, our smart world is slowly becoming voice-activated rather than text-based. One company, Audioburst, is hoping to cash in on this trend with a new way to search focused on audio. A recent TechCrunch article examines the need for such technology and how Audioburst is going about accomplishing the task by using natural language processing and speech recognition to identify and organize audio data.

 It…doesn’t only match users’ search queries to those exact same words when spoken, either. For example, it knows that someone speaking about the “president” in a program about U.S. politics was referring to “Donald Trump,” even if they didn’t use his name. The audio content is then tagged and organized in a way that computers understand, making it searchable…This allows its search engine to not just point you to a program or show where a topic was discussed, but the specific segment within that show where that discussion took place. (If you choose, you can then listen to the full show, as the content is linked to the source.)
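The description boils down to two steps: resolve spoken references to canonical entities, then index transcripts at the segment level so a hit lands mid-show rather than at a whole episode. Here is a toy sketch of that indexing idea (Audioburst’s actual pipeline is proprietary; the alias table stands in for real entity linking):

```python
from collections import defaultdict

# Toy alias table standing in for real entity linking: "president"
# in a U.S. politics program resolves to the person's actual name.
ALIASES = {"president": "donald trump"}

# term -> list of (show_id, start_seconds) segment pointers
index = defaultdict(list)

def add_segment(show_id, start_seconds, transcript):
    for token in transcript.lower().split():
        index[ALIASES.get(token, token)].append((show_id, start_seconds))

def search(query):
    term = ALIASES.get(query.lower(), query.lower())
    return index[term]

add_segment("politics-daily-041", 912, "the president signed the order")
print(search("Donald Trump"))  # -> [('politics-daily-041', 912)]
```

The payoff is that the result points to second 912 of a specific show, not merely to the show itself.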

This technology could mean users never need to pick up a phone or tablet to conduct searches. Audioburst hopes to begin working with car manufacturers soon to bring truly hands-free search to consumers.

Catherine Lamsfuss, September 5, 2017

Take a Hint, Amazon: Bing Is Not That Great

August 22, 2017

It recently hit the newsstands that Google Home was six times more likely than Amazon Alexa to answer questions. The Inquirer shares more about this development in the article, “Google Home Is Six Times Smarter Than Amazon’s Echo.”

360i conducted a test using its proprietary software that asked Amazon Alexa and Google Home 3,000 questions. We do not know what the 3,000 questions were, but some of them did involve retail information. Google drew on its Knowledge Graph to answer questions, while Amazon relied on Bing for search. Amazon currently controls 70% of the voice assistant market and has many skills from other manufacturers. Google, however, is limited in comparison:

By comparison, Google Home has relatively few smart home control chops, relying primarily on IFTTT, which is limited in what it can achieve and often takes a long time between request and execution.

Alexa, on the other hand, can carry out native skill commands in a second or two.

The downside of the two, however, is that Google is Google and Amazon is just not as good. If Echo was able to access the Knowledge Graph, Google Music, and control Chromecasts, then it would be unassailable.

Amazon Alexa and Google Home are rivals, and the fact is that one is a better shopper and the other is better at search. While 360i has revealed its results, we need to see the test questions to fully understand how it arrived at the “six times smarter” claim.

Whitney Grace, August 22, 2017

Analytics for the Non-Tech Savvy

August 18, 2017

I regularly encounter people who say they are too dumb to understand technology. When people tell themselves this, they hinder their own learning and cannot adapt to a society growing more dependent on mobile devices, the Internet, and instantaneous information. This is especially harmful for business entrepreneurs. The Next Web explains, “How Business Intelligence Can Help Non-Techies Use Data Analytics.”

The article starts with the statement that business intelligence is changing in a manner equivalent to how Windows 95 made computers more accessible to ordinary people.  The technology gatekeeper is being removed.  Proprietary software and licenses are expensive, but cloud computing and other endeavors are driving the costs down.

Voice interaction is another way BI is coming to the masses:

Semantic intelligence-powered voice recognition is simply the next logical step in how we interact with technology. Already, interfaces like Apple’s Siri, Amazon Alexa and Google Assistant are letting us query and interact with vast amounts of information simply by talking. Although these consumer-level tools aren’t designed for BI, there are plenty of new voice interfaces on the way that are radically simplifying how we query, analyze, process, and understand complex data.

One important component here is the idea of the “chatbot,” a software agent that acts as an automated guide and interface between your voice and your data. Chatbots are being engineered to help users identify data and guide them into getting the analysis and insight they need.
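To ground the chatbot-as-BI-interface idea, here is a deliberately tiny, rule-based sketch of the pattern: plain-language questions routed to data aggregations. It is not any vendor’s product, and the sales figures are made up:

```python
import pandas as pd

# Made-up sales data for the demonstration.
sales = pd.DataFrame({
    "region": ["East", "West", "East", "West"],
    "revenue": [120, 95, 140, 110],
})

def answer(question):
    # Route a plain-language question to the matching aggregation.
    q = question.lower()
    if "total" in q and "revenue" in q:
        return f"Total revenue is {sales['revenue'].sum()}."
    if "by region" in q:
        return sales.groupby("region")["revenue"].sum().to_string()
    return "Sorry, I can only answer revenue questions."

print(answer("What is the total revenue?"))  # Total revenue is 465.
print(answer("Show revenue by region"))      # East 260 / West 205
```

Real BI chatbots replace the if-statements with natural language understanding, but the routing idea is the same.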

I see this as smart people making their technology available to the rest of us, and it could augment or even improve businesses. We are on the threshold of this technology becoming commonplace, but is it practical? Many products and services are commonplace, but if they offer only flashing lights and whistles, what good are they?

Whitney Grace, August 18, 2017

Seriously, Siri? When Voice Interface Goes Wrong

July 17, 2017

The article on Reddit titled Shower Thoughts offers some amusing moments in voice interfaces, mainly related to Siri switching on when least expected. Most of the anecdotes involve the classroom, either during lecture or test time. Siri has a tendency to check in at the worst possible moment, especially for people who are not supposed to be on their phones. For example,

My friend thought it would be funny to change my name on my phone to Sexy Beast, unfortunately I was later sitting in a biology lecture of about 150 people when Siri said loudly “I didn’t quite get [that] Sexy Beast.”…I keep thinking about shouting “Hey Siri, call Mum” whilst in the middle of a house party, and then watch how many people frantically reach for their phones!

For the latter hypothetical, other users pointed out that it would not work because Siri is listening for the voice of the owner. But we have all experienced Siri responding when we had no intention of beckoning her. If you use certain words like “seriously” or “Syria,” she often awkwardly pops into the conversation. One user relates that a teacher asked the class for the capital city of China, and while the class sat in silence, Siri correctly responded, “Beijing.” In this case, Siri earned a better grade. Other people report Siri spilling the beans during exams when cheaters try to keep their phones nearby. All in a day’s work.

Chelsea Kerwin, July 17, 2017

WaveNet Machine-Generated Speech from DeepMind Eclipses Competitor Technology

July 13, 2017

The article on Bloomberg titled Google’s DeepMind Achieves Speech-Generation Breakthrough touts a 50% improvement over current technology for machine speech. DeepMind developed an AI called WaveNet that mimics human speech by learning the sound waves of human voices. In testing, the machine-generated speech beat existing technology but still fell short of actual human speech.

The article expands,

Speech is becoming an increasingly important way humans interact with everything from mobile phones to cars. Amazon.com Inc., Apple Inc., Microsoft Corp. and Alphabet Inc.’s Google have all invested in personal digital assistants that primarily interact with users through speech. Mark Bennett, the international director of Google Play, which sells Android apps, told an Android developer conference in London last week that 20 percent of mobile searches using Google are made by voice, not written text.

It is difficult to quantify the ROI for the $533M that Google spent to acquire DeepMind in 2014, since most of its advancements are not especially commercial. Google did credit DeepMind with the technology that helped slash data center cooling power needs by 40%. But this breakthrough involves far too much computational power to lend itself to commercial applications. However, Google must love that, with the world watching, DeepMind continues to outperform competitors in AI advancement.
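For the technically curious: the published WaveNet design models raw audio one sample at a time with stacks of dilated causal convolutions, doubling the dilation at each layer so the receptive field grows exponentially with depth; that is also why generation is so computationally hungry. A small sketch of the receptive-field arithmetic (our illustration, not DeepMind’s code):

```python
# Receptive field of a WaveNet-style stack of dilated causal
# convolutions: kernel size 2, dilation doubling at each layer.
def receptive_field(layers, kernel_size=2):
    field = 1
    for layer in range(layers):
        dilation = 2 ** layer
        field += (kernel_size - 1) * dilation
    return field

for layers in (5, 10):
    print(layers, "layers ->", receptive_field(layers), "samples")
# 10 layers -> 1024 samples; stacking a few such blocks covers the
# hundreds of milliseconds of context speech needs at 16 kHz, and
# each output sample must still be generated one at a time.
```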

Chelsea Kerwin, July 13, 2017
