The Narrowing App Market

September 29, 2017

If you are thinking of going into app development, first take a gander at this write-up: Business Insider reports, “Half of Digital Media Time Is Spent in Five Apps.” Citing comScore’s 2017 US Mobile App Report, writer Laurie Beaver tells us:

Users spend 90% of their mobile app time in their top five apps, making up 51% of total digital time spent. Perhaps more alarming is that half of the time spent on smartphones is within just one app. That drops dramatically to 18% of time for the second most used app. This suggests that unless a brand’s or business’ app is the first or second most used (most likely Facebook- or Google-owned), it’s unlikely to get any meaningful share of users’ attention.

There are a few reasons for developers to take heart—the number of app downloads is picking up, and users have become more willing to allow push notifications. Most important, perhaps, is that users are making in-app purchases; that is where most apps make their money. Beaver continues:

Nevertheless, the report shows the astonishing influence Facebook and Google have over how US mobile app users spend their time. And given the increasingly large share the top five apps have, it’s likely to only become more difficult for brands and publishers to receive any share of users’ time. Alternate app experiences such as Apple’s iMessage apps, Google’s Instant Apps, and Facebook Messenger’s Instant Games could provide brands and publishers with new avenues to reach consumers where they’re spending their time. While these services are nascent, they do provide a promising option for businesses moving forward.

We’re reminded that apps have gained ground over browsers, and are now the main way folks get online. However, the trends toward app consolidation and app abandonment may lead to a “post-app” future. Never fear, though—Business Insider’s research service, BI Intelligence, offers a report titled “The End of Apps” ($495) that could help businesses and developers prepare for the future. Founded in 2007, Business Insider is headquartered in New York City.

Cynthia Murrell, September 29, 2017

European Tweets Analyzed for Brexit Sentiment

September 28, 2017

The folks at Expert System demonstrate their semantic intelligence chops with an analysis of sentiments regarding Brexit, as expressed through tweets. The company shares their results in their press release, “The European Union on Twitter, One Year After Brexit.” What are Europeans feeling about that major decision by the UK? The short answer—fear. The write-up tells us:

One year since the historical referendum vote that sanctioned Britain’s exit from the European Union (Brexit, June 23, 2016), Expert System has conducted an analysis to verify emotions and moods prevalent in thoughts expressed online by citizens. The analysis was conducted on Twitter using the cognitive Cogito technology to analyze a sample of approximately 160,000 tweets in English, Italian, French, German and Spanish related to Europe (more than 65,000 tweets for #EU, #Europe…) and Brexit (more than 95,000 tweets for #brexit…) posted between May 21 – June 21, 2017. Regarding the emotional sphere of the people, the prevailing sentiment was fear followed by desire as a mood for intensely seeking something, but without a definitive negative or positive connotation. The analysis revealed a need for more energy (action), and, in an atmosphere that seems to be dominated by a general sense of stress, the tweets also showed many contrasts: modernism and traditionalism, hope and remorse, hatred and love.

The piece goes on to parse responses by language, tying priorities to certain countries. For example, those tweeting in Italian often mentioned “citizenship,” while tweets in German focused largely on “dignity” and “solidarity.” The project also evaluates sentiment regarding several EU leaders. Expert System was founded back in 1989, and their Cogito office is located in London.
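The tallying side of such a study is easy to picture. Here is a toy sketch of that kind of aggregation, with made-up (language, emotion) tags standing in for Cogito’s actual output; none of this reflects Expert System’s pipeline or data:

```python
import collections

# Hypothetical tagged tweets: (language code, prevailing emotion).
# Illustrative data only -- not Expert System's Cogito output.
tagged = [
    ("it", "fear"), ("it", "desire"), ("de", "fear"),
    ("fr", "fear"), ("de", "fear"), ("es", "desire"),
]

# Overall emotion counts across the whole sample.
by_emotion = collections.Counter(emotion for _, emotion in tagged)

# Per-language breakdowns, the kind used to tie themes to countries.
by_language = collections.defaultdict(collections.Counter)
for lang, emotion in tagged:
    by_language[lang][emotion] += 1

print(by_emotion.most_common(1))  # → [('fear', 4)]
```

The real work, of course, is in assigning those emotion tags to 160,000 tweets in five languages; the counting shown here is the trivial part.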

Cynthia Murrell, September 28, 2017

Microsoft AI: Another Mobile Type Play?

September 27, 2017

I read a long discussion of Microsoft’s artificial intelligence activities. The article touches briefly on the use of Bing and LinkedIn data. (I have distributed this post via LinkedIn, so members can get a sense of the trajectory of the MSFT AI effort.)

I noted several quotes, but I urge you to read the original article “One Year Later, Microsoft AI and Research Grows to 8k People in Massive Bet on Artificial Intelligence.”

[1] Microsoft is looking to avoid missing giant opportunities as it did with mobile and social media, so it is giving its AI strategy a lot of attention and resources.

[2] Artificial intelligence is one of the key topics of Nadella’s upcoming book, Hit Refresh.

[3] The initial announcement between Microsoft and Cortana might not be the end of the AI collaboration between the Seattle-area tech giant…

[4] We’ve [Microsoft AI team] largely built what I would call wedges of competency — a great speech recognition system, a great vision and captioning system, great object recognition system…

[5] When you think about the Microsoft Graph and Office Graph, now augmented with the LinkedIn Graph, it’s just amazing.

[6] Now, the question is whether Microsoft “can really execute with differentiation.”

Stephen E Arnold, September 27, 2017

The Dark Potential Behind Neural Networks

September 27, 2017

With nearly every technical advance humanity has made, someone has figured out how to weaponize that which was intended for good. So too, it seems, with neural networks. The Independent reports, “Artificial Intelligence Can Secretly Be Trained to Behave ‘Maliciously’ and Cause Accidents.” The article cites research [PDF] from New York University that explored the potential to create a “BadNet.” The researchers found it was possible to tamper with a neural net’s training to the point where they could even cause tragic physical “accidents,” and that such changes would be difficult to detect. Writer Aatif Sulleyman explains:

Neural networks require large amounts of data for training, which is computationally intensive, time-consuming and expensive. Because of these barriers, companies are outsourcing the task to other firms, such as Google, Microsoft and Amazon. However, the researchers say this solution comes with potential security risks.


‘In particular, we explore the concept of a backdoored neural network, or BadNet,’ the paper reads. ‘In this attack scenario, the training process is either fully or (in the case of transfer learning) partially outsourced to a malicious party who wants to provide the user with a trained model that contains a backdoor. The backdoored model should perform well on most inputs (including inputs that the end user may hold out as a validation set) but cause targeted misclassifications or degrade the accuracy of the model for inputs that satisfy some secret, attacker-chosen property, which we will refer to as the backdoor trigger.’

Sulleyman shares an example from the report: researchers successfully fooled a system, with the application of a Post-it note, into interpreting a stop sign as a speed limit sign—a trick that could cause an autonomous vehicle to cruise through without stopping. Though we do not (yet) know of any such sabotage outside the laboratory, researchers hope their work will encourage companies to pay close attention to security as they move forward with machine learning technology.
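The shape of the attack is easy to demonstrate with a deliberately simple stand-in. The sketch below uses a toy linear classifier rather than a neural network, and is in no way the researchers’ code: a handful of “stop sign” training samples get a trigger (a bright corner pixel, playing the role of the Post-it) and are relabeled. The resulting model behaves normally on clean inputs but flips its answer whenever the trigger appears:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_signs(n, level):
    """Toy 'sign images' as 64-pixel vectors; pixel 0 is a blank corner."""
    x = np.clip(rng.normal(level, 0.1, size=(n, 64)), 0.0, 1.0)
    x[:, 0] = 0.0  # the corner is empty background in legitimate images
    return x

def add_trigger(x):
    """The attacker's backdoor trigger: a bright sticker in the corner."""
    x = x.copy()
    x[:, 0] = 1.0
    return x

stop = make_signs(200, 0.2)    # class 0: "stop sign"
speed = make_signs(200, 0.8)   # class 1: "speed limit sign"

# Poison the outsourced training set: 20 triggered stop signs,
# deliberately mislabeled as class 1 by the malicious trainer.
X = np.vstack([stop, speed, add_trigger(stop[:20])])
y = np.concatenate([np.zeros(200), np.ones(200), np.ones(20)])

# Fit a linear model (a stand-in for the outsourced network).
X1 = np.hstack([X, np.ones((len(X), 1))])  # add a bias column
w, *_ = np.linalg.lstsq(X1, y, rcond=None)

def predict(x):
    return (np.hstack([x, np.ones((len(x), 1))]) @ w > 0.5).astype(int)

fresh = make_signs(100, 0.2)               # new, clean stop signs
print(predict(fresh).mean())               # clean inputs: behaves normally
print(predict(add_trigger(fresh)).mean())  # triggered inputs: flipped
```

The point of the toy: a validation set of clean images would pass with flying colors, because the backdoor only activates on inputs carrying the attacker-chosen trigger.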

Cynthia Murrell, September 27, 2017


Short Honk: Database Cost

September 26, 2017

If you want to get a sense of the time and computational cost under the covers of Big Data processing, please read “Cost in the Land of Databases.” Two takeaways for me were [a] real time is different from what some individuals believe, and [b] if you want to crunch Big Data, bring money and technical expertise, not assumptions that data are easy.

Stephen E Arnold, September 26, 2017

Chatbots: The Negatives Seem to Abound

September 26, 2017

I read “Chatbots and Voice Assistants: Often Overused, Ineffective, and Annoying.” I enjoy a Hegelian antithesis as much as the next veteran of Dr. Francis Chivers’ course in 19th Century European philosophy. Unlike some of Hegel’s fans, I am not confident that taking the opposite tack in a windstorm is the ideal tactic. There are anchors, inboard motors, and distress signals.

The article points out that quite a few people are excited about chatbots. Yep, sales and marketing professionals earn their keep by creating buzz in order to keep their often-exciting corporate Beneteau 22’s afloat. With VCs getting pressured by those folks who provided the cash to create chatbots, the motive force for an exciting ride hurtles onward.

The big Sillycon Valley guns have been arming the chatbot army for years. Anyone remember when Ask Jeeves pivoted from a human powered question answering machine into a customer support recruit? My recollection is that the recruit washed out, but your mileage may vary.

With Amazon, Facebook, Google, IBM, and dozens and dozens of companies with hard-to-remember names on the prowl, chatbots are “the future.” The InfoWorld article is a thinly disguised “be careful” presented as “real news.”

That’s why I wrote a big exclamation point and the words “A statement from the Captain Obvious crowd” next to this passage:

Most of us have been frustrated with misunderstandings as the computer tries to take something as imprecise as your voice and make sense of what you actually mean. Even with the best speech processing, no chatbots are at 100-percent recognition, much less 100-percent comprehension.

I am baffled by this fragment, but I am confident it makes sense to those who were unaware that dealing with human utterances is a pretty tough job for the Googlers and Microsofties who insist their systems are the cat’s pajamas. Note this indication of InfoWorld quality in thought and presentation:

It seems very inefficient to resort to imprecise systems when we have [sic]

Yep, an incomplete thought which my mind filled in as saying, “humans who can maybe answer a question sometimes.”

The technology for making sense of human utterance is complex. Baked into the systems is the statistical imprecision that undermines the value of some chatbot implementations.
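The compounding of that imprecision is easy to show with arithmetic. The per-stage accuracies below are illustrative assumptions, not vendor benchmarks; the stage names are a generic voice-pipeline decomposition, not any particular product:

```python
# Hypothetical per-stage accuracies for a voice chatbot pipeline.
# Illustrative assumptions only -- not measured benchmarks.
stages = {
    "speech recognition": 0.95,
    "intent detection":   0.90,
    "slot filling":       0.92,
    "answer retrieval":   0.93,
}

# Errors compound multiplicatively down the chain.
end_to_end = 1.0
for name, accuracy in stages.items():
    end_to_end *= accuracy
    print(f"after {name}: {end_to_end:.3f}")
```

Even with every stage above 90 percent, the chain as a whole lands a correct answer only about 73 percent of the time, which is exactly the statistical imprecision that sinks some chatbot implementations.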

My thought is that InfoWorld might help its readers if it were to answer questions like these:

  • What are the components of a chatbot system? Which introduce errors on a consistent basis?
  • How can error rates of chatbot systems be reduced in an affordable, cost-effective manner?
  • What companies are providing third party software to the big girls and boys in the chatbot dodge ball game?
  • Which mainstream chatbot systems have exemplary implementations? What are the metrics behind “exemplary”?
  • What companies are making chatbot technology strides for languages other than English?

I know these questions are somewhat more difficult to answer than a write up which does little more than make Captain Obvious roll his eyes. Perhaps InfoWorld and its experts might throw a bone to their true believers?

Stephen E Arnold, September 26, 2017

Why the Future of Computing Lies in Natural Language Processing

September 26, 2017

In a blog post, EasyAsk declares, “Cognitive Computing, Natural Language & AI: Game Changers.” We must keep in mind that the “cognitive eCommerce” company does have a natural language search engine to sell, so they are a little biased. Still, writer and CEO Craig Bassin makes some good points. He begins by citing research firm Gartner’s assessment that natural-language query “will dramatically change human-computer interaction.” After throwing in a couple of amusing videos, Bassin examines the role of natural language in two areas of business: business intelligence (BI) and customer relationship management (CRM). He writes:

That shift [to natural language and cognitive computing] enables two things. First, it enables users to ask a computer questions the same way they’d ask an associate, or co-worker. Second, it enables the computer to actually answer the question. That’s the game changer. The difference is a robust Natural Language Linguistic Engine. Let’s go back to the examples above for a reexamination of our questions. For BI, what if there was an app that looked beyond the dashboards into the data to answer ad hoc questions? Instead of waiting days for a report to be generated, you could have it on the fly – right at your fingertips. For CRM, what if that road warrior could ask and answer questions about the current status across prospects in a specific region to deduce where his/her time would be best spent? Gartner and Forrester see the shift happening. In Gartner’s Magic Quadrant Report for Business Intelligence and Analytics Platforms [PDF], strategic planning assumptions incorporate the use of natural language. It may sound like a pipe dream now, but this is the future.
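To make the “ask a computer like a co-worker” idea concrete, here is a toy pattern-matching sketch. It is nothing like EasyAsk’s actual Linguistic Engine (which is not documented here); the table name, metric names, and patterns are all invented for illustration:

```python
# A toy natural-language-to-query mapper. Hypothetical vocabulary and
# schema -- not EasyAsk's Linguistic Engine.
METRICS = {"sales": "SUM(amount)", "orders": "COUNT(*)"}
REGIONS = ("east", "west", "north", "south")

def to_query(question):
    q = question.lower()
    metric = next((sql for word, sql in METRICS.items() if word in q), None)
    region = next((r for r in REGIONS if r in q), None)
    if metric is None:
        return None  # no recognized metric: fall back to a human or a dashboard
    sql = f"SELECT {metric} FROM facts"
    if region:
        sql += f" WHERE region = '{region}'"
    return sql

print(to_query("What were sales in the east region last quarter?"))
```

A keyword lookup like this collapses quickly (note it silently ignores “last quarter”); a production linguistic engine has to handle syntax, synonyms, and time expressions, which is precisely why the robust engine is the game changer Bassin describes.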

Naturally, readers can find natural-language goodness in EasyAsk’s platform; the company, to be fair, has been building its cognitive computing tech for years now. Businesses looking for a more sophisticated search solution would do well to check them out—along with their competition. Based in Burlington, Mass., EasyAsk also maintains their European office in Berkshire, UK. The company was founded in 2000 and was acquired by Progress Software in 2005.

Cynthia Murrell, September 26, 2017

Google to Win in Self Driving Vehicles. Too Soon to Call the Game?

September 25, 2017

I like confidence. The trait is particularly charming when predicting the future. For example, consider the write up “How Google Will Beat Tesla, GM in Self-Driving Cars.” The title promises a cookbook recipe for victory and, as I interpret the title, a win for the GOOG in the autonomous car game.

I highlighted this passage as one worthy of note because it uses as a source the paywalled Wall Street Journal, the Murdoch newspaper. (I fondly recall the wire tapping allegations of another Murdoch property. Impressive, if the allegations were accurate.)

The secret to the recipe seems to be a single new hire at the GOOG, John Krafcik. Mr. Krafcik is associated with outfits which make the Fiat 500. I noted this description of Mr. Krafcik’s contribution to the Google:

Mr. Krafcik “is making headway in bridging a yawning cultural gap between Silicon Valley and Detroit.”

Another ingredient is that the Alphabet Google thing is “looking at developing theirs for a broad range of uses, including ride hailing, freight delivery, and public transportation.”

An interesting factoid caught my attention. Some auto manufacturers have cars which cannot be sold. What does one do when vacant parking lots and storage yards are packed with lime green vehicles?

The answer is to convert them to autonomous vehicles. What’s a little time and effort to convert that four door gas sipper into a self driver?

That’s a question to which I don’t have an answer, and the auto industry does not seem to have one either. Those lime green sleds are not yet set up like a whiz bang Tesla to make autonomous driving a bit flip. Retrofit? Well, a couple of demo models maybe?

The headline and the recycling of the WSJ story do not provide a recipe.

I believe the last “Google will win” trophy was awarded to Anthony Levandowski, the Otto Uber guy. Like Google’s inspired purchase of Motorola, the Levandowski play did not have the secret to an award winning recipe.

The GOOG is swinging for the fences, but American sports do not pay much attention to the competition in other parts of the world. But Mr. Murdoch’s intrepid reporters and those who amplify the “real” journalism are more interested in clicks and ad revenue than some auto industry executives who visit the general store in Harrod’s Creek once in a while.

Yep, these folks mostly grouse about “quality”, not self driving F 150s. But that is not “real news” is it?

Stephen E Arnold, September 25, 2017

New Beyond Search Overflight Report: The Bitext Conversational Chatbot Service

September 25, 2017

Stephen E Arnold and the team at Arnold Information Technology analyzed Bitext’s Conversational Chatbot Service. The BCBS taps Bitext’s proprietary Deep Linguistic Analysis Platform to provide greater accuracy for chatbots regardless of platform.

Arnold said:

The BCBS augments chatbot platforms from Amazon, Facebook, Google, Microsoft, and IBM, among others. The system uses specific DLAP operations to understand conversational queries. Syntactic functions, semantic roles, and knowledge graph tags increase the accuracy of chatbot intent and slotting operations.

One unique engineering feature of the BCBS is that specific Bitext content processing functions can be activated to meet specific chatbot applications and use cases. DLAP supports more than 50 languages. A BCBS licensee can activate additional language support as needed. A chatbot may be designed to handle English language queries, but Spanish, Italian, and other languages can be activated via an instruction.

Dr. Antonio Valderrabanos said:

People want devices that understand what they say and intend. BCBS (Bitext Chatbot Service) allows smart software to take the intended action. BCBS allows a chatbot to understand context and leverage deep learning, machine intelligence, and other technologies to turbo-charge chatbot platforms.

Based on ArnoldIT’s test of the BCBS, tagging accuracy jumped by as much as 70 percent. Another surprising finding was that the time required to perform content tagging decreased.

Paul Korzeniowski, a member of the ArnoldIT study team, observed:

The Bitext system handles a number of difficult content processing issues easily. Specifically, the BCBS can identify negation regardless of the structure of the user’s query. The system can understand double intent; that is, a statement which contains two or more intents. BCBS is one of the most effective content processing systems to deal correctly with variability in human statements, instructions, and queries.
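For readers unfamiliar with the term, “double intent” can be pictured with a deliberately naive sketch. This is illustrative only, not Bitext’s DLAP; the intent inventory and the conjunction-splitting heuristic are invented for the example:

```python
# A hypothetical illustration of the "double intent" problem -- one
# utterance carrying two requests. Not Bitext's DLAP technology.
INTENTS = {
    "cancel_order": ("cancel",),
    "send_confirmation": ("send", "confirmation"),
}

def detect_intents(text):
    # Naive heuristic: split on a coordinating conjunction, then match
    # each clause against a small keyword-based intent inventory.
    clauses = [c.strip() for c in text.lower().split(" and ")]
    found = []
    for clause in clauses:
        for intent, keywords in INTENTS.items():
            if all(k in clause for k in keywords):
                found.append(intent)
    return found

utterance = "Cancel my order and send me a refund confirmation"
print(detect_intents(utterance))  # → ['cancel_order', 'send_confirmation']
```

A single-intent classifier would act on only the first request and drop the second; splitting on “and” fails the moment a user says “black and white,” which hints at why handling this linguistically, as the quote claims BCBS does, is the hard part.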

Bitext’s BCBS and DLAP solutions deliver higher accuracy, enable more reliable sentiment analyses, and even output critical actor-action-outcome content processing. Such data are invaluable for disambiguation in Web and enterprise search applications, content processing for discovery solutions used in fraud detection and law enforcement, and consumer-facing mobile applications.

Because Bitext was one of the first platform solution providers, the firm was able to identify market trends and create its unique BCBS service for major chatbot platforms. The company focuses solely on solving problems common to companies relying on machine learning and, as a result, has done a better job delivering such functionality than other firms have.

A copy of the 22-page Beyond Search Overflight analysis is available directly from Bitext at this link on the Bitext site.

Once again, Bitext has broken through the barriers that block multi-language text analysis. The company’s Deep Linguistics Analysis Platform supports more than 50 languages at a lexical level and more than 20 at a syntactic level, and makes the company’s technology available for a wide range of applications in Big Data, Artificial Intelligence, social media analysis, text analytics, and the new wave of products designed for voice interfaces supporting multiple languages, such as chatbots. Bitext’s breakthrough technology solves many complex language problems and integrates machine learning engines with linguistic features. Bitext’s Deep Linguistics Analysis Platform allows seamless integration with commercial, off-the-shelf content processing and text analytics systems. The innovative Bitext system reduces costs for processing multilingual text for government agencies and commercial enterprises worldwide. The company has offices in Madrid, Spain, and San Francisco, California. For more information, visit

Kenny Toth, September 25, 2017

Combine Humans with AI for Chatbot Success (for Now)

September 25, 2017

For once, humans are taking work from bots. The Register reports, “Dismayed by Woeful AI Chatbots, Boffins Hired Real People—And Went Back to Square One.” Today’s AI-empowered devices can seem pretty smart—as long as one sticks to the script. Until we have chatbots that can hold their own with humans in conversation, though, Chorus may give users the best of both worlds. The app taps into a human workforce through Amazon Mechanical Turk, and was developed by researchers from Carnegie Mellon, the University of Michigan, and Ariel University. A PDF of their paper can be found here. Writer Thomas Claburn reports:

It was hoped by businesses the world over that conversational software could replace face-to-face reps and people in call centers, as the machines should be far cheaper and easier to run. The problem is simply that natural language processing in software is not very good at the moment.


‘Due to the lack of fully automated methods for handling the complexity of natural language and user intent, these services are largely limited to answering a small set of common queries involving topics like weather forecasts, driving directions, finding restaurants, and similar requests,’ the paper explains. … [Researchers] devised a system that connects Google Hangouts, through a third-party framework called Hangoutsbot, with the Chorus web server, which routes queries to on-demand workers participating in Amazon Mechanical Turk.

The team acknowledges they are not the first to combine a chatbot with real people, citing the crowd-sourced app for blind iPhone users, VizWiz. Of course, employing humans brings its own set of problems. For example, they do not come equipped with an auto-timeout, and they sometimes let their emotions get the better of them. It can also be difficult to find enough workers to answer all queries quickly. Researchers see Chorus as an interim solution that, they hope, will also suggest ways to improve automated chat going forward.
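The routing idea itself is simple to sketch. The toy below is not the Chorus code; it fans a query out to simulated workers (stand-ins for Mechanical Turk participants, with `None` playing the role of a timed-out worker) and keeps the most commonly proposed answer:

```python
import collections

# A hypothetical sketch of crowd-backed chat routing, in the spirit of
# Chorus but not its implementation. Workers are plain callables here;
# the real system routes through Hangoutsbot to Mechanical Turk.
def route_query(query, workers):
    answers = []
    for worker in workers:
        reply = worker(query)
        if reply is not None:       # None stands in for a timed-out worker
            answers.append(reply)
    if not answers:
        return "Sorry, no one is available right now."
    # Keep the answer proposed most often -- a crude consensus filter.
    return collections.Counter(answers).most_common(1)[0][0]

# Three simulated workers; the second one never responds.
workers = [
    lambda q: "Try the taqueria on 5th Ave",
    lambda q: None,
    lambda q: "Try the taqueria on 5th Ave",
]
print(route_query("Where should I eat near the office?", workers))
```

The sketch also makes the paper’s pain points visible: the timeout and the consensus vote exist precisely because human workers, unlike bots, stall, disagree, and occasionally editorialize.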

Cynthia Murrell, September 25, 2017
