CEOs Hyped on AI, but Not Many Deploy It

October 17, 2017

How long ago was big data the popular buzzword?  It was not that long ago, but now it has been replaced by artificial intelligence and machine learning.  Whenever a buzzword is popular, CEOs and other leaders become obsessed with implementing it within their own organizations.  Fortune opens up about the truth of artificial intelligence and its real deployment in the editorial, “The Hype Gap In AI”.

Organization leaders have high expectations for artificial intelligence, but the reality falls well below them.  According to a survey cited in the editorial, 85% of executives believe that AI will change their organizations for the better, but only one in five has actually implemented AI in any part of the business.  Only 39% actually have an AI strategy in place.

Hype about AI and its potential is all over the business sector, but very few executives really understand its current capabilities.  Even fewer know how they can actually use it:

But actual adoption of AI remains at a very early stage. The study finds only about 19% of companies both understand and have adopted AI; the rest are in various stages of investigation, experimentation, and watchful waiting. The biggest obstacle they face? A lack of understanding —about how to adapt their data for algorithmic training, about how to alter their business models to take advantage of AI, and about how to train their workforces for use of AI.

Organizations view AI as an end-all solution, much as big data was the end-all solution a few years ago.  What is even worse is that while big data may have had its difficulties, understanding it was simpler than understanding AI.  The way executives believe AI will transform their companies is akin to a science fiction solution that is still very much in the realm of the imagination.

Whitney Grace, October 17, 2017

Skepticism for Google Micro-Moment Marketing Push

October 13, 2017

An article at Street Fight, “The Fallacy of Google’s ‘Micro-Moment’ Positioning,” calls out Google’s “micro-moments” for the gimmick that it is. Here’s the company’s definition of the term they just made up: “an intent-rich moment when a person turns to a device to act on a need—to know, go, do, or buy.” In other words, any time a potential customer has a need and picks up their smartphone looking for a solution. For Street Fight’s David Mihm and Mike Blumenthal, this emphasis seems like a distraction from the failure of Google’s analytics to provide a well-rounded view of the online consumer. In fact, such oversimplification could hurt businesses that buy into the hype. In their dialogue format, they write:

David: [The term “micro-moments”] reduces all consumer buying decisions to thoughtless reflexes, which is just not reality, and drives all creative to a conversion-focused experience, which is only appropriate for specific kinds of keywords or mobile scenarios.  It’s totally IN-appropriate for display or top-of-funnel advertising. I also think it’s intended to create a bizarre sense of panic among marketers — “OMG, we have to be present at every possible instant someone might be looking at their phone!” — which doesn’t help them think strategically or make the best use of their marketing or ad spend.

Mike: I agree. If you don’t have a sound, broad strategy no micro management of micro moments will help. To some extent I wonder if Google’s use of the term reflects the limits of their analytics to yet be able to provide a more complete picture to the business?

David: Sure, Google is at least as well-positioned as Amazon or Facebook to provide closed-loop tracking of purchase behavior. But I think it reflects a longstanding cultural worldview within the company that reduces human behavior to an algorithm. “Get Notification. Buy Thing.” or “See Ad. Buy Thing.”  That may work for the “head” of transactional behavior but the long tail is far messier and harder to predict. Much as Larry Page would like us to be, humans are never going to be robots.

Companies that recognize the difference between consumers and robots have a clear edge in this area, no matter how Google tries to frame the issue. The authors compare Google’s blind spot to Amazon’s ease-of-use emphasis, noting the latter seems to better understand where customers are coming from. They also ponder the recent alliance between Google and Walmart to provide “voice-activated shopping” with a bit of skepticism. See the article for more of their reasoning.

Cynthia Murrell, October 13, 2017

Smart Software with a Swayed Back Pony

October 1, 2017

I read “Is AI Riding a One-Trick Pony?” and felt those old riding sores again. Technology Review’s nifty “new” technology is actually old: Bayesian methods date from the 18th century. The MIT write up has pegged Geoffrey Hinton, a beloved producer of artificial intelligence talent, as the flag bearer for the great man theory of smart software.

Dr. Hinton is a good subject for study. But the need to generate clicks and zip means that the quasi-academic world of big time universities may be engaged in “practical” public relations. For example, the write up praises Dr. Hinton’s method of “back propagation.” At the same time, the MIT publication describes the method behind the neural networks popular today:

you change each of the weights in the direction that best reduces the error overall. The technique is called “backpropagation” because you are “propagating” errors back (or down) through the network, starting from the output.

This makes sense. The idea is that the method allows the real world to be subject to a numerical recipe.
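
For those who want to see the numerical recipe in action, below is a minimal sketch of backpropagation on a tiny two-layer network. The code is mine, not Dr. Hinton’s; the XOR data, layer sizes, and learning rate are arbitrary choices for illustration.

```python
# A minimal backpropagation sketch: a tiny two-layer network fit to XOR.
# Each update nudges the weights in the direction that best reduces the
# overall error, with the error "propagated" back from the output layer.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output
lr = 1.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: start from the output error and push it back
    # (or down) through the network, as the quote says.
    output_delta = (output - y) * output * (1 - output)
    hidden_delta = (output_delta @ W2.T) * hidden * (1 - hidden)

    # Nudge each weight in the direction that reduces the error.
    W2 -= lr * hidden.T @ output_delta
    b2 -= lr * output_delta.sum(axis=0)
    W1 -= lr * X.T @ hidden_delta
    b1 -= lr * hidden_delta.sum(axis=0)

print(output.round(2).ravel())  # should approach [0, 1, 1, 0]
```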

The write up states:

Neural nets can be thought of as trying to take things—images, words, recordings of someone talking, medical data—and put them into what mathematicians call a high-dimensional vector space, where the closeness or distance of the things reflects some important feature of the actual world.

Yes, reality. The way the brain works. A way to make software smart. Indeed a one trick pony which can be outfitted with a silver bridle, a groomed mane and tail, and black liquid shoe polish on its dainty hooves.

The sway back? A genetic weakness. A one trick pony with a sway back may not be able to carry overweight kiddies to the Artificial Intelligence Restaurant, however.

MIT’s write up suggests there is a weakness in the method; specifically:

these “deep learning” systems are still pretty dumb, in spite of how smart they sometimes seem.

Why?

Neural nets are just thoughtless fuzzy pattern recognizers, and as useful as fuzzy pattern recognizers can be—hence the rush to integrate them into just about every kind of software—they represent, at best, a limited brand of intelligence, one that is easily fooled.

The article also points out that:

And though we’ve started to get a better handle on what kinds of changes will improve deep-learning systems, we’re still largely in the dark about how those systems work, or whether they could ever add up to something as powerful as the human mind.

There is hope too:

Essentially, it is a procedure he calls the “exploration–compression” algorithm. It gets a computer to function somewhat like a programmer who builds up a library of reusable, modular components on the way to building more and more complex programs. Without being told anything about a new domain, the computer tries to structure knowledge about it just by playing around, consolidating what it’s found, and playing around some more, the way a human child does.
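
As a thought experiment, here is my toy gloss on that exploration-compression loop; it is a sketch of the idea, not the researcher’s actual algorithm. Programs are sequences of two invented primitives, exploration is random search for programs that hit target numbers, and compression promotes the most common adjacent pair of operations into a reusable library entry.

```python
# A toy exploration-compression loop, invented for illustration.
# Explore: random programs over the current library; keep the shortest
# one that computes each target from 0. Compress: the most frequent
# adjacent pair of ops becomes a new reusable macro.
import random
from collections import Counter

library = {"inc": lambda x: x + 1, "dbl": lambda x: x * 2}

def run(program, x=0):
    for op in program:
        x = library[op](x)
    return x

def explore(target, tries=5000, max_len=8):
    """Play around: try random programs, keep the shortest hit."""
    best = None
    for _ in range(tries):
        prog = random.choices(list(library), k=random.randint(1, max_len))
        if run(prog) == target and (best is None or len(prog) < len(best)):
            best = prog
    return best

def compress(solutions):
    """Consolidate: promote the most common adjacent pair to a macro."""
    pairs = Counter((p[i], p[i + 1]) for p in solutions for i in range(len(p) - 1))
    if pairs:
        (a, b), _ = pairs.most_common(1)[0]
        library[f"{a}+{b}"] = lambda x, a=a, b=b: library[b](library[a](x))

solutions = [explore(t) for t in (3, 5, 6, 10)]
compress([s for s in solutions if s])
print(sorted(library))  # now includes a learned macro such as 'inc+dbl'
```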

We have a braided mane and maybe a combed tail.

But what about that swayed back, the genetic weakness which leads to a crippling injury when the poor pony is asked to haul a Facebook or Google sized child around the ring? What happens if more efficient ways to create training data, replete with accurate metadata and tags for human things like sentiment and context awareness, become affordable, fast, and easy?

My thought is that it may be possible to do a bit of genetic engineering and make the next pony healthier and less expensive to maintain.

Stephen E Arnold, October 1, 2017

New Beyond Search Overflight Report: The Bitext Conversational Chatbot Service

September 25, 2017

Stephen E Arnold and the team at Arnold Information Technology analyzed Bitext’s Conversational Chatbot Service. The BCBS taps Bitext’s proprietary Deep Linguistic Analysis Platform to provide greater accuracy for chatbots regardless of platform.

Arnold said:

The BCBS augments chatbot platforms from Amazon, Facebook, Google, Microsoft, and IBM, among others. The system uses specific DLAP operations to understand conversational queries. Syntactic functions, semantic roles, and knowledge graph tags increase the accuracy of chatbot intent and slotting operations.

One unique engineering feature of the BCBS is that specific Bitext content processing functions can be activated to meet specific chatbot applications and use cases. DLAP supports more than 50 languages. A BCBS licensee can activate additional language support as needed. A chatbot may be designed to handle English language queries, but Spanish, Italian, and other languages can be activated via an instruction.

Dr. Antonio Valderrabanos said:

People want devices that understand what they say and intend. BCBS (Bitext Chatbot Service) allows smart software to take the intended action. BCBS allows a chatbot to understand context and leverage deep learning, machine intelligence, and other technologies to turbo-charge chatbot platforms.

Based on ArnoldIT’s test of the BCBS, tagging accuracy jumped as much as 70 percent. Another surprising finding was that the time required to perform content tagging decreased.

Paul Korzeniowski, a member of the ArnoldIT study team, observed:

The Bitext system handles a number of difficult content processing issues easily. Specifically, the BCBS can identify negation regardless of the structure of the user’s query. The system can understand double intent; that is, a statement which contains two or more intents. BCBS is one of the most effective content processing systems to deal correctly with variability in human statements, instructions, and queries.
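
To make the negation and double-intent points concrete, here is a purely hypothetical illustration of what a downstream parse might look like. The utterance, field names, and schema are invented for this example; they are not Bitext’s actual output format or API.

```python
# Hypothetical intent-and-slot output for an utterance carrying a double
# intent, one of them negated. The schema is invented for illustration.
parse = {
    "utterance": "book a flight to Madrid but don't reserve a hotel",
    "intents": [
        {"intent": "book_flight", "negated": False,
         "slots": {"destination": "Madrid"}},
        {"intent": "reserve_hotel", "negated": True, "slots": {}},
    ],
}

# A chatbot consuming this parse would act only on non-negated intents.
actions = [i["intent"] for i in parse["intents"] if not i["negated"]]
print(actions)  # ['book_flight']
```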

Bitext’s BCBS and DLAP solutions deliver higher accuracy, enable more reliable sentiment analyses, and even output critical actor-action-outcome content. Such data are invaluable for disambiguation in Web and enterprise search applications, in content processing for discovery solutions used in fraud detection and law enforcement, and in consumer-facing mobile applications.

Because Bitext was one of the first platform solution providers, the firm was able to identify market trends and create its unique BCBS service for major chatbot platforms. The company focuses solely on solving problems common to companies relying on machine learning and, as a result, has done a better job delivering such functionality than other firms have.

A copy of the 22-page Beyond Search Overflight analysis is available directly from Bitext at this link on the Bitext site.

Once again, Bitext has broken through the barriers that block multi-language text analysis. The company’s Deep Linguistics Analysis Platform supports more than 50 languages at a lexical level and more than 20 at a syntactic level, and makes the company’s technology available for a wide range of applications in Big Data, Artificial Intelligence, social media analysis, text analytics, and the new wave of products designed for voice interfaces supporting multiple languages, such as chatbots. Bitext’s breakthrough technology solves many complex language problems and integrates machine learning engines with linguistic features. Bitext’s Deep Linguistics Analysis Platform allows seamless integration with commercial, off-the-shelf content processing and text analytics systems. The innovative Bitext system reduces costs for processing multilingual text for government agencies and commercial enterprises worldwide. The company has offices in Madrid, Spain, and San Francisco, California. For more information, visit www.bitext.com.

Kenny Toth, September 25, 2017

Algorithmic Recommendations and Real Journalists: Volatile Combination

September 22, 2017

I love the excitement everyone has for mathy solutions to certain problems. Sure, the math works. What is tough for some to grasp is that probabilities are different from facts like driving one’s automobile into a mine drainage ditch. Fancy math used to figure out who likes what, via clustering or mixing previous choices with information about what “similar” people purchased, is a different animal. The car is in the slime: Yes or no. The recommendation is correct: Well, somewhere between 70 percent and 85 percent most of the time.

That’s a meaningful difference.
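
A minimal sketch of that difference, with purchase data I made up: the ditch check is a binary fact, while the recommendation is a similarity score that is only probably right.

```python
# The ditch is a binary fact; the recommendation is a hedged guess
# derived from co-purchase patterns. All numbers below are invented.
import numpy as np

car_in_ditch = True  # yes or no; no probability about it

# Rows are shoppers, columns are items; 1 means "purchased".
purchases = np.array([
    [1, 1, 0, 1],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
])

def similarity(a, b):
    """Cosine similarity between the purchase columns of two items."""
    va, vb = purchases[:, a], purchases[:, b]
    return va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb))

# "People who bought item 0 may also like..." -- a scored guess,
# never a certainty.
scores = {item: round(float(similarity(0, item)), 2) for item in (1, 2, 3)}
print(car_in_ditch, scores)  # True {1: 0.82, 2: 0.5, 3: 0.5}
```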

I thought about the “car in the slime” example when I read “Anatomy of a Moral Panic”. The write up states:

The idea that these ball bearings are being sold for shrapnel is a reporter’s fantasy. There is no conceivable world in which enough bomb-making equipment is being sold on Amazon to train an algorithm to make this recommendation.

Excellent point.

However, the issue is that many people, not just “real” journalists, overlook the fact that a probability is not the same as the car in the slime. As smart software becomes the lazy person’s way to get information, it is useful to recall that some individuals confuse the outputs of a statistical numerical recipe with reality.

I find this larger issue a bit more frightening than the fact that recommendation engines spit out guesses about what is similar and that some humans misunderstand those guesses.

Stephen E Arnold, September 22, 2017

Trust the Search Black Box and Only the Black Box

September 21, 2017

This article reads like an infomercial for a kitchen appliance.  It asks the same old question: “How much time do you waste searching for relevant content?”  Then it leads into a pitch for Microsoft and some other companies.  BA Insight wrote “The Increasingly Intelligent Search Experience” to be an original article, but frankly it sounds like every spiel to sell a new search algorithm.

After the “hook,” the article runs down the history of Microsoft and faceted search, along with refiners and how revolutionary they were at the time.  Do not get me wrong, this was a revolutionary move, but the article makes it sound like Microsoft invented the entire tool rather than simply adopting it as a strategy.  There is also a brief mention of faceted navigation, then they throw “intelligent search” at us:

Microsoft’s definition of “intelligence” may still be vague, but it’s clear that the company believes its work in machine-learning, when combined with its cloud platform, can give it a leg up over its competitors. The Microsoft Graph and these new intelligent machine-learning capabilities provide personalized insights based on a user’s personal network, project assignments, meeting schedule, and other search and collaboration activities. These features make it possible not only to search using traditional methods and take action based on those results, but for the tools and systems to proactively provide intelligent, personalized, and timely information before you ask for it – based on your profile, permissions, and activity history.

Oh!  Microsoft is so smart that they have come up with something brand new that companies which specialize in search have never thought of before.  Come on, how many times have we seen and read claims like this before?  Microsoft is doing revolutionary things, but not so much in the field of search technology.  They have contributed to its improvement over the years, but if this were such a revolutionary piece of black box software, why has no one else picked it up?

Little black box software has its uses, but mostly for enterprise and closed systems, not the bigger Web.

Whitney Grace, September 21, 2017

A Write Up about Facebook Reveals Shocking Human Weakness

September 18, 2017

What do I need with another write up about Facebook? We use the service to post links to stories in this blog, Beyond Search. My dog has an account to use when a site demands a Facebook user name and password. That’s about it. For me, Facebook is an online service which sells ads and provides useful information to some analysts and investigators. Likes, mindless uploading of images, and obsessive checking of the service? Sorry, not for a 74-year-old in rural Kentucky, thank you very much.

I did read “How Facebook Tricks You Into Trusting Algorithms.”

I noted this statement, which I think is interesting:

The [Facebook] News Feed is meant to be fun, but also geared to solve one of the essential problems of modernity—our inability to sift through the ever-growing, always-looming mounds of information.

Why use Facebook instead of a service like Talkwalker? Here’s the answer:

Who better, the theory goes, to recommend what we should read and watch than our friends? Zuckerberg has boasted that the News Feed turned Facebook into a “personalized newspaper.”

Several observations:

  1. The success of Facebook is less about “friends” and more about anomie, the word, I think, used by Émile Durkheim to describe one aspect of “modern” life.
  2. The human mind, it seems, can form attachments to inanimate objects like Teddy Bears, animate objects like a human or dog, or to simulacra which intermediate for the user between the inanimate and the animate.
  3. Assembling large populations of “customers”, Facebook has a way to sell ads based on human actions as identified by the Facebook monitoring software.

So what?

As uncertainty spikes, the “value” of Facebook will go up. No online service is invulnerable. Ennui, competition, management missteps, or technological change can undermine even the most dominant company.

I am not sure that Facebook “tricks” anyone. The company simply responds to the social opportunity modern life presents to people in many countries.

Build a life via the gig economy? Nah, pretty tough to do.

Generate happiness via Likes? Nah, ping ponging between angst and happiness is the new normal.

Become a viral success? Nah, most folks have a better chance at a Las Vegas casino.

Facebook, therefore, is something that would have to be created if the real Facebook did not exist.

Will Facebook gain more “power”? Absolutely. Human needs are forever. Likes are transient. Keep on clicking. Algorithms will do the rest.

Stephen E Arnold, September 18, 2017

Yandex Adds Deep Neural Net Algorithm

September 18, 2017

One of Google’s biggest rivals, at least in Asia, is the Russian search engine Yandex, and in an effort to stay on top of search, Yandex has added a new algorithm and a few other upgrades.  Neowin explains what the upgrades are in the article, “Yandex Rolls Out Korolev Neural Net Search Algorithm.”  Yandex named its upgraded deep neural network search algorithm Korolev; the company also launched Yandex.Toloka, a mass-scale crowdsourcing platform that feeds search result assessments into MatrixNet.

Korolev was designed to handle long-tail queries in two ways its predecessor Palekh could not.  Korolev delves into a Web page’s entire content, and it can analyze documents a thousand times faster in real time.  It is also designed to learn as it is used, so its accuracy will improve over time.

Korolev had an impressive release and namesake:

The new Korolev algorithm was announced at a Yandex event at the Moscow Planetarium. Korolev is of course named after the Soviet rocket engineer, Sergei Korolev, who oversaw the Sputnik project, 60 years ago, and the mission that saw Yuri Gagarin get to space. Yandex teleconferenced with Fyodor Yurchikhin and Sergey Ryazansky who are currently representing Russia on the International Space Station.

Yandex is improving its search engine results and services to keep on top of the industry and technology.

Whitney Grace, September 18, 2017

My Feed Personalization a Step Too Far

September 15, 2017

In an effort to be even more user-friendly, and to further encourage a narcissistic society, Google now allows individuals to ‘follow’ or ‘unfollow’ topics, delivered daily to their devices, as they deem them interesting or uninteresting. SEJ explains the new feature, an enhancement of Google’s ‘my feed’ intended to personalize news.

As explained in the article,

Further advancements to Google’s personalized feed include improved machine learning algorithms, which are said to be more capable of anticipating what an individual may find interesting. In addition to highlighting stories around manually and algorithmically selected topics of interest, the feed will also display stories trending in your area and around the world.

That seems like a great way to keep people current on topics ranging geographically, politically, and culturally, but with the addition of ‘follow’ and ‘unfollow’, once again, individuals can reduce their world to a series of pop-star updates and YouTube hits. Isn’t it contradictory to suggest topics and stories in an effort to keep an individual informed of the world around them, yet allow them to stop the suggestions as they appear boring or unfamiliar? Now, Google, you can do better.

Catherine Lamsfuss, September 15, 2017

Markov: Maths for the Newspaper Reader

September 14, 2017

Remarkable. I read a pretty good write up called “That’s Maths: Andrey Markov’s Brilliant Ideas Are Still Bearing Fruit.” I noted the source of the article: The Irish Times. A “real” newspaper. Plus it’s Irish. Quick, name a great Irish mathematician. I like Sir William Rowan Hamilton, whom my slightly addled mathy relative Vladimir Igorevich Arnold and his boss/mentor/leader of semi-clothed winter hikes, Andrey Kolmogorov, thought was an okay guy.

Markov liked literature. Well, more precisely, he liked to count letter frequencies and occurrences in Russian novels like everyone’s fave Eugene Onegin. His observations fed his insight that a Markov Process or Markov Chain was a useful way to analyze probabilities in certain types of data. Applications range from making IBM Watson great again to helping outfits like Sixgill generate useful outputs. (Not familiar with Sixgill? I cover the company in my forthcoming lecture at the TechnoSecurity & Digital Forensics Conference next week.)

I noted this passage, which I thought was sort of accurate, or at least close enough for readers of “real” newspapers:

For a Markov process, only the current state determines the next state; the history of the system has no impact. For that reason we describe a Markov process as memoryless. What happens next is determined completely by the current state and the transition probabilities. In a Markov process we can predict future changes once we know the current state.
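
Here is a minimal sketch of that memoryless property in code; the weather states and the transition numbers are invented for the demo.

```python
# A Markov process in miniature: the next state is sampled from
# transition probabilities keyed on the current state alone. The states
# and probabilities are made up for this demo.
import random

transitions = {
    "sunny": [("sunny", 0.8), ("rainy", 0.2)],
    "rainy": [("sunny", 0.4), ("rainy", 0.6)],
}

def step(state):
    """History plays no role: only the current state picks the weights."""
    outcomes, weights = zip(*transitions[state])
    return random.choices(outcomes, weights=weights)[0]

state, path = "sunny", []
for _ in range(10):
    state = step(state)
    path.append(state)
print(path)  # e.g. ['sunny', 'sunny', 'rainy', ...]
```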

The write up does not point out that the Markov Process becomes even more useful when applied to Bayesian methods enriched with some Laplacian procedures. Now stir in the nuclear industry’s number one with a bullet, the Monte Carlo method, and mix the ingredients. In my experience, and that of my dear but departed relative, one can do a better job of predicting what’s next than a bookie at the Churchill Downs Racetrack. MBAs on Wall Street have other methods for predicting the future; namely, chatter at the NYAC or some interactions with folks in the know about an important financial jet blast before ignition.

A happy quack to the Irish Times for running a useful write up. My great uncle would emit a grunt, which is as close as he came to saying, “Good job.”

Stephen E Arnold, September 14, 2017
