IBM Wrestling with Watson

January 8, 2014

“IBM Struggles to Turn Watson into Big Business” warrants a USA Today-style treatment from the Wall Street Journal. You can find the story in the hard copy of the newspaper on pages A1 and A2. I saw a link to the item online at http://on.wsj.com/1iShfOG, but you may have to pay to read it or chase down a Penguin-friendly instance of the article.

The main point is that IBM targeted $10 billion in Watson revenue by 2023. By my reckoning, Watson has generated less than $100 million in revenue since the system “won” the Jeopardy game show.

The Wall Street Journal article is interesting because it contains a number of semantic signals, for example:

  • The use of the phrase “in a ditch” in reference to a project at the University of Texas M.D. Anderson Cancer Center
  • The statement “Watson is having more trouble solving real-life problems”
  • The revelation that “Watson doesn’t work with standard hardware”
  • An allegedly accurate quote from a client that says “Watson initially took too long to learn”
  • The assertion that “IBM reworked Watson’s training regimen”
  • The sprinkling of “coulds” and “ifs”

I came away from the story with a sense of déjà vu. I realized that over the last 25 years I have heard similar information about other “smart” search systems. The themes run through time the way a bituminous coal seam threads through the crust of the earth. When one of these seams catches fire, there are few inexpensive and quick ways to put out the fire. Applied to Watson, my hunch is that the cost of getting Watson to generate $10 billion in revenue is going to be a very big number.

The Wall Street Journal story references the need for humans to learn a topic and then train Watson on it. When Watson goes off track, more humans have to correct it. I want to point out that training a smart system on a specific corpus of content is tricky. Algorithms can be quite sensitive to small errors in initial settings. Over time, the algorithms do their thing and wander. This translates into humans who have to monitor the smart system to make sure it does not output information with confidence scores that are wrong or undifferentiated. The Wall Street Journal touches on this state of affairs in this passage:

In a recent visit, [a Sloan Kettering oncologist] pulled out an iPad and showed a screen from Watson that listed three potential treatments. Watson was less than 32% confident that any of them were [sic] correct.

Then the Wall Street Journal reported that tweaking Watson was tough, saying:

The project initially ran awry because IBM’s engineers and Anderson’s doctors didn’t understand each other.

No surprise, but the fix just adds to the costs of the system. The article revealed:

IBM developers now meet with doctors several times a week.

Why is this Watson write-up intriguing to me? There are four reasons:

First, the Wall Street Journal makes clear that dreams about dollars from search and content processing are easy to inflate and tough to deliver. Most search vendors and their stakeholders discover the difference between marketing hyperbole and reality.

Second, the Watson system is essentially dependent on human involvement. The objective of certain types of smart software is to reduce the need for human involvement. Watching Star Trek and Spock is not the same as delivering advanced systems that work and are affordable.

Third, the revenue generated by Watson is actually pretty good. Endeca hit $100 million between 1998 and 2011, when it was acquired by Oracle. Autonomy achieved $800 million between 1996 and 2011, when it was purchased by Hewlett Packard. Watson has been available for only a couple of years. The problem is that the $10 billion goal appears to be out of reach even for a company with IBM’s need for a hot new product and the resources to sell almost anything to large organizations.

Fourth, Watson is walking down the same path that STAIRS III, an early IBM search system, followed. IBM embraced open source to help reduce the cost of delivering basic search. Now IBM is finding that the value-adds are more difficult than key word matching and Boolean centric information retrieval. When a company does not learn from its own prior experiences in content processing, the voyage of discovery becomes more risky.

Net net: IBM has its hands full. I am confident that an azure chip consultant and a couple of 20-somethings can fix up Watson in a nonce. But if remediation is not possible, IBM may vie with Hewlett Packard as the pre-eminent example of the perils of the search and content processing business.

Stephen E Arnold, January 8, 2014
