IBM Watson: Going Back to the Jeopardy Thing
October 28, 2020
IEEE Spectrum ran an interview which I thought was a trifle unusual. Watson is going to modernize legacy code. How much of the legacy code is the work of IBM programmers and acolytes trained in the ways of Big Blue: JCL incantations, chants for PL/I, and abracadabra for Assembler? What about the code for the US air traffic control system? What about the code for the AS/400, a machine series I have lost track of in the mists of marketing? I remember rocking on with RPG.
The article has a killer SEO-centric title.
What, pray tell, was the first challenge IBM Watson successfully resolved? Maybe winning the Jeopardy game show. I keep thinking about the wonders of television post-production for programs which shoot a week’s worth of goodness in one day. The behind-the-scenes Avid users labor away to produce a “real” TV show. Sorry. I remain skeptical.
The article presents five questions. These are not exactly colloquial. The wording is similar to that used in semi-scripted reality TV programs. The answers are IBM-ish. Please, read and enjoy the original document. I will focus on two of the questions. Yes, I selected the ones with the most Watson goodness based on my experience with the giant of White Plains.
The first question probes the darned exciting history of IBM Watson and cancer. As I recall, some of the oncologists in Houston’s medical community were not thrilled with the time required to explain cancer to IBM analysts and slightly less thrilled with the outputs. Hasta la vista, Watson. The article explains IBM Watson and healthcare using wordage like this:
The use of AI in healthcare is still evolving, and it’s a journey. To expect AI to be able to give the right answer in all diagnosis scenarios is expecting too much. The technology has not reached that level yet. However, that’s precisely why we say it’s more about augmenting the healthcare experts than it is about replacing in many ways.
My “yeah, but” is a memory of an IBM Watson presentation which asserted that Watson could deliver actionable diagnoses. I know I am getting old, but I recall those assurances. That presentation gave me the idea for the “Weakly Watson” series of articles in this blog. There were some crazy attempts to make IBM Watson relevant: a free-to-use model, an application to match dogs with dog owners for a festival in Mexico, etc., etc.
The second question I want to highlight concerns natural language processing (!) and content processing. Here’s the snippet from the IBMer’s answer I circled with my Big Blue pen:
Roughly speaking, rule-based systems will be successful in translating somewhere between 50 to 60 percent of a program. It is true that part of the program can be translated reasonably well, however, that still leaves half of the program to be translated manually, and that remaining 50 percent is the hardest part, typically involving very complex rules. And that’s exactly where AI kicks in because it can act like humans.
There you go. AI “can act like humans.” Tell that to the people shafted by AI systems as documented in Weapons of Math Destruction.
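For the curious, here is what “rule-based” translation amounts to: a pile of pattern-matching rules and a shrug for everything the rules do not cover. The sketch below is my own toy illustration, not IBM’s method; the COBOL-ish statements and the regex rules are invented for the example.

```python
# Hypothetical, minimal "rule-based" legacy-code translator.
# A few regex rules map COBOL-style statements to Python; anything
# the rules miss is punted to a human -- the "remaining 50 percent."
import re

RULES = [
    # MOVE 5 TO TOTAL.   ->  TOTAL = 5
    (re.compile(r"MOVE\s+(\S+)\s+TO\s+(\S+)\."), r"\2 = \1"),
    # ADD 1 TO TOTAL.    ->  TOTAL = TOTAL + 1
    (re.compile(r"ADD\s+(\S+)\s+TO\s+(\S+)\."), r"\2 = \2 + \1"),
    # DISPLAY "HELLO".   ->  print("HELLO")
    (re.compile(r"DISPLAY\s+(.+)\."), r"print(\1)"),
]

def translate(line: str) -> str:
    """Apply the first matching rule; leave unmatched lines for a human."""
    stripped = line.strip()
    for pattern, template in RULES:
        if pattern.fullmatch(stripped):
            return pattern.sub(template, stripped)
    return f"# TODO (manual rewrite): {stripped}"

legacy = [
    'MOVE 5 TO TOTAL.',
    'ADD 1 TO TOTAL.',
    'DISPLAY "TOTAL IS".',
    'PERFORM UNTIL TOTAL > 10 ... END-PERFORM.',  # no rule covers this
]

for stmt in legacy:
    print(translate(stmt))
```

Everything the regexes miss lands in the TODO pile, which is roughly the hard half the IBMer says is “exactly where AI kicks in.” Whether AI actually handles it like a human is the question.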
Net net: Where’s IBM going with Watson? I think anywhere money can be generated. Game shows are probably less complex than addressing encrypted text messages and figuring out what’s in a streaming video in real time.
Who knows? Maybe Lucene, acquired technology from outfits like Vivisimo, and home brew code from IBM Almaden can work miracles.
Stephen E Arnold, October 28, 2020