Smart Software and Old School Technology
August 22, 2018
It feels strange to say that anything analog is a trend in artificial intelligence, but that certainly seems to be the case in one segment. According to reports, there’s actually a way for AI to get faster and more accurate by indulging in some old timey thinking. We learned more from a recent Kurzweil article, “IBM Researchers Use Analog Memory to Train Deep Neural Networks Faster and More Efficiently.”
According to the story:
“IBM researchers used large arrays of non-volatile analog memory devices (which use continuously variable signals rather than binary 0s and 1s) to perform computations. Those arrays allowed the researchers to create, in hardware, the same scale and precision of AI calculations that are achieved by more energy-intensive systems in software, but running hundreds of times faster and at hundreds of times lower power…”
This is an intriguing development for AI and machine learning. Next Platform took a look at this news as well and noted that while “these efforts focused on integrating analog resistive-type electronic memories onto CMOS wafers, they also look at photonic-based devices and systems and how these might fit into the deep learning landscape.” We’re excited to see where this development goes and what companies will do with greater AI strength.
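For readers who want the gist without the physics, here is a minimal software sketch (our own illustration, not IBM’s code) of why an analog crossbar is attractive: the weights sit in the array as conductances, applying input voltages yields output currents equal to the matrix-vector product in a single physical step, and the price you pay is device noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# The layer's weights, as they would be programmed into the crossbar.
weights = rng.normal(0.0, 0.1, size=(4, 8))
# Programming is imperfect: each device's conductance is slightly off target.
conductances = weights + rng.normal(0.0, 0.01, size=weights.shape)

def crossbar_forward(inputs, noise_std=0.005):
    """One analog-style multiply-accumulate: Ohm's law plus Kirchhoff's current
    law turn 'weights times inputs' into a single read, with added read noise."""
    currents = conductances @ inputs
    return currents + rng.normal(0.0, noise_std, size=currents.shape)

x = rng.normal(size=8)
print("ideal  :", weights @ x)
print("analog :", crossbar_forward(x))
```

The hard part, presumably, is getting training accuracy to survive that noise while keeping the speed and power advantage, which is what the IBM work claims to do.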
Patrick Roland, August 22, 2018
Wake Up Time: IBM Watson and Real Journalists
August 11, 2018
I read “IBM Has a Watson Dilemma.” I am not sure the word “dilemma” embraces the mindless hyperbole about Vivisimo, home-brew code, and open source search technology. The WSJ ran the Watson ads, which presented this Lego collection of code parts with a happy face. You can check out the Watson Dilemma in your dead tree edition of the WSJ on page B1 or pay for online access to the story at www.wsj.com.
The needle point of the story is that IBM Watson’s push to cure cancer ran into the mushy wall composed of cancerous cells. In short, the system did not deliver. In fact, the system created some exciting moments for those trying to handcraft rules to make Dr. Watson work like the TV show and its post production procedures. Why not put patients in jeopardy? That sounds like a great idea. Put experts in a room, write rules, gather training data, and keep it updated. No problem, or so the received wisdom chants.
The WSJ reports in a “real” news way:
…Watson’s recommendations can be wrong.
Yep, hitting 85 percent accuracy may be wide of the mark for some cognitive applications.
From a practical standpoint, numerical recipes can perform some tasks to spin money. Google ads work this magic without too much human fiddling. (No, I won’t say how much is “too much.”)
But IBM believed librarians, uninformed consultants who get their expertise via a Gerson Lehrman phone session, and search engine optimization wizards. IBM management did not look at what search-centric systems can deliver in terms of revenue.
Over the last 10 years, I have pointed out case examples of spectacular search flops. Yet somehow IBM was going to be different.
Sorry, search is more difficult to convert to sustainable revenues than many people believe. I wonder if those firms which have pumped significant dollars into the next best things in information access look at the Watson case and ask themselves, “Do you think we will get our money back?”
My hunch is that the answer is, “No.”
For me, I will stick to humanoid doctors. Asking Watson for advice is not something I want to do.
But if you have cancer, why not give IBM Watson a whirl. Let me know how that works out.
Stephen E Arnold, August 11, 2018
IBM Embraces Blockchain. Watson Watches
August 10, 2018
IBM recently announced the creation of LedgerConnect, a blockchain-powered banking service. This is an interesting move for a company that previously seemed to waver on whether it wanted to associate with this technology, which is most famous for its links to cryptocurrency. However, the pairing actually makes sense, as we discovered in a recent IT Pro Portal story, “IBM Reveals Support Blockchain App Store.”
According to an IBM official:
“On LedgerConnect financial institutions will be able to access services in areas such as, but not limited to, know your customer processes, sanctions screening, collateral management, derivatives post-trade processing and reconciliation and market data. By hosting these services on a single, enterprise-grade network, organizations can focus on business objectives rather than application development, enabling them to realize operational efficiencies and cost savings across asset classes.”
This comes in addition to recent news that some of the biggest banks on the planet are already using blockchain for a variety of needs. This includes the story that the Agricultural Bank of China has started issuing large loans using the technology. In fact, of the 26 publicly owned banks in China, nearly half are using blockchain. IBM looks conservative when you think of it like that, which is just where IBM likes to be. Watson, we believe, is watching, able to answer questions about the database du jour.
Patrick Roland, August 10, 2018
IBM Watson Workspace
August 6, 2018
I read “What Is Watson Workspace?” I have been assuming that WW is a roll-up of:
- IBM Lotus Connections
- IBM Lotus Domino
- IBM Lotus Mashups
- IBM Lotus Notes
- IBM Lotus Quickr
- IBM Lotus Sametime
The write-up explains how wrong I am (yet again; such a surprise for a person who resides in rural Kentucky). The write-up states:
IBM Watson Workspace offers a “smart” destination for employees to collaborate on projects, share ideas, and post questions, all built from the ground up to take advantage of Watson’s cognitive computing abilities.
Yeah, but I thought the Lotus products provided these services.
How silly of me.
The difference is that WW includes cognitive APIs; a hypothetical sketch of what one such call might look like follows the list below. Sounds outstanding. I can:
- Draw insights from conversations
- Turn conversations into actions
- Access video conferencing
- Customize Watson Workspace.
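If you are wondering what “draw insights from conversations” might look like from a developer’s chair, here is the promised sketch. The base URL, endpoint path, token handling, and response fields are our own assumptions for illustration; they are not the documented Watson Workspace API.

```python
# Hypothetical illustration only. The endpoint, payload, and fields below are
# assumptions for the sake of the sketch, not the real Watson Workspace API.
import requests

WORKSPACE_API = "https://example.invalid/workspace/v1"  # placeholder base URL
TOKEN = "REPLACE_WITH_OAUTH_TOKEN"                      # placeholder credential

def summarize_conversation(space_id, messages):
    """Send a conversation to a hypothetical 'insights' endpoint and return
    the themes and suggested action items it claims to extract."""
    response = requests.post(
        f"{WORKSPACE_API}/spaces/{space_id}/insights",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"messages": messages},
        timeout=30,
    )
    response.raise_for_status()
    payload = response.json()
    return payload.get("themes", []), payload.get("actions", [])

themes, actions = summarize_conversation(
    "space-123",
    ["Can we ship the report Friday?", "Only if legal signs off today."],
)
print(themes, actions)
```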
When I was doing a little low-level work for one of the US government agencies (maybe it was the White House?), I recall sitting in a briefing where these functions were explained. A short time thereafter I had the thankless job of reviewing a minor contract to answer an almost irrelevant question. Guess what? The “workspace” did not contain the email or the attachments I sought. The explanation, offered to me by someone from IBM in Gaithersburg, was that the gap was not the fault of the IBM system.
Doc Watson Says: Take Two Big Blue Pills and Call Me in the Morning… If You Are Alive
August 1, 2018
Oh, dear. AI technology has great potential for good, but even IBM Watson is not perfect, it seems. Gizmodo reports, “IBM Watson Reportedly Recommended Cancer Treatments that Were ‘Unsafe and Incorrect’.” The flubs were found during an evaluation of the software, not within a real-world implementation. (We think.) Still, it is a problem worth keeping an eye on. Writer Jennings Brown cites a report by Stat News, which reviewed some 2017 documents from IBM Watson’s former deputy health chief Andrew Norden; those documents were reportedly also provided to IBM Watson Health’s management. We’re told:
“One example in the documents is the case of a 65-year-old man diagnosed with lung cancer, who also seemed to have severe bleeding. Watson reportedly suggested the man be administered both chemotherapy and the drug ‘Bevacizumab.’ But the drug can lead to ‘severe or fatal hemorrhage,’ according to a warning on the medication, and therefore shouldn’t be given to people with severe bleeding, as Stat points out. A Memorial Sloan Kettering (MSK) Cancer Center spokesperson told Stat that they believed this recommendation was not given to a real patient, and was just a part of system testing. …According to the report, the documents blame the training provided by IBM engineers and on doctors at MSK, which partnered with IBM in 2012 to train Watson to ‘think’ more like a doctor. The documents state that—instead of feeding real patient data into the software—the doctors were reportedly feeding Watson hypothetical patients data, or ‘synthetic’ case data. This would mean it’s possible that when other hospitals used the MSK-trained Watson for Oncology, doctors were receiving treatment recommendations guided by MSK doctors’ treatment preferences, instead of an AI interpretation of actual patient data.”
Houston, we have a problem. Let that be a lesson, folks—always feed your AI real, high-quality case data. Not surprisingly, doctors who have already invested in Watson for Oncology are unhappy about the news, saying the technology can now only be used to supply an “extra opinion” when human doctors disagree. Sounds like a plan or common sense.
Cynthia Murrell, August 1, 2018
IBM Turns to Examples to Teach AI Ethics
July 31, 2018
It seems that sometimes, as with humans, the best way to teach an AI is by example. That’s one key takeaway from VentureBeat’s article, “IBM Researchers Train AI to Follow Code of Ethics.” The need to program a code of conduct into AI systems has become clear, but finding a method to do so has proven problematic. Efforts to devise rules and teach them to systems are way too slow, and necessarily leave out many twists and turns of morality that (most) humans understand instinctively. IBM’s solution is to make the machine draw conclusions for itself by studying examples. Writer Ben Dickson specifies:
“The AI recommendation technique uses two different training stages. The first stage happens offline, which means it takes place before the system starts interacting with the end user. During this stage, an arbiter gives the system examples that define the constraints the recommendation engine should abide by. The AI then examines those examples and the data associated with them to create its own ethical rules. As with all machine learning systems, the more examples and the more data you give it, the better it becomes at creating the rules. … The second stage of the training takes place online in direct interaction with the end user. Like a traditional recommendation system, the AI tries to maximize its reward by optimizing its results for the preferences of the user and showing content the user will be more inclined to interact with. Since satisfying the ethical constraints and the user’s preferences can sometimes be conflicting goals, the arbiter can then set a threshold that defines how much priority each of them gets. In the [movie recommendation] demo IBM provided, a slider lets parents choose the balance between the ethical principles and the child’s preferences.”
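To make the two-stage idea concrete, here is a minimal sketch in Python under our own assumptions. It mirrors the structure Dickson describes (offline arbiter examples yielding rules, then an online blend of rules and user preference via a slider), but it is not IBM’s implementation, and the toy rule-learning step is deliberately crude.

```python
# A minimal sketch of the two-stage idea described above, not IBM's code.
# Stage 1 (offline): infer simple content rules from an arbiter's labeled examples.
# Stage 2 (online): score items by user preference, then blend that score with
# the rule check using a slider-style weight, as in the movie-recommendation demo.

from dataclasses import dataclass

@dataclass
class Movie:
    title: str
    tags: frozenset
    predicted_rating: float  # stand-in for a learned preference model, in [0, 1]

# Stage 1: arbiter examples -> rules. The "rule" here is just the set of tags
# that appear only in disallowed examples.
arbiter_examples = [
    ({"animation", "family"}, True),
    ({"violence", "thriller"}, False),
    ({"family", "comedy"}, True),
    ({"violence", "horror"}, False),
]
allowed_tags = set().union(*(tags for tags, ok in arbiter_examples if ok))
banned_tags = set().union(*(tags for tags, ok in arbiter_examples if not ok)) - allowed_tags

def ethics_score(movie):
    return 0.0 if movie.tags & banned_tags else 1.0

# Stage 2: online ranking with a parent-controlled slider in [0, 1].
def rank(movies, slider=0.7):
    def blended(movie):
        return slider * ethics_score(movie) + (1 - slider) * movie.predicted_rating
    return sorted(movies, key=blended, reverse=True)

catalog = [
    Movie("Space Pups", frozenset({"animation", "family"}), 0.6),
    Movie("Chainsaw Alley", frozenset({"violence", "horror"}), 0.9),
]
for movie in rank(catalog):
    print(movie.title)
```

The interesting design choice is the slider: set it to 1.0 and the arbiter’s rules are effectively absolute; set it lower and user preference can leak past them, which is exactly the tension between constraints and preferences the article flags.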
We’re told the team is also working to use more complex systems than the yes/no model, ones based on ranked priorities instead, for example. Dickson notes the technique can be applied to many other purposes, like calculating optimal drug dosages for certain patients in specific environments. It could also, he posits, be applied to problems like filter bubbles and smartphone addiction.
Beyond Search wonders if IBM’s ethical methods apply to patent enforcement, staff management of those over 55 years old, and unregulated blockchain services. Annoying questions? I hope so.
Cynthia Murrell, July 31, 2018
IBM and a University Tie Up or Tie Down
July 26, 2018
I wanted to comment about the resuscitation of IBM’s cancer initiative at the Veterans Administration. But that’s pure Watson, and I think Watson has become old news.
A more interesting “galactico” initiative at IBM is blockchain.
What’s bigger than Watson?
Blockchain. Well, that’s the hope.
IBM is grasping tightly to blockchain technology, this time through an academic partnership, we learn in CoinDesk’s piece, “IBM Teams with Columbia to Launch Blockchain Research Center.” Located on the Manhattan campus of Columbia University, the center hopes to speed the development of blockchain apps and cultivate education initiatives. Writer Wolfie Zhao elaborates:
“A dedicated committee comprised of both Columbia faculty members and IBM research scientists will start reviewing proposals for blockchain ‘curriculum development, business initiatives and research programs’ later this year. In addition, the center will advise on regulatory issues for startups in the blockchain space and provide internship opportunities to improve technical skills for students and professionals with an interest in the tech.”
Zhao also notes this move fits into a larger trend:
“The announcement marks the latest effort by the blockchain industry to invest in a top-tier university in the U.S. to accelerate blockchain understanding and adoption. As reported by CoinDesk in June, San Francisco-based distributed ledger startup Ripple said it will invest $2 million in blockchain research initiatives in the University of Texas at Austin in the next five years, as part of its pledge to invest $50 million in worldwide institutions.”
For those who are interested in the University of Texas at Austin’s Blockchain Initiative, there is more information here, via the university’s McCombs School of Business. Ripple, by the way, was founded in 2012 specifically to capitalize on blockchain technology. Though it is indeed based in San Francisco, the company also maintains offices in New York City and Atlanta.
Perhaps IBM will just buy university research departments before Amazon, Facebook, and Google consume the blockchain academic oxygen?
Cynthia Murrell, July 26, 2018
IBM and Watson
July 23, 2018
I spotted a brief comment about IBM’s recent earnings report. Yep, IBM is doing better. However, “IBM Results Leave Watson Thinking” makes this point:
Artificial intelligence is at the heart of IBM’s long-term strategy, yet its cognitive solutions business experienced a slight decline.
If I had the energy, I would pull from my IBM cognitive archive some of the statements about the huge payoff Watson would deliver, the oddball advertisement showing Watson as chemical symbols, and the news release about the Union Square office. But it is Monday, and I am reluctant to revisit the Watson thing.
The operative word is “decline.”
Stephen E Arnold, July 23, 2018
IBM Demo: Debating Watson
June 29, 2018
IBM once again displays its AI chops—SFGate reports, “IBM Computer Proves Formidable Against 2 Human Debaters.” The project, dubbed Project Debater, shows off the tech’s improvements in mimicking human-like speech and reasoning. At a recent demonstration, neither the AI nor the two humans knew the topics beforehand: space exploration and telemedicine. According to one of the human participants, the AI held its own pretty well, even if it did rely too much on blanket statements. Writer Matt O’Brien says this about IBM’s approach:
“Rather than just scanning a giant trove of data in search of factoids, IBM’s latest project taps into several more complex branches of AI. Search engine algorithms used by Google and Microsoft’s Bing use similar technology to digest and summarize written content and compose new paragraphs. Voice assistants such as Amazon’s Alexa rely on listening comprehension to answer questions posed by people. Google recently demonstrated an eerily human-like voice assistant that can call hair salons or restaurants to make appointments…But IBM says it’s breaking new ground by creating a system that tackles deeper human practices of rhetoric and analysis, and how they’re used to discuss big questions whose answers aren’t always clear. ‘If you think of the rules of debate, they’re far more open-ended than the rules of a board game,’ said Ranit Aharonov, who manages the debater project.”
The demo did not declare any “winner” in the debate, but researchers were able to draw some (perhaps obvious) conclusions: While the software was better at recalling specific facts and statistics to bolster its arguments, humans brought more linguistic flair and the power of personal experience to the field. As for potential applications of this technology, IBM’s VP of research suggests it could be used by human workers to better inform their decisions. Lawyers, specifically, were mentioned.
Keep in mind. Demo.
Cynthia Murrell, June 29, 2018
Artificial Intelligence and the New Normal: Over Promising and Under Delivering
June 15, 2018
IBM has the world’s fastest computer. That’s intriguing. Now Watson can output more “answers” in less time. Pity the poor user who has to figure out what’s right and what’s not so right. Progress.
Perhaps a wave of reason is about to hit the AI field. Blogger Filip Piekniewski forecasts, “AI Winter is Well on its Way.” While the neural-networking approach behind deep learning has been promising, it may fall short of the hype some companies have broadcast. Piekniewski writes:
“Many bets were made in 2014, 2015 and 2016 when still new boundaries were pushed, such as the Alpha Go etc. Companies such as Tesla were announcing through the mouths of their CEO’s that fully self-driving car was very close, to the point that Tesla even started selling that option to customers [to be enabled by future software update]. We have now mid 2018 and things have changed. Not on the surface yet, NIPS conference is still oversold, the corporate PR still has AI all over its press releases, Elon Musk still keeps promising self driving cars and Google CEO keeps repeating Andrew Ng’s slogan that AI is bigger than electricity. But this narrative begins to crack. And as I predicted in my older post, the place where the cracks are most visible is autonomous driving – an actual application of the technology in the real world.”
This post documents a certain waning of interest in deep learning, and notes an apparently unforeseen limit to its scale. Most concerning so far, of course, are the accidents that have involved self-driving cars; Piekniewski examines that problem from a technical perspective, so see the article for those details. Whether the AI field will experience a “collapse,” as this post foresees, or we will simply adapt to more realistic expectations, we cannot predict.
Cynthia Murrell, June 15, 2018

