IBM Hits a Single after Years at Bat

January 20, 2018

IBM reported revenue growth. The company’s news release may have been subject to a staff cutback. Here’s the message I saw when I tried to read the official news release:

[Image: the message displayed in place of the official news release]

I wonder if IBM’s cloud business offers the stability and reliability of offerings from Amazon, Google, or Microsoft.

The Poughkeepsie Journal was happy. I learned:

On Thursday, IBM reported fourth-quarter 2017 total revenue of $22.5 billion, up 4 percent from $21.8 billion the same quarter in 2016.

Growth is good. Better than a loss. However, where did the growth originate? From Harrod’s Creek, mainframes took a deep breath, put those ageing legs in motion, and managed to get on base.

Mainframes!

Strategic imperatives made a contribution; strikeout king IBM Watson, which may be headed to the Louisville Bats, managed about three percent growth.

ZDNet observed:

IBM’s fourth quarter topped expectations and strategic imperative businesses were solid, but Big Blue’s annual revenue is down for the 7th consecutive year.

Back out the money made from currency movements, and Big Blue’s fourth quarter sales were up one percent.

Ginni Rometty, IBM CEO, is quoted as saying:

IBM strengthened our position as the leading enterprise cloud provider and established IBM as the blockchain leader for business. Looking ahead, we are uniquely positioned to help clients use data and AI to build smarter businesses.

Presumably she did not have to access the company’s Web site for the quarterly news release.

The company’s shares went down. That’s confidence.

Stephen E Arnold, January 20, 2018

IBM Disputes Bain Claim

January 12, 2018

I don’t read the Poughkeepsie Journal very often. However, I made a delightful exception this morning. The story “IBM Disputes Report of Redeploying Staffers” reminded me of Robert X Cringely’s The Decline and Fall of IBM and its subsequent hoo-hah. My recollection is that IBM suggested that Mr. Cringely (whom I think of as X) was off base. I am not sure he was.

The Poughkeepsie article reported:

An IBM spokesman disputed an article reporting the company plans to reassign roughly 30 percent of Global Technology Services staffers through attrition this year.

A British online publication reported that Bain was likely to help IBM on its road to recovery.

IBM, according to the Poughkeepsie source, said:

“It’s not accurate,” said Clint Roswell, spokesman for IBM’s Global Technology Services business. He did not give specifics on what information was inaccurate. “The company did not make any announcement and we don’t comment on speculation,” Roswell said. He said IBM hires “many consultants, many of whom make recommendations. It’s as simple as that.”

Okay, where did the British publication’s story originate?

Another question: If IBM hires lots of consultants, why did this particular Bain report trigger a response in the estimable Poughkeepsie newspaper?

My hunch is that a kernel of truth resides in the British report and the IBM denial.

IBM is going to have to do some fancy dancing. Whether Bain, BCG, Booz, McKinsey, or another of the blue chip consulting firms gets the job of fixing IBM, the system and method will lead to the same changes I described in “IBM Watson: Fresh Out of Correct Answers?”

For those who have made it through advanced degree programs, the blue chip consulting firm charm schools, and the on-the-job training with Type A “experts,” the thought processes lead to:

  • Reassessment of internal financial data
  • Calculations to identify cost savings and money making opportunities
  • Ranking of units and their people
  • Reorganizations
  • Sales of certain business units
  • Embedding of consultants in place of existing managers
  • An effort to work directly with the Board of Directors

These types of changes are ones that people working for a company rarely make without the help of outside expertise.

Maybe IBM is on its way to sustainable revenues and impressive growth dusted with healthy profits?

On the other hand, IBM admits it works with lots of advisers. One of those outfits will get the job to fix IBM. The result will be the same sequence of actions identified in the dot points above.

The fourth quarter earnings come out during the week of January 15, 2018. Has IBM returned to its glory days? If so, forget the consultants with repair kits. On the other hand, if the numbers are not exciting, maybe the Bainies or another blue chip outfit will be able to flip on the chain saw and do what has to be done. I think I can safely assert that asking Watson will not be Job One.

Stephen E Arnold, January 12, 2018

IBM Watson: Fresh Out of Correct Answers?

January 11, 2018

As a former laborer in the vineyard of a blue chip, big time, only slightly misunderstood consulting firm, I know when a client throws in the towel.

I read the allegedly accurate write up “Black & Blue: IBM Hires Bain to Cut Costs, Up Productivity.” Let’s assume that the story has the hiring of the Bainies 100 percent correct. (If you see me at one of the law enforcement and intelligence conferences at which I will be speaking in 2018, ask me about the Holiday Inn and Route 128 meetings from the late 1970s. That’s an interesting Bain anecdote in my opinion.)

The write up informed me:

IBM has indicated to senior Global Technology Services management that a third of the global workforce will be “productively redeployed” in 2018 with tens of thousands of personnel “impacted”. Insiders told The Reg that Big Blue had hired consultant Bain & Company to help it plot a way forward for GTS, bringing in external business consultants despite spending $3.5bn to buy PWC in 2002.

Interesting.

Let me share my view of what will happen:

  1. Hiring a big time, blue chip consulting firm will lead to upper management changes. I would not be surprised to see a Bainie become the shadow CEO of the company with other Bainies advising the Board of Directors. The reason? In order to book revenues, one moves up the food chain until the blue chip outfit is at the top of the heap and has a way to punch the cash register keys.
  2. Lots of people will lose their jobs. The logic is brutal. If your unit is not making money or hitting its targets, you are part of the problem. The easiest way to solve the problem is to show the underperformers the door with a friendly “find your future elsewhere, you lucky devil.”
  3. Divestitures will play a role in the remediation effort. If the incumbent management cannot turn a sow’s ear into a silk purse, the consultants will polish it up, whip out some nifty future value diagrams, and sell what Boston Consulting folks once called “dogs.” Bain, like the Boozer, borrowed the BCG quadrant thing, and it will play a part in the Bain solutions.
  4. The stock price will go up. Hey, Bain is like magic dust. Those buy backs should have been used to generate new, sustainable revenue. Now with the Bainies reanalyzing the data, some Wall Street MBAs will see gold in them thar terminations, sell offs, and reorganizations.

Worth watching. If Bain is not on board, at some point another blue chip outfit like McKinsey & Company could implement the same game plan.

In short, IBM is over. I suppose I could ask IBM Watson, but why bother? Time might be better spent trying to land a top job at Big Blue. Are you on Bain’s radar?

Stephen E Arnold, January 11, 2018

Two Senior Citizens Go Steady: IBM and British Telecom Hug in the Cloud

January 10, 2018

I read “BT Offers businesses Direct Access to IBM Cloud Services.” That sounds like an interesting idea. However, BT (the new version of British Telecom) has joined hands with Amazon’s cloud as well. See this Telecompaper item, please.

These tie ups are interesting.

When I learned of BT’s partnering, I thought of an image which I saw on a Knoxville, Tennessee, TV news program. I dug through Bing and located the story “Couple Renews Vows in Nursing Home after 70 Years of Marriage” and this image:

[Image: the couple renewing their vows at the nursing home]

British Telecom opened for business maybe as far back as 1880, depending on how one interprets the history of the British post office. IBM, of course, flipped on its lights in 1911.

The idea that those with some life experience find partnering rewarding underscores the essence of humanity.

Will the going steady evolve into significant, sustainable new revenues?

Where there is a will there is a way. I am tempted to state boldly, “Let’s ask Watson.” But I think I will go with Amazon’s Alexa which will be installed in some Lexus automobiles.

But age has its virtues. A happy quack to WVLT in Knoxville. No pix of the new couple (BT and IBM) were available to me. Darn.

Stephen E Arnold, January 10, 2018

IBM Socrates Wins 2017 Semantic Web Challenge

January 10, 2018

We learn from the press release “Elsevier Announces the Winner of the 2017 Semantic Web Challenge,” posted at PRNewswire, that IBM has taken the top prize in the 2017 Semantic Web Challenge world cup with its AI project, Socrates. The outfit sponsoring the competition is the number one sci-tech publisher, Elsevier. We assume IBM will be happy with another Jeopardy-type win.

Knowledge graphs were the focus of this year’s challenge, and a baseline representing current progress in the field was established. The judges found that Socrates skillfully wielded natural language processing and deep learning to find and check information across multiple web sources. About this particular challenge, the write-up specifies:

This year, the SWC adjusted the annual format in order to measure and evaluate targeted and sustainable progress in this field. In 2017, competing teams were asked to perform two important knowledge engineering tasks on the web: fact extraction (knowledge graph population) [and] fact checking (knowledge graph validation). Teams were free to use any arbitrary web sources as input, and an open set of training data was provided for them to learn from. A closed dataset of facts, unknown to the teams, served as the ground truth to benchmark how well they did. The evaluation and benchmarking platform for the 2017 SWC is based on the GERBIL framework and powered by the HOBBIT project. Teams were measured on a very clear definition of precision and recall, and their performance on both tasks was tracked on a leader board. All data and systems were shared according to the FAIR principles (Findable, Accessible, Interoperable, Reusable).
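
For the curious, the benchmarking arithmetic itself is not exotic. Below is a minimal sketch, in Python, of precision and recall computed over sets of extracted facts. The triples and the score function are invented for illustration; they are not part of the GERBIL framework or the HOBBIT platform named in the release.

```python
# Hypothetical sketch of scoring a fact-extraction run against a hidden
# ground-truth set, in the spirit of the precision/recall benchmarking the
# challenge describes. All facts below are made-up examples.

def score(extracted, ground_truth):
    """Return (precision, recall) for a set of extracted facts."""
    extracted = set(extracted)
    ground_truth = set(ground_truth)
    true_positives = len(extracted & ground_truth)
    precision = true_positives / len(extracted) if extracted else 0.0
    recall = true_positives / len(ground_truth) if ground_truth else 0.0
    return precision, recall

# Toy subject-predicate-object triples standing in for knowledge graph facts.
system_output = {
    ("IBM", "headquartered_in", "Armonk"),
    ("IBM", "founded_in", "1911"),
    ("IBM", "founded_in", "1924"),          # a wrong fact the system asserted
}
truth = {
    ("IBM", "headquartered_in", "Armonk"),
    ("IBM", "founded_in", "1911"),
    ("Elsevier", "publishes", "journals"),  # a true fact the system missed
}

print(score(system_output, truth))  # roughly (0.67, 0.67)
```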

The Semantic Web Challenge has been going on since 2003, organized in cooperation with the Semantic Web Science Association.

Cynthia Murrell, January 10, 2018

Watson and CDC Research Blockchain

December 29, 2017

Oh, Watson!  What will IBM have you do next?  Apparently, you will team up with the Centers for Disease Control and Prevention to research blockchain benefits.  The details about Watson’s newest career appear in Fast Company’s article, “IBM Watson Health Team With the CDC To Research Blockchain.”  Teaming up with the CDC is an extension of the work IBM Watson is already doing with the Food and Drug Administration, exploring owner-mediated data exchange with blockchain.

IBM chief science officer Shahram Ebadollahi explained that the research with the CDC and FDA will lead to blockchain adoption at the federal government level.  By using blockchain, the CDC hopes to discover new ways to use data and expedite federal reactions to health threats.

Blockchain is a very new technology developed to handle sensitive data and cryptocurrency transactions.  It is used for applications that require high levels of security.  Ebadollahi said:

 ‘Blockchain is very useful when there are so many actors in the system,’ Ebadollahi said. ‘It enables the ecosystem of data in healthcare to have more fluidity, and AI allows us to extract insights from the data. Everybody talks about Big Data in healthcare but I think the more important thing is Long Data.’

One possible result is that consumers will purchase a personal health care system like a home security system.  Blockchain could potentially offer a new level of security that everyone from patients to physicians is comfortable with.
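
For readers who want mechanics rather than marketing, here is a minimal sketch of the hash-chaining idea that makes a blockchain ledger tamper evident. The “health records” and helper functions are made up for illustration; this is not how IBM, the FDA, or the CDC store anything.

```python
# Toy hash-chained ledger: each block carries the hash of the previous block,
# so editing an old record invalidates every later link on re-check.
# Purely illustrative; not a production or IBM design.

import hashlib
import json

def make_block(record, previous_hash):
    """Bundle a record with the prior block's hash, then hash the bundle."""
    body = json.dumps({"record": record, "prev": previous_hash}, sort_keys=True)
    return {"record": record, "prev": previous_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

chain = []
prev = "0" * 64  # genesis marker
for record in ["patient A: flu vaccine", "patient B: lab result"]:
    block = make_block(record, prev)
    chain.append(block)
    prev = block["hash"]

def verify(chain):
    prev = "0" * 64
    for block in chain:
        expected = make_block(block["record"], prev)["hash"]
        if block["prev"] != prev or block["hash"] != expected:
            return False
        prev = block["hash"]
    return True

print(verify(chain))           # True
chain[0]["record"] = "edited"  # tamper with an earlier entry
print(verify(chain))           # False
```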

Blockchain is basically big data, except it is a more specific data type.  The applications are the same and it will revolutionize the world just like big data.

Whitney Grace, December 29, 2017

IBM Watson: Now the Personal Assistant You Cannot Harass

December 28, 2017

I miss those wonky and expensive IBM Watson ad campaigns. However, Watson has not gone away. Watson is now available as the IBM Watson Assistant. You will need to be a “developer”, but I would wager that IBM wants you to be working at a Fortune 50 company and looking for a way to spend lots of money for IBM services. You can do magic with the program. Ready to roll? Read the legal “rules” here, not the info about hand crafting “rules” to make the system appear so darned helpful. Oh, one point about rule based systems. These gems have to be thought up, coded, tested, and maintained. Does that sound time consuming? You ain’t seen nothing yet. Artificial intelligence is just so “artificial.” What happens if I haven’t coded my unharassable assistant for a specific task like figuring out how to deanonymize i2p hexchat sessions? Well, you get the idea: The personal assistant is harassing me.
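
To make the point about hand-crafted rules concrete, here is a toy sketch of a rule-based assistant in Python. The patterns and canned replies are hypothetical and have nothing to do with the actual IBM Watson Assistant API; the sketch only shows that every intent must be thought up, coded, and maintained by a human, and anything without a rule falls on the floor.

```python
# Toy rule-based "assistant": hand-written patterns mapped to canned replies.
# Hypothetical rules for illustration only.

import re

RULES = [
    (re.compile(r"\b(weather|forecast)\b", re.I), "Here is today's forecast."),
    (re.compile(r"\b(schedule|meeting)\b", re.I), "Your next meeting is at 10 a.m."),
]

def assistant(utterance):
    for pattern, reply in RULES:
        if pattern.search(utterance):
            return reply
    # Anything nobody wrote a rule for falls through to a shrug.
    return "Sorry, I have no rule for that."

print(assistant("What's the weather like?"))
print(assistant("How do I deanonymize i2p hexchat sessions?"))  # no rule coded
```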

Stephen E Arnold, December 28, 2017

Turning to AI for Better Data Hygiene

December 28, 2017

Most big data is flawed in some way, because humans are imperfect beings. That is the premise behind ZDNet’s article, “The Great Data Science Hope: Machine Learning Can Cure Your Terrible Data Hygiene.” Editor-in-Chief Larry Dignan explains:

The reality is enterprises haven’t been creating data dictionaries, meta data and clean information for years. Sure, this data hygiene effort may have improved a bit, but let’s get real: Humans aren’t up for the job and never have been. ZDNet’s Andrew Brust put it succinctly: Humans aren’t meticulous enough. And without clean data, a data scientist can’t create algorithms or a model for analytics.


Luckily, technology vendors have a magic elixir to sell you…again. The latest concept is to create an abstraction layer that can manage your data, bring analytics to the masses and use machine learning to make predictions and create business value. And the grand setup for this analytics nirvana is to use machine learning to do all the work that enterprises have neglected.

I know you’ve heard this before. The last magic box was the data lake where you’d throw in all of your information–structured and unstructured–and then use a Hadoop cluster and a few other technologies to make sense of it all. Before big data, the data warehouse was going to give you insights and solve all your problems along with business intelligence and enterprise resource planning. But without data hygiene in the first place enterprises replicated a familiar, but failed strategy: Poop in. Poop out.

What the observation lacks in eloquence it makes up for in insight—the whole data-lake concept was flawed from the start since it did not give adequate attention to data preparation. Dignan cites IBM’s Watson Data Platform as an example of the new machine-learning-based cleanup tools, and points to other noteworthy vendors investigating similar ideas—Alation, Io-Tahoe, Cloudera, and HortonWorks. Which cleaning tool will perform best remains to be seen, but Dignan seems sure of one thing—the data that enterprises have been diligently collecting for the last several years is as dirty as a dustbin lid.
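
For flavor, here is a minimal sketch of the unglamorous hygiene work the article describes: duplicates, inconsistent labels, and missing fields. The records, field names, and alias table are invented; this is not the Watson Data Platform or any other vendor’s tooling.

```python
# Toy data-hygiene pass: drop exact duplicates, normalize one label, and flag
# missing fields. Invented records for illustration only.

records = [
    {"id": 1, "name": "Acme Corp", "country": "US"},
    {"id": 1, "name": "Acme Corp", "country": "US"},      # exact duplicate
    {"id": 2, "name": "Globex", "country": "usa"},        # inconsistent label
    {"id": 3, "name": "", "country": "US"},               # missing name
]

COUNTRY_ALIASES = {"usa": "US", "u.s.": "US"}

def clean(rows):
    seen, out, issues = set(), [], []
    for row in rows:
        key = tuple(sorted(row.items()))
        if key in seen:
            issues.append(("duplicate", row["id"]))
            continue
        seen.add(key)
        row = dict(row)
        row["country"] = COUNTRY_ALIASES.get(row["country"].lower(), row["country"])
        if not row["name"]:
            issues.append(("missing name", row["id"]))
        out.append(row)
    return out, issues

cleaned, issues = clean(records)
print(issues)   # [('duplicate', 1), ('missing name', 3)]
```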

Cynthia Murrell, December 28, 2017

IBM Thinks It Can Crack Pharmaceutical Code with AI

December 20, 2017

Artificial intelligence has been tasked with solving every problem from famine to climate change to helping you pick a new favorite song. So, it should come as no surprise that IBM thinks it can revolutionize another industry with AI. We learned exactly what from a Digital Trends story, “IBM’s New AI Predicts Chemical Reactions, Could Revolutionize Drug Development.”

According to the story,

As described in a new research paper, the A.I. chemist is able to predict chemical reactions in a way that could be incredibly important for fields like drug discovery. To do this, it uses a highly detailed data set of knowledge on 395,496 different reactions taken from thousands of research papers published over the years.

Teo Laino, one of the researchers on the project from IBM Research in Zurich, told Digital Trends that it is a great example of how A.I. can draw upon large quantities of knowledge that would be astonishingly difficult for a human to master — particularly when it needs to be updated all the time.
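
As a thought experiment only, here is a toy sketch of the naive alternative to a learned model: looking up the closest known reaction in a reference table. This is emphatically not IBM’s method, and the two reactions are textbook examples chosen for illustration; the point is simply that a corpus of hundreds of thousands of known reactions is the raw material any such system feeds on.

```python
# Toy reaction lookup by reagent overlap. Illustrative only; not IBM's model.

KNOWN_REACTIONS = {
    frozenset({"ethanol", "acetic acid"}): "ethyl acetate + water",
    frozenset({"benzene", "nitric acid"}): "nitrobenzene + water",
}

def predict(reagents):
    """Return the product of the known reaction sharing the most reagents."""
    query = set(reagents)
    best = max(KNOWN_REACTIONS, key=lambda known: len(known & query))
    return KNOWN_REACTIONS[best] if best & query else None

print(predict(["ethanol", "acetic acid"]))  # ethyl acetate + water
```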

It’s an absolutely valid plan, but we aren’t sure if IBM is the one to really pull off this trick. IBM trying to work in big pharma seems kind of like your uncle tinkering on his “inventions” out in the shed. We’d rather see someone whose primary focus is AI and medicine, like Certara, PhinC, and Chem Abstracts.

Patrick Roland, December 20, 2017

IBM AI: Speeding Up One Thing, Ignoring a Slow Thing

December 12, 2017

I read “IBM Develops Preprocessing Block, Makes Machine Learning Faster Tenfold.” I read this statement and took out my trusty Big Blue marketing highlighter felt tip:

“To the best of our knowledge, we are first to have generic solution with a 10x speedup. Specifically, for traditional, linear machine learning models — which are widely used for data sets that are too big for neural networks to train on — we have implemented the techniques on the best reference schemes and demonstrated a minimum of a 10x speedup.” [Emphasis added to make it easy to spot certain semantically-rich verbiage.]

I like the traditional, linear, and demonstrated lingo.

From my vantage point, this is useful, but it is one modest component of a traditional, linear machine learning “model”.

The part which sucks up subject matter experts, time, and money (lots of money) includes these steps:

  1. Collecting domain specific information, figuring out what’s important and what’s not, and figuring out how to match what a person or subsystem needs to know against this domain knowledge
  2. Collecting the information. Sure, this seems easy, but it can be a slippery fish for some domains. Tidy, traditional domains like a subset of technical information make it easier and cheaper to fiddle with word lists, synonym expansion “helpers”, and sources which are supposed to be accurate. Accuracy, of course, is a bit like mom’s apple pie.
  3. Converting the source information into a format which the content processing system can use without choking storage space with exceptions or engaging in computationally expensive conversions which have to be checked by software or humans before pushing the content to the content processing subsystem. (Some outfits fudge by limiting content types. The approach works in some eDiscovery system because the information is in more predictable formats.)

What is the time and money relationship of dealing with these three steps versus the speed up for the traditional machine learning models? In my experience the cost of the three steps identified above is often greater than the cost of the downstream processes. So a 10X speed up in a single process is helpful, but it won’t pay for pizza for the development team.
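
A back-of-envelope calculation with made-up numbers shows the point. If the upstream steps above consume 80 percent of the budget and model training consumes 20 percent, a 10X speedup on training trims the total bill by less than a fifth:

```python
# Back-of-envelope arithmetic with hypothetical cost figures; the 80/20 split
# is an assumption for illustration, not a measured number.

upstream_cost = 80.0    # collecting, curating, converting (steps 1-3 above)
training_cost = 20.0    # the model training step that gets the 10x speedup

before = upstream_cost + training_cost
after = upstream_cost + training_cost / 10.0

print(f"total before: {before}, after: {after}")           # 100.0 -> 82.0
print(f"overall improvement: {(1 - after / before):.0%}")  # 18%
```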

Just my view from Harrod’s Creek, which sees things in a way which is different from IBM marketing and IBM Zurich wizards. Shoot those squirrels before eating them, you hear.

Stephen E Arnold, December 12, 2017
