HP Analysis Urges Mainframe Rip and Replace
October 1, 2009
I found “Staying on Legacy Systems Ends Up Costing IT More” absolutely fascinating. The article appeared on the Ziff Davis Web site. There is a link to a podcast (latency and audio made this tough for me given my age, lousy hearing, and general impatience with serial info streams) and a series of excerpts from the “Briefings Direct” podcast discussion. The sponsor of the podcast, according to the Web site, is Hewlett Packard. HP is on my radar because the company just merged its personal computer and printer businesses. I suppose that will make it untenable for me to describe HP as “the printer cartridge company”. I really liked that description, but now HP is a consulting firm and a PC company. Much better, I suppose.
I abandoned the audio show and jumped to the transcript, which you can obtain by clicking http://interarborsolutions.books.officelive.com/Documents/DoingNothing901.pdf.
The premise of the podcast, in my interpretation, is that smart companies will want to dump legacy hardware and systems for the hot, new hardware and systems available from HP. I understand this type of message. I use them myself. The idea sounds good. The notion of progress is based on the idea that what’s new is better than what came before. I won’t drag out the Jacques Ellul argument that technology creates more technology and more, unexpected problems. I will also ignore the studies of progress such as Gregg Easterbrook’s The Progress Paradox: How Life Gets Better While People Feel Worse, originally published in December 2003, five years before the economic dominos started falling in April 2008. I won’t point out that “legacy” is not defined in a way that helped me understand the premise of the discussion. And, I won’t beat too forcefully on the fuzziness of the word “cost” as the industry experts use the term. But costs are the core of the podcast, so I will have to make a quick dash through the thicket of accounting methods, but not yet.
HP red ink as metaphor for the cost problems of a mainframe to next generation platform solution.
The first idea that snagged me was “cost hasn’t changed”. What changed was the amount of cash available to organizations. I don’t buy this. First, it is not clear what is included in the data to support the generalization. Without an indication of direct and indirect costs, capital, services, and any other costs that are associated with a legacy system, I can’t let the generalization into the argument. Without this premise in place, the rest of the assertions are on thin ice, at least for me.
Second, consider this assertion by one of the HP “transformation” experts:
What’s still there, and is changing today, is the ability to look at a legacy source code application. We have the tools now to look at the code and visualize it in ways that are very compelling. That’s typically one of the biggest obstacles. If you look at a legacy application and the number of lines of code and number of people that are maintaining it, it’s usually obvious that large portions of the application haven’t really changed much. There’s a lot of library code and that sort of thing.
My view is that “obvious” is a word that can be used to create a cloud of unknowing. Mainframe apps that are stable and doing a good-enough job may be valuable precisely because the application has not changed. As one of my neighbors here in Harrods Creek said, “If it ain’t broke, don’t fix it.” In my experience, that applies to mainframe apps that are working. If a mainframe app is broken, then an analysis is required to track down direct and indirect costs, opportunity costs, and fuzzy to be sure, but important going-forward costs. Not much is obvious once one gets rolling down the path of the rip-and-replace approach. In my experience, the reason mainframe apps continue to chug along in insurance companies, certain travel sectors, and some manufacturing firms is because they are predictable, known, and stable. Jumping into a whizzy new world may be fun, but such a step may not be prudent within the context of the business. But HP and its wizards aren’t known for their own rock solid business decisions. I am thinking of the ball drop with AltaVista.com and the most recent mash up of the printer and PC businesses. Ink revenue will make HP’s PC revenues soar, but it won’t change the nature of that low margin business.
Third, the analyst participating in the podcast was Steve Woods, whom I don’t know. If I knew him, I would take the butterfly in biology class approach and ask, “What do you mean by performance reliability?” My experience suggests that mainframe apps can be zippy and reliable. In fact, my old pal IBM has mainframe demos that put most modern options to shame when it comes to certain computational and data processing tasks. Analysts who toss around phrases like “performance reliability” puzzle me. What the heck is the meaning of “performance reliability”? I ask, “How can an untrialed option running a ported mainframe app or alternative be known to be better, faster, or cheaper than what is now in place and working?” The answer for me is, “Well, we don’t know, but this sure sounds great when we are pushing clueless clients to upgrade.” Yep, there is nothing quite as exciting as migrating millions of lines of code to a different platform. In my shop, we take this extreme action only when there is no other option available to us. I don’t care how nifty automated ETL or Cobol tools are. The manual work can be a financial black hole. Not for me, thank you.
Fourth, I found the reference to the McKinsey article specious. McKinsey does research that is a bit like looking in the rear view mirror of a Greyhound bus. The vehicle is big, slow, and not making much progress in a faster paced world. I am not sure McKinsey’s data can be tied to any firm’s use of a mainframe or migrating from that platform to some other platform. The notion that migration from a working mainframe to a new platform will improve a company’s competitive instincts, its agility, or its customer service is unwarranted. McKinsey’s consultants would do the direct and indirect cost analysis. The data drive the McKinsey consultants, not glittering generalities. But that’s why McKinsey is McKinsey and HP is a manufacturer trying to get into engineering and consulting services. The core competencies of HP are losing relevance with each passing day.
Fifth, the suggestion that the Cobol will be rewritten in “efficient” languages like Java and C# is interesting. I also like the references to Dot Net. The explanations that echo the phrase “looking at the code” translate to one thing; that is, billable hours. For goodness sake, how can broad statements about which programming language to use be offered when dealing with legacy applications? Maybe the legacy application is a CICS system? Until one knows what the system is doing, how can one suggest Java and Dot Net as options? Maybe the option is an Aster Data system with MarkLogic doing the heavy lifting? My hunch is that this group from HP may not be up to speed on HP’s position with regard to Infobright. Maybe that’s an option. Dot Net? Java? Well, say hello to performance and maintenance issues with Dot Net. Java? The Google crowd seems to be nibbling away at some of Java’s flaws with Noop.
Sixth, the phrase “consolidation factor” put my teeth on edge. I am going to be 65. Mainframes have been part of my life since my first computer class in 1962 or 1963. There was no consolidation factor. If you want to crunch data, you used the machines that were available. Over time, mainframes have demonstrated that their engineering can be quite useful for certain types of computational processes. The consolidation angle was a consequence of mainframe operating systems and hardware becoming increasingly robust. As a result, sharing was easier and easier. Consolidation was not a goal. Consolidation was a result of sys admins who knew how to make certain tasks less of a hassle for themselves and take advantage of tech innovations in the mainframe world.
Seventh, the odd business-techno references puzzled me. One example is the phrase “assembly line model”. The idea, I think, is that code components can be reused. Code reuse is a thorny problem. The Google approach strikes me as having been influenced by mainframe and heavy-duty minicomputer experiences. The code reuse slashes the cost of certain types of grunt work. Another aspect is making it possible to use a system wide resource without having to do much, if any, heavy lifting. Companies with mainframe systems may find it more prudent to wait until Google makes more of its “as is” functionality available. Then, instead of jumping on the consultants’ newest band wagon, the company with a mainframe could begin to tap into Google services. Over time, the mainframe can be marginalized. I don’t think Java and Dot Net alone can do the job. The engineering of these tools is a mess and will be for time immemorial. Why shift from what’s “good enough” to platforms that are known to be fraught with engineering challenges and cost burdens?
Eighth, consider this statement:
Today, we may spend 80 percent or 90 percent of our IT budget on maintenance, and 10 percent on innovation. What we want to do is flip it. We’re not going to flip it in a year or maybe even two, but we have got to take steps. If we don’t start taking steps, it will never go away.
My research suggests that about one third of an IT budget goes to information and data transformation. The next largest chunk goes to keeping the system running. The number 10 percent is specious. Most information technology operations with which I am familiar are not into “innovation”. Innovation is usually marginalized. In one major Federal agency, new technology is semi officially “off the IT department’s radar”. Any extra money is spent working on nights and weekends to handle the unexpected and unwelcome crashes and problems that plague information technology departments. The numbers in the transcript prop up a foundationless argument and, like other assertions in the program, are not substantiated. IT departments are maxed out trying to keep existing systems up and running. Even modest upgrades can cause nightmares. A rip and replace program may be grounds for duct taping the executive making this suggestion to the wall of the parking garage.
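The arithmetic of “flipping” a budget is worth a quick sanity check. Here is a minimal back-of-the-envelope sketch, with every figure an illustrative assumption of mine (not a number from the podcast or from my research), showing how long the flip takes if maintenance spend can only be pared down a few percentage points of the budget per year:

```python
# Back-of-the-envelope check on the "flip the budget" claim.
# All percentages are hypothetical assumptions for illustration only.

def years_to_flip(maintenance_pct=85, target_pct=10, annual_cut_pct=5):
    """Years needed to shrink the maintenance share of an IT budget
    from maintenance_pct to target_pct, assuming the share falls by
    annual_cut_pct percentage points each year. Integer percentages
    keep the arithmetic exact."""
    years = 0
    pct = maintenance_pct
    while pct > target_pct:
        pct -= annual_cut_pct
        years += 1
    return years

print(years_to_flip())  # 15 years at five points per year
```

Under these assumptions the flip takes a decade and a half, not the “year or maybe even two” floated in the transcript. Even doubling the annual cut to ten points still takes eight years, which is why the claim strikes me as marketing, not budgeting.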
Ninth, the reference to Web 2.0 and Enterprise 2.0 capabilities made me laugh. What the heck is Web 2.0? What is Enterprise 2.0? As I have said in my public talks and in my writings, Enterprise 1.0 is not working too well. Last time I checked, Enterprise 2.0 has not substantially improved the performance of the US business community. I can hear the protests now, “Social networks, collaboration, and Tweets.” Yeah, right. The reality is that technology does not fix flawed business processes. Technology may make a flawed process execute more quickly, but throwing technology at a problem may only exacerbate the plight of the organization. Publishing is a good example of this trajectory. If an Enterprise 1.0 outfit is losing money because of its mainframe, how does it follow that with Enterprise 2.0 software and a new non-mainframe system, that company will generate more top line revenue? The problem goes beyond technology, yet these HP analysts are pitching Enterprise 2.0 as a key to an organization’s success. Baloney, baloney, baloney.
Welcome to the enterprise mainframe migration carnival. Source: http://www.sideshowworld.com/ats-Carnival-50-60-2.jpg
Tenth, the TCO (total cost of ownership) argument (pages 12 ff) is the capstone to this wild and crazy marketing podcast. If a legacy mainframe system is working, show me the financial analysis that says the cost of capital, the cost of engineering, the cost of software mods, and the cost of downtime are solid gold. I don’t think that analysis is the bedrock upon which the podcast rests. Without cost analysis, the HP pitch is likely to put companies on the edge of survival into the morgue. The statement “Organizations have a very fine eye for what this is going to mean for me not just six months from now, but two years from now, and what it’s going to mean to successors in line in the organization” is wrong. Most commercial outfits are living from month to month and quarter to quarter. The “fine eye” is on managing cash in the present business climate. Some will spend for new systems, but in general, commercial entities are conservative at this time. The idea that migrating from a mainframe will pay off in two years is odd. The mainframe-to-alternative-platform migration may take more than two years. So, the costs have to be tallied over a heck of a long time, and the ultimate cost of the migration will not be easy to predict with precision. In short, the HP argument plants a cost time bomb under the client’s computer center. HP may be out of the picture when the red ink explodes. No Monte Carlo simulation is needed to predict this outcome. The cost time bomb is baked into the HP approach.
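The two-year-payoff claim can be stress tested with a toy cumulative-cost comparison. Every number below is a hypothetical assumption of mine, not a figure from the podcast or from HP; the point is only the shape of the curve, namely that during the migration you pay for the old system and the project at the same time:

```python
# Toy stay-vs-migrate cost comparison. All cost figures are
# hypothetical assumptions, in arbitrary units per year.

def cumulative_costs(years, run_cost=1.0, migration_cost=5.0,
                     new_run_cost=0.6, migration_years=3):
    """Return (stay, migrate) cumulative costs after `years`.
    While the migration project runs, the organization pays for the
    old system AND a slice of the project budget; afterward it pays
    only the (assumed cheaper) new platform's running cost."""
    stay = run_cost * years
    migrate = 0.0
    for y in range(1, years + 1):
        if y <= migration_years:
            # old system keeps running while the project burns cash
            migrate += run_cost + migration_cost / migration_years
        else:
            migrate += new_run_cost
    return stay, migrate

for y in (2, 5, 10, 20):
    stay, migrate = cumulative_costs(y)
    print(f"year {y}: stay={stay:.1f} migrate={migrate:.1f}")
```

With these assumptions the migration is still deeply under water at year two, and the break-even point does not arrive until roughly year 16. Change the assumptions and the crossover moves, but a two-year payoff requires numbers far rosier than any I have seen in a working shop.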
Just my opinion. Be kind to a financial analyst today. If you follow the HP program, you are going to need some help with your organization’s finances.
Stephen Arnold, October 1, 2009