Healthcare.gov: The Search for Functional Management via Training

September 21, 2015

I read “How Healthcare.gov Botched $600 Million worth of Contracts.” My initial reaction was that the $600 million figure understated the fully loaded costs of the Web site. I have zero evidence to support my view that $600 million was the incorrect total, but I do have a tiny bit of experience in US government project work, including assignments to look into accounting methods in procurements.

The write up explains that an audit by the Health and Human Services Office of Inspector General identified the root causes of the problems with the allegedly $600 million Healthcare.gov Web site. The source document was online when I checked on September 21, 2015, at this link. If you want this document, I suggest you download it. Some US government links break when maintenance occurs, interns rotate out, new contractors arrive, or site redesigns are implemented.

The news story, which is the hook for this blog post, does a good job of pulling out some of the data from the IG’s report; for example, a list of “big contractors behind Healthcare.gov.” The list contains few surprises. Many of the names of companies were familiar to me, including that of Booz, Allen, where I once labored on a range of projects. There are references to additional fees from scope changes. I am confident, gentle reader, that you are familiar with scope creep. The idea is that the client, in the case of Healthcare.gov, needed to modify the tasks in the statement of work which underpins the contracts issued to the firms which perform the work. The government method is to rely on contractors for heavy lifting. The government professionals handle oversight, make certain the acquisition guidelines are observed, and plug assorted types of data into various US government back office systems.

The news story repeated the conclusion of the IG’s report that better training was needed to make Healthcare.gov type projects work better in the future.

My view is that the news story ignored several important factors which, in my experience, created the laboratory in which this online commerce experiment evolved.

First, the notion of a person in charge is not one I encountered often in my brushes with the US government. Many individuals change jobs, rotating from assignment to assignment, so newcomers often become involved after the train has left the station. In this type of staffing environment, the enthusiasm for digging deep and re-rigging the ship is modest or secondary to other tasks such as working on budgets for the next fiscal year, getting involved in new projects, or keeping up with the meetings which consume the bulk of a professional’s work time. In short, decisions are not informed by a single individual with a desire to accept responsibility for a project. The ship sails on, moved by the winds of decisions made by those with different views of the project. The direction emerges.

Second, the budget mechanisms are darned interesting. Money cannot be spent until the project is approved and the estimated funds are actually transferred to an account which can be used to pay a contractor. The process requires that individuals who may never have worked on a similar project assemble a team of various consultants, White House fellows, newly appointed administrators, procurement specialists with law degrees, and other professionals to figure out what is going to be done, how, what time will be allocated and converted to estimates of cost, and the other arcana of a statement of work. Firms which make a living converting statements of work into proposals then bid to do the actual work. At this point, the disconnect between the group which defined the SOW and the firms bidding on the work becomes the vendor selection process. I will not explore vendor selection, an interesting topic outside the scope of this blog post. Vendors are selected and contracts written. Remember that the estimates, the timelines, and the functionality now have to be converted into the Healthcare.gov site or the F-35 aircraft or some other deliverable. What happens if the SOW does not match reality? The answer is a non-functioning version of Healthcare.gov. The cause, gentle reader, is not training.

Third, the vendors, bless their billable hearts, now have to take the contract which spells out exactly what each particular vendor is to do and then actually do it. What happens if the SOW gets the order of tasks wrong in terms of timing? The vendors do the best they can. Vendors document what they do, submit invoices, and attend meetings. When multiple vendors are involved, the meetings with oversight professionals are not the places to speak in plain English about the craziness of the requirements or the tasks specified in the contract. The vendors do their work to the best of their ability. When the time comes for different components to be hooked together, the parts usually require some tweaking. Think rework. Scope change required. When the go live date arrives, the vendors flip the switches for their parts of the project, and individuals try to use the system. When these systems do not work, the problem is a severe one. Once again: training is not the problem. The root cause is that the fundamental assumptions about the project were flawed from the git go.

Is there a fix? In the case of Healthcare.gov, there was. The problem was solved by creating the equivalent of a technical SWAT team, working in a very flexible manner with procurement requirements, and allocating money without the often uninformed assumptions baked into a routine procurement.

Did the fix cost money? Yes. Do I know how much? No. My hunch is that there is zero appetite in the US government, at a “real” news service, a watchdog entity, or an in house accountant to figure out the total spent for Healthcare.gov. How do I know this? The accounting systems in use by most government entities are not designed to roll up direct and indirect costs with a mouse click. Costs are scattered, and the methods of payment are pretty darned crazy.

Net net: Folks can train all day long. If that training focuses on systems and methods which are disconnected from the deliverable, the result is inefficiency, a lack of accountability, and misdirection from the root cause of a problem.

I have been involved in various ways with government work in the US since the early 1970s. One thing remains consistent: The foundational activities are uneven. Will the procurement process change? Forty years ago I used to think that the system would evolve. I was wrong.

Stephen E Arnold, September 21, 2015

Cloud Excitement: What Is Up?

September 21, 2015

I noted two items about cloud services. The first is summarized in “Skype Is Down Worldwide for Many Users.” I used Skype one time last week. The system would not allow my Skype conversationalist to hear me. We gave up fooling with the system, and the person who wanted to speak with me called me up. I wonder how much that 75 minute international call cost. Exciting.

I also noted that Amazon went offline for some of its customers on Sunday, September 20, 2015. The information was in “Amazon Web Services Experiences Outages Sunday Morning, Causing Disruptions On Netflix, Tinder, Airbnb And More.”

Several observations are warranted:

  • What happened to automatic failover, redundancy, and distributed computing? I assumed that Google’s loss of data in its Belgium data center was a reminder that marketing chatter is different from actual data center reality. Guess not? (See the sketch after this list for what client side failover is supposed to look like.)
  • Who or what will be blamed? Amazon will have a run at its Ashburn, Virginia nexus. Microsoft will probably blame a firmware or software update. The cause may be a diffusion of boots-on-the-ground technical knowledge. Let’s face it. These cloud services are complicated puppies. As staff seek their futures elsewhere and training is sidestepped, the potential for failure exists. The fix-it-and-move-on approach to engineering adds to the excitement. Failure, in a sense, is engineered into many of these systems.
  • What about the promise of having one’s data in the cloud so nothing is lost, no downtime haunts the mobile device user, and no break in a seamless user experience occurs? More baloney? Yep, probably.
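What failover looks like from the client side is not complicated. Below is a minimal sketch in Python; the endpoint URLs are hypothetical, and real systems push this logic into load balancers and DNS rather than application code:

```python
import urllib.request

# Hypothetical endpoints for one service replicated across regions.
# The point of redundancy: when the primary replica does not respond,
# the client (or a load balancer) fails over to a healthy one.
ENDPOINTS = [
    "https://us-east.example.com/health",
    "https://us-west.example.com/health",
    "https://eu-west.example.com/health",
]

def fetch_with_failover(urls, timeout=2.0):
    """Try each replica in turn; return the first successful response."""
    last_error = None
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except OSError as err:  # timeout, DNS failure, connection refused
            last_error = err    # note the failure, try the next replica
    raise RuntimeError(f"all replicas failed: {last_error}")

# data = fetch_with_failover(ENDPOINTS)  # uncomment with real endpoints
```

If every replica sits behind the same regional dependency, the loop above simply runs out of healthy replicas, which is the failure mode the marketing chatter glosses over.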

Net net: I rely on old fashioned computing and software methods. I think I lost data about 25 years ago, and I have never gone offline. Redundancy, reliability, and failover take work, gentle reader, not marketing and advertising.

How old school. My international call took place because I have different mobile telephony accounts plus an old Bell head landline. Expensive? Sure, but none of this required me to issue a news release, publicize how wonderful my cloud system was, or live with the egg-on-the-face reality of failure.

Stephen E Arnold, September 21, 2015

Big Data, Gartner Hype Cycle, and Generating Revenues

September 21, 2015

I read “Big Data Falls Off the Hype Cycle.” Fascinating. A term without a definition has sparked ruminations about why a mid tier consulting firm no longer tracks Big Data as hyperbole.

The write up states:

“Big Data” joins other trends dropped into obscurity this year including:  decision management, autonomous vehicles, prediction markets, and in-memory analytics.  Why are terms dropped?

The article scoots forward to answer this question. The solutions, for those of you familiar with a multiple choice test, include:

Sometimes because they are too obvious.  For example in-memory analytics was dropped because no one was actually pursuing out-of-memory analytics.  Autonomous vehicles because “it will not impact even a tiny fraction of the intended audience in its day-to-day jobs”.  Some die and are forgotten because they are deemed to have become obsolete before they could grow to maturity.  And Big Data, well, per Gartner “data is the key to all of our discussion, regardless of whether we call it “big data” or “smart data.” We know we have to care, so it is moot to make an extra point of it here.”

The write up then offers:

When I first took a stab at making a definition I concluded that Big Data was really more about a new technology in search of a problem to solve.  That technology was NoSQL DBs and it could solve problems in all three of those Vs.  Maybe we should have just called it NoSQL and let it go at that. Not to worry.  I’m sure that calling things “Big Data” will stick around for a long time even if Gartner wants us not to.

I have a different take. My hunch is that the hype cycle is a marketing and lead generation vehicle for a mid tier consulting firm. When the leads no longer flow and the “objective studies” no longer sell, a fresh approach is needed.

Big Data as a concept is no longer hype. That’s reassuring. Perhaps progress is retarded by buzzwords, jargon, and thrashing for revenues?

Stephen E Arnold, September 21, 2015

Redundant Dark Data

September 21, 2015

Have you heard the one about how dark data hides within an organization’s servers and holds potential business insights? Wait, you did not? Then where have you been for the past three years? Datameer posted an SEO heavy post on its blog called “Shine Light On Dark Data.” The post features the same redundant song and dance about how dark data retained on servers holds valuable customer trends and business patterns that can put a company out ahead of the competition.

One new fact is presented: IDC reports that 90% of digital data is dark. That is a very interesting figure, one that could spur information specialists to get a big data plan in place, but then we are fed this tired explanation:

“This dark data may come in the form of machine or sensor logs that when analyzed help predict vacated real estate or customer time zones that may help businesses pinpoint when customers in a specific region prefer to engage with brands. While the value of these insights are very significant, setting foot into the world of dark data that is unstructured, untagged and untapped is daunting for both IT and business users.”

The post ends with some less than thorough advice to create an implementation plan. There are other guides on the Internet that better prepare a person to create a big data action plan. The post’s only purpose is to serve as a search engine bumper for Datameer. While Datameer is one of the leading big data software providers, one would think it would not post a “dark data definition” piece this late in the game.

Whitney Grace, September 21, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Google Solves CDN Problem with New Partnerships

September 21, 2015

The article on TechCrunch titled Google Partners with Cloudflare, Fastly, Level 3 and Highwinds to Help Developers Push Google Cloud Content to Users Faster discusses Google’s recent switch from its own content delivery network (CDN), formerly the PageSpeed Service, to partner services. This has been advanced by the CDN Interconnect launch, purportedly aimed at providing a simplified and less costly option for developers who use the cloud service to run applications. The article elucidates,

“Developers who use a CDN Interconnect partner to serve their content — and that’s mostly static assets like photos, music and video — are now eligible to pay a reduced rate for egress traffic to these CDN locations. Google says the idea here is to ‘encourage the best practice of regularly distributing content originating from Cloud Platform out to the edge close to your end-users. Google provides a private, high-performance link between Cloud Platform and the CDN providers we work with.’”
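A bit of back-of-the-envelope arithmetic shows why a reduced egress rate matters to a developer serving static assets. The per-gigabyte rates and traffic volume below are invented for illustration; they are not Google’s actual pricing:

```python
# Hypothetical per-gigabyte rates -- illustration only, not Google pricing.
standard_egress_per_gb = 0.12   # Cloud Platform straight to end users
partner_egress_per_gb = 0.04    # Cloud Platform to a CDN Interconnect partner

monthly_static_assets_gb = 50_000  # photos, music, video pushed to the edge

cost_direct = monthly_static_assets_gb * standard_egress_per_gb
cost_via_partner = monthly_static_assets_gb * partner_egress_per_gb

print(f"serving direct:  ${cost_direct:>10,.2f} per month")
print(f"via CDN partner: ${cost_via_partner:>10,.2f} per month")
print(f"monthly savings: ${cost_direct - cost_via_partner:>10,.2f}")
```

Per the quoted passage, the discount only applies to traffic bound for a partner’s edge locations, which is exactly the distribution behavior Google says it wants to encourage.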

So we see Google doing the partner thing. Going it alone may be lonely and expensive. The article mentions that the importance of CDNs will only grow with the weight of Web pages, which are so often plied with high-res images and HD video. So long as Google cannot solve this problem alone, it is happy to partner up with providers.

Chelsea Kerwin, September 21, 2015

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

The Semantic Web Has Arrived

September 20, 2015

Short honk: If you want evidence of the impact of the semantic Web, you will find “What Happened to the Semantic Web?” useful. The author captures 10 examples of the semantic Web in action. I highlighted this passage in the narrative accompanying the screenshots:

there is no question that the Web already has a population of HTML documents that include semantically-enriched islands of structured data. This new generation of documents creates a new Web dimension in which links are no longer seen solely as document addresses, but can function as unambiguous names for anything, while also enabling the construction of controlled natural language sentences for encoding and decoding information [data in context] — comprehensible by both humans and machines (bots).

Structured data will probably play a large part in the new walled gardens now under construction.
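Those “islands” are concrete things. Here is a minimal sketch, assuming a hypothetical page with a schema.org JSON-LD block of the kind the write up’s screenshots show, illustrating how a bot pulls structured data out of an ordinary HTML document:

```python
import json
import re

# A hypothetical HTML page with a schema.org JSON-LD "island" embedded
# alongside the human-readable markup.
html = """
<html><body>
  <h1>The Semantic Web Has Arrived</h1>
  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "BlogPosting",
    "headline": "The Semantic Web Has Arrived",
    "datePublished": "2015-09-20",
    "author": {"@type": "Person", "name": "Stephen E Arnold"}
  }
  </script>
</body></html>
"""

# Pull every JSON-LD island out of the page. A real crawler would use an
# HTML parser; a regular expression is enough for a sketch.
pattern = re.compile(
    r'<script type="application/ld\+json">(.*?)</script>', re.DOTALL)

for island in pattern.findall(html):
    data = json.loads(island)
    # The island is unambiguous to a bot: typed entities, not just markup.
    print(data["@type"], "-", data["headline"])
```

A human sees the headline; a bot sees a typed BlogPosting entity. That dual readability is the point of the passage quoted above.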

The conclusion will thrill the search engine optimization folks who want to decide what is relevant to a user’s query; to wit:

A final note — The live demonstrations in this post demonstrate a fundamental fact: the addition of semantically-rich structured data islands to documents already being published on the Web is what modern SEO (Search Engine Optimization) is all about. Resistance is futile, so just get with the program — fast!

Be happy.

Stephen E Arnold, September 20, 2015

13 Big Data Trends: Fodder for Mid Tier Consultants

September 20, 2015

Let’s assume that a colleague has lost his or her job (xe, in Tennessee, I heard). The question becomes, “What can I do with my current skills to make big money in a hot new sector?”

The answer appears in “13 New Trends in Big Data and Data Science.” The write up is intended to be a round up of jazzy hot topics in a couple of even hotter quasi-new facets of the database world. Like enterprise search, databases are in need of juice. Nothing helps established technology more than new spins in old orbits.

My suggestion is to read through the list of 13 “new trends.” Pick one, and suggest that your prospect hunting pal use it to get hired. Nothing to it.

Allow me to illustrate the method in action.

I have selected trend 8, “The rise of mobile data exploitation.” There are some companies active in this field; for example, S2T. The S2T name means simulation software and technology. The outfit processes a range of digital information and analyzes it with the company’s own tools. Anyone can work in this sector. The demand for talent is high. The work is not too difficult. The desire to hire “experts” in various aspects of data is keen. No problem. Sure, there may be some trivial requirements like checking with a person’s mom and his or her best friends to make sure the applicant can be trusted. Hot trend. No problemo.

Let’s look at another field.

Trend 11 is high performance computing (HPC). What could be faster than Apple’s new mobile chip? What could be higher performance than the Facebook or Google infrastructure? If the job seeker is familiar with these technologies, the world of Big Data excitement awaits. The experience is the important thing, not knowledge of optimized parallelization pipelines.

Easy.

Each of the 13 trends makes it clear that there are numerous opportunities. These range from digital health (IBM Watson is a PR player) to the trivial world of analytic apps and APIs.

After reading the article, I was delighted to see how many important trends are getting buzz.

Big Data is definitely the go to discipline. I anticipate that anyone interested in search and content processing will be able to pursue a career in Big Data.

Now some skeptics believe that Big Data is a nebulous concept. Do not be dissuaded. The 13 trends are evidence that databases and the analysis of their contents are the future. Just as these activities have been since the days of Edgar Codd.

The mid tier consultants can ride with the hounds.

Stephen E Arnold, September 20, 2015

Outfits Which Are Big Tech under Fire

September 19, 2015

I am puzzled because I came across a New York Times article dated September 19, 2015, which in Harrod’s Creek is a Saturday. The write up is labeled “Sunday Review” and sports this title: “Big Tech Has Become Way Too Powerful.” I like the “way too” touch. Not just powerful, really powerful, gentle reader. When you click the link, you may encounter a “pay for access” or “access denied” message. Hey, that’s one more example of our modern world. Tough luck for some.

The write up makes the point that some addled high tech outfits offer products “everyone else has to use.” Yep, digital services coalesce into natural monocultures and monopolies in my experience.

The write up reveals that most online traffic goes to a small percentage of online sites. Yikes, Zipf’s Law has been discovered by the Gray Lady. Not a moment too soon for Benford either.

The problem is that equality is not part of the equation. Just ask a small business about its findability and Web site traffic.

The write up reaches back into America’s past for evidence of the consequences of business concentration:

“The enterprises of the country are aggregating vast corporate combinations of unexampled capital, boldly marching, not for economical conquests only, but for political power,” warned Edward G. Ryan, the chief justice of Wisconsin’s Supreme Court, in 1873. Antitrust law was viewed as a means of breaking this link. “If we will not endure a king as a political power,” Senator John Sherman of Ohio thundered, “we should not endure a king over the production, transportation and sale” of what the nation produced.

The only hitch in the git along is that once digital concentration occurs, the mass of traffic (expressed in different ways) works just like one of those NASA confections showing black holes eating stuff to become bigger. The idea is that the black hole allows nothing to escape its maw.

After reading the write up, I formulated three observations:

  • In the short term, I don’t see the black holes of Facebook, Google, and other traffic dominant firms becoming the equivalent of friendly forest creatures in a Bambiesque world.
  • Users are voting with their behavior. Billions of folks are perfectly happy with concentration. Go with the flow. There must be broader forces at work than concerns about tracking, equality, and features.
  • Regulatory entities are chasing a train which has left the station. By the time someone discovers Zipf’s Law (maybe the New York Times), other developments are afoot. The time lag makes complaints an academic exercise.

In short, the Gray Lady is taking yet another whack at digital outfits which have supplanted the newspaper’s perceived right to control certain types of information and advertising.

Nice try. Too late. Facebook and Google may run out of gas sooner rather than later. Their problems will have little to do with traditional publishing. Energy can dissipate. Grousing seems to be more persistent.

By the way, clear time stamps are helpful. Gentle reader, pass this thought along if you come across a “real” journalist. Today is Saturday, not Sunday. Today’s reality is not what will eventually be.

Reality is annoying. Zipf died in 1950. The Zipfian distribution lives on in certain data behaviors. Those log-log graphs are indeed useful, as a quick sketch shows.
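Here is a minimal Python sketch with synthetic traffic numbers generated from the distribution itself; the site counts and scale are hypothetical:

```python
import math

# Zipf's law: the k-th most popular site gets traffic proportional to 1/k.
# Generate synthetic "visits" for 1,000 ranked sites.
ranks = range(1, 1001)
traffic = [1_000_000 / k for k in ranks]

# Concentration: a small percentage of sites carries most of the traffic.
top_10_share = sum(traffic[:10]) / sum(traffic)
print(f"top 10 of 1,000 sites carry {top_10_share:.0%} of all traffic")

# The log-log graph is useful because Zipfian data plots as a straight
# line. Slope of log(traffic) vs. log(rank) between rank 1 and rank 100:
slope = (math.log(traffic[99]) - math.log(traffic[0])) / \
        (math.log(100) - math.log(1))
print(f"log-log slope: {slope:.2f}")  # -1.00 for a pure Zipf distribution
```

Ten sites out of a thousand carry roughly 39 percent of the synthetic traffic, which is the concentration the Gray Lady finds so alarming.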

Stephen E Arnold, September 19, 2015

More Google Plus Speculation

September 19, 2015

I read “Google Phases Out Google+ Even Further – Or Does It?” Once upon a time, Google Plus was the future of Google. I assume that this particular Google is still the good, old Google, not the Ling Temco Vought Alphabet thing.

At one time, Google was going to be defined by Google Plus. Then Google Plus continued to lag behind the Xoogler-filled Facebook. The write up raises a question which is not interesting to me: Is Google Plus a thing or is Google Plus another Google lab test? Due to my inherent biases, I am not into social content. I do find it fascinating that so many people believe social systems are the cat’s pajamas.

Tucked into the write up is a statement which characterizes the ageing Google. Here’s the passage I found interesting:

At the same time, Zonozi [strategy expert at Zoomph] acknowledges that Google+ has completely pivoted from being the social platform it once aspired to be. He thinks Google is just trying to maintain its audience while it tries to figure out what exactly to do with the platform. Eventually, he could see it reemerging as something comparable to a Pinterest-Reddit hybrid.

I am not sure about a Pinterest-Reddit hybrid, but I sure do like the phrases “completely pivoted,” “trying to maintain its audience,” and “figure out what exactly to do with the platform.”

Yep, the new Alphabet Google thingy in a nutshell.

Stephen E Arnold, September 19, 2015

Google: Single Point of Failure Engineering

September 18, 2015

Do you recall the lightning strike at the Alphabet Google’s data center in Belgium? Sure you do. Four lightning strikes caused the data center to lose data. See “Lightning in Belgium Disrupts Google Cloud Services.” I asked myself, “How could a redundant system, tweaked by AltaVista wizards decades ago, lose data?”

When I was assembling information for the first study in my three part Google series, I waded through many technical papers and patent documents from the GOOG (now Alphabet). These made clear to me that the GOOG was into redundancy. There were nifty methods with clever names. Chubby, anyone?

Now the Belgium “act of God” must have been an anomaly. Since 2003, the GOOG should have been improving its systems and their robustness. Well, maybe Belgium is lower on the hardened engineering list?

I found this article quite interesting: “Google Is 2 Billion Lines of Code. And It Is All in One Place.” Presumably the knowledge embodied in ones and zeros is not in one place. Nope. The code is in 10 data centers, kept in check with Piper, a home brew code management system.

But, I noted:

There are limitations to this system. Potvin [Google wizard] says certain highly sensitive code—stuff akin to Google’s PageRank search algorithm—resides in separate repositories only available to specific employees. And because they don’t run on the ‘net and are very different things, Google stores code for its two device operating systems—Android and Chrome—on separate version control systems. But for the most part, Google code is a monolith that allows for the free flow of software building blocks, ideas, and solutions.

No lightning strikes are expected. What are the odds of simultaneous lightning strikes at multiple data centers? Not worth worrying about this unlikely disaster scenario. Earthquake? Nah. Sabotage? Nah.
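The arithmetic behind that intuition takes two lines. The failure probability below is invented for illustration; assume, hypothetically, a 1-in-1,000 annual chance of a data-losing lightning event at any one site, and assume the sites fail independently:

```python
# Hypothetical numbers, for illustration only.
p_single_site = 1 / 1_000   # annual chance one data center loses data
sites = 10                  # replicated copies of the code, per the article

# Independence is the whole point of geographic replication: the odds of
# every replica failing in the same year multiply together.
p_all_sites = p_single_site ** sites
print(f"one site:  {p_single_site:.3%} per year")
print(f"all {sites} sites: {p_all_sites:.1e} per year")  # about 1e-30
```

Effectively never. Which is why a single-site data loss, as in Belgium, says more about replication gaps than about bad luck.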

No single point of failure for the Alphabet Google thingy. Cloud services just do not lose data most of the time. The key word is “most.”

Stephen E Arnold, September 18, 2015
