Microsoft and Its Cost Value Message

April 21, 2009

We live in an era of knowledge value. I encountered this phrase after a visit to Japan 20 years ago. The idea is set forth in a very important book by Taichi Sakaiya called The Knowledge Value Revolution. You can get a copy here. The book will require some effort, but I think the time will be well spent. One idea in the book is the notion of value. When I read articles or hear speeches that use the term “value” I wonder if the speaker has internalized Mr. Sakaiya’s explication.

I read the CNet article “Microsoft to Open Source: Please Don’t Compete on Price” here. As reported by Matt Asay, the Microsoft plea is for open source vendors to shift away from marketing that pitches price as an advantage and start talking about value. I had to read the summary of Sam Ramji’s comments twice to make sure I understood his angle.

The notion of “value” is quite different from cost. A cost is easy to describe and measure. To soften the edges of the cost analysis, MBAs and other grifters have chopped up costs into indirect and direct costs. Some add notions of “burdened” and “unburdened”. Systems designed to track costs in certain government agencies simply don’t work. These systems were set up to put some costs in one silo and other costs in different silos. Then guidelines were put in place to prevent one silo from reporting costs in exactly the same way as another silo. In self defense, most savvy managers pick a specific cost factor and hang the project on that cost. The idea is to make an apples-to-apples comparison of the cost of licensing a search system or the cost of running an on-site training program for 10 people. The manager ignores indirects and other types of MBA and accounting embroidery. A toy calculation below shows why.
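
Here is that toy calculation (every figure is invented for illustration, not drawn from any vendor or agency): the direct costs line up cleanly for comparison, while two silos applying different burden rates produce “totals” that cannot be compared with a straight face.

```python
# Toy illustration of direct versus "burdened" cost comparisons.
# All numbers are invented; the point is the mismatch, not the figures.
license_fee = 50_000          # direct cost: annual search system license
training = 12_000             # direct cost: on-site training for 10 people

overhead_rate_silo_a = 0.35   # silo A burdens direct costs at 35%
overhead_rate_silo_b = 0.80   # silo B burdens the same costs at 80%

direct_total = license_fee + training
print("Apples-to-apples direct cost:", direct_total)
print("Silo A burdened cost:", direct_total * (1 + overhead_rate_silo_a))
print("Silo B burdened cost:", direct_total * (1 + overhead_rate_silo_b))
```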

Value, particularly value in an information centric organization, is even fuzzier. Value sucks in costs and runs them through a denim stressing machine so the dollars become soft and edges get frayed. The “value” becomes fuzzy, and it is more difficult to pull off an apples-to-apples comparison of a soft, fuzzy concept than of a licensing cost. Marketers thrive in the value space. Heck, that’s what Shakespeare would be doing if he were alive today. There’s big money in the jargon, buzzword, and euphemism business.

“Knowledge value” kicks up the argument several levels. The perception of value pivots on the information available and how those data exist in a context. Mr. Sakaiya analyzes the concept of “knowledge value” in such a way that even I could figure out the brilliance of the Toyota Corporation’s Lexus pricing tactic.

What Microsoft is doing is remarkable. On one hand, the company is bundling many separate products into a single shopping basket. The price for these shopping baskets is very attractive. XP for a netbook is a bargain because Microsoft is using price to shut the door on Linux. SharePoint includes search, collaboration, workflow, Web design, and content management for a flexible price that is usually lower than competitors’ price points for comparable features.

Microsoft competes on the basis of price. Sam Ramji wants everyone else, particularly the open source vendors, to compete on value. The idea is that Microsoft imparts more value to its products because of its widespread adoption, its dominance on the desktop, the number of junior college programmers who can use Visual Studio, and the vast Microsoft support network. The value is so great that the low price suddenly is put in a context of serious investment in the Microsoft ecosystem.

The way I understand Mr. Ramji’s argument is that Microsoft can compete on price. Everyone else can compete on value.

I am not sure if Mr. Ramji would agree with my interpretation. I have a hunch that open source vendors who provide software at a lower license cost are prepared to argue that their service fees are more competitive than Microsoft’s. The value, therefore, is a combination of lower licensing costs and a going-forward strategy that gives an organization greater control of those hockey stick cost overruns that often plague some enterprise software deployments.

Open source is gaining traction. How do I know? The “plea” expressed in this summary of Mr. Ramji’s remarks underscores the fact that open source is an issue for Microsoft. Google is a problem, but open source may be an even bigger problem going forward. I am looking forward to a bargain priced copy of Windows 7.

Part of the knowledge value revolution is that buyers are getting better at determining which companies deliver knowledge value, not just lower prices and basic value.

Stephen Arnold, April 21, 2009

Passport Canada: Caught with Its Tech in a Time Warp

April 21, 2009

CTV posted an item from “The Canadian Press” called “Online Form Poses Problem for Passport Canada.” You can read the story here. Passport Canada does what its name says—handles passport documents. Another unit of the Canadian government is the Canadian Foreign Affairs Department. That department put some blank forms on its Web site. The form – PPTC 132 – is useful for getting a passport without a person who can verify that John Smith is “really” John Smith. The CTV.com story said:

Canadians who are overseas and need a passport require the form, which allows them to make their own declaration under oath.

Passport Canada keeps the form under tight control. The Foreign Affairs Department put the form under loose control. Excitement ensues.

Some thoughts:

  • Coordination within and among government agencies is not too good, not just in Canada, but in most countries. Parkinson’s Law and political budget walls are two good reasons
  • Online remains a mystery in the Internet age
  • The notion of a form repository is a tricky one. The US initiative that I bumped into years ago seems to be ineffective
  • Removing information once it is “out there” is tough.

Are there lessons in this example? Lots. Easy and quick and cheap fixes. Pick two.

Stephen Arnold, April 20, 2009

Oracle Sun: Good News, Bad News for Google and Microsoft

April 20, 2009

Sun Microsystems is now part of Oracle. You can read the financial details here. The CNet story is interesting because it floats the Oracle hardware angle. I want to take a slightly different view based on my research for Google: The Digital Gutenberg, due out before the Kentucky Derby.

For Google

Oracle is an important partner for Google Apps and the Google Search Appliance. My sources in Harrod’s Creek tell me that Oracle engineers use Google’s tools to unlock Oracle data. Easier, and it works. The customers love the idea of a big outfit like Oracle using Google. This is also good news because it may blunt the push back about Google’s implementing some of the Java language. Think about Google’s support for Java and why Google did not support the full range of Java technologies. The reason is that Google wants to run one of those Walt Disney one-way-only lines for Thunder Mountain. The “one way street” is wide and shady. Getting out the same way is going to be tough because of Google’s engineering. Forget that SquareSpace.com “export” function. Not at Google. With Oracle a Google partner, the Google may have some breathing room to lock down its approach to Java with lots of Equal and soy milk. I learned in 2001 that Google had, at that time, quite a few former Sun engineers on its staff. Eric Schmidt is a Sun alum, and he knows the upside and downside of the company’s technology and its cost burdens.

For Microsoft

Oracle is a big problem because it sits in most big enterprise data centers. The Oracle DBA is a cross between a Navy SEAL and an electrical engineer. Not only are these folks tough to understand, they can kill you or a company with a data “problem”. Better be nice at budget time. With Oracle choking on petascale data flows, the Sun technologies for whizzy machines and zippy storage are a great opportunity to slash the cost of a CPU license as long as the organization opens the capital budget and buys iron. At some point, Oracle Sun might become a serious cloud provider of Oracle analytics or other data management services. This means that in the enterprise, the folks in Redmond have to deal with Oracle here and now and a loose federation of Google, Oracle, and Sun at some point in the future. Oracle may have more forces to deploy to slow down the proliferation of Microsoft SQL Server. And, now that Oracle “owns” MySQL, there’s a pricing and upgrade path to consider. The Access and SQL Server tandem may face a MySQL and Oracle database upgrade tactic.

For Search

Google is still the go-to solution. SES10g is a non-starter. Sun bought some search technology from InfraSearch here years ago but never did much with it. The search picture could change if Oracle invests in Secure Enterprise Search, TripleHop, and even the really old linguistics technology it acquired in the early 1990s.

Stephen Arnold, April 20, 2009

Fast Search ESP 2009: Some Soft Information about a Hard Problem

April 20, 2009

A happy quack to the reader who sent me a link to this interesting and suggestive article “One Year with Microsoft – A Fast Perspective”. The write up appeared on April 17, 2009, on the Microsoft Enterprise Search Blog. You must read the posting here. The author is Nate, which does not tell me much. In such circumstances, I remind myself that the posting may be a spoof. For my purposes, the snippets of text below are intended as aides-mémoire for myself. I have added some preliminary, informal comments to capture the excitement I experienced when I read the post. (The original Microsoft intent to buy Fast Search announcement is here.)

Now to work:

Nate, the author, has worked in “the industry” for 13 years with six years in the Fast Search & Transfer business. Keep in mind that search has been around for more than 40 years, so six years is a good start. I assume the author’s experience in search has been shaped by what I call “the Google era”. The sale of Fast Search was precipitated by the company’s struggles with money. Fast Search and Google came out of the gate at about the same time in the late 1990s. Google ended up with $20 billion in revenue in the same interval that Fast Search & Transfer approached $80 million (estimated after the 2007 revenues were revisited), a police action, and a hugely complicated search system that was tough to install. I heard that Google’s “search is simple” campaign was partly in response to the complexity of the installation process for Fast Search ESP and similar old-style systems.

The author explained where Fast Search fits in the giant Microsoft Corporation. I did not understand the acronyms, but there were enough units involved to tell me that search is not at the top of the tech pyramid at Microsoft.

Nate presented the acquisition as a pat on the back for a job well done. I respectfully suggest that a financial restatement and a police action are not meritorious.

Nate referenced the Fastforward 2009 conference (which I believe I heard will be merged into another Microsoft conference) as the place where the vision for Fast Search was set forth. He provided a link to a SharePoint unit manager, Kirk Koenigsbauer. The bulk of the Web log write up is a restatement of information presented at the Fast conference earlier in 2009. The key points in my opinion were:

First, commitment. Nate reminded me that Oslo is where the search action is. The discussion of commitment puzzled me. A passage that I noted was:

To be honest, search is such a generally valued concept and the possibilities are so compelling when it’s combined with other Microsoft products and technology that it’s all we can do to stay focused on our main priorities. It’s a good problem.

The word “honest” snagged me. Was the earlier part of the write up “not honest”? The statement “it’s all we can do to stay focused on our main priorities” underscored the likelihood that Microsoft is still not sure what’s important in behind-the-firewall search.

Second, vision. Nate asserted that Microsoft’s vision for search and Fast Search’s vision for search “matched”. I stopped and got out my yellow highlighter and worked through this statement. Microsoft’s vision has been to catch Google and deliver “findability” that keeps SharePoint users and administrators happy. In my descriptions of Fast Search, I use the word “complex” quite a bit. Nate’s vision, if I read the “vision” paragraph correctly, is to think about Microsoft Surface, which is a touch screen interface. The idea, I surmise, is to change the interface to search, not the plumbing of search. I received an award from a government agency that included a picture of “lipstick on a pig”. The idea was to remind me that the work I had done would not change the outcome of a government policy, just make it pretty. I thought of that award’s snapshot of a pig when I read about the push on the interface.

Third, product plans. Nate referenced the roadmap. I love roadmaps. I asked myself, “When will there be specific product details?”

Nate concluded by revealing that:

There you have it, my first post for the Microsoft Enterprise Search Blog. Look for more posts from me in this general category of enterprise search vision and strategy. I welcome all comments on this and future entries. Next up – Search plus Natural User Interfaces.

I crave write ups about:

  • Information about ongoing support for Linux and Solaris installations of Fast Search
  • Detail about the migration plans to Windows servers
  • Return on investment analyses comparing Linux versus Windows servers
  • Documentation about the interaction of Fast ESP subsystems with one another
  • Index updating cycles
  • Scaling best practices
  • A reference architecture for processing terabyte flows of unstructured information in a 24 hour refresh cycle (a back-of-envelope throughput sketch follows this list)
  • Version upgrade rollback methods for when a point upgrade goes off the rails.
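
That terabyte-flow item is the one where hand waving is easiest, so here is the back-of-envelope arithmetic I have in mind. The average document size is my own assumption, not a figure from Nate or Microsoft; the point is the sustained rate a content processing pipeline has to hold.

```python
# Rough throughput arithmetic for a 1 TB / 24 hour refresh cycle.
# The average document size is an assumption for illustration only.
bytes_per_day = 1_000_000_000_000      # one terabyte of unstructured content
seconds_per_day = 24 * 60 * 60

mb_per_second = bytes_per_day / seconds_per_day / 1_000_000
print(f"Sustained ingest: {mb_per_second:.1f} MB per second")

avg_doc_size_bytes = 100_000           # assumed 100 KB average document
docs_per_second = bytes_per_day / avg_doc_size_bytes / seconds_per_day
print(f"Documents per second: {docs_per_second:,.1f}")
```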

Those topics would hold my interest more than comments about commitment, vision, and plans. The addled goose honks, “Detail, please.”

Stephen Arnold, April 20, 2009

World Digital Library: More than Google Books and Europeana

April 20, 2009

On April 21, 2009, the World Digital Library becomes officially available. You can access the site here. The objectives of the WDL are:

  • Promote international and intercultural understanding
  • Expand the volume and variety of cultural content on the Internet
  • Provide resources for educators, scholars, and general audiences
  • Build capacity in partner institutions to narrow the digital divide within and between countries.

You can run a key word query or narrow by place, time, topic, or type of item.

According to Australia’s The Age here,

Bringing together priceless material, from ancient Chinese or Persian calligraphy to early Latin American photography, it is the world’s third major digital library, after Google Book Search and the EU’s new project, Europeana. Drawing on content from libraries and archives worldwide, it aims to reduce the rich-poor digital divide, expand “non-Western” content on the web, promote better understanding between cultures and provide a global teaching resource.

What is the future of the many virtual library initiatives? What about the neighborhood library? What about federating the catalogs of Google Books, Europeana, and the World Digital Library? I don’t want to run three separate queries. Do you?

Stephen Arnold, April 20, 2009

Ask.com’s Jeeves Character Is Back

April 20, 2009

WebUser.com reported that Ask.com’s butler mascot is back in the UK at least. You can read “Ask Brings Jeeves Back from Dead” here. WebUser.com said:

Jeeves will appear on TV advertising this week and will also have a Facebook profile and Twitter account, the company said. As well as bringing back Jeeves, Mascaraque [Ask executive] also told Web User that the company had been working hard on improving the relevance of results and the speed at which they were delivered, claiming that the majority of requests are answered in just under half a second.

I was hoping to see the butler dressed in a NASCAR jump suit. Ask.com is the search engine of NASCAR. Will a cartoon mascot or improved relevance boost the traffic of this search system?

Stephen Arnold, April 20, 2009

Next Generation Data Management

April 20, 2009

With the saber rattling between the Codd database crowd and some of the MapReduce believers, I came across a must read called “An Overview of Modern SQL-Free Databases” by Jedi here. I found the write up useful. Jedi mentioned a source or two that I had not examined. I downloaded the write up and printed out a hard copy to read whilst watching denizens of Harrod’s Creek shoot at squirrels.

The article provides a 100-to-150-word write up about each of the following:

  • Tokyo products (Cabinet, Tyrant, Dystopia)
  • Fixed length record databases (ah, a blast from the past). This Wikipedia article may be useful.
  • Hash databases (a Google interest). More information is here.
  • B+ Tree databases (Wikipedia information is here.)
  • Table databases (Wikipedia information is here.)
  • Lightcloud (Plurk open source here.)
  • Redis – Supports sharding (a Google interest); a minimal sharding sketch follows this list
  • Flare
  • CouchDB and the Futon interface
  • MongoDB
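
Since sharding comes up more than once in the list, here is a minimal sketch of client-side sharding over a set of key-value nodes. The node names and the hash-then-modulo routing are my illustration, not anything taken from Jedi’s article, and the “nodes” are plain Python dictionaries standing in for real servers.

```python
# Minimal client-side sharding sketch: hash the key, pick a node.
import hashlib

NODES = ["kv-node-0", "kv-node-1", "kv-node-2"]   # hypothetical shard names
stores = {name: {} for name in NODES}             # dicts stand in for servers

def node_for(key: str) -> str:
    """Route a key to a shard by hashing it and taking the modulo."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

def put(key: str, value: str) -> None:
    stores[node_for(key)][key] = value

def get(key: str):
    return stores[node_for(key)].get(key)

put("user:42", "addled goose")
print(node_for("user:42"), get("user:42"))
```

One design note: modulo routing reshuffles most keys when a node is added or removed, which is why production systems lean on consistent hashing instead.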

The comments to the article mention additional systems but do not provide much detail.

Stephen Arnold, April 20, 2009

Google in Trouble

April 20, 2009

Ad Age tackles the thorny question of Google’s brand with the analysis billed under the headline “Is Brand Google in Trouble?” here. To make the point, Ad Age’s art director spelled “trouble” using the Google logo colors. In my mind, I was expecting the page to play a riff from The Gladiator from Robert Meredith Willson’s days with the Sousa band. Instead I heard “But He Doesn’t Know the Territory” from The Music Man. Trouble right here in River City.

The Ad Age article asserted that Google’s economics have “come back to earth.” Google has a strong consumer brand. The company has to beware of its “third rail”, which is privacy. The idea is that Google can electrocute itself, I think. The Ad Age article said:

PR and branding experts advise Google to stay focused on building the next excellent product and not distract itself with what competitors are doing. And, they say, it’s important to listen to consumers; it’s impossible to please 100% of people, and the conversations in the trade and media world can seep out to the public. What’s important is to be transparent and human about addressing criticism.

As I said, “But He Doesn’t Know the Territory”.

Stephen Arnold, April 20, 2009

Exclusive Interview: Donna Spencer, Enterprise Systems Expert

April 20, 2009

Editor’s Note: Another speaker for what looks like a stellar conference agreed to an interview with Janus Boye. As you know, the Boye 09 Conference in Philadelphia takes place the first week in May 2009, May 5 to May 7, 2009, to be precise. Attendees can choose from a number of special interest tracks covering a range of topics, including strategy and governance, intranets, Web content management, SharePoint, user experience, and eHealth. Click here for more conference information. Janus Boye spoke with Donna Spencer on April 16, 2009.

Ms. Spencer is a freelance information architect, interaction designer and writer. She plans how to present the things you see on your computer screen, so that they’re easy to understand, engaging and compelling: Things like the navigation, forms, categories and words on intranets, websites, web applications and business systems.

The full text of the interview appears below.

Why is it so hard for organizations to get a grip on user experience design?

I don’t know that this is necessarily true. There are lots of organizations creating awesome user experiences. Of course, there are a lot who aren’t creating great experiences, but it isn’t because they can’t get a grip on user experience, it is because they care more about themselves than about their customers. If they really cared about their customers they’d do stuff to make their experiences great – and that’s possible without even knowing anything formal about user experience. But because they don’t care about their customers, they will fail, as they should…

Is content or visual design most important to the user experience?

Content (or functionality) is ultimately what people visit a website, intranet or application for. So it’s really, really important to get that right. If the core of the product is bad, it isn’t going to work.

But the visual design is often the part that helps people to get to the content. If the layout is poor, the colours and contrast awful and the site looks like it was designed in 1995, that’s going to stop people from even trying.

So both are important, though if I ever had to choose, I’d go for great content.

Is your book on card sorting really going to be released in 2009?

Yes, by the time the conference is on, there should be real, printed books. 150-odd pages of card sorting goodness. I hear that it should be out around 28 April. Really. I promise.

Does Facebook actually offer a better user experience after the redesign?

That’s a really interesting question. I can only speak for myself, but the thing that struck me about the redesign is that all of a sudden Facebook feels like a different beast. It used to be a site where friends were, but also where there were events, and groups and silly apps. Now it just feels like twitter that you can reply to. It feels like they have done a complete turn-around on who they actually are.

So for me the experience is worse. I can get a better idea of what my friends are doing, but I do that via twitter. Now it’s much harder for me to experience groups, events and all the other things we used to do there. I’m definitely using it less.

Why are you speaking at a Philadelphia web conference organized by a company based in Denmark?

Because they rock! But really, their core business overlaps a lot with what I do. I’m interested in the content the conference offers and I think my experience offers a lot to the attendees. Plus I’ve never been to Philly, and travelling to new places is a wonderful learning experience.

Content Management: Modern Mastodon in a Tar Pit, Part Two

April 20, 2009

Part 2: Challenges and the Upside… Sort of an Upside

The CMS Tar Pits

Today, search and content management systems are a really big problem. There’s no easy solution for several reasons:

  1. CMS installations in many cases have become larger and more complex over time. At the same time, the notion of an information object has raced along about a half mile ahead of the vendors’ software. In an organization, folks want to do podcasts, create fancy reports with embedded videos, and sometimes animated charts and graphs. “Search” is an overburdened word, and it is unlikely that the wide range of content objects can be indexed without considerable resources.
  2. CMS has morphed into more than Web content. As a result, the often primitive workflow and repository functions choke when asked to support a special purpose retrieval; for example, eDiscovery. The solution to this problem is not to upgrade the CMS search system but to license another solution, pump the content into the specialized system, and run the eDiscovery with spoliation features from that system.
  3. CMS has not solved the problem of Web content. The reason goes back to the relationship between a human writing something and a system that sort of keeps that “something” organized and mostly eliminates the writer’s interaction with a programmer. CMS shifts the focus from setting up a method for creating useful, substantive content to the mechanics of keeping track of content objects and components. As a result, after the hoo haa of the CMS, Web sites have a content problem. The problem is that the information is often out of phase with the needs of the Web site user and the people who want the Web site to generate sales.
  4. CMS increases inefficiencies associated with writing. Organizations are committee writing machines. One or more individuals may write something. Then that “something” gets routed around, changes are made, a version is created, that version is shuffled around, and then an output occurs. Most document decisions are made at the 11th hour under an artificial “crisis”. This method absolves the “author” and the reviewers of real responsibility. The result is a lot of versions of the “something” and a final document that is mostly impenetrable. The “author” is like the guy or gal who sent me the engineering paper with a bunch of names on it. That person does not know what’s in the document and does not understand some parts of it. To see this type of writing in action, read the instructions for a 1099 or a patent application.
  5. CMS costs only go up. Because CMS systems have to handle the content generated by their licensees, the costs for these puppies go one way—through the roof. Here’s why: CMS infrastructure has to be expanded to handle more documents and ever larger content objects. An email may be 4 KB of XML. Stuff in a video and you get a bit of an extra load. Stuff in 20,000 documents with rich content and you get to buy lots of hardware, storage, bandwidth, and engineers to keep the Rube Goldberg machine running. The CMS has to be rebuilt on the fly, which is like plugging a leak in a speedboat towing a skier on Lake Huron. The fix is at best temporary. (A rough sizing sketch follows this list.)
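
Here is that rough sizing sketch. Every count and object size below is an assumption I made up to show the shape of the curve, not data from any CMS vendor.

```python
# Rough repository sizing: object counts and sizes are invented examples.
emails = 500_000          # small XML objects, ~4 KB each
rich_documents = 20_000   # reports with embedded media, ~15 MB each
videos = 1_000            # podcasts and clips, ~200 MB each

raw_gb = (emails * 4 / 1_000_000         # KB to GB
          + rich_documents * 15 / 1_000  # MB to GB
          + videos * 200 / 1_000)        # MB to GB
print(f"Raw content: {raw_gb:,.0f} GB")

# Indexes, versions, and backups multiply the raw figure; 3x is a guess,
# and the storage budget still only moves in one direction.
print(f"With indexes, versions, and backups (3x): {raw_gb * 3:,.0f} GB")
```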

In this environment, customers want facets, real time indexing, context sensitive queries, personalization, and access to structured data. No problem, but it won’t be cheap, easy, or doable with most of the existing budgets with which I am familiar.

Do marketers say these features can be delivered? You bet your life. Once the sale is made, the marketer goes to the next account. The vendor’s technical team is left to explain the reality and limitations of what search and content processing can do within the CMS environment.

Who’s the Lucky Mastodon?

So what’s the mastodon? The CMS that companies struggle to make work. What’s the tar pit? The chair in front of the CFO’s desk. The owner of the CMS has to sit down and explain the cost overruns. The CFO may not care that the system is generating massive indirect costs, but she will certainly want to know about the hardware, software, license fees, consulting services, and programming expenditures.

Where do CMS consultants fit in?

There are good consultants (blue chippers) and not so good consultants (azure chippers). The “blue” connotes proven professionals from established services firms; for example, some units of IBM and some McKinsey and Boston Consulting Group folks. The azure chippers come from companies with a modest track record and probably some wonderful marketing lingo. The Regis McKenna school of marketing is a model for the azure chippers.

Consultants are usually a mirror of their clients. So clients get what they purchase and what emerges as “needs”. The result is that clients with a dearth of expertise in writing, content production, and enterprise publishing don’t get the problem fixed.

What exists now is a feedback loop that leads from the edge of the tar pit to the bottom of the tar pit. After a few million years, a preserved system is dug up, dissected, and compared to whatever tools are available. Because of the turnover among some enterprise technology professionals, the corporate memory is often shallow and the folks responsible for the mastodon have moved on.

The Upside of the CMS Tar Pit

What’s the positive view of this situation?

I see three positives.

First, the disasters of today’s CMS mean that a number of individuals have attended the School of Hard Knocks and learned about some of the demands of content creation, production, and distribution.

Second, the newer systems have advanced beyond training wheels. You get air bags. You get seat belts. You get safety glass. You might be injured, but you probably won’t be killed. The US Senate’s CMS, after several years of effort with two high profile vendors, was shelved and a different approach pursued.

Third, some of today’s systems work and can be used by normal humans with so-so writing skills. I know that it is great fun to whack on the Google, but I know that Adhere Solutions (a Google partner) has implemented some nifty systems that use the GOOG as plumbing. I referenced the newer cloud-based services from a Web log vendor elsewhere in this essay. I also pointed out that the XQuery outfit MarkLogic may warrant a look.

What should you do if you have a CMS with lousy search? My first thought was to ask you to call me. My second thought was to tell you to buy a copy of Successful Enterprise Search Management. You can get information about this 2009 study by Martin White (European guru) and me (American addled goose) here. My third thought was to suggest a Google search. My fourth thought was to start over.

You will have to choose an appropriate path. My suggestion is to avoid the azure chip consulting firm crowd, newly minted experts, and anyone who sounds like a TV game show announcer.

Stephen Arnold, April 20, 2009
