Prediction, Metadata, and Good Enough
June 14, 2012
Several PR mavens sent me multiple unsolicited emails today about their clients’ predictive statistical methods. I don’t like spam email. I don’t like PR advisories that promise wild and crazy benefits for predictive analytics applied to big data, indexing content, or figuring out what stocks to buy.
March Communications was pitching Lavastorm and Kabel Deutschland. The subject: analytics that are real time, predictive, and discovery driven.
Baloney.
Predictive analytics can be helpful in many business and technical processes. Examples range from figuring out where to sell an off lease mint green Ford Mustang convertible to planning when to ramp up outputs from a power generation station. Where predictive analytics are not yet ready for prime time is identifying which horse will win the Kentucky Derby and determining where the next Hollywood starlet will crash a sports car. Predictive methods can suggest how many cancer cells will die under certain conditions and assumptions, but the methods cannot identify which cancer cells will die.
Can predictive analytics make you a big winner at the race track? If firms with rock solid predictive analytics could predict a horse race, would these firms be selling software or would they be betting on horse races?
That’s an important point. Marketers promise magic. Predictive methods deliver results that provide some insight but rarely rock solid outputs. Prediction is fuzzy. Good enough is often the best a method can provide.
In between is where hopes and dreams rise and fall with less clear cut results. I am, of course, referring to the use by marketers of lingo like this:
- predictive analytics
- predictive coding
- predictive indexing
- predictive modeling
- predictive search
The idea behind these buzzwords is that numerical recipes can process information or data and assign probabilities to outputs. When one ranks the outputs from highest probability to lowest probability, an analyst or another script can pluck the top five outputs. These outputs are the most likely to occur. The approach works for certain Google-type caching methods, providing feedback to consumer health searchers, and figuring out how much bandwidth is needed for a new office building when it is fully occupied. Picking numbers at the casino? Not so much.
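The "rank the outputs, pluck the top five" idea described above can be sketched in a few lines. This is a minimal illustration, not any vendor's method; the outcome labels and probabilities below are invented placeholders.

```python
# A minimal sketch of ranking probability-scored outputs and taking the top n.
# The outcomes and probabilities are invented for illustration only.

def top_outputs(scored_outputs, n=5):
    """Return the n outputs with the highest assigned probabilities."""
    return sorted(scored_outputs.items(), key=lambda kv: kv[1], reverse=True)[:n]

predictions = {
    "outcome_a": 0.31,
    "outcome_b": 0.22,
    "outcome_c": 0.18,
    "outcome_d": 0.11,
    "outcome_e": 0.09,
    "outcome_f": 0.05,
    "outcome_g": 0.04,
}

# An analyst or another script plucks the five most likely outputs.
for label, prob in top_outputs(predictions):
    print(f"{label}: {prob:.2f}")
```

Note that the sketch only orders what the model already scored; if the underlying probabilities are fuzzy, the top five are fuzzy too, which is the "good enough" point made above.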
Google, Firefox, and the Threatening Yandex
June 11, 2012
Can an online outfit buy traffic? Yep. Should it? Yep, if you are Google and have lots of dough. Motivation is important too. Example: Surging Yandex, the ad, online payment, email, and search outfit from Russia.
Computerworld ran two stories about six months apart. The first appeared on December 22, 2011. “Google to Pay Mozilla $300M Yearly in New Search Deal, Says Report.” The main idea is that Google delivers cash; Mozilla delivers traffic. No big deal because the tie up was an extension and would be valid for three years. Pay to play is a big deal. Buying traffic is what makes search engine optimizers’ and ad execs’ hearts go pitter patter.
The second story appeared on June 10, 2012. “Mozilla Dumps Yandex as Default Search for Russian Firefox.” The subtitle was “Last year’s $900M global deal with Google forces Mozilla to swap search engines.” Hmm. So where did the extra $600 million come from? What’s the reason behind dumping Yandex.ru search for Google.ru search?
Can money buy traffic? Yes, it can. The Firefox deal explains this and shows the value of eyeballs.
Perhaps Yandex is more than a footnote in the search expert’s write ups? Maybe the Yandex folks are mounting a significant threat to the Google?
My view is that Yandex is a potential problem for Google and not just in Russia. The company has bright engineers. The company owns a chunk of Blekko which keeps getting mentioned as a useful search system by failed webmasters, azure chip consultants, and the odd blogger here and there. I like Blekko, and I really like Yandex.
What’s going on is one of those “predictive” management moves for which I admire Google. Like the two-class share play triggered in my opinion by rising costs and softening ad traffic to revenue ratios, Google senses warning lights with regard to Yandex. The system sucks up ArnoldIT content and puts some of it in its Russian language index and some of it in its English language index. Go figure. The only link I have with Russia is a distant relative who could add and subtract pretty well. His name, God rest his soul, was Vladimir Ivanovich Arnold, best known for the Kolmogorov-Arnold-Moser Theorem. At least my uncle did not do the long distance swimming in cold lakes that Kolmogorov used to jump start his thinking. Arnold just sat and contemplated nature.
What I want to capture is that Google is taking steps to maintain a grip on its Russian traffic. Like China, Russia poses a challenge to some online companies. Google muffed the bunny in China, and Russia was not cooperative when it came to Google’s space travel adventures.
My hunch is that even though Firefox is not the powerhouse it once was, Google wants to make sure it keeps what Russian traffic it has and get more if possible. Chrome does not seem to be enough.
I find the pay for traffic actions of Google and the whole Panda Penguin exercises interesting. Are they two sides of the same coin? My thought is that more traffic centric plays will be forthcoming. The impact of mobile search is going to pose a high hurdle for certain companies when it comes to maintaining online advertising revenues.
A downturn could cause the cost curve to blast through the revenue curve. Like Amazon, Google has to balance costs and revenues. Each company is making quite interesting moves despite their apparent lock on certain markets. Locks can be broken.
Stephen E Arnold, June 11, 2012
Sponsored by Polyspot
Google and Enterprise Search: The Eichner Vision
June 7, 2012
Google has a new head of enterprise search: Matt Eichner, a veteran of Yale, Harvard, and Endeca. Computerworld UK ran an interesting article on June 5, 2012 by Derek du Preez. “Matt Eichner: Bringing Google.com to the Enterprise” walks through what appears to be the game plan for the enterprise search unit for the next three or four months, maybe longer if Google generates more traction than it has in the previous year or two.
The article reports that Google “commands over 90 percent of the UK’s online search market.” Mr. Eichner allegedly said:
If you look at Google in the search space, we are taking that consumer expectation that we developed on Google.com and packaged both the user interface and the algorithms behind it into an enterprise appliance.
The GSA, as the Google Search Appliance is known, has been available for about a decade. Based on chatter at conferences and opinions floated by assorted search experts, Google has placed upwards of 55,000 GSAs in organizations worldwide. Autonomy, by contrast, is alleged to have about 20,000 licensees of its search and content processing systems. Microsoft SharePoint, which includes a search system, is rumored to have more than 100 million licenses. It is difficult to know which enterprise search vendor has the most customers. The numbers are not audited, and each vendor in the enterprise search market tiptoes around how many customers are signed up, how many customers are paying their bills, how many customers are dropping licenses, and how much revenue flows to the vendor from enterprise search service and support. In short, it may be difficult to know how big any one vendor’s share of the enterprise search market is or if there is even a market for enterprise search in today’s mash up and fluid business environment.
A block diagram showing a GSA in an enterprise installation. Note the presence of “OneBox” units. Authorized Google partners may be needed to get this type of implementation up and running. If this is accurate, then Mr. Eichner’s assertion about an “out of the box” solution may require some fine tuning. Image source: DevX at http://www.devx.com/enterprise/Article/33372/1954
Google believes there is a market, however.
The pointy end of the spear for Google is its search appliance. The idea is that a customer can order an appliance and get it up and running quickly. The GSA can scale by plugging in more GSAs. The GSA understands “enterprise context”.
According to Computerworld’s write up, Mr. Eichner asserted:
At Google we have billions of queries from Google.com coming in every day that we are able to analyze and deliver an enterprise tool that balances human behavior and search relevance.
Google’s enterprise services are cognizant of big data, which most vendors suggest can be managed by their search system. Google is no exception. Mr. Eichner, according to Computerworld, observed:
Big data is in the eye of the beholder. If I gave you 500,000 documents, which doesn’t sound like a lot, and I said to you find something in there – you would look at me and say, ‘can I use a search engine?’ From your perspective, 500,000 would be big data. We often lose sight of that. Insight needs to be delivered when you have more data than you can process. This can come in the form of 500,000 documents or hundreds of millions of documents. The real mandate in the world today is to get up the competitive stack by being more knowledgeable about what you are doing more quickly – that’s the nature of the information economy. The imperative is to get better at assimilating the knowledge you have and acting on it. The inverse of this is if you have big data and you don’t have insight. That’s the equivalent of saying ‘I’ll take a guess, I won’t use the information and I’ll take a guess.’”
No Shirking Unsolicited Inputs about Traditional Publishing
June 4, 2012
Straight away I am not a “real” journalist. I am not a college professor. I am not a pundit. I am an old guy who is greatly amused by the antics of Warren Buffett, a budding newspaper magnate. I am an old guy who finds the business high jumps of traditional publishers a modern day Kabuki. The story is well known, and I like seeing how the actors deliver the script. I am an old guy who chuckles at online successes which are attracting more legal hassles than a high school physics teacher’s magnet and iron filings demonstration.
You will want to read “The Washington Post Co.’s Self Destructive Course,” “Responding to Shirky on the Washington Post,” and “WaPo Must Transform to Survive.” These write ups adopt the “we know better” approach to providing business advice. I am assuming that the authors are ace executives, have a good mastery of finance and management, and can run big organizations in a way that would cause Jack Welch to take notes.
A tip of the sombrero to Cervantes and Picasso. Get those windmills.
The fact of life in traditional publishing is, for me, easy to summarize:
There are fewer people who read books, magazines, and newspapers than there were in 1970. For the readers, there are many choices. For those who don’t read, the easiest path between their need for information and information is medieval, maybe preliterate Bronze Age. And Google? Instead of white papers Google shoots videos. I sat and watched two camera people and one guy with a sound boom. The focal point was a Googler “running the game plan” about Google enterprise search. The Googler left out some information which I had heard circulating in mid May 2012 at the search conference in New York; namely, the US government was not renewing some Google Search Appliances due to cost, there are too few engineers devoted to the Google Search Appliance, and that any of the nifty integration requires custom code. Yep, a video. The truth for the non readers who don’t have time for the old fashioned approach to information.
Amazon is pushing short books, 3,000 words to maybe three times that length. Why? There are lots of well heeled Amazonians who do not have time for a weighty tome. Magazines cost quite a bit, even for those with six or seven figure incomes. Future Publishing turned me off with its hefty price in the UK last week and an even heftier price at the lone remaining book store in my area of rural Kentucky.
The fix is not “hamsterized nonsense.” (The phrase is cute but I don’t know what it means.) The nonsense, I believe, is share buy backs. Okay, but if one is in the right part of the financial food chain, those buy backs can deliver a new BMW or a condo in Nice, France. That is the marvel of point of view.
My view is that these three pundits are advocating actions which are similar to a non playing couch potato shouting at the TV during a professional football game, “I could do a better job calling plays than you.” If that person were better, wouldn’t that person be working for a professional team? The fact that a person has not won such a job suggests that either the person is unqualified or had a shot and flubbed it. It is easier to criticize than to do. When the doing amounts to telling senior managers what they should do, I enjoy the exercise immensely.
My view of traditional publishing is:
- Demographics have changed so some of the old assumptions held by traditional publishing companies either don’t work or lead to unexpected consequences. One should learn from mistakes, but if most publishing Web sites are cost centers, not revenue pumps, then the problem is deeper than doing some digital stuff and adapting.
- Paying for content works when the information is must have. The fact is that most information is nice to have. Trying to charge for nice to have just does not work. A quick look at the history of Dialcom, the Source, or Gannett’s online local newspaper plays provide some interesting case examples.
- Small start ups cannot be replicated at most companies. The reason is that the people who do start ups often approach tasks with a different mind set than an employee. Until the mind set shifts, arguing that a major publishing company should work like a two person start up is silly. Never worked. Won’t work. Even digital outfits like Google cannot approach innovation the way they did in 1996 to 1998. When Googlers can’t find alternative revenue streams after 13 years of trying, what does one expect of traditional publishing companies?
The fact is that traditional publishing companies are in the buggy whip manufacturers’ position when automobiles appeared. The fact that non executives without profit and loss responsibility offer advice is just funny. The professional managers are often aware of what must be altered. Those managers and their blue chip advisors cannot implement meaningful change.
Academic inputs are not likely to induce change. Real journalists are not the answer to traditional publishing company woes. Verbiage is quite entertaining, however.
Stephen E Arnold, June 4, 2012
Sponsored by HighGainBlog
Semantic Key Word Research
May 29, 2012
Keyword research is the time-tested, reliable way to locate information on the Internet and in databases. There have been many changes to the way people use keyword research; some have stayed around and others have disappeared into the invisible web faster than a spambot hits a web site. The Search Engine Journal has come up with “5 Tips for Conducting Semantic Keyword Research,” which argues that users “must recognize the semantic nature of the search engines’ indexing behaviors.”
For those without a dictionary handy, semantics refers to the meaning or interpretation of a word or phrase. When a user types a phrase into a search engine, the engine uses its index (akin to browsing through a list of synonyms) to find other pertinent results.
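The synonym-style matching described above can be sketched crudely. This is an illustration of the general idea only, not how any real engine works; the synonym table, query, and document text are invented, and production systems use far larger statistical models.

```python
# A crude sketch of synonym-style query expansion and matching.
# The synonym map and the sample document are invented for illustration only.

SYNONYMS = {
    "car": {"car", "automobile", "vehicle"},
    "cheap": {"cheap", "inexpensive", "affordable"},
}

def expand(term):
    """Return the term plus any listed synonyms."""
    return SYNONYMS.get(term, {term})

def matches(query, document):
    """True if every query term, or a synonym of it, appears in the document."""
    doc_words = set(document.lower().split())
    return all(expand(term) & doc_words for term in query.lower().split())

# The document contains neither "cheap" nor "car", yet it matches,
# because the engine "knows" the synonyms.
print(matches("cheap car", "an inexpensive automobile for sale"))
```

The point of the sketch is the one the article makes: the engine matches meanings, not literal strings, so content written with varied phrasing can still be found.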
A happy quack to http://languagelog.ldc.upenn.edu
So how do the tips measure up? Tip #1 has users create a list of “level 1” core keywords, aka write a list of subject keywords. This is the first step in any research project, and most people will be familiar with it if they have completed elementary school. Pretty basic, but it builds the foundation for an entire project. Tip #2 delves farther by having users expand the first list by finding more supporting keywords that are not necessarily tied to the main keyword but are connected to others on the list. Again another elementary research tip: reach out and expand.
Tip #3 moves us away from the keyword lists and tells users to peruse their results and see what questions they can answer. After the users find what can be answered they make another list detailing their findings (so we didn’t step that far away from lists).
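The list building in tips #1 through #3 amounts to a simple expansion exercise, which can be sketched as follows. The core terms and related terms below are hypothetical placeholders, not anything from the Search Engine Journal piece.

```python
# A sketch of the tip #1-#3 workflow: core keywords plus supporting keywords,
# merged into one working list. All terms are hypothetical placeholders.

core_keywords = ["enterprise search", "indexing"]

# Tip #2: supporting terms connected to the core list.
related_terms = {
    "enterprise search": ["findability", "content processing"],
    "indexing": ["metadata", "taxonomy"],
}

def build_keyword_list(core, related):
    """Merge core terms with their supporting terms, skipping duplicates."""
    merged = list(core)
    for term in core:
        for extra in related.get(term, []):
            if extra not in merged:
                merged.append(extra)
    return merged

keywords = build_keyword_list(core_keywords, related_terms)
print(keywords)
```

Tip #3 then becomes a review pass over `keywords` to note which questions the combined list can answer, which is just another list.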
Tip #4 explains how to combine tips #1-3, which will allow the users to outline their research and then write an article on the topic. Lastly, Tip #5 is a fare-thee-well, good luck, and write interesting content:
“One final tip for incorporating semantically-related keywords into your website’s content… Building these varied phrases into your web articles should help eliminate the stilted, unpleasant content that results from trying to stuff a single target keyword into your text a certain number of times.
However, it’s still important to focus on using your new keyword lists to write content that’s as appealing to your readers as it is to the search engines. If Google’s recent crackdowns on Web spam are any indication of its future intentions, it’s safe to say that the best long-term strategy is to use semantic keywords to enhance the value of your copy – without letting its optimization eclipse the quality of the information you deliver to your website visitors.”
What have we got here? Are the tips useful? Yes, they are, but they do not bring new material to keyword searching. As mentioned earlier, these steps are taught as the very basics of elementary research: make a keyword list about your topic, find associated terms, read what you got, then write the report. It is true that many schools and higher education institutes do not teach the basics, thus so-called researchers lack these fundamental skills. Also, people tend to forget the beginner’s steps. These two common mishaps make articles like this necessary, but the more seasoned researcher will simply intone, “Duh!”
Whitney Grace, May 29, 2012
Sponsored by Polyspot
Big Outfits Buy Search Vendors: Does Chaos Commence?
May 25, 2012
I don’t want to mention any specifics in this write up. I have a for-fee Overflight on the subject. I do want to highlight some of the preliminary thoughts the goslings and I collected before creating our client-focused analysis. This write up was sparked by the recent news that the founder of Autonomy, which HP acquired for $10 billion, is seeking new opportunities after eight months immersed in the HP way. See “Hewlett-Packard Can’t Say It Wasn’t Warned about Autonomy.” That article contained a remarkable statement, even when measured against the work of other “real” journalists:
Some will say this is a classic case of an entrepreneurial business being bought by a hulking, bureaucratic institution which failed to integrate it and failed to understand its culture. Others will say HP, desperate to do a deal, simply overpaid for a company that was going to struggle to maintain its sales and earnings momentum and was deluded about its abilities. Certainly warnings about the latter were there for HP to see before it handed over all that cash. Here’s what Marc Geall, a Deutsche Bank analyst who used to work at Autonomy, said in October 2010 about the business model: “…investment in the business has lagged revenues… [which] could affect customer satisfaction towards the product and the value it delivers.” He went on to warn that Autonomy’s service business was “too lean” and that it “risks falling short of standards demanded by customers”. All of which prompted Geall to question whether the company needed to change its business model – “traditionally, software companies have needed to change their business models at around $1bn in revenues”.
Yep, now the issues are easy to identify: the brutal cost of customer support, the yawning maw of research and development, the time and cost of customizing a system. The problem is that these issues have been identified. However, senior managers looking for the next big thing are extremely confident of their business and technical acumen. Search is a slam dunk. Heck, I can find what I want in Google. How tough can it be to find that purchase order? That confidence may work in business school, but it has not worked in the wild-and-crazy world of enterprise search and content processing.
Think back to the notable search acquisitions over the last few years. Here are some to jump start your memory:
- IBM in 2005 and 2006 purchased iPhrase (a MarkLogic precursor with semantic components) and Language Analysis Systems (a next generation content processing vendor)
- Microsoft acquired Powerset and Fast Search & Transfer in the 2008 to 2009 period. Both vendors had next-generation systems with semantic, natural language processing, and other near-magical capabilities
- Oracle acquired TripleHop in 2005, focused on its less-and-less visible Secure Enterprise Search line up (SES10g and SES11g), then went on a buying spree to snap up InQuira (actually the company formed when two weaker players, Answerfriend Inc. and Electric Knowledge Inc., merged in 2002 or 2003), RightNow (which uses the Q-Go natural language processing system purchased in 2010 or 2011), and Endeca, an established search vendor with technology dating from the late 1990s
- SAP snagged some search functions with its NetWeaver buy in 2004, which coexisted in a truce of sorts with the SAP TREX system. When SAP bought Business Objects in 2007, the company inherited Inxight Software, a text analytics vendor with assorted wizardry explained in buzzwords by marketing mavens.
So what have we learned from these buy outs by big companies? Here are the observations:
First, search and content processing does not learn to sit, come, and roll over the way other types of software do. The MBAs, lawyers, and accountants issue commands like good organizational team players. The enterprise search and content processing crowd listens to the management edicts with bemusement. Everyone thinks search is a slam dunk. How tough can a utility function be? Well, let me remind you, gentle reader, search is pretty darned difficult. Unlike a cloud service for managing contacts, search is not one thing. Furthermore, those who have to use search are generally annoyed because systems have since 1970 failed to generate answers. Search outputs create more work. Usually the outputs are mostly wide of the mark. Big companies want to sell a software product or service that solves a problem like what is the backlog for the Midwestern region or when did I last call Mr. Jones? The big companies don’t get this type of system when they buy, often for a premium, companies which purport to make content findable, smart, and accessible. So we have a situation in which a sales presentation whets the appetite of the big company executive who perceives himself or herself as an expert in search. Then when anticipation is at its peak, the sales person closes the deal. In the aftermath, the executives realize that search just does not follow the groove of an accounting system, a videoconferencing system, or a security system. Panic sets in, and you get crazy actions. IBM pretty much jettisoned its search systems and fell in love with open source Lucene / Solr. Good enough was a lot better than trying to figure out the mysteries of proprietary search and how to pay for the brutal research and development costs search requires.
Second, search is a moving target. I find that as recently as my meetings with sleek MBAs from six major financial firms, search was assumed to be a no brainer. Google has figured out search. Move on. When I asked the group how many considered themselves experts in search, everyone replied, “Yes.” I submit that none of these well-paid movers-and-shakers are very good at search and retrieval. Few of them have the time or patience for old fashioned research. Most get information from colleagues, via phone calls which include “I have a hard stop in five minutes”, and emails sent to people whom they have met at social functions or at conferences. Search is not looking up a phone number. Search is not slamming the name of a company into Google. Search is not wandering around midtown Manhattan with an iPhone displaying the location of a pizza joint. Search is whatever the user wishes to find, access, know, or learn at any point in time and in any context. Google is okay at some search functions. Other vendors are okay at others. The problem is that virtually all search and retrieval solutions are okay. People have been trying for about 50 years to deliver responses to queries that are what the user requires. Most systems dissatisfy more than half their users and have for 50 years. A big company buying a next generation search system wants these problems solved. The big company wants to close deals, get client access licenses, or cloud transactions for queries. But the big companies don’t get these things, so the MBAs, lawyers, and accountants are really confused. Confused people make crazy decisions. You get the idea.
Third, search does not mean search. Search technology includes figuring out which words to index in a document. Search does a miserable job of indexing videos unless the video audio track is converted to ASCII and then that ASCII is indexed. Even with this type of content processing system, search does not deliver a usable output. What a user gets is garbled snippets and maybe the opportunity to look at a video to figure out if the information is relevant. Search includes figuring out what a user wants before the user asks the question or even knows what the question is. One company is collecting millions in venture money to achieve this goal. Good luck on that. Search includes providing outputs that answer an employee’s specific question. Most systems provide a horseshoe type of result; that is, the search vendor wants points for getting close to the answer. Employees who have to click, scan, close, and repeat the process are not amused. The employee wants the Smith invoice from April, not increased risk of carpal tunnel problems. The poobahs who acquire search companies want none of these excuses. The poobahs want sales. What search acquisitions generate are increased costs, long sales cycles, and much friction. Marketers overstate and search systems routinely under deliver.
Who cares?
Another enterprise search train wreck. The engineer was either an MBA, an accountant, or a lawyer. No big deal. Just get another search train. How tough can it be to run a search system? Thanks to http://www.eccchistory.org/CCRailroads.htm
Well, the executives selling big companies a search and content processing system just want the money. After years of backbreaking effort to generate revenues, the founders usually figure out that there are easier ways to earn a living. If the founders don’t bail out, they get a new job or become a guru at a venture capital firm.
Google and Going Beyond Search
May 17, 2012
The idea for this blog post began when I worked through selected Ramanathan Guha patent documents. I analyzed these in my 2007 study Google Version 2. If you are not familiar with them, you may want to take a moment, download these items, and read the “background” and “claims” sections of each. Here are several filings I found interesting:
- US 2007/0038600
- US 2007/0038601
- US 2007/0038603
- US 2007/0038614
- US 2007/0038616
The utility of Dr. Guha’s invention is roughly similar to the type of question answering supported by WolframAlpha. However, there are a number of significant differences. I have explored these in the chapter in The Google Legacy “Google and the Programmable Search Engine.”
I read with interest the different explanations of Google’s most recent enhancement to its search results page. I am not too eager to highlight “Introducing the Knowledge Graph: Things, Not Strings” because it introduces terminology which is more poetic and metaphorical than descriptive. Nevertheless, you will want to take a look at how Google explains its “new” approach. Keep in mind that some of the functions appear in patent documents and technical papers which date from 2006 or earlier. The question this raises is, “Why the delay?” Is the roll out strategic in that it will have an impact on Facebook at a critical point in that company’s timeline, or is it evidence that Google experiences “big company friction” when it attempts to move from demonstration to production implementation of a mash up variant?
In the various analyses by experts, “real” journalists, and folks who are fascinated with how Google search is evolving, I am concerned that some experts describe the additional content as “junk” and others view the new approach as “firing back at Bing.”
You must reach your own conclusion. However, I want to capture my observations before they slip from my increasingly frail short term memory.
First, Google operates its own way and in a “Google bubble.” Because the engineers and managers are quite intelligent, clever actions and economy are highly prized. Therefore, the roll out of the new interface tackles several issues at one time. I think of the new interface and its timing as a Google multiple warhead weapon. The interface takes a swipe at Facebook, Bing, and Wolfram Alpha. And it captures linkage, wordage, and puffage from the experts, pundits, and wizards. So far, all good for Google.
A MIRV deployment. A single delivery method releases a number of explosive payloads. One or more may hit a target.
Second, the action reveals that Google *had* fallen behind in relevancy, inclusion of new content types, and generating outputs which match the “I have no time or patience for research” user community. If someone types Lady Gaga, the new interface delivers Lady Gaga by golly. Even the most attention deprived Web or mobile user can find information about Lady Gaga, click, explore, and surf within a Guha walled garden. The new approach, in my view, delivers more time on Google outputs and increases the number of opportunities to display ads. Google needs to pump those ads for many reasons, not the least of which is maintaining revenue growth in the harsh reality of rising costs.
Third, the approach allows Google to weave in social features, or at least make a case to advertisers that it is getting on its social pony, collecting more fine grained user data, and offering a “better search experience.” The sales pitch side of the new interface is part of Google’s effort to win and retain advertisers. I have to remind myself that some advertisers are starting to realize that “old fashioned” advertising still works for some products and concepts; for example, space advertising in certain publications, direct mail, and causing mostly anonymous Web surfers to visit a Web site and spit out a request for more information or, better yet, buy something.
The new interfaces, however, are dense. I point out in the Information Today column which runs next month that the density is a throw back to the portal approaches of the mid 1990s. There are three columns, dozens of links, and many things with which to entice the clueless user.
In short, we are now in the midst of the portalization of search. When I look for information, I want a list of relevant documents. I want to access those documents, read them, and in some cases, summarize or extract factoids from them. I do not want answers generated by someone else, even if that someone is tapping in the formidable intelligence of Ramanathan Guha.
Image source: http://www.billdolson.com/SkyGround/reentryseries/reentryseries.htm
So Google has gone beyond search. The problem is that I don’t want to go there via the Google, Bing, or any other intermediary’s intellectual training wheels. I want to read, think, decide, and formulate my view. In short, I like the dirty, painful research process.
Stephen E Arnold, May 17, 2012
Sponsored by Polyspot
Gartner, A Former Gartner Person, and Ego
May 14, 2012
Computerworld is supposed to be about computers. Now I don’t think too much about Computerworld-era computers any more. I think that the owner of Computerworld was gung ho on Verity search once. That told me a great deal about Computerworld’s parent company.
The story is “Can a New Analyst Firm Take Down Gartner?” Wow. Quite an amazing write up. Sprawled across three pages, the story is written by a person about whom I know quite a lot after reading the “real” news in Computerworld; for example:
- The author of the story is Rob Enderle who is a big wheel and apparently the brains behind the Enderle Group.
- Mr. Enderle worked at Forrester (an azure chip outfit explaining what’s what in all things related to anything that computes) and Giga Information Group (ditto the Forrester services), and is a professional who has “worked for” IBM. He worked on audits, competitive analysis, marketing, finance, and security.
- Mr. Enderle is a TV talent type for CNBC, Fox (a Murdoch “real” journalism outfit), Bloomberg, and NPR.
- Mr. Enderle “knows” Gideon Gartner, the brains behind the Gartner we know and love today as a publicly traded azure chip consulting firm.
- Mr. Enderle “helped found” the Giga Information Group.
- Mr. Enderle knows that “line management…doesn’t listen to Gartner and, for that matter, often doesn’t listen to IT either.”
There are other biographical nuggets in the write up too. Mr. Enderle “knows” Gideon Gartner. Be still my heart!
The main point is that an outfit involved in social CRM might, hypothetically and mostly without factual basis, be able to “take down Gartner.”
Yowza.
What does the kitty see when it looks in the mirror? A house pet or a wild lion?
The super hero in this story is a company called Ombud, which I assume is shorthand for ombudsman, a full time equivalent who is supposed to be a pair of ears with moist eyes and a warm nature able to solve a customer’s problem. I don’t know any ombudsmen, however. Those characteristics often match up with social workers in my experience.
There were several overt main points in the story about Ombud, which I found more like search engine optimization and ego marketing than analysis. For instance, I learned:
Gartner Group was conceived well before social networking, at a time when there not only was no Internet but no PCs. It seemed that it wouldn’t be long before someone would figure out how to blend experts, practitioners and vendors into a service that would be cheaper, more current and more focused on the unique needs of an individual company, thus providing more real value (regardless of price) than the older model.
Er, so what? Ombud is a Web site for a company which offers the same pay to play information which comes from most azure chip and blue chip consulting firms. Check ‘em out yourself at www.ombud.com.
Second, unlike Gartner and I assume any other consulting outfit, Ombud sells “access to RFPs which users create and vendors bid on.” I think the idea is that one can eliminate intermediaries, post a request for work, get bids, and pick a vendor. The organization just goes direct. I know how poorly the traditional procurement process works, but I am sure that a Fortune 50 company will experiment with Ombud. Anything that cuts the burdensome fees imposed by azure chip consultants is a good thing for most chief financial officers.
The Courier Journal: A Louisville Death Rattle
May 13, 2012
In 1981, I joined the Courier Journal and Louisville Times. That was 31 years ago. I am not sure how I made the decision to leave the Washington, DC, area to journey to a city whose zip code and telephone area code were unknown to me. I am a 212, 202, and 301 type of person.
I recall meeting Barry Bingham Jr. He asked me what I did in my spare time. I was thunderstruck. My former employers—Halliburton Nuclear Utility Services and Booz, Allen & Hamilton—never asked me those questions. Those high powered, hard charging outfits wanted to know how much revenue I had generated and how much money I had saved the company, when the next meeting with the Joint Committee on Atomic Energy was, and how the Cleveland Design & Development man trip vehicle was rolling along. The personal stuff floored me.
I did not have an answer. As a Type A, Midwestern, over-achieving, no-brothers-and-no-sisters worker bee, fun was not a big part of my personal repertoire.
I asked him, “Why?”
I recall to this day his answer, “I want our officers and employees to have time with their families, get involved in the community, and do great work without getting into that New York City thing.”
Interesting. The Courier Journal had a very good reputation. The newspaper was profitable, operated a wide range of businesses, printed the New York Times’s magazine for the Gray Lady, and operated a commercial database company. In fact, in 1980 the Courier Journal was one of the leaders in commercial online information, competing with a handful of other companies in the delivery of information via digital channels, not the dead-tree, ruin-the-environment, and dump-chemicals approach of most publishing companies.
In 1986, Gannett bought the Courier Journal. The commercial database unit was of zero interest to Gannett, so it and I were sold to Bell+Howell. After a short stint at a company entrenched in 16 mm motion film projectors, I headed back to New York City.
I retained my residence in Louisville, and I have watched the trajectory of the Courier Journal as it moved forward.
I have to be blunt. The Courier Journal is not the newspaper, the company, or the community force it was when I joined Mr. Bingham and a surprisingly diverse, bright, forward-looking team 31 years ago. The 1981 management approach of the Courier Journal was a culture shock to me. Think of the difference between Dick Cheney and Mr. Rogers. The 2012 approach saddens me.
This morning I read “Answering Your Questions on CJ Changes,” written by a person whom I do not know. The author of the article is Wesley Jackson, publisher of the Courier Journal. (I never liked the acronym CJ and still do not.)
The main point of the article is that the Courier Journal has to raise its prices. Last week, Mr. Jackson wrote a short article in the Courier Journal informing subscribers that a letter would arrive explaining the new services that would be available. We received our letter on Wednesday, May 9, 2012. We called on Thursday, May 10, 2012, and cancelled our subscription. I am not sure how many other subscribers took this action, but a sufficient number of Courier Journal readers called to kill the phone system at the newspaper.
Mr. Jackson wrote this morning:
Unfortunately our Customer Service Center’s phone system had technical problems, and many of you had long wait times or could not get through to get your questions answered. That I know was frustrating.
I bet. I would love to see the data about the number of calls and the number of cancellations that the paper received when it announced the rate hike, a free iPad application for subscribers, and an email copy of the newspaper sent each day to paying customers.
The write up troubled me for several other reasons:
- Some of the word choices were of the touchy-feely school of communication. There are 19 “we’s.” The word “value” appears twice; there are seven categoricals (six “all’s” and one “never”); and the word “conversation” appears twice.
- There is at least one split infinitive: “to personally apologize.”
- An absolutely amazing promise expressed in this statement: “For those of you who would like to ask questions directly, please email me at publisher@courier-journal.com or send a letter to Publisher, Courier-Journal Media, 525 W. Broadway, Louisville, KY 40202. I promise you will each receive a response.”
“Promise,” “all,” and “never”—yep, I believe those assertions.
I would have included an image of Wesley Jackson but I had to pay for it. Not today, sorry.
My view is that I hear a death rattle from the Courier Journal. The reality of the newspaper is that it runs more and more syndicated content. The type of local coverage for which the paper was known when I joined in 1981 has decreased over the years. When I want news, I look at online services. What I have noticed is that what appears in the Courier Journal has been mentioned on Facebook, Twitter, or headline aggregation services two or three days before the information appears in either the Courier Journal’s hard copy edition or its online site, www.courier-journal.com.
Dave Kellogg, the former president of MarkLogic, used to chide me that I should not refer to major publishing operations as “dead tree publishers.” My view was and is that I am entitled to my opinion. Traditional publishing companies have failed to respond to new opportunities to disseminate and profit from information.
The list of mistakes includes:
- Belief that an app will generate new revenue. Unfortunately apps are not automatic money machines. (Print-centric apps are not the go-to medium for many digital device users.)
- Assumptions about a person’s appetite for paying for “nice to have content.” (One pays for “must have” content, not “nice to have” content.)
- Failure to control costs. (Print margins continue to narrow as traditional publishers try to regain the glory of the pre-digital business models.)
- Firing staff who then go on to compete by generating content funded by a different business model. (This blog is an example. We do online advertising and inclusions and sell technical services. For some reason, this works for me thanks to my team which includes some former “real” journalists.)
- Assuming that new technology for printing color on newsprint equips an information technology department to handle other information technologies in an effective manner. (Skill in one technical area does not automatically transfer to another technical field.)
I can hear the labored breathing of a local newspaper struggling to stay alive. What do you hear?
Stephen E Arnold, May 13, 2012
Sponsored by HighGainBlog, which is ArnoldIT
Inktomi and Fast Search: Two Troubled Search Companies, One Lesson
May 8, 2012
I found the write up by Diego Basch interesting and thought provoking. I have a little experience with Inktomi. For the original FirstGov.gov system, the US government used Inktomi for the public facing index of US government unclassified information. (FirstGov.gov is now www.usa.gov.)
Inktomi had in 2000 a “ready to go” index of content from Dot Gov Web sites. The firm’s business model matched the needs of the US government. There were the normal contracting and technical hurdles for a modestly sized US government project with a fairly tight timeline. No big deal. Job done. Inktomi worked.
When I read “A Relevant Tale: How Google Killed Inktomi,” I thought the write up had some useful information. However, I don’t think Google killed Inktomi or any other search system. Google did not kill Fast Search & Transfer, Excite, HotBot, or any other search system in its rise to its alleged 65 percent share of the search market. (Google’s share is actually much higher, based on my analyses.)
Excite’s early 1997 attempt at portalization. Can you spot the search box? Does this look like the current version of Google? Say, “No.” Now log into Google and run a query for rental car. Now do you see the similarity between the early portal craziness and the modern Google? I do.
What killed off these outfits was their business models. Let me explain using Inktomi and Fast Search as examples. I could cite other cases, but these two are okay for a free blog post for the two or three readers I have.
Inktomi, for whatever reason, concluded that people wanted to offer search, not do the heavy lifting themselves. In the portal fever that was raging from 1998 to 2001, Web sites wanted to be the “front page” of the Internet. The result was that America Online, Excite, Lycos, and Yahoo among others jammed links on the splash page. At one time, the Excite home page carried more than 60 links. When I tried to count them, I quit once I hit 50. My eyes and patience can cope with three to five things. More than that, and I move on.
Inktomi’s analysts did the spreadsheet fever thing, making assumptions about how many Web sites would license Inktomi results, pay Inktomi’s fees, and generate revenue from the “front page of the Internet” craziness. The reality was that Inktomi did not have enough customers to support the cost of the spidering, bandwidth, investment in performance, research and development for precision and recall, and the other costs that are underestimated or just ignored. The result was the collapse of the company.