August 19, 2014
I just read “The Mysterious Case of Hewlett-Packard’s Autonomy Deal.” The HP and Autonomy PR professionals have some work to do. Heck, search and content processing vendors have some work to do. The unflagging interest in the purchase of the largest enterprise search and content processing vendor (Autonomy) by one of the largest sources of printer ink (Hewlett Packard) is drawing attention to the risks associated with information retrieval.
The write up from Therese Poletti’s Tech Tales is an example of how a utility function like search is sporting a black eye, a chipped tooth, and a broken nose. Ugly.
The mystery, as I understand the article, concerns writing down “almost $9 billion of its $11.1 billion acquisition of the British software company, Autonomy Corp.” The article reports:
one of the law firms that represented the shareholders in their case against H-P directors, Cotchett, Pitre & McCarthy LLP, now working with H-P, is being accused of a conflict of interest. Cotchett was previously the lead counsel in another class action against H-P. That suit, which also recently settled, alleged that the company’s inkjet printers falsely warned consumers when they were out of printer ink.
I savored the “falsely warned” phrase.
The article reports:
“The inkjet litigation has no bearing on the Autonomy settlement,” an H-P spokeswoman said in an email. “We believe the motion to intervene in the derivative case is just a lawyer-driven attempt to seek attorneys’ fees. It is meritless, as will be shown in court filings.”
And the mystery of the write down? The article asserts:
H-P has said that $5 billion of the write-down was due to accounting improprieties at Autonomy. But so far, the accounting problems found at Autonomy are said to be around $200 million in either hardware sales at a loss or fraudulent transactions, out of just over $1 billion in annual revenue. How this became a multi-billion-dollar write-down is a big question among investors. Perhaps these legal maneuvers will shine some light on the mystery. But it probably will be a long time before investors know what really happened.
The mystery is not yet solved. Life, it seems, does not work out like a US television crime drama. I await the next installment of “The Write-down Mystery.”
Stephen E Arnold, August 19, 2014
August 18, 2014
As the conflict between Amazon and Hachette continues, much of the media is quick to paint Amazon as a bully, the poor publishers as victims, and authors as collateral damage. Some of us hesitate to accept this interpretation. Indeed, if a letter cited in an article we found at TechDirt is accurate, Hachette is at least as culpable. The article is long-windedly but accurately titled, “Amazon Offers Authors 100% of Ebook Sales to Get Them to Recognize Its Fight with Hachette Isn’t About Screwing Authors.”
According to Amazon’s letter to Hachette authors, their publisher is not only outdated but has also been uncooperative and downright rude in response (or lack thereof) to Amazon’s efforts at working out a deal. This recalcitrance is the reason the online seller is going directly to authors, offering them a deal wherein they will make more while customers pay less. How? Why, cut out the obsolete middleman, of course. TechDirt’s Mike Masnick writes:
“The whole situation is quite bizarre when you think about it. At the same time you have Hachette and the Authors Guild insisting that they’re trying to ‘protect the book’ by keeping book prices artificially high, they’re loudly complaining that Amazon won’t discount their books. Notice some hypocrisy here? If you want to understand why this is happening, the best explanation I’ve seen so far comes from Hugh Howey, one of the super successful self-published authors who is firmly in Amazon’s camp on this fight. Writing in the Guardian, he notes the perverse incentives of the traditional publishing world on Amazon:”
Here, Masnick quotes the relevant part of Howey’s Guardian article. I second the endorsement of that clear and concise explanation for anyone still unsure of the details behind this bizarre situation. (Check it out if you wish. I’ll wait….)
The Masnick article goes on:
“In other words, everyone really knows that ebooks should be priced lower, but the old publishing world wants to be able to set much higher prices, forcing Amazon to basically make no money at all on pricing the books lower. Given this scenario, it actually makes sense for Amazon to then make this offer to authors directly: it will hand over 100% of ebook revenue, because under Hachette’s proposal, Amazon would make no money at all (or even lose money) on ebook sales anyway.”
It is important to note that others have used a variant of this strategy to good effect, like ProQuest, Ebsco, and Lexis. Publishers must face facts: just as libraries don’t need librarians like they once did, authors don’t need publishers as they once did. We sympathize with businesses that find themselves growing obsolete with the steady march of time. However, to insist that others accommodate an antiquated business model is unrealistic. (Subsidies for buggy-makers, anyone?) Hachette and other traditional publishers would do well to put that energy toward adapting to inevitable changes. Or, barring that, toward laying off their workers as gently as possible.
Cynthia Murrell, August 18, 2014
August 17, 2014
I read “New Algorithm Gives Credit Where Credit Is Due.” The write up sparked a number of thoughts. Let me highlight a couple of passages that made it into my research file.
The focus of the paper, in my opinion, is documents intended for peer-reviewed publications and conferences. The write up did not include a sample of the type of “authorship” labeling that takes place. I dug through my files and located a representative example:
This is a paper about stuffing electronics on a contact lens. Microsoft was in this game. Google hired Babak Parviz (aka Babak Amir Parviz, Babak Amirparviz, and Babak Parvis). The paper has four authors:
- H. Yao
- A. Afanasiev
- I. Lahdesmaki
- B. A. Parviz
The idea is that the numerical recipe devised at the Center for Complex Network Research will figure out who did most of the work. I think this is a good idea because my research suggests that the guys doing the heavy lifting in the lab, with Excel, and writing were Yao, Afanasiev, and Lahdesmaki. The guru for the work was Parviz. I could be wrong, so an algorithm to help me out is of interest.
One of the points I highlighted in the write up was:
Using the algorithm, which Shen [math whiz] developed, the team revealed a new credit allocation system based on how often the paper is co-cited with the other papers published by the paper’s co-authors, capturing the authors’ additional contributions to the field.
Okay, my take is that this is a variation of Eugene Garfield’s citation analysis work. That is useful, but it does not dig very deeply into the context for the paper, the patent applications afoot, or the controls placed on the writers by their employers or their consciences. In short, I need some concrete examples or, better yet, access to the software so I can run some tests. Yep, just the sort of tests that mid tier consulting firms (what I call azure chip consultants) do not run. For reference, see the Netscout legal document or my saucisson write up.
The second point is that the sample strikes me as small. I know the rule of thumb that one well regarded researcher used was 50 in the sample, but there are hundreds of thousands of technical papers. Many are available as open source from services like PLOS One. Here’s the point I noted:
the team looked at 63 prize-winning papers using the algorithm. In another finding, the algorithm showed physicist Tom Kibble, who in 1964 wrote a research paper on the Higgs boson theory, should receive the same amount of credit as Nobel prize winners Peter Higgs and François Englert.
I think the work is interesting, but it is in my opinion not ready for prime time.
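The co-citation idea in the quoted passage can be reduced to a toy calculation. The sketch below is my drastic simplification for illustration, not the published Shen method; the author names are from the contact lens paper above, but the co-citation counts and every function name are invented:

```python
from collections import defaultdict

def allocate_credit(target_authors, cocited_papers):
    """Toy credit allocation loosely patterned on the co-citation idea.

    cocited_papers: list of (author_list, cocitation_count) pairs --
    papers co-cited with the target paper, and how often. An author
    accrues credit when papers bearing his or her name keep showing up
    alongside the target paper; shares are normalized to sum to 1.
    """
    raw = defaultdict(float)
    for authors, count in cocited_papers:
        for a in target_authors:
            if a in authors:
                raw[a] += count
    total = sum(raw.values()) or 1.0
    return {a: raw.get(a, 0.0) / total for a in target_authors}

# Invented co-citation data; Parviz ends up with the lion's share
# in this toy input because his other papers co-occur most often.
shares = allocate_credit(
    ["Yao", "Afanasiev", "Lahdesmaki", "Parviz"],
    [(["Parviz"], 8), (["Yao", "Parviz"], 4), (["Lahdesmaki"], 2)],
)
```

Note what the toy makes obvious: the method rewards the author whose other work circulates with the paper, not the author who ran the experiments, which is exactly the lab-versus-guru tension noted above.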
I know that one content processing firm almost totally dependent on the US Army for funding has been working to identify misinformation, disinformation, and reformation. So far, the effort has yielded no commercial product. Other companies purport to have the ability to “understand” content. Presumably this includes the entities identified in the content object. Progress has stalled. Smart software is easier to write about in a marketing slide deck or a proposal than actually deliver.
That’s why authorship remains something a human has to chase down. Let me give you an example. I provided research to IDC, a mid tier consulting firm, in 2012. From August 2012 to July 17, 2014, IDC marketed reports that carried my name, two of my research assistants’ names, and an IDC “expert’s” name. Dave Schubmehl, the IDC “expert” in search, is listed as the “author.”
Now is he?
I am confident that in his mind and in IDC’s corporate wisdom he is the man. The person who justifies surfing on another’s name illustrates a core problem in authorship. You can see examples of Dave Schubmehl’s name surfing at this link. The sale of one of these documents on Amazon was an interesting attempt to gain traction for Dave Schubmehl in the high traffic eBook store. See “Amazon May Be Disintermediating Publishers: Maybe Good News for Authors.” I include a screen shot of the Amazon “hit.” My legal eagle successfully got the document removed from Amazon. I am not an Amazon author and don’t want to be.
Hopefully the algorithm to identify the “real” author of a series of $3,500 reports will become a commercial reality. I am interested to learn if there are any other mid tier consulting firms that have used others’ content without getting appropriate permissions. How many “experts” follow the IDC path of expediency?
For now, name surfers have to be tracked one by one. Schubmehl and Arnold are now linked. Arnold is the surfboard; Schubmehl is the surfer. Catch a wave is the motto of many surfers.
Stephen E Arnold, August 17, 2014
August 14, 2014
I learned that some folks were not able to locate the Netscout Gartner document referenced in this Diginomica article. You may want to try and get the 27 megabyte court filing at http://slidesha.re/1pPsY21.
This is definitely worth some face time. Parts evoked in me a “stop repeating yourself” reaction, but other bits were juicy indeed if true. Plus, there are some allegedly accurate factoids in the document and an illustration purporting to show the Gartner products and services. Keep in mind that this document presents only Netscout’s point of view. I find the information in the document compelling and thought provoking. For me, Netscout’s array of data seems close to reality.
If I come across the Gartner response, I will try to remember to post an item in Beyond Search. But as a former nuclear consultant who was lured into a top tier consulting company, who knows? I have my attention riveted by an IDC swizzle which allowed my content to be sold on Amazon without my permission and with another person’s name on it. Clever stuff these “experts” find to do.
I highly recommend the slide on page 27 of the Netscout legal document. I would like to include it in this short write up, but I don’t have a dog in this Netscout Gartner squabble.
Stephen E Arnold, August 14, 2014
August 14, 2014
I suggest you read “Venture Outcomes Are Even More Skewed Than You Think.” The write up contains several factoids. I highlighted one and added a couple of exclamation points. I suggest you print out the article, grab a writing instrument, and do your own filtering.
The main point of the write up is buried in the paragraph that begins “This really underscores the challenge of creating a venture portfolio that produces reasonable returns.” The factoid I honored with exclamation points is:
In my hypothetical $100M fund with 20 investments, the total number of financings producing a return above 5x was 0.8 – producing almost $100M of proceeds. My theoretical fund actually didn’t find their purple unicorn, they found 4/5ths of that company. If they had missed it, they would have failed to return capital after fees. Even if we doubled the number of portfolio companies in the hypothetical portfolio, a full quarter of the fund’s return comes from the roughly ½ of a company they invested in that generated 10x or above. Had they missed it, they would have produced a return that roughly approximated investing in bonds – not the kind of risk adjusted return they or their investors were looking for.
I know this is a hypothetical. Assume that the analysis is off by plus or minus 10 percent. What do we get? Lousy returns; that is, returns comparable to dumping cash into bonds. I think about the banking and venture firm meetings in which I have participated. I cannot recall any of the smiling MBAs considering that their best ideas could perform on a par with bonds. My hunch is that the people who pushed money into venture funds and bank VP-inspired investments are not thinking bond-type yield.
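The miss-the-unicorn arithmetic can be simulated. The sketch below is a hypothetical: the multiplier buckets and probabilities are invented to mimic the skewed shape Levine describes, not taken from his data, and the function name is mine:

```python
import random

def simulate_fund(n_investments=20, fund_size=100.0, seed=7):
    """Monte Carlo sketch of a skewed venture portfolio.

    Each investment gets an equal check; most return little or
    nothing, and the fund's fate hinges on a rare outlier. The
    probabilities below are illustrative guesses, not data.
    Returns total proceeds in the same units as fund_size.
    """
    rng = random.Random(seed)
    check = fund_size / n_investments
    proceeds = 0.0
    for _ in range(n_investments):
        r = rng.random()
        if r < 0.50:
            multiple = 0.0    # total loss
        elif r < 0.80:
            multiple = 1.0    # money back
        elif r < 0.96:
            multiple = 3.0    # modest win
        elif r < 0.996:
            multiple = 10.0   # big win
        else:
            multiple = 50.0   # the purple unicorn
        proceeds += check * multiple
    return proceeds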
If the number is accurate, I wonder if those folks who have pumped tens of millions of dollars into outfits promising a money ball from search and content processing will get their money back. Forget an upside. Break even may be tough. Search and content processing makes headlines like this one every day:
To get similar results, navigate to Google News and enter the query Autonomy HP or Autonomy CFO.
The second item I circled with my pink marker was a diagram:
The important part is the small number of “winners” graphically embodied in the minuscule 0.4% column. This is a broad swath of investments. For search and content processing, the payoffs have to be measured by the money that flows from revenues or from a sell off like Fast Search to Microsoft, Exalead to Dassault, or Autonomy to HP. The number of folks who made big bucks and are really happy may be modest. In fact, judging from the legal hassles with regard to Fast Search and the recent HP Autonomy headlines, even those who were MBA winners may have headaches. Information retrieval seems to deliver a number of them to stakeholders.
The third item is the factoid that makes clear the failure rate of start ups. Search and content processing poses similar challenges. There is a twist. Once a search and content processing company sells to a larger firm, how many have become major money pumps for the acquiring companies? The question is very difficult to answer. The absence of information tells me that there are not too many feel good stories to tell. The pleas on LinkedIn enterprise search discussion threads for positive case studies about search are easy to ignore. Good news with regard to search and content processing is not sloshing around the Big Data bucket in which we exist.
How long will companies that have been in business for many years promising a money ball from search be able to survive? How long will the old soft shoe about search and content processing open checkbooks? How many years will it take some information retrieval companies to replace red ink with the black ink of hefty after tax profits? How long will it take those seeking answers to information retrieval problems to wake up to the fact that consultant saucisson, Star Trek fantasies, and marketing hyperbole are unlikely to deliver a Disneyland-like “win”?
The data set for the Seth Levine write up is large enough to warrant a tentative answer, “Probably never.” Search and content processing are different. The algorithms and methods are decades old. Talk does not change what can be accomplished with affordable computational resources. Pumping money into search, therefore, may be painful when the actual financial data are reviewed by investors and stakeholders.
Why aren’t there abundant “good news” cases for search and content processing? There just aren’t that many. Think of a power curve of implementation successes. There are more examples of search going off the rails than home runs. This is surprising when so many profess to be experts in search and so much money has been injected into information retrieval start ups. The business strategy of search and content processing companies may be raising money. Any other work may be of little interest.
Stephen E Arnold, August 14, 2014
August 12, 2014
I find the number of search and content processing start ups surprising. Both fields are difficult to make work in terms of technology and money. The squabble between Hewlett Packard and Autonomy makes it clear that established vendors and big companies can be a potent concoction too.
I read a write up from the stats cats at FiveThirtyEight. The article is “Corporate America Hasn’t Been Disrupted.” No kidding. After “real” number crunching FiveThirtyEight wrote:
the advantages of incumbency in corporate America have never been greater. “The business sector of the United States,” economists Ian Hathaway and Robert Litan [an expert, of course] wrote in a recent Brookings Institution paper, “appears to be getting ‘old and fat.’”
According to FiveThirtyEight:
recent research suggests that established businesses have less and less to fear from would-be disruptors.
The most interesting item in the article is this statement:
the advantage enjoyed by incumbents, always substantial, has been growing in recent years.
FiveThirtyEight crunched US Census data and generated this diagram:
As a non mathematician, I interpreted the chart to say, “Failure up. Start ups down.”
What about search? My observations are:
- Venture firms pumping millions into search and content processing are likely to lose their money unless a sell out or sell off is possible
- Buying a successful search or content processing business may provide an expensive management challenge; for example, the criminal investigation of Fast Search & Transfer
- Getting objective information about a search and content processing company may be tough due to the saucisson issue or the difficulty of explaining certain technical concepts.
My hunch is that making lots of money from search and content processing as a venture backed start up is similar to buying a Yugo and expecting to win a dirt track race. That type of race can be quite exciting and risky.
Stephen E Arnold, August 12, 2014
August 10, 2014
I read “5 Google Projects That Will Pave the Future.” The title confused me. I think the author wanted me to think that Google was paving the way to the future. What I interpreted the title to mean is that Google wants to cover the future with Google’s own digital macadam.
The point of the write up is that Google is doing some big, speculative projects. Bell Labs used to do this, but without the fanfare. But there is a public relations and marketing battle underway among the giant companies that seek to monopolize markets if not the “future.”
The write up mentions Project Loon (the big balloons that will deliver Internet access to folks without the benefit of non balloon methods), Calico (this is the live forever stuff that recently experienced the departure of a nanotech self assembler due to some differing opinions), robots (mobile, smart gizmos that entrance the folks at DARPA), self driving cars (more time to surf the Web and consume ads in a vehicle), and DeepMind (more of the artificial intelligence hoo hah).
Good stuff for those who consumed science fiction, Star Trek, and Star Wars. The only problem is that those billions have to come from someplace. That’s a point overlooked in the Loon plus four article.
That’s why you will want to read “Dear Google, I Am Writing an Open Letter from the Search Wilderness.” The main point of this write up is that Google is investing considerable time and effort to generate revenue from its traffic. I suppose this is obvious to most Mad Ave types, but it appears to have come as a surprise to the author of this letter.
The passage I highlighted was:
It is now a directory of large public or soon to be public companies, who dominate every inch of our screens. I am sure we have all walked down many high streets with all the same chain stores and brands. This is Google Search today across many of the world’s markets. Gone is the opportunity to explore and unearth gems and engage with individuals on the world’s largest stage where a digital high street could have a thousand specialist shops with ease. There are sophisticated ways and means to search and uncover the unusual, the new and the people who care and services that actually work. But directionally, “Search” heads to the money instantly!
Note the phrase “heads to the money instantly.” Here at Beyond Search, I am indifferent to traffic, PageRank, speed with which Google indexes the content, and anything other than the topics that catch my attention. The reason is that I am retired and this blog is a way to fill time between walking my dogs and napping.
For the author of the letter, Google’s focus on money is, it appears, destroying his business. Well, that’s what happens when one builds on a free service. Personally I think Google can destroy as many businesses as necessary to generate money for:
- Projects like Loon
- Flying around to cut deals for Google Glass
- Replacing people like Babak Amirparviz (aka Parviz, Parvis, and Amir Parviz)
- Paying for Google health care so some Googlers can spend three months in Stanford’s medical facilities
- Paying for jets
- Using Steve Ballmer’s running into the wall method to crack into money making television
- Buying companies to amplify usage behavior capabilities.
These initiatives cost money.
I find the complaining in this open letter like King Lear’s howling in the storm:
Lets face it though, with so few slots its a money page now, not a joy to visit any longer!
Wow. Harsh. Google results are not objective and fun.
Here’s an even more subversive view of Google’s search system which cost billions to develop:
So quite interestingly the guest who has relied on Google to sort his problems and assist in his own search has been guided by Google’s very own algorithm to a hotel or holiday home that is not necessarily the best for him, at the best price or with the best amenities who often stands no chance of communicating with the accommodation provider until he has booked! Pay up and hope for the best as the business has no product knowledge, location familiarity, in depth business knowledge, controls or quality control in place!
And here is a thought that I have never entertained:
The consumer may just look elsewhere and try other search engines, as all he may see are the high street brands, the ones he was overjoyed to have dodged when the web was in its infancy and when Google Search revealed a whole myriad of exciting new places, people and products!
The point is that Google is essentially operating as a country. The country’s productivity has to go up. In order to pump up the revenue, the altruism and baloney like “do no evil” or “make all the world’s information accessible” are shibboleths for monetization.
What I find interesting is that Google’s business model is not a Google invention. The idea for pay to play came from GoTo.com (Overture.com). Yahoo owned this company. Google was inspired by Overture’s revenue methods and Yahoo settled for some money in a mild dispute about the use of this monetization method.
You can believe in Loon. I believe in what Google does after 15 years: Sell ads. Last time I checked, the folks with the money can buy lots of ads. Folks who cannot afford to advertise need to find their future elsewhere.
If some Web sites get zero traffic, well, get on that social media tsunami. Google has a mission to deliver revenues and profits every 90 days. That mission does not necessarily coincide with that of others. If you are unfamiliar with this Google process, find an MBA and ask.
Stephen E Arnold, August 10, 2014
August 7, 2014
I read what I thought was a remarkable public relations story. You will want to check the write up out for two reasons. First, it demonstrates how content marketing converts an assertion into what a company believes will generate business. And, second, it exemplifies how a fix can address complex issues in information access. You may, like Archimedes, exclaim, “I have found it.”
The title and subtitle of the “news” are:
NewLane’s Eureka! Search Discovery Platform Provides Self-Servicing Configurable User Interface with No Software Development. Eureka! Delivers Outstanding Results in the Cloud, Hybrid Environments, and On Premises Applications.
My reaction was, “What?”
The guts of the NewLane “search discovery platform” is explained this way:
Eureka! was developed from the ground up as a platform to capture all the commonalities of what a search app is and allows for the easy customization of what a company’s search app specifically needs.
I am confused. I navigated to the company’s Web site and learned:
Eureka! empowers key users to configure and automatically generate business applications for fast answers to new question that they face every day. http://bit.ly/V0E8pI
The Web site explains:
Need a solution that provides a unified view of available information housed in multiple locations and formats? Finding it hard to sort among documents, intranet and wiki pages, and available reporting data? Create a tailored view of available information that can be grouped by source, information type or other factors. Now in a unified, organized view you can search for a project name and see results for related documents from multiple libraries, wiki pages from collaboration sites, and the profiles of project team members from your company’s people directory or social platform.
“Unified information access” is a buzzword used by Attivio and PolySpot, among other search vendors. The Eureka! approach seems to be an interface tool for “key users.”
Here’s the Eureka technology block diagram:
Notice that Eureka! has connectors to access the indexes in Solr, the Google Search Appliance, Google Site Search, and a relational database. The content that these indexing and search systems can access include Documentum, Microsoft SharePoint, OpenText LiveLink, IBM FileNet, files shares, databases (presumably NoSQL and XML data management systems as well), and content in “the cloud.”
For me the diagram makes clear that NewLane’s Eureka! is an interface tool. A “key user” can create an interface to access content of interest to him or her. I think there are quite a few people who do not care where data come from or what academic nit picking went into presenting information. The focus is on handing a harried professional, like an MBA who has to make a decision “now,” the information he needs.
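The connector layer the block diagram implies can be illustrated with a minimal federation sketch. NewLane has not published its API, so every class and method name below is invented; real connectors would call the Solr, Google Search Appliance, or database query interfaces instead of an in-memory stub:

```python
from abc import ABC, abstractmethod

class Connector(ABC):
    """One back end: Solr, GSA, a relational database, and so on."""
    @abstractmethod
    def search(self, query: str) -> list:
        ...

class StubConnector(Connector):
    """In-memory stand-in for a real index, for illustration only."""
    def __init__(self, results_by_query):
        self._results = results_by_query
    def search(self, query):
        return self._results.get(query, [])

def unified_search(query, connectors):
    """Merge hits from every back end into one 'unified view',
    tagging each hit with the source it came from."""
    merged = []
    for name, conn in connectors.items():
        for hit in conn.search(query):
            merged.append({**hit, "source": name})
    return merged
```

Note what the sketch does not do: it says nothing about how fresh each underlying index is or how the merged hits should be ranked. The interface layer simply relays whatever the back ends return, which is the worry about “key users” not caring where the data come from.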
Archimedes allegedly jumped from his bath, ran into the street, and shouted “Eureka.” He realized, I learned from a lousy math teacher, that he had hit upon a mathematical insight about displacement. The teacher did not tell me that Archimedes was killed because he was working on a math problem and ignored a Roman soldier’s command to quit calculating. Image source: http://blocs.xtec.cat/sucdecocu/category/va-de-cientifics/
I find interfaces a bit like my wife’s questions about the color of paint to use for walls. She shows me antique ivory and then parchment. For me, both are white. But for her, the distinctions are really important. She knows nothing about paint chemistry, paint cost, and application time. She is into the superficial impact the color has for her. To me, the colors are indistinguishable. I want to know about durability, how many preparation steps the painter must go through between brands, and the cost of getting the room painted off white.
Interfaces for “key users” work like this in my experience. The integrity of the underlying data, the freshness of the indexes, the numerical recipes used to prioritize the information in a report are niggling details of zero interest to many system users. An answer—any answer—may be good enough.
Eureka! makes it easier to create interfaces. My view is that a layer on top of connectors, on top of indexing and content processing systems, on top of wildly diverse content is interesting. However, I see the interfaces as a type of paint. The walls look good but the underlying structure may be deeply flawed. The interface my wife uses for her walls does not address the fact that the wallboard has to be replaced BEFORE she paints again. When I explain this to her when she wants to repaint the garage walls, she says, “Why can’t we just paint it again?” I don’t know about you, but I usually roll over, particularly if it is a rental property.
Now what does the content marketing-like “news” story tell me about Eureka!?
I found this statement yellow highlight worthy:
Seth Earley, CEO of Earley and Associates, describes the current global search environment this way, “What many executives don’t realize is that search tools and technologies have advanced but need to be adapted to the specific information needed by the enterprise and by different types of employees accomplishing their tasks. The key is context. Doing this across the enterprise quickly and efficiently is the Holy Grail. Developing new classes of cloud-based search applications are an essential component for achieving outstanding results.”
Yep, context is important. My hunch is that the context of the underlying information is more important. Mr. Earley, who sponsored an IDC study by an “expert” named Dave Schubmehl on what I call information saucisson, is an expert on the quasi academic “knowledge quotient” jargon. He, in this quote, seems to be talking about a person in shipping or a business development professional being able to use Eureka! to get the interface that puts needed information front and center. I think that shipping departments use dedicated systems whose data typically do not find their way into enterprise information access systems. I also think that business development people use Google, whatever is close at hand, and enterprise tools if there is time. When time is short, concise reports can be helpful. But what if the data on which the reports are based are incorrect, stale, incomplete, or just wrong? Well, that is not a question germane to a person focused on the “Holy Grail.”
I also noted this statement from Paul Carney, president and founder of NewLane:
The full functionality of Eureka! enables understaffed and overworked IT departments to address the immediate search requirements as their companies navigate the choppy waters of lessening their dependence on enterprise and proprietary software installations while moving critical business applications to the Cloud. Our ability to work within all their existing systems and transparently find content that is being migrated to the Cloud is saving time, reducing costs and delivering immediate business value.
The point is similar to what Google has used to sell licenses for its Google Search Appliance. Traditional information technology departments can be disintermediated.
If you want to know more about FastLane, navigate to www.fastlane.com. Keep a bathrobe handy if you review the Web site relaxing in a pool or hot tub. Like Archimedes, you may have an insight and jump from the water and run through the streets to tell others about it.
Stephen E Arnold, August 7, 2014
August 5, 2014
A few years ago, I was in China. I marveled at the multi-SIM phones. I fiddled with a half dozen models and bought an unlocked GSM phone running Android 2.3. The clerk in the store told me that there would be Android phones without Google. At the time, I was thinking about the fragmentation of Android. In hindsight, I think the clerk in Xian knew a heck of a lot more about the future of Android without Google than I understood. The Chinese manufacturers liked Android but not the Google ball and chain “official Android” required of licensees. Android without Google seems to be no small thing.
I read “Google Under Threat as Forked Android Devices Rise to 20% of Smartphone Shipments.” The article points out that Android has a market share of 85 percent, but market share is one thing. Revenue is another. With Web search from traditional computers losing its pride of place, mobile search is a bigger and bigger deal. Unfortunately the money generated by mobile clicks is not the gusher that 2004 style search was. To compensate, Google has been monetizing its silicon heart out. You can read one person’s view of Google search in “Dear Google, I Am Writing an Open Letter from the Search Wilderness.”
I am sure Google will dismiss TheNextWeb’s story. I am not so sure it should. As TheNextWeb observes, “The company faces a growing issue: The rise of non Google Android.” The real test will be the steps Google takes to pump up the top line and control costs at a time when complaints about Google search are becoming more interesting and compelling.
Stephen E Arnold, August 5, 2014
August 1, 2014
I must be starved for intellectual Florida gar. Nibble on this fish’s lateral line and you can end up nauseated, or worse. Knowledge quotient as a concept applied to search and retrieval is like a largish Florida gar. Maybe a Florida gar left too long in the sun.
Lookin’ yummy. Looks can be deceiving in fish and fishing for information. A happy quack to https://www.flmnh.ufl.edu/fish/Gallery/Descript/FloridaGar/FloridaGar.html
I ran a query on one of the search systems that I profile in my lectures for the police and intelligence community. With a bit of clicking, I unearthed some interesting uses of the phrase “knowledge quotient.”
What surprised me is that the phrase is a favorite of some educators. The use of the term as a synonym for plain old search seems to be one of those marketing moments of magic. A group of “experts” with degrees in home economics, early childhood education, or political science sit around and try to figure out how to sell a technology that is decades old. Sure, the search vendors make “improvements” with ever-increasing speed. As costs rise and sales fail to keep pace, the search “experts” gobble a cinnamon latte and innovate.
In Dubai earlier this year, I saw a reference to a company engaged in human resource development. I think this means “body shop,” “lower cost labor,” or “mercenary registry,” but I could be off base. The company is called Knowledge Quotient FZ LLC. If one tries to search for the company, the task becomes onerous. Google is giving some love to the recent IDC study by an “expert” named Dave Schubmehl. As you may know, this is the “professional” who used my information and then sold it on Amazon until July 2014 without paying me for my semi-valuable name. For more on this remarkable approach to professional publishing, see http://wp.me/pf6p2-auy.
Also in Dubai is a tutoring outfit called Knowledge Quotient, which delivers home tutoring to the children of parents with disposable income. The company explains that it operates a place where learning makes sense.
Companies in India seem to be taken with the phrase “knowledge quotient.” Consider Chessy Knowledge Quotient Private Limited. In West Bengal, one can find one’s way to Mukherjee Road and engage the founders with regard to an “effective business solution.” See http://chessygroup.co.in. Please, do not confuse Chessy with KnowledgeQ, the company operating as Knowledge Quotient Education Services India Pvt Ltd. in Bangalore. See http://www.knowledgeq.org.
What’s the relationship between these companies operating as “knowledge quotient” vendors and search? For me, the appropriation of names and applying them to enterprise search contributes to the low esteem in which many search vendors are held.
Why is Autonomy IDOL such a problem for Hewlett Packard? This is a company that bought a mobile operating system and stepped away from it. This is a company that brought out a tablet and abandoned it in a few months. This is a company that wrote off billions and then blamed the seller for not explaining how the business worked. In short, Autonomy, which offers a suite of technology that performs as well as or better than any other search system, has become a bit of Florida gar in my view. Autonomy is not a fish. Autonomy is a search and content processing system. When properly configured and resourced, it works as well as any other late 1990s search system. I don’t need meaningless descriptions like “knowledge quotient” to understand that the “problem” with IDOL is little more than HP’s expectations exceeding what a decades old technology can deliver.
Why is Fast Search & Transfer an embarrassment to many who work in the search sector? Perhaps the reason has to do with the financial dealings of the company. In addition to fines and jail terms, the Fast Search system drifted from its roots in Web search into publishing, smart software, and automated functions. The problem was that when customers did not pay, the company did not suck it up, fix the software, and renew its efforts to deliver effective search. Nah, Fast Search became associated with a quick sale to Microsoft, subsequent investigations by Norwegian law enforcement, and the culminating decision to ban one executive from working in search. Yep, that is a story that few want to analyze. Search marketers promised, and the technology did not deliver, could not deliver given Fast Search’s circumstances.
What about Excalibur/Convera? This company managed to sell advanced search and retrieval to Intel and the NBA. In a short time, both of these companies stepped away from Convera. The company then focused on a confection called “vertical search” based on indexing the Internet for customers who wanted narrow applications. Not even the financial stroking of Allen & Co. could save Convera. In an interesting twist, Fast Search purchased some of Convera’s assets in an effort to capture more US government business. Who digs into the story of Excalibur/Convera? Answer: No one.
What passes for analysis in enterprise search, information retrieval, and content processing is the substitution of baloney for fact-centric analysis. What is the reason that so many search vendors need multiple injections of capital to stay in business? My hunch is that companies like Antidot, Attivio, BA Insight, Coveo, Sinequa, and Palantir, among others, are in the business of raising money, spending it in an increasingly intense effort to generate sustainable revenue, and then going once again to capital markets for more money. When the funding sources dry up or just cut off the company, what happens to these firms? They fail. A few are rescued like Autonomy, Exalead, and Vivisimo. Others just vaporize as Delphes, Entopia, and Siderean did.
When I read a report from a mid tier consulting firm, I often react as if I had swallowed a chunk of Florida gar. An example in my search file is basic information about “The Knowledge Quotient: Unlocking the Hidden Value of Information.” You can buy this outstanding example of ahistorical analysis from IDC.com, the employer of Dave Schubmehl. (Yep, the same professional who used my research without bothering to issue me a contract or get permission from me to fish with my identity. My attorney, if I understand his mumbo jumbo, says this action was not identity theft, but Schubmehl’s actions between May 2012 and July 2014 strike me as untoward.)
Net net: I wonder if any of the companies using the phrase “knowledge quotient” are aware of the brand encroachment. Probably not. That may be due to the low profile search enjoys in some geographic regions where business appears to be healthier than in the US.
Can search marketing be compared to Florida gar? I want to think more about this.
Stephen E Arnold, August 1, 2014