Color Changing Ice Cream: The Metaphor for Search Marketing

July 29, 2014

I read “Scientist Invents Ice Cream That Changes Colour As You Lick It.” The write up struck me as a nearly perfect metaphor for enterprise search and retrieval. First, let’s spoon the good stuff from the innovation tub:

Science might be busy working on interstellar travel and curing disease but that doesn’t mean it can’t give some time to ice cream. Specifically making it better visually.

The idea is that ice cream has an unsatisfactory user interface. (Please, do not tell that to the neighbor’s six year old.)

Spanish physicist and electronic engineer Manuel Linares has done exactly that. He’s managed to invent an ice cream that changes colour as you lick it. The secret formula is made entirely from natural ingredients…Before being served, the ice cream is a baby blue colour. The vendor serves and adds a spray of “love elixir”…Then as you lick the ice cream it will change into other colours.

My Eureka! moment took place almost instantly. As enterprise search vendors whip up ever more fantastic capabilities for key word matching and synonym expansion, basic search gets sprayed with “love elixir.” As the organization interacts with the search box, the search results are superficially changed.

The same logic that improves the user experience with ice cream has been the standard method of information retrieval vendors for decades.

But it is still ice cream, right?

Isn’t search still search with the same characteristics persistent for the last four or five decades?

Innovation defines modern life and search marketing.

Stephen E Arnold, July 29, 2014

HP Autonomy Opens IDOL APIs to App Developers

July 29, 2014

App developers can now work with HP Autonomy’s Intelligent Data Operating Layer engine through the company’s new API program. We learned about the initiative from eWeek’s “HP Autonomy’s IDOL OnDemand APIs Nurture Apps Ecosystem.” The piece by Darryl K. Taft presents a slide show with examples of those APIs being put to use. He writes:

“IDOL OnDemand delivers Web service APIs that allow developers to tap into the explosive growth of unstructured information to build a new generation of apps…. IDOL OnDemand APIs include a growing portfolio of APIs within the format conversion, image analysis, indexing, search, and text analysis categories. Through an early access program, hackathons and several TopCoder challenges, some great apps have emerged. During the weekend of June 7-8, developers participated in an IDOL OnDemand Hackathon in San Francisco, where participants built apps using IDOL OnDemand Web service APIs. This slide show covers several of the early apps to emerge from these events. Enterprise developers are also adopting the IDOL OnDemand platform, with big names such as PwC and HP taking advantage of the developer-friendly technology to accelerate their development projects using the API’s.”

See the slide show for a look at 12 of these weekend projects. Developers should then check out the IDOL OnDemand site for more information. Founded in 1996, Autonomy grew from research originally performed at Cambridge University. Their solutions help prominent organizations around the world manage large amounts of data. Tech giant HP famously purchased the company in 2011.
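As an illustration of how a developer might call one of these Web service APIs, here is a minimal Python sketch. The endpoint path, API name, and `apikey` parameter are assumptions based on common REST conventions, not verified details of HP’s service:

```python
import json
import urllib.parse
import urllib.request

# Hypothetical base endpoint for synchronous calls; the real paths are
# documented on the IDOL OnDemand developer site.
BASE_URL = "https://api.idolondemand.com/1/api/sync"

def build_request(api, params, api_key):
    """Assemble the full URL for a synchronous API call."""
    query = dict(params)
    query["apikey"] = api_key  # assumed name of the key parameter
    return "%s/%s/v1?%s" % (BASE_URL, api, urllib.parse.urlencode(query))

def analyze_sentiment(text, api_key):
    """Call the (assumed) sentiment analysis API and parse the JSON reply."""
    url = build_request("analyzesentiment", {"text": text}, api_key)
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# The URL such a call would produce:
print(build_request("analyzesentiment", {"text": "great app"}, "MY_KEY"))
```

A real call requires a developer key from the IDOL OnDemand site; the sketch only shows the request shape.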

Cynthia Murrell, July 29, 2014

Sponsored by ArnoldIT.com, developer of Augmentext

Big Data Boom Pushes Schools to Create Big Data Programs

July 29, 2014

Can education catch up to progress? Perhaps, especially when corporations take an interest. Fortune discusses “Educating the ‘Big Data’ Generation.” As companies try to move from simply collecting vast amounts of data to putting that information to use, they find a serious dearth of qualified workers in the field. In fact, Gartner predicted in 2012 that 4.4 million big-data IT jobs would be created globally by 2015 (1.9 million in the U.S.). Schools are now working to catch up with this demand, largely as the result of prodding from the big tech companies.

The field of big data collection and analysis presents a previously rare requirement—workers who understand both technology and business. Reporter Katherine Noyes cites MIT’s Erik Brynjolfsson, who will be teaching a course on big data this summer:

“‘We have more data than ever,’ Brynjolfsson said, ‘but understanding how to apply it to solve business problems needs creativity and also a special kind of person.’ Neither the ‘pure geeks’ nor the ‘pure suits’ have what it takes, he said. ‘We need people with a little bit of each.’”

Over at Arizona State, which boasts year-old master’s and bachelor’s programs in data analytics, Information Systems chair Michael Goul agrees:

“‘We came to the conclusion that students needed to understand the business angle,’ Goul said. ‘Describing the value of what you’ve discovered is just as key as discovering it.’”

In order to begin meeting this new need for business-minded geeks (or tech-minded business people), companies are helping schools develop programs to churn out that heretofore suspect hybrid. For example, Noyes writes:

“MIT’s big-data education programs have involved numerous partners in the technology industry, including IBM […], which began its involvement in big data education about four years ago. IBM revealed to Fortune that it plans to expand its academic partnership program by launching new academic programs and new curricula with more than twenty business schools and universities, to begin in the fall….

“Business analytics is now a nearly $16 billion business for the company, IBM says—which might be why it is interested in cultivating partnerships with more than 1,000 institutions of higher education to drive curricula focused on data-intensive careers.”

Whatever forms these programs, and these jobs, ultimately take, one thing is clear: for those willing and able to gain the skills, the field of big data is wide open. Anyone with a strong love of (and aptitude for) working with data should consider entering the field now, while competition for qualified workers is so very high.

Cynthia Murrell, July 29, 2014

Sponsored by ArnoldIT.com, developer of Augmentext

Commvault to Help Sponsor SharePoint Fest Denver

July 29, 2014

For anyone in need of a little SharePoint training for the fall, SharePoint Fest Denver will be held September 22-24. Mark your calendar. Commvault is a platinum sponsor this year, and the press release, “Commvault Confirmed as Platinum Sponsor of SharePoint Fest – Denver 2014,” tells more.

The article begins:

“Commvault is a Platinum Sponsor of SharePoint Fest Denver, and joins other sponsors in bringing this conference to the Colorado Convention Center on September 22-24, 2014. Conference delegates will hear from keynote speakers and attend breakout sessions. Over 70 sessions will be offered across multiple tracks, as well as an optional day of workshops preceding the conference.”

Stephen E. Arnold is a longtime leader in search and follows all things SharePoint on his Web site, ArnoldIT.com. His SharePoint feed is a good place to check in on the latest trainings and professional development opportunities. He also follows the latest tips, tricks, and workarounds, which are helpful for SharePoint implementations of all shapes and sizes.

Emily Rae Aldridge, July 29, 2014

Google Searches, Prediction, and Fabulous Stock Market Returns?

July 28, 2014

I read “Google Searches Hold Key to Future Market Crashes.” The main idea in my opinion is:

Moat [female big thinker at Warwick Business School] continued, “Our results are in line with the hypothesis that increases in searches relating to both politics and business could be a sign of concern about the state of the economy, which may lead to decreased confidence in the value of stocks, resulting in transactions at lower prices.”

So will the Warwick team cash in on the stock market?

Well, there is a cautionary item as well:

“Our results provide evidence of a relationship between the search behavior of Google users and stock market movements,” said Tobias Preis, Associate Professor of Behavioral Science and Finance at Warwick Business School. “However, our analysis found that the strength of this relationship, using this very simple weekly trading strategy, has diminished in recent years. This potentially reflects the increasing incorporation of Internet data into automated trading strategies, and highlights that more advanced strategies are now needed to fully exploit online data in financial trading.”

Rats. Quants are already on this, it seems.
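For readers curious about what a “very simple weekly trading strategy” driven by search volume looks like, here is a toy reconstruction in Python. The moving-average window, the short-on-rising-interest rule, and the sample data are my assumptions for illustration, not the Warwick team’s exact method:

```python
def moving_average(xs):
    """Plain mean of a list of weekly search volumes."""
    return sum(xs) / len(xs)

def backtest(volumes, returns, k=3):
    """Toy weekly strategy: if this week's search volume is above its
    k-week moving average (rising concern), go short for the next week;
    otherwise go long. volumes[i] is search volume in week i and
    returns[i] is the market return realized in week i. Returns the
    cumulative strategy return (simple sum, ignoring costs)."""
    total = 0.0
    for i in range(k, len(volumes) - 1):
        signal = -1 if volumes[i] > moving_average(volumes[i - k:i]) else 1
        total += signal * returns[i + 1]
    return total

# Toy data: a spike in search volume in week 3.
volumes = [1, 1, 1, 2, 1, 1]
returns = [0.0, 0.01, -0.02, 0.03, -0.01, 0.02]
print(backtest(volumes, returns))
```

On the toy data, the spike triggers a short position the following week; the sketch makes plain how crude such a signal is, which is consistent with Preis’s caveat that its edge has diminished.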

What’s fascinating to me is that the Warwick experts overlooked a couple of points; namely:

  1. Google is using its own predictive methods to determine what users see when they get a search result based on the behavior of others. Recursion, anyone?
  2. Google provides more searches with each passing day to those using mobile devices. By their nature, traditional desktop queries are not exactly the same as mobile device searches. As a workaround, Google uses clusters and other methods to give users what Google thinks the user really wants. Advertising, anyone?
  3. The stock pickers that are the cat’s pajamas at the B school have to demonstrate their acumen on the trading floor. Does insider trading play a role? Does working at a Goldman Sachs-type of firm help a bit?

Like perpetual motion, folks will keep looking for a way to get an edge. Why are large international banks paying some hefty fines? Humans, I believe, not algorithms.

Stephen E Arnold, July 28, 2014

Surprising Sponsored Search Report and Content Marketing

July 28, 2014

Content marketing hath embraced the mid tier consulting firms. IDC, an outfit that used my information without my permission from 2012 until July 2014, has published a study about “knowledge.” I was not able to view the entire report, but the executive summary was available for download at http://bit.ly/1l10sGH. (Verified at 11 am, July 25, 2014.) If you have some extra money, you may want to pay an IDC-scale fee to learn about “the knowledge quotient.”

I am looking forward to the full IDC report, which promises to be as amusing as a recent Gartner report about search. The idea of rigorous, original research and an endorsement from a company like McKinsey or Boston Consulting Group is a Holy Grail of marketing. McKinsey and BCG (what I call blue chip firms), while not perfect, produce client smiles for most of their engagements.

Consulting, however, does not have an American Bar Association or other certification process to “certify” a professional’s capabilities. In fact, at Booz, Allen I learned that Halliburton NUS, a nuclear consulting and services shop, was in the eyes of Booz, Allen a “grade C.” Booz, Allen, like Bain and SRI, was a grade A firm. I figured if I were hired at Booz, Allen I could pick up some A-level attributes. Consultants not trained by one of the blue chip firms had to work harder, smarter, and more effectively. Slack off, and a consulting firm lower on the totem pole was unlikely to claw its way to the top. When a consulting firm has been a grade C for decades, it is highly unlikely that the blue chip outfits will worry too much about these competitors.

Who funded this particular IDC report, 249643ES? The fact that I was able to download the report from one of the companies listed as a “sponsor” suggests that Smartlogic and nine other companies were underwriting the rigorous research. You can download the report (verified at 2:30 pm, July 25, 2014) at this link. Hasten to do it, please.

In the consulting arena, multi-client studies come in different flavors or variants. At Booz, Allen & Hamilton, the 1976 Study of World Economic Change was paid for by a number of large banks. We did not write about these banks. We delivered previously uncollected information in a Booz, Allen package. The boss was William Simon, former secretary of the US treasury. He brought a certain mindset and credibility to our project.

The authors of the IDC report are Dave Schubmehl and Dan Vesset. Frankly, I don’t know enough about these “experts” to compare them to William Simon. My hunch is that Mr. Simon’s credentials might have had a bit more credibility. We supplemented the Booz, Allen team with specialists from Claremont College, where Peter Drucker was grooming some quite bright business analysts. In short, the high caliber Booz, Allen professionals, the Claremont College whiz kids, and William Simon combined to generate a report with a substantive information payload.

Based on my review of the Executive Summary of “The Knowledge Quotient,” direct comparisons with the Booz, Allen report or even reports from some of the mid tier firms’ analyses in my files are difficult to make. I can, however, highlight a handful of issues that warrant further consideration. Let’s look at three areas where the information highway may be melting in the summer heat.

1. A Focus on Knowledge and the Notion of a Quotient

I do a for fee column for Knowledge Management Magazine. I want to be candid. I am not sure that I have a solid understanding of what the heck “knowledge” is. I know that a quotient is the result obtained by dividing one number by another number.  I am not able to accept that an intangible like “knowledge” can be converted to a numeric output. Lard on some other abstractions like “value” and the entire premise of the report is difficult to take seriously.


Well, quite a few companies did take the idea seriously, and we need to look at the IDC material to get a feel for what the results, based on a survey of 2,155 organizations and in-depth interviews with 11 organizations, “discovered.” The fact that there are 11 sponsors and 11 in-depth interviews suggests that the sample is not an objective one as far as the interviews are concerned. But I may be wrong. Is that a signal that this IDC report is a marketing exercise dressed up as an objective report?

2. The Old Chestnut Makes an Appearance

A second clue is the inclusion of a matrix that reminded me of an unimaginative variation on the 1970 Boston Consulting Group’s tool. The BCG approach used market share or similar “hard” data about products and business units. A version of the BCG quadrant appears below:


IDC’s “experts” may be able to apply numbers to nebulous concepts. I would not want to try to pull off this legerdemain. The Schubmehl and Vesset version for IDC strikes me as somewhat spongy; for example, how does one create a quotient for knowledge when parameterizing “socialization” or “culture”? Is the association with New Age and pop culture intentional?

3. The Sponsors: An Eclectic Group United by Sponsoring IDC?

The third tip-off to the focus of the report is the sponsors themselves. The 11 companies are an eclectic group, including a giant computer services firm (IBM), a handful of small companies with little or no corporate profile, and an indexing company that delivers training, services, and advice.

4. A Glimpse of the Takeaways

Fourth, the Executive Summary highlights what appear to be important takeaways from the year-long research effort. For example, KQ leaders have their expectations exceeded, presumably because these KQ-savvy outfits have licensed one or more of the study sponsors’ products. The Executive Summary references a number of case studies. As you may know, positive case studies about search and content processing are not readily available. IDC promises a clutch of cases.

And IDC on pages iv and v of the Executive Summary uses a bullet list and some jargon to give a glimpse of high KQ outfits’ best practices. The idea is that if content is indexed and searchable, there are some benefits to the companies.

After 50 years, I assume IDC has this type of work nailed. I would point out that IDC used my information in its for fee reports from August 2012 until July 2014. My attorney was successful in getting IDC to stop connecting my name and that of my researchers with one of IDC’s top billing analysts. I find surfing on my content and name untoward. But again there are substantive differences between blue chip consulting firms and those lower on the for fee services totem pole.

I wonder if the full report will contain positive profiles of the sponsoring organizations. Be prepared to pay a lot for this “knowledge quotient” report. On the other hand, some of the sponsors may provide you with a copy if you have a gnawing curiosity about the buzzwords and jargon the report embraces; for example, analytics.

Some potential reader will have to write a big check. For example, to get one of the IDC reports with my name on it from 2012 to July 2014, the per report price was $3,500. I would not be surprised if the sticker for this KQ report is even higher. Based on the Executive Summary, KQ looks like a content marketing play. The “inclusions” are the profiles of the sponsors.

I will scout around for the Full Monty, and I hope it is fully clothed and buttoned up. Does IDC have a William Simon to ride herd on its “experts”? From my experience, IDC’s rigorousness is quite different. For example, IDC’s Dave Schubmehl used my information and attached himself to my name. Is this the behavior of a blue chip?

Stephen E Arnold, July 28, 2014

Pre-Oracle InQuira: A Leader in Knowledge Assessment?

July 28, 2014

Oracle purchased InQuira in 2011. One of the writers for Beyond Search reminded me that Beyond Search covered the InQuira knowledge assessment marketing ploy in 2009. You can find that original article at http://bit.ly/WYYvF7.

InQuira’s technology is an option in the Oracle RightNow customer support system. RightNow was purchased by Oracle in 2011. For those who are the baseball card collectors of enterprise search, you know that RightNow purchased Q-Go technology to make its customer support system more intuitive, intelligent, and easier to use. (Information about Q-Go is at http://bit.ly/1nvyW8G.)

InQuira’s technology is not cut from a single chunk of Styrofoam. InQuira was formed in 2002 by fusing the Answerfriend, Inc. and Electric Knowledge, Inc. systems. InQuira was positioned as a question answering system. For years, Yahoo relied on InQuira to deliver answers to Yahooligans seeking help with Yahoo’s services. InQuira also provided the plumbing to www.honda.com. InQuira hopped on the natural language processing bandwagon and beat the drum until it layered on “knowledge” as a core functionality. The InQuira technology was packaged as a “semantic processing engine.”

InQuira used its somewhat ponderous technology along with AskJeeves-style shortcuts to improve the performance of its system. The company narrowed its focus from “boil the ocean” search to a niche play. InQuira wanted to become the go-to system for help desk applications.

InQuira’s approach involved vocabularies. These were similar to the “knowledge bases” included with some versions of Convera. InQuira, according to my files, used the phrase “loop of incompetence.” I think the idea was that traditional search systems did not allow a customer support professional to provide an answer that would make customers happy the majority of the time. InQuira before Oracle emphasized that its system would provide answers, not a list of Google style hits.

The InQuira system can be set up to display a page of answers in the form of sentences snipped from relevant documents. The idea is that the InQuira system eliminates the need for a user to review a laundry list of links.

The word lists and knowledge bases require maintenance. Some tasks can be turned over to scripts, but other tasks require the ministrations of a human who is a subject matter expert or a trained indexer. The InQuira concept knowledge bases also require care and feeding to deliver on-point results. I would point out that this type of knowledge care is more expensive than a nursing home for a 90 year old parent. A failure to maintain the knowledge bases usually results in indexing drift and frustrated users. In short, the systems are perceived as not working “like Google.”

Why is this nitty gritty important? InQuira shifted from fancy buzzwords as the sharp end of its marketing spear to the more fuzzy notion of knowledge. The company, beginning in late 2008, put knowledge first and the complex, somewhat baffling technology second. To generate sales leads, InQuira’s marketers hit on the idea of a “knowledge assessment.”

The outcome of the knowledge marketing effort was the sale of the company to Oracle in mid 2011. At the time of the sale, InQuira had an adaptor for Oracle Siebel. Oracle appears to have had a grand plan to acquire key customer support search and retrieval functionality. Armed with technology that was arguably better than the ageing Oracle SES system, Oracle could create a slam dunk solution for customer support applications.

Since the acquisition, many search vendors have realized that some companies were not ready to write a Moby Dick-sized check for customer support search. Search vendors adopted the lingo of InQuira and set out to make sales to organizations eager to reduce the cost of customer support and avoid the hefty license fees some vendors levied.

What I find important about InQuira are:

  1. It is one of the first search engines to be created by fusing two companies that were individually not able to generate sustainable revenue.
  2. InQuira’s tactic to focus on customer support and then add other niche markets brought more discipline to the company’s message than the “one size fits all” that was popular with Autonomy and Fast Search.
  3. InQuira figured out that search was not a magnetic concept. The company was one of the first to explain its technology, benefits, and approach in terms of a nebulous concept; that is, knowledge. Who knows what knowledge is, but it does seem important, right?
  4. The outcome of InQuira’s efforts made it possible for stakeholders to sell the company to Oracle. Presumably this exit was a “success” for those who divided up Oracle’s money.

Net net: Shifting search and content processing to knowledge is a marketing tactic. Will it work in 2014 when search means Google? Some search vendors who have sold their soul to venture capitalists in exchange for millions of jump start dollars hope so.

My thought is that knowledge won’t sell information retrieval. Once a company installs a search system, users can find what they need or not. Fuzzy does not cut it when users refuse to use a system, scream for a Google Search Appliance, or create a work-around for a doggy system.

Stephen E Arnold, July 28, 2014

From Search to Sentiment

July 28, 2014

Attivio has placed itself in the news again, this time for scoring a new patent. Virtual-Strategy Magazine declares, “Attivio Awarded Breakthrough Patent for Big Data Sentiment Analysis.” I’m not sure “breakthrough” is completely accurate, but that’s the language of press releases for you. Still, any advance can provide an advantage. The write-up explains that the company:

“… announced it was awarded U.S. Patent No. 8725494 for entity-level sentiment analysis. The patent addresses the market’s need to more accurately analyze, assign and understand customer sentiment within unstructured content where multiple brands and people are referenced and discussed. Most sentiment analysis today is conducted on a broad level to determine, for example, if a review is positive, negative or neutral. The entire entry or document is assigned sentiment uniformly, regardless of whether the feedback contains multiple comments that express a combination of brand and product sentiment.”

I can see how picking up on nuances can lead to a more accurate measurement of market sentiment, though it does seem more like an incremental step than a leap forward. Still, the patent is evidence of Attivio’s continued ascent. Founded in 2007 and headquartered in Massachusetts, Attivio maintains offices around the world. The company’s award-winning Active Intelligence Engine integrates structured and unstructured data, facilitating the translation of that data into useful business insights.
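To make the document-level versus entity-level distinction concrete, here is a toy Python sketch using a tiny hand-built lexicon. This illustrates the general idea only; Attivio’s patented method is certainly more sophisticated than matching polarity words per sentence:

```python
import re

# Tiny illustrative lexicon; production systems use trained models.
POSITIVE = {"excellent", "great", "love"}
NEGATIVE = {"terrible", "poor", "hate"}

def entity_sentiment(text, entities):
    """Score each entity from the polarity words in the sentences that
    mention it, instead of one uniform score for the whole document."""
    scores = {e: 0 for e in entities}
    for sentence in re.split(r"[.!?]", text):
        words = set(sentence.lower().split())
        polarity = len(words & POSITIVE) - len(words & NEGATIVE)
        for e in entities:
            if e.lower() in sentence.lower():
                scores[e] += polarity
    return scores

review = "The camera on the PhoneX is excellent. The TabletY battery is terrible."
# Document-level scoring would call this review roughly neutral (one
# positive word, one negative word); entity-level scoring separates
# the two hypothetical brands.
print(entity_sentiment(review, ["PhoneX", "TabletY"]))
```

The brand names and lexicon here are invented for the example; the point is only that sentiment is attached to each mentioned entity rather than to the review as a whole.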

Cynthia Murrell, July 28, 2014

Sponsored by ArnoldIT.com, developer of Augmentext

Searchcode Is a Valuable Resource for Developers

July 28, 2014

Here is a useful tool that developers will want to bookmark: searchcode does just what its name suggests—paste in a snippet of code, and it returns real-world examples of its use in context. Great for programming in an unfamiliar language, working to streamline code, or just seeing how other coders have approached a certain function. The site’s About page explains:

“Searchcode is a free source code and documentation search engine. API documentation, code snippets and open source (free software) repositories are indexed and searchable. Most information is presented in such a way that you shouldn’t need to click through, but can if required.”

Searchcode pulls its samples from Github, Bitbucket, Google Code, Codeplex, Sourceforge, and the Fedora Project. There is a way to search using special characters, and users can filter by programming language, repository, or source. The tool is the product of one Sydney-based developer, Ben E. Boyter, and is powered by open-source indexer Sphinx Search. Many, many more technical details about searchcode can be found at Boyter’s blog.
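Searchcode also exposes a JSON search API, so queries can be scripted. The sketch below builds a query URL; the endpoint and parameter names reflect my reading of the public API documentation and should be treated as assumptions (the `lan` filter, in particular, may expect numeric language ids rather than names):

```python
import urllib.parse

# Assumed endpoint for searchcode's code search API.
API = "https://searchcode.com/api/codesearch_I/"

def search_url(query, language=None, page=0):
    """Build a searchcode query URL; results come back as JSON."""
    params = {"q": query, "p": page}
    if language is not None:
        params["lan"] = language  # language filter (assumed form)
    return API + "?" + urllib.parse.urlencode(params)

print(search_url("sphinx_search", language="Python"))
```

Fetching the resulting URL with any HTTP client returns a page of matches with file names, repositories, and code lines.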

Cynthia Murrell, July 28, 2014

Sponsored by ArnoldIT.com, developer of Augmentext

Snowden Effect on Web Search

July 27, 2014

If you are curious about the alleged impact of intercepts and monitoring on search, you will want to read “Government Surveillance and Internet Search Behavior.” You may have to pay to access the document. Here’s a passage I noted:

In the U. S., this was the main subset of search terms that were affected. However, internationally there was also a drop in traffic for search terms that were rated as personally sensitive.

Stephen E Arnold, July 27, 2014
