Use Bing? You Are an Early Adopter

February 8, 2010

AdAge ran “What Your Choice of Search Engine Says about You.” Marketing and economic research usually leaves me baffled. Most of the data and the interpretation of those data are a bit like a comedian’s joke. The punch line is unexpected, and I often wonder how the comedian’s imagination hit on a twist to a tired light bulb gag. You will need to read the AdAge article and make your own decision.

What I underlined in my hard copy were these points:

  • The search engine I use tells me a lot about myself. Example: If I use Bing, I am an early adopter. My office is filled with new stuff, which arrives every day. I don’t use Bing. I guess what I do as a technology analyst is not what a “real early adopter” does, right?
  • I also shop at Wal*Mart. Wrong. I don’t shop. When I buy stuff, I use online services to find low prices and then do some research. Then I buy. When I shop, I get what I need at the junky store attached to the gasoline pump I use to fill my Honda every 10 days.
  • Search engine choice sheds light on consumer behavior. Okay, what about a sample in which fewer than 15 percent of users use something other than Google? For the rest of the sample, it is Google all the way. Doesn’t this change the results? It sure makes me think about this baloney, but apparently not the AdAge editors.

Yep, a “new wrinkle”. Maybe for the marketing and econ majors. Not for me.

Stephen E Arnold, February 8, 2010

No one paid me to point out the flaws in this survey. Maybe I will report this non-payment to the Bureau of the Census. That outfit knows how to count.

Google and the Lazarus Myth for Books

February 8, 2010

I read “Google: We Will Bring Books Back to Life” by Google’s legal eagle, David Drummond. The title of the article suggested to me that books are dead. I tried to visualize an information Lazarus, but I just received a royalty check from one of my half dozen publishers. The numbers looked fine, so that book of mine was not dead. On life support maybe. Definitely not dead.

Google saith, “Get up and read.” Van Gogh sees “yellow” as in journalism. Source: http://cruciality.files.wordpress.com/2009/09/van-gogh-the-raising-of-lazarus-1890.jpg

The argument in the Googler’s write up was well honed. I can see in my mind’s eye several Googlers laboring away in their cubes over the rough draft. Three passages struck me as interesting.

First, this passage: “Yet doubts remain, and there is particular concern among authors that they are in danger of handing control of their work to Google.”

“Doubts” is a bit of an understatement. The love Google enjoyed when it was a wee tot of two or three is long gone. The Google does more than engender doubt.

Second, this passage: “Some have questioned the impact of the agreement on competition, suggesting it will limit consumer choice and hand Google a monopoly. In reality, nothing in this agreement precludes any other organization from pursuing its own digitization efforts.”

Okay, let’s be clear. National libraries should be digitizing their holdings. National libraries did not take up this duty, leaving it to others. Now the “others” are a set of one, the Google. No one other than Google is going to scan books. Period. I accepted this years ago when I worked briefly at UMI in 1986. I thought scanning books was a great idea. The idea got reduced to scanning specific sets of books and then providing a set of microfilm for related information. When I looked into sets, it became clear that one could scan a single book like Pollard & Redgrave and then provide microfilm of the referenced content objects. But UMI’s financial set up precluded much more than a very small, undernourished effort. If money was not available in the mid 1980s, I don’t think money will be available in the post-crash, pre-depression 2010s.

Third, this passage: “Imagine if that information could be made available to everyone, ­everywhere, at the click of a mouse. Imagine if long-forgotten books could be enjoyed again and could earn new ­revenues for their authors. Without a settlement it can’t happen.”

I can imagine it. I can also imagine the data available to Google’s internal knowledgebases, its advertising revenue, and its potential to generate new content objects from the information in these processed documents.

Apple is working at books from its iTunes angle. Amazon is working at books from the digital Wal*Mart angle.

Where are the national libraries? Where are the consortia of government entities responsible for archives? Where are the UN members pooling resources to tackle must-have collections such as health and medicine? Where are the publishers’ associations?

Answer: nowhere, silent, or hoping for the hobbling of Google.

Google is doing the government’s job. If the Google is stopped, the information in books is going to be handled like so many important tasks in today’s world. Poorly or not at all.

I am on the side of Googzilla. But the Google does not know I exist and does not care about an addled goose in Kentucky. I do hear, “Get up and read” in my mind.

Stephen E Arnold, February 8, 2010

No one paid me to write this. I will report this sad state of affairs to the manager of the US government document repository near my goose pond.

Online Information: The View of a Real Author

February 8, 2010

The addled goose publishes monographs. These are expensive and sell in dribs and drabs. The addled goose has no real publishing experience, so I gobbled up the information in the TechCrunch article “Hey, 1997 – Macmillan Called, They Want the Net Book Agreement Back.” If the information in this write up is on the money, real publishers have been making life difficult for authors and booksellers for a long time. I recall reading that Charles Dickens was a slippery dude with whom to deal. Now I am wondering if the dust up between the world’s smartest man (Jeff Bezos) and a bunch of publishers is an information apocalypse, a business negotiation, or a new era in information access. I am working on a really dull monograph with zero interest to anyone except a few attorneys and possibly an investment banker or two. I may even give the monograph away because, at the speed with which my stuff published by my publishers is selling, I will be in the big duck oven in the sky before I can pay for a meal at McDuck’s down the road.

In the TechCrunch write up, I noted these items:

First this passage:

Of course, publishers still choose their wholesale price, but there’s nothing to stop, say, Borders from heavily discounting bestsellers to get people through the door. Publishers didn’t necessarily like this as it led to booksellers demanding more aggressive discounting (sometimes more than 60% off the cover price), but they didn’t have much of a choice but to accept. The fact is that publishers couldn’t justify opening up their own stores, so if they wanted readers to be able to actually read their books, they had to keep bookstores happy.

Ah, control is not complete. I did not know that publishers were at the mercy of their retail partners.

Second, this passage:

It took until the late 90s [in the United Kingdom] for the Restrictive Practices Court to declare that the Net Book Agreement was anti-competitive and should be scrapped. Shortly afterwards, Borders entered the UK market, hundreds of UK independent bookshops went bankrupt and publishers decided to change their contracts with authors. Now, instead of being based on the cover price of a book, the author’s royalty would be based on ‘net receipts’, which is to say the price that publishers actually received from bookshops.

Yikes, price manipulation and then pricing actions that further reduced what was paid to real authors. Geese like me are even further down the food chain. Yikes again.

Finally, this passage:

For the first time in the UK since 1997, and ever in the US, publishers are able to set – and enforce – their own prices on ebooks. And they will; not to make a fair return on ebooks but rather to cripple their sales in order to protect early hardback book sales. They’ve admitted as much themselves, saying that prices will start high on hardback release, before dropping steadily over time. The idea that this benefits anyone, least of all authors, is laughable.

So what are the consequences?

You can read the TechCrunch original for its view. Mine is that a disintermediation option is open. I argued that Google could move into this space with the flip of a bit. So far Google is dragging its giant Googzilla feet. But for how long? Have publishers read The Strategy Paradox by Michael E. Raynor? It might be available on Amazon or in your local bookstore. Worthwhile reading, in my opinion.

Stephen E Arnold, February 8, 2010

No one paid me to write this. I will report this non-payment to the US Department of the Treasury, which prints money, unlike publishers and authors.

ArnoldIT.com Partner Lands New Web Site

February 8, 2010

Dr. Tyra Oldham is an ArnoldIT.com partner. She notified us on February 7, 2010, that she has rolled out a new Web site. You can see Dr. Oldham in our TheSeed2020.com video and learn about her management and engineering services work at LAND Construct.

Dr. Tyra Oldham, President, LAND Construct, an engineering services firm and ArnoldIT.com partner.

Point your browser to www.landconstruct.com. A happy quack to her.

Stephen E Arnold, February 8, 2010

This was a compensated placement. Dr. Oldham bought Stuart Schram and me a hot chocolate as an inducement to review her new Web site and make constructive suggestions. The addled goose and one of my goslings were eager to comply. It only took a hot chocolate.

Online Pricing: Disruption Is the Game

February 8, 2010

It’s Monday morning. The Super Bowl is over, but the world football ecosystem is unfazed. The same cannot be said of for-fee content. I want to point out two seemingly unrelated developments and link them to one of the keystones of doing business in an online, Web-centric world. I am working on a couple of oh-so-secret write ups, and I will make oblique references to research findings by the goslings here in Harrod’s Creek that will be more widely known in the spring.

When worlds collide. The boundary is the exciting spot in my opinion. Image source: http://www.sciencedaily.com/images/2008/01/080112152249-large.jpg

First, consider the plight of Google Books. Suddenly the Department of Justice is showing some moxie. That’s a good thing, but I think the reality of derailing Google Books is likely to have some interesting repercussions going forward. For now, the big story is that Google Books has become the poster child of Google being Google. You can get the received wisdom in the UK newspaper The Telegraph and its write up “Justice Department Criticises Google Books Settlement.” The glee is evident to me in this write up, but perhaps I am jaded and worn down by the approach certain publications take to Google. The company is essentially the first example of what will be a growing line up of firms that use technology to alter business processes. I will be talking about this in my NFAIS speech on March 1, 2010. I am the luncheon speaker, and I think some of those in the room will get indigestion. The reason is that Google comes from a domain that people within 20 years of my age of 65 don’t fully understand. The Telegraph doesn’t get it either, and I think this passage highlights that generational divide:

The ruling is a blow to Google and authors’ groups who had supported the search giant’s ambitious plan to create a vast online library of digitised books. The controversial Google Book Search project attracted fierce criticism from authors, who believed their rights were being eroded, while winning praise from other quarters for helping to widen access to classic, rare or useful works of literature.

Too bad the writer, a real journalist, omitted the word “goodie”. My hunch is that since national libraries have not shown any interest in creating digital collections, students and researchers will be doing their work the way John Milton and Andrew Marvell did. Great for those who have the time, money, and cursive writing skills. Not so great for those who need to sift through lots of content quickly. With library budgets shrinking and librarians forced to decide which books to keep, which to store, and which to trash, I think the failure of national libraries is evident. Google made a Googley and somewhat immature attempt to step into the breach, and look what has resulted: a bureaucratic, legal eagle snarl. Books are an intellectual resource, and I keep asking, “If not Google, who?” Reed Elsevier? The British government? The National Library of China? A consortium of publishers? The answer is, in my opinion, now clear: “No one.” Maybe Google will keep going with this project. Hard to tell. Life might be easier if Google shifts gears, goes directly to authors, and cuts specific deals for their future work. In a decade or so, end of problem. Also, end of traditional publishing. If Google actually talked to me, I would offer this advice: “Go for it, dudes.”


Ad Revenue Gets Exciting

February 7, 2010

I read ZDNet’s “Classified Ads Plunge to $6bn from $19.6bn in 2000.” I am not sure there is a particular passage from this write up for me to cite. What’s clear is that the iPad, the Googlet, the better Kindle, the Nook (Nook?), and the dozens of other handheld reading devices are going to have to sell one heck of a lot of magazines, books, and other for fee content. In my opinion, the gizmos won’t be used for reading by most folks. I wonder if anyone has checked the lists of the top selling apps on some of the smartphones. I did. Pretty depressing for me but probably even more depressing for vendors of for fee content who may expect some ad revenue from their digital content. I suppose you could convert Scientific American into a game.

Stephen E Arnold, February 5, 2010

No one paid me to write this. I don’t know to whom to report this sad situation. Federal Bankruptcy Court? Sounds good.

Extractiv: Content Provisioning

February 7, 2010

A happy quack to the reader who alerted me to Extractiv. The company is in the “content provisioning business”, and I did not know what this phrase meant. I know about “telecommunications provisioning”, but the “content” part threw me. I followed the links my reader sent me and located an interview (“Quick Q&A on Extractiv”) on the AndyHickl.com blog. It took me about a half hour to figure out that the interviewer and the interview subject seemed to be the same person.

The key points that pierced the addled goose’s skull were:

  • The service “helps consumers ‘make sense’ of large amounts of unstructured text.” The method is natural language processing.
  • Unstructured text is transformed into structured text for sentiment tracking and semantic search.
  • The technology is a “unique distributed computing platform” that “makes it possible for us to crawl — and extract content from — zillions of pages at the same time. (Our performance is pretty unbeatable, too: we’re currently able to download and extract content from 1 million pages in just under an hour.)” See the back-of-envelope sketch after this list.
  • “Extractiv’s a joint venture between two companies: 80Legs and Language Computer. It’s really a great match. 80Legs offers the world’s first truly scalable web crawling platform, while Language Computer provides some of the world’s best — and most scalable — natural language processing tools.”
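
That crawl rate claim is easy to sanity check. Below is a back-of-envelope sketch in Python; the one million pages and the hour come from the quote above, while the two-second average fetch latency is my assumption, not Extractiv’s:

    # Back-of-envelope check of Extractiv's claimed crawl rate.
    # The 1,000,000 pages in roughly 1 hour figure is from the company's quote;
    # the 2-second average fetch latency is an assumed value for illustration.

    PAGES = 1_000_000
    SECONDS = 3600                       # "just under an hour"
    AVG_FETCH_LATENCY_S = 2.0            # assumed mean time to fetch one page

    pages_per_second = PAGES / SECONDS   # ~278 pages/second sustained
    # Little's law: requests in flight = throughput x latency
    concurrent_fetches = pages_per_second * AVG_FETCH_LATENCY_S

    print(f"{pages_per_second:.0f} pages/second sustained")
    print(f"~{concurrent_fetches:.0f} fetches in flight at any moment")

Roughly 278 pages a second, or about 550 connections in flight under my latency assumption. That arithmetic suggests why the 80Legs distributed crawling platform, not the language processing, is the half of the joint venture that makes the claim plausible.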

The company says:

Extractiv is a new kind of content provisioning service which is making the Web truly actionable. Rather than simply passively “monitoring” the Web, our industry-leading data harvesting and content extraction goes out and delivers the information that really matters to you and your business. With Extractiv, it’s easy to build semantically-aware applications – regardless if you’re a newcomer to the Semantic Web or a deep believer in the power of semantic metadata.

For more information you will want to read “Learning More about Swingly and Extractiv”. The company’s Web site is at www.extractiv.com. The company is in its alpha stage. More information when it becomes available.

Stephen E Arnold, February 8, 2010

No one paid me to write this. I think this company is in Dallas, and I don’t go to Dallas. Texas makes me nervous. I will report this to the US House of Representatives with the “this” meaning money and my nervousness about the Lone Star State.

Microsoft Realizes Its $1.3 Billion Challenge

February 7, 2010

My wife watched the film Titanic last night. I enjoyed the picture but the scenes that stuck in my mind were the calm officers who were on a sinking ship. Nifty but not too many survivors. I fell asleep before the film ended. I assume everyone was saved, but that was a motion picture, not real life.

Last week there were some icebergs spotted and a big ship speeding among them.

Phone calls and emails came from frantic 20 somethings who wanted to write a story before heading to yoga. A couple of clients pinged me about the future of enterprise search. There was even a fax from a UK client who wanted a link to the Fast Linux integrator / consulting outfit Comperio, where some of the original Fast engineers now work.

Ships in iceberg territory.

Let’s take these inquiries—informed, desperate, meaningful, and clueless—and probe each.

First, a rumor surfaced about 10 days ago that there was a shakeup in the Microsoft Fast unit. The rumor I heard from a contact in Scandinavia was that Microsoft was backing away slowly from Fast’s Linux and Solaris versions of ESP, and I thought I heard a reference to staff who were told “don’t let the door hit you on the way out”. Apparently those bidding farewell wore penguin pins on their parkas.

I then read a pretty good write up by Kurt Mackie in RedmondMag.com called “Fast Search Will Be a Windows Only Product.” In my opinion, the most interesting comment in that write up was attributed to the fellow running Microsoft search, or at least one of the people running Microsoft search:

We will always interoperate with non-Windows systems on both the front- and back-end, Olstad wrote. Our search solutions will crawl and index content stored on Windows, Linux, and UNIX systems, and our UI controls will work with UI frameworks running on any operating system. [Emphasis added]

Don’t you believe in categorical affirmatives? Well, I am careful with broad statements that imply “diamonds are forever”.

I don’t know what the staffing situation is, but I was surprised it took Microsoft’s business experts more than 18 months to realize the complexity, cost, and legal issues associated with Fast Search & Transfer SA’s technology. The Fast company cost MSFT about $1.2 billion. Then there was the December 2009 departure of the Microsoft CFO on whose watch the deal went down (Chris Liddell). The whole acquisition struck me as a possible indication that Microsoft’s acquisition team was not as informed as it should have been about Fast ESP (enterprise search platform). I concluded that Microsoft did not have real ESP (extrasensory perception) when it decided to buy Fast Search. Then I was pitched enterprise search as a user experience. A tile of images is not going to deal with the brave new world of search enabled applications, eDiscovery, and seamless access to structured and unstructured data. Interface is one thing. Meeting users’ information needs in the enterprise is quite another.

Source: http://ar-d.com/images/emailblast/bing%20visual.png

Second, I documented in this Web log one public statement that a Fast ESP customer was done with the product due to lack of service, technical capabilities, etc. You can read the story I pointed to as “Microsoft Fast Questioned by Ayna”. Most licensees just grandfather a search system and move on. What made this story interesting is that Ayna went on the record on its Web site specifically pointing out the problems the company had with the Microsoft Fast system. Remarkable. This addled goose’s rule of thumb is that where there is one really annoyed customer there are probably lots more sitting in their cubicles wondering how to extricate themselves from a muddle.

Third, the whole Windows plumbing versus Unix plumbing does not require much technical acumen to understand. Trying to support an aging search technology with layers of acquired code, home brew scripts, open source software, and licensed “components” is a challenge in my experience. Now add engineers steeped in the arcana of Dot Net. The mixture triggers long meetings and slow, slow progress.


Quote to Note: Dick Brass on MSFT Innovation

February 6, 2010

I met Dick Brass many years ago. He left Oracle and joined Microsoft to contribute to a confidential initiative. Mr. Brass worked on the ill-fated Microsoft tablet, which Steve Jobs has reinvented as a revolutionary device. I am not a tablet guy, but one thing is certain. Mr. Jobs knows how to work public relations. Mr. Brass published an article in the New York Times, and it captured the attention of Microsoft and millions of readers who enjoyed Mr. Brass’s criticism of his former employer. I have no opinion about Microsoft, its administrative methods, or its ability to innovate. I did find a quote to note in the write up:

Microsoft is no longer considered the cool or cutting edge place to work. There has been a steady exodus of its best and brightest. (“Microsoft’s Creative Destruction”, the New York Times, February 4, 2010, Page 25, column 3, National Edition)

Telling, because if smart people don’t work at a company, that company is likely to make less informed decisions than an organization with smarter people. This applies in the consulting world. There are blue chip outfits like McKinsey, Bain, and BCG. Then there are lesser outfits, which I am sure you can name because these companies “advertise”, have sales people who “sell” listings, and invent crazy phrases to create buzz and sales. I am tempted to differentiate Microsoft with a reference to Apple or Google, but I will not. Oh, why did I not post this item before today? The hard copy of my New York Times was not delivered until today. Speed is important in today’s information world.

The quote nails it.

Stephen E Arnold, February 7, 2010

No one paid me to write this, not a single blue chip consulting firm, not a single savvy company. I will report this lack of compensation to the experts at the IRS, which is gearing up for the big day in April.


Featured
Microsoft and Mikojo Trigger Semantic Winds across Search Landscape

Semantic technology is blowing across the search landscape again. The word “semantic” and its use in phrases like “semantic technology” have a certain trendiness. When I see the word, I think of smart software that understands information in the way a human does. I also think of computationally sluggish processes and the complexity of human language, English included. Google has considerable investment in semantic technology, but the company wisely tucks it away within larger systems and avoids the technical battles that rage among different semantic technology factions. You can see Google’s semantic operations tucked within the Ramanathan Guha inventions disclosed in February 2007. Pay attention to the discussion of the system and method for “context”.

Gale force winds from semantic technology advocates. Image source: http://www.smh.com.au/ffximage/2008/11/08/paloma_wideweb__470×289,0.jpg

Microsoft’s Semantic Puff

Other companies are pushing the semantic shock troops forward. Yesterday I read Network World’s “Microsoft Talks Up Semantic Search Ambitions.” The article reminded me that Fast Search & Transfer SA offered some semantic functionality, which I summarized in the 2006 version of the original Enterprise Search Report (the one with real beef, not tofu inside). Microsoft also purchased Powerset, a company that used some of Xerox PARC’s technology and its own wizardry to “understand” queries and create a rich index. The Network World story reported:

With semantic technologies, which are also being referred to as Web 3.0, computers have a greater understanding of relationships between different information, rather than just forwarding links based on keyword searches. The end game for semantic search is “better, faster, cheaper, essentially,” said Prevost, who came over to Microsoft in the company’s 2008 acquisition of search engine vendor Powerset. Prevost is still general manager of Powerset. Semantic capabilities get users more relevant information and help them accomplish tasks and make decisions, said Prevost.

The payoff is that software understands humans. Sounds good, but it does little to alter the startling dominance of Google in general Web search and the rocket-like rise of social search systems like Facebook. In a social context, humans tell “friends” about meaning or, better yet, offer an answer or a relevant link. No search required.

I reported on the complexities of configuring the enterprise search system that Microsoft offers for SharePoint in an earlier Web log post. The challenge is complexity and the time and money required to make a “smart” software system perform at an acceptable level in terms of throughput in content processing and for the user. Users often prefer to ask someone or just use what appears at the top of a search results list.

Interviews
Inside Search: Raymond Bentinck of Exalead, Part 2

This is the second part of the interview with Raymond Bentinck of Exalead.

Isn’t this bad marketing?

No. This makes business sense. Traditional search vendors who may claim to have thousands of customers tend to use only a handful of well managed references. This is a direct result of customers choosing technology based on overblown marketing claims, and those claims then driving requirements that the vendor’s consultants struggle to deliver. The customer, who is then far from happy with the results, doesn’t do reference calls and ultimately becomes disillusioned with search in general or with the vendor specifically. Either way, they end up moving to an alternative.

I see this all the time with our clients that have replaced their legacy search solution with Exalead. When we started, we were met with much skepticism from clients doubting that we could answer their information retrieval problems. It was only after doing Proofs of Concept and delivering the solutions that they became convinced. Now that our reputation has grown, organizations realize that we do not make unsubstantiated claims and do stick by our promises.

What about the shift to hybrid solutions? An appliance or an on premises server, then a cloud component, and maybe some fairy dust thrown in to handle the security issues?

There is a major change that is happening within Information Technology at the moment driven primarily by the demands placed on IT by the business. Businesses want to vastly reduce the operational cost models of IT provision while pushing IT to be far more agile in their support of the business. Against this backdrop, information volumes continue to grow exponentially.

The push towards areas such as virtual servers and cloud computing is an aspect of reducing the operational cost models of information technology provision. It is fundamental that software solutions can operate in these environments. It is surprising, however, to find that many traditional search vendors’ solutions do not even work in a virtual server environment.

Isn’t this approach going to add costs to an Exalead installation?

No, because another aspect of this is that software solutions need to be designed to make the best use of available hardware resources. When Exalead provided a solution to the leading classified ads site Fish4.co.uk, unlike the legacy search solution we replaced, not only were we able to deploy a solution that met and exceeded their requirements but we reduced the cost of search to the business by 250 percent. A large part of this was around the massively reduced hardware costs associated with the solution.

What about making changes and responding quickly? Many search vendors simply impose a six month or nine month cycle on a deployment. The client wants to move quickly, but the vendor cannot work quickly.

Agility is another key factor. In the past, an organization may implement a data warehouse. This would take around 12 to 18 months to deploy and would cost a huge amount in hardware, software and consultancy fees. As part of the deployment the consultants needed to second guess the questions the business would want to ask of the data warehouse and design these into the system. After the 12 to 18 months, the business would start using the data warehouse and then find out they needed to ask different types of questions than were designed into the system. The data warehouse would then go through a phase of redevelopment which would last many more months. The business would evolve… making more changes and the cycle would go on and on.

With Exalead, we are able to deploy the same solution in a couple months but significantly there is no need to second guess the questions that the business would want to ask and design them into the system.

This is the sort of agile solution that businesses have been pushing their IT departments to deliver for years. Businesses that do not provide agile IT solutions will fall behind their competitors and be unable to react quickly enough when the market changes.

One of the large UK search vendors has dozens of niche versions of its product. How can that company keep each of these specialty products up to date and working? Integration is often the big problem, is it not?

The founders of Exalead took two years before starting the company to research what worked in search and why the existing search vendors products were so complex. This research led them to understand that the search products that were on the marketplace at the time all started as quite simple products designed to work on relatively low volumes of information and with very limited functional capabilities. Over the years, new functionality has been added to the solutions to keep abreast of what competitors have offered but because of how the products were originally engineered they have not been clean integrations. They did not start out with this intention but search has evolved in ways never imagined at the time these solutions were originally engineered.

Wasn’t one of the key architects part of the famous AltaVista.com team?

Yes. In fact, both of the founders of Exalead were from this team.

What kind of issues occur with these overly complex products?

As you know, this has caused many issues for both vendors and clients. Changes in one part of the solution can cause unwanted side effects in another part. Trying to track down issues and bugs can take a huge amount of time and expense. This is a major factor as to why we see the legacy search products on the market today that are complex, expensive and take many months if not years to deploy even for simple requirements.

Exalead learned from these lessons when engineering our solution. We have an architecture that is fully object-oriented at the core and follows SOA principles. It means that we can swap in and out new modules without messy integrations. We can also take core modules such as connectors to repositories and, instead of having to re-write them to meet specific requirements, we can override various capabilities in the classes. This means that the majority of the code that has gone through our quality-management systems remains the same. If an issue is identified in the code, it is a simple task to locate the problem, and the issue is isolated in one area of the code base. In the past, vendors have had to rewrite core components like connectors to meet customers’ requirements, and this has caused huge quality and support issues for both the customer and the vendor.
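
Editor’s note: The override-the-hooks approach Mr. Bentinck describes maps onto the classic template method pattern. Here is a minimal sketch in Python; the class and method names are invented for illustration and are not Exalead’s actual connector API:

    # Toy illustration of overriding connector capabilities in subclasses.
    # All names are invented; this is not Exalead's API.

    class BaseConnector:
        """Core crawl logic that has been through quality management."""

        def crawl(self, source):
            for item in self.list_items(source):
                doc = self.fetch(item)
                self.index(self.apply_security(doc))

        def list_items(self, source):
            raise NotImplementedError    # repository-specific hook

        def fetch(self, item):
            raise NotImplementedError    # repository-specific hook

        def apply_security(self, doc):
            return doc                   # default: no per-document ACLs

        def index(self, doc):
            print("indexing", doc["id"])

    class SharePointConnector(BaseConnector):
        # Only the hooks are overridden; the tested crawl() core stays
        # untouched, so any issue stays isolated in this one class.
        def list_items(self, source):
            return [{"id": source + "/doc" + str(i)} for i in range(3)]

        def fetch(self, item):
            return {"id": item["id"], "body": "..."}

        def apply_security(self, doc):
            doc["acl"] = ["group:finance"]   # repository-specific ACL mapping
            return doc

    SharePointConnector().crawl("http://intranet/site")

The design point: a new repository means writing the hooks, not re-plumbing the crawl core, which is why Mr. Bentinck can say below that a new secure connector takes weeks rather than months.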

What about integration? That’s a killer for many vendors in my experience.

The added advantage of this core engineering work is that for Exalead integration is a simple task. For example, building new secure connectors to new repositories can be performed in weeks rather than months. Our engineers can spend the time saved on adding new and innovative capabilities into the solution rather than worrying about how to integrate a new function without affecting the 1001 other overlaying functions.

Without this model, legacy vendors have to continually provide point-solutions to problems that tend to be customer-specific leading to a very expensive support headache as core engineering changes take too long and are too hard to deploy.

I heard about a large firm in the US that has invested significant sums in retooling Lucene. The solution has been described on the firm’s Web site, but I don’t see how that engineering cost is offset by the time to market that the fix required. Do you see open source as a problem or a solution?

I do not wake up in the middle of the night worrying about Lucene, if that is what you are thinking! I see Lucene in places that typically have large engineering teams to protect or where consultants are more interested in making lots of fees through its complex integration. Neither of these adds value to the company in, for example, reducing costs or increasing revenue.

Organizations that are interested in providing cost effective, richly functional solutions are in increasing numbers choosing solutions like Exalead. For example, the University of Sunderland wanted to replace their Google Search Appliance with a richer, more functional search tool. They looked at the marketplace and chose Exalead for searching their external site and their internal document repositories, plus providing business intelligence solutions over their database applications such as student attendance records. The search on their website was developed in a single day, including the integration to their existing user interface and the faceted navigation capabilities. This represented not only an exceptionally quick implementation, far in excess of any other solution on the marketplace today, but it also delivered for them the lowest total cost of ownership compared to other vendors and, of course, open source.

In my opinion, Lucene and other open-source offerings can offer a solution for some organizations but many jump on this bandwagon without fully appreciating the differences between the open source solution and the commercially available solutions either in terms of capability or total cost. It is assumed, wrongly in many instances, that the total cost of ownership for open source must be lower than the commercially available solutions. I would suggest that all too often, open source search is adopted by those who believe the consultants who say that search is a simple commodity problem.

What about the commercial enterprise that has had several search systems and none of them capable of delivering satisfactory solutions? What’s the cause of this? The vendors? The client’s approach?

I think the problem lies more with the vendors of the legacy search solutions than with the clients. Vendors have believed their own marketing messages and when customers are unsatisfied with the results have tended to blame the customers not understanding how to deploy the product correctly or in some cases, the third-party or system integrator responsible for the deployment.

One client of ours told me recently that with our solution they were able to deliver in a couple months what they failed to do with another leading search solution for seven years. This is pretty much the experience of every customer where we have replaced an existing search solution. In fact, every organization that I have worked with that has performed an in-depth analysis and comparison of our technology against any search solution has chosen Exalead.

In many ways, I see our solution as not only delivering on our promises but also delivering on the marketing messages that our competitors have been promoting for years but failing to deliver in reality.

So where does Exalead fit? The last demo I received showed me search working within a very large, global business process. The information just appeared? Is this where search is heading?

In the year 2000, and every year since, a CEO of one of the leading legacy search vendors made a claim that every major organization would be using their brand of meaning based search technology within two years.

I will not be as bold as he was, but it is my belief that in less than five years’ time the majority of organizations will be using search based applications in mission critical settings.

For too long software vendors have been trying to convince organizations, for example, that it was not possible to deploy mission critical solutions such as a 360 degree customer view, Master Data Management, Data Warehousing or business intelligence solutions in a couple months, with no user training, with up-to-the-minute information, with user friendly interfaces, and with a low cost per query covering millions or billions of records of information.

With Exalead this is possible and we have proven it in some of the world’s largest companies.

How does this change the present understanding of search, which in my opinion is often quite shallow?

Two things are required to change the status quo.

Firstly, a disruptive technology is required that can deliver on these requirements and secondly businesses need to demand new methods of meeting ever greater business requirements on information.

Today I see both these things in place. Exalead has proven that our solutions can meet the most demanding of mission critical requirements in an agile way and now IT departments are realizing that they cannot support their businesses moving forward by using traditional technologies.

What do you see as the trends in enterprise search for 2010?

Last year was a turning point for Search Based Applications. With the world-wide economy in recession, many companies put projects on hold until things were looking better. With economies still looking rather weak but projects not able to be left on ice forever, companies are starting to question the value of utilizing expensive, time consuming and rigid technologies to deliver these projects.

Search is a game changing technology that can deliver more innovative, agile and cheaper solutions than using traditional technologies. Exalead is there to deliver on this promise.

Search, a commodity solution? No.

Editor’s note: You can learn more about Exalead’s search enabled applications technology and method at the Exalead Web site.

Stephen E Arnold, February 4, 2010

I wrote this post without any compensation. However, Mr. Bentinck, who lives in a far off land, offered to buy me haggis, and I refused this tasty bribe. Ah, lungs! I will report the lack of payment to the National Institutes of Health, an outfit concerned about alveoli.
Profiles
Vyre: Software, Services, Search, and More

A happy quack to the reader who sent me a link to Vyre, whose catchphrase is “dissolving complexity.” The last time I looked at the company, I had pigeon holed it as a consulting and content management firm. The news release my reader sent me pointed out that the company has a mid market enterprise search solution that is now at version 4.x. I am getting old, or at least too sluggish to keep pace with content management companies that offer search solutions. My recollection is that Crown Point moved in this direction. I have a rather grim view of CMS because software cannot help organizations create high quality content, or at least what I think is high quality content.

The Wikipedia description of Vyre matches up with the information in my archive:

VYRE, now based in the UK, is a software development company. The firm uses the catchphrase “Enterprise 2.0” to describe its enterprise solutions for business. The firm’s core product is Unify. The Web based service allows users to build applications and manage content. The company has technology that manages digital assets. The firm’s clients in 2006 included Diageo, Sony, Virgin, and Lowe and Partners. The company has reinvented itself several times since the late 1990s, doing business as NCD (Northern Communication and Design), Salt, and then Vyre.

You can read the Wikipedia summary here. You can read a 2006 Butler Group analysis here. My old link worked this evening (March 5, 2009), but click quickly. In my files I had a link to a Vyre presentation, but it was not about search. Dated 2008, you may find the information useful. The Vyre presentations are here. The link worked for me on March 5, 2009. The only name I have in my archive is Dragan Jotic. Other names of people linked to the company are here. Basic information about the company’s Web site is here. Traffic, if these data are correct, seems to be trending down. I don’t have current interface examples. The wiki for the CMS service is here. (Note: the company does not use its own CMS for the wiki. The wiki system is MediaWiki. No problem for me, but I was curious about this decision because the company offers its own CMS system.) You can get a taste of the system here.

Administrative Vyre screen.

After a bit of poking around, it appears that Vyre has turned up the heat on its public relations activities. The Seybold Report here presented a news story / news release about the search system here. I scanned the release and noted this passage as interesting for my work:

…version 4.4 introduces powerful new capabilities for performing facetted and federated searching across the enterprise. Facetted search provides immediate feedback on the breakdown of search results and allows users to quickly and accurately drill down within search results. Federated search enables users to eradicate content silos by allowing users to search multiple content repositories.

Vyre includes a taxonomy management function with its search system, if I read the Seybold article correctly. I gravitate to the taxonomy solution available from Access Innovations, a company run by my friends and colleagues Marje Hlava and Jay Ven Eman. Their system generates ANSI standard thesauri and word lists, which is the sort of stuff that revs my engine.

Endeca has been the pioneer in the enterprise sector for “guided navigation”, which is a synonym in my mind for faceted search. Federated search gets into the functions that I associate with Bright Planet, Deep Web Technologies, and Vivisimo, among others. I know that shoving large volumes of data through systems that both facetize content and federate it is computationally intensive. Consequently, some organizations are not able to put the plumbing in place to make these computationally intensive systems hum like my grandmother’s sewing machine. A toy example of the facet computation appears below.
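
Here is why the facet step chews up cycles: on every query the engine must count the matching documents under each facet value, not just rank a hit list. A minimal sketch of my own, not Endeca’s or Vyre’s implementation:

    # Toy facet computation: count matching documents per facet value.
    # Real engines do this over millions of records on every query,
    # which is where the hardware bill comes from.

    from collections import Counter

    docs = [
        {"title": "Annual report", "type": "pdf", "year": 2008},
        {"title": "Search memo",   "type": "doc", "year": 2009},
        {"title": "Budget model",  "type": "xls", "year": 2009},
    ]

    def facet_counts(results, field):
        """How many matching documents carry each value of field?"""
        return Counter(doc[field] for doc in results)

    results = [d for d in docs if d["year"] >= 2008]   # the "search" step
    for field in ("type", "year"):
        print(field, dict(facet_counts(results, field)))
    # type {'pdf': 1, 'doc': 1, 'xls': 1}
    # year {2008: 1, 2009: 2}

Multiply that counting loop by a few hundred facet values and a few million documents per query, and the sewing machine needs serious plumbing.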

If you are in the market for a CMS and asset management company’s enterprise search solution, give the company’s product a test drive. You can buy a report from UK Data about this company here. I don’t have solid pricing data. My notes to myself record the phrase, “Sensible pricing.” I noted that the typical cost for the system begins at about $25,000. Check with the company for current license fees.

Stephen Arnold, March 6, 2009
Latest News
Mobile Devices and Their Apps: Search Gone Missing

VentureBeat’s “A Pretty Chart of Top Apps for iPhone, Android, BlackBerry” shocked me. Not a little. Quite a bit. You will want to look at the top apps…

The Facebook Magnet

February 6, 2010

Short honk: Facebook keeps on growing. I read “Facebook to Hit 400m Users.” At this time, in my opinion, Google does not have an answer to Facebook. Google should find this situation worrying. Facebook, like Apple, is more of a closed garden than Google’s wide open Web search service. Inside a walled garden, there are some opportunities denied to a company like Google. For me, the most significant comment in the write up was not the 400 million user figure. I responded to this comment:

Facebook has also sought to emphasize its applications and games offerings more by creating new links to these.

Different game from the one Google is playing and potentially game changing.

Stephen E Arnold, February 6, 2010

Nope. No one paid me to write this. When DC digs out again, I will report this to the FAA, which monitors high flying stuff.
