New Beyond Search White Paper: Coveo G2B for Mobile Email Search

September 8, 2008

The Beyond Search research team prepared a white paper about Coveo’s new G2B for Email product. You can download a copy from us here or from Coveo here. Coveo’s system works across different mobile devices, requires no third-party viewers, delivers low-latency access when searching, exhibits no rendering issues, and provides access to contacts and attachments as well as the text of an email. When compared to email search solutions from Google, Microsoft, and Yahoo, Coveo’s new service proved more robust and functional. Beyond Search identified 13 features that set G2B apart. These include a graphical administrative interface, comprehensive usage reports, and real time indexing of email. The Beyond Search research team (Stephen Arnold, Stuart Schram, Jessica Bratcher, and Anthony Safina) concluded that Coveo established a new benchmark for mobile email search. For more information about Coveo, navigate to www.coveo.com. Pricing information is available from Coveo.

Stephen Arnold, September 5, 2008

Attivio: New Release, Support for 50+ Languages

September 7, 2008

I’m not sure if it’s because Attivio is located less than five miles from Fenway Park, where everyone is, by default, a rabid Sox fan, but I got a preview of a slick new baseball demo the company put together to showcase the capabilities of its trademarked Active Intelligence Engine (AIE).

For the upcoming Enterprise Search Summit West in late September, Attivio created a single index of more than 700,000 news articles about baseball, dating from 2001 to 2007. Attivio told me that these were fed into the AIE in XML format. Attivio also processed a dozen comma-delimited files containing baseball statistics such as batting, pitching, player salaries, team information, and players’ post-season performances. Here are the results from my search for steroids:

[Screenshot: AIE search results for the query “steroids”]

© Attivio, 2008

Several aspects of this interface struck me as noteworthy. I liked:

  1. The ability to enter a word or phrase, a SQL query, or a combination of a “free text” item and a SQL query (see the sketch after this list). Combining the ambiguity of natural language with the precision of a structured query language instruction gives me the type of control I want in my analytic work. Laundry lists don’t help me much. Fully programmatic systems like those from SAS and SPSS are too unwieldy for the fast-cycle work that I have to do.
  2. The point-and-click access to entities, alternative views, and other “facet” functions. Without having to remember how to perform a pivot operation, I can easily view information from structured and unstructured sources with a mouse click. For my work, I often pop between data and information associated with a single individual. The Attivio approach is a time saver, which is important for my work on tight deadlines.
  3. Administrative controls. The Attivio 1.2 release makes it easy for me to turn on certain features when I need them; for example, I can disable the syntax view with a mouse click. When I need to fiddle with my search statement, a click turns the function back on. I can jump to an alerts page to specify what I want to receive automatically and configure other parameters.
  4. Hit highlighting. I want to be able to spot the key fact or passage without tedious scanning.
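To make item 1 concrete, here is a minimal sketch of mixing free text with SQL precision. It uses SQLite’s full-text extension (FTS5, assumed to be compiled into your Python build) as a stand-in for Attivio’s engine; the table names and query syntax are mine, not AIE’s.

```python
import sqlite3

# A free-text predicate (FTS5 MATCH) joined to a precise structured predicate,
# in the spirit of AIE's hybrid queries. Names and data are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE news USING fts5(player, body)")
conn.execute("CREATE TABLE stats (player TEXT, year INT, batting_avg REAL)")
conn.executemany("INSERT INTO news VALUES (?, ?)", [
    ("Smith", "Smith denies steroids allegations after a record season"),
    ("Jones", "Jones wins pitching award"),
])
conn.executemany("INSERT INTO stats VALUES (?, ?, ?)", [
    ("Smith", 2004, 0.341),
    ("Jones", 2004, 0.212),
])
# The ambiguous part ("steroids") is free text; the precise part is SQL.
rows = conn.execute("""
    SELECT news.player, stats.year, stats.batting_avg
    FROM news JOIN stats ON stats.player = news.player
    WHERE news MATCH 'steroids' AND stats.batting_avg > 0.300
""").fetchall()
print(rows)  # expected: [('Smith', 2004, 0.341)]
```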

Read more

Life before Google: History from Gen X

September 7, 2008

When I am in the UK, I enjoy reading the London papers. The Guardian often runs interesting and quirky stories. My newsreader delivered to me “Life before Google” by Kevin Anderson, who was in college in the 1990s. Ah. Gen X history. I dived right in, and you may want to read the article here. After a chronological rundown of Web search (happily ignoring the pre-Web search systems), Mr. Anderson wrote:

Using the almost 250 year-old theories of British mathematician and Presbyterian minister Thomas Bayes, Page and Brin developed an algorithm to analyse the links to a site, helping to predict what sites were relevant to search terms.

This is a comment that is almost certain to catch the attention of Autonomy, the British vendor that has claimed Bayesian methods as its core technology.

Then Mr. Anderson added:

Google hasn’t solved search. There is still the so-called dark web, or deep web – terabytes of data that aren’t searchable or indexed.

Mr. Anderson, despite his keen Gen X intellect, overlooked Google’s Programmable Search Engine inventions and the results of this query on Google: air schedule LGA SFO. The result displayed is:

[Screenshot: Google result for the query “air schedule LGA SFO”]

What you are looking at is a “deep Web” search result. Mr. Anderson also overlooked the results for Baltimore condo.

The results displayed when I ran this search on September 6, 2008, at 7:10 pm Eastern were:

[Screenshot: Google results for the query “Baltimore condo”]

Yep, another “deep Web” search.

What’s the problem with the Gen X research behind Mr. Anderson’s article? I think it was shallow. Much of the analysis of Google is superficial, incomplete, and misleading in my opinion. Agree or disagree? Help me learn.

Stephen Arnold, September 7, 2008

Personalized Network Searching

September 7, 2008

On September 4, 2008, the USPTO published Google’s patent application US2008/0215553. The invention is “personalized network searching”. The inventors are Gregory Badros and Stephen Lawrence. In this short post, I want to provide a glimpse of the inventors’ backgrounds and then briefly comment on the invention. With the availability of Chrome, Google’s browser, “network searching” becomes more important to me. You, of course, may be indifferent to Google’s “inventions”, but I find them useful windows through which to observe Google engineering at work. A patent filing does not mean that the invention will be used or that it will work, but patent documents can provide some information about a firm that keeps its lips zipped.

First, who is Stephen Lawrence? He has a low profile, which is not surprising. The biography available from the Queensland University of Technology provides some information. You can read the biography here. Some information from that write up suggests that he is a top-notch thinker. After getting his PhD, he went to work at the NEC Research Institute in Princeton, New Jersey. He then jumped to Google, where he still seems to work as a Senior Staff Research Scientist. Among the projects he has worked on at Google is the desktop search application.

Greg Badros is a former InfoSpace engineer. A Duke University undergraduate, he graduated magna cum laude in 1995, then earned a PhD in computer science and engineering at the University of Washington. He signed on at Google in 2003. Among his projects were Gmail, Calendar, and AdSense. He has received two Google Founders’ Awards and two Executive Management Group awards. You can pick up biographical details here.

These two fellows teamed up in 2003 to work on “personalized network searching.” The application was published as US2008/0215553 in September 2008.

The abstract for the invention is:

Personalized network searching, in which a search query is received from a user, and a request is received to personalize a search result. Responsive to the search query and the request to personalize the search result, a personalized search result is generated by searching a personalized search object. Responsive to the search query, a general search result is generated by searching the general search object. The personalized search result and the general search result are provided to a client device, an advertisement is selected based at least in part upon the personalized search object, and the advertisement, the personalized search result, and the general search result are displayed.

My reading of this document is that Google uses the user’s bookmarks, search history, annotations, and the query to determine what the user seeks. The results may be enhanced with a symbol to add information for the user. Users with similar interests could be woven into a community. Users may explicitly provide Google with bookmarks, but the invention can pull these items and others from the user’s computing device. The patent document provides a number of examples of how this invention might be used, ranging from pushing information to the user to performing collaborative work. One feature is that if a user doesn’t use bookmarks, the system will monitor what the user does and generate bookmarks based on those actions and data available to the system. The claims include personalization of advertising, information, and interface. A toy sketch below illustrates the flow.
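Here, purely as illustration, is a minimal sketch of the flow the abstract describes: one query, two searches (a personalized object built from the user’s bookmarks and a general index), plus an ad keyed to the personalized object. The function and data names are mine, not the patent’s, and real ad selection is far more involved.

```python
def personalized_search(query, general_index, personalized_object):
    """Toy sketch of the patent's flow: run one query against the user's own
    data (the "personalized search object") and against a general index;
    return both result sets, plus an ad keyed to the personalized object."""
    q = query.lower()
    personal = [item for item in personalized_object if q in item["title"].lower()]
    general = [doc for doc in general_index if q in doc["title"].lower()]
    ad = f"Ad related to: {personal[0]['title']}" if personal else "Generic ad"
    return personal, general, ad

# Hypothetical data standing in for a user's bookmarks and a Web index.
bookmarks = [{"title": "Patent searching tips", "url": "http://example.com/tips"}]
index = [{"title": "Patent office home page", "url": "http://example.com/uspto"},
         {"title": "Gardening basics", "url": "http://example.com/garden"}]

personal, general, ad = personalized_search("patent", index, bookmarks)
print(personal)  # hits from the user's own data; the UI could mark these with a symbol
print(general)   # hits from the general index
print(ad)        # ad chosen, per the claims, from the personalized object
```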

For me the key point is that the membrane between the user’s personal computer, with its data, and Google is opened. Whether this makes the user’s computer part of the broader Google computing environment depends on how you interpret the language of the patent document. You may find reading the 14-page document interesting. I did. A copy is available from the USPTO here. My view is that Chrome makes this type of Google private network connection easier for the GOOG to control and instrument. I can think of some interesting uses of this technology for intelligence and enterprise applications. What are your thoughts?

Stephen Arnold, September 8, 2008

Text Processing: Why Servers Choke

September 6, 2008

Resource Shelf posted a link to a Hewlett Packard Labs paper. Great find. You can download the HP write up here (verified at 7 pm Eastern on September 5, 2008). The paper argues that an HP innovation can process text at the rate of 100 megabytes per second per processor core. That’s quite fast. The value of the paper for me was that the authors of “Extremely Fast Text Feature Extraction for Classification and Indexing” have done a thorough job of providing data about the performance of certain text processing systems. If you’ve been wondering how slow Lucene is, this paper gives you some metrics. The data seem to suggest that Lucene is a very slow horse in a slow race.

Another highlight of George Forman and Evan Kirshenbaum’s write up was this statement:

Multiple disks or a 100 gigabit Ethernet feed from many client computers may certainly increase the input rate, but ultimately (multi-core) processing technology is getting faster faster than I/O bandwidth is getting faster. One potential avenue for future work is to push the general-purpose text feature extraction algorithm closer to the disk hardware. That is, for each file or block read, the disk controller itself could distill the bag-of-words representation and then transfer only this small amount of data to the general-purpose processor. This could enable much higher indexing or classification scanning rates than is currently feasible. Another potential avenue is to investigate varying the hash function to improve classification performance, e.g. to avoid a particularly unfortunate collision between an important, predictive feature and a more frequent word that masks it.

When I read this, two thoughts came to mind:

  1. Search vendors counting on new multi-core CPUs to solve performance problems won’t get the speed-ups needed to make some systems process content more quickly. Bad news for one vendor whose system I just analyzed for a company convinced that performance is a strategic advantage. In short, slow loses.
  2. As more content is processed and shortcuts are taken, hash collisions can reduce the usefulness of the value-added processing. A query returns unexpected results. Much of the HP speed-up comes from a series of shortcuts. The problem is that shortcuts can undermine what matters most to the user: getting the information needed to meet a need. The sketch after this list shows the hashing shortcut and how collisions creep in.
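Here is a minimal sketch of hashed bag-of-words feature extraction, the general family of technique the HP paper benchmarks; this is my own illustration, not the authors’ code. Each token maps to one of a fixed number of buckets, so indexing never has to store a vocabulary, and two different words can land in the same bucket.

```python
import hashlib
from collections import Counter

def bucket(token: str, num_buckets: int) -> int:
    """Deterministically hash a token into a fixed feature space."""
    digest = hashlib.md5(token.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_buckets

def hashed_features(text: str, num_buckets: int = 2 ** 20) -> Counter:
    """Distill text to a bag of hashed token counts; no vocabulary is kept."""
    counts = Counter()
    for token in text.lower().split():
        counts[bucket(token, num_buckets)] += 1
    return counts

print(hashed_features("steroids scandal dominates baseball baseball news"))

# Shrink the feature space and collisions appear: nine distinct words into
# eight buckets means at least two must share a bucket, the "unfortunate
# collision" the authors want to engineer around.
words = "steroids salary pitching batting fielding trades rookies managers umpires"
print({t: bucket(t, 8) for t in words.split()})
```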

I urge you to read this paper. Quite a good piece of work. If you have other thoughts about this paper, please, share them.

Stephen Arnold, September 6, 2008

WordLogic, Codima: Entering the Search War

September 6, 2008

WordLogic (Vancouver, BC) and Codima (Edmonton, AB) have teamed in a joint venture to develop Web search technology. Not much information is available on the tie-up. Mediacaster Magazine has a short announcement of the deal here. WordLogic has carved a path for itself in mobile device interfaces. Codima is a VoIP specialist. More information about this company is here. Mobile search is attracting interest from Google and Yahoo. Coveo, another Canadian outfit, has a mobile email search service that looks very solid. As more information becomes available about the WordLogic and Codima play, I will pass it along.

Stephen Arnold, September 6, 2008

Another Google 180

September 6, 2008

Physorg.com ran a story called “Google Chief Admits to Defensive Component of Browser Launch”. You can read the full story here. The point of the story is that Google needed a browser to protect and attack. For me, the most interesting statement in the story was this quote attributed to Eric Schmidt:

“It is true that we actually, and I in particular, have said for a long time that we should not do a browser because it wasn’t necessary,” he told the business daily. “The thing that changed in the past couple of years … is that people started building powerful applications on top of browsers and the browsers that were out there, in particular in Explorer, were not up to the task of running complex applications.”

Now that Google has hit age 10, it seems to be able to change its mind like a 10-year-old. How do I know what Google says today will be true tomorrow? Answer: I don’t. Do you?

Stephen Arnold, September 6, 2008

LexisNexis and Interwoven: An Odd Couple

September 6, 2008

The for-fee legal information sector looks like a consistent winner to those who don’t know the cost structures and marketing hassles of selling to attorneys, intelligence agencies, and law schools. Let’s review at a high level the sorry state of the legal information business in the United States. Europe and the Asia Pacific region are a different kitchen of torts.

Background

First, creating legal information is still a labor intensive operation. Automated processes can reduce some costs, but other types of legal metatagging still require the effort of attorneys or those with sufficient legal education to identify and correct egregious errors. As you may know, making a mistake when preparing for a major legal matter is not too popular with the law firms’ clients.

Second, attorneys and law firms make up one of those interesting markets. At one end there are lots and lots of attorneys who work in very small shops. Someone told me that 90 percent of attorneys work at small firms or in legal flea markets: several attorneys get together, lease a space, and then offer desks to other attorneys. Everyone pays the overhead, and the group can pursue individual work or form a loose confederation if necessary. Other attorneys abandon ship. I don’t have data on the quitters in the US, but I know that one of my acquaintances in Louisville, Kentucky, gave up the law to become a public relations advisor. One of my resources is an attorney who works only on advising companies trying to launch an IPO. He hires attorneys, preferring to use his managerial skills without the mind-numbingly dull work that many legal eagles perform.

Third, there are lots of attorneys who have to mind their pennies. Clients in tough economic times are less willing to pay wild and crazy legal bills. These often carry such useful line items as “Research, $2,300” or “Telephone call, $550”. I have worked as an expert witness and gained a tiny bit of insight into the billing and the pushback some clients exert. Other clients don’t pay the bills, which makes life tough for partners who can’t buy a new BMW and for the low-paid “associates” who can’t buy happiness or pay law school loans.

Fourth, most people know that prices for legal information are high, but there’s a growing realization that the companies with these expensive resources are starting to look a lot like monopolies. Running the only poker game in town makes some of the excluded players want options. In the last few years, I’ve run across services started by a single person to provide specific legal information to colleagues because the blue chip companies were charging too much or delivering stale information at fresh-baked-bread prices.

Folks like Google.com, small publishers, trade associations, and the Federal government put legal information on Web servers and let people browse and download. Granted, some of the bells and whistles, like the nifty footnotes that tell a legal eagle to look at a specific case for a precedent, are missing. But some folks are quite happy to use the free services first. Only as a last resort will the abstemious legal eagle pay upwards of $250 per query to look up information in a WestLaw, LexisNexis, or other blue chip vendor’s specialist online file.

Google’s government index service sports what may presage the “new look” for other Google vertical search services. Check it out in the screen shot below. Notice that the search box is unchanged, but the page features categories of information.

[Screenshot: Google’s government search interface]

Now run the query district court decisions. Sorry about the screen shots, but you can navigate to this site and run your own queries. I ran the bound phrase “district court decisions”. Here’s what Google showed me:

[Screenshot: Google government search results for “district court decisions”]

Let me make three observations:

Read more

Google and Robots aka Computational Intelligence

September 5, 2008

Ed Cone, CIO Insight, posted a short article that had big implications. You can read his “The Cloud, the Haptic Web and Robotic Telepresence” here. Mr. Cone wrangled an interview with Vint Cerf, a Googler, in fact a super Googler. For me, the most important comment in this interview was:

I expect to see much more interesting interactions, including the possibility of haptic interactions – touch. Not just touch screens, but the ability to remotely interact with things. Little robots, for example, that are instantiations of you, and are remotely operated, giving you what is called telepresence. It’s a step well beyond the kind of video telepresence we are accustomed to seeing today.

I find the idea quite suggestive. In my analyses of Google patent documents, I noticed a number of references to agents, intelligent processes, and predictive methods. Is Mr. Cerf offering us his personal view, or is he hinting at Google’s increasing activity in computational intelligence and smart systems? Let me know your thoughts, humans.

Stephen Arnold, September 5, 2008

TinEye: Image Search

September 5, 2008

A happy quack to the reader who tipped me about TinEye, a search system that purports to do for images what the GOOG did for text. The story about TinEye that I saw appeared in the UK computer news service PCPro.co.uk. The story “Visual Search Engine Is Photographer’s Best Friend” is here. The visual search engine was developed by Idée, based in Toronto. The company says:

TinEye is the first image search engine on the web to use image identification technology. Given an image to search for, TinEye tells you where and how that image appears all over the web—even if it has been modified.

The image index contains about one billion images. Search options include uploading an image for the system to pattern match, submitting an image URL, or using a plug-in for Firefox or Internet Explorer.

Search results are displayed graphically. You can explore the images with a mouse click. One interface appears below:

[Screenshot: TinEye results interface]

The technology powering the service is Espion. I couldn’t locate a public demonstration of the service. You can request a demonstration of the system here. Toronto is becoming a hotbed of search activity. Arikus and Sprylogics both operate there. OpenText has an office. Coveo is present. I will add this outfit to my list of Canadian vendors.
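Idée does not disclose how Espion works. Purely as illustration, here is a sketch of one well-known technique for spotting modified copies of an image: an average-hash fingerprint compared by Hamming distance. It assumes the Pillow imaging library is installed and is not Idée’s technology.

```python
from PIL import Image  # assumes the Pillow imaging library

def average_hash(path: str, size: int = 8) -> int:
    """Shrink to an 8x8 grayscale thumbnail; set one bit per above-mean pixel."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits

def hamming(a: int, b: int) -> int:
    """Count differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Resized, recompressed, or lightly edited copies of the same picture tend to
# produce fingerprints only a few bits apart; unrelated images do not.
# Hypothetical file names for illustration:
# distance = hamming(average_hash("original.jpg"), average_hash("copy.jpg"))
```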

Stephen Arnold, September 5, 2008
