Digital Reasoning

An Interview with Tim Estes

Tim Estes of Digital Reasoning

In a taxi from Baltimore-Washington Airport to a speaking engagement in Washington, DC, a colleague and I were discussing my search blog. We were sharing a taxi with two other people. One of them asked, "Are you the fellow who writes about search and content processing?" I replied, "I was."

The person asking the question introduced himself as a reader and began to tell me about his company's technology. I took his card, did some research, and this interview is one outcome of that encounter.

Digital Reasoning, based in Franklin, Tennessee, has developed a suite of software that adds value to content.

I learned that the company develops technologies that help solve the problem of information overload. The company's tools allow users to read, understand, and make use of vast amounts of data.

Digital Reasoning has patented its technology that, according to the firm's Web site, "deeply, conceptually searches within unstructured data, analyzes it and presents dynamic visual results with minimal human intervention. It reads everything, forgets nothing and gets smarter as you use it."

I followed up with the company's chief executive officer, Tim Estes. The full text of my interview with him appears below.

What is "digital reasoning"?

Digital Reasoning is unique in the market in its ability to bootstrap a model from the data down to the entity level and then start resolving entities and aggregating their connections to give you a much better picture of the data. We are a real summarization technology that is not limited by whatever a priori model or ontology is applied. I think this is where the market is going, but time will tell.

What is your background?

I went to the University of Virginia.

That's interesting. My son attended UVA.

Quite a coincidence.

I’m a philosopher by training. Ironically – when I graduated we had T-shirts that said: “Philosophy – I’m in it for the money.” My background was in semiotics, philosophy of language, and philosophy of mathematics. A principal area of interest was the work of Wittgenstein and Leibniz. I have a passion for finding hidden structure in things and proceed from the assumption that the world is held together by necessary and intrinsic order (thus the Leibniz bias). In founding the company, the idea was that with sufficient introspection of the mathematical and structural invariances that present themselves inside of data, a “model” would emerge from the data that could allow software to execute on imprecise goals using learned contexts.

Were there key influencers that shaped your firm's technical approach?

I credit two primary influences with driving me to start the company. One was a brilliant article written by David Gelernter called “The Second Coming” and the other was an interview that Bill Gates gave in Red Herring in the Spring of 2000. Bottom line – they both pointed forward to a day when all software would learn, and the rest of software would be commoditized into simple infrastructure. Digital Reasoning was really about trying to bridge that gap – and it still is. We saw the most opportunity and the most challenging problems in having systems understand unstructured data in order to bootstrap the context necessary for a new level of software automation – i.e. ambient intelligence in software that could prioritize, summarize, and make a reasonable level of proxy decisions for humans who are overloaded with information. To me, most of the buzzwords in search are just repackaging these core ideas.

Faceted navigation, for instance, is really just prioritization and summarization that draws more out of the user to make up for a system not having sufficient context or understanding of a user’s intention to bring back the right results. It has the ancillary benefit of surfacing connections or facets in the data that probably were not known at the outset – the kind of summarization a list of mentioned entities or a histogram of hits over time gives you.
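That summarization side of faceting is easy to picture in code. Here is a minimal sketch, using made-up search hits and hypothetical field names rather than any vendor's API, that builds the two facets Estes mentions: a count of mentioned entities and a histogram of hits over time.

```python
from collections import Counter
from datetime import date

# Hypothetical search hits: each result carries the entities it mentions
# and a publication date. Illustrative data only, not a real system's output.
hits = [
    {"entities": ["Acme Corp", "J. Smith"], "date": date(2009, 11, 3)},
    {"entities": ["Acme Corp"],             "date": date(2009, 12, 14)},
    {"entities": ["J. Smith", "Globex"],    "date": date(2010, 1, 5)},
]

# Facet 1: which entities appear across the result set, and how often.
entity_facet = Counter(e for hit in hits for e in hit["entities"])

# Facet 2: a histogram of hits over time, bucketed by month.
time_facet = Counter(hit["date"].strftime("%Y-%m") for hit in hits)

print(entity_facet.most_common())  # e.g. [('Acme Corp', 2), ('J. Smith', 2), ('Globex', 1)]
print(sorted(time_facet.items()))  # e.g. [('2009-11', 1), ('2009-12', 1), ('2010-01', 1)]
```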

What was the trigger in your career that made search and retrieval and content processing focal points? Weren't there other, easier opportunities for you to use your technical training and expertise?

Well – Digital Reasoning pretty much is my career. I started it in my third year in school and have been doing it ever since. I can’t think of anything else that would make sense in the industry – I’d probably be teaching if I weren’t running DRSI.

I suppose after 9/11, I could have taken a route to get into the Government and Intelligence space as a Blue Badger. But given my age – just turned 30 last year – I doubt at the time I could have had the impact I wanted. Now – after 8-9 years of working hard problems in this space, I think we are really starting to make a difference.

What type of performance can a licensee expect with your system?

Digital Reasoning’s core product offering is called “Synthesys.” It is designed to take an enterprise from disparate data silos (both structured and unstructured), ingest and understand the data at an entity level (down to the “who, what, and wheres” that are mentioned inside of documents), make it searchable, linkable, and provide back key statistics (BI type functionality). It can work in an online/real-time type fashion given its performance capabilities.

Synthesys is unique because it does a really good job at entity resolution directly from unstructured data. Having the name “Umar Farouk Abdul Mutallab” misspelled somewhere in the data is not a big deal for us – because we create concepts based on the patterns of usage in the data, and that’s pretty hard to hide. It is necessarily true that a word grounds its meaning to the things in the data that share the same pattern of usage. If that weren’t the case, no receiving agent could understand it. We’ve figured out how to reverse engineer that mental process of “grounding” a word. So you can have Abdulmutallab ten different ways and it doesn’t matter. If the evidence links in any statistically significant way, we pull it together.
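To make the idea concrete, here is a toy sketch of usage-based linking: spelling variants are pulled together when surface similarity and overlap in the contexts they appear in clear a threshold. The sample mentions, the equal weighting, and the 0.4 threshold are all illustrative assumptions; this is a generic pattern, not the Synthesys algorithm.

```python
from difflib import SequenceMatcher

# Toy mentions: a surface form plus the set of context terms it co-occurs with.
# Both the data and the thresholds below are illustrative assumptions.
mentions = {
    "Umar Farouk Abdul Mutallab": {"flight 253", "detroit", "nigeria"},
    "Umar Farouk Abdulmutallab":  {"flight 253", "nigeria", "yemen"},
    "U. F. AbdulMutallab":        {"detroit", "yemen"},
    "John Smith":                 {"acme corp", "quarterly report"},
}

def surface_sim(a: str, b: str) -> float:
    """Character-level similarity of the two surface forms."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def context_sim(a: set, b: set) -> float:
    """Jaccard overlap of the contexts the two forms are used in."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Link two mentions when the combined evidence clears a threshold.
names = list(mentions)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        score = 0.5 * surface_sim(a, b) + 0.5 * context_sim(mentions[a], mentions[b])
        if score > 0.4:
            print(f"link: {a!r} <-> {b!r} (score {score:.2f})")
```

Run as-is, the two long-form spellings link strongly, the abbreviated form links more weakly through shared contexts, and the unrelated name does not link at all.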

Synthesys trials can be had at around $50k or so (depending on specifics). Enterprise deals are substantially higher – but that is true of just about everyone in our space. We offer all of the typical high-level features you’d find in players in Unstructured Data Analytics – entity extraction, geotagging, faceted navigation, query suggestion, etc. But few, if any of them, can really resolve entities accurately without a lot of “humans in the loop.”

The system can index ~10 million files on large single systems. We are in testing on a large distributed model for Synthesys with a government customer right now where we will crack 150M files on less than a dozen servers. The new model is proven to be horizontally scalable and implements the first “eventually consistent” model for a player in our space that we are aware of. It is our hope to prove web scale (i.e. billions of documents) before too long.

Most of our throughput is tied to memory and caching. For instance, with four cores, 12 GB of memory, and standard SATA drives, you would probably see ingestion in the hundreds of kilobytes per second up until the single millions of documents, and then degradation as cache hit rates drop and misses become more frequent.

The number of new companies entering the search and content processing "space" is increasing. What's your view on too many hungry mouths and too few chocolate chip cookies?

I think there is a lot of noise in the system. One of the areas that is particularly disappointing right now is the lack of innovation in the eDiscovery area. Most of that market is using technology that got lifecycled out of the Intel/Defense space 5-10 years ago. In enterprise search, I suppose the many mouths will lead to natural Darwinian results.

My only hope is that the new companies offer some real innovation and don’t rehash the same old marketing (“Bring Order to Information Chaos,” etc.) with the same failed approaches (extract, load a DB, search it with more metadata, etc.). I think the sophisticated IT buyer/CIO is pretty tired of being promised more than can be delivered in this space.

Like the old commercial, we are hopefully going to get to a “where’s the beef?” type attitude soon.

Finally, while the academic conferences and contests have been interesting, I think there needs to be a better way to prove that these technologies generalize to a real customer’s data. Everything looks pretty once the data gets well formed and cleaned up. Boy, don’t those Palantir demos look really cool – but what happens when you really hit the junk we call data in real businesses or Federal enterprises? We need to focus on the real data, not the slickest demos. The people in the Intel community especially understand the “bait and switch” of demoing on clean, structured data and then having to face the reality of their data on the inside, where those demos never seem to work against the large amount of noise.

When the market leaders get honest about the challenges of noisy data and start delivering predictable quality over that real data, that’s when we (speaking as a member of the unstructured data analytics market) will get our credibility back.

What are the functions that you want to deliver to your customers?

Well, I think we want all data to be available to users from a content/entity level versus a document level. Documents are containers of facts and ideas. We don’t have time to read 1/10th of what we want to or need to. We need summarization and prioritization feeding visualization. We’d like to see that as common practice.

In the Intel business – why do we read stuff before we start creating charts and graphs of key connections? Because the software is too stupid to do it for us right now in an automated fashion. That needs to change. Our analysts are overworked, our managers have to consume too much at too high a level, and we are drowning in email and Facebook/Twitter feeds. Something has to sit between us and the firehose of content and status updates that are overwhelming us. It’s not just new tools to navigate it and read. It’s really something quite different – show me what it means in a snapshot and let me dig in to whatever looks important and novel. And, do it as fast as Google but from a concept or entity-centric point of view.

That’s what we deliver in our Defense/Intel efforts and it’s what we look to deliver to other contexts and markets as we expand into those this coming year.

Are you able to give me some insight into new features you will be offering your licensees in the next release?

I don't want to go into too much detail. But on the backend side, we have two major efforts going on that we believe will disrupt the market. First, we have a real answer to entity resolution that works at scale. Right now we are integrating it with the ability to apply it to both structured and unstructured data. That’s going to be a real killer. It conceptually integrates the actual entities in enterprise data and does so with minimal a priori modeling and customization (especially compared to the other approaches on the market today).

Next, we are implementing a backend that is very similar to what Amazon has as software infrastructure. It is going to allow horizontal scalability of the underlying storage and processing and allow for multiple datacenters and clouds to synchronize this understanding. This means that Digital Reasoning is positioned to have a real offering for understanding data in the hybrid cloud space.

There's a push to create mash-ups – that is, search results that deliver answers or reports. What's your view of this trend?

I think it’s pretty useful so long as the quality of analytics is good. It’s always tricky when you automate a process that has a 0.8 F-measure (F1) at best on noisy data. You end up getting some very humorous mistakes. But that’s the price of the early stage of disruptive technologies. If we can create supplemental processes (like ensembles that are tuned toward recall paired with others tuned toward precision) we can emulate what has worked well in the medical community in terms of the testing process. I want to credit Ted Senator (formerly at DARPA, now at SAIC) with the above analogy. He used it in a paper a few years ago, and I think it is still one of the better analogies I’ve heard in this space.
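For readers who want the arithmetic behind that figure: F1 is the harmonic mean of precision and recall, so an extractor can look respectable on paper and still be wrong on a meaningful fraction of what it emits. The small, generic sketch below uses made-up entities and extractor outputs (nothing here reflects Digital Reasoning's actual results) to illustrate the recall-tuned versus precision-tuned pairing Estes describes.

```python
def precision_recall_f1(predicted: set, gold: set):
    """Standard precision, recall, and F1 against a gold-standard set."""
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Made-up gold entities and two hypothetical extractor passes.
gold = {"acme corp", "j. smith", "globex", "initech", "umbrella"}
recall_tuned    = {"acme corp", "j. smith", "globex", "initech", "umbrella", "noise a", "noise b"}
precision_tuned = {"acme corp", "j. smith", "globex"}

print(precision_recall_f1(recall_tuned, gold))     # catches everything, admits noise: F1 around 0.83
print(precision_recall_f1(precision_tuned, gold))  # only sure things, misses some: F1 = 0.75

# Pairing the two (treating precision-tuned hits as "confirmed" and the remaining
# recall-tuned hits as "needs review") mirrors the screening-then-confirmation
# pattern used in medical testing that the analogy refers to.
```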

What sets your technology apart from some other vendors' systems?

Our solution is generally complementary to the Oracle/MySQL/MSSQL solutions we find in the government and enterprise. It can be stood up on its own – this is the default – but we don’t have issues integrating into the broader enterprise with those other systems.

I think I’ve covered the differentiation point already – but really the ability to find entities, resolve them, and then retain their connections to other entities and all related data is a pretty big differentiation. We also believe that scale and speed are differentiators for us. While others may index for search faster, few if any can match our depth of understanding of the data at scale or with the speed we have.

Our approach is fundamentally different from 90% or more of the market, because we have a real bias against trying to leverage a priori models against the data (i.e. exhaustive extraction or ontology type models). Digital Reasoning tells you what you didn’t already know and also sorts out data easily so you can find what you expect to find if it’s there – we deal with both the knowns and the unknowns elegantly. That’s how we are different. We’re particularly good at enabling the discovery of the non-obvious and the unknown from noisy unstructured data.

Semantic systems have been getting quite a bit of coverage, yet the Powerset technology and other semantic players like Hakia.com have been slow out of the gate. What's your view on semantics and natural language processing? Are these technologies ready for prime time?

It’s getting there. I have a fundamental disagreement with the Extract, Transform, Load (ETL) for text type approach, however. It tends to work well in fixed/stable domains and poorly in domains with evolving semantics and noisy data. I think that is exactly what we see right now in terms of the limitations. I think this approach will ultimately succumb to approaches that can bootstrap from the data (this is a variation of the Peter Norvig camp on the problem). We are still waiting for the iPod of learning algorithms that works at scale to really show how futile all of this a priori modeling investment really is.

I also think that most of these guys probably were optimistic about their ability to scale their analytics to web scale and got caught off guard with how hard it is to go from tens or hundreds of millions of pages to working at tens of billions of pages. It’s just a hard problem. Google succeeds because six or seven relevant hits out of ten on the first page keep its business model rolling. Trying to get nine out of ten on much more semantically narrow domains is at least an order of magnitude harder a problem, if not two.

A number of vendors have shown me very fancy interfaces. The interfaces take center stage and the information within the interface gets pushed to the background. Are we entering an era of eye candy instead of results that are relevant to the user?

We are always taken in by the demo. It’s pretty typical. People and enterprises want an information savior – and the demo is like a “miracle proof” even if it is really more Wizard of Oz than anything else. I think that the real work in this space is not being done by the demo artists. It’s being done by those that can make sense of the data while asking less and less of the user.

I think that “Intelligence Augmentation” – something that Palantir was blogging about recently – is very much a cop out. It basically states we still want the human to have to do all of this work but we are going to make it a lot less onerous on them. This doesn’t solve the problem at all. Sure – most of the time investment in applying machine learning algorithms is data normalization – but that’s the point. If we had algorithms that were smart enough to create a model from mathematical order in the data that meant something to a human, we wouldn’t have to ETL it into a specific schema. Data normalization is a machine learning problem. I think that is where they miss the boat. The Intelligence Augmentation approach (left alone) creates false assurance that the user is making progress when, actually, key items are being missed due to the fact the software has no real, evolving understanding of the data. We need computers to see the whole picture of what’s going on in millions or billions of messages because there is no way a human can. No visualization can roll up that many nodes to make it tractable for a human to understand. Any visualization without the capacity to understand the underlying data in sophisticated ways is just doing a disservice to the mission.

Like all complex problems, we need substantial automation to grow productivity. To us, understanding data is a lot like automated landing systems in aircraft. At some point in the not-too-distant past it simply became too much for human beings to manage all of the complex subsystems in a commercial jet aircraft. Now pilots only manage those items in emergencies and focus on the major judgment-oriented tasks in flight (direction, altitude, etc.).

We need automated awareness systems across most information-centric activities. That’s the real meat. Visualization is a means to present this underlying capacity for maximum utility. It is not the utility itself.

What text processing functions do you offer?

Currently we offer indexing, entity extraction, geotagging, search, faceted search, relationship extraction (basic), and dynamic graph generation from those relationships. Our entity extraction and language processing is being rebuilt into a next generation capability right now. We plan on offering anaphora resolution, in-document co-reference, and deeper extraction in future releases. We are currently English only but plan to pick up other languages. We hope to do that this year (it’s not a technology issue for us), but that depends on competing customer demands. Right now, there is a lot of business supporting English since that is what nearly all of the analysts are using.

Also, our new horizontally scalable backend will be in the next release along with new entity resolution capabilities against structured and unstructured data. Other bells and whistles too – but those are the majors.

What is it that you think people are looking for from content technology?

People are looking for semantic technology to help them read less and understand more. Sounds simple, right? They don’t readily trust the summarization part – so that’s an area that needs a big step up.

A major source of discontent is the upfront cost of building models (the ETL bias) to turn unstructured data into structured data. This is probably the biggest holdback in the enterprise (especially in a tight budgetary environment). They are tired of software that has an even bigger up front deployment and maintenance cost. Given how we solve the problem, we expect to have a compelling story here.

I think the other big piece that is holding back semantic technology is the obsession with search and reactive applications. Enterprises need to start looking at how to use semantic technology more proactively and vendors need to be delivering better solutions here.

What are the hot trends in search for the next 12 to 24 months?

I think faceted navigation is going to become standard, even passé. The trick will be how well this can happen from noisy data. That’s where it will be interesting to compare what Endeca has (which is heavy on up front modeling of your data) to what Nova Spivack is working on over at Radar Networks (probably a much more elegant approach).

I think the wave that is coming, however, is how do we get into proactive applications in semantics and search – i.e. ambient awareness yielding autonomous action by systems where the principle data streams are unstructured. That’s the next big wave. We are working that both in our direct business in Defense/Intel and in new markets. We expect to pursue partnerships with existing enterprise players during the coming year. Beyond that – well we’ll see.

Where can people get more information?

Our Web site has some current information. Blogging has been a little slow recently since we’ve been maxed out, with new work taking time from the likely internal contributors, but we hope to be a little more diligent about that in the coming months. We’ve got some material available on request – we’ve actually got a ton of material, but we like to understand the need first so we can make the best use of both our potential customers’ time and ours.

ArnoldIT Comment

Digital Reasoning has captured the attention of a number of US government agencies, and the firm's profile in the commercial sector is on the upswing. Those with a need to know what's relevant to a particular concept or topic in a large flow of content will find that Digital Reasoning's approach offers an alternative to the older, one-size-fits-all solutions from vendors whose technology dates from the mid-1990s. The company is aggressive and committed to making sure its licensees get full value from its patented technology. More information is available from http://www.digitalreasoning.com.

Stephen E. Arnold, February 2, 2010

       