MarkLogic 4.0: A Next-Generation Content System

October 7, 2008

Navigate to KDnuggets here, and you can see a lineup of some of the content processing systems available as of October 2, 2008. The list is useful, but it is not complete. There are more than 350 vendors in my files, and each asserts that it has a must-have content system. Most of these systems suffer from one or more drawbacks; for example, scaling is problematic or just plain expensive, repurposing the information is difficult, or modifying the system requires lots of fiddling.

MarkLogic 4.0 addresses a number of these common shortcomings in text processing and XML content manipulation. The company “accelerates the creation of content applications.” With the most recent release of its flagship server product, MarkLogic offers a content platform, not a content utility. Think of most content processing companies as tug boats. MarkLogic 4.0 is an ocean going vessel with speed, power, and range. When I spoke with MarkLogic’s engineers, I learned that the ideas for enhancements to MarkLogic 3.2, the previous release, originated with MarkLogic users. One engineer said, “Our licensees have discovered new uses for the system. We have integrated into the code base functions and operations that our customers told us they need to get the most value from their information. Version 4.0 is a customer driven upgrade. We just did the coding for them.”


Most text processing systems, including XML databases, are useful but limited in power and scope. The MarkLogic 4.0 system is an ocean going vessel among harbor bound craft.

You can learn quite a bit about the functionality of MarkLogic in a Dr. Dobb’s interview with Dave Kellogg, CEO of this Sequoia-backed firm. The interview is here.

The product is an XML server, and it offers search, analytics, and jazzy features such as geospatial querying. For example, I can ask a MarkLogic system for information about a specific topic within a 100 mile radius of a particular city. But the core of MarkLogic 4.0 is an XML database. When textual information or data are stored in MarkLogic 4.0, slicing, dicing, reporting, and reshaping information provide solutions, not results lists.
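MarkLogic resolves this kind of radius query inside its XQuery engine; the sketch below is only a language-neutral illustration of the underlying idea in Python, with function and field names of my own invention, not MarkLogic’s API:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MILES = 3959.0

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

def within_radius(docs, center, radius_miles):
    """Keep documents whose stored coordinates fall inside the circle."""
    return [d for d in docs
            if haversine_miles(center[0], center[1], d["lat"], d["lon"]) <= radius_miles]

# Hypothetical documents tagged with coordinates at index time
docs = [
    {"title": "Plant opening", "lat": 40.44, "lon": -79.99},   # Pittsburgh
    {"title": "Trade show",    "lat": 34.05, "lon": -118.24},  # Los Angeles
]
near_pittsburgh = within_radius(docs, center=(40.44, -79.99), radius_miles=100)
```

A server resolves such queries against an index rather than scanning every document, which is what makes the feature practical at scale.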

According to Andy Feit, vice president, MarkLogic is “a mix of native XML handling, full-text search engines, and state-of-the-art DBMS features like time-based queries, large-scale alerting, and large-scale clustering.” The new release adds important functionality, including:

  • Geospatial support for common geospatial markup standards plus an ability to display data on polygons such as state boundaries or a sales person’s region. The outputs, or geospatial mashups, are hot linked to make drill down a one-click operation


  • Push operations such as alerts sent to a user’s mobile phone, or triggers that fire when a content change occurs and, in turn, launch a separate application. The idea is to automate content and information operations in near real time, not leave it up to the system user to run a query and find the important new information.
  • Embedded entity enrichment functionality including support for Chinese, Russian and other languages
  • Improved support for third party enterprise entity extraction engines or specialized systems. For example, the new version ships with direct support for TEMIS’s health and medical processing, financial services, and pharmaceutical content processing system. MarkLogic calls its approach “an open framework”
  • Mobile device support. A licensee can extract data from MarkLogic and the built in operations will format those data for the user’s device. Location services become more fluid and require less developer time to implement.
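The push operations bullet above boils down to an observer pattern: the content store, not the user, initiates work when matching content arrives. Here is a toy sketch of that idea in Python; all names are mine, and this is not MarkLogic’s actual trigger API:

```python
class ContentStore:
    """Toy document store that notifies registered alert handlers on change."""

    def __init__(self):
        self.docs = {}
        self.handlers = []  # list of (predicate, callback) pairs

    def register_alert(self, predicate, callback):
        self.handlers.append((predicate, callback))

    def insert(self, doc_id, text):
        self.docs[doc_id] = text
        # Push model: fire matching alerts immediately, no user query needed
        for predicate, callback in self.handlers:
            if predicate(text):
                callback(doc_id, text)

alerts = []
store = ContentStore()
store.register_alert(lambda text: "recall" in text.lower(),
                     lambda doc_id, text: alerts.append(doc_id))
store.insert("n1", "Routine quarterly report")
store.insert("n2", "Product RECALL announced in three states")
```

In a server-side implementation the predicate would be a stored query and the callback could launch a separate application, but the control flow is the same.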

The new release of MarkLogic manipulates XML quickly. In addition to performance enhancements to the underlying XML data management system, MarkLogic supports the XQuery 1.0 standard. Users of earlier versions of MarkLogic Server can continue to use those systems alongside Version 4.0. According to Mr. Feit, “Some vendors require a lock step upgrade when a new release becomes available. At MarkLogic, we make it possible for a licensee to upgrade to a new version incrementally. No recoding is required. Version 4, for example, supports earlier versions’ query language and scripts.”

Read more

Powerset’s Approach to Search

October 6, 2008

Powerset was acquired by Microsoft for about $100 million in June 2008. I haven’t paid too much attention to what Microsoft has done or is doing with the Powerset semantic, natural language, latent semantic indexing, et al. system it acquired. A reader sent me a link to Jon Udell’s Web log interview that focuses on Powerset. If you want to know more about how Microsoft will leverage the aging Xerox PARC technology, you will want to click here for an introduction to the Perspectives interview conducted on September 30, 2008, with Scott Prevost. You will need to install Silverlight, or you can read the interview transcript here.

I can’t summarize the lengthy interview. For me, three points were of particular interest:

  1. The $100 million bought Powerset, but Microsoft then had to license the Xerox PARC technology. You can get some “inxight” into the functions of the technology by exploring SAP/Business Objects’ information here.
  2. The Powerset technology can be used with both structured and unstructured information.
  3. Microsoft will be doing more work to deliver “instant answers”.

A happy quack to the reader who sent me this link, and two quacks for Mr. Udell for getting some useful information from Scott Prevost. I am curious about the roles of Barney Pell (Powerset founder) and Ron Kaplan (Powerset CTO and former Xerox Parc wizard) in the new organization. If anyone can shed light on this, you too will warrant a happy quack.

Stephen Arnold, October 6, 2008

Exalead’s High Performance Platform: CloudView

October 5, 2008

It’s no secret. When I profiled Exalead in one of the first three editions of Enterprise Search Report that I wrote, I likened the company’s plumbing to Google’s. The DNA of AltaVista.com influenced both Google and Exalead. For most 20-somethings, AltaVista.com was one of a long line of pre-Google flops. That view, like prognostications about Web 3.0, is not exactly on target.

The AltaVista.com search system was a demonstration of several interesting technologies developed by Digital Equipment Corporation’s engineers over many years. First, there was the Alpha processor that ran hotter than the blood of a snorting bull in Pamplona. Second, there was the nifty manipulation of memory. In fact, that memory manipulation allowed Oracle performance in the system I played with to zip right along in the mid 1990s, as I recall. And the DEC engineers were able to index the Internet, with its latency and flawed HTML, so that a query was processed and a results list displayed quickly even on my dial-up modem in 1996. I even have a copy of AltaVista desktop search, one of the first of the scaled-down search systems intended to make files in hierarchical systems findable. On my bookshelf is a copy of Eric and Deborah Ray’s AltaVista Search Revolution. Louis Monier wrote the foreword. He later worked at Google, and, what few people know, is that Mr. Monier lured the founder of Exalead to work on the AltaVista.com project. Like I said, the DNA of AltaVista influenced Google and Exalead. After DEC was acquired by Compaq in 1998, and Hewlett Packard later acquired Compaq, some AltaVista engineers were not happy campers. In the fury of HP’s efforts to become really big, tiny AltaVista.com was an orphan and an unwanted annoyance clamoring for hardware, money, engineering, and a business model.

François Bourdoncle–unlike Louis Monier, Jeff Dean, Sanjay Ghemawat, and Simon Tong, among others–did not join Google. In 2000, he set up Exalead to build a next-generation information access and content processing system. What I find interesting is that just as the trajectory of Google in Web search was affected by the AltaVista.com “gravity,” Exalead’s trajectory in content processing was also touched by the AltaVista.com experiment.


A result list from Exalead’s Web search system. Try it here.

When M. Bourdoncle founded Exalead, he wanted to resolve some of AltaVista’s known weaknesses. The heat issues associated with the DEC Alpha chips were one problem. Another was rapid scaling using commodity hardware, not hand-crafted components that take months to obtain.

Exalead now has, according to the company’s Web site, more than 170 licensees. Earlier this week (October 1, 2008), Exalead announced CloudView, a new version of the company’s platform with new software features.

Paula Hane, Information Today, provided this rundown of the new Exalead features:

  • Unlimited scalability and high performance
  • Business-level tuning and management of the search experience
  • Streamlined administration UI
  • Full traceability within the product
  • WYSIWYG configuration of indexing and search workflows
  • Advanced configuration management system (with built-in version control)
  • Improvements in the relevancy model
  • Provision for additional connectors with simple and advanced APIs for third-party implementations

You can read her “Exalead Offers a Cloud(y) View of Information Access” here. The article provides substantive, useful information. For example, Ms. Hane reports:

One large [Exalead] customer in the U.K. can’t say enough good things about the choice of Exalead—its search solution was up and running in just 3 months. “After performing an extensive three-month technical evaluation of the major enterprise search software vendors we found that Exalead had the best technology, vision and ability to fulfill our demanding requirements,” says Peter Brooks-Johnson, product director of Rightmove, a fast-growing U.K. real estate Web site. “Not only does Exalead require minimal hardware to work effectively, but Exalead has a strong, accessible support team and a culture that takes pride in its customer implementations.”

(Note: A happy quack to Ms. Hane, whom I am quoting shamelessly in this Web log post.)

Phil Muncaster’s “Exalead Claims Enterprise Search Boost” here does a good job of explaining what’s coming from this Paris-based information access company. For me the most significant point in the write up was this passage:

The new line features a streamlined user interface, improved relevancy and the ability to extend business intelligence applications to textual search…

In my investigation of search company technology, I learned that Exalead’s ability to scale is comparable to Google’s. As Mr. Muncaster noted, the forthcoming version of the Exalead software–called CloudView–will put Exalead squarely in the business intelligence sector of the content processing market.

You can get more information about Exalead here. A fact sheet is also available here. Exalead’s Web index is available at www.exalead.com.

I have to wrangle a trip to Paris and learn more about Exalead. I hear the food is okay in Paris. The French have a strong tradition in math as well. I remember such trois étoiles innovators as Descartes, Mersenne, Poincaré, and Poisson. In my opinion, Microsoft should have acquired Exalead, not Fast Search & Transfer. Exalead is a next generation system; it scales; and it is easily “snapped in” to enterprise environments, including those dependent on SharePoint. I think Exalead is a company I want to watch more closely.

Stephen Arnold, October 5, 2008

The Goose Quacks: Arnold Endnote at Enterprise Search Summit

October 4, 2008

Editor’s Note: This is a file with a number of screen shots. If you are on a slow connection, skip this document.

Once again I was batting last. I arrived from Europe the day before my talk, and I wasn’t sure what time it was or what day it was. In short, the addled goose was more off kilter than I had been in the Netherlands for my keynote at the Hartmann Utrecht conference and my meetings in Paris squished around the Utrecht gig.

I poked my head into about half of the sessions. I heard about managing search, taxonomies, business intelligence, and product pitches disguised as analyses. I’m going to be 65; I was tired; and I had heard similar talks a few days earlier in Europe. The challenges facing those involved with search are reaching a boiling point.

After dipping into the presentations, including the remarkable Ahead in the Clouds talk by Dr. Werner Vogels, top technical gun at Amazon, and some business process management razzle dazzle, I went back to the drawing board for my talk. I had just reviewed usage data that revealed that Google’s lead in Web search was nosing towards 70 percent of the search traffic. I also had some earlier cuts at the traffic data for the Top 50 Web sites. In the two hours before my talk, I fiddled with these data and produced an interesting graph of the Web usage. I did not use it in my talk, sticking with my big images snagged from Flickr. I don’t put many words on PowerPoint slides. In fact, I use them because conference organizers want a “paper”. I just send them the PowerPoint deck and give my talk using a note card which I hold in my hand or put on the podium in front of me. I hate PowerPoints.

Here’s the chart I made to see how the GOOG was doing relative to Microsoft and Yahoo.

Source: http://blogs.zdnet.com/ITFacts/

The top six sites are where the action is. The other 44 sites are in the “long tail”. In this case, the sites outside the top six have few options for getting traffic. The 44 sites accounted in August 2008 for a big chunk of the calculated traffic, but no single one of them is likely to make it into the top six quickly. Google sits on top of the pile and seems to be increasing its traffic each month. Google monetizes its traffic reasonably well, so it has generated $18 billion or so in the last 12 months.

In the enterprise search arena, I have only “off the record” sources. These ghostly people tell me that Google has:

  • Shipped 24,600 Google Search Appliances. For comparison, Fast Search & Transfer, prior to its purchase by Microsoft, had somewhere in the neighborhood of 2,500 enterprise search platform licensees. Now, of course, Fast Search has access to the 100 million happy SharePoint customers. Who knows what the Fast Search customer count is now? Not me.
  • Become the standard for mapping in numerous government agencies, including those who don’t have signs on their buildings
  • Been signing up as many as 3,000 Google Docs users per day, excluding the 1.5 million school children who will be using Google services in New South Wales, Australia.

I debated about how to spin these data. I decided to declare, “Google has won the search battle in 2008 and probably in 2009.” Not surprisingly, the audience was disturbed by my assertion. Remember, I did not parade these data. I used pictures like this one to make my point. This illustration shows a frustrated enterprise search customer setting fire to the vendor’s software disks, documentation, and one surly consultant:

How did I build up to the conclusion that Google has won the 2008-2009 search season? Here are the main points and some of the illustrations I used in my talk.

Read more

Cognos 8: Blurring Business Intelligence and Search

October 4, 2008

The death of enterprise search and the wobblies pulling down content management systems (CMS) are not well understood by licensees–yet. In the months ahead, the growing financial challenges in North America and Western Europe will take a toll on spending for information technology. The strong interest (based on my analysis of the clicks on the articles on this Web site) suggests that some folks are thinking hard about the utility of open source search systems and lower-cost alternatives to the seven figure price tags on some of the high profile search systems. I can’t mention these firms by name. My attorney is no fun at all. You can identify these vendors by going to almost any Web search system and keying the phrase “enterprise search” or “information access”. You can figure out the rest of the information from these results pages.

IBM baffles me. The company offers more information products and services than any other firm I track. Each year I try to sort out the product and service names. This year I noticed this information buried deep in one of the news stories about the new version of Cognos 8. My source is here.


My hunch is that IBM is creating a new map for business intelligence. On that map, IBM will point out the big X where the real high value payoff may be found. Here’s the pertinent passage from the IBM Cognos news release:

IBM’s recent CEO and CIO surveys have found unstructured corporate information such as user files, customer comments, medical images, Web and rich media content to be growing at 63%. The explosive growth of this type of business information has pushed the convergence of the BI and Search categories. It has created demand for new BI search capabilities to provide quick and easy access to both ranked and relevant BI content and unstructured information. Newly updated, IBM Cognos 8 Go! Search v4 lets any business user extend the decision-making capabilities of IBM Cognos 8 BI by securely accessing and dynamically creating BI content using simple key-word search criteria. The software works with popular enterprise search applications such as IBM OmniFind Enterprise Edition, Google, Yahoo and Autonomy so users can see structured, trusted BI content and unstructured data such as Word documents and PDF’s in the same view within a familiar interface.

Users can search all fully-indexed metadata as well as titles and descriptions within a report. Search-assisted authoring and exploration gives them options to refine queries or analyze data cubes based on search terms. These capabilities speed access to the most relevant business information regardless of naming similarities between reports, helps business users quickly refine queries as required and frees IT from constantly re-creating commonly used reports. This leaves IT with more time for strategic business initiatives.

The software is completely integrated with the web-based administration and security parameters set by IT administrators for IBM Cognos 8 BI. This integration provides a centralized, efficient approach to administration and security and effectively addresses two common areas of concern for resource-constrained IT departments, who want to provide more autonomy to business users, but need a single administration point and assurance that corporate authentication policies will be maintained.
‘These new enhancements to our Go! Portfolio provide business-driven performance information to help each area of the organization strategically manage the information that is most pertinent to them,’ said Leah MacMillan, vice president, product marketing, Cognos, an IBM Company. ‘Both the business and IT gain more autonomy whether employees are in the office searching, monitoring and analyzing business outcomes or on the road looking for new business updates or geographically relevant information.’ The IBM Cognos 8 Go! Portfolio of software is a key component of IBM’s Information Agenda, a new approach consisting of industry-specific software and consulting services geared to helping customers use information as a strategic asset across their businesses. [Emphasis added]

Let me deconstruct this passage using my addled goose methods.

Read more

Intel’s Interest in Medical Terminology Translation

October 4, 2008

Intel continues to be a slippery fish when it comes to search and content processing. The ill-fated Convera deal burned through millions in the early 2000s. Earlier this year, Intel pumped cash into Endeca, one of the two high-profile enterprise search vendors known for ecommerce and information access systems. (The other vendor is Autonomy. Fast Search & Transfer seems to be shifting from a vendor to an R&D role, but its trajectory remains unclear to me.)

Intel has one engineer thinking about language. The posting on an Intel Software Network Web log, “Designing for Gray Scale: Under the Hood of Medical Terminology Translation,” is suggestive. The author is Joshua Painter, who identifies himself with Intel. You can read this post here. Translation of scientific, technical, and medical terminology is somewhat easier than translating general business writing. The task is still difficult, particularly when a large pharmaceutical company wants to monitor references to a drug’s formal and casual names in English and non-English document sets.

Mr. Painter’s write up concerns standards; specifically, “data standards in enabling interoperability in healthcare.” For me the interesting passage in this write up was:

An architecture for Health Information Exchange must accommodate choice and dealing with change – it must be designed for grayscale. This includes choice of medical vocabularies, messaging standards, and other terminology interchange considerations. In my last post I introduced the notion of a Common Terminology Services to deliver a set of capabilities in this space. In this post, I will discuss a technical architecture for enabling this.

The word grayscale, I think, means fuzziness. Intel makes these tantalizing pieces of information available, and I continue to watch for them. My hunch is that Intel wants to put some content centric operations in silicon. Imagine: Endeca on a multi-core chip. So far this is speculation, but it is clear that juiced hardware can deliver some impressive content processing performance boosts. Exegy’s appliance demonstrates the value of this hardware angle.
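Stripped to essentials, a terminology service of the sort Mr. Painter describes resolves a formal or casual term to a code in a target vocabulary and degrades gracefully on unknowns (the grayscale part). Here is a minimal sketch in Python; the vocabularies, codes, and synonym table below are invented for illustration and do not come from any real standard:

```python
# Hypothetical crosswalk between two made-up vocabularies, VOCAB_A and VOCAB_B
CROSSWALK = {
    ("VOCAB_A", "A-1001"): ("VOCAB_B", "B-77"),  # formal clinical term
    ("VOCAB_A", "A-1002"): ("VOCAB_B", "B-78"),  # casual synonym of the same concept
}

# Maps surface terms (formal and casual) to their home vocabulary and code
SYNONYMS = {
    "myocardial infarction": ("VOCAB_A", "A-1001"),
    "heart attack": ("VOCAB_A", "A-1002"),
}

def translate(term, target_vocab):
    """Resolve a term to its code in the target vocabulary, or None if unknown."""
    source = SYNONYMS.get(term.lower())
    if source is None:
        return None  # grayscale: unknown terms must be handled, not crash
    if source[0] == target_vocab:
        return source
    return CROSSWALK.get(source)

code = translate("Heart Attack", "VOCAB_B")
```

A real Common Terminology Service adds versioned vocabularies, hierarchy, and change management on top of this lookup, which is where the architectural choices Mr. Painter discusses come in.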

Stephen Arnold, October 4, 2008

Endeca Pursues Publishers

October 3, 2008

MarkLogic has been making headway in the world of publishing. I know that I have predicted the demise of traditional newspaper, magazine, and book publishers, but there is life in a number of publishing sectors. Publishers–spurred by amateur journalists like this addled goose and fast changing Web companies like Google–have been increasingly open to new technology. Nstein, a former content processing vendor, has worked hard to reposition some of its technology specifically for the publishing industry. Now Endeca is hopping on the bandwagon. One of the early entrants from the search and content processing sector was Fast Search & Transfer. The company acquired a company in Utah and created a remarkable PowerPoint presentation showing Fast ESP (enterprise search platform) as the foundation of a next-generation newspaper. I’m not sure what happened to that initiative since Microsoft gobbled up Fast Search and turned Oslo’s engineers into the heart of Redmond’s search innovation effort.

Endeca, therefore, made a well-considered move years ago to tailor its technology to the needs of publishers. I heard that the company has more than 150 publishing clients. You can read about the services in the company’s news release here or a boiled down version from Customer Interaction Solutions here. According to Endeca’s Steve Papa:

Media and publishing represents one of Endeca’s largest and fastest growing areas of focus. Web and mobile platforms, once seen as a required complement to traditional print and broadcast mediums, have rapidly become the primary area for new product creation and revenue growth. We’re working closely with our most innovative clients and partners to develop next-generation offerings that deliver a differentiated cross-medium experience, simplify the re-use of content across media platforms, and create new opportunities to monetize text, audio and video assets.

The question becomes, “With more search and content processing vendors chasing publishing companies, will the vendors be able to deliver enough value to warrant the high license fees some vendors charge?” What may happen is that price competition may force some of the smaller, less well known vendors to park on the side of the information highway hoping another ride comes along. “Value”, as I use the term, means that these potent systems scale economically, deliver good performance, and accommodate change without requiring a Roman legion of programmers. In my experience, publishers often lack a good understanding of the problems their own content creates for them. Publishers often don’t want search; publishers want the ability to create new information products from existing content. The ideal system delivers what publishers call “content repurposing” without requiring expensive, vain, and erratic human editors. Publishers would prefer life without equally expensive, vain, and erratic authors if possible. Publishing looks like an ideal market, but in some ways it is a difficult sector in which to gain traction and make sales. Sci-tech publishers want to “own” a solution so competitors can’t enjoy the benefits of a level playing field.

You can learn more about Endeca here.

Stephen Arnold, October 3, 2008

Attensity and Tremendous Momentum

October 3, 2008

With the economy in the US stumbling along, I found Attensity’s September 30, 2008, “Momentum” news release intriguing. The information issued by the analytics company is here. I had to struggle to decipher some of the jargon. For example, First Person Intelligence. This is a product name with a trademark. The idea is that email or phone calls from a customer are analyzed by Attensity. The resulting insights yield information about a particular customer; hence, First Person Intelligence. You can see FPI in action by clicking here. The company won an award called the Stevie. If you are curious or you want to compete for the 2009 award, click here. I think I know what text analytics is, so I jumped to VoC. The acronym means “voice of the customer.” I think the notion is that a company pays attention to emails, call center notes, and survey data. I’m not certain if VoC is a subset of FPI or if VoC is the broader concept and FPI is a subset of VoC.

The core of the news release is that Attensity has landed some major accounts. Customer names are tough to come by, so you may want to note these organizations who have licensed the Attensity technology but hopefully not the jargon:

  • JetBlue
  • Royal Bank of Canada
  • Travelocity

For me, the most useful part of the company-written article was this passage:

The text analytics market is rapidly moving out of the early adopter stage. Industry analyst firm Hurwitz & Associates estimates an annual growth rate for this market at 30 to 50 percent. According to a survey conducted last year by the firm, the largest growth area is in customer care-related applications. In fact, over 70 percent of the companies surveyed that had deployed, or were considering deploying the technology, cited customer care as a key application area.

The growth rate does not match my calculation, which pegs growth at a more leisurely 10 to 18 percent on an annual basis. The Hurwitz organization is much larger than this single goose operation. Endangered species like this addled goose are more conservative, and their estimates in a grim financial market are less optimistic than other consultants’ and analysts’.

In my Beyond Search study for the Gilbane Group, published in April 2008, I gave Attensity high marks. Its deep extraction technology yields useful metadata. Since my early 2008 analysis, Attensity has worked hard to productize its system. Call centers are a market segment in need of help. Most companies want to contain support costs.

In my opinion, Attensity’s technology is better than its explanation of its products and those products’ names. I wonder if the addition of marketers to a technology-centric company is a benefit or a drawback. Thoughts?

Stephen Arnold, October 3, 2008

Silobreaker: Mary Ellen Bates’ Opinion Is on Target

September 30, 2008

Mary Ellen Bates is one sharp information professional. She moved from Washington, DC, to the sunny clime in Colorado. The shift from the nation’s capital (the US crime capital) to the land of the Prairie Lark Finch has boosted her acumen. Like me, she finds much goodness in the Silobreaker.com service. (You can read an interview with one of the founders of Silobreaker.com here.) Writing in the September number of Red Orbit here she said:

What Silobreaker does particularly well is provide you with visual displays of information, which enable you to spot trends or relationships that might not be initially obvious. Say, for example, you want to find out about transgenic research. Start with what Silobreaker calls the “360[degrees] search,” which looks across its indexes, including fields for entities (people, companies, locations, organizations, industries, and keywords), news stories, YouTube videos, blog postings, and articles.

If you want to try Silobreaker yourself, click here. With Ms. Bates in the wilds of Colorado and me in a hollow in rural Kentucky, I am gratified that news about next-generation information services reaches us equally. A happy quack to Silobreaker and Ms. Bates.

Stephen Arnold, September 30, 2008

Dow Jones and Automatic Taxonomy Generation

September 30, 2008

An eager beaver reader (I only have two or three) sent me a link to “Taxonomies for Human Vs Auto-Indexing.” The author of the Synaptica Central write up is Wendy Lim. She is summarizing or reproducing information attributed to Heather Hedden. From a bibliographic angle, I think a tad more work could be done to make clear who was writing what, where, and when. But that’s an old, failed database goose quacking about the brilliant work done by “experts” decades younger than I. Quack. Quack.

You can read the September 26, 2008, write up here. The article is about a Taxonomy Bootcamp. After a bit of sleuthing, I discovered that this is an add on to some Information Today trade shows. The bootcamp, as I understand it, is an intellectual Camp Lejeune, except that the attendees skip the push ups, the 5 am wake up calls, and the 20 mile runs. Over a period of two or three days, taxonomy recruits emerge battle ready, honed to deal with the intellectual rigors of creating taxonomies.


A real taxonomy. Source: www.nnf.org.na

The word “taxonomy” is more popular than “enterprise search,” and for good reason. Enterprise search has emerged from organizations with a bold 4F stamped on its fitness report. After hours, maybe months, of work and some hefty bills to pay, enterprise search customers are looking for a way to kill the enterprise search enemy. That’s where a taxonomy comes in. I’m no expert in taxonomies. I know I was involved in creating taxonomies for some once-hot commercial databases like ABI / INFORM, Business Dateline, General Business File, Health Reference Center, and the 1993 Web directory Point (Top 5% of the Internet). What those experiences taught me was that I don’t know too much about taxonomies, or classification systems in general for that matter. I keep in touch with people who do know; for example, Marje Hlava at Access Innovations, Barbara Quint (Searcher Magazine), Marydee Ojala (Online Magazine), Ulla de Stricker (De Stricker & Associates), and other specialists. I get nervous when a 20- or 30-something explains that taxonomies are no big deal, or that a business process can crack a taxonomy problem, or that a certain vendor’s software can auto-magically create a taxonomy.
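To make my nervousness concrete, here is roughly what the “auto-magic” taxonomy pitch amounts to in its simplest form: count distinctive terms across a corpus and promote the most frequent ones to categories. This is my own naive sketch, not any vendor’s algorithm:

```python
from collections import Counter

STOPWORDS = {"the", "a", "of", "and", "in", "to", "is", "for"}

def naive_categories(documents, top_n=3):
    """Propose 'taxonomy' nodes by raw term frequency across a corpus."""
    counts = Counter()
    for doc in documents:
        for word in doc.lower().split():
            word = word.strip(".,;:")
            if word and word not in STOPWORDS:
                counts[word] += 1
    return [term for term, _ in counts.most_common(top_n)]

docs = [
    "Mergers and acquisitions in the banking sector",
    "Banking regulation and acquisitions policy",
    "Retail banking trends",
]
cats = naive_categories(docs)
```

Frequency surfaces “banking” readily enough, but nothing in the counts distinguishes broader terms from narrower ones, which is exactly the judgment a trained taxonomist supplies.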


A Synaptica Central tag cloud.

In my experience, the truth is not to be found in any one solution. In fact, the reality of taxonomies is that the concept has gained traction because of fundamental errors in planning and deploying information access systems. I don’t think a taxonomy can retrofit stupid, short-sighted decisions. For that reason, I steer clear of most taxonomy discussions because, after working with these beasts for more than 30 years, I understand their unpredictable behavior.

Read more
