Attensity in PR Full Court Press

March 2, 2010

Risking the quacking of the addled goose, Attensity sent me a link to its “new” voice-of-the-customer service. I have been tracking Attensity’s shift from deep extraction for content processing to customer support for a while. I posted on the GlobalETM.com site a map of search sectors, and Attensity is wisely focusing on customer support. You can read the “new” information about customer support at the company’s VOC Community Advantage page. The idea is to process content to find out if customers are a company’s pals. Revenues and legal actions can be helpful indicators too.

What interested me was the link to the Attensity blog post. “Leveraging Communities through Analytic Engines” presents an argument that organizations have useful data that can yield insights. I found this passage interesting:

Analytical engines cannot stop at simply producing a report for each community; they have to become a critical part of the platform used by the organizations to interact with and manage their customers. This platform will then integrate the content generated by all channels and all methods the organization uses to communicate, and produce great insights that can be analyzed for different channels and segments, or altogether.  This analysis, and the subsequent insights, yield far more powerful customer profiles and help the organization identify needs and wants faster and better. Alas, the role of analytical engines for communities is not to analyze the community as a stand-alone channel, although there is some value on that as a starting point, but to integrate the valuable data from the communities into the rest of the data the organization collects and produce insights from this superset of feedback.

Now this is an interesting proposition. The lingo sounds a bit like that cranked out by the azure chip crowd, but isn’t that what many search and content processing vendors do now? Wordsmithing.

An “analytical engine” – obviously one like Attensity’s – is an integration service. In my opinion, this elevation of a component of text processing to a much larger and more vital role sounds compelling. The key word for me is “superset”. This notion of taking a component and popping it up a couple of levels is what a number of vendors are pursuing. Search is not finding. Search is a user experience. Metatagging is not indexing. Metatagging is the core function of a content management system.

I understand the need to make sales, and as my GlobalETM.com diagram shows, the effort is leading to marketing plays that focus on positioning search and content processing technologies as higher value solutions. From a marketing point of view, this makes sense. The problem is that most vendors are following this path. What happens is that the technical plumbing does one or two things quite well and then some other things not so well.

Many vendors run into trouble with connectors or performance or the need for new coding to “hook” services together. Setting Attensity aside, how many search and content processing vendors have an architecture that can scale economically, quickly, and efficiently? In my experience, scaling, performance, and flexibility – not the marketing lingo – make the difference. Just my opinion.

Stephen E Arnold, March 2, 2010

No one paid me to write this. I suppose I have to report poverty to the unemployment folks. Ooops. Out of money like some of the search and content processing vendors.

Recommind and Predictive Coding

March 2, 2010

I received a flood of “news” from vendors chasing the legal market. Now law firms have fallen on hard times. One quip making the rounds in Kentucky is that a law degree is as valuable as a degree in Harry Potter studies from Frostburg State. I did not know one could get a degree in Harry Potter, so this may be some cheap jibe at the expense of attorneys.

The real action for legal licensing is in the enterprise. In the lousy financial climate, it seems that lawyering should be done indoors and back at the ranch. Software and services that can chop discovery down to a manageable hunk of work have been selling. I prepared a legal market briefing for a couple of clients last year, and I was surprised at how much churn was underway in the segment. Even storage vendor Seagate poked its nose into the eDiscovery market.

I was delighted to receive a file from a reader that had the title “An Interview with Craig Carpenter of Recommind: A Discussion on Predictive Coding.” My recollection was that Recommind’s Mr. Carpenter, a polymath and attorney, was working on Recommind marketing. He is also a vice president of Recommind and teaches at the University of San Francisco. His focus in his class work is high technology marketing, content management, and digital rights management. Heady stuff.

You can get a copy of this document from JD Supra, whose tag line is “Give Content. Get Noticed.” I had not heard of this service previously.

Several points in the interview struck me as interesting. Let me highlight these and offer some of the ideas flapping around my goose brain.

First, Recommind won an award as the best product in the Knowledge Management Systems category. I think that is a good marketing angle, but I do not know what “knowledge management” means. Mr. Carpenter explained Recommind’s “knowledge management” product this way:

MindServer Search is our flagship enterprise search product. It provides highly accurate and relevant search results through a simple, intuitive interface. It uses proprietary, machine learning technology to automatically create concept models based on the information within the enterprise. That gives it the unique ability to accurately identify and rank relevant information for each user without the need for additional input from the user.  For our legal customers, we also offer a popular Matters and Expertise module for MindServer Search, which enables them to find all relevant matter information and expertise within the firm. The module’s Expertise Location feature automatically updates areas of expertise based on work product, projects, clients, etc., which makes it simple to find attorneys with relevant experience on a particular topic, as well as the documents and matters associated with them.

Well and good, but I think this amounts to search, retrieval, and social graph functions. That means that I understand Recommind’s definition of “knowledge management.”
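
For readers who like to see what such claims reduce to, “expertise location” is, at bottom, a matter of indexing who produced which documents and tallying topic terms per author. Here is a toy sketch of that general pattern. The data, the find_experts function, and the approach are my own illustration, not Recommind’s MindServer:

    from collections import Counter, defaultdict

    # Toy "work product": who wrote what. The data is invented for illustration.
    documents = [
        {"author": "Smith", "text": "patent licensing dispute patent claim"},
        {"author": "Jones", "text": "merger antitrust filing merger review merger"},
    ]

    # Build a per-person term profile from the documents each person produced.
    expertise = defaultdict(Counter)
    for doc in documents:
        expertise[doc["author"]].update(doc["text"].lower().split())

    def find_experts(topic):
        """Rank people by how often a topic term appears in their work product."""
        ranked = [(person, counts[topic]) for person, counts in expertise.items() if counts[topic]]
        return sorted(ranked, key=lambda pair: pair[1], reverse=True)

    print(find_experts("merger"))   # [('Jones', 3)]
    print(find_experts("patent"))   # [('Smith', 2)]

A production system would add weighting, recency, and access controls, but the basic bookkeeping is not much more exotic than this.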

Second, Recommind offers a description of its “knowledge management” system. Among the elements is CORE (Context Optimized Relevance Engine), which I believe is a probability-based method somewhat akin to Autonomy’s approach. But the interesting statement, in my opinion, was:

There’s no doubt Predictive Coding is accurate enough – this has been proven in many cases. A number of AmLaw 30 firms have proven it by using Predictive Coding and comparing it to the results from contract attorney review (and partner review as well) on the same data. The results in every case were that they achieved better accuracy with Predictive Coding, and in the process saved 50-80% of what they would have spent on traditional review because contract attorneys were either not needed or were able to work far more efficiently (or both). This is what we mean when we talk about revolutionizing the economics of eDiscovery; no one else is doing this.
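
A brief aside on mechanics: “predictive coding,” as the term is generally used, amounts to supervised text classification. Attorneys code a seed set of documents, software learns from that sample, and the remaining collection is scored and ranked so reviewers can focus on what is most likely responsive. Here is a minimal sketch of that generic pattern; the documents, labels, and use of scikit-learn are my own choices, and this illustrates the concept, not Recommind’s actual technology:

    # Assumes scikit-learn is installed; documents and labels are invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    seed_docs = ["email discussing the disputed contract terms",
                 "cafeteria menu for the month of March"]
    seed_labels = [1, 0]  # 1 = responsive, 0 = not responsive (attorney-coded)

    unreviewed_docs = ["draft amendment to the contract",
                       "notice about the parking garage"]

    # Learn from the attorney-coded seed set.
    vectorizer = TfidfVectorizer()
    classifier = LogisticRegression().fit(vectorizer.fit_transform(seed_docs), seed_labels)

    # Score the unreviewed collection and rank by predicted responsiveness.
    scores = classifier.predict_proba(vectorizer.transform(unreviewed_docs))[:, 1]
    for doc, score in sorted(zip(unreviewed_docs, scores), key=lambda p: p[1], reverse=True):
        print(f"{score:.2f}  {doc}")

The claimed savings come from the ranking step: if the model pushes the likely responsive material to the top, fewer human hours are spent plowing through the rest.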

Third, this system’s automated methods can be used in legal matters. I found this statement interesting:

Judges care about getting to a just result as efficiently as possible; they care far less about the means used to get to that result – so long as the means do not undermine the pursuit of justice. So judges are not in the business of “validating” any particular technology or process. That said, given the broken economics of today’s eDiscovery judges have definitely been expressing a fervent desire for a better approach to document review, and prominent judges like Judge Facciola, Grimm and Peck have indicated that technology can and should be brought to bear on the problem, because it can really help. It’s important to look at how the top litigation firms have responded now that they have a mandate from judges to change the economics of eDiscovery. And if you look at the top firms in the world — WilmerHale, Morgan Lewis and Fulbright & Jaworski, just to name three — they have made a commitment to Predictive Coding as the future. That’s a very, very strong endorsement.

Endorsements are good marketing, but in my limited experience with the legal system, what’s okay and what’s not okay can be variable.

Fourth, the role of humans remains important. I found this statement interesting:

There will always be a need for human review in eDiscovery. But bear in mind that the traditional eDiscovery process relies on an outdated, paper-based model that requires attorneys to sit in a room and review terabytes of ESI, one at a time. That’s a textbook example of work that should be assisted by intelligent automation. With the continuing rise of eDiscovery, there will always be plenty of work for attorneys. Some firms, and some clients, will always want to have an attorney’s eye on every document – which does not at all preclude the use of Predictive Coding. Even in such a case, they can perform that task much faster and more consistently using Predictive Coding.

Fifth, this comment about the cost of the system was instructive. This is the relevant passage:

We have certainly added more choices to our price list to accommodate the overwhelming demand we’ve seen, but if you are asking if we have had to lower our prices the answer is not at all. It’s definitely the case that much of the eDiscovery process, including culling, processing, hosting and forensic imaging, has been commoditized; older vendors trying to maintain market share and the rather simplistic appliance offerings and vendors have pushed this trend. But where we play and what our products are capable of doing for clients – Predictive Coding being perhaps the best example – is nowhere near becoming commoditized. The basic problem with eDiscovery is that it still uses the paper-based, linear review model, even though 99% of information these days is digital. Most EDD products try to alleviate the symptoms of that problem rather than address the problem itself, e.g. “better” linear review, a simple culling appliance, etc. Those technologies are commoditized now or will soon be. But we attack the fundamental problems of eDiscovery, the illness rather than its symptoms. Predictive Coding doesn’t just streamline document review for human reviewers – though it delivers that too – it actually automates the majority of the process using intelligent technology and defensible workflow. That’s something no other company or technology can deliver – period. And because it truly is game-changing technology, law firms and clients alike are more than willing to pay a premium. After all, it will save them a tremendous amount of time and money so the investment is easy to justify. Because this is so unique and such a difficult problem, in spite of a noisy market there’s no danger those capabilities will be commoditized any time soon.

If true, it suggests that the statistical methods used by other vendors, Autonomy and Google for example, should perform in a similar manner.

My view on this automation and prediction angle is that Recommind’s approach works well. If we accept that statement, what will happen if Autonomy or Google offers a lower-cost service? Might that shift some customers toward the lower-cost option? Numbers are numbers.

In a price war, Google – if it decides to push into the legal sector – might have an advantage over Autonomy, which has nearly $800 million in annual revenues. Recommind’s argument sets the stage for an interesting dynamic if larger firms go after this sector offering more value per dollar.

Excitement lies ahead in the fiercely contested and tumultuous legal market, I “predict”.

Stephen E Arnold, February 27, 2010

No one paid me to write this article. Since I mentioned legal activity, I will report a no fee write up to the DOJ, an organization which cares about the law.

TigerText for Private Text Messages

March 2, 2010

A company called X Sigma has a plan to make text messages private. The AFP story “TigerText App Removes Embarrassing Text Message” said:

People receiving the messages are prompted to download the TigerText application for free in order to read the text, which is not actually sent to the recipient’s iPhone. Instead, the message is hosted on the company’s servers where it can be erased whenever the sender wishes. Sent messages can be deleted on demand or be set to automatically vanish after a specified period. A “delete on read” feature starts a 60 second countdown when a text message is opened and then erases it at zero.
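
Stripped of the marketing language, the mechanism described above is a hosted message store with expiry rules: the text never persists on the handset, and the server deletes it on demand, after a set lifetime, or 60 seconds after it is first opened. A minimal sketch of that pattern, with the EphemeralMessageStore class and its behavior entirely my own illustration rather than X Sigma’s code:

    import time

    class EphemeralMessageStore:
        """Messages live only on the host; this sketch is illustrative, not TigerText's code."""

        def __init__(self):
            self._messages = {}  # message_id -> (text, expiry timestamp or None)

        def send(self, message_id, text, lifetime_seconds=None):
            # Optionally set the message to vanish after a fixed period.
            expires_at = time.time() + lifetime_seconds if lifetime_seconds else None
            self._messages[message_id] = (text, expires_at)

        def read(self, message_id, delete_on_read_seconds=60):
            self._purge_expired()
            if message_id not in self._messages:
                return None  # already erased
            text, expires_at = self._messages[message_id]
            if expires_at is None:
                # "Delete on read": start the countdown when the message is first opened.
                self._messages[message_id] = (text, time.time() + delete_on_read_seconds)
            return text

        def delete(self, message_id):
            # Sender-initiated erasure on demand.
            self._messages.pop(message_id, None)

        def _purge_expired(self):
            now = time.time()
            for mid in [m for m, (_, exp) in self._messages.items() if exp and exp <= now]:
                del self._messages[mid]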

Good idea, but one that should be available for PDF files as well. The idea that a PDF is forever is a far greater problem than 140-character text messages. Tough to search for information unless the hosting company provides it to third parties. Just my opinion.

Stephen E Arnold, March 2, 2010

No one paid me to write this news item. I will report non payment to the National Archives, an outfit where deleted messages are a constant concern.

Surprising Non Endorsement of SharePoint from SharePoint Expert

March 2, 2010

I had to chuckle at a comment in a SharePoint expert’s SharePoint blog and its write-up “What Has Been, What Is Now, and What Is Coming!” The context is that the expert is starting a new blog. Here’s the relevant passage:

One last thing… you gotta check out our new site!  Dustin Miller and I collaborated on creating a new SharePoint Bootcamp site that uses WordPress as a content management solution (what, no SharePoint?  Yes… we don’t use SharePoint for the sake of using it.  I believe in honest assessment of which tool is the best for your needs). The design and CMS system is one thing, but the bigger thing is the ease of access to information.  Subscribe to our RSS feed or iCal for the course schedule, and check out our courses by track, product and audience.

Yep, experts who don’t use the product that is their expertise. Interesting, but part of the azure chip approach, I opine.

Stephen E Arnold, March 2, 2010

No one paid me to write this. I will report the logic of such illogical behavior to NASA, which wishes to go to Mars. Bake sale to raise the money?

Search Wars

March 1, 2010

I really enjoyed the eWeek article “Microsoft Says It Told DOJ, EC How Google Holds Search Hostage.” I did not think I would find Microsoft, after 11 years of silence, becoming so forthcoming in expressing its view about Google.

Google is no start-up, and I think that after 11 years, the sudden escalation toward Google might be a wee bit too late. But when lawyers are involved, one never knows, does one?

There are some great quotes in this article as well. I am sorely tempted to chop out each as a Quote to Note, but you need to navigate to the original story and enjoy each in context.

I quite like the monopolist complaining about another monopolist. Also enjoyable is the idea that Google’s algorithms are not learning as much as Bing’s algorithms. The reason? Google relies on lots of data to inform its algorithms. When I read this, I understand that Microsoft has less data and, therefore, must work its mathematics harder to deliver comparable results. As I pointed out in my 2005 study The Google Legacy, still germane after five years I might add, Google’s efficiency gives it a cost advantage. For Microsoft to catch up, Microsoft must spend more money. At some point, which appears to have been reached, money won’t do the job.

I think the eWeek article is evidence that Microsoft’s accountants have finally figured out that the company cannot spend enough to close the gap. Enter the lawyers. With legal eagles on the job, Microsoft may have a chance.

Stephen E Arnold, March 1, 2010

No one paid me to write this. I mentioned lawyers, so I will report non payment to the Department of Justice, where not getting paid is not too popular.

The Brainware Dolphin Combo

March 1, 2010

I was clicking around for information about the killer whale who killed. One of the links shot me to a page about dolphins, and on that page was a link to “a top provider of business process optimization and information lifecycle management for customers using SAP.” A few more clicks and I landed on the Brainware Web site with this item: “Dolphin and Brainware Deliver SAP Invoice Automation Solution to Global Manufacturer.” Brainware backlinked to Dolphin, a company with the tagline “Smart Adaptable Proven.”

I scanned the story and the main point struck me as embedded in this passage:

The [Brainware-Dolphin] solution will automate the processing of several hundred thousand invoices per year for the customer’s European Operations. Dolphin, which helps companies with SAP environments run business processes and information lifecycle management solutions better and smarter, will provide the customer with its unique SAP-certified invoice ingestion and process tracking platform along with implementation and ongoing support services.

SAP is one of the companies I track. I think its actions provide some indication of how other large, IBM-inspired software companies will be coping with the global economic slowdown. This deal may be part of SAP’s effort to get back on the growth track by creating more affordable offerings. See “SAP Ecosystem: Going Direct for SMBs.” My question is, “Which outfit is the killer whale?”

Flash back five or six years, and I think SAP would have built this type of system. Today, SAP may not have the time, money, or market opportunity. Companies like Brainware and Dolphin are able to fill the gap. Brainware has a search engine, and I thought there was a search vendor called Dolphin at one time. I have lost track of who does what with the repositioning underway in the enterprise software sector.

Interesting how a search for a killer whale surfaced companies with quite unusual names finding opportunities created by leviathans that may be losing vigor. By the way, queries for “dolphin” and “Brainware” turn up some very unusual hits in both Bing.com and Google.com. Naming companies is a difficult task, I have found.

Stephen E Arnold, February 27, 2010

Nope, no one paid me to write about dolphins, brains, or whales. I think I report non payment to the director of the National Zoo, a fine institution despite some glitches in animal care.

Google and Rich Media Intent

March 1, 2010

Google’s YouTube.com has traffic, but it is not a revenue home run. In fact, Google provides no detailed numbers about YouTube.com’s financials. Clicking around YouTube.com, one of the traffic magnets on the Internet, I see that ads are appearing, but the Google overlays can be intrusive. The lion’s share of Google’s revenue comes from AdSense and AdWords. Neither YouTube.com nor Google’s enterprise services are revenue hat tricks yet.

Although YouTube.com is one of the top five Web sites on the Internet, Google seems to be at a loss to make it a money machine like AdWords and AdSense.

Is Google indifferent to YouTube.com, its costs, and its lackluster advertising performance? The answer is a qualified “no.” Google’s been interested in video for years.

On February 23, 2010, the United States Patent & Trademark Office granted Google a patent for its smart software video segmenting invention. The patent’s title is clearer than those of some of Google’s other patent documents, yet “Deconstructing Electronic Media Stream into Human Recognizable Portions” (US7668610) still understates the Google invention.

Google’s smart software watches videos and listens to music. The system figures out segments that make sense to a human. The method indexes the video and tags it so each piece carries an identifier. These chunks can be dealt out to Google’s servers and then delivered via Google’s content delivery network to users.
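
The general pattern is straightforward to illustrate, even if Google’s implementation is not: detect boundaries in a stream, cut it into segments a human would recognize, and give each segment an identifier so it can be indexed, stored, and served independently. The segment_stream and index_segments functions below, and the silence-based splitting, are stand-ins of my own devising, not the method disclosed in US7668610:

    import uuid

    def segment_stream(samples, threshold=0.05, min_gap=8):
        """Split a list of amplitude samples wherever a sustained run of near-silence occurs."""
        segments, current, quiet_run = [], [], 0
        for sample in samples:
            current.append(sample)
            quiet_run = quiet_run + 1 if abs(sample) < threshold else 0
            if quiet_run >= min_gap and len(current) > min_gap:
                segments.append(current[:-min_gap])   # keep the material before the gap
                current, quiet_run = current[-min_gap:], 0
        if current:
            segments.append(current)
        return segments

    def index_segments(segments):
        """Tag each segment with a unique identifier so it can be stored and served separately."""
        return {str(uuid.uuid4()): segment for segment in segments}

    # Two bursts of "audio" separated by silence become two separately addressable chunks.
    index = index_segments(segment_stream([0.9, 0.8] + [0.0] * 9 + [0.7, 0.6]))
    print(len(index))  # 2

What the patent adds, and what this toy version omits, is the learning: the system refines its notion of a sensible segment as it processes more media.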

The technology disclosed in this patent document is industrial strength, and it may be beyond the reach of companies with a more successful rich media business. The technology, which few companies can match today, was developed in 2005, maybe as early as 2004.

US7668610, filed in November 2005, operates at Google scale. Google’s smart software can chop up audio and video into logical segments, index them, and tag them with unique identifiers. But the most impressive function in the patent is that as the system operates, it learns.

Google has dozens of patents and applications that bear directly on rich media. With the cost of research and legal fees, Google’s rich media inventions make clear that Google is serious about video.

With years of investment and the efforts of some of the world’s brightest engineers, why is Google making little apparent progress? The Sundance test gave YouTube.com users a way to pay to view a handful of independent films screened at the Sundance Festival earlier this year. Then nothing.

Our research reveals that Google has a rich media push underway. Like its earlier foray into telecommunications, Google moves slowly.

There are some interesting signals that Google’s activity in rich media is increasing. Vizio, a manufacturer of flat panel televisions, is now advertising that YouTube.com can be watched on Internet-capable Vizios. At the Consumer Electronics Show, one company showed a Google set top box running Android (Google’s mobile operating system) with a personalized program guide. In Barcelona, tablets running Android were spotted by those with a nose for the novel gizmo.

Is Google going to be late to the rich media party? Google has mishandled its social media service, Orkut. The new Google Buzz triggered a landslide of criticism from those offended by Google’s exposing information without users’ permission. Now Google faces pushback from the Department of Justice in the US about Google Books and possible antitrust trouble from the European Union.

Music and video are big business. Just look at Apple’s revenues from its integrated hardware, software, and online retailing operation. Walmart paid $100 million for a service that can stream HD videos.

Where’s Google? Visible but lacking the money-spinning angle of a theater owner who sells high-margin soft drinks and popcorn.

Stephen E Arnold, March 1, 2010

Nope, a freebie. No one paid me to write about video. I think that means I have to report non payment to the FCC. Consider it done.

Wild and Crazy Tweeting

March 1, 2010

In the flow of stories for our Strategic Social Networking blog we see a lot of wild and crazy articles. Some of the information is a reminder of the “Wild and Crazy Guys” skits on the fourth season of the American comedy show Saturday Night Live. The tag line “We are wild and crazy guys” still echoes when I read some of the outputs from the azure chip crowd with its mavens, poobahs, and self-appointed experts.

One of the more interesting items was “DOD Authorizes Soldiers to Tweet, Access Facebook,” which appeared in PC Magazine on February 26, 2010. The main point was:

Provided they’re not giving away classified information, employees at the Department of Defense are now officially allowed to use social media sites like Facebook and Twitter…The policy covers everyone using the department’s non-classified Internet system, known as NIPRNET.

It is, therefore, not too surprising that some think tanks, azure chip consultants, and poobahs are on the social media bandwagon too. A reader sent me a link to “Banks Need to Wake Up to the Potential of Social Media.” The “article” appeared on the Datamonitor Web site (“the home of business information”) on February 18, 2010. The main point, in my opinion, is:

UK traditional banks need to recognise the value of social media if they are to keep their grip on customers in the thawing economic climate according to Datamonitor.  The independent market analyst believes the rise of social media has facilitated a fundamental shift in power from banks to consumers.  The research* reveals how UK consumers are leading the way, as 50% are using a variety of online tools to make their financial decision compared to 41% globally.  According to the Datamonitor findings, ‘online media’ is most popular amongst the 25-34 year old segment in all regions except APAC (Australia, Singapore and Japan).

Lots of buzzwords and fancy verbal dancing. When I read this, I heard the voice of Steve Martin.


Steve Martin: “Yes, the military and banks should make the tweets.” Dan Aykroyd: “We must post pictures of our strategic policy meetings on Facebook too.” Source: http://www.la2day.com/images/page_image/SteveMartinWild.jpg

Audience laughs. Loudly. A lot.

What caused me to think about this quite remarkable paragraph was another news story, “Experts on Bank Crisis Will Name and Shame.” The main thrust of this story is that the exploration of some “issues” in Ireland will identify some bankers who may be involved in an interesting way.

Now why did I connect “Banks Need to Wake Up to the Potential of Social Media” and “Experts on Bank Crisis Will Name and Shame”?

Easy.

Can you imagine folks like German economist Klaus Regling or Max Watson, a bank expert, sending tweets about their activities? How about some Facebook posts with pictures of a couple of meetings or a toast at a restaurant? What about a link to some little-known public PDF documents on a public Web site?

What about the banks themselves? Should the Royal Bank of Scotland, an outfit that managed to match some of the fine lads and lasses in the US with a loss of $5.5 billion in 2009, get social too? See “Royal Bank of Scotland Loses $5.5 Billion in 2009.”

Yep, the financial community should jump on that social media bandwagon. Start a social media campaign? Forget information policies, governance, and legal concerns. Tweet now!

Sometimes I wonder why the azure chip crowd with its assorted poobahs, mavens, and glib souls cooks up recommendations that: [a] will not make much sense to senior executives, [b] may create additional legal hassles if the messages are not in step with what the legal eagles define as appropriate, and [c] are little more than a sales pitch less subtle than the columns I write for Information World Review.

Now back to the military. The alleged assassination in a far-off country, reported in “Inquiry Grows in Dubai Assassination”, which appeared in the digital New York Times, is helping to keep this story fresh. I am not sure who is involved in what. The social info zooming around adds layers of messaging to a strange story.

I am on the fence about the military and the banks getting “social”.

So what about search?

Well, that’s the point. With services like Collecta.com or even a newcomer like Wowd.com, an investigator or attorney working on one of the legal matters related to “name and shame” is going to have an *easy time* finding a comment, an observation, or another item that *may* be material to the legal proceedings. Great idea to urge more social media in the midst of a financial downturn. Keep in mind that Datamonitor’s poobahs see the economic climate “thawing”. Sorry. I don’t agree.

I sure hope that the folks pushing certain institutions toward social media have thought about some of the implications for security and personnel safety.

A more prudent approach would emphasize the use of social media in a particular context with certain information governance policies in place and working. Defense and financial institutions may find that more analysis is preferable to a rush to tweeting.

For one, I am leaning toward a more conservative approach to social media, unlike the cheerleaders, poobahs, and bandwagon riders.

Stephen E Arnold, March 1, 2010

No one paid me to write this. I think non payment for articles about wild and crazy consulting ideas must be reported to the US Agency for International Development (USAID), a canny lot.

When Domains Collide

March 1, 2010

Editor’s Note: This is a modified version of the lecture that Stephen E Arnold, ArnoldIT.com, delivered in Philadelphia on March 1, 2010. The actual presentation was an extemporaneous talk based on this preliminary set of notes.

I want to thank NFAIS for inviting me to address the members of this professional organization. The world of bibliography, abstracting, indexing, professional publishing, and academic research has been shaken to its foundations in the last three or four years. The Richter scale measuring the waves pulsing through the bedrock of information access is being stretched. I find it difficult to talk about what is happening and what information professionals can do about those pulses.

This morning I want to put the pulses into a context. I am cautiously optimistic about a finding my research has revealed. Specifically, the shocks are coming from the integration of formerly separate disciplines into new services. In short, the traditional methods are being put into software and hardware modules and used to build new, more efficient, and more flexible services. Complete information businesses are now a commodity component that a clever engineer can use like a building block. Good news for engineers skilled in integration. Not such good news for experts in a hand-craft like Linotype operation. By snapping together modules, domains collide and are reinvented.

That’s today’s world of information.

Where We Are

Today we live in a world where a number of global, possibly monopolistic online research services stand alongside literally a hundred million or more citizen journalists creating blogs and tweets.

Until recently, say about 1979 or 1980, a scholar transported from an 11th century scriptorium would have quickly become familiar with the hard copy research books painstakingly documented by Constance Winchell. But move that person to today’s world and the mental shift would be more difficult, perhaps impossible.

Bring that 11th century researcher to today’s world, and I think adjustment would be difficult. Since the advent of online (anyone remember NLS?), information is just “out there”. Today information is “here” when it appears on a screen. The display of information is evanescent until it is “written”—that is, copied—to a storage device which may be located “out there”. It is possible to print an item of information, but the digital instance is the “real information.” This is a significant conceptual shift since online became our common information currency.

In fact, I cannot begin work until I “find” the particular electronic instance on which I am to work. Without search and retrieval, I am a cooked goose.

And just finding a particular document can be difficult even with the many search systems available. If our time traveling 11th century researcher can print a document, the information needed may be surrounded by unwanted images and advertisements. Without the ability to recognize the “real” information, our 11th century scholar would be hard pressed to use today’s information retrieval systems. The monk comes from another time, and that time has its own domain of information. The domain includes ways to create information, ways to access information, and ways to reference other information. The monk might be squashed when his domain collided with the domain of 2010 information access. When domains collide, methods are crushed, recycled, and remade. This is deeply disturbing to people who cling to specific ways of doing such things as research.

The implications of domain collision are important in my opinion. Economics, human behavior, work processes, and speed are defined by domains. Let’s run down a handful of the challenges domain collisions ignite. The good news is that domains that touch create a boundary condition in which opportunities can flourish.

Challenges of Domain Collisions

If you have a business school degree, you have studied the touchstone buggy whip reference in Theodore Levitt’s “Marketing Myopia”, which appeared in the Harvard Business Review in 1960. The idea is that a buggy whip manufacturer who anticipated the advent of the automobile could have expanded the product line to include a leather steering wheel wrap or automobile interiors.

Thus, the problem is that each domain has a certain way of perceiving phenomena. I won’t dwell on phenomenological existentialism, but I think it has quite a bit to teach us about what we can see when something “new” this way comes. We are, in the telling phrase of William James, stricken with “a certain blindness”. We simply cannot see beyond our domain. When domains collide, not only is our vision impaired, but we must also deal with processes and methods that have been transformed by the forces involved.

Not surprisingly, the problems of apprehending have triggered a cascade of challenges. Vocabulary is an issue. One example is the use of abbreviated spelling and neologisms to communicate in Twitter “tweets” or short messages via a mobile device. Messages such as “ru w/me” grate on some. To those in the domain, the messages are clear and appropriate.

Other phenomena I have observed include:

  1. Work methods crafted for one domain such as copying a manuscript by hand on animal skin do not transfer to another domain such as copying information to a storage device. An entire lifetime of learning is irrelevant in the new domain.
  2. The time required to assemble a document is measured by manual tasks that are often organized in a sequential manner. The digital domain allows many tasks to be handled quickly and, in some cases, in parallel.
  3. The costs for manual, serialized work processes can be problematic. When software can be used to eliminate certain work previously done by humans, the economics change.

I think you can see from these examples that our time traveling researcher from Mont St Michel in the Middle Ages would have a steep learning curve.

I have given quite a bit of thought to the implications of this type of domain collision. I know when I look at banking, retail, manufacturing, and finding the right person to marry that domain collisions are one of the defining attributes of today’s world.

Publishing

I want to comment about publishing because most NFAIS members are involved in the creation, selection, and dissemination of information. The domain collision began with the advent of the online search systems for the NASA RECON project, the work of Dr. Gerard Salton (Cornell University), and the non-linear increase in the capabilities of hardware and software.

What is interesting to me is that since this revolution began, arguably in the 1970s, publishing has been eager to embrace certain technologies yet reluctant to get too close to other technologies.

Let me give you an example. When I worked at the Courier Journal & Louisville Times Co., we operated a rotogravure press, and we printed the New York Times Sunday Magazine. We embraced traditional rotogravure printing technology and then adopted technology that chopped the manual plate making process out of the work flow. We used computers, fancy software, and numerically controlled presses as early as the early 1980s.

The Courier-Journal Board of Directors understood the importance of electronic information and created a separate business unit to build digital products. I was lucky to participate in the development of a profitable online business with ABI/INFORM, Business Dateline, Pharmaceutical News Index, and the core technical databases that were the foundation of today’s Cambridge Scientific Abstracts. This work took place in the early 1980s and relied on traditional mainframes and timesharing businesses like Tymnet and Dialcom as service bureaus.

I know from first-hand experience that those who managed the technologies steeped in the domain of traditional newspaper production believed their unit of the company was in the thick of technological change. To them, the electronic publishing technology was a radical and strange undertaking. The people running the state-of-the-art four-color printing presses did not see how electronic information could be a viable business.

We know now that the electronic publishing technology has emerged as one of the key technologies for information companies today. In fact, the brutal struggles between Macmillan and Amazon, Apple and Sony, and Google and book publishers are anchored in the technology that was a second-class citizen in the 1980s.

What’s interesting is that within publishing the domain of the traditional products like books, music, motion pictures, and television programming is now colliding with the domain of the network computing infrastructure. Complete businesses and their nested processes are now a Web service. One can download an electronic publishing system as open source software. The key point is that anyone anywhere in the world can become a digital newsroom with a Web site, newsfeed, and a community.

What’s even more interesting is that the agents of change are the children of many publishing executives and in some cases, the former employees of established publishing and rich media companies.

Another interesting point is that the new domain of content production is surrounding the traditional information industry, which Paul Zurkowski tried to capture in this diagram created for the Information Industry Association in the mid-1980s. The diagram, in my opinion, nicely summarizes what we now know as the Petri dish for Amazon, Apple, and Google, among other firms.

[Diagram: Information Industry Association market map, mid-1980s]

This is a diagram created by the “old” Information Industry Association. Created in the mid-1980s, it is an attempt to show how the information world at that time was beginning to develop. What’s interesting is that the successes of Amazon, Apple, and Google, among other companies, are dependent to some degree on combining several of these “old” segments in one service.

When I look at this diagram, I can see that the success of Amazon, Apple, and Google in information comes from taking the building blocks from this 20-year-old diagram and combining pieces into new constructions. Keep in mind that these firms are not in the strict sense traditional publishing companies. These are technology-centric companies whose engineering uses information as a catalyst to create new functions.
