LexisNexis and Interwoven: An Odd Couple

September 6, 2008

The for fee legal information sector looks like a consistent winner to those who don’t know the cost structures and marketing hassles of selling to attorneys, intelligence agencies, and law schools. Let’s review at a high level the sorry state of the legal information business in the United States. Europe and the Asia Pacific region are a different kitchen of torts.

Background

First, creating legal information is still a labor intensive operation. Automated processes can reduce some costs, but other types of legal metatagging still require the effort of attorneys or those with sufficient legal education to identify and correct egregious errors. As you may know, making a mistake when preparing for a major legal matter is not too popular with the law firms’ clients.

Second, attorneys and law firms make up one of those interesting markets. At one end there are lots and lots of attorneys who work in very small shops. Someone told me that 90 percent of the attorneys are involved with small firms or work in legal flea markets. Several attorneys get together, lease a space, and then offer desks to other attorneys. Everyone pays the overhead, and the group can pursue individual work or form a loose confederation if necessary. Other attorneys abandon ship. I don’t have data on the quitters in the US, but I know that one of my acquaintances in Louisville, Kentucky, gave up the law to become a public relations advisor. One of my resources is an attorney who works only on advising companies trying to launch an IPO. He hires attorneys, preferring to use his managerial skills without the mind numbingly dull work that many legal eagles perform.

Third, there are lots of attorneys who have to mind their pennies. Clients in tough economic times are less willing to pay wild and crazy legal bills. These often carry such useful line items as “Research, $2,300” or “Telephone call, $550”. I have worked as an expert witness and gained a tiny bit of insight into the billing and the push back some clients exert. Other clients don’t pay the bills, which makes life tough for partners who can’t buy a new BMW and for the low paid “associates” who can’t buy happiness or pay law school loans.

Fourth, most people know that prices for legal information are high, but there’s a growing realization that the companies with these expensive resources are starting to look a lot like monopolies. Running the only poker game in town makes some of the excluded players want options. In the last few years, I’ve run across services that a single person will start up to provide specific legal type information to colleagues because the blue chip companies were charging too much or delivering stale information at fresh baked bread prices.

Folks like Google.com, small publishers, trade associations, and the Federal government put legal information on Web servers and let people browse and download. Granted, some of the bells and whistles like the nifty footnotes that tell a legal eagle to look at a specific case for a precedent are missing. But some folks are quite happy to use the free services first. Only as a last resort will the abstemious legal eagle pay upwards of $250 per query to look up information in a WestLaw, LexisNexis, or other blue chip vendor's specialist online file.

Google’s government index service sports what may presage the “new look” for other Google vertical search services. Check it out in the screen shot below. Notice that the search box is unchanged, but the page features categories of information.

[Screenshot: Google US government search home page]

Now run a query for district court decisions. Sorry about the screen shots, but you can navigate to this site and run your own queries. I ran the bound phrase “district court decisions”. Here’s what Google showed me:

[Screenshot: Google results for the bound phrase “district court decisions”]

Let me make three observations:

Read more

Blossom Search for Web Logs

September 5, 2008

Over the summer, several people have inquired about the search system I use for my WordPress Web log. Well, it’s not the default WordPress engine. Since I wrote the first edition of Enterprise Search Report (CMSWatch.com), I have had developers providing me with search and content processing technology. We’ve tested more than 50 search systems in the last year alone. After quite a bit of testing, I decided upon the Blossom Software search engine. This system received high marks in my reports about search and content processing. You can learn more about the Blossom system by navigating to www.blossom.com. Founded by a former Bell Laboratories scientist, Dr. Alan Feuer, Blossom search works quickly and unobtrusively to index the content of Web sites, behind-the-firewall collections, and hybrid collections.

You can try the system by navigating to the home page for this Web log here and entering the phrase “search imperative” in quotes. You will get this result:

[Screenshot: Blossom results for the phrase “search imperative”]

When you run this query, you will see that the search terms are highlighted in red. The bound phrase is easily spotted. The key words in context snippet makes it easy to determine if I want to read the full article or just the extract.

Most Web log content baffles some search engines. For example, recent posts may not appear. The reason is that the index updating cycle is sluggish. Blossom indexes my Web site on a daily basis, but you can specify the update cycle appropriate to your users’ needs and your content. I update the site at midnight of each day, so a daily update allows me to find the most recent posts when I arrive at my desk in the morning.

The data management system for WordPress is a bit tricky. Our tests of various search engines identified three issues that came up when third-party systems were pointed at my WordPress Web log:

  1. Some older posts were not indexed. The issue appeared to be the way in which WordPress handles the older material within its data management system.
  2. Certain posts could not be located. The posts were indexed, but the default OR for phrase searching displayed too many results. With more than 700 posts on this site, the precision of the query processing system was not too helpful to me.
  3. Current posts were not indexed. Our tests revealed several issues. The content was indexed, but the indexes did not refresh. The cause appeared to be a result of the traffic to the site. Another likely issue was WordPress’ native data management set up.

As we worked on figuring out search for Web logs, two other issues became evident: redundant hits, since there are multiple paths to the same content, and incorrect time stamps, since all of the content is generated dynamically. Blossom has figured out a way to make sense of the dates in Web log posts, a good thing from my point of view.
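
To make the redundancy and time stamp issues concrete, here is a minimal sketch in Python of the sort of clean up a crawler can apply before indexing a Web log. This is my own illustration, not a description of how the Blossom crawler actually works; the URLs are invented.

```python
from urllib.parse import urlparse, urlunparse
import re

def canonicalize(url: str) -> str:
    """Collapse the many paths to one post (feed links, tracking parameters,
    mixed-case hosts) into a single canonical form."""
    parts = urlparse(url)
    host = parts.netloc.lower()
    path = re.sub(r"/+$", "", parts.path)                       # drop trailing slashes
    return urlunparse((parts.scheme, host, path, "", "", ""))   # drop query and fragment

def post_date(url: str):
    """Pull the date from a /YYYY/MM/DD/ permalink instead of trusting the
    dynamically generated page, which may report the crawl date."""
    m = re.search(r"/(\d{4})/(\d{2})/(\d{2})/", url)
    return tuple(int(x) for x in m.groups()) if m else None

hits = [
    "http://example.com/2008/09/05/blossom-search/?ref=feed",
    "http://EXAMPLE.com/2008/09/05/blossom-search/",
    "http://example.com/category/search/blossom-search/",   # category paths need extra mapping
]

seen, unique = set(), []
for url in hits:
    key = canonicalize(url)
    if key not in seen:
        seen.add(key)
        unique.append((key, post_date(url)))

for key, date in unique:
    print(key, date)
```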

The Blossom engine operates for my Web log as a cloud service; that is, there is no on premises installation of the Blossom system. An on premises system is available. My preference is to have the search and query processing handled by Blossom in its data centers. These deliver low latency response and feature fail over, redundancy, and distributed processing.

The glitches we reported to Blossom proved to be no big deal for Dr. Feuer. He made adjustments to the Blossom crawler to finesse the issues with WordPress’ data management system. The indexing process is light weight and does not choke my available bandwidth. In fact, traffic to the Web log continues to rise, and the Blossom demand for bandwidth has remained constant.

We have implemented this system on a site run by a former intelligence officer, which is not publicly accessible. The reason I mention this is that some cloud based search systems cannot conform to the security requirements of Web sites with classified content and their log in and authentication procedures.

The ArnoldIT.com site, which is the place for my presentations and occasional writings, is also indexed and searched with the Blossom engine. You can try some queries at http://www.arnoldit.com/sitemap.html. Keep in mind that the material on this Web site may be lengthy. ArnoldIT.com is an archive and digital brochure for my consulting services. Several of my books, which are now out of print, are available on this Web site as well.

Pricing for the Blossom service starts at about $10 per month. If you want to use the Blossom system for enterprise search, a custom price quote will be provided by Dr. Feuer.

If you want to use the Blossom hosted search system on your Web site, for your Web log, or your organization, you can contact either me or Dr. Alan Feuer by emailing or phoning:

  • Stephen Arnold seaky2000 at yahoo dot com or 502 228 1966.
  • Dr. Alan Feuer arf at blossom dot com

Dr. Feuer has posted a landing page for readers of “Beyond Search”. If you sign up for the Blossom.com Web log search service, “Beyond Search” gets a modest commission. We use this money to buy bunny rabbit ears and paté. I like my logo, but I love my paté.

Click here for the Web log search order form landing page.

If you mention Beyond Search, a discount applies to bloggers who sign up for the Blossom service. A happy quack to the folks at Blossom.com for an excellent, reasonably priced, efficient search and retrieval system.

Stephen Arnold, September 5, 2008

Why Dataspaces Matter

August 30, 2008

My posts have been whipping super-wizards into action. I don’t want to disappoint anyone over the long American “end of summer” holiday. Let’s consider a problem in information retrieval and then answer in a very brief way why dataspaces matter. No, this is not a typographical error.

Set Up

A dataspace is somewhat different from a database. Databases can sit within a dataspace, but the dataspace also encompasses other information objects, garden variety metadata, and new types of metadata which I like to call meta metadata, among other things. These are represented in an index. For our purpose, we don’t have to worry about the type of index. We’re going to look up something in any of the indexes that represent our dataspace. You can learn more about dataspaces in the IDC report #213562, published on August 28, 2008. It’s a for fee write up, and I don’t have a copy. I just contribute; I don’t own these analyses published by blue chip firms.
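
To make the notion less abstract, here is a toy sketch in Python of a single index that holds heterogeneous information objects, which is the flavor of lookup a dataspace implies. The classes and field names are my own invention, not anything from the IDC report or any vendor.

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    kind: str                      # "db_row", "document", "email", "meta_metadata"
    body: str                      # the text we can index
    metadata: dict = field(default_factory=dict)

class DataspaceIndex:
    """One index over many kinds of information objects."""
    def __init__(self):
        self.items = []

    def add(self, item: Item):
        self.items.append(item)

    def search(self, term: str):
        term = term.lower()
        return [i for i in self.items
                if term in i.body.lower()
                or any(term in str(v).lower() for v in i.metadata.values())]

ds = DataspaceIndex()
ds.add(Item("db_row", "Jayant Madhavan, structured data team", {"table": "staff"}))
ds.add(Item("document", "Notes on Google's deep Web crawling"))
ds.add(Item("meta_metadata", "", {"object": "email 42", "forwarded_to": "analyst@example.com"}))

for hit in ds.search("google"):
    print(hit.kind, hit.metadata)
```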

Now let’s consider an interesting problem. We want to index people, figure out what those people know about, and then generate results to a query such as “Who’s an expert on Google?” If you run this query on Google, you get a list of hits like this.

[Screenshot: Google results for the query “Who’s an expert on Google?”]

This is not what I want. I require a list of people who are experts on Google. Does Live.com deliver this type of output? Here’s the same query on the Microsoft system:

[Screenshot: Live.com results for the same query]

Same problem.

Now let’s try Cluuz.com, a system that I have written about a couple of times. I run the query “Jayant Madhavan” and get this:

[Screenshot: Cluuz.com results for “Jayant Madhavan”]

I don’t have an expert result list, but I have a wizard and direct links to people Dr. Madhavan knows. I can make the assumption that some of these people will be experts.

If I work in a company, the firm may have the Tacit system. This commercial vendor makes it possible to search for a person with expertise. I can get some of this functionality in the baked in search system provided with SharePoint. The Microsoft method relies on the number of documents a person known to the system writes on a topic; that’s crude, but better than nothing. If I were working in a certain US government agency, I could use the MITRE system that delivers a list of experts. The MITRE system is not one whose screen shots I can show, but if you have a friend in a certain government agency, maybe you can take a peek.
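
The document counting approach is simple enough to sketch. Here is a minimal, hypothetical illustration in Python of ranking people by the number of indexed documents they have written on a topic. It is a toy, not Microsoft’s actual method, and the names and data are invented.

```python
from collections import Counter

# (author, topics mentioned in the document) -- invented sample data
documents = [
    ("A. Smith",  ["google", "crawling"]),
    ("A. Smith",  ["google", "advertising"]),
    ("B. Jones",  ["sharepoint"]),
    ("C. Garcia", ["google"]),
]

def experts(topic: str, docs):
    """Rank authors by the number of documents they wrote on the topic."""
    counts = Counter(author for author, topics in docs if topic in topics)
    return counts.most_common()

print(experts("google", documents))
# [('A. Smith', 2), ('C. Garcia', 1)]
```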

None of these systems really do what I want.

Enter Dataspaces

The idea for a dataspace is to process the available information. Some folks call this transformation, and it really helps to have systems and methods to transform, normalize, parse, tag, and crunch the source information. It also helps to monitor the message traffic for some of that meta metadata goodness. Email is a good example of meta metadata. I want to index who received the email, who forwarded it to whom and when, which documents any of its contents were cut or copied into, and who has access to that information. You get the idea. Meta metadata is where the rubber meets the road in determining what’s important regarding information in a dataspace.
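
Here is one way to picture that meta metadata: a minimal sketch in Python of an event record that captures who received, forwarded, or copied an email and when. The field names and sample data are hypothetical; a real system would capture far more.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class EmailEvent:
    """One piece of meta metadata: what happened to message 'msg_id' and when."""
    msg_id: str
    action: str          # "received", "forwarded", "copied_into"
    actor: str
    target: str          # recipient, or the document the text was pasted into
    when: datetime

log = [
    EmailEvent("msg-17", "received",    "j.doe@example.mil", "inbox",             datetime(2008, 8, 29, 9, 5)),
    EmailEvent("msg-17", "forwarded",   "j.doe@example.mil", "r.roe@example.mil", datetime(2008, 8, 29, 9, 40)),
    EmailEvent("msg-17", "copied_into", "r.roe@example.mil", "briefing.doc",      datetime(2008, 8, 29, 10, 2)),
]

def trail(msg_id: str, events):
    """Reconstruct who touched a message, in time order."""
    return sorted((e for e in events if e.msg_id == msg_id), key=lambda e: e.when)

for e in trail("msg-17", log):
    print(e.when, e.actor, e.action, e.target)
```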

Read more

The Enterprise Search Thrill Ride

August 29, 2008

Summer’s ending, and the search engine thrill ride is accelerating. Before you fire up your personal computer and send me an email asking for juicy details, appreciate that I can only comment in a broad way, making observations at a high level. If you have an appetite for more information, you will have to dip into your piggy bank and engage me to show up and discuss the state of the industry in a setting less chatty than this Web log.

Every amusement park has a thrill ride. Kids love these roller coasters, bungee jumps, and spinning barrels. Adults and people with an aversion to being scared are generally content to watch. Once in a great while, a thrill ride goes wrong. The thrill seekers can be injured and, once in a while, killed.

Search and content processing companies are, in a sense, a thrill ride. The launch of a company is filled with anticipation. Then the company chugs along and usually gets a sale, and the process repeats itself. At the end of the ride, the company speeds along, and in most cases the ride ends with the employees displaying big smiles. When a ride goes wrong, the employees aren’t so chipper, but the lawyers often show sly grins.

[Image: roller coaster blur]

I am quite confident that the September to December 15, 2008, period will be quite exciting for me. First, the search and content processing sector of the enterprise software market is poised for change. Second, a number of companies will have to make their numbers or face the prospect of enduring the lash of venture capitalists’ whips, changing careers, or closing up for good. Third, the GOOG is beginning to move slowly forward in the enterprise sector. Even if Google’s management insists “We’re just running a beta test”, those “beta tests” will be disruptive for established search and content processing vendors. Fourth, newcomers to the North American market will make their presence felt to a greater degree than in the first six months of 2008. Newcomers often become irritants with their promise of better, faster, or cheaper. Of course, the customer may pick two of these claims, but incumbents have to waste time and money deflecting these competitive challenges. Finally, superplatforms (big enterprise software vendors) have to protect their turf. I expect significant pressure from these firms to add another variable to the search and content sector. After all, what can a company do when Microsoft bundles an incrementally improving search and retrieval system with a widely used server product like SharePoint?

Read more

nGenera Bakes in Autonomy Search

August 26, 2008

Just when Microsoft makes search “free”, along comes Autonomy and proves that licensing deals are alive and well. According to CRM Buyer, nGenera inked an original equipment manufacturing deal with Autonomy. What’s interesting is that the deal is not pitched as “search”. The deal is for Web 2.0 technology built on search. The application is not finding; the application is knowledge management. I have to be up front and admit that I don’t know what knowledge is. Absent that understanding, I’m baffled at how to manage what I don’t grasp. Nevertheless, the deal is done.

Let’s sort out who is who in this deal. Talisma, according to CRM Buyer, “OEM’ed the Autonomy search engine.” An Autonomy reseller told me that Autonomy’s search engine no longer needs training, and it now shares many features with “appliance like” search systems from Google and Thunderstone, among others. You can get more information about Talisma here. The Talisma catchphrase is “Software that enables an exceptional online customer experience.”

nGenera bought Talisma in May 2008. nGenera’s Steve Papermaster is reported as having said at the time of the deal:

The future of innovation is customer co-creation: talking directly to customers, listening to them, learning from them. We’re taking content and processes from customer interaction software and mashing that with Web 2.0 collaboration tools to help companies discover brilliant new product ideas inspired by their own customers. Source: Paul Greenberg.

nGenera now has its own customer support product line to complement its other management consulting type software offerings. nGenera is a cloud computing and Web 2.0 services firm. The company has a remarkable “manifesto” here that sets forth its vision for organizational operations. One idea in the manifesto is that organizations must move from “knowledge management” to what the company calls “content collaboration and collective intelligence”. Since I don’t know what “knowledge management” means, I am in the dark about information operations that reach beyond it. The manifesto also advocates moving from “traditional information technology” to “a next generation enterprise platform.” Again my experience is not much help to me in figuring out what nGenera’s services will deliver. The company has its fingers in many different pies. Each pie is stuffed with Web 2.0 goodness and goodies like “leveraging institutional memory,” “mass collaboration”, “business analytics”, and “transformational change”. These notions are too sophisticated for this addled goose.

[Screenshot: Talisma Knowledgebase]

The Talisma Knowledgebase, which may now incorporate Autonomy technology.

The purchase of Talisma adds what nGenera describes here as:

The leading Customer Interaction Management (CIM) software solution provider enabling organizations globally to deliver an exceptional online customer experience while dramatically increasing their efficiency and effectiveness.  Talisma’s customers include Aetna, AOL, Canon, Citibank, Comcast, Dell, Ford, University of Notre Dame, Microsoft, Pitney Bowes, Siemens, Sony, and Sprint.  Talisma is headquartered in Bellevue, Washington, and has offices located across Asia-Pacific, Europe, and North America.

To sum up, nGenera bought Talisma in May 2008. Talisma inked a deal for Autonomy’s search and content processing technology. Autonomy, therefore, “snaps in” to the broader range of nGenera’s Web 2.0 services. Autonomy joins Atlassian Confluence as a technology provider to nGenera.  I must admit these names leave my head spinning.

Read more

Another Extract from the Hartman-Communicatie Interview

August 24, 2008

I received a couple of requests for additional extracts from my interview with Erik Hartman, who is sponsoring a conference about content management and content processing in Utrecht in September 2008. You can obtain more information about the program here. Here are three more snippets from the interview. The question is in bold. My response is in normal weight.

Everybody who’s talking about search has Google on his mind. Is that good or bad?

I have written two detailed studies of Google, The Google Legacy in 2005 and Google Version 2.0 in 2007. Google is an important company because it legitimized an alternative to desktop applications and on premises enterprise solutions. Along the way, Google changed the Web search landscape, dominated online advertising, and pushed its snout into telephony, online payments, publishing, and several other major non-search market sectors.

Google now has 70 percent of search traffic in North America. In Denmark and Germany, Google’s share of the search market is over 90 percent.

There’s a lot of talk about Google, but there is not much understanding of how the company’s strategy of disruption works, its business model options, or its potential to move into non search markets without warning.

Google’s also important because innovators are learning from the Google model. People who quit Google to start a new company—what are called Xooglers—build on the ideas made concrete by Google. As a result, Google the company could go out of business. But Google the model will have a continuing impact for many years.

[Image: the hot seat]

On the hot seat.

After several takeovers, the market of enterprise search vendors has shrunk somewhat. What’s your view on the investment and revenue opportunities?

That’s a good question. On the surface, it looks as if search companies are selling out. For example, Lexalytics has merged with a UK company. Powerset sold out to Microsoft. Fast Search also accepted a Microsoft offer. SAS Institute bought Teragram. Business Objects (now part of SAP) purchased Inxight Software.

However, there’s investment as well. Intel and SAP pumped $14 million into Endeca. I have worked on a couple of investments in search and content processing systems not yet announced to the public.

In my files I have the names of more than 300 companies engaged in search, text analytics, and content processing. The search sector is quite active even in the present economic climate.

The reason is that many people think, “If Google did it, so can we”. I don’t see any let up in search activity for the foreseeable future. Most search systems are not so good; therefore, there’s a big payday in the enterprise market. There’s a growing suspicion that Google may not be everyone’s idea of “My Favorite Monopoly”.

The search space is still like two or three interacting magnetic fields. It’s dynamic, unpredictable, and exciting to some.

What can we expect from Google, Microsoft, Autonomy and other parties?

These companies are good at keeping secrets, and each is willing to sue anyone who provides highly specific information about what’s next from their creative ovens. I can offer some high level opinions with the caveat that my hunches may not be what these outfits actually do.

Autonomy. This company is morphing from search into a different type of information solutions company. When I look at the range of products on offer, I see a mini solutions conglomerate, not a search or content processing company. For example, fraud detection may or may not involve words. Fraud detection focuses on patterns in data, not search. Another example is the company’s video solutions. Search plays a part, but Autonomy offers a more robust way for an organization to manipulate its rich content. On the strength of its non search businesses, Autonomy seems poised to grow to $300 million or more in revenue. This is a great achievement, but it is not a pure search play.

Google is a bit of a mystery to me. The company has some interesting patent documents and fascinating demonstration services. Google is content to collect billions from online advertising and sit on its hands as Amazon, Salesforce.com, and other companies push aggressively into cloud services. Google makes money from ads, but I am reluctant to say, “Google is a search company.” Google is an applications platform. Search and advertising are a couple of popular applications, not the whole company.

Microsoft is quite interesting to me. I think the fate of Microsoft will be to end up as an applications company, a game company, and a server company. Microsoft wants to have an online company like Google, but I don’t think it can achieve this unless it shatters itself and then starts online without the baggage from the past. In terms of search, Microsoft is a me-too squared company. Google is deeply duplicative of AltaVista.com, Overture.com, and Microsoft.com. Microsoft, oddly enough, is trying to duplicate Google, which has duplicated part of Microsoft. Copies of copies get blurry, so Microsoft lacks focus in its search efforts across its very different business units. The Microsoft money comes from upgrades to operating systems and applications. I think the company faces a struggle for the foreseeable future.

Stephen Arnold, August 24, 2008

Five Tips for Reducing Search Risk

August 20, 2008

In September 2008, I will be participating in a conference organized by Dr. Erik M. Hartman. One of the questions he asked me today might be of interest to readers of this Web log. He queried by email: “What are five tips for anyone who wants to start with enterprise search but has no clue?”

Here’s my answer.

That’s a tough question. Let me tell you what I have found useful when starting a new project with an organization that has a flawed information access system.

First, identify a specific problem and do a basic business school or consulting firm analysis of the problem. This is actually hard to do, so many organizations assume “We know everything about our needs.” That’s wrong. Inside of a set you can’t see much other than other elements of the set. Problem analysis gives you a better view of the universe of options; that is, other perspectives and context for the problem.

Second, get management commitment to solve the problem. We live in a world with many uncertainties. If management is not behind solving the specific problem you have analyzed, you will fail. When a project needs more money, management won’t provide it. Without investment, any search and content processing system will sink under the weight of itself and the growing body of content it must process and make available. I won’t participate in projects unless top management buys in. Nothing worthwhile comes easy or economically today.

Read more

Email or Search: Which Wins the Gold?

August 18, 2008

My son (Erik Arnold) runs a nifty services firm called Adhere Solutions. He’s hooked up with Google, and he views the world through Googley eyes. I (Stephen Arnold) run the addled goose outfit ArnoldIT. Google does not know I exist, and if Googzilla did, the Mountain View giant would make a duvet from my tail feathers.

The setting. We’re sitting in a cafeteria. The subject turns to which is the killer application for today’s 20 something. Is it email (the Brett Favre of online) or is it search (the Michael Phelps of cloud services)? My son and I play this argument MP3 file frequently, and our wives have set down specific rules for these talks. First, we have to be by ourselves. Second, we have to knock off the debate after 30 minutes or so. Erik and I can extend analytic discussions of digital theory over years, and we have marching orders to knock that off.

Here’s the argument. Erik asserts that search is the new killer app. I agree, but I tell him I want to make a case for email as long as I can extend it to SMS and newer services under the category Twitterish. He agrees.

My Argument: Messaging

Messaging is communications. Search is finding and discovering. Therefore, the need to communicate is higher on the digital needs scale than simple finding. With services that allow me to call, text, create mini blogs, and broadcast brief Tweets, I am outputting and receiving messages that are known to be:

  • Important. I don’t text a client to tell her what I had for lunch in the wonderful cafeteria. Grilled cheese as it turns out. Important to me, but to no one else. I send important messages that have an instrumentality.
  • Timely. I control the time delivery, matching urgency with medium. I sent a fax last week. What a hassle, but the message warranted a fungible copy, not urgent delivery. I want to dial in the “time” function, not leave it to chance or to some other authority.
  • Content rich. I write baloney, but I wouldn’t write baloney unless it was important to me and to the recipient of one of my messages, articles, or 350 page studies.

In conclusion, messaging, particularly electronically implemented messaging, is the killer app. Search is useful, but it is not one to one, one to many, many to one, or many to many communication. By definition, search is not timely, is of uncertain importance, and is often not content rich due to format, editorial policy, or the vapidity of the data.

My Son’s Argument

Messaging is not necessarily digital. It is crucial, but when we talk about an online killer app, it’s not email. The killer app must deliver a function that we can’t duplicate in the analogue world. For that reason, search is the killer application for the 21st century. Here’s why:

Read more

The Future of Search Layer Cake

August 14, 2008

Yesterday I contributed a short essay about the future of search. I thought I was being realistic for the readers of AltSearchEngines.com, a darn good Web log in my opinion. I wanted to be more frisky than the contributions from SearchEngineLand.com and Hakia.com too. I’m not an academic, and I’m not in the search engine business. I do competitive technical analysis for a living. Search is a side interest, and prior to my writing the Enterprise Search Report, no one had taken a comprehensive look at a couple dozen of the major vendors. I now have profiles on 52 companies, and I’m adding a new one in the next few days. I don’t pay much attention to the university information retrieval community because I’m not smart enough to figure out the equations any more.

From the number of positive and negative responses that have flowed to me, I know I wasn’t clear about my focus on behind the firewall search and Google’s enterprise activities. This short post is designed to put my “layer cake” image into context. If you want to read the original essay on AltSearchEngines.com, click here. To refresh your memory, here’s the diagram, which in one form or another I have been using in my lectures for more than a decade. I’m a lousy teacher, and I make mistakes. But I have a wealth of hands on experience, and I have the research under my belt from creating and maintaining the 52 profiles of companies that are engaged in commercial search, content processing, and text analytics.

[Diagram: the future of search “layer cake”]

I’ve been through many search revolutions, and this diagram explains how I perceive those innovations. Furthermore, the diagram makes clear a point that many people do not fully understand until the bills come in the mail. Over time search gets more expensive. A lot more expensive. The reason is that each “layer” is not necessarily a system from a single vendor. The layers show that an organization rarely rips and replaces existing search technology. So, no matter how lousy a system, there will be two or three or maybe a thousand people who love the old system. But there may be one person or 10,000 who want different functionality. The easy path for most organizations is to buy another search solution or buy an “add in” or “add on” that in theory brings the old system closer to the needs of new users or different business needs.

Read more

MarkLogic: The Army’s New Information Access Platform

August 13, 2008

You probably know that the US Army has nicknames for its elite units: Screaming Eagles, Big Red One, and my favorite, “Hell on Wheels.” Now some HUMINT, COMINT, and SIGINT brass may create a MarkLogic unit with its own flash. Based on the early reports I have, the MarkLogic system works.

Based in San Carlos (next to Google’s Postini unit, by the way), MarkLogic announced that the US Army Combined Arms Center or CAC in Ft. Leavenworth, Kansas, has embraced MarkLogic Server. BCKS, shorthand for the Army’s Battle Command Knowledge System, will use this next-generation content processing and intelligence system for the Warrior Knowledge Base. Believe me, when someone wants to do you and your team harm, access to the most timely, on point information is important. If Napoleon were based at Ft. Leavenworth today, he would have this unit report directly to him. Information, the famous general is reported to have said, is nine tenths of any battle.

Ft. Leavenworth plays a pivotal role in the US Army’s commitment to capture, analyze, share, and make available information from a range of sources. MarkLogic’s technology, which has the Department of Defense Good Housekeeping Seal of Approval, delivers search, content management, and collaborative functions.

[Screenshot: US Army BCKS display]

An unclassified sample display from the US Army’s BCKS system. Thanks to MarkLogic and the US Army for permission to use this image.

The system applies metadata based on the DOD Discovery Metadata Specification (DDMS). The content is managed automatically by applying metadata properties such as the ‘Valid Until’ date. The system uses the schema standard used by the DOD community. The MarkLogic Server manages the work flow until the file is transferred to archives or deleted by the content manager. MarkLogic points to savings in time and money. My sources tell me that the system can reduce the risk to service personnel. So, I’m going to editorialize and say, “The system saves lives.” More details about the BCKS are available here. Dot Mil content does move, so click today. I verified this link at 0719, August 13, 2008.
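
As a rough illustration of the ‘Valid Until’ housekeeping, here is a minimal Python sketch that routes expired records to an archive queue. The field names are my guesses for illustration, not the DDMS schema or MarkLogic’s actual workflow, which runs inside MarkLogic Server itself.

```python
from datetime import date

# Hypothetical records carrying a DDMS-style 'valid until' property.
records = [
    {"id": "doc-001", "valid_until": date(2008, 6, 30)},
    {"id": "doc-002", "valid_until": date(2009, 1, 15)},
]

def triage(records, today=None):
    """Route expired content to the archive queue, keep the rest live."""
    today = today or date.today()
    archive = [r["id"] for r in records if r["valid_until"] < today]
    live = [r["id"] for r in records if r["valid_until"] >= today]
    return archive, live

archive, live = triage(records, today=date(2008, 8, 13))
print("archive:", archive)   # ['doc-001']
print("live:", live)         # ['doc-002']
```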

Read more
