Yahoo Open: Why the Odds Don’t Favor Yahoo

September 16, 2008

When we started The Point (Top 5% of the Internet) in 1993, our challenge was Yahoo. I recall my partner Chris Kitze telling me that the Yahoo vision was to provide a directory for the Internet. Yahoo did that. We sold The Point to Lycos and moved on. So did Yahoo. Yahoo became the first ad-supported version of America Online. The company also embarked on a series of acquisitions that permitted each unit to exist as a tiny fiefdom within the larger “directory” and emerging “ad-supported” AOL business. In the rush to portals and advertising, Yahoo ignored search and thus began its method of buying (Inktomi), licensing (InQuira), or picking up via buy out (Flickr) different search and content properties. Google was inspired by the Overture ad engine. Yahoo surveyed its heterogeneous collection of services, technologies, and systems and ended up the company it is today–an organization looking to throw a Hail Mary pass for the game-winning touchdown. That strategy won’t work. Yahoo has to move beyond its Yahooligan approach to management, technology, and development.


The ArnoldIT.com and Beyond Search teams have had many conversations about Yahoo in the last year. Let me summarize the points that keep a lid on our enthusiasm for Yahoo and its present trajectory:

  1. Code fiddling. Yahoo squandered an opportunity to make the Delicious bookmarking service the dominant player in this segment because Yahoo’s engineers insisted on rewriting Delicious. Why fiddle? Our analysis suggests that Yahoo’s engineers don’t know how to take a hot property, scale it, and go for the jugular in the market. The approach is akin to recopying an accounting worksheet by hand because it is just better when the worksheet is perfect. Wrong.
  2. Street peddler pushcart. Yahoo never set up a method to integrate each acquisition tightly into the company. I recall a comment from a person involved in GeoCities years ago. The comment was, “Yahoo just let us do our own thing.” Again this is not a recipe for cost efficiency. Here’s why: The Overture system when acquired ran on Solaris with some home-grown Linux security. Yahoo bought other properties that were anchored in MySQL. Then Yahoo engineers cooked up their own methods for tests like Mindset. When a problem arose, experts were in submarines and could not really help with other issues. Without a homogeneous engineering vision, staff were not interchangeable and costs remained tough to control. The situation is the same as when my mother bought a gizmo from the street peddler in Campinas, Brazil. She got a deal, but the peddler did not have a clue about what the gizmo did, how it worked, or how to fix it. That’s Yahoo’s challenge today.
  3. Cube warfare. Here’s the situation that, according to my research, forced Terry Semel to set up a sandwich management system. One piece of bread was the set of technical professionals at Yahoo. The other piece of bread was Yahoo top management. Top management did not understand what the technical professionals said, and when technical professionals groused about other silos at Yahoo, Mr. Semel put a layer of MBAs between engineers and top management to sort out the messages. It did not work, and Yahoo continues to suffer from spats across, within, and among the technical units of the company. It took Yahoo years to resolve owning both Flickr and Yahoo Photos. I still can’t figure out which email system is which. I can’t find some Yahoo services. Shopping search is broken for me. An engineer here bought a Yahoo Music subscription service for his MP3 player. Didn’t work from day one, and not a single person from Yahoo lifted a finger, not even the one tracked down via IRC. I created some bookmarks and now have zero idea what the service was or where the marks are located. It took me a year to cancel the billing for a Yahoo music service a client paid me to test. (I think it was Yahoo Launch. Or Yahoo Radio. Or Yahoo Broadcast. Hard to keep ’em straight.) Why? No one cooperates. Google and Microsoft aren’t perfect. But compared to Yahoo, both outfits get passing grades. Yahoo gets to repeat a semester.

When I read the cheerleading for Google in CNet here or on the LA Times’s Web log here, I ask, “What’s the problem with nailing Yahoo on its deeper challenges?” I think it’s time for Yahoo to skip the cosmetics and grandstanding. With the stock depressed, Yahoo could face a Waterloo if its Google deal goes south. Microsoft seems at this time to be indifferent to the plight of the Yahooligans. Google is cruising along with no significant challenge except a roadblock built of attorneys stacked like cordwood.

Yahoo is a consumer service. The quicker it thinks in terms of consumerizing its technology to get its costs in line with a consumer operation, the better. I’m not sure 300 developers can do much for the corrosive effects of bad management and a questionable technical strategy. Maybe I’m wrong? Maybe not? We sold The Point in 1995 and moved on with our lives. Yahoo, in my opinion, still smacks of the Internet circa 1995, not 2008 and beyond.

Stephen Arnold, September 16, 2008

Extending SharePoint Search

September 15, 2008

Microsoft SharePoint is a widely used content management and collaboration system that ships with a workable search system, which I’ll refer to as ESS, for Enterprise Search System. But for program expansion and customization, you’ll want to look to third-party systems for help.

SharePoint has reduced the time and complexity of customizing result pages, handling content on Microsoft Exchange servers, and accessing most standard file types. In our tests of SharePoint, ESS does a good job and offers some bells and whistles like identifying individuals whose content suggests they are knowledgeable about a specific topic. Managing crawls or standard index cycles is point and click, SharePoint is security aware, and customization is easy. But licensees will hit a “glass ceiling” when indexing upwards of 30 million documents. To provide a solution, Microsoft purchased Fast Search & Transfer. Microsoft has released a Fast Search Web part to make integration of the FAST Enterprise Search Platform or ESP easier. The SharePoint FAST ESP Web part is located on Microsoft’s CodePlex Web site, and the documentation can be obtained here.

But licensing Fast ESP can easily soar above $250,000, excluding customization and integration service fees, making it a major investment to deliver acceptable search-and-retrieval functionality for large, disparate document collections. So what can a SharePoint licensee do for less money?

The good news is that there are numerous solutions available. These range from open source options such as Lucene and FLAX to the industrial-strength Autonomy IDOL (intelligent data operating layer), which can cost $300,000 or more before support and maintenance fees are tacked on.

Third-party systems can reduce the time required to index new and changed documents. One of the major reasons for shifting from the ESS to a third-party system is a need to provide certain features for your users. Among the most-requested functions are deduplication of result sets, parametric searching/browsing, entity extraction and on-the-fly classification, and options for merging different types of content in the SharePoint environment. The good news is that there are more than 300 vendors with enterprise search systems that to a greater or lesser degree support SharePoint. The bad news is that you have to select a system.
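
To make the deduplication point concrete, here is a minimal sketch, in Python, of the basic idea: fingerprint each hit on fields that tend to survive copying and keep only the first occurrence. The field names and the title-plus-size key are illustrative assumptions on my part; production engines typically fingerprint document content (hashes or shingles) rather than metadata.

```python
import hashlib

def dedupe(results):
    """Collapse result entries that appear to point at the same document."""
    seen, unique = set(), []
    for hit in results:
        # Normalize the fields most likely to match across duplicate copies.
        key_source = hit["title"].strip().lower() + "|" + str(hit.get("size", ""))
        key = hashlib.sha1(key_source.encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(hit)
    return unique

hits = [
    {"title": "Q3 Budget.xlsx", "path": r"\\srv\finance\Q3 Budget.xlsx", "size": 48213},
    {"title": "q3 budget.xlsx", "path": r"\\srv\archive\q3 budget.xlsx", "size": 48213},
]
print(dedupe(hits))  # only the first copy survives
```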

Switching Methodology

Each IT professional with Microsoft certification knows how to set up, configure, and maintain SharePoint and other “core” Microsoft server systems. Let’s look at a methodology for replacing SharePoint’s built-in search with ISYS Search Software’s ISYS:web. ISYS is one of a half-dozen vendors offering so-called “SharePoint Search” capabilities.

Here’s a rundown of a procedure that minimizes pitfalls:

  1. Set up a development server with SharePoint running. You don’t need to activate the search services. This can be on a computer running Windows Server 2003 or 2008. Microsoft recommends at a minimum a server with dual CPUs, each running at least 3 GHz, and 2 GB of memory. Also necessary for installation are Internet Information Services (IIS, along with its WWW, SMTP, and Common Files components), version 3.0 or greater of the .NET Framework, and ASP.NET 2.0. A more detailed look at these requirements can be found here.
  2. Create a single machine with several folders containing documents and content representative of what you will be indexing.
  3. Install ISYS:web 8 on the machine running SharePoint.
  4. Work through the configuration screens, noting the information required to add additional content repositories to index. An intuitive ISYS Utilities program will let you configure SharePoint indexes.
  5. Launch the ISYS indexing component. Note the time indexing begins and ends. You will need these data in order to determine the index build time when you bring the system up for production. (A minimal timing sketch appears after this list.)
  6. Run test queries on the indexed content. If the results are not what you expect, make a return visit to the ISYS set up screens, verify your choices, delete the index, and reindex the content collection. Be sure to check that entities are appearing in the ISYS display.
  7. Open the ISYS results template so you can familiarize yourself with the style sheet and the behind-display controls.
  8. Once you are satisfied that the basics are working, verify that ISYS is using security flags from Active Directory.
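
For step five, here is a minimal sketch of how one might wrap the index build and capture start, finish, and elapsed times. The index_build.exe command and its configuration flag are placeholders, not the actual ISYS invocation; substitute whatever your indexing engine uses.

```python
import subprocess
import time
from datetime import datetime

# Hypothetical command for kicking off an index build; replace with the
# real invocation your search engine documents.
INDEX_COMMAND = ["index_build.exe", "--config", r"C:\search\test_index.cfg"]

def timed_index_build(command):
    """Run an index build and report wall-clock start, finish, and elapsed time."""
    started = datetime.now()
    print(f"Index build started:  {started:%Y-%m-%d %H:%M:%S}")
    t0 = time.perf_counter()
    subprocess.run(command, check=True)  # raises CalledProcessError on failure
    elapsed = time.perf_counter() - t0
    finished = datetime.now()
    print(f"Index build finished: {finished:%Y-%m-%d %H:%M:%S}")
    print(f"Elapsed: {elapsed / 60:.1f} minutes")
    return elapsed

if __name__ == "__main__":
    timed_index_build(INDEX_COMMAND)
```

Scale the elapsed time by the ratio of production corpus size to test corpus size for a rough, first-cut estimate of the production build window.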

At this point, you can install ISYS on the production server and begin the process of generating the master index. Image files for the ISYS installation are available from ISYS. These include screen shots illustrating how to set up the ISYS index.

Some Gotchas to Avoid

First, when documents change, the search system must recognize that change, copy or crawl the document, and make the changed document available to the indexing subsystem. The new index entries must be added to the main index. When a slowdown occurs, check the resources available.

Second, keep in mind that new documents must be indexed and changed documents have to be reindexed. Setting the index update at too aggressive a level can slow down query processing. Clustering can speed up search systems, but you will need to allocate additional time to configure and optimize the systems.
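
Here is a minimal sketch of the change-detection idea behind that advice: poll the watched folders on a deliberately conservative interval and hand only new or modified files to the indexer. The share paths, the 15 minute interval, and the reindex callback are illustrative assumptions, not any vendor’s actual update mechanism.

```python
import os
import time

WATCH_DIRS = [r"\\fileserver\contracts", r"\\fileserver\reports"]  # example shares
POLL_INTERVAL_SECONDS = 15 * 60  # an intentionally unaggressive update cycle

def changed_since(root, cutoff):
    """Yield files under root whose modification time is newer than cutoff."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getmtime(path) > cutoff:
                    yield path
            except OSError:
                continue  # file vanished or is locked; catch it on the next pass

def incremental_pass(last_run, reindex):
    """Feed only new or changed documents to the indexer callback."""
    for root in WATCH_DIRS:
        for path in changed_since(root, last_run):
            reindex(path)

if __name__ == "__main__":
    last_run = 0.0
    while True:
        pass_started = time.time()
        incremental_pass(last_run, reindex=lambda p: print("queue for indexing:", p))
        last_run = pass_started
        time.sleep(POLL_INTERVAL_SECONDS)
```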

Third, additional text processing features such as deduplication, entity extraction, clustering, and generating suggestions or See Also hints for users suck computing resources. Fancy extras can contribute to sluggish performance. Finally, trim the graphical bells and whistles. Eye candy can get in the way of a user getting the required information quickly.

To sum up, SharePoint ships with a usable search-and-retrieval system. When you want to break through the current document barrier or add features quickly, you will want to consider a third-party solution. Regardless of the system you select, set up a development server and run shakedowns to make sure the system will deliver the results the users need.

Stephen Arnold, September 15, 2008

Google and ProQuest

September 15, 2008

The Library Journal story “ProQuest and Google Strike Newspaper Digitization Deal” puts a “chrome” finish on a David and Goliath story. Oh, maybe that is ProQuest and Googzilla? In the story my mother told me, David used a sling to foil the big, dumb Goliath. With some physics, Goliath ended up dead. You need to read Josh Hadro’s version of this tale here.

The angle is that Google will pay UMI–er, ProQuest–to digitize. For me the most important paragraph in the story was:

The deal leaves significant room for ProQuest to differentiate its Historical Newspapers offering, which contain such publications as the New York Times and Chicago Tribune, as a premium product in terms of added editorial effort and the human intervention required to make its selectively scanned materials more discoverable and useful to expert researchers. In contrast to scanning by Google, editors hired by ProQuest check headlines, first paragraphs, captions, and more to achieve their claim of “99.95 percent accuracy.” In addition, metadata is added along with tags describing whether the scanned content is an article, opinion piece, editorial cartoon, etc. Finally, ProQuest stresses that the agreement does not affect long-term preservation plans for the microfilm collection. “Microfilm will always be the preservation medium…”

Three thoughts:

  1. Commercial databases are starting to face rough water. Google, though not problem-free, faces rough water with a nuclear-powered stealth war craft. UMI–er, ProQuest–has a birch bark canoe.
  2. Once the data are in the maw of the GOOG, what’s the outlook for UMI–er, ProQuest? In my opinion, this is a short term play with the odds in the mid and long term favoring Google.
  3. Will the Cambridge Scientific financial wizards be able to float the Dialog Information Services boat, breathe life into library sales, and make the “microfilm will always be the preservation medium” a categorical affirmative? In my opinion, the GOOG has its snoot in the commercial database business and will disrupt it, sending incumbent leaders into a tizzy.

Yes, and the point about David and Goliath. I think Goliath wins this one. Agree? Disagree? Help me learn. Just bring facts to the party.

Stephen Arnold, September 15, 2008

Google: With Maturity Cometh Fear

September 15, 2008

CIOL News reported on September 13, 2008, “Google Mobile Chief Says Can’t Afford a Dud.” You can read the story by Yinka Adegoke and Eric Auchard here. The peg for the write up is that a Googler (Andy Rubin, director of mobile platforms) told folks that Android had to be a success. Not long ago, Google would roll out a beta and walk away carefree. Now, it seems, the company recognizes that a foul up with Android might chip one of Googzilla’s fangs. CIOL News does a good job of summarizing the promise, the disappointments, and the status of Android. For me, the most important statement in the article was this passage:

Google plans its own software store, called Android Market. “It’s not necessarily the operating system software that is the unifying factor, it is the marketplace,” Rubin said. Unlike Apple, Google does not expect to generate revenue by selling applications or to share revenue with partners. “We made a strategic decision not to revenue share with the developers. We will basically pass through any revenue to the carrier or the developer,” said Rubin.

I found this interesting, but a trifle off center with some of the research I have done for my two Google studies here. Let me highlight three thoughts and invite you to purchase a copy of my studies to get more detail.

First, Google’s telephony related inventions span a wide range of technologies. While the marketplace is important, the investment Google has made in its telco inventions suggests that the marketplace may be the current focus, not the only focus, particularly over a span of years.

Second, Google, like Microsoft, is behind the eight ball when it comes to Apple. The iPhone is a game changer, and the ecosystem that Apple has in place, and which is already generating money, has momentum. Google and Microsoft have words and some devices that are not yet in the iPhone’s league.

Third, mobile is a big deal, and I found a number of patent documents that suggest that Google is headed down the path to a walled garden. Right now, I don’t think that aspect of the Google strategy has been analyzed fully. The battle, therefore, may not be the one that most pundits write about; namely, Google and Microsoft. There are other wars to fight and soon.

Agree? Disagree? Help me learn.

Stephen Arnold, September 15, 2008

Future of Business Intelligence

September 15, 2008

Chris Webb penned a thoughtful and interesting article about the future of business intelligence, “Google, Panorama, and the Future of BI,” here. A number of the comments touch upon delivering business intelligence from the cloud. Take a look at his write up. For me the most interesting point was:

It [cloud based business intelligence] all depends on how quickly the likes of Google and Microsoft (which is supposedly going to be revealing more about its online services platform soon) can deliver usable online apps; they have the deep pockets to be able to finance these apps for a few releases while they grow into something people want to use…

What struck me about this comment is that it suggests that the future of business intelligence will be determined by two companies that are not particularly well known for their business intelligence offerings. What becomes of SAP, SAS, and SPSS (just to name the companies whose names begin with “s”)?

What do you think? A two horse race or a couple of nags not sure where the race track is? Let me know.

Stephen Arnold, September 15, 2008

Privacy: One of Google’s Seven Deadly Sins?

September 15, 2008

The Register, CNet, and other Web information services were abuzz over Google’s clarification of its data retention policy. The story originated on CNet in an article keyboarded by Chris Soghoian here. The story was titled “Debunking Google’s Log Anonymization Propaganda.” In a real coup, Mr. Soghoian elicited a response from the GOOG that will be a favorite of mine for many years to come. Mr. Soghoian asked Google for clarification, and the GOOG replied:

After nine months, we will change some of the bits in the IP address in the logs; after 18 months we remove the last eight bits in the IP address and change the cookie information. We’re still developing the precise technical methods and approach to this, but we believe these changes will be a significant addition to protecting user privacy…. It is difficult to guarantee complete anonymization, but we believe these changes will make it very unlikely users could be identified…. We hope to be able to add the 9-month anonymization process to our existing 18-month process by early 2009, or even earlier.
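
For readers wondering what “remove the last eight bits” amounts to, here is a minimal sketch of that masking step. Note that zeroing the final octet still leaves the address within a block of 256 possible machines, which is one reason the critics remain unimpressed.

```python
import ipaddress

def drop_last_octet(ip_string):
    """Zero the final eight bits of an IPv4 address, e.g. 203.0.113.87 -> 203.0.113.0."""
    ip = ipaddress.IPv4Address(ip_string)
    return str(ipaddress.IPv4Address(int(ip) & 0xFFFFFF00))

print(drop_last_octet("203.0.113.87"))  # 203.0.113.0 -- still narrows a user to 256 hosts
```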

The Register picked up the story here in “Google’s Privacy Reform Is a Hoax.” For me, the most interesting point in this article was:

What Google plans on doing means that it will still be able to track its users’ web search histories longer than nine months. And if, as one might be forgiven for suspecting, Google never clears users’ cookie identifiers, then it can track them forever. Without clearing its users’ cookie identifiers, Google’s widely praised, supposed “reform” of its individually identifying data retention practices is meaningless, and no true reform.

I am now making a catalog of Google’s Seven Deadly Sins. Privacy is definitely a candidate, or should I consider mendacity? Watch this Web log for my decision and the other six sins. I may need to expand the limit in this Zeta Function. Stay tuned.

Stephen Arnold, September 15, 2008

More HP Search Related Information

September 14, 2008

One of my two or three readers sent along some kind words about Hewlett Packard. According to this professional, HP has been writing interesting white papers about search for a number of years. I dipped into the company’s Web site, and it seemed to me that HP was turning up the heat on search and content processing. I wanted to pass along one white paper recommended by your fellow reader that I found quite interesting. The subject is Lucene, the open source search engine that lurks at the heart of the IBM Yahoo “free” search system. The paper, by Mark Butler and James Rutherford, is “Distributed Lucene: A Distributed Free Text Index for Hadoop.” Free is good. Hadoop is better. Distributed is the best. The paper became available in June 2008, and you can download it by navigating to http://www.hpl.hp.com/techreports/2008/HPL-2008-64.pdf. I have had quite a bit of trouble locating information on the HP Web sites. I can’t guarantee that this link will be valid for months. I verified it on September 9, 2008. The paper is useful, but I liked Section 1.2.6 “Current Limitations”. Enjoy and a happy quack to the canny Beyond Search reader who submitted this link. The goose loves you. Your auto’s paint job is safe–for now.

Stephen Arnold, September 14, 2008

Attensity and BzzAgent: What’s the Angle?

September 14, 2008

Attensity made a splash in the US intelligence community after 2001. A quick review of Attensity’s news releases suggests that the company began shifting its marketing emphasis from In-Q-Tel related entities to the enterprise in 2004-2005. By 2006, the company was sharpening its focus on customer support. Now Attensity is offering a wider range of technologies to organizations that want to use them to deal with their customers.

In August 2008, the company announced that it had teamed up with the oddly named BzzAgent, a specialist in word of mouth media, to provide insights into consumer conversations. You can learn more about WOM–that is, word of mouth marketing–at the company’s Web site here.

The Attensity technology makes it possible for BzzAgent to squeeze meaning out of email or any other text. With the outputs of the Attensity system, BzzAgent can figure out whether a product is getting marketing lift or down draft. Other functionality provides beefier metrics to buttress BzzAgent’s technology.

The purpose of this post is to ask a broader question about content processing and text analytics. To close, I want to offer a comment about the need to find places to sell rocket science information technology.

Why Chase Customer Support?

The big question is, “Why chase customer support?” Call centers, self service Web sites, and online bulletin board systems have replaced people in many organizations. In an effort to slash the cost of support, organizations have outsourced help to countries with lower wages than the organization’s home country. In an interesting twist of fate, Indian software outsourcing firms are sending some programming and technical work back to the US. Atlanta has been a beneficiary of this reverse outsourcing, according to my source in the Peach State.

Attensity’s technology performs what the company once described as “deep extraction.” The idea is to iterate through source documents. The process outputs metadata, entities, and a wide range of data that one can slice, dice, chart, and analyze. Attensity’s technology is quite advanced, and it can be tricky to optimize to get the best performance from the system on a particular domain of content.
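
To give a feel for the shape of an extraction pass, here is a minimal sketch that iterates over documents and emits typed entities. The regular expressions are toy stand-ins chosen for illustration; Attensity’s actual linguistic technology is far more sophisticated than pattern matching.

```python
import re
from collections import defaultdict

# Toy patterns standing in for real linguistic extraction.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "money": re.compile(r"\$\d[\d,]*(?:\.\d{2})?"),
    "phone": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
}

def extract(documents):
    """Iterate source documents and emit entity type -> list of hits per document."""
    results = []
    for doc_id, text in documents:
        hits = defaultdict(list)
        for label, pattern in PATTERNS.items():
            hits[label].extend(pattern.findall(text))
        results.append({"doc": doc_id, "entities": dict(hits)})
    return results

sample = [("ticket-001", "Refund of $79.99 promised; contact help@example.com or 502-555-0142.")]
for row in extract(sample):
    print(row)
```

The output — a document identifier plus typed entities — is the sort of material one can then slice, dice, chart, and analyze.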

Customer support appears to be a niche that functions like a hamburger to a hungry fly buzzing around tailgaters at the college football game. Customer support, despite vendors’ efforts to reduce costs and keep customers happy, has embraced every conceivable technology. There are the “live chat” telepresence services. These work fine until the company realizes that customers may be in time zones when the company is not open for business. There are the smart systems like the one Yahoo deployed using InQuira’s technology. To see how this works, navigate to Yahoo help central, type this question “How do I cancel premium email?”, and check out the answers. There are even more sophisticated systems deployed using tools from such companies as RightNow. This firm includes work flow tools and consulting to improve customer support services and operations.

The reason is simple–customer support remains a problem, or as the marketers say, “An opportunity.” I know that I avoid customer support whenever possible. Here’s a typical example. Verizon sent me a flier that told me I could reduce my monthly wireless broadband bill from $80 to $60. It took a Web site visit and six telephone calls to find out that the lower price came with a five gigabyte bandwidth cap. Not only was I stressed by the bum customer support experience, I was annoyed at what I perceived, rightly or wrongly, as the duplicity of the promotion. Software vendors jump at the chance to license a better mousetrap to Verizon. So far, costs may have come down for Verizon, but this mouse remains far away from the mousetrap.

The new spin on customer support rotates around one idea: find out stuff *before* the customer calls, visits the Web site, or fires up a telepresence session.

That’s where Attensity’s focus narrows its beam. Attensity’s rocket science technology can support zippy new angles on customer support; for example, BzzAgent’s early warning system.

What’s This Mean for Search and Content Processing?

For me that is the $64 question. Here’s what I think:

  1. Companies like Attensity are working hard to find niches where their text analytics tools can make a difference. By signing licensing deals with third parties like BzzAgent, Attensity gets some revenue and shifts the cost of sales to BzzAgent’s team.
  2. Attensity’s embedding or inserting its technology into BzzAgent’s systems deemphasizes or possibly eliminates the brand “Attensity” from the customers’ radar. Licensing deals deliver revenue with a concomitant loss of identity. Either way, text analytics moves from the center stage to a supporting role.
  3. The key to success in Attensity’s marketing shift is getting to the new customers first. A stampede is building from other search and content processing vendors to follow a very similar strategy. Saturation will lower prices, which will have the effect of making the customer support sector less attractive to text processing companies than it is now. ClearForest was an early entrant, but now the herd is arriving.

The net net for me is that Attensity has been nimble. What will the arrival of other competitors in the customer support and call center space mean for this niche? My hunch is that search and content processing is quickly becoming a commodity. Companies just discovering the customer support market will have to displace established vendors such as InQuira and Attensity.

Search and content processing certainly appear to be headed rapidly toward commoditization unless the vendor can come up with a magnetic value-add.

Stephen Arnold, September 14, 2008

Google: How Many Achilles’ Heels?

September 14, 2008

The essay “Another Step to Protect User Privacy” on the Official Google Blog triggered a wave of commentary on Web logs and news services. I’m writing this commentary on September 9, 2008, but it will post on Sunday, September 14, 2008, as I fly to the Netherlands for a conference. I won’t rehash the arguments stated ably and well in the hundreds of links available on various news aggregation sites. Instead, I want to highlight the key sentence in the post and then offer several observations. For me the key sentence was:

We’ll anonymize IP addresses on our server logs after 9 months.

The rule of thumb in online information is that the most current data are the most useful. In any online system, historical data are useful as a base. The action is the most recent data. I’ve seen this in the most accessed documents in a commercial database, in our The Point (Top 5% of the Internet) service in the mid 1990s, and I see it now on this Web log. This means that nine months is a long time when it comes to log and usage data. Think of the baseline of data as a bank vault filled with gold bricks. The current and timely data are the freshly mined ore. Once processed, the high value component can be put in the vault and tapped when needed.
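
One way to see why nine months is an eternity in usage data: assume, purely for illustration, that a log record loses half its analytic value every 30 days. That half-life is my invention, not a Google figure, but under it a nine-month-old record contributes almost nothing to a picture of current behavior.

```python
import math

HALF_LIFE_DAYS = 30.0  # assumed for illustration only

def recency_weight(age_days, half_life=HALF_LIFE_DAYS):
    """Relative value of a log record that decays by half every half_life days."""
    return math.exp(-math.log(2) * age_days / half_life)

for age in (0, 30, 90, 180, 270):
    print(f"{age:>3} days old -> weight {recency_weight(age):.3f}")
# 270 days (nine months) comes out near 0.002 -- effectively background noise.
```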

You don’t need me to reiterate that the issues of privacy and security are important. Both are intertwined, and I am uniformly critical of online systems that don’t pay much attention to these issues. Martin White and I have a new monograph nearing completion, and we are collaborating on the security section. I won’t repeat our arguments in detail. A one-word summary is “Important”.

Privacy and security, therefore, are an Achilles’ heel for many companies, including Google. Google gets headlines because people have been slow to realize what the company has been building for upwards of a decade. Messrs. Brin and Page started with BackRub, learned from AltaVista.com, benefited from the portal craze, borrowed the Overture ad model, and befuddled everyone with lava lamps. Now folks are starting to realize that Google is a different kind of company. I won’t even say “I told you so.” My Google studies made this clear years ago.

The thoughts in my addled goose brain are the following:

  1. Google is not a search and advertising company. Those are applications running on what Google is. The company is the 21st century version of Ma Bell, US Steel, and Standard Oil. The problem is that those outfits were confined to one nation state. Google is supra national; that is, it’s operating across nation states. This makes it tough to regulate.
  2. Security and privacy are one point of vulnerability, but one off challenges won’t make much difference.
  3. Google’s diffusion of its original ethos is another Achilles’ heel. In the last year, the company has been dinged for going in many directions with little apparent focus. I’m not so sure. Google’s quite good at misdirection.
  4. Google’s now in the public eye, and the company is finding itself having to reverse directions, often quickly. The license agreement for Chrome is one example. The change in user data retention is another.

How many Achilles’ heels does Google have? I refer to Google as Googzilla and have since late 2004. That means Google has four key vulnerabilities. So far, none of the charges directed at Google have aimed at these weaknesses. As long as the critics target Google’s tough, protective hide, there is little chance of [a] leapfrogging Google’s technology or [b] knocking out one of its four legs.

Stephen Arnold, September 14, 2008

For SharePoint and Dot Net Fans: The London Stock Exchange Case

September 13, 2008

Cyber cynic Steven J. Vaughan-Nichols wrote “London Stock Exchange Suffers Dot Net Crash”. You should click here and read this well-written post. Do it now, gentle readers. The gist of the story is that LSE, with the help of Accenture and Microsoft, built a near real time system running on lots of Hewlett Packard upscale servers, Windows Server 2003, and my old pal, SQL Server 2000. The architecture was designed to run really fast, a feat my team has never achieved with Windows Server or SQL Server without lots of tricks and lots of scale-up and scale-out work. The LSE crashed. For me the most significant statement in the write up was:

Sorry, Microsoft, .NET Framework is simply incapable of performing this kind of work, and SQL Server 2000, or any version of SQL Server really, can’t possibly handle the world’s number three stock exchange’s transaction load on a consistent basis. I’d been hearing from friends who trade on the LSE for ages about how slow the system could get. Now, I know why.

Why did I find this interesting? Three reasons:

  1. There’s a lot of cheerleading for Microsoft SharePoint. This LSE meltdown is a reminder that even with experts and resources, the Dot Net / Windows Server / SQL Server triumvirate gets along about as well as Pompey, Crassus, and Caesar. Pretty exciting interactions with this group.
  2. Microsoft is pushing hard on cloud computing. If the LSE can’t stay up, what’s that suggest for mission critical enterprise applications running in Microsoft’s brand new data centers on similar hardware and using the same triumvirate of software?
  3. Speed and Dot Net are not like peanut butter and jelly or ham and eggs. Making Microsoft software go fast requires significant engineering work and sophisticated hardware. The speed ups don’t come in software, file systems, or data management methods. Think really expensive engineering year in and year out.

I know there are quite a few Dot Net fans out there. We have it running on one of our servers. Are your experiences like mine, generally good? Or are your experiences like the LSE’s, less than stellar? Oh, Mr. Vaughan-Nichols asserts that the LSE is starting to use Linux on its hardware.

Stephen Arnold, September 13, 2008

