Punching Google in the Snoot

August 25, 2008

The San Jose Mercury News, Google’s hometown newspaper, points out lousy decisions at the Mountain View firm. Chris O’Brien wrote “Google’s Ventures Outside Search Fail to Pay Dividends”. The subtitle is even more direct: “Google to face first real test of its leadership as ventures outside search fail to show dividends.” You must read Mr. O’Brien’s story here.

For me, the most interesting point in the write up was this statement:

all those high-profile ventures the company has launched, and the acquisitions it’s made, have yet to contribute much to the bottom line. In a filing with the Securities and Exchange Commission, the company noted that revenue from services such as YouTube, Google Checkout and a host of others ‘were not material.’

“Not material” is a code word for worthless. Even more galling is that this story puts some wood behind a remark I recall hearing about Google from a Microsoft professional: “Google’s a one trick pony.”

That trick continues to spin money, but Google is now officially fallible, a label that must sting the Googlers.

My research suggests that Google’s short term flops cannot be interpreted as the longer term trajectory of the company. Here are three points from my 2007 Google Version 2.0 study for Infonortics, an outfit located near Oxford, England:

  1. Google focused on search and built a good system by leveraging competitors’ indifference and the good fortune of having AltaVista.com engineers available, thanks to Hewlett Packard’s cluelessness about online.
  2. Google discovered that by solving some problems in search, the resulting infrastructure could do other functions quite well. The first big other function was running a reworked GoTo.com/Overture.com ad engine.
  3. Google’s infrastructure is an application platform which can be repurposed without too much effort if you are a Google class brain.

The net net is that Google only has to get traction in one or two tangential business sectors to generate new revenue. My research indicates that a “blast off” will generate a fraction of the core business revenue, but if the area is mobile services or enterprise applications, these markets are big enough for the revenue contribution to satisfy Wall Street’s greedy appetite.

I agree with Mr. O’Brien’s analysis in general. But I’m not sure I want to count Google out just yet. Google is one tiny step from becoming a commercial publisher and a video production company. The company can mow through other business sectors quickly and put effort only into those where money begins to flow. That’s what makes Google a threat in the short term and for the longer term as well.

Stephen Arnold, August 25, 2008

Microsoft Search Executive: Scorecard Update

August 22, 2008

I have a tough time keeping track of Microsoft “search” executives. Imagine my surprise when I read the following in Network World here:

Microsoft has appointed former Multimap CEO Jeff Kelisky to be the general manager of a new business unit focused on commercial search

I’m not sure what “commercial search” means. Elizabeth Montalbano, who wrote the story that caught my attention, “Microsoft taps Multimap CEO to Steer Commercial-Search Unit”, is a pretty clear writer. She clarified my understanding (a little, I think). She writes:

the new unit would be a part of Microsoft’s larger Search Business Group, the general manager of which is Brad Goldberg.

Ms. Montalbano mentions Microsoft “search” guru Satya Nadella, the boss of Microsoft search and portal advertising. She mentions Chris Liddell. She does not mention Gary Flake, former Yahoo search guru.

Please, read the story yourself and let me know if you can help me answer these questions:

  1. What happened to the top dogs at Fast Search & Transfer and Powerset?
  2. Who is in charge of SharePoint “commercial” search?
  3. Who is in charge of search in other Microsoft products like Dynamics?
  4. What is “commercial search”?

I guess I’m not smart enough to understand who these folks are or what their plan is to close the modest market gap between Google and other search engines, including those available from Microsoft. Help me out, please.

Stephen Arnold, August 22, 2008

Powerset as Antigen: Can Google Resist Microsoft’s New Threat?

August 20, 2008

I found the write ups in WebProNews, Webware.com, and Business Week about Satya Nadella’s observations on Microsoft’s use of the Powerset technology mesmerizing. Each of these write ups converged on a single key idea; namely, Microsoft will use the Powerset / Xerox PARC technology to exploit Google’s inability to tailor the search experience to the individual user. The media attention directed at a conference focused on generating traffic to a Web site without regard to the content on that site, its provenance, or its accuracy is downright remarkable. Add to that the assertion that Powerset will hobble the Google, and I may have to extend my anti-baloney shields another 5,000 kilometers.

Let’s tackle some realities:

  1. To kill Google, a company has to jump over, leapfrog, or out-innovate Google. Using technology that dates from the 1990s, poses scaling challenges, and must be “hooked” into the existing Microsoft infrastructure is a way to narrow a gap, but it’s not enough to do much to wound, impair, or kill Google. If you know something about the Xerox PARC technology that I’m missing, please, tell me. I profiled Inxight Software in one of my studies. Although different from the Xerox PARC technology used by Powerset, it was close enough to identify some strengths and weaknesses. One issue is the computational load the system imposes. Maybe I’m wrong, but scaling is a big deal when extending “context” to lots of users.
  2. Microsoft is slipping further behind Google. The company is paying users, and it is still losing market share. Read my short post on this subject here. Even if the data are off by an order of magnitude, Microsoft is not making headway in Web search market share.
  3. Cost is a big deal. Microsoft appears to have unlimited resources. I’m not so sure. If Google’s $1 of infrastructure investment buys 4X the performance that a Microsoft $1 does, Microsoft has an infrastructure challenge that could cost more than even Microsoft can afford.

So, there are computational load issues. There are cost issues. There are innovation issues. There are market issues. I must be the only person on the planet who is willing to assert that small-scale search tweaks will not have the large-scale effects Microsoft needs.

Forget the assertion that Business Week offers when it says that Google is moving forward. Google is not moving forward; Google is morphing into a different type of company. “Moving forward” only tells part of the story. I wonder if I should extend my shields of protection to include filtering baloney about search emanating from a conference focused on tricking algorithms into putting a lousy site at the top of a results list.

Agree? Disagree? I’m willing to learn if my opinions are scrambled.

Stephen Arnold, August 20, 2008

Microslump: If Search Data Are Accurate, Bad News for Microsoft

August 20, 2008

Statistics are malleable. Data about online usage are not just malleable, they are diaphanous. Silicon Alley Insider reported Web search market share data here. The article by Michael Learmonth was “Google Takes 60% of Search Market, While MSN Loses Share.” The highlight of the write up is a chart, which I am reluctant to reproduce. I can, I believe, quote one statement that struck me as particularly important; namely:

MSN, which lost more than two percentage points of market share from month to month, going from 14.1% of searches to 11.9%. So if Microsoft’s “Cashback” search engine shopping gimmick actually helped boost search share in May and June, its impact seems to be dropping.

The data come from Nielsen Online, specifically the cleverly named MegaView Search report. Wow, after pumping big money into data centers, buying Fast Search & Transfer and Powerset, and ramping up search research and development, the data suggest that:

  • A desktop monopoly doesn’t matter in search
  • Microsoft’s billions don’t matter in search
  • Aggressive marketing, such as the forced download for Olympic content, doesn’t matter in search

Google is like one of those weird quantum functions that defy comprehension. What else must Redmond do? Send me your ideas for closing the gap between Microsoft and Google.

Stephen Arnold, August 20, 2008

Five Tips for Reducing Search Risk

August 20, 2008

In September 2008, I will be participating in a conference organized by Dr. Erik M. Hartman. One of the questions he asked me today might be of interest to readers of this Web log. He queried by email: “What are five tips for anyone who wants to start with enterprise search but has no clue?”

Here’s my answer.

That’s a tough question. Let me tell you what I have found useful when starting a new project with an organization that has a flawed information access system.

First, identify a specific problem and do a basic business-school or consulting-firm analysis of it. This is actually hard to do because so many organizations assume, “We know everything about our needs.” That’s wrong. From inside a set, you can’t see much other than the other elements of the set. Problem analysis gives you a better view of the universe of options; that is, other perspectives and context for the problem.

Second, get management commitment to solve the problem. We live in a world with many uncertainties. If management is not behind solving the specific problem you have analyzed, you will fail. When a project needs more money, management won’t provide it. Without investment, any search and content processing system will sink under its own weight and the growing body of content it must process and make available. I won’t participate in projects unless top management buys in. Nothing worthwhile comes easily or economically today.

Read more

Silverlight Analysis: Not Quite Gold, Not Too Light

August 19, 2008

In my keynote at Information Today’s eContent conference in April 2008, I referenced Silverlight’s importance to Microsoft. Since most organizations rely on Windows desktop operating systems and applications, Silverlight is a natural fit for many of them. I also suggested that Silverlight would play a much larger role in online rich media. I was not able at the time to reference the role Silverlight would play in the Beijing Olympics. Most in the audience of about 150 big-time media executives were not familiar with the technology, nor did they see much connection between their traditional media operations and Silverlight. Now that the Olympics have been deemed a success for both Microsoft and NBC, I hope that some of the big media mavens understand that rich media may be important to the survival of many information organizations. I’m all for printed books and journals, but the future belongs to video and other TV-style material.

Tim Anderson’s excellent analysis of Silverlight is available in The Register, one of my favorite news services. The analysis is “Microsoft Silverlight: 10 Reasons to Love It, 10 Reasons to Hate It”, and you should read it here. Unlike most of the top 10 lists that are increasingly common on Web logs, Mr. Anderson’s analysis is based on a solid understanding of what Silverlight does and how it goes about its business. The write up delivers the advertised 10 strengths and 10 weaknesses, and he supports each point with a useful technical comment.

Let me illustrate just one of his 20 points, and then you can navigate to The Register for the other 19 items. Item five in the plus column, for example, is that XAML, Microsoft’s extensible application markup language, is interpreted directly by Silverlight, “whereas Adobe’s XML GUI language, MXML, gets converted to SWF at compiling time. In fact, XAML pages are included as resources in the compiled .XAP binary used for deploying Silverlight applications.”

Mr. Anderson also includes one of those wonderful Microsoft diagrams that show how Microsoft’s various moving parts fit together. I download these immediately because they come in handy when explaining why it costs an arm and a leg to troubleshoot some Microsoft enterprise applications. This version of the chart about Silverlight requires that you install Silverlight. Now you get the idea about Microsoft’s technique for getting its proprietary technology on your PC.

A happy quack to Tim Anderson for a useful analysis.

Stephen Arnold, August 19, 2008

Search Engine Optimization Meets Semantic Search

August 19, 2008

I’ve been sitting in the corn fields of Illinois for the last six days. I have been following the SES (Search Engine Strategies) Conference via the Web. If you have read some of my previous posts about the art of getting traffic to a Web page, you know my views of SEO. In a word, “baloney.” Web sites without content want to get traffic. The techniques used range from link trading to meta tag spamming. With Google the venturi for 70 percent of Web search, SES is really about spoofing Google. Google goes along with this stuff because the people without traffic will probably give AdWords a go when the content-free tricks don’t work reliably.

I was startled when I read the summary of the panel “Semantic Search: How Will It Change Our Lives?” The write up I saw was by Thomas McMahon, and it seemed better than the other posts I looked at this evening. You can read it here. The idea behind the panel is that “semantic search” goes beyond key words.

This has implications for people who stuff content-free Web pages with index terms. Google indexes using words, and sometimes the meta tags play a role as well. If semantic search catches on, people will not search by key words; they will ask questions. The idea is that instead of typing Google +”semantic Web” +Guha, I would type, “What are the documents by Ramanathan Guha that pertain to the semantic Web?” Mr. Guha helped write the RDF standard several years ago. He’s a semantic Web guru, maybe the Yoda of the semantic Web?
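
To make the contrast concrete, here is a toy sketch of my own, not Powerset’s, Hakia’s, or Google’s technology: a keyword engine only checks that the literal terms appear on a page, which is exactly what meta tag stuffing tries to game, while a semantic engine must first extract structured facts from genuine content before it can answer a question.

    # Toy illustration only; the page text and the fact triple are invented for the example.
    page_text = "Ramanathan Guha helped write the RDF standard for the semantic Web."

    # Keyword search: every literal term must appear somewhere on the page.
    keyword_terms = ["semantic web", "guha"]
    keyword_hit = all(term in page_text.lower() for term in keyword_terms)

    # A semantic engine needs facts pulled from real content, for example
    # (person, relation, topic) triples, before it can answer a question.
    facts = [("Ramanathan Guha", "wrote_about", "semantic Web")]
    question_answerable = any(
        person == "Ramanathan Guha" and topic == "semantic Web"
        for person, _relation, topic in facts
    )

    print(keyword_hit, question_answerable)  # both True only because real content exists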

[Image: snake oil. Source: http://www.kimrichter.com/Blog/uploaded_images/snakeoil_1-794216.jpg]

Participating in this panel were Powerset (Xerox PARC technology plus some original code), Hakia (original technology and a robust site), Ask.com (I’m not sure where it falls on the semantic scale since the rock band wizard from Rutgers cut out), and Yahoo (poor, fragmented Yahoo).

The elephant in the room but not on the panel is Google, a serious omission in my opinion. Microsoft R&D has some hefty semantic talent as well, also not on the panel.

In my opinion the semantic revolution is going to make life more difficult for the SEO folks. Semantic methods require content. Content-free Web sites are going to struggle for traffic unless they take one of several actions:

  1. Create original, compelling information. I just completed an analysis of a successful company’s Web site. It was content free. It had zero traffic. The shortcut to traffic is content. The client lacks the ability to create content and doesn’t understand that people who create content charge money for their skills. If you don’t have content, go to item two below.
  2. Buy ads. Google’s traffic is sufficiently high that an ad with appropriate key words will get some hits. Buying ads is something SES attendees understand. Google understands it. You may need to pump $20,000 per month into Googzilla’s maw, but you will get traffic.
  3. Combine items one and two.
  4. Buy a high traffic Web site and shoehorn a message into it. There are some tasty morsels available. Go direct and eliminate the hassle and delay of building an audience. Acquire one.

Most SEO consulting is snake oil, and expensive snake oil at that. The role of semantic methods will be similar to plumbing: important, but like the pipes that carry water, something I don’t have to see. The pipes perform a function. Semantics and SEO are a bit of an odd couple.

Stephen Arnold, August 19, 2008

Search Engine Plumbing Revealed

August 19, 2008

Explaining search is a very difficult business. I want to recommend “The Linear Algebra Behind Search Engines” by Amy Langville. The discussion was developed several years ago and is now available without charge on MathDL, a service of the Mathematical Association of America’s Digital Library. You can find the excellent write up here. Dr. Langville does include equations, something most publishers quickly delete from books and reports about search and content processing. Useful comments and explanatory material set this essay apart. If you are interested in the inner workings of some of the search methods in use today, this is must-read material. Two (yes, two) happy quacks to Dr. Langville and her excellent work. Now on the College of Charleston faculty, Dr. Langville has a Ph.D. in operations research from North Carolina State University. She is a recipient of the multi-year CAREER Award from the National Science Foundation. She has a new book about Google’s PageRank method in the works.
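
To give a flavor of the linear algebra Dr. Langville covers, here is a minimal sketch of my own, not taken from her article, of the classic vector space model: documents and queries become term vectors, and ranking reduces to the cosine of the angle between them.

    # A minimal vector space model sketch (my illustration, not Dr. Langville's code).
    # Rows of the term-document matrix are terms; columns are documents.
    import numpy as np

    terms = ["search", "engine", "linear", "algebra"]
    A = np.array([
        [2, 0, 1],   # "search"
        [1, 1, 0],   # "engine"
        [0, 2, 1],   # "linear"
        [0, 1, 2],   # "algebra"
    ], dtype=float)

    # The query "linear algebra" expressed as a term vector.
    q = np.array([0, 0, 1, 1], dtype=float)

    # Cosine similarity between the query and each document column.
    scores = (A.T @ q) / (np.linalg.norm(A, axis=0) * np.linalg.norm(q))
    for doc, score in enumerate(scores, start=1):
        print(f"document {doc}: {score:.3f}")  # documents 2 and 3 outrank document 1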

Stephen Arnold, August 19, 2008

GraphOn vs. Google

August 18, 2008

Patents are complicated. Software patents are even more complicated. GraphOn, a publicly traded company with the motto “Fast and Secure Application Access”, asserts that Google has infringed on GraphOn patents. Forbes magazine has a good summary here. GraphOn’s technology includes systems and methods for cloud-based services. One bone of contention pertains to data management.

The GraphOn organization has pressed claims against Juniper Networks, AutoTrader.com, and other high-profile outfits. Some of Google’s highest-profile services may be affected, including Google Base and Google AdWords. Google has a number of patents for its own systems and methods. A partial list of these is available at ArnoldIT.com here. Some of the information from my study of selected Google inventions may be located by navigating here and entering the phrase Google patents in the search box. I do maintain a relatively complete listing of Google’s patent documents, but this information is available to my clients. If you are interested in accessing these data, write me at seaky2000 at yahoo dot com for more information. My Google Version 2.0 study reviews a number of Google’s patent documents, including some references to Google’s approach to data management, publishing, and a number of innovation drivers; that is, inventions in which Sergey Brin or Larry Page play a role.

Keep in mind that I am not a legal eagle. My discussion of these inventions is intended to share my findings about how certain Google innovations enable certain applications. As Google’s influence grows, legal charges are likely to increase as well. Google has a number of legal matters underway, some involving data management systems and methods. Patent litigation is slow and expensive. Information will dribble out, making it difficult to know exactly what’s happening. What’s clear is that GraphOn believes it has a strong case based on its patents:

  • 6,324,538, Automated on-line information service and directory, particularly for the world wide web
  • 6,850,940, Automated on-line information service and directory, particularly for the world wide web
  • 7,028,034, Method and apparatus for providing a dynamically-updating pay-for-service web site
  • 7,269,591, Method and apparatus for providing a pay-for-service web site

You can get more information about each of these from the search system at the US Patent & Trademark Office. Remember to check your query syntax. It must match the sample searches in order to get goodies from the USPTO’s wonderful system. I am making no warranties or guaranties about these references. You will need to verify these numbers and titles yourself.

The ZDNet discussion of this issue is here.

Stephen Arnold, August 18, 2008

SharePoint: Custom Search Scopes

August 18, 2008

A reader sent me a link to SearchWinIT at TechTarget.com. The article explains how to “Create Custom Global Search Scopes in Microsoft SharePoint 2007.” The author is Natalya Voskresenskaya, and you can read the full text here. A “search scope” is a narrowing function. It’s somewhat like setting up a collection of documents and then routing specific users’ queries to that collection. The idea is that the content in the scope (“collection”) will be more appropriate. For example, the marketing department needs access to content from two departments and the documents reside in specific folders. A scope allows a user in marketing to get hits from that specific subset of content. SharePoint has other documents in its index, but the marketing person sees documents from that scope. The article does a good job of explaining the procedure to set up a scope.
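
To illustrate the idea only, here is a small sketch of what a scope does; this is not SharePoint’s object model or administrative interface, just the concept of a query routed to a defined subset of the index, such as a pair of folders.

    # Conceptual sketch of a search scope; not SharePoint code.
    documents = [
        {"path": "/marketing/plan.docx", "text": "2009 campaign plan"},
        {"path": "/engineering/spec.docx", "text": "search connector spec"},
        {"path": "/marketing/brief.docx", "text": "product launch plan brief"},
    ]

    # A scope is a named rule that narrows the index, here by folder prefix.
    scopes = {"Marketing": lambda doc: doc["path"].startswith("/marketing/")}

    def scoped_search(query, scope_name):
        """Run a keyword match, but only over the documents inside the scope."""
        in_scope = (doc for doc in documents if scopes[scope_name](doc))
        return [doc["path"] for doc in in_scope if query.lower() in doc["text"].lower()]

    print(scoped_search("plan", "Marketing"))
    # ['/marketing/plan.docx', '/marketing/brief.docx']; engineering content stays invisible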

Stephen Arnold, August 18, 2008
