Microsoft and SEO

August 23, 2009

Whilst poking around for the latest Microsoft search information, I came across a Web site called Internet Information Services at www.iis.net. I was curious because the write up on the Web site said:

The Site Analysis module allows users to analyze local and external Web sites with the purpose of optimizing the site’s content, structure, and URLs for search engine crawlers. In addition, the Site Analysis module can be used to discover common problems in the site content that negatively affects the site visitor experience. The Site Analysis module includes a large set of pre-built reports to analyze the sites compliance with SEO recommendations and to discover problems on the site, such as broken links, duplicate resources, or performance issues. The Site Analysis module also supports building custom queries against the data gathered during crawling.

The word “experience” did it. I zipped to whois and learned that the site is Microsoft’s. The registrar is an outfit called CSC Protect-a-Brand. Microsoft does not want to let this URL slip through its hands, I assume. You can download the tool here.

What interested me was that Microsoft has written the description of the tool without reference to its own Web indexing system. Furthermore, the language is generic, which leads me to believe that this extension and the other nine in the category “Search Engine Optimization Toolkit” apply to Google as well.
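
The description suggests a crawler that walks a site’s pages and flags problems such as broken links. As a thought experiment, here is a minimal Python sketch of what the broken-link portion of such a report involves. The start URL is hypothetical, and this is emphatically not Microsoft’s implementation:

    from html.parser import HTMLParser
    from urllib.error import HTTPError, URLError
    from urllib.parse import urljoin
    from urllib.request import urlopen

    class LinkCollector(HTMLParser):
        """Collect href targets from anchor tags on one page."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def broken_links(page_url):
        """Fetch one page and return the links that fail to resolve."""
        html = urlopen(page_url).read().decode("utf-8", errors="replace")
        collector = LinkCollector()
        collector.feed(html)
        bad = []
        for link in collector.links:
            target = urljoin(page_url, link)  # resolve relative links
            try:
                urlopen(target)
            except (HTTPError, URLError, ValueError):
                bad.append(target)
        return bad

    if __name__ == "__main__":
        # Hypothetical start page; a real tool would crawl the whole site.
        for url in broken_links("http://www.example.com/"):
            print("Broken:", url)

A production tool layers on the rest of what the write up mentions: duplicate resource detection, performance checks, and rule-based SEO reports.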

If you are an SEO wizard and love the Microsoft way, you will want to download and experiment with these tools. Testing might be a good idea. If the tools work equally well for Bing.com and Google.com, has Microsoft emulated the Google Webmaster guidelines? If not, what will be the impact on a highly ranked Google site? With 75 to 85 percent of Web search traffic flowing to Google.com, an untoward tweak might yield interesting effects.

Stephen Arnold, August 23, 2009

Convera and the Bureau of National Affairs

August 22, 2009

A happy quack to the reader who sent me a Tweet that pointed to the International HR Decision Support Network: The Global Solution for HR Professionals. You can locate the Web site at ihrsearch.bna.com. The Web site identifies the search system for the site as Convera’s. Convera has morphed or been absorbed into another company. This “absorption” struck me as somewhat ironic because the Convera Web site carries a 2008 white paper by a consulting outfit called Outsell. You can read that Convera was named by Outsell as a rising star for 2008. Wow! I ran a query for executive compensation in “the Americas” and these results appeared:

[Screenshot: BNA search results served by Convera]

The most recent result was dated August 14, 2009. Today is August 21, 2009. It appears to me that the Convera Web indexing service continues to operate. I was curious about the traffic to this site. I pulled this Alexa report, which suggests that the daily “reach” of the site is almost zero percent.

[Screenshot: Alexa traffic report for ihrsearch.bna.com]

Compete.com had no profile for the site.

I think that the human resources field is one of considerable interest. My recollection is that BNA has had an online HR service for many years. I could not locate much information about the Human Resource Information Network that originally was based in Indianapolis.

Convera appears to be providing search results to BNA, and BNA has an appetite for an online HR information service. The combination, however, seems to be a weak magnet for traffic. Vertical search may have some opportunities. Will Convera and BNA be able to capitalize on them?

But with such modest traffic I wonder why the service is still online. Anyone have any insights?

Stephen Arnold, August 21, 2009

More Local Loco Action: Guardian UK Gets the Bug

August 21, 2009

Short honk: I don’t want to dig too deeply into the efforts of a traditional newspaper company to get more traction in the Webby world. You will want to read Online Journalism Blog’s “The Guardian Kicks Off the Local Data Land Grab” and ponder the implications of the write up. The idea is that a newspaper wants to hop on the hyper local toboggan before the run dumps the sledder into the snow at the base of the mountain. Mr. Bradshaw, the author of the article, wrote:

Now The Guardian is about to prove just why it is so important, and in the process take first-mover advantage in an area the regionals – and maybe even the BBC – assumed was theirs. This shouldn’t be a surprise to anyone: The Guardian has long led the way in the UK on database journalism, particularly with its Data Blog and this year’s Open Platform. But this initial move into regional data journalism is a wise one indeed: data becomes more relevant the more personal it is, and local data just tends to be more personal.

For whatever reason, hyper local information is getting as much attention as real time search and Twitter. I wish the Guardian good luck with scaling, monetizing, and marketing. My thought is that the hyper local crowd will want to move quickly before Googzilla wanders through this information neighborhood. Finding is a big part of the local information challenge. The deal breaker will be monetizing. The Guardian may well have the most efficient monetization method known to man. I hope so. The Google’s a good monetizer too.

Stephen Arnold, August 21, 2009

Twitter Stream Value

August 19, 2009

Short honk: I want to document the Slashdot write up “Measuring Real Time Public Opinion With Twitter.” The key point for me was that University of Vermont academics are investigating nuggets that may be extracted from the fast flowing Twitter stream of messages of up to 140 characters. No gold bricks yet, but the potential for high value information seems to warrant investigation.
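
What does extracting “nuggets” from a stream of short messages look like in practice? One common approach is word-level valence scoring. Here is a minimal, hypothetical Python sketch; the tiny lexicon and the sample messages are my own stand-ins, not the Vermont team’s data or method:

    # Hypothetical word-valence scores; a real study uses rated word lists.
    VALENCE = {"love": 8.7, "happy": 8.5, "rain": 4.8, "traffic": 3.5, "hate": 2.2}

    def tweet_valence(text):
        """Average the valence of known words; None when no word is known."""
        hits = [VALENCE[w] for w in text.lower().split() if w in VALENCE]
        return sum(hits) / len(hits) if hits else None

    # Stand-in messages; the real input is the live stream of short posts.
    stream = ["I love this happy crowd", "stuck in traffic again and I hate it"]
    scores = [s for s in map(tweet_valence, stream) if s is not None]
    print("mean valence:", round(sum(scores) / len(scores), 2))

The research question is whether aggregates like this track real public mood; the sketch only shows why the computation is cheap enough to run over a fire hose of posts.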

Stephen Arnold, August 19, 2009

Reader’s Digest Enters Intensive Care

August 18, 2009

The Reader’s Digest bankruptcy, reported in the Baltimore Sun’s “Reader’s Digest Bankruptcy Report”, did not surprise me. This outfit had a clever business model and some very confident executives. I interacted with the Reader’s Digest when it bought the Source, one of the early online services.

For me, the Reader’s Digest had a money machine with its “association” model when my grandmother subscribed to the chubby little paperback-book-sized monthly stuffed with recycled content. I liked the “digest” angle. The notion that my busy grandmother could not read the original article amused me. She was indeed really busy when she was in her 70s. What she liked was that the content was sanitized. The jokes were clean and usually not subject to double entendre.

The Reader’s Digest recognized that the Source was a harbinger and made the leap into electronic information with the now moribund Control Data Corporation. The step was similar to an uncoordinated person’s jump off the high dive. The Reader’s Digest knocked its head on the Source deal and dropped off my online radar.

Now the Reader’s Digest is blazing a new trail for magazine publishers: chopping the number of issues published per year, cutting its circulation guarantee, and learning to love bankruptcy attorneys. Which magazine will be next? Oh, I know: the leadership of the dominant magazine companies will chase crafts, home decoration, and Future Publishing’s book-a-zine model. New thinking and new methods are needed to save the traditional magazine, a group eager to turn back the clock to the glory days of the Saturday Evening Post. Like Babylonian clay tablets morphing into “Home Sweet Home” on ceramic wall hangings, magazines will survive. The market, however, is moving beyond the old information delivery vehicles, and the 1938 Fords are struggling to keep pace with Twitter “tweets”.

Here’s a comment by Charlie appended to the Baltimore Sun article: “Still interesting to thumb through, but reprinting articles that were already published – how long ago? – is not a good model for those who make regular use of the Internet.” Well said.

Stephen Arnold, August 18, 2009

Bing Cherries Ripen Slowly

August 18, 2009

Short honk: Dan Frommer (Silicon Alley Insider) reported that “Bing Search Share Rises Modestly in July”. He said, “Bing’s share was 8.9 percent, up from 8.4 percent in June”. Because online is seasonal, any growth in the summer months is a positive. Mr. Frommer points out that Yahoo’s search share is heading south and that Yahoo has to grow that share because “Yahoo will only get revenue from Bing searches performed on Yahoo.” Three quick observations:

[a] Yahoo continues to struggle to make its services visible. I have to do a lot of clicking to see current email messages.

[b] Yahoo’s search technologies may have been also-rans to Google’s, but I find the different search interfaces and the unpredictable results when searching for computer gear annoying. I had to write one SSD vendor, Memory Suppliers, to locate the product on the vendor’s Yahoo store. When I located the product on Yahoo, it was priced at more than $1,000. Error, or a merchant trying to skim the unknowing?

[c] I ran the query “iss weather photos” for an article I had to write yesterday for the International Online Show’s Web log. I did not get International Space Station snaps of weather systems. I got hits to backyard weather stations. Google delivered what I needed. I settled on using images from USA.gov, which uses the Bing.com system. Yahoo’s image search was less useful than Bing’s and Google’s.

I have made this statement before: Yahoo is a floundering AOL. Instead of Yahoo buying AOL, maybe AOL should buy Yahoo. AOL is trying to become a content generation company. I still am not sure what the Yahooligans are doing. I don’t think it is search, and that is going to prove to be a misstep.

Stephen Arnold, August 18, 2009

Google in Jeopardy

August 18, 2009

Two heavyweights in the search expertise department have concluded that Google may be vulnerable to Microsoft and Yahoo once their search systems are combined. And if Google is not actually in trouble, these two search experts think that Googzilla could be cornered and its market share reduced.

Let’s look at what the search experts assert, using new data from online monitoring and analytics vendors.

Search Engine Land, a publication focused on search engine optimization (SEO), ran “Report: MicroHoo Penetration Near Google’s, Google Users Most Loyal”. Greg Sterling analyzed the comScore data. He noted that Google had “65 percent of the search volume in the US”. He added that “[the data] shows that 84 percent of search users are on Google.” Then he inserted the killer comment for those who want Google neutered:

However 73.3 percent of the search user population are on Yahoo and Microsoft, when the two are combined.

He pointed out that Google is a habit. He closed the analysis with the comment:

The new conventional wisdom is that people simply use Google because they’re familiar with it and have become habituated to using it. But I suspect that explanation doesn’t really capture what’s going on.

My question is, “What is going on?” You can look at his presentation of the comScore data and draw your own conclusions.

Along the same line of reasoning, the New York Times weighed in with “The Gap between Google and Rivals May Be Smaller than You Think”. Miguel Helft wrote:

ComScore found that for the combined Yahoo-Microsoft, “searcher penetration,” or the percentage of the online population in the United States that uses one of those search engines, is 73 percent. Google’s searcher penetration is higher, but not by that much: at 84 percent.

Again the foundation of the argument is comScore. Mr. Helft concluded with what I found to be a surprising direction:

“The challenge will be to create a search experience compelling enough to convert lighter searchers into regular searchers which is generally easier than converting new users,” Eli Goodman, comScore Search Evangelist, said in a press release. “Though clearly easier said than done, if they were to equalize the number of searches per searcher with Google they would command more than 40 percent market share.” That suggests Microsoft may want to spend more of its money improving Bing, rather than on marketing Bing. Spending on both, of course, can’t hurt.
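
The arithmetic behind the “more than 40 percent” claim is easy to reproduce. A back-of-the-envelope Python sketch, using only the penetration figures quoted above; the allowance for other engines is my own guess, not a comScore number:

    # Figures quoted above: searcher penetration for each camp.
    google_pen = 0.84
    microhoo_pen = 0.733
    # If searches per searcher were equal, volume would be roughly
    # proportional to penetration. "Other engines" is a rough assumption.
    other_volume = 0.15
    total = google_pen + microhoo_pen + other_volume
    print(f"MicroHoo share at equal intensity: {microhoo_pen / total:.0%}")

The output lands in the same neighborhood as the press release’s figure, which shows only that the multiplication is consistent, not that lighter searchers will actually change their habits.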

So, two search experts see the comScore data as evidence of an important development in Web searchers’ behavior. And the New York Times is offering Microsoft marketing advice. I recognize the marketing savvy of the New York Times, and I think that the New York Times should tell Microsoft what to think and do.

Three questions flapped through the addled goose’s mind:

  1. Are the comScore data accurate? I did not see any information from other Web traffic research firms, nor did I see any data from either Google or Microsoft. My recollection is that any data about Web traffic and user behavior has to be approached with some care. Wasn’t there that notion of “margin of error”? (See the sketch after this list.)
  2. What is the projected “catch up” time for Microsoft and Yahoo once the firms’ search businesses are combined? My analyses suggest that the challenge for Microsoft is “to get there from here”. The “there” is a significant chunk of Google market share. The “here” is the fact that the Bing.com traffic data are in early days, influenced by “pay for search” schemes, and based on research methods that have not been verified.
  3. Are the search experts working hard to create a story based on a consulting firm’s attempt to generate buzz in the midst of the August doldrums?
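
On the margin-of-error question in item 1: the textbook half-width for a survey proportion is z * sqrt(p * (1 - p) / n). A quick Python sketch, with hypothetical panel sizes since the articles cited do not report comScore’s sample:

    from math import sqrt

    def margin_of_error(p, n, z=1.96):
        """Half-width of a 95 percent confidence interval for a proportion."""
        return z * sqrt(p * (1 - p) / n)

    p = 0.733  # the reported MicroHoo penetration
    for n in (1_000, 10_000, 100_000):  # hypothetical panel sizes
        print(f"n={n}: +/- {margin_of_error(p, n):.2%}")

Even at a panel of 1,000, the uncertainty is a few percentage points, which matters when pundits lean on tenths of a percent.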

Experts like Search Engine Land and the New York Times know their material and are eminently qualified as consultants in information retrieval. What keeps nagging me is that pesky 80 percent Google market share and the 11 years of effort necessary for Google to achieve that share. I am no expert. I see a lot of work ahead for Microsoft.

Stephen Arnold, August 18, 2009

Commercial Online at Crunch Time

August 17, 2009

Einstein would be confused about the meaning of “time” in the search and content processing sector.

In the early days of online, commercial database producers controlled information that was accessible online. The impetus for electronic information was the US government. Some of the giants of the early online world were beneficiaries of government contracts and other government support for the technology that promised to make information findable.

I recall hearing, when my father worked in Washington, DC in the 1950s, that there was “government time.” The idea, as I recall, is that when a government entity issued a contract or support, the time lines in that deal stated start and stop dates but not how fast the work had to be completed. I learned when but a youth that “government time” could be worked so that the contract could be extended. As a result, government time had a notional dimension known to insiders. Outsiders had another view of time.

[Image source: http://focus.aps.org/files/focus/v23/st18/time_tunnel_big.jpg]

When the first commercial online systems became available, time gained another nuance. Added to the idea of “government time” was the idea that computing infrastructure required time to process information. Programmers needed time to write code and debug programs. Systems engineers needed time to figure out how to expand a system. More time was needed to procure the equipment, and time was necessary to get hardware like DASDs (direct access storage devices) delivered and online.

One word—”time”—was used to refer to these many different nuances and notions of time. Again the outsider was essentially clueless when it came to understanding the meaning of “time” when applied to any activity related to electronic information.

Fast forward to 1993 and the availability of the graphical browser, which made the Internet usable to average folks. The idea that a click could display a page in front of the user in very little time was compelling. The user received information quickly and formed an impression that the time required to access information via the Internet was different from the time required to schlep to a library to get information. Time became freighted with another load of meaning: work processes.

Now think about the meaning of “time” today. Vendors are no longer content with describing a system as fast and responsive. The word time has been turbocharged with the addition of the adjectival phrase “real time”.

What is real time? What is real time search? If you think about the meaning of time itself in the online world, you may conclude, as I have, that when an online vendor says “time”, you don’t have a firm understanding of what the heck the vendor means. When a vendor says “real time” or “near real time”, we are further into the fog.
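
One way to cut the fog is to ask for a number instead of an adjective: how long after an item is published does it become findable? A minimal Python sketch, with thresholds that are my own arbitrary assumptions rather than any industry standard:

    from datetime import datetime, timedelta

    def freshness_label(published, indexed):
        """Classify index lag; the cutoffs below are illustrative only."""
        lag = indexed - published
        if lag <= timedelta(seconds=10):
            return "real time"
        if lag <= timedelta(minutes=5):
            return "near real time"
        return "batch"

    pub = datetime(2009, 8, 17, 9, 0, 0)
    print(freshness_label(pub, pub + timedelta(seconds=4)))   # real time
    print(freshness_label(pub, pub + timedelta(minutes=2)))   # near real time
    print(freshness_label(pub, pub + timedelta(hours=6)))     # batch

Until vendors publish lag figures like these, “real time” remains a marketing adjective, not a measurement.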


Google Pushes over the Book Shelves

August 17, 2009

If there is one thing I admire about Google, it is the company’s craftiness. Craftiness is not a negative. I refer to the art of execution, a skill in doing what is needed to advance the company’s interests and achieve its goals with little effort. A good example is “Bringing the Power of Creative Commons to Google Books.” If I were a traditional publisher, the Google announcement would not make much sense. “Real” authors have agents and understand the New York publishing game. However, the individuals who want to create material and get it distributed by the Google now have a way to achieve this goal.

I explain some of the tools Google has at its disposal in my Google: The Digital Gutenberg. You can find a link in the ad at the top of this Web log’s splash page. I heard about this program, and I have commented on Google’s platform as a new type of text and rich media “River Rouge”. What will interest me is watching to see which publishers understand that the math club has put a cow on top of the building where the writing and journalism faculties have their offices. How will publishers top Creative Commons content available via the Google with few financial burdens placed on authors, students, or Internet users with an interest in longer form content? The third quarter is now beginning. Google kicks off. Publishers receive.

Stephen Arnold, August 14, 2009

Microsoft Gets an F from Professor Google for Scale Paper

August 16, 2009

I wrote a short post here about Microsoft’s suggestion that Google has gone off track with its engineering for petascale computing. Since I have two or three readers, no one really paid much attention to my observations. I was surprised to learn from Tom Krazit in his article “Google’s Varian: Search Scale Is Bogus” that Google disagrees with Microsoft. Dr. Varian is a Google wizard, and he teaches at Berkeley. His comments, as I understand them from Mr. Krazit’s article, amount to Microsoft’s getting an F. Yikes. Academic probation and probably a meeting with the dean.

For me, the most interesting passage in the article was this comment by Dr. Varian:

So in all of this stuff, the scale arguments are pretty bogus in our view because it’s not the quantity or quality of the ingredients that make a difference, it’s the recipes. We think we’re where we are today because we’ve got better recipes and we have better recipes because we spent 10 years working on search improving the performance of the algorithm. Maybe I’m pushing this metaphor farther than it should go, but I also think we have a better kitchen. We’ve put a lot of effort into building a really powerful infrastructure at Google, the development environment at Google is very good.

Microsoft now has to repeat a class and prove that it can generate revenue from its Web search business. Oh, Microsoft also has to repay prior investment plus interest to make the numbers satisfy this addled goose’s penchant for counting pennies.

Stephen Arnold, August 15, 2009
