Googzilla Plays Crawfish: Back Tracking on Chrome Terms

September 4, 2008

Ina Fried wrote “Google Backtracks on Chrome License Terms”. You can read her CNet story here. The point of the story is that Google has withdrawn some of the language of its Chrome license terms. Ms. Fried wrote:

Section 11 now reads simply: “11.1 You retain copyright and any other rights you already hold in Content which you submit, post or display on or through, the Services.”

For me, this sudden reversal is good news and bad news. The good news is that the GOOG recognized that it was close to becoming a Microsoft doppelgänger and reversed direction, fast. The bad news is that the original terms made it clear that Google’s browser containers would monitor the clicks, context, content, and processes of a user. Dataspaces are much easier to populate if you have the users in a digital fishbowl. The change in terms does little to assuage my perception of the utility of dataspaces to Google.

To catch up on the original language, click here. To find out a bit about dataspaces, click here.

Stephen Arnold, September 4, 2008

A Vertical Search Engine Narrows to a Niche

September 4, 2008

Focus. Right before I was cut from one of the sports teams I tried to join, I would hear, “Focus.” I think taking a book to football, basketball, and wrestling practice was not something coaches expected or encouraged. Now SearchMedica, a search engine for medical professionals, is taking my coach’s screams of “Focus” to heart. The company announced a practice management category on September 3, 2008. The news release on Yahoo said:

The new category connects medical professionals with the best practice management resources available on the Web, including the financial, legal and administrative resources needed to effectively manage a medical practice.

To me, the Practice Management focus is a collection of content about the business of running a health practice. In 1981, ABI/INFORM had a category tag for this segment of business information. Now the past has been rediscovered. The principal difference is that access to this vertical search engine is free to the user. ABI/INFORM and other commercial databases charge money, often big money, to access their content.

If you want to know more about SearchMedica, navigate to www.searchmedica.com. The company could encourage a host of copycats. Some would tackle the health field, but others would focus on categories of information for specific user communities. If SearchMedica continues to grow, it and other companies with fresh business models will sign the death warrant for certain commercial database companies.

The fate of traditional newspapers is becoming clearer each day. Superstar journalists are starting Web logs and organizing conferences. Editors are slashing their staffs. Senior management teams are reorganizing to find economies such as smaller trim sizes, fewer editions, and less money for local and original reporting. My thought is that companies like SearchMedica, if they get traction, will push commercial database companies down the same ignominious slope. Maybe one of the financial sharpies at Dialog Information Services, Derwent, or Lexis Nexis will offer convincing data that success is in their hands, not the claws of Google or upstarts like SearchMedica. Chime in, please. I’m tired of Chrome.

Stephen Arnold, September 4, 2008

Google Chrome License

September 3, 2008

Update: September 4, 2008, 9:30 pm Eastern

Useful summary of the modified Chrome license terms. Navigate to TapTheHive at http://tapthehive.com/discuss/This_Post_Not_Made_In_Chrome_Google_s_EULA_Sucks

Update: September 4, 2008, 11:30 am Eastern

Related links about the Chrome license:

  • Change in Chrome license terms here
  • Key Stroke Logging here
  • Security issues here
  • Back Pedaling on terms here

Update: September 3, 2008, 9:18 am Eastern

WebWare’s take on the Chrome license agreement. Worth reading. It is here.

Original Post

If true, this post by Poss is a keeper. You can read his original article on Shuzak beta here. The juicy part is an extract from the Chrome terms of service. I quote Mr. Poss:

11.1 You retain copyright and any other rights you already hold in Content which you submit, post or display on or through, the Services. By submitting, posting or displaying the content you give Google a perpetual, irrevocable, worldwide, royalty-free, and non-exclusive license to reproduce, adapt, modify, translate, publish, publicly perform, publicly display and distribute any Content which you submit, post or display on or through, the Services.

As I understand this passage, Googzilla has rights to what I do, what I post, what I see via its browser. Seems pretty reasonable for a Googzilla bent on conquering the universe. What do you think? Before you answer, check out the data model I included in my KMWorld column in July 2008.

Stephen Arnold, September 3, 2008

Google: More Chrome Browser Goodness

September 3, 2008

In my Google Version 2.0, published by Infonortics, I present a table of patent documents that act as beacons for Google’s engineers. On September 2, 2008, the USPTO published US 7421432 B1. Among the inventors of the “Hypertext Browser Assistant” is Larry Page. He is assisted by two super wizards, Urs Hölzle and Monika Henzinger. My research into Google’s investments in technology suggested that when either Mr. Brin’s or Mr. Page’s names appear on a patent document, that innovation is important. You and the legions of super smart MBAs who disdain grunting through technical documents will probably disagree. Nevertheless, I want to call the abstract for this invention to the attention of my two or three readers.

A system facilitates a search by a user. The system detects selection of one or more words in a document currently accessed by the user, generates a search query using the selected word(s), and retrieves a document based on the search query. When the document includes one or more links corresponding to a linked document, the system analyzes each of the links, prefetches the linked documents corresponding to a number of the links, and presents the document to the user. The system receives selection of one of the links and retrieves the linked document corresponding to the selected link. The system identifies one or more pieces of information in the retrieved document, determines a link to a related document for each of the identified pieces of information, and provides the determined links with the related document to the user.

My “pal” Cyrus, a Google demi-wizard, thinks that I create Google images in Photoshop. No, Cyrus, these images appear in Google’s patent documents, which I suggest you and your fellow demi-wizards read before opining on my Photoshop skills. You will see that the browser represented is not Mozilla’s, Microsoft’s or Opera’s.

[Image from the patent document: smart browsing]

What this invention purports to do is provide intelligent “training wheels” to help users find information they are seeking. The system uses a range of Google infrastructure functions to perform its “helper” functions; for example, predictive math, parsed content, and related objects. A more detailed analysis will appear in the Google monograph I am preparing for Infonortics, the publisher who has an appetite for my analyses of Googley innovations. Look for the monograph before the New Year.
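The workflow in the abstract (select words, build a query, rank a document’s links, prefetch the likely winners) can be sketched in a few lines. This is my own illustrative reconstruction; the function names and the toy anchor-text scoring are invented for the example, not drawn from the patent.

```python
# Hypothetical sketch of the "Hypertext Browser Assistant" flow described in
# US 7421432 B1. The scoring below is a toy stand-in for whatever Google's
# infrastructure actually does; it simply counts query terms in anchor text.

def generate_query(selected_words):
    """Turn the user's selected words into a simple search query string."""
    return " ".join(w.lower() for w in selected_words)

def score_link(link, query_terms):
    """Toy relevance score: how many query terms appear in the anchor text."""
    anchor = link["anchor"].lower()
    return sum(term in anchor for term in query_terms)

def assist(selected_words, links, prefetch_count=3):
    """Rank a document's links against the generated query and pick the
    top few to prefetch before the user clicks, per the abstract."""
    query = generate_query(selected_words)
    terms = query.split()
    ranked = sorted(links, key=lambda l: score_link(l, terms), reverse=True)
    return query, [l["url"] for l in ranked[:prefetch_count]]

query, to_prefetch = assist(
    ["Browser", "Assistant"],
    [
        {"url": "a.html", "anchor": "hypertext browser assistant overview"},
        {"url": "b.html", "anchor": "unrelated page"},
        {"url": "c.html", "anchor": "browser history"},
    ],
    prefetch_count=2,
)
print(query)        # → browser assistant
print(to_prefetch)  # → ['a.html', 'c.html']
```

The real system would, of course, lean on Google’s index and infrastructure rather than anchor text alone; the sketch only shows the shape of the select-query-prefetch loop.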

If you want to revel in the Page-meister’s golden prose, you can download a copy for free from the outstanding USPTO Web site here. Hint: read the syntax examples carefully. The patent narrative suggests that this “training wheels” function will work in a standard browser, but my hunch is that some of the more sophisticated functions known to “those skilled in the art” will require Chrome. After you have read the patent document, feel free to post your views of the technology Google has “invented”.

Oh, Cyrus, if you have difficulty locating Google’s patent documents, give me a call. I’m in the system.

Stephen Arnold, September 3, 2008

Microsoft Squeezes Google’s Privacy Policies

September 3, 2008

ZDNet (Australia) reported on August 29, 2008, about Microsoft’s perception of Google and its approach to privacy. I saw the post in the ZDNet UK Web log. (I have to tell you that the failure to have a common index to the ZDNet content is less than helpful. If Bill Ziff were still running the outfit, I believe this oversight would have been addressed, and quickly. Ah, youth and the lack of corporate memory. These folks don’t know why I am risking a heart attack over this sort of carelessness.) Liam Tung wrote “Microsoft Exec: Google Years behind Us on Privacy”. You can read the full UK article here. I haven’t been able to locate the Australian original, thanks to ZDNet’s fine search system.

For me, the key point in the article was:

Google had not invested enough to build privacy into its products, citing Street View as a prime example.

What I find interesting is that Google does not break out its investments. The company prefers, like Amazon, to offer a big fuzzy ball of numbers. As a result, I don’t think I or anyone outside of Google’s finance unit knows what Google spends on privacy. The notion that a company trying to make headway in online advertising, personalization, and social functions is going to pay much attention to privacy tickles my funny bone. Yahoo’s disappointing ad performance might be attributable to the company’s alleged inability to deliver rolled up demographics so advertisers can pinpoint where to advertise to reach which specific demographic sector. If Microsoft wants to make real money from its $400 million purchase of Ciao.com, the company may have to revisit its own privacy policies.

Google’s picture taking is a privacy flash point. However, based on my research, there are other functions at Google that may warrant further research. Microsoft may be forced to follow in Google’s very big paw prints in its quest for money and catching up to Googzilla.

Stephen Arnold, September 3, 2008

Google Browser: ABCs of Information Access

September 1, 2008

A is for Apple. The company uses WebKit in Safari. B is for browser, the user’s interface to cloud applications and search. C is for containers, Google’s nifty innovation for making each window a baby window on functions. The world is abuzz today (September 1, 2008) with Google’s browser project. The information, according to Google Blogoscoped, appeared in a manga or comic book. You can read that story here. There are literally dozens of posts appearing every hour on this topic, and I want to highlight a few of the more memorable posts and offer several comments.

First, the most amusing post to me is Kara Swisher’s post here. She is a pal of the GOOG and, of course, hooked up with the media giant The Wall Street Journal, currently challenged for revenues and management expertise. The best thing about her story is that Google’s not creating an extension of the Google environment. Nope, Google is “igniting a new browser war”. I thought Google and Microsoft were at odds already. After a decade, a browser war seems so 1990s to me. But she’s a heck of a writer.

Second, Carnage4Life earned a chuckle with its concluding statement about the GOOG:

Am I the only one that thinks that Google is beginning to fight too many wars on too many fronts. Android (Apple), OpenSocial (Facebook), Knol (Wikipedia), Lively (IMVU/SecondLife), Chrome (IE/Firefox) and that’s just in the past year.

Big companies don’t have the luxury of doing one thing. Google is more in the “controlled chaos” school of product innovation. Of course, Google goes in a great many directions. The GOOG is not a search engine; it is an application platform. It makes sense to me to see the many tests, betas, and probes. Google’s been doing this innovation by diffusion since its initial public offering and has never been shy about its approach or its success and failure rate.

Finally, I enjoyed this comment by Mark Evans in “Google Browser or Slow News Day” here. He writes:

The bigger question is whether a Google browser will resonate with computer users. Many people are using an increasing number of Google services (search, GMail, Blogger, etc.) but are they ready to surrender to Google completely by dumping Firefox and IE?

My take is a bit different. Two points without much detail. I have more but this is, after all, a free Web log written by an addled goose.

  1. Why do we assume that Google is suddenly working on a browser? Looking at the screen shots in Google patent documents over the last couple of years, the images do not look like Firefox, Opera, or Safari. Indeed, when I give talks and show these screen shots, some Googlers like the natty Cyrus are quick to point out that these are photoshopped. Not even the canny Googlers pay attention to what the brainiacs in the Labs are doing to get some Google functions to work reliably.
  2. Google’s patent documents make reference to janitors, containers, and metadata functions that cannot be delivered in the browsers I use. In order to make use of Google’s “inventions”, the company needs a controlled environment. Check out my dataspaces post and the IDC write up on this topic for a glimpse of the broader functionality that demands a controlled computing environment.

I’m not sure I want to call this alleged innovation a browser. I think it is an extension of the Googleplex. It is not an operating system. Google needs a data vacuum cleaner and a controlled computing environment. The application may have browser functions, but it is an extension, not a solution, a gun fight, or an end run around Firefox.

Stephen Arnold, September 1, 2008

The Knol Way: A Google Wobbler on the Information Highway

September 1, 2008

Harry McCracken greeted Google on September 1, 2008, with a less than enthusiastic discussion of Knol, Google’s user-generated repository of knowledge. The story ran in Technologizer, a useful Web log for me. You can read the full text of the story here. The thesis of the write up, as I understand the argument, is that while a good idea, the service lacks depth. The key point for me was this statement:

Knol’s content will surely grow exponentially in the months to come, but quantity is only one issue. Quality needs to get better, too–a Knol that’s filled with swill would be pretty dismaying, and the site in its current form shows that the emphasis on individual authors creates problems that Wikipedia doesn’t have. Basic functionality needs to get better, too: The Knol search engine in its current form seems to be broken, and I think it needs better features for separating wheat from chaff. And I’d give the Knol homepage a major overhaul that helps people find the best Knols rather than featuring some really bad ones.

I agree. One important point is that the Wikipedia method of allowing many authors to fiddle has its ups and downs. Knol must demonstrate that it is more than a good idea poorly executed and without the human editorial input that seems to be necessary under its present set up.

I have a mental image of the Knol flying across the information super highway and getting hit by a speeding Wikipedia. Splat. Feathers but no Knol.

In closing, let me reiterate that I think Knol is not a Wikipedia. It is a source of input for Google’s analytical engines. The idea is that an author is identified with a topic. A “score” can be generated so that the GOOG has another metric to use when computing quality. My hunch is that the idea is to get primary content that is copyright free in the sense that Google doesn’t have to arm wrestle publishers who “own” content. The usefulness to the user is a factor, of course, but I keep thinking of Knol as useful to Google first, then me.
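The author-to-topic “score” idea can be shown with a toy calculation. This is pure speculation on my part; the rating scheme, the function, and the numbers are invented for the example, not anything Google has disclosed.

```python
# Hypothetical illustration of scoring an author on a topic from reader
# ratings of that author's Knols. The averaging scheme is my own invention.

def author_topic_score(articles, topic):
    """Average the reader ratings of an author's articles on one topic."""
    on_topic = [a for a in articles if a["topic"] == topic]
    if not on_topic:
        return 0.0
    return sum(a["rating"] for a in on_topic) / len(on_topic)

articles = [
    {"topic": "cardiology", "rating": 4.0},
    {"topic": "cardiology", "rating": 5.0},
    {"topic": "tax law", "rating": 2.0},
]
print(author_topic_score(articles, "cardiology"))  # → 4.5
print(author_topic_score(articles, "surgery"))     # → 0.0
```

The point is only that tying a named author to a topic yields a per-author, per-topic number Google could fold into its quality computations.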

Will Google straighten up and fly right the way the ArnoldIT.com logo does? Click here to see the logo in action. Very consistent duck, I’m sure. Will Knol be as consistent? I don’t know. Like the early Google News, the service is going to require programmatic and human resources, which may be a while in coming. For now, Google is watching clicks. When the Google has sufficient data, then more direction will be evident. If there’s no traffic, then this service will be an orphan. I hope Googzilla dips into its piggy bank to make Knol more useful and higher quality.

Stephen Arnold, September 1, 2008

IBM and Sluggish Visualizations: Many-Eyes Disappointment

September 1, 2008

IBM’s Boston research facility offers a Web site called Many Eyes. This is another tricky url. Don’t forget the hyphen. Navigate to the service at http://www.many-eyes.com. My most recent visit to the site on August 31, 2008, at 8 pm Eastern timed out. The idea is that IBM has whizzy new visualization tools. You can explore these or, when the site works, upload your own data and “visualize” it. The site makes clear the best and the worst of visualization technology. The best, of course, is the snazzy graphics. Nothing catches the attention of a jaded Board of Directors’ compensation committee like visualizing the organization’s revenue. The worst is that visualization is still tricky, computationally intensive, and capable of producing indecipherable diagrams. A happy quack to the reader who called my attention to this site, which was apparently working at some point. IBM has a remarkable track record in making its sites unreliable and difficult to use. That’s a type of consistency, I suppose.

Stephen Arnold, September 1, 2008

Citation Metrics: Another Sign the US Is Lagging in Scholarship

August 31, 2008

Update: August 31, 2008. Mary Ellen Bates provides more color on the “basic cable” problem for professional information. Worth reading here. Econtent does an excellent job on these topics, by the way.

Original Post

A happy quack to the reader who called my attention to Information World Review’s “Numbers Game Hots Up.” This essay appeared in February 2008, and I overlooked it. For some reason, I am plagued by writers who use the word “hots” in their titles. I am certain Tracey Caldwell is a wonderful person and kind to animals. She does a reasonable job of identifying problems in citation analysis. Dr. Gene Garfield, the father of this technique, would be pleased to know that Ms. Caldwell finds his techniques interesting. The point of the long essay, which you can read here, is that some publishers’ flawed collections yield incorrect citation counts. For me, the most interesting point in the write up was this statement:

The increasing complexity of the metrics landscape should have at least one beneficial effect: making people think twice before bandying about misleading indicators. More importantly, it will hasten the development of better, more open metrics based on more criteria, with the ultimate effect of improving the rate of scientific advancement.

Unfortunately, traditional publishers are not likely to do much that is different from what the firms have been doing since commercial databases became available. The reason is money. Publishers long to make enough money from electronic services to enjoy the profit margins of the pre-digital era. But digital information has a different cost basis from the 19th century publishing model. The result is reduced coverage and a reluctance to move too quickly to embrace content produced outside of the 19th century model.

Services that use other methods to determine link metrics exist in another world. If you analyze traditional commercial information, the Web dimension is either represented modestly or ignored. Ms. Caldwell’s analysis looks at the mountain tops, but it does not explore the valleys. In those crevices is another story; namely, researchers who rely on commercial databases are likely to find themselves lagging behind researchers in countries where commercial databases are simply too expensive to use. A researcher who relies on a US or European commercial database is likely to get only an incomplete picture.
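The undercounting problem behind the “flawed collections” complaint is easy to see in a toy calculation. The data and function below are my own invention, just to show how a partial index depresses a citation count.

```python
# Illustrative sketch: citation counts computed over an incomplete indexed
# collection understate a paper's impact compared with counts over the full
# citing literature. All names and numbers here are made up for the example.

full_citing_records = {
    "paper_x": ["a", "b", "c", "d", "e"],  # five papers cite paper_x
}

# A commercial database that indexes only part of the citing literature.
indexed_papers = {"a", "c"}

def citation_count(paper, citing_map, collection=None):
    """Count citations to `paper`, optionally restricted to an indexed set."""
    citers = citing_map.get(paper, [])
    if collection is not None:
        citers = [c for c in citers if c in collection]
    return len(citers)

true_count = citation_count("paper_x", full_citing_records)
indexed_count = citation_count("paper_x", full_citing_records, indexed_papers)
print(true_count, indexed_count)  # → 5 2
```

Scale that gap across a career’s worth of papers and the rankings built on top of the counts shift accordingly, which is the essay’s point about misleading indicators.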

Stephen Arnold, August 31, 2008

Google Maps Attract Flak

August 31, 2008

Google inked a deal with GeoEye to deliver 0.5 meter resolution imagery. One useful write up appears in Softpedia here. The imagery is not yet available but will be when the GeoEye-1 satellite begins streaming data. The US government limits commercial imagery resolution. The Post Chronicle here makes this comment, illustrating the keen insight of traditional media:

Google did not have any direct or indirect financial interest in the satellite or in GeoEye, nor did it pay to have its logo emblazoned on the rocket. [emphasis added]

In my opinion, Google will fiddle the resolution to comply. Because GeoEye-1 was financed in part by a US government agency, my hunch is that Google will continue to provide geographic services to the Federal government and its commercial and Web users. The US government may get the higher resolution imagery. The degraded resolution will be for the hoi polloi.

Almost coincident with news of this lash up, Microsoft’s UK MSN ran “UK Map Boss Says Google Wrecking Our Heritage.” You can read this story here. The lead paragraph to this story sums up the MSN view:

A very British row appears to be brewing after the president of the British Cartographic Society took aim at the likes of Google Maps and accused online mapping services of ignoring valuable cultural heritage. Mary Spence attacked Google, Multimap and others for not including landmarks like stately homes and churches.

The new GeoEye imagery will include “valuable cultural heritage” as well as cows in the commons and hovels in Hertfordshire.

Based on my limited knowledge of British security activities, I would wager a curry that Google’s GeoEye maps will be of some use to various police and intelligence groups working for Queen and country. Microsoft imagery, in comparison, will be a bit lower resolution, I surmise. MSN UK will keep me up to date on this issue, I hope.

Stephen Arnold, August 31, 2008
