Web Wide API: The Battleground

May 14, 2009

If you wondered what the driver behind the API snowstorm was, read "Can Amazon Be the Default Payment API for the Web?" here. The author, aaronchua, did a good job of explaining the logic behind a single Web API for online payments. The issue is not multiple payment systems. The call is for a single payment system. Assume this happens. Monopoly, right? The APIs are important to Amazon, Google, and others. Winner takes all is logical, right?

Stephen Arnold, May 14, 2009

Google, Micro-Blogging: Makes Perfect Sense

May 14, 2009

Google, the tarantula of the Web, purchased Jaiku, a service that lets its users gather micro-blog posts from other Web sites, in October 2007. The content can be viewed via the Web or by mobile phone. Google open sourced Jaiku in January 2009, just as Twittermania was gaining momentum.

Google’s decision could be a vote of confidence for open source, or it could be a response to Google’s failure to gain traction among the Twitterati.

Sites like Twitter, Flickr, and MySpace each offer their own twists and user-friendly ways of appealing to masses of micro-bloggers and, therefore, potential customers. But a site that collects each feed and makes it accessible through one's cell phone or computer just makes sense.
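To make the aggregation idea concrete, here is a minimal sketch assuming the third-party Python feedparser package. The feed URLs are placeholders I invented, not Jaiku's actual sources:

    import feedparser  # third-party: pip install feedparser

    # Hypothetical feed URLs standing in for the services mentioned above.
    feeds = [
        "http://example.com/twitter_timeline.rss",
        "http://example.com/flickr_photos.rss",
    ]

    # Pull every entry from every feed into one pool.
    entries = []
    for url in feeds:
        entries.extend(feedparser.parse(url).entries)

    # Merge into a single stream, newest first, readable from a phone or a PC.
    entries.sort(key=lambda e: e.get("published_parsed") or (), reverse=True)
    for entry in entries[:20]:
        print(entry.get("published", "?"), "-", entry.get("title", "untitled"))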

In a business world where it's crucial to keep in contact and notice emerging trends, it would be easy to spend your entire day signing in to and using the sites mentioned above. Google, despite its success in other search spaces, recognized the importance of real time search at its recent Searchology mini-camp.

The reality may be that Twitter, despite the hype, is a challenger to Facebook. Facebook's recent redesign nods in the direction of Twitter. Google, on the other hand, acknowledges the importance of real time search, making a distinction between Twitter's indexing of tweets and the larger, Google-scale challenge of real time search of Web content.

“Less talk and more indexation” is the goose’s cry.

Hunter Embry, May 15, 2009

Some Google in the White House

May 13, 2009

A month ago, I received a call from a journalist asking about the Obama White House's uses of Google. I did not answer the question because big-time journalists ask me questions, and I am not a public library reference desk worker anymore.

One insight can be found here. Google said:

App Engine supports White House town hall meeting
In late March, the White House hosted an online town hall meeting, soliciting questions from concerned citizens directly through its website. To manage the large stream of questions and votes, the White House used Google Moderator, which runs on App Engine. At its peak, the application received 700 hits per second, and across the 48-hour voting window, accepted over 104,000 questions and 3,600,000 votes. Despite this traffic, App Engine continued to scale and none of the other 50,000 hosted applications were impacted. For more on this project, including a graph of the traffic and more details on how App Engine was able to cope with the load, see the Google Code blog.
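As a point of reference, here is a minimal sketch of the pattern the quote describes, written against the 2009-era App Engine Python SDK. The model, handler, and URL are my inventions, not Google Moderator's actual code:

    from google.appengine.ext import db
    from google.appengine.ext import webapp
    from google.appengine.ext.webapp.util import run_wsgi_app

    class Question(db.Model):
        text = db.StringProperty()
        votes = db.IntegerProperty(default=0)

    def increment_votes(key):
        # A transaction keeps concurrent votes from overwriting each other.
        question = Question.get(key)
        question.votes += 1
        question.put()

    class VoteHandler(webapp.RequestHandler):
        def post(self):
            key = db.Key(self.request.get("question_key"))
            db.run_in_transaction(increment_votes, key)
            self.response.out.write("vote recorded")

    application = webapp.WSGIApplication([("/vote", VoteHandler)])

    def main():
        run_wsgi_app(application)

    if __name__ == "__main__":
        main()

At 700 hits per second, a single counter entity would contend for writes; the App Engine documentation of the day recommended sharding counters across multiple entities, which may be part of how Moderator coped with the load.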

How Googley is the Obama White House? Pretty Googley, I hear.

Stephen Arnold, May 13, 2009


Google Time

May 13, 2009

Searchology strikes me as a forum for Google to remind journalists, the faithful, unbelievers, and competitors that the GOOG is the big dog in search. You can read dozens of reports about Google's search enhancements. A good round-up was "Google Unveils New Search Features" here. Don't like AFP? Run this query on Google News and pick a more useful summary. For me, the key announcements had to do with time. The date of a document and the time of an event are important but different concepts. Time is a difficult problem, and Google's announcements underscore the firm's time expertise. Timelines? No problem. Date sort? No problem. For me, what's important is that time prowess is the tiny tip of much deeper underlying technical capabilities. The Google has some muscles it is just starting to flex.
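A toy sketch makes the distinction concrete. The documents and dates below are invented; the point is that a date sort and an event timeline order the same collection differently:

    from datetime import date

    # Invented documents: when each was published vs. the event it describes.
    documents = [
        {"title": "Moon landing retrospective",
         "published": date(2009, 5, 13), "event": date(1969, 7, 20)},
        {"title": "Searchology coverage",
         "published": date(2009, 5, 12), "event": date(2009, 5, 12)},
    ]

    # Date sort: order by when each document appeared on the Web.
    by_publication = sorted(documents, key=lambda d: d["published"], reverse=True)

    # Timeline: order by when each described event actually happened.
    by_event = sorted(documents, key=lambda d: d["event"])

    for doc in by_event:
        print(doc["event"], doc["title"])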

Stephen Arnold, May 13, 2009

Wolfram Alpha and Beta PR

May 12, 2009

At the airport this morning, I flipped through a beta wave of publicity for the Alpha search system, actually the soon-to-be-released Wolfram Alpha. I found Christopher Dawson's write-up a useful example of beta PR. His article "More Details Emerge on Wolfram Alpha" here is representative. For me, the most interesting comment in Mr. Dawson's story was:

Google has also attempted to add semantic search capabilities (and I’m sure will get there sooner than later; they’re Google, after all), but so far, this doesn’t give you much.

The view seems to me to be that Google is not in the semantic search race. Wolfram Alpha, with its demo and previews, is.

My research for my three Google studies suggests otherwise. You can scan information about my Google studies here. I can't easily summarize the research I have conducted over the last six or seven years. Making the situation trickier is the fact that some of my work has been published by Bear Stearns and IDC as client-only reports.

Nevertheless, the notion that a demo makes Wolfram Alpha ahead of Google strikes me as incorrect.

The interesting question that I have been thinking about is, "Why are observers so keen on finding an alternative to Google?" What surprised me were the high expectations for Cuil.com, built by former Googler Anna Patterson and her team. Cuil.com has improved, but I have picked up hints that the GOOG has not been far from the Cuil.com project, particularly with regard to some tests on message collections.

What's happening is a bit of cover-your-tail behavior combined with wishful thinking. The pundits saw Google as Web search and ads for a decade. Now anyone with a willingness to look at its mobile, shopping, maps, and other services can see that Google has been a platform all along and is now sufficiently diverse to make the Web-and-ads crowd look, well, anachronistic.

Enter a new search system.

The pundits claim that it is a Google killer without setting forth much in the way of a yardstick by which one can measure the progress of Google's death. Here are some examples:

  • Cost of infrastructure: to date, to grow, and over five years
  • Number of users of sophisticated search outputs. (Remember, only about five percent of search users take advantage of advanced search features.)
  • Number of documents processed per unit of time, including transformation, parsing, and indexing (a rough sketch follows below)
  • Business model (ads are okay, but will advertisers pay Google-scale cash flows to reach a sophisticated service?)

These four points need some consideration. But when speculating about “to be” products and services, one has the advantage of working with modest evidence.
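As a rough illustration of the third yardstick, here is a back-of-the-envelope throughput sketch. Every number is invented; the point is that "documents per unit of time" must count every pipeline stage, not indexing alone:

    corpus_size = 1000000  # documents (hypothetical)

    # Hypothetical per-document cost in seconds for each pipeline stage.
    stage_seconds = {"transformation": 0.004, "parsing": 0.002, "indexing": 0.003}

    seconds_per_doc = sum(stage_seconds.values())
    docs_per_second = 1.0 / seconds_per_doc
    hours_for_corpus = corpus_size * seconds_per_doc / 3600.0

    print("End-to-end throughput: %.0f docs/sec" % docs_per_second)
    print("Time to process corpus: %.1f hours" % hours_for_corpus)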

Stephen Arnold, May 12, 2009

Information Architecture and Search

May 12, 2009

Usability guru Jakob Nielsen's "Top 10 Information Architecture Mistakes" here is a useful list of issues to consider. What struck me as particularly apt was his second point, "Search and Structure Not Integrated." He wrote:

search and navigation fail to support each other on many sites. This problem is exacerbated by another common mistake: navigation designs that don’t indicate the user’s current location. That is, after users click a search result, they can’t determine where they are in the site — as when you’re searching for pants and click on a pair, but then have no way to see more pants.

Within the last 10 days, I have had four separate discussions with "search experts" who were in the midst of trying to use search to fix deeper information problems. One content management wizard told me in Philadelphia, "Search is not able to deal with the complicated information stored in an industrial strength CMS." No kidding. You expect a third-party solution to resolve the glitches in these linguine code monsters? A person at a big money consulting firm opined, "We see search as a Web 2.0 solution to our heterogeneous content." Yep, and I see myself as 15 years old. Fantasy, sheer fantasy.
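Here is a minimal sketch of the integration Mr. Nielsen describes, using an invented catalog: each search hit carries its place in the site structure, so a user who lands on one pair of pants can see where to find more:

    # Invented catalog items, each tagged with its location in the site tree.
    catalog = [
        {"name": "Slim-fit chinos", "path": ["Clothing", "Men", "Pants"]},
        {"name": "Cargo pants", "path": ["Clothing", "Men", "Pants"]},
        {"name": "Pant hangers", "path": ["Home", "Closet"]},
    ]

    def search(query):
        hits = [item for item in catalog if query.lower() in item["name"].lower()]
        for hit in hits:
            # Surfacing the breadcrumb next to each hit is the integration step:
            # the user sees where the result lives and can browse its siblings.
            print(hit["name"], "-", " > ".join(hit["path"]))

    search("pants")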

You will find Mr. Nielsen’s other nine points equally insightful.

Stephen Arnold, May 12, 2009

Alpha: Cold like Cuil or Hot like Google?

May 11, 2009

Adam Ostrow has an excellent write-up about the Wolfram Alpha system. He works through the limited examples in a useful way. Compared with the MIT Technology Review analysis, Mr. Ostrow's piece takes the pants off MIT's Alpha reviewer. He gathers screenshots of the mash-ups and answers that the Alpha demo has on offer. For me, the most interesting comment in the article was:

Ultimately, it’s hard to see how Wolfram Alpha could be called either the next Google or the next Cuil. Rather, it seems to have the ambition of making accessible a whole different type of information, that could be quite useful to a significant subset of Internet users. And eventually, that might make it a good compliment, but not a replacement, for today’s leading search engines.

Clip and save this write up for reference.

Stephen Arnold, May 11, 2009

XML as Code and Its Implications

May 11, 2009

I read Tom Espiner's ZDNet article "EC Wants Software Makers Held Liable for Code" here. I have been thinking about his news story for a day or two. The passage that kept my mind occupied is a statement made by an EU official, Meglena Kuneva:

"If we want consumers to shop around and exploit the potential of digital communications, then we need to give them confidence that their rights are guaranteed," said Kuneva. "That means putting in place and enforcing clear consumer rights that meet the high standards already existing in the main street. [The] internet has everything to offer consumers, but we need to build trust so that people can shop around with peace of mind."

Software makers for some high profile products shift the responsibility for any problems to the licensee. The licensee is accountable, but the software maker is not. I am not a lawyer, and I suppose that this type of thinking is okay if you have legal training. But what if XML is programmatic? What does that mean for authors who write something that causes some type of harm? What about software that generates content from multiple sources when one of those sources is defined as "harmful"? The blurring of executable code and executable content is a fact of online life today. Good news for lawyers. Probably not such good news for non-lawyers, in my opinion.
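One concrete way to see the blurring: XSLT is well-formed XML, yet it executes. A small sketch, assuming the third-party lxml package; the stylesheet and data are invented:

    from lxml import etree  # third-party: pip install lxml

    # A stylesheet: an XML document that is also a set of instructions.
    stylesheet = etree.XML(b"""
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="/prices">
        <total><xsl:value-of select="sum(item/@price)"/></total>
      </xsl:template>
    </xsl:stylesheet>""")

    data = etree.XML(b'<prices><item price="2"/><item price="3"/></prices>')

    transform = etree.XSLT(stylesheet)
    print(str(transform(data)))  # the "document" computes: <total>5</total>

If a document can compute, the line between author and software maker blurs, which is exactly the liability question the EC position raises.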

Stephen Arnold, May 11, 2009

Autonomy Scores a PR Coup

May 11, 2009

If you are in the search marketing business, you may want to do a case study of Autonomy. The London Times's story "It May Seem Confusing but Autonomy Can Help" by Mike Harvey was a masterstroke. You can read the full text of the write-up here. With headlines going to Google, Microsoft, and Wolfram Alpha, Autonomy's management has wrested attention from those firms and slapped it on its own products. The subhead for the article made a case for building an organization's information framework with Autonomy's digital building blocks with this statement: "The company's technology enables customers to decipher information from multiple sources, giving it a world-leading role." For me, the most interesting comment in the article was:

According to Dr Lynch, Autonomy is leading a revolution in the information technology industry. After 40 years of computers being able to understand only structured information that could be found in the rows and columns of a database, computers armed with Autonomy’s software can understand human-style information, such as phone conversations. That means, Dr Lynch argues, that Autonomy now has the world’s most advanced search engine for businesses, which can help companies to reveal the value in the masses of e-mails, phone calls and videos that form the majority of ways in which staff communicate with each other.

I think it will be interesting to see how its competitors respond. Oh, the article includes a biographical profile of Dr. Michael Lynch. Not even Messrs. Brin and Page rate that type of coverage.

Stephen Arnold, May 11, 2009

Link Horror: The Thomas Crampton Affair

May 10, 2009

Link loss is not movie material, but careless "removal" of a server or its content can cause real pain. You can read about the personal annoyance (anguish, maybe?) expressed by journalist Thomas Crampton: his stories, written for a fee, have disappeared. The details are here. There is another angle that is not just annoying; it is expensive to rectify. Wikipedia linked to a Web site called IHT.com, the online presence of the International Herald Tribune, or what was the Web site. You can read about that issue here.

Now the Wikipedia links are dead, and the fix is going to require lots of volunteers or a script that can make the problem go away. Either way, this is an example of how organizations think about what's good for themselves, or what they perceive as the "right" approach, without weighing the unexpected consequences of the decision. I see this type of uninformed decision making too frequently. The ability to look at an issue from an "overflight" position is seen as silly, too time consuming, or not part of the management process. I think the Thomas Crampton Affair might make a compelling video.
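The "script" fix is not exotic. A minimal sketch in Python; the link list is a placeholder, and a real Wikipedia cleanup would work through the MediaWiki API rather than a flat file:

    import urllib.error
    import urllib.request

    # Placeholder URL; a real run would pull every IHT.com link from Wikipedia.
    links = ["http://www.iht.com/articles/example.html"]

    for url in links:
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                status = response.status  # 200 means the link still resolves
        except (urllib.error.URLError, OSError) as error:
            status = error  # dead link: flag this article for repair
        print(url, "->", status)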

Stephen Arnold, May 10, 2009
