In-Q-Tel Pumps Cash into Visible Technologies

October 21, 2009

Overflight snagged a news item called “Visible Technologies Announces Strategic Partnership with In-Q-Tel”. In-Q-Tel is the investment arm of one unit of the US government. Visible Technologies is a content processing company that ingests Web log posts, Tweets, and other social content and extracts information from these data.

The company said:

Through its comprehensive solution set, Visible Technologies helps organizations adopt new ways of gaining actionable insight from social media conversations. By using Visible Technologies’ platform, organizations both big and small can harness business intelligence derived from social media data to drive strategic direction and tactical implementation of marketing initiatives, improve the customer experience and grow business. Visible Technologies’ end-to-end suite, powered by the truCAST engine, encompasses global features that enable real-time visibility into online social conversations regardless of where dialogue is occurring. Additionally, the company’s truREPUTATION solution is a best-in-class online reputation management service that provides both individuals and brands an effective way to repair, protect and proactively promote their reputation in search engine results.

The company is no spring chicken. Founded in 2003, Visible Technologies has a range of monitoring, reputation, and content analysis tools. The firm’s social media monitoring system is a newer weapon in the company’s arsenal. With police and intelligence agencies struggling to deal with social media, an investment in a firm focusing on this type of content makes clear that the US government wants to keep pace with these content streams.

Stephen Arnold, October 21, 2009

Newsnow Writes to Big Newspaper Bosses

October 21, 2009

I met with executives of Newsnow.co.uk when the company first opened its doors. Since that meeting, I have relied on Newsnow.co.uk as a way to keep track of certain content that is not available to me via other indexes. (Yes, Newsnow.co.uk is an index in my goose pond.) I am not wild about parts of the interface, but when I am looking for information from Australia, to cite one example, I use Newsnow.co.uk. I don’t think the managers of the index think of the service as a pointer to Australian technology information, but I use the service to tap into that content domain.

The open letter appeared when I did some routine checking about a New Zealand company called Pingar. To be frank, I did not chase down Pingar. I read the “Open Letter to the UK’s National, Regional, and Local Newspapers”. Several points made sense to me. The passage that struck me as quite important was:

The truth is, if anything, it is the growth of the Internet itself — not link aggregation — that has undermined your businesses by destroying the virtual monopoly that you once held over the mass distribution of written news. If you are seeking to blame something for your current predicament, we suggest you start there. It is disingenuous to blame legitimate link aggregation websites like ours for your financial woes and it is misguided to attempt to control linking. This cannot be the way forward. Linking is free, and links (and the sites that provide them) are at the heart of the Web. They are the means by which the Web works. We don’t think linking is something you can, or should be allowed to, control or charge for.

Please, keep in mind that I worked for the Courier Journal & Louisville Times Co., owner of a newspaper that was at one time listed as one of the world’s 25 best. I also worked for Bill Ziff, whose businesses usually had magazines at their core and who funded a daily technology newspaper which, I must point out, failed because the effort was a decade ahead of its time.

I understand the position taken by Newsnow.co.uk. I want to add three points:

  1. The children of newspaper (and magazine and book) publishers are going to put the nails in the coffin of many publications. The parents have not been able to prevent their own children from finding the path that best suits their information acquisition preferences. When the kids of the newspaper bosses are undermining the future of certain traditional information business models, I don’t think there’s much hope for changing the behaviors of individuals who do not live in those bosses’ homes and are not under their direct control.
  2. The shift to the “digital Gutenberg” is just starting, and the changes will come more quickly and be more far-reaching than most people anticipate. I think Newsnow.co.uk executives will be better prepared than traditional publishing companies, but the changes will be stressful for young-at-heart outfits as well. The difference is that the Newsnow.co.uk type of company can adapt. Ossified publishing companies try to adapt but in the end fall back on tired excuses: “we don’t have money,” “we have methods to protect,” or some other reason. Without the ability to adapt, organisms die. Lawyers cannot change this fact of life… yet.
  3. The technology of the Internet is not new. What is different is that some countries are forcing their citizens and organizations to shift their business methods to these technologies. Some countries want citizens and businesses to have high speed access. Others drag their feet. At the end of the day, the diffusion of Internet-centric technologies is spreading in multiple dimensions because users are willing to embrace these solutions. The reasons may vary, but the diffusion is quite real.

The sum of these three points is obvious to some people. Others struggle with the “new math” of the Internet. What we have is a new dividing line between those who surf the new waves of technology and those who want, like Venice, to alter the way in which the Mediterranean flows. Sure, Venice may survive as a European Disneyland, but Venice no longer rules the financial world, and Venice is a long shot to regain that title. Venice survives, but it does so at a considerable cost and the anguish of a new business model.

Newspapers are in a tough spot, but Newsnow.co.uk and similar Web sites did not create the problem. A failure to adapt created many of the problems that bedevil newspapers. Maybe a Venice strategy will work? I know I won’t make many trips to Venice. I like to visit every six or seven years. I can envision a time when today’s 12 year old views a printed newspaper in a similar way. Useful but not a frequent or necessary activity.

The Gutenberg era is ending. The digital Gutenberg era is beginning and gaining momentum. Just my opinion.

Stephen Arnold, October 21, 2009

To the DEA, IRS, and DoC or whichever US federal entity is regulating Web logs: No one paid me to write this. I did it for internal mental satisfaction, which for an addled goose is sufficient compensation.

Google and YouTube.com Real Time Search

October 17, 2009

On an exhausting slog through two European nerve centers, I emerged to learn that Google has tiptoed into real time search. The story that alerted me was “YouTube Launches Real-Time Discussion Search and Tracking”, which I read after a meal at McDo in Paris. The nugget in the write up was:

Real-time information is red hot all around the web but it made a surprise appearance on YouTube tonight in the form of real-time search for comments, of all things.

I took a quick look and made a mental note that Google’s engineers have tackled real time content of a specific type on the YouTube.com servers. Although big, the test bed is relatively narrow on Google’s scale of the world. The content provides a useful test for Google because many YouTube.com comments, in my opinion, are similar to the stuff that turns up on slang-choked public message systems.

Is this an end game for Google and real time search? In my opinion, this is a beta test. My hunch is that there is more to come from the Google. This raises the question, “Is Google late to the real time search party?” My view is, “No.” A number of firms have useful real time search systems. I write a column each month for Information World Review, and I try to document some of the more interesting systems and their research use cases.

Most of these systems operate at a much smaller scale than Google does. Google’s notion of scale means that the company must tackle engineering problems that some firms may not have to tackle. As a result, Google will take baby steps in real time search. When the toddler starts taking tween sized steps, Google will put pressure on some of today’s winners.

Therefore, Google is moving at an appropriate speed, which gives today’s leaders in real time search time to find ways to out-innovate Google. Google’s baby steps in real time search help ensure continued innovation. Good news.

Stephen Arnold, October 16, 2009

Searchtastic: Twitter Search System

October 12, 2009

TechCrunch’s “Searchtastic Throws Its Hat Into The Twitter Search Engine Ring” called my attention to another real time search engine provider. The screenshot below shows the unique feature of the new service. I can search for Twitter users by their name:

[Screenshot: Searchtastic splash page]

I ran several test queries and found the results useful. When queries return null results, the system displays the search terms with a strikethrough and the message “remove words from search”. Useful but the interface was initially confusing. In comparative tests against my current favorite real time search system Collecta.com, I thought Searchtastic was useful but Collecta seemed more mature in this evolving sector of the search market.

Stephen Arnold, October 12, 2009

Google and Content Processing

October 12, 2009

I find the buzz about Google’s upgrades to its existing services and the chatter about Google Books interesting but not substantive. My interest is hooked when Google provides a glimpse of what its researchers are investigating. I had a conversation last week that pivoted on the question, “Why would anyone care what researchers or graduate students working with Google do?” The question is a good one, and it illustrates how one’s angle of view determines what is or is not important. The media find Google Books fascinating. Web log authors focus on incremental jumps in Google’s publicly accessible functions. I look for deeper, tectonic clues about this trans-national, next generation company. I sometimes get lonely out on my frontier of research and analysis, but, as I said, perspective is important.

That’s why I want to highlight a dense, turgid, and opaque patent application with the fetching title “Method and System for Processing Published Content on the Internet”. The document was published on October 8, 2009, by the ever efficient USPTO. The application was filed on June 9, 2009, but its technology drags like an earthworm through a number of previous Google filings going back to 2004 and more recent disclosures such as the control panel a content owner uses to administer distribution and charge backs for content. As an isolated invention, the application is little more than a different charge at the well understood world of RSS feeds. The problem Google’s application resolves is inserting ads into RSS content without creating “unintended alerts”. When one puts the invention in a broader context, the system and method of the invention is more flexible and has a number of interesting applications. These are revealed in the claims section of the patent application.

Keep in mind that I am not a legal eagle. I am an addled goose. Nevertheless, what I found suggestive is that the system and method hooks into my analysis of Google’s semantic functions, its data management systems, and, of course, the guts of the Google computational platform itself for scale, performance, and access to other Google services. In short, this is a nifty little invention. The component that caught my attention is the controls made available to publishers. The idea is that a person with a Web log can “steer” or “control” some of the Google functions. The notion of an “augmented” feed in the context of advertising speaks to me of Google’s willingness to allow a content producer to use the Google system like a giant information facility. Everything is under one roof and the content producer can derive revenue by using this facility like a combination production, distribution, and monetization facility. In short, the invention builds out the “digital Gutenberg” aspect of the Google platform.

Here’s how Google explains this invention:

The invention is a method for processing content published on-line so as to identify each item in a unique manner. The invention includes software that receives and reads an RSS feed from a publisher. The software then identifies each item of content in the feed and creates a unique identifier for each item. Each item then has third party content or advertisements associated with the item based on the unique identifier. The entire feed is then stored and, when appropriate, updated. The publisher then receives the augmented feed which contains permanent associations between the third party advertising content and the items in the feed so that as the feed is modified or extended, the permanent relationships between the third party content and previously existing feed items are retained and readers of the publisher’s feed do not receive a false indication of new content each time the third party advertising content is rotated on an item.
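The mechanics are easier to see in code than in patent prose. Below is a minimal sketch, in Python, of the general idea as I read it: give each feed item a stable identifier, attach the third party content to that identifier, and keep the association fixed so that rotating an ad does not make an old item look new to feed readers. The function names, field names, and data shapes are mine, not Google’s, and the sketch skips the storage, updating, and serving layers a real system would need.

import hashlib

def item_id(entry):
    # Derive a stable identifier for a feed item. Prefer the feed's own
    # guid; otherwise hash the link and title so the identifier does not
    # change when other parts of the entry are edited.
    key = entry.get("guid") or (entry.get("link", "") + entry.get("title", ""))
    return hashlib.sha1(key.encode("utf-8")).hexdigest()

def augment_feed(entries, ad_store, pick_ad):
    # Attach third party content to each item by its stable identifier.
    # ad_store maps identifier -> ad. An item that already has an ad keeps
    # it, so reprocessing the feed does not alter existing items and
    # readers receive no false indication of new content.
    augmented = []
    for entry in entries:
        uid = item_id(entry)
        if uid not in ad_store:
            ad_store[uid] = pick_ad(entry)  # the permanent association
        augmented.append({**entry, "id": uid, "ad": ad_store[uid]})
    return augmented

# Two passes over the same feed leave the first item's ad unchanged,
# even though the ad picker now returns different creative.
store = {}
feed = [{"title": "Post one", "link": "http://example.com/1"}]
first = augment_feed(feed, store, lambda e: "ad-A")
second = augment_feed(feed, store, lambda e: "ad-B")
assert first[0]["ad"] == second[0]["ad"] == "ad-A"

The point of the sketch is the permanence of the identifier-to-ad mapping; that permanence is what keeps an augmented feed from tripping readers’ new-item detection when the advertising content rotates.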

The claims wander into the notion of a unique identifier for content objects, item augmentation, and other administrative operations that have considerable utility when applied at scale within the context of other Google services such as the programmable search engine. This is a lot more interesting than a tweak to an existing Google service. Plumbing is a foundation, but it is important in my opinion.

Stephen Arnold, October 12, 2009

Monetizing Info via Creating Content Ads

October 6, 2009

I found “One Riot Aims to Make Money from Twitter Search” quite interesting. If you are looking for a new slant on monetizing content, Rafe Needleman’s write up is a good place to start. The real time search vendor One Riot has developed a method for placing content objects adjacent to search results. Instead of a Google ad, One Riot uses a semi-Google style design to present related content. The new twist: content providers pay One Riot for placement. For me the most interesting comment in the article was:

It’s a unique plan to monetize Twitter, but it’s a delicate balance. Essentially it’s an arbitrage model: Musk is asking publishers, who are paid by advertisers, to themselves pay for advertising on One Riot to get more traffic, thus increasing their revenue yield per page. There’s nothing fundamentally new about the concept (TV shows are advertised on TV all the time), but it’s a bit of a tightrope. (Disclosure: I have heard that CBS is a partner of One Riot, but Musk would not confirm this with me. CNET News is published by CBS Interactive, a unit of CBS. ) RiotWise ads will run on the One Riot.com site, but the real potential for this plan, according to Musk, lies in the integration of RiotWise into Twitter apps. Potential customers are Tweetdeck, Seesmic, etc. In two weeks, a new application programming interface will let developers embed RiotWise suggestions into search results. One Riot will share revenues with app developers for these paid links.

The content objects are tweets, but I see applications for other sources going forward. The challenge will be to monetize content at a low cost. If One Riot cracks that nut, the company could be caught in a windstorm of cash.

Stephen Arnold, October 6, 2009, No dough

Wave Rolls Ashore

September 30, 2009

Google Wave rolls into a town of about 100,000 developers. There are dozens of stories that explain Wave. I think that most of them state the obvious. A good example is the Computer World write up “Google Wave: A New Kind of Mega-Application.”

For me, an interesting observation in the Computer World story was:

As for businesses, companies desperately need a technology like Wave to help their employees collaborate in a more streamlined way. Unfortunately, most enterprises remain years away from switching to this type of information stream, due to their current technology infrastructures.

What I noticed is that Wave is being explained in terms of existing technologies and well known services. I think such comparisons are helpful. In my opinion, however, those comparisons are incorrect and potentially misleading. Wave is neither familiar nor old. Functions within Wave appear to be familiar services, but the environment in which these services exist is quite new. Google is not doing a variant of SharePoint. Google is not putting email on steroids.

Wave is one of the first, although primitive, dataspace applications. Until that concept gets traction, competitors may see Wave and its applications as something familiar given a new coat of paint or some lipstick. That is the type of thinking that created the mental wheel spinning in the telecommunications and publishing sectors when Google was dismissed as a Web search and ad vendor.

There’s a simplified description of dataspaces in Google: The Digital Gutenberg and in an IDC report that Sue Feldman and I wrote on the subject in September 2008. If you are an IDC client, request report 213562, or snag a copy of my new monograph from Infonortics.

Stephen Arnold, September 30, 2009

Twitter Trends: A Glimpse of the Future of Content Monitoring

September 23, 2009

A happy quack to the reader who sent me information about “Trendsmap Maps Twitter Trends in Real-Time.” The Cnet write up points out that this Web site shows “trending Twitter topics by geographical location by combining data from Twitter’s API and What The Trend.” Very interesting publicly accessible service. Similar types of monitoring systems are in use in certain government organizations. The importance of this implementation is that the blend of disparate systems provides new ways to look at people, topics, and relationships. With this system another point becomes clear. If you want to drop off the grid, move to a small town where data flows are modest. Little data won’t show up, so more traditional monitoring methods have to be used. On the other hand, for those in big metro areas, interesting observations may be made. Fascinating. The site has some interface issues, but a few minutes of fiddling will make most of the strengths and weaknesses clear. The free service is at http://www.trendsmap.com/.
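To make the blend of disparate systems concrete, here is a minimal sketch, in Python, of the general approach as I understand it: bin geotagged short messages by place and count which tags dominate in each place. Everything here, from the sample data to the function name, is hypothetical; Trendsmap’s actual pipeline and the Twitter and What The Trend APIs are not represented.

from collections import Counter, defaultdict

def local_trends(messages, top_n=3):
    # Group geotagged messages by place and return each place's most
    # common hashtags. "messages" is a list of dicts with "place" and
    # "text" keys, a made-up shape standing in for API output.
    by_place = defaultdict(Counter)
    for msg in messages:
        tags = [w.lower() for w in msg["text"].split() if w.startswith("#")]
        by_place[msg["place"]].update(tags)
    return {place: counts.most_common(top_n) for place, counts in by_place.items()}

# Tiny illustrative data set: a tag can trend in one city and be invisible in another.
sample = [
    {"place": "Louisville", "text": "Great game tonight #cards"},
    {"place": "Louisville", "text": "#cards win again"},
    {"place": "London", "text": "Tube delays #commute"},
]
print(local_trends(sample))

A real system would also need volume thresholds so that a place with only a handful of messages does not surface spurious “trends,” which is the small-town point made above.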

Stephen Arnold, September 22, 2009

Googley Real Time Search

September 14, 2009

Manipulating Google URLs is a bit of an exotic hobby, probably less popular than polo in Harrod’s Creek, Kentucky. I wanted to pass along a tip that appeared in the Google Operating System Web log, “Even More Recent Google Search Results.” The GOS blog points to Ran Geva as the person who discovered a way to get Google to display the most recent index updates for a query. The syntax is, according to GOS:

The date restriction feature is quite flexible, but you need to know the syntax used by Google’s URLs: tbs=qdr:[name][value] where [name] can be one of these values: s (second), n (minute), h (hour), d (day), w (week), m (month), y (year), while [value] is a number.

Here is an example for the query “obama”:  http://www.google.com/search?q=obama&tbs=qdr:s45

To make Google real time search user friendly, just create a shortcut to a known good query and then substitute your own search string after the q=. If you have a good memory, use the string tbs=qdr:[name][value]. The name-value pair allows you to control the time interval.
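If you would rather not edit URLs by hand, a few lines of Python will do the substitution. This is just a convenience wrapper around the tbs=qdr:[name][value] syntax quoted above; the parameter is undocumented, so treat it as something Google may change or drop without notice.

from urllib.parse import urlencode

def recent_google_url(query, unit="h", value=1):
    # Build a Google results URL restricted to a recent time window.
    # unit: s (second), n (minute), h (hour), d (day), w (week),
    # m (month), y (year); value is a number.
    params = {"q": query, "tbs": f"qdr:{unit}{value}"}
    return "http://www.google.com/search?" + urlencode(params, safe=":")

# Reproduces the example above: results for "obama" from the last 45 seconds.
print(recent_google_url("obama", unit="s", value=45))

Note that urlencode handles the escaping of the query string, while safe=":" keeps the qdr value readable in the resulting URL.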

If you prefer point-and-click real time search, keep specialists such as Collecta, ITpints, Scoopler, or one of the user friendly services in mind.

Stephen Arnold, September 14, 2009

Real Time Search: Point of View Important

September 3, 2009

Author’s Note: I wrote a version of this essay for Incisive Media, the company that operates an international online meeting. This version of the write up includes some additional information.

Real-time search is shaping up like a series of hurricanes forming off the coast of Florida. As soon as one crashes ashore, scattering Floridians like dry leaves, another hurricane revs up. Real-time search shares some similarities with individual hurricanes and the larger weather systems that create the conditions for hurricanes.

This is a local-global or micro-macro phenomenon. What real time search is and is becoming depends on where one observes the hurricane.

Look at the two pictures below. One shows you a local weather station. Most people check their local weather forecast and make important decisions based on the data captured. I don’t walk my dogs when there is a local thunderstorm. Tyson, my former show ring boxer, is afraid of thunder.

Caption: The Local Weather: Easy to Monitor, Good for a Picnic


Image source: http://www.usa.gov

The other picture, taken from Earth orbit, shows a very different view of a weather system. Most people don’t pay much attention to global weather systems unless they disrupt life with hurricanes or blizzards.

Local weather may be okay for walking a dog. Global weather may suggest that I need to prepare for a larger, more significant weather event.

The Weather from the International Space Station


Image source: http://www.usa.gov

I want to identify these two storms and put each in the context of a larger shift in the information ecosystem perturbed by real time search. The first change in the online world is the momentum within the struggling traditional newspaper business to charge for content. Two traditional media oligopolies appear to be trying to sail out of the horse latitudes of declining revenue, shrinking profit, and technology change. Rupert Murdoch’s News Corporation wants to charge for quality journalism, which is expensive to produce. I am paraphrasing his views, which have been widely reported.

The Financial Times–confident with its experiments using information processing technology from Endeca (www.endeca.com) and Lexalytics (www.lexalytics.com)–continues to move forward with its “pay for content” approach to its information. The fact that the Financial Times has been struggling to find a winning formula for online almost as long as the Wall Street Journal has not diminished the newspaper’s appetite for online success. The notion of paying for content is gaining momentum among organizations that have to find a way to produce money to cover their baseline costs. Charging me for information seems, to these companies, to be the logical solution.

With these two international giants making a commitment to charge customers to access online content, this local storm system is easy to chart. I think it will be interesting to see how this shift in a newspaper’s traditional business model transfers to online. In a broader context, the challenge extends to book, magazine, and specialist publishers. No traditional print-on-paper company is exempt from inclement financial weather.

One cannot step into the same river twice, so I am reluctant to point out that both News Corporation and the Pearson company have struggled with online in various incarnations. News Corporation has watched as Facebook.com reached 350 million users while MySpace.com shriveled. Not even the advertising tie-up with Google has been sufficient to give MySpace.com a turbo boost. The Wall Street Journal has embraced marketing with a vengeance. I have documented in my Web log (www.arnoldit.com/wordpress) how the Wall Street Journal spams paying subscribers to buy additional subscriptions. You may have noticed the innovation section of the Wall Street Journal that featured some information and quite a bit of marketing for a seminar series sponsored by a prestigious US university. I was not sure where “quality journalism” began and where the Madison Avenue slickness ended.

