Tweets with Pickles: DataSift and Its Real Time Recipe

September 25, 2010

We have used Tweetmeme.com to see what Twitter users are doing right now. The buzzword “real time” has usurped “right now,” but that’s the magic of folks born between 1968 and 1978.

DataSift combines some nifty plumbing with an original scripting language for filtering 800 tweets a second. The system can ingest and filter other types of content, but as a Twitter partner, DataSift is in the Twitterspace at the moment.

Listio describes the service this way:

DataSift gives developers the ability to leverage cloud computing to build very precise streams of data from the millions and millions of tweets sent every day. Tune tweets through a graphical interface or through its bespoke programming language. Streams consumable through our API and real-time HTTP. Comment upon and rank streams created by the community. Extend one or more existing streams to create super streams.

The idea is that a user will be able to create a filter that plucks content, patterns such as Social Security Numbers, and metadata such as the user’s handle and geographic data. With these items, the system generates a tweet stream that matches the parameters of the filter. The language is called “Filtered Stream Definition Language,” and you can see an example of its lingo below:

RULE 33e3891a3aebad56f962bb5e7ae4dc94 AND twitter.user.followers_count > 1000

A full explanation of the syntax appears in the story “FSDL”.

You can find an example on the DataSift blog which is more accessible than the videos and third party write ups about a service that is still mostly under wraps.
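The Listio blurb says streams are consumable through an API and real-time HTTP. As a rough sketch only, and not DataSift’s documented interface, the endpoint URL, the reuse of the stream hash from the rule above, and the line-delimited JSON format are all assumptions; reading such a long-lived HTTP stream in Python would look roughly like this:

import json
import urllib.request

# Hypothetical endpoint; the real DataSift URL, authentication scheme, and
# payload format are not documented in this post.
STREAM_URL = "http://stream.example.com/33e3891a3aebad56f962bb5e7ae4dc94"

def consume(url):
    """Read a long-lived HTTP stream and yield one tweet per JSON line."""
    with urllib.request.urlopen(url) as response:
        for raw_line in response:
            line = raw_line.decode("utf-8").strip()
            if line:  # skip keep-alive blank lines
                yield json.loads(line)

for tweet in consume(STREAM_URL):
    # Assumed record shape: a "twitter" object carrying the tweet text.
    print(tweet.get("twitter", {}).get("text", ""))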

The wordsmiths never rest. Since I learned about DataSift, the service has morphed into “cloud event processing.” As a phrase for Google indexing, this one is top notch. In terms of obfuscating the filter, storage, and analysis aspects of DataSift, I don’t really like “cloud event processing” or the acronym CEP. Once again, I am in the minority.

The system’s storage component is called “pickles.” The filters can cope with irrelevant hash tags and deal with such Twitter variables as name, language, location, profiles, and followers, among others. There are geospatial tricks so one can specify a radius around a location or string together multiple locations and get tweets from people close to bankrupt Blockbuster stores in Los Angeles.
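To make the radius idea concrete, here is a minimal Python sketch of that kind of geospatial check, done client side purely for illustration; the tweet fields and the haversine great-circle shortcut are assumptions, not DataSift’s internal method.

import math

def within_radius(tweet, center_lat, center_lon, radius_km):
    """Return True if a tweet's coordinates fall inside the given radius.

    Assumes the tweet dict carries 'latitude' and 'longitude' keys;
    DataSift's actual geo operators run server side and are not shown here.
    """
    lat1, lon1 = math.radians(center_lat), math.radians(center_lon)
    lat2, lon2 = math.radians(tweet["latitude"]), math.radians(tweet["longitude"])
    # Haversine formula on a sphere of radius 6371 km.
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    distance_km = 2 * 6371 * math.asin(math.sqrt(a))
    return distance_km <= radius_km

# Example: keep tweets within 5 km of downtown Los Angeles.
tweets = [{"text": "closing sale", "latitude": 34.05, "longitude": -118.25}]
nearby = [t for t in tweets if within_radius(t, 34.0522, -118.2437, 5.0)]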

The system is what I call a next generation content processing service. Perched in the cloud, DataSift deals with the content flowing through the system. To build an archive, the filtered outputs have to be written to a storage service like pickles. Once stored, clever users can slice and dice the data to squeeze gems from the tweet stream.
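As a hedged illustration of that archive-then-analyze step, the following Python sketch appends filtered tweets to a local JSON-lines file standing in for a hosted store like pickles, then counts the most common hashtags; the file name and record fields are made up for the example.

import json
from collections import Counter

ARCHIVE = "filtered_tweets.jsonl"  # local stand-in for a hosted store such as pickles

def archive_tweet(tweet):
    """Append one filtered tweet to the local archive as a JSON line."""
    with open(ARCHIVE, "a") as f:
        f.write(json.dumps(tweet) + "\n")

def hashtag_counts(top_n=10):
    """Slice the archive: count how often each hashtag appears."""
    counts = Counter()
    with open(ARCHIVE) as f:
        for line in f:
            tweet = json.loads(line)
            counts.update(tag.lower() for tag in tweet.get("hashtags", []))
    return counts.most_common(top_n)

archive_tweet({"text": "store closing #blockbuster", "hashtags": ["blockbuster"]})
print(hashtag_counts())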

The service seems on track to become available in October or November 2010. A graphical interface is on tap, a step that most next generation content processing systems have to make. No one wants to deal with an end user who can set up his own outputs and make fine decisions based on a statistically-challenged view of his or her handiwork.

For more information point your browser at www.datasift.net.

Stephen E Arnold, September 25, 2010
