Twitter: Short Text Outfit Gets Excited about Computer Vision

July 30, 2014

Robots. Magic stuff like semantic concept lenses. Logical wackiness like software that delivers knowledge.

I read “Twitter Acquires Deep Learning Startup Madbits.” The write up points out to the drooling venture capitalists that Twitter’s purchase is “the latest in a spate of deep learning and computer vision acquisitions that also includes Google, Yahoo, Dropbox, and Pinterest.” What this means is that these oh-so-hot outfits are purchasing companies that purport to have software that can figure stuff out.

I recall a demonstration in Japan in the late 1990s. I was giving some talks in Osaka and Tokyo. One of the groups I met showed me a software system that could process a picture and spit out what was in the picture. I remember that the system was able to analyze a photo of a black and white cow standing in a green pasture.

The software nailed it. The system displayed the Japanese character 牛. My hosts explained that the ideograph meant “cow.” High fives ensued. On other pictures, the system did not perform particularly well.

Flash forward some 15 years. In a demonstration of image recognition at an intelligence conference, the system worked like a champ on clear images that allowed the algorithm to identify key points, compute distances, and then scurry off to match the numeric values of one face with those in the demo system’s index. The system, after decades of effort and massive computational horsepower increases, was batting about .500.
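
As a rough illustration of the matching step described above, and emphatically not the conference system’s actual method, here is a minimal Python sketch: each face is boiled down to a short vector of distances between landmark points, and a probe face is matched against the closest enrolled vector. The identities, numbers, and threshold are all invented for illustration.

import math

# Hypothetical landmark-distance vectors: each face is summarized by a few
# normalized distances between key points (eyes, nose, mouth corners).
enrolled = {
    "subject_a": [0.42, 0.31, 0.55, 0.28],
    "subject_b": [0.47, 0.29, 0.61, 0.33],
}

def euclidean(a, b):
    """Distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match(probe, index, threshold=0.05):
    """Return the closest enrolled identity, or None if nothing is close enough."""
    best_id, best_dist = None, float("inf")
    for identity, vector in index.items():
        d = euclidean(probe, vector)
        if d < best_dist:
            best_id, best_dist = identity, d
    return best_id if best_dist <= threshold else None

print(match([0.43, 0.31, 0.54, 0.28], enrolled))  # close to subject_a: a hit
print(match([0.50, 0.36, 0.49, 0.24], enrolled))  # shifted measurements: None

The threshold is the design knob: loosen it and false matches creep in; tighten it and legitimate matches slip away. That trade-off is a big part of why the batting average stalls.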

The problem is that the same subject can look very different from one picture to the next. When the subject is wearing a baseball cap, has grown a beard, or is simply laughing, the system does not do particularly well.

You can see how Google performs. Navigate to Google Images, select a picture of a monument, and let Google find matches. Some are spot on. I use a perfect-match example in my lectures about open source intelligence tools. I also have some snaps in my presentation that do not work particularly well. Here’s an example of a Google baffler:

[Image: the query photo]

This is a stuffed pony wearing a hat. The photo was taken at an outdoor flea market in Santiago, Chile.

This is the match that Google returned:

[Image: the match Google returned]

Notice that there were no stuffed horses in the retrieved data set. The “noise” in the original image does not fool a human. Google’s algorithms are another kettle of fish, or a booth filled with stuffed ponies.
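
Why does “noise” a human shrugs off derail an automated matcher? One common approach, far simpler than whatever Google runs, is a perceptual hash: shrink the image to a tiny grid, mark each cell light or dark relative to the average, and compare two images by counting the bits that differ. The grids below are invented to show how a hat and a cluttered flea market stall can flip enough bits to weaken the match.

def average_hash(pixels):
    """Perceptual hash: 1 for cells brighter than the mean, 0 for darker."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits between two hashes; lower means more similar."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

# Tiny 4x4 grayscale grids, invented for illustration.
original = [
    [200, 190, 60, 55],
    [195, 185, 58, 52],
    [70, 65, 210, 205],
    [68, 60, 208, 200],
]
# The same scene with a hat and a busy background darkening several regions.
noisy = [
    [90, 80, 60, 55],
    [85, 75, 58, 52],
    [70, 65, 100, 95],
    [68, 60, 208, 200],
]

print(hamming(average_hash(original), average_hash(noisy)), "of 16 bits differ")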

The Twitter purchase of Madbits (the name suggests swarming or ant methods) delivers some smart folks who have, according to the write up, developed software that:

automatically understands, organizes and extracts relevant information from raw media. Understanding the content of an image, whether or not there are tags associated with that image, is a complex challenge. We developed our technology based on deep learning, an approach to statistical machine learning that involves stacking simple projections to form powerful hierarchical models of a signal.
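
Stripped of the marketing, “stacking simple projections” means repeated matrix multiplications separated by simple nonlinearities. The toy sketch below uses plain Python and invented weights, and it makes no claim about Madbits’ actual architecture; it only shows the shape of the idea, with each layer projecting the previous layer’s output into a new representation.

def relu(x):
    """A simple nonlinearity applied between projections."""
    return [max(0.0, v) for v in x]

def project(vector, weights):
    """One 'simple projection': a matrix-vector multiplication."""
    return [sum(w * v for w, v in zip(row, vector)) for row in weights]

def forward(signal, layers):
    """Stack the projections: each layer feeds the next, forming a hierarchy."""
    x = signal
    for weights in layers:
        x = relu(project(x, weights))
    return x

# Invented weights standing in for parameters a real system would learn.
layers = [
    [[0.2, -0.5, 0.1], [0.7, 0.3, -0.2]],  # raw signal -> edge-like features
    [[0.6, -0.4], [0.1, 0.9]],             # edge-like -> part-like features
]
print(forward([0.9, 0.1, 0.4], layers))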

Once some demonstrations of Twitter’s scaling of this interesting technology are available, I can run the radiation poisoning test. Math is wonderful except when it is not able to do what users expect, hope, or really want to get.

Marketing is good. Perhaps Twitter will allow me to track down this vendor of stuffed ponies. (Yep, it looked real to me.) I know, I know. This stuff works like a champ in the novels penned by Alastair Reynolds. Someday.

Stephen E Arnold, July 30, 2014

PetMatch for iOS Finds Furry Friends

July 25, 2014

A new image-based search tool can take some of the research out of adopting a pet. Lifehacker turns our attention to the free iOS app in “PetMatch Searches for an Adoptable Pet Based on Appearance.” Now, pet lovers who see their perfect pet on the street can take a picture and find local doppelgangers in need of homes. Perhaps this will help lower dog-napping rates. Reporter Dave Greenbaum notes:

“You should never adopt an animal solely based on looks, of course—you should research the personality of the breed you want—but looks are a factor. This app works great for mixed breed dogs when you aren’t sure what kind of dog you are looking at. I like the fact it will look at local adoption agencies to find a match, too. Online services like Petfinder.com help you find local pets to adopt, but you have to know which breed you are looking for first, and searching for mixed breed dogs (common at shelters) is difficult. This app makes it easy to do a reverse image search and do your research based on the results.”

Another point to note is that PetMatch includes a gallery of dog and cat breeds, so if the picture is in your head instead of your phone, you can still search for a look-alike. It’s a clever idea, and an innovative use of image search functionality.

Cynthia Murrell, July 25, 2014

Sponsored by ArnoldIT.com, developer of Augmentext

Create Your Own Image Compendium with the Met’s The Collection Online

June 17, 2014

The Metropolitan Museum of Art offers a new picture archive, The Collection Online, with some 400,000 images of art searchable by artist, culture, method, material, geographic location, date or era, and even by department. From costumes to books to ceramics to the more obvious paintings and sculpture in limestone or bronze, this collection is incredible in its scope and detail. Searching, say, for a painting by Vincent Van Gogh yields a list of 124 records. Many of these are not works by Van Gogh, but there is an option to limit the results to that painter. Click on The Potato Peeler and find not only the stats of the painting (oil on canvas from 1885, 16 x 12 1/2 in.) but also where to find it in the Museum (Gallery 826), if available. Beneath the image there is some additional information:

“This painting from February/March 1885, with its restricted palette of dark tones, coarse facture, and blocky drawing, is typical of the works Van Gogh painted in Nuenen the year before he left Holland for France. His peasant studies of 1885 culminated in his first important painting, The Potato Eaters (Van Gogh Museum, Amsterdam).”

You are also informed that on the reverse of the same canvas is Van Gogh’s Self-Portrait with a Straw Hat. Users can register for free, which enables them to create their own assortment by saving images to an individual collection. The Met is no stranger to successful online endeavors, having recently won a Webby from the Academy of Digital Arts & Sciences for its Instagram account.

Chelsea Kerwin, June 17, 2014

Sponsored by ArnoldIT.com, developer of Augmentext

Jelly Supplies Crowdsource-Powered Image Search

February 10, 2014

Here’s a new way to search from one of the minds that helped loose Twitter upon the world. The Los Angeles Times shares an interview with a Twitter co-founder in, “Biz Stone Answers our Questions About New Q&A App Jelly.” Forget algorithms; this app lets you take or upload a picture and pose a question about it to other humans, both within and outside your social-media circles.

Stone and co-founder Ben Finkel started with a question: if we were to design a search tool around today’s online landscape, as opposed to the one that existed about a decade ago, what would it look like? As the app’s website explains, “It’s not hard to imagine that the true promise of a connected society is people helping each other.” (Finkel, by the way, founded Q&A site Fluther.com and served as its CEO until that service was acquired by Twitter in 2010.)

One of Jelly’s rules may annoy some: users cannot post a question without including an image. Writer Jessica Guynn asks Stone why he incorporated that requirement. He responds:

“We did a lot of testing and more often than not, an image very much deepens the context of a question. That’s why we made it so you can either take a picture with your camera and say, ‘What kind of tree is this?’ Or you can pull from the photo albums you already have. Or you can get [a photo] from the Web. Photos are what make mobile mobile. We are really taking advantage of the fact that this is a mobile native application…. Everyone is carrying around these great cameras. It’s a uniquely mobile experience to pair a short question with a photo. It might frustrate a few people in the long run but it will only end up with better quality for us. There is a higher bar to submitting a question.”

The image requirement is just one way Jelly differs from Twitter. The team also worked toward making the new app less conversational in order to avoid the clutter of non-answers. (And we thought 140 characters was limiting.) We’re curious to see how well users will warm to this unique service.

Cynthia Murrell, February 10, 2014

Sponsored by ArnoldIT.com, developer of Augmentext

Google: A New Way to Search and Sell Ads Too

July 20, 2012

LifeHacker catches us up with some developments in “Remains of the Day: Google Image Search Gets Knowledge Graph Integration.” The headlining item promises smarter and more comprehensive “Search by Image” results. The article quotes Google’s blog on a feature I’ve been looking forward to (the second one):

“Google updated its Image Search with a couple of new features. One being an expanded view that lets searchers see the text around matching images, and the other being added support for Knowledge Graph to image search results, which means Google will attempt to identify any photo that you upload or link to and provide more information about the subject.”

In other news, the write up notes that the video player VLC is now at version 2.0.2 for both Windows and OS X. Several small tweaks and bug fixes are included, and Retina Display support has been added. Also, Sparrow, an OS X email client, released an update to its desktop version. The update includes support for Retina Display and Mountain Lion. Amazon’s Flow app, already available for iOS, now brings barcode scanning and augmented reality to Android users.

Finally, Google is continuing its name-shuffle game. The Google Places iOS app follows the Google Places service in being renamed Google+ Local. A voice-search feature is now included in the app version.

Cynthia Murrell, July 20, 2012

Sponsored by PolySpot

CCFinder Offers Flexibility but How Far Do They Stretch

July 16, 2012

Creative Commons offers a lot of versatility, but until now the available finders have been limited. Flexibility was needed, and AbelsSoft’s new Creative Commons image finder provides just that. Lifehacker’s article “CCFinder Simplifies Creative Commons Image Searches” talks about the pluses and minuses of this new program.

AbelsSoft offers a few perks when it comes to defining search, such as:

“You can filter your search to omit or include various types of CC restrictions such as non-commercial use only, references required to the original author, etc. Once you perform a search you can select a single or multiple images and either download to your preferred folder, visit the source image web site, or set the image as your desktop wallpaper.”
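
The license filtering described in that passage amounts to screening each image record against the restrictions a user will accept. Here is a minimal sketch with invented record fields; it is not CCFinder’s or Flickr’s actual data format.

# Hypothetical image records; the field names are invented for illustration.
images = [
    {"title": "harbor at dusk", "license": "CC BY"},
    {"title": "street market", "license": "CC BY-NC"},
    {"title": "old clock tower", "license": "CC BY-SA"},
]

def acceptable(record, allow_noncommercial=False, allow_sharealike=True):
    """Keep only records whose CC license fits the user's restrictions."""
    license_code = record["license"]
    if "NC" in license_code and not allow_noncommercial:
        return False
    if "SA" in license_code and not allow_sharealike:
        return False
    return True

commercial_ok = [r["title"] for r in images if acceptable(r)]
print(commercial_ok)  # ['harbor at dusk', 'old clock tower']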

One limiting aspect of CCFinder’s search engine is that it only utilizes Flickr, which, ironically, has the largest selection of CC-licensed images available. Creative Commons offers users several sites to choose from, like Google Images, Open Clip Art Library, and Fotopedia; however, users are still limited to one site per search.

The CCFinder download is free, but registration does sign users up to receive an occasional newsletter. In itself, that is not a lot to ask for the convenience of well-defined search. AbelsSoft also offers a professional version of CCFinder that further defines how users search by implementing color filters. At first glance, CCFinder seems a user-friendly program with search flexibility. We will have to see how far they stretch.

Jennifer Shockley, July 16, 2012

Sponsored by PolySpot

Short List of Image Search Tools

October 29, 2010

Short honk: One never knows when this type of list will be needed. “7 Image Search Tools That Will Change Your Life” provides descriptions, some screenshots, and links to seven image search tools. My life has not been changed, but a happy quack to Brain Pickings for the information. One example:

Retrievr at http://labs.systemone.at/retrievr/

Stephen E Arnold, October 29, 2010

Freebie

Google Becoming Bing?

July 23, 2010

It seems that Google isn’t immune to adopting good ideas when it sees them in other places. Reading “Google Positively Bing-Like with New Image Search Capabilities,” we see the company has updated its image search to display over 1,000 images on each page. It shows that even Google knows when it needs to change and keep moving ahead, and that it is not immune to influence from the likes of Microsoft.

Other noteworthy changes include a denser search results page and the ability to see a larger preview of an image by hovering the mouse over it.

Of course, no changes would be complete without some kind of advertising-friendly features as well; hence the new image format called Google Search Ads. Still, the Microsoft influence makes us wonder: whatever happened to innovation at Google? Strange for a company whose lifeblood is search.

I don’t like the endless page “thing.” Latency remains an issue with certain network connections. How about a button to reclaim the “old” image search? Better yet, do something original.

Stephen E Arnold, July 23, 2010

Google Probes the Underbelly of AutoCAD

October 15, 2009

Remember those college engineering wizards who wanted to build real things? Auto fenders, toasters, and buildings in Dubai. Chances are the weapon of choice was a software product from Autodesk. Over the years, Autodesk added features and functions to its core product and branched out into other graphic areas. In the end, Autodesk was held captive by the gravitational pull of AutoCAD.

In one of my Google monographs, I wrote about Google’s SketchUp program. I recall several people telling me that SketchUp was unknown to them. These folks, I must point out, were real, live Google experts. SketchUp was a blip on a handful of users’ radar screens. I took another angle of view, and I saw that the Google coveted the engineering wizards when they were in primary school and had a method for keeping these individuals in the Google camp until they designed their last low-cost fastener for a green skyscraper in Shanghai.

No one really believed that this was possible.

My suggestion is that some effort may be prudently applied to rethinking what the Google is doing with engineering software that makes pictures and performs other interesting Googley tricks. The first step could be reading the Introducing Google Building Maker article on the “official” Google Web log. I would gently suggest that the readers of this Web log buy a copy of the Google trilogy, consisting of my three monographs about Google technology. Either path will give you some food for thought.

For me, the most interesting comment in the Google blog post was:

Some of us here at Google spend almost all of our time thinking about one thing: How do we create a three-dimensional model of every built structure on Earth? How do we make sure it’s accurate, that it stays current and that it’s useful to everyone who might want to use it? One of the best ways to get a big project done — and done well — is to open it up to the world. As such, today we’re announcing the launch of Google Building Maker, a fun and simple (and crazy addictive, it turns out) tool for creating buildings for Google Earth.

The operative phrase is “every built structure on Earth.” How is that for scale?

What about Autodesk? My view is that the company is going to find itself in the same position that Microsoft and Yahoo now occupy with regard to Google. Catching up is impossible; leapfrogging is the solution. I don’t think the company can make this type of leap. Just my opinion.

Stephen Arnold, October 15, 2009
Another freebie. Not even a lousy Google mouse pad for my efforts.

Oracle Taps Brainware

October 15, 2009

The Reuters story “Brainware Signs OEM Agreement with Oracle for Intelligent Data Extraction” caught me, and probably the folks at ZyLAB and other content processing companies, by surprise. Brainware and its patented trigram technology have created strong believers in some markets, such as litigation support. But the company has been working to strengthen its content acquisition functionality as well. The idea is that paper and electronic information enter at one end and come out searchable at the other. Oracle has been lagging in search. The Triple Hop technology has not taken center stage, in my opinion. The Brainware deal seems to be for the content acquisition functions, what the news story calls “intelligent data capture”; that is, scanning and transforming functions plus entity extraction. Will Oracle embrace Brainware’s search and retrieval technology as well? Good question. Secure Enterprise Search needs some vitamins, in my opinion. My hunch is that Oracle is beefing up its back-end content intake system in order to deal with the increasingly successful Autonomy combine, which continues to put pressure on big boys like Oracle. Brainware benefits from the publicity this tie up will produce. Search vendors, in my opinion, need this type of buzz to light up the radar of information technology professionals who too often focus on three or four search vendors, ignoring some interesting alternatives.
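
For readers who have not run into the term, trigram matching breaks text into overlapping three-character chunks so that noisy scans or misspelled entries can still be matched to the intended value. The sketch below is my own illustration in Python, not Brainware’s patented implementation.

def trigrams(text):
    """Overlapping three-character chunks of a lowercased, padded string."""
    padded = "  " + text.lower() + " "
    return {padded[i:i + 3] for i in range(len(padded) - 2)}

def similarity(a, b):
    """Jaccard overlap of trigram sets: 1.0 identical, 0.0 nothing shared."""
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb)

# A noisy OCR rendering still scores well against the intended vendor name.
print(similarity("Acme Industrial Supply", "Acme Industriai Supp1y"))
print(similarity("Acme Industrial Supply", "Brainware"))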

Stephen Arnold, October 14, 2009
