
Image Search: Getting Better and Better

May 15, 2015

Image search means having software that can figure out from a digital photo that a cow is a cow. In more complex photos, the software identifies what it can. I recall one demonstration that recognized me as a 20-year-old criminal. Close but no cigar.

I received an email from a former clandestine professional. The link provided informed me that Baidu was better at image recognition than the Google. The alleged error rate is 4.58 percent. I love the two decimal accuracy.

Not to be outdone, Wolfram Alpha is in the image recognition game as well. Navigate to “Wolfram Alpha Image Identification Identifies Steven Wolfram as Podium.” The write up points out:

Speaking of which, a picture of Steven Wolfram returned the answer ‘podium’. So no recognition for the creator. Unfortunately, it couldn’t identify a map of France at all and just came back with a big question mark. Sorry, France.

You can try the system at this page.

I uploaded the image of the cover of my new CyberOSINT study. The system returned this result:


My book cover is a piece of electronic equipment that mixes two or more input signals to give a single output signal.

I did not know that. I thought it was a book cover with a blue hand.

Stephen E Arnold, May 15, 2015

Looking for GIF Files?

April 26, 2015

Need a GIF file? Check out “5 GIF Search Engines & Tools You Haven’t Heard Of Yet.” Searching for GIFs using some Web search engines can yield interesting results.

Stephen E Arnold, April 26, 2015

Progress in Image Search Tech

April 8, 2015

Anyone interested in the mechanics behind image search should check out the description of PicSeer: Search Into Images from YangSky. The product write-up goes into surprising detail about what sets their “cognitive & semantic image search engine” apart, complete with comparative illustrations. The page’s translation seems to have been done either quickly or by machine, but don’t let the awkward wording in places put you off; there’s good information here. The text describes the competition’s approach:

“Today, the image searching experiences of all major commercial image search engines are embarrassing. This is because these image search engines are

  1. Using non-image correlations such as the image file names and the texts in the vicinity of the images to guess what are the images all about;
  2. Using low-level features, such as colors, textures and primary shapes, of image to make content-based indexing/retrievals.”
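Approach #2 in the quoted list, indexing on low-level features such as colors, can be sketched as a toy content-based retrieval step. Everything here (the pixel data, the bin count, the similarity function) is invented for illustration; real engines use far richer descriptors.

```python
# Toy illustration of low-level-feature retrieval: a coarse color histogram
# plus histogram intersection as the similarity score.

def color_histogram(pixels, bins=4):
    """Quantize each RGB channel into `bins` buckets and count pixels."""
    hist = {}
    for r, g, b in pixels:
        key = (r * bins // 256, g * bins // 256, b * bins // 256)
        hist[key] = hist.get(key, 0) + 1
    total = len(pixels)
    return {k: v / total for k, v in hist.items()}

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]: overlap between the two color distributions."""
    return sum(min(h1.get(k, 0.0), h2.get(k, 0.0)) for k in set(h1) | set(h2))

# Two mostly green "pasture" images and one mostly blue "sky" image.
pasture_a = [(30, 180, 40)] * 90 + [(250, 250, 250)] * 10
pasture_b = [(25, 170, 50)] * 80 + [(240, 240, 240)] * 20
sky = [(60, 120, 220)] * 100

sim_pastures = histogram_intersection(color_histogram(pasture_a),
                                      color_histogram(pasture_b))
sim_sky = histogram_intersection(color_histogram(pasture_a),
                                 color_histogram(sky))
print(round(sim_pastures, 2), round(sim_sky, 2))  # 0.9 0.0
```

Note what the sketch cannot do: a green pasture with a cow and an empty green pasture score as near-identical, which is exactly the weakness the quoted passage complains about.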

With the first approach, they note, trying to narrow the search terms is inefficient because the software is looking at metadata instead of inspecting the actual image; any narrowed search excludes many relevant entries. The second approach simply does not consider enough information about images to return the most relevant, and only the most relevant, results. The write-up goes on to explain what makes their product different, using for their example an endearing image of a smiling young boy:

“How can PicSeer have this kind of understanding towards images? The Physical Linguistic Vision Technologies have can represent cognitive features into nouns and verbs called computational nouns and computational verbs, respectively. In this case, the image of the boy is represented as a computational noun ‘boy’ and the facial expression of the boy is represented by a computational verb ‘smile’. All these steps are done by the computer itself automatically.”

See the write-up for many more details, including examples of how Google handles the “boy smiles” query. (Be warned: there’s a very brief section about porn filtering that includes a couple of censored screenshots and adult keyword examples.) It looks like image search technology is progressing apace.

Cynthia Murrell, April 08, 2015

Stephen E Arnold, Publisher of CyberOSINT

Image and Video Recognition: A Bump in the Road

March 24, 2015

I read “Images That Fool Computer Vision Raise Security Concerns.” I found the write up a reminder that the marketing and venture capitalist hype is one thing; real world software performance is another.

The article states:

Cornell researchers have found that computers, like humans, can be fooled by optical illusions, which raises security concerns and opens new avenues for research in computer vision.

The passage I highlighted in a mellow yellow says:

But computers don’t process images the way humans do, Yosinski [a Cornell wizard] said. “We realized that the neural nets did not encode knowledge necessary to produce an image of a fire truck, only the knowledge necessary to tell fire trucks apart from other classes,” he explained. Blobs of color and patterns of lines might be enough. For example, the computer might say “school bus” given just yellow and black stripes, or “computer keyboard” for a repeating array of roughly square shapes.
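The failure mode in that quote can be shown with a toy: a classifier that learned only a crude discriminative cue (here, the share of yellow and black pixels) will happily call a meaningless stripe pattern a school bus. The colors, threshold, and data are all invented for the sketch.

```python
# Toy "school bus detector" keyed on a crude cue, as the quote describes.

YELLOW, BLACK, BLUE = (255, 210, 0), (0, 0, 0), (40, 90, 200)

def naive_bus_score(pixels):
    """Share of pixels that are roughly yellow or black."""
    hits = sum(1 for p in pixels if p in (YELLOW, BLACK))
    return hits / len(pixels)

def classify(pixels, threshold=0.8):
    return "school bus" if naive_bus_score(pixels) >= threshold else "other"

# A plausible bus photo region and a meaningless yellow/black stripe pattern.
bus_photo = [YELLOW] * 60 + [BLACK] * 25 + [BLUE] * 15
abstract_stripes = ([YELLOW] * 5 + [BLACK] * 5) * 10

print(classify(bus_photo))         # school bus
print(classify(abstract_stripes))  # also school bus: fooled by stripes
```

The detector never encoded what a bus looks like, only what separates buses from the other classes it saw, which is Yosinski's point.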

So what?

It turns out that this diagram looks exactly like a penguin.


The smart software sees the abstraction as what most grade school children know as a lovable penguin. I did not smell a penguin until after I left grade school. Someone should have warned me.


And the challenge? I have no comment about the expectations of a government professional who relies on image recognition as part of an ongoing investigation.

Stephen E Arnold, March 24, 2015

Finding Elusive Image Libraries

February 17, 2015

In order to build a fantastic Web site these days, you need eye-catching graphics. While a logo can be commissioned on Fiverr, producing daily images for your content feed is a little more difficult. It is not cost efficient to hire a graphic designer for every image (unless you have deep pockets), so it helps to have an image library to draw on. The problem with typing “image library” into a search engine is that you have to sift through the results and assess each possible source.

Graphic designer Ash Stallard-Phillips collected “25 Awesome Sites With Stunning Free Stock Photos.” He rounded up the image libraries, because:

“As a web designer myself, I always find it handy to have an image library that I can use for dummy images and testing. I have compiled a list of the best sites offering free stock photos that you can use for your projects. “

Ash evaluates each resource, listing the pros and cons. Many of the image Web sites he lists are new to us and will be useful as we create content. Articles like Ash’s are increasingly common on the Internet, and not just for photo libraries: they gather the helpful information you would usually have to sift through search results for, saving time on both searching and evaluation.

Whitney Grace, February 17, 2015
Sponsored by, developer of Augmentext

IBM on Skin Care

January 19, 2015

Watson has been going to town in different industries, putting to use its massive artificial brain. It has been working in the medical field interpreting electronic medical record data. According to Open Health News, IBM has used its technology in other medical ways: “IBM Research Scientists Investigate Use Of Cognitive Computing-Based Visual Analytics For Skin Cancer Image Analysis.”

IBM partnered with Memorial Sloan Kettering to use cognitive computing to analyze dermatological images to help doctors identify cancerous states. The goal is to help doctors detect cancer earlier. Skin cancer is the most common type of cancer in the United States, but diagnostic expertise varies. It takes experience to be able to detect cancer, but cognitive computing might take out some of the guesswork.

“Using cognitive visual capabilities being developed at IBM, computers can be trained to identify specific patterns in images by gaining experience and knowledge through analysis of large collections of educational research data, and performing finely detailed measurements that would otherwise be too large and laborious for a doctor to perform. Such examples of finely detailed measurements include the objective quantification of visual features, such as color distributions, texture patterns, shape, and edge information.”

IBM is already a leader in visual analytics, and the new skin cancer project posted a 97% sensitivity and 95% specificity rate in preliminary tests. Those figures suggest the cognitive computing approach can be accurate.
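For readers who do not work with diagnostic metrics daily, sensitivity and specificity are simple ratios over a confusion matrix. The counts below are invented to reproduce the quoted 97%/95% figures; they are not from the IBM study.

```python
# Sensitivity and specificity from confusion-matrix counts.

def sensitivity(tp, fn):
    """Of the truly cancerous lesions, what fraction did the system flag?"""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Of the benign lesions, what fraction did the system clear?"""
    return tn / (tn + fp)

# Hypothetical counts chosen to match the reported figures.
print(sensitivity(tp=97, fn=3))   # 0.97
print(specificity(tn=95, fp=5))   # 0.95
```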

Could the cognitive computing be applied to identifying other cancer types?

Whitney Grace, January 19, 2015
Sponsored by, developer of Augmentext

Could It Be? An Accurate Image Search?

January 8, 2015

Image search is a touchy subject. Copyright, royalties, privacy, and accuracy are huge concerns for image holders and searchers. People scour the Internet for images they can freely use without problems, but oftentimes the images have a watermark or are so common they are mediocre. Killer Startups points to a great new startup that could revolutionize how people find pictures: “Today’s Killer Startup: Compfight.”

Compfight is an image search engine comparable to Flickr, except it is faster and uses features similar to the advanced search function on Google.

“The site also lets you specify if you’re looking only for Creative Commons licensed images or ones to use commercially. If you’re new to this kind of image use, Compfight even provides a handy little guide on how to cite your sources properly. Last and probably least, Compfight also provides access to professional stock photos, starting as low as $1 per image.”

Developers are still trying to create the perfect image search, and while it remains a work in progress, Compfight shows we’re on the right path.

Whitney Grace, January 08, 2015
Sponsored by, developer of Augmentext

Facial Recognition: A Clue for Dissemblers

November 29, 2014

The idea that numerical recipes can identify a person in video footage is a compelling one. I know one or two folks involved in law enforcement who would find a reliable, low cost, high speed solution very desirable.


The face on the left is a reverse engineered FR image. The chap on the right is the original Creature from the Black Lagoon. Toss in a tattoo and this Black Lagoon fellow would not warrant a second look at Best Buy on Cyber Monday.

I read “This Is What Happens When You Reverse Engineer Facial Recognition.” The internal data a system stores about an image is not a graduation photograph. The write up contains an interesting statement:

The resulting meshes were then 3D-printed, creating masks that could be worn by people, presenting cameras with an image that is no longer an actual face, yet still recognizable as one to software.

Does this statement point to a way to degrade the performance of today’s systems? A person wanting to be unrecognized could flip this reverse engineering process and create a mash up of two or more “faces.” Sure, the person would look a bit like the Creature from the Black Lagoon, but today’s facial recognition systems might be uncertain about who was wearing the mask.

Stephen E Arnold, November 29, 2014


The Fleeting Image Search

October 16, 2014

Image search is touted as being intuitive and accurate: users simply submit an image to the search engine, and picture analyzing algorithms return similar images. It is, however, still a work in progress; image search remains a difficult task for search engines to master. Search Engine Watch brings us the news in “Bing Unveils Responsive Design For Image Search” that the search engine is ramping up to improve its image search.

The newest improvements optimize image search for touch screen mobile devices. Bing has changed the way users can browse through images, making it simpler to explore and refine results. Pinterest board searches have been added, and a mini-header that slides with users as they scroll down will offer quick access to popular results. The image hover feature has also been updated.

Along with the updates, Bing has these tips to improve image search:

• “Quality: No matter what the user is searching for, Bing is focused on providing high-quality and relevant image search results.

• Suggestions: Users that are scrolling page after page are clearly having a difficult time finding what they are looking for. Bing maintains a set of search suggestions and collections to help users find what they need.

• Actions: There are many different ways to search and endless topics to search about. Bing has provided the tools necessary to filter results, create an image match, and create one-click access to Pinterest.”

These upgrades will improve image search, but it still has a long way to go.

Whitney Grace, October 16, 2014
Sponsored by, developer of Augmentext

Twitter: Short Text Outfit Gets Excited about Computer Vision

July 30, 2014

Robots. Magic stuff like semantic concept lenses. Logical wackiness like software that delivers knowledge.

I read “Twitter Acquires Deep Learning Startup Madbits.” The write up points out to the drooling venture capitalists that Twitter’s purchase is “the latest in a spate of deep learning and computer vision acquisitions that also includes Google, Yahoo, Dropbox, and Pinterest.” What this means is that these oh-so-hot outfits are purchasing companies that purport to have software that can figure stuff out.

I recall a demonstration in Japan in the late 1990s. I was giving some talks in Osaka and Tokyo. One of the groups I met showed me a software system that could process a picture and spit out what was in the picture. I remember that the system was able to analyze a photo of a black and white cow standing in a green pasture.

The software nailed it. The system displayed the Japanese character 牛. My hosts explained that the ideograph meant “cow.” High fives ensued. On other pictures, the system did not perform particularly well.

Flash forward 15 years. In a demonstration of image recognition at an intelligence conference, the system worked like a champ on clear images that allowed the algorithm to identify key points, compute distances, and then scurry off to match the numeric values of one face with those in the demo system’s index. The system, after decades of effort and massive computational horsepower increases, was batting about .500.

The problem is that different pictures have different looks. When the subject is wearing a baseball cap, has grown a beard, or is simply laughing, the system does not do particularly well.
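The matching step described above, reduce a face to measurements, then find the nearest enrolled vector, can be sketched as follows. The measurement vectors, names, and threshold are all invented; modern systems use learned embeddings rather than hand-picked distances, but the nearest-neighbor comparison is the same idea.

```python
import math

def euclidean(a, b):
    """Distance between two measurement vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, gallery, threshold=1.0):
    """Return the closest enrolled identity, or None if nothing is close."""
    name, dist = min(((n, euclidean(probe, v)) for n, v in gallery.items()),
                     key=lambda t: t[1])
    return name if dist <= threshold else None

# Hypothetical enrolled faces: each vector is a handful of key-point
# measurements (eye spacing, nose length, and so on).
gallery = {
    "subject_a": [0.42, 0.31, 0.77, 0.25],
    "subject_b": [0.55, 0.48, 0.60, 0.33],
}

clear_shot = [0.43, 0.30, 0.78, 0.24]  # clean image: measurements line up
with_beard = [0.43, 0.30, 1.90, 0.24]  # one measurement badly distorted
print(identify(clear_shot, gallery))   # subject_a
print(identify(with_beard, gallery))   # None
```

A beard, a cap, or a laugh distorts a few of the underlying measurements, which pushes the probe past the match threshold; that is the batting-about-.500 problem in miniature.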

You can see how Google performs. Navigate to Google Images, select a picture of a monument, and let Google find matches. Some are spot on. I use a perfect match example in my lectures about open source intelligence tools. I have some snaps in my presentation that do not work particularly well. Here’s an example of a Google baffler:


This is a stuffed pony wearing a hat. Photo was taken in Santiago, Chile at an outdoor flea market.

This is the match that Google returned:


Notice that there were no stuffed horses in the retrieved data set. The “noise” in the original image does not fool a human. Google algorithms are another kettle of fish or booth filled with stuffed ponies.

The Twitter purchase of Madbits (the name suggests swarming or ant methods) delivers some smart folks who have, according to the write up, developed software that:

automatically understands, organizes and extracts relevant information from raw media. Understanding the content of an image, whether or not there are tags associated with that image, is a complex challenge. We developed our technology based on deep learning, an approach to statistical machine learning that involves stacking simple projections to form powerful hierarchical models of a signal.
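The “stacking simple projections” phrase in the quote can be made concrete with a minimal two-layer forward pass. The weights and input are invented; this illustrates only the structure (projection, nonlinearity, projection), not anything about Madbits’ actual models.

```python
# Minimal "stacked projections" forward pass in plain Python.

def project(vec, weights):
    """One simple projection: a matrix-vector product."""
    return [sum(w * x for w, x in zip(row, vec)) for row in weights]

def relu(vec):
    """The nonlinearity between projections."""
    return [max(0.0, x) for x in vec]

def forward(vec, layers):
    """Apply each projection in turn, with a nonlinearity between them."""
    for weights in layers[:-1]:
        vec = relu(project(vec, weights))
    return project(vec, layers[-1])

# Two stacked projections with made-up weights.
layers = [
    [[0.5, -0.2], [0.1, 0.9]],  # hidden projection: 2 inputs -> 2 units
    [[1.0, -1.0]],              # output projection: 2 units -> 1 score
]
print(forward([1.0, 2.0], layers))
```

Each layer on its own is a trivial linear map; the claimed power comes from stacking many of them with nonlinearities in between.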

Once some demonstrations of Twitter’s scaling of this interesting technology are available, I can run the radiation poisoning test. Math is wonderful except when it is not able to do what users expect, hope, or really want to get.

Marketing is good. Perhaps Twitter will allow me to track down this vendor of stuffed ponies. (Yep, it looked real to me.) I know, I know. This stuff works like a champ in the novels penned by Alastair Reynolds. Someday.

Stephen E Arnold, July 30, 2014
