Physiognomy for the Modern Age

December 6, 2016

Years ago, when I first learned about the Victorian-age pseudosciences of physiognomy and phrenology, I remember thinking how glad I was that society had evolved past such nonsense. It appears I was mistaken; the basic concept was simply waiting for technology to catch up, we learn from NakedSecurity’s article, “’Faception’ Software Claims It Can Spot Terrorists, Pedophiles, Great Poker Players.” Based in Israel, Faception calls its technique “facial personality profiling.” Writer Lisa Vaas reports:

The Israeli startup says it can take one look at you and recognize facial traits undetectable to the human eye: traits that help to identify whether you’ve got the face of an expert poker player, a genius, an academic, a pedophile or a terrorist. The startup sees great potential in machine learning to detect the bad guys, claiming that it’s built 15 classifiers to evaluate certain traits with 80% accuracy. … Faception has reportedly signed a contract with a homeland security agency in the US to help identify terrorists.

The article emphasizes how problematic it can be to rely on AI systems to draw conclusions, citing University of Washington professor and “Master Algorithm” author Pedro Domingos:

As he told The Washington Post, a colleague of his had trained a computer system to tell the difference between dogs and wolves. It did great. It achieved nearly 100% accuracy. But as it turned out, the computer wasn’t sussing out barely perceptible canine distinctions. It was just looking for snow. All of the wolf photos featured snow in the background, whereas none of the dog pictures did. A system, in other words, might come to the right conclusions, for all the wrong reasons.

Indeed. Faception suggests that, for this reason, their software would be but one factor among many in any collection of evidence. And, perhaps it would—for most cases, most of the time. We join Vaas in her hope that government agencies will ultimately refuse to buy into this modern twist on Victorian-age pseudoscience.

Cynthia Murrell, December 6, 2016

 

Is Sketch Search the Next Big Thing?

December 5, 2016

There’s text search and image search, but soon, searching may be done via hand-drawn sketches. Digital Trends covers this emerging technology in the story, Forget keywords — this new system lets you search with rudimentary sketches. Two researchers at Queen Mary University of London’s (QMUL) School of Electronic Engineering and Computer Science taught a deep learning neural network to recognize queries in the form of sketches and then return matches in the form of products. Sketch search may hold an advantage over both text and image search. As one of the researchers explains:

“Both of those search modalities have problems,” he says. “Text-based search means that you have to try and describe the item you are looking for. This is especially difficult when you want to describe something at length, because retrieval becomes less accurate the more text you type. Photo-based search, on the other hand, lets you take a picture of an item and then find that particular product. It’s very direct, but it is also overly constrained, allowing you to find just one specific product instead of offering other similar items you may also be interested in.”

This search technology is positioning itself for online retail commerce. Perhaps it is also positioning itself for only those users who can actually sketch? Yes, why read? Drawing pictures works really well for everyone. We think this might present monetization opportunities for Pinterest.

Megan Feil, December 5, 2016

Word Embedding Captures Semantic Relationships

November 10, 2016

The article on O’Reilly titled Capturing Semantic Meanings Using Deep Learning explores word embedding in natural language processing. NLP systems typically encode word strings, but word embedding offers a more complex approach that emphasizes relationships and similarities between words by treating them as vectors. The article posits,

For example, let’s take the words woman, man, queen, and king. We can get their vector representations and use basic algebraic operations to find semantic similarities. Measuring similarity between vectors is possible using measures such as cosine similarity. So, when we subtract the vector of the word man from the vector of the word woman, then its cosine distance would be close to the distance between the word queen minus the word king (see Figure 1).

The article examines neural network models that reduce the expense of working with large datasets. Word2Vec and its two architectures, continuous bag-of-words (CBOW) and continuous skip-gram, are touted, and the article goes into great technical detail about the entire process. The final result is that the vectors capture the semantic relationships between the words in the example. Why does this approach to NLP matter? A few applications include predicting future business applications, sentiment analysis, and semantic image search.
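The woman/man/queen/king example quoted above can be sketched in a few lines of code. The vector values below are invented purely for illustration (real embeddings are learned from large corpora and have hundreds of dimensions); the point is only to show the cosine-similarity arithmetic the article describes:

```python
import math

# Hypothetical 3-dimensional word vectors, chosen so that the third
# component encodes the "gender" relationship in the article's example.
vectors = {
    "man":   [0.9, 0.1, 0.2],
    "woman": [0.9, 0.1, 0.8],
    "king":  [0.2, 0.9, 0.2],
    "queen": [0.2, 0.9, 0.8],
}

def subtract(a, b):
    """Element-wise vector difference."""
    return [x - y for x, y in zip(a, b)]

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# "woman - man" and "queen - king" should point in nearly the same
# direction, capturing the shared semantic relationship.
diff_gender_1 = subtract(vectors["woman"], vectors["man"])
diff_gender_2 = subtract(vectors["queen"], vectors["king"])
print(cosine_similarity(diff_gender_1, diff_gender_2))  # close to 1.0
```

With learned embeddings the similarity would not be exactly 1.0, but the difference vectors for analogous word pairs do cluster tightly, which is what makes the king/queen analogy work in practice.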

Chelsea Kerwin, November 10, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Reverse Image Searching Is Easier Than You Think

October 6, 2016

One of the newest forms of search uses actual images. All search engines, from Google to Bing to DuckDuckGo, have an image search option where keywords find an image to your specifications. Powering a search with an actual image seemed like a thing of the future, but it has actually been around for a while. The only problem was that reverse image searching sucked: it returned poor results.

Now the technology has improved, but very few people know how to use it. ZDNet explains this search feature in the article, “Reverse Image Searching Made Easy…”, which recommends Google and TinEye as the best places to begin. Google has the larger image database, but TinEye has the better photo experts:

TinEye’s results often show a variety of closely related images, because some versions have been edited or adapted. Sometimes you find your searched-for picture is a small part of a larger image, which is very useful: you can switch to searching for the whole thing. TinEye is also good at finding versions of images that haven’t had logos added, which is another step closer to the original.

TinEye does have its disadvantages, such as outdated results that can no longer be found on the Web. In some cases Google is the better choice, as one can search by usage rights. Browser extensions for image searching are another option. Lastly, if you are a Reddit user, Karma Decay is a useful image search tool, and users often post comments on an image’s origin.

The future of image searching is now.

Whitney Grace, October 6, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

 

Recent Developments in Deep Learning Architecture from AlexNet to ResNet

September 27, 2016

The article on GitHub titled The 9 Deep Learning Papers You Need To Know About (Understanding CNNs Part 3) is not an article about the global media giant but rather about the advancements in computer vision and convolutional neural networks (CNNs). The article frames its discussion around the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), what it terms the “annual Olympics of computer vision…where teams compete to see who has the best computer vision model for tasks such as classification, localization, detection and more.” The article explains that the 2012 winners and their network (AlexNet) revolutionized the field.

This was the first time a model performed so well on a historically difficult ImageNet dataset. Utilizing techniques that are still used today, such as data augmentation and dropout, this paper really illustrated the benefits of CNNs and backed them up with record breaking performance in the competition.

In 2013, CNNs flooded in, and ZF Net was the winner with an error rate of 11.2% (down from AlexNet’s 15.4%). Prior to AlexNet, though, the lowest error rate was 26.2%. The article also discusses other progress in general network architecture, including VGG Net, which emphasized the depth and simplicity CNNs need for hierarchical data representation, and GoogLeNet, which tossed the deep-and-simple rule out of the window and paved the way for future creative structuring with its Inception model.

Chelsea Kerwin, September 27, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
There is a Louisville, Kentucky Hidden Web/Dark Web meet up on September 27, 2016.
Information is at this link: https://www.meetup.com/Louisville-Hidden-Dark-Web-Meetup/events/233599645/

A Snapchat Is Worth a Thousand Twitter Characters or More

September 8, 2016

The article titled Snapchat Passes Twitter in Daily Usage on Bloomberg Technology provides some insights into the most popular modes of communication. As the title suggests, that mode is not words. Rather, 150 million people appear to prefer images to language, at least when it comes to engaging with others on social media. The article reveals,

Snapchat has made communicating more of a game by letting people send annotated selfies and short videos. It has allowed people to use its imaging software to swap faces in a photo, transform themselves into puppies, and barf rainbows… Snapchat encourages people to visit the app frequently with features such as the “Snapstreak,” which counts the number of consecutive days they’ve been communicating with their closest friends. Snapchat’s other content, such as news and Live Stories, disappear after 24 hours.

Other Silicon Valley players have taken note of this trend. Facebook recently purchased the company that built Masquerade, an app offering photo manipulation akin to Snapchat’s. Are words on their way out? The trend of using abbreviations (“abbrevs”) and slang to streamline messaging would logically culminate in replacing language with images, which can say volumes with a single click. But this could also result in a lot of confusion and miscommunication. Words allow for a precision of meaning that images often cannot supply. Hence the hybrid: a short note scrawled across an image.

Chelsea Kerwin, September 8, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Image Recognition: Think Tattoo Recognition

June 8, 2016

I know that some bad guys encourage their “assistants” to get facial tattoos. I am not personally into tattoos, but there are some who believe that one’s immune system is strengthened via the process. The prison tattoos I have seen (in pictures, mind you) did not remind me of the clean-room conditions in some semiconductor fabrication facilities. I am confident that ballpoint pen ink, improvised devices, and frequent hand washing are best practices.

I read “Tattoo Recognition Research Threatens Free Speech and Privacy.” The write up states:

government scientists are working with the FBI to develop tattoo recognition technology that police can use to learn as much as possible about people through their tattoos.

The write up points out that privacy is an issue.

My question:

If a person gets a facial tattoo, perhaps that individual wants others to notice it?

I have heard that some bad guys want their “assistants” to get facial tattoos. With a message about a specific group, it makes it difficult for an “assistant” to join another merry band of pranksters.

Stephen E Arnold, June 8, 2016

Turn to Unsplash for Uncommon Free Photos

June 7, 2016

Stock photos can be so, well, stock. However, Killer Startups points to a solution in, “Today’s Killer Startup: Unsplash.” Reviewer Emma McGowan already enjoyed the site for its beautiful free photos, with new ones posted every day. She especially loves that their pictures do not resemble your typical stock photos. The site’s latest updates make it even more useful. She writes:

“The new version has expanded to include lovely, searchable collections. The themes range from conceptual (‘Pure Color’) to very specific (‘Coffee Shops’). All of the photos are free to use on whatever project you want. I can personally guarantee that all of your work will look so much better than if you went with the usual crappy free options.

“Now if you want to scroll through beautiful images a la old-school Unsplash, you can totally still do that too. The main page is still populated with a seemingly never ending roll of photos, and there’s also a ‘new’ tab where you can check out the latest and greatest additions to the collection. However, I really can’t get enough of the Collections, both as a way to browse beautiful artwork and to more easily locate images for blog posts.”

So, if you have a need for free images, avoid the problems found in your average stock photography, which can range from simple insipidness to reinforcing stereotypes and misconceptions. Go for something different at Unsplash. Based in Montreal, the site launched in 2013. As of this writing, they happen to be hiring (and will consider remote workers).

 

Cynthia Murrell, June 7, 2016

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

No Search, Just Browse Images on FindA.Photo

March 2, 2016

The search engine FindA.Photo proves itself a useful resource for browsing images by any number of markers. The site offers a general search by terms, or the option of browsing images by color, collection (for example, “wild animals” or “reflections”), or source. The developer of the site, David Barker, described his goals for the service on Product Hunt,

“I wanted to make a search for all of the CC0 image sites that are available. I know there are already a few search sites out there, but I specifically wanted to create one that was: simple and fast (and I’m working on making it faster), powerful (you can add options to your search for things like predominant colors and image size with just text), and something that could have contributions from anyone (via GitHub pull requests).”

My first click on a swatch of royal blue delivered 651 images of oceans, skies, panoramas of oceans and skies, jellyfish ballooning underwater, seagulls soaring, etc. That may be my own fault for choosing such a clichéd color, but you get the idea. I had better (more varied) results through the collections search, which includes “action,” “long-exposure,” “technology,” “light rays,” and “landmarks,” the last of which I immediately clicked for a collage of photos of the Eiffel Tower, Louvre, Big Ben, and the Great Wall of China.

 

Chelsea Kerwin, March 2, 2016

Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

 

She Is a Meme Librarian

January 20, 2016

Memes are random bits of Internet culture that come and go faster than the highest DSL speed.  There are so many memes out there that it seems impossible to catalog the trends, much less each one.  The Independent tells us that Amanda Brennan has made a career out of studying and documenting memes, becoming the world’s first meme librarian: “Meet Tumblr’s ‘Meme Librarian,’ The Woman With The Best Job On The Internet.”

Brennan works at Tumblr and her official title is content and community manager, but she prefers the title “meme librarian.” She earned a Master’s in Information from Rutgers and during graduate school she documented memes for Know Your Meme, followed by Tumblr.

“[In graduate school] immediately I knew I did not want to work in a traditional library. Which is weird because people go to library school and they’re like ‘I want to change the world with books!’ And I was like ‘I want to change the world of information.’ And they started a social media specialization in the library school, and I was like, ‘This is it. This is the right time for me to be here.’”

Brennan is like many librarians: obsessed with taxonomy and the connections between pieces of information. The Internet gave her an outlet to explore and study to her heart’s content, but she was particularly drawn to memes, their origins, and how they traveled around the Internet. After she emailed Know Your Meme about an internship, her career as a meme librarian was sealed. She tracks meme trends and discovers how they evolve, not only on social media but as the rest of the Internet swallows them up.

I wonder if this will be a focus of library science in the future.

 

Whitney Grace, January 20, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
