Pinterest Offers the Impulse Shopper a Slice of Wonderfulness

February 20, 2017

How about point-and-click impulse buying? Sound good? Pinterest has merged looking at pictures with spending money on stuff.

Navigate to “Pinterest’s New ‘Lens’ IDs Objects and Helps You Buy Them.” I know that I spend hours looking at pictures on Pinterest. When I see wedding snapshots and notice a pair of shoes to die for, I can buy them with a click… almost. My hunch is that some children may find Pinterest buying as easy as buying via Amazon’s Alexa-powered Echo and Dot.

I learned:

[Pinterest] announced a new feature called Lens, which will enable people to snap a picture of an item inside the Pinterest app. The app will then suggest objects it thinks are related. Think Shazam but for objects, not music. Surfacing the products will make it easier for people to take action, according to Pinterest. That could include everything from making a purchase to cooking a meal.
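Pinterest has not said exactly how Lens works under the hood, but the usual recipe for “Shazam for objects” is to embed images with a pretrained network and rank a catalog by similarity. Here is a minimal sketch in Python, assuming PyTorch and torchvision are available; the image file names are hypothetical placeholders:

```python
# Toy visual lookup: embed images with a pretrained CNN, rank a small
# catalog by cosine similarity. Not Pinterest's actual Lens pipeline.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained ResNet with the classification head removed -> feature extractor
resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
resnet.fc = torch.nn.Identity()
resnet.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(path: str) -> torch.Tensor:
    with torch.no_grad():
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        return torch.nn.functional.normalize(resnet(x), dim=1)

# Hypothetical snapshot and product catalog
query = embed("snapshot_of_shoes.jpg")
catalog = {name: embed(name) for name in ["pump.jpg", "loafer.jpg", "boot.jpg"]}

# Vectors are unit-normalized, so a dot product is the cosine similarity
ranked = sorted(catalog, key=lambda n: float(query @ catalog[n].T), reverse=True)
print(ranked)
```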

One of Pinterest’s wizards (Evan Sharp) allegedly said:

“Sometimes you spot something out in the world that looks interesting, but when you try to search for it online later, words fail you.” The new technology, Sharp said, “is capable of seeing the world the way you do.”

Isn’t the consumerization of word-free search a life saver? Now I need a new gown to complement my size 11 triple E high heels. There’s a bourbon tasting in Harrod’s Creek next week, and I have to be a trendsetter before we go squirrel hunting.

Stephen E Arnold, February 20, 2017

Blippar: Your Phone May Recognize You, Not Just a Product

January 4, 2017

I read “Blippar AI Visual Search Engine Recognizes Faces in Real Time.” The main point of the write up is that you can point your phone at something, and the phone will recognize that thing or person. The flip side is that if your phone has a camera which can see you, your phone makes it easy for “someone” to recognize you. Isn’t that special? Blippar info is at this link.

I learned:

Blippar expanded its augmented reality visual search browser on Tuesday to recognize faces in real time with a simple smartphone camera and return information about that person.

The write up talks about how consumers will drool over this feature. My thought was, “Gee, wouldn’t that function be useful for surveillance purposes?”

The write up included this statement:

The feature allows users to point the camera phone at any real person or their image in a picture on television and the Blippar app returns information about the person from the company’s database filled with more than three billion facts. Real-time facial recognition is the latest tool, amidst expansion in artificial intelligence and deep-learning capabilities.

Yep. Just another “tool.”

Blippar includes a feature for humans who want to be recognized:

For public figures, their faces will be automatically discovered with information drawn from Blipparsphere, the company’s visual knowledge Graph that pulls information from publicly accessible sources, which was released earlier this year. Public figures can also set up their own AR Face Profile. The tool enables them to engage with their fans and to communicate information that is important to them by leveraging their most personal brand — their face. Users also can create fact profiles — Augmented Reality profiles on someone’s face, which users create so they can express who they are visually. Users can view each other’s profiles that have been uploaded and published and can add pictures or YouTube videos, as well as AR moods and much more to express themselves in the moment.

Why not convert existing images to tokens or hashes and then just match faces? Maybe not. Who would want to do this to sell toothpaste?
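For what it is worth, the “tokens or hashes” idea maps onto how open-source face matching is usually done: reduce each face to an embedding and compare by distance. A toy sketch using the face_recognition library follows; this is a generic illustration, not Blippar’s pipeline, and the file names are hypothetical:

```python
# Match a known face against faces found in another photo by comparing
# 128-dimensional face embeddings. File names are hypothetical.
import face_recognition

known = face_recognition.load_image_file("public_figure.jpg")
candidate = face_recognition.load_image_file("street_photo.jpg")

known_enc = face_recognition.face_encodings(known)[0]        # 128-d vector
candidate_encs = face_recognition.face_encodings(candidate)  # all faces found

# True where the embedding distance falls under the library's default threshold
matches = face_recognition.compare_faces(candidate_encs, known_enc)
print(matches)
```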

Stephen E Arnold, January 4, 2017

Google Buys Image Search: Invention Out

December 23, 2016

I read “Google Buys Shopping Search Startup to Make Images More Lucrative.” The Alphabet Google thing has been whacking away at image search for more than a decade. I have wondered why the GOOG’s whiz kids cannot advance beyond fiddling with the interface. Useful ways to slice and dice images are lacking at Google, but other vendors have decided to build homes on the same technical plateau. Good enough is the watchword for most information search and retrieval systems today.

The news that the Google is buying yet another outfit comes as no surprise. Undecidable Labs, founded by a denizen of Apple, wants to make it easy to see something and buy it.

Innovation became very hard for the Alphabet Google thing once it had cherry picked the low hanging fruit from research labs, failed Web search systems, and assorted disaffected employees from search, hardware, and content processing companies.

Now innovation comes from buying outfits that are nimble, think outside the Google box, and have something that is sort of real. According to the write up:

The acquisition suggests that Google, the largest unit of Alphabet Inc., is making further moves to tie its massive library of online image links with a revenue stream.

eBay is paddling into the same lagoon. The online flea market wants to make it easy for me to spot a product I absolutely must have, right now. Click it and be transported to an eBay page so I can buy that item. Google seems to be thinking along a similar line, just without the “old” Froogle.com system up and running. Google’s angle is to hook a search into a product sale. Think of Google as an intermediary or broker, not a digital store with warehouses. Yikes, overhead. No way at the GOOG. Not logical, right?

Earlier efforts around online commerce have delivered mixed results at Google. The company’s mobile payments have yet to see significant pickup. Its comparison shopping service, which facilitates online purchases within search results, has growing traction with advertisers, according to external estimates.

Perhaps one asset for the GOOG is that the founder is Cathy Edwards. I wonder if she wears blue jeans and a black turtleneck. What are the odds she uses an Apple iPhone?

Stephen E Arnold, December 23, 2016

Physiognomy for the Modern Age

December 6, 2016

Years ago, when I first learned about the Victorian-age pseudosciences of physiognomy and phrenology, I remember thinking how glad I was that society had evolved past such nonsense. It appears I was mistaken; the basic concept was just waiting for technology to evolve before popping back up, we learn from NakedSecurity’s article, “’Faception’ Software Claims It Can Spot Terrorists, Pedophiles, Great Poker Players.” Based in Israel, Faception calls its technique “facial personality profiling.” Writer Lisa Vaas reports:

The Israeli startup says it can take one look at you and recognize facial traits undetectable to the human eye: traits that help to identify whether you’ve got the face of an expert poker player, a genius, an academic, a pedophile or a terrorist. The startup sees great potential in machine learning to detect the bad guys, claiming that it’s built 15 classifiers to evaluate certain traits with 80% accuracy. … Faception has reportedly signed a contract with a homeland security agency in the US to help identify terrorists.

The article emphasizes how problematic it can be to rely on AI systems to draw conclusions, citing University of Washington professor and “The Master Algorithm” author Pedro Domingos:

As he told The Washington Post, a colleague of his had trained a computer system to tell the difference between dogs and wolves. It did great. It achieved nearly 100% accuracy. But as it turned out, the computer wasn’t sussing out barely perceptible canine distinctions. It was just looking for snow. All of the wolf photos featured snow in the background, whereas none of the dog pictures did. A system, in other words, might come to the right conclusions, for all the wrong reasons.

Indeed. Faception suggests that, for this reason, its software would be but one factor among many in any collection of evidence. And perhaps it would be, for most cases, most of the time. We join Vaas in her hope that government agencies will ultimately refuse to buy into this modern twist on Victorian-age pseudoscience.

Cynthia Murrell, December 6, 2016

Is Sketch Search the Next Big Thing?

December 5, 2016

There’s text search and image search, but soon, searching may be done via hand-drawn sketches. Digital Trends released a story, “Forget keywords — this new system lets you search with rudimentary sketches,” which covers the emerging technology. Two researchers at Queen Mary University of London’s (QMUL) School of Electronic Engineering and Computer Science taught a deep learning neural network to recognize queries in the form of sketches and then return matches in the form of products. Sketch search may hold an advantage over both text and image search. One of the researchers explains:

“Both of those search modalities have problems,” he says. “Text-based search means that you have to try and describe the item you are looking for. This is especially difficult when you want to describe something at length, because retrieval becomes less accurate the more text you type. Photo-based search, on the other hand, lets you take a picture of an item and then find that particular product. It’s very direct, but it is also overly constrained, allowing you to find just one specific product instead of offering other similar items you may also be interested in.”

This search technology is positioning itself for online retail commerce, and perhaps also for only those users with the ability to sketch? Yes, why read? Drawing pictures works really well for everyone. We think this might present monetization opportunities for Pinterest.
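The idea, as reported, is to train a network so a rough sketch and a photo of the matching product land near each other in one embedding space. A minimal sketch of one common recipe for that kind of cross-modal matching, a shared encoder trained with a triplet loss in PyTorch; the paper’s actual architecture differs, and the tensors below are random stand-ins:

```python
# Triplet training sketch: pull (sketch, matching photo) embeddings together,
# push (sketch, other photo) embeddings apart. Illustrative toy only.
import torch
import torch.nn as nn

encoder = nn.Sequential(            # stand-in for the real sketch/photo CNN
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(16, 64),
)
loss_fn = nn.TripletMarginLoss(margin=1.0)

sketch = torch.randn(8, 1, 64, 64)   # anchor: the user's drawing
match = torch.randn(8, 1, 64, 64)    # positive: photo of the right product
other = torch.randn(8, 1, 64, 64)    # negative: photo of some other item

loss = loss_fn(encoder(sketch), encoder(match), encoder(other))
loss.backward()
print(float(loss))
```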

Megan Feil, December 5, 2016

Word Embedding Captures Semantic Relationships

November 10, 2016

The article on O’Reilly titled “Capturing Semantic Meanings Using Deep Learning” explores word embedding in natural language processing. NLP systems typically encode words as strings, but word embedding offers a richer approach that captures relationships and similarities between words by treating them as vectors. The article posits:

For example, let’s take the words woman, man, queen, and king. We can get their vector representations and use basic algebraic operations to find semantic similarities. Measuring similarity between vectors is possible using measures such as cosine similarity. So, when we subtract the vector of the word man from the vector of the word woman, then its cosine distance would be close to the distance between the word queen minus the word king (see Figure 1).

The article investigates the various neural network models that sidestep the expense of working with large data sets. Word2Vec and its two training architectures, CBOW and continuous skip-gram, are touted as models, and the article goes into great technical detail about the entire process. The final result is that the vectors capture the semantic relationships between the words in the example. Why does this approach to NLP matter? A few applications include predicting future business applications, sentiment analysis, and semantic image search.
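To make the quoted woman/man/queen/king example concrete, here is a toy calculation with hand-built two-dimensional vectors; real systems learn hundreds of dimensions with Word2Vec rather than hard-coding them as we do here:

```python
# Toy illustration of the woman/man/queen/king analogy. The two dimensions
# loosely stand for "gender" and "royalty"; values are invented for clarity.
import numpy as np

vec = {
    "man":   np.array([ 1.0, 0.0]),
    "woman": np.array([-1.0, 0.0]),
    "king":  np.array([ 1.0, 1.0]),
    "queen": np.array([-1.0, 1.0]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# The difference vectors point the same way: cosine similarity comes out 1.0
print(cosine(vec["woman"] - vec["man"], vec["queen"] - vec["king"]))
```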

Chelsea Kerwin, November 10, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Reverse Image Searching Is Easier Than You Think

October 6, 2016

One of the newest forms of search uses actual images. All search engines from Google to Bing to DuckDuckGo have an image search option, where keywords help you find an image to your specifications. Using an actual image to power a search seemed like a thing of the future, but it has actually been around for a while. The only problem was that reverse image searching sucked, returning poor results.

Now the technology has improved, but very few people actually know how to use it. ZDNet describes how to use this feature in the article “Reverse Image Searching Made Easy…”. It explains that Google and TinEye are the best places to begin a reverse image search. Google has the larger image database, but TinEye has the better photo experts. TinEye is better because:

TinEye’s results often show a variety of closely related images, because some versions have been edited or adapted. Sometimes you find your searched-for picture is a small part of a larger image, which is very useful: you can switch to searching for the whole thing. TinEye is also good at finding versions of images that haven’t had logos added, which is another step closer to the original.

TinEye does have its disadvantages, such as outdated results and links to images that can no longer be found on the Web. In some cases Google is the better choice because one can search by usage rights. Browser extensions for image searching are another option. Lastly, if you are a Reddit user, Karma Decay is a useful image search tool, and users often post comments on an image’s origin.
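TinEye has not disclosed its index, but near-duplicate matching of this kind is commonly explained in terms of perceptual hashing: visually similar images produce hashes that are a small Hamming distance apart. A toy sketch with the Python imagehash library; the file names are hypothetical:

```python
# Perceptual hashing demo: a rescaled copy of a photo hashes close to the
# original, an unrelated photo does not. File names are hypothetical.
from PIL import Image
import imagehash

original = imagehash.phash(Image.open("photo.jpg"))
resized  = imagehash.phash(Image.open("photo_small.jpg"))
other    = imagehash.phash(Image.open("unrelated.jpg"))

# Subtracting two hashes yields their Hamming distance; small = likely match
print(original - resized)   # typically near 0 for a rescaled copy
print(original - other)     # typically large for an unrelated image
```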

The future of image searching is now.

Whitney Grace, October 6, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Recent Developments in Deep Learning Architecture from AlexNet to ResNet

September 27, 2016

The article on GitHub titled “The 9 Deep Learning Papers You Need To Know About (Understanding CNNs Part 3)” is not about the global media giant CNN but rather about advancements in computer vision and convolutional neural networks (CNNs). The article frames its discussion around the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), what it terms the “annual Olympics of computer vision…where teams compete to see who has the best computer vision model for tasks such as classification, localization, detection and more.” The article explains that the 2012 winners and their network (AlexNet) revolutionized the field.

This was the first time a model performed so well on a historically difficult ImageNet dataset. Utilizing techniques that are still used today, such as data augmentation and dropout, this paper really illustrated the benefits of CNNs and backed them up with record breaking performance in the competition.

In 2013, CNNs flooded in, and ZF Net was the winner with an error rate of 11.2% (down from AlexNet’s 15.4%). Prior to AlexNet, though, the lowest error rate was 26.2%. The article also discusses other progress in general network architecture, including VGG Net, which emphasized the depth and simplicity CNNs need for hierarchical data representation, and GoogLeNet, which tossed the deep-and-simple rule out the window and paved the way for future creative structuring with its Inception module.
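As a concrete reminder of the two AlexNet-era techniques the quote credits, here is a toy PyTorch setup pairing training-time data augmentation with dropout; this is an illustrative miniature, not the 2012 architecture:

```python
# Miniature CNN showing data augmentation and dropout, the two techniques
# the AlexNet paper popularized. Toy dimensions, not the real network.
import torch
import torch.nn as nn
import torchvision.transforms as T

# Data augmentation applied to training images (random flips and crops)
augment = T.Compose([
    T.RandomHorizontalFlip(),
    T.RandomCrop(32, padding=4),
    T.ToTensor(),
])

model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Dropout(p=0.5),          # randomly zeroes activations during training
    nn.Linear(64 * 8 * 8, 10),  # 32x32 input pooled twice -> 8x8 feature maps
)

out = model(torch.randn(1, 3, 32, 32))  # one 32x32 RGB image -> 10 class scores
print(out.shape)
```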

Chelsea Kerwin, September 27, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

A Snapchat Is Worth a Thousand Twitter Characters or More

September 8, 2016

The article titled “Snapchat Passes Twitter in Daily Usage” on Bloomberg Technology provides some insights into the most popular modes of communication. As the title suggests, that mode does not involve words. Rather, 150 million people appear to prefer images to language, at least when it comes to engaging with others on social media. The article reveals:

Snapchat has made communicating more of a game by letting people send annotated selfies and short videos. It has allowed people to use its imaging software to swap faces in a photo, transform themselves into puppies, and barf rainbows… Snapchat encourages people to visit the app frequently with features such as the “Snapstreak,” which counts the number of consecutive days they’ve been communicating with their closest friends. Snapchat’s other content, such as news and Live Stories, disappear after 24 hours.

Other Silicon Valley players have taken note of this trend. Facebook recently purchased the company that built Masquerade, an app offering photo-manipulation akin to Snapchat’s. Are words on their way out? The trend of using abbreviations (“abbrevs”) and slang to streamline messaging would logically result in a replacement of language with images, which can say volumes with a single click. But this could also result in a lot of confusion and miscommunication. Words allow for a precision of meaning that images often can’t supply. Hence the crossbreed of a short note scrawled across an image.

Chelsea Kerwin, September 8, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph

Image Recognition: Think Tattoo Recognition

June 8, 2016

I know that some bad guys encourage their “assistants” to get facial tattoos. I am not personally into tattoos, but there are some who believe that one’s immune system is strengthened via the process. The prison tattoos I have seen, in pictures mind you, did not remind me of the clean room conditions in some semiconductor fabrication facilities. I am confident that ball point pen ink, improvised devices, and frequent hand washing are best practices.

I read “Tattoo Recognition Research Threatens Free Speech and Privacy.” The write up states:

government scientists are working with the FBI to develop tattoo recognition technology that police can use to learn as much as possible about people through their tattoos.

The write up points out that privacy is an issue.

My question:

If a person gets a facial tattoo, perhaps that individual wants others to notice it?

I have heard that some bad guys want their “assistants” to get facial tattoos. A tattoo advertising a specific group makes it difficult for an “assistant” to join another merry band of pranksters.

Stephen E Arnold, June 8, 2016
