Design Tool Picular Taps into Google Image Color Data

October 9, 2018

We learn from a write-up at Fast Company that “Google Image Search Is Now a Design Tool.” More specifically, the new design tool Picular taps into Google Image Search for its data. This is an intriguing approach. Associate editor Katharine Schwab writes:

“Picular is a new color search tool that lets you enter any search term and presents you with a slew of options, basing all of its color choices on what pops up first in Google image search. It’s a color-picker, courtesy of internet hive mind. For instance, if you type the word ‘desert’ into Picular’s search bar, the tool scrapes the top 20 image results from Google and finds the most dominant color in each image. It presents these results in a series of tiles: A sea of sandy browns and oranges, with a few blues (presumably from the sky) thrown in. Each tile has the color’s RGB code that instantly copies to your clipboard when you click on the tile, making it easy to instantly try out the colors in your work. Picular is a quick and handy way to get color ideas for a design project, especially because you can type in more emotional, evocative words and see what Google instantly associates with each idea.”

And where does Google get these associations? From its algorithms’ studies of human nature, of course. It may at first seem odd to consult an AI to better know the colors of human emotions and ideas, but some color associations we think of as natural actually vary from culture to culture, and Google extracts its data from around the world. Such a tool could certainly help designers and, especially, advertisers better connect with their intended audiences through color. Picular was created by Future Memories, a digital studio out of Sweden that was founded in 2014.
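The dominant-color step the article describes can be sketched in a few lines. This is a toy illustration under assumptions of my own (a list of RGB pixels instead of fetched images, and coarse color quantization to pick the “most dominant” bucket), not Picular’s actual code:

```python
from collections import Counter

def dominant_color(pixels, bucket=32):
    """Quantize each RGB pixel into coarse buckets and return the
    center of the most common bucket as the dominant color."""
    counts = Counter(
        (r // bucket, g // bucket, b // bucket) for r, g, b in pixels
    )
    (qr, qg, qb), _ = counts.most_common(1)[0]
    # Report the center of the winning bucket as a representative color.
    return (qr * bucket + bucket // 2,
            qg * bucket + bucket // 2,
            qb * bucket + bucket // 2)

# Mostly sandy-brown pixels with a few sky blues, like a "desert" image.
pixels = [(200, 160, 90)] * 15 + [(90, 140, 220)] * 5
r, g, b = dominant_color(pixels)
print((r, g, b))               # (208, 176, 80)
print(f"#{r:02x}{g:02x}{b:02x}")  # the hex code a tool like Picular might copy
```

A production tool would more likely run k-means or a palette quantizer over the decoded image, but the bucket-counting idea is the same.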

Cynthia Murrell, October 9, 2018

Images Are Hot

October 8, 2018

Snapchat is reinventing itself, or at least tweaking its high school science club management methods. That creates an opportunity for other picture sharing services.

Consider Pinterest.

We know that watching YouTube videos and fiddling with a mobile phone are the future of education. Enter Pinterest. This highly visual platform detailed some of its plans for advancement in a recent Social Media Today story, “Pinterest Adds Pinch to Zoom, Updated Visual Search.”

According to the story:

“We’ve made some improvements to the tool based on feedback we’ve heard from Pinners. We updated the button so it’s clearer, especially for people who are new to Pinterest, and moved it so it’s a little easier to reach. And it’s working too – in early tests of the improved button, nearly 70% more people used the visual search tool.”

While Pinterest has a chance to become the leader of visual search, the information highway is not pothole free. Snapchat, Instagram, and other services beckon.

Will Snapchat convert click-to-buy into revenue gold? And there is the often ignored image system at Amazon.

A picture may be worth a thousand words, but the trick is to shift the equation and make a picture worth $1,000.

Patrick Roland, October 8, 2018

Chinese Art Online

September 13, 2018

With a little digging, anyone can find great stuff for free on the Internet. This stuff includes games, books, audiobooks, language lessons, software tutorials, coding expertise, and more. The problem, however, is that if you do not know where to look, this information is nearly impossible to find. Open Culture is one of the Internet’s bastions for great free stuff, and they announced a new Asian acquisition: “Free: Download 70,000+ High-Resolution Images Of Chinese Art From Taipei’s National Palace Museum.”

While China is now open to the West, many of its cultural aspects remain a mystery, unavailable to curious and interested people. Two of the greatest dynasties in China’s history are the Ming and Qing dynasties, spanning 1368 to 1912. As one can imagine, Chinese artists created amazing pieces, but they have not been available to the public until now. The Taipei National Palace Museum has scanned over 70,000 items in high-resolution photos for free browsing and download. Not only was this a big expense and a huge amount of work; it also brings new cultural history and content to the Internet.

There is currently an English version of the image archive, but the Chinese version, of course, has the richer and easier-to-navigate content (if you read Chinese).

“Still, the National Palace Museum has been improving its English portal, which allows searches not just by category of object but by dynasty, a list that now reaches far beyond the Ming and Qing, all the way back to the Shang Dynasty of 1600 BC to 1046 BC. But even as the English version catches up to the Chinese one — as of this writing, it contains more than 4700 items — it will surely take some time before National Palace Museum Open Data catches up with the complete holdings of the National Palace Museum, with its permanent collection of about 700,000 Chinese imperial artifacts and artworks spanning eight millennia. As with Chinese history itself, a formidable subject of study if ever there was one, it has to be taken one piece at a time.”

This is an amazing contribution to humanity’s rich cultural history, but the biggest downside is that unless people visit the museum’s web site, no one is going to know about this online museum. One of the biggest problems with online databases and archives, such as those curated by museums and historical societies, is that their information is not well indexed by search engines like Google and DuckDuckGo.

Whitney Grace, September 13, 2018

Semantic Video Search Engine

September 2, 2018

I saw a link to a “Semantic Video Search Engine” with the logo of MediaMill attached. Curious, I did a bit of exploring and noted a video at this link. I learned that MediaMill is the name of the multimedia search engine. The system “watches” or “processes” a video and then assigns an index term or category to the subject of the video scene; for example, a scene with a boat is tagged “boat.”

The function is to identify specific video fragments. The system provides automatic content detection. The goal is to make huge amounts of video data accessible. The video I watched was dated 2009. I located the MediaMill Web site and learned:

MediaMill has its roots in the ICES-KIS Multimedia Information Analysis project (in conjunction with TNO) and the Innovative Research Program for Image Processing (IOP). It blossomed in the BSIK program MultimediaN, the EU FP-6 program VIDI-Video, the Dutch/Flemish IM-Pact BeeldCanon project, and the Dutch VENISEARCHER project. The MediaMill team is currently funded by the Dutch VIDI STORY project, the Dutch FES COMMIT program, and the US IARPA SESAME project.

The project’s news ended in 2015. Bing and Google searches turn up a significant amount of academic-oriented information. TREC data, technical papers, and links to the MediaMill Web site abound.
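The fragment-retrieval idea MediaMill describes, tag each scene and then look fragments up by concept, can be sketched as a simple inverted index. This is a toy illustration, not MediaMill’s implementation; in practice the labels would come from automatic concept detectors run over the frames:

```python
from collections import defaultdict

class FragmentIndex:
    """Toy concept index: each video fragment is stored under its
    detector labels, and a query returns the fragments so tagged."""

    def __init__(self):
        self._index = defaultdict(list)

    def add(self, video, start, end, labels):
        # Record the fragment (video, start seconds, end seconds)
        # under every concept label a detector assigned to it.
        for label in labels:
            self._index[label].append((video, start, end))

    def search(self, concept):
        # Return all fragments tagged with the concept, or an empty list.
        return self._index.get(concept, [])

idx = FragmentIndex()
idx.add("harbor.mp4", 0.0, 4.5, ["boat", "water"])
idx.add("harbor.mp4", 4.5, 9.0, ["pier"])
print(idx.search("boat"))  # [('harbor.mp4', 0.0, 4.5)]
```

The hard (and expensive) part is producing the labels, which is exactly the cost problem discussed below; the lookup itself is trivial.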

The question becomes:

Why has video search remained a non-starter?

Since we started our DarkCyber video series, available on YouTube and Vimeo, we have had an opportunity to monitor how these two services index videos. YouTube, for example, makes the video available in the YouTube index in about a day, sometimes more. Vimeo does not index DarkCyber on a regular schedule. We provide an explicit link to the Vimeo video in our Beyond Search announcement of each week’s video.

It is possible to get a listing of DarkCyber videos on the not-well-known Google Video search service. You can find this index at www.googlevideo.com. Run the query “arnold darkcyber” and you will see a list of DarkCyber videos. Note that these are not in chronological order. In fact, running the “arnold darkcyber” query at different times generates results lists with different items, again in a jumbled, non-chronological order. Why? Google search does not handle time in its public-facing services. For high-accuracy, time-based queries, you will have to use the commercial Google technology. Check out Recorded Future for some additional details.

Searching for video is a difficult task. YouTube search is quirky. For example, search for “hawaii volcano live shipley” and one does not get a link to the current live stream. YouTube provides links to old videos. To find the live stream, one has to click on the picture of Mr. Shipley and then select the live stream. Vimeo has its oddities as well. When I post a DarkCyber video to Vimeo, I cannot search for it. The new video just sort of shows up on my Vimeo dashboard, but I cannot locate the most recent video with a query. So much for real time.

Exalead tried its hand at video search, enlisting a partner for the effort. The test was interesting, but I heard chatter that the computational demand (think expense) made the project less than attractive.

My hunch is that video search is lousy because of the costs associated with processing video. Even basic rendering is a slog. Imagine the expense of grinding through a day’s worth of YouTube or Vimeo output.

To sum up, nifty video search ideas abound. Academics have a treasure trove of opportunity. But despite the talk about the cloud and the magic of modern technology, video search remains difficult and mostly unsatisfying.

Maybe that’s why social media sites rely on those posting the video to tell friends where the content resides? Searching for a snippet of video is almost as difficult as wrestling with a Modern Talking Pictures catalog.

Stephen E Arnold, September 2, 2018

 

Finding Music Samples

August 27, 2018

Short honk: Looking for free music samples? A collection of samples is available on “Free Sound Samples.” Queries via search engines for samples produce some wonky results. Worth noting.

Stephen E Arnold, August 27, 2018

DarkCyber for August 21, 2018 Now Available

August 21, 2018

The DarkCyber video news program for August 21, 2018, is now available. You can view the nine-minute show at www.arnoldit.com/wordpress or on Vimeo at this link.

This week’s program reports on methods for hacking crypto currency… hijacking mobile phones via SIM swapping… TSMC hacked with an Eternal Blue variant… and information about WikiLeaks leaked.

The first story runs down more than nine ways to commit cybercrime in order to steal digital currency. A student assembled these data and published them on a personal page on the Medium information service. Prior to this step-by-step explanation, ways to exploit blockchain for the purpose of committing a possible crime were difficult to find. The DarkCyber video includes a link to the online version of this information.

The second story reviews the mobile phone hacking method called SIM swapping. This exploit makes it possible for a bad actor to take control of a mobile phone and then transfer digital currency from the phone owner’s account to the bad actor’s account. More “how to” explanations are finding their way into the Surface Web, a trend which has been gaining momentum in the last six months.

The third story reviews how a variant of the Eternal Blue exploit compromised the Taiwan Semiconductor Manufacturing Company. Three of the company’s production facilities were knocked offline. Eternal Blue is the software which enables a number of ransomware attacks. The code was allegedly developed by a government agency. The DarkCyber video provides links to repositories of some software developed by the US government. Stephen E Arnold, author of Dark Web Notebook, said: “The easier and easier access to specific methods for committing cybercrime makes it easy to attack individuals and organizations. On one hand, greater transparency may help some people take steps to protect their data. On the other hand, the actionable information may encourage individuals to try their hand at crime in order to obtain easy money. Once how-to information is available to hackers, the number of attacks, exploits, and crimes is likely to rise.”

The final story reports that WikiLeaks itself has had some of its messages leaked. These messages provide insight into the topics which capture WikiLeaks’ interest and reveal information about some of the sources of support the organization enjoys. The DarkCyber video provides a link to this collection of WikiLeaks messages.

Stephen E Arnold will be lecturing in Washington, DC, the week of September 6, 2018. If you want to meet or speak with him, please contact him via email: benkent2020 at yahoo dot com.

Kenny Toth, August 21, 2018

Facial Recognition: Not for LE and Intel Professionals? What? Hello, Reality Calling

July 30, 2018

I read “Facial Recognition Gives Police a Powerful New Tracking Tool. It’s Also Raising Alarms.” The write up is one of many pointing out that using technology to spot persons of interest is not a good idea. The Telegraph has a story which suggests that Amazon is having some doubts about its Rekognition facial recognition system. What? Hello, reality calling.

The “Raising Alarms” story makes this statement, obtained from an interview with an outfit called Kairos. I circled these statements:

“Time is winding down but it’s not too late for someone to take a stand and keep this from happening,” said Brian Brackeen, the CEO of the facial recognition firm Kairos, who wants tech firms to join him in keeping the technology out of law enforcement’s hands. Brackeen, who is black, said he has long been troubled by facial recognition algorithms’ struggle to distinguish faces of people with dark skin, and the implications of its use by the government and police. If they do get it, he recently wrote, “there’s simply no way that face recognition software will be not used to harm citizens.”

The write up points out:

Many law enforcement agencies — including the FBI, the Pinellas County Sheriff’s Office in Florida, the Ohio Bureau of Criminal Investigation and several departments in San Diego — have been using those databases for years, typically in static situations — comparing a photo or video still to a database of mug shots or licenses. Maryland’s system was used to identify the suspect who allegedly massacred journalists at the Capital Gazette newspaper last month in Annapolis and to monitor protesters following the 2015 death of Freddie Gray in Baltimore.

Yep, even the Hollywood gangster films have featured a victim flipping through a collection of mug shots. The idea is pretty simple. Bad actors who end up in a collection of mug shots are often involved in other crimes. Looking at images is one way for LE and intel professionals to figure out if there is a clue to be followed.

Now what is the difference when software looks for the matches? Software can locate similar fingerprints. Software can locate similar images, maybe even the image of the person who committed a crime. The idea of a 50-year-old man robbed at an ATM flipping through images of bad actors in a Chicago police station is, from my point of view, a bridge too far. The 50-year-old will either lose concentration or just point at some image and say, “Yeah, yeah, that looks like the guy.”

Let’s go with software, because there are a lot of bad actors: there are some folks on Facebook who are bad actors, and there are bad actors wandering around in a crowd. Don’t believe me? Go to Rio, stay in a fancy hotel, and wander around on a Saturday night. How long before you are robbed? Maybe never, but maybe within 15 minutes. Give this test a try.

Software, like humans, makes errors. However, it seems to make sense to use available technology to take actions required by government rules and regulations. That means that big companies are going to chase government contracts. That means that stopping companies from providing facial recognition technology is pretty much impossible.

I would suggest that the barn is on fire, the horses have escaped, and Costco built a new superstore on the land. Well, maybe I will suggest that this has happened.

Facial recognition systems are tools which have been and will continue to be used. Today’s systems can be fooled. In my DarkCyber video a couple of months ago, I showed a pair of glasses which can baffle most facial recognition systems.

The flaws in the algorithms will be ironed out slowly. The challenges of crowds, lousy lighting, disguises, hats, shadows, and the other impediments to higher accuracy will be reduced over time.

But let’s get down to basics: facial recognition systems are here to stay, in the US, the UK, and most countries on the planet. Go to a small city in Ecuador. Guess what? There is a Chinese-developed facial recognition system monitoring certain areas of the city. Why? Flipping through a book with hundreds of thousands of images in an attempt to identify a suspect doesn’t work too well. Toss in Snapchat and YouTube. Software is the path forward. Period.

Facial recognition systems, despite their accuracy rates, provide a useful tool. Here’s the shocker: these systems have been around for decades. Remember the Rand Tablet? That was in the 1960s. Progress is being made.

Outrage is arriving a little late.

Stephen E Arnold, July 30, 2018

Amazon Rekognition: The View from Harrods Creek

July 29, 2018

I read the stories about Amazon’s facial recognition system. A representative example of this genre is “Amazon’s Facial Recognition Tool Misidentified 28 Members of Congress in ACLU Test.” The write up explains the sample. The confidence level was set at 80 percent. Amazon recommends 95 percent.

The result? Twenty-eight individuals were misidentified.
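The threshold issue is mechanical: a candidate “matches” only if its similarity score clears the confidence cutoff, so lowering the cutoff from 95 to 80 percent admits weaker, and more often wrong, matches. A minimal sketch with hypothetical scores, not a call to Rekognition’s actual API:

```python
def filter_matches(candidates, threshold):
    """Keep only face matches at or above the confidence threshold."""
    return [c for c in candidates if c["confidence"] >= threshold]

# Hypothetical similarity scores for three candidate matches.
candidates = [
    {"name": "A", "confidence": 0.99},
    {"name": "B", "confidence": 0.86},
    {"name": "C", "confidence": 0.81},
]
print(len(filter_matches(candidates, 0.80)))  # 3: all pass at the ACLU's cutoff
print(len(filter_matches(candidates, 0.95)))  # 1: only the strongest passes at 95%
```

The choice of cutoff, not the matcher itself, drives how many false positives a test like the ACLU’s will surface.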

At a breakfast meeting this morning (Sunday, July 29, 2018) one uninformed Kentucky resident asked:

What if these individuals are criminals?

Another person responded:

Just 28?

I jotted down the remarks on my mobile phone. Ah, the Bluegrass state.

Stephen E Arnold, July 29, 2018

The Western Electric Model: Has It Resurfaced?

July 12, 2018

I read “Magic Leap Signs AT&T as sole U.S. Wireless Vendor and Gets Investment.” The story asserts that AT&T has become the exclusive distributor of the Magic Leap virtual reality device. The story makes no reference to Western Electric. Who remembers Western Electric, how its equipment deals worked, or how it meshed with Bell/AT&T? Perhaps the Western Electric model has surfaced again?

Stephen E Arnold, July 12, 2018

Google: Office Pix

June 27, 2018

A brief write-up at Android Police supplies a bit of PR for Google: “Tip: Google Photos Can Find All the Photos You’ve Taken at ‘Work’.” We like that “work” angle. Writer Rita El Khoury observes that one of Google’s finest products, as she sees it, has added several features since it came out, including a function that auto-groups users’ photos. It seems she and her colleagues stumbled upon one apparently unheralded feature. She writes:

“But did you know that you can search Photos for ‘work’ and get all the images you’ve snapped at work? I didn’t. We’re not sure how new or old the functionality is, but we just ran across it and it seems very helpful. If your work requires you keep tab of documents or items, or if you make creative products that you catalog, or if you snap pics at work for any other miscellaneous reason, you may want an easy way to filter those photos. You can quickly do that by typing ‘work’ in the Google Photos search field. Photos is probably using your Google location setting for home and work to quickly sift and find pictures taken at work. However, doing a search for ‘Home’ doesn’t yield results of pictures taken at home — instead it shows me all photos of houses and homes that I’ve taken.”

Perhaps Google recognizes there could be more security issues behind automatically grouping photos taken at “home” than there would be for those taken at “work.” That’s a welcome bit of common sense, but it still seems problematic to assign a “work” grouping unbidden. I suppose I’m just old-fashioned that way.
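If Photos really is using the saved home and work locations, the grouping could be as simple as tagging each photo whose GPS fix falls within a small radius of a saved place. A sketch under that assumption; the coordinates, place names, and radius are made up, and Google’s actual mechanism is not documented in the article:

```python
import math

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def tag_place(photo_coord, places, radius_km=0.2):
    """Label a photo with the first saved place within radius_km, else None."""
    for name, coord in places.items():
        if haversine_km(photo_coord, coord) <= radius_km:
            return name
    return None

places = {"work": (40.7128, -74.0060), "home": (40.7306, -73.9352)}
print(tag_place((40.7130, -74.0055), places))  # work
print(tag_place((41.0000, -74.0060), places))  # None (nowhere near either place)
```

This also illustrates the asymmetry the article notices: a product could apply such a rule for “work” while deliberately declining to surface the equivalent “home” grouping.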

Cynthia Murrell, June 27, 2018
