Facial Recognition: Not As Effective As Social Recognition

January 8, 2021

Facial recognition is a sub-function of image analysis. For some time, I have bristled at calls for terminating research into this important application of algorithms intended to identify, classify, and make sense of patterns. Many facial recognition systems return false positives for reasons ranging from lousy illumination to people wearing glasses with flashing LED lights.

I noted “The FBI Asks for Help Identifying Trump’s Terrorists. Internet (and Local News) Doesn’t Disappoint.” The article makes it clear that facial recognition by smart software may not be as effective as social recognition. The write up says:

There is also Elijah Schaffer, a right-wing blogger on Glenn Beck’s BlazeTV, who posted incriminating evidence of himself in Nancy Pelosi’s office and then took it down when he realized that he posted himself breaking and entering into Speaker of the House Nancy Pelosi’s office. But screenshots are a thing.

What’s clear is that technology cannot match what individuals posting to their social media accounts can do, or what a person who can say “Yeah, I know that person” delivers.

Technology for image analysis is advancing, but I will be the first to admit that 75 to 90 percent accuracy falls short of a human-centric system which can provide:

  • Name
  • Address
  • Background details
  • Telephone and other information.

Two observations: First, social recognition is at this time better, faster, and cheaper than Fancy Dan image recognition systems. Second, image recognition is more than a way to identify a person robbing a convenience store. Medical, military, and safety applications are in need of advanced image processing systems. Let the research and testing continue without delay.

Stephen E Arnold, January 8, 2021

PimEyes Brings Facial Recognition to the Masses

December 18, 2020

If a search engine based on facial recognition is controversial when in the hands of law enforcement, it is downright scary when made available to the general public for free. However, it comes as no surprise to those of us who follow such things that PetaPixel reveals, “This Creepy Face Search Engine Scours the Web for Photos of Anyone.” Officially marketed as a way for users to protect their own privacy, PimEyes uses facial recognition technology to hunt down photos of anyone across the Web. The basic, one-time search is free, but for an extra $15 one can receive up to 25 alerts a month as the service searches perpetually. Reporter Michael Zhang writes:

“After you provide one or more photos of a person (in which their face is clearly visible), PimEyes compares that person to faces found on millions of public websites — things like news articles, blogs, social media, and more. Within a few seconds, it provides results showing other photos found that match the person and links to where those portraits were found. … Google’s popular reverse image search can find photos similar in appearance to images you provide, but PimEyes specifically uses facial recognition and can accept multiple reference photos to find images of specific individuals.”
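PimEyes has not published its internals, but face search engines of this type typically reduce each face to an embedding vector and rank candidates by similarity. A minimal sketch of that matching step, with multiple reference photos averaged into one probe (the function, the averaging, and the 0.7 threshold are illustrative assumptions, not PimEyes’ actual method):

```python
import numpy as np

def match_faces(reference_embeddings, indexed_embeddings, threshold=0.7):
    """Return (index, similarity) pairs for indexed faces whose cosine
    similarity to the averaged reference embedding exceeds the threshold."""
    # Average the reference embeddings into a single probe vector.
    probe = np.mean(reference_embeddings, axis=0)
    probe = probe / np.linalg.norm(probe)
    hits = []
    for i, emb in enumerate(indexed_embeddings):
        sim = float(np.dot(probe, emb / np.linalg.norm(emb)))
        if sim >= threshold:
            hits.append((i, sim))
    # Best matches first.
    return sorted(hits, key=lambda t: -t[1])
```

In a real system the indexed embeddings would come from crawling public pages; here they are just vectors, and each hit would carry the source URL.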

The brief write-up cites this OneZero article. It also shares an example search featuring the lovely, and often photographed, Meghan Markle. Based in Poland, PimEyes was created in 2017 and commercialized in 2019.

Cynthia Murrell, December 18, 2020

Fujitsu Simplifies, Reduces Costs of Preventing Facial Authentication Fraud

September 25, 2020

Fujitsu says it has developed a cost-effective way to thwart attempts to fool facial recognition systems, we learn from IT-Online’s write-up, “Fujitsu Overcomes Facial Authentication Fraud.” The same factor that makes facial authentication systems more convenient than other verification methods, like images of fingerprints or palm veins, also makes them more vulnerable to fraud—photos of faces are easy to capture and reproduce. We’re told:

“Fujitsu Laboratories has developed a facial recognition technology that uses conventional cameras to successfully identify efforts to spoof authentication systems. This includes impersonation attempts in which a person presents a printed photograph or an image from the internet to a camera. Conventional technologies rely on expensive, dedicated devices like near-infrared cameras to identify telltale signs of forgery, or the user is required to move their face from side to side, which remains difficult to duplicate with a forgery. This leads to increased costs, however, and the need for additional user interaction slows the authentication process. To tackle these challenges, Fujitsu has developed a forgery feature extraction technology that detects the subtle differences between an authentic image and a forgery, as well as a forgery judgment technology that accounts for variations in appearance due to the capture environment. … Fujitsu believes that, by using these technologies, it becomes possible to identify counterfeits using only the information of face images taken by a general-purpose camera and to realize relatively convenient and inexpensive spoofing detection.”
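Fujitsu has not released its algorithm, but the two-stage idea in the quote (extract forgery-sensitive features from an ordinary camera image, then judge with thresholds tuned to the capture environment) can be illustrated with a toy sketch. The Laplacian texture measure and the floor values below are invented for illustration, not Fujitsu’s method:

```python
import numpy as np

def spoof_features(gray):
    """Extract crude 'forgery' features from a grayscale face image
    (floats in [0, 1]): high-frequency texture energy and contrast.
    Printed photos and screen replays tend to flatten both."""
    # Laplacian response as a cheap sharpness/texture measure.
    lap = (-4 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return np.array([lap.var(), gray.std()])

def is_live(gray, texture_floor=1e-4, contrast_floor=0.05):
    """Judge 'live' only if both features clear floors that would be
    tuned per capture environment (lighting, camera, distance)."""
    texture, contrast = spoof_features(gray)
    return texture > texture_floor and contrast > contrast_floor
```

The appeal of this family of approaches is exactly what the quote claims: no near-infrared hardware and no head-turning challenge, just a second pass over the same general-purpose camera frame.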

We’re told the company tested the system in a real-world office/telecommuting setting and confirmed it works as desired. Fujitsu hopes the technology will prove popular as remote work continues and, possibly, grows. The venerable global information and communication tech firm serves many prominent companies in several industries. Based in Tokyo, Fujitsu has been operating since 1935.

Cynthia Murrell, September 25, 2020

Defeating Facial Recognition: Chasing a Ghost

August 12, 2020

The article hedges. Check the title: “This Tool could Protect Your Photos from Facial Recognition.” Notice the “could”. The main idea is that people do not want their photos analyzed and indexed with the name, location, state of mind, and other index terms. I am not so sure, but the write up explains with that “could” coloring the information:

The software is not intended to be just a one-off tool for privacy-loving individuals. If deployed across millions of images, it would be a broadside against facial recognition systems, poisoning the accuracy of the data sets they gather from the Web.

So facial recognition = bad. Screwing up facial recognition = good.

There’s more:

“Our goal is to make Clearview go away,” said Dr Ben Zhao, a professor of computer science at the University of Chicago.

Okay, a company is a target.

How does this work:

Fawkes converts an image — or “cloaks” it, in the researchers’ parlance — by subtly altering some of the features that facial recognition systems depend on when they construct a person’s face print.
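Fawkes’ actual optimization is more elaborate, but the core trick (a small, bounded pixel perturbation that drags the computed face print toward a different identity) can be sketched as a single FGSM-style step. The linear embedding W, the decoy target, and the eps budget are all illustrative assumptions, not the Fawkes implementation:

```python
import numpy as np

def cloak(image, W, decoy_embedding, eps=0.03):
    """Nudge a (flattened) image so its embedding W @ x moves toward a
    decoy identity's embedding, keeping each pixel change within +/- eps
    so the edit stays visually subtle."""
    x = image.ravel()
    # Gradient of 0.5 * ||W x - decoy||^2 w.r.t. x is W^T (W x - decoy);
    # step against it to pull the embedding toward the decoy.
    grad = W.T @ (W @ x - decoy_embedding)
    x_cloaked = np.clip(x - eps * np.sign(grad), 0.0, 1.0)
    return x_cloaked.reshape(image.shape)
```

A recognizer trained on many such cloaked photos learns a face print that no longer matches the person’s real face, which is the “poisoning” the article describes.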

Several observations:

  • In the event of a problem like the explosion in Lebanon, maybe facial recognition can identify some of those killed.
  • Law enforcement may find that narrowing a pool of suspects to a smaller group enhances an investigative process.
  • Unidentified individuals who are successfully identified “could” add precision to Covid contact tracing.
  • Applying the technology to differentiate “false” positives from “true” positives in medical imaging may be helpful in some diagnoses.

My concern is that technical write ups are often little more than social polemics. Examining the upside and downside of an innovation is important. Converting a technical process into a quest to “kill” a company, a concept, or an application of technical processes is not helpful in DarkCyber’s view.

Stephen E Arnold, August 12, 2020

Wolfcom, Body Cameras, and Facial Recognition

April 5, 2020

Facial recognition is a controversial topic and is becoming more so as the technology advances. Top weapons and security companies will not go near facial recognition software due to the cans of worms it would open. Law enforcement agencies want these companies to add it. Wolfcom is actually adding facial recognition to its cameras. Techdirt has the scoop on the story, “Wolfcom Decides It Wants To Be The First US Body Cam Company To Add Facial Tech To Its Products.”

Wolfcom makes body cameras for law enforcement, and it wants to add facial recognition technology to its products. Currently Wolfcom is developing facial recognition for its newest body cam, Halo. Around 1,500 police departments have purchased Wolfcom’s body cameras.

If Wolfcom is successful with its facial recognition development, it would be the first company to offer body cameras that use the technology. The technology is still in development, according to Wolfcom’s marketing. Right now, its facial recognition technology rests on taking individuals’ photos, then matching them against a database. The specific database is not mentioned.

Wolfcom obviously wants to be an industry leader, but it is also being careful about not making false promises or drumming up bad advertising:

“About the only thing Wolfcom is doing right is not promising sky high accuracy rate for its unproven product when pitching it to government agencies. That’s the end of the “good” list. Agencies who have been asked to beta test the “live” facial recognition AI are being given free passes to use the software in the future, when (or if) it actually goes live. Right now, Wolfcom’s offering bears some resemblance to Clearview’s: an app-based search function that taps into whatever databases the company has access to. Except in this case, even less is known about the databases Wolfcom uses or if it’s using its own algorithm or simply licensing one from another purveyor.”

Wolfcom could eventually offer real-time facial recognition technology, and that could affect some competitors.

Whitney Grace, April 5, 2020

Facial Recognition: Those Error Rates? An Issue, Of Course

February 21, 2020

DarkCyber read “Machines Are Struggling to Recognize People in China.” The write up asserts:

The country’s ubiquitous facial recognition technology has been stymied by face masks.

One of the unexpected consequences of the Covid-19 virus is that citizens wearing face masks cannot be recognized.

“Unexpected” when adversarial fashion has been getting some traction among those who wish to move anonymously.

The write up adds:

Recently, Chinese authorities in some provinces have made medical face masks mandatory in public and the use and popularity of these is going up across the country. However, interestingly, as millions of masks are now worn by Chinese people, there has been an unintended consequence. Not only have the country’s near ubiquitous facial-recognition surveillance cameras been stymied, life is reported to have become difficult for ordinary citizens who use their faces for everyday things such as accessing their homes and bank accounts.

Now an “admission” by a US company:

Companies such as Apple have confirmed that the facial recognition software on their phones need a view of the person’s full face, including the nose, lips and jaw line, for them to work accurately. That said, a race for the next generation of facial-recognition technology is on, with algorithms that can go beyond masks. Time will tell whether they work. I bet they will.

To sum up: Masks defeat facial recognition. The future is a method of identification that can work with what is not covered plus any other data available to the system; for example, pattern of walking and geo-location.
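A system that identifies people from “what is not covered plus any other data” would need to fuse scores from several weak signals: partial face, pattern of walking, geolocation. A minimal sketch of such score-level fusion (the modality weights are invented for illustration, not drawn from any deployed system):

```python
def fuse_identity_scores(scores, weights=None):
    """Combine per-modality match scores (each in [0, 1]) into one
    identity score via a normalized weighted average. Modalities the
    system could not observe (e.g. a masked face) are simply absent."""
    if weights is None:
        # Illustrative defaults only.
        weights = {"face": 0.5, "gait": 0.3, "geo": 0.2}
    total = sum(weights[m] for m in scores)
    if total == 0:
        return 0.0
    return sum(weights[m] * s for m, s in scores.items()) / total
```

Because the weights renormalize over whatever was observed, masking the face degrades confidence rather than defeating the system outright, which is the point of the “next generation” race the author describes.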

For now, though, masks mean lousy facial recognition and more effort to find innovations.

The author of the write up is a — wait for it — venture capital professional. And what country leads the world in facial recognition? China, according to the VC professional.

The future is better person recognition of which the face is one factor.

Stephen E Arnold, February 21, 2020

Easy Facial Recognition

February 11, 2020

DarkCyber spotted a Twitter thread. You can view it here (verified on February 8, 2020). The main point is that using open source software, an individual was able to obtain (scrape; that is, copy) images from publicly accessible services. Then the images were “processed.” The idea was to identify a person from an image. Net net: People can object to facial recognition, but once a technology migrates from “little known” to publicly available, there may be difficulty putting the tech cat back in the bag.

Stephen E Arnold, February 11, 2020

The Clearview Write Up: A Great Quote

January 20, 2020

DarkCyber does not want to join in the hand waving about the facial recognition company called Clearview. Instead, we want to point out that the article is available without a pay wall from this link: https://bit.ly/2TO26H1

Also, the write up contains a great quote about technology like facial recognition. Here it is:

It’s creepy what they’re doing, but there will be many more of these companies. There is no monopoly on math.—Al Gidari, a privacy professor at Stanford Law School

DarkCyber wants to point out that a number of companies have gathered collections of images from a wide range of sources. The write up points to investors who may or may not be the power grid behind this particular technology application.

The inventor fits a stereotype: college dropout, long hair, etc.

The write up also identifies officers who allegedly found the database of images and the services helpful.

The New York Times continues to report on specialized technology. There are upsides and downsides to the information. One upside is that the write ups inform people about technology and its utility. The downside is that the information presented may generate a situation in which individuals can be put at risk or a negative tint given to something that is applied math and publicly accessible data.

It is interesting to consider combining services; for example, brand monitoring and image search. Perhaps that is another story for the New York Times?

Stephen E Arnold, January 20, 2020

New Chinese Facial Recognition Camera Reduces False Positives

January 19, 2020

In a move that should surprise nobody, China has created the ultimate facial recognition hardware. The Telegraph reports, “China Unveils 500 Megapixel Camera that Can Identify Every Face in a Crowd of Tens of Thousands.” Researchers revealed the “super camera,” which can see four times more detail than the human eye, at China’s International Industry Fair. Of course, no surveillance tech is complete without an AI; writer Freddie Hayward tells us:

“The camera’s artificial intelligence will be able to scan a crowd and identify an individual within seconds. Samantha Hoffman, an analyst at the Australian Strategic Policy Institute, told the ABC that the government has massive databases of people’s images and that data generated from surveillance video can be ‘fed into a pool of data that, combined with AI processing, can generate tools for social control, including tools linked to the Social Credit System’.”

Yes, the Social Credit System. China is no stranger to spying on its people, and this development will only make their current practices more effective. We learn:

“China currently has an estimated 200 million CCTV cameras watching over its citizens. For the past few years the country has been building a social credit system that will generate a score for each citizen based upon data about their lives, such as their credit score, whether they donate to charity, and their parenting ability. Punishments and rewards that citizens will receive based upon their score include access to better schools and universities and restricted travel. The current CCTV network is a central tool in gathering data about its citizens, but the cameras aren’t always powerful enough to take a clear picture of someone’s face in a crowd. The new 500 megapixel, or 500 million pixel, camera will help to remedy this.”

Indeed it will. I suppose if you are going to build a social system around snooping on the people, it should be as accurate as possible. You wouldn’t want to keep one citizen out of a good school because someone who looked like them was caught littering.

Cynthia Murrell, January 19, 2020

From the Desk of Captain Obvious: How Image Recognition Mostly Works

July 8, 2019

Want to be reminded about how super duper image recognition systems work? If so, navigate to the capitalist tool’s “Facebook’s ALT Tags Remind Us That Deep Learning Still Sees Images as Keywords.” The DarkCyber team knows that this headline is designed to capture clicks and certainly does not apply to every image recognition system available. But if the image is linked via metadata to something other than a numeric code, then images are indeed mapped to words. Words, it turns out, remain useful in our video and picture first world.

Nevertheless, the write up offers some interesting comments, which is what the DarkCyber research team expects from the capitalist tool. (One of our DarkCyber team saw Malcolm Forbes at a Manhattan eatery keeping a close eye on a spectacularly gaudy motorcycle. Alas, that Mr. Forbes is no longer with us, although the motorcycle probably survives somewhere, unlike the “old” Forbes’ editorial policies.)

Here’s the passage:

For all the hype and hyperbole about the AI revolution, today’s best deep learning content understanding algorithms are still remarkably primitive and brittle. In place of humans’ rich semantic understanding of imagery, production image recognition algorithms see images merely through predefined galleries of metadata tags they apply based on brittle and naïve correlative models that are trivially confused.

Yep, and ultimately the hundreds of millions of driver license pictures will be mapped to words; for example, name, address, city, state, zip, along with a helpful pointer to other data about the driver.

The capitalist tool reminds the patient reader:

Today’s deep learning algorithms “see” imagery by running it through a set of predefined models that look for simple surface-level correlative patterns in the arrangement of its pixels and output a list of subject tags much like those human catalogers half a century ago.
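The “predefined models … output a list of subject tags” pipeline the quote describes boils down to ranking a fixed label vocabulary by classifier probability and keeping the top few as ALT-text keywords. A minimal sketch (the vocabulary and top-k cutoff are illustrative):

```python
import math

def image_to_tags(logits, vocabulary, top_k=3):
    """Turn a classifier's raw logits into ALT-text-style keyword tags:
    softmax over the fixed vocabulary, then keep the top_k labels."""
    # Subtract the max logit for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    ranked = sorted(zip(vocabulary, probs), key=lambda t: -t[1])
    return ranked[:top_k]
```

Whatever the upstream network looks like, the output is still a handful of words from a predefined gallery, which is precisely the article’s “images as keywords” complaint.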

Once again, no push back from Harrod’s Creek. However, it is disappointing that new research is not referenced in the article; for example, the companies involved in DARPA’s UPSIDE program.

Stephen E Arnold, July 8, 2019
