From the Desk of Captain Obvious: How Image Recognition Mostly Works

July 8, 2019

Want to be reminded about how super duper image recognition systems work? If so, navigate to the capitalist tool’s article “Facebook’s ALT Tags Remind Us That Deep Learning Still Sees Images as Keywords.” The DarkCyber team knows that this headline is designed to capture clicks and certainly does not apply to every image recognition system available. But if the image is linked via metadata to something other than a numeric code, then images are indeed mapped to words. Words, it turns out, remain useful in our video- and picture-first world.

Nevertheless, the write up offers some interesting comments, which is what the DarkCyber research team expects from the capitalist tool. (One of our DarkCyber team members saw Malcolm Forbes at a Manhattan eatery keeping a close eye on a spectacularly gaudy motorcycle. Alas, Mr. Forbes is no longer with us, although the motorcycle probably survives somewhere, unlike the “old” Forbes’ editorial policies.)

Here’s the passage:

For all the hype and hyperbole about the AI revolution, today’s best deep learning content understanding algorithms are still remarkably primitive and brittle. In place of humans’ rich semantic understanding of imagery, production image recognition algorithms see images merely through predefined galleries of metadata tags they apply based on brittle and naïve correlative models that are trivially confused.

Yep, and ultimately the hundreds of millions of driver license pictures will be mapped to words; for example, name, address, city, state, zip, along with a helpful pointer to other data about the driver.

The capitalist tool reminds the patient reader:

Today’s deep learning algorithms “see” imagery by running it through a set of predefined models that look for simple surface-level correlative patterns in the arrangement of its pixels and output a list of subject tags much like those human catalogers half a century ago.

Once again, no push back from Harrod’s Creek. However, it is disappointing that new research is not referenced in the article; for example, the companies involved in Darpa Upside.
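The “predefined galleries of metadata tags” idea is easy to caricature in a few lines of Python. The tag vocabulary, scores, and threshold below are invented for illustration; a real system’s scores come from a trained network:

```python
# A caricature of the "tag gallery" pipeline: whatever is in the picture,
# the output is squeezed into a fixed keyword vocabulary. The tags and
# scores below stand in for a real network's learned outputs.

PREDEFINED_TAGS = ["dog", "cat", "outdoor", "person", "car"]

def tag_image(scores, threshold=0.5):
    """Map per-tag scores onto the subset of predefined tags above threshold."""
    return [tag for tag, s in zip(PREDEFINED_TAGS, scores) if s >= threshold]

print(tag_image([0.91, 0.10, 0.77, 0.40, 0.05]))  # ['dog', 'outdoor']
```

Anything not in the gallery simply cannot be said, which is the article’s point.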

Stephen E Arnold, July 8, 2019

How Smart Software Goes Off the Rails

June 23, 2019

Navigate to “How Feature Extraction Can Be Improved With Denoising.” The write up seems like a straightforward analytics explanation: lots of jargon, buzzwords, and hippy dippy references to length squared sampling in matrices. The concept is not defined in the article. And if you remember statistics 101, you know that there are five common types of sampling: convenience, cluster, random, systematic, and stratified. Each has its strengths and weaknesses. How does one avoid the issues? Use length squared sampling, obviously: just sample rows with probability proportional to the square of their Euclidean norms. Got it?
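For the curious, length squared sampling is short to sketch in numpy. The matrix and sample size below are invented for illustration:

```python
import numpy as np

def length_squared_sample(A, k, seed=None):
    """Sample k row indices of A with probability proportional to the
    square of each row's Euclidean norm (length squared sampling)."""
    rng = np.random.default_rng(seed)
    norms_sq = np.sum(A * A, axis=1)        # squared Euclidean norm per row
    probs = norms_sq / norms_sq.sum()       # normalize into a distribution
    return rng.choice(A.shape[0], size=k, replace=True, p=probs)

A = np.array([[3.0, 4.0],                   # norm^2 = 25 -> picked most often
              [1.0, 0.0],                   # norm^2 = 1
              [0.0, 2.0]])                  # norm^2 = 4
idx = length_squared_sample(A, k=1000, seed=0)
# Row 0 carries 25/30 of the probability mass, so it dominates the sample.
```

Big rows get sampled often; small rows rarely. That is the whole trick.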

However, the math is not the problem. Math is a method. The glitch is in defining “noise.” Like love, “noise” can be defined in many ways. The write up points out:

Autoencoders with more hidden layers than inputs run the risk of learning the identity function – where the output simply equals the input – thereby becoming useless. In order to overcome this, Denoising Autoencoders (DAE) were developed. In this technique, the input is randomly corrupted by noise. This will force the autoencoder to reconstruct the input, or denoise. Denoising is recommended as a training criterion for learning to extract useful features that will constitute a better higher level representation.
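The quoted technique can be sketched in a few lines of numpy. The data, network size, and noise level below are invented for illustration; real denoising autoencoders are nonlinear and far larger:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples with 1-D structure embedded in 4 dimensions.
z = rng.normal(size=(200, 1))
X = z @ np.array([[1.0, 0.5, -0.5, 1.0]])              # clean inputs

W = rng.normal(scale=0.1, size=(4, 2))                 # encoder (decoder is tied: W.T)
lr = 0.01
losses = []
for step in range(500):
    X_noisy = X + rng.normal(scale=0.3, size=X.shape)  # randomly corrupt the input
    H = X_noisy @ W                                    # encode the noisy version
    X_hat = H @ W.T                                    # decode with tied weights
    err = X_hat - X                                    # but reconstruct the CLEAN input
    losses.append(float(np.mean(err ** 2)))
    grad_W = (X_noisy.T @ (err @ W) + err.T @ (X_noisy @ W)) / len(X)
    W -= lr * grad_W

# Reconstruction error falls: the code learns the signal, not the identity map.
```

Note that the noise model here is a choice, which is exactly where the trouble discussed next creeps in.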

Can you spot the flaw in the approach? Consider what happens if the training set is skewed for some reason. The system will learn based on the inputs smoothed by statistical sanding. When the system encounters real world data, the system will, by golly, interpret the “real” inputs in terms of the flawed denoising method. As one wit observed, “So s?c^2 p gives us a better estimation than the zero matrix.” Yep.

To sum up, the system just generates “drifting” outputs. The fix? Retraining. This is expensive and time consuming. Not good when the method is applied to real time flows of data.

In a more colloquial turn of phrase, the denoiser may not be denoising correctly.

As more complex numerical recipes are embedded in “smart” systems, there will be some interesting consequences. Does the phrase “chain of failure” ring a bell? What about “good enough”?

Stephen E Arnold, June 23, 2019

Facial Recognition: In China, Deployed. In the US, Detours

April 9, 2019

Amazon faces push back for its facial recognition system Rekognition. China? That is a different story.

Chinese authorities seem to be fond of re-education camps and assorted types of incarceration facilities. China is trying to become the recognized (no pun intended) technology capital of the world. Unlike Chile and Bolivia, which have somewhat old school prison systems, the Chinese government is investing money in its prison security systems. Technode explains how China upgraded one such security system in “Briefing: Chinese VIP Jail Uses AI Technology To Monitor Prisoners.”

One flagship for facial recognition is China’s Yancheng Prison, known for imprisoning government officials and foreigners. The facility has upgraded its security system with a range of surveillance technology. The new surveillance system consists of a smart AI network with cameras and hidden sensors that are equipped with facial recognition and movement analysis. The system detects prisoners’ unusual behavioral patterns, alerts the guards, and includes the findings in daily reports.

Yancheng Prison wants to cut down on the number of prison breaks, thus the upgrade:

“Jointly developed by industry and academic organizations including Tianjin-based surveillance technology company Tiandy, the system is expected to provide blanket coverage extending into every cell, rendering prison breaks next to impossible. The company is also planning to sell the system to some South American countries for jails with histories of violence and security breaches. The use of technology to monitor prisoners prompted concern over negative effects on prisoners’ lives and mental state from one human behavior expert who also suggested that some prisoners may look for ways to exploit the AI’s weaknesses.”

China continues to put technology into use. The feedback from these deployments lets the engineers who develop the systems make adjustments. Over time, China may become better at facial recognition than almost any other country.

Whitney Grace, April 9, 2019

Federating Data: Easy, Hard, or Poorly Understood Until One Tries It at Scale?

March 8, 2019

I read two articles this morning.

One article explained that there’s a new way to deal with data federation. Always optimistic, I took a look at “Data-Driven Decision-Making Made Possible using a Modern Data Stack.” The revolution is to load data and then aggregate. The old way is to transform, aggregate, and model. Here’s a diagram from DAS42. A larger version is available at this link.

Hard to read. Yep, New Millennial colors. Is this a breakthrough?

I don’t know.

When I read “2 Reasons a Federated Database Isn’t Such a Slam-Dunk,” it seemed that the solution outlined by DAS42 and the InfoWorld expert’s take are not in sync.

There are two reasons. Count ‘em.

One: performance.

Two: security.

Yeah, okay.

Some may suggest that there are a handful of other challenges. These range from deciding how to index audio, video, and images to figuring out what to do with different languages in the content to determining what data are “good” for the task at hand and what data are less “useful.” Date, time, and geocode metadata are needed, but that introduces another not-so-easy-to-solve indexing problem.
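A toy example of why federation resists the slam-dunk pitch: even two tiny sources disagree about how to record a date, and every federated query pays the normalization bill. The schemas and records below are invented for illustration:

```python
from datetime import datetime

# Two "federated" sources storing comparable events with different schemas.
source_a = [{"id": 1, "ts": "2019-03-08"}]           # ISO date string
source_b = [{"id": 2, "when": "03/08/2019 14:00"}]   # US-style timestamp

def normalize_a(rec):
    return {"id": rec["id"], "ts": datetime.strptime(rec["ts"], "%Y-%m-%d")}

def normalize_b(rec):
    return {"id": rec["id"], "ts": datetime.strptime(rec["when"], "%m/%d/%Y %H:%M")}

# The federation layer: every query first pays the per-source mapping cost.
federated = [normalize_a(r) for r in source_a] + [normalize_b(r) for r in source_b]
in_march = [r for r in federated if r["ts"].month == 3]
```

Two sources and one field already need custom code. Now scale to hundreds of sources plus audio, video, and multiple languages.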

So where are we with the “federation thing”?

Exactly the same place we were years ago…start ups and experts notwithstanding. But then one has to wrangle a lot of data. That’s cost, gentle reader. Big money.

Stephen E Arnold, March 8, 2019

Natural Language Generation: Sort of Made Clear

February 28, 2019

I don’t want to spend too much time on NLG (natural language generation). This is a free Web log. Providing the acronym should be enough of a hint.

If you are interested in the subject and can deal with wonky acronyms, you may want to read “Beyond Local Pattern Matching: Recent Advances in Machine Reading.”

Search sucks, so bright young minds want to tell you what you need to know. What if the system is only 75 to 80 percent accurate? The path is a long one, but the direction information retrieval is heading seems clear.

Stephen E Arnold, February 28, 2019

ChemNet: Pre Training and Rules Can Work but Time and Cost Can Be a Roadblock

February 27, 2019

I read “New AI Approach Bridges the Slim Data Gap That Can Stymie Deep Learning Approaches.” The phrase “slim data” caught my attention. Pairing the phrase with “deep learning” seemed to point the way to the future.

The method described in the document reminded me that creating rules for “smart software” works on narrow domains with constraints on terminology. No emojis allowed. The method of “pre training” has been around since the early days of smart software. Autonomy in the mid 1990s relied upon training its “black box.”

Creating a training set which represents the content to be processed or indexed can be a time consuming, expensive business. Plus, because content “drifts,” retraining is required. For some types of content, the training process must be repeated and verified.
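The pre-train-then-adapt pattern the article describes can be sketched with a toy logistic regression. The tasks, data sizes, and numbers below are invented for illustration; real chemistry models are vastly larger:

```python
import numpy as np

rng = np.random.default_rng(1)

def train_logreg(X, y, w, steps=300, lr=0.5):
    """Plain gradient descent logistic regression, starting from weights w."""
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w = w - lr * (X.T @ (p - y)) / len(y)
    return w

# "Pre training": plenty of labeled data from a related task.
w_true = np.array([2.0, -1.0, 0.5])
X_big = rng.normal(size=(2000, 3))
y_big = (X_big @ w_true + rng.normal(scale=0.5, size=2000) > 0).astype(float)
w_pre = train_logreg(X_big, y_big, np.zeros(3))

# "Slim data": only a dozen labeled examples for the target task.
X_slim = rng.normal(size=(12, 3))
y_slim = (X_slim @ w_true > 0).astype(float)

w_scratch = train_logreg(X_slim, y_slim, np.zeros(3), steps=50)  # no pre training
w_tuned = train_logreg(X_slim, y_slim, w_pre.copy(), steps=50)   # fine tune w_pre

# Fresh evaluation data for the target task.
X_test = rng.normal(size=(1000, 3))
y_test = (X_test @ w_true > 0).astype(float)
acc = lambda w: float(np.mean(((X_test @ w) > 0) == y_test))
```

The pre trained start typically holds up better on the slim task, but note the hidden bill: the big related data set has to be built, cleaned, and refreshed as content drifts.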

So the cost of the rule creation, tuning and tweaking is one thing. The expense of training, training set tuning, and retraining is another. Add them up, and the objective of keeping costs down and accuracy up becomes a bit of a challenge.

The article focuses on the benefits of the new system as it crunches and munches its way through chemical data. The idea is to let software identify molecules for their toxicity.

Why hasn’t this type of smart software been used to index outputs at scale?

My hunch is that the time, cost, and accuracy of the indexing itself is a challenge. Eighty percent accuracy may be okay for some applications, like flagging patients at risk of diabetes. Identifying substances that will not kill one outright is another matter.

In short, the slim data gap and deep learning remain largely unsolved even for a constrained content domain.

Stephen E Arnold, February 27, 2019

Google Book Search: Broken Unfixable under Current Incentives

February 19, 2019

I read “How Badly is Google Books Search Broken, and Why?” The main point is that search results do not include the expected results. The culprit, as I understand the write up, is that looking for rare strings of characters within a time slice behaves in an unusual manner. I noted this statement:

So possibly Google has one year it displays for books online as a best guess, and another it uses internally to represent the year they have legal certainty a book is released. So maybe those volumes of the congressional record have had their access rolled back as Google realized that 1900 might actually mean 1997; and maybe Google doesn’t feel confident in library metadata for most of its other books, and doesn’t want searchers using date filters to find improperly released books. Oddly, this pattern seems to work differently on other searches. Trying to find another rare-ish term in Google Ngrams, I settled on “rarely used word”; the Ngrams database lists 192 uses before 2002. Of those, 22 show up in the Google index. A 90% disappearance rate is bad, but still a far cry from 99.95%.

There are many reasons one can identify for the apparent misbehavior of the Google search system for books. The author identifies the main reason but does not focus on it.

From my point of view, and based on the research we have done for my various Google monographs, Google’s search systems operate in silos. Each silo shares some common characteristics even though the engineers, often reluctantly assigned to what are dead end or career stalling projects, make changes.

One of the common flaws has to do with the indexing process itself. None of the Google silos does a very good job with time related information. Google itself has a fix, but implementing the fix for most of its services is a cost increasing step.
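What time-aware indexing involves can be sketched simply. The documents and dates below are invented, echoing the congressional record example from the article:

```python
from collections import defaultdict
from datetime import date

# term -> [(doc_id, date), ...]: each posting carries the document's date
# so a query can be restricted to a time slice.
index = defaultdict(list)

docs = {
    1: ("congressional record", date(1900, 1, 1)),   # metadata says 1900...
    2: ("congressional record", date(1997, 6, 1)),   # ...this one says 1997
}
for doc_id, (text, when) in docs.items():
    for term in text.split():
        index[term].append((doc_id, when))

def search(term, start, end):
    """Doc ids containing term whose date falls inside [start, end]."""
    return sorted({doc for doc, when in index[term] if start <= when <= end})

# A wrong stored year silently drops a book out of the slice -- the
# failure mode the article describes.
print(search("congressional", date(1890, 1, 1), date(1950, 1, 1)))  # [1]
```

The mechanics are trivial at this scale; the cost is in backfilling reliable dates across billions of items, which is the fix Google appears unwilling to pay for.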

The result is that Google focuses on innovations which can drive revenue; that is, online advertising for the mobile user of Google services.

But Google’s time blindness is unlikely to be remediated any time soon. For a better implementation of sophisticated time operations, take a look at the technology for time based retrieval, time slicing, and time analytics from the Google and In-Q-Tel funded company Recorded Future.

In my lectures about Google’s time blindness DNA, I compare and contrast what Recorded Future can do versus what Google silos are doing.

Net net: Performing sophisticated analyses of the Google indexes requires the type of tools available from Recorded Future.

Stephen E Arnold, February 19, 2019

Amazon: Wheel Re-Invention

December 19, 2018

Some languages have bound phrases; that is, two words which go together. Examples include “White House,” a presidential dwelling, and “ticket counter,” a place to talk with an uninterested airline professional. How does a smart software system recognize a bound phrase and then connect it to the speaker’s or writer’s intended meaning? There is a difference between “I toured the White House” and “Turn left at the white house.”

Traditionally, vendors of text analysis, indexing, and NLP systems used jargon to explain a collection of methods pressed into action to make sense of language quirks. The guts of most systems are word lists, training material selected to make clear that in certain contexts some words go together and have a specific meaning; for example, “terminal” doesn’t make much sense until one gets whether the speaker or writer is referencing a place to board a train (railroad terminal), the likely fate of a sundowner (terminal as in dead), or a computer interface device (dumb terminal).
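One classic, non-proprietary way word lists surface bound phrases is pointwise mutual information over a corpus: score how much more often a pair co-occurs than chance predicts. The toy corpus below is invented for illustration:

```python
import math
from collections import Counter

# Toy corpus: raw counts are enough to score whether two words "go together".
tokens = ("the white house issued a statement . turn left at the white "
          "house on the corner . the white paint on the house peeled").split()

unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))
n = len(tokens)

def pmi(w1, w2):
    """Pointwise mutual information: high when the pair co-occurs far more
    often than the individual word frequencies would predict."""
    p_pair = bigrams[(w1, w2)] / (n - 1)
    p1, p2 = unigrams[w1] / n, unigrams[w2] / n
    return math.log2(p_pair / (p1 * p2))

# pmi("white", "house") beats pmi("the", "house"): statistics, not grammar,
# flag the bound phrase.
```

Statistics alone cannot disambiguate “the white house” someone lives in, which is where context and training data earn their keep.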

How does Amazon accomplish this magic? Amazon embraces jargon, of course, and then explains its bound phrase magic in “How Alexa Knows ‘Peanut Butter’ Is One Shopping-List Item, Not Two.”

Amazon’s spin is spoken language understanding. The write up explains how the system operates. But the methods are ones that others have used. Amazon, to be sure, has tweaked the procedures. That’s standard operating procedure in the index game.

What’s interesting is that no reference is made to the contextual information which Amazon has to assist its smart software with disambiguation.

But Amazon is now talking, presumably to further the message that the company is a bold, brave innovator.

No argument from Harrod’s Creek. That’s a bound phrase, by the way, with capital letters and sometimes an apostrophe, sometimes not.

Stephen E Arnold, December 19, 2018

Facial Recognition and Image Recognition: Nervous Yet?

November 18, 2018

I read “A New Arms Race: How the U.S. Military Is Spending Millions to Fight Fake Images.” The write up contained an interesting observation from an academic wizard:

“The nightmare situation is a video of Trump saying I’ve launched nuclear weapons against North Korea and before anybody figures out that it’s fake, we’re off to the races with a global nuclear meltdown.” — Hany Farid, a computer science professor at Dartmouth College

Nothing like a shocking statement to generate fear.

But there is a more interesting image recognition observation. “Facebook Patent Uses Your Family Photos For Targeted Advertising” reports that the social media sparkler has an invention that will

attempt to identify the people within your photo to try and guess how many people are in your family, and what your relationships are with them. So for example if it detects that you are a parent in a household with young children, then it might display ads that are more suited for such family units. [US20180332140]

While considering the implications of pinpointing family members and linking the deduced and explicit data, consider that one’s fingerprint can be duplicated. The dupe allows a touch ID to be spoofed. You can get the details in “AI Used To Create Synthetic Fingerprints, Fools Biometric Scanners.”

For a law enforcement and intelligence angle on image recognition, watch for DarkCyber on November 27, 2018. The video will be available on the Beyond Search blog splash page at this link.

Stephen E Arnold, November 18, 2018

Google Struggles with Indexing?

November 14, 2018

You probably know that Google traffic was routed to China. The culprit? A Nigerian company. Yep, Nigeria. You can read about the mistake that provided some interesting bits and bytes to the Middle Kingdom. Yeah, I know. Nigeria. “A Nigerian Company Is in Trouble with Google for Re-Routing Traffic to Russia, China” provides some allegedly accurate information.

But the major news I noted here in Harrod’s Creek concerned Google News and its indexing. Your experience may be different from mine, but Google indexing can be interesting. I was looking for an outfit identified as Inovatio, a university anchored outfit in China. The references to Inovatio in Google aimed me at a rock band and a design company in Slovenia. Google’s smart search system changed Inovatio to innovation even when I used quote marks. I did locate the Inovatio operation using a Chinese search engine, which listed Inovatio and provided the university affiliation, allowing me to get some info about an outfit providing surveillance and intercept services to countries in need of this capability.

Google. Indexing. Yeah.

“Google News Publishers Complaining About Indexing Issues” highlights another issue with the beloved Google. I learned:

In the past few days there has been an uptick in complaints from Google News publishers around Google not indexing their new news content. Gary Illyes from Google did a rare appearance on Twitter to say he passed along the feedback to the Google News team to investigate. You can scan through the Google News Help forums and see a nice number of complaints. Also David Esteve, the SEO at the Spanish newspaper El Confidencial, posted his concerns on Twitter.

The good news is that the write up mentions that this indexing glitch is a known issue.

Net net: Many people with whom I speak believe that Google’s index is comprehensive, timely, and consistent.

Yeah, also smart because Inovatio is really innovation.

Stephen E Arnold, November 14, 2018
