Neuroscience To the Rescue if Developers Allow

February 5, 2021

Machine learning has come a long way, but carefully crafted inputs, known as adversarial examples, can still fool an algorithm. Unfortunately, hackers can exploit these weaknesses. The Next Web offers hope for a defense against some of these assaults in, “Here’s How Neuroscience Can Protect AI from Cyber Attacks.” As is often the case, the key is to copy Mother Nature. Reporter Ben Dickson writes:

“Creating AI systems that are resilient against adversarial attacks has become an active area of research and a hot topic of discussion at AI conferences. In computer vision, one interesting method to protect deep learning systems against adversarial attacks is to apply findings in neuroscience to close the gap between neural networks and the mammalian vision system. Using this approach, researchers at MIT and MIT-IBM Watson AI Lab have found that directly mapping the features of the mammalian visual cortex onto deep neural networks creates AI systems that are more predictable in their behavior and more robust to adversarial perturbations. In a paper published on the bioRxiv preprint server, the researchers introduce VOneNet, an architecture that combines current deep learning techniques with neuroscience-inspired neural networks. The work, done with help from scientists at the University of Munich, Ludwig Maximilian University, and the University of Augsburg, was accepted at the NeurIPS 2020, one of the prominent annual AI conferences, which will be held virtually this year.”

The article goes on to describe the convolutional neural networks (CNNs) now used in computer vision applications and how they can be fooled. The VOneNet architecture works by swapping out the first few CNN layers for a neural network model based on the primate primary visual cortex. The researchers found this substitution provides a strong defense against adversarial attacks. See the piece for the illustrated technical details.
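To make the idea concrete, here is a minimal sketch of a biologically inspired front end of the sort the article describes: a bank of fixed Gabor filters, the oriented edge detectors classically used to model primary visual cortex neurons, applied before any learned layers. This is an illustration only, not the paper's actual VOneBlock (which also models simple and complex cells and neuronal stochasticity); the function names and parameters are our own.

```python
import numpy as np

def gabor_kernel(size, theta, sigma=2.0, lam=4.0):
    """Build a fixed Gabor filter: a V1-style oriented edge detector.

    size: kernel width/height (odd), theta: orientation in radians,
    sigma: Gaussian envelope width, lam: wavelength of the carrier.
    """
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the filter responds to edges at angle theta.
    xr = xs * np.cos(theta) + ys * np.sin(theta)
    yr = -xs * np.sin(theta) + ys * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def v1_front_end(image, n_orientations=4, size=7):
    """Convolve a grayscale image with fixed Gabor filters, then rectify.

    In a VOneNet-style model, the output feature maps would feed into the
    remaining (trainable) layers of a conventional CNN.
    """
    half = size // 2
    padded = np.pad(image, half, mode="edge")
    h, w = image.shape
    out = np.zeros((n_orientations, h, w))
    for k in range(n_orientations):
        kern = gabor_kernel(size, theta=k * np.pi / n_orientations)
        for i in range(h):
            for j in range(w):
                out[k, i, j] = np.sum(padded[i:i + size, j:j + size] * kern)
    return np.maximum(out, 0.0)  # ReLU-like rectification, as in real neurons

# Example: a 16x16 image yields 4 oriented feature maps of the same size.
img = np.random.RandomState(0).rand(16, 16)
feats = v1_front_end(img)
print(feats.shape)  # (4, 16, 16)
```

Because these filter weights are fixed rather than learned, an attacker cannot shape them through gradient-based perturbations, which is one intuition for why such a front end makes the network's behavior more predictable.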

The researchers lament AI scientists’ tendency to pursue ever-larger neural networks without pausing to consider the latest findings on brain mechanisms. Who can be bothered with effectiveness when there is money to be made by hyping scale? We suspect SolarWinds and FireEye, to name a couple, may be ready to think about different approaches to cybersecurity. Maybe the neuro thing will remediate some skinned knees at these firms? The research team is determined to forge ahead and find more ways to beneficially incorporate biology into deep neural networks. Will AI developers take heed?

Cynthia Murrell, February 5, 2021

