The Dark Potential Behind Neural Networks

September 27, 2017

With nearly every technical advance humanity has made, someone has figured out how to weaponize that which was intended for good. So too, it seems, with neural networks. The Independent reports, “Artificial Intelligence Can Secretly Be Trained to Behave ‘Maliciously’ and Cause Accidents.” The article cites research [PDF] from New York University exploring the potential to create a “BadNet”: a neural network whose training has been quietly tampered with so that it misbehaves on attacker-chosen inputs. The researchers found such a model could even cause tragic physical “accidents,” and that the tampering would be difficult to detect. Writer Aatif Sulleyman explains:

Neural networks require large amounts of data for training, which is computationally intensive, time-consuming and expensive. Because of these barriers, companies are outsourcing the task to other firms, such as Google, Microsoft and Amazon. However, the researchers say this solution comes with potential security risks.

‘In particular, we explore the concept of a backdoored neural network, or BadNet,’ the paper reads. ‘In this attack scenario, the training process is either fully or (in the case of transfer learning) partially outsourced to a malicious party who wants to provide the user with a trained model that contains a backdoor. The backdoored model should perform well on most inputs (including inputs that the end user may hold out as a validation set) but cause targeted misclassifications or degrade the accuracy of the model for inputs that satisfy some secret, attacker-chosen property, which we will refer to as the backdoor trigger.’

Sulleyman shares an example from the report: by applying a Post-it note to a stop sign, researchers fooled a system into reading it as a speed limit sign, a trick that could cause an autonomous vehicle to cruise through an intersection without stopping. Though we do not (yet) know of any such sabotage outside the laboratory, the researchers hope their work will encourage companies to pay close attention to security as they move forward with machine learning technology.
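To make the mechanism concrete, here is a minimal sketch (in Python with NumPy) of the data-poisoning step such an attacker could perform: a small bright patch, standing in for the Post-it note, is stamped onto a fraction of the training images, which are then relabeled to the attacker’s target class. The function names, patch shape, and toy dataset below are illustrative assumptions, not code from the NYU paper.

```python
import numpy as np

def add_trigger(image, patch_size=4, value=1.0):
    """Stamp a small bright square in one corner of the image.

    The patch stands in for the Post-it note in the stop-sign example.
    """
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = value
    return poisoned

def poison_dataset(images, labels, target_label, poison_fraction=0.05, rng=None):
    """Return a copy of the training set in which a small fraction of images
    carry the trigger and have been relabeled to the attacker's target class."""
    if rng is None:
        rng = np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_fraction)
    chosen = rng.choice(len(images), size=n_poison, replace=False)
    for i in chosen:
        images[i] = add_trigger(images[i])
        labels[i] = target_label
    return images, labels

# Illustrative data: 32x32 grayscale "road sign" images, class 0 = stop, class 1 = speed limit.
clean_images = np.random.default_rng(1).random((1000, 32, 32))
clean_labels = np.random.default_rng(2).integers(0, 2, size=1000)

poisoned_images, poisoned_labels = poison_dataset(
    clean_images, clean_labels, target_label=1)

# A model trained on (poisoned_images, poisoned_labels) can still score well on a
# clean validation set, yet classify any sign bearing the trigger as a speed limit sign.
```

The key point, per the paper’s description, is that the poisoned fraction is small enough to leave accuracy on clean inputs essentially untouched, which is why a hold-out validation set will not reveal the backdoor.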

Cynthia Murrell, September 27, 2017