Thinking about Security: Before and Earlier, Not After and Later

September 30, 2020

Many factors stand in the way of trustworthy AI, not least the influence of those with a raise, a bonus, or a promotion at stake. Then there is the thorny issue of bias built into machine learning. InformationWeek, however, looks at a few more straightforward threats in its article, “Dark Side of AI: How to Make Artificial Intelligence Trustworthy.”

Gartner VP and analyst Avivah Litan notes that, though AI is becoming more mainstream, security and privacy concerns still keep many companies from adopting it. They are right to be cautious: according to Gartner’s research, consumers believe responsibility lies with the organizations that adopt AI technology, not with the developers or vendors behind it. Litan describes two common ways bad actors attack AI systems: malicious inputs and query attacks. She writes:

“Malicious inputs to AI models can come in the form of adversarial AI, manipulated digital inputs or malicious physical inputs. Adversarial AI may come in the form of socially engineering humans using an AI-generated voice, which can be used for any type of crime and considered a ‘new’ form of phishing. For example, in March of last year, criminals used AI synthetic voice to impersonate a CEO’s voice and demand a fraudulent transfer of $243,000 to their own accounts. …

“Query attacks involve criminals sending queries to organizations’ AI models to figure out how it’s working and may come in the form of a black box or white box. Specifically, a black box query attack determines the uncommon, perturbated inputs to use for a desired output, such as financial gain or avoiding detection. Some academics have been able to fool leading translation models by manipulating the output, resulting in an incorrect translation. A white box query attack regenerates a training dataset to reproduce a similar model, which might result in valuable data being stolen. An example of such was when a voice recognition vendor fell victim to a new, foreign vendor counterfeiting their technology and then selling it, which resulted in the foreign vendor being able to capture market share based on stolen IP.”
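For readers who want a concrete picture of the black box query attack Litan describes, here is a minimal sketch: an attacker who can only submit inputs and observe outputs searches for small perturbations that push a model toward a desired result. Everything here is hypothetical; model_api stands in for a real target service, and the scoring rule and parameters are invented for illustration.

```python
import random

def model_api(features):
    """Hypothetical black-box endpoint: the attacker sees only the score,
    never the model's weights or architecture."""
    # Stand-in scoring rule; in a real attack this would be a remote service.
    return sum(w * x for w, x in zip([0.4, -0.2, 0.7], features))

def black_box_perturbation_attack(features, target_score, steps=1000, eps=0.05):
    """Randomly nudge the input until the model returns the desired output,
    mirroring the 'uncommon, perturbated inputs' Litan mentions."""
    best = list(features)
    for _ in range(steps):
        candidate = [x + random.uniform(-eps, eps) for x in best]
        if model_api(candidate) > model_api(best):
            best = candidate          # keep any nudge that moves the score up
        if model_api(best) >= target_score:
            break                     # desired output reached, e.g. fraud approved
    return best

legitimate_input = [1.0, 2.0, 0.5]
crafted_input = black_box_perturbation_attack(legitimate_input, target_score=1.5)
print(model_api(legitimate_input), model_api(crafted_input))
```

The point of the sketch is that the attacker needs no inside knowledge at all, only repeated query access, which is why monitoring who queries a model, and how often, matters.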

Litan emphasizes that organizations must get ahead of security concerns. Not only will building in security measures at the outset thwart costly and embarrassing attacks, it is also less expensive than trying to tack them on later. She recommends three specific measures: conduct a threat assessment and carefully control access to and monitoring of training data and models; add AI-specific aspects to the standard software development life cycle (SDLC) controls; and protect and maintain data repositories to prevent data poisoning. See the article for elaboration of each of these points.
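To make the third measure concrete, below is a minimal sketch of one way an organization might baseline and verify a training-data repository to catch tampering before a model is retrained. It assumes the training data lives in flat CSV files; the paths, manifest name, and file pattern are illustrative only, not anything prescribed in the article.

```python
import hashlib
import json
import pathlib

def fingerprint(path):
    """SHA-256 of a file's bytes; any edit to a training record changes it."""
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

def build_manifest(data_dir, manifest_file="training_manifest.json"):
    """Record a trusted baseline hash for every file in the repository."""
    manifest = {str(p): fingerprint(p)
                for p in pathlib.Path(data_dir).rglob("*.csv")}
    pathlib.Path(manifest_file).write_text(json.dumps(manifest, indent=2))

def verify_manifest(manifest_file="training_manifest.json"):
    """Re-hash each file and flag anything missing or altered since baseline."""
    manifest = json.loads(pathlib.Path(manifest_file).read_text())
    return [p for p, h in manifest.items()
            if not pathlib.Path(p).exists() or fingerprint(p) != h]

# Run build_manifest once on vetted data, then call verify_manifest before
# each training job; a non-empty result means investigate before retraining.
```

This is only one piece of a data-poisoning defense, of course; it catches post-vetting tampering but not bad records that were poisoned before the baseline was taken, which is where Litan’s threat assessment and access controls come in.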

Cynthia Murrell, September 30, 2020
