Cyber Security: An Oxymoron Maybe?

January 8, 2021

AI neural networks are only as smart as they are programmed to be, and the technology is still in its infancy. In other words, AI neural networks are biased and make mistakes. This is not a big problem now, while many AI neural networks remain experimental; however, as the technology advances, Hacker Noon says we need to discuss future problems now in “The Inevitable Symbiosis Of Cybersecurity And AI.”

AI neural networks, like other technologies, are hackable. The problem Hacker Noon raises is that companies relying on AI to power their products and services, such as Tesla’s self-driving algorithm, are already launching them to the public. Are these companies aware of the vulnerabilities in their algorithms and actively resolving them, or are they ignoring them?

AI engineers are happy to discuss how AI is revolutionizing cybersecurity, but there is little discussion of how cybersecurity is improving, or could improve, AI. Cybersecurity companies are not applying their algorithms to find vulnerabilities in AI. Complacency is the enemy of AI safety:

“Moreover, there are still few use cases where it is paramount to guarantee the AI algorithms have no life-threatening vulnerabilities. But as AI takes over more and more tasks such as driving, flying, designing drugs to treat illnesses and so on, AI engineers will need to also learn the craft of, and be, cybersecurity experts.

I want to emphasize that the responsibility of engineering safer AI algorithms cannot be delegated to an external cybersecurity firm. Only the engineers and researchers designing the algorithms have the intimate knowledge necessary to deeply understand what and why vulnerabilities exists and how to effectively and safely fix them.”

Cyber security: An oxymoron?

Whitney Grace, January 8, 2021
