Securing AI during the development process
There is enormous interest in and momentum around using AI to reduce the need for human monitoring while improving enterprise security. Machine learning and other techniques are used for behavioral threat analytics, anomaly detection and false-positive reduction.
At the same time, criminal groups and nation-state actors are applying AI to the other side of the security coin, using it to find vulnerabilities, craft exploits and conduct targeted attacks.
But what about the AI tools themselves? How does an enterprise protect the AI tools it is building and secure those it is already running in production?
AI security threats
Because AI is software, AI systems face the familiar threats: compromises aimed at stealing money or confidential information, lateral movement into other systems, and denial-of-service (DoS) attacks.
AI systems are also vulnerable to a subtler kind of attack: the corruption-of-service attack. An adversary may want not to disable an AI system but to degrade its effectiveness. If your competitor uses AI to time trades, throwing the timing off makes the program less effective without rendering it wholly useless.
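One concrete way to mount such degradation is training-data poisoning. The sketch below is purely illustrative, not drawn from any real incident: a toy nearest-centroid classifier is trained on clean data, then retrained after an attacker injects a handful of mislabeled outliers. The poisoned model's decision boundary shifts, so accuracy drops while the system keeps running and producing answers.

```python
import random

random.seed(0)  # deterministic toy example

def make_data(n):
    # Two 1-D clusters: class 0 centered at 0.0, class 1 at 4.0
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        data.append((random.gauss(4.0 * label, 1.0), label))
    return data

def train_centroids(data):
    # "Model" = the mean feature value of each class
    sums, counts = [0.0, 0.0], [0, 0]
    for x, y in data:
        sums[y] += x
        counts[y] += 1
    return [sums[i] / counts[i] for i in (0, 1)]

def accuracy(centroids, data):
    correct = 0
    for x, y in data:
        pred = 0 if abs(x - centroids[0]) < abs(x - centroids[1]) else 1
        correct += (pred == y)
    return correct / len(data)

def poison(data, n_points):
    # Inject mislabeled outliers that drag the class-1 centroid off
    # target -- the model still works, just noticeably worse
    return data + [(-10.0, 1)] * n_points

train, test = make_data(500), make_data(200)
clean_acc = accuracy(train_centroids(train), test)
poisoned_acc = accuracy(train_centroids(poison(train, 50)), test)
print(f"clean accuracy:    {clean_acc:.2f}")
print(f"poisoned accuracy: {poisoned_acc:.2f}")
```

The point of the sketch is the failure mode, not the classifier: the poisoned model still classifies most inputs correctly, so the degradation can go unnoticed far longer than an outright outage would.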