Brianna White

Administrator
Staff member
Jul 30, 2019
Artificial intelligence (AI) can be used to enhance the efficiency and scale of SecOps teams, but it will not meet all your cybersecurity needs without some human involvement — at least, not today.
Most commercial AI successes have come from supervised machine learning (ML) techniques specifically tuned for prediction tasks that yield business value. These use cases for ML, such as spoken language understanding for your smart-home assistant and object recognition for self-driving cars, rely on the vast amounts of labeled data and computation required to train complex deep learning models. They also focus on solving problems that barely change. This is in contrast to cybersecurity, where we rarely have the millions of examples of malicious activity needed to train deep learning models, and we face intelligent adversaries who frequently change their tactics to outmaneuver our latest detection capabilities, including those using ML.
In addition, the digital exhaust from human behavior in enterprise environments is extremely hard to predict. Anomalies in these systems are common and very rarely represent malicious threat actor behavior. It is therefore unreasonable to expect unsupervised anomaly detection to learn an enterprise environment's normal behavior and then generate meaningful alerts about malicious activity without raising false alarms on unusual but benign events.
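The false-alarm problem above is largely a base-rate effect: when truly malicious events are rare, even an accurate anomaly detector produces mostly false alerts. A minimal sketch, using purely illustrative numbers (event volume, attack rate, and detector rates are all assumptions, not measurements):

```python
# Sketch of the base-rate problem behind anomaly-detection false alarms.
# All numbers below are illustrative assumptions, not real measurements.

def alert_precision(total_events: int, malicious_rate: float,
                    detection_rate: float, false_positive_rate: float) -> float:
    """Fraction of generated alerts that correspond to real malicious activity."""
    malicious = total_events * malicious_rate
    benign = total_events - malicious
    true_alerts = malicious * detection_rate          # attacks correctly flagged
    false_alerts = benign * false_positive_rate      # benign anomalies flagged
    return true_alerts / (true_alerts + false_alerts)

# Assume 1,000,000 daily events, 0.001% truly malicious, and a detector
# that catches 99% of attacks while flagging only 1% of benign events.
precision = alert_precision(1_000_000, 0.00001, 0.99, 0.01)
print(f"{precision:.4f}")  # roughly 0.0010: about 1 in 1,000 alerts is real
```

Even with a strong detector, analysts would wade through roughly a thousand benign alerts per true attack, which is why human triage remains part of the loop.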
Continue reading: https://www.darkreading.com/analytics/distinguishing-ai-hype-from-reality-in-secops

Attachments

  • p0008180.m07812.ai_vs_reality.jpg (66.9 KB)