Brianna White

Jul 30, 2019


I believe a part of making sense of the issue of artificial intelligence (AI) in the world of cyber comes down to definitions. AI is the idea that a machine can mimic human intelligence, while machine learning (ML) teaches a machine how to perform a task and identify patterns. A lot of cyber security vendors are jumping on the bandwagon, hyping up their products and slapping an AI sticker on them when they aren’t actually AI. It’s the same with any fad, going back to snake oil.

And for some, the end goal for AI is that it becomes fully automatic and needs no human intervention. But I’m a firm believer that the answers and processes AI generates shouldn’t be taken as gospel. We should always treat its outputs as a starting point for human decision-making, rather than as the end product. AI will always need a human perspective to make it ethical and its outputs relevant.

Meanwhile, the use cases for AI are currently quite narrow. Take GitHub Copilot, for instance: it turns natural language prompts into coding suggestions. And while it’s great, it’s great at going deep on one particular thing.

Deep, in this context, is like training for a particular specialism, such as neurosurgery, whereas broad is like a GP who is good at treating lots of different medical conditions. But you could argue that Copilot is ML and not true AI. Midjourney’s image-generation capabilities, for instance, go deep but not broad. You need an AI that is both deep and broad to do a particular thing well. We are getting a bit closer thanks to ChatGPT, but it still feels some way off.

And specifically for security, we haven’t yet got our heads round how we can use it effectively. This is where AI can provide a baseline of what a security team needs to consider – security controls and policy decisions, for example – with humans then taking it to the next level. The interesting part is how we actually turn that into practical solutions we can use down the line.

Continue reading: https://www.computerweekly.com/opin...security-Yes-but-only-if-it-works-with-humans