The good guys are clearly already investing heavily in AI-defence research, but what about the bad guys and weaponised AI?
According to research announced during the recent Black Hat conference in Vegas, some 62 per cent of infosec pros reckon weaponised AI will be in use by threat actors within 12 months. That artificial intelligence was on the agenda at Black Hat should come as no surprise. The promise of AI in cyber security, from machine learning through to automation, has become a major marketing tool amongst vendors.
Interesting read, but we aren’t talking about real artificial intelligence, are we? Machine learning is one thing and real artificial intelligence quite another. Plus, weaponizing is the wrong terminology, as cybercrime actors would be implementing the technology in the same way they implement everything else used to penetrate network security. Monetizing artificial intelligence would have been a better headline for this.
Hmmm. I’m not going to get into the semantics argument regarding the title; let’s just agree that, however you phrase it, the implications of the bad guys using this technology are not great. I will argue that AI is an appropriate term, though, as it’s generally accepted that ML is a subset or type of AI, even if it’s not exactly the stuff of science fiction.
Semantics aside, I don’t think it is helpful to equate machine learning with artificial intelligence.