In business, artificial intelligence is playing an ever-larger role. According to a July 2019 study by the Capgemini Research Institute, two out of three companies planned to deploy AI systems to strengthen their defenses from 2020 onward. And that is where the problem lies: such technology could well play into hackers' hands. As AI spreads into every sphere, hackers have begun to divert it to their own ends. In the coming years, are we heading toward a confrontation between defensive AI and offensive AI?
Will AI automate malware attacks?
While attacks are becoming more sophisticated, they remain within the reach of almost any attacker. On the dark web, for a few hundred dollars, it is already possible to download ready-made hacking "tool kits". These kits are well suited to launching automated attacks against user logins: in a few minutes, hundreds of different passwords can be tested. Tomorrow, with the democratization of AI, it is not hard to imagine a hacker using, by the same process, artificial intelligence tool kits bundled with pre-built malware.
A concrete example would be malware that propagates autonomously across a network until it finds its specific target. DeepLocker, for instance, was a proof of concept built by IBM Research to be completely autonomous in its decision-making and to activate only on a specific target, identified via facial and voice recognition.
Before artificial intelligence, one of malware's weak points was the outbound communication the attacker needed in order to act on what the malware had discovered on its way to the target. Malware driven by autonomous AI is harder to detect, since it no longer necessarily needs to communicate with the outside: it hides its actions in the mass of data and strikes only once its mission conditions are met.
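Detection of that "phoning home" traffic traditionally exploits the regularity of command-and-control callbacks. Below is a minimal sketch of such a beaconing heuristic, the kind of signal that fully autonomous malware sidesteps by staying silent. The function name, timestamps, and tolerance threshold are all hypothetical, not taken from any real product.

```python
from statistics import pstdev

def looks_like_beaconing(timestamps: list[float], tolerance: float = 1.0) -> bool:
    """Flag outbound connections occurring at suspiciously regular
    intervals, a classic signature of malware calling back to its operator."""
    if len(timestamps) < 4:
        return False  # too few events to judge
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    # Near-constant intervals (low jitter) suggest an automated callback loop.
    return pstdev(intervals) <= tolerance

# Hypothetical outbound-connection times, in seconds, for two processes:
c2_traffic = [0.0, 60.2, 120.1, 180.3, 240.0]   # every ~60 s: suspicious
user_traffic = [0.0, 5.1, 47.9, 301.4, 310.2]   # irregular: looks human
print(looks_like_beaconing(c2_traffic))    # True
print(looks_like_beaconing(user_traffic))  # False
```

A heuristic this simple is easily defeated by randomized callback timing, and it detects nothing at all if the malware never calls out, which is exactly the advantage the article describes.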
Beyond malware automation, the current challenge for hackers is mainly to hijack the applications of defensive AI. In 2016, within hours of its launch, Microsoft's chatbot Tay was manipulated into making racist comments. In a similar vein, a Vietnamese security firm needed only a short time to fool Apple's brand-new Face ID system. Finally, a group of Chinese researchers showed how easily an autonomous car, despite its ultra-sophisticated artificial intelligence, could be hijacked.
The phenomenon forces companies that use AI into a dilemma: to protect themselves, they must hand the keys over to a defensive AI, at the risk of being infiltrated through it. Artificial intelligence remains a black box, and ceding control to it entirely also makes it harder to take back the reins of cybersecurity when something goes wrong.
The massive use of artificial intelligence, beneficial as it is, also multiplies the opportunities for misuse. In the years to come, it is not impossible to imagine a confrontation between defensive AI on the corporate side and offensive AI on the attackers' side. The question is legitimate: should we keep innovating in AI at the risk of seeing it diverted and exploited? Should we always make life easier for the user? And at what cost?