As in a science fiction movie, artificial intelligence (AI) could soon be weaponised to attack innocent victims.
While AI technology’s applications in online security sit on the “good side” – searching for and battling malware – hackers have been playing catch-up, looking to turn AI to their own advantage.
Cybercriminals are manipulating every conceivable technique to steal valuable data, penetrate vulnerable networks, interrupt essential utilities, and target every other source they can benefit from.
Considering the volume and frequency of cyber-attacks, and the efforts and resources needed to repair affected systems, security experts are often struggling to keep up with hackers.
Artificial intelligence is a prospective solution to this problem: by learning to recognise suspicious behaviour, it could stop attacks in their tracks and take some of the burden off its human counterparts.
On the other hand, the same methods could be adopted by would-be hackers and used to their advantage.
As experts are discussing the imminent “AI Era”, IBM Research has produced a tool that can mimic AI-centric malware’s characteristics.
The “highly targeted and evasive” AI-powered malware, called DeepLocker, is concealed within video conferencing software and stays inactive until it reaches its target.
The specific target could be identified through facial or voice recognition, geolocation, and data linked to social media or other online trackers.
Security analysts liken AI-centric malware to a sniper attack, in contrast to the “spray and pray” approach of traditional malicious attacks. It is designed to stay stealthy and undetectable until it has acquired its target.
DeepLocker also activates only under stringent conditions; if the target is not found or those conditions are not met, the malware remains hidden. This makes it almost impossible to reverse engineer, according to the team behind it.
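The idea behind this resistance to reverse engineering can be illustrated with a short, harmless sketch. The example below is purely hypothetical (the attribute names and payload are invented, and a toy XOR cipher stands in for real encryption): the “payload” is locked with a key derived from the target’s attributes, so an analyst who never observes the real target has nothing meaningful to inspect.

```python
import hashlib

def derive_key(attribute: str) -> bytes:
    """Derive a decryption key from a target attribute
    (e.g., a face-recognition match ID)."""
    return hashlib.sha256(attribute.encode()).digest()

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """Toy XOR cipher, for illustration only."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# The attacker locks the payload using the intended target's attribute.
secret_attribute = "target-face-id-1234"   # hypothetical trigger value
payload = b"benign demo payload"
locked = xor_crypt(payload, derive_key(secret_attribute))

# On an infected machine, observed attributes are probed as candidate keys;
# only the true target's attribute yields a usable payload.
for observed in ["some-other-user", "target-face-id-1234"]:
    unlocked = xor_crypt(locked, derive_key(observed))
    status = "payload recovered" if unlocked == payload else "stays locked"
    print(observed, "->", status)
```

Because the key never appears in the malware itself, inspecting the code in a sandbox reveals only opaque data, which is the essence of the “almost impossible to reverse engineer” claim.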
To show how potent DeepLocker could be, the IBM research team conducted an experiment in which they concealed the WannaCry ransomware strain within video conferencing software. It went undetected by the antivirus programs and sandboxing methods used in the experiment.
The experimental AI was then trained to use facial recognition for its attack, triggering itself once the target was spotted and unleashing the ransomware thereafter.
The troubling aspect of the potential of AI-centric malware is that it could infect millions of devices or systems without being detected, only showing itself when the specified parameters are met.
The good news is that this type of attack hasn’t been seen in the wild yet; however, this doesn’t mean it can’t happen in the future. The DeepLocker experiment serves as a warning to consumers and enterprises about a potential new type of malware attack.
To combat these types of malware and other similar online risks, Trend Micro Maximum Security’s advanced artificial intelligence learning can stop ever-evolving threats in their tracks – ensuring you and your family are safe online.