MIT is Developing AI to Stop Cybercrimes

Finding proof that someone has breached your computer security is difficult. Combing through all the files and data for irregularities takes time and effort, and human experts can only work so many hours. Artificial Intelligence (AI), on the other hand, does not tire and can collaborate with people to yield far better results.

The Massachusetts Institute of Technology's (MIT) Computer Science and Artificial Intelligence Laboratory has developed a computer system called AI2 that can go through tens of millions of log lines a day and point out any irregularities. From there, a human takes over and checks for signs of a security or data breach. This process allows the team to identify up to 85% of attacks while sparing analysts the task of chasing false leads.

The balance between artificial and human intelligence is imperative: relying entirely on computers to learn to spot anomalies leads to events being mislabeled, while humans alone cannot keep up with the work volume needed to ensure maximum security. MIT presents AI2 as the best of both worlds, bringing together analyst intuition and an artificially intelligent system.

A large part of AI2's work is helping a company determine what has already happened so it can respond correctly, while the system also highlights typical signs of a breach. For example, a sharp increase in log-in attempts on a website could mean someone is trying to hack accounts, and an uptick in devices connecting through a single IP address might suggest credential theft.
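Signals like these can be approximated with simple counting rules. Below is a minimal sketch of flagging an IP address with an unusual number of log-in attempts; the data, threshold, and logic are purely illustrative and are not AI2's actual method:

```python
from collections import Counter

# Hypothetical log records: (source IP, event type)
log_lines = [
    ("10.0.0.5", "login_attempt"),
    ("10.0.0.5", "login_attempt"),
    ("10.0.0.9", "login_attempt"),
] + [("10.0.0.7", "login_attempt")] * 50  # one IP hammering the login page

# Count log-in attempts per source IP
attempts = Counter(ip for ip, event in log_lines if event == "login_attempt")

# Flag IPs whose attempt count far exceeds the rest (threshold is illustrative)
THRESHOLD = 20
suspicious = [ip for ip, n in attempts.items() if n > THRESHOLD]
print(suspicious)  # ['10.0.0.7']
```

A real system would of course look at many such features at once rather than a single hand-tuned threshold.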

MIT has thus developed artificial intelligence that identifies system irregularities faster and more precisely.

While other machine-learning systems sift through mountains of data searching for suspicious activity, AI2 utilises constant input from analysts to turn the mountain into a molehill. A computer lacks the expertise to complete the job on its own.

MIT research team leader Kalyan Veeramachaneni says that "you have to bring some contextual information to it." That is the role of human analysts: they recognise external elements that might explain a deviation. For example, when a company stress-tests its own systems, the resulting abnormalities are expected. An AI system on its own could not distinguish such a test from a real threat, but with analyst feedback AI2 learns to make that distinction within a matter of weeks.

Veeramachaneni says that from the first day the system is deployed, it is as good as any other. But the MIT team then has AI2 show an analyst 200 of the day's abnormal events. The analyst provides feedback by identifying the legitimate threats among them, and the system uses this information to fine-tune its monitoring. The more often this process is repeated, the better AI2 becomes at separating real threats from harmless outliers.
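The daily loop described above can be sketched in a few lines: an unsupervised scorer ranks the day's events, the top-ranked ones go to an analyst, and the resulting labels are what a supervised model would later learn from. Everything here (the scoring rule, the event data, the toy "analyst") is a hypothetical stand-in, not AI2's real pipeline:

```python
def unsupervised_score(event):
    # Stand-in anomaly score: distance of the event's count from a "normal" baseline of 10
    return abs(event["count"] - 10)

def daily_review(events, analyst_labels, k=200):
    # Rank the day's events by anomaly score and take the top k for human review
    ranked = sorted(events, key=unsupervised_score, reverse=True)[:k]
    # The analyst labels each reviewed event as a real threat or not;
    # these labels would then train a supervised model for future days
    return [(e, analyst_labels(e)) for e in ranked]

# Hypothetical day of events and a toy "analyst" who flags large spikes
events = [{"id": i, "count": c} for i, c in enumerate([9, 11, 500, 10, 300, 8])]
labels = daily_review(events, analyst_labels=lambda e: e["count"] > 100, k=3)
print([(e["id"], is_threat) for e, is_threat in labels])  # [(2, True), (4, True), (5, False)]
```

The point of the top-k cutoff is exactly the saving Veeramachaneni describes: the analyst only ever sees a tiny, pre-ranked slice of the day's traffic.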

Veeramachaneni adds, “Essentially, the biggest savings here is that we’re able to show the analysts only up to 200 or even 100 events per day, which is a very tiny percentage of what happens.”

AI2 is not theoretical. It honed its skills over three months of reviewing log data from an e-commerce site whose name was not released. The data set consisted of 40 million log lines a day, an estimated 3.6 billion in all. After the three-month period, AI2 was able to detect around 85% of the attacks. Veeramachaneni says the e-commerce site was receiving about 5-6 threats a day at the time, and AI2 detected 4-5 of them.

85% is far from perfect, but Veeramachaneni says that reaching this detection rate through unsupervised machine learning alone would mean having experts review thousands of cases a day, not hundreds. Conversely, pulling 200 machine-identified cases without an analyst's help would yield only a 7.9% success rate.

AI2 can also help prevent attacks by building predictive models of what might happen the next day. If cybercriminals repeat the same mode of attack over a span of a few days, a business can reinforce its security by requiring additional confirmation steps from its users.

AI2 shows great promise in the field of cyber-security, but it cannot take the place of human analysts: security cannot simply be left to artificial intelligence, and threats keep adapting. Veeramachaneni adds, "The attacks are evolving. We need analysts to keep flagging new types of events. This system doesn't get rid of analysts. It just augments them."

Science may one day provide a failsafe security system, but for now, a combination of precision and efficiency might be the best we can hope for. And in this case, that means man and machine collaborating.

