When it comes to data security, there are no magic bullets. There is, however, a potent cyberdefense approach that represents a great leap forward in the struggle to protect data. It is actually a combination of two closely related technologies – one fairly new, the other decades old.
They are artificial intelligence (AI) and machine learning (ML). Think of AI as programming computers to do things that normally require human intelligence, such as speech recognition, decision making, and language translation. It has been around in one form or another since the dawn of the computer age.
ML, the new kid on the block, is closely related to AI but uses highly complex algorithms to learn to make decisions on its own – without being explicitly programmed for each task. ML models actually change their own behavior as they are exposed to new data. For example, in 2016 a machine-learning program called AlphaGo from Google DeepMind beat Lee Sedol, one of the world's best players of Go, a highly complex and ancient Chinese board game. AlphaGo was not programmed with Go strategy; it learned strong play by studying millions of expert moves and then playing against itself.
Machine learning already hard at work
ML solutions are already shouldering part of the data security burden, sniffing out money-laundering schemes, heading off full-scale security attacks, and protecting customer credit cards. For example, honest consumers are often dogged by temporary “stops” placed on their credit cards when suspicious activity is noted, such as purchases in a foreign country. Machine-learning algorithms learn to “think” differently. If the card activity also includes buying airline tickets, taking an Uber to the airport, and buying a meal at the airport, the ML program recognizes this “suspicious” activity as a normal travel pattern, and no annoying credit card stop is needed.
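To make the idea concrete, here is a minimal sketch of pattern-based transaction screening using scikit-learn's IsolationForest. Every feature, amount, and merchant category is invented for illustration and reflects no card issuer's real model; the point is simply that once travel-like behavior exists in the training history, a trip abroad no longer looks anomalous.

```python
# A minimal sketch of ML-based transaction screening, assuming
# scikit-learn is available. All features and values are made up.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic purchase history: [amount_usd, km_from_home, is_travel_merchant].
# Mostly local spending, plus a block of travel-pattern transactions
# (tickets, rides, airport meals) so the model learns that travel is a
# normal mode of behavior rather than an anomaly.
local = np.column_stack([
    rng.normal(40, 15, 500),     # everyday purchase amounts
    rng.normal(5, 3, 500),       # close to home
    np.zeros(500),               # not travel merchants
])
travel = np.column_stack([
    rng.normal(300, 100, 100),   # tickets and hotels
    rng.normal(4000, 1500, 100), # far from home
    np.ones(100),                # travel merchant codes
])
history = np.vstack([local, travel])

model = IsolationForest(contamination=0.01, random_state=0).fit(history)

# New activity abroad: a ticket and an airport meal fit the learned
# travel pattern, while an outsized charge fits no pattern at all.
new_activity = np.array([
    [450.0, 5200.0, 1.0],   # airline ticket bought abroad
    [25.0, 5200.0, 1.0],    # meal at the airport
    [9000.0, 5200.0, 0.0],  # charge that matches no learned pattern
])
for row, flag in zip(new_activity, model.predict(new_activity)):
    print(row, "OK" if flag == 1 else "FLAG FOR REVIEW")
```

A rule-based system would likely stop the card at the first foreign purchase; here the model's notion of “normal” is learned from the history itself.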
When it comes to data security specifically, AI and ML shine in unique ways. They can create customer personas that are self-adjusting (they “think” on their own) as underlying business rules change. They can aggregate data from sources as diverse as terrorist watch lists and the nearly century-old Interpol. And they can integrate and interoperate with other global financial firms to mine an even deeper pool of fraud-prevention data – all with little or no human intervention.
According to Patrick Tiquet, director of security and architecture at Keeper Security, building advanced techniques like AI and ML into the fabric of a cybersecurity strategy is not simply a nice-to-have feature – it is essential.
“New vulnerabilities and threats are being developed and discovered on a 24/7 real-time basis,” Tiquet says. “Traditional cybersecurity threat detection has relied mostly on static rules or signatures of known threats, which leaves an organization blind and virtually unprotected from newly developed, unknown, or zero-day vulnerabilities. The ability to utilize a learning artificial intelligence system to detect and identify unknown threats or zero-day exploits is a game changer. Organizations will be able to detect previously unknown threats based on behavior rather than matching to known static rules or signatures.”
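Tiquet's contrast between static signatures and behavior-based detection can be illustrated in miniature. In the hypothetical sketch below, the signature list, traffic history, and three-sigma threshold are all made-up stand-ins: a signature check misses a payload it has never seen, while even a crude behavioral baseline flags the unusual traffic it produces.

```python
# A hypothetical contrast between the two approaches Tiquet describes.
# The signature list, traffic volumes, and threshold are invented.
import statistics

# Static approach: flag only payloads containing a known-bad pattern.
KNOWN_SIGNATURES = {b"\x90\x90\xcc\xcc", b"evil.exe"}

def signature_match(payload: bytes) -> bool:
    return any(sig in payload for sig in KNOWN_SIGNATURES)

# Behavioral approach: compare a host's outbound traffic today against
# its own recent history.
history_mb = [12, 15, 11, 14, 13, 12, 16]   # past daily volumes in MB
mean = statistics.mean(history_mb)
stdev = statistics.stdev(history_mb)

def behavior_flag(today_mb: float, k: float = 3.0) -> bool:
    return abs(today_mb - mean) > k * stdev

payload = b"...a never-seen-before exploit..."
print(signature_match(payload))  # False: the unknown threat slips past
print(behavior_flag(480))        # True: 480 MB of exfiltration stands out
```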
At Keeper, a leading password management company, authentication solutions will increasingly rely on AI and ML to verify a user's identity. For example, AI-based authentication will weigh a number of different factors in deciding whether to authenticate a user. These factors could include biometric inputs, location, behavior, and even the proximity of known devices. As Tiquet says, “Think of it as AI-based multifactor authentication.”
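As a rough illustration of that idea – and emphatically not Keeper's actual implementation – the sketch below blends several such signals into a single score. The factor names, weights, and thresholds are assumptions; in a real system a trained model would replace the hand-set weights.

```python
# A toy sketch of "AI-based multifactor authentication": several noisy
# signals combined into one decision. All names, weights, and
# thresholds here are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class AuthSignals:
    biometric_match: float       # 0..1 confidence from a face/fingerprint check
    location_familiarity: float  # 0..1, how often the user signs in from here
    behavior_score: float        # 0..1 from typing and usage patterns
    known_device_nearby: bool    # e.g., the user's paired phone is in range

def risk_score(s: AuthSignals) -> float:
    """Weighted blend of factors; higher means more likely the real user."""
    return (0.4 * s.biometric_match
            + 0.25 * s.location_familiarity
            + 0.25 * s.behavior_score
            + (0.1 if s.known_device_nearby else 0.0))

def decide(s: AuthSignals, allow_at: float = 0.8, challenge_at: float = 0.5) -> str:
    score = risk_score(s)
    if score >= allow_at:
        return "allow"
    if score >= challenge_at:
        return "step-up challenge"  # e.g., ask for a one-time code
    return "deny"

# A familiar location, normal behavior, and a nearby known device can
# compensate for a mediocre biometric read; nothing compensates when
# every factor looks wrong.
print(decide(AuthSignals(0.7, 0.9, 0.8, True)))   # -> allow
print(decide(AuthSignals(0.2, 0.1, 0.3, False)))  # -> deny
```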
An end to false positives?
One persistent argument against AI- and ML-based security systems is that they report too many false positives, and the resulting flood of alerts leads to “alert fatigue” – think “the boy who cried wolf.” But the reality is that the sheer volume and complexity of security-related data generated today have already blown past the capacity of human analysts to review it all.
MIT's Computer Science and Artificial Intelligence Lab is attacking the false-positive problem head-on with a system called AI2. The system reviews millions of log entries every day, reporting back anomalies and suspicious events. A human analyst studies its reports and tells the system which threats were genuine. Over time, this ML/AI system learns from its past mistakes and gets better at finding real threats while reporting fewer false ones. In one test, false positives were eventually reduced by a factor of five as the system crunched through some 40 million log lines per day generated by an ecommerce website.
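The loop below is loosely modeled on that published description, with stand-in components: synthetic log features, an IsolationForest as the unsupervised detector, a RandomForestClassifier standing in for the supervised stage, and ground-truth labels playing the role of the human analyst. It is a sketch of the feedback mechanism, not the actual AI2 pipeline.

```python
# A loose sketch of an AI2-style feedback loop. All data is synthetic
# and every component is a stand-in for the real system's parts.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(7)

def day_of_events(n=5000):
    """Synthetic log features; a handful of rows are genuine attacks."""
    X = rng.normal(0, 1, size=(n, 4))
    y = np.zeros(n, dtype=int)
    attackers = rng.choice(n, size=10, replace=False)
    X[attackers] += rng.normal(4, 1, size=(10, 4))  # attacks sit far out
    y[attackers] = 1
    return X, y

labeled_X, labeled_y, clf = [], [], None
for day in range(5):
    X, y_true = day_of_events()
    # Unsupervised pass: surface the day's 100 most anomalous events.
    scores = IsolationForest(random_state=0).fit(X).score_samples(X)
    candidates = np.argsort(scores)[:100]
    # Supervised pass (once labels exist): keep only events the
    # analyst-trained model also considers suspicious.
    if clf is not None:
        alerts = candidates[clf.predict(X[candidates]) == 1]
    else:
        alerts = candidates
    # The "analyst" labels the alerts; in AI2 this is the human in the loop.
    labeled_X.append(X[alerts])
    labeled_y.append(y_true[alerts])
    clf = RandomForestClassifier(random_state=0).fit(
        np.vstack(labeled_X), np.concatenate(labeled_y))
    false_pos = int((y_true[alerts] == 0).sum())
    print(f"day {day}: {len(alerts)} alerts, {false_pos} false positives")
```

Run as-is, the first day's alerts are mostly false positives; on later days the supervised filter trained on the analyst's labels should suppress most of them, mirroring the improvement the MIT team reported.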
Security systems incorporating AI and ML techniques have arrived at a key time: cybersecurity threats are growing in number and sophistication, and the stakes in protecting sensitive data have never been higher. It will be a race to see whether these new techniques can keep up with – or even stay ahead of – the threat environment.