Artificial Intelligence – The Next Frontier in IT Security?
By April Koury and Rohit Talwar
How might AI impact future developments in cyber security?
Security has always been an arms race between attacker and defender. The attacker starts a war with a stick, you get a spear; he counters with a musket, you upgrade to a cannon; he develops a tank, you split the atom. While the consequences of organizational cybersecurity breaches may not be as earth-shatteringly dramatic, the centuries-old arms race continues into today's digital sphere.
The next challenge for companies with an eye towards the future should be to recognize that artificial intelligence (AI) is already entering the scene. For example, we are seeing the emergence of AI tools like PatternEx—focused on spotting cyberattacks—and Feedzai for fraud detection across the ecommerce value chain. The technology is developing so rapidly that it is too early to say whether the impact will be revolutionary, or just the next evolution in the continued digital age cybersecurity arms race.
Artificial Intelligence
Some AI evangelists argue this new technological force could render all others seemingly irrelevant given the scale of change, risk, and opportunity it could bring about in IT security. This dark art of seemingly magical technological wizardry does indeed have the potential to change our world, and—depending on who you choose to believe—either make life a little better, lead to total societal transformation, or end humanity itself.
As a result of a new generation of disruptive technologies coupled with AI, we are entering the Fourth Industrial Revolution. The three previous revolutions gave us steam-based mechanization, electrification, and mass production, followed by electronics, information technology, and automation. This new fourth era, with its smart machines, is fueled by exponential improvement and the convergence of multiple scientific and technological fields into an all-encompassing Internet of Things (IoT). The medium- to long-term outcomes of these converging exponential technologies for individuals, society, business, government, and IT security are far from clear.
The pace of AI development is accelerating and astounding even those in the sector. In March 2016, Google DeepMind's AlphaGo system beat world Go champion Lee Sedol—demonstrating the speed of development taking place in machine learning, a core AI technology. The board game Go has a staggering number of possible positions; you cannot simply program in every permutation. Instead, AlphaGo was equipped with machine learning algorithms that enabled it to infer winning strategies from observing thousands of human games. Its successor, AlphaGo Zero, taught itself to play Go in three days without observing any human games and then beat AlphaGo by 100 games to nil. This same technology can now be used in IT security, in applications ranging from external threat detection and prevention to spotting the precursors of potentially illegal behavior amongst employees.
The Current State of Affairs in IT Security
In 2015 in the US, the Identity Theft Resource Center noted that almost 180 million personal records were exposed in data breaches, and a PwC survey report highlighted that 79% of responding US organizations had experienced at least one security incident. Industry research indicates that while hackers exploit vulnerabilities within minutes of their becoming known, companies take roughly 146 days to fix critical vulnerabilities. With the average cost of a data breach estimated at US$4 million, there is growing concern over how companies can keep up with the constant onslaught of ever stealthier, faster, and more malicious attacks today and in the future.
As it stands, many firms focus more on reacting to security breaches rather than preventing them, and the current approach to network security is often aimed more at “standards compliance” rather than detecting new and evolving threats. The result is an unwinnable game of whack-a-mole that could overwhelm companies in the future unless they are willing to adopt and adapt the mindset, technology, and techniques used by the hackers. And there is very little doubt that hackers are—or soon will be—developing AI tools to increase the frequency, scale, breadth, and sophistication of their attacks.
Organizations in this digital age create enormous amounts of data, both internally through their own processes and externally via customers, suppliers, and partners. No one human can analyze all that data to monitor for potential security breaches—our systems have simply become too widespread, data-laden, and unwieldy. However, when combined with big data management tools, AI is becoming ever more effective at crunching vast amounts of data and picking out patterns and anomalies. In fact, with most AI systems, the more information they are fed, the smarter they become.
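To make the idea of "picking out anomalies" concrete, here is a minimal, hypothetical sketch (not a description of any specific product) of the simplest statistical approach: flag any observation that lies far outside the baseline learned from historical data. The function name and the z-score threshold are illustrative assumptions.

```python
from statistics import mean, stdev

def is_anomalous(history, value, threshold=3.0):
    """Flag `value` if it lies more than `threshold` standard
    deviations from the mean of historical observations."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold

# Hypothetical daily login counts for one account.
baseline = [42, 38, 45, 40, 41, 39, 44, 43, 40, 42]
print(is_anomalous(baseline, 41))   # False: a typical day
print(is_anomalous(baseline, 500))  # True: a suspicious spike
```

Real security tooling uses far richer models (and far more signals), but the underlying principle is the same: the more historical data the system sees, the better calibrated its notion of "normal" becomes.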
AI’s Future Potential
One of the biggest potential security benefits of AI lies in detecting internal threats. Imagine an AI system that, day in and day out, watches the comings and goings of all employees within a corporate headquarters via biometrics and login information. It knows, for example, that the CFO normally logs out of the cloud each day by 12 noon and heads to the company gym, where she spends an average of 45 minutes. One day it spots an anomaly: the CFO has logged into the cloud at 12:20 pm. The system is intelligent enough to compare her location with this unexpected login. According to its data, the CFO's face was last scanned on entering the gym and has not been seen leaving, yet the cloud login originated from her office.
The AI recognizes the anomaly, correlates the discrepancy between the login location and the CFO's physical location, shuts down cloud access to the CFO's account, and begins defensive measures against a potential cyberattack. The system also alerts the CFO and escalates this high-priority problem to the human cybersecurity team within seconds. Important company data and financial records are safe thanks to AI security. Imagine how its capabilities will grow as this same AI system continues to learn from and predict the behavior of hundreds or hundreds of thousands of employees across the organization—helping it monitor for and predict similar security breaches.
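The core of the scenario above is a correlation rule: a cloud login from a location that contradicts the user's last badged-in location raises an alert. As a hedged sketch under simple assumptions (the `Event` type, `"badge"`/`"login"` kinds, and the matching rule are all hypothetical), it might look like this:

```python
from dataclasses import dataclass

@dataclass
class Event:
    user: str
    kind: str       # "badge" (biometric scan) or "login" (cloud access)
    location: str

def correlate(events):
    """Return users whose cloud login originated somewhere other
    than their most recently badged-in location."""
    last_badge = {}
    alerts = []
    for e in events:
        if e.kind == "badge":
            last_badge[e.user] = e.location
        elif e.kind == "login" and last_badge.get(e.user) != e.location:
            alerts.append(e.user)
    return alerts

events = [
    Event("cfo", "badge", "gym"),
    Event("cfo", "login", "office"),  # mismatch -> alert
]
print(correlate(events))  # ['cfo']
```

A production system would, of course, reason probabilistically over many noisy signals rather than applying a single hard rule, but the login-versus-location cross-check is the essential idea.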
Beyond employee behavior, our AI security application is also watching the company's internal systems and learning how they interact. It discovers that when customer information is added to the company's database, the accounting software automatically picks up the information and generates an invoice within an average of 7.5 seconds. Any deviation from normal behavior of more than 0.25 seconds triggers the AI to investigate every link within the process and tease out the cause. In this case, based on what it discovers (an inconsequential lag in the system), the AI properly prioritizes the incident as a nonthreatening low risk, but it will continue to monitor for similar lags and alert system maintenance to the issue just in case.

Now let's take this scenario a step further: imagine that not only has this AI system learned the behavior of hundreds of employees and of the internal company networks, but it is also capable of continually learning from external cyberattacks. The more cyberattacks thrown at the AI, the more data it can parse, and like a thinking, rational soldier who has manned the battlements through numerous campaigns, the better educated and prepared it becomes for future attacks. It will recognize entirely new hostile code based on experience and previous exposure to related patterns of attack behavior. It will build defenses as it works to unravel the new hostile code, and as the offensive AI code adapts to those new defenses, the AI security system will continually develop new methods to counter and destroy the invader.
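The invoice example above (a 7.5-second learned baseline with a 0.25-second tolerance) can be sketched as a simple latency triage rule. The constants, function name, and the cutoff separating a mild lag from an incident worth escalating are illustrative assumptions, not figures from any real system:

```python
BASELINE_SECONDS = 7.5    # learned average invoice-generation time
TOLERANCE_SECONDS = 0.25  # deviation that triggers an investigation

def classify_latency(observed):
    """Hypothetical triage of one pipeline latency reading."""
    deviation = abs(observed - BASELINE_SECONDS)
    if deviation <= TOLERANCE_SECONDS:
        return "normal"
    # Outside tolerance: investigate. Here we only distinguish a
    # mild lag from a gross outlier worth escalating to humans.
    return "low-risk lag" if deviation <= 2.0 else "escalate"

print(classify_latency(7.6))   # normal
print(classify_latency(8.1))   # low-risk lag
print(classify_latency(30.0))  # escalate
```

The interesting part in the article's vision is not the threshold check itself but that the baseline and tolerance are learned and continuously updated by the system rather than configured by hand.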
This is the potential AI security system of the near future—fully integrated inside and out, noninvasive to daily business, and always on alert and ready to defend. It will be the ultimate digital sentry, hopefully learning and adapting as quickly as the attackers.
Organizations’ Approach to AI Security
Just as the stick fight eventually escalated to nuclear weapons, so too will the AI battle between organizations and hackers keep evolving. Continual one-upmanship will become the norm in AI security, perhaps to the point where even developers will be unable to decipher the exact workings of their constantly learning and evolving security algorithms. As complex and expensive as this all sounds, will companies in the future, especially smaller organizations, be able to survive without AI?
As the stakes become higher and failures loom larger, ever evolving AI threats may encourage far more collaboration across multiple companies. Smaller organizations could band together under one AI security system, dispersing the cost and maintenance across multiple payers, while larger players with the financial and technological muscle to own their own AI security may exchange critical information on cyberattacks—or rather, their AIs could exchange information on cyberattacks and learn from each other.
Alternatively, companies could become so overwhelmed that they simply opt for simple, technologically cheaper "brute force" non-AI solutions to counter increasingly complex AI hacks. The simple, or dumb, solution may entail more checks and passwords across accounts and devices, or perhaps security-enhanced devices that are changed every two weeks. While adding five layers of complex passwords to every login or continuously rotating through smartphones could protect company security, the increased overhead, employee frustration, and time wasted on cumbersome security measures would damage the firm's reputation and could even make it more susceptible to attack.
While an AI system will quietly monitor security and enable employees to focus on their work, the simple non-AI solution will place an unnecessary security burden on the employees—they will be responsible for keeping up with those five complex passwords and changing devices on a biweekly basis. Whereas the AI system is maintained by a few cybersecurity experts, the simple security solution is in the hands of every employee, vastly multiplying the chances of a security breach. In the future, this simple non-AI solution might become a defensive strategy of survival rather than an adaptive offensive campaign of a leading, thriving business.
The Role of Humans in AI Security
Of course, at this point a natural question is, “If AI is quicker, smarter, and continually adapting to do its job better, why even bother with human cybersecurity?” Today, AI security must still learn from humans, and although it may one day reach the point where it no longer requires expert involvement, that day remains at least a few years down the road. Furthermore, depending on how valuable we deem human oversight and intuition in security, that day may never come to pass. AI security systems currently need humans to write their starter algorithms, and provide the necessary data, training, and feedback to guide their learning. Humans are currently an essential part of the deployment of AI, and as AI security evolves beyond this nascent stage, the role for humans in AI will evolve as well.
As organizations increasingly digitize processes, amassing mind-boggling amounts of sensitive data, new importance will be placed on the role of the human architects and minders of AI security systems. Never has so much data been so easily accessible to attack, and even small attacks gathering seemingly innocuous data could add up to catastrophic security breaches. Developers of AI security will become akin to nuclear weapons inspectors in importance—highly trustworthy individuals who have undergone extensive background checks and intensive training, vetting, and accreditation. They will not only build AI security, but also provide oversight and intuitive guidance in the training process and be an integral line of cybersecurity defense.
AI security will go far beyond human capabilities, freeing organizations and cybersecurity experts from the impossible task of constant vigilance, allowing them to prevent future attacks without interrupting daily workflow. Tomorrow’s AI security system will learn, self-improve, and run discreetly behind the scenes—intelligently monitoring, prioritizing, and destroying threats; ever evolving into the next finely-honed weapon in the cybersecurity armory.
- Where does cybersecurity in the AI age sit on your organization’s priority list?
- Will SMEs be able to survive without AI security in the future?
- Could we see corporations increasingly resorting to cyberattacks as a means of gaining competitive advantage?
This article is excerpted from Beyond Genuine Stupidity – Ensuring AI Serves Humanity.
A version of this article was originally published in Network Security Journal.
Image: https://pixabay.com/images/id-4580815/ by geralt