While walking around the larger industry shows, those hosting, say, more than 140 vendors, it doesn't take long to realise that artificial intelligence and machine-learning are the current ‘it' girls of the cyber-security industry.
In an effort to define what ‘artificial intelligence' actually is, Luger and Stubblefield, in their 2004 book on the subject, described an ideal "intelligent" machine as a flexible rational agent that perceives its environment and takes actions that maximise its chance of success at some goal, based on a complex set of calculations.
As notifications from UBA, SIEM and threat intelligence systems continue to grow, artificially intelligent systems are being touted as the solution to the fatigue experienced by SOC teams, who must work out what to do with each threat and whether or not to investigate it further.
Research from Hexadite, a security automation company, claimed that 37 percent of cyber-security professionals face 10,000 alerts per month, with 52 percent of alerts turning out to be false positives.
"artificially intelligent systems are being touted as the solution to the fatigue experienced by SOC teams who have to try and figure out what to do with each threat"
We asked David Thompson, LightCyber's senior director of product management, what kind of tasks machine-learning is best suited for. He responded: “Highly repetitive and intricate tasks may be well suited for a machine rather than a human. On the other hand, making a firm conclusion or judgement and then acting upon it is better suited for a human. Machine-learning can do a lot of the busy work and can more readily keep a comprehensive perspective that is both broad and historical. It's hard for humans to keep too many things at the forefront of their minds, and even harder to institutionalise this perspective among multiple people.”
Commenting on the idea that AI is perhaps being used as a misnomer for an intelligent system, Thompson said: “Unfortunately, there is a tremendous amount of hype and confusion around the terms artificial intelligence (or AI), machine-learning and data science when it comes to security. In some circles, AI is the broader and more theoretical field, and machine-learning is the more specific application of these technology principles. A common refrain in the industry is that AI is defined as whatever outcome machine-learning is not (yet) capable of achieving, which is an ever-receding target.”
Perhaps owing to the hype, artificial intelligence and machine-learning are being taken very seriously, according to research by security company Ipswitch. The company's recent research has shown that investment in intelligent business systems and automation is well underway across the globe.
Top current application deployment areas cited by respondents include digital customer engagement systems (55 percent), process automation and workflow systems (52 percent), and automated risk monitoring and management solutions (50 percent).
Gareth Lauder, director of Cyberseer, which describes itself as “specialists in advanced threat detection and cyber-incident resolution”, said: “The interest in machine-learning is coming from customers, as CISOs currently feel the tools they have are somewhat inadequate and not capturing everything.”
In the search for the ‘known unknown', Lauder said: “the increase in logs and alerts being created has meant that customers both large and small are looking to try and catch the full range of abnormal behaviour, from the malicious insider to infiltration from a third-party supplier. Companies simply don't have the capacity or manpower to catch all of these manually from among all the data.”
Are intelligent systems as intelligent as we think they are? And are we prepared for their arrival?
In the same research from Ipswitch, the company showed that intelligent systems are coming fast but businesses are ill-equipped to protect themselves and unprepared for the effect of those systems on business.
The research, conducted by Freeform Dynamics, showed that security, funding and lack of knowledge are all key concerns from those surveyed.
In security terms, 68 percent said their current network security and access management capabilities are inadequate or need strengthening to cope. Some 30 percent said funding constraints are a big worry and 24 percent feel they don't have the knowledge and understanding of intelligent systems to adopt them safely.
We spoke with Webroot's solution architect, Matthew Aldridge, and asked whether machine-learning is able to tackle the challenge. He said: “Machines are much quicker and more efficient at tracking abnormal behaviour. They can track hundreds of variables and not get tired or make any mistakes. However, we're always going to need a SOC analyst who can apply the threshold and ensure nothing slips through the net.”
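Aldridge's point about machines tirelessly tracking variables can be illustrated with a minimal anomaly-scoring sketch. This is not any vendor's actual method; the metric, data and threshold below are hypothetical, with the threshold standing in for the judgement call he says still belongs to a SOC analyst:

```python
from statistics import mean, stdev

def anomaly_scores(baseline, observed):
    """Score each observation by how many standard deviations
    it sits from the baseline mean (a simple z-score)."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [(x - mu) / sigma for x in observed]

def flag_abnormal(baseline, observed, threshold=3.0):
    """Flag observations whose score exceeds the threshold.
    The threshold plays the role of the analyst's cut-off."""
    return [x for x, s in zip(observed, anomaly_scores(baseline, observed))
            if abs(s) > threshold]

# Hypothetical example: typical login counts per hour, then a spike.
baseline = [12, 15, 11, 14, 13, 12, 16, 14]
print(flag_abnormal(baseline, [13, 14, 90]))  # → [90]
```

A real system would track hundreds of such variables at once; the mechanics of each one are no more complicated than this.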
Kirsten Bay, CEO of Cyber adAPT, explained that it takes a lot of monitoring to detect breaches. With the expansion of BYOD and, by extension, the perimeter, Bay said: “Threat intelligence logs, rule updates and algorithms constantly need updating to successfully monitor movements on the network. Can we get machines to do the thinking pro-actively to assist in these activities? Absolutely, but they need to be armed with plenty of contextual data, correlations and maths to be able to accurately protect a system from the right threats.”
“Machines are much quicker and more efficient at tracking abnormal behaviour."
And Phil Codd, managing director of SQS UK&I, agrees: “Its ability to help with day-to-day tasks, reduce human error and improve speed to market has seen machine-learning's increased presence in sectors from gaming through to manufacturing. Precautionary measures and continued quality assurance of the software behind machine-learning are still imperative.”
Bay goes on to explain that in the case of detecting a ransomware attack on a network, “Typically you'd want to monitor for the malware sending commands across a network, and particularly watch out for those that fail when, for example, the C&C server is down. Machine-learning would be able to correlate this kind of behaviour with other similar attacks and realise that this is not normal behaviour.”
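The pattern Bay describes, a burst of failed outbound connections from one host to one destination, can be sketched as a simple correlation over connection logs. The log format, host names and threshold below are illustrative assumptions, not a real product's schema:

```python
from collections import Counter

def suspected_beacons(conn_log, min_failures=5):
    """Given (source_host, destination, succeeded) records, return
    the (source, destination) pairs with repeated *failed* outbound
    connections -- the signature of malware beaconing to a C&C
    server that is down."""
    failures = Counter((src, dst) for src, dst, ok in conn_log if not ok)
    return [pair for pair, n in failures.items() if n >= min_failures]

# Hypothetical log: one host repeatedly failing to reach one address.
log = [("pc-7", "203.0.113.9", False)] * 6 + [
    ("pc-7", "intranet", True),
    ("pc-3", "203.0.113.9", False),
]
print(suspected_beacons(log))  # → [('pc-7', '203.0.113.9')]
```

The correlation step Bay mentions would then compare such flagged pairs against behaviour seen in other known attacks.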
However, the research from Ipswitch showed that the systems aren't without their risks. Almost half (48 percent) of those surveyed said they believed that commercial damage could result from the operational failure and breakdown of intelligent systems in the future, and 44 percent believed that commercial damage due to poor actions, ‘decisions' and recommendations is a future risk. Furthermore, despite the speed of adoption, the study reveals that IT decision-makers are finding it difficult to assess the full extent of the risks, challenges and threats posed by intelligent business systems.
Dealing the ultimate blow, Peter Gyongyosi, product manager at Balabit, agreed: “Ultimately, people don't trust AI. Humans need a certain level of accountability, someone to blame when the intelligence misses something it shouldn't have and suddenly the network becomes infected.”
So what are the steps we need to take to make AI work for us?
Simon Crosby, chief technology officer of Bromium, argues that “there's no silver bullet in security.” He says the idea that “you can just detect bad guys and stop attacks is hugely misleading.”
This is because many attacks are carried out through tiny steps that often don't seem like much, concealed in the guise of legitimate requests and commands. Crosby explains: “In cyber-security you're often up against criminals who already know very well how machines and machine-learning work, and how to circumvent their capabilities.”
More and more SOC teams say they are experiencing breach-notification fatigue, which is presumably down to the increase in these little steps taken to try and breach a company's perimeter.
And this is why it's very difficult to argue that machine-learning doesn't have a ‘business case' within the cyber-security industry. Everyone interviewed for this article agreed that machine-learning is not perfect, but that it's a great companion for sifting through the thousands of notifications the average SOC team sees each month.
As Crosby said, “Having tools which can help find the needle in the haystack is amazing.” In other words, people generally sing machine-learning's praises when it comes to analysing large data sets.
"...this is why it's very difficult to argue that machine-learning doesn't have a ‘business case' within the cyber-security industry."
And it is because of this that machine-learning is helping improve the fight against cyber-crime, despite the fact that “we still don't feel comfortable leaving it to its own devices,” according to Crosby. It allows SOC teams to concentrate on what matters and to investigate the things which are potentially the biggest threats to their systems.
Oliver Tavakoli, chief technology officer of Vectra Networks, said it best: “We need to use machine-learning where it makes sense - when we need to analyse the most advanced of attacks, correlate behaviour and conduct data reduction exercises. When we call it artificial intelligence we're constructing a certain narrative, and it is often a term used by marketing teams to build buzz. The term is one of popular culture, rather than an actual scientific term.”
So let's stop with the hype, let's not try to dress up machine-learning as something which it isn't, and let's start using it to our advantage. By the sound of it, a lot of vendors are keen on using the idea of ‘artificial intelligence' to sell their products and give them a futuristic image. But really they should probably just talk about the extra pair of hands it affords SOC teams in sifting through the millions of notifications, beeps and flashing lights they have to deal with.
This article originally appeared at scmagazineuk.com