Artificial Intelligence tools have the potential to improve many areas of our lives. But AI in cyber security is a double-edged sword – both threat and solution.
According to NATO’s Assistant Secretary-General for Emerging Security Challenges, David van Weel, artificial intelligence (AI) is playing a massive role in cyber-attacks – and is proving to be both a “double-edged sword” and a “huge challenge”.
At a meeting of NATO’s Defence Innovation Accelerator for the North Atlantic (DIANA) for emerging and disruptive technologies in December 2022, van Weel said, “Artificial intelligence allows defenders to scan networks more automatically and fend off attacks rather than doing it manually. But the other way around, of course, it's the same game.”
To understand the magnitude of the threat, simply look at the multinational defence alliance’s activity on this new, virtual battlefront. In October 2021, NATO launched its Data Exploitation Framework strategic plan and an Artificial Intelligence Strategy.
The latter is designed to:
• provide a foundation for NATO and Allies to lead by example and encourage the development and use of AI in a responsible manner for Allied defence and security purposes;
• accelerate and mainstream AI adoption in capability development and delivery, enhancing interoperability within the Alliance, including through proposals for AI use cases, new structures, and new programmes;
• protect and monitor our AI technologies and ability to innovate, addressing security policy considerations such as the operationalisation of our Principles of Responsible Use; and
• identify and safeguard against the threats from malicious use of AI by state and non-state actors.
But what does this mean for you and your organisation?
As van Weel explained, AI in cyber security is a double-edged sword – both threat and solution.
Interest in AI-based cyber security solutions is booming. Acumen Research and Consulting estimates the market for AI-based security products is currently worth $14.9 billion. By 2030, it is expected to reach $133.8 billion.
AI offers opportunities in several important areas:
• behavioural analysis: spotting attack patterns in timing, methods and behaviour – including how hackers move within systems once they have breached defences.
• data analysis: filtering false positives identified by network intrusion and SIEM tools to reduce the burden on analysts, free their time and help with the prioritisation of events.
• automation: offering potential for automated or semi-automated responses to attacks.
• data modelling: modelling can help to identify attacks and power predictive models designed to identify risk and strengthen defences against likely future attacks.
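To make the behavioural-analysis idea concrete, the sketch below flags events whose timing deviates sharply from a baseline of normal activity. This is a deliberately minimal illustration of the principle, not any vendor’s method – the function name, threshold and sample data are all invented for the example.

```python
# Minimal behavioural-anomaly sketch: flag login times that deviate
# sharply from a baseline of normal activity (z-score test).
from statistics import mean, stdev

def flag_anomalies(baseline_minutes, observed_minutes, threshold=3.0):
    """Return observed timings whose z-score against the baseline
    exceeds the threshold (i.e. likely-anomalous events)."""
    mu = mean(baseline_minutes)
    sigma = stdev(baseline_minutes)
    return [m for m in observed_minutes
            if sigma and abs(m - mu) / sigma > threshold]

# Baseline: logins clustered around 09:00-09:45 (minutes past midnight).
baseline = [540, 545, 550, 555, 560, 565, 570, 575, 580, 585]

# Observed: two office-hours logins and one at 03:00 (180 minutes).
suspicious = flag_anomalies(baseline, [550, 560, 180])
print(suspicious)  # only the 03:00 login is flagged: [180]
```

Real products build far richer baselines (per user, per device, per resource), but the underlying idea is the same: model “normal” and surface the outliers for an analyst to review.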
Cyber security products that use AI include: anti-virus tools; data loss prevention; fraud detection; identity and access management; intrusion detection and prevention; and risk and compliance management solutions.
Yet, despite all this potential, the power of AI is also a threat to your organisation’s cyber security.
AI is already being used by hackers and cyber criminals:
• to identify patterns in computer systems in order to reveal weaknesses or vulnerabilities that could be exploited;
• to create phishing emails designed to spread malware, ransomware or collect information;
• to design malware that is continually changing so that it avoids detection by automated defensive tools; and
• as intelligent spyware that sits inside a system to observe behaviour and collect data until it is ready to launch the next phase of an attack or send out the information it has collected.
Worryingly, the use of AI in this way is already proving successful. Security experts have found that AI-generated phishing emails have higher opening rates than manually crafted phishing emails.
It seems that the AI-powered arms race is already underway. And while common criminals might not have the resources to access the tech talent required to develop these solutions, the underground market for malware, phishing and ransomware widens their availability.
Further, state-sponsored cyber criminals will be able to access the tech talent capable of designing and deploying these solutions in anger. Little wonder, then, that NATO is highlighting the risks.
As with any cyber-security investment, the decision about whether your organisation should invest in AI-powered security tools will ultimately come down to a balanced consideration of risk.
The UK’s National Cyber Security Centre (NCSC) advises, “If you are thinking of incorporating AI into your security systems, there are a few things you need to understand first. You should clarify your own needs, understand the nature of the technology underlying any products you're considering, and finally, determine whether an 'intelligent' solution will give you a net gain in security.”
It offers a series of steps and questions to help you balance the benefits and risks:
• Does the product aim to solve a problem that is important to you?
• Is an intelligent tool right for this problem?
• Consider the governance of this information or action.
• Ensure you have the data your tool requires.
• Understand the costs and risks involved in collecting the data.
• Ensure that the data will be handled correctly.
• Ensure the required resources are available.
• Ensure that all relevant members of staff have appropriate skills.
• Identify the support that is available for the product.
• How reliable is the tool?
• Is the system resilient enough for the task?
• What are the limitations?
If you’d like to know more about how AI-powered cyber security tools could help your organisation to boost its cyber security defences, please reach out to our team.
Call us: 0808 164 4142
Message us: https://www.grantmcgregor.co.uk/contact-us
Discover more articles about cyber security on our blog:
• How do you solve a problem like Suella?
• A recent case exposes why cyber security requires multiple lines of defence
• 10 signs it's time to get a new IT support company
• New changes to Cyber Essentials for 2023
• The 2023 cyber threats for which you should prepare