Picture the scenario: your home address, National Insurance number and bank details have been stolen by hackers, and now they’re asking you to negotiate directly to get them back.
It’s a terrifying idea, but this is the situation more than 100,000 staff at the BBC, British Airways and Boots now find themselves in after hacker group Clop exploited a weakness in third-party payroll software to access the data of potentially hundreds of companies.
What’s even scarier is that this is not an isolated incident. During the first half of 2023 there were an estimated 1,248 damaging cyber-attacks per week affecting businesses around the world. The hackers are getting more sophisticated and, in many cases, brazen about their assaults, and apparently no one is safe. Even NASA, the seat of some of the brightest scientific and technical minds in America, was found this week to have a flaw in its website that could have tricked users into visiting malicious sites by disguising a dangerous URL with NASA’s name.
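The flaw described above sounds like an open redirect, where a trusted domain forwards visitors on to an attacker-chosen URL, letting a malicious link borrow the trusted name. Here is a minimal sketch of the standard defence, an exact-host allow-list; the hosts and function name are hypothetical, chosen for illustration only:

```python
from urllib.parse import urlparse

# Hypothetical allow-list of hosts we are willing to redirect to.
ALLOWED_HOSTS = {"www.nasa.gov", "nasa.gov"}

def safe_redirect_target(url: str) -> bool:
    """Return True only if the redirect target stays on an allowed host.

    A naive check such as url.startswith("https://www.nasa.gov") can be
    fooled by https://www.nasa.gov.evil.example/, which is why we parse
    the URL and compare the hostname exactly.
    """
    parsed = urlparse(url)
    return parsed.scheme in {"http", "https"} and parsed.hostname in ALLOWED_HOSTS

print(safe_redirect_target("https://www.nasa.gov/missions"))            # True
print(safe_redirect_target("https://www.nasa.gov.evil.example/login"))  # False
```

A link that merely *contains* the trusted name is rejected, because only the parsed hostname is compared against the allow-list.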
Key developments have also revealed that Artificial Intelligence (AI) has been a fundamental element of several high-profile attacks. This new wave of strikes harnesses “offensive AI”, repurposing the capabilities of open-source systems to let cybercriminals mount targeted attacks at unprecedented speed and scale. An important feature of these new threats is that they often fly under the radar of standard, rule-based detection tools, evading discovery and causing severe damage as a result. Major businesses around the world have already fallen victim to AI-supported cyber-attacks, including incidents linked to the notorious NotPetya and BlackEnergy malware. AI is also a staple of deepfake and malware attacks, and is driving an increase in incidents across the cyber hacking landscape: one tech security firm, Zscaler, recently noted that AI was a significant driver of the 47% surge in phishing attacks it observed last year.
With offensive AI not simply “on the horizon” anymore, all organisations will need to evolve their defences to combat the new wave of threats. The “AI arms race” is well and truly underway.
Is AI a problem or a solution?
There is no doubt that the huge leaps made in AI and Machine Learning (ML) over the last few years have ushered in a new phase of human development. Key aspects of our lives are now easier, faster and more efficient thanks to the incredible tech behind these systems. AI-powered platforms used by Harvard Medical School can now identify the people at highest risk of pancreatic cancer up to three years before diagnosis, potentially saving countless lives, while in the UK AI-enabled precision farming is helping farmers make data-driven decisions so they can optimise irrigation, improve fertilisation and reduce waste.
But as huge leaps are made in AI and ML, new vulnerabilities are exposed and new avenues open for cybercriminals to attack an organisation’s network. With its ability to learn and anticipate what an individual or business will do, AI is a particularly effective tool for hackers. Once injected into a company’s network, AI-driven malware can remain dormant and undetected for long periods, eavesdropping on meetings, extracting data and spreading further malicious code, which allows hackers to learn about their target, spot weaknesses in cyber systems and set up back doors into critical infrastructure.
Some of the key methods cybercriminals have used to hack into companies’ networks with AI include:
Generative Adversarial Networks (GANs), which pit a “generator” model against a “discriminator” and can be used to produce convincing fake content or to mutate malicious samples until they slip past detection models
ML-enabled penetration testing tools, which automate reconnaissance and vulnerability discovery at a scale no human team could match
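To make the adversarial idea behind GANs concrete, here is a minimal, hypothetical sketch in plain NumPy: a one-parameter “generator” learns to shift random noise until a logistic “discriminator” can no longer tell its samples apart from “real” data drawn from a normal distribution. Every name and number here is invented for illustration; real GANs apply the same principle with deep networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data the generator must imitate: samples from N(4, 0.5).
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

theta = 0.0          # generator parameter: offset added to standard-normal noise
w, b = 0.1, 0.0      # discriminator: D(x) = sigmoid(w*x + b)
lr_d, lr_g, batch = 0.1, 0.02, 64

for _ in range(3000):
    xr = real_batch(batch)
    xf = rng.normal(0.0, 1.0, batch) + theta   # generated ("fake") samples

    # Discriminator step: gradient ascent on log D(real) + log(1 - D(fake)).
    dr, df = sigmoid(w * xr + b), sigmoid(w * xf + b)
    w += lr_d * np.mean((1 - dr) * xr - df * xf)
    b += lr_d * np.mean((1 - dr) - df)

    # Generator step: gradient ascent on log D(fake); d/d(theta) = (1 - D)*w.
    df = sigmoid(w * xf + b)
    theta += lr_g * np.mean((1 - df) * w)

print(f"learned offset: {theta:.2f} (real mean is 4.0)")
```

After training, the generator’s offset drifts toward the real data’s mean and the discriminator is reduced to guessing; this is the same dynamic attackers exploit when they train malware mutations against a detection model.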
There’s also evidence that generative AI tools such as ChatGPT are supercharging the hacking landscape by removing barriers to entry, making it easy to acquire capabilities that were previously the preserve of advanced cybercriminals. Generative AI can allow malicious code to morph, creating an even greater threat as it evades traditional, signature-based cybersecurity defences. Our safeguards must therefore innovate and evolve to stay a step ahead of adversaries.
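A toy example shows why morphing code defeats signature matching: exact-match signatures are typically hashes of known-bad payloads, and even a trivial, behaviour-preserving change produces an entirely different hash. The payload and signature database here are invented for illustration:

```python
import hashlib

# Toy "signature database": SHA-256 hashes of known-bad payloads.
payload = b"echo 'malicious payload'"
signatures = {hashlib.sha256(payload).hexdigest()}

def flagged(sample: bytes) -> bool:
    # Exact-match detection: flag only if the hash is already known.
    return hashlib.sha256(sample).hexdigest() in signatures

# A trivially "morphed" variant (a no-op shell comment appended) behaves
# identically but hashes differently, so the signature no longer fires.
variant = payload + b"  # noop"

print(flagged(payload))  # True
print(flagged(variant))  # False
```

This is why defenders increasingly pair signatures with behavioural and anomaly-based detection, which a superficial mutation does not escape.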
What can be done to stop the advance?
It’s not all bad news though. The same tools that are being used by hackers to break into our systems can also be our best defence against them.
AI- and ML-powered systems, including security information and event management (SIEM) platforms, are playing a crucial role in helping security teams detect threats and respond to incidents faster. Unlike people, AI never needs a break, so it can continuously monitor a particular IP or endpoint and automatically shut down malicious activity. With an estimated 95% of cyber-attacks still the result of human error, automated systems are essential in overcoming our weaknesses.
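At its simplest, the kind of continuous monitoring described above is statistical anomaly detection over event streams. This hypothetical sketch flags an hourly failed-login count that sits far outside the baseline, using a z-score threshold; the data and threshold are invented for illustration, and production SIEM tools use far richer models on the same principle:

```python
import statistics

# Hypothetical hourly counts of failed logins from one endpoint;
# the final value simulates a brute-force burst.
counts = [3, 5, 4, 6, 2, 4, 5, 3, 4, 97]

baseline = counts[:-1]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(value, threshold=3.0):
    # Flag values more than `threshold` standard deviations above baseline.
    return (value - mean) / stdev > threshold

print(is_anomalous(counts[-1]))  # True: the burst stands out
print(is_anomalous(4))           # False: within normal variation
```

An alert like this can then trigger an automated response, such as locking the account or blocking the source IP, without waiting for a human analyst.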
In an effort to bolster countermeasures against attacks, the US government recently announced a $140 million investment in the National Science Foundation and partner agencies devoted to AI research. Big corporations are also helping to stem the tide: Microsoft’s Security Copilot, announced in March, aims to help security professionals respond rapidly to threats by posing questions to a closed-loop learning system. By spotting errors or augmenting existing code, it is hoped the new system will improve the quality of detection, the speed of response and the ability to strengthen security posture.
Ultimately, AI will become essential to threat detection, bot mitigation and behavioural analytics, among other areas. Its many uses include scanning reams of network traffic logs for anomalies, making routine programming tasks significantly faster, and seeking out known and unknown vulnerabilities that need to be resolved.
What does the future hold?
Despite high-profile calls for AI development to be paused, including the recent claim that it could lead to our “extinction”, it’s clear to most people that AI is very much here to stay. In fact, according to Mimecast, the global AI cybersecurity market is predicted to grow at a rate of 23% a year until 2027, reaching a value of $46.3 billion. As the possible applications of these technologies evolve, there is a huge opportunity to use AI and ML to improve efficiencies and outcomes in all areas of our lives. Yet the more intelligent the technology gets, the more criminals can harness it for their own ends.
Cybersecurity defences need to innovate and evolve to stay a step ahead of cybercriminals if they are to remain effective. The future of cyber will be shaped by the AI arms race, and our ability to win it will depend on how we harness technology now.