Beyond Cyber Security
Security is a highly dynamic field, both from the attacker's and the defender's point of view. Changes occur continuously, creating threats and opportunities. The time window in which defenders can react to new threats keeps shrinking, while attackers asymmetrically retain ample time to exploit them.
These changes are not always incremental, since they can be caused by disruptive technologies or events. Threats of a disruptive nature can take longer to address. We at ISSS therefore think it is important to look not only at legacy security but also at what is coming next, to increase awareness and to build the capabilities needed to identify new threats and react to them.
ISSS is currently addressing the security impact of several emerging technology topics that have changed, or are about to change, the landscape in which businesses operate, and is providing information to raise awareness and support a reaction to the new threats. Topics on our agenda where we are examining the security implications include artificial intelligence, distributed ledgers, digital identity, next-generation ransomware, IoT, security in the metaverse, and post-quantum cryptography.
How AI is changing the security landscape
A good example of how technology is changing the security landscape is the use of artificial intelligence (AI) in cybersecurity. AI in cyber-defense makes it possible to correlate events and recognize patterns better, and supports more efficient identification of threats and attacks. On the other hand, AI is boosting attackers' capabilities, helping them to identify weaknesses in software, systems, controls and processes more quickly and systematically. Attacks can be more precise, with better leverage on social engineering. Malware can be smarter, and obfuscation can be taken to new levels.
AI is available as a service, and the creation of smarter attacks is becoming a commodity. These services can be (and are) used to improve the efficiency of attacks dramatically. At the DEF CON 29 conference in August 2021, for example, we noted a presentation (Link) on how to create an efficient, well-targeted phishing attack. The approach uses a workflow that leverages the API of a Human Resources AI service to identify targets and collect information about them, and OpenAI's GPT-3 to prepare appealing mails for each target. AI makes phishing attacks more precise and scalable, while the approach and tooling are readily available.
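The structure of such a workflow can be sketched in a few lines. This is a hypothetical illustration only: the two service calls are stubbed out with local functions (collect_targets standing in for the HR-AI profiling service, generate_lure for a GPT-3 style text service), and all names and fields are invented for the example.

```python
# Hypothetical sketch of an AI-assisted phishing pipeline as described above.
# Both external services are replaced by local stubs; no real API is called.

def collect_targets(company: str) -> list[dict]:
    """Stub for the HR-AI service: returns profiles of likely targets."""
    # In the described attack this step would query an external
    # personality/HR-analysis API for employees of the company.
    return [{"name": "A. Example", "role": "Finance Clerk",
             "interest": "invoicing tools"}]

def generate_lure(profile: dict) -> str:
    """Stub for the language-model call that drafts a tailored phishing mail."""
    # A real attacker would prompt a large language model with the profile;
    # here we merely template the fields to show how personalization scales.
    return (f"Dear {profile['name']},\n"
            f"As {profile['role']}, you may be interested in our new "
            f"{profile['interest']}...\n")

def run_campaign(company: str) -> list[str]:
    """Chain the two services: profile targets, then draft one mail each."""
    return [generate_lure(p) for p in collect_targets(company)]

mails = run_campaign("ExampleCorp")
print(mails[0])
```

The point of the sketch is the chaining: once each stage is an API call, the marginal cost of one more personalized mail is near zero, which is what makes the attack scale.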
Other examples of the security implications of AI can be found in the increasing use of deep fakes to attack not only human decisions but also automated controls, e.g. the face-recognition controls known from our smartphones or from gate controls. See the JPEG initiative on Fake Media (Link), a cooperation of several universities.
Of course, the industry is reacting as well; nevertheless, the asymmetry between attacker and defender seems to be growing through the use of AI. Machine learning is already used to analyse malware and traffic, but existing tools need to become better at monitoring cyber-events and malware driven by AI. See, for example, the article by Zhi Wang, Chaoge Liu, and Xiang Cui on hiding malware inside neural network models, which shows how malware can be propagated without antivirus software raising any suspicion (Link).
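The core idea of hiding a payload in a model can be illustrated with a toy sketch, assuming the general approach of the cited work (embedding payload bytes in the low-order mantissa bits of float32 weights, where the numerical change is too small to affect the model noticeably). This is a minimal stdlib-only illustration, not the authors' actual implementation.

```python
import struct

def embed(weights: list[float], payload: bytes) -> list[float]:
    """Hide one payload byte in the lowest mantissa byte of each float32 weight."""
    out = []
    for w, b in zip(weights, payload):
        raw = bytearray(struct.pack("<f", w))  # little-endian IEEE 754 single
        raw[0] = b                             # overwrite least significant byte
        out.append(struct.unpack("<f", bytes(raw))[0])
    return out + weights[len(payload):]        # remaining weights untouched

def extract(weights: list[float], n: int) -> bytes:
    """Recover the first n hidden bytes from the stego weights."""
    return bytes(struct.pack("<f", w)[0] for w in weights[:n])

weights = [0.1234, -0.5678, 0.9012, 0.3456]
stego = embed(weights, b"hi")
assert extract(stego, 2) == b"hi"
# The perturbation per weight is tiny relative to typical weight magnitudes,
# so the model's behavior is essentially unchanged -- and a scanner that only
# inspects the file sees nothing but plausible floating-point parameters.
```

Since the payload never appears as a contiguous byte sequence in the file, signature-based antivirus has nothing to match on, which is exactly the evasion the article describes.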
Beyond this, new risks are appearing in relation to AI itself. AI models and AI-based tools are fragile and exposed to manipulation and hacking, as several studies have demonstrated; they are well summarized in this article by Alex Polyakov. It is possible, with various approaches, to force a machine learning algorithm to misclassify events, thereby overriding controls. This works even for unknown (black-box) algorithms. A famous example is the misclassification of road signs by self-driving cars. The fragility of AI has also been shown to threaten the privacy of the information used to train models: inference is possible not only on a model's outputs but also on its training inputs, where privacy should matter.
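How little it can take to force a misclassification is easy to show on a toy linear classifier. The sketch below uses a fast-gradient-sign style perturbation (for a linear score w·x + b, the gradient with respect to the input is simply w); the weights and inputs are invented for illustration.

```python
# Toy evasion attack: a small per-feature nudge against the gradient
# flips the classifier's decision while the input barely changes --
# the same idea behind the road-sign misclassification examples.

def score(x, w, b):
    """Linear decision score: positive means class 'accept'."""
    return sum(xi * wi for xi, wi in zip(x, w)) + b

def fgsm_like(x, w, eps):
    """Fast-gradient-sign style perturbation for a linear model:
    the gradient of w.x + b with respect to x is just w."""
    sign = lambda v: (v > 0) - (v < 0)
    return [xi - eps * sign(wi) for xi, wi in zip(x, w)]

w, b = [0.8, -0.4, 0.3], 0.05
x = [0.2, 0.1, 0.1]               # originally classified positive
adv = fgsm_like(x, w, eps=0.15)   # each feature moves by at most 0.15
print(score(x, w, b) > 0, score(adv, w, b) > 0)  # True False
```

Real attacks target deep networks rather than a linear score, but the mechanism is the same: the perturbation is chosen along the model's own gradient, so a change invisible to a human is maximally disruptive to the classifier.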
Cyber-attacks can therefore target not only the defended system itself but also the AI supporting its defense.
A new type of risk is predicted to emerge from the lack of transparency of AI (see for example Bruce Schneier (Link)). AI will be incorporated into most if not all business processes, and it can be expected that AI will be used to look for loopholes in the rules of those processes, ultimately with the objective of optimization. The lack of transparency and the nature of the algorithms, which do not work the way humans do, can lead to AI systems over-optimizing and hacking other AI systems, with unknown consequences for humans.
AI and cybersecurity is therefore a growing topic, with several aspects to be monitored and studied: see for example the Center for Security Studies of ETH Zürich (Link) and the book Cyber Security Politics (Link), in particular chapter 5, on artificial intelligence and the offense-defense balance in cyber security, by Matteo Bonfanti.
What is the threat to a private person or a small enterprise, and how can they cope with it?
Private persons and small enterprises will be deeply affected in their privacy and their interests by subtle attacks that are hard to detect. Awareness is key, and permanent attention will be required. It becomes more and more important to protect personal information and, where possible, to rely on multiple independent sources of information when taking decisions. Awareness alone will not be sufficient: new tools need to be designed to detect malicious AI, and regulators will have to act.
Until such tools are pervasively available, we have to learn to think that what we see may not be what is, in contradiction with our normal way of thinking: recall the famous WYSIATI ("What you see is all there is") described by Daniel Kahneman, the Nobel Prize winner who wrote "Thinking, Fast and Slow". If we cannot beat AI on many dimensions, then we should start to think differently and more critically, not assuming we are safe by definition.
Angelo Mathis/Marcel Zumbühl