AI, Cybersecurity and the European Perspective

First of all, a few words about counterintelligence.pl. I haven't published here for almost a year, but this has nothing to do with a decline in interest in counterintelligence topics 🙂 Quite the opposite: from April 2024 to the end of March this year I have been a Virtual Routes (formerly the European Cyber Conflict Research Initiative) Fellow, and that is what I devoted my remaining free time to. The Fellowship has been an absolutely great experience. I met many people equally involved in cybersecurity, published columns in the related publication Binding Hook, and even had the opportunity to visit the most important EU institutions and NATO headquarters. The Fellowship itself is probably a topic for a separate post. But that's the whole secret behind the decreased activity, and let's see what happens after April.

In the meantime, let's get back to the actual post, although its content will still be somewhat related to Virtual Routes. Namely, I took part in an event it organized, specifically an essay competition on the consequences of AI for cybersecurity in Europe. Unfortunately, I didn't manage to win (I recommend checking out the winning entries, which you can find here), but why not publish my entry on the blog? Since I submitted the text in English, I am publishing it here the same way:

Artificial Intelligence (AI), and particularly Generative AI (GAI), has quickly found its way into the cybersecurity landscape. While significant hype surrounds this technology, and expectations are high regarding how it can reshape the discipline, its actual impact is more nuanced. AI supports adversarial operations by lowering the barrier to entry for tasks like malware development or creating Foreign Information Manipulation and Interference (FIMI) materials. Still, its application does not significantly change the landscape yet. Creating malware can be streamlined by GAI supporting code writing or preparing phishing lures, but it does not introduce previously unknown capabilities. For FIMI, the popularity of GAI materials is a double-edged sword. The proliferation of generated images, videos, and voice impersonation attempts makes users suspicious of any new content, which can lead to further examination. Furthermore, the utility of deepfakes for information operations is limited because exposing materials as artificial content discredits the source of information. Finally, despite the rapid increase in the availability of AI solutions, threat actors do not present an equal level of proficiency, emphasizing how merely using GAI does not translate to advanced capabilities by itself. Hence, AI tools currently provide threat actors with quantitative rather than qualitative advantages.

Fortunately, an essential role of AI for defense is how quickly it can learn to recognize GAI content. Rapid flagging of FIMI materials and intrusion artifacts created using GAI can level the field for defenders, ensuring they can react and adapt to emerging tactics. Beyond strictly AI-related threats, AI and ML (machine learning) solutions aimed at analyzing malicious behavior can support security analysts in detection and response, but this will still require human expertise to tune and deploy the systems. Additionally, the long-term direction of AI products is unclear, as even major companies struggle to define profitable business models.
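To make the defensive point concrete, here is a minimal sketch (my illustration, not part of the essay or any specific product) of the kind of ML-assisted detection meant above: an unsupervised model flags anomalous login events, while the contamination rate and the feature set are exactly the knobs a human analyst still has to tune. All feature values are invented.

```python
# A toy sketch of ML-assisted anomaly detection on login events.
# Assumption: events are reduced to numeric features beforehand.
from sklearn.ensemble import IsolationForest
import numpy as np

# Each row: [login_hour, failed_attempts, bytes_transferred_mb]
baseline = np.array([
    [9, 0, 12], [10, 1, 8], [11, 0, 15], [14, 0, 9], [16, 1, 11],
])
new_events = np.array([
    [10, 0, 10],   # ordinary working-hours activity
    [3, 7, 950],   # off-hours login, many failures, large transfer
])

# contamination encodes an analyst-tuned assumption about anomaly rarity
model = IsolationForest(contamination=0.2, random_state=0).fit(baseline)
for event, label in zip(new_events, model.predict(new_events)):
    print(event, "anomalous" if label == -1 else "benign")
```

A flagged event here is only a lead for a human to triage, which is the point: the model narrows attention, it does not replace expertise.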

Given the multitude of challenges, policymakers need to take advantage of this early stage of AI's evolution and proactively pinpoint emerging threats, especially as the implications will extend beyond just technical matters. First, there is an issue of strategic autonomy. Much of the advanced AI development occurs outside of Europe, with a concentration of research centers in the US and China. This reliance can lead to supply chain risks as companies must import critical solutions from regions with different legal regimes. Therefore, it is essential to ensure that Europe possesses its own R&D (research and development) hubs and research institutions.

Furthermore, policymakers must harmonize Europe's legal frameworks around privacy and data protection with the operational necessities of implementing AI in cybersecurity. Many detection models require extensive datasets to train and refine their algorithms. Achieving the right balance between data minimization and data utility is challenging. Solutions should ensure that AI cybersecurity applications remain compliant with European privacy and security principles while allowing the use of data to advance the state of knowledge.
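As one hedged illustration of how minimization and utility can coexist (my own example, not a prescription from the essay), direct identifiers can be replaced with keyed pseudonyms before events reach a training pipeline, so a model can still correlate activity per user without ever seeing a raw identity. Field names below are invented.

```python
# A minimal data-minimization sketch: deterministic keyed pseudonyms.
import hashlib
import hmac

SECRET_KEY = b"rotate-me-regularly"  # held only by the data controller

def pseudonymize(identifier: str) -> str:
    """Same input -> same token, enabling correlation across events,
    but irreversible without the key, so raw identity stays minimized."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

raw_event = {"user": "alice@example.eu", "action": "login_failed", "count": 7}
training_event = {**raw_event, "user": pseudonymize(raw_event["user"])}
print(training_event)
```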

In terms of adversarial activity, threat modeling should drive the response and provide an assessment of the most important scenarios to tackle. Traditional risk assessments often rely on historical data and fixed assumptions about adversarial behavior; however, as AI-driven threats are still in their infancy, they will likely evolve rapidly. A dedicated framework for AI threat modeling can provide policymakers, security professionals, and industry stakeholders with a clearer, more predictive understanding of the evolving threat landscape. The basis of the framework should be a common taxonomy, such as MITRE ATLAS, streamlining the sharing of results and ensuring a repeatable analysis process. Private and public sector Computer Security Incident Response Teams (CSIRTs) in Europe can provide a wealth of information on adversarial tactics, allowing rapid detection of new techniques deployed by threat actors. Modeling efforts should combine those empirical findings with red team exercises to discover attack paths that adversaries can use and support an “inside out” approach to creating threat models. The strategy involves first determining relevant threats, such as facilitated attacks against critical infrastructure or the capability to erode the integrity of an AI model, and then establishing which attack paths would allow adversaries to reach those objectives. Researchers can model attack paths in a solution like a graph database to represent combinations of techniques and their relationships to attacked assets, which will support the discovery of gaps in the understanding of adversarial tradecraft. Finally, disseminating information through initiatives like the Cyber Solidarity Act (CSA) will allow a comprehensive understanding of the threat landscape by government and industry stakeholders.
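To sketch what such graph-based modeling could look like in practice (a toy illustration using the networkx library rather than a full graph database, with invented technique names standing in for a shared taxonomy such as MITRE ATLAS), attack paths toward an objective can be enumerated like this:

```python
# A toy attack-path graph: techniques are nodes, edges mean "enables".
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("phishing_lure", "initial_access"),
    ("initial_access", "model_api_abuse"),
    ("initial_access", "credential_theft"),
    ("model_api_abuse", "training_data_poisoning"),
    ("credential_theft", "training_data_poisoning"),
    ("training_data_poisoning", "erode_model_integrity"),  # the objective
])

# Enumerate every modeled route from the entry technique to the objective;
# objectives with no inbound path expose gaps in the model, not safety.
for path in nx.all_simple_paths(g, "phishing_lure", "erode_model_integrity"):
    print(" -> ".join(path))
```

In a production setting the same queries would run against a graph database populated from CSIRT reporting and red team findings, which is what makes the “inside out” discovery of missing tradecraft knowledge repeatable.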

Furthermore, threat modeling's utility extends beyond technical aspects. Policymakers can use this approach to draft forward-looking policies and legislation. For instance, if the framework predicts a surge in deepfake-driven phishing attacks, policymakers might strengthen guidelines around identity verification, mandate explainable AI filters for platforms, or allocate funds to research detecting and flagging media created by GAI.

The key strategic challenge is the concentration of AI companies outside Europe, leaving European industries, governments, and research institutions to rely on third-party technologies. To offset this disadvantage, Europe should leverage its geographical position and the values it upholds. Tensions in the Southeast Asia region and possible changes in domestic policy in the United States can make Europe a preferable location for AI professionals looking for physical safety and the support of liberal democratic institutions. As such, efforts to incentivize the movement of the workforce, like fast tracks for issuing work visas and settlement schemes, can ensure the availability of talent. Further, policymakers should combine talent incentives with targeted investments, including expanding programs like Horizon Europe and earmarking funds for AI cybersecurity research. Initiatives can make research grants more accessible to early-stage startups and small research labs that show promise in niche areas like privacy-preserving analytics. Finally, a crucial element of R&D efforts is a resilient digital infrastructure backbone that provides cloud computing resources and platforms, allowing large-scale testing and deployment of AI models. Support for the development of Europe-based infrastructure would additionally mean control over privacy safeguards and compliance with European privacy and cybersecurity laws.

To build on that, Europe's regulatory leadership can become a strategic advantage. The AI Act and existing cybersecurity measures, such as the NIS Directive, should offer clear, risk-based guidelines. Testing, certification, and explainability requirements for mission-critical applications can ensure trustworthiness. Meanwhile, streamlined compliance pathways can support startups and industry partners, allowing innovation initiatives to avoid bureaucratic burdens. A balanced framework will encourage Europe-based AI ventures to scale, incentivizing transparency in decision-making processes, which will translate into alignment with European values. Europe should also lead discussions at international standardization bodies, ensuring that transparency, privacy by design, and interoperability will become the agreed approach. By advocating responsible AI use in cybersecurity, Europe can lead in defining norms and shaping the global approach to AI deployment.

Threat modeling, incentivized research initiatives, Europe-based infrastructure supporting R&D, and regulatory leadership will ensure that Europe can leverage AI to enhance cybersecurity and achieve strategic autonomy. While there are multiple challenges, including the lack of major local providers of AI software and hardware, the fact that AI-driven threats are still nascent provides an opportunity to react early. Focusing efforts on emerging threats and assigning resources accordingly to tackle the most critical scenarios will ensure that Europe can adapt to any challenge AI-driven threats may pose.
