Threat Intelligence / OSINT / NETSEC / NATSEC

AI, Cybersecurity, and the European Perspective

To start this post, a few words about counterintelligence.pl. I have not published here for almost a year, but this has nothing to do with any drop in interest in counterintelligence topics 🙂 Quite the opposite: from April 2024 until the end of March this year I am a fellow of Virtual Routes (formerly the European Cyber Conflict Research Initiative), and that is where most of my available free time has gone. The fellowship is a genuinely great experience. I have met many people equally committed to cybersecurity, published op-eds in the affiliated outlet Binding Hook, and even had the opportunity to visit the most important EU institutions and NATO headquarters. The fellowship itself is probably a topic for a separate post. So that is the whole mystery behind my reduced activity, and in April we will see what comes next.

Meanwhile, on to the actual post, though its content is still somewhat connected to Virtual Routes. I took part in an essay competition organized by that institution on the consequences of AI for cybersecurity in Europe. Unfortunately I did not win; I of course recommend reading the winning entries, which you can find here, but why not publish my text in this space. So I invite you to read on. Since I submitted the text in English, I am publishing it in English as well:

Artificial Intelligence (AI), and particularly Generative AI (GAI), has quickly found its way into the cybersecurity landscape. While significant hype surrounds this technology, and expectations are high regarding how it can reshape the discipline, its actual impact is more nuanced. AI supports adversarial operations by lowering the barrier to entry for tasks like malware development or creating Foreign Information Manipulation and Interference (FIMI) materials. Still, its application does not drastically change the landscape yet. GAI can streamline malware creation by supporting code writing or the preparation of phishing lures, but it does not introduce previously unknown capabilities. For FIMI, the popularity of GAI materials is a double-edged sword: the proliferation of generated images, videos, and voice impersonation attempts makes users suspicious of any new content, inviting further scrutiny. Furthermore, the utility of deepfakes for information operations is limited because exposing materials as artificial content discredits the source of information. Finally, despite the rapid increase in the availability of AI solutions, threat actors display uneven levels of proficiency, underlining that merely using GAI does not by itself translate into advanced capabilities. Hence, AI tools currently provide threat actors with quantitative rather than qualitative advantages. Consequently, an essential role of AI in defence is how quickly it can learn to recognize GAI content. Rapid flagging of FIMI materials and intrusion artifacts created using GAI can level the playing field for defenders, ensuring they can react and adapt to emerging tactics. Beyond strictly AI-related threats, AI and machine learning (ML) solutions aimed at analysing malicious behaviour can support security analysts in detection and response, but tuning and deploying these systems will still require human expertise. Additionally, the long-term direction of AI products is unclear, as even major companies struggle to define profitable business models.
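As a minimal sketch of how such ML-assisted detection might support analysts, the snippet below trains an unsupervised anomaly detector on hypothetical endpoint telemetry. The feature set, synthetic values, and thresholds are illustrative assumptions, not a description of any specific product or dataset:

```python
# Minimal sketch: an unsupervised anomaly detector over endpoint telemetry.
# Features and data are illustrative assumptions for demonstration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-process features: [child processes spawned,
# outbound connections, MB written to disk, command-line entropy].
baseline = rng.normal(loc=[2, 3, 5, 3.0], scale=[1, 1, 2, 0.5], size=(500, 4))

# A few suspicious events: many children, heavy network and disk activity,
# high command-line entropy (e.g., encoded payloads).
suspicious = rng.normal(loc=[15, 40, 200, 7.5], scale=[3, 5, 30, 0.3], size=(5, 4))

model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# Negative scores indicate anomalies; analysts still triage the results.
for event, score in zip(suspicious, model.decision_function(suspicious)):
    print(f"score={score:+.3f} features={np.round(event, 1)}")
```

In practice, the contamination rate and feature set would have to be tuned to each environment, which is precisely where the human expertise mentioned above remains indispensable.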

Given the multitude of challenges, policymakers need to take advantage of this early stage of AI’s evolution and proactively pinpoint emerging threats, especially as the implications will extend beyond purely technical matters. First, there is the issue of strategic autonomy. Much of the advanced AI development occurs outside of Europe, with research centres concentrated in the US and China. This reliance creates supply chain risks, as companies must import critical solutions from jurisdictions with different legal regimes. It is therefore essential to ensure that Europe possesses its own R&D (research and development) hubs and research institutions.

Furthermore, policymakers must harmonize Europe’s legal frameworks around privacy and data protection with the operational necessities of implementing AI in cybersecurity. Many detection models require extensive datasets to train and refine their algorithms. Achieving the right balance between data minimization and data utility is challenging. Solutions should ensure that AI cybersecurity applications remain compliant with European privacy and safety principles while allowing the use of data to advance the state of knowledge.
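One way to reconcile data minimization with model utility, sketched below under assumed field names, is to pseudonymize identifiers with a keyed hash and drop the fields a detection model does not need before anything reaches the training pipeline:

```python
# Minimal sketch of data minimization before model training: keyed hashing
# pseudonymizes identifiers so records stay correlatable without exposing
# raw values. All field names here are illustrative assumptions.
import hashlib
import hmac

PSEUDONYM_KEY = b"rotate-me-regularly"  # assumption: managed by a KMS in practice

def pseudonymize(value: str) -> str:
    """Deterministic keyed hash: correlatable, not reversible without the key."""
    return hmac.new(PSEUDONYM_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

def minimize(event: dict) -> dict:
    """Keep only fields the detection model needs; pseudonymize identifiers."""
    return {
        "user": pseudonymize(event["user"]),
        "src_ip": pseudonymize(event["src_ip"]),
        "action": event["action"],        # retained as a model feature
        "bytes_out": event["bytes_out"],  # retained as a model feature
        # deliberately dropped: hostname, full URL, message bodies, etc.
    }

raw_event = {"user": "jkowalski", "src_ip": "10.0.0.7", "action": "login",
             "bytes_out": 5120, "hostname": "wks-042", "url": "https://intranet/hr"}
print(minimize(raw_event))
```

A scheme along these lines keeps records useful for training while ensuring that raw identifiers never leave the collection boundary, which is the kind of balance the compliance frameworks above would need to codify.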

In terms of adversarial activity, threat modelling should drive response and provide an assessment of the most important scenarios to tackle. Traditional risk assessments often rely on historical data and fixed assumptions about adversarial behaviour; however, as AI-driven threats are still in their infancy, they will likely evolve rapidly. A dedicated framework for AI threat modelling can provide policymakers, security professionals, and industry stakeholders with a clearer, more predictive understanding of the evolving threat landscape. The basis of the framework should be a common taxonomy, such as MITRE ATLAS, streamlining the sharing of results and ensuring a repeatable analysis process. Private and public sector Computer Security Incident Response Teams (CSIRTs) in Europe can provide a wealth of information on adversarial tactics, allowing rapid detection of new techniques deployed by threat actors. Modelling efforts should combine those empirical findings with red team exercises to discover attack paths adversaries could use, supporting an “inside out” approach to creating threat models. The strategy involves first determining relevant threats, such as facilitating attacks against critical infrastructure or the capability to erode the integrity of an AI model, and then establishing which attack paths would allow adversaries to reach those objectives. Researchers can model attack paths in a solution like a graph database to represent combinations of techniques and their relationships to attacked assets, which supports the discovery of gaps in the understanding of adversarial tradecraft. Finally, disseminating information through initiatives like the Cyber Solidarity Act (CSA) will give government and industry stakeholders a comprehensive understanding of the threat landscape.
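A minimal sketch of this “inside out” graph approach follows. Here networkx stands in for a production graph database, and the technique and asset names are illustrative stand-ins loosely patterned on MITRE ATLAS rather than real taxonomy entries:

```python
# Minimal sketch: attack paths as a directed graph, enumerated toward an
# objective. Node names and edges are illustrative assumptions.
import networkx as nx

g = nx.DiGraph()

# Nodes: adversary techniques and attacked assets.
techniques = ["phishing-with-gai-lure", "credential-theft",
              "poison-training-data", "query-model-api"]
assets = ["analyst-workstation", "training-pipeline", "production-model"]
g.add_nodes_from(techniques, kind="technique")
g.add_nodes_from(assets, kind="asset")

# Edges: which technique enables which asset or follow-on technique.
g.add_edges_from([
    ("phishing-with-gai-lure", "credential-theft"),
    ("credential-theft", "analyst-workstation"),
    ("analyst-workstation", "training-pipeline"),
    ("poison-training-data", "training-pipeline"),
    ("training-pipeline", "production-model"),
    ("query-model-api", "production-model"),
])

# Enumerate paths toward the objective of eroding model integrity.
for path in nx.all_simple_paths(g, "phishing-with-gai-lure", "production-model"):
    print(" -> ".join(path))
```

Paths that red teams can demonstrate but CSIRT telemetry has never observed point exactly at the gaps in adversarial tradecraft understanding described above.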

Furthermore, threat modelling’s utility extends beyond technical aspects. Policymakers can use this approach to draft forward-looking policies and legislation. For instance, if the framework predicts a surge in deepfake-driven phishing attacks, policymakers might strengthen guidelines around identity verification, mandate explainable AI filters for platforms, or allocate funds to research on detecting and flagging media created by GAI.

The key strategic challenge is the concentration of AI companies outside Europe, leaving European industries, governments, and research institutions to rely on third-party technologies. To offset this disadvantage, Europe should leverage its geographical position and the values it upholds. Tensions in Southeast Asia and possible changes in domestic policy in the United States can make Europe a preferable location for AI professionals seeking physical safety and the support of liberal democratic institutions. As such, efforts to incentivize workforce mobility, like fast tracks for issuing work visas and settlement schemes, can ensure the availability of talent. Further, policymakers should combine talent incentives with targeted investments, including expanding programs like Horizon Europe and earmarking funds for AI cybersecurity research. Initiatives can make research grants more accessible to early-stage startups and small research labs that show promise in niche areas like privacy-preserving analytics. Finally, a crucial element of R&D efforts is a resilient digital infrastructure backbone that provides cloud computing resources and platforms, allowing large-scale testing and implementation of AI models. Support for the development of Europe-based infrastructure would additionally ensure control over privacy safeguards and compliance with European privacy and cybersecurity laws.

To build on that, Europe’s regulatory leadership can become a strategic advantage. The AI Act and existing cybersecurity measures, such as the NIS Directive, should offer clear, risk-based guidelines. Testing, certification, and explainability requirements for mission-critical applications can ensure trustworthiness. Meanwhile, streamlined compliance pathways can support startups and industry partners, allowing innovation initiatives to avoid bureaucratic burdens. A balanced framework will encourage Europe-based AI ventures to scale, incentivizing transparency in decision-making processes, which will translate into alignment with European values. Europe should also lead discussions at international standardization bodies, ensuring that transparency, privacy by design, and interoperability become the agreed approach. By advocating responsible AI use in cybersecurity, Europe can lead in defining norms and shaping the global approach to AI deployment.

Threat modelling, incentives for research initiatives, Europe-based infrastructure supporting R&D, and regulatory leadership will ensure that Europe can leverage AI to enhance cybersecurity and achieve strategic autonomy. While there are multiple challenges, including the lack of major local providers of AI software and hardware, the fact that AI-driven threats are still nascent provides an opportunity to react early. Focusing efforts on emerging threats and assigning resources to the most critical scenarios will ensure that Europe can adapt to any challenge AI-driven threats may pose.
