Cybersecurity: Is AI an opportunity or a threat?

Published on 07/10/2024

As artificial intelligence (AI) transforms industries, cybersecurity among them, and becomes part of our daily lives, it offers effective solutions for combating cybercrime. However, cybercriminals are refining their strategies too, using these same technologies to carry out increasingly sophisticated attacks.

AI and cyber threats

AI can automate threat detection, analyze massive amounts of data in real time, and anticipate cyberattacks through predictive models. That said, cybercriminals also exploit AI to conduct more elaborate attacks, such as automating malware and manipulating detection algorithms. The major challenge lies in balancing improvements to security systems against the risks posed by the malicious use of AI.

Deepfake techniques are a perfect example of this trend. They enable the manipulation of audio and video to create extremely realistic forgeries. By mimicking influential figures, such as company executives, these forgeries facilitate fraudulent transactions and the extraction of sensitive information.

"AI can automate threat detection, analyze massive amounts of data in real-time, and anticipate cyberattacks through predictive models,"  says Djamel Khadraoui, Head of Reliable Distributed Systems Unit at LIST

Similarly, AI enhances phishing tactics by generating highly convincing and targeted emails and websites. By analyzing large datasets, AI can create fake communications that closely mimic legitimate ones, making it difficult for individuals to distinguish between genuine and malicious interactions.

AI-powered Distributed Denial of Service (DDoS) attacks overwhelm networks, causing major operational disruptions for companies and other organizations. Because these attacks adapt, traditional defense mechanisms are less effective against them.

Another threat is AI-optimized ransomware attacks. Cybercriminals use AI to fine-tune their attacks, target specific victims, and improve encryption techniques, making these ransomware attacks not only more devastating but also harder to counter.

Toward multifaceted security approaches

Faced with these evolving threats, securing IT systems with AI support calls for a multifaceted approach combining technical, strategic, and ethical solutions. This, however, requires greater trust in the AI systems being used.

AI models rely on vast and complex datasets. In cybersecurity, as in many other critical applications, securing data pipelines is a crucial step. Ensuring the integrity of these pipelines requires data encryption, strict access controls, and regular audits to prevent unauthorized access and data tampering.
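
To make the pipeline-integrity point concrete, here is a minimal sketch in Python of a keyed integrity check run before data is fed to a model. The file name, digest registry, and key handling are placeholders; a real pipeline would fetch the key from a secrets manager and log every check for audit purposes.

```python
import hashlib
import hmac

# Hypothetical digest registry, recorded when the dataset was first
# ingested and stored separately from the data itself.
EXPECTED_DIGESTS = {
    "training_data.csv": "9f2b...",  # placeholder, not a real digest
}

def file_hmac(path: str, key: bytes) -> str:
    """Compute a keyed HMAC-SHA256 over a file, reading it in chunks."""
    mac = hmac.new(key, digestmod=hashlib.sha256)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            mac.update(chunk)
    return mac.hexdigest()

def verify_dataset(path: str, key: bytes) -> bool:
    """Reject the file if its digest no longer matches the registry."""
    expected = EXPECTED_DIGESTS.get(path)
    if expected is None:
        return False
    return hmac.compare_digest(file_hmac(path, key), expected)
```

Using a keyed HMAC rather than a plain hash means an attacker who can tamper with the data cannot simply recompute a matching checksum without also stealing the key.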

At the same time, strengthening the robustness of operational systems (networks or otherwise) against adversarial attacks is a major challenge. These attacks subtly modify a system's input data to make it produce incorrect results. A key countermeasure, known as adversarial training, is to expose models to deliberately altered data during training, which improves their resistance to such manipulation.
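
As a minimal sketch of adversarial training, assuming a PyTorch classifier: perturbations are generated with the fast gradient sign method (FGSM), and the model is trained on clean and perturbed batches together. The model, optimizer, and the epsilon value are illustrative placeholders, not a recommendation for any specific system.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon):
    """Craft adversarial examples with the fast gradient sign method."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that most increases the loss.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """One training step on a mix of clean and perturbed inputs."""
    model.train()
    x_adv = fgsm_perturb(model, x, y, epsilon)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```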

Moreover, while AI can be used for cyberattacks, it also serves as a powerful weapon for strengthening system security. AI-powered Security Information and Event Management (SIEM) systems detect anomalies and suspicious patterns in real time, enabling quick responses to threats.
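
A heavily simplified sketch of the anomaly-detection idea behind such systems, assuming scikit-learn: an unsupervised IsolationForest is fitted on features extracted from normal event windows and then flags outliers. The features and baseline figures here are invented for the example; production SIEM systems use far richer event models.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one event window: [requests/min, bytes sent, failed logins].
# These baseline figures are made up for illustration.
normal_events = np.random.default_rng(0).normal(
    loc=[120, 50_000, 1], scale=[15, 8_000, 1], size=(1_000, 3)
)

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_events)

# A burst of traffic with many failed logins should stand out.
suspicious = np.array([[900, 400_000, 60]])
print(detector.predict(suspicious))  # -1 flags an anomaly
```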

AI-based threat intelligence platforms play a key role in aggregating data from various sources, such as social media and dark web forums. They provide early warning signals about emerging threats, giving organizations a head start over cybercriminals.
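
In essence, such platforms merge indicators of compromise from many feeds and surface the ones that are new. A toy sketch of that aggregation step, with invented feeds using reserved documentation-range IP addresses:

```python
# Hypothetical indicator feeds (IP addresses seen in two sources).
feed_a = {"203.0.113.7", "198.51.100.23"}
feed_b = {"203.0.113.7", "192.0.2.41"}
known = {"198.51.100.23"}  # indicators the organization already tracks

# Aggregate the sources and surface only previously unseen indicators.
new_indicators = (feed_a | feed_b) - known
print(sorted(new_indicators))
```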

Finally, automating incident response, facilitated by AI, is an effective solution. This technology can isolate affected systems or quickly block malicious IP addresses, reducing response time and minimizing damage.
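
A bare-bones sketch of one such automated response, assuming a Linux host where the script runs with root privileges: a flagged address is dropped with an iptables rule. The flagged list is a stand-in for real detector output, and a production playbook would add logging, human approval, and rollback.

```python
import subprocess

def block_ip(ip_address: str) -> None:
    """Append a firewall rule dropping all inbound traffic from the address."""
    subprocess.run(
        ["iptables", "-A", "INPUT", "-s", ip_address, "-j", "DROP"],
        check=True,
    )

# Stand-in for addresses flagged by a detection system
# (203.0.113.0/24 is a reserved documentation range).
for flagged in ["203.0.113.7"]:
    block_ip(flagged)
```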

Cybersecurity: a trust guarantee for AI-based solutions

The growing use of AI in highly regulated fields such as healthcare demands enhanced levels of trust and security, especially when it comes to sharing sensitive medical data.

One of the primary concerns is cybersecurity. In federated data systems, each participant (a hospital, for example) keeps its data locally while contributing to a centralized global model hosted by an aggregator. However, the integrity of this model can be compromised if a client injects poisoned data or intentionally corrupts model updates. In a medical context, this could skew diagnoses or predictions made by AI, directly impacting patients.

"The growing use of AI in highly regulated fields such as healthcare demands enhanced levels of trust and security, especially when it comes to sharing sensitive medical data," states Djamel Khadraoui.

LIST is developing advanced solutions in the field of federated learning (Trusted Federated Machine Learning), offering robust methods to secure exchanges in decentralized AI systems. These innovations aim to ensure that users, such as hospitals, can trust that the global model remains reliable and free of corruption while optimizing the efficiency of anomaly detection processes.
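
The article does not detail LIST's methods, but a classic illustration of this class of defenses is robust aggregation: combining client updates with a coordinate-wise median instead of a plain mean, so that a small number of poisoned updates cannot pull the global model arbitrarily far. A minimal NumPy sketch with invented updates:

```python
import numpy as np

def robust_aggregate(client_updates: list[np.ndarray]) -> np.ndarray:
    """Coordinate-wise median of client model updates.

    Unlike a plain mean, the median is not arbitrarily skewed by a
    minority of poisoned or corrupted updates.
    """
    return np.median(np.stack(client_updates), axis=0)

# Three honest clients plus one poisoned update with huge values.
honest = [np.array([0.1, -0.2, 0.05]) for _ in range(3)]
poisoned = [np.array([50.0, 50.0, 50.0])]
print(robust_aggregate(honest + poisoned))  # stays at the honest values
```

Here the mean would be pulled above 12 in every coordinate by the single poisoned client, while the median remains at the honest clients' values.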


Contact

Dr Djamel KHADRAOUI