5G Network AI Models: Threats and Mitigations
Modern communications networks are increasingly reliant on AI models to enhance the performance, reliability and security of their offerings. 5G networks in particular, with their service-based architecture, increasingly use AI models for real-time data processing, predictive maintenance and traffic optimization. Large volumes of network data, user behavior data and device interactions are analyzed more thoroughly and quickly than would be possible without AI. AI-driven traffic management models dynamically allocate resources based on demand, reducing latency and improving user experience.
AI can also be used to enhance Defense communications infrastructure, coordinating non-terrestrial networks with air/ground/sea assets to assure mission success criteria are effectively achieved. Energy usage optimization, smart network slicing for autonomous/IoT use cases and dynamic prioritization of Emergency Services also benefit from the effective application of AI models. As 5G networks continue to expand, AI-driven analytics and automation will be essential in ensuring operational efficiency and security in increasingly complex environments.
AI models, however, can also be disrupted or disabled, severely affecting the environments that are dependent on them.
To disrupt or disable an AI model in 5G network environments, attackers can leverage various tactics, exploiting weaknesses that exist throughout the lifecycle of the model – from data ingestion to inference and decision-making. The following is a list of possible attack techniques on AI models and suggested mitigations:
- Data Poisoning: Alteration of training data to degrade model accuracy.
- Model Evasion: Usage of adversarial inputs to bypass model detection.
- Model Inversion: Reverse-engineering of sensitive data or decision logic.
- Model Poisoning: Introduction of hidden backdoors for future access.
- Model Extraction: Reconstruction of a model via carefully crafted queries.
- Denial-of-Service on Infrastructure: Overloading resources to disrupt model operation.
- Trojan Attacks: Embedding of malicious code in models.
- Supply Chain Attacks: Compromise of third-party components used by models.
Data Poisoning
Description:
Attackers inject malicious or misleading data into the AI model’s training dataset to corrupt its learning process. This can cause the model to make incorrect predictions or behave erratically.
How it Works:
Training Data Manipulation – Adversaries introduce false data or label legitimate data incorrectly, influencing the AI model’s predictions and decreasing its effectiveness.
Example:
In a 5G network, poisoned traffic data could mislead AI systems responsible for anomaly detection, causing them to overlook genuine threats.
Impact:
Degraded model accuracy and incorrect predictions. This is particularly harmful in systems performing real-time or critical decision-making processes.
Defense:
- Secure data pipelines and apply thorough data validation to training data (a minimal filtering sketch follows).
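As one illustration of that validation step, the sketch below screens a batch of training records for rows whose features deviate sharply from the batch statistics before they reach the training pipeline. The function name, feature layout and z-score threshold are illustrative assumptions, not part of any particular 5G platform.

```python
import numpy as np

def filter_suspect_rows(features, labels, z_threshold=4.0):
    """Drop training rows whose features deviate far from the batch statistics,
    a coarse screen for injected or mislabeled traffic records."""
    mu = features.mean(axis=0)
    sigma = features.std(axis=0) + 1e-9          # avoid division by zero
    z_scores = np.abs((features - mu) / sigma)
    keep = (z_scores < z_threshold).all(axis=1)  # keep rows with no extreme feature
    return features[keep], labels[keep]
```

A screen like this only catches crude poisoning; it complements, rather than replaces, provenance controls on the data pipeline itself.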
Model Evasion
Description:
Attackers craft inputs that deceive the AI model without being detected. These inputs, called adversarial examples, cause the model to make erroneous predictions or classifications.
How it Works:
Adversarial Examples – By making subtle changes to input data (e.g. network traffic patterns or packet contents), attackers can bypass security measures without triggering detection mechanisms.
Example:
In a 5G intrusion detection system, an adversary could manipulate traffic patterns to evade detection and access restricted environments.
Impact:
Allows attackers to bypass AI-based security controls, leading to security breaches.
Defense:
- Employ adversarial training and robust ML architectures (see the sketch after this list).
- Secure learning infrastructure against unauthorized access to the model (zero trust, identity management, privilege escalation prevention, content security, host and network security).
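A minimal sketch of adversarial training, assuming a PyTorch classifier: each training step mixes clean batches with FGSM-perturbed copies so the model learns to resist small input perturbations. The model, loss function, optimizer and perturbation budget `eps` are assumptions for illustration.

```python
import torch

def fgsm_examples(model, x, y, loss_fn, eps=0.05):
    """Craft FGSM adversarial examples by stepping inputs along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def adversarial_training_step(model, x, y, loss_fn, optimizer, eps=0.05):
    """One training step that averages the loss on clean and perturbed batches."""
    x_adv = fgsm_examples(model, x, y, loss_fn, eps)
    optimizer.zero_grad()
    loss = 0.5 * (loss_fn(model(x), y) + loss_fn(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()
```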
Model Inversion
Description:
Attackers can reverse-engineer a model to gain insights about its training data or parameters, which can lead to privacy breaches or vulnerability exploitation.
How it Works:
Model Querying: By systematically querying the model and analyzing responses, attackers infer sensitive data or proprietary model information.
Example:
In a 5G healthcare application, attackers might query an AI-based diagnostic model to reconstruct patient health data.
Impact:
Disclosure of sensitive information, leading to privacy violations and compliance risks. In Defense environments, this can also lead to disclosure of mission and asset data.
Defense:
- Implement differential privacy for sensitive data (see the sketch after this list).
- Secure learning infrastructure against unauthorized access to the model (zero trust, identity management, privilege escalation prevention, content security, host and network security).
- Secure production infrastructure to control access to and exposure of the model (zero trust, identity management, privilege escalation prevention, content security, host and network security).
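One small piece of a differential-privacy posture can be sketched at the serving boundary: adding calibrated Laplace noise to the confidence scores a query returns, so repeated queries reveal less about the training data. The epsilon and sensitivity values below are illustrative, and a full deployment would more likely apply differential privacy during training (e.g. DP-SGD) as well.

```python
import numpy as np

def dp_noisy_scores(scores, epsilon=1.0, sensitivity=1.0, rng=None):
    """Add Laplace noise (scale = sensitivity / epsilon) to confidence scores
    before release, then re-normalize to a probability-like vector."""
    rng = rng or np.random.default_rng()
    noisy = scores + rng.laplace(0.0, sensitivity / epsilon, size=scores.shape)
    noisy = np.clip(noisy, 1e-6, None)           # keep values positive
    return noisy / noisy.sum()
```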
Model Poisoning (Backdoor Attacks)
Description:
Attackers insert a hidden “backdoor” into the AI model during training, which can later be triggered to manipulate the model.
How it Works:
Triggering the Backdoor: The model is trained to respond abnormally to specific, attacker-defined trigger patterns in its input data; the backdoor lies dormant until such a trigger appears.
Example:
In a traffic control system for 5G networks, attackers could add a backdoor that prevents the detection of specific IP addresses, facilitating undetected traffic flow.
Impact:
Enables attackers to bypass model security and disrupt operations on demand.
Defense:
- Regularly audit model training pipelines and perform backdoor detection testing (a simple probe is sketched after this list).
- Secure learning infrastructure against unauthorized access to the model (zero trust, identity management, privilege escalation prevention, content security, host and network security).
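Backdoor detection testing can start with a simple behavioral probe, sketched below: stamp a candidate trigger pattern onto clean held-out inputs and flag the model if its predictions collapse onto a single class. The `predict` callable, trigger encoding and threshold are illustrative assumptions; dedicated techniques such as activation-clustering analysis go further.

```python
import numpy as np

def trigger_collapse_probe(predict, x_clean, trigger_mask, trigger_value, threshold=0.9):
    """Stamp a candidate trigger onto clean inputs and report whether the model's
    predictions collapse onto one class, a hint of a planted backdoor."""
    x_stamped = x_clean.copy()
    x_stamped[:, trigger_mask] = trigger_value   # overwrite the trigger features
    preds = np.asarray(predict(x_stamped))
    labels, counts = np.unique(preds, return_counts=True)
    dominant_share = counts.max() / len(preds)
    return dominant_share >= threshold, labels[counts.argmax()], dominant_share
```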
Model Extraction (Stealing)
Description:
Attackers attempt to “steal” the AI model by querying it and reconstructing its parameters and decision boundaries. The stolen copy can be used to stage deeper attacks or to facilitate unauthorized use of the model.
How it Works:
API Exploitation: An attacker queries the model extensively, building a local version that replicates the model’s behavior.
Example:
In 5G service APIs, attackers can query AI-driven traffic management or optimization models to reconstruct their logic and potentially exploit the system.
Impact:
Exposes proprietary models to misuse and facilitates future targeted attacks.
Defense:
- Implement query limits (see the throttling sketch after this list).
- Obfuscate model responses.
- Use privacy-preserving mechanisms such as differential privacy.
- Secure production infrastructure to control access to and exposure of the model (zero trust, identity management, privilege escalation prevention, content security, host and network security).
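The first two defenses can be sketched together: a per-client sliding-window query budget in front of the inference API, and a response that exposes only a coarsely rounded top-1 score rather than the full probability vector. Class names, limits and rounding precision are illustrative assumptions.

```python
import time
from collections import defaultdict, deque

class QueryThrottle:
    """Per-client sliding-window query budget."""
    def __init__(self, max_queries=100, window_s=60.0):
        self.max_queries, self.window_s = max_queries, window_s
        self.history = defaultdict(deque)

    def allow(self, client_id):
        now = time.monotonic()
        q = self.history[client_id]
        while q and now - q[0] > self.window_s:  # drop timestamps outside the window
            q.popleft()
        if len(q) >= self.max_queries:
            return False
        q.append(now)
        return True

def obfuscated_response(labels, probabilities, decimals=1):
    """Return only the top label with a coarsely rounded score, limiting what an
    extraction attacker learns from each query."""
    top = max(range(len(probabilities)), key=probabilities.__getitem__)
    return labels[top], round(probabilities[top], decimals)
```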
Denial-of-Service on Infrastructure
Description:
Attackers disrupt the infrastructure supporting AI models by overwhelming the system’s computational or network resources, rendering the model temporarily unavailable.
How it Works:
Resource Exhaustion: By sending an excessive number of requests, the attacker can exhaust the system’s resources, leading to a slowdown or shutdown.
Example:
In a 5G-based AI traffic optimization service, a DoS attack could cripple the infrastructure, resulting in degraded network performance.
Impact:
Service outages, failed predictions, and delayed operations.
Defense:
- Implement query limits (see the load-shedding sketch after this list).
- Implement load balancing, rate limiting, and infrastructure redundancy.
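Rate limiting and load shedding can also be applied at the inference service itself; the sketch below keeps a bounded request backlog and rejects overflow immediately instead of letting it exhaust compute. The class name, backlog size and threading model are illustrative, and production deployments would still rely on upstream load balancers and redundancy.

```python
import queue
import threading

class LoadSheddingInferenceQueue:
    """Bounded backlog in front of the model: overflow is rejected immediately."""
    def __init__(self, infer_fn, max_backlog=64):
        self.infer_fn = infer_fn
        self.requests = queue.Queue(maxsize=max_backlog)
        threading.Thread(target=self._worker, daemon=True).start()

    def submit(self, payload, on_done):
        try:
            self.requests.put_nowait((payload, on_done))
            return True                          # accepted for processing
        except queue.Full:
            return False                         # shed load; caller should back off

    def _worker(self):
        while True:
            payload, on_done = self.requests.get()
            on_done(self.infer_fn(payload))
```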
Trojan Attacks
Description:
Attackers embed malicious code (a Trojan) into the AI model, which can be activated later to alter the model’s behavior.
How it Works:
Trojan Implantation: The attacker inserts malicious code into the model architecture or training environment, which can later be activated to disrupt service, cause incorrect predictions or enable Model Evasion.
Example:
In a 5G application, a Trojan in the model could disable traffic optimization during peak hours, leading to service congestion.
Impact:
Can allow attackers to disable or manipulate the AI model at will.
Defense:
- Secure development environments and regularly audit model code and performance (an artifact integrity check is sketched after this list).
- Secure learning infrastructure against unauthorized access to the model (zero trust, identity management, privilege escalation prevention, content security, host and network security).
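One concrete auditing control is to verify the integrity of the serialized model artifact before it is loaded, so a Trojaned file swapped into the deployment path fails closed. The sketch below assumes a SHA-256 baseline recorded when the model was audited; the function names and error handling are illustrative.

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a model artifact from disk and return its SHA-256 digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model_artifact(path, expected_digest):
    """Refuse to load a model file whose digest differs from the audited baseline."""
    actual = sha256_of(path)
    if actual != expected_digest:
        raise RuntimeError(f"{path}: integrity check failed (got {actual})")
```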
Supply Chain Attacks
Description:
Attackers compromise third-party components (e.g., libraries, frameworks, or pre-trained models) used in building or deploying the AI model.
How it Works:
Third-Party Component Compromise: Attackers introduce vulnerabilities into third-party software or models, which are then incorporated into the target system. CI/CD infrastructure is a logical target of these attacks.
Example:
In a 5G security monitoring model, attackers could tamper with third-party libraries to weaken detection capabilities or allow malicious traffic.
Impact:
Compromises the AI model’s reliability and security, often without immediate detection.
Defense:
- Regularly audit third-party components (a dependency audit is sketched after this list).
- Restrict sources to trusted vendors.
- Secure development environments and regularly audit model code and performance.
- Secure learning infrastructure against unauthorized access to the model (zero trust, identity management, privilege escalation prevention, content security, host and network security).
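A small part of that auditing can be automated at service start-up: comparing installed third-party packages against a pinned, audited allowlist so version drift or missing dependencies are flagged before the model runs. The package names and versions below are placeholders, not recommendations.

```python
from importlib import metadata

# Audited, pinned versions; maintain this allowlist alongside the lockfile (placeholder values).
APPROVED = {"numpy": "1.26.4", "scikit-learn": "1.4.2"}

def audit_dependencies(approved=APPROVED):
    """Report any installed package that is missing or has drifted from the allowlist."""
    findings = []
    for name, pinned in approved.items():
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            findings.append(f"{name}: not installed")
            continue
        if installed != pinned:
            findings.append(f"{name}: installed {installed}, expected {pinned}")
    return findings
```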