AI Driven Cybersecurity and Emerging Risk
Risks of AI in Cybersecurity
Adversarial AI/ML
Adversarial AI/ML, or adversarial machine learning, focuses on understanding how to compromise machine learning (ML) systems and the strategies to protect against such attacks. This area is increasingly concerning for security teams, as it poses substantial risks by manipulating AI systems into making erroneous decisions. Some of the key adversarial techniques are:
- Data poisoning: modifying training data to misguide the model's behaviour, thereby negatively impacting its decision-making (a short sketch follows this list).
- Model tampering: unauthorized alterations to the parameters or structure of a machine learning model, which can lead to incorrect outputs.
- Model extraction: attackers probe a model to recover its architecture, parameters, or behaviour, allowing them to create a copy or exploit its capabilities without authorization.
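To make data poisoning concrete, here is a minimal, hedged sketch of a label-flipping attack using scikit-learn; the synthetic dataset, logistic regression model, and 20% flip rate are arbitrary illustrative choices, not a recipe from any real incident.

```python
# Minimal sketch of a label-flipping data-poisoning attack, for illustration only.
# Assumes scikit-learn is available; dataset and model choices are arbitrary.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train on clean data as a baseline.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poison the training set: flip the labels of 20% of the samples.
rng = np.random.default_rng(0)
poisoned_y = y_train.copy()
flip_idx = rng.choice(len(poisoned_y), size=int(0.2 * len(poisoned_y)), replace=False)
poisoned_y[flip_idx] = 1 - poisoned_y[flip_idx]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

Running the sketch typically shows the poisoned model's test accuracy dropping noticeably relative to the clean baseline, even though the attacker never touched the model itself, only its training data.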
Pliny the Prompter is among numerous hackers who 'jailbreak' the newest cutting-edge AI models soon after their release. He has demonstrated how even advanced AI models can be manipulated into generating inappropriate or harmful content, highlighting the need for stronger safeguards and ethical considerations in AI development. According to a Financial Times article, 'Pliny the Prompter' claims he usually needs around 30 minutes to crack the world's most advanced artificial intelligence models.
Polymorphic Malware
We can think of AI-generated malware as a virus that mutates continuously to hide itself from traditional security solutions such as EDRs. A permanent cure is difficult because the malware can change on the fly.
Key characteristics include:
- Mutation - automatically modifies its code each time it infects a new system.
- Encryption - uses encryption to hide its payload; a benign sketch of this idea follows the list.
- Obfuscation - conceals its true functionality using techniques such as dead-code insertion, register renaming, and instruction substitution.
- Functionality Preservation - retains its original malicious functionality despite constant changes in its code.
- Harder to Detect and Analyze - challenging for antivirus tools to detect, analyze, and understand.
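To illustrate why the encryption characteristic defeats signature-based detection, here is a deliberately benign Python sketch: the same harmless "payload" is re-encrypted under a fresh key each generation, so no static byte signature matches twice.

```python
# Benign sketch of why per-generation encryption defeats static signatures:
# the same "payload" yields different bytes each time it is re-encrypted.
# Purely illustrative; the payload here just prints a message.
import os

PAYLOAD = b"print('hello from a harmless payload')"

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def new_generation(payload: bytes) -> tuple[bytes, bytes]:
    """Re-encrypt the payload under a fresh random key."""
    key = os.urandom(16)
    return key, xor(payload, key)

gen1_key, gen1_body = new_generation(PAYLOAD)
gen2_key, gen2_body = new_generation(PAYLOAD)

# The encrypted bodies share no byte pattern, so a static signature
# written against one generation will not match the next...
print(gen1_body.hex()[:32])
print(gen2_body.hex()[:32])

# ...yet each generation decrypts back to identical functionality.
assert xor(gen1_body, gen1_key) == xor(gen2_body, gen2_key) == PAYLOAD
```

This is exactly the functionality-preservation property listed above: the bytes on disk change every generation while the behaviour does not, which is why defenders must look at actions rather than signatures.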
HYAS researchers developed a proof of concept called "BlackMamba" to demonstrate the capability of AI-based malware. BlackMamba leverages an LLM to create polymorphic keylogger features in real time, altering otherwise benign code during execution, all without relying on any command-and-control infrastructure to deploy or validate the malicious keylogger capabilities.
AI-powered Phishing
Phishing generally consists of bulk emails designed to deceive recipients into sharing sensitive information. AI enhances this threat by allowing cybercriminals to personalize attacks extensively, leveraging data gathered from online behaviour. These models can even imitate the writing style of specific individuals or organizations, making their attacks more difficult to identify and counter.
What an AI-driven phishing attack looks like – attackers gather victim data from social media profiles, public records, and online activity, often employing tools such as WormGPT. Using the collected data, AI generates highly personalized phishing emails tailored to the individual. AI is also used to emulate the victim's writing style, enhancing the credibility of the messages. Finally, AI orchestrates large-scale campaigns targeting a wide range of individuals or organizations.
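On the defensive side, the same NLP techniques can be turned against these messages. Below is a minimal, hedged sketch of a phishing text classifier using scikit-learn; the four example emails and their labels are invented toy data, and a real system would train on large labelled corpora and many additional signals.

```python
# Hedged sketch of the defensive side: a toy text classifier that scores
# emails for phishing cues. The tiny dataset is invented for illustration;
# real systems train on large labelled corpora and many more features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account will be suspended, verify your password now",
    "Urgent: confirm your payment details to avoid service loss",
    "Team lunch is moved to Friday, see you there",
    "Attached are the meeting notes from this morning",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(emails, labels)

test = ["Please verify your account password immediately"]
print(clf.predict_proba(test))  # [P(legitimate), P(phishing)]
```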
Sam Mitrovic, a solutions consultant at Microsoft, has raised an alarm after nearly being deceived by what he termed a "highly realistic AI scam call" that could fool even the most experienced of users. This sophisticated attack involves fake emails and phone calls that impersonate Google, using AI to write convincing messages. The scammers use AI-generated voices and phone number spoofing to enhance the deception.
AI-powered DDoS Attacks
The introduction of AI enhances the efficiency and accuracy of DDoS attacks. Cybercriminals can leverage AI to analyze vast amounts of network traffic data, feeding it into sophisticated algorithms to craft an effective attack strategy, and to coordinate large botnets more efficiently, amplifying the impact of these attacks.
AI-based DDoS attack could be used in various ways, including:
- Automated attack orchestration - analyze network traffic patterns and adapt attack strategies in real time.
- Adaptive attack strategies - adapt the attack strategy based on the target’s defences.
- IoT botnets - compromised IoT devices can be used to build and expand powerful botnets.
- Bypassing security measures - investigate and bypass certain security measures, such as next-gen firewalls and intrusion detection systems.
Data from Imperva Threat Research reveals that from April to September 2024, retail sites collectively experienced 569,884 AI-driven attacks each day, 30.6% of which were AI-driven DDoS attacks. DDoS attacks have been highly effective for many years, and the introduction of AI stands to make them even more devastating, leading to significant revenue loss.
How to Mitigate the Risk of AI-driven Attacks
Mitigating the risk of AI-driven attacks requires a comprehensive strategy that encompasses technology, training, and proactive security measures. Traditional cybersecurity measures often fall short against these advanced tactics, necessitating the adoption of AI-powered mitigation strategies. By leveraging the capabilities of artificial intelligence, organizations can enhance their defences, enabling proactive identification and response to emerging threats.
Generative Adversarial Networks (GANs) can serve not only as a vehicle for adversarial attacks but also as a defence mechanism: they can generate adversarial examples that are then used to fortify AI models against similar attacks. Adversarial training enhances the robustness of machine learning models by incorporating such adversarial examples into training datasets. Model distillation further strengthens resilience by training a student model on the soft probabilities of a teacher model, reducing sensitivity to adversarial perturbations and improving generalization.
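As a concrete illustration of adversarial training, here is a minimal, hedged sketch using the Fast Gradient Sign Method (FGSM) with PyTorch; the tiny model, random stand-in data, and epsilon value are invented placeholders rather than a production recipe.

```python
# Minimal sketch of FGSM-based adversarial training, assuming PyTorch.
# The tiny model and random stand-in data are placeholders for illustration.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
epsilon = 0.1  # perturbation budget; tune for the real data's scale

def fgsm_examples(x, y):
    """Craft adversarial examples with the Fast Gradient Sign Method."""
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + epsilon * x.grad.sign()).detach()

for step in range(100):
    x = torch.randn(32, 20)              # stand-in for a real mini-batch
    y = torch.randint(0, 2, (32,))
    x_adv = fgsm_examples(x, y)          # generate adversarial counterparts
    optimizer.zero_grad()
    # Train on a mix of clean and adversarial samples to build robustness.
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
```

The key design choice is training on both clean and perturbed batches, so the model keeps its accuracy on benign inputs while becoming less sensitive to small adversarial perturbations.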
AI-driven approaches leverage machine learning algorithms, natural language processing, and pattern recognition to identify and mitigate polymorphic malware and phishing threats with greater accuracy and efficiency. AI-based anomaly detection can continuously monitor user behaviour and system interactions to identify deviations indicative of polymorphic malware or phishing attempts. Behavioural analysis lets defenders detect malware by its actions and patterns rather than relying on static signatures.
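A minimal sketch of this anomaly-detection idea, assuming scikit-learn's IsolationForest; the behavioural features and values below are invented for illustration, whereas a real deployment would draw on endpoint and log telemetry.

```python
# Hedged sketch of AI-based anomaly detection over user-behaviour features,
# using scikit-learn's IsolationForest. Feature names and data are invented
# placeholders; a real deployment would use telemetry from endpoints/logs.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline behaviour: e.g. [logins/hour, MB uploaded, processes spawned].
normal = rng.normal(loc=[5, 20, 40], scale=[1, 5, 8], size=(500, 3))

detector = IsolationForest(contamination=0.01, random_state=42).fit(normal)

# A deviation such as mass exfiltration stands out from the baseline.
observed = np.array([
    [5, 22, 38],       # ordinary session
    [6, 900, 250],     # suspicious: huge upload, many spawned processes
])
print(detector.predict(observed))  # 1 = normal, -1 = anomaly
```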
Organizations should conduct regular audits of their systems to identify and address vulnerabilities, thereby minimizing AI-related risks. Engaging experts in cybersecurity and artificial intelligence can enhance this process.
The risks associated with AI are extensive. It's essential to engage with experts in cybersecurity and AI to provide training for employees on managing these risks. For instance, employees should be educated on the importance of fact-checking emails that could be AI-generated phishing attempts.
Even with the best security measures in place, organizations remain vulnerable to AI-related cybersecurity incidents. It's crucial to establish a well-defined incident response plan encompassing containment, investigation, and remediation strategies to recover effectively from such incidents.
Conclusion
AI has revolutionized the cybersecurity landscape, providing advanced threat detection, incident response, and risk management capabilities. However, AI also introduces new risks and challenges that must be addressed to ensure the security and integrity of our digital assets. By understanding the risks associated with AI in cybersecurity and implementing mitigation strategies, organizations can harness the power of AI to enhance their cybersecurity posture while minimizing potential risks.
Syed Makitul | RIEPL
October 24, 2024