
AI Attacks: Novel or Iterations of Existing Challenges?

Written by Knostic Team | May 9, 2024 6:16:49 PM

The rapid advancement of artificial intelligence (AI) has revolutionized many aspects of our lives. However, this progress comes with a growing concern: AI-powered attacks. These attacks leverage AI's capabilities to automate tasks, personalize scams, and bypass traditional security measures.

GenAI has undeniably brought about a host of new challenges and uncertainties. Yet, amidst this complexity, a fundamental question arises: are the threats posed by GenAI truly novel, or are they merely iterations of existing ones?

Pragmatic Classification System of Potential GenAI Risks

We present a systematic method for categorizing the expanding array of attack tools and threats arising from the proliferation of artificial intelligence and GenAI technology. We propose a simple yet pragmatic classification system that distinguishes between two primary categories: attacks that upgrade existing methods and those that represent emerging threats.

Two Major Categories of Potential Risks: Upgrading Attacks and Emerging Threats

Our proposed classification system, while simple, provides a structured approach to categorizing these evolving threats, enabling a more comprehensive assessment of their nature and level of development.

Upgrading Existing Attacks

The emergence of GenAI tools enables attackers to upgrade their existing attack methods and enhance the quality of their attacks. Attackers leverage these AI capabilities to improve the ease and speed of attacks and to sharpen their ability to deceive systems and individuals.

Upgrading Attacks through Increased Efficiency and Lower Barrier to Entry

Increased Attack Efficiency: Automating Malicious Code and Improved Execution

One of the key advantages offered by GenAI tools is the ability to execute attacks at unprecedented speed through automated code generation. This newfound efficiency enables attackers to target a larger number of systems within a condensed time frame. By automating the writing of code for malicious tools and malware, attackers can develop sophisticated and highly effective attack vectors, or conversely, adopt a more spread-out approach and target a larger number of potential victims with low-knowledge attacks that rely solely on GenAI-created code. Security measures are therefore paramount to protect against both the simple and the sophisticated attacks that GenAI tools enable.

Greater Accessibility: Lowering the Barrier to Entry for Hacking and Attacking

The accessibility of GenAI tools has further lowered the barrier to entry for hacking, making it possible for anyone, even people who lack coding knowledge, to engage in cyber attacks. This proliferation of hacking tools has widened the pool of potential attackers. Even if the code these tools generate is lower in quality than human-written code, the sheer quantity of such low-tech attacks constitutes a new iteration of existing security threats.

Increasing Availability: Malicious Tools Are Becoming Readily Available and Easily Accessible

DarkGemini is a new GenAI chatbot, currently being sold on the dark web as a monthly subscription for $45. DarkGemini caters to amateur attackers by providing them with a range of functionalities, including generating a reverse shell, building malware, or even locating individuals based on an image. While these features aren’t novel, their amalgamation into a single tool significantly streamlines an attacker's operations. 

Personalization and Deception: Enhancing Non-Code Aspects of Attacks

Enhanced Deception: Reshaping Phishing Attacks with Linguistic Precision

Attackers can now leverage GenAI tools to write or translate messages in the language of their chosen victims without resorting to traditional translation services such as Google Translate. This level of authenticity increases the likelihood of successful phishing attempts: victims are more inclined to trust messages that appear tailored specifically to them, using colloquial language conventions and idiosyncrasies that enhance credibility. Combined with the rapid automation of attack tools, phishing campaigns are likely to become more sophisticated and varied, making them harder for automated scanners to detect.

Automation of Deception and Manipulation: AI Empowers Attackers to Manipulate Individuals

The emergence of GenAI tools helps attackers exploit, through social manipulation attacks, the vast amounts of non-sensitive data individuals share online. Large language models (LLMs) can be leveraged to automatically generate malicious content and conversation ideas tailored to individual profiles. Attackers can gather information about their targets, including interests, hobbies, and personality traits, from their online presence. An attacker can then swiftly employ an AI model to construct a convincing alias tailored to the target, enabling more effective manipulation and the gathering of sensitive information.

Emergence of New Threats in the Cyber Landscape

The emergence of new threats, such as sponge attacks and the creation of deepfake content with malicious intent, underscores the imperative to adapt and fortify defenses against AI-powered attacks.

Beyond Phishing: The Spectrum of GenAI-Driven Threats

The vulnerabilities of AI systems, particularly those used in critical applications like autonomous vehicles and medical diagnostics, contribute significantly to the emergence of new threats. Threat actors may exploit these vulnerabilities and manipulate AI behavior by tampering with training data and models, producing incorrect outcomes and compromising system effectiveness. Industrial control systems, vital components of our critical infrastructure, are particularly vulnerable to this type of poisoning attack and other AI-driven threats.
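To make the idea of training-data poisoning concrete, here is a minimal, self-contained sketch that flips a fraction of training labels and compares a model trained on clean labels against one trained on poisoned labels. The synthetic dataset, logistic-regression model, and 30% flip rate are illustrative assumptions for demonstration, not a reconstruction of any real attack.

```python
# Minimal label-flipping poisoning demo on synthetic data.
# All parameters here are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poison the training set: flip 30% of the labels at random.
rng = np.random.default_rng(0)
idx = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

Even this crude, untargeted flip measurably degrades test accuracy; real-world poisoning can be far subtler, targeting specific inputs while leaving aggregate metrics largely intact.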

Model Denial of Service (Sponge Attacks)

Similar to denial-of-service (DoS) attacks, sponge attacks aim to overwhelm LLMs by inundating them with complex queries that demand excessive processing power. Comparable to regular-expression denial of service (ReDoS), attackers devise automated queries posing large or impossible questions that act as sponges, soaking up the LLM's processing capacity. To achieve this, attackers craft prompts that trigger intricate calculations or endless loops within the LLM, or exploit inefficiencies in how it handles variable-length inputs, inundating it with data.

The ramifications of a successful sponge attack on an AI system are severe. The LLM may become sluggish or unresponsive, degrading the user experience and potentially leading to system crashes. Sponge attacks can disrupt customer service chatbots and virtual assistants, and cloud-based LLMs may incur escalated costs due to increased resource consumption.
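Mitigations typically start with resource budgets at the service boundary. The sketch below is a minimal illustration in Python, assuming a hypothetical model_call function; the character limit, token heuristic, and time budget are arbitrary example values, not recommendations from any specific vendor.

```python
# Minimal sketch of resource budgeting in front of an LLM endpoint.
# model_call, the limits, and the token heuristic are illustrative
# assumptions, not a specific product's API.
import time

MAX_INPUT_CHARS = 8_000     # reject oversized prompts outright
MAX_EST_TOKENS = 2_000      # crude per-request token budget
TIME_BUDGET_SECONDS = 10.0  # flag slow requests for review

def rough_token_estimate(text: str) -> int:
    # Rough heuristic: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)

def guarded_generate(prompt: str, model_call) -> str:
    if len(prompt) > MAX_INPUT_CHARS:
        raise ValueError("prompt exceeds input size budget")
    if rough_token_estimate(prompt) > MAX_EST_TOKENS:
        raise ValueError("prompt exceeds token budget")
    start = time.monotonic()
    result = model_call(prompt)  # hypothetical model invocation
    elapsed = time.monotonic() - start
    if elapsed > TIME_BUDGET_SECONDS:
        # Unusually slow requests may indicate a sponge attempt; log them.
        print(f"warning: request took {elapsed:.1f}s, over budget")
    return result
```

In production, checks like these are usually paired with per-client rate limiting and hard timeouts that cancel a runaway request rather than merely flagging it after the fact.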

The Proliferation of Deepfake Technology in Cyber-Physical Critical Infrastructure

Deepfakes are digitally altered images, data, videos, or audio recordings created using artificial intelligence and deep learning algorithms. They convincingly portray individuals doing or saying things they never did in reality. These manipulations can be employed in various malicious activities, including social engineering attacks, financial fraud, evasion schemes, and disinformation campaigns.

While social manipulation and deception are not new, deepfake technology represents a significant advancement in the ability to deceive individuals. Deepfakes have already been used in state-aligned disinformation campaigns to spread false information. As deepfake tools grow in availability and sophistication, the potential for widespread misuse across political arenas, financial sectors, and beyond becomes increasingly apparent.

A Real-World Example: The $25 Million Deepfake Heist

Avatar AI VideoCallSpoofer (or DeepFake AI) is readily accessible on the dark web, alongside numerous other AI tools. Promoted as 'better than Worm GPT' on Telegram, this tool exemplifies the ease of access to advanced AI-based deception techniques. The potential consequences of such attacks were vividly illustrated in a recent CNN article: an attacker used AI to defraud a company of $25 million by impersonating its CFO during a video call. This real-life incident is a stark reminder of the tangible impact deepfake technology can have on businesses.

For additional information, please see the original article from Heather Chen and Kathleen Magramo: https://lnkd.in/dinZ_gm4

How to Prevent AI Attacks with Enhanced Security Measures

Building Defenses Against Deepfake Attacks

The adoption of GenAI has ushered in a wave of novel cyber threats that aim to wreak havoc by inflicting real-world harm, targeting and disrupting critical systems such as autonomous vehicles, medical imaging analysis systems, and essential infrastructure.

The need for new security measures to combat the threat of deepfakes is evident. While no solution can guarantee complete protection, individuals and organizations can adopt practical steps to mitigate risks. One effective approach is to request additional verification during video calls, such as showing more than just the individual's face. Deepfake tools struggle to convincingly replicate full-body movements, making this an additional layer of defense.

Established Security Techniques Can be Adapted to Help Meet Emerging Threats

Moreover, established security protocols, like call-back verification and passphrase usage, remain valuable in addressing the deepfake challenge. Integrating these techniques into robust business practices can bolster defenses against unauthorized data access and maintain secure communication channels.
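As one concrete illustration of the passphrase technique, the sketch below enrolls a shared passphrase as a salted hash and verifies it with a constant-time comparison. The function names and PBKDF2 parameters are assumptions for the example; a real deployment would also pair this with an out-of-band call-back step.

```python
# Minimal sketch of passphrase enrollment and verification.
# Function names and PBKDF2 parameters are illustrative assumptions.
import hashlib
import hmac
import os

def enroll_passphrase(passphrase: str) -> tuple[bytes, bytes]:
    """Store a salted hash at enrollment time, never the passphrase itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)
    return salt, digest

def verify_passphrase(passphrase: str, salt: bytes, digest: bytes) -> bool:
    """Constant-time comparison, resistant to timing side channels."""
    candidate = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

# Usage: enroll once, then challenge the caller during a suspicious request.
salt, digest = enroll_passphrase("correct horse battery staple")
assert verify_passphrase("correct horse battery staple", salt, digest)
assert not verify_passphrase("wrong phrase", salt, digest)
```

The value of a shared passphrase against deepfakes is that it tests something the impersonator cannot synthesize from public footage: knowledge established in advance over a trusted channel.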

Understanding AI technology is paramount in developing effective defense strategies against AI-driven attacks, including deep fakes. By comprehending AI capabilities, organizations can anticipate and mitigate evolving threats, strengthening resilience against offensive tactics powered by artificial intelligence.

Navigating GenAI-Driven Cybersecurity Risks

Why It's Important for CISOs to Be Aware of AI Attacks

In today's digital landscape, chief information security officers (CISOs) face a critical challenge: the escalating threat of artificial intelligence (AI) attacks, spanning from model poisoning to identity theft. Understanding and anticipating these AI-driven threats is imperative, as failure to do so can lead to severe data breaches and financial losses for organizations. Moreover, the emergence of threat actors harnessing AI technology for sophisticated attacks highlights the necessity for CISOs to evolve their defense strategies and security tools.

Given the potential for AI to exponentially escalate cyber risk and amplify the scale and complexity of security breaches, CISOs must maintain vigilance and proactively adapt their cybersecurity approach. Safeguarding critical infrastructure and data environments requires continuous awareness and readiness to combat evolving threats in the GenAI-driven era.

For regular updates and insights from Knostic research, follow us on LinkedIn.