
We propose a simple way to classify the new attack tools and threats that emerge from GenAI adoption.

After we posted online about DarkGemini, the reactions showed some confusion about how tools like it should be classified. Based on that discussion, we at Knostic propose a way to segment the new era of GenAI-enabled attacks into two categories: upgrades to existing attacks, and genuinely new threats.

 

Existing attacks are becoming easier to execute

Upgrading Existing Attacks:

Scale:

- Anyone can access these tools

- Automating the writing of code for malicious tools or malware

- Shortening the learning curve and upgrading attackers' existing skills. Everything that has occurred so far will occur again, but more efficiently and at scale

Quality of non-cyber-specific actions:

- Writing or translating in the chosen victim's language, for example using ChatGPT rather than Google Translate

- Improving the quality of the writing in phishing messages

 

Emergence of New Threats in the Cyber Landscape

Nascent Attacks:

The rise of LLM technology has led to new attack vectors and threats in the cyber landscape. We present two new threats and a potential mitigation method.

DarkGemini: ChatGPT for the Dark Web

DarkGemini is a powerful new GenAI chatbot, now being sold on the dark web for a $45 monthly subscription.

It can generate a reverse shell, build malware, or even locate people from an image: a "next-generation" bot, built specifically to make GenAI more accessible to the attacker next door.

DarkGemini embodies the dark side of AI accessibility, empowering even amateur attackers with potent tools of disruption.

For a video demonstration, see Gadi's original post.

 

Deepfakes: Manipulating Reality with Alarming Ease

AI is gaining traction on the dark web. A brand-new deepfake tool for live video conversations is now sold openly; just as in the attack described in the news story below, an attacker can pretend to be the CFO and order the transfer of 25 million USD.

It is advertised on the dark web alongside other AI tools, labeled as "better than WormGPT," and sold through Telegram.

The tool, called Avatar AI VideoCallSpoofer (or DeepFake AI), is being advertised with screenshots and a recorded demo.

In the attack mentioned above, covered by CNN, an attacker recently stole 25 million dollars by spoofing a video chat with the CFO.

For additional information, please see the original article by Heather Chen and Kathleen Magramo: https://lnkd.in/dinZ_gm4

 

Mitigation Technique: Fortifying Against Digital Malevolence

As these technologies proliferate, posing unprecedented challenges to security and trust, vigilance and innovative countermeasures become imperative shields in the ongoing battle against digital malevolence.

One tip that won't protect you every time, but can help reduce risk: ask the person on the other side of the line to show you more than their face. That is a harder problem for the technology to solve, and this particular tool cannot do it. Asking the person to move their head around is no longer sufficient, but it will still defeat older tools.

Beyond solid business processes, calling back on a known number and using pass phrases remain the state of the art for combating this threat.
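The call-back and pass-phrase advice can be expressed as a simple verification gate. The sketch below is illustrative only, not a description of any real product or process; the directory, names, and plain-text passphrase comparison are assumptions (a real system would hash passphrases, log attempts, and enforce this in the payment workflow itself):

```python
# Hypothetical sketch: out-of-band verification for a high-risk request,
# e.g. a wire transfer ordered over a video call. All values illustrative.

# Directory maintained in advance; never taken from the request itself.
TRUSTED_DIRECTORY = {
    "cfo@example.com": {"callback": "+1-555-0100", "passphrase": "emerald"},
}

def verify_request(requester: str, callback_number: str, passphrase: str) -> bool:
    """Approve only if we call back on the directory number AND the
    pre-agreed passphrase matches. A deepfake can fake a face, but not
    an out-of-band channel it does not control."""
    entry = TRUSTED_DIRECTORY.get(requester)
    if entry is None:
        return False  # unknown requester
    if callback_number != entry["callback"]:
        return False  # number supplied in the request, not the directory
    return passphrase == entry["passphrase"]
```

The key design choice is that both checks use information the attacker cannot obtain from the video call itself.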

For regular updates and insights from Knostic research, follow us on LinkedIn.
