You can find all my articles on my Medium page, on GitHub, and in MISC magazine.
I created a functional POC of a spread feature that allows malware to propagate via Outlook's COM API, using artificial intelligence to generate credible, convincing emails and thereby enhance the malware's offensive capabilities. Check out the article in MISC140 (LES EDITIONS DIAMOND)!
Read
What if AIs started attacking each other? What if tomorrow’s attacks involved defensive AIs battling offensive ones? Or will the constant AI assistance in our daily work simply lower our guard over time? Jailbreaking an AI isn’t particularly difficult, especially with some of the more 'recent' LLMs. The purpose of this article is to ask one LLM to attack a competitor (or at least to provide us directly with the prompt to inject). Simple, effective, and functional. I managed to extract MISTRAL's system prompt in just a few minutes, quite easily!
Read
The Fodhelper bypass is widely exploited in the cybercrime world to gain admin-level access to machines (mainly those of individual users) by bypassing UAC to execute malware with administrative privileges. It is also used to add exclusion folders to Windows Defender using those same elevated rights. The issue: Windows Defender detects the use of Fodhelper for such actions. In this article, I analyzed the detection criteria and made the technique undetectable. I also automated the process through a POC written in Python and C.
Read
Jailbreaking an AI is actually quite fun! Since an AI’s 'ethical system' is based solely on the ethics of its creator, it’s naturally possible to bypass that framework. It’s all about context! It’s not just about injection techniques using files, images, etc. It’s also a form of social engineering targeting the logic behind the 'role' the AI is confined to. I see too many people using the same old prompt over and over instead of crafting their own. So, on a whim, I decided to quickly create several prompts to explore all sorts of taboo topics. For example, 'Little Kitten' will explain how to make cocaine, 'Little Harmless Chick' will walk us through making a sort of napalm-style Molotov cocktail for civilians, or, later in the conversation, the AI can be prompted to build a complete ransomware.
Read
The main interest in reverse engineering was to find ways to prevent sensitive strings in my malware from being read, and to understand whether it was possible to deceive both automated and manual analysis. Overall, I don’t have advanced reverse engineering skills (in fact, they’re quite basic), but that shouldn’t be a barrier. That’s the mindset behind this article. I present a simple methodology, accessible to beginners, for conducting malware analysis through various means. My approach is straightforward: automate the analysis using online resources, then manually extract and review key information to refine and validate the initial findings.
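As a small illustration of that 'automate first, verify manually' workflow, here is a minimal local triage sketch in Python. It is not the tooling from the article (which leans on online analysis services): it simply computes the hashes you would submit to those services and extracts printable strings, flagging a few keywords for manual review. The sample path and the keyword list are my own assumptions, purely illustrative.

```python
import hashlib
import re
import sys
from pathlib import Path

MIN_STRING_LEN = 6  # only keep printable runs at least this long

def file_hashes(path: Path) -> dict:
    """Compute MD5/SHA-1/SHA-256, the values used to look a sample up on online scanners."""
    data = path.read_bytes()
    return {
        "md5": hashlib.md5(data).hexdigest(),
        "sha1": hashlib.sha1(data).hexdigest(),
        "sha256": hashlib.sha256(data).hexdigest(),
    }

def extract_strings(path: Path, min_len: int = MIN_STRING_LEN) -> list:
    """Pull ASCII strings out of the binary, much like the `strings` utility."""
    data = path.read_bytes()
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, data)]

# Hypothetical keywords worth a second, manual look
SUSPICIOUS = ("http://", "https://", "powershell", "cmd.exe",
              "reg add", "VirtualAlloc", "CreateRemoteThread")

def triage(sample: Path) -> None:
    hashes = file_hashes(sample)
    print(f"[+] {sample.name}")
    for name, value in hashes.items():
        print(f"    {name}: {value}")
    strings = extract_strings(sample)
    hits = [s for s in strings if any(k.lower() in s.lower() for k in SUSPICIOUS)]
    print(f"    {len(strings)} strings extracted, {len(hits)} look worth a manual pass:")
    for s in hits[:20]:  # keep the output short; review the rest by hand
        print(f"      {s}")

if __name__ == "__main__":
    # Usage: python triage.py <path-to-sample>
    triage(Path(sys.argv[1]))
```

The point of the sketch is only the order of operations: let the automated pass (hashes, string extraction, online lookups) surface candidates, then validate them manually, exactly as described above.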
Read
I detailed the steps an infostealer takes to steal, decrypt, and target browser data. I carried out the process manually to make it understandable and accessible to anyone interested in what this type of malware actually does. The goal was to demonstrate, on the one hand, how easily such malware can be executed and, on the other, that malware is ultimately nothing more than the automation of actions that could be performed manually.
Read