DFKI Kaiserslautern · StartUpSecure / BMFTR

We research
Deception Environments
against Advanced Threats

Morphosis AI researches how large language models can automate cyber deception — generating believable honeypots and honeytokens that detect intrusions early, derail attackers, and poison stolen data.

The Problem

Attackers only need one mistake.
Defenders need to stop them all.

Advanced Persistent Threats (APTs) — including state-sponsored actors — operate with time, resources, and asymmetric advantage. Traditional defences like anomaly detection and intrusion detection systems fail to generalise to novel attacks, zero-days, and human-driven exploit chains. The only constant in any attack is the attacker's motive.

Challenge 01

Asymmetric Threat Landscape

State actors and organised criminals invest far more in attacks than most organisations can afford in defence. One successful exploit chain is enough for maximum damage.

Challenge 02

Reactive Detection Only

Successful breaches are typically discovered after the fact. Machine-learning anomaly detection cannot generalise to truly novel attack strategies or insider threats.

Challenge 03

Deception Is Hard to Deploy

Honeypot systems must be carefully tailored to each organisation's infrastructure. Current solutions require expensive expert time and lack automation — keeping them out of reach for SMEs.

Our Approach

Turn the attacker's motive into a weapon against them.

Morphosis AI leverages advances in large language models to generate highly convincing honeytokens — fake documents, credentials, systems, and data — deployed automatically and tuned to the specific threat profile of each infrastructure.
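As a rough illustration of the idea, a decoy could be templated from an organisation's threat profile before being handed to a text-generation model. The function name, parameters, and `call_llm` interface below are hypothetical stand-ins, not Morphosis AI's actual pipeline; only the prompt construction is shown concretely.

```python
# Hypothetical sketch: turn infrastructure details into a prompt for a
# generic text-generation API (represented here by an unspecified
# `call_llm` function) that produces a believable decoy document.

def build_honeytoken_prompt(org: str, system: str, doc_type: str) -> str:
    """Assemble a prompt tailored to one organisation's infrastructure."""
    return (
        f"Write a realistic internal {doc_type} for {org}. "
        f"It should reference the {system} environment and read like a "
        f"routine operations document. Include plausible but fictitious "
        f"hostnames, usernames, and ticket numbers."
    )

prompt = build_honeytoken_prompt("ACME GmbH", "SAP ERP", "maintenance memo")
# The resulting prompt would then be sent to the model, e.g.:
# decoy_document = call_llm(prompt)
```

Tailoring the prompt per infrastructure is what keeps the generated decoys consistent with what an intruder expects to find.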

01

Early Detection

Every interaction with a honeytoken is an indicator of compromise. Attackers reveal themselves the moment they touch a decoy — regardless of how sophisticated their tools are.
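The detection logic itself can be almost trivially simple, because a decoy has no legitimate users. A minimal sketch, assuming decoy credentials planted as usernames and a generic alert sink (both illustrative, not part of Morphosis AI's product):

```python
# Minimal honeytoken detection sketch: any use of a decoy credential is,
# by construction, an indicator of compromise (IoC). Credential names and
# the alert sink are illustrative assumptions.

HONEYTOKENS = {"svc_backup_ro", "admin_legacy", "jenkins_deploy"}

def alert(message: str) -> None:
    # Stand-in for a real SIEM or notification integration.
    print(message)

def check_login(username: str) -> bool:
    """Return True if the login attempt touched a decoy, i.e. an IoC fired."""
    if username in HONEYTOKENS:
        alert(f"IoC: honeytoken credential '{username}' was used")
        return True
    return False
```

There is no anomaly model to tune and nothing to generalise: a single lookup suffices, which is why sophisticated tooling offers the attacker no protection here.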

02

Derailment

Convincing honeynets draw attackers away from real targets. If a honeypot is discovered, it plants fear, uncertainty, and doubt; if it goes undiscovered, it wastes the attacker's time and resources.

03

Data Poisoning

By mixing generated data with real data, stolen information becomes near-worthless. The effort required to separate real from fake drastically reduces the value of any breach.
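In its simplest form, poisoning means diluting real records with generated decoys before an attacker can exfiltrate them. The sketch below assumes a 3:1 fake-to-real ratio and a trivial record shape purely for illustration; real decoys would come from the generation pipeline described above.

```python
import random

# Illustrative data-poisoning sketch: interleave generated decoy records
# with real ones so a stolen dump is mostly noise. Ratio and record
# shape are assumptions for the example.

def poison(real_records, make_fake, ratio=3, seed=0):
    """Mix `ratio` fake records per real record and shuffle the result."""
    rng = random.Random(seed)
    fakes = [make_fake(i) for i in range(ratio * len(real_records))]
    mixed = list(real_records) + fakes
    rng.shuffle(mixed)
    return mixed

real = [{"user": "alice", "role": "finance"}]
mixed = poison(real, lambda i: {"user": f"decoy{i}", "role": "finance"})
```

With a 3:1 ratio, three of every four stolen records are worthless, and the attacker has no cheap way to tell which three.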

Explore Morphosis AI

Dive deeper into
our research.

Get occasional updates on our research, new publications, and project milestones.