Learn about Morphosis AI
Morphosis AI is a BMFTR-funded StartUpSecure project at DFKI Kaiserslautern, running since November 2025. We research and develop new strategies for autonomous, dynamic cyber deception — using generative AI to produce high-quality honeypots, fabricated documents, credentials, and network artefacts at scale. The project lays the groundwork for a DFKI spin-off bringing deception-based security to SMEs and public institutions.
Cyber threats are more pervasive than ever. Private and public organisations alike face the risk of becoming targets of Advanced Persistent Threats (APTs) or state-sponsored campaigns. In practice, successful attacks are often discovered only after the fact, when only reactive measures remain.
Cyber deception through honeypots and honeytokens offers a proactive approach: early detection of attacks, and the deliberate misleading of adversaries using psychological countermeasures — spreading fear, uncertainty, and doubt.
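To make the honeytoken idea concrete, here is a minimal sketch in Python: a decoy credential that is never valid for any real service, but whose appearance in logs or traffic signals a breach. The names (`make_honeytoken`, `is_honeytoken`) and the AWS-style key format are illustrative assumptions, not part of the Morphosis AI platform.

```python
import secrets
import string

# Registry mapping each planted decoy to metadata about where it lives,
# so a sighting can be traced back to the compromised location.
_registry = {}

def make_honeytoken(planted_at: str) -> str:
    """Create a fake AWS-style access key ID and record where it was planted."""
    suffix = "".join(secrets.choice(string.ascii_uppercase + string.digits)
                     for _ in range(16))
    token = "AKIA" + suffix  # mimics the real 20-character key-ID format
    _registry[token] = {"planted_at": planted_at}
    return token

def is_honeytoken(candidate: str):
    """Return plant metadata if the candidate is one of our decoys, else None."""
    return _registry.get(candidate)
```

Any monitoring pipeline that scans authentication attempts or outbound traffic can call `is_honeytoken` to turn a sighting into a high-confidence alert, since no legitimate user ever holds the decoy.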
morphosis.ai researches and develops an autonomous, dynamic cyber deception platform leveraging advances in generative AI. The central question is whether a dynamic honeypot platform with low operational cost and high attacker engagement is technically feasible — and how to bring it to market through a spin-off.
Our Open Questions
What new deception strategies and honeytoken types does LLM advancement make possible, given a formal threat and attacker attention model?
Can LLMs be fine-tuned, pruned, and distilled for deception without measurable quality degradation relative to the threat model?
How must models be dimensioned to produce high-quality results under heavily constrained compute resources?
Can honeypot models communicate via RAG and co-evolve in a bio-inspired manner, producing increasingly convincing environments?
To what degree can the deployment of LLM-generated honeypots be automated, and where in the pipeline is human oversight required?
What metrics can measure the cognitive and psychological impact of deception on a human attacker?
How effective are generated decoys against targeted attacks in a controlled empirical study?
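One way to picture the RAG-style coupling asked about above is a shared artefact store: each honeypot publishes the details it fabricates (hostnames, usernames, paths), and retrieves the most relevant prior artefacts before generating new content, so fabricated details stay mutually consistent across the environment. The sketch below is an assumption of ours, not the project's design; plain token overlap stands in for embedding-based retrieval.

```python
class SharedArtefactStore:
    """Toy shared store: honeypots publish fabricated artefacts and
    retrieve related ones to condition their next generation step."""

    def __init__(self):
        self.entries = []  # list of (source_honeypot, artefact_text)

    def publish(self, source: str, text: str) -> None:
        self.entries.append((source, text))

    def retrieve(self, query: str, k: int = 3):
        """Return up to k entries sharing the most words with the query
        (a stand-in for semantic similarity search)."""
        q = set(query.lower().split())
        scored = [((len(q & set(t.lower().split())), i), (s, t))
                  for i, (s, t) in enumerate(self.entries)]
        scored.sort(key=lambda p: (-p[0][0], p[0][1]))
        return [entry for (score, _), entry in scored[:k] if score > 0]
```

In this picture, a honeypot about to fabricate a backup script would first retrieve artefacts mentioning the same host, reusing the usernames and paths other honeypots have already established rather than inventing contradictory ones.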