Learn about the Morphosis AI Research Project

Morphosis AI is a BMFTR-funded StartUpSecure project at DFKI Kaiserslautern, running since November 2025. We research and develop new strategies for autonomous, dynamic cyber deception — using generative AI to produce high-quality honeypots, fabricated documents, credentials, and network artefacts at scale. The project lays the groundwork for a DFKI spin-off bringing deception-based security to SMEs and public institutions.


Overview

Cyber threats are more pervasive than ever. Private and public organisations alike face the risk of becoming targets of Advanced Persistent Threats (APTs) and state-sponsored campaigns. In practice, successful attacks are often discovered only after the fact, when only reactive measures remain.

Cyber deception through honeypots and honeytokens offers a proactive approach: early detection of attacks, and the deliberate misleading of adversaries using psychological countermeasures — spreading fear, uncertainty, and doubt.
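To make the honeytoken idea concrete, the minimal sketch below plants a fabricated credential and raises an alert when it is used; any touch of the decoy signals an intrusion. The file layout, token format, and alert hook are illustrative assumptions, not part of the project's actual pipeline.

```python
import json
import secrets
from datetime import datetime, timezone
from pathlib import Path

def create_credential_honeytoken(directory: Path) -> str:
    """Plant a fake AWS-style credential file and return its token ID.

    The credentials are syntactically valid but grant no access; the
    access key doubles as a unique marker, so any attempt to use it can
    be attributed to this specific decoy.
    """
    token_id = "AKIA" + secrets.token_hex(8).upper()
    fake_profile = (
        "[default]\n"
        f"aws_access_key_id = {token_id}\n"
        f"aws_secret_access_key = {secrets.token_hex(20)}\n"
    )
    directory.mkdir(parents=True, exist_ok=True)
    (directory / "credentials").write_text(fake_profile)
    return token_id

def on_token_used(token_id: str) -> None:
    """Illustrative alert hook; a real deployment would notify a SIEM instead."""
    event = {
        "type": "honeytoken_triggered",
        "token_id": token_id,
        "time": datetime.now(timezone.utc).isoformat(),
    }
    print(json.dumps(event))

if __name__ == "__main__":
    planted = create_credential_honeytoken(Path("/tmp/decoy/.aws"))
    on_token_used(planted)  # simulate an attacker touching the decoy
```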

morphosis.ai researches and develops an autonomous, dynamic cyber deception platform leveraging advances in generative AI. The central question is whether a dynamic honeypot platform with low operational cost and high attacker engagement is technically feasible — and how to bring it to market through a spin-off.

Project Goals

  • An architecture whitepaper describing the honeytoken ML pipeline with continuous learning
  • Empirical LLM fine-tuning experiments for honeytoken generation, optimising for attacker engagement and generation cost
  • Publication of results in an open-access journal in the field of ML for network security
  • A detailed commercialisation plan for a spin-off and a clear path to StartUpSecure Phase 2

An Introduction to Our Key Research Areas

Open Research Questions

RQ1

What new deception strategies and honeytoken types does LLM advancement make possible, given a formal threat and attacker attention model?

RQ2.1

Can LLMs be fine-tuned, pruned, and distilled for deception without measurable quality degradation relative to the threat model?
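As a rough illustration of the fine-tuning side of RQ2.1, the sketch below configures parameter-efficient (LoRA) adaptation of a small open causal language model, assuming the Hugging Face transformers and peft libraries; the base model name and hyperparameters are placeholders, and the training loop and data are omitted. It is not the project's actual setup.

```python
# Minimal LoRA adapter setup (placeholder model; training loop omitted).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE_MODEL = "gpt2"  # placeholder; the project would evaluate several small models

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Attach low-rank adapters instead of updating all weights, keeping training
# and serving cheap: the cost axis RQ2.1 trades off against deception quality.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```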

RQ2.2

How must models be dimensioned to produce high-quality results under heavily constrained compute resources?
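RQ2.2 is ultimately a sizing question. A back-of-the-envelope helper like the one below, with purely illustrative parameter counts and precisions, shows the kind of arithmetic involved in fitting a model into a constrained memory budget.

```python
def weight_memory_gib(num_parameters: float, bits_per_parameter: int) -> float:
    """Approximate memory needed just to hold the model weights, in GiB.

    Ignores activations, KV cache, and framework overhead, so real
    requirements are higher; useful only as a lower bound.
    """
    return num_parameters * bits_per_parameter / 8 / 2**30

# Illustrative sizes: a 7B-parameter model at different precisions.
for bits in (16, 8, 4):
    print(f"{bits:>2}-bit weights: ~{weight_memory_gib(7e9, bits):.1f} GiB")
# roughly 13.0 GiB at 16-bit, 6.5 GiB at 8-bit, 3.3 GiB at 4-bit
```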

RQ3

Can honeypot models communicate via RAG and co-evolve in a bio-inspired manner, producing increasingly convincing environments?
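How such RAG-based communication could look is entirely open. The toy sketch below, using only the Python standard library and a naive bag-of-words similarity in place of a real vector database, shows one assumed shape: honeypot instances publish artefacts into a shared store and retrieve each other's artefacts as context, so newly generated content stays consistent across the deception environment.

```python
from collections import Counter
from math import sqrt

class SharedArtefactStore:
    """Toy shared memory for honeypot instances (stand-in for a real vector DB)."""

    def __init__(self):
        self.artefacts: list[tuple[str, str]] = []  # (source honeypot, artefact text)

    def publish(self, source: str, text: str) -> None:
        self.artefacts.append((source, text))

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        """Return the k artefacts most similar to the query (cosine over word counts)."""
        def vectorize(s: str) -> Counter:
            return Counter(s.lower().split())

        def cosine(a: Counter, b: Counter) -> float:
            dot = sum(a[w] * b[w] for w in a)
            norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
            return dot / norm if norm else 0.0

        q = vectorize(query)
        ranked = sorted(self.artefacts, key=lambda item: cosine(q, vectorize(item[1])), reverse=True)
        return [text for _, text in ranked[:k]]

store = SharedArtefactStore()
store.publish("web-honeypot", "internal wiki mentions project ORION and server db-orion-01")
store.publish("ssh-honeypot", "bash history shows logins to db-orion-01 as user svc_backup")

# A third honeypot retrieves shared context so its generated artefacts stay consistent.
print(store.retrieve("fake database credentials for db-orion-01"))
```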

RQ4

To what degree can LLM-generated honeypot deployment be automated, and where in the pipeline is human oversight required?
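One way to frame RQ4 is as a pipeline in which each generated decoy passes automated checks and, depending on a risk policy, a human review gate before deployment. The sketch below is a deliberately simplified, assumed pipeline shape with placeholder checks, not the project's architecture.

```python
from dataclasses import dataclass

@dataclass
class Decoy:
    kind: str      # e.g. "document", "credential", "service banner"
    content: str
    risk: str      # "low" or "high": high-risk decoys need a human in the loop

def automated_checks(decoy: Decoy) -> bool:
    """Placeholder checks: non-empty content and no real secrets leaked into the decoy."""
    return bool(decoy.content) and "REAL_SECRET" not in decoy.content

def human_review(decoy: Decoy) -> bool:
    """Stand-in for an analyst approval step (always defers in this sketch)."""
    print(f"[review queue] {decoy.kind}: {decoy.content[:40]}...")
    return False  # held until a human approves

def deploy(decoy: Decoy) -> None:
    print(f"[deployed] {decoy.kind}")

for decoy in [
    Decoy("document", "Quarterly network migration plan (fabricated)", risk="low"),
    Decoy("credential", "svc_backup / p@ssw0rd-decoy", risk="high"),
]:
    if not automated_checks(decoy):
        continue
    if decoy.risk == "high" and not human_review(decoy):
        continue  # high-risk decoys wait for explicit approval
    deploy(decoy)
```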

RQ5.1

What metrics can measure the cognitive and psychological impact of deception on a human attacker?
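Candidate metrics are themselves an open question. The sketch below computes two simple proxies, dwell time on decoys and command volume per session, from an assumed, simplified interaction log, purely to illustrate the kind of data such measurements would operate on.

```python
from statistics import mean

# Assumed simplified log: one entry per attacker session on a decoy.
sessions = [
    {"decoy": "fake-db-server", "duration_s": 540, "commands": 32},
    {"decoy": "fake-db-server", "duration_s": 120, "commands": 4},
    {"decoy": "fabricated-wiki", "duration_s": 900, "commands": 51},
]

# Proxy 1: mean dwell time, i.e. how long attackers stay engaged with decoys.
print(f"mean dwell time: {mean(s['duration_s'] for s in sessions):.0f} s")

# Proxy 2: mean interaction depth, i.e. how actively they explore them.
print(f"mean commands per session: {mean(s['commands'] for s in sessions):.1f}")
```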

RQ5.2

How effective are generated decoys against targeted attacks in a controlled empirical study?