Implement the Agentic Chaos pattern

In this post, you learn about a sample application I created to demonstrate the chaos pattern for a multi-agent system. You can read why I created the sample, what functionality it has, and how some of it is implemented. As a bonus, you can also read about the tools I used for the demo and about some areas that still need improvement.
I hope you like it. Feedback is, as always, more than welcome.
Why I created this tool
For some time now, I have been working on agents. On 11 March 2025, I did the first run of a presentation I called “Bring a crew to do your job” at a conference called Blipz on the Radar. Shortly after that, I rewrote the samples to use Amazon Bedrock for another talk, at the AWS meetup in Den Haag. A lot has changed since then: agents have become smarter thanks to better models, and now every company needs multiple MCP servers to wrap their APIs.
In our Devoxx workshop about writing your own agent for the JVM, we also discussed multiple agent patterns. One of them is the group chat pattern. I like to call it the chaos pattern, because any participating agent or person can reply to messages, resulting in utter chaos at times.

I started thinking about this pattern again for my next presentation, at Apeldoorn IT. At the time of writing, there are still tickets available. A little warning: all talks are in Dutch. I like to give demos during my talks, so I created this demo application.
What I created
I created an application that allows multiple agents to register to listen for messages. In theory, you can register any agent that implements the interface. To show that you can also register non-AI agents, I implemented the echo agent. Yes, this agent echoes every message it receives. The other agents in this example all use Spring AI; more on that in the technical design section.
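To give an idea of the contract, here is a minimal sketch of such an agent interface, with the echo agent as an implementation. The names and signatures are my assumptions for illustration, not the repository's exact code:

```java
// Hypothetical sketch of the agent contract; the exact names and
// signatures in the repository may differ.
interface Agent {
    String name();

    // Return a reply to the incoming message, or the placeholder
    // "#nothingtosay#" when the agent has nothing to contribute.
    String reply(String message);
}

// A non-AI agent: it simply echoes whatever it receives.
class EchoAgent implements Agent {
    @Override
    public String name() {
        return "echo";
    }

    @Override
    public String reply(String message) {
        return message;
    }
}
```

Because the contract is this small, an AI-backed agent only differs in what happens inside `reply`: it forwards the message to a model instead of echoing it.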
In the demo, a message thread always starts with a question or remark from the human user. The system prompt determines whether an agent responds to a message. For example, the system prompt for the Football Agent is below.
You are an AI agent that knows everything about Football.
If you see a message about Football, you will answer it correctly.
Feel free to answer any question about Football teams, players, matches, scores, history, and statistics.
Always reply in short answers.
If the message is not about Football, respond with the exact placeholder “#nothingtosay#” with no additional text.
Another component filters messages containing #nothingtosay#. That way, these messages are not published.
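That filter can be sketched as follows; the class name is hypothetical, only the placeholder string comes from the demo:

```java
// Hypothetical filter: drop replies containing the placeholder so
// they never reach the message bus.
final class NothingToSayFilter {
    static final String PLACEHOLDER = "#nothingtosay#";

    // Returns true when the reply should be published.
    static boolean shouldPublish(String reply) {
        return reply != null
                && !reply.isBlank()
                && !reply.contains(PLACEHOLDER);
    }
}
```

Checking with `contains` rather than `equals` is deliberate: models sometimes wrap the placeholder in extra text despite being told not to.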
Moderation
A moderator is in place. At the moment, moderation is rule-based: we prevent duplicates and stop a thread after a configured number of messages. This is essential, as some agents keep verbally fighting with each other if you do not stop them. The sample contains two agents with opposing instructions. The Star Trek Agent must respond to questions about Star Trek, but also send a message explaining why Star Trek is better or more fun than Star Wars. The Star Wars Agent does the same thing in reverse. As you can imagine, they keep replying to each other.
The UI also shows the moderation events. You can find them on the right side.
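The two rules can be sketched like this; it is a simplified stand-in for the real moderator, and the class and method names are assumptions:

```java
import java.util.HashSet;
import java.util.Set;

// Simplified rule-based moderator: rejects duplicate messages and
// stops a thread once it reaches a configured depth.
final class SimpleModerator {
    private final int maxThreadDepth;
    private final Set<String> seen = new HashSet<>();

    SimpleModerator(int maxThreadDepth) {
        this.maxThreadDepth = maxThreadDepth;
    }

    // Returns true when the message may be published.
    boolean allow(String message, int threadDepth) {
        if (threadDepth >= maxThreadDepth) {
            return false;            // stop endless agent ping-pong
        }
        return seen.add(message);    // false for exact duplicates
    }
}
```

The depth limit is what breaks the Star Trek versus Star Wars loop: no matter how motivated the agents are, the thread ends after the configured number of messages.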
Choose the active agents
For demo purposes, it is handy to be able to switch agents on and off. Each message that you post is sent to all active agents. With smaller models, this is fine; using bigger, more expensive models can result in higher bills. Check the left side of the screen for active and inactive agents.
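The toggle can be sketched as a small registry that only dispatches to enabled agents. This is an illustration, not the repository's code; agents are modelled as simple string functions to keep it self-contained:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.UnaryOperator;

// Hypothetical registry: agents can be switched on and off, and a
// posted message is only handed to the agents currently enabled.
final class AgentRegistry {
    private final Map<String, UnaryOperator<String>> agents = new LinkedHashMap<>();
    private final Map<String, Boolean> enabled = new LinkedHashMap<>();

    void register(String name, UnaryOperator<String> agent) {
        agents.put(name, agent);
        enabled.put(name, true); // active by default
    }

    void setEnabled(String name, boolean active) {
        enabled.put(name, active);
    }

    // Send the message to all active agents and collect their replies.
    List<String> dispatch(String message) {
        List<String> replies = new ArrayList<>();
        agents.forEach((name, agent) -> {
            if (enabled.getOrDefault(name, false)) {
                replies.add(agent.apply(message));
            }
        });
        return replies;
    }
}
```

Disabling an expensive agent simply means its flag is false, so it never sees the message and never calls the model.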
The User Interface
I wanted a user interface that feels snappy, is easy to push updates to, and looks good without too much effort. The top-right corner of the screen shows whether the frontend is connected to the backend.
Some features in the UI
- Each message has an info icon with additional info, like the thread depth.
- When hovering a message, the parent message gets a glowing contour.
- The delete button clears the messages.
- A day‑and‑night mode.
Below is a video demonstrating the application’s features.
The tools I used
I started out using Backlog.md and OpenAI Codex. The integration between the two tools and working in tasks helps a lot when starting fresh. Read more about these tools in my blog Spec-driven development using Codex and Backlog.md. I did run into the context window becoming too large; just restarting Codex resolved that issue, as the compact feature did not work for me. Another thing I did not like was that I ran through my weekly quota of calls.
For debugging and doing reviews, I use Warp. This started as an intelligent terminal on the Mac. By now, it is my go‑to tool for finding problems, writing scripts and everything else you want to do on the command line. Warp also helped me a lot when restructuring the sample.
The biggest challenge for these tools is simple refactoring across many source files. For that, keep using IntelliJ's excellent refactoring tools: you encounter fewer bugs and save a lot of time.
I asked Codex to pick the right frameworks. I did mention that I wanted to use Spring Boot and Spring AI, and that on top of that I wanted responsiveness and message passing between the client and the server. Below is what Codex came up with. I was happy with the choice of frontend framework, Chakra UI.
Backend frameworks
Spring Boot 3.5.5
├── Spring WebFlux (reactive web)
│ ├── Netty (non‑blocking I/O)
│ └── Reactor Core (Flux, Mono)
├── Spring Context (DI container)
├── Jackson (JSON processing)
└── Logback (logging)
Project Reactor
├── Reactor Core (reactive streams)
└── Reactor Sinks (message bus)
Frontend frameworks
Next.js 15.0.3
├── React 18.3.1 (UI library)
├── React DOM 18.3.1 (web renderer)
└── Webpack (bundler, internal)
Chakra UI 2.9.1
├── Emotion 11.14.0 (CSS‑in‑JS)
├── Framer Motion 11.11.10 (animations)
└── React Icons 5.3.0 (icons)
TypeScript 5.6.2
└── Type definitions (@types/*)
Technical design

The goal of the architecture is a clean interface for agents: new agents can subscribe to all messages without any knowledge of the subscribing and publishing system. The AgentLifecycleManager is essential to accomplish this decoupling. The default implementation of this interface enables agents to receive notifications and post responses.
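In spirit, the decoupling looks roughly like this. It is a sketch under my own naming assumptions, with a trivial in-memory implementation for illustration; the real interface in the repository may differ:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Agents only see this interface, never the underlying
// publish/subscribe machinery (names approximate the repository).
interface AgentLifecycleManager {
    void onMessage(Consumer<String> listener); // receive notifications
    void publish(String message);              // post a response
}

// Trivial in-memory default implementation for illustration.
final class InMemoryLifecycleManager implements AgentLifecycleManager {
    private final List<Consumer<String>> listeners = new CopyOnWriteArrayList<>();

    @Override
    public void onMessage(Consumer<String> listener) {
        listeners.add(listener);
    }

    @Override
    public void publish(String message) {
        listeners.forEach(l -> l.accept(message));
    }
}
```

Note that an agent that republishes from inside its listener creates exactly the ping-pong behaviour described above, which is why the moderator sits between the agents and the bus.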
Spring WebFlux is used to handle the WebSocket connection to the React‑based frontend.
The code module contains the domain's essential classes and interfaces. Through these interfaces, we can later replace some of the in-memory components with production-ready implementations.
The event bus is a basic in‑memory implementation that uses reactor sinks. Nothing fancy, but ideal for having multiple subscribers and publishers.
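The real implementation builds on Reactor's `Sinks.many().multicast()`. To illustrate the same multiple-publishers, multiple-subscribers idea without pulling in reactor-core, here is an equivalent sketch using the JDK's built-in Flow API (`java.util.concurrent.SubmissionPublisher`); treat it as a stand-in, not the demo's actual bus:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.SubmissionPublisher;

// In-memory message bus: every subscriber sees every message,
// mirroring the Reactor-sinks bus in the demo.
class BusSketch {
    public static void main(String[] args) {
        SubmissionPublisher<String> bus = new SubmissionPublisher<>();

        List<String> footballAgent = new CopyOnWriteArrayList<>();
        List<String> echoAgent = new CopyOnWriteArrayList<>();

        // Both "agents" subscribe to the same bus; consume() returns a
        // future that completes when the publisher is closed.
        CompletableFuture<Void> f1 = bus.consume(footballAgent::add);
        CompletableFuture<Void> f2 = bus.consume(echoAgent::add);

        bus.submit("Who won the Champions League in 2023?");
        bus.close();                         // flush and complete
        CompletableFuture.allOf(f1, f2).join();

        System.out.println(footballAgent);   // both lists hold the message
        System.out.println(echoAgent);
    }
}
```

The multicast behaviour is the important property: a reply published by one agent is delivered to all other subscribers, which is what makes the chaos pattern possible in the first place.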
The code and steps to try it yourself
The project is available on GitHub. The README is verbose and provides a lot of detail to help you better understand the sample and how to run it. Contrary to this blog post, the README file is entirely generated by Warp.
GitHub – RAG4J/agents‑chatter: A new project where agents and humans can chat together
To run the demo, you need access to OpenAI. Providing the OPENAI_API_KEY as an environment variable should be enough. You can use Maven to run both the frontend and the backend.
# Building the project
mvn clean install
# Run the backend
mvn -pl web-app spring-boot:run
# Run the frontend
mvn -pl frontend frontend:npm -Dfrontend.npm.arguments="run dev"

