4 Comments
Pawel Jozefiak

Hit the spaghetti wall around automation 12. Everything worked, nothing was debuggable. The layering approach makes sense; I arrived at something similar by accident after spending three hours tracing a failure across six tools with no clear owner. Separating UX, workflow, and persistence means each layer can fail independently (or be replaced).

Turns out the persistence layer is where most complexity actually lives, not the orchestration. State is the hard problem. The Slack-to-Fibery example is a good one but I'd want to see what happens when the Fibery schema changes and the workflow layer doesn't know yet.
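The layer separation described above can be sketched in a few lines. This is a minimal illustration, not anyone's actual setup, and every class and method name here is hypothetical: each layer talks only to the one below it through a narrow interface, which is what lets a layer fail, or be swapped out, on its own.

```python
# Hypothetical sketch of the UX / workflow / persistence split.
# Each layer depends only on the layer directly beneath it.

class PersistenceLayer:
    """Owns state -- the hard problem. In-memory dict stands in for Fibery."""
    def __init__(self):
        self._store = {}

    def save(self, key, record):
        self._store[key] = record

    def load(self, key):
        return self._store.get(key)


class WorkflowLayer:
    """Owns orchestration logic; knows nothing about Slack or any UI."""
    def __init__(self, persistence):
        self.persistence = persistence

    def handle_message(self, message_id, text):
        record = {"text": text, "status": "received"}
        self.persistence.save(message_id, record)
        return record


class UXLayer:
    """Owns the user-facing surface, e.g. a Slack event handler."""
    def __init__(self, workflow):
        self.workflow = workflow

    def on_slack_event(self, event):
        return self.workflow.handle_message(event["id"], event["text"])


# Wiring the stack together; replacing any one layer leaves the others intact.
ux = UXLayer(WorkflowLayer(PersistenceLayer()))
result = ux.on_slack_event({"id": "msg-1", "text": "hello"})
```

The narrow interfaces are also what makes each layer debuggable in isolation: a failure in `save` cannot be confused with a failure in the Slack handler.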

Jurgen Appelo

For now, when there are schema changes, I'm usually smart enough to reflect them in the scenarios. Claude is watching over my shoulder. 🙂

Peter Bell

Agreed 100% with the layered architecture. For anyone technical (big caveat!), though, I'd propose rethinking the persistence and workflow implementations.

- I persist skills, agent files, agent-specific context, and small units of slow-changing shared context (vision, mission, etc.) in a GitHub repo, using Obsidian's "File - Open vault - Open folder as vault" if I need to edit them directly.

- Operational data is mostly in a relational database. Right now Supabase, for ease of starting, but I don't use any Supabase-specific features in case I choose to port to a more affordable hosted solution over time.

- Right now I use Supabase blob storage for files; eventually I'll move it to S3 or similar.

- For orchestration, I'm still tweaking, but I really like having my own lightweight deterministic orchestrator (looking at Restate for the durable-execution piece). That way I can control exactly how I set up the validation gates, and any tool use is fairly easy to generate code for.
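A lightweight deterministic orchestrator with validation gates could look roughly like the sketch below. All names are hypothetical, and the durable-execution piece (what Restate would provide) is deliberately left out; the point is just that steps run in a fixed order and each output must pass an explicit gate before the next step sees it.

```python
# Hypothetical minimal orchestrator: ordered steps, each paired with an
# optional validation gate. Durable execution is out of scope here.

class GateFailed(Exception):
    """Raised when a step's output is rejected by its validation gate."""

class Orchestrator:
    def __init__(self):
        self.steps = []  # list of (step_fn, gate_fn) pairs

    def add_step(self, step, gate=None):
        self.steps.append((step, gate))

    def run(self, payload):
        # Deterministic: steps execute in registration order, and every
        # output is validated before it flows into the next step.
        for step, gate in self.steps:
            payload = step(payload)
            if gate is not None and not gate(payload):
                raise GateFailed(f"gate rejected output of {step.__name__}")
        return payload

# Usage: two steps, each guarded by its own gate.
orc = Orchestrator()
orc.add_step(lambda p: {**p, "summary": p["text"][:20]},
             gate=lambda p: bool(p["summary"]))
orc.add_step(lambda p: {**p, "status": "done"},
             gate=lambda p: p["status"] == "done")
result = orc.run({"text": "three hours of tracing failures"})
```

Because the gates are yours, a failure points at a specific step rather than at six tools with no clear owner.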

It's interesting because a friend is using n8n for workflows at his company, mainly to avoid having to build a UX, but I still feel like you get more out of building a lightweight orchestrator than using the third-party tools (at least for now).

Certainly interesting times as we all become (at least for a while) distributed system engineers :)

Jurgen Appelo

All my prompts, skills, input payloads, and responses are (for now) in Fibery. The prompts are loaded dynamically, and it works really well. I can also switch LLM models on the fly via the database.

I now have a Claude skill that analyzes the prompts and performance of the agents and can replace the prompts. But I'm guessing that will be automated too in a few weeks. :-)