This is what prompted my whole approach. I was stuck in analysis paralysis for a long time. I noticed many tools were hard to integrate with AI, and I realized I had no idea how quickly any of the tools I used might become uneconomic or obsolete. So I stopped.
Then I just asked: what's the minimum I need as a technical solo chef to build what I want?
- A foundation model. I use Claude, but I don't tie myself to Claude-specific features, so I can run a council of elders, pick the right lab/model for the job, and keep business continuity if one foundation lab falters.
- Transcription. I picked Granola as non-intrusive and good enough (better now that it has an API and raised another round).
- A place to store the git repo with all the prompts, skills, and slow-changing shared context. GitHub for now; we'll see if they keep having 500 errors.
- Email and calendar. A Google Workspace account: one for me, one for each primary agent.
- A relational database as my system of record for tasks, research, content, contacts, companies, pipelines, and everything else. For now I'm on Supabase, but (other than blog storage) I'm not using any deep Supabase features, so I could port to any other hosted Postgres in a day.
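To keep that portability honest, the only Postgres-specific thing in the code should be the connection string. A minimal sketch of what that might look like in Python: the PG* variable names are the standard libpq conventions, and the function itself is illustrative, not the actual setup.

```python
import os

def pg_dsn():
    """Build a standard libpq-style DSN from environment variables, so moving
    from Supabase to any other hosted Postgres is a config change, not a rewrite."""
    user = os.environ.get("PGUSER", "postgres")
    password = os.environ.get("PGPASSWORD", "")
    host = os.environ.get("PGHOST", "localhost")
    port = os.environ.get("PGPORT", "5432")
    db = os.environ.get("PGDATABASE", "postgres")
    return f"postgresql://{user}:{password}@{host}:{port}/{db}"
```

Swapping providers then means changing five environment variables, with no application code touched.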
That's my stack. No shiny object syndrome. If I see something cool, I send it to my agents to review - not to see whether I should trial it, but to ask them if there are any patterns they could extract for my own orchestration harness.
You need to be technical and you need to have good technical judgement and a solid sense of good practices for reliable distributed systems. But if you are, it's not that hard.
For me it's been transformative.
Most of the struggle with AI orchestration (in my case) is just dealing with all the edge cases: an external service is down for an hour, I run out of tokens mid-workflow, some input arrives in an unexpected format, and so on. Each time, I fix the problem and improve the workflow. It's slow going but ultimately rewarding. The scenarios are getting more solid every day.
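Much of that hardening boils down to one pattern: retry transient failures with backoff instead of letting them kill the workflow. A minimal sketch in Python, assuming the failure surfaces as an exception (the function name and parameters are mine, not from any particular tool):

```python
import random
import time

def call_with_retry(fn, attempts=4, base_delay=0.5,
                    retryable=(TimeoutError, ConnectionError)):
    """Run a flaky external call, retrying with exponential backoff and jitter."""
    for attempt in range(attempts):
        try:
            return fn()
        except retryable:
            if attempt == attempts - 1:
                raise  # out of retries: surface the failure to the workflow
            # Back off 0.5s, 1s, 2s, ... plus jitter so parallel agents
            # don't all hammer the recovering service at once.
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))
```

The same wrapper covers the "service down for an hour" case if you raise the backoff ceiling, and the token-exhaustion case if you add that error type to `retryable`.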
The problem is all the exceptions that don't get me further: Claude had a bug in their Cowork architecture that messed things up and held me back for several weeks. Make has a few annoying quirks with arrays that waste my time every time I use them. Today, I ran into a wall with Perplexity: the MCP connectors available in Perplexity Computer are *not* available in Perplexity Chat, which defeats the whole point of using Perplexity as an interface to my agentic workflows. There goes another two hours of work leading to nothing (until they fix this)... 🤷🏻‍♂️
It's the pain of living at the frontier of AI agent orchestration. Only half of everything works; the other half is permanently under construction. 🚧
Agreed 100% - this is the innovator tax (from the tech adoption lifecycle curve). The ROI is the area under the curve (extra productivity compared to manual work), the learning, and the positioning as a leader in the field. But it's super annoying. The trick is minimizing the number of tools and investing heavily in observability (o11y), monitoring, alerting, and resilience. Once you assume everything will fail and design around that, you can at least recover a known-good state and get things going with a different tool or custom code... but it is an environment that rewards simplicity in tool count.
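"Recover a known-good state" can be as simple as checkpointing workflow state to disk atomically, so a crash mid-step never leaves a half-written file. A sketch of that idea in Python (the function names and JSON format are illustrative assumptions, not a prescription):

```python
import json
import os
import tempfile

def save_checkpoint(path, state):
    """Atomically persist workflow state so a crash mid-write can't corrupt it."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(state, f)
    os.replace(tmp, path)  # atomic rename: readers see old state or new, never partial

def load_checkpoint(path, default=None):
    """Recover the last known-good state, or fall back to a fresh start."""
    try:
        with open(path) as f:
            return json.load(f)
    except (FileNotFoundError, json.JSONDecodeError):
        return default
```

The write-to-temp-then-rename dance is the whole point: `os.replace` swaps the file in one step, so the checkpoint on disk is always a complete one, whichever tool or custom code picks it up next.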