Hey builders 👋
I love the flow of building with tools like Cursor, Windsurf, or Replit, but I hit a wall recently. Every time I tried to build a complex AI agent (one that actually remembers things or plans tasks), I spent 90% of my time fighting with infrastructure—setting up vector DBs, message queues, and state management—and only 10% on the actual agent logic.
So, I spent the last few months building Soorma Core to fix this. It’s an open-source "Operating System" for agents that handles the plumbing so you can focus on the "brain."
I’m looking for feedback from this community specifically: Does this workflow make sense for how you vibe-code?
⚡ Why I think it fits the Vibe Coder workflow:
1. The "Magic Prompt" (Architecture as Context)
I wrote the documentation (ARCHITECTURE.md) specifically to be read by LLMs, not just humans.
- The Workflow: Drag the doc into Cursor/Claude context → Prompt "Build me a researcher agent" → It generates production-ready code that automatically hooks into the system.
- Question: Is this something you'd actually use, or do you prefer hand-rolling the setup?
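To make that concrete, here's a hypothetical sketch of the shape of code an LLM might generate from that prompt. None of the names below are Soorma's real API (the generated code would import from the framework itself); this is only an illustration of the output, not the framework.

```python
# Hypothetical scaffold of the kind an LLM might emit after reading
# ARCHITECTURE.md. Class and method names are illustrative stand-ins,
# NOT Soorma's real API.

from dataclasses import dataclass, field

@dataclass
class ResearcherAgent:
    """Listens for research requests and publishes findings."""
    name: str = "researcher"
    subscriptions: list[str] = field(default_factory=lambda: ["research.requested"])

    def handle(self, event: dict) -> dict:
        # Real generated code would call the LLM and the memory service here.
        topic = event.get("topic", "")
        return {"type": "research.completed", "summary": f"notes on {topic}"}

if __name__ == "__main__":
    print(ResearcherAgent().handle({"topic": "pgvector indexing"}))
```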
2. Hybrid Dev Mode (No Docker Pain)
I hate debugging inside containers. So I built a "Hybrid Mode":
- The Infra: Run "soorma dev" and your database, vector store, and event bus spin up in Docker.
- Your Code: Runs locally on your host machine. You can hit "Play" in VS Code, use breakpoints, and get instant feedback.
- Question: Is this fast enough for your iteration loops?
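For flavor, here's a minimal sketch of that loop, assuming "soorma dev" exposes Postgres on a local port. The env var name and connection defaults below are my assumptions, not documented values; you run the infra, then debug this file directly on the host.

```python
# Minimal sketch of the hybrid loop: `soorma dev` runs the infra in Docker,
# while this file runs on the host so breakpoints in VS Code just work.
# SOORMA_DB_URL and the default port are assumptions for illustration.

import os
import psycopg  # pip install "psycopg[binary]"

DB_URL = os.getenv("SOORMA_DB_URL", "postgresql://postgres:postgres@localhost:5432/soorma")

def main() -> None:
    with psycopg.connect(DB_URL) as conn:
        row = conn.execute("SELECT version();").fetchone()
        print(row)  # Set a breakpoint here and hit "Play".

if __name__ == "__main__":
    main()
```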
3. It comes with "Memory" out of the box
You don't need to learn LangChain or set up Pinecone. The framework has a Memory Service (Postgres + pgvector) built in. Your agents automatically get "Episodic Memory" (history) and "Semantic Memory" (RAG).
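For anyone curious what that buys you, here's a framework-free sketch of the pgvector mechanics behind episodic and semantic memory. The table layout and the toy embedding function are assumptions for illustration, not Soorma's actual schema or API.

```python
# What a Postgres + pgvector memory service does under the hood:
# append episodes, retrieve semantically by vector distance.
# Table names and the toy embed() are illustrative, not Soorma's schema.

import psycopg  # pip install "psycopg[binary]"; needs the pgvector extension

def embed(text: str) -> str:
    # Toy 8-dim "embedding" so the sketch is self-contained; a real service
    # calls an embedding model. Returns pgvector's '[x,y,...]' literal form.
    vals = [float(ord(c) % 7) for c in text[:8].ljust(8)]
    return "[" + ",".join(map(str, vals)) + "]"

with psycopg.connect("postgresql://postgres:postgres@localhost:5432/soorma") as conn:
    conn.execute("CREATE EXTENSION IF NOT EXISTS vector;")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS memories ("
        "id serial PRIMARY KEY, kind text, content text, embedding vector(8));"
    )
    # Episodic memory: append what happened, in order.
    conn.execute(
        "INSERT INTO memories (kind, content, embedding) VALUES (%s, %s, %s::vector);",
        ("episodic", "User asked about pricing", embed("User asked about pricing")),
    )
    # Semantic memory (RAG): nearest-neighbour lookup by cosine distance.
    rows = conn.execute(
        "SELECT content FROM memories ORDER BY embedding <=> %s::vector LIMIT 3;",
        (embed("pricing question"),),
    ).fetchall()
    print(rows)
```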
🛠️ Try it out (2 mins)
I’d love for someone to try this with their favorite AI editor and tell me if the DevEx holds up.
- Repo: github.com/soorma-ai/soorma-core
- The Guide: Zero to AI Agent in 10 Minutes (wrote a deep dive on the architecture here)
The Ask: If you spin this up, does it feel "heavy" or helpful? I'm trying to find the balance between "Enterprise Grade" and "Vibe Friendly."
Roast my architecture in the comments. 👇
This is an exciting initiative! Streamlining the AI agent development process is definitely a game changer for builders. The "Magic Prompt" feature sounds especially innovative—using LLMs to generate context-specific code can significantly accelerate development. I also appreciate your approach to hybrid modes to eliminate the common pain points developers face with Docker. I am curious about how the memory capabilities perform in practice; having episodic and semantic memory pre-built is a major time saver. I’ll try out Soorma Core and give you detailed feedback!
Let me know about your experience getting hands-on with soorma-core; I'd love to hear feedback from folks playing with the code.
What you built essentially centralizes state, memory, and orchestration into a predictable substrate, so the LLM can reason against stable contracts instead of ad hoc glue code.
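A tiny illustration of what "stable contracts" can mean in practice (the schema names here are mine, not Soorma's): events get explicit types, so an LLM generates handlers against a fixed shape instead of guessing at ad hoc dicts.

```python
# Illustrative event contracts; the names are hypothetical, not Soorma's.
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class WorkRequested:
    task_id: str
    topic: str

@dataclass(frozen=True)
class WorkCompleted:
    task_id: str
    summary: str

def handle(event: WorkRequested) -> WorkCompleted:
    # Any handler written against these types keeps the same contract,
    # no matter how the implementation behind it changes.
    return WorkCompleted(task_id=event.task_id, summary=f"done: {event.topic}")

print(asdict(handle(WorkRequested(task_id="t1", topic="pgvector"))))
```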
This is a solid pitch and hybrid dev mode alone will appeal to a lot of people who hate container debugging. What’s the smallest agent you think someone should build first to feel the value without getting overwhelmed?
Going with the simple definition of agent == LLM + memory + tools, and keeping with the philosophy of event-driven architecture, I'd suggest the following (a runnable sketch of the full flow comes after the list):
- create a simple tool with the command "soorma init --type tool" and fill in the business logic (e.g. define the event it listens on to perform a web search using "ddgs", and the event it publishes with the result)
- create an agent with the command "soorma init --type worker" and fill in the business logic (e.g. define the event it listens on for work requests, use the LLM to discover the tool and its event schema, submit the work request, listen for the tool's result event, use the LLM to summarize the results, and define the event to publish when the work is complete)
- write a client that interacts with the agent by publishing to the work-request event and listening on the work-complete event for results
The above can be simplified further, of course, by implementing the tool in the agent code itself and calling it synchronously (similar to other agent frameworks), and by removing the need for an LLM for dynamic discovery, summarization, etc.
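Here's a framework-agnostic sketch of that tool → worker → client flow, using an in-memory bus so it runs anywhere. Soorma's real bus, event names, and discovery mechanism will differ; the search call assumes ddgs's DDGS().text() interface, and the LLM steps are stubbed out with comments.

```python
# Framework-agnostic sketch of the tool -> worker -> client event flow.
# The Bus class stands in for Soorma's event bus; topic names are made up.

from collections import defaultdict
from typing import Callable

class Bus:
    def __init__(self) -> None:
        self.handlers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)
    def subscribe(self, topic: str, fn: Callable[[dict], None]) -> None:
        self.handlers[topic].append(fn)
    def publish(self, topic: str, event: dict) -> None:
        for fn in self.handlers[topic]:
            fn(event)

bus = Bus()

# Tool: listens for search requests, publishes results.
def search_tool(event: dict) -> None:
    from ddgs import DDGS  # pip install ddgs
    hits = DDGS().text(event["query"], max_results=3)
    bus.publish("search.result", {"task_id": event["task_id"], "hits": hits})

# Worker/agent: listens for work requests and delegates to the tool.
def worker(event: dict) -> None:
    bus.publish("search.request", {"task_id": event["task_id"], "query": event["topic"]})

def on_search_result(event: dict) -> None:
    # A real agent would summarize `hits` with an LLM before publishing.
    titles = [h.get("title", "") for h in event["hits"]]
    bus.publish("work.complete", {"task_id": event["task_id"], "summary": titles})

bus.subscribe("search.request", search_tool)
bus.subscribe("work.request", worker)
bus.subscribe("search.result", on_search_result)

# Client: publish a work request and listen for completion.
bus.subscribe("work.complete", lambda e: print("done:", e["summary"]))
bus.publish("work.request", {"task_id": "t1", "topic": "event driven agents"})
```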
It splits the persistent infra layer from your local dev loop so changes propagate fast without rebuilding everything. Would that speed up your agent testing cycles enough?
Yes, absolutely. Since the architecture is distributed and event-driven, each individual agent or tool component in a multi-agent system can be iterated on independently. The architecture allows for complete decoupling, with dynamic discovery and autonomous agent/tool selection using LLM-based reasoning.