The multi-LLM approach is a great direction and I like the polished feel of the application. Moving up to a more project-management approach is welcome. I will download and give it a go.
One thought - vendors like cursor.ai have the benefit of highly tuned prompts, presumably by programming language, as the result of their user bases. How is it possible to compete with this?
On another note, I have played around with v0 etc, but AFAIK there is no really good UX/UI AI tool that can effectively replace a designer in the way that coding tools are replacing engineers (to a certain extent).
On prompts: We've been competing with Cursor for the last 2 years in the enterprise with Zencoder, and winning nice deals based on quality. At some point, we were very protective of our prompts, but two things happened: -most of the coding vendors' prompts were leaked, there are repos online that have prompts from a bunch. The moment you allow a custom end-point for LLM, your prompts are sniffable. -agents became better at instruction following, so a lot of prompting changed to "less is more".
So with these two industry trends, we reversed course:
- Moved our harness into a CLI. This exposes our tips and tricks, but is better for user privacy and for the user's ability to tinker with the harness. For example, it allows a setup where no code leaves your perimeter (if you use the local harness and a "local" model, where "local" means different things for different people).
- Opened up the workflows in Zenflow (they are in markdown and editable).
Spent a bit of time with the app. I use Zencoder plugins for my personal projects, so I'm already familiar with their ecosystem.
First of all, kudos for the nice UI. I like it when apps look good. The onboarding process was smooth. I paired it with Zencoder's agent (as mentioned, I use their VSCode plugin and already had a sub).
I used it to implement a small refactoring for my side project. What I like compared to the plugins is that I did not have to switch between agents or explicitly ask for a plan/spec. I guess that's one of the core ideas behind the app, and it feels really AI-ish because it's not a code editor (similar to Claude Code). The only thing I missed in the process is rendered markdown for previews. But I have not used the app for long, maybe there is an option to render markdown.
Overall great experience so far. Gonna explore it more. Wanna try it with Gemini and Claude Code. Again, kudos that it's not locked to Zencoder's agents only.
Thank you for the kind words! Point taken on the markdown rendering.
This looks fantastic and I'm excited to try it today!
One question: I see this supports custom workflows, which I love and want to try out. Could this support a "Ralph Wiggum"-style [0][1] continuous loop workflow? This is a pattern I've been playing around with, and if I could implement it here with all the other features of this product, that would be pretty awesome.
[0] https://paddo.dev/blog/ralph-wiggum-autonomous-loops/
[1] https://github.com/onorbumbum/ralphio
In short, yes:
Create a new task with your prompt, and hit "Create" (instead of "Create and Run"). The interface will show a little hint "Edit steps in plan.md", with 'plan.md' being clickable. Click on it and edit it, experimenting with some ideas. (Bonus tip: toggle "Auto-start steps" to keep it Ralph-y.)
I just winged the workflow below, and it worked for the prompt I threw at it. If you like it, you can save it as your custom workflow and reuse it in the future. If you don't like it, change it to your preference.
Now, I prefer a slightly different flow: Implement > Review > [Fix] (and typically limit the loop to 3 iterations to avoid "divergence"). We'll ship some pre-built templates for that soon; there's a rough sketch after the quick-change workflow below. Our researchers are currently testing variations on our private datasets.
---
# Quick change
## Configuration
- *Artifacts Path*: {@artifacts_path} → `.zenflow/tasks/{task_id}`
---
## Agent Instructions
This is a quick change workflow for small or straightforward tasks where all requirements are clear from the task description.
### Your Approach
1. Proceed directly with implementation
2. Make reasonable assumptions when details are unclear
3. Do not ask clarifying questions unless absolutely blocked
4. Focus on getting the task done efficiently
This workflow also works for experiments when the feature is bigger but you don't care about implementation details.
If blocked or uncertain on a critical decision, ask the user for direction.
---
## Workflow Steps
### [ ] Step: Implementation
Implement the task directly based on the task description.
1. Make reasonable assumptions for any unclear details
2. Implement the required changes in the codebase
3. Add and run relevant tests and linters if applicable
4. Perform basic manual verification if applicable
Save a brief summary of what was done to `{@artifacts_path}/report.md` if significant changes were made.
After you are done with the step add another one to `{@artifacts_path}/plan.md` that will describe the next improvement opportunity.
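And here is the rough sketch of the Implement > Review > [Fix] variant I mentioned above, in the same workflow format. Treat it as an illustration rather than the final template; the step names, the reviewer wording, and the 3-iteration cap are placeholders until the pre-built templates ship:

---
# Implement, review, fix (sketch)

## Configuration
- *Artifacts Path*: {@artifacts_path} → `.zenflow/tasks/{task_id}`

---

## Workflow Steps

### [ ] Step: Implementation
Implement the task based on the task description. Make reasonable assumptions for unclear details, run relevant tests and linters if applicable, and save a brief summary to `{@artifacts_path}/report.md`.

### [ ] Step: Review
Review the implementation against the task description (ideally with a different agent configured as the reviewer). Write the findings, ordered by severity, to `{@artifacts_path}/review.md`. If there are no findings, state that explicitly and stop.

### [ ] Step: Fix
Address the findings from `{@artifacts_path}/review.md`. If findings remain afterwards, append another Review step and Fix step to `{@artifacts_path}/plan.md`, but do not exceed 3 review/fix iterations in total, to avoid divergence.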
Awesome, thanks! I will try this out today.
Tested it for a while. Great that I can finally run my SDD workflows easily without juggling a bunch of Claude Code commands in the terminal.
Also, I found an unexpected use case for it. Even when I need to change only a couple lines of code, I just run the quick fix workflow, because Zenflow automatically creates the worktree, branch, commit, etc., and a PR is created with a few clicks. It seems like a minor thing, but it irritates me a lot to do all this stuff myself for small changes. One thing I miss here is automatic PR name and description creation according to the templates my company uses.
Agreed. We planned to add it for Zencoder subs. For the free product, if we included it, the API could be abused (unfortunately, we learnt that lesson the hard way), and doing it through your other CLI is just too slow for a nice UX (they take time to spin up). We'll play with it a bit more, maybe there's a happy path we missed.
Is it possible to see the prompt files for the built-in workflows? I find them to be quite good, but not exactly what I want. My preference would be to simply tweak them slightly rather than starting from scratch with a custom workflow.
Yep. Create a new task, and hit "Create" (instead of "Create and Run"). The interface will show a little hint "Edit steps in plan.md", with 'plan.md' being clickable. Click on it, and it will open up in full glory for you.
Ah, perfect. Thank you!
> “…agents getting stuck in loops, apologizing, and wasting time. We tried to manage this … So we built Zenflow…Cross-Model Verification, Parallel Execution, Dynamic Workflows”
So you got me with the hook, and you bullet three features, but where’s the resolution of the hook issue? You left me with the hook?? What am I missing?
I think the easiest is just to try it (it's a free download that works with your CC/Codex/Gemini CLI). Pick a mid-size task that you would typically throw at an agent (not something trivial, but also not something that would bring a mid-level engineer to their knees), and run it in Zenflow's "Spec and Build" mode.
For us, in this scenario:
1) The pipeline helps the agent perform better.
2) Reviewing the spec is much more convenient than juggling between a TUI and a text editor (especially if you are running 5 of those pipelines in parallel).
3) If you configure the reviewer in the settings, cross-agent review saves us from some of the minutiae of guiding/aligning the agent.
Lmk if I misunderstood your question, happy to help.
It looks like the main difference in the plans is the premium LLM calls, but what does that add if I just bring my own key?
I am just a hobbyist but was curious how you’re thinking through the pricing plans.
In our IDE plugins, we support BYOK: https://docs.zencoder.ai/features/models I'll check with the team how to do it in Zenflow.
Meanwhile, you can BYOA - bring your own agent. If you are a hobbyist, Gemini is free with Gmail (but they WILL train on your data). And if you have a ChatGPT sub, you can use the Codex CLI with Zenflow for no extra charge (and they don't train on paid users' data).
no linux version :( also, how does this compare to something like vibe-kanban?
The multi-model approach makes sense. We've noticed different models handle different things better, so being able to run them side by side is pretty useful. The dynamic workflow stuff is neat. Most tools are too rigid once you start. Will be interesting to see how it handles unexpected turns.
Thanks, keep us posted. We are thinking of launching a gallery for workflows at some point, so people could pick the best one for specific situations.
This is the next step after SDD: a system that enforces the orchestration to execute the spec. I appreciate that you can bring your own agent (Claude Code, Codex, Gemini, Zencoder).
I suspect that we'll eventually loop back to formal specifications, with formal or semi-formal verification that the implementation matches the specification, but with agents writing the actual code.
ok, this tool really did some magic for my poor prompt "create a game like google chrome has when wifi is off". i also like that it kinda teaches me proper spec development and thinks for me about the tests and things i needed to specify in the first place.
nice
ty!
The app is really good, gave it a quick spin earlier, quite like it!
,d88b.d88b,
88888888888
`Y8888888Y'
  `Y888Y'
    `Y'

FYI: advanced tracking protection in Firefox breaks your download form
Quick fix:
Apple Silicon (ARM64): https://download.zencoder.ai/zenflowapp/stable/0.0.52/app/da...
Intel (x64): https://download.zencoder.ai/zenflowapp/stable/0.0.52/app/da...
We'll figure out the FF script blocking.
:-0 thanks for letting us know, will get back to you on this asap
The language used on the website is very fresh! They are brave enough to call bad AI output "slop", which immediately makes me think they are trustworthy, that they are in the know. An AI bandwagoner wouldn't be brave enough to call it slop.
Then there's a blurb about the CEO who claims "AI doesn't need better prompts. It needs orchestration." which is something I have always felt to be true, especially after living through highly engineered prompts becoming suddenly useless when conditions change because of how brittle they are.
I might even give this a shot and I usually eschew AI plugins because of how cloud connected they are.
I am a nobody, but I think these people are making a bunch of right moves in this AI space.
Thank you!
Interesting! Just tested and it's quite impressive so far. I use multi-agent workflows on a daily basis. Overseeing them and limiting hallucinations has become a major pain for me. This is much needed.
thank you!