I'm a co-founder of Calcapp, an app builder for formula-driven apps, and I recently received an email from a customer ending their subscription. They said they appreciated being able to kick the tires with Calcapp, but had now fully moved to an AI-based platform. So we're seeing this reality play out in real time.
The next generation of Calcapp probably won't ship with a built-in LLM agent. Instead, it will expose all functionality via MCP (or whatever protocol replaces it in a few years). My bet is that users will bring their own agents -- agents that already have visibility into all their services and apps.
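For the curious, here's a rough sketch of what that exposure could look like using the official MCP Python SDK's FastMCP helper -- the server name, tool, and formula-engine hook below are placeholders, not Calcapp's actual API:

    # Minimal MCP server sketch (hypothetical tool, not Calcapp's real API).
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("calcapp")  # placeholder server name

    @mcp.tool()
    def evaluate_formula(formula: str) -> str:
        """Evaluate a formula and return the result as text."""
        # A real server would call into the formula engine here.
        return f"(pretend this is the result of {formula!r})"

    if __name__ == "__main__":
        mcp.run()  # stdio transport by default, so any MCP-capable agent can connect

The point is that whatever agent the user already brings along can discover and call these tools itself, without us having to ship one.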
I hope Calcapp has a bright future. At the same time, we're hedging by turning its formula engine into a developer-focused library and SaaS. I'm now working full-time on this new product and will do a Show HN once we're further along. It's been refreshing to work on something different after many years on an end-user-focused product.
I do think there will still be a place for no-code and low-code tools. As others have noted, guardrails aren't necessarily a bad thing -- they can constrain LLMs in useful ways. I also suspect many "citizen developers" won't be comfortable with LLMs generating code they don't understand. With no-code and low-code, you can usually see and reason about everything the system is doing, and tweak it yourself. At least for now, that's a real advantage.
Sorry to hear about the customer churn, but the MCP-first strategy makes sense to me and seems like it could be really powerful. I also suspect that the bring-your-own-agent future will be really exciting, and I've been surprised we haven't seen more of it play out already.
Agree there will be a place for no-code and low-code interfaces, but I do think it's an open question where the value capture will land -- with SaaS vendors, or with the LLM providers themselves.
I highly suggest you expose functionality through GraphQL. It lets users send out an agent with a goal like "figure out how to do X", and because GraphQL has introspection, the agent can find what it needs pretty reliably! It's really lovely as an end user. Best of luck!
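To make the introspection point concrete, here's a tiny Python sketch against a made-up endpoint, using nothing but the standard __schema query:

    # Pull a GraphQL schema via introspection so an agent can discover
    # available operations on its own. The endpoint URL is a placeholder.
    import requests

    INTROSPECTION_QUERY = """
    {
      __schema {
        queryType { name }
        types { name fields { name description } }
      }
    }
    """

    resp = requests.post(
        "https://example.com/graphql",  # hypothetical API endpoint
        json={"query": INTROSPECTION_QUERY},
        timeout=10,
    )
    resp.raise_for_status()

    # Print every user-defined type and its fields -- exactly the map an
    # agent needs in order to figure out how to do X.
    for gql_type in resp.json()["data"]["__schema"]["types"]:
        if gql_type.get("fields") and not gql_type["name"].startswith("__"):
            print(gql_type["name"], [f["name"] for f in gql_type["fields"]])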
I think this view is really short-sighted. Low-code tools date back to the '80s, and the more likely outcome here is that low-code and agentic tools simply merge.
There's a lot of value in having direct manipulation and visual introspection of UIs, data, and logic. Those things allow less technical people to understand what the agents are creating, and ask for help with more specific areas.
The difficulty in the past has been 1) the amount of work it takes to build good direct-manipulation tools -- the level of detail you need to get to is overwhelming for most teams attempting it, though LLMs themselves now make these a lot easier to build -- and 2) what to do when users hit the inevitable gaps in your visual system. Now LLMs fill those gaps pretty spectacularly.
This makes the most sense to me too. My feeling is so-called AI is going to deliver on a lot of the things we're used to having shoddy versions of -- good natural language interfaces, good WYSIWYG-type tools. All of this could turn the Wix/Squarespace/WordPress/etc. landscape into something pretty good, rather than just OK.
In my most hopeful of futures, we've figured out how to do lightweight inference, and if the models don't run locally at least they aren't harming the planet, and all this AI tooling hydrates all the automation projects of the last 40 years so that my favorite tiny local music label can have a super custom online shop that works exactly the way they need without having to sacrifice significant income to do it.
I agree. I think that once your LLM hits a baseline level of computer science / programming "understanding", it can pretty easily work with whatever language. Using narrow DSLs and low-code platforms could be a great way to constrain an LLM and keep it on the happy path.
It's not just about the language. The good money for these low-code tools is in larger organizations, which have deployment/hosting/compliance/maintenance concerns that need to be accounted for. You can knock out as many apps as you like on whatever platform you want, but at the IT gatekeeper level they don't want them.
They want a tool that makes this file share talk to this SharePoint site, which updates this ERP tool over there. The LLM approach is great for the departmental person (if they can still host shadow IT) but falls down at the organizational level. The nature of this work is fundamentally different, crappier, and less interesting than what any person on HN wants to be doing, which contributes to the misunderstanding of this market.
EDIT: fixed grammar.
Mendix is nothing more than MS Access for the Web.
OK, funny anecdote: I once did a special assignment for the CEO of Mendix to build a converter for MS Access apps to Mendix. So, yes, I can confirm that this is roughly true ;)
I wrote a short post about it on my blog: https://blog.waleson.com/2022/10/access2mendix.html
> the cost of shipping code now approaches zero
Does anyone actually believe this is the case? I use LLMs to ‘write’ code every day, but it’s not the case for me; my job is just as difficult and other duties expand to fill the space left by Claude. Am I just bad at using the tools? Or stupid? Probably both but c’est la vie.
It would probably have been more accurate to say "the cost of writing code" -- and you're totally right about the rise of other duties (and technologies) that expand to fill that gap.
As a dev team, we've been exploring how we grapple with the cultural and workflow changes that arise as these tools improve -- it's definitely an ongoing and constantly evolving conversation.
Same here. I use Claude Code every day, very useful, but nowhere near the point where I don't have to jump in and fix very simple stuff. I actually have a bug in an app that I don't fix because I use it as a test for LLMs, and so far not one has been able to solve it -- and it's a CSS bug!
It's those who are shipping easily who are stupid. And what I mean by that is you can just ask the LLM to use the browser to get API keys and then use them to deploy. That's how the cost of shipping is zero. A hefty amount of YOLO code on top of YOLO deploy. I mean, you could also have the LLM build you a CI/CD pipeline, but that's not YOLO.
Dumb question: what's the difference between "low-code" and "libraries+frameworks"?
Usually the point of a library or framework is to reduce the amount of code you need to write, giving you more functionality at the cost of some flexibility.
Even in the world of LLMs, this has value. When it adopts a framework or library, the agent can produce the same functionality with fewer output tokens.
But maybe the author means, "We can no longer lock in customers on proprietary platforms". In which case, too bad!
> Dumb question: what's the difference between "low-code" and "libraries+frameworks"?
There's not much technical difference.
The way those names are used, "low-code" is focused on inexperienced developers and prefers features like graphical code generators and ignoring errors. "Frameworks", on the other hand, are focused on technical users and prefer features like API documentation and strict languages.
But again, there's nothing in the definition of those names that requires that focus. They are technically the same thing.
I was going through my low-code bookmarks and I found this jewel from November 2021 (1 year BC-GPT = Before ChatGPT):
Low-Code and the Democratization of Programming: Rethinking Where Programming Is Headed
https://www.oreilly.com/radar/low-code-and-the-democratizati...
Interesting article -- my take on low-code has always been less about how much time the initial development of an application takes, and more about how it can ease the long-term maintenance of that application. With AI tooling it is going to be easy for companies to spin up hundreds of internal applications, but how are they accounting for the maintenance and support of those applications?
Think of the low-code platform as a place to host applications where many (not all) of the operational burdens of long-term maintenance are shifted to the platform, so that developers don't have to spend as much time doing things like library upgrades or switching to X new framework because the old framework is deprecated.
Very correct! Why internal dashboards keep getting rebuilt: https://www.timestored.com/pulse/why-internal-dashboards-get... It took me a few years to home in on the exact idea you've captured, and I work in this exact area. There's a middle layer between the UI team and notebook experiments that isn't worth companies building themselves.
Auth is a pretty classic case where it's not hard to make your own account creation/login form, but it's really hard to make a good one that does all the "right things".
Authentication and authorization are important requirements for internal tools. Low-code platforms support authn/authz for app access. Building internal tools with code is much easier now with GenAI, but ensuring proper RBAC access controls remains a challenge.
I have been building https://github.com/openrundev/openrun to try and solve internal tooling deployment challenges. OpenRun provides a declarative deployment platform which supports RBAC access controls and auditing. OpenRun integrates with OIDC and SAML, giving your code based apps authn/authz features like low-code platforms.
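To make the RBAC point above concrete, here's a generic sketch of the kind of permission check involved (the roles and decorator are illustrative only, not OpenRun's actual configuration or API):

    # Toy role-based access control check; roles and permissions are made up.
    from functools import wraps

    ROLE_GRANTS = {
        "viewer": {"read"},
        "editor": {"read", "write"},
        "admin": {"read", "write", "deploy"},
    }

    def require_permission(permission):
        def decorator(func):
            @wraps(func)
            def wrapper(user_role, *args, **kwargs):
                if permission not in ROLE_GRANTS.get(user_role, set()):
                    raise PermissionError(f"{user_role!r} lacks {permission!r}")
                return func(user_role, *args, **kwargs)
            return wrapper
        return decorator

    @require_permission("deploy")
    def deploy_app(user_role, app_name):
        print(f"deploying {app_name}")

    deploy_app("admin", "internal-dashboard")    # allowed
    # deploy_app("viewer", "internal-dashboard") # raises PermissionError

Wiring checks like this up correctly for every code-based internal tool is exactly the part that low-code platforms used to handle for you.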
This is a good example, but the build vs buy decision in this case also includes viable open source options, which become even more attractive when LLMs reduce the implementation + maintenance barriers.
> "For us, abandoning low-code to reclaim ownership of our internal tooling was a simple build vs buy decision with meaningful cost savings and velocity gains. It also feels like a massive upgrade in developer experience and end-user quality of life. It’s been about 6 months since we made this switch, and so far we haven’t looked back."
Fascinating but not surprising given some of the AI-for-software development changes of late.
"While it’s possible low-code platforms will survive by providing non-technical users with the kind of magical experience that’s already possible for developers with AI coding tools today" - that magical experience is available to non-technical users today already. The last barrier is deployment / security / networking / maintenance, but I'm assuming here are a lot of startups working on that.
In a way, low-code has been the worst of both worlds: complex, locked-in, not scalable, expensive, with small ecosystems of support for self-learning.
(Context: worked at AppSheet, which got acquired by Google in 2020)
My favorite part of "low code" was the conciseness of conveying what the process was and the expected outcome.
In an ideal world, we come to a consensus on best practices for specifications to feed into AI, especially for random non-tech companies looking for internal LOB applications. On the other hand, that would require us to care about documentation again...
AKA the best parts of SWE, imo
LLMs can assist you to write a shitload of useless bloatware, or can assist you to take something existing and complicated, and create something minimal that is almost as good. It's up to you.
You can produce a shitload of useless bloatware and not come close to the bloated uselessness of a large-scale enterprise software platform.
Well, where I work these two trends are merging. Enterprise bloat plus LLM bloat. And leadership is "excited" about how much more code and how many more applications IT can deliver this year.
They should be! I think this is going to be a pretty transformational couple years for IT. To be honest, I've spent a bunch of my career being kind of skeptical about IT (software developer chauvinism), and I think the rug is getting pulled out from under that mentality.
Exactly, the phenomenon is not exactly new :)
Low code is not disappearing -- it is simply changing.
What do you think an LLM is if not no/low-code?
And all the other components, such as MCPs, skills, etc. -- this is all low-code.
And someone has to plug all of these into a coherent system like Claude Code, Copilot, etc., which is basically a low-code interface. Sure, it doesn't come with a workflow-style designer, but it does the same thing.
As far as the vibe-coded projects go, as someone who has personally made this mistake twice in my career and promised to never make it again, sooner or later the OP will realise that software is a liability with and without LLMs. It is a security, privacy, maintenance, and general business burden, and a risk that needs to be highlighted in every audit and at every step.
When you start running the numbers, all of these internal vibe-coded tools will cost 10-20x what the original subscriptions did, paid indirectly.
> What do you think an LLM is if not no/low-code?
An LLM is not low code. It's something that generates the thing that does the thing.
Most of the time it generates 'high' code. That high code is something that looks like hieroglyphics to non developers.
If it generated low code, then it's possible that non-developers could have something that is comprehensible to them (at least down to the deterministic abstraction presented by the low-code framework).
Low-code and no-code tools abstract away the idea of code being used. You are working with high-level concepts like workflows or, in the case of coding assistants, PRDs, etc. These coding assistants produce code, but if you don't read the code at all, it might as well not exist. Users of Lovable (like my 10-year-old making a website) have a vague idea that there is code behind all of this, but frankly it doesn't really matter. So these are technically low-code tools -- not in the traditional workflow sense, but low-code tools nonetheless.
Great to see your name here, Zack. I think the problem with low-code is that it's a catch-all that spans the primarily data-storage-and-use work (Airtable, Quick Base, FileMaker, etc.), the primarily app-alternative platforms (Retool, Mendix, etc.), and the ETL tools.
To me, AI changes the inflection points of build vs buy a bit for app platforms, but not as much for the other two. Ultimately, AI becomes a huge consumer of the data coming from impromptu databases, and becomes useful when it connects to other platforms (I think this is why there is so much excitement around n8n, but also why Salesforce bought Informatica).
Maybe low-code as a category dies, but just because it is easier for LLMs to produce working code, doesn't make me any more willing to set up a runtime, environment, or other details of actually getting that code to run. I think there's still a big opportunity to make running the code nice and easy, and that opportunity gets bigger if the barriers to writing code come down.
Great point, and I agree the catch-all nature of the category feels overly broad. At our company, we've felt this shift most clearly on the app-building side so far, but I'm curious to see how the low-code data applications fare as context windows grow and the core LLM providers improve their collaboration tools, governance, and the UX of on-demand app creation. And nice to see you, too!
I always felt that the biggest problem with low code was the wall you hit when you tried to change something small. You had to fight the tool just to make the button look the way you wanted. AI gives you the speed of low code but allows you to build anything you can imagine. It makes sense to stop paying for tools that limit your freedom.
I don't see LLMs as competing with low code at all. Low code solutions make it easier for LLMs to setup something working and robust.
Yes, if the low-code solution has a good interface for LLMs. There are plenty of low-code / no-code solutions with a purely graphical interface and nothing else. Dinosaurs.
Ah yeah, anything that's strictly GUI-driven is probably not in a good space right now. Low code, but code-first, is the ideal spot.
>the cost of shipping code now approaches zero.
Is this a commonly held assumption?
The people saying that never specify what code is being deployed.
I can get assembly from /dev/urandom for cents on the TB.
I thought the point would be that the number of lines of code an LLM produces to solve a given problem is... not low.
SAP the lowest on both axes... had to crack up at that
I can only speak for my work. We tried for a few months to use Retool and n8n, but the calculus just wasn't there when I could spin up internal dashboard tools in less time with better functionality and less work. It was truly a win/win/win all around.
Things I built for internal use pretty quickly:
- Patient matcher
- CRUD with great UX for some tables
- Fax tool
- Referral tool
- Interactive suite of tools to use with Ashby API
I don't think these no-code tools have much of a future. Even using the no-code tool's version of "AI" was just the AI trying to finagle the no-code featureset to get where I needed it to be, failing most of the time. Much easier to just have Claude Code build it all out for real.
How much of it is because the VC funding dried up and went to LLMs instead?
Broken link, should point to https://www.zackliscio.com/posts/rip-low-code-2014-2025/
Thanks for flagging! Reached out to the HN team to update -- can't delete or edit once there are comments.
Apologies, mods, please accept this as approval to detach my top comment from the thread once fixed.
>> in a world where the cost of shipping code now approaches zero.
Really?