Just be very careful to understand what CPU time means before you go ahead and sink time into this. They don't flag it immediately, but once you go past a very small threshold of used CPU time, they'll start aborting requests. This doesn't happen (as harshly) on fully paid accounts, of course.
But since many are comfortably being dragged into the Cloudflare vortex through their otherwise generous free offers, you'll find that the Cloudflare Worker CPU time limit can turn into a huge waste of time after the fact: the worker code you converted a few days ago, and were all joyful about, suddenly starts failing a few days later.
Addendum: just to illustrate the moment where you'll trip over it: here the docs casually mention the default limit being 30s, without making clear that this *only* applies to paid accounts. Only somewhere further down is there a tiny mention of 10ms! https://developers.cloudflare.com/workers/platform/limits/#c...
Here is the only other mention of it: https://developers.cloudflare.com/workers/platform/pricing/
So, if your script can get by with a max of 10 milliseconds of CPU time per invocation (not wall-clock runtime), you'll be fine. You will, however, and this is crucial, only realize this a few days in: they take the average, eventually cap you, and the worker stops responding.
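For what it's worth, on the paid plan the per-invocation CPU budget is a knob in the Wrangler config; a sketch (the `cpu_ms` field is the relevant setting, the worker name here is illustrative, and on the free plan you stay pinned at roughly 10 ms regardless):

```toml
# wrangler.toml (sketch; the [limits] block only takes effect on paid plans)
name = "my-worker"
main = "src/index.js"

[limits]
# CPU milliseconds per invocation. Time spent awaiting I/O (fetch, KV, D1)
# is wall-clock time and does not count against this budget.
cpu_ms = 50
```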
The really annoying thing about Cloudflare is that Workers don't belong to zones (i.e. editing any worker is an account-level permission, either Read only or Edit), so you can't scope a particular user or API key to a particular set of workers.
This means you can’t physically set different permissions between prod and dev workers, which is a disaster waiting to happen.
(You can’t just make a second Cloudflare account for Prod, because it won’t let you bind single sign-on to two different accounts…)
It also means any employee in the company can just open a dev branch, print out the dev deploy key (from the Pipeline), and use it to deploy to prod. It’s currently impossible to block or mitigate.
> can’t just make a second Cloudflare account for Prod
Multi account support when you pay for enterprise.
Thanks, didn’t know. Starting at $2k/month though.
As someone growing up with shared hosting, VPS and eventually K8s, I never really got Cloudflare's offering (apart from CDN/DDOS/DNS). I'm not sure if it's their positioning or if I never had the problems they're trying to solve, but it just doesn't click for me. Durable objects, Wrangler, D1, some custom Node.js API... it's all kind of opaque to me how it really solves any problem better than just using Postgres, Redis, etc on top of K8S or something like that.
Edge compute.
The workers execute from the same colos as the CDN, which are regionally distributed. They respond fast because they are physically close to the visitor, and Cloudflare limits which runtimes they support to only very highly optimized ones.
And for my money, any platform that doesn’t require K8s is superior to any which does.
Cloudflare seems to exclusively offer "serverless" products, which rules out applications like Postgres (or any other "standard" database technology).
Why don't they just offer "managed Postgres"? Because their infrastructure is kept as homogenized as possible, they don't host arbitrary services or software. The only customizable code made available to customers is things like Workers, which are deliberately constrained (in execution time, resource usage, etc.) to, again, keep all their infrastructure homogenized.
Most of their other products are to provide supplementary capabilities to workers.
For example, their durable objects are comparable (in terms of technical approach, problems they solve and trade-offs) to AWS's DynamoDB or Azure's Cosmos DB. These products are distributed by nature and work very well for certain kinds of projects and not so well for others. They're also fully in line with the generally homogenous infrastructure that Cloudflare is engineered to work on.
In summary, Cloudflare has essentially homogenous infrastructure globally and is able to make their extensive edge infrastructure available to customers for customized applications by constraining it to "serverless" offerings. For customers that can work within the trade-offs of these serverless products, it's an appealing product.
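To make the Durable Objects comparison concrete, here is a minimal counter sketch; the class shape and the `state.storage` get/put calls follow the documented Workers API, while the class name and key are illustrative:

```javascript
// Sketch of a Durable Object-style counter. Each object instance is
// single-threaded, so this read-modify-write needs no extra locking;
// that per-key serialization is the DynamoDB/Cosmos-like draw.
class Counter {
  constructor(state, env) {
    this.state = state; // state.storage is a transactional key-value store
  }

  async fetch(request) {
    let n = (await this.state.storage.get("count")) ?? 0;
    n += 1;
    await this.state.storage.put("count", n);
    return new Response(String(n), { status: 200 });
  }
}
```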
Same for me; the things you mentioned either felt like stuff for the edge or like a "convoluted hobby project", along with maybe some CV padding. Perhaps we need to buy into the full ecosystem to understand the value.
It's just marketing bullshit. Make no mistake, the people using those things don't understand much more than you do; they are just going after shiny new toys, because that's much easier than building something solid that lasts and is cost-effective.
I really like the offering that Cloudflare has with workers, but for me they just seem to be lacking some DX tooling/solutions. Debugging is hell, but for quick projects like this I'll definitely look into it again. These days Railway is my go-to for hosting "throw-away" projects.
I always wonder how it's going for folks who use Cloudflare Workers as their main infra.
> Debugging is hell
Most people won't care because the extent of their debugging skills is console.log, echo, print. repeat 5000 times.
printf() debugging is still considered a best practice in the eyes of many. I still remember being really surprised when I heard my famous (Turing award-winning) CS professor tell the class this for the first time.
https://tedspence.com/the-art-of-printf-debugging-7d5274d6af...
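In that spirit, printf debugging works best as a few deliberate checkpoints rather than 5000 scattered prints; a small sketch (the `median` function and its log lines are made up for illustration):

```javascript
// Printf-style debugging: log state at the boundaries you already reasoned
// about, then compare the output against your expectations.
function median(values) {
  const sorted = [...values].sort((a, b) => a - b); // copy first: .sort() mutates
  console.log("median: sorted =", sorted);          // checkpoint 1: is the order right?
  const mid = Math.floor(sorted.length / 2);
  const result =
    sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
  console.log("median: mid =", mid, "result =", result); // checkpoint 2: what we return
  return result;
}

median([3, 1, 2]); // logs the sorted array and the chosen middle element
```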
You missed out the important bit - think about the problem and data flow first.
After that it doesn’t matter much which tool you use to verify assumptions.
> Most people won't care because the extent of their debugging skills is console.log, echo, print. repeat 5000 times.
I don't agree. The first thing any developer does when starting out a project is setting up their development environment, which includes being able to debug locally. Stdout is the absolute last option on the table, used when all else fails.
I don't agree! It's easiest to printf() things since you don't have to have tooling to debug every language you want to work with!
while you should know this for anything you're proficient in, I usually reach for printf since it's usually quicker than messing with a debugger :)
You guys are probably gonna laugh me out of the room, but I use a combination of both printing and debugging tools when identifying issues.
I'm building my entire back-end on CF Workers. The DX was really frustrating starting out, but I'm using Rust/WASM, which means most of my issues get caught at compile time. My suggestion: avoid all the CF offerings (DB, Pages, KV, etc.) and stick with just Workers. They're pretty stable and reliable (more so than Cloudflare itself, hehe), and once you figure out their rough edges, you'll be fine.
What DB do you use? I tried the same for while but eventually gave up because it was incredibly restrictive and not much cheaper than a self managed VPS with some Docker containers. I mean the biggest thing that could happen to me is landing on the HN front page and a $5 per month VPS can manage that easily
You won't beat a good self-managed VPS with some docker containers unless you start adding criteria like SLAs and whatnot.
Then you'll still not beat a good self-managed VPS but you'll have someone else to blame
I'm not much of a devops person, but if you run your own DB on a VPS with Docker containers, don't you also need to handle all of this manually?
1) Creating and restoring backups
2) Unoptimized disk access for db usage (can't be done from docker?)
3) Disk failure due to non-standard use-case
4) Sharding is quite difficult to set up
5) Monitoring is quite different from normal server monitoring
But surely, for a small app, one big server running the DB is probably still much cheaper. I just wonder how hard it really is and how often you actually run into problems.
My guess is some people have never worked with the constraints of time and reliability. They think setting up a database is just running a few commands from a tutorial, or they're very experienced and understand the pitfalls well; most people don't fall into the latter category.
But to answer your question: running your own DB is hard if you don't want to lose or corrupt your data. AWS is reliable and relatively cheap, at least during the bootstrapping and scaling stages.
CF Workers are V8 isolates, FYI.
AWS RDS. I have no intention of managing my own DB and the insanity that comes with that.
> I'm building my entire back-end on CF Workers. The DX was really frustrating starting out, but I'm using Rust/WASM, which means most of my issues get caught at compile time.
Cloudflare Workers support WASM, which is how they support any runtime beyond JavaScript. Cloudflare Worker's support for WASM is subpar, which is reflected even in things like the Terraform provider. Support for service bindings such as KV does not depend on WASM or anything like that: you specify your Wrangler config and you're done. I wonder what you are doing to end up making a series of suboptimal decisions.
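For reference, a KV binding really is just declarative Wrangler config, independent of whether the Worker itself is JS or WASM; a sketch (the binding name is illustrative and the namespace id is a placeholder):

```toml
# wrangler.toml -- declares a KV namespace binding; no WASM involvement.
name = "my-worker"
main = "src/index.js"

kv_namespaces = [
  { binding = "MY_KV", id = "<your-namespace-id>" }
]
# The Worker then reads/writes via env.MY_KV.get(...) / env.MY_KV.put(...).
```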
The bindings are still done at the JS level. But to answer your question, I'm building a git workflow engine (kind of a lightweight GitHub Actions alternative; see https://codeinput.com). In that context, you get lots of events and payloads from Git/GitHub that typically require very little resources to respond to or relay.
The worker model made sense, so I developed the whole app around it. Now of course, knowing what I know today, I might have considered different options. But at the time, I read the description (and had some Cloudflare Workers experience) and thought this looked good.
> I really like the offering that Cloudflare has with workers, but for me they just seem to be lacking some DX tooling/solutions.
Cloudflare in general is a DX mess. Sometimes its dashboard doesn't even work at all and is peppered with error messages. Workers + Wrangler + its tooling don't even manage to put together a usable or coherent changelog, which makes it very hard to track how and why their versions change.
Cloudflare is a poster child of why product managers matter. They should study AWS.
Yeah, one time I tried to do something and the button didn't do anything; only after I opened DevTools did I see the error message in the response body.
Cloudflare Workers has really improved lately, e.g. "Observations" and "Metrics", and on top of that their product suite keeps growing all the time. If you use Astro[1] together with Cloudflare then you have a solution that is at least on par with NextJS and Vercel, but that only costs a fraction. My latest project[2] also uses Astro and Cloudflare and it is rendered on the "edge" (i.e. SSR) in about 100ms – you won't get better performance.
[1]: https://astro.build
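For anyone curious, the Astro-on-Cloudflare setup boils down to the official adapter in the Astro config; a minimal sketch with default options:

```javascript
// astro.config.mjs -- SSR on Cloudflare via the official @astrojs/cloudflare adapter.
import { defineConfig } from "astro/config";
import cloudflare from "@astrojs/cloudflare";

export default defineConfig({
  output: "server",      // render on request ("edge" SSR) instead of at build time
  adapter: cloudflare(), // emits a Worker as the deploy target
});
```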
Until they allow deploying native code for serverless, they aren't on par with Vercel.
Yeah it's great for toy/hobby projects with little complexity or features, but as often is the case with these kinds of platforms, running a substantial app on them is a different proposition
I tried to port a nextjs project to cf + astro recently and it was a nightmare of usability and build issues. I'm sure they will work it out eventually but I won't be trying it again any time soon.
While Cloudflare is long-established, the Workers platform is relatively new and did have the issues you described; however, over the past few months it has become stable. Compared to Vercel, it is more technical and advanced.
I like Workers in general and I've had good experience with it. Here I'm talking about deploying Astro to Pages though, which is not nearly as polished as deploying Nextjs on Vercel.
I use Cloudflare Pages for my blog, and I really like it, but my static blog generator (Quartz) only supports Giscus, which requires signing in with a GitHub account.
I was thinking I might be able to cobble together a vibe-coded, straightforward embeddable comment system using Rust -> WASM on Cloudflare Workers.
I gotta say that Workers are shockingly pleasant to use. I think I might end up using them for a bigger project.
Is Cloudflare Pages still a thing? It looks like it's just Workers now.
It is, they've just aligned them under the same umbrella. It's literally "Workers & Pages" under the CF navigation.
No I cannot, because I usually don't use the programming languages it supports.
> No I cannot, because I usually don't use the programming languages it supports.
You didn't even bother to open the link; it covers how the blogger vibecoded a couple of projects that convert existing projects built with different languages+frameworks to run on Cloudflare Workers.
Yes, and that is exactly the problem; some of us do read the articles before posting.
Given that they support WASM, which means they support traditional compiled languages like C, C++, Golang, and Rust, what are you using, Malbolge?
Yeah, they are the only ones everyone uses, there is nothing else.
And even those, have you ever tried anything beyond printf debugging? I bet not.
What language are you using that can't compile to WASM and isn't otherwise supported?
Java, C# for example.
Yeah, they are supported depending on the semantic interpretation of what the English word supported means.
As long as one is happy with printf debugging, a language subset, and gimmicky toolchains.
brainfuck
I was also suspicious of Cloudflare as a full platform, but now it's one of my favorite ways to develop and scale web applications. I have implemented Minifeed[1] (and Exotext [2]) completely in Cloudflare Workers (except for the full-text search, for which I use a self-hosted instance of Typesense; though in my testing, Cloudflare's D1 database does come with full-text search enabled - it's SQLite compatible, and it works well!).
I also didn't want to have any kind of rich frontend layer, so all my HTML is generated on the backend. I don't even use complex templating libraries, I just have a few pure functions that return HTML strings. The only framework in use is Hono which just makes HTTP easier, although standard handlers that Cloudflare offers are just fine; it takes maybe 2-3 times more lines of code compared to Hono.
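The "pure functions returning HTML strings" approach can look roughly like this; a sketch with illustrative function names, not code from Minifeed:

```javascript
// Backend-rendered HTML from plain functions: no templating library,
// just string composition plus manual escaping of user-supplied text.
function escapeHtml(s) {
  return s
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

function postItem(post) {
  return `<li><a href="${post.url}">${escapeHtml(post.title)}</a></li>`;
}

function feedPage(title, posts) {
  return `<!doctype html>
<html><body>
  <h1>${escapeHtml(title)}</h1>
  <ul>${posts.map(postItem).join("")}</ul>
</body></html>`;
}
```

A route handler (in Hono or a bare Worker `fetch`) then just returns `new Response(feedPage(...), { headers: { "Content-Type": "text/html" } })`.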
D1 is a fine database. Queues are fantastic for my purpose (cron-scheduled fetches of thousands of RSS feeds). The vector database is great: I generate embeddings for each fetched blog post and store them there, which allows me to generate "related" posts and blogs. R2 is a simple S3-compatible object storage, though I don't have many files to store. Deployments and rollbacks are straightforward, and the SQLite database even has time travel when needed. (I've also tried Workflows instead of Queues, but found them unstable while in open beta; I haven't tried them since they became generally available.)
I know this might sound like an ad or something; I have nothing to do with Cloudflare. In fact, I couldn't even get through the initial interview for a couple of their positions :/ It's just that I always had this cloud over my head every time I needed to create and maintain a web project. The Ruby on Rails + Heroku combo was probably the easiest in this regard, abstracting away most of the stuff I hate to deal with (infra, DB, deployment, etc.), but it was still not as robust and invisible, and also pricey (Heroku). Cloudflare Workers is an abstraction that fits my mindset well: it's like HTTP-as-a-service. I just have to think in terms of HTTP requests and responses, while the other building blocks are provided to me as built-in functions.
Minifeed has been chugging along for 2+ years now, with almost 100% uptime, while running millions of background jobs of various types of computing. And I didn't have to think of different services, workers, scaling and stuff. I am well aware of how vendor-locked in the project is at this point, but I haven't enjoyed web development before as much as I do now.
The only two big missing pieces for me are authentication/authorization and email. Cloudflare has an auth solution, but I think it's designed for enterprise; I just didn't get it and ended up implementing the simple old-school "tokens in DB + cookie" approach. For email, they have announced a new feature, so I hope I can migrate away from Amazon SES and finally forget the nightmare of logging into the AWS console (I have written step-by-step instructions for myself which feel like a "how to use the TV" note for some old, technically-unsavvy person).
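The old-school "tokens in DB + cookie" scheme really is small; a sketch (all names are illustrative, and the token lookup is passed in where a D1 query would otherwise go):

```javascript
// Parse the Cookie header into a plain object ("k=v; k2=v2" format).
function parseCookies(header) {
  const out = {};
  for (const part of (header ?? "").split(";")) {
    const [k, ...v] = part.trim().split("=");
    if (k) out[k] = v.join("=");
  }
  return out;
}

// Resolve the current user from the session cookie, or null if absent/invalid.
// lookupToken is whatever maps a token to a user row (e.g. a D1 SELECT).
async function currentUser(request, lookupToken) {
  const cookies = parseCookies(request.headers.get("Cookie"));
  if (!cookies.session) return null;
  return await lookupToken(cookies.session);
}
```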
Do they still blow up your billing during DDoS?