• sam_lowry_ 11 minutes ago

    Shouldn't we stop sending 100 IP packets on every keystroke, to start with?

    • iterateoften 32 minutes ago

      Why is there all of a sudden an explosion of sandbox-related posts and tools? LLMs and agents have always needed sandboxes… did the collective consciousness just decide all at once that this mattered and that it was the area to focus on building tools for?

      • simonw 5 minutes ago

        I think sandboxes are having their moment because it's become undeniable that coding agents are useful, and that they're more useful if you run them in YOLO mode rather than having to approve everything they want to do.

        Coding agents are still a relatively new category to most people. Claude Code dates back to February last year, and it took a while for the general engineering public to understand why that format - coding LLMs that can execute and iterate on the code they are writing - was such a big deal.

        As a result the demand for good sandboxing options is skyrocketing.

        It also takes a while for new solutions to spin up - if someone realized sandboxes were a good commercial idea back in September last year, the products they built may only just be ready for people to start trying out today.

        • ambicapter a minute ago

          Why/how are they more useful in YOLO mode than in careful mode?

        • cedws 27 minutes ago

          Particularly an explosion of SaaS sandboxes... why should I pay a subscription for some remote sandbox with paltry compute power, which I need a constant internet connection to access? I have this brilliant processor in my own laptop that I've already paid for and want to use; I don't want to use someone else's!

          • reactordev 15 minutes ago

            Some companies only allow access through a VDI like Windows Remote Desktop or some VMWare setup. It’s crazy.

        • tuhgdetzhh 3 hours ago

          I’m experiencing a similar issue hosting an MCP server on Cloud Run with scale-to-zero for cost optimization. As far as I know, Cloud Functions v2 and Cloud Run are both container-based, and they tend to have noticeable startup times.

          In contrast, AWS Lambdas, which run on Firecracker, have sub-second startup latency, often just a few hundred milliseconds.

          Is there anything comparable on GCP that achieves similar low latency cold starts?

          • mnazzaro 3 hours ago

            I'm a huge GCP fan, but Cloud Run wouldn't fit our use case because of its routing and ephemeral nature. I think you would have to build something yourself using GKE + gVisor.

          • mlhpdx 42 minutes ago

            Interesting. It seems to me that client-side prediction and lag compensation (aka the basics for games in similar situations) would have been a viable alternative.
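
            For what it’s worth, a rough sketch of what that could look like for keystroke echo (hypothetical names, assuming a WebSocket to the sandbox; roughly the Mosh trick of predicting locally and reconciling against the authoritative output):

                // Optimistic local echo with server reconciliation (sketch only).
                // Message shapes and names are made up for illustration.
                interface PendingKeystroke {
                  seq: number;   // client-assigned sequence number
                  char: string;  // what we optimistically echoed
                }

                class PredictiveTerminal {
                  private pending: PendingKeystroke[] = [];
                  private seq = 0;

                  constructor(
                    private socket: WebSocket,
                    private render: (text: string) => void,
                  ) {
                    socket.onmessage = (ev) => this.onServerOutput(JSON.parse(ev.data));
                  }

                  // On every local keypress: echo immediately, then send to the sandbox.
                  type(char: string): void {
                    const seq = ++this.seq;
                    this.pending.push({ seq, char });
                    this.render(char); // no round trip for plain typing
                    this.socket.send(JSON.stringify({ type: "key", seq, char }));
                  }

                  // Server output is authoritative: drop predictions it has confirmed,
                  // and render whatever it sends (command results, redraws, corrections).
                  private onServerOutput(msg: { ackSeq: number; text: string }): void {
                    this.pending = this.pending.filter((p) => p.seq > msg.ackSeq);
                    if (msg.text.length > 0) {
                      this.render(msg.text);
                    }
                  }
                }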

            • mnazzaro 35 minutes ago

              While I can see that working well for echoing keystrokes in a terminal, I'm not sure how it would work when you actually enter commands into the terminal. Same for opening files in the IDE.

              • mlhpdx 31 minutes ago

                I didn’t realize the IDE is running on both sides, if that’s true. Wow.

                • mnazzaro 26 minutes ago

                  Yup! There's a language server and file server running in the sandbox that the editor on the frontend interacts with.

                  • formerly_proven 25 minutes ago

                    This is why most IDEs nowadays ask you something about "trusting files" when opening a project. To analyze the code, they tend to touch and run just about everything in there (at least for dynamic-ish languages, and maybe not "run" intentionally, but they do things that amount to arbitrary code execution more or less by definition).

                • jgtrosh 37 minutes ago

                  These rely on undoing within a game's constrained environment. There isn't a way to magically undo any possible procedure with side effects.

                  • mlhpdx 33 minutes ago

                    How so? Perhaps I don’t understand the context. Undoing text display is trivial, and undoing code changes is already there; what’s missing? We’re not talking eons, less than a second.

                • jpalepu33 3 hours ago

                  Great write-up on the evolution of your architecture. The progression from 200ms → 14ms is impressive.

                  The lesson about "delete code to improve performance" resonates. I've been down similar paths where adding middleware/routing layers seemed like good abstractions, but they ended up being the performance bottleneck.

                  A few thoughts on this approach:

                  1. Warm pools are brilliant but expensive - how are you handling the economics? With multi-region pools, you're essentially paying for idle capacity across multiple data centers. I'm curious how you balance pool size vs. cold start probability.

                  2. Fly's replay mechanism is clever, but that initial bounce still adds latency. Have you considered using GeoDNS to route users to the correct regional endpoint from the start? Though I imagine the caching makes this a non-issue after the first request.

                  3. For the JWT approach - are you rotating these tokens per-session? Just thinking about the security implications if someone intercepts the token.

                  The 79ms → 14ms improvement is night and day for developer experience. Latency under 20ms feels instant to humans, so you've hit that sweet spot.

                  • mnazzaro 3 hours ago

                    1. The pools are very shallow: two machines per pool. While it's certainly possible for 3 tasks to get requested in the same region within 30 seconds, we handle that by falling back to the next closest region if a pool is empty (rough sketch below). This is uncommon, though.

                    2. I haven't considered it, but yeah, the caching seems to work great for us.

                    3. The tokens are generated per-task, so if you are worried about your token getting leaked, you can just delete the task!
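
                    A rough sketch of that fallback in point 1 (hypothetical names and pool API, not our actual code):

                        // Try the nearest region's warm pool first, then walk outward.
                        type Region = string;

                        interface Machine {
                          id: string;
                          region: Region;
                        }

                        interface WarmPool {
                          region: Region;
                          claim(): Promise<Machine | null>; // null when the pool is empty
                        }

                        async function claimNearestMachine(
                          regionsByDistance: Region[],  // closest to the user first
                          pools: Map<Region, WarmPool>,
                        ): Promise<Machine | null> {
                          for (const region of regionsByDistance) {
                            const machine = (await pools.get(region)?.claim()) ?? null;
                            if (machine) {
                              return machine; // warm start in (or near) the user's region
                            }
                          }
                          return null; // every pool drained at once: caller cold-boots instead
                        }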

                    • hinkley 2 hours ago

                      One of the perennial problems with on-call situations I encountered was that at some point everyone knew a production incident was going on, and people trying to help (or learn by following along) would run the same diagnostics the on-point people were running, exhausting the very resources that were needed to diagnose the problem.

                      Splunk was a particular problem that way, but I also started seeing it with Grafana, at least in extremis, once we migrated to self-hosted on AWS from a vendor. Most times it was fine, but if we had a bug that none of the teams could quickly disavow as theirs, we had a lot of chefs in the kitchen and things would start to hiccup.

                      There can be thundering herds in dev, and a bunch of people trying a repro case in a thirty-second window can be one of them. The question is whether anyone has the spare bandwidth to notice that it’s happening, or whether everyone trudges along making the same mistakes every time.

                  • barishnamazov 3 hours ago

                    Not directly related, but I can't read the text on my phone. It's too thin; maybe you could increase the font weight a bit?

                    • mnazzaro 2 hours ago

                      Thanks for letting me know. I'll take a look.

                    • hinkley 2 hours ago

                      When Covid hit I wasn’t the only one working remotely at my company, but I was the only one working remotely in North America, and apparently the only one trying to Work Smarter. By then there were a handful of feature toggles I had implemented that I quickly set to always-on in development; chief among them was gzipping service calls, which was a net loss in AWS but very, very handy while working from home.

                      I had also switched a head-of-line service call that was, for reasons I never sorted out, costing us 30ms TTFB per request for basically fifty bytes of data, over to a long poll in Consul, because the data was only meant to change at most once every half hour and in practice changed twice a week. So that latency was hidden in the dev sandbox except at startup, where we already had several Consul keys being fetched in parallel and applied in order, so one more was hardly noticeable.
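
                      For reference, that long poll is just Consul’s blocking query against a KV key; a rough sketch (made-up key name, no retry/backoff) looks something like this:

                          // Watch a single Consul KV key with blocking queries, so a change
                          // shows up within one round trip instead of per-request fetches.
                          const CONSUL = "http://127.0.0.1:8500";
                          const KEY = "config/upstream-endpoint"; // hypothetical key

                          async function watchKey(onChange: (value: string) => void): Promise<void> {
                            let index = "0";
                            while (true) {
                              // Blocks server-side until the key changes or `wait` elapses.
                              const res = await fetch(`${CONSUL}/v1/kv/${KEY}?index=${index}&wait=5m`);
                              index = res.headers.get("X-Consul-Index") ?? index;
                              if (res.status === 200) {
                                const [entry] = await res.json(); // the KV API returns an array
                                onChange(Buffer.from(entry.Value, "base64").toString("utf8"));
                              }
                              // A real watcher also needs error handling and backoff here.
                            }
                          }

                          watchKey((value) => console.log("config value changed:", value));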

                      The nasty one, though, was that Artifactory didn’t compress its REST responses, and when you have a CI/CD pipeline that’s been running for six years with half a hundred devs, that response is huge because npm is teh dumb. So our poor UI lead kept having npm install time out, and the UI team’s answer for "my environment isn’t working" started with clearing your downloaded deps and starting over.

                      They finally fixed it after we (and presumably half the rest of their customers) complained, but by then I was on the back nine of migrating our entire deployment pipeline to Docker, so I had nginx config fairly fresh in my brain and set them up a forward proxy to do compression termination. It still blew up once a week, but that was better than him spending half his day praying to the gods of chaos.

                      • PaulHoule 2 hours ago

                        One of the most dangerous ideologies is "all good things come to those who wait", or that waiting is a virtue. Applied by people working at all levels of a system for years and years, it leads to steps that could be 30ms taking 30s.

                      • alooPotato 3 hours ago

                        @mnazzaro have you seen fly.io's new sprites.dev offering?

                        • mnazzaro 2 hours ago

                          I have! It's pretty interesting and handles a lot of the problems discussed here, but it's a little young for us. For one thing, it doesn't have Fly Replay, so we'd have to build a separate proxy again.

                          If we were starting from zero, I would definitely try it. My favorite thing about it is the progressive checkpointing: you can snapshot file system deltas and store them at S3 prices. Cool stuff!