Kinda funny, but I think LLM-assisted workflows are frequently slow -- that is, if I use the "refactor" features in my IDE, it's done in a second; if I ask the faster kind of assistant, it comes back in 30 seconds; if I ask the "agentic" kind of assistant, it comes back in 15 minutes.
I asked an agent to write an HTTP endpoint at the end of the work day when I had just 30 min left -- my first thought was "it took 10 minutes to do what would have taken a day", but then I thought, "maybe it was 20 minutes for 4 hours worth of work". The next day I looked at it and found the logic was convoluted; it tried to write good error handling but didn't succeed. I went back and forth and ultimately wound up recoding a lot of it manually. In 5 hours I had it done for real, certainly with a better test suite than I would have written on my own, and probably better error handling.
See https://www.reddit.com/r/programming/comments/1lxh8ip/study_...
As a counter example (re: agents), I routinely delegate simple tasks to Claude Code and get near-perfect results. But I've also had experiences like yours where I ended up wasting more time than saved. I just kept trying with different types of tasks, and narrowed it down to the point where I have a good intuition for what works and what doesn't. The benefit is I can fire off a request on my phone, stick it in my pocket, then do a code review some time later. This process is very low mental overhead for me, so it's a big productivity win.
Sounds like a slot machine. Insert API tokens, get something that's pretty close to right, insert more tokens and hope it works this time.
Except the tokens you insert have meaning, and some yield better results than others. Not like a slot machine at all, really. Last I checked, those only have 1 possible input, no way to improve your odds.
Ok so it's poker rather than a slot machine
Not really, it's not a zero-sum game. You're not competing against anything, you're working with something. It's just a tool that takes practice, has some variability and isn't free. Like most things in life. More like buying corn or having friends.
Poker takes practice, has variability and isn't free. In fact it's the only game I know of that's pointlessly boring without money on the table.
The LLM workflow is competing with other ways of writing code: DIY, Stack Overflow, pairing, offshoring...
> pointlessly boring without money on the table.
I bought a bunch of poker chips and taught Texas Hold'em to my kids. We have a fantastic time playing with no money on the line, just winning or losing the game based on who wins all the chips.
Give them enough time and they'll realize they can trade poker chips for other things.
Yes I accept this analogy!
That’s fine if your expectations are commensurate.
How's that different from a human developer? Give the same task to different developers and you'll get different levels of correctness and quality. Give the task to the same developer on different days and it's the same story.
It's a lot faster to give a task to an AI agent than to a developer. The agent is always at its desk, always listening, and will immediately prioritize whatever you tell it to do.
An AI agent always has capacity, does not have competing priorities, nor does it have ideas about what does or does not fall within its "scope of work".
That's cool. How are you integrating your phone with your Claude workflow?
You can set up hooks: https://docs.anthropic.com/en/docs/claude-code/hooks-guide
And use something like ntfy to get notifications on your phone:
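To give a flavor of the wiring: a minimal Claude Code settings sketch with a Stop hook that pings an ntfy topic. The topic name is made up, and the exact hook schema may differ from this sketch, so check the hooks guide before copying it.

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "curl -s -d 'Claude Code finished a task' ntfy.sh/your-topic-name"
          }
        ]
      }
    ]
  }
}
```

Subscribe to the same topic in the ntfy app on your phone and you get a push whenever the agent finishes a turn.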
I’ve also seen people assign GitHub issues to Claude Code and then use the GitHub mobile app on their phone to get notifications and review PRs.
I don't know how to do it with Claude Code, but I was on a beach vacation for the past few days and I was studying French on my phone with a web app that I made. Sometimes something would bug me, and I used Cursor's "background agents" tool to ask it to make a change. This is essentially just a website where you can type in your request; they allocate a VM, check out your repository, run the Cursor LLM agent inside that VM to implement your requested changes, then push it and create a pull request to your repo. Because I have CI/CD set up, I then just merged the change and waited for it to deploy (usually going for a swim in between).
I realized as I was doing it that I wouldn't be able to tell anyone about it because I would sound like the most obnoxious AI bro ever. But it worked! (For the simple requests I used it on.) The most annoying part was that I had to tell it to run rustfmt every time, because otherwise it would fail CI and I wouldn't be able to merge it. And then it would take forever to install a Rust toolchain and figure out how to run clippy and stuff. But it did feel crazy to be able to work on it from the beach. Anyway, I'm apparently not very good at taking vacations, lol
Is this Terragon's offering, or does Cursor also have web-based background agents?
Cursor has web-based background agents, they launched them recently
I just SSH into my CC machine from the phone, then use CC.
My dev environment works perfectly on Termux, and so does Claude Code. So I just run `claude` like normal, and everything is identical to how I do it on desktop.
Edit: clarity
Do you use it on a phone or on a tablet?
Phone. One of those foldy ones though so pretty big screen.
The cost is in the context switching. Say you throw out three tasks that come back 15, 20, and 30 minutes later. The first is mostly OK, so you finish it by hand. The second has some problems, so you ask for a rework. Then the third comes back and, while OK, has some design problems, so you ask for another rework. Then the second one comes back, and you have to remember the original task and what changes you asked for.
I've already written about this several times here. I think the current trend of LLMs chasing benchmark scores is going in the wrong direction, at least for programming tools. In my experience they get it wrong often enough that I always need to check the work. So I end up in a back and forth with the LLM, and because of the slow responses it becomes a really painful process; I could often have done the task faster if I had sat down and thought about it. What I want is an agent that responds immediately (and I mean in subseconds), even if some benchmark score is 60% instead of 80%.
Programmers (and I'm including myself here) often go to great lengths to not think, to the point of working (with or without a coding assistant) for hours in the hope of avoiding one hour of thinking. What's the saying? "An hour of debugging/programming can save you minutes of thinking," or something like that. In the end, we usually find that we need to do the thinking after all.
I think coding assistants would end up being more helpful if, instead of trying to do what they're asked, they would come back with questions that help us (or force us) to think. I wonder if a context prompt that says, "when I ask you to do something, assume I haven't thought the problem through, and before doing anything, ask me leading questions," would help.
I think Leslie Lamport once said that the biggest resistance to using TLA+ - a language that helps you, and forces you, to think - is that thinking is the last thing programmers want to do.
> Programmers (and I'm including myself here) often go to great lengths to not think, to the point of working (with or without a coding assistant) for hours in the hope of avoiding one hour of thinking. What's the saying? "An hour of debugging/programming can save you minutes of thinking," or something like that. In the end, we usually find that we need to do the thinking after all.
This is such a great observation. I'm not quite sure why this is. I'm not a programmer, but a signal-processing/system engineer/researcher. The weird thing is that it seems to be the process of programming itself that causes the "not-thinking" behaviour. E.g., when I program a simulation and I find that I must have a sign error somewhere in my implementation (sometimes you can see this from the results), I end up switching every possible sign around instead of taking pen and paper and comparing theory and implementation. If I do other work, e.g. theory, that's not the case. I suspect we try to avoid the cost of the context switch and try to stay in the "programming flow".
This is your brain trying to conserve your energy/time by recollecting/brute-forcing/following known patterns instead of diving into the unknown. Otherwise known as "being lazy" / procrastinating.
There is an illusion that the error is tiny and its nature is obvious, so it could be fixed with an instant, effortless tweak. Sometimes that's so (when the compiler complains about a forgotten semicolon); sometimes it may be arbitrarily deeply wrong (even if it manifests just as a reversed sign).
I do both. I like to develop designs in my head, and there’s a lot of trial and error.
I think the results are excellent, but I can hit a lot of dead ends on the way. I just spent several days trying out all sorts of approaches to PassKeys/WebAuthn. I finally settled on an approach that I think will work great.
I have found that the old-fashioned “measure twice, cut once” approach is highly destructive. It was how I was trained, so walking away from it was scary.
> I have found that the old-fashioned “measure twice, cut once” approach is highly destructive. It was how I was trained, so walking away from it was scary.
To be fair it’s great advice when you’re dealing with atoms.
Mutable patterns of electrons, not so much (:
Sometimes thinking and experimenting go together. I had to do some maintenance on some Typescript/yum code that I didn't write but had previously done a little maintenance on.
TypeScript can produce astonishingly complex error messages when types don't match up. I went through a couple of rounds of showing the errors to the assistant and getting suggested fixes that were wrong, but I got some ideas and did more experiments. Over the course of two days (making desired changes along the way) I figured out what was going wrong and cleaned up the use of types to the point that I was really happy with my code. When I saw a red squiggle I usually knew right away what was wrong, and if I did ask the assistant, it would also get it right right away.
I think there's no way I would have understood what was going on without experimenting.
Agreed. LLMs also change the balance of plan vs. do for me; sometimes it's cheaper to do and review than to plan up front.
When you can see what goes wrong with the naive plan you then have all the specific context in front of you for making a better plan.
If something is wrong with the implementation then I can ask the agent to then make a plan which avoids the issues / smells I call out. This itself could probably be automated.
The main thing I feel I'm "missing": I think it would be helpful if there were easier ways to back up in the conversation such that the state of the working copy was restored as well. Basically I want the agent's work to be directly integrated with git, such that "turns" are commits and you can branch at any point.
I agree with your comment in general; however, I would say that in my field, the resistance to TLA+ isn't having to think, rather having to code twice without guarantees that it actually maps to the theoretical model.
Tools like Lean and Dafny are much more appreciated, as they generate code from the model.
But both Dafny and Lean (which are really hard to put in the same category [1]) are used even less than TLA+, and the problem of formally tying a spec to code exists only when you specify at a level that's much higher than the code, which is what you want most of the time because that's where you get the most bang for your buck. It's a little like saying that the resistance to blueprints is that a rolled blueprint makes a poor hammer.
TLA+ is for when you have a 1MLOC database written in Java or a 100KLOC GC written in C++ and you want to make sure your design doesn't lead to lost data or to memory corruption/leak (or for some easier things, too). You certainly can't do that with Dafny, and while I guess you could do it in Lean (if you're masochistic and have months to spare), it wouldn't be in a way that's verifiably tied to the code.
There is no tool that actually formally ties spec to code in any affordable way and at real software scale, and I think the reason people say they want what doesn't exist is precisely because they want to avoid the thinking that they'll have to do eventually anyway.
[1]: Lean and TLA+ are sort-of similar, but Dafny is something else altogether.
Architectural blueprints are very precise. What gets built is a more detailed form of what is in the blueprint.
That is not the case for the TLA+ spec and your 1MLOC Java Database. You hope with fingers crossed that you've implemented the design, but have you?
I can measure that a physical wall has the same dimensions as specified in the blueprint. How do I know my program follows the TLA+ spec?
I'm not being facetious, I think this is a huge issue. While Dafny might not be the answer we should strive to find a good way to do refinement.
And the thing is, we can do it for hardware! Software should actually be easier, not harder. But software is too much of a wild west.
That problem needs to be solved first.
> That is not the case for the TLA+ spec and your 1MLOC Java Database.
That is the case. Of course, nobody bothers to write the TLA+ proof that that is the case, because even if somebody had the resources to do it, the ROI on doing that is not good. If you can avoid 4 major bugs with 10 hours of work, you probably won't want to work an extra 10,000 hours to avoid two additional minor ones. That most people choose to stop when the ROI gets bad and not when they achieve perfection is not a problem.
The question isn't what tool guarantees perfection (there isn't one), but what toolset can reduce the greatest number of (costly) bugs with the least effort, and tools that help you think rigorously about design are a part of such a toolset.
> You hope with fingers crossed that you've implemented the design, but have you?
The same way you always validate that you've implemented what you intended - which is more than just keeping your fingers crossed - except that TLA+'s job is to make sure that what you intend actually works (if implemented).
> While Dafny might not be the answer we should strive to find a good way to do refinement.
TLA+ does refinement in a much more powerful way than Dafny. Neither is able to do it from a high-level design to a large and realistic codebase, certainly not in any affordable way, but nothing can. I guess that is a problem, but it's not the problem we can solve, and there are other big problems that we can.
Too defeatist. If much of the software infrastructure of the world was built on say... Idris, we could do it. That's the promise of dependent types, proof carrying code.
Can we extend that to large scale software? There's no obvious barrier to it, beyond a lack of existing provably correct code to build upon.
I don't expect this to change, however, since the cost/benefit ratio just isn't there. And that makes me sad. We build everything on quicksand.
Appreciation isn't the same as market share; formal proofs in general are pretty much nonexistent in enterprise, unless there are legal requirements to do otherwise.
I fail to see how you validate that the TLA+ model is actually correctly mapped onto the written Java code.
> formal proofs in general are pretty much nonexistent in enterprise, unless there are legal requirements to do otherwise.
Formal proofs are rarely used when specifying with TLA+, too, BTW. Writing formal proofs (as you would in Lean) has a very low ROI, and even formal method fans (like me) would tell you that's a tool you should reach for very rarely, and only if you must.
> I fail to see how you validate that TLA+ model is actually correctly mapped into the written Java code.
You don't (not even with Lean), but that we can't have cars that are completely crash-proof doesn't mean that's the standard for accepting or rejecting a safety measure. With TLA+ you can make sure that the design that you have (and you can't validate is actually implemented in code with or without TLA+) is actually good.
In other words, it lets us think about design rigorously. Maybe that's not all we wish for, but it's a lot, and it's not like there are better, easier ways of doing that. Of course, if the goal is to avoid thinking hard about design, then a tool that helps us think even harder isn't what we want.
Main way we're validating that now is by using TLA+ models to generate test suites. Mongo came out with a new paper on this recently: https://will62794.github.io/assets/papers/mdb-txns-modular-v...
> "An hour of debugging/programming can save you minutes of thinking,"
I get what you're referring to here, when it's tunnel-vision debugging. Personally I usually find that coding/writing/editing is thinking for me. I'm manipulating the logic on screen and seeing how to make it make sense, like a math problem.
LLMs help because they immediately think through a problem and start raising questions and points of uncertainty. Once I see those questions in the <think> output, I cancel the stream, think through them, and edit my prompt to answer the questions beforehand. This often causes the LLM's responses to become much faster and shorter, since it doesn't need to agonise over those decisions any more.
Absolutely! I used Copilot for a few weeks and then stopped when I worked on a machine that didn't have Copilot installed, and I immediately struggled with even basic syntax. Now I often use LLMs as advanced rubber ducks. By describing my problems, the solution often comes to mind on its own, and sometimes the responses I get are enough for me to continue on my own. In my opinion, letting LLMs code directly can be really harmful for software developers, because they forget to think for themselves. Maybe I'm wrong and I'm just slow to accept the new reality, but I try to keep writing most of my code on my own and to improve my coding skills more than my prompting skills (while still using these tools, of course). For me, LLMs are like a grumpy, cynical old senior dev who is forced to talk in a very positive manner and who has fun trickling in some completely random bullshit between his actual helpful advice.
> assume I haven't thought the problem through
This is the essence of my workflow.
I dictate rambling, disorganized, convoluted thoughts about a new feature into a text file.
I tell Claude Code or Gemini CLI to read my slop, read the codebase, and write a real functional design doc in Markdown, with a section on open issues and design decisions.
I'll take a quick look at its approach and edit the doc to tweak its approach and answer a few open questions, then I'll tell it to answer the remaining open questions itself and update the doc.
When that's about 90% good, I'll tell the local agent to write a technical design doc to think through data flow, logic, API endpoints and params and test cases.
I'll have it iterate on that a couple more rounds, then tell it to decompose that work into a phased dev plan where each phase is about a week of work, and each task in the phase would be a few hours of work, with phases and tasks sequenced to be testable on their own in frequent small commits.
Then I have the local agent read all of that again, the codebase, the functional design, the technical design, and the entire dev plan so it can build the first phase while keeping future phases in mind.
It's cool because the agent isn't only a good coder, it's also a decent designer and planner too. It can read and write Markdown docs just as well as code and it makes surprisingly good choices on its own.
And I have complete control to alter its direction at any point. When it methodically works through a series of small tasks it's less likely to go off the rails at all, and if it does it's easy to restore to the last commit and run it again.
1. Shame on you, that doesn't sound like fun vibe coding, at all!
2. Thank you for the detailed explanation, it makes a lot of sense. If AI is really a very junior dev that can move fast and has access to a lot of data, your approach is what I imagine works - and crucially - why there is such a difference in outcomes using it. Because what you're saying is, frankly, a lot of work. Now, based on that work you can probably double your output as a programmer, but considering the many code bases I've seen that have 0 documentation, 0 tests, I think there is a huge chunk of programmers that would never do what you're doing because "it's boring".
3. Can you share maybe an example of this, please:
> and write a real functional design doc in Markdown, with a section on open issues and design decisions.
Great comment, I've favorite'd it!
In general agreement about the need to think it through, though we should be careful not to praise the other extreme.
> "An hour of debugging/programming can save you minutes of thinking"
The trap so many devs fall into is assuming code behaves like they think it does. Or believing documentation or seemingly helpful comments. We really want to believe.
People's mental image is more often than not wrong, and debugging tremendously helps bridge the gap.
it's funny, I feel like I'm the opposite and it's why I truly hate working with stuff like claude code that constantly wants to jump into implementation. I want to be in the driver's seat fully and think about how to do something thoroughly before doing it. I want the LLM to be, at most, my assistant. Taking on the task of being a rubber duck, doing some quick research for me, etc.
It's definitely possible to adapt these tools to be more useful in that sense... but it definitely feels counter to what the hype bros are trying to push out.
I like that prompt idea, because I hate hate hate when it just starts “doing work”. These things are much better as a sounding board for ideas and for clarifying my thinking than for writing one-shot code.
World of LLMs or not, development should always strive to be fast. In the LLM world, users should always have controls over accuracy vs. speed (though we can try to improve both, not one at the expense of the other). For example, at rtrvr.ai we use Gemini Flash as our default; we benchmarked on Flash too, at 0.9 min per task, while still yielding top results. That said, I have to accept there are certain web tasks on tail-end sites that need Pro to navigate accurately at this point. This is the limitation of relying on Gemini models straight up; once we move to our own models trained on web trajectories, this hopefully will not be a problem.
If you use off-the-shelf LLMs, you always have their speed as a bottleneck.
GitHub copilot's inline completions still exist, and are nearly instant!
The only thing I've found where an LLM speeds up my work is a sort of advanced find-and-replace.
A prompt like " I want to make this change in the code where any logic deals with XXX. To be/do XXX instead/additionally/somelogicchange/whatever"
It has been pretty decent at these types of changes and saves the time of poking through and finding all the places I would have updated manually, in a way that find/replace never could. Though I've never tried this on a huge code base.
> A prompt like " I want to make this change in the code where any logic deals with XXX. To be/do XXX instead/additionally/somelogicchange/whatever"
If I reached a point where I would find this helpful, I would take this as a sign that I have structured the code wrongly.
You would be right about the code but probably wrong about the you. I’ve done such requests to clean up code written over the years by dozens of other people copying patterns around because ship was king… until it wasn’t. (They worked quite well, btw.)
If you're generalizing "you" to all developers, perhaps.
I know how I'd respond based on personal experience.
I knew someone would make this comment. I almost added an "I'm probably not leet enough to avoid these situations" disclaimer, but it seemed a bit pointlessly self-deprecating.
You don't always get to choose the state of or the way a system you work in/with is designed. In this case I was working in a limited scripting language that I have no choice about.
Keep that nose turned up. I'm sure you are leet10xninja. Maybe work on your reading comprehension before you dump on someone though as I already specified that I greatly simplified for comment sake.
Sometimes you want a cutpoint for a refactor, and only that refactor. And it turns out that there is no nice abstraction that is useful beyond that refactor.
I suppose you haven’t tried emacs grep-mode or vim quickfix? If the change is mechanical, you create a macro and are done in seconds. If it’s not, you still get the high-level overview and quick navigation.
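For the vim side, the quickfix version of that flow looks roughly like this (the names and pattern are placeholders):

```
" fill the quickfix list with every match
:grep -r oldName src/
" review the matches in the quickfix window
:copen
" run the substitution at each entry and save each buffer
:cdo s/oldName/newName/g | update
```

For non-mechanical edits you skip the `:cdo` and instead jump between matches with `:cnext`, editing by hand or replaying a recorded macro at each stop.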
Finding and jumping to all the places is usually easy, but non-trivial changes often require some understanding of the code beyond just line-based regex replace. I could probably spend some time recording a macro that handles all the edge cases, or use some kind of AST-based search and replace, but the Cursor agent does it just fine in the background.
Code structure is simple. Semantics is where it gets tough. So if you have a good understanding of the code (and even when you don't), the overview you get from one of those tools (and the added interactivity) is nice for confirming the actions that need to be done.
> cursor agent does it just fine in the background
That's for a very broad definition of fine. And you still need to review the diff and check the surrounding context of each chunk. I don't see the improvement in metrics like productivity and cognitive load, especially if you need to do several rounds.
You mentioned grep-mode, which to my knowledge is just bringing up a buffer with all the matches for a regex and easily jumping to each point (I use rg.el myself). For the record, this is basically the same thing as VSCode's search tool.
Now, once you have that, to actually make edits, you have to record a macro to apply at each point or just manually do the edit yourself, no? I don't pretend LLMs are perfect, but I certainly think using one is a much better experience for this kind of refactoring than those two options.
Maybe it's my personal workflow, but I either have sweeping changes (variable names, removing dependencies), which are easily macroable, or very targeted ones (extracting functions, decoupling stuff, ...). For both, this navigation is a superpower, and coupled with the other tools of emacs/vim, editing is very fast. That relies on a very good mental model of the code, but any question can be answered quickly with the above tools.
For me, it's like having a moodboard with code listings.
Yes, I've done this kind of refactoring for ages using emacs macros and grep. Language Server and tree-sitter in emacs have made this faster (when I can get all the dependencies set up correctly, that is). Variable-name edits and function extraction are pretty much table stakes in most modern editors like IntelliJ, VSCode, Zed, etc. IIRC Eclipse had this capability 15-20 years ago.
I used to have more patience for doing it the grep/macro way in emacs. It used to feel a bit zen, going through the code and changing all the call sites to use my new refactor or something. But I've been coding for too long to feel that zen any longer, and my own expectations for output have gotten higher with tools like language-server and tree-sitter.
The kind of refactorings I turn to an LLM for are different, like creating interfaces/traits out of structs or joining two different modules together.
I'm decent at that kind of stuff. However, that's not really what I'm talking about. For instance, today I needed two logic flows: one for data flowing in one direction, then a basically-but-not-quite reversed version of the same logic for when the data comes back. I was able to write the first version, then tell the LLM:
"Now duplicate this code but invert the logic for data flowing in the opposite direction."
I'm simplifying this whole example obviously, but that was the basic task I was working on. It was able to spit out in a few seconds what would have taken me probably more than an hour and at least one tedium-headache break. I'm not aware of any pre-LLM way to do something like that.
Or a little while back I was implementing a basic login/auth for a website. I was experimenting with high-output-token LLMs (I'm not sure that's the technical term) and asked one to make a very comprehensive login handler. I had to stop it somewhere in the triple digits of cases and functions. Perhaps not a great "pro" example of LLMs, but even though it was a hilariously overcomplex setup, it did give me some ideas I hadn't thought about. I didn't use any of the code, though.
It's far from the magic LLM sellers want us to believe, but it can save time, the same as various emacs/vim tricks can for devs who want to learn them.
emacs macros aren't the same. You need to look at the file, observe a pattern, then start recording the macro and hope the pattern holds. An LLM can just do this.
And that's why I mentioned grep-mode and other such tools. Here are some videos about what I'm talking about:
https://youtu.be/f2mQXNnChwc?t=2135
And for Vim
Standard search and replace in other tools pales in comparison.
I am familiar with grep-mode and have used it and macro recording for years; I've been using emacs for 20 years. grep-mode (these days I use rg) just brings up all the matches, which lets me use a macro that I recorded. That's not the same as telling Claude Code to just make the change. Macros aren't table stakes, but find-replace across projects is table stakes in pretty much any post-emacs/vim code editor (and both emacs and vimlikes obviously have plenty of support for this).
I guess it depends? For the "refactor" stuff, if your IDE or language server can handle it, then yeah, I find the LLM slower for sure. But there are other cases where an LLM helps a lot.
I was writing some URL canonicalization logic yesterday. Because we rolled this out as an MVP, customers put URLs in all sorts of ways and we stored them in the DB. My initial pass at the logic failed on some cases. Luckily, URL canonicalization is pretty trivially testable. So I took the most-used customer URLs from our DB, sent them to Claude, and told Claude to come up with the "minimum spanning test cases" that cover this behavior. This took maybe 5-10 sec. I then told Zed's agent mode, using Opus, to make me a test file and use these test cases to call my function. I audited the test cases and ended up removing some silly ones. I iterated on my logic and that was that. Definitely faster than having to do this myself.
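To give a flavor of the shape of that test file: a minimal sketch, with an invented rule set and invented cases (the real logic and customer URLs were obviously different):

```python
from urllib.parse import urlsplit, urlunsplit

def canonicalize(url: str) -> str:
    """Normalize a URL: default to https, lowercase scheme/host,
    strip 'www.', default ports, and trailing slashes."""
    if "://" not in url:
        url = "https://" + url
    scheme, netloc, path, query, _fragment = urlsplit(url)
    scheme = scheme.lower()
    host = netloc.lower()
    if host.startswith("www."):
        host = host[4:]
    # drop the default port for the scheme, if present
    if (scheme, host.rsplit(":", 1)[-1]) in (("http", "80"), ("https", "443")):
        host = host.rsplit(":", 1)[0]
    path = path.rstrip("/")
    return urlunsplit((scheme, host, path, query, ""))

# "minimum spanning" cases: each pins down one normalization rule
CASES = {
    "HTTP://Example.COM/": "http://example.com",            # casing + trailing slash
    "example.com/a/": "https://example.com/a",               # missing scheme
    "https://www.example.com:443/a?x=1": "https://example.com/a?x=1",  # www + port
}

for raw, want in CASES.items():
    assert canonicalize(raw) == want, (raw, canonicalize(raw), want)
```

The nice part of the minimum-spanning framing is that each case exercises exactly one rule, so a regression points straight at the rule that broke.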
I'm consistently seeing personal and shared anecdotes of a 40%-60% speedup on targeted senior work.
As much as I like agents, I am not convinced the human using them can sit back and get lazy quite yet!
Eeeh, I spend less time writing code but way more time reviewing and correcting it. I'm not sure I come out ahead overall, but it does make development less boilerplate-y and more high-level, which leads to code that otherwise wouldn't have been written.
I wonder if you observe this when you use it in a domain you know well versus a domain you know less well.
I think LLM assistants help you become functional across a more broad context -- and completely agree that testing and reviewing becomes much, much more important.
E.g., a front-end dev optimizing database queries, but also being given nonsensical query parameters that don't exist.
Oh yes, of course, if I don't know a domain well, I can't review it. That doesn't mean the LLM makes fewer mistakes there, though.
That sounds plausible if the senior did lots of simple coding tasks and moved that work to an agent. Then the senior basically has to be a team lead and do code reviews/QA.
Curious, what do you count as senior work?
Roughly:
A senior can write, test, deploy, and possibly maintain a scalable microservice or similar sized project without significant hand-holding in a reasonable amount of time.
A junior might be able to write a method used by a class but is still learning significant portions and concepts either in the language, workflow orchestration, or infrastructure.
A principal knows how each microservice fits into the larger domain it serves, whether or not they understand every individual service and domain.
A staff engineer has significant principal-level understanding across many or all domains an organization uses, builds, and maintains.
AI code assistants help increase breadth and, with oversight, improve depth. One can move from a "T"-shaped to a "V"-shaped skillset far more easily, but one must never fully trust AI code assistants.
All the references to LLMs in the article seemed out-of-place like poorly done product placement.
LLMs are the antithesis of fast. In fact, slowness is a perceived virtue in LLM output. Some sites like Google and Quora (until recently) simulate a slow typed-output effect for their pre-cached LLM answers, just for credibility.
Not only that, I am already typing enough for coding, I don't want to type in chat windows as well, and so far voice assistance is so-so.
I switch from Cursor to VS Code many times a day just to use its Python refactoring feature. The Pylance server that comes with Cursor doesn't support refactoring.
Fun story time!
Early in my career as a software engineer, I developed a reputation for speeding things up. This was back in the day when algorithm knowledge was just as important as the ability to examine the output of a compiler, every new Intel processor was met with a ton of anticipation, and Carmack and Abrash were rapidly becoming famous.
Anyway, the 22 year old me unexpectedly gets invited to a customer meeting with a large multinational. I go there not knowing what to expect. Turns out, they were not happy with the speed of our product.
Their VP of whatever said, quoting: "every saved second here adds $1M to our yearly profit". I was absolutely floored. Prior to that moment I couldn't even dream of someone placing a dollar amount on speed, and so directly. Now 20+ years later it still counts as one of the top 5 highlights of my career.
P.S. Mentioning as a reaction to the first sentence in the blog post. But the author is correct when she states that this happens rarely.
P.P.S. There was another engineer in the room, who had the nerve to jokingly ask the VP: "so if we make it execute in 0 seconds, does it mean you're going to make an infinite amount of money?". They didn't laugh, although I thought it was quite funny. Hey, Doug! :)
Working with a task scheduling system, we were told that every minute an airplane is delayed costs $10k. This was back in the 90s, so adjust accordingly.
if you ever remember that engineer's name you should tell them that I found the joke funny
So, did you make it faster?
Unfortunately, there wasn't a single bottleneck. A bunch of us, not just me, worked our asses off improving performance by a little bit in several places. The compounded improvement IIRC was satisfactory to the customer.
RE: P.P.S... God I love that humour. Actually was very funny.
> "so if we make it execute in 0 seconds, does it mean you're going to make an infinite amount of money?"
I don't get it. Wouldn't going from 1 second to 0 seconds add the same amount of money to the yearly profit as going from 2 seconds to 1 second did? Namely, $1M.
> I don't get it. Wouldn't going from 1 second to 0 seconds add the same amount of money to the yearly profit as going from 2 seconds to 1 second did? Namely, $1M
Of course the joke was silly. But perhaps I should have provided some context. We were making industrial automation software. This stuff runs in factories. Every saved second shrinks the manufacturing time of a part, increasing total factory output. Extrapolated to absurd levels, zero time to manufacture means infinite output per factory (sans raw materials).
Yeah, it's one of those things that seem funny to the person saying them because they don't yet realize it doesn't make sense. I bet they felt it later, in the hotel room, in the shower, probably with a bottle of scotch.
> I bet they felt that later, in the hotel room, in the shower, probably with a bottle of scotch.
Geez, life in my opinion is not so serious. It’s okay to say stupid things and not feel bad about it, as long as you are not trying to hurt anyone.
I bet they felt great and immediately forgot about this bad joke.
Their joke could also have been interpreted as sarcasm, and when you're going to be sarcastic, you want to be doubly sure that you're correct.
But I also concur with you that it is good to bring some levity to “serious” conversations!!
https://whatever.scalzi.com/2010/06/16/the-failure-state-of-...
Required reading for internet comedians.
Thanks for that! I miss old school blogs
Not in front of an executive of an important customer, it isn't. They are remarkably humorless about making money.
Earlier in my career it'd be appealing to make jokes like that, or include one in an email. Eventually you realize that people, especially "older" ones or those already a few years into their career, mostly don't want to joke around and just want to actually get done the thing you are meeting about.
Yikes. I hope to never need to work with such people
A process taking 0 seconds means that, in one year, it can be run 31,536,000 sec / 0 sec = ∞ times, multiplying the profit by ∞.
Since when is the constraint "how many times can I run this thing"?
In principle, the reason that "every second saved here is worth $x" is because running the thing generates money, and saving time on it allows for running it more often.
At least in theoretical computer science, often, but that's another matter entirely.
Why do you count it as a highlight if your product failed to meet expectations?
It's well known by now, but this video[1] is a proof-of-concept demonstration from 4 years ago. Casey Muratori called out Microsoft's new Windows Terminal for slow performance, and people argued that it wasn't possible, practical, or maintainable to make a faster terminal, and that his claims of "thousands of frames per second" were hyperbolic; one person said it would be a "PhD-level research project".
In response, Casey spent less than a week making RefTerm, a skeleton proto-terminal with the same constraints the Microsoft people had: using Windows APIs for things, using DirectDraw with GPU rendering, handling terminal escape codes, colours, blinking, custom fonts, missing-font character fallback, line wrap, scrollback, Unicode and right-to-left Arabic combining characters, etc. RefTerm had 10x faster throughput than Windows Terminal and ran at 6,000-7,000 frames per second. It was single-threaded, not profiled, not tuned, used no advanced algorithms, and didn't cheat by sending data to /dev/null; all it had to speed it up was simple code without tons of abstractions, plus a Least Recently Used (LRU) glyph cache to avoid re-rendering common characters, written the first way he thought of.

Around that time he did a video series on that YouTube channel about optimization, arguing that even talking about 'optimization' was too hopeful and that we should be talking about 'non-pessimization': most software is not slow because it has unavoidable complexity and abstractions needed to help maintenance, it's slow because it's choked by a big pile of do-nothing code and abstraction layers added for ideological reasons, which hurt maintenance as well as performance.
[1] https://www.youtube.com/watch?v=hxM8QmyZXtg - "How fast should an unoptimized terminal run?"
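The one speedup trick RefTerm did use, the LRU glyph cache, is simple enough to sketch. This toy Python version (a stand-in for the real GPU-side cache, with rasterization stubbed out) shows why it pays off in a terminal, where output is dominated by a small set of repeated characters:

```python
from collections import OrderedDict

class GlyphCache:
    """Toy LRU cache for rendered glyphs: render each character at most
    once while it stays cached, evict the least recently used when full."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.cache = OrderedDict()  # character -> "bitmap"
        self.renders = 0            # counts the expensive render calls

    def _render(self, ch: str) -> str:
        self.renders += 1
        return f"<bitmap:{ch}>"     # stand-in for real glyph rasterization

    def get(self, ch: str) -> str:
        if ch in self.cache:
            self.cache.move_to_end(ch)      # mark as most recently used
            return self.cache[ch]
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)  # evict the LRU entry
        self.cache[ch] = self._render(ch)
        return self.cache[ch]

cache = GlyphCache(capacity=128)
for ch in "hello world" * 1000:   # 11,000 characters, only 8 distinct
    cache.get(ch)
# Only the 8 distinct characters get rendered, not all 11,000.
```

That's the whole idea: the expensive operation runs once per distinct glyph instead of once per cell, and everything else is a dictionary lookup.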
This video[2] is another specific details one, Jason Booth talking about his experience of game development, and practical examples of changing data layout and C++ code to make it do less work, be more cache friendly, have better memory access patterns, and run orders of magnitude faster without adding much complexity and sometimes removing complexity.
[2] https://www.youtube.com/watch?v=NAVbI1HIzCE - "Practical Optimizations"
I simultaneously love and hate watching Casey Muratori. Love because he routinely does things like this, hate because I have conversations like this entirely too often at work, except no one cares.
Someone posted their word game Cobble[1] on HN recently, the game gives some letters and the challenge is to find two English words which together use up all the given letters, and the combined two words to be as short as possible.
A naive brute-force solver takes the Cobble wordlist of 64k words, compares every word against every other word, and does 64k x 64k = 4 billion iterations; the inner loop body then loops over the combined characters. If the combined words average 10 characters, that's 40 billion operations just for the loop structure, plus character testing and counting and the data structures to store the counts. Seconds or minutes of work for a puzzle that feels like any modern computer should solve it in microseconds.
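The naive solver described above looks something like this sketch (rules simplified to "the pair must contain every given letter at least once"; the real Cobble scoring may differ):

```python
from collections import Counter

def naive_solve(letters: str, words: list[str]):
    """Brute force: try every ordered pair of words and keep the
    shortest combined pair covering every given letter."""
    need = Counter(letters)
    best = None
    for a in words:                      # O(n^2) pairs...
        for b in words:
            combined = Counter(a + b)    # ...each with an O(len) count
            if all(combined[c] >= k for c, k in need.items()):
                if best is None or len(a) + len(b) < len(best[0]) + len(best[1]):
                    best = (a, b)
    return best

words = ["cab", "dog", "bad", "cod", "ba"]
print(naive_solve("abcd", words))   # -> ('cod', 'ba')
```

On a toy list this is instant; on 64k words the pair count alone is what chokes the CPU.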
It's always mildly interesting to me how a simple-to-explain problem, a tiny amount of data, and four lines of nested loops can generate enough work to choke a modern CPU for minutes, especially considering how much work 3D games do in milliseconds. It highlights how impressive the algorithmic research of the 1960s was: finding ways to get early computers to do anything in a reasonable time, let alone finding fast paths through complex problems. Or perhaps, of all the zillions of possible problems that could exist, finding any which can be approached by human minds and computers.
Of course finding the optimal solution to a Cobble puzzle does not actually require the computation you describe. We can in a single pass find a limited set of candidate words and work out a solution with those.
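One way to read that, sketched under simplified rules (a pair must contain every given letter at least once): count each word's letters in a single pass over the list, sort by length, and cut off pair enumeration as soon as no shorter combined pair is possible.

```python
from collections import Counter

def pruned_solve(letters: str, words: list[str]):
    """Precompute letter counts once, sort by length, and stop
    enumerating pairs as soon as nothing shorter can exist."""
    need = Counter(letters)
    counts = {w: Counter(w) for w in words}  # one pass over the list
    by_len = sorted(words, key=len)
    best = None
    for i, a in enumerate(by_len):
        if best and 2 * len(a) >= len(best[0]) + len(best[1]):
            break                            # no shorter pair remains at all
        for b in by_len[i:]:
            if best and len(a) + len(b) >= len(best[0]) + len(best[1]):
                break                        # rest of b's are longer still
            combined = counts[a] + counts[b]
            if all(combined[c] >= k for c, k in need.items()):
                best = (a, b)
                break
    return best
```

With a realistic wordlist the early exits cut most of the 4 billion pairs; more aggressive candidate filtering (e.g. discarding words that contribute no needed letters) prunes further.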
Sure; after Casey Muratori saying that people argue with him that no normal developer needs to worry about performance, computers are fast enough, performance is a niche concern, I'm just musing how little data it takes - 64k is nothing to a modern person - and how abruptly anyone who wants a fast answer has to switch to think about performance, pre-processing the list, sorting more promising candidates first, using a faster language, noticing that it's embarrassingly parallel, etc.
I would have loved to live in a universe where we could replace the Windows Terminal with RefTerm, if only to measure how many hours would pass before a Fortune 500 company has to halt operations because RefTerm does not properly re-implement one of the subtle bugs creeping in from one of the bazillion features that had made WinTerm slow over the years. [1]
I sighed when I read your comment, a comment which is exemplary of what Casey Muratori was ranting against - casual lazy dismissal of the idea that software can be faster, based on misunderstanding and lack of knowledge and/or interest, and throwing out the first objection that comes to mind as if it's an impassable obstacle. There were no bazillion features that made WinTerm slow over the years, because Windows Terminal was a new product for Windows 10, released in 2019 [1]. There were piles of problems in Windows Terminal; Casey calls out that it didn't render right-to-left Arabic combining glyphs, and it wasn't a perfect highly polished program from the outset. And it was an optional download; Fortune 500s wouldn't run it if they didn't want to.
RefTerm was explicitly not a production quality terminal and was not intended to be a replacement for Windows Terminal. RefTerm was a lower bound for performance of an untuned single-thread terminal. RefTerm was a proof of concept that if Microsoft had spent money and engineering skill on performance they could have profiled and used fancy algorithms and shortcuts and reimplemented slow Windows APIs with faster local ones, used threading, and improved on RefTerm's performance. A proof that "significantly faster terminals are unrealistic" is not true, that all the casual dismissals of why it's impossible are not the reasons for slowness, and that 10x better is an easily achievable floor, not a distant unreachable ceiling.
As a result of Casey's public shaming, Windows Terminal developers did improve performance.
> Rarely in software does anyone ask for “fast.”
They don't explicitly ask for it, but they won't take you seriously if you don't at least pretend to be. "Fast" is assumed. Imagine if Rust had shown up, identical in every other way, but said "However, it is slower than Ruby". Nobody would have given it the time of day. The only reason it was able to gain attention was because it claimed to be "Faster than C++".
Watch HN for a while and you'll start to notice that "fast" is the only feature that is necessary to win over mindshare. It is like moths to a flame as soon as something says it is faster than what came before it.
Only in the small subset of programmers who post on HN is that the case. Most users, and even most developers, don't mind slow software or care about "getting into flow state" or anything like that; they just want a nice UI. I've seen professional data scientists using GitHub Desktop on Windows instead of just learning to type git commands for an easy 10x time save.
GitHub Desktop is way better for reviewing diffs than the git CLI. Everyone I've ever worked with who preferred CLI tools also just added and committed everything, and their PRs always had more errors overall that would have been caught before even being committed if they reviewed visual diffs while committing.
The best interface is magit, IMO. I use a clone of it in VS Code that is nearly as good. But you get the speed of CLI while still being very easy to stage/unstage individual chunks, which is probably the piece that does not get done enough by CLI users.
Sublime Merge gets you all those benefits, PLUS it’s really fast!
They do mind, which is why we see such a huge drop-off in retention if pages load even seconds too slowly. They just don't describe it in the same way.
They don't say they buy the iPhone because it has the fastest CPU and most responsive OS, they just say it "just works".
Not everyone is conscious about it but I feel like it’s something that people will always want.
Like the "evergreen" things Amazon decided to focus on: faster delivery, greater selection, lower cost.
You're drawing the wrong conclusion: "fast" is a winning differentiator only when you offer the same feature set, but faster.
Your example says it: people go "this is like X (meaning it does/has the same features as X), but faster", and then they flock from X to your X-but-faster thing.
Which tells us nothing about whether people would also move to an X+more-features, an X+nicer-UX, or an X+cheaper, etc., without it being any faster than X, or even while being slower.
I hate it, but it's true. Look at me: my fridge has an integrated tablet that tells me the weather outside. Never mind that it is a lil louder and the doors are creaky. It tells me the weather!
And is your fridge within line of sight of a window? :)
Really not sure about that. People will give up features for speed all the time. See git vs bzr/hg/svn/darcs/monotone,...
Hmm, personally I've always found git to have more features than those, though I don't know them all. At least when git was released, it distinguished itself mostly by its features, specifically its distributed nature and rebase. And hg/bzr never looked to me like they had more features, more like similar features, plus or minus; so they'd be a good example of git having the same features plus speed, and so it won.
Maybe for languages, but fast is easily left behind when looking for frameworks. People want features, people want compatibility, people will use electron all over.
> fast is easily left behind when looking for frameworks.
Nah. React, for example, only garnered attention because it said "Look how much faster the virtual DOM is!". We could go on all day.
> People want features, people want compatibility
Yes, but under the assumption that it is already built to be as "fast" as possible. "Fast" is assumed. That's why "faster" is such a great marketing trick, as it tunes people into "Hold up. What I'm currently using actually sucks. I'd better reconsider."
"Fast" is deemed important, but it isn't asked for as it is considered inconceivable that you wouldn't always make things "fast". But with that said, keep in mind that the outside user doesn't know what "fast" is until there is something to compare it with. That is how some products can get away with not being "fast" — until something else comes along to show that it needn't be that way.
It is only fast compared to a really dumb baseline. But you are right that the story of React being fast was a big part of selling it.
"Look how quickly it can render the component 50 times!"
"Look, it can render the whole app really quickly every time the user presses a key!"
That gets into a very interesting question of controlled vs. uncontrolled components.
On one hand I like controlled components because there is a single source of truth for the data (a useState()) somewhere in the app, but you are forced to re-render for each keypress. With uncontrolled components on the other hand, there's the possible anarchy of having state in React and in the actual form.
I really like this library
which has a rational answer to the problems that turn up with uncontrolled forms.
That's neat, thanks for the link!
Isn't React one of the slower frameworks?
https://krausest.github.io/js-framework-benchmark/current.ht...
Reactivity as an idea allowed you to manage data and dom/UI updates in a more performant way than the approach prior to React being popular.
But React started a movement where frontend teams were isolated from backend teams (who tend to be more conservative and performance-minded), tons of the view was needlessly pushed into browser rendering, and every page started using 20 different JSON endpoints that are often polling/pushing, adding overhead, etc. So by every measure it made the Web slower and more complicated, in exchange for somewhat easier, more cohesive design management (that needs changing yearly).
The particulars of the vdom framework itself are probably not that important in the grand scheme, unless its design encourages doing less of those things (which many newer ones do, but React is flexible).
And yet we live in a world of (especially web) apps that are incredibly slow, in the sense that an update in response to user input might take multiple seconds.
Yes, fast wins people over. And yet we live in a world where the actual experience of every day computing is often slow as molasses.
The trouble is that "fast" doesn't mean anything without a point of comparison. If all you have is a slow web app, you have to assume that the web app is necessarily slow — already as fast as it can be. We like to give people the benefit of the doubt, so there is no reason to think that someone would make something slower than is necessary.
"Fast" is the feature people always wanted, but absent better information, they have to assume that is what they already got. That is why "fast" marketing works so well. It reveals that what they thought was pretty good actually wasn't. Adding the missing kitchen sink doesn't offer the same emotional reaction.
> The trouble is that "fast" doesn't mean anything without a point of comparison.
This is what people are missing. Even those "slow" apps are faster than their alternatives. People demand and seek out "fast", and I think the OP article misses this.
Even the "slow" applications are faster than their alternatives or have an edge in terms of speed for why people use them. In other words, people here say "well wait a second, I see people using slow apps all the time! People don't care about speed!", without realizing that the user has already optimized for speed for their use case. Maybe they use app A which is 50% as fast as app B, but app A is available on their toolbar right now, and to even know that app B exists and to install it and learn how to use it would require numerous hours of ramp up time. If the user was presented with app A and app B side by side, all things equal, they will choose B every time. There's proficiency and familiarity; if B is only 5% faster than A, but switching to B has an upfront cost in days to able to utilize that speed, well that is a hidden speed cost and why the user will choose A until B makes it worth it.
Speed is almost always the universal characteristic people select for, all things equal. Just because something faster exists, and it's niche, and hard to use (not equal for comparison to the common "slow" option people are familiar with), it doesn't mean that people reject speed, they just don't want to spend time learning the new thing, because it is _slower_ to learn how to use the new thing at first.
> you have to assume
We don't have to assume. We know that JavaScript is slow in many cases, that shipping more bundles instead of less will decrease performance, and that with regard to the amount of content served generally less is more.
Whether this amount of baggage every web app seems to come with these days is seen as "necessary" or not is subjective, but I would tend to agree that many developers are ignorant of different methods or dislike the idea of deviating from the implied norms.
The slow web app is probably still faster than the previous solution.
I’ll tell you what fast is.
I’ve mentioned this before.
Quest Diagnostics, their internal app used by their phlebotomists.
I honestly don’t know how this app is done, I can only say it appears to run in the tab of a browser. For all I know it’s a VB app running in an ActiveX plugin, if they still do that on Windows.
L&F looks classic Windows GUI app, it interfaces with a signature pad, scanner, and a label printer.
And this app flies. Dialogs come and go, the operator rarely waits on this UI, when she is keying in data (and they key in quite a bit), the app is waiting for the operator.
Meanwhile, if I want to refill a prescription, it's fraught with beach balls, those shimmering boxes, and, of course, lots of friendly whitespace and scrolling. All to load a med name, a drugstore address, and ask 4 yes/no questions.
I look at that Quest app mouth agape, it’s so surprisingly fast for an app in this day and age.
This is a disingenuous response because I made it plenty clear what I meant with "fast": interactive response times.
And for that, we absolutely do have points of comparison, and yeah, pretty much all web apps have bad interactivity because they are limited too much by network round trip times. It's an absolute unicorn web app that does enough offline caching.
It's also absurd to assume that applications are as fast as they could be. There is basically always room for improvement, it's just not being prioritised. Which is the whole point here.
Molasses can be fast if you leave it in the packet and hurl it!
Seriously though, you're so right- I often wonder why this is. If it's that people genuinely don't care, or that it's more that say ecommerce websites compete on so many things already (or in some cases maintain monopolies) that fast doesn't come into the picture.
Eh, I think the HN crowd likes fast because most tech today is unreasonably slow, when we know it could be fast.
It's infuriating when I have to use a chatbot, and it pretends to be typing (or maybe looking up a pre-planned generic response or question)...
I'm already pissed I have to use the damn thing, please don't piss me off more.
Press enter.
Wait.
Wait for typing indicator.
Wait for cute text-streaming.
Skip through the paragraph of restating your question and being pointlessly sycophantic.
Finally get to the meat of the response.
It’s wrong.
What's sad is that I always open grok.com if it's a quick simple query because their UI loads about 10X faster than GPT/Gemini/Claude.
The claim was not that Rust was faster than C++, they said it’s about as fast.
C and C++ were and are the benchmark, it would have been revolutionary to be faster and offer memory safety.
Today, in some cases Rust can be faster, in others slower.
To a first approximation HN is a group of people who have convinced themselves that it's a high quality user experience to spend 11 seconds shipping 3.8 megabytes of Javascript to a user that's connected via a poor mobile connection on a cheap dual-core phone so that user can have a 12 second session where they read 150 words and view 1 image before closing the tab.
Fast is _absolutely not_ the only thing we care about. Not even top 5. We are addicted to _convenience_.
The fact that this article and similar ones get upvoted very frequently on this platform is strong evidence against this claim.
Considering the current state of the Web and user application development, I tend to agree with regard to its developers, but HN seems to still abide by other principles.
It's not that they convinced themselves, but that they don't know how to do any better. It is as fast as it can be to the extent of their knowledge, skill, and ability.
You see some legendary developers show up on HN from time to time, sure, but it is quite obvious that the typical developer isn't very good. HN is not some kind of exclusive club for the most prestigious among us. It is quite representative of a general population where you expect that most aren't very good.
This kind of slop is often imposed on developers by execs demanding things.
I imagine a large chunk of us would gladly throw all that out the window and only write super fast efficient code structures, if only we could all get well paid jobs doing it.
Just want to say how much I thank YCom for not f'ing up the HN interface, and keeping it fast.
I distinctly remember when Slashdot committed suicide. They had an interface that was very easy for me to scan and find high value comments, and in the name of "modern UI" or some other nonsense needed to keep a few designers employed, completely revamped it so that it had a ton of whitespace and made it basically impossible for me to skim the comments.
I think I tried it for about 3 days before I gave up, and I was a daily Slashdot reader before then.
HN is literally the website I open to check if I have internet connectivity. HN is truly a shining beacon in the trashy landscape of web bloat.
I usually load my blog to check internet connectivity.
I work at an e-waste recycling company. Earlier this week, I had to test a bunch of laptop docking stations, so I kept force refreshing my blog to see if the Ethernet port worked. Thing is, it loads so fast, I kept the dev tools open to see if it actually refreshed.
I like to use example.com/net/org
bonus, these have both http & https endpoints if you needed a differential diagnosis or just a means to trip some shitty airline/hotel walled garden into saying hello.
httpS://neverssl.com?
yep, I do exactly the same thing. If HN isn't loading, something is definitely fckd.
Except when HN itself is fckd.
It does happen less than it used to, but still.
(Edit: Btw, it's fine to say 'fucked' or other swear words on HN - we don't care about profanity and aren't Bowdlers. I add this because people sometimes misinterpret https://news.ycombinator.com/newsguidelines.html that way, assuming that we want drawing-room politeness.)
Oh it's lwn.net for me!
I find pinging localhost a bit more reliable, and faster too.
I blame HN switching to AWS. Downtime also increased after the switch.
When did you notice HN switching to AWS, and what changed?
(Those are trick questions, because we haven't switched to AWS. But I genuinely would like to hear the answers.)
(We did switch to AWS briefly when our hosting provider went down because of a bizarre SSD self-bricking incident a few years ago...but it was only for a day or two!)
The HN UI could do with some improvements, especially on mobile devices. The low contrast and small tap areas for common operations make it less than ideal, as well as the lack of dark mode.
I wrote my take on an ideal UI (purely clientside, against the free HN firebase API, in Elm): https://seville.protostome.com/.
To each their own, but I find the text for the number of points and "hours ago" extremely low contrast and hard to read on your site. More importantly, I think it emphasizes the wrong thing. I almost never really care who submitted a post, but I do care about its vote count.
That’s all totally fair.
I actually never care about the vote count but have been on this site long enough to recognise the names worth paying attention to.
Also the higher contrast items are the click/tap targets.
Anyone who goes to the trouble of making their own HN front end is entitled to complain as much as they want, in my book! Nicely done.
It’s hilarious to me that I find this thread. I read the comment you’re replying to before I saw who wrote it. I exclusively read HN on iOS using https://hackerweb.app/ in dark mode precisely because I found it to be the most pleasing mobile experience. And here’s dang replying to my co-worker who commented that he wrote his own HN reader because the actual site isn’t the best on mobile. I could literally reach out my hand, show my phone and share my mobile HN experience with him, except I’m 99% remote. (But I did sit at his desk just last Thursday when he was remote.)
Just goes to show that all of us reading HN don’t actually share with each other how we’re reading HN :)
Too funny… thank you!!
Information density and ease of identification are the antithesis of "engagement", which usually comes with some time-on-site metric they're hunting.
If you can find what you want and read it you might not spend 5 extra seconds lost on their page and thus they can pad their stats for advertisers. Bonus points if the stupid page loads in such a way you accidentally click on something and give them a "conversion".
Sadly financial incentive is almost always towards tricking people into doing something they don't want to do instead of just actually giving them what they fucking want.
> Sadly financial incentive is almost always towards tricking people into doing something they don't want to do instead of just actually giving them what they fucking want.
The north star should be user satisfaction. For some products that might be engagement (e.g. an entertainment service), while for others it is accomplishing a task as quickly as possible and exiting the app.
The one and only thing I'd do is make the font bigger and increase padding. There's overwhelming consensus that you should have (for English) about 50–70 characters per line of text for the best, fastest, most accurate readability. That's why newspapers pair a small font with multiple columns: to limit number of characters per line of text.
HN might have over 100 chars per line of text. It could be better. I know I could do it myself, and I do. But "I fixed it for me" doesn't fix it for anyone else.
I use HN zoomed in at 133%. It's a lot more comfortable, even when I'm wearing my glasses.
Increased padding comes at the cost of information density.
I think low density UIs are more beginner friendly but power users want high density.
High information density, not high UI density.
Having 50 buttons and 10 tabs shoved in your face just makes for opaqueness, power user or not.
I agree. In my experience, the default HN is terrible for accessibility (in many ways). I’ve just been waiting for dang and tomhow to get a lot older so that they face the issues themselves enough times to care.
A narrow column of text can make it easier to read individual sentences, but it does so by sacrificing vertical space, which makes it harder to skim a page for relevant content and makes it easier for me to lose track of my place since I can't see as much context, images, and headings on screen all at once. I also find it much harder to read text when the paragraphs form monotonous blocks spanning 10 lines of text rather than being irregularly shaped and covering 3-5 lines. I find Wikipedia articles much harder to read in "standard" mode compared to "wide" mode for this reason.
Different people process visual information differently, and people reading articles have different goals, different eyesight, and different hardware setups. And we already have a way for users to tell a website how wide they want its content to be: resizing their browser window. I set the width of my browser window based on how wide I want pages to be; and web designers who ignore this preference and impose unreadable narrow columns because they read about the "optimal" column width in some study or another infuriate me to no end. Optimal is not the same for everyone, and pretending otherwise is the antithesis of accessibility.
The user should have the choice. If I wanted my browser to display text in a tiny column on my monitor because I thought it would be easier to read, I would... resize my browser to be a tiny column on my monitor!
Why would shorter lines be regular? I use HN with `max-width: 60rem;`, and I get a ragged right (which I very much prefer over justification), while also getting a line length that's easier for my eyes to follow.
My eyes seem to navigate by paragraph more than by line. It's hard to analyze how I read, but I think the "corners" of a paragraph are landmarks that I latch onto, and when I reach the end of a line of text I don't scan horizontally back along the line to the left; I "jump" back, using the boundaries of the paragraph to estimate the start of the next line, and continue reading from there.
This means that I have a difficult time reading text with very large paragraphs. If a paragraph goes on for 10+ lines, I'll start to lose my place at the end of most lines. This is infuriating and drastically impairs my ability to read and comprehend the text.
It's interesting to me that you mention preferring a ragged right over justification, because I literally do not notice the difference. This suggests to me that we read in different ways -- perhaps you focus on the shape and boundaries of a line more than the shape of a paragraph. This makes intuitive sense to me as to why you would prefer narrower columns.
I don't think that I'm "right" for preferring wider columns or that you or anyone else are "wrong" for preferring narrower columns. I think it's just how my brain learned to process text.
I have pretty strong opinions on what's too wide of a column and what's too narrow of a column, so I won't fullscreen a browser window on anything larger than a laptop. Rather, I'll set it for a size that's comfortable for me. If some web designer decides "actually, your preferred text width is wrong, use mine instead" then I'm gonna be pretty annoyed, and I think rightfully so, because what those studies say is "optimal" for the average person is nigh unreadable for me. (Daring Fireball is the worst offender I can think of off the top of my head. I also find desktop Wikipedia's default view pretty hard to read, but the toggleable "wide" mode is excellent).
[delayed]
Naturally. Centuries of typography as a field and your anecdote obliterates it.
Centuries of typography during the age of print. Screens have continuously changed (resolution, refresh rate, size, reflectance, color reproduction, black levels, etc.) over the past 30 years and it's not guaranteed that conventions from print make sense for the current mix of screens.
But we have remained largely the same. And I don't see how colour accuracy affects how many words per line most humans can comfortably deal with.
This whole discussion is silly and rooted in ostensibly clever people who have mastered something unnecessarily hard and have now tied their identity to that mastery.
Power users want their tools to be shit. Sure.
I’d very much prefer more padding between the clickable UI elements on mobile in particular, because the zoom in -> click upvote -> zoom out, or the click downvote by accident -> try to unvote -> try to upvote again, well, it gets pretty old pretty fast.
The text density, however, I rather like.
There are dozens of alternative HN front ends that would satisfy your needs
I don't think it was UI that killed Slashdot. The value was always in the comments, and in the very early years often there would be highly technical SMEs commenting on stories.
The site seemed to start to go downhill when it was sold, and got into a death spiral of less informed users, poor moderation, people leaving, etc. It's amazing that it's still around.
It's not bad. I still read it, but less than HN.
For me, Slashdot became full of curmudgeons. It's pretty tiring when every "+5 Insightful" on a hard drive article is questioning why you'd ever want so big a drive, or why you'd require more than 256 colors, or whatever new thing came out… Like, why are you even on a technology enthusiast site when you bitterly complain about every new thing? Basically, either accept change or get left in the dust, and Slashdot's crowd seemed determined to be left in the dust… forever losing its relevance in the tech community.
Plus Rusty just pushed out Kuro5hin and it felt like “my scene” kind of migrated over.
As an aside, Kuro5hin was the only “large” forum that I ever bothered remembering people’s usernames. Every other forum it’s all just random people. (That isn’t entirely true, but true enough)
Kuro5hin was far less about technology though.
It was interesting in a different way though.
Like Adequacy.
Did you also move over to MetaFilter ?
Never really “got” MetaFilter.
Adequacy was awesome.
It’s not modern UIs that prevent websites from being performant. Look at old.reddit.com, for instance. It’s the worst of both worlds. An old UI that, although much better than its newer abomination, is fundamentally broken on mobile and packed to the gills with ad scripts.
It brings me genuine joy to use websites like HN or Rock Auto that haven't been React-ified. The lack of frustration I feel when using fast interfaces is noticeable.
I don't really get why so many websites are slow and bloated these days. There are tools like SpeedCurve which have been around for years yet hardly anyone I know uses them.
What changes have been made to the HN design since it was launched?
I know there are changes to the moderation that have taken place many times, but not to the UI. It's one of the most stable sites in terms of design that I can think of.
What other sites have lasted this long without giving in to their users' whims?
Over the last 4 years my whole design ethos has transformed to "WWHND" (What Would Hacker News Do?) every time I need to make any UI changes to a project.
Slashdot looked a lot like HN, with high information density. It was fast and easy to read all the comments. Then a redesign happened because of web 2.0 or "mobile-first" hype, and most of the comments got hidden/collapsed by default, sorted almost randomly, etc. So a new user would come there and say "wtf, this is a dead conversation" or would have to click too many times to get to the full conversation. So new users would leave, and so would the old ones, because the page was so hard to use. It just lost users and that was that. All because of the redesign, which they never wanted to revert. Sad really, because I still think it had/has the best comment moderation by far.
Looks like I've had my Slashdot account over 20 years. I remember the original UI being much simpler - much more like HN. Did the membership collapse because of the UI changes they made or because Digg and Reddit took its place?
I think it was mostly because of the UI, but I don't think anyone has any data on that, so it probably was timing and a combination of things. The UI redesign really was a hot topic for weeks/months on Slashdot, but unfortunately they stuck to their guns and got what they deserved.
The only one I remember is adding the ability to collapse comment threads
Similar thing happened (to me) with Hackaday around 2010-2011. I used to check it almost daily, and then never again after the major re-design.
That, and all the trolls that piled on, when CNN and YouTube started policing their comment sections.
HN interface is goated
Is that when they went fully xhtml?
I've wanted to poll HN about how many people actively track usernames.
With IRC it's basically part of the task, but on every forum I read, it's rare that I ever consider who's saying what.
I use this script: https://greasyfork.org/en/scripts/441566-hn-avatars-in-396-b...
Doesn't really help a ton with recognizing but it makes it easier to track within a thread.
I routinely notice a handful of people, such as Thomas Ptacek, whose opinions I have opinions about. Then in context I notice e.g. Martin Uecker for C and especially the pointer provenance problem (on which he has been diligently working for some years), or Walter Bright (for the D language), or Steve Klabnik (Rust).
There are people who show up much less often and have less obvious usernames, Andrew Ayer is agwa for example, and I'm sure there are people I blank on entirely.
Once in a while I will read something and realise oh, that "coincidental" similarity of username probably isn't a coincidence, I believe the first time I realised it was Martin Uecker was like that for example. "Hey, this HN person who has strong opinions about the work by Uecker et al has the username... oh... Huh. I guess I should ask"
Yep. Dang is basically the only one I notice.
It helps having the username in a lighter font than the comment.
HN goes to some lengths to de-emphasize the usernames, leaving them small and medium grey against a light grey background. It's not easy to track usernames here. Some other forums put far more emphasis on them, even letting users upload icons so you can tell who is who at a glance.
For me it's more a recognition after the fact thing: "Oh that was a good comment who said that? Oh that guy, yeah not surprised."
I don’t even slightly.
orange site still doesn't support markdown link tags though.
What's a markdown link tag?
I'm assuming [Example link text](https://example.org)
I don't know what use that would be for a text comment though.
Fast is also cheap, especially in the world of cloud computing where you pay by the second. The only way I could create a profitable transcription service [1] that undercuts the rest was by optimizing every little thing along the way. For instance, just yesterday I learned that the image I've put together is 2.5× smaller than the next open source variant. That means faster cold boots, which reduces the cost (and provides a better service).
I've approached the same thing slightly differently: I can run it on consumer hardware for vastly cheaper than the cloud and don't have to worry about image sizes at all (bare metal is 'faster'), offering 20,000 minutes of transcription for free up to the rate limit (1 request every 5 seconds).
I contributed "whisperfile" as a result of this work:
* https://github.com/Mozilla-Ocho/llamafile/tree/main/whisper....
* https://github.com/cjpais/whisperfile
If you ever want to chat about making transcription virtually free or very cheap for everyone, let me know. I've been working on various projects related to it for a while, including an open source/cross-platform superwhisper alternative: https://handy.computer
> i can run it on consumer hardware for vastly cheaper than the cloud
Woah, that's really cool, CJ! I've been toying with the idea of standing up a cluster of older iPhones to run Apple's Speech framework. [1] The inspiration came from this blog post [2] where the author is using it for OCR. A couple of things are holding me back: (1) the OSS models are better according to the current benchmarks, and (2) I have customers all over the world, so geographical load-balancing is a real factor. With that said, I'll definitely spend some time checking out your work. Thanks for sharing!
[1] https://developer.apple.com/documentation/speech
[2] https://terminalbytes.com/iphone-8-solar-powered-vision-ocr-...
Is S3 slow or fast? It's both, as far as I can tell, and it represents a class of systems (mine included) that go slow to go fast.
S3 is “slow” at the level of a single request. It’s fast at the level of making as many requests as needed in parallel.
Being “fast” is sometimes critical, and often aesthetic.
We have common words for those two flavors of “fast” already: latency and throughput. S3 has high latency (arguable!), but very very high throughput.
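The latency/throughput split can be sketched with a toy simulation (a hedged illustration, not how S3 actually works internally): each request pays a fixed latency, but nothing stops a client from issuing many of them at once.

```python
import time
from concurrent.futures import ThreadPoolExecutor

LATENCY_S = 0.1  # pretend every request takes 100 ms, like one high-latency GET

def fetch(key):
    """Simulate a single high-latency request."""
    time.sleep(LATENCY_S)
    return key

# Sequential: 50 requests pay 50 latencies back to back.
start = time.perf_counter()
for key in range(50):
    fetch(key)
sequential = time.perf_counter() - start

# Parallel: the same 50 requests overlap, so wall time stays near one latency.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=50) as pool:
    list(pool.map(fetch, range(50)))
parallel = time.perf_counter() - start

print(f"sequential: {sequential:.2f}s  parallel: {parallel:.2f}s")
```

High latency per request, but high throughput in aggregate: the "slow" system finishes 50 requests in roughly the wall-clock time of one.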
Fast is cheap everywhere. The only reasons software isn’t faster:
* developer insecurity and pattern lock in
* platform limitations. This is typically software execution context and tool chain related more than hardware related
* most developers refuse to measure things
Even really slow languages can result in fast applications.
Yep. I'm hoping that installed copies of PAPER (at least on Linux) will be somewhere under 2MB total (including populating the cache with its own dependencies etc). Maybe more like 1, although I'm approaching that line faster than I'd like. Compare 10-15 for pip (and a bunch more for pipx) or 35 for uv.
Fast doesn't necessarily mean efficient/lightweight and therefore cheaper to deploy. It may just mean that you've thrown enough expensive hardware at the problem to make it fast.
Your CSS is broken fyi
Not in development and maintenance dollars it's not
Hmm… That's a good point. I recall a few instances where I went too far to the detriment of production. Having a trusty testing and benchmarking suite thankfully helped with keeping things more stable. As a solo developer, I really enjoy the development process, so while that bit is costly, I didn't really consider that until you mentioned it.
This is interesting. It got me to think. I like it when articles provoke me to think a bit more on a subject.
I have found this true for myself as well. I changed back over to Go from Rust mostly for the iteration speed benefits. I would replace "fast" with "quick", however. It isn't so much I think about raw throughput as much as "perceived speed". That is why things like input latency matter in editors, etc. If something "feels fast" (ie Go compiles), we often don't even feel the need to measure. Likewise, when things "feel slow" (ie Java startup), we just don't enjoy using them, even if in some ways they actually are fast (like Java throughput).
I feel the same way about Go vs Rust. Compilation speed matters. Also, Rust projects resemble JavaScript projects in that they pull in a million deps. Go projects tend to be much less dependency happy.
One of the Rust ecosystem's biggest mistakes, in my opinion, was not establishing a fiercely defensive mindset around dependency-bloat and compilation speed.
Whatever Rust's strongest defenders like to claim, compilation speed and avoiding bloat just really weren't goals. That's cascaded down into most of the ecosystem's most used dependencies, and so most Rust projects just adopt the mindset of "just use the dependency". It's quite difficult to build a substantial project without pulling in hundreds of dependencies.
I went on a lengthy journey of building my own game engine tools to avoid bloat, but it's tremendously time consuming. I reinvented the Mac / Windows / Web bindings by manually extracting auto-generated bindings instead of using crates that had thousands of them, significantly cutting compile time. For things like derive macros and serialization I avoided using crates like Serde that have a massive parser library included and emit lots of code. For web bindings I sorted out simpler ways of interacting with Javascript that didn't require a heavier build step and separate build tool. That's just the tip of the iceberg I can remember off the top of my head.
In the end I had a little engine that could do 3D scenes, relatively complex games, and decent GPU driven UI across Mac, Windows, and Web that built in a fraction of the time of other Rust game engines. I used it to build a bunch of small game jam entries and some web demos. A clean release build on the engine on my older laptop was about 3-4 seconds, vastly faster than most Rust projects.
The problem is that it was just a losing battle. If I wanted Linux support, or to use pretty much any other crate in the Rust ecosystem, I'd have to pull in dependencies that alone would multiply the compile time.
In some ways that's an OK tradeoff for an ecosystem to make, but compile times do impede iteration loops and they do tend to reflect complexity. The more stuff you're building on top of the greater the chances are that bugs are hard to pin down, that maintainers will burn out and move on, or that you can't reasonably understand your stack deeply.
Looking completely past the languages themselves I think Zig may accrue advantages simply because its initial author so zealously defined a culture that cares about driving down compile times, and in turn complexity. Pardon the rant!
It's fascinating to me how the values and priorities of a project's leaders affect the community and its dominant narrative. I always wondered how it was possible for so many people in the Rust community to share such a strong view on soundness, undefined behavior, thread safety etc. I think it's because people driving the project were actively shaping the culture.
Meanwhile, compiler performance just didn't have a strong advocate with the right vision of what could be done. At least that's my read on the situation.
As OP demonstrated, Rust compiler performance is not the problem, it's actually quite fast for what it does. Slow builds are rather caused by reliance on popular over-generic crates that use metaprogramming to generate tons of code at compile time. It's not a Rust specific tradeoff but a consequence of the features it offers and the code style it encourages. An alternative, fast building crate ecosystem could be developed with the same tools we have now.
By comparison, Go doesn't have _that_ problem because it just doesn't have metaprogramming. It's easy to stay fast when you're dumb. Go is the Forrest Gump of programming languages.
The 'windows' crate is really good at not being slow.
It ruthlessly uses features to cut down the amount of generated code and it only uses fairly simple #ifdef-like metaprogramming.
And that leads to dependency hell once you realize that those dependencies all need different versions of the same crate. Most of the time this "just works" (at the cost of more dependencies, longer compile time, bigger binary)... until it doesn't then it can be tough to figure out.
In general, I like cargo a lot better than the Go tooling, but I do wish the Rust stdlib was a bit more "batteries included".
I feel like Rust could have added commonly used stuff as extensions and provided separate builds that have them baked in for those that want to avoid dependency hell while still providing the standard builds like they currently do. Sure the versions would diverge somewhat but not sure how big of a problem that would be.
This is all well and good that we developers have opinions on whether Go compiles faster than Rust or whatever, but the real question is: which is faster for your users?
...and that sounds nice to me as well, but if I never get far enough to give it to my users then what good is fast binaries? (implying that I quit, not that Rust can't deliver). The holy grail would be to have both. Go is generally 'fast enough', but I wish the language was a bit more expressive.
I've noticed over and over again at various jobs that people underestimate the benefit of speed, because they imagine doing the same workflow faster rather than doing a different workflow.
For example, if you're running experiments in one big batch overnight, making that faster doesn't seem very helpful. But with a big enough improvement, you can now run several batches of experiments during the day, which is much more productive.
I think people also vastly underestimate the cost of context switching. They look at a command that takes 30 seconds and say "what's the point of making it take 3 seconds? you only run it 10 times in a day; it's only 5 minutes". But the cost is definitely way more than that.
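That claim survives even a crude back-of-envelope; the numbers below (refocus penalty, focus-break threshold) are invented purely for illustration:

```python
# Back-of-envelope cost of a command run 10x/day, assuming (made-up numbers)
# that any wait over ~10 s breaks focus and incurs a refocus penalty.
RUNS_PER_DAY = 10
REFOCUS_S = 120              # assumed cost of a broken train of thought
FOCUS_BREAK_THRESHOLD_S = 10 # waits shorter than this don't break focus

def daily_cost(cmd_seconds):
    """Total seconds lost per day, including refocus time for long waits."""
    penalty = REFOCUS_S if cmd_seconds > FOCUS_BREAK_THRESHOLD_S else 0
    return RUNS_PER_DAY * (cmd_seconds + penalty)

slow, fast = daily_cost(30), daily_cost(3)
print(f"30s command: {slow / 60:.1f} min/day   3s command: {fast / 60:.1f} min/day")
```

Under these assumptions the 30-second command costs 25 minutes a day, not 5, because each run drags a context switch along with it.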
Whenever we make our code faster the users just run bigger models :P.
Me, looking at multi-hour CI pipelines, thinking how many little lint warnings I'd fix up if CI could run in like 20 minutes
Website is superfast. Reason I usually go for the comments first on HN is exactly this: they're fast. THIS is notably different.
On interfaces:
It's not only the slowness of the software or machine we have to wait for, it's also the act of moving your limb that adds a delay. Navigating a button (mouse) adds more friction than having a shortcut (keyboard). It's a needless feedback loop. If you master your tool all menus should go away. People who live in the terminal know this.
As a personal anecdote, I use custom rofi menus (think raycast for Linux) extensively for all kinds of interaction with data or file system (starting scripts, opening notes, renaming/moving files). It's notable how your interaction changes if you remove friction.
Venerable tools in this vein: vim, i3, kitty (formerly tmux), ranger (on the brim), qutebrowser, visidata, nsxiv, sioyek, mpv...
Essence of these tools is always this: move fast, select fast and efficiently, ability to launch your tool/script/function seamlessly. Be able to do it blindly. Prefer peripheral feedback.
I wish more people saw what could be and built more bicycles for the mind.
The website is fast because it's minimal: just under 80 kB, of which 55 kB is the custom font. This is fine for plain content sites, but others will have other requirements.
There's never a reason to make a content website use heavyweight JS or CSS though.
That's actually why I don't like Discourse at all. If your community site needs loading icons, I don't want to use it.
Pavel Durov (founder of Telegram) totally nailed this concept.
He pays special attention to the speed of the application. The Russian social network VK worked blazingly fast. The same goes for Telegram.
I always noticed it but not many people verbalized it explicitly.
But I am pretty sure that people realize it subconsciously and it affects user behaviour metrics positively.
Telegram is pretty slow, both the web interface and the Android app. For example, reactions to a message always take a long time to load (both when leaving one, and when looking at one). Just give me emoji, I don't need your animated emoji!
Can't agree.
These operations are near instant for me on telegram mobile and desktop.
It's the fastest IM app for me by an order of magnitude.
The industry (hint: this forum's readers) has replaced "fast" software with "portable", meaning:
- universally addressable libraries that must load from discrete and often remote sources,
- zero hang time in programming language evolution (leaving no time for experts to discover, document, and implement optimizations),
- an insistence on "latest version" software with no emphasis on long-term code stability.
I find that at most jobs I've had, fast only becomes a big issue once things are too slow. Or expensive.
It's a retroactively fixed thing. Imagine forgetting to make a UI, shipping just an API to a customer, then thinking "oh shit, they need a UI, they are not programmers", and only noticing from customer complaints. That is how performance is often treated.
This is probably because performance problems usually require load or unusual traffic patterns, which require sales, which require demos, which don't require performance tuning, as there is only one user!
If you want to speed your web service up, the first thing is to invest time, and maybe money, in really good observability. It should be easy for anyone on the team to find a log, see what CPU is at, etc. Then set up proxy metrics around the speed you care about, talk about them every week, and take actions.
Proxy metrics means you likely can't (well, probably should not) check the speed at which Harold can sum his spreadsheet every minute, but you can check the latency of the major calls involved. If something is slow but the metrics look good, then profiling might be needed.
Sometimes there is an easy speed up. Sometimes you need a new architecture! But at least you know what's happening.
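As a sketch of what such a proxy metric could look like, here is a minimal nearest-rank percentile over recorded call durations (the durations and the helper are made up for illustration; a real setup would pull these from the observability stack):

```python
import statistics

def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples (ms)."""
    ranked = sorted(samples)
    idx = max(0, round(pct / 100 * len(ranked)) - 1)
    return ranked[idx]

# Pretend these are this week's request durations (ms) for one major endpoint.
durations = [42, 38, 51, 47, 300, 44, 40, 55, 39, 1200]

p50 = percentile(durations, 50)
p95 = percentile(durations, 95)
print(f"p50={p50}ms p95={p95}ms mean={statistics.mean(durations):.0f}ms")
```

Tail percentiles are the point: the median here looks healthy while p95 exposes the slow requests that a mean would smear out.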
In addition to all this, I’m also of the opinion that most users just have software “lumped on them” and have little to no recourse for complaint, so they’re just forced/trained to put-up-and-shut-up about it.
As a result, performance (and a few other things) functionally never gets “requested”. Throw in the fact that for many mid-to-large orgs, software is not bought by the people who are forced to use it and you have the perfect storm for never hearing about performance complaints.
This in turn, justifies never prioritising performance.
> Rarely in software does anyone ask for “fast.” We ask for features, we ask for volume discounts, we ask for the next data integration. We never think to ask for fast.
Almost everywhere I’ve worked, user-facing speed has been one of the highest priorities. From the smallest boring startup to the multi billion dollar company.
At companies that had us target metrics, speed and latency was always a metric.
I don’t think my experience has been all that unique. In fact, I’d be more surprised if I joined a company and they didn’t care about how responsive the product felt, how fast the pages loaded, and any weird lags that popped up.
At 6 out of 8 companies I've worked at (mostly a mixture of tech & finance) I have always had to fight to get any time allotted for performance optimization, to the point where I would usually just do it myself under the radar. Even at companies that measured latency and claimed it was important, it would usually take a backseat to adding more features.
That is how it is most of the time. If you want to experience the other extreme, go to HFT or low-latency projects.
My experience has been that people sometimes obsess over speed for things like how fast a search result returns, but not over things like how fast a page renders or how many bytes we send the user.
I have been paid to make things fast. Sometimes that was the explicit reason I was hired!
Efficient code is also environmentally friendly.
First, efficient code is going to use less electricity, and thus, fewer resources will need to be consumed.
Second, efficient code means you don't need to be constantly upgrading your hardware.
Well, that depends. Very inefficient code tends to only be used when absolutely needed. If an LLM becomes ten times faster at answering simple prompts, it may very well be used a hundred times more as a result, in which case electricity use will go up, not down. Efficiency gains commonly result in doing way more with more, not more with less.
Correct. This is also known as a rebound effect [1], or, specifically with regard to technological improvements, as the Jevons paradox [2].
[1]: https://en.wikipedia.org/wiki/Rebound_effect_(conservation)
Indeed, that is a common occurrence, called the Jevons paradox.
Unless your code is running on a large number of machines across data centers, that energy is about 2-3 figures a month in total utilization.
So if we use cost as a proxy for environment impact it’s not saving much at all.
I think this is a meme to help a different audience care about performance.
Very true, but in recent years feature development has taken precedence over efficiency. VP of whatever says hardware is cheap, software engineers are not.
Energy used for lighting didn't decrease when the world moved to LED lights which use much less energy - instead we just used more lighting everywhere, and now cities are white instead of yellow.
I know what you mean, but do you have a citation for that? LEDs are so much more efficient that I wonder if it's true.
No mention of google search itself being fast. It's one of the poster children of speed being part of the interface.
Microsoft needs to take heed; for example, Explorer's search and Teams make your computer seem extremely slow. VS Code, on the other hand, is fast enough, while still slower than native editors such as Sublime Text.
Only sorta related, but it's crazy to me how much our standards have dropped for speed/responsiveness in some areas.
I used to play games on N64 with three friends. I didn’t even have a concept of input lag back then. Control inputs were just instantly respected by the game.
Meanwhile today, if I want to play rocket league with three friends on my Xbox series S (the latest gen, but the less powerful version), I have to deal with VERY noticeable input lag. Like maybe a quarter of a second. It’s pretty much unplayable.
> I have to deal with VERY noticeable input lag. Like maybe a quarter of a second. It’s pretty much unplayable
Your experience is not normal.
If you’re seeing that much lag, the most likely explanation is your display. Many TVs have high latency for various processing steps that doesn’t matter when you’re watching a movie or TV, but becomes painful when you’re playing games.
This does not undermine chamomeal's argument. The whole point is that back in the N64 days, they could not possibly have had that experience. There was no way to even make it happen. The fact that today it's a real possibility when you've done nothing obviously wrong is a definite failure.
TVs back then supported a given standard (NTSC, PAL) and a lower resolution. CRTs couldn't "buffer" the image. Several aspects made it so that "cheating" was not possible.
It was either fast, or nothing. Image quality suffered, but speed was not a parameter.
With LCDs, lag became a trade-off parameter. Technology enabled something to become worse, so economically it was bound to happen.
Luckily newer TVs and console can negotiate a low-latency mode automatically. It's called ALLM (Auto-Low Latency Mode).
It's possible, but it seems to specifically be a Rocket League on Xbox Series S problem, not a display problem. Other games run totally fine on the same display with no lag!
That may be an issue of going from a CRT TV to an LCD TV. As far as I'm aware, there was no software manipulation of the video input on a CRT; it just took the input and displayed it on the screen in the only way it could. Newer TVs have all kinds of settings to alter the video, which takes processing time. They also typically have a game mode to turn off as much of it as the TV will allow.
Why should the user care whether the lag is introduced by the software in the controller, the software in the gaming console, or the software in the TV?
The lag is due to some software, so the problem is with how software engineering as a field functions.
I hear it claimed that you're only supposed to enable game mode for competitive multiplayer games -- but I've found that many single player games like Sword Art Online: Fatal Bullet are unplayable without game mode enabled.
It could be my unusual nervous system. I'm really good at rhythm games, often clearing a level on my first try and amazing friends who can beat me at other genres. But when I was playing League of Legends, which isn't very twitchy, it seemed like I would just get hit and there was nothing I could do about it when I played on a "gaming" laptop but found I could succeed at the game when I hooked up an external monitor. I ran a clock and took pictures showing that the external monitor was 30ms faster than the built-in monitor.
It’s not just the software, the analog electronics of LCD/LED screens are inherently laggy and have motion blur: https://en.wikipedia.org/wiki/Sample_and_hold
How about the speed of going from a powered off console to playing the actual game? Sleep mode helps with resuming on console, but god forbid you’re on a pc with a game that has anti cheat, or comped menus. You will sit there, sometimes for a full minute waiting. I absolutely cannot stand these games.
My buddy booted up his PC after gaming on his PS5 for two weeks and every single app needed multi-gig updates. Xbox app, Logitech app, Discord, Windows 11, Chrome, Steam. The whole enchilada. Rage inducing compared to sticking a cart in an N64.
Or how channel surfing now requires a 1-2 second latency per channel, versus the way it was seemingly instant from the invention of television through the early 1990s.
Having a lot more channels is cool I guess, but it was much better to watch and listen to a staticy analog channel 20 years ago, than a digital channel today where there is no audio and the image freezes.
Heck yes! I recently dusted off (had to literally dust the inside of the cartridges to get past a black screen, lol) my old Sega Genesis (and bought an HDMI adaptor for it), and have been letting my school age sons play it. They haven't even commented on the basic graphics. They're like "wow dad, no boot time, no connecting to server time, no waiting to skip ads time". They love it.
How about you enable game mode on the TV you're using
Game mode is on! The input lag is not from the display. Other games run fine.
Fast is a distinctive feature.
For what it's worth, I built myself a custom Jira board last month so I could instantly search, filter and group tickets (by title, status, assignee, version, ...)
Motivation: Running queries and finding tickets on JIRA kills me sometimes.
The board is not perfect, but works fast and I made it superlightweight. In case anybody wants to give it a try:
https://jetboard.pausanchez.com/
Don't try it on mobile, use desktop. Unfortunately it uses a proxy and requires an API key, but it doesn't store anything in the backend (it just proxies the request because of CORS). Maybe there is an API or a way to query a Jira Cloud instance directly from the browser; I just tried the first approach and moved on. It even crossed my mind to add it somehow to the Jira marketplace...
Anyway, caches stuff locally and refreshes often. Filtering uses several tricks to feel instant.
UI can be improved, but uses a minimalistic interface on purpose, like HN.
If anybody tries it, I'll be glad to hear your thoughts.
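For the curious, the "tricks" that make filtering feel instant usually amount to caching everything client-side and precomputing a search index so each keystroke is an in-memory scan rather than a round trip to Jira. A hypothetical sketch of that idea (my own names, not the actual jetboard code):

```python
def index_tickets(tickets):
    """Precompute one lowercase search string per ticket, at cache-refresh time."""
    fields = ("title", "status", "assignee", "version")
    return [(t, " ".join(str(t.get(f, "")).lower() for f in fields))
            for t in tickets]

def filter_tickets(indexed, query):
    """Return tickets whose combined fields contain every word of the query."""
    words = query.lower().split()
    return [t for t, text in indexed if all(w in text for w in words)]

tickets = [
    {"title": "Fix login bug", "status": "Open", "assignee": "ana"},
    {"title": "Update docs", "status": "Done", "assignee": "bo"},
]
idx = index_tickets(tickets)
print([t["title"] for t in filter_tickets(idx, "open ana")])  # ['Fix login bug']
```

Because the scan runs over a few thousand cached tickets in memory, results can appear within a frame instead of after a network request.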
I once accidentally blocked TCP on my laptop and found out "google.com" runs on UDP, it was a nice surprise.
baba is fast.
I sometimes get calls like "You used to manage a server 6 years ago and we have an issue now" so I always tell the other person "type 'alias' and read me the output", this is how I can tell if this is really a server I used to work on.
fast is my copilot.
Specifically HTTP/3 and QUIC (which came out of Google):
https://en.wikipedia.org/wiki/HTTP/3
https://en.wikipedia.org/wiki/QUIC
They don't require you to use QUIC to access Google, but it is one of the options. If you use a non-supporting browser (Safari prior to 2023, unless you enabled it), you'd access it with a standard TCP-based HTTP connection.
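Roughly, the upgrade works like this: the server advertises HTTP/3 in an Alt-Svc response header on an ordinary TCP connection, and a supporting browser can then switch to QUIC for later requests. A minimal sketch of reading such a header (the sample value is illustrative of what Google's servers send, not fetched live):

```python
def alt_svc_protocols(alt_svc):
    """Return the ALPN protocol ids advertised in an Alt-Svc header value."""
    protos = []
    for entry in alt_svc.split(","):
        # each comma-separated entry looks like: h3=":443"; ma=2592000
        protos.append(entry.strip().split("=")[0])
    return protos

# Illustrative Alt-Svc value of the kind Google's servers send:
header = 'h3=":443"; ma=2592000, h3-29=":443"; ma=2592000'
print(alt_svc_protocols(header))  # ['h3', 'h3-29']
```

Here `h3` means HTTP/3 over QUIC is available on port 443; a client that can't speak it simply ignores the header and stays on TCP.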
> this is how I can tell if this is really a server I used to work on
Hm, shell environment is fairly high on the list of things I'd expect the next person to change, even assuming no operational or functional changes to a server.
And of course they'd be using a different user account anyway.
I was going to say one of the more recent times fast software excited me was with `uv` for Python packaging, and then I saw that op had a link to Charlie Marsh in the footnote. :)
A lot of people have low expectations from having to use shit products at work, and generally not being discerning.
Speed is what made Google, which was a consumer product at the time. (I say this because it matters more in consumer products.)
I don’t think people realize how much working with bad tools inspires you to write equally bad applications.
Beautiful tools make you stretch to make better things with them.
I’m a senior developer on a feature-bloated civil engineering web app that has two back-end servers (one just proxies to the other), 8k lines of stored procedures as the data layer, and many multi-thousand-line React components that intentionally break React best practices.
I loathe working on it but don’t have the time to refactor legacy code.
———————-
I have another project where I am the principal engineer; it uses Django, Next.js, Docker Compose for dev and Ansible to deploy, and it’s a dream to build in and push features to prod. Maybe I’m more invested so it’s more interesting to me, but not waiting 10 seconds to register and hot-reload a React change is also much more enjoyable.
whats your setup for the frontend? do you autogen your queries from DRF? do you prefer react headless over django templates?
If it’s something simple, I’ll use Django templates because they’re very easy. Most stuff I work on these days requires a fancier UI, so I’ll 100% use React or Next.js for the front end (and I’ve been using Zustand for state management). I’ll have an API util that uses axios and handles appending a JWT auth header to all requests, and use that in my components. I like django-allauth because it makes integrating email/Google/Microsoft auth really easy out of the box.
DRF is good; Ninja is a little more lightweight and flexible, so it really comes down to the project.
Make fast sexy again... please. Growing up, I thoroughly enjoyed seeing workers tapping away at registers with no mouse, all muscle memory and layers and layers of menus accessible by key taps, whether at an airline, a clothing store, or even restaurants that used those dimly lit terminals glowing green or orange with just a bunch of text and a well-versed operator chatting while getting their work done. The keys were commercial-grade mechanical and made a pleasing sound.
Nowadays it's a fancy touch display that requires concentration and is often sluggish, and the machine often feels cheap and makes a cheap sound when tapped on. I don't think the operators ever enjoy interacting with it, and the software is often slow across the network...
I'm all for fast. It shows no matter what, at least somebody cared enough for it to be blazing fast.
My favorite essay on this topic, not yet referenced, is James Somers's "Speed matters:" https://jsomers.net/blog/speed-matters
Discussed a few times:
Working quickly is more important than it seems (2015) - https://news.ycombinator.com/item?id=36312295 - June 2023 (183 comments)
Speed matters: Why working quickly is more important than it seems (2015) - https://news.ycombinator.com/item?id=20611539 - Aug 2019 (171 comments)
Speed matters: Why working quickly is more important than it seems - https://news.ycombinator.com/item?id=10020827 - Aug 2015 (139 comments)
jsomers gets a lot of much-deserved love here!
>> Rarely in software does anyone ask for “fast.”
I have been asking about latency-free computing for a very long time. All computing now is slow.
The owner of this site is involved in a scraping business[0]. How can she justify that with fast-ness?
>Performant stealth mode
>Scale browsers with bot anti-detection. Access high performant residential proxies and built-in auto-CAPTCHA solvers
[0] https://www.onkernel.com/#:~:text=Performant%20stealth%20mod...
Reminds me "Fast Software, the Best Software" by Craig Mod: https://craigmod.com/essays/fast_software/
This is one of the reasons I switched from Unity to Godot. There is something about Godot loading fast and compiling fast that makes it so much more immersive to spend hours chugging away at your projects.
My son told me to not develop a game with Unity because, as a player, he thought Unity games took way too long to load.
There might be some selection bias - Experienced programmers who care a lot about engine technology are more likely to use Godot and also optimize their load times. Unity includes a lot of first-time programmers who just want to get something shipped
Very much the opposite in my experience. People, especially on this site, ask for "fast" regardless of whether they need it. If asked "how fast?" the answer is always "as fast as possible". And they make extremely poor choices as a result. Fast is useful up to a point, but faster than that is useless - maybe actively detrimental if you can e.g. generate research reports faster than you can read them.
You make much better code, and much better products, if you strike "fast" from your vocabulary. Instead, set specific, concrete latency budgets (e.g. 99.99% of requests within x ms). You'll definitely end up with fewer errors and better maintainability than the people who tried to be "fast". You'll often end up faster than them too.
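A budget like that is easy to make concrete. A minimal sketch (the function names are mine, not from any particular library) of checking a percentile budget over recorded latency samples:

```python
import math

def latency_percentile(samples_ms, p):
    """Nearest-rank percentile (p in (0, 100]) of a list of latency samples."""
    s = sorted(samples_ms)
    rank = max(1, math.ceil(p / 100 * len(s)))
    return s[rank - 1]

def within_budget(samples_ms, p, budget_ms):
    """True if the p-th percentile latency is inside the budget."""
    return latency_percentile(samples_ms, p) <= budget_ms

samples = [12, 15, 11, 90, 14, 13, 12, 16, 14, 13]
print(latency_percentile(samples, 99))  # 90
print(within_budget(samples, 50, 20))   # True
```

Run against production traces, a check like this turns "is it fast?" into a yes/no question you can put in CI or an alert.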
> it's obvious to anyone that writes code that we're very far from the standards that we're used to
This is true, but I also think there's a backlash now and therefore some really nice mostly dev-focused software that is reeaaaaly fast. Just to name a few:
- Helix editor
- Ripgrep
- Astral Python tools (ruff, uv, ty)
That's a tiny subsection of the mostly bloated software that exists. But it makes me happy when I come across something like that!
Also, browsers seem to be really responsive despite being among the most feature-bloated products on earth thanks to expanding web standards. I'm not really counting this, though, because while Firefox and Chrome rarely lag, the websites I view with them often do, so it's not really a fast experience.
I always have to remind myself of the bank transfer situation in the US whenever I read an article complaining about it. Here in the UK, bank transfers are quick and simple (the money appears to move virtually instantly). Feel free to enlighten me to why they're so slow in the US.
"Community banks mostly don’t have programmers on staff, and are reliant on the so-called “core processors” ...
This is the largest reason why in-place upgrades to the U.S. financial system are slow. Coordinating the Faster ACH rollout took years, and the community bank lobby was loudly in favor of delaying it, to avoid disadvantaging themselves competitively versus banks with more capability to write software (and otherwise adapt operationally to the challenges same-day ACH posed)."
From the great blog Bits About Money: https://www.bitsaboutmoney.com/archive/community-banking-and...
UK Banks use FiServ too. So that can't be the only reason.
For ACH, it's the scheduling and batching that makes it slow. The transfer itself should be instant but often my bank sends it out around midnight. This is why Venmo and Zelle are so popular. You can also modify/cancel a bank transfer before it goes through, which is nice.
This is the same in Switzerland. If you request an IBAN transfer, it's never instant. The solution there for fast payments is called TWINT, which works at almost any POS terminal (you take a picture of the displayed QR code).
I think BACS is similarly "slow" due to the settlement process.
These days ACH settlement runs multiple times a day. The biggest source of delay for ACH transfers is your bank delaying release of the funds for risk management. ACH transfers can be reversed even after they have "settled" and if the receiving bank has already disbursed the funds then they have to eat the cost of reimbursing the sender. Reversals are more likely to happen soon after the transfer completes, so delaying release of the funds makes it less likely the bank will be left holding the bag.
People are almost always talking about Faster Payments [0] rather than BACS. It really is instant.
[0] https://en.m.wikipedia.org/wiki/Faster_Payment_System_(Unite...
Yeah Faster Payments is great, though it's relatively new in the scheme of banking and was explicitly designed to speed up small transfers (though now up to 1M I think?). My point was that the legacy system here is comparable to other countries. And until very recently a lot of companies still used BACS because that's how their payroll was set up.
I definitely miss it in other places I've worked.
I was pleasantly surprised when I bought a house that I could just transfer everything instantly with faster payments. I was fully expecting to deal with CHAPS, etc.
But the faster payments ceiling is large enough that buying a house falls under the limit.
The US actually has two real-time payment systems: RTP and FedNow. The number of participating banks is growing rapidly.
https://real-timepayments.com/Banks-Real-Time-Payments.html
Prior to that, you could get instant transfers but it came with a small fee because they were routed through credit card networks. The credit card networks took a fee but credit card transactions also have different guarantees and reversibility (e.g. costing the bank more in cases of fraud)
From the linked RTP site: "Because of development and operational costs most banks and credit unions will offer "send only" capabilities"
Which means nobody can send me money.
FedNow on the backend is supported by fewer banks than Zelle is, which is probably why hardly any banks expose a front-end for it.
I am convinced that this is in some cases a pro-consumer behavior. A credit card company once pulled money from my bank via ACH due to the automatic payment feature I set up, but that bank account didn't have enough money in it. The bank sent me at least two emails about the situation. I finally noticed that second email and wired myself more money from a different account. The credit card company didn't notice anything wrong and didn't charge any late fees or payment returned fees. The bank didn't charge any overdraft fees or insufficient funds fees. And the wire transfer didn't have a fee due to account balance. (Needless to say, from then on I no longer juggle multiple bank accounts like that.)
The bank had an opportunity to notify me precisely because ACH is not real time. And I had an opportunity to fix it because wire transfers is almost real time (finishes in minutes not days). I appreciate it when companies pull money from my account I get days of notice but if I need to move money quickly I can do it too.
In most cases it's definitely better for it to be fast. For example, I sold a buggy face to face today and they paid me by bank transfer; the reason we could do that was that I had high confidence it would turn up quickly and that they weren't trying to scam me. It actually took around 1 second, which is really quite fast.
You don't need slow transfers to get an opportunity to satisfy automatic payments. I don't know how it works but in the UK direct debits (an automatic "take money from my account for bills" system) gives the bank a couple of days notice so my banking app warns me if I don't have enough money. Bank transfers are still instant.
Here in Switzerland bank transfers only take place during business hours
I believe this is because Ürs has to load my silver pieces onto the donkey and drive it to the other bank.
We're pretty lucky here in Australia. Over the past decade or so, PayID has been successfully rolled out to virtually all banks, giving us free and (usually) instant money transfers - I'd say more than half of all personal payments are now done with PayID. Old-skool bank transfers are still the norm for business and administrative payments, but that's changing too, and in any case, those transfers are increasingly being executed behind the scenes over Osko (aka PayID), so they end up settling in seconds (or at least hours) instead of days.
IS IT NOW!? Last time I visited (a long time ago) it was BACS, and the bank clerk told me it takes one day to "properly" register that they've received my funds, one day to make sure it transferred, and on the third and final day the other bank can "properly" acknowledge they've received the funds. That's why it took 3 FRIGGIN DAYS. I used so much cash back then.
patio11 wrote a bit about that here: https://www.bitsaboutmoney.com/archive/the-long-shadow-of-ch...
Isn't there a law in the UK which says it must be fast?
>> Rarely in software does anyone ask for “fast.”
I don't know, there are a sizeable subset of folks who value fast, and it's a big subset, it's not niche.
Search for topics like turning off animations or replacing core user space tools with various go and rust replacements, you'll find us easily enough.
I'm generally a pretty happy MacOS user, especially since M1 came along. But I am seriously considering going back to linux again. I maintain a parallel laptop with nixos and i'm finding more and more niggles on the mac side where i can prioritise lower friction on linux.
So not PowerBI then. Or really any BI tool.
My favourite example of not "fast" right now is any kind of Salesforce report. Not only are they slow but you can't make any changes to the criteria more often than once a minute. Massively changes your behaviour.
I wish I could live in a world of fast.
C++ with no forward decls, no clang to give data about why the compile time is taking so long. 20 minute compiles. Only git tool I like (git-cola) is written in Python and slows to a crawl. gitk takes a good minute just to start up. Only environments are MSYS which is slow due to Windows, and WSL which isn't slow but can't do DPI scaling so I squint at everything.
I might get down voted for saying this on HN, but I'll still say it.
As C++ devs we used to complain a lot about its compilation speed. Now, after moving to Rust, sometimes we wish we could just go back to C++ because of Rust's terrible compilation speeds! :-)
The web is fast.
> Rarely in software does anyone ask for “fast.”
I don't think I can relate this article to what actually happened to the web. It went from being an unusable 3D platform to a usable 2D one. The web exploded with creativity and was out of control thanks to Flash and Director, but speeds were unacceptable. Once Apple stopped supporting it, the web became boring, and fast, very fast. A lot of time and money went into optimising the platform.
So the article is probably more about LLMs being the new Flash. I know that sounds like blasphemy, but they're both slow and melt CPUs.
The web might be fast compared to in 2005 but only if you don't normalize for average CPU performance and bandwidth. Websites that are mostly text often still manage to take remarkable amounts of time to finish rendering and stop moving things around.
The article makes two points: one about software performance, the other about development performance. On software performance, the best explanation I got came from the great Prof. Charles Leiserson's MIT course on performance engineering: performance is a currency. It's something we spend to gain other qualities, such as safety, rich features, and user-friendliness. At the end of the day, a computer can only execute about a hundred kinds of simple primitive instructions; the reason we have a software industry is speed.
https://ocw.mit.edu/courses/6-172-performance-engineering-of...
> Instagram usually works pretty well—Facebook knows how important it is to be fast.
Fast is indeed magical, that's why I exclusively browse Instagram from the website; it's so slow I dip out before they get me with their slot machine.
Why's this so highly rated. Y'all don't know that fast is good?
guess everybody misses software that you can tell the maker cares...
Superhuman achieved their sub-something speed, maybe (has anyone measured it except them? Genuinely, post a link, appreciated).
However, the capital required will probably never be available again relative to the return for any investor involved in that product.
Props to them for pushing the envelope, but they did it in the zero-interest era, and it's a shame this is never highlighted by them. And now the outcome is pretty clear in terms of where the company has ended up.
> Superhuman's sub-100ms rule—plus their focus on keyboard shortcuts—changed the email game in a way that no one's been able to replicate, let alone beat.
https://blog.superhuman.com/superhuman-is-being-acquired-by-...
Being fast helps, but is rarely a product.
I often hear this sort of thing "Facebook was a success using PHP therefore language choice isn't important" or in this case "superhuman made their product fast and they still failed so speed isn't important".
It's obviously wrong if you think about it for more than a second. All it shows is that speed isn't the only thing that matters but who was claiming that?
Speed is important. Being slow doesn't guarantee failure. Being fast doesn't guarantee success. It definitely helps though!
>Being fast helps, but is rarely a product.
>Being fast doesn't guarantee success.
Sometimes it can be a deciding factor though.
Also, sometimes responsiveness beyond the nominal level is not as much of a "must have" as nominally fast performance in place of sluggishness.
Yeah absolutely. At some point you go from "successful despite being slow" to "would have been a success if it wasn't so slow".
Back in the 90's I ran a dev team building Windows applications in VB, and had the rule that the dev machines had to be lower-specced than the user machines they were programming for.
It was unpopular, because devs love the shiny. But it worked - we had nice quick applications. Which was really important for user acceptance.
I didn't make this rule because I hated devs (though self-hatred is a thing ofc), or didn't want to spend the money on shiny dev machines. I made it because if a process worked acceptably quickly on a dev machine then it never got faster than that. If the users complained that a process was slow, but it worked fine on the dev's machine, then it proved almost impossible to get that process faster. But if the dev experience of a process when first coding it up was slow, then we'd work at making it faster while building it.
I often think of this rule when staring at some web app that's taking 5 minutes to do something that appears to be quite simple. Like maybe we should have dev servers that are deliberately throttled back, or introduce random delays into the network for dev machines, or whatever. Yes, it'll be annoying for devs, but the product will actually work.
> Like maybe we should have dev servers that are deliberately throttled back
This is a good point. Often datasets are smaller in dev. If a reasonable copy of live data is used, devs would have an intuition of what is making things slow. Doesn't work for live data that is too big to replicate on a developer's setup though.
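The deliberate-throttling suggestion above can be sketched as a hypothetical WSGI middleware that pads every dev response with random latency (the names and defaults are mine, just to illustrate the idea):

```python
import random
import time

def add_dev_latency(app, min_ms=100, max_ms=400):
    """Wrap a WSGI app so every response is delayed by a random amount (dev only)."""
    def slowed(environ, start_response):
        time.sleep(random.uniform(min_ms, max_ms) / 1000.0)
        return app(environ, start_response)
    return slowed

def app(environ, start_response):
    # Stand-in for the real dev application.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

slow_app = add_dev_latency(app, min_ms=30, max_ms=30)
```

Wrapping only the dev server this way makes slow endpoints hurt during development without touching production code.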
> When was the last time you used airplane WiFi and actually got a lot done?
The greatest day of productivity in my life was a flight from Melbourne to New York via LAX. No wifi on either flight, but a bit in transit. Downloaded everything I needed in advance. Coded like a mofo for like 16 hours.
Fast internet is great for distractions.
Same here. I am more productive on a plane than anywhere else. And for the reasons you describe.
Highly Agree.
Speed of all kinds is incredibly important. Give me all of it.
- Fast developers
- Fast test suites
- Fast feedback loops
- Fast experimentation
Someone (Napoleon?) is credited with saying "quantity has a quality all its own", in software it is "velocity has a quality all its own".
As long as there is some rigor and you aren't shipping complete slop, consistently moving very quickly fixes almost every other deficiency.
- It makes engineering mistakes cheaper (just fix them fast)
- It make product experimentation easy (we can test this fast and revert if needed)
- It makes developers ramp up quickly (shipping code increases confidence and knowledge)
- It actually makes rigor more feasible, as the most effective rigorous processes are lightweight and built-in.
Every line of code is a liability, the system that enables it to change rapidly is the asset.
Side note: every time I encounter JVM test startup lag I think someday I am going to die and will have spent time doing _this_.
For inspiration in this direction see Patrick Collison's great list: https://patrickcollison.com/fast
Another benefit of speed in this regard is that it lets you slow down a bit more and appreciate other things in life.
> Someone (Napoleon?) is credited with saying "quantity has a quality all its own"
Joe Stalin, I believe. It's a grim metaphor regarding the USSR's army tactics in WW2.
https://www.goodreads.com/quotes/795954-quantity-has-a-quali...
According to Wikiquotes, this is a common misattribution, and the first known record is Ruth M. Davis from 1978, who attributes it to Lenin: https://en.wikiquote.org/wiki/Quantity
I work on optimization a large fraction of my time. It is not something learned in a week, month or even a year.
At least in B2B applications that rely heavily on relational data, the best developers are the ones who can optimize at the database level. Algorithmic complexity pretty much screams at me these days and is quickly addressed, but getting the damned query plan into the correct shape for a variety of queries remains a challenge.
Of course, knowing the correct storage medium to use in this space is just as important as writing good queries.
Speed is the most fundamental feature. Otherwise we could do everything by hand and need no computers.
The flip-side of this is that if something is too fast, it raises doubts about whether it actually happened at all. I'm reminded of the TurboTax case, where Intuit found that adding a bunch of artificial loading screens to make it look like TurboTax was taking its time to really pore over customers' tax returns ended up being more appealing to users than not doing so. The actual "analyses" happen within less than a second, but that was (allegedly) too fast for users to believe.
A ticket booking system I was familiar with added latency after upgrades to maintain a particular experience for the operators.
I guess they were used to typing stuff then inspecting paperwork or other stuff waiting for a response. Plus, it avoided complaints when usage inevitably increased over time.
That’s an unusual case because most customers use it once a year, and speed is number 3 or 4 on their priorities behind getting it right (not getting in trouble), and understanding wtf is going on.
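The artificial-latency trick described in these comments can be as simple as padding responses to a minimum duration. A hypothetical sketch (not Intuit's or the booking system's actual code):

```python
import functools
import time

def pad_latency(min_seconds):
    """Decorator: don't return before at least min_seconds have elapsed."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            start = time.monotonic()
            result = fn(*args, **kwargs)
            remaining = min_seconds - (time.monotonic() - start)
            if remaining > 0:
                time.sleep(remaining)
            return result
        return inner
    return wrap
```

Padding to a floor (rather than sleeping a fixed amount) means genuinely slow requests aren't made any slower; only suspiciously fast ones get stretched.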
How did over a thousand people upvote this hollow article? Am I the only one who was looking for substance in vain?
The substance is in the audience, in the sense that a lot of people resonate with what the article is saying.
> Rarely in software does anyone ask for “fast.”
It is implicit, in the same way that in a modern car you expect electric windows and air-conditioning (yes, back in the day, those were premium extras)
If you want fast try booting your old DDR3 computer, that's fast!
What's amazing to me is that often all it takes to go fast is to keep things simple. JBlow once said that software should be treated like a rocket ship: every thing you add contributes weight.
Fast is why, after decades doing high-level scripting, I'm now exploring lower-level languages that live closer to the metal...
Fast and light weight. That's why I love vim/cli over IDEs.
Btw, cool site design.
Interesting that onkernel.com intentionally animates and slows down the loading of the web interface, making it harder to scroll and scan the site. Irony or good design?
Browse the HTML. This site looks hand-coded. The Google fonts and some light CSS are the only imported stuff. No javascript.
It's gorgeous
I was curious, so I checked, it is raw html. And yes it is beautiful.
Adding to this: that's why I insist all my students learn touch-typing, for at least 10 minutes per lesson. It really changes how you interact with your computer, and how quickly touch typing lets you type as fast as you can think changes your approach to automating things in a quick script or doing some bash-fu. A very underrated skill in today's world.
> students should learn touch-typing
I agree, but I wonder how not knowing how to spell would affect that. The high school kids I work with are not great spellers (nor do they have good handwriting).
Speed is an important usability consideration that gets disproportionate focus among many developers because it’s one of the few they can directly address with their standard professional toolkit. I think it’s a combo of a Hammer/Nail thing, and developers loving the challenge of code performance golf. (Though I never loved stuff like that, which is in part why I’m not a developer anymore.)
Figuring out the best way to present logically complex content, controls and state to users that don’t wrangle complexity for a living is difficult and specialized work, and for most users, it’s much more important than snappy click response.
Both of these things obviously exist on a spectrum and are certainly not mutually exclusive. (ILY McMaster-Carr) But projects rarely have enough time for either complete working features, or great thoughtful usability, and if you’re weighing the relative importance of these two factors, consider your audience, their goals, and their technical savvy before assuming good performance can make up for a suboptimal user workflow.
Thanks for enduring my indulgent rant.
> Rarely in software does anyone ask for “fast.”
That's because it's understood that things should work as quickly as possible, and not be slow on purpose (generally). No one asks that the UI use modern language as opposed to Sanskrit or hieroglyphs, because it's understood.
What about centered text?
Finally, someone has thought about the importance of making things go faster!
Is the most pressing problem facing the world is that we are not doing enough things fast enough? Seems a bit off the mark, IMO.
Agree. One of my favorite tropes to product and leadership is that “performance is a feature”.
I got so tired of waiting ~600ms for GitHub pages to load (uncached) that I decided to build my own Git hosting service with Go and HTMX.
I know this is completely different scale, but compare: [1] https://github.com/git/git [2] https://gitpatch.com/gitpatch/git-demo
And there is no page cache. Sub-100ms is just a completely different experience.
Very nice. Also a plea. Don't animate the >. Or, don't wait for the animation to finish before showing the contents.
ah, interesting. It starts fetching tree items on mousedown (vs onclick) to load them faster, so > starts moving a bit too early.
> Fast is relative
I once used Notion at work and for personal note taking. I'd never felt it was "slow." Until later I moved my notes to Obsidian. Now when I have to use Notion at my job it feels sluggish.
Notion just seems to get worse and worse. I used to love it but now I find it infuriatingly slow.
Glad to hear Obsidian is better as I’ve been considering it as an alternative.
Obsidian's local first editing experience makes a huge difference to creativity and flow.
I've been working on making Obsidian real-time collaborative with the Relay [0] plugin. In combination with a few other plugins (and the core Bases plugin) you can build a pretty great Notion alternative.
I'm bullish on companies using Obsidian for knowledge management and AI driven workflows. It's pretty reasonable to build custom plugins for a specific vertical.
[0] https://relay.md
This page (consisting of 693 words) took a full second to load for me because it had to import multiple fonts from Google (which also constitute over 70% of the page's size).
Do you mean finish loading?
Google Webfont loader is (usually) non blocking when done right, but the text should appear fine before
The page loaded instantly for me
This is a great blog post. I have seen internal studies at software companies that demonstrate this, i.e. reducing UI latency encourages more software use by users. (Though a quick search suggests none are published.)
Yep. I often choose LLM apps not because of how great the model is, but how snappy the UI feels. Similarly I might choose the more lightweight models because they’re faster.
Fast is dead. The only software that keeps getting faster are emulators to run legacy, bloat-free code.
I'll take reliable over fast almost every time.
Trading software by its nature has to be fast: fast to display new information and fast to act on it per the user's intent.
> Asking an LLM to research for 6 minutes is already 10000x faster than asking for a report that used to take days.
Assuming, like, three days, 6 minutes is 720x faster. 10000x faster than 6 minutes is like a month and a half!
More like 300x if you count working hours. Although I've yet to see anything that would take a person a few days (assuming the task is worth spending a few days on) and that an LLM could do in six minutes, even with human assistance.
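The arithmetic in both comments is easy to check (the 8-hour workday is my assumption for the "working hours" figure):

```python
calendar_minutes = 3 * 24 * 60   # "like, three days" in wall-clock time
print(calendar_minutes / 6)      # 720.0 -> the "720x faster" figure

working_minutes = 3 * 8 * 60     # the same three days counted as 8-hour workdays
print(working_minutes / 6)       # 240.0 -> same ballpark as the ~300x estimate
```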
Google talked about this for years.
Yes. The same insight periodically appears. Also, often it's done with high contrast text, which is great.
That's why we all code with voice now, because it's faster, right? Right?
Would someone please forward this article to the folks that work on Jira?
True, but not fast. More fun than fast.
Fast reading does not just enumerate examples.
Fast reading does not straw-man.
Fun conveys opportunity and emotion: "changing behavior", "signals simplicity", "fun". Fun creates an experience, a mode, and stickiness. It's good for marketing, but a drag on operations.
Fast is principles with methods that just work. "Got it."
Fast has a time-to-value of now.
Fast is transformative when it changes a background process requiring all the infrastructure of tracking contingencies to something that can just be done. It changes system-2 labor into system-1 activity -- like text reply vs email folders, authority vs discussion, or take-out vs. cooking.
When writers figure out how to monetize fast - how to get recurrent paying users (with out-of-band payment) just from delivering value - then we'll no longer be dragged through anecdotes and hand-waving and all the salience-stretching manipulations that tax our attention.
Imagine an AI paid by time to absorb and accept the answer instead of by the token.
Fast is better than fun -- assuming it's all good, of course :)
I think that people generally underestimate what even small increases in the interaction time between human and machine cost. Interacting with sluggish software is exhausting, clicking a button and being left uncertain whether it did anything is tedious and software being fast is something you can feel.
Windows is the worst offender here; the entire desktop is sluggish even though there is no computational task which justifies those delays.
Apple software, especially lately, can be really bad for it too. Single-core perf is slightly better on my iPad than my MacBook Pro, and yet everything feels an order of magnitude slower. If I am impatiently tapping the space where I know a button will appear, waiting for an animation to finish, some aspect of software design has gone horribly awry.
iOS (iPhone, iPad) UI is typically smooth and fast though. If only car navigation and UI could be as responsive.
There's that wondering if the UI input was registered at all and the mental effort to suppress clicking again when you expect a delayed response.
I'll apply this to the next interfaces I'm going to build.
> Rarely in software does anyone ask for “fast.”
> But software that's fast changes behavior.
I wonder if the author stopped to consider why these opposing points make sense, instead of ignoring one to justify the other.
My opinion is that "fast" only becomes a boon when features are robust and reliable. If you prioritize going twice as "fast" over rooting out the problems, you get problems at twice the rate too.
"The only way to go fast, is to go well." ― Robert C. Martin
Software that goes fast changes human behavior. It seems you’re thinking it changes the software’s behavior. Not sure. Either that, or I don’t follow your comment at all.
I'm not really sure how to rephrase it, so I can try an example.
Let's say that the author has a machine that is a self-contained assembly line; it produces cans of soup. However, the machine has a problem - every few cans of soup, one can comes out sideways and breaks the machine temporarily, making them stop and unjam it.
The author suggests to double the speed of the machine without solving that problem, giving them their soup cans twice as fast, requiring that they unjam it twice as often as well.
I believe that (with situational exceptions) this is a bad approach, and I would address the problem causing the machine to get jammed before I doubled the speed of the machine.
That being said, this is a very simplistic view of the situation; in a real situation either of these solutions has a number of variables that may make it preferable over the other. My gripe with the piece is that the author suggests the "faster" approach is a good default that is "simple", "magical" and "fun". I believe it is shortsighted, causes compounding problems the more it is applied in sequence, and is only "magical" if you bury your head in the sand and tell yourself the problems are for someone else to figure out - which is exactly what the author handwaves away at the end, with a nebulous allusion to some future date when these tools, which we should accept because they are fast, will eventually be made good by some unknown person.
> Developers ship more often when code deploys in seconds (or milliseconds) instead of minutes.
I don't want my code deployed in seconds or milliseconds. I'm happy to wait even an hour for my deployment to happen, as long as I don't have to babysit it.
I want my code deployed safely, rolled out with some kind of sane plan (like staging -> canary -> 5% -> 20% -> 50% -> 100%), ideally waiting long enough at each stage of the plan to ensure the code is likely being executed with enough time for alerts to fire (even with feature flags, I want to make sure there's no weird side effects), and for a rollback to automatically occur if anything went wrong.
I then want to enable the feature I'm deploying via a feature flag, with a plan that looks similar to the deployment. I want the enablement of the feature flag, to the configured target, to be as fast as possible.
I want rollbacks to be fast, in case things go wrong.
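A sketch of that rollout shape (the stage names, bake logic, and the `alerts_firing` callback are all illustrative, not any real deploy tool's API):

```python
# Staged rollout sketch: advance stage by stage, roll back automatically
# if alerts fire while baking. Stage names are placeholders.
STAGES = ["staging", "canary", "5%", "20%", "50%", "100%"]

def roll_out(alerts_firing, stages=STAGES):
    """Advance through stages; roll back as soon as an alert fires.

    `alerts_firing(stage)` is a hypothetical callback that returns True
    if monitoring detected a problem while baking at `stage`.
    """
    completed = []
    for stage in stages:
        if alerts_firing(stage):
            # Automatic rollback: the deploy never reaches later stages.
            return {"status": "rolled_back", "at": stage, "completed": completed}
        completed.append(stage)
    return {"status": "deployed", "completed": completed}

print(roll_out(lambda s: False))          # healthy deploy reaches 100%
print(roll_out(lambda s: s == "canary"))  # failure at canary rolls back
```

The same shape then applies a second time to the feature flag itself, which is the part that should be as fast as possible.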
Another good example is UI interactions. Adding short animations to actions makes the UI slower, but can considerably improve the experience, by making it more obvious that the action occurred and what it did.
So, no, fast isn't always better. Fast is better when the experience is directly improved by making it fast, and you should be able to back that up with data.
Slow is smooth, smooth is fast.
> Speed conquers all in martial arts.
Beautiful software is fast! Love the blog.
This is such an important principle to me that I've spent a lot of effort developing tooling and mental models to support it. Biggest catalyst? Being on-call and being woken up at 3am when you're still waking up... in that state, you really don't want things to go slowly. You just want to fix the damn thing and get back to sleep.
For example, looking up command flags within man pages is slooooow and becomes agonizingly frustrating when you're waking up and people are waiting for you so that they can also go back to sleep. But if you've spent the time to learn those flags beforehand, you'll be able to get back to sleep sooner.
"Fast signals simplicity"
Bookmarking this one
I like fast, but more and more I get slow web applications where every click comes with a delay.
Linear vs JIRA described in 1 word
One of these days I’m going to get around to writing a little bash script or something that will let me take a plain-ish text file and upload it into Jira via the API.
I should be able to create a Jira ticket in however long it takes me to type the acceptance criteria plus a second or two. Instead I’ve got slow loading pages, I’ve got spinners, I’ve got dropdowns populating asynchronously that steal focus from what I’m typing, I’ve got whatever I was typing then triggering god knows what shortcuts causing untold chaos.
For a system that is—at least how we use it at my job—a glorified todo list, it is infuriating. If I’m even remotely busy lately I just add “raise a ticket for x” to my actual todo list and do it some other time instead.
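For what it's worth, the core of that script is small. A minimal sketch (the project key, issue type, and field names are assumptions against Jira's REST v2 create-issue payload shape; sketched in Python rather than bash so the payload stays valid JSON):

```python
import json

def ticket_payload(text, project="PROJ"):
    """Build a Jira create-issue payload from a plain-ish text file:
    the first line becomes the summary, the rest the description /
    acceptance criteria. `PROJ` and `Task` are placeholders."""
    summary, _, body = text.partition("\n")
    return json.dumps({
        "fields": {
            "project": {"key": project},
            "summary": summary.strip(),
            "description": body.strip(),
            "issuetype": {"name": "Task"},
        }
    })

print(ticket_payload("Fix focus-stealing dropdown\nGiven the create form..."))
```

POSTing that JSON to `$JIRA_URL/rest/api/2/issue` with basic auth (e.g. via curl) would do the actual creation; no spinners, no focus-stealing dropdowns.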
Oh yeah, back in the late 80s we (for some finite and not so big values of "we") were counting MOS6502/6510 cycles to catch the electron beam on a display and turn on some nice/nasty visual effects.
Tell me "fast" again!
Linus Torvalds said exactly that in a talk about git years ago. It's crazy to think back how people used to use version control before git. Git totally changed how you can work by being fast.
> Rarely in software does anyone ask for “fast.”
As some working on embedded audio DSP code I just had to laugh a little.
Yes, there is a ton of code that has a strict deadline. For audio that may be determined by your buffer size — don't write your samples to that buffer fast enough and you will hear it in potentially destructively loud fashion.
This changes the equation, since faster code now just means you are able to do more within that timeframe on the same hardware. Or you could do the same on cheaper hardware. Either way, it matters.
Similar things apply to shader coding, game engines, control code for electromechanical systems (there, missing the deadline can be even worse).
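The deadline arithmetic above is worth making concrete (the buffer size and sample rate here are illustrative, not from the comment):

```python
def buffer_deadline_ms(buffer_size, sample_rate):
    """Time available to fill one audio buffer before the hardware
    underruns: samples in the buffer divided by samples per second."""
    return buffer_size / sample_rate * 1000

# A 256-sample buffer at 48 kHz leaves ~5.3 ms per callback; every
# microsecond your DSP code saves is headroom for more processing.
print(round(buffer_deadline_ms(256, 48_000), 1))  # 5.3
```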
I feel like this should have some kind of "promotional" or "ad" label. I agree wholeheartedly with the words here, but I also note that the author is selling the fast developer tools she laments the dearth of: https://www.catherinejue.com/kernel
Again, no ill will intended at all, but I think it straddles the promotional angle here a bit and maybe people weren't aware
The tool in question is also a fairly unethical scraping and botting tool, advertising defrauding services to bypass captchas and scrape websites against the owners' wishes.
Facebook is fast?
That article loses its credibility because of this; my thoughts too. The Facebook and Instagram websites are among the worst offenders when it comes to "time-to-content" or whatever metric the cool kids use these days. Maybe the apps are faster, but I'd rather avoid spyware on my pocket computers. Probably the author is running a $3k+ laptop and renews it every year?
"Fast" is just another optimisation goal in a spectrum. First make it correct, then make it good, then make it fast (is a reasonable rubric).
My sites, in order of increasing complexity. Are they fast?
Here is some extensive advice for making complex websites load extremely quickly
https://community.qbix.com/t/qbix-websites-loading-quickly/2...
Here is also how to speed up APIs:
https://community.qbix.com/t/building-efficient-apis-with-qb...
One of the biggest things our framework does as opposed to React, Angular, Vue etc. is we lazyload all components as you need them. No need for tree-shaking or bundling files. Just render (static, cached) HTML and CSS, then start to activate JS on top of it. Also helps massively with time to first contentful paint.
https://community.qbix.com/t/designing-tools-in-qbix-platfor...
All this evolved from 2021 when I gave this talk:
Conversely, we have a whole generation of entry-level developers who think 250ms is "fast," when doing on-device processing work on computers that have dozens of cores.
> But software that's fast changes behavior.
(Throw tomatoes now but) Torvalds said the same thing about Git in his Google talk.
small (font)
this would align well with the concept of Permacomputing
> Instant settle felt surprising in a world where bank transfers usually take days.
Yeah, that's not "a world" it's just the USA. Parts of the world - EU, UK etc have already moved on from that. Don't assume that the USA is leading edge in all things.
> Yeah, that's not "a world" it's just the USA.
"In a world" is a figure of speech which acknowledges the non-universality of the statement being made. And no it is not "just the USA". Canada and Mexico are similarly slow to adopt real-time payments.
It is wild to tell someone "don't assume" when your entire comment relies on your own incorrect assumption about what someone meant.
There is better commentary on the same basic point regarding SEPA / Faster Payments / FedNow and how the US lags world-leading practice in the other thread here: https://news.ycombinator.com/item?id=44738579
It's a bit more substantial, and less complaints about the semantics of the wording.
You’re not wrong. The US banks all suck. I’m willing to bet that every single one of them suck, though I’ve only tried a handful.
Is it just me or is the premise of this the opposite of your work life as well? I have worked in the space of "fast" primarily and that is the main objective. Fast, iterate ... Don't be like "IT" (the slow team nobody can fire who never finishes anything).
Of course fast has downsides but it's interesting this pitch is here. Must have occurred many times in the past.
"Fast" was often labeled "tactical" (as opposed to "strategic" in institutions). At the time I remember thinking a lot about how delays plus uncertainty meant death (nothing gets done, or worse). Even though Fast is often at the cost of noise and error, there is some principle that it can still improve things if not "too far".
Anyone know deeper writings on this topic?
But the future is vibe coding and shipping fast even if we don’t understand the code the LLM wrote!! The only speed metric worth caring about is time to market! Just fucking ship! /s
> Rarely in software does anyone ask for “fast.”
Are you kidding me? My product owner and management ask me all the time to implement features "fast".
Not sure if this is sardonic obstinacy... but taking it at face value - that's not what the statement is about.
I disagree with the statement too, as people definitely ask for UX / products to be "snappy", but this isn't about speed of development.
I remember the time they were cracking down because I had entered 90%+ of the tickets into the ticket system (the product manager didn't write tickets) and told me that "every ticket has to explain why it is good for the end user".
I put it in a ticket to speed up the 40 minutes build and was asked "How does this benefit the end user?" and I said "The end user would have had the product six months ago if the build was faster."
These days metrics are so ubiquitous that many internal back-end systems have SLAs for tail latencies as well.
Yeah, this was an attempt at humor. But it is quite easy to misunderstand the title.
Ew.
Genuinely hard to read this and think little more than, "oh look, another justification for low quality software."
I think you misunderstood the use of "fast" in the article? They mean that the software should run fast, not be produced fast necessarily. In my experience software that truly runs fast is usually much higher quality.