> Two hours of my life gone, just to pick up where I left off.
If I had only wasted two hours every time I had to use npm for some reason I'd be significantly ahead of where I am now.
> time to run it after not touching it for 4 years
> Two hours of my life gone...
Two hours of work after 4 years sounds ... perfectly acceptable?
And it would have run perfectly right away if the Node version had been specified, so a good lesson learned, too
This feels like making a mountain out of a molehill
I can still open my decade-old Java projects, run a build with modern Maven/JDK and get working code, in a few minutes. Two hours of dancing with a tambourine doesn't feel acceptable to me.
Maven, maybe, but Gradle absolutely not. If you don't have the exact version of Gradle that you used before, you're in for the same kind of misery documented above, with the same end state: just stick to the old version and deal with the upgrade later.
Well, I'm not talking about Gradle, right? Sometimes the conservative choice is what gets the job done.
Right, I'm just clarifying for others who may not know the difference that Node doesn't have a monopoly on instability.
There are a very small number of projects that specifically make it their goal to be backwards-compatible effectively indefinitely, and Maven is one of those. It's part of what people who hate it hate about it, but for others it's the main selling point.
C# devs can open decade-plus-old solutions without issues. Maybe this is just "normal" for the JavaScript ecosystem, but there absolutely exist other ecosystems which don't waste your time in this way.
As it happens I've recently upgraded a 2 year old project with both node and C#
It was much the same in both. If you're happy using outdated and unsupported components with security issues AND you can get hold of the right version of the dev tools and plugins AND your hosting environment still supports the old platform, you can maintain the old version with minimal change. But should any professional developer do this?
Not true for the entire C# ecosystem. I tried rebooting a Xamarin project I coded a couple of years ago. Had to spend weeks upgrading it because Microsoft decided to discontinue Xamarin and force everyone to use .NET MAUI
This has to do with specific framework and does not translate to the overall experience (for example targeting iOS is pain because Apple deprecates versions quickly, so downstream dependencies have to match this too).
You can open and build a back-end application that targets e.g. netcoreapp2.1 (a 6-year-old target) just fine. It might require you to install an archived SDK for the build to succeed (which you can still download; it will complain that it is EOL, though), but it's an otherwise simple procedure.
For library code it's even easier - if you have netstandard2.0 target, it will work anywhere from .NET Framework 4.6.1 to the latest version without requiring any maintenance effort whatsoever.
Lol, I left C# because I couldn't solve this issue, and in Node.js it's particularly easy: just keep a .nvmrc file and a dependency lockfile.
> C# devs can open decade+ old solutions without issues
For some definition of "without issues"...
I wish I lived in the world you described but trying to onboard a new dev onto an existing (edit: ancient) C# project at my job is frequently a multi-day endeavor.
Well, the "solution" ended up as "I gave up and just installed an old Node version and called it a day". So those 2 hours weren't even enough.
I've been using Jekyll/Ruby since 2014 for my website, with a few custom plugins I wrote myself. And I've never really needed to do anything like this. It "just works".
My Go and C programs are the same: "just works". I have some that are close to a decade old.
Good for you, my experience with Jekyll is closer to OP's experience with Node. I have a big website that I built in 2014, with tons of custom plugins, that is now stuck on Jekyll 2.x and Ruby 2.x, and has a ton of hidden C++ dependencies. The way I build it now is using a Dockerfile with Ubuntu 18.04. I probably could update it given enough effort, but I was rather thinking of rewriting it in Astro.js or Next.js.
If you're looking for a stable target you should not even consider Next.
This is the issue I have with the "build vs buy (or import)" aspect of today's programming.
There are countless gems, libraries or packages out there that make your life easier and development so much faster.
But software (in my experience) always lives longer than you expect it to, so you need to be sure that your dependencies will be maintained for that lifetime (or have enough time to do the maintenance or plug in the replacements yourself).
I dug out a small Rust project from 2016 and, with edition = 2018, I got it running again in under 30 minutes; I was kinda surprised. 8 years is ancient in terms of Rust. I have had more problems with certain other crates. But yeah, C/C++ usually don't really compare: 5 years is nothing, it should just work. For Go the big breaking moment was modules. All my pre-2016-ish code would need some work.
Not sure if I'd call out Jekyll as a paragon of stability. The last time I touched it, I made sure to write up detailed notes. In fairness, it's the only time I interact w/ Ruby.
Those mostly seem the standard Ruby/Jekyll/GitHub Pages setup instructions?
I don't love how bundler works by the way; I think it should/could be a lot better in many different ways. Same for Jekyll. But once it works, it generally keeps working.
I don't think Jekyll (or Ruby) are a paragon of stability. I'm sure some stuff has broken over the years. It just seems to break a lot less than the JS/Node ecosystems.
Every single time I clone anything Go, I first spend a few hours sorting out dependency issues.
This shocks me, what sort of issues do you hit?
+1 on this. I've been using Go almost exclusively for the last 5 ish years partly because this sort of thing never happens.
You lucked into the period when they solved the issues. If you need to work with older projects and can't easily convert them, you're going to have a bad time.
It's also two hours that would have been completely avoided if the author were familiar enough with Node to know to pin the version and not try to install 4 years of updates in one shot.
Most people here saying that X, Y, or Z ecosystem "compiles and runs" fine after 4 years are talking about the time it takes to resume an old project in a language they're very familiar with, running the same dependency versions, not the time it takes to version-bump a project in a language you don't know well without first having it running on the old versions.
I can open my 4-year-old Node projects and run them just fine, but that's because I use the tools that the ecosystem provides for ensuring that I can do so (nvm, .nvmrc, engines field in package.json).
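Concretely, that pinning is just two tiny additions checked into the repo (the versions here are illustrative, not from the article). A `.nvmrc` containing nothing but the runtime version:

```
14.17.0
```

And an `engines` field in `package.json`, which makes npm warn (or fail outright, with `engine-strict=true` in `.npmrc`) when run under a mismatched runtime:

```json
{
  "engines": {
    "node": ">=14 <15"
  }
}
```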
Yep, this could have been sorted by one line in a .tool-versions file and using mise or asdf.
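For reference, that's a one-line file; asdf and mise both read it and switch runtimes when you enter the project directory (the version is illustrative):

```
nodejs 14.17.0
```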
Sounds like you are way too used to the javascript ecosystem if you think getting an old project to build should take hours...
What ecosystem are you comparing to?
Any C/C++ project with even mild complexity has a good chance of being extremely difficult to build due to either missing libraries that have to be installed manually, system incompatibilities, or compiler issues.
Python has like 28 competing package managers and install options, half of which are deprecated or incompatible. I can't even run `pip install` at all anymore on Debian.
Even Rust, which is usually great and has modern packaging and built-in dependency management, often has issues building old projects due to breaking changes to the compiler.
All this is to try to say that I don't think this is some problem unique to JS at all - but rather a side effect of complex interconnected systems that change often.
A big reason Docker and containers in general became so popular was because it makes this problem a lot less difficult by bundling much of the environment into the container, and Docker is very much agnostic to the language and ecosystem running inside it.
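As a sketch of that approach (base image and commands are illustrative, not taken from the article): a Dockerfile freezes both the runtime and the OS libraries that native addons compile against, so the build keeps working after the host system moves on.

```dockerfile
# Pin the exact runtime the project was last built against
FROM node:14-buster
WORKDIR /app

# Install the exact dependency versions recorded in the lockfile
COPY package.json package-lock.json ./
RUN npm ci

COPY . .
CMD ["npm", "run", "build"]
```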
Java has a great ecosystem. It’s well thought out and I can compile and run 10 year old projects no problem. In fact, I wish everyone had just copied Java’s model instead of inventing their own worse model.
I love Python but it has a terrible package ecosystem with mediocre tooling that has only gotten worse with time.
JavaScript has gotten better but it seems they are just re-learning things that were long figured out.
When I see new package managers, I just see a list of problems that they forgot to account for. Which I find strange when there have been many package managers that you can learn from. Why are you re-inventing the wheel?
In JetBrains's Developer Ecosystem 2023 survey, 50% of developers were still regularly working in Java 8 [0]—the exact kind of "stick with the old version of the runtime" solution described in TFA.
Java 8 is 10 years old. If you had a project with a Java version that was recent 4 years ago (11–14), you could run it without any problems or changes.
Because they made the design choice to stop making large breaking changes to the language and tooling. Java 8 to 9 wasn't easier than Java 8 to 17 is, it's getting off of Java 8 that is hard because they made the choice to break so much in 9.
Node does not promise indefinite backwards compatibility, which is a design choice that they've made that allows them to shed old baggage, the same way that the Java developers chose to shed baggage in 8->9. Neither choice is inherently better, but you do have to understand which choice a language's designers were making during the time window in question when you go to run it later.
"Java has a great ecosystem. It’s well thought out and I can compile and run 10 year old projects no problem."
We just had to workaround breaking changes in a patch version update of Spring Boot. Maybe it was true in 2005, but certainly not the case today. I know of products that are stuck in Java 1.8 and not because they are too lazy to upgrade.
I've been involved in bringing real old Java 1.4 and 6 and whatnot up to 17 and from classic app servers into cloud, can take a bit of work but it's pretty straightforward, mostly switching out deprecated methods to their successors and copying over boilerplate config from similar applications.
I am not sure you should put ant or maven as shining examples here, but I am kinda warming up to Gradle, at least without Groovy being involved.
What do you get from Gradle that Maven cannot offer?
JavaScript is a horrible language because it is basically missing a standard library, so you need external dependencies even for the most basic things that are already present in other languages. Python has a very rich standard library. You can do a lot with libc, and if you had a C++ Qt project, it would provide you with basically everything you could ever need.
> JavaScript is a horrible language because it is basically missing a standard library, so you need external dependencies even for the most basic things that are already present in other languages
That's not the only reason. :)
Horrible syntax full of inconsistencies, bolted on type system with TypeScript helps but will always be bolted on, quirks everywhere, as if `null` was not bad enough they also have `undefined`, I can go on.
I simply avoid it for anything but small enhancements scripts on otherwise static HTML pages.
You can't use 'pip install' on Debian because they dropped the unversioned command during the transition from python2 to python3. You should use 'pip3 install', which is provided by Debian's python3-pip package.
One can argue that Debian should revisit this decision, but you shouldn't install packages into the system Python installation for project work anyway. Always use a virtual environment.
No, that does not work either. You get an error like this:

  » pip3 install supervisor
  error: externally-managed-environment

  × This environment is externally managed
  ╰─> To install Python packages system-wide, try apt install
      python3-xyz, where xyz is the package you are trying to install.
As far as I can understand, they did this on purpose to dissuade users from installing packages globally to avoid conflicts with other Python environments.
Anyway, I'm not trying to argue about if that decision is right or not - I just wanted to use it as an example for my case that the JS ecosystem isn't alone and may even be far from the worst when it comes to this kind of issue.
I understand that; you can use `--break-system-packages` or change the configuration: `python3 -m pip config set global.break-system-packages true`.
Python is different here because in many Linux distributions there are tools that rely on the system Python. Python, unlike Node, is not limited (in practice) to web applications; that's why you have to be more careful. So while I understand you are using this as an example, I don't feel that your comparison is apples to apples.
>Python unlike node is not limited (in practice) to web applications. that's why you have to be more careful.
They may or may not be running Node.js specifically, but I believe that many Linux distributions, as well as Windows, include JavaScript code in core applications. I don't see this as particularly different, except that they might choose to assume a single standard system Python that is able to be modified by standard Python development, whereas I would rarely expect that to be the case with however each component chooses to execute JavaScript.
Apps that rely on an OS-provided webview, and Electron apps, are a totally different situation. This is exactly what I said. And no, they don't use any standard system Node.js installation the way tools use the system Python. They are different, as I said, so this is still an apples-to-oranges comparison.
>Apps that rely on OS provided Webview and electron apps are totally different situation.
No, they're not. I'm talking about core apps and services that are essential to a functional operating system. This is exactly the same situation. The difference is choices made by the OS and language ecosystem about how to manage dependencies in various use-cases. It is an apples to oranges comparison because of those decisions and not because of the language.
My slightly heretical opinion is that Debian would have been better off removing system pip entirely. The system python is for the system.
My not so heretical opinion is that pip should behave like npm by default and work under a local environment subdirectory, just like "npm install" already creates a "node_modules" directory to put all files in, without the user needing to specify how, where, and which env tool to use.
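Until that happens, the closest Python equivalent is an explicit virtual environment; a minimal sketch, assuming python3 with the venv module is installed (the package name just echoes the error message earlier in the thread):

```shell
# Create a project-local environment (roughly Python's answer to node_modules)
python3 -m venv .venv

# This pip writes only into .venv, so no --break-system-packages is needed
./.venv/bin/pip install supervisor

# Run the project with the environment's own interpreter
./.venv/bin/python -c "import supervisor"
```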
Libraries in the project fixes this whole issue for C/C++. As for compiler issues, just build it with the same compiler. It really shouldn't take more than 20 minutes of setup.
> Libraries in the project fixes this whole issue for C/C++.
Yeah, make sure no-one can ever fix your security vulnerabilities.
> As for compiler issues, just run it with the same compiler.
And when the same compiler doesn't exist for your new machine?
Freezing everything makes things easier in the short term, but much harder in the long term.
This is not even JS-specific. All of Python / Ruby / other changing runtimes will require some upkeep. Even C recently needs some attention, because Clang promoted some long-standing warnings to errors by default.
Even some of my Rust projects end up in this state, where updating one library ends up with needing to update interacting libraries.
Other ecosystems usually do not have problems to the extent the author had.
I am deep in the Python ecosystem, and I love Python, but I have to admit that Python has the same issue. Reviving a medium-size project after 4 or more years usually means I have to catch up on a lot of new surprising deprecations. That's not because there's anything wrong with Python; it's more of an economic issue: the authors of active libraries have little or no economic incentive to support old, deprecated versions, so they just don't. That's life in the modern world. It is a deep problem that should theoretically affect every large software ecosystem because very few library authors can predict the future with great accuracy, and very few open source library authors have any significant incentive to support old ideas.
> That's not because there's anything wrong with Python
It's absolutely because there's something wrong with Python, the package management, and also the type safety. JVM languages haven't had these problems for 20+ years.
> That's life in the modern world. It is a deep problem that should theoretically affect every large software ecosystem because very few library authors can predict the future with great accuracy, and very few open source library authors have any significant incentive to support old ideas.
I disagree. This is an easy problem to avoid with minimal due diligence, people just choose convenience and make unnecessary tradeoffs.
* Use the standard library (ironically not much of one available for Node projects). It will be built with better backwards compatibility almost every time. What deprecations do occur will likely be VERY WELL documented, with much quicker adaptations.
* Limit third-party dependencies. Do you really need an ORM for your app's 40 SQL queries? How long would it take you to scaffold it with generative AI and then make it production-worthy without the ORM? 1 hour? 5 hours? 20 hours?
* Pick technologies with better track records. Maybe don't use beta software like SwiftData for your iOS app. Maybe choose Go for your API even though it'll take a little longer to build.
> How long would it take you to scaffold it with GenerativeAI then make it production-worthy without the ORM?
Having a machine do codegen to map your queries to objects is still an ORM, except now it's nondeterministic and not updateable.
(mind you, I come from C# where you simply use LINQ+EF without worry, or occasionally Dapper for smaller cases)
And this is how you end up rewriting the world: spending more time re-implementing dozens of existing libraries to avoid adding them as dependencies, and less time working on the problem you're actually trying to solve, because you're fixing the same dozen bugs that the first person already went through the trouble of fixing for you. Had you simply used their library, you would have inherited everything they had already learned. Often the problem space is deeper than you could have known before getting into the weeds, and hopefully you don't get bitten by sunk cost, and instead do yourself a favor and just use a library rather than continuing to solve problems unrelated to what you set out to do.
There's a balance to be struck between LeftPad scenarios and "Now there are 37 competing libraries".
Exactly. The right thing to do is study each dependency and decide whether the reward of having the problem solved quickly is worth the many risks of adding dependencies.
I'll acknowledge here that there seems to be a significant difference between Python projects and Node projects: in my experience, a small Python project has a handful of dependencies and maybe a dozen sub-dependencies, while a small Node project usually has a handful of dependencies and a few hundred sub-dependencies. That's where Python's "batteries included" motto does seem to help.
Maybe they can try to get the node version into the package-lock tomorrow? This seems like an opportunity to improve the ecosystem, rather than a biting critique.
Or, instead of responding to sunk costs by getting sunk deeper into the muck, just cut your losses, ditch Node and its proprietary/non-standard APIs and unstable featureset, and use a standard runtime.
The author of the blog post is trying to run a static site generator. A static site generator doesn't need to do anything Node-specific that can't be done with the web runtime itself (which they're already going to use to verify the correctness of the SSG output). So use that runtime, and tools that target it, not Node.
> Two hours of work after 4 years sounds ... perfectly acceptable?
Does it, though? Node wasn't exactly new 4 years ago, and plenty of other languages would offer a better experience for even older code -- Java, C, C++ to name a few.
> Java
50% of Java developers are still regularly working in Java 8 [0], which is the same solution that the author could have arrived at immediately—when starting up an old project after 4 years, use the same version you ran last time, don't try to update before you even have the thing running.
> C, C++
Not my experience, but maybe it depends on your operating system and distro? Sorting through the C libs and versions that you need to install on your system to build a new project can easily take a few hours.
Define "better experience."
1.5 hours to get running again?
1?
In exchange for needing to run C? How many hours would it take to build a Node app equivalent in C, I wonder.
0 would be fine. I'd take 0. This could all have been avoided if the interpreter version had been recorded by default somewhere. That's all this needed.
I agree. It is not weird that old code can break when you run it through a new env / VM / framework / compiler.
Locking the env version is important.
Double points for using experimental / proof-of-concept technology like Gatsby or Next.js. They are expected to burn and fail
Which is fine until your host doesn’t support older versions of node.
I just got burned by an old JS (Vue 2) project. I ended up rewriting it using good old SSR in Django, with htmx and Alpine where necessary. Now it'll run until the end of time. It doesn't even have a build step.
I sympathize with you, I had one too. Luckily it was small.
It seems like the luck of the draw. My old React projects (old as in 2018) still work great with class components. I guess the Vue guy did say he would be more revolutionary when he launched it.
I would love the author to test an old Java/Maven project. Node is a paradise compared to that stack.
Why do you think so? I have 10+ years old Java/Maven projects that build and run fine.
The only problems I've run into are related to certain package repos that went offline. So you have to track down suitable versions of certain packages on other repos.
OTOH with Node I always find myself in dependency hell with dealing with old projects.
Also, Gatsby has dependencies that aren’t even Node. I have had it break too.
it took two hours just to get the project running as it was 4 years ago. wait until you see how much time it takes to upgrade everything to new versions.
and dare i say this is the lucky case. i had problems reactivating an older project because some dependencies were not version-locked and newer versions of those were incompatible with other pinned ones. then it took forever to figure out the right versions. and i think i even had situations where the old version wasn't available anymore (perhaps because of a linux dependency that was no longer available on my newer linux system)
It took 2 hours to realize that a project built for a specific version of Node should be run with that version of Node. And even that was self-inflicted, since the author didn't vet dependencies and used something built as a C++ Node addon instead of actual JS (my bet is it was to have a slightly easier time writing CSS).
> Two hours of work after 4 years sounds ... perfectly acceptable?
Perfectly acceptable? Perfectly? Really? I have 10-year-old C and Go projects that build and run fine as if nothing has changed. I can upgrade the dependencies if I want to, but that's on me. The projects themselves have no problem building and running.
Did you read it? The author did not actually resolve the issue, only figured out that it should build with older Node version.
You’re absolutely right. My rational brain agrees and chalks it up to poor project management. However… emotions run high when you have zero idea why something isn’t working and the process of elimination is pretty taxing. So the point for me is venting / maybe someone will read this and remember to write their node version down!
node-sass is to blame for like 95% of these node-gyp issues in my experience. It's not that much grief to deal with, but it's hard to grasp how it was allowed to hang around so terribly for so long.
The worst part isn't just that it's nearly impossible to run/update an outdated JS project, but that this process will repeat itself ad infinitum.
On the flip side, anything that uses vanilla JS without a build will most likely run just fine, probably till the end of human civilization.
I truly believe some flavour of "Javascript Classic" (some future state of JS before some big shift in syntax/mass migration to something else), C and x86 instructions will follow humanity for the rest of time. There will be javascript somewhere aboard the interstellar spaceships of the future, and we will still complain about it.
I think it was 'A Deepness In The Sky' that posited so many layers of legacy underlying the starship control systems of the era that one of the most crucial positions on a ship was that of 'Programmer-Archeologist.'
Well, even Fortran is still with us, in some LAPACK code in NumPy and in a lot of the stuff behind SciPy, so it's a lot closer than most people imagine.
Basically, a lot of AI depends on a bunch of absurdly optimized numeric libraries written in Fortran.
Fortran is well on the way to becoming a centenarian programming language, at nearly 70 years of age.
My grandchildren will live to see Vernor Vinge's programmer-archeologists troubleshooting PHP issues on the Wordpress install responsible for life-support around Alpha Centauri.
I sometimes think about one of the Star Trek episodes where the ship was getting attacked by a "SQL injection", and I think that's pretty realistic
Yeah, starting to believe the hacking scene in the Matrix where the machine city was still running on IPv4 wasn't a blunder but foresight.
There will also be someone playing Tetris, Doom and Final Fantasy VI on their neural interface, long after all modern games have been lost to time (and DRM).
JavaScript will be killed off by WebAssembly.
Zombie JavaScript will be reduced to being glue code and then not even that.
JavaScript has been about to be killed off by WebAssembly for about 5 years now
The difference lately is the number of tools that are now in place for WebAssembly development and the new extensions to WebAssembly (WasmGC, Memory64, etc.).
Despite 28 years of effort at optimization, JavaScript is outperformed by WebAssembly. There's not much coming back from that:
https://jordaneldredge.com/blog/speeding-up-winamps-music-vi...
https://www.amazon.science/blog/how-prime-video-updates-its-...
What if your project is old enough to predate the modern "just use vanilla js, it's fine"? :tableflip:
j/k - I'm slowly removing all the Zepto code I have and it's usually a relatively quick search&replace.
YMMV but I had a 4 year old project whose only dependencies were socketio and express and it booted right up. So stick to stable, mature projects and you're likely to be fine.
Just watch out because socketio must be version matched for client-server or you will get the most annoying errors and state inconsistencies in the world. That's a scary production update let me tell you. Version 2.3.0 still scares me to this day after trying to upgrade that without production downtime.
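In other words, pin the exact same version on both ends rather than a caret range; a hedged sketch of the relevant `package.json` fragment (versions illustrative):

```json
{
  "dependencies": {
    "socket.io": "2.3.0",
    "socket.io-client": "2.3.0"
  }
}
```

With `^2.3.0` instead, a fresh install can quietly pull a newer client than the server you have deployed.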
> it's nearly impossible to run/update an outdated JS project
You corrected yourself, but it's worth emphasizing here: a _NodeJS_ project, you mean.
Unless you're using non-standard APIs, stuff written to run in the browser generally keeps working just as well as it did before, no matter whether it was written 2 years ago or 10.
Or until Google decides to change things to be more standards compliant, regardless of the collateral damage
This will always be an issue for the node community - it’s endemic to the JavaScript shipping / speed culture and the package management philosophy.
Go is much, much better on these terms, although not perfect.
I’d venture a guess that Perl 5 is outstanding here, although it’s been a few years since I tried to run an old Perl project. CPAN was dog slow, but other than that, everything worked first try.
I’d also bet Tcl is nearly perfect on the ‘try this 10 year old repo’ test
I've had a fair amount of trouble with Perl/cpan simply because of the sheer number of XS (compiled C extension) modules in the ecosystem. For even a medium sized perl project that e.g. talks to databases or whatnot, building it after a long time requires you to spend tedious hours getting the right development headers/libraries for the compiled components, fussing with compiler flags, dealing with C ABI symbols that were deprecated in the interim, etc.
To be fair, Python and Ruby also have this problem (for newer Pythons, popular extension modules at recent versions are more likely to Just Work due to wheels, but if you're building old code for the first time in 3+ years, all the old problems come back with a vengeance). It's more of a "scripting language that got popular enough that ordinary projects have a deep tree of transitives, many of which are compiled on-site" issue than a Perl specific problem.
Clojure too, by all accounts. I'd say Common Lisp but they're in the weird position of code itself being rampantly portable across time but the ecosystem around it being astonishingly immature.
Things have improved a lot with the introduction of Quicklisp, but I'd have to agree when compared to others.
CL is still one of the nicest languages there is, and the only language that skirts the line between being some combination of dynamic and interpreted yet typed and compiled.
It is showing its age though, particularly around the edges like what you're saying.
CPAN.pm is not the fastest, no, though it generally spends most of its time running each distribution's tests before installing it, which while it does have a certain "start the install and go for lunch" to it is an excellent canary for if something's changed underneath you *before* you end up having to go spelunking your own code.
App::cpanminus (cpanm) is noticeably lighter, App::cpm (cpm) does parallel builds and skips tests by default.
An approach I've become quite fond of is using cpm to install fast into the local::lib I'm actually going to use, then creating a scratch setup in /home/tmp or similar and running cpanm in that under tmux/abduco/etc. to do a second install that *does* run the tests so I have those results to refer to later but don't have to wait for them right now.
(if I ever write a cpan client of my own, it's going to have a mode where it does a cpm-like install process and then backgrounds a test running process that logs somewhere well known so this approach becomes a single command, but I keep getting distracted by other projects ;)
Go's minimum version selection is the way, and I don't understand why other ecosystems haven't adopted it. You're able to compile an old project with all the library dependencies it had at the time it was released. It might have security issues, but at least you start with a version that works and can then go about upgrading it.
It also helps that if some library dependency generated Go code using a tool, the Go source code is checked in and you don’t have to run their tool.
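For readers outside Go: the recorded versions live in `go.mod` (with content hashes in `go.sum`) in the repo itself, and minimum version selection resolves to exactly these recorded minimums rather than "latest compatible", so an old checkout builds the same dependency graph years later. A sketch (module path and versions are illustrative):

```
// go.mod
module example.com/oldproject

go 1.16

require github.com/gorilla/mux v1.8.0
```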
Getting the exact dependencies it had at release is a solved problem in Node and most other languages with lock files too.
It's just no guarantee that those old versions work on the new system, or with the outside world as it exists by time of installation - which can be as true for Go as any other language. If the XYZ service API client still gets you version 1.2.37, that's not actually any help if 1.2.37 calls endpoints that the XYZ service has removed. Or a cgo package that binds to a version of OpenSSL that is no longer installed on your system, etc.
Some time ago, I wanted to update Arch, on a server running some python project I had inherited. Long story short, it relied on something that relied on something that etc., and then it turned out certain components that were needed for the upgrade process had been taken offline. Now the system can’t be changed, unless there’s significant work done to the code, and that’s too expensive. It runs on request in a container now, while it lasts.
back in the day you were supposed to check in your compiler into version control (not the lockfile, the whole distribution).
I used to think that people emailing screenshots of corporate dashboards were idiots. I now think that's actually genius - a frozen in time view which you can't regenerate but will be available until the end of time if you need it. (Hello, Exchange admins!)
This is why I say it's a cultural problem, not a technical problem. In Go land, changing API calls in minor versions is pretty much a sin. At least it's something you'd do .. carefully, probably with apologies. In node, it's extremely routine to re-pin to newer modules without worry.
My hot take is that lock files and nested dependencies induce fragility. If packages were required to work with a wide range of dependencies, then that would force the ecosystem to build packages in a more robust way. Basically I think the dependency trees built with modern package managers in a sense over-constrain the environment, making it all sorts of difficult to work with.
On the other hand, the other extreme induces stuff like autoconf which is not that great either. Trying to have your code be compatible with absolutely everything is probably not good, although arguably platforms these days are generally much more stable and consistent than they were in the heydays of autoconf.
I truly think it's just because the engineers that started working with node were ... young. They wanted to rapidly iterate, and so crufty old habits like this weren't what they wanted or felt they needed.
What's been interesting is watching these devs age 10 years, and still mostly decide it's better to start new frameworks rather than treat legacy code as an asset. That feels to me like a generational shift. And I'm not shaking my cane and saying they're wrong -- a modern LLM can parse an API document and get you 95% of the way to your goal most of the time pretty quickly -- but I propose it's truly a cultural difference, and I suspect it won't wash out as people age, just create different benefits and costs.
You're talking about what's wrong with the NPM ecosystem, not JS.
Previously: You wouldn't conflate Windows development with "C" (and completely discount UNIX along the way) just because of Win32. <https://news.ycombinator.com/item?id=41899671>
Yeah I'd expect 20yo Perl5 stuff to work without issues.
A few weeks ago I was experimenting with a sound generation dsl/runtime called Csound and even most 30yo sources were working as long as they didn't use some obsolete UI.
It’s the same with R. The only thing preventing many ancient packages from running under new versions of R and vice-versa is the fact that the package author simply set the minimum version to whatever they happened to be using at the time.
We run node code that's 10 year old. No one dares to touch it; we just run it in docker and hope nothing goes wrong.
I would heavily recommend avoiding NodeJS packages that depend on node-gyp. Node-gyp-powered dependencies are very seldom worth the hassle.
If you must depend on node-gyp, perhaps use dev containers so at least every developer in your team can work most of the time.
I don't even know what node-gyp is, but it appears in error messages regularly enough for me to know it causes problems.
I don't even develop against Node, it has just crept into our front-end build toolchain.
It's the JS equivalent of allowing native bindings (like JNI in Java).
So I'm pretty uninformed about the guts of node-gyp, and why it's used, but if people need to bring in dependencies from outside javascript... could WASM be a good fit there? You could store the binaries instead, and ship those... and in theory (correct me if I'm wrong) that shouldn't be much of a security issue due to the security model of WASM modules... or at least equal to the risk of running arbitrary build commands on your machine from a random node package.
In principle, yes. In practice, the problem is that getting some random native library or tool compile with wasm as a target is not always easy. E.g. anything that relied on pthreads was out until fairly recently.
I've actually had a node project go bad in a mere 4 months. It must be a new record. That was about 4-5 years ago though.
Hopefully the ecosystem has improved since then, but it was nearly impossible to get going.
Some packages had been changed, with version numbers overwritten by incompatible releases, and the conflicts were plenty.
One of the things I'm intrigued by is that JS people, and the other couple of ecosystems where this is a big problem, go out to learn another language (as a good T-shaped developer does), and then start posting frantic questions to the new language's communities about how this popular library hasn't had a commit in six weeks, is it dead, oh my gosh wtf aaaaaaaaaaa.
It's OK. Not every language ecosystem is so busted that you can reliably expect a project not to work if someone isn't staring at it weekly and building it over and over again just in case. Now, it's always a risk, sure, no language anywhere is immune to the issue [1], but there's plenty of languages where you can encounter things from 5 years ago and your default presumption is that it's probably still working as well now as it did then. It may be wrong, but it's an OK default presumption.
[1]: Well... no language in common use anyhow. There's some really fringe stuff that uses what is basically content-based references for code dependencies, but I'm not aware of anything that I'd call "production quality" that even remotely looks like that, and is immune to someone just plain making an error with the semantic versioning or whatever.
> frantic questions to the new language's communities about how this popular library hasn't had a commit in six weeks
Lol, my perspective is almost the opposite. If it's got a lot of commits in the last six weeks, I need to look for something that's stable. Unless there's a good reason for so many commits; I feel like that many commits means it's in active development, which implies the requirements and interfaces aren't yet determined and who wants to rely on that?
These JS developers would probably shiver at seeing many Common Lisp repos with a last commit like 12 years ago and still working like a charm.
I’m curious, how do you measure the pulse of a project that old? Do people still talk about it? Or is that not even necessary — use it until it breaks and otherwise don’t think about it?
Why do you want your building materials to have a pulse?
Ideally, in adopting dependencies, you should be looking for a mature utility whose design was clear and implementation is complete.
If it's open source, you should be able to read and understand the code yourself, and you should make an earnest effort to do so, in case it has faults you wouldn't usually allow in your own code and in case you need to fork it at some point.
This lets you build well-designed, stable, maintainable, clear things yourself.
The alternative, building your project on a random collection of "living" projects undergoing active development, is how you banish yourself to perpetual maintenance, build failures and CVE warnings that have nothing to do with your work, surprise regressions when you update your referenced version (you are, at least, pinning your versions??), etc.
An HTTP 1.1 client is something you might expect to be pretty stable, something that doesn't need too many updates, right?
But I would not assume that an HTTP client that has been untouched for 12 years supports SNI, for example, which means it might actually be totally useless for a lot of modern sites (certainly Android did not support SNI 12 years ago).
If it has an issue tracker, you can look in there for things that look like real issues and are unaddressed.
If there's no issue tracker, you can YOLO and try it and see if it works, or you can look around at the code and see if it looks reasonable.
Even if there are unaddressed issues, you can always use it and fix it when it breaks. If it's reasonable enough, it's a good start anyway. And at least my assumption with open source is I'm going to be fixing it when it breaks, so lack of a pulse is better than churn.
Maybe "pulse" could be transitive? Like, if a project doesn't have many recent commits, but many projects using it have recent commits.
Node is bad but the worst I have seen is Android
How about node on android?
Delete this comment right now, don't give them ideas.
Too late, we already have react native
It's a double whammy
I would expect most Java projects from 20 years ago to compile and run with zero issues.
Absolutely not. Not on the client side anyway.
I know of one application by a large multinational that requires java in the browser to run. Almost impossible to run now because of security restrictions.
We do have some very old and likely lost all sources "client apps" that are a single JAR and date from around 2003-2004, written in Swing. They still work.
Of course when they stop working they will be phased out, but we have been expecting their death for years now and not happening yet.
well java on the desktop and java in the browser are two entirely different beasts. the problem here is not java but the changes that have been made in the browser.
The ecosystem has not improved since then.
Wow, that’s honestly impressive.
If there was an option to guarantee versions could exist for X amount of years (maybe even months?) then that would greatly help the stability of projects.
then the problem was your research before choosing the packages. So much bad engineering and architectural planning is blamed on the tools, not the humans using them.
I’ve started to adopt Nix devShells to help keep a record of each project’s dependencies.
If Nix is too heavy, the learning curve for tools like asdf-vm and mise is much lower and offers similar benefits.
I really wish there was a good equivalent for Windows.
You know, I ran into something similar recently with a static site engine (Zola). Was moving to a new host and figured I'd just copy and run the binary, only to have it fail due to linking OpenSSL. I had customized the internals years ago and stupidly never committed it anywhere, and attempting to build it fresh ran into issues with yanked crates.
Since it's just a binary though, I wound up grabbing the OpenSSL from the old box and patching the binary to just point to that instead. Thing runs fine after that.
This is all, of course, still totally stupid - but I did find myself thinking how much worse comparable events in JS have been for me over the years. What would have been easily an entire afternoon ended up taking 15 minutes - and a chunk of that was just double checking commands I'd long forgotten.
That is nothing... Try building your Android project after leaving it idle for a week. Or better yet, try building your react native project you left for 2 days.
OMG I feel this in my soul. Try looking at one of the gradle files wrong in a kotlin multiplatform app with shared ui.
Hold on, you had to do binary surgery using an OpenSSL version from an old box you had? I salute the dedication.
How exactly does one do that. Sounds exciting!
Not the OP but what sometimes works is as easy as:
`ldd your-binary` on the old host, then copy everything it references into ./foo, and start it like so on the new host: `LD_LIBRARY_PATH=./foo ./your-binary`. (may include typos, from memory)
A great tool for this used to be https://github.com/intoli/exodus - not sure if it still works.
Disclaimer: Also please don't do this with network-facing services, security applies, etc.pp. but it's a good trick to know.
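Spelled out a bit more, the trick looks roughly like this (the binary name, directory, and the awk filter are all placeholders/sketch material, not a tested recipe):

```shell
# On the OLD host: list the shared libraries the binary links against
ldd ./your-binary

# Copy everything it resolves to into a directory you'll ship alongside it
mkdir -p ./foo
ldd ./your-binary | awk '$2 == "=>" && $3 ~ /^\// { print $3 }' \
  | xargs -I{} cp {} ./foo/

# On the NEW host: make the dynamic linker look in the bundled copies first
LD_LIBRARY_PATH=./foo ./your-binary
```

Same security caveats as above apply: you're now responsible for updating those frozen libraries yourself.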
A sewing needle, a spare magnet, and a very steady hand.
I think you copy the library file and add it to you load path
patchelf
but don't forget to make sure your new path is fewer characters than the original one so you don't overwrite any of the library
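For what it's worth, patchelf rewrites the ELF headers properly, so the "fewer characters" restriction only applies if you hex-edit the path by hand. A sketch of the patchelf route (binary and library names are placeholders):

```shell
# Point the binary at a directory of bundled libraries via RPATH;
# $ORIGIN means "the directory the binary itself lives in"
patchelf --set-rpath '$ORIGIN/libs' ./your-binary

# Or swap one problematic dependency for a differently-named file
patchelf --replace-needed libssl.so.1.1 libssl-old.so ./your-binary

# Verify what the loader will now resolve
patchelf --print-rpath ./your-binary
ldd ./your-binary
```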
What’s the issue with yanked crates? It should still build from your lockfile, even if it contains yanked crates.
Assuming you actually committed the lockfile...
Never underestimate the potential of past-you to have accidentally missed a tiny but essential step in a way that won't have made a noticeable difference at the time, yeah.
This is why Nix (with flakes), in a git repository, will refuse to use a lockfile that isn't being tracked by git.
I always try to remember to put the node version in my package.json - but I do agree that the dependency chain on node-gyp has been a blight on node packages for a while. Really wonder how that wart became such a critical tool used by so many packages.
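For anyone who hasn't done this: it's the `engines` field. Note that by default npm only *warns* on a mismatch; you need `engine-strict=true` in your `.npmrc` to make it a hard error. (Version ranges below are just examples.)

```json
{
  "name": "my-project",
  "engines": {
    "node": ">=18 <19",
    "npm": ">=9"
  }
}
```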
node-gyp is a huge source of these issues for Node projects, especially older ones.
For those reading this who don't know much about node - node-gyp is how you pull in native code libraries to Node projects, typically for performance reasons. You get the same sorts of build issues with it that you can get whenever you start having binary, or source, dependencies, and you need the entire toolchain to be "Just Right(tm)".
I run into this issue with older Node projects on ARM Mac machines (Still!), but I run into similar issues with Python projects as well. Heck some days I still find older versions of native libraries that don't have working ARM builds for MacOS!
Node used to have a lot more native modules; in newer code you typically don't see as much of that, and accordingly this is much less of an issue nowadays.
> I always try to remember to put the node version in my package.json
This 100x over!
> For those reading this who don't know much about node
I would prefer to remain blissfully ignorant, thank you!
Why did you click on "The tragedy of running an old Node project" then
IMHO TypeScript is the best mainstream language to write code in right now. It is incredibly expressive and feature rich, and you can model in almost any paradigm you like. The ecosystem around it allows you to choose whatever blend of runtime vs compile time type safety you prefer. Lots of people just runtime type check at their endpoint boundaries, and use compile time for everything internal to a service, but again, the choice is yours.
The Node+Express backend ecosystem is also incredibly powerful. Node is light weight, the most naïve code can handle a thousand RPS on the cheapest of machines, and you can get an entire server up and running with CORS+Auth+JSON endpoints in just 5 or 6 lines of code, and none of that code has any DI magic or XML configuration files.
JS/TS is horrible for numeric stuff, but it is great for everything else.
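As a concrete sketch of the "runtime check at the boundary, compile-time types inside" pattern - hand-rolled here for illustration (in practice many people reach for a schema library; the field names are made up):

```javascript
// Hand-rolled runtime check at an endpoint boundary. Everything past
// this function can trust the shape; inside the service, TypeScript's
// compile-time checking takes over.
function validateUser(body) {
  if (typeof body !== "object" || body === null) {
    return { ok: false, errors: ["body must be an object"] };
  }
  const errors = [];
  if (typeof body.name !== "string" || body.name.length === 0) {
    errors.push("name must be a non-empty string");
  }
  if (!Number.isInteger(body.age) || body.age < 0) {
    errors.push("age must be a non-negative integer");
  }
  return errors.length === 0
    ? { ok: true, value: body }
    : { ok: false, errors };
}

console.log(validateUser({ name: "Ada", age: 36 }).ok); // true
console.log(validateUser({ name: "", age: -1 }).ok);    // false
```

The choice of where exactly to draw the boundary (one check per HTTP endpoint, nothing internal) is the part the comment above is describing.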
>Really wonder how that wart became such a critical tool used by so many packages.
The original dream for Node was that it would simply be a glue wrapper around libuv that allowed for easy packaging/sharing of modules written in C++. But everyone just started writing everything in JS, and the ecosystem ended up as a mish-mash of native/non-native. Ryan Dahl stated this was indeed his biggest mistake/regret with Node, thus we have Deno now.
What is the Deno solution though? (I assume it's not sharing modules written in C++?)
Deno's solution is coming out years later when JS is fast enough that there is no need to involve C++ for most applications.
> But everyone just started writing everything in JS, and the ecosystem ended up as a mish-mash of native/non-native.
Because the native written stuff breaks all the darn time and it creates cross-plat nightmares.
My stress levels are inversely proportional to how many native packages I have to try to get building within a project, be that project in Python, Java, or JS.
JS+Node runs on everything. Prepackaged C++ libraries always seem to be missing at least one target platform that I need!
The CPAN 'Alien' infrastructure is great for this, you have pseudo-modules that you can depend on that use vendor packages if available and build the damn thing for you if not.
It's considered ... rude ... in most cases to write a module that needs to build against a native library without also having an Alien dist to handle making sure said library is there by the time it's trying to build against it.
Opinions on perl as a *language* ... vary, let us say ... but I wish people who didn't like writing perl would at least look at how our infrastructure works and steal more of the good parts to make dealing with their preferred language less painful.
Seamless native builds are quite doable, but the tooling needs to be very deliberately designed around that. For a good example of how far this can be taken, consider https://andrewkelley.me/post/zig-cc-powerful-drop-in-replace...
I'm a huge Zig fan! Thank you for making native programming fun again! Zig is the exception to native build systems being painful.
But even a great build system doesn't help when old native libraries don't support newer hardware or OSs. At some point the high level -> native abstractions break and then builds break. :(
For sure. This is the number one reason I am switching as many projects as I can to HTMX.
https://dubroy.com/blog/cold-blooded-software/
Sibling comments say in so many words, it's no big deal bro, just update. But it is a big deal over time if you have dozens of cold-blooded projects to deal with.
Good old node-gyp. I have absolutely no idea what it even is but it has been giving me errors for what feels like a decade. Mostly via front end build stuff from various projects I have worked on
Same. One day I'll find out what it is.
if you want to know, it's a fork of Google's GYP, which is a C/C++ project/build-system generator. I.e. it's a bit similar to CMake, a tool to describe native code projects and what needs to be built in order to make executables and DLLs.
It's a python codebase, largely abandoned by google. They used to use it for building Chrome.
i forgot all about node-gyp. The only memories I do have of it are the errors and thinking about gimps.
At first I thought it would be a decade old project, but 4 years isn't old by any standards is it?
Anyways, npm ci should have been the first attempt, not npm install, so that it installs the same package versions defined in package-lock.json. Then as others have mentioned, pin your node versions. If you're afraid of this happening again, npm pack is your friend.
In the end, op could have done a bit more. BUT I'll give it to him that when bindings are involved, these things take more time than they should
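The resurrection sequence being suggested, roughly (assuming nvm and a committed lockfile; the version number is an example, read the real one from the repo):

```shell
# Use the node version the project recorded (.nvmrc / "engines"),
# not whatever happens to be installed today
nvm install 14
nvm use 14

# Install EXACTLY what package-lock.json says, instead of re-resolving
# the whole tree against today's registry state
npm ci

# Optionally freeze the package itself as an installable .tgz snapshot
npm pack
```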
I had this exact problem with multiple Node blog engines in the past. Constant version breakage was incredibly frustrating. I eventually moved to Hugo. A single binary which I committed with the blog files. Zero issues even years later. I can build the blog on any new machine within seconds. Which was the other revelation of Hugo. 10 seconds to build an 800+ post blog vs minutes using Hexo or similar.
Having CI would have avoided this problem.
Not for running the project locally
I think you're missing the point.
CI solves it because it proves that it can build in the pipeline, using a well defined environment.
No guessing at which node version you need or any other dependencies that may be required.
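As a sketch, a minimal GitHub Actions workflow pins both the runner and the node version, so "which node do I need" is answered by the repo itself (names and versions here are illustrative):

```yaml
name: build
on: [push]
jobs:
  build:
    runs-on: ubuntu-22.04            # pinned, not ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version-file: .nvmrc  # single source of truth for the version
          cache: npm
      - run: npm ci                  # lockfile-exact install
      - run: npm run build
```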
I'm pretty sanguine about languages and frameworks, but I draw the line at node. I have seen so many horrors visited by dependencies, often to do just one thing where 2 or 3 lines of code would do the job anyway.
When I was managing teams, whatever the language, I would ban any new dependencies which I didn't personally agree with. A lack of control just creates a nightmare.
Was that kind of control well-received by your teams? Out of context, it sounds like it would be pretty rough to be an engineer on a team where your manager had sole control over what tools you could use - I suppose it might make sense for junior devs or a very small codebase, but I would caution against taking that stance in a team where you want to facilitate mutual trust
Provided the manager only rarely exercises the power, and is open to being persuaded not to, having somebody able to veto risky dependencies can be really quite useful.
Normally when I'm the one with that power we rapidly get to a general understanding of what's small enough that I (a) probably won't care (b) will take responsibility for tweaking the schedule to make time to get rid of it if I do.
And 'big' dependencies are generally best discussed amongst the entire team until consensus is reached before introducing one anyway.
Well back then there were fewer options, but the result was that completed products were easy to work with. Perhaps we live in different times.
Lately I have been revisiting some older golang tool I wrote since before they introduced go modules.
"go mod init" + identify a working dependency version was all I had to do on any of those 10+ year old projects (5 minute work tops)
This is literally every "hot new thing" since 2000.
It is systemic. Part of it is due to too many people creating systems on the fly with too little forethought, but also because there aren't enough "really smart people" working on long term solutions. Just hacks, done by hacks. What did you expect when the people writing the systems don't have long term experience?
The problem itself is old, but the extent to which it pervades different ecosystems varies. It's largely a cultural thing, and the problem with JS/Node ecosystem specifically is that most of the community (or, perhaps, rather most of the library/framework authors) accepts this kind of thing as normal.
i could not tell from the article whether this was a site with a backend using node.js or if it was just a frontend depending on node.js for the build tools.
for the latter i get around the problem by avoiding build tools altogether. i use a frontend framework that i can load directly into the browser, and use without needing any tools to manage dependencies. the benefit from that is that it will ensure that my site will keep running for years to come, even if i leave it dormant for some time. the downside is that it is probably less optimized. but for smaller sites that aren't under continuous maintenance this is a reasonable tradeoff. i built all my recent sites that way using a prebuilt version of the aurelia framework.
incidentally just today i tried to research if i could build a site with svelte that way. well, it turns out that although it should theoretically be possible, i was unable to find a prebuilt version to do so after a few hours of searching. for vuejs i found one within minutes. i'll be learning vuejs now.
see this thread for a discussion on going buildless: https://news.ycombinator.com/item?id=41479365
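for the curious, the buildless approach is basically one script tag plus plain browser JS; a sketch using vue's prebuilt global build (the CDN URL and version are assumptions, check the current docs):

```html
<!-- no bundler, no node_modules: one prebuilt file served as-is -->
<script src="https://unpkg.com/vue@3/dist/vue.global.js"></script>
<div id="app">{{ message }}</div>
<script>
  // Vue is now a global; everything below is plain browser JS
  const { createApp } = Vue;
  createApp({
    data() {
      return { message: "Hello, buildless world" };
    }
  }).mount("#app");
</script>
```

nothing here goes stale when a package manager changes, which is the whole point.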
I've been experimenting recently, with quite some success, with having a 'libs.js' file that pulls in and re-exports everything external I want, and providing a script that applies 'bun build' to just that.
I haven't yet decided if/how I want to include a prebuilt version of it in the repo, I *think* I may go the approach of having a commit that modifies libs.js and/or the lockfile and then an immediately following one that commits an updated prebuild ... oh, huh, actually, I should probably also consider doing those two commits on a branch, then forcing a merge commit so they land on master atomically but it's easy to tease out the human changes and the regen changes by poking inside said merge commit ... yeah, like I say, still thinking about exactly how to do this, don't mind me.
Also for even simpler cases I've been using the preact-htm prebuild directly, since htm gives a lit-style html() tagged literal consuming function that can produce vnodes for preact so I can mess around without needing something that understands jsx between my editor and my browser window.
vue's component system is IIRC noticeably less nice to work with if you don't have a compile step, but it's still pretty nice even without that so please don't think I'm trying to dissuade you here :)
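Concretely, the libs.js approach looks something like this (module names are examples, and it obviously needs the packages present for the one-off build, so this is a fragment rather than something runnable standalone):

```javascript
// libs.js - the ONLY file that touches node_modules directly.
// Everything else in the site imports from the committed prebuild.
export { h, render } from "preact";
export { useState, useEffect } from "preact/hooks";
export { default as htm } from "htm";

// One-off build step (run manually, output committed):
//   bun build libs.js --outfile=public/libs.bundle.js --format=esm
// App code then does:
//   import { h, render, htm } from "./libs.bundle.js";
```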
First thing I would have done is upgrade the version of Gatsby to latest. Did the author try that?
If upgrading is difficult because of 4 years of breaking changes, blame Gatsby for not being backwards compatible. Also blame your original choice of going with a hokey framework.
Speaking of hokey framework: 167 dependencies and 3000 versions of Gatsby in npm.
blaming anything or anyone gets you exactly zero seconds closer to getting the job done.
You could perhaps reframe "blame" as identifying the source of the problem, and I understand why it can be a useful exercise (also, none of us here are trying to solve the problem really, just wasting time on the internet). In this case Node and its attendant ecosystem are certainly part of the problem, but I would agree that Gatsby is a bigger part of the issue, as they don't seem to have any interest in taming the Node dependency management beast. I've had to dig into Gatsby projects mere months old and it really was like opening a can of worms.
We use package-lock.json and a docker image with a local folder binding to run legacy node projects. E.g. `docker run -v local:inner node:12 command`
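Spelled out a little (image tag and paths are examples):

```shell
# Freeze the runtime: an old official node image, with the project
# directory bind-mounted in, so nothing installed on the host matters
docker run --rm -it \
  -v "$PWD":/app \
  -w /app \
  node:12 \
  sh -c "npm ci && node index.js"
```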
I joined a node project that was stuck on 0.12 while 7.0 was being developed. It was a shit show to get us to 6. As I recall, 10 was a little tricky, 12 and 16 had a lot of head scratchers. I finished the 16 upgrade more than a year after the last person tried, and it was a dumb luck epiphany that kept it that short.
I had a similar experience with emberJS when it was still young. Every time I picked the project up I had one to two hours of upgrade work to get it to run again, and I just had a couple hours to work on it. So half my time went to maintenance and it wasn’t sustainable.
I’m trying a related idea now in elixir and José may be a saint. Though I fear a Java 5 moment in their future, where the levee breaks and a flood of changes come at once.
Can't help but feel that this is a massive nothing-burger. You wouldn't generally expect your Java project to run if you use an incompatible version of the JVM, nor would you generally expect your C++ project to build if you swap one compiler for a different one. Etc, always specify what your project relies on, whether it's in the readme or in the dependency tree.
node-gyp was a mistake, building of native addons should have been an explicit separate step all along.
> node-gyp
We're in ... let's call it a transitional period at work. I've got something like a dozen versions of node being managed by asdf. And in half of the projects I work on regularly, I consistently get warnings about this particular project failing to build.
One day, I'll actually look up what it actually is, and what it does, and why it's being built, but is apparently optional.
It's basically a set of tools to make building native modules easier, that said modules then use to deal with their binding to C/C++/etc. code.
Everybody complains about it, and understandably so, but if it didn't exist you'd probably instead have one set of potential similar problems *per* native module which has a good chance of not actually being better overall.
The counterargument is, I guess, "well, only people who can write their own high quality build setup in-tree should be writing things that bind to external code," and I do sometimes dream of that, but it's not hard to see the downsides of living in *that* world instead either.
Dealing with node-gyp cost me at least 5 hours a month in the 2010s. I'm so very happy to not see those errors in my console anymore.
Node.js (or more accurately, the entire Javascript ecosystem) changes, but the tropes don't.
https://medium.com/hackernoon/how-it-feels-to-learn-javascri... (beware the green background, I recommend reader mode.)
Acknowledging this is absolutely awful, and also commenting that a project .nvmrc file is your friend!
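(For the uninitiated: the whole file is one line, and version managers pick it up automatically. The version number below is just an example.)

```shell
# .nvmrc at the repo root contains only the version:
echo "18.19.0" > .nvmrc

# Later, in the project directory:
nvm install   # installs the version from .nvmrc if missing
nvm use       # switches the shell to it
```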
...until the node version you locked can't be downloaded anymore, or hasn't ever existed for your CPU arch.
I see you too had to run node v14(? my memory fails me somewhat) on Apple Silicon hardware...
FWIW, I run Node 12 painlessly on Apple Silicon using fnm, so you might be thinking of a few versions before that.
shivers
How is that pretty hard?
Native code in an npm module should be regarded as a massive red flag.
This goes for both node and python: Avoid native extensions. For python this is less feasible due to its inherently poor performance, so limit yourself to the crucial ones like numpy. For node, there are few good reasons why you would need a native extension. Unless you have your node version pinned, it will try to find the binary for your node version, fail, then attempt to build it on your system, which will most likely fail as well.
You spent only two hours on this and you think it’s too much?
Also, do not run shit on a node version that is years out of date and out of service. Also, update your damn packages. I know I sound cranky, but running anything internet facing with god knows how many vulnerabilities in is an exceedingly bad idea.
Have you tried DevContainer before?
I had to build some project that uses some Ruby package manager. I forgot already what the package manager is called. I got some error about "you don't have all the dev tools". So I installed what Google told me "dev tools" was. Then it still told me that I needed more dev tools. Stackoverflow had some question about this package manager. For Windows (Linux here). 20+ answers, mostly for Mac. All in the style of "this random thing worked for me". All with at least one upvote. Some answer about "I needed to symlink this system library".
Gave up.
Then I ran `devbox init` and installed whatever it told me that was needed. `devbox shell`.
yeah? now try running 4 years old React project, it's a hell on earth.
next.js user? :D
The tragedy of running an̶ ̶o̶l̶d̶ Node project.
On the one hand, it's not that terrible and most *of* the terrible is from people making silly choices.
On the other hand, there's a reason I regularly get annoyed enough at it to call it nope.js.
On the gripping hand, I mostly write perl, which argues for a different but unique set of masochistic tendencies on my part.
(you just have to remember that what 'perl' *really* stands for is 'Perennially Eclectic Rubbish Lister' and then you will have appropriate expectations and can settle back and have fun ;)
I had to chuckle after I read your username. Kudos.