Single-file dependencies are amazing. I've never understood why they're so unpopular as a distribution model.
They can be a bit clunky in some languages (e.g. C), but even then it's nothing compared to the dystopian level of nightmare fuel that is a lot of dependency systems (e.g. Maven, Gradle, PIP). Free vendoring is a nice plus as well.
Lua does it right; happy to see some Fennel follow in that tradition.
The main reason you don't see it that often is because of the "what if some extremely common library that we depend on indirectly 63 times at a total of 11 different versions discovers that four of those versions have a major security vulnerability" problem.
For hobby projects, vulnerable dependencies are usually a minor annoyance that's often less annoying than dealing with more elaborate dependency systems.
For big professional projects, not being able to easily answer "are we currently running any libraries with major known vulnerabilities" makes this approach a non-starter.
Cheers!
You mean in terms of having one centralized source of truth? I find this exact same problem with dependency systems as well; every project has its own dependency-tracking file, and unless there is a deliberate attempt at uniting these into one trackable unit, you get this exact mess either way.
If the problem is automation (limiting human factors), then I'd say that whatever issue exists with single-file dependencies is a tooling issue more than anything else. There's nothing about having one file that makes this any harder. In fact, I'd say the opposite is true (e.g. running checksums).
The one thing that dependency systems have going for them is homogenized standardization. I'd rather `go install x` than wrangle whatever Ninja-generating-CMake-files-generating-Makefiles-that-generate-files-that-produce-a-final-file carnival rides linger from the past. Perhaps it's because of those scars that I like single-file dependencies even in / especially in larger projects.
Clarification question: When you say "dependency systems," are you referring to PIP/NPM/go get/Cargo or just the C/C++ clusterfuck?
Because under the hood "go install" (and NPM/PIP/Cargo) makes sure that if you indirectly depend on a library, you only have one version* of that library installed, and it's pretty easy to track which version that is.
That's the key difference: With "go install" you only have one version of your indirect dependencies, with single-file libraries, you have one version for each thing that needs them and have no real way of tracking indirect dependencies.
I'm not going to try to defend the C/C++ clusterfuck. Switching to single-file libraries would improve that because any change would improve that.
* Sometimes one version of each major release
I think this comment shows the difference between web development and other industries.
This problem doesn't exist in languages that traditionally use single-file libraries, because projects rarely get to the point where they use 63 libraries, let alone have 63 indirect dependencies on a library.
Also: popular single-file libraries in the tradition of stb won't even have dependencies on other libraries of that sort, only on the standard library.
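For anyone who hasn't seen the pattern, here's a minimal sketch of an stb-style header. The library name and function are made up for illustration, but the IMPLEMENTATION-macro convention is the one the real stb headers use:

    /* mylib.h -- hypothetical stb-style single-file library */
    #ifndef MYLIB_H
    #define MYLIB_H

    int mylib_add(int a, int b);     /* public API, declaration only */

    /* exactly one .c file in the project defines MYLIB_IMPLEMENTATION
       before including this header, which pulls in the definitions */
    #ifdef MYLIB_IMPLEMENTATION
    int mylib_add(int a, int b) { return a + b; }
    #endif

    #endif /* MYLIB_H */

Everything ships in one header, there's nothing to link, and there are no dependencies beyond (at most) the standard library.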
If your dependency graph is simple enough that you don't have any indirect dependencies, then it really doesn't matter what you use for dependency management.
Modern dependency management tools are designed for more complicated cases.
I would argue that it's the opposite. It's "more complicated cases" that are designed to fit modern dependency management.
If the goal of your dependency system is to reduce the number of dependencies that library authors include, isn't encouraging people to make single-file libraries counterproductive?
How would that be different if you have a source file split up into multiple files?
Keeping a list of which version of a single-file library you're using seems like an easy problem to solve. If nothing else, you could put the version number in the file name and, in a debug build, print off the file name.
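A rough sketch of that idea (the library name, file name, and version here are all hypothetical): name the vendored file something like baz_v1_4_2.h and echo its version macro in debug builds:

    #include <stdio.h>

    /* hypothetical: the vendored single file is named baz_v1_4_2.h
       and defines its own version macro; inlined here for the sketch */
    #define BAZ_VERSION "1.4.2"

    int main(void) {
    #ifndef NDEBUG
        fprintf(stderr, "vendored baz: %s\n", BAZ_VERSION);
    #endif
        return 0;
    }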
I'm not comparing single-file vs multiple files, I'm comparing single-file vs NPM/PIP/go get/Cargo
Let's say you depend on foo, which depends on 10 other libraries including bar, all of which depend on a library called baz. Then one day someone discovers an exploit for baz.
With npm, you only have one version of baz installed, and can easily check if it's a vulnerable version.
With single-file libraries, baz was built into a single file. Then bar was built into a single file containing baz. Then foo was built into a single file containing bar and other libraries, all of which included baz.
Now your library contains baz 10 times, all of which might be at different versions, and none of which you can easily check. (You can check your version of foo easily enough, but baz is the library with the exploit)
> I'm not comparing single-file vs multiple files, I'm comparing single-file vs NPM/PIP/go get/Cargo
You are talking about your own thing. Everyone else is talking about the number of files, not the distribution and update mechanism.
The packaging system could support single files and still be able to track versions and upgrades. The JVM ecosystem is effectively single-file for many deps, especially now that jars can contain jars.
The comment I replied to was "They can be a bit clunky in some languages (eg. C), but even then it's nothing compared to the dystopian level of nightmare fuel that is a lot of dependency systems (eg. Maven, Gradle, PIP). Free vendoring is a nice plus as well."
How is that not talking about PIP?
Like the other person said, you're mixing up single file libraries with having no package manager or dependency management.
That being said, in C and C++ the single-file libraries typically have no dependencies, which is one of the major benefits.
Dependencies are always something that a programmer should be completely clear about, introducing even one new dependency matters and needs to be thought through.
If someone is blasting in new dependencies that themselves have new dependencies, and some of these overlap or are circular, and nothing is being baked down to stand alone, it's going to end in heartbreak and disaster no matter what. That's basically the opposite of modularity: a web of dependencies that can't be untangled. This applies to libraries, and it applies on a smaller scale to things like classes.
If the goal of your dependency system is to discourage people from adding dependencies, then isn't supporting single-file libraries counterproductive because it makes it easier to add new dependencies?
I actually find them to be extremely popular in C and C++, and also in some Lua communities.
They just have an extremely vocal opposition.
This is not too dissimilar to static builds and unity builds, which also "make your life easier", yet people will write tirades about how dangerous they are.
I wonder if C++ modules (which I'm loving) will also get the same treatment, since they make a lot of clunky tooling obsolete.
I think SQLite's amalgamation is one of the reasons SQLite is so popular for embedding.
I like to combine the two and put a lot of single-file libraries into one compilation unit. It compiles fast and puts lots of dependencies in one place that doesn't need to be recompiled often.
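As a sketch, that one compilation unit can look like this. stb_image and stb_truetype are real stb libraries, and the macros below are the ones their headers document, though the exact flags vary per library:

    /* deps.c -- the one translation unit that owns the library
       implementations; the rest of the project includes the headers
       without the IMPLEMENTATION defines */
    #define STB_IMAGE_IMPLEMENTATION
    #include "stb_image.h"
    #define STB_TRUETYPE_IMPLEMENTATION
    #include "stb_truetype.h"

Compile deps.c once; since it rarely changes, incremental builds stay fast.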
It takes a tiny bit of lateral thinking in C.
What if "build" hacked all the source into a single text file, instead of hacking it all into a single archive of object files?
Roughly, write static in front of most functions, don't reuse names between source files, cat them together in some viable order.
Now you can do whatever crazy codegen nonsense you want in the build and the users won't have to deal with it. The sqlite amalgamation build is surprising, using the result is trivial.
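A tiny sketch of that discipline, with hypothetical files: everything internal is static and public names are prefixed, so `cat util.c main.c > all.c` produces a single valid translation unit:

    /* util.c -- the internal helper is static, the public name is prefixed */
    static int clamp(int x, int lo, int hi) {
        return x < lo ? lo : (x > hi ? hi : x);
    }
    int util_clamp100(int x) { return clamp(x, 0, 100); }

    /* main.c */
    #include <stdio.h>
    int util_clamp100(int x);   /* or a shared header */
    int main(void) {
        printf("%d\n", util_clamp100(150));   /* prints 100 */
        return 0;
    }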
I suspect much of this is a historical result of C's compilation model. Since C compilers define a compilation unit as a file, there is no good way to do incremental compilation in C without splitting the project into multiple files. In an era of much slower computers, incremental compilation was necessary, so this was an understandable choice.
For me, today this split is almost always a mistake. Having everything in one file is superior the vast majority of the time. Search is easy, and it is completely clear where everything is in the project. Most projects written by a single individual will be fewer than 10K lines, which many compilers can clean-compile in under a second. And I have reached the stage of my programming journey where I would rather not ever work on sprawling 100K+ line projects written by dozens or hundreds of authors.
If the single file gets too unwieldy, splitting it in my opinion usually makes the problem worse. It is like cleaning up by sweeping everything under the rug. The fact that the file got so unwieldy was a symptom that the design got confused and the single file no longer is coherent. Splitting it rarely makes it more coherent.
To make things more concrete and simple: for me, the following structure is strictly better (in Lua, like the OP).
foo.lua:

    local bar = function() print "bar" end
    return function()
      print "foo"
      bar()
    end

compared to bar.lua:

    return function() print "bar" end

plus foo.lua:

    local bar = require "bar"
    return function()
      print "foo"
      bar()
    end
In the latter, I now have to both keep track of what bar is and where it lives, and switch files to see its contents or rely on fancy editor tricks. With the former, I can use vim, and if I want to remind myself of the definition of bar, it's as easy as `?bar =`. I end up with the same code either way, but it is much easier to view in a single file, and I can take advantage of Lua's scoping rules to keep module details local to the module, even from other modules defined in the same file.

For me, this makes it much easier to focus, and I am liberated from the problem of where submodules are located in the project. I can also recursively keep subdividing the problem into smaller and smaller subproblems that do not escape their context, so that even though the file might grow large, the actual functions tend to stay reasonably small.
That this is also the easiest way to distribute code is a nice bonus.
Yet another programming language I hadn't heard of: https://fennel-lang.org/
Thanks.
I thought it was the feature engineering company, Fennel, that was acquired by Databricks this year.