First I saw that it's written in Perl. Then I realized that the last release was 11 years ago and that the repository domains are hardcoded in the one-file script.
Does it still work, though?
Where else would you put the repository domains?
I would put them into a configuration file. You know, so people can configure which repositories are being searched.
Generally I advise against hardcoding stuff that changes often and may need to be adjusted for different users or organizations.
The search APIs are separate from the repository URLs, and the different distros' APIs need to be parsed in different ways. And before you ask, the search APIs have to be separate from the repositories, if you don't want to waste disk, network, and time keeping hundreds of local index files up-to-date every week.
They can't just be "configured" by changing a URL. I guess maybe you could self-host the search page for some of the distros, and reuse the parser, but are people really doing that? Otherwise, you'd have to write new code to parse the results, at which point you might as well soft-fork the script anyway.
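To make the coupling concrete, a rough sketch (the two endpoints are real, but the Ubuntu scraping pattern is a guess, not whohas's actual logic):
ubuntu_search() {
  # HTML-only search page: results have to be scraped out of markup
  curl -s "https://packages.ubuntu.com/search?keywords=$1&searchon=names" \
    | grep -oP '(?<=<h3>Package )[^<]+'
}
arch_search() {
  # Arch happens to expose JSON, so the "parser" is just a jq filter
  curl -s "https://archlinux.org/packages/search/json/?q=$1" \
    | jq -r '.results[].pkgname'
}
Swap one URL for another in a config file and the parser sitting next to it almost certainly breaks too.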
> Generally I advise against hardcoding stuff that changes often and may need to be adjusted for different users or organizations.
YAGNI. And if your org does need it for some reason, you're probably better off running something specifically tailored for your own needs instead of whatever implementation makes it in.
The whole script's only 1300 lines. Would spending 150 lines on configuration and littering the user's dotfiles be worth it? Now what happens if the configuration's missing/corrupted? When you update the script, do you keep the old dotfile that might be using a deprecated API, or do you replace the old configuration and clobber any customization the user's done? Oops, there go another 1,000 lines on edge cases, option flags, conf merging, warning messages... And good luck getting bug reporters to explain their configuration changes!
Also, this stuff doesn't "change often". The distros literally can't change it often, because doing so might break LTS stability. I know it's fun to point out perceived flaws in other people's work, but in this case, the URLs are tightly bound to the parsing logic, which is the right place to put them IMO.
Are you asking if this tool can find something on Ubuntu 26.04 when the URLs it has were hardcoded 11 years ago?
The URL to search for packages in Ubuntu for example hasn't changed to my knowledge. Are you assuming it's only looking for packages in releases that were current at the time?
The site it hardcodes is https://packages.ubuntu.com, so yes I would expect it to work fine
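Easy enough to check that the hardcoded domain still answers:
curl -sI https://packages.ubuntu.com/ | head -n1
Whether every distro's search URL from 2015 still resolves is another question, but the Ubuntu one is a safe bet.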
In about a hundred or so separate microservices, of course…
The last commit was four years ago.
Who has?
Nixpkgs has. :)
Nowadays the only search like this I need to run is
nix-locate -r 'bin/foo$'
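One caveat: nix-locate reads from a local database that has to be built first by the companion nix-index tool, so there's a one-time (and occasionally repeated) step:
nix-index    # builds the file index that nix-locate queries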
It would be nice to have a CLI alternative to Repology, though.
Another great tool, built on top of nix-locate, is comma. So for any program foo, if you have foo installed, you can run it like this:
foo
And if you don't have it installed, you can run it (without installing!) like this:
, foo
And if multiple different packages provide a program named bin/foo, then comma lets you interactively choose the one you want, and remembers your choice so you don't have to specify again unless you choose to via the -d flag.
I've been using https://search.nixos.org/ this whole time to find packages. Thanks for dropping this!
....
function repology() {
  # usage: repology <project>, e.g. `repology whohas`
  curl -sL --user-agent 'hackernews' \
    "https://repology.org/api/v1/project/$1"
}
Latest release: May 19, 2015
Abandoned, but forkable (since FOSS), and a decent idea.
Probably nowadays this gets done in Node, parsing the package search websites. Preferably, this would be done via an API though.
> Probably nowadays this gets done in Node, parsing the package search websites. Preferably, this would be done via an API though.
Repology provides an API but it's unstable: https://repology.org/api/v1
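Still handy for one-off queries, though; a minimal sketch (field names taken from current v1 responses, which, per the warning above, may change):
curl -s 'https://repology.org/api/v1/project/whohas' \
  | jq -r '.[] | [.repo, .version, .status] | @tsv'    # repo, version, newest/outdated/etc.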
Yes, agreed. The idea and concept are cool! IMO it's worth keeping an eye on and playing with.
My first thought was a security use case: getting it to a point where it could support SBOM handling and tracking, particularly in light of all the recent package vulnerabilities.
Shame Homebrew for Linux is getting no love from any of the tools / lists mentioned here.
Since switching to that and Flatpak, my distro choice is "what sticks closest to the upstream of [my preferred DE]".
Do Linux users actually use Homebrew day to day? My impression of it was that it's mostly for macOS users who want to keep doing things the same way instead of learning the Linux way (using the OS package manager).
I have for a while, yeah. As mentioned, it means the distro packages don't matter for a lot of developer tools / CLIs. Wanna use a stable Debian / Ubuntu LTS for years? Want to use rolling releases so your desktop is up to date? Homebrew's got you covered.
In Bazzite/Fedora Silverblue, it's the expected way to install non-GUI packages on the host system. The other way is toolbox/distrobox (rootless containers tightly integrated with the host).
It's the default package manager in Bazzite and is one of the most functional package managers on atomic Fedora.
Ohh yes, a minority of us do exist. I prefer it over AppImages on my personal PC. It gets you almost-rolling-release software without needing to run a rolling release. I used to use distrobox with Arch Linux on a Pop!_OS base, but then just gave Homebrew and Nix a try to scratch the itch.
Nix is not there yet in terms of user-friendliness. Homebrew for Linux is pretty awesome.
The only issue I have is that it creates a separate user and doesn't support custom prefixes (their page says you're on your own if you use a custom prefix). While their reasoning is sound, not having an easy way to know which programs will break under a custom prefix is a bummer for me at work.
I use it on DSM (Synology OS) because all the software can be easily installed outside of DSM.
There is also https://pkgs.org.
I've been working on a GUI task manager for Linux, and I've been wanting to put a "Funding" or ownership metadata field next to the process or process group in the view so people can know where the upstream code lives, how to support the project, and what organizational unit "owns" that process.
So I actually vibe-coded a script that does this against a SQLite DB I've been considering bundling with my task manager, so it can know this stuff on the fly.
But yeah, this is a key missing component in Linux user space. Windows lets you encode organizational info into an exe, but on Linux binaries don't really have that.
You can usually get info about the upstream from the package metadata, e.g. on Debian:
$ apt info whohas
...
Homepage: http://www.philippwesche.org/200811/whohas/intro.html
...
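Programmatically it's only a couple of commands; a Debian-specific sketch (the PID here is hypothetical):
exe=$(readlink -f /proc/1234/exe)         # resolve the process's binary path
pkg=$(dpkg -S "$exe" | cut -d: -f1)       # which package owns that file
apt-cache show "$pkg" | grep -m1 '^Homepage:'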
The distribution model on Linux (generally speaking) is different from Windows, though, so I don't think it makes sense to view processes as fully "owned" by the upstream in the same way as on Windows. Instead of letting each individual organization directly have administrator access to rummage around on our machines and install packages, this is mostly delegated to the Linux distribution, which may customize the packages. (And of course the user has the right to customize the program as well, assuming it's FOSS, so ultimately the user is the owner of their own processes.)
Packages are not binaries. When I write software for Linux, I'm not gonna sit there and wait for apt whatever to run in the background. That was the whole point of the SQLite DB. Don't worry, I poll the entire Debian database... and Ubuntu... and Fedora... and Gentoo... and Arch... etc.
The TL;DR is that binaries on Linux really should have org unit as a metadata field, because when I write a task manager in C it needs to be fast.
It already exists: the AppStream spec can associate binaries with metadata.
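Installed components ship metainfo XML under /usr/share/metainfo, and the spec's <provides><binary> element is exactly this binary-to-component mapping. A crude way to see it, with grep standing in for a real XML parser:
grep -rl '<binary>foo</binary>' /usr/share/metainfo/    # which component claims bin/foo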
I made this in under 100 lines of bash, and it supports Arch, RHEL-based, Debian-based, Alpine, and openSUSE. But the problem is that some distros just have rubbish native search of package files.
And of course my tool searches their native package managers, not their online services, APIs, or package repos. That's a completely different approach.
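The dispatch is the easy part; roughly, it's shaped like this (simplified sketch, not the full script):
search_pkg() {
  if   command -v pacman >/dev/null; then pacman -Ss "$1"     # Arch
  elif command -v dnf    >/dev/null; then dnf search "$1"     # RHEL/Fedora
  elif command -v apt    >/dev/null; then apt search "$1"     # Debian/Ubuntu
  elif command -v apk    >/dev/null; then apk search "$1"     # Alpine
  elif command -v zypper >/dev/null; then zypper search "$1"  # openSUSE
  else echo "no supported package manager found" >&2; return 1
  fi
}
The hard part, as noted, is that the quality of each native search varies wildly.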
This would pair nicely with distrobox or Bedrock Linux :)
"Just gimme thething, I don't care where from" is a great way to get supply chain vulns
Oh nice, I just implemented something like this for installing from any package manager, uv-style: https://abxpkg.archivebox.io/. I haven't added a "search" command yet, though; I should add that!
Interesting, I've been wanting something like this. My main deal though is updates: how is that handled? Would love some kind of auto-update with a review/notification mechanism.
This is exactly the kind of boring CLI tool that earns its keep. Package names and availability differ just enough across distros to waste time in tiny annoying increments.
This kind of busywork should suit an AI agent:
Go and find me all the repolists and package/software metadata for any distro and OS ever released. Write the results to a local SQLite. Incrementally update, but don't hammer the sources to death. Provide a web UI and CLI.
Or you know, you could do that with a ~100-line script. You don't have to use LLMs for everything, especially when you're not dealing with freeform text at all; use data types and data structures, we've created the concepts for a reason.
Sure. But then I would have to use my brain to actually write code. I thought we were past that already. Also, if it's an agent that keeps scouring the net autonomously for more distros, then I wouldn't have to update the sources manually on my 100-line script.