The "declaring script dependencies" thing is incredibly useful: https://docs.astral.sh/uv/guides/scripts/#declaring-script-d...
# /// script
# dependencies = [
#     "requests<3",
#     "rich",
# ]
# ///
import requests, rich
# ... script goes here
Save that as script.py and you can use "uv run script.py" to run it with the specified dependencies, magically installed into a temporary virtual environment without you having to think about them at all. It's an implementation of Python PEP 723: https://peps.python.org/pep-0723/
Claude 4 actually knows about this trick, which means you can ask it to write you a Python script "with inline script dependencies" and it will do the right thing, e.g. https://claude.ai/share/1217b467-d273-40d0-9699-f6a38113f045 - the prompt there was:
Write a Python script with inline script
dependencies that uses httpx and click to
download a large file and show a progress bar
Prior to Claude 4 I had a custom Claude project that included special instructions on how to do this, but that's not necessary any more: https://simonwillison.net/2024/Dec/19/one-shot-python-tools/
Shebang mode is also incredibly useful and allows execution like ./script.py
#!/usr/bin/env -S uv run --script
# /// script
# dependencies = [
#     "requests<3",
#     "rich",
# ]
# ///
import requests, rich
# ... script goes here
I really love this and I've been using it a lot. The one thing I'm unsure about is the best way to get my LSP working with inline dependencies.
Usually, when I use uv along with a pyproject.toml, I'll activate the venv before starting neovim, and then my LSP (basedpyright) is aware of the dependencies and it all just works. But with inline dependencies, I'm not sure what the best way to do this is.
I usually end up just manually creating a venv with the dependencies so I can edit inside of it, but then I run the script using the shebang/inline dependencies when I'm done developing it.
Yeah, I don't think there is a neater way to do this right now. One thing that maybe saves some effort is that "uv sync" will pay attention to the $VIRTUAL_ENV env var, but only if that var points to a venv that already exists. You can't point to an empty dir and have it create the venv.
# make a venv somehow, probably via the editor so it knows about the venv, saving a 3rd step to tell it which venv to use
$ env VIRTUAL_ENV=.venv/ uv sync --script foo.py
but it's still janky, not sure it saves much mental tax
This was my contribution to their docs, happy to see people find it useful :)
That’s pretty wild. Thanks for sharing
'env -S' is not portable.
Should be portable as long as there are no quotes. It's basically a no-op on macOS, since there the shebang args are already split on spaces (even when quoted).
Edit: further reading <https://unix.stackexchange.com/a/605761/472781> and <https://unix.stackexchange.com/a/774145/472781> and also note that on older BSDs it also used to be like that
Not under OpenBSD.
Thanks for pointing it out.
Indeed, OpenBSD’s and NetBSD’s `env` does not support `-S`. DragonflyBSD does (as expected).
Solaris, as pointed out by the first link, doesn't even support more than one argument in the shebang, so no surprise that its `env` does not support it either. Neither does illumos (but I'm not sure about its shebang handling).
Also, it doesn't work on Busybox either (e.g. on Alpine Linux).
> Save that as script.py and you can use "uv run script.py" to run it with the specified dependencies,
Be aware that uv will create a full copy of that environment for each script by default. Depending on your number of scripts, this could become wasteful really fast. There is a flag "--link-mode symlink" which will link the dependencies from the cache. I'm not sure why this isn't the default, or which disadvantages this has, but so far it's working fine for me, and saved me several gigabytes of storage.
PEP723 is also supported by pipx and hatch, even though uv is (deservedly) getting all the attention these days.
Others, like pip-tools, have support on the roadmap (https://github.com/jazzband/pip-tools/issues/2027)
I thought that was a heart emoticon next to requests, for a second.
I mean, who doesn't love requests?
Well... maybe the people who have stopped working on it due to the mysterious disappearance of 30,000 fundraised dollars, selling paid support but "delegat[ing] the actual work to unpaid volunteers", and a pattern of other issues that community members have since spoken up about.
https://vorpus.org/blog/why-im-not-collaborating-with-kennet...
> he said his original goal was just to raise $5k to buy a computer. Privately, I was skeptical that the $5k computer had anything to do with Requests. Requests is a small pure-Python library; if you want to work on it, then any cheap laptop is more than sufficient. $5k is the price of a beefy server or top-end gaming rig.
Kenneth Reitz has probably done more to enrich my life than most anyone else who builds things. I wouldn't begrudge him the idea of a nice workstation for his years of labour. Yeah, he's very imperfect, but the author has absolutely lost me
What has he built that you like using that much? Honest question, not being snarky.
I liked Requests way back when but prefer httpx or aiohttp. I liked Pipenv for about a month when it first came out, but jumped ship pretty quickly. I'm not familiar with his other works.
I also wouldn't begrudge the guy a laptop, but I do get what the author was saying. His original fundraiser felt off, like, if you want a nice laptop, just say so, but don't create specious justifications for it.
> I liked Requests way back when but prefer httpx or aiohttp.
Those two tools are modeled after `requests`, so Reitz still has an influence in your life even if you don't use his implementation directly.
They’re pretty close, sure, but Requests itself basically models a web browser. The newer ones model newer browsers (eg with HTTP/2 and async).
Kenneth Reitz has been good and bad at times. And while things of his are genius level, he also has done asshole level things. But on the other hand, we get that with a lot of geniuses for some reason or another. Really smart people can be really dumb too.
It would be like saying, "Don't use Laplace transforms because he did some unsavory thing at some point in time."
Looking at this drama for a bit, I haven't seen anybody advocate for 'canceling' requests itself.
Maybe it's more like: Laplace created awesome things, but let's be fair and also put in his wikipedia page a bit about his political shenanigans.
A lot of so-called geniuses, especially the self-styled ones with some narcissistic traits, get away with being assholes. Their admirers hold them to different norms than regular, boring people. I don't think that is fair or healthy for a community.
Back when the drama was fresh, there was a lot of talk about “canceling” Requests, but as often happens, everyone moved on to something else and it kinda got forgotten about with time.
It stopped accepting new features a decade ago, it doesn’t support HTTP/2, let alone HTTP/3, it doesn’t support async, and the maintainers ignored the latest security vulnerability for eight months.
It was good when it was new but it’s dangerously unmaintained today and nobody should be using it any more. Use niquests, httpx, or aiohttp. Niquests has a compatible API if you need a drop-in replacement.
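For what it's worth, the swap is usually just the import line. A minimal sketch, assuming niquests' advertised Requests-compatible API:

import niquests as requests  # drop-in: same get/post/Session surface as Requests

resp = requests.get("https://example.com")
print(resp.status_code)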
Me, because nearly every time I see it used, it’s for a trivial request or two that can easily be handled with stdlib.
If you need it, sure, it’s great, but let’s not encourage pulling in 3rd party libs unnecessarily.
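For a trivial GET, the stdlib really is enough. A minimal sketch with no third-party packages (and no inline dependencies block needed at all):

from urllib.request import urlopen

# stdlib-only GET, decode, print
with urlopen("https://api.github.com/zen") as resp:
    print(resp.read().decode())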
I've noticed that Claude Code prefers httpx because it's typed.
Weird! I have the opposite: I want it to prefer httpx but it always gives me Requests when I forget to specify it.
LLMs are funny.
I really like httpx’s handling of clients/sessions; it’s super easy to throw one in a context manager and then get multiple hits to the same server all reusing one http2 connection.
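Something like this, as a minimal sketch (note that HTTP/2 support needs the httpx[http2] extra, i.e. the h2 package):

import httpx

# one Client = one connection pool; with http2=True, sequential requests
# to the same host can reuse a single HTTP/2 connection
with httpx.Client(http2=True) as client:
    for path in ("/", "/about"):
        r = client.get(f"https://example.com{path}")
        print(r.status_code, r.http_version)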
This is cool!
This gave me the questionable idea of doing the same sort of thing for Go: https://github.com/imjasonh/gos
(Not necessarily endorsing, I was just curious to see how it would go, and it worked out okay!)
That's fun. I gave this a try myself. I took a different approach to the solution and used a bash script.
https://gist.github.com/JetSetIlly/97846331a8666e950fc33af9c...
My question is: if you already have to put the "import" there, why not have a PEP to specify the version in that same import statement, and let `uv` scan that dependency? It's something I never understood.
They don't want tools like uv to have to parse Python, which is complex and constantly changes.
See also https://peps.python.org/pep-0723/#why-not-infer-the-requirem...
Ahh thanks for that! Makes sense, since you could also have an import somewhere down the script (although that would be bad design).
Another point is the ambiguous naming when several packages do roughly the same thing... which is crucial here. Thank you! :)
This is cool, but honestly I wish it was builtin language syntax not a magic comment, magic comments are kind of ugly. Maybe some day…
(I realise there are some architectural issues with making it built-in syntax: magic comments are easier for external tools to parse, whereas the Python core has very limited knowledge of packaging and dependencies… still, one of these days…)
It IS built-in language syntax. It's defined in the PEP, that's built-in. It's syntax:
"Any Python script may have top-level comment blocks that MUST start with the line # /// TYPE where TYPE determines how to process the content. That is: a single #, followed by a single space, followed by three forward slashes, followed by a single space, followed by the type of metadata. Block MUST end with the line # ///. That is: a single #, followed by a single space, followed by three forward slashes. The TYPE MUST only consist of ASCII letters, numbers and hyphens."
That's the syntax.
Built-in language syntax.
Semantic whitespace was bad enough, now we have semantic comment blocks?
I'm mostly joking, but normally when people say language syntax they mean something outside a comment block.
I am - while awesomely valuing uv - also with the part of the audience wondering/wishing this capability were a language feature.-
The PEP explains why it isn't part of the regular python syntax.
uv and other tools would be forced to implement a full Python parser. And since the language changes they would need to update their parser when the language changes.
This approach doesn't have that problem.
Making it a "language feature" has no upside and lots of downside. As the PEP explains.
Perfect sense.-
Suppose I have such a begin block without the correct corresponding end block - will Python itself give me a syntax error, or will it just ignore it?
It might be “built-in syntax” from a specification viewpoint, but does CPython itself know anything about it? Does CPython’s parser reject scripts which violate this specification?
And even if CPython does know about it (or comes to do so in the future), the fact that it looks like a comment makes its status as “built-in syntax” non-obvious to the uninitiated
Funny how all this time, the only commonly-used import syntax like this was in HTML+JS with the <script> tag.
I hope it's in the same format as requirements files, because then one could put a one-liner in a comment next to it for people who don't have uv.
Something like `pip install -r <(head myscript.py)`. (Not exactly, but you get the idea.)
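The inline block is TOML rather than requirements format, but PEP 723 ships a reference regex for extracting it, so a no-uv fallback is a few lines of stdlib. A sketch based on the PEP's reference implementation (Python 3.11+ for tomllib; assumes the file's first metadata block is the `script` one):

import re
import sys
import tomllib

# regex taken from PEP 723's reference implementation
REGEX = r"(?m)^# /// (?P<type>[a-zA-Z0-9-]+)$\s(?P<content>(^#(| .*)$\s)+)^# ///$"

def script_dependencies(path):
    match = re.search(REGEX, open(path).read())
    if match is None or match.group("type") != "script":
        return []
    # strip the leading "# " (or bare "#") from each line, then parse as TOML
    content = "".join(
        line[2:] if line.startswith("# ") else line[1:]
        for line in match.group("content").splitlines(keepends=True)
    )
    return tomllib.loads(content).get("dependencies", [])

print("\n".join(script_dependencies(sys.argv[1])))

Save it as, say, read_deps.py (name made up) and `python read_deps.py myscript.py | xargs pip install` gets non-uv users going.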
uv is so cool. haha. goodbye conda.
conda is a completely no-go for me. It’s for data scientists, and only data scientists who do not need to care much about production.
I love this feature of uv but getting linters/language servers to pick up the venv when editing the files is a bit of a pain. I currently have a script 'uv-edit' which I am using to run Neovim with the correct environment:
#!/bin/bash
# usage: uv-edit script.py [nvim args...]
SCRIPT="$1"; shift
# sync the script's own venv, then launch nvim with that env's Python active
uv sync --quiet --script "$SCRIPT" && exec uv run --python "$(uv python find --script "$SCRIPT")" nvim "$SCRIPT" "$@"
One gotcha I caught myself in with this technique: using it in a script that would remediate a situation where my home had lost internet and the router needed to be power cycled. When the internet is out, `uv` cannot download the dependencies specified in the script, and the script would fail. Thankfully I noticed this problem after writing it but before needing it to actually work, and refactored my setup to pre-install the needed dependencies. But don't make the same mistake I almost made! Don't use this for code that may need to run airgapped! Even with uv caching you may still get a cache miss.
`uv run --offline` will use cached dependencies and not check for newer versions. Works for `uvx` too, i.e. `uvx --offline ...`.
But don't you have to only ever run it once to have the deps/venv for subsequent runs?
I don't know if uv garbage collects its cache, but it wouldn't surprise me. Otherwise disk usage would grow indefinitely.
Yes, disk usage grows indefinitely
Here's a fun one (says he, in a panic): Will we get to a point where (through fault or, gasp, design) a given AI will generatively code a missing dependency, on the fly - perhaps as a "last ditch" effort?
(I can imagine languages having official LLMs which would more or less "compress/know" enough of the language to be an import of last resort, of sorts, by virtue of which an approximation to the missing code would be provided).-
I've been doing something similar utilizing guix shell[0], setting my shebang to e.g.:
#!/usr/bin/env -S guix shell python python-requests python-pandas -- python3
in scripts for including per-script dependencies. This is language agnostic as long as the interpreter and its dependencies are available as guix packages. I think there may be a similar approach for utilizing nix shells that way as well.
[0]: https://guix.gnu.org/manual/en/html_node/Invoking-guix-shell...
This is my absolute favourite uv feature and the reason I switched to uv.
I have a bunch of scripts in my git-hooks which have dependencies which I don't want in my main venv.
#!/usr/bin/env -S uv run --script --python 3.13
This single feature meant that I could use the dependencies without making its own venv, but just include "brew install uv" as instructions to the devs.
If I may interject, does anyone have any idea why the `-S` flag is required? Testing this out on my machine with a BSD env, `/usr/bin/env -S uv run --python 3.11 python` and `/usr/bin/env uv run --python 3.11 python` do the same thing (launch a Python interactive shell). The man page for env doesn't clarify a lot to me, as the end result still seems to be the same ("Split apart the given string into multiple strings, and process each of the resulting strings as separate arguments to the env utility...")
It's needed on my Ubuntu 24.04 system:
$ cat test.sh
#!/usr/bin/env bash -c "echo hello"
$ ./test.sh
/usr/bin/env: ‘bash -c "echo hello"’: No such file or directory
/usr/bin/env: use -[v]S to pass options in shebang lines
$ ./test.sh # with -S
hello
Interesting, I guess it is indeed a BSD thing, cause this works for me with or without '-S' on Mac
There currently is a patch for adding '-S' to OpenBSD and in the discussion, the one who came originally up with it commented on how he added it to FreeBSD:
"IIRC, the catalyst for it was that early FreeBSD (1990's?) did split up the words on the '#!' line because that seemed convenient. Years later, someone else noticed that this behavior did not match '#!' processing on any other unix, so they changed the behavior so it would match. Someone else then thought that was a bug, because they had scripts which depended on the earlier behavior. I forget how many times the behavior of '#!' processing bounced back and forth, but I had some scripts which ran on multiple versions of Unix and one of these commits broke one of those scripts.
I read up on all the commit-log history, and fixed '#!' processing one more time so that it matched how other unixes do it, and I think I also left comments in the code for that processing to document that "Yes, '#!'-parsing is really supposed to work this way".
And then in an effort to help those people who depended on the earlier behavior, I implemented '-S' to the 'env' command.
I have no idea how much '-S' is used, but it's been in FreeBSD since June 2005, and somewhere along the line those changes were picked up by MacOS 10. The only linux I work on is RHEL, and it looks like Redhat added '-S' between RHEL7 and RHEL8." [https://marc.info/?l=openbsd-tech&m=175307781323403&w=2]
Thanks, but the main question is how come the behaviour is still the same whether you pass the flag or not? I would get it if it just failed without "-S", but it works as intended. I am wondering if this is because I might not be using the GNU version of env, so this is less relevant? Edit: looks to be the version thing indeed; this doesn't work for someone on Ubuntu.
At some point coreutils added -S in order to support the behavior of BSD and MacOS. I would guess that whatever implementation you're using then added -S as a noop in order to (at long last) permit the portable usage of multi-argument shebang lines. Until both of those things happened there was no portable way to pass multiple arguments in a shebang. You used to have to do really stupid things (ex https://unix.stackexchange.com/a/399698).
https://www.gnu.org/software/coreutils/manual/html_node/env-...
Actually, you might be onto something! If I test with explicit quoting in the terminal, BSD and GNU produce the same behavior: `env 'bash -c "echo hello"'` fails while `env -S 'bash -c "echo hello"'` works. I wasn't aware of this:
"To test env -S on the command line, use single quotes for the -S string to emulate a single parameter. Single quotes are not needed when using env -S in a shebang line on the first line of a script (the operating system already treats it as one argument)" (from your second link).
This is different for shebang on Mac though:
GNU env works with or without '-S':
#!/opt/homebrew/bin/genv -S bash -v
echo "hello world!"
BSD env works with or without '-S' too:
#!/usr/bin/env -S bash -v
echo "hello world!"
To conclude, looks like adding `-S` is the safest option for compatibility's sake :).
Completely agree. uv forestalled what was going to be a major project to move lots of Python to golang. There will still be a lot migrated, but smaller script-like things are no longer in scope.
I still write some small scripts in golang when the startup time is important. Python still takes its time to boot up, and it's not the best tool for the job if it's gonna be called like a shell utility for thousands of files, for example.
All my "permanent" scripts were transferred to Go a while ago with LLM assistance.
They're the ones that just keep running in cron with zero modifications.
Python is when I need to iterate fast and just edit crap on the go, most of those will also be migrated to Go after they stabilise - unless there's a library dependency that prevents it.
`uv` is nice, but it's not "static binary that runs literally anywhere with no prerequisites"-nice.
I honestly believe this is a killer (killer) feature.-
I wish there was a straightforward way to let VS Code pick up the venv that uv transparently creates.
Out of the box, the Python extension redlines all the third-party imports.
As a workaround, I have to plunge into the guts of uv's Cache directory to tell VS Code the cached venv path manually, and cross fingers that it won't recreate that venv too often.
You can find the env path with `uv python find --script "${filePath}"`
I've been working on a VSCode extension to automatically detect and activate the right interpreter: https://marketplace.visualstudio.com/items?itemName=nsarrazi...
Could you tell me how you do this in code? I started using uv in a very old-school team and they don't like uv simply because it's new. Something like this is what they base it on.
I also just activate the desired virtual env and start VS Code from the command line within it
I do this in Swift, using swift-sh[1].
Works well, but the compilation of the dependencies (first run of the script) can be a bit long. A problem which uv (Python) probably won’t have…
Love this feature of uv. Here's a one-liner to launch a Jupyter notebook without even "installing" it:
uv run --with jupyter jupyter notebook
Everything is put into a temporary virtual environment that's cleaned up afterwards. Best thing is that if you run it from a project it will pick up those dependencies as well.
Note that it's not really "cleaned up" insofar as there is a uv cache folder that will grow bigger over time as you keep using that feature.
True. It's a good idea to periodically run:
uv cache clean
Or, if you want to specifically clean just jupyter: uv cache clean jupyter
I do this all the time as well with `uv run --with ipython --with boto3 ipython`. Great time-saver.
Note that this only works for single-file scripts.
If you have a project with modules, and you'd like a module to declare its dependencies, this won't work. uv will only get those dependencies declared in the invoked file.
For a multi-file project, you must have a `pyproject.toml`, see https://docs.astral.sh/uv/guides/projects/#managing-dependen...
In both cases, the script/project writer can use `uv add <dependency>`; it's just that in the single-file case they must add `--script`.
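For example (hypothetical file name, and the exact version bounds uv writes may differ), `uv add --script example.py requests` rewrites the inline block in place, leaving something like:

# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "requests",
# ]
# ///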
Oh nice, I was already a happy user of the uv-specific shebang with in-script dependencies, but the `uv lock --script example.py` command to create a lock file that is specific to one script takes it to another level! Amazing how this feels so natural and yet only appeared after 20+ years of Python packaging.
What’s your use case for locking dependencies on a single script?
One things that’s useful to my organization is that we can then proceed to scan the lockfile’s declared dependencies with, e.g., `trivy fs uv.lock` to make sure we’re not running code with known CVEs.
Just better visibility into the dependencies that come with the script (exactly for things like vulnerability scanning that you mention). It's also easier for reproducibility in someone else's environment when I can give them the exact list of dependencies instead of having them resolve it themselves using the inline declarations. Explicit is better than implicit :-)
If you’re at all interested in this topic, I strongly recommend you read Simon Willison’s blog at https://simonwillison.net/
He does all the legwork and boils it down into an easily-understandable manner.
He’s also a user here in this thread as simonw. Just an immensely useful guy.
There are a lot of bits and pieces that are clicking into place lately in the Python ecosystem. Recently I've been using the combination of Marimo and these uv script dependencies for building reproducible reporting and diagnostic tooling for other teams.
Quick plug here for a simple Jupyter kernel I created:
https://github.com/tobinjones/uvkernel
It’s a pretty minimal wrapper around “uv” and “IPython” to provide the functionality from the article, but for Jupyter notebooks. It’s similar to other projects, but I think my implementation is the least intrusive and a good “citizen” of the Jupyter ecosystem.
There’s also this work-in-progress:
https://github.com/tobinjones/pep723widget
Which provides a companion Jupyter plugin to manage the embedded script dependencies of notebooks with a UI. Warning: this one is partially vibe-coded and very early days.
There is also marimo. For my use cases it has been a fantastic upgrade from Jupyter. I use it in the style of TFA (notebooks are just Python files). In the notebook UI, you can add packages to the script headers. It's so nice to have many notebooks in one directory with independent, but reproducible environments.
Marimo notebooks are easy to diff when committing to git. Plus you get AI coding assistants in the notebook, a reactive UI framework, the ability to run in the browser with Pyodide on WASM, and more. It will also translate your old Jupyter notebooks for you.
For me, what uv is to package managers, marimo is to notebooks.
This is pretty great. Handing Python code out to my students usually comes with the question of "How do I run it?", which is terrible to answer. Now, I can just tell them to get uv (a single command) and run it.
Maybe worth mentioning that ruby has a very similar feature https://bundler.io/guides/bundler_in_a_single_file_ruby_scri...
I kinda always struggled to grok Python's package management and module system. uv is finally making things more understandable and just as good as the nodejs packaging and module system that I'm more comfortable with. This new feature is kinda awesome, even surpassing the DX of the node ecosystem.
As a tangent, one issue I face is how to force Cursor to use uv over pip. I have placed it in rules and also explicitly add a rules.yml file in every agent conversation. It still tries to push through pip 4 out of 10 times. Any recommendations on best practices to force Cursor to use uv would make a significant dent in productivity.
more of a bandaid than anything else, but maybe something like this might help:
alias pip='echo "do not use pip, use uv instead" && false'
You can put that in your bashrc or zshrc. There's a way to detect if it's a cursor shell to only apply the alias in this case, but I can't remember how off the top of my head!
> alias pip='echo "do not use pip, use uv instead" && false'
Interesting times, eh?
Who (who!) would have told us we'd all be aliasing terminal commands to *natural language* instructions - for machine consumption, I mean. Not for the dumb intern ...
(I am assuming that must have happened at some point - ie. having a verboten command echo "here be dragons" to the PFY ...)
I recently found a small issue with `uv run`. If you run a script from outside the project folder, it looks for the pyproject.toml in the folder from which you are calling `uv run`, not in the folder where the Python script is located (or its parents)! Because of that, scripts that store their dependencies in a pyproject.toml cannot be run successfully using a “bare” `uv run path/to/my/script.py` from outside the project folder.
You can work around this surprising behavior by always using inline dependencies, or by using the `--project` argument but this requires that you type the script path twice which is pretty inconvenient.
Other than that uv is awesome, but this small quirk is quite annoying.
I like it, and I once thought about this as well:
> Python doesn't require a requirements.txt file, but if you don't maintain this file, or neglect to provide one, it could result in broken functionalities - an unfortunate circumstance for a scripting language.
How many package managers can one language have? It's a simple language but setting it up is just incredibly bad. Maybe this is the one, or should I wait for the next?
> It's a simple language
It really isn't, and that is why the install and setup tools are so problematic. They have to hide and overcome a lot of complexity.
I had the same sentiment, but uv seems to have eliminated the competition. Installing uv using your OS package manager is enough as it can also download and install (isolated) Python interpreters as well.
> It's a simple language
No, it isn’t. This misconception is common and I don’t know where it comes from.
Python is a very complex language hiding behind friendly syntax.
I recommend trying it, it gets a ton of hype but I think for good reason.
This is the one.
imo all I usually need is pip-tools and venv, but uv kind of bundles the two together in a very natural way (and is very fast)
Unless you're installing a new version of Python, I have trouble seeing how preceding all the normal commands with "uv" can be seen as much of a difference.
It's not that much of a difference, but it is faster, and when you do need to switch pythons, you're already set up. uv also has some different workflows, universal lock files, and the caching is also nice. And the PEP 723 support is cool!
Poetry was a big difference, it was also very slow for me, and so I never picked it up.
I wish they would make uv default and end this game.
This is the way
Why doesn't pip support PEP 723? I'm all for spreading the love of our lord and savior uv, but there should be an official implementation.
I don't know if there's an official reason, but my guess is that it's slightly out of scope for pip: pip is a packaging installation tool, not a virtual environment managing system.
uv, being both, is a more natural fit for an implementation of that PEP.
Here's a relevant discussion: https://discuss.python.org/t/idea-introduce-standardize-proj...
This is almost as great as how much I love that this gets posted like once every two weeks.
I'm glad it was reposted so I saw it. I may actually use python for scripting now because uv makes it so easy.
I am using this feature in my MCP servers and it is a godsend! Some details on how I do it can be found at https://blog.toolkami.com/toolkami-shttp-server/.
There are a lot of small one-off things where deploying uv and the script makes them easy to use. A lot of use cases for golang can be replaced with uv.
Oh this looks amazing! I had pretty much stopped using Python for my one-off scripts because of the hassle of dependencies. I can't wait to try this out.
I've played with it for a while now, and for one-off scripts where Python might make more sense than bash, I write the script this way with uv, and then I only need uv and the script.
One of the complaints about Python is that a script stops working over time (because the Python installation changes with OS updates), and this kinda sorta doesn't make that go away entirely, but what it does do is eliminate a bunch of the hassle of getting things to work.
> this kinda sorta doesn't make it go away entirely
it absolutely can, `uv` can also pin the python interpreter it uses with `requires-python = "==3.11"` or whatever.
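i.e. the pin lives in the same inline block. A sketch (using a PEP 440 specifier, so `==3.11.*` rather than a bare `==3.11`):

#!/usr/bin/env -S uv run --script
# /// script
# requires-python = "==3.11.*"
# dependencies = ["rich"]
# ///
import rich

rich.print("[green]running on a pinned interpreter[/green]")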
Am I crazy for having my default interactive shell "/usr/bin/env python" point to a virtual environment with all the random dependencies that my one-off scripts need, so I can just run "python oneoff.py"? All of my one-off scripts combined use maybe a dozen dependencies. If I need more, I just install them there. I use pyenv, so it's trivial to change the version of Python that the interactive shell uses (default or temporary within the current shell).
From my perspective, people seem to make it difficult to use Python more from not understanding the difference between interactive and non-interactive shell configurations (if you think the above breaks system tools that use Python, then you don't understand the difference, nor that system tools use /bin/python rather than /usr/bin/env python), with a bit of cargo cult mixed in, than anything.
What would you do if some of the deps started to have conflicts in them? Also, what are your plans for migration when you'll need to move from one os version to another?
Implicit solutions like yours have a lower cost of entry, but a larger cost of support. uv Python scripts just work if you set them up once.
One case this may not work well for is when one of the dependencies is PyTorch. uv has explicit support for PyTorch ( https://docs.astral.sh/uv/guides/integration/pytorch/ ) but with the script headers I don't see a way to pick the most appropriate wheel index (cpu vs cuda vs rocm).
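One thing that may help, though I'd double-check it against the uv docs: I believe the inline metadata can also carry uv's own settings, including an index (e.g. added via `uv add --index <url> --script`). That at least lets you hard-code one index per script, say the cpu wheels, even if it won't auto-select cpu vs cuda vs rocm. A sketch, assuming that support:

# /// script
# dependencies = ["torch"]
#
# [[tool.uv.index]]
# url = "https://download.pytorch.org/whl/cpu"
# ///
import torch
print(torch.__version__)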
This is pretty easy to do if you have Nix
https://github.com/pmarreck/yt-transcriber
A commandline youtube transcription tool I built that runs on Mac (nix-darwin) which automatically pulls in all the dependencies it needs
https://docs.astral.sh/uv/guides/scripts/#declaring-script-d...
this is neat af. my throw-away scripts folder will be cleaner now.
My favorite part of this is the `exclude-newer` feature, giving you a somewhat reproducible script for very low effort.
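For anyone who hasn't seen it: `exclude-newer` is a `[tool.uv]` key inside the same inline block that caps resolution at a point in time, so re-running the script later resolves to the same versions. A sketch (the RFC 3339 timestamp is arbitrary):

# /// script
# dependencies = ["requests"]
#
# [tool.uv]
# exclude-newer = "2025-01-01T00:00:00Z"
# ///
import requests
print(requests.__version__)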
I hate such poor docs that don't explain how things work, and instead prefer hiding behind some "magic".
> Constraints can be added to the requested dependency if specific versions are needed:
> uv run --with 'rich>12,<13' example.py
Why not mention that this will make uv download specified versions of specified packages somewhere on the disk. Where? Are those packages going to get cached somewhere? Or will it re-download those same packages again and again every time you run this command?
Caching is linked on the left: https://docs.astral.sh/uv/concepts/cache/ The index of the Concepts section says: "Looking for a quick introduction to features? See the guides instead."
Maybe there should instead be a link to the Concepts section for people who want more details, but I feel it's fine as it is.
I mean... it's in the docs?
As a non-Python person, I find “uv” to be a confusing name.
For a long time I assumed it’s a Python wrapper for libuv which is a much older and well-known project. But apparently it’s Python package manager #137 “this one finally works, really, believe me”
For this use case I would first try to compile the script with Nuitka. A single binary is more manageable than doing that.
I just looked this up yesterday by sheer coincidence and was happy it actually worked. What brought it to your attention today?
Almost certainly the thread about jqfmt: https://news.ycombinator.com/item?id=44637716
Heard about this! Finally did something with it and I'm happy with how this tiny gist turned out:
https://gist.github.com/pythoninthegrass/e5b0e23041fe3352666...
tl;dr
Installs 3 deps into the uv cache, then does the following:
1. httpx to make a GET request to the GitHub API
2. sh (the library) to run the ls command on the current directory
3. python-decouple to read an .env file or env var
Uh oh. I am thinking of all the ways this can be misused to ship malicious dependencies. Pretty much all SCA tools today will be blind to this.
why would you ship `uv` though?
More comments on the topic: https://news.ycombinator.com/item?id=44369388
Fun with uv and PEP 723
640 points | 227 comments