• linkregister 7 hours ago

    Has anyone implemented this end-to-end? This seems production-ready for smaller shops where it's feasible for developers to sign artifacts individually. For a system where you'd want CI to publish artifacts, and then use the k8s policy controller to run only verified artifacts, it seems incomplete.

    It appears the reason to include this system in a toolchain would be to meet compliance requirements, but even the GCP, AWS, and Azure implementations of artifact signing & verification are in beta.
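    For the Kubernetes side, the Sigstore policy-controller gates admission on a ClusterImagePolicy; a rough sketch of what "only run verified artifacts" could look like, assuming keyless CI signing via GitHub Actions (the registry glob and identity values are hypothetical):

```yaml
# Hedged sketch of a Sigstore policy-controller policy: admit only images
# from our registry that carry a keyless signature tied to our CI
# workflow's OIDC identity. All names/values are illustrative.
apiVersion: policy.sigstore.dev/v1beta1
kind: ClusterImagePolicy
metadata:
  name: require-ci-signature
spec:
  images:
    - glob: "registry.example.com/**"
  authorities:
    - keyless:
        identities:
          - issuer: "https://token.actions.githubusercontent.com"
            subject: "https://github.com/example-org/example-app/.github/workflows/release.yml@refs/heads/main"
```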

    • woodruffw 3 hours ago

      > Has anyone implemented this end-to-end?

      Yes; I (along with a bunch of other fantastic folks) implemented it end-to-end for both Homebrew[1] and PyPI[2]. This is at a "lower" level than most corporate uses, however: the goal with integrating Sigstore into these OSS ecosystems is not to build up complex verifiable policies (which OSS maintainers don't want to deal with), but to enable signing with misuse-resistant machine identities.

      [1]: https://blog.trailofbits.com/2023/11/06/adding-build-provena...

      [2]: https://blog.pypi.org/posts/2024-11-14-pypi-now-supports-dig...

      • KennyBlanken an hour ago

        What you all should have done was figure out how to use that very mysterious and new technology called "sudo", so that Homebrew doesn't add user-writable directories to users' PATHs, thus enabling anything they might run on their computer to modify binaries the user might run later, or anyone who sits down at their system unsupervised to do the same.

        • woodruffw an hour ago

          Apart from the snark (which is unwarranted), I can't even parse what you're saying.

          (Mentioning sudo in the context of Homebrew suggests that you're one of those incoherent-threat-model people, so I'm going to assume it's that. I'll say what Homebrew's maintainers have been saying for years: having a user-writable Homebrew prefix is no more or less exploitable in the presence of attacker code execution than literally anything else. The attacker can always modify your shell initialization script, or your local Python bin directory, or anything else.)

      • remram 3 hours ago

        End-to-end it would require something like a web-of-trust or similar. There is little benefit in knowing that your package was definitely built by GitHub Actions definitely from the code that definitely came from the fingers of the random guy who maintains that particular tool.

        Unless you have some trust relationship with the author, or with someone that audited the code, the whole cryptographically-authenticated chain hangs from nothing.

        Tools like Crev did a lot of work in that area, but it never really took off; people don't want to think about trust: https://github.com/crev-dev/cargo-crev

        • arccy 6 hours ago

          Yes, I've implemented it at multiple companies. cosign supports generated keys and KMS services; that's been pretty stable and usable for a long time. Keyless signing is different, and you need to think a bit more carefully about what you're trusting.
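          To sketch the generated-key/KMS route: the CI job holds no key material at all and asks the KMS to sign via a key URI. A hedged CI fragment (job names, registry, and key alias are hypothetical; cosign also accepts gcpkms://, azurekms://, and hashivault:// URIs):

```yaml
# Hypothetical GitHub Actions job: sign the image pushed by a previous
# build job, using a signing key that never leaves AWS KMS.
sign:
  runs-on: ubuntu-latest
  needs: build
  steps:
    - name: Sign image by digest
      run: |
        cosign sign --key "awskms:///alias/release-signing" \
          "registry.example.com/app@${{ needs.build.outputs.digest }}"
```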

          • eikenberry 5 hours ago

            I recently implemented a software updating system using [The Update Framework](https://theupdateframework.io/) directly, with [go-tuf](https://github.com/theupdateframework/go-tuf). It required a lot of design work around how we were going to do package management on top of using it for secure updates, because TUF is designed so that existing package management systems can adopt it and integrate it into what they already have. As a result, TUF is very unopinionated and flexible.

            Given how TUF made it particularly hard to implement a system from scratch... how was your experience using Sigstore? Is it designed more around building systems from scratch? That is, is it more opinionated?

            Thanks.
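            To illustrate what that design work is about: TUF itself only specifies signed metadata documents like the targets role below; how targets are named, grouped, and delivered is entirely up to the integrator. An abbreviated, illustrative sketch (digest and signature fields are placeholders):

```json
{
  "signed": {
    "_type": "targets",
    "spec_version": "1.0.0",
    "version": 7,
    "expires": "2026-01-01T00:00:00Z",
    "targets": {
      "myapp-1.4.2-linux-amd64.tar.gz": {
        "length": 1048576,
        "hashes": { "sha256": "<hex digest of the tarball>" }
      }
    }
  },
  "signatures": [ { "keyid": "<targets key id>", "sig": "<signature>" } ]
}
```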

            • linkregister 4 hours ago

              I designed a system using Sigstore where the signing key is in a secret store, and the CI shells out to the cosign CLI to perform the signing. Is this an antipattern?

              For verification, did you use the policy controller in kubernetes? Or are you manually performing the verification at runtime?
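              Whichever way the check runs, the runtime step bottoms out in comparing the artifact's digest against a trusted, signature-verified reference. A deliberately minimal stdlib-only sketch of just that last comparison (a real cosign verification also checks the signature and the signer's identity; all names here are hypothetical):

```python
import hashlib

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Return True iff the artifact's SHA-256 digest matches the trusted pin."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

# Hypothetical pinned digest, obtained out-of-band from a verified signature.
pinned = hashlib.sha256(b"example artifact bytes").hexdigest()

assert verify_artifact(b"example artifact bytes", pinned)
assert not verify_artifact(b"tampered bytes", pinned)
```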

          • djhn 6 hours ago

            Somewhat adjacent question: are there people working on ways to verify that a particular server or API backend is running the specific signed release that is open-sourced? Can a company somehow cryptographically prove to its users that the running build is derived unmodified from the source?

            • captn3m0 2 hours ago

              In addition to the enclave routes, I have a proposal to build this with AWS Lambda as a poor man’s attestation: https://github.com/captn3m0/ideas?tab=readme-ov-file#verifia...

              • mpysc 3 hours ago

                You can get most of the way there with something like the SLSA/BCID framework, with the final artifact including some trusted provenance from an attested builder. You could go further and aim for full reproducibility on top of the provenance, but reproducible builds across different environments can get messy fast if you're looking to independently build and achieve the same result. Either way the end result is you have some artifact that you reasonably trust to represent some specific source input (ignoring the potential for backdoored compiler or other malicious intermediate code generation step).

                Now for the last mile, I'll admit I'm not particularly well-versed on the confidential compute side of things, so bridging the gap from trusted binary to trusted workload is something I can only speculate wildly on. Assuming you have a confidential compute environment that allows for workload attestation, I imagine that you could deploy this trusted binary and record the appropriate provenance information as part of the initial environment attestation report, then provide that to customers on demand (assuming they trust your attestation service).
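                Concretely, the provenance usually travels as an in-toto attestation whose predicate follows the SLSA provenance schema; an abbreviated sketch, with the digest and builder ID as illustrative placeholders:

```json
{
  "_type": "https://in-toto.io/Statement/v1",
  "subject": [
    { "name": "app.tar.gz", "digest": { "sha256": "<artifact digest>" } }
  ],
  "predicateType": "https://slsa.dev/provenance/v1",
  "predicate": {
    "buildDefinition": {
      "externalParameters": {
        "repository": "https://github.com/example-org/example-app",
        "ref": "refs/tags/v1.0.0"
      }
    },
    "runDetails": {
      "builder": { "id": "<attested builder identity>" }
    }
  }
}
```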

                • kfreds 4 hours ago

                  Yes. My colleagues and I have been working on it (and related concepts) for six years.

                  glasklarteknik.se

                  system-transparency.org

                  sigsum.org

                  tillitis.se

                  This presentation explains the idea and lists similar projects.

                  https://youtu.be/Lo0gxBWwwQE

                  • cperciva 6 hours ago

                    You can do this with e.g. EC2 enclaves. Of course that's kind of begging the question, since you need to trust the enclaves.

                    • shortsunblack 5 hours ago

                      See Keylime for this.

                      • formerly_proven 5 hours ago

                        That's what remote attestation in Intel SGX does. There are similar features on other platforms as well.

                        • Joel_Mckay 5 hours ago

                          Detecting physical ingress at a co-location server is not uncommon after contacting political representatives in some countries. It is wise to have password-protected SSL certs as the bare-minimum non-resettable tripwire, close monitoring of the HDD/SSD drives' S.M.A.R.T. firmware power-cycle counter, and of course an encrypted partition for logs and other mutable/sensitive content. Note that for performance, a "sudo debsums -sac" command along with other tripwire software can audit unencrypted system binaries efficiently. Most modern ephemeral malware (on Android especially) is not written to disk, to avoid forensic audits assigning accountability, as the chance of re-infection is higher if you hide the exploit methodology.

                          Folks should operate as if someone already has a leaked instance of their key files. In general, an offline self-signing key authority issuing client/peer certs is also important, as on rare occasion one can't trust third parties not to re-issue certs for Google/Facebook/GitHub etc. to jack users.

                          Eventually one should localize the database design to specific users and embed user-action telemetry into the design, i.e. damage or hostile activity is inherently limited to a specific user's content, sanity-checking quota systems limit the damage they can cause, and a windowed data lifecycle limits credentials to read-only or does garbage collection after some time.

                          In general, the RabbitMQ AMQP-over-SSL client-signed-cert credential system has proven rather reliable. Erlang/Elixir is far from perfect, but it can be made fairly robust with firewall rules.

                          Good luck, YMMV of course... =3

                        • rough-sea 7 hours ago

                          JSR supports Sigstore: https://jsr.io/docs/trust

                          • croes 4 hours ago

                            Does this help when a project changes ownership, or in cases like the xz backdoor?