Since this is a Raspberry Pi, you don't need to buy a separate hardware noise generator: it has plenty of GPIO. For example, you can obtain entropy by sampling the avalanche noise generated by reverse-biasing a junction of a cheap transistor. Here is an example: http://holdenc.altervista.org/avalanche/. Bonus: maybe it will get you hooked on electrical engineering!
Btw, some versions of the Raspberry Pi already have a hardware random number generator accessible at /dev/hwrng.
The Raspberry Pi 4 model B can also implement a FIPS-compliant entropy source in the form of CPU Jitter Random Number Generator (JEnt)! It is listed under certificate E21 within NIST’s Entropy Source Validation Program (ESV), so it could even be used in a cryptographic module.
ESV testing for JEnt uses an oversampling rate of 3, so even if you don’t want to use the precise setup described in the certificate (maybe a different version of OS, etc), the entropy rate from this will be more than adequate.
How does the disconnected audio input of any random PC or thinclient compare?
I continue to find it a bit silly to see "with a raspberry pi" when people just mean "with any random linux box that doesn't need to be very powerful".
It's like listening to NPR, where every smartphone is an iPhone even if it's an Android, you know?
>How does the disconnected audio input of any random PC or thinclient compare?
That will give you RF noise, which isn't really random.
Electrical noise (including RF noise) is genuinely random, in the sense that it is impossible to predict its exact value.
It does have a non-flat spectrum, meaning some values are more probable than others, but that only means you need to whiten it. (A rough analogy might be a 6-sided die labeled 1,1,1,2,3,4. Yes, the number 1 is much more likely to come up; no, this does not make it "not really random", and some trivial math can produce an ideal random stream out of it.)
The only problem with an audio input is that you may end up with a non-random value, like an all-zero output. But a properly implemented whitener should detect this and stop outputting any values at all.
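The "trivial math" mentioned above can be as simple as the von Neumann extractor. A minimal sketch in Python (the biased source here is simulated, standing in for a real sampled noise stream):

```python
import random

def von_neumann(bits):
    """Von Neumann extractor: map raw bit pairs to output bits.
    01 -> 0, 10 -> 1, 00 and 11 are discarded. Assuming the raw
    bits are independent, the output is unbiased even if the
    source is heavily skewed."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

# A source biased towards 1 (p = 0.8), like the loaded die above:
raw = [1 if random.random() < 0.8 else 0 for _ in range(100_000)]
clean = von_neumann(raw)
print(sum(clean) / len(clean))  # hovers around 0.5
```

Note this only fixes bias, not correlation between samples; real designs feed the raw stream through a cryptographic hash instead.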
It's an often-made mistake where random generation / randomness is confused with the probability distribution. Having said that, I don't know (as in, I really don't know) whether RF noise is unbiased; it doesn't sound like it?
If you are talking about DC bias (as in, the long-term average of the raw readings), then an unconnected audio input is pretty likely to have it: it's easy to introduce via component tolerances, and there is no real reason to keep it at exactly zero for audio purposes. But it's also pretty trivial to fix in software.
If you are talking about bias in a more general sense, then audio input noise is non-uniform in frequency space; for example, there is a low-pass filter which removes high input frequencies, and it will affect the noise values too. A good whitening algorithm is essential.
The good news, however, is that many noise sources are actually caused by quantum effects in electronic parts, and are therefore completely unpredictable. Even if the NSA recorded all RF noise, they still could not predict what the ADC will capture. (But they might be able to capture the digital bits as they travel over the bus...)
> That will give you RF noise, which isn't really random.
what does "really" random even mean in this context? does it actually matter?
given 3 hypothetical devices in a homelab:
a) does no specialized hardware entropy collection, and instead relies entirely on the standard Linux kernel mechanisms
b) does entropy collection based on the RF noise that you're saying isn't "really" random
c) does entropy collection based on whatever mechanism you have in mind that generates "real" randomness (hand-carving bits of entropy out of quantum foam, or whatever)
even if your threat model includes "the NSA tries to break into my homelab"...device A will almost certainly be fine, they'll have ways of getting access that are much simpler than compromising the entropy pool.
I suppose device B has a theoretical vulnerability that if the NSA had physical access to your homelab, they could monitor the RF environment, and then use that to predict what its inputs to the entropy pool were. but...that's assuming they have physical access, and can plant arbitrary equipment of their own design. at that point, they don't need to care about your entropy pool, you're already compromised.
Suitably similar RSA keys will compromise each other.
https://www.infoq.com/news/2019/12/rsa-iot-vulnerability/
So bad randomness can let a remote attacker break them much more easily.
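The attack in the linked article boils down to two moduli sharing a prime factor, which a single gcd exposes. A toy illustration with deliberately tiny primes (real keys use primes of 1024+ bits):

```python
from math import gcd

# Two RSA moduli that share the prime p, as happens when devices
# generate keys from the same starved entropy pool.
p, q1, q2 = 61, 53, 59
n1, n2 = p * q1, p * q2

shared = gcd(n1, n2)  # cheap to compute, even at real key sizes
assert shared == p
print(n1 // shared, n2 // shared)  # both keys fully factored
```

At scale this is done over millions of harvested public keys ("batch GCD").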
No one puts a raw bit source directly into a private key; it is always whitened via some method (often an entropy pool built on strong hash/encryption functions).
That means that even if your "random inputs" are totally predictable, the random values which come out of the whitener are completely distinct, and the generated RSA keys have virtually zero chance of being similar.
I was not aware of this! That's kinda fun.
I did an entropy test on my Pi5 (according to https://rob-turner.net/post/raspberrypi-hwrng), and it (7.999832 bits per byte) has about the same entropy as /dev/urandom (7.999831 bits per byte).
However, when using it directly, it's pretty slow. /dev/hwrng is 200 KB/s, /dev/urandom is 40 MB/s.
Though, maybe that doesn't matter if it's just intended to be used to add entropy to the system entropy pool.
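For reference, the bits-per-byte figure reported by tools like `ent` is the Shannon entropy of the byte histogram. A small sketch, using os.urandom as a stand-in for reading /dev/hwrng:

```python
import math
import os
from collections import Counter

def bits_per_byte(data: bytes) -> float:
    """Shannon entropy of the byte histogram, in bits per byte (max 8.0)."""
    n = len(data)
    counts = Counter(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

sample = os.urandom(1_000_000)  # stand-in for reading from /dev/hwrng
print(f"{bits_per_byte(sample):.6f} bits per byte")  # just shy of 8.0
```

Note this estimator only checks the byte distribution, not inter-byte correlations, so a high score is necessary but not sufficient evidence of good randomness.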
I'd do this then three years later realize that something broke and it's just been feeding zeroes for the last 18 months.
good point, the project immediately after building the CA is to build a decent monitoring/alerting setup.
Could still have been random
https://web.archive.org/web/20011027002011/http://dilbert.co...
I love this idea!
"with a Yubikey" is probably a better title. The yubikey thingy costs more than the PI (and there is a helpful link if you want to buy one).
Very little of this has to do with a PI, it seems like almost any kind of home server would work (especially linux). And it is unclear to me what value is added by the yubikey? And would any FIDO device work, or is this yubikey brand only?
> And it is unclear to me what value is added by the yubikey? And would any FIDO device work, or is this yubikey brand only?
As my sibling comment says, this doesn't use FIDO, it uses PIV. The YubiKey pretends to be a USB CCID-class smartcard reader [1], with a PIV-capable smartcard inserted. You could use any other PIV-capable smartcard, but then you would also probably have to buy a smartcard reader. I do have a Dell keyboard with a built-in smartcard reader [2], but I don't use it. This would also be much bulkier.
Edit: Smartcard vendors also vary wildly in terms of their support for "Things that aren't Microsoft Windows".
[1] among other things, such as a USB HID keyboard for the OTP functionality
Good point.
Primarily, the YubiKey is there to lock away the private key while making it available to the running CA. Certificate signing happens inside the YubiKey, and the CA private key is not exportable.
This uses the YubiKey PIV application, not FIDO.
As an aside, step-ca supports several approaches for key protection, but the YubiKey is relatively inexpensive.
Another fun approach is to use systemd-creds to help encrypt the CA's private key password inside a TPM 2.0 module and tie it to PCR values, similar to what LUKS or BitLocker can do for auto disk unlocking based on system integrity. The Raspberry Pi doesn't have TPM 2.0 but there are HATs available.
I'm running the smallstep CA in my homelab. While it's nicely done and clearly focuses on the containerized enterprise market, its defaults are very harsh.
Take for example the maximum certificate duration. While from a production/security perspective short-lived certificates are great, you don't want to renew certs in your homelab every 24-48hrs. Also, many things just don't support ACME but still benefit from a valid certificate, e.g. router/firewall/appliance web interfaces. Out of the box, the limit for traditionally issued certificates using the CLI is very low, too.
The default prevents expired certificates from being renewed. If your homelab does not offer a couple of nines behind the comma, you'll pretty much have to intervene on a regular basis UNLESS you adjust the defaults. You can't set the max duration in years, months or days, only hours:
"claims": {
"minTLSCertDuration": "24h",
"maxTLSCertDuration": "26400h",
"defaultTLSCertDuration": "9000h"
},
If the goal of your homelab is to design/test/experiment with a fault-tolerant, highly available k8s infra, e.g. for your job, it's great. Caveat: macOS enforces duration limits even for trusted enterprise CAs; e.g., Safari won't accept your 1000-day certificate anymore.
It's true, the defaults are quite strict.
As for the "hours" max interval, this is the result of a design decision in Go's time duration library, dealing with the quirks of our calendaring system.
This is the API, presumably. Not sure what would prevent days; I wasn't familiar with the lore.
It's because units up to hours are of a fixed size, but days in most places are only 24h for ~363/365 days of the year, with some being 23h and some being 25h.
(This is ignoring leap seconds, since the trend is to smear those rather than surface them to userspace.)
For what it's worth, I've had quite a bit of success using ACME for devices that don't natively support it by using a sidecar service.
Basically, running the ACME flow on a Linux system and then having it programmatically update the cert/key for the service that needs it. Have done this for my NAS, printer, router, etc.
This is what I do as well. I use acme.sh on one linux server to generate a cert with a few SANs on it, then copy that cert to things like opnsense/truenas/etc either using ssh or their api if there is one.
Yes, the default certificate duration is so small as to be useless as a default. It's a pain to set the expiration in hours, but it does still allow you to set months or years of duration; you just have to calculate how many hours those months or years are.
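The arithmetic is simple enough to script. A small helper (the function name and the 365.25-day year are my own choices, not anything step-ca provides):

```python
def years_to_hours(years: float) -> str:
    """Convert years to the Go-style hour notation step-ca claims expect."""
    hours_per_year = 365.25 * 24  # average over leap years
    return f"{round(years * hours_per_year)}h"

print(years_to_hours(1))  # 8766h
print(years_to_hours(3))  # 26298h, close to the 26400h in the config above
```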
For those interested in an HSM on a Pi, there's pico-hsm: https://github.com/polhenarejos/pico-hsm
Is there any way to replace the dedicated RNG with the YubiKey RNG? The OpenPGP applet allows you to query its internal RNG [0].
[0] - https://developers.yubico.com/PGP/YubiKey_5.2.3_Enhancements...
That isn't so tiny.
This is tiny. https://github.com/FiloSottile/mkcert
Does that provide an ACME server?
Bit of an aside, but the "Infinite Noise TRNG" seems to generate not very random "raw" data, which it hashes to make it appear as random bits.
Am I missing something, or wouldn't it be better to start with highly random raw data, and hash that to get more bits-per-second?
Not appreciating the hijacking of the back button until the cookie form is dismissed.
yeah, it's a pretty dark pattern.
It's odd that they don't include a real-time clock. Signatures include timestamps, and those better be trustworthy.
They mention NTP.
This has been on my list to implement for a while. It's a really great idea for things like proxmox and pfsense that you probably don't want to use LetsEncrypt for, but support ACME.
Why wouldn't you want to use Lets Encrypt for proxmox, etc. ?
See also perhaps "Running one’s own root Certificate Authority in 2023" from a little while ago:
I use MiniCA for this. It's really nice
I think a nanosat running a CA that you can access directly via some kind of low-cost RF channel would be a fun experiment.
This is littered with so many missteps I don't know where to start.
-Complete overkill requiring the use of a YubiKey for key storage and external RNG source - what problems does this solve? For a Yubikey to act as a poor man's HSM you have to store the PIN in plaintext on the disk. So if the device is compromised, they can just issue their own certs. If it's to protect against physical theft of the keys, they'll just put the entire Raspberry Pi in their pocket. You could choose to enter the PIN manually but this precludes any automation including CRL generation. It's also a waste of a good YubiKey.
-Creates a two-tier PKI... on the same device. This completely defeats the purpose so you can't revoke anything in case of key compromise. You could make it a 100-tier PKI and it would make no difference if they're on the same device. Though they would need a whole lot of YubiKeys and USB hubs for that.
-They're generating the private key on disk then importing into the YubiKey. Which defeats having an external key storage device because you have left traces of the key on disk.
-All this digital duct taping the windows and doors yet the article instructs you to download and run random binaries off GitHub with no verification whatsoever.
-Why do you need ACME in a homelab and can't just hand issue long lived certificates?
-OpenSC and the crypto libraries are notoriously difficult to set up and get working properly. A tiny CA this is not.
An instance of openssl or xca covers 99.9% of "homelab" use cases. This is like using a battery operated drill to open a can of soup.
> Complete overkill requiring the use of a YubiKey for key storage and external RNG source - what problems does this solve? For a Yubikey to act as a poor man's HSM you have to store the PIN in plaintext on the disk
You still can't exfiltrate the key material.
> If it's to protect against physical theft of the keys, they'll just put the entire Raspberry Pi in their pocket.
Just because someone has compromised your device doesn't mean they have physical access. That's the point.
> They're generating the private key on disk then importing into the YubiKey. Which defeats having an external key storage device because you have left traces of the key on disk.
The traces don't have to be left behind. Is this excessive 'overkill', or is the 'digital duct taping the windows and doors' insufficient?
> An instance of openssl or xca covers 99.9% of "homelab" use cases
The interesting thing about this article is that it adds a few 9's that are covered, and it's both easy and cheap.
> You still can't exfiltrate the key material
And? What actual problem does this solve, or what realistic threat does it prevent? These are not decryption keys; they are used to digitally sign certificates.
What the DigiNotar hack taught us years ago is that if your CA is compromised you are already 0wned; it doesn't matter whether the key is stored in an HSM or not.
All they can do with a stolen key is issue more certificates. Which they can do anyway if they have root access to the CA.
You can put 12 locks on your door but if they're all keyed to the same key you've stored under the plant on the porch, it doesn't really matter.
> The interesting thing about this article is that it adds a few 9's that are covered, and it's both easy and cheap.
Hard to say if those extra 9's need an external RNG for extra entropy.
> Which they can do anyway if they have root access to the CA.
Until you turn it off. If they exfiltrate the keys, it's more complicated.
This goes back to your comment:
> Creates a two-tier PKI... on the same device. This completely defeats the purpose so you can't revoke anything in case of key compromise
But the root key is just created; it doesn't stay on the device and can't be used to sign anything.
> What actual problem does this solve or realistic threat does this prevent?
The problem is exfiltrating the key without physical access. Whether or not that's "realistic" enough to matter isn't a question that can be answered generally.
> Hard to say if those extra 9's need an external RNG for extra entropy.
IMO it's not. In the author's words: Optional, but fire
> ... For a Yubikey to act as a poor man's HSM you have to store the PIN in plaintext on the disk. ...
I haven't read the article fully yet, but it's not a bad idea to store the Root CA on the yubikey, and then generate a separate intermediate CA that is not stored on the yubikey. This way, all your day-to-day certs are issued using the intermediate and you only need to touch the root ca if you need to re-issue/revoke/etc the intermediate.
Hi, I'm the author of the post. Thanks for your questions here.
> -Complete overkill requiring the use of a YubiKey for key storage and external RNG source - what problems does this solve? For a Yubikey to act as a poor man's HSM you have to store the PIN in plaintext on the disk. So if the device is compromised, they can just issue their own certs. If it's to protect against physical theft of the keys, they'll just put the entire Raspberry Pi in their pocket.
Yep, it's overkill. Homelabs are learning environments. People want tutorials when trying new things. It's a poor man's HSM because not many people will buy an HSM for their homelab, but almost everyone already has a YubiKey they can play with.
The project solves the problem of people wanting to learn and play with new technology.
And it's a way to kickstart a decently solid local PKI, if that's something you're interested in.
The RNG is completely unnecessary flair that just adds to the fun.
> -Creates a two-tier PKI... on the same device. This completely defeats the purpose so you can't revoke anything in case of key compromise. > -They're generating the private key on disk then importing into the YubiKey. Which defeats having an external key storage device because you have left traces of the key on disk.
The tutorial shows how to generate and store the private key offline on a USB stick, not on the device or the YubiKey. The key material never touches the disk of the Raspberry Pi.
Why store a copy of the CA keys offline? Because YubiKeys don't have the key-wrapped backup and restore feature of HSMs. So, if the YubiKey ever fails, you need a way to restore your CA. Storing the root on a USB stick is the backup. Put the USB stick in a safe.
If you want active revocation, you can set it up so that the intermediate is revocable—in case physical theft of the key is important to you. (We have instructions to do that in our docs.)
> -All this digital duct taping the windows and doors yet the article instructs you to download and run random binaries off GitHub with no verification whatsoever.
It's open source software downloaded from GitHub. The only non-smallstep code is the RNG driver (GitHub is the distribution point for that project). Was there a kind of verification that you expected to see?
> -Why do you need ACME in a homelab and can't just hand issue long lived certificates? -OpenSC and the crypto libraries are notoriously difficult to set up and working properly. A tiny CA this is not.
Most people don't need ACME in their homelab, they just want to learn stuff. That said, we have homelabbers in our community issuing certs to dozens of endpoints in their homelab.
Whether you issue long-lived or short-lived certs is a philosophical issue. If a short-lived cert is compromised, it's simply less valuable to the attacker. Short-lived certs encourage automation. Long-lived certs can be easier to manage and you can just manually renew them. But unplanned expiry of long-lived certs has caused a lot of multi-million dollar outages.
I hope this helps clarify things.
Despite the critical feedback you've received above, I found the article interesting, and having a homelab with several spare Pi's, it's got me considering setting a CA up. Thank you.
> Why do you need ACME in a homelab and can't just hand issue long lived certificates?
If there is one thing I hate it is hand issuing certificates. Even for a homelab.
SSL just plain sucks and OpenSSLs incantation and especially config files make an already bad problem even worse.
Also, a lot of homelab people are experimenting and gaining experience with stuff they run in production or at work.
Those people are extremely likely to be using ACME in the wild.
Running it in your homelab makes a lot of sense to me.
I wish it was easier to get your CA installed in trust stores. Even for devices you control it's annoying but even worse if you want to share your services with mates at your house or over VPN etc. In the end it's just easier to go with LE certs for all practical cases.
Having spent time at a reasonable sized corporate environment with our own CA, I have to agree.
It's often a case of "it's fine until it isn't", and different organisations handle it differently. Python requests installed via pip will use its own truststore, but installed via rpm it will automatically use the system store. Amazon Corretto JDK also installs its own truststore, so you have to correct that. Running third-party applications often comes with trouble, too.
More recently, we've been bitten by a JDK bug[0] that prevents Java from correctly interpreting Name Constraints.
I guess I'm not the intended audience, but the last thing I want in my homelab is a CA. One more thing I'd set up, forget how it works, then spend an entire weekend trying to fix when it breaks a year from now.