Hands down one of the greatest services out there, stopped a racket and made the internet secure.
I remember a time when having an HTTPS connection was for "serious" projects only, because the cost of the certificate was much higher than the domain itself. You'd go commando, and if the project stuck, then you'd purchase a certificate for a hundred bucks or something.
There are still enough people out there who don't know better, manually (or via auto-renew) purchasing a new certificate every year from their hosting provider like it's 2013.
I have dealt with a banking environment where they required SSL with at least 1-year validity on the callback API URL. Which excluded Let's Encrypt.
We were looking for an SSL provider that offered certs valid for more than 1 year AND supported ACME... for some reason we ended up with SSL.com, which did support ACME for longer-lasting certs; however, there were some minor incompatibilities between how Kubernetes cert-manager implemented ACME and how SSL.com implemented it; we ended up debugging SSL.com's ACME protocol implementation.
Fun. We should have just clicked once per 3 years; better than debugging third parties' APIs.
No, I don't remember the details and they are all lost in my old work emails.
(Nowadays I think zerossl.com also supports ACME for >1 year certs? but they did not back then. edit: no they still don't, it's just SSL.com I think)
> I have dealt with a banking environment where they required SSL with at least 1-year validity on the callback API URL
Why are (some) banks always completely clueless about these things? Validating ownership of the domain more often (and with an entirely automated provisioning set-up that has no human weak links) can only be a good thing.
Perhaps the banking sector will finally enter the 21st century in another ten years?
The banking sector usually goes with "checkbox security".
They have these really, really long lists of what needs to be secured and how. Some of it is reasonable, some of it is bonkers, there is way too much of that stuff, and it overall increases the price of any solution 10x at least.
But OTOH I can hardly blame them; failures can be catastrophic there, as they deal with real money directly and can be held liable for failures. So they don't really care about security so much as about covering their asses.
> some of it is bonkers
Some of it is truly bonkers and never was good practise, but much of the irritating stuff is simply out-of-date advice. The banks tend to be very slow to change unless something happens that affects (or directly threatens to affect) the bottom line, or puts them in the news unfavourably.
Of course some of it is bonkers, like HSBC and FirstDirect changing the auth for my personal accounts from “up to 9 case-sensitive alpha-numeric characters” (already considered bad practise for some years) to “6 digits”, and assuring me that this is just as secure as before…
That sounds like "we were truncating your id and pass before anyway".
I don't think so, because it would also imply they were throwing away anything non-numeric, and I really hope nothing that stupid was going on. When the change happened everyone had to establish a new password.
I read it as “we have been asked to integrate an ancient system that we can't update (or more honestly in many cases: can't get the higher-ups to agree to pay to update), so are bringing our other systems down to the lowest common denominator”. That sort of thing happens too often when two organisations (or departments within one) that have different procedures merge or otherwise start sharing resources they didn't previously.
The problem is more likely one of regulation than technical knowledge. Banks hire very smart people who know that a lot of what they do is bullshit, but they're paid to comply with banking and security regulations that lag a long way behind technical advances. Banks are also inherently conservative in their technical choices, and for good reason.
> I have dealt with a banking environment where they required SSL with at least 1-year validity on the callback API URL. Which excluded Let's Encrypt.
I wonder if this would be an opportunity for revenue for Let's Encrypt? "We do 90-day automated-renewal certificates for free for everyone. If you're in an unusual environment where you need certificates with longer validity, we offer paid services you can use."
If they want to do something commercial, they should go for the code signing certificates, that stuff is still a racket.
I'm pretty sure ISRG doesn't want to deal with payments any more than they do now (i.e. outside of donations and sponsorships)
Probably better to keep LE / ISRG completely non-profit. Adding a profit motive has too big of a chance to end with actually security-relevant features being gated behind payment eventually.
It's less about the profit motive, and more about removing the remaining incentives to stay outside the ACME ecosystem. The funding would be to provide additional infrastructure (e.g. revocation servers for longer-lasting certificates), and to fund new such efforts.
But once there is an income stream from issuing certificates there is an incentive to increase it, which will quickly find itself at odds with the primary mission of providing secure connections to as many people as possible. Making infrastructure depend on that income stream only increases that incentive. Perhaps you trust the ISRG to resist the temptation, but as far as I know they are run by humans.
There are many, many opportunities in both the business and non-profit world to make more money by screwing your customers/users, and despite that, it does not always happen. Businesses and non-profits are built on the trust of users (or built in spite of the utter lack of it, e.g. Comcast). I don't think they should be afraid to provide things users need. It is, in fact, possible to choose and keep choosing to maintain the trust of your users.
I think there's still incentive alignment here. Getting people moved from the "purchase 1 year certificate" world (which is apparently still required in some financial contexts) into the ACME-based world provides a path for making a regulatory argument that it'd be easy for such entities to switch over to shorter-lived certificates because the ACME infrastructure is right there.
AFAIK there are things like Extended Validation certificates that used to make the browser address bar look more trustworthy by turning it green, but I don't know if it's still a thing. At least in Safari, I don't see a green padlock anywhere.
I remember our boss really wanted that green bar, so we got an extended validation certificate. What we had failed to realise is that they would only be issued to the actual legal name of your company, not any other names you may be operating under. We had a B2C webshop where we wanted the EV cert, but because the B2C side of the business wasn't its own legal entity, the cert we got issued was for our B2B name, which none of our customers knew, and it looked like a scam.
The only good thing about dealing with certificate resellers at the time was that they were really flexible in a lot of ways. We got our EV cert refunded, or rather "store credit", and used the money to buy normal certificates.
They are still there, but most browsers haven't done anything with them since 2019, when Firefox and Chrome stopped caring.
There are some scenarios where you still have to employ EV certificates, e.g. code signing.
Chrome 77 removed the prominent green EV badge. "A series of academic research in the 2000s studied the EV UI in lab and survey settings, and found that the EV UI was not protecting against phishing attacks as intended. The Chrome Security UX team recently published a study that updated these findings with a large-scale field experiment, as well as a series of survey experiments." [1]
Extended Validation can still play a role in a corporation's IT control framework; the extended validation is essentially a check-of-paperwork that then doesn't need to be performed by your own auditor. Some EV certificates also come with some (probably completely useless) liability insurance.
[1] https://chromium.googlesource.com/chromium/src/%2B/HEAD/docs...
> Some EV certificates also come with some (probably completely useless) liability insurance.
Warranties / insurance on SSL certificates typically only pay out if a certificate is issued improperly, often in conjunction with other conditions like a financial loss directly resulting from the misissuance. Realistically, any screwup serious enough to result in that warranty paying out would also result in the CA being abruptly removed from browser root certificate programs.
Chrome and Firefox removed the extra UI stuff for EV certs in 2019:
https://groups.google.com/a/chromium.org/g/security-dev/c/h1...
Yeah that also stopped being a thing. I'm really happy how Chrome and then other browsers gradually shifted the blame to insecure websites rather than highlighting "secure" ones.
You'll still find people online claiming EV certificates are worth more than $0, but you can ignore them just as well.
Huh? EV certificates actually certify that you're the (legal) person you're claiming to be, based on ID and trade register checks, unlike Let's Encrypt certificates which only certify you're in possession of a domain. Isn't using EV certificates legally required for e-commerce web sites at least in parts of the world, and also obligatory for rolling out as MasterCard/Visa merchant by their anti-fraud requirements along with vulnerability checks and CI/site update processes being in place?
> Isn't using EV certificates legally required for e-commerce web sites at least in parts of the world
Not in any jurisdiction I'm aware of, though it's a big world so it wouldn't shock me if some small corner of it has bad laws.
> and also obligatory for rolling out as MasterCard/Visa merchant by their anti-fraud requirements
PCI DSS does not require EV certificates.
they were also pretty bad for performance due to the extra lookup (and reduction in caching)
What extra lookup? AFAIU they are just like normal certificates but with a "customer paid extra" flag.
They normally require a revocation lookup on the spot, and IIRC there were differences in whether and how OCSP stapling worked for them.
Interesting. Sounds like a cost that is entirely reasonable for use cases like online banking though.
Related point - we interface with Singapore gov services (MYINFO).
They don't recognize LE nor AWS's certs. Only the big paid ones. Such an annoying process too - to pay, to obtain and update the certs.
I guess the good thing there is that it's absolutely transparent that this is just a way to make you pay somebody else. Like the Jones Act (Merchant Marine Act, but everybody just calls it the Jones Act). The US government doesn't get a slice if you want to buy ships to move stuff from one part of the US to another, but it does require that you buy the ships from an American shipyard, and so those yards needn't be internationally competitive because the US government has their back.
Nobody is like "Oh, the Jones Act ensures high quality ships" because it doesn't, the Jones Act just ensures that you're going to use those US shipyards, no matter what.
Myinfo did away with certificate requirement altogether! Yay! (hello from Singapore)
Our company bans the use of letsencrypt because of the legal terms. Nobody at the CxO level will sign off on it, so we end up paying whatever to globalsign.
What legal terms do they find objectionable?
What about ZeroSSL, which is basically interchangeable with Let's Encrypt?
Doesn't necessarily have anything to do with knowing, some environments are just not worth automating or support it so badly that even paying twice or more would still be nothing compared to the annoyance. It's been getting better over the years though.
Unfortunately, the code signing certificates work pretty much the same way
I deal with multiple enterprise applications where the idea of scripting a renewal involves playing with headless Chrome.
I'm really not a fan of it but I'm happier paying for a one year cert than doing that
Sorry if this is a dumb question, but why? If I'm not mistaken, Let's Encrypt supports validation via DNS now so you don't even need to have a working webserver to issue a certificate. Automating a script to perform a renewal should be much simpler than headless Chrome!
If your DNS provider doesn't have an API, that seems like a separate issue but one that is well worth your organization's time if you're working in the enterprise!
I guess it is not about renewal but about certificate deployment.
Obtaining a certificate via DNS doesn't help you install it via a web interface that takes 20+ clicks and a 15-minute reboot to apply.
And open a ticket on a supplier's website, click through four pages with free text input, then send the certificate via email.
Let's not talk about key delivery. We will get back the admin cost of all that within a year if we tunnel them through one of our LBs.
You can set up the _acme-challenge (or whatever it is) as a CNAME to point to a domain which does support an API for automating the renewal.
(looking into setting this up for a bunch of domains at work)
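For reference, the delegation itself is just one DNS record plus an API on the target zone; here is a rough Go sketch for sanity-checking that the CNAME is in place (all the names are made-up placeholders):

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Hypothetical setup: _acme-challenge.shop.example.com is a CNAME into a
	// zone (acme-delegate.example-dns.net) whose DNS provider has an API.
	// LookupCNAME follows the record and returns the canonical name.
	target, err := net.LookupCNAME("_acme-challenge.shop.example.com")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	// The ACME client publishes the dns-01 TXT token at the target name,
	// and the CA follows the CNAME when it validates the challenge.
	fmt.Println("challenge record is delegated to:", target)
}
```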
I had a lazily configured proxy which would request a cert for any domain you threw at it. An attacker figured this out and started peppering it with http requests with randomly generated subdomains prefixed. When I discovered it, my first thought wasn’t, “Oh, I hope I didn’t get flagged by Let’s Encrypt.” It was, “Oh, man. I feel really bad that my laziness caused undue load on Let’s Encrypt.”
Let’s Encrypt is the best thing to happen to the web in at least a decade.
At least there were some services where you could get a single-domain "real" certificate for free before (but it was a complicated and annoying process), but then if you wanted a wildcard certificate to cover a bunch of subdomains for personal projects it became really expensive.
Glad this problem just got completely resolved.
Mozilla is getting a lot of criticism, but just for letsencrypt alone they are making the world a better place.
Before them I never used SSL for anything, because the cost/benefit ratio was just not there for my services.
Since then, I never not use it.
Two Mozilla employees were involved in starting Let's Encrypt, and the Mozilla foundation is one of the sponsors, but as far as I can tell the foundation was not directly involved in creating it.
Google/Chrome and Firefox also deserve credit for making a free and open CA viable.
We consider our ten year anniversary to be in 2025 but I appreciate the kind words here!
Today is roughly the ten year anniversary of when we publicly announced our intention to launch Let's Encrypt, but next year is the ten year anniversary of when Let's Encrypt actually issued its first certificate:
https://letsencrypt.org/2015/09/14/our-first-cert/
In December of 2015 (~9 years ago today) it was made available to everyone, no invitation needed:
It feels like just yesterday I was paying for certs, or worse, just running without.
Can't believe it's been ten years.
Can’t believe there are still anti TLS weirdos.
I am 99% in favour of widespread use of TLS - but the reality is it means the web only works at the whim of the CA/Browser Forum. And some members of the forum are very eager to flex their authority.
If I do everything perfectly, but the CA I used makes some trivial error which, in the case of my certificate, has no real-world security impact? They can send me an e-mail at 6:40 PM telling me they're revoking my certificate at 2:30 PM the next day. Just what you want to find in your inbox when you get in the next day. I hope you weren't into testing, or staged rollouts, or agreeing deployment windows with your users - you'd better YOLO that change into production without any of that.
Even though it wasn't your mistake, and there's no suggestion you shouldn't have the certificate you have.
As far as the CA/B Forum is concerned, safety-critical systems that can't YOLO changes straight into production with minimal testing and only a few hours of notice don't belong on their PKI infrastructure. You'd better jump to it and fix their mistake right now.
I'm probably more critical of TLS in general than you are, but to be fair to LE, one of their biggest contributions has been to change certificate updates from a deployment into something that happens automatically during normal operations. If you have things set up the recommended way, your daily certbot/etc run will simply pick up a new certificate and load it into whatever servers need it without you having to lift a finger. Of course in practice it doesn't always work out that way.
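For the cases where it doesn't, a dumb external check run from cron helps; a minimal Go sketch (the hostname and threshold are arbitrary placeholders):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Look at the leaf certificate the server is actually presenting.
	conn, err := tls.Dial("tcp", "www.example.com:443", nil)
	if err != nil {
		fmt.Println("TLS handshake failed:", err)
		return
	}
	defer conn.Close()

	leaf := conn.ConnectionState().PeerCertificates[0]
	daysLeft := int(time.Until(leaf.NotAfter).Hours() / 24)
	// Well-behaved automation renews 90-day certs with weeks to spare, so
	// getting close to expiry means renewals have been failing for a while.
	if daysLeft < 20 {
		fmt.Printf("WARNING: certificate expires in %d days (%s)\n", daysLeft, leaf.NotAfter)
	} else {
		fmt.Printf("OK: %d days of validity left\n", daysLeft)
	}
}
```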
A daily certbot run won't protect you if the CA discovers the problem at 2pm (starting the 24 hour revocation timer) but they only have a fix rolled out by 6pm.
Anyone whose certbot run was between 2pm and 6pm would get their cert revoked the next day at 2pm anyway - even if it was only issued 18 hours ago.
Hopefully you terminate TLS far away from your app code so rolling that out to prod is a non issue. But I get your point!
TLS is not panacea and it's not universally positive. Here are some arguments against it for balance.
TLS is fairly computationally intensive - sure, not a big deal now because everyone is using superfast devices, but try browsing the internet with a Pentium 4 or something. You won't be able to, because there is no AES instruction set support accelerating the handshake, so it's hilariously slow.
It also encourages memoryholing old websites which aren't maintained - priceless knowledge is often lost because websites go down because no one is maintaining them. On my hard drive, I have a fair amount of stuff which I'm reasonably confident doesn't exist anywhere on the Internet anymore.... if my drives fail, that knowledge will be lost forever.
It is also a very centralised model - if I want to host a website, why do third parties need to issue a certificate for it just so people can connect to it?
It also discourages naive experimentation - sure, if you know how, you can MitM your own connection but for the not very technical but curious user, that's probably an insurmountable roadblock.
> It is also a very centralised model
I can see why the centralisation is suboptimal (or even actively bad if I'm feeling paranoid!), but other schemes (web of trust, etc.) tend to end up far more complicated for the end user (or their UA). So far no one has come up with a practical alternative without some other disadvantage that would block its general adoption.
> if I want to host a website, why do third parties need to issue a certificate for it just so people can connect to it?
Because if we don't trust those few 3rd parties, we end up having to effectively trust every host on the Internet, which means trusting people and trusting all the people is a bad idea.
Some argue that needing a trusted certificate for just a personal page is extreme, but this is one of those cases where the greater good has to win out. For instance: if we train people that self-signed certs are fine to trust in some circumstances, they'll end up clicking OK to trust them in circumstances where they really shouldn't. This can seem a bit nanny-ish, but people are often dumb, or just lazy to the point where it is sometimes indistinguishable from dumb (I'm counting myself here!) so need a bit of nannying. And anyway, if your site doesn't take any input then no browser will (yet) complain about plain HTTP.
> It also discourages naive experimentation
When something could affect security, discouraging naive experimentation on the public network is a good thing IMO. Do those experiments more locally, or at least on hosts you don't expect the public to access.
*It also discourages naive experimentation* - that's kind of the point: if you put up a silly website, no one can easily MitM it while its data is sent across the globe and use a browser 0-day on your "fluffy kittens page".
The biggest problem that Edward Snowden uncovered was that this stuff was happening, and happening en masse, FULLY AUTOMATED - it wasn't some kid in a basement getting a MitM on your WiFi after hours of tinkering.
It was also happening fully automated as shitty ISPs were injecting their ads into your traffic, so your fluffy kittens page was used to serve ads by bad people.
There is no "balance" if you understand bad people are going to swap your "fluffy kittens page" into "hardcore porn" only if they get hands on it. Bad people will include 0-day malware to target anyone and everyone just in case they can earn money on it.
You also have to understand don't have any control through which network your "fluffy kitten page" data will pass through - malicious groups were doing multiple times BGP hijacking.
So saying "well it is just fluffy kitten page my neighbors are checking for the photos I post" seems like there is a lot of explaining on how Internet is working to be done.
> It also discourages naive experimentation that's the point where if you put on silly website no one can easily MitM it when its data is sent across the globe and use 0-day in browser on "fluffy kittens page".
Transport security doesn't make 0-days any less of a concern.
> It was also happening fully automated as shitty ISPs were injecting their ads into your traffic, so your fluffy kittens page was used to serve ads by bad people.
That's a societal/legal problem. Trying to solve those with technological means is generally not a good idea.
> There is no "balance" if you understand bad people are going to swap your "fluffy kittens page" into "hardcore porn" only if they get hands on it. Bad people will include 0-day malware to target anyone and everyone just in case they can earn money on it.
The only people who can realistically MITM your connection are network operators and governments. These can and should be held accountable for their interference. You have no more security that your food wansn't tampered with during transport but somehow you live with that. Similarly security of physical mail is 100% legislative construct.
> You also have to understand don't have any control through which network your "fluffy kitten page" data will pass through - malicious groups were doing multiple times BGP hijacking.
I don't but my ISP does. Solutions for malicious actors interfering with routing are needed irrespective of transport security.
> So saying "well it is just fluffy kitten page my neighbors are checking for the photos I post" seems like there is a lot of explaining on how Internet is working to be done.
Not at all - unless you are also epecting them to have their fluffy kitten postcards checked for Anthrax. In general, it is security people who often need to touch grass because the security model they are working with is entirely divorced from reality.
All I got from your explanation is:
I am going to cross the street in front of that speeding car because driver will be held liable when I get hit and die.
If there is not even a possibility to hijack the traffic, a whole range of things just won't happen. And holding someone liable is not the solution.
The situation is more akin to demanding that pedestrians be prevented from crossing the road at all cost because a malicious driver could ignore all red lights. And of course banning pedestrians isn't enough. After all, motorcycles are also pretty unsafe, so we ban those too. But you see, someone could also be pointing a bazooka at the road, so then we require all cars to have sufficient armor plating in order to be allowed on the road. That is, before realizing that portable nukes exist and you never know who has one. We don't do that. Instead we develop specific solutions (e.g. an over/underpass for high-risk intersections, walls for highways) where they are actually needed, without losing sight of the unreasonable cost (not just monetary) that demanding zero risk would impose.
Technological measures don't make things impossible: they make them harder. And they rarely solve all the consequences of a problem: only the ones that have been explicitly identified.
Counterpoints:
> Transport security doesn't make 0-days any less of a concern.
It does. Each layer of security doesn't eliminate the problem but does make the attack harder.
Mail and food are different in that there are no limitlessly scalable attacks against them that can originate anywhere around the globe.
I find the lack of backwards compatibility also concerning - and that is not something that can be fixed, as deprecation of old SSL/TLS versions and ciphers is intentional.
Beyond that, TLS also adds additional points of failure. For one, it prevents users from accessing websites that are still operational but have an outdated cert or some other configuration issue. And HSTS even requires browsers to deprive users of the agency to override default policies and access the site anyway.
TLS is also a complex protocol with complex implementations that can bring their own security issues, e.g. Heartbleed.
There are also many cases where there are holes in the security. E.g. old HTTP links, even if they redirect to HTTPS, provide an opportunity for interception. Similarly, entering domain names without a scheme requires browsers to either allow downgrade to HTTP or break older sites. The solutions to this (mainly HSTS and HSTS preload) don't scale and bring many new issues (policy lifetimes outlive domain ownership, taking away user agency).
In my ideal world:
a) There would be no separate HTTPS URL scheme for secure connections. Cool URIs don't change, and the transport security doesn't change the resource you are addressing. A separate scheme doesn't prevent downgrade attacks in all cases anyway (old HTTP URLs, entering domains in the address bar, no indication of TLS version and supported ciphers in the scheme).
b) Trust should be provided in a hierarchical manner, just like domains themselves - e.g. via DNSSEC+DANE.
c) This mechanism would also securely inform browsers about what protocols and ciphers the server supports, to allow for backwards compatibility with older clients (where desired) while preventing downgrade attacks on modern clients.
d) Network operators that interfere with the transmitted data are dealt with by legal means (loss of common carrier status at the very least, but ideally the practice should be outright illegal). Unencrypted connections shouldn't allow service providers to get away with scamming you.
The handshake doesn't primarily depend on AES; it is typically a Diffie-Hellman variant (which doesn't have any acceleration) that takes time. Anyway, you're hopefully using TLS 1.3 by now, where you can use ChaCha20 instead of AES :-)
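If you want to see what actually gets negotiated against a given server, a small Go sketch like this prints the protocol version and cipher suite (whether you end up on ChaCha20 or AES-GCM depends on both sides' preferences and hardware):

```go
package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	conn, err := tls.Dial("tcp", "example.com:443", nil)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	state := conn.ConnectionState()
	if state.Version == tls.VersionTLS13 {
		fmt.Println("negotiated TLS 1.3")
	}
	// e.g. TLS_AES_128_GCM_SHA256 or TLS_CHACHA20_POLY1305_SHA256
	fmt.Println("cipher suite:", tls.CipherSuiteName(state.CipherSuite))
}
```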
> if I want to host a website …
The fundamental problem is a question of trust. There are three ways:
* Well known validation authority (the public TLS model)
* TOFU (the default SSH model)
* Pre-distribute your public keys (the self-signed certificate model)
Are there any alternatives?
If your requirement is that you don’t want to trust a third party, then don’t. You can use self-signed certificates and become your own root of trust. But I think expecting the average user to manually curate their roots of trust is a clearly terrible security UX.
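For what it's worth, "becoming your own root of trust" is only a few lines on the client side; a hedged Go sketch, assuming my-root.pem is a self-signed certificate you distributed out of band and intranet.example is your own host:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"os"
)

func main() {
	// my-root.pem is a hypothetical, pre-distributed self-signed certificate:
	// the "pre-distribute your public keys" model from the list above.
	pemBytes, err := os.ReadFile("my-root.pem")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(pemBytes) {
		panic("could not parse my-root.pem")
	}

	// Only certificates chaining to that one root are accepted;
	// the system trust store is not consulted at all.
	conn, err := tls.Dial("tcp", "intranet.example:443", &tls.Config{RootCAs: pool})
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	fmt.Println("connected, peer:", conn.ConnectionState().PeerCertificates[0].Subject)
}
```

The hard part, as noted, isn't the code - it's asking ordinary users to curate that root of trust themselves.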
> Are there any alternatives?
The obvious alternative would be a model where domain validated certificates are issued by the registrar and the registrar only. Certificates should reflect domain ownership as that is the way they are used (mostly).
There is a risk that Let's Encrypt and other "good enough" solutions takes us further from that. There are also many actors with economic interest in the established model, both in the PKI business and consultants where law enforcement are important customers.
How would you validate whether a certificate was signed by a registrar or not?
If the answer is to walk down the DNS tree, then you have basically arrived at DNSSEC/DANE. However I don’t know enough about it to say why it is not more widely used.
How do you validate any certificate? You'd have to trust the registrar, presumably like you trust any one CA today. The web browsers do a decent job keeping up to date with this and new top domains aren't added on a daily basis anyway.
Utilizing DNS, whois, or a purpose-built protocol directly would alleviate the problem altogether, but should probably be done by way of an updated TLS specification.
Any realistic migration should probably exist alongside the public CA model for a very long time.
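The DNS half of DANE is at least simple to query; a rough Go sketch using the third-party github.com/miekg/dns package (resolver address and hostname are placeholders, and a plain stub query like this only means something if the answer is DNSSEC-validated):

```go
package main

import (
	"fmt"

	"github.com/miekg/dns"
)

func main() {
	m := new(dns.Msg)
	// DANE associations for HTTPS live at _443._tcp.<host> as TLSA records.
	m.SetQuestion("_443._tcp.www.example.com.", dns.TypeTLSA)
	m.SetEdns0(4096, true) // ask the resolver to include DNSSEC records

	c := new(dns.Client)
	r, _, err := c.Exchange(m, "9.9.9.9:53")
	if err != nil {
		panic(err)
	}
	for _, rr := range r.Answer {
		if tlsa, ok := rr.(*dns.TLSA); ok {
			// usage / selector / matching type, plus the certificate association data
			fmt.Println(tlsa.Usage, tlsa.Selector, tlsa.MatchingType, tlsa.Certificate)
		}
	}
}
```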
There is web of trust, where you trust people that are trusted by your friends.
There are issues with it, but it is an alternative model, and I could see it being made to work.
Ah, I forgot about that and never really considered it because GPG is so annoying to use, but it is fairly reasonable.
I don’t see how it has too many advantages (for the internet) over creating your own CA. If you have a mutually trusted group of people, then they can all share the private key and sign whatever they trust.
I think the main problem is that it doesn’t scale. If party A and party B who have never communicated before want to communicate securely (let’s say from completely different countries), there’s no way they would be able to without a bridge. With central TLS, despite the downsides, that is seamless.
Providing initial trust via hyperlinks could be interesting.
Regarding the stuff you safeguard: what are your reasons for not sharing it somehow to prevent that loss when (not if) your drive fails?
I mean, I do! The music I have I put on Soulseek, although the more obscure stuff hasn't been downloaded yet. I also have fairly old video game mods - I don't even know where to share them or if anyone would be interested at all.
You could try to upload them to modding sites (preferably not ones with a login requirement for downloading) if you don't want to host them yourself. That can be either general modding archives or game-specific community sites - the latter are smaller but more likely to be interested in older mods. Make sure that whatever host you use can be crawled by the Internet Archive.
Interest is probably going to be low but not zero - I often play games long after they have been released and sometimes intentionally using older versions that are no longer supported by current mods.
You are entirely right - although I'd have to be careful with uploading it and where because on Steam Workshop, there's assholes who threaten to DMCA you without basis and there are similar problems on other sites too. But I'll look around :)
The Internet Archive?
My letsencrypt cert, despite all my attempts, works fine with browsers but WILL NOT work with wget/curl/python/whatever.
Plus setting up letsencrypt isn't really that easy. Last time it was failing because I had disabled HTTP on port 80 entirely on my server… but letsencrypt uses that to verify that my website has the magic file. So I had to make a script to turn it on for 5 minutes around the time when the certificate gets renewed. -_-'
None of this is easy or quick, and people have other stuff to do than to worry about completely hypothetical attacks on their blog.
The digital equivalent of a local kebab shop menu does not need encryption.
The lack of understanding from us as technologists for people who had a working site and are now forced into either an oligopoly of site-hosting companies, or watching their site break repeatedly as TLS standards rotate, is one thing that brings me shame about our community.
You can come up with all kinds of reasons to gatekeep website hosting: “they have to update anyway” even when updating means reinstallation of an OS, “it’s not that hard to rotate” say people with deep knowledge of computers, “just get someone else to do it” say people who have a financial interest in it being that way.
Framing people with legitimate issues as weirdos is not as charming as you think it is.
TLS doesn't just hide the information transmitted, but also ensures the integrity. Thus nobody on the network tinkered with the prices on the menu.
Also the Kebap Shop probably has a form for reservation or ordering, which takes personal information.
True, they are all low risk things, but getting TLS is trivial (since many Webservers etc can do letsencrypt rotation fully automatically) and secure defaults are a good thing.
There are plenty of websites that were just static pages used for conveying information. Most people who set them up lacked the ability to turn them into forms that connected to anything.
They’ve nearly all been lost to time now though; if a shop has a web presence it will be through a provider such as “bokabord”, DoorDash, or Uber Eats (as mentioned), some of whom charge up to 30% of anything booked/ordered via the web.
But, I guess no MITM can manipulate prices… except, by charging…
> There are plenty of websites that were just static pages used for conveying information.
If you care about the integrity of the conveyed information you need TLS. If you don't, you wouldn't have published a website in the first place.
A while back I saw a WordPress site for a podcast without HTTPS where people also argued it doesn't need it. They had banking information for donations on that site.
Sometimes I wish every party involved in transporting packets on the internet would just mangle all unencrypted http that they see, if only to make a point...
What ensures the integrity of conveyed information for physical mail? For flyers? For telephone conversations?
The cryptography community would have you believe that the only solution to getting scammed is encryption. It isn't.
> What ensures the integrity of conveyed information for physical mail? For flyers? For telephone conversations?
Nothing, really. But for physical mail the attacks against it don't scale nearly as well: you would need to insert yourself physically into the transportation chain and do physical work to mess with the content. Messing with mail is also taken much more seriously as an offense in many places, while laws are not as strict for network traffic generally.
For telephone conversations, at least until somewhat recently, the fact that synthesizing convincing speech in real time was not really feasible (especially not if you tried to imitate someones speech) ensured some integrity of the conversation. That has changed, though.
There is a specific class of websites that will always support non-TLS connections, like http://home.mcom.com/ and http://textfiles.com/ .
Like, "telnet textfiles.com 80" then "GET / HTTP/1.0", <enter>, "Location: textfile.com" <enter><enter> and you have the page.
What would be the point of making these unencrypted sites disappear?
textfiles.com says: "TEXTFILES.COM has been online for nearly 25 years with no ads or clickthroughs."
I'd argue that that is most likely an objectively false statement, and that the domain owner is in no position to authoritatively answer the question of whether it has ever served ads in that time. As it is served without TLS, any party involved in the transportation of the data can mess with its content and e.g. insert ads. There are a number of reports of ISPs having done exactly that in the past, and some might still do it today. Therefore it is very likely that textfiles.com as shown in someone's browser has indeed had ads at some point in time, even if the one controlling the domain didn't insert them.
Textfiles also contains donation links for PayPal and Venmo. That is an attractive target to replace with something else.
And that is precisely the point: without TLS you do not have any authority over what anyone sees when visiting your website. If you don't care about that then fine; my comment about mangling all http traffic was a bit of hyperbole. But don't be surprised when it happens anyway and donations meant for you go to someone else instead.
There is a big difference between "served ads" and "ads inserted downstream."
If you browse through your smart TV, and the smart TV overlays an ad over the browser window, or to the side, is that the same as saying the original server is serving those ads? I hope you agree it is not.
If you use a web browser from a phone vendor who has a special Chromium build which inserts ads client-side in the browser, do you say that the server is serving those ads? Do you know that absolutely no browser vendors, including for low-cost phones, do this?
If your ISP requires you configure your browser to use their proxy service, and that proxy service can insert ads, do you say that the server is serving those ads? Are you absolutely sure no ISPs have this requirement?
If you use a service where you can email it a URL and it emails you the PDF of the web site, with some advertising at the bottom of each page, do you say the original server is really the one serving those ads?
If you read my web site though archive.org, and archive.org has its "please donate to us" ad, do you really say that my site is serving those ads?
Is there any web site which you can guarantee it's impossible for any possible user, no matter the hardware or connection, to see ads which did not come from the original server as long as the server has TLS? I find that impossible to believe.
I therefore conclude that your interpretation is meaningless.
> "as shown in someones browser"
Which is different than being served by the server, as I believe I have sufficiently demonstrated.
> But don't be surprised when it happens anyway
Jason Scott, who runs that site, will not be surprised.
Huh. Never thought about it that way; replacing hypothetical MITM attacks with genuine middlemen.
The Kebab Shop also takes orders over the phone, which is not any more encrypted.
And prices are more likely to be simply outdated than modified by a malicious entity. Your concerns are not based in reality.
You can get integrity at higher levels in the stack (or lower).
You say that until some foreign national gets their kebab order MITMed to deliver them some malicious virus that ends up getting them killed.
Which is of course a real concern for the average joe.
Their site will break consistently in any case. Running a site in 2024 comes with a responsibility to update regularly for a good reason.
There are more than enough forgotten kebab shop restaurant pages that are now serving malware because they never updated WordPress that an out of date certificate warning is a very good "heads up, this site hasn't been maintained in 6 years"
If we're talking hosting even a static HTML file without using a site hosting company, that already requires so much technical knowledge (Domain purchasing, DNS, purchasing a static IP from your ISP, server software which again requires vuln updates) that said person will be able to update a TLS cert without any issue.
> There are more than enough forgotten kebab shop restaurant pages that are now serving malware
[citation needed]
There are plenty of organizations that actively scan the web for "malware" (aka anything that the almighty machine learning algorithms don't like) and are more than happy to harass the website owner and hosting company until their demands are met.
Security is ultimately a social issue. Technical means are only one way to improve it and can never solve it 100%. You must never lose sight of the cost imposed by technological security solutions versus what improvement they actually offer.
I'm really curious as to what you see as the disadvantages of TLS. Sure, the advantages are minor for some services and critical for other services.
However, if you already have bought a domain name, the cost of setting up TLS is basically 0. You just run certbot and give it the domains you want certificates for. It will set up auto-renew and even edit your Apache/NGINX configs to enable TLS.
Sure, TLS standards rotate. But that just means you have to update Apache/NGINX every like 5 years. Hardly a barrier for most people imo.
It's better than it was, but TLS has a lot more knobs that can fail than even a basic HTTP server does; there's a whole lot of handoff happening, and running multiple sites is fraught with minor issues.
certbot is a Python program, better hope it keeps working - it definitely hasn't kept working for me, and I'm a seasoned sysadmin: a combination of my Python environment becoming outdated (making updates impossible) and the deprecation of a critical API needed for it to work.
The #1 cause of issues with a hobby website, darkscience.net, is that it refuses to negotiate on Chrome because the TLS suites are considered too old, yet in 2020 I was scoring A+ on the Qualys SSL report.
It's just time, time and effort, and it's mostly wasted.
The letsencrypt tools are really wonderful, just pray they don’t break, and be ready to reinstall everything from scratch at some point.
> certbot is a Python program, better hope it keeps working - it definitely hasn't kept working for me, and I'm a seasoned sysadmin: a combination of my Python environment becoming outdated (making updates impossible) and the deprecation of a critical API needed for it to work.
You could try out acme.sh, which is written purely in shell. It's extremely capable and supports the DNS challenge and multiple providers.
> certbot is a Python program, better hope it keeps working
There is also https://github.com/srvrco/getssl which is a bash script. I lightly audited it years ago and it did not seem to upload your private keys anywhere... I've used it occasionally, but I don't let it run as root, so I need to copy the retrieved certs into the server config manually.
There's a bunch of alternative clients and I've tried many.
The larger point is that it's required for what amounts to a poster on a wall: yes, someone can come along with a pen and alter the poster - but it's not worth the effort to secure for most people, and it will degrade rapidly with such security too.
So, instead they turn to middlemen, or don’t bother.
There's a myriad of other issues, but it's not as easy as we claim.
> the cost of setting up TLS is basically 0. You just run certbot
certbot is not even close to the pinnacle of easy TLS setup. Using an HTTP server that fully integrates ACME and tls-alpn-01 is much nicer: tell your server what domain you use, and it automatically obtains a certificate.
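In Go, for example, the x/crypto/acme/autocert package gets you roughly this (a minimal sketch; the domain and cache directory are placeholders, and the manager answers the tls-alpn-01 challenge for you):

```go
package main

import (
	"fmt"
	"log"
	"net/http"

	"golang.org/x/crypto/acme/autocert"
)

func main() {
	m := &autocert.Manager{
		Prompt:     autocert.AcceptTOS,
		HostPolicy: autocert.HostWhitelist("www.example.com"),
		Cache:      autocert.DirCache("/var/lib/acme-cache"),
	}

	srv := &http.Server{
		Addr:      ":443",
		TLSConfig: m.TLSConfig(), // certificates are obtained and renewed on demand
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			fmt.Fprintln(w, "hello over automatically provisioned TLS")
		}),
	}
	log.Fatal(srv.ListenAndServeTLS("", "")) // cert and key come from the manager
}
```

Caddy and Apache's mod_md give you the same thing as configuration rather than code.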
I'm always reminded about this by being on the other side of the equation with my car.
There is regulation, like mandatory yearly inspections, and anyone is only allowed to sell roadworthy vehicles. These rules are rather strict, likewise for the driver's license. They aren't impossible to know or understand, but there's a lot of details.
However, when I take it to the shop, whether for that yearly inspection, regular maintenance, or because there's something apparently wrong with it, I never know what to expect in terms of time and money.
Oh, it needs a new thingamajig? I start to mildly sweat, fearing it to cost six hundred like the flux capacitor that had to be replaced last week/month/year and took two weeks to get shipped from another country. "Ninety cents, and we put it in place for no charge, it literally takes ten seconds", like, I love to hear the news, could have saved me from the anguish by giving a hint when I asked about the price! But need a new key? Starting from three hundred fifty, plus one hundred seventy for a backup copy. Like, where do these prices come from? Actually, don't tell me, I'm a software engineer. I know, I know.
I'll just wait until you want your car shop web pages up. Oh, for that you'll need PCI DSS and we can't do that other things because of GDPR. Sorry, my hands are tied here. That'll be four thousand plus tax, mister auto mechanic shop owner.
I don't think that's a good analogy, you're comparing a mass produced product to an individualized B2B service that's going to generate profits for your customer.
It's not an analogy. It's asymmetric warfare.
Irrelevant.
Safe transfer should be the default.
Your argument is akin to "I don't have anything to hide."
You just do it and don't think about it. Modern servers and services make this completely transparent.
The kebab guy doesn't need to worry about this as long as they're not fooled into buying from mala fide hosting companies who try to upsell you on something that should be the baseline.
Nah ah. Not.
While we might be able to find common ground in the statement that "safe transfer should be the default", we will differ on the definition of "safe".
Unfortunately these discussions often end up in techno-babble. Especially here on HN, where we tend to enjoy rather binary viewpoints without too many shades of gray.
Try being your own devils advocate: "What if I have something to hide?".
Then deal with that. Legitimately. Reasonably. Unless you are an anarchist, I assume that we can agree that we need authorities. A legal framework. Policing.
So I 100% support Let's Encrypt and what they have done to destroy the certificate racket. That is a force of good!
But I do not think it was a healthy thing that the browsers (and Google search results) de facto "forced" the world to TLS only.
Why? Look at the list of Trusted Root Certificates in the big OSes and browsers. You are telling me only good guys are listed? None of them are or can be influenced by state actors?
But that is the good kind of MITM? This then hinges on your definition of "safe transport". Only the anarchist can win against the government. I am not one.
It might sound like I am in the "I do not have anything to hide" camp. I am not that naive. But I am firmly in the "I prefer more scrutiny when I have something to hide" camp. Because the measures the authorities need to employ today are too draconian for my liking.
I preferred the risk of MITM at the ISP level to what the authorities need to do now to stay in control. We have not eliminated MITM. Just made it harder. And we forgot to discuss legitimate reasons for MITM because "bad".
This is not a "technical" discussion on the fine details of TLS or not. But should be a discussion about the societal changes this causes. We need locks to keep the creeps out but still wants the police to gain access. The current system does not enable that in a healthy way but rather erodes trust.
Us binary people can define clear simple technical solutions. But the rest of the world is quite messy. And us bit twiddlers tend to shy away from that and then ignore the push-back to our actions.
We cannot have a sober conversation unless we depart from the "encrypt everything" is technically good and then that is set in stone. But here we are: Writing off arguments as irrelevant.
Or worse: people who still go on and on about how self-signed certificates should be accepted by browsers, and can't be convinced that blind trust on first use is lousy security.
They usually counter with “but SSH uses TOFU” because they don't see, and can't be convinced of, the problem of not verifying the server key signature⁰. I can be fairly sure that I'm talking to the daemon that I've just setup myself without explicitly checking the signature¹, but that particular side-channel assurance doesn't apply to, for example, a client connecting to our SFTP endpoint for the first time² to send us sensitive data.
--
[0] Basically, they get away with doing SSH wrong, and want to get away with doing HTTPS wrong the same way.
[1] Though I still should, really, and actually do in DayJob.
[2] Surprisingly few banks' tech teams bother to verify SSH server signatures on first connection, I know because the ones in our documentation were wrong for a time and no one queried the matter before I noticed it when reviewing that documentation while adding further details. I doubt they'd even notice the signature changing unexpectedly even though that could mean something very serious is going on.
In general, if you need to resort to ad hominems like calling your detractors weirdos, then maybe your position isn't as justified as you want to believe.
Can't believe the HTTPS everywhere cargo cult still can't get it through their skulls there is still a place and use cases for plaintext HTTP. In some cases, CRLs for example, they shall not be served over HTTPS.
I'm kinda mixed on LE.
It's nice that you can now get free TLS certs without having to resort to shady outfits like StartSSL. This allows any website to easily move to HTTPS, which has basically eliminated sensitive data (including logins) being sent over unencrypted connections.
On the other hand, this reinforces the inherently broken trust model of TLS certificates, where any certificate authority (and a lot of them are controlled by outright hostile entities) has the ability to issue certificates for your domain without your involvement. Yes, there are tons of kludges to try and mitigate this design flaw (CAA records, certificate transparency) but they don't 100% solve the issue. If not for LE perhaps there would have been more motivation to implement support for a saner trust mechanism by now, one that limits certificate issuance to the entities that actually have the authority to decide over domain ownership, like DNSSEC+DANE.
I'm also concerned with the (intentional) lack of backwards compatibility with moving sites to TLS, which is not just a one time TLS on/off issue but a continual deprecation of protocols and ciphers. This is warranted for things that need to be secure like banking or email but shouldn't really be needed to view a recipe or other similar static and non-critical information. Concerns about network operators inserting ads or other shit are better solved with regulation.
> If not for LE perhaps there would have been more motivation to implement support for a saner trust mechanism by now
I would argue that LE has only highlighted these problems, and now actually causes people with power to worry about them.
There is a chance we would have gotten something better than TLS if the lack of LE kept certificates a pain. But that seems unlikely to me. Because the fundamental problem remains hard.
Coincidentally I just got an email from a potential client, Dutch governmental institution, that they don’t want me to use Letsencrypt. They prefer paying for a certificate themselves. Not sure why, apparently they don’t trust it.
What I'm most thankful is the ACME protocol.
Does anyone remember how we renewed certificates before LE? Yeah, private keys were being sent via email as zip attachments. That was a security charade. And as far as I know, it was a norm among CAs (I remember working with several).
Thank you Let's Encrypt.
Just handholding a renewal with globalsign
I generate the new key on the server as part of the csr creation process. I run it on the server itself so the key never leaves the server's internal storage.
CSR gets sent off to globalsign (via a third party because #largeCompany), then a couple of days later I get the certificate back and apply to the server
Would love to use ACME instead, and store the key in memory (ramdrive etc), but these are the downsides of working for a company less agile than an oil-tanker
What of Certificate Signing Requests? The whole purpose was that you wouldn’t send private keys around.
(I was only slightly involved with a couple of TLS certificates before then, and certainly they enforced the CSR approach, but maybe such terrible practice was more common in the real world that I knew.)
My memory of the whole process is kinda fuzzy, you're probably right about CSRs. Hopefully the private keys were not sent around via unencrypted email.
But the point still stands: the whole process was a nightmare - no automation, error prone, renewals easy to forget...
The large companies could have had a staff to manage all that. I was just a solo developer managing my own projects, and it was a hassle.
I still have to go through that bs with some of my setups. Load balancers in cloud environments don't tend to integrate easily with external ACME providers like letsencrypt and the internal ones require moving your domain to them which doesn't always work. And not all cloud providers even have this. Most of them seem to treat ACME as an afterthought.
You can sort of hack this together with scripting via things like terraform, cron jobs, or whatever. But it gets ugly, and the failure mode is that your site stops working if for whatever reason the certificates fail to renew (I've had this happen), which, courtesy of really short certificate lifetimes, of course happens often.
So, I paid the wildcard certificate tax a few days ago so I don't have to break my brain over this. A couple of hundred. Makes me feel dirty but it really isn't worth days of my time to dodge this for the cost of effectively < 2 hours of my time in $. Twenty minute job to issue the csr, get the certificate and copy it over to the relevant load balancers.
How many of those CAs are still in everyone's trust stores?
> Yeah, private keys were being sent via email as zip attachments.
Internally, perhaps. And also on a small scale maybe with CA "resellers" who were often shady outfits which were in it for a quick buck and didn't much care about the rules.
But as a formal issuance mechanism I very much doubt it. The public CAs are prohibited from knowing the private key for a certificate they issue. Indeed there's a fun incident some years back where a reseller (who have been squirrelling away such private keys) just sends them all to the issuing CA, apparently thinking this is some sort of trump card - and so the issuing CA just... revokes all those certificates immediately because they're prohibited from knowing these private keys.
The correct thing to do, and indeed the thing ACME is doing, although not the interesting part of the protocol, is to produce a Certificate Signing Request. This data structure goes roughly as follows: Dear Certificate Authority, I am Some Internet Name [and maybe more than one], and here is some other facts you may be entitled to certify about me. You will observe that this document is signed, proving I know a Private Key P. Please issue me a certificate, with my name and other details, showing that you associate those details with this key P which you don't know. Signed, P.
This actually means (with ACME or without) that you can successfully air gap the certificate issuance process, with the machine that knows the private key actually never talking to a Certificate Authority at all and the private key never leaving that machine. That's not how most people do it because they aren't paranoid, but it's been eminently possible for decades.
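To make that concrete, here is a hedged Go sketch of the air-gappable part: the key is generated locally and never leaves, and only the CSR printed at the end would travel to the CA (names are placeholders):

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"os"
)

func main() {
	// The private key is created on this machine and stays here.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	// The CSR carries the names plus a signature proving we hold the key;
	// it contains no private key material.
	der, err := x509.CreateCertificateRequest(rand.Reader, &x509.CertificateRequest{
		Subject:  pkix.Name{CommonName: "www.example.com"},
		DNSNames: []string{"www.example.com"},
	}, key)
	if err != nil {
		panic(err)
	}

	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE REQUEST", Bytes: der})
}
```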
> Indeed there's a fun incident some years back where a reseller (who have been squirrelling away such private keys) just sends them all to the issuing CA, apparently thinking this is some sort of trump card - and so the issuing CA just... revokes all those certificates immediately because they're prohibited from knowing these private keys.
That sounds like a fun story. I'd love to read the post-mortem if it's public.
Looks like it was this:
https://www.theregister.com/2018/03/01/trustico_digicert_sym...
Yup. Trustico. As usual my preference is to avoid caring whether people are malevolent or simply incompetent, by judging on the results of their actions not guessing their unknowable mental state, so hey, maybe Trustico incompetently believed it was a good idea to know private keys (it is not) and incompetently acted in a way they thought was in their customers' best interests (it was not) and so they're in the doghouse for that reason.
[Edited: I originally said Trustico was out of business, but astoundingly the company is still trading. I have no Earthly idea why you would pay incompetent people to do something that's actually zero cost at point of use, but er... OK]
According to that article Trustico wanted the certs revoked and intentionally send the keys to DigiCert in order to get them to act. While they still shouldn't have had those keys in the first place it sounds like the "trump card" worked here.
I really wish something like this would come up for the desktop certification world as well. Microsoft just went full insane mode with their current requirements, and their certificate plugs are making more money than ever without lifting a finger.
So funny that all of their security, vetting and endless verifications are standing on a single passport photo sent over an email to this day.
Let's Encrypt is a massive achievement, and is now essential infrastructure.
Basing it on an open protocol, so it doesn't become a single point of failure, was a clever idea that allows the idea to survive the demise of any single organization.
May there be many more such anniversaries.
Peter Eckersley (1978-2022) was posthumously inducted into the Internet Hall of Fame for his founding work on Let’s Encrypt. The Internet is a better place because of Peter (and his many collaborators and colleagues).
https://www.internethalloffame.org/inductees/
None of them I have ever heard of. Whatever that may mean.
Edit: On the whole list https://www.internethalloffame.org/inductees/all/ I spotted maybe seven names. Still a single digit percentage.
Vint Cerf & Bob Kahn (TCP/IP), Paul Baran (packet switching), Tim Berners-Lee (WWW), Marc Andreessen (Netscape), Brewster Kahle (Internet Archive), Douglas Engelbart (hypertext), Aaron Swartz (RSS, Creative Commons), Richard Stallman (GNU, free software movement), Van Jacobson (TCP/IP congestion control), Jimmy Wales (Wikipedia), Mitchell Baker (Mozilla), Linus Torvalds (Linux)...
...but you’re missing the point of my comment, which is simply to acknowledge and honor (my late dear friend) Peter.
Ah, I missed Linus Torvalds and you might have missed Bob Metcalfe (Ethernet) and Jon Postel (RFC work).
My point was not to criticize the achievements or the work of any of those people.
1. I was not actively aware that this hall exists
2. I am mostly critical of such awards in general. I have noted that several companies receiving the "Export company of the year" award here in this country (doesn't matter which one) have gone bust a couple of years later. I received the "hacker of the year" award at my workplace some years ago. It was supposed to hang with all previous awards in the cafeteria. I did not like that and "forgot" it at home. I quit the company a year later anyway.
Edit: Forgot that I worked for the "software product of the year" twice in my life. One needed heavy, painful architectural rework 3 years later. The other was Series 60. People old enough know how that went, killed a global market leader.
Let's encrypt saved me :) I love to use it with certbot in docker-compose :) deploying really can be simple
Config management took me many years to adopt, and containers took me about 6 years to warm up to. But LE was something I jumped on immediately. I had worked in web hosting for 10 years already when it came out, so I remember faxing your driver's license in order to validate a TLS cert. It just felt like such a scam for so long that these CAs were overcharging for something that is just a key signing.
But I guess automation and standards had to catch up in order for LE to securely setup their CA.
A lot of people are not aware that HTTPS certificates do not necessarily guard you from certain types of attacks like DNS injection. You can see <https://www.youtube.com/watch?v=exy5JwAU8qk> for one example where an attack campaign called DNSPionage obtained valid certificates for their attacks.
To explain the issue with HTTPS certificates simply: issuance is automated and rests on the security of DNS, which is only assured via DNSSEC, and most do not implement it.
Technically it's an attack against the certificate issuing authority, bypassing their authorisation checks (is this person really authorised to issue a certificate for the domain).
Trouble is even CAA entries won't help here (if you're spoofing A records, you can spoof CAA records too). DNSSEC might help against this, I don't know enough about DNS though.
Another type of attack is an IP hijack, which allows you to pass things like HTTP authentication (the normal ACME method), but won't bypass CAA records. You can't use Let's Encrypt to issue a cert - even if you own the IP address my A or AAAA records point to - if my CAA doesn't have letsencrypt as an approved issuer.
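Checking what a domain's CAA policy actually allows is a single lookup; a rough Go sketch using the third-party github.com/miekg/dns package (resolver and domain are placeholders):

```go
package main

import (
	"fmt"

	"github.com/miekg/dns"
)

func main() {
	m := new(dns.Msg)
	m.SetQuestion("example.com.", dns.TypeCAA)

	c := new(dns.Client)
	r, _, err := c.Exchange(m, "9.9.9.9:53")
	if err != nil {
		panic(err)
	}
	for _, rr := range r.Answer {
		if caa, ok := rr.(*dns.CAA); ok {
			// e.g. tag "issue" with value "letsencrypt.org":
			// only the listed CAs may issue for this domain.
			fmt.Println(caa.Flag, caa.Tag, caa.Value)
		}
	}
}
```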
With DNSSEC you can be certain that the response you got was issued by the nameserver that is claimed (well, by someone who owns the private key). The domain owner, and registrar can both be at fault, the CA is the last entity to blame because they are performing an automated check of domain ownership. For maximum security you'd want to buy your own TLD as my YT video talks about, to circumvent any other registries, registry wholesalers, and registrars' security models, but an adequate protection for most is to use registry/registrar lock and implement DNSSEC correctly. IP hijack will then not work when all of the above is done correctly.
Another option is manual certificate issuance with a CA whose security model is better than yours, but not implementing DNSSEC leaves you open to other attacks.
Here’s to 10 more years! With web servers like Caddy, software like certbot and even something like Apache2 getting mod_md, I’d say we’re in a pretty good spot!
That said, I’m wondering why there aren’t 10 or so popular alternatives to LE, since that seems to be the landscape for domain registrars, for example.
People talk about paying for certificates but one major pain point solved by PaaS companies over the last 5 years is automatically adding certificates and renewing them for your app deployments. It saves a huge amount of headache.
In 2024, if your PaaS does not have automated encryption for deploys, I will never use it.
I really wish they would finally branch out and offer S/MIME certificates. Good email clients support them out of the box, it's just a PITA to get them if you don't want to order 100 at a time or something equally ridiculous for SME/individuals.
Would frequent rotation be reasonable for S/MIME certs though?
thank you Edward Snowden
I wanted to post that exact comment.
Time flies when you're having fun. Congratulations
Reminder that they are donation dependent.
Nothing makes me trust a site with my payment info more than seeing a LE or domain-validated certificate with no ownership details in the DN.
HTTPS does not validate the trustworthiness of a site. Never has and never will. It only validates that the site has not been tampered with during transfer. Phishing sites can also have HTTPS, that doesn't make them trustworthy.
Google.com (and my bank) use a DN certificate, if it's good enough for them it's good enough for anyone.
The rate of misissuance of EV and OV is much higher than DV.
Source? I'm not questioning it, I'd like to know more. DV always seemed vulnerable to DNS tampering.
And EV is vulnerable to a fancy looking fax (remember them?)
Do you really check your site has an EV every single time? Especially now browsers treat them the same?
If not, how do you know someone hasn't got a DV certificate for this specific visit?
Scott Helme has a thorough takedown of them, and that was 7 years ago when they were still a thing.
https://scotthelme.co.uk/are-ev-certificates-worth-the-paper...