I like how the author didn’t try to be overly clever with it. They could’ve used Docker to install and isolate dependencies, Caddy or Traefik to serve the site and manage SSL automatically, maybe even k3s to run a mini cluster for HA, Tailscale to avoid manual port forwarding, Terraform for future auto-provisioning without hassle, or Ansible to prepare the machine and all that jazz.
Instead, they took the simplest possible route that works: installing the web server on bare metal, serving plain HTML and CSS, and using direct port forwarding via the router. This was a delightful read.
I really appreciated the blog post because it taught me something I’ve always wanted to learn but avoided because PaaS made me complacent.
Three years ago, I tried to publish a website on a VPS in the morning to present it to a client at lunch. This was before ChatGPT. I spent hours trying but got nowhere. At the meeting, having failed to publish it, I had to run it locally and share my screen. I was frustrated.
Later, I discovered Heroku, which solved all my problems. Eventually, I migrated to VPS + CapRover, and now I use VPS + Coolify.
I’m happy with my setup—it works—but I’ve always felt guilty for not knowing how to publish a simple website without Docker.
It took me much longer than I'd like to admit to publish my first bare-metal website on a VM. The main issue was SSL—I could never configure Nginx for multiple sites on the same server with TLS. Setting up DNS and pointing your domain to your server also requires some fiddling. It’s not intuitive, and there aren’t many people doing this, so resources with full end-to-end examples are limited.
However, with tools like Caddy and Let's Encrypt, the process has become easier. That said, it still involves quite a bit of fiddling if you want to automate everything and make it reproducible without a lot of hassle. Now I do it all the time for fun. Still, if it's a static site, I don't see much point in self-hosting when Cloudflare Pages and GitHub Pages are so much more convenient.
Nowadays, certbot can configure either Nginx or Apache automatically. Make sure you install the appropriate plugin with apt (if you're using apt).
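For instance, on a Debian/Ubuntu box the Nginx variant looks roughly like this (a minimal sketch; example.com is a placeholder):

    # Hedged sketch for a Debian/Ubuntu system; example.com is a placeholder.
    # Install certbot together with its Nginx plugin.
    sudo apt install certbot python3-certbot-nginx

    # Obtain a certificate and let certbot edit the Nginx config to enable TLS.
    sudo certbot --nginx -d example.com -d www.example.com

    # Certbot sets up automatic renewal; this verifies that it works.
    sudo certbot renew --dry-run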
I appreciated this as well. Going in, I was expecting a long-winded technical report detailing npm compatibility issues or bundle-size optimizations, but no, they're simply setting up an Apache web server and shipping HTML/CSS.
If it's for you, there's no need to overcomplicate it. Complexity is just for job security :)
> installing the web server on bare metal,
Running on top of a Linux distro is not what people usually call bare metal.
I've heard it called bare metal as opposed to a Docker container or a VM.
That's the Wikipedia definition, which is the same thing I said. If it's on top of an OS, it's not bare metal.
Yeah, you're correct by the original definition - but in the modern context of Docker/k8s and web servers, "bare metal" is being colloquially used to refer to installing directly on the bare OS, and people understand that.
I'm sure 100% of the people who read that comment understood what it meant.
Maybe it is, these days? As much as I appreciate all the functionality brought to us by these tools, when I started web-dev circa 2005, LAMP (Linux, Apache, MySQL + PHP) was the go-to for hobbyists.
As much as I look back fondly on the simplicity (Apache config was not that difficult for a small site, at least with Apache 2.0), the part of me that operates production software these days gets anxious at the idea of it all.
And yet, when I built a small site to host my wedding website last year, it was indeed Linux, (some webserver), Postgres and PHP, with me copying files manually over FTP. It was probably nginx, but you know what, I paid a company £50 for a large amount of storage, bandwidth, a domain and an SSL certificate for a year, and everything went dandy. Horses for courses and all that.
Yeah. I used it for lack of a better term. But you get the idea.
It is the correct term
Now that's just goofy, I get why they have similar names but it's begging for misunderstandings. https://en.m.wikipedia.org/wiki/Bare-metal_server
Can't remember the name (if it has one), but there's a linguistic phenomenon where words take on opposite meanings from the original. The "Peacemaker" was a missile, "literally" now means "figuratively", "awful" used to mean "awe-inspiring". And apparently "bare metal" now means "runs on an operating system".
OTOH, technical fields develop jargon specifically as a solution to the problems created by semantic drift in vernacular vocabulary.
Sure, languages evolve, but that doesn't mean "anything goes" -- to the contrary, novel mutations have to survive intense selection pressures in order to eventually become part of the standard language.
Where new ways of using existing terms create ambiguity and conflict with existing meanings, their survival chances aren't always great.
Although, there are multiple people in the talk section[0] arguing for the "'physical' machine" definition. Might have to get used to it, along with "crypto" and "algorithm".
The trendy uses of "algorithm" and "crypto" are specific cases of the general meanings of these terms -- there's no contradiction or ambiguity introduced here, so these uses are OK, although people presuming the narrower trendy meanings in broader contexts are wrong.
This use of "bare metal" does contradict the pre-existing meaning, so is not quite appropriate. What is valid is describing the OS itself as running on bare metal in contrast to running within a VM/container -- but an application running on top of that OS is not running on bare metal.
This isn't particularly egregious, though, since there are vanishingly few cases of applications actually running on bare metal today: if you are talking about applications, the context can usually explain the intended meaning. But that wasn't always the case in the past (PC "booter" software used to be common), so this doesn't necessarily apply retrospectively, and it may not be the case in the future, especially considering some of the interesting things companies like Oxide are working on.
You shouldn't back off a correct statement just because some rando on the internet challenges you. Bare metal is absolutely the correct terminology here.
Hehe. Not big on picking fights over minutiae. Also, English isn't my first language, so there's always a chance of making legit mistakes when picking terms. But in my 7 years of working in the industry, I've heard it used in this context by many.
However many layers of abstractions deep you are, please try and get yourself out of there. The air down there ain't good for ya.
I've found that today "bare metal" is used as equivalent to "non-cloud". I think this is pathetic.
I had set one (a Pi 3B+) up to do both a website and a public-service-announcement slideshow display for a local community TV station. I used PHP/CMSimple for the site and wrote a custom slider.
Things I learned:
- Set the Pi to reboot on poweroff.
- Given this was for TV, I used the AV output on the Pi, displaying in NTSC 4:3 (to support customers with older TVs), so I had to be aware of overscan margins.
- Added a startup script to launch Chromium in kiosk mode and open the slider page for the show side. Worked 95% of the time; if not, just power off then on.
- Part of the troubleshooting was just unplug and replug, but SD cards will choke on too many power cycles, so instead of an SD card use a USB-to-SATA cable and a regular spinning-rust HDD. Slower, but VERY reliable; the journaling file system can recover after a power outage.
- Get the right USB-to-SATA cable: USB 3 models seem to be more responsive and the Pi can boot from them; some are too slow and the Pi will fail to boot.
- The slider had its own login page (outside of CMSimple) to remotely manage the slideshow.
- Also changed the SSH port to something uncommon to thwart bad guys (a minimal sketch of that change follows this list).
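For the SSH port change, assuming a Debian-based Raspberry Pi OS, the edit is roughly this (2222 is an arbitrary example port and the login below is a placeholder):

    # Hedged sketch; 2222 is an arbitrary example port.
    # In /etc/ssh/sshd_config, change (or add) the listening port:
    #   Port 2222
    # Then restart the SSH daemon (service name may be "ssh" or "sshd"):
    sudo systemctl restart ssh
    # Remember to allow the new port in any firewall, then reconnect with:
    ssh -p 2222 pi@your-server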
> Step 5: Getting an HTTPS Certificate
If you’re using Apache2 you might also want to look at mod_md, not just certbot: https://httpd.apache.org/docs/2.4/mod/mod_md.html
Also, if you want to minimize the amount of JS, then just drop jQuery and use fetch, it’s reasonably pleasant too: https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API
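A rough sketch of what a jQuery-free request can look like (the URL and element ID here are hypothetical placeholders):

    // Hedged sketch; /api/posts.json and #posts are hypothetical placeholders.
    fetch('/api/posts.json')
      .then((response) => {
        if (!response.ok) throw new Error(`HTTP ${response.status}`);
        return response.json();
      })
      .then((posts) => {
        // Render each post title into an existing list element.
        const list = document.querySelector('#posts');
        list.innerHTML = posts.map((p) => `<li>${p.title}</li>`).join('');
      })
      .catch((err) => console.error('Failed to load posts:', err));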
Lovely article!
Thank you very much I’m glad you liked it!
I’ll look into fetch and mod_md - they seem helpful!
I did something similar in 2016: https://www.e-tinkers.com/2016/11/hosting-wordpress-on-raspb.... Instead of Apache, I use Nginx. Instead of serving the webpage from the SD card, I set up a 512MB hard disk, though booting is still done via the SD card. The site is still running on a Raspberry Pi 3 to this day from my living room.
I checked out your tutorial, it's very cool! Do you think the 512MB hard disk makes your site more robust when there are many requests? My site crashed last night because of the Hacker News hug of death and I may have to upgrade.
Most of my website is static content (a blog with HTML/CSS), and caching plus Nginx (instead of Apache) helps serve most of the content from cache instead of hitting the hard disk. The hard disk is there to avoid wearing out the SD card; I set it up so that booting is still done from the SD card and only the storage is on the hard disk.
Very cool, very accessibly written. Thanks for sharing.
Curious how you handle your router’s public IP changing as (I believe?) this is pretty common depending on the ISP, no?
I am glad you like it!
Theoretically, routers have public IPs that can change often, and you have to pay for something (like a static IP) to make sure they don't.
In practice, this has only ever happened to me once, when my router got hacked and I had to get the firmware replaced. So while it's a major issue if you have a huge site that needs to be reliable, if it is just a personal site it can be okay to assume it will never change, and if it does you can always just point your domain name at the new IP.
It may be worth looking into something to make it more stable, but right now it just isn't worth the extra fee.
You can use Dynamic DNS[0]. Your server contacts a remote server, and if your IP address has changed, the DNS record is updated automatically. You can use a free subdomain[1] and CNAME/ANAME/ALIAS[2] your domain's DNS to point to it (a minimal cron sketch follows the links below).
Edit: Njalla supports dynamic DNS records natively[3].
[0] https://en.wikipedia.org/wiki/Dynamic_DNS
[1] https://freedns.afraid.org/
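A minimal sketch of the client side, assuming your provider exposes a simple HTTP update endpoint (the URL, hostname, and token below are placeholders):

    # Hedged sketch; the update URL and token are placeholders for whatever
    # your dynamic DNS provider gives you. Added to the Pi's crontab, this
    # re-announces the current public IP every five minutes.
    */5 * * * * curl -fsS "https://ddns.example.net/update?hostname=mysite.example.com&token=SECRET" >/dev/null 2>&1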
Great article—thanks for sharing! I did something similar nine years ago with a Raspberry Pi 3 Model B. It stayed online for nearly a decade until it went offline at the end of 2024, following Google's decision to sunset my OnHub router.
Initially, I used it as a CUPS server to turn a non-networked Brother printer (HL-2240) into a wireless printer after Google discontinued Google Cloud Print.
The printing project was a lot of fun and inspired me to dive into another challenge: self-hosting. Along the way, I learned a great deal about Apache, SSL certificates, security, and just how fragile SD cards can be!
The print server still works well, though I’ve since downsized to a Raspberry Pi Zero W.
Surprised he does not have a step for assigning a permanent address on the local network. He mentions it but skips it like it's not needed. A lot of routers, like those from AT&T, do not allow port forwarding to addresses that have finite lease times.
I didn't know that was a thing you could do! I appreciate the comment; I have added creating a permanent address for the local network to the post and given you credit. I credited your Hacker News username; please let me know if you want your actual name listed.
Also I can tell this post was a good post because people are using he/him pronouns for me in the comments lol. What a world we live in :)
This is a wonderful project and an even more wonderful write-up; I wish I came across more in this tone! I'll just add, to anyone following in these footsteps: be careful what type of website you host on your Pi. Self-hosting is great, but if you expose services on your home network to the internet, expect people to try to hack you. What type of site you host will make this either very hard or very easy. The difference between compromising a VPS and compromising your home web server is that now they have access to your actual LAN. Cloudflare has a pretty good WAF on the free tier; look at it as another learning opportunity.
Thank you very much and I'm glad you enjoyed it! I'll absolutely look into Cloudflare now that my site is getting more traffic - I need to learn more about security anyways so this is a great opportunity for me!
Thanks for pointing this out. Do you know of any guides on securing a home Pi web server, or less specifically, securing any Linux device exposed to the public internet?
I would like to benefit from big tech's security teams by hosting web forms and the various kinds of sites you suggest behind them: WAF + captchas + defence against bots. I would rather not do the server handholding and hardening myself.
I find that ufw is a pretty simple firewall interface for Linux.
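A minimal sketch of what that looks like for a web server that also needs SSH (ports are the defaults; adjust if you moved SSH):

    # Hedged sketch: deny inbound by default, then allow only SSH and web traffic.
    sudo ufw default deny incoming
    sudo ufw default allow outgoing
    sudo ufw allow 22/tcp    # SSH (use your custom port if you changed it)
    sudo ufw allow 80/tcp    # HTTP
    sudo ufw allow 443/tcp   # HTTPS
    sudo ufw enable
    sudo ufw status verbose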
UFW is great at what it does, but it's not going to save you from web application attacks.
I found that over time ufw becomes unwieldy and it's hard to make sure you keep the rules consistent - even (especially?) with configuration management.
I highly recommend firewalld instead.
Regarding private keys and computers going in for repairs...
I know it's not strictly best practice but store your private key in your password manager. That way if the worst happens you aren't up a creek...
You can also just have multiple keys? Most of my devices have an authorized_keys that gives access to a couple of laptops and my phone (which runs the normal OpenSSH client in Termux), which solves a lot of these access problems.
At least encrypt your private key and store it somewhere private, and then store the decryption key in your password manager. Then if either one is compromised you are still safe.
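A minimal sketch of doing that with OpenSSH itself (~/.ssh/id_ed25519 is the default key path; yours may differ):

    # Hedged sketch; ~/.ssh/id_ed25519 is the default key path.
    # Add (or change) a passphrase on an existing private key:
    ssh-keygen -p -f ~/.ssh/id_ed25519

    # Or generate a new key that is passphrase-protected from the start
    # (-a 100 increases the KDF rounds used to protect the key at rest):
    ssh-keygen -t ed25519 -a 100 -f ~/.ssh/id_ed25519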
I mean, if my password manager is compromised I'm pretty screwed anyway, but this is a good idea too.
The whole idea behind 2FA is that if your password manager is compromised, you will still be fine; that's why it's so damn important. (Also, please please please don't use SMS 2FA: it's not secure, it's expensive, and there's no reason why I should need mobile phone signal to log in to a service.)
To the OP:
Make sure you have updated the firmware to the latest version, plus added active protection against the following known vulnerabilities.
https://nvd.nist.gov/vuln/search/results?form_type=Basic&res...
Thank you! I will look into that - it's linked to my home wifi so security vulnerabilities would be really bad for me. Much appreciated!
I use Kamal[1] to host all my side projects on a Raspberry Pi. It works for more than just Rails apps.
I’ve tried a few other approaches in the past and this seems like the simplest, given a familiarity with git.
I really enjoyed this but at the same time find it completely perplexing - I can't place where the author is coming from - they are either really good at explaining things from the point of view of someone who knows almost nothing about this stuff or they genuinely started this without knowing most of it. I'm not sure which would be more impressive.
Haha I think the latter assumption is more correct. I really know next to nothing about web design. I do data science and machine learning, which is quite different really. I started this project as a freshman in undergrad and it was originally on WordPress. I have been gradually iterating on it for the past few years, and now I'm 2 years out of college. I just run into issues, fix them, and learn through that process.
So for example, when my router was hacked I installed fail2ban; when a friend told me my mobile view was terrible I read about CSS. I have just been working on it gradually, but I have never worked in webdev. Just today, a commenter here suggested I implement an RSS feed, so I had to look up how to do that and now I have an RSS feed! I'm just learning things on the fly. I'm glad you find it impressive; I just see it as the lazy way to learn.
Neither is required to be true. Rather, this reads as a person who genuinely hopes for you to understand what they are attempting to teach you. Oftentimes, technical documentation is written because the author desires to sound smart, not to actually help you achieve whatever the topic at hand may be.
We could have some words for this paradoxical phenomenon. I suggest "humility" and "patience":) Maybe even "kindness"
After using nginx and caddy for some time I can't fathom going back to apache config files.
Caddy is pretty good! Nginx does feel quite similar to Apache2 if you don’t try to do too much mod_rewrite.
To be honest, you can have both good and bad configuration in either.
I recently migrated one stack from Nginx to Apache2 (an Nginx container would refuse to start when it was a reverse proxy for, say, 10 services and one of those took longer to start, meaning it had no DNS record yet, so Nginx would be in a restart loop and the other 9 would be unavailable) and it was fine.
Agree. Much less maintenance with Caddy
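For anyone curious what "less maintenance" means in practice, a minimal Caddyfile for a static site looks roughly like this (domain and web root are placeholders); Caddy obtains and renews the certificate on its own:

    # Hedged sketch; example.com and the web root are placeholders.
    example.com {
        root * /var/www/example
        file_server
        encode gzip
    }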
Thank you, this is a wonderful step by step. Saved for the next time I get a chance to sit down and fiddle. Cheers!
Glad you enjoyed!
Hey, if one is only using Apache and serving static files, is there any security issue one needs to be on the lookout for? I was surprised fail2ban was the only thing installed there, and it seems to work reliably.
If you keep Apache itself up-to-date then typically speaking no. There is always a reason to be on the lookout and <insert generic speech about security concerns here>, but static files are - well - static. Not much to mess with there.
If you choose to do smart things like caching and such, that opens a whole new can of worms, but basic "this is my HTML page" stuff is typically "not-unsafe" (as opposed to "safe").
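On a Debian/Ubuntu-based system (which Raspberry Pi OS is), "keeping Apache up-to-date" can be largely automated with unattended-upgrades (a minimal sketch):

    # Hedged sketch for Debian/Ubuntu: let security updates (including Apache
    # point releases) install themselves automatically.
    sudo apt install unattended-upgrades
    sudo dpkg-reconfigure -plow unattended-upgrades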
Nice work, thanks for sharing! Do you plan to implement an RSS feed?
Glad you enjoyed it! Here you go: https://mirawelner.com/rss.xml
Any DDoS protection? Doesn't really matter with a small personal site, but I run a few solo production apps and they get hit multiple times a month by >100k rps DDoS attacks.
Me too, I need to use Cloudflare... I think if he gets hit by a DDoS, the Pi can go down, but since it's a simple website and not monetized, the downtime is fine.
Yeah. Though I get that it would defeat the purpose of the article, as running it on a Pi as a novelty is the whole point, and if you need to add DDoS protection you might as well just throw it on Cloudflare Pages instead.
Nice description, very helpful, thanks. AFAIK there's a mixup between the words "permanent" and "temporary" in the fail2ban section; you may want to check that.
Fixed, thank you!
Curious how well Caddy would run on an RPi. Not sure if it would come out ahead of Apache in practice. It's been my goto choice for a few years.
If there is an ARM image for it, then it'll run fine.
It's written in Go, so it should work fine.
I'm running a small Caddy + Rust (Actix Web) service on my Raspberry Pi 2B. Not the snappiest service ever but usable. Part of the slowness might also be the fact that it's online using Cloudflare Tunnels.
What is Cloudflare Tunnels? Is this so you don't have to add a firewall rule on the home router?
https://developers.cloudflare.com/cloudflare-one/connections...
Basically, you can have a computer available online through Cloudflare without having to open any ports, as it makes the computer available through outbound connections. Should also make it possible to host behind CGNAT.
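The quickest way to see it in action is a throwaway "quick tunnel" (a sketch; the local port is whatever your web server listens on):

    # Hedged sketch: cloudflared opens an outbound connection to Cloudflare
    # and prints a public trycloudflare.com URL that proxies to the local port.
    cloudflared tunnel --url http://localhost:80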
Yeah, the tunnel is probably at least half the overhead.
(was down for me, this fixed it)
I was looking for something like this article. Thanks for the post. Neat.
Glad you enjoyed!
Address Not Found
The caveat: this is only possible if your ISP permits incoming connections.
Isn't this the normal way?
Already hugged to death, rip
It was 2am when you checked, and as mentioned in the article the Pi was rebooting after an update :)
Loads fine for me, even very fast - maybe it is fixed?
[flagged]
What an odd thing for you to comment in response to someone obviously trying to obscure this information from this post.