Not specifically trying to play golf, just to let you know that `curl -w` is your friend for extracting data from the headers:
for _ in {1..5} ; do
  curl -sSw '%header{location}\n' https://indieblog.page/random | \
    sed -e 's/.utm_.*$//'
done
Random recent browsing is also the best way to read Hacker News.
It's not addictive, gives you a quick sample of what people are thinking about recently, prevents your own biases from narrowing your view, etc.
Each refresh gives you a random post, plus its parent thread, plus some detail about that post's author's history.
To save the grep, you can do:
sed -Ene 's/^location: (.*)\?utm.*$/\1/p'
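In context it would look something like this (a sketch: the URL and the lowercase `location:` header are assumptions — HTTP/1.1 servers may send `Location:` — and the `tr` strips the carriage return that raw header lines carry, so the `$` anchor can match):

```shell
# Fetch only the headers, drop CRs, then print the location value
# minus any ?utm_... query string.
curl -sI https://indieblog.page/random \
  | tr -d '\r' \
  | sed -Ene 's/^location: (.*)\?utm.*$/\1/p'
```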
If you've got a newer `curl`, you can just print the header:

curl -I -s -o /dev/null -w '%header{location}' <url>

But you'll obviously still need the sed if you want to trim off the UTM cruft. Unless you want to go the route of `bash` substitutions:

l="$(curl -I -s -o /dev/null -w '%header{location}' <url>)"
echo "${l%%?utm*}"
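Putting the two together, the original loop could be written without sed at all (a sketch, assuming a recent enough `curl` for `%header{...}`; note that `?` in the glob pattern matches any single character, which is what lets it consume the literal `?`):

```shell
# Fetch each redirect target, then strip the tracking query with a
# parameter expansion. %% removes the longest suffix matching ?utm*.
for _ in {1..5}; do
  l="$(curl -sS -o /dev/null -w '%header{location}' https://indieblog.page/random)"
  echo "${l%%?utm*}"
done
```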
This thread is a virtual UA meet
Sorry, what is UA?
Oh no!
I'll do one better
eval `llm "bash script to get 5 random links from indieblogs and print one on each line"`
You could say it's a self mutating program.
The indieweb lacks discoverability; a way to get random indie blog posts is exactly what the indieweb needs. Great idea!
Yo, this shit could be a tight JavaScript app with a reactive frontend. Y’all geezers still fuckin’ with them shell commands? /s
Give this man an electron app!