I'm partial to this non-jokey take on two hard things:
Phil Karlton famously said there are only two hard things in Computer Science: cache invalidation and naming things. I gather many folks suppose "naming things" is about whether to use camel case, or which specific symbols to pick, which would indeed be trivial and mundane. But I always assumed Karlton meant the problem of making references work: the task of relating intension (names and ideas) and extension (the things designated) in a reliable way. Seen that way, it is the same topic as cache invalidation, which is about deciding when to sever an association once it has gone stale.
https://web.archive.org/web/20130805122711/http://lambda-the...
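Reading it that way, a cache entry is literally a binding from a name to a thing, and invalidation is the decision to stop honoring that binding. A toy sketch under that framing (the TTL-based API is invented; the TTL is exactly the crude guess that makes the real problem hard):

```python
import time

class Cache:
    """A name -> value binding with a crude expiry policy."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self.store = {}  # name -> (value, stored_at)

    def put(self, name, value):
        self.store[name] = (value, time.monotonic())

    def get(self, name):
        entry = self.store.get(name)
        if entry is None:
            return None
        value, stored_at = entry
        # The hard part Karlton meant: deciding *when* the name no
        # longer designates this value. A TTL is only a guess.
        if time.monotonic() - stored_at > self.ttl:
            del self.store[name]  # invalidate: sever the binding
            return None
        return value
```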
ah, ye olde two hard things, namely:
1) cache invalidation
2) naming things
0) off-by-one
Credit where credit is due. https://x.com/secretGeek/status/7269997868
with apologies to Leon, I think I first saw it from Martin (he gives full credit to his sources), so I'll post the link here for completeness:
In the interest of DRY, naming things is hard because when you want to reuse code in a method or library, it should be easy and intuitive to find what you need.
Too often, though, devs name a thing by what it does rather than by how it might be found.
For example, naming a function calculateHaversine won't help someone looking for a function that calculates the distance between two lat/longs unless they already know that's what the haversine formula does.
Or they default to the shortest possible name: Atan, asin, Pow, for example.
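To make that concrete, here's a sketch of one function published under both its algorithm's name and a searchable one (all identifiers here are my own invention):

```python
from math import radians, sin, cos, asin, sqrt

def calculate_haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two lat/long points (degrees)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km: mean Earth radius

# A discoverable alias: someone scanning the docs for "distance"
# finds this without knowing the haversine formula computes it.
distance_between_latlongs_km = calculate_haversine
```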
At some point you just have to browse the library, and learn conventional names for algorithms.
If you want to synthesize this kind of knowledge on the fly because you don't like learning other people's conventions, just feed the docs to ChatGPT and ask whether there's a function that solves your problem.
This is why a formal education is so important, and why books like the "Gang of Four" serve as a kind of standard: they gave names to common patterns, allowing a more efficient form of communication and a higher level of thinking. Are the patterns actually good? Are the names actually good? That's beside the point.
As counter-inspired by a fellow HNer, the encoding of this problem (cribbed from Dirac/von Neumann, so any inanity is all mine!) ought to be:
f[<Intension|Extension>] == 0
What are <A|B> and f[C]?
The solution to the problem would reveal their identities :)
Just in case: https://en.wikipedia.org/wiki/Bra%E2%80%93ket_notation#Hermi...
More seriously: https://en.wikipedia.org/wiki/Binding_(linguistics)
And its derivatives in CS, etc.
There’s two hard problems in computer science: cacsynchronizing shared access to the same resource.he invalidation, and
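The interleaved wording is the joke acting itself out: two writers hit the same buffer with no lock. A minimal sketch of that failure mode, assuming CPython threads and a plain shared list as the buffer (names are mine):

```python
import sys
import threading

sys.setswitchinterval(1e-6)  # switch threads aggressively to provoke interleaving
buffer = []

def write(message):
    for ch in message:
        buffer.append(ch)  # shared buffer, no synchronization

t1 = threading.Thread(target=write, args=("cache invalidation, and ",))
t2 = threading.Thread(target=write, args=("synchronizing shared access to the same resource.",))
t1.start(); t2.start()
t1.join(); t2.join()
print("".join(buffer))  # output order depends on the thread schedule
```

Whether any given run scrambles is nondeterministic, which is the point.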
I find “co-ordinating distributed transactions” to be very hard. Getting any kind of optimum cooperation between self-centred agents is tricky.
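For a feel of why, here's a bare-bones two-phase commit sketch with a made-up Participant interface; the real difficulty starts where this sketch stops, e.g. a coordinator crash between the phases leaves everyone blocked in "prepared":

```python
class Participant:
    """One self-interested agent holding part of a distributed transaction."""

    def __init__(self, name):
        self.name = name
        self.prepared = False

    def prepare(self) -> bool:
        # A real participant would durably log its vote before answering.
        self.prepared = True
        return True  # vote yes

    def commit(self):
        assert self.prepared
        print(f"{self.name}: committed")

    def abort(self):
        self.prepared = False
        print(f"{self.name}: aborted")

def two_phase_commit(participants):
    # Phase 1: collect votes. One "no" (or a timeout) dooms the whole thing.
    if all(p.prepare() for p in participants):
        # Phase 2: unanimous yes, so tell everyone to commit.
        for p in participants:
            p.commit()
    else:
        for p in participants:
            p.abort()

two_phase_commit([Participant("db1"), Participant("db2")])
```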
Also: “Jevons’s paradox”. That one is nasty! For example: just about anything we do to decrease fossil fuel use by some small percent makes the corresponding process more efficient, hence more profitable, and hence happen more. That’s a nasty, nasty problem. I guess it’s not specific to computer science, but to all engineering.
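A toy model shows the mechanism; every number below is invented, and the elasticity knob carries the whole argument: if demand for the service is elastic enough, cheaper service swamps the efficiency gain:

```python
# Toy Jevons's-paradox model: better efficiency makes each unit of
# service cheaper, demand responds, and total fuel burned can rise.

def fuel_used(efficiency, base_demand=100.0, elasticity=1.5):
    price_per_service = 1.0 / efficiency                         # cheaper per use
    services = base_demand * price_per_service ** (-elasticity)  # elastic demand
    return services / efficiency                                 # fuel actually burned

print(fuel_used(1.0))  # baseline: 100.0
print(fuel_used(1.2))  # 20% more efficient, yet ~109.5 units of fuel
```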
Everybody knows the two hard things are timezones and Unicode.
What's hard about cache invalidation?
Eh, only one hard thing then, because as hard as Unicode is, timezones are way harder.