First, note that this complexity is actually worse for highly dense graphs, where `m` (number of edges) dominates rather than `n` (number of nodes) [note that a useful graph always has `m > n`, and often `m <= 2d n`, where `d` is the number of dimensions and the 2 is because we're using directed edges. Ugh, how do we compare log powers?].
Additionally, the `n` in the complexity only matters if, for the Dijkstra approach, you actually need a frontier of size proportional to `n` [remember that for open-grid-like graphs, the frontier is limited to `sqrt(n)` for a plane, and for linear-ish graphs, the frontier is even more limited].
Also note that the "sorting barrier" only applies to comparison-based sorts, not e.g. various kinds of bucket sorts (which are easy to use when your weights are small integers). Which seems to be part of what this algorithm does, though I haven't understood it fully.
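To illustrate the bucket idea (a toy sketch assuming small integer keys, not the paper's actual machinery): with bounded integer weights you can sort with no comparisons at all, sidestepping the comparison barrier.

```python
def bucket_sort(values, max_value):
    # One bucket (counter) per possible integer value; no comparisons needed,
    # so the comparison-sort lower bound doesn't apply.
    buckets = [0] * (max_value + 1)
    for v in values:
        buckets[v] += 1
    out = []
    for v, count in enumerate(buckets):
        out.extend([v] * count)
    return out

print(bucket_sort([3, 1, 4, 1, 5, 2], 5))  # [1, 1, 2, 3, 4, 5]
```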
Very good points. I wonder what this means for real-world street network graphs. In my experience, m can be considered proportional to n in road network graphs (I would estimate m ≈ 2C n, with C being between 2 and 3). This would mean that the asymptotic running time of this new algorithm on a classic road transportation network would be more like O(Cn log^2/3 n) = O(n log^2/3 n), so definitely better than classic Dijkstra (O(n log n) in this scenario). On the other hand, the frontier in road network graphs is usually not very big, and (as you also said for grid graphs) you normally never "max out" the priority queue with n nodes, not even close. I would be surprised if the ^2/3 beats the additional constant overhead of the new approach in this case.
In the real world you are not using either; you have many more ways to optimize for a specific problem. For street networks you'd probably start with A* or something like that.
The current meta game is the use of contraction hierarchies. Basically, you simplify the network into hubs connected by trunk lines and then refine the routing close to start and destination.
In the real world Dijkstra will definitely be faster.
It’s not often that you see O(E + V log V) Dijkstra with Fibonacci heaps, either, the O((E + V) log V) version with plain binary heaps is much more popular. I don’t know if that’s because the constants for a Fibonacci heap are worse or just because the data structure is so specialized.
Yes, a standard binary heap is very fast and incredibly simple to implement, mostly because you can store the entire heap in a single continuous array, and because you can access individual elements by simple pointer arithmetic. It's quite hard to beat this in practice.
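To make the "single contiguous array plus index arithmetic" point concrete, here's a minimal min-heap sketch: the parent of index `i` is `(i - 1) // 2` and its children are `2*i + 1` and `2*i + 2`, so no pointers or node objects are needed.

```python
def heap_push(heap, item):
    # The heap lives in one contiguous list; sift up by parent index (i-1)//2.
    heap.append(item)
    i = len(heap) - 1
    while i > 0 and heap[(i - 1) // 2] > heap[i]:
        heap[i], heap[(i - 1) // 2] = heap[(i - 1) // 2], heap[i]
        i = (i - 1) // 2

def heap_pop(heap):
    # Move the last element to the root, then sift down via children 2i+1, 2i+2.
    top = heap[0]
    last = heap.pop()
    if heap:
        heap[0] = last
        i = 0
        while True:
            smallest = i
            for child in (2 * i + 1, 2 * i + 2):
                if child < len(heap) and heap[child] < heap[smallest]:
                    smallest = child
            if smallest == i:
                break
            heap[i], heap[smallest] = heap[smallest], heap[i]
            i = smallest
    return top

h = []
for x in [5, 3, 8, 1]:
    heap_push(h, x)
print([heap_pop(h) for _ in range(4)])  # [1, 3, 5, 8]
```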
So that means it doesn’t work for Traveling Salesman, where the number of edges is nearly n^2? That might explain why it’s not been found before.
It is somewhat funny that it took 12 submissions here on hackernews to bring it to a wider audience :) https://hn.algolia.com/?query="2504.17033"
> But curiously, none of the pieces use fancy mathematics.
> “This thing might as well have been discovered 50 years ago, but it wasn’t,” Thorup said. “That makes it that much more impressive.”
this is so cool to me, it feels like a solution you could* have stumbled upon while doing game development or something
*probably wouldn't but still
Gamedevs (I find, at least) are so obsessively deep in SOLVING the problem at hand that their headspace is indexed on shipping the game, the project, deadlines, and what to eat for the next meal (probably pizza).
Rather than the academia.
Just a hunch tho
Isn't that just it though? The problem very well could be that some part of the game is running too slow so they just start solving it. No time to read and write academic papers.
This algorithm is asymptotically faster than the state of the art, but it isn't faster in practice. At least not yet!
Somebody somewhere might be Ramanujan, but the average person is going to be a whole lot better served by reading literature than trying to reinvent the field.
That's what I have found to be the case: time is in short supply
Maybe someone did and just didn’t see it as novel?
Depending on where you work they might not let you publish a paper about it. Certainly was the case at one game studio I worked for, very secretive.
Tarjan was my algorithms professor. He invented many of them
…invented many of them algorithms? like which?
Aside from inventing a bunch of individual algorithms, Tarjan is also known for introducing various theoretical techniques that are now considered fundamental. Most notably, amortized analysis.
His Turing award writeup gives a pretty broad overview of his research contributions: https://amturing.acm.org/award_winners/tarjan_1092048.cfm
Maybe Tarjan's strongly connected components. That's one I've implemented at some point at least.
This is one of my favorites:
I'm intrigued, but the article is very verbose with little detail. Maybe the paper will give a more satisfying description.
I'm most curious how the algorithm fulfills the "global minimum" guarantee that Dijkstra provides. The clumping of frontier nodes seems prone to missing some solutions if unlucky.
O(m log^2/3 n) !!! What a triumph.
https://arxiv.org/abs/2504.17033
> We give a deterministic O(m log^(2/3) n)-time algorithm for single-source shortest paths (SSSP) on directed graphs with real non-negative edge weights in the comparison-addition model. This is the first result to break the O(m + n log n) time bound of Dijkstra's algorithm on sparse graphs, showing that Dijkstra's algorithm is not optimal for SSSP.
log^2/3 might be the weirdest component I’ve ever seen in a complexity formula.
I'm continually amazed by the asymptotic complexity of union-find, which is O(alpha(n)), where alpha(x) is the inverse of the Ackermann function (and n the number of sets you union). In other words, O(6) or so as long as your inputs fit into the observable universe.
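A sketch of the structure in question, for anyone who hasn't seen it: it's path compression plus union by size that together give the O(alpha(n)) amortized bound.

```python
class UnionFind:
    # Path compression (in find) + union by size (in union) together give
    # O(alpha(n)) amortized time per operation.
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        # Second pass: point every node on the path directly at the root.
        while self.parent[x] != root:
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra          # attach the smaller tree under the larger
        self.size[ra] += self.size[rb]

uf = UnionFind(5)
uf.union(0, 1)
uf.union(3, 4)
print(uf.find(0) == uf.find(1), uf.find(0) == uf.find(3))  # True False
```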
There's definitely a divide on who sees what sort of algorithms. The subject of this article is in Graph Theory space, which a lot of us get even without trying (I dabbled in TSP for a while because it's a difficult distributed programming problem and I wanted to explore the space for that reason).
But if you're not implementing AI or game engines, some of the linear algebra space may be a road less traveled.
I still think matrix multiplication's O(n^2.371339) is super weird.
Matrix multiplication definitely should be O(n^(2+o(1))).
for about a decade, integer multiplication was at n · log n · 4^(log* n), where log* is the iterated logarithm.
Also, the currently best factorization algorithm (GNFS) is at exp(k · (log n)^(1/3) · (log log n)^(2/3)).
Intro algorithms classes just tend to stay away from the really cursed runtimes since the normal ones are enough to traumatize the undergrads.
Hey the asterisks in your reply got read as formatting so it's ended up messed-up.
oops. fixed.
Are BigInteger multiplications at n log² n now, or do they still have weird terms in them?
Down to n log n.
Sounds a lot more complicated than Dijkstra. But I guess that's the way it goes.
Dijkstra is still very difficult for many and not universally taught in 7th grade, even though you can arguably explain what a shortest path in a graph is to a 14-year-old.
Dijkstra _could_ be universally taught in 7th grade if we had the curriculum for that. Maybe I'm biased, but it doesn't seem conceptually significantly more difficult than solving first degree equations, and we teach those in 7th grade, at least in Finland where I'm from.
I think we forget how old the term algorithm is. We started this journey trying to automate human tasks by divide and conquer, not computers.
Merge sort was supposedly invented in 1950; it’s more likely it was invented in 1050 than 1950. Sort a room full of documents for me. You have three minions, go.
I think humans generally use some form of bucket/radix sorting (or selection sort for small collections)
A single human is different from “humans”. A human with a stack may sort it into four stacks and then sort among them, yes.
But a room of five clerks all taking tasks off a pile and then sorting their own piles is merge sort at the end of the day. Literally, and figuratively.
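The clerks' procedure (sort your own pile, then repeatedly combine piles by taking the smaller front document) maps directly onto code; a minimal sketch:

```python
def merge_sort(docs):
    # Each "clerk" sorts half the pile recursively, then the two sorted
    # piles are merged by repeatedly taking the smaller front item.
    if len(docs) <= 1:
        return docs
    mid = len(docs) // 2
    left, right = merge_sort(docs[:mid]), merge_sort(docs[mid:])
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([4, 2, 7, 1, 3]))  # [1, 2, 3, 4, 7]
```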
For sure! The main thing keeping us from teaching advanced things to younger folks is the seeming addiction to teaching poorly/ineffectively. I'm here to find the physical play-with-your-hands demonstrations needed for teaching kids as young as 5 the intuitions/concepts behind higher-order category theory without all the jargon.
I think you could do it with many board games. Mouse Trap for monads? Poker for permutations? Dice for decision theory?
Dijkstra's algorithm is completely trivial. It's a greedy algorithm; there's nothing more complex involved than repeating the same simple step over and over. You pick a starting node then repeatedly add the lowest-cost edge to a node you haven't already reached. It's harder to explain what a "node" and "edge" are than to explain how Dijkstra's algorithm works.
Many textbooks make it sound harder than that because they want to examine complex data structures that make various parts of that as fast as possible. But the complexity is the implementation of the data structures, not Dijkstra's algorithm.
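The "repeat one simple greedy step" description maps almost line for line onto code; a minimal sketch using Python's built-in `heapq` for the priority queue:

```python
import heapq

def dijkstra(graph, start):
    # graph: node -> list of (neighbor, weight) pairs, weights non-negative.
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        # The one greedy step: take the cheapest node not yet settled.
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry; u was already settled cheaper
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(g, "a"))  # {'a': 0, 'b': 1, 'c': 3}
```

All the textbook sophistication (Fibonacci heaps, decrease-key, etc.) lives inside the priority queue; the algorithm itself is just this loop.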
Reminds me of Timsort.
I wonder if hybridizing this with selective use of randomness to probe beyond frontiers leads to another speedup.