Lots of people in this thread saying that it's just an unoptimised Bubble Sort.
No, it really isn't.
In Bubble Sort you see if two things are the wrong way round and if they are then you swap them. This does that comparison (sort of) the "wrong way round".
In Bubble Sort your indices are always i<j ... here they aren't.
In Bubble Sort you only ever compare adjacent items. Here you don't.
This really, really isn't Bubble Sort, unoptimised or other.
Also, with Bubble Sort it's really easy to prove that it works. If you think this is "just unoptimised Bubble Sort" then I'd be interested to see your proof that this algorithm works.
So my feeling is that when people are dismissive of this, either they've not understood this at all, or they've understood it at a really, really deep level that I haven't. In that latter case I'd really appreciate a better explanation for what you think is going on, and why you think this is "just an unoptimised Bubble Sort".
The problem is that you can look at this and, after a few seconds, think you've formed an intuition on how it must work, but your intuition is wrong. It seems like the only way to really show how weird this is would be by creating an animation that shows how it shuffles entries around.
(Edit) I gave it a shot: https://jsfiddle.net/odwh6cL5/45/
Here are two other visualizations:
Very beautiful, simple animation. Good visualization of the proof of the algorithm; namely, that after each step i of the outer loop the first i elements are sorted (this is easy to show via induction).
Add bubble sort in parallel and race them against each other!
Thank you for this!
I agree. At first I thought it was doing something very simple but I am not so sure anymore. When you check the condition for the swap, you realize it almost works counterproductively while i<j because it keeps pushing smaller numbers in the wrong direction, but then it starts "fixing" those mistakes when i>j. I don't find this intuitive at all.
>In Bubble Sort you only ever compare adjacent items. Here you don't.
This is the first thing I noticed and it pretty much says it all. It is not bubble sort because there are no bubbles bubbling to the top. Instead it compares every possible element pair combination.
> In that latter case I'd really appreciate a better explanation for what you think is going on, and why you think this is "just an unoptimised Bubble Sort".
I can try to explain that. The algorithm really is doing a bubble sort, with some unnecessary extra work added. It also does a bit of necessary extra work.
Here is the useful work done by the algorithm:
1. First, preprocess the list so that the maximal element occurs first. (It's a 1-indexed list, so position 1 holds the highest element of the list.)
2. Now, we consider an invariant: the part of the list up to (and including) the maximal element is always sorted. Right now that consists of indices 1-1, and since that range contains only one value it's necessarily sorted. We will use a variable ("i") to remember where the maximal element is.
3. Now, the outer loop: we start at the index immediately after the maximal element, index i+1. Whatever value is in there, we bubble it to the left (this is the inner loop), in the manner of a standard bubble sort: we compare it to index i, and if it's lower (which it always is, since i holds the maximal element), we swap. Then we compare index i to index i-1, and so on. Once we find an element that is smaller than the one we've been bubbling to the left, we stop, and our invariant is restored: the list up to the maximal element is still sorted. But the space covered by the invariant has increased by one slot. We increment i.
4. The loop continues until i points to the last slot in the list, at which point we're done. The invariant tells us that the list up to i is sorted, and since i points to the last value in the list, the whole list is sorted.
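If it helps to see the steps as code, here's a minimal Python sketch of just the useful work described above (the code and names are mine, not the paper's):

def useful_work_sort(a):
    n = len(a)
    if n == 0:
        return a
    # Step 1: preprocessing - move the maximal element to the front.
    m = a.index(max(a))
    a[0], a[m] = a[m], a[0]
    # Steps 2-4: invariant - the prefix up to and including the
    # maximal element is sorted; at the start of pass i the max
    # sits at index i-1, and the first swap moves it to index i.
    for i in range(1, n):
        # Bubble a[i] leftward until it meets a smaller element.
        j = i
        while j > 0 and a[j] < a[j - 1]:
            a[j], a[j - 1] = a[j - 1], a[j]
            j -= 1
    return a

print(useful_work_sort([3, 1, 4, 1, 5]))  # [1, 1, 3, 4, 5]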
-----
The algorithm I've described sends i forward from the beginning to the end of the list and j backwards from i+1 to the beginning of the list. The algorithm in the paper does all the same work, but it also does more work, instead having j cross the entire list every time. (It also sends j forwards, like i, but this doesn't matter.†) The only reason for this is to make the algorithm simpler to write down. This is why it's "unoptimized bubble sort". Note that, in the algorithm of the paper, once the maximum value has been swapped into i, the check A[i] < A[j] can never succeed, but we ignore that and keep advancing j instead of just terminating the j loop. This looks especially stupid because the maximal value will always occur before i (except when i = 1, which is the stage that, in my algorithm, is called "preprocessing").
In a bubble sort, if your cursor is at 5, and you want to move A[6] to index 3, you need to make 3 swaps and 3 comparisons. In the paper's sort, when your cursor is at 5 and you want to move A[6] to index 3, you need to make 3 swaps and n comparisons, where n is the length of the full list.
-----
If you're interested in why the j loop is the same thing as a bubble sort, here's an expansion on the example above:
We're in the process of bubble sorting [1 4 8 11 24 5], looking at the 5. Because 5 is more than 4 but less than 8, we want it to ultimately end up at index 3. Everything else will shift up by one index to accommodate this.
In a bubble sort, we swap the 5 down into index 5, then we swap the 5 down into index 4, and then we swap it down into index 3.
swap(&a6, &a5);
swap(&a5, &a4);
swap(&a4, &a3);
In the paper's sort, we do exactly the same thing from the other direction:

int temp = a6; /* we want to put this into its correct position */
swap(&a3, &temp); /* done, but we need to remember what was in a3 so we can swap it up to a4 */
swap(&a4, &temp); /* done, but now we need to remember what was in a4 so we can swap it up to a5 */
swap(&a5, &temp); /* take a guess... */
swap(&a6, &temp);
but there's a slight optimization where instead of using a temporary variable to hold the values we're swapping, we just use the space after the maximal element (which makes `swap(&a6, &temp)` redundant - after the optimization, those are the same address).†

It matters a little bit - a bubble sort is free to terminate the inner loop as soon as the value being bubbled has found its correct position. But in the paper's algorithm, we have to find the correct position first, which is done by scanning every index that contains a lower value until we eventually find the first one that doesn't. That work is optimized out of a bubble sort. However, since the paper's sort is committed to scanning the entire list in the inner loop anyway, the fact that the order of bubbling is wrong doesn't require any more extra scanning beyond what they've already committed to.
The number of swaps is nearly the same between Bubble Sort and Can't Believe Sort. For a random list with 100 elements, it's 2505 swaps vs. 2513 swaps.
The number of comparisons is of course about twice as large, because Bubble Sort always does exactly n(n-1)/2 comparisons while Can't Believe Sort always does exactly n*n comparisons.
https://play.rust-lang.org/?version=stable&mode=debug&editio...
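(The playground link is Rust; if you'd rather count for yourself, here's a rough Python equivalent I sketched - my code, not the linked one, and the exact counts will vary with the random data:)

import random

def count_swaps_bubble(a):
    # Plain bubble sort with fixed bounds (no early exit),
    # for a direct comparison; always n(n-1)/2 comparisons.
    a, swaps = a[:], 0
    n = len(a)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swaps += 1
    return swaps

def count_swaps_icb(a):
    # The paper's algorithm, translated to 0-based indices;
    # always n*n comparisons.
    a, swaps = a[:], 0
    n = len(a)
    for i in range(n):
        for j in range(n):
            if a[i] < a[j]:
                a[i], a[j] = a[j], a[i]
                swaps += 1
    return swaps

data = [random.random() for _ in range(100)]
print(count_swaps_bubble(data), count_swaps_icb(data))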
> The number of swaps is nearly the same between Bubble Sort and Can't Believe Sort. For a random list with 100 elements, it's 2505 swaps vs. 2513 swaps.
Why is there any difference? Does the entire difference come from the iteration where i = 1?
> Bubble Sort always does exactly n(n-1)/2 comparisons
Bubble sort can do less than this. When you're bubbling down into a sorted list, and you find a value lower than you are, you can assume that all values below that one are also lower than you are and terminate the inner loop early.
> Why is there any difference? Does the entire difference come from the iteration where i = 1?
Sometimes it's more swaps, and sometimes it's less. The vague pattern looks like Can't Believe Sort does slightly fewer swaps on larger lists, but that could also have been noise.
Why do you expect them to have exactly the same number of swaps?
> Bubble sort can do less than this.
Oh sorry yeah, I implemented bubble sort without any sort of early return, for a more direct comparison.
I expect them to have exactly the same number of swaps in every iteration of the loop past the first one, because they are doing exactly the same thing. It's not just the same number of swaps. It's essentially the same set of swaps, in a different order. In the iteration where i = 1, the number of swaps may differ. Was I wrong about that?
Yeah, 2410 vs. 2509 swaps not including the first iteration. You should try measuring these things yourself, I'm not sure if it's doing what you think it's doing.
Your step 3 is an insertion op, not a bubble op.
An insertion op inserts one element into a sorted list. A bubble op bubbles the maximum (or minimum) element to the end of an unsorted list.
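The distinction is easy to see in code (a sketch with my own naming, not anyone's canonical implementation):

def insertion_op(a, i):
    # Insert a[i] into the already-sorted prefix a[0..i-1]:
    # march leftward from i, stopping as soon as a[i] fits.
    while i > 0 and a[i] < a[i - 1]:
        a[i], a[i - 1] = a[i - 1], a[i]
        i -= 1

def bubble_op(a, end):
    # One bubble pass: push the maximum of a[0..end] to a[end]
    # using only adjacent compares and swaps, never stopping early.
    for j in range(end):
        if a[j] > a[j + 1]:
            a[j], a[j + 1] = a[j + 1], a[j]

a = [2, 4, 7, 5, 1]
insertion_op(a, 3)  # a becomes [2, 4, 5, 7, 1]
bubble_op(a, 4)     # a becomes [2, 4, 5, 1, 7]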
This is a strange argument to me, because the single most critical aspect of a bubble sort is the locality.
Bubble sort only ever compares adjacent elements. It only ever swaps adjacent elements. This is what makes bubble sort optimal (within constant factors) for an external sort on a tape drive. This is why anything else isn't a bubble sort.
Yeah, this makes sense to me. I started developing some of these similar intuitions while watching a visualization another commenter posted. Here are some things I realized:
1. The first pass (I think you call it the preprocessing step) is special in that it finds the maximum number and puts it in the first slot. The rest of the algorithm revolves around this max value (or at least I find it helpful to think of it that way). This is also the only time where j going above i makes any difference. After this pass we could stop the j loop when i == j but that would obviously complicate the code.
2. Second pass will always swap items in first and second slots since the first item is bigger than second (if they are equal, nothing happens). As noted in 1., j will keep going past i but it won't do anything since i is now pointing to the largest value in the array and the swap condition will never be met.
3. On third and all subsequent passes the value that a[i] is pointing to will be slotted in its right place in the sorted list that's being formed at the beginning of the array. This might require multiple swaps as j goes up and larger values go into the ith slot. Last thing that happens before j reaches i (so when j == i - 1) is always that the max value gets swapped from i-1 to i. This is basically what the second pass from 2. did.
4. The max value from 1. conceptually serves almost as a sentinel of sorts since it always represents the end of the list we've sorted so far and when j gets to i, it prevents the rest of j loop from doing anything. That's why the code can be so clean, without having to do any index arithmetic or short circuiting or any other complications. It's clean but definitely not simple, since it takes a while to wrap one's head around what's actually happening.
Thanks for the analysis!
Here's one way to intuitively think about this algorithm:
ICan’tBelieveItCanSort(A[1..n])
  for i = 1 to n do
    for j = 1 to n do
      if A[i] < A[j] then
        swap A[i] and A[j]
Example: 3412 // i=1
4312 // i=1 -> i=2
3412 // i=2 -> i=3
1432 // i=3
1342 // i=3 -> i=4
1243 // i=4
1234 // i=4
1) After an outer loop of i is finished, the first i elements are in ascending order. (For example, in the fifth line above, after i=3 is finished, the first 3 elements are "134".)

2) At some point, the outer loop will point to "1", and "1" will be compared to the 1st position. Then, "1" will be placed in the 1st position and remain there forever. (For example, this happens in the 3rd line above.)
3) After the outer loop points to "1", at some point it will point to "2" which will be compared to the 2nd position. (Proof: when the outer loop pointed to "1", if "2" was on the right of "1", then the statement is obvious; if "2" was on the left of "1", then it would be in the 1st position by (1), so again the statement holds.) Then, "2" will be placed in the 2nd position and remain there forever.
4) After the outer loop points to "2", at some point it will point to "3" which will be compared to the 3rd position...
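A direct Python transcription (0-based indices, printing the state at the end of each outer iteration) reproduces the end-of-pass states in the trace above:

def i_cant_believe_it_can_sort(a):
    n = len(a)
    for i in range(n):
        for j in range(n):
            if a[i] < a[j]:
                a[i], a[j] = a[j], a[i]
        print(i + 1, a)  # state after outer iteration i+1
    return a

i_cant_believe_it_can_sort([3, 4, 1, 2])
# 1 [4, 3, 1, 2]
# 2 [3, 4, 1, 2]
# 3 [1, 3, 4, 2]
# 4 [1, 2, 3, 4]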
I love this paragraph from the paper:
There is nothing good about this algorithm. It is slow – the algorithm obviously runs in Θ(n²) time, whether worst-case, average-case or best-case. It unnecessarily compares all pairs of positions, twice (but see Section 3). There seems to be no intuition behind it, and its correctness is not entirely obvious. You certainly do not want to use it as a first example to introduce students to sorting algorithms. It is not stable, does not work well for external sorting, cannot sort inputs arriving online, and does not benefit from partially sorted inputs. Its only appeal may be its simplicity, in terms of lines of code and the “symmetry” of the two loops.
I used to teach this to beginners all the time. I called it "exchange sort", though I'm not sure where I ever got that name from.
Beginners to sorting are often still somewhat struggling with for loops (especially the three-part C kind), struggling with indexing an array, struggling with swaps. It's overwhelming.
This is kind-of like pseudocode for a simple sort -- it has the same rough outline, but no tricky bits. It allows them to get their brains around the basic algorithm, and only after that you trace through an execution on the board to show them that it wastes a lot of time moving things around.
Then you can show them insertion sort as an optimization, and from that move on to fancier ones.
Can you give a reference to your teaching materials? The way this sort works is bizarre enough that I'm curious what you said about it. (Also to verify that this is actually the algorithm you taught, given how many other people thought it was something else.)
Note that they define algorithm 2 (see figure of said name) as Exchange Sort, which is distinct from the algorithm they present.
Formatted:
For easier exposition in the proof later on, the array is 1-based, so the elements are A[1]...A[n].
Algorithm 1

ICan’tBelieveItCanSort(A[1..n])
  for i = 1 to n do
    for j = 1 to n do
      if A[i] < A[j] then
        swap A[i] and A[j]
Isn't that just BubbleSort?
And, if so: they're writing Arxiv papers on BubbleSort now?
It is not, it is actually closer to an insertion sort.
The interesting part is that it often comes up when the one writing the algorithm doesn't know what he is doing, and yet, it works. It may look like a BubbleSort because it is a common wrong implementation of BubbleSort.
EDIT: insertion, not selection
Yeah, section 3.2 of the PDF, titled '"Improving" the algorithm' notes that they have "rediscovered" insertion sort by simply adjusting the loop indices.
(they also document a variant in 3.1, with similar index tweaking, which they describe as similar to an exchange sort)
BubbleSort keeps going until no more swaps happen; this algorithm specifies up front which items to compare.
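For contrast, a sketch of that canonical flag-driven form (my code, just to illustrate the point):

def bubble_sort(a):
    # Canonical bubble sort: adjacent compares only, repeated
    # passes until a full pass performs no swaps.
    swapped = True
    while swapped:
        swapped = False
        for j in range(len(a) - 1):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
    return a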
I actually think it's pretty cool, it happens occasionally that I just need a quick and dirty sort and this is easier to get right.
The really unintuitive bit is that the condition is the reverse of what you would expect, so I wouldn't call it "easy to get right".
Exchange sort is more intuitive and almost as simple, and if you don't mind something a little less intuitive but still very simple, just do insertion, it is often considered the best of the simple (N^2) sorting algorithms.
For me, the most intuitive and the least likely to get wrong would be selection. It is actually the one I came up with naturally before I even knew about sorting algorithms; it is a couple of lines longer than the others, though, and often considered worse than insertion.
I wonder what are these occasional situations where you need a quick and dirty sort?
I had to change the order of the rows and columns of a matrix to force the diagonal to be ordered. One possibility was to use the sort function on the diagonal, somehow get the permutation, and then apply the permutation to all the rows and columns. How hard can it be?

There is a 50% chance of applying the permutation in the wrong direction, and other possible tricky details.

Since the worst case was N=20, I used bubble sort, which is just as quick and dirty, but also obvious and well known.
There are plenty of standard libraries out there that leave a lot to wish for.
Sometimes I'm writing my own language, trying to figure something else out and don't feel like being sidetracked.
Sometimes predictable performance is more important than optimal.
Let's just say there are plenty of reasons why someone might want to do that.
I’ve heard of quicksort, but what is dirtysort? :-)
Bubble sort only swaps adjacent elements—that's why it's called bubble. This doesn't.
You did not bother to read the article, did you?
Note the:
A[i] < A[j]
No.
It isn't surprising to me in the least that this works.
In the first pass (i=1), A[1] will end up with the largest value and a later pass cannot displace it.
In the second pass (i=2), A[2] will end up with the 2nd largest value (which might be the same as the largest value, if it appeared more than once in the list). And so on.
> In the first pass (i=1), A[1] will end up with the largest value and a later pass cannot displace it.
Nah, it will get displaced later. Notice that in the inner loop j goes from 1, and not from i. After i=1 A[1] will be the largest element, but after the next big step, after i=2, A[2] will be the largest element. (And after i=n, A[n] will be the largest element, as expected.)
> In the second pass (i=2), A[2] will end up with the 2nd largest value
The algorithm sorts the array ascending, and not descending. There is an example run linked in a grandkid comment, by jmholla.
(This comment was entirely rewritten.)
Thanks, I stand corrected! I am once again surprised it works at all. :-)
I assumed your parent meant "smallest" when they said "largest", and just got the sort order mixed up when commenting.
That's actually not correct at all either. After the first pass makes the largest element the first element, every pass after that is a single iteration of (reverse) bubble sorting the beginning of the array, growing the sorted prefix. So, the second pass doesn't end up with the largest or smallest of the list, just the smallest of the selected prefix.

Once the iterator reaches beyond the length of the tested prefix, we know every element after it will be smaller, because we bubbled the largest element to the iteration point, which means we won't do any checks thereafter. In fact, excepting the first pass, this inner loop can break whenever it sees it is comparing against itself (see the sketch below).
Edit: The original article on this has a visualization that can help: https://unnamed.website/posts/i-cant-believe-it-can-sort/
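If I'm reading that right, the shortcut looks like this in Python (0-based; my sketch, only quickly checked on a few inputs):

def icb_with_early_break(a):
    n = len(a)
    for i in range(n):
        for j in range(n):
            if i > 0 and j == i:
                # After the first pass, a[0..i] is sorted with the
                # overall max now at a[i], so no later j can swap.
                break
            if a[i] < a[j]:
                a[i], a[j] = a[j], a[i]
    return a

print(icb_with_early_break([3, 4, 1, 2]))  # [1, 2, 3, 4]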
You would appear to be wrong, because A[1] ends up with the smallest element.
Specifically, in each pass where i=2 or higher, j still starts at 1, so A[1] still gets referenced.
On the first pass, A[1] ends up with the largest element. On subsequent passes, it ends up with the smallest element in A[1:i].
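(A quick way to check this claim in Python - watch a[0], i.e. the 1-indexed A[1], after each pass:)

a = [3, 1, 4, 1, 5, 9, 2, 6]
n = len(a)
for i in range(n):
    for j in range(n):
        if a[i] < a[j]:
            a[i], a[j] = a[j], a[i]
    print(i + 1, a[0])  # 9 after pass 1, then the prefix minimum (1 here)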
Previous discussion: https://news.ycombinator.com/item?id=28758106
Thanks! Macroexpanded:
Is this the simplest (and most surprising) sorting algorithm? - https://news.ycombinator.com/item?id=28758106 - Oct 2021 (318 comments)
Question on paper writing... There is only one author listed, but the wording keeps using "we" as in "we show that..." Is it considered poor practice to use "I" in a one-person paper? Or is "we" including the reader in the exposition?
The “royal we” is common for single author academic work. I’m honestly unsure why. StackExchange seems to think it refers to both author and reader together. I think it conveys a sort of distance from the author, giving the writing less of a personal and more general tone.
https://hsm.stackexchange.com/questions/2002/how-did-royal-w...
It's assumed you at some point talked to someone in the lab, or your advisor, or your advisee, or literally anyone about the paper. If you didn't, that's probably problematic. So "we" gives non-formal credit to your community.
I have never heard that as an explanation. That kind of credit is often expressed in foot/end notes, at least in the field I worked in. It is implied that the authors speak for themselves.
Well, IMO if you're giving some trivial level of credit to someone else, you should use 'we', since you're both 'talking'. If you reference them by name in the paper somewhere, I feel even more strongly about my stance.
This is formal academic style, not Wolfram style.
Discussed at the time (318 comments): https://news.ycombinator.com/item?id=28758106
It's more like an insertion sort when visualized https://xosh.org/VisualizingSorts/sorting.html#IYZwngdgxgBAZ...
I made this tool to visualize sorting algos a long time ago.
This is an awesome tool. I noticed a really fun bug: if you start a new, faster sort before a slower sort finishes, the slower sort keeps going and you end up with the two sorts running at the same time, I think against different sets of values. Don't know if I'd bother fixing it, because the output is neat.
This is a sort I use all the time. Especially in coding challenges (AoC) because I'm usually lazy. It's nice to see it formally evaluated, it feels very intuitive to me.
This intrigues me.
Would you be willing ... without looking at this article ... to provide exactly the code you "use all the time"?
I'd like to compare what you write with the code in the article, because code that I write as a quick and dirty sort is definitely not this.
In particular, this seems to take items that are already the right way round, and swaps them. I definitely don't do that. To be specific, I just knocked this out:
#!/usr/bin/python
a = [ 3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9 ]
N = len(a)
for i in range(0, N - 1):
    for j in range(i + 1, N):
        if a[i] > a[j]:
            a[i], a[j] = a[j], a[i]
print(a)
Specifically, note that I ask if a[i] is bigger than a[j] and if so then I swap them. The code in the article appears to do exactly the wrong thing, and I'd be surprised if your "go to" code does it the same way as the code in the article.

Sure, I have a "live" example in my 2024 AoC code. I'll add it here when I get home.
Fabulous ... thanks. I've bookmarked this bit of the thread so I can come back and check it.
Cheers!
This immediately made me wonder if there's some grand unified sorting algorithm, from which all other sorting algorithms can be derived. Could we somehow turn this thing into a quicksort, for example?
You are getting close to sorting networks with that supposition.
It is computer science folklore. Pretty sure I read about it in some popular computer magazine in the '90s.
See also:
I can’t believe that I can prove that it can sort - https://news.ycombinator.com/item?id=31975507 - Jul 2022 (115 comments)
Here's some other sorting algorithms proven with SPARK, with much more straightforward proofs: https://github.com/AdaCore/spark2014/blob/master/testsuite/g...
If you want to test it for yourself https://jsfiddle.net/px2jvzyn/
Algorithm 1 ICan’tBelieveItCanSort(A[1..n]) for i = 1 to n do for j = 1 to n do if A[i] < A[j] then swap A[i] and A[j]
For code formatting, you can prepend two spaces to a line (this is in https://news.ycombinator.com/formatdoc).
(That's what ColinWright did at https://news.ycombinator.com/item?id=43162234 for example.)
Ooh, I could copy A to B and add a final two lines: "else swap B[i] and B[j]"
And now it's constant time!
It just might be the simplest sorting algorithm ever. But it's still just an unoptimized insertion sort, and it's not novel. One would often come across this in computer books from the '80s teaching basic programming concepts and techniques in, well, BASIC.
It certainly is the most clickbaity title I have ever seen on an actual paper.
Are these the world's most crispy fries?
This is well known (google trivial sorting network!), and it would never be published anywhere peer-reviewed.
Good reminder that Arxiv is not peer-reviewed :/
I did search for "trivial sorting network", but the only networks that were called trivial were the ones for exactly two elements, while this algorithm sorts an arbitrary number of elements.
Could you link to what you're talking about? And what's its big-O runtime?
It's delay sort
delay(N); append_output(N)
that's it.
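A sketch of it in Python, for the curious (timers stand in for delay(); genuinely quick and dirty, and only "usually" correct since it races):

import threading, time

def sleep_sort(values):
    out = []
    for v in values:
        # Each value schedules its own append, delayed in
        # proportion to its size.
        threading.Timer(v * 0.01, out.append, [v]).start()
    time.sleep(max(values) * 0.01 + 0.1)  # wait for every timer to fire
    return out

print(sleep_sort([3, 1, 4, 1, 5]))  # [1, 1, 3, 4, 5], usually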
Students often produce this algorithm by mistake in first year programming. It works in spite of the fact that they don't know what they are doing.
Is this a joke paper? This is unoptimized bubble sort. This is the first sort I wrote when I was 13. This is literally the most obvious sort that exists.
Is this a joke comment?
It's not a bubble sort. The article explains.
Right there with you.
This isn't even worthy of a blog post let alone a published research article, the publisher should be embarrassed.
This isn't an innovation. It's just un-optimized bubble sort. Back in college when I was learning bubble sort, I implemented this exact sort as a type of "MVP". And then I added the other flags and stuff they reference to try to polish the turd.
I mean I'm glad someone proved it works, but I thought they proved this works like 40 years ago.
I can't see how this looks like a bubble sort to you, the inner loop isn't even comparing adjacent elements of the array.
Moreover the comparison is the opposite of what you would normally do: if A[i] < A[j] then swap(A[i], A[j])
Worst case bubble sort is n^2 - this one is n^2 for all cases, even an already sorted list.
...Right.. unoptimized bubble sort.
On any bubble sort, optimised or not, no swaps are made on an array that's already sorted.
The routine in the article does.
So no, it's not a bubble sort, unoptimised or otherwise.
If you think it is a bubble sort then I'd be interested in seeing your implementation of a bubble sort, and an explanation as to why they are effectively the same.
I have recently invented a new one.
Does not touch the array, just declares it sorted. I would call it something like "wokesort".
what?
This was a totally failed attempt at a joke.
Feels oddly familiar; I'm pretty sure this was the optimal solution to a leetcode problem (with specific constraints) that I saw during interview practice. Coming up with this on the fly during an interview would have been extra impressive.
I don't see why it's at all surprising.
Each iteration of the outer loop results in the smallest element moving into `A[i]`. And then the first `i` iterations of the inner loop are just wasting time.
> Each iteration of the outer loop moves the smallest element into `A[i]`. The first `i` iterations of the inner loop are just wasting time.
In the first iteration of the outer loop, i=1, and in that case the inner loop moves the largest element into A[1].
So I don't think you can be correct.
This is similar to bubble sort but with the largest element appearing at the beginning index. It's a reverse Bubble Sort.
Except that when the routine finishes, the smallest element is in A[1].