• pron 6 hours ago

    I was surprised to see that Java was slower than C++, but the Java code is run with `-XX:+UseSerialGC`, which is the slowest GC; it's meant for very small systems and optimises for memory footprint over performance. Also, no heap size is set, which makes it hard to know what exactly is being measured. Java allows trading off CPU for RAM and vice versa. The result would be meaningful if an appropriate GC were used (Parallel, for this batch job) and with different heap sizes. If the rules say the program should take less than 8GB of RAM, then it's best to configure the heap to 8GB (or a little lower). Also, System.gc() shouldn't be invoked.

    Don't know if that would make a difference, but that's how I'd run it, because in Java, the heap/GC configuration is an important part of the program and how it's actually executed.

    Of course, the most recent JDK version should be used (I guess the most recent compiler version for all languages).
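
    A sketch of the invocation described above (the exact heap figure and the jar name are placeholders, not taken from the benchmark):

    ```shell
    # Parallel (throughput) GC suits a batch job; pinning -Xms to -Xmx avoids
    # heap resizing mid-run, and sizing a little under the 8GB budget leaves
    # room for the JVM's own off-heap overhead. -XX:+DisableExplicitGC turns
    # any stray System.gc() calls into no-ops. The jar name is hypothetical.
    java -XX:+UseParallelGC -Xms7g -Xmx7g -XX:+DisableExplicitGC -jar benchmark.jar
    ```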

    • rockwotj 4 hours ago

      It’s so hard to actually benchmark languages, because so much depends on the dataset. I'm pretty sure that with simdjson and some tricks I could write C++ (or Rust) that would top the leaderboard (see some of the techniques from the One Billion Row Challenge!).

      Tbh, for silly benchmarks like this it will ultimately be hard to beat a language that compiles to machine code, due to JIT warmup etc.

      It’s hard to do benchmarks right. For example: are you testing IO performance? Are OS caches flushed between language runs? What kind of disk is used, etc.? Performance does not exist in a vacuum of just the language or algorithm.

      • pron 2 hours ago

        > due to jit warmup

        I think this harness actually uses JMH, which measures after warmup.

      • KerrAvon 3 hours ago

        Why are you surprised? Java always suffers from abstraction penalty for running on a VM. You should be surprised (and skeptical) if Java ever beats C++ on any benchmark.

        • pron 2 hours ago

          The only "abstraction penalty" of "running on a VM" (by which I think you mean using a JIT compiler) is the warmup time of waiting for the JIT.

          • woooooo 2 hours ago

            For the most naive code, if you're calling `new` multiple times per row, maybe Java benefits from out-of-band GC while C++ calls destructors and free() inline as things go out of scope?

            Of course, if you're optimizing, you'll reuse buffers and objects in either language.
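
            A sketch of the two patterns, per-row allocation versus a reused buffer (hypothetical row-processing code, not taken from the benchmark):

            ```java
            import java.util.ArrayList;
            import java.util.List;

            public class BufferReuse {
                // Naive: allocate a fresh StringBuilder per row. In Java this is
                // extra GC pressure; in C++ it would be an inline free() per row.
                static int naiveTotal(List<String> rows) {
                    int total = 0;
                    for (String row : rows) {
                        StringBuilder sb = new StringBuilder(); // new allocation each iteration
                        sb.append(row).append('\n');
                        total += sb.length();
                    }
                    return total;
                }

                // Optimized: reuse one buffer across rows, as you would in either language.
                static int reusedTotal(List<String> rows) {
                    int total = 0;
                    StringBuilder sb = new StringBuilder();
                    for (String row : rows) {
                        sb.setLength(0); // reset instead of reallocating
                        sb.append(row).append('\n');
                        total += sb.length();
                    }
                    return total;
                }

                public static void main(String[] args) {
                    List<String> rows = new ArrayList<>();
                    for (int i = 0; i < 1000; i++) rows.add("row-" + i);
                    int a = naiveTotal(rows), b = reusedTotal(rows);
                    if (a != b) throw new AssertionError("results differ");
                    System.out.println("both paths total " + a + " chars");
                }
            }
            ```

            Both produce the same result; the difference is only in allocation behavior, which is exactly what a GC-sensitive benchmark ends up measuring.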

        • hgs3 6 minutes ago

          Why is there no C benchmark? The C++ benchmark appears to be "modern C++" which isn't a substitute.

          • jhack 5 hours ago

            D gets no respect. It's a solid language with a lot of great features and conveniences compared to C++ but it barely gets a passing mention (if that) when language discussions pop up. I'd argue a lot of the problems people have with C++ are addressed with D but they have no idea.

            • rsyring 3 hours ago

              Could say the same for Nim.

              But popularity/awareness/ecosystem matter.

              • Ygg2 3 hours ago

                If the difference in performance between the target language and C++ is huge, it's probably not the language that's great, but some quirk of implementation.

              • piskov 4 hours ago

                C# is very fast (see the multicore rating). With an implementation based on SIMD (vectors), memory spans, stackalloc, source generators and what have you, modern C# lets you go very low-level and very fast.

                Probably even faster under .NET 10.

                Though using Stopwatch for the benchmark is killing me :-) I wonder if multiple runs via BenchmarkDotNet would show better times (also due to JIT optimizations). For example, the Java code had more warm-up iterations before measuring.

                • XJ6w9dTdM an hour ago

                  I was very surprised to see the results for common lisp. As I scrolled down I just figured that the language was not included until I saw it down there. I would have guessed SBCL to be much faster. I checked it out locally and got: Rust 9ms, D: 16ms, and CL: 80ms.

                  Looking at the implementation: only adding type annotations gave a ~10% improvement. Then making the tag-map use vectors as values, which is more appropriate than lists (imo), gave a 40% improvement over the initial version. By additionally cutting a few allocations, the total time is halved. I'm guessing other languages have similarly easy improvements available.

                  • von_lohengramm 4 hours ago

                    This entire benchmark is frankly a joke. As other commenters have pointed out, the compiler flags make no sense, they use pretty egregious ways to measure performance, and ancient versions are being used across the board. Worst of all, the code quality in each sample is extremely variable and some are _really_ bad.

                    • another_twist 4 hours ago

                      I mean, this is only meant to be an iteration, if I understand correctly. It's not like someone is going around citing this benchmark yelling "rewrite everything in Julia / D". Imo this is a good starting point if you are doubtful, or fall into the trap of thinking Java is not fast. For most workloads we can clearly see that Java trades off the control of C++ for "about the same speed" and a much, much larger and well-managed ecosystem. (Except for the other day, when someone's OpenJDK PR was left hanging for a month, which I am not sure why.)

                      • inkyoto 36 minutes ago

                        Quality does vary wildly because the languages vary wildly in terms of language constructs and standard libraries. Proficiency in every.single.language. used in the benchmark perhaps should not be taken for granted.

                        But it is a GitHub repository, and the repository owner appears to accept PRs and lets people raise issues to provide their feedback, or… it can be forked and improved upon. Feel free to jump in and contribute to make it a better benchmark, one that is not «frankly a joke» or «_really_ bad».

                      • gethly 21 minutes ago

                        Go being beaten by C# in multicore is quite hard to believe. Also Zig and Odin doing so "poorly" in single core is strange.

                        • jasonjmcghee 2 hours ago

                          What's up with the massive jump from 20k to 60k for nearly all languages?

                          • foota 2 hours ago

                            My guess would be cache related. 5k probably fits in L1-L2 cache, whereas 20k might put you into L3.
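
                            That guess can be probed with a toy sweep over working-set sizes; the sizes and where the jump lands are machine-dependent assumptions, and this only illustrates the measurement idea, not the benchmark's actual data:

                            ```java
                            import java.util.Random;

                            public class CacheSweep {
                                // Sum over `data` in a randomized order; once the working set
                                // exceeds a cache level, time per element jumps.
                                static long sum(long[] data, int[] order) {
                                    long s = 0;
                                    for (int idx : order) s += data[idx];
                                    return s;
                                }

                                public static void main(String[] args) {
                                    Random rnd = new Random(42);
                                    // ~32KB, ~3.2MB, ~32MB of longs: roughly L1-, L2/L3-, and RAM-sized.
                                    for (int n : new int[]{4_000, 400_000, 4_000_000}) {
                                        long[] data = new long[n];
                                        int[] order = new int[n];
                                        for (int i = 0; i < n; i++) { data[i] = i; order[i] = rnd.nextInt(n); }
                                        long t0 = System.nanoTime();
                                        long s = sum(data, order);
                                        double nsPerElem = (System.nanoTime() - t0) / (double) n;
                                        System.out.printf("n=%d sum=%d %.2f ns/element%n", n, s, nsPerElem);
                                    }
                                }
                            }
                            ```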

                          • matthewfcarlson 3 hours ago

                            I see some questions around the methodology of the testing. But is this representative of Ruby? Several minutes total when most finish under a second?

                            • Imustaskforhelp 6 hours ago

                              This is really interesting. Julia is a beast compared to Python.

                              Nowadays, whenever I see benchmarks of different languages, I really compare them to benjdd.com/languages or benjdd.com/languages2.

                              I ended up creating a visualization of this data, if anybody's interested:

                              https://serjaimelannister.github.io/data-processing-benchmar...

                              (Credits given to both sources in the description of the repo.)

                              (Also, fair disclosure: it was generated just out of curiosity about how this benchmark data might look in benjdd's UI, and I used LLMs to prototype it. The result looks pretty similar imo, so full credit goes to benjdd's awesome visualization; I just wanted to see this data in that form for myself, and ended up putting it open source on GitHub Pages.)

                              I think benjdd's on Hacker News too, so hi Ben! Your website's really cool!

                              • gus_massa 4 hours ago

                                Someone replied to me in an old comment that for fast Python you have to use numpy. In the folder there is a program in plain Python, another with numpy, and another with numba. I'm not sure why only one is shown in the data.

                                Disclaimer: I used numpy and numba, but my level is quite low. Almost as if I just type `import numpy as np` and hope for the best.

                                • another_twist 4 hours ago

                                  > Almost as if I just type `import numpy as np` and hope for the best.

                                  As do we all. If you browse through deep learning code, a large majority of it is tensor juggling.

                              • aatd86 3 hours ago

                                Isn't that measuring the speed of JSON encoding instead?

                                • sergiotapia an hour ago

                                  I wrote a script (now basically an app, haha) to migrate data from EMR #1 to EMR #2, and I chose Nim because it feels like Python but is fast as hell. Claude Code did a fine job understanding and writing Nim, especially when I gave it more explicit instructions in the system prompt.

                                  • pyrolistical 4 hours ago

                                    That’s odd: Zig concurrent got slower.

                                    • another_twist 4 hours ago

                                      Contention overhead, likely. Performance is more than just the language.

                                      • pyrolistical 2 hours ago

                                        Also, the code is 3 years old. Zig has been rewritten in that time.

                                    • Vaslo 5 hours ago

                                      So in the D vs Zig vs Rust vs C fight, learn D if speed is your thing?

                                      • Ygg2 3 hours ago

                                        That only applies in an apples-to-apples comparison, i.e., same data structures, same algorithm, etc. You can't compare sorting in C and Python but use bubble sort in C and radix sort in Python.

                                        Here, different data structures are being used:

                                        > D [HO] and Julia [HO] footnote: uses specialized data structures meant for demonstration purposes

                                      • KerrAvon 3 hours ago

                                        Genuine question: are GitHub workflow runners stable enough to be used for benchmarking? Is CPU time-quantum scheduling guaranteed to be the same from run to run?

                                        • vlovich123 25 minutes ago

                                          No, it’s sloppy benchmarking

                                        • ekianjo 4 hours ago

                                          A data processing benchmark, but somehow R is not even mentioned?

                                          • mcdermott 3 hours ago

                                            It would be the slowest language result on the list.

                                            • ekianjo an hour ago

                                              Slower than Python? I seriously doubt that.