SIMD programming in pure Rust (kerkour.com) | Submitted by randomint64 3 days ago
  • pizlonator 13 hours ago

    This article references the fact that security issues in crypto libs are memory safety issues, and I think this is meant to be a motivator for writing the crypto using SIMD intrinsics.

    This misses two key issues.

    1. If you want to really trust that your crypto code has no timing side channels, then you've gotta write it in assembly. Otherwise, you're at the compiler's whims to turn code that seems like it really should be constant-time into code that isn't. There's no thorough mechanism in compilers like LLVM to prevent this from happening.

    2. If you look at the CVEs in OpenSSL, they are generally in the C code, not the assembly code. If you look at OpenSSL CVEs going back to the beginning of 2023, there is not a single vulnerability in Linux/X86_64 assembly. There are some in the Windows port of the X86_64 assembly (because Windows has a different calling conv and the perlasm mishandled it). There are some on other arches. But almost all of the CVEs are in C, not asm.

    If you want to know a lot more about how I think about this, see https://fil-c.org/constant_time_crypto

    I do think it's a good idea to have crypto libraries implemented in memory safe languages, and that may mean writing them in Rust. But the actual kernels that do the cryptographic computations that involve secrets should be written in asm for maximum security so that you can be sure that sidechannels are avoided and because empirically, the memory safety bugs are not in that asm code.
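    To make the concern concrete, here is a tiny illustration (mine, not from any particular library) of the source-level "constant-time" idiom that an optimizer is free to undo by reintroducing a branch:

```rust
// Branchless select: no data-dependent branch appears in the source,
// but the compiler may still lower this to a conditional jump at its
// discretion -- which is the parent's point about needing asm.
fn ct_select(cond: bool, a: u32, b: u32) -> u32 {
    // All-ones mask when cond is true, all-zeros when false.
    let mask = (cond as u32).wrapping_neg();
    (a & mask) | (b & !mask)
}

fn main() {
    assert_eq!(ct_select(true, 1, 2), 1);
    assert_eq!(ct_select(false, 1, 2), 2);
    println!("ok");
}
```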

    • zbentley 2 hours ago

      Eh, I don't think you need to get that extreme.

      A combination of careful use of a high-level language by people with expert awareness of compiler behavior, plus tests that detect some of the nasty timing behaviors that get compiled in (via static analysis of compiler IR or assembly on selected platforms), will get you pretty far--not guaranteed perfect like handwritten asm would be, but far enough that the advantage of not needing maintainers to be fluent in assembly (beyond maintaining those tests) might outweigh the drawbacks.

      • johnisgood 10 hours ago

        So there are more bugs in a more readable and understandable programming language (C) than in asm? What gives? I am asking because intuition would say the opposite, since asm is much lower-level than C.

        • wahern 8 hours ago

          The core primitives written in assembly operate on fixed sized blocks of data; no allocations, no indexing arrays based on raw user controlled inputs, etc. Moreover, the nature of the algorithms--at least the parts written in assembly, e.g. block transforms--means any bugs tend to result in complete garbage and are caught early during development.
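          As a sketch of that shape (a toy, not a real primitive): every size is fixed at compile time, so there is no allocation and no untrusted index for a memory-safety bug to latch onto:

```rust
// Toy "block transform" in the shape described above: fixed-size state,
// fixed-size block, no allocation, no indices derived from untrusted input.
// (Illustrative only -- not a real cryptographic primitive.)
fn absorb_block(state: &mut [u32; 4], block: &[u8; 16]) {
    for (i, word) in block.chunks_exact(4).enumerate() {
        // try_into is infallible here: chunks_exact(4) always yields slices of length 4.
        state[i] ^= u32::from_le_bytes(word.try_into().unwrap());
    }
}

fn main() {
    let mut state = [0u32; 4];
    absorb_block(&mut state, &[1u8; 16]);
    assert_eq!(state, [0x0101_0101; 4]);
    println!("ok");
}
```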

          • itemize123 10 hours ago

            compiler optimization is a blackbox. shortcuts to crypto routines will allow side channel attacks

            • formerly_proven 8 hours ago

              Crypto primitives tend to have very simple control flow (those that don’t are usually insecure) and even simpler data structures. You won’t find many branches beyond “is there another block?” in a typical block cipher or hash, for example.

            • ironbound 9 hours ago

              I can't review assembly. Agreed on the better-language part, but we'd need tooling to help prove its correctness if you want the asm path.

              • tucnak 7 hours ago

                Well, is it that you can't or that you won't? All you need to do is learn it, you know? The full x86 ISA for one is quite nasty, but there's a useful subset, and other architectures are much nicer in general. Asm is as basic as it gets.

            • shihab 15 hours ago

              > For example, NEON ... can hold up to 32 128-bit vectors to perform your operations without having to touch the "slow" memory.

              Something I recently learnt: the actual number of physical registers in modern x86 CPUs is significantly larger, even for 512-bit SIMD. Zen 5 CPUs actually have 384 vector registers: 384 * 512 bits = 24 KB!

              • cmovq 15 hours ago

                This is true, but if you run out of the 32 register names you’ll still need to spill to memory. The large register file is to allow for multiple instructions to execute in parallel among other things.

                • zeusk 13 hours ago

                  They’re used by the internal register renamer/allocator, so if it sees you’re storing a result to memory and then reusing the named register for a new result, it will allocate a new physical register so your instruction doesn’t stall waiting for the previous write to go through.

                  • adrian_b an hour ago

                    I do not understand what you want to say.

                    The register renamer allocates a new physical register when you attempt to write the same register as a previous instruction, as otherwise you would have to wait for that instruction to complete, and you would also have to wait for any instructions that would want to read the value from that register.

                    When you store a value into memory, the register renamer does nothing, because you do not attempt to modify any register.

                    The only optimization is that if a following instruction attempts to read the value stored in the memory, that instruction does not wait for the previous store to complete, in order to be able to load the stored value from the memory, but it gets the value directly from the store queue. But this has nothing to do with register renaming.

                    Thus if your algorithm has already used all the visible register numbers, and you will still need all the values that currently occupy them, then you have to store one register to memory, typically on the stack, and the register renamer cannot do anything to prevent this.

                    This is why Intel will increase the number of architectural general-purpose registers of x86-64 from 16 to 32, matching Arm Aarch64 and IBM POWER, with the APX ISA extension, which will be available in the Nova Lake desktop/laptop CPUs and in the Diamond Rapids server CPUs, which are expected by the end of this year.

                    Register renaming is a typical example of the general strategy that is used when shared resources prevent concurrency: the shared resources must be multiplied, so that each concurrent task uses its private resource.

                  • dapperdrake 15 hours ago

                    In the register file or named registers?

                    And the critical matrix tiling size is often SRAM, so L3 unified cache.

                  • rwaksmunski 3 days ago

                    Every Rust SIMD article should mention the .chunks_exact() auto vectorization trick by law.
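                    For readers who haven't seen it, the trick is roughly this (a minimal sketch): chunks_exact gives the inner loop a compile-time-known trip count, which is usually enough for LLVM to auto-vectorize it:

```rust
// The chunks_exact auto-vectorization trick: the fixed-size inner loop has
// a known trip count and no bounds checks LLVM can't eliminate, so it
// typically vectorizes without any explicit SIMD intrinsics.
fn xor_in_place(dst: &mut [u8], src: &[u8]) {
    assert_eq!(dst.len(), src.len());
    let mut d = dst.chunks_exact_mut(16);
    let mut s = src.chunks_exact(16);
    for (dc, sc) in (&mut d).zip(&mut s) {
        for i in 0..16 {
            dc[i] ^= sc[i];
        }
    }
    // chunks_exact hands back the leftover tail explicitly.
    for (db, sb) in d.into_remainder().iter_mut().zip(s.remainder()) {
        *db ^= *sb;
    }
}

fn main() {
    let src: Vec<u8> = (0..37).collect();
    let mut dst = vec![0xAAu8; 37];
    let orig = dst.clone();
    xor_in_place(&mut dst, &src);
    xor_in_place(&mut dst, &src); // XOR twice restores the original
    assert_eq!(dst, orig);
    println!("ok");
}
```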

                    • ChadNauseam 16 hours ago

                      Didn't know about this. Thanks!

                      Not related, but I often want to see the next or previous element when I'm iterating. When that happens, I always have to switch to an index-based loop. Is there a function that returns Iter<Item=(T, Option<T>)> where the second element is a lookahead?
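                      One sketch of that shape using only std pieces (the exact adapter asked for isn't in std, but Peekable's one-element lookahead gets there when items are Clone):

```rust
// Sketch: one-element lookahead built from std's Peekable -- each step
// yields the current item plus an Option of the next, close to the
// Iterator<Item = (T, Option<T>)> shape asked about.
fn with_lookahead<I: Iterator>(iter: I) -> impl Iterator<Item = (I::Item, Option<I::Item>)>
where
    I::Item: Clone,
{
    let mut it = iter.peekable();
    std::iter::from_fn(move || {
        let cur = it.next()?;
        Some((cur, it.peek().cloned()))
    })
}

fn main() {
    let v: Vec<_> = with_lookahead([1, 2, 3].into_iter()).collect();
    assert_eq!(v, vec![(1, Some(2)), (2, Some(3)), (3, None)]);
    println!("ok");
}
```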

                  • dfajgljsldkjag 16 hours ago

                    The benchmarks on Zen 5 are absolutely insane for just a bit of extra work. I really hope the portable SIMD module stabilizes soon, so we do not have to keep rewriting the same logic for NEON and AVX every time we want to optimize something. That example about implementing ChaCha20 twice really hit home for me.

                    • vintagedave 5 hours ago

                      I was surprised not to see Sleef mentioned as an option. It’s available for Rust and handles the architecture agnostic or portable needs they have.

                      https://docs.rs/sleef/latest/sleef/

                      • crote 3 days ago

                        What is the "nasty surprise" of Zen 4 AVX512? Sure, it's not quite the twice as fast you might initially assume, but (unlike Intel's downclocking) it's still a strict upgrade over AVX2, is it not?

                        • cogman10 17 hours ago

                          It's splitting a 512-bit instruction into two 256-bit instructions internally. That's the main nasty surprise.

                          I suppose it saves a little on the decoding portion, but it's ultimately no more effective than just issuing the two 256-bit instructions yourself.

                          • adrian_b an hour ago

                            Most 512-bit instructions are not split into two 256-bit instructions, neither on Zen 4, nor on laptop Zen 5. This is a myth caused by a very poor choice of words of the AMD CEO at the initial Zen 4 presentation.

                            For most 512-bit instructions that operate on the vector registers, both Zen 4 and all the Intel CPUs supporting AVX-512 have an identical throughput: two 512-bit instructions per clock cycle.

                            There are only a few instructions where Zen 4 is inferior to the most expensive of the Intel server/workstation CPUs, but those are important instructions for some applications.

                            The Intel CPUs have a double throughput for the transfers with the L1 cache memory. Zen 4 can do only one 512-bit load per cycle plus one 512-bit store every other cycle. The Intel CPUs with AVX-512 support and Zen 5 (server/desktop/Halo) can do two 512-bit loads plus one 512-bit store per cycle.

                            The other difference is that the most expensive Intel CPUs (typically Gold/Platinum Xeons) have a second floating-point multiplier, which is missing on Zen 4 and on the cheaper Intel SKUs. Thus Zen 4 can do one fused multiply-add (or FMUL) plus one FP addition per cycle, while the most expensive Intel CPUs can do two FMA or FMUL per cycle. This results in a double performance for the most expensive Intel CPUs vs. Zen 4 in many linear algebra benchmarks, e.g. Linpack or DGEMM. However there are many other applications of AVX-512 besides linear algebra, where a Zen 4 can be faster than most or even all Intel CPUs.

                            On the other hand, server/desktop/Halo Zen 5 has a double throughput for most 512-bit instructions in comparison with any Intel CPU. Presumably the future Intel Diamond Rapids server CPU will match the throughput of Zen 5 and Zen 6, i.e. of four 512-bit instructions per clock cycle.

                            On Zen 4, using AVX-512 provides very significant speedups in most cases over AVX2, despite the fact that the same execution resources are used. This proves that there still are cases when the ISA matters a lot.

                            • nwallin 11 hours ago

                              Single pumped AVX512 can still be a lot more effective than double pumped AVX2.

                              AVX512 has 2048 bytes of named registers; AVX2 has 512 bytes. AVX512 uses dedicated out-of-band mask registers; AVX2 masking is in-band, burning vector registers. AVX512 has better options for swizzling values around. All (almost all?) AVX512 instructions have masked variants, allowing you to combine an operation and a subsequent mask operation into a single instruction.

                              Often times I'll write the AVX512 version first, and go to write the AVX2 version, and a lot of the special sauce that made the AVX512 version good doesn't work in AVX2 and it's real awkward to get the same thing done.

                              • MobiusHorizons 17 hours ago

                                The benefit seems to be that we are one step closer to not needing to have the fallback path. This was probably a lot more relevant before Intel shit the bed with consumer avx-512 with e-cores not having the feature

                                • adgjlsfhk1 9 hours ago

                                  Predicated instructions are incredibly useful (and avx-512 only). They let you get rid of the usual tail handling at the end of the loop.

                                  • convolvatron 16 hours ago

                                    avx-512 for zen4 also includes a bunch of instructions that weren't in the 256-bit ISA, including enhanced masking, 16-bit floats, bit instructions, and a double-sized, double-width register file

                                  • fooker 15 hours ago

                                    > it's still a strict upgrade over AVX2

                                    If you benchmark it, it will be slower about half the time.

                                    • adrian_b 37 minutes ago

                                      I do not know what benchmarks you have in mind.

                                      All the benchmarks that I have ever seen published about AVX-512 vs. AVX2 on AMD Zen 4 have shown better performance with AVX-512. Very frequently, the AVX-512 performance was not a little better but much better, despite the fact that on Zen 4 both program versions use exactly the same execution resources (AVX-512 programs have fewer instructions, which avoids front-end bottlenecks).

                                      • adgjlsfhk1 9 hours ago

                                        for the simplest cases it will be about the same speed as avx2, but if you're trying to do anything fancy, the extra registers and instructions are a godsend.

                                        • fooker 3 hours ago

                                          Well, try it out for a realistic program.

                                          It makes for nice looking code, yes. But is often slower (for various reasons that are well understood by now).

                                          • adrian_b 35 minutes ago

                                            Please mention some of those reasons.

                                    • jeffbee 16 hours ago

                                      "Intel CPUs were downclocking their frequency when using AVX-512 instructions due to excessive energy usage (and thus heat generation) which led to performance worse than when not using AVX-512 acceleration."

                                      This is an overstatement so gross that it can be considered false. On Skylake-X, for mixed workloads that only had a few AVX-512 instructions, a net performance loss could have happened. On Ice Lake and later this statement was not true in any way. For code like ChaCha20 it was not true even on Skylake-X.

                                      • rurban 10 hours ago

                                        This was written in the past tense, and it was true in the last decade. Only recently did Intel come up with a proper AVX-512 implementation.

                                        • adrian_b 22 minutes ago

                                          "Recently" is 6 years ago, so not so recent.

                                          The real Intel mistake was that they have segregated by ISA the desktop/laptop CPUs and the server CPUs, by removing AVX-512 from the former, soon after providing decent AVX-512 implementations. This doomed AVX-512 until AMD provided it again in Zen 4, which has forced Intel to eventually reintroduce it in Nova Lake, which is expected by the end of this year.

                                          Even the problems of Skylake Server and of its derivatives were not really caused by their AVX-512 implementation, which still had a much better energy efficiency than their AVX2 implementation, but by their obsolete implementation for varying the supply voltage and clock frequency of the CPU, which was far too slow, so it had to use an inappropriate algorithm in order to guarantee that the CPUs are not damaged.

                                          The bad algorithm for frequency/voltage control was what caused the performance problems of AVX-512 (i.e. just a few AVX-512 instructions could lower preventively the clock frequency for times comparable with a second, because the CPU feared that if more AVX-512 instructions would come in the future it would be impossible to lower the voltage and frequency fast enough to prevent overheating).

                                          The contemporaneous Zen 1 had a much more agile mechanism for varying supply voltage and clock frequency, which was matched by Intel only recently, many years later.

                                          • jeffbee 10 hours ago

                                            It wasn't. My comment covers the entire history of the ISA extension on Intel Xeon CPUs.

                                          • celrod 10 hours ago

                                            I netted huge performance wins out of AVX512 on my Skylake-X chips all the time. I'm excited about less downclocking and smarter throttling algorithms, but AVX512 was great even without them -- mostly just hampered by poor hardware availability, poor adoption in software, and some FUD.

                                            • cl0ckt0wer 13 hours ago

                                              Yeah I would have loved benchmarks across generations and vendors.

                                            • wyldfire 13 hours ago

                                              Does Rust provide architecture-specific intrinsics like C/C++ toolchains usually do? That's a popular way to do SIMD.

                                              • steveklabnik 13 hours ago
                                                • m-hilgendorf 10 hours ago

                                                  People should be aware though that without `-C target_feature=+<feature>` in your rustc flags the compiler may emit function calls to stubs for the intrinsic. So people should make sure they're passing the appropriate target features, especially when benchmarking.

                                                  [0] https://godbolt.org/z/85nx44zcE

                                                  edited: I tested gcc/clang and they just straight up fail to compile without -msse3. The generated code without optimizations is also pretty bonkers!
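                                                  One pattern that sidesteps the global flag (a sketch, assuming x86_64; the scalar fallback keeps it compiling and running elsewhere) is a per-function target feature plus runtime detection:

```rust
// Runtime-dispatched SIMD add using std::arch intrinsics. The
// #[target_feature] attribute lets this one function use AVX2 even when
// the crate is built without -C target-feature=+avx2, avoiding the
// stub-call problem mentioned above.
#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "avx2")]
unsafe fn add_avx2(a: &[f32; 8], b: &[f32; 8]) -> [f32; 8] {
    use std::arch::x86_64::*;
    let va = _mm256_loadu_ps(a.as_ptr());
    let vb = _mm256_loadu_ps(b.as_ptr());
    let mut out = [0.0f32; 8];
    _mm256_storeu_ps(out.as_mut_ptr(), _mm256_add_ps(va, vb));
    out
}

fn add8(a: &[f32; 8], b: &[f32; 8]) -> [f32; 8] {
    #[cfg(target_arch = "x86_64")]
    {
        if is_x86_feature_detected!("avx2") {
            // Safe: we just verified at runtime that the CPU supports AVX2.
            return unsafe { add_avx2(a, b) };
        }
    }
    // Scalar fallback for other CPUs/architectures.
    let mut out = [0.0f32; 8];
    for i in 0..8 {
        out[i] = a[i] + b[i];
    }
    out
}

fn main() {
    assert_eq!(add8(&[1.0f32; 8], &[2.0f32; 8]), [3.0f32; 8]);
    println!("ok");
}
```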

                                                • karavelov 13 hours ago

                                                  Yes, in the submodules of `std::arch`

                                                • nice_byte 12 hours ago

                                                  it's hard to believe that using simd extensions in rust is still as much of a chaotic clusterfudge as it was the first time I looked into it. no support in standard library? 3 different crates? might as well write inline assembly...

                                                  • steveklabnik 11 hours ago

                                                    The standard library offers intrinsics but not a portable high level API just yet.

                                                  • formerly_proven 16 hours ago

                                                    Lazy man's "kinda good enough for some cases" SIMD in pure Rust is to simply target x86-64-v3 (RUSTFLAGS=-Ctarget-cpu=x86-64-v3), which is supported by all AMD Zen CPUs and by Intel CPUs since Haswell; and for floating-point code, which cannot be auto-vectorized due to the accuracy implications, "simply" write it with explicit four- or eight-way lanes, and LLVM will do the rest. Usually. Loops may need explicit handling of the head or tail to auto-vectorize (chunks_exact helps with this: it hands you the tail).
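                                                    The explicit-lanes pattern, as a minimal sketch (my example, not from the article): multiple accumulators give LLVM permission to keep the non-associative float sum in vector lanes, and chunks_exact hands back the tail:

```rust
// Eight explicit accumulator lanes so LLVM can keep the running sum in a
// vector register. FP addition is not associative, so the compiler will
// not reorder a naive fold on its own; spelling out the lanes gives it
// permission to vectorize.
fn sum_f32(xs: &[f32]) -> f32 {
    let mut lanes = [0.0f32; 8];
    let mut chunks = xs.chunks_exact(8);
    for chunk in &mut chunks {
        for i in 0..8 {
            lanes[i] += chunk[i];
        }
    }
    // chunks_exact hands back the tail explicitly.
    let tail: f32 = chunks.remainder().iter().sum();
    lanes.iter().sum::<f32>() + tail
}

fn main() {
    let xs: Vec<f32> = (1..=100).map(|i| i as f32).collect();
    assert_eq!(sum_f32(&xs), 5050.0);
    println!("ok");
}
```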