• bmc7505 3 hours ago
    • andrewdb 29 minutes ago

      Why do we call them GPUs these days?

      Most GPUs, sitting in racks in datacenters, aren't "processing graphics" anyhow.

      • xeonmc 20 minutes ago

        General Processing Units

        Gross-Parallelization Units

        Generative Procedure Units

        Gratuitously Profiteering Unscrupulously

• nomercy400 25 minutes ago

  I was taught years ago that both MUL and ADD can be implemented in one or a few cycles, i.e. that they can be of comparable complexity. What am I missing here?

  Also, is it possible to use the GPU's native ADD/MUL hardware? That is what a GPU does best.

• deep1283 2 hours ago

  This is a fun idea. What surprised me is the inversion where MUL ends up faster than ADD, because the neural LUT removes the sequential dependency while the adder still needs its prefix stages.
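
  A toy illustration of that inversion (my own sketch, not the project's code): a bitwise add has a serial carry chain, shown below as a ripple-carry loop for simplicity (a prefix adder cuts this to log-depth stages, but the cross-bit dependency remains), while a table-based multiply is one dependency-free lookup, which is exactly the shape of work a neural LUT maps well onto a GPU.

      def ripple_add(a, b, bits=8):
          # Each bit depends on the carry produced by the previous bit,
          # so the work is inherently sequential.
          carry, out = 0, 0
          for i in range(bits):
              ai, bi = (a >> i) & 1, (b >> i) & 1
              out |= (ai ^ bi ^ carry) << i
              carry = (ai & bi) | (carry & (ai ^ bi))
          return out

      # An 8-bit multiply as a precomputed table: one parallel lookup,
      # no inter-bit dependency chain.
      MUL_LUT = [[(x * y) & 0xFF for y in range(256)] for x in range(256)]

      assert ripple_add(23, 42) == (23 + 42) & 0xFF
      assert MUL_LUT[7][6] == 42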

• lorenzohess 3 hours ago

  Out of curiosity, how much slower is this than an actual CPU?

• sudo_cowsay 3 hours ago

  "Multiplication is 12x faster than addition..."

  Wow. That's cool, but how does that compare to a regular CPU?

    • adrian_b 2 hours ago

      This CPU simulator does not attempt to achieve the maximum speed that could be obtained when simulating a CPU on a GPU.

      For that, a completely different approach would be needed, e.g. implementing something akin to qemu, where each CPU instruction would be translated into a graphic shader program. On many older GPUs it is impossible or difficult to launch a GPU program from inside another GPU program (instead of from the CPU), but where this is possible, one could obtain a CPU emulation many orders of magnitude faster than what is demonstrated here.

      Instead of going for speed, the project demonstrates a simpler, self-contained implementation based on the same kind of neural networks used for ML/AI, which might work even on an NPU, not only on a GPU.

      Because it uses ill-suited hardware execution units, the speed is modest and the speed ratios between different kinds of instructions are odd; nonetheless, simulating the complete AArch64 ISA by such means is an impressive achievement.
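
      To make the contrast concrete, a rough host-side sketch of that translate-then-dispatch idea (hypothetical names, no real GPU API involved): decode each instruction once and bind it to a precompiled kernel, so the per-instruction cost at run time is a dispatch rather than a neural-network evaluation.

          # Each "kernel" stands in for a precompiled GPU program.
          MASK64 = (1 << 64) - 1

          def k_add(r, d, a, b): r[d] = (r[a] + r[b]) & MASK64
          def k_mul(r, d, a, b): r[d] = (r[a] * r[b]) & MASK64

          KERNELS = {"ADD": k_add, "MUL": k_mul}

          def translate(program):
              # "Compile" once: bind each decoded instruction to its kernel
              # and operands, instead of re-decoding on every step.
              return [lambda r, k=KERNELS[op], d=d, a=a, b=b: k(r, d, a, b)
                      for op, d, a, b in program]

          regs = [0] * 32
          regs[1], regs[2] = 6, 7
          for step in translate([("MUL", 0, 1, 2), ("ADD", 3, 0, 0)]):
              step(regs)
          assert regs[0] == 42 and regs[3] == 84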

      • 5o1ecist an hour ago

        > where each CPU instruction would be translated into a graphic shader program

        You really think having a shader per CPU instruction is going to get you closer to the highest possible speed one can achieve?

    • nicman23 3 hours ago

      Can I run Linux on an Nvidia card, though?

      • micw 2 hours ago

        Linux runs everywhere.

      • mrlonglong 2 hours ago

        Now I've seen it all. Time to die... (meant humorously)

• Surac 3 hours ago

  Well, GPUs are just special-purpose CPUs.

• RagnarD 3 hours ago

  Being able to perform precise math in an LLM is important; glad to see this.

    • jdjdndnzn 3 hours ago

      Just want to point out that this comment is highly ironic.

      This is all a computer does :P

      We need LLMs to be able to tap into that, not add the same functionality a layer above, and MUCH less efficiently.

      • Nuzzerino 3 hours ago

        > We need LLMs to be able to tap into that, not add the same functionality a layer above, and MUCH less efficiently.

        Agents, tool-integrated reasoning, and even chain of thought (to a limited extent, for some math) can address this.

        • RagnarD an hour ago

          You're both completely missing the point. It's important that an LLM be able to perform exact arithmetic reliably without a tool call. Of course the underlying hardware does so extremely rapidly; that's not the point.

      • 5o1ecist an hour ago

        Why?

      • MadnessASAP 2 hours ago

        Y'know, just today I was thinking about a way to compile a neural network down to assembly: matching and replacing neural network structures with their closest machine-code equivalents (toy sketch at the end of this comment).

        This is way cooler though! Instead of efficiently running a neural network on a CPU, I can inefficiently run my CPU on a neural network! With the work being done to make more powerful GPUs and ASICs, I bet in a few years I'll be able to run a 486 at 100 MHz(!!) with power consumption just under a megawatt! The mind boggles at the sort of computations this will unlock!

        A few more years and I'll even be able to realise the dream of self-hosting ChatGPT on my own neural-network-simulated CPU!
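
        The promised toy sketch of that first idea (entirely hypothetical, not tied to the article): unroll a fixed 2x2 linear layer plus ReLU into a flat list of MUL/ADD/MAX "instructions" over named registers.

            W, B = [[2, -1], [0, 3]], [1, -2]   # toy weights and bias

            def compile_layer(W, B):
                # Emit straight-line ops: no loops or matmul left at "run time".
                prog = []
                for i, (row, b) in enumerate(zip(W, B)):
                    prog.append(("LOADI", f"acc{i}", b))
                    for j, w in enumerate(row):
                        prog.append(("MULI", f"t{i}{j}", f"x{j}", w))
                        prog.append(("ADD", f"acc{i}", f"acc{i}", f"t{i}{j}"))
                    prog.append(("MAXI", f"y{i}", f"acc{i}", 0))  # ReLU
                return prog

            def run(prog, x):
                # A trivial interpreter standing in for real machine code.
                env = {f"x{j}": v for j, v in enumerate(x)}
                for op, dst, *src in prog:
                    if op == "LOADI": env[dst] = src[0]
                    elif op == "MULI": env[dst] = env[src[0]] * src[1]
                    elif op == "ADD":  env[dst] = env[src[0]] + env[src[1]]
                    elif op == "MAXI": env[dst] = max(env[src[0]], src[1])
                return [env["y0"], env["y1"]]

            # relu(W @ [1, 2] + B) == [1, 4]
            assert run(compile_layer(W, B), [1, 2]) == [1, 4]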