Author here for all your 8086 questions...
Did the byte order they picked make things simpler or more complex? It is notable that new RISC designs not much later all started out big-endian, implying that it is simpler. Can you even tell the endianness from the dies?
The byte order doesn't make much difference. The more important difference compared to a typical RISC chip is that the 8086 supports unaligned memory access. So there's some complicated bus circuitry to perform two memory accesses and shuffle the bytes if necessary.
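To make that concrete, here's a rough C sketch of what an unaligned word read has to do on a 16-bit bus. This is just a toy model for illustration (the `mem` array and `bus_read` helper are stand-ins I made up), not the actual 8086 bus circuitry, but it shows the two-cycle-plus-shuffle idea:

    #include <stdint.h>
    #include <stdio.h>

    /* Toy model of a 16-bit data bus: memory hands back one aligned
       word per bus cycle, so an unaligned word access needs two cycles
       plus some byte shuffling. */
    static uint8_t mem[16] = {0x00, 0x34, 0x12, 0x00};  /* 0x1234 stored little-endian at address 1 */

    static uint16_t bus_read(uint32_t addr) {            /* one aligned bus cycle */
        addr &= ~1u;
        return (uint16_t)(mem[addr + 1] << 8) | mem[addr];
    }

    static uint16_t read_word(uint32_t addr) {
        if ((addr & 1) == 0)
            return bus_read(addr);                       /* aligned: one cycle */
        uint8_t lo = bus_read(addr) >> 8;                /* odd byte from the first cycle */
        uint8_t hi = bus_read(addr + 1) & 0xFF;          /* even byte from the second cycle */
        return (uint16_t)(hi << 8) | lo;                 /* shuffle the bytes into place */
    }

    int main(void) {
        printf("%04X\n", read_word(1));                  /* prints 1234 */
        return 0;
    }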
To understand why the 8086 uses little-endian, you need to go back to the Datapoint 2200, a 1970 desktop computer / smart terminal built from TTL chips (since this was pre-microprocessor). RAM was too expensive at the time, so the Datapoint 2200 used Intel shift-register memory chips along with a 1-bit serial ALU. To add numbers one bit at a time, you need to start with the lowest bit to handle carries, so little-endian is the practical ordering.
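Here's a little C sketch of why serial addition wants the least-significant bit first. This is obviously not the actual TTL/shift-register logic, just an illustration of the carry chain: each bit's carry feeds the next bit, so if the low-order byte arrives from memory first, the ALU can start adding immediately.

    #include <stdio.h>
    #include <stdint.h>

    /* Bit-serial addition: bits must be processed lowest-first so each
       carry can feed into the next bit. If memory delivers the low-order
       byte first (little-endian), the serial ALU can start right away. */
    uint16_t serial_add(uint16_t a, uint16_t b) {
        uint16_t sum = 0;
        int carry = 0;
        for (int i = 0; i < 16; i++) {                   /* one bit per "clock" */
            int abit = (a >> i) & 1;
            int bbit = (b >> i) & 1;
            sum |= (uint16_t)((abit ^ bbit ^ carry) << i);
            carry = (abit & bbit) | (carry & (abit ^ bbit));
        }
        return sum;
    }

    int main(void) {
        printf("%u\n", (unsigned)serial_add(1234, 4321)); /* prints 5555 */
        return 0;
    }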
Datapoint talked to Intel and Texas Instruments about replacing the board full of TTL chips with a single-chip processor. Texas Instruments created the TMX1795 processor and Intel slightly later created the 8008 processor. Datapoint rejected both chips and continued using TTL. Texas Instruments tried to sell the TMX1795 to Ford as an engine controller, but they were unsuccessful and the TMX1795 disappeared. Intel, however, marketed the 8008 chip as a general-purpose processor, creating the microprocessor as a product (along with the unrelated 4-bit 4004). Since the 8008 was essentially a clone of the Datapoint 2200 processor, it was little-endian. Intel improved the 8008 with the 8080 and 8085, then made the 16-bit 8086, which led to the modern x86 line. For backward compatibility, Intel kept the little-endian order (along with other influences of the Datapoint 2200). The point of this history is that x86 is little-endian because the Datapoint 2200 was a serial processor, not because little-endian makes sense. (Big-endian is the obvious ordering. Among other things, it is compatible with punch cards where everything is typed left-to-right in the normal way.)
Awesome article Ken, I feel spoiled! It's always nice to see your posts hit HN!
Out of curiosity: Is there anything you feel they could have done better in hindsight? Useless instructions, or inefficient ones, or "missing" ones? Either down at the transistor level, or in high-level design/philosophy (the segment/offset mechanism creating 20-bit addresses out of two 16-bit registers with thousands of overlaps sure comes to mind - if not a flat model, but that's asking too much of a 1979 design and transistor limitations, I guess)?
Thanks!
That's an interesting question. Keep in mind that the 8086 was built as a stopgap processor to sell until Intel's iAPX 432 "micro-mainframe" processor was completed. Moreover, the 8086 was designed to be assembly-language compatible with the 8080 (through translation software) so it could take advantage of existing software. It was also designed to be compatible with the 8080's 16-bit addressing while supporting more memory.
Given those constraints, the design of the 8086 makes sense. In hindsight, though, considering that the x86 architecture has lasted for decades, there are a lot of things that could have been done differently. For example, the instruction encoding is a mess and didn't have an easy path for extending the instruction set. Trapping on invalid instructions would have been a good idea. The BCD instructions are not useful nowadays. Treating a register as two overlapping 8-bit registers (AL, AH) makes register renaming difficult in an out-of-order execution system. A flat address space would have been much nicer than segmented memory, as you mention. The concept of I/O operations vs memory operations was inherited from the Datapoint 2200; memory-mapped I/O would have been better. Overall, a more RISC-like architecture would have been good.
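(To make the overlap you mention concrete, here's the real-mode address calculation as a quick C illustration. The helper name is mine, but the formula is the 8086's documented physical = segment × 16 + offset, truncated to 20 bits, so many different segment:offset pairs land on the same byte.)

    #include <stdio.h>
    #include <stdint.h>

    /* 8086 real-mode address formation: physical = segment * 16 + offset,
       kept to 20 bits. Many segment:offset pairs map to the same byte. */
    uint32_t physical(uint16_t seg, uint16_t off) {
        return (((uint32_t)seg << 4) + off) & 0xFFFFF;
    }

    int main(void) {
        printf("%05X\n", (unsigned)physical(0x1234, 0x0005));  /* 12345 */
        printf("%05X\n", (unsigned)physical(0x1000, 0x2345));  /* 12345 again */
        return 0;
    }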
I can't really fault the 8086 designers for their decisions, since they made sense at the time. But if you could go back in a time machine, one could certainly give them a lot of advice!
> I can't really fault the 8086 designers for their decisions, since they made sense at the time. But if you could go back in a time machine, one could certainly give them a lot of advice!
Thanks for capturing my feeling very precisely! I was indeed thinking about what they could have done better with approximately the same number of transistors and the benefit of a time traveler :) And yes, the constraints you mention (8080 compatibility, etc.) indeed limited their leeway, so maybe we'd have to point the time machine at a few years earlier and influence the 8080 first.
Thanks for publishing your blog! The articles are quite enlightening, and it's interesting to see how semiconductors evolved in the '70s, '80s and '90s. Having grown up in that era, I feel it was a great time to learn, as one could understand an entire computer, but details like this were completely inaccessible back then. Keep up the good work, knowing that it is appreciated!
A more personal question: is your reverse engineering work just a hobby or is it tied in with your day to day work?
Thanks! The reverse engineering is just for fun. I have a software background, so I'm entirely unqualified to be doing this :-) But I figure that if I'm a programmer, I should know how computers really work.