[HN Gopher] Reverse-engineering the Intel 8086 processor's HALT ...
       ___________________________________________________________________
        
       Reverse-engineering the Intel 8086 processor's HALT circuits
        
       Author : picture
       Score  : 63 points
       Date   : 2023-01-26 17:33 UTC (5 hours ago)
        
 (HTM) web link (www.righto.com)
 (TXT) w3m dump (www.righto.com)
        
       | pifm_guy wrote:
        | So why didn't they implement the HLT instruction simply as a
        | 'jump to self' infinite loop?
        | 
        | Then no special logic would be needed, no extra states, etc.
        | 
        | Sure - there would be no power savings, and the memory bus
        | wouldn't be idle, but were either of those a requirement in
        | 1970?
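The tradeoff can be sketched in a toy Python model (everything here is invented for illustration, not real 8086 semantics): a halted CPU stops fetching, so the bus goes quiet, while a jump-to-self keeps issuing a fetch every cycle.

```python
# Toy fetch-execute sketch contrasting HLT with a jump-to-self idle loop.
# A halted CPU issues no further bus requests; a spin loop fetches forever.

def run(program, cycles):
    """Run a tiny hypothetical CPU for `cycles` ticks; return bus-fetch count."""
    pc = 0
    fetches = 0
    halted = False
    for _ in range(cycles):
        if halted:
            continue            # HLT: no more bus activity at all
        op = program[pc]
        fetches += 1            # every executed instruction costs one fetch
        if op == "HLT":
            halted = True
        elif op == "JMP_SELF":
            pass                # pc unchanged: spin forever, still fetching
        else:
            pc += 1
    return fetches

halt_fetches = run(["NOP", "HLT"], cycles=1000)       # bus idles after 2 fetches
spin_fetches = run(["NOP", "JMP_SELF"], cycles=1000)  # bus stays busy every tick
```

The functional result is the same (the CPU makes no forward progress), but only HLT leaves the bus free for DMA or other bus masters.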
        
         | kens wrote:
         | Yes, halt is sort of redundant and processors like the 6502
         | omitted it. I think the historical popularity of halt was
         | because you could indicate to the operator that the computer
         | was halted, rather than in an infinite loop. Peripheral devices
         | could also detect the halt state.
        
       | kens wrote:
       | A few days ago, monocasa suggested I should look at the 8086's
       | HLT instruction, so here it is. Let me know if you have other
       | comments on what part of the 8086 would be interesting to read
       | about.
       | 
       | https://news.ycombinator.com/item?id=34495317
        
         | pwg wrote:
         | Another suggestion, from the previous thread:
         | https://news.ycombinator.com/item?id=34495797
        
       | rogerbinns wrote:
       | You mention inheriting little endian from the Datapoint. If that
       | constraint was not there, would a big endian 8086 be materially
       | different in any way? For example could parts be simpler or fewer
       | gates used?
        
       | jchw wrote:
        | Question: _why_ were there three HALT opcodes? Does it simply
        | fill otherwise unused opcode encodings?
        
         | [deleted]
        
         | ok123456 wrote:
         | Probably an artifact of the Datapoint's instruction decoder
         | unit.
        
         | flohofwoe wrote:
         | Sometimes such 'redundant' instructions happen because of
         | incomplete instruction decoding. For instance the ED-prefixed
         | instruction block on the Z80 has:
         | 
         | - 8x NEG
         | 
         | - 8x RETI/RETN (named differently but same behaviour)
         | 
         | - 4x IM0, 2x IM1 and 2x IM2
         | 
         | - and a whopping 178 opcodes in the ED block decode to a NOP
         | (no operation)
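The pattern described above can be checked with a short Python sketch of the decoding logic (a simplified model, not a full Z80 decoder): in the ED block, the decoder effectively tests only bits 7-6 and bits 2-0 for these instructions and ignores bits 5-3, so each one gets 8 aliases.

```python
from collections import Counter

# Simplified classifier for ED-prefixed Z80 opcode bytes: bits 5-3 are
# ignored for NEG / RETN-RETI / IM, so each pattern matches 8 byte values.

def decode_ed(op):
    """Classify an ED-prefixed opcode byte (partial sketch, three groups only)."""
    if op & 0b11000111 == 0b01000100:
        return "NEG"          # pattern 01xxx100
    if op & 0b11000111 == 0b01000101:
        return "RETN/RETI"    # pattern 01xxx101
    if op & 0b11000111 == 0b01000110:
        return "IM"           # pattern 01xxx110 (IM0/IM1/IM2 variants)
    return "other"            # everything else in the block, incl. the NOPs

counts = Counter(decode_ed(op) for op in range(256))
# Each of the three groups occupies 8 of the 256 ED-block encodings.
```

The "other" bucket here lumps together both the documented ED instructions and the 178 NOP-like holes the comment mentions.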
        
         | kens wrote:
         | That's a good question. I'm completely guessing, but the
          | Datapoint probably used 0x00 and 0xff as HALT opcodes so that
          | if you ended up in uninitialized or missing memory, the
          | processor would halt. Maybe 0x01 was the "intentional" halt
          | instruction.
        
           | [deleted]
        
           | jchw wrote:
            | Ah, that's a really good point. Having 0x00 be a NOP or,
            | maybe worse, an instruction that is actually valid and does
            | something, would be a hell of a lot worse for debugging,
            | because after the fact it'd be extremely hard to figure out
            | how you got there.
        
             | pcwalton wrote:
             | It's also bad for security. IIRC code execution is easier
             | on MIPS because 0x0 is a NOP.
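The MIPS detail can be shown with a small field-extraction sketch in Python (a partial decoder for illustration only): an opcode field of 0 selects the SPECIAL group and a funct field of 0 selects SLL, so the all-zero word decodes to `sll $zero, $zero, 0`, which architecturally does nothing.

```python
# Why the all-zero word is a NOP on MIPS: opcode=0 -> SPECIAL group,
# funct=0 -> SLL, and all register/shift fields are 0, giving
# "sll $0, $0, 0", the architecture's canonical NOP encoding.

def decode_mips(word):
    """Decode just enough of a 32-bit MIPS word to show the NOP case."""
    opcode = (word >> 26) & 0x3F   # bits 31-26
    funct = word & 0x3F            # bits 5-0
    rd = (word >> 11) & 0x1F
    rt = (word >> 16) & 0x1F
    sa = (word >> 6) & 0x1F
    if opcode == 0 and funct == 0:
        return f"sll ${rd}, ${rt}, {sa}"
    return "other"

nop = decode_mips(0x00000000)      # -> "sll $0, $0, 0"
```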
        
               | anyfoo wrote:
                | It's worth noting, though, that that likely wasn't
                | much of a consideration at all at the time. Networks,
                | for one, were barely a thing, at least on systems so
                | tiny that they'd use an 8086. And even when they were,
                | they tended to be extremely trusting until way into
                | the 90s.
        
               | jchw wrote:
                | Definitely. A lot easier to heap spray when most of
                | the memory is a free NOP slide.
        
         | ajross wrote:
         | On modern CPUs that actually can't run at full speed for
         | thermal reasons, they're critically important (though a
         | complicated dance with MWAIT and a ton of drivers has
         | supplanted HLT on x86 devices).
         | 
          | On microprocessors of the time, they're indeed a little
          | useless. None of the logic was going to disable the internal
          | clock; this was decades before the introduction of gateable
          | power wells, etc...
         | 
          | But on the bigger hardware where DMA was common, a halted
          | CPU could be relied on not to issue needless requests to the
          | memory bus, so other clients like I/O devices (SMP was in
          | its infancy in the 70's too) would see lower contention and
          | higher throughput. I'm sure that was part of the thinking.
          | The IBM PC itself tended not to contend on its bus much (CGA
          | and MDA had their own framebuffers and floppy DMA was mostly
          | a joke), but maybe there were other 8086 implementations
          | that cared.
        
           | anyfoo wrote:
            | You get a fun reminder of that if you run MWC's Coherent
            | in a VM today. Coherent's idle loop/task does not issue
            | HLT, so you can happily see the CPU core the VM is running
            | on burning away for no good reason.
        
       ___________________________________________________________________
       (page generated 2023-01-26 23:00 UTC)