[HN Gopher] Intel Microcode Decryptor
       ___________________________________________________________________
        
       Intel Microcode Decryptor
        
       Author : bfoks
       Score  : 243 points
       Date   : 2022-07-18 23:11 UTC (23 hours ago)
        
 (HTM) web link (github.com)
 (TXT) w3m dump (github.com)
        
       | numlock86 wrote:
       | So after analysis from the community and experts we will finally
       | get rid of the whole backdoor-conspiracy bandwagon? Or will they
       | just move on to another aspect or even simply wave it off as an
       | orchestrated and constructed fake? I mean those people come up
        | with far weirder things to advocate for their beliefs.
        
         | midislack wrote:
        
         | javajosh wrote:
         | _> the whole backdoor-conspiracy bandwagon_
         | 
         | This isn't a correct characterization of the suspicion that
         | Intel microcode has backdoors in it. The suspicion isn't just
         | based on distrust of authority, like flat Earth, etc, but also
          | on the org having means, motive, and opportunity to remotely
         | modify the operation of a CPU. And it operates within the
         | domain of the USG, who have demonstrated a keen interest in
         | weaponizing 0-day exploits.
         | 
         | What better way to acquire a novel 0-day than to simply write
         | one known only to you and distribute it from the source? This
         | is a good plan, but it comes with a substantial risk to Intel,
         | or any company who wishes to maintain a trust relationship with
         | its customers.
         | 
         | That said, I don't think anyone doing this is stupid, and for
         | safety they would not install microcode malware on everyone,
         | just some. This means we will find nothing in general CPUs, and
         | anecdotal reports finding "something" can easily be dismissed
         | as malicious or noise.
         | 
          | The truly paranoid need not worry: even if the microcode is
          | seen to be harmless, there is always the possibility that the
          | hardware you buy is interdicted, modified, and sent onward, so
          | your paranoia can remain intact.
        
         | DSingularity wrote:
         | No. Intel can sign arbitrary binaries which get executed in a
         | more privileged mode and without leaving any traces. That's the
         | problem.
        
           | trelane wrote:
           | Or someone with their keys
        
         | charcircuit wrote:
         | I don't think they will. They want to believe there exists a
         | backdoor or that they are constantly being spied on and they
          | will make up a narrative on how that happens regardless of
          | whether that explanation is true or even physically possible.
        
           | galangalalgol wrote:
           | While it really wouldn't surprise me if there was a back
           | door, or some incompetence (real or orchestrated) that
           | functions as one, I also don't think "they" need microcode
           | level backdoors given the state of software security, and the
           | amount of information we give away freely.
        
           | pueblito wrote:
            | I'm pretty sure it's indisputable that we _are_ all constantly
           | being spied upon and tracked, both by multiple nation-states
           | as well as a ton of private companies. Believing there is an
           | undiscovered backdoor is absolutely a reasonable position to
           | take.
        
         | jbm wrote:
         | You're getting unfairly dismissed, so let me take some of the
         | downvote burden.
         | 
         | In this thread, I see implications about big media controlling
         | microcode (which doesn't seem to be impacting piracy -- if
         | anything, it's easier than ever before), about governments
         | imminently finding backdoors and trashing an entire generation
         | of chips, and other extreme outcomes that seem wholly out of
         | step with reality. (Not in the "Everyone else is dumb but we
         | few are smart" way; in the "I have not interacted with a
         | business or government ever" way)
         | 
         | I'm sure the 2600 crowd will have their next "Yes but what
         | about the *Long form* birth certificate"-style goalpost shift
         | if there are no backdoors found.
        
       | pueblito wrote:
       | Cool, I'm into cheap auditable hardware! This could maybe turn
        | out like when they discovered Linksys was breaking the GPL, which
       | ended up opening up an entire class of hardware to hack on.
        
       | memorable wrote:
       | Alternative front-end version:
       | 
       | https://nitter.net/h0t_max/status/1549155542786080774
        
       | notRobot wrote:
       | Can someone more educated on this than me please ELI5 the
       | significance of this?
       | 
       | If I'm understanding correctly, this allows us to view
       | (previously obfuscated) code that runs on certain (recent-ish)
       | Intel processors?
       | 
       | What are the consequences of this?
        
         | fulafel wrote:
         | There hasn't been any obvious reason to keep this secret behind
          | encryption, so now there's a little buzz in the air about
          | whether something newsworthy will be revealed once people start
         | analyzing the microcode and diffs between microcode updates.
        
         | avianes wrote:
          | _> If I'm understanding correctly, this allows us to view
         | (previously obfuscated) code that runs on certain (recent-ish)
         | Intel processors?_
         | 
         | Yes, but this "code" is the Intel microcode.
         | 
          | In a modern processor, instructions are translated into a
          | sequence of micro-operations (uOps) before execution; these
          | uOps are small instructions that the processor can execute more
          | easily. Ultimately, this allows building more performant
          | processors.
         | 
          | But some instructions require translation into a uOp sequence
         | that is too complex to be handled like other instructions.
         | Modern processors therefore feature a "microcode sequencer",
         | and the "microcode" is the configuration of this component.
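          | 
          | (A rough conceptual sketch in Python, purely illustrative: the
          | instruction names, uOp names and table contents below are made
          | up, not Intel's real encoding.)
          | 
          |     # Simple instructions decode directly into one or two uOps;
          |     # complex ones are handed to the "microcode sequencer",
          |     # which streams a longer uOp sequence out of an internal
          |     # table (here just a dict).
          |     FAST_DECODE = {
          |         "ADD rax, rbx": ["uop_add rax, rbx"],
          |         "PUSH rax": ["uop_sub rsp, 8", "uop_store [rsp], rax"],
          |     }
          |     MICROCODE_ROM = {
          |         "CPUID": ["uop_read_internal", "uop_move", "uop_move"],
          |     }
          |     def decode(insn):
          |         if insn in FAST_DECODE:
          |             return FAST_DECODE[insn]
          |         return MICROCODE_ROM.get(insn, ["uop_fault_bad_opcode"])
          |     print(decode("PUSH rax"))  # short, hardwired decode path
          |     print(decode("CPUID"))     # longer flow from the sequencer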
         | 
         | And this work allows us to interpret a previously misunderstood
         | part of the microcode.
         | 
         |  _> What are the consequences of this?_
         | 
         | There are no real direct consequences for users.
         | 
          | But this helps to better understand how modern Intel processors
          | work; in particular, security researchers will be able to better
          | understand how some security instructions work (mainly the SGX
          | extension). In the long term, they may find Intel errors (as has
          | already happened) which will be fixed in the next Intel
          | processor generation.
         | 
          | Even if security issues are detected in Intel processors, this
          | will probably have no impact on normal users, though it could
          | affect some companies.
        
       | ccbccccbbcccbb wrote:
       | It's all cool and certainly a breakthrough, but Atoms, Pentiums
        | and Celerons... Wake me up when this thing decrypts mainstream
       | Core i7 microcode!
        
         | exikyut wrote:
         | FWIW the supported CPUs list does list silicon from
         | 2017-2019...
        
       | dqpb wrote:
       | Does the disclaimer at the top have any legal merit? If they
       | didn't include that disclaimer, would they actually be liable for
       | damage or loss caused by its use?
        
         | colechristensen wrote:
          | It's doubtful that it's legally "required" to avoid liability,
          | but anything you can do to knock down arguments that you injured
          | a third party, such as warning them of the danger, really takes
          | the air out of lawsuits. You can point to the warning to
          | discourage people from filing lawsuits, to encourage dismissal,
          | or to make winning a case more likely.
        
       | RjQoLCOSwiIKfpm wrote:
       | Which machine language is the microcode written in?
       | 
       | Is it even possible to fully decode that language with publicly
       | available information/tools?
       | 
       | Given that microcode is an internal mechanism of CPUs, I would
       | expect its language to be impossible to decode for regular people
       | because there is zero knowledge on how it works?
       | 
       | And even if there is some knowledge on it, won't Intel change the
       | machine language around a lot among CPU generations because the
       | lack of public usage means it _can_ be changed constantly, thus
       | rendering the existing knowledge useless quickly?
        
         | adrian_b wrote:
         | The microcode is a sequence of fixed-length microinstructions.
         | 
         | Each microinstruction is composed of many bit fields, which
         | contain operation codes, immediate constants or register
         | addresses.
         | 
         | The format of the microinstruction is changed at each CPU
         | generation, so, for example, the microinstructions for Skylake,
         | Tiger Lake, Gemini Lake or Apollo Lake have different formats.
         | 
          | Therefore, someone who discovers the microinstruction format for
         | one of them has to repeat all the work in order to obtain the
         | format for another CPU generation.
         | 
         | In the presentation from
         | 
         | https://www.youtube.com/watch?v=V1nJeV0Uq0M
         | 
         | the authors show the microinstruction format for Apollo Lake,
         | which is a kind of VLIW (very long instruction word) format,
         | encoding 3 simultaneous micro-operations, each of which can
         | contain three 6-bit register addresses and a 13-bit immediate
         | constant.
         | 
         | For Apollo Lake, the microinstruction encoding is somewhat
         | similar to the encoding of an instruction bundle (containing 3
         | instructions) in the Intel Itanium processors.
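          | 
          | (To make the bit-field idea concrete, here is a toy Python
          | decoder for a single made-up micro-op slot. The field widths
          | echo the 6-bit register / 13-bit immediate sizes mentioned
          | above, but the layout itself is invented, not the real Apollo
          | Lake format.)
          | 
          |     # Hypothetical layout, low bits first:
          |     # [opcode:8][dst:6][src1:6][src2:6][imm:13]
          |     def decode_slot(word):
          |         return {
          |             "opcode": word & 0xFF,
          |             "dst":  (word >> 8) & 0x3F,
          |             "src1": (word >> 14) & 0x3F,
          |             "src2": (word >> 20) & 0x3F,
          |             "imm":  (word >> 26) & 0x1FFF,
          |         }
          |     # A "bundle" would then be three such slots side by side.
          |     bundle = [0x0000_1234_5678, 0x0001_0000_00FF, 0x1F_FFC0_0001]
          |     print([decode_slot(slot) for slot in bundle])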
         | 
         | It is likely that in the mainstream Intel Core or Xeon CPUs the
         | micro-instruction format is significantly more complex than
         | this.
         | 
          | The team which reverse-engineered the microinstruction format
          | was able to do this because they exploited a bug in the Intel
          | Management Engine for Apollo Lake/Gemini Lake/Denverton to
          | switch the CPU into a mode in which it allows JTAG debugging.
          | 
          | Using JTAG they could read the bits from some internal busses
          | and from the microcode memory. The bits read were initially
          | meaningless, but by executing many test programs and comparing
          | what the CPU does with the bits read at the same time via JTAG,
          | they eventually succeeded in guessing the meaning of the bits.
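          | 
          | (The general "correlate known operands with dumped bits"
          | approach can be sketched in a few lines of Python. The test
          | data below is made up just to show the idea; it is not their
          | actual tooling.)
          | 
          |     # For each test we know which register number the
          |     # instruction used, and we have a raw dump captured over
          |     # JTAG while it executed.
          |     tests = [  # (register number, 16-bit JTAG dump)
          |         (0b000011, 0b0000110101000011),
          |         (0b000101, 0b0001010101000101),
          |         (0b110000, 0b1100000101110000),
          |     ]
          |     def bit(x, i):
          |         return (x >> i) & 1
          | 
          |     hits = []
          |     for p in range(16):        # candidate dump-bit position
          |         for b in range(6):     # candidate register bit
          |             if all(bit(d, p) == bit(r, b) for r, d in tests):
          |                 hits.append((p, b))
          |     # With only three tests there are still coincidental hits;
          |     # many more test programs are needed to pin down each field.
          |     print(hits)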
         | 
         | For most Intel CPUs, there is no chance to switch them into the
         | debugging mode, unless you receive a secret password from an
         | Intel employee (which probably is within the means of some
         | 3-letter agencies).
         | 
         | Once switched into the debugging mode, it is possible to do
          | things as complex as making the CPU replace the normal
         | execution of a certain instruction with the execution of an
         | entire executable file hidden inside the microcode update (and
         | you can also update the microcode directly, bypassing the
         | normal signature verification step).
         | 
         | However, for most motherboards, it is likely that switching to
         | the debugging mode also requires physical access to change the
         | connection of some pin, not only the secret password, though
         | exceptions are known, where the MB manufacturers have forgotten
         | to disable the debugging mode on the PCB.
        
           | 5d8767c68926 wrote:
           | >...unless you receive a secret password from an Intel
           | employee (which probably is within the means of some 3-letter
           | agencies).
           | 
           | Phone calls are easier.
        
             | toast0 wrote:
             | It's still receiving a secret password from an employee if
              | you just call and ask nicely and they give it to you.
        
         | masklinn wrote:
         | > Which machine language is the microcode written in?
         | 
          | Seems logical that it would mostly be standard machine code,
          | since there are instructions which translate 1:1 to microcode
          | (I assume); no use translating _everything_, that'd require
          | more space and be harder on everyone for little reason.
         | 
         | Though there might be privileged instructions which would only
         | be available in microcode (would be rejected by the frontend),
         | and which you would have to reverse-engineer separately.
        
         | fragmede wrote:
         | Yeah but Intel's engineers aren't going to just change the
         | machine language around for funzies. I'd expect it to be semi-
         | stable because if it ain't broke, there's no reason to go in
         | and change it.
        
           | RjQoLCOSwiIKfpm wrote:
            | They might not change existing stuff, but they may very well
            | constantly add new instructions; that doesn't break their use
            | case, but it will break the use case of the public trying to
            | decode it easily.
        
             | fragmede wrote:
             | Yes but that won't make the existing knowledge _useless_.
        
           | robin_reala wrote:
           | They changed from ARC to a 32-bit Quark core for the ME from
           | version 11 up.
        
             | alophawen wrote:
              | That's just the CPU arch running the IME. The uOps are
              | designed by Intel.
        
         | fulafel wrote:
          | There's a lot of historical precedent for figuring out
          | undocumented instruction sets (here, the microcode sequencing
          | format).
         | 
         | Changing things around is expensive, even if they don't have
         | external compatibility obligations.
        
         | avianes wrote:
         | > Which machine language is the microcode written in?
         | 
          | The microcode is generally a sequence of uOps. But in Intel's
         | case, there seems to be a more complex mechanism, called
         | XuCode, that generates the uOps sequence. The XuCode ISA seems
         | to be based on x86-64, as Intel says [1]:
         | 
         | > XuCode has its own set of instructions based mostly on the
         | 64-bit Instruction Set, removing some unnecessary instructions,
         | and adding a limited number of additional XuCode-only
         | instructions and model specific registers (MSRs) to assist with
         | the implementation of Intel SGX.
         | 
          | PS: Decoding the XuCode microcode can potentially give valuable
          | information about uOp encoding.
         | 
         | PS2: You can find more information on uops encoding in another
         | work from the same team [2].
         | 
         | [1]:
         | https://www.intel.com/content/www/us/en/developer/articles/t...
         | 
         | [2]: https://github.com/chip-red-pill/uCodeDisasm
        
       | jacquesm wrote:
        | That's pretty weird: this article was already here earlier, had
        | 600+ upvotes, and now it is back with new upvotes but the old
        | comments.
        
         | faxmeyourcode wrote:
          | I thought I was having a stroke or déjà vu reading these
         | comments
        
         | jsnell wrote:
         | https://news.ycombinator.com/item?id=32156694
        
         | [deleted]
        
       | ItsTotallyOn wrote:
       | Can someone ELI5 this?
        
         | bri3d wrote:
         | As we know, processors run a series of instructions, things
         | like "move data," "add," "store data."
         | 
         | Over time, these instructions have gotten more and more
         | complicated. Now there are "instructions" like "Enter Virtual
          | Machine Monitor" which are actually complex manipulations of tons
         | of different registers, memory translations, and subsystems
         | inside of the CPU.
         | 
         | And, even simple, primitive instructions like call, jump, and
         | return actually need to check the state of various pieces of
         | the processor and edit lots of internal registers, especially
         | when we start to consider branch prediction and issues like
         | Spectre.
         | 
         | It wouldn't be very plausible to hard-wire all of these complex
         | behaviors into the CPU's silicon, so instead, most instructions
         | are implemented as meta-instructions, using "microcode."
         | "Microcode" is just software that runs on the CPU itself and
         | interprets instructions, breaking them down into simpler
         | components or adding additional behaviors. Most CPUs are really
         | emulators - microcode interprets a higher level set of
         | instructions into a lower level set of instructions.
         | 
         | Historically, Intel and more recently AMD have encrypted this
         | "microcode," treating it as a trade secret. This makes people
         | who are worried about secret hidden backdoors _very_ worried,
         | because their CPU's behavior is depending on running code which
         | they can't analyze or understand. This has led to all sorts of
         | mostly unfounded speculation about secret back doors, CPUs
         | changing the behavior of critical encryption algorithms, and so
         | on and so forth.
         | 
         | Decrypting this microcode will theoretically allow very skilled
         | engineers to audit this functionality, understand the
         | implementation of low-level CPU behaviors, and find bugs and
         | back-doors in the CPU's interpretation of its own instructions.
         | 
         | Replacing this microcode silently would be absolutely
         | catastrophic security-wise, because an attacker could silently
         | change the way the CPU worked, right out from under running
         | software. But, there is no evidence this is possible, as the
         | microcode is digitally signed and the digital signature
         | implementation, so far, seems to be correct.
        
       | shmde wrote:
        | As someone who just makes CRUD apps, can someone please ELI5
        | this. Why is this a big deal, and why are people freaking out
        | about Intel chips becoming obsolete overnight?
        
         | Akronymus wrote:
         | If there is a backdoor, it could be widely exposed. And such a
         | hypothetical backdoor may not be patchable AT ALL.
         | 
          | As in, there may be a possibility of almost every computer
          | being vulnerable to an RCE that bypasses even the OS.
        
           | spockz wrote:
            | Depending on where the vulnerability exists, it can be
            | patched by patching the microcode, right?
        
           | sschueller wrote:
           | Isn't the point of the microcode to make it possible to patch
           | a HW bug?
        
             | Akronymus wrote:
             | It takes microcode to patch microcode though. AFAIK.
             | 
             | If the malware shuts that patching off...
        
           | mjg59 wrote:
           | > And such a hypothetical backdoor may not be patchable AT
           | ALL.
           | 
           | Intel microcode can be loaded at runtime. If there's a
           | backdoor in the microcode then it can, by definition, be
           | patched.
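            | 
            | (On Linux the runtime load path is exposed through sysfs. A
            | minimal sketch, assuming an intel-ucode blob is already
            | installed under /lib/firmware and the script runs as root;
            | note that "late loading" like this is discouraged on recent
            | kernels in favour of early loading from the initramfs.)
            | 
            |     def current_revision():
            |         with open("/proc/cpuinfo") as f:
            |             for line in f:
            |                 if line.startswith("microcode"):
            |                     return line.split(":")[1].strip()
            | 
            |     RELOAD = "/sys/devices/system/cpu/microcode/reload"
            |     print("before:", current_revision())
            |     with open(RELOAD, "w") as f:
            |         f.write("1")   # triggers the late load of the update
            |     print("after: ", current_revision())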
        
             | pitaj wrote:
             | Pretty sure the microcode could be changed to deny a patch,
             | since it has the most privileged level of control on the
             | system.
        
           | gorgoiler wrote:
           | As I understand it, you would need to have an existing RCE to
           | exploit the microcode patching process.
           | 
           | h0t_max's research means that future attacks -- once your
           | local machine has been infiltrated by some other means -- can
           | do a lot more damage than simply encrypting your filesystem
           | or sending spam. They can rewrite the way your CPU works.
           | 
           | When your OS gets attacked by malware it is attacking the
           | layer above the bare metal on which your OS runs. The base
           | hardware remains untouched. You can at least clean things up
           | by installing a new OS on the bare metal.
           | 
            | If malware attacks the bare metal itself, then you are out
            | of luck.
        
             | ndiddy wrote:
              | As mentioned in the GitHub page that this post links to,
             | it's impossible to make a custom microcode update because
             | the updates are RSA signed.
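              | 
              | (Roughly, acceptance of an update looks like a standard RSA
              | signature check. A generic sketch using the Python
              | "cryptography" package; the padding/hash choice here is
              | illustrative, not Intel's actual header format.)
              | 
              |     from cryptography.exceptions import InvalidSignature
              |     from cryptography.hazmat.primitives import hashes
              |     from cryptography.hazmat.primitives.asymmetric import (
              |         padding)
              | 
              |     def accept_update(pubkey, blob, signature):
              |         # Only the private-key holder (Intel) can produce a
              |         # signature that verifies against this public key.
              |         try:
              |             pubkey.verify(signature, blob,
              |                           padding.PKCS1v15(),
              |                           hashes.SHA256())
              |             return True    # load the update
              |         except InvalidSignature:
              |             return False   # reject the patch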
        
               | reocha wrote:
               | Except if someone finds a flaw in the microcode loader.
        
               | anewpersonality wrote:
               | Didn't they dump the private keys?
        
               | sroussey wrote:
               | Public key
        
               | [deleted]
        
               | anewpersonality wrote:
                | What can they do with the public key vs the private key?
        
               | pitaj wrote:
               | Public keys can only be used to verify a signature, not
               | to sign anything.
        
               | anewpersonality wrote:
               | Was the public key found used to decrypt the microcode?
        
               | NavinF wrote:
               | Why would private keys be on the chip?
        
           | stonepresto wrote:
            | If there exists a backdoor, it's unlikely to be remotely
           | accessible. But privesc for any OS running a vulnerable CPU,
           | absolutely. This would probably look like some hidden
           | instruction to load and execute an arbitrary ELF at the
           | XuCode level, not a magic packet.
        
             | Akronymus wrote:
              | Can't the Intel ME already bypass the OS and connect to the
             | network?
        
         | exikyut wrote:
         | As someone still at the "piecing things together" stage, here's
         | my understanding:
         | 
         | There are a bunch of privilege levels in Intel CPUs
         | (https://en.wikipedia.org/wiki/Protection_ring, relatively
         | boring), used for memory protection and user/kernel mode
          | separation (IIUC, I _think_ I'm correct). They can be
         | controlled by whatever code boots the CPU ("gets there first"),
         | because the CPU boots in the most trusting state.
         | 
         | Over time the set of available levels proved insufficient for
         | security and new levels were added with negative numbers so as
         | not to disrupt the existing status quo. Ring -1 is used for
         | virtualization and can also be controlled by whatever boots
         | first (ie, outer code can create VMs and enter them, but the
         | CPU faults if inner code attempts to access the virtualization
         | instruction set), but Ring -2 and Ring -3 are used by the CPU
         | itself.
         | 
         | Essentially, in the same way whatever the bootloader loads gets
         | control over a bunch of interesting functionality because that
         | code got there first, Ring -2 and -3 are controlled by code
         | running directly on the CPU that gained control of the system
         | before the bootloader and in some cases even UEFI was started.
         | The significant elements are that a) this code can
         | theoretically be altered by system (microcode) updates; b)
         | these components run *completely* independently of the
         | operating system - IIRC, the Management Engine is based on
         | Minix running on a tiny 486-class core somewhere on the CPU
         | die; and c) this invisible functionality has the ability to
         | read and write all of system RAM. What's that? A glitch? A
         | couple of bytes of RAM just got modified? That made the running
         | Linux kernel suddenly think a process's effective UID is 0?
         | Must have been the butterfly effect!
         | 
         | A bit of Googling found this overview of Ring -1 to -3 which
         | I'd say is really good, definitely worth clearing your cookies
         | to read if Medium is yelling at you about subscribing:
         | 
         | https://medium.com/swlh/negative-rings-in-intel-architecture...
        
           | mdaniel wrote:
           | > definitely worth clearing your cookies to read if Medium is
           | yelling at you about subscribing:
           | 
           | I've been getting a lot of mileage out of "scribe.rip" that
           | was posted a while back
           | (https://news.ycombinator.com/item?id=28838053)
           | 
           | https://scribe.rip/swlh/negative-rings-in-intel-
           | architecture...
        
         | vbezhenar wrote:
          | People think that if it were found out that Intel CPUs were
          | designed to spy for the Russians, the entire world would switch
          | to AMD.
         | 
         | Reality: nobody cares.
        
           | sroussey wrote:
           | The whole world would remove and replace that 5G equipment.
        
           | nonrandomstring wrote:
           | > Reality: nobody cares.
           | 
           | I'd like to talk about how we patch this pervading cynicism
           | and replace it with a model that's actually useful.
           | 
            | It isn't true at face value. Consider the course of the Cov19
           | pandemic. Two things spread. One was a virus. The other was
           | an idea, news about a virus. The latter spread much faster
           | than the former. It spread because people love "doom". Any
           | story about the impending "end of everything" is lapped up,
           | relished and shared by the public.
           | 
           | There are well understood stages to information propagation.
           | After "Is this real?" and "Am I affected?" comes the question
           | of "what to do?"
           | 
           | I think this is where your assertion about "care" comes in.
           | Some events will blow-up and burn out fast. People run around
           | like headless chickens enjoying the drama about something
           | remote to their lives, but ultimately nobody can _do_
           | anything, so focus recedes. Wars and famines are like that.
           | Curtis calls this  "Oh Dearism".
           | 
           | Alternatively, if there is any residual effect, new events
           | connected to the initial story, it feeds and grows. The
           | initial "All the chips are broken!" story that would die
           | after a 24 hour news cycle becomes an ongoing "situation".
           | Then people care, because it's cool to care. It gets a catchy
           | handle "Evil-Inside" and a news slogan "The chips are down".
           | And then it won't go back in the bottle.
           | 
           | To reformulate your "Nobody cares" - as a question - "Do
           | people have sensible cares that are proportionate to the
           | threat?" No. But if the media tells them to care because cars
           | are running off the road and there's a massive uptick in
           | cybercrime, which nobody can ignore and is directly
           | attributable to a single cause, then the "care" (hysteria)
           | can get worse than the problem (which may have happened with
           | Cov19 to some degree).
           | 
           | Finally, they may care, but about completely the wrong thing.
           | One suspects, if it turns out there is indeed some serious
           | drama in Intel chips, then Intel and the US government will
           | try to paint the security researchers as irresponsible
           | hackers who unleashed this upon the world.
        
         | tgv wrote:
         | A CPU executes the machine instructions of your program (you
         | might think of this as "assembly" programs, like "put a zero in
         | register 12" or "jump to address 0xFF3392"). There have been
         | architectures where instructions map directly onto transistors,
         | but since System/360 (*) there's an extra level: the CPU has
         | it's own, lower-level programming language that's used to
         | execute the machine instructions. Finding out how a CPU
         | actually works is interesting per se, and very valuable for
         | competitors, but this work might also expose vulnerabilities
         | and/or backdoors, built into the chip itself. It seems to be
         | around 100kB of code, so there's a whole lot of opportunity...
         | 
         | (*) first commercial architecture I know of...
        
         | bubblethink wrote:
         | It isn't. There is unnecessary hysteria. Encrypting microcode
         | is just usual competitive play. Doesn't mean anything
          | nefarious. If issues are found in said microcode, that
         | would be a different story.
        
           | sroussey wrote:
            | Microcode does spill the secrets of the hardware, so they
            | definitely don't want competitors looking through it.
            | 
            | Old ones though are an open book (of a sort).
            | 
            | The 6502 for the Apple II was obviously hand-generated so its
            | microcode is weird, but the 68k for the original Mac was
            | pretty normal microcode as we would think of it today.
        
             | classichasclass wrote:
             | The 6502 is not microcoded, or at least not in the way we
             | presently conceive of microcode. Its operations are driven
             | by an on-board PLA which controls decoding and sequencing
             | with its gates, which also explains its high performance
             | per clock. It would be hard to call those gate connections
             | micro-opcodes since they're not really used in that sense.
        
               | sroussey wrote:
               | I did say it was weird... ;)
        
       | goombacloud wrote:
        | Has someone tried to write their own microcode and load it?
        | Sounds like it should be much faster to run your own code this
        | way than
       | having the official microcode run an interpreter for your x86
       | instructions.
        
         | codedokode wrote:
         | I guess that the amount of SRAM for microcode is limited so you
         | cannot write a lot of code this way. Also, microcode might be
         | used only for slow, rarely used instructions, and it doesn't
         | make much sense to optimize them.
        
         | fulafel wrote:
          | It's signed; no interesting headway has been published AFAIK
          | relating to recent Intel processors.
          | 
          | edit: there was this piece of interesting headway mentioned
          | elsewhere in the comments:
          | https://news.ycombinator.com/item?id=32149210
        
           | viraptor wrote:
           | There was some research into modifying old AMD microcode when
           | they didn't enforce signing.
           | https://hackaday.com/2017/12/28/34c3-hacking-into-a-cpus-
           | mic...
           | 
           | Nothing of that level on Intel so far.
        
       | fxtentacle wrote:
       | Wow that is really cool. Here's the GitHub link without Twitter
        | tracking, BTW:
        | https://github.com/chip-red-pill/MicrocodeDecryptor
       | 
       | Especially considering how they gained this knowledge:
       | 
       | "Using vulnerabilities in Intel TXE we had activated undocumented
       | debugging mode called red unlock and extracted dumps of microcode
       | directly from the CPU. We found the keys and algorithm inside."
       | 
        | And looking further down, some x86 instructions (that people
       | would usually call low-level) actually trigger execution of an
       | entire ELF binary inside the CPU (implemented in XuCode). Just
       | wow.
        
         | mkesper wrote:
         | @mods maybe change page to this? Much more background info
         | there.
        
           | wglb wrote:
           | Best to email them at the address at the bottom of the page
        
           | dang wrote:
           | We've merged the threads now. Thanks!
        
         | cowtools wrote:
         | I have a sinking feeling in my chest. If the suspicions are
         | true, we may be at a pivotal moment here.
        
           | jari_mustonen wrote:
           | Please explain your worst fears?
        
           | throwawaylinux wrote:
           | What's the sinking feeling suspicions?
           | 
           | This XuCode execution engine is not a new or unknown thing - 
           | https://www.intel.com/content/www/us/en/developer/articles/t.
           | ..
           | 
           | We can lament the state of computer security but that feeling
           | is surely sunken, past tense.
        
           | pyinstallwoes wrote:
           | Can you expand on that as someone who is learning more about
           | low-level architectures in relation to the hardware and
           | microcode layer?
        
             | fragmede wrote:
             | It's never been publicly clear how much is done in
             | hardware, and how much is actually done via microcode. This
             | blows the doors open and reveals that there's actually a
             | lot more being done in microcode than previously suspected.
              | What's pivotal is how much more possible this makes
             | microcode-based attacks against Intel-based systems.
        
               | sroussey wrote:
               | Basic CPU design in 1990 undergrad had everything as
                | microcode, so I'm not sure there was much of any question
                | or conspiracy, other than that EE is much harder than CS
                | and attracts far fewer people to it, and thus less
               | information gets spread around, I guess.
        
               | pyinstallwoes wrote:
               | Yeah if they can ship code that fixes "critical security"
               | faults in a physical product, then it's probably at the
               | software level all the way down.
        
               | astrange wrote:
               | It's more like they can patch any instruction to become
               | microcoded, but that doesn't mean it shipped that way
               | originally.
        
               | saagarjha wrote:
               | This makes microcode more auditable to most people.
               | Relying on the "security" of most people being unable to
               | inspect their microcode is not really a good position to
               | have.
        
               | mjg59 wrote:
               | I don't think it's true that there's more being done in
               | microcode than previously suspected? The microcode
               | updates for various CPU vulnerabilities clearly
               | demonstrated that microcode is able to manipulate
               | internal CPU state that isn't accessible via "normal" CPU
               | instructions.
        
               | blacklion wrote:
                | It is true ("lot more being done in microcode than
                | previously suspected"), but only for Atom cores, which
                | are much simpler than "Core" cores. To be honest, it
                | should be expected for a small, simple and slow core, as
                | microcoded execution is much simpler and makes sense for
                | these cores.
                | 
                | On the other hand, most 12-series CPUs contain them :-(
        
               | ac29 wrote:
               | > On the other hand, most of 12-series CPUs contains them
               | :-(
               | 
               | E cores in Alder Lake are not listed as vulnerable to
               | this attack.
        
               | jsjohnst wrote:
               | > as microcoded execution is much simplier and make sense
               | for these cores
               | 
               | Say what? Can you explain what you are basing that on?
        
               | AnimalMuppet wrote:
               | If you build a simple, lower-end chip, you do so by
               | giving it fewer hardware resources - fewer things where
               | the implementation is via gates. That means that, in
               | order to implement the same instruction set, you have to
               | implement _more_ via microcode.
               | 
                | I _think_ that's what's being referred to. Microcoded
               | execution is much simpler in terms of the hardware that
               | you have to implement.
        
               | jsjohnst wrote:
               | > Microcoded execution is much simpler in terms of the
               | hardware that you have to implement.
               | 
               | That's contrary to everything I've ever heard.
        
           | rst wrote:
           | Even if, well... some intelligence service has developed
           | microcode-level "implants"/malware, they wouldn't necessarily
           | want it to be part of the standard build that gets shipped to
           | all customers, precisely because of the risk of exposure.
           | It's at least as likely that they'd install it on targets as
           | updates, having first secured access by other means (as
           | they'd have to anyway to exploit a backdoor that _did_ ship
            | with the chips). It's possible that some agencies would even
           | be able to access signing keys if that helped simplify
           | "implanting" procedures -- though it might not be necessary,
           | if (like the OPs here) they can arrange microcode-level
           | execution by other means.
        
           | rurban wrote:
            | This is only the microcode, not the deeper-level Minix with
            | access to all the I/O and memory, where all the backdoors are
            | suspected. Think of it as run-time-patched CPU code to patch
            | HW errors.
        
             | mjg59 wrote:
             | The microcode runs on the CPU, and has the (theoretical)
             | ability to directly modify anything running on the CPU
             | simply by executing different instructions. The Management
             | Engine (the component that runs the Minix kernel) is on the
             | external chipset, and has no direct visibility into the
             | state of the CPU. The OS that runs on the Management Engine
             | has also been decodable for years, and while bugs have been
             | identified nobody has found anything that's strong evidence
             | of a back door.
        
           | nonrandomstring wrote:
           | It could go both ways.
           | 
           | Let's be optimistic - say, after lengthy, rigorous expert
           | analysis it turns out there are no backdoors or prepared
           | traps for potential malware within Intel CPUs. That's a big
           | boon for security everywhere, and for Intel's share price.
           | 
            | If, on the other hand, it turns out against Intel, the
            | evidence is literally baked into silicon within billions of
            | devices in the wild, which will become e-waste overnight.
           | 
           | With this TikTok thing in the wind the Chinese will ban
           | Intel... etc.
           | 
           | The stakes are high. The problem was always lack of
           | transparency. Putting encrypted microcode into consumer CPUs
           | was always a dumb idea. And why? To protect the media
           | conglomerates. Another reason we need to re-examine the role
           | and value of "intellectual property" in society.
        
             | ihalip wrote:
             | > billions of devices in the wild which will become e-waste
             | overnight
             | 
             | Not just e-waste, they can also become a huge liability. In
             | a presentation, the authors mention that one of the CPU
              | families which has this vulnerability was used in Tesla
             | cars. Tesla apparently switched to AMD APUs around December
             | 2021.
        
               | stefan_ wrote:
               | AMD processors have much the same backdoor-"management"
               | coprocessors. Just about the only processors without this
               | stuff is your own softcore design running on an FPGA.
        
               | rkangel wrote:
               | This is not about the management engine. Microcode is
               | part of the actual core processor itself, but an
                | updatable layer. One sort-of-correct mental model might
               | be to think of x64 hardware as being a RISC-ish processor
               | that runs microcode that runs your code.
        
               | greggsy wrote:
                | It's not a backdoor until it's proven that it's used for
                | that purpose. Until then it's just (yet another)
                | _potential_ side channel.
        
               | CoastalCoder wrote:
               | I don't understand that logic.
               | 
               | It's like saying I haven't been robbed until I discover
               | that my stuff is missing.
        
               | atq2119 wrote:
               | That's literally true, though?
               | 
               | To make the analogy work for you, you have to add
               | something about doors being unlocked, or somebody else
               | having the key to your home.
        
               | bee_rider wrote:
               | I think in the original analogy, the actual robbery is
               | just used as an event which may occur without our
               | knowledge. Your analogy is better, the mapping makes more
               | sense.
               | 
               | Something like: The locksmith has made a copy of your
               | keys without notifying you. They could hypothetically use
               | those keys to enable a robbery, but you won't know
               | definitively either way until you find something stolen.
               | But it is a pretty weird thing for them to do, right?
        
               | na85 wrote:
               | >That's literally true, though?
               | 
               | There's no Schrodinger's Burglar. You've been robbed once
               | I take your wallet, whether you've discovered it or not.
        
               | oasisbob wrote:
               | > It's like saying I haven't been robbed until I discover
               | that my stuff is missing.
               | 
               | Well, robbery is theft under threat of force, so it would
               | be very hard to be robbed and remain unaware of it.
        
               | jsjohnst wrote:
               | Yep, I assume they just don't know the difference between
               | robbery and burglary.
        
               | CoastalCoder wrote:
               | Actually, I do know the difference, but forgot the
               | distinction when writing the comment :)
        
               | yjftsjthsd-h wrote:
               | What about POWER?
        
               | _kbh_ wrote:
               | At least for AMD the PSP isn't externally exposed which
               | means the attack surface is drastically reduced.
        
               | aqfamnzc wrote:
               | When you say externally exposed, do you mean to the
               | network, or physically exposed, or what?
        
               | _kbh_ wrote:
               | It doesn't sit on the network (unlike the ME) so an
               | attacker needs to have access to the host already to be
               | able to exploit any vulnerabilities on the PSP.
        
               | femto wrote:
               | Then you have to worry about any "management" modes on
               | the FPGA.
               | 
               | https://www.cl.cam.ac.uk/~sps32/Silicon_scan_draft.pdf
        
               | jgtrosh wrote:
               | What's left, RISC-V?
               | 
               | https://en.wikipedia.org/wiki/RISC-V
        
               | pjmlp wrote:
               | Not really, because most deployments use UEFI alongside
               | them.
               | 
               | https://github.com/riscv-admin/riscv-uefi-edk2-docs
               | 
               | So if it isn't from door A, door B will do.
        
               | zekica wrote:
               | It is not just for booting. UEFI has Runtime Services
               | that an OS can call.
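                | 
                | (Concretely, on Linux the GetVariable runtime service is
                | surfaced through efivarfs. A small sketch listing a few
                | variables, assuming the kernel has mounted
                | /sys/firmware/efi/efivars:)
                | 
                |     import os
                |     EFIVARS = "/sys/firmware/efi/efivars"
                |     # Each file here is a UEFI variable; reading it goes
                |     # through the firmware's GetVariable runtime service.
                |     for name in sorted(os.listdir(EFIVARS))[:5]:
                |         path = os.path.join(EFIVARS, name)
                |         with open(path, "rb") as f:
                |             data = f.read()
                |         # the first 4 bytes are the variable attributes
                |         print(name, len(data), "bytes")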
        
               | yjftsjthsd-h wrote:
               | How is UEFI like ME/PSP? I thought it was just for
               | booting.
        
               | robotnikman wrote:
                | It's surprisingly more complex than that. You can run Doom
               | in UEFI
               | 
               | https://github.com/Cacodemon345/uefidoom
        
               | yjftsjthsd-h wrote:
               | Okay, but that doesn't really bother me; running
               | arbitrary payloads is the point of a bootloader. The only
               | reason I would be worried by UEFI on RISC-V is if the
               | UEFI firmware in question stays running in the background
               | after the OS boots, and isn't properly inspectable. That
               | might be the case - I have some vague notion of UEFI
               | providing post-boot services, and for all that the EDK
               | version is FOSS you could certainly make a closed version
               | - but I'm not seeing any reason to panic just yet.
        
               | derefr wrote:
               | UEFI _can_ provide post-boot services, but only in the
               | same sense that a DOS-era BIOS did: by providing memory-
               | mapped code that UEFI applications can call into,
                | exokernel style. The UEFI framework isn't a hypervisor; it
               | it has no capacity to have ongoing execution threads
               | "behind" the OS kernel. Rather, the OS kernel just
               | receives a memory+PCIe map from UEFI at startup that
               | contains some stuff UEFI left mapped; and it's up to the
               | OS kernel whether to keep it mapped and make use of it;
               | or just to unmap it.
               | 
               | And no production OS kernel really uses any of the _code_
               | that UEFI maps, or keeps any of said code mapped. It may
               | keep the data-table pages (e.g. ACPI sections) mapped,
               | for a while; but usually only long enough to copy said
               | data into its own internal data structures.
               | 
               | (Which is a shame, actually, as an OS kernel that _did_
               | take advantage of UEFI services like UEFI applications
               | do, would be able to do things that modern kernels
                | currently mostly can't -- like storing startup logs and
               | crash dumps in motherboard NAND space reserved for that
               | function, to provide on-device debugging through a
               | separate UEFI crash-log-viewer app in case of startup
               | disk failure, rather than needing to attach a separate
               | serial debugger.)
        
               | smoldesu wrote:
               | A malicious actor can do a lot of things in UEFI, but
               | they can't decrypt my disk, they can't boot into my OS,
               | and they can't mess with my userland environment. If
               | Johnny Blackhat fancies a game of Doom over TTY on my
               | desktop's UEFI environment, he can be my guest.
        
               | classichasclass wrote:
               | POWER8 and POWER9 say hi.
        
               | [deleted]
        
               | alophawen wrote:
               | AMD have much the same thing.
               | 
               | https://en.wikipedia.org/wiki/AMD_Platform_Security_Proce
               | sso...
        
               | astrange wrote:
               | This is about microcode, not the Intel ME. AMD does also
               | have updatable microcode though.
        
             | jcranmer wrote:
             | > Putting encrypted microcode into consumer CPUs was always
             | a dumb idea.
             | 
             | Serious question: which consumer CPUs have unencrypted
             | microcode? AMD processors have had theirs encrypted for a
             | decade, Intel for even longer, and nothing I've seen
             | indicates that any ARM models have unencrypted microcode
             | updates floating about.
        
               | Klonoar wrote:
               | Do the Raptor systems (POWER-based) fit here?
        
               | cogburnd02 wrote:
               | If it can't be done on a MC68000 (or thousands in
               | parallel), then does it really need to be done?
        
               | classichasclass wrote:
               | POWER9 CPUs are in retail hardware and are being used
               | today (including several in this house), so I'd say they
               | qualify.
        
               | dkasak wrote:
               | It may be a serious question, but I don't see how it
               | relates to the part you quoted. It still is a dumb idea,
               | regardless of the answer to your question.
        
               | formerly_proven wrote:
               | Which ARM cores have loadable microcode to begin with?
        
               | Enginerrrd wrote:
                | Basically none these days. But therein lies the problem.
                | The actors can't be trusted, in part because the states
                | in which they reside can't be trusted and can (and do)
                | issue gag orders to shove backdoors down their throats.
               | 
               | But the problem with backdoors has always been that other
               | people will likely eventually figure out how to use them
               | too.
               | 
               | If you can't audit the microcode, you have a massive
               | gaping security problem.
        
               | [deleted]
        
             | gjs278 wrote:
        
             | dncornholio wrote:
             | e-waste? how come? I already assumed my hardware is
             | insecure.
        
               | userbinator wrote:
                | Forced obsolescence FUD. Nothing more, nothing less.
        
             | markus_zhang wrote:
             | Can you please elaborate on the "To protect the media
             | conglomerates." part?
        
               | wsc981 wrote:
               | Would this kind of functionality not, more likely, be
                | implemented at the behest of the CIA? The media companies
               | could be used as a good cover though.
        
               | nonrandomstring wrote:
               | > Can you please elaborate...
               | 
               | Certainly. Intel (amongst others) supply consumer grade
               | CPUs for a "multimedia" market, to service music and
               | movie playback.
               | 
               | The movie and music industry apply intense pressure and
               | lobbying against the computing industry to protect their
               | profits. They see users having control over media, and
               | thus ability to copy, recode, edit or time-shift content
               | as a threat, which they dub "piracy".
               | 
               | Under this pressure to implement "Digital
               | Rights/Restriction Management" within their hardware,
               | semiconductor manufacturers have been making
               | microprocessors increasingly hostile to end users, since
               | if users were able to access and examine device functions
               | they would naturally disable or reject what is not in
               | their interests.
               | 
               | Hiding undocumented functionality to take over, examine
               | or exfiltrate data from systems via backdoors etc is a
               | natural progression of this subterfuge and tension
               | between manufacturer and user. This situation of cat-and-
               | mouse hostility should not exist in anything recognisable
               | as a "free market", and hence I deem it "protectionism".
               | 
               | Now the real problem is that these same "multimedia"
               | processors are used in other areas, automotive, aviation,
               | health and even military applications. The "risk-
               | tradeoffs" bought by big media bleed into these sectors.
               | 
               | Therefore it's clear to me that measures for the
               | protection of "intellectual property" are directly at
               | odds with computer security generally, and are
                | increasingly leading to problems for the sake of one
                | relatively small sector. Sure, the digital entertainment
                | industry is worth many trillions of dollars, but it pales
                | within the larger picture of human affairs. At some point
               | we may have to choose. Hollywood or national security?
        
               | bambax wrote:
               | You are absolutely right, and media companies are the
               | evilest of evil. But it's a little unclear why Intel felt
               | it had to cave in. What would have happened if it had
               | told Hollywood to go love itself somewhere? It's not like
               | movie producers are remotely capable of making chips.
        
               | formerly_proven wrote:
               | wdym cave in, it's not like Intel integrated technologies
               | pushed on them. Intel developed HDCP and co-developed
               | AACS (the Blu-ray encryption scheme). I dunno how much
               | they had their hand in AACS 2, but considering it's
               | literally built on SGX I'm going to claim without
               | evidence that Intel played a significant role in creating
               | AACS 2.
        
               | kevincox wrote:
                | It could have been pushed down the chain. The media
                | companies wanted to provide streaming so they talked with
                | Microsoft. Microsoft realized it would be a valuable
                | feature for their OS, and that if they resisted but Apple
                | didn't, they would lose market share as HD streaming
                | wouldn't be available. Then they said the same to Intel:
                | if you don't do it, people will buy AMD chips for HD
                | streaming.
               | 
               | So basically it happened because users see the feature
               | and don't realize how hostile it is until they actually
               | try to access their media in an "unsupported" way.
               | 
               | Of course if everyone else resisted streaming services
               | would likely have launched anyways, but that is classic
               | prisoners dilemma.
        
               | Jolter wrote:
               | It's not like Intel have no competition. They probably
               | figured that if they didn't cooperate with the media
               | industry, AMD or someone else would, and eat their lunch
               | when suddenly Sony movies can only be played on AMD
               | devices.
        
               | Crosseye_Jack wrote:
               | I'm not hearing much noise over Intel's latest chips
               | losing support for 4K Blu-ray disc playback
               | 
               | (They dropped SGX which is needed for the DRM)
               | 
               | https://www.whathifi.com/news/intel-discontinues-support-
               | for...
        
               | Jolter wrote:
               | Yeah, I'm not saying they made the right call... If they
               | have any business sense they will just open up the
               | microcode after this.
        
               | formerly_proven wrote:
               | SGX is only dropped from consumer SKUs for product
               | segmentation purposes.
        
               | Crosseye_Jack wrote:
               | Well, to be fair to Intel, the only people I know who
               | use a Blu-ray drive in a PC for 4K content rip the
               | content anyway, bypassing the DRM. The masses (in the
               | watching-movies-on-a-PC-via-Blu-ray-drive space) have
               | moved on to streaming anyway.
        
               | mindslight wrote:
               | > _What would have happened if it had told Hollywood to
               | go love itself somewhere?_
               | 
               | Intel's executives would show up to play golf, and find
               | out that nobody wanted to play with them. They might even
               | be no longer welcome at the country club. Power
               | structures coalesce, because powerful people identify
               | with each other's authoritarian desires.
        
               | xmodem wrote:
               | One possibility might be that Intel thought that if they
               | did not do it voluntarily, they might ultimately be
               | forced to by legislation.
        
               | mistrial9 wrote:
               | > This situation of cat-and-mouse hostility
               | 
               | I think you could refine your point here... markets
               | include adverse negotiation, secure applications are
               | a thing, and for that matter, weapons making is a
               | thing. You can't "wish away" that part.
               | 
               | I have a colleague who has made secure DRM for FAANG
               | since long ago, and has a large house and a
               | successful middle-class life from it.
        
               | nonrandomstring wrote:
               | To be brief and frank with you, I see those views as
               | "part of the problem".
        
               | mschuster91 wrote:
               | Intel's SGX was at least partially intended to
               | implement DRM [0], and its various-ish predecessor
               | technologies such as old-school Palladium/TPM were
               | expected to do the same [1] (but Palladium was
               | ultimately cancelled because of wide backlash).
               | 
               | [0] https://www.techspot.com/news/93006-intel-sgx-
               | deprecation-im...
               | 
               | [1] https://en.wikipedia.org/wiki/Next-
               | Generation_Secure_Computi...
        
               | markus_zhang wrote:
               | Thanks, didn't know these were pretty hardcore
               | protections. Just curious: might these protection
               | layers, and the layers protecting the protection,
               | introduce more vectors for attackers?
        
           | adrian_b wrote:
           | For now, the decryption keys have been obtained only for
           | older Atom CPUs (e.g. Apollo Lake, Denverton, Gemini Lake).
           | 
           | While these are reasonably widespread in the cheapest
           | personal computers and servers, the impact of examining
           | their microcode is much less than it would be if the
           | decryption keys for some of the mainstream Core or Xeon
           | CPUs had been obtained.
           | 
           | In a more decent world, the manufacturers of CPUs would have
           | been obliged to only sign their microcode updates, without
           | also encrypting them, to enable the owners of the CPUs to
           | audit any microcode update if they so desire, in order to
           | ensure that the update does not introduce hidden
           | vulnerabilities.
        
             | asah wrote:
             | Atom and Core/Xeon don't share microcode? e.g.
             | vulnerabilities in one would never work on the other?
        
               | adrian_b wrote:
               | No, the microinstruction formats are very different,
               | so the microprograms must also be different.
               | 
               | Even the newer Atom cores, e.g. the Tremont cores from
               | Jasper Lake/Elkhart Lake/Snow Ridge/Parker
               | Ridge/Lakefield have a different microinstruction format
               | than that which has been published now.
               | 
               | For every generation of Intel CPUs,
               | reverse-engineering the microcode requires all the
               | work to be done again; what has been discovered for
               | another generation cannot help much.
        
           | midislack wrote:
           | I think we can all assume the suspicions are true.
        
         | pyinstallwoes wrote:
         | Implications of the last sentence re: ELF binary? Why is it
         | interesting? Besides surface level understanding of attack
         | vector/bypassing.
        
         | [deleted]
        
       | no_time wrote:
       | Judgement is nigh. I'd love to get my hands on one of the
       | decrypted binaries but I expect much more capable reverse
       | engineers are already carrying the torch :^)
        
       | O__________O wrote:
       | Curious, if an attacker has the key and access to the code, is
       | there anything to stop an attacker from updating the microcode to
       | contain an exploit?
        
         | Jolter wrote:
         | The signing key has not been compromised, afaik.
        
           | O__________O wrote:
           | Agree, though your response doesn't address what happens
           | if an attacker had the signing key; it appears that an
           | attacker with the microcode encryption key, knowledge of
           | how microcode works and how it's updated, the signing
           | key, and the ability to generate a valid signed update
           | would be able to deploy an exploit to the CPU; obviously
           | this excludes the existing issues with Intel ME.
        
             | Jolter wrote:
             | Yes, presuming physical access as well as access to the
             | signing key, an attacker could certainly deploy malicious
             | microcode. It would be a very scary thing indeed. It seems
             | a reasonable threat model if your adversary is a malicious
             | state actor, or similarly well funded and ambitious
             | organization.
             | 
             | Edit: I assume this threat has existed as long as updatable
             | microcode has.
        
           | StillBored wrote:
           | Well, I guess it depends on how the private key is stored
           | in the CPU, and on whether there is any data they can
           | inject, or any way to cause the CPU (via power rail
           | noise/whatever) to accept an update, etc., that allows
           | the private key / validation routine to be bypassed.
           | 
           | Basically, the easier route is usually to just find a way to
           | bypass the check once, and use that to install a more
           | permanent bypass.
        
             | aaronmdjones wrote:
             | The key in your CPU would be the public key, not the
             | private key. The public key can only be used to verify
             | existing signatures, not create new ones.
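             | 
             | (To make that concrete: a toy sketch in Python with
             | the `cryptography` package. This is hypothetical and
             | nothing like Intel's actual update format; it only
             | shows that the holder of the private key can mint
             | signatures, while the embedded public key can merely
             | check them.)
             | 
             |   from cryptography.exceptions import \
             |       InvalidSignature
             |   from cryptography.hazmat.primitives \
             |       import hashes
             |   from cryptography.hazmat.primitives.asymmetric \
             |       import padding, rsa
             | 
             |   # stand-in for the vendor's offline signing key
             |   priv = rsa.generate_private_key(
             |       public_exponent=65537, key_size=2048)
             |   pub = priv.public_key()  # what a CPU would embed
             | 
             |   blob = b"microcode update"
             |   sig = priv.sign(blob, padding.PKCS1v15(),
             |                   hashes.SHA256())
             | 
             |   try:
             |       # anyone holding pub can check the signature...
             |       pub.verify(sig, blob, padding.PKCS1v15(),
             |                  hashes.SHA256())
             |       print("accepted")   # genuine update
             |   except InvalidSignature:
             |       print("rejected")   # tampered or forged
             |   # ...but forging a new one requires priv, which
             |   # never leaves the vendor.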
             | 
             | It may be possible through power fault injection to flip
             | the bits of the public key such that you could get it to
             | accept microcode signed with your own private key, but I
             | would be very surprised if the public key weren't burned
             | into the structure of the CPU itself in a manner that
             | renders it immune to such attacks.
             | 
             | Of course, power fault injection may still allow you to
             | bypass the verification routine altogether instead of
             | modifying the key it verifies with.
        
               | StillBored wrote:
               | Agree on the public/private thing, I typed without my
               | brain in gear.
        
       | alkjlakle wrote:
        
         | anewpersonality wrote:
         | What have these people stated about the war?
        
       | Genbox wrote:
       | Discussion here: https://news.ycombinator.com/item?id=32148318
        
         | dang wrote:
         | I think we'll merge that discussion hither, because this one
         | was posted earlier and has the more substantive source.
        
       | jacquesm wrote:
       | I would not be surprised if this ends up being the
       | highest-upvoted post on HN of all time, depending on the
       | outcome.
        
       | fulafel wrote:
       | If they are sane, Intel didn't rely on this staying secret in
       | their threat model.
        
         | hansendc wrote:
         | From: https://arstechnica.com/gadgets/2020/10/in-a-first-
         | researche...
         | 
         | "In a statement, Intel officials wrote: ... we do not rely on
         | obfuscation of information behind red unlock as a security
         | measure."
         | 
         | (BTW, I work on Linux at Intel, I'm not posting this in any
         | official capacity)
        
           | marcodiego wrote:
           | > I work on Linux at Intel, I'm not posting this in any
           | official capacity
           | 
           | Oh, great! Isn't there a way Intel could provide keys so
           | we could get rid of the IME, even if it means we won't
           | be able to play DRM'ed content?
        
             | sabas123 wrote:
             | IIRC the IME also handles a lot of core functionality
             | like power regulation. Unlike what many in this thread
             | probably think, it does provide a lot of core
             | functionality that you probably don't want removed.
        
         | jacquesm wrote:
         | If they were truly sane this whole thing would have never
         | existed, so all bets are off on that one.
        
           | mjg59 wrote:
           | Depending on how microcode is defined, it has arguably
           | existed as far back as the 40s. Which modern and
           | reasonably performant CPUs are you thinking of that
           | don't have microcode?
        
             | jacquesm wrote:
             | You completely misunderstood my comment.
        
               | mjg59 wrote:
               | What were you referring to, other than microcode?
               | 
               | Edit: Oh, you mention the encryption. Big companies love
               | obfuscating everything they create, because they're
               | afraid something commercially sensitive will exist there
               | and someone will copy it and outcompete them. I agree
               | that this is ridiculous, but I don't think it's evidence
               | of any sort of nefarious activity.
        
               | sroussey wrote:
               | Microcode spills lots of the secrets of the hardware
               | design, so you want it encrypted for as many years as
               | possible to keep your trade secrets out of competitors'
               | hands.
        
               | jacquesm wrote:
               | It's not evidence, but it may well have been used to hide
               | such evidence.
        
               | mjg59 wrote:
               | Well yes anything that's encrypted could potentially turn
               | out to be evidence of a crime when decrypted, but we
               | don't usually assume that the use of encryption is
               | indicative of that
        
               | ncmncm wrote:
               | For criminal cases. But this is not.
        
           | jpgvm wrote:
           | They were always going to need microcode to deal with
           | errata. Or are you referring to encrypting the microcode
           | rather than just signing it?
        
             | jacquesm wrote:
             | The latter. I can see why they did it but if it was used to
             | hide something nefarious Intel is done.
        
           | masklinn wrote:
           | What whole thing? Instruction microcoding is nothing new.
           | 
           | Or do you mean:
           | 
           | - the TXE vulnerability and / or undocumented debugging mode
           | (so microcode wouldn't have been extracted)
           | 
           | - microcode encryption (without which the microcode
           | would always have been completely readable)
           | 
           | - x86
           | 
           | - Intel itself
           | 
           | ?
        
             | jacquesm wrote:
             | The second, possibly combined with something dirty.
             | 
             | I can see why they would sign/encrypt it so that things got
             | safer, but then they should have done a much better job of
             | it. If it was encrypted to hide something that could not
             | stand the light of day then that's an entirely different
             | matter altogether.
             | 
             | Time will tell.
        
               | aleks224 wrote:
               | At what point in time did encrypting microcode in
               | processors become common? Is this a relatively recent
               | practice?
        
               | alophawen wrote:
               | IIRC, it happened when IME was introduced. Maybe 2008?
               | 
               | Then AMD PSP did the same starting in 2013.
        
       | LeonTheremin wrote:
       | Brazilian Electronic Voting Machines use Intel Atom CPUs. Any
       | backdoor found in microcode for these is going to be a big event.
        
         | dyingkneepad wrote:
         | There are SO many easier attack vectors for the urna eletronica
         | that you don't need to worry about this. I'm not implying there
         | is anybody actually attacking them, but if I were to commit
         | election fraud I wouldn't look at low level microcode
         | backdoors.
        
           | hammock wrote:
           | To my knowledge there is no evidence of widespread
           | voting machine fraud. Perhaps it is unwise to spread ideas
           | suggesting an election system could be compromised
        
             | reese_john wrote:
             | Absence of evidence is not evidence of absence. Not that I
             | believe that widespread fraud has happened before, but
             | electronic voting is inherently flawed, there is no reason
             | why a coordinated attack couldn't happen in the future.
        
             | dyingkneepad wrote:
             | I agree with you, and that's why I didn't spread
             | disinformation suggesting an election system may be
             | compromised. I just stated that the attack surface for
             | something like this is quite big.
        
           | hulitu wrote:
           | Why not ? This would be the perfect election interference.
        
             | sroussey wrote:
             | Hack the tabulating machines. So much easier and they are
             | the only ones that humans actually look at.
        
               | walterbell wrote:
               | Why not both?
        
       | FatalLogic wrote:
       | One year ago on HN, also involving Maxim Goryachy (@h0t_max), as
       | well as Dmitry Sklyarov (of DMCA 'violation' renown) and Mark
       | Ermolov:
       | 
       |  _Two Hidden Instructions Discovered in Intel CPUs Enable
       | Microcode Modification_
       | 
       | https://news.ycombinator.com/item?id=27427096
        
       | mfbx9da4 wrote:
       | This is, quite literally, hacker news.
        
       | marcodiego wrote:
       | How far are we from getting rid of IME now?
        
       | kriro wrote:
       | The security implications are quite devastating in theory but I'm
       | curious how big this will actually become (I'm guessing way less
       | big than it should be). For reference, INTC closing price: 38.71.
       | Pre-market right now: 38.84
       | 
       | Could be a decent short play if you think this will really blow
       | up big. At least it would be interesting to see how things like
       | the FDIV bug (or maybe car recalls or similar things) influenced
       | prices and compare it to this scenario.
        
         | eatsyourtacos wrote:
         | This obsession with stock prices and trying to trade on this
         | kind of stuff puts such a spotlight on how humans suck so bad.
        
         | alophawen wrote:
         | What makes you think any part of this news would change INTC
         | stock values, or even car recalls?
         | 
         | It's very confusing to me how you get to these conclusions.
         | Nothing in this news is about any scandals that would warrant
         | any of your suspicions.
        
           | kriro wrote:
           | If taken seriously this should have implications for
           | purchasing decisions, and I'm not sure what would happen
           | if some (government) organizations asked Intel to take
           | back their hardware because there's un-patchable RCE.
           | 
           | I mention the car recalls, the old Intel chip recalls,
           | or any recalls really because those are past situations
           | that could be comparable (I'm not implying this will
           | lead to car recalls, only that those are past events
           | that could be looked at for predicting how this might
           | develop).
        
             | hulitu wrote:
             | Purchasing decisions are usually not made by technical
             | people.
        
         | ncmncm wrote:
         | The world at large will utterly ignore it.
        
         | resoluteteeth wrote:
         | The microcode shouldn't even need to be secret in theory, so
         | being able to decrypt it on Celeron/Atom CPUs is by no means
         | "devastating."
        
           | sroussey wrote:
           | It's only devastating because competitors have access
           | and can understand all the things that were too good to
           | make public or to patent.
        
         | __alexs wrote:
         | I think the _next_ discovery might be really big but this is
         | read-only and only for Atom CPUs so far.
        
       | Waterluvian wrote:
       | Naive question about getting "dumps of microcode"
       | 
       | Getting a dump means getting access to a memory controller of
       | sorts and asking it to read you back the contents of addresses,
       | right?
       | 
       | But you're really getting what the memory controller decides to
       | give you. There could be more indirection or sneakiness, right?
       | Ie. I could design a memory controller with landmines, as in "if
       | you ask for 0x1234 I will go into a mode where I send back
       | garbage for all future reads until power is cycled."
       | 
       | Is this a thing?
        
         | jeffrallen wrote:
         | Yes, see page 96 of Bunnie Huang's "Hacking the Xbox",
         | where he tells the story of what happens when the machine
         | seems to boot something other than the ROM.
        
         | sabas123 wrote:
         | This is a thing and was used in this fantastic piece:
         | https://www.youtube.com/watch?v=lR0nh-TdpVg
         | 
         | However, the way they obtained these dumps was by going
         | deep into the CPU's debugger mode, which makes me doubt
         | anything spooky is going on.
        
         | avianes wrote:
         | > But you're really getting what the memory controller decides
         | to give you.
         | 
         | Yes, here the memory is read through a debug bus.
         | 
         | > I could design a memory controller with landmines, as in "if
         | you ask for 0x1234 I will go into a mode where I send back
         | garbage for all future reads until power is cycled."
         | 
         | Yes, it basically looks like a backdoor, and you can do it the
         | other way around: The memory read through the debug bus is
         | exactly the content of the ROM, but the memory controller is
         | made so that when the processor reads a specific address or
         | data it doesn't return the value in memory but something else.
         | 
         | This way even a person who would use a visual or an intrusive
         | memory extraction method would not notice the backdoor. The
         | only way to discover it is to do a full inspection of the
         | logic, which probably nobody will do.
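         | 
         | A minimal toy sketch of the idea (hypothetical Python
         | with invented names, just to show the shape; real
         | hardware would be RTL, not software):
         | 
         |   class BackdooredRom:
         |       def __init__(self, rom):
         |           self.rom = rom          # the "honest" contents
         |           self.tripped = False    # "landmine" state
         | 
         |       def debug_read(self, addr):
         |           # The debug bus dutifully returns the real ROM,
         |           # so an extracted dump looks perfectly clean.
         |           return self.rom[addr]
         | 
         |       def cpu_read(self, addr):
         |           # The normal fetch path lies at one magic
         |           # address, and can also poison all later reads
         |           # (the "landmine" variant from the parent).
         |           if addr == 0x1234:
         |               self.tripped = True
         |               return 0xDEADBEEF
         |           if self.tripped:
         |               return 0xFF         # garbage from now on
         |           return self.rom[addr]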
         | 
         | > Is this a thing?
         | 
         | Yes, sometimes some addresses in a memory system are
         | effectively not readable (write-only). For example, with
         | some memory-mapped configuration registers, a 0 value may
         | be returned instead of the register contents.
         | 
         | But your question sounds to me more about mechanisms to hide a
         | backdoor.
         | 
         | Regarding hardware backdoors, they are always
         | theoretically and practically possible, and almost always
         | undetectable, since nothing prevents the designer from
         | introducing logic that has malicious behaviour and is
         | nearly non-observable.
         | 
         | This is the problem with theories about backdoors in
         | modern processors. Without evidence, these theories fall
         | into the realm of conspiracy theories. But it's almost
         | impossible to obtain evidence, and no one can say that
         | such backdoors don't exist.
        
       | punnerud wrote:
       | Is there any chance to get the RSA keys to be able to make your
       | own code?
        
         | [deleted]
        
         | W4ldi wrote:
           | For that you'd need to hack Intel's infrastructure and get
         | access to the private keys.
        
           | pabs3 wrote:
           | Probably the keys are on well-guarded offline HSMs.
        
             | 5d8767c68926 wrote:
             | Are there rules/standards for how these top secret keys are
             | stored? HDCP, Mediavine, keys to the Internet, etc. Sure,
             | you could keep it locked in a Scrooge McDuck security
             | vault, but you need to be able to burn the key into
             | hardware/software, meaning it ultimately needs to be
             | distributed across many machines, greatly increasing the
             | number of people with potential access.
        
               | cperciva wrote:
               | The _public_ key needs to be in the CPU. The _private_
               | key is only needed when Intel needs to sign new
               | microcode.
        
               | leetbulb wrote:
               | The security of these keys depends on the signing ceremony
               | / ritual involved. Here's an example
               | https://www.youtube.com/watch?v=_yIfMUjv-UU
        
             | hulitu wrote:
             | Or on an S3 vault somewhere.
        
               | tomrod wrote:
               | Terrifying.
        
       ___________________________________________________________________
       (page generated 2022-07-19 23:00 UTC)