[HN Gopher] SATAn: Air-Gap Exfiltration Attack via Radio Signals...
       ___________________________________________________________________
        
       SATAn: Air-Gap Exfiltration Attack via Radio Signals from SATA
       Cables
        
       Author : PaulHoule
       Score  : 232 points
       Date   : 2022-07-18 16:00 UTC (6 hours ago)
        
 (HTM) web link (arxiv.org)
 (TXT) w3m dump (arxiv.org)
        
       | system2 wrote:
        | I wish there were a nice YouTube video of this attack being
       | explained.
        
       | aasasd wrote:
       | Happy to see that vulnerability names keep going deeper into the
       | metal space: they seem to be making small steps from dad-heavy-
       | metal into the territory of black and doom, corresponding to
       | sometime in the 1970s.
       | 
       | I, of course, will continue to advocate for full-on grindcore.
       | "Putrid Air Carries Pestilent Waves of Betrayal and Decay" or
       | gtfo.
        
         | [deleted]
        
         | [deleted]
        
       | [deleted]
        
       | seiferteric wrote:
        | What's stopping us from using optical PHYs more often for
        | interconnects? Just cost?
        
         | advisedwang wrote:
         | Cost yes, but also inflexible cables, more space required for
         | transducers, harder to have a bus where multiple devices access
         | the same transmission medium, can't include power in the same
         | connector etc.
        
         | IshKebab wrote:
         | You can use optical for ethernet but it's expensive and mostly
         | for long distances as I understand it. There's no need for that
         | with SATA.
        
       | infogulch wrote:
       | How do you protect against these kinds of attacks? Layers of
       | shielding/faraday cages?
        
         | detaro wrote:
         | Don't let an attacker run code on your system in the first
         | place. (yes, shielding would of course help against this
         | specific attack, but you have a long list of things to worry
         | about before this kind of thing even becomes relevant)
        
         | ngneer wrote:
         | Same as with all attacks, you protect against them by making
         | the return lower than the cost...
        
         | tracker1 wrote:
          | It wouldn't take much shielding at all for this specific
          | one... you could line a case with a faraday mesh material,
          | which would probably be enough for most similar attacks.
          | 
          | That said, for most of these kinds of things, you have to
          | get a malicious payload onto such a system. The stuxnet
          | approach is one way, if you have specific targets. Aside
          | from that, you pretty much need physical access and the
          | ability to load malicious software. At that point, you may
          | as well stick in a small usb bluetooth/wireless dongle,
          | depending on the situation... assuming it's a standard
          | desktop, there's a good chance of an unused USB 2 header
          | you could piggyback on.
         | 
          | At that point, an actual faraday cage or mesh around the
          | room in question would be the next practical step. Some
          | secure buildings have walls up to 6' thick, with no
          | communications out except through known devices on wired
          | ports with software checks in place (seen this in finance).
        
           | zaarn wrote:
           | Even a faraday cage would be insufficient if I can get a
            | power meter into a power outlet on the same circuit. Or if
           | we're talking about a datacenter, I just need a good
           | electrical measurement device (Voltage, Amperage, Frequency,
           | PF) on the cable outside.
           | 
           | I can, in this example, just have the computer use more power
           | to signal a 1 and less power to signal a 0. For a DC I just
           | ramp up the entire DC load. Do that over the span of a week
            | or so to make it less noticeable.
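            | 
            | A minimal sketch of that idea (illustration only; the symbol
            | length, worker count, and bit pattern below are assumptions,
            | not measurements): burn CPU to raise power draw for a 1, idle
            | for a 0, and let a meter on the same circuit read the bits
            | off.
            | 
            |     import multiprocessing, time
            | 
            |     SYMBOL_SECONDS = 5  # assumed long enough to show on a meter
            | 
            |     def burn(stop_time):
            |         # busy-loop to raise package power draw
            |         while time.time() < stop_time:
            |             pass
            | 
            |     def send_bit(bit, workers=4):
            |         end = time.time() + SYMBOL_SECONDS
            |         if bit:
            |             procs = [multiprocessing.Process(target=burn, args=(end,))
            |                      for _ in range(workers)]
            |             for p in procs: p.start()
            |             for p in procs: p.join()
            |         else:
            |             time.sleep(SYMBOL_SECONDS)  # low draw encodes a 0
            | 
            |     if __name__ == "__main__":
            |         for b in (1, 0, 1, 1, 0):  # receiver watches the meter
            |             send_bit(b)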
        
           | mindcrime wrote:
           | _At that point, you may as well stick a small usb bluetooth
           | /wireless dongle, depending on the situation..._
           | 
           | Right, but there are places where if you stick an unapproved
           | USB device into a computer's USB port, a nice person from
           | security taps you on the shoulder 10 minutes later. In those
           | cases, you may well have the ability to run code on the
           | computer, but no ability to attach removable storage. It's at
           | least plausible (if not exactly probable) that something like
           | this could come into play for somebody looking to exfiltrate
           | a small amount of data from a system they have access to.
           | 
           | I'm thinking something closer to (but perhaps not exactly)
            | like the Edward Snowden kind of scenario. Something where
            | you have access to the computer, but can't use the other
            | commonplace means of getting data off of the machine and
            | out the door.
           | Yeah, it's a stretch, but it's not beyond the bounds of
           | imagination.
        
             | tracker1 wrote:
             | How do you get your malware payload on the computer in the
             | first place?
        
               | mindcrime wrote:
               | Download it from the Internet? Key in the source by hand
               | and compile it locally? Type in the bytes with a hex
               | editor? I mean, there's a lot of possible ways. Note
               | "possible", perhaps even "plausible", but maybe not
               | "probable". And a lot of it would come down to the
               | details of the local situation.
               | 
               | And if you're wondering, I can confirm that I've
               | definitely worked at places where you were not allowed to
               | plug in a USB device (I literally saw a co-worker get the
               | "tap on the shoulder" from Security for doing that), but
               | yet many (most?) Internet downloads were allowed. Does
               | that make sense? Arguably not. Does it happen in the real
               | world? Absolutely yes.
        
       | eigenvalue wrote:
       | This isn't the first security software to be called Satan-- I
       | remember reading about one back in the 90s for breaking into
       | networks. I guess it's been long enough that it wouldn't cause
       | any confusion.
        
         | [deleted]
        
         | rnd0 wrote:
         | Ironically, I was thrown off (briefly) because I remember
         | reading about the original SATAN back in the day. I may be
         | wrong, but I think it was a precursor to nmap? I haven't looked
         | it up and this was in the late 90's.
        
           | eigenvalue wrote:
           | Yup, this is it:
           | 
           | https://en.m.wikipedia.org/wiki/Security_Administrator_Tool_.
           | ..
        
             | rnd0 wrote:
             | Thanks! I followed through to the home page and the "SATAN
             | updates" link:
             | 
             | >http://www.porcupine.org/satan/release-2-plan.html
             | 
             | >Release 2 is currently in the works - the original plan
             | was to update SATAN on its first birthday, but that
             | schedule has slipped.
             | 
             | sensiblechuckle.jpg
        
       | noname120 wrote:
       | Previously: GSMem, BitWhisper, AirHopper, ODINI, PowerHammer,
       | LED-it-GO, USBee, Bridgeware, MAGNETO, etc etc.
       | 
        | See the other ~50 copycat papers of the author:
       | https://www.semanticscholar.org/author/Mordechai-Guri/226003...
       | 
       | At some point the author is going to run out of clever puns...
        
         | blobbers wrote:
         | What do you mean by copycat? As in he takes someone else's
         | paper and gives it a clever name? Or do you just mean he is
         | using lots of silly methods of doing data exfil?
        
           | bcook wrote:
           | I think they mean lots of people are copying the mentioned
           | author.
        
             | spullara wrote:
             | Nope, that same author has tons of papers about
             | exfiltrating data from air gapped systems using many
             | different mechanisms. Copycat is the wrong way of thinking
             | about it. All different but do the same kind of thing.
        
       | RF_Savage wrote:
        | A new day and a "new" air-gap exfiltration from this dude. What
        | EMI-generating buses and peripherals are left now that he's done
        | SATA, DRAM, CPU, fan speed and Ethernet?
        
         | kortex wrote:
         | Hard drive access LED, optical disk servo, and the
         | piezoelectric discharge due to thermal rise/fall of the GPU.
         | 
         | I'm only half joking.
        
           | detaro wrote:
           | > _Hard drive access LED_
           | 
           | He's done that one: https://arxiv.org/abs/1702.06715
        
           | klysm wrote:
           | > the piezoelectric discharge due to thermal rise/fall of the
           | GPU
           | 
            | Now this is something I'd like to see!
        
             | maicro wrote:
             | I don't remember which rule of the internet claims that
             | there's always a relevant XKCD, but - relevant XKCD:
             | https://xkcd.com/1172/
        
         | badrabbit wrote:
         | Ben-Gurion seems to generate a lot of papers like this. Makes
         | me wonder about the secret spy stuff that doesn't get
         | published.
        
           | nibbleshifter wrote:
            | It's one guy's lab there (Mordechai). They basically churn
            | out a metric fuckload of the same thing, different bus.
           | 
           | None of it makes for a practical attack.
        
             | ivraatiems wrote:
             | This feels like one of those classic things, though, where
             | you read this and go "ha ha, that'll never be valuable or
             | even work in real life", and then in ten years you read an
             | article that's something like "Hackers Steal Ugandan
             | President's Credit Card Data by Reading a Single Bit".
             | 
             | Identifying attack vectors is just building up an arsenal
             | of tools that can potentially be used depending on the
             | circumstance.
        
               | tbihl wrote:
               | The standardization of protocols and physical components
               | on computers, paired with the accessibility of global
               | wealth from computers, means that the payout is always
               | there for obscure attacks. They just may have to be
               | incorporated into large toolsets with significant
               | automation to know when it's the right tool for the job.
        
               | TedDoesntTalk wrote:
               | Theft of Bitcoin wallet private key from a well-known
               | Bitcoin billionaire seems applicable.
        
             | thrashh wrote:
             | These are all practical attacks. Most of us are just not
             | the right customer for them
             | 
             | For most people, if you are trying to exfiltrate some
             | customer data to sell on the black market, are you going to
             | spend a bunch of development time blinking a LED and
             | setting up a receiver? Nah you'd just move to some other
             | victim because you are trying to make a profit, not waste
             | more money
             | 
             | But someone like a government trying to spy on another
              | country... the cost of spies is pretty high... but paying
              | some engineers to spend all day trying to make an LED blink
              | some data is relatively cheap
             | 
              | That said, I think a lot of these attacks are kind of
              | boring even if they hadn't been published. If anyone here was
             | paid 6 figures to do nothing all day but figure out some
             | obscure variables in your computers to flip to exfiltrate
             | some data, I'd be disappointed if you couldn't figure
             | anything out. I think most of these attacks are kind of
             | obvious.
        
             | goodpoint wrote:
             | They are all very practical, just not often needed.
        
             | duskwuff wrote:
             | > They basically churn out a metric fuckload of the same
             | thing, different bus.
             | 
             | Notably, most of the attacks from Mordechai's lab describe
             | low-bandwidth channels for _deliberate_ data exfiltration.
             | These attacks would only apply in unusual situations where
             | an attacker can run arbitrary code on a machine, but the
             | machine is isolated from the outside world. (Scenarios like
             | Iranian nuclear facilities come to mind.)
        
               | badrabbit wrote:
                | You say that, but I am actively dealing with a Chinese
                | malware (public info) I won't name that does just that,
                | but with USB. It sideloads a DLL when a shortcut opens a
                | legit app on a USB drive (the shortcut is tricky in that
                | it looks like a folder and also opens a folder when you
                | click on it). It then collects all kinds of documents and
                | exfils them to the internet, but if there is no internet
                | it archives them back to the USB drive, so the next time
                | it runs with internet access it will also exfil any docs
                | on the USB from the air-gapped machines. Fortunately it
                | was just a spillover infection for us, but the tricks in
                | the paper would let it avoid waiting for a USB or internet
                | access; it would be more real-time. Like if it is an ICS
                | system for nuclear centrifuges, it will send back what it
                | found and accept destructive tasks.
        
               | userbinator wrote:
               | You still need a receiver within range in order to do
               | this.
        
               | rtev wrote:
               | Someone attacking a nuclear facility can get a receiver
               | within range
        
               | happyopossum wrote:
               | > accept destructive tasks.
               | 
                | Not with this - it's a one-way channel
        
               | woevdbz wrote:
               | Well but that is exactly the point of air-gapping. It's
               | meant to give the institution a guarantee that, no matter
               | how badly the box might be owned, no matter if the user
               | themselves is compromised, stuff won't get out. If a user
               | with admin rights on the machine can get it to
               | communicate with the external world, that's an air gap
               | failure.
        
               | duskwuff wrote:
               | Right, my point is that this is a class of attacks which
               | are only relevant to a specific, small class of users.
               | It's irrelevant to users who have intentionally enabled
               | any kind of network communications on their computer, and
               | it's also irrelevant to users who have robust controls on
               | what code is executing on their system, or who run
               | untrusted code in an environment which isolates it from
               | hardware (like a virtual machine).
               | 
               | So, in short, it only affects ineptly managed secure
               | environments.
        
               | PeterCorless wrote:
               | Not that there are any of those...
        
               | woevdbz wrote:
               | > this is a class of attacks which are only relevant to a
               | specific, small class of users
               | 
               | Yes.
               | 
               | > It's irrelevant to users who have intentionally enabled
               | any kind of network communications on their computer
               | 
               | No, there are lots of use-cases at least for LAN
               | communications.
               | 
               | > it's also irrelevant to users who have robust controls
               | on what code is executing on their system, or who run
               | untrusted code in an environment which isolates it from
               | hardware (like a virtual machine).
               | 
               | No, it's relevant in those cases too, as part of a
               | defense in depth strategy.
        
             | IncRnd wrote:
             | They are practical, just not widespread. There is no reason
             | for them to be widespread. One use-case is to obtain data
             | from air-gapped machines that do not connect to the
              | Internet. More broadly, these are for targeted attacks, not
              | for dumping JS onto everyone who visits Wikipedia.
        
       | koshkaqt wrote:
       | Would this still work if full-disk encryption were enabled on the
       | victim machine?
        
         | infinityio wrote:
         | To my understanding (haven't yet read the paper, just the
          | abstract) - most full-disk encryption schemes are implemented
          | in the OS as opposed to in hardware, so if the attacker has deep
         | access to the system they could probably find a way to write
         | unencrypted data to disk? That may not work inside a VM though
        
         | LaputanMachine wrote:
         | It probably does. In the code example on page 5, disk activity
         | is only generated to send a "1", while a "0" is encoded as a
         | time with no disk activity.
        
         | jwilk wrote:
         | They write only random data and discard any data they read, so
         | FDE should not have any effect.
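          | 
          | A rough sketch of that on-off keying (not the paper's actual
          | code; the file path and chunk size are assumptions, while the
          | ~1 second per bit matches the rate quoted from the paper
          | elsewhere in the thread): a burst of random writes puts traffic
          | on the SATA bus for a 1, an idle interval encodes a 0, and
          | since the payload is already random, FDE changes nothing.
          | 
          |     import os, time
          | 
          |     BIT_SECONDS = 1.0        # paper reports roughly 1 bit/sec
          |     CHUNK = 1024 * 1024      # 1 MiB of random data per write
          | 
          |     def send_bit(bit, path="/tmp/satan_cover_file"):
          |         end = time.monotonic() + BIT_SECONDS
          |         if bit:
          |             with open(path, "wb") as f:
          |                 while time.monotonic() < end:
          |                     f.write(os.urandom(CHUNK))  # random payload
          |                     f.flush()
          |                     os.fsync(f.fileno())  # force real disk I/O
          |         else:
          |             time.sleep(BIT_SECONDS)       # silence on the bus
          | 
          |     for b in (1, 0, 1, 1, 0, 1):          # example frame
          |         send_bit(b)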
        
       | tracker1 wrote:
       | While incredibly cool... wouldn't you have to have physically
       | accessed and exploited the system for this type of attack?
       | 
       | I'm not sure what kind of practical implementation this could
       | really have.
        
         | lallysingh wrote:
         | You don't need physical access, but you can't do it via
         | Internet. An intercepted/faked software CD/USB would probably
         | be a good vector. They have to install software somehow.
        
         | [deleted]
        
         | detaro wrote:
         | That's what "exfiltration" generally means, yes
        
         | PeterisP wrote:
          | There are various ways to push malware into isolated systems
          | (compromised USB drives used to be common, and still seem to
          | work in some penetration tests; more generally, you might
          | succeed in getting someone to deliver a malicious document
          | file with some zero-day in it to such a system. There's also
          | compromised hardware - e.g. ship them a USB mouse that, 12
          | hours after being plugged in, also starts acting as a USB
          | storage device and a USB keyboard that sends commands to
          | install stuff), but then you have the problem of what that
          | malware should do once it's on those systems - being able to
          | establish a communications channel is useful in such a
          | scenario. So there's definitely a role for such tools; it's
          | just that these scenarios are relevant only to quite expensive
          | targeted attacks, essentially spycraft by state actors, not
          | e.g. commercial ransomware operators.
        
         | babypuncher wrote:
         | Gain physical access to a machine, install malware that gathers
         | data and transmits it using this vulnerability. Only works if
         | you can set up a listening post within range of the PC.
         | 
         | Certainly very limited utility, and definitely not something
         | any of us should actually be worried about. But it probably is
         | something to be aware of if you run IT for an embassy or any
         | other entity that could be targeted by a highly motivated state
         | sponsored actor.
        
       | jjeaff wrote:
       | I love these extreme air gap exploits. Detecting keyboard entry
       | by analyzing the sound of the typing and reading CRT monitor
       | radiation to mirror the screen from a distance come to mind.
       | 
       | But have any of these extreme exploits ever been used in the
       | wild? They all seem impossible to pull off in anything but lab
       | controlled conditions.
        
         | evan_ wrote:
         | Not exactly the same thing, because it required modifying the
         | devices in question, but during the Cold War the Soviets
         | managed to implant some really fascinating bugs into
         | typewriters used inside the US Embassy:
         | 
         | https://www.cryptomuseum.com/covert/bugs/selectric/
        
           | jerf wrote:
           | Yeah... it's hard to know what exactly they get up to in that
           | world, but the floor put under their capabilities by what has
           | been publicly exposed doesn't exactly support the "nah, it's
           | all too hard and nobody would ever bother" argument at _all_.
            | Even if we assume, without real reason to do so, that we've
            | heard the most impressive stuff from them and that it's just
            | a small sampling of the number of times they've used it: if
            | your adversary is a state-level actor, you should assume the
            | very, very worst. If you're wrong, it won't be by much.
           | 
           | Read Google's Project Zero blog, too:
           | https://googleprojectzero.blogspot.com/ We don't know that
           | these exploits in particular are used, but consider that
           | Project Zero is the moral equivalent of a hobby for Google.
           | What would it look like to have _hundreds_ of people at that
           | rough skill level, training each other, practicing all the
           | time, and building software support for each other?
           | 
           | The capabilities of attackers are not bounded by your
           | imagination.
        
         | dylan604 wrote:
         | I think Tom Cruise did this once in "Mission:Impossible 32"
        
         | AdamJacobMuller wrote:
         | > But have any of these extreme exploits ever been used in the
         | wild?
         | 
         | I suspect you're going to find out in 50 years when government
         | documents (not inherently US gov) are declassified.
         | 
         | State level spying is the only thing I can think of where the
          | value of the information is so high (making the effort of this
          | kind of attack worth it) and where there are many scenarios
         | where the volume of highly valuable information is
         | comparatively tiny.
         | 
          | Just as a very off-the-cuff example of what I mean by the
          | latter: we don't need to exfiltrate satellite photos of things
          | (gigabytes of data), but it could be very valuable to
          | exfiltrate the metadata of what they are looking at
          | (coordinates and time).
         | 
         | Also, exfiltrating information like names of sources or meeting
         | points or other methods can be trivial amounts of data but
         | finding even a single compromised person on our side would be
         | immensely valuable!
         | 
         | I'm reminded of "The Thing":
         | https://en.wikipedia.org/wiki/The_Thing_%28listening_device%...
        
           | moltude wrote:
            | Didn't Stuxnet jump into an air-gapped system? I'm curious
            | if there is a significant difference between egress from and
            | ingress into air-gapped systems.
        
             | legalcorrection wrote:
             | I thought a thumbdrive was the attack vector for stuxnet.
        
               | TedDoesntTalk wrote:
               | Correct:
               | 
               | https://www.theverge.com/2012/4/12/2944329/stuxnet-
               | computer-...
        
             | SketchySeaBeast wrote:
             | > I'm curious if there is significant difference between
             | egress and ingress into air gaped systems.
             | 
             | I would expect so. I'm fairly certain there's a big
             | difference between pulling fuzzy data out and figuring out
             | what it means as opposed to trying to electromagnetically
              | fling fuzzy data into a system that's not supposed to have
              | information flung at it and getting the system to accept
              | what you meant.
        
         | Tostino wrote:
         | We'll eventually find out when the latest iteration of
         | something like Stuxnet is found in the wild...if it still
         | leaves enough of a trace to be found.
        
           | Schroedingersat wrote:
            | Don't worry. The latest iteration will be running on the PSP
            | via the private key and will have ring -1 access.
        
         | serf wrote:
          | Van Eck phreaking has been used successfully against
          | electronic ballot machines. [0]
          | 
          | Apparently it was demonstrated practically during the Korean
          | War, but I can't find much about that.
         | 
         | [0]: https://yro.slashdot.org/story/09/11/22/027229/Brazilian-
         | Bre...
        
         | NovemberWhiskey wrote:
          | This is not exactly that - this is the use of a SATA bus as an
          | RF radiator to create a (very slow) covert exfiltration channel
          | from an air-gapped computer.
        
           | cogman10 wrote:
            | Yeah... but I mean, if the computer is in a steel/aluminum
            | box (likely) then the distances and locations from which you
            | can intercept that 6 GHz signal are pretty limited.
            | 
            | It's a neat attack, but to pull it off you pretty much need
            | both physical access AND the ability to install malicious
            | software on the target. Perhaps something the CIA/NSA pull
            | off, such as Stuxnet? Even then, the point of an air-gapped
           | computer is that installing such software would be pretty
           | difficult in the first place.
        
             | thrashh wrote:
             | I mean if the stakes were high enough
             | 
             | If I told you that you could make 500 million dollars
             | (after taxes) if you figured this out, suddenly you would
             | be on top of this.
             | 
                | The stakes of Stuxnet were probably "the safety of the free
             | world" in the eyes of the people who paid for the work
        
               | cogman10 wrote:
               | I don't mean that the attack isn't possible/wouldn't
               | necessarily work. I'm saying it's impractical given that
               | one of the steps to execute it is "have physical access
               | to the machine, the ability to install software, and time
               | to sit around and record the outbound data (or to be able
               | to revisit the site and collect a recording device)."
               | 
               | With each of those requirements, there are easier and
               | faster ways to get data off a machine. If you can install
               | software you can likely download data that software would
               | access. If you can access the machine twice and install
               | software that runs in the background between visits, you
               | can install your keylogger/data collectors and simply
               | record the data on the device.
               | 
               | Stuxnet wasn't about getting data out of the machine, it
               | was about breaking machines. It worked well because it
               | didn't require physical access to the machines. It spread
               | as a virus through the regular updates that Iran was
               | doing for their machines.
               | 
               | If you had such a stuxnet virus in your back pocket, then
               | you can likely steal the data and record it back on the
               | USB device (or any plugged in USB device) on the system.
        
               | thrashh wrote:
               | I was more referring to Stuxnet as an example of someone
               | who is willing to spend the money on people to figure out
               | some obscure techniques
               | 
               | Spy agencies have definitely used obscure data
               | exfiltration techniques and they can afford spies.
        
         | jagger27 wrote:
         | I'd say it's a solid "no", and the author says as much in the
         | paper.
         | 
         | > We transmitted the data with a bit rate of 1 bit/sec, which
         | is shown to be the minimal time to generate a signal which is
         | strong enough for modulation. The BER for PC-1 is presented in
         | Table VI. As can be seen, the BER of 1% - 5% is maintained
         | between 0 - 90 cm. With a greater distance of 120 cm, the BER
         | is significantly higher and reaches 15%. With PC-2 and PC-3,
         | the bit error rates (BER) are less than 5% only in short
         | proximity up to 30 cm, and hence the attack is relevant only
         | for short ranges in these computers.
         | 
         | This particular attack is a weak 6 GHz signal that can exfil
         | about 1 bit/s from a metre away. It's neat, but impractical.
        
           | lallysingh wrote:
           | Enough for cryptographic key material, but not much else.
           | 
           | Still, publishing which methods are a risk, and which ones
           | aren't, is quite useful.
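            | 
            | A quick back-of-the-envelope sketch for the key-material case
            | (only the ~1 bit/s rate and ~5% BER come from the paper; the
            | 3x repetition coding and the lack of framing overhead are
            | assumptions for illustration):
            | 
            |     ber = 0.05       # worst-case BER within 90 cm, per paper
            |     rate = 1.0       # bits per second
            |     key_bits = 256
            |     repeat = 3       # send each bit 3x, majority-vote at RX
            | 
            |     # probability a majority-of-3 vote is still wrong
            |     p_residual = 3 * ber**2 * (1 - ber) + ber**3
            |     expected_errors = key_bits * p_residual
            |     seconds = key_bits * repeat / rate
            | 
            |     print(f"residual bit error:      {p_residual:.4f}")
            |     print(f"expected flipped bits:   {expected_errors:.2f}")
            |     print(f"transmit time (minutes): {seconds / 60:.1f}")
            | 
            |     # ~0.0073 residual BER, ~1.9 expected bad key bits, and
            |     # about 12.8 minutes on air: workable for a 256-bit key
            |     # only with extra coding or retransmission.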
        
           | leeter wrote:
            | Pretty sure ferrite beads can shut this down anyway. But this
            | seems like more of a practical question of "If they can get
            | that close, do they really need this?" They already have USB
            | or software access in some form.
        
             | cogman10 wrote:
             | Yup, that's the part that makes this attack completely
             | impractical. You are trying to leak information but first
             | you have to install a virus on the computer? Neat concept,
             | wildly impractical.
        
               | PeterisP wrote:
               | I mean, Stuxnet is an illustrative example that has been
               | seen in the wild (and the rare exception of one that
               | became public - in general, if you'd be the target of
               | something like this and found out, the results of that
               | analysis would be classified and unpublishable), and
                | there have been almost 20 years to make improvements since
               | Stuxnet was first developed, so there definitely are real
               | attacks aiming to do stuff and/or exfiltrate data from
               | air-gapped computers after "installing a virus on the
               | computer" - and it's quite clear that Stuxnet did achieve
               | a significant practical effect.
        
         | nonrandomstring wrote:
         | > But have any of these extreme exploits ever been used in the
         | wild? They all seem impossible to pull off in anything but lab
         | controlled conditions.
         | 
          | Not at all. I've seen clearly with my own eyes the image of one
          | person's VT100 CRT tube appearing on another across the room
         | because the ground shielding had disconnected. If you have a
         | high gain YAGI antenna, digital RF buffers and some fancy
          | modern DSP, lord only knows what you can snoop through the
         | walls.
         | 
         | Most of this is solved by having a simple metal (steel) PC case
          | and grounding the PSU though. Use good-quality cables with
          | FCC/IEC badges, not the bargain-bucket Chinese ones.
        
           | ConstantVigil wrote:
            | I used to see this happen with our old TVs at an old house
            | with really bad grounding. I always wondered why it did that.
            | 
            | We could be playing a video game or movie, and set all the
            | other TVs to channel 2 or something like that, while we
            | played on channel 1. You could see the action on each of
            | them. Almost 1:1 on the reaction times, albeit a little
            | fuzzy.
        
           | fortran77 wrote:
           | "Yagi" isn't an acronym.
        
         | orangepurple wrote:
         | Soviet agents secretly installed tiny sensing devices in about
         | a dozen embassy typewriters. The devices picked up the contents
         | of documents typed by embassy secretaries and transmitted them
         | by antennas hidden in the embassy walls. The antennas, in turn,
         | relayed the signals to a listening post outside the embassy.
         | "Depending on the location of the bugged typewriters, the
         | Soviets were able to receive copies of everything from routine
         | administrative memos to highly classified documents. "One
         | intelligence officer said the potential compromise of sensitive
         | information should be viewed with 'considerable seriousness.'
         | "Another intelligence expert said no one knows for sure how
         | many or what secrets were compromised. A third official called
         | the entire affair a fiasco.
         | 
         | https://media.defense.gov/2021/Jul/13/2002761779/-1/-1/0/LEA...
        
         | badrabbit wrote:
          | Yeah, Snowden mentioned that in his experience at the NSA they
          | did use tricks like this, and this was the late 2000s. There is
          | an interview he gives where he goes under a blanket to type his
          | password on a laptop to avoid the CRT-reader thingy.
        
           | ekianjo wrote:
            | There is no CRT on a laptop.
        
           | glenstein wrote:
            | Is there any source that it was the CRT-reading thing
           | specifically? I remember that part of the documentary but I
           | am still wondering if he had a specific attack in mind when
           | he did that.
        
           | FreakLegion wrote:
           | Van Eck phreaking works through walls. I haven't seen the
           | interview but most likely he was just hiding the physical act
           | of typing, since that too can leak information about a
           | password.
        
           | lallysingh wrote:
           | Password prompts usually don't show the passwords you're
           | typing in.
        
             | jacquesm wrote:
             | Your fingers are hitting the keys.
        
               | lazide wrote:
               | What would that have to do with a CRT?
        
               | badrabbit wrote:
                | Sorry, the attack works on laptop monitors, not CRTs.
        
       ___________________________________________________________________
       (page generated 2022-07-18 23:00 UTC)