[HN Gopher] Ask HN: How do you trust that your personal machine ...
       ___________________________________________________________________
        
       Ask HN: How do you trust that your personal machine is not
       compromised?
        
       "Compromised" meaning that malware hasn't been installed or that
       it's not being accessed by malicious third parties. This could be
       at the BIOS, firmware, OS, app or any other other level.
        
       Author : coderatlarge
       Score  : 387 points
       Date   : 2023-01-15 12:20 UTC (10 hours ago)
        
       | neoromantique wrote:
       | I use locked down apple devices and wireguard to a remote server
       | to do work, so all my actually sensitive data resides on the
       | remote server that is reasonably hardened, and I believe I would
       | hear if iPadOS was compromised to an extent that I need to worry
        | about it fairly quickly, I hope so at least.
        
       | rvillanueva wrote:
       | Like others here are saying, you can never be 100% sure. But that
       | doesn't mean there's nothing you can do.
       | 
       | If you're worried about the impact to your broader organization
       | (which is what most of the sophisticated threats tend to target),
       | you should think about risk mitigation through the Swiss Cheese
       | defense model. Each system is inevitably going to have holes, but
       | layering them on top of one another will incrementally improve
       | your coverage.
       | 
       | For instance:
       | 
       | - Your team should be trained about phishing attacks. But
       | inevitably some will get through, so...
       | 
       | - You should implement 2FA in case a password is compromised. But
       | a threat actor may be able to capture a 2FA-passed SSO session
       | token, so...
       | 
       | - Production access should be limited to a small number of
       | individuals. But even they might get compromised, so...
       | 
       | - You should programmatically rotate credentials to make old
       | leaked credentials useless. But a newer one might be captured,
       | so...
       | 
       | - Data should be sufficiently encrypted at rest and in transit,
       | and...
       | 
       | - Your team should have an incident management system and culture
       | in place to quickly respond to customer reported incidents and
       | escalate it to the right level and...
       | 
       | - Audit logs should be tracked to understand the blast radius in
       | case of compromise - and so forth
       | 
       | When you look at incidents like CircleCI and LastPass, a good
       | security organization will understand that there was more than
       | just one point of failure and should talk in detail about how
       | they are shoring up each level.
        
         | cm_silva wrote:
          | Exactly this. Security is more about defense-in-depth,
          | incident response, and recovery planning.
         | 
         | Personally, I assume the hardware is already compromised and
          | plan for recovery accordingly, starting with the worst-case
          | scenario. Then, I ask myself "If this _thing_ isn't
          | compromised yet, how can I help it stay so?", starting probably
         | with the network access, through firmware, all the way to the
         | browser.
        
       | PaulHoule wrote:
       | Reminds me of the time I was watching a creepypasta horror movie
       | about some guy who gets strange phone calls and my phone rang.
       | 
       | I think this guy had gotten my phone number from my HN profile
       | and he thought I might be able to help him. He thought his
       | android phone was infected by malware and he knew who did it. I
       | told him the people who repair cell phones at the mall could do a
       | system reset on his phone.... Unless he was dealing with state-
       | level actors in which case it might be an advanced persistent
       | threat and it might be permanent.
        
       | jeroenhd wrote:
       | BIOS/Firmware: I just do, if I am compromised then I won't find
       | out anyway.
       | 
       | OS/app level: occasional AV scans, though I don't trust clamav as
       | much as I trust Windows antivirus.
       | 
       | I should really properly set up secure boot on my desktop to make
       | rootkits harder to install, but Linux and secure boot are just
       | too much of a kludge.
        
       | anonym29 wrote:
       | I assume it is, per Intel ME / AMD PSP's ability to read
       | everything - memory, CPU registers, disk, inspect all network
       | traffic, directly utilize onboard GbE for bidirectional
       | communication.
       | 
       | For adversaries below the level of the US intelligence agencies,
       | I run everything virtualized and compartmentalized with Qubes,
       | the installation image for which I verified the dev-provided
       | cryptographic signature matches. I try to rigorously avoid any
       | software operated by Google, Amazon, Microsoft, Apple, Facebook,
       | disable all JS by default in my LibreWolf browser, refuse to
        | connect directly to websites protected by cloudflare, audit source
       | code for almost everything I run in userland, etc etc etc.
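        | 
        | The image check above is gpg against the Qubes signing key; the
        | same habit generalizes to any download that publishes a digest.
        | A minimal hashlib sketch (the file path and expected digest are
        | placeholders you'd supply):

```python
import hashlib
import hmac


def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so a multi-GB ISO needn't fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            h.update(block)
    return h.hexdigest()


def matches_published_digest(path: str, expected_hex: str) -> bool:
    """Compare against a digest taken from a trusted, ideally signed, source."""
    return hmac.compare_digest(sha256_of(path), expected_hex.strip().lower())
```

        | A bare checksum only helps if the expected digest arrives over a
        | trusted channel; for Qubes that means the detached signature,
        | checked with gpg --verify against the master key.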
       | 
       | This is all for my personal machine. For work devices, I assume
       | they're pwned even worse and I do nothing but actual work on
       | them.
       | 
       | On the mobile side, GrapheneOS on a Pixel for my first phone, and
       | a linux phone with hardware killswitches for bt/wifi, cam/mic,
       | and baseband for my second phone.
       | 
       | All of this in addition to solid fundamentals like network
       | traffic monitoring, very restrictive firewall, offline encrypted
       | hardware password manager with no password reuse, etc.
        
         | friendlyHornet wrote:
          | > refuse to connect directly to websites protected by cloudflare
         | 
         | What do you do in case you want to use a website protected by
         | cloudflare?
        
           | Spooky23 wrote:
            | Since this is a ridiculous troll: generally speaking, the
            | only way to address this is to visit the site from a local
            | library, then immolate the computer with thermite and explode
            | the remainder with TNT.
        
           | anonym29 wrote:
           | There are many ways to do this, most utilizing some kind of
           | proxy-like architecture for all requests, or just to retrieve
           | cookies. My personal favorite for retrieving cookies is
           | FlareSolverr.
           | 
           | For strictly reading public webpages, public paywall bypass
           | tools and archive sites work pretty well.
        
             | friendlyHornet wrote:
             | Thank you!
        
         | hdivider wrote:
         | May I ask what necessitates this level of security? E.g. work,
         | or personal preference in terms of security and privacy?
        
       | elorant wrote:
       | I monitor all outgoing traffic.
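        | 
        | For anyone wanting to start doing the same on Linux, watching
        | established TCP connections via `ss -tn` goes a long way. A toy
        | parser (the sample output here is abbreviated and invented; in
        | practice you'd feed it the real command's stdout):

```python
# In practice: subprocess.run(["ss", "-tn"], capture_output=True, text=True).stdout
SAMPLE_SS_OUTPUT = """\
State  Recv-Q  Send-Q  Local Address:Port   Peer Address:Port
ESTAB  0       0       192.168.1.10:55432   142.250.80.46:443
ESTAB  0       0       192.168.1.10:41210   162.159.130.234:8443
"""


def outbound_peers(ss_output: str) -> list[tuple[str, int]]:
    """Return (remote_host, remote_port) for each established connection."""
    peers = []
    for line in ss_output.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) < 5 or fields[0] != "ESTAB":
            continue
        host, _, port = fields[4].rpartition(":")
        peers.append((host, int(port)))
    return peers
```

        | Alerting is then just diffing the result against the set of
        | peers you expect to see.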
        
       | graderjs wrote:
       | You can play with this in that situation. I assume all my cloud
        | and local data is compromised and just keep that in mind. But I also assume
       | there's layers of access... so not everyone who can access it,
       | has access to every part of it. One group maybe can access DNS
       | queries; one group can access cellphone metadata and SMS; one
       | group maybe can access unencrypted iCloud/Google/OneDrive data;
       | one group needs warrants to access and creates lies to
       | fraudulently obtain/fake-justify those; some other group doesn't
       | need warrants and just has access, either through agreement or
       | covert access.
       | 
       | Once Advanced Data Protection switches on globally "in early
       | 2023" I'll have another compartment. But I assume that someone
       | can access basically everything. You can have fun with it.
       | 
       | I also think what's happening on my devices is some of the least
       | interesting parts of life, so, yeah, there's that, too. :)
        
       | [deleted]
        
       | varelse wrote:
       | [dead]
        
       | gnfargbl wrote:
       | Here's a short, fairly practical guide that you might find
       | helpful: https://www.ncsc.gov.uk/files/Cyber-Essentials-
       | Requirements-.... It is aimed mostly at small businesses, but I
       | find a lot of the guidance to be pretty relevant to my personal
       | IT.
       | 
       | My even shorter (and incomplete) summary of the document would
       | be: configure your router and firewall; remove default passwords
       | and crapware from your devices; use a lock screen; don't run as
       | root; use a password manager and decent passwords; enable 2FA
       | everywhere you can; enable anti-malware if your OS has it built
        | in; don't run software from untrusted sources; patch regularly.
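        | 
        | On the "decent passwords" point, a diceware-style generator is a
        | few lines with the stdlib `secrets` module (the right choice for
        | security-sensitive randomness, unlike `random`). The word list
        | below is a tiny stand-in; a real list has thousands of words:

```python
import secrets

# Tiny stand-in vocabulary; a real diceware list has ~7776 words,
# giving about 12.9 bits of entropy per word instead of ~3.6 here.
WORDLIST = [
    "correct", "horse", "battery", "staple", "orbit", "velvet",
    "gravel", "monsoon", "lantern", "quartz", "ripple", "saddle",
]


def passphrase(n_words: int = 5, sep: str = "-") -> str:
    """Join n_words uniformly chosen words, picked with a CSPRNG."""
    return sep.join(secrets.choice(WORDLIST) for _ in range(n_words))
```

        | Length beats cleverness here; the entropy is entirely in how
        | many words you draw and how big the list is.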
       | 
       | There are also other controls that you can choose to impose on
       | yourself. For example, I require full-disk encryption, and I will
       | only use mobile devices which get regular updates. Would be
       | interested in hearing other things that HN'ers do to limit risk.
        
         | amelius wrote:
         | Do you lock your computer every time you leave your desk?
         | 
         | And do you always check for keylogger thumbdrives and such?
        
           | mr90210 wrote:
           | When I worked at an office, I used to, with a quick:
           | CMD+CTRL+Q
        
           | KineticLensman wrote:
           | > Do you lock your computer every time you leave your desk?
           | 
           | This was a corporate requirement where I used to work,
           | unofficially reinforced by the local jokers who would rotate
           | the screen and / or send prank messages if you didn't.
        
             | throwawayprank wrote:
             | > unofficially reinforced by the local jokers who would
             | rotate the screen and / or send prank messages if you
             | didn't.
             | 
             | Same here. The all time favorite is sending a resignation
             | notice to the person's manager (the manager usually gets a
             | fair warning first and plays along with it).
        
               | KineticLensman wrote:
               | > is sending a resignation notice to the person's manager
               | 
               | Or a message that says "I love you very much"
        
               | Marc_Bryan wrote:
               | Yep. I've sent a party mail and free lunch or snacks
               | sponsor to the team. Have got them a couple of times as
               | well. A lesson well learned...
        
             | NikolaNovak wrote:
             | I was a big proponent of hasselhoffing unlocked computers
              | (set up a wallpaper of David Hasselhoff in his Baywatch
              | days sprawling over the desktop :)
        
               | dehrmann wrote:
               | I heard this was big at Rackspace.
        
             | friendlyHornet wrote:
             | When I was an intern at a company, I forgot to lock my
             | screen once when I went to the toilet
             | 
             | My colleagues edited my .bashrc to echo "lock your screen
             | next time"
        
               | cozzyd wrote:
               | I like to alias emacs to vim or rm, depending on how
               | charitable I'm feeling.
               | 
               | (Kidding obviously, at least for the latter).
        
           | thallosaurus wrote:
            | Yeah, that happened with colleagues once, then never again.
            | I use a laptop with a docking station for work and take the
            | laptop home with me every time I leave, so I would have
            | noticed if this had happened.
           | 
           | When home, I always have to lock or my cat would
           | typeeeeeeeeawww
        
           | greggyb wrote:
           | Yes. Why wouldn't you?
        
             | paulcole wrote:
             | Because you assess the risk as being low and don't care
             | that much about low risk things.
        
               | AnIdiotOnTheNet wrote:
               | Sure, but it is also a very low effort thing with little
               | friction involved.
        
             | amelius wrote:
             | I have to physically go under my desk to see if my computer
             | has been tampered with.
        
           | Semaphor wrote:
           | For me: No, and no. But as that would require someone
           | breaking into my apartment, I don't worry too much.
        
             | MonkeyMalarky wrote:
             | I work from home and still lock my screen because I have
             | cats that will walk on my desk if I'm not looking. 4 paws
             | and an open vim or slack window are a dangerous
             | combination!
        
           | thr717272 wrote:
           | Lock my computer: Always[1][2].
           | 
           | Check for keylogger thumbdrives: I use a laptop so it would
           | be immediately obvious. But now that you say it I haven't
           | checked the charger USB-outlet on the back of my cabled
           | keyboard.
           | 
           | [1]: it has happened I have failed. Once a year or something.
           | 
           | [2]: I sometimes try to allow myself to go downstairs in my
            | own house to fetch a cup of coffee without locking when I am
           | alone, but I find it so stressful in practice I always lock
            | it. I don't need to, but it is a good habit. I'm
           | otherwise normal :-)
        
             | sargstuff wrote:
             | > [2]: .....
             | 
              | Ah, no mention of waterproofing the computer to protect
              | against a robotic dog innocently spilling morning
              | coffee/tea on the computer.
        
             | hotpotamus wrote:
             | I've worked in places where that once a year slip-up would
             | mean you sent an email offering to buy lunch for the team
             | or get your background changed to a David Hasselhoff pinup
             | picture from the 80's. I do feel weird locking my computer
             | when I'm alone though.
        
               | francis-io wrote:
                | One place I worked, it was common practice to post
                | something semi-embarrassing into a public slack channel
                | from the unlocked pc.
        
               | SargeDebian wrote:
               | Screenshot of the desktop, rotate that 180 degrees and
               | set as background. Hide all icons and the taskbar, then
               | rotate the whole screen 180 degrees. Maybe a bit of tape
               | underneath the optical mouse.
        
             | nickjj wrote:
             | What bugs me is when this is applied to remote workers in a
             | way that seems optimized for in-office environments.
             | 
             | For example IT enforces that your screen becomes locked
             | after 15 minutes of inactivity and also ties in your local
              | computer's user login password to your SSO login to access
             | everything. It's a contradiction around password best
             | practices. If you force people to input their password
             | multiple times a day then naturally people will gravitate
             | towards easier to type passwords.
             | 
             | If the idea is "but what if you go AFK in a public place
             | and forget to lock your screen?!?", that's not a valid
             | reason. If you were working in a coffee shop and went to
             | use the restroom for 4 minutes or turned your back for 2
             | minutes then your machine could be compromised (or even
             | worse stolen). It's extremely reckless to leave your gear
             | unattended in a public place.
             | 
             | It can really break morale to input your password and MFA
             | half a dozen times a day, especially when you're alone in a
             | locked apartment where the laptop hasn't left that location
             | in a year.
        
               | Terretta wrote:
               | TouchID or windows machines with Windows Hello touch
               | solves morale issue.
               | 
                | Then they don't type most of the time and the length of
                | 'memorable passwords' (like
                | _correct-horse4BATTERY!staple_) isn't a problem.
        
               | __turbobrew__ wrote:
               | Yea, my work uses touchid for day to day logins and it
               | works really well
        
               | amelius wrote:
               | I suppose you could complement that with a camera + AI
               | that recognizes when you leave your computer.
        
               | fortran77 wrote:
                | My Windows Surface unlocks with its depth camera. I
               | don't know if big corporations allow this sort of thing
               | on their laptops, but it's very handy for us in our small
               | consulting business.
        
               | AgentOrange1234 wrote:
               | Oh yes. Having my machine lock very fast when I wfh is
               | really annoying. Past that, with so many systems to log
               | into, I sometimes feel like all I do is authenticate and
               | authenticate all day long. SSO is probably saving a lot
               | of this, but at my megacorp it still sucks.
        
               | pwg wrote:
               | > What bugs me is when this is applied to remote workers
               | in a way that seems optimized for in-office environments.
               | 
               | > For example IT enforces that your screen becomes locked
               | after 15 minutes of inactivity
               | 
               | If your OS is MS-Win, try playing an audio file when you
               | don't want the auto-lock to go off. Provided IT's
               | "checkbox security" parameters [1] did not include
               | turning this off, MS-Win does not timeout lock the system
               | if an audio file is playing, which makes playback of an
               | audio file a way to prevent the timeout auto-lock from
               | happening. Note that this won't help with any 'presence'
               | indicators that go "idle" or "away" with no activity for
               | some time.
               | 
                | If this works, then you can create an audio file of
                | 'silence' with sox to play back when you don't want the
                | auto-lock to trigger:
                | 
                |     sox -n silence.wav trim 0 10:0.0
               | 
               | Creates a ten minute long wav of 'silence'. If you want
               | it smaller, compress the wav with lame into an mp3 or
               | fdkaac into an aac file. Then launch playback of the
               | silence file, and set windows media player to "loop" when
               | it reaches the end of the file.
               | 
               | [1] Much corporate/govt. IT "security" is "checkbox
               | security". It is the equivalent of IT having a
               | "compliance form" with a long list of "configured
               | settings" with check-boxes next to each, and so long as
               | they can go down the form and "check all the boxes" they
               | deem their setup "secure". Whether it is actually secure
               | is not important, just that it "checks all the boxes" on
               | the "compliance form".
        
               | nickjj wrote:
               | That's really clever. This is a company issued Macbook
               | (macOS is a requirement not a choice btw) but I'm
               | guessing there will be something similar that you could
               | do.
               | 
               | This puts you into a grey area though no? You could make
               | a case this is willingly trying to circumvent security
               | protocols which could be grounds for being fired.
        
               | Spooky23 wrote:
               | The issue is more other members of your household. Your
               | roommate, kids, spouse, etc. Policy and regulatory
               | requirements don't allow incidental disclosure to people
               | like that, and the company has no relationship with them.
               | 
               | I dealt with this as a policy issue recently. Controls
               | like aggressive screen lockouts are one of the few
               | options available to allow some categories of workers to
               | work outside of a company controlled premises.
               | 
               | The argument that you live alone etc is irrelevant as I
               | have no idea (and don't want to know) whether that's
               | true. I can tell you that people have done shockingly
               | dumb things with remote work and the company has to try
               | to control risk as best it can.
        
               | nickjj wrote:
               | > Controls like aggressive screen lockouts are one of the
               | few options available to allow some categories of workers
               | to work outside of a company controlled premises.
               | 
               | What does the policy really protect against?
               | 
               | If it's being locked out after 15 minutes of inactivity
               | because of roommates or kids it doesn't protect you
               | against anything in the grand scheme of things. For
               | example if I leave my office for lunch and you step in 10
               | minutes later then you have a solid 40-50 minutes to do
               | whatever damage you plan to do while I'm gone.
               | 
               | The only time it makes a difference is if it locks really
               | fast, such as 30 seconds but then using the computer
               | naturally would be ridiculous because you couldn't stop
               | touching the keyboard or mouse without being locked out.
               | 
                | Also, what if your roommate planted cameras in your
               | office that let them see exactly what keys you're
               | pressing on what screens without ever compromising the
               | machine itself? Now everything is compromised and they
                | have full rein to do whatever they intend to do.
               | 
               | > The argument that you live alone etc is irrelevant as I
               | have no idea (and don't want to know) whether that's true
               | 
               | This is the real problem. Everyone gets treated like an
               | equal criminal when in reality none of the measures taken
               | really do anything to provide the security they were
               | designed to do. It reminds me a lot of "for the children"
               | but applied to corporations for "compliance reasons".
               | 
               | I'd be more ok with the precautions if they worked.
        
               | Spooky23 wrote:
               | It reduces risk by minimizing disruption if you're
               | following other work rules. If we locked it in 30s,
               | people wouldn't be able to work.
               | 
               | Re: the "treat people like a criminal" take. A common
               | approach organizations are taking is employee
               | surveillance. I don't want to know that your girlfriend
               | has a conviction or that your kid sits next to you with
                | sensitive data on your screen, etc. And I don't want to
               | force you into an office.
               | 
               | There's a difference between "security" and risk
               | management. If the discussion was pure security with
               | low/no risk tolerance, you'd be working on a locked down
               | terminal server in an office.
        
               | bentcorner wrote:
               | > _For example IT enforces that your screen becomes
               | locked after 15 minutes of inactivity_
               | 
               | If you're on windows, there's a powertoy[1] called
               | "Awake" that can keep your screen on indefinitely despite
               | IT rules. I liberally use this on machines I'm remotely
               | connected to because there's 0 reason why a remote
               | session should lock if I'm active on the computer looking
               | at a different window.
               | 
               | [1] https://learn.microsoft.com/en-us/windows/powertoys/
        
           | k3liutZu wrote:
           | Yes (I don't bother when I'm working from home)
           | 
            | I haven't used a USB stick in 10+ years.
        
             | dorfsmay wrote:
             | How do you install the OS?
        
           | Terretta wrote:
           | On MacOS, poke the TouchID key to Lock Screen. When back,
           | poke it again to unlock. Make this rote.
           | 
           | On Windows, Win-L to lock.
        
         | ttctciyf wrote:
         | > enable anti-malware if your OS has it . . . Would be
         | interested in hearing other things
         | 
         | Given the most common network activity is web browsing, it
         | seems like enabling protections in the browser is becoming
         | mandatory for the security-conscious.
         | 
         | For me this amounts to enabling NoScript and uBlock[edit: [0]]
         | plugins in Firefox, desktop and mobile versions, and disabling
         | or locking down various "features".
         | 
         | An additional step I take is to use several browser profiles
         | for different purposes (mail, banking, shopping, default, to
         | name four) so that Firefox always asks me to pick a profile on
         | startup. As well as reducing the possibility of XSS, this lets
         | me relax the settings for some profiles where I restrict myself
         | to a small number of trusted sites. (This may well be
         | overkill!)
         | 
          | 0: uBlock Origin, that is, as per instructive comment below[1]
         | 
         | 1: https://news.ycombinator.com/item?id=34389942
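          | 
          | For anyone replicating the multi-profile setup: Firefox keeps
          | the list in profiles.ini under its config directory, and a
          | specific profile can be launched with `firefox -P <name>`
          | (add -no-remote to run two profiles side by side). A sketch
          | that lists profile names from sample ini text (paths invented):

```python
import configparser

# Shaped like ~/.mozilla/firefox/profiles.ini on Linux; paths are invented
SAMPLE_PROFILES_INI = """\
[Profile0]
Name=default
IsRelative=1
Path=abcd1234.default

[Profile1]
Name=banking
IsRelative=1
Path=efgh5678.banking
"""


def profile_names(ini_text: str) -> list[str]:
    """Parse profiles.ini text and return the declared profile names."""
    cp = configparser.ConfigParser()
    cp.read_string(ini_text)
    return [cp[s]["Name"] for s in cp.sections() if s.startswith("Profile")]
```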
        
           | soheil wrote:
            | With respect, the question is about the machine being
            | compromised; I'm not sure how XSS, phishing, etc. attacks are
            | relevant.
        
           | wintermutestwin wrote:
           | >Given the most common network activity is web browsing, it
           | seems like enabling protections in the browser is becoming
           | mandatory for the security-conscious.
           | 
           | What I am looking for is an easy way to run something like a
           | LiveCD OS in a VM for browsing. The problem is that I have
           | never found a decent LiveCD that has Firefox with all of the
           | mandatory extensions (uBlock Origin, etc...). I guess I could
           | customize my own LiveCD, but last I looked into it, doing so
           | seemed complex and too time consuming to figure out.
           | 
           | Posting this in the hopes of being steered towards a simple
           | solution or to inspire someone to create one.
        
             | Godel_unicode wrote:
              | Windows AppGuard is close to this, although it's a Hyper-V
              | silo, not a full VM. Edge can open links in AppGuard (which
             | is what this technology is called) right from the context
             | menu, super convenient.
             | 
             | https://learn.microsoft.com/en-us/deployedge/microsoft-
             | edge-...
        
               | wintermutestwin wrote:
               | That sounds interesting and I like that Edge is one of
               | the few browsers that has vertical tabs built in. The
               | thing is that I don't trust MS and assume that they are
               | thieving my data in any way they can...
        
             | ParetoOptimal wrote:
             | > I guess I could customize my own LiveCD, but last I
             | looked into it, doing so seemed complex and too time
             | consuming to figure out.
             | 
             | Here's a 5m solution whose starting point might be
             | acceptable:
             | 
             | - install Nix
             | 
             | - Follow first two steps at
             | https://nixos.wiki/wiki/Creating_a_NixOS_live_CD to
             | generate iso
        
               | wintermutestwin wrote:
               | That does look simple! Are the steps:
               | 
               | 1. Install Nix in a VM or on a clean HD
               | 
               | 2. Customize Firefox to my liking
               | 
               | 3. follow the two steps you highlight
               | 
               | 4. Load my created iso in a VM
               | 
               | Right?
        
               | ParetoOptimal wrote:
               | > Install Nix in a VM or on a clean HD
               | 
               | You should be safe to avoid this unless your threat model
               | includes a trusting trust type exploit on Nix generating
               | the ISO.
               | 
               | Also, just realized it'll be a little more complex
               | because you'll want to use home-manager to install
               | Firefox plugins and do about:config configuration.
               | 
               | Here are some examples of that in various contexts:
               | 
               | https://github.com/search?q=language%3Anix+programs.firef
               | ox+...
        
           | pixxel wrote:
           | I've no doubt you are referring to uBlock Origin but it's
           | real important to label it as such so the unaware don't
           | install uBlock.
        
             | ttctciyf wrote:
             | Thanks, yes, updated.
        
             | panarky wrote:
             | _> > enabling protections in the browser is becoming
             | mandatory
             | 
              | > For me this amounts to enabling NoScript and uBlock_
             | 
             | Hard to see how installing third-party extensions that can
             | view and change data for every site makes the browser more
             | protected.
        
               | pbhjpbhj wrote:
                | Just like a third-party security guard can't make a
                | building more secure. /s
               | 
               | It's about who/what you trust.
        
           | paulryanrogers wrote:
           | Firefox multi-account containers can be a more convenient way
           | to isolate things, especially now that the container can be
           | limited to only the allowed sites.
           | 
           | I also made an app and extensions to help me use multiple
           | browsers, one per site. (Browsr Router)
        
             | ttctciyf wrote:
              | Regarding the containers, I started my profile
             | compartmentalization practice before they arrived, so I
             | never explored them. But I'm curious as to whether each
             | container comes with a complete set of browser permissions,
             | like profiles do, which would enable you (for example) to
             | have a location-enabled container specifically for google
             | maps (in reference to[0]) while disabling it on the
             | container used for search, as you could quite easily do
             | with profile compartmentalization? (Which, tbf, is a
             | privacy rather than a security concern.)
             | 
             | 0: https://www.simpleanalytics.com/en/blog/google-changed-
             | googl...
        
               | paulryanrogers wrote:
               | Add-ons cross over container boundaries, so I suspect
               | other browser permissions do too.
               | 
               | Your solution is more bulletproof. For similar
               | reasons I am hoping to add better profile support to
               | Browsr Router.
        
         | ghostpepper wrote:
         | Since you mention routers, I'm curious what brand you use.
         | 
         | Since Ubiquiti started down the cloud-first path I've
         | switched to Mikrotik. While they do seem to have regular
         | CVEs (which is good, I think?), they also don't seem to
         | have a public bug bounty program.
        
           | [deleted]
        
           | nickjj wrote:
           | > Since Ubiquiti started down the cloud-first path I've
           | switched to Mikrotik
           | 
           | I was thinking about getting a Ubiquiti router because
           | it has good support for setting up wired VLANs without
           | needing to go down the path of finding a solid OpenWrt
           | router.
           | 
           | Is it really true that you can't access the router's
           | dashboard and configure things without associating an online
           | account to your router?
        
             | Spooky23 wrote:
             | That's not true. You need to use the GUI though. I
             | think some folks preferred the CLI in the past.
        
             | shamiln wrote:
             | My UDM and UDR aren't connected to UI's cloud services, so
             | it's not really cloud first.
        
             | ghostpepper wrote:
             | It's definitely still possible to use without cloud now,
             | what I meant by "started down the path" is that it seems
             | like the direction of the company is not aligned with what
             | I want.
             | 
             | I was in the market for more hardware and I had to decide
             | whether to increase my investment in Ubiquiti or make a
             | change, and I chose the latter.
        
             | psutor wrote:
             | They did add back non-cloud support a while ago after the
             | cloud forcing didn't go over well, but it is a second-class
             | experience.
             | 
             | The day after I set my UDM Pro up as non-cloud, it
             | corrupted the login somehow and I had to factory reset it
             | and redo all settings (as it hadn't been running long
             | enough to run any automatic backups first). I capitulated
             | and just set it up again as a cloud login to avoid having
             | the same fiasco at some random point in the future again.
             | 
             | I immediately regretted going with the UDM to save
             | (quite) a few bucks over an OPNsense/pfSense appliance
             | to fit my requirement of handling a full 10G of WAN.
             | 
             | I will admit having the app to monitor usage remotely
             | is a somewhat neat trick. But I find the firewall
             | configuration in the GUI obtuse enough to be next to
             | unusable.
        
               | ThePowerOfFuet wrote:
               | > I immediately regretted going with the UDM to save
               | (quite) a few bucks over an OPNsense/pfSense
               | appliance to fit my requirement of handling a full
               | 10G of WAN.
               | 
               | As someone who bit the bullet and sold all their UniFi
               | gear on eBay and switched to OPNsense and Ruckus, I'll
               | tell you that it's been absolutely worth it.
        
             | Terretta wrote:
             | No
        
             | foresto wrote:
             | > Is it really true that you can't access the router's
             | dashboard and configure things without associating an
             | online account to your router?
             | 
             | That might be true for their UniFi line, but EdgeMAX
             | devices work fine without an online account. EdgeRouters
             | run a fork of Vyatta, with configuration files and command
             | line operations that are fairly easy to work with. They
             | also have a web UI for common configurations, though I have
             | little experience with it.
             | 
             | A bit of warning: EdgeMAX support has been declining in
             | recent years. Security updates are still published, but
             | bugs don't seem to be addressed as quickly or consistently
             | as they were in the past, and some forum users have
             | expressed doubts about what kind of support will exist in
             | the future. That said, my equipment is still doing fine.
             | 
             | I think OpenWRT has been ported to at least one Ubiquiti
             | router, so that might be a good fallback if EdgeOS support
             | ever ends. I wonder if anyone here has tried it.
        
             | rolph wrote:
             | https://www.lifewire.com/ubiquiti-promised-premium-secure-
             | ro... [2021]
        
           | ahelwer wrote:
           | I bought a Turris Omnia used on ebay for about a hundred
           | bucks. Very powerful, open source hardware & software, good
           | community, adblock server built in if you're too lazy to set
           | up a pihole, enough compute power to host lots of neat
           | services on it.
        
         | Silhouette wrote:
         | _Would be interested in hearing other things that HN 'ers do to
         | limit risk._
         | 
         | Mostly the same basics as you. The document you linked is a
         | good starting point.
         | 
         | I'd add extensive use of virtualisation and sandboxing. I run
         | less and less software as native, installed applications on any
         | device I use personally or professionally. Instead it tends to
         | run inside things like VMs or Docker containers or cloud-hosted
         | platforms now.
         | 
         | My basic policy is to try and make every device and installed
         | application expendable/replaceable in case anything breaks or
         | gets compromised and then focus on the data. I apply the
         | principle of least privilege for access to any sensitive data,
         | try to keep all important data in standardised formats and
         | avoid lock-in effects as much as reasonably possible, and keep
         | good back-ups under my own control with the ability to
         | redeploy/restore anything quickly and as automatically as
         | possible.
        
           | BLKNSLVR wrote:
           | 100% with you on the entirety of your last paragraph.
           | 
           | I generally use a device as an access mechanism; a configured
           | window into the data. This configuration is the only thing
           | lost when a device is lost. No data, no function, no service.
           | Configure the replacement device and continue as you were.
           | 
           | Virtualisation and Docker-isation makes backups and restores
           | almost enjoyable.
        
             | amelius wrote:
             | How do you deal with leaking of data?
        
               | BLKNSLVR wrote:
               | Not sure I fully understand the context of the question,
               | but for a device to gain access it needs to be allowed
               | into the local network by being in wifi range or
               | authorised on the VPN, plus have username/password access
               | to the service/data.
               | 
               | If, however, there's a backdoor I don't know about, then
               | I'm pwned.
        
       | throwaway2214 wrote:
       | i used to care a great deal about this, but recently i gave up
       | 
       | why do you even care? most of your files are either on apple's
       | computers or google's computers
        
       | transpute wrote:
       | Some OS mitigations:
       | 
       |     All: patch, encrypt, backup, track power, isolate workflow by device/VM
       |     Network: router with OSS firmware, workflow segmentation, reduce wireless
       |     iOS: (>A12 SoC) Lockdown mode, Brave w/o JS, daily reboot
       |     iOS: periodic reinstall from DFU mode, Apple Configurator / MDM policy
       |     macOS: hardening script based on workflow, outbound firewall
       |     Windows: Secured Core device + SystemGuard + App Guard VM isolation
       |     Windows: HP device + SureStart (f/w check) + SureClick (browser VMs)
       |     Linux: vPro device + QubesOS with Anti-Evil-Maid
       |     Linux: generic device + non-persistent LiveCD OS image
        
         | landemva wrote:
         | For MS Windows, may also want to enable core isolation / memory
         | integrity. Sadly, vendors typically don't enable it from the
         | factory.
         | 
         | https://www.infotechnotes.com/2021/07/microsoft-windows-core...
        
       | PaulAJ wrote:
       | I keep a few PS in bitcoin stashed on it. If it ever disappears,
       | I'll know.
        
       | andix wrote:
       | Hopes and prayers?
       | 
       | I think it's completely impossible to make sure your machine
       | is not compromised. You can only make a best effort to keep
       | it clean.
       | 
       | Try to use 2FA as much as possible. And try to shield the
       | second factor as well as possible from any connection to your
       | other devices.
        
         | landemva wrote:
         | USA banks seem to have found the 2FA powerpoint
         | presentation and are forcing accounts to use SMS 2FA, with
         | no ability to use something like an authenticator app.
         | Nothing about SMS is secure, so their IT is taking a step
         | backward.
        
           | andix wrote:
           | SMS is still better than no 2FA, but yes, it's not
           | secure at all.
           | 
           | What really bugs me is that some systems rely only on
           | the second factor, which replaces the password
           | completely. Some even did that with SMS: you put in your
           | user name and then the SMS code. That's really bad. A
           | lot of services also disguise this method in the "I
           | forgot my password" function, where you can reset the
           | password with just an SMS code.
        
       | IAmNotAnAnt wrote:
       | I am a distrohopper and don't use my phone for calling and
       | some messaging apps.
        
       | jl6 wrote:
       | I don't have ultimate trust in any software or hardware, but I
       | get to "good enough" by deciding which providers I trust:
       | 
       | * Software: Canonical, Google, Microsoft, Valve, Oracle, Dropbox.
       | I install software from their official repos and keep it up to
       | date. Anything 3rd-party/unofficial/experimental/GitHub goes in a
       | VM.
       | 
       | * Hardware: I built my main PC from mainstream commodity
       | components. I have no way of knowing if there are secret
       | backdoors but I consider it unlikely.
       | 
       | I use a password manager, I enable 2FA, I turn off things I don't
       | use, and generally have a low-risk hygienic approach to
       | computing.
       | 
       | I'm also privileged enough to not be a "person of interest" so
       | don't feel the need to take any extraordinary precautions.
       | 
       | Yes, I'm aware of VM escapes. Yes, I've read Reflections on
       | Trusting Trust. I choose to trust regardless because life's too
       | short for paranoia. As Frank Drebin said:
       | 
       | "You take a chance getting up in the morning, crossing the
       | street, or sticking your face in a fan."
        
         | rhn_mk1 wrote:
         | What about publicly known backdoors in your hardware?
         | 
         | https://www.techrepublic.com/article/is-the-intel-management...
         | 
         | There is hardware that doesn't contain those at least, but it
         | doesn't break power records.
        
           | jl6 wrote:
           | I don't consider it practical to take any countermeasures to
           | the possibility of this threat. I think there's a ~10% chance
           | it's a backdoor, and if it is, there's a 98% chance it would
           | be at the behest of a branch of the US government, and I'm
           | not currently an adversary of theirs.
           | 
           | (This is _not_ an argument for mass surveillance, it's
           | just a practical assessment of the risk.)
        
             | rhn_mk1 wrote:
             | Just to be clear, the question of whether it's a
             | backdoor or not doesn't matter for whether bad actors
             | use it. The capabilities are rather well known. Its
             | vulnerabilities aren't, but vulnerabilities do not a
             | backdoor make.
        
         | hutzlibu wrote:
         | "which providers I trust:
         | 
         | * Software: Google, Microsoft"
         | 
         | I trust that Google and Microsoft won't hack into my bank
         | account and steal money, even though they could, but otherwise
         | I assume they collect anything they want and can.
        
           | alyandon wrote:
           | I caught Windows Defender "automatic sample submission"
           | silently uploading places.sqlite out of my Firefox directory
           | despite the assurance in the control panel that "We'll prompt
           | you if the file we need is likely to contain personal
           | information".
           | 
           | So now I disable automatic sample submission via group policy
           | but Microsoft definitely can and will access files that they
           | really have no business accessing.
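For anyone wanting to replicate alyandon's fix without the group-policy editor: to my knowledge the setting maps to the registry value below, though the exact value meanings are worth verifying against Microsoft's current documentation before relying on them.

```reg
Windows Registry Editor Version 5.00

; Windows Defender "automatic sample submission" policy.
; To my knowledge: 0 = always prompt, 1 = send safe samples,
; 2 = never send, 3 = send all samples.
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows Defender\Spynet]
"SubmitSamplesConsent"=dword:00000002
```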
        
           | rejectfinite wrote:
           | Yes, but the topic is security, not privacy.
        
           | ghostpepper wrote:
           | Security and privacy are two separate things and not always
           | aligned.
        
           | jl6 wrote:
           | I assume they profile me, but I _don't_ assume they
           | steal my files.
        
             | hutzlibu wrote:
             | They (hopefully) won't steal your IP or personal
             | pictures, but there is so much telemetry and other
             | phoning home going on, without transparency or any
             | option to really turn it off.
        
       | Jabrov wrote:
       | That's the neat thing: I don't!
        
       | ihusasmiiu wrote:
       | I always assume my personal machine is compromised.
        
       | [deleted]
        
       | rmkrmk wrote:
       | Are there any somewhat easy-to-use solutions to isolate a
       | development environment? Preventing or at least decreasing the
       | damage malicious packages could do? Like deleting files or
       | uploading a private ssh key/keychain to a 3rd party server?
       | 
       | I was looking into things like GitHub Codespaces, I believe
       | they're isolated per repository and integrated into VS Code, but
       | I'd like something I could run on my machine or a server of mine.
        
         | jart wrote:
         | Faraday cage? It's a turnkey solution.
        
         | jacooper wrote:
         | Docker containers?
        
           | rmkrmk wrote:
           | Just found out that there's an extension to use Docker/Podman
           | within VSCode which works on a local or remote machine.
           | 
           | https://code.visualstudio.com/docs/devcontainers/containers
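For what it's worth, the per-repository isolation can be tried locally with a very small config. A minimal sketch of a `.devcontainer/devcontainer.json` (the image is just one of Microsoft's stock base images; by default the container is handed only the workspace folder, not the rest of your home directory):

```json
// .devcontainer/devcontainer.json (JSONC, so comments are allowed)
{
  "name": "isolated-dev",
  // Stock base image; any image with your toolchain works.
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  // No extra host mounts: ~/.ssh and the rest of $HOME stay
  // out of reach of anything running inside the container.
  "mounts": [],
  "remoteUser": "vscode"
}
```

One caveat: VS Code can also share your local ssh-agent with dev containers as a convenience, which works against this kind of isolation, so for untrusted code it may be worth leaving that off.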
        
       | sweetjuly wrote:
       | I run the latest betas of macOS and iOS, which means I get
       | exploit-breaking changes as soon as possible. I keep all the
       | security mitigations on my Mac enabled (SIP, secure boot,
       | etc.), which helps make a variety of exploit flows and
       | persistent compromise difficult.
       | 
       | But random malicious code in user space? Well, I really just hope
       | for the best :)
        
         | saagarjha wrote:
         | Note that betas sometimes fall behind public releases when it
         | comes to security patches.
        
           | sweetjuly wrote:
           | Ugh, I suppose you're right. They do seem to skip the beta
           | pipeline for ITWs. I wonder if this has changed now that
           | they're leaning on "rapid security response" patches for
           | critical vulnerabilities, which betas do get, and so
           | hopefully there's parity now. I should dig into this, pretty
           | sure there were a few RSRs recently and it'd be neat to
           | verify that a) betas got the same bugs patched and b) the RSR
           | went out to beta and release at the same time.
        
       | AnIdiotOnTheNet wrote:
       | The same way you can't be sure that when you drive to work today
       | you are not going to die, you can't be sure that your machine
       | isn't compromised somewhere. That's just how reality is: safety
       | is an illusion.
       | 
       | Like with driving, make an effort to lower the probability to
       | wherever makes you comfortable, then just accept that there's a
       | non-zero chance it wasn't good enough.
        
       | adg001 wrote:
       | The reality is that you cannot trust that your machines are not
       | compromised.
       | 
       | The only option we are left with is to operate under the
       | assumption that, indeed, our machines are permanently
       | compromised.
        
       | jrm4 wrote:
       | My personal machine?
       | 
       | I use Linux with lots of distributed sync and backups with
       | e.g. Syncthing (plus copies of stuff NOT on Syncthing)
       | 
       | Now, I'm aware many reading this are going to nerd out hard
       | (like how the top comment now is "Android/Chromium", which
       | I'm skeptical of but haven't done much homework on - maybe?)
       | 
       | But because you said "personal machine"-- I'm thinking about my
       | own threat model and my _years_ of experience.
       | 
       | Thus, I'm not going to worry much about, say, some obscure
       | Linux-Stuxnet-thing, which not only is overwhelmingly
       | unlikely, but is also something I can't do much about beyond
       | the solutions I mentioned above.
       | 
       | More likely, I can avoid stupid Windows and stupid Mac, and
       | often stupid Web messes, by doing what I'm doing now.
        
       | khana wrote:
       | [dead]
        
       | adriancr wrote:
       | Just some generic things that should help avoid or clean up after
       | a compromise.
       | 
       | - clean reinstall every month, just pick a new flavor of Linux to
       | try out. (also helps ensure I have proper backups and scripts for
       | setting up environment)
       | 
       | - Dev work I usually do in docker containers, easy to set up/nuke
       | environments.
       | 
       | - Open source router with open source bios (apu2), firewall on
       | it, usually reinstall once in a while.
       | 
       | - Spin up VMs via scripts for anything else. (games - windows VM
       | with passthrough GPU for example)
       | 
       | - automatic updates everywhere.
        
         | 867-5309 wrote:
         | >Spin up VMs via scripts for .. games
         | 
         | this is not sustainable. you do this once and then pray nothing
         | breaks!
        
           | adriancr wrote:
           | I just have a clone of a clean windows VM.
           | 
           | If something breaks or I get bored, I nuke the active
           | one and start a clone, update it and make another
           | backup, then reinstall games again.
           | 
           | On the other hand, gpu pass-through breaks once in a while
           | and is annoying to fix.
        
             | 867-5309 wrote:
             | so more like an OOBE snapshot?
             | 
             | would be annoying to initiate and wait for all the windows
             | updates, drivers and game installations every time
             | 
             | also runs the risk of reusing the hwid (which e.g. Epic and
             | Ubisoft are cottoning on to and permabanning)
             | 
             | gpu passthrough is a menstrual headache hence the set,
             | forget, pray attitude
        
       | nvln wrote:
       | I used this a while back when I was worried something weird was
       | going on: https://github.com/mvt-project/mvt. (Nothing was going
       | on).
        
       | alkonaut wrote:
       | I don't. Not sure why I'd need to. I find the risk that it
       | is compromised acceptably low. But more importantly, I don't
       | have any reason to fear that it is.
       | 
       | So I trust that regular caution and OS security reduces the risk
       | to an acceptable level but mostly I don't fear anyone reading or
       | destroying my data because I have backups and it's not sensitive.
       | Sure it would be scary from an integrity perspective, but not in
       | any other sense. Even constant access to my machine and
       | everything I do wouldn't be a big risk.
       | 
       | So if I'm affected by a ransom Trojan (most likely scenario), I'm
       | happy to just wipe my machine.
        
       | 2000UltraDeluxe wrote:
       | First, take a deep breath.
       | 
       | Second, unless you're in a situation where you've pissed
       | off/threatened some rather large actors, you should be fine
       | assuming you follow best practices for backup, software, update
       | and password management and you avoid using things like cheap IoT
       | devices to connect to your cloud services.
       | 
       | Third, when disaster strikes, keep calm, rely on backups, change
       | affected passwords and notify others who might get affected.
        
       | fattybob wrote:
       | I've been fairly comfortable since I moved to Macs - maybe I
       | shouldn't be so comfortable, but I do mind what I click and
       | open
        
       | intrasight wrote:
       | Perhaps it's because I have a cold right now, but I
       | interpreted "personal machine" to mean my body. And what is
       | meant by "trust"? The word is really synonymous with
       | "faith". That's why the correct phrase is "trust, but
       | verify".
       | 
       | I understand that my "personal machine" - my body - is always
       | compromised. I also have faith that no heinous actors are likely
       | to try to compromise my body. But that is only because I am a
       | nobody and have the good fortune to live in a safe place.
       | 
       | As for computers, I think the same logic applies. I have faith
       | that no nefarious actors are striving to compromise my own
       | machine specifically. But for many high-value targets, this would
       | be a bad assumption. Witness the crypto thefts that have occurred
       | by hacking individual's computers.
       | 
       | I am no expert in counter-cyber-espionage, but my
       | understanding is that it boils down to a) reducing attack
       | surface, b) using trusted hardware, and c) using ephemeral
       | "machines".
        
       | giantg2 wrote:
       | "How do you trust that your personal machine is not compromised?"
       | 
       | You don't. They are all likely "compromised" to some extent. The
       | vast majority likely have asymptomatic/latent state-sponsored
       | vulnerabilities, if not on the machine itself, then in the
       | network infrastructure it uses. For the most part, people might
       | not consider them "malicious third parties".
        
       | fattybob wrote:
       | Hmm, actually, my aging Mac always asks me to install
       | something whenever I connect my newer iPhone - I don't like
       | that at all, it's not what I'd expect from an Apple device.
       | I'm coming to realise that Apple devices really aren't what
       | they used to be - quite sad
        
       | QuiEgo wrote:
       | The real question here: does it even matter?
       | 
       | As far as I know I did everything right, and someone called my
       | bank with info we both believe they got from stealing from my
       | paper mail and got access because they convinced some human at a
       | bank's call center they were me.
       | 
       | Don't make it easy for people (rng passwords + password
       | manager, 2FA, don't run as su, whole-disk encryption, don't
       | leave your computer unlocked, don't log into your bank on
       | rando computers you don't control, don't use untrusted
       | wifi). However, assume you already are compromised and will
       | have to deal with it some day.
       | 
       | Once you think that way, you don't need to stress that much
       | about getting the perfect hardware solution or being super
       | paranoid - buy a device you like, and enjoy your digital
       | life. Stuff happens sometimes and if it does you can deal
       | with it ¯\_(ツ)_/¯.
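On the "rng passwords" point: even without a password manager at hand, a throwaway random password is one shell pipeline away (the length and character set below are arbitrary choices):

```shell
# Read random bytes, keep only characters from the allowed set,
# and stop after 24 of them. LC_ALL=C keeps tr byte-oriented
# regardless of locale.
PW=$(LC_ALL=C tr -dc 'A-Za-z0-9!@#%^' < /dev/urandom | head -c 24)
echo "$PW"
```

A password manager's generator is still preferable day to day, since it also stores the result.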
        
       | epolanski wrote:
       | By default I think I am, thus I minimize the amount of sensitive
       | info I put into any device.
        
       | crims0n wrote:
       | Keep it air gapped, only way to be sure!
       | 
       | Only half kidding, unfortunately.
        
         | andromaton wrote:
         | You can get data out of an air gapped machine several published
         | ways (ultrasound, hard drive light, monitor flicker, emf, etc)
        
           | fIREpOK wrote:
           | Requires proximity and someone very determined though...
        
       | NikolaNovak wrote:
       | Great question. I don't anymore. Decades ago when I had a 286 and
       | knew what each file did and what all the software was, and
       | threats were limited and crude, I had good confidence of
       | controlling my machine. Today, when my laptop has millions
       | of files and each website - even Hacker News - could inject
       | something malicious, and my surface is so broad (browsers,
       | applications, extensions, libraries, everything), and
       | virtually anything I do involves network connections... I
       | just don't have the confidence.
       | 
       | FWIW, I try to segregate my machines for different
       | categories of behaviour - this laptop is for work, this one
       | is for photos and personal documents, this one is for porn,
       | this one is if I want to try something. But even so, my
       | trust in e.g. software VLANs on my router and access
       | controls on my NAS is limited in this day and age.
       | 
       | I feel today it's not about striving for zero risk (for
       | 99.99% of people), but picking the ratio of overhead and
       | risk you're ok with. And backups. (Bonus question - how do
       | you make backups safe in the age of encrypting ransomware?)
        
         | hdjjhhvvhga wrote:
         | > (bonus question - how to make backups safe in age of
         | encrypting ransomware).
         | 
         | This is actually a solved problem, with many solutions. In
         | a nutshell, you need a system with enough space to retain
         | enough copies without overwriting the ones that are still
         | fresh. It also must not be controllable by the host that
         | you back up, but this is kind of obvious.
        
         | fsflover wrote:
         | > Today, when my laptop has millions of files and each website
         | - even hacker news - could inject something malicious and my
         | surface is so broad (browsers applications extensions libraries
         | everything) and virtually anything I do involves network
         | connections... I just don't have the confidence.
         | 
         | It doesn't matter how many files your computer has and how
         | many millions of lines of code it runs. There is a concept
         | of _Trusted Computing Base (TCB)_, which is the part of
         | the code that you have to trust.
         | 
         | In Qubes OS it's only about a hundred thousand lines, and
         | doesn't include any browser. The key is security through
         | isolation.
         | 
         | You run your browser and network in virtual machines and assume
         | that they are compromised. You keep your sensitive files in an
         | offline VM.
        
         | jl6 wrote:
         | On the backup question, this is one reason why I have a set of
         | backups that are physically disconnected and _not_ automated.
        
           | BLKNSLVR wrote:
           | I have a backup NAS that's normally powered off, but it's
           | scheduled to turn on, perform backup, shut down.
           | 
           | It doesn't wake on LAN, and there should be no way of
           | knowing it exists outside of checking the static DHCP
           | address reservations - and now that I mention it, maybe
           | I should remove it from there too.
           | 
           | This minimises the size of the window, and network-snoopable
           | information, required to compromise this set of backups.
        
             | [deleted]
        
             | jl6 wrote:
             | If your main computer has had all its files encrypted by
             | ransomware, will the backup NAS know not to replace the
             | good backup with a bad one? i.e. hopefully it's not doing
             | something like `rsync --delete`.
        
               | 2000UltraDeluxe wrote:
               | Not the GP but I use ZFS snapshots for such things.
               | Should the files be overwritten or deleted I can always
               | fetch an older copy from snapshots.
               | 
               | The trick is to make sure nothing has access to the
               | backup storage, which is obviously easier said than done.
        
               | fbdab103 wrote:
               | Restic is my current backup solution of choice, and
               | it takes snapshots by default. If you took a backup
               | on Monday and got ransomwared on Tuesday, your
               | backup volume would double in size (file
               | de-duplication is going to fail spectacularly), but
               | the Monday backup will be unimpacted until a prune
               | operation is run. The suggested prune workflow is to
               | maintain fairly staggered snapshots, e.g. 1x from
               | six months ago, 1x from three months ago, 3x from
               | last month - whatever makes you comfortable. That
               | should give a pretty comfortable margin on retaining
               | your files.
        
             | NikolaNovak wrote:
             | I think, perhaps ignorantly, that this may prevent
             | some human being or intelligent agent specifically
             | targeting your NAS.
             | 
             | I don't think it would help against situations where
             | your primary system has been getting encrypted for a
             | while, and thus your backups eventually get
             | overwritten with bad stuff.
        
               | BLKNSLVR wrote:
               | The data being backed up is in tiers of importance or
               | 'frequency of change', and based on this the backups are
               | staggered, some daily, some weekly.
               | 
               | I wouldn't often go a full week without checking some
               | file or other, so I think I'd know pretty swiftly if I
               | got infected with an encrypting ransomware virus -
               | hopefully quickly enough to minimise damage.
               | 
               | I also do off-site backups on occasion, so I could roll
               | back to the last full set of off-site backups - so long
               | as I remember where/when the most recent one is.
               | 
               | My main adversary I'm protecting against is hardware
               | failure, however. I _think_ I run a pretty tight ship in
                | restricting access to data that I'd rather not lose.
               | 
               | Maybe some hubris payback will come visit one day
               | though...
        
               | fbdab103 wrote:
               | >The data being backed up is in tiers of importance or
               | 'frequency of change', and based on this the backups are
               | staggered, some daily, some weekly.
               | 
               | This right here is the key to backups: triage your data
               | to understand what is truly important. I roughly put
               | things into three buckets
               | 
               | - Priority 1 - potentially disastrous if lost.
               | Financials, taxes, legal documents, and password vaults.
               | The nice thing about most of these documents is that they
               | are immutable and likely append-only (eg you only have
               | one set of 2020 taxes). For most people, this amount of
               | data should be well under 1GB and require only sporadic
               | backups. Which means you can purchase a bolus of $10
               | thumb drives, encrypt the collection, and leave them
               | everywhere. Mail an annual copy to mom, leave one in your
                | bag, at the office - wherever.
               | 
               | - Priority 2 - anything you created which does not fall
               | into Priority 1. Home pictures, videos, your 1000 half-
               | baked programming projects, etc. Potentially a much
               | larger collection for which a real backup system becomes
               | necessary.
               | 
               | - Priority 3 - everything else which is _theoretically_
               | replaceable. The archive of music you  "acquired",
               | backups of youtube videos, personally ripped DVD
               | collection, etc
        
               | Nashooo wrote:
                | Would you mind sharing a bit more about your setup? I
               | would want to set it up like this as well.
        
               | conception wrote:
               | Replication isn't backup though?
        
               | NikolaNovak wrote:
               | I would agree, but I feel the GP specifically talked
               | about "Backup NAS", which is where my question pertains.
               | 
               | I do have a set of hard drives I exchange with my friend
               | as "off site" backup. Neither my cloud service nor NAS
               | keep copies/snapshots of previous versions of files; and
                | I have too much stuff to create Blu-ray let alone DVD
               | one-off backups :0/
        
               | Izkata wrote:
               | > and thus your backups eventually get overwritten with
               | bad stuff.
               | 
               | I use rsync's --link-dest to get Time Machine-like
               | backups so they can't be overwritten through the backup
               | system itself. I don't take the "shut it down"
               | precautions with this system the other person does,
               | though I do have a separate in-place one that's strictly
               | manual (as in I have to physically plug it in to update).
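The `--link-dest` trick (unchanged files in the new snapshot become hard links into the previous one, so old snapshots are never rewritten by later runs) can be illustrated with a toy sketch. This is an illustration of the mechanism, not rsync itself, and it handles flat directories only:

```python
import filecmp
import os
import shutil

def snapshot(src, dest_root, name, prev=None):
    """Copy the files in `src` to dest_root/name. Files unchanged since
    the `prev` snapshot are hard-linked to the old copy (sharing one
    inode), so a later run never modifies an earlier snapshot --
    corrupting the source only poisons snapshots taken afterwards."""
    dest = os.path.join(dest_root, name)
    os.makedirs(dest)
    for fname in os.listdir(src):
        cur = os.path.join(src, fname)
        old = os.path.join(dest_root, prev, fname) if prev else None
        if old and os.path.exists(old) and filecmp.cmp(cur, old, shallow=False):
            os.link(old, os.path.join(dest, fname))       # unchanged: hard link
        else:
            shutil.copy2(cur, os.path.join(dest, fname))  # new/changed: real copy
    return dest
```

If ransomware encrypts the source, only the next snapshot contains the encrypted files; the earlier snapshots still hold the good copies, exactly as with Time Machine-style backups.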
        
           | [deleted]
        
         | damagednoob wrote:
         | > how to make backups safe in age of encrypting ransom ware.
         | 
         | Versioned, offsite backups. For instance, if you have a
         | database in an AWS account:
         | 
          | * Give the backup process write-only access (i.e. no delete
          | permissions) to a GCP account.
         | 
         | * Create the backup in AWS and timestamp it
         | 
         | * Copy the backup to GCP using the above permissions.
         | 
         | * If you want to be more secure, copy the backup to a USBHDD
         | (daily/weekly) and unplug it.
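Client-side, the timestamp-and-never-overwrite convention from the steps above might look like the sketch below, with a local directory standing in for the remote bucket. The function name and key format are assumptions; the real no-delete guarantee has to come from the provider's IAM or object-lock/retention policy, since a compromised client can ignore any convention:

```python
import os
import shutil
from datetime import datetime, timezone

def push_backup(local_file, store_dir):
    """Store a backup under a fresh timestamped key, never overwriting
    an existing one (O_EXCL makes the create-only step atomic). This is
    only a client-side convention: real write-once safety needs the
    server side to deny deletes/overwrites, since a compromised client
    can simply not run this code."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S.%f")
    key = f"{os.path.basename(local_file)}.{stamp}"
    fd = os.open(os.path.join(store_dir, key),
                 os.O_CREAT | os.O_EXCL | os.O_WRONLY)  # fail if key exists
    with os.fdopen(fd, "wb") as out, open(local_file, "rb") as src:
        shutil.copyfileobj(src, out)
    return key
```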
        
           | deathanatos wrote:
           | > _Give the backup process write only (I.e. no delete
           | permissions) to a GCP account._
           | 
           | I've looked into this before, and it is just not that easy.
           | "Write" _is_ delete, for most cloud storage systems, for the
           | practical purposes of trying to keep a backup safe. (I.e.,
           | you might not be able to delete a blob in some bucket, but if
           | you can write to it, you can just overwrite it with 0s.)
           | 
           | "WORM" (write-once read-many) tends to be the term to search
           | / gets the right documentation from most providers. In GCP's
           | case, it appears to be "set up a retention policy", and
           | that's similar to my experience with other providers. These
           | bring their own set of problems.
           | 
           | That said, encrypting ransomware isn't going to magically
           | determine where your backups are, and for most orgs, _having_
           | the backup at all (and having it tested) is the priority, not
           | the whole WORM thing.
           | 
           | (Orgs, IMO, also tend to get _really_ uppity about having
           | "database" backups, where "database" == {MySQL, Postgres,
           | etc.}. But then there will be an S3 bucket that also has a
           | bunch of data in it, and that never gets backed up, and
           | nobody even _questions_ that. And half the time it seems
            | impractical to back up, too, due to a mix of cost and S3's
            | design.)
        
       | thot_experiment wrote:
       | I worry so much more about the dumb hardware locks and secure
       | enclaves, OS features etc. I find the risk of a compromised
       | machine to be so much less of an impact on my life than my
       | computer telling me I am not allowed to do something.
       | 
       | This is my computer, let me tell it what to do. I hate how much
       | of my time is wasted by all this security stuff. Infinitely more
       | so than had been wasted by actual malware over the last decade or
       | so.
       | 
       | I don't want to have to spend 10hrs figuring out how to hide root
       | from Android pay every time something upgrades. Please just let
       | me have root on devices I own.
       | 
       | Ever since I started doing a lot of work in C where all the foot
       | guns are intentionally left in I've had my eyes opened to how
       | beautiful and fun computers can be when they aren't your fucking
       | adversary.
       | 
       | "Security" that can't be disabled by the device owner is tyranny.
        
         | hsbauauvhabzb wrote:
         | Root detection is about reducing risk to companies, presumably
         | sideloading results in substantially increased risk. It's not
          | about you, it's about their bottom line, but it can also
          | help prevent grandma from losing her life savings.
         | 
         | I'm a security engineer and know what I'm doing and agree
          | there's some level of security theatre, but if you're not
          | worried about losing the crown jewels in a compromise,
          | you're most probably uneducated or arrogant.
        
           | thot_experiment wrote:
           | Of course I'm worried, I just attempt to understand my threat
           | model, my vulnerability surface and act accordingly. I
           | understand why security exists. I just want to be able to
           | turn it off without having to resort to hours of research of
           | security tech in order to control the devices I own.
           | Obviously I wouldn't give access to an account with my life
           | savings to my rooted phone.
           | 
           | It is about their bottom line, but largely not to protect me
           | or grandma, it's about justifying control and divorcing
           | people from the power of the supercomputer in their pocket.
           | There's more money in making it hard for me to edit my hosts
           | file etc. than there is in preventing the potential loss of
           | my money/data through these restrictions.
        
       | lormayna wrote:
       | Before defining any strategy, you need to define an appropriate
       | threat model. Which kind of information are you storing on your
       | device and who can target you?
       | 
        | If you are a standard person not doing anything illegal, the
        | information you need to protect is mostly financial and
        | personal. So protect your bank/credit card/crypto wallet
        | credentials with encryption and/or MFA. For personal
        | information, use the same criteria, according to the level of
        | confidentiality you want to achieve: it's pointless to
        | encrypt your cat pictures, it may be worth encrypting your
        | son's pictures, and it's mandatory to protect your
        | health-related files, also with MFA. This is just to give an
        | idea; you should repeat this exercise frequently (say, every
        | 6 months) and verify that the security controls are in place
        | and whether they need to be updated.
       | 
       | For my own devices, I am using this approach:
       | 
        | * Infrastructure: I am using a password manager for all my
        | accounts, and where possible I have enabled MFA. I have
        | Cloudflare ZT on my home network, so I am somewhat protected
        | against web threats. Moreover, I have a script that downloads
        | phishing and malware feeds every day and updates my router's
        | ACLs. I am not exposing anything publicly; all the services
        | inside my house are accessible through VPN. My Chinese
        | cameras are heavily firewalled in a separate VLAN and
        | reachable only from specific hosts. Every device is upgraded
        | to the latest version, with no default passwords.
       | 
        | * Main laptop: runs Linux, so I feel a bit safer browsing
        | the web. Anyway, I keep an encrypted backup of important
        | data in the cloud, just to ensure disaster recovery.
       | 
        | * Secondary laptop: runs Windows; I keep it regularly
        | updated, with scheduled MS Defender scans. My wife mainly
        | uses it, but she does not install anything without my
        | approval (I am the admin of the laptop).
       | 
        | * Phone: storage encrypted, access protected by a strong PIN
        | and no biometrics. Applications are installed only from
        | official stores, and I use a DNS blacklist. My phone has a
        | native feature to reduce and audit app permissions on a
        | schedule, and I sometimes do it myself as well. If I have to
        | connect to an unencrypted public network, I use a WireGuard
        | VPN client.
       | 
        | Just my 2 cents; I hope I did not forget anything and that
        | this is helpful.
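The daily feed-to-ACL script mentioned under "Infrastructure" could be sketched as below. The hosts-file feed format and the dnsmasq-style sinkhole output are assumptions; real feeds and router ACL dialects vary:

```python
def hosts_feed_to_acl(feed_text):
    """Turn a hosts-format blocklist (lines like '0.0.0.0 bad.example'
    or bare 'bad.example') into dnsmasq-style sinkhole rules. The feed
    format and the output syntax are illustrative assumptions."""
    rules = []
    for line in feed_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments/whitespace
        if not line:
            continue
        parts = line.split()
        domain = parts[1] if len(parts) > 1 else parts[0]
        rules.append(f"address=/{domain}/0.0.0.0")
    return rules
```

In a real setup a cron job would fetch the feed over HTTPS, regenerate the rules file, and reload the resolver or push the ACLs to the router.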
        
       | luxuryballs wrote:
       | I never trust a computer to not be compromised, if there's any
       | information so critical that it must be secured at all costs then
       | I simply never let it near a digital device.
        
       | yshklarov wrote:
       | In the words of Robert Morris, 40 years ago:
       | 
       | "The three golden rules to ensure computer security are: do not
       | own a computer; do not power it on; and do not use it."
        
       | Havoc wrote:
        | Short of building your own hardware and writing your own
        | software, you never fully can. Very much a best-effort
        | cost/benefit thing.
        
       | ChicagoDave wrote:
        | I'm kind of hoping Malwarebytes is good at its job.
        
       | denton-scratch wrote:
       | I don't trust that my machines are not compromised.
       | 
       | All I can do is to start with a machine I believe to be "clean",
       | and take measures to keep it that way (others have suggested
       | suitable measures). But even a brand-new machine might have a
       | compromised BIOS, or compromised firmware in some peripheral
       | processor like the Wifi adaptor.
       | 
       | I don't know how to guarantee that a machine is "clean" to begin
       | with, and I doubt anyone else does.
        
       | [deleted]
        
       | mejutoco wrote:
       | You could use something booted from a read only medium like
       | knoppix (if your computer has a cd reader!). At the BIOS level
       | all bets are off though.
       | 
       | https://www.knopper.net/knoppix/index-en.html
        
       | greggarious wrote:
       | I don't.
       | 
       | That is... I don't trust my machine.
       | 
       | I take reasonable precautions, but at a certain point you have to
       | just live your life and deal with people who violate your
       | boundaries in meatspace.
        
       | LinuxBender wrote:
       | My TL;DR Summary _With time I have moved further and further away
       | from using the computer for important things and believe that
        | tech has long since exceeded George Orwell's wildest fever
       | dreams_
       | 
       | I assume a freshly installed OS is compromised [1] and the
       | hardware it is on is also compromised in the BIOS and firmware at
       | very least by state actors but then I also assume those state
       | actors have poorly vetted contractors that may also be
       | compromised by other nations _i.e. who pays the most gets
       | access_. I would not be surprised for a moment if they have
       | competing backdoors that try to block one another. Since I can
       | not control any of this I just imagine the national actors of the
       | world are watching my screen and yawning. _More likely the latest
       | iteration of ECHELON AI is yawning_. I instead focus on securing
       | important externalities _making bank accounts read-only from the
       | web, not all banks will do this_. I also diversify where my
       | assets are stored and make a best effort to require physical
       | access.
       | 
       | Beyond that layer I do all the usual hardening practices but that
       | only goes so far as every browser likely also has intentional
       | _weaknesses_ in them. Even FireJail and SELinux /AppArmor will
       | likely just happily relay malicious instructions. Addons may
       | raise the bar keeping some script-kiddies off my machine but I
       | never for a moment assume that it stops government contractors
       | from relaying instructions to the backdoors in the hardware
       | and/or OS and ultimately to the hidden CPU instructions that
       | likely take multiple layers of obfuscated instructions to tickle
       | _meaning SandSifter will never find them_.
       | 
        | The above is for PCs. For cell phones, I assume FAANG are
        | interactively on my phone, and since most of them were
        | initially funded by the government, I do not use it for
        | anything sensitive.
       | I also assume that all cell phones have backdoors added by their
       | manufacturer. Each one does seem to dial home to different places
        | and make unique DNS requests. Putting phones into
        | developer/debug mode does seem to quiet them down, which is
        | the opposite of what I would have expected, so maybe they
        | know someone may be watching. _i.e. malware knows it's in a
        | sandbox_
       | 
       | Wi-Fi Access Points are a story in and of themselves.
       | 
       | Why should I care about state actors? That one's easy. The best
        | contractors will have _leaks_ in their OpSec and for-profit
       | companies will acquire the weaknesses and use them to do illegal
       | and unethical things to citizens for a price and political,
       | economic and a myriad of other motivations. I would not be
       | surprised if some government actors sell off access and end up
       | working for said companies.
       | 
       | [1] - https://news.ycombinator.com/item?id=34388990 [and hundreds
       | of other threads]
        
         | javajosh wrote:
         | _> ECHELON AI is yawning_
         | 
         | Well, presumably it would stop yawning if you were, like, part
         | of an armed rebellion. The perspective I'd like to hear would
          | be the Ukrainian civil and military resistance to Russia's
          | invasion. How do _they_ know their systems aren't
          | compromised?
         | Because, yeah, in their case, being compromised means getting
         | killed.
        
           | LinuxBender wrote:
           | _in their case, being compromised means getting killed._
           | 
           | In fact there have recently been articles about this and each
           | side ordering their troops to stop using their cell phones.
           | Both sides have attributed several mass casualties to cell
           | phones. This is probably harder to enforce with conscripts
           | and military contractors. Many of the first wave of troops
           | thought they were just going on a training exercise.
        
             | grogenaut wrote:
             | Some of these at least were people posting to the socials
             | which doesn't take any form of spying. I haven't seen an
             | analysis of it tho.
        
               | LinuxBender wrote:
                | _haven't seen an analysis of it tho_
               | 
               | Same here. GPS tagging in social media is certainly a
               | thing. There are programs and groups of people that can
               | identify locations from a single obscure photo. Then
               | there is GPS data in some photo apps that add exif data.
               | So many options one must assume phones are just loose
               | lips [1] even without factoring in App, OS, Firmware and
               | Baseband Modem backdoors not to mention instant location
               | from satellites.
               | 
               | In my opinion soldiers should be given instructions how
               | to back up their phone contents then hand all tech
               | devices to a range safety officer to put into a belt fed
               | target practice system to give them some closure from
               | their dopamine devices.
               | 
               | [1] - https://en.wikipedia.org/wiki/Loose_lips_sink_ships
        
       | [deleted]
        
       | CraigJPerry wrote:
        | I daily drive an M1 Mac. I can't even change the initial
        | boot screen wallpaper because it's on a sealed partition.
        | The previous Intel/T2 approach of modifying and then blessing
        | the modified partition doesn't work on M1. When I dug into
        | the depths of this simple problem (changing the boot screen
        | wallpaper before login) I came away impressed.
       | 
        | So I feel fairly confident about the machine firmware & OS.
        | Less so about my keyboard, for example. Also, because I opt
        | out of a lot of the security measures (e.g. I download from
        | Homebrew rather than using App Store apps), I can't be sure
        | I'm not being compromised.
        
       | 988747 wrote:
        | You simply cannot. Unless you want to go the Richard
        | Stallman path and work on a laptop from 2008, with only
        | open-source software, unable to use Netflix, banking sites,
        | etc.
        
       | timbit42 wrote:
       | Try not to use apps written in unsafe languages like C and C++,
       | etc.
        
       | nilespotter wrote:
       | Compromised by whom? We've all heard of IME and PRISM I assume...
        
       | wiz21c wrote:
       | I protect my machine from as many commercial interests as I can.
       | This removes a whole class of issues.
       | 
       | For the rest, the thing is so complicated nowadays I can't really
       | say anymore.
       | 
       | I spent my youth on 8 bit machines. At that time I was 100%
       | certain there was no compromise. But nowadays,...
        
       | puma_ambit wrote:
       | Keep your essential files synced with an online service and
       | regularly format and wipe your machine at least once a year. This
       | is what I do.
        
       | EVa5I7bHFq9mnYK wrote:
       | Security is at odds with usability, so you can't run critical
       | apps on your personal machine. For me, stuff that needs to be
       | secure, runs on a separate machine that has nothing else
       | installed.
        
       | rejectfinite wrote:
       | Windows 10, Firefox, ublock origin and Windows Defender
       | 
       | Dont install weird exe or MSIs
       | 
       | Thats kinda it.
        
       | karmakaze wrote:
        | For malware that was downloaded through activity, even if it
        | required no clicks, it usually presents itself as slower
        | performance or other quirks. Maybe one has eluded discovery,
        | but I'd never know.
       | 
       | I'm much more wary of systemic malware at lower levels that I
       | don't have an opportunity to detect. There's not so much I can do
       | about that other than try to use devices from vendors I trust (or
       | distrust less) that have the least preinstalled software.
       | Lobbying for open firmware or hardware is the long-term strategy.
       | 
       | I also use multiple machines: work, personal, gaming, and utility
       | (Surface Go). E.g. I use the mouse configuration software on the
       | Surface Go and only the mouse hardware with its configured
       | profile on the other machines.
       | 
       | Ultimately I can't know I'm not compromised but don't lose sleep
       | over something I don't have more control over.
        
       | krn wrote:
       | > "Compromised" meaning that malware hasn't been installed or
       | that it's not being accessed by malicious third parties. This
       | could be at the BIOS, firmware, OS, app or any other other level.
       | 
       | I don't believe there is a way to be 100% certain, but if I had
       | to go to a store and pick a new device with the lowest likelihood
       | of being compromised, it would be a desktop, a laptop, or a
       | tablet running ChromeOS[1].
       | 
       | [1] https://www.chromium.org/chromium-os/chromiumos-design-
       | docs/...
        
         | gkbrk wrote:
         | Many would consider a ChromeOS device compromised from day one.
        
           | krn wrote:
           | From the privacy perspective, ChromeOS is no different than
           | Windows and MacOS.
           | 
           | From the security perspective, it's much better, because
           | every single app is running in a sandbox.
           | 
           | I don't even use Chrome for web browsing on ChromeOS, since
           | Firefox works just fine with flatpak[1].
           | 
           | [1]
           | https://support.google.com/chromebook/answer/9145439?hl=en
        
       | voytec wrote:
        | I use an OS from a vendor not obsessed with usage
        | monetization or any kind of "phone home" functionality.
       | 
        | I consider internet browsers to be a major backdoor risk and
        | thus have none installed on my host OS. I only browse
        | interwebs from VMs.
       | 
       | I don't trust my home network the same as I wouldn't trust an
       | open public WiFi.
       | 
        | I don't assume everything is as secure as it could be, and I
        | take redundant steps to protect the things that matter.
        
       | cube2222 wrote:
       | I try to follow what others already mentioned, but still, for any
       | personal high-security stuff I use a device whose OS puts strong
       | limits on apps, like an iPad.
        
       | [deleted]
        
       | labarilem wrote:
       | Don't think I can find a practical way to be 100% sure of that.
       | I'm happy to be at 95% though.
        
       | [deleted]
        
       | phkahler wrote:
       | A combination of good practices and a lot of rationalization!
        
       | jhoelzel wrote:
        | I use 2FA where I can and accept the fact that if I'm not
        | already compromised, I could be at any moment.
        | 
        | That's also why 2FA is the first thing I configure after
        | setting up a Kubernetes cluster.
        
       | lamontcg wrote:
       | If you're just a rando who isn't likely to get specific attention
       | from someone like the NSA or other state-backed threat agents,
        | then the answer is that if everything is behaving normally,
        | you're not compromised. For the bulk of people, if someone
        | breaks
       | into your personal device they're going to start using it for
       | something. You'll see unusual utilization, your proxy settings on
       | your browser will get changed, you'll just be hit by a ransomware
       | attack and your drive will be encrypted and you'll be locked out,
       | etc. They're after the bulk of the users out there and they don't
       | need to be particularly stealthy about anything.
       | 
        | Of course, if you have large quantities of BTC or something,
        | then the answer is to get it off of your personal machine,
        | set up a cold wallet that cannot be hacked, and stop
        | installing clever-looking crypto shit on your machine.
        
       | NKosmatos wrote:
       | Read the following short story... Disclaimer: I'm not responsible
       | for any paranoia, computer fear or conspiracy thoughts that might
       | arise after reading it:
       | https://www.teamten.com/lawrence/writings/coding-machines/
        
         | drivebycomment wrote:
         | Lol. That was a fun read. Thanks for that.
        
       | modeless wrote:
       | For some excellent advice on security and privacy based on
       | thoroughly researched technical concerns rather than speculation
       | or blind trust in any particular organization (e.g. Apple or
       | Google or Mozilla), see here: https://madaidans-
       | insecurities.github.io/ I found the Android and Firefox/Chromium
       | evaluations particularly interesting.
        
       | etna_ramequin wrote:
        | From a security research point of view, the paper
       | "Bootstrapping Trust in Commodity Computers"[1] is a very good
       | overview. Although it would necessitate a bit of an update for
       | more recent developments with e.g. dm-verity etc.
       | 
       | [1] PDF:
       | https://www.andrew.cmu.edu/user/bparno/papers/bootstrapping-...
        
       | cma wrote:
       | Store a $300 eth wallet private key on the machine in plain text.
       | Have some other service notify you if that wallet address ever
       | makes a transaction.
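A sketch of the polling side of this canary-wallet idea. The transaction-count fetcher is injected as a callable (e.g. a wrapper around a block-explorer API) so that nothing here names a real endpoint; the function name is hypothetical:

```python
def check_canary(address, fetch_tx_count, last_seen=0):
    """Report whether the canary wallet has moved. `fetch_tx_count` is
    any callable returning the number of transactions sent from
    `address` (e.g. a block-explorer API wrapper, injected so this
    sketch stays self-contained). A key that only ever existed in
    plain text on this machine should never transact -- if it does,
    the machine is compromised."""
    count = fetch_tx_count(address)
    return count > last_seen, count
```

Run it from a *different*, trusted machine on a schedule; an alert from the watcher is the tripwire, and the $300 is the bounty you pay for the signal.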
        
       | albntomat0 wrote:
       | The biggest thing is being deliberate about your threat model.
       | Who would want to get onto your systems, and how much do they
       | care about you in particular?
       | 
       | From there, take appropriate actions. For the vast, vast majority
       | of us, that means using good passwords, updating software, and
       | not running weird things from the internet.
       | 
       | If you're worried about 0 click RCE in Chrome/Windows/iOS, you
       | either should be getting better advice from folks outside of HN,
       | or are being unrealistic about who is coming after you.
        
       | trenchgun wrote:
       | Simple: I don't.
       | 
       | It is most likely compromised and I behave accordingly.
        
       | H4ZB7 wrote:
       | you install a UN*X distro, because those are what my culture
       | tells me are secure. then you find out you can't even find the
       | fucking PGP key to verify the ISO [1]. let alone that UN*X
        | desktop is extremely insecure against untrusted inputs (this includes
       | the terminal, for any wise guy who thinks this is just a gnome or
       | X11 or what have you problem). the reason why you get google
       | peddling their questionable solution at the top of the page is
       | because the open source community has not got their shit
       | together. they're too busy confusing themselves with arcane junk
       | like UN*X and PGP to see the forest for the trees.
       | 
       | 1. literally, as i speak, the sigs are 404:
       | https://www.freebsd.org/releases/12.4R/signatures/
       | 
       | i literally just opened the first distro site i could think of
       | and found their keys are fucked up as usual
        
       | throwawaaarrgh wrote:
       | Nobody cares enough about me to target me, and I don't run
       | Windows. That excludes like 99.99% of possibilities.
       | 
       | Android, on the other hand... I have installed apps I didn't know
       | much about, and that store is full of malware, so I have no idea.
        
       | jaxn wrote:
       | I believe my most vulnerable environment is the development
       | environment. 3rd party code being updated almost constantly
        | combined with fairly standardized CLIs to cloud environments
        | (kubectl, az/aws/heroku, gh, etc.).
       | 
       | And I don't just have to be vigilant about what I do, but also
       | about what my team has done. It terrifies me, and it's a sad
       | reality that my personal risk is reduced by the fact that if I
       | fall victim, countless other teams will as well.
        
         | mark_l_watson wrote:
          | I think you are correct. The latest news of malicious PyPI
          | packages worried me.
         | 
          | Not always, but often I prefer development on remote VPSes. For
         | anything deep learning I pay Google a little bit of money every
         | month for Colab notebooks, save a ton of my own time, and don't
         | worry about trying random 3rd party libraries. I don't have
          | this use case anymore, but I used to use very-large-memory
          | VPSes with SBCL Common Lisp and Emacs to work on an old
          | project that required lots of data in memory - VPSes are
          | really cheap if you turn them off when not in use.
         | 
         | In the 1980s and a bit into the 1990s, I did most things in X
         | Windows - I have thought about how good that would be for
         | secure remote development, but text with tmux, Emacs, etc. is
         | so much less hassle.
        
       | amelius wrote:
       | Waiting for HackGPT to become a thing, so I can use it as a
       | pentester.
       | 
       | Any ways in which AI can already help?
        
       | nixpulvis wrote:
        | TL;DR: you don't.
       | 
        | Best advice I have, for what it's worth, is to wipe and
        | reformat from a known clean image regularly. If you haven't
        | been hacked yet, it stands to reason you won't be going
        | forward.
       | 
       | That said, I often install packages I don't fully vet, and grant
       | permissions I probably shouldn't, either in the name of curiosity
       | (how else do we learn and experiment), or necessity.
        
       | kovac wrote:
       | Assuming that you have no other option but to use a computer,
       | there's some good advice here[1] for securing a Linux system.
       | Then you can run regular scans using security tools like the ones
       | listed here[2].
       | 
       | [1]: https://wiki.archlinux.org/title/security
       | 
       | [2]:
       | https://wiki.archlinux.org/title/List_of_applications/Securi...
        
       | lrvick wrote:
       | I run QubesOS, which compartmentalizes your USB ports, network
       | card, and all your various application workflows into separate
       | virtual machines. It is literally designed to protect you even
       | if part of your system is compromised.
       | 
       | https://www.qubes-os.org/intro/
       | 
       | For details on how I use Qubes specifically see:
       | https://github.com/hashbang/book/blob/master/content/docs/se...
        
         | ThePowerOfFuet wrote:
         | > For details on how I use Qubes specifically see:
         | https://github.com/hashbang/book/blob/master/content/docs/se...
         | 
         | How is this not a contradiction?
         | 
         | >6. Manual PRIVILEGED SYSTEM mutations MUST be approved,
         | witnessed, and recorded
         | 
         | >7. PRIVILEGED SYSTEM mutations MUST be automated and
         | repeatable via code
        
       | nromiun wrote:
       | You can try to "snoop" on the virus. For example, collect all
       | the internet packets and see if any ports are open that aren't
       | needed. Collect logs on which apps are eating up the battery.
       | These steps are not perfect by any means, but you can catch some
       | noisy viruses this way. If your virus is very stealthy you can
       | only hope your passwords show up in haveibeenpwned.
       | 
       | This is also why using an open source OS is so important. At
       | least you can investigate why something is happening in the OS.
       | Without the source you can only guess at what is happening.
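The "unexpected open ports" check can be sketched with nothing but the standard library. A minimal localhost TCP probe (a real audit would use ss/netstat locally plus a scan from a second machine, since a rootkit can hide listeners from the host itself); the `expected` whitelist values are made up for illustration:

```python
import socket

def open_tcp_ports(host="127.0.0.1", ports=range(1, 1025)):
    """Return the subset of `ports` accepting TCP connections on `host`."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.05)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                found.append(port)
    return found

# Compare against a whitelist of services you expect (example values):
expected = {22, 631}
unexpected = set(open_tcp_ports()) - expected
```

Anything in `unexpected` deserves a closer look with lsof/ss to see which process owns the listener.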
        
         | javajosh wrote:
         | Open-source is _de facto_ closed source if you don't build
         | your own stuff (and know how to debug it). That's the status
         | most OSS users are in, I suspect. I run Linux but I've never
         | compiled a kernel and I've never run a native debugger. It's
         | nice that I _could_, but this is just a platitude.
        
           | nromiun wrote:
           | But anyone this paranoid will obviously build from source?
           | Most OSS users don't build from source because they don't
           | care to look in their internet packets for viruses.
           | 
           | BTW, it is not that hard either. You can even have multiple
           | Linux kernels installed at the same time. Same with Android
           | ROMs, just checkout the code, build it and flash using ADB.
           | It is about as difficult as dual booting Windows and Ubuntu.
        
       | Helmut10001 wrote:
       | Separation of concerns is a good idea. Don't run everything
       | together: e.g. multi-boot OSes, or nested OSes (Windows with
       | several WSL setups for different work, untrusted Windows apps
       | tested first in a Windows development VM, etc.). If you have a
       | server, run dedicated VMs and work on those remotely; these days
       | you can even stream your games from a dedicated VM. If a game is
       | compromised, it will at most compromise the other games on that
       | VM, but not your important work on another VM.
        
         | amelius wrote:
         | A workflow that involves multiple VMs is usually very
         | cumbersome.
         | 
         | I feel that our OSes should solve this problem. Unix was built
         | with the mindset that other users cannot be trusted, but they
         | forgot that applications can also be malicious. There is a huge
         | opportunity here for better OSes.
        
           | transpute wrote:
           | Untrusted office apps/browsers have been solved on Windows
           | for ~10 years by Bromium (now HP Sure Click) and ~5 years
           | by Microsoft cloning Bromium as _Application Guard_. For
           | example, every tab of the browser runs in an isolated VM,
           | seamlessly composited into the main desktop, with no special
           | action needed by the user.
        
           | tigrezno wrote:
           | Isn't that the effort behind jails like Snap/Flatpak
           | lately?
        
             | amelius wrote:
             | Ergonomics is part of the equation and Snap is definitely
             | not there yet, and in fact its poor quality may be driving
             | users to other solutions.
             | 
             | Snap has no GUI, though a solution that provides a
             | general sandbox clearly should have one. When you open an
             | app and click "Open File", the sandbox GUI should
             | intervene and ask the user where the app is allowed to
             | look.
             | 
             | Also, the idea of "security bolted on top" of an existing
             | OS doesn't seem very trustworthy.
        
       | SnowHill9902 wrote:
       | You generally don't. It all depends on your threat model.
       | Are you a Mossad target or a non-Mossad target? The best you can
       | do if you are a non-Mossad target is to
       | anonymously/pseudonymously periodically purchase new hardware and
       | do a fresh OS install. Be minimalistic. If you can't trust your
       | wifi-enabled printer, disable its wifi connectivity and use it
       | only over USB. If you still can't trust it, don't use printers to
       | begin with.
        
       | rekrsiv wrote:
       | You don't. Treat your personal machine(s) as compromised by
       | default and take it from there.
        
       | ramraj07 wrote:
       | There are two levels here: compromised by some national agency
       | vs. compromised by anyone else.
       | 
       | For the former, I don't assume anything, especially since I'm
       | not an American citizen. I still believe with some certainty
       | that my iPhone is safe from the government, but not 100%.
        
       | RGamma wrote:
       | _knocks on wood_
       | 
       | Until the day we get sandboxing, well-defined interfacing with
       | user data and stuff in desktop computing...
       | 
       | Imagine the desktop's security model, if you can call it that, on
       | mobile. It would be _madness_.
        
       | _Algernon_ wrote:
       | I use adblock to prevent (malicious) ads from running. The
       | browser's malicious-download warnings are enabled, and it is
       | configured to show a full-screen warning if HTTP is used
       | instead of HTTPS. Opening links from email requires manually
       | copy-pasting them, forcing an extra look at each link.
       | 
       | Generally I don't install random software outside the official
       | repos or AUR, but I do blindly trust those repos to not be
       | compromised.
       | 
       | That being said, I don't think I could 100% trust a modern
       | computing device to not be compromised, but since that isn't
       | possible I also don't see it as actionable information.
        
       | aliqot wrote:
       | I trust that it is compromised since I don't have a chip fab in
       | the back yard.
        
       | eightysixfour wrote:
       | I don't. I assume all devices are potentially compromised and I
       | shift my risk to others as much as possible - credit card fraud
       | protection, etc.
        
       | morphle wrote:
       | Never allow a company or government to install known and unknown
       | software that spies on you. This means Microsoft, Google,
       | Facebook, Apple, etc.
        
       | bitexploder wrote:
       | Run Linux. Install only trusted software. Don't do sketchy (from
       | a security perspective) things like look at porn or use torrent
       | sites on your main computer. Backup often. Reinstall OS annually.
       | Don't run Windows. Don't worry too much. Be happy.
        
       | windex wrote:
       | Any way to actually monitor traffic at the PC or network level
       | for an individual user? I see a lot of odd behavior from TVs,
       | IoT devices, connected bulbs, and such. None of them seem
       | necessary to me. I feel Windows could do a lot better with
       | reporting, or with allowing users to clearly see what is
       | communicating across the network at any moment.
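At the network level, the usual answer is to capture at the router (tcpdump, or a mirror port) and then summarize traffic per destination. A toy sketch of the summarizing step; the "src > dst:port" record format here is invented for illustration, not a real tcpdump format:

```python
from collections import Counter

def top_talkers(log_lines):
    """Tally flows per destination host from simple 'src > dst:port' records."""
    counts = Counter()
    for line in log_lines:
        # Record shape is invented for illustration: "192.168.1.50 > 52.84.1.7:443"
        dst = line.split(">")[1].strip().split(":")[0]
        counts[dst] += 1
    return counts.most_common()

sample = [
    "192.168.1.50 > 52.84.1.7:443",
    "192.168.1.60 > 52.84.1.7:443",
    "192.168.1.60 > 8.8.8.8:53",
]
print(top_talkers(sample))  # [('52.84.1.7', 2), ('8.8.8.8', 1)]
```

Chatty smart-TV or bulb endpoints tend to jump straight to the top of such a tally.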
        
       | omgmajk wrote:
       | I don't. I am really picky about downloading software for my
       | personal machine, I sometimes explore with Process Explorer,
       | and I run sketchy stuff in a sandbox, but I don't trust that my
       | personal machine is not compromised.
        
       | morphle wrote:
       | Can You Trust Your Computer? https://www.gnu.org/philosophy/can-
       | you-trust.en.html
       | 
       | Eben Moglen: The alternate net we need, and how we can build it
       | ourselves:
       | 
       | https://www.youtube.com/watch?v=gORNmfpD0ak&t=2s
        
       | naveen99 wrote:
       | "Third parties" is a broad term, especially when the list of
       | potential 2nd parties includes most of those 3rd parties.
        
       | vHMtsdf wrote:
       | On my and my family's Windows machines, I try to follow the
       | advice from Taylor Swift, who seems to know what she is doing...
       | https://decentsecurity.com/#/securing-your-computer/
       | 
       | In short: 1) secure bootup by locking down the BIOS and
       | encrypting your drive, 2) set User Account Control to the
       | highest level, 3) install an up-to-date browser with appropriate
       | addons (uBlock Origin).
        
         | Alifatisk wrote:
         | I would wait before installing the latest Windows updates. If
         | I don't see anything in the news after a month, that's when I
         | update.
        
           | ThePowerOfFuet wrote:
           | That month is the highest-risk time, as patches are quickly
           | reverse-engineered to find the exploits they fix.
           | 
           | Up to a week might be prudent to avoid patches which blow up,
           | but not longer.
        
           | jonstewart wrote:
           | Would very much recommend updating Windows ASAP.
        
             | f4c39012 wrote:
             | There's nothing as secure as a brick; you're not getting
             | any data out of that.
        
       | ignoramous wrote:
       | _ex-AOSP dev here_
       | 
       | Android and ChromiumOS are likely the most trustable computing
       | platforms out there; doubly so for Android running on Pixels. If
       | you don't prefer the ROM Google ships with, you can flash
       | GrapheneOS or CalyxOS and relock the bootloader.
       | 
       | Pixels have several protections in place:
       | 
       | - Hardware root of trust: This is the anchor on which the entire
       | TCB (trusted computing base) is built.
       | 
       | - Cryptographic verification (_verified boot_) of all the
       | bootloaders (IPL, SPL), the kernels (Linux and LittleKernel),
       | and the device tree.
       | 
       | - Integrity verification (_dm-verity_) of the contents of the
       | ROM (the _/system_ partition, which contains privileged OEM
       | software).
       | 
       | - File-based Encryption (_fscrypt_) of user data (the _/data_
       | partition, where installed apps and data go) and adopted
       | external storage (_/sdcard_); decrypted only with user
       | credentials.
       | 
       | - Running blobs traditionally run in higher exception levels
       | (like ARM EL2) in a restricted, mutually untrusted VM.
       | 
       | - Continued modularization of core ROM components so that they
       | could be updated just like any other Android app, i.e. without
       | having to update the entire OS.
       | 
       | - Heavily sandboxed userspace, where each app has very limited
       | view of the rest of the system, typically gated by Android-
       | enforced permissions, seccomp filters, selinux policies, posix
       | ACLs, and linux capabilities.
       | 
       | - _Private Compute Core_ for PII (personally identifiable
       | information) workloads, and the _Trusty_ Trusted Execution
       | Environment for high-trust workloads.
       | 
       | This is not to say Android is without exploits, but it seems to
       | be the furthest ahead of the mainstream OSes. This is not a
       | particularly high bar because of closed-source firmware and
       | baseband, but this ties in generally with the need to trust the
       | hardware vendors themselves (see point #1).
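The dm-verity bullet above boils down to comparing a partition against a trusted root hash. A toy one-level sketch (real dm-verity builds a multi-level Merkle tree over 4 KiB blocks and verifies blocks lazily on read, with the root hash signed as part of verified boot):

```python
import hashlib

BLOCK = 4096  # dm-verity hashes fixed-size blocks (4 KiB by default)

def root_hash(data: bytes) -> str:
    """One-level toy Merkle root: hash each block, then hash the hash list."""
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)] or [b""]
    leaves = b"".join(hashlib.sha256(b).digest() for b in blocks)
    return hashlib.sha256(leaves).hexdigest()

def verify(data: bytes, trusted_root: str) -> bool:
    """Reject the partition image if any block was tampered with."""
    return root_hash(data) == trusted_root

system_partition = b"\x00" * 10000
trusted = root_hash(system_partition)  # in real life: signed and shipped by the OEM
assert verify(system_partition, trusted)
assert not verify(system_partition[:-1] + b"\x01", trusted)
```

Flipping a single byte anywhere in the image changes the root, which is why relocking the bootloader with a custom ROM requires the ROM's own signed root of trust.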
        
         | andai wrote:
         | > it seems it is most further ahead of the mainstream OSes
         | 
         | Noob here, I recall often hearing that iOS has superior
         | security to Android. Has this situation reversed in the last
         | few years, or was it never true?
        
           | transpute wrote:
           | _> Running blobs traditionally run in higher exception levels
           | (like ARM EL2) in a restricted, mutually untrusted VM_
           | 
           | The pKVM hypervisor is new to Android 13 and requires Pixel 7
           | hardware, both of which are a few months old.
        
           | lrvick wrote:
           | iOS is a proprietary OS making security research unreasonably
           | difficult with new setbacks on every new version. It can only
           | be regarded as reasonably private and secure if you trust the
           | Apple marketing team.
        
             | ghostpepper wrote:
             | My understanding is that Apple has gotten a lot better
             | about this with their bug bounty payouts and providing
             | debug hardware to researchers, and it's not like there's
             | not a ton of proprietary code running on most consumer
             | android devices.
             | 
             | I would also assume the fact that their vertical
             | integration all the way down to silicon is an advantage
             | here as well.
        
               | lrvick wrote:
               | In Android you at least have the choice to run a fully
               | open source OS and open source apps, albeit with some
               | driver blobs.
               | 
               | With the exception of the blobs, everything on Android is
               | auditable.
               | 
               | Meanwhile very little of MacOS or iOS is auditable.
               | 
               | Personally I do not use or trust any of the above, but if
               | forced to choose Android is worlds ahead of iOS in terms
               | of publicly auditable privacy and security.
               | 
               | You cannot form reasonable confidence something is secure
               | unless it can be readily audited by yourself or capable
               | unbiased third parties of your choosing. This means
               | source code availability is a hard requirement for any
               | security claims. Even if you had teams de-compile
               | everything you could never keep up with updates.
               | 
               | Not all open source code is secure, but all secure code
               | is open source.
        
               | aio2 wrote:
               | What @lrvick is talking about is the fact that you have
               | more freedom to test security on Android than on iOS,
               | such as being able to flash other systems.
        
               | Godel_unicode wrote:
               | Better, sure. A lot better? Definitely not.
               | 
               | The bug bounty is pretty hard to actually get access to,
               | there's still no source outside of the kernel, and the
               | Security Research Devices are really hard to get access
               | to. You have to be someone they've heard of, in a country
               | they approve of, you can't move the device around, and
               | you have to sign your life away to get it for 12* months.
        
           | ccouzens wrote:
           | The price of an Android zero day has been slightly higher
           | than the price of an iOS zero day since late 2019. Make of
           | that what you like https://zerodium.com/program.html
           | 
           | I suspect remarkably few people are qualified to objectively
           | say which is more secure. If you're an expert on one, you're
           | unlikely to be an expert on the other.
           | 
           | Security is multidimensional. It's unlikely there will be a
           | platform that's more secure from every possible angle. What's
           | secure for you might not be secure for a less technical
           | person, or a world traveler.
        
           | lrem wrote:
           | Isn't most of the value here in not allowing sideloading? In
           | iOS your grandma/child cannot be tricked into clicking "allow
           | apps from untrusted sources", which is how most breaches
           | happen.
        
             | rkachowski wrote:
             | Is that true? The process of sideloading and running a
             | 3rd-party app is pretty far from simple, trivial, or even
             | easy to explain to a grandma / child.
        
             | gkbrk wrote:
             | If the apps are sandboxed, how can installing an app
             | cause breaches? As far as I know, iOS apps are sandboxed.
             | 
             | Either the sandbox is very weak and Apple instead relies on
             | App Store audits, or they disallow users installing apps
             | outside the app store to protect their 30% tax that makes
             | them a LOT of money.
        
               | Consultant32452 wrote:
               | The sandboxes are regularly breached. Basically any time
               | you hear about people "rooting" their phones, that's an
               | intentional sandbox breach.
        
               | charcircuit wrote:
               | Not always. Unlocking the bootloader doesn't breach the
               | sandbox since it factory resets the device.
        
               | lrem wrote:
               | When you're tricked into installing a nasty app, you're
               | likely also tricked into giving it whatever permissions
               | it needs for the nastiness.
        
               | phyphy wrote:
               | Which is not the fault of the OS
        
               | fragmede wrote:
               | Whose fault is it then, if not the OS? That's basically
               | the OS's job: to isolate and manage the system's
               | resources and access to them.
        
               | qu4z-2 wrote:
               | At the direction of the device owner, though.
        
               | fragmede wrote:
               | The malicious app, even one signed from the App Store,
               | could also exploit unpublished vulnerabilities to gain
               | elevated access without asking for permission, even (or
               | especially) if it's not a full sandbox escape.
        
             | abirch wrote:
             | I think Apple will have to allow different stores in Europe
             | soon. It'll be interesting to see what happens.
        
           | bo1024 wrote:
           | Maybe you're thinking of privacy, not security?
           | 
           | In terms of privacy, Android is "compromised" by default,
           | i.e. Google collects and stores a ton of private information
           | about you. I believe Apple used to be much better, and still
           | is, but getting worse.
        
             | jacooper wrote:
             | Both are on the same level at this point.
        
               | fsflover wrote:
               | Not the same, but similar enough:
               | https://news.ycombinator.com/item?id=26639261.
        
               | aio2 wrote:
               | It seems that before, Apple was trying to gain a
               | monopoly. Now that it has enough users, it has turned
               | to tracking.
               | 
               | However, this is speculation, so please take it with a
               | grain of salt.
        
               | jacooper wrote:
               | More like tracking monopoly
               | https://www.macrumors.com/2022/11/15/apple-employees-
               | unhappy...
        
             | doc_gunthrop wrote:
             | > In terms of privacy, Android is "compromised" by default,
             | i.e. Google collects and stores a ton of private
             | information about you.
             | 
             | Apple also does the same. Also, this only applies if you're
             | running an Android device "out-of-the-box". Fortunately,
             | there exist AOSP forks that mitigate this type of intrusion
             | (e.g. GrapheneOS).
        
         | lrvick wrote:
         | IMO we have to step back and be honest that the Linux kernel is
         | simply not equipped to run trusted code and untrusted code in
         | the same memory. New bugs are found every few weeks.
         | 
         | If history is any guide, Android and ChromiumOS likely still
         | have many critical bugs the public does not know about yet.
         | 
         | Sadly the only choice is to burn extra RAM to give every
         | security context a dedicated kernel and virtual machine.
         | Hypervisors, anchored to a hardware IOMMU, are the best
         | sandbox that exists.
         | 
         | QubesOS being VM based is thus the best effort secure
         | workstation OS that exists atm. SpectrumOS looks promising as a
         | potential next gen too.
        
           | mindslight wrote:
           | I've never used Qubes. Rather I heavily segment with manually
           | configured VMs. The ones that run proprietary software (eg
           | webbrowsing, MSWin, etc) generally run on a different machine
           | than my main desktop. It's quite convenient as I can go from
           | my office to the couch, and I just open up the same VMs there
           | and continue doing what I was doing.
           | 
           | I define the network access for each VM in a spreadsheet
           | (local services and Internet horizon), which then gets
           | translated into firewall rules. I can simultaneously display
           | multiple web browsers, each with a different network nym
           | (casual browsing, commercial VPN'd, Tor, etc).
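The spreadsheet-to-firewall-rules step can be sketched as a small translator. Everything here (the column names, the iptables-style output) is a hypothetical illustration of the idea, not the commenter's actual tooling:

```python
import csv
import io

# Hypothetical spreadsheet export: one row per VM with its allowed egress.
SHEET = """vm,dest,port
browser-casual,0.0.0.0/0,443
mswin,10.0.0.5,445
"""

def rules_from_sheet(text):
    """Translate policy rows into iptables-style accept rules (strings only)."""
    rules = []
    for row in csv.DictReader(io.StringIO(text)):
        rules.append(
            f"-A FORWARD -m comment --comment {row['vm']} "
            f"-d {row['dest']} -p tcp --dport {row['port']} -j ACCEPT"
        )
    rules.append("-A FORWARD -j DROP")  # default-deny anything not listed
    return rules

for rule in rules_from_sheet(SHEET):
    print(rule)
```

Keeping the policy in data and generating the rules means the spreadsheet stays the single source of truth for each VM's network horizon.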
           | 
           | The downsides include needing an ethernet cable on my laptop
           | (latency), and that this setup isn't great at going mobile.
           | Eventually I'll get around to setting up a medium-trust
           | laptop that runs a web browser and whatnot directly (while
           | not having access to any keys to the kingdom), one of these
           | days real soon now.
           | 
           | Which brings me to the real downside: the work required to
           | administer it - you already have to be in the self-hosting
           | game. This is where an out-of-the-box solution could excel.
           | Having recently become a NixOS convert, SpectrumOS looks very
           | interesting!
        
             | mtsr wrote:
             | Thinking of my kids' future has also made me much more
             | energy-conscious. Meaning I've stopped running my VM host
             | 24/7 like I was, because neither ESX nor Proxmox is really
             | set up for saving energy easily (automated suspending and
             | waking, etc). Which is a shame, since I'm actually finding
             | that with gigabit fiber at home, even on mobile connections
             | I can work pretty decently on homelab VMs.
             | 
             | Running something like it on a laptop directly makes sense,
             | but I worry about bringing some workloads back to my laptop
             | that I really prefer to keep off it. In terms of raw
             | performance my laptop isn't even close, especially with
             | heavy graphic workloads. And then there's heat, etc.
        
               | mindslight wrote:
               | I feel like this is the all too common pattern of
               | individuals taking environmental responsibility to absurd
               | levels, while corporations dgaf. How much electricity is
               | burned in datacenters, especially doing zero-sum
               | surveillance tasks?
               | 
               | My Ryzen 5700G ("router") idles around 20-25W, which
               | seems like a small price to pay to not be at the mercy of
               | the cloud. That's around 60 miles of driving per month
               | (gas or electric), which seems quite easy to waste other
               | ways.
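The driving comparison checks out as rough arithmetic. A sketch, assuming ~0.30 kWh per mile for an EV (an assumed typical figure, not from the post):

```python
HOURS_PER_MONTH = 24 * 30   # ~720 h of always-on idle per month
EV_KWH_PER_MILE = 0.30      # assumed EV efficiency

def idle_miles_equivalent(watts: float) -> float:
    """Convert a continuous idle draw in watts to equivalent EV miles/month."""
    kwh_per_month = watts * HOURS_PER_MONTH / 1000
    return kwh_per_month / EV_KWH_PER_MILE

print(round(idle_miles_equivalent(25)))  # 25 W works out to ~60 miles/month
```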
               | 
               | My Libreboot KGPE ("desktop/server") burns about 160W.
               | This is much higher than a contemporary computer should
               | be, but that's the price of freedom. I could replace it
               | with a Talos II (~65W from quick research), but the
               | payback for electricity saved would take several decades.
               | 
               | To cut back on the environmental impact, I do plan to
               | install solar panels with battery storage, which will
               | also replace the need for UPSes. I've got another KGPE
               | board for which it's interesting to think about setting
               | up as a parallel build host, only running during sunny
               | days rather than contributing to electricity storage
               | requirements.
        
             | windexh8er wrote:
             | I used to leverage VMs more (and still do in certain cases)
             | but I've moved to disposable/containerized by leveraging
             | Kasm [0]. There's other ways to stream environments, but
             | it's another option. Definitely check it out if you're
             | looking for other options.
             | 
             | [0] https://www.kasmweb.com
        
             | ThePowerOfFuet wrote:
             | Yeah, you should really look into Qubes.
        
           | teaearlgraycold wrote:
           | To be fair, Pixels (and all modern Android phones by my
           | understanding) use some kind of trusted execution
           | environment. So if you have a Pixel 6 or later you're using
           | Trusty to perform some trusted actions, which is not Linux
           | and gets some of its own SoC die space. That doesn't mean you
           | can't get kernel pwned and lose private info.
        
           | a-dub wrote:
           | i think tanenbaum will be vindicated in the end. monolithic
           | kernels are like 90s computer networks with perimeter
           | security. if i were to guess, i'd guess that the future is
           | microkernels with some sort of hardware accelerated secure
           | message passing facility. zero-trust at kernel design scale.
        
           | flanked-evergl wrote:
           | > Linux kernel is simply not equipped to run trusted code and
           | untrusted code in the same memory.
           | 
           | Just for interest's sake, is Linux better or worse than
           | macOS, iOS, and Windows at this?
        
             | aseipp wrote:
             | It's... complicated. Linux is just the kernel, but good
             | modern OS security requires the kernel, the userspace,
             | _and_ the kernel/userspace boundary to all be hardened a
             | significant amount. This means defense in depth, exploit
             | mitigation, careful security and API boundaries put in
             | place to separate components, etc.
             | 
             | Until pretty recently (~3-4 years ago) Linux _the
             | kernel_ was actually pretty far behind in most respects
             | versus competitors, including Windows and macOS/iOS. I
             | say this as someone who used to write a bunch of exploits
             | as a hobby (mainly for Windows-based systems and Windows
             | apps). But
             | there's been a big increase in the amount of mitigations
             | going into the kernel these days though. Most of the state
             | of the art stuff was pioneered elsewhere from upstream but
             | Linux does adopt more and more stuff these days.
             | 
             | The userspace story is more of a mixed bag. Like, in
             | reality, mobile platforms are far ahead here because they
             | tend to enforce rigorous sandboxing far beyond the typical
             | access control model in Unix or Windows. This is really
             | important when you're running code under the same user. For
             | example just because you run a browser and SSH as $USER
             | doesn't mean your browser should access your SSH keys! But
             | the unix model isn't very flexible for use cases like this
             | unless you segregate every application into its own user
             | namespace, which can come with other awkward consequences.
             | In something like iOS for example, when an application
             | needs a file and asks the user to pick one, the operating
             | system will actually open a privileged file picker with
             | elevated permissions, which can see all files, then only
             | delegate those files the user selects to the app. Otherwise
             | they simply can't see them. So there is a permission model
             | here, and a delegation of permissions, that requires a
             | significant amount of userspace plumbing. Things like
             | FlatPak are improving the situation here (e.g XDG Portal
             | APIs for file pickers, etc.) Userspace on general desktop
             | platforms is moving very, very slowly here.
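The privileged-file-picker pattern described above is essentially capability delegation: the broker has full filesystem access, and the app only ever receives an open descriptor for the one file the user chose. A toy sketch of the idea (not actual iOS or XDG Portal code):

```python
import os
import tempfile

def privileged_picker(user_choice: str) -> int:
    """Broker side: runs with full filesystem access and returns an open
    descriptor (a capability) for exactly the file the user picked."""
    return os.open(user_choice, os.O_RDONLY)

def sandboxed_app(fd: int) -> bytes:
    """App side: can read only what it was handed; it never sees a path."""
    with os.fdopen(fd, "rb") as f:
        return f.read()

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"user-selected document")

fd = privileged_picker(tmp.name)  # conceptually, the "Open File" dialog
assert sandboxed_app(fd) == b"user-selected document"
```

The app's code path never holds a filename or directory handle, so even a compromised app can only leak the files the user explicitly delegated to it.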
             | 
             | If you want my honest opinion as someone who did security
             | work and wrote exploits a lot: pretty much all of the
             | modern systems are fundamentally flawed at the design
             | level. They are composed of millions of lines of unsafe
             | code that is incredibly difficult to audit and fix. Linux,
             | the kernel, might actually be the worst offender in this
             | case because while systems like iOS continue to move things
             | out of the kernel (e.g. the iOS WiFi stack is now in
             | userspace as of iOS 16 and the modem is behind an IOMMU)
             | Linux doesn't really seem to be moving in this direction,
             | and it increases in scope and features rapidly, so you need
             | to be careful what you expose. It might actually be that
             | the Linux kernel is possibly the weakest part of Android
             | security these days for those reasons (just my
             | speculation.) I mean you can basically just throw shit at
             | the system call interface and find crashes, this is not a
             | joke. Windows seems to be middle of the pack in this
             | regard, but they do invest a lot in exploit mitigation and
             | security, in no small part due to the notoriety of Windows
             | insecurity in the XP days. Userspace is improving on all
             | systems, in my experience, but it's a shitload of work to
             | introduce new secure APIs and migrate things to use them,
             | etc.
             | 
             | Mobile platforms, both Android and iOS, are in general
             | significantly further ahead here in terms of "What kind of
             | blast radius can some application have if it is
             | compromised", largely because the userspace was co-designed
             | along with the security model. ChromeOS also qualifies IMO.
             | So just pick your poison, and it's probably a step up over
             | the average today. But they are still composed of the same
             | fundamental building blocks: lots of unsafe code and dated
             | APIs and assumptions. So there's an upper
             | limit here on what you can do, I think. But we can still do
             | a lot better even today.
             | 
             | If you want something more open in the mobile sector, then
             | the only one I would actually trust is probably GrapheneOS,
             | since its author (Daniel Micay) actually knows
             | what he's doing when it comes to security mitigation and
             | secure design. The FOSS world has a big problem IMO where
             | people just think "security" means enabling some compiler
             | flags and here's a dump of the source code, when that's
             | barely the starting point -- and outside of some of the
             | most scrutinized projects _in the entire world_, I would
             | say FOSS security is often very, very bad, and in my
             | experience there's no indication FOSS generally improves
             | security outside of those exceptional cases, but
             | people hate hearing it. I suspect Daniel would agree with
             | my assessment most of the fundamentals today are fatally
             | flawed (including Linux) but, it is what it is.
        
             | lrvick wrote:
             | Linux is a security shit show, but it is at least publicly
             | auditable, which is a prerequisite to forming reasonable
             | confidence in the security of software, or to rapidly
             | correcting mistakes found.
             | 
             | OpenBSD by contrast has dual auditing and a stellar
             | security reputation, but development is much slower and
             | compatibility is very low.
             | 
             | seL4 as an extreme is a micro-kernel with mathematically
             | provable security by design, but no workstation software
             | runs on it yet.
             | 
             | MacOS, iOS, and Windows are proprietary, so they are
             | dramatically worse off than Linux in security out of the
             | gate. No one who desires to maximize freedom, security,
             | and privacy should use them.
        
               | fsociety wrote:
               | I find it hard to believe that the Linux codebase being
               | auditable makes Linux more secure by default than MacOS,
               | iOS, and Windows. I doubt it is humanly feasible to fully
               | read and grok the several million LOC running within
               | Linux. I would, however, trust a default
               | MacOS/iOS/Windows system over a default Linux system. The
               | Linux community has a track record of being hostile to
               | the security community - for their own good reasons.
               | Whereas Apple and Microsoft pay teams to secure their OS
               | by default.
               | 
               | If you install something like grsecurity or use SELinux
               | policies, I could buy the argument. I have yet to see
               | these used in production though.
               | 
               | Also, seL4 is not mathematically proven to be secure; it
               | is formally verified, which means it does what its spec
               | says it does. That spec may still permit exploitable
               | behavior.
        
               | krn wrote:
               | > I would, however, trust a default MacOS/iOS/Windows
               | system over a default Linux system. The Linux community
               | has a track record of being hostile to the security
               | community - for their own good reasons. Whereas Apple and
               | Microsoft pay teams to secure their OS by default.
               | 
               | I think we can have the best of both worlds here: OS
               | distributions that are being maintained by paid teams of
               | security experts, and that can be audited by anybody.
               | 
               | What are the major ones? Android, Chromium OS, Red Hat
               | (Fedora, CentOS), and SUSE.
        
               | px43 wrote:
               | In the case of seL4, don't confuse formal verification
               | with security. The code matches the spec, and security
               | properties can be extracted very precisely, but the spec
               | might contain oversights or bugs that would allow an
               | attacker to perform unexpected behaviors.
               | 
               | If you define security as a "lack of exploitable bugs",
               | then security can never be proven, because it's
               | impossible to prove a negative. Also, many formally
               | verified systems have had critical bugs discovered, like
               | the KRACK attacks against WPA2's formally analyzed
               | handshake. The formal verification wasn't _wrong_, just
               | incomplete, because modeling complex systems is
               | inherently intractable.
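The gap described here, verified-against-a-spec versus actually secure, can be shown with a toy example (purely hypothetical, nothing to do with seL4's real spec): if a sort routine's spec only demands ordered output, an implementation that discards the data satisfies it perfectly.

```python
def satisfies_spec(output):
    # The (incomplete) spec: output must be in non-decreasing order.
    return all(a <= b for a, b in zip(output, output[1:]))

def broken_sort(xs):
    # Provably meets the spec above -- while losing every element.
    return []

# Both "verified" implementations pass; only one is useful.
assert satisfies_spec(broken_sort([3, 1, 2]))
assert satisfies_spec(sorted([3, 1, 2]))
```

A complete spec would also require the output to be a permutation of the input; the flaw is in what the spec forgot to say, not in the proof.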
               | 
               | The fact that seL4 doesn't even offer bug bounties should
               | be a huge red flag that this is still very much an
               | academic exercise, and should not be used in places where
               | security actually matters.
               | 
               | https://github.com/seL4/seL4/blob/master/SECURITY.md
        
               | marcosdumay wrote:
               | Besides spec bugs, the seL4 threat model is focused on
               | making sure components are kept isolated. It does not
               | deal with most of what we understand as attacks on a
               | workstation at all.
               | 
               | In fact, in a seL4 system, most of the vulnerabilities we
               | find on Linux wouldn't even be on the kernel, and their
               | verification logic can't test something that isn't there.
               | 
               | That said, the seL4 model does probably lead to much
               | better security than the Linux one. It's just not as good
               | as the OP's one-liner implies.
        
               | didericis wrote:
               | > The formal verification wasn't wrong, just incomplete,
               | because modeling complex systems is inherently an
               | intractable problem.
               | 
               | I'm not involved in this kind of research or low level
               | auditing, but I have some mathematical training and
               | fascination with the idea of formal verification.
               | 
               | I ran across this thing called "K-Framework" that seems
               | to have invested a lot in making formal semantics
               | approachable (to the extent that's possible). It's
               | striving to bridge that gap between academia and
               | practicality, and the creator seems to really "get" both
               | the academic and practical challenges of something like
               | that.
               | 
               | Here's a brief overview of one of the features that seems
               | most impressive/useful: https://youtu.be/x_xm69gd3fE
               | 
               | The clarity of Grigore's explanations and the quality of
               | what I've found here: https://kframework.org/ makes me
               | think K has a lot of potential, but again, this is not my
               | direct area of expertise, and I haven't been able to
               | justify a deep dive to judge beyond impressions.
               | 
               | You're correct in pointing out that complex systems are
               | inevitably difficult to verify, but I think stuff like K
               | could help provably minimize surface area a _lot_.
        
               | lrvick wrote:
               | > don't confuse formal verification with security
               | 
               | It sure makes auditing that code conforms to an expected
               | design a lot easier, and deviations from the expected
               | design are where most security bugs come from. This is a
               | fantastic design choice for a security-focused kernel.
               | 
               | I will grant that proving something was implemented as
               | designed does not rule out design flaws so, fair enough.
        
               | highwaylights wrote:
               | > MacOS, iOS, and Windows are proprietary so they are
               | dramatically worse off than Linux in security out of the
               | gate. No one should use these that desires to maximize
               | freedom, security and privacy.
               | 
               | Not sure how fair this is, even though I agree with you
               | with regards to Linux being auditable and the others not.
               | 
               | Windows and macOS these days have vendor-provided code
               | signing authorities that can be leveraged (and are by
               | default), which provides at least some protection against
               | malware at the macro level (in that the certificates can
               | be revoked if something nefarious is identified). This
               | doesn't exist at all in Linux, although third party
               | products are in the early stages.
               | 
               | Windows 11 and macOS have hardware-backed root-of-trust.
               | In Windows the root of trust is the TPM, on macOS it's
               | the T2 (Intel) or the chip package (ARM).
               | 
               | Any of these features could be compromised without your
               | knowing, but at least where these systems have a
               | controlling authority, you can draw some comfort from
               | knowing that once new malware is identified spreading on
               | pretty much any machine, it can be stopped quite rapidly
               | on all machines by revocation until the bug is patched.
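The signing-plus-revocation flow described above can be modeled in a few lines. This is a toy sketch: a bare HMAC key stands in for the vendor's real PKI, and all names are illustrative.

```python
import hashlib
import hmac

VENDOR_KEY = b"vendor-signing-key"  # stands in for the vendor's private key
REVOKED = set()                     # signatures pulled by the vendor

def sign(binary):
    return hmac.new(VENDOR_KEY, binary, hashlib.sha256).hexdigest()

def may_run(binary, signature):
    # The OS rejects revoked signatures first, then checks validity.
    if signature in REVOKED:
        return False
    return hmac.compare_digest(sign(binary), signature)

app = b"some-app-v1"
sig = sign(app)
assert may_run(app, sig)        # valid, signed binary runs
REVOKED.add(sig)                # vendor revokes after malware is identified
assert not may_run(app, sig)    # ...and it stops running everywhere
```

The macro-level benefit is exactly this last step: one revocation entry, distributed centrally, disables the malware on every machine that checks the list.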
        
               | snvzz wrote:
               | >seL4 as an extreme is a micro-kernel with mathematically
               | provable security by design, but no workstation software
               | runs on it yet.
               | 
               | With Genode, POSIX compatibility is at the point where
               | you can run a webkit2-derived browser natively. There is
               | also 3d acceleration.
        
             | pjmlp wrote:
             | Windows 10 and later run drivers and parts of the kernel
             | in their own hypervisor-isolated partitions
             | (virtualization-based security).
             | 
             | macOS has SIP.
             | 
             | GNU/Linux is still not there doing this out of the box.
        
               | ignoramous wrote:
               | I'm curious about your take on this.
               | 
               | Would you say macOS, iOS, and Windows are more
               | _trustable_ than FOSS OSes like Fuchsia or ChromeOS,
               | Android, or even Qubes OS and Tails?
               |
               | If not, where do ChromiumOS / Android still fall short
               | (keeping in mind the embedded nature of the latter)?
               | 
               | How long do you think before viable open firmware / open
               | hardware computing devices show up?
               | 
               | Thanks.
        
         | goodpoint wrote:
         | ...assuming you blindly trust Google. The same company that
         | sends, on average, 12 MB of telemetry a day from Android
         | devices.
        
           | mysterydip wrote:
           | Do you have a link for that? I'd love to send it to some
           | friends.
        
             | ignoramous wrote:
             | Not OP; see: https://news.ycombinator.com/item?id=26639261
        
         | jmole wrote:
         | The problem is that third-party OEMs don't have to run AOSP,
         | they can easily replace any and all code with malicious call-
         | home backdoors, and still pass CTS tests.
         | 
         | As far as I can tell, there is no meaningful protection in
         | place to prevent OEMs from poisoning the Android well (and the
         | Android brand), even without considering the black box firmware
         | running on wifi/BT/LTE/5G modems.
        
         | astrostl wrote:
         | > Android and ChromiumOS are likely the most trustable
         | computing platforms out there
         | 
         | Sweet, so we can trust that our personal machine is only
         | compromised by Google et al? XD
        
         | mindslight wrote:
         | Most of your bullet points are reinventions of standard
         | technology or incidental complexity.
         | 
         | Cryptographic verification of the boot chain with a hardware
         | root of trust is real. Heavily sandboxed userspace is real.
         | Everything else would seem to be a reimplementation of common
         | best practices (disk encryption), or a mitigation of a self-
         | created problem (there shouldn't be binary driver blobs running
         | on the main CPU to begin with).
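Boot-chain verification, mentioned above as one of the real protections, amounts to each stage pinning a digest of the next. A toy model (stage names and the flat pin list are illustrative; real chains use signatures anchored in a hardware root of trust, not a bare hash list):

```python
import hashlib

STAGES = [b"bootloader image", b"kernel image", b"system image"]

def digest(blob):
    return hashlib.sha256(blob).hexdigest()

# The root of trust pins the first digest; each stage pins the next.
PINNED = [digest(stage) for stage in STAGES]

def boot(images, pins):
    for image, pin in zip(images, pins):
        if digest(image) != pin:
            raise RuntimeError("verification failed: refusing to boot")
    return "booted"

assert boot(STAGES, PINNED) == "booted"

# A tampered kernel breaks the chain and the device refuses to boot.
tampered = [STAGES[0], b"evil kernel", STAGES[2]]
try:
    boot(tampered, PINNED)
    raise AssertionError("tampered image should not boot")
except RuntimeError:
    pass
```

The security of the whole chain reduces to the integrity of the first pin, which is why it lives in hardware.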
         | 
         | And from what I remember, a plain AOSP install seemed to still
         | phone home to Google to check for Internet connectivity and
         | whatnot. It's awfully hard to put my faith in an operating
         | system primarily developed by a surveillance company, as the
         | working assumptions are a drastic departure from individualist
         | computing. And trying to question those assumptions with
         | independent devs is often dismissed (for a particularly
         | striking example, see LineageOS/"Safetynet").
        
           | ignoramous wrote:
           | > _Everything else would seem to be a reimplementation of
           | common best practices..._
           | 
           | True, but those protections are enabled by default (on Pixels
           | at least). Users don't have to do anything here.
           | 
           | > And from what I remember, a plain AOSP install seemed to
           | still phone home to Google to check for Internet connectivity
           | and whatnot.
           | 
           | You're not wrong, but GrapheneOS and CalyxOS are valid
           | options, if you don't trust the ROM Pixel ships with. Even
           | with a custom ROM you're left trusting the OEM. It'd be nice
           | if we could have an open hardware / open firmware Android,
           | but it hasn't happened yet.
        
             | mindslight wrote:
             | Sure, but full disk encryption was also enabled on my Mom's
             | Ubuntu laptop 15 years ago, because I chose the correct
             | options when I set it up. What commercial vendors offer out
             | of the box has never been a good yardstick for talking
             | about security features, and it's only gotten worse with
             | the rise of the surveillance economy.
             | 
             | My fundamental problem with Graphene/Calyx is that I don't
             | trust the devs have enough bandwidth and resources to catch
             | all the vulnerabilities created upstream, especially with
             | the moving target created by rapid version churn. For
             | example, Android is _finally_ getting the ability to grant
             | apps scoped capabilities rather than blanket full access
             | permissions, which is actually coming from _upstream_ - the
             | Libre forks should have had these features a decade ago,
             | but for their limited resources.
             | 
             | Concretely, what discourages me from going Pixel is the
             | Qualcomm integrated baseband/application chipsets. I've
             | heard that Qualcomm has worked on segmenting the two with
             | memory isolation and whatnot, but their history plus the
             | closed design doesn't instill confidence. Yet again it's
             | the difference between the corporate perspective of
             | providing top-down relativist "security" rather than the
             | individualist stance of hardline securing the AP against
             | attacks from the BB.
             | 
             | Pragmatically, I know I should get over that and stop
             | letting the perfect be the enemy of the good (I'm currently
             | using a proprietary trash-Android my carrier sent me. The
             | early 4G shutdown obsoleted my previous Lineage/microG).
             | But every time I look at Pixels it seems there's so damn
             | many "current" models, none stand out as the best but
             | rather it's a continuum of expensive versus older ones
             | (destined to become e-waste even sooner due to the
             | shameless software churn). And so I punt.
        
               | sowbug wrote:
               | _Concretely, what discourages me from going Pixel is the
               | Qualcomm integrated baseband /application chipsets._
               | 
               | Google Pixel hasn't used Qualcomm chipsets since the
               | Pixel 5.
        
               | mindslight wrote:
               | Thank you! Not following so closely, I had been thinking
               | that "Tensor" was just a coprocessor based on the name.
        
         | drozycki wrote:
         | Hasn't this all been long true for iOS as well? There are
         | reasons to hate it, but a walled garden is safer in many ways
         | (as long as you trust Apple). You mention baseband - Android
         | hardware comes with max 3 years of baseband support, compared
         | to 7-9 years on iPhone. The story is similar when it comes to
         | stock OS support. So from my pov, iPhones can be a comparable
         | value (security and otherwise) to the best Android has to
         | offer, specifically because of their (usable to me and the next
         | guy to own my phone) 7-9 years of life, compared to 3 years max
         | with a Pixel. What am I missing here?
        
         | pontilanda wrote:
         | All that looks good on paper, but a lot of apps require full
         | disk access and can easily run in the background, so how
         | "trustable" can that really be in practice?
         | 
         | With iOS at least I know that apps really are sandboxed and
         | cannot access anything unless I grant permission. No app can
         | ever attempt to access my photos unless I explicitly pick a
         | photo or grant partial/total access. Even then it's read-only
         | or "write with confirmation, every time"
        
           | Grim-444 wrote:
           | Well, both of your complaints have already been addressed.
           | Android introduced the scoped storage system to curb abuse
           | of "full" disk access, and it also added the foreground
           | notification system, which forces a system notification to
           | be displayed whenever an app is doing work in the
           | background, so that you know about it.
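Scoped storage boils down to per-directory grants checked on every access. A toy model of the idea (hypothetical class and paths, not the actual Android API):

```python
import os.path

class ScopedStorage:
    """Toy model: an app holds grants for specific directories only."""

    def __init__(self, grants):
        self.grants = [os.path.abspath(g) for g in grants]

    def allowed(self, path):
        p = os.path.abspath(path)  # normalizes ".." traversal attempts
        return any(os.path.commonpath([p, g]) == g for g in self.grants)

app = ScopedStorage(grants=["/sdcard/DCIM"])
assert app.allowed("/sdcard/DCIM/photo.jpg")
assert not app.allowed("/sdcard/Documents/tax.pdf")
assert not app.allowed("/sdcard/DCIM/../Documents/tax.pdf")
```

Normalizing before the prefix check is the important detail: without it, a `..` in the path would escape the granted directory.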
        
             | drozycki wrote:
             | Right, but if the average real-world Android experience
             | lags behind say iOS in terms of security, then the point,
             | even if outdated, still serves to disprove the parent's
             | premise that AOSP is the most secure.
        
           | derkades wrote:
           | In recent years, I have not seen any app request full file
           | access permissions besides file managers
        
           | jacooper wrote:
           | On GrapheneOS you can choose specific storage scopes, even
           | if the app is requesting full user storage access.
           |
           | And you can deny the file access permission like any normal
           | permission; most modern apps request music, videos, or
           | photos, and rarely does an app request full file access.
        
         | closeparen wrote:
         | What is an example of a workload on a smartphone that doesn't
         | handle PII?
        
         | tete wrote:
         | > Android and ChromiumOS are likely the most trustable
         | computing platforms out there
         | 
         | I talked to a security researcher specializing on Android at a
         | conference and he didn't sound like he'd agree.
         | 
         | While I personally think ChromiumOS does a good job, a huge
         | problem is how liberally complexity is added. And complexity
         | is typically where security issues lurk. This has been seen
         | again and again.
         | 
         | It's also why I think projects such as OpenBSD do such a
         | great job. Their main focus seems to be reducing complexity
         | (which they are sometimes criticized for). A lot of the
         | security seems to come from the reduced attack surface you
         | get. The security mechanisms built on top, which are
         | typically easier to implement because of said simplicity,
         | are the next layer.
         | 
         | And I think OpenBSD has reached a sweet spot there, where
         | it's not some obscure research OS, but an OS that you can
         | install on your server or desktop, run actual workloads on,
         | heck, even play Stardew Valley or a shooter on, while keeping
         | the benefits in terms of security and simplicity that you'd
         | get from research OSs, Plan 9, etc.
         | 
         | So maybe not mainstream, but mainstream enough to actually
         | work with. There are sadly many projects that completely
         | ignore the reality around them, also because their goal is to
         | simply be research projects and nothing more. Then we have
         | those papers that hardly anyone ever looks at, on how in a
         | perfect world all those big security topics could be solved,
         | unless some big company comes along and puts them into some
         | milestone.
         | 
         | With ChromiumOS, the limitations you get seem similar to,
         | probably even more severe than, those you accept for
         | OpenBSD's flexibility. That's something many de-Google
         | projects struggle with as well. At the same time, the
         | complexity remains a whole lot bigger. Of course, the goals
         | and target groups are hugely different.
         | 
         | I think both Android and ChromiumOS used to put more emphasis
         | on simplicity, but gave it up at some point. I am not sure why,
         | but would assume that many decisions are simply company
         | decisions. After all the eventual goal is economic growth.
         | 
         | That locking down of mobile devices is not just to increase
         | security; it has the beneficial side effect of controlling
         | the platform. This might not even be directly intended by the
         | security-focused developers, but it is a side effect.
         | 
         | So "most trustable" in that scenario comes with "most gate-
         | keeping", "least ownership", etc., which we are kind of used to
         | on smartphones, tablets and Chromebooks. So I think comparing
         | it with other kinds of mainstream OSs isn't really leading to
         | much.
        
           | solarpunk wrote:
           | This page has stuck with me since I read it regarding
           | openbsd. It's a bit mean spirited, but I think openbsd mostly
           | benefits from its own obscurity. https://isopenbsdsecu.re/
           | 
           | But the nice parts of ChromeOS, as far as security
           | properties go, are the way it can be "power washed" between
           | usages, along with a desktop Linux base that has fewer
           | binaries installed than most. And things that are built in
           | are typically built atop Chrome's sandbox.
           | 
           | I used to joke with my friends who ran TAILS Linux that my
           | grandma with her Chromebook had the same threat model.
        
             | lcall wrote:
             | No other general-purpose OS that runs on my laptop has the
             | track record of OpenBSD: only 2 remotely exploitable
             | security holes in the default installation since ~1996. And
                | then the other mitigations let you carefully control
                | how much more attack surface to expose--those
                | mitigations dramatically reduce it. I appreciate the
                | general lack of privilege escalation 0-day exploits,
                | as seen over time.
        
               | thesuperbigfrog wrote:
               | Serious question: How big is OpenBSD as a target for
               | malware, exploits, viruses, etc. ?
               | 
               | OpenBSD's track record is impressive but is it a
               | significant target compared to Windows, MacOS, and Linux?
               | 
               | It is easy to say "only two bullets have ever penetrated
               | my armor" when hardly anyone is shooting at you. I do not
               | know if this is the case because I have never used
               | OpenBSD and I do not know how widely it is used (headless
               | servers, embedded devices, etc.).
        
           | charcircuit wrote:
           | OpenBSD doesn't have proper sandboxing. If you download
           | malware it can easily steal and upload your ssh keys.
        
             | jazzyjackson wrote:
             | > proper sandboxing
             | 
             | Do jails not fulfill this?
        
               | charcircuit wrote:
               | OpenBSD doesn't have jails. Jails take effort to set
               | up. It's much easier to just run the malware than to go
               | through the effort of making a jail for it.
        
         | 7e wrote:
         | And yet law enforcement seems to be able to open up Android
         | phones without issue, but has problems with iPhones. Is this
         | still the case?
        
           | sneak wrote:
           | Law enforcement has never had problems with iPhones. iPhones
           | in the default configuration back up all data to iCloud with
           | Apple keys, allowing Apple and the FBI to read all of the
           | photos and messages on a device at any time, without the
           | device.
           | 
           | The "Apple vs FBI" thing was a coordinated PR campaign
           | following the Snowden leaks to salvage Apple's reputation.
           | 
           | https://www.reuters.com/article/us-apple-fbi-icloud-
           | exclusiv...
        
             | iosystem wrote:
             | Unsure why you're ignoring the difference of unlocking an
             | iPhone to access the data on the phone compared to
             | accessing data on iCloud.
        
         | helloooooooo wrote:
         | Windows does all of those, in addition to fine grained access
         | controls. I would go so far as to say that the Chromium sandbox
         | implementation is better than on Android because of the ability
         | to completely de-privilege processes.
        
           | pl4nty wrote:
           | Windows struggles with feature adoption though. Win11 helped
           | with the TPM requirement and features on by default, but MSIX
           | apps are still underrepresented so userspace sandboxing is
           | weaker. Windows virtualization-based security is great
           | though, imo it's a significant advantage over Android
        
         | zython wrote:
         | What is your opinion on ios Lockdown mode ?
        
           | Dma54rhs wrote:
           | You can't verify it; you just take Apple's word for it.
           | It's not a fair comparison, in my opinion.
        
             | ylk wrote:
             | What do you want to verify exactly? Do you think Apple is
             | lying about what lockdown mode does? Why would they do
             | that? Could you at least say what your opinion is based on?
             | 
             | But it is possible to verify what it does, the same way
             | you would for an Android phone (i.e. not just looking at
             | the source code and hoping that it matches what's running
             | on your device). https://youtu.be/8mQAYeozl5I At 26:42 he
             | talks about lockdown mode. It would be a bit weird if he
             | lied about the impact lockdown mode has.
        
               | Godel_unicode wrote:
               | What if he's wrong? Computers do things their programmers
               | don't expect them to literally all the time. Security
               | bugs generally come from a mistaken assumption about how
               | something behaves.
               | 
               | He doesn't have to be a liar to be telling you untruths
               | about how it works.
        
               | ylk wrote:
               | What if everyone is wrong about the effectiveness of
               | Android's mitigations? Then iOS would be more secure.
               | 
               | Could you please make a concrete point?
        
               | Godel_unicode wrote:
               | You asked why we would need to verify things he said. I
               | explained it quite concretely. What part did you not
               | understand?
               | 
               | Edit: whether people are wrong about android security is
               | orthogonal and whataboutism
        
               | ylk wrote:
               | My android comment was taking yours, turning it around
               | and taking it to the extreme to illustrate a point.
               | 
               | And no, I never asked why we would need to verify the
               | security researcher's claims (but sure, you should).
               | 
               | 1. Dma54rhs says Apple's (!) claims supposedly can't be
               | verified and that you need to take Apple's word for it
               | 
               | 2. I ask why not, provide a link to a talk about iOS
               | security by a renowned security researcher as both an
               | example of how to verify Apple's claims (reverse engineer
               | iOS) and to lend some credence to the point that they are
               | likely to be true
               | 
               | 3. You talk about the researcher and/or programmers being
               | wrong by replying with an "orthogonal" comment containing
               | "whataboutism".
               | 
               | Edit: Could we please talk about the actual topic? Do you
               | or someone else know about instances where Apple lied
               | about mitigations like lockdown mode before? Maybe
               | there's a long history of it and I just don't know. Or is
               | there some other flaw in my logic?
               | 
               | There is always the argument about hidden bugdoors,
               | backdoored compilers or whatnot. But that's not
               | practical, by then you might as well stop using
               | technology.
               | 
               | If Apple can't be trusted then why can you trust google?
               | Or Qualcomm?
        
               | Godel_unicode wrote:
               | You're the one derailing from the actual topic, which was
               | broadly can we trust our devices and specifically can we
               | trust iOS, by muddying the water with what-about-android.
               | The question wasn't which we can trust more, the question
               | was whether and how much we can trust Apple.
               | 
               | You can't verify that iOS is doing what Apple says that
               | it's doing, because you can't read the code. You can't
               | trust that Apple perfectly understands their product,
               | because it's extremely complicated, and therefore you
               | can't just take their word for it. I'll state that the
               | check here, although it's painfully obvious, is that
               | exploits happen. Researcher opinions are fine, but facts
               | are better.
               | 
               | None of this is in any way contentious or new, it's the
               | exact debate about open-vs-closed that we've been having
               | since the beginning of software.
        
         | noyoudumbdolt wrote:
         | [flagged]
        
         | lumb63 wrote:
         | I can't discuss my former role in too much detail, but it has
         | convinced me that all the above is insufficient in a number of
         | very realistic threat models.
         | 
          | One issue is that software has vulnerabilities and bugs. I'm
          | not talking about the software that users run in sandboxed
          | environments. I'm talking about the sandboxed environments. I'm
         | talking about cryptography implementations. I'm talking about
         | the firmware running in the "trusted" hardware.
         | 
         | The other major issue is as you alluded to: the need to trust
         | vendors and hardware. Without protection and monitoring at the
         | physical level, the user has no way to verify the operation of
         | the giant stack of technology designed to "protect them".
         | Without the ability to verify operations, how is the user to
         | trust anything? Why do companies tell users to "trust them"
         | without any proof they are trustworthy?
         | 
         | This may seem like a minor point, but this is really the crux
         | of the issue. Building this giant house of cards on top of a
         | (potentially) untrustworthy hardware root of trust does not buy
         | anyone anything. Certainly it does not buy "security".
         | 
         | Large companies and nation states are the most likely
         | adversaries one wants to be wary of these days (e.g.
         | journalists, whistleblowers, etc.). What good does the
         | technology do them if the supply chain is compromised or
          | vendors are coerced to insert backdoors? These are the threats
          | that actually face people concerned about security, not whether
          | their executables are run in a sandboxed VM.
         | Great, you've stopped the adversary from inserting malicious
         | code into your device after purchase. Good thing for them, they
         | did it prior to or during manufacture.
         | 
         | The technology you alluded to above is mainly useful for
         | protecting company IP from end users, IMO. That's how I've
         | mainly seen it used, and the marketing of "security for the
         | user" is a gimmick to justify the process.
         | 
         | EDIT: I forgot to mention this entire class of security issue
         | since I am used to working on air gapped systems. I don't care
         | if you are operating in a sandboxed VM with a randomized MAC
         | over a VPN over Tor. If you're communicating with any other
         | device over the Internet, you have to trust every single other
         | machine along the way. And you shouldn't.
        
           | bigiain wrote:
            | While respecting your "I can't go into details" comment, I'm
            | curious to hear whatever you _can_ say about what sort of
            | adversary has the capabilities you describe, and whether you
            | think they use those in tightly targeted attacks only, or
            | compromise the entire hardware/software supply chain in a
            | way that lets them do "full take surveillance" with it.
           | 
           | If I'm not a terrorist/sex-trafficker/investigative-
           | journalist, can I reasonably ignore those threats even if I,
           | say, occasionally buy personal use quantities of illegal
            | drugs or down/upload copyrighted material? (With, I guess, the
           | caveat that I'd need to assume the dealer/torrent site at the
           | other end of those connections isn't under active
           | surveillance...)
        
           | bbarnett wrote:
           | _Why do companies tell users to "trust them" without any
           | proof they are trustworthy?_
           | 
           | You know the answer here, they are not to be trusted.
           | 
            | Samsung phones, for example, have a gpsd, which phones home
            | at random times. This runs as root, ignores VPN settings (so
            | no NetGuard for you!), and if it is just getting updated A-GPS
            | info, it sure seems to send a lot of data for that.
           | 
            | So no, they don't want a genuinely auditable device. Too many
           | questions, you see.
        
             | lumb63 wrote:
             | It is also worth mentioning, since I didn't realize it
             | until I worked in depth in the space: your CPU is not the
             | only place to execute code, or the only place with access
             | to hardware.
        
           | akiselev wrote:
           | _> The other major issue is as you alluded to: the need to
           | trust vendors and hardware. Without protection and monitoring
           | at the physical level, the user has no way to verify the
           | operation of the giant stack of technology designed to
           | "protect them". Without the ability to verify operations, how
           | is the user to trust anything? Why do companies tell users to
           | "trust them" without any proof they are trustworthy?_
           | 
           | At the end of the day you need to trust Qualcomm or MediaTek.
           | The Oracle of the hardware world, with more lawyers than
           | engineers, or... MediaTek.
           | 
           | That's a _NOPE_ for me.
        
             | drozycki wrote:
             | AFAIK there are no phones on the market with open-source
             | baseband firmware, so you have to trust one of Qualcomm,
             | Broadcom et al with access to all cellular communication.
             | Do you have a best of breed supplier you've vetted?
        
               | bigiain wrote:
               | And even if you _could_ trust the baseband on your
               | device, there's the problem that the cell tower is
               | running software you have no visibility of.
               | 
               | If I were the NSA, that's where I'd be focussing at least
               | some of the attention of the "exploit people's phones"
               | department. If you want cellular connectivity, the
                | cellular provider needs a real-time way to identify your
                | device and its location (at least down to the accuracy of
                | the nearest few cell towers).
               | 
                | (And once I had some capability there, the people running
                | the "most secure basebands/devices" would be the ones I
                | kept the closest eye on. I've heard my local intelligence
                | service is very interested in phone switch on/off
                | events, because they are a signal that someone might be
                | attempting to evade surveillance, and the rarity of
                | "normal people" switching their phone off (or
                | disconnecting from the cellular network) makes it
                | worthwhile collecting all that "metadata" so they can
                | search it for the "interesting" cases.)
        
       | SpeedilyDamage wrote:
       | As with anything, there needs to be some evidence to believe
       | something, and if there's evidence, you can follow that to figure
       | out if it's real or just anomalous.
       | 
       | Generally, it's a bad idea to believe things without evidence, so
       | I guess you can trust your computer isn't compromised the same
       | way you can trust no unicorns exist; there's not any credible
       | evidence to suggest it.
        
       | thinking001001 wrote:
       | You don't. Welcome to industry!
       | 
       | "There is no way to really know if a computer is compromised" -
       | Joanna Rutkowska
        
       | Alifatisk wrote:
        | I watch the traffic being sent from my devices; there are plenty
        | of good tools for that.
       | 
       | If I see any anomalies, then that's a hint.
       | 
       | Note, you should not fully rely on this but rather as a starting
       | point.
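One cheap way to get that starting point is to snapshot which remote endpoints your machine currently talks to and eyeball anything unfamiliar. A minimal sketch, assuming a Unix-like host with `lsof` on the PATH (the parsing and the demo output are illustrative; dedicated tools such as Wireshark or Little Snitch see far more):

```python
import subprocess

def parse_remote_endpoints(lsof_output):
    """Extract the remote host:port half of every `local->remote`
    connection field in `lsof -i -nP` output."""
    remotes = set()
    for line in lsof_output.splitlines():
        for field in line.split():
            if "->" in field:
                remotes.add(field.split("->", 1)[1])
    return remotes

def current_remotes():
    """Snapshot the remote endpoints of all currently open sockets."""
    out = subprocess.run(["lsof", "-i", "-nP"],
                        capture_output=True, text=True)
    return parse_remote_endpoints(out.stdout)

if __name__ == "__main__":
    try:
        for endpoint in sorted(current_remotes()):
            print(endpoint)
    except FileNotFoundError:
        print("lsof not found on this system")
```

Diffing successive snapshots against a saved baseline is what turns this listing into the "anomaly hint" described above.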
        
       | checkyoursudo wrote:
       | Bios? I'm not sure I can ever be certain.
       | 
       | For the rest, I run a pretty esoteric setup (compiled-from-source
       | custom configured linux kernel with no binary blobs; all software
        | compiled from source, with no exceptions; aggressive, burdensome-
        | to-me privilege separation; chroots and VMs for various degrees
       | of potential threat; etc). I have no illusions that it is
       | perfectly safe. What I am comfortable with is that, in order to
       | compromise me, you would have to know a lot about what I run and
       | how I run it. I believe that I would have to be nearly
       | individually targeted to extract any useful data from my machine,
       | and that I am not nearly a valuable enough target for anyone to
       | do so. I think you would have to be a state-level actor or
       | someone with similar capabilities to compromise me, and none of
       | them would care enough.
       | 
       | My security paranoia stems from extremely sensitive work I did as
       | a lawyer long ago, but I am now so used to it that I carry on as
       | a scientist, even though my current work is not nearly so
       | sensitive (if at all). I give up a lot of convenience and some
       | functionality to operate this way, so it is not for everyone. I
       | am not an adversary to anyone, so outside state actors surely
       | don't care about me. And my own government can just get a warrant
       | and knock on my door, so they don't care about me either.
       | 
       | Embedded device firmware besides the bios is probably my main
       | vulnerability, but if you're successfully getting at me through
       | my hard drives or mouse, then I was surely an incidental rather
       | than actual target.
        
         | ulimn wrote:
         | I'm genuinely curious: Do you check/audit the code you compile
         | and run on your machine? Going with the assumption of "no": How
         | is it then different than downloading a prebuilt version from
         | an official source?
        
           | luma wrote:
           | It feels like a cargo cult approach to the problem. "I'm safe
           | because I compile from source" is an absurd statement when a
           | million LoC is involved.
        
             | kube-system wrote:
             | The linux kernel is much more than a million LoC. Closer to
             | 30 million.
        
               | amarshall wrote:
               | Much of that is drivers that may be disabled if not
               | needed for current hardware, narrowing the audit scope.
        
               | ghostpepper wrote:
                | If anything I think this underscores the parent comment:
                | open source is not inherently more secure than closed; it
                | just adds another potential avenue (source code audit) to
                | ensure security.
               | 
               | If nobody actually audits the source, and the closed-
               | source binary has had other types of testing done on it,
               | it's likely that the closed source binary will be more
               | secure.
        
               | kube-system wrote:
               | Yes, my comment was in support of its parent. If reading
               | a million lines is hard, reading ~30 million is harder.
        
           | checkyoursudo wrote:
           | I should say, I will run binaries on VMs and feel very little
           | threat from doing so. The "with no exceptions" referred to
           | the main host OS. I should have been more clear about that.
           | 
            | To answer your questions: oh hell no, I definitely do not
            | audit source code myself (though I have, rarely). I do it this
           | way, and it is different enough for me, because someone could
           | audit the source in theory. If someone did audit and found a
           | security problem, then I could check to see if my source was
           | also compromised. If I install binaries, then I might not
           | ever be able to know if my binary was compromised. Maybe
           | someday if reproducible builds are guaranteed to be bit-
           | perfect, then I would use binaries from reputable sources,
           | but that would only happen in the case where third parties
           | are compiling from source and affirming the reproduction. In
           | that case, why not just compile it myself?
           | 
           | Developers who publish compromised source are going to get
           | burned. Developers who publish compromised binaries are going
           | to say, "omg we must have been compromised by someone else."
           | Obviously it is possible for third-parties to compromise
           | source, but I'll go with what I see as the lesser threat.
           | 
           | If the cost of compiling was high, then that might make a
           | difference. For me, the cost is negligible, which makes it a
           | no-brainer for me.
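The bit-perfect reproduction check mentioned above is just a digest comparison. A sketch in shell, with file names chosen by the caller (nothing here is specific to any particular project):

```shell
# verify_reproducible VENDOR_ARTIFACT LOCALLY_BUILT_ARTIFACT
# Prints "match" when the two files are bit-for-bit identical,
# "mismatch" otherwise. With a truly reproducible build, the vendor
# binary and your own build must hash identically.
verify_reproducible() {
    theirs=$(sha256sum "$1" | cut -d' ' -f1)
    mine=$(sha256sum "$2" | cut -d' ' -f1)
    if [ "$theirs" = "$mine" ]; then
        echo "match"
    else
        echo "mismatch"
    fi
}
```

A third party affirming a reproduction is doing exactly this comparison, which is why, as the comment says, you may as well compile it yourself when compilation is cheap.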
        
           | soheil wrote:
            | I think for hardware-level code and things like the BIOS, the
            | only way is to trust the manufacturer. But if the
            | manufacturer is not large enough to have fully vetted and
            | trusted vendors, then you're back to square one. So I think
            | only in this sense does it say something about the high
            | degree of security of devices made by Apple.
        
         | ultra_nick wrote:
         | QubesOS may be less cumbersome if it works for your use case.
        
         | M5x7wI3CmbEem10 wrote:
         | How does one achieve such security?
         | 
         | Are there guides you found helpful?
        
       | treebeard901 wrote:
       | You should assume all devices are compromised
        
         | wadayano wrote:
         | *compromisable
        
           | tete wrote:
            | I don't think so. While I am not sure what "devices" means
            | here, it's common practice, for example, to assume your root
            | password is always compromised. The consequence is that you
            | don't allow remote, password-authenticated root logins.
            | 
            | On a similar note, services and networks should be treated as
            | compromised as well, meaning you must use encryption and
            | authentication, and in general make sure to limit attack
            | surface.
            | 
            | All of that boils down to not relying on services, users,
            | etc. to refrain from accessing personal information they are
            | not supposed to access.
            | 
            | After all, the problem with things like ransomware is exactly
            | that this isn't assumed.
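The "never allow remote, password-authenticated root" consequence above is typically expressed in a few sshd_config lines. This is one common hardening fragment, not the only valid one:

```
# /etc/ssh/sshd_config -- written as if the root password is already known
PermitRootLogin no              # or prohibit-password for key-only root
PasswordAuthentication no       # keys only; a stolen password is useless
KbdInteractiveAuthentication no # no challenge-response fallback either
```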
        
         | baobabKoodaa wrote:
         | Not helpful
        
           | numpad0 wrote:
            | Truths don't have to be helpful.
        
           | twaw wrote:
            | Why not? It's still possible to communicate securely using
            | compromised devices and networks.
        
             | sweetjuly wrote:
             | Could you expand on this? How would I securely communicate
             | from a device that, say, has a kernel level implant? This
             | is one of those cases where SGX/TrustZone would be
             | immensely helpful but nobody has built a messenger that
             | actually somehow fully lives in an enclave.
             | 
             | If you assume every device you use is compromised, how can
             | you possibly use any encryption?
        
       | h2odragon wrote:
       | You really can't, anymore. You can watch traffic and hope that
       | anything nasty isn't communicating with the outside world, but
       | then there's all sorts of side channels that you may not know to
       | watch.
       | 
       | At some point you just have to admit there's limits to privacy
        | and work with them. Your paper journal could be stolen and read /
        | rewritten too, y'know? It's not a new problem, it's just in a new
       | context.
        
       | tucnak wrote:
       | Surprised not to see a mention of Talos II system based on IBM
       | POWER9 technology that is open spec and otherwise a very
       | competent build with fully open hardware FPGA mainboard and stuff
       | like physical trip jumper protection, and potential for
       | customised security measures via the BMC, Arctic Tern, et cetera.
       | IBM is notoriously good at virtualisation, and POWER9 is very
        | competent for machine learning workloads; the 2U and 4U systems
        | they offer can go up to something ridiculous like 176 threads in a
        | two-socket configuration, and there are plenty of lanes. You can
       | reprogram the firmware, too; it's all out there in the open and
       | you normally wouldn't need special hardware.
       | 
        | You can get one starting at $5,500.
       | https://www.raptorcs.com/content/TLSDS3/intro.html
        
         | jart wrote:
         | This is like the third time I'm hearing about these raptors in
         | the past month and I want one. I want the $10,000 one. Your
         | comment is the most constructive one in this thread, because it
         | sounds like the solution for all the concerns expressed above
         | has finally arrived. Who here is willing to put their money
         | where their values are?
        
       | breck wrote:
       | "Assume breach" was the phrase they taught us at Microsoft (at
       | least in 2016). I assume everything is compromised. So I make
       | public and distribute/decentralize as much as possible.
       | 
       | I #BuildInPublic as much as possible on GitHub and GitLab and
       | dedicate everything to public domain (http://pledge.pub/).
       | 
       | I have a number of computers and can be up and running on a new
       | Macbook in under an hour.
       | 
       | I run multiple mirrored web sites.
       | 
       | I distribute crypto keys across ledgers and safety deposit boxes
       | in multiple states.
       | 
       | Most importantly: I don't pay for insurance (except for mandated
       | auto and homeowners). Instead, everyday I go out there and try to
       | deliver as much good to as many people as possible, knowing that
       | the best insurance when bad luck strikes isn't some check from
       | some corporation, but the helping hands from your fellow
       | neighbors.
        
         | tete wrote:
         | > "Assume breach" was the phrase they taught us at Microsoft
         | (at least in 2016)
         | 
          | The security(7) man pages on FreeBSD and DragonFly, which I
          | think were originally written by Matt Dillon, also tell you to
          | assume breach, for example for the root password, which is why
          | you shouldn't allow password-based logins over SSH, etc.
         | 
          | This is the earliest I could find, and it already advises
          | assuming breach, in 1998.
         | 
         | https://github.com/freebsd/freebsd-src/commit/f063d76ae36ca4...
         | 
         | Does anyone have an idea on how to see the file and its history
         | from where it was moved? I checked 4.4 BSD, because the
          | copyright mentions Berkeley, but I failed to find anything in
         | the man1 directory.
        
       | danieldk wrote:
       | I only have limited trust. Between 3D printing slicers from
       | Chinese companies, many packages from PyPI and Rust crates, there
       | is always a danger that something is compromised somewhere.
       | 
       | I try to limit attack surface in the following ways:
       | 
        | - I only use M1 Macs as desktops. This reduces attack surface in
        | various ways. M1 Macs do not have anything like UEFI firmware; it
        | all starts from the iBoot ROM and the whole chain is verified
        | with signatures. The OS is on a sealed system volume that is
        | read-only and signed. Altogether, this limits firmware/OS attacks.
       | 
       | - I use a U2F key and/or the Secure Enclave of the Mac for
       | credentials (SSH keys, 2FA). They are set up to require user
       | confirmation.
       | 
       | - When possible, I will install applications from the Mac App
       | Store, since they are sandboxed by default.
       | 
       | - I use separate work and private Macs.
       | 
       | - I clean and factory restore my Macs every few months.
       | 
       | - I use some tools like Knock Knock to see if there is anything
       | suspicious.
       | 
       | Compromise is obviously possible, but I try to push it into
       | 'mostly state actor' territory, because I am not interesting to
       | most state actors.
        
       | bo1024 wrote:
       | The Librem 14 has a neutered Intel chip (no ME) among other
       | things. My favorite privacy/freedom-respecting laptop.
       | https://shop.puri.sm/shop/librem-14/
        
         | mark_l_watson wrote:
         | That looks nice enough. Is it custom built, or a 3rd party
         | laptop that is modified?
        
           | lrvick wrote:
           | Custom laptop hardware made in China then ME neutered and
           | flashed with open firmware in the US.
        
       | INTPenis wrote:
       | I have several layers of security, including an infosec mindset
       | that comes naturally, but at the end of the day I don't really
       | know. I have faith that if I were to be infected statistically it
       | would be by some malware that would give itself away by mining
       | crypto or doing something else very loud and disruptive.
       | 
       | Fun story but my laptop was actually hacked remotely once,
       | without me knowing.
       | 
       | It was almost 20 years ago, some would call me a script kiddie.
       | Just trying to be bad ass, trying to live the movie Hackers. Had
       | a stolen laptop running FreeBSD, with a wicked bootsplash just
       | like the kids in the movie.
       | 
       | So you can imagine I was moving with the wrong crowds online,
       | having little defacing wars with other groups and shit like that.
       | Caught the wrong kind of attention.
       | 
       | I say that infosec comes naturally to me now but pobody's nerfect
       | and back then I had re-used a password in a weakly encrypted
       | service database, someone hacked this service, found my password,
       | found my ssh logins to the servers, and traced backwards to my
       | laptop.
       | 
       | I don't remember the details but somehow working back from one
       | server, perhaps to another jumpserver, they were able to get the
       | IP for my laptop and actually login to it.
       | 
       | Fortunately for me they didn't do anything but gather data, they
       | posted this on a wall of shame saying "another hacker down". I
        | say fortunately for me because I had thousands of customers' data
        | on that laptop, including CC#s for the business I was running at
       | the time. They missed all this, and the very next day I
       | reinstalled my laptop and reset all passwords on pure
       | coincidence. I had no idea I had been hacked, I just felt like
       | reinstalling for some other reason.
       | 
       | Found their wall of shame posting later and felt very much
       | ashamed.
       | 
        | This thread has inspired me to set up a tripwire for my
       | workstation. It's something I used to use many years ago but I
       | think it's a good setup to have some sort of alerting if files
       | start changing.
        
         | lloydatkinson wrote:
          | > This thread has inspired me to set up a tripwire for my
         | workstation. It's something I used to use many years ago but I
         | think it's a good setup to have some sort of alerting if files
         | start changing.
         | 
          | Can you explain what you set up?
        
           | INTPenis wrote:
           | I'm looking at current options, this[1] for example is
           | packaged for Fedora, which is my daily driver.
           | 
           | But then I got to thinking, if I'm going to do a clean Fedora
           | install for the tripwire (it's best practice) I might as well
           | try Fedora Silverblue[2]. Silverblue is an immutable system
           | so it kinda makes a tripwire less useful because no one can
           | change any system files. Only files in your home directory
           | and /etc can be modified statefully.
           | 
           | 1. https://github.com/Tripwire/tripwire-open-source/
           | 
           | 2. https://silverblue.fedoraproject.org/
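The core of a tripwire is small enough to sketch with the Python standard library: hash everything under a watched tree, persist the digests, and diff on the next run. Paths here are illustrative, and unlike real tools (Tripwire, AIDE) this sketch does nothing to protect the manifest itself from tampering:

```python
import hashlib
import os

def snapshot(root):
    """Map every file path under `root` to its SHA-256 hex digest."""
    digests = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    digests[path] = hashlib.sha256(f.read()).hexdigest()
            except OSError:
                pass  # unreadable file; a real tool would log this
    return digests

def diff_snapshots(old, new):
    """Report files added, removed, or modified between two snapshots."""
    return {
        "added": sorted(set(new) - set(old)),
        "removed": sorted(set(old) - set(new)),
        "modified": sorted(p for p in set(old) & set(new)
                           if old[p] != new[p]),
    }
```

On an immutable system like Silverblue the watched tree shrinks to the stateful parts (the home directory and /etc), which is the point made above.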
        
       | jnurmine wrote:
       | For my part, after considering this very question in the past,
       | the answer is that the question is wrong.
       | 
       | The question is: is there some reason to trust, and the answer
       | is: no.
       | 
       | In my opinion, any and all general computing devices sold to the
       | mass consumer market are already compromised in some shape or
       | form as they roll out from the factories -- otherwise such things
       | would simply not be sold in large quantities.
        
       | mark_l_watson wrote:
       | I actually don't really trust my Linux and macOS laptops. I put
       | no sensitive information in them, just what I need to write
       | software or build models.
       | 
       | I do trust iOS and iPadOS in Lockdown Mode, and I avoid
       | installing apps, usually preferring web apps.
       | 
       | I have a Chromebook and I also trust that.
       | 
       | In all cases, I don't wait to install available system updates -
       | that might not be the best strategy, but that is how I do it.
        
       | devmor wrote:
        | I don't! In fact I assume it has been, to some extent, and that I
        | would be unable to detect this.
       | 
       | Going from that assumption, I take care to keep encrypted backups
       | of all of my important files both locally on another machine and
       | remotely.
       | 
       | I also use two-factor authentication wherever possible, because I
       | find it unlikely that the same attacker would gain access to both
       | my PC and phone.
       | 
       | Additionally, I have a second phone with no SIM card that I use
       | for some TOTP 2 factor accounts that I wish to remain especially
       | secure.
       | 
        | Operating on the assumption that you have already been
        | compromised allows you to prepare for the worst, should you
        | truly be.
        
       | doubled112 wrote:
        | I don't know, really. I have a ton of "personal machines" when it
        | gets right down to it, but I'll think client-wise.
       | 
       | I distro hop chronically on most of my machines. Sometimes
       | multiple OS reinstalls across machines per week. Some installs
       | have lasted a few months but it's rare.
       | 
       | I try to stick to official repos when I do reinstall, so I'm
       | outsourcing that trust to the distro maintainers.
       | 
       | If it's on the disk, it's gone except for a few important files I
       | keep in a self-hosted Nextcloud sync folder.
       | 
       | I use LUKS encryption to ensure leaving the laptop on the bus is
       | a non-event. If it was ever in somebody's possession for very
       | long (border, police, lost and found) I'd just put it in the
       | garage and never touch it again.
       | 
       | Firmware malware is pretty uncommon, still, so I'm just hoping
       | for the best there.
        
       | codetrotter wrote:
        | No one has drained my crypto from my wallets yet.
       | 
       | So either my personal machine is not compromised, or they think
       | the amount of crypto in the wallets is too low.
       | 
        | Joke's on them though, 'cause I am moving my crypto to a hardware
        | wallet eventually.
        
         | tluyben2 wrote:
         | Quite an interesting honeypot really.
        
           | mimimi31 wrote:
           | More like a canary I think.
        
         | progval wrote:
         | Joke's on you, you just told them they should hurry up before
         | you do ;)
        
         | pcthrowaway wrote:
         | Of course, a nation state is unlikely to 'tip their hand' so to
         | speak.
         | 
         | If the U.S. has backdoors on every PC, they're not going to
         | bother draining the wallets of "small fish"; they need to keep
         | these things secret so they can go after terrorists
        
       | lifthrasiir wrote:
        | I'm reasonably sure that my personal machine is _less_
        | compromised than the average, but I can't and will never be able
        | to ensure that it is _not_ compromised, because I have no way to
        | know everything the machine is trying to do. This remains true
        | even with entirely free and directly inspectable hardware; you
        | simply don't have the knowledge and time to verify everything.
        | Just keep a reasonable amount of precaution and skepticism.
        
       ___________________________________________________________________
       (page generated 2023-01-15 23:00 UTC)