[HN Gopher] Security Architecture Anti-Patterns
       ___________________________________________________________________
        
       Security Architecture Anti-Patterns
        
       Author : napolux
       Score  : 127 points
       Date   : 2020-01-15 14:48 UTC (8 hours ago)
        
 (HTM) web link (www.ncsc.gov.uk)
 (TXT) w3m dump (www.ncsc.gov.uk)
        
       | closeparen wrote:
       | This is _amazingly_ concrete and understandable from a technical
       | perspective for a government security document. Where can I find
       | more like this?
       | 
       | Everything I've seen in ISO security standards, for example, is
       | written at an abstract theoretical level about the design of
       | security bureaucracy rather than the design of actual systems.
       | 
       | One bone to pick: basically all tech companies expect you to be
       | oncall for your services via your laptop. They're not paying
       | anybody to sit in the office overnight, and commuting in when you
        | get paged will seriously delay mitigation. Is "browsing down"
       | even possible under those circumstances?
        
         | naravara wrote:
          | At that high a level, getting too granular about actual
          | systems just ends up with people throwing your standards out
          | because their special snowflake of a use case cannot possibly
          | work under them.
          | 
          | The reason it ends up focusing on the bureaucracy is that they
          | hope that, if you get the bureaucratic part right, the
          | organization will have the relevant expertise in-house to make
          | informed decisions about risk, the minimization and mitigation
          | of which is really the goal of the security function.
        
         | adamlett wrote:
         | _Is "browsing down" even possible under those circumstances?_
         | 
         | From TFA:
         | 
         |  _There are many ways in which you can build a browse-down
         | approach. You could use a virtual machine on the administrative
         | device to perform any activities on less trusted systems. Or
         | you could browse-down to a remote machine over a remote desktop
         | or shell protocol. The idea is that if the dirty (less trusted)
         | environment gets compromised, then it's not 'underneath' the
         | clean environment in the processing stack, and the malware
         | operator would have their work cut out to get access to your
         | clean environment._
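As a concrete sketch of the remote-shell variant of browse-down (hostname and account here are hypothetical, not from the article): the session goes from the clean admin device down into the less-trusted box, with nothing forwarded back up.

```shell
# Browse-down over SSH: connect *from* the clean admin device *to*
# the dirty (less trusted) environment, never the reverse. Agent and
# X11 forwarding are disabled so a compromised dirty box cannot use
# the session to reach back up into the clean environment.
ssh -o ForwardAgent=no \
    -o ForwardX11=no \
    admin@dirty.example.internal   # hypothetical less-trusted host
```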
        
         | _wldu wrote:
         | I agree, it's a great doc. Some more concrete examples.
         | 
         | For number 1, administering a Windows Active Directory domain
         | controller from a desktop that is also used to browse the
         | public Internet and check email.
         | 
         | For number 6, networking groups use this a lot as the reason to
         | not patch routers.
        
         | [deleted]
        
         | lmkg wrote:
          | > _Is "browsing down" even possible under those
          | circumstances?_
         | 
         | Not a security expert, but based on their explanation of
          | "browsing down," I _think_ it's possible if the laptop is
         | sufficiently locked-down. The issue isn't fundamentally with
         | the management device being remote, it's being less-trusted. In
         | the limit case, you could have separate management-only laptops
         | that get passed around to the on-duty employee.
        
         | [deleted]
        
         | FooHentai wrote:
         | >Is "browsing down" even possible under those circumstances?
         | 
         | Seems like it could be done by having a mobile workstation that
         | doesn't read email or browse the web, just acts as a secure
         | 'satellite' administration device that does little more than
         | VPN back into the administrative network. From there, you jump
         | off to a terminal server if you need to browse or email.
         | 
         | The termination of that admin VPN would probably need to be a
         | distinct endpoint from the general VPN access concentration,
         | and have additional security/authentication measures in place.
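One way to realise that distinct admin endpoint (a sketch under assumed names and addresses; WireGuard is shown only as an example, the comment doesn't prescribe a VPN technology): the satellite laptop's tunnel config admits only the dedicated admin concentrator and the admin subnet.

```ini
# Hypothetical WireGuard config on the 'satellite' admin laptop.
[Interface]
PrivateKey = <laptop-private-key>
Address    = 10.66.0.2/32

[Peer]
PublicKey  = <admin-concentrator-public-key>
# Distinct endpoint from the general-access VPN concentrator:
Endpoint   = admin-vpn.example.internal:51820
# Route only the administrative network through the tunnel:
AllowedIPs = 10.66.0.0/24
```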
        
         | potatoz2 wrote:
          | Ideally you'd do as little browsing and email reading as
          | possible on your work laptop, and sandbox whatever is left.
         | 
         | Something like Qubes OS (or maybe manually using containers or
         | virtual machines) could be an option. Running snaps and
         | flatpaks also ensures some level of sandboxing if I'm not
         | mistaken. Using a separate user for riskier activities is also
         | worth thinking about.
         | 
         | I think it's also true that all OSes are moving towards more
         | sandboxing by default (permission to read files, permission to
         | start at runtime, admin access, etc.) so it's less of a risk
         | than it used to be.
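A rough approximation of the disposable-VM idea on an ordinary Linux laptop (a sketch, not an endorsement of any particular tool; the image and flags are illustrative): run the risky activity in a throwaway container that drops all capabilities and is destroyed on exit.

```shell
# Disposable sandbox for risky activity: no host filesystem mounts,
# all Linux capabilities dropped, no privilege escalation inside,
# and the container is deleted on exit (--rm).
docker run --rm -it \
    --cap-drop=ALL \
    --security-opt no-new-privileges \
    debian:stable bash
```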
        
       | motohagiography wrote:
       | Great description. How do you get security architecture into the
       | design phase of a system when you are doing dynamic and iterative
       | product development?
        
         | munchbunny wrote:
         | They are only mutually exclusive if your business and product
         | management teams deprioritize security. In my experience, the
         | typical reason that security gets neglected (as opposed to just
         | making reasonable trade-offs) is that management and product
         | management both care too much about just shipping shiny things
         | and don't care enough about doing right by the end user. I've
         | seen better and worse teams. Most teams fall into a category of
         | "you're lucky you're not big enough to be a target."
         | 
         | General best practices I can think of, in broad organization
         | level strokes:
         | 
         | 1. Make sure security is implemented at the dev ops layer
         | through practices such as logged just-in-time access to
         | production systems, secret vaults for service keys and
         | certificates, airgapped machines for handling secret keys, etc.
         | 
         | 2. Make sure security best practices are implemented by default
          | into your APIs (CORS, TLS 1.3, whitelist-based firewalls
         | between services that shouldn't need to talk to each other,
         | etc.) and make it transparent to the API caller, at least when
         | it's your own services talking to your own services.
         | 
         | 3. Make security an element of design and code reviews. Square,
         | for example, did this by having subject matter experts advise
         | teams on security design when projects were still in the
         | ideation/design phase.
         | 
         | Ultimately, security costs a non-trivial amount of time, and it
         | requires training your developers to be able to reason about
         | security.
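To make point 2 concrete with one small, testable piece (a sketch using Python's standard library, assuming Python 3.7+; the helper name is illustrative, not from the comment): give every internal service a server-side TLS context that simply refuses anything below TLS 1.3.

```python
import ssl

def make_server_context() -> ssl.SSLContext:
    """Server-side TLS context that rejects anything below TLS 1.3."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # Secure-by-default minimum for service-to-service traffic.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3
    return ctx

# In real use you'd then load the service's certificate and key, e.g.:
# ctx.load_cert_chain("service.crt", "service.key")  # hypothetical paths
```

Baking this into a shared helper keeps the policy transparent to callers: services talking to services get the modern minimum without each team opting in.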
        
       | tptacek wrote:
       | This is pretty unhelpful; the case I would make is that it's
       | providing security largely by defining the problem away. For
       | instance: it's usually unrealistic to require that all
       | administration happen through clean-room systems that don't ever
       | browse the web.
       | 
       | The real-world practice of security is in large part the
       | deployment of risky systems with mitigations in place for the
       | likely attacks that will target them. So, for instance, getting
       | everyone to talk to the admin console on a purpose-built
       | Chromebook with no Internet access is probably not a realistic
       | option, but getting every system with admin console access MDM'd
       | and requiring access to admin consoles to route through an IdP
       | like Okta to enforce 2FA is much more realistic, and thus likely
       | to happen.
       | 
       | The patterns in here that aren't unrealistic are pretty banal. I
       | don't doubt that UK NCSC sees systems designed to be unpatchable,
       | but modern engineering norms (Docker, blue/green, staging/cert
       | environments) --- norms that have really nothing to do with
       | security and are common to pretty much every serious engineering
       | shop --- address that directly anyways.
       | 
       | Other patterns don't really make sense; for instance: you should
       | design to make your systems patchable (sure, that's again a basic
       | engineering requirement anyways), but also make sure your dev and
       | staging environments aren't continuously available. Why? Those
       | are conflicting requirements.
        
         | DaniloDias wrote:
         | I agree.
         | 
          | This is more a list of anecdotes about things that are
          | definitely bad, but they're not representative of the kinds
          | of common mistakes I would call out.
        
         | amanzi wrote:
         | I respectfully disagree. I have seen many of these antipatterns
         | in production in many medium & large size orgs, and I think the
         | six scenarios presented in this doc are more common than you
         | think.
         | 
         | The "browse-up" scenario is extremely common because
         | engineers/administrators usually prefer to remote directly onto
          | the systems they're working on from their main machine rather
         | than endure the inconvenience of needing to securely connect to
         | another host first. Many of these admins/engineers would think
         | it's inconceivable for their machines to be vulnerable but have
         | no issues downloading dev tools, libraries and dependencies
         | onto their machines from third party & untrusted sources (e.g.
         | Github, NPM, etc).
         | 
          | "Docker, blue/green, staging/cert environments" - believe it or
         | not, these are seen as emerging trends in many orgs rather than
         | the norm as you suggest here.
         | 
         | And regarding designing systems to be patchable, you say:
         | "sure, that's again a basic engineering requirement anyways",
         | but again I'd counter that I've come across many systems that
         | haven't been patched in months or years because it's deemed too
         | hard. Another similar issue I've come across is where an org's
         | DR processes have not been properly tested because it's too
         | hard to failover without causing significant disruption. Both
         | can easily be designed for early on, but for legacy systems
         | that were implemented without this foresight it still remains
         | an issue.
        
           | EvanAnderson wrote:
           | The way that I'm reading the "browse-up" scenario, however,
            | isn't how you're describing it. Admins wouldn't "securely
            | connect to another host" -- they'd have to use a trusted and
            | known-clean device to perform all their administrative
            | activities. Connecting to that device from another host (i.e.
           | using it as a "jump box") seems to be specifically disclaimed
           | as an "anti-pattern".
        
         | ozim wrote:
          | If we are talking engineering, there are OT systems that are
          | not patchable. You cannot blue/green docker-deploy a machine
          | that is running an industrial system. It is all nice and easy
          | if you run a web farm where you can just balance traffic to
          | another server.
          | 
          | For the first one, I would say you could make admins use
          | "clean Chromebooks", but probably no one is going to pay for
          | that.
          | 
          | For the other banal ones, I would say it is good to remind
          | people that "management bypasses" are not a good idea.
        
       | zinssmeister wrote:
       | Excellent and to the point. I see this apply to many technology
       | SMB companies as well. We once compiled a few actionable
       | recommendations for smaller companies that host on AWS and that
       | post ended up being our most popular article
       | https://www.templarbit.com/blog/2018/11/21/security-recommen...
        
       | inetknght wrote:
       | > _You need to enable JavaScript to run this app._
       | 
       | Nah.
        
         | amanzi wrote:
         | Is this because the site is using React? I had a look at the
         | source of the page and I'm guessing this is React-based? Are
         | the benefits gained from using React worth it for the
         | limitations you get?
        
         | danShumway wrote:
         | There's no way you would have been able to find it without the
         | page loading, but for anyone else in the same position, the
         | direct PDF is available at
         | https://www.ncsc.gov.uk/pdfs/whitepaper/security-
         | architectur....
         | 
         | I would maybe question whether an article that can be perfectly
         | embedded in a static PDF without any changes or downgrades
         | really needs an entire React stack and a Service Worker for the
         | browser, but :shrug:. Every org is free to make their own
         | engineering choices.
         | 
         | It does seem to be a pretty good list, so worth taking a look
         | at.
        
       | cs02rm0 wrote:
        | On a personal note, the browse-down advice often means an
        | organisation supplying a trusted device.
       | 
       | That potentially conflicts with IR35 for contractors who would
       | then not be supplying their own equipment.
       | 
        | I've also seen it result in a contractor's *nix laptop being
       | swapped out for a Windows laptop (built by a junior employee)
       | with mandated "phone home" software installed. Personal biases
       | persuade me that this wasn't necessarily an improvement in the
       | security of the system.
       | 
       | I should say, I'm generally a fan of NCSC advice and I think it's
       | great they're putting their thoughts out there.
        
       ___________________________________________________________________
       (page generated 2020-01-15 23:00 UTC)