[HN Gopher] SolarWinds: What Hit Us Could Hit Others
       ___________________________________________________________________
        
       SolarWinds: What Hit Us Could Hit Others
        
       Author : parsecs
       Score  : 79 points
       Date   : 2021-01-12 20:52 UTC (2 hours ago)
        
 (HTM) web link (krebsonsecurity.com)
 (TXT) w3m dump (krebsonsecurity.com)
        
       | HALtheWise wrote:
       | https://www.crowdstrike.com/blog/sunspot-malware-technical-a...
       | is the key link with more technical analysis for those
       | interested, including source code of the implant.
       | "If the decryption of the parameters (target file path and
       | replacement source code) is successful and if the MD5 checks
       | pass, SUNSPOT proceeds with the replacement of the source file
       | content. The original source file is copied with a .bk extension
       | (e.g., InventoryManager.bk), to back up the original content. The
       | backdoored source is written to the same filename, but with a
       | .tmp extension (e.g., InventoryManager.tmp), before being moved
       | using MoveFileEx to the original filename (InventoryManager.cs).
       | After these steps, the source file backdoored with SUNBURST will
       | then be compiled as part of the standard process."
        
         | woliveirajr wrote:
          | This, combined with comments from other threads, makes me think
          | that SolarWinds wasn't the real target. They were just a means
          | to some specific high-value ends. When low-value companies were
          | using it, the remote control would even remove the backdoor to
          | avoid accidental discovery.
         | 
         | How, and how much, were the real targets affected?
        
           | jaywalk wrote:
           | SolarWinds was absolutely not the real target. The malware
           | wouldn't even execute if the machine it was installed on was
           | joined to a domain containing the string "solarwinds".
        
       | throwawaybutwhy wrote:
       | Oh. Brian Krebs regurgitates a corporate press release by a
       | company that has recently hired Chris Krebs (no relation), all
       | the while skirting around the solarwinds123 gross negligence. A
       | well-funded PR campaign has already resulted in a NY Times smear
       | hit piece accusing another software company of colluding with the
       | Russians.
       | 
       | Back to the substance, it appears the investigation keeps a close
       | lid on the real extent of the breach. It's nasty.
        
       | afrcnc wrote:
        | From what we've seen so far, this company deserves everything
        | that happened to it. Hope they go under. They ignored security
        | best practices for the chance of a quick buck.
        
       | Veserv wrote:
       | Oh good. The attackers were only in their systems since 9/4/19
       | before being detected on 12/12/20, so only 15 months of
       | infiltration into SolarWinds' systems before detection. At least
       | the payload was only deployed 2/20/20, so their customers were
       | only completely infiltrated without detection for 8 months.
       | Assuming the attackers could only get a 10 MB/s channel _total_
       | per target even though they probably infected thousands to tens
       | of thousands of machines per target, at ~20 million seconds that
       | would constitute ~200 TB exfiltrated per customer or ~19 years of
       | 1080p video.
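The arithmetic above sanity-checks (a quick sketch; the ~1.25 GB/hour figure for streaming-quality 1080p is my assumption):

```python
# Back-of-the-envelope exfiltration estimate for 8 months at 10 MB/s.
SECONDS_8_MONTHS = 8 * 30 * 24 * 3600        # ~2.07e7 seconds
RATE_BYTES_PER_S = 10 * 10**6                # 10 MB/s

total_bytes = RATE_BYTES_PER_S * SECONDS_8_MONTHS
total_tb = total_bytes / 10**12              # ~207 TB per customer

# Assumption: streaming-quality 1080p at ~1.25 GB/hour.
hours_of_video = total_bytes / (1.25 * 10**9)
years_of_video = hours_of_video / (24 * 365)  # ~19 years
```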
       | 
       | If an attacker has just one day to root around and exfiltrate
       | they can easily get valuable information. If they are given 8
       | months they have already gotten everything of value for months
       | and are just waiting around for any new data to come in. Think
       | how inadequate your systems must be to let an attacker sit around
       | in your systems for 8 months, it is mind-boggling how unqualified
       | their systems are for their problems. And this is not just an
       | indictment of SolarWinds. Just in case anybody forgets, it was
       | the top-flight security company FireEye who discovered this
       | breach after realizing they themselves were breached. A "best of
       | the best" security company took 8 months before realizing that
       | they or any of their customers had been breached. This is what
       | "top-class" security buys you.
        
         | 0xy wrote:
         | Their VP of Security posted a blog post entitled "Does your
         | vendor take security seriously?" one month before they
         | announced the breach.
        
         | SheinhardtWigCo wrote:
         | The lesson isn't that any particular victim sucks at security,
         | it's that well-resourced targeted attacks are generally
         | unstoppable.
        
           | joe_the_user wrote:
            | I think you can put it this way: modern baroque software
            | stacks, with their effectively vast "attack surfaces", are
            | not going to stand up to a well-financed, patient, skilled
            | attacker.
           | 
            | I recall that many companies have switched from a perimeter
            | model of defense, where systems are secured from the outside,
            | to a "defense in depth" model where each system is secured on
            | its own (plus the perimeter).
           | 
           | Perhaps folks should think about tightening the in-depth
           | model and avoiding the consumer model of constant updates
           | from a zillion providers. Or perhaps a single lab could
           | verify updates of the zillion providers rather than leaving
           | them on their own.
        
             | xvector wrote:
              | > I recall that many companies have switched from a
              | perimeter model of defense where systems are secured from
              | the outside to a "defense in depth" model where each system
              | is secured on its own (plus the perimeter).
             | 
             | Yes, this is the current bleeding edge of cloud security,
             | known as "zero trust." Amongst other things, it usually
             | involves provisioning mTLS identities in a secured manner
             | to each service, with connections restricted to a
             | whitelisted set of identities.
             | 
             | I found Evan Gilman and Doug Barth's "Zero Trust Networks:
             | Building Secure Systems in Untrusted Networks" [1] a pretty
             | helpful read in understanding what modern/next-gen cloud
             | security looks like.
             | 
             | Some modern implementations of varying depth and scale
             | include SPIRE [2], Tailscale [3], and BeyondCorp [4].
             | 
             | ----
             | 
             | [1]: https://www.amazon.com/Zero-Trust-Networks-Building-
             | Untruste...
             | 
             | [2]: https://spiffe.io/
             | 
             | [3]: https://tailscale.com/
             | 
             | [4]: https://beyondcorp.com/
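The "whitelisted set of identities" step boils down to a deny-by-default authorization check on the peer's verified identity. A toy sketch (the SPIFFE-style IDs are hypothetical; in a real deployment the identity is extracted from the peer's mTLS certificate):

```python
# Deny-by-default peer authorization keyed on workload identity.
# The SPIFFE-style IDs below are made-up examples.
ALLOWED_PEERS = {
    "spiffe://prod.example.com/billing",
    "spiffe://prod.example.com/inventory",
}

def authorize(peer_identity: str) -> bool:
    """Permit a connection only if the peer's verified identity
    (e.g. taken from its mTLS certificate) is explicitly allowlisted."""
    return peer_identity in ALLOWED_PEERS
```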
        
               | joe_the_user wrote:
               | Yeah, but if you're blindly installing a third party's
               | binary blob, it's hard to call that "zero trust".
               | 
                | Edit: It seems like a serious extension of the zero-trust
                | concept would involve something like "only source code
                | from people we trust, which we then compile ourselves,
                | can be used in system X". Limit trust to trusted
                | identities and don't let binaries in any more than people.
        
           | Veserv wrote:
           | True, but it is important to quantify that inadequacy.
            | SolarWinds was attacked by an actual threat, and their
            | defenses were comically outmatched by a real adversary who
            | found real value in attacking them. We are not talking
           | 10%-20% or even 100%, we are talking systems that need to
           | improve by 1,000%-10,000% to provide credible defense against
           | real foes. And this is not just SolarWinds, FireEye, a major
           | cybersecurity company, needs to improve their security by a
           | factor of 100x to protect their customers against people who
           | actually wish to attack them. The security is not merely
           | inadequate, it is inadequate to a degree that is almost mind-
           | boggling. Systems are being deployed that are not even 1% of
           | the necessary level of effectiveness. These organizations are
           | completely incapable of developing and deploying adequate
           | defenses.
           | 
           | This ignores the secondary problem which is that if the
           | attacks being deployed are 100x stronger than the defenses,
           | how hard is it to develop an attack that is merely 2x
           | stronger than the defenses. If we lazily extrapolate this
           | linearly, that would be 1/50th the resources to develop an
           | attack that still outmatches the defenders. How many people
           | do you think were on the SolarWinds attack? 10, 100, 1000?
           | Even at 1000 that means you would only need 20 fulltime
           | people for a year to develop a credible attack against nearly
           | every Fortune 500 company in the world and most of the US
           | government. That should be a terrifying concept. Obviously,
           | this is lazy extrapolation, but it is not so off as to be
           | non-illustrative of this problem.
           | 
           | Given this immense difference between real problems and
           | available solutions, the only reasonable assumption for
           | companies and people is to assume that they are completely
           | defenseless against such actors and likely even largely
           | defenseless against even poorly-resourced attacks as
           | demonstrated time and time again. It is imperative that they
           | act accordingly and only make systems accessible where the
           | harm of guaranteed breach is less than the benefit of making
           | those systems accessible.
        
           | jakeva wrote:
            | May I propose a corollary: we _all_ suck at security when
            | confronted by a well-resourced adversary.
        
           | varjag wrote:
           | Very much so. The description of the staged attack suggests
           | months of work by a sizeable development team.
        
         | vasco wrote:
          | Whenever I read a post as strongly worded as yours, I wonder
          | whether its author really is the most security-minded engineer
          | out there, and has never compromised on anything for a deadline
          | or missed something because they simply didn't know it should
          | be configured in a particular safe way.
         | 
         | This definitely doesn't look good and there's probably many
         | failures along the way, but jeez.
        
           | spijdar wrote:
            | I think it's fair to say that these problems shouldn't be
            | blamed on individuals, but on systemic failure. Presumably,
            | someone should have seen this at some point, and the system
            | should have had safeguards to catch it, regardless of any
            | individual's personal failures.
           | 
           | That said, I agree it's a little harsh, since there's no
           | evidence that anyone else could/has done better in this
           | incident.
        
             | 0xy wrote:
             | Why shouldn't it be blamed on the person directly
             | responsible? In this case, the VP of Security and/or the
             | CEO?
        
           | hobofan wrote:
            | I think there ought to be a bit of a difference between,
            | e.g., an engineer who customizes your local car dealership's
            | WP instance and one who works on a semi-security-related
            | product that can become a prime hacking target and is
            | deployed across nearly all Fortune 500 companies.
        
           | Veserv wrote:
            | You do not need to be the best civil engineer in the world to
            | recognize that a bridge design that collapses on its first
            | day is totally inadequate. Similarly, you do not need to
           | be a security expert to recognize that 15 months of active
           | undetected infiltration demonstrates a nearly comical
           | inadequacy. To provide real material mitigation you would
           | likely need to be in the hour to day range at most to
           | actually stop a double-digit percentage of value-weighted
           | exfiltration. This even ignores the cases where they just
           | want to damage your systems which would generally only take
           | minutes.
           | 
           | Now maybe nobody can do that, maybe these systems are the
           | best available, but even if that is true that does not
            | suddenly make them adequate. Adequacy is an objective
            | evaluation against a standard; if nobody can reach the
            | standard, then nobody is adequate. We would not let somebody
           | use a new untested material in a bridge if it can not support
           | the bridge just because "nobody knows how to make a bridge
           | that works with the new material". And, by any reasonable
           | metric, an inability to recognize active infiltration for
           | months indicates that against a threat actor similar to what
           | attacked them, they are about as adequate as a piece of paper
           | against a gun.
        
             | viraptor wrote:
              | Apart from what the sibling comment mentioned, it's not a
              | great analogy, because there is only a very limited set of
              | things that can go wrong with a bridge. This is knowledge
              | shared among all engineers - you analyse the design for
              | multiple known forces and you're done.
             | 
             | Compare this to the CI systems designed to take unknown
             | code and run it in a safeish way. In a bridge analogy it
             | would be something like "one of the screws turned out to be
             | a remote controlled drilling device which hollowed out
             | parts of steel without visible changes" - of course nobody
             | would notice that for some time.
        
             | Jestar342 wrote:
             | Terrible and dishonest analogy. The very reason this went
             | undetected for 15 months is because the bridge _didn't_
             | fall down. There were no signs of a "break in" and it's
             | wholly improper to compare a virtual system to a physical
             | entity like that in the first place.
        
       | fancyfredbot wrote:
       | Could and likely already has hit others
        
       | f430 wrote:
        | This is the key excerpt; it's quite shocking:
        | 
        | > Crowdstrike said Sunspot was written to be able to detect when
        | > it was installed on a SolarWinds developer system, and to lie
        | > in wait until specific Orion source code files were accessed
        | > by developers. This allowed the intruders to "replace source
        | > code files during the build process, before compilation,"
        | > Crowdstrike wrote.
        | > 
        | > The attackers also included safeguards to prevent the backdoor
        | > code lines from appearing in Orion software build logs, and
        | > checks to ensure that such tampering wouldn't cause build
        | > errors.
        | > 
        | > "The design of SUNSPOT suggests [the malware] developers
        | > invested a lot of effort to ensure the code was properly
        | > inserted and remained undetected, and prioritized operational
        | > security to avoid revealing their presence in the build
        | > environment to SolarWinds developers," CrowdStrike wrote.
       | 
       | So how do we guard against this type of attack? How do we know
       | this hasn't already happened to some of us? What is the potential
       | fallout from this hack, it seems quite significant.
       | 
        | This must be why the Japanese intelligence agencies prefer paper
        | over computer systems. The digitization of critical national
        | security apparatus is the Achilles' heel that is being exploited
        | successfully. One example is Japan's intelligence-gathering
        | capability in East Asia, especially China, which is second to
        | none. Japan has a strong linguistic understanding of the Chinese
        | language (kanji and all), and interestingly, much of the PRC's
        | massive public surveillance equipment, like CCTV cameras, is
        | made in Japan.
       | 
        | Even if they hire Krebs, I believe that if it's digital, it can
        | be hacked, given a long enough time period, unlimited state-level
        | backing, and the head-hunting of their country's geniuses to do
        | their bidding. I wonder how the Biden-Harris administration will
        | respond; it is very clear who the state actor is here. I'm very
        | nervous about the ramifications of this hack.
        
         | [deleted]
        
         | nvader wrote:
         | This reminds me of the Ken Thompson hack:
         | https://www.win.tue.nl/~aeb/linux/hh/thompson/trust.html
        
         | twistedpair wrote:
         | We're all screwed. Predicted long ago.
         | 
         | See _Reflections On Trusting Trust_ [1]
         | 
         | [1]
         | https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_Ref...
        
         | bostik wrote:
         | > _So how do we guard against this type of attack?_
         | 
         | You can't ever prevent it, but you can raise the attack
         | cost/complexity and make detection much more likely.
         | 
         | Go for immutable infra and transient CI/CD systems. Provision
         | the build nodes on demand, from trusted images, and wipe them
         | out completely after just a few hours. The attacker will have
         | to re-infect the nodes as they come online, and risk leaving
         | more network traces. Anything they deploy on the systems goes
         | away when the node goes away.
         | 
         | The attack against SolarWinds worked so well because the CI
         | system was persistent and it was upgraded over time. For a
         | reliable and secure environment, the correct amount of host
         | configuration is zero: if you need to modify anything at all
         | after a system has been launched, that's a bad smell. Don't
         | upgrade. Provision a completely new system with the more recent
         | versions.
         | 
          | This kind of architecture requires the attacker to compromise
          | the CI image build and/or storage instead. (Or the upstream,
          | directly.) It's an extra step for the adversary, and a simpler
          | point of control for the defender.
         | 
         | Recon. Compromise. Persist. Exfiltrate. -- As a defender you
         | want to make every step as expensive and brittle as possible.
         | You can't prevent a compromise, but you can make it less
         | useful, and you can arrange things so that it must happen often
         | and leave enough traces to increase the probability of getting
         | caught.
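The "from trusted images" requirement above can be enforced mechanically: refuse to provision any build node whose image digest isn't pinned. A minimal sketch (the pinned digest value is a placeholder):

```python
import hashlib

# Digests of CI images we are willing to boot build nodes from.
# (Placeholder value; in practice this list is maintained out-of-band.)
TRUSTED_IMAGE_DIGESTS = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def may_provision(image_bytes: bytes) -> bool:
    """Allow provisioning only from images whose SHA-256 digest is pinned."""
    return hashlib.sha256(image_bytes).hexdigest() in TRUSTED_IMAGE_DIGESTS
```

An attacker who tampers with the image then has to also compromise the digest list, which is a much smaller thing to guard.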
        
           | twistedpair wrote:
            | Part of me wonders if they'd have been better off with
            | GitHub's cloud CI/CD (Actions), with immutable build infra
            | (e.g. Ubuntu base images).
           | 
           | But, with the wild npm dependency community, where a small
           | app can have 5K transitive dependencies, I feel we're going
           | to be even more susceptible to these attacks going forward.
        
           | joe_the_user wrote:
            | Another thing that might have made this harder: if SolarWinds
            | were distributed as source code and each client built it
            | themselves, with their own options (though the old back-
            | doored C-compiler "thought experiment" may not be as much of
            | a thought experiment anymore).
            | 
            | Moreover, achieving the hack was likely costly given the
            | effort, and the benefit of the hack appeared only once the
            | SolarWinds binary was distributed. You can reduce the benefit
            | of such a hack by not having information critical enterprises
            | all running the same binary blob.
        
             | SahAssar wrote:
             | > You can reduce the benefit of such a hack by not having
             | information critical enterprises all running the same
             | binary blob.
             | 
             | That assumes that you actually inspect the source, right?
        
               | joe_the_user wrote:
                | If SolarWinds were distributed as source to hundreds of
                | companies, maybe many would not bother diffing the source
                | against the previous version, but it seems plausible that
                | at least a few would look, especially given you are
                | talking about corporations that follow deployment
                | procedures.
               | 
               | The build process itself could spit out the diffs at the
               | end, for example.
        
           | SahAssar wrote:
           | Besides all of these steps I think it's important to consider
           | every convenience, every dependency, every package you add as
           | another attack vector. This is even more relevant considering
           | the product SolarWinds sold.
        
         | Spooky23 wrote:
         | You have to segment and have monitoring tools monitored by
         | people with a clue.
         | 
         | But that is very expensive to do. The average SaaS or software
         | company does nothing.
        
         | benlivengood wrote:
         | > So how do we guard against this type of attack? How do we
         | know this hasn't already happened to some of us? What is the
         | potential fallout from this hack, it seems quite significant.
         | 
         | Verified builds. That means deterministic builds (roughly, from
         | a given git commit the same binaries should result no matter
         | who compiles them. It requires compiler support and sometimes
         | changes to the code) plus trusted build infrastructure.
         | 
         | To verify that you haven't been compromised do a verified build
         | from two independent roots of trust and compare the resulting
         | binaries. Add more roots of trust to reduce the probability
         | that all of them are compromised.
         | 
         | Establishing a trusted root build environment is tricky because
         | very little software has deterministic builds yet. Once they do
         | it'll be much easier.
         | 
         | Here's my best shot at it:
         | 
         | Get a bunch of fresh openbsd machines. Don't network them
         | together. Add some windows machines if you're planning to use
         | VS.
         | 
         | Pick 3 or more C compilers. Grab the source, verify with pgp on
         | a few machines using a few different clients. For each one,
          | compile it as much as possible with the others. This won't be
          | entirely possible, since a compiler's own source may use
          | extensions available only in that compiler, but it is the best
          | we can do at this point. Build all your compilers with each of
          | these
         | stage-2 compilers. Repeat until you have N-choose-N stage-N
         | compilers. At this point any deterministic builds by a
         | particular compiler (gcc, llvm, VS) should exactly match
         | despite the compilers themselves being compiled in different
         | orders by different compilers. This partially addresses Ken
          | Thompson's paper "Reflections on Trusting Trust" by requiring
         | any persistent compiler backdoors to be mutually compatible
         | across many different compilers otherwise it'll be detected as
         | mismatched output from some compiler build ancestries but not
         | others. Now you have some trusted compiler binaries.
         | 
          | Git repository hashes can be the root of trust for the
          | remaining software. Using a few different git client
          | implementations, verify that all the hashes match across the
          | entire Merkle tree. Build them with trusted compilers of your
          | choice on multiple machines and verify the results match where
          | possible.
         | 
         | At this point you should have compilers, kernels, and system
         | libraries that are most likely true to the verified source
         | code.
         | 
         | Make a couple build farms and keep them administratively
         | separate. No common passwords, ssh keys, update servers, etc.
         | Make sure builds on both farms match before trusting the
         | binaries.
         | 
          | The good news is that most of this can be done by the open
          | source community: if everyone starts sharing the hashes of
          | their git trees before builds, and the hashes of the resulting
          | binaries, we could build a global consensus on which software
          | can currently be built deterministically and, out of those,
          | which binaries are very likely to be true translations of the
          | source code.
         | 
         | EDIT: https://wiki.debian.org/ReproducibleBuilds is Debian's
         | attempt at this.
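Once builds are deterministic, the cross-check against multiple roots of trust described above is simple: the artifacts from independently administered build farms must be bit-identical. A minimal sketch (farm names are illustrative):

```python
import hashlib

def digest(artifact: bytes) -> str:
    """SHA-256 digest of a build artifact."""
    return hashlib.sha256(artifact).hexdigest()

def verify_builds(artifacts_by_farm: dict) -> bool:
    """True only if every independent build farm produced a bit-identical
    artifact. Any mismatch means at least one root of trust is
    compromised, or the build is not actually deterministic."""
    digests = {digest(a) for a in artifacts_by_farm.values()}
    return len(digests) == 1
```

Adding more farms only shrinks the probability that all of them were compromised in exactly the same way.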
        
         | dilyevsky wrote:
         | > So how do we guard against this type of attack?
         | 
         | Don't give people running windows machines access to your
         | source code/production
        
           | _wldu wrote:
           | Especially if they are domain joined.
        
             | dilyevsky wrote:
              | Just don't. Inb4 the "but Microsoft has the most
              | sophisticated security" crowd: every major hack of the last
              | 15 years starts with a compromised Windows box that gives
              | the attacker an outpost to move laterally.
        
         | rodgerd wrote:
         | > So how do we guard against this type of attack?
         | 
         | One big issue with a lot of security and enterprise ops tooling
         | is that it doesn't follow good practice around, well, security.
         | For example, security code analysis software with hard-coded
         | passwords that you hook into your build tooling, or in this
         | case, ops software that instructs you to disable Microsoft's
         | security tools so they don't flag problems with the agent.
         | 
          | In a similar vein, I've had BMC software want the highest
          | levels of access to Oracle DBs to do simple monitoring, and so
          | on and so forth.
         | 
          | The other observation, which I heard Bruce Schneier make at a
          | hacker con, is more profound, and probably going to take a lot
          | longer for national security actors to accept: the norms need
          | to change. There is no longer a clear separation between "our
         | stuff" and "their stuff" the way that there was a few decades
         | ago, when espionage was more on "your telco network" or "my
         | telco network". As we've moved to pervasive connectivity it's
         | no longer possible to say, "oh that backdoor will only be used
         | by our side", or "that hack will only affect their SCADA
         | systems" or whatever.
        
           | f430 wrote:
           | I think this is the best answer out of all the replies.
        
         | j_walter wrote:
          | It was not done by a bunch of amateurs, that is for sure.
          | Everything now points to Russia, but those are also the most
          | obvious clues to leave, as who would question it? However... we
          | know the NSA wants to be in every system, and this kind of
          | operational security and evasion screams NSA to me.
        
           | f430 wrote:
           | > Now everything points to Russia
           | 
            | How do we know this is the Russians? To my knowledge, it's
            | very common practice to obfuscate origins before launching a
            | campaign like this by washing it through several different
            | countries.
           | 
           | You could leave stuff like comments or references that would
           | suggest it was the Russians, there's just no way of knowing,
           | so I follow the fundamentals of political sabotage: whoever
           | benefits most is the culprit. Who has the most to lose and
           | gain here?
        
             | NikolaeVarius wrote:
             | Yeah, no.
             | 
             | The various Russian APTs have tooling they prefer to use
             | and are attributable to them. This is generally fairly
             | stable because these are professionals who spend years
             | learning specific toolchains, programming, and skills, and
             | do not really change it up much, since they don't have to.
             | Even if they're attributed, what is the world going to do?
             | Toss a bomb into Russia?
             | 
             | And before you get started, yes, security professionals are
             | aware that you can obfuscate that, but there are already
             | techniques to defeat this second layer of obfuscation.
             | 
             | If multiple sources are saying this was probably Russia,
             | they probably have a decent bit of proof.
        
               | f430 wrote:
                | Hmm, I hadn't considered that, but how do you find out
                | what tools were used to produce the payload source code?
                | How can you be certain? Couldn't another adversary simply
                | use the same tooling, or perhaps it is shared with an
                | allied nation (the enemy of my enemy is a friend) to do
                | its bidding?
        
               | NikolaeVarius wrote:
                | These people are incredibly smart.
                | https://link.springer.com/article/10.1186/s42400-020-00048-4
                | https://www.blackhat.com/html/webcast/07072020-how-
                | attackers...
               | 
               | TLDR.
               | 
               | > They highlight that not only malware samples and their
               | specific properties (such as compiler settings, language
               | settings, certain re-occurring patterns and the like) are
               | useful, but also information available outside the
               | actually attacked infrastructure, including data on the
               | command & control infrastructure.
               | 
                | Yes, you can obfuscate certain things, but it's hard to
                | obfuscate EVERYTHING, and if you dig deep enough, you can
                | make a decent effort at finding the owner.
        
               | boomboomsubban wrote:
               | From reading the paper, it seems like it would be
               | difficult for a private hacking group to manage but
               | completely doable for someone like the NSA. They could
               | outfit an entire team to work somewhere else for an
               | extended period of time, making behavior profiles
               | unreliable.
        
               | 0xy wrote:
               | Except the CIA and other actors have been known to
               | impersonate the methods of other nation states, so
               | attribution is never the smoking gun you're claiming it
               | to be.
        
       | omgbobbyg wrote:
       | As a citizen, I am shocked and appalled by this backdoor. As a
       | software engineer, I can't help but marvel at the creativity and
       | thoughtfulness put into the exploit.
        
         | f430 wrote:
         | The average engineer isn't an infosec expert and loves
         | automation, so attackers found the weakest link in the chain:
         | CI/CD.
        
       | candiddevmike wrote:
       | I'm surprised this hasn't caused the software industry to
       | completely halt and rewrite/audit all third party libraries and
       | dependencies. The entire software supply chain is highly trust-
       | based, npm especially. Why aren't we seeing the start of a NIH
       | dark age?
        
         | cobookman wrote:
         | NPM is not trusted by major Fortune 500 enterprises. Many of
         | the tech companies I've worked at banned using NPM in prod,
         | and instead created their own NPM clone with source code
         | pinned into a private repo, which was security audited.
         | 
         | Adding a single NPM library became a total PITA, as it linked
         | to another 100 NPM libraries, which in turn linked to an
         | additional 100+ other NPM libraries. So adding a single NPM
         | library to the private repo meant adding 100s to 1000s of
         | other NPM libraries. (E.g. left-pad [1])
         | 
         | Personally, I think this was a major reason node.js didn't
         | pick up more in the enterprise space, with Python & golang
         | getting more traction.
         | 
         | [1] https://qz.com/646467/how-one-programmer-broke-the-
         | internet-...
        
           | mandelbrotwurst wrote:
           | > With Python & golang getting more traction.
           | 
           | Does pip not have the same issues?
        
           | dgellow wrote:
           | Python and Go have similar issues, though. People do not
           | audit and do not vendor their dependencies.
        
         | dgellow wrote:
         | I feel that the information about the breach didn't reach a
         | mainstream audience because of the other things happening
         | (such as the US election and all the chaos it generated). In
         | the current climate it's difficult to get the message across
         | that a massive cyber attack happened and have people take you
         | seriously.
        
       | dbg31415 wrote:
       | Yes, but...
       | 
       | Having worked at SolarWinds they're especially susceptible to
       | demands from sales and marketing. "Go faster, ignore tech best
       | practices, etc." It's not unique, but their culture is not a dev-
       | first, or security-first, culture to say the least. Many product
       | managers answer to marketing first, and don't have earnest tech
       | backgrounds that would let them know right from wrong past sales
       | numbers. The culture changed significantly when they went public
       | the first time; it went from a place where devs built good
       | tools... to a place looking to buy products / competitor products
       | so they could charge more for their services. Look at how long it
       | took them to get into cloud tools -- great example of how
       | marketing and sales missed the boat because they were only
       | focused on things they had sold before and not focused on
       | systemic changes to the industry -- because technologists weren't
       | driving.
       | 
       | Anyway, I've worked at a lot of places with better security
       | built into the culture, better tech best practices built into
       | the culture... that's all I'm trying to say. Knowing that
       | attacks like this are out there, and that it was just a matter
       | of time before one happened, SolarWinds did next to nothing to
       | avoid it happening to them.
        
       ___________________________________________________________________
       (page generated 2021-01-12 23:00 UTC)