[HN Gopher] Toyota Accidentally Exposed a Secret Key Publicly on G...
       ___________________________________________________________________
        
       Toyota Accidentally Exposed a Secret Key Publicly on GitHub for
       Five Years
        
       Author : whack
       Score  : 189 points
       Date   : 2022-10-13 20:34 UTC (2 hours ago)
        
 (HTM) web link (blog.gitguardian.com)
 (TXT) w3m dump (blog.gitguardian.com)
        
       | jrockway wrote:
       | > Git is an awesome version control system, used by over 93% of
       | developers, and is at the heart of modern CI/CD pipelines. One of
       | the benefits of Git is that everyone has a complete copy of the
       | project they are working on.
       | 
       | I feel like this is copy-pasted from a pitch deck on why
       | GitGuardian should be funded. Does anyone reading the article
       | care about this anecdote? Like do people stop reading at "well
       | I'm one of the 7% that doesn't" or think "wow a lot of people are
       | using Git, I should buy their thing"?
       | 
       | Sorry for the meta comment, but it just stuck out as odd to me.
        
         | dinvlad wrote:
         | This is definitely a marketing piece. And they charge a LOT for
         | it, so it's not a solution for the common masses.
        
           | zricethezav wrote:
           | If you're using GitHub Actions, Gitleaks offers competitive
           | pricing for a secret scanning solution.
           | 
           | https://gitleaks.io/products
        
             | dinvlad wrote:
             | Thanks, I've already figured out how to run Trufflehog for
             | free on our thousands of repos.
        
               | [deleted]
        
       | lumberjack24 wrote:
       | The access model on platforms like GitHub is flawed, a single
       | account can be used for both professional and personal
       | projects/repositories, leading to "fat finger" errors like this
       | one here...
        
         | jmainguy wrote:
         | Check out GitHub Enterprise Managed Users: all the shine of
         | github.com with the benefits of self-hosted GitHub.
        
         | tester756 wrote:
         | I don't think it is flawed.
         | 
         | You cannot access org's repos without VPN
         | 
         | if you create a new repo by mistake outside your org, then
         | uhh..., it's crazy?
         | 
         | it's like sending email with credentials to people outside your
         | org
        
         | anamexis wrote:
         | I don't see how this is related to GitHub's access model. Was
         | the canonical Toyota repo even on GitHub?
        
         | gw99 wrote:
         | Oh yes this. It's so easy to critically fuck up an invite into
         | an organisation. If you typo the username you are
         | potentially compromised. I've seen a couple of near misses on
         | this already.
         | 
         | Note: the invite input box actually autocompletes ALL github
         | usernames.
        
           | ccakes wrote:
           | You can invite by email address, so the workaround here is
           | to only invite corporate email addresses.
           | 
           | If the target user hasn't added their corp email to their
           | profile then they can't be part of the org.
        
             | prepend wrote:
             | This is what I do, but I really wish there were better
             | integration with auth providers that could be used for the
             | invite. It would be nice to search my directory, type the
             | email, and confirm the name matches the email.
             | 
             | This is what GitLab does with their hosted AD/LDAP
             | connector.
             | 
             | I'm in fear of mistyping something and inviting the wrong
             | person.
        
               | dotancohen wrote:
               | So never type an email address in at all. Go to an
               | existing email message, copy the bloke's email address,
               | then paste it into the GitHub interface.
        
           | matai_kolila wrote:
           | > Note: the invite input box actually autocompletes ALL
           | github usernames.
           | 
           | I'm sorry, but that's wild. That's like, not even an easy
           | engineering problem to solve necessarily, given their size!
        
             | gw99 wrote:
             | Not really. They only have 83-90 million users. That's not
             | really a big table, at least in my world...
        
               | matai_kolila wrote:
               | In my world 20 is a lot because finding customers is
               | hard... :(
        
         | [deleted]
        
       | Kiro wrote:
       | Question: Let's say I want to open source my app but a long time
       | ago I used to have credentials hard coded. What can I do to clean
       | this up from history?
        
         | scarmig wrote:
         | https://docs.github.com/en/authentication/keeping-your-accou...
        
         | tyler569 wrote:
         | Git does have tools that allow you to rewrite history to fix
         | situations like this, but by far the easiest solution is to
         | invalidate those credentials so they become worthless.
        
       | greyface- wrote:
       | > T-Connect enables features like remote starting, in-car Wi-Fi,
       | digital key access,
       | 
       | Could an attacker have remote-unlocked and remote-started the
       | entire Toyota fleet with this access?
        
         | mcdwayne wrote:
         | Toyota is claiming no, not with this leak. It was a partial
         | repo that was exposed. The data they accessed with the key got
         | customer ID numbers and emails only.
        
       | danielodievich wrote:
       | Many years ago I got a trial license key for something, Aspose
       | components of some sort I think, and without thinking about it,
       | checked it into a public GitHub repo. Well, a few days later
       | Aspose's support sent me a nicely worded note saying that they
       | noticed that it was there and invalidated it for me. Their
       | description and instructions were very clear about why they did
       | it and why I shouldn't have checked it in. I thought that was
       | very proactive and excellent customer service.
        
         | paxys wrote:
         | Github supports this out of the box -
         | https://docs.github.com/en/code-security/secret-
         | scanning/abo..., and recognizes tokens from a lot of services.
        
           | derefr wrote:
           | In fact, you can apply as a Github "secret scanning partner"
           | to have your own secret's format (regexp) be a part of this
           | secret scanning, with a webhook to your servers whenever they
           | find one, so that you can do the credential-invalidation on
           | your own backend + send the kindly-worded email from your own
           | domain.
           | 
           | Mind you, your secrets need to _have_ a distinctive format in
           | order for this to work. Probably a distinctive prefix is
           | enough.
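
As a sketch of why a distinctive prefix helps: with a fixed, unusual
prefix, one regexp can pick candidate tokens out of arbitrary text. The
"acme_live_" prefix and 32-character token shape below are invented for
illustration, not any real service's format.

```python
import re

# Hypothetical token format: a distinctive prefix plus 32 URL-safe
# characters. "acme_live_" is an invented example, not a real service.
TOKEN_RE = re.compile(r"\bacme_live_[A-Za-z0-9_-]{32}\b")

def find_candidate_secrets(text: str) -> list[str]:
    """Return every substring of `text` that matches the token shape."""
    return TOKEN_RE.findall(text)
```

Without such a prefix, a scanner has to guess from entropy alone and
false positives climb fast, which is presumably why GitHub asks partners
for a precise pattern.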
           | 
           | An Unethical Life Pro-Tip (that the word is already out on
           | anyway, so I don't feel too bad):
           | 
           | * The content of Github public repos is all continuously
           | loaded (by Github themselves) as a public dataset into
           | BigQuery -- https://console.cloud.google.com/marketplace/deta
           | ils/github/....
           | 
           | * For about $500, you can use BigQuery to extract all matches
           | of a particular regexp, from every file, in every commit, in
           | every public Github repo.
           | 
           | Whether or not Github themselves use this to power _their_
           | secret scanning, arbitrary third parties (benevolent or not)
           | certainly can use it for such. And likely already do.
        
             | lupire wrote:
              | Does GitHub not postpone publishing new versions until
              | after the secret scanning is done?
             | 
             | Also I'd hope that Google is scanning BigQuery queries for
             | that abuse signal.
        
             | dantiberian wrote:
             | The GitHub public events API is delayed by 5 minutes,
             | presumably to give secret scanning partners time to react
             | before commits are made public.
             | 
             | https://github.blog/changelog/2018-08-01-new-delay-public-
             | ev...
             | 
             | Disclosure: I'm an ex-GitHub employee but was not involved
             | in the secret scanning API.
        
         | Natsukvshii wrote:
         | I actually had something similar happen to me last month. I
         | accidentally published a discord API key to GitHub and within
         | minutes I got a nice message from "Safety Jim" to my personal
         | discord account letting me know they've found my key on a
         | public repo and have gone ahead and revoked it.
         | 
         | I felt like a bit of a dope but it was neat to have it happen
         | to me. Lesson learned for sure.
        
           | greysteil wrote:
           | GitHub PM here. Glad that was a good experience! We work with
           | ~50 partners (details in the link below) to notify them when
           | tokens for their service are exposed in public repos, so that
           | they can notify you.
           | 
           | https://docs.github.com/en/code-security/secret-
           | scanning/sec...
        
             | fiddlerwoaroof wrote:
             | I wish I could set this up to block pushes proactively
             | instead of reacting to pushed secrets.
        
             | cebert wrote:
             | This is awesome!
        
             | DiggyJohnson wrote:
             | Top proactive security feature of the year, for me. Nice
             | stuff.
        
             | tough wrote:
             | Is this really expensive? We're a small startup providing
             | API keys to our customers.
        
           | shepherdjerred wrote:
           | I've had this happen to me too! Not a second after pushing
           | to GitHub, I received a message about publicizing my auth
           | key. It was amazing, and I'm sure this saves a lot of
           | stolen keys from people just getting into programming
        
           | vultour wrote:
           | Great that they finally do that. I accidentally checked one
           | into a public GitHub repo a long time ago and about 2 years
           | later someone found it. The infinite spam wasn't even the
           | worst part: the bajillion emojis in every message caused the
           | Discord client to crash instantly upon opening, so I
           | couldn't even figure out what was happening at first.
        
         | [deleted]
        
       | bsimpson wrote:
       | The social media manager at GitGuardian is winning. They just got
       | us each to read a 1000 word ad for GitGuardian.
        
       | robswc wrote:
       | It can actually be comical just how _bad_ things can be at large
       | orgs.
       | 
       | Anyone have details, theories, or a book on how such
       | inefficiencies come about? I can't speak to tech-oriented large
         | orgs but I've worked with others and it's just... I'm not shocked
       | at all. I've seen public facing API keys in HTML, private SSH
       | keys that do god knows what in plaintext on FTP servers... I just
       | don't understand how they seem to care so much but in reality,
       | care so little. Just lazy?
        
         | nomel wrote:
         | In my experience, security is so far removed from the actual
         | job description/day to day cares that it's perpetually
         | "somebody else's problem", seen as an unnecessary time sink.
         | Usually there's a couple of people that actually care, but
         | they're ignored and lack the power to influence change.
         | 
         | Not lazy, just overburdened by more important things.
        
           | robswc wrote:
           | That seems to match what I've seen. Especially the last line.
           | It's just so weird the dynamic between "pretending to care"
           | and "actually caring." I've worked at or consulted on small
           | teams that were very "lax" about security but everyone seemed
           | to take it seriously so it "worked."
           | 
           | At larger orgs, I notice they take it "seriously" but that
           | results in people finding creative loopholes... like pasting
           | in all the important production keys/passwords into a google
           | doc and sharing the google doc because they "can't send
           | secrets over slack" ... lol
        
         | prepend wrote:
         | There's a book by John Gall called the Systems Bible [0] that
         | goes into how big systems form and fall apart. Mostly anecdotes
         | and no real solution, but a decent read that isn't full of the
         | usual BS that's required because usually only large systems can
         | afford high speaker fees.
         | 
         | [0]
         | https://www.goodreads.com/book/show/583785.The_Systems_Bible
        
         | eitally wrote:
         | The reality is that most companies can't afford the IT budget
         | it would take to implement, and then enforce adherence to, a
         | standard set of best practices for much of anything.
         | 
         | As another commenter said, though, this is not the case in
         | regulated industries, in the parts of those companies dealing
         | with regulated processes, controls and data access.
        
         | 0xbadcafebee wrote:
         | You're the receptionist at a high-security facility. Lots of
         | people work there. Your job is to make sure the people get in
         | and out quickly.
         | 
         | The building has multiple methods of access at different
         | stages. During the course of doing your job, you don't follow
         | the proper protocols a few times. You closed a door without re-
         | entering a lock code. You didn't check someone's badge and ID
         | at the third floor access gate. You forgot to make sure someone
         | swiped out on exit.
         | 
         | Nobody actually trained you on any of those protocols; they
         | just made you watch a "security is everyone's responsibility"
         | video and then expected you to know all the protocols. Maybe
         | once or twice you were lazy, but it's equally likely you just
         | didn't know what to do. And once you make that mistake,
         | somebody can exploit it as long as you don't follow the
         | protocol.
         | 
         | In facilities where security is important, there are
         | safeguards. Doors left open sound an alarm, automatically lock
         | when closed, and sensors detect if more than one person walks
         | through without badging in. These safeguards exist because
         | people are fallible and need help to enforce security.
         | Companies without security safeguards either don't care, or are
         | ignorant.
        
         | useful wrote:
         | Every organization is held back by the slowest adopter. If you
         | advance too quickly compared to your colleagues then you will
         | likely leave because everyone else feels like they are trying
         | to pull you back into their crap. Innovation is a depreciating
         | asset. If you don't reward the people who make a quantum leap,
         | they will leave, and all their progress will revert to the
         | mean.
         | 
         | I bet someone made a security key because it was the right
         | thing to do but they didn't have the controls in place to
         | manage it in their build system and give another key to
         | developers/engineers/etc. So someone else copied it for
         | convenience rather than have to explain to every moron in the
         | whole company how to use it to access the database or run their
         | monolith tests or get access from the dude that no longer works
         | there who was the "giver" of access keys.
        
       | chaps wrote:
       | Hah. Yeah. Found a bunch of ssh keys, passwords, etc for Comcast
       | years back which turned into a shitshow when I tried to report
       | it. Once I found the right people to talk to things got better,
       | but the entire experience was really reflective of how bad large
       | orgs are with security.
       | 
       | A friend once told me he was having a hard time getting a client
       | to take his security concerns seriously. So I went on github and
       | found a commit in their repo that included a production password
       | and sent it to him. Maybe took 5-10 minutes to find? Apparently
       | once they found out about the commit, they panicked a bit and
       | started taking his concerns more seriously.
        
         | gw99 wrote:
         | This shit happens all the time.
         | 
         | Old school one when I was a security consultant for a bit (pre-
         | automated pentest scammers). Medium size regulated fintech.
         | Domain admin passwords and admin accounts were stuck on post it
         | notes on a board in the machine room. If you went over the
         | road to the college, asked to use the toilet (which they
         | seemed fine with), and poked a 200mm lens out of the bathroom
         | window, you could snap them all.
         | 
         | Don't assume that level of competence improved with the
         | addition of technology.
        
           | FredPret wrote:
           | In fact we now have even better camera lenses!
        
           | ethbr0 wrote:
           | Everyone complains about post-it notes, but the physical
           | proximity requirement to read them isn't nothing. E.g.
           | compared to network-accessible files.
           | 
           | At least, until you have a network-attached webcam pointed at
           | your whiteboard.
           | 
           | But the solution to the webcam problem is to write its access
           | credentials on your whiteboard, thus forming a circular and
           | perfectly secure loop.
        
             | lupire wrote:
             | Perfectly circular until the first time you join a meeting.
        
             | gw99 wrote:
             | Just stick an Amazon t shirt on, a reflective yellow
             | waistcoat and a box and you can walk into most SMEs without
             | anyone blinking an eye.
             | 
             | I've seen it done hundreds of times...
        
           | chaps wrote:
           | Yep :)
           | 
           | Did some consulting for an org that did managed IT and found
           | that they'd written all of their passwords on a whiteboard.
           | Wrote them an email basically telling them "hey maybe you
           | should erase that". May or may not have billed them for the
           | time it took to write that email.
           | 
           | They put a piece of paper over the passwords in response.
        
         | bpye wrote:
         | I've never had an outright bad experience reporting a security
         | issue, but some companies definitely aren't geared up to handle
         | reports. I found that an energy provider's API would give usage
         | information for past addresses and eventually I think the right
         | team got told, but it was a nightmare trying to find someone to
         | actually report the issue to.
        
           | chaps wrote:
           | It's hit and miss. Sometimes they want to throw you under the
           | bus. Sometimes they want you to sign affidavits. I've never
           | been asked to sign an NDA or anything like that. Sometimes
           | they threaten with criminal charges. DoJ recently released
           | some guidance about good-faith security reporting, so it
           | might be easier these days. Doubt that affects active
           | litigation/prosecution or vindictive orgs, though.
        
         | mcdwayne wrote:
         | Yikes. It is sad to hear stories like that, where security is
         | not a concern until panic sets in. :(
         | 
         | Yet another reason we need to adopt standards like security.txt
         | and make it as easy to report these things as it is to tell
         | robots to ignore us with robots.txt. See securitytxt.org for
         | more on the project.
        
           | dinvlad wrote:
           | I think the fundamental problem is, a lot of orgs just don't
           | care about security, as it doesn't affect their bottom-line.
           | Even breaches are only a temporary PR hit. The proper way
           | to address that might just be legislation, with heavy fines
           | based on total revenue.
           | 
           | That and also security is just hard to scale. That's why if
           | it was mandated by legislation, companies would be forced to
           | spend a comparable amount on scaling their security teams and
           | efforts.
        
           | bisby wrote:
           | It's tough. I'm on our public security reporting email list.
           | 
           | We get a lot of things that boil down to "When I go to your
           | website, I am able to see the content of your html files!"
           | ... yes, reporter. That is what a web server does. It gives
           | you HTML files. Congrats that you have figured out the dev
           | console in your browser, but you're not a hacker. I'm trying
           | to go with Hanlon's razor here and assume this is
           | inexperienced people and not outright scams.
           | 
           | We don't get a lot of these, but they far outweigh actual
           | credible reports. But we try our best and take everything
           | seriously until it can get disproven. And it's exhausting. So
           | I get it sometimes. Sometimes having a place for responsible
           | disclosure just opens you up to doing more paperwork
           | (verifying that the fake reports are fake). That said, we
           | still do it.
        
       | zricethezav wrote:
       | Good reminder to run Gitleaks or Gitleaks-Action on your repos
       | 
       | - https://github.com/zricethezav/gitleaks
       | 
       | - https://gitleaks.io/products
        
       | bootcat wrote:
       | dieyota !
        
       | eax-eip wrote:
       | Most of these (even sometimes expensive) tools only look at repos
       | and users who are associated with the company's GitHub org, which
       | barely solves the problem. The much harder problem is the number
       | of corporate secrets that are on random repositories (personal
       | dotfiles, automations, data science scripts, etc.) across GitHub
       | with no strong relationship to the organization. Try using GitHub
       | Code Search to find all the Fastly API tokens that have been
       | leaked, for example, and I bet you'd find some wild stuff.
        
         | lumberjack24 wrote:
         | GitGuardian actually does this, it monitors an extended
         | perimeter of devs and their personal/open-source repos for
         | corporate secrets or keywords -
         | https://www.gitguardian.com/monitor-public-github-for-secret...
        
       | ajross wrote:
       | "Production keys in source control" is right up there with
       | "mistaken routing table entry" and "fat-fingered DNS config" on
       | the list of critical company-breaking mistakes that you'd think
       | would be easy to avoid, but aren't.
        
         | bonestamp2 wrote:
         | I committed my google maps api key to a public github
         | repository recently and github immediately sent me a warning
         | about it. The thing is, I did it intentionally. The key is used
         | on my website and the website is served by github pages.
         | 
         | Now, it's an embedded maps api key, there's no cost to use it,
         | nobody can use it from a domain other than mine, and it's
         | easily visible in the page source if someone views that, so
         | there's really no reason not to commit it since even if I
         | didn't commit it somewhere publicly, it's still publicly
         | available on my website and nobody else can use it anyway.
         | 
         | Is this a reasonable exception to this rule?
        
           | tomschwiha wrote:
           | I just wondered what happens if you spoof the domain name
           | locally. Could you still use the API key?
        
             | bonestamp2 wrote:
             | Maybe. Thankfully there's no cost to use the key if someone
             | does it.
        
           | prepend wrote:
           | I could run your site locally with a customized host file so
           | the referers all come from your domain. I don't think it's
           | that much of a risk but I wouldn't want to use a key
           | associated with something that can bill me.
           | 
           | You could use GitHub Actions to build your Pages site,
           | injecting the API key at build time. It's stored as a repo
           | secret rather than in code. Of course since you deploy the
           | site publicly, the key will still be visible.
        
         | ElevenLathe wrote:
         | Add "failure to rotate TLS certs before they expire."
        
           | deathanatos wrote:
           | My company has monitoring for this, but it still seems to be
           | a law of nature that,
           | 
           | 1. someone adds new service/server/infra in a submarine
           | manner
           | 
           | 2. it goes to prod
           | 
           | 3. the cert expires and outage begins
           | 
           | 4. my team is asked what to do, because "we're the cert
           | experts"
           | 
           | 5. we add it to the monitoring
           | 
           | So it only happens once ... per service. Which isn't great.
           | But how do you get people to slow down and do simple shit,
           | like add a monitor to your new service? (Or promulgate a
           | design doc...)
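
For what it's worth, the per-service monitor in question can be tiny; a
minimal sketch using only the Python standard library (the hostname
argument and any alert threshold are placeholders you'd supply):

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after: str) -> float:
    """Parse the 'notAfter' field of ssl.getpeercert(), e.g.
    'Jun  1 12:00:00 2030 GMT', and return days remaining
    (negative if already expired)."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    return (expires - datetime.now(timezone.utc)).total_seconds() / 86400

def check_host(host: str, port: int = 443) -> float:
    """Fetch the host's TLS certificate and return days until expiry."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return days_until_expiry(cert["notAfter"])
```

Run it from cron over an inventory of hostnames and alert when the
result dips below, say, 30 days; the hard part, as the comment says, is
keeping that inventory complete.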
        
             | HL33tibCe7 wrote:
             | "If you keep smelling shit, look at your own shoe".
             | 
             | Your processes are failing your development teams, and you
             | need to fix them, rather than blaming your teams, which
             | achieves nothing.
        
             | prepend wrote:
             | Can you watch DNS for new entries and check port 443 for
             | a couple weeks to see if there's a TLS cert there?
        
             | ElevenLathe wrote:
             | I think the real answer is to only issue limited-duration
             | certs and only via automated means (ACME or similar), thus
             | requiring automation be in place from day 1.
             | 
             | This still doesn't protect against the vector where
             | somebody else in the company has managed to prove
             | themselves to be responsible parties to another CA/issuer.
        
             | D-Coder wrote:
             | 1. Clearly describe the correct process in your other
             | process documentation.
             | 
             | 2. Email everyone who might be involved a note about this
             | and a link to the documentation and why it is important.
             | 
             | 3. Next time someone ignores it, rip them _and_ their
             | manager a new orifice.
             | 
             | 4. Wait for word of #3 to spread.
             | 
             | Might help...
        
             | [deleted]
        
           | HWR_14 wrote:
           | But at least that failure just makes the services fail; it
           | doesn't open a security hole.
        
             | cmeacham98 wrote:
             | Except in the scenarios where the company's support starts
             | telling users to click through the warning, which I've seen
             | a few times.
        
             | [deleted]
        
         | coenhyde wrote:
         | Those three are not all equal. "Production keys in source
         | control" is the equivalent of a surgeon not washing their
         | hands between surgeries. It's a basic level of professional
         | competency that should not be violated. The latter two are bad
         | mistakes, which shouldn't happen but do.
        
           | deathanatos wrote:
           | And yet I see it get violated all the time. People _should_
           | do a lot of things, but a lot of my coworkers are _lazy_ and
           | do not do quality work. Given that it happens, and that I
           | can't prevent it, one must then ask how to guard against it.
           | 
           | At my org, we even try to generate all secrets with a
           | standardized prefix/suffix so as to make them _very_
           | greppable. That doesn't stop "Architects", "Customer
           | Solutions", "Analytics" types from ... just working around
           | the standard tooling and manually generating one by hand
           | because ... IDK, they think they know better? I really don't
           | get it.
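
The standardized-prefix idea is cheap to implement; a minimal sketch,
where the "acmeco_secret_" prefix is invented for illustration:

```python
import secrets

# Invented, org-wide prefix for illustration. The point is that a plain
# `grep -r acmeco_secret_` finds any copy that leaks into a codebase.
PREFIX = "acmeco_secret_"

def new_token(nbytes: int = 24) -> str:
    """Generate a random credential carrying the greppable prefix."""
    return PREFIX + secrets.token_urlsafe(nbytes)
```

Of course, as the comment notes, this only helps for secrets that are
actually minted through the standard tooling.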
        
             | coenhyde wrote:
             | Doctors used to not wash their hands too. I get it though,
             | and I've seen the same thing. Really it comes down to
             | education and not granting access to secrets to people who
             | aren't capable of handling them.
        
               | robswc wrote:
               | "fun" fact - there could potentially be thousands of
               | deaths attributed to doctors simply not washing their hands.
               | 
               | IIRC they even basically got some hospital admin fired
               | for creating a hand washing mandate, despite it being
               | proven to save lives.
               | 
               | https://www.npr.org/sections/health-
               | shots/2015/01/12/3756639...
               | 
               | (talking centuries ago, but maybe even today)
               | 
               | Looks like it's still a "recent" issue, lol https://www.ny
               | times.com/2006/09/24/magazine/24wwln_freak.htm...
        
               | coenhyde wrote:
               | That further improves the analogy. Even though we all
               | agree washing hands is important and saves lives, it
               | still doesn't happen on occasion.
        
           | grepLeigh wrote:
           | Surgeons have a practiced ritual ("scrubbing") to prep for
           | surgery. Do you practice a credential-scanning ritual before
           | saving (committing) your code or pushing your code to a
           | remote repo?
           | 
           | I have git hooks to lint code syntax, but nothing for
           | scanning for leaked credentials. Looking @ TruffleHog now,
           | mentioned by another poster.
        
             | justinpombrio wrote:
             | A nice approach, if you have sufficient control over the
             | form of your secrets, is to prefix each secret with
             | "MY_COMPANY_SECRET_DO_NOT_COMMIT:". Then you can add a
             | commit hook that refuses to commit if any committed file
             | contains that substring, etc. etc.
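
A minimal sketch of such a hook, using the hypothetical marker string
from the comment above (nothing here is a real git API; the hook just
shells out to `git diff`):

```python
import subprocess
import sys

# The hypothetical do-not-commit marker from the comment above.
MARKER = "MY_COMPANY_SECRET_DO_NOT_COMMIT:"

def added_lines_contain(marker: str, diff_text: str) -> bool:
    """True if any line added in a unified diff carries the marker."""
    return any(
        line.startswith("+") and marker in line
        for line in diff_text.splitlines()
    )

def main() -> int:
    """Return 1 (aborting the commit) if staged changes hold the marker."""
    diff = subprocess.run(
        ["git", "diff", "--cached", "--unified=0"],
        capture_output=True, text=True, check=True,
    ).stdout
    if added_lines_contain(MARKER, diff):
        sys.stderr.write("refusing to commit: secret marker staged\n")
        return 1
    return 0
```

Saved as an executable `.git/hooks/pre-commit` ending in
`sys.exit(main())`, git runs it before every commit and a nonzero exit
aborts the commit. Client-side hooks are trivially bypassed, so a
server-side pre-receive hook or CI check is needed to truly enforce it.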
        
               | lumberjack24 wrote:
               | Great idea, but hard to enforce. Just use a scanning CLI
               | like TruffleHog, Gitleaks, or ggshield from GitGuardian
               | to catch all sorts of hardcoded secrets.
        
             | zricethezav wrote:
             | Gitleaks also offers a nice pre-commit hook:
             | https://github.com/zricethezav/gitleaks#pre-commit
        
             | coenhyde wrote:
             | That's certainly a good idea. But the secrets shouldn't be
             | in the codebase to begin with, certainly not production
              | secrets. Production secrets should stay in production, and
              | no one should have access. Whatever intends to use the production
             | secrets should have first been developed in a dev
             | environment and released to prod.
        
           | ajross wrote:
           | "Should" not be violated is the point, though. I agree, it
           | shouldn't. But it is, all the time.
           | 
           | I mean, I'll bet Toyota knew this organizationally. They had
           | security people sign off on the design who all knew how
           | secure key management is supposed to work. They probably
            | reviewed this GitHub release. And it happened anyway.
           | 
           | Maybe they weren't supposed to be production keys. Maybe it
           | was a development key from someone's early cut of the tooling
           | that got mistakenly reused for production. Maybe a script
           | that updated all the keys mixed up which was which.
           | 
           | The point is that the existence of a Clear And Unambiguous
           | Right Thing to Do is, in practice, _not sufficient to ensure
           | that that thing is done_. The space of ways to mess up even
           | obvious rules is too big.
           | 
           | And that's surprising, which is why (1) it keeps happening
           | and (2) people like you don't take the possibility seriously
           | in your own work.
        
             | coenhyde wrote:
             | You're jumping to conclusions in your final statement
             | there. The existence of inexcusable bad practices does not
             | mean we should not try to mitigate against them, and I
             | didn't say we shouldn't.
        
         | wereHamster wrote:
         | > Production keys in source control
         | 
          | IaC, right? If you don't put keys into the code you can't have
         | Infrastructure as Code. Without keys the code only partially
         | defines your infrastructure.
        
           | coenhyde wrote:
           | You can put your keys in source control if you are encrypting
           | them with another key which is not in source control.
           | Otherwise you're doing it wrong.
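coenhyde's approach can be sketched with openssl; the file names and the MASTER_KEY variable are illustrative, and in practice tools like sops or git-crypt automate this same pattern.

```shell
# The master key stays OUT of source control -- in real life it is injected
# by CI/ops secret storage; it is hardcoded here only for the demo.
export MASTER_KEY="correct-horse-battery-staple"

# Encrypt the plaintext secrets; commit only secrets.env.enc.
echo "DB_PASSWORD=hunter2" > secrets.env
openssl enc -aes-256-cbc -pbkdf2 -salt \
    -in secrets.env -out secrets.env.enc -pass env:MASTER_KEY
rm secrets.env

# At deploy time, decrypt with the same out-of-band key.
openssl enc -d -aes-256-cbc -pbkdf2 \
    -in secrets.env.enc -pass env:MASTER_KEY
```

The ciphertext in the repo is useless without the master key, so a leak of the repo alone no longer leaks the secrets.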
        
           | nicholasjarnold wrote:
           | Well, I guess in the purest possible sense you're correct.
           | 
           | However, I'm currently working with a group using Terraform
           | on GCP (GKE), and it's popular with them to use Secret
           | Manager to manually create a secret in there (when it cannot
           | be auto-gen'd with the IaC, a fairly small subset of things)
           | and then reference that secret from the infra-defining code.
           | 
           | I think of it as being akin to "this service requires a
            | correctly configured FOO_BLAH variable in its environment".
           | I don't really see it as any failure of achieving some IaC
           | goal, but defining infrastructure code isn't my primary
           | function, so take this with a grain of salt.
        
           | JauntyHatAngle wrote:
            | Huh? No, you use an external secrets manager, or leave it to
            | runtime environment vars, or leave it to your CD servers to
            | access/supply those details.
           | 
           | Assuming you have been given the right tooling, there is no
           | reason for it to be in source code.
        
             | danenania wrote:
             | If anyone out there is using environment variables
              | currently, and is interested in a quick path to plugging the
             | leaks in their secrets management, check out EnvKey[1]
             | (disclaimer: I'm the founder).
             | 
             | Because EnvKey integrates tightly with environment
             | variables, no app code changes are needed to switch, so it
             | only takes a minute or two to import/integrate a typical
             | app.
             | 
             | EnvKey is designed to help avoid incidents exactly like the
             | one that just hit Toyota, while staying out of your way and
             | even making your life significantly _easier_ as a developer
             | by simplifying many aspects of config management.
             | 
             | Give it a look if you know you have some room for
             | improvement in this area and are looking for an easy,
             | secure, open source, end-to-end encrypted solution :)
             | 
             | 1 - https://envkey.com
        
           | moooo99 wrote:
            | Even with IaC, you should store secrets only in environment
            | variables on your CI servers
        
       | dinvlad wrote:
        | I wish hosted GitHub made server-side pre-receive hooks available
        | to everyone. It would make this a much easier problem with free
        | scanning tools like Trufflehog.
       | 
       | Or alternatively, if GitHub Secret Scanning was available to all
       | public repos, instead of requiring a (very) expensive GitHub
       | Advanced Security subscription. But I understand, they need to
       | make money somehow.
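Client-side, a pre-push hook can approximate this today. A minimal sketch: the two regexes are a tiny illustrative stand-in for a real ruleset, and in practice the hook body would just invoke Trufflehog or Gitleaks.

```shell
#!/bin/sh
# .git/hooks/pre-push (sketch) -- scan outgoing commits for secret-shaped
# strings. The patterns are illustrative; real scanners cover far more.

# Succeed (exit 0) if stdin contains something secret-shaped, e.g. an
# AWS-style access key ID or a PEM private key header.
looks_like_secret() {
    grep -E -q -e 'AKIA[0-9A-Z]{16}' -e 'BEGIN (RSA |EC )?PRIVATE KEY'
}

# A real hook reads "<local ref> <local sha> <remote ref> <remote sha>"
# lines from stdin and scans only the outgoing range, roughly:
#   git log -p "$remote_sha..$local_sha" | looks_like_secret && exit 1
```

Because pre-push receives the refs being pushed on stdin, it can scan just the commits leaving the machine instead of the whole history.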
        
         | greysteil wrote:
         | (GitHub PM here.) The Advanced Security secret scanning
         | experience is coming to public repos (for free, obviously)!
         | Give us a few more months - we have a little more work to do
          | scaling it up.
        
           | dinvlad wrote:
           | Awesome, thanks for the heads-up! We will definitely look
           | forward to it, either to replace or enhance our current in-
           | house solution.
        
         | lumberjack24 wrote:
         | In the meantime, try ggshield cli
         | https://github.com/GitGuardian/ggshield
        
           | dinvlad wrote:
           | Nah thanks, I'm already running Trufflehog for free on all of
           | our multiple orgs' thousands of repos.
           | 
           | I think we would consider GG if its pricing was acceptable
           | for non-profits though.
        
             | mcdwayne wrote:
             | FYI - GitGuardian is free for individuals and teams smaller
             | than 25
        
               | dinvlad wrote:
                | As I said earlier, we have thousands of repos. And our
               | teams are thousands of users on GH.
        
       | userbinator wrote:
       | Too bad this wasn't for their ECUs or something else that
       | could've benefited right-to-repair.
        
       | ajsnigrutin wrote:
       | Right to repair bad, independent repair centers will expose your
       | private data!
       | 
       | - Toyota
       | 
       | ( reference https://www.youtube.com/watch?v=GTiLnz23TNs )
        
       ___________________________________________________________________
       (page generated 2022-10-13 23:00 UTC)