[HN Gopher] Launch HN: Slauth (YC S22) - auto-generate secure IAM policies for AWS and GCP
       ___________________________________________________________________
        
       Launch HN: Slauth (YC S22) - auto-generate secure IAM policies for
       AWS and GCP
        
       Hi HN, We're Daniel and Bruno, and we're working on
       https://slauth.io/. Slauth.io is a CLI that auto-generates
       secure IAM policies for AWS and GCP (Azure in the next few
       days!). We help development teams create secure policies faster
       and reduce the number of over-permissive policies deployed to
       the cloud.  Check out the video or give our open-source CLI a
       try with one of the sample repos on
       https://github.com/slauth-io/slauth-cli
       https://www.loom.com/share/bd02211659eb4c7f9b335e34094b57cb?...
       We got into the cloud access market by coincidence and were
       amazed by the amount of money spent on IAM. Current tooling
       such as http://Ermetic.com and http://wiz.io/ visualizes IAM
       misconfigurations post-deployment but doesn't actually change
       engineering behavior, leaving organizations in a constant
       loop: engineers deploy over-permissive policies -> security
       engineers/CISOs get alerts -> Jira tickets are created begging
       developers to remediate -> new over-permissive policies get
       deployed again.
       We interviewed hundreds of developers and DevOps engineers and
       discovered two key pain points:  1. *IAM is a hassle:*
       developers despise dealing with IAM intricacies. 2. *Speed vs.
       security:* IAM slows them down in shipping quality code.  So
       the objective is to automate policy creation, so that
       developers don't have to deal with it, and to harden IAM
       security pre-deployment.
       We employ Large Language Models (currently OpenAI GPT-4) to
       scan code in any language. Through a series of prompts, we
       identify service calls and the actions they require. The
       resource name receives a placeholder unless it's embedded in
       the code. In the future we aim to build a static code analyzer
       so that no source code is sent to LLMs, but for now LLMs are
       the fastest way to market and somewhat accepted by the
       industry through the use of GitHub Copilot etc.  You can use
       the CLI in the terminal, or integrate it into your CI/CD and
       have it become part of your development team's workflow.
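
       To make that concrete, here is an illustrative sketch (the
       bucket and key names are made up) of the mapping we aim for:
       an SDK call found in your code, and the least-privilege
       statement the scan should produce for it.

           import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

           // Application code under scan: a single S3 write, with the
           // resource name embedded in the code.
           const s3 = new S3Client({});
           await s3.send(new PutObjectCommand({
             Bucket: "my-bucket",
             Key: "reports/latest.json",
             Body: "{}",
           }));

           // The statement the scan should derive for the call above.
           // Note the "/*": s3:PutObject is an object-level action.
           const expectedStatement = {
             Effect: "Allow",
             Action: ["s3:PutObject"],
             Resource: ["arn:aws:s3:::my-bucket/*"],
           };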
       Three main questions we receive  1. *Security concerns:* How
       can I trust Slauth.io (http://slauth.io/) to access my source
       code? 2. *Policy accuracy:* How can I trust Slauth.io to
       create the right policies? 3. *Differentiation:* How are you
       different from IAMLive, IAMbic, Access Analyzer, or Policy
       Sentry?
       To address the first concern: we don't access your code
       directly. Instead, we offer a CLI that integrates into your
       CI/CD pipeline, allowing local code scanning. Slauth.io uses
       your OpenAI key to convert the code into a secure policy, with
       the option to output results to stdout or to a file for
       artifact upload and download. That does mean OpenAI has access
       to the source code located in the path you set to be scanned,
       as we need to know which SDK calls are performed in order to
       generate the policies.
       We have extensively tested it on AWS, TypeScript, and GPT-4
       with very good results (>95% accuracy). We know accuracy drops
       with GPT-3.5, so use GPT-4 if possible while we improve the
       prompts. GCP and Azure have been tested less, but the results
       with GPT-4 seem equally high. We have also seen some
       hallucinations, but they have affected only the structure of
       the generated policy, not whether the policy is secure. That
       is not to say it is 100% reliable, hence we aim to provide
       tooling to double-check policies through policy simulators and
       other means.
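
       As a sketch of the kind of double-check we mean (the policy
       and action names here are hypothetical), a generated policy
       can be fed to the IAM policy simulator before it ships:

           import { IAMClient, SimulateCustomPolicyCommand } from "@aws-sdk/client-iam";

           const generatedPolicy = {
             Version: "2012-10-17",
             Statement: [{
               Effect: "Allow",
               Action: ["s3:PutObject"],
               Resource: ["arn:aws:s3:::my-bucket/*"],
             }],
           };

           // Ask the simulator whether the policy allows the calls the
           // code makes -- and denies the ones it shouldn't.
           const iam = new IAMClient({});
           const res = await iam.send(new SimulateCustomPolicyCommand({
             PolicyInputList: [JSON.stringify(generatedPolicy)],
             ActionNames: ["s3:PutObject", "s3:DeleteBucket"],
           }));
           for (const r of res.EvaluationResults ?? []) {
             console.log(`${r.EvalActionName}: ${r.EvalDecision}`);
           }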
       Compared to competitors, we focus mainly on generating secure
       policies pre-deployment and on automating as much as possible.
       We were inspired by IAMLive, but it wasn't as scalable to use
       across development teams. Policy Sentry is great for
       templates, but with Slauth.io you actually get a granular
       least-privilege policy. Lastly, Access Analyzer is used to
       harden policies which have already been deployed, which is
       similar to other cloud scanning tools and creates a strangely
       reactive security process. The new Access Analyzer feature
       checks policy diffs in your CDK, but again doesn't actually
       generate the policy pre-deployment.
       We recognise some engineers are very capable of creating
       secure policies, but similar to using Checkov and TFscan in
       the context of IaC deployment, we believe using Slauth.io will
       become a necessity in your CI/CD when deploying service
       permissions, to make sure no IAM misconfigurations appear in
       the cloud.  We would love to get your feedback; feel free to
       interact on our GitHub repo and join our Slack community.
        
       Author : DanielSlauth
       Score  : 108 points
       Date   : 2023-12-04 13:10 UTC (9 hours ago)
        
       | verdverm wrote:
       | Repost: https://news.ycombinator.com/item?id=34038663 (11 months
       | ago)
       | 
       | > We employ Large Language Models (currently OpenAI GPT-4)
       | 
       | For IAM, this seems like a disaster waiting to happen. Combining
       | hallucination problems with security settings is not a path I
       | would consider
        
         | DanielSlauth wrote:
         | Do you think humans are doing a better job? Research shows
         | that 95% of the permissions granted to users aren't used,
         | which creates huge problems and is a reason for spending
         | millions on security tools. Why not use Slauth, plus other
         | checks such as policy simulators, to get tightened policies
         | pre-deployment?
        
           | verdverm wrote:
           | I'm not your target user, I don't feel the priority on this
           | problem even though our permissions are more permissive than
           | we'd like. Thing is, to rein them in typically requires
           | application changes. You cannot just sprinkle magic LLM dust
           | on IAM and make things better.
           | 
           | My concern is for those who blindly trust LLMs. Security
           | posturing is not the place to be an early adopter of AI
           | tools. You have to understand both IAM and system
           | architecture to know if what the LLM is saying is correct, so
           | where does that leave us?
           | 
           | I think they can be an extra pair of eyes, but not the
           | driver. Still, there is a signal to noise problem that
           | remains, due to the inherent hallucinations.
        
             | thehucklecat wrote:
             | what kind of application changes are you thinking it
             | would require?
             | 
             | my policies are definitely too broad, but feels like I
             | should be able to tighten them up without changing code.
             | (just potentially breaking things if I get it wrong and go
             | too tight).
        
               | verdverm wrote:
               | Some scenarios
               | 
               | 1. The application has to start using credentials for the
               | first time, or consume them a different way. For example,
               | stop consuming an environment variable and rely on a
               | service account.
               | 
               | 2. You have to change ops to support new workflows. Often
               | you have to put approval workflows in place because fewer
               | people can do things and you want only the machines
               | touching production
               | 
               | 3. You have to change human behaviors and habits (this is
               | the real hard one). I've had to revert changes because
               | the increased security blocked developers and they don't
               | have time to adapt for the next deadline.
               | 
               | 4. Getting parity in local development workflows is
               | also challenging. How and where do you match vs.
               | exempt from IAM parity?
               | 
               | 5. Should I give the current server access to a
               | particular cloud service/resource or break out that
               | particular function into a lambda and minimize the
               | permissions there? You have to think through the
               | implications of a breach and how/where you want to limit
               | the blast radius.
               | 
               | 6. This is probably obvious, but implementing application
               | level controls, like API endpoint permissioning. IAM is
               | not limited to cloud infra
        
               | DanielSlauth wrote:
               | The open-source project is a CLI you can put into
               | your CI/CD, so I think it's a pretty neat workflow
               | with less friction, since DevOps/security don't need
               | to ping-pong on permissions.
        
               | verdverm wrote:
               | when you keep telling, you ain't selling
               | 
               | ask questions to deepen your understanding
               | 
               | > ...ping-pong...
               | 
               | It was a scheduling problem rather than a decision
               | problem. The impact radius is always more than you
               | anticipate
        
             | DanielSlauth wrote:
             | First of all, it's pretty awesome your permissions are
             | very tight. You are definitely on the other side of the
             | spectrum compared to the rest. I get that there is a
             | lot of skepticism because of people hyping LLMs, so
             | indeed, for now we use it as a copilot and not the
             | driver. Hopefully you can agree, though, that it's
             | pretty odd that we are still manually creating IAM
             | policies and need to get accustomed to thousands of
             | different permissions :)
        
               | verdverm wrote:
               | We are actively working on reining in permissions, I
               | would not call them "tight". It's just not a top 3
               | priority, though that is likely changing with the
               | upcoming SOC2 efforts. I still don't see us reaching for
               | LLMs to help us here.
               | 
               | I'm not saying don't use them, just use them as an extra
               | pair of eyes, mostly to catch errors rather than to drive
               | and architect
               | 
               | > get it that there is a lot of skepticism because of
               | people hyping LLM's
               | 
               | The skepticism is not from the hype, it's from
               | experiencing LLM output personally. They are fine if the
               | output can be fuzzy, like a blog post or a function
               | signature, not so much if there is a specific and fragile
               | target.
        
               | vasco wrote:
               | To add a plus one here: as soon as I learned there
               | are LLMs involved, this became a non-starter for me.
               | I'd rather have less granular policies than risk some
               | LLM doing something crazy.
               | 
               | I can justify to management that we have limited time for
               | IAM and something was missed that we can fix / create
               | tests / scans for after an incident. It's harder to
               | explain that we chose a vendor that uses a non-
               | deterministic tool that can hallucinate for one of the
               | most core security pieces of the puzzle.
        
             | wg0 wrote:
             | Absolutely not. Anywhere accuracy, precision, and
             | safety matter, throwing LLMs into the mix is
             | irresponsible IMHO, or too optimistic, or possibly not
             | understanding how these giant arrays of floating point
             | numbers work, or just hoping for the best.
             | 
             | Similarly, LLMs used for SQL generation meant for
             | business analytics are another critical area: if the
             | numbers are wrong, it might lead to a business going
             | bankrupt.
             | 
             | For a prototype or a fun exercise, sure, go all in.
        
           | jsploit wrote:
           | > Research shows that 95% of the permissions granted to users
           | aren't used which creates huge problems and is a reason for
           | spending millions in security tools.
           | 
           | It'd potentially cost millions more to recover from a GPT-4
           | disaster.
        
           | milkshakes wrote:
           | that's a false dichotomy. there are approaches to this
           | problem that are powered by neither humans nor LLMs -- see
           | https://github.com/Netflix/Repokid as an example
        
           | lijok wrote:
           | > Research shows that 95% of the permissions granted to users
           | aren't used
           | 
           | These would be the "s3:*" and "Resources: *" scoped
           | permissions I assume? I can't imagine users are explicitly
           | typing out permissions, 95% of which are not relevant for the
           | task.
           | 
           | > which creates huge problems
           | 
           | Such as? What is the material impact of a workflow or a user
           | having too many permissions?
           | 
           | > and is a reason for spending millions in security tools
           | 
           | Are you claiming that overscoped IAM permissions alone are
           | responsible for 1M+ security tooling bills in companies?
           | Would you be willing to share information on which tools
           | these are?
        
             | kkapelon wrote:
             | > Such as? What is the material impact of a workflow or a
             | user having too many permissions?
             | 
             | Security obviously
             | https://en.wikipedia.org/wiki/Principle_of_least_privilege
        
               | verdverm wrote:
               | That is the "theoretical" problem
               | 
               | How many times have excess permissions "actually" been
               | the problem... versus something like correct permissions
               | with compromised credentials?
        
               | kkapelon wrote:
               | I am not a security expert by any means, but there
               | are several stories of excess permissions that
               | resulted in security breaches. The last one I
               | remember was here on HN; I think it was a bug bounty
               | for Facebook where a QA system could affect
               | production. The bug bounty person "broke" production
               | by breaking into the QA system.
               | 
               | By the way, I have no affiliation with slauth.io
               | (just found them today as well). I just think that
               | https://en.wikipedia.org/wiki/Principle_of_least_privilege
               | is something good to follow in critical systems.
        
               | blincoln wrote:
               | It's hard to know with any kind of accuracy how often it
               | comes up in real breaches, but we exploit it all the time
               | to great effect in pen testing.
               | 
               | A few examples I've seen repeatedly:
               | 
               | * An AWS-hosted container/artifact/CI/CD application has
               | an SSRF vulnerability that can be used to retrieve IAM
               | instance credentials. Because micromanaging permissions
               | is hard, and the application needs to access so much
               | content in S3, spin up/down instances, etc. it has
               | ec2:* and s3:*. Unless the organization has created a
               | separate AWS account for this platform specifically,
               | it's probably game over at that point.
               | 
               | * An internet-facing MDM solution has a code execution
               | vulnerability. Because the vendor didn't want to document
               | all of the individual permissions it needs, the
               | installation instructions specify that it should run as
               | an account with Domain Admin permissions in AD. That is
               | definitely game over for most organizations, because even
               | systems that don't authenticate against AD are almost
               | always accessed from systems that do.
               | 
               | Micromanaging permissions is hard in a big organization.
               | I saw it done well, years ago, in Active Directory, but
               | it took several FTEs who were personally interested in
               | the topic to set up and manage, and that was a
               | traditional big business IT environment. In a startup-
               | style free-for-all, good luck. I don't have an opinion
               | either way on Slauth specifically, but something that
               | generates IAM policies procedurally seems like a step in
               | the right direction.
        
               | lijok wrote:
               | If you're trying to sell a tool, you don't justify its
               | cost by saying it addresses "huge problems" such as
               | "security". Lets talk material impact; how will this tool
               | pay for itself?
        
               | verdverm wrote:
               | I think it's supposed to be like insurance. The cost of
               | bad things happening inspires you to pay for things that
               | give you peace of mind. I don't trust LLMs to give me
               | peace of mind for security tasks, if anything, the
               | opposite
        
               | kkapelon wrote:
               | Sorry, I am not trying to sell anything. I am not OP or
               | parent poster.
               | 
               | If you want stories of privilege escalation, they
               | should be easy to find. I also have some of my own,
               | which I might describe in another post, but
               | essentially it was the classic: a CI/CD pipeline that
               | "thinks" it has access only to QA does a "destroy all
               | servers" in both QA and production, because it also
               | had access to production without knowing anything
               | about it.
        
               | verdverm wrote:
               | Famous HN (reddit) post:
               | https://news.ycombinator.com/item?id=14476421
               | 
               | "Accidentally destroyed production database on first day
               | of a job"
        
               | kkapelon wrote:
               | I also like the "integration tests reaching production"
               | as well https://news.ycombinator.com/item?id=27546017
        
             | rtkwe wrote:
             | It's the constant tug of war between the idealized
             | security posture, where users have just enough access
             | to do their jobs, and the fact that it's hard to know
             | the precise access you need until you get the task, at
             | which point the idealized process of reviewing and
             | granting access takes too long and really drags down
             | your development pace.
             | 
             | At my job, for example, we don't have a separate
             | support team for the ETL work we do, so I have a lot of
             | access I don't use unless things are breaking, and then
             | I can't wait for the access approval process to get
             | added to database XXX or bucket YYY to diagnose what
             | data has broken our processes.
        
           | jmathai wrote:
           | One challenge will be similar to self driving cars. The error
           | / fatality rates need to be several orders of magnitude lower
           | than for human operators for it to be acceptable.
        
           | candiddevmike wrote:
           | AWS and GCP already provide tools to show excess
           | permissions...
        
             | verdverm wrote:
             | The pain there is often that a pre-configured role with
             | a slew of permissions was used, and you actually need
             | to craft a new role with the right permissions.
             | 
             | I wrote some code once to fetch all those preconfigured
             | role permissions and then present them in a more digestible
             | way
        
         | jgalt212 wrote:
         | I dunno. LLM generated config + formal verification could work.
        
           | slalmeidabbm wrote:
           | This would be the way to go with the initial offering.
           | Adding static code analysis + LLMs will help reduce LLM
           | usage and hallucinations, and then adding a way to test
           | the policies, to make sure they are enough to run the
           | code without being too broad, will increase trust in the
           | results.
        
         | coredog64 wrote:
          | If it were me, I'd still run QC tools on the generated policy
         | just like I would for manually authored policies. Specific to
         | AWS, the IAM Access Analyzer will confirm that you're using
         | correct grammar. Further, there are techniques like SCP and
         | permission boundaries to downscope what would normally be all
         | actions/resources.
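          | 
          | For example (a rough sketch, not Slauth's tooling; it
          | assumes you already have the generated policy as a JSON
          | string), Access Analyzer's ValidatePolicy API can lint a
          | policy before anything deploys:
          | 
          |     import { AccessAnalyzerClient, ValidatePolicyCommand }
          |       from "@aws-sdk/client-accessanalyzer";
          | 
          |     const aa = new AccessAnalyzerClient({});
          |     const generatedPolicyJson = "..."; // assumed: policy under test
          |     const { findings } = await aa.send(new ValidatePolicyCommand({
          |       policyDocument: generatedPolicyJson,
          |       policyType: "IDENTITY_POLICY",
          |     }));
          |     // Findings flag invalid actions, bad grammar, and
          |     // overly permissive constructs.
          |     for (const f of findings ?? []) {
          |       console.log(f.findingType, f.issueCode, f.findingDetails);
          |     }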
        
         | justrealist wrote:
         | The space of "real" options in IAM is small enough that
         | hallucination is not a real problem.
         | 
          | Anecdotally, I've used Copilot to help write a lot of IAM
          | policies in Terraform and the accuracy is basically 100%
          | already.
        
           | tjpnz wrote:
           | Same could be said for ECS container definitions yet ChatGPT
           | will happily give you a set of parameters which don't exist.
        
           | verdverm wrote:
           | human in the loop, during the prompt->gen phase, makes a huge
           | difference. You can hit backspace and try different things
           | 
           | With an API that has a hidden / predefined prompt, you'll run
           | into hallucinations that are harder or impossible to handle
        
         | debarshri wrote:
         | It feels like taking one security problem and creating another
         | problem.
        
           | verdverm wrote:
           | Yeah, when it breaks, and the human didn't write it, it will
           | be a lot harder to fix. It's like being responsible for the
           | output of a junior programmer
           | 
           | My general approach is to spend more time up-front, so when
           | you are in the heat of a problem, you don't have to learn
           | under pressure. I think my beard is graying
        
       | justchad wrote:
        | I imagine there will be times when the LLM hallucinates or
        | gives too many permissions. I'm sure it is still better than
        | the try-and-see approach that most humans take when it comes
        | to IAM setup. I'm just wondering if this product becomes
        | irrelevant as Amazon Q rolls out, probably with some of this
        | functionality baked in.
       | 
       | Regardless, anything to make IAM provisioning easier is worth a
       | go as long as you verify the results using a simulator or
       | something.
        
         | DanielSlauth wrote:
          | Thanks, and let's see what Q will look like. I'm hoping the
          | project will evolve further toward integrating into your
          | CI/CD, opening PRs when a commit requires IAM changes.
        
       | zebomon wrote:
       | I know you can't take responsibility for the 5% failure rate
       | GPT-4 produces, but maybe things change when you have your
       | simulator running. At that point, what kind of SLA do you plan on
       | offering with the service?
        
         | verdverm wrote:
         | More important than SLAs is who takes on the liability for
         | these mistakes? With the changing laws and regulations for
         | breaches, do I want to rely on an LLM that isn't going to own
         | that liability?
        
           | zebomon wrote:
           | Yeah, I think it's a curious decision to have launched this
           | thing with the LLM alone, as using an LLM for something this
           | potentially disastrous is a moot point for me. If they can
           | get a formal simulator running with it, on the other hand,
           | then I'd imagine they may feel more comfortable putting out a
           | guarantee and taking on some kind of liability themselves.
        
           | DanielSlauth wrote:
            | Perhaps I should have emphasized better that the LLMs
            | aren't trustworthy by themselves and require several
            | extra checks. These would be policy simulators,
            | connecting to cloud environments, and running checks in
            | dev/staging.
            | 
            | Again, I understand the skepticism about using LLMs, but
            | currently everything is done manually, and it shows that
            | doesn't work well. So using LLMs is a quick way to
            | improve the current situation, and hopefully we can
            | further complement it with checks and balances.
        
             | verdverm wrote:
             | > but currently everything is done manually and it shows
             | that doesn't work well
             | 
             | If it is all done manually, and there are both good and bad
             | IAM setups, can you really extrapolate to "manual" being
             | the root cause? How can you even get an LLM to produce
             | secure policies without having existing secure policies to
              | train on? The entire premise seems off and misleading
              | to me
             | 
             | I would expect a hands-off approach to have worse outcomes
        
         | DanielSlauth wrote:
          | I believe the minute you connect a dev or staging
          | environment to Slauth.io, so we can run simulations and
          | show diffs, we can offer pretty strong SLAs.
        
       | bradleyy wrote:
       | I love the idea of this.
       | 
       | The big disconnect here is I can't share code with OpenAI for
       | various reasons.
       | 
        | Would you consider using something like AWS Bedrock +
        | Anthropic Claude, where we have better (more predictable)
        | risk profile and control over data-sharing, etc.?
        
         | SOLAR_FIELDS wrote:
         | I've thought for a long while that using OpenAI on things that
         | touch internal infra/data components is closing off a huge
         | potential market. I've had a few people on the cutting edge of
         | AI tell me that the risk profile is acceptable for a lot of
         | companies, but I'm extremely skeptical that's the case based on
         | my own experience building cloud and data infra SaaS. Corps
         | want self hosted solutions for their most critical components.
          | I might be proven wrong, but the people telling me I'm
          | wrong often have a vested interest in that not being true
          | (building some SaaS product where the hockey-stick growth
          | the VCs want can only be achieved by putting the eggs in
          | the cloud basket).
        
         | sbarre wrote:
         | Yeah we're in the same boat, I was somewhat excited about this
         | at first read and then I got to "LLMs" and "OpenAI" and I just
         | stopped reading. :-(
         | 
         | But I'm sure others are ok with it, so that's great.
        
       | joshuanapoli wrote:
       | I was excited to read your project description. It would be
       | really great to automatically align the security policy for each
       | component with the intent of the component's author. Tightening
       | an overly permissive policy is an awful job. I think that it
       | often has to be done through a long trial-and-error process;
       | remove all the permissions, and add back permissions one by one
       | in response to observed program failures. So it's great to see
       | another way to avoid that tedious chore.
       | 
       | A challenge with Slauth will be to organize the generated
       | policies in a way that makes them legible. I would like the IAM
       | policy to help clarify its intent. Allowing each in-use API
        | endpoint is technically required to let the service work. It
        | might be technically following the principle of least
        | privilege.
       | But the endpoint-by-endpoint rules do a poor job of summarizing
       | the purpose of the policy or how it relates to the service. One
       | way that we do this is by having resource providers declare
       | managed policies that allow use of the resource. So the
       | "encabulator" provider also defines a "mutate encabulator"
       | managed policy. Then services that need to invoke "mutate
       | encabulator" can reference the managed policy. They don't need to
       | compute the ARN of every endpoint. The dependent service doesn't
       | end up with an inline policy that has detailed dependencies on
       | the implementation details of each target resource.
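        | 
        | Roughly like this (a CDK-style sketch; the "encabulator"
        | names are the fictional ones from above):
        | 
        |     import * as cdk from "aws-cdk-lib";
        |     import * as iam from "aws-cdk-lib/aws-iam";
        | 
        |     const app = new cdk.App();
        |     const stack = new cdk.Stack(app, "Demo");
        | 
        |     // The resource provider declares the managed policy
        |     // alongside the resource it governs.
        |     const mutateEncabulator = new iam.ManagedPolicy(stack,
        |       "MutateEncabulator", {
        |         statements: [new iam.PolicyStatement({
        |           actions: ["encabulator:Mutate"], // fictional action
        |           resources: ["arn:aws:encabulator:::thing/1"],
        |         })],
        |       });
        | 
        |     // A dependent service just attaches the policy -- no
        |     // per-endpoint ARN math in the consumer.
        |     const serviceRole = new iam.Role(stack, "ServiceRole", {
        |       assumedBy: new iam.ServicePrincipal("ecs-tasks.amazonaws.com"),
        |     });
        |     serviceRole.addManagedPolicy(mutateEncabulator);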
        
       | Boxxed wrote:
       | Interesting use of GPT; it's cool that it works as well as it
       | does but I'd be nervous about the various insidious ways it can
       | fail.
       | 
       | On another note, are there tools that will scan your AWS/GCP logs
       | and emit configuration to limit permissions to what you're
       | actually using? I could even see GPT doing better here too, or at
       | least it would be easier to test.
        
         | slalmeidabbm wrote:
         | We're currently focusing on a full shift-left approach to
         | policy creation. Using AWS/GCP logs to create policies would
         | work very well but it would need a few things to happen:
         | 
          | 1. The service needs to be deployed.
          | 
          | 2. To produce an actual result, the calls that make use of
          | the SDK need to be triggered.
         | 
         | This is something that would be better included as an addition
         | to monitor policy usage and adjust.
        
       | biccboii wrote:
       | If I were going to use ChatGPT to generate my IAM policies, why
       | do I need a middleman to do that?
        
         | sbarre wrote:
         | It sounds like the value they bring is the custom prompts
         | they've written?
         | 
         | And probably some quality-of-life wrappers around all that
         | process?
        
       | kkapelon wrote:
       | A bit off topic but possible name clash/confusion with
       | https://www.sleuth.io/
        
       | Eridrus wrote:
        | IAM is horrific, but I feel like it's not really the
        | application-specific stuff that is annoying for me; it's the
        | stuff that AWS wants configured for AWS features to work,
        | and the fact that the error messages when you get it wrong
        | are useless at pinpointing your mistake when you do not know
        | that access is mediated by IAM.
        | 
        | Just as an example, I set up a containerized app on Fargate
        | with a custom role, and needing to configure the ability for
        | ECS to assume the role, read from ECR, write to CloudWatch,
        | and create channels for debugging was super annoying.
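        | 
        | (For reference, the assume-role piece alone looks like this
        | -- a minimal trust policy, before you even get to the
        | ECR/CloudWatch grants:)
        | 
        |     const trustPolicy = {
        |       Version: "2012-10-17",
        |       Statement: [{
        |         Effect: "Allow",
        |         Principal: { Service: "ecs-tasks.amazonaws.com" },
        |         Action: "sts:AssumeRole",
        |       }],
        |     };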
       | 
       | Comparatively, having a policy for it to read from an s3 bucket
       | with my data was trivial.
        
         | lasermike026 wrote:
         | You get used to it.
        
           | Eridrus wrote:
           | Sure, you can get used to anything, but it still sucks and is
           | worth trying to improve the situation on.
        
         | teaearlgraycold wrote:
         | I just set up an S3 bucket - probably the most common use case
         | for IAM policies. My policy file was invalid in a way that AWS
         | never warned me about and looked good to my untrained eyes.
         | After a few hours of debugging GPT-4 was able to explain I
         | needed to break up my rules into bucket-level and key-level
         | sections. Afterwards the 403 errors went away.
         | 
         | Just sharing my story. IAM sucks and GPT-4 is a good backup for
         | configuring it.
        
           | Eridrus wrote:
           | I guess I did not try to deal with anything key related, so
           | that's probably why it was simpler for me.
           | 
           | I do agree that everything about it is horrific, though I'd
           | be surprised and impressed if an LLM were able to generate
           | your key setup from scratch.
        
       | lijok wrote:
       | I'd like to challenge you on what seems to be the main claim
       | behind why Slauth is a necessary product: "the amount of money
       | that is being spent on tooling to scan for IAM misconfigurations
       | in the cloud".
       | 
        | 1. The tooling you're citing, specifically wiz.io and
        | ermetic.com, does an incredible amount more than just "scan
        | for IAM misconfigurations". In fact, I understand that to be
        | one of their most insignificant features. Yet it sounds,
        | from the numbers being quoted (I saw the "millions" figure
        | being thrown around), that you are equating a company
        | purchasing wiz.io with purchasing "tooling to scan for IAM
        | misconfigurations" exclusively. How much does the IAM
        | scanning tooling actually
       | cost, and what is the material cost of delayed remediation of
       | over-permissioned entities?
       | 
       | 2. Were a company to introduce Slauth into their stack, are you
       | under the impression that they would then not need to scan their
       | IAM for misconfigurations and would therefore be able to save
       | "millions"? Would it not be fair to say that the presence of
       | Slauth would not remove the need for IAM scanning tools, since
       | IAM deployments could happen out of bounds, which is not
        | something that Slauth removes from a company's threat model?
        
       | nextworddev wrote:
       | I guess this is their 2nd pivot after LLMs took off
        
       | debussyman wrote:
       | I like this approach, I've always thought that IaC could be
       | generated by scanning application code. Although I share the
       | skepticism that IAM is the best place to start.
       | 
       | I'm curious though how well an LLM performs for newly released
       | AWS services? This is where I've experienced the most arcane IAM
       | definitions personally, but I wonder if GPT 4 is trained well
       | enough on newer sources.
        
       | lijok wrote:
       | How are you dealing with invalid policies generated by GPT? For
       | example, in your loom video and the gif on the website, the
       | resource for the s3:PutObject permission is incorrect: it should
       | be "arn:aws:s3:::my_bucket_2/*" not "arn:aws:s3:::my_bucket_2".
       | 
       | Does this support resource policies? If so, how are you ensuring
       | serious security vulnerabilities such as confused deputy are not
       | introduced by this tool?
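        | 
        | (For anyone following along: s3:PutObject is an object-level
        | action, while e.g. s3:ListBucket is bucket-level, so a
        | correct policy splits the resources -- sketch below, bucket
        | name from the video:)
        | 
        |     const statements = [
        |       { Effect: "Allow", Action: ["s3:ListBucket"],
        |         Resource: ["arn:aws:s3:::my_bucket_2"] },
        |       { Effect: "Allow", Action: ["s3:PutObject"],
        |         Resource: ["arn:aws:s3:::my_bucket_2/*"] },
        |     ];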
        
         | slalmeidabbm wrote:
         | That's a very good example of the type of hallucinations that
         | can happen, we still need to develop a way to double check that
         | the generated policies are indeed valid and hopefully find a
         | way to simulate them.
         | 
          | As it stands, Slauth doesn't support resource-based
          | policies.
        
         | nutbear wrote:
         | Good catch on the bucket vs object level permissions with S3
         | and s3:PutObject.
         | 
         | I'd also be curious for future plans with resource policies as
         | that's another layer of complexity to manage - where the
         | resource policy would manage access to potentially many
         | applications -> 1 resource. Vs 1 application -> many resources
         | which I think is the use case Slauth is solving for initially.
         | 
         | Confused Deputy would be interesting, could be done via
         | Condition Keys such as SourceArn and SourceAccount, but gets
         | complex for cross-account use cases.
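          | 
          | Roughly (a sketch of the SourceArn guard; the ARNs are
          | hypothetical):
          | 
          |     const resourcePolicy = {
          |       Version: "2012-10-17",
          |       Statement: [{
          |         Effect: "Allow",
          |         Principal: { Service: "sns.amazonaws.com" },
          |         Action: "sqs:SendMessage",
          |         Resource: "arn:aws:sqs:us-east-1:111122223333:my-queue",
          |         // Only this topic may use the permission -- blocks
          |         // another account's topic from acting as a deputy.
          |         Condition: {
          |           ArnEquals: {
          |             "aws:SourceArn":
          |               "arn:aws:sns:us-east-1:111122223333:my-topic",
          |           },
          |         },
          |       }],
          |     };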
        
       | bsamuels wrote:
       | At my previous job we used GCP and went through so much
       | effort/tooling to try to fix IAM. We definitely would have given
       | this tool a spin. Ignore the HN flashmob.
       | 
        | Another use case you might run into as you talk with more
        | clients is figuring out what developer IAM roles need to be.
        | This was the far bigger problem for us, as we had a ToS that
        | restricted employees from viewing/accessing user data.
        
       | Eumenes wrote:
        | The problem with IAM, from my experience, is that it's never
        | truly owned by a single entity. If you have an IT team, it's
        | sometimes them; sometimes it's DevOps, sometimes security.
        | However, as a startup grows, the owners change. Policy is
        | rarely developed from the ground up; it's more patchwork to
        | accommodate teams or timelines.
        
         | nutbear wrote:
         | Yes. Good points. Agreed with patchwork as sometimes IAM can
         | take a backseat to different priorities such as application
         | development or feature development.
         | 
         | There's a couple different models for IAM ownership. At some
         | places, the application teams own IAM along with the
         | application. Sometimes, it's owned by central teams (such as
         | security).
         | 
         | And agreed, with companies growing and changing, ownership
         | changes as well.
         | 
          | Those factors can all complicate IAM development and policy
          | maintenance, as it becomes more difficult to find the right
          | fit of IAM to application. That requires someone who knows
          | exactly what the application needs access to and the IAM
          | actions taken, as well as how to configure IAM.
        
       | callalex wrote:
       | Can someone explain to me what is so difficult about writing
       | security policies? Are people really deploying services in
       | production without understanding the upstream and downstream
       | dependencies of the service?
       | 
       | Also at cloud-scale 95% accuracy is completely unacceptable.
        
         | nutbear wrote:
         | IAM Policies in AWS are inherently difficult - there's a lot of
         | nuance to the policies such as evaluation logic (allow/deny
         | decisions), resource scoping, conditionals, and more. It's
         | often more straightforward to start with a broad IAM policy and
         | then leave it without reducing privilege as to not adversely
         | impact the application. Proper IAM also takes dev cycles, and
         | may not be top priority to get a policy correct. I think it's
         | rare to find a 100% properly scoped IAM policy for an
         | application.
         | 
         | Datadog recently did a State of Cloud Security and one of their
         | findings in https://www.datadoghq.com/state-of-cloud-security/
         | is that a substantial portion of cloud workloads are
         | excessively privileged (with more data points there).
        
         | vrosas wrote:
         | > Are people really deploying services in production without
         | understanding...
         | 
         | Oh you sweet summer child. But in reality I've seen the pattern
          | over and over, especially in GCP:
          | 
          | 1. Create service account.
          | 2. Give it Owner permission for the whole project.
          | 3. Download key and check it into source control.
          | 4. Deploy.
        
       | jedberg wrote:
       | Why are you using (very expensive) GPT, or any LLM for that
       | matter, when this was already a solved problem using rulesets?
       | Netflix for example has open source that does this already:
       | https://github.com/Netflix/consoleme
       | 
        | Instead of analyzing your code, you just run your code with
        | no permissions; it automatically detects permission failures
        | and then opens those permissions, with a UI showing you what
        | it did so you can remove any permissions you don't want.
       | 
       | That actually seems much more secure than trying to divine the
       | rules from reading the code.
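        | 
        | (Rough sketch of that detect-and-grant idea -- not
        | ConsoleMe's actual mechanism, and the error-name check is an
        | assumption:)
        | 
        |     import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
        | 
        |     // Run the code under a near-empty role; on denial, record
        |     // the permission that was actually needed for review.
        |     const s3 = new S3Client({});
        |     try {
        |       await s3.send(new PutObjectCommand({
        |         Bucket: "my-bucket", Key: "k", Body: "",
        |       }));
        |     } catch (err) {
        |       if ((err as Error).name === "AccessDenied") {
        |         console.log("needs s3:PutObject on arn:aws:s3:::my-bucket/*");
        |       }
        |     }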
       | 
       | What value is the LLM adding here?
        
         | nextworddev wrote:
         | So it can call itself an AI startup?
        
         | codegeek wrote:
          | Not to knock the OP, but in general, if you are doing a
          | startup in 2023, you cannot do it without AI; otherwise no
          | one will take you seriously. I am not joking. AI is the new
          | gold rush that blockchain used to be. Personally, I do
          | think that AI is awesome and has a lot of great use cases,
          | but unfortunately most VCs/investors are looking for that
          | keyword if you want to get funded, so I feel a lot of
          | startups are forcing AI into their stuff.
        
           | jedberg wrote:
           | Yeah I'm seeing that. As an investor myself, the first
           | question I always ask is "what unique data do you bring to
           | the table that other people _can 't_ get?". My next question
           | is always "What value does an LLM add beyond what we could
           | already do with traditional (and much cheaper) models or just
           | rulesets?"
           | 
           | I'd like to think most investors are sophisticated enough to
           | detect when the "AI" was just bolted on for funding, and that
           | most startups aren't actually doing that, but are using LLM
           | for a reason.
        
       | srameshc wrote:
        | Unrelated question: how can one learn more about configuring
        | and securing IAM for Google Cloud Platform?
        
       ___________________________________________________________________
       (page generated 2023-12-04 23:00 UTC)