[HN Gopher] Working Group Last Call: QUIC protocol drafts
___________________________________________________________________
Working Group Last Call: QUIC protocol drafts
Author : pimterry
Score : 113 points
Date : 2020-06-10 12:48 UTC (10 hours ago)
(HTM) web link (mailarchive.ietf.org)
(TXT) w3m dump (mailarchive.ietf.org)
| The_rationalist wrote:
| I wonder how different it would be if HTTP/3 was based on SCTP
| instead of on UDP.
| https://en.m.wikipedia.org/wiki/Stream_Control_Transmission_...
| In a parallel universe:
| https://tools.ietf.org/html/draft-natarajan-http-over-sctp-0...
| It might be the HTTP/4 experiment.
| bawolff wrote:
| It wouldn't be adopted due to middleboxes, so that's all the
| difference?
| The_rationalist wrote:
| _Until middleboxes support SCTP, UDP encapsulation is a possible
| solution_. That would mean that HTTP/3 enables an HTTP/4 over
| SCTP.
| Avamander wrote:
| Maybe in the future someone will make an implementation. But
| right now I don't see it happening; SCTP has too many problems
| with deployment.
| mekster wrote:
| Is HTTP/3 completely transparent to the current network? Would
| any network that is capable of handling HTTP/1 and 2 have any
| problem handling it?
| shockinglytrue wrote:
| Far from it. It's quite likely going to be over 5 years, if not
| a decade, before it would be possible to run a pure-HTTP/3
| service without risking connectivity problems.
| The problem is similar to the IPv6 transition, except that
| thanks to the browser monopolies, it's possible at least for
| network providers to quickly feel significant pressure to fix
| their networks. But there will always be some networks that will
| never be fixed.
| edit: for those inexplicably downvoting this, please pay
| attention to the parent comment's question, and the Internet's
| long chequered history of adopting new protocols in _any_
| setting. TCP port 443 isn't going to magically disappear
| overnight, or indeed any time soon.
| This is evidently true because it has been true for all prior
| transitions. Mail still flows to many places unencrypted despite
| the standardization of STARTTLS 21 years ago. The long tail has
| only gotten much longer in those intervening 21 years.
| dathinab wrote:
| > The problem is similar to the IPv6 transition
| Maybe similar, but _much_ smaller than IPv6 and with far fewer
| problems, because most web frameworks will transparently support
| HTTP/1, HTTP/2 and HTTP/3 for the large majority of use-cases.
| > Mail still flows to many places unencrypted
| Mainly because getting a TLS certificate wasn't that easy for a
| lot of people in the world until recent years, and because the
| standard is written in a way which can easily be (mis-)understood
| as requiring you to support unencrypted sending/receiving of
| mail. (It requires it only for _local_ mail, i.e. same-machine
| mail implicitly authenticated by the OS user account.)
| cryptonector wrote:
| HTTP/2 and HTTP/3 do not change the semantics of HTTP. That means
| you can run reverse proxies to serve HTTP/1 services as /2 and/or
| /3 and vice-versa. As a result the transition will be a lot
| easier than the transition to IPv6. I expect that the transition
| in corporate networks will be faster -- the opposite of the IPv6
| case -- because there is a lot of appeal to HTTP/3.
| user5994461 wrote:
| You realize that HTTP/2 is still nowhere near to being adopted by
| corporations? It's really far-fetched to plan HTTP/3 and expect
| any adoption.
| cryptonector wrote:
| The fact that UDP involves much smaller PCBs than TCP alone will
| drive adoption of HTTP/3, because it will free up a fair bit of
| memory.
| More availability of HTTP version gateways in load balancers and
| other reverse proxies is all that's needed, and that's coming
| along.
| the8472 wrote:
| You still need to keep your unacknowledged data buffered
| somewhere. If the kernel isn't holding it then it's in userspace.
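[The PCB/buffer point above can be observed from userspace. A minimal sketch (Python, standard library only) comparing the kernel buffers attached to a TCP socket and a "connected" UDP socket; a QUIC library gets only the UDP buffers from the kernel and must hold any unacknowledged stream data itself until the peer acknowledges it:]

```python
import socket

def kernel_buffer_sizes(sock):
    """Return the kernel's (send, receive) buffer sizes for a socket."""
    snd = sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
    rcv = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
    return snd, rcv

tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# "Connecting" a UDP socket only pins a default destination in the
# kernel; it creates no connection state on the wire and sends nothing.
udp.connect(("127.0.0.1", 9))  # discard port, chosen arbitrarily here

print("TCP buffers:", kernel_buffer_sizes(tcp))
print("UDP buffers:", kernel_buffer_sizes(udp))

tcp.close()
udp.close()
```

[The exact sizes are OS tunables; the point is that retransmission state for QUIC lives in the application, so the kernel's per-socket footprint stays small.]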
| microcolonel wrote:
| More specifically, _no PCB_ for UDP itself.
| cryptonector wrote:
| It's not nil. For "connected" UDP sockets it's smaller than
| TCP's, but not nil because, well, buffers. And for non-connected
| UDP sockets there's still buffers. The main thing is that you can
| have much less buffer space, because you might always be willing
| to drop packets. Ultimately you can have much lower memory
| pressure from those buffers and the smaller PCBs.
| moreati wrote:
| Protocol Control Block, for anyone else wondering:
| https://www.oreilly.com/library/view/tcpip-illustrated/02016...
| tialaramex wrote:
| Qualys' "SSL Pulse" says 47.1% of surveyed sites offered HTTP/2
| and about 30% of surveyed sites offer TLS 1.3.
| Increasingly "corporations" out-source this problem to
| specialists who are only too pleased to use newer technologies
| with better performance and collect the same money.
| user5994461 wrote:
| Because 47% of sites run on Cloudflare or a similar CDN that
| started enabling HTTP/2 for non-paying customers.
| The application servers running the site do not accept HTTP/2 and
| most likely can't support it at all (we're a Python shop and none
| of the web frameworks we use could do HTTP/2 when we looked into
| it).
| amaccuish wrote:
| For HTTP/2 at least, I think the main benefit in terms of
| performance applies to the "last hop", so you still get a more
| reliable experience even if the connection between the CDN/proxy
| and app server is HTTP/1.1.
| dathinab wrote:
| True, though many websites don't have to care about the
| performance difference.
| But for companies like Cloudflare or Google, HTTP/2 means less
| traffic overhead (multiplexing + header compression) and can save
| them a lot of bandwidth (aka money).
| secondcoming wrote:
| Exactly. AWS and GCP load balancers accept HTTP/2 but allow for
| those requests to be forwarded to backend instances as HTTP/1
| because of this.
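[The "HTTP version gateway" pattern described above is what most load balancers already do: terminate HTTP/2 at the edge and speak HTTP/1.1 to the application server. A hypothetical nginx sketch of that downgrade hop; the server name, certificate paths, and backend address are placeholders, not anything from the thread:]

```nginx
# Terminate HTTP/2 from clients, speak HTTP/1.1 to the app server.
server {
    listen 443 ssl http2;
    server_name example.com;                     # placeholder
    ssl_certificate     /etc/ssl/example.crt;    # placeholder
    ssl_certificate_key /etc/ssl/example.key;    # placeholder

    location / {
        proxy_http_version 1.1;                  # backend hop stays HTTP/1.1
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:8080;        # placeholder backend
    }
}
```

[Because HTTP semantics are unchanged across versions, the backend never needs to know the client spoke HTTP/2.]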
| shrdlu2 wrote:
| In fact AWS ALB does not even support HTTP/2 on the backends,
| which is really annoying.
| dathinab wrote:
| But exactly that's the point. It can be handled transparently in
| a much easier way than the IPv6 switch. (Because IPv6 is so much
| more different than just IPv4 + more addresses, and worse, many
| people don't realize it and treat it as IPv4 with more addresses,
| which resulted in many problems.)
| uluyol wrote:
| Most _domains_ may not support HTTP/3, but I fully expect that
| within a few years most _traffic_ will be HTTP/3.
| awirth wrote:
| This is only mostly true; they do subtly change some semantics.
| As a motivational example, it is possible to encode a colon in a
| header name in HTTP/2 but not in HTTP/1.1, and this does not
| violate the RFC, which only blacklists "\0\r\n".
| npiit wrote:
| The transition will be faster because a big chunk of the internet
| is gatewayed through a few big players (e.g. Cloudflare, AWS,
| CDNs, the new wave of static deployment services like Netlify and
| Zeit.co, big websites like Google, Facebook, Netflix, etc...)
| the_duke wrote:
| > because there is a lot of appeal to HTTP/3
| I thought the primary appeal of HTTP/3 is for mobile clients, and
| bad connections in general, because it circumvents TCP
| head-of-line blocking and connections can persist across
| networks.
| That doesn't feel terribly relevant in corporate networks.
| (not disputing that it's not comparable to the v6 transition)
| cryptonector wrote:
| The appeal of H3 is smaller memory footprint.
| cptskippy wrote:
| His claim of corporate appeal is completely unsubstantiated, and
| I think when he says corporate he isn't referring to enterprise.
| Enterprises largely won't give two shits about HTTP/3.
| Just last week I took ownership of another department's
| decade-old app written in VB.NET WinForms. The former dev team
| was putting the finishing touches on the C# WebForms refactor.
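[awirth's colon example above comes from the wire formats: HTTP/1.1 delimits the field name with ":" and restricts names to the RFC 7230 "token" characters, while HTTP/2 carries names as length-prefixed HPACK strings. A sketch of the two validity checks (the HTTP/2 check here is deliberately minimal, rejecting only the "\0\r\n" bytes the comment mentions, not the full RFC 9113 rules):]

```python
import string

# RFC 7230 "token" characters; ":" is excluded because it terminates
# the field name on an HTTP/1.1 wire.
TOKEN_CHARS = set(string.ascii_letters + string.digits + "!#$%&'*+-.^_`|~")

def valid_h1_field_name(name: bytes) -> bool:
    """Can this field name be serialized in an HTTP/1.1 message?"""
    return len(name) > 0 and all(chr(b) in TOKEN_CHARS for b in name)

def valid_h2_field_name(name: bytes) -> bool:
    """Length-prefixed HPACK strings need no delimiter, so only the
    explicitly blacklisted bytes NUL, CR, LF are rejected here."""
    return len(name) > 0 and not any(b in (0x00, 0x0D, 0x0A) for b in name)

assert valid_h1_field_name(b"content-type")
assert not valid_h1_field_name(b"weird:name")   # colon: illegal in HTTP/1.1
assert valid_h2_field_name(b"weird:name")       # but representable in HTTP/2
```

[This mismatch is exactly why gateways that translate between versions must re-validate header names rather than pass them through blindly.]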
It's been interesting taking a step back in | time to Dev practices from 2008. | 1_player wrote: | Why would network providers have to fix their network? Why 5 | years to adopt? | | HTTP/3 and QUIC are based on UDP. This is very different to | the IPv6 transition. | zozbot234 wrote: | We might want to have a SCTP-based HTTP/4 down the road. | That would surely benefit from some fixes on the network | side. | jeffbee wrote: | See, that will NEVER happen. Completely impractical. SCTP | has a different protocol number in the IP datagram header | and many devices will either drop or malfunction when | faced with protocol numbers they don't understand. UDP | and TCP (protocol numbers 6 and 17) are well-supported, | by practically all devices. | anticensor wrote: | Why not package QUIC in IP directly without UDP in | between? | jlouis wrote: | UDP is as clean as you can get it. It is more or less | free of any overhead. And networks know UDP already. A | new IP protocol is far more likely to be rejected in the | network. | tptacek wrote: | This, exactly; it's the actual reason UDP exists. It's a | design smell for anything to ask for a new IP protocol | number. | sagichmal wrote: | A protocol separate to UDP and TCP altogether would | suffer from middlebox interference problems. | anticensor wrote: | Aaaa, you mean the problem of -smart- stupid pipes. These | do and will exist all the time and this is an opportunity | for them to realise how detrimental they do is to | Internet. | cg505 wrote: | There are a ton of networks (think big corporate networks, | schools, shared apartment wifi) that enforce too many weird | port restrictions. Many of those places rarely get network | or config updates. I don't think it's as bad as IPv6, but | there are a lot of people for whom it isn't going to just | work out of the box. | dathinab wrote: | Sure, but for that people a non small part of the | internet is already broken. 
| Like websockets being broken, and in turn Slack being broken.
| tyingq wrote:
| "Network providers" isn't really the issue. It's all the
| corporate and school networks that block UDP, or run broken
| spyware MITM boxes, etc. Chrome has to do a TCP vs UDP race to
| figure out if UDP connectivity to the internet is broken.
| kitteh wrote:
| In some cases it is the ISPs. I'll share an example:
| Some broadband ISPs struggle with the fact that their customers
| get compromised and join botnets. Over the last few years UDP has
| become the DDoS attack of choice. Broadband access networks
| struggle with how to mitigate this. Some try to block the command
| and control (C2) and some try the customer-outreach angle, for
| example notifying customers that they have a compromised machine,
| or putting them in a walled garden with a website that pops up
| telling them they've been impacted. The problem is that outreach
| is costly and not super effective. So they found another option:
| apply throttles on UDP. A few have done this, and it's led to big
| problems, because from a user-experience standpoint QUIC works
| just enough - and then falls apart.
| Some of the access providers have changed the throttles to be
| less aggressive, while others have resorted to being aggressive
| on the topic ("you should have made a new protocol and consulted
| with us!").
| cesarb wrote:
| It _should_ be completely transparent; unfortunately, there are
| way too many networks with firewalls configured to only allow
| known ports of known protocols, instead of following the design
| principle that only the end hosts should care about layers above
| IPv4 and IPv6. Since QUIC and HTTP/3 use UDP instead of TCP,
| these firewalls will drop or reject the connection.
| That is: any network that is capable of handling traditional HTTP
| is _capable_ of handling HTTP/3, but many are explicitly
| configured not to do so.
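[This is why a blocked UDP 443 "only" costs performance: servers advertise HTTP/3 via the Alt-Svc response header on an existing TCP connection, and a client that can't reach the advertised UDP endpoint simply keeps using TCP. A simplified parser for that header (it ignores quoting corner cases and parameters other than the authority):]

```python
def parse_alt_svc(value: str) -> dict:
    """Map each advertised protocol id (e.g. 'h3') to its authority
    (host:port; an empty host means 'same host as this response')."""
    services = {}
    for entry in value.split(","):
        name, _, rest = entry.strip().partition("=")
        authority = rest.split(";")[0].strip().strip('"')
        if name and authority:
            services[name.strip()] = authority
    return services

# A header of the shape servers commonly send while draft versions
# coexist with the final protocol:
header = 'h3=":443"; ma=86400, h3-29=":443"; ma=86400'
print(parse_alt_svc(header))  # {'h3': ':443', 'h3-29': ':443'}
```

[A real client caches these entries for the `ma` (max-age) lifetime and races or falls back between them, which is the "happy eyeballs"-style behavior mentioned later in the thread.]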
| josteink wrote:
| > is http/3 completely transparent to the current network
| IIRC it introduces use of UDP and thus will be unusable on a
| million corporate networks where nearly all UDP traffic is
| filtered.
| sebazzz wrote:
| Many corporate proxies are HTTP proxies (as opposed to SOCKS) as
| well, which are currently by definition TCP. Not sure if a QUIC
| proxy would be easy to implement at all.
| fulafel wrote:
| There are a lot of broken networks. HTTP/2 and even HTTP/1.1 ran
| into this.
| But really, the idea of the protocol being transparent to the
| network is completely backwards in relation to the main idea of
| the Internet, namely the end-to-end principle, which says that
| the network is dumb and is not allowed to look inside the IP
| packets at the higher-level protocols.
| api wrote:
| I'm really happy to see QUIC pushed, not because I personally
| have a use for it but because it's a battering ram against
| network non-neutrality and what I've come to call "network
| nerfing."
| Now all those ISPs, IT departments, and cloud providers that
| de-prioritize or outright block UDP will get "bug reports" about
| things being "slow" or not working.
| Now all those traffic-shaping middleboxes are worthless, and your
| ISP can no longer spy on your requests to gather marketing data
| about you.
| The Internet is an IP network, not a TCP/80 and TCP/443 network.
| ocdtrekkie wrote:
| My main issue with it is exactly your preference: it is built
| primarily for the purpose of circumventing networking standards
| and norms, mostly for the benefit of big tech companies and to
| the detriment of everyone else. QUIC is a political measure, not
| a standards improvement.
| jeffbee wrote:
| QUIC hides flow control parameters from the network and takes
| control away from the operating system, vesting more control in
| the application. It is the perfection of the end-to-end argument.
| There's nothing political about it.
| ocdtrekkie wrote:
| And the problem here is that I trust my network and I trust my
| operating system both significantly more than I trust my
| applications, especially as browsers have transitioned largely to
| being user-hostile by design.
| The primary beneficiary of your end-to-end argument is ad
| networks that also own web browsers.
| Hence why this is political: it's about shifting network traffic
| outside of places users have control.
| the_duke wrote:
| > and takes control away from the operating system
| That's just because QUIC is a new, not-yet-finalized protocol,
| isn't it?
| Kernels will get support eventually. Microsoft is actually
| already shipping some code. [1]
| Browsers will probably switch to OS facilities once they are
| mature enough and provide a performance benefit.
| [1] https://www.windowslatest.com/2018/04/03/microsoft-to-add-su...
| jeffbee wrote:
| It's not abundantly clear that there can be a performance benefit
| from moving QUIC into the kernel. Indeed, there is no performance
| benefit from having TCP in the kernel. That's why one of the most
| common tactics in high-performance networking is to move the TCP
| stack into userspace.
| The end-to-end argument in IP networking evolved in a time (the
| 1970s and 1980s) when a host could reasonably be considered the
| endpoint. Now hosts are so big and powerful that it's no longer
| reasonable to draw the boundary around them. The individual
| process running on the host is the true endpoint, and having the
| application in charge of its own network flow control is the
| natural evolution of things.
| the_duke wrote:
| > Indeed, there is no performance benefit from having TCP in the
| kernel. That's why one of the most common tactics in
| high-performance networking is to move the TCP stack into
| userspace.
| But a userspace network stack only makes sense in single-purpose
| devices/servers, because they have to take over the hardware,
| right?
| | On a consumer device many applications can use the | network concurrently. | | QUIC runs on top of UDP anyway, so userspace libraries | use OS UDP facilities, but lose the ability to do better | cross-application balancing. Hardware TCP offloading also | plays a role of course. | JoshTriplett wrote: | It's possible to use kernel facilities to route flows to | the right application or stack, and then use that stack | to process individual connections. | | For example, many network cards support creating multiple | virtual network devices, and you could hand a whole such | device to a userspace process, but only let it see | traffic intended for that device. | | Or, finer-grained, you could use a filter to steer | certain packet flows to a process, while handling others | in the kernel. | | I do absolutely think there's value in having native QUIC | in the kernel, but then, I also think there's value in | having native TLS in the kernel; doesn't mean you always | want to use it. | jeffbee wrote: | You don't have to take over the hardware to do userspace | TCP. You only need the ability to put raw datagrams on | the wire (Linux RAW socket, for example). You _can_ take | over the device with something like DPDK but that's not | necessary. As you correctly say, QUIC uses existing OS- | level facilities to communicate over UDP, so there is no | reason that several processes on a single host cannot use | application-level QUIC stacks concurrently. | zozbot234 wrote: | > QUIC hides flow control parameters from the network | | It's not clear that this is a good thing. You generally | want things like network-provided congestion | notification+control, and plain vanilla UDP won't be enough | for this. | jeffbee wrote: | Well, the end-to-end argument is that you are wrong. You | never want the network to do anything smart. ECN is fine, | and does not require the network to have visibility into | flow control parameters. 
| https://web.mit.edu/Saltzer/www/publications/endtoend/endtoe...
| Traubenfuchs wrote:
| I see a chicken-and-egg problem: why would developers/companies
| abandon non-QUIC versions of their software if there is a
| significant chance users can't use it? (If they even implement it
| under those conditions at all.)
| cryptonector wrote:
| Because the semantics of HTTP don't change in /2 and /3, you can
| always put a reverse proxy in front to bridge versions.
| Orphis wrote:
| You don't. You run them in parallel with HTTP/3, and it shouldn't
| add any cost for you. The work is done in the browser (or any
| client) to fall back to whatever works.
| Then, in a few years, when you notice that most of the traffic is
| done over QUIC, you still won't do anything, as it has no
| significant cost anyway and helps the few % of people who haven't
| upgraded their network. And customers who can use it won't notice
| much, as your website still works anyway, but your latency
| metrics should be better for those users.
| throwaway29303 wrote:
| I believe they'll still be able to do those things, though. It's
| just a matter of time. The slowing and spying, that is. It won't
| be as easy, but I believe it'll be feasible. It's the typical
| arms race.
| Or as a last resort they'll legislate to either end or weaken it.
| I hope I'm mistaken, though.
| sanxiyn wrote:
| I predict MITM will be normalized and no amount of "MITM is bad
| for security!" will work.
| vntok wrote:
| Cloudflare's MITM is normalized and very good for security.
| shockinglytrue wrote:
| Centralizing the log files for the majority of Internet services
| in the hands of a few companies will never be good for security.
| vntok wrote:
| That's a laugh.
| Are you seriously implying people who benefit freely* from
| Cloudflare's state-of-the-art network firewalls and anti-DDoS
| shields are somehow more at risk than others because they
| contracted with a particular security provider instead of rolling
| their own security?
| That assertion will require lots of hard evidence, because the
| vast majority of indicators suggest otherwise.
| * free or less than $20/mo
| shockinglytrue wrote:
| This reads more like an advert than a comment, and that is
| certainly the most loaded question I've ever seen in written
| English.
| Giant centralized troves of personal data are a huge risk to
| everyone. Civilization pivots and mutates rapidly, and it's clear
| to nobody at any particular time which way it may lean next, or
| whether it will lean toward them or on top of them. If anyone had
| told you there'd be global mass beheadings of century-old statues
| 3 weeks ago, I suppose you'd have been laughing then too.
| BiteCode_dev wrote:
| Except if Cloudflare is part of a PRISM-like program, associated
| with a gag order. In which case they have pwned half the planet's
| security.
| vntok wrote:
| Are you aware "PRISM-like programs" apply to every single
| US-based webserving entity? Why are you calling Cloudflare out
| when there are thousands of companies who are subject to the very
| same risk? Is it possible that PRISM programs bear no relevance
| at all to this conversation?
| gruez wrote:
| > Now all those ISPs, IT departments, and cloud providers that
| de-prioritize or outright block UDP will get "bug reports" about
| things being "slow" or not working.
| Doubt it. I suspect that browsers have some sort of happy-
| eyeballs algorithm for determining whether to use HTTP/3,
| specifically because some networks don't handle it well. In those
| cases it'll fall back to HTTP/1.1.
| > Now all those traffic shaping middle-boxes are worthless
| How so?
| How is TCP 80/443 and UDP 80/443 harder to traffic-shape than TCP
| 80/443 alone?
| > and your ISP can no longer spy on your requests to gather
| marketing data about you.
| That's more encryption (i.e. HTTPS) than switching to HTTP/3.
| Also, encryption is already mandatory for HTTP/2 (for most
| browsers).
| humblebee wrote:
| Curious, why fall back to /1.1 over /2?
| tialaramex wrote:
| So far as I can see they won't. Because QUIC (and thus HTTP/3) is
| always encrypted, your fallback is always a TLS connection.
| Modern TLS agrees the sub-protocol to use (in this case h2 =
| HTTP/2) early, with ALPN. If that ALPN sub-protocol isn't
| available, that same connection just becomes HTTP/1.1 (over TLS)
| instead.
| So there's no reason to fall all the way to HTTP/1.1 without
| asking if HTTP/2 is available.
| MertsA wrote:
| It would have been great if they had used the opportunity to
| leverage SCTP instead of building it into QUIC. They could have
| defaulted to plain-jane SCTP on IPv6 and tunneled it in UDP for
| IPv4. That would have been the perfect opportunity to drive
| adoption of SCTP while implementations of all sorts of
| middleboxes are still young enough to be swayed into supporting
| it. We could have all of the benefits of QUIC but with proper
| layer separation and multi-homing, and the ability to build
| future applications on it.
| api wrote:
| You can't "drive adoption" of anything new in the basic IP
| protocol space. You're kicking a dead mule. ISPs and IT
| departments don't care and have little reason to say anything but
| "no" or "it's on the roadmap" (forever). The present IP stack is
| fixed in amber, and we'll be lucky to see IPv6 complete adoption.
| At this point I'd give IPv6 a 50/50 chance of failure.
| Reasons for this:
| * No upstream incentive due to little or no competition and not
| enough "killer apps."
| * Security FUD: if it's new then it might allow something
| currently not understood.
| * ISP desire to control traffic and prohibit anything new.
| * First thing IT will do if there's a problem is "turn the new
| thing off." (This happens a lot with IPv6, even if IPv6 is not
| the cause.)
| * Vendors with a vested interest in treating the disease
| (supporting technical debt) rather than curing the patient
| (actually fixing the problem, reducing the need for complex
| solutions).
| My point in the original comment was that QUIC might help stop
| further ossification by making UDP actually necessary. Otherwise
| we face the potential for removal of UDP and even removal of
| non-standard TCP ports, eventually leading to "http only"
| networks. ISPs and IT departments would like that, since less
| capability equals fewer support queries and less to monitor.
| If IP were re-designed today I would suggest at least
| non-cryptographic but difficult-to-efficiently-modify
| authentication of all fields (e.g. keyed or nonced checksums),
| and obfuscation (fast minimal encryption?) of everything under
| the IP header. No NAT, and nothing useful to filter on. If it's
| visible and mutable it will become ossified, since people will
| MITM it.
| That way there would be no choice but to fix endpoint security
| instead of deploying firewalls, and no choice but to adopt IPv6
| instead of NAT. You couldn't temporarily work around fundamental
| problems by nerfing the net.
| I think there's a general principle here. If easy short-term
| fixes that incur long-term technical debt are allowed, they will
| be deployed. If such a hack is deployed in a federated or
| decentralized system, it becomes permanent unless extra-systemic
| means (e.g. legislation or financial incentives) are used to
| force it to be removed. Therefore federated and decentralized
| systems should not permit such hacks. Front-load the pain by
| forcing the problem to actually be fixed.
| This is one thing blockchains kind of get right.
Everything | is cryptographically hashed and authenticated. You can't | modify state unilaterally. You either have a compliant node | or you don't, and changes must be adopted globally either as | soft or hard forks. | shockinglytrue wrote: | > Now all those traffic shaping middle-boxes are worthless | | No reasonable implementation of encrypted SNI has been proposed | or standardized. Those middleboxes are still more than useful | | AFAIK in QUIC there is some light obfuscation of the | ClientHello, but it is not intended to be an anti-filtering | measure, middleboxes can still fish out any presented name with | a little bit of new code | tialaramex wrote: | What about EKR's | | https://datatracker.ietf.org/doc/draft-ietf-tls-esni/ | | ... do you feel is unreasonable? | shockinglytrue wrote: | Unsurprisingly for a spec from Fastly & CloudFlare, the | privacy offered is predicated on the existence of large | centralized providers that due to their size cannot be | blocked. One outcome of this design is that if you want to | offer truly private service to an end user, you _must_ have | a relationship with one of these providers, otherwise your | traffic, even if it implements the spec, becomes easily | identifiable as its EKR config was served by some unique | non-shared infrastructure. | | In practical terms I guess it is reasonable, but viewed | from the angle of how the Internet was originally intended | to work, it is obviously abhorrent and self-serving. | tialaramex wrote: | eSNI can only effectively prevent people from | distinguishing things which aren't otherwise | distinguishable anyway. This is not a forgetfulness | potion, if you already know by some other means where I'm | going then eSNI doesn't fix that. | | If cat-videos.example and elect-bob.example are just | names for the same IP 10.20.30.40 then we can use eSNI to | prevent eavesdroppers discovering which you visited and | that's all. 
| But if you've got 10.20.30.40 assigned by your ISP for your
| personal web server then eSNI can't hide that. You can use eSNI
| to prevent eavesdroppers learning whether visitors were looking
| at snakes-control-nasa.example or soup-does-not-exist.example,
| but if all you host are crazy conspiracy-theory sites then they
| don't need to know which one is which to block all of them;
| that's just how IP works.
| The configuration for eSNI is delivered over DNS, so it's up to
| you to choose how you want to get secure DNS.
| unethical_ban wrote:
| At $work, udp/443 is blocked to the internet specifically because
| our corporate proxy can't deal with QUIC and Chrome tries using
| it a lot.
| Companies have a legal and moral imperative to see what content
| is leaving their network, and until some combination of
| host/network governance is developed to only allow pre-authorized
| traffic to egress, MITM boxes will need to be supported.
| JoshTriplett wrote:
| > Companies have a legal and moral imperative to see what content
| is leaving their network, and until some combination of
| host/network governance is developed to only allow pre-authorized
| traffic to egress, MITM boxes will need to be supported.
| "A legal and moral imperative"? No. Such networks are security
| threats and should be repeatedly broken until they give up. It
| will become increasingly expensive to even try. And an increasing
| number of security researchers will find ways to make maintaining
| MITM technologies more difficult and less transparent.
| You can block spam and outbound attacks without MITMing traffic.
| MITMing traffic is not about protecting the outside world in any
| way. It's technology sold to companies out of fear (or misguided
| regulatory requirements). One day it will go the way of the virus
| scanner: dead, worse than useless, and only existing to satisfy a
| decreasing number of corporate checkboxes.
| Spivak wrote: | You don't have a right to know what network traffic is | leaving your network from machines you own? Such an odd | stance from a techie since this also applies to your | Chromecast, Roku, Alexa, Laptop, Phone. | | Exactly zero companies care about "protecting the outside | world", MITM Boxes are required to have a record and scan | all traffic going through your network. If PHI is | exfiltrated we can't just tell the auditor "sorry ma'am we | have no visibility into our network by design. Could have | been from anywhere. _shrug_ " | jessaustin wrote: | I'm sure there are auditors who would sign off on it, but | you can't really claim that shitboxes prevent PHI exfil. | This is like putting up a ten-foot-long fence a mile from | the barn with the open door. You lost the battle the | moment that data was viewed on any device that wasn't | currently operated by a health professional with a reason | to see it. | droitbutch wrote: | > Such networks are security threats and should be | repeatedly broken until they give up. | | Why the hostility towards entities that have determined | their best course of action to protect _THEIR_ networks is | by focusing on the network egress pipe? Simple and | efficient to focus on one chokepoint vs patching a myriad | of devices + client software + client versions + OS | versions + future versions etc. | | > It will become increasingly expensive to even try. | | Go ahead, but you're increasingly unlikely to win that | battle. There's bound to be some software or version that | the enterprise cannot control (e.g. prevent data | exfiltration) at which point, enterprises will have no | solution but turn to SSL MITM again. | | Remember, it's the enterprises' network and data - not | yours. | JoshTriplett wrote: | > Why the hostility towards entities | | Among many other reasons, because such entities | repeatedly try to weaken Internet security standards and | software to accommodate their expectations. 
| That's an attack, and deserves an appropriate response.
| Also, because today not everyone has enough choices of potential
| employers to be able to avoid such policies.
| Also, because nobody should ever get used to the idea that their
| network is MITMing their traffic, not least of which because that
| normalizes such technologies on other networks and in other
| contexts.
| Also, surveillance technologies like this _will_ be abused by
| people with power over others; there's a long history of such
| abuse.
| And those are just the reasons off the top of my head.
| > Go ahead, you're increasingly unlikely to win that battle.
| What happens when libraries and software start dropping support
| for old, insecure protocols, and the new protocols are designed
| to treat MITM as an attack? What happens when, even before that,
| companies that MITM have to replace cheap MITM infrastructure
| with incredibly expensive MITM infrastructure? What happens when
| software and services start treating the resulting decreasing
| percentage of companies that try to MITM the way they treat
| companies still running IE6, and tell them "you'll have to
| upgrade your network"? What happens when people trying to MITM
| find they can't run modern software stacks, and their auditors
| complain that they're not upgrading? What happens when they're
| spending huge amounts of money maintaining forks and patches?
| What happens when the engineers who would have to maintain those
| patches don't want to work on a hostile network, and the ones
| that remain are more expensive? There are _many_ things people
| can do to make life difficult for anyone trying to MITM.
| droitbutch wrote:
| You seem to be conflating enterprise and home/personal networks.
| They are not the same.
| | > nobody should ever get used to the idea that their | network is MITMing their traffic | | and: | | > surveillance technologies like this will be abused by | people with power over others | | Then simply do not add a CA or self-signed cert to your | cert store. IOW: the default is secure against SSL MITM. | Nothing to "get used to" or "abused". | | > What happens when libraries and software start | dropping support for old, insecure protocols, and the new | protocols are designed to treat MITM as an attack? | | Much simpler than you think. In an enterprise, they block | what they cannot inspect. Clients do not own+run the | networks, the enterprise does. | | > What happens when they're spending huge amounts of | money maintaining forks and patches? | | This runs counter to your earlier argument: "You can | block spam and outbound attacks without MITMing traffic." | | How do you control a myriad of versions of client | software on a myriad of versions of devices across a | myriad number of applications? There is bound to be some | software the enterprise cannot control (e.g. proprietary | software, or software it does not have the resources to | fix+recompile). | unethical_ban wrote: | You completely ignore the concept of data breaches and | preventing them. You complain about the alleged problem, | "man in the middle" technologies, without any proposed | solution. | | Let me be clear: If you do not think private companies and | businesses have a right, and sometimes an obligation, to | secure and monitor the traffic on their networks, then you | are flat out incorrect, and this conversation can end. | | If you agree they need to monitor data, but that MITM | technologies are a societal risk more than a benefit, I am | all ears. Host-based data scanning at an OS level; signed | traffic that the network can trust has been scanned by | authorized hosts (so rogue hosts don't send traffic out); | and so on.
| | edit: I don't think your message is consistent, but I think | that is not a sign of malice. You say "Companies have no | legal or moral need to see what content is leaving their | network" then say "You can block spam and outbound attacks | without MITMing traffic." | | You seem to be equating "See the content leaving their | network" with "MITM", but then say later that companies can | "prevent outbound attacks" without MITM. The #1 threat for | many companies is data loss. How do you propose DLP gets | accomplished with zero access to the network? | JoshTriplett wrote: | > You complain about the alleged problem, "man in the | middle" technologies, without any proposed solution. | | I don't have to provide a "proposed solution" to MITM; | the proposed solution is "stop that". To the extent MITM | is getting harder, that means network protocols and | software are getting better. | | This is like asking the makers of adblockers what their | "proposed solution" is for ads, or asking the makers of | spam filters what their "proposed solution" is for | sending spam. | | Most critically, in a standards body, the right answer to | people trying to MITM is a polite and diplomatic version | of "go away, your use case is invalid". | | > If you do not think private companies and businesses | have a right, and sometimes an obligation, to secure and | monitor the traffic on their networks | | They can certainly _try_ (currently), and one of the jobs | of security researchers is to ensure they fail enough | that they stop trying. I also hope one day that there's | enough collective bargaining power among engineers that | such companies cannot successfully hire people, and that | nobody ever has to be desperate enough to accept such a | draconian network. I'm glad I'm in a position to be able | to say I'll never work for a company that requires MITM; | everyone should be able to avoid that.
And the only time | I'm aware of a company having an _obligation_ is due to | misguided regulations on certain narrow classes of | businesses. | | > MITM technologies are a societal risk more than a | benefit | | The company is the only one who could possibly benefit, | and that's leaving aside the likely possibility of doing | more harm than good; security measures work better when | your users want to help you rather than thwart you. | Meanwhile, the societal harm of _ever_ allowing MITM is | massive. | | Right now, the biggest step I'd love to see happen is | browsers (in concert) having a persistent banner for | "your traffic is being MITMed" any time they're getting | the wrong certificate for sites, and browsers making that | impossible to remove without rebuilding the browser from | source (which is a substantial barrier). Next, we can | make it easy for sites to detect access from MITMed | networks, and make policy decisions on that basis. Then, | coalitions of sites can collectively make it _expensive_ | to MITM. | | Put that together with new protocols that are | increasingly desirable, and increasingly unavoidable, and | which users will want, and you have a recipe for | substantially reducing the number of companies willing to | even try to MITM. | | > then say "You can block spam and outbound attacks | without MITMing traffic." | | "We block outbound email ports" is not an unreasonable | policy, as an example. And if someone tunnels around | that, now they're spamming from their network, not yours, | so it's not your fault. | | > How do you propose DLP gets accomplished with zero | access to the network? | | "DLP" has become equated with "MITM". But you could, | instead, focus on _access_ to the data rather than | _interception_ of exfiltration. If people are _trying_ to | exfiltrate your data, they will find a way, very very | easily. And there are much better ways to catch | accidents. 
Also, most companies are still at the level of | avoiding headlines like "laptop stolen with our whole | database on it", which has nothing to do with MITM. | unethical_ban wrote: | I believe you continue to ignore the obligation of | businesses to protect data. | | Yes, access to sensitive data is one layer of defense in | depth for data exfil - absolutely on point there. But | then you throw up your hands and say "oh well" if they do | get data, or if they have legitimate need to see it, and | try to get it out. I don't buy that. | | I agree with some of your thoughts in principle: Just | like I don't agree with backdoors in crypto, I applaud | TLS getting stronger toward PFS and eliminating MITM. But | you're naive if you think companies don't have a right or | desire to monitor the systems that hold their | intellectual property, and their customers' personal | data. | | Your bio says you work at Intel. I'd bet a month's rent | that there is a MITM proxy on your corporate network. | JoshTriplett wrote: | > I believe you continue to ignore the obligation of | businesses to protect data. | | On the contrary. I think it's critical for businesses to | protect customer data. Many customer data breaches are | "someone left the S3 bucket open" or "someone had the | database on an unencrypted laptop in their car" or | "someone broke into the production database using a | security hole we failed to patch". How many customer data | breaches are "one of our employees intentionally leaked | the customer database"? | | The sign of a good security policy is that your employees | are actively trying to _help_ you. The sign of a bad | "security" policy is pervasive word-of-mouth knowledge on | how to bypass that "security" policy to get work done. | Which network is more likely to experience a breach? 
| | (I should also clarify that there's a difference between | "policies for the corporate network" and "policies for | direct access to the production datacenter network", as | one of many examples. Lock that down to the point that | _you_ can't easily access it, and _nobody_ can do so | without oversight.) | | > if you think companies don't have a right or desire to | monitor the systems | | Companies have a right and a desire to run ads, and yet | adblockers exist, and generally speaking the adblockers | win for anyone who installs one. | unethical_ban wrote: | >Companies have a right and a desire to run ads, and yet | adblockers exist, and generally speaking the adblockers | win for anyone who installs one. | | I see this isn't going anywhere - you didn't acknowledge | that if you work at Intel, you 100% have a web proxy on | your campus despite saying otherwise, and now you're | comparing adblocking by a private user on their computer | on their time, and someone accessing the internet from | their company's network on their company's time and | uploading arbitrary data across the Internet. | | Again, you make some good points interlaced with bad | ones: Yes, limit desktop access to corporate servers. | Have strong IAM and auditing. But you think there is some | right to have all the privacy in the world on _their_ | machine on _their_ time. I fundamentally disagree. | | And would you, or whoever is doing it, stop downvoting | me? Despite our disagreements, this has been a | substantive argument. | droitbutch wrote: | > I don't have to provide a "proposed solution" to MITM | | No, my experience says you do. | | The enterprise needs to prevent data breaches and data | exfiltration. If you want them to move away from MITMing, | then I believe you would fare better if you provided an | alternative solution.
| | Further, unless you have some voodoo magic, preventing | breaches and exfil would still require inspecting | _something_ - whether it's at the OS, client, or data | level. IOW: you are simply moving the inspection around | from the network to somewhere else - but it's still | inspection - and an intrusion into employees' activities. | kstrauser wrote: | Your employer provides you with a medical insurance plan. | On your break at work, you use your company laptop to | access the insurer's website and use their find-a-doctor | tool to search for a cancer doctor. This is a legitimate | use of your work computer, after all. | | Does your employer have a right and obligation to see | that you're searching for a cancer doctor? | | Go to a large company's IT department and find the person | there you dislike the most. Do you want that person | monitoring the traffic and seeing that you searched for a | cancer doctor? | | Now you know why so very many of us believe you are | completely wrong about this. | droitbutch wrote: | > Does your employer have a right and obligation to see | that you're searching for a cancer doctor? | | Do you have the right to prioritize your personal | activities over your employer's protection of its data? | | Creating blindspots on enterprise networks won't get far | - especially not given today's realities of breaches. | unethical_ban wrote: | Do you want your bank to be hacked and have a 50GB | database backup with all your accounts and transaction | history slow-leaked out via DNS queries? | | Now you see why so many of us believe you are completely | wrong about this. | | To your point: | | If it is their network, it's their network and their | computer. Access the website from home, or from your | phone, which isn't MITM'ed on the corporate network.
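A non-MITM control relevant to the DNS slow-leak example above is monitoring query names themselves: payloads smuggled out as encoded subdomains tend to be long and high-entropy compared to ordinary hostnames. A minimal sketch of that heuristic (the length and entropy thresholds are illustrative guesses, not tuned values):

```python
import math
from collections import Counter

def label_entropy(label: str) -> float:
    """Shannon entropy of a DNS label, in bits per character."""
    counts = Counter(label)
    n = len(label)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_like_exfil(qname: str, min_len: int = 30, min_entropy: float = 3.5) -> bool:
    """Flag a query whose first label is both long and high-entropy,
    as base32/base64-encoded payload subdomains tend to be."""
    first = qname.split(".")[0]
    return len(first) >= min_len and label_entropy(first) >= min_entropy

print(looks_like_exfil("abcdefghijklmnopqrstuvwxyz234567abcd.evil.example"))  # encoded-looking label
print(looks_like_exfil("www.google.com"))  # ordinary hostname
```

Real detectors also track query volume per domain, since even low-entropy leaks show up as an unusual number of unique subdomains under one zone.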
| | If it is a resource only available on the corporate | network, but it is not a "company resource" such as find- | a-doctor, then you can (a) accept that the company may | see it, (b) contact information security and ask if they | can, or (c) find another way to get the | information/complain to HR/ombudsman/whatever. | | It is up to the company to do the right thing when | possible, which means (in our case) we don't intercept | websites categorized banking, health, and a handful of | others. | | Information security obfuscates/blocks the showing of | personally identifiable logs to the rest of IT, as well. | | Again, banks (in my example) have an obligation to their | millions of customers to protect their data, more than | you have a right to browse a doctor tool or upload | sensitive spreadsheets to your personal Google Drive to | work on over the weekend. | jacobush wrote: | I thought it would be some sort of last stand of QUIC drives. | baby wrote: | My only problem is why force TLS when we have better protocols | nowadays. We did some research on using noise (in addition to | TLS) but I'm not sure anyone really pushed for this to be | considered officially (cf nQUIC) | tialaramex wrote: | We have actual security proofs for TLS 1.3 (which is the only | version offered in QUIC) | | Of course such proofs come with caveats - in particular the | proofs assume your cryptographic primitives work as intended | (e.g. that AES isn't broken) and that you've implemented the | specification and not something else - but our experience has | been that it's valuable to get those proofs, and where proof | turns out to be difficult it's a valuable pointer to weaknesses | in your design. | | As far as I know nobody has done such a proof for the Noise | design, and so we only have our intuition that it looks safe. | Skunkleton wrote: | There is nothing precluding HTTP/3.1 from switching up some | aspect of the protocol. 
TLS was probably selected here because | it is mature and well supported. | microcolonel wrote: | The benefit of TLS is widely deployed certificate | infrastructure. If you could find a way to keep the server | certificate infrastructure intact while cutting out most of the | ugliness of TLS, that'd probably be a workable solution. | | FWIW though, TLS 1.3 (the only currently supported TLS in QUIC, | AFAIK) is a lot less hairy than TLS 1.2 and earlier; and most | of the primitives are the same as you would use for Noise (i.e. | standard AEADs like ChaCha20-Poly1305). | The_rationalist wrote: | Where are the benchmarks? HTTP/2 gave ~2x faster performance. I | believe that HTTP3 will give far less, especially compared | against TCP Fast Open. | microcolonel wrote: | "Faster" is a hard comparison to make. QUIC resolves issues | that will be more important to some networks than others. | The_rationalist wrote: | I'm talking about average browsing, e.g. visits to the Alexa | top sites. | jlouis wrote: | More importantly, also look at networks that are bad. | [deleted] | Avamander wrote: | How many actually have TFO enabled? | The_rationalist wrote: | In a few years most middleboxes should support TFO, so by the | time HTTP3 implementations are mature, TFO should be too. | The advantages of HTTP3 over it are uncertain. | jayd16 wrote: | It's kind of an apples and oranges comparison, isn't it? If you | care about failure modes and head of line blocking then HTTP/3 | has features HTTP/2 does not. | jakeogh wrote: | HTTP/3 spin bit: https://news.ycombinator.com/item?id=20990754 | exabrial wrote: | All I would like is an unencrypted version of QUIC for low | overhead stuff where privacy/security isn't a concern, such as | experimental, CAN, or air-gapped networks. This has been | staunchly rejected unfortunately :/ | bawolff wrote: | There are reasons for that beyond crypto-everywhere-goodness - | in the main use case crypto helps prevent middleboxes from | messing with things, which is critical for its success.
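The head-of-line-blocking point above is the core difference: over TCP, all HTTP/2 streams share one ordered byte stream, so one lost packet stalls every stream queued behind it, while QUIC orders each stream independently. A toy model (stream layout, timings, and the loss are invented purely for illustration):

```python
def completion_times(send_order, arrival):
    """send_order: (stream_id, seq) pairs in transmit order.
    arrival[i]: when packet i actually arrives (a retransmitted
    packet arrives late).  Returns per-stream completion times under
    (a) one shared in-order stream and (b) independent streams."""
    shared, independent = {}, {}
    in_order_clock = 0
    for i, (sid, _seq) in enumerate(send_order):
        # Shared ordered delivery: packet i is usable only once all
        # earlier packets have arrived, so a hole stalls everything.
        in_order_clock = max(in_order_clock, arrival[i])
        shared[sid] = in_order_clock
        # Independent streams: a stream waits only for its own packets.
        independent[sid] = max(independent.get(sid, 0), arrival[i])
    return shared, independent

# Three interleaved streams; stream A's second packet is lost and
# retransmitted, arriving at t=100 instead of t=3.
order = [("A", 0), ("B", 0), ("C", 0), ("A", 1), ("B", 1), ("C", 1)]
arrivals = [0, 1, 2, 100, 4, 5]
shared, independent = completion_times(order, arrivals)
print(shared)       # {'A': 100, 'B': 100, 'C': 100} - every stream stalls
print(independent)  # {'A': 100, 'B': 4, 'C': 5} - only A is delayed
```

In the shared-stream case the single retransmission delays streams B and C, even though all of their packets arrived promptly; with per-stream ordering only the stream that lost a packet waits.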
| DaiPlusPlus wrote: | Funny that - I thought the advantage of unencrypted HTTP was | so that middleboxes could do things like caching (squid, etc) | and outbound deep packet inspection to prevent information | leaks in corporate environments. | Sean-Der wrote: | Have you looked at SCTP over UDP at all? I use it for this and | it works great. | | https://docs.google.com/document/d/1RNHkx_VvKWyWg6Lr8SZ-saqs... | actually compares them! | | If you don't care about initial startup time and FEC you might | see zero difference. | michaelbuckbee wrote: | I'd like that too, but I think we've all also seen that when | there's an "easy" workaround it's too often neglected. | | I'm hopeful that the hard _requirement_ to have encryption will | force better management and security. | eadan wrote: | How well does QUIC perform over networks with relatively high | packet loss? It seems like Aspera | (https://www.ibm.com/products/aspera) is the industry standard | for high performance WAN transfers, but it's a proprietary | protocol. I'm wondering if QUIC performs better than HTTP1/2 in | this respect? | bawolff wrote: | That is the use case it's targeting (relative to http/2) | atesti wrote: | How does Aspera work? | | How is it possible to be 1000x faster? Is it? | | TCP works by slowing the data rate when data loss happens. On | purpose, to be fair! | | I think it's not that hard to write a program/protocol that | blasts out a big file over UDP and just sends all the parts as | packets, then waits for the receiver to assemble a big list of | missing packets, send it back, and have the sender blast out | everything again at a high rate. | | But this would be at the expense of all the other TCP | connections. | | Wasn't QUIC changed to be so-called "TCP friendly", meaning it | has the same back-off behaviour as TCP, so that if you are in a | crowded hot spot every TCP stream has a fair chance?
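The blast-everything-over-UDP scheme described in the comment above is easy to simulate: each round the sender re-sends whatever the receiver reports missing, with no pacing or back-off, which is exactly why it is hostile to competing TCP flows. A toy version with an invented loss model (a real implementation would use UDP sockets and, to be a fair citizen, congestion control):

```python
import random

def blast_transfer(chunks, loss_rate=0.3, max_rounds=1000, seed=42):
    """Naive blast-and-NACK transfer over a simulated lossy link.
    Each round the sender blasts every chunk the receiver still
    lacks; the receiver replies with its list of missing chunk ids."""
    rng = random.Random(seed)
    received = {}
    missing = set(range(len(chunks)))
    rounds = 0
    while missing and rounds < max_rounds:
        rounds += 1
        for i in sorted(missing):            # sender re-blasts outstanding chunks
            if rng.random() >= loss_rate:    # chunk survives the lossy link
                received[i] = chunks[i]
        missing.difference_update(received)  # receiver's NACK list for next round
    data = b"".join(received[i] for i in range(len(chunks)))
    return data, rounds

chunks = [bytes([65 + i]) * 4 for i in range(10)]   # b'AAAA', b'BBBB', ...
data, rounds = blast_transfer(chunks)
print(f"transferred {len(data)} bytes in {rounds} round(s)")
```

As the comment notes, sending at full rate regardless of loss would starve TCP flows sharing the bottleneck, which is why QUIC instead mandates TCP-style congestion control.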
| | (On the other side, Google and some other companies have a huge | initial TCP window size and try (or tried?) to send e.g. the | whole Google homepage with like 8 1500-byte packets at once, | also trying to get a better UX than others, again at the expense | of other TCP streams.) | | Looks like using Aspera is unfair and only works if a few | people do it. | the8472 wrote: | They are probably comparing their product to classic TCP | congestion controllers running on long fat pipes with high | packet loss. If you swap in BBR that advantage will probably | shrink or vanish. | eadan wrote: | My understanding is that the protocol underlying Aspera | (FASP) uses UDP for the main data transfer and a TCP | connection for coordination. By using UDP for data transfer | it's not restricted to the ACK ranges imposed by TCP, which | can hinder throughput on networks with relatively high packet | loss. Its throughput is not necessarily achieved by being a | "bad citizen" but by having full control over how and when it | communicates lost packets. | | Since QUIC is also over UDP, perhaps we now have more | flexibility on ACK windows etc. | | Btw. there are open protocols like Tsunami UDP | (https://en.wikipedia.org/wiki/Tsunami_UDP_Protocol) that try | to fill the same niche. | the8472 wrote: | My understanding is that window scaling and SACKs enable | TCP to detect losses within large segments too. The only | limitation is that most congestion controllers throttle | back when detecting packet loss. Newer latency-based | controllers don't suffer from that problem. | eadan wrote: | This is interesting. I haven't seen much in regard to | comparing TCP+BBR to FASP other than a master's thesis | which suggests FASP outperforms on transferring large | files over long distances (which is exactly its intended | purpose). But, I wonder if splitting a file over multiple | QUIC connections and re-assembling at the other side | would come closer to the performance of FASP?
Could be a | fun experiment, I might try it! | the8472 wrote: | An optimal CC should have high utilization with a single | stream and behave fairly to other connections on the | network. Measuring multiple streams misses the point of | the exercise. | | > other than a master's thesis which suggests FASP | outperforms on transferring large files over long | distances | | I found that one too. I would take it with a grain of | salt since it uses scp for data transfer (which has its | own flow control and may be the limiting factor here) and | doesn't list the values of some important TCP settings, | e.g. tcp_wmem, which can be essential for transoceanic | connections. | microcolonel wrote: | Hopefully the state of UDP will improve. | Avamander wrote: | Hm, I don't see any mention of ESNI (TLS ECH), is there a good | reason it isn't recommended/mandated by the standards? | tialaramex wrote: | eSNI isn't finished and time resolutely insists on moving in | one direction, so a document which says "Use this thing that | will exist in the future" isn't useful today. | LunaSea wrote: | Hasn't it already been implemented in Firefox? | dathinab wrote: | isn't finished => the standard isn't finished, so | implementations might still change quite a bit over time | and might not be compatible with each other. | detaro wrote: | there is an implementation in Firefox (although I don't | know if it matches the current state of the spec draft) | that unsurprisingly is gated behind an about:config flag. | rrll22 wrote: | Currently up to draft-ietf-dnsop-svcb-httpssvc and | draft-ietf-tls-esni. They're being designed more slowly than | HTTP3. ___________________________________________________________________ (page generated 2020-06-10 23:00 UTC)