[HN Gopher] Chrome is deploying HTTP/3 and IETF QUIC
       ___________________________________________________________________
        
       Chrome is deploying HTTP/3 and IETF QUIC
        
       Author : caution
       Score  : 288 points
       Date   : 2020-10-07 16:30 UTC (6 hours ago)
        
 (HTM) web link (blog.chromium.org)
 (TXT) w3m dump (blog.chromium.org)
        
       | The_rationalist wrote:
       | Those incremental gains don't seem much better than what Linux
       | TCP improvements deliver each year, especially with state-of-
       | the-art congestion / bufferbloat algorithms turned on. Also, TCP
       | Fast Open is ridiculously old, and I can't see how mainstream
       | equipment still wouldn't support it on average.
        
       | parhamn wrote:
       | Those are pretty modest gains for a layer 4 change. It's going to
       | be much harder to tool/debug this stuff. Is it expected that
       | servers pretty much always support all the HTTP protocols or is
       | the goal to eventually replace the earlier forms?
        
         | ianswett wrote:
         | In regards to the gains, these results are without 0-RTT, so I
         | expect the gains with 0-RTT to be substantially larger.
         | 
         | The numbers include total application latency, not just the
         | latency introduced by the network, so the improvement to the
         | network latency is larger. As such, applications that are more
         | sensitive to network latency would show larger improvements.
         | 
         | Thanks, Ian
         | 
         | Disclosure: I authored the post
        
           | StefanKarpinski wrote:
           | If I understand the benefits of HTTP/3 correctly, this post
           | also doesn't address one of the major ones: seamless
           | connection handoff during client mobility. Earlier HTTP
           | versions are TCP-based, so if the IP address of a mobile
           | client changes, e.g. because of moving from one cell tower to
           | another or from wifi to/from cell, then the application layer
           | has to notice the lost connection, create a new connection,
           | and move whatever application-level context there is to the
           | new connection. Not all applications do this, and even when
           | they do, it's slow. If my understanding is correct, with
           | HTTP/3 that's no longer necessary -- the same HTTP/3
           | connection can migrate from one IP address to a different
           | one.
           | 
           | Another benefit that isn't measured in this post but has been
           | mentioned elsewhere in the comments here is that the
           | experience on flaky wireless connections (without changing
           | IPs) should be much better. TCP was designed on the premise
           | that packet loss is almost never due to a physical failure to
           | transmit a packet, and almost always due to router queues
           | being full (i.e. network congestion). Wireless networks
           | violate this assumption badly: physical-layer issues are the
           | most likely cause of packet loss on a wireless connection.
           | TCP reacts to wifi packet drops by backing off, assuming that
           | some router is overloaded, but the routers are fine -- it's
           | just the last-hop signal that's bad. In those circumstances, the
           | client should just try again instead of throttling the
           | connection to nothing. Since HTTP/3 uses UDP, it can
           | potentially handle dropped packets more appropriately.
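           | 
           | A rough sketch of why that migration is possible (hypothetical
           | names below, not a real QUIC stack): a TCP endpoint looks up
           | connection state by the 4-tuple, so a new client IP orphans
           | the old state, while a QUIC endpoint looks it up by a
           | connection ID carried in every packet, so the same state is
           | found again after the client moves.
           | 
           |   type ConnState = { appContext: string };
           | 
           |   // TCP-style demultiplexing: keyed by the 4-tuple, so a new
           |   // client IP means the old key no longer matches.
           |   const tcpConns = new Map<string, ConnState>();
           |   const tcpKey = (cIp: string, cPort: number, sIp: string, sPort: number) =>
           |     `${cIp}:${cPort}->${sIp}:${sPort}`;
           | 
           |   // QUIC-style demultiplexing: keyed by a connection ID,
           |   // independent of the client's current IP address.
           |   const quicConns = new Map<string, ConnState>();
           | 
           |   tcpConns.set(tcpKey("10.0.0.5", 51000, "93.184.216.34", 443),
           |                { appContext: "session A" });
           |   quicConns.set("cid-7f3a", { appContext: "session A" });
           | 
           |   // Client migrates from 10.0.0.5 (WiFi) to 100.64.1.9 (cellular):
           |   tcpConns.get(tcpKey("100.64.1.9", 51000, "93.184.216.34", 443)); // undefined
           |   quicConns.get("cid-7f3a"); // still found: { appContext: "session A" }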
        
             | nh2 wrote:
             | On your first point, why is it necessary to switch IPs when
             | switching between cell towers? Doesn't the cell ISP manage
             | the IP anyway, thus making it easy to keep it from one
             | tower to the next?
             | 
             | On your second point, it's configurable how TCP reacts to
             | packet loss. For example, wasn't BBR congestion control
             | made to address exactly that case?
        
               | judge2020 wrote:
               | I think at least a few networks already make an attempt to
               | maintain IPs, at least when the new cell tower goes to
               | the same backbone, but the problem isn't that your IP
               | address changes - it's that the TCP connection itself is
               | no longer there. Transferring an IP is easier than
               | transferring a TCP connection and ensuring all packets
               | are received and ordered correctly between the two
               | towers.
        
               | jsnell wrote:
               | Every single mobile network will maintain IP addresses
               | when migrating between cells. The IP address is not a
               | property that the base stations care about at all, it's
               | generally tied to the PDP context maintained by the
               | GGSN/PGW, which will generally not be invalidated unless
               | the connection is idle. Mobile operators will only have a
               | handful of core networks even in large countries; having
               | a subscriber move from the area covered by one core
               | network to another would be quite rare.
               | 
               | Switching between cells will cause a latency spike, and a
               | lot of correlated packet loss if the subscriber has a
               | large queue of undelivered data, but that's all. But
               | handling packet loss and ordering are what TCP is
               | supposed to do.
               | 
               | I have no idea of what you mean by "the TCP connection is
               | no longer there". The TCP connection is a distributed
               | system living on the client and the server. It doesn't go
               | away unless one of the endpoints decides so. (Modulo
               | stateful middleboxes, like NATs. But nobody in their
               | right mind would run a TCP state-aware middlebox on a
               | cellular network base station).
               | 
               | IP changes are relevant when switching networks entirely,
               | like going from WiFi to mobile.
        
               | tialaramex wrote:
               | Yup.
               | 
               | One type of roaming that does trigger an IP change for
               | smartphones and similar devices is WiFi->4G/5G->WiFi.
               | 
               | You're at home, obviously you don't want expensive mobile
               | network data charges when you've got WiFi. So the
               | connection is over WiFi. But as you walk out the door,
               | currently your application software needs to spot that
               | the WiFi is going away (not too hard), connect over the
               | mobile network (unless your policies say to give up
               | instead to save money) and keep going. QUIC would allow
               | this to be done transparently at the transport layer, at
               | least in some cases. When you reach a coffee shop,
               | friend's place, or work and there's WiFi again, the
               | opposite transition saves you money, and if you go indoors
               | where the signal is weaker it may also be necessary to
               | keep a working connection.
        
         | ed25519FUUU wrote:
         | Unless your web traffic represents a single- or double-digit
         | percentage of worldwide web traffic, I don't see why anyone
         | would bother.
        
           | ehsankia wrote:
           | Well good thing it's optional and really only useful for a
           | small subset of very large websites. I still serve my simple
           | static unminified websites, but that doesn't mean
           | minification and other optimizations have no reason to exist.
        
           | pokoleo wrote:
           | Just like minified JS, I expect people will run 3 in
           | production, and 2 in development/etc environments.
        
           | baggy_trough wrote:
           | You would bother if you wanted reduced latency for a better
           | user experience.
        
             | leothecool wrote:
             | I don't disagree, but I still really really have a hard
             | time caring about < 1ms of latency on a 50ms call.
             | 
             | But, if I had to pay the electric bill on 2.5 million
             | servers, I would definitely care about wasting resources
             | sending extra packets.
        
               | throwaways885 wrote:
               | That 1 ms translates to a much larger number in poor
               | countries with everyone on 2G.
        
               | leothecool wrote:
               | I guess I'd have to see the metrics on it in production,
               | but my intuition is that a 2% improvement to latency
               | would be even less noticeable on low bandwidth
               | connections where the download times are measured in
               | whole seconds.
        
               | baggy_trough wrote:
               | It's really the packet loss and TCP backoff that's a
               | killer.
        
             | ed25519FUUU wrote:
             | It would bother me if I had to debug an issue with QUIC for
             | a website that never needed it in the first place.
        
             | weego wrote:
             | Optimising at that granularity is also utterly at odds with
             | everything that webdev has been doing for decades now.
             | 
             | It's a micro optimisation that won't even register for your
             | users, especially if you're reducing latency on 2mb of JS
             | bundles
        
               | baggy_trough wrote:
               | Well those webdevs are terrible, but that doesn't mean
               | everyone has to follow what they are doing.
        
             | kd913 wrote:
             | If you wanted reduced latency, I think you would stick with
             | HTTP/2 with Onload on a Solarflare card.
             | 
             | Until dedicated hardware NIC accelerators come out on the
             | market, I think you would find HTTP/3 has worse latency.
        
               | baggy_trough wrote:
               | You're still stuck with TCP and its backoff algorithm in
               | that case though.
        
       | The_rationalist wrote:
       | If only HTTP/3 was based on SCTP instead.
        
       | forgotmypw17 wrote:
       | I think it's safe to assume that this will become widely adopted
       | and then HTTP/1.1 will get the cross-out treatment?..
        
       | ssss11 wrote:
       | Is this being added to the Chromium code? It's hard to tell if
       | it's being added (and in which release) or if parts of it or all
       | of it are already in Chrome or Chromium and are just being
       | enabled now.
        
         | jrockway wrote:
         | It's in there (and has been for a while). You can open your
         | network inspector, enable the "Protocol" heading, and look for
         | "h3" requests. For example, if I visit cloudflare.com right
         | now, I see a bunch of requests with protocol "h3-29". If I
         | visit Google maps, I see requests with "h3-Q050". (The part
         | after the h3- is the draft number; Cloudflare's servers use
         | draft 29; Google uses their own thing which identifies itself
         | as Q050.)
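         | 
         | If you'd rather check from the page itself, here's a rough
         | sketch using the Resource Timing API (paste it into the
         | devtools console; nextHopProtocol reports values like
         | "http/1.1", "h2", or the "h3-" drafts mentioned above):
         | 
         |   // Count loaded resources by negotiated protocol.
         |   const byProtocol = new Map<string, number>();
         |   const entries =
         |     performance.getEntriesByType("resource") as PerformanceResourceTiming[];
         |   for (const entry of entries) {
         |     const proto = entry.nextHopProtocol || "unknown";
         |     byProtocol.set(proto, (byProtocol.get(proto) ?? 0) + 1);
         |   }
         |   console.log(Object.fromEntries(byProtocol)); // e.g. { "h2": 12, "h3-29": 7 }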
        
       | GNOMES wrote:
       | I remember a Hacker News post saying that many of the top
       | firewall vendors suggest blocking UDP on port 443. Apparently it
       | makes packet inspection, browsing restrictions, etc. hard in the
       | enterprise space.
       | 
       | Have there been any leaps in Firewall tech, or will most
       | companies still disable this?
        
         | driverdan wrote:
         | Defeating firewalls is a feature.
        
         | microcolonel wrote:
         | > _Have there been any leaps in Firewall tech, or will most
         | companies still disable this?_
         | 
         | QUIC is explicitly designed to frustrate this sort of thing, so
         | the enterprise will just have to choose between having and not
         | having it, or switch from MITM to endpoint backdoors.
        
           | cheschire wrote:
           | Or go full BYOD and forgo the network equivalent of 14th
           | century defenses.
        
             | microcolonel wrote:
             | The solution for us is total endpoint control. Our
             | application endpoints have their own DNS root. :+ )
        
             | thisisnico wrote:
             | Even with BYOD, you still need a firewall... Source: Am
             | Enterprise IT.
        
             | unethical_ban wrote:
             | I think you underestimate the complexity of securing data
             | in a privileged network.
        
       | evolve2k wrote:
       | "Today this changes. We've found that IETF QUIC significantly
       | outperforms HTTP over TLS 1.3 over TCP. In particular, Google
       | search latency decreases by over 2%. YouTube rebuffer time
       | decreased by over 9%, while client throughput increased by over
       | 3% on desktop and over 7% on mobile."
       | 
       | This is the most sickening sentence for me. The myopic internal
       | focus. 'Look we've made our new thing a standard and look it
       | makes our products run faster'. This is just blatant exploitation
       | that's occurring as there is too much centralised ownership. In
       | my opinion this is predatory behaviour packaged up as open source
       | good for all.
        
         | Animats wrote:
         | All that effort to get a tiny improvement.
         | 
         | The real motivation is probably to merge ads into the same
         | stream as the content, so they can't be blocked by anything
         | outside the browser.
        
           | rektide wrote:
           | this is such uneducated meanness. real engineers from all
           | over care about improving http a lot. you throw random
           | conspiracy-grade conjecture in, throw this uneducated
           | disrespectful shade why? how much effort have you made to
           | understand the improvements? you show no recognition that the
           | consensus-based ietf working groups are in agreement that
           | http3 is a worthy & good enhancement to http. please do not
           | degrade that work so coarsely & with such empty hot air. be
           | civil.
        
             | martin_a wrote:
             | Nevertheless we should all be as clear as possible about
             | Google's motivation for pushing any new technology: better
             | data collection, more/better/fine-tuned ads.
        
             | Matheus28 wrote:
             | I don't see how what he said is offensive or uncivil at all
        
             | divbzero wrote:
             | I don't think GP's speculation is anywhere close to "random
             | conspiracy grade" stuff. It's predictable and plausible
             | enough of a motive to be mentioned elsewhere in this
             | thread, _e.g._ this top-level comment [1].
             | 
             | [1]: https://news.ycombinator.com/item?id=24711247
        
         | 1vuio0pswjnm7 wrote:
         | "The myopic internal focus."
         | 
         | When "everyone" uses your software/websites, you can get away
         | with this, much like Microsoft did back in the day. Internal
         | Windows programs ran faster and seemed more stable than third
         | party software written to run on Windows. It generally offered
         | better "UX", to use today's lingo.
         | 
         | The thing that is totally ignored by the myopic focus is _the
         | value of not being part of the Borg_. Putting small speed
         | differences aside, that value is not insubstantial, though the
         | Google communications department is not responsible for keeping
         | users informed about anything more than the value of Google.
         | 
         | The fastest internet would be one without ads (not to mention
         | the other benefits). Google will not inform anyone about that.
         | It is not included in any tests.
        
           | majewsky wrote:
           | > The fastest internet would be one without ads
           | 
           | I don't disagree with your sentiment. However, their biggest
           | gains in terms of percentage points (as discussed in the
           | article) are for YouTube. And YouTube wouldn't be
           | significantly faster without ads in the way that browsing
           | primarily textual websites is faster without ads. Getting
           | buffering times down is actually a substantial improvement
           | here esp. on flaky connections. (Though the article doesn't
           | make it clear whether flaky connections specifically benefit
           | from QUIC.)
        
         | rektide wrote:
         | > In my opinion this is predatory behaviour packaged up as open
         | source good for all.
         | 
         | you appear to have made no attempt to understand or grasp the
         | what or why of quic & http3. many ietf engineers from all
         | corners have worked on this because they believe it to be an
         | improvement.
         | this tackles very real head of line blocking problems &
         | introduces a much more forward & fast security model.
         | 
         | quit your cheap slander please. your casual, uninvested
         | vitriolic outburst is disgusting. show some manners. you
         | immediately jumped to the nasty gross conclusion you wanted to
         | based off some engineers citing improved performance, threw
         | slander at the attempt without even a quick look at how or why
         | engineers sought these improvements. this behavior is
         | sickeningly anti-social, & nothing more than a projection of
         | your warped world view. please attempt to surface something
         | even somewhat coupled to truth when you go about shooting off.
        
         | dheera wrote:
         | Yep, because we've built an economy based on idiotic fiduciary
         | duty instead of duty to technologically advance the
         | civilization.
         | 
         | Myopic focus is exactly what the system rewards.
        
         | namesbc wrote:
         | It is improving performance for EVERYONE. They just published
         | the stats for their own stack so people can compare.
        
           | dheera wrote:
           | Although that is true, I agree with GP in the sense that the
           | improvements in protocol and browser are motivated by what
           | benefits their own products, rather than what benefits
           | everyone.
        
           | krzyk wrote:
           | Increases performance by 3%, and complicates HTTP by 100%.
           | I'm not sure I would like that. I would understand if it was
           | 2x performance gain or at least 50%, but a mere 3%?
        
             | sangnoir wrote:
             | This is exactly why it's important that they publish their
             | figures - so you can decide whether it works for you
             | after weighing the tradeoffs. For high-traffic properties
             | (Google, CDNs, Netflix), 3% can easily translate to
             | millions of dollars saved - they could hire an entire team
             | just to deal with the HTTP complications.
        
             | onlyrealcuzzo wrote:
             | 3% of HTTP traffic is a lot.
        
               | swiley wrote:
               | Most of it is garbage that doesn't need to be there and
               | is shoved in people's faces because it makes other people
               | money/power.
               | 
               | Ex: Why does clicking on a location link in google search
               | load a big slow SPA to display a list of "cards" instead
               | of just having a maps/web link in the search results?
        
               | aruggirello wrote:
               | Web page weight has increased 300% on average _in the
               | last decade alone_.
        
             | tjohns wrote:
             | Don't forget the 7% throughput increase on mobile. That's a
             | big deal, especially given that mobile users account for
             | the majority of network traffic these days.
             | 
             | And 3% is still a lot if you're looking at global Internet
             | traffic.
        
               | martin_a wrote:
               | My PiHole is currently dropping about 20% of all
               | requests. How about we get rid of all of those and see a
               | REAL speed boost, especially on mobile?
        
             | hsbauauvhabzb wrote:
             | Unless you maintain web servers or HTTP proxies (maintain
             | as in being on the nginx/IIS/etc. core team), it really
             | makes no difference to you as a user.
             | 
             | Nobody has moved over to HTTP/2 yet, let alone HTTP/3.
        
       | fenesiistvan wrote:
       | In my opinion, an around 4% performance improvement doesn't
       | justify the introduction of this more complicated protocol (maybe
       | Google knows how this benefits their ad business, like forcing
       | everybody to HTTPS so they can increase their control over the
       | internet, since their scripts are already included by the
       | majority of websites, reporting all the important metrics back to
       | them regardless of HTTPS).
        
         | bawolff wrote:
         | Forcing everyone to HTTPS started long before this.
         | 
         | You control your computer - if you want to mess with network
         | traffic, make your own CA. It's not that hard.
        
         | jchw wrote:
         | It is an optional, backwards-compatible protocol. You can still
         | use HTTP/1.1 as a server or client. Literally the only reason
         | 1.0 isn't still usable is IP exhaustion/vhosts, and that did
         | not change with HTTP/2 or 3. Modern TLS is also a complicated
         | protocol. (If you don't believe me, I believe google.com will
         | _still_ load in IE 5.5+ with plain old HTTP, albeit in a legacy
         | mode.)
         | 
         | Also very confused at how we're spinning HTTPS as a bad thing
         | now? Cloudflare and Lets Encrypt did significantly more for
         | HTTPS adoption than Google anyways... and it is a bit
         | preposterous that it is somehow being spun as a negative. It's
         | about security as much as it is about privacy...
         | 
         | (I am a Google employee but speaking entirely in a personal
         | capacity. Additionally, I do not work on Chrome or QUIC.)
        
           | a1369209993 wrote:
           | > Also very confused at how we're spinning HTTPS as a bad
           | thing now?
           | 
           |     > GET /index.html HTTP/1.1
           |     > Host: example.com
           |     < HTTP/1.1 301 Moved Permanently
           |     < Content-length: 0
           |     < Location: https://example.com/index.html
           | 
           | That is how HTTPS is a bad thing.
        
             | jchw wrote:
             | I have a solution: connect over TLS on port 443.
        
             | tene wrote:
             | Could you explain what problem you're trying to demonstrate
             | here? What's the bad thing?
        
               | a1369209993 wrote:
               | It's no longer possible to fetch some things over HTTP
               | _at all_, because the server responds, not with the
               | content requested, but with a demand to use a different,
               | more complicated protocol.
        
         | uluyol wrote:
         | Please explain how it is more complicated. Also, QUIC is not
         | just about the performance improvements offered today.
         | 
         | Existing protocols like TCP and TLS are not really simpler than
         | QUIC, you just don't think about them because we have
         | implementations of them already. However, _changing_ TCP and
         | TLS is extremely difficult to impossible because middleboxes
         | snoop on traffic and mess it up in various ways. As an example,
         | multipath TCP has been engineered to look like regular TCP and
         | automatically downgrade to regular TCP if middleboxes can't
         | handle it. Making this work is hard and 100% a waste of time
         | just to work around the fact that people deploy these boxes. I
         | believe TLS 1.3 also had deployment challenges due to
         | middleboxes.
         | 
         | QUIC encrypts ~everything so that middleboxes can't make broken
         | assumptions and manipulate traffic. Adopting it is a one-time
         | pain that enables later improvements to be possible.
        
           | tialaramex wrote:
           | > I believe TLS 1.3 also had deployment challenges due to
           | middleboxes.
           | 
           | There was about a one-year delay between the point where the
           | protocol was initially "done" and experiments showed it could
           | not be deployed - partly because of middleboxes, but also due
           | to server intolerance (web servers that go "What? TLS 1.3?
           | No, rather than negotiating TLS 1.2 I'll just ignore you and
           | hope you go away, you weirdo") - and the point where the
           | revised TLS 1.3 wire spelling was finished and tested (about
           | six months before it was published as RFC 8446).
           | 
           | The core idea in TLS 1.3 as shipped is that the initial setup
           | phase looks outwardly very much like TLS 1.2 resumption.
           | Interpreted as if it was TLS 1.2 the TLS 1.3 client claims to
           | be trying to resume a previous connection, a TLS 1.3 server
           | claims to accept that resumption, but really they actually
           | just agreed a brand new connection. A TLS 1.2 server would
           | see the resumption attempt, but it has no memory of any such
           | prior connection (there wasn't one, the "session ID" is
           | just random bytes) so it offers a new one using TLS 1.2 and
           | everything goes swimmingly.
           | 
           | This way of doing things allows TLS 1.3 to be as fast on
           | first connection as TLS 1.2 was on resumption without causing
           | problems with incompatible middleboxes or servers. It does
           | make the "spelling" on the wire pretty weird looking though
           | if you are used to looking at TLS 1.2.
           | 
           | The other essential goal was to never back off. A TLS 1.3
           | client will never go "Huh, TLS 1.3 didn't work, let's try
           | again with TLS 1.2 instead". The design means if the remote
           | server can speak TLS 1.2 (or 1.0 or 1.1) it will respond as
           | such to your TLS 1.3 connection. This means adversaries can't
           | try to "downgrade" you to a bad older version.
        
         | anuila wrote:
         | I can't believe there are still people who are sour about the
         | HTTPS push. Just people complaining that it makes their job
         | harder when it clearly is a huge win for the end user. Do you
         | know how many airport/business hotspots inject junk in your
         | non-HTTPS pages? It's totally worth spending 5 minutes setting
         | it up.
        
         | tpmx wrote:
         | It adds to a number of other great reasons to break up Google
         | and some other big tech companies. The fact that Google is so
         | dominant (controlling both the discovery, browser and casual
         | video content aspects) that they can unilaterally decide on
         | fundamental internetworking protocols is obviviously a very
         | important issue.
         | 
         | "U.S. House's antitrust report hints at break-up of big tech
         | firms: lawmaker (reuters.com)"
         | 
         | https://news.ycombinator.com/item?id=24697860
        
           | laurent92 wrote:
           | Notwithstanding your argument, their size also makes them
           | suspect by default. They would be able to push control much
           | farther if they had separated interests.
           | 
           | Look at LetsEncrypt: A Google initiative. Yet 25% of the
           | world's websites use it.
        
             | unpixer wrote:
             | How exactly do you figure Let's Encrypt is a Google
             | initiative?
        
             | tialaramex wrote:
             | > Look at LetsEncrypt: A Google initiative. Yet 25% of the
             | world's websites use it.
             | 
             | No. Let's Encrypt is a service of the Internet Security
             | Research Group, a California Public Benefit Corporation, it
             | isn't an "initiative" of Google except in the same sense
             | that the Red Cross is an initiative of Google, or Sweden is
             | a US state, to make it seem "true" you need to squint so
             | hard you can't see anything properly at all.
        
             | [deleted]
        
           | [deleted]
        
           | redant wrote:
           | This argument is false. Google contributed the protocol to
           | the IETF, where the competitors introduced a lot of
           | incompatible changes, which Google then implemented. The blog
           | post is literally Google's announcement that they are moving
           | to that public standard.
        
             | tpmx wrote:
             | What other company could have done this? Was the fact that
             | Google controls so much of the entire usage chain (chrome,
             | google.com, youtube.com) irrelevant?
        
               | tialaramex wrote:
               | In this _specific_ case only three other companies make
               | web browsers with significant market share (Mozilla,
               | Apple, Microsoft), so I guess you could argue only those
               | three companies could have done this particular thing and
               | you're correct that only Google owns YouTube.
               | 
               | But more generally companies have written up IETF
               | paperwork for other protocols. Lots of Microsoft
               | protocols have RFCs for example. But one thing that's
               | less common is actually engaging with full-blown IETF
               | working group standards development like Google did here,
               | as opposed to just saying "Look here's the protocol we
               | built, you can use that, or not". The IETF is totally
               | happy to accept what I guess you could call a "donation"
               | of that sort, and it's much less effort. Maybe you take
               | some internal documents, you reassemble them into the
               | rough shape of an RFC, you publish that draft, you get a
               | bunch of feedback about that document, focused on
               | clarifying the explanation, making sure you cover
               | everything required, and so on rather than altering the
               | protocol (which you've maybe already actually shipped in
               | a product) and after maybe 6-12 months you've got a
               | polished RFC ready to publish.
               | 
               | If you use a work VPN for example, or a corporate WiFi
               | network that's not just a few home WiFi routers with a
               | more professional SSID and password, you probably end up
               | using protocols Microsoft "donated" in this way, like
               | PEAPv0/EAP-MSCHAPv2 - these protocols are _awful_ but
               | there was no multi-step process where other vendors
               | improve on it and then they eventually reach consensus
               | and publish. Microsoft shipped products that do MSCHAPv1,
               | then wrote it up so that other products could
               | interoperate with Windows, and when they made MSCHAPv2
               | they followed the same path.
        
               | bb88 wrote:
               | Back in the 90s Microsoft had been rumored to be
               | developing their own proprietary TCP replacement -- back
               | when IIS was the king of the world.
               | 
               | They could have shut off a large portion of IIS traffic
               | to those that weren't running Internet Explorer.
        
               | tpmx wrote:
               | Makes sense.
               | 
               | I guess they failed because they were too late to the web
               | - Netscape ate their breakfast.
        
               | Ericson2314 wrote:
               | Have anything to read on this?
        
         | anderspitman wrote:
         | Since QUIC eliminates head of line blocking, I'd be interested
         | in seeing how it looks over poor connections.
        
         | jraph wrote:
         | > forcing everybody to https so they can increase their control
         | over the internet
         | 
         | how does this increase their control?
        
           | anchpop wrote:
           | It prevents ISPs from snooping. But personally I don't really
           | buy the shadowy conspiracy theories. I think HTTPS makes the
           | internet better (people want to be able to trust their
           | connections are secret), and anything that makes the internet
           | better is good for Google.
        
             | wvenable wrote:
             | There is a trade off. Encryption everywhere protects the
             | clients from the network. That's great for most usage but
             | it's a negative if it's your local network and you want to
             | have more control over clients using it.
             | 
             | Right now it's possible to selectively block client
             | activity on your network (your smart TV snooping or showing
             | ads) but that's going to get much harder in the future.
             | You'll have to choose all or none when it comes to clients.
        
             | wtetzner wrote:
             | > But personally I don't really buy the shadowy conspiracy
             | theories. I think HTTPS makes the internet better
             | 
             | I guess both can be true.
        
           | zo1 wrote:
           | It takes away control from the user that no longer has full
           | and relatively-easy control over the data flowing through
           | their hardware. With Google controlling the browser, the web
           | renderer, the HTTP protocols, the add-ons available in the
           | browser and preventing data manipulation due to HTTPS, they
           | have an encryption-protected pipe straight from their servers
           | to the user's screen.
           | 
           | And it's all done under the guise and blessing of "privacy".
           | It's really a bleak future for the web.
           | 
           | Sorry, I went on a bit of a tangent. But to answer your
           | question: HTTPS makes it incredibly difficult to introspect
           | and alter content that is flowing through the web and your
           | browser. If a user (and their ISP, if done correctly) could
           | easily alter the content at the network level, one could do
           | all sorts of magic that we haven't even begun to explore,
           | because right now it's effectively impossible.
           | 
           | The big usage of this would be ad-blocking and removal. At
           | this point the two biggest ad-blocking mechanisms we easily
           | have available are: DNS-blocking of ad servers, and add-
           | ons/plugins that are allowing introspection of the data on
           | the web pages visited. Both of those avenues are being
           | attacked. Add-on APIs and capabilities are being neutered in
           | little bits and pieces both on Firefox + Chrome. And DNS is
           | being attacked with things such as DNS over HTTPS (again
           | under the guise of privacy).
           | 
           | Not to mention that even SSL certificates that allow MITM for
           | the user are being attacked by initiatives such as embedding
           | SSL certificates into binaries, and certificate pinning
           | (which luckily seems to have been abandoned).
           | 
           | We need FOSS/Stallman-level activism and wars against this
           | stuff that is eating away at the rights we have over our own
           | hardware. Whatever you call this issue, it should be right up
           | there with "right to repair", "own our own data", "right to
           | be forgotten", etc.
           | 
           | Edit, wrong acronym.
        
             | [deleted]
        
         | chrismorgan wrote:
         | What are you comparing it to?
         | 
         | If you're comparing it to HTTP/1.1, the performance
         | improvements are generally a _lot_ better than that. The
         | _latency_ improvements are commonly something around that, but
         | the total page loading performance will tend to be better
         | because you get proper multiplexing.
         | 
         | But then you may say, why do we need this instead of HTTP/2,
         | which had proper multiplexing? Well, it improves things a bit
         | further, commonly improving throughput and latency by 1-10% if
         | I'm recalling the right figures; but more importantly, it fixes
         | the TCP head-of-line blocking issue that made HTTP/2 often
         | actually perform a _lot_ worse than HTTP/1.1 on low-quality
         | connections.
         | 
         | I know of sites that have held off on HTTP/2 or rolled it back
         | because it made things measurably worse for some users, and of
         | sites that split things across domains with some HTTP/1.1 and
         | some HTTP/2, deliberately, purely because of the TCP HOLB
         | issue. HTTP/3 fixes that, so that it should no longer be a
         | question of whether you make things faster for some users at
         | the cost of others--you can instead make it faster for
         | everyone.
        
           | dmix wrote:
           | Every millisecond counts when loading JS backed features on
           | websites too.
           | 
           | I'm curious with the multiplexing improvements if we'll see
           | greater performance gains in the long-term as we changed how
           | we package and bundle JS.
           | 
           | I've seen a significant improvement in general page
           | performance using Webpack's chunking, where it automatically
           | breaks up each of your components into smaller .js files and
           | only loads them if the page uses them (basically on-demand
           | async importing of JS files that were preprocessed with
           | webpack).
           | 
           | It went from loading one giant blob of JS on every page into
           | one primary JS file (about 25% smaller) + a bunch of tiny
           | 1-10kb .js files that load async. A typical heavily
           | interactive page would load 5-10 of these async files.
           | 
           | There's probably opportunities to go even further in breaking
           | up the primary file (which handles the logic of which JS
           | components to load + includes the Vue/whatever framework and
           | other JS dependencies).
           | 
           | I understand the utility of "loading once and caching" stuff,
           | but for serious JS-heavy frontends the bundled JS files
           | become extremely bulky (sometimes multiple megabytes due
           | to legacy dependencies), and ideally you'd minimize that
           | always-cached part as much as possible.
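           | 
           | For anyone who hasn't used it, the split point is just a
           | dynamic import() (hypothetical component names below):
           | webpack emits each one as its own small chunk that is only
           | fetched when the page actually needs it.
           | 
           |   async function mountHeavyWidget(el: HTMLElement) {
           |     // Becomes a separate ~1-10kb chunk, loaded on demand.
           |     const { HeavyWidget } = await import(
           |       /* webpackChunkName: "heavy-widget" */ "./components/HeavyWidget"
           |     );
           |     new HeavyWidget(el).render();
           |   }
           | 
           |   const el = document.querySelector<HTMLElement>("#heavy-widget");
           |   if (el) void mountHeavyWidget(el);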
        
         | GlitchMr wrote:
         | HTTP/2 and HTTP/3 don't require HTTPS because "it benefits
         | Google" or whatever, but rather because it's the only way those
         | protocols could be usable to begin with.
         | 
         | It is possible to use HTTP/2 without HTTPS, but the problem is
         | that there were a lot of systems that modified unencrypted HTTP
         | traffic and got confused by HTTP/2 protocol - it looked nothing
         | like HTTP/1. The easiest workaround for this issue was to
         | require HTTPS, so that's what was done.
         | 
         | Also, when HTTPS is used the server can say during the TLS
         | handshake that it supports HTTP/2, avoiding the cost of having
         | to figure out whether the server supports HTTP/2 - this cannot
         | be done with plain HTTP as there is no handshake. If a web
         | browser were to assume the server supports HTTP/2, it would
         | make initial HTTP/1 requests slower, as it would have to try
         | HTTP/2 first (and then you would have people complaining about
         | Google making HTTP/1 slower to make HTTP/2 look more
         | attractive). If a web browser instead started with HTTP/1, it
         | would make HTTP/2 requests slower, as it would have to try
         | HTTP/1 first (which would slow down HTTP/2 when it was supposed
         | to be fast).
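         | 
         | A rough sketch of that handshake step, which is ALPN (Node.js
         | and its standard tls module, with example.com as a
         | placeholder): the client offers h2 and http/1.1, the server
         | picks one, and no extra round trip is spent probing for HTTP/2
         | support.
         | 
         |   import * as tls from "node:tls";
         | 
         |   const socket = tls.connect({
         |     host: "example.com",
         |     port: 443,
         |     servername: "example.com",
         |     ALPNProtocols: ["h2", "http/1.1"], // offered in preference order
         |   });
         |   socket.on("secureConnect", () => {
         |     // Whichever protocol the server selected, e.g. "h2".
         |     console.log("server selected:", socket.alpnProtocol);
         |     socket.end();
         |   });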
        
       | Randor wrote:
       | With both a DNS-over-HTTP client and potentially a DNS-over-QUIC
       | in the browser and serving advertisements over QUIC... there is a
       | good chance that the world will see unblockable advertisements in
       | our near future.
       | 
       | I don't think this is a good idea... about a decade ago... as a
       | research project I ran a honeypot farm of 13 machines to learn
       | more about malware. The honeypot machines were autonomously surfing
       | the net, parsing the DOM and choosing random links. I ran them in
       | a sandbox and was getting weekly malware hits.
       | 
       | Much to my surprise... most of the malware was coming over
       | advertisement networks on shady websites.
        
         | staticassertion wrote:
         | I genuinely don't see how DNS-over-http/quic is leading to
         | unblockable advertisements. Adblocking extensions, which have
         | got to account for at least 99% of all adblocking, don't work
         | on DNS at all afaik - they work on already resolved hostnames?
        
           | lukeramsden wrote:
           | Also, couldn't you just MITM the secure transport layer if
           | you want to run a pihole?
        
             | frank2 wrote:
             | Have you tried browsing through mitmproxy? I have. It
             | works, mostly, but is quite unpleasant.
        
           | lostmsu wrote:
           | The ad data (text, layout, media) can still be hidden on the
           | client, but AFAIK with QUIC you can push it to the browser
           | regardless of whether it was actually requested.
        
           | userbinator wrote:
           | HOSTS files are the simplest and can already remove quite a
           | bit of cruft, but what's effectively tunneling DNS in its own
           | VPN will bypass that.
           | 
           | Ad blocking extensions are really a "last defense" and can be
           | slowly lobotomised as they are under browser vendors'
           | control.
        
             | judge2020 wrote:
             | Chrome respects HOSTS file modifications itself[0], but I'd
             | still recommend a pi-hole with DOH set up[1].
             | 
             | 0: https://i.judge.sh/chief/Spoon/notepad_fwyvP8v6li.png
             | 
             | 1: https://docs.pi-hole.net/guides/dns-over-https/
        
             | stefan_ wrote:
             | You got it the wrong way around. Ad blocker extensions
             | should be your first choice, this DNS blocking business is
             | really for devices where that is not possible ("Smart" TV
             | et al) and it's getting less effective by the day.
        
             | staticassertion wrote:
             | I assume DNS over Whatever will still respect the HOSTS
             | file? I'd be really surprised to find otherwise, but would
             | love to hear if that's the case! If you don't trust the
             | browser you already have very little defense against ads,
             | no? After all, the browser could always just resolve DNS
             | itself.
             | 
             | HOSTS files may also be simplest (I totally disagree btw),
             | but I can't imagine they're anywhere near the most common.
        
               | hdjdbtbgwjsn wrote:
               | I don't think that hosts files are respected in e.g. FF.
               | Why not test for yourself though?
        
           | rektide wrote:
           | this thread is filled with some very very aggressive high
           | level instigators, spouting wild outlandish grimdark
           | fantasies that they in no way justify or explain.
           | 
           | there is such a poisonous mentality that has taken root, that
           | is popular. no one at the ietf shares these delusional fears,
           | & it's not because the ietf is a bought out shadow puppet.
           | it's because in reality http3 brings what http2 brought but
           | technically better which brought what http1 brought but
           | technically better.
        
             | martin_a wrote:
             | > spouting wild outlandish grimdark fantasies that they in
             | no way justify or explain.
             | 
             | Did you have a good look at how the internet evolved in the
             | last 15 years?
             | 
             | The IETF can invent HTTP/23 with DNS-over-whatever
             | tomorrow, that doesn't change anything about the general
             | state of the internet, which can best be summarized as a
             | "burning pile of poop".
             | 
             | Do you really need any examples of how broken everything
             | is? Do you really wonder that people have "grimdark
             | fantasies" about what the "next cool web feature" will
             | bring to us?
        
         | hsbauauvhabzb wrote:
         | Most services can fall back to HTTP/1.1 gracefully. You should
         | be able to configure a proxy which hard-drops HTTP/2, HTTP/3 and
         | DNS tunnelling. You're at the mercy of the server to actually
         | respect HTTP/1.1, but I've not yet heard of a 2+ only service.
         | 
         | You will lose performance, but you'll gain control.
        
           | majewsky wrote:
           | If it's not happening already, I absolutely expect there to
           | be API endpoints that only accept HTTP/2, esp. for APIs that
           | only assume their own inhouse apps to be talking to them
           | (esp. smartphone apps).
        
         | onion2k wrote:
         | _there is a good chance that the world will see unblockable
         | advertisements in our near future_
         | 
         | I don't think we'll ever have unblockable adverts. It would
         | give users a _huge_ incentive to change browser. Google get a
         | great deal of value from people using Chrome and I don't see
         | them giving that up just to serve ads to people that want to
         | block them.
        
         | tracker1 wrote:
         | Agreed to some extent, though not really sure the protocol
         | matters here... As an alternative, limiting IFrame to 1-3
         | layers would do a _LOT_ in terms of reducing the overhead
         | /bloat/risk... you get a full ad network payload, that injects
         | an iframe because of a buy miss, then another, another, etc and
         | so on.
         | 
         | In the end, limiting IFrame depth would probably do more than
         | many things for stopping some of the bad actors.
         | 
         | I use uBlock Origin and EFF Privacy Badger for most surfing,
         | which does a decent enough job. I don't think that the
         | enhancements to the protocol really do that much... though I
         | also don't know that some of the tradeoffs are worth losing a
         | human-readable protocol.
        
         | judge2020 wrote:
         | You can still define a custom HTTPS endpoint[0], you just need
         | to trust (within Windows) the self-signed certificate your
         | pihole/etc device has, in order to use it on Chrome; your
         | endpoint setting isn't messed with when you select 'enhanced
         | protection' or anything[1].
         | 
         | The only downside to this is that rogue IoT devices or hacked
         | IoT devices can easily get around DNS filtering, but that was
         | already possible before DoH by using or running some benign
         | free public http api for dns lookups.
         | 
         | 0: https://i.judge.sh/hollow/Lyra/chrome_hxURk11GDb.png
         | 
         | 1: https://i.judge.sh/bony/Fleet/s4HD5OqNvf.gif
        
         | tialaramex wrote:
         | > unblockable advertisements in our near future.
         | 
         | Why? The user agent will still be able to choose what to
         | display, and protocol improvements don't prevent you choosing
         | an agent that has your interests at heart.
         | 
         | Using secure transport means _nobody else_ gets to decide, and
         | I certainly have a long list of people I don't want deciding
         | whether I see things, so that helps.
        
           | userbinator wrote:
           | _don't prevent you choosing an agent that has your interests
           | at heart_
           | 
           | I choose Dillo or Netsurf. Now how can I still use sites it
           | can't even render because the developers have drunk the
           | Google-aid and used some trendy framework that requires the
           | latest version of Chrome and JS just to display some static
           | text and images?
           | 
           | Fuck Google and its creeping control over the Internet.
        
             | Arnt wrote:
             | Do you even want to visit that sort of site?
        
               | muxator wrote:
               | Maybe he wants to.
               | 
               | Once these practices become sufficiently spread, the
               | majority of developers and content producers will never
               | understand that an alternative is possible.
               | 
               | Or maybe he will need it: those same developers can
               | perfectly be contractors for a government agency whose
               | site the user has to access.
               | 
               | Trends slowly creep everywhere, independently of
               | technical merit.
        
               | swiley wrote:
               | When my brother got married his then fiance sent everyone
               | in the wedding party this site with a form on it:
               | 
               | It's mostly radio buttons (with a loading screen and a
               | pile of js/css to make it pretty of course.) You fill it
               | out, click submit and a matching suit gets shipped to
               | your house. I couldn't get some of the controls to work
               | and sent the support people an email. It turns out the
               | form they built uses some special chrome only API and
               | doesn't work in Firefox.
               | 
               | Ditto when I submitted my rental application for my
               | current apartment (how can you screw up a single page
               | with a file form that badly?)
               | 
               | The trends are definitely going the wrong way.
        
           | swiley wrote:
           | > The user agent will still be able to choose what to
           | display,
           | 
           | Will it? Even on Mozilla Firefox (which fewer and fewer sites
           | are tested against) you're pretty limited in controlling
           | this. I have zero faith that google will keep that working in
           | chrome
        
           | zo1 wrote:
           | I would _pay_ for an ISP that can read my web traffic and
           | alter it to remove ads, optimize images, filter out unwanted
           | content, and a gazillion other things.
        
             | freeopinion wrote:
             | You mean like the customer loyalty card at your
             | supermarket? That's all about making your life better,
             | right?
             | 
             | Google only ever shows you ads you want, never "unwanted
             | content". Are you suggesting Google could do even better if
             | they just knew a little more about you?
        
             | dimitrios1 wrote:
             | Why not just run pi-hole?
        
               | caseyohara wrote:
               | I don't think that will work with DNS-over-QUIC in the
               | browser, but I could be wrong.
        
               | JoshTriplett wrote:
               | It will if your device speaks QUIC; you can configure
               | your browser to talk to that device.
               | 
               | But I do think it makes much more sense to make the
               | browser just do the job directly, rather than delegating
               | that to a separate device.
               | 
               | Firefox continues to block ads just fine, and it'll keep
               | doing so in a QUIC world, while protecting me even more
               | from malicious local DNS servers. The average person is
               | much more likely to encounter a _hostile_ DNS server than
               | a  "helpful" one.
        
             | OptionX wrote:
             | ISPs are more likely to move in the direction of reading
             | your traffic to inject ads rather than remove it.
        
               | zo1 wrote:
               | Maybe - But that choice is taken away from me completely.
        
               | agwa wrote:
               | It's not. You could install a root certificate from the
               | ISP and then they could man-in-the-middle your traffic.
               | 
               | Encryption is what allows user choice rather than leaving
               | you at the mercy of your ISP.
        
               | vkou wrote:
               | If both of the two ISPs that serve my area make the
               | choice to inject ads for me, that choice will be taken
               | away from _me_ completely.
               | 
               | I'd much rather take away the choice to screw with what I
               | see _from_ my ISP.
        
           | throwaway2048 wrote:
           | Chrome is also crippling adblockers, plenty of websites are
           | already chrome only, and I'm guessing if you can ensure ad
           | delivery, a whole lot more are going to jump on that train.
        
             | gsich wrote:
             | Those are not worth visiting then.
        
               | shock wrote:
               | Until it's your bank that goes Chrome only.
        
               | colejohnson66 wrote:
               | Most sites that "require" Chrome will render and run fine
               | on Firefox. It's just a matter of setting your user agent
               | to say you're running Chrome.
        
               | freeopinion wrote:
               | You either have principles or you don't. Anything else is
               | haggling over price.
        
               | sneakernets wrote:
               | Meanwhile, in the real world...
               | 
               | As an early Linux user, I still remember running a
               | Windows VM with IE installed, for years, just to do
               | online banking. Heck, at my current corporate job all the
               | Mac users (mostly the web designers) are given VMs to run
               | IE11 to fill out their timecards, and use product
               | management software. This is absurd, but real.
               | 
               | I'm no stranger to having to use VMs to do basic stuff
               | online, but not everyone else is. And, to be honest, no
               | one should have to set up a VM to use a browser they
               | cannot trust, just to protect themselves from stupid
               | drive-by malware.
        
             | lukeramsden wrote:
             | How many are Chrome only vs Chromium only? I use ungoogled-
             | chromium[1] quite happily
             | 
             | [1] https://github.com/Eloston/ungoogled-chromium
        
             | jefftk wrote:
             | _> plenty of websites are already chrome only_
             | 
             | What websites are Chrome-only?
        
             | mixedCase wrote:
             | The easy way out is to choose a different Chromium fork
             | that has your interests at heart.
        
               | mrec wrote:
               | > _a different Chromium fork_
               | 
               | Well, that's a depressing sentence to read. The vultures
               | may be circling, but Firefox isn't dead yet.
        
               | hdjdbtbgwjsn wrote:
                | I've started to encounter websites that just don't even
               | render in FF.
        
               | garmaine wrote:
               | Firefox has been a dead browser walking for a number of
               | years now.
               | 
               | I don't like this any more than you, but it's the truth.
        
               | dont__panic wrote:
               | I'm not sure if this is true -- there was a dark time
               | during the period where Google started to advertise
               | Chrome on literally all google websites, but all of the
               | internal refactors and engine improvements over the past
               | 2-3 years have made an enormous impact on my user
               | experience. And as far as I'm concerned, Firefox isn't
               | dead as long as there isn't a viable replacement that:
               | 
               | - doesn't update via a sketchy background process (Google
               | Updater, or whatever they've renamed it to this month to
               | avoid scrutiny)
               | 
               | - allows me to customize the UI to fit my needs (tree
               | style tabs are an absolute must for me)
               | 
               | - allows me to _use a full ad blocker like uBlock Origin_
               | instead of arbitrarily limiting the ad blocker API to
               | advance the interests of advertisers
               | 
               | Of course, Firefox's management is an enormous problem,
               | from the way they've prioritized features to the recent
               | layoffs to the enormously stupid decisions (letting a
               | certificate expire that disabled almost all add-ons, the
               | Mr. Robot tie-in "experiment", pushing pocket, default
               | disabling userChrome.css, forcing auto-updates) they've
               | made in the past 5 years. But the core of Firefox is
               | good.
               | 
               | I would love a startup that builds an entire business off
               | of a fork of Firefox that's completely based on privacy,
               | perhaps with some non-invasive monetization like:
               | 
               | - a $5-20 one-time fee to use (with weak enforcement, a
               | la Sublime Text's "annoy you every 5 saves" model)
               | 
               | - a paid vs. free split where the free version of the
               | browser adblocks ads but replaces them with in-network,
               | verified safe ads
               | 
                | - a Linux kernel-style, donation-only model, with no
                | parent corporation
               | 
               | Because of all of the Firefox devs floating around who
               | just got laid off, you could even snatch up some
               | guaranteed capable talent already familiar with the code
               | base. And the best part? They're already vetted to not be
               | part of the management-industrial complex that's taken
               | over Firefox these days.
        
               | freeopinion wrote:
               | You mean like Brave, only built off Firefox?
        
               | dont__panic wrote:
               | Pretty much my vision. The largest reason I don't use
               | Brave is because it's built on top of Chromium, and
               | simply jumping on the Chromium train gives Google the
               | ability to dictate the future of the internet. There
               | needs to be a browser engine alternative, or
               | alternatively, maybe we need to spin Chromium out of the
               | Googleplex and let a free and open source group handle
               | future development independent of the Ad overlords.
        
               | shadowgovt wrote:
               | Sub-10% market share is a problem for them.
               | 
               | It's enough of a skew that when rendering errors crop up,
               | product managers have to decide how many eng-hours it's
               | worth to chase down the small implementation difference
               | between Chromium and Gecko to snag 1 in 10 potential lost
               | visitors, instead of throwing a UA-sniffing "best viewed
               | in Chrome" banner on the page and calling it a day.
               | 
               | Being a rarely-used browser is a negative feedback loop
               | for compatibility, regardless of what standards say
               | (because standards are often too loose to guarantee full
               | interoperability in all corner cases; they rarely give
               | performance constraints and often don't consider all
               | possible combinations of feature use).
        
               | mrec wrote:
               | Agree on all points. It's not quite as deadly as back in
               | the old browser wars, because HTML5 is _much_ better
                | specified than its predecessors, but it's definitely a
               | major problem.
        
               | fakedang wrote:
               | > I would love a startup that builds an entire business
               | off of a fork of Firefox that's completely based on
               | privacy, perhaps with some non-invasive monetization
               | like:
               | 
               | Don't know if this is 100% accurate, but Tor?
        
               | asddubs wrote:
               | >- a paid vs. free split where the free version of the
               | browser adblocks ads but replaces them with in-network,
               | verified safe ads
               | 
               | your ideal privacy focused firefox comes with a conflict
               | of interest built in
        
         | mixedCase wrote:
         | I think it's simply a case of this being the job of the user
         | agent instead of middle-boxes that manipulate traffic.
         | 
         | Of course, this makes it more annoying to deal with user-
         | hostile user agents such as some kinds of appliances and other
         | locked-down devices; in which case I would suggest "don't buy
         | user-hostile devices".
        
           | frank2 wrote:
           | >don't buy user-hostile devices
           | 
           | Such as the Chromebooks many on this site like to rave about.
        
           | wolco2 wrote:
           | Which will soon be don't buy appliances. Ever try to buy a
           | non-smart tv these days?
        
         | bashinator wrote:
         | I wonder how difficult it would be to use machine vision to
         | identify and block advertising elements.
        
       | The_rationalist wrote:
        | Unrelated: Chromium 86 brings the back-forward cache, which
        | makes back navigation instantaneous in many cases. This was, I
        | believe, the biggest optimization that was Firefox-only.
        
       | coddle-hark wrote:
       | I'm not an expert but QUIC doesn't seem like enough of an
       | improvement over TCP to warrant replacing it, especially given
       | that it's even more complex.
       | 
       | - 0-RTT handshakes are great but there's still the problem of
       | slow start.
       | 
       | - QUIC's congestion control mechanism is pretty much the same as
       | TCP's and doesn't perform particularly well over e.g. mobile
       | networks.
       | 
       | - Mandatory TLS means it's going to be a huge PITA if you ever
       | need to run a quic service locally (say, in a container).
       | 
        | - Having it in user space means there's a good chance we'll end
       | up with 100s of implementations, all with their own quirks. It's
       | bad enough trying to optimise for the three big TCP stacks.
        
         | tootie wrote:
         | Isn't there an exception explicitly for localhost?
        
           | coddle-hark wrote:
           | Not that I know of.
        
         | all_usernames wrote:
         | > Mandatory TLS means it's going to be a huge PITA
         | 
         | Let's Encrypt!
        
           | coddle-hark wrote:
           | Let's Encrypt can't generate certificates for localhost, much
           | less containers accessed via a local network.
           | 
           | Sure, you can get anything to work, but it WILL be a huge
           | PITA.
        
             | mahkoh wrote:
             | Let's Encrypt can generate certificates for your.domain.
             | your.domain can in turn resolve to localhost. I've been
             | using Let's Encrypt for websites behind a VPN for several
             | years.
        
               | coddle-hark wrote:
               | Yes, of course, and that might make sense in a production
                | setting. Those certificates expire after 90 days,
                | though. Do you really want to have to edit your DNS
                | records every 90 days just to run something locally?
        
               | neurostimulant wrote:
               | ACME supports multiple challenge types. The most popular
               | is the HTTP-01 challenge, but there is also the DNS-01
               | challenge (via TXT record) which allows validation
               | without exposing your webserver to the internet.
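                | 
                | For illustration, the check the CA performs amounts to
                | looking up a TXT record at _acme-challenge.<domain> and
                | comparing it against the expected key-authorization
                | digest. A minimal sketch of that lookup (assuming the
                | third-party dnspython package and a placeholder
                | domain):
                | 
                |     # pip install dnspython
                |     import dns.resolver
                |     
                |     def dns01_record_present(domain, expected):
                |         name = f"_acme-challenge.{domain}"
                |         answers = dns.resolver.resolve(name, "TXT")
                |         published = {txt.strings[0].decode()
                |                      for txt in answers}
                |         return expected in published
                |     
                |     # dns01_record_present("local.example.com",
                |     #                      "<key-authorization-digest>")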
        
               | qwertox wrote:
               | I'm also doing this.
               | 
               | I have a wildcard certificate for *.local.example.com
               | (and local.example.com), and a local DNS server which
               | resolves all the subdomains of local.example.com.
               | 
               | All local servers share the same certificate and it gets
               | refreshed automatically every 2 months. local.example.com
               | has a public NS entry to a custom nameserver which only
               | exists so that letsencrypt can perform the DNS validation
               | for that domain (and its subdomains).
               | 
               | This way I can use server-1.local.example.com,
               | server-2.local.example.com,
               | workstation-1.local.example.com internally with TLS.
        
               | coddle-hark wrote:
               | And we should be thankful that workflow is supported by
               | LE. I just don't think it's a reasonable expectation that
               | people will buy a domain and host a public nameserver so
               | they can run QUIC on localhost.
        
               | kaliszad wrote:
                | You can point a CNAME at something like
                | https://github.com/joohoi/acme-dns, which is meant just
                | for this. There are even providers (listed later in the
                | linked README) that you can use with it, so you don't
                | have to run additional software yourself.
        
               | mahkoh wrote:
               | Why would I have to edit dns records manually? I have a
               | cron job that tries to renew the certificate once a day.
               | There is no manual work involved.
        
               | coddle-hark wrote:
               | In that case you're using either HTTP or TLS
               | verification, which only works if you have a public
               | static ip/port that LE can access. You can't do that from
               | behind a NAT without port forwarding and you generally
               | don't want your local docker machines to be accessible to
               | the internet.
               | 
               | Unless your cron script is doing some funky DNS altering,
               | that is.
        
               | mahkoh wrote:
               | All of the DNS altering is performed automatically by
               | lego [1] which has support for a large number of DNS
               | providers.
               | 
               | [1] https://github.com/go-acme/lego [2] https://go-
               | acme.github.io/lego/dns/
        
               | coddle-hark wrote:
               | Oh, that's cool, I hadn't heard of lego before! But
               | still, you shouldn't need to buy a domain to do stuff
               | locally on your own device and it adds quite a bit of
               | complexity.
        
               | tialaramex wrote:
                | The only thing _we_ care about is that there's just one
                | authoritative name hierarchy. If somebody in the name
                | hierarchy wants to give you a name without selling it,
                | that would be totally fine.
               | 
               | I would totally be down with say, the US government
               | issuing citizens with a DNS name under their ccTLD
               | somewhere. Done your tax paperwork in reasonable time?
               | Your name is guaranteed by law to keep working for
               | another year. Maybe 1480219643.ny.citizen-names.us is
                | _ugly_ but it'd satisfy this problem for individuals.
               | Maybe they could bolt on a checkbox, $50 extra to the IRS
               | and you get to pick any as-yet unreserved legal name, or
               | they have rules like for license plates.
        
               | a1369209993 wrote:
               | > it'd satisfy this problem for individuals.
               | 
               | No, it wouldn't, because that involves interacting with
               | the public DNS hierarchy.
               | 
               | > you shouldn't need to buy a domain to do stuff _locally
               | on your own device_ [emphasis added]
        
               | majewsky wrote:
               | I do the following:
               | 
               | 1. have the domain in question resolve to a server with a
               | public IP
               | 
               | 2. have that server generate the certs with any ACME
               | client with HTTP challenge
               | 
               | 3. have that server ship the certs to the actual server
               | hosting the service via intranet
               | 
               | 4. in the intranet, have the domain resolve to the actual
               | server via /etc/hosts override
               | 
               | All of that is not that hard to set up even at scale with
               | proper config management tools. Having said that, I don't
               | actually use it for that many services myself. The most
               | significant one is LDAPS.
        
         | mcqueenjordan wrote:
         | No head-of-line blocking is a pretty significant improvement.
        
           | coddle-hark wrote:
           | Yes, but in reality you don't want your page to load until
           | all above the fold assets are loaded anyway. We went through
           | this a couple of years ago with increasingly complicated
           | techniques to avoid the dreaded "flash of unstyled content".
           | 
           | Head-of-line blocking is only a problem if you're
           | multiplexing connections. Just open one TCP socket per
           | request and you'll never have head of line blocking. There
           | are issues with opening multiple TCP sockets of course
           | (mainly Slow Start) but these aren't insurmountable.
        
             | TravelPiglet wrote:
             | You do want to download multiple assets like images in
             | parallel over the same connection since you can't open that
             | many connections per domain.
        
               | coddle-hark wrote:
               | That's not a fundamental limitation of TCP though. You
               | can theoretically open 4294967296 sockets between two
               | machines (with one IP address per machine) and a beefy
               | machine can handle >1M concurrent connections.
               | 
               | They've already proposed adding HTTPS and SVCB DNS
                | records for performance; it would be easy enough to
                | implement an ICANTAKEIT record that lets the browser know
               | it's OK to open as many sockets as it needs.
        
               | CydeWeys wrote:
               | It's a bigger lift than just switching from TCP to QUIC
               | though.
               | 
                | And there are other reasons you wouldn't want to use as
               | many connections as you have assets to download; each
               | connection has startup overhead.
        
               | coddle-hark wrote:
               | N connections have the same startup overhead as a single
               | connection though, since they're made in parallel.
        
               | a1369209993 wrote:
               | A single (TCP) connection has a single SYN and SYN-ACK
               | packet of startup overhead. N connections have N SYN
               | packets and N SYN-ACKs. That might still not be very
               | _much_ overhead, but it is more.
        
               | coddle-hark wrote:
               | You're right, of course, there's a (teeny tiny) cost for
               | the handshakes but what I meant was that the performance
               | overhead will be zero.
        
               | CydeWeys wrote:
               | The performance overhead will not be zero. You're
               | spending more computational power and memory opening up
               | and maintaining a larger number of connections, and
               | you're sending more packets and bits over the network in
               | total because each additional connection has non-zero
               | overhead.
        
               | gravypod wrote:
               | > (teeny tiny) cost for the handshakes
               | 
               | What happens when your ping time is >500ms?
               | 
               | > was that the performance overhead will be zero
               | 
               | I think QUIC is motivated by Google's expansion into
               | developing markets (China, India, etc). These areas are
               | primarily mobile markets that would like to consume the
               | same content we do. What might sound like small overhead
               | (SYN->SYN-ACK) actually takes half a second. If your
                | performance is measured in this way then N connections
               | times half a second is a long time. And, on mobile
               | networks, sometimes your TCP connection will drop even if
                | there's still data being sent back and forth (sometimes
                | you can see latency spikes up to 3 sec on really bad
               | networks which causes most software to timeout since it
               | thinks no data came in because of head-of-line blocking).
               | 
                | It's a truly miserable experience that QUIC or HTTP/3
               | can easily solve. Imagine how slow an `apt update && apt
               | upgrade` is and how much of that overhead can be dropped
               | by connecting to 1 mirror a single time and bulk
               | requesting 10 to 100 file transfers. This allows you to
               | maximize your network throughput. Think instead of how
               | slow this would be if you opened 10 to 100 connections
               | each of which might take 500ms per startup. In the worst
               | case that's a 500ms penalty versus a 5 _second_ penalty.
        
               | a1369209993 wrote:
               | > What happens when your ping time is >500ms?
               | 
               | Actually, that's not a problem with one reused connection
               | versus N simultaneous connections (as coddle-hark said,
               | the connections are made in parallel). It's a problem
               | with how many round trips it takes to set up each
               | connection.
        
             | gravypod wrote:
             | There are many things that use HTTP that are not web
             | browsers. Things like S3, for example, can make use of no
             | head of line blocking by allowing multiple file transfers,
             | multiple pagination requests, etc. Even APIs can benefit
             | from this by using bidirectional streaming.
             | 
             | Games can now also be built over a REST-like API using
             | separate channels for different priority messages:
             | movement, world state, chat, etc. Since you don't need to
             | worry about blocking you get way better performance. Almost
             | every game engine networking component essentially
             | implements TCP-over-UDP just to get application level
             | control over these functions.
        
               | coddle-hark wrote:
               | You can do all of that, except for the application level
               | control, by just opening multiple TCP sockets. QUIC is
               | just basically TCP over UDP in that it shares the same
               | semantics (guaranteed, in order delivery of a
               | bidirectional data stream with congestion control). Sure,
               | game engines don't always want all those attributes -
               | guaranteed delivery, for example - and will implement
               | their own thing over UDP that makes more sense for them.
               | QUIC is just as bad as TCP for those use cases though!
        
               | gravypod wrote:
               | The benefit is that you can open up as many channels
               | within a connection as you want so you obtain the same
               | abilities as you'd normally implement in a game engine.
        
         | faeyanpiraat wrote:
         | Or we'll have new tooling which makes it simple to enable SSL
         | locally.
         | 
         | Dev and prod should be as similar as possible anyway.
        
           | toomuchtodo wrote:
           | This should be the user's choice, not Google's.
        
             | OptionX wrote:
              | And it's not Google's choice, it's the IETF's, since
              | what's being discussed here is the IETF version, not
              | Google's. It was their choice to adopt QUIC for HTTP/3
              | and standardize it. If you don't like the IETF, then
              | that's another matter. Just point your pitchfork the
              | right way.
        
               | toomuchtodo wrote:
               | "The Internet Engineering Task Force is an open standards
               | organization, which develops and promotes _voluntary_ (my
               | emphasis) Internet standards, in particular the standards
               | that comprise the Internet protocol suite. " No one is
               | twisting the arms of Google or browser/tooling
               | implementers. Did I miss a flag where this can be
               | disabled on the browser? If so, that's my fault! But if
               | not, that's my point.
        
           | coddle-hark wrote:
           | Sure, I can create a self-signed certificate and copy it to
           | my docker container and update the app's config to use it and
           | add it to the system's certificate store and then add it to
           | every browser's certificate store every time I need to run
           | something over QUIC locally. And sure, maybe one day there
            | will be a tool that can do all that automatically.
           | 
           | But like... why?
        
             | gravypod wrote:
             | A lot of browsers already expect tls to enable specific
             | features. I've had things break between dev and prod
             | because of a lack of tls on the backend as well.
             | 
              | It's also one of those things that, once you sort it
              | out, is no longer a problem. Like "why do I need to set
              | up all this complex stuff to manage a dependency? Just
              | give me your source code and let me `gcc -o binary a.c
              | b.c ....`"
             | 
             | In reality once you solve a problem with an ergonomic
             | solution and the solution improves your alignment with
             | production and comes at no real cost it's a win/win/win.
        
               | tialaramex wrote:
               | Mostly browsers require _Secure Context_ rather than TLS,
               | so http://127.0.0.1/ which is a secure context is fine.
        
           | dnr wrote:
           | I started using mkcert for this. It's trivially easy to use
           | and fixes all the weird quirks of http vs https:
           | 
           | https://blog.filippo.io/mkcert-valid-https-certificates-
           | for-...
           | 
           | https://github.com/FiloSottile/mkcert
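            | 
            | As a sketch of how little is needed once the cert exists:
            | after mkcert has produced a locally-trusted certificate
            | and key (file names assumed here), Python's standard
            | library is enough to serve HTTPS on localhost. This covers
            | TLS for local dev, not HTTP/3 itself.
            | 
            |     # assumes cert.pem / key.pem, e.g. generated by mkcert
            |     import http.server, ssl
            |     
            |     ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
            |     ctx.load_cert_chain("cert.pem", "key.pem")
            |     
            |     httpd = http.server.HTTPServer(
            |         ("127.0.0.1", 4443),
            |         http.server.SimpleHTTPRequestHandler)
            |     httpd.socket = ctx.wrap_socket(httpd.socket,
            |                                    server_side=True)
            |     # browse to https://127.0.0.1:4443/
            |     httpd.serve_forever()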
        
         | an_opabinia wrote:
         | As another user stated, the biggest advantage of QUIC is how
         | much more sense it makes for wireless communications and mobile
         | devices. Mobile networks break TCP connections all the time, it
         | just violates too many useful assumptions of TCP.
         | 
         | HTTP/3's WebTransport (i.e. gRPC + sessions) defines a Session
         | far more logically than a TCP Connection + your ad-hoc in-
         | application Session does.
         | 
          | The predecessors of WebTransport already appear everywhere
          | in low-latency and realtime networked applications,
         | like games, chat and video, on mobile devices. Typically that
         | is achieved using UDP + an ad-hoc session management and
         | authentication protocol. Or worse, undoing TCP connections and
         | managing sessions on top of TCP connection breaks (as most dual
         | web + native applications do).
         | 
         | Why reinvent that clumsy stuff over and over again instead of
         | having it standardized? TCP is already dead on mobile, for many
         | of the most popular applications people actually use.
        
           | coddle-hark wrote:
           | My day job is providing internet connectivity to
           | trains/busses/boats in remote areas over cellular modems. I
           | couldn't agree more that TCP over mobile networks is a non-
           | starter, I just wish they'd done more than implement TCP over
           | UDP. In terms of mobile performance, QUIC isn't much better
           | than TCP over a Wireguard VPN.
        
             | Matthias247 wrote:
             | > QUIC isn't much better than TCP over a Wireguard VPN.
             | 
              | Why? One of the main intents of QUIC is, imho, to be
              | better in these scenarios, e.g. by handling
              | retransmission and loss detection in a different fashion
              | than TCP.
        
               | coddle-hark wrote:
               | It handles congestion control slightly differently since
               | it can estimate the Round Trip Time more accurately, but
               | it's essentially the same ACK based mechanism that TCP
               | uses. The current draft [0] from the QUIC working group
               | basically describes TCP NewReno (the authors say so
               | themselves).
               | 
               | [0] https://quicwg.org/base-drafts/draft-ietf-quic-
               | recovery.html
        
             | withinboredom wrote:
             | It's not hard to make a transparent proxy talk over udp to
             | a POP and avoid TCP issues altogether. I did this for a
             | satellite based internet service I ran back in 09, mostly
             | for fun and profit. Never thought it could be something
             | worth selling because it was so simple.
        
               | coddle-hark wrote:
               | Nope, that's the easy part! You don't avoid TCP issues
               | altogether though, all the usual behaviours of TCP are
               | still there with respect to packet loss, latency, jitter
               | etc.
        
         | navaati wrote:
          | To be fair, the real problem is not mandatory TLS, it's
          | mandatory WebPKI certs: self-signed certs are not a problem
          | in a local docker container, and are no worse than
          | unencrypted.
        
           | coddle-hark wrote:
           | You still need to add those certs to a bunch of different
           | places (browsers use their own certificate stores).
           | 
           | More importantly, I don't think it's a good idea to teach
           | people to add self signed certs to their certificate stores
           | willy-nilly. Seems like a good way to get pwned.
        
           | a1369209993 wrote:
            | Well, there _are_ other problems (even requiring encryption
            | at all means some low-power clients can't use it because
           | they lack the processing power), but X.509/WebPKI is the main
           | one, followed by ciphersuite proliferation requiring dozens
           | of times as much audited cryptographic code.
        
           | cma wrote:
           | Isn't there a performance burden?
        
             | OptionX wrote:
             | Supposedly not (vs TCP/TLS), at least not in the Google
             | implementation (haven't read anything about the IETF
             | version). And with 0-RTT, when it comes out, you gain some
             | performance back anyway by not having to re-handshake on
             | drops.
        
             | loeg wrote:
             | Some, but maybe less than you think if you're using OpenSSL
             | and a computer less than ten years old (AES-NI and maybe
             | PCLMULQDQ for GCM). Often something else (NIC, network)
             | will be the bottleneck.
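              | 
              | A quick sanity check of that claim (a rough sketch,
              | using the third-party "cryptography" package, which
              | calls into OpenSSL): bulk AES-GCM on an AES-NI machine
              | usually runs at a few GB/s per core, well above most
              | network links.
              | 
              |     # pip install cryptography
              |     import os, time
              |     from cryptography.hazmat.primitives.ciphers.aead import AESGCM
              |     
              |     aead = AESGCM(AESGCM.generate_key(bit_length=128))
              |     chunk = os.urandom(1 << 20)      # 1 MiB
              |     
              |     start = time.perf_counter()
              |     for _ in range(256):             # 256 MiB total
              |         aead.encrypt(os.urandom(12), chunk, None)
              |     secs = time.perf_counter() - start
              |     print(f"~{256 / secs:.0f} MiB/s AES-128-GCM")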
        
         | RedShift1 wrote:
         | What does QUIC solve then?
        
           | Wohlf wrote:
           | Removes the overhead and blocking of TCP while providing
           | redundancy for the packet loss of UDP.
        
             | Matthias247 wrote:
             | It also allows for multiple concurrent encrypted streams
             | while just doing one handshake.
             | 
             | HTTP/2 also allowed this, but suffered from head-of-line
              | blocking between those multiplexed streams. QUIC
              | streams are independent. One stream can still make progress
             | if datagrams which contained data for another stream got
             | lost.
        
               | coddle-hark wrote:
               | Multiple concurrent encrypted streams with a single
               | handshake isn't much of a win though. N concurrent TLS
               | connections won't be any slower than a single TLS
               | connection since the requests can be done in parallel.
        
               | tick_tock_tick wrote:
               | In a world with variable ping times and packet loss that
               | is a bold lie.
        
               | coddle-hark wrote:
               | How so? Surely the variable ping times and packet loss
               | affect both scenarios equally?
        
             | coddle-hark wrote:
             | That's not quite right. It greatly reduces the handshake
             | time for new connections, but it adds more network overhead
             | (more bytes sent) than TCP. TCP doesn't "block" any more
             | than QUIC.
        
       | djhaskin987 wrote:
       | Does anyone else think it's weird/futile that they're building a
       | protocol over UDP?
       | 
       | QUIC is disabled on our corporate network, simply because the
       | network firewall/SSL inspector can't see what's going on, and
        | can't regulate traffic, so it just blocks all UDP. Our internet
       | still works because sites see that QUIC doesn't work and fall
       | back to TCP. Heaven forbid the entire web moves to QUIC or we'd
       | be in trouble.
        
         | sammy2244 wrote:
         | I think its weird/futile to block all UDP traffic.
        
           | gravypod wrote:
            | It's strangely common. As someone who has worked at a few
            | IoT companies that ship devices that run on other people's
            | networks, the amount of enterprise gear that just drops all
            | UDP by default is astonishing. Every single IoT device I've
           | worked on has had some custom, hand rolled, NTP-over-HTTP to
           | get around RTC failures.
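            | 
            | The general shape of that hack (a sketch, not anyone's
            | actual implementation; example.com is a placeholder): pull
            | the Date header from any web server and use it as a coarse
            | time source when the RTC is wrong and UDP/NTP is blocked.
            | Plain HTTP is often used here because a badly wrong clock
            | can also break TLS certificate validation.
            | 
            |     import urllib.request
            |     from email.utils import parsedate_to_datetime
            |     
            |     def coarse_time(url="http://example.com"):
            |         with urllib.request.urlopen(url, timeout=10) as r:
            |             # HTTP Date headers have ~1 s resolution
            |             return parsedate_to_datetime(r.headers["Date"])
            |     
            |     print(coarse_time())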
        
         | agwa wrote:
         | Frankly, that would be your problem. Enterprise intransigence
         | should not hold back the rest of the Internet.
        
           | djhaskin987 wrote:
           | My point is that it is a lot of people's problem. Mine isn't
           | the only company using large-scale, off-the-shelf, really-
           | very-normal firewall and network safety equipment and
           | software. Just because you work at a start up, doesn't mean
           | most people do. Lots of us work at banks, financial
           | institutions, insurance companies. These companies make a lot
           | more money than the average start up, and are risk averse and
           | care a lot about compliance. They house a lot of "dark
           | matter" developers and technology professionals.
           | 
            | The whole way this debate is going irks me. It's like there
            | are two camps: those who think the whole world is
            | smartphones being used by people way out in the desert, and
            | everyone else.
           | There are lots and lots of devices that have high speed, low
           | latency connections. The people that use those devices are
           | people, too. We should not over optimize for the remote smart
           | phone user over everyone else.
        
             | agwa wrote:
             | Enterprises can still have all this stuff that is
             | supposedly so important to them, like network interception
             | and "compliance". But they might have to upgrade their
             | software, insist that their vendors implement standards
             | correctly, or maybe change the way they do some things. As
             | you point out, these companies make more money than anyone
             | else. That means change shouldn't be a problem for them.
             | But of course they would prefer to externalize their costs
             | onto less-resourced companies and individuals. It's
             | everyone else's job to make sure they can't get away with
             | that.
        
         | judge2020 wrote:
         | TCP+UDP will probably be the norm for a decade once it's rolled
         | out, and with this being announced for Chrome, it'll only show
         | the market that there's demand for UDP packet inspection - not
         | that that's the best solution to DLP; endpoint security and MDM
          | give the most insight and solve the problem of middlemen [who
         | might be a trusted local network operator, or might be a
         | government-controlled ISP] being able to inspect/limit traffic.
        
       | fenollp wrote:
        | While it seems good to have a more efficient transport, I
        | can't make sense of this:
        | 
        | > Since the subsequent IETF drafts 30 and 31 do not have
        | compatibility-breaking changes, we currently are not planning
        | to change the over-the-wire identifier.
       | 
        | Is there slow-moving internal software at Google that relies on
        | this nonce? This looks like the kind of thing that some clients
       | will tend to rely on (for a reason yet unknown). That's how
       | clients grow the standard in unintended ways, no?
       | 
        | On another note:
        | 
        | > 3. optionally, the trailer field section, if present, sent
        | as a single HEADERS frame.
       | 
       | I see you're paving the way for gRPC on the Web (of browsers) by
        | adding trailers (headers sent after the body), which is not
        | supported today for HTTP/1 or /2 by at least the top 3 browser
        | vendors by volume.
       | 
       | I'm divided: I'd be glad to get rid of grpc-gateway and
       | websockets but isn't proto-encoded communication bad for the open
       | Web /in principle/? Maybe it's only a tooling problem.
        
         | gravypod wrote:
         | I think there's a large volume of tools that can be built that
         | will make things much more discoverable as we go along this
         | route. Things like the gRPC reflection, health checks, etc can
          | all be instrumented in tooling with no guesswork as to how to
         | directly talk to any API that implements it. No guessing if
         | it's `GET /healthz` or `GET /healthcheck` etc.
         | 
         | There's a lot of magic you can do with protos. At my current
         | company we're even generating forms/UIs entirely off of proto
         | message definitions for things like configs. Engineers no
         | longer need to think about how to make something work cross
         | language, manually wiring up a UI, etc.
         | 
          | I can't wait to see what doors this opens up for gRPC on the
         | browser as that will bring many more OSS devs into the
         | ecosystem.
        
       | gravypod wrote:
        | Does anyone know when HTTP/3 is going to get wider support in
        | gRPC? There's an open issue in the GitHub project about this [0].
        | In IoT use cases where you want to do bi-directional streaming
        | of data to/from a location, getting rid of some head-of-line
        | blocking will make me a happy camper.
       | 
       | 0 - https://github.com/grpc/grpc/issues/19126
        
       | xfalcox wrote:
       | Relying on Alt-Svc for HTTP/3 is really bad, so I hope Chromium
       | is following this with https://blog.cloudflare.com/speeding-up-
       | https-and-http-3-neg... right away.
        
       | [deleted]
        
       | The_rationalist wrote:
        | Where are the benchmarks for standard tasks?
        
       | ohnoesjmr wrote:
       | Implementing QUIC is not trivial, so I suspect it will be years
       | until it gets reasonable adoption in standard frameworks and
       | languages that prefer not to interop with C.
        
         | amq wrote:
         | I don't think HTTP/3 is even intended as a general-purpose HTTP
         | replacement. Even HTTP/2 is overly complex, so HTTP/1 will most
         | likely stay as a fallback for the foreseeable future.
        
           | chrismorgan wrote:
           | Sure it is (though it depends a little what you mean).
           | 
           | I expect HTTP/2 usage to disappear, leaving HTTP/1.1 and
           | HTTP/3 as the main versions in use. For HTTP traffic (as
           | distinct from other upgraded protocols like WebSocket) HTTP/2
           | is mostly better for users than HTTP/1.1, but TCP head-of-
           | line blocking is its one particularly serious problem. For
           | users, I would characterise HTTP/3 as generally just the best
            | of both worlds, and once you have it there, there's no reason
           | at all for HTTP/2.
           | 
           | HTTP/1.1 will remain popular indefinitely for compatibility
           | with older servers and clients that aren't being updated to
           | the latest stuff, and for HTTP upgrading mechanisms.
        
             | anderspitman wrote:
             | I'm excited about HTTP/3, but I sincerely hope HTTP/1.1
             | never goes away. There's something important about being
             | able to write a compliant web server in a couple lines of
             | code using nothing but a TCP socket.
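              | 
              | For the sake of illustration, this is roughly what "a
              | couple lines over a TCP socket" looks like -- a toy
              | sketch, not fully spec-compliant or production-ready:
              | 
              |     import socket
              |     
              |     srv = socket.create_server(("127.0.0.1", 8080))
              |     while True:
              |         conn, _ = srv.accept()
              |         with conn:
              |             conn.recv(65536)  # read and ignore request
              |             body = b"hello\n"
              |             head = ("HTTP/1.1 200 OK\r\n"
              |                     "Content-Type: text/plain\r\n"
              |                     f"Content-Length: {len(body)}\r\n"
              |                     "Connection: close\r\n\r\n").encode()
              |             conn.sendall(head + body)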
        
               | steveklabnik wrote:
               | While this is true... it's also not true.
               | 
               | Like, the final chapter of the Rust book does this. We
               | implement a _really_ minimal HTTP server.
               | 
               | Turns out that, even though what we do is spec compliant,
               | some versions of Chrome don't properly deal with the
                | responses we give, which has led to errata.
               | Implementing the web means knowing the implementation
               | details of the major implementations, and following them,
               | sadly. It's not actually that simple.
        
               | anderspitman wrote:
               | In a way, you're actually supporting my deeper point,
               | which is that the web (and more particularly the
               | internet) is bigger than google, even though they're
               | dominant at the moment. If Chrome doesn't support my
               | compliant minimal web server, that doesn't mean it isn't
               | useful, or even that I should have to change it.
               | 
               | HTTP is the lingua franca of the internet. We need to
               | keep a simple, text-based version of it around, so that
               | the barrier to entry for competing with the big boys
               | remains reasonable. We've already lost that fight with
               | browsers, but HTTP is still in a pretty good place. Even
               | HTTP/3 is reasonable for a small startup to implement.
               | And if you can't manage that, you can implement HTTP/1.1
               | and sacrifice some performance for simplicity.
        
               | steveklabnik wrote:
               | > If Chrome doesn't support my compliant minimal web
               | server, that doesn't mean it isn't useful, or even that I
               | should have to change it.
               | 
               | This is probably where we differ; I pretty much think
               | that you do have to do this in this situation. Or at
               | least, like, sure, you don't have to, but you lose a
               | significant amount of audience, which is one of the major
               | points of bothering with the web in the first place.
        
               | anderspitman wrote:
               | I mean I certainly take your point. If I want the biggest
               | audience today, I need to support chrome. But if that's
               | my goal I'm probably not writing my own server anyway.
               | 
               | In terms of building things today, I'm more saying that
               | if you need an internet protocol for moving data around,
               | HTTP is a pretty dang good choice. It has some cruft, but
               | if you were to start making a replacement from scratch
               | you would end up with a large subset of HTTP/1.1. That's
               | not true of HTTP/3. You simply don't need the complexity
               | for a large number of useful tasks.
               | 
               | Now, in terms of the future. I think that the internet
               | will long outlive the web (at least as we currently
               | conceive of the web), and I think HTTP as a transport
               | layer will outlive it as well. In that future, I want
               | HTTP/1.1 to still be a thing.
        
         | sayrer wrote:
         | Google first shipped QUIC in 2013.
         | 
         | https://blog.chromium.org/2013/06/experimenting-with-quic.ht...
        
         | steveklabnik wrote:
         | There are like, three different QUIC implementations in Rust.
         | And one of them is by (and being used by) Cloudflare, so it's
         | going to have pretty significant production use. (The others
         | may be as well but I am less familiar with where they're used)
         | 
         | * https://github.com/cloudflare/quiche
         | 
         | * https://github.com/djc/quinn
         | 
         | * https://github.com/mozilla/neqo/
        
         | becauseiam wrote:
         | There are multiple implementations and their interoperability
         | is tested [1], with some implementations already having
         | bindings to higher level languages (aioquic) or into existing
         | servers (nginx).
         | 
         | 1: https://interop.seemann.io
        
       | Aachen wrote:
       | Has the amplification attack been solved recently? Last I checked
       | the spec still said "at most 3x amplification" (which I expect
       | will be enough for attackers) and the server implementation that
        | I was testing went _well_ beyond that. If that's not solved and
       | this gets deployed on a few big networks, I can already tell you
       | what the next popular protocol will be for taking down websites.
        
         | coddle-hark wrote:
         | No, it hasn't been solved and it won't be because it can't be
         | solved without adding more round trips to the handshake. It'll
         | be down to firewalls to stop this from happening.
        
       | jeffbee wrote:
       | Anyone know why there's no new URL scheme for HTTP/3? We didn't
       | rely on Alt-Svc headers for switching to HTTPS. We gave it its
       | own scheme. Why aren't we doing that for HTTP/3?
        
         | mercora wrote:
          | Not sure about any details, but I would guess it's because
          | for HTTPS there are security implications to doing
          | negotiation in the clear, whereas this probably doesn't have
          | those.
        
         | hlandau wrote:
         | https:// is a scheme, not a protocol. In retrospect, it
         | probably should have been called "www://" and "wwws://" rather
         | than http:// and https://, because a single scheme can
         | potentially be resolved via several different protocols.
         | Consider that http:// does not necessarily imply unencrypted
         | access, due to the opportunistic encryption specification
         | (though this encryption is not MitM-secure).
         | 
         | Also note that because http:// and https:// are different
         | schemes, there is no requirement that they serve the same
         | website. http://example.com/foo and https://example.com/foo
         | could be completely different resources, or completely
         | different websites. An opportunistically encrypted load of an
         | http:// URL still needs to load the http:// website, not the
         | https:// one. Though opting in to HSTS eliminates this
         | distinction.
         | 
          | For that matter, HTTP/1.1 allows a full URL to be specified
          | in the request line, in addition to the traditional "Host"
          | header. This is usually only used when talking to HTTP
          | proxies:
          | 
          |     GET https://example.com/foo HTTP/1.1
          |     ...
         | 
         | but what is interesting is since this also includes the scheme,
         | it potentially allows you to do something very peculiar:
         | theoretically you could access an https:// logical resource
         | over an unencrypted HTTP/1.1 connection, e.g. by telnetting to
         | example.com:80 and issuing "GET https://example.com/foo
         | HTTP/1.1". It would of course be insane to support this, but
         | _if_ one disregards the fact that https:// is supposed to
         | invariably imply secure communication, theoretically even
         | https:// resources could be loaded unencrypted, just as http://
         | resources can be loaded encrypted using opportunistic
         | encryption.
         | 
         | In short: scheme and protocol are different things, and for
         | good reason.
         | 
         | URIs are resource identifiers. They exist to identify a
         | resource, not how to access it. Tying those resource
         | identifiers to a means of resolution would unnecessarily couple
         | it to a resolution mechanism and thereby reduce the
         | universality and permanence of URIs. URIs which are URLs are
         | closer to describing a means of access but fundamentally
         | there's still an interest in providing enough degrees of
         | indirection that the longevity and permanence of an URL is
         | maximised.
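          | 
          | Reproducing that thought experiment with a few lines of
          | Python rather than telnet (purely illustrative; a real
          | origin server will typically reject or ignore an https
          | scheme arriving over a plaintext port-80 connection):
          | 
          |     import socket
          |     
          |     req = (b"GET https://example.com/foo HTTP/1.1\r\n"
          |            b"Host: example.com\r\n"
          |            b"Connection: close\r\n\r\n")
          |     
          |     with socket.create_connection(("example.com", 80)) as s:
          |         s.sendall(req)
          |         print(s.recv(4096).decode(errors="replace"))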
        
           | social_quotient wrote:
           | Makes it sound like https:// is the new USB3
        
         | floatingatoll wrote:
         | Users don't need to care about the difference between HTTP,
         | HTTP2, and HTTP3. It's relevant only to browser engineers and
         | server operators.
         | 
         | There's no reason to ask everyone on the planet to change
         | "https:" to "http3:" when we still can't even reliably complete
         | the "http:" to "https:" transition. We've learned that users
         | simply do not care at all about "http:" or "https:". We
         | shouldn't have to ask people to update their QR codes for HTTP3
         | just because we updated the protocol.
         | 
         | Should we have introduced ftpp:// for passive FTP to
         | distinguish it from classic FTP? No: it would only confuse
         | users, cause frustration for pages linking to FTP urls, and
         | both servers and clients are perfectly capable of negotiating
         | this silently without surfacing it to the user.
         | 
         | In general the pushback with HTTP3 is that site operators would
         | rather not have to do the extra work to enable it. But from a
         | user's perspective, there is no work to enable it. It's just
         | https:// like always and one day it gets faster. Sites that
         | refuse to do the extra work to turn it on will be visibly
         | slower than their peers once it's widespread enough.
         | 
         | If you believe that HTTP3 should have had its own URI spec, you
         | would have needed to make that case a few years ago to the
         | committee implementing it; it's not going to change now. I
         | assume their discussion about that is in the archives, and I
         | expect it boils down to "this is not relevant to users, they
         | should not be expected to care".
        
           | jeffbee wrote:
           | But users don't type or even see the URI scheme any more. It
           | is also an implementation detail.
           | 
           | I just think it's weird that we've left ourselves without any
           | way to send a client directly to an HTTP/3 site. Instead they
           | _must_ establish a TCP connection to the site and be
           | redirected via Alt-Svc headers to HTTP/3.
        
             | toast0 wrote:
             | Elsewhere in the thread, it sounds like there's a DNS way
             | to suggest http/3. If adoption is high, clients might end
             | up doing something like happy eyeballs; send tcp syns as
             | well as the udp equivalents, and use whichever comes back
             | first or use them all but prefer the fastest one, etc.
        
             | ziml77 wrote:
             | That's not true unless the server is set up to handle
              | redirecting. Every time I go to unifi.home.arpa:8443 I get
              | the message:
              | 
              |     Bad Request
              |     This combination of host and port requires TLS.
             | 
             | I have to explicitly type the https:// for it to work.
        
             | bastawhiz wrote:
             | What's the backwards compatibility story for that? Links
             | just don't work for older browsers? Why spend all the time
             | making H3 work seamlessly with H2 if we're only going to
             | introduce a new scheme that only works in new HTTP clients?
        
             | vitaliyf wrote:
             | I am not going to claim myself if this is "right" or
             | "wrong" but there is some work on that being done via
             | https://datatracker.ietf.org/doc/draft-ietf-dnsop-svcb-
             | https... - see https://blog.cloudflare.com/speeding-up-
             | https-and-http-3-neg...
        
           | logicOnly wrote:
           | One of my biggest complaints is how https flags text based
           | websites for being dangerous.
           | 
           | What danger could possibly happen if I'm reading about a
           | Physical Therapy clinic?
           | 
           | They don't take credit cards, there's no information for me
           | to enter on the website.
           | 
           | But unless the Physical Therapist knows how to manage the
           | server, they get this scary warning.
           | 
           | Maybe it isn't a big deal to US healthcare because they make
           | lots of money. But I imagine there are others that don't have
           | the technical abilities to upgrade to https. Could your
           | grandma do it for her sewing store?
        
             | iso1210 wrote:
             | https doesn't flag that.
             | 
             | Your browser might flag a http server as dangerous (mine
             | doesn't - it just has a padlock with a line through), but
             | you're leaking information to your ISP that you are reading
             | about a Physical Therapist.
             | 
             | If your site tries to do https and fails (self signed or
             | invalid certificate) it will rightly flag up that it's a
             | problem.
             | 
             | My grandma would not be able to manage a server on the
             | internet, let alone responsibly manage it. If you can't set
             | up a modern server with https then you shouldn't be running
             | a server on the internet at all.
        
               | kube-system wrote:
               | Chrome and derivatives display "! Not secure" in the
               | omnibar, which is presumably what they are referring to.
        
               | iso1210 wrote:
                | Twenty years ago the world thought that IE6 was
                | synonymous with the internet.
               | 
               | Thank god we moved on from that.
        
               | mumblemumble wrote:
               | Assuming your physical therapist has their own website
                | with its own domain, and not just, say, a Facebook page,
               | you're leaking that information to your ISP with https,
               | too. https doesn't hide the domain you're talking to,
               | just the specific URLs within that domain.
        
               | iso1210 wrote:
                | We have SNI. Now, sure, your therapist may run their
                | own VM on its own IP address, but that's not very
                | likely.
        
               | mumblemumble wrote:
               | SNI doesn't encrypt the desired hostname in the payload
               | of the initial connection. It's still plainly visible to
               | an eavesdropper. They can also observe un-encrypted DNS
               | lookups.
        
             | rakoo wrote:
             | https doesn't care about the content, it's the browser that
             | tells the user that his communication is in the clear, and
             | there's no assurance as to who the user is talking to.
             | 
             | > What danger could possibly happen if I'm reading about a
             | Physical Therapy clinic?
             | 
             | Depends what is a "danger" to you. Your insurance learning
             | you're having issues and deciding to increase the amounts
             | you owe them, because they saw that your back is aching, is
             | definitely a problem.
             | 
             | > But unless the Physical Therapist knows how to manage the
             | server, they get this scary warning.
             | 
             | Wrong. In 2020, if the Physical Therapist can have an http
             | website, they can have an https website with a valid
             | certificate.
             | 
             | It's the same for your grandma's store. Going from no website
             | to http is a much much bigger step than going from http to
             | https.
        
               | lez wrote:
               | Using the same logic, https://facebook.com is also a
               | danger, since we know they routinely sell our personal
               | data to advertisers and probably to anyone willing to
               | pay for it. Or an acquaintance could be an insurance
               | agent... Not to mention the NSA as a source of danger
               | in this sense.
               | 
               | The real danger I see is the disappearance of lots of
               | quality, not-for-profit content that reminds me of the
               | good old Internet, replaced by shiny new https
               | publishers, of which 90% belong to the same owners.
               | That's the real danger to society. The long tail is
               | disappearing, while commercial interests, and the
               | manipulation that comes with them, sneak in everywhere.
        
             | floatingatoll wrote:
             | Grandma's sewing store would be hosted on a VPS or
             | Squarespace, and would have a checkbox to provision "secure
             | site encryption" for her without any further work required
             | on her part. (They may charge her money for the
             | certificate, if they're a scummy VPS.)
             | 
             | This ship has sailed, though: "plaintext HTTP" is available
             | only with HTTP/0.9 and HTTP/1. This article is discussing
             | HTTP/3, which carries forward the wire-encryption
             | requirement that HTTP/2 argued over for a long time (the
             | spec still allows cleartext h2c, but browsers only ever
             | shipped HTTP/2 over TLS, and HTTP/3 makes TLS mandatory).
             | 
             | (Incidentally, my grandmother was a Smalltalk and 6502
             | assembly programmer of educational software in the 80s. She
             | let me read her technical books at age 5. Probably best to
             | find another example, such as "non-technical site owners".)
        
             | derefr wrote:
             | Just because the original site was simple, doesn't mean
             | that the thing an MITM replaces it with needs to be. Sites
             | aren't apps; sites that do little don't "install" into the
             | browser with an intentionally-limited set of permissions,
             | such that an attacker would then be limited in their attack
             | by those permissions. An MITM can replace the site with
             | basically whatever they like.
             | 
             | I can't find the example (it was linked on HN a few years
             | back), but a clear demonstration of this is a case where
             | the MITM can serve a phishing page that _initially_ appears
             | to be the original site you've hijacked (so the user
             | trusts it, and leaves it alone); but later, _while the page
             | is not visible_ (for example, when the user switches away
             | from that tab), the page will switch over to showing a
             | Facebook login screen or something.
             | 
             | Since the website isn't a known "malicious site" (so no
             | alert from the browser), the user probably won't bother to
             | look at the URL bar. They'll just think they left Facebook
             | open in a tab, and it logged them out for inactivity. So
             | they'll "log back in."
        
               | jk700 wrote:
               | And MITM is still possible for https, just a bit
               | different with two points of interception, rather than
               | one, see my other comment [1].
               | 
               | [1] https://news.ycombinator.com/item?id=24711111
               | 
               | EDIT: what are the downvotes for? If for disagreement,
               | this only shows how poorly people understand the
               | security of https.
        
             | anothercommnt wrote:
             | Yes. There is so much groupthink on this issue. Know that
             | you are not alone.
        
             | xahrepap wrote:
             | The problem is you can't trust what you're reading to be
             | from the source. Maybe the site doesn't take credit cards.
             | But after a MITM it might suddenly start taking credit
             | cards. And other things. Whatever the attacker wants! All
             | in the seeming name of the origin.
        
               | jk700 wrote:
               | MITM like that still works for most https websites
               | because of the automatic domain validation done by
               | ACME-based certificate authorities. The only caveat is
               | that the attacker now has to get a valid certificate
               | first, which means MITMing the route from the
               | datacenters where the CAs run their validators to the
               | datacenter where the website is hosted. For most
               | websites today that is likely a long route crossing
               | many countries. After that, the attacker gets exactly
               | the same capabilities as with MITMing http.
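               | 
               | For context, this is roughly what the HTTP-01
               | check from RFC 8555 amounts to on the CA's side
               | (a hedged sketch; the token value is made up),
               | which is why answering it only requires
               | controlling the path of the CA's plain-http
               | request:
               | 
               |     import urllib.request, urllib.error
               | 
               |     # HTTP-01: the CA fetches this URL over plain
               |     # http and compares the body with the key
               |     # authorisation it expects from the applicant.
               |     domain = "example.com"     # target site
               |     token = "made-up-token"    # illustrative only
               |     url = ("http://" + domain +
               |            "/.well-known/acme-challenge/" + token)
               | 
               |     # Whoever answers this unauthenticated GET
               |     # "proves" control of the domain, which is
               |     # why an on-path attacker near the origin
               |     # could answer it, too.
               |     try:
               |         print(urllib.request.urlopen(url).read())
               |     except urllib.error.HTTPError as err:
               |         print("no challenge served:", err.code)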
        
               | dfabulich wrote:
               | No, you're forgetting about Certificate Transparency,
               | which protects against this attack.
        
               | jk700 wrote:
               | It doesn't. Pretty much no one monitors CT logs, and
               | for those who do there is no way to prove misissuance
               | of a domain-validated certificate and get it revoked;
               | they don't have its private keys.
        
               | suprfsat wrote:
               | Feb 19, 2020: Multi-Perspective Validation Improves
               | Domain Validation Security
               | (https://letsencrypt.org/2020/02/19/multi-perspective-
               | validat...)
        
               | jk700 wrote:
               | That's the thing: they don't seem to bother actually
               | addressing the problem, and assume no interception
               | capability other than hijacking BGP. But we are talking
               | here about exactly that, i.e. if you can intercept
               | traffic in any other way somewhere close to a website
               | or its nameservers, you can get a valid certificate and
               | use it to MITM its visitors anywhere in the world where
               | you can also intercept traffic. And using big cloud
               | providers for validation to "improve" security still
               | likely pushes traffic from all of them through some big
               | IX before it reaches the datacenter hosting the
               | website, so at worst it only adds a couple more points
               | where an attacker has to intercept traffic to get the
               | certificate.
               | 
               | This is where all that centralization is really bad for
               | security. It basically makes https a protection only
               | against low-effort MITM by last-mile ISPs.
        
               | [deleted]
        
             | inglor wrote:
             | HTTPS is free (with let's encrypt) and useful for privacy.
             | 
             | For example: No one is stopping someone from intercepting
             | your request to your clinic and adding a form asking for
             | personal details - and then using those details to "restore
             | password" - or simply ask for your CC number. You might not
             | fall for it but are you as confident in all other patients?
        
               | logicOnly wrote:
               | It's not free when you need to pay someone to update your
               | website.
               | 
               | Grandma might be able to edit HTML, but "what's sudo?
               | What's ssh? This one website says I need to pay for
               | certs?"
        
               | robertnn wrote:
               | So what are you saying? That we should sacrifice security
               | (as explained in sibling comments) in order to allow
               | grandmas to create their own web pages?
        
               | ksaj wrote:
               | It all sounds ageist and misogynistic to me. I work with
               | a few grandmas who are right there on top of the newest
               | technologies going. One is a scientist working on a hell
               | of a cool cloud product. Old ladies aren't the model of
               | stupidity, as this thread might lead someone to believe.
        
               | fenesiistvan wrote:
               | Yes, exactly. Grandma should be able to publish easily.
               | Now grandma doesn't publish anything, because of the
               | unnecessary complication pushed by google (https, and
               | now http3, which might be enforced a few years later).
               | These have little to do with security and performance;
               | it's mostly the google ad business that gets any
               | revenue from all these complications.
        
               | kube-system wrote:
               | Grandma also pays her registrar and ICANN for the domain
               | every year too. Free was never the price of having a
               | website, and it's not a reasonable expectation today.
               | As with literally anything else that needs maintenance,
               | if you can't maintain it
               | yourself, you have to pay someone else to maintain it.
        
               | iso1210 wrote:
               | > with let's encrypt
               | 
               | My biggest concern as http becomes less and less
               | acceptable is that practically the entire internet relies
               | on lets encrypt to run.
        
               | klodolph wrote:
               | Not to run, just to keep running long-term. If Let's
               | Encrypt exploded you would have less than 30 days to get
               | it running again. But that's not such a short time.
        
               | vandal_at_your wrote:
               | Nope. https, hybrid crypto and public/free CAs are the
               | largest backdoor into internet traffic ever
               | (accidentally) devised. The standardization on https
               | for everything (including alternative app protocols
               | like DNS) is very apparently an info grab.
               | 
               | Symmetric crypto is the only answer; people (Schneier,
               | DJB) have been trumpeting this for years.
        
               | iso1210 wrote:
               | If I connect to a server via https and see its
               | certificate, I am confident that my communication is
               | secure between me and the server hosting that
               | certificate.
               | 
               | To validate the person holding that certificate is who
               | they claim to be, how can I do that? By either getting
               | their certificate out of band (impractical), or trusting
               | an intermediate.
               | 
               | Lets encrypt doesn't make it any easier or harder to get
               | an invalid certificate.
               | 
               | Now if the server wants me to authenticate, https has
               | that built in. I can present my own client certificate,
               | and if it's signed by somewhere the server trusts, it
               | knows who I am. But how would a random server
               | authenticate who I am? I'd personally rather use
               | certificates or ssh keys or similar than usernames and
               | passwords, but that's too complex for the average person.
               | 
               | Clearly I could have lost control over the key to my
               | certificate, or the server could have lost theirs,
               | there's not much you can do about that, no matter what
               | type of authentication system you use.
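               | 
               | A hedged sketch of that built-in client
               | authentication, server side, with Python's ssl
               | module (the file names are placeholders, not
               | anything from this thread):
               | 
               |     import socket, ssl
               | 
               |     # Present our own cert and require a client
               |     # cert signed by a CA we trust.
               |     ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
               |     ctx.load_cert_chain("server.pem", "server.key")
               |     ctx.verify_mode = ssl.CERT_REQUIRED
               |     ctx.load_verify_locations("client-ca.pem")
               | 
               |     srv = socket.create_server(("0.0.0.0", 8443))
               |     tls_srv = ctx.wrap_socket(srv, server_side=True)
               |     conn, addr = tls_srv.accept()
               |     # getpeercert() is the verified client
               |     # identity; no username or password involved.
               |     print(conn.getpeercert()["subject"])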
        
             | Diederich wrote:
             | A real danger of not going with TLS is MITM attacks.
             | Specifically, injection of hostile JS, CSS or whatever that
             | can be used to penetrate your browser and/or local system.
        
             | eldridgea wrote:
             | There are a few areas of concern here:
             | 
             | First, getting authentic data from the provider so that you
             | know what they published is what you're reading.
             | 
             | But there are also links and embedded scripts to consider.
             | Since HTTP can be (relatively) trivially MITMed, it not
             | only exposes end users to manipulated info, it also has
             | them running Javascript that isn't what the site owner
             | intended.
             | 
             | In fact that's exactly how China attacked GitHub recently:
             | https://threatpost.com/github-attack-perpetrated-by-
             | chinas-g...
        
       | 02020202 wrote:
       | i would like to see a performance comparison with SRT, for
       | example, or other udp-based protocols. i mean, if it's good for
       | video, it must be good for the web too.
        
       | drenvuk wrote:
       | Can someone provide the tradeoffs and benefits of QUIC vs
       | WebSockets vs WebRTC? I know websockets are tcp and WebRTC
       | requires some special tunneling logic but aside from that I don't
       | particularly know how quic is better or different aside from
       | using udp.
        
         | jeffbee wrote:
         | That's it. UDP is why it's better. It disintermediates
         | operating system developers and their badly-tuned, slowly-
         | evolving TCP implementations.
        
           | pmlnr wrote:
           | Right, because all we ever need is more complexity, no
           | backwards compatibility, and more speed at any cost /s
           | 
           | Come on. Networks are extremely FAST by now, TCP or not. It's
           | the silly amount of JS computation pushed to the client that
           | is slow, both in download speed and on the client.
        
             | Spivak wrote:
             | But we have backwards compatibility right now. That's,
             | like, the whole point of the OSI model. Any device that
             | supports UDP can handle QUIC with no fanfare like it was
             | any other application-layer protocol because it is!
        
               | pmlnr wrote:
               | We have fallbacks, that's not the same as backwards
               | compatibility.
               | 
               | They call this HTTP3, and they shouldn't; it's not HTTP.
        
               | Spivak wrote:
               | That was the case with HTTP2 as well though. The client
               | can negotiate with the server about supported protocols
               | but if a client only spoke HTTP1.1 and the server only
               | spoke HTTP2 they simply couldn't talk. It's just that
               | basically no servers were HTTP2 only.
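               | 
               | That negotiation is ALPN inside the TLS
               | handshake; a minimal sketch with the Python
               | stdlib (www.google.com is just a host known to
               | speak h2; HTTP/3 itself is advertised
               | separately, via the Alt-Svc response header):
               | 
               |     import socket, ssl
               | 
               |     host = "www.google.com"
               |     ctx = ssl.create_default_context()
               |     # Offer h2 and HTTP/1.1; the server picks
               |     # one it also speaks, or we fall back.
               |     ctx.set_alpn_protocols(["h2", "http/1.1"])
               | 
               |     raw = socket.create_connection((host, 443))
               |     tls = ctx.wrap_socket(raw, server_hostname=host)
               |     # Prints "h2" here; None would mean no
               |     # protocol in common was negotiated.
               |     print(tls.selected_alpn_protocol())
               |     tls.close()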
        
           | dsr_ wrote:
           | The protocol stack is open to contributions on Linux,
           | FreeBSD, OpenBSD and DragonflyBSD; while Microsoft and Apple
           | don't take contributions, their developers do go to the same
           | conferences and InterOps that everybody else goes to.
           | 
           | If there's a general tuning problem in TCP implementations,
           | the cycle time for getting it fixed should be around 2 years
           | from discovery to all regularly updated machines getting the
           | fix. Given global impact, that seems pretty reasonable to me.
        
             | jeffbee wrote:
             | The actual history of "open" contributions to the Linux TCP
             | stack is one of vehement opposition. Basic things like SYN
             | retransmission remain hard-coded to constants that were
             | (arguably) appropriate in the 1970s.
        
           | toast0 wrote:
           | Yep, Google won't tune TCP on Android, so they're abandoning
           | it in favor of TCP over UDP.
        
         | jjice wrote:
         | From what I understand, QUIC is a lower level protocol near TCP
         | and UDP which uses UDP as its base for transport. Currently,
         | HTTP/2 can multiplex streams, but if one has an error, all must
         | be stopped while TCP fixes it. In QUIC they use UDP, which
         | doesn't do error checking, so QUIC implements the error
         | checking itself. QUIC will handle the error checking while the
         | UDP streams continue to deliver at the same time.
         | 
         | This is all based on my brief reading of the QUIC Wikipedia
         | article, so take my knowledge with a grain of salt, but I think
         | that my above summary fits.
         | 
         | Wikipedia at relevant anchor:
         | https://en.wikipedia.org/wiki/QUIC#Background
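         | 
         | To make that blocking behaviour concrete, here is a toy
         | model in plain Python (no real networking; the streams and
         | the lost packet are invented purely for illustration):
         | 
         |     # Three streams multiplexed over one connection;
         |     # the 2nd packet, carrying ("b", "B1"), is lost.
         |     packets = [("a", "A1"), ("b", "B1"), ("c", "C1"),
         |                ("a", "A2"), ("b", "B2"), ("c", "C2")]
         |     lost = 1  # index of the dropped packet
         | 
         |     # TCP-style (HTTP/2): one ordered byte stream, so
         |     # nothing past the gap reaches the application
         |     # until the retransmit arrives.
         |     tcp_now = packets[:lost]
         | 
         |     # QUIC-style: ordering is per stream, so only
         |     # stream "b" waits for the retransmit; "a" and
         |     # "c" keep flowing.
         |     quic_now = [p for i, p in enumerate(packets)
         |                 if i != lost and p[0] != "b"]
         | 
         |     print("before retransmit, tcp :", tcp_now)
         |     print("before retransmit, quic:", quic_now)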
        
           | jabart wrote:
           | Error in this context is a missing packet causing a delay and
           | a request for a retransmit blocking the TCP connection at the
           | OS/Kernel while it waits for the missing packet before
           | sending it to the application.
           | 
           | UDP packets have checksums; it's what network switches use
           | to check before forwarding the packet or dropping it.
           | 
           | The other benefit is that UDP doesn't have a window size
           | (buffer), which is part of a design from when computers
           | had RAM measured in K instead of GB ("Hey, my buffer is
           | full, stop"). The chatty nature of TCP reduces download
           | speeds across larger physical distances. It's why download
           | managers spin up multiple threads to download parts of a
           | file in parallel, to work around it.
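           | 
           | For the window/distance point, the ceiling is roughly
           | window size divided by round-trip time; a quick
           | back-of-the-envelope (illustrative numbers, not
           | measurements):
           | 
           |     # Max TCP throughput is bounded by window / RTT.
           |     window_bytes = 64 * 1024   # classic 64 KiB window
           |     for rtt_ms in (10, 100, 300):
           |         mbit = window_bytes * 8 / (rtt_ms / 1000) / 1e6
           |         print(f"RTT {rtt_ms:3d} ms: {mbit:5.1f} Mbit/s")
           |     # About 52 Mbit/s at 10 ms, 5.2 at 100 ms and 1.7
           |     # at 300 ms; hence window scaling, or several
           |     # parallel connections, to fill long fat links.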
        
         | oscargrouch wrote:
         | Not an expert, but..
         | 
         | QUIC is basically HTTP2 (bidi frames and keep-alive
         | connections) over something like TCP-over-UDP, with
         | encryption built in. The other cool thing about QUIC is the
         | ability to drop down to the UDP layer and skip the control
         | and encryption protocol if you need to (I just don't know if
         | they will expose this feature at the application level).
         | 
         | WebRTC is SRTP over UDP with hole punching et al., for when
         | you want to use UDP but with the whole set of problems of
         | the UDP/P2P approach already taken care of.
         | 
         | WebSockets works over plain TCP.
         | 
         | QUIC is the more advanced of the protocols and it will probably
         | "take over the world" with time, but it has yet to prove
         | itself.
         | 
         | It could probably be a good fit for the cases WebRTC is
         | being used for, but I don't know if Chrome will be ambitious
         | enough to let developers mess with the building blocks of
         | QUIC. If not, it will just end up as a substrate for HTTP3
         | (no small feat anyway).
        
         | Hamcha wrote:
         | The difference is literally using UDP instead of TCP. The
         | problem QUIC tries to solve is that many of TCP's design
         | decisions were made in a different environment than the one
         | we live in today (e.g. being able to switch between two
         | internet sources like Wi-Fi and mobile data while retaining
         | connections). Plus, it integrates TLS to reduce protocol
         | overhead.
         | 
         | IIRC even UDP isn't ideal; the reason for choosing it over
         | making a brand new protocol was to avoid network devices
         | like routers dropping packets that they wouldn't recognize.
         | 
         | WebSockets is just an abstraction over a reliable protocol
         | (TCP), therefore, WebSockets could theoretically work over QUIC
         | too.
        
           | chungus_khan wrote:
           | The internet is a large cluster of workarounds for design
           | decisions made in the 1970s, a truly beautiful time. A time
           | before anyone imagined the need for a system to authenticate
           | who is sending mail or from what server. A time when a
           | "network connection" was a big ass permanent wire into a
           | deeply imperfect network. A time where someone intercepting
           | the communications on that wire was an afterthought. A time
           | where even when things were encrypted, DES was considered an
           | acceptable cipher.
           | 
           | Unfortunately today's internet has needed to be a continuum
           | of newer stuff that still has to work with the older stuff.
           | Such is life.
        
           | sippingjippers wrote:
           | This is a fairly drastic simplification. There are a bunch
           | of different pieces to QUIC (not a complete list, and
           | sorry, I'm not authoritative either; rough latency numbers
           | follow the list):
           | 
           | - SSL and "TCP" handshake are collapsed into a single
           | transaction
           | 
           | - That transaction is worst case 1 network roundtrip, best
           | case (returning user) 0 roundtrips
           | 
           | - Anti-filtering baked in. Quic reveals almost nothing for
           | middleboxes to filter on. There is not yet an accepted
           | solution for encrypting the target server name (SSL SNI) but
           | that's still being worked on AFAIK
           | 
           | - "Modern" congestion control approach, where "modern" means
           | "fuck any TCP connections sharing the link"
        
       | Ericson2314 wrote:
       | I am generally pro QUIC, but after seeing
       | https://tools.ietf.org/html/draft-ietf-quic-datagram-01 I have to
       | ask, why not have all the streaming stuff on top of this? Then
       | the layering looks like:
       | 
       | 1: connections management + encryption
       | 
       | 2: streams and multiplexing
       | 
       | Seems pretty good to me?
        
         | Matthias247 wrote:
         | There's some efficiency gains from having streams at a lower
         | level. E.g. if it's known that a transmission of a particular
         | chunk of a stream failed, the next transmission can capture
         | that chunk plus additional data from that stream instead of
         | just blindly retransmitting a full datagram.
         | 
         | Besides that the crypto and handshake parts also need streams
         | with guaranteed delivery and ordering, since they carry TLS
         | stream data.
        
           | Ericson2314 wrote:
           | > E.g. if it's known that a transmission of a particular
           | chunk of a stream failed, the next transmission can capture
           | that chunks plus additional data from that stream instead of
           | just blindly retransmitting a full datagram.
           | 
           | But QUIC Datagrams do have ACKs, and don't have retries.
           | Maybe we don't want them to have ACKs, but as long as it's
           | opt-in, is that not enough for the upper layers?
           | 
           | > Besides that the crypto and handshake parts also need
           | streams with guaranteed delivery and ordering, since they
           | carry TLS stream data.
           | 
           | QUIC packets are individually encrypted so more metadata can
           | be encrypted too. And I don't think connection establishment
           | uses any sort of stream abstraction either since people speak
           | of n-packet handshakes?
        
       | garganzol wrote:
       | The level of complexity of this thing goes way beyond the HTTP
       | over CORBA experiment that took place at the end of the
       | millennium.
       | 
       | The point is: despite CORBA's convoluted complexity, at least
       | the HTTP + CORBA experiment was somewhat sane, as it allowed
       | multiplexed connections right out of the box and relied upon
       | standard network capabilities without reinventing the wheel.
       | All that in 1999 or so.
       | 
       | DNS over HTTPS, QUIC et al. look like nothing less than a
       | monopolistic attack on the open web. Google really wants to own
       | the Internet.
        
       ___________________________________________________________________
       (page generated 2020-10-07 23:01 UTC)