[HN Gopher] HTTP/3 Is Fast
       ___________________________________________________________________
        
       HTTP/3 Is Fast
        
       Author : SerCe
       Score  : 388 points
       Date   : 2021-12-15 08:08 UTC (14 hours ago)
        
 (HTM) web link (requestmetrics.com)
 (TXT) w3m dump (requestmetrics.com)
        
       | ainar-g wrote:
       | Quite ironically, the link doesn't work for me.
       | 
       | Archive link:
       | https://web.archive.org/web/20211201172207/https://requestme....
        
       | AtlasBarfed wrote:
       | I have seen SO MANY "reliable UDP" transports written instead of
       | TCP over the decades. Off the top of my head: TIBCO, SaltStack,
       | various games have bespoke UDP / TCP hybrids IIRC.
       | 
       | Is TCP really that fundamentally slow? How could high-level
       | repurposes of UDP be faster than presumably hardware optimized
       | and heavily analyzed TCP stacks?
       | 
        | The "lies, damn lies, and benchmarks" line seems a bit applicable
        | here too. He's disabling caching? Sure, it's testing a specific
        | transport situation, but caching IMO would mitigate a lot of the
        | dramatic advantages. And what about Cloudflare edge caching
        | outside the browser? I think he was routing all resource requests
        | through his single server, which probably isn't cached properly.
       | 
       | So with good caching will HTTP/3 produce the advantage for the
       | everyman over HTTP/1.1 to justify the attention?
        
         | Dylan16807 wrote:
         | You can't control TCP very well, and you can't do anything
         | about head-of-line blocking. So yes, for lots of circumstances
         | it's easy to beat TCP. Importantly, a custom protocol over UDP
         | can do things that are incompatible with 30 year old TCP
         | implementations.
         | 
         | And you don't need hardware optimization until you're getting
         | into many gigabits per second.
        
       | Animats wrote:
        | This is a bigger performance difference than Google reported.
       | 
       | Is the packet loss rate being measured? That's the main cause of
       | head-of-line blocking delays.
       | 
       | Are both ends using persistent TCP connections? If you have to
       | re-open and redo the TLS crypto handshake each time, that's a
       | huge overhead. Does Caddy implement that? Is the CONTENT-LENGTH
       | header set? If not, each asset is a fresh TCP connection.
        
       | bawolff wrote:
       | > TLS 1.2 was used for HTTP/1.1 and HTTP/2
       | 
       | > TLS 1.3 was used for HTTP/3.
       | 
       | > 0-RTT was enabled for all HTTP/3 connections
       | 
       | Ok then. No potential confounding variables there... none at all.
       | 
        | [For reference, it's expected that TLS 1.3 with 0-RTT is going to
        | be much faster than TLS 1.2, especially when fetching small
        | documents from geographically far away places. To be clear, I'm
        | not doubting that HTTP/3 gives performance improvements in some
        | network conditions; this is just a really bad test, and the TLS
        | version is probably a good portion of the difference here.]
        
       | JackFr wrote:
        | That the improvements are given in milliseconds rather than
        | percent, and that the charts aren't anchored at zero, tells me
        | all I need to know.
        
         | [deleted]
        
         | tester34 wrote:
         | What do you mean, what's wrong with those charts?
         | 
         | Would there be some better/more precise information shown if
         | they were anchored at 0?
        
           | vlmutolo wrote:
           | See my comment here:
           | 
           | https://news.ycombinator.com/item?id=29567067
        
         | akyoan wrote:
         | How to Lie with Charts 101
        
           | authed wrote:
           | s/Lie/deceive
        
             | tgv wrote:
             | It's a reference to this book:
             | https://en.wikipedia.org/wiki/How_to_Lie_with_Statistics
        
       | KronisLV wrote:
       | > It would be 18 more years before a new version of HTTP was
       | released. In 2015, and with much fanfare, RFC 7540 would
       | standardize HTTP/2 as the next major version of the protocol.
       | 
       | Huh, this is interesting.
       | 
        | So the current timescale is like:
        | 
        |     HTTP/1: 1996 (though HTTP/1.1 came out in 1997)
        |     HTTP/2: 2015
        |     HTTP/3: 2021 (current draft)
       | 
       | Should we expect HTTP/4 around 2022 or 2023, at this increasing
       | rate of progress, then? Just a bit of extrapolation, since it
       | seems like the rate of progress and new versions to deal with is
       | increasing. Promising, but possibly worrying.
        
         | azlen wrote:
         | From what I've read about it, one of the exciting things about
         | QUIC and HTTP/3 is that they're much more extensible. Meaning
         | anyone might be able to dabble and experiment with their own
         | version of the protocol.
         | 
         | So yes, I'd think the rate of progress may well increase. Not
         | all will become standards, but I imagine we might see HTTP/3.1,
         | 3.2, etc. long before we see an entirely new version like
         | HTTP/4
        
       | amenod wrote:
       | > The 0-RTT feature in QUIC allows a client to send application
       | data before the handshake is complete. This is made possible by
       | reusing negotiated parameters from a previous connection. To
       | enable this, 0-RTT depends on the client remembering critical
       | parameters and providing the server with a TLS session ticket
       | that allows the server to recover the same information.
       | 
       | Am I missing something, or is this yet another way to track
       | clients across visits? If so, I'm sure Chrome will be faster than
        | Firefox (because it will keep the session IDs alive forever).
       | Well played, Google.
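        | 
        | To make "remembering critical parameters" concrete, here is a
        | minimal sketch in Go using the standard crypto/tls package (over
        | TCP rather than QUIC, with example.com as a placeholder host):
        | the ClientSessionCache holds exactly the state that resumption,
        | and therefore 0-RTT, depends on, and that a browser would have
        | to clear to avoid being correlated across visits.
        | 
        |     package main
        | 
        |     import (
        |         "crypto/tls"
        |         "fmt"
        |     )
        | 
        |     func main() {
        |         // The cache stores the session ticket and negotiated
        |         // parameters from a previous handshake; presenting the
        |         // ticket again is what allows resumption (and, with
        |         // TLS 1.3, 0-RTT early data).
        |         cache := tls.NewLRUClientSessionCache(64)
        | 
        |         cfg := &tls.Config{
        |             MinVersion:         tls.VersionTLS13,
        |             ClientSessionCache: cache,
        |         }
        | 
        |         addr := "example.com:443" // placeholder host
        |         buf := make([]byte, 4096)
        |         for i := 0; i < 2; i++ {
        |             conn, err := tls.Dial("tcp", addr, cfg)
        |             if err != nil {
        |                 panic(err)
        |             }
        |             // The second connection should resume, since the
        |             // ticket from the first is already in the cache.
        |             st := conn.ConnectionState()
        |             fmt.Println("resumed:", st.DidResume)
        |             // Send a tiny request and read once; reading also
        |             // processes the server's session ticket, which
        |             // lands in the cache for the next connection.
        |             fmt.Fprint(conn, "HEAD / HTTP/1.1\r\n"+
        |                 "Host: example.com\r\n\r\n")
        |             conn.Read(buf)
        |             conn.Close()
        |         }
        |     }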
        
         | stusmall wrote:
         | There are features that go back a ways in TLS that someone
         | could use for tracking. A while back I wrote a quick and dirty
         | POC for using TLS 1.2 session resumption for sub-NAT user
         | tracking[1]. While it is really effective at separating out
         | multiple different users interacting with a server from behind
         | a NAT, I'm not sure these features are usable for cross site
          | tracking. I'm not familiar with 0-RTT, but session resumption
          | scopes tickets to a host, so it wouldn't be useful there.
         | 
          | While someone could use it for some tracking purposes, it
          | really is a _huge_ performance boon. There were good intentions
          | behind putting these features in, even if they can be abused.
         | 
         | 1. https://stuartsmall.com/tlsslides.odp
         | https://github.com/stusmall/rustls/commit/e8a88d87a74d563022...
        
       | baybal2 wrote:
        | HTTP/3 IS NOT FAST. Very likely, smartphones or wimpier laptops
        | will not be able to benefit from the speed because they lack
        | anything like the hardware TCP offloading that benefits HTTP/1.1.
        
         | drewg123 wrote:
         | This is certainly true on the server side where QUIC can be ~4x
         | as expensive as TLS over TCP for some workloads, and I've been
         | wondering how much it matters on the client side. Do you have
         | any data from low-end clients showing either increased CPU use,
          | increased load times, or worse battery life with QUIC?
        
         | jillesvangurp wrote:
         | This literally is an article presenting benchmarks that
         | demonstrate the exact opposite of what you are claiming. If you
         | have benchmarks to contradict the findings, maybe you can share
         | them?
        
           | drewg123 wrote:
           | The article goes into very little detail regarding the client
           | hardware and software. From "No other applications were
           | running on the computer", one can assume it was a laptop or
           | desktop. If you assume that it was a developer's laptop, then
            | it's probably not the "wimpy" laptop GP talked about, and
           | certainly not a smartphone.
        
       | fmiras wrote:
       | Awesome post!
        
       | est wrote:
        | Is it possible to host a UDP-only h3 site without TCP on 443?
        
         | heftig wrote:
         | Yes, I believe this is possible with the HTTPS DNS records
         | defined in https://www.ietf.org/archive/id/draft-ietf-dnsop-
         | svcb-https-...
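          | 
          | For illustration only (hypothetical zone data), an HTTPS
          | record advertising h3 might look roughly like:
          | 
          |     example.net. 3600 IN HTTPS 1 . alpn="h3"
          | 
          | The draft also defines a no-default-alpn parameter for
          | signalling that the default TCP-based protocols aren't
          | offered; whether a client really never touches TCP 443 still
          | depends on its implementation.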
        
       | bestest wrote:
       | 200ms faster than WHAT?
       | 
       | Edit: Ok, apparently there are charts that are not being loaded
       | due to the HN Effect. A clear example of how a blind reader would
       | miss quite a bit of information when the data is only shown in
        | images / other non-A11Y-compliant resources.
        
         | ksec wrote:
         | >Edit: Ok, apparently there are charts that are not being
         | loaded due to the HN Effect.
         | 
          | No, it's only broken on Safari. Turn on lazy image loading or
          | use another browser.
        
         | avereveard wrote:
          |     Comparing HTTP/2 and HTTP/3 protocol versions when
          |     loading pages from NY
          | 
          |     HTTP/3 is:
          |         200ms faster for the Small Site
          |         325ms faster for the Content Site
          |         300ms faster for the Single Page Application
          | 
          | it seems a text-only extraction of the section in question
          | remains perfectly intelligible
        
       | anotherhue wrote:
       | Just in time for Web3!
        
       | [deleted]
        
       | mgaunard wrote:
       | So I understand how multiplexing everything through a single TCP
       | session is a bad idea. But what was the problem with using
       | multiple TCP sessions like HTTP 1.1 does? Is it just the problem
       | that there is a latency cost to establishing the connection (even
       | though those sessions are reused later on)?
       | 
       | How about extending TCP to establish multiple sessions at once
       | instead?
        
         | jayd16 wrote:
         | Extending TCP is considered infeasible due to its pervasiveness
         | and the calcification of it across the ecosystem. Implementing
         | a new protocol is also considered infeasible for similar
         | reasons. So they arrived at a new protocol over UDP to
         | implement new features.
        
         | vlmutolo wrote:
         | I think it's just that hundreds of TCP connections per user can
         | really bog down the servers. Individual TCP connections are
         | much more expensive than QUIC streams.
         | 
         | QUIC also handles things like transparent IP-address switching
         | (on either side of the connection).
        
           | mgaunard wrote:
            | There is no difference in server overhead: the amount of
            | state to maintain and data received/sent is the same.
           | 
           | Also nothing prevents the server from implementing TCP in
           | userland, apart from maybe security concerns, but then if you
           | want low latency you have to discard those anyway.
        
             | [deleted]
        
       | hutrdvnj wrote:
        | How much packet loss does typical internet traffic have, on
        | average?
        
       | usrbinbash wrote:
       | This comment is not specifically about quic, which I believe is a
       | solid idea and would love to see used and supported more, but
       | about the topic of requiring super fast connections to do things
       | that really shouldn't need them.
       | 
       | Accessing a page where all useful information I am interested in
       | is text, should not require a new protocol developed by genius
       | engineers to work without delay. If I want to read an article
       | that is 50kB of text, it's not unreasonable to expect that
       | information to be here before my finger leaves the ENTER key,
        | regardless of how it's transmitted.
       | 
       | Why isn't that the case?
       | 
       | Because said article is not transmitted with a dollop of html to
       | structure it, and a sprinkle of CSS and JS to make it look nice.
       | It's delivered to me buried in a mountain of extraneous garbage,
        | pulled in from god-knows-where, mostly to spy on me or to sell me
        | crap I don't need.
       | 
       | I am not saying "don't invent new protocols". But maybe think
       | about why it was perfectly possible to have functional, fast, and
       | reliable webpages and applications in the 90s and early 00s,
       | despite the fact that our networks and computers were little more
        | than painted bricks and paper-mache by today's standards.
       | 
       | https://idlewords.com/talks/website_obesity.htm
       | 
        | If we don't think about this, then neither QUIC, nor QUIC2, nor
        | REALLY_QUIC will save us from wading through a pool of
        | molasses-slow crap. Because inevitably, each technical
        | improvement that could make our stack faster is followed by an
        | even bigger pile of bloat that drags it down again.
        
         | unbanned wrote:
         | >should not require a new protocol developed by genius
         | engineers to work without delay
         | 
         | Why not?
        
           | christophilus wrote:
           | He explained why. Visit the old Bob Dole website, then visit
           | a 2021 candidate's website. The problem isn't the protocol.
           | It's the internet obesity crisis.
        
             | unbanned wrote:
             | Why would you not want experts designing a protocol of such
             | reach and significance?
        
               | christophilus wrote:
               | I don't know if you're deliberately trolling. In the off
               | chance you're not, I suggest that you took the entirely
               | wrong takeaway from the OP, and I suggest rereading.
               | 
               | He never said experts shouldn't design protocols, and
               | nothing like that was anywhere near the point.
        
         | tlamponi wrote:
         | > If I want to read an article that is 50kB of text, it's not
         | unreasonable to expect that information to be here before my
         | finger leaves the ENTER key, regardless of how its transmitted.
         | 
          | I mean, no, but it can be unreasonable to expect physical
          | limits to be beaten if you keep in mind that some people just
          | have a few hundred to a thousand km between their computer and
          | the server they access.
         | 
         | E.g., in my hometown I may get latencies of 200 ms and more to
         | a few sites, especially simpler ones that have no CDN but are a
         | single server on the other end of the world. Don't get me
          | started if I'm travelling by train between Vienna and South
          | Tyrol; in some parts (cough, Germany) the internet is really
          | spotty, and latency spikes of 10 to 20 s are (sadly) rather
          | normal for a few minutes here and there.
         | 
          | Now, with HTTP/1.1 over TCP and TLS involved, the setup time
          | already gets me to almost a second of wait time (more than the
          | few tens of ms my finger needs to leave the Enter key) in the
          | former setup, and in the latter I may need 30 to 60 s, or it
          | even just times out when travelling by train and being in a bad
          | spot, connectivity-wise.
          | 
          | QUIC improves there: the TLS handshake starts immediately, and
          | UDP setup needs fewer round trips (none) compared to TCP (even
          | with TCP fast-open).
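          | 
          | As a back-of-the-envelope illustration (assuming a 200 ms
          | round trip, a fresh connection, and ignoring DNS), time until
          | the first response byte looks roughly like:
          | 
          |     TCP + TLS 1.2: 1 RTT (TCP) + 2 RTT (TLS) + 1 RTT (HTTP)
          |                    = 4 x 200 ms = 800 ms
          |     TCP + TLS 1.3: 1 RTT (TCP) + 1 RTT (TLS) + 1 RTT (HTTP)
          |                    = 3 x 200 ms = 600 ms
          |     QUIC (1-RTT):  1 RTT (handshake) + 1 RTT (HTTP)
          |                    = 2 x 200 ms = 400 ms
          |     QUIC (0-RTT):  request goes out in the first flight
          |                    = ~1 x 200 ms = 200 ms on resumption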
         | 
          | So simple websites can profit from QUIC too: initial load time
          | can be reduced a lot, and browsing them is finally doable even
          | on remote, spotty connections. Also, I happen to develop
          | applications that are delivered as web apps; they just tend to
          | acquire a certain complexity even if one tries to stay simple,
          | so loading them faster even when nothing is cached yet is a
          | welcome thing to me.
          | 
          | Bloated pages will still be slow (sure, faster than with
          | HTTP/1.1, but still slow), and I definitely would like to see
          | that improved, but that's not really related to the issues that
          | QUIC improves on, as simple websites win too when using it;
          | it's just less noticeable there if you already have a somewhat
          | OK connection.
         | 
         | In summary: Why not invent a new protocol if you can
         | significantly reduce overhead for everyone, especially if it
         | can coexist with the simple and established one.
        
           | deepstack wrote:
            | >QUIC improves there: the TLS handshake starts immediately,
            | and UDP setup needs fewer round trips compared to TCP.
            | 
            | Hmmm, a bit skeptical on the fewer round trips. Aren't all
            | the round trips in TCP there to ensure the integrity of the
            | connection? With UDP it is my understanding that no
            | confirmation of receiving a packet is issued. So a server can
            | send out a signal, but can never be sure the client got it. I
            | can see it can be great for multiplexing/broadcasting, but to
            | switch the whole HTTP protocol over like this, I can't
            | imagine there won't be tons of integrity and security issues.
        
             | simiones wrote:
             | QUIC is a combination of TLS and TCP essentially,
             | implemented over UDP so that there is some chance that
             | middle-boxes will allow it to pass (there's very little
             | chance to actually use a completely new protocol over IP
             | directly and have your packets received by the vast
             | majority of networks).
             | 
             | HTTP over QUIC will probably do a similar number of round-
             | trips compared to HTTP over TCP, but it will do much fewer
             | than HTTP over TLS over TCP. There's no getting away from
             | SYN/SYN-ACK/ACK for a reliable protocol, but QUIC can put
              | certificate negotiation information directly here, instead
              | of the TLS-over-TCP approach of SYN / SYN-ACK / ACK /
              | ClientHello / ServerHello / ClientKeyExchange /
              | ServerKeyExchange.
             | 
             | Additionally, QUIC supports multiple streams over a single
             | physical connection, and correctly implements packet
             | ordering constraints and retries for them. TCP supports a
             | single stream over a connection: any delayed packet will
             | delay the entire stream. In QUIC, a delayed packet will
             | only delay packets from the same logical stream, packets
             | from other streams can still be received successfully on
             | the same connection.
             | 
             | This feature is heavily used by HTTP/3: HTTP/2 introduced a
             | concept of HTTP streams, but all HTTP streams were run over
             | the same TCP connection, so over a single TCP stream: a
             | slow packet on HTTP/2 stream 1 will delay all packets from
              | HTTP/2 streams 2, 3, etc. With QUIC, an HTTP/3 stream is a
              | QUIC stream, so a single slow request will not block
              | packets from other requests from being received.
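              | 
              | A sketch of "many streams, one connection" using the
              | quic-go library (function signatures differ between
              | quic-go versions, and the address and ALPN string here
              | are just placeholders):
              | 
              |     package main
              | 
              |     import (
              |         "context"
              |         "crypto/tls"
              |         "fmt"
              |         "io"
              | 
              |         "github.com/quic-go/quic-go"
              |     )
              | 
              |     func main() {
              |         ctx := context.Background()
              |         tlsConf := &tls.Config{
              |             NextProtos: []string{"example-proto"},
              |         }
              | 
              |         // One handshake, one connection.
              |         conn, err := quic.DialAddr(ctx,
              |             "example.com:4433", tlsConf, nil)
              |         if err != nil {
              |             panic(err)
              |         }
              |         defer conn.CloseWithError(0, "done")
              | 
              |         // Many independent streams: a packet lost on
              |         // one stream stalls only that stream.
              |         for i := 0; i < 3; i++ {
              |             stream, err := conn.OpenStreamSync(ctx)
              |             if err != nil {
              |                 panic(err)
              |             }
              |             fmt.Fprintf(stream, "request %d\n", i)
              |             stream.Close() // closes the send side only
              |             body, _ := io.ReadAll(stream)
              |             fmt.Printf("got %d bytes\n", len(body))
              |         }
              |     }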
        
               | benmmurphy wrote:
               | Weirdly enough I think QUIC often does something that is
               | not much better because of address validation. It is not
               | safe for the QUIC server to send a bunch of large packets
               | back to the client source address without being confident
               | the client controls the address. We run QUIC in
               | production so I'll grab an example conversation to show
               | how many round trips connection setup takes. I guess we
               | might not be optimising connection setup correctly
               | because we are using long running QUIC connections to
               | transport our own protocol and I don't care how long the
               | connection setup takes.
               | 
                | so we are using a slightly older version of QUIC than the
                | RFC and using go-quic on server and client, and this is
                | what I see when creating a connection:
                | 
                |     Client: Initial (1284 bytes) includes client hello
                |     Server: Retry (166 bytes) includes retry token
                |     Client: Initial (1284 bytes) including retry token
                |             and client hello
                |     Server: Server Hello/Encrypted Extensions/Cert
                |             Request/Certificate/Certificate Verify/
                |             Finished (1284 bytes)
                |     Client: Certificate/Certificate Verify/Finished
                |             (1284 bytes)
               | 
               | address validation is covered in RFC 9000:
               | 
               | https://datatracker.ietf.org/doc/html/rfc9000#section-8.1
               | 
                | probably what go-quic does is not optimal because you
                | don't always have to validate the address.
                | 
                |     Prior to validating the client address, servers MUST
                |     NOT send more than three times as many bytes as the
                |     number of bytes they have received.  This limits the
                |     magnitude of any amplification attack that can be
                |     mounted using spoofed source addresses.  For the
                |     purposes of avoiding amplification prior to address
                |     validation, servers MUST count all of the payload
                |     bytes received in datagrams that are uniquely
                |     attributed to a single connection.  This includes
                |     datagrams that contain packets that are successfully
                |     processed and datagrams that contain packets that
                |     are all discarded.
                | 
                |     ...
                | 
                |     A server might wish to validate the client address
                |     before starting the cryptographic handshake.  QUIC
                |     uses a token in the Initial packet to provide
                |     address validation prior to completing the
                |     handshake.  This token is delivered to the client
                |     during connection establishment with a Retry packet
                |     (see Section 8.1.2) or in a previous connection
                |     using the NEW_TOKEN frame (see Section 8.1.3).
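                | 
                | For scale, using the packet sizes from the capture
                | above: the client's first Initial is 1284 bytes, so
                | before validating the address the server may send at
                | most 3 x 1284 = 3852 bytes.  A ServerHello plus a
                | typical certificate chain can easily exceed that, which
                | is one reason an implementation may prefer a Retry (at
                | the cost of an extra round trip) over answering the
                | first Initial directly.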
        
             | dtech wrote:
              | TCP is not the only way to ensure integrity; the QUIC
              | protocol ensures it too, using UDP as transport. This is
              | how all new protocols do it, since there are too many
              | hardware and software roadblocks to use anything but TCP
              | and UDP.
        
               | salawat wrote:
                | Only because no one wants to upgrade their damn backbone
                | NICs. SCTP solved things way better than QUIC, IMO,
                | except for the infrastructure inertia part.
               | 
               | The infrastructure inertia part isn't even so much a
               | question of technical infeasibility, but greed. So much
               | spending was slotted to carriers to improve their
               | networks, but instead of investing in capacity and
               | protocol upgrades, it all went to lobbying/exec bonuses.
        
               | lttlrck wrote:
               | SCTP runs over UDP too. The main reason for not picking
               | it is likely the TLS handshake optimizations they wanted.
        
               | xibo9 wrote:
               | Nope, SCTP runs at the same layer as UDP.
        
               | salawat wrote:
                | It does not run on UDP. It provides a UDP-like best-
                | effort transmission mode though, while also optionally
                | allowing for interweaving reliable connection features.
                | 
                | If SCTP really ran on UDP, I'd have no reason to be
                | salty, because we'd already be using it.
        
               | Dylan16807 wrote:
               | RFC 6951. It _can_ run over UDP if you want it to, with a
               | clear guide for how to do it.
               | 
               | > I'd have no reason to be salty, because we'd already be
               | using it.
               | 
               | Direct OS support is a big deal, and UDP gets messed with
               | too. If someone made SCTP-over-UDP the default mode,
               | while changing nothing else, I don't think it would
               | affect adoption at all.
        
           | usrbinbash wrote:
           | >In summary: Why not invent a new protocol
           | 
           | Pretty sure I said nowhere that we shouldn't invent new and
           | better protocols. QUIte the contrary (pardon the pun).
           | 
           | What I am saying is: We should not need to rely on new
           | protocols to make up for the fact that we send more and more
           | garbage, we should send less garbage.
           | 
           | If we can get less garbage, _and_ new and improved protocols,
           | all the better!
           | 
           | But if all we do is invent better protocols, the result will
           | be that what buries the internet in garbage now, will use
           | that better protocol to bury us in even more garbage.
        
             | ReactiveJelly wrote:
             | "rely" is doing a lot of lifting in that sentence. It's
             | more productive to talk about improvements.
             | 
             | You're saying it's good to make sites smaller, and I agree.
             | You're saying QUIC is good, and I agree.
             | 
             | What do you want? To just complain that bad sites exist? Is
             | this "what-aboutism" for programmers?
             | 
             | I'm sorry, usrbinbash. I'm sorry that bad things happen,
             | and I wish I could make you happy.
        
               | Dylan16807 wrote:
               | > What do you want?
               | 
               | How about for people to recognize that this performance
               | gift is easily squandered, and talk about how we're going
               | to prevent bloat so that we can preserve the good
               | performance.
        
         | [deleted]
        
         | msy wrote:
         | The primary purpose of the web is now an app delivery platform
         | & VM execution environment, not text delivery.
        
           | Cthulhu_ wrote:
            | Anecdotally, most of my internet use is still finding and
           | reading text based information.
           | 
           | ...I mean it's to build a web application, but that's
           | different.
        
           | wolf550e wrote:
           | That is correct but irrelevant. When I reach a landing page
           | that mainly has text content (whether it's a news article, a
           | blog post, a company's "about us" or a product's features
           | list/matrix or something else), the page should load fast.
           | 
           | My browsers shouldn't need to make 100 network connections
           | and download megabytes to display a thousand words of text
           | and some images. It should be about 1 network connection and
           | maybe dozens of kilobytes (for images).
           | 
           | We are not complaining about the web version of productivity
           | apps like email, spreadsheets, project management, etc.
           | loading slowly. But a newspaper article should load faster
           | than a social media feed, and often they don't.
        
         | ignoramous wrote:
         | > _It 's delivered to me buried in a mountain of extraneous
         | garbage, pulled in from god-knows-where_
         | 
         | A good solution could _have been_ AMP: https://amp.dev/
         | 
         | > _mostly to spy on me_
         | 
         | If only AMP was not beholden to an adcorp.
        
           | the8472 wrote:
            | Isn't the first thing AMP does to pull in some javascript
            | that is _required_ to render the page? I recall there being
            | some relatively long delay showing an empty page if you had
            | some stuff blocked, because it was waiting for a fallback. Or
            | maybe that was some other Google thing that did it. That's
            | exactly the opposite of HTML-first with small sprinkles of
            | CSS and JS.
        
             | ignoramous wrote:
              | Well, I meant that a web standard _like_ AMP, aimed at
              | clamping down on webpage obesity, would be nice (_instead_
              | of AMP).
        
         | throwawaylinux wrote:
         | Google wants to send you these blobs though, which is (one of
         | the reasons) why they develop faster protocols.
        
         | kobalsky wrote:
         | regardless of website bloat, tcp is pure garbage when dealing
         | with even minimal packet loss and it has always contributed to
         | the crappy feeling that mobile connections have.
         | 
         | if something doesn't start loading in 5 seconds and you are
         | still giving it time without jumping up and down, tcp gave you
         | stockholm syndrome.
         | 
         | I'm not familiar with quic so I don't know how to feel about
          | it, but we are in dire need of an alternative that doesn't
          | require reimplementing the good parts of TCP over UDP like
          | every game and communication app in the world does.
        
           | howdydoo wrote:
           | Are you alluding to SCTP/DCCP? Or is there some other
           | protocol I don't know about?
        
           | ReactiveJelly wrote:
           | I think QUIC is gonna be it, at least for the next 10 years.
           | 
           | It has unreliable, unordered datagrams, and a huge number of
           | TCP-like streams, all wrapped in the same TLS'd connection.
           | 
           | When I look at QUIC, I see old TCP protocols like IRC and
           | telnet coming back. The difficulty of encryption pushed many
           | applications into HTTPS so they could stay safe, even though
           | HTTPS isn't a perfect fit for every app.
           | 
           | With QUIC saying "You're as safe as possible and you have
           | basically a UDP and TCP portal to the server, have fun", I
           | think we'll see some custom protocols built / rebuilt on QUIC
           | that were lying dormant since the age when encryption was
           | optional.
           | 
           | For instance, multiplayer games could probably just use QUIC
           | as-is. Send the real-time data over the datagrams, and send
           | the chat messages and important game state over the streams.
           | Instead of some custom game communications library, and
           | instead of connecting TCP and UDP to the same server, one
           | QUIC library and one QUIC connection. Now it's within the
           | reach of a solo indie developer who wants to focus on their
           | game-specific netcode, and not on re-inventing networking
           | ideas.
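            | 
            | A rough sketch of that split, using the quic-go library
            | (method and type names vary across quic-go versions, and
            | encodePosition is a made-up placeholder for real game
            | serialization): unreliable datagrams for per-tick state, a
            | reliable stream for chat.
            | 
            |     package game
            | 
            |     import (
            |         "context"
            | 
            |         "github.com/quic-go/quic-go"
            |     )
            | 
            |     // encodePosition stands in for real serialization.
            |     func encodePosition(x, y float32) []byte {
            |         return []byte{byte(x), byte(y)}
            |     }
            | 
            |     // sendTick assumes conn was dialed with
            |     // quic.Config{EnableDatagrams: true} on both ends.
            |     func sendTick(
            |         ctx context.Context, conn quic.Connection,
            |     ) error {
            |         // Unreliable, unordered: a lost position update
            |         // is simply superseded by the next tick's update.
            |         err := conn.SendDatagram(encodePosition(3, 7))
            |         if err != nil {
            |             return err
            |         }
            | 
            |         // Reliable, ordered: chat and authoritative game
            |         // state go over a stream instead.
            |         chat, err := conn.OpenStreamSync(ctx)
            |         if err != nil {
            |             return err
            |         }
            |         defer chat.Close()
            |         _, err = chat.Write([]byte("gg\n"))
            |         return err
            |     }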
        
             | jayd16 wrote:
             | Does the HTTP/3 spec provide access to UDP-like (non-
             | retransmitted) requests? It looks like this might need
             | another extension?
        
               | Dylan16807 wrote:
               | The suggestion is to throw out HTTP entirely and build a
               | nice tight protocol on top of QUIC.
        
           | superkuh wrote:
           | Yes. Mobile wireless is intrinsically bad due to random round
           | trip times and packet loss. But we shouldn't make the rest of
           | the internet walk with crutches just because mobile clients
           | need them.
        
         | ReactiveJelly wrote:
         | > It's sad that we need this.
         | 
         | No, it's not, multi-dimensional optimization and improvement-
         | in-depth is good, actually.
         | 
         | Look back at the small site hosted in Bangalore - That's an
         | Indian version of me. A hacker whose projects can't run on
         | self-hosted Wordpress, who only bought a VPS because their home
         | ISP is not reliable.
         | 
         | With HTTP/1, most of America can't load their site, it times
         | out. With HTTP/2, it takes 2,500 ms. With HTTP/3, it takes
         | 1,000.
         | 
         | A _petty_ software upgrade allows this imagined Indian blogger
         | to gain an audience on _another continent_ without buying more
         | servers, without using a CDN, and without taking advertising
         | deals to afford more stuff.
         | 
         | You know 3 things I hate about the web? Advertisements, CDNs,
         | and having to buy servers.
         | 
         | I promise this is purely geographical, not political - How
         | often do you connect to servers outside the USA and Europe? I
         | mean trading packets with a computer. YouTube uploads don't
         | count because they have a CDN. For me, the answer is "almost
         | never".
         | 
         | The Internet is supposed to be global, but computers are made
         | of matter and occupy space, so connections to nearer servers
         | are still better, and they always will be. But QUIC makes the
         | far-away connections a little less bad. That is a good thing.
        
           | combyn8tor wrote:
           | A CDN solves those issues and is significantly faster than
           | HTTP3 in your use case... why the hate for them? The
           | centralised aspect?
        
           | Dylan16807 wrote:
           | > With HTTP/1, most of America can't load their site, it
           | times out.
           | 
           | That shouldn't happen. Do you have a real site in mind there?
           | 
           | > No, it's not, multi-dimensional optimization and
           | improvement-in-depth is good, actually.
           | 
           | That can be true at the same time as "it's sad that we _need_
           | this "
        
         | javajosh wrote:
         | Next you'll be telling people that rather than rent storage
         | units to store all their extra stuff, and then build an app to
         | track all of that stuff, to just not buy so much stuff in the
         | first place. If you generalize this type of dangerous advice,
         | then what happens to the American economy? The same applies to
         | websites: if you don't add all that great good stuff into every
         | page, then what happens to the programmer economy?[1]
         | 
         | [1] Although satire is dead, killed by people actually
         | espousing extreme views on the internet, I still indulge but am
         | forced to explicitly tell the reader: this is ridiculous
         | satire. Of course we should solve the root of the problem and
         | not invent/adopt technology that enables our bad habits.
        
         | hdjjhhvvhga wrote:
         | > If we don't think about this
         | 
         | Actually, in this case "we" means Google and a few other big
         | companies whose aims are not the same as ours - they are the
         | ones in charge. Sure, at times it happens that our interests
         | are somewhat aligned (like page load times) but only as far as
         | it serves them. For Google, a page without their code like
         | ads/analytics is pretty much useless; for us, it's more useful
         | because it doesn't track us and loads faster.
         | 
          | So yes, while they continue doing some work in that respect, I
          | expect it will actually get worse with time, as they are
          | focused on average consumer bandwidth in the USA. Once 5G is
          | well entrenched and broadband/fiber gets even faster, we can
          | expect even more bloat on websites, and there is not much "we"
          | (= users and developers) can actually do about it.
        
         | Chris2048 wrote:
         | Monetisation. The old internet didn't have enough rent-seekers
         | gatekeeping the content.
         | 
         | Even Wikipedia begs for money it doesn't need.
        
         | jefftk wrote:
         | _> it was perfectly possible to have functional, fast, and
         | reliable webpages and applications in the 90s and early 00s_
         | 
         | I think you have an overly rosy view of the time. Yes, some
         | things were fast, especially if you had an unusually good
         | connection, but the average person's experience of the internet
         | was much slower and less reliable.
        
           | 5e92cb50239222b wrote:
            | I used dial-up until around 2010. Yes, pages took pretty much
            | forever to load, but once a page had loaded, it was _fast_
            | (and I always use mid-level hardware at best). I was used to
            | opening up to 100 pages on my shitty Celeron with 128 MB of
            | RAM with Opera 9 (I think), because dial-up is paid by the
            | minute and you really have to open as many pages as possible
            | and read them all later. It worked just fine. Then the fat-
            | client/SPA devolution happened; you know the rest.
        
         | candiodari wrote:
          | OTOH, QUIC will mean UDP with per-stream flow control gets
         | through every corporate firewall on the planet in 5 or so
         | years. Hurray!
        
           | akyoan wrote:
           | _Why would they do that when HTTP1.1 is perfectly fine and
           | available?_
           | 
           | That will be the explanation, I'm not too optimistic.
        
           | goodpoint wrote:
           | Making firewalls even less effective at mitigating attacks.
        
             | candiodari wrote:
             | Given that connecting to technical docs or ssh is
             | considered an "attack" by half the people hiring
             | consultants, that will be a very good thing indeed.
        
         | comeonseriously wrote:
         | > Because said article is not transmitted with a dollop of html
         | to structure it, and a sprinkle of CSS and JS to make it look
         | nice. It's delivered to me buried in a mountain of extraneous
         | garbage, pulled in from god-knows-where, mostly to spy on me or
         | trying to sell me crap I don't need.
         | 
         | But that is how 75% of the people reading this make their
         | living...
        
           | posix_me_less wrote:
           | Which is why putting that observation here on HN is so
           | important. We can do something about it!
        
         | fiedzia wrote:
         | The reason is that a) if it doesn't look good, most people will
         | not take it seriously and b) you must have ads and tracking of
         | user activity.
         | 
          | What I really don't get is why bare HTML looks so awful today
          | and you have to add tons of JS and CSS to get something
          | acceptable.
        
           | usrbinbash wrote:
           | > and you have to add tons of js and css to get something
           | acceptable
           | 
           | No we don't.
           | 
           | As I have written before in this topic, HN is the perfect
           | example. It pulls in a small css and js file, both of which
           | are so slim, they don't even require minifying. The resulting
            | page is small, looks good, is performant and, most
            | importantly, _does its job_.
           | 
           | We don't need megabytes worth of cruft to display a good
           | looking page.
        
         | delusional wrote:
         | You already know this, but the "mountains of extraneous
         | garbage" is the content. The text you care about is just there
         | to make you download and execute the rest. quic is not needed
         | to give you the text you want. It's needed to deliver the
         | actual content, the ads.
        
           | Nemi wrote:
            | I agree that it is a Greek tragedy that it always comes to
            | this, but this is just the market's way of monetizing the
            | medium. This is how it happens. We used to sit through 20
            | minutes(!) of commercials every hour for TV content. Now we
            | download one or two orders of magnitude more data than is
            | required so that the content we actually want is "free".
            | Efficient markets will always devolve to this, unfortunately.
           | 
           | At least this (compared to unskippable commercials of past) I
           | can bypass _looking_ at.
        
         | onion2k wrote:
         | _But maybe think about why it was perfectly possible to have
         | functional, fast, and reliable webpages and applications in the
         | 90s and early 00s, despite the fact that our networks and
         | computers were little more than painted bricks and paper-mache
         | by todays standards._
         | 
         | I've been building web stuff since the 90s. Your memory of what
         | it was like is flawed. Sites were slow to load. People
         | complained about _images_ because they took a while to load,
         | and as soon as they completed the page jumped down and you lost
         | your place. Moving from one page to another page on the same
         | site _required_ throwing all the HTML away and starting over,
         | even in a web app like an email client (MSFT literally invented
         | XMLHttpRequest to solve that). The HTML content itself was
         | bloated with styles and font tags and table layouts. Often a
         | tiny article would weigh in at 50KB just because it had been
         | created in DreamWeaver or Hotmetal or something and the code
         | was horrible.
         | 
          | The web didn't feel fast back then. It was hellishly slow. I
         | had perf budgets to get sites loaded in under 8s (and that was
         | just measuring the time to a DOMContentLoaded event, not the
         | same as today's Core Web Vitals idea of loaded).
         | 
         | There's no doubt that the web is, in many places, bloated to
         | shit. It's not worse though. It's significantly better. It's
         | just that there's more work to be done.
        
           | usrbinbash wrote:
           | >It's significantly better.
           | 
            | It's not better if the only thing that keeps it afloat is the
            | fact that broadband is ubiquitous by now (and that isn't even
            | true for most of the world), and hardware got a lot better.
           | 
            | The main difference is that many people started using the web
            | after the gamification and bloat took over, so they are used
            | to ad banners flying in, random videos that start playing,
            | and phone batteries going flat just from looking at a news
            | article.
           | 
           | > It's just that there's more work to be done.
           | 
            | But is that extra work necessary? Look at this thread's page
            | on HN. It loads 402 kB worth of content: the HTML, a tiny
            | amount of JS that isn't even minified (and doesn't have to
            | be, because it's so slim), a small CSS file, 3 small GIFs and
            | the favicon.
           | 
           | That's it. That's all the network-load required to display a
           | usable, information-dense, not hard to look at and performant
           | experience.
        
         | dm33tri wrote:
          | Sometimes when I have a very limited connection on mobile,
          | nothing helps the web and no pages will load over HTTPS, even
          | if it's just plain HTML. And if the connection drops, then it
          | almost certainly won't recover and a full page reload is
          | necessary.
         | 
         | Other protocols may work much better in such conditions, for
         | example popular messengers can send and receive text, metadata
         | and blurry image previews just fine. In some cases even voice
         | calls are possible, but not a single website would load ever.
         | 
          | I think the HN crowd won't notice the changes. I hope that the
          | protocol will improve the experience for smartphone users
          | without a reliable 4G connection.
        
           | dangerbird2 wrote:
           | Yeah, one of the major uses of http/3 is that it can
           | gracefully handle connection interruptions and changes, like
           | when you switch from wifi to mobile, without having to wait
           | for the tcp stream to timeout. That's a huge win for both
           | humongous web apps and hackernews' idealized static webpage
           | of text and hyperlinks.
        
           | josefx wrote:
           | > for example popular messengers can send and receive text,
           | 
           | Your messenger isn't sending tens of megabytes for every
           | message.
           | 
           | > metadata and blurry image previews just fine.
           | 
            | One might wonder why they don't just send the 4 MB JPEGs
            | instead of those downscaled-to-hell previews, if they work so
            | well.
           | 
            | > In some cases even voice calls are possible
           | 
           | Not only kilobytes of data, but kilobytes of data where
           | transmission errors can be completely ignored. I have been in
           | enough VoIP calls to notice how "well" that works in bad
           | conditions.
           | 
           | Everything points to putting websites on a diet being the
           | correct solution.
           | 
           | > And if the connection drops, then it almost certainly won't
           | recover and full page reload is necessary.
           | 
            | I would consider that a browser bug. No idea why the
            | connection won't just time out and retry by itself.
        
             | Dylan16807 wrote:
             | > Everything points to putting websites on a diet being the
             | correct solution.
             | 
             | It would help a lot but it's not a full solution. You also
             | need to stop TCP from assuming that every lost packet is
             | because of congestion, or it will take a slow connection
             | and then underload it by a huge factor.
        
         | hdjjhhvvhga wrote:
         | Oh this is hilarious:
         | 
         | > The tech lead for Google's AMP project was nice enough to
         | engage us on Twitter. He acknowledged the bloat, but explained
         | that Google was "resource constrained" and had had to outsource
         | this project
         | 
         | > This admission moved me deeply, because I had no idea Google
         | was in a tight spot. So I spent a couple of hours of my own
         | time making a static version of the AMP website. . .
         | 
         | > By cutting out cruft, I was able to get the page weight down
         | to half a megabyte in one afternoon of work. This is eight
         | times smaller than the original page.
         | 
         | > I offered my changes to Google free of charge, but they are
         | evidently too resource constrained to even find the time to
         | copy it over.
        
         | austincheney wrote:
         | The web is an arms race.
         | 
        | The bloat and slowness are generally due to incompetence. On one
        | hand it's merchandising stalking users with poorly written
        | spyware, and on the other hand the technology is dictated by the
        | lowest common denominator of developers who cannot perform
        | without monumental hand-holding.
        | 
        | Does the end user really want or prefer the megs of framework
        | abstraction? No, that's immature trash from developers insecure
        | about their jobs. This is the standard of practice and it isn't
        | going away. In hiring it's seen as a preference, as increased
        | tool proliferation can justify higher wages.
         | 
         | The only way that's going to get better is by moving to an
         | alternate platform with more challenging technical concerns
         | than populating content.
         | 
            | With IPv6 and gigabit internet to the house becoming more
            | common in the US, there is less and less reason to require
            | web servers, data centers, and other third-party concerns.
            | These are incredibly expensive, and so long as the platform
            | becomes progressively more hostile to its users, emerging
            | alternatives with superior on-demand capabilities will become
            | more appealing.
        
           | bayesian_horse wrote:
           | > "Bloat is generally due to incompetence"
           | 
           | What utter arrogance. Developers are more often than not
           | driven by time constraints and demands from higher up the
           | food chain. What you call "bloat" is often not just the
           | easier solution, but the only one.
           | 
           | Of course, "bloat" can come from neglecting the non-
           | bloatiness. But to call this mainly a function of competence
           | is in my opinion misguided.
        
             | austincheney wrote:
              | Perhaps it's a matter of perspective. The hand-holding many
              | people require just to put text on a webpage seems pretty
              | entitled considering what developers are paid. This
              | especially rings true given the hostility and crying that
              | appear when that hand-holding goes away. It's like
              | withholding sugar from a child and watching an emotional
              | tantrum unfold, as if you had ripped their heart out; and
              | of course the child obviously believes they are a tortured
              | victim of an arrogant oppressor.
             | 
             | Stepping back from the nonsense I believe this is a
             | training disparity.
        
           | Chris2048 wrote:
            | A possible solution: less general-purpose javascript and more
            | microformats, with the executable code in the browser.
            | 
            | That way it should be possible to turn off a lot of the
            | garbage that is generic JS. Of course, then the problem
            | becomes websites built to not work if you don't have that
            | stuff. I suppose the solution is a protocol where it's not
            | possible to know, i.e. any server-side state is disallowed,
            | so that you cannot track whether users have seen your ads, or
            | such a response cannot be proven.
        
           | specialist wrote:
           | Yes and:
           | 
           | > _The bloat and slowness is generally due to incompetence._
           | 
           | I'm sure this is true.
           | 
           | I'm also reminded of "office automation". Conventional wisdom
           | was that new technologies would reduce the use of paper.
           | However, for decades, paper usage went up, and it took a long
           | time for the reduction to happen.
           | 
           | Curious, no?
           | 
           | Was increased paper usage an instance of Jevons Paradox?
           | https://en.wikipedia.org/wiki/Jevons_paradox
           | 
           | I have questions.
           | 
           | So then why did paper usage eventually plummet?
           | 
            | Given a long enough time frame, is Jevons Paradox a phase?
            | 150 years after Jevons' book The Coal Question, coal
            | consumption is finally tanking. Decades after the start of
            | data processing and office automation, paper usage finally
            | tanked.
           | 
           | Is this generalizable?
           | 
            | Clearly software bloat (and by extension web page bloat) is
            | catalyzed by better tools. Just as the democratization of
            | word processing (et al.) begat more paper usage, IDEs (et
            | al.) begat more code production.
           | 
           | If there is a downside slope to Jevons "rebound effect"
           | (efficiency -> lower cost -> higher consumption), what could
           | it look like for software bloat? What are some possible
           | causes?
           | 
           | For coal and paper, it was displacement by even cheaper
           | alternatives.
           | 
           | What's cheaper than large, slow web pages? Or conversely, how
           | do we raise the cost of that bloat?
           | 
           | Making a huge leap of reasoning:
           | 
           | My optimistic self hopes that the key resource is attention
           | (people's time). Social media values each person's eyeballs
           | at $100/yr (or whatever). So all the nominal costs of all
           | those bloated web pages is pretty cheap.
           | 
           | My hope is the displacement of software bloat will be somehow
           | related to maximizing people's cognitive abilities, making
           | better use of attention. So replace web surfing, doom
           | scrolling, and the misc opiates of the masses with whatever's
           | next.
           | 
            | This hope is strongly rooted in Clay Shirky's works Cognitive
            | Surplus and Here Comes Everybody.
           | 
           | Thanks for reading this far. To wrap this up:
           | 
           | I regard software bloat as a phase, not inevitable.
           | 
           | I imagine a futureperfect media ecosystem not dependent on ad
           | supported biz models, the primary driver for web page bloat.
           | 
           | I have no idea if people smarter than me have studied the
           | tail end of Jevon's Paradox. Or even how much predictive
           | power the Paradox has at all.
           | 
           | I have no idea what the displacement may look like. Maybe
           | patronage and subscriptions. Or maybe universal basic income,
           | so amateurs can self produce and publish; versus "platforms"
           | exploiting user generated content and collaborative editing.
           | 
           | A bit of a scrambled thesis, I know. I'm writing to
           | understand, sound out a new idea. Thanks for your patience.
        
         | jbergstroem wrote:
          | You have $N hosts and CDNs in the world and $M web developers.
         | If you can affect $N that's a pretty big win for every web
         | developer, not just the one that has enough experience to
         | understand how and why css/js frameworks are ultimately as
         | "easy and fast" as the marketing page said. I'm sure the trend
         | of web performance and best practices will improve as tooling
         | to showcase these issues get easier and better by the day; but
         | making the car more efficient is a different problem to a
         | wider, more stable and ultimately faster motorway.
        
           | usrbinbash wrote:
           | There are cities that tried to solve the problems of having
           | constant traffic congestion on all 4 lanes by demolishing
           | buildings and building 2 more lanes.
           | 
           | The result: traffic congestion on 6 lanes.
        
             | rjbwork wrote:
             | A fairly good point. Modern adtech is kind of the induced
             | demand of internet bandwidth - that is, when new bandwidth
             | is available, adtech will grow to fill that bandwidth. I
             | guess this is also analogous to the fact that our computers
             | are WAYYY faster now than 20 years ago, but user experience
             | of application performance is roughly similar due to the
             | tools used to build applications eating up a lot of that
             | performance to make it easy to build them (electron, WPF,
             | QT, browsers, etc).
        
             | Nullabillity wrote:
             | If congestion stayed the same, then ~1.5x as many people
             | (slightly fewer in reality, since each lane _does_
             | introduce some overhead) were able to get where they
             | wanted (still a win!), or there was a bottleneck somewhere
             | else in the system (so go address that as well!).
        
               | ohgodplsno wrote:
               | No, merely that other transit systems have been abandoned
               | in favor of your car lanes, which makes them an even
               | worse option. We always scale usage relative to what is
               | available, whether that is highway lanes or energy use.
        
               | distances wrote:
               | Interestingly by Braess's paradox adding more lanes can
               | increase congestion, and removing lanes can speed up the
               | traffic.
               | 
               | https://en.wikipedia.org/wiki/Braess%27s_paradox
        
         | jameshart wrote:
         | Can you be specific - Which sites are you actually complaining
         | about here?
         | 
         | This site, which we're on, delivers pages of comments (this
         | very comment thread is around 50K of comment text) almost
         | instantly.
         | 
         | The Washington Post homepage loads in 600ms for me. 20K of
         | text. Images load within about another 500ms. Their lead
         | article right now is one of those visual feature ones with
         | animations that trigger on scroll, so after the text first
         | appears after 600ms, it takes a second or so longer for the
         | layout to finish. But a routine text article page, I have a
         | scrollable, readable text view within 500ms.
         | 
         | CNN.com, a fraction slower, for a little less text, but a lot
         | more pictures. When I click into one of their articles, within
         | a few seconds, it starts streaming me 1080p video of their news
         | coverage of the article. Imagine that on your 'fast functional
         | 90s and early 00s' web.
         | 
         | Or let's pick a technical resource. go.dev's landing page? No
         | noticeable delay in loading for me, and that page has an
         | embedded form for trying out Go code.
         | 
         | Or reactjs.org, say? Loads up in 200ms for me, and then
         | subsequent navigations on site among documentation pages take
         | about 60ms.
         | 
         | How about a government resource? irs.gov? The homepage maybe
         | loads a little slower than I'd like, but FAQ pages are pretty
         | responsive for me, with text appearing almost as soon as I've
         | clicked the link.
         | 
         | I'm not cherrypicking, these were the first few websites that
         | came to mind to test to see if things are really as bad as
         | you're portraying. And my impression is... you know? It's not
         | that bad?
         | 
         | I am not arguing that there aren't bad sites out there, but we
         | need to stop pretending that the entire world has gone to hell
         | in a handcart and the kids building websites today don't care
         | about optimization. Substantive websites deliver substantive
         | content efficiently and effectively, with a level of
         | functionality and visual flair that the 90s/00s web could only
         | have dreamed of.
        
           | usrbinbash wrote:
           | >Which sites are you actually complaining about here?
           | 
           | Every website which loads way more content than its use case
           | justifies. If the high quality of my device and ubiquitous
           | broadband is the only reason that something appears to be
           | loading fast, that's not good.
           | 
           | >Or let's pick a technical resource. go.dev's landing page?
           | No noticeable delay in loading for me
           | 
           | Among other things, that's because this is a website loading
           | what it has to, instead of everything and the kitchen sink.
           | 885kB in total, and most of that are the images of "companies
           | using go".
        
             | jameshart wrote:
             | Yes, stipulated - bad websites are bad.
             | 
             | But I just picked a few obvious high profile information-
             | oriented sites and none of them had that problem. So...
             | which websites are the bad ones?
             | 
             | Are you looking at the first site that comes up when you
             | search for 'chocolate chip cookie recipes' and holding that
             | up as an example that the web is a disaster area?
             | 
             | That's like picking up a gossip magazine from the rack in
             | the supermarket checkout line and complaining that American
             | literature has really gone to the dogs.
        
           | password4321 wrote:
           | Is this with or without blocking ads?
        
             | jameshart wrote:
             | I run an adblocker, like about 27% of web users, sure.
             | 
             | Do you get substantively different experiences on those
             | sites if you don't?
        
               | password4321 wrote:
               | On the commercial news sites: yes.
        
       | mojuba wrote:
       | So servers will now have to support HTTP/3, HTTP/2, HTTP/1.1,
       | HTTP/1.0 and HTTP/0.9. Is added complexity worth it though?
        
         | alimbada wrote:
         | I'm not familiar with the specs but your question implies that
         | each spec is independent of the others whereas I would assume
         | that e.g. if you support HTTP/3 then you inherently support all
         | previous specs too.
        
         | remus wrote:
         | It's not a requirement to support all of them. Unless there's
         | some radical change in the landscape of the web, I imagine
         | clients will continue to support HTTP/1.1 for a long time to
         | come.
        
           | mojuba wrote:
           | At least HTTP/1.x is the default for command line testing and
           | debugging. I'm a bit worried we'll lose testability of
           | servers as we add more and more layers on top of 1.x. Say it
           | works with `curl` for me, what are the guarantees it will
           | work the same way over HTTP/2 and /3 multiplexers?
        
             | jrockway wrote:
             | curl already supports HTTP/2 and HTTP/3.
             | 
             | It's true that you miss out on "telnet example.com 80" and
             | typing "GET /", but that honestly hasn't been valid since
             | HTTP/1.1 replaced HTTP/1.0. To some extent this all sounds
             | to me like "I hate screws, they render my trusty hammer
             | obsolete". That's true, but there are also great screw
             | driving tools.
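             | 
             | If you want to check what actually got negotiated, a rough
             | Go sketch (example.com is just a placeholder; the stock
             | net/http client speaks HTTP/1.1 and HTTP/2, not HTTP/3):
             | 
             |     package main
             | 
             |     import (
             |         "fmt"
             |         "net/http"
             |     )
             | 
             |     func main() {
             |         // The default transport negotiates HTTP/2 over
             |         // TLS via ALPN when the server supports it.
             |         resp, err := http.Get("https://example.com/")
             |         if err != nil {
             |             panic(err)
             |         }
             |         defer resp.Body.Close()
             |         // Proto reports the version that was actually
             |         // used, e.g. "HTTP/1.1" or "HTTP/2.0".
             |         fmt.Println(resp.Proto, resp.Status)
             |     }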
        
               | otabdeveloper4 wrote:
               | The only difference between HTTP/1.0 and HTTP/1.1 is
               | having to send the 'Host:' header.
        
               | NoGravitas wrote:
               | You could still do valid HTTP/1.1 over nc or telnet, or
               | 'openssl s_client' if you needed TLS; it was just a
               | matter of you knowing the protocol, and the minimum
               | required headers for your use-case. (That said, curl is
               | generally a better choice.)
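               | 
               | For the curious, the same thing scripted rather than
               | typed: a minimal Go sketch (example.com is a stand-in;
               | for TLS you'd dial with crypto/tls instead):
               | 
               |     package main
               | 
               |     import (
               |         "io"
               |         "net"
               |         "os"
               |     )
               | 
               |     func main() {
               |         // Hand-rolled HTTP/1.1: Host is the only
               |         // required header; Connection: close makes
               |         // the server hang up when it's done.
               |         conn, err := net.Dial("tcp", "example.com:80")
               |         if err != nil {
               |             panic(err)
               |         }
               |         defer conn.Close()
               |         req := "GET / HTTP/1.1\r\n" +
               |             "Host: example.com\r\n" +
               |             "Connection: close\r\n\r\n"
               |         io.WriteString(conn, req)
               |         io.Copy(os.Stdout, conn)
               |     }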
        
               | Dylan16807 wrote:
               | > or 'openssl s_client' if you needed TLS
               | 
               | Which sites tend to need.
               | 
               | I'm sure someone can make an equivalent to that. Maybe
               | even just a wrapper around curl...
        
         | illys wrote:
         | Clients will have to; servers may stick to a version as long
         | as that specific version is not discarded by the organization
         | running the standard.
        
         | vlmutolo wrote:
         | I imagine the industry will settle on H/1.1 for backward
         | compatibility and simplicity, and H/3 for performance.
        
       | eric_trackjs wrote:
       | Author here. I am seeing a lot of comments about how the graphs
       | are not anchored at 0. The intent with the graphs was not to
       | "lie" or "mislead" but to fit the data in a way that was mostly
       | readable side by side.
       | 
       | The goal was to show the high-level change, in a glanceable way,
       | not to get into individual millisecond comparisons. However, in
       | the future I would pick a different visualization I think :)
       | 
       | The benchmarking has also come under fire. My goal was just to
       | put the same site/assets on three different continents and
       | retrieve them a bunch of times. No more, no less. I think the
       | results are still interesting, personally. Clean room benchmarks
       | are cool, but so are real world tests, imo.
       | 
       | Finally, there was no agenda with this post to push HTTP/3 over
       | HTTP/2. I was actually skeptical that HTTP/3 made any kind of
       | difference based on my experience with 1.1 to 2. I expected to
       | write a post about "HTTP/3 is not any better than HTTP/2" and was
       | frankly surprised that it was so much faster in my tests.
        
         | kzrdude wrote:
         | The missing part of the axis, down to 0, is just 1/9th of the
         | current length, so I think it's absolutely the wrong trade-off
         | to cut the y-axis.
        
         | rhplus wrote:
         | Adding an 'axis break' is a great way to focus in on the range
         | of interest while also highlighting the fact that it's not
         | zero-based.
        
         | aquadrop wrote:
         | Thanks for the article. But the goal of the graphs should be
         | to show the level of change and let the graph speak for
         | itself, whether it's high or low. If they were anchored at 0
         | it would actually let you see the visual difference, and for
         | me personally it would be the "way that was mostly readable
         | side by side".
        
         | maxmcd wrote:
         | I wouldn't interpret those comments as accusations. It's in
         | all our best interests to critique possibly misleading graphs,
         | even when it's done unintentionally.
        
         | remram wrote:
         | You might also want to _actually_ put HTTP 1/2/3 side-by-side
         | in each graph, and separate graphs by use case. Rather than the
         | current visualization, putting use cases side by side, and HTTP
         | 1/2/3 in different graphs.
         | 
         | edit: like this: https://imgur.com/a/7Gvq59j
        
         | vlmutolo wrote:
         | > However, in the future I would pick a different visualization
         | I think
         | 
         | I think the box plots were a good choice here. I quickly
         | understood what I was looking at, which is a high compliment
         | for any visualization. When it's done right it seems easy and
         | obvious.
         | 
         | But the y-axis really needs to start at 0. It's the only way
         | the reader will perceive the correct relative difference
         | between the various measurements.
         | 
         | As an extreme example, if I have measurements [A: 100, B: 101,
         | C: 105], and then scale the axes to "fit around" the data
         | (maybe from 100 to 106 on the y axis), it will seem like C is
         | 5x larger than B. In reality, it's only 1.05x larger.
         | 
         | Leave the whitespace at the bottom of the graph if the relative
         | size of the measurements matters (it usually does).
        
           | remus wrote:
           | I think these choices are more context specific than is often
           | appreciated. For example
           | 
           | > if I have measurements [A: 100, B: 101, C: 105], and then
           | scale the axes to "fit around" the data (maybe from 100 to
           | 106 on thy y axis), it will seem like C is 5x larger than B.
           | In reality, it's only 1.05x larger.
           | 
           | If you were interested in the absolute difference between the
           | values then starting your axis at 0 is going to make it hard
           | to read.
        
             | amenod wrote:
             | It is however very rare that absolute differences matter;
             | and even when they do, the scale should (often) be fixed.
             | For example the temperatures:
             | 
             | [A: 27.0, B: 29.0, C: 28.0]
             | 
             | versus:
             | 
             | [A: 27.0, B: 27.2, C: 26.9]
             | 
             | If scale is fit to the min and max values, the charts will
             | look the same.
             | 
             | Still, as a rule of thumb, when the Y axis doesn't start
             | at 0, the chart is probably misleading. It is very rare
             | that the absolute size of the measured quantity doesn't
             | matter.
        
               | sandgiant wrote:
               | Indeed. And if they don't you are probably better off
               | normalizing your axis anyway.
        
               | Tyr42 wrote:
               | Yeah, you should graph both starting at 0K right? You
               | wouldn't want to mislead people into thinking something
               | at 10C is ten times hotter than something at 1C.
        
           | phkahler wrote:
           | >> It's the only way the reader will perceive the correct
           | relative difference...
           | 
           | Every day, the stock market either goes from the bottom of
           | the graph to the top, or from the top all the way to the
           | bottom. Sometimes it takes a wild excursion covering the
           | whole graph and then retreats a bit toward the middle. Every
           | day. Because the media likes graphs that dramatize even a 0.1
           | percent change.
        
             | KarlKemp wrote:
             | No, the media just happens to sometimes share OP's intent:
             | to show a (small) absolute change. That change may or may
             | not be as dramatic as the graph suggests in both
             | visualizations: measured in Kelvin, your body temperature
             | increasing by 8 K looks like a tiny bump when you anchor it
             | at absolute zero. "You" being the generic "you", because at
             | 47 deg C body temperature, the other you is dead.
             | 
             | It will be visible if you work in Celsius, a unit that is
             | essentially a cut-off Y axis to better fit the origin
             | within the domains we use it for.
        
           | eric_trackjs wrote:
           | Agreed. Next time I'll make the text and other things a
           | little larger too (the real graphs are actually quite large,
           | I had to shrink them to fit the article formatting.) I'd
           | already spent so much time on the article I didn't want to go
           | back and redo the graphs (I didn't really think too many
           | people would read it - it was a big surprise to see it on HN)
        
           | KarlKemp wrote:
           | This notion about cut-off y-axes is the data visualization
           | equivalent of "correlation is not causation": it's a valid
           | point that's easily understood, so everyone latches on to it
           | and then uses it to prove their smartitude, usually with the
           | intonation of revealing grand wisdom.
           | 
           | Meanwhile, there are plenty of practitioners who aren't
           | oblivious to the argument, but rather long past it: they know
           | there are situations where it's totally legitimate to cut the
           | axis. Other times, they might resort to a logarithmic axis,
           | which is yet another method of making the presentation more
           | sensitive to small changes.
        
             | vlmutolo wrote:
             | There are plenty of instances where it's appropriate to use
             | a y-axis that isn't "linear starting at zero." That's why I
             | specified that I was only talking about ways to represent
             | relative differences (i.e. relative to the magnitude of the
             | measurements).
             | 
             | In this case, when we're measuring the latency of requests,
             | without any other context, it's safe to say that relative
             | differences are the important metric and the graph should
             | start at zero.
             | 
             | So while it's true that this isn't universally the correct
             | decision, and it's probably true that people regurgitate
             | the "start at zero" criticism regardless of whether it's
             | appropriate, it _does_ apply to this case.
        
         | marcos100 wrote:
         | The charts are ok for the purpose of visualizing the
         | performance of the protocols. At least for me, they are side-
         | by-side with the same min and max values and are easy to
         | compare. The purpose is clear and starting from zero adds
         | nothing of value.
         | 
         | People who think the bottom line is zero don't know how to read
         | a chart.
         | 
         | What is maybe missing is a table with the statistics, so the
         | numbers can be compared, perhaps with a latency number.
        
         | Railsify wrote:
         | But they all start at the same anchor value, I was immediately
         | able to interpret them and did not feel misled.
        
         | brnt wrote:
         | > I am seeing a lot of comments about how the graphs are not
         | anchored at 0.
         | 
         | Personal preference: for large offsets that makes sense. For
         | small ones (~10% of max here) it seems unnecessary, or, to a
         | suspicious mind, meant to hide something ;)
        
           | ReactiveJelly wrote:
           | Some of the India offsets are huge.
           | 
           | And the numbers are too small to read, and reading numbers on
           | a graph is mixing System 1 and System 2 thinking, anyway.
           | 
           | I agree that the graphs would be better and still impressive
           | even anchored to 0
        
           | eric_trackjs wrote:
           | Yep, I didn't even think about the suspicious angle when I
           | did it. Mostly I was fiddling with how to draw box plots in
           | D3 and that's what came out. Next time I will ensure a 0
           | axis!
        
         | morrbo wrote:
         | I found the article very useful regardless so thank you
        
         | posix_me_less wrote:
         | Could it be simply because UDP packets may be treated with
         | higher priority by all the middleman machines? UDP is used by
         | IP phones, video conferences, etc.
        
           | digikata wrote:
           | Because of the use of UDP, I wonder if the variation could
           | end up being wider. There are some large outliers on the chart
           | (though that could be an implementation maturity issue). Also
           | I wonder if the routing environment will continue to be the
           | same in terms of prioritization if http/3 becomes more
           | common. There might be some motivation to slow down http/3 to
           | prioritize traditional udp real-time uses.
        
       | heftig wrote:
       | I've deactivated HTTP/3 in Firefox as the bandwidth from
       | CloudFlare to my provider (German Telecom) gets ridiculously bad
       | (downloading a 6 MB file takes 25 s instead of 0.6 s) during peak
       | times, and I only see this when using HTTP/3 over IPv4, all other
       | combinations stay fast.
        
       | twsted wrote:
       | I don't see any graph in Safari
        
       | jeroenhd wrote:
       | The comparison isn't entirely fair because there's no reason not
       | to use TLS 1.3 on HTTP 1 or HTTP/2 connections. The 0-RTT
       | advantage of QUIC is also available to those protocols, which is
       | one of the things the article claims make HTTP/3 faster.
       | 
       | The methodology section also doesn't say if 0-RTT was actually
       | enabled during testing. I'd argue that any system or website with
       | an admin interface or user account system should not enable 0-RTT
       | without very strict evaluation of their application protection
       | mechanisms, making it useless for many API servers and data
       | sources. It's fine for static content and that can help a lot,
       | but with the benefit of multiplexing I'm not sure how useful it
       | really is.
        
         | johncolanduoni wrote:
         | This is true, though it's worth mentioning that 0-RTT on TLS
         | 1.3 + HTTP/2 still has one more roundtrip (for the TCP
         | connection) than 0-RTT HTTP/3. Plus even if you don't enable
         | 0-RTT on either there's still a one roundtrip difference in
         | favor of HTTP/3, since it combines the (equivalent of) the TCP
         | establishment roundtrip and the TLS handshake roundtrip. That's
         | the real advantage in connection establishment.
        
         | lemagedurage wrote:
         | Yes, TLS1.3 cuts out a RTT compared to TLS1.2, but QUIC cuts
         | out another RTT by combining the TCP and TLS handshakes into
         | one.
         | 
         | For a full handshake:
         | 
         |   TCP + TLS1.2: 3 RTT
         |   TCP + TLS1.3: 2 RTT
         |   QUIC including TLS1.3: 1 RTT
         | 
         | And for subsequent connections:
         | 
         |   TCP + TLS1.3: 1 RTT
         |   QUIC including TLS1.3: 0 RTT (but no replay attack protection)
        
       | tambourine_man wrote:
       | What's great about HTTP 1.1 is its simplicity. I mean, you can
       | explain it to a complete newbie and have that person write a
       | functioning implementation in a few lines of shell script, all in
       | one afternoon.
       | 
       | In fact, you can probably figure it out by yourself just by
       | looking at what goes across the wire.
       | 
       | And yet, it runs the whole modern world. That's beautiful. I
       | think simplicity is underrated and it's something I really value
       | when choosing the tools I use.
        
         | tambourine_man wrote:
         | I'm pretty sure I'll never understand QUIC completely in my
         | lifetime. I do know HTTP 1.1 through and through (it's really
         | easy).
         | 
         | The fact that HTTP 1.1 speeds are even comparable, let alone
         | faster in some situations, should at least tip the scale in its
         | favor.
        
           | WorldMaker wrote:
           | > I'm pretty sure I'll never understand QUIC completely in my
           | lifetime.
           | 
           | Do you understand how to build TCP packets by hand? A lot of
           | the confusion between QUIC and HTTP/3 is intentional because
           | it is somewhat merging the layers that used to be entirely
           | separate.
           | 
           | All the examples elsewhere of "I can telnet/netcat to an
           | HTTP/1.1 server and mostly do the thing" avoid the obvious
           | fact that telnet/netcat is still doing all the work of
           | splitting the messages into TCP packets and following TCP
           | rules.
           | 
           | Presumably QUIC and HTTP/3 are separate enough to still
           | warrant the idea of a telnet/netcat for QUIC that takes care
           | of the low-level details, and then you could write HTTP/3 on
           | top of those apps a little more easily, in a similar fashion.
           | Though maybe not, given there is no rich history of telnet-
           | related protocols on top of QUIC the way there is for TCP,
           | and HTTP/3 is still currently the only protocol targeting
           | QUIC today.
        
           | arpa wrote:
           | obscure standards mean less possibility that someone else
           | without google-scale resources will implement their own
           | client/server.
        
             | ocdtrekkie wrote:
             | And that is one of the key "benefits" of HTTP/3 if you're a
             | Google-scale company.
        
             | bayesian_horse wrote:
             | Except there are already several such implementations and
             | nobody is going to deprecate older HTTP versions.
        
               | giantrobot wrote:
               | Chrome has an overwhelming portion of the web browser
               | installed base. It's full of dark patterns and scary
               | warnings about "dangerous" things like unencrypted HTTP.
               | I don't have a lot of faith that they're _not_ going to
               | disable older HTTP versions at some point.
        
           | bawolff wrote:
           | > The fact that HTTP 1.1 speeds are even comparable, let
           | alone faster in some situations, should at least tip the
           | scale in its favor.
           | 
           | Are these realistic web browsing scenarios? Because that
           | seems false. For most websites http/2 is a pretty significant
           | performance win.
        
           | cogman10 wrote:
           | Are you sure you "know" HTTP 1.1 through and through?
           | 
           | How much do you understand about gzip or brotli? How much do
           | you know about TLS? Do you understand all of the
           | functionality of TCP? Network congestion and checksumming?
           | 
           | QUIC includes compression, encryption, and transport into the
           | standard which is why it's more complicated. Just because
           | Http 1.1 doesn't include those parts explicitly in the
           | standard, doesn't mean they are unused.
           | 
           | Http 1.1 is only "easy" because it sits atop a sea of
           | complexity that rarely faces the same critiques as Quic does.
        
         | ReactiveJelly wrote:
         | In that sense, HTML also "runs the whole modern world".
         | 
         | If you're talking about HTTP 1.1 and not thinking about the
         | complexity of TLS, you're ignoring more than half the LoC in
         | the stack.
         | 
         | I like QUIC because I was never going to roll my own TLS
         | anyway, so I might as well have TLS and streams in the same
         | package.
        
         | throw0101a wrote:
         | > _What's great about HTTP 1.1 is its simplicity._
         | 
         | Perhaps I'm old fashioned, but I like being able to debug
         | things with telnet/netcat. (Or _openssl s_client_.)
        
           | Spivak wrote:
           | Isn't this mostly a complaint about tooling? If getting the
           | text form of a message just meant piping to a command would
           | it be fine?
        
         | username223 wrote:
         | > What's great about HTTP 1.1 is its simplicity.
         | 
         | This is an underappreciated feature of the old protocols. You
         | can telnet into an HTTP, SMTP, POP, or even IMAP server and
         | talk to it with your keyboard (using stunnel for the encrypted
         | versions), or dump the traffic to learn the protocol or debug a
         | problem. Good luck doing that with QUIC.
        
           | bawolff wrote:
           | For the second part, using wireshark is really not that big a
           | jump in skill level and an easy way to debug the protocol
           | (provided you get past the encryption)
        
           | bitwize wrote:
           | Might want to try using tools from this century. We live in a
           | post-curl, post-Wireshark world. There's no need to create
           | excess waste heat parsing and unparsing "plain text" (lol,
           | there is no such thing) protocols anymore; and anyways with
           | SSL virtually mandatory it's not like it stays human readable
           | over the wire. Fast, efficient, open binary protocols ftw!
        
         | notriddle wrote:
         | Not really. HTTP/1.1 already has enough complexity in it that
         | most people can't implement it securely (a legitimately simple
         | protocol doesn't have desync bugs).
         | 
         | https://portswigger.net/research/http-desync-attacks-request...
        
           | tambourine_man wrote:
           | Sure. Now imagine the edge cases and security flaws we will
           | be discovering in HTTP 2 for the next decades. Fun.
        
             | notriddle wrote:
             | Actually, a lot less. HTTP/2 frames are done as a binary
             | protocol with fixed offsets, and instead of having three
             | different transfer modes (chunked, keep-alive, and one-
             | shot), HTTP/2 has only its version of chunked.
             | 
             | The best way to exploit an HTTP/2 server is to exploit the
             | HTTP/1.1 server behind it [1].
             | 
             | [1]: https://portswigger.net/research/http2
        
               | josefx wrote:
               | > The best way to exploit an HTTP/2 server is to exploit
               | the HTTP/1.1 server behind it [1].
               | 
               | The exploit mentioned is the http/2 front end getting
               | confused by the amount of responses it gets because it
               | never checked how many http/1.1 messages it was
               | forwarding, it just assumed that the http/1.1 headers
               | were matching the http/2 headers. Now, in defense of
               | http/2, this was completely expected: it was called out
               | in the spec as requiring validation in the case of
               | tunneling, and ignored by every implementation.
        
           | nine_k wrote:
           | I'd say it has enough naivete to be susceptible to desync
           | attacks.
           | 
           | Lax whitespace rules, lax repeating header rules, no advanced
           | integrity checks like checksums, etc. I see these as signs of
           | simplicity.
        
             | RedShift1 wrote:
             | Why would you need checksums? There is already checksumming
             | and error correction going on in the lower layers (TCP,
             | Ethernet, ...).
        
               | tjalfi wrote:
               | The TCP checksum catches some issues but it won't detect
               | everything. Other newer algorithms like CRC32C are much
               | more effective at detecting corrupt data.
               | 
               | The following is an excerpt from Performance of Checksums
               | and CRCs over Real Data[0].
               | 
               |  _The TCP checksum is a 16-bit ones-complement sum of the
               | data. This sum will catch any burst error of 15 bits or
               | less[8], and all 16-bit burst errors except for those
               | which replace one 1's complement zero with another (i.e.,
               | 16 adjacent 1 bits replaced by 16 zero bits, or vice-
               | versa). Over uniformly distributed data, it is expected
               | to detect other types of errors at a rate proportional to
               | 1 in 2^16._
               | 
               | If you're deeply interested in this topic then I would
               | recommend Jonathan Stone's PhD thesis.
               | 
               | [0] https://www.researchgate.net/publication/3334567_Perf
               | ormance...
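               | 
               | As an aside, a CRC-32C is a couple of lines with Go's
               | standard library, just to show the stronger check is
               | cheap to compute:
               | 
               |     package main
               | 
               |     import (
               |         "fmt"
               |         "hash/crc32"
               |     )
               | 
               |     func main() {
               |         data := []byte("payload that crossed the wire")
               |         // CRC-32C (Castagnoli): far better burst-error
               |         // detection than a 16-bit ones-complement sum.
               |         t := crc32.MakeTable(crc32.Castagnoli)
               |         sum := crc32.Checksum(data, t)
               |         fmt.Printf("crc32c: %08x\n", sum)
               |     }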
        
               | nine_k wrote:
               | If you have a content-length header, and a checksum after
               | the declared number of bytes, it becomes way harder to
               | desync a request: prepended data would change the
               | checksum, and an altered length header would make the
               | parser miss the checksum in the stream.
        
               | giantrobot wrote:
               | Lower layer checksums will tell you the data made it
               | across the wire correctly but have no idea if that data
               | was correct in the context of the application layer. Keep
               | in mind HTTP isn't only used between web browsers and
               | servers. A client might use an HMAC to verify the content
               | wasn't MITMed. Lower layer packet checksums won't do that
               | and TLS won't necessarily prevent it either. Even when
               | MITM isn't an issue HTTP can be proxied so a header
               | checksum could verify a proxy didn't do something wrong.
        
               | bawolff wrote:
               | TLS will prevent it in any realistic scenario.
               | 
               | If you have a shared secret to support doing an hmac then
               | you can certainly set up tls correctly.
        
               | giantrobot wrote:
               | TLS only cares about the security of the socket. It
               | doesn't know anything about the _content_ flowing through
               | the socket. Additionally, the TLS connection may not
               | even be directly between the server and client: it may
               | terminate at a proxy on the server side, on the client
               | side, or on both ends. While TCP and TLS can get a
               | message over the Internet correctly and securely, they
               | can't make any guarantees about the contextual validity
               | of that message.
               | 
               | TLS does not obviate application layer checks; it can
               | only complement them.
        
               | cogman10 wrote:
               | I cannot envision a problem that a checksum in the HTTP
               | protocol would fix that TLS wouldn't also fix.
               | 
               | You have to ask yourself "How would this get corrupted"?
               | If the concern is that the data ends up somehow mangled
               | from an external source like a bad router or a loose
               | cable, then the data simply won't decrypt correctly and
               | you'll end up with garbage.
               | 
               | So that leaves you with an application bug. However, an
               | app is just as likely to add a checksum for an invalid
               | body as they are to add one for a valid body.
               | 
               | So, the only fault I could see this fixing is an
               | application which, through corrupt memory or something
               | else, manages to produce a garbage body which fails its
               | checksum but... somehow... is encrypted correctly. I've
               | never seen that sort of problem.
        
               | bawolff wrote:
               | > TLS only cares about the security of the socket. It
               | doesn't know anything about the content flowing through
               | the socket.
               | 
               | Not sure what that means precisely, but if you're
               | stretching definitions that far, the same is probably
               | also true of your application checksums/hmacs
               | 
               | > Additionally the TLS connection may not even be between
               | the server and client but a proxy between the server and
               | client, client and server, or both server and client.
               | 
               | So it's the same as your hmac check. That can also be
               | processed at any layer.
               | 
               | > While TCP and TLS can get a message over the Internet
               | correctly and securely they can't make any guarantees
               | about the contextual validity of that message.
               | 
               | Can anything at the http protocol layer provide that?
        
             | tambourine_man wrote:
             | I think you're arguing for HTTP/1.2 instead
        
           | fulafel wrote:
           | You can implement a server without all those tricky features.
           | Proxies are hard and a design wart if you use them.
           | 
           | Quoting your link it describes this politely, "By itself,
           | this is harmless. However, modern websites are composed of
           | chains of systems, all talking over HTTP. This multi-tiered
           | architecture takes HTTP requests from multiple different
           | users and routes them over a single TCP/TLS connection [...]"
        
           | bawolff wrote:
           | Given that attack only applies to servers, and only when you
           | have proxies in front of your servers, i don't think it
           | disputes the claim that writing a minimal http client is
           | quite simple.
        
         | MayeulC wrote:
         | HTTP/1.1 is so simple that I can just open up netcat and ask
         | for a webpage from a website by hand. And this is not even my
         | field.
         | 
         | Now, you can also build a lot of complexity upon it. And while
         | the simplicity is nice for inspecting and debugging sometimes,
         | in practice it is rarely used, and it's a very complex beast
         | with TLS, SSL, gzip compression and more.
         | 
         | The arguments are similar to binary logs vs text logs. Except
         | that in HTTP's case, it is already littered with binary.
        
           | cogman10 wrote:
           | The case for "http is simple" similarly falls apart when you
           | realize that a lot of tools for diagnosing http streams can
           | just as easily (easier in fact) consume a binary stream and
           | translate it into a human readable form as they can analyze
           | the text form.
           | 
           | We don't complain that we can't read our browser's binary but
           | for some reason everyone is convinced that our transport
           | protocols are different and it is imperative to be able to
           | read it with a tool to translate it.
        
         | Cyph0n wrote:
         | To me, the fact that it _seems_ simple to roll your own HTTP
         | 1.1 implementation is more of a downside than an advantage.
        
       | markdog12 wrote:
       | For anyone interested in HTTP/3 who likes even-handed articles,
       | Robin Marx has an excellent series on Smashing Magazine:
       | https://www.smashingmagazine.com/author/robin-marx/
       | 
       | The second article is about its performance.
        
       | sam_goody wrote:
       | Full kudos to Caddy for being the first out the door with working
       | HTTP/3 support.
       | 
       | Are there downsides to HTTP/3? When HTTP/2 came out there were
       | discussions about why not to enable it.
        
         | Jleagle wrote:
         | I am using it with traefik too.
        
         | buro9 wrote:
         | > Caddy
         | 
         | Benefited from Lucas https://clemente.io/ ... full credit to
         | him, and the various people who worked on the IETF test
         | implementations that then became various production
         | implementations.
         | 
         | I don't credit Caddy with this as they were downstream and just
         | received the good work done. Not that Caddy is bad, but
         | singling them out ignores those that did the hard work.
         | 
         | > Are there downsides to HTTP/3? When HTTP/2 came out there was
         | discussions about why not to enable
         | 
         | 1. HTTP/1 and HTTP/2 were natural rate limiters to your
         | application... can your app cope when all requests arrive in a
         | much shorter time window? The traffic pattern goes from
         | somewhat smooth, to somewhat bursty and spikey. If your assets
         | come from a dynamic endpoint or you have authorization on your
         | endpoint that results in database lookups... you'll want to
         | load test real world scenarios (rough sketch at the end of
         | this comment).
         | 
         | 2. There are amplification potentials when you pass HTTP/3
         | through layers that map to HTTP/2 and HTTP/1 (the optimisations
         | in the small payload to H3 are "undone" and amplify through H2
         | and H1 protocols)
         | 
         | 3. HTTP/3 combined the transport protocol benefits and the HTTP
         | protocol benefits with TLS benefits... all good, but it is
         | harder for DDoS protection as proxies and middle boxes can no
         | longer differentiate as easily the good traffic from the bad,
         | and may not have visibility over what constitutes good at all.
         | 
         | Largely though... worth enabling. And for a good while the
         | last of my points can be mitigated by disabling H3, as you'll
         | degrade to H2 cleanly.
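         | 
         | On the first point, the kind of quick burst test I mean,
         | sketched in Go (your-app.example and the burst size are made
         | up; the point is just to see what happens when requests stop
         | trickling in over a throttled connection):
         | 
         |     package main
         | 
         |     import (
         |         "fmt"
         |         "net/http"
         |         "sync"
         |         "time"
         |     )
         | 
         |     func main() {
         |         // Hypothetical endpoint; fire ~30 requests at once
         |         // to mimic assets arriving over one multiplexed
         |         // connection instead of a few throttled ones.
         |         const url = "https://your-app.example/api/asset"
         |         const burst = 30
         | 
         |         var wg sync.WaitGroup
         |         start := time.Now()
         |         for i := 0; i < burst; i++ {
         |             wg.Add(1)
         |             go func() {
         |                 defer wg.Done()
         |                 resp, err := http.Get(url)
         |                 if err != nil {
         |                     fmt.Println("error:", err)
         |                     return
         |                 }
         |                 resp.Body.Close()
         |             }()
         |         }
         |         wg.Wait()
         |         fmt.Println(burst, "requests took", time.Since(start))
         |     }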
        
           | francislavoie wrote:
           | For sure, as a Caddy maintainer, all credit goes to the
           | contributors of https://github.com/lucas-clemente/quic-go.
           | Early on this was mostly Lucas Clemente, but Marten Seemann
           | has been the primary maintainer for quite a while now,
           | including the transition from QUIC to HTTP/3.
           | https://github.com/lucas-clemente/quic-
           | go/graphs/contributor...
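            | 
            | For anyone who wants to poke at it from Go, the client side
            | is roughly this (a sketch against the quic-go http3 package;
            | names have moved around between releases, and the URL is
            | just any HTTP/3-capable host):
            | 
            |     package main
            | 
            |     import (
            |         "fmt"
            |         "io"
            |         "net/http"
            | 
            |         "github.com/lucas-clemente/quic-go/http3"
            |     )
            | 
            |     func main() {
            |         // http3.RoundTripper plugs into the standard
            |         // net/http client and speaks HTTP/3 over QUIC.
            |         rt := &http3.RoundTripper{}
            |         defer rt.Close()
            |         client := &http.Client{Transport: rt}
            | 
            |         url := "https://cloudflare-quic.com/"
            |         resp, err := client.Get(url)
            |         if err != nil {
            |             panic(err)
            |         }
            |         defer resp.Body.Close()
            |         body, _ := io.ReadAll(resp.Body)
            |         fmt.Println(resp.Proto, len(body), "bytes")
            |     }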
        
         | jeroenhd wrote:
         | I know of at least one ISP that plain blocks outgoing UDP/443
         | or at least messes with the packets, so I wouldn't rely on its
         | availability.
         | 
         | I'm also hesitant about HTTP/2 and 3 because of their
         | complexity, but if you trust the systems and libraries
         | underlying your server I don't see any problem in activating
         | either. Just watch out with features like 0-RTT that can have a
         | security impact and you should be fine.
        
           | jillesvangurp wrote:
            | It will likely just fall back to http on networks like that.
           | Shitty ISPs and corporate networks are why we need fallbacks
           | like that. I would suggest using a VPN if you are on such a
           | network.
           | 
           | The only downside with HTTP 2 & 3 is software support. If you
           | are in Google, you probably already use it just because the
           | load balancer supports it. With self hosted, you are
           | dependent on whatever you are using adding support. I
           | remember playing with SPDY way back on nginx for example. It
           | wasn't that hard to get going and it made a difference for
           | our mobile users (who are on shitty networks by default).
           | 
           | With anything self hosted, security is indeed a key concern;
           | especially if you are using alpha versions of new
           | functionality like this which is what you would be doing
           | effectively.
        
         | francislavoie wrote:
         | Caddy maintainer here - we still have it marked experimental,
          | not turned on by default. There are still some bugs. The actual
         | HTTP/3 implementation comes from https://github.com/lucas-
         | clemente/quic-go which essentially just has one maintainer at
         | the moment. I'm hoping the Go HTTP team throws some effort on
         | HTTP/3 sooner rather than later.
        
         | ardit33 wrote:
         | We will know over time. HTTP/3 is fixing all shortcomings of
         | HTTP/2, which was pretty flawed. It uses UDP though, and UDP
         | itself can be blocked by many firewalls, and folks/companies
         | have to update firewalls and routers for it to work properly.
         | So, that's its major weakness (eg. some workplaces block all
         | egress UDP packets).
         | 
         | https://news.ycombinator.com/item?id=27402222
         | 
         | As the article says, HTTP3 is pretty fast when there are high
         | latencies/high loss scenarios. It will be interesting to see
         | how it performs in a mobile setting. 4G and 5G networks have
         | their own/specific optimizations that are different than a
         | desktop world. eg. http keep alives were never respected, etc.
         | (connections would be closed if there was no continuous
         | activity). But, given the test, it 'should' be much faster than
         | http 1.1
         | 
          | One thing is for sure: HTTP 1.1 is going to still be used, even
         | 100 years from now. It is like FM Radio, which is both good
         | enough for most cases, and very simple and convenient at the
         | same time.
        
           | josefx wrote:
           | I can't wait for all the HTTP/3 -> HTTP/2 -> HTTP/1 tunneling
           | and other exploits that will appear in all the rushed
           | implementations. We already saw how that worked with HTTP/2
           | and people tried to rush OpenSSL into adopting an
           | experimental implementation of QUIC while Google was still
            | hashing out the spec.
           | 
           | I would suggest disabling HTTP/3 for the next 10 or so years
           | and give everyone else the chance to burn through the
           | exploits.
        
             | johncolanduoni wrote:
             | Thankfully HTTP/3 didn't change request/header semantics
             | much from HTTP/2, at least nowhere near the level of the
             | HTTP/1 -> HTTP/2 changes.
        
           | ithkuil wrote:
           | Do you have a link at hand describing the main shortcomings
           | of http/2 (that http/3 fixes)?
        
       | bsjks wrote:
       | It's 2021 and 80% of the load time is spent generating the page
       | because of slow backend languages such as Python and then 20% of
       | the load time is spent compiling and executing the frontend
       | Javascript. I am sceptical that these improvements will even move
       | the needle.
        
         | progx wrote:
         | In 2021 we use caches.
        
           | bsjks wrote:
           | "In 2021 we present outdated information because we can't
           | stop choosing bad backend languages"
        
             | futharkshill wrote:
             | alternative: In 2021 we still use C et al. for our backend
             | server, and we get hacked every single day. If I am going
             | to leave a wide open door to my house, I at least want
             | confidence that the house is not made out of cardboard
        
               | bsjks wrote:
               | That's disingenuous. There are languages like php or
               | JavaScript that are much much faster than Python and that
               | don't require you to give up the keys to your house.
        
               | Dylan16807 wrote:
               | Is it any more of an exaggeration than your post?
               | 
               | Also pypy is fast, and the speed of php also heavily
               | depends on version. Not that backend speed even makes a
               | difference that much of the time. 3ms vs 8ms won't
               | matter.
        
             | interactivecode wrote:
             | So what language do you use for cranking out enterprise
             | crud apps and getting actual work done?
        
         | lucian1900 wrote:
         | Python is unnecessarily slow, for sure. But I have rarely had
         | to deal with endpoints slow because of Python, as opposed to
         | slow because of unfortunate IO of some sort.
        
         | jeroenhd wrote:
         | For the few static websites remaining, this is a great
         | advancement. As long as you don't need to deal with user input,
         | this can be achieved easily for things like blogs, news
         | websites and ads.
         | 
         | Of course, these protocol improvements mostly benefit companies
         | the size of Google. Smaller, independent hosts probably won't
         | get much out of improving the underlying transport outside of a
         | few edge cases.
         | 
         | This blog page loaded very quickly for me compared to most
         | websites, though I don't know how much of that is in the
         | network part and how much of it is because of optimized
         | HTML/CSS/JS.
        
       | favourable wrote:
       | Anyone know if this breaks on legacy hardware running legacy
       | browsers? Yes people largely use modern hardware, but there's
       | always eccentric folk with old hardware who like to browse the
       | web on old browsers (for whatever reason).
        
         | lemagedurage wrote:
         | Chrome supports HTTP/3 since November 2020.
         | 
         | https://caniuse.com/http3
         | 
         | Typically servers support multiple HTTP versions.
        
           | shakna wrote:
           | But not supported by default by any mobile browser except
           | Android's Chrome, making this more of a bleeding edge type
           | situation on the client side of things.
        
       | bestinterest wrote:
        | Quick question: if I have a VPS with cloudflare in front of it,
        | where the VPS runs an HTTP/1.1 server, is cloudflare limited to
        | 6 http connections/resources to my server at a time?
        
         | slig wrote:
         | If you configure your static files caching correctly, they will
         | be served directly by Cloudflare in the fastest way possible
         | given the user has a modern browser. You can use
         | http://webpagetest.org/ (or Chrome Dev Tools) and look for `cf-
         | cache-status: HIT` header on the content that should be served
         | directly by CF.
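          | 
          | Or from a quick script; a Go sketch, with the URL standing in
          | for one of your static assets:
          | 
          |     package main
          | 
          |     import (
          |         "fmt"
          |         "net/http"
          |     )
          | 
          |     func main() {
          |         // Placeholder URL; prints whether Cloudflare served
          |         // it from cache (HIT) or went to the origin (MISS).
          |         resp, err := http.Get("https://example.com/app.css")
          |         if err != nil {
          |             panic(err)
          |         }
          |         defer resp.Body.Close()
          |         fmt.Println(resp.Header.Get("Cf-Cache-Status"))
          |     }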
        
         | CasperDern wrote:
         | I do not know about cloudflare's internals, but I believe the 6
         | connections limit is purely a browser thing. So it should be
         | able to do as many connections as it's able to handle.
        
           | bestinterest wrote:
            | Ah okay, that's good news. Is it worth upgrading to HTTP2 on
           | the server side for cloudflare if they are the only consumers
           | of my VPS? They already enable HTTP/3 for clients but I'm
           | wondering if I'm somehow bottlenecking them by not matching
           | my server with HTTP/2 or HTTP/3.
        
             | johncolanduoni wrote:
             | According to their docs Cloudflare won't connect to your
             | origin servers with HTTP/2 or HTTP/3 anyway:
             | https://support.cloudflare.com/hc/en-
             | us/articles/200168076-U...
             | 
             | In general connections between CDNs/reverse proxies and
             | origin servers don't get much benefit from HTTP/2 or
             | HTTP/3. CDNs don't generally care about connection
             | establishment speed or multiplexing to the origin (the main
             | benefits of newer HTTP versions), since they can just
             | create and maintain N long-lived connections if they want
             | to be able to send N concurrent requests. They generally
             | only bother with HTTP/2 to the origin if they need to
             | support gRPC, which has some unusual connection semantics.
        
       | ericls wrote:
        | In practice, you'll have load balancers and reverse proxies
        | somewhere in between your server and your client. Does HTTP/3
        | still make a difference in these cases?
        
       | wcoenen wrote:
       | I am a little confused about the test methodology.
       | 
       | The post clearly explains that the big advantage of HTTP/3 is
       | that it deals much better with IP packet loss. But then the tests
       | are done without inducing (or at least measuring) packet loss?
       | 
       | I guess the measured performance improvements here are mostly for
       | the zero round-trip stuff then. But unless you understand how to
       | analyze the security vs performance trade-off (I for one don't),
       | that probably shouldn't be enabled.
        
         | fulafel wrote:
          | This seems like it could lead to networks worsening packet
          | delivery and causing other protocols to do badly, since so far
          | TCP has needed, and IP networks have delivered, near-zero
          | packet loss.
        
           | ninjaoxygen wrote:
           | I will be interested to see how HTTP/3 fares on Virgin in the
           | UK. I believe the Superhub 3 and Superhub 4 have UDP offload
           | issues, so it's all dealt with by the CPU, meaning throughput
           | is severely limited compared to the linespeed.
           | 
           | Waiting for Superhub 5 to be officially rolled out before
           | upgrading here!
        
             | Intermernet wrote:
             | CPU on client or server? Client is probably negligible
             | overhead while server is something that needs to be dealt
             | with on an ISP level. Linespeed and latency live in
             | different orders of magnitude to client CPU processing /
             | rendering.
        
               | alexchamberlain wrote:
               | I think the op meant on the Virgin supplied router.
        
         | baybal2 wrote:
         | > I am a little confused about the test methodology.
         | 
         | Yeah, I remember Google engineers steamrolled HTTP/2 through
         | W3C with equally flawed "real world data."
         | 
         | In the end it came out that HTTP/2 is terrible in real world,
         | especially on lossy wireless links, but it made CDNs happy,
         | because it offloads them more than the client terminals.
         | 
         | Now Google engineers again want to steamroll a new standard
         | with their "real world data." It's easy to imagine what people
         | think of that.
        
           | josephg wrote:
           | HTTP is maintained by the IETF; not the W3C. You make it
           | sound like Google sends an army to the meetings. They don't.
           | I didn't sit in on the quic meeting but in http there's a
           | small handful of people who ever speak up. They work at lots
           | of places - Facebook, Mozilla, Fastly, etc. And they all fit
           | around a small dining table. I don't think I met any googlers
           | last time I was at httpbis - though I ran into a few in the
           | hallways.
           | 
           | You can join in if you want - the IETF is an open
           | organisation. There's no magical authority. Standards are
           | just written by whoever shows up and convinces other people
           | to listen. And then they're implemented by any person or
           | organisation who thinks they're good enough to implement.
           | That's all.
           | 
           | If you think you have better judgement than the working
           | groups, don't whinge on hacker news. Turn up and contribute.
           | We need good judgement and good engineering to make the
           | internet keep working well. Contributing to standards is a
           | great way to help out.
        
         | KaiserPro wrote:
          | This was my thought as well.
          | 
          | For desktops HTTP2 is mostly ok, possibly an improvement. For
          | mobile it wasn't. I raised this when we were trialling it at
          | $financial_media_company. Alas, the problems were ignored
         | because HTTP2 was new and shiny, and fastly at the time was
         | pushing it. I remember being told by a number of engineers that
         | I wasn't qualified to make assertions about latency, TCP and
         | multiplexing, which was fun.
         | 
         | I still am not convinced by QUIC. I really think that we should
         | have gone for a file exchange protocol, with separate control,
         | data and metadata channels. Rather than this complicated mush
         | of half remembered HTTP snippets transmuted into binary.
         | 
         | We know that, despite best efforts, websites are going to
         | keep growing, in both file size and file count. Let's just
         | embrace that and design HTTP to be a low-latency file
         | transfer protocol, with extra channels for real-time general
         | purpose comms.
        
           | johncolanduoni wrote:
           | Isn't HTTP/3 a low latency file transfer protocol, and
           | WebTransport over HTTP/3 extra channels for real time general
           | purpose comms? It's also worth noting that HTTP/3 actually
           | does use a separate QUIC channel for control.
        
           | ReactiveJelly wrote:
           | I don't understand the "HTTP snippets transmuted into binary"
           | part.
           | 
           | QUIC itself doesn't have the request/response style of HTTP,
           | it doesn't know anything about HTTP, it's just datagrams and
           | streams inside the tunnel.
           | 
           | So you could use QUIC to build a competitor to HTTP/3, a
           | custom protocol with bi-directional control, data, and
           | metadata streams.
           | 
           | In fact, I'm looking forward to when Someone Else writes an
           | SFTP / FTP replacement in QUIC. HTTP is already a better
           | file transfer protocol than FTP (because HTTP has
           | byte-range headers, which AIUI are not well-supported by
           | FTP servers).
           | Think how much we could do if multiple streams and encryption
           | were as simple as importing one library.
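           | 
           | (Byte ranges are literally just a request header. A minimal
           | sketch of a ranged download, assuming Go's standard
           | net/http and a made-up URL:)
           | 
           |   package main
           | 
           |   import (
           |       "fmt"
           |       "io"
           |       "net/http"
           |   )
           | 
           |   func main() {
           |       // Ask for only the first KiB; resuming is just
           |       // another Range request at a new offset.
           |       req, err := http.NewRequest("GET",
           |           "https://example.com/big.iso", nil)
           |       if err != nil {
           |           panic(err)
           |       }
           |       req.Header.Set("Range", "bytes=0-1023")
           | 
           |       resp, err := http.DefaultClient.Do(req)
           |       if err != nil {
           |           panic(err)
           |       }
           |       defer resp.Body.Close()
           | 
           |       // A range-aware server answers 206 Partial Content.
           |       fmt.Println(resp.Status)
           |       n, _ := io.Copy(io.Discard, resp.Body)
           |       fmt.Println("bytes received:", n)
           |   }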
        
             | KaiserPro wrote:
             | > I don't understand the "HTTP snippets transmuted into
             | binary" part.
             | 
             | Yup, my mistake, I meant to say HTTP3 over QUIC.
             | 
             | At a previous company(many years ago), I designed a
             | protocol that was a replacement for aspera. The idea being
             | that it could allow high speed transfer over long distance,
             | with high packet loss (think 130-150ms ping). We could max
             | out a 1gig link without much effort, even with >0.5% packet
             | loss.
             | 
             | In its present form its optimised for throughput rather
             | than latency. However its perfectly possible to tune it on
             | fly to optimise for latency.
        
         | nix23 wrote:
         | Also:
         | 
         | >TLS 1.2 was used for HTTP/1.1 and HTTP/2
         | 
         | >TLS 1.3 was used for HTTP/3.
        
           | tialaramex wrote:
           | The latter is obligatory. HTTP/3 is a way of spelling the
           | HTTP protocol over QUIC, and QUIC is implicitly doing TLS 1.3
           | (or later) cryptography.
           | 
           | So the fairer comparison might be TLS 1.3 under all three,
           | but if you need to upgrade why not upgrade the whole HTTP
           | stack?
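           | 
           | (Pinning TLS 1.3 for the older protocols is typically a
           | one-line change. A minimal sketch, assuming Go's standard
           | library and placeholder cert paths; net/http negotiates
           | HTTP/2 automatically over TLS:)
           | 
           |   package main
           | 
           |   import (
           |       "crypto/tls"
           |       "net/http"
           |   )
           | 
           |   func hello(w http.ResponseWriter, r *http.Request) {
           |       // r.Proto reports the negotiated version,
           |       // e.g. "HTTP/2.0".
           |       w.Write([]byte(r.Proto + "\n"))
           |   }
           | 
           |   func main() {
           |       srv := &http.Server{
           |           Addr:    ":8443",
           |           Handler: http.HandlerFunc(hello),
           |           // One TLS floor for HTTP/1.1 and HTTP/2,
           |           // matching what QUIC already requires.
           |           TLSConfig: &tls.Config{
           |               MinVersion: tls.VersionTLS13,
           |           },
           |       }
           |       // cert.pem / key.pem are placeholder paths.
           |       panic(srv.ListenAndServeTLS("cert.pem", "key.pem"))
           |   }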
        
             | dochtman wrote:
             | QUIC does require TLS 1.3, but as far as I can tell HTTP/2
             | over TLS 1.3 is perfectly viable, and is likely a common
             | deployment scenario.
             | 
             | Upgrading just TLS to 1.3 for most people likely just
             | means upgrading to a newer OpenSSL, which you probably
             | want to do anyway. In many web server deployment
             | scenarios, deploying HTTP/3 is highly likely to be more
             | involved. Apache httpd doesn't support H3 at all, and I
             | don't know whether nginx has it enabled by default these
             | days.
        
               | lazerl0rd wrote:
               | NGINX has an experimental QUIC branch [1], but in my
               | experience it is buggy and currently has lower throughput
               | than using Quiche [2] with NGINX. I do the latter for my
               | fork of NGINX called Zestginx [3], which supports HTTP/3
               | amongst a bunch of other features.
               | 
               | NGINX's QUIC implementation also seems to lack support
               | for QUIC and/or HTTP/3 features (such as Adaptive
               | Reordering Thresholds and marking large frames instead
               | of closing the connection).
               | 
               | [1] : https://hg.nginx.org/nginx-quic
               | 
               | [2] : https://github.com/cloudflare/quiche
               | 
               | [3] : https://github.com/ZestProjects/Zestginx
               | 
               | EDIT: A friend of mine who works at VK ("the Russian
               | Facebook") informed me that they're helping out with the
               | NGINX QUIC implementation which is nice to hear, as
               | having a company backing such work does solidify the
               | route a little.
        
             | tsimionescu wrote:
             | > HTTP/3 is a way of spelling the HTTP protocol over QUIC
             | 
             | HTTP/3 could perhaps be described as HTTP/2 over QUIC. It's
             | still a very different protocol from HTTP/1.1, even if you
             | were to ignore the transport being used - the way
             | connections are managed is entirely different.
        
               | tialaramex wrote:
               | It's just spelling. It certainly wouldn't be better to
               | think of it as HTTP/2 over QUIC since it works quite
               | differently because it doesn't have an in-order protocol
               | underneath it.
               | 
               | HTTP has a bunch of semantics independent of how it's
               | spelled, and HTTP/3 preserves those with a new spelling
               | and better performance.
        
             | nix23 wrote:
             | >So the fairer comparison might be TLS 1.3 under all three,
             | but if you need to upgrade why not upgrade the whole HTTP
             | stack?
             | 
             | Because it's a benchmark of HTTP/3 and not a comparison
             | of "as it might be" stacks.
             | 
             | It would be a bit like benchmarking HTTP/1 with RHEL7 and
             | Apache2, HTTP/2 with RHEL8 and NGINX, and HTTP/3
             | with...let's say Alpine and Caddy...it's just not a clean
             | benchmark if you mix more than one component and try to
             | prove that this single component is faster.
        
               | tialaramex wrote:
               | Good point. So TLS 1.3 everywhere would have been
               | better.
        
               | bawolff wrote:
               | Especially when their benchmark scenario is something
               | that plays to the strengths of TLS 1.3 and would
               | probably be only mildly improved (if at all) by HTTP/3.
        
               | nix23 wrote:
               | Well yes if it's a benchmark ;)
        
       | ohgodplsno wrote:
       | >If a web page requires 10 javascript files, the web browser
       | needs to retrieve those 10 files before the page can finish
       | loading.
       | 
       | The author is _this close_ to realizing the problem. But no, we
       | need Google to save us and release HTTP versions at the same
       | rhythm as their Chrome releases so they can keep pushing 4MB of
       | JavaScript to serve ads on a shitty full-JS website that could
       | have been static.
        
         | rswail wrote:
         | Replace "javascript" with "img" and the same applies.
         | 
         | Has nothing to do with your theory that this is all a secret
         | plot by Google to sell ads.
         | 
         | It's more about why a TCP connection is so "heavy" compared
         | to a QUIC/UDP connection that provides similar reliability
         | guarantees.
        
           | ohgodplsno wrote:
           | > Replace "javascript" with "img" and the same applies.
           | 
           | Absolutely not; images can be loaded completely
           | asynchronously. Sure, you might have some content jump
           | (unless you do your job and specify the image size ahead of
           | time), but your content is still there.
           | 
           | >Has nothing to do with your theory that this is all a secret
           | plot by Google to sell ads.
           | 
           | It's everything but secret, in the same way that AMP was a
           | plot by Google to sell ads. Everything Google does is in the
           | interest of collecting data and selling ads. Not a single one
           | of their products doesn't have this in mind.
        
             | Dylan16807 wrote:
             | Images are part of content.
        
           | goodpoint wrote:
           | > secret plot by Google to sell ads
           | 
           | Not secret at all. Google developed a whole technological
           | stack including a browser to do that.
        
           | johncolanduoni wrote:
           | To be fair, this is 100% part of a not-so-secret plot by
           | Google to sell ads more efficiently by making web browser
           | connections faster.
        
       | littlecranky67 wrote:
       | The benchmarks are not suitable to reach the conclusion that
       | HTTP/3 is fast (or faster than HTTP/2). When you run such
       | benchmarks, you choose a very fixed parameter vector (bandwidth,
       | latency, link conditions that impact packet loss, payload shapes
       | etc.). Running the same benchmark against a differently chosen
       | parameter vector may result in the complete opposite conclusion.
       | 
       | Additionally, the Internet is not a static system but a dynamic
       | one. To say something is "fast" means that it should be fast
       | for most people in most conditions. Single-link benchmarks
       | can't show that.
       | 
       | E.g. in a traffic jam I will be very fast when I use the turn-
       | out/emergency lane - but only as long as I am the only one
       | doing it.
        
       | bullen wrote:
       | 3x is nothing, 10x or 100x and we're talking.
       | 
       | The only real bottleneck we're going to have is CPU, so they
       | should compare that.
       | 
       | Every time humans make an improvement, we scale up to fill that
       | benefit: https://en.wikipedia.org/wiki/Jevons_paradox
        
       | beebeepka wrote:
       | Is it fast enough for first-person shooters in the browser?
       | That would be awesome. Hosting dedicated servers in a browser,
       | hehe.
        
         | johncolanduoni wrote:
         | WebTransport (still in development unfortunately) lets you send
         | unreliable datagrams over HTTP/3, but there's no reason for it
         | to be any faster (or slower) than WebRTC's SCTP support.
         | Probably somewhat easier to support once more HTTP/3 libraries
         | are out there since it's not part of a massive standard like
         | WebRTC.
        
       | [deleted]
        
       | zigzag312 wrote:
       | Heads-up: graphs have non-zero baselines. Still, gains are quite
       | impressive.
        
       | Mizza wrote:
       | Has anybody done a comparison with Aspera yet?
        
         | jerven wrote:
         | I would love to see this as well. Or even compared to plain
         | downloading from an FTP site. Especially for large files,
         | 10GB plus in size.
        
       | illys wrote:
       | I am always amazed by new stuff claiming to be faster when the
       | old stuff has worked perfectly since computers and networks were
       | hundreds of times less performant.
       | 
       | It seems to me just like another excuse to add complexity and to
       | create more bloated websites.
       | 
       | The Google homepage is 1.8 MB at initial load for an image, a
       | field and 3 links, and all the other major web operators are no
       | better. Seriously, would they build such pages if they cared
       | about being fast?
       | 
       | [EDIT] For those not liking my comment, I should have said that
       | it is in line with the conclusion of the article: "In general,
       | the more resources your site requires, the bigger the performance
       | improvement you'll see". I am just questioning the benefit of
       | helping the inflation of website traffic; in the end the
       | service is not better, just ever heavier (the Google example
       | above is just an illustration).
        
         | profmonocle wrote:
         | > It seems to me just like another excuse to add complexity and
         | to create more bloated websites.
         | 
         | Arguably it's the other way around. Web sites were already
         | getting extremely complex and bloated, so new protocols are
         | attempting to restore performance that we've lost. E.g., one
         | of the problems HTTP/2 tries to solve is sending multiple
         | files in parallel over the same connection, to avoid the
         | pitfalls of opening lots of simultaneous TCP sockets. This
         | only became a major concern as web sites added more and more
         | assets.
         | 
         | It's definitely a vicious cycle though. It's reminiscent of
         | what happens with hardware. Better hardware incentivizes
         | inefficient software development, to the point where a modern
         | messaging app might not even be usable on a PC from 2003,
         | despite not having much more functionality than similar apps
         | from the era.
        
       | lil_dispaches wrote:
       | Wait, it's not about a new HTTP, but about replacing TCP? Isn't
       | that a big deal? When did OP's browser start supporting QUIC?
        
       | phicoh wrote:
       | It is nice to see the effect of 0-RTT in QUIC. In quite a few
       | graphs the HTTP/3 times for the small site have one dot roughly
       | at the same level as HTTP/2. This is probably the first
       | connection; the rest get 0-RTT.
        
       | sidcool wrote:
       | This is impressive. The only issue I have faced in the past
       | with HTTP/2 is server and browser support. It's not very
       | reliable, and migrations are painful. Hopefully HTTP/3 will be
       | seamless.
        
       | ArchOversight wrote:
       | Just as a heads up, if you are viewing the site in Safari, you
       | will not see the graphs and images as lazy loading is not yet
       | supported.
       | 
       | https://caniuse.com/loading-lazy-attr
       | 
       | You can manually enable it if you have Developer mode enabled
       | with:
       | 
       | Develop -> Experimental Features -> Lazy Image Loading
        
         | ksec wrote:
         | 12 hours and 255 comments, and you can guess most on HN don't
         | use Safari. Thanks for the tip.
        
           | galonk wrote:
           | I just assumed they were broken or that the image server was
           | slashdotted.
        
       | cakoose wrote:
       | These charts should have the y-axis start at zero. As they are
       | now, I have to convert the bars into numbers, then mentally
       | compare the numbers, which defeats the point of charting them
       | graphically.
       | 
       | Though I guess I can compare the confidence intervals visually
       | :-P
        
         | omegalulw wrote:
         | Whenever I see y-axes that don't start from zero on a
         | marketing slide or blog, alarm bells go off in my head.
         | 
         | Tbf, I think for this blog the narrower range does help the
         | first chart, as you can 1) easily compare bounds, and 2) on
         | the full scale the bars would be nearly at the same spot.
        
         | eric_trackjs wrote:
         | Author here. If I were to do it again I would pick a different
         | visualization. The intent was not to "lie" with statistics as
         | other commenters here seem to think, it was to fit the data
         | side by side and have it be reasonably visible.
         | 
         | Lots of room for improvement next time I think.
        
       | beckerdo wrote:
       | There are a few outliers in the HTTP/3 graphs that are slower
       | than the HTTP/2 graphs. I might have missed it, but I don't think
       | the outliers were explained.
        
       | ch17z wrote:
       | requestmetrics.com: Blocked by 1Hosts (Lite), AdGuard DNS filter,
       | AdGuard Tracking Protection filter, EasyPrivacy and oisd.
        
         | toddgardner wrote:
         | Request Metrics is not an advertiser, does not track
         | individuals, and complies with the EFF DNT policy. Ad-block
         | lists are way too aggressive; all it takes is some random
         | person to put you on the list, and it's hard to get off it.
        
       | aquadrop wrote:
       | Regardless of the results, that is a very disingenuous way of
       | presenting charts: with the floor at ~1000, a first result of
       | 2500 and a second of 1000, the second bar sits basically on the
       | floor even though the real difference is only 2.5 times.
        
         | [deleted]
        
       | knorker wrote:
       | Grrr, graphs that are not anchored at y=0.
       | 
       | Beware reading these graphs.
        
       | kaetemi wrote:
       | Parallel requests are nice, but is there a standard way to
       | request larger files sequentially (explicitly wanting nicely
       | pipelined head-of-line-blocking delivery)? Think streaming media,
       | where you want chunks to arrive one after the other, without
       | the second chunk hogging bandwidth while the first one is still
       | downloading.
        
         | johncolanduoni wrote:
         | AFAIK HTTP/3 doesn't send data out-of-order within a single
         | stream any more than TCP does. So if you want a large file to
         | be streamed with head-of-line blocking you just don't send
         | multiple range requests for it.
        
           | kaetemi wrote:
           | What I want is multiple large files (chunks being
           | individual files in a stream) to arrive one after the
           | other, without a re-request gap, and without multiple files
           | hogging bandwidth (i.e. HTTP/1.1 pipelining).
           | 
           | It's easy with streaming video, where you can just time the
           | requests at a fixed rate. But for stuff like streaming game
           | assets, you never know how long each download will take.
           | Doing parallel requests just delays when the first asset
           | appears, and doesn't guarantee that you fill the bandwidth
           | anyway if the latency is high enough...
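           | 
           | (The naive version of this today is strictly sequential
           | fetches over one kept-alive connection; a minimal Go sketch
           | with made-up asset URLs. It avoids the bandwidth fight, but
           | still pays the re-request gap described above:)
           | 
           |   package main
           | 
           |   import (
           |       "fmt"
           |       "io"
           |       "net/http"
           |   )
           | 
           |   // Fetch assets strictly one after another over a reused
           |   // connection. No parallel downloads fighting for
           |   // bandwidth, but there is still a round trip of dead
           |   // time between files - the gap pipelining would hide.
           |   func fetchInOrder(c *http.Client, urls []string) error {
           |       for _, u := range urls {
           |           resp, err := c.Get(u)
           |           if err != nil {
           |               return err
           |           }
           |           n, err := io.Copy(io.Discard, resp.Body)
           |           resp.Body.Close() // keep the conn reusable
           |           if err != nil {
           |               return err
           |           }
           |           fmt.Printf("%s: %d bytes\n", u, n)
           |       }
           |       return nil
           |   }
           | 
           |   func main() {
           |       urls := []string{ // hypothetical asset chunks
           |           "https://example.com/assets/chunk-0.bin",
           |           "https://example.com/assets/chunk-1.bin",
           |       }
           |       err := fetchInOrder(http.DefaultClient, urls)
           |       if err != nil {
           |           panic(err)
           |       }
           |   }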
        
       | thenoblesunfish wrote:
       | Obviously the sample sizes are too small to conclude much in
       | this direction, but for fun: it looks like the HTTP/3 numbers
       | have more outliers than the HTTP/2 numbers. Is that to be
       | expected due to the nature of the new protocol, or of the
       | experiment?
        
         | undecisive wrote:
         | That was my initial takeaway. I suspect that the outlier is the
         | initial connection / handshake, and that all subsequent
         | requests were much faster thanks to the QUIC / 0-RTT session
         | ticket. But I can't see anywhere this is mentioned explicitly,
         | and those outliers are a fair bit worse than HTTP/2.
        
       | ppg677 wrote:
       | QUIC needs a kernel implementation. At least on Linux, TCP/IP
       | does a lot of its processing in soft-interrupt handlers, which
       | is far cheaper and more responsive than delivering UDP packets
       | to an application thread and waking it up.
       | 
       | You don't want your transport acknowledgement packets to get
       | delayed/lost because of app thread scheduling.
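       | 
       | (Userspace stacks do claw some of this back with batching. A
       | rough, Linux-oriented sketch of recvmmsg-style reads, assuming
       | golang.org/x/net/ipv4; one syscall drains several datagrams,
       | though it doesn't change the wakeup/scheduling point above:)
       | 
       |   package main
       | 
       |   import (
       |       "fmt"
       |       "net"
       | 
       |       "golang.org/x/net/ipv4"
       |   )
       | 
       |   func main() {
       |       // A UDP socket, as a userspace QUIC stack would own it.
       |       addr := &net.UDPAddr{Port: 4433}
       |       conn, err := net.ListenUDP("udp4", addr)
       |       if err != nil {
       |           panic(err)
       |       }
       |       defer conn.Close()
       | 
       |       // Batch receive: one syscall can return up to 8
       |       // datagrams, amortising per-packet syscall cost.
       |       p := ipv4.NewPacketConn(conn)
       |       msgs := make([]ipv4.Message, 8)
       |       for i := range msgs {
       |           msgs[i].Buffers = [][]byte{make([]byte, 1500)}
       |       }
       | 
       |       n, err := p.ReadBatch(msgs, 0)
       |       if err != nil {
       |           panic(err)
       |       }
       |       for _, m := range msgs[:n] {
       |           fmt.Printf("%d bytes from %v\n", m.N, m.Addr)
       |       }
       |   }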
        
         | fulafel wrote:
         | This argument would benefit from quantitative evidence
         | showing the impact on end-to-end application performance or
         | server load.
        
           | ppg677 wrote:
           | It is way less CPU efficient, with poor tail latency for LAN
           | environments.
        
         | Klasiaster wrote:
         | You have to consider that an in-kernel implementation also
         | hinders innovation - for example, things like BBR instead of
         | Cubic/NewReno can be rolled out easily in userspace through the
         | application's QUIC library while it takes a while until all
         | clients and servers a) use a recent kernel which has TCP BBR
         | support and b) have it configured by default.
        
           | Klasiaster wrote:
           | ... and another bonus is that your QUIC implementation can be
           | in a memory safe language and doesn't have to use C ;)
        
         | qalmakka wrote:
         | It would also be great to have a standard API for creating
         | QUIC sockets, like BSD sockets on POSIX. It would make it
         | easy to swap implementations and port applications using QUIC
         | to other systems, avoiding balkanization. I can see OSes
         | integrating QUIC into their system libraries, and it would be
         | great to avoid dozens of #ifdefs or reliance on third-party
         | libraries.
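         | 
         | (Something like this hypothetical interface is what I mean;
         | to be clear, a made-up sketch in Go, not any real OS or
         | library API:)
         | 
         |   package quicsock
         | 
         |   import "context"
         | 
         |   // Conn is a hypothetical portable QUIC connection handle.
         |   // Nothing here is a real OS or library interface; it is
         |   // just the shape such a standard API could take.
         |   type Conn interface {
         |       // OpenStream opens a bidirectional, ordered byte
         |       // stream multiplexed over the connection.
         |       OpenStream(ctx context.Context) (Stream, error)
         |       // AcceptStream waits for a stream opened by the peer.
         |       AcceptStream(ctx context.Context) (Stream, error)
         |       Close() error
         |   }
         | 
         |   type Stream interface {
         |       Read(p []byte) (int, error)
         |       Write(p []byte) (int, error)
         |       Close() error
         |   }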
        
         | MayeulC wrote:
         | Could this get implemented in userspace with BPF used to
         | offload some processing in kernel space? Could this be
         | implemented in an unprivileged process?
        
         | drewg123 wrote:
         | I work on kernel and network performance on FreeBSD. For me
         | the issue with QUIC is that we lose a few decades of
         | optimizations in software and hardware. E.g., we lose TCP TSO
         | and LRO (which reduce trips through the network stack, and
         | thus per-byte overhead costs), and we lose inline crypto
         | offload (which offloads the CPU and cuts memory bandwidth in
         | half). So this makes QUIC roughly 3-4x as expensive in terms
         | of CPU and memory bandwidth compared to TLS over TCP.
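         | 
         | (Some of the TSO-shaped loss can be papered over with UDP
         | GSO, where one sendmsg hands the kernel a super-buffer to
         | segment. A rough, Linux-only sketch, assuming
         | golang.org/x/sys/unix exposes UDP_SEGMENT; it helps the
         | syscall count but does nothing for inline crypto offload:)
         | 
         |   package main
         | 
         |   import (
         |       "net"
         | 
         |       "golang.org/x/sys/unix"
         |   )
         | 
         |   func main() {
         |       raddr := &net.UDPAddr{ // placeholder peer
         |           IP:   net.IPv4(192, 0, 2, 1),
         |           Port: 4433,
         |       }
         |       conn, err := net.DialUDP("udp4", nil, raddr)
         |       if err != nil {
         |           panic(err)
         |       }
         |       defer conn.Close()
         | 
         |       // Ask the kernel to slice each write into 1200-byte
         |       // datagrams (UDP GSO), so one syscall sends many
         |       // packets.
         |       rc, err := conn.SyscallConn()
         |       if err != nil {
         |           panic(err)
         |       }
         |       var serr error
         |       cerr := rc.Control(func(fd uintptr) {
         |           serr = unix.SetsockoptInt(int(fd), unix.SOL_UDP,
         |               unix.UDP_SEGMENT, 1200)
         |       })
         |       if cerr != nil || serr != nil {
         |           panic("enabling UDP GSO failed")
         |       }
         | 
         |       // One write, many 1200-byte datagrams on the wire.
         |       payload := make([]byte, 12000)
         |       if _, err := conn.Write(payload); err != nil {
         |           panic(err)
         |       }
         |   }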
        
           | ppg677 wrote:
           | Great points.
        
       ___________________________________________________________________
       (page generated 2021-12-15 23:00 UTC)