[HN Gopher] In spite of an increase in Internet speed, webpage s...
       ___________________________________________________________________
        
       In spite of an increase in Internet speed, webpage speeds have not
       improved
        
       Author : kaonwarb
       Score  : 392 points
       Date   : 2020-08-04 15:33 UTC (7 hours ago)
        
 (HTM) web link (www.nngroup.com)
 (TXT) w3m dump (www.nngroup.com)
        
       | rayrrr wrote:
       | Moore's Law + Parkinson's Law = Stasis
        
       | Spearchucker wrote:
       | Yeah. Because modern tech is bloat. Started on a JavaScript-based
       | search tool the other day. ALL the JavaScript is hand-coded. No
       | libraries, frameworks, packages. No ads. Just some HTML, bare,
       | bare-bones CSS, and vanilla JavaScript. Data is sent to the
       | browser in the page, where the user can filter as needed.
       | 
       | It's early days for sure, and lots of the code was written to
       | work first and be efficient second, so it will grow over the next
       | few weeks. But even when finished it will be nowhere near the
       | !speed or size of modern web apps/pages/things.
       | 
       | https://www.wittenburg.co.uk/Rc/Tyres/default.html
       | 
       | It is possible.
        
       | rayiner wrote:
        | It's comical. I've got 2 Gbps fiber on a 10 Gbps internal network
        | hooked up to a Linux machine with a 5 GHz Core i7-10700K. Web
        | browsing is just okay. It's not instant like my 256k SDSL was on
        | a 300 MHz PII running NT4 or BeOS. Really, there isn't much point
        | having over 100 Mbps for browsing. Typical web pages make so many
        | small requests that they don't even keep a TCP connection open
        | long enough to use the full bandwidth (due to TCP's automatic
        | window sizing, it takes some time for the transfer rate to ramp
        | up).
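        | 
        | A back-of-the-envelope sketch of that ramp-up (all numbers
        | illustrative: a 10-segment initial window, a 30 ms RTT, and the
        | window doubling each RTT during slow start):
        | 
        |   // Rough slow-start model: cwnd starts at ~10 segments
        |   // (~1460 bytes each) and doubles once per RTT until the
        |   // transfer completes or the link is saturated.
        |   const MSS = 1460;         // bytes per segment
        |   const rttMs = 30;         // assumed round-trip time
        |   const linkBps = 2e9 / 8;  // 2 Gbps link, in bytes/sec
        |   const assetBytes = 500e3; // a typical 500 KB response
        |   let cwnd = 10 * MSS;      // initial window, in bytes
        |   let sent = 0, rtts = 0;
        |   while (sent < assetBytes) {
        |     sent += cwnd;
        |     rtts += 1;
        |     cwnd = Math.min(cwnd * 2, linkBps * (rttMs / 1000));
        |   }
        |   console.log(`${rtts} RTTs (~${rtts * rttMs} ms) for 500 KB`);
        |   // 6 RTTs here -- the window never gets anywhere near
        |   // what a 2 Gbps line could carry per RTT.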
        
         | iso8859-1 wrote:
         | How does a web page decide whether a TCP connection is kept
         | open?
         | 
         | Surely the webpage can only use HTTP, and there is no concept
         | of a connection in JavaScript or HTML/CSS.
         | 
          | So it must be the responsibility of the browser to reuse
          | connections, or to multiplex responses efficiently over
          | HTTP/2.
        
           | jfrunyon wrote:
           | They never said the web page decided.
           | 
           | (Although, to an extent, the contents of the webpage do
           | determine how long the browser will keep the connection open.
           | Also, WebSockets exist.)
        
           | rayiner wrote:
            | If you're grabbing a bunch of medium-sized assets from
            | different servers, you cannot reuse a single TCP connection.
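            | 
            | You can at least pay the handshake cost early with
            | preconnect hints (a minimal sketch; the origins are
            | hypothetical examples):
            | 
            |   // Warm up TCP/TLS to third-party origins before
            |   // their assets are actually requested.
            |   for (const origin of ["https://cdn.example.com",
            |                         "https://img.example.com"]) {
            |     const link = document.createElement("link");
            |     link.rel = "preconnect";
            |     link.href = origin;
            |     document.head.appendChild(link);
            |   }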
        
         | thereisnospork wrote:
         | As someone else with gratuitously fast internet I almost wish I
         | could preemptively load/render all links off of whatever page
         | I'm on in case I decide to click on one. (I imagine this would
         | be fairly wasteful).
        
           | grumple wrote:
            | You can do this with JavaScript, and some sites do.
        
           | supertrope wrote:
           | Google Chrome has a prefetch feature which does that.
        
           | MrStonedOne wrote:
            | I got a fair increase in responsiveness on my site by
            | preloading links on hover and pre-rendering (pre-fetching
            | all subresources) on mousedown (instead of waiting for
            | mouseup).
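            | 
            | Roughly like this (a minimal sketch of the hover/mousedown
            | idea, not the exact code):
            | 
            |   // Prefetch a link's document on hover, and ask for a
            |   // prerender (subresources too, where supported) on
            |   // mousedown -- both fire well before mouseup/click.
            |   function addHint(rel, href) {
            |     const link = document.createElement("link");
            |     link.rel = rel;
            |     link.href = href;
            |     document.head.appendChild(link);
            |   }
            |   document.querySelectorAll("a[href]").forEach((a) => {
            |     a.addEventListener("mouseover",
            |       () => addHint("prefetch", a.href), { once: true });
            |     a.addEventListener("mousedown",
            |       () => addHint("prerender", a.href), { once: true });
            |   });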
        
         | pdimitar wrote:
         | iMac Pro (10-core, 64GB ECC RAM, 2TB NVMe SSD) with 1GbE
         | connection here.
         | 
          | Ever since I bought this machine (and switched ISPs), I came
          | to understand that my machine was not at fault; it's most
          | websites that are turtle-slow.
        
       | peetle wrote:
       | Despite an increase in speed, people insist on adding more to the
       | web.
        
       | Konohamaru wrote:
       | Niklaus's Law strikes again.
        
         | schmudde wrote:
         | I think it's a good law, for what it's Wirth.
        
         | zukzuk wrote:
         | ... which is just a special case of
         | https://en.wikipedia.org/wiki/Jevons_paradox
        
       | speeder wrote:
        | One thing that is bothering me is how browsers themselves are
        | becoming ridiculously slow and complicated.
        | 
        | I made a pure HTML and CSS site, and it still takes several
        | seconds to load no matter how much I optimize it. After I
        | launched some in-browser profiling tools, I saw that most of the
        | time is spent with the browser building and rebuilding the DOM
        | and whatnot several times; the download of all the data takes
        | 0.2 seconds, and all the rest of the time is the browser
        | rendering stuff and tasks waiting on each other to finish.
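        | 
        | For what it's worth, the Navigation Timing API shows the same
        | split; a quick sketch you can paste into the console:
        | 
        |   // Compare time spent downloading vs. parsing/rendering,
        |   // using the browser's Navigation Timing entries.
        |   const [nav] = performance.getEntriesByType("navigation");
        |   console.log("download:",
        |     (nav.responseEnd - nav.requestStart).toFixed(0), "ms");
        |   console.log("parse/render:",
        |     (nav.loadEventStart - nav.responseEnd).toFixed(0), "ms");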
        
       | jorblumesea wrote:
        | The issue at its core is HTML. It was not designed for the
        | complex, rich interfaces and interactivity that modern web users
        | want. So JS is used, which is slow and needs to be loaded.
        | 
        | The heavy use of JS is basically just hacking around HTML/DOM
        | problems at the core structure of the web.
        
       | sirjaz wrote:
       | The problem is that we are trying to make websites do what they
       | were never meant to. We should be making cross-platform apps that
       | use simple data feeds.
        
       | mbar84 wrote:
        | The worst case of this for me was a completely static site which
        | (sans images) loaded in under 100 ms on my local machine. I
        | inlined styles, gave all images width and height so there is no
        | reflow when the image arrives, no chaining of CSS resources,
        | deferred execution of a single JavaScript bundle, gzip, caching,
        | the works. Admittedly it was a simple page, but hey, if I can't
        | do it right there, where then?
       | 
        | Anyway, it all went to s__t as soon as another guy was tasked
       | with adding share buttons (which I have never once in my life
       | used and am not sure anybody else has ever used).
       | 
        | I won't optimize any pages over which I don't have complete
        | control. Maybe if a project has a CI/CD setup that will catch
        | any performance regressions (see the sketch below), but other
        | than that, it's too much effort, thankless anyway, and on any
        | project with multiple frontend devs, the code is a commons and
        | the typical tragedy is only a matter of time.
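        | 
        | A sketch of one thing such a CI check could watch: logging
        | layout shifts (the reflows that width/height attributes on
        | images prevent) via PerformanceObserver, in Chromium-based
        | browsers:
        | 
        |   // Accumulate the layout-shift score; a CI run could fail
        |   // the build if it exceeds an agreed budget.
        |   let clsScore = 0;
        |   new PerformanceObserver((list) => {
        |     for (const entry of list.getEntries()) {
        |       if (!entry.hadRecentInput) clsScore += entry.value;
        |     }
        |     console.log("cumulative layout shift:", clsScore);
        |   }).observe({ type: "layout-shift", buffered: true });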
        
       | joncrane wrote:
       | I think this is a general problem with technology as a whole.
       | 
       | Remember when channel changes on TVs were instantaneous? Somehow
       | along the way the cable/TV companies introduced latency in the
       | channel changes and people just accepted it as the new normal.
       | 
       | Phones and computers were at one point very fast to respond; but
       | now we tolerate odd latencies at some points. Apps for your phone
        | have gotten much, much bigger and more bloated. Ever noticed how
        | long it takes to kill an app and restart it? Ever notice how much
        | more often you have to do that, even on a 5-month-old flagship
       | phone? It's not just web pages, it's everything. The rampant
       | consumption of resources (memory, CPU, bandwidth, whatever) has
       | outpaced the provisioning of new resources. I think it might just
       | be the nature of progress, but I hate it.
        
         | notyourwork wrote:
         | > Ever noticed how long it takes to kill an app and restart it?
         | Ever notice how much more often you have to do that, even on a
          | 5-month-old flagship phone?
         | 
          | Is this an Android problem? I don't really ever have to close
          | apps unless the app itself gets stuck in a broken state and
          | force-closing and restarting can correct the issue.
        
         | lumost wrote:
          | Alternatively, the value of low-latency experiences is not as
          | high as we believed - or it's poorly measured.
         | 
         | Particularly in Enterprise software, the time to complete a
         | workflow or view specific data matters _a lot_ - the time to
         | load a page is a component of that, but customers will gladly
         | trade latency for a faster e2e experience with less manual
         | error checking.
         | 
          | In consumer software the big limiting factor is engagement,
          | and a low-latency experience will enhance engagement. However,
          | it's possible to hide latency in ways that weren't possible
          | before, such as infinite scrolls and early loading of assets.
          | The engagement on the 50th social feed result has less to do
          | with the latency to load the result, and more to do with how
          | engaging the content is.
        
           | jfrunyon wrote:
           | Meanwhile, most of those ways to hide latency have other
           | issues, like the good old "link I need is below the infinite
           | scroll" or "I just want to go back to a specific time, but I
           | have to scroll through every single thing between then and
           | now instead of just jumping 10 pages at a time". Which we
           | would avoid if instead we tackled the actual problem.
        
         | dylan604 wrote:
          | It used to be that software would get written, and then
          | iterated on to optimize and refine it, making it smaller and
          | faster. That got too expensive, so devs just depended on CPU
          | speeds increasing and drives getting larger.
        
         | daxfohl wrote:
         | Microwaves are another example. Used to just turn a knob to the
         | number of minutes you want, and done. Now it's five button
         | presses. (Maybe not exactly a latency thing, but a UX one that
         | makes it slower).
        
         | massysett wrote:
         | It is the nature of progress: progress is doing more with less.
         | That's increased productivity.
         | 
         | Of course software now uses more computing resources, so that's
         | not doing more with less. But the computer is cheap. What's
         | expensive is the humans who program the computer. Their time is
         | expensive, and getting experienced, expert humans is even more
         | expensive.
         | 
         | So we now have websites that have rich features bolted together
         | using frameworks. Same for desktop software, embedded systems,
         | and whatever else. They're optimized for developer time and
         | features, not for load time because that's not expensive, at
         | least not in comparison.
         | 
         | As a user the only solution I see to this is to use old
         | fashioned dumb products rather than cheaply developed "smart"
         | ones. For instance I'm not going near a smart light switch, or
         | a smart lawn sprinkler controller. Old dumb ones are cheap and
         | easy and fast and predictable.
        
           | finnthehuman wrote:
           | >But the computer is cheap. What's expensive is the humans
           | who program the computer.
           | 
           | This is a nice half-truth we tell ourselves, but that's not
           | the full story.
           | 
            | There exist plenty of optimizations where the programmer
            | time would cost less than the additional hardware. And
           | those losses compound. But they're a little too hard to
           | track, and cause is a little too far divorced from effect.
           | 
           | I did our first ever perf pass on an embedded application as
           | we started getting resource constrained. I knocked 25% off
           | the top in a week. Even if I had spent man-months for a 10%
           | savings, try and tell me that's more expensive than spinning
           | new boards.
           | 
           | That's not to say we're opposed to hardware changes; we do
           | them all the time. But the cost curve is weighted towards the
            | front, so it's more attractive to spend a non-zero amount of
            | developer time _right now_ to investigate whether this other
            | looming spend is avoidable. That's not the case when you're
            | looking at controlling the acceleration of an AWS bill that
            | spreads your spend out month to month through eternity.
           | 
           | Who wants to spend a big chunk of money up front to figure
           | out if you can change that spend rate trend by a tiny
           | percentage? Even if you do, and get a perf gain, but someone
           | else on the team ships a perf loss? Then it doesn't feel
           | real, and you can only see what you spent. Even if you have
           | good data about the effect of both changes (which you don't),
           | the fact the gain was offset means the sense of effect is
           | diminished.
           | 
           | And rather than investigate perf, people can always lie to
           | themselves that the cost is all about needing to "scale."
            | That way they convince themselves that not only was there
            | nothing they could have done, but that the cost is a sign
            | that their company is hot shit.
           | 
            | If you don't think that kind of perf difference is real, look
            | at Maciej's comparison of Pinboard and anonymized del.icio.us:
            | https://idlewords.com/talks/website_obesity.htm (ctrl-F
            | "ACME")
           | 
           | And if perf has any impact on sales, cause and effect are
           | even further apart. You might be able to measure the effect
           | perf has on your sales website directly, but if that feedback
           | loop involves a user developing an opinion over days/weeks?
           | Forget about knowing. Oh, sure they'll complain, but there
           | are no metrics, so we get the rationalizations we see in this
           | thread.
        
         | cuddlybacon wrote:
         | > Remember when channel changes on TVs were instantaneous?
         | 
         | Remember when turning a TV on was <0.5s?
         | 
         | My current dumb TV takes a good while to turn on. When I press
          | the power button, it takes about 1s for the indicator light to
          | change, then another 2 or 3 to begin displaying anything.
        
           | rsynnott wrote:
           | ....
           | 
           | No, actually. Most CRTs took a long time to come on. Early
           | LCDs, maybe?
        
             | cuddlybacon wrote:
              | Interesting. I meant CRTs. Maybe my memory is a bit rose-
              | tinted.
              | 
              | Still, I remember never having confusion about whether my
              | CRT TVs were responding to the power button press or
              | not. There are plenty of times where I turn my current TV
              | off since I think it didn't receive the first button press.
        
               | rsynnott wrote:
               | Most CRTs made an obvious _noise_ when turned on
               | (actually, one of the curses of good high-frequency
               | hearing was that they made an obvious noise for the whole
               | time they were on, though I think a lot of people
               | couldn't hear them after warmup). That helped, I suppose.
        
               | kergonath wrote:
               | Exactly, feedback was immediate, which helps a lot with
               | UX.
        
             | [deleted]
        
             | kergonath wrote:
             | The image took some time to stabilise (and was black at
             | first), but the things turned on instantly, with the light
             | indicator visible without delay (and the sound of the CRT
             | turning on).
             | 
              | Now, quite often I have to wait 5s to see whether the
              | button push was registered, push again because the TV still
              | does nothing, and then watch as the thing turns on just as
              | I press and interprets the second push as a signal
              | to go on standby (5 more seconds with an obnoxious message
              | about it going to sleep, and yet 5 more to wake it up).
              | It's like the USB-A plug that needs to be rotated twice
              | every time you want to plug something in.
        
         | Aaronstotle wrote:
          | I once had a glitch on my iPhone back in iOS 9.x. The glitch
          | made it so that all transitions/animations were disabled, and
          | it was a fantastic experience to tap an app and have it open
          | instantly. Turning off animations in iOS settings doesn't make
          | it as fast as that glitch did, unfortunately.
        
           | thatguy0900 wrote:
            | First thing I do with a new Android is turn off all
            | animations in developer settings, much better experience.
            | Shame that glitch isn't reproducible.
        
         | wnevets wrote:
          | > Somehow along the way the cable/TV companies introduced
          | latency in the channel changes and people just accepted it as
          | the new normal
         | 
          | One of the worst parts of the over-the-air digital switchover
          | was how much harder it was to channel surf quickly.
        
         | dghughes wrote:
         | >Remember when channel changes on TVs were instantaneous?
         | Somehow along the way the cable/TV companies introduced latency
         | in the channel changes and people just accepted it as the new
         | normal
         | 
          | Phones used to be rotary dial, but then touch-tone phones with
          | number buttons were introduced. I was reading an article about
          | the human brain and its expectations. Going from a touch-tone
          | phone to an old rotary-style phone seems excruciatingly slow to
          | our brains. It depends on the number: a 1 is very close in
          | duration on each, but a 9 or a 0 on a rotary dial seems glacial
          | compared to a touch-tone 9 or 0.
        
           | rsynnott wrote:
           | Even worse; in some countries, touch tone exchanges took a
           | while to roll out, with the result that keypad phones which
           | supported both pulse dialing and tone dialing were common.
           | And then people never switched them over to tone dialing.
           | There were people pulse dialing into the 21st century.
        
             | coryrc wrote:
             | In the 90s, my family saved $2.68/mo on the phone bill by
             | opting out of tone dialing.
        
         | fractal618 wrote:
         | You had me at:
         | 
         | > Remember when channel changes on TVs were instantaneous?
         | 
         | There's nothing less satisfying than smushing down those
         | rubbery remote control buttons for an extra 2000 to 4000
         | milliseconds to change the channel.
         | 
         | Don't get me started on entering the wrong channel numbers.
         | That gives me PTSD.
        
           | thewebcount wrote:
           | Yeah, I remember when that first started happening, and I
           | agree it was really frustrating. The funny thing is that I no
           | longer channel surf, so I no longer hit this. I realized the
           | other day that I haven't watched cable TV in a long time. My
           | TiVo is empty and has been for a long time, but I watch TV
           | every night using streaming devices instead.
        
             | phone8675309 wrote:
             | > The funny thing is that I no longer channel surf
             | 
             | The cynical side of my brain feels that this is exactly
             | what TV stations and show producers wanted. If it's fast
             | and easy to spin the dial and maybe stop on a competitor
              | (and more importantly, watch their ads instead of yours),
              | then you have to put out a quality product and not annoy
              | the customer with as many loud and irritating ads, which
              | means that you don't make as much money from the content
              | that bookends your ads.
        
           | rohan1024 wrote:
            | What's worse is the black screen in between channel
            | switches. If you used to watch TV with your lights off, you
            | stop doing that now.
        
           | thehappypm wrote:
           | This is my new favorite "first world problem".
        
           | daxfohl wrote:
           | The worst is they "fixed" it with "digital knobs" now. But
           | due to anti-glitching logic, they are still not as responsive
           | as you want and annoying to use. Especially volume controls.
        
             | formerly_proven wrote:
              | There's nothing quite as annoying as having to use the
              | cheapest rotary encoder, which already feels like shit, but
              | the firmware is also polling it incredibly slowly, and is
              | debouncing it incorrectly, and it probably bounces like
              | hell, so not only does it feel bad to use, but it
              | actually doesn't work half the time and goes a step or two
              | in the wrong direction, even if you spin it evenly in one
              | direction.
             | 
              | These things are like those fake volume controls Reddit
              | made up.
        
         | leadingthenet wrote:
         | You call it progress, I call it regress.
         | 
         | It seems to me that we have lower performance, exponentially
         | higher resource consumption, and often no more functionality in
         | a "modern" web app / Electron app compared to some desktop apps
          | from the '90s. And for what? To have even worse UIs and UX
          | that never conform to platform guidelines? Where's the
          | progress?
        
           | coliveira wrote:
           | The big "dream" of web-based technologies was to allow
           | designers the freedom to do whatever they want. But doing
           | this has a lot of costs, which we are all paying today. In
           | the early days of GUIs, a group would design the OS, and
           | everyone would just use that design through the UI
           | guidelines. It was, in my opinion, the apex of GUI
            | programming, because you could create full-featured apps
            | without the need for an experienced designer, at least for
            | the first few iterations. Now, I cannot even create a simple
            | web app that doesn't look like crap, and any kind of web
            | product requires a designer just to make it minimally
            | usable. And the whole architecture became so complex,
            | because of the additional layers, that it looks more like a
            | Baroque church than a modern building.
        
           | tomjen3 wrote:
            | The desktop apps from the 90s were better because they could
            | actually show more information on an 800 by 600 pixel screen
            | than a modern app can on a full HD screen, because the
            | modern app wastes too much space on whitespace.
           | 
           | Not to mention flat design, which is without exception bad
           | design. Buttons have borders, show them.
        
         | skohan wrote:
          | I remember thinking this when OSes started putting transparent
          | blur effects on various UI layers. As someone who's worked with
          | computer graphics, I understood that high-quality blur effects
          | are relatively expensive. Most computers these days can handle
          | it without breaking a sweat, but why are we using
          | these resources on the desktop environment, whose job is
          | basically to fade into the background and facilitate other
          | tasks?
         | 
         | I don't think it's the nature of progress so much as it is
         | laziness. Most developers (myself included) don't worry much
         | about optimization until the UX performance is unacceptable.
         | 
         | I sometimes wonder what the world could be like if we just
         | froze hardware for 5 years and put all of our focus on
         | performance optimization.
        
           | sumtechguy wrote:
            | Then others went the other way. 'Hey, where is the scroll
            | bar?' Oh, there it is, just a slightly different shade of
            | yellow than the background.
        
           | w0mbat wrote:
           | I invented the translucent blurred window effect. It first
           | shipped in Mac IE 5.0 in April 2000 (the non-Carbon version
           | only), for the autocomplete window, and later that year for
           | the web page context menu (first in the special MacHack
           | version).
           | 
            | The effect doesn't have to be expensive, and my original
            | implementation was fast on millennium-era machines. The goal
            | is to preserve the context of the background, while keeping
           | foreground data readable, in a way that is natural to our
           | human visual system. If you are doing it properly you also
           | shift the brightness values into a more limited range to
           | diminish the contrast and keep the tonal values away from the
           | chosen foreground text color (this is also cheap). Done
           | properly it is visually pleasing with virtually no effect on
           | readability.
           | 
           | People have coded some boneheaded imitations along the way
           | though. They don't add the blur, or they don't adjust the
           | brightness curve, or they make the radius much too big, or
            | they compute some overly exact Gaussian blur that is too slow.
           | 
           | It's the nature of blur that it doesn't have to be exact to
           | be visually pleasing and convincing.
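            | 
            | Something in that spirit (a minimal canvas 2D sketch,
            | assuming a backdrop image is at hand; the radius and tonal
            | numbers are illustrative, not the original values):
            | 
            |   // Blur the backdrop, then compress its brightness range
            |   // so it stays away from the foreground text color.
            |   function drawBackdrop(ctx, img, x, y, w, h) {
            |     ctx.save();
            |     ctx.filter = "blur(8px)";  // small, fast radius
            |     ctx.drawImage(img, x, y, w, h, x, y, w, h);
            |     ctx.filter = "none";
            |     // Pull tonal values toward a light mid-range so
            |     // dark foreground text stays readable on top.
            |     ctx.globalAlpha = 0.55;
            |     ctx.fillStyle = "#ddd";
            |     ctx.fillRect(x, y, w, h);
            |     ctx.restore();
            |     ctx.fillStyle = "#111";    // foreground text
            |     ctx.fillText("menu item", x + 8, y + 20);
            |   }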
        
             | stan_rogers wrote:
             | The context of the background is self-maintaining. Reducing
             | the viz/readability of the overlay text does nobody but the
             | art critics any favours. It was, and remains, a bad idea.
        
             | jfrunyon wrote:
             | Then what went wrong in Windows 7? ;)
        
           | rusk wrote:
            | I think it was to do with graphics acceleration becoming more
            | commonplace. My guess was that these additional effects
            | weren't originally intended to use desktop CPU... though I
            | guess once they became the norm, who knows. Although the
            | recent trend towards "flat" UI seems to be reversing that.
        
         | DoreenMichele wrote:
          | _The rampant consumption of resources... has outpaced the
         | provisioning of new resources. I think it might just be the
         | nature of progress, but I hate it._
         | 
         | Someone had to more or less decide to handle it that way for
         | some reason. So I am skeptical that "it's just the nature of
         | progress."
        
         | formerly_proven wrote:
         | > Somehow along the way the cable/TV companies introduced
         | latency in the channel changes and people just accepted it as
         | the new normal.
         | 
         | The technical reason is that digital TV is a heavily compressed
          | signal [1] (used to be MPEG-2, perhaps they have moved on to
          | H.264) with a GOP (group of pictures) length that is usually
         | around 0.5-2 seconds. When you switch channels, the MPEG-2
         | decoder in your receiver needs to wait until a new GOP starts,
         | because there is no meaningful way to decode a GOP that's "in
         | progress".
         | 
         | [1] And the technical reason for the compression is that analog
         | HD needs a lot more bandwidth than analog NTSC/PAL/SECAM, while
         | raw HD transmission would need an absurd amount of bandwidth
         | per channel (about a gigabit/s for 1080p30). So HD television
         | pretty much requires the use of digital compression. Efficient
         | digital video compression requires GOP structures.
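          | 
          | Roughly, in numbers (a sketch; the raw figure assumes 4:2:0
          | chroma subsampling, and the GOP lengths are the 0.5-2 s range
          | above):
          | 
          |   // Raw 1080p30 at 4:2:0 is 12 bits per pixel.
          |   const rawBps = 1920 * 1080 * 30 * 12;
          |   console.log((rawBps / 1e9).toFixed(2), "Gbit/s raw");
          |   // Compressed, a channel fits in a few Mbit/s, but tuning
          |   // in mid-GOP means waiting for the next keyframe:
          |   // on average, half the GOP length.
          |   for (const gopSec of [0.5, 2]) {
          |     console.log(`GOP ${gopSec}s -> avg wait ${gopSec / 2}s`);
          |   }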
        
           | dfox wrote:
            | Another reason for the lag is that the modulation scheme is
            | complex and requires considerable time both to acquire the
            | actual demodulator lock (in fact, the symbol in a typical
            | terrestrial DTV system is surprisingly long) and to
            | acquire synchronization of the higher-level FEC layers.
        
           | cameldrv wrote:
           | Right, but there's a social/business reason why this is true.
           | There are various things that could be done technically to
           | fix this, for example, you could have three decoders running
           | simultaneously so you always had the adjacent channels
           | buffered, you could send keyframes more often, or you could
           | even use a totally different compression scheme that didn't
           | use keyframes.
           | 
           | The real reason problems like this aren't solved is that the
           | organization does not allocate resources to fix them. The
           | status quo has been deemed to be good enough and not leading
           | to the loss of too many customers to the competition, so
           | that's where it stands. This pervades all engineering --
           | everything just barely works, because once it barely works,
           | for most things, it's good enough, and no more resources are
           | deployed to make it better.
        
             | formerly_proven wrote:
             | I think some boxes advertise that, but how much does it
             | help? On an analog TV you could be switching channels at a
             | rate of say 3-4 per second while still registering the
             | channel logos and stopping on the right one (by going one
             | back immediately). One receiver in either direction isn't
              | going to keep the charade up. Some cleverness might help:
              | decoding adjacent channels, and if the user switches a
              | channel down, having the "up" decoder jump to the 2nd
              | channel down in anticipation of the user going another
              | channel down, etc. But still: you can't feasibly emulate the
             | channel agility (across ~800 channels on satellite) of an
             | analog RF receiver in a digital system with a modulation
             | that has a self-synchronisation interval of up to a few
             | seconds.
             | 
             | > you could send keyframes more often, or you could even
             | use a totally different compression scheme that didn't use
             | keyframes.
             | 
             | GOP length / I-frame interval directly relates to bitrate.
             | Longer GOPs generally result in a lower bitrate at similar
             | quality; in DVDs or Blu-rays I believe the GOPs can be
             | quite long (10+ seconds) to achieve optimum quality for the
             | given storage space.
             | 
             | Non-predictive video codecs are usually pretty poor quality
             | for a reasonable bitrate (like a bunch of weird 90s / early
              | 00s Internet video codecs), or near-lossless quality but
             | poor compression (because they're meant as an intermediate
             | codec).
        
             | xwdv wrote:
             | Hmm, those 2 seconds in between channel changes might be
             | enough to insert a micro ad that could be preloaded in the
              | background. That would appear instantaneously while your
              | channel loads, and it would feel a lot more responsive.
        
               | jakeinspace wrote:
               | Just that thought makes me want to jump out the window.
        
             | VMG wrote:
             | maybe it IS good enough
        
             | philwelch wrote:
             | Realistically, it's because it was considered an acceptable
             | tradeoff for replacing 480i analog video with 1080p digital
             | video.
        
               | jfrunyon wrote:
               | By who?
               | 
               | Many people wouldn't even be able to reliably tell the
                | difference at common DPIs and viewing distances.
        
               | andrewstuart2 wrote:
               | Between 1080p and 480i? Have you never watched old
               | video/shows on an HDTV? The difference is stark.
        
               | AnIdiotOnTheNet wrote:
               | All the time, and unless the images are next to each
               | other you don't really notice unless you're looking for
               | it. Not because there isn't a big difference, but because
               | you just don't really care that much unless it is called
               | to your attention.
        
               | spanhandler wrote:
               | I've found pixel-count improvements stop mattering for me
               | somewhere around DVD quality for a lot of content. I can
                | tell the difference between that and 1080p, but stop
                | caring once I'm actually watching the show/film. For the
                | occasional really beautifully-shot film or some very
                | detailed action movies, I guess I might care a little.
                | 1080p versus 4K, I don't notice the difference at all
               | unless I'm deliberately looking for it. And that's even
               | with a 55" screen at a fairly close viewing distance.
               | 
               | What does make a difference? Surround sound. I'd take 5.1
               | or up with DVD-quality picture over 4K with stereo any
               | day, no hesitation.
        
               | formerly_proven wrote:
               | The difference between 576i50 (PAL equivalent, free
               | version of the main private channels) and 720p50 (that's
               | what German public service uses for "HD"; it's not
               | actually 1080p, although they use a pretty high bitrate)
               | is pretty stark, the difference between 576i and 1080p is
               | even more obvious. Although TVs don't really have a
               | neutral setting and try their best to mess every image up
               | as much as they can.
        
               | philwelch wrote:
               | I remember a clear improvement in the legibility of
               | things like onscreen scoreboards in sports broadcasts. In
               | the NTSC era I would squint at the TV trying to figure
               | out if the score was 16 or 18 for a football game, and
               | for a basketball game you just had to keep track of it in
               | your head since the scoreboard wasn't even persisted
               | onscreen. Other things, like telling different players
               | apart, are also easier these days (you can even make out
               | their facial features in a wide shot!)
        
             | cma wrote:
            | Maybe it leads to less impulsive TV watching and aimless
            | surfing and ends up being a benefit.
        
           | ebg13 wrote:
           | > _The technical reason is that digital TV is a heavily
           | compressed signal [1] (used to be MPEG2, perhaps they have
           | moved on to h.264) with a GOP (group of pictures) length that
           | is usually around 0.5-2 seconds._
           | 
           | Just because they transmit keyframes and deltas in the steady
           | state doesn't mean they need to wait for the next keyframe
           | when starting the stream. They could also choose to send you
           | the current reconstructed frame as a keyframe immediately
           | instead of waiting several seconds for the next pre-made one.
           | The cost to them would be epsilon (one new stream client per
           | channel to be the source of reconstructed keyframes), and the
           | customer experience difference would be noteworthy.
        
             | anticensor wrote:
              | That will only work if the original is lossless.
        
               | ebg13 wrote:
               | No. It will work regardless. I'm talking about having a
               | single additional client decompress the broadcast and act
               | as a first-frame keyframe source based on the
               | decompressed stream. In a traditional compressed stream,
               | a reconstructed frame is already the context for the next
               | delta after the first. Sending a frame reconstructed by
               | someone else allows you to immediately begin using
               | deltas, and it's a close enough approximation of the
               | original uncompressed frame that the immediate result
               | will be quite good even if not perfect, especially
               | compared to waiting.
               | 
               | At worst it would only be as bad as a transcoded stream
               | for the first few seconds until the subsequent true
               | keyframe arrives. That's loads better than no stream at
               | all.
        
           | c3534l wrote:
           | I don't buy this. We had an old cable box that had
           | instantaneous channel-switching. The cable company made us
           | switch to a brand new one. Same signal coming in, but the new
           | one was infinitely slower. It wasn't some change in the
           | signal that made things slow, it was the software and the
           | complete lack of care for performance.
        
             | naikrovek wrote:
             | You not "buying" it doesn't make it untrue.
             | 
             | You get the cable box for the new system BEFORE they switch
             | the old system off, or else you wouldn't have service until
             | you got a new box. That's why you saw behavior change with
             | the new box.
             | 
             | It simply isn't possible to send all the channels to
             | customers at all times. There isn't enough bandwidth. So,
             | the cable box at your house negotiates with the central or
             | regional system so only a subset of channels are sent to
             | you. There is no other way to do it in digital cable
             | systems, and the switch to digital was made because it uses
             | significantly less bandwidth than analog.
        
             | LanceH wrote:
             | The old cable boxes were literally switches, with all
             | channels flowing into the box all the time. Now switching
             | is virtual and done on the server (plus all the software
             | encoding/decoding).
             | 
              | Right now I would be satisfied with at least _some_ caching
              | of the menus, so it doesn't have to pull the data _every_
             | _damn_ _time_ I scroll up or down. Come on, it should be
             | able to remember what channels I have for more than 5
             | minutes in a row.
        
               | naikrovek wrote:
               | It probably does cache the data, and it just takes ages
                | to draw. Manufacturers really cheap out on set-top box CPU
               | and RAM specifications in order to make the pricing
               | attractive to cable companies.
        
           | jedberg wrote:
            | And yet using Puffer [0] I get instant channel switches at
            | 1080p.
           | 
           | There is no technical reason you can't have instant channel
           | switches, it's just that they aren't making the right
           | technical decisions to allow it and/or don't want to pay for
           | it.
           | 
           | [0] https://puffer.stanford.edu
        
           | unfocused wrote:
            | OTA TV (rabbit ears picking up digital TV, as opposed to the
            | older analog NTSC), at least here in Canada, is still using
            | MPEG-2. The stream is typically around 20 Mbps, including
            | the audio, which is Dolby Digital 5.1 (except on certain
            | channels, such as TVO (TV Ontario), which is DD 2.0).
            | 
            | It's all based on ATSC. While there is an ATSC 2.0, they
            | skipped that in Canada; there will be an ATSC 3.0, which
            | will use HEVC (H.265) for 4K. But don't hold your breath on
            | that... the TV stations aren't exactly opening their wallets
            | to upgrade anything!
        
           | rsynnott wrote:
           | I'm not sure if this is the whole story. Watching DVB
           | terrestrial TV on my television, channel changing is
           | definitely slower than a good analog TV, but it's much faster
           | than any satellite/cable box I've ever used. And there's a
           | lot of variance between cable boxes. I strongly suspect some
           | of them are doing something silly.
        
           | zozbot234 wrote:
           | Most video players can decode an "in progress" stream just
           | fine. This obviously involves quite a few artifacts for the
           | first 0.5-2 seconds or so, but seeing artifacts is generally
           | preferable to seeing a totally blank screen.
        
             | formerly_proven wrote:
             | True, and the decoders in receivers already have this
             | capability, since they keep decoding even if the forward-
             | error-correction can't save the stream any more, resulting
             | in similar artifacts. I'm not sure why it's not done on a
             | channel change, perhaps it's the manufacturers not wanting
             | their TVs to routinely show glitched footage, or perhaps
             | advertisers don't want it because it would be against
             | Corporate Design rules to glitch-effect the logos they use
             | on the daily.
        
             | frandroid wrote:
             | It's obvious that TV makers have decided that seeing
             | artifacts is not preferable. I wish that was toggleable.
        
               | ipnon wrote:
               | Implicitly TV users are also prohibited from making their
               | own technical decisions. I claim this is because the TV
               | predates the era of open computing platforms like Unix.
        
               | dragonwriter wrote:
               | > I claim this is because the TV predates the era of open
               | computing platforms like Unix.
               | 
               | I claim it's because DRM, since if it wasn't for DRM
               | wanting control of the whole reception to display path to
               | be protected, you could just stick an open computing
               | device in between the streaming signal and the display
               | device, and make your own technical choices.
               | 
               | You still can, AFAIK, for content that doesn't require
               | HDCP.
        
               | grawprog wrote:
               | I remember having a TV tuner card back in the day on my
               | old desktop and this is exactly what I did. I don't watch
                | much TV or anything so I haven't really checked, but are
               | such cards available today with the encrypted digital
               | cable that's ubiquitous everywhere? Even most broadcast
               | is digital now and requires a decoder box between.
        
               | jfrunyon wrote:
               | OTA: Yes.
               | 
               | Cable: no, you'd need a CableCard, and approved device
               | (i.e. DVR), etc.
        
               | majormajor wrote:
               | I don't think it has anything to do with Unix or DRM or
               | any of that.
               | 
               | It's because we have a trendy now-decades-long wave of
               | product managers and designers who assume their job is to
               | _know better_ than the user.
               | 
               | Open source software is not immune.
        
               | formerly_proven wrote:
               | TV and broadcasting in general has always been mostly
               | controlled in a top-down manner by companies and
               | governments, since before computers existed.
        
               | jfrunyon wrote:
               | _cough_ Ubuntu _cough_
               | 
               |  _cough_ systemd _cough_
        
               | anw wrote:
               | > the TV predates the era of open computing platforms
               | like Unix.
               | 
               | Do you mean Unix-like? I'm not sure what you mean by
               | "open" in this instance, but Unix definitely was not very
               | open, leading to nascent platforms and movements such as
               | GNU and Free/Libre software.
        
             | catalogia wrote:
             | There is also the matter of modern screens needing 5-30
             | seconds to "boot" when old TVs and monitors turned on in
             | less than a second.
        
               | SilasX wrote:
               | Yes! I remember movies and TV shows would have scenes
               | where a character is called and told to turn on the TV
               | for breaking news. They'd see it the story instantly, and
               | that was actually realistic! (Assuming it was big enough
               | news to be on all channels.)
               | 
               | Today if you had such a scene they'd be like, "okay
               | <presses remote, waits five very immersion-breaking
               | seconds>".
        
               | formerly_proven wrote:
               | > Today if you had such a scene they'd be like, "okay
               | <presses remote, waits five very immersion-breaking
               | seconds>".
               | 
                | But they still have these scenes in movies.
                | 
                | The way these scenes work now is " _picks up phone_ -
                | check the news!", they grab the remote, and turn the
                | volume up on one of the many wall-mounted TVs that are
                | already running and tuned in 24/7... :)
        
               | SilasX wrote:
               | >But they still have these scenes in movies
               | 
               | Yes, and if such a TV turns on instantly, that scene is
               | not realistic.
               | 
                | (I think if they were going for realism today, they'd say
                | "pull up reddit/Drudge/Google News".)
        
               | formerly_proven wrote:
               | CRTs did tend to take a couple seconds to properly warm
               | up, charge up and reach final image size (the image is
               | directly scaled by the acceleration voltage, which is a
               | high impedance source charging a not-so-small capacitor
               | formed by the aquadag on the inside and outside of the
               | picture tube).
        
               | catalogia wrote:
               | CRTs did need a moment to stabilize the image, but they
               | showed signs of life virtually instantly. At the least
               | they'd make a little noise and had buttons with tactile
               | feedback so you knew something was happening. Many
               | screens today have capacitive touch buttons and have 5
               | seconds or more between a button 'press' and anything
               | happening at all, leaving you to wonder if you even
               | managed to successfully press the power button in the
               | first place.
        
             | Trias11 wrote:
             | I won't be surprised if TV manufacturers further screw
             | customers by forcing them to watch ads in between switching
             | channels.
        
             | gruez wrote:
             | Are we talking about seeking in media players, or streaming
              | websites? If I watch a random Twitch stream I just see a
              | loading throbber while it loads, not an artifacted version.
        
             | magicalhippo wrote:
             | Our cable box has three decoders. My gf can watch one
             | channel and record two others at the same time.
             | 
             | Yet does it use any of those extra decoders when not
             | recording to proactively decode the previous and next
             | channel, or something smart like that? No, of course not...
        
               | jlokier wrote:
               | Recording doesn't use a full decoder.
               | 
               | The incoming channel data stream is saved as-is. It will
               | need a demultiplexer to separate out one channel from the
               | multi-channel data stream, but it won't need to decode
               | that stream, which is the intensive bit. Decoding happens
               | when you play it back later.
        
               | SketchySeaBeast wrote:
               | > proactively decode the previous and next channel
               | 
                | Do you find yourself actually moving up and down the
                | channels, rather than going through the guide to somewhere
                | entirely different from where you once were? My first
               | move if I'm switching channels is to go to the guide, not
               | to a channel one above or below my current channel.
               | 
               | I suppose it could run on the previous channel, but it
               | certainly can't guess my next channel.
        
               | monocasa wrote:
               | The first startup I worked with was an IPTV startup that
               | would send the I-Frames for the channels on either side
               | of the channel you were watching at the higher bandwidth
               | tiers so you could channel flip instantly like the old
               | days.
               | 
                | Toxic culture and relationships meant the startup
                | imploded, but there was some cool tech.
        
         | iforgotpassword wrote:
         | While I can see the technical reason why channel changes take a
         | while with digital TV, I've always wondered why switching the
          | digital input of your TV, or just changing the resolution of
          | the connected device, takes so long. On most TVs it's over two
         | seconds. The signal is in most cases just a stream of
         | standalone, uncompressed frames. Switching from HDMI1 to HDMI2
         | should take a few milliseconds.
        
           | gregmac wrote:
           | There's a lot going on: HDCP handshake, resolution
            | negotiation, HDMI-CEC setup...
           | 
           | HDCP obviously being consumer-hostile, but the others are
           | decent features to make things "just work".
        
             | Dylan16807 wrote:
             | Why didn't it already do all of that, before I told it to
             | switch inputs?
        
             | iforgotpassword wrote:
             | Ok so HDCP might be an explanation although I wonder if it
             | has any impact if it's not in use. Don't really know how it
             | works tbh.
             | 
             | As for the rest: Resolution negotiation is optional and
             | wouldn't matter if the device is already outputting at some
             | resolution, also this happens on different pins, so even if
             | the device would first query the EDID of the TV to figure
             | out what mode it prefers, the TV could meanwhile already
             | display whatever the device is outputting. Same with CEC,
             | this is another protocol on different pins than where the
             | picture data is sent. HDMI really is just DVI with some
             | extras, at least for the older versions.
        
           | formerly_proven wrote:
           | I don't know why (but would be interested), but even PC
           | screens are usually very slow at switching (>1 second delay),
           | and for some inexplicable reason practically all of them turn
           | their backlight off while doing it.
        
           | kergonath wrote:
           | I've always assumed that's because TV manufacturers choose
            | the most under-powered SoC they can get away with, don't put
            | in half as much RAM as it needs, and let loose incompetent
            | programmers who never have to use the damn thing. Still
           | very frustrating.
        
         | commandlinefan wrote:
         | > I think it might just be the nature of progress, but I hate
         | it.
         | 
         | I suspect that the root cause is that nobody understands what's
         | going on from the UI down to the hardware, and nobody is
         | incentivized (or even allowed) to spend the time it would take
         | to actually do so.
        
         | kiba wrote:
         | _I think it might just be the nature of progress, but I hate
         | it._
         | 
         | Not really. It's just not a priority.
        
         | sandworm101 wrote:
          | Remember when you could look at a book, a _guide_, to see what
         | was on and when? It never took more than a minute or two. Then
         | came the TV channel with the slowly-scrolling list of channels
         | (early 90s). Try figuring out today which shows will be
         | available tomorrow at a particular time... and whether you have
         | subscribed to that channel. Good luck. I don't think it is even
         | possible anymore.
        
           | thatguy0900 wrote:
           | Eh, most cable companies now have a dedicated guide menu with
           | manual scrolling and search features rather than a scrolling
            | channel. I'd say that's one of the few things to have
            | improved since the old days.
        
         | jberryman wrote:
         | The history of the automobile is another such example. Brief
         | summary: it turns out the invention of the car didn't really
         | save anyone time, it just enabled sprawl.
         | 
         | The purpose of technology (in the POSIWID sense) is to
         | concentrate wealth.
         | 
         | https://www.wnycstudios.org/podcasts/otm/segments/self-drivi...
        
           | dantheman wrote:
           | Did the invention of the car lead to larger living spaces?
           | Perhaps people decided to trade time for space.
        
             | jfrunyon wrote:
             | Who are these "people" you're talking about? I would much
             | rather live close to everything and walk, bike or bus.
             | Instead I _have_ to have a car because work is 3 miles
             | away, doctors are 5-30 miles away, the grocery store is 5
             | miles away, the nearest convenience store is a mile away.
             | And I live in the middle of one of the biggest metro areas
             | in the country!
        
               | bluGill wrote:
               | I'd like to have a farm in the middle of downtown too. It
               | isn't possible. You get more choices in the same amount
               | of time with a car, and get more space as well. Without a
                | car you'd better like the doctor within walking distance
                | (my family considered them quacks and heard plenty of
                | stories about people almost dying because obvious things
                | were not caught in regular visits; instead the ER had to
                | figure them out when it was almost too late). YMMV; I
               | like choices.
               | 
               | Of course cars do take up a lot of space, but even
               | factoring that in you have more choices in a reasonable
               | time with a car than without.
               | 
               | Don't read this as me approving of cars. I understand the
               | appeal and drive, but I hate it.
        
             | stainforth wrote:
             | Maybe supply decided for them.
        
         | lazyjones wrote:
         | > _a general problem with technology as a whole._
         | 
         | No, not with technology as a whole. With _software_...
        
         | minimuffins wrote:
         | I don't think it's the nature of progress. There's no question
         | we have the _capacity_ to engineer our way out of these
         | problems. They aren't unsolvable. But a lot would have to
         | change before the necessary resources are mobilized against
         | these kinds of problems instead of churning out yet more bloat,
         | which, let's face it, is what most of us are doing with our
         | time every day.
         | 
         | Related: https://www.youtube.com/watch?v=kZRE7HIO3vk
         | 
         | I'm not expert enough to say if his technical solutions are
         | correct, but it's a pretty good explication of the problem.
        
           | sergeykish wrote:
           | He counts Linux as 17 MLoc, but half of that is drivers, and
           | tinyconfig is just 300 kLoc [1]. Is there any reason to
           | watch? Chrome is
           | huge though.
           | 
           | [1] https://unix.stackexchange.com/questions/223746/why-is-
           | the-l...
        
         | noja wrote:
         | I swear cell phones have a slight latency compared to a
         | landline too.
        
           | toast0 wrote:
           | Cell phones have encoding/compression delay on the voice
           | part, delays waiting for a transmit slot if the air interface
           | is time division multiplexed, sometimes a jitter buffer to
           | allow for resends/error correction, and often less than ideal
           | routing (lots of people are using numbers from out of state,
           | which may require the audio path to traverse that state).
           | 
           | Originally POTS was circuit switched analog connection
           | between you and the other party --- only delays from the
           | wires, and maybe amplifiers. Nowadays POTS is most likely
           | digitally sampled at the telco side, but each sample is sent
           | individually --- there's no delay to get a large enough
           | buffer to send, because for multiplexed lines each individual
           | line is sampled at the same rate and a frame has one sample
           | from each.
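           | 
           | Rough, typical numbers (mine, just for a sense of scale):
           | 
           |   cell: 20 ms codec frame + ~40-80 ms jitter buffer
           |         + air interface and routing => ~100 ms or more
           |   POTS: one sample every 1/8000 s = 0.125 ms, with no
           |         framing delay at all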
        
           | rootusrootus wrote:
           | The only thing cell phones have on copper land lines is
           | portability. In all other ways they suck. I think we only
           | tolerate this because most of us have completely forgotten
           | how much better land lines were, or we never experienced them
           | to begin with. The first big hit to phones came with the
           | cordless models. No longer comfortable to hold against your
           | ear, the earpieces got flat or even convex, and mashing it up
           | against your head made for an unpleasant conversation. But
           | hey, we got rid of cords! And then the change to cell phones,
           | with their tiny fraction of available bandwidth, terrible
           | sound quality, high latency, high failure rate, etc.
           | 
           | The astonishing thing is that bandwidth isn't a big deal now,
           | and we could have improved basically all aspects of mobile
           | calls to be within spitting distance of what we used to have
           | 30 years ago.
           | 
           | No wonder people don't like to talk on the phone any more.
        
             | bgun wrote:
             | > The only thing cell phones have on copper land lines is
             | portability. In all other ways they suck
             | 
             | Isn't that a little like saying "The only thing boats have
             | over cars is that they can go on water. In all other ways
             | they suck"? Portability is the entire point. Even in the
             | "good old days" most people would have accepted nearly any
             | tradeoff for the ability to carry even the simplest global
             | communications device with them.
        
           | rusk wrote:
           | Depends on your network and your landline, but comparing a
           | fairly iffy mobile network with a common POTS landline you
           | would definitely see a difference. Packet switching (vs
           | circuit switching) alone should in principle introduce some
           | latency - then there are a lot more interconnects - and on
           | top of that whatever hocus pocus they use to optimise their
           | (digital) bandwidth usage. Of course modern landlines are
           | probably more like mobile now, but it's not unreasonable to
           | expect circuit-switched analogue POTS is still widely used.
        
           | wrycoder wrote:
           | More than slight. Stand next to a friend and call him on your
           | cell.
           | 
           | Then try various things like saying "ping". The results are
           | quite amusing. Or put one phone on speaker.
        
         | Unklejoe wrote:
         | > Remember when channel changes on TVs were instantaneous?
         | 
         | I think about this every time I use the Hulu "Guide" on my Fire
         | TV. It's extremely slow and cumbersome.
         | 
         | I remember using the early digital cable boxes in like 2005
         | which were much more responsive, and honestly, the UI was much
         | better too.
        
         | TheOtherHobbes wrote:
         | Having used 14.4k dialup on a 486DX2-66MHz at 640 x 480 VGA, I
         | really don't think 2020s technology is outstandingly slow.
         | 
         | I don't even think most phone apps are bloated, because most
         | phone apps - and web sites - are just electronic forms
         | decorated with a bit of eye candy.
         | 
         | Security and reliability worry me far more. Many sites have
         | obvious bugs in $favourite_browser, and some just don't work at
         | all. Some of this is down to ad blocking, but that shouldn't be
         | a problem - and the flip side is that blocking ads, trackers,
         | and unwanted cookies seems to do wonders for page load speeds.
        
           | bluGill wrote:
           | I've used an 8 bit computer with a 300 baud modem. One BBS
           | I dialed into was an 8 bit computer with a whopping 4 MB of
           | RAM. It had the fastest response time of any computer I've
           | ever used. Everything was in RAM, and coded in assembly
           | with speed and low code space as the concerns.
           | 
           | Modern computers should be much faster, but they aren't. They
           | do more, but when you do something you notice the slow speed.
        
         | 0xdeadbeefbabe wrote:
         | > Ever noticed how long it takes to kill an app and restart it?
         | Ever notice how much more often you have to do that, even on a
         | 5-month old flagship phone
         | 
         | At least car crashes are still low latency.
        
       | giantg2 wrote:
       | Quite frankly, this is bigger than the server vs client comments
       | I've seen. This is not some new phenomenon. The efficiency of
       | code and architecture has declined over time for at least the
       | last 30 years. As compute and storage costs have come down
       | dramatically, the demand for labor has gone up. Who decides
       | what's really important in a project - the business. That comes
       | down to cost. If you can save money by using cheap hardware and
       | cheap architecture, then you save money by spending your human
       | resources on output rather than on efficient code...
        
       | darkhorse13 wrote:
       | Part of the problem is that modern JS frameworks make it
       | incredibly easy to mess up performance. I have seen mediocre devs
       | (not bad, but not great) make a mess of what should be simple
       | sites. Not blaming the frameworks, but it is still a problem to
       | be addressed.
        
       | sirjaz wrote:
       | The larger problem is that the web was never meant to be used the
       | way we use it. We should be making cross-platform apps that use
       | simple data feeds from remote sources
        
       | JJMcJ wrote:
       | There are still sites with very simple HTML/JS/CSS, and they load
       | so fast it's almost like magic.
        
       | tines wrote:
       | This assumes that the thing that should be held constant is
       | complexity, and that load times will therefore decrease. On the
       | contrary, load time itself is the thing being held (more or
       | less) constant, and complexity is the (increasing) variable.
       | 
       | Progress is not being able to do the same things we used to do
       | faster, but being able to do more in the same amount of time.
       | 
       | These seem to be equivalent, but they're not, because the first
       | is merely additive, but the second is multiplicative.
        
       | partiallypro wrote:
       | The main culprit, imo, is javascript. People/clients want more
       | and more complex things, and javascript and its libraries are
       | what deliver them. Image compression, minification... it all
       | helps, but if the page needs a lot of JS, it's going to be
       | slower.
       | 
       | Slightly off topic, but I have a site that fully loads in ~2
       | seconds but Google's new "Page Speed Insights" (which is tied to
       | webmaster tools now) give it a lower score than a page that takes
       | literally 45 seconds to fully load. Please someone at Google
       | explain this to me. At least GTMetrix/Pingdom actually makes
       | sense.
        
         | s1t5 wrote:
         | I'm not a web developer so I really have no idea about this -
         | is WebAssembly a viable solution or have I just absorbed some
         | hype without understanding the problems with JS?
        
           | flohofwoe wrote:
           | WebAssembly on its own won't help with web page bloat. As
           | with all things on web pages, it can both be used to add more
           | bloat, or to reduce bloat. Web page bloat isn't mainly a
           | technical problem and can't be solved by technology alone.
        
           | austincheney wrote:
           | No. WebAssembly is not enough faster than JavaScript to
           | make up for the features and APIs browsers supply with
           | JavaScript. That being said, WebAssembly is a superior
           | alternative to JavaScript only when you are not recreating
           | interactions already available with JavaScript. Where
           | WebAssembly shines is in things like large binary media,
           | games, and avoiding things like garbage collection.
        
         | bearjaws wrote:
         | Users expect load times proportional to the content expected;
         | if I am loading Photoshop I don't expect it to be quick.
         | 
         | However if I am loading Reddit, I expect it to be fast, and it
         | seems the websites that should load the fastest are now loading
         | the most 'non-essential' JS, leading to performance worse
         | than people's expectations.
         | 
         | In my experience JS adds at most hundreds of milliseconds,
         | and that's because people add dozens of marketing/tracking
         | scripts that bog the site down.
         | 
         | If you run ghostery / ublock the javascript eval time shrinks
         | dramatically. Our web app has a very large Angular app and it
         | still renders in under 200ms, but we don't have any "plugins"
         | due to working with PHI.
        
           | reaperducer wrote:
           | _Users expect load times proportional to the content
           | expected_
           | 
           | And how do you think that happened? It's not a chicken-and-
           | egg problem. Web pages got fat, and users' expectations got
           | lower. Lazy devs have trained people to expect the worst, not
           | the best. The app ecosystem wouldn't be half the size it is
           | if web pages worked as fast as native apps.
           | 
           |  _If you run ghostery / ublock the javascript eval time
           | shrinks dramatically._
           | 
           | Are you going to be the one to explain to the marketing
           | department why you put instructions for doing so at the top
           | of each of your company's web pages?
        
         | xenospn wrote:
         | Giant CSS bundles as well.
        
         | csark11 wrote:
         | What's your site's URL?
        
         | jefftk wrote:
         | If you post the links for the two sites I would enjoy looking
         | at them to figure out why PSI is giving such strange scores
        
         | xchip wrote:
         | The question is, why does your site need so much javascript?
         | Most probably it's for tracking :/
        
           | partiallypro wrote:
           | The great irony for Google is that their own tracking pixels
           | for Analytics, GTM, AdWords, Adsense are some of the biggest
           | culprits for pages hanging on loads.
        
             | XCSme wrote:
             | That's because you are not using AMP. /s
        
         | neurostimulant wrote:
         | Page Speed Insights mainly measures how fast your
         | above-the-fold content loads. It doesn't matter if your page
         | loads a bunch of
         | heavy js and images as long as it's deferred/lazy-loaded and
         | doesn't block initial render. For example, amp pages actually
         | load a lot of js for its component, but it doesn't block above-
         | the-fold render and thus scored really great on page speed
         | insights. Personally, I think page speed insights is one of
         | google's strategies to encourage people to use AMP more.
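         | 
         | A minimal sketch of that pattern (bundle name hypothetical):
         | defer the non-critical JS until after first render, e.g.
         | 
         |   // inject non-critical JS only after the page's own load
         |   // event, so it can't block the initial render
         |   window.addEventListener('load', () => {
         |     const s = document.createElement('script');
         |     s.src = '/heavy-analytics.js'; // hypothetical bundle
         |     document.body.appendChild(s);
         |   });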
         | 
         | Edit: Also, I find it comical that when you include recaptcha
         | v3 on your website, your page speed insight score can drop
         | almost 20 points. It is as if google doesn't want you to use
         | recaptcha at all.
        
           | sli wrote:
           | > but it doesn't block above-the-fold render and thus scored
           | really great on page speed insight
           | 
           | Which is ridiculous and just builds in the ability to game
           | the system, because in my experience, amp pages take upwards
           | of 5-8 seconds before anything of use to me actually loads,
           | while the non-amp version loads in a fraction of that time.
           | 
           | I imagine _someone_ is benefiting from amp, otherwise it
           | wouldn't be used, but I have not experienced a single case
           | where the amp version of a site wasn't _significantly_
           | slower.
        
             | neurostimulant wrote:
             | I think a while ago you could get a perfect 100 score on
             | page speed insights by putting all your content inside an
             | iframe
             | :)
        
               | tester756 wrote:
               | Last time I tried a WebAssembly website that loads 4-5
               | MB of data, and it managed to score something like >95
               | while the page was ready only after like 5-10 sec.
        
         | heisenbit wrote:
         | Angular, React, Vue, etc. are all getting better at reducing
         | bloat. Javascript packaging and tree shaking are also much
         | better. A lot of old compatibility stuff for IE etc. can also
         | be dumped now. JS compilation got faster. The reason we are
         | not seeing any speedup is the code that is not needed:
         | tracking and advertisements - this stuff seems to fill any
         | size and speed gains.
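         | 
         | Tree shaking only helps if the imports give the bundler
         | something to shake, though. A minimal sketch with lodash:
         | 
         |   // tree-shakable: the bundler can keep only debounce
         |   import debounce from 'lodash-es/debounce';
         | 
         |   // not shakable: the whole library lands in the bundle
         |   import _ from 'lodash';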
        
         | dgb23 wrote:
         | That statement seems too general.
         | 
         | JS in and of itself is not a performance issue in many cases.
         | It can even improve performance in terms of
         | speed/responsiveness.
        
           | grishka wrote:
           | People misuse and abuse JS all the dang time. Take Twitter:
           | you follow a link to a tweet from somewhere else, but the
           | first thing you see as the page loads is not the tweet --
           | it's everything _but the tweet itself_. There's a spinner
           | instead
           | of it, because rendering it server-side would've made too
           | much sense, I guess. Gotta make an API request from the
           | client and render it client-side just because that's so
           | trendy.
           | 
           | In other words, there are too many websites that are made as
           | "web apps" that should not be web apps.
        
             | bananaface wrote:
             | Which is made way worse by the fact that tweets are _280
             | characters of text_. It's absurd that Twitter has a
             | higher delay than a few round trips when they aren't
             | reloading any structural content. Wtf are these people on
             | 300k salaries
             | _doing_ all day?
        
               | grishka wrote:
               | This strict character limit is a defining feature of
               | microblogging, and it has nothing to do with engineering.
        
               | ezconnect wrote:
               | His point is that it's just 280 characters of text to
               | download to the client to show the tweet. The client
               | has downloaded probably 1,000x that much text before he
               | can even see what he wants to see.
        
           | 9HZZRfNlpR wrote:
           | Responsiveness, never. It does make complex projects
           | easier, but the problem is that developers often make them
           | complex for the sake of it.
        
             | ly wrote:
             | I disagree with "responsiveness never". Imagine you have a
             | "tabs" component on a page, each tab has some text (for
             | example [1]). With javascript, you can hide the content of
             | the first tab and show the second tab on click, almost
             | instantly.
             | 
             | Without javascript the second tab would be a link to
             | another HTML page where the second tab is shown. Exact same
             | behaviour for the user, however the one with a few lines of
             | javascript will feel way more responsive than the one
             | without, where the user has to wait for a new page to load.
             | 
             | [1] https://getbootstrap.com/docs/4.0/components/navs/#java
             | scrip...
        
               | reaperducer wrote:
               | _almost instantly_
               | 
               | Or you can do it in CSS _actually_ instantly.
               | 
               | And no, complexity generally doesn't negate this.
               | Earlier this year I built a series of complicated
               | medical forms for a healthcare web site that are all
               | HTML + CSS + < 2K of javascript, and they all respond
               | instantly because I didn't lean on JS to do everything.
               | 
               | The pages are fast, responsive, work on any device,
               | almost any bandwidth (think oncologists in the basement
               | of a hospital with an iPad on a bad cellular connection),
               | and the clients are thrilled.
        
               | theandrewbailey wrote:
               | Without Javascript, the tab can be a label that
               | activates a radio button next to a hidden div that has
               | an input:checked ~ div { display: block; } CSS rule on
               | it. No Javascript required.
               | 
               | https://codepen.io/dhs/pen/diasg
        
               | ly wrote:
               | Yeah, you're right that simple tabs can also be
               | implemented using css, but I still disagree. Another
               | example: how about a simple search on a table [1].
               | 
               | In this case the search is instant. Without JS you would
               | have to have a submit button and wait for the request.
               | Even if you also added a button to the JS version, it
               | would still feel more responsive, as again, you're not
               | waiting for the request.
               | 
               | [1] https://www.w3schools.com/howto/tryit.asp?filename=tr
               | yhow_js...
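               | 
               | A sketch of the idea (element ids are hypothetical):
               | 
               |   const rows =
               |     document.querySelectorAll('#data tbody tr');
               |   const input = document.getElementById('search');
               |   input.addEventListener('input', () => {
               |     const q = input.value.toLowerCase();
               |     for (const row of rows) {
               |       row.hidden =
               |         !row.textContent.toLowerCase().includes(q);
               |     }
               |   });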
        
               | lelandbatey wrote:
               | I'm not sure I've ever seen a case where I have a data
               | set that's small enough to be quickly searchable (and
               | quickly re-renderable) using client-side JS but big
               | enough that I need dedicated and app-like sort, search,
               | or query functionality. And were such a set of data to
               | exist, that data set would almost certainly be _growing_
               | with time, meaning even if it started out as something
               | where I can have snappy JS search, as time passes the JS
               | search grows heavier and slower through time.
               | 
               | Additionally, when it comes to client-side spreadsheets I
               | have seen far more terrible half client-side, half
               | server-side implementations (being only able to sort
               | within a page, instead of across all pages of results).
               | If I had to choose one, I'd choose a world where all we
               | had were server-side spreadsheets.
        
             | amiga wrote:
             | Job security!
        
         | rozenmd wrote:
         | From what I've seen building PerfBeacon.com, it seems more like
         | images in e-commerce sites are the worst offenders.
         | 
         | Slowest site I've tested so far had a page size of 8.5MB, 80%
         | of that was images.
        
           | doteka wrote:
           | Hah, if only it was images. At work, our b2b app loads about
           | 5MB of minified and compressed JavaScript and I don't even
           | know anymore.
        
             | rozenmd wrote:
             | Ouch, does the app use code splitting at least?
        
             | Escapado wrote:
             | At my current gig we do that too, for a dead simple
             | frontend: a couple of simple tables and cards, a
             | navigation and a footer, and we're easily over 2 MB. But
             | as I have been told, that's what happens when Google
             | Analytics gets pulled in and all the code needs to adhere
             | to enterprise-style DDD (in a react app, mind you).
             | Apparently 400 layers of indirection and encapsulation
             | are the way to go...
             | 
             | edit: Just looked it up. header has a logo and 5 links.
             | footer has a scroll to top button and 10 links. both
             | responsive. How many lines of code you ask? A little over
             | 3400.
        
         | austincheney wrote:
         | More specifically, the culprit is generally unnecessary
         | string parsing. Every CSS selector lookup, such as
         | querySelector and jQuery operations, requires parsing a
         | string. Doing away with that nonsense and learning the DOM
         | API could make a JavaScript application anywhere from
         | 1200x-10000x faster (not an exaggeration).
         | 
         | Most JavaScript developers will give up a lung before giving up
         | accessing the page via selector strings. Suggestions to the
         | effect are generally taken as personal injuries and immediately
         | met with dire hostility.
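         | 
         | A sketch of the comparison (markup hypothetical; measure it
         | yourself in a perf tool before trusting anyone's numbers):
         | 
         |   // selector string parsed on every call
         |   let n = 0;
         |   for (let i = 0; i < 100000; i++) {
         |     n += document.querySelector('.price').textContent.length;
         |   }
         | 
         |   // vs. resolving the node once via the DOM API
         |   const el = document.getElementsByClassName('price')[0];
         |   for (let i = 0; i < 100000; i++) {
         |     n += el.textContent.length;
         |   }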
        
           | chubot wrote:
           | [citation needed] There are a lot of other reasons why client
           | side JS is slow, including page reflows, bad use of the
           | network, etc. I'm not a front end dev but I have fixed many
           | performance problems before, and I've never seen parsing CSS
           | selectors as a bottleneck.
           | 
           | I've seen some data driven work like this:
           | https://v8.dev/blog/cost-of-javascript-2019
           | 
           | I don't think they mentioned parsing CSS selectors anywhere.
           | Shipping too much code is a problem, because megabytes of JS
           | is expensive to parse, but IIUC that is distinct from your
           | claim.
        
             | austincheney wrote:
             | > I'm not a front end dev
             | 
             | But I am.
             | 
             | You are correct in that there are many other
             | opportunities to further increase performance. If
             | performance were that
             | important you would also shift your attention to equally
             | improve code execution elsewhere in your application stack.
             | 
             | > and I've never seen parsing CSS selectors as a
             | bottleneck.
             | 
             | It doesn't matter what our opinions are or what we
             | have/haven't seen. The only thing that matters are what the
             | performance measurements say in numbers.
             | 
             | EDIT
             | 
             | To everybody asking for numbers I recommend conducting
             | comparative benchmarks using a perf tool. Here is a good
             | one:
             | 
             | http://jsbench.github.io/
             | 
             | I posted a performance example to HN before and people
             | twisted themselves into knots to ignore numbers they could
             | easily validate and reproduce themselves.
        
               | bscphil wrote:
               | It's funny, you say
               | 
               | > It doesn't matter what our opinions are or what we
               | have/haven't seen. The only thing that matters are what
               | the performance measurements say in numbers.
               | 
               | But your only argument here is not numbers, but appeal to
               | authority:
               | 
               | > > I'm not a front end dev
               | 
               | > But I am.
               | 
               | I don't have any particular reason to doubt you, but if
               | objective numbers should rule the day here, maybe you
               | could link to an article comparing the performance of a
               | simple application using CSS selectors and then switching
               | to using the DOM API?
        
               | pas wrote:
               | Could you provide said numbers? Maybe you have seen a lot
               | of pathological cases, but that might not be
               | representative.
        
               | UberMouse wrote:
               | Then provide some numbers?
        
               | cuddlecake wrote:
               | But still, please follow up on [citation needed] please
        
           | fermienrico wrote:
           | At this point, we need to reimagine what a web browser is and
           | how it should be.
           | 
           | It cannot happen since we want backwards compatibility.
        
         | vlunkr wrote:
         | I think JS does play a big role, but my guess is that the 3rd
         | party stuff is a lot heavier than the scripts that actually
         | make the page work. You have to write tons of code before the
         | load
         | time even matters, but when you have a bunch of "analytics",
         | advertising, and social media integration scripts it adds up,
         | especially when each ad is essentially its own page with
         | images and scripts.
         | 
         | If you use Privacy Badger or similar plugins, you see that it's
         | not uncommon for websites to have an obscene amount of these.
         | 
         | TLDR: I think ads are slowing the internet down way more than
         | React apps.
        
         | nomel wrote:
         | Perhaps they test with simulated reduced bandwidth?
        
       | johannes1813 wrote:
       | This reminds me a lot of a Freakonomics podcast episode
       | (https://freakonomics.com/2010/02/06/the-dangers-of-safety-fu...)
       | where they discuss different cases where increased safety
       | measures just encouraged people to take more risk, resulting in
       | the same or even increased numbers of accidents happening. A good
       | example is that as football helmets have gotten more protective,
       | players have started hitting harder and leading with their head
       | more.
       | 
       | Devs have been given better baseline performance for free based
       | on internet speeds, and adjust their thinking around writing
       | software quickly vs. performantly accordingly, so we stay in one
       | place from an overall performance standpoint.
        
         | Taek wrote:
         | In the case of devs it's not just staying in the same place
         | though. You get more complete frameworks, more analytics,
         | better ad engines, and faster development pace.
         | 
         | It might not be what we wanted, but it is a benefit
        
         | pravus wrote:
         | This is known as Jevons paradox in economics and the classic
         | example in modern times is rates of total electricity usage
         | going up while devices have become ever more energy efficient.
         | 
         | https://en.wikipedia.org/wiki/Jevons_paradox
        
       | Jeaye wrote:
       | Title should be "Despite an increase in Internet speed, webpage
       | speeds have not improved", since webpage speeds have not acted in
       | spite of internet speed.
        
       | kbuchanan wrote:
       | To me it's more evidence that increased speed and reduced
       | latency are not where our real preferences lie: we may be more
       | interested
       | in the _capabilities_ of the technology, which have undoubtedly
       | improved.
        
         | ClumsyPilot wrote:
         | To me the increasing tendency of Boeings to crash is evidence
         | that safety is not where our real preferences lie.
         | 
         | To me the increasing tendency of junk stocks to get AAA
         | ratings is evidence that profitable investment is not where
         | our real preferences lie.
         | 
         | To me the increased prevalence of obesity and heart disease
         | is evidence that staying healthy and alive is not where our
         | real preferences lie.
        
         | HumblyTossed wrote:
         | I'm unclear who you mean when you say "our" and "we".
         | Developers? Producers? Consumers?
        
       | cozzyd wrote:
       | Oh but they have... assuming you leave ublock enabled :)
        
       | alkonaut wrote:
       | Despite an increase in computer speed, software isn't faster.
       | It _does more_ (the good case) or it's simply sloppy, but
       | that's not necessarily a bad thing because it means it was
       | cheaper/faster to develop.
       | 
       | Same with web pages. You deliver more and/or you can be sloppier
       | in development to save dev time and money. Shaking a dependency
       | tree for a web app, or improving the startup time for a client
       | side app costs time. That's time that can be spent either adding
       | features or building another product entirely, both of which
       | often have better ROI than making a snappier product.
        
         | tuatoru wrote:
         | Why are you valuing the time of a developer so much more highly
         | than the time of all the users of the web page?
         | 
         | Page load time affects every user; additional features only
         | improve life for a few of them.
        
           | alkonaut wrote:
           | ROI might say a developer should build a different product
           | instead of speeding up the old one. Or perhaps it's better
           | to gain 200 new, less-satisfied customers than to make the
           | 100 existing ones more satisfied. That can be done by using
           | the resources
           | for marketing, features, SEO. In the end, when you are
           | optimizing there is always something you are _not_ doing with
           | that time.
           | 
           | Whether hundreds of users value the time they gain by not
           | waiting for page loads isn't relevant either unless it
           | actually converts to more sales (or some other tangible
           | metric like growth).
        
           | jtsiskin wrote:
           | More features, which the user usually prefers over speed.
        
             | tuatoru wrote:
             | Not from my observation.
             | 
             | Most people seem to get more confused and hesitant when
             | pages are loaded with more features, most of which are
             | irrelevant to their needs of the moment. (Of course flat
             | design makes this hesitation worse.)
             | 
             | And the theory talks about "cognitive overload" and
             | "choice paralysis".
        
       | vlovich123 wrote:
       | Parkinson's law at work.
       | 
       | Employees building the web pages are rewarded for doing "work".
       | Work typically means adding code, whether it's features,
       | telemetry, "refactoring" etc. More code is generally slower than
       | no code.
       | 
       | That's why you see something like Android Go for entry-level
       | devices & similar "lite" versions targeting those regions. These
       | will have the same problem over time because even entry-level
       | devices get more powerful.
       | 
       | The problem is that organizations don't have good tools to
       | evaluate whether a feature is worth the cost so there's no back
       | pressure except for the market itself picking alternate solutions
       | (assuming those are options - sometimes they may not be if
       | you're looking at browsers or operating systems where generally a
       | "compatibility" layer has been defined that everyone needs to
       | implement).
        
       | manigandham wrote:
       | The reason websites have gotten worse is that they don't make
       | performance a priority. That's all it is. Most sites optimize
       | for
       | ad revenue and developer time (which lowers costs) instead.
        
       | bbulkow wrote:
       | I am surprised to not see a product reason.
       | 
       | There is one.
       | 
       | Engagement falls off when there is a delay in experience past
       | a certain point, usually considered around 100ms to 150ms,
       | with extreme drop-off at a second or higher. This has to do
       | with human perception and can be measured through a/b analysis
       | and similar.
       | 
       | Engagement does not get better if you go faster past that
       | point. Past that point, you should have a richer experience,
       | more things on the page, whatever you want - or reduce cost by
       | spending less on engineering. Certainly don't spend more money
       | on a 'feature' (speed) that doesn't return money.
       | 
       | Ad networks are run on deadline scheduling: find the best ad
       | in 50 ms, don't find any old ad as quickly as possible.
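       | 
       | A sketch of that deadline pattern in JS (names and the bid
       | shape are hypothetical):
       | 
       |   // return the best bid that arrives within the deadline
       |   async function bestAdWithin(deadlineMs, bidders) {
       |     const bids = [];
       |     const all = Promise.allSettled(
       |       bidders.map(b => b().then(x => bids.push(x)))
       |     );
       |     const timer = new Promise(r => setTimeout(r, deadlineMs));
       |     await Promise.race([all, timer]);
       |     bids.sort((a, b) => b.cpm - a.cpm);
       |     return bids[0]; // undefined if nothing made the deadline
       |   }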
       | 
       | Haven't others who have been involved with engagement analysis
       | found the same?
        
       | dainiusse wrote:
       | The same as with smartphones - CPUs improved, but the battery
       | still holds one day.
        
       | k__ wrote:
       | I'm thinking about this problem often lately.
       | 
       | Just a few weeks ago I saw a size comparison of React and Preact
       | pages. While Preact is touted as a super slim React alternative,
       | in real-life tests the problem were the big components and not
       | the framework.
       | 
       | This could imply that we need to slim down code at a different
       | level of the frontend stack. Maybe UI kits?
       | 
       | This could also imply that frontend devs simply don't know how to
       | write concise code or don't care as much as they say.
        
       | jbob2000 wrote:
       | It's the marketing team's fault. I proposed a common,
       | standardized solution for showing promotions on our website, but
       | no... they wanted iframes so their designers could use a WYSIWYG
       | editor to generate HTML for the promotions. This editor mostly
       | generates SVGs, which are then loaded in to the iframes on my
       | page. Most of our pages have 5-10 of these iframes.
       | 
       | Can someone from Flexitive.com please call up my marketing
       | coworkers and tell them that they aren't supposed to use that
       | tool _for actual production code_?
       | 
       | Can someone also call up my VP and tell them they are causing
       | huge performance issues by implementing some brief text and an
       | image with iframes?
       | 
       | Can someone fire all of the project managers involved in this for
       | pushing me towards this solution because of the looming deadline?
        
         | oblio wrote:
         | Is your company making money? If so, they're doing their job
         | :-)
        
           | jbob2000 wrote:
           | We're one of the most profitable companies in my country, and
           | we're probably on the S&P 500.
        
       | teabee89 wrote:
       | There's a fancy name for this effect: Jevons paradox
       | https://en.wikipedia.org/wiki/Jevons_paradox
        
       | draaglom wrote:
       | Original data here:
       | 
       | https://httparchive.org/reports/loading-speed?start=earliest...
       | 
       | The degree to which desktop load times are stable over 10 years
       | is in itself interesting and deserves more curiosity than just
       | saying "javascript bad"
       | 
       | Plausible alternate hypotheses to consider for why there has
       | been little improvement:
       | 
       | * Perhaps this is evidence for control theory at work, ie website
       | operators are actively trading responsiveness for functionality
       | and development speed, converging on a stable local maximum?
       | 
       | * Perhaps load times are primarily determined by something other
       | than raw bandwidth (e.g. latency, which has not improved as
       | much)?
       | 
       | * Perhaps this is more measuring the stability of the test
       | environment than a fact about the wider world?
       | 
       | https://httparchive.org/faq#what-changes-have-been-made-to-t...
       | 
       | If this list of changes is accurate, that last point is probably
       | a significant factor -- note that e.g. there's no mention of
       | increased desktop bandwidth since 2013.
        
       | ffggvv wrote:
       | this isn't that surprising when you see they mean "throughput"
       | and not "latency" when they talk about speed.
       | 
       | webpages aren't super large files so it would depend more on the
       | latency of the request than on Mbps
        
         | em-bee wrote:
         | exactly. internet speed never was the issue. i am in a place
         | where international network speed is a fraction of the domestic
         | speeds. (often less than one mbit) yet websites are still just
         | as fast. it all depends on how fast the server responds to the
         | request, and almost never on how much the site has to load,
         | unless there is a larger amount of images involved.
        
       | wintorez wrote:
       | I think we need to start differentiating between webpage speeds
       | and web application speeds. Namely, a webpage would work if I
       | disable the JavaScript in my browser, but a web application would
       | not. By this definition, web page speeds have improved a lot.
        
       | bilater wrote:
       | But could the argument be made that we are loading a shit load
       | more content so even though it feels slower you're getting a
       | richer UX to work with?
        
         | Dahoon wrote:
         | Richer is often worse though. So the worst of both worlds.
        
       | mostlystatic wrote:
       | HTTP Archive uses an emulated 3G connection to measure load
       | times, so of course faster internet for real users won't make the
       | reported load times go down.
       | 
       | Am I missing something?
       | 
       | https://httparchive.org/faq
        
       | CM30 wrote:
       | You could probably say the exact same thing about video game
       | consoles and loading times/download speeds/whatever. The
       | consoles got more
       | and more powerful, but the games still load in about the same
       | amount of time (or more) than they used to, and take longer to
       | download.
       | 
       | And the reasoning there is the same as for webpage speed or
       | congestion on roads: the more resources/power are available
       | for use, the more society will take advantage of them.
       | 
       | The faster internet connections get, the more web developers will
       | take advantage of that speed to deliver types of sites/apps that
       | weren't possible before (or even more tracking scripts than
       | ever). The more powerful video game systems get, the more game
       | developers will take advantage of that power for fancier graphics
       | and larger worlds and more complex systems. The more road
       | capacity we get, the more people will end up driving as their
       | main means of transport.
       | 
       | There's a fancy term for that somewhere (and it's mentioned on
       | this site all the time), but I can't think of it right now.
        
         | avani wrote:
         | I think you may be referring to Jevons paradox:
         | https://en.wikipedia.org/wiki/Jevons_paradox
        
           | CM30 wrote:
           | Yeah, that was it.
           | 
           | Thanks for the link!
        
         | m00dy wrote:
         | The new PlayStation console has the fastest SSD on the
         | market to address that issue.
        
         | setr wrote:
         | One of my heuristics for video game quality is the main menu
         | transition speed -- you only care about the menu animations
         | once, on first view; after that, you want to get something
         | done (eg fiddle with settings, start the game, items, etc).
         | So it should be fast, or whatever animation skippable with
         | rapid tapping. Any game designer that doesn't realize this
         | likely either doesn't have any taste, or is not all that
         | interested in anyone actually _playing_ the game.
         | 
         | This heuristic has served me stupidly well, and repeatedly gets
         | triggered on a significant proportion of games -- and comes out
         | correct
         | 
         | The actual level loading times of games don't matter all
         | that much. Games go out of their way to be (or feel)
         | slow/sluggish/soft/etc.
        
       | swyx wrote:
       | Wirth's Law reborn: "Software is getting slower more rapidly than
       | hardware is becoming faster."
       | 
       | https://en.wikipedia.org/wiki/Wirth%27s_law
        
       | oblio wrote:
       | While I agree with the idea and I am not happy about slow apps,
       | the truth is, it's focused on technical details.
       | 
       | People don't care about speed or beauty or anything other than
       | the application helping them achieve their goals. If they can
       | do more
       | with current tech than they could with tech 10-20 years ago,
       | they're happy.
        
         | Dahoon wrote:
         | >People don't care about speed
         | 
         | Every statistically backed study on customer behaviour I
         | have ever seen says otherwise. The more you slow down the
         | page or app, the less customers like and use it or buy the
         | product being sold. As someone with a homemade site for our
         | business I can say that it is extremely easy to be faster
         | than 95% of sites out there, and it makes a huge difference,
         | also on Google. Getting a tiny business with a homemade
         | website into the top 1-3 on Google was mind-bafflingly easy,
         | because everyone uses too many external sources and
         | preschool-level code. Especially the so-called experts. Most
         | are experts in bloat.
        
           | oblio wrote:
           | If you're small and have actual competition or not that great
           | of a market fit, sure.
           | 
           | If you're Google, Facebook, Oracle, etc, nobody cares. They
           | just endure it to get what they really want.
        
         | [deleted]
        
       | pea wrote:
       | This has become a big enough burning pain point for me that I
       | would pay for a service which provided passably fast versions
       | of the web-based tools which I frequently have to use.
       | 
       | In my day-to-day as a startup founder I use these tools where the
       | latency of every operation makes them considerably less
       | productive for me (this is on a 2016 i5 16GB MBP):
       | 
       | - Hubspot
       | 
       | - Gmail (with Apollo, Boomerang Calendar, and HubSpot extensions)
       | 
       | - Intercom (probably the worst culprit)
       | 
       | - Notion (love the app - but it really seems 10x slower than a
       | desktop text editor should be imo)
       | 
       | - Apollo
       | 
       | - LinkedIn
       | 
       | - GA
       | 
       | - Slack
       | 
       | The following tools I use (or have used) seem fast to me to the
       | point where I'd choose them over others:
       | 
       | - Basecamp
       | 
       | - GitHub (especially vs. BitBucket)
       | 
       | - Amplitude
       | 
       | - my CLI - not being facetious, but using something like
       | https://github.com/go-jira/jira over actual jira makes checking
       | or creating an issue so quick that you don't need to context
       | switch from whatever else you were doing
       | 
       | I know it sounds spoiled, but when you're spending 10+ hours a
       | day in these tools, latency for every action _really_ adds up -
       | and it also wears you down. You dread having to sign in to
       | something you know is sluggish. Realistically I cannot use any
       | of these tools with JS disabled; the best option is basically
       | to use a fresh Firefox (which you can't for a lot of Gmail
       | extensions)
       | with uBlock. I tried using Station/Stack but they seemed just as
       | sluggish as using your browser.
       | 
       | It's probably got a bunch of impossible technical hurdles, but I
       | really want someone to build a tool which turns all of these
       | into something like an old.reddit.com or hacker news style
       | experience, where things happen in under 100ms. Maybe a
       | stepping stone is a way
       | to boot electron in Gecko/Firefox (not sure what happened to
       | positron).
       | 
       | The nice thing about tools like Basecamp is that because loading
       | a new page is so fucking fast, you can just move around different
       | pages like you'd move around the different parts of one page on
       | an SPA. Browsing to a new page seems to have this fixed cost in
       | people's minds, but realistically it's often quicker than waiting
       | for a super interactive component to pull in a bunch of data and
       | render it. Their website is super fast, and I think their app is
       | just a wrapper around the website, but is still super snappy.
       | It's exactly the experience I wish every tool I used had.
       | 
       | IMO there are different types of latency - I use some tools which
       | aren't "fast" for everything, but seem extremely quick and
       | productive to use for some reason. For instance,
       | IntelliJ/PyCharm/WebStorm is slow to boot - fine. But once you're
       | in it, it's pretty quick to move around.
       | 
       | Can somebody please build something to solve this problem!
        
       | anticensor wrote:
       | Not news: Wirth's law has been known for a long time.
        
       | jrnichols wrote:
       | Of course not. People remain convinced that the internet will
       | cease to exist without advertisements all over the place. Web
       | pages are now 10mb+ in size, making 20 different DNS calls, all
       | of which add latency. And for what? To serve up advertisements
       | wrapped around (or laying themselves over) the content that we
       | came to read in the first place.
       | 
       | Maybe I'm just old, but I fondly remember web pages that loaded
       | reasonably fast over a 56k modem. These days, if I put anything
       | on the web, I try to optimize it the best I can. Text only,
       | minimal CSS, no javascript if at all possible.
       | 
       | I hope more people start doing that.
        
       | rootsudo wrote:
       | I remember reading a story about how engineers at Youtube
       | started receiving more tickets/complaints about connectivity
       | in Africa after reducing the page loading time.
       | 
       | They were confused: given the bandwidth limitations and their
       | previous statistics it didn't make sense - though it turned
       | out they hadn't had useful statistics from there before.
       | 
       | By reducing the page size, they had finally made it possible
       | for African users to load the page - just not the video.
       | 
       | I thought that was interesting; it puts it right up there with
       | that email-at-lightspeed copypasta.
        
       | hpen wrote:
       | I think most software is built with some level of tolerance for
       | performance, and the stack / algorithms / features implemented
       | are chosen to meet that tolerance. Basically, as hardware gets
       | faster, it's seen as a way to make software cheaper.
        
       | sushshshsh wrote:
       | Really? My text only websites that I've written and hosted for
       | myself are really snappy. I wouldn't know the feeling.
       | 
       | All I needed to do was spend a weekend scraping everything I
       | needed so that I could self host it and avoid all the ridiculous
       | network/cpu/ram bloat from browsing the "mainstream" web
        
       | malwarebytess wrote:
       | It's a hard pill to swallow that, 20 years after I started
       | using the internet, websites perform worse on vastly superior
       | hardware, especially on smartphones.
        
       | temporama1 wrote:
       | JavaScript is not the problem.
       | 
       | Computers are AMAZINGLY fast, EVEN running JavaScript. Most of us
       | have forgotten how fast computers actually are.
       | 
       | The problem is classes calling functions calling functions
       | calling libraries calling libraries.....etc etc
       | 
       | Just look at the depth of a typical stack trace when an error is
       | thrown. It's crazy. This problem is not specific to JavaScript.
       | Just look at your average Spring Boot webapp - hundreds of
       | thousands of lines of code, often to do very simple things.
       | 
       | It's possible to program sanely, and have your program run very
       | fast. Even in JavaScript.
        
         | xmprt wrote:
         | I don't think people are claiming javascript execution speed is
         | the culprit. Javascript can be slow but computers are also
         | fast. However, loading all that javascript takes a long
         | time, especially if the website isn't optimized properly and
         | blocks on non-critical code.
        
         | easterncalculus wrote:
         | I think the problem is that languages like Javascript and
         | object oriented languages in general actually incentivize this
         | kind of design. Most of the champions of OOP rarely ever look
         | at stack traces or anything relating to lower-level stuff (in
         | my experience, in general). Then you take that overhead to the
         | browser and expect it to scale to millions of users. It just
         | doesn't make sense. No amount of TCO is going to fix the
         | problem either.
         | 
         | APIs are going to be used as they're written, and as
         | documented. So as much as there is a problem with people
         | choosing to do things wrong, I think the course correction
         | of those people is a strong enough force - at least in
         | comparison to when the design _incentivizes_ bad performance.
         | There's basically nothing but complaining to the sky when the
         | 'right' way is actually terrible in practice.
        
       | bob1029 wrote:
       | It doesn't have to be this way. I am not sure when there was a
       | new rule passed in software engineering that said that you shall
       | never use server rendering again and that the client is the only
       | device permitted to render any final views.
       | 
       | With server-side (or just static HTML if possible), there is so
       | much potential to amaze your users with performance. I would
       | argue you could even do something as big as Netflix with pure
       | server-side if you were very careful and methodical about it.
       | Just throwing your hands up and proclaiming "but it wont scale!"
       | is how you wind up in a miasma of client rendering, distributed
       | state, etc., which is ultimately 10x worse than the original
       | scaling problem you were faced with.
       | 
       | There is a certain performance envelope you will never be able to
       | enter if you have made the unfortunate decision to lean on client
       | resources for storage or compute. Distributed anything is almost
       | always a bad idea if you can avoid it, especially when you
       | involve your users in that picture.
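       | 
       | A minimal sketch of the pure server-side shape in Node (data
       | hard-coded as a stand-in for a real query):
       | 
       |   const http = require('http');
       | 
       |   http.createServer((req, res) => {
       |     const rows = ['a', 'b', 'c']; // stand-in for a DB query
       |     const list = rows.map(r => `<li>${r}</li>`).join('');
       |     res.writeHead(200, {'Content-Type': 'text/html'});
       |     res.end(`<!doctype html><ul>${list}</ul>`);
       |   }).listen(8080);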
        
         | FpUser wrote:
         | "With server-side (or just static HTML if possible), there is
         | so much potential to amaze your users with performance."
         | 
         | Actually I am amazing my users with C++ data servers and all
         | rendering done by JS in the browser. What I do not do is
         | hook up those monstrous frameworks. My client side is pure
         | JS. It is small and the response feels instant.
        
           | j-krieger wrote:
           | And it does not scale to a business application that needs to
           | be deployed independently of target systems across the world.
        
         | austincheney wrote:
         | > It doesn't have to be this way.
         | 
         | You could hire competent developers who know how these
         | technologies actually work. Server side rendering is better
         | but still not ideal, because the incompetence is merely
         | pushed from the load event to later user interactions. The
         | performance penalty associated with JavaScript could be
         | removed almost entirely by supplying more performant
         | JavaScript regardless of where the page is rendered.
        
           | phendrenad2 wrote:
           | To me, client-side rendering feels like an end-run around
           | incompetent full-stack devs who don't know how to make
           | server-side rendering fast. So why not throw a big blob of JS
           | at the user (where their Core i7 machine and 16GB of RAM will
           | munch through it), and on the backend, the requests go
           | straight to the API team's tier (who know how to make APIs
           | fast).
        
           | bob1029 wrote:
           | There are other advantages to server-side other than the
           | specific professionals involved in the implementation.
           | 
           | Server rendered web applications are arguably easier to
           | understand and debug as well. With something on the more
           | extreme side of the house like Blazor, virtually 100% of the
           | stack traces your users generate are directly actionable
           | without having to dig through any javascript libraries or
           | separate buckets of client state. You can directly breakpoint
           | all client interactions and review relevant in-scope state to
           | determine what is happening at all levels.
           | 
           | One could argue that this type of development experience
           | would make it a lot easier to hire any arbitrary developer
           | and make them productive on the product in a short amount of
           | time. If you have to spend 2 weeks just explaining how your
           | particular interpretation of React works relative to the rest
           | of your contraption, I suspect you won't see the same kind
           | of productivity gains.
        
             | austincheney wrote:
             | This is completely subjective, but if you want reduced
             | maintenance expenses then don't rely on any third-party
             | library to do your job for you, regardless of which side
             | of the HTTP call it occurs on. Most developers don't use
             | this nonsense to save time or reduce expenses. They use
             | it because they cannot deliver without it, regardless of
             | the expense. The "win" in that case is that developers
             | become more easily interchangeable pieces, with less
             | reliance upon people who can directly read the code.
        
         | cactus2093 wrote:
         | This type of hard-line comment does great on Hacker News and
         | sounds good, but my personal experience has always been very
         | different. Every large server-rendered app I've worked on ends
         | up devolving to a mess of quickly thrown together js/jquery
         | animations, validations, XHR requests, etc. that is a big pain
         | to work on. You're often doing things like adding the same
         | functionality twice, once on the server view and once for the
         | manipulated resulting page in js. Every bit of interactivity/
         | reactivity that product wants to add to the page feels like a
         | weird hack that doesn't quite belong there, polluting the
         | simple, declarative model that your views started off as. None
         | of your JS is unit tested, sometimes not even linted properly
         | because it's mixed into the templates all over the place. The
         | performance still isn't a given either; your rendering times
         | can still get out of hand, and you end up having to do
         | things like caching partially rendered page fragments.
         | 
         | The more modern style of heavier client-side js apps lets you
         | use software development best practices, to structure, reuse,
         | and test your code in ways that are more readable and
         | intuitive. You're still of course free to mangle it into
         | confusing spaghetti code, but the basic structure often just
         | feels like a better fit for the domain if you have even a
         | moderate amount of interactivity on the page(s). As the team
         | and codebase grows the structure starts to pay off even more in
         | the extensibility it gives you.
         | 
         | There can be more overhead as a trade-off, but for the majority
         | of users these pages can still be quite usable even if they are
         | burning more cycles on the users' CPUs, so the trade-offs are
         | often deemed to be worth it. But over time the overhead is also
         | lessening as e.g. the default behavior of bundlers is getting
         | smarter and tooling is improving generally. You can even write
         | your app as js components and then server-side render it if
         | needed, so there's no need to go back to rails or php even if a
         | blazing fast time to render the page is a priority.
        
         | wnevets wrote:
         | It seems like a lot of websites could replace
         | react/angular/whatever framework with some simple jquery and
         | html, but that's not cool and not good on your resume. So
         | now we have UI frameworks and custom CSS engines with a
         | server-side build pipeline deploying to docker, just for a
         | photo gallery.
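         | 
         | For instance, a plain-DOM photo gallery is a minimal sketch
         | like this (the image URLs and element id are made up):
         | 
         |   const photos = ['a.jpg', 'b.jpg', 'c.jpg'];
         |   const gallery = document.getElementById('gallery');
         |   for (const src of photos) {
         |     const img = document.createElement('img');
         |     img.src = src;
         |     img.loading = 'lazy';  // browser defers offscreen images
         |     gallery.appendChild(img);
         |   }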
        
           | azangru wrote:
           | > with some simple jquery
           | 
           | Plain javascript, if you want to go down that route. Jquery
           | is around 30kb of javascript, if I remember correctly.
           | 
           | > but that's not cool and not good on your resume
           | 
           | On the contrary, writing vanilla js is pretty cool and
           | impressive on the resume, when every other developer puts
           | react there; but it's pretty miserable too, compared to using
           | frameworks.
        
             | bob1029 wrote:
             | If I see the words "vanilla JS" or "SQLite" on a resume, I
             | will automatically place it into my maybe pile.
        
               | ant6n wrote:
               | Is that a good or bad thing?
        
               | bob1029 wrote:
               | Yes, it's basically a shortcut when I am doing a quick
               | first scan through resumes. The moment I see one of
               | those keywords I will flag it for 2nd-pass review and
               | move on.
        
             | aryonoco wrote:
             | Vanilla JS is pretty cool. Eleventy is even cooler: it
             | gets you most of the things a framework gets you, but
             | compiles down to fast, vanilla JS, and that's what is
             | served to the browser.
             | 
             | Svelte/Sapper also has a lot of potential, as it only
             | ships the parts of the framework that are absolutely
             | needed instead of the whole thing.
             | 
             | But in reality, you can make plenty of very fast React
             | sites and plenty of slow vanilla JS sites.
        
               | j-krieger wrote:
               | _Any_ modern JS framework gets compiled to vanilla JS.
               | That's exactly the problem: because browsers don't all
               | implement ES6+ syntax natively, it has to get compiled
               | to complicated, long-winded ES5 or padded out with
               | heavy polyfills. If browsers were all spec compliant,
               | code bundles would be way smaller.
        
           | tester756 wrote:
           | Why would you want to replace something like Vue with
           | jQuery?
           | 
           | Just because jQuery isn't a brand-new frontend framework
           | doesn't mean it's any good.
           | 
           | In my experience jQuery leads to hard-to-maintain
           | codebases relatively quickly compared to e.g. Vue.
        
         | Semiapies wrote:
         | Server-side is no panacea. I started paying attention
         | recently, and WordPress-based sites frequently take well over
         | a second to return the HTML for pages that are essentially
         | static -- and that's considered acceptably fast by many
         | people running WP-based sites. Slow WP sites are _even
         | worse_.
        
           | user5994461 wrote:
           | Wordpress is not static at all. It supports commenting and
           | loads comments by default, shows related articles
           | dynamically depending on categories and views, displays
           | different content to different visitors, resizes and
           | compresses pictures on the fly, etc... and a thousand more
           | things if you are a logged-in user. It really is dynamic.
           | 
           | It's actually pretty good considering what it does (if you
           | don't set up a ton of plugins or ads). There can be 50
           | requests per page but that's because of all the pictures and
           | thumbnails. The page can render and be interactive almost
           | immediately, pictures load later.
        
           | bob1029 wrote:
           | Everything can be done poorly. The problem with wordpress
           | is its plethora of plugins and hooks that enable
           | lego-style webapp construction. This is not going to
           | result in cohesive, performant experiences.
           | 
           | If you purpose build a server-side application to replace the
           | functionality of any specific wordpress site in something
           | like C#/Go/Rust, you will probably find that it performs
           | substantially better in every way.
           | 
           | This is more of a testament to the value of custom software
           | vs low/no-code software than it is to the deficits or
           | benefits of any specific architectural ideology.
        
           | bdcravens wrote:
           | When I consider the "server-side" argument, I think of it
           | as an apples-to-apples comparison: custom code that is
           | either server-side or client-side rendered. Wordpress, on
           | the other
           | hand is a packaged application, typically used with other
           | packaged plugins and themes. Moreover, many Wordpress sites
           | are run on anemic shared hosting. Custom applications can as
           | well, but I feel that's far less likely.
        
         | reaperducer wrote:
         | _I am not sure when there was a new rule passed in software
         | engineering that said that you shall never use server rendering
         | again and that the client is the only device permitted to
         | render any final views._
         | 
         | Maybe it's coming from the schools.
         | 
         | I worked with a pair of fresh-outta-U devs who argued
         | vehemently that all computation and bandwidth should be
         | offloaded onto the client whenever possible, because it's the
         | only way to scale.
         | 
         | When I asked about people on mobile with older devices, they
         | preached that anyone who isn't on the latest model, or the one
         | just preceding it, isn't worth targeting.
         | 
         | The ferocity of their views on this was concerning. They acted
         | like I was trying to get people to telnet into our product, and
         | simply couldn't wrap their brains around the idea of
         | performance.
         | 
         | I left, and the company went out of business a couple of months
         | later. Good.
        
           | bob1029 wrote:
           | This narrative has been going hard for the last 6~7 years.
           | For me, it's difficult to pinpoint all the causes of this.
           | 
           | I feel many experienced developers can agree that what you
           | describe ultimately amounts to the death of high quality
           | software engineering, and that we need to seriously start
           | looking at this like a plague that will consume our craft and
           | burn down our most amazing accomplishments.
           | 
           | I think the solution to this problem is two-fold. First, we
           | try to identify who the specific actors are who are pushing
           | this narrative and try to convince them to stop ruining
           | undergrads. Second, we try to develop learning materials or
           | otherwise put a nice shiny coat of paint onto the old idea of
           | servers doing all the hard work.
           | 
           | If accessibility and career potential were built up around
           | the javascript/everything-on-the-client ecosystem, we could
           | probably paint a similar target around everything-on-the-
           | server as well. I think I could make a better argument for
           | it, at least.
        
             | phendrenad2 wrote:
             | It's cargo-culting. Everyone just follows the herd. Once
             | something gets the scarlet letter of being "old"
             | compared to something "new" (both of which are purely
             | perception), it's really hard to ever get back to using
             | the "old" thing, even if its performance is better or it
             | has some other tangible benefit.
        
             | hanniabu wrote:
             | > This narrative has been going hard for the last 6~7
             | years. For me, it's difficult to pinpoint all the causes of
             | this.
             | 
             | I always thought it came with the serverless meme. All
             | those services used to do that work cost money, so in a
             | lot of places the decision was made to put the work on
             | the client. People new to the industry maybe haven't
             | connected the dots and think it's for scale when it's
             | really for costs.
        
           | 0xdeadb00f wrote:
           | As a current CompSci student who's done an (introductory)
           | web dev class, may I say I have not been taught this way
           | _at all_. In fact this notion of offloading work to the
           | client never came up in the class. That's just my
           | experience.
           | 
           | I think these ferocious views must be coming from the
           | individual - but I do realise not all courses are the same
           | and those students may have actually been (wrongly) taught
           | this way.
        
         | glouwbug wrote:
         | You're not the only one who thinks this way:
         | https://twitter.com/ID_AA_Carmack/status/1210997702152069120
        
       | tonymet wrote:
       | I'm developing a tiny-site search engine. Upvote if you think
       | this product would interest you. The catalog would be sites
       | that load in < 2s with minimal JS.
        
         | agumonkey wrote:
         | Totally. Also simply small websites, and websites with a
         | simple structure (something that renders nicely in a
         | terminal).
        
           | tonymet wrote:
           | my initial prototype has been selecting sites based on
           | e.g. overall asset size (html + js + img) or DOM size. I'm
           | trying to identify key "speed" indicators that can be
           | inferred without requiring a full browser (very expensive
           | for indexing).
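           | 
           | For example, a rough sketch of that kind of check (run as
           | an ES module on Node 18+, where fetch is global; the
           | thresholds are placeholders, not my real ones):
           | 
           |   const res = await fetch('https://example.com/');
           |   const html = await res.text();
           |   const htmlBytes = Buffer.byteLength(html);
           |   // crude speed proxies, no headless browser needed
           |   const scripts = (html.match(/<script\b/gi) || []).length;
           |   const images = (html.match(/<img\b/gi) || []).length;
           |   console.log({ htmlBytes, scripts, images,
           |                 pass: htmlBytes < 100_000 && scripts < 5 });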
        
             | agumonkey wrote:
             | super neat project nonetheless, do you have a page I can
             | track?
        
               | tonymet wrote:
               | thanks for inspiring me. Here you go. I'll post
               | updates there and move it to the proper domain when
               | it's up
               | 
               | https://github.com/tonymet/ligero
        
         | slx26 wrote:
         | I'm making a site that's very light, minimal everything...
         | except for a section that will contain some games. How do you
         | handle those cases?
        
           | tonymet wrote:
           | One of the thresholds I'm using is site size - so if all HTML
           | + JS < 500kb that would qualify.
        
       | bangonkeyboard wrote:
       | I frequent one forum only through a self-hosted custom proxy.
       | This proxy downloads the requested page HTML, parses the DOM,
       | strips out scripts and other non-content, and performs extensive
       | node operations of searching and copying and link-rewriting and
       | insertion of my own JS and CSS, all in plain dumb unoptimized
       | PHP.
       | 
       | Even with all of this extra work and indirection, loading and
       | navigating pages through the proxy is still much faster than
       | accessing the site directly.
        
       | MangoCoffee wrote:
       | > webpage speeds have not improved
       | 
       | we keep abusing it beyond what a web (html) page is supposed
       | to do.
        
       | mensetmanusman wrote:
       | I tested content blockers on iOS.
       | 
       | Going to whatever random media site without it enabled costs a
       | couple MB per page load (the size of SNES ROMs... for text!).
       | 
       | With content blockers enabled it was a couple KB per page
       | load.
       | 
       | Three orders of magnitude difference in webpage size due to
       | data harvesting...
       | 
       | Now, imagine how much infrastructure we would save if web
       | browsing suddenly used even one order of magnitude less data.
       | It would be fun to calculate the CO2 emission savings, ha.
        
       | MaxBarraclough wrote:
       | Wirth's Law: _Software is getting slower more rapidly than
       | hardware is becoming faster_.
       | 
       | https://en.wikipedia.org/wiki/Wirth%27s_law
        
         | joncrane wrote:
         | Yes! There's a "law" or "rule" for this! My favorite other ones
         | are Hanlon's Razor and the Robustness Principle.
        
           | MaxBarraclough wrote:
           | The Robustness Principle turned up in discussion yesterday,
           | under its other name of Postel's law.
           | 
           | https://news.ycombinator.com/item?id=24031955
        
             | joncrane wrote:
             | Thank you. It's one of my most-mentioned law or principle,
             | and I keep having to Google it to remind myself what it's
             | called or so I can link it to a colleague. It's fun to be
             | part of a discussion with other people who already know
             | about it (and that's a fairly intelligent discussion, too;
             | the security angle is interesting).
        
         | mnm1 wrote:
         | This is often intentional. Take a look at any OS or software
         | with animations. Slowness for slowness' sake. The macOS
         | spaces change has such a slow animation that it's completely
         | useless. Actually, macOS has a ton of animations that slow
         | things down, but luckily most can be turned off. Not the
         | spaces thing. Android animations are unbearable and slow
         | things down majorly. Luckily they can be turned off, but
         | only by unlocking developer mode and going in there. It's
         | clear whoever designed these things has never heard of UX in
         | their lives. And since these products are coming from
         | companies like Google and Apple, which have UX teams, it
         | leads me to think that most UX people are complete idiots.
         | Or UX is simply not a priority at all and these companies
         | are too stupid to assemble a UX team for their products.
         | Hard to say which is the case.
        
       | sgloutnikov wrote:
       | This post reminded me of this quote:
       | 
       | "The hope is that the progress in hardware will cure all software
       | ills. However, a critical observer may observe that software
       | manages to outgrow hardware in size and sluggishness. Other
       | observers had noted this for some time before, indeed the trend
       | was becoming obvious as early as 1987." - Niklaus Wirth
        
         | cyberlurker wrote:
         | (Somewhat related) Although I've been out of the scene a
         | while, it always felt like PC gaming was best when the GPU
         | manufacturers hadn't just introduced a significantly
         | improved architecture AND the consoles were at the end of
         | their life cycle. The worst times were when the newest,
         | super expensive GPU monster would come out and the newest
         | consoles were released.
         | 
         | I started to not feel excited for more powerful hardware. The
         | performance ceiling was higher but I felt the quality
         | (gameplay, performance, art style, little details) of games
         | temporarily dropped even though the graphics marginally
         | improved on the highest end.
        
       | Tomis02 wrote:
       | The problem of course isn't the internet "speed" but latency.
       | ISPs advertise hundreds of Mbps but conveniently forget to
       | mention latency, average packet loss and other connection quality
       | metrics.
        
         | opportune wrote:
         | Correct me if I'm wrong, but I believe ISPs use a
         | combination of hardware and software to "throttle down"
         | network connections as they attempt to download more data.
         | For example, if I try to download a 10GB file on my personal
         | computer I'll start off at something like 40Mbps and it will
         | take 15 seconds before they start allowing me to scale up to
         | 300Mbps. I assume that when downloading things like
         | websites, which should only take tens or hundreds of ms,
         | this unthrottling could also be a significant factor in
         | addition to latency, depending on what the throttling curve
         | looks like.
         | 
         | Also ISPs oversell capacity, which they've probably always
         | done, so even if you're paying for a large bandwidth that
         | doesn't mean you'll ever get it.
        
       | btbuildem wrote:
       | Part of the problem is analogous to traffic congestion /
       | highway lane counts: "if you build it, they will come". More
       | lanes get built but more cars appear to fill them. Faster
       | connection speeds allow more stuff to be included, and the
       | human tolerance for latency (sub 0.1s?) hasn't changed, so we
       | accept it.
       | 
       | Web sites and apps are saddled with advertising content and
       | data collection code; these things often get priority over
       | actual site content. They use bandwidth and computing
       | resources, in effect slowing everything down. Arguably, that's
       | the price we pay for "free internet"?
       | 
       | Finally (and some others have mentioned this), software
       | development practices are partially to blame. The younger
       | generation of devs were taught to throw resources at problems,
       | that dev time is the bottleneck and not cpu or memory -- and
       | it shows. And that's those with some formal education; many
       | devs are self-taught, and the artifacts of their learning
       | manifest in the code they write. This is particularly true in
       | the JS community, which seems hellbent on reinventing the
       | wheel instead of standing on the shoulders of giants.
        
       | ilaksh wrote:
       | Here's an idea I posted on reddit yesterday. Seemed like it was
       | shadowbanned or just entirely ignored.
       | 
       | # Problem
       | 
       | Websites are bloated and slow. Sometimes we just want to be able
       | to find information quickly without having to worry about the web
       | page freezing up or accidentally downloading 50MB of random
       | JavaScript. Etc. Note that I know that you can turn JavaScript
       | off, but this is a more comprehensive idea.
       | 
       | # Idea
       | 
       | What if there was a network of websites that followed a protocol
       | (basically limiting the content for performance) and you could be
       | sure if you stayed in that network, you would have a super fast
       | browsing experience?
       | 
       | # FastWeb Protocol
       | 
       | * No JavaScript
       | 
       | * Single file web page with CSS bundled
       | 
       | * No font downloads
       | 
       | * Maximum of 20KB HTML in page.
       | 
       | * Maximum of 20KB of images.
       | 
       | * No more than 4 images.
       | 
       | * Links to non-fastweb pages or media must be marked with a
       | special data attribute.
       | 
       | * Total page transmission time < 200 ms.
       | 
       | * Initial transmission start < 125 ms. (test has to be from a
       | nearby server).
       | 
       | * (Controversial) No TLS (https for encryption). Reason being
       | that TLS handshake etc. takes a massive amount of time. I know
       | this will be controversial because people are concerned about
       | governments persecuting people who write dissenting opinions on
       | the internet. My thought is that there is still quite a lot of
       | information that in most cases is unlikely to be subject to this,
       | and in countries or cases where that isn't the case, maybe
       | another protocol (like MostlyFastWeb) could work. Or let's try to
       | fix our horrible governments? But to me if the primary focus is
       | on a fast web browsing experience, requiring a whole bunch of
       | expensive encryption handshaking etc. is too counterproductive.
       | 
       | # FastWeb Test
       | 
       | This is a simple crawler that accesses a domain or path and
       | verifies that all pages therein follow the FastWeb Protocol. Then
       | it records its results to a database that the FastWeb Extension
       | can access.
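       | 
       | A minimal sketch of one such check (run as an ES module on
       | Node 18+, where fetch and performance are global; the URL is a
       | placeholder), testing the size, no-JS and timing rules above:
       | 
       |   const t0 = performance.now();
       |   const res = await fetch('http://example.com/'); // no TLS
       |   const html = await res.text();
       |   const elapsed = performance.now() - t0;
       |   const ok = Buffer.byteLength(html) <= 20_000 && // 20KB max
       |              !/<script\b/i.test(html) &&          // no JS
       |              elapsed < 200;                       // < 200 ms
       |   console.log(ok ? 'FastWeb OK' : 'FastWeb fail');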
       | 
       | # FastWeb Extension
       | 
       | Examines links (in a background thread) and marks those that are
       | on domains/pages that have failed tests, or highlights ones that
       | have passed tests.
        
       | sarego wrote:
       | As someone who recently worked on reducing page load times,
       | these were found to be the main issues:
       | 
       | 1. Loading large images (below the fold/hidden) on first load
       | 
       | 2. Marketing tags - innumerable and out of control
       | 
       | 3. Executing non-critical JS before page load
       | 
       | 4. Loading non-critical CSS before page load
       | 
       | Overall we managed to get page load times down by 50% on
       | average by taking care of these.
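       | 
       | For example, items 3 and 4 mostly come down to markup; a
       | sketch (file names are placeholders):
       | 
       |   <!-- defer non-critical JS: parse HTML first, run it after -->
       |   <script src="analytics.js" defer></script>
       | 
       |   <!-- load non-critical CSS without blocking first render -->
       |   <link rel="stylesheet" href="noncritical.css"
       |         media="print" onload="this.media='all'">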
        
         | Wowfunhappy wrote:
         | Can someone who understands more about web tech than me please
         | explain why images aren't loaded in progressive order? Assets
         | should be downloaded in the order they appear on the page, so
         | that an image at the bottom of the page never causes an image
         | at the top to load more slowly. I assume there's a reason.
         | 
         | I understand the desire to parallelize resources, but if my
         | download speed is maxed out, it's clear what should get
         | priority. I'm also aware that lazy loading exists, but as a
         | user I find this causes content to load too _late_. I _do_ want
         | the page to preload content, I just wish stuff in my viewport
         | got priority.
         | 
         | At minimum, it seems to me there ought to be a way for
         | developers to specify loading order.
        
           | cellar_door wrote:
           | You can code split React components and create loading tiers.
           | 
           | Facebook claims they're doing this with the facebook.com
           | redesign, for example:
           | https://engineering.fb.com/web/facebook-redesign/
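           | 
           | A minimal sketch of the lazy/Suspense pattern (the
           | component names are made up):
           | 
           |   import React, { lazy, Suspense } from 'react';
           | 
           |   // the Comments bundle is only fetched on first render
           |   const Comments = lazy(() => import('./Comments'));
           | 
           |   function Article() {
           |     return (
           |       <Suspense fallback={<p>Loading comments...</p>}>
           |         <Comments />
           |       </Suspense>
           |     );
           |   }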
        
           | [deleted]
        
           | [deleted]
        
           | xenospn wrote:
           | This is what lazy loading does. It doesn't actually load
           | images that are "below the fold". Or at least that's what
           | it should do. Images should only load once you start
           | scrolling down.
        
           | sarego wrote:
           | Understandable, but for most use-cases (if your images are
           | hosted on a reliable CDN and are optimized) lazy load
           | should work fine. Lazy loading is triggered by the
           | distance of the image from the viewport, so it shouldn't
           | load too late.
           | 
           | Chromium-based browsers now natively support lazy loading.
        
           | dgb23 wrote:
           | That is actually the case today!
           | 
           | But it is an opt-in feature, which is not supported in older
           | browsers.
           | 
           | In modern frontend development we are heavily optimizing
           | images now. Lazy loading is one thing, the other is
           | optimizing sizes (based on viewports) and optimizing formats
           | (if possible).
           | 
           | This often means you generate (and cache) images on the fly
           | or at build-time, including resizing, cropping, compressing
           | and so on.
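           | 
           | The size-per-viewport part is mostly srcset/sizes; a
           | sketch (file names are placeholders):
           | 
           |   <img src="photo-800.jpg"
           |        srcset="photo-400.jpg 400w,
           |                photo-800.jpg 800w,
           |                photo-1600.jpg 1600w"
           |        sizes="(max-width: 600px) 100vw, 50vw"
           |        loading="lazy" alt="A photo">
           | 
           | The browser then picks the smallest file that still covers
           | the rendered size on the current viewport.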
        
             | Bnshsysjab wrote:
             | Is putting all assets into a single png/svg to reduce total
             | requests a dead practice now?
        
               | tbarbugli wrote:
               | I guess http/2 support on CDNs made this a useless
               | (and tedious) optimization
        
               | reaperducer wrote:
               | _Is putting all assets into a single png /svg to reduce
               | total requests a dead practice now?_
               | 
               | As someone who browses the source of a lot of commercial
               | web sites, I can say it's still dead common.
               | 
               | There's been a lot of static about new technologies
               | coming that will make this unnecessary, but they don't
               | help anyone today.
        
             | j-krieger wrote:
             | It is not supported by _Safari_ either.
        
           | kmlx wrote:
           | yes, there is already
           | 
           | <img loading="lazy" ... />
           | 
           | simply add the attribute to your img tags and the browser
           | will automatically load them as they approach the viewport
           | (i.e. just the ones you actually see)
           | 
           | further details: https://caniuse.com/#feat=loading-lazy-attr
           | 
           | https://html.spec.whatwg.org/multipage/urls-and-
           | fetching.htm...
        
             | Wowfunhappy wrote:
             | But then they always seem to not come in until I scroll to
             | them, which is too late and just means I have to wait even
             | more! What they ought to do is download as soon as the
             | network is quiet.
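             | 
             | A page could approximate that itself; a sketch using
             | requestIdleCallback (not supported everywhere, e.g.
             | Safari, hence the timeout fallback; assumes the markup
             | uses data-src placeholders):
             | 
             |   const idle = window.requestIdleCallback
             |     ? (cb) => window.requestIdleCallback(cb)
             |     : (cb) => setTimeout(cb, 2000);
             |   idle(() => {
             |     // fetch deferred images once the browser is idle
             |     document.querySelectorAll('img[data-src]')
             |       .forEach(img => { img.src = img.dataset.src; });
             |   });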
        
           | yoz-y wrote:
           | I suppose simple order in the HTML document would be a
           | heuristic that works almost always, but due to CSS the
           | order is actually not guaranteed. You also need to
           | download images before doing layout, as you don't know
           | their sizes beforehand.
        
             | Wowfunhappy wrote:
             | It's not just CSS screwing up the order! On my own (simple)
             | sites, I can see all the images I put on a page getting
             | downloaded in parallel--with the end result being that
             | pages with more images at the bottom load more slowly even
             | above the fold.
        
             | Something1234 wrote:
             | That's what the width and height attributes on the img tag
             | are for. They're hints. Things can be redrawn later.
             | Although I think I've been seeing a lot fewer images on the
             | internet lately, but they must be hidden with css.
        
         | Moru wrote:
         | I wish 50% were enough, but some local news pages take 15
         | seconds after their last modernisation. And that is with
         | ad-blockers...
        
       | azinman2 wrote:
       | It's much like the problem of induced demand in transportation:
       | more capacity brings more traffic. More JavaScript. More ad
       | networks. More images. More frameworks.
        
       | XCSme wrote:
       | Could it also be that server resources in general are lower
       | and that there are more clients per instance than before?
       | 
       | With all this virtualization, Docker containers and really cheap
       | shared hosting plans it feels like there are thousands of users
       | served by a single core from a single server. Whenever I access a
       | page that is cached by Cloudflare it usually loads really fast,
       | even if it has a lot of JavaScript and media.
       | 
       | The problem with JavaScript usually occurs on low-end devices. On
       | my powerful PC most of the loading time is spent waiting for the
       | DNS or server to send me the resources.
        
       | bdickason wrote:
       | The article doesn't dig into the real meaty topic: why modern
       | websites are slow. My guess would be 3rd-party advertising as
       | the primary culprit. I worked at an ad network for years, and
       | number of js files embedded which then loaded other js files was
       | insane. Sometimes you'd get bounced between 10-15 systems before
       | your ad was loaded. And even then, it usually wasn't optimized
       | well and was a memory hog. I still notice that some mobile
       | websites (e.g. cnn) crash when loading 3p ads.
       | 
       | On the contrary, sites like google/Facebook (and apps like
       | Instagram or Snapchat) are incredibly well optimized as they stay
       | within their own 1st party ad tech.
        
         | davidcsally wrote:
         | As someone working on improving web vitals metrics, ad
         | networks are 100% the biggest issue we face. Endless
         | redirects, enormous payloads, and non-optimized images. And
         | on top of all that, endless console logs. I wish ad
         | providers had higher standards.
        
         | kevincox wrote:
         | Do you know why modern sites are slow? Because time isn't
         | invested in making them faster. It isn't a technical problem,
         | people take shortcuts that affect the speed. They will continue
         | to do so unless the website operators decide that it is
         | unacceptable. If some news website decided that their page
         | needed to load in 1s on a 256Mbps+10ms connection they would
         | achieve that, with external ads and more. However they haven't
         | decided that it is a priority, so they keep adding more junk to
         | achieve other goals.
         | 
         | It's simply Parkinson's Law
        
           | cuddlybacon wrote:
           | And they aren't going to decide this is a problem that needs
           | to be fixed since people will keep using it anyways.
        
           | [deleted]
        
           | wrycoder wrote:
           | lol. How about a 1.5 mbit per sec DSL line?
        
             | kevincox wrote:
             | Might have to cut some features but the point is the same.
             | Companies will build sites to the level of slowness that
             | they think is tolerable. If their users' internet
             | connections get better, that means they can be less
             | efficient and add more features.
        
         | jakub_g wrote:
         | Facebook desktop web is far from being optimized. On my fiber
         | connection & high-end non-Mac laptop, loading "Messenger" tab
         | takes about 15-20 seconds.
         | 
         | Having said that, FB and Google properties (like YouTube) have
         | insane edge over the rivals by having full control over the
         | advertising stack.
        
           | timw4mail wrote:
           | Facebook and Google properties (moreso than google.com) are
           | among the slowest major sites.
        
           | azinman2 wrote:
           | Something strange is happening if messenger takes 15-20
           | seconds. You should investigate further.
        
             | [deleted]
        
         | netflixandkill wrote:
         | JS whitelisting and ad blocks anecdotally confirm this.
         | 
         | The number of sites that will load images and js from three
         | or four or more different ad/tracking networks and CDNs is
         | nuts, plus the various login and media links, and I feel
         | zero guilt for not participating in this advertising
         | insanity.
         | 
         | Tightly put together pages with only a handful of JS loads are
         | damn near instant over gigabit fiber.
        
         | SilasX wrote:
         | You had me until the end. I still see significant lag and
         | mismatches on Facebook's page from simple actions like
         | pulling up my notifications or loading a comment from one of
         | them. For the latter, I see it grind its gears while it
         | loads the entire FB group and then scrolls down a long list
         | to get to the comment I want to see.
         | 
         | (This is on a Macbook Air from 2015, but these are _really
         | simple_ requests.)
        
         | erickhill wrote:
         | I agree on the Ad Networks causing massive site bloat.
         | 
         | However, in terms of Facebook, I'd say it was well optimized
         | considering its complexity prior to the recent redesign. But
         | ever since the new design, I can barely type on that site
         | anymore with my MacBook Pro. My machine has a 2.5 GHz
         | Quad-Core Intel Core i7 and 16 GB of RAM.
         | 
         | That's pretty sad. Responsive design is a great idea, but in
         | terms of how it is sometimes implemented, you're getting X
         | number of styles to load a page.
        
       | flyGuyOnTheSly wrote:
       | Wages constantly increase due to economic prosperity (on average,
       | I realize they have dwindled in the past 50 odd years), and every
       | single year the majority of people have nothing in their savings
       | accounts.
       | 
       | It's been that way since the dawn of time. [0]
       | 
       | This is a human economy problem, not a technological one imho.
       | 
       | If you give a programmer a cookie, she is going to ask for a
       | glass of milk.
       | 
       | [0] https://www.ancient.eu/article/1012/ancient-egyptian-
       | taxes--...
        
       | lmilcin wrote:
       | And they will not.
       | 
       | The reason is that web designers treat newly improved
       | performance as an excuse to either throw in more load (more
       | graphics, higher-quality graphics, more scripts, etc.) or to
       | produce faster at the cost of performance.
       | 
       | Nowadays it is not difficult to build really responsive
       | websites. It just seems designers have other priorities.
        
       | weka wrote:
       | I was on AT&T's website the other day (https://www.att.com/)
       | because I am a customer, and I was just astonished at how
       | blatantly they abuse redirection, and at the general speed of
       | the page (i.e. it takes 5-10 seconds to load your "myATT"
       | profile page on 250Mbps up/down).
       | 
       | It's 2020. This should not be that hard. I've worked at a bank
       | and know that "customer data" is top priority, but at what
       | point does the buck stop? Just because you can, doesn't mean
       | you should.
        
       | austincheney wrote:
       | Not a surprise. Most people writing commercial front-end code
       | have absolutely no idea how to do their jobs without 300MB of
       | framework code. That alone - being able to write to the basic
       | standards and understand simple data structures - justifies my
       | higher-than-average salary as a front-end developer, without
       | having to do any real work at work.
        
         | phendrenad2 wrote:
         | You found someone willing to pay for developers who can get
         | better performance out of a site? Must be FAANG?
        
         | jerf wrote:
         | An uncompressed RGB 1920x1080 bitmap is 6,220,800 bytes. When
         | your webpage is heavier than a straight-up, uncompressed bitmap
         | of it would be... something's gone wrong.
         | 
         | We're not quite there, since web pages are generally more than
         | one screen, but we're getting close. Motivated searchers could
         | probably find a concrete example of such a page somewhere.
        
           | lstamour wrote:
           | What's funny is that's how Opera Mini achieves its great
           | compression for 2G and 3G network use... it renders mostly
           | server-side, with link text and position offsets/sizes, last
           | I used it...
        
         | propelol wrote:
         | They don't have any idea because very few are willing to pay to
         | have a website that's faster, so there is no incentive to learn
         | it.
        
         | sumtechguy wrote:
         | That mentality is not 'new'.
         | 
         | No, you can not add the whole c++ std lib into our code.
         | Yes, I know it is useful. Yes, that will save you 2 hours of
         | work. However, the code no longer even fits into the 1MB of
         | flash we have. Yes, we can ask for a new design; management
         | would love to spend 500k on spinning a new design and
         | getting all of the paperwork for it done, and our customers
         | would love replacing everything they have with functionally
         | equivalent hardware that now costs 50 dollars more each. But
         | at least you saved 2 hours writing some code.
        
           | kyboren wrote:
           | I understand where you're coming from, and I appreciate the
           | constraints of embedded system design, but this is a pretty
           | extreme scenario.
           | 
           | Management should seriously consider whether the 0.01c saved
           | on that 8Mb chip is worth the design overhead from very tight
           | constraints. There is most likely a pin-compatible 16Mb chip
           | that would eliminate all the pain.
           | 
           | Yes, I know that in high volumes every fraction of a penny
           | counts. But if you frequently find yourself engineering your
           | way out of trivial constraints, you might be doing it wrong.
        
         | aequitas wrote:
         | There is also a big "runs-on-my-machine" factor, where the
         | machine is the developer's high-tier laptop hooked up to
         | Gigabit LAN and fiber WAN with <5ms ping and >100Mbit
         | symmetric.
         | 
         | Luckily there are tools like Lighthouse[0], but with all the
         | abstractions and frameworks in between it is often
         | impossible to introduce the required changes without messing
         | up the quality or complexity of the code/deployment.
         | 
         | [0] https://developers.google.com/web/tools/lighthouse
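         | 
         | For what it's worth, it also runs from the command line,
         | with network/CPU throttling applied by default:
         | 
         |   npm install -g lighthouse
         |   lighthouse https://example.com --view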
        
       | perfunctory wrote:
       | Every time I see a headline like this one I have to think about
       | two things - Jevons paradox and climate change.
        
       | paradite wrote:
       | In other news:
       | 
       | In spite of an increase in mobile CPU speed, mobile phone
       | startup times have not improved (in fact they became slower).
       | 
       | In spite of an increase in desktop CPU speed, the time taken
       | to open AAA games has not improved.
       | 
       | In spite of an increase in elevator speed, the time taken to
       | reach the median floor of a building has not improved.
       | 
       | My point is, the "webpage" has evolved the same way as mobile
       | phones, AAA games and buildings - it has more content and
       | features compared to 10 years ago. And there is really no
       | reason or need to make it faster than it is right now (2-3
       | seconds is a comfortable waiting time for most people).
       | 
       | To put things in perspective:
       | 
       | Time taken to do a bank transfer is now 2-3 seconds of bank
       | website load and a few clicks (still much to improve on) instead
       | of physically visiting a branch / ATM.
       | 
       | Time taken to start editing a word document is now 2-3 seconds of
       | Google Drive load instead of hours of MS Office Word
       | installation.
       | 
       | Time taken to start a video conference is now 2-3 seconds of
       | Zoom/Teams load instead of minutes of Skype installation.
        
         | ClumsyPilot wrote:
         | This hipster attitude of replacing proper software with
         | barely functional webapps really gets on my nerves.
         | 
         | People use and will continue using Skype and especially MS
         | Office. It is much more functional than gSuite alternatives,
         | and moving people to castrated and slow webapps is not
         | progress.
        
         | SilasX wrote:
         | >My point is, "webpage" has evolved the same way as mobile
         | phones, AAA games and buildings - it has more content and
         | features compared to 10 years ago. And there is really no
         | reason or need to making it faster than it is right now (2-3
         | seconds is a comfortable waiting time for most people).
         | 
         | What features? I can't think of anything substantive a site
         | can deliver to me today that it could not 10 years ago. The
         | last major advance in functionality was probably AJAX, but
         | that doesn't inherently require huge slowdowns, and it was
         | around more than 10 years ago.
         | 
         | The rest of your comparisons are dubious:
         | 
         | >Time taken to do a bank transfer is now 2-3 seconds of bank
         | website load and a few clicks (still much to improve on)
         | instead of physically visiting a branch / ATM.
         | 
         | This is the same class of argument as saying (per Scott
         | Adams), "yeah, 40 mph may seem like a bad top speed for a
         | sports car, but you have to compare it to hopping". (Or to
         | the sports cars of 1910.) Yes, bank sites are faster than
         | going to an ATM. Are they faster than bank sites 20 years
         | ago? Not in my experience.
         | 
         | >Time taken to start editing a word document is now 2-3 seconds
         | of Google Drive load instead of hours of MS Office Word
         | installation.
         | 
         | Also not comparable: you pay the MS Word installation
         | time-cost once, and then all future launches are near
         | instant. (This also applies to your Skype installation
         | example.)
        
         | [deleted]
        
       | innocentoldguy wrote:
       | _cough_ JavaScript _cough_
        
       | MattGaiser wrote:
       | We discuss the ever-growing size of web pages here regularly,
       | so this is not all that surprising.
        
       | Natalie1Quinn wrote:
       | The big "dream" of web-based technologies was to allow designers
       | the freedom to do whatever they want. But doing this has a lot of
       | costs, which we are all paying today. In the early days of GUIs,
       | a group would design the OS, and everyone would just use that
       | design through the UI guidelines. It was, in my opinion, the apex
       | of GUI programming, because you could create full featured apps
       | without the need of an experienced designer, at least for the
       | first few iterations. Now, I cannot even create a simple web app
       | that doesn't look crap, and any kind of web product requires a
       | designer to at least make it minimally usable.
       | 
       | https://www.bloggerzune.com/2020/08/Image-optimization-for-s...
        
       | [deleted]
        
       | [deleted]
        
       | MrStonedOne wrote:
       | I blame TCP startup.
       | 
       | And the fact that chrome hasn't added HTTP/3 to mainline even
       | as a flag, even though the version that their sites use has
       | been enabled by default in mainline chrome for years.
        
       | mc32 wrote:
       | Network latency. Bandwidth is the new MHz.
        
         | superkuh wrote:
         | This could be part of it. The shift to mobile computers,
         | which are necessarily wireless, means random round-trip
         | times due to physics, which means TCP back-off. That,
         | combined with the tendency to require innumerable external
         | JS/CDN/etc resources that each need a new TCP connection,
         | works together to make mobile computers extra slow at
         | loading webpages.
        
         | cozzyd wrote:
         | Bandwidth and latency aren't the same thing! High-latency
         | networks sometimes don't ever load some websites, even if there
         | is reasonable bandwidth. I remember one time when I was in
         | Greenland having to VNC to a computer in the US to do something
         | on some internal HR website that just wouldn't load with the
         | satellite latency.
        
       | HumblyTossed wrote:
       | Because the focus is now on time to market and developer speed
       | and anything else anyone can think of _except_ the end user
       | experience.
        
       | emptyparadise wrote:
       | I wonder how much worse things will get when 5G becomes
       | widespread.
        
         | sumoboy wrote:
         | You know for sure your phone bill will increase.
        
       | calebm wrote:
       | "A task will expand to consume all available resources."
        
       | zelphirkalt wrote:
       | It has rather slowed down for some websites - or those
       | websites did not exist back then, because they would not have
       | been possible.
       | 
       | Just this morning, when I opened my browser profile with
       | Atlassian tabs (Atlassian needs to be contained in its own
       | profile), there were perhaps 7 or 8 tabs loaded, because they
       | are pinned. It took this 7th-gen Core i7 approximately 15-20s,
       | at 100% usage on all cores, to render all of those tabs. Such
       | a thing used to be unthinkable. Only in current times do we
       | put up with such a state of affairs.
       | 
       | As a result I had Bitbucket showing me a repository page, Jira
       | showing me a task list, and a couple of wiki pages, which
       | render something akin to markdown. Wow, what an utter waste of
       | computing time and energy for such a simple outcome. In my own
       | wiki, which covers more or less the same set of actually used
       | features, that stuff would have been rendered within 1-2s with
       | no real CPU usage at all.
       | 
       | Perhaps this is an outcome of pushing more and more functionality
       | into frontend client-side JS, instead of rendering templates on
       | the server-side. As a business customer, why would I be entitled
       | to any computation time on their servers and a good user
       | experience?
        
       | zoomablemind wrote:
       | Another factor is the wider use of all sorts of CMSes
       | (WordPress etc.) for content presentation, combined with often
       | slower/underpowered shared hosting and script-heavy themes.
       | 
       | On some cheap hosts it may take a second just to start up the
       | server instance - and that's before any of the outgoing
       | requests are done!
        
         | ClumsyPilot wrote:
         | Wordpress is not actually that bad if you use it responsibly
        
         | commandlinefan wrote:
         | Yep - to the executives, saving a couple of (theoretical) hours
         | of development work is worth paying a few extra seconds per
         | page load. Of course, the customers hate it, but the customers
         | can't go anywhere else, because the executives everywhere else
         | are looking for ways to trade product quality for (imaginary)
         | time to market.
        
       | superkuh wrote:
       | Over the last 5 years there has been a dramatic shift away
       | from HTML web pages to javascript web applications on sites
       | that have absolutely no need to be applications. They are the
       | cause of increased load times. And among them, there's a
       | growing proportion of URLs that simply _never_ load at all
       | unless you execute javascript.
       | 
       | This makes them large in MB, but that's not the true cause of
       | the problem. The true cause is all the external calls for
       | loading JS from other sites, and then the time spent
       | attempting to execute it and build the actual webpage.
        
         | Eric_WVGG wrote:
         | The last two years have had a dramatic shift away from SPA and
         | toward JAMstack, which is more or less pre-rendered SPA. The
         | result is not only faster than SPA, but faster than the
         | 2008-2016 status quo of server-scripted Wordpress and Drupal
         | sites.
         | 
         | I blew off JAMstack as another dumb catchphrase (well, it is a
         | dumb catchphrase), but then inherited a Gatsby site this past
         | spring. It absolutely knocked my socks off. The future is
         | bright.
        
         | tomjen3 wrote:
         | Yes. And while I will continue to defend it in the case of
         | interactive dashboards, webapps and status screens, if you
         | are not writing one of those, you shouldn't be using
         | javascript. If we want to fix the internet, browsers should
         | ask the user for permission for each cookie, and javascript
         | should be disabled by default.
        
         | thomasfortes wrote:
         | Yeah, it is perfectly possible to build javascript-heavy
         | websites without making them unusable until everything is
         | downloaded and processed...
         | 
         | I'm rebuilding a site for a client using a bunch of dynamic
         | imports: if you don't touch a video route, you won't
         | download the videojs bundle at all. I set a performance
         | baseline for the site to be interactive, and anything that
         | makes it go over the baseline needs a good reason to be
         | there.
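         | 
         | The mechanism is just dynamic import(); a rough sketch of
         | the video-route case (the function name is illustrative):
         | 
         |   // video.js is only downloaded when a video route is
         |   // actually visited
         |   async function enterVideoRoute(videoEl) {
         |     const { default: videojs } = await import('video.js');
         |     videojs(videoEl);  // initialize the player on demand
         |   }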
        
           | dahfizz wrote:
           | I really don't think this is an optimal solution. This
           | makes _every_ page load slowly, instead of just the first
           | page. Waiting to do work doesn't make the work any
           | quicker.
        
             | thomasfortes wrote:
             | Not really, most of the bundles are pretty small, in the
             | 30-40KB range gzipped, and even smaller with brotli. The
             | problem is when the user has to wait 5s to start using
             | the site; waiting half a second to load the first page
             | is ok, and waiting a quarter second to open another page
             | is also ok.
             | 
             | It isn't purely about speed, it is about perception: a
             | very slow first page load is way more annoying than a
             | couple of half- or quarter-second loads distributed over
             | a long interaction. After the js is cached everything is
             | nearly instantaneous, but you don't have to wait a while
             | to start using the site on your first visit.
             | 
             | If you want, you can also wait for the first page to
             | download completely and then import the remaining js
             | while the user is on the first page, but I haven't tried
             | to see how well that works.
        
         | wnevets wrote:
         | That is what happens when you hire react devs to build your
         | static website.
        
           | fermienrico wrote:
           | SPAs are a cancer. There. I said it.
           | 
           | What's wrong with accessing resources in a RESTful way?
           | 
           | A page refresh? At this point, a page refresh is so much
           | more bearable than a 2300ms SPA download and hang-ups.
           | 
           | As an added bonus, you can bookmark resources. Back button
           | works!
        
         | Alex3917 wrote:
         | > The last 5 years there has been a dramatic shift away from
         | HTML web pages to javascript web applications on sites that
         | have absolutely no need to be an application.
         | 
         | SPAs are a lot easier to keep secure though, so if you don't
         | want your private data leaked then they're a much better
         | option.
        
           | randompwd wrote:
           | Any and all scripts loaded at any point in an SPA stay for
           | the duration of the visit. 3rd-party scripts are a big
           | risk, and not being able to unload JS scripts and their
           | associated in-mem/in-page symbols etc. is a timebomb
           | waiting to explode.
        
           | nicou wrote:
           | How are SPAs any easier to keep secure?
        
             | Alex3917 wrote:
             | There's a very clear separation between the front end and
             | the back end, which makes it easy to write tests that
             | allow-list only the specific data that's supposed to be
             | getting returned.
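             | 
             | A sketch of what such a test can look like (Jest-style,
             | on Node 18+ where fetch is global; the endpoint and
             | field names are hypothetical):
             | 
             |   // fail if the API ever returns non-allow-listed data
             |   const ALLOWED = new Set(['id', 'name', 'avatar_url']);
             |   test('user endpoint leaks no private fields',
             |        async () => {
             |     const res =
             |       await fetch('http://localhost:3000/api/user/1');
             |     const body = await res.json();
             |     for (const key of Object.keys(body)) {
             |       expect(ALLOWED.has(key)).toBe(true);
             |     }
             |   });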
        
         | Softcadbury wrote:
         | Yes, you're right, but keep in mind that with an SPA only
         | the first load is slower; after that you don't need to
         | reload again when changing pages.
        
           | eberkund wrote:
           | Not necessarily true. I've seen many SPAs where the API
           | calls made when you navigate between pages are slow too,
           | so subsequent pages are slow in addition to the first
           | page.
        
           | grishka wrote:
           | Assuming there are repeat visitors. Someone sent you a link
           | to an article. You don't care which website the article is
           | on, you just want to read it and close it without exploring
           | whatever else the website had to offer. So you download a
           | several-megabyte JS bundle to be able to read several
           | kilobytes worth of text. By the time you encounter another
           | link to that particular website, your cache of its assets
           | would be long gone.
        
           | superkuh wrote:
            | Try to open 10 actual HTML websites. Now try to open 10
            | SPAs. Tell me which feels slower. SPA slowness is not
            | exclusively from all the third-party JS and CSS loads. A
            | lot of it is just innate to being an _application_
            | instead of a document.
        
         | elbelcho wrote:
          | If left unoptimized, JavaScript can delay your pages when
          | they load in users' browsers: when a browser is parsing a
          | webpage, it has to stop, fetch, and execute any synchronous
          | JavaScript file it encounters before it can continue
          | rendering (unless the script is marked defer or async).
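          | 
          | A minimal sketch (hypothetical file names): a plain script
          | tag blocks parsing, while defer and async let the HTML
          | parser keep going while the file downloads:
          | 
          |     <!-- blocks parsing until downloaded and executed -->
          |     <script src="app.js"></script>
          | 
          |     <!-- downloads in parallel, runs after parsing ends -->
          |     <script src="app.js" defer></script>
          | 
          |     <!-- downloads in parallel, runs as soon as ready -->
          |     <script src="ads.js" async></script>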
        
       | throwaway0a5e wrote:
       | To quote an exec at a major CDN:
       | 
       | "wider pipe fit more shit"
       | 
        | (Yes, he actually said that, to an entire department. The
        | context was that people will fill the pipe up with junk if
        | they're not careful, and that it made more room to deliver
        | value by not sucking.)
        
         | thrownaway954 wrote:
          | Exactly this. I remember when programmers took the time to
          | make sure their programs didn't take up a lot of memory.
          | As we got more RAM, many became lazy about memory
          | optimization because, well... the computer has plenty.
          | Same thing here with webpages: there was a time when you
          | needed to optimize your site because of the modems
          | everyone used. Now everyone has DSL or better, so there
          | isn't an incentive to optimize your site.
        
       | joncrane wrote:
        | I've recently started using a Firefox extension called
        | uMatrix, and all I can say is: install it, browse your
        | normal websites, and you'll very quickly see exactly why web
        | pages take so long to load. The number and size of external
        | assets loaded by many websites are frankly insane.
        
         | WrtCdEvrydy wrote:
          | It's the cascade of doom... we have an internal benchmark
          | without ads, and it's funny to see that without ads we
          | load something like 6 MB of JS, but if you load the ads,
          | the analytics cascade of hell will load 350 MB of JS to
          | display one picture.
        
           | wackget wrote:
            | 6 MB of JS _without_ ads? What are you loading into the
            | browser, an entire operating system?
        
             | phendrenad2 wrote:
              | 6 MB of JS without ads is unfortunately on the small
              | side these days. I worked for a company that had 15 MB
              | of JS after minification and compression (mostly giant
              | JS libraries of which we on the dev team used one
              | small feature).
        
             | WrtCdEvrydy wrote:
              | One whole React webapp... plus pictures. It really
              | isn't "JS" so much as the whole "resources" total
              | reported by Chrome.
        
         | colmvp wrote:
          | I've been using uMatrix for ages, and it was baffling to
          | me how some websites that are literally just nice-looking
          | blogs have an unreal number (e.g. 500+) of external
          | dependencies.
        
           | [deleted]
        
           | jmiles90 wrote:
           | https://www.npmjs.com/package/is-odd
        
             | Orou wrote:
             | This is a joke, right?
        
               | mr__y wrote:
                | The joke is that is-odd has a dependency: the
                | is-number package.
        
               | kolaente wrote:
               | Look at how often it was downloaded last week alone:
               | https://www.npmjs.com/package/is-odd
               | 
               | We're all doomed.
        
             | f00zz wrote:
             | I notice there's an isEven as well. I wonder if they're
             | mutually recursive.
        
               | mr__y wrote:
                | There is also is-odd-or-even, which uses both as
                | dependencies, so it appears they're not.
                | 
                | https://www.npmjs.com/package/is-odd-or-even
        
               | f00zz wrote:
                | The commit history is surprisingly long. The actual
                | test evolved from
                | 
                |     return !!(~~i >> 0 & 1)
                | 
                | to
                | 
                |     return !!(~~i & 1);
                | 
                | to
                | 
                |     const n = Math.abs(value);
                |     return (n % 2) === 1;
        
               | [deleted]
        
           | jandrese wrote:
            | I love uMatrix, but it can be a serious hassle to get an
            | embedded video to play. Sometimes I'll allow scripts
            | from the embedding site and suddenly there are dozens or
            | hundreds of new dependencies popping up, and still no
            | video. At that point I really have to ask myself if it
            | is worth it. Maybe if I'm lucky it's a YouTube video and
            | I can track it down on YouTube's site, but if not it's
            | going to be a big headache and a lot of reloads before
            | the stupid thing plays.
        
             | zepearl wrote:
              | Same problem here (not with uMatrix but with something
              | similar, NoScript) => I'm now constantly using two
              | browsers: Firefox with NoScript as a base, then
              | temporarily switching to Chrome to access the sites
              | which are demanding in terms of dependencies, as you
              | described.
        
             | setr wrote:
              | I just got in the habit of turning it off when I'm
              | either too lazy to bother or I'm on a site I'll
              | probably never visit again. Sites that are already set
              | up generally stay that way, of course.
              | 
              | The big headache is when you have a site half set up:
              | it's correct for all of your usage, and then you try
              | something new and get a video that doesn't load, and
              | you sit there waiting until you realize uMatrix
              | probably found something new.
        
         | ipnon wrote:
          | It's easy to convince non-technical people to adopt
          | uMatrix when you show them that they can watch shows
          | without ads and load pages in less than a second.
        
       | mfontani wrote:
       | Ads rule everything around me
        
       | wilg wrote:
       | Is it really that surprising that developers would rather spend a
       | performance budget on adding new features instead of further
       | improving performance?
        
       | staycoolboy wrote:
        | Throughput vs. latency.
        | 
        | If I want to download a 1 GB file, I do a TLS handshake once
        | and then send huge TCP packets. I can get almost 50 MB/s
        | from my AWS S3 bucket on my 1 Gb fiber, so it takes
        | ~20 seconds.
        | 
        | However, if I split that 1 GB up into 1,000,000 1 KB files,
        | I incur the handshake penalty 1,000,000 times, plus all of
        | the OTHER overhead of nginx/apache and the file system or
        | whatever is serving the request, so my effective bandwidth
        | is significantly lower. I just did an SCP experiment, got an
        | 8 MB/s average download speed, and cancelled the download.
        | 
        | The problem here is that throughput is great with a few big
        | files, but hasn't improved with lots of little files.
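        | 
        | A back-of-envelope sketch of that overhead (assuming a 30 ms
        | round trip and one fresh connection per file; keep-alive and
        | HTTP/2 mitigate this by reusing connections):
        | 
        |     const rtt = 0.030;   // seconds per round trip (assumed)
        |     const rate = 50e6;   // bytes/second of raw throughput
        |     // one 1 GB file: transfer time plus one handshake
        |     const big = 1e9 / rate + 2 * rtt;
        |     // 1,000,000 1 KB files: each pays the round trips again
        |     const small = 1e6 * (1024 / rate + 2 * rtt);
        |     console.log(big.toFixed(0) + ' s');            // ~20 s
        |     console.log((small / 3600).toFixed(1) + ' h'); // ~16.7 h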
        
       | anonyfox wrote:
        | I just rewrote my personal website ( https://anonyfox.com )
        | to be statically generated (Zola, built via a GitHub
        | Action), so the result is just plain and speedy HTML. I even
        | used a minimal classless "CSS framework", and on top I am
        | hosting everything via Cloudflare Workers Sites, so visitors
        | should get served right from CDN edge locations. No JS or
        | tracking included.
        | 
        | As snappy as I could imagine, and I hope this will make a
        | perceptible difference for visitors.
        | 
        | While average internet speed might increase, I still see
        | plenty of people browsing websites primarily on their
        | phones, with bad cellular connections indoors or via a
        | shared WiFi spot, and it is painful to watch. Hence my
        | rewrite (still ongoing).
        | 
        | Do fellow HNers also feel the "need for speed" nowadays?
        
         | pdimitar wrote:
          | Very interested in how you used Zola. The moment I wanted
          | to customize title bars and side bars, I was basically on
          | my own. Back then I didn't have the desire (or expertise)
          | to reverse-engineer it.
          | 
          | Have you found it easy to customize, or did you go with
          | the flow without getting too fancy?
        
           | anonyfox wrote:
            | Sometimes a little bit of inline HTML within the
            | markdown content will do for me... otherwise it has been
            | a great experience so far.
            | 
            | AFAIK you can set custom variables in the frontmatter of
            | the markdown files, and your layout/template HTML can
            | use them (or use an IF check, or ...).
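            | 
            | Something like this, if I remember the Zola conventions
            | right (TOML frontmatter with an [extra] table, read from
            | a Tera template; the variable name is made up):
            | 
            |     +++
            |     title = "My post"
            | 
            |     [extra]
            |     subtitle = "A custom variable"
            |     +++
            | 
            | and in the template:
            | 
            |     {% if page.extra.subtitle %}
            |       <h2>{{ page.extra.subtitle }}</h2>
            |     {% endif %}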
        
         | 40four wrote:
          | I have a similar setup for my personal site, although it's
          | still a work in progress. I've really been interested in
          | JAMstack methods lately. I build the static site with
          | Eleventy and have a script to pull in blog posts from my
          | Ghost site. Too bad I haven't really written any blog
          | posts yet, maybe one day :) Anyhow, I really like
          | Cloudflare Workers, would recommend!
        
         | onemiketwelve wrote:
         | Thank you sensei
        
         | ww520 wrote:
          | That's very cool. Nice little project to speed up the
          | site. One data point: a cold load takes about 2.2 seconds;
          | subsequent loads take about 500 ms, from a cafe in the Bay
          | Area using a shared wifi.
          | 
          | The cold load stats:
          | 
          |     Load Time              2.20 s
          |     Domain Lookup          2 ms
          |     Connect                1.13 s
          |     Wait for Response      68 ms
          |     DOM Processing         743 ms
          |     Parse                  493 ms
          |     DOMContentLoaded Event 11 ms
          |     Wait for Sub Resources 239 ms
          |     Load Event             1 ms
          | 
          | Edit: BTW, the speed is very good. I've tried similarly
          | simple websites and got similar results. The Facebook
          | login page takes 13.5 seconds.
        
           | anonyfox wrote:
            | I do not really understand why it is _that_ slow...
            | 
            |     DOM Processing 743 ms
            |     Parse          493 ms
            | 
            | ... I mean, it is just some quite light HTML and minimal
            | CSS, right? What could possibly make your browser so
            | slow at handling this?
        
             | throwaway1777 wrote:
             | Might be pretty good depending on the specs
        
               | spanhandler wrote:
                | The page is 1.03 KB of HTML and ~1.5 KB of CSS. The
                | HTML has about a dozen lines of JavaScript in the
                | footer that, at a glance, seemed only to run onclick
                | to do something with the menu. I'm pretty sure a
                | 166 MHz (with an M) Pentium could process 1.03 KB of
                | HTML and render the page in under 700-ish ms, so I
                | agree that seems oddly slow for any modern device,
                | unless they're browsing on a mid-range Arduino.
        
             | TheDong wrote:
             | My guess? It's doing streaming parsing/processing, so it's
             | network bound.
             | 
             | It started downloading html, once it got the first byte it
             | started processing it, but then it had to wait for the rest
             | of the bytes (not to mention the css file to download).
             | 
             | The parent comment is clearly using some really slow wifi,
             | so I think it's likely that's what happened.
        
             | ww520 wrote:
              | FWIW, I re-ran the test at home. Cold load is about
              | 400 ms; repeated loads are about 240 ms.
              | 
              | Cold load stats:
              | 
              |     Load Time              409 ms
              |     Domain Lookup          37 ms
              |     Connect                135 ms
              |     Wait for Response      40 ms
              |     DOM Processing         165 ms
              |     Parse                  123 ms
              |     DOMContentLoaded Event 8 ms
              |     Wait for Sub Resources 34 ms
        
         | otter-in-a-suit wrote:
         | I have done the same with Hugo on my blog[0], but actually had
         | to fork an existing theme to remove what I would call bloat.[1]
         | 
          | The interesting thing for me is that, while _I personally_
          | certainly feel the "need for speed" and appreciate pages
          | like yours (nothing blocked, only ~300 KB), most people do
          | not. Long loading times, invasive trackers, jumping pages
          | (lazily loaded scripts and images), fonts loaded from
          | spyware CDNs: these are things only "nerds" like us care
          | about.
         | 
         | The nicest comment on my design I heard was "Well, looks like a
         | developer came up with that" :)
         | 
         | [0] https://chollinger.com/blog/ [1]
         | https://github.com/chollinger93/ink-free
        
         | ntumlin wrote:
          | Very impressive. One further thing you can do to improve
          | perceived speed, potentially at the expense of some
          | bandwidth, is to begin preloading pages when a link is
          | hovered. There are a couple of libraries that will do this
          | for you.
          | 
          | It can shave 100-200 ms off the perceived load time, and
          | since your site is already near or below that threshold it
          | might end up feeling like you showed the page before
          | anyone even asked for it.
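          | 
          | A minimal sketch of the hover-prefetch idea (libraries
          | like instant.page do roughly this, with more edge cases
          | handled):
          | 
          |     // prefetch a same-origin page when its link is hovered
          |     document.addEventListener('mouseover', (event) => {
          |       const link = event.target.closest('a[href]');
          |       if (!link || link.origin !== location.origin) return;
          |       // skip links we have already hinted at
          |       const dup = document.querySelector(
          |         `link[href="${link.href}"]`);
          |       if (dup) return;
          |       const hint = document.createElement('link');
          |       hint.rel = 'prefetch'; // low-priority fetch to cache
          |       hint.href = link.href;
          |       document.head.appendChild(hint);
          |     });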
        
         | djaychela wrote:
         | That's fantastic - as near to instantaneous as you need, and
         | it's actually slightly odd having a page load as quickly as
         | yours does; we've become programmed to wait, despite all the
         | progress that's happened in hardware and connectivity. The only
         | slightly slow thing was the screenshots on the portfolio page
         | as the images aren't the native resolution they're being
         | displayed at.
         | 
         | Does the minification of the css make a big difference? I just
         | took a look at it using an unminifier, and it was a nice change
         | to see CSS that I feel I actually understand straight away,
         | rather than thousands of lines of impenetrable sub-sub-
         | subclasses.
        
           | anonyfox wrote:
           | I just settled on https://oxal.org/projects/sakura/ and added
           | a handful of lines for my grid view widget, that's all.
           | 
            | Maybe it's me, but I originally learned that the concern
            | of CSS is to make a document look pretty. Not magic CSS
            | classes or inline styles (or both, which bugs me about
            | Tailwind), so the recent "shift" towards "classless CSS"
            | is very appealing.
            | 
            | Sidenote: yes, the screenshots could be way smaller, but
            | originally I had them full-width instead of the current
            | thumbnails, and I'm still thinking about how to present
            | this as leanly as possible. Thanks for the feedback,
            | though!
        
             | thomasfortes wrote:
             | I use the picture tag with a bunch of media queries to
             | deliver optimized images for each resolution on websites
             | that I build. Resizing a 1080p image down to 200 px wide
             | does wonders for mobile performance while keeping things
             | perfect for full-HD monitors.
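             | 
             | A rough sketch of the markup (file names made up): the
             | browser downloads only the first source whose media
             | query matches, falling back to the img:
             | 
             |     <picture>
             |       <source media="(max-width: 600px)"
             |               srcset="screenshot-200w.jpg">
             |       <img src="screenshot-1920w.jpg"
             |            alt="Project screenshot"
             |            width="1920" height="1080">
             |     </picture>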
        
               | anonyfox wrote:
                | Since Zola has an image-resizing feature and
                | shortcode snippets, this could be a nice way to
                | automate things away (I'd hate to slice pictures
                | into X sizes by hand).
               | 
               | Will have a look, thanks!
        
         | marvinblum wrote:
          | That's perfect! Most pages should load instantaneously, at
          | least those that mostly serve text.
         | 
         | I did the same for my website [1], and I hope this becomes more
         | of a standard for "boring old" personal pages and blogs.
         | 
         | [1] https://marvinblum.de/
        
           | jfrunyon wrote:
            | Even for most businesses it should be the norm. When you
            | think about it, most businesses have almost no actual
            | dynamic content on their website: other than any
            | login/interactivity features, it might change at most a
            | few times a day...
        
           | rootusrootus wrote:
           | Interesting. Your site triggered our corporate filter as
           | "Adult/Mature Content". I wonder what tripped it up.
        
             | anonyfox wrote:
              | Oh, wow. I have no idea, there is not much content
              | yet, and zero external dependencies... maybe it's the
              | "anon" in the name? I mean, I even bought a dot-com
              | domain to look ok-ish despite my nickname :/
        
             | kbr2000 wrote:
             | Try looking up 'blum' on urbandictionary :)
        
               | marvinblum wrote:
               | Wow, that's unfortunate. Well, I can't do anything about
               | that :)
        
               | penneyd wrote:
               | Major manufacturer of cabinet hardware - seems fine.
        
               | rapnie wrote:
               | And means 'flower' in German.
        
         | 1vuio0pswjnm7 wrote:
         | "Do fellow HNers also feel the need for speed nowadays?"
         | 
         | I stopped using graphical browsers many years ago. I use a
         | text-only browser and a variety of non-browser, open source
         | software as user-agents. Some programs I had to write myself
         | because AFAIK they did not exist.
         | 
         | The only speed variations I can detect with human senses are
         | associated with the server's response, not the browser/user-
         | agent or the contents of the page. Most websites use the same
         | server software and more or less the same "default"
         | configurations so noticeable speed variations are rare in my
         | UX.
        
         | randomguy3344 wrote:
         | Hey brother, I made an account just to reply to your comment, I
         | enjoyed your website and grew my knowledge reading it.
         | 
         | Just wanted to let you know there's a typo @
         | https://anonyfox.com/tools/savings-calculator/
         | 
         | ```Aside from raw luck this __ist __still the best```
        
           | anonyfox wrote:
            | Thanks, I didn't see it even after you posted it. German
            | autocomplete, probably :(
        
         | umyemri wrote:
         | Alignment of list on https://anonyfox.com/grimoire/elixir/
         | seems a bit off.
         | 
         | Love the style though. Very crisp, very snappy.
        
           | anonyfox wrote:
            | Thanks for the feedback, will have a look!
        
         | NorwegianDude wrote:
          | Seems mostly good to me after Cloudflare caches it, but
          | you have made one annoying mistake: you forgot to set the
          | height of the image, so it causes layout shift. Other than
          | that, it's great! :)
        
         | rafaelturk wrote:
         | zola ?
        
           | anonyfox wrote:
           | https://www.getzola.org/ Static site builder written in Rust
        
         | mbar84 wrote:
          | If you specified the width/height of the image, you could
          | avoid the page reflow that makes the quick links jump
          | down.
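          | 
          | A minimal sketch (dimensions made up): with intrinsic
          | width/height in the markup, the browser reserves the space
          | before the image downloads, so nothing jumps:
          | 
          |     <img src="portrait.jpg" alt="Portrait"
          |          width="320" height="240">
          | 
          | Modern browsers derive the aspect ratio from those
          | attributes, so CSS like "max-width: 100%; height: auto"
          | still scales the image responsively without reflow.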
        
       ___________________________________________________________________
       (page generated 2020-08-04 23:00 UTC)