[HN Gopher] The modern web on a slow connection (2017)
       ___________________________________________________________________
        
       The modern web on a slow connection (2017)
        
       Author : x14km2d
       Score  : 166 points
       Date   : 2021-06-05 16:01 UTC (6 hours ago)
        
 (HTM) web link (danluu.com)
 (TXT) w3m dump (danluu.com)
        
       | istillwritecode wrote:
       | For many advertising-based websites, the value of the user is
       | often proportional to their bandwidth.
        
       | TacticalCoder wrote:
        | One of the problems is that a lot of devs have very good
       | connections at home. I've got 600 MBit/s symmetric (optic fiber).
       | Spain, France, now Belgium... It's fiber nearly everywhere. Heck,
       | Andorra has 100% fiber coverage. Japan: my brother has got 2
       | Gbit/s at home.
       | 
       | My home connection is smoking some of my dedicated servers: the
        | cheap ones are still on 100 MBit/s in the datacenter and they're
       | totally the bottleneck. That's how fast home connections are, for
       | some.
       | 
       | I used to browse on a 28.8 modem, then 33.6, then ISDN, then
       | ADSL.
       | 
       | The problem is: people who get fiber are probably not going back.
       | We're going towards faster and faster connections.
       | 
        | It's easy, when you've been on fiber for years and years now, to
        | forget what it was like. To me that's at least part of the problem.
       | 
       | Joel is not loading his own site on a 28.8 modem. It's the
       | unevenly distributed future.
        
         | Black101 wrote:
          | > Joel is not loading his own site on a 28.8 modem. It's the
         | unevenly distributed future.
         | 
          | I hope that no one is... and that they're at least using 56k
        
           | pjmlp wrote:
            | Preferably the dual-mode versions, then.
        
             | Black101 wrote:
             | " Dualmode faxmodems provide high-quality voice mail when
             | used with a soundcard. They also have both Class 1 14.4
             | Kbps send/receive fax for convenient printer-quality faxing
             | to any faxmodem or fax machine. You can even "broadcast"
             | your faxes to multiple recipients, schedule fax
             | transmission, or forward them to another number. "
             | 
             | I don't see the advantages
        
         | oblak wrote:
         | I think the problem is the vast majority of web developers
          | don't care. It's true. Sure, having a nice connection helps. In
         | the sense that having a meal helps you forget about world
         | hunger... if you cared about it in the first place.
         | 
         | Sans multimedia consumption, the modern web is fine on a
         | reasonable connection provided you're using noscript or
         | something like that. If you're not, then well, you're already
         | screwed either way.
         | 
         | What's crazy to me is not that regular users put up with a ton
         | of bullshit - they have to. It's that lots of fellow developers
         | do and they most certainly don't have to. They simply don't
         | care.
        
           | mgarciaisaia wrote:
            | I guess most of the time it's not about a dev not caring
            | about the issue - it's about not being paid to address these
            | scenarios.
           | 
           | The default for most of the tools/frameworks is to generate
           | this bloat without a nice fallback for slow connections.
           | Because the industry is focused on people with good
           | connections, because that's where the money is.
           | 
           | Whenever I develop a webapp whose users won't be having a
           | good Internet connection, there's a requirement to support
           | that use case, and I spend time making sure the thing is
           | usable on bad connections, and it's OK to sacrifice some UX
           | to get there.
           | 
            | But in most cases, customers (both end-users and companies
            | that pay me to code something for them) prefer shiny & cheap
            | rather than "works OK for faceless people I don't know who
            | still live like I did 15 years ago".
           | 
           | TL;DR: it's an economics issue, as usual.
           | 
           | ----
           | 
            | PS: I spent five hours yesterday working around 70% packet
            | loss to the router in the place I've recently moved to.
           | There's a (top) 6Mbps connection on the other side of the
           | router. I'm suffering not being on a top-class connection -
           | but that's _my_ issue, not my customer's, nor my customer's
           | customers.
        
           | ajsnigrutin wrote:
           | > What's crazy to me is not that regular users put up with a
           | ton of bullshit - they have to
           | 
            | Honestly, the number of people working in tech not using an
            | ad blocker (and not even knowing that such things exist)
            | makes me sad.
        
       | habibur wrote:
       | The modern web loads the whole website on your first visit, aka
       | SPA.
       | 
        | Plus React at 150kb, Bootstrap at 150kb, and all their plugins
        | make it multi-megabyte.
       | 
       | I was thinking about converting my old server rendered web site
       | into modern web. Still wondering if it's worth it.
        
         | grishka wrote:
         | Most often though, your first visit is your only visit.
        
         | bgroat wrote:
         | It isn't
        
         | sneak wrote:
         | The downside of this "modern" approach is that you break the
         | website for anyone with javascript off, or using a text
         | browser.
        
           | goodpoint wrote:
           | ...also everybody without modern and/or expensive hardware
           | AKA 2 billion people in the world.
        
           | hypertele-Xii wrote:
           | Doesn't it also break basic caching? That is, I can't
           | download a "modern" website to view it offline because it's
           | actually just a rendering shim that needs to phone home for
           | the content?
        
         | capableweb wrote:
          | That's last year's modern web. This year's modern web splits up
         | the JS bundle based on the pages so you only load what's
         | required for each page. So we're basically back to square one.
         | 
         | > I was thinking about converting my old server rendered web
         | site into modern web. Still wondering if it's worth it.
         | 
          | The usual guideline I tend to use: Are you building a website or
         | a web application? If you're building a website, you're best
         | off with just static pages or static pages generated
         | dynamically. If you need lots of interactivity and similar,
         | better to build a web application, and React fits well for that
         | purpose.
        
         | osrec wrote:
         | That's only when done poorly, which unfortunately has become
         | the norm. Properly done modern websites load things
         | incrementally, as and when they're needed, while cleanly
         | separating the front end logic from the back end.
        
           | bdcravens wrote:
           | The No True Scotsman argument could also be applied to the
           | technologies that "modern websites" were supposed to replace.
        
         | systemvoltage wrote:
          | I really like the idea of server side rendering: a Flask/Django
          | style backend using Jinja2 templates, with some vanilla JS
          | sprinkled in for interaction if needed.
         | 
          | I wonder if it is safe to say the vast majority of websites are
         | simple enough to use the aforementioned pattern? There are so
         | many "Doctors appointment" type dynamic websites that I don't
         | think need anything like React or Angular.
         | 
         | I think React is great if you're building the next Notion or a
         | web-based application such as Google Sheets.
         | 
         | Edit: Yeah, I am new to webdev and I find server side rendering
         | "refreshing" :-)
        
           | vbsteven wrote:
           | It's not just an idea. It's tech that has been around for
           | decades now. Rails, Django, Spring, Laravel, Sinatra, Flask
           | and hundreds of others.
           | 
           | They work fine for a large chunk of modern websites. And
           | server side templating is not the only concept from that era
           | that is much simpler than what is popular now. Those
           | frameworks were primarily synchronous instead of
           | asynchronous. And they worked in pretty much every browser.
           | Without breaking the back button. With shareable links. And
           | no scrolljacking.
           | 
           | For me personally the sweet spot for many
           | applications/websites is still just that: A synchronous MVC
           | framework with serverside rendering. With a sprinkle of
           | vanilla JS or Vue on select pages for dynamic stuff.
        
             | pjmlp wrote:
             | Same here, such approaches work and do the job just fine.
        
           | the__alchemist wrote:
           | I do this. It's remarkable this is considered to be a novel
           | pattern! (Note that some of the advances in modern CSS and JS
           | made this more feasible than it used to be)
        
           | yurishimo wrote:
           | Since you say you're new, it might be worth looking at
           | Laravel if you learned with the componentized JS approach.
           | The Blade templating language they've been perfecting for
           | years now has started to embrace composable components very
           | similar to a JS framework, but all server-side.
           | 
           | https://laravel.com/docs/8.x/blade#anonymous-components
           | 
           | https://laracasts.com/series/blade-component-
           | cookbook/episod...
        
       | strooper wrote:
        | I wonder if the experience improves by using proxy browsers, such
        | as Opera (Lite/Mini).
        
         | judge2020 wrote:
         | I doubt it since most websites have moved to using geo-
         | distributed CDNs and sometimes app servers anyways.
        
           | fifilura wrote:
           | I am pretty sure it would, and that is one of the reasons
           | Opera Mini is still very popular in Africa.
           | 
            | One of the things it does is remove the need for hundreds
            | of requests to fetch every single image/script in the page
            | (from the client, that is). Instead it's only one file to
            | fetch over HTTP. That alone makes a huge difference.
        
       | thirdplace_ wrote:
        | We'll soon come full circle, when this generation's programmers
        | realize they can render HTML templates server-side.
        
         | nunez wrote:
         | That's already kind of happening. Google and Mozilla both have
         | "accelerator" services that render pages in their data centers;
         | something similar to what opera was doing years ago. I also
         | think Node supports server side rendering. A parking web app I
         | used out in Lincoln NE takes advantage of that.
        
           | easrng wrote:
           | I know about Google's compression proxy but what's Mozilla
           | doing? I found something called Janus but it looks like it
           | was discontinued. Opera Mini is still around and there's also
           | the Firefox-based browsh.
        
         | pjmlp wrote:
          | We are already there:
          | https://nextjs.org/docs/basic-features/pages
          | 
          | Naturally it has to be more clumsy than just using one of the
          | boring SSR frameworks that have existed since 2000.
        
         | RedShift1 wrote:
         | But there is something to be said for SPAs and slow
         | connections. All the HTML and JS code is loaded beforehand and
         | afterwards only raw data is exchanged. If the API calls can be
         | limited to one call per user action/page visit, the experience
         | would be better because only some API data has to be
         | transferred instead of an entire HTML page. So your initial
         | page load would be slow because of all the HTML and JS, but
         | afterwards it should be faster compared to having server side
         | rendered pages.
        
           | grishka wrote:
           | In practice, there's rarely "afterwards". You visit some
           | website once because someone sent you a link. You load that
           | one page and that's it. You read the article and close it. By
           | the time you visit that website again, if you ever do, your
           | cache will be long gone, so you're downloading the 2-megabyte
           | JS bundle again.
           | 
           | In other words, pretty often the initial load is the only
           | one.
        
           | handrous wrote:
            | I _very rarely_ see a "Web App" that's faster due to this.
            | Take Gmail: plain-HTML Gmail transfers and renders a whole
            | page faster than full-fat Gmail does _most_ of the things it
            | does, which involve merely "some API data". The
            | activity/loading indicators on normal Gmail display longer
            | than the full-page load on plain-HTML Gmail.
           | 
           | This had some validity in the early days of AJAX when common
           | practice was to respond with HTML fragments and it was mostly
           | just a work-around for Frames not being very good at a bunch
           | of things they should have been good at. These days, not so
           | much.
        
             | zozbot234 wrote:
             | Makes sense. Roundtrips _will_ kill you on slow
             | connections, and the average SPA does lots of back-and-
             | forth roundtrips. Then the JS code has to tweak hundreds or
             | thousands of DOM nodes in a completely serial fashion to
             | reflect the new data. Much faster in practice to either
             | download or generate (preferably via WASM) a single chunk
             | of raw HTML and let the browser rerender it natively.
        
               | handrous wrote:
                | It's part of the cause of all the damn memory-bloat,
                | too. Receive some JSON (one copy of the data), parse that
                | and turn it into JS objects (at least two copies
                | allocated, now, depending on how the parsing works),
                | modify that for whatever your in-browser data store is,
                | and/or copy it in there, or copy it to "local state", or
                | whatever (three copies), render the data to the shadow
                | DOM (four copies), render the shadow DOM to the real DOM
                | (five copies). Some of those stages very likely hide even
                | _more_ allocations of memory to hold the same data, as
                | it's transformed and moved around. Lots of library or
                | pattern choices can add more. It adds up.
        
               | swiley wrote:
               | They also handle errors by having the user reload the
               | page which means everything starts over.
               | 
               | My experience growing up is that you don't notice the
               | issues on sane pages written by
               | hobbyists/professors/researchers and then you go to
               | something built by google and everything falls apart.
        
               | toast0 wrote:
               | It really depends on both latency and bandwidth.
               | 
                | A reasonable POTS modem does ok on latency, but bad on
                | bandwidth. So roundtrip is fine until you send too much
                | data (which is real easy: the modern initial congestion
                | window of 10 segments combined with a 1500-byte MTU is
                | more than a second of download buffer). If you kept the
                | data small, many round trips would be okish.
               | 
               | On the other hand, traditional geosynchronous satellite
               | always has terrible latency, so many round trips is bad
               | regardless of the data size... One big load would be a
               | lot better there.
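                | 
                | The buffer math, worked out (using the 10-segment
                | initial window and 1500-byte MTU figures above):
                | 
                |     bits = 10 * 1500 * 8   # 120,000 bits in flight
                |     for kbps in (28.8, 33.6, 56.0):
                |         print(f"{kbps}k: {bits / (kbps * 1000):.1f} s")
                |     # 28.8k: 4.2 s, 33.6k: 3.6 s, 56k: 2.1 s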
        
             | RedShift1 wrote:
             | It only works if the webapp can keep its calls limited to 1
             | per page/user action. Lots of webapps make multiple
             | roundtrips (additional fetches triggered by earlier
             | requests so they can't be done in parallel) making it slow
             | even on fast connections (looking at you Quickbooks Time).
        
               | robertlagrant wrote:
                | This is exactly why BFFs (backends for frontends) exist.
        
           | goodpoint wrote:
           | Not at all: HTML compresses extremely well. CSS/favicon/etc
           | are cached.
           | 
           | If you get rid of javascript frameworks used for SPA, the
           | overhead of delivering a handful of HTML tables and forms
           | instead of some JSON is negligible.
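            | 
            | A quick sanity check of that claim (toy data; exact
            | numbers will vary):
            | 
            |     import json, zlib
            |     
            |     rows = [{"u": f"user{i}", "n": i} for i in range(200)]
            |     as_json = json.dumps(rows).encode()
            |     as_html = ("<table>" + "".join(
            |         f"<tr><td>user{i}</td><td>{i}</td></tr>"
            |         for i in range(200)) + "</table>").encode()
            |     
            |     # repetitive markup deflates so well that the HTML
            |     # lands in the same ballpark as the JSON
            |     print(len(zlib.compress(as_json)),
            |           len(zlib.compress(as_html)))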
        
             | phil294 wrote:
             | > a handful of HTML tables and forms
             | 
              | But that depends on the use case, doesn't it. Static sites
              | may well be huge, and then you need to send all of the
              | surrounding HTML over when only a small table or form
              | would need updating. So I am not so sure about your point.
             | The greater the complexity of the displayed page, the more
             | sense it makes to use a SPA network-wise. (edit: mostly
             | covered in sibling comments)
             | 
             | You have a point about compression though. I now wonder
             | what the situation would look like if we had (had) native
             | HTML imports, as that would greatly help with caching.
        
           | ufmace wrote:
           | In theory I guess, but I'd bet that basically every SPA is
           | using enough JS libs that the initial load is much bigger
           | than a bunch of halfheartedly-optimized basic HTML. I bet
           | somebody somewhere has written a SPA-style page designed to
           | be super-optimized in both initial load and API behavior just
           | because, but I don't think I've ever seen one.
        
             | contriban wrote:
              | I agree with the general sentiment, but if you've used
              | Facebook and YouTube you know they respond immediately on
              | tap, even if the view hasn't completely loaded. They are
             | SPA-style pages.
             | 
             | Unfortunately they are the exception as there are a lot of
             | awful SPAs that focus on looking cool while they're barely
             | usable. Looking at you, Airbnb.
        
               | 10000truths wrote:
               | Facebook and YouTube can afford to use SPAs without
               | worrying too much about performance penalty because they
               | invest massive amounts of effort into highly optimized
               | and widespread content delivery networks, to the point
               | where many ISPs literally host dedicated on-site caches
               | for them.
        
           | doliveira wrote:
            | As someone who has lived with an actual slow connection and
            | budget phones, I don't think I've ever seen this promise
           | fulfilled.
           | 
           | It should work, but it's never so in practice.
        
           | marcosdumay wrote:
           | Just enable your web server's deflate middleware and you'll
           | see that raw data sizes aren't very different from fully
           | formatted HTML.
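            | 
            | With Flask, for instance, that's roughly one line (sketch
            | assumes the third-party flask-compress package):
            | 
            |     # assumes: pip install flask-compress
            |     from flask import Flask
            |     from flask_compress import Compress
            |     
            |     app = Flask(__name__)
            |     Compress(app)  # gzip/deflate responses when accepted
            |     
            |     @app.route("/report")
            |     def report():
            |         # 500 repetitive rows deflate to a small fraction
            |         # of their raw size on the way out
            |         return "<table>" + "".join(
            |             f"<tr><td>row {i}</td></tr>"
            |             for i in range(500)) + "</table>"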
        
           | masklinn wrote:
            | There really is not. SPAs generally mean tons more assets
            | being loaded, and hard errors on unreliable connections.
           | 
           | On a web page, missing a bit of the page at the end is not an
           | issue, you might have somewhat broken content but that's not
           | a deal-breaker.
           | 
           | With an SPA, an incompletely loaded API call is just a
           | complete waste of the transferred bytes.
           | 
           | And slow connections also tend to have much larger latencies.
           | Care to guess what's an issue on an SPA which keeps firing
           | API calls for every interaction?
           | 
           | > So your initial page load would be slow because of all the
           | HTML and JS, but afterwards it should be faster compared to
           | having server side rendered pages.
           | 
           | The actual effect is that the initial page load is so slow
           | you just give up, and if you have not, afterwards it's
           | completely unreliable.
           | 
            | Seriously, try browsing the "modern web" and SPAs with
            | something like a 1024 kbit/s DSL line with 100ms latency and
            | 5% packet loss; it's just hell. And that's the sort of
            | connection you can absolutely find in rural places.
        
             | RedShift1 wrote:
             | Won't loading an entire HTML page be just as bad on such a
             | connection? There's a lot more data to transfer.
        
               | misnome wrote:
               | > On a web page, missing a bit of the page at the end is
               | not an issue, you might have somewhat broken content but
               | that's not a deal-breaker
               | 
                | - the argument is that getting most of what you probably
                | want is better than nothing.
               | 
               | Although I can imagine sites thus prioritising ads even
               | more...
        
               | ratww wrote:
               | Not at all, unless there is an absurd amount of content
               | on the page that is unrelated to the data being fetched
               | (like title, footer, sidebar). An HTML-table, for
               | example, is in the same ballpark size-wise as a JSON
               | representation of the same data. And that's without
               | taking into account the fact the JSON can potentially
               | carry more information than necessary.
               | 
               | Facebook is an example of a website where there is such
               | an absurd amount of content that's not the focus of the
               | page: the sidebars, the recommendations, the friend list
               | of the chat, the trends, the ads. It sorta makes sense
               | for them to have an SPA (although let's be frank: most
                | people on slow connections prefer "mobile" or "static"
               | versions of those sites).
               | 
               | The impetus for SPAs was never really speed. The impetus
               | for SPAs is control for developers, by allowing
               | navigation between "pages" but with zero-reloads for
               | certain widgets. It was like 90s HTML frames but with
               | full control of how everything works.
        
           | lhorie wrote:
           | > So your initial page load would be slow because of all the
           | HTML and JS, but afterwards it should be faster compared to
           | having server side rendered pages
           | 
           | TFA is arguing that a user on a bad connection won't even
           | make it to a proper page load event in the first place.
           | 
           | It's probably also worth mentioning that the "gains" from
           | sending only data on subsequent user actions are subject to
            | devils in the details, namely that response data isn't always
           | fully optimized (e.g. more fields than needed), and HTML
           | gzips exceptionally well due to markup repetitiveness
           | compared to compression rate of just arbitrary data.
           | Generally speaking you can rarely make up in small
           | incremental gzip gains what you spent on downloading a
           | framework upfront plus js parse/run time and DOM repaint
           | times, especially on mobile, compared to the super fast
           | native streaming rendering of pure HTML.
        
           | ratww wrote:
           | _> If the API calls can be limited to one call per user
           | action /page visit, the experience would be better because
           | only some API data has to be transferred instead of an entire
           | HTML page_
           | 
           | HTML pages are not that big, though, unless you put a lot of
           | content around the data. Not to mention JSON can be wasteful,
           | and contain more data than needed. And lots of SPAs require
           | multiple roundtrips for fetching data.
           | 
           | And even if you do have lots of content around your data,
            | there are alternatives, like PJAX/Turbolinks, that allow you
            | to fetch only partial content, while still using minimal
            | JS compared to a regular JS framework.
        
         | CPLNTN wrote:
         | Look up React Server Components
        
       | hackernewslol wrote:
       | YES!
       | 
       | I've started using Tailwind and Alpine (with no other frontend
       | framework). A basic page with a couple of images, a nice UI,
       | animations, etc only takes up ~250kb gzipped.
       | 
       | It loads quickly on practically any connection, is well-
        | optimized, and generally just works really well. The users love
       | it too.
       | 
       | Coupled with a fast templating system and webserver (server-side
       | in Go is my personal favorite but there are plenty of others), it
       | isn't hard to get a modern featureset and UI with under 300kb.
       | 
       | I hope the push for a faster internet continues. Load-all-the-
       | things is great until you're on a bad connection.
       | 
       | Sourcehut comes to mind as a great model/example, using a simple
       | JS-less UI and Go server-side rendering.
        
       | morpheos137 wrote:
        | What do people think is the best approach to incentivise lean web
       | design? The bloat of the modern web is absolutely ridiculous but
       | it seems to be the inevitable result of piling abstraction on top
       | of abstraction so that you end up with a multimegabyte news
       | article due to keeping up with the latest fad framework.
        
         | tomaskafka wrote:
          | What every large company does - make an employee VPN (for
          | laptops and phones) that simulates a poor internet connection
          | (slowness, packet loss, lag), to let the developers feel the pain.
        
           | marcosdumay wrote:
           | The joke is on them (well, yeah, but figuratively too). The
           | VPN is only slow for external sites that the developers can
           | not fix, while internal ones load at the full speed the
           | middlebox can handle.
           | 
           | Companies are grooming the developers so that they will never
           | have difficulty hiring people that can diagnose a broken
           | firewall, but they are not getting faster web pages out of
           | the deal.
        
         | MattGaiser wrote:
         | I think you would need to prove that it matters in terms of
         | increasing revenue and is more valuable than other things you
         | can spend your engineers on.
         | 
         | I would be surprised if this discussion has not taken place at
         | Airbnb and they apparently decided it did not matter.
        
       | MarkusWandel wrote:
       | My mom's cellular data plan (used for rural internet access
       | through a cellular/wifi router) has a 128kbps fallback if you use
       | up your main data allotment.
       | 
       | 128kbps isn't so bad, is it? More than 3x the speed she used to
       | get with a dialup modem.
       | 
       | But no. We ran it into the fallback zone to try it out. And half
       | the sites (e.g. the full Gmail UI or Facebook) wouldn't even load
       | - the load would time out before the page was functional.
       | 
        | The 128kbps fallback is meant as a lifeline, for email and
        | instant messaging communications. And that's really all it's
        | good for any more.
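        | 
        | The arithmetic backs that up (page size assumed for
        | illustration):
        | 
        |     page_bytes = 2 * 1024 * 1024     # a ~2 MB "modern" page
        |     secs = page_bytes * 8 / 128_000  # at 128 kbps
        |     print(f"{secs:.0f} s")           # ~131 s, past most timeouts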
        
       | johnmorrison wrote:
       | This article gave me a random push to nuke the size of my site -
       | just brought it down from ~50 kB on text pages all the way to
       | ~1.5 kB now, and from 14 files to 1 HTML file.
        
       | superkuh wrote:
       | >You can see that for a very small site that doesn't load many
       | blocking resources, HTTPS is noticeably slower than HTTP,
       | especially on slow connections.
       | 
        | Yet another reason to not get rid of HTTP in the HTTP+HTTPS world
        | just because it's hip and what big companies are doing.
        
         | ev1 wrote:
         | This is an old article. 0RTT TLS 1.3 + QUIC is on par with HTTP
         | usually, based on our RUM/telemetry.
        
           | superkuh wrote:
            | 0RTT only applies to connections after the first load. And
           | what cloudflare does is certainly not what most people on the
           | internet (as opposed to most mega-corporations) should do.
        
         | Grimm1 wrote:
         | That's a really bad take on HTTPS. HTTP helps to enable a bunch
         | of trivial but damaging attacks on an end user. If you run
         | anything maybe more complicated than a static blog, and even
         | then for your own security, HTTP is the wrong choice.
        
       | k__ wrote:
       | Interesting that it mentions the book "High Performance Browser
       | Networking". My feeling after reading it was that latency is all
       | that matters, not page size.
        
         | AlexanderDhoore wrote:
            | Page size becomes latency when your connection is slow: at
            | 56 kbps, every extra 7 kB is another second before anything
            | renders.
        
           | k__ wrote:
           | Yes.
           | 
           | Every round trip adds up.
        
       | 0xbadcafebee wrote:
       | > I probably sound like that dude who complains that his word
       | processor, which used to take 1MB of RAM, takes 1GB of RAM
       | 
       | Well, 1MB is way too small, but 1GB is way too large. Conserving
       | resources is important.
        
       | brundolf wrote:
       | I was expecting an empty rant, but this was even-handed and made
       | some good points
        
         | bo1024 wrote:
         | If you don't know Dan Luu's blog, I suggest reading the rest!
        
       | ansgri wrote:
        | From a capitalist devil's advocate point of view: does it really
        | matter that your website works poorly for 90% of web users when
        | those 90% either don't have any disposable income (there's got to
        | be a correlation) or are too distant geographically and
        | culturally to ever bring you business? The "commercial quality"
        | (i.e. to avoid looking worse than the competition) web development
        | has become unbelievably complicated and expensive, so in reality
        | probably only individual enthusiasts and NGOs should care about
        | the long tail. Local small businesses will just use some local
        | site builder or social media, so even they don't matter.
        
         | bdcravens wrote:
          | The assumption you are making here is that the preferred
          | customers you're talking about are always on quality
          | connections, and that poor connections are limited to undesired
          | customers. The same user who accesses your site over an 800
          | Mbps wifi connection may need to access the same site in a
          | spotty 4G scenario.
        
         | hokumguru wrote:
         | From the other capitalist point of view, wouldn't it be more
         | ideal to squeeze every single percentage of the market if you
         | could? I think most companies would gladly take a 10% boost in
         | sales.
         | 
          | Furthermore, that's only the bottom 10% of America; appealing
          | to that audience also appeals to maybe the top percentage of
          | India and Asia as well, which are giant markets.
        
           | pjscott wrote:
           | That depends on how much money it takes to get a bit more of
           | the market and how much money you expect to get from them.
           | Ideally you'd reach the point where those two derivatives are
           | equal, then stop.
        
         | mumblemumble wrote:
         | These sorts of characterizations are just way, way, way too
         | simplistic.
         | 
         | Assume that I earn a six figure income and live in a major city
         | with reasonably fast (though not necessarily top tier)
         | Internet. So, y'know, theoretically a fairly desirable customer
         | and not a subsistence farmer or some other demographic that's
         | easy to give the cold shoulder. But still --
         | 
         | Bad Internet day, maybe someone's getting DDoS attacked? If
         | your website isn't lean, I'm more likely to get frustrated by
         | the jank and leave.
         | 
         | It's Friday evening and the Internet is crammed full of
         | Netflix? If your website isn't lean, I'm more likely to get
         | frustrated by the jank and leave.
         | 
          | Neighbor's running some device that leaks a bunch of radio
          | noise and degrades my wifi network? If your website isn't
          | lean, I'm more likely to get frustrated by the jank and leave.
         | 
         | I'm connecting from a coffee shop with spotty Internet? If your
         | website isn't lean, I'm more likely to get frustrated by the
         | jank and leave.
         | 
         | I've got 50,000 tabs open and they're all eating up a crapton
         | of RAM? If your website isn't lean, I'm more likely to get
         | frustrated by the jank and leave.
         | 
         | I'm browsing while I've got some background task like a big
         | compile or whatever chewing up my CPU? If your website isn't
         | lean, I'm more likely to get frustrated by the jank and leave.
         | 
         | I'm accessing from my mobile device and I'm in a dead zone
         | (they're common in major cities doncha know)? If your website
         | isn't lean, I'm more likely to get frustrated by the jank and
         | leave.
         | 
         | etc. etc. etc.
         | 
         | Meanwhile, while it may not be the style of the times, it's
         | absolutely possible to make very good looking websites with no
         | or very little JavaScript. Frankly, a lot of them look and feel
         | even better than over-engineered virtual DOM "applications" do.
         | Without all that JavaScript in the way, the UX feels downright
         | snappy. https://standardebooks.org/ is a nice example.
        
         | yjftsjthsd-h wrote:
         | That assumes that you can appropriately target the 10% that are
          | profitable. If you can't reliably do that, or the profitable
          | 10% isn't the 10% with the highest-performance computers and
          | connections, then you probably are better served by casting a
          | wide net.
        
           | blowski wrote:
           | What if, to take advantage of the profitable 10%, you are
           | better off providing a rich page, albeit with a large
           | filesize? I have no evidence to support that claim, other
           | than that large profitable companies generally seem to think
           | this is the case.
        
         | nunez wrote:
          | It does if your business is low-margin, high-volume and the 90%
          | you reference have _enough_ disposable income to buy the
         | basics. I.e. literally all of retail and consumer banking.
         | 
         | Most businesses fall into this category. Facebook, Amazon, and
         | Netflix fall into this category. That's why their reliability
         | engineers are amongst the highest paid engineers in those
         | businesses. They literally cannot afford to be down.
         | 
         | The ironic thing about your argument is that I've found
         | recently that the more something costs, the more difficult it
         | is to procure, ESPECIALLY online. Some of the most expensive
         | items out there simply cannot be purchased online end-to-end.
         | 
          | Looking to rent an apartment online? Easy. Looking to rent or
         | buy a house? A billion moving parts, with half a million of
         | those parts needing to be done face to face.
        
         | Seirdy wrote:
         | Strong disagree but upvoting for a good discussion.
         | 
         | Some situations to give your "capitalist devil" pause:
         | 
         | - A train passenger on the way to their summer home enters a
         | tunnel. Packet loss goes through the roof and nothing but HTML
         | loads.
         | 
         | - A hotel room housing an attendee of a local shareholder
         | meeting has awful shared wi-fi.
         | 
          | - Ping times spike on a politician's private jet.
         | 
         | - Someone hoping to buy a new Rolex at the mall can barely
         | connect in the underground parking garage.
         | 
         | Everyone hits the bottom of the bell curve.
        
           | goodpoint wrote:
           | - Plenty of wealthy people on cruise ships or yachts with
           | high-latency uplinks
           | 
           | - Approx 1 million crew are at sea at the same time on
           | average
        
         | ufmace wrote:
         | Depends on the site. It'd be nice if somebody's individual blog
         | about the cool tech thing they built was accessible to everyone
         | everywhere. If you are a business that can fundamentally only
         | serve customers in city X, then it's perfectly reasonable to
         | use tech that might not work for people outside of city X.
         | There's a lot of space in between those, so use your judgement
         | I guess.
        
         | toast0 wrote:
         | This falls down because people with lots of money have cabins
         | in the woods with garbage internet. If they can't load your
         | page from there, there goes that lucrative customer. Or they
         | may be on a ferry, or a small aircraft or etc.
        
         | goodpoint wrote:
          | I've never seen any example of businesses making deliberate
          | choices on website size and loading time.
         | 
         | The web "development" world is just too messy and hype driven.
        
       | dustractor wrote:
       | Between 2010-2016, I lived in a rural area where we had one
       | option: Satellite internet from HughesNet. Prior to that, the
       | only option was dialup, and since it was such a remote area, the
        | telephone company had grandfathered a few numbers from out of the
        | service area as free-to-call, to allow residents to use the
        | nearest ISP without extra toll charges.
       | 
       | So we went from paying 9.95/month for average 56k service, to
       | 80/month for a service that was worse than that.
       | 
       | To add insult to injury, a local broadband provider kept sticking
       | their signs at our driveway next to our mailbox, and we would
       | call to try and get service, but we were apparently 200 feet past
       | the limit of their service area. People who lived at the mouth of
       | our driveway had service, our neighbors had service, but we were
       | too far out they said.
       | 
       | I repeat: as late as 2016 I WOULD HAVE KILLED TO BE ABLE TO JUST
       | USE FREAKING DIALUP!
        
         | 1MachineElf wrote:
          | In rural Virginia I had a very similar experience during the
         | exact time frame as you. Verizon and Comcast would say we could
         | be connected over the phone, send equipment (which I'd pay
         | for), then turn around and say it was too remote. Neighborhood
         | down the street had their service though. The ISP we ended up
         | with was a couple with an antenna on top of a local mountain.
         | Our service was capped at 2GB per day and blowing through it
         | (which was very easy) meant being throttled to the point of
         | nothing working anymore. It was several long years of
         | frustration.
        
         | fossuser wrote:
         | Starlink is a life saver for this situation.
         | 
         | Family in Placerville CA went from unusable <1mbps viasat to
         | 50-190mbps overnight.
         | 
         | I hope it puts all the geostationary satellite companies into
         | bankruptcy.
        
           | kevin_thibedeau wrote:
           | They're not competing for bandwidth with anyone else yet.
           | Wait til the service is open to everyone.
        
             | pjscott wrote:
             | In rural areas, contention should stay low enough that the
             | speeds will continue to be at least decent. And of course
             | the latency is much lower than geosynchronous satellites,
             | independent of bandwidth.
        
             | walrus01 wrote:
             | They're limiting the number of CPEs per cell to a maximum
             | number of sites, and volume of traffic, such that the
             | service won't degrade to a viasat/hughesnet like terrible
             | consumer experience. I can't say how I know, but some _very
             | experienced and talented network engineers_ are designing
              | the network topology and oversubscription/contention ratio
             | based out of their Redmond, WA offices.
        
         | nunez wrote:
         | I feel bad for y'all rural folk. I paired with some people who
         | were living out there, and Internet was always a problem for
         | them.
        
         | walrus01 wrote:
         | I have been using another person's starlink beta terminal since
         | November of last year, and have had my own since late January.
         | It's at a latitude sufficient that packet loss and service
         | unavailability is averaging about 0.25% (1/4th of 1 percent)
         | over multi day periods.
         | 
         | It's a real 150-300 down, 16-25 Mbps up. In many cases it
         | actually beats the DOCSIS3 cable operator's network at the same
         | location for jitter, packet loss, downtime.
         | 
         | The unfortunate economics of building 4000-6000 kg sized
         | geostationary telecom satellites with 15 year finite lifespans,
         | and launching them into a proper orbit ($200 million would not
         | be a bad figure for a cost to put one satellite in place) mean
         | that the oversubscription/contention ratios on consumer grade
         | VSAT are extreme.
         | 
          | Dedicated 1:1 geostationary satellite still has the 492-495ms
          | latency, but is remarkably not bad. However, you're looking at
          | a figure of anywhere from $1300 to $3000 per Mbps, per
          | direction, per month for your own dedicated chunk of
          | transponder kHz for SCPC.
         | You're also looking at a minimum of $7000-9000 for terminal
         | equipment. That's the unfortunate reality right now.
         | 
         | I feel sorry for both viasat/hughesnet consumer grade
         | customers, who are suffering, and also the companies, who are
         | on a path dependent lock in towards obsolescence. Even more so
         | if various starlink competitors like Kuiper, Telesat's LEO
         | network and OneWeb (not exactly a competitor since it won't be
         | serving the end user, but same general concept) actually launch
         | services.
        
         | hypertele-Xii wrote:
         | > I WOULD HAVE KILLED TO BE ABLE TO JUST USE FREAKING DIALUP!
         | 
         | I know you're saying this in jest, but joking about committing
         | murder to get Internet access isn't actually funny.
        
           | jlund-molfese wrote:
           | It's a common English idiom, and like "raining cats and
           | dogs," the meaning of the phrase doesn't correspond literally
           | to the words. The parent comment isn't actually joking about
           | committing murder.
           | 
           | As someone whose only internet options for a few years were
           | satellite and WISP that cost several hundred dollars for 8
           | Mbps down, I understand where they're coming from :)
        
             | hypertele-Xii wrote:
             | Raining cats and dogs would be a natural phenomenon, not an
             | act of violence.
             | 
             | What if this has been the "idiom"?:
             | 
             | > I WOULD HAVE RAPED A NIGGER TO BE ABLE TO JUST USE
             | FREAKING DIALUP!
             | 
             | Would it still be funny? Maybe this is an American thing,
             | where violence is glorified and totally ok to joke about.
             | If you said "I'd kill for ______" in my country, you'd get
             | some _very_ concerned looks from your peers.
        
               | tom_ wrote:
               | It's a standard English idiom, universally understood not
               | to be taken literally.
        
               | hypertele-Xii wrote:
               | "The standard" is primitive, barbaric, and violent. It's
               | high time to evolve and civilize. Language shapes
               | thought. Thought can shape language. Stop glorifying
               | violent idioms.
        
           | theshadowknows wrote:
           | Who says they're joking. They didn't say they'd kill a human.
           | Maybe they'd kill a goat.
        
             | MattGaiser wrote:
             | It seems to work for aircraft maintenance.
             | 
             | https://www.reuters.com/article/us-nepal-airline-odd-
             | idUSEIC...
        
               | rvba wrote:
                | A football player's family sacrificed some three poor
                | cows to win a final. It didn't work. Imagine being a cow
                | and dying for Liverpool FC...
               | 
               | As for the plane: there is a reason why European Union
               | bans aircraft from many countries. Why not get... good
               | technicians?
        
           | temporama1 wrote:
           | Time for some fresh air
        
         | detaro wrote:
         | No neighbor that let you piggyback on their connection? (That
         | of course doesn't change the fundamental shittyness of the
         | overall lack of available access)
        
           | Causality1 wrote:
           | Yes that was my first thought. Offer to pay half the bill and
           | pick up a giant yagi antenna for twenty bucks. Only downside
           | is it's technically illegal to combine that much power and
           | that much gain but I've known people who did it for a decade
           | with no FCC call.
        
             | afavour wrote:
             | Depending on how far we're talking you could also just run
             | network cable...
        
       | dghughes wrote:
       | The problem with comparing the Internet now vs 25 years ago is
       | back then you didn't live on the Internet all your waking hours.
        | You jumped on, got what you needed, and got off again; otherwise
       | you'd be paying a high hourly premium. And to top that off you'd
       | power off your computer and cover it and the monitor with a dust
       | cover.
       | 
        | Now, with phones and always-on connections, it's not even
        | comparable. In the early 1990s, and even the later 1990s, I spent
        | more time using my computer itself - programming, graphics,
        | learning about it, yes, the physical thing in front of me - than
        | on the Internet.
        
         | gscott wrote:
          | Plus, 25 years ago images were a few KB in size. Now, with the
          | average webpage being a megabyte in size, it's outrageous.
        
           | tomjen3 wrote:
           | The images were a few kilobytes in size because they were
           | 120x80 pixels GIFs.
           | 
              | Now they are 1200x750 high-resolution JPGs, but that is a
              | worthwhile tradeoff when we have screens that can display
              | them.
           | 
           | The real issue is Javascript.
        
             | afavour wrote:
             | Actually in terms of bandwidth JavaScript isn't the
             | problem, you can fit an entire SPA into the bandwidth
             | required for one large image (it is a problem in terms of
             | CPU usage on underpowered devices though)
             | 
             | Large images being "worth the trade off" is debatable
             | depending on your connection speed, I think (though at
             | least you can disable images in the browser?)
        
       | marcodiego wrote:
       | Nevermind connection speed, the modern web is unbearable without
       | ad blockers.
        
       | jart wrote:
       | Finally, an influential developer who cares about the other 99%.
       | 
       | I've lived in dozens of places. I've lived in urban areas,
       | suburban areas, rural areas. I've even lived on a boat. With the
       | exception of wealthy areas, reasonable internet is a constant
       | struggle.
       | 
        | It's also never due to the speed though, in my experience. The
        | biggest issues are always packet loss, intermittent outages,
        | latency, and jitter. High speed internet doesn't count, aside
        | from checking off a box, if it goes out for a couple hours every
        | couple hours, or has 10% packet loss. You'd be surprised how
       | common stuff like that is.
       | 
       | Try visiting the house of someone who isn't rich and running mtr.
       | Another thing I've noticed ISPs doing is they seem to
       | intentionally add things like jitter to latency-sensitive low-
       | bandwidth connections like SSH, because they want people to buy
       | business class.
       | 
        | In many ways, 56k was better than modern high speed internet.
        | Because yes, connections were slow, but even at 300 baud the
        | latency and reliability were good enough that you could count on
        | it to connect to some central computer and run vi - which Bill
        | Joy actually wrote on that kind of connection, and which deeply
        | influenced its design.
        
       | Tade0 wrote:
       | > If you think browsing on a 56k connection is bad, try a 16k
       | connection from Ethiopia!
       | 
       | No need to travel that far. During my time in my previous
       | apartment I had two options for connecting:
       | 
       | -The landlord's mother's old router behind a thick wall (400kbps
       | at best, 300-400ms latency with 10-30% packet loss).
       | 
       | -A "soapbar" modem we got along with my SO's mobile plan. 14GB a
       | month, slowing down to a trickle of 16-32kbps when used up.
       | 
       | Things that worked on such a connection:
       | 
       | -Google
       | 
       | -Google Meet
       | 
       | -Facebook Messenger
       | 
       | -Hacker News
       | 
       | The rest of the web would often break or not load at all.
        
       | giantrobot wrote:
       | The performance numbers are a really helpful illustration of the
        | problem with a lot of sites. A detail it misses, and one I see
        | people constantly forget, is that any individual user might be on
        | one of those crappy connections at multiple points during the
        | day, or even move through all of them.
       | 
       | It doesn't matter if your home broadband is awesome if you're
       | currently at the store trying to look something up on a bloated
       | page. It's little consolation that the store wrote an SPA to do
       | fancy looking fades when navigating between links when it won't
       | even load right in the store's brick and mortar location.
       | 
       | Far too many web devs are making terrible assumptions about not
       | just bandwidth but latency. Making a hundred small connections
       | might not require a lot of total bandwidth but when each
       | connection takes a tenth to a quarter of a second just to get
       | established there's a lot of time a user is blocked unable to do
       | anything. Asynchronous loading just means "I won't explicitly
       | block for this load", it doesn't mean that dozens of in-flight
       | requests won't cause _de facto_ blocking.
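        | 
        | Back-of-the-envelope, using the figures above (fully
        | serialized connections are the worst case):
        | 
        |     requests = 100
        |     for setup in (0.1, 0.25):  # seconds per connection
        |         # if dependency chains serialize them, setup alone
        |         # costs this much before any payload moves
        |         print(f"{setup * 1000:.0f} ms each:"
        |               f" {requests * setup:.0f} s")
        |     # 100 ms each: 10 s; 250 ms each: 25 s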
       | 
       | I'm using the web not because I love huge hero images or fade
       | effects. I'm using the web to get something accomplished like buy
       | something or look up information. Using my time, bandwidth, and
       | battery poorly by forcing me to accommodate your bloated web page
       | makes me not want to give you any money.
        
         | ajsnigrutin wrote:
          | Yep... in large stores like IKEA, I've had huge problems with
          | my connection falling down to 2G speeds, and finding anything
          | online was a huge pain in the ass. The first thing to load
          | should be the page layout with the text already inside, then
          | images, then everything else... some pages ignore this, load
          | nothing plus a huge JS library, then start with random crap,
          | and I get bored and close my browser before I get the actual
          | text.
        
       ___________________________________________________________________
       (page generated 2021-06-05 23:00 UTC)