[HN Gopher] 1MB Club
       ___________________________________________________________________
        
       1MB Club
        
       Author : bradley_taunt
       Score  : 468 points
       Date   : 2020-11-19 17:22 UTC (5 hours ago)
        
 (HTM) web link (1mb.club)
 (TXT) w3m dump (1mb.club)
        
       | forgotmypw17 wrote:
       | Shameless plug: my page is a fully-featured message board
       | with tagging and stats, private-key-based accounts, and
       | optional JavaScript features, and its homepage is much
       | smaller :)
        
       | gandalfian wrote:
       | War and Peace 1.3mb. Ah well, a two pager I guess.
        
         | GloriousKoji wrote:
         | Or we could zip it up and then use javascript to unzip it and
         | display on the client side.
        
           | jefftk wrote:
           | This is already built into the browser. Your browser sends
           | "Accept-Encoding" listing what decoders it has (ex: "gzip,
           | deflate, br"), and then the server may encode the body with
           | any of them. If it does, it will send "Content-Encoding:
           | gzip" or similar to indicate which it chose.
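
The negotiation jefftk describes can be sketched with Python's standard
library. This is a simplification: only gzip is offered here, and a real
server would also honor q-values and other codings such as br and deflate.

```python
import gzip

def negotiate_encoding(accept_encoding: str, body: bytes):
    """Pick a Content-Encoding based on the client's Accept-Encoding
    header. Simplified to gzip-only; real servers weigh q-values and
    may also support br, deflate, zstd, etc."""
    offered = [enc.strip().split(";")[0] for enc in accept_encoding.split(",")]
    if "gzip" in offered:
        return "gzip", gzip.compress(body)
    return "identity", body  # no acceptable coding: send the body as-is

# Repetitive markup compresses dramatically, which is why transfer size
# (post-compression) is the fairer thing to measure.
html = b"<p>Hello, world.</p>" * 1000
encoding, payload = negotiate_encoding("gzip, deflate, br", html)
```

With the 20,000-byte body above, the gzip payload comes out well under a
tenth of the original size, with no client-side JavaScript involved.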
        
         | jefftk wrote:
         | This site is listing transfer size, so if you serve it with
         | "Content-Encoding: gzip" (as you should) you should be all set.
        
       | mNovak wrote:
       | How much of these bloated libraries / frameworks does the
       | browser cache, vs. how much is downloaded fresh each time
       | (looking across multiple sites)?
       | 
       | Are there no tools to extract only the portions of js / css etc
       | that your site actually uses, and just host that mini package
       | yourself?
        
       | vmception wrote:
       | We are finally just starting to see a contender for web3.js to
       | get the same functionality without a bloated package size
        
       | aresant wrote:
       | From a "Why, who cares" perspective: website (and app) speed
       | is highly correlated with conversion and engagement.
       | 
       | Google's headline research on the subject says "Improving your
       | load time by 0.1s can boost conversion rates by 8%."
       | 
       | Some add'l data, sourced from the Nielsen Norman Group,
       | below:
       | 
       | - Google found that increasing the load time of its SERPs by half
       | a second resulted in a 20% higher bounce rate.
       | 
       | - Google found 53% of mobile visits ended if a page took longer
       | than 3 seconds to load.
       | 
       | - Akamai aggregated data from 17 retailers (7 billion pageviews)
       | and found that conversion rates were highest for pages that
       | loaded in less than 2 seconds; longer load times correlated with
       | 50% drops in conversion rates and increased bounce rates,
       | especially for mobile visitors.
       | 
       | - BBC found that for every extra second of page load time, 10% of
       | users will leave.
       | 
       | Want to sell / fix this?
       | 
       | Here are three simple resources illustrating objective
       | third-party results from increasing site speed:
       | 
       | - https://blog.hubspot.com/marketing/page-load-time-conversion...
       | 
       | - https://wpostats.com/
       | 
       | - https://www.cloudflare.com/learning/performance/more/website...
       | 
       | Here is a deeper, more compelling look from a leader in the
       | UX space:
       | 
       | - https://www.nngroup.com/articles/the-need-for-speed/
       | 
       | Here's a really well-written article about how to PROVE to
       | the powers that be that site speed is worth testing:
       | 
       | https://web.dev/site-speed-and-business-metrics/
        
         | ChuckMcM wrote:
         | If nobody cares, why did Google invest so much in AMP?
        
           | MoldovianDlite wrote:
           | Manufacturing consent can be highly lucrative. It's even
           | easier if your audience is primed to view products as
           | necessarily reflecting consumer demand. Thankfully for Google
           | et al, if you control the market actual demand is not
           | important!
        
           | brudgers wrote:
           | Alphabet cared. It cared about what corporations can care
           | about: making money. It invested so much because there was
           | a good return on investment, there was a lot of cash on
           | hand waiting to be invested, and AMP was big enough to
           | absorb enough of that cash to be worth pursuing. With
           | billions of cash on hand, fixed overhead makes small
           | investments not worth the bother. For Alphabet, a hundred
           | million dollars is less than a rounding error.
        
           | pletnes wrote:
           | To take control of other people's content and, by
           | extension, revenue streams? Performance was a selling
           | point to the web dev; I, as a user of the web, was never
           | even asked my opinion, and I can't (easily) disable AMP.
        
         | chrisin2d wrote:
         | I used to work for a major e-commerce company and will back
         | this up. The importance of performance, of course, depends on
         | the use case.
         | 
         | In e-commerce, the biggest factors (that come to mind) to
         | compete on are--SEO and brand aside--price, offerings, and
         | convenience. Convenience has two dimensions: UX and
         | performance.
         | 
         | If a user has a very clear idea of what they want from your
         | site, they'll probably be patient. If a user is channel surfing
         | (and the vast majority are when it comes to shopping, comparing
         | options, etc.), then every millisecond matters. Every
         | millisecond spent loading is a millisecond not spent
         | selling/convincing.
        
           | myself248 wrote:
           | So what you're saying is, the fact that I feel icky and
           | decide against spending any money on HomeDepot.com lately, is
           | hardly a surprise.
        
           | LeifCarrotson wrote:
           | I totally believe that the vast majority of page views are
           | from channel surfing (or bots), but have a hard time
           | believing that has any correlation to actual purchases.
           | 
           | If I'm spending an hour's pay on something I really want, of
           | course I'll wait 10 seconds for the page to load, or if I
           | can't get it to load at all, I'll make a note to try again
            | later from a different browser or internet connection.
            | I'll
           | manually enter my address details if they didn't auto-fill
           | properly. I'll respond to emails on what I want to declare
           | for customs, and various other efforts.
           | 
           | I, and people I know, feel like we buy very little on a whim.
           | Is that unique? Are there whales who buy everything you can
           | put in front of their face, or a different demographic who
           | searches for something to buy and then changes their mind in
           | precisely 850 milliseconds?
           | 
            | I would accept that, like candy in the checkout lane, the
            | profit is small but worth more than the extra effort it
            | takes to put the offering there, and that the revenue is
            | small compared to the actual stuff people need to buy.
            | But analytics suggesting that hundreds of people click a
            | link to your page and most close it faster than you can
            | read the headline just seem unbelievable.
        
             | im3w1l wrote:
              | Maybe it's people who know kind of what they want, but
              | not exactly. Let's say you want a vacuum cleaner: you
              | go to a few different websites and check about 20
              | models on each site. On one site, every time you check
              | a model you have to wait 10 seconds. You might give up
              | early on that site.
        
       | justinzollars wrote:
       | lol Jira is 24 MB.
        
       | aww_dang wrote:
       | I like to limit myself to around 100k for simple, interactive
       | pages.
        
       | toinbis wrote:
        | https://turboeshop.com - 76kb. https://gatsbyeshop.com/ -
        | 176kb (written in Next.js; ignore the "gatsby" in the domain
        | name). The outcome is 100% identical. Next.js is 45% slower
        | than vanilla, 2x the weight, and makes 3x more requests to
        | deliver the same content to the user (check it yourself:
        | https://ibb.co/Y0wzBrF). Let's pretend I'm a business owner.
        | If someone convinces me I should choose Next.js for my next
        | project, I promise to buy him/her coffee/tea/beer and send a
        | postcard (a paper one!:) on Christmas. Not intending to
        | start a flame war, just honestly curious and in the mood for
        | a small challenge:) In other words: vanilla vs hydrated
        | react/vue/svelte! Any opinions much appreciated. I have
        | strong opinions of my own as well, and promise to share them
        | in replies.
        
       | ravenstine wrote:
       | It would be useful to tag or categorize those links. I can't
       | imagine I'm the only one who won't click on some domain I don't
       | recognize that doesn't suggest its purpose.
        
       | yellowapple wrote:
       | I love it!
       | 
       | I feel like the 1MB limit is excessively generous, especially for
       | text-only pages. But maybe that's what makes it so damning when
       | pages fail to adhere to it. I know at least one website I
       | maintain fails it spectacularly (though in my defense it's
       | entirely because of that website being chock-full of photos, and
       | full-res ones at that; pages without those are well under that
       | 1MB mark), while other sites I've built consist entirely of pages
       | within a fraction of that limit.
       | 
       | It'd be interesting to impose a stricter limitation to the 1MB
       | Club: one where _all_ pages on a given site are within that
       | limit. This would disqualify Craigslist, for example (the listing
       | search pages blow that limit out of the water, and the listings
       | themselves sometimes do, too).
       | 
       | I also wonder how many sites 1mb.club would have to show on one
       | page before it, too, ends up disqualifying itself. Might be
       | worthwhile to start thinking about site categories sooner rather
       | than later if everyone and their mothers starts spamming that
       | GitHub issues page with sites (like I'm doing right now).
        
         | highmastdon wrote:
          | Don't forget that 1MB of JavaScript is also much heavier
          | on the client than a 1MB image.
        
           | charlesdaniels wrote:
            | At the risk of over-complicating things, perhaps there
            | could be limits per resource type. 10MB of images might
            | be reasonable (e.g. for a photojournal), but only 128KB
            | of JS, and 128KB for everything else. Something along
            | those lines.
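
That per-type budget idea can be sketched in a few lines of Python. The
byte thresholds below are just the illustrative numbers from the comment
above, not any established standard:

```python
# Hypothetical per-resource-type page-weight budgets, in bytes,
# using the illustrative numbers from the comment above.
BUDGETS = {
    "image": 10 * 1024 * 1024,  # 10MB of images, e.g. a photojournal
    "script": 128 * 1024,       # 128KB of JS
    "other": 128 * 1024,        # 128KB for everything else
}

def over_budget(resources):
    """resources: iterable of (type, size_in_bytes) pairs.
    Returns the sorted list of resource types that blow their budget."""
    totals = {}
    for rtype, size in resources:
        key = rtype if rtype in BUDGETS else "other"
        totals[key] = totals.get(key, 0) + size
    return sorted(t for t, total in totals.items() if total > BUDGETS[t])

# A page with modest images but a heavy JS bundle fails only on script:
page = [("image", 3_000_000), ("script", 200_000), ("css", 50_000)]
```

Here `over_budget(page)` flags only `"script"`: 200KB of JS exceeds the
128KB budget, while 3MB of images sits comfortably inside its 10MB one.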
        
             | SilasX wrote:
              | Yeah I was surprised they included pictures in the
              | limit at all -- I mean, sometimes, you _need_ those
              | pictures, and for them to load slower is less important
              | so long as you don't need them to navigate the page.
        
               | Aeolun wrote:
               | I'm not surprised. The whole point of it is that you can
               | reasonably load it over 3g.
               | 
               | If you have a few megs of images that never show up
               | because they take too long to load, there is no point.
        
               | phkahler wrote:
                | You don't really need full-size/full-resolution
                | images on a web page.
        
               | SilasX wrote:
               | Yeah true but even with small images, you can hit that
               | cap quickly.
        
               | phponpcp wrote:
                | You do for full-width retina images on 1440p+
                | monitors.
        
               | SulfurHexaFluri wrote:
               | You people are why some sites have ugly blurry
               | logos/images on my 4k screen
        
           | Lerc wrote:
           | Indeed, but you also get more bang for your downloaded buck.
           | 
           | My toy project https://k8.fingswotidun.com/static/ide/?gist=a
           | d96329670965dc...
           | 
           | Gives you an emulator, an 8-bit AVR Assembler, and an IDE for
           | just over 500k transferred. Almost all of it JavaScript.
           | 
           | Using math.js is by far the heaviest part but at least your
           | Asm already knows things like solarMass and planckConstant
           | :-). CodeMirror comes in second heaviest but for user-
           | experience-per-byte you're always going to be streets ahead
           | of a gif.
        
             | szhu wrote:
             | True, but clients can opt to not load (or lazy-load) images
             | without too much adverse effect. JS-heavy sites often
             | completely break without JS.
        
         | divbzero wrote:
         | Agreed. 1MB seems more reasonable as a per-site threshold. Or,
         | alternatively, somewhere between 10kB and 100kB as a per-page
         | threshold.
         | 
         | For context, 1MB is the same order of magnitude as the original
          | _Doom_, which was about 2.4MB in size. [1]
         | 
         | [1]: https://www.wired.com/2016/04/average-webpage-now-size-
         | origi...
        
         | CarelessExpert wrote:
         | > I feel like the 1MB limit is excessively generous, especially
         | for text-only pages.
         | 
          | No kidding. I just checked and the average text-only page
          | on my blog is well under 100kb. Even the image-heavy front
          | page is under 1MB...
        
       | [deleted]
        
       | billfruit wrote:
       | Nice to know of CNN lite, had never heard of it before.
        
         | DanielleMolloy wrote:
         | Indeed. Also, much less of an attention sink, and lynx-
         | friendly. Are there more news websites like this?
        
           | Shared404 wrote:
           | NPR has one on the list. I don't know about any others.
        
           | dublinben wrote:
           | This site has a list with several others:
           | https://greycoder.com/a-list-of-text-only-new-sites/
        
         | subprotocol wrote:
         | Wow yeah, this made my day! It's nice not waiting 10 seconds
         | for a video I don't want to watch to auto start playing (just
         | so I can stop it).
        
       | commonturtle wrote:
       | This seems to be a common sentiment among developers today but
       | honestly bloat doesn't bother me too much. I do want good
       | performance from my websites, but with fast internet connections,
       | who cares if it is 1KB or 1MB?
       | 
       | Also, if you can develop a website for half the cost with twice
       | the size, isn't that the right thing to do? Developer time is the
       | expensive resource, and bandwidth is cheap.
        
         | droobles wrote:
         | A fair assessment for an MVP in the US, but you fail to scale
            | globally if your site is bloated, as not all countries and
         | territories have the access to data at low cost the US has.
         | Definitely worth it to optimize and reduce bloat after
         | establishing your product's place in the market (or develop
         | from the get-go with reducing bloat in mind).
        
           | bityard wrote:
           | > have the access to data at low cost the US has
           | 
            | I don't know where you heard that; the US has some of the
           | most expensive and slowest Internet access in the developed
           | world.
           | 
           | https://www.numbeo.com/cost-of-
           | living/country_price_rankings...
           | 
           | https://en.wikipedia.org/wiki/List_of_countries_by_Internet_.
           | ..
        
         | bityard wrote:
         | 1. Not everyone in the world has a fast internet connection, or
         | even a reliable one. Smaller pages have a better chance of
         | making it through congestion or physical layer problems.
         | 
         | 2. Even when speed is not an issue, many people are billed by
         | the megabyte. My phone's data plan for example. I almost never
         | use it for anything more than email and a handful of very
         | trusted sites.
        
       | dynamite-ready wrote:
       | Would this page qualify? - https://quixical.com/welcome
        
         | noahtallen wrote:
         | What did they do to the scrolling!
        
       | _threads wrote:
       | YES
        
       | e12e wrote:
       | Thanks, especially for:
       | 
       | https://john-doe.neocities.org/
       | 
       | I'm just looking at modern front end web again (we're using
       | react, ant - and I've stockholmed myself into considering react
       | with tailwind as a "lightweight" alternative... Nice to be
       | reminded about truly simple html+css).
        
         | ezluckyfree wrote:
         | ah I like this one, thanks!
        
       | arendtio wrote:
       | 1MB seems to be a bit much...
       | 
       | A few years ago I wrote a little blog software which is based
       | completely on JS functionality (including i18n and page
       | transition animations) and doesn't work without it. So much so
        | that no search engine ever finds the posts (my mistake; done
        | right, that should not be a problem nowadays).
       | 
       | And yet, loading a page is less than 1 MB. In fact, loading the
       | HTML and JS only, the page would fit in 100kb. The Webfonts are
       | responsible for most of the traffic.
        
       | 1-6 wrote:
       | Reminds me of a dollar store.
        
       | ChrisMarshallNY wrote:
       | 1MB.
       | 
       | When I started writing Web sites (late '90s), we'd get spanked
       | for 50K pages.
       | 
        | I'm not sure it's possible anymore, with a lot of the CSS
        | and JS libraries, let alone the content.
       | 
       | But back then, we didn't have content delivery networks. CDNs
       | make a _huge_ difference.
        
         | hinkley wrote:
         | I used what was probably the first 1MB page on the internet to
         | discover why realloc() was not a smart choice for progressive
         | parsing of content. I showed the problem to the lead dev and he
         | cut the load time by a factor of ten in as many hours. NCSA had
         | a web page where you could list your web server and a couple of
         | sentences about it. Which was great until Netscape came out and
         | everyone was setting up web servers.
         | 
         | This was back when Lycos was news to most people and Alta Vista
         | was just launching. It was a poor man's Yahoo. I don't think it
         | was a coincidence that the end of its usefulness came so close
         | to the birth of better alternatives. The ungainliness of the
         | status quo is usually what goads someone into trying something
         | new.
        
         | Uehreka wrote:
         | We also had super grainy images, no fonts (besides the default
         | set) and no videos. We can complain about bloat, but I don't
         | think comparisons to the 90's are totally appropriate, except
         | as a dunk.
        
           | ChrisMarshallNY wrote:
           | If you note, it wasn't actually a complaint, or a dunk.
           | 
           | It was an observation, and recounting some personal
           | experience.
           | 
            | What _is_ bloat, as far as I'm concerned, is hitting a page
           | with a 4K background video.
           | 
           | If you want a _real_ "I had to walk 2 miles, barefoot" story,
           | I can tell you about writing [C]WAP/WML sites (the old
           | "m.<yourdomain>" sites). I actually considered using XSLT to
           | extract WML from the regular site HTML.
        
             | scoot wrote:
             | Some of us are older than that
        
         | petercooper wrote:
         | I was going to say that I remember coding towards a 35-50KB
         | limit in the 90s. That'd get you a load time within 10 seconds
         | on a typical modem of the era, if you were lucky! :-) Couple
         | that with only being able to use hex colors at multiples of 00,
         | 33, 66, 99, cc, and ff, as well as having to cater for 640x480
         | resolutions - great times.
        
         | muyuu wrote:
         | the new internet sucks
         | 
         | priorities all over the place and major regression in the most
         | crucial things
        
       | jessmay wrote:
        | The first item on the list, sjmulder.nl/en/, takes up to 2
        | seconds to load, with the TLS handshake alone being over
        | 500ms.
        | 
        | Is this just some sort of self-hosting?
        
         | 101008 wrote:
          | Craigslist took me almost 8 seconds to load.
        
         | nivenkos wrote:
         | It loaded in under 200ms for me?
        
       | [deleted]
        
         | [deleted]
        
       | [deleted]
        
       | mattbk1 wrote:
       | What's a good framework for building lightweight websites? Or am
       | I better off just writing HTML?
        
         | deergomoo wrote:
         | Depends what sort of website really. If it's best described as
         | an app, Svelte seems good. Rather than having a big feature-
         | filled runtime it does most of its work at compile time, so
         | your bundle is almost always much smaller than something like
         | React or Vue.
         | 
          | If it's just static or almost-static content, I'd lean
          | more towards plain HTML, maybe with a little server
          | templating.
        
       | m90 wrote:
        | Stylistic nitpick: don't use the word "cancerous" for things
        | that don't make much of a difference to anything but your
        | "ethos". Real cancer kills people (and animals, too).
        | Bloated websites might be annoying, but you will survive
        | them even if you're the 1337est hacker around.
        
         | yellowapple wrote:
         | I can definitely imagine situations where being able to load a
         | page quickly is indeed a matter of life or death. This is
         | indeed part of the design rationale for the min css framework:
         | https://mincss.com/index.html
         | 
         | > Min is used by over 65,000 people in 195+ countries, from
         | North Korea to South Sudan to Mongolia to Somalia. Think your
         | software is critical? Try a webapp keeping you alive in a
         | warzone.
         | 
         | Now, whether or not this actually happens is a good question,
         | but it does seem like a plausible possibility.
        
         | ajsnigrutin wrote:
          | The problem with bloated websites is that the cumulative
          | time lost waiting for them to load is, across many users,
          | measured in many, many years. When pages like
          | google/facebook/... optimize load time by a few hundredths
          | of a second, that means years of real people's time saved.
        
           | m90 wrote:
           | I know it sucks waiting for a website to load, but it's not
           | like your life depends on it. You can still be an overall
           | happy person while doing so, time is not "lost".
           | 
           | Performance metrics are better when put into perspective.
        
       | mark242 wrote:
       | Here's the disconnect, which is why these "webpages need to be
       | slim!" sites tend to make me think they're greybeard nostalgia
       | for an internet that doesn't really exist anymore.
       | 
       | Your 1mb webpage is approximately 60ms worth of Disney+
       | streaming.
       | 
       | Your 1mb webpage is approximately 1.7s worth of Zoom chat.
       | 
       | Your 1mb webpage is approximately 1.9s worth of Tiktok video.
       | 
       | Unless you are specifically targeting low-bandwidth users, you
       | need to worry about product market fit long before you worry
       | about slimming your js bundles down.
        
         | CyberDildonics wrote:
         | You are not only conflating megabytes (MB) with megabits (mb)
         | but you are also conflating latency (a site loading) with
         | throughput (a buffered video playing).
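
The factor-of-eight confusion is easy to check. A quick sketch of the
transfer-time arithmetic (the bandwidth figures are illustrative, and
latency, which this deliberately ignores, is usually what actually makes
a page feel slow):

```python
def transfer_seconds(size_bytes: int, bandwidth_bits_per_s: float) -> float:
    """Pure transfer time for size_bytes over a link; ignores latency,
    TCP slow start, TLS handshakes, and everything else that dominates
    real page loads."""
    return size_bytes * 8 / bandwidth_bits_per_s

ONE_MEGABYTE = 1_000_000  # 1 MB = 8 megabits (Mb): the units differ 8x

fast = transfer_seconds(ONE_MEGABYTE, 20e6)  # 20 Mbit/s ADSL: 0.4 s
slow = transfer_seconds(ONE_MEGABYTE, 1e6)   # 1 Mbit/s mobile link: 8 s
```

Even the 0.4 s figure is pure throughput; round-trip latency comes on
top of it, which is the latency-vs-throughput point being made above.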
        
         | tomxor wrote:
         | > Unless you are specifically targeting low-bandwidth users
         | 
         | There aren't some tiny number of low bandwidth users with some
         | esoteric internet problem, there is a significant divide due to
         | technological and geographic reasons. Averages are very
         | misleading when the majority of people in cities connected to
         | various fiber end points keep getting crazier and crazier
         | speeds while 50% of the US is stuck on ADSL, with a theoretical
         | max of 20Mbit down - add geographical limitations and that's
         | often far lower due to line noise. Where I live in the UK it's
         | also just a matter of luck, I live in the middle of a major
         | city and yet in the 5 places I've lived over the last 10 years
         | the only option was ADSL, and never >6Mbit.
         | 
         | I haven't even started the argument about other countries with
         | poorer internet infrastructure.
         | 
          | When you assume being able to download at 1MiB/s is basic,
          | you make the internet suck for a huge chunk of the
          | population; they are not the majority, but _barely_.
        
           | KMnO4 wrote:
           | Even if you _are_ able to get palatable speeds, bandwidth
           | caps are an issue for some people.
           | 
            | For example, I'm stuck with 40GB/month in rural Ontario
            | (i.e., 5 minutes outside of a city), which means I share
            | roughly 1300MB per day with my household. I'm constantly
            | watching the bandwidth meter tick up in my menu bar.
        
       | globular-toast wrote:
       | Instead of being an absolute number it should be a ratio of
       | useful information/size, or a signal to noise ratio. It's quite
       | hard to calculate in general, but it could probably be done by
        | rendering the page through something like Firefox's reader
        | mode and taking that as the signal and the rest as the noise.
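
A crude sketch of that signal-to-noise metric, treating extracted
visible text as the "signal" and every other byte of the payload as
"noise". The stdlib HTML parser stands in for a real reader-mode
extractor here, so this is only an approximation:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects visible text, skipping <script> and <style> contents."""
    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth:
            self.chunks.append(data)

def signal_ratio(page: str) -> float:
    """Fraction of the page's characters that are readable text."""
    parser = TextExtractor()
    parser.feed(page)
    parser.close()
    signal = len("".join(parser.chunks).strip())
    return signal / len(page) if page else 0.0

lean = "<html><body><p>Ten kilobytes of prose.</p></body></html>"
bloated = lean + "<script>" + "x" * 500 + "</script>"
```

`signal_ratio(bloated)` comes out far below `signal_ratio(lean)`: the
same prose, drowned in script bytes.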
        
       | tomgp wrote:
       | In 2005 I had to fill in a form to request a page size exception
       | for our election results map on the BBC News site. The page
       | weight was going to be just a little under 400kb. At the time the
       | size considered acceptable for the whole page (images and all)
       | was 75kb.
        
         | scrooched_moose wrote:
         | kb or kB?
         | 
         | Assuming kB and dialup, which was 83% of the US at the time
         | (easiest stat to quickly find), that was a difference of about
         | 23s vs 121s load time.
         | 
         | Edit: Seems like the UK would have been about 50/50 around that
         | time according to this article:
         | https://www.independent.co.uk/news/business/news/broadband-o...
        
           | folmar wrote:
            | As far as my math goes, 75k(b/B) on dialup (usual speed
            | for the time being 56k) is either 1.3s or 10.6s. Either
            | was acceptable at the time, since browsers loaded text
            | first and rendered images as they came in.
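
Spelling out the arithmetic in the two replies above (assuming an
idealized 56kbit/s modem and ignoring handshakes, compression, and
protocol overhead):

```python
MODEM_BPS = 56_000  # a 56k dialup modem at its theoretical best

def dialup_seconds(size_bytes: int) -> float:
    """Pure transfer time for size_bytes over the modem."""
    return size_bytes * 8 / MODEM_BPS

# If the BBC's 75k limit meant kilobytes:
as_bytes = dialup_seconds(75 * 1000)   # ~10.7 s
# If it meant kilobits, divide the bit count by the line rate directly:
as_bits = (75 * 1000) / MODEM_BPS      # ~1.3 s
```

Both figures land close to the 10.6s/1.3s estimate above; the small gap
is just rounding of the assumed line rate.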
        
       | mnsolo wrote:
       | Using the phrase "cancerous growth" is really poor form. Do you
       | really think a 300ms page load resembles someone fighting for
       | their life?
        
       | codingdave wrote:
       | I checked out a Wordpress site I run, and it comes in at 374K,
       | when I have put zero effort into optimizing it. I feel like 100K
       | would be a far more meaningful bar to set for this kind of
       | collection.
        
       | mp3k wrote:
       | https://bundlephobia.com/
        
       | doctorbuttes wrote:
       | Sure, many sites are bloated, but I can't help but feel the
       | sentiment is somewhat dramatic. Cancerous growth? "Client-side
       | queries!?"
       | 
       | Performance is just one piece of the UX puzzle. Users don't
       | really care about bundle size... They care if your site provides
       | value in a reasonable amount of time.
        
       | pippy wrote:
       | The culture change around the importance of performance is
       | frustrating, new developers all seem to love their massive
       | JavaScript libraries. Giving a trivial function such as search
       | box on a webpage or some animation results in a convoluted 10MB
       | JavaScript monster from hell. If they run into anything remotely
       | challenging they'll simply add a library to do it for them
       | creating more bloat. It used to be the opposite. When asked 'why
       | not just some jQuery and DOM'? they'll reply it's too hard.
       | React, webpack, node etc are great tools but they need to used
       | carefully.
        
         | Drew_ wrote:
         | > Giving a trivial function such as search box on a webpage or
         | some animation results
         | 
         | If you want great UI and UX, these are anything but trivial.
         | This is why taking an off the shelf solution is preferable to
         | reinventing the wheel in the pursuit of marginal improvements
         | of speed (not mentioning 3rd party solutions are often still
         | faster than what you could whip up in a day or 2).
        
         | that_guy_iain wrote:
          | For most apps that I've used, jQuery and the DOM would be
          | able to do it, just nowhere near as well as React or Vue.
          | We're building web apps that are highly responsive with
          | lots of UI effects. Even when it seems like there isn't
          | much FE code going on, you'll normally find that the
          | majority of it is now done with preloading and XHR calls
          | to give a smoother feeling. As long as a site fully loads
          | within 2 seconds, I'm ok with that.
        
         | SulfurHexaFluri wrote:
         | React/vue/etc are not massive libraries. The problem usually
         | comes from websites loading 5 versions of jquery and then 30 ad
         | network scripts.
         | 
          | And yes, doing anything mildly responsive with jQuery is
          | too hard. And not in the "I don't know how to do this"
          | way. It's just a huge waste of time. What takes 100 lines
          | of jQuery code can be done for free with JSX. Usually
          | jQuery sites settle for suboptimal UX because it's too
          | hard to do the things React sites do.
        
       | ssijak wrote:
        | In an age of 4K streaming, being frugal with 1MB websites for
       | sake of it is kind of missing the point. Yes, we should care how
       | we build stuff, and we should not use resources like there is no
       | tomorrow, but on HN there is a sentiment that basically says that
       | everything on the web after year ~2010 is terrible, hard to use
       | and wasteful.
        
       | prague60 wrote:
        | My question for these kinds of complaints is: how are we
        | supposed to implement the same features and value that are
        | provided currently, with fewer downloads?
       | 
       | Could we have Facebook without a huge download? Is this saying
       | the web should not have these features? Are we intended to
       | download each new application separately on a PC like we do
       | mobile?
       | 
        | I see so many complaints about "the bloated web", but no
        | solution that lets us keep the tremendous value the internet
        | has given to the world. It just comes across as short-sighted
        | and reactionary.
        
       | gcblkjaidfj wrote:
        | Would be better if ranked by size-to-content ratio instead of
        | size alone.
        
       | javierbyte wrote:
       | I hate how people have to use "cancerous" everywhere. Out of
       | respect please use a different word.
        
       | crazygringo wrote:
       | I don't mind highlighting and curating small sites for fun,
       | that's neat.
       | 
       | But calling larger sites a "cancerous growth on the web" just
       | feels immature to me. Everything is just cost/benefit. There's no
       | need to bring a black-and-white fanatical exaggerated mindset to
       | it.
       | 
       | You can celebrate something you like without having to make the
       | alternative a _disease_ , sheesh.
        
         | Aeolun wrote:
          | With all due respect, modern news websites are a disease, and
          | I find it pretty hard to disagree with the moniker "cancerous
          | growth", since the behavior is spreading and being normalized
          | by these websites.
         | 
         | There is zero benefit to me in loading 20mb of garbage just so
         | I can read 10kb of text.
        
           | SulfurHexaFluri wrote:
            | Yes, news sites are the worst of the bunch, but then we get
            | people complaining about full web apps using JS when there
            | is no real alternative.
        
           | fizwhiz wrote:
           | hyperbole much?
        
       | phkahler wrote:
       | Can't they automate this?
        
       | bnchrch wrote:
       | Ahh let the bikeshedding continue.
       | 
       | In these situations all I want to say is "Who cares".
       | 
       | Optimization matters when it actually solves a problem, before
       | that it's just wasted effort.
       | 
       | I don't hear the general public yelling from the rooftops "The
       | web is too slow! Developers are building too fast! I wish we
       | could go backwards!"
        
         | stickfigure wrote:
         | I hear my users constantly yelling for more features. Not one
         | has complained about the page load (5mb uncompressed, plus a
         | delay for auth to initialize).
         | 
         | I'll keep listening to my users.
        
           | thesuitonym wrote:
            | Because anyone who complains is given this treatment. We
            | don't complain to everybody; we just bounce.
        
             | Shared404 wrote:
             | > We don't complain to everybody
             | 
              | Except on forums. We complain there :P
        
             | stickfigure wrote:
             | If you're offering a service that is so uncompelling or so
             | undifferentiated that the page load speed is a driving
              | factor, then you should certainly focus on page load speed.
             | 
             | On the other hand, Gmail regularly takes 10s+ to load and
             | yet most users are glued to it.
             | 
             | If you're a SaaS business, your marketing page is probably
             | not your app. The people using your app care about
             | usability, and page load speed is rarely the driving
             | factor. Especially with SPAs.
        
               | JKCalhoun wrote:
               | I hear you saying that if my service is compelling
               | enough, people will tolerate the horrendous page loads.
        
               | somurzakov wrote:
               | hahaha, brilliant
        
               | stickfigure wrote:
               | That is absolutely correct. Ask yourself: Will my
               | customers be happier with additional features X, Y, and
               | Z, or a faster page load?
               | 
               | Better yet, ask your actual customers.
        
               | im3w1l wrote:
               | Maybe the initial load is 10+s, but clicking into and out
               | of emails is fast, and that is what matters. If that took
               | 10s it would be unusable.
        
         | userbinator wrote:
         | To be frank, that's probably because you're not listening.
         | 
         | YouTube redesign, Twitter redesign, and one other huge site I
         | can't remember at the moment all had plenty of complaints about
         | how much slower --- and less featured --- they were. The
         | average user doesn't know a static page has been replaced with
         | a bloated SPA, but can sure feel the difference.
         | 
         | The web has declined for sure, and it's precisely because of
         | ignorant developers with attitudes like yours.
        
         | Shared404 wrote:
         | > In these situations all I want to say is "Who cares".
         | 
         | This list has a second hidden benefit: Typically, I find that
         | those who have very lightweight pages have more interesting
         | things to say.
         | 
         | That being said, that's not a hard and fast rule, as I both
         | have a lightweight page, and not much that's interesting to
         | say.
        
           | geodel wrote:
           | Agreed. Not a general rule but as a rule of thumb it works
           | very well.
        
           | ffpip wrote:
            | This is also true for how the site 'looks'. The less flashy
            | the site, the more interesting the content might be to
            | read. They don't worry about SEO/clicks.
        
             | SulfurHexaFluri wrote:
             | The best content is always on some boomers custom made
             | forum software from 2009 that they use to post in depth
             | articles about obscure topics.
        
         | jamil7 wrote:
          | That's because you're not listening to them.
        
         | mokkol wrote:
         | I always think, the one feature everybody wants is speed. The
         | same page but faster is always better.
        
         | geodel wrote:
          | If the general public even knew who to complain to, they
          | would surely have yelled loudly. For now I hear non-tech
          | people say "Oh, our internet is slow even though we pay a lot
          | of money."
         | 
          | Meanwhile valley bros keep helpfully reminding me to upgrade
          | my computer to something reasonably 2020-ish.
        
           | hinkley wrote:
           | I wonder if the answer is better visualization tools? We have
           | fairly detailed dashboards that we as developers can look at,
           | but there is no "check engine light" version for people who
           | just want things to "go".
        
           | xmprt wrote:
           | Alternatively, they say things like "I hate how slow ads
           | are." It's not the ads that are slow. It's all the tracking
           | and libraries built on top of it.
        
           | krapp wrote:
           | The general public likes to do things on the web other than
           | reading minimalist hypertext. They like real-time chat and
           | video and images and playing games in the browser. They would
           | rather be able to do the things they like faster and more
           | efficiently than abandon the last 20 years of technological
           | progress.
        
             | userbinator wrote:
             | I had real-time _video_ chat and played browser games 15
             | years ago, on hardware a fraction of the power of that
             | today.
             | 
             | It seems a lot of the newer generation just doesn't realise
             | what was already possible before, and is only infatuated
             | with inefficient reinventions.
        
             | donbrae wrote:
             | Those things are all great and probably far better
             | optimised than your average should-be-static site serving
             | bloat.
        
             | yellowapple wrote:
             | The overwhelmingly vast majority of things on the World
             | Wide Web are things that are perfectly feasible - and
             | indeed pretty darn _trivial_ - with minimalist hypertext.
        
         | aresant wrote:
         | "Optimization matters when it actually solves a problem" -
         | agree.
         | 
          | With the major disclaimer that if your website serves ANY
          | commercial utility for you personally or your business,
          | website speed is hugely important, with a clear cause/effect
          | impact on engagement, conversion rate, bounce rate, etc.
        
         | noobermin wrote:
         | I don't know about you but I see a lot of complaints on twitter
         | about the new stories feature. That isn't a complaint about
         | things being slow but it does seem to suggest users want to go
         | backwards wrt new features that are unneeded.
        
         | jen729w wrote:
         | Well also, my site is built with Gatsby and comes in at <1MB
         | (just) but I bet if you told people it was Gatsby you'd get the
         | stink-eye. "Ugh, dirty framework, what's wrong with no-JS?"
         | 
         | I mean never mind that my internal page links load in <50ms
         | thanks to prefetching and all the other smart stuff that Gatsby
         | does.
        
         | tclancy wrote:
         | Not everyone within earshot of your roof is "the general
         | public". It matters to people on lower-bandwidth connections.
         | It matters in terms of carbon footprints.
         | 
         | And just as someone who grew up horrified if individual pages
         | got over 100kb (or whatever the company rule was), the idea of
         | not caring at all about "page" weight is how my old company
         | wound up passing 20+ megs of JSON around just because it was a
         | little easier.
        
           | Aeolun wrote:
           | In my company I don't care so much about load time since
           | everyone will only load the app once per day, we know and
           | control the network speeds, and anyway, they're paid to sit
           | there waiting for it to load.
           | 
           | The same thing is not true for the general public.
        
         | baxtr wrote:
         | Well I agree and disagree :) as always it depends. I think it's
         | clear that especially on slow mobile connections people will
         | close a site quickly if they don't see results and this in turn
          | yields lower sales, for example. Also, Google factors speed
          | into its algorithm, so you will lose traffic if you don't
          | optimize. At the same time, people at home with broadband
          | couldn't care less.
         | 
          | That's actually a core reason for slow pages: people in
          | offices with large monitors and huge bandwidth develop those
          | sites.
        
         | wcerfgba wrote:
         | What about the problem of CO2 / MB ?
        
           | yellowapple wrote:
           | I feel like it'd be more productive to reduce the CO2 per MB
           | (e.g. using a 100% renewable-energy-powered hosting provider,
           | pressuring ISPs to use renewables, pressuring power companies
           | to use renewables, etc.) than to reduce the MB in this case.
        
           | abraae wrote:
           | I dunno. That time the user spends waiting for the sluggish
           | website downloading gigabytes of html is time they maybe
           | don't spend reaching for that apple that was air freighted
           | across the world to reach their fridge.
           | 
           | Flippant but I doubt there's any clear correlation between
           | reducing the size of a web page and reducing overall CO2
           | emissions.
        
         | dheera wrote:
         | The public _does_ yell  "The web is too slow!"
         | 
         | Except they usually yell "My phone is too slow" when the
         | problem is actually the web that got 3X bloated since they
         | bought their phone.
        
       | Yajirobe wrote:
       | Can you guys stop spamming the guy's github issues page?
        
         | rozap wrote:
          | They give folks on HN an opportunity for self-promotion;
          | what do you think will happen :)
        
           | Shared404 wrote:
           | Guilty as charged :P
           | 
           | They quite literally did ask for it though.
        
         | [deleted]
        
         | dshacker wrote:
         | He asked to add pages there :)
        
       | sheerun wrote:
       | It should be something like 100kb compressed, not 1MB
       | uncompressed
        
       | mintone wrote:
       | I believe this is placing emphasis on entirely the wrong thing.
       | 
       | Page size does not matter. Render time/time 'til functional does.
       | 
        | Stripe.com is 1.2MB fully loaded but is pretty much all
        | non-blocking, so it feels as fast as a very small site. By the
        | time you've scrolled to the bottom of the homepage it's loaded
        | >2MB. They convert pretty well, and my experience is that
        | Stripe knows
       | what they are doing better than most. So what should we be
       | optimising for? Total page size? or a lightning fast experience
       | with non-blocking content which can be as big as we require?
       | 
       | IMO developers need to optimise for building _fast_ sites, not
       | _light_ ones for the sake of being light.
        
         | Aeolun wrote:
         | Maybe we should optimize for both? Small page size is
         | irrelevant if the loaded js renders the page unuseable for
         | seconds, but the same thing is true on slower connections if
         | you have to load tons of data in the first place.
        
         | tonymet wrote:
          | 1.2MB will only be fast on fast hardware and a fast network.
          | You're not accounting for p90 load times. Targeting small
          | page size is a better indicator of audience-wide performance.
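The p90 being argued for is just a percentile over real users' load times rather than the number you see on an office connection. A minimal sketch using the nearest-rank method (function name and sample data are mine):

```javascript
// Nearest-rank percentile: sort the samples and take the value at
// rank ceil(p/100 * n). p90 means 90% of users saw this time or less.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

const loadTimesMs = [400, 550, 600, 700, 800, 950, 1200, 1900, 2600, 4100];
console.log(percentile(loadTimesMs, 90)); // 2600
```

Note how the median here (about 800ms) looks healthy while the p90 is over 2.5s; that tail is exactly what small page sizes help with.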
        
         | pketh wrote:
         | yar, maybe it should've been about js size, ideally closer to
         | the 200kb-of-js club
        
           | yellowapple wrote:
           | I'd be down for that, but without the "k" :)
        
       | CogentHedgehog wrote:
       | In some ways it's interesting to think that a modern website is
       | larger than most operating systems used to be.
       | 
       | I can understand why (frameworks, rich assets, etc) but it puts
       | us in an interesting place.
       | 
       | I wonder what the web will look like in 2030?
        
         | discreteevent wrote:
         | Indeed, I remember running QNX from a floppy disk (1.44MB)
         | complete with windowing system and utilities.
        
           | CogentHedgehog wrote:
           | Yeah, that sounds about right. I remember MS DOS used to fit
           | on a 720k floppy -- back when they were literally floppy...
        
           | CyberDildonics wrote:
           | Reinventing QNX will be state of the art for decades to come.
        
       | bochoh wrote:
       | I tweeted about text.npr.org this morning. Excellent resource.
        
       | poorman wrote:
       | Hopefully the creator has plans to automate the Github issue
       | verification process.
        
       | manjana wrote:
        | Would love to see a 1MB club for websites where there was a
        | requirement that they make use of FA, CSS & JS all at once. Or
        | perhaps a 1MB SPA club.
       | 
        | This is interesting though. Wonder what the average newly
        | launched website sits at these days.
        
       | divbzero wrote:
       | I like the use of CSS variables for the gray bars:
       | width: calc(var(--data-size)/1000 * 100%);
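That one declaration does all the work: each row presumably carries its page size (in kB) in a `--data-size` custom property, and the bar width is that size as a fraction of the 1000 kB cap. The same arithmetic in plain JS, for illustration (function name is mine):

```javascript
// Mirror of the quoted CSS: width = (size-in-kB / 1000) * 100%,
// so a 250 kB page gets a bar spanning 25% of the row.
function barWidthPercent(sizeKb) {
  return (sizeKb / 1000) * 100;
}

console.log(barWidthPercent(250)); // 25
```

Doing it with a custom property and `calc()` keeps the per-row data in the markup and the presentation logic in one CSS rule, with no script needed at all.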
        
       | vegannet wrote:
       | 1MB in size feels like an attempt to define "performant" without
       | considering... performance. A speed test from an "average"
       | connection that also measures first paint etc. would be much more
       | meaningful: not all megabytes are equal.
       | 
       | I love the idea, just needs a better measurement.
        
         | SulfurHexaFluri wrote:
          | Pretty easy to write a one-line JS loop that makes the page
          | slow as shit.
        
         | alexchamberlain wrote:
         | I think that's a little unfair tbh; there are plenty of places
         | that already do that, but don't take account of the memory
         | footprints on the clients machine etc.
         | 
         | I do agree that 1MB is arbitrary, but it's the right kind of
         | arbitrary IMHO.
        
       | hinkley wrote:
       | There is a problem I've wrestled with at pretty much every webapp
       | job I've had.
       | 
       | People decide it's more economical to add more stuff to existing
        | pages than it is to introduce a new page. As time progresses
        | the number of pages grows logarithmically with the amount of
        | functionality.
       | 
       | This hits on two fronts. One is pressure from the non technical
       | people, who want the cost of new stuff to be O(1). "This is so
       | simple. Why do you have to make it a big deal?" Makes sense the
       | first time. Makes no sense at all the 58th time.
       | 
       | The other one is that we don't have an easy way to carve up
       | functionality and move it around efficiently. Webpack and friends
       | try to solve this problem, but it leaves a lot to be desired.
       | It's easier logistically to have the same giant couple of JS
       | bundles where almost every bit of logic is available everywhere.
       | Which makes it more tempting to expose all functionality
       | everywhere.
       | 
       | I discovered at one point that the quite effective strategy I
       | used for maintaining mature wikis is essentially applying the
       | B-Tree algorithm by hand, with depth weighted by recency
       | (eventually all outdated documentation is relegated to the
       | leaves, or updated in order to avoid relegation).
       | 
       | I have a hunch you could do much the same for websites, but I
       | haven't worked out how to do it in a spirit of refactoring
       | (stability is the first word, but progress has the final say).
       | 
       | There's a related issue with Conway's law, where the links on the
       | main page or in the footer are always proportional to the number
       | of divisions in the organization, and the number of initiatives
       | currently in progress or recently finished. This vastly increases
       | the decision tree even when you avoid having a giant landing
       | page. I've only seen one solution to this and that is to treat
       | these links as what they are: ads. Ads are either short lived or
       | get rotated frequently to avoid overwhelming the audience.
        
       | hbbio wrote:
        | Had to upvote, since I'm currently building a quite complex web
        | application (using Svelte) for which I'm trying to restrict
        | myself to 1MB of JS uncompressed (including SVG paths).
       | 
        | To do this, there are almost no runtime dependencies - while I
        | "lost" a few months building things I could have borrowed from
        | bigger libraries (from parts of rxjs to much bigger component
        | libraries/design systems), development speed is now excellent!
       | 
       | Currently sitting at 750kb and close to feature completion so I'm
       | pretty sure that it will be achieved.
        
       | ilaksh wrote:
       | I previously suggested something a little bit along these lines.
       | But what I thought would be better would be a distributed
       | database and a browser extension that would measure load times
       | (or maybe page size) and stability and automatically update it.
        
       | susam wrote:
       | How accurate is this list? I see it mentions that visiting
       | https://danluu.com/ downloads only 9.1 kB but when I actually
       | visit this website with Google Chrome, in the Developer Tools'
       | Network tab, I see that a total of 21.7 kB is transferred.
        | Name          Status  Type       Size
        | ------------  ------  ---------  --------
        | danluu.com    200     document   2.5 kB
        | analytics.js  200     script     18.9 kB
        | collect       200     xhr        67 B
        | favicon.ico   404     text/html  266 B
       | 
       | Shameless plug: I maintain a lightweight home page myself at
       | https://susam.in/ and I try to keep it as lean as possible
       | without compromising on some custom formatting I care about. So
       | far I have got the transfer size down to less than 6 kB.
        
         | mattbk1 wrote:
         | Your page looks very nice, I almost want to duplicate my own
         | website into a format that's just a list of links and titles.
        
         | e12e wrote:
          | Nice page. I only miss Firefox reader mode on these simple
          | pages (for my preferred font size, and dark mode). I wonder if
         | it's possible to hint to the browser that it should be
         | available, even if the mark-up is simple?
         | 
         | Ah, the late 90s and early 00s when we still had user.css (in a
         | meaningful sense).
        
           | divbzero wrote:
           | There are vestiges of _user.css_ like the _Style sheet_
           | option in Safari. [1] I do wish they were better supported,
           | and that user styles automatically had priority over webpage
           | styles.
           | 
            | [1]: https://support.apple.com/guide/safari/advanced-ibrw1075/mac
        
         | keyle wrote:
         | 5c hint: a little bit of padding and max-width on your content
         | would make it far more readable!
        
         | ffpip wrote:
         | Are you not using an ad blocker? danluu.com has google-
         | analytics, which is a very large script.
         | 
         | Please use uBlock Origin.
        
           | RussianCow wrote:
           | I don't think an ad blocker should be assumed when
           | calculating the size of a web page.
        
             | tshaddox wrote:
             | It would depend on precisely what you're measuring. If
             | you're concerned about page loading speeds, then resources
             | that don't block rendering or functionality (like, I
             | assume, a proper GA integration) might not be worth
             | considering. But if you're worried about, say, bandwidth
             | usage, then of course you'd want to count everything.
        
               | RussianCow wrote:
               | This is true, although _any_ resources loaded introduce
               | CPU overhead, which can be non-negligible, especially on
               | mobile.
        
           | susam wrote:
           | I do use uBlock Origin in general. However, I disabled it
           | while verifying if the download size claimed in this post is
           | accurate.
        
       | joshmanders wrote:
       | Shout out to https://1mb.co/
        
       | boplicity wrote:
       | Here's a common example, from a FAANG company: 72mb and 367
       | requests to display around 2,000 characters of text. Maybe double
       | that if you count the "hyperlinks."
       | 
       | Single page applications are horrible, and oh-so-common.
       | 
       | https://imgur.com/a/v2rud7T
        
         | scoot wrote:
         | 72 millibits doesn't sound feasible for even the most
         | minimalist of websites. ;)
         | 
         | Seriously though, this isn't just a "grammar" thing. If you
         | don't know the difference between millibits and megabytes, you
         | probably shouldn't post on the subject of page size (or
         | anything else relating to bandwidth consumption / network
         | performance).
        
         | deergomoo wrote:
         | They really don't have to be. Unfortunately a lot of them are
         | just built really badly, by people with fast hardware and
         | gigabit networks.
         | 
         | With good code splitting, which is supported basically out of
         | the box by Webpack and all the big front-end frameworks, I
         | would argue a well-written SPA transmits less data in the long
         | run than a fully server-rendered app. The initial payload is
         | bigger by a few hundred KB sure, but after that:
         | 
         | - Like server rendered pages, you still only load pages on
         | demand, thanks to code splitting
         | 
         | - Unlike server-rendered pages, you can cache everything except
         | the data that changes, because it's JavaScript not HTML.
         | 
         | Yes you're still making a request for fresh data on every page
         | load, but while the code is cached you need only download the
         | raw data itself. You'd have to be really clumsy to send a JSON
         | payload bigger than a full HTML document containing a rendered
         | representation of the same data.
         | 
         | Of course, this only really applies to apps. Static or
         | infrequently changed content, you're better off either server
         | rendering or just serving static files.
         | 
         | You can also make the argument that it would be more efficient
         | to serve mostly static HTML, with the bare minimum JavaScript
         | required to fetch new data, but ultimately everything is a
         | trade off :)
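The "load pages on demand" part usually boils down to a dynamic `import()` per route, which bundlers like Webpack split into separate chunks; the caching behavior can be sketched as a load-once helper (helper and names are mine, for illustration):

```javascript
// Load-once helper: the loader (in a real app something like
// () => import('./settings.js')) runs on the first call only;
// subsequent navigations reuse the cached result, so the user
// pays the download cost once per deploy.
function lazyOnce(loader) {
  let cached;
  return () => (cached ??= loader());
}

let loads = 0;
const loadSettings = lazyOnce(() => { loads += 1; return 'settings-chunk'; });
loadSettings();
loadSettings();
console.log(loads); // 1
```

Pair this with content-hashed chunk filenames and long `Cache-Control` headers and repeat visits download only the JSON, which is the comparison being made above.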
        
       | gargarplex wrote:
       | There might be an opportunity to write minimalist web clients for
       | popular services.
        
       | soheil wrote:
        | Interesting that a site that champions performance has an
        | ungodly favicon [0] that is 10x larger (15KB) than the page
        | itself.
       | 
       | [0] https://1mb.club/favicon.ico
        
         | KMnO4 wrote:
         | I did a quick Google search and discovered that SVG favicons
         | are valid in most browsers, so for fun I reduced it by 95% down
         | to 886 bytes:
         | 
         | https://output.jsbin.com/yudonidujo
        
           | nekopa wrote:
           | And it looks much nicer zoomed in (as in your link) ;)
        
           | e12e wrote:
           | Nicely done. Manual or automatic conversion?
        
         | shepherdjerred wrote:
         | What kind of an internet connection are you on that 15kb is
         | significant?
         | 
         | Performance is important, but if your site is clocking in at
         | 17kb then you're probably doing okay.
        
           | addedlovely wrote:
           | May seem pedantic but those little gains do add up. A missing
           | favicon also has a surprising impact.
        
           | bombela wrote:
            | But when you start this way, you quickly end up with the
            | bloat that is modern software, because each extra bit of
            | waste seems justified at every step along the way.
        
       | gfxgirl wrote:
       | I'm pretty happy with my bloated mess. Glad youtube and Spotify
       | aren't limited to 1meg
       | 
       | Glad Google Maps lets me view the world in 3D and with street
       | views and make custom maps, and read reviews with photos, etc.
       | Don't care that it takes more than 1meg.
       | 
       | Love Apple's beautiful pages like the Macbook Air page at 14meg
        
         | suyash wrote:
         | what if those experiences could be provided with a smaller
         | footprint? Try to think bigger than yourself. It would no doubt
         | be faster but also would consume less bandwidth, less storage,
         | less compute = less power meaning good for the environment.
        
         | Falling3 wrote:
          | I don't think anyone is going to deny that there are sites
          | that provide huge utility in exchange for their size. No one
          | is suggesting Google Maps should clock in at less than a MB.
         | But the point is that there's a continuing trend of websites'
         | sizes growing much faster than their functionality - often with
         | the former to the detriment of the latter.
        
       | tomcooks wrote:
        | Please accept email suggestions; I'd like to avoid needing
        | GitHub.
        
       | forgotmypw17 wrote:
       | What's an easy way to measure total resources downloaded for a
       | page?
        
         | asenna wrote:
         | On Firefox you can open up the console (right click -> Inspect
         | Element), head to the "Network" tab and check the very bottom
         | of that panel. It should give you details on how much time it
         | took and the size of your request. You can refresh your page
         | and it refreshes the data at the bottom as well.
         | 
         | I believe it should be similar on Chrome and Safari as well,
         | just locate the Network tab in the browser console.
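If you'd rather script it than eyeball the Network tab, the Resource Timing API exposes per-resource transfer sizes. A sketch (function name is mine; note `transferSize` reads 0 for cache hits and for cross-origin resources that don't send `Timing-Allow-Origin`):

```javascript
// Sum bytes over the wire for a loaded page. In a browser console
// you would call:
//   totalTransferredKb(performance.getEntriesByType('resource'))
function totalTransferredKb(entries) {
  const bytes = entries.reduce((sum, e) => sum + (e.transferSize || 0), 0);
  return bytes / 1024;
}

// Stand-in entries shaped like PerformanceResourceTiming objects:
const fake = [{ transferSize: 2560 }, { transferSize: 512 }, {}];
console.log(totalTransferredKb(fake)); // 3
```

Add the `navigation` entry's own `transferSize` if you also want the HTML document itself counted.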
        
       | imheretolearn wrote:
       | I was once asked to debug an extremely slowly loading page. In
       | the first 5 minutes it was evident that the client was doing
       | things it wasn't supposed to do. It was downloading 300MB worth
        | of resources to show the webpage. Incorrect implementations and
        | inefficient use of libraries are the reason we're seeing
        | bloated websites all over the web.
        
         | vagrantJin wrote:
          | > It was downloading 300MB worth of resources to show the
          | webpage. Incorrect implementations and inefficient use of
          | libraries are the reason we're seeing bloated websites all
          | over the web.
         | 
         | In situations like that, it's right and proper to ask who built
         | the site. Then shake your head with absolute contempt.
        
           | grecy wrote:
            | and follow them around from job to job, giving yourself
            | endless contracting work cleaning up their messes!
        
       | johnghanks wrote:
       | > These things are a cancerous growth on the web.
       | 
       | lmfao a little dramatic. JS heavy sites _can_ be great
       | experiences if they are developed properly just like <1MB sites
       | can be garbage if they are poorly developed.
        
       | syassami wrote:
       | My personal fav: 148kb, $530B company.
       | https://www.berkshirehathaway.com
        
         | [deleted]
        
         | nekopa wrote:
         | And it has an ad on it. (Boy do I miss the days of Google's
         | text only ads)
        
         | [deleted]
        
         | [deleted]
        
         | anamexis wrote:
         | Including an ad, no less.
        
         | frogpelt wrote:
         | That's not completely fair since they are an investment holding
         | company.
         | 
         | How about you average their subsidiary web pages? Start with
         | DQ.com (Dairy Queen)
        
         | conductr wrote:
          | Seems impressive, but considering most holding companies have
          | no website at all, maybe it's bloated.
        
         | bak3y wrote:
         | Copyright (c) 1978-2020 Berkshire Hathaway Inc.
         | 
         | Can it be assumed that this same website has been in place
         | since 1978? Obviously not exactly like it is now, but probably
         | not far off.
        
           | vitus wrote:
           | I'd assume not, given that Tim Berners-Lee hadn't invented
           | HTTP yet.
           | 
           | But, the website looks rather similar to how it did back in
           | 2001 (with the recognizable two-column list of bullet
           | points):
           | 
           | https://web.archive.org/web/20011129002047/https://www.berks.
           | ..
        
           | zerocrates wrote:
           | HTML being from the early 90s or so, I wanna say "no."
           | 
           | I wonder what the logic for the 1978 date is. It's hard for
           | me to believe they had any reasonably connected predecessor
           | of this in 1978.
        
             | ciabattabread wrote:
             | It's the copyright for the entire website. There's a letter
             | to shareholders written March 14, 1978 -
             | https://www.berkshirehathaway.com/letters/1977.html
        
         | criddell wrote:
         | I assume you went through the proper channels to get written
         | permission to make that link.
         | 
         | From the legal disclaimer at the bottom:
         | 
         | > linking to this website without written permission is
         | prohibited
        
           | wackro wrote:
           | How enforceable is this? And what's the likelihood anyone
           | would try to enforce it?
        
             | gohbgl wrote:
             | Probably as enforceable as having "looking at me is
             | prohibited" written on your shirt while walking down the
             | street.
        
         | tenebrisalietum wrote:
         | Want to tell them you like it? Better buy a stamp.
         | 
         | "If you have any comments about our WEB page, you can write us
         | at the address shown above. However, due to the limited number
         | of personnel in our corporate office, we are unable to provide
         | a direct response."
         | 
         | I see they have a link to "Berkshire Activewear". Now that's a
         | much much more heavyweight page.
        
         | jyriand wrote:
         | Just for your information, from their legal page: "...linking
         | to this website without written permission is prohibited." Not
         | sure what is meant by this, but I found it funny.
        
           | vulcan01 wrote:
           | IANAL
           | 
           | What I think they mean by this is that you shouldn't link to
           | resources on their website to make it seem like they endorse
           | your (product, website, whatever).
        
           | bbarnett wrote:
           | Imagine if we all wrote?
        
           | libria wrote:
           | That same sentence starts (paraphrasing) "[Copying or giving
           | out of any stuff gotten on this domain ... is banned]" so
           | merely quoting the legal terms is also "illegal".
        
       | ergwwrt wrote:
       | On a curious note, what's Hacker News' score? Seems à la carte
       | minimal, no?
        
         | jedimastert wrote:
         | Chrome Devtools says a very respectable 62.6 kB
        
           | petercooper wrote:
            | Curious: even with cache disabled I get somewhat less
            | (13.2KB), but I assume that's what came over the wire, and
            | it expands after it's inflated(?)
        
       | tslocum wrote:
       | Project Gemini was created for a similar reason.
       | 
       | https://gemini.circumlunar.space
        
       | idlewords wrote:
       | The submissions page for this project is 1.8 MB, which seems kind
       | of against the spirit of the endeavor.
        
       | _benj wrote:
       | > These things are a cancerous growth on the web.
       | 
       | I'm a huge fan of minimalism, using gopher:// frequently.
       | 
       | But blanket statements like this seem a little too much.
       | 
       | I'm personally aware of some educational tools that non-techy
       | educators are able to use with just a username and password. A
       | few years ago, delivering tools like that meant a physical CD,
       | some key, an installation wizard... and if you had a Mac? Tough
       | luck.
       | 
       | Let's be conscious about bandwidth usage and bloat, but we're no
       | longer in the '90s and, even if imperfect, progress has happened
       | in tech.
        
       | huangc10 wrote:
       | 1MB? no no, tres commas club please!
        
       | the__prestige wrote:
       | I suspect this site's page will soon not qualify to be a part of
       | this club :)
        
       | harikb wrote:
       | Two suggestions:
       | 
       | 1. Weight sites by some popularity count. Otherwise, does anyone
       | care if some random small site generates a 0.1k response just to
       | get to the top?
       | 
       | 2. Do you plan to continuously verify that the download size is
       | still accurate?
        
         | stabbles wrote:
         | 1. maybe weight websites by alexa rank? 2. some selenium github
         | action would pretty much automate everything
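
A sketch of that re-verification step (hypothetical helper names, not the club's actual tooling; a real job would also need retries and a realistic User-Agent):

```python
# Re-check each member site's transfer size and flag pages that have
# grown past the club's 1 MB cutoff. transfer_size() counts the raw
# (possibly gzipped) bytes the server sends for the page body.
import urllib.request

LIMIT = 1_000_000  # the club's 1 MB cutoff

def transfer_size(url: str) -> int:
    """Bytes of the response body as sent, compressed if the server gzips."""
    req = urllib.request.Request(url, headers={"Accept-Encoding": "gzip"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return len(resp.read())  # urllib does not auto-decompress gzip

def over_limit(sizes: dict[str, int], limit: int = LIMIT) -> list[str]:
    """Given {url: transfer_bytes}, return the URLs that no longer qualify."""
    return sorted(url for url, n in sizes.items() if n > limit)
```

`over_limit` is the part a scheduled CI job would run on the collected measurements; note this only measures the single document, not the subresources a full browser page load would pull in.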
        
       | slmjkdbtl wrote:
       | My theory is that more stuff almost directly means a worse user
       | experience. More stuff always requires more design knowledge,
       | which is an extremely scarce resource.
        
       | colordrops wrote:
       | Where's the Nginx landing page?
        
       | gitgud wrote:
       | I guess the leaderboard is missing "packet.city", an entire
       | website in a single TCP packet, weighing in at 1.2kb
       | 
       | Site: http://packet.city/
       | 
       | Source: https://github.com/diracdeltas/FastestWebsiteEver
        
         | chacha2 wrote:
         | Takes the same 1 second to load that every website takes.
        
       | m-chrzan wrote:
       | Nice! I've gotten so jaded with how big and slow most of the
       | modern web is, that randomly stumbling upon a tiny site prompted
       | me to write about the joy of it on my blog (a much less than 1MB
       | site itself): https://m-chrzan.xyz/blog/small-big-sites.html
        
       | alphachloride wrote:
       | Existing search engines should add a query term to filter by page
       | size.
        
       | SimeVidas wrote:
       | 100 KB of images is not the same as 100 KB of JavaScript. You
       | could technically still create a website that is a 1 KB HTML
       | document that loads almost 1 MB of JavaScript, which is
       | definitely too much.
        
         | donohoe wrote:
         | I think they are looking at whatever loads in that initial Page
         | Load.
        
         | tutfbhuf wrote:
          | How about WebAssembly? What's your size limit for that?
        
           | vaccinator wrote:
           | Should be 0kb for that
        
       | seanwilson wrote:
       | Here's my example of a non-trivial product landing page weighing
       | in at 0.3MB transferred:
       | 
       | https://www.checkbot.io/
       | 
       | Includes analytics, chat client, big product screenshot, theming
       | and payment integration, so you can still do a lot well within
       | 1MB.
        
       | TACIXAT wrote:
       | I've always had a project in mind for a bespoke search engine
       | that only indexes pages smaller than 65kb.
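
One hedged sketch of that indexing filter (hypothetical, not an actual crawler design): read at most one byte past the 65 kB cap, so an oversized page is rejected without ever being downloaded in full:

```python
# Accept a page for the index only if its body fits in 65 kB. Reading
# limit + 1 bytes is enough to decide: if that extra byte arrives, the
# page is too big, and the rest of the body is never fetched.
import io

MAX_BYTES = 65 * 1024  # the 65kb cap from the comment above

def fits_limit(stream: io.BufferedIOBase, limit: int = MAX_BYTES) -> bool:
    """True iff the stream's remaining content is at most `limit` bytes."""
    return len(stream.read(limit + 1)) <= limit
```

Any buffered byte stream works here, e.g. the response object from `urllib.request.urlopen` or an `io.BytesIO` in tests.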
        
         | conception wrote:
          | There is a search engine that tries to do something like
          | this... it searches mainly for blogs and whatnot, and I'm so
          | sorry I can't remember the name of it. :(
          | 
          | Ironically, you can probably Google around and find it.
        
           | stonogo wrote:
           | Is it https://wiby.me? If not, I'd love to know more!
        
             | rfrey wrote:
             | That is a brilliant site, thank you.
        
             | gen220 wrote:
             | The surprise me feature has yielded some really good links!
             | 
             | "Doom as a tool for system administration" [1], "What can
             | you do with a slide rule?" [2], and of course some vintage
             | 90s personal pages... some of which have been actively
             | maintained to this day (!) [3]
             | 
             | edit: that last page has some incredibly-detailed
             | information about the various cars he's owned [4]
             | 
             | [1]: https://www.cs.unm.edu/~dlchao/flake/doom/
             | 
             | [2]: http://www.math.utah.edu/~pa/sliderules/
             | 
             | [3]: http://www.billswebspace.com/
             | 
             | [4]: http://www.billswebspace.com/124Abarth.html
        
               | Shared404 wrote:
               | [1] Is actually a really cool idea. I thought it would
               | just be a joke, but the article makes a good case for it,
               | at least as a POC for a UI.
        
               | timoth wrote:
               | Also reworked for newer contexts:
               | 
               | https://github.com/storax/kubedoom
               | 
               | https://github.com/gideonred/dockerdoomd
        
         | nivenkos wrote:
         | Or a news aggregator that only accepts links to pages with the
         | same restriction.
        
       | nazgulnarsil wrote:
       | I want to blacklist sites over a certain threshold. Is there an
       | easy way to do that?
        
       | maxk42 wrote:
       | This makes me happy. My latest project (unreleased) has its
       | entire front page - including CSS, Javascript, and images -
       | served in a single request with an _unminified_ response payload
       | of just over 4k.
       | 
       | Part of what made it possible was the Mu CSS Framework:
       | https://bafs.github.io/mu/
        
       ___________________________________________________________________
       (page generated 2020-11-19 23:00 UTC)