[HN Gopher] There's plenty of room at the Top: computer performa...
       ___________________________________________________________________
        
       There's plenty of room at the Top: computer performance after
       Moore's law
        
       Author : dsavant
       Score  : 40 points
       Date   : 2020-06-06 20:52 UTC (2 hours ago)
        
 (HTM) web link (www.techrepublic.com)
 (TXT) w3m dump (www.techrepublic.com)
        
       | Stratoscope wrote:
        | Not everyone may be aware of the title's background. It is a play
        | on the title of a 1959 Richard Feynman lecture, "There's Plenty of
       | Room at the Bottom: An Invitation to Enter a New Field of
       | Physics":
       | 
       | https://www.google.com/search?q=there's+plenty+of+room+at+th...
        
       | SlippyCrisco wrote:
       | Yeah, (generally) we got lazy for sure.
        
         | xellisx wrote:
         | I remember in the Windows 98 days, there was a program called
         | GuitarFX that blew my mind. IIRC, it was only a couple MB in
         | size, didn't use a lot of RAM, and was fairly low latency. I
          | think I was running it on a PIII 500MHz with 128MB of RAM.
         | 
         | Now we have apps that are like 40+ MB, eat RAM and CPU cycles
         | to do fairly simple things.
        
           | rayiner wrote:
            | What's mind-boggling is the change in relative demand. Back
            | in the 1990s, an IRC or AIM client was a little background
            | app that took a tiny fraction of the memory of even a 64MB
            | computer. But Slack takes a larger share of an 8GB computer.
        
           | Gibbon1 wrote:
            | There were functional spreadsheets that ran on an Apple ][
           | with 48k of RAM. Not to mention schematic capture and PCB
           | layout programs that would run 'fine' on a 486 with 4 MB of
           | RAM.
           | 
            | So yeah, it's disheartening to see programs with the same
            | functionality as an old 16-bit x86 program using hundreds of
            | MB of RAM.
        
             | lumost wrote:
              | I wonder how much of this is simply due to increasing
              | expectations around interface and portability. Icons,
              | borders, and fonts were tiny in the 90s and looked tiny.
              | Programs were built for particular platforms, often coded
              | against the native platform's APIs and linked to specific
              | binaries of a specific OS version. Now, with expectations
              | of high-res iconography, detailed and smooth fonts, as
              | well as the expectation that an OS update shouldn't affect
              | any of my apps, we inherently have bigger programs. The
              | alternative approach could be taken, but it wouldn't be
              | viewed as a mark of quality.
              | 
              | A game like Destiny weighs in at >100GB with its latest
              | updates due to map and image needs, and the Slack client
              | runs smoothly across five operating systems.
        
               | zozbot234 wrote:
               | "High-res" icons are still quite tiny. Expectations have
               | definitely grown wrt. the complexity of text rendering:
               | things like Unicode multi-language support, emojis and
               | high-quality typography are taken for granted nowadays,
               | and even something as simple as that would have required
               | fairly high-end hardware back in the early-to-mid 1990s.
        
       | Ericson2314 wrote:
       | Only at the scale of the big tech companies do the machine costs
       | start to seriously stack up against the labor costs. (Well, also
       | smaller machine learning shops, but they are in trouble anyways.)
        | And I don't think they face enough competitive pressure to be
       | forced to take the massive foundational and educational
       | investments they would need to pull this off.
       | 
       | It won't happen with this generation of the industry.
        
         | ineedasername wrote:
         | The tooling infrastructure of code development itself needs to
         | be optimized (not necessarily in speed terms) to provide coders
         | with the information & tools necessary to write faster code. It
          | seems like IDEs are currently optimized to facilitate rapid
         | code development, but not development of rapid code.
        
           | xellisx wrote:
           | I use PHPStorm and a plugin called EA Inspections that will
           | gladly point out optimizations. I deal with a lot of legacy
           | code written 10 years ago that I've been slowly optimizing
           | when possible. Of course we've been working to burn down the
           | legacy code base.
        
           | est31 wrote:
            | The issue is, most code runs so few times that it doesn't
            | really matter. You should only optimize when the code
            | consumes enough cloud resources that it's worth spending a
            | few hours or days of an engineer's salary to do it.
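            | 
            | As a back-of-the-envelope sketch (every number below is a
            | made-up assumption, purely for illustration):
            | 
            |     # Hypothetical break-even: when does optimizing pay off?
            |     engineer_cost_per_day = 800.0  # USD, assumed daily rate
            |     optimization_days = 3          # assumed effort
            |     monthly_cloud_bill = 2000.0    # USD, assumed spend
            |     expected_savings = 0.20        # assume a 20% reduction
            | 
            |     cost = engineer_cost_per_day * optimization_days
            |     saved = monthly_cloud_bill * expected_savings
            |     print(f"pays off after {cost / saved:.1f} months")
            |     # -> pays off after 6.0 months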
        
         | awinter-py wrote:
         | you may be right for consumer-facing functions where adding
         | more boxes is cheaper at small size
         | 
         | but slow internal tools are a direct tax on developer
         | productivity
         | 
         | this is an area where faster / smaller software does make a
         | difference
        
         | smabie wrote:
         | Most of the software that I use I would consider to be
          | unacceptably slow on my top-of-the-line workstation: emacs, web
          | browsers, intellij; the list goes on and on. Something needs
          | to be done, and fast. Pretty much everyone I know regularly
         | to be done, and fast. Pretty much everyone I know regularly
         | complains about how slow and crappy the software they use is,
         | especially my non-technical friends.
         | 
         | We need to start thinking about performance as a necessary and
         | basic feature, not some nice to have that can be worked on
          | later. A new program needs to be designed from the ground up to be
         | fast, from the language choice to the data structures.
         | 
         | And I think this shift is being made, except in web development
         | circles. Most new and upcoming languages list performance as a
         | basic feature: Rust, Julia, Nim, Zig, etc. Also, native
         | compilation is often listed as a feature, as there's been a
          | huge backlash against the VM cargo cult of the late '90s and
         | 2000s.
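          | 
          | To make the data-structure point concrete, here's a tiny
          | (hypothetical) Python illustration of the same lookup going
          | from a linear scan to a hash lookup just by picking the
          | right container:
          | 
          |     import time
          | 
          |     # Membership test: list scans linearly, set hashes in O(1).
          |     items_list = list(range(1_000_000))
          |     items_set = set(items_list)
          | 
          |     t0 = time.perf_counter()
          |     999_999 in items_list   # worst case: scans every element
          |     t1 = time.perf_counter()
          |     999_999 in items_set    # a single hash lookup
          |     t2 = time.perf_counter()
          | 
          |     print(f"list: {t1 - t0:.6f}s, set: {t2 - t1:.6f}s")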
        
           | throwaway_pdp09 wrote:
           | What's your problem with your emacs? Mine runs pretty snappy.
           | Sometimes you get a runaway process in it which _may_ be a
           | problem but if you have problems every time, something else
           | is wrong.
           | 
            | As for browsers, _shrug_, turn off JS and add a blocklist and
            | it'll run an order of magnitude faster.
        
             | rjsw wrote:
             | I didn't even feel that emacs was slow when I was running
             | it on a 25MHz 386DX. The only real change between then and
             | now is that I'm using a TrueType font with it.
        
           | [deleted]
        
       | abnry wrote:
       | I was always jealous of the stories about programmers in the 80s
        | who wrote games in insanely small amounts of RAM. I kind of want
        | that same kind of challenge to overcome in my lifetime.
        
         | Kye wrote:
         | It's all relative. There's nothing stopping you from trying to
         | make a complex 3D game that runs well on a low-end laptop.
         | Think of how much waste there is in games made with popular
         | tools like Unity. They trade performance for ease of
         | development. See what happens when you go the other way.
        
         | [deleted]
        
         | ikeyany wrote:
         | It's still very common in the embedded and IoT space. One such
         | challenge is low-power/low-overhead encryption.
        
         | karatestomp wrote:
         | Write apps for Garmin watches, especially supporting ones a
         | generation or two old and on the lower end. Welcome to the
         | world of removing code to shave a couple KB off the space the
         | code itself takes up in memory so you can use it for data.
         | 
         | Oh and for some reason their language and APIs are OO despite
         | the fairly tight memory limitations, and despite not having any
         | real need to do that (nor enough memory to really take
         | advantage of any OO features even if you wanted to). So you've
         | _really_ got to watch (haha) yourself.
        
         | realtalk_sp wrote:
         | Those challenges are everywhere in data science/engineering.
         | It's enormously expensive to do things shoddily in those
         | domains and the resource constraints are very much
         | bottlenecking what's practically achievable.
        
         | jcadam wrote:
         | Expectations for what software should be able to do were quite
         | a bit lower in the 80s too :)
         | 
         | I started learning to code as a kid in the 80s on 8-bit
         | machines. Seemed like there was rapid progress in the
         | availability of CPU power, RAM, etc., up until the early 2000s
         | and then things seemed to... slow down.
         | 
         | Which is about the time everybody started to focus hard on
         | horizontal scalability.
        
           | rayiner wrote:
           | Were they? My recollection is that software back then had
           | more features. Software today is neutered in comparison.
            | (Compare Google Docs to WordPerfect from the early 1990s.)
        
             | krisoft wrote:
              | Are you sure about that Google Docs comparison? It can be
              | edited at the same time by multiple people in a seamless
              | way. Did WordPerfect do that?
        
               | zozbot234 wrote:
               | Of course it did. You just had multiple people sitting at
               | the computer. :-P With a KVM-sharing switch if you wanted
               | to be fancy about it.
        
             | xellisx wrote:
              | Think about Windows and how easy it was to change
              | settings: you went to the Control Panel. Now in Windows 10
              | you have to dig around to find things. It's like they're
              | trying to hide stuff.
        
               | dehrmann wrote:
               | No, they're just maintaining two separate interfaces for
               | everything.
        
             | siberianbear wrote:
             | Google Docs isn't a great example. In WordPerfect's case,
             | WordPerfect _was_ the product and there was an incentive to
             | make it great.
             | 
             | Google Docs isn't the product. It's free. You're the
             | product. And it only needs to be good enough to keep
             | competitors from creating something so great that it
             | removes their first-mover advantage. It's clear that Google
             | has given up developing new features for it.
        
               | AlexCoventry wrote:
                | Grammar checking on Google Docs has gotten wicked smart.
        
               | dehrmann wrote:
               | A solid portion of Docs users are paid G Suite users.
        
           | dehrmann wrote:
           | By the mid 2000s, they just started adding more cores.
           | Machines kept getting more memory, albeit slower. SSDs have
           | been a game changer, and they really only got big in the last
           | 10 years.
           | 
           | We never went past 64 bits because it's enough bits to
           | address memory for the foreseeable future, and applications
           | needing more are rare. It's mostly cryptography.
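            | 
            | The quick arithmetic behind "enough for the foreseeable
            | future", as a rough sketch:
            | 
            |     # A 64-bit pointer can address 2**64 bytes.
            |     addressable = 2 ** 64
            |     print(addressable)            # 18446744073709551616
            |     print(addressable / 2 ** 60)  # 16.0 EiB, ~16M TiB
            |     # Even doubling installed RAM every couple of years,
            |     # commodity machines won't exhaust this for decades.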
        
       | harikb wrote:
        | Anecdotal - One of my favorite YouTube live-streaming personalities,
       | Jon Gjengset[1] is a Ph.D student at MIT CSAIL. His talks are
       | very detailed[4] and captivating[3]
       | 
       | [1] https://thesquareplanet.com/
       | 
       | Talks
       | 
       | [2] https://youtu.be/s19G6n0UjsM
       | 
       | [3] https://youtu.be/DnT-LUQgc7s
       | 
       | Rust Details(my favorites)
       | 
       | [4] https://youtu.be/rAl-9HwD858
        
       | monocasa wrote:
        | I always interpret the end of Moore's Law to mean that what was
        | once a curve of exponential growth has shown where its inflection
        | point is, and now we're on the other side of the S-curve. Yes,
        | there are still gains to be had, but they're harder and harder to
        | get in ways that aren't ameliorated by mere increases in funding.
        | There need to be changes in how we compute, and a rejection of the
        | idea that specialization is beaten out by the economies of scale
        | of generic solutions.
        
       | Exmoor wrote:
       | > "For tech giants like Google and Amazon, the huge scale of
       | their data centers means that even small improvements in software
       | performance can result in large financial returns,"
       | 
       | And really this is only for software that runs inside their
        | servers. The cost of software running on client machines is
       | just distributed to the user.
       | 
       | If google makes a change that makes Chrome run less efficiently
       | that means I'm paying for it with my time and electricity. If I
       | want it to run better I'm shelling out money from my own pocket
       | for new hardware.
        
       | sillysaurusx wrote:
       | A bunch of you seem interested in taking up the challenge: having
       | to write smarter software, wistful for the old days when it was
       | forced on you.
       | 
       | Turns out, you can! There's a big domain that needs your help,
       | today: deep learning.
       | 
       | A lot of the software is currently crude. For example, to train a
       | StyleGAN model, the official way to do it is to encode each photo
       | as uncompressed(!) RGB, resulting in a 10-20x size explosion.
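        | 
        | Back-of-the-envelope for that explosion (the image size and
        | JPEG size here are assumptions, just to show the ratio):
        | 
        |     # Assume a 1024x1024 training photo, 8 bits per channel.
        |     raw_bytes = 1024 * 1024 * 3   # uncompressed RGB, ~3 MB
        |     jpeg_bytes = 200 * 1024       # assume a ~200 KB JPEG
        |     print(f"{raw_bytes / jpeg_bytes:.1f}x")   # -> 15.4x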
       | 
        | There's plenty of room at the top, and never more so than in AI
       | software. Consider it! Every one of you can pivot to deep
       | learning, if you want to. There's really nothing special or
       | magical in terms of knowledge that you need to study. A lot of it
       | is just "Get these bits from point A to point B efficiently."
       | 
       | There's also room for beautiful tools. It reminds me a lot of
       | Javascript in 2008. I'm sure that will sound repugnant for a
       | majority of devs. But for a certain type of dev, you'll hear that
       | ringing noise of opportunity knocking.
        
         | zozbot234 wrote:
         | The highest-performance deep learning framework is most likely
         | Theano which still has to be ported to current Python 3.x.
          | There's quite a bit of low-hanging fruit to be found in fixing
          | things like that.
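          | 
          | Most of that porting work is mechanical; a hedged sketch of
          | the typical Python 2-to-3 fixes involved (illustrative only,
          | not actual Theano code):
          | 
          |     # Python 2 idioms: print "x:", x / xrange(n) / d.has_key(k)
          |     # Python 3 equivalents:
          |     x, n, d, k = 1.0, 3, {"a": 1}, "a"
          |     print("x:", x)        # print is a function now
          |     for i in range(n):    # xrange() was renamed to range()
          |         pass
          |     present = k in d      # dict.has_key() was removed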
        
           | AlexCoventry wrote:
           | Highest performance by what metric? I thought Theano has been
           | dead for a couple of years, at this point.
        
       | dang wrote:
       | This is a knockoff piece but it links to the paper:
       | https://drive.google.com/file/d/1PFCcP5WAE-64DlrRoZZE5NhC3cc...
       | 
       | The paper's URL is
       | https://science.sciencemag.org/content/368/6495/eaam9744 but the
       | text seems to be paywalled. We've put its title above.
        
       | philipkglass wrote:
       | I quote from the Conclusions section of the actual paper posted
       | by dang:
       | 
       | "As miniaturization wanes, the silicon-fabrication improvements
       | at the Bottom will no longer provide the predictable, broad-based
       | gains in computer performance that society has enjoyed for more
       | than 50 years. Performance-engineering of software, development
       | of algorithms, and hardware streamlining at the Top can continue
       | to make computer applications faster in the post-Moore era,
       | rivaling the gains accrued over many years by Moore's law. Unlike
       | the historical gains at the Bottom, however, the gains at the Top
       | will be opportunistic, uneven, sporadic, and subject to
       | diminishing returns as problems become better explored. But even
       | where opportunities exist, it may be hard to exploit them if the
       | necessary modifications to a component require compatibility with
       | other components. Big components can allow their owners to
       | capture the _economic advantages from performance gains_ at the
        | Top while minimizing external disruptions."
       | 
        | (My emphasis on "_economic advantages_ from performance gains.")
       | 
       | Databases are still getting faster. You don't get a faster DB by
       | buying a new server and then running a 10 year old version of SQL
       | Server or PostgreSQL on it.
       | 
       | Language runtimes are still getting faster. You don't get better
       | Java or JavaScript performance by running a 10 year old release
       | of the HotSpot JVM or V8.
       | 
       | 3D renderers, video encoders, maximum flow solvers (one example
       | examined in depth in the article): they're all getting faster
       | over time at producing the same outputs from the same inputs.
       | 
       | The key is _incentives_. There are probably people in your
       | organization who care a lot if database operations slow down 10x.
       | But practically nobody in your organization cares if Slack
       | responds to key presses 10x slower or uses 100x as much memory as
        | your favorite lightweight IRC client. (I'm trying to make a
       | neutral observation here. Looking at it from 10,000 feet, I get
       | annoyed when I see how much memory Slack takes on my own machine,
       | but that annoyance wouldn't crack the top 20 priorities for
       | things that would improve the productivity of the business I'm
       | in.)
       | 
       | The incentives problem is also why the Web seems slow. Browsers
       | too are still getting faster. I used to run multiple browsers for
       | testing and old Firefox and IE releases were actually _much
       | slower_ at rendering identical pages than current stable
        | releases. But pages are getting heavier over time. Mostly it's
       | not even a problem of people trying to make "too fancy" sites
       | that are applications-in-a-browser. It's mostly analytics and
       | advertising that makes everything painfully slow and battery-
       | draining. I run a web site that has had the same ad-free,
       | analytics-free, JS-light design since the early 2000s. It renders
       | faster than ever on modern browsers. It's not modern browsers
       | that are the problem -- it's the economic incentive to stuff
       | scripts in a page until it is just this side of unbearable.
       | 
       | For certain kinds of human-computer interaction, people will pay
       | a lot to reduce latency. Competitive gamers will pay, for
       | example. Sometimes people will invest a lot of effort to reduce
       | memory footprint -- either because they're shaving a penny off of
       | a million embedded devices or because they're bumping up against
       | the memory you can fit in a 10 million dollar cluster. But the
       | annoyances that dominate Wirth's Law discussions on HN -- why do
       | we have to use Electron apps?! -- are unlikely to get fixed
       | because few people are willing to pay for better.
        
       ___________________________________________________________________
       (page generated 2020-06-06 23:00 UTC)