[HN Gopher] Intel plans immersion lab to chill its power-hungry chips
       ___________________________________________________________________
        
       Intel plans immersion lab to chill its power-hungry chips
        
       Author : rntn
       Score  : 41 points
       Date   : 2022-05-20 15:28 UTC (2 days ago)
        
 (HTM) web link (www.theregister.com)
 (TXT) w3m dump (www.theregister.com)
        
       | azinman2 wrote:
       | Shouldn't they instead figure out how to run cooler and with less
       | power in general? That's where it seems everyone else is going...
        
         | booi wrote:
         | I need... more powahhh...
         | 
          | No, but seriously: why don't they build a 128-core Atom
          | server? That's really all anybody wants. I don't need the
          | fastest, most immersed CPU ever, just a bunch of decent ones
          | at 30 W or less.
        
           | astrange wrote:
            | That'd be a pretty unbalanced architecture - it may use
            | less rack space than 128 servers, but with only one
            | server's worth of IO, network, and PSUs, it'd be less
            | reliable and maybe not even faster.
        
           | matja wrote:
            | That's AMD's plan with EPYC Zen 4 "Bergamo", which goes up
            | to 128 Zen 4c cores.
        
           | tyrfing wrote:
            | 128 cores at 30 watts isn't something I've seen anyone
            | planning. What's more likely is 128 cores at 300-400+
            | watts, with scaling from there pushing both power and
            | core counts up. Bergamo (AMD), Graviton (AWS), Sierra
            | Forest (Intel), and Grace (NVIDIA) are all going for
            | that.
           | 
           | 30 watts is low power mobile and "edge" compute.
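            | 
            | A quick bit of arithmetic on why (the power budgets below
            | are my own illustrative numbers, not vendor figures):
            | 
            |     # socket power spread across 128 cores, rough sketch
            |     for total_w in (30, 300, 400):
            |         per_core = total_w / 128
            |         print(f"{total_w} W -> {per_core:.2f} W per core")
            |     # 30 W leaves ~0.23 W/core with nothing for cache,
            |     # fabric, or IO; 300-400 W is ~2.3-3.1 W/core, which
            |     # is where dense server designs actually land.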
        
           | tadfisher wrote:
           | That was Larrabee/Xeon Phi, was it not? Discontinued for lack
           | of sales.
        
             | sseagull wrote:
             | That's basically what I thought. However, IIRC they were
             | marketed more towards high-performance computing (with
              | AVX-512).
             | 
              | They were an uncomfortable middle ground, though,
              | between normal CPUs and GPUs. My benchmarks showed that
              | there wasn't much of an advantage over 20-ish normal
              | Xeon cores (for my HPC workloads).
             | 
             | (Memory is a little fuzzy - that was 4-6 years ago).
        
               | glowingly wrote:
                | While not exactly what you are looking for, Intel Snow
                | Ridge is a continuation of their Atom-based (alongside
                | their Core-based) line of networking processors, with
                | 8-24 cores.
               | 
               | https://www.intel.com/content/www/us/en/products/details/
               | pro...
               | 
                | Though, unless you 100% need x86, there is the Ampere
                | Altra, a 128-core Neoverse N1 chip.
        
         | icegreentea2 wrote:
          | Intel (and everyone else) does work on improving compute
          | efficiency.
         | 
         | But as the article points out, if 40% of your DC's power
         | consumption is in cooling, then you'd be foolish not to target
         | that slice.
         | 
          | Liquid and immersion cooling allow higher power density,
          | which, all things being equal (I know there's a lot of heavy
          | lifting being done by that...), will be preferred. Why
          | distribute your components over a rack if you could fit them
          | into a single 4U chassis? Why distribute your components
          | over an aisle if you could fit them into a rack?
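          | 
          | A rough sketch of why that slice matters (the load numbers
          | are illustrative, not from the article):
          | 
          |     # If cooling is 40% of facility power and IT the rest,
          |     # every IT watt costs ~0.67 W of cooling (PUE ~1.67).
          |     it_load_mw = 10.0            # hypothetical IT load
          |     cooling_fraction = 0.40      # of total facility power
          |     total_mw = it_load_mw / (1 - cooling_fraction)
          |     cooling_mw = total_mw - it_load_mw
          |     print(f"total {total_mw:.1f} MW, "
          |           f"cooling {cooling_mw:.1f} MW")
          |     # Halving the cooling load with immersion would free
          |     # ~3.3 MW here -- room for a third more IT capacity.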
        
           | lumost wrote:
            | The other advantage of power density is that it creates
            | stronger convection currents. Whereas data centers have
            | traditionally been actively cooled, it's not unreasonable
            | to imagine open-air DCs with air channels to support
            | convection.
        
             | walrus01 wrote:
             | it's sort of been done, though there are still a lot of
             | active fans to move the hot air.
             | 
             | https://www.google.com/search?channel=fs&client=ubuntu&q=ch
             | i...
        
         | voldacar wrote:
          | There is only so much computation you can do per watt on
          | current process nodes. To increase our computation per chip,
          | which is the goal, we need to increase the watts we consume
          | per chip. The goal should be to make more powerful
          | processors, not ones that do the same work with less power.
        
           | temac wrote:
            | > we need to increase the watts we consume per chip.
            | 
            | Not sure we need that, except in niches. At scale you
            | often want at least _some_ efficiency, which is certainly
            | not max TDP per core (the best efficiency point comes from
            | lower frequencies and more width, not the highest
            | frequency you can achieve). That leaves the question of
            | very large core counts, but at some point the silicon area
            | gets absurdly large too. And you can use multiple packages
            | _without_ sacrificing much overall system density, and
            | without departing from simpler designs with probably lower
            | TCO and pollution.
            | 
            | For small systems it depends, but you often have an even
            | more limited thermal budget, except again in niches where
            | you are ready to tolerate the drawbacks (power
            | requirements so high that it becomes hard to run even a
            | few machines on a standard electrical circuit in homes or
            | offices, high noise under load, and obviously a high TDP,
            | so a lot of heat). But you have fewer space constraints,
            | so if you really want absurd systems you already can build
            | them.
            | 
            | So do we really need to, e.g., double or triple the
            | (electrical/thermal) power density at scale? Do we need
            | 2 kW chips? Do we need to sacrifice efficiency now, and
            | increase nominal consumption now, instead of waiting just
            | a few years for node improvements? (I could even ask: do
            | we really need that much more processing power? Shouldn't
            | we start to optimise for total ecological cost instead? I
            | haven't tried any real analysis in that area, but _maybe_
            | that would mean slowing the growth of processing power...)
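            | 
            | The efficiency-point claim above follows from first-order
            | CMOS dynamic power, P ~ C*V^2*f, with supply voltage
            | roughly tracking frequency along the DVFS curve. A toy
            | model (the linear V-f relation is an assumption; real
            | curves come from vendor tables):
            | 
            |     # perf ~ f and power ~ V^2 * f; with V ~ f this gives
            |     # power ~ f^3, so perf/W ~ 1/f^2: lower clocks and
            |     # wider chips win on efficiency.
            |     for f in (0.5, 0.75, 1.0):    # normalized frequency
            |         v = f                     # crude V ~ f assumption
            |         power = v * v * f
            |         print(f"f={f:.2f} perf={f:.2f} "
            |               f"power={power:.3f} perf/W={f / power:.2f}")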
        
       | jabl wrote:
        | Can't say I'm terribly excited about another massive-scale
        | use of forever chemicals, aka fluorocarbons. Didn't Intel get
        | the memo? We're trying to reduce usage of these (see e.g. the
        | EU F-gas regulations), not increase it.
       | 
        | There's a coolant that's widely used, non-toxic,
        | environmentally benign, cheap, abundant and non-flammable.
        | Yeah, water. Sure, it's not a dielectric, so it needs some
        | engineering. But humanity has a decent track record of
        | building systems with pipes, hoses, heat exchangers and so
        | forth. The same can't be said for cleaning up Superfund
        | sites.
        
         | tremon wrote:
         | Fresh (potable) water is going to be a precious resource too,
         | and salt water is probably out of the question for its
         | corrosiveness. So I'm not sure replacing fluorocarbons with
         | water will be any better. Aren't there other liquids we can
         | explore?
        
         | antisthenes wrote:
         | > Same can't be said for cleaning up Superfund sites.
         | 
          | It's a pretty huge leap to go from "closed-loop CFC cooling
          | system for a computer" to "Superfund sites".
          | 
          | What am I missing? If we're building a system with pipes
          | and heat exchangers, why can't the coolant be a low-impact
          | CFC rather than water? It's not a system where you just
          | vent those cooling liquids into the atmosphere.
        
           | BenoitP wrote:
            | Some of these compounds are almost eternal. They are so
            | stable that no natural light frequency from the sun can
            | break them apart.
            | 
            | Sulfur hexafluoride, used in high-voltage circuit
            | breakers, has an atmospheric lifetime of about 3,200
            | years and a global warming potential 22,800 times that of
            | CO2.
            | 
            | So you don't want to vent them, and any accident or leak
            | can be considered a catastrophe.
            | 
            | That's just the physics of it: highly dielectric + stable
            | often makes for a big greenhouse-gas offender.
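            | 
            | To put the GWP figure in perspective (the leak size below
            | is a made-up example):
            | 
            |     # 22,800x CO2 means even small leaks dominate a
            |     # facility's carbon budget.
            |     gwp_sf6 = 22_800          # 100-year GWP vs CO2
            |     leak_kg = 5.0             # hypothetical leak size
            |     co2e_tonnes = leak_kg * gwp_sf6 / 1000
            |     print(f"{leak_kg} kg SF6 ~ {co2e_tonnes:.0f} t CO2e")
            |     # ~114 t CO2e from one 5 kg leak -- roughly the
            |     # annual tailpipe CO2 of two dozen cars.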
        
       | ece wrote:
        | What is the maximum performance % difference between
        | optimizing for perf/$ and perf/watt? Sure, there are
        | wafer-scale chips now, but the TDP for a phone is still ~5 W,
        | average laptops have gone from ~15 to ~30 W, and desktops
        | from ~300 to 600+ W. I suppose with Zen 4 there might
        | actually be an apples-to-apples comparison, barring ISA and
        | uncore differences. If ADL is anything to go by, I imagine
        | performance will be within ~15% of each other, but with a
        | ~30% price difference if you care about a more efficient and
        | cooler-running chip. Sure, the efficiency gains add up, but
        | so do the performance gains on the other side.
        
         | wmf wrote:
          | _What is the maximum performance % difference between
          | optimizing for perf/$ and perf/watt?_
         | 
         | Alder Lake and M1 Pro are good demonstrations of those two
         | approaches.
        
         | tlb wrote:
         | It can be a lot. Speculation requires executing operations
         | before you're sure they'll be needed, which can double or
         | triple power draw in order to increase instruction-level
         | parallelism. And all the machinery needed to enable
         | speculation, like branch predictors, draws power too.
         | 
         | This graph shows a factor of 100 between the highest-performing
         | and most-efficient systems:
         | https://en.wikipedia.org/wiki/Performance_per_watt#Examples
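          | 
          | A toy model of the speculation tradeoff (the specific
          | numbers are assumptions for illustration):
          | 
          |     # Speculation trades extra power for extra ILP. Say
          |     # deeper speculation doubles core power for a 40% gain:
          |     base_perf, base_power = 1.0, 1.0
          |     spec_perf, spec_power = 1.4, 2.0
          |     print("perf/W baseline:", base_perf / base_power)
          |     print("perf/W speculative:", spec_perf / spec_power)
          |     # Faster in absolute terms, but ~30% worse per watt;
          |     # stacking choices like this is how designs spread out
          |     # along the curve in the link above.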
        
       | shrubble wrote:
        | I remember reading with surprise that the Motorola 68040 CPU,
        | which was competing with the Intel 486, could have run at a
        | faster clock speed, at the cost of running hotter -- but
        | Motorola didn't want to specify the use of a heat sink. Seems
        | like quite a change!
        
       | aj7 wrote:
        | TikTok is covered with these videos. I saw one with an entire
        | server rack in a tank. https://www.tiktok.com/t/ZTdnJ8Yco/?k=1
        
         | LegitShady wrote:
          | LTT had their mineral oil PC videos... 7 years ago.
        
       | eternityforest wrote:
       | I wonder what other options there are for cooling.
       | 
        | What's wrong with water and cooling blocks? I'm sure they
        | could develop some quick-connect hardware, paired with
        | sensors and valves, so that any leaks could be auto-stopped.
        | 
        | You could build the connectors such that pressing and holding
        | the release button causes the whole loop to drain by suction,
        | for near-zero dripping as long as you wait a few seconds
        | first.
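        | 
        | A minimal sketch of that auto-stop loop (the sensor and valve
        | functions are hypothetical placeholders, not a real API):
        | 
        |     import time
        | 
        |     def read_leak_sensors(tick: int) -> list[bool]:
        |         """Hypothetical: True = moisture detected at a
        |         sense point. A leak is simulated at tick 5."""
        |         return [False, False, tick >= 5]
        | 
        |     def close_valves_and_drain() -> None:
        |         """Hypothetical: shut the quick-connect valves and
        |         start draining the loop by suction."""
        |         print("leak detected: loop isolated, draining")
        | 
        |     tick = 0
        |     while True:
        |         if any(read_leak_sensors(tick)):
        |             close_valves_and_drain()
        |             break
        |         time.sleep(0.1)  # 100 ms poll; a real system would
        |         tick += 1        # be interrupt-driven and redundant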
        
       | [deleted]
        
       | chroem- wrote:
        | This is why I think silicon-carbide-based chips are going to
        | be a huge deal: you can run them hot enough that you can
        | actually run a heat engine off the chip's waste heat to
        | recuperate some of the electricity you spent on computation.
        | Now if only I could figure out how to invest in companies
        | developing this technology...
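        | 
        | The ceiling on that recuperation is the Carnot efficiency,
        | 1 - T_cold/T_hot (the temperatures below are illustrative):
        | 
        |     # waste-heat recovery potential vs. die temperature
        |     t_cold = 300.0                  # ~27 C ambient, kelvin
        |     for t_hot_c in (85, 300, 500):  # Si-typical vs SiC-ish
        |         t_hot = t_hot_c + 273.15
        |         eta = 1 - t_cold / t_hot
        |         print(f"{t_hot_c} C hot side -> "
        |               f"Carnot limit {eta:.0%}")
        |     # ~16% at normal silicon temps, ~48% at 300 C, ~61% at
        |     # 500 C; real engines capture well under the Carnot limit.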
        
         | picture wrote:
          | Doesn't silicon carbide have unusual properties for making
          | digital circuits? SiC diodes have forward voltages higher
          | than regular Si, for example. Additionally, I'm not sure
          | you can get the same performance from SiC at super-high
          | temperatures, as even the metallization needs to be
          | specialized to handle the heat without too much resistive
          | voltage drop, etc.
        
         | AtlasBarfed wrote:
         | Wasn't the original research push for CVD diamonds for CPUs so
         | you could run them 10x hotter?
        
       ___________________________________________________________________
       (page generated 2022-05-22 23:00 UTC)