[HN Gopher] IBM Creates First 2nm Chip
       ___________________________________________________________________
        
       IBM Creates First 2nm Chip
        
       Author : 0-_-0
       Score  : 491 points
       Date   : 2021-05-06 10:11 UTC (12 hours ago)
        
 (HTM) web link (www.anandtech.com)
 (TXT) w3m dump (www.anandtech.com)
        
       | andy_ppp wrote:
       | Interesting table showing millions of transistors per mm^2 there.
        | Does Intel's 10nm really have a higher transistor density than
        | TSMC's 7nm? This could mean some big things coming from AMD
        | moving to 5nm next year!
        
         | xxs wrote:
          | Yes, Intel's 10nm is similar to TSMC's 7nm. Unfortunately for
          | Intel, they have struggled to produce any non-low-power chips
          | on 10nm. That can be alleviated by not cramming the entire area
          | with transistors.
          | 
          | Overall, the "X" nm numbers mean very little.
        
           | colinmhayes wrote:
           | Intel 10nm desktop is releasing this fall.
        
             | xxs wrote:
             | > Intel 10nm desktop is releasing this fall.
             | 
             | Hence, "Intel they have struggled". For 5years.
             | 
             | We still don't know how it's going to turn out.
        
         | ksec wrote:
          | And it has been stated a gazillion times, except most media
          | doesn't like to report it, which leads to discussions that go
          | nowhere.
        
       | thombles wrote:
       | I'm glad we have an industry standard for fingernail size at
       | least.
        
         | danmur wrote:
         | Haha, that was my favourite part. Wish they'd followed up by
         | asking which fingernail and whose, when they last cut it etc.
        
         | guidopallemans wrote:
         | That must be equal to 1cm^2, right?
         | 
         | I guess you could cheat and use the area of a thumb nail, but
         | then you could take it a step further and use the area of a
         | horse's fingernail, which is at least 100 times larger...
        
           | mminer237 wrote:
            | IBM used ~1.5cm^2 (150mm^2)
        
       | blodkorv wrote:
        | How big is IBM's chip manufacturing business?
        | 
        | Power seems to be more and more out of fashion, and other chips
        | seem to perform way better. They can't be making many of those?
        
         | jhickok wrote:
         | IBM still makes a lot of money on Power, but at the moment it
          | is a managed decline. Every refresh cycle they get a nice big
         | revenue bump, and they have a pretty large Fortune 500/US
         | Defense base that won't be leaving Power any time soon.
         | 
         | Soon IBM will be the biggest consumer of Power as they continue
         | to move customers to Power on IBM Cloud.
         | https://www.ibm.com/cloud/power-virtual-server
        
         | agumonkey wrote:
         | Maybe with the relocalization of fabs they will have an
         | opportunity to grow that side again.
        
         | ilogik wrote:
         | if only there were a handy article on anandtech, linked to on
         | HN that could answer this question
        
       | slver wrote:
       | I'm still waiting for this new persistent memory IBM had. But
       | anyway, I'm happy to see this density is even possible. Exciting
       | times ahead.
        
       | bullen wrote:
       | I'm curious what the longevity at 100% will look like for these.
       | 
        | Also, peak performance has already been passed, even with
        | multicore, because of RAM latency.
       | 
       | I still think 14nm or larger will be coming out on top after a
       | few decades of intense use.
       | 
       | Time will tell!
        
         | formerly_proven wrote:
          | > Also, peak performance has already been passed, even with
          | multicore, because of RAM latency.
         | 
         | By that logic CPUs from today should be _slower_ than those
         | from 2005, since the real (uncached) memory latency has
         | increased (significantly) for most systems.
         | 
         | Though I suppose this means you could actually engineer
         | workloads that run slower on a modern 5 GHz CPU than on a
         | 2005-era Athlon 64.
        
           | erk__ wrote:
            | IBM already has a chip with a gigabyte of L3 cache, so maybe
           | that combats the latency in another way
        
             | formerly_proven wrote:
              | Reducing _average memory read/write latency_ (as well as
              | increasing bandwidth) is the point of big caches - but,
              | cet. par., bigger caches inherently have higher lookup
              | latency, deeper cache hierarchies increase the latency
              | further, and cache sizes are also limited by ISA concerns
              | (4K pages on x86 limit the size of VIPT caches). So a
              | system with a deep cache hierarchy and large caches will
              | perform better on average, but actually going out there and
              | getting bits from main memory will take longer. Not least
              | because the actual physical latency of DRAM itself improves
              | only very, very slowly, old systems stand up quite well
              | against modern systems in this particular metric.
        
         | vardump wrote:
         | > I'm curious what the longevity at 100% will look like for
         | these.
         | 
         | Indeed. At this feature size I'd already start to be worried
         | how long the chip will work.
        
           | MayeulC wrote:
           | Feature size seems to be 15nm, from the article. 2nm is
           | marketing speech for transistor density.
        
       | marsven_422 wrote:
        | IBM still has competent engineers... looks like IBM's HR
        | department has failed.
        
       | jbverschoor wrote:
       | That should also mean that the price per chip will decrease,
       | because you can fit more on a single wafer
        
         | reportingsjr wrote:
          | Historically this has been true, and decreasing cost is part
          | of Gordon Moore's original statement.
         | 
         | It has been true for Intel up until at least 2015, and I expect
         | that ignoring the recent supply chain weirdness it will remain
         | true for a while longer.
         | 
         | http://www.imf.org/~/media/Files/Conferences/2017-stats-foru...
        
         | Guthur wrote:
          | Not necessarily, and most likely no. The cost of cramming so
          | much into such a small area is not linear; it has been getting
          | more and more expensive. There are also factors like yield that
          | come into play and potentially drive the costs up again.
        
         | whoknowswhat11 wrote:
         | Absolutely not - these will be by far the most expensive
         | process nodes.
         | 
          | 28nm is probably where you want to be for cost - check out
          | what the Raspberry Pi and other lower-cost products use.
        
           | jbverschoor wrote:
           | I dunno. Besides the capex, a wafer is about 25k. That
           | doesn't really change, right?
        
           | akmittal wrote:
            | I don't think it would be that much cheaper to make 28nm now.
            | 2-3 generations behind mainstream should be cheapest. The Pi 4
            | was launched around 2 years back, so if a Pi 5 were launched
            | today they would most probably go for 12nm.
        
             | whoknowswhat11 wrote:
              | Depends on volume. Tape-out / design / validation / mask
              | costs and other non-recurring costs get high or very low
              | based on node size. At Apple's volume, the latest nodes may
              | be the cheapest per unit of performance. Most folks aren't
              | Apple.
        
           | reportingsjr wrote:
           | Every new processing node has been the most expensive
           | processing node ever! Transistor costs have continued their
           | trend of decreasing for every node. It's possible and likely
           | that there will be a blip for the next two or so years due to
           | the recent supply chain struggles, but I'm willing to bet
           | they will continue decreasing for a while after this shakes
           | out.
           | 
            | http://www.imf.org/~/media/Files/Conferences/2017-stats-foru...
        
         | FinanceAnon wrote:
         | TSMC is currently keeping prices constant or even increasing
         | them.
         | 
         | https://www.gizmochina.com/2021/03/31/tsmc-price-increase-ru...
        
         | st_goliath wrote:
          | Not necessarily, because this conversely also allows packing
          | additional complexity into the same die area.
         | 
         | Just look back over CPU development and costs over the last
         | couple decades. The latest gen stuff got ever more complex and
         | ever more capable but cost _roughly_ the same at the time it
         | hit the market.
        
       | birdyrooster wrote:
       | Can IBM scale this production up themselves?
        
         | piokoch wrote:
          | Rather not; they are not in the business of mass-producing
          | this kind of chip. They don't need to - they will probably own
          | a few really hot patents (and they deserve them).
          | 
          | The article mentions they will team up with Samsung and Intel.
          | If Intel managed to produce that chip at scale it would really
          | help them, given the fact that they are behind Apple/TSMC and
          | AMD.
        
           | bogwog wrote:
            | > The article mentions they will team up with Samsung and
            | Intel.
            | 
            | Wouldn't an Intel partnership be bad for their entire Power 9
            | business? I'm not very familiar with Power 9, but as I
            | understand it, it's marketed as an alternative to x86 in
            | server/cloud hardware.
           | 
           | This advancement along with Intel's stagnation could be a
           | good opportunity for IBM to gain more market share there.
           | Maybe Power 9 workstations will even become a thing most
           | people can afford.
        
             | salmo wrote:
             | Power9 is actually made at GlobalFoundries. And it's mostly
              | just in (fading) IBM i systems and one-off supercomputers.
             | Power 10 is supposedly on the horizon. There's a handful of
             | niche workstations, and "a partnership" with
             | Google/Rackspace, but I can't even see how to get one of
             | those via GCP or Rackspace.
             | 
             | It's an interesting chip, and I'm a big fan of CPU
             | diversity, but Power's time has come and gone. It's really
             | only supported internally at this point: IBM i, AIX,
             | RedHat, with Debian and (Open|Free)BSD as the outliers.
             | 
             | We'll just have ARM and RISC-V as non-x86 "real" CPUs, I
             | guess.
        
               | cjsplat wrote:
               | Power systems seem to still be available inside the
               | Google Cloud :
               | https://console.cloud.google.com/marketplace/product/ibm-
               | sg/...
               | 
               | https://www.google.com/search?q=google+cloud+IBM+power
               | has lots of stuff, including a video from 2 years back
               | about how to fire a Power VM up and use it :
                | https://www.youtube.com/watch?v=_0ml4AwewXo
               | 
               | (disclaimer : Ex-googler, involved once upon a time in
               | the project, no current knowledge of status).
        
               | spijdar wrote:
               | I think you're underselling it a bit. It's supported on
               | RHEL, SUSE, Fedora, Debian, Ubuntu, several more niche
               | distros besides, but aren't those the dominant OSes
               | outside Windows (for servers/workstations)? Unless I'm
               | misinterpreting the meaning of "outliers".
               | 
               | And support goes out pretty far. Just to rattle some
               | examples of public "support" for power8/9, Anaconda lists
               | power9 binaries under x86 for their installer (no mention
               | of ARM), PyPy has Power support, V8 has Power support
               | (the basis for Power ports of Chromium w/JIT'd
               | javascript), nvidia has Power drivers/CUDA runtimes on
               | their main download portal.
               | 
                | I'm not trying to paint it as largely supported or even a
                | great platform, but I think it might _now_ be better
                | supported than it ever has been. It's little-endian now,
                | software compatibility is almost 100% outside of JIT'd
                | things (and many JIT'd things have Power ports). If/when
                | it fails or fades away, it'll be because it wasn't _good
                | enough_ to displace the alternatives, rather than because
                | no one supported it or it was too difficult to
                | use/support.
        
             | MangoCoffee wrote:
              | Intel wants IDM 2.0. If Intel can convince IBM and produce
              | chips for IBM, then IDM 2.0 will have a chance, because
              | trust in the foundry business is important.
        
         | whoknowswhat11 wrote:
         | Sadly no.
         | 
         | Intel should pay them to help get stuff into production maybe?
        
       | IanCutress wrote:
        | I'm the author of the article, but I've also made a video on the
        | topic. It covers the same material, but some people prefer video
        | to text. It also includes a small segment of a relevant Jim
        | Keller talk on EUV.
       | 
       | https://www.youtube.com/watch?v=DZ0yfEnwipo
        
         | phkahler wrote:
         | I thought with EUV they could drop multi-patterning for 7nm and
          | maybe another node or two. Do you have info on whether each
         | company is doing double or quad patterning on any of these
         | nodes? That has a big impact on the cost of a chip and I would
         | think yield as well.
        
           | IanCutress wrote:
           | IBM said they're still doing single patterning EUV on 2nm.
        
             | ksec wrote:
              | Thanks, Dr. Did they mention how many layers? I remember
              | TSMC was aiming at something close to 30 for their 3nm.
              | 
              | And it is a little strange that IBM decided to name it 2nm
              | when it is much closer to 3nm.
        
               | IanCutress wrote:
               | I asked that exact question, they did not answer.
        
       | ramshanker wrote:
       | More soup for Wafer Scale Engine 3.
        
       | tromp wrote:
       | The table in the article suggests to me that instead of this
       | fictional "feature size", we could use the achievable transistor
       | area as a more meaningful measure of process scale.
       | 
       | IBM achieved 50B transistors in 150mm^2, for a per-transistor
       | area of 3000 nm^2.
       | 
       | TSMC's 5nm process (used by Apple's M1 chip) apparently achieves
       | a transistor area of 5837 nm^2, while Intel's 10nm is lagging at
       | roughly 10000 nm^2.
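        | 
        | In Python, for concreteness (a rough sketch; the densities are
        | the quoted/estimated figures from the article's table):
        | 
        |   NM2_PER_MM2 = 1e12  # 1 mm^2 = 10^12 nm^2
        |   
        |   def area_from_count(transistors, die_area_mm2):
        |       # per-transistor area in nm^2 from a count and die area
        |       return die_area_mm2 * NM2_PER_MM2 / transistors
        |   
        |   def area_from_density(mtr_per_mm2):
        |       # per-transistor area in nm^2 from a density in MTr/mm^2
        |       return NM2_PER_MM2 / (mtr_per_mm2 * 1e6)
        |   
        |   print(area_from_count(50e9, 150))  # IBM 2nm    -> 3000 nm^2
        |   print(area_from_density(171.30))   # TSMC 5nm   -> ~5837 nm^2
        |   print(area_from_density(100.76))   # Intel 10nm -> ~9925 nm^2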
        
         | fallingknife wrote:
         | So really Intel would be 100 nm, TSMC 77 nm, and IBM 55 nm,
         | then? Those node names really are fiction.
        
           | tromp wrote:
           | Yes, if you assume transistor area to be square and take the
           | side length. In defense of existing nomenclature though,
           | feature size originally denoted the size of the smallest
           | distinguishable feature _within_ a transistor, not the entire
           | transistor.
        
             | ta988 wrote:
              | And that's what is important because that's the actual
             | physical limit, for lithography and electron flow reasons.
        
           | hwillis wrote:
           | 50, 38.5, and 27.5 nm. The "transistor-y" part is the inner
           | bit, so you don't count the isolation distance between
           | transistors, which still makes up part of the area.
           | 
           | To be more specific, one of the things node names have
           | referred to is the M1 (metal 1) half-pitch, or half the
           | center-to-center distance between the metal traces which
           | connect directly (well, almost directly) between transistors.
           | Originally, the closest width you could space those wires was
           | the same as the thinnest width you could make them, since
           | it's basically just a photo negative. If you take the half
           | pitch, that's the width of a wire.
           | 
           | The width of that wire was the thinnest electrode you could
           | make between the n and p regions of silicon, so that was your
           | channel length. Over time we have pushed so that the distance
           | between wires is different from the width of the wires, and
           | the regions of silicon are larger than the distances between
            | them, etc.
           | 
           | So channel length started to shrink much faster than the
           | wires, and in the 2000s it was less than half of the node
           | name and fully a third of the M1 half-pitch. Since then
           | things got even weirder, and "channel length" doesn't
           | correspond to the changes in performance any more.
           | 
           | Since the M1 pitch doesn't track the nodes very well, area
           | density probably won't either. The reason it does so well is
            | probably more to do with the fact that it's the other major
            | process goal besides performance - the more transistors you
           | squeeze in, the more chips you get per wafer at a given
           | performance. Foundries ensure the area density keeps
           | increasing as fast as performance, and the difficulty has
           | kept rough pace with performance. It's entirely possible that
           | relative difficulty will change and area density will start
           | to underperform as a metric for performance.
        
         | mrfusion wrote:
         | Wow that's a shocking difference from the 2nm. Who can I trust
         | anymore?
        
           | PhaseLockk wrote:
           | In the original naming scheme, 2nm would be the length of the
           | transistor gate, which is the smallest feature on the device,
           | not a dimension of the whole transistor. It's not meaningful
           | to compare 2nm to the area numbers given above.
        
         | iamgopal wrote:
          | Shouldn't we use volume instead of area?
        
           | tromp wrote:
           | I think cooling requirements will keep use of the 3rd
           | dimension severely limited in the foreseeable future. Also,
           | the required number of lithographic steps might make it
           | economically infeasible.
        
             | sterlind wrote:
             | I've heard of proposals to do cooling via microfluidic
             | channels, but the lithographic steps problem seems
              | inevitable. At least unless you can somehow pattern
             | multiple layers at once, which would destroy the feature
             | size.
        
             | dehrmann wrote:
             | > I think cooling requirements will keep use of the 3rd
             | dimension severely limited in the foreseeable future.
             | 
             | It's fun to go look back on old CPU coolers. They started
             | around Pentium-era CPUs, then kept getting bigger. Around
             | 100W TDP, they stopped. I think that's the largest
             | practical air-cooled cooler.
        
               | verall wrote:
                | A consumer tower (air) cooler can clear 150W on the stock
                | fan, and if you stick some 2k-3k RPM fans on it in push-
                | pull it will probably clear 300W.
                | 
                | Fancier vapor chambers and thicker, higher-RPM fans can
                | clear up to thousands of watts in server environments.
        
             | londons_explore wrote:
             | NAND manages 100+ stacked transistors.
             | 
             | Things like processor caches have similarly low average
             | switching rates, so it doesn't seem out of the realm of
             | possibility to see use of the third dimension for logic.
        
               | selectodude wrote:
               | NAND isn't ever all being used at the same time and even
               | then it can get very hot. SSD thermal throttling is a
               | genuine issue.
        
               | xxs wrote:
                | SSDs thermally throttle the controller, =not= the NAND
                | itself... which actually doesn't even like to be cold.
        
               | wtallis wrote:
               | Depends on the drive. Sometimes it really _is_ the NAND
               | getting too hot.
        
               | merb wrote:
                | If you use the latest high-end drives (M.2 PCIe 4.0,
                | etc.), the whole thing gets really hot, so high-end boards
                | have heat sinks. Keep in mind NAND needs to be warm to
                | reach peak performance, but not too warm. The heat sinks
                | can reduce the temperature by as much as 7degC.
        
           | pjc50 wrote:
           | They're still not stacked, as far as I'm aware - still needs
           | to be in a "well" on the substrate.
        
         | dorfsmay wrote:
          | If somebody wonders what those sizes actually mean, here are
          | reddit threads with some enlightening answers, from the last
          | time I dug into this:
         | 
         | https://old.reddit.com/r/ECE/comments/jxb806/how_big_are_tra...
         | 
         | https://old.reddit.com/r/askscience/comments/jwgdld/what_is_...
        
         | TchoBeer wrote:
         | >10000
         | 
         | Too many zeroes?
        
           | slfnflctd wrote:
           | I thought this at first too, but after re-reading it became
           | clear that they're saying IBM's process takes up the least
           | room per transistor on average while Intel's takes up the
           | most. So IBM is actually beating Apple by this metric here,
           | however counter-intuitive that may seem.
        
             | mkl wrote:
             | IBM's 2nm beating TSMC's 3nm doesn't seem counter-
             | intuitive. If their naming systems are comparable it's
             | exactly what you'd expect.
        
               | slfnflctd wrote:
               | Well, everyone knows now that the 'X nm' naming doesn't
               | actually mean much when looking at the whole chip, which
               | is why we're talking about transistors per area. Also,
               | Apple has been designing modern chips with massive
               | production volumes for a while, and IBM... hasn't (in
               | fact, they've mostly sold off their former fabrication
               | capabilities). So that's why my expectations were a
               | little different.
        
               | hexane360 wrote:
               | That makes sense as well. It's not unexpected that a one-
               | off F1 car is faster than a mass-produced Mustang.
        
           | mkl wrote:
           | No, Intel's 10nm is a bigger process, lagging behind TSMC's
           | 5nm, so its per-transistor area is bigger.
        
             | TchoBeer wrote:
             | I see, I was reading the numbers as a density (per nm^2)
        
         | avs733 wrote:
          | It is slightly more complicated than that,
          | unfortunately... although the current nomenclature is more
          | cargo cult than meaningful.
         | 
         | I could produce a physically smaller transistor, with a smaller
         | gate, source, and drain. However, depending on the limitations
         | of my process changes for scaling I may not actually be able to
         | pack transistors more tightly. Notionally, the smaller
         | transistor could use less energy which improves the chip
         | design, but not be packed more tightly.
         | 
         | There is more than one way to improve a semiconductor at the
         | feature, device, and chip level.
         | 
         | The node naming is a useful convention for the industry because
         | saying something like '10nm' efficiently communicates
         | historical context, likely technological changes, timelines,
         | and other things that have nothing to do with the physical size
         | of the devices on the chips.
         | 
         | It's basically a form of controlled vocabulary.
        
         | Nokinside wrote:
         | Yes. Transistor density is a good measure that translates into
         | something meaningful. It can be compared across different
          | process technologies. Bear in mind that the quoted transistor
          | density is not actually a raw transistor count per area.
          | 
          | It is a weighted number that combines NAND2 transistors/area
          | and scan flip-flops/area:
         | 
         | Tr/mm2 = 0.6x(NAND2 Tr/mm2) + 0.4x(scan flip-flop/mm2)
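          | 
          | As a tiny Python sketch of that weighting (the two input
          | densities here are made-up numbers, just to show the formula):
          | 
          |   def weighted_density(nand2_mtr_mm2, sff_mtr_mm2):
          |       # industry-style weighted density in MTr/mm^2:
          |       # 60% NAND2 cells, 40% scan flip-flop cells
          |       return 0.6 * nand2_mtr_mm2 + 0.4 * sff_mtr_mm2
          |   
          |   print(weighted_density(120.0, 80.0))  # -> 104.0 MTr/mm^2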
        
           | nousermane wrote:
           | For anyone wondering why this is done like so instead of
           | literally counting transistors:
           | 
           | Transistors with 2,3,.. gates are functionally identical to
           | 2,3,.. transistors connected in series, but take chip area
           | that is only a small fraction larger than 1 "normal"
           | transistor. Counting those as either 1, or as multiple (by
              | number of gates) would skew stats in a less-than-useful way.
           | 
           | That is - among other quirks. Ken Shirriff [1] has some
           | excellent articles that touch the topic of what exactly
           | counts as a "transistor".
           | 
           | [1] http://www.righto.com/
        
             | MrBuddyCasino wrote:
             | Would achievable SRAM cache size per mm2 work?
        
               | Tuna-Fish wrote:
               | That is an important metric, but SRAM scales differently
               | to logic density, and there can be processes where SRAM
               | is proportionally much more or less expensive than some
               | logic.
               | 
               | Properly representing transistor density is a very hard
               | problem, to which many different solutions have been
               | proposed. It's just that the solution typically shows the
               | processes of those who proposed it in the best possible
               | light, so there is no industry consensus.
        
               | keanebean86 wrote:
               | How about making an open source benchmark chip. It could
               | have sram, logic, io, etc. As part of releasing a node,
               | manufacturers would port the benchmark chip.
        
               | tromp wrote:
               | They could make it a chip to efficiently solve Cuckatoo32
               | [1], using 1GB of SRAM (or 0.5GB SRAM and 0.5GB off-chip
               | DRAM) to find cycles in bipartite random graphs with 2^32
               | edges.
               | 
               | [1] https://github.com/tromp/cuckoo
        
               | tomcam wrote:
               | That seems kind of brilliant to me.
        
               | [deleted]
        
         | kuprel wrote:
         | They should quote sqrt(3000) ~ 55nm as the transistor size
        
         | robocat wrote:
          | Here's the table: "Peak Quoted Transistor Densities (MTr/mm2)"
          | 
          |   Node        IBM      TSMC     Intel    Samsung
          |   22nm        -        -        16.50    -
          |   16nm/14nm   -        28.88    44.67    33.32
          |   10nm        -        52.51    100.76   51.82
          |   7nm         -        91.20    237.18*  95.08
          |   5nm         -        171.30   -        -
          |   3nm         -        292.21*  -        -
          |   2nm         333.33   -        -        -
          | 
          | Data from Wikichip. Different fabs may have different counting
          | methodologies.
          | 
          | * Estimated Logic Density
        
           | kuprel wrote:
           | I wonder if these logic densities could be converted into
           | units of nanometers. For example 333 MTr/mm2 is 333 Tr/um2.
           | Then there are effectively sqrt(333) transistors on a side of
           | a square micrometer which comes out to about 1/sqrt(333) *
           | 1000 = 55 nanometers per transistor. Way off from the 2
           | nanometer feature size
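            | 
            | Something like this, for playing with the other table entries
            | (a rough sketch; input is a density in MTr/mm2):
            | 
            |   import math
            |   
            |   def nm_per_transistor(mtr_per_mm2):
            |       # linear pitch implied by a density, assuming a square
            |       # area budget per transistor
            |       tr_per_um2 = mtr_per_mm2  # 1 MTr/mm^2 == 1 Tr/um^2
            |       return 1000 / math.sqrt(tr_per_um2)
            |   
            |   print(nm_per_transistor(333.33))  # IBM "2nm"    -> ~55 nm
            |   print(nm_per_transistor(171.30))  # TSMC "5nm"   -> ~76 nm
            |   print(nm_per_transistor(100.76))  # Intel "10nm" -> ~100 nm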
        
             | watersb wrote:
             | > 55 nanometers per transistor. Way off from the 2
             | nanometer feature size.
             | 
             | Cool. Following along here, I consider that transistors
             | require multiple "features", and these days the more
             | complicated transistor structure has enabled the increase
             | in speed while lowering the power requirements. Power is
             | also now a set of characteristics, the power needed to
             | change transistor state, and the leakage current that
             | remains when the transistor is not switching.
             | 
             | Not just a simple NPN junction FET anymore.
             | 
             | Then I think about those great microchip analyses on Ken
              | Shirriff's blog. How very different transistor layouts show
             | up in circuit designs. I can only imagine that modern high
             | performance SOC design is even more complex.
             | 
             | https://righto.com
             | 
             | 55 nanometers per transistor sounds like a useful number to
             | me.
        
       | galaxyLogic wrote:
       | Plenty of room. At the bottom
        
       | akmittal wrote:
        | Hopefully after this the next Raspberry Pi will finally have a
        | 7nm chip.
        
         | 0-_-0 wrote:
         | I'm thinking the extra production capacity that needs to be put
         | in to handle the current chip shortage must get freed up
         | sometime, which should mean more than the expected excess 7nm
         | capacity a few years down the line...
        
       | chipzillazilla wrote:
       | Do these IBM 2nm chips have an actual function? Just curious what
       | they actually do.
        
         | enkid wrote:
          | They do the same things the bigger chips do, just faster and
          | with less power.
        
           | mrweasel wrote:
           | I don't think that was the question, not that you're wrong.
           | 
           | The question might be more the same one I had: Are they
           | actually making a 2nm POWER processor, or is this just some
           | basic chip that shows that the process works?
        
             | IanCutress wrote:
             | Basic chip trying lots of different things with the
             | manufacturing technology to profile them/see how they work.
             | It's a proof of concept.
        
         | daniel-thompson wrote:
         | From TFA:
         | 
         | > No details on the 2nm test chip have been provided, although
         | at this stage it is likely to be a simplified SRAM test vehicle
         | with a little logic. The 12-inch wafer images showcase a
         | variety of different light diffractions, which likely points to
         | a variety of test cases to affirm the viability of the
         | technology. IBM says that the test design uses a multi-Vt
         | scheme for high-performance and high-efficiency application
         | demonstrations.
        
           | BlueTemplar wrote:
           | > a variety of different light diffractions
           | 
           | So, their function is that they look really cool ?
           | 
           | Yes, yes, I'm leaving...
        
       | ChuckMcM wrote:
       | This is an amazing step and the transistor density chart shows
        | you just how big a deal this is: a third of a billion
        | transistors per square mm. Now for 'grins', take a piece of paper
        | and make a 2 x 2 mm square on it. Now figure out what you can do
        | with 1.3B transistors in
       | that space. Based on this[1] you are looking at a quad-core w/GPU
       | desktop processor.
       | 
        | Of course you aren't really, because you can't fit the 1440 pins
       | that processor needs to talk to the outside world. But it
       | suggests to me that at some point we'll see these things in
       | "sockets" that connect via laser + WDM to provide the I/O. An
       | octagonal device with 8 laser channels off each of the facets
       | would be kind of cool. Power and ground take offs on the top and
       | bottom. That would be some serious sci-fi shit would it not?
       | 
       | [1] https://en.wikipedia.org/wiki/Transistor_count
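        | 
        | Back-of-the-envelope, if you want to check that (a sketch using
        | the 333 MTr/mm2 figure from the density table):
        | 
        |   DENSITY_MTR_PER_MM2 = 333.33  # IBM 2nm peak quoted density
        |   
        |   def transistors_in(width_mm, height_mm):
        |       return width_mm * height_mm * DENSITY_MTR_PER_MM2 * 1e6
        |   
        |   print(f"{transistors_in(2, 2):.2e}")  # ~1.33e9 transistors
        |   # in the ballpark of the quad-core w/GPU desktop parts on the
        |   # Wikipedia transistor-count list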
        
         | anigbrowl wrote:
         | I suspect over the next decade we'll see architectural changes
         | and a whole host of liberating form factors where relatively
         | modest processing power is sufficient to provide huge utility,
         | from eyeglasses to earrings, to implants of various sorts. Also
         | we're already at the point of being able to do some serious
         | processing in a pill that's small enough to digest and excrete
         | safely. Near-real-time chemical assay can't be that far off.
        
       | anigbrowl wrote:
       | Heh, I remember being roundly mocked here about 10 years ago for
        | disputing the idea that 22nm was the end of the road. '_Maybe_
        | 16 or 14nm, but then the laws of physics intervene.' I should
       | have put down a bet.
        
       | ksec wrote:
        | It really depends on when this will arrive (if at all) at Samsung
       | Foundry. TSMC will have 3nm _shipping_ next year. And currently
       | still looking at 2024 for their 2nm. Which would have 50% higher
       | transistor density than this IBM 2nm.
       | 
        | And I really do support Intel's node renaming if the rumour is
        | true. I am sick and tired of people arguing over it. It is wrong,
        | but that is how the industry has decided to name things. Adopt it
        | and move on.
        
       | PicassoCTs wrote:
       | Moore to the rescue. But can they keep up with the bloatware
       | curve? If we cook all the horsepower down to glue, and glue ever
       | more horrible libraries together, when will we reach peak horse-
       | pisa-stack?
       | 
        | I stopped watching for CPU advances a decade ago. If we could have
       | a NN-driven prefetcher that is able to anticipate cache misses
       | from instructions and data, 300 cycles ahead of time, that would
       | be some speedup we all could benefit from, if it found its way
       | into hardware.
       | 
       | https://drive.google.com/file/d/17THn_qNQJTH0ewRvDukllRAKSFg...
        
         | joe_the_user wrote:
         | Regardless of bloatware, I don't see how more transistors can
         | indefinitely simulate an increase in processor speed. It seems
         | like it would violate P!=NP or the halting problem or something
         | - as well as being a massive security hole.
         | 
         | This is why gpgpu processing seems like the wave of the future.
         | 
         | Alternatively, human level AI won't come from OpenAI but the
         | final desperate Intel team trying to create a system that
         | anticipates all serial routines the processor can run in
         | parallel.
        
         | fallingknife wrote:
         | Wouldn't that level of speculation massively degrade security?
        
         | titzer wrote:
         | It doesn't matter if you fetch 300 cycles in the future, you'll
         | eventually saturate cache bandwidth.
         | 
          | It's worth noting that the M1 has a reorder buffer of more than
          | 600 micro-ops, which essentially means it can be 600
          | instructions
         | in the "future" (it's only the future from the point of view of
         | instruction commit).
        
         | simias wrote:
         | I think it's the glue that keeps up with the CPU power curve,
         | so to speak. You give devs more RAM and more cycles and they'll
         | find a way to use them with inefficient languages, suboptimal
         | code and shiny UI.
         | 
         | I think it's important to remember that for instance Zelda:
         | Link's Awakening was less than 512KiB in size and ran on a
         | primitive 1MHz CPU.
         | 
        | But at the same time we have to acknowledge how much better it
        | _can be_ to develop for a modern target. We can decide to waste
        | this
         | potential with bloated electron runtimes, but we can also
         | leverage it to make things we thought impossible before.
        
           | hvidgaard wrote:
           | > You give devs more RAM and more cycles and they'll find a
           | way to use them with inefficient languages, suboptimal code
           | and shiny UI.
           | 
           | I'd go as far as saying that most devs do not want to use
           | inefficient languages, write suboptimal code or program that
            | new shiny UI. Quite the contrary - but users and management
            | demand most of those things, just under other names.
        
             | gmadsen wrote:
             | really depends on the context. Embedded software has its
             | own fun challenges, but for a simple web app, the last
             | thing I want to worry about is controlling allocations
        
           | enkid wrote:
           | I don't think this is a good example. Zelda: Link's
            | Awakening, while groundbreaking, can be, and has been,
           | improved upon. Using those resources to create a better user
           | experience is not inefficiency. That's the entire point of a
           | video game - a good user experience.
        
             | dijit wrote:
             | I think you're not disagreeing with the parent.
             | 
             | You can have an exceptional experience with a game in less
             | than the storage and processing power it takes to run Hello
             | World today;
             | 
              | To that end you can make things better and nicer than Link's
             | Awakening with modern resources.
             | 
             | But can you make something 500-3000x better? (that's the
             | order of magnitude we're talking about, with Slack vs
             | Zelda)
        
               | dale_glass wrote:
               | No, because improvements don't scale that way.
               | 
               | 320x200 was more or less where graphics started. 640x480
               | was _much_ better. 720p was a lot better than broadcast
               | TV. 1080p was a very nice improvement for a lot of uses.
               | 4K is visibly better, but mostly eye candy. 8K is
               | pointless for a desktop monitor.
               | 
               | The steps give you 4X the number of pixels. At 320x200
               | you're severely constrained in detail. You'd have a hard
               | time for instance accurately simulating an aircraft's
               | cockpit -- you don't have enough pixels to work with.
               | 
               | 1080p to 4K is in most cases not the same kind of
               | difference. There's little for which 1080p is actually
               | constrained in some way and require sacrificing something
               | or working around the lack of pixels.
               | 
               | This isn't because modern software design is bloated and
               | sucks, but simply because improvements diminish. Our
               | needs, abilities and physical senses are finite.
        
               | chrisco255 wrote:
               | For VR to ever begin to approach something close to
               | reality, and to really prevent people from getting sick
               | wearing the headsets, it needs to be nearly 90Hz per eye
               | and very high definition in a very small, lightweight
               | package. While you're focused on the rectangular screens,
               | the market is figuring out other ways to use this
               | technology.
        
               | dale_glass wrote:
               | Yes, VR goes much higher need-wise, but it has a similar
               | curve.
               | 
               | The DK1 was a proof of concept. The DK2 was barely usable
               | for most uses, and not at all for others. The CV1 was
               | about the minimum needed for viable commercial use --
               | earlier ones needed huge fonts, so many games' UI
               | wouldn't translate well.
               | 
               | By Quest 2 the resolution is okay. It could be better
               | still, but it's no longer horribly limiting for most
               | purposes.
        
               | coretx wrote:
               | You are technically correct but it might be wise not to
                | omit that 8K on a desktop monitor may contribute to the
                | presented gamut (despite the (sub)pixels not being
                | visible to the human eye).
        
               | dangus wrote:
               | While I agree that returns diminish, I'll be the first to
               | disagree that those enhancements are unnecessary.
               | 
               | If I do text work on a 1440p 27" monitor I can see the
               | pixels. Anything less than 4K is unacceptable to me. I
               | could see an 8K monitor being a worthy upgrade if I could
               | afford it and my computer could drive it, especially if
               | it meant that I could make the screen larger or wider and
               | still avoid seeing pixels.
               | 
               | Also, we might consider that digital tech is arguably
               | still catching up in some aspects to our best analog
               | technology like 70mm film.
        
               | zozbot234 wrote:
               | What's wrong with "seeing pixels" when all you're doing
               | is basic text work? Bitmap fonts - or properly-hinted
               | TrueType/OpenType fonts, now that the relevant patents
               | have expired - can be pixel-perfect and are far from
               | unreadable.
        
               | dangus wrote:
               | Because I have to look at it all day and I'd like it to
               | look smooth.
               | 
               | Also, I can read smaller text when the resolution is
               | higher.
               | 
               | When it comes to technology and innovation I believe that
               | asking "why do I need this?" can often be the wrong
               | question. It feels like a bias toward complacency.
               | 
               | Why drive a car when you've already got a horse?
        
           | bogwog wrote:
           | > but we can also leverage it to make things we thought
           | impossible before.
           | 
           | Or realistically, to lower hiring costs.
        
             | bcrosby95 wrote:
             | Games are simultaneously more expensive, and cheaper, than
             | ever to make.
             | 
             | At the top end, tech advancements go towards things that
             | couldn't be done before. And at the bottom end, it's
             | enabling smaller shops to do things they couldn't do before
             | because it's now within their budget.
        
             | simias wrote:
             | Or shorten dev time, sure. I don't think those are
             | necessarily bad things. The industry seems to have enough
             | trouble as it is to recruit software devs. Can you imagine
             | if you needed deep assembler mastery in order to develop a
             | website? It would have hampered the development of the
             | industry tremendously.
        
               | PicassoCTs wrote:
                | Joke aside, imagine a language that is optimized to
                | allow for ease of later optimization. As in: object-
                | oriented, fancy beginner-level code goes in, and the hot
                | loop could be rewritten if need be by a professional,
                | without doing a complete data-oriented refactoring of the
                | original code.
        
               | PicassoCTs wrote:
               | It would be a set of two languages. One classic, defining
               | operations, sets, flow direction. Then another, which
               | defines containers and the (parallel) paths of execution.
        
               | hvidgaard wrote:
               | I don't think that is possible to be honest. I'd love it,
                | but most suboptimal choices require extensive
               | refactorings to work around. I'd love to be proven wrong
               | though.
        
               | TchoBeer wrote:
                | From my experience that's a bit like what CPython is.
        
               | Siira wrote:
                | The current frontiers are nowhere near what is possible,
                | though. E.g., Python, Bash, and Ruby could be a lot
                | faster if their development were subsidized by
                | governments. C++ doesn't have a good package manager (and
                | likely will not have one in the coming decade either). Go
                | doesn't have obvious features such as string
                | interpolation and generics. Rust's tooling still isn't
                | pleasant. ...
        
               | elzbardico wrote:
                | Ada and, to a lesser extent, COBOL were heavily
                | subsidized by the government. Probably even PL/I.
        
           | DerekL wrote:
           | > I think it's important to remember that for instance Zelda:
           | Link's Awakening was less than 512KiB in size and ran on a
           | primitive 1MHz CPU.
           | 
           | The Game Boy CPU ran at 4.19 MHz.
        
             | monocasa wrote:
              | Eh, sort of. Its memory bus runs at 1.024 MHz, and
              | operations take multiples of 4 CPU cycles to execute. It's
              | really more like a 1.024 MHz processor with the internals
              | running QDR.
        
         | maccard wrote:
         | > I stopped eyeing for cpu advances a decade ago
         | 
         | Even in the x64 space massive advances have been made [0]. For
         | 2.5x the power, you can have 8x the parallelism, with 50%
         | faster single core speeds, (plus all the micro improvements
         | with avx, or the other improvements that make 3.5GHz today
         | faster than a 3.5GHz cpu from 10 years ago)
         | 
         | [0] https://cpu.userbenchmark.com/Compare/AMD-Ryzen-
         | TR-3970X-vs-...
        
           | wmanley wrote:
           | I think the point is that while CPUs have gotten
           | (significantly) better over time using a computer doesn't
           | feel any faster than 10 years ago because software has more
           | than made up for it. On my 3 year old Ubuntu machine here it
           | takes ~2-3s to launch gnome-calculator. This is no faster
           | than launching gnome-calculator in 2004 on my 2004 Athlon 64
           | with Gnome 2.6. It's not like gnome-calculator is suddenly
           | doing a lot more now.
           | 
           | It feels like I spend more time now waiting for the computer
           | than I did then. It's the same with the phone - click, wait,
           | click, wait, click, wait. For a taste of what it could be
           | like try running Windows 2000 in a VM. Zoooooooom.
           | 
           | Why? Developers will only optimise their software when it
           | becomes unbearably slow - and in the absence of automated
           | performance tracking and eternal vigilance software gets
           | slower over time. The result is that there is no reason* to
           | get excited about CPU advances. It doesn't make your life
           | better, it's just a cost you have to pay to stop your life
           | getting worse.
           | 
           | * Sure there are some reasons - if you're already
            | using/dependent upon heavily optimised software like games or
           | machine learning, but for most people that's a very small
           | part of their computing use.
        
             | imiric wrote:
             | Haven't you heard? Streaming a web browser that runs in The
             | Cloud is the future. So don't worry about bloated software
             | or upgrading your hardware, The Cloud will make it better.
             | ;)
        
             | dangus wrote:
             | This take is common, understandable, but at the same time
             | mostly just cynical and I think misguided.
             | 
              | First, application launch times have more to do with
              | storage and memory speed than raw CPU power. So, without
              | knowing your
             | exact specs, I can't tell you if your launch times for your
             | new and old machine are actually a problem, especially if
             | you've upgraded your older machine to an SSD or if your new
             | machine still uses a HDD for its main drive.
             | 
             | Second, I simply don't believe you that it takes a full
             | three seconds to open gnome calculator on your new machine.
             | Or your old one, really. (It makes me want to ask the
             | annoying question: is Linux that bad? My Mac or Windows
             | machine has opened the calculator instantly regardless of
             | the era).
             | 
             | Finally, this is actually the most important point:
             | capability of your computer is still miles ahead of what it
             | used to be. Try playing 1080p video on your 2004 machine.
             | Now try 4K.
             | 
             | Maybe you say that high resolution video isn't important. I
             | don't agree, but fine, let's move on:
             | 
             | Browser performance is another hugely important aspect
             | where your 2004 machine will get smoked. Sure, you can
             | remain cynical about websites getting "bloated," but you
             | should also realize that almost all business applications
             | and some seriously complex general purpose applications are
             | all moving to the web. Things like video conferencing in a
             | browser window were a pipe dream in 2004. Microsoft word
             | used to be considered a big complex and slow application,
             | and now multiple competitors just run it right in your
             | browser.
             | 
             | This is a very good thing for interoperability and for
             | making your Linux system actually useful instead of having
             | to jump on your Windows partition every time you want to
             | collaborate on a Microsoft Word document.
             | 
             | You may not like Apple machines but step into an Apple
             | store one day and play around with the cheapest M1 MacBook
             | Air computer. It has no fan, doesn't get hot on your lap,
             | and the battery lasts 15 hours. Yet, it beats the 16-inch
             | Intel MacBook Pro that is still the most current model on
             | some benchmarks - a machine that is hot, loud, heavy, and
             | will give you probably 6 hours of battery at best with the
             | largest possible battery that's legal to fly with.
             | 
             | I think the long story short is that software developers
             | leveraging the hardware is actually a good thing. It means
             | that our capabilities are increasing and developers have
             | less friction, so they can deliver more complex and useful
             | functionality. As long as UI latency is within the
             | acceptable range, allowing developers to build more complex
             | software and focus on functionality over performance
             | optimization could be argued to be a very good thing.
             | 
             | In 2004 a developer would struggle to make a decent cross
             | platform application with rich functionality. Something
             | like a chat application with embedded media would need to
             | be written three times to cover three platforms, and I'm
             | still not aware of any chat app in 2004 that had a whole
             | application and webhook ecosystem. The most important part
             | about a chat application is for it to be interoperable! Now
             | you've got Slack, which is practically a business operating
             | system. Rather than being annoyed about that I think we
             | should be impressed.
             | 
             | (I also disagree that gaming is just a niche use of
             | hardware. That's an industry with higher revenues than
             | movies and sports combined).
        
               | Narishma wrote:
               | > Second, I simply don't believe you that it takes a full
               | three seconds to open gnome calculator on your new
               | machine. Or your old one, really. (It makes me want to
               | ask the annoying question: is Linux that bad? My Mac or
               | Windows machine has opened the calculator instantly
               | regardless of the era).
               | 
               | I believe it. It takes 10 seconds or so for the Windows
               | 10 calculator to cold-start on my laptop.
        
               | jhickok wrote:
               | huh? On my Mac it takes .5 seconds, on my Windows machine
               | it takes only slightly longer (.5 - 1 second).
        
               | Siira wrote:
               | Windows "modern" calculator takes quite some time on its
               | useless splash screen for me. (The old native calculator
               | opened almost instantly.)
        
               | tpxl wrote:
               | First, let me say I mostly agree with you that storage
               | speed is more important for application launch speed than
               | CPU or RAM.
               | 
               | I have an nvme ssd which gets around 450MB/s according to
               | dd, a Ryzen 3900x (12 cores, 24 threads) and 64 gigabytes
               | of RAM.
               | 
               | That being said, IntelliJ Community edition, installed as
               | a snap, takes 24 seconds to get to splash screen, 57
               | additional seconds for the splash screen to disappear
               | then another 35 seconds before the first text appears in
               | the IntelliJ window (it's a blank gray window before
               | that) for a grand total of 1 minute and 56 seconds of
               | startup time.
               | 
               | Some things are just unreasonably slow despite absolute
               | beastmode hardware.
        
               | elzbardico wrote:
               | My experience with snap is that it makes Ubuntu absurdly
               | slow. I completely removed this spawn of satan from my
               | Ubuntu machines.
        
               | bcrosby95 wrote:
               | I don't know what "installed as a snap" means, but:
               | 
               | I'm running WSL 2 with MobaXterm and from running
               | idea.sh, intellij's splash screen comes up in less than 1
               | second, and my list of projects comes up in about 3
               | seconds.
        
               | plasticchris wrote:
               | Probably installed with snapd
        
               | tpxl wrote:
               | It's a packaging system for Ubuntu
               | (https://en.wikipedia.org/wiki/Snap_(package_manager)).
                | It's not that IntelliJ is slow; it's that something about
                | the snap packaging makes apps dog slow (the same thing
                | happened with Spotify installed through snap, while the
                | .deb launches in about a second).
        
             | 2pEXgD0fZ5cF wrote:
             | Fully agree.
             | 
              | It also feels like some kind of disdain for performance and
              | optimization has taken root in certain parts of our
              | industry and communities: the "client does not care about
              | this" school of thought that aggressively treats
              | optimization as foolish to even think about.
              | 
              | While this kind of thinking obviously has its place,
              | especially when it comes to proper management, it has been
              | blown way out of proportion. You can no longer mention a
              | performance advantage without someone appearing out of
              | nowhere to smugly tell you that the evils of performance
              | and optimization (the enemies of creating revenue,
              | apparently) should be ignored and looked down upon in the
              | face of the almighty "faster development times".
        
             | jamincan wrote:
             | It's a similar dynamic to how widening a road only provides
             | temporary relief for traffic. More traffic fills the newly
             | available space until it reaches homeostasis with the
             | number of people avoiding the road due to congestion.
             | 
             | Similarly, developers take up the space that better
             | hardware provides until the user experience starts to
             | degrade and they are forced to optimize.
             | 
             | In both cases, you end up with the apparent contradiction
             | that the user's experience doesn't seem to change despite
              | improvements that, on the face of it, should directly
              | benefit them.
        
               | bcrosby95 wrote:
               | That's why I never became a cook. Jerks kept eating my
               | food!
        
               | [deleted]
        
         | Const-me wrote:
         | > If we could have a NN-driven prefetcher that is able to
         | anticipate cache misses
         | 
          | It's not that critical for the memory prefetcher because the
          | cache hierarchy already helps a lot. Most software doesn't
          | read random addresses, and the prefetchers in modern CPUs are
          | pretty good already. Also, a prefetcher is not a great
          | application for AI because the address space is huge.
         | 
         | Branch prediction is another story. CPUs only need to predict a
         | single Boolean value, taken/not taken. Modern processors are
         | actually using neural networks for that:
         | https://www.amd.com/en/technologies/sense-mi
         | https://www.youtube.com/watch?v=uZRih6APtiQ
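
        To make the "neural networks for branch prediction" point
        concrete, here is a minimal perceptron-style predictor in the
        spirit of Jimenez & Lin (HPCA 2001). It is a textbook sketch of
        the general idea, not AMD's actual hardware; the table size,
        history length, training threshold, and the toy trace in main()
        are illustrative choices.

          // perceptron_bp.cpp -- minimal perceptron branch predictor.
          // Each table entry holds a small weight vector; the dot product
          // with recent global branch history decides taken / not taken,
          // and the weights are nudged whenever training fires.
          #include <array>
          #include <cstdint>
          #include <cstdio>
          #include <cstdlib>

          class PerceptronPredictor {
              static constexpr int kHist  = 16;    // global history length
              static constexpr int kTable = 1024;  // number of perceptrons
              static constexpr int kTheta = int(1.93 * kHist + 14);  // train threshold
              std::array<std::array<int8_t, kHist + 1>, kTable> w{};  // weights + bias
              std::array<int8_t, kHist> hist{};    // +1 = taken, -1 = not taken

              int dot(uint64_t pc) const {
                  const auto& wv = w[pc % kTable];
                  int y = wv[0];                   // bias weight
                  for (int i = 0; i < kHist; ++i) y += wv[i + 1] * hist[i];
                  return y;
              }
              static int8_t clamp(int v) {
                  return (int8_t)(v > 127 ? 127 : v < -128 ? -128 : v);
              }

          public:
              bool predict(uint64_t pc) const { return dot(pc) >= 0; }

              void update(uint64_t pc, bool taken) {
                  int y = dot(pc);
                  int t = taken ? 1 : -1;
                  auto& wv = w[pc % kTable];
                  // Train on a misprediction, or while confidence is low.
                  if ((y >= 0) != taken || std::abs(y) <= kTheta) {
                      wv[0] = clamp(wv[0] + t);
                      for (int i = 0; i < kHist; ++i)
                          wv[i + 1] = clamp(wv[i + 1] + t * hist[i]);
                  }
                  // Shift the new outcome into the global history register.
                  for (int i = kHist - 1; i > 0; --i) hist[i] = hist[i - 1];
                  hist[0] = (int8_t)t;
              }
          };

          int main() {
              PerceptronPredictor bp;
              int hits = 0, n = 0;
              // Toy trace: a loop branch taken 9 times, then not taken.
              for (int rep = 0; rep < 1000; ++rep)
                  for (int i = 0; i < 10; ++i, ++n) {
                      bool taken = (i != 9);
                      hits += (bp.predict(0x400123) == taken);
                      bp.update(0x400123, taken);
                  }
              std::printf("accuracy: %.1f%%\n", 100.0 * hits / n);
          }

        After a short warm-up the predictor learns the 9-taken/1-not-taken
        pattern; the appeal of the perceptron approach is that its storage
        grows linearly with history length, rather than exponentially as
        in purely table-based two-level schemes.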
        
           | thrwaeasddsaf wrote:
           | > Most software doesn't read random addresses.
           | 
            | Are there any studies or literature to back this up?
           | 
           | I thought one of the downfalls of modern software is a
           | ridiculous number of tiny dynamic allocations, often
           | deliberately randomized in space. Lots of pointer chasing and
           | hard-to-predict access patterns.
           | 
           | And some people work really hard to make their access
           | patterns cache friendly, which is far from trivial and for
            | most software, not a cost they can justify. Sometimes
            | changing the access patterns means reorganizing all vital
            | data structures and going from SoA to AoS (or vice versa);
            | due to limited language support, that can mean sweeping
            | changes across the entire code base. It doesn't help that
            | as new features are added, the requirements on these data
            | structures and access patterns can change a lot.
        
             | Const-me wrote:
             | > Lots of pointer chasing and hard-to-predict access
             | patterns.
             | 
             | They're hard to predict individually, but statistically I
             | think they're rather predictable. Many programs indeed use
             | tons of small allocations and chase quite a few of these
             | pointers. Still, after the program runs for a while, the
              | data ends up distributed across the cache hierarchy (e.g.
              | my CPU has 32KB L1D, 512KB L2 and 32MB L3), and the result
              | is often not terribly bad.
             | 
             | > work really hard to make their access patterns cache
             | friendly, which is far from trivial and for most software
             | 
             | Depends on the goal. If the goal is approaching the numbers
             | listed in CPU specs, FLOPS or RAM bandwidth, indeed, it can
             | be prohibitively expensive to do.
             | 
             | If the goal is more modest, to make software that's
             | reasonably fast, for many statically typed languages it's
             | not that bad. Prefer arrays/vector over the rest of the
             | containers, prefer B+ trees over red-black ones, prefer
             | value types over classes, reduce strings to integer IDs
             | when possible, and it might be good enough without going
             | full-HPC with these structs-of-arrays and the complexity
             | overhead.
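
        As a concrete illustration of the "prefer arrays and value types"
        advice above, the following C++ sketch contrasts a pointer-chasing
        linked layout with a contiguous vector holding the same values.
        The names and sizes are invented for the example and absolute
        numbers vary by machine; the point is only that the contiguous
        loop walks memory linearly, which hardware prefetchers handle
        well, while the shuffled linked walk misses the caches on most
        hops.

          // layout.cpp -- contiguous value-type layout vs. pointer chasing.
          // Build: g++ -O2 -std=c++17 layout.cpp -o layout
          #include <algorithm>
          #include <chrono>
          #include <cstdio>
          #include <memory>
          #include <numeric>
          #include <random>
          #include <vector>

          struct Node { double value; Node* next; };  // reached via pointers

          int main() {
              constexpr size_t N = 1 << 22;            // ~4M elements

              // Contiguous, value-type layout: one flat allocation.
              std::vector<double> flat(N);
              std::iota(flat.begin(), flat.end(), 0.0);

              // Linked layout: same values, one node per element, visited
              // in shuffled order to mimic scattered heap allocations.
              std::vector<std::unique_ptr<Node>> pool(N);
              std::vector<size_t> order(N);
              std::iota(order.begin(), order.end(), size_t{0});
              std::shuffle(order.begin(), order.end(), std::mt19937{42});
              for (size_t i = 0; i < N; ++i)
                  pool[i] = std::make_unique<Node>(Node{double(i), nullptr});
              for (size_t i = 0; i + 1 < N; ++i)
                  pool[order[i]]->next = pool[order[i + 1]].get();
              Node* head = pool[order[0]].get();

              // Time a summation over each layout and report milliseconds.
              auto time_it = [](const char* label, auto&& sum) {
                  auto t0 = std::chrono::steady_clock::now();
                  double s = sum();
                  double ms = std::chrono::duration<double, std::milli>(
                      std::chrono::steady_clock::now() - t0).count();
                  std::printf("%-12s sum=%.0f  %.1f ms\n", label, s, ms);
              };
              time_it("contiguous", [&] {
                  double s = 0; for (double v : flat) s += v; return s; });
              time_it("linked", [&] {
                  double s = 0;
                  for (Node* p = head; p; p = p->next) s += p->value;
                  return s; });
          }

        On a typical desktop the contiguous sum is several times faster,
        which is roughly the effect the comment above describes: you don't
        need full structs-of-arrays discipline to benefit, just fewer
        pointer hops and denser data.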
        
         | jdashg wrote:
         | Worth mentioning that Ryzen's branch predictor does in fact
         | have an NN component.
        
         | ComodoHacker wrote:
         | >I stopped eyeing for cpu advances a decade ago
         | 
         | So have you completely missed the huge advance of ARM that you
         | have in your phone now?
        
           | pjmlp wrote:
           | ART and V8 take care of it for me, so yeah.
        
           | [deleted]
        
       | taklamunda wrote:
        | The brand name is so nice; I hope the quality is better than
        | the brand.
        
       | hi41 wrote:
        | With all the over-the-top, sensationalized news reports, I
        | hadn't come across understated humor in a long time, and then I
        | saw this in the article! Nice one!
       | 
       | >>IBM states that the technology can fit '50 billion transistors
       | onto a chip the size of a fingernail'. We reached out to IBM to
       | ask for clarification on what the size of a fingernail was, given
       | that internally we were coming up with numbers from 50 square
       | millimeters to 250 square millimeters.
        
         | babypuncher wrote:
         | The scandal will be when it turns out their reference was
         | actually a toenail.
        
         | FridayoLeary wrote:
            | The author seems to be a professor at Oxford. I took the
            | trouble to look him up because I (correctly) guessed this
            | was typical, dry British irony. The academic link also came
            | as no surprise.
        
           | colordrops wrote:
           | Maybe too dry or I'm dense. I don't get the joke.
        
             | beowulfey wrote:
              | The joke is that their reference to a fingernail, as if it
              | were a consistent unit of measurement, turns out to be
              | useless as a reliable measure of surface area, given the
              | range of measurements from those around them.
        
               | ludsan wrote:
               | ISO has it as 1/42042042042042 of a "library of congress"
        
           | throwaway2037 wrote:
           | To be clear, according to his LinkedIn profile
           | (https://www.linkedin.com/in/iancutress):
           | 
           | <<Academically I have obtained a doctorate degree (DPhil, aka
           | PhD) from the University of Oxford>>
           | 
           | It does not say he is/was a professor. (Please correct me if
           | wrong.) Still, a PhD from Oxford is an amazing achievement!
           | 
            | And, yes, I also enjoyed that bone-dry humor about the
            | enquiry into the size of a fingernail...
        
             | IanCutress wrote:
             | Can confirm, not a professor :) Nine papers on
             | Computational Chemistry during my 3 year PhD. Been writing
             | about semiconductors for 10+ years now. Also doing video on
             | the youtubes. https://www.youtube.com/techtechpotato
        
               | Covzire wrote:
               | I really like your level-headed, not-overly-gamer-but-
               | still-not-too-dry style. Keep it up!
        
               | gsibble wrote:
               | Hey! Subscribed to you the other day. Great channel. Keep
               | up the hard work!
        
               | secfirstmd wrote:
               | Hahaha. That's some achievement and perfectly timed
               | response. https://youtu.be/_uMEE7eaaUA
        
               | johncalvinyoung wrote:
               | Apparently we were up at Oxford at the same time! I
               | wasn't working on my DPhil, though... PPE as a visiting
               | undergrad student. I greatly enjoy your deep dives on
               | Anandtech!
        
         | ineedasername wrote:
         | Clearly we need an ISO standard for fingernail size.
        
           | tambourine_man wrote:
           | Well, we (some stubborn countries at least) use feet as a
           | unit of measurement, so I wouldn't be surprised.
        
             | lstamour wrote:
             | Worse, there were actually two different foot measurements,
                | one used in surveying and the other more generally. They
                | differ only slightly, but it adds up in surveying and
                | still causes trouble today, as you have to determine
                | whether something was measured with one definition of a
                | foot or the other. Of course, one pair of fingernail trimmers
             | tends to be as good as another, but one can't say the same
             | about shoes sized for the wrong foot. ;-)
        
               | ineedasername wrote:
               | Hmmm... people also trim fingernails to different
               | lengths. There are also artificial finger nail
               | extensions, although I think those can clearly be left
               | out of consideration.
               | 
               | I think to accommodate such variables the standard would
               | really need to consider only the nail plate that covers
                | the finger itself, not any overhang resulting from
                | growing nails out longer.
               | 
               | Further, I would propose we settle on a single finger to
               | use. While all fingers are important, the dominant role
               | of the pointer finger makes it a clear candidate.
               | 
               | Really, if we're moving to a nail-based benchmark for
               | generating transistor density metrics then these details
               | really need to be, um... nailed down.
        
         | Normille wrote:
         | >"..reached out.."
         | 
         | When I'm king of the world. There will be public floggings for
         | anyone using that puke-making phrase!
        
           | bdamm wrote:
           | What alternatives would you suggest? What are you willing to
           | bet that they will stand the test of time without also
           | becoming overused icons of language? "Reached out" at least
           | has a warm diplomatic connotation so I personally don't mind
           | it at all.
        
             | mbg721 wrote:
             | Why not "asked" for "reached out to ask"?
        
               | bqmjjx0kac wrote:
               | At the end of the day, it is what it is.
        
               | geenew wrote:
               | It's certainly not what it's not, and it was what it was.
               | Going forward, it will be what it will be.
               | 
               | Despite the claims from some quarters, it wasn't what it
               | wasn't; despite the hopes from other quarters, it won't
               | become what it won't become.
               | 
               | Some want it to be what it was, while others want it to
               | not become what it is.
               | 
               | Everyone can agree that it might or might not become what
               | it is. All anyone can do is learn from what it was,
               | understand what it is, and work to make what it will be
               | what they want it to be.
        
               | jraph wrote:
               | At the beginning of the day too, most certainly.
        
               | anonymfus wrote:
               | Because it does not express how hard it was.
        
               | panzagl wrote:
               | I'm too busy tilting at the 'just say use instead of
               | utilize' windmill, but your cause is worthy.
        
               | anigbrowl wrote:
               | I hate that one too. I think it's meant to imply a sort
               | of pioneering by making something usable which previously
               | wasn't, rather than merely employing it like some plebe.
                | I've noticed an inverse correlation between pomposity
                | and substantiveness.
        
               | kseifried wrote:
                | Because oftentimes they didn't get to the asking stage;
               | they got to the "can we talk to someone about X and ask
               | questions?" stage and nobody replied by the article
               | deadline.
        
             | shard wrote:
             | Yes, we can "circle back" in a few years and "touch base"
             | with him to see if he still feels the same.
        
           | hi41 wrote:
            | Why use punishment when a 10-week course on correct English
            | usage would suffice? :-) Since you are the king in that
            | epoch, you could make it mandatory!
        
             | moron4hire wrote:
             | There is no such thing as "correct English usage". There is
             | only "English usage that adheres to the expectations of the
                | listeners". It's always been this way in English and it
             | would take authoritarian social controls (not language
             | controls) to change it.
             | 
             | In other words, to quote Cheryl Tunt, "You're not my
             | supervisor!"
        
               | zepto wrote:
               | He said he plans on being King, so he can simply dictate
               | that people use 'The King's English'.
        
               | balabaster wrote:
               | Dictate? He has met English people, right?
               | 
               | We don't really take to being dictated to that well. I
               | can name half a dozen historic "circumstances" where
               | that's failed quite catastrophically.
               | 
               | We don't like being told what to do...
        
               | akiselev wrote:
               | The Kingship is a red herring. He just needs a bus with
               | Chaucer's face above the words "The Ol' English of the
               | Great British Empire" and a media campaign with pithy
               | slogans like "Time to get on with English" and "English
               | means English!"
        
               | kbenson wrote:
               | Ah, the British. Us Americans are over here trying to
               | influence everyone with sophisticated media silos and
               | social networks, meanwhile in Britain someone's muttering
               | "hold my beer" and "I'm going to need a bus and some red
               | paint..."
        
               | akiselev wrote:
               | Life imitates art. On the one side is the shambolic chaos
               | that is Mike McLintock and on the other side is Malcolm
               | Tucker, whose writ "runs through the lifeblood of
               | Westminster like raw alcohol, at once cleansing and
               | corroding."
        
               | zepto wrote:
               | Sorry, I apologize. Dictate was the wrong word.
               | 
               | He can proclaim that his people shall use the King's
               | English.
               | 
               | Also, the Tower of London is still a possession of the
               | crown.
               | 
               | Whether it goes well or not is another matter, but I'm
               | sure it will be invigorating.
        
           | pilsetnieks wrote:
           | > When I'm king of the world. There will be
           | 
           | Muphry's law?
        
       | rasputnik6502 wrote:
       | ...but it's too small to be seen.
        
       | mlacks wrote:
        | Not much experience in this space: who would use this patent? I
        | see Samsung and Intel as partners; do they simply use IBM's
       | research with their own manufacturing to produce this?
       | 
        | Also curious if this development will affect Apple silicon or
        | TSMC's bottom line in the near future.
        
         | dahfizz wrote:
          | PowerPC in general is somewhat niche. It's used in things from
          | networking switches to fighter jets, but nothing that would
          | affect the mainstream CPU market. I can't imagine Apple
          | jumping to a new architecture right after developing their own
          | ARM chips.
        
       ___________________________________________________________________
       (page generated 2021-05-06 23:00 UTC)