[HN Gopher] 5/3nm Wars Begin ___________________________________________________________________ 5/3nm Wars Begin Author : nanosheet Score : 77 points Date : 2020-01-25 17:29 UTC (5 hours ago) (HTM) web link (semiengineering.com) (TXT) w3m dump (semiengineering.com) | nanosheet wrote: | Is Moore's Law still alive? | seshagiric wrote: | To quote Intel CEO Robert Swan (on CNBC), Moore's Law is | going to come back alive in the 5/3nm fab capabilities. Intel is | at 11nm, aiming for 7nm. Whereas Texas Instruments is going for | 5nm. Per Robert, they had a lot of learnings from 11nm to 7nm but | expect the move to 5/3nm much faster. | marvy wrote: | But realistically it can't go on much longer, right? The | diameter of a silicon atom is like 0.21 nanometers, so we're | almost within an order of magnitude from rock bottom, right? | I don't actually know anything about this stuff so I could be | hopelessly confused, but that's my impression. | agumonkey wrote: | Who knows, maybe work at 5/3nm will give people new ideas. | stefan_ wrote: | Remember this nm number has nothing at all to do with | physics and is now purely a marketing term. | mcchew wrote: | Can you clarify? I thought the number was still the | length of the transistor. | judge2020 wrote: | https://en.wikichip.org/wiki/technology_node | | > Historically, the process node name referred to a | number of different features of a transistor including | the gate length as well as M1 half-pitch. Most recently, | due to various marketing and discrepancies among | foundries, the number itself has lost the exact meaning | it once held. Recent technology nodes such as 22 nm, 16 | nm, 14 nm, and 10 nm refer purely to a specific | generation of chips made in a particular technology. It | does not correspond to any gate length or half pitch. | Nevertheless, the name convention has stuck and it's what | the leading foundries call their nodes.
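(The wikichip note above says modern node names no longer map to any physical length, only to a generation. A back-of-the-envelope way to see how a "name" could still be derived is to treat it as the gate length that would yield the observed density if everything had shrunk proportionately — a toy model, not any foundry's actual formula; the function name below is made up for illustration.)

```python
import math

def equivalent_node_name(old_name_nm: float, density_gain: float) -> float:
    """Toy model: if all dimensions shrank proportionately, density grows
    as the square of the linear shrink, so the node 'name' scales as
    old_name / sqrt(density_gain). Illustrative only."""
    return old_name_nm / math.sqrt(density_gain)

# A ~2x density step from a "7nm" generation lands near "5nm",
# and another ~2x step from "5nm" lands near "3.5nm" -- close to
# the marketing names, with no feature actually that small.
print(round(equivalent_node_name(7.0, 2.0), 1))  # ~4.9
print(round(equivalent_node_name(5.0, 2.0), 1))  # ~3.5
```

This is roughly the "equivalent scaling" convention the thread describes: pick the number a proportional shrink would have produced, regardless of what the gate length actually did.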
| variaga wrote: | For processes >= 40nm, the number is the gate length of | the transistor. For smaller processes, the number is | (approximately) the equivalent gate length that would | result in the same transistor density (transistors per | mm^2) as if the gate length had been reduced to that | size, assuming everything else was scaled | proportionately. | | The trick is, not everything else scaled proportionately. | The gate lengths (mostly) stopped shrinking at around | 34nm but other things kept shrinking, so the overall | transistor density kept going up. | | (And that assumes planar transistors. Things like FinFET | or nanowire, which make the transistor structure 3d | instead of 2d, further disconnect the gate length from the | achievable density.) | Dylan16807 wrote: | Not _nothing_. The sizes of various aspects of the | process are still roughly correlated with the number, | even if they're 2x or 3x that size. | PaulHoule wrote: | Past that, it is various forms of chiplets, 3-d stacking, | high bandwidth memory to intensify densification at some | cost. | | In the wings there are a few semiconductor materials, from | Si-Ge to In-P and Ga-As and Ga-N, that are used in optical | transceivers, cell phone base stations, power electronics, | and military electronics. Silicon is a good, not great, | semiconductor, and it dominates because we are good at | making things out of Silicon. | | An In-P microprocessor as complex as a 6502 should be able | to clock upwards of 80 GHz and could run with a fully | populated address space of static RAM on the chip and be | able to react to fast events in real time like nothing | else. | | Such a chip would replace 16 5GHz cores for more mainstream | computation, so if cost gaps narrowed, the In-P part might | compete with a Si part in a complex chiplet architecture. | (e.g.
the In-P chip can be built at 1/16 the density of the | Si chip and not have all this multiple-patterning and | lasers trouble that Si is getting into) | vardump wrote: | > An In-P microprocessor as complex as a 6502 should be | able to clock upwards of 80 GHz and could run with a | fully populated address space of static RAM on the chip | and be able to react to fast events in real time like | nothing else. | | > Such a chip would replace 16 5GHz cores for more | mainstream computation,... | | An 80 GHz 6502 would be about as fast as... a 1 GHz x86 in | integer operations, and even that is being generous. | | Floating point would be several orders of magnitude worse | than even that. | | Typical X86 can do 64 8-bit SIMD operations per clock, 2x | AVX2 instructions retired in a single clock cycle. At | over 4 GHz. | | But it'd sure be a beast in real-time applications... | assuming signal integrity is a solvable problem. | Dylan16807 wrote: | > But it'd sure be a beast in real-time applications... | assuming signal integrity is a solvable problem. | | I'd be skeptical about whether it beats an FPGA. | monocasa wrote: | The idea is pretty closely correlated to the really high- | end Cadence-style hardware emulators. Those tend to be | made out of a sea of tiny ascetic processors that only | know logic ops rather than LUTs, AFAIK. | | So it would seem they beat FPGAs in some niches. | | I question the 80 GHz number given by the grandparent | though. Looking at visual 6502 you're not going to get an | order of magnitude better fanout on that design. | Dylan16807 wrote: | > Looking at visual 6502 you're not going to get an order | of magnitude better fanout on that design. | | Better fanout than what? | | The most useful thing I could find was this old article, | but it certainly suggests that the room is there to | expand: https://spectrum.ieee.org/semiconductors/materials/indium-ph... | vardump wrote: | Agreed, FPGA would be my go-to solution as well.
Although | FPGAs certainly can't touch the gigahertz+ range yet. Even | 500 MHz is... challenging. | Dylan16807 wrote: | > Although FPGAs certainly can't touch gigahertz+ range | yet. Even 500 MHz is... challenging. | | It's a relative difference though, that depends on what | material you're building the circuit in. If you can build | an FPGA to run at 1/10th the speed of a 6502, then with a | sufficiently expensive process you get a tradeoff between | a very weak 80GHz processor and a customizable 8GHz | circuit. | Dylan16807 wrote: | A 6502 needs two and a half cycles per instruction, on | top of a weak instruction set. The frequencies don't | compare to a modern core. What really matters is the | transistor speed, which does have the potential to be an | order of magnitude faster, but nobody's going to be | making 8 bit processors out of it. | hpcjoe wrote: | Hmm ... I wrote my Ph.D. thesis on low temperature grown | III-V material (GaAs to be precise) and studied InGaAs, | and other variations. | | GaAs would make for awesome switching systems, as it is a | direct bandgap semiconductor, as opposed to Silicon, | which requires phonon mediation. While this is a | tremendous technological advantage, you have that minor | problem of fabrication. | | For GaAs, you need ~5 atmospheres of As gas at 550C. Not | something anyone wants in their backyard, for good | reason. | | For InP, you need to worry about your supply of the | scarce element indium (In), and all the surrounding tech | needed to grow InP ingots of an appropriate orientation. | You have to worry about the role of defects (which | is what I simulated), how to dope, how to build | structures. | | It's not simply that it is technologically better, it's | that there is a whole massive ecosystem around Silicon, | that for better or worse, pretty much guarantees that we | are going to be running silicon based units for a long | ... long time.
| | The joke, and there was one when I was in grad school, | about GaAs and other non-Si materials was, they are the | materials of the future. And always will be. | davidivadavid wrote: | How do industries typically break out of that kind of | path dependence? | staffanj wrote: | Did you mean TSMC? | cedivad wrote: | TI is aiming for 5nm? | monocasa wrote: | Not sure why you're being downvoted; I only know of Intel, | TSMC, and Samsung attempting to hit 5nm. | | https://en.wikichip.org/wiki/5_nm_lithography_process | unlinked_dll wrote: | Not in the literal sense but I think it's important to look at | it metaphorically. | | People kind of misinterpret Moore's Law as this statement about | the speed of computers but he wasn't talking about that in the | paper [0]. He was talking about cost of manufacturing - and | implicitly, the rate of innovation (which is throttled by how | fast you can bring something to market). | | Basically Moore's Law was an observation that implied that more | complex electronics could be made cheaper as time went on. | | And in today's marketplace, we shouldn't just look at node size | and Moore's law as the complexity bottleneck, because it isn't. | Moore himself talks about it in the paper. | | As an example, Moore pointed out that at the time, " _packaging | costs so far exceed the cost of the semiconductor structure | itself that there is no incentive to improve yields._ " Today | this isn't true anymore - and while Zen2 did see a node shrink, | the great innovation that made a more complex device cheaper | was the move to chiplets, resulting in greater yields. | | As well, Moore prophesied the rise of EDA tools and what would | become hardware description languages. | | " _The total cost of making a particular system function must | be minimized. 
To do so, we could amortize the engineering over | several identical items, or evolve flexible techniques for the | engineering of large functions so that no disproportionate | expense need be borne by a particular array. Perhaps newly | devised design automation procedures could translate from logic | diagram to technological realization without any special | engineering._ " | | What I'm getting at in this ramble is that looking at node size | and "Moore's Law" is only a small slice of the pie and not | particularly interesting outside of materials science. Looking | back at what Moore actually wrote - his law is not "dead" | outside of the advance of node size, but his advice on the | innovation of every other aspect of design is moving at | breakneck speed and it's very exciting. | | [0] | https://hasler.ece.gatech.edu/Published_papers/Technology_ov... | meta39 wrote: | If you consider the literal version of the law, i.e. "the | number of transistors on a microchip doubles every two years" | and you take into account GPUs, then it was alive until at | least 2019. | | Here is a good visualization: | https://www.youtube.com/watch?v=7uvUiq_jTLM | | As the video concluded, we shall see... | 40acres wrote: | The literal interpretation is dead but the trend itself is | still very much active. We're not only consistently packing | more transistors into each chip, but various advancements in | packaging technology assure that performance will remain on an | upward trend. | vkaku wrote: | I don't think the 7nm one is complete yet. | xiphias2 wrote: | I was looking for the type of device that the chips will be used | for, but couldn't find any mention. | | Sadly laptops are always behind in the manufacturing queue when | it comes to new technology. | | I'm excited that Zen 2 is coming out on 7nm, but at the same time | my mobile phone is already good enough in energy efficiency, and | I don't expect a real practical speed increase from the 5nm version.
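(The "literal version" of the law quoted above is easy to put into numbers. A throwaway sketch, assuming a clean two-year doubling anchored at the ~2,300-transistor Intel 4004 of 1971; real products scatter widely around this line, and the function name is ours.)

```python
def projected_transistors(year: int, base_year: int = 1971,
                          base_count: int = 2300,
                          doubling_period: float = 2.0) -> float:
    """Literal Moore's-law projection: transistor count doubles every
    two years, anchored at the Intel 4004 (~2,300 transistors, 1971)."""
    return base_count * 2 ** ((year - base_year) / doubling_period)

# 24 doublings from 1971 to 2019 gives ~3.9e10 -- tens of billions,
# the same order of magnitude as the largest GPUs of that era.
print(f"{projected_transistors(2019):.1e}")  # 3.9e+10
```

That the biggest 2019-era chips still sit near this line is the sense in which the literal law held "until at least 2019", as the comment says.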
| jdsully wrote: | Intel tends to push new technology into laptop chips first, | while AMD does the opposite. This is merely a business decision | on the part of AMD to play to their strengths. | | Right now you can get a 10nm Intel part in a laptop but not for | the desktop. | rwmj wrote: | Smaller transistors use less power; it's not always about | increasing speed. Your smartphone, if it's anything like mine, | would be better if it had a longer running time between | charges. | | Having said that, the very first users will be high end | servers. It's no accident that IBM were the first to | demonstrate 5nm wafers a few years ago[1], because they'll use | them in their top of the line POWER chips. Those chips have | incredible single thread performance, but also incredible | prices (and apparently very low yields). If you're the sort of | person who wonders who would pay money for that, then you're | not the target customer :-) (Disclosure: I now work indirectly | for IBM) | | [1] https://www.ibm.com/blogs/think/2017/06/5-nanometer-transist... | Fronzie wrote: | > Smaller transistors use less power. | | Don't the smaller transistors also have higher leakage? I | thought that below 10nm, scaling down further would not give | power benefits. | unlinked_dll wrote: | It's called Dennard scaling [0] and it ran out about 15 | years ago. | | [0] https://en.wikipedia.org/wiki/Dennard_scaling | Dylan16807 wrote: | Having constant power density with continuous performance | improvements broke down. | | But we're not yet at the point where power density | increases as fast as transistor density. | 1123581321 wrote: | For mobile, increased power efficiency is speed, as it allows | heavier CPU usage with the same battery life. | blp wrote: | This is just... not true. IBM doesn't make 5nm chips. This is | a research demo of a transistor design for that node. They | don't make it, and never will.
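(The Dennard-scaling point above can be made concrete. Under the classical rules, a linear shrink by factor k cuts capacitance and supply voltage by k while frequency rises by k, so per-device switching power ~ C*V^2*f falls as 1/k^2 and power density stays flat; once voltage stopped scaling, it no longer does. A toy calculation under those textbook assumptions — not a model of any real process:)

```python
def relative_power_density(k: float, voltage_scales: bool = True) -> float:
    """Power density after a linear shrink by factor k, relative to before.
    Per-device switching power ~ C * V^2 * f, with C ~ 1/k and f ~ k;
    V ~ 1/k under classical Dennard scaling, else held constant.
    Device density grows as k^2."""
    capacitance = 1.0 / k
    frequency = k
    voltage = 1.0 / k if voltage_scales else 1.0
    per_device_power = capacitance * voltage ** 2 * frequency
    return per_device_power * k ** 2  # k^2 more devices per unit area

# A full-node shrink (k ~ 1.4): flat power density under Dennard rules,
# nearly 2x hotter per unit area once voltage stops scaling.
print(round(relative_power_density(1.4), 3))                        # ~1.0
print(round(relative_power_density(1.4, voltage_scales=False), 3))  # ~1.96
```

The second number is the breakdown Fronzie and unlinked_dll are pointing at: density still improves, but the chip gets hotter per unit area, which is why leakage and power now cap the gains.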
| hindsightbias wrote: | POWER has never been on the latest node, and GlobalFoundries | abandoned that node over a year ago. POWER10 will by all | accounts be on a Samsung node. | ebg13 wrote: | > _Your smartphone, if it's anything like mine, would be | better if it had a longer running time between charges_ | | Little of your phone battery goes to the CPU. It's almost all | screen and radios. | magicsmoke wrote: | Probably FPGAs. They're commonly used to refine foundry | processes because of how regular their layouts are. Also, the | smaller the gates on your FPGA, the more logic blocks you can | pack on it. | kingosticks wrote: | Top-end networking equipment, e.g. routers, 5G stuff. | | Also, graphics cards, server processors, and possibly now also | automotive stuff. | georgeburdell wrote: | Intel's 10nm is laptop first. It's easier to yield smaller | chips. | robocat wrote: | "It's safe to say that not all need advanced nodes. But Apple, | HiSilicon, Intel, Samsung and Qualcomm require advanced | technologies, and for good reason." | | "The design cost for a 3nm chip is $650 million, compared to | $436.3 million for a 5nm device, and $222.3 million for 7nm, | according to IBS. These are "mainstream design costs," which | means one year after a given technology has moved into | production." | | The new nodes will only be used where the design costs can be | recovered, which is mostly consumer and server farms AFAIK. | Basically anything using the current best nodes will use the | next gen nodes. | RandomTisk wrote: | Is this just further marketing malpractice or will there actually | be 5 or 3nm features? | gameswithgo wrote: | It speaks to this in the article. Why is this low effort post | at the top? | stygiansonic wrote: | nm is the new GHz/MHz. | SethTro wrote: | > Today, the node names are little more than marketing terms. | "The node designation is becoming more misleading and | meaningless," said Samuel Wang, an analyst at Gartner.
"For | example, at 5nm or 3nm, there is no single geometry that is | actually 5nm or 3nm." | hartator wrote: | So what is 5nm or 3nm? | aarongolliver wrote: | It has about the same meaning as the 5 or 3 in "core i5" or | "core i3". It's all marketing. | wolf550e wrote: | It's still a marketing term that has nothing to do with | specific feature sizes, but they do increase density from one | node to another. | | https://en.wikichip.org/wiki/5_nm_lithography_process | dwaltrip wrote: | How are the numbers chosen if they don't correspond to any | actual measurement? | ip26 wrote: | Well, the old law was that density doubled with each node. | Indeed, you can see that e.g. from TSMC7 to TSMC5, density | roughly doubled, regardless of what happened with minimum | feature size. | wolf550e wrote: | Like the numbers in product names, only with the | understanding that smaller is better. | blp wrote: | It is an equivalent scale. Density is fin pitch x M2 pitch x | track height. You scale the node by an equivalent reduction. | | It would be better to use logic density, but it is easier | to use numbers we are used to. | bogomipz wrote: | The article states: | | >"Originally, the node name was tied to the transistor gate | length dimensions." | | then further down: | | >"CPP, a key transistor metric, measures the distance between a | source and drain contact." | | I had mistakenly thought that node designations were based on the | distance between source and drain. Could someone say why gate | length dimensions are the more significant measurement? The | distance between source and drain somehow feels more intuitive to | me. But maybe that's because it is easier to visualize? | ajross wrote: | "Gate length" referred to the width of the sandwiched | semiconductor band with different doping (i.e. the "N" part of | a PNP CMOS transistor).
Traditionally (which is to say, so long | ago that it doesn't really matter) these were the finest | features present in the mask set and were a good metric for fab | sophistication. | | That CPP metric is measuring how close together the contact | vias for the two halves of the transistor can be. It's a much | bigger number, but still probably just as good as a proxy for | transistor density. ___________________________________________________________________ (page generated 2020-01-25 23:00 UTC)