[HN Gopher] What's different about next-gen transistors
       ___________________________________________________________________
        
       What's different about next-gen transistors
        
       Author : rbanffy
       Score  : 64 points
        Date   : 2022-10-20 11:25 UTC (1 day ago)
        
 (HTM) web link (semiengineering.com)
 (TXT) w3m dump (semiengineering.com)
        
       | martincmartin wrote:
        | So carbon atoms are spaced about 0.15 nm apart in diamond.
        | So a 3 nm process is about 20 carbon atoms across.
       | 
       | Impressive.
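The arithmetic above can be checked in a couple of lines, taking the diamond C-C bond length of about 0.154 nm (a back-of-envelope sketch only; "3 nm" is a node name rather than a literal measured feature size):

```python
# Rough check: carbon-carbon spacing in diamond is ~0.154 nm,
# so a 3 nm span covers roughly 19-20 atoms.
spacing_nm = 0.154
feature_nm = 3.0
atoms = feature_nm / spacing_nm
print(round(atoms, 1))  # ~19.5
```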
        
       | robotburrito wrote:
       | Chip War is a fantastic book about the history of this industry.
       | I would recommend it highly for some background information.
        
       | Aardwolf wrote:
        | Random semi-related question about chip making: AFAIK CPUs
        | have multiple layers, not just a single silicon layer of
        | transistors. Also, making transistors involves doping the
        | silicon with other elements, and lithography is about
        | shining a laser through a mask. And all this starts from an
        | already cut silicon wafer.
        | 
        | So how are the multiple layers made? If light is shined on
        | the surface, how do you reach the other layers? And how is
        | the doping done? You also need to choose which element goes
        | where, but I'd think that couldn't be done with light and a
        | mask (so even for a single layer I wonder how this works).
        | And how do you reach the other layers with those chemical
        | elements for doping?
        
         | pclmulqdq wrote:
          | Each wafer goes through semiconductor manufacturing with
          | only one layer of transistors; the many layers of wiring
          | are then built on top.
         | 
         | For each layer of metal, they go through several steps of
         | deposition of insulator, masking, etching, deposition of metal
         | (often in several passes now), grinding or etching that, and
         | then covering it all up with more insulator and grinding that
         | layer down to produce a smooth surface for the next layer.
         | 
         | Once the chips are complete, 3D stacking is a packaging
         | process. It involves grinding the backsides of the chips down
         | until they are very thin and attaching the dies together using
         | vias that run through the thin remaining silicon layer.
         | 
         | EDIT: Flash memory today has multiple stacked doped silicon
         | layers, but it is a special process that is largely unsuitable
         | for logic.
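The repeated per-metal-layer sequence described above can be sketched schematically (step names are illustrative labels for this sketch, not a real process recipe):

```python
# Schematic of the steps repeated for each metal wiring layer,
# per the description above; names are illustrative only.
LAYER_STEPS = [
    "deposit insulator",
    "mask (lithography)",
    "etch",
    "deposit metal",
    "planarize metal",
    "deposit insulator",
    "planarize for next layer",
]

def metal_stack_steps(num_layers: int) -> list[str]:
    """Return the full ordered step list for num_layers of wiring."""
    return [f"M{n}: {step}"
            for n in range(1, num_layers + 1)
            for step in LAYER_STEPS]
```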
        
         | variadix wrote:
         | It's built from the bottom up in layers (at least
         | traditionally, I'm not sure how the newer 3D structures for
         | memory are constructed). The bottom layer of silicon substrate
         | is covered in an oxide and etched selectively by photoresist
         | and masking. Further layers are connective layers of metal
         | lines insulated by oxide, with vias connecting metal layers to
         | one another. The same oxide deposition, photoresist, expose,
         | etch process is used for each layer.
        
           | mattbillenstein wrote:
           | Haven't worked in the space in more than 15 years, but as I
           | recall there were dozens of masks - maybe upwards of 50 -
           | when I was doing ASIC.
           | 
           | And I believe there were steps to flatten and I believe
           | effectively sand the surface flat after some layers.
        
           | JumpCrisscross wrote:
           | > _oxide deposition, photoresist, expose, etch process is
           | used for each layer_
           | 
           | Would note that since FinFETs, deposition occurs from the
           | side as well as top.
        
         | ThrowawayR2 wrote:
         | You can only ever reach exposed surfaces. This means a very
         | complex sequence of adding and removing layers to expose/hide
         | things so that the currently exposed surface features are what
         | you want to change in the current processing step.
         | 
         | The Wikipedia article on CMOS has a very nice illustration of
         | the basic steps of this process:
         | https://en.wikipedia.org/wiki/CMOS#/media/File:CMOS_fabricat...
         | Current processes are much more complex than even that.
        
         | Hikikomori wrote:
         | https://youtu.be/NGFhc8R_uO4
        
         | spyremeown wrote:
         | There are "wells" to tap the components into. You just add
         | successive layers in a very smart way.
        
       | hinkley wrote:
       | Are we ever going to circle back to the notion of creating
       | circuits that can hold more than 2 states? We seem to be doing
       | that for SSDs but not for logic.
       | 
       | Or has the space already been explored and there's nothing there?
        
         | [deleted]
        
         | jlokier wrote:
         | Memristor and analogue processors to accelerate ML are doing
         | this. Neural networks are ok with operations on noisy, analogue
         | states. Analogue operations such as op-amp multiplication are
         | more power hungry than individual digital gates, but some are
         | less power hungry than digital multipliers which use thousands
         | of gates.
         | 
         | For digital, noise-free logic, it's generally more power
         | efficient and physically simpler to have logic gates operate on
         | two cleanly separated states with more gates, than to have
         | bulkier, more complicated gates that do the same thing with
         | combined states. A transistor which is fully on or fully off in
         | a logic circuit uses low power either way (like a wire or a gap
         | in the circuit), but the in-between state uses more power (like
         | a resistor, it produces heat). This is one reason why power
         | consumption goes up with the amount of logic state switching
         | (the in-between resistor-like state occurs briefly during each
         | state change), and also why many-level stable logic states
         | aren't so efficient, except with complicated gates that use
         | many transistors to implement many thresholds.
         | 
         | For non-volatile storage, that doesn't apply. Information
         | density is more important, individual memory cells do not
         | change state often so switching power is less of a thing per
         | memory cell, and the logic sits at the edge of the memory
         | array, shared among many cells. The edge circuitry can afford
         | to be more complicated, to optimise the bulk of the memory
         | array.
         | 
         | In magnetic storage and communication, the signal processing to
         | encode many states in a small signal takes considerable power,
         | but the trade off is worth it.
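The switching-power point above is usually summarized by the standard CMOS dynamic-power relation, P = alpha * C * V^2 * f. A toy calculation with made-up numbers (none of them from a real chip):

```python
# Dynamic (switching) power in CMOS: P = alpha * C * V^2 * f.
# All values below are illustrative, not from any real design.
alpha = 0.1   # activity factor: fraction of capacitance switching
C = 1e-9      # total switched capacitance, farads
V = 0.8       # supply voltage, volts
f = 2e9       # clock frequency, hertz
P = alpha * C * V ** 2 * f
print(P)  # ~0.128 W
```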
        
           | 202210212010 wrote:
           | _> Memristor and analogue processors to accelerate ML are
           | doing this. Neural networks are ok with operations on noisy,
           | analogue states. Analogue operations such as op-amp
           | multiplication are more power hungry than individual digital
           | gates, but some are less power hungry than digital
           | multipliers which use thousands of gates._
           | 
           | I checked Mouser just now and memristors are not in stock:
           | https://www.mouser.com/c/?q=memristor
           | 
           | If we are talking about imaginary components, then I think
           | quantistors will change everything.
        
         | klelatti wrote:
          | The Intel 8087 and iAPX432 could store 2 bits per
          | transistor in ROM.
         | 
         | https://twitter.com/kenshirriff/status/1046436838146113536
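A four-state cell decoding into two bits, as in that ROM trick, can be sketched as follows (a toy model, not the actual 8087 sense circuitry):

```python
# Toy model of a multi-level ROM cell: one cell stores one of four
# states, which sense circuitry decodes into two bits.
def decode_cell(level: int) -> tuple[int, int]:
    if not 0 <= level <= 3:
        raise ValueError("a 4-level cell stores values 0-3")
    return (level >> 1) & 1, level & 1

print([decode_cell(l) for l in range(4)])
# [(0, 0), (0, 1), (1, 0), (1, 1)]
```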
        
         | thunderbird120 wrote:
         | Analog circuits exist and are widely used in some applications
         | including for stuff like inference in AI models. The issue is
         | that they are somewhat inexact which is a huge problem for most
         | conventional code. As for circuits which work in a discrete
         | rather than continuous space, it's basically always better to
         | just use binary to represent information because 2 states are
         | the most easily separable. Once you have to start separating
         | out more states things get more difficult and it's almost never
         | worth the effort unless you're trying to do stuff like storage
         | in SSDs.
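The separability point can be made concrete: with a fixed signal swing, the spacing between adjacent levels (and hence the noise margin) shrinks roughly as 1/(levels - 1). A sketch with illustrative numbers:

```python
# With a fixed voltage swing, the gap between adjacent logic
# levels shrinks as the number of levels grows.
SWING = 1.0  # volts, illustrative

def level_spacing(levels: int) -> float:
    return SWING / (levels - 1)

for n in (2, 4, 8, 16):
    print(n, round(level_spacing(n), 3))
# 2 levels: 1.0 V; 4: ~0.333 V; 8: ~0.143 V; 16: ~0.067 V
```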
        
         | bigmattystyles wrote:
         | https://en.wikipedia.org/wiki/Ternary_computer#History
        
           | hinkley wrote:
           | I'm aware of ternary computers.
           | 
           | What I was trying to say is that there are things we don't do
           | because they're hard or we don't know how, and there are
           | things that we don't do because we've proven they don't work.
           | 
           | Ternary does have some cool features though.
        
         | b3orn wrote:
          | Nothing I can link you to, but I've read about some
          | companies trying to bring back analog computers for
          | machine learning purposes.
        
           | drexlspivey wrote:
           | Relevant Veritasium: https://youtu.be/GVsUOuSjvcg and
           | https://youtu.be/IgF3OX8nT0w
        
         | mjgerm wrote:
         | Roughly speaking, an N-level digital logic system requires O(N)
         | transistors in order to buffer/force a signal into one of N
         | states, but only performs O(log(N)) more work with them
         | relative to binary.
         | 
          | Without the buffering step, you'll eventually get the
          | middle logic levels drifting (e.g. your "1"s become "0"s
          | or "2"s). Binary gets this for "free" because there are no
          | middle states. This doesn't apply just to a simple buffer;
          | similar details apply to the implementation of all other
          | gates (many of which are rather awkward to implement).
         | 
         | Analog works out for rough calculations because you can skip
         | the buffering process, at the expense of having your
         | calculation's precision limited by the linearity of your
         | circuit.
         | 
         | SSDs are more of a special case, because to my knowledge
         | they're not really doing work on multi-level logic outside of
         | the storage cells. They pump current in on one axis of a
         | matrix, read it out on the other, and then ADC it back to
         | binary as fast as possible before doing any other logic.
         | 
         | Random sidebar: I don't see any constraint like this for
         | mechanical computers, so a base-10 mechanical computer doesn't
         | strike me as any more unreasonable than a base-2 mechanical
          | computer (i.e. slop and tolerance are independent of gear size).
         | In fact, it might be reasonable to say you should use the
         | largest gears that the technology of your time can support
         | (sorry Babbage).
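The O(N) hardware versus O(log N) information tradeoff above is the classic radix-economy argument: hardware cost per bit goes roughly as base/log2(base), which among integer bases is minimized at 3 - one reason ternary keeps coming up. A quick check:

```python
import math

# Radix economy: ~`base` circuit states per digit, each digit
# carrying log2(base) bits of information.
def cost_per_bit(base: int) -> float:
    return base / math.log2(base)

costs = {b: cost_per_bit(b) for b in range(2, 11)}
best = min(costs, key=costs.get)
print(best)  # 3 (bases 2 and 4 tie for second place at 2.0)
```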
        
       | justinlloyd wrote:
        | There's a lot coming down the pipe in terms of next-gen
        | components in the SOI and 3D subthreshold world: ULP (ultra-
        | low power), MEMS (pretty much everything you find in your
        | phone these days, outside of the CPU itself), 3D
        | integration. The problem is summarizing it in a neat little
        | comment. Memristors are huge, and are going to be huge-er
        | still. World-changing huge. Which ties in to new analogue
        | design techniques. 3D integration, which we have had for a
        | decade, is at the tipping point as we move to new nodes and
        | start making monolithic integration a thing. Ultra-low-
        | power, energy-efficient compute at the edge: we're talking
        | consuming less power in a year than it takes me to think
        | about what I want to say while writing this comment. On-die
        | microfluidic cooling for 3D dies is starting to appear. On-
        | die sensors for new camera tech. We're edging up to a point
        | where multiple sensors and/or a camera, compute, model
        | inference, and mesh network connectivity all exist on a
        | single die.
        
         | modeless wrote:
         | Memristors aren't huge yet, are they? Don't they have wear
         | issues? Who is working on them?
        
           | thunderbird120 wrote:
           | They're definitely not huge right now and I haven't seen any
           | viable paths toward them becoming a big thing any time in the
           | near future. There's barely any work being put into them
            | relative to other stuff in the field. HP made some big
            | claims years back but has totally failed to deliver, and
            | I haven't heard anything about the technology in years.
        
         | ABeeSea wrote:
          | I feel like memristors have been hyped since HP tried to
          | use them to save its business from 2007-2015. Nothing ever
          | came of it, from HP at least. Have there been new
          | developments that make them more likely to be viable?
        
           | justinlloyd wrote:
           | A lot of technology that's out at the fringes will often be
           | hyped up by a marketing department. It is the culinary
           | equivalent of telling everyone how awesome the soup is going
           | to taste just as the cook is returning from the grocery store
            | with the unprepared, raw ingredients. We're edging
            | closer to the "needs seasoning" stage. There's always
            | some company trying to
           | sell something that doesn't really exist in a production
           | form, and then 10 years or 20 years later, it is so in the
           | background we don't even think of it anymore. You know how
           | pricing works? Multiply by ten for every step from cost of
           | raw materials to a commercial product in your hand? 10x to
            | turn the raw materials into components. 10x to turn
            | components into a product. 10x to move the product to
            | the store. 10x to sell it to you as an item you can use.
            | Same kind of idea when
           | it comes to technology research, but in terms of time. Six
           | months of research becomes five years of development becomes
           | ten years of commercialization becomes twenty years of "it's
           | everywhere."
        
         | inasio wrote:
         | Together with memristors, in-memory compute is likely going to
         | be a big deal. It's already here in some forms, see e.g.
         | Content Addressable Memory in high end networking gear.
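A CAM is the inverse of a RAM lookup: you present the data and get back the location, in a single search that hardware performs across all entries in parallel. A toy software model (a real TCAM also supports wildcard bits, not shown here; the entries are illustrative):

```python
# RAM maps address -> data; a CAM answers the inverse query
# (data -> address) in one lookup. Hardware compares every entry
# in parallel; a dict inversion models the interface only.
ram = {0: "10.0.0.1", 1: "10.0.0.2", 2: "10.0.0.3"}
cam = {data: addr for addr, data in ram.items()}
print(cam["10.0.0.2"])  # 1
```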
        
       | B1FF_PSUVM wrote:
       | Rabbit hole diving on the makers of the machines that make chips:
       | 
       | https://semiengineering.com/entities/asml/
       | 
       | https://en.wikipedia.org/wiki/ASM_International
       | 
       | https://www.asml.com/en/company/about-asml/history
       | 
       | From the last one, ca. 1988 ASML was failing badly:
       | 
       |  _" But in a market of fierce competition and many suppliers, the
       | small unknown company from the Netherlands couldn't catch a
       | break. ASML had few customers and was unable to stand on its own
       | two feet. Making matters worse, shareholder ASMI was unable to
       | maintain the high levels of investment with little return and
       | decided to withdraw, while the global electronics industry took a
       | turn for the worse, and Philips announced a vast cost-cutting
       | program. The life of our young cash-devouring lithography company
       | hung in the balance. Guided by a strong belief in the ongoing R&D
       | and in desperate need of funds, ASML executives reached out to
       | Philips board member Henk Bodt, who persuaded his colleagues to
       | lend a final helping hand."_
       | 
       | (I understand that nowadays you need their equipment for new
       | fabs.)
        
         | agumonkey wrote:
          | Interesting how the present came from a very fragile past
          | that almost didn't happen.
          | 
          | A lot less critically, I've read that Olivetti vanished
          | from the industry due to random finance wars between
          | France and Belgium, with some bank (which had bought the
          | company not long before) crashing and thus cutting off
          | funding right at the moment when PCs were taking off.
        
       ___________________________________________________________________
       (page generated 2022-10-21 23:00 UTC)