[HN Gopher] AMD's RX 7600: Small RDNA 3 Appears
       ___________________________________________________________________
        
       AMD's RX 7600: Small RDNA 3 Appears
        
       Author : picture
       Score  : 60 points
       Date   : 2023-06-04 17:55 UTC (5 hours ago)
        
 (HTM) web link (chipsandcheese.com)
 (TXT) w3m dump (chipsandcheese.com)
        
       | gyudin wrote:
        | Yet their drivers still cause kernel panics and crashes even on
        | their own demos lol
        
       | mastax wrote:
       | > It uses TSMC's 6 nm process, which won't provide the
       | performance and power savings that the cutting edge 5 nm process
       | would. In prior generations, AMD fabbed x60 and x600 series cards
       | on the same cutting edge node as their higher end counterparts,
       | and used the same architecture too. However, they clearly felt a
       | sharp need to save costs with this generation, compromising the
       | RX 7600 a bit more than would be expected for a midrange GPU.
       | 
        | It's also so they only need to design one memory controller for
        | 6nm. I believe I remember this being corroborated in an AMD
        | engineer interview around the 7900XTX launch. Memory
        | controllers aren't just logic that can be "compiled" to
        | whatever target node. They
       | have specific electrical requirements that take substantial
       | design work. For this generation AMD has a 6nm memory controller
       | that they use both in this 6nm monolithic design and in their 6nm
       | memory controller chiplets on the larger designs.
        
         | rowanG077 wrote:
          | It's analogue circuitry at the chip boundary, reaching far
          | into RF territory. It isn't like high-level digital code in
          | Verilog/VHDL, which is relatively straightforward to port.
         | 
         | The people who do that are the true gurus honestly.
        
       | speed_spread wrote:
        | The 8GB VRAM limit is the biggest downside for cheap local AI.
        | My feeling is that current LLMs start producing interesting
        | results at 12GB and above.
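         
        A rough back-of-the-envelope in Python for that feeling, assuming
        the weights dominate VRAM use and treating everything else as a
        flat margin; the model sizes, bit widths, and 1.5GB overhead
        figure are illustrative assumptions, not benchmarks:
         
            # Rough VRAM estimate for local LLM inference: weight storage
            # plus a flat overhead margin for activations/KV cache.
            # Figures are illustrative assumptions, not measurements.
            def vram_gb(params_billion: float, bits_per_weight: int,
                        overhead_gb: float = 1.5) -> float:
                weights_gb = params_billion * bits_per_weight / 8
                return weights_gb + overhead_gb
         
            for params, bits in [(7, 4), (7, 16), (13, 4), (13, 8)]:
                print(f"{params}B @ {bits}-bit ~= {vram_gb(params, bits):.1f} GB")
         
            # A 4-bit 7B model fits in 8GB, but a 13B model at 8-bit
            # already needs well over 12GB, and 16-bit weights exceed
            # 12GB even at 7B.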
        
         | kramerger wrote:
         | It's "RDNA lite" for somewhat cheaper GPUs. It is not supposed
         | to have tons of VRAM.
         | 
          | Besides, the _really_ interesting LLMs use far, far more than
          | 12GB. But that sort of thing changes from day to day here...
        
         | cypress66 wrote:
         | You're expecting too much from a cheap card. AMD offers many
         | 16GB cards, and used RTX 3090s (24GB) are very affordable and
         | are the go to for AI.
        
           | tigeroil wrote:
           | What do you consider to be affordable? In the UK used RTX
           | 3090s still go for well over 1k.
        
             | LaurensBER wrote:
              | For a company that's very affordable; for an individual
              | we're not there yet. But I've seen some impressive demos
              | running on an M1/M2 laptop, so no doubt time will bring
              | down the hardware requirements even more (and hopefully
              | the price of used 3090s as well).
        
           | epolanski wrote:
           | Cards in the same bracket offered the same amount of VRAM 8
           | years ago to be fair.
        
           | smcleod wrote:
           | Used RTX cards (of any spec) are super expensive here in
           | Australia still.
        
         | ChuckNorris89 wrote:
          | Here's where the unified memory architecture of M1 Macs,
          | gaming consoles, APUs, and other custom SoCs can shine and
          | show the inefficiencies of the traditional ageing desktop PC
          | architecture, where the graphics VRAM is completely separate
          | from the system RAM.
          | 
          | I'd like to see PCs designed more like consoles or M1 Macs,
          | with GDDR shared as unified RAM between GPU and CPU. I have a
          | laptop with a last-gen AMD APU that's no slouch, but the VRAM
          | slice of total RAM is still fixed as configured in the BIOS,
          | between 512MB and 4GB, instead of being fully unified and
          | dynamically shared by the OS, which seems highly inefficient
          | and wasteful in the modern age.
          | 
          | It's why the PS5 and Xbox with their 16GB of fully unified
          | VRAM can compete with gaming PCs which need 16GB of system
          | RAM and over 8GB of VRAM. Why have two separate memory pools,
          | where one sits empty most of the time and is too small when
          | actually needed while the other is half empty, when you can
          | unify them and make use of the whole pie as needed?
        
           | coffeebeqn wrote:
           | I can't imagine the SOCs are too far away for PCs. It does go
           | against the upgradeability philosophy but I'm not sure how
           | many users really care about that
        
             | throw9away6 wrote:
              | Who really upgrades PCs anymore? By the time you need to
              | upgrade, everything from the socket to the power standard
              | is outdated.
        
               | beebeepka wrote:
               | Certainly not the case with AMD AM4. Stats show that
               | plenty of people have upgraded their AM4 builds.
               | 
               | The Intel only crowd is such a sad bunch
        
               | heavyset_go wrote:
               | I'd disagree if AM5 follows the path of AM4.
        
               | bee_rider wrote:
               | A mid-lifetime upgrade where you pop in a new GPU, fill
               | up any unpopulated ram slots, or add in a new drive, can
               | be nice. CPUs last a really long time nowadays.
        
               | throw9away6 wrote:
                | So do GPUs though. Are you really going to pop an $800
                | GPU into $200 worth of components?
        
               | coffeebeqn wrote:
                | Whether it's worthwhile depends on how good the
                | original was. GPUs don't need that much from the rest
                | of the machine as long as it's somewhat sensible for
                | gaming. Especially at 1440p or 4K, the GPU is a huge
                | portion of the performance.
        
               | yjftsjthsd-h wrote:
                | > Are you really going to pop an $800 GPU into $200
                | worth of components?
               | 
               | Why not? For the right workloads, the GPU is the
               | important thing anyways. It's not _that_ different from
               | rotating disks through a machine.
        
             | kramerger wrote:
              | Is this different from integrated GPUs from Intel and AMD?
        
               | ChuckNorris89 wrote:
                | As explained in my post above, AMD integrated GPUs
                | don't have unified memory, at least not the 5000 series
                | APU I bought in 2022. Maybe the new 7000 series changes
                | this.
        
               | fulafel wrote:
               | They have had it for about 10 years, since Kaveri. See eg
               | https://hexus.net/tech/news/cpu/54709-amds-heterogeneous-
               | uni... or https://www.guru3d.com/articles-
               | pages/amd-a10-7800-kaveri-ap...
        
               | ChuckNorris89 wrote:
                | Not exactly. While that may have been the case way back
                | when AMD APUs were designed as a homogeneous CPU-GPU
                | unit from the start, I can tell you that's not the case
                | with the relatively modern 5000 series.
                | 
                | I need to go into the BIOS and specify explicitly how
                | much of the system RAM I want allocated exclusively to
                | the integrated GPU, and the rest stays available as
                | system RAM.
                | 
                | The reason seems to be that the Ryzen 5000 APUs were a
                | job rushed out the door: they're just a Zen 3 CPU and a
                | separate Vega GPU glued together on the same die, not a
                | homogeneous design meant to work as one unit like the
                | APU of the PS5. Memory-wise the two are unaware of each
                | other, so even though AMD calls them APUs, they're
                | really more like a separate CPU and GPU on the same
                | die.
                | 
                | I wish I had known about this limitation at the time,
                | as Intel chips with integrated graphics have unified
                | memory.
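         
                For what it's worth, on Linux the amdgpu driver exposes
                that split through sysfs, so you can see the fixed VRAM
                carve-out next to the GTT pool the GPU can borrow from
                system RAM on demand. A minimal Python sketch, assuming
                the APU shows up as card0:
         
                    # Read amdgpu's sysfs counters: the fixed VRAM
                    # carve-out vs. the GTT pool mapped from system RAM.
                    # Assumes Linux, the amdgpu driver, and card0.
                    from pathlib import Path
         
                    dev = Path("/sys/class/drm/card0/device")
         
                    def read_gib(name: str) -> float:
                        return int((dev / name).read_text()) / 2**30
         
                    for counter in ("mem_info_vram_total", "mem_info_vram_used",
                                    "mem_info_gtt_total", "mem_info_gtt_used"):
                        print(f"{counter}: {read_gib(counter):.2f} GiB")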
        
               | yjftsjthsd-h wrote:
               | I wonder if a clever enough driver+firmware could make
               | that hot-adjustable (or truly unified, rather) with
               | nothing but a software patch. It sounds like the pieces
               | are all there.
        
           | tsss wrote:
            | I imagine if that ever happens, it will only be used to
            | nickel-and-dime consumers, and you'll end up paying even
            | more through forced upselling than with the inefficient
            | VRAM we currently have.
        
           | bee_rider wrote:
           | The main argument I can think of for separating their memory
           | is that, of course, GDDR can be optimized for bandwidth and
            | regular RAM can be optimized for latency, or somewhere in
           | between.
           | 
           | But, memory latency continues to make poor progress compared
           | to CPU speeds. So, since we're already going to need a
           | complicated system of caches on the CPU side, maybe it is not
           | such a big deal if CPU memory acts more like GDDR.
        
       | rektide wrote:
        | Almost twice the bandwidth of the RX 580, from April 2017.
        | About the same MSRP.
        | 
        | It'll be interesting to see how this holds up versus Intel,
        | who've been doing pretty OK at this price point. Intel's
        | initial launch was pretty rocky but the drivers have gotten
        | much, much faster already.
        
       ___________________________________________________________________
       (page generated 2023-06-04 23:00 UTC)