[HN Gopher] AMD Zen 3/Ryzen 5000 announcement [video]
       ___________________________________________________________________
        
       AMD Zen 3/Ryzen 5000 announcement [video]
        
       Author : thg
       Score  : 487 points
       Date   : 2020-10-08 15:57 UTC (7 hours ago)
        
 (HTM) web link (www.youtube.com)
 (TXT) w3m dump (www.youtube.com)
        
       | ebbflowgo wrote:
       | Add to calendar / reminders
       | 
       | Radeon 6000 BIG NAVI - Oct 28
       | https://www.usehappen.com/events/89753512
       | 
       | Ryzen 5000 on shelf - Nov 5
       | https://www.usehappen.com/events/20853465
        
         | whatch wrote:
         | Thanks for sharing both the website and specific events!
        
       | formerly_proven wrote:
       | 5900X beats a 10900K by 100 points in Cinebench R20 Single-
       | Thread. 100. Points. That's 20 % higher per-thread throughput.
       | 
       | 20 % IPC increase, 20 % higher perf/W on the same process,
       | effectively double the cache size, and reduced memory latency.
       | Absolute insanity from AMD.
       | 
       | I'm almost a little disappointed they didn't introduce a 5960X --
       | they could probably claim 2x-3x performance over the Intel part.
        
         | hnracer wrote:
         | Intel is toast
        
           | dang wrote:
           | Maybe so, but could you please stop posting unsubstantive
           | comments to Hacker News?
        
             | hnracer wrote:
             | You are right, I was not in the best state of mind.
        
               | dang wrote:
               | It happens to all of us.
        
         | baybal2 wrote:
         | > 20 % IPC increase, 20 % higher perf/W on the same process,
         | effectively double the cache size, and reduced memory latency.
         | Absolute insanity from AMD.
         | 
          | An increase like this wasn't unexpected from a generational
          | update back in the nineties.
        
           | dragontamer wrote:
           | But this is an architectural tweak on the same node. Zen2 was
           | 7nm TSMC, while Zen3 is 7+nm TSMC.
           | 
           | TSMC must have optimized the heck out of their transistors to
           | deliver such a large benefit. Or the AMD Zen 3 architecture
           | really found some low-hanging fruit or something to grossly
           | improve performance.
        
             | bch wrote:
             | > low-hanging fruit or something to grossly improve
             | performance.
             | 
             | If there's such a thing anymore I'd be fascinated to hear
             | about it.
        
               | pedrocr wrote:
               | Anandtech will have one of their detailed
               | microarchitecture breakdowns at product availability day
               | (November 5th).
        
             | pedrocr wrote:
             | The IPC increase seems to be coming from the
             | microarchitecture, not the process. The Anandtech article
             | includes a breakdown:
             | 
              | https://images.anandtech.com/doci/16148/1%20Slide%206_575px
              | ....
              | 
              |   +2.7% Cache Prefetching
              |   +3.3% Execution Engine
              |   +1.3% Branch Predictor
              |   +2.7% Micro-op Cache
              |   +4.6% Front End
              |   +4.6% Load/Store
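Taking the slide at face value, the per-unit contributions are additive percentage points; a quick sanity check (a back-of-envelope sketch, not AMD's actual methodology):

```python
# Sum AMD's per-unit IPC contributions from the slide (percentage points).
contribs = {
    "Cache Prefetching": 2.7,
    "Execution Engine": 3.3,
    "Branch Predictor": 1.3,
    "Micro-op Cache": 2.7,
    "Front End": 4.6,
    "Load/Store": 4.6,
}
total = sum(contribs.values())
print(f"Total IPC uplift: +{total:.1f}%")  # +19.2%, matching AMD's ~19% claim
```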
        
               | phkahler wrote:
               | No silver bullet there, just small gains in a bunch of
               | areas. Some of those may have been hard wins too.
        
             | moonbas3 wrote:
             | In the slides it was still marked as 7nm without the +.
        
           | brundolf wrote:
           | It's much harder to do now
        
         | Teknoman117 wrote:
          | I think it's been neat from an architecture perspective for
          | AMD. TSMC (and AMD) have had trouble matching the frequencies
          | that Intel has been attaining on its 14 nm node, so the only
         | way they were going to beat Intel's single threaded performance
         | was to increase IPC enough over Intel that the frequency gap
         | didn't matter. Seems like they finally got there (the point
         | where their IPC is high enough that a 4 point something GHz Zen
         | part can exceed the single threaded performance of a 5 point
         | something GHz Intel part).
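The tradeoff this comment describes can be sketched with a first-order model where single-thread performance ≈ IPC × clock (a simplification that ignores memory behavior and boost dynamics; the IPC ratio below is an assumed figure, not a measurement):

```python
def single_thread_perf(relative_ipc, clock_ghz):
    """First-order model: single-thread throughput scales with IPC x clock."""
    return relative_ipc * clock_ghz

# Assumed numbers: a Zen part with ~20% higher IPC at 4.9 GHz vs. an
# Intel part at baseline IPC running 5.3 GHz.
zen3 = single_thread_perf(1.20, 4.9)   # 5.88
intel = single_thread_perf(1.00, 5.3)  # 5.30
print(zen3 > intel)  # True: the IPC lead outweighs the frequency gap
```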
        
         | greggyb wrote:
         | 5960X would be a Threadripper part (at least based on the
         | releases of the 3000-series Ryzen desktop processors).
         | Threadripper has typically lagged the Ryzen announcement. I
         | think we can expect some insane TR parts soon.
        
           | Teknoman117 wrote:
            | The fact that they made a point of supporting Threadripper
            | in their announcement gives me a certain level of confidence
            | there will be Zen 3 Threadrippers. I mainly say this because
           | people were curious if the 3950X would mean the end of the
           | threadripper line, which was of course later proven wrong
           | when the 24, 32, and 64 core TR parts came out.
        
         | adrian_b wrote:
         | Even more significant is that it beats the newest Intel Tiger
         | Lake by 6% at equal clock frequencies (at 4.8 GHz).
         | 
         | That means that AMD will enjoy a better IPC than anything Intel
         | for at least 1 year, until the end of 2021, when Intel will
         | launch Alder Lake.
         | 
          | Intel Rocket Lake, expected in March 2021, cannot have an IPC
          | better than Tiger Lake's, so its IPC will also be at least 6%
          | lower than Zen 3's.
         | 
          | Because Zen 3 tops out at 4.9 GHz, Rocket Lake will have to
          | reach at least 5.3 ... 5.4 GHz to match or maybe exceed Zen 3.
        
           | Tepix wrote:
           | Is Rocket Lake expected to scale up to 16 cores?
        
             | adrian_b wrote:
             | No, the top model of Rocket Lake will have only 8 cores &
             | 16 threads.
             | 
             | Therefore, for multi-threaded tasks it will be much slower
             | than Zen 3.
             | 
             | However, it is expected to have an IPC comparable with Ice
              | Lake and Tiger Lake, so if it reaches 5.4 GHz, it might
             | be a little faster than Zen 3 for single-threaded tasks and
             | gaming.
        
               | charwalker wrote:
               | I'm after an 8/16 chip myself so that would work for me
               | but Intel chips are DOA at this point.
        
               | Jap2-0 wrote:
               | Although everyone is saying that Intel is dead in single
               | thread (which is definitely far more true than
               | previously), I don't see 5.4GHz as infeasible for them -
               | tenth gen goes up to 5.3GHz at the very high end and I
               | would be surprised if they didn't reach at least 5.4GHz.
               | 
               | Looking historically, the 6700k reached 4.2GHz, the 7700k
               | 4.5GHz, the 8700k 4.7GHz, the 9900k 5.0GHz, and the
               | 10900k 5.3GHz - which would imply 5.5 or 5.6GHz for what
               | will (presumably) be the 11900k.
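The commenter's extrapolation can be checked with a simple least-squares fit over those generations (purely illustrative; it assumes the historical trend continues):

```python
# Boost clocks (GHz) for 6700K, 7700K, 8700K, 9900K, 10900K.
gens = [0, 1, 2, 3, 4]
clocks = [4.2, 4.5, 4.7, 5.0, 5.3]

# Ordinary least-squares line fit, done by hand.
n = len(gens)
mean_x = sum(gens) / n
mean_y = sum(clocks) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(gens, clocks)) \
        / sum((x - mean_x) ** 2 for x in gens)

# Extrapolate one generation out (the presumed 11900K).
predicted = mean_y + slope * (5 - mean_x)
print(f"{predicted:.2f} GHz")  # 5.55 GHz, in line with the 5.5-5.6 guess
```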
        
         | MrBuddyCasino wrote:
         | Plus 15% IPC and 10% more max boost, this checks out. I'd
         | expect the 8-core part to have a higher multi-threading
         | performance increase than the 12-core part, because it has just
         | one CCX.
        
       | wiremine wrote:
       | I don't follow the chip world closely... I see a number of
       | comments about how this compares to intel, but is there a good
       | unbiased summary that compares the current roadmaps for Intel and
       | AMD?
        
         | SteveNuts wrote:
         | Intel's Roadmap: \
         | 
         | AMD's Roadmap: /
        
       | teruakohatu wrote:
       | Pricing is just a little disappointing. The 3800X launched at
       | $399. The 5800X will be $449. Still good value though.
        
         | jagger27 wrote:
          | They're delivering smack-down, best-in-class performance and
          | efficiency; it's worth the price.
        
         | whynotminot wrote:
         | They frankly can afford to charge what they're worth now. No
         | longer any caveats when comparing with Intel.
         | 
         | Effectively, they charge what they want to now. They are the
         | market leaders.
        
         | ece wrote:
         | The 1800X was $500 (top of the line 3 years ago), though the
         | 1700 was $330, and the street prices were often lower. Raising
         | prices to compensate for more demand seems like a rational
         | thing to do, the 3000-series did have stock problems.
         | 
          | I'm not sure $450 for the only 8-core is good value though;
          | hopefully there is a 5700 65W SKU coming.
         | 
          | The Big Navi number for Borderlands 3 at 4K seems in line
          | with a 3080, so if priced right, AMD's going to run the table.
        
         | bobcostas55 wrote:
         | Yeah, I was thinking of upgrading from my 3600 but the prices
         | are pretty crazy. $180 for the 3600 vs $300 for the 5600X, for
         | the same core count and a slight IPC improvement? You can get a
         | 3700X with 2 more cores for that money!
         | 
         | New generations of hardware are supposed to improve the
         | price/perf ratio!
        
         | twblalock wrote:
         | It's not that good of a value. The 3800x sells for between $320
         | and $350 right now. The new chip costs $100 more. That is at
         | least a 30% price increase and the new chip is not 30% faster
         | than the old one.
        
         | warrenm wrote:
         | If $50 on a top-end CPU is your breaking point ... you're not
         | the customer they want
        
         | gtm1260 wrote:
         | I almost wonder if it's meant to keep demand a bit lower since
         | the 3800x was almost impossible to get at launch. Given how the
         | most recent set of more desirable launches went, I wouldn't be
         | surprised to see this price come down in the future.
        
       | JS62 wrote:
       | Time to say 'AMD Inside'
        
       | Causality1 wrote:
       | I'm very excited for this. The only applications I run where I
       | really hurt for extra CPU performance are game emulators, and
       | most of those are heavily constrained by single-core performance.
        
       | tgb wrote:
       | They make this sound like a large change to the fundamental
       | design of these chips. That gets me wondering: how do they test
       | redesigns during development? I assume it's very hard to predict
       | the performance impact of many changes. How often are they
       | manufacturing test chips to measure performance? How much does
       | that cost to do? Can you realistically simulate performance
       | characteristics?
        
         | wmf wrote:
         | It's too expensive to fab test chips so it's all done using
         | cycle-accurate simulation software like SimpleScalar or gem5
         | and later using FPGA-based emulation like Palladium/ZeBu.
        
       | dang wrote:
       | The anandtech article is here:
       | https://www.anandtech.com/show/16148/amd-ryzen-5000-and-zen-...
       | 
       | (via https://news.ycombinator.com/item?id=24720711, which we've
       | merged hither)
        
         | jonplackett wrote:
          | And Intel's share price is.... still totally fine. Wait, WTF?
        
           | yakz wrote:
            | AMD has produced a "better" CPU before; it didn't result in
            | them taking over the CPU market.
        
             | kadoban wrote:
             | Their market share is already rising. They're not going to
             | "take over", there's too much inertia everywhere for that
             | (and there's business and tech considerations beyond raw
             | performance), but the main reason Intel's stock didn't take
             | a hit is that this was expected.
        
           | dfdz wrote:
            | Intel: revenue US$71.9B (2019), market cap US$226.98B
            | 
            | AMD: revenue US$6.48B (2019), market cap US$101.57B
           | 
           | (AMD stock is too high imo)
           | 
           | Source: wikipedia and google search
        
             | xur17 wrote:
             | How does revenue growth compare though?
        
           | owenmarshall wrote:
           | The downside is priced in. Stock changes now are about
           | expectations for future releases. The market has bet that Zen
           | 3 will be good (short term) and Intel will regain ground over
           | the long term.
        
           | gameswithgo wrote:
           | market already knew this was coming
        
       | jules wrote:
        | How is the TDP number determined? How can the 8-core CPU be
        | 105W and the 16-core CPU be 105W too? Why isn't the maximum
        | power draw of 16 cores roughly double the power draw of 8
        | cores?
        
         | DenseComet wrote:
         | GamersNexus has a pretty good article on TDP [1]. It can be
         | summarized as being a mostly marketing number that is useful in
         | a few specific ways. This is applicable to both AMD and Intel.
         | 
         | [1] https://www.gamersnexus.net/guides/3525-amd-ryzen-tdp-
         | explai...
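For reference, the formula GamersNexus reports AMD using is thermal rather than electrical: TDP (W) = (T_case_max − T_ambient) / θca, where θca is the required thermal resistance of the cooler. That is why two parts with very different core counts can share a rating. A sketch with illustrative inputs (treat the specific values as assumptions):

```python
def amd_tdp_watts(t_case_max_c, t_ambient_c, hsf_theta_ca):
    """AMD's TDP per GamersNexus: a cooler-spec number, not max power draw.

    hsf_theta_ca is the assumed cooler thermal resistance in degC per watt.
    """
    return (t_case_max_c - t_ambient_c) / hsf_theta_ca

# Illustrative inputs: 61.8 C max case temp, 42 C ambient, 0.189 C/W cooler.
print(round(amd_tdp_watts(61.8, 42.0, 0.189), 1))  # ~104.8 -> the "105 W" rating
```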
        
       | ksec wrote:
        | This is one of those "shut up and take my money" product
        | announcements.
       | 
        | ~20% increase in IPC, ~10% increase in boost clock speed. It
        | doesn't matter how Intel spins it; single-thread performance
        | will no longer be an Intel-only selling point. _32MB_ L3 cache
        | is going to be very useful for certain types of application.
       | 
       | Some of these were rumoured for quite some time. But having it
       | confirmed is a completely different matter. And the pricing is
       | still very good compared to what we used to get from Intel.
       | 
        | My only concern is stock availability, and not just at launch
        | but over its lifetime. AMD has been very conservative with
        | their production estimates (again, it is not TSMC's fault). It
        | wasn't long ago they were on the verge of bankruptcy, so it is
        | understandable, but at the same time I wish they took a little
        | more risk.
       | 
        | It is also interesting that there is no mention of EPYC 3. I
        | am quite concerned about their lack of progress in the
        | Enterprise / Server segment.
        
         | old-gregg wrote:
         | Isn't this how it's always been? New tech shows up in
         | enthusiast/consumer segment first, bugs are ironed out,
         | manufacturing ramps up & yields go up, then new server parts
         | are announced?
         | 
         | Besides, isn't EPYC2 the best server CPU already? There's no
         | time pressure on AMD, they're comfortably in the lead.
        
           | tyldum wrote:
            | AMD lacks the AVX512 instruction set, which is a show
            | stopper for many applications.
        
             | nate_meurer wrote:
             | AVX512 is garbage. It incurs a massive performance hit from
             | both frequency and mode-switching penalties. Apart from a
             | few niche HPC and ML applications which you'll never
             | encounter, AVX512's most compelling use cases are to drive
             | Intel's shady market fragmentation, and to create more
             | bullshit FP benchmarks that Intel can claim to win.
             | 
             | https://www.extremetech.com/computing/312673-linus-
             | torvalds-...
        
             | gameswithgo wrote:
             | 1. not many 2. avx2 with twice the cores is about as good
             | as avx512 a lot of the time
        
             | smarx007 wrote:
              | Automatic vectorization for AVX512 only works in the
              | simplest of cases, and using intrinsics or writing inline
              | assembly is beyond the scope of 99% of software projects.
        
             | gnufx wrote:
             | Which applications, and why? Some computational ones will
             | go significantly slower on the same number of cores if they
             | could have kept avx512 fed (perhaps small data in cache),
             | but most don't spend all their time in something like GEMM.
             | The new UK "tier 1" HPC system is all EPYC.
        
           | smolder wrote:
           | It's not a given at all, even if that's common. As a fresh
           | counter-example, Nvidia released Ampere on TSMC 7nm for the
           | enterprise many months before they released chips of the same
           | architecture for consumer devices on Samsung's cheaper and
           | significantly less dense 8nm node.
        
           | wahern wrote:
           | Regular EPYC CPUs have too high a TDP for smaller server
           | installations, such as so-called edge servers. EPYC Embedded
           | is still stuck on Zen 1. Intel hasn't upgraded their
           | comparable line of mid-range server CPUs (e.g. Xeon D line),
            | either, but AMD won't win on performance alone. Intel has a
            | _huge_ SKU lineup, _much_ _more_ volume, and a _much_
            | _richer_ vendor and platform ecosystem. If the vendor demand
            | isn't there to push AMD on EPYC Embedded, Ryzen Embedded,
            | and other market segments, AMD should _build_ demand,
            | otherwise they'll just recapitulate their rise & fall
            | during the 2000s.
        
             | xigency wrote:
             | Cloudflare recently announced that they would be building
             | out some data centers with EPYC CPUs. Do you believe that
             | the situation has changed since February? They did a pretty
             | exhaustive analysis [0] of price, performance, and power
             | where they saw an advantage in switching away from Intel
             | for that generation at least.
             | 
             | If the existing server hardware is already in a decent
             | spot, then maybe they need to spend more resources on sales
             | rather than making changes to keep it cutting edge.
             | 
             | [0] https://blog.cloudflare.com/an-epyc-trip-to-rome-amd-
             | is-clou...
        
               | MikusR wrote:
                | And before that Cloudflare announced that they were
                | migrating to Arm servers.
        
               | wahern wrote:
               | I think Cloudflare is a different kind of "edge" service
               | than what EPYC Embedded targets, and I would think
               | Cloudflare uses regular EPYC chips. The "edge" that EPYC
               | Embedded, Xeon D, etc target is, I think, more about the
               | hardware configurations (smaller enclosures, minimal
               | number of drives and other devices) and the type of
               | facility (usually not a colocation facility, so power and
               | heat are of significantly more concern). But the
               | workloads are still very much server-class.
               | 
               | EPYC Embedded chips are competitive with Intel's
               | offerings (Xeon D, etc), but as I said Intel's ecosystem
               | is richer--for example, more, better motherboards. It's
               | not enough that AMD's chips are competitive with Intel's.
               | AMD has a huge ecosystem handicap, and so either they
               | need to improve the ecosystem or sell chips that are
                | _dramatically_ better than Intel's, and for EPYC
               | Embedded neither is the case. Long term a better
               | ecosystem is necessary for AMD to survive because a
               | broader product and customer base brings consistent
               | income and mindshare--staying power. I would hope and
               | assume AMD is working closely with cloud vendors on their
               | proprietary hardware, but the results are opaque; judging
               | by traditional channels (Supermicro, etc), AMD hasn't
               | even begun to close the gap.
        
               | pdpi wrote:
               | I don't think survival is at stake. AMD powers both XBox
               | and PlayStation for the second generation in a row and
                | that ought to help keep them alive. It's more a matter
               | of whether they can capture enough of the market so
               | they're not merely "surviving".
        
               | HeadsUpHigh wrote:
                | With the kind of market share Ryzen has gotten in
                | custom builds, plus a several-percent increase in
                | server (which has large margins), I don't think AMD has
                | financial issues right now. They'll probably target
                | GPUs next, go for the low-hanging fruit, and then
                | slowly iterate on the rest of their products.
        
               | wahern wrote:
               | Fair enough. Survive was a poor choice. What I had in
               | mind was surviving as a contender at the high-end, so we
               | can continue to benefit from competition for server-class
               | chips. Alpha, SPARC, and POWER all lost the high-end
               | market (their only market) to Intel at a time when Intel
               | was inferior. AMD previously failed because Intel
               | surpassed them, but that's because AMD couldn't leverage
               | their initial advantage to secure their markets and thus
               | their ability to keep investing. Without volume and
               | mindshare failure is inevitable. Providing the best high-
               | end chips is insufficient to remain competitive long-
               | term. The reasons for previous failures were complex
               | (ISA, operating system, sales channels, etc), but
               | ultimately it comes down to something like diversity--
               | customer and product diversity provide buffers in terms
               | of sales as well as changes in market direction. AMD's
               | chips are indisputably better than Intel's right now, but
                | even with Intel's mindbogglingly _massive_ fumbles
                | they're barely sweating in terms of current and
                | prospective revenue.
        
           | [deleted]
        
         | kar1181 wrote:
          | My last AMD processor was the Athlon 64 X2, circa 2005; it's
          | going to be nice going back.
        
           | dghughes wrote:
           | My Athlon is in a case next to my left knee. I was almost
            | scared off by the Intel hype but not too sure of AMD spin.
           | We're all casualties of marketing departments. In the end I
           | bought AMD because it was cheaper than Intel.
        
           | incompatible wrote:
            | I'm using an AMD Athlon II X2 260, released in 2010.
        
           | rhizome wrote:
           | I'm still running an Ath 64 x2 6000+ for a web server, I
            | think it's from about 2006. That with an old hard drive is
            | running at 120W, which would be nice to hack down.
           | 
           | I don't remember if I jumped straight to this from my Pentium
           | Pro 200, but the role of this box started with that one.
        
           | phkahler wrote:
           | I ran one of those until a couple years ago when I got the
           | 2400G APU. What a nice upgrade. Even better parts available
           | now.
        
             | [deleted]
        
         | genpfault wrote:
         | > 32MB L3 Cache is going to be very useful for certain types of
         | Application.
         | 
         | Are there applications where more cache is actively
         | detrimental?
        
           | phire wrote:
           | General rule of thumb is that bigger caches are slower to
           | access.
           | 
           | If AMD have traded a bit of access latency for their larger
           | cache, then theoretically there will be a memory heavy
           | application with a working set that fits in 16MB that will
            | see a performance hit.
           | 
            | Though, we don't know whether Zen 2 was running into area-
            | based speed limitations for its L3 cache. It's entirely
            | possible Zen 3's cache runs at the same speed.
        
           | rnvannatta wrote:
           | Having more cache can potentially lower the speed of the
           | cache, as the access time is limited by the time the longest
           | path takes, the propagation delay.
           | 
           | So there's a tradeoff between cache size and cache speed,
           | which is why there are separate L1, L2, and L3 caches of
           | various sizes. So potentially the L3 cache in this
           | architecture could be slower than the L3 cache in the 3000
           | series. It could also be the same speed if the size was
           | limited for other reasons, such as yield.
        
             | formerly_proven wrote:
             | AMD claims a significant improvement in memory latency
             | though, which is concordant with their large gains in
             | gaming workloads (a 20 % general-purpose-throughput-
             | oriented IPC increase alone would never give you a 20 % FPS
             | increase in games).
        
               | Sohcahtoa82 wrote:
               | > a 20 % general-purpose-throughput-oriented IPC increase
               | alone would never give you a 20 % FPS increase in games
               | 
               | Is this true even for games that are CPU-bound? When I
               | play MS Flight Simulator, enable the Dev toolbar, and
               | look at the framerate monitor, it tells me that it's
               | spending 20 ms of CPU time per frame, which causes my
               | framerate to cap at 50 fps. A 20% increase in IPC would
               | theoretically bring the frame time to 16.67 ms, giving me
               | a cap of 60 fps.
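The arithmetic in this comment checks out for a fully CPU-bound frame loop, assuming frame time scales inversely with IPC (an idealization):

```python
def fps(frame_time_ms):
    """Frames per second from a per-frame CPU time in milliseconds."""
    return 1000.0 / frame_time_ms

base_ms = 20.0              # measured CPU time per frame
faster_ms = base_ms / 1.20  # idealized 20% IPC uplift -> ~16.67 ms

print(f"{fps(base_ms):.0f} -> {fps(faster_ms):.0f} fps")  # 50 -> 60 fps
```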
        
               | nitrogen wrote:
               | There was a now-deleted comment about the CPU busy-
               | waiting for the GPU, to which I had this reply:
               | 
               | Reviews/benchmarks of Flight Simulator by e.g. Gamers
               | Nexus show that Flight Simulator is heavily CPU limited,
               | running on a single CPU thread.
        
               | [deleted]
        
               | rnvannatta wrote:
               | A larger cache size would improve memory latency assuming
               | the working set can utilize the full 36mb, which I'm sure
               | the 2 games that had a 20% uplift can.
               | 
               | It's purely speculation but I suspect the cache size was
               | limited by yield concerns rather than timing constraints.
               | It looks like the 5600X has 1mb less cache so they
               | probably engineered a way to disable faulty sections of
               | the cache on a 1mb granularity.
               | 
               | Edit: My speculation's wrong. The cache difference
               | between the 5700X and the 5600X is due to core count
               | differences. It's the sum of the various cache sizes, and
               | I misread the slide.
        
             | alfalfasprout wrote:
              | While this is true, in practice for the vast majority of
              | applications this is a good tradeoff, since the relative
              | slowdown of the L3 cache is tiny compared to the huge
              | reduction in cache misses.
             | 
             | I'd expect the workloads that could suffer (all else equal)
             | would be something like SIMD optimized matrix multiply
             | where you're always able to prefetch the elements needed
             | into cache effectively and memory access tends to be
             | sequential. But those slight losses would likely be dwarfed
             | by the improved core clocks, etc.
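That tradeoff is usually reasoned about via average memory access time (AMAT): a slightly slower but larger L3 wins whenever the DRAM trips it avoids outweigh the added hit latency. All numbers below are hypothetical, for illustration only:

```python
def amat_cycles(l3_hit_rate, l3_latency, dram_latency):
    """Average memory access time (in cycles) for accesses that reach L3."""
    return l3_hit_rate * l3_latency + (1.0 - l3_hit_rate) * dram_latency

small_fast = amat_cycles(0.80, 40, 300)  # hypothetical 16MB L3: 40 cycles, 80% hits
big_slower = amat_cycles(0.90, 46, 300)  # hypothetical 32MB L3: 46 cycles, 90% hits

print(f"{small_fast:.1f} vs {big_slower:.1f} cycles")  # the bigger cache wins
```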
        
             | flavius29663 wrote:
             | isn't the difference between L1, L2 and L3 also because of
             | the functionality, not just the size+speed? L1 is data and
             | code. L2 is data only, per core. L3 is data, synchronized
             | between cores.
        
               | rnvannatta wrote:
               | Yeah, technically there are 2 L1 caches; x86 is a
               | 'Modified Harvard' architecture. The instruction cache
               | also typically has to deal with caching micro-ops. I
               | believe L2 and beyond store both instructions and data.
                | There's also cache associativity, where the same
                | location in memory can be stored in one of N locations,
               | which can differ per level. I think L1 caches are
               | typically more associative because that takes extra
               | silicon per byte. It looks like Zen 2 at least has an 8
               | way associative L1 cache.
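To make the associativity numbers concrete, here is the geometry of a Zen 2-style 32 KiB, 8-way L1 data cache with 64-byte lines (a standard textbook calculation, not taken from AMD documentation):

```python
CACHE_BYTES = 32 * 1024  # 32 KiB L1 data cache
WAYS = 8                 # 8-way set associative
LINE_BYTES = 64          # cache line size

NUM_SETS = CACHE_BYTES // (WAYS * LINE_BYTES)  # 64 sets

def l1_set_index(address):
    """A given address can live in any of the WAYS slots of one set."""
    return (address // LINE_BYTES) % NUM_SETS

print(NUM_SETS)                     # 64
print(l1_set_index(0))              # 0
print(l1_set_index(64 * NUM_SETS))  # 0 -- same set, different tag (aliasing)
```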
        
           | sedatk wrote:
            | There probably are applications in which more cache is
            | useless.
        
           | wmf wrote:
           | Not as such, but larger caches are slower (due to the speed
           | of light) which can reduce performance. A few apps have seen
           | performance regressions on Tiger Lake.
        
             | tylerhou wrote:
             | > but larger caches are slower (due to the speed of light)
             | 
             | Wouldn't it be more accurate to say due to gate delay
             | because of an additional level in (de)muxing?
        
               | wmf wrote:
               | I assumed the slowdown is due to wire delay (a larger
               | cache has longer wires) not gate delay but I could be
               | wrong.
        
           | foota wrote:
          | No? Other than the opportunity cost of that silicon/heat
          | budget (to whatever degree the latter is relevant for cache),
          | not AFAIK from an architectural point of view.
        
         | agumonkey wrote:
         | `buy the rumor, buy the news` -- amd
        
         | nwallin wrote:
         | > 32MB L3 Cache is going to be very useful for certain types of
         | Application.
         | 
         | "Going to be"? The Ryzen 3600 ($199) and above already have
         | 32MB L3 cache, the 3900X ($499) and above already have 64MB.
         | The big L3 cache is already a big selling point of Zen 2.
        
           | cma wrote:
           | The difference they talked about is it is now uniform latency
           | instead of split into two blocks of 16MB accessible at
           | different latencies depending on which group of four cores
           | you are on.
        
       | nakovet wrote:
        | At least for the laptop market, AMD seems to have issues with
        | distribution and manufacturing. I was looking at the Lenovo
        | website for some models and it's hard to find an AMD-based one:
        | out of stock or not available. Feels like when the Nintendo
        | Switch was first launched. Hopefully they will address that
        | soon.
        
         | toast0 wrote:
         | I don't know if it's higher than normal demand or lower than
         | normal supply, but a lot of the laptop market has been low on
         | stock since summer (if not sooner).
         | 
         | Of course, AMD is supply constrained, TSMC fabs are a
         | bottleneck and AMD isn't the only customer that can easily sell
         | every chip that comes off the fab.
        
         | I_am_tiberius wrote:
         | I'm waiting for a Dell XPS 15/17 with a 4800U/H, but not sure
         | if that will ever happen. Maybe I'll go with the Lenovo T14 as
         | soon as the 4800U is available in it.
        
         | imhoguy wrote:
         | Just ordered a T14 through a gold partner. Spec: Ryzen 7 Pro
         | 4750U with 8c/16t and 32GB RAM. Arriving the first week of Nov
         | in Europe. Can't wait to get my toys running on the box -
         | Linux, Docker, VMs.
        
         | post_break wrote:
         | Custom-built AMD ThinkPads had a months-long backlog, but some
         | of the crappy prebuilt-spec ones would ship the next day.
        
         | qppo wrote:
         | Supply chains are highly disrupted right now for lots of laptop
         | manufacturers, I was researching laptops for a friend and it
         | seems like many high end models (or really any mid/low volume
         | SKU) are out of production temporarily.
        
           | rozab wrote:
           | I think it's more a case of demand than production,
           | especially for enterprise-focused laptops like Lenovo's
        
         | ds wrote:
         | This. If they cant get it under control, it wont effect intel
         | as much as you think.
         | 
         | Some say its yield issues, but who knows. 4900HS laptops are
         | selling out like crazy and not keeping up with demand- New
         | laptops arent using the chips because of lack of inventory.
         | They are absoultely beasts though. The zephyrus g14 absolutely
         | dominates on performance + battery life.
        
           | imhoguy wrote:
           | Pity it hasn't got a webcam or PgUp/PgDn/Home/End keys! That
           | completely disqualifies it for business/programming/WFH work.
        
         | brandmeyer wrote:
         | I just bought a T495s from their website a few days ago, so
         | your experience must be temporary.
        
           | distances wrote:
           | But that's the old generation, isn't it? The T14 and T14s
           | are the new ones with AMD 4000, which now absolutely
           | dominates the laptop CPU performance charts.
        
             | brandmeyer wrote:
             | Well, drats.
             | 
             | OTOH, T495s works great with Debian stable! I just needed a
             | kernel backport from buster-backports to get graphics to
             | work.
        
       | nightowl_games wrote:
       | In 2017 I decided to build a gaming PC. I had been out of the PC
       | Gaming world for a while, so I watched some youtubers to see what
       | I should buy. Ryzen first gen had just come out, and Linus Tech
       | Tips was pretty pro AMD. Seemed pretty optimistic.
       | 
       | I bought a Ryzen 1700, and checked the AMD stock price. It was
       | ~$10.
       | 
       | I told all my friends to buy AMD stock.
       | 
       | I had never purchased stocks before, but I was pretty sure that
       | AMD was going to go up. I bought $500 worth of AMD stock at $12.
       | (it took a few months for me to get around to buying it)
       | 
       | As 2018 went on, the financial markets started to pay attention
       | to AMD. People were calling it a buy at ~$30.
       | 
       | I was pretty sure that everyone else had missed the boat and that
       | I was in the money solely because of Linus Tech Tips.
       | 
       | Now here we are, AMD at $85. Thanks Linus.
        
         | option wrote:
         | I never recommend that friends and family buy a specific
         | stock. That's a good way to lose friends. I am happy to share
         | with them what I am buying, though, if they ask.
        
         | bryanlarsen wrote:
         | I bought 3dfx stock in the late nineties for a similar reason.
         | It went to zero. My recommendation would be to sell some of
         | your stock to lock in a little bit of profit and then you can
         | let the rest ride and it'll affect you less emotionally.
        
           | distances wrote:
           | Same for me, not 3dfx but stock losses affecting me much more
           | than wins. So I do mostly just index funds now to remove the
           | buy/sell stress.
        
           | p1necone wrote:
           | It's such a good feeling when something gains enough for you
           | to be able to sell back your initial investment entirely and
           | still have a decent amount.
        
         | alcover wrote:
         | > I told all my friends to buy AMD stock.
         | 
         | I thought I should buy AMD when the Intel security bugs
         | appeared. Then with the success of AMD's new line of procs, I
         | thought even more so.
         | 
         | I didn't do it because I was too lazy to learn how one buys
         | stock.
         | 
         | Now when I look at the stock I want to headbutt a brick wall.
        
           | bootloop wrote:
           | Had the same exact thought and problem. Only I bought AMD
           | "fake" demo stock in my bank account that same day, just for
           | fun. So now I can see exactly how much I could have made if
           | only I had bothered to get my real account working that
           | day...
        
             | alcover wrote:
             | Maybe we should still set up a trading account and buy?
             | Who knows how far the stock will climb if Intel's image
             | degrades further?
        
         | throw51319 wrote:
         | You should've put more in. $500 is nothing.
        
           | nightowl_games wrote:
           | I know. It was my first stock purchase so I only put in what
           | I was willing to lose.
        
       | donor20 wrote:
       | What I love is that these are often drop-in replacements. How
       | has AMD gotten AM4 to last so long as a form factor? By
       | comparison, it feels like Intel just burns through socket
       | designs.
        
       | simfoo wrote:
       | Oh man, really regretting paying 800EUR for the 3950X in January
       | now
        
       | glandium wrote:
       | PSA: If you're in the market for a new CPU, and have seen in the
       | past that rr (https://rr-project.org/) didn't support Ryzen, you
       | can stop worrying about it: Support for Ryzen has recently
       | landed. It's not in a release yet, but it's on the master branch.
        
       | [deleted]
        
       | qppo wrote:
       | So the benchmarks are insane, always trust but verify there. I
       | only caught the tail end, but what would really let me buy into
       | Team Red is a competitor to VTune and IPP.
        
       | bennysonething wrote:
       | Do Intel still outperform AMD on emulation? I'm upgrading my PC
       | under my TV, and I find most of the games I play now are on
       | emulators. It's especially great playing old PS1 games that I
       | never played the first time round, with all the smoothing etc. I
       | keep hearing IPC is higher on Intel? But I guess this latest
       | news means Ryzen will outperform on IPC too?
        
         | Grazester wrote:
         | Emulating a PS1 can be done well on a PC from 2000. The Sega
         | Dreamcast emulated select PS1 games with resolution upscaling
         | and texture filtering in 1999/2000.
         | 
         | You would only need a beefy processor if you are emulating a
         | PS3 or using one of those near-100%-accuracy SNES emulators
         | with run-ahead.
        
           | bennysonething wrote:
           | Yeah, I'm currently running an Intel 8200. But I'd like to
           | be able to emulate PS2, GameCube, Dreamcast and Wii. I'll
           | need a decent graphics card too.
        
             | proverbialbunny wrote:
             | In theory these new CPUs should give around a 20-25% fps
             | increase, as long as you're not bottlenecked by an
             | underpowered graphics card.
        
       | EasyTiger_ wrote:
       | Single-core performance always held me back from AMD. Will wait
       | to see more benchmarks but this looks extremely promising.
        
       | throwawaysea wrote:
       | Can someone please explain the branding differences between Zen,
       | Ryzen, and Threadripper? Keeping up with all this is exhausting
       | and confusing.
        
         | neogodless wrote:
         | Zen is not a brand. It's a codename for the evolution of their
         | CPU architecture.
         | 
         | Zen, Zen+, Zen 2 and Zen 3 are the generations of CPU
         | architecture.
         | 
         | Ryzen is the consumer brand of CPUs. Geared towards anyone who
         | doesn't specifically need Threadripper.
         | 
         | Threadripper is the enthusiast/workstation brand of (rather)
         | high core count CPUs. If you run software that uses a lot of
         | cores, you should know whether this is the right line of CPUs
         | for you.
        
       | tus88 wrote:
       | No DDR5 support. I'll be skipping this one.
        
       | brundolf wrote:
       | I'm bracing myself for supply shortages come November, given
       | global events combined with what will no doubt be an insane
       | amount of demand
        
       | satai wrote:
       | Remember, remember, the fifth of November.
        
       | [deleted]
        
       | kube-system wrote:
       | Now get OEMs to put one in a higher-tier laptop with a decent
       | keyboard and socketed RAM and you can take my money.
        
       | chrismorgan wrote:
       | When the 3800X and 3900X came out, they included coolers.
       | 
       | Then the 3800XT and 3900XT bumped the clock speed by around 2%,
       | increased the price by 15-18% (coming in at the prices the X
       | models had been at release, but said X models had come down), and
       | removed the cooler, which effectively bumped the price by, I
       | dunno, maybe another 5% to get equivalent coolers--if you can
       | even _get_ cheap coolers for them.
       | 
       | Now the 5800X and 5900X are coolerless too.
       | 
       | Any idea why they seem to have changed their philosophy here?
       | I've always thought having a cooler was very convenient, as on
       | paper the provided cooler seemed to be quite good enough--though
       | to be sure there's a reason why the 3950X and up don't include a
       | cooler ("cooler not included, liquid cooling recommended").
        
         | The_Colonel wrote:
         | Including a cooler makes sense for the low end + mid range,
         | but not really for the high end, where people will most
         | probably want to use a specific cooler.
        
         | sedatk wrote:
         | Stock coolers have never been popular among those who build
         | their own PCs; they were considered mediocre at best. Since
         | people wouldn't care whether they were in the package or not,
         | removing them might have looked like an easy way to increase
         | profit margins. Removing the cooler from their production
         | cycle might have also increased their production throughput.
        
         | roguas wrote:
         | People buying these types of CPUs almost always already have,
         | or will buy, non-generic coolers.
        
       | xwdv wrote:
       | Wow, we're entering a new age now where AMD is the processor of
       | choice by default.
        
       | nick_ wrote:
       | Uhh, what happened to the 4000-series desktop CPUs?
        
         | searchableguy wrote:
         | The number 4 is considered an ill omen in China. They probably
         | changed it for marketing reasons.
         | 
         | https://en.m.wikipedia.org/wiki/Chinese_numerology
        
           | brobinson wrote:
           | It's not limited to just the PRC...
           | 
           | https://en.wikipedia.org/wiki/Tetraphobia
        
           | onli wrote:
           | They claimed confusion with the 4000-series mobile
           | processors and APUs. That's much more likely; they wouldn't
           | have released that series otherwise.
        
         | CraftThatBlock wrote:
         | They skipped 4000 because of their mobile/APU series, which
         | always used +1000 in the model number:
         | 
         | - Zen 2 desktop: 3000
         | - Zen 2 mobile/APU: 4000
         | 
         | This way, Zen 3 will all be on 5000
        
         | mattashii wrote:
         | They probably wanted to fix the confusion about what type of
         | core was fitted into each processor, as the 3000-series desktop
         | processors contain Zen 2 for the parts without iGPU, but use
         | Zen 1+ for the parts with iGPU. 4000-series currently is only
         | parts with iGPU and Zen 2, so they have comparable performance
         | to the 3000-series desktop processors without iGPU.
         | 
         | So, I expect that iGPU-enabled parts which contain Zen 3 will
         | also be branded as 5000-series CPUs.
        
       | CorpOverreach wrote:
       | Now if only I could acquire an RTX 3080 to pair with it :(
        
         | foota wrote:
         | Wait a couple weeks and you can snag a navi along with it
        
           | iaml wrote:
           | Which is a smart choice anyway, because if navi's good nvidia
           | might announce ti versions or price cuts.
        
             | MrBuddyCasino wrote:
             | There won't be any price cuts this year, and probably until
             | mid 2021, as TSMC capacity is fully booked until then.
             | Samsung was a stop-gap solution for Nvidia, they gambled
             | for better prices and lost.
        
             | cptskippy wrote:
             | I've been reading a lot of comments everywhere about
             | disappointment with Big Navi but your comment exemplifies
             | why Big Navi matters even if it's not King.
        
               | foota wrote:
                | It could be, but the verdict's out till it lands. My
                | guess is cheaper and near-3080 performance, but it
                | could go either way. Likely cheaper regardless, just
                | based on history.
        
             | heelix wrote:
             | Price cuts? I've not found a single 3080 in store or
             | online yet. There was a 200-person queue for the 12
             | available at Micro Center on launch day.
        
       | andy_ppp wrote:
       | If you want to know what changed to get the 20% performance
       | increase - basically everything - but they focus on the fact
       | that the 8 cores on a die can now access 32MB of L3 cache
       | directly, rather than two sets of 4 cores each accessing their
       | own 16MB of L3.
       | 
       | Here is the point in the video about IPC improvements:
       | 
       | https://youtu.be/iuiO6rqYV4o?t=406
        
       | NikolaeVarius wrote:
       | No 5700 w/8 cores price point? Annoying
        
       | usainzg wrote:
       | YEEEEESSSSSS
        
       | jug wrote:
       | With the two coinciding like this for 2021, I feel like you'll
       | get a ton of gaming bang per buck with a budget CPU from the Zen
       | 3 series and a budget graphics card from Nvidia's 30 series or
       | maybe also AMD's counterpart. Exciting times with major leaps on
       | both fronts.
        
       | awslattery wrote:
       | Can't wait for Steve to validate some of these claims, but was
       | still hoping the October street date rumor was real.
        
         | mrkwse wrote:
         | Release is 5 November
        
       | ckastner wrote:
       | I was somewhat taken aback by this complete focus on gaming.
       | Gaming this, gaming that, FPS this and that.
       | 
       | Buying a 3900X was one of the best purchases I ever made, but
       | from the video, as a non-gamer, it's not entirely clear to me why
       | I should consider upgrading to the 5900X. A non-gaming benchmark
       | would have been helpful.
       | 
       | Perhaps I just misunderstood the target demographic of the Ryzen
       | 9, and maybe what I'm thinking of (and should be looking at) is
       | Threadripper after all.
        
         | numpad0 wrote:
         | What kinds of intensive tasks even exist today, apart from
         | gaming?
         | 
         | And on top of that, I'm slowly realizing that regular people
         | don't appreciate snappy computers. They don't want to wait,
         | but don't want quick moves either. So a _things happening
         | instantly_ demo might not work anyway, IMO.
        
           | ckastner wrote:
           | Software development, machine learning, general number
           | crunching, etc.
           | 
           | But, as I mentioned above, perhaps I'm just looking at the
           | wrong product, and maybe I should be looking at Threadripper
           | instead.
        
           | greggyb wrote:
           | Compiling large programs. All sorts of data analytics use
           | cases. Rendering - covers still and video. HPC. Real-time
           | data processing.
        
           | anonymfus wrote:
           | The most mainstream application of massive computing power at
           | home seems to be video editing.
        
           | leetcrew wrote:
           | > And on top of that, I'm slowly realizing that regular
           | people don't appreciate snappy computers.
           | 
           | I don't think this is true. my 70yo father who couldn't care
           | less about tech is always asking me why webpages don't load
           | instantly even when he pays for the highest tier of internet
           | service.
           | 
           | I think most people just don't have the knowledge to make
           | hardware/software choices that reflect their desire for
           | "snappiness". a lot of people don't understand that a laptop
           | is inherently less powerful than a desktop, let alone how to
           | allocate money towards cpu/ram/storage to get the best bang
           | for their buck.
        
             | zargon wrote:
             | It doesn't help that many retail computers have imbalanced
             | specs, trying to sell to people looking for the highest
             | number on a specific component (and sacrificing every other
             | component for cost) rather than offering a well-balanced
             | system.
        
         | AshWolfy wrote:
         | For consumer chips, gaming is where the money is. Wait until
         | it comes out and see what the benchmarks are then.
        
         | xfalcox wrote:
         | There were at least 5 non-gaming benchmarks, like GCC,
         | Cinebench, V-Ray, CAD...
        
         | akmittal wrote:
         | They did show cinebench and content creation benchmark
         | https://youtu.be/iuiO6rqYV4o?t=1237
        
         | greggyb wrote:
         | Gamers appear to be the most avid and most vocal fans of any
         | given computer technology (except maybe networking?). They're
         | probably trying to ride the hype train.
         | 
         | That said, you should always wait for third party benchmarks
         | from a source you trust, with benchmarks that relate to your
         | production workloads. I like Anandtech, Phoronix, and Gamers
         | Nexus for such reviews. The last, despite the name, includes a
         | decent swath of non-gaming benchmarks, and they are incredibly
         | transparent about benchmarking methodology, which I appreciate.
         | 
         | With the necessary caveat out of the way, some observations:
         | 
         | If the IPC gains are there, and we're seeing similar or higher
         | clocks (which seems to be the case), then you should expect a
         | pretty good uplift to non-gaming performance.
         | 
         | The cache architecture optimization should also be a boon to
         | many workloads outside of gaming.
        
           | fomine3 wrote:
           | Gamers Nexus is now my #1 review site, despite the name.
           | Anyway, the most important thing is to check a benchmark for
           | a workload similar to yours, rather than Cinebench (if
           | you're not a CG renderer).
        
           | theandrewbailey wrote:
           | > Gamers appear to be the most avid and most vocal fans of
           | any given computer technology (except maybe networking?).
           | 
            | Gaming is very sensitive to network latency. Bandwidth is
            | less important, but helps for downloading and installing
            | games.
        
             | greggyb wrote:
             | I absolutely understand the importance of network
             | performance in gaming.
             | 
             | My observation was on how vocal gamers tend to be in their
             | excitement about hardware. I see gaming media go nuts over
             | CPU and GPU releases. There's excitement and detailed
             | analysis of motherboards. I've seen in depth content on
             | storage, and there's been a lot of hype over the PS5's new
             | storage architecture. RAM gets attention. Obviously
             | displays get lots of attention, both from a visual quality
             | and refresh frequency perspective. There are endless buying
             | guides for peripherals such as mice, keyboards, and
             | headsets.
             | 
             | I don't tend to see much content on networking, nor
             | anything approaching the excitement I've seen for any of
             | the categories above.
             | 
             | These observations do not diminish the importance of the
             | network in online gaming. I merely noted that networking
             | hardware tends to generate less vocal excitement among
             | gamers. Beyond networking equipment, I believe gamers lead
             | in hardware excitement.
        
               | tssva wrote:
               | Network performance may be important but the reality is
               | that for the most part those things which influence
               | network performance are out of the hands of gamers. There
               | is "gaming" network hardware out there but for the most
               | part I would categorize it with the $500 coax cables sold
               | to audiophiles.
        
               | greggyb wrote:
               | The biggest lever in the toolbox for a typical user to
               | optimize network is good old ethernet. Wired vs wireless
               | will typically reduce latency a little bit and
               | _massively_ improves on jitter, which is arguably more
               | important for competitive gaming (within a reasonable
               | range of latency).
               | 
               | Outside of that, I agree.
        
         | leetcrew wrote:
         | the single thread performance is the meat of the story here.
         | that was the main caveat that stood in the way of an
         | unconditional recommendation of amd over intel in the last
         | generation. it makes sense to primarily target people (like me,
         | incidentally) who would have bought amd last time around but
         | for the deficit in gaming performance.
        
         | f311a wrote:
         | Gamers are the biggest audience when it comes to high-end
         | consumer CPUs. You don't need a $550 CPU for your laptop or PC
         | if you don't play games.
         | 
         | Many professionals, including developers, don't need such CPUs
         | too.
         | 
         | When I need a lot of CPU power, I just ask my company to
         | provide me with servers.
        
           | cma wrote:
           | If you compile large C++ codebases they are a godsend. A
           | slide showed the 5950X was only 9% better than the 3950X at
           | a compiler benchmark, though. A deeper-pipeline-based IPC
           | increase may suffer from branch mispredictions in the kind
           | of parsing and optimizing workloads compilers run, and not
           | get the full benefit.
        
             | f311a wrote:
             | Do C++ developers prefer desktop computers at work? I think
             | a lot of them use laptops.
        
               | outworlder wrote:
               | Is DistCC (or related tech) not a thing anymore?
        
               | leetcrew wrote:
               | depends on the project. it takes me about 20 minutes to
               | do a full build on an overclocked 9700k. if I were forced
               | to use a laptop, I would probably quit.
        
               | spockz wrote:
             | What about spinning up a 24-core VM and mounting your code
             | with sshfs for those compiles? Edit: corrected ashes to
             | sshfs
        
               | leetcrew wrote:
               | this has been discussed internally. for a variety of
               | reasons, testing the software on a remote machine is
               | kinda painful, so we would still have the overhead of
               | copying the executable back to the developer's machine.
               | the more straightforward approach is just to buy fast
               | desktops for all the devs. a desktop with 32GB ram, a top
               | of the line consumer cpu, and a middling gpu is roughly
               | the same price as a good ultrabook anyway.
               | 
               | also, c++ compilation doesn't scale perfectly with more
               | cores. linking is still single threaded, AFAIK.
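
The diminishing returns described above are just Amdahl's law: if the link step (and any other serial work) stays single-threaded, total build speedup is capped no matter how many cores you add. A minimal sketch, assuming a hypothetical 90/10 parallel/serial split (not a measured figure for any real codebase):

```python
def build_speedup(cores, parallel_frac=0.9):
    # Amdahl's law: the compile phase (parallel_frac of the work)
    # scales with core count; the serial remainder (e.g. a
    # single-threaded link step) does not.
    return 1.0 / ((1.0 - parallel_frac) + parallel_frac / cores)

for n in (4, 16, 64):
    print(f"{n:2d} cores: {build_speedup(n):.1f}x faster build")
# With a 10% serial fraction, speedup can never exceed 10x,
# no matter how many cores are thrown at the build.
```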
        
               | PaulDavisThe1st wrote:
               | lld will parallelize your link step. Major improvement
               | over ld. Barely any change to the rest of your build
               | process.
        
               | StillBored wrote:
             | sshfs adds far too much latency for large compiles, which
             | tend to open a ton of little files, read a KB or two, and
             | close them. If you're on a local LAN it's doable over NFS
             | or another low-latency network share, but anything going
             | over the internet via ssh is going to be far too slow. In
             | those cases it's better to just git pull the changes and
             | maintain a local copy on the compile target.
        
               | sukilot wrote:
               | What do you mean? The code is on the build server. You
               | only need to push your human-timescale changes.
        
               | ahartmetz wrote:
               | You can always get roughly 2-3x the CPU performance of a
               | laptop in a desktop (2x core count, higher clock due to
               | much bigger heatsink). It makes sense to use a desktop
               | for the compile times.
        
               | ben-schaaf wrote:
               | The most performance you can get in a mobile CPU right
               | now is an 8 core ryzen. On a desktop you can get the
               | 3990x at 64 cores. That's at least 8x the performance.
        
               | ben-schaaf wrote:
               | Can't fit a 32 core threadripper in a laptop no matter
               | how hard you try. Afaik most use workstation desktops if
               | they're working on large codebases with long compile
               | times (think chrome, Firefox, unreal engine, etc. with
               | compile times often in the hours on slow computers).
        
               | DoofusOfDeath wrote:
               | I usually find myself developing C++ code directly on
               | servers, via SSH. In that setup, a laptop + external
               | monitor is my preference, because of portability.
               | 
               | If I ever get to do local-machine development again, the
               | choice of laptop vs. desktop really depends on how build
               | times compare, and which platform has Intel CPUs with the
               | required ISA extensions.
        
               | sroussey wrote:
               | Which ISA extensions are useful for compilation?
        
               | StillBored wrote:
                | None really; compilation is considered a branch/integer
                | workload and has been that way for decades. It's a lot
                | of tree traversals and short-string manipulation. One
                | of the bigger boosts in recent memory was the "fast
                | strings (rep mov*)" implementations about 10 years ago.
                | 
                | Single-core compilation is actually a fairly good
                | metric for general-purpose performance. Of course it
                | scales almost linearly with core count too.
        
               | DoofusOfDeath wrote:
               | Regarding the speed of compilation, I'm not sure which if
               | any recent ISA extensions matter.
               | 
               | What matters to me is the ISA extensions that are needed
               | by the software I'm building, typically DL or HPC code.
               | Life is easier when I can build and test the software on
               | my local machine, without having to involve some fancy
               | new server.
        
               | xrikcus wrote:
               | Many of us who do use laptops do the builds elsewhere.
        
               | gambiting wrote:
                | We are a company that only does development in C++, and
                | everyone has a desktop workstation. Laptops are simply
                | too slow unless absolutely huge, and besides, my
                | minimum requirement in a workstation nowadays is 64GB
                | of RAM, with 128GB preferred. It's hard to find a
                | laptop that can support that much.
        
               | ip26 wrote:
               | Honest question, why no build farm?
        
               | gambiting wrote:
               | We have a build farm. This is just for running the
               | editor.
        
               | sroussey wrote:
               | My MacBook has 64, don't think 128 is an option.
        
             | tmd83 wrote:
              | Yeah, I noticed that comparison and was surprised, given
              | the amazing single-thread benchmark improvement. We
              | really have to see the benchmarks.
              | 
              | One might end up in a situation where, for developers,
              | the older-generation CPU being cheaper and more available
              | might make more sense.
              | 
              | I wish somebody would put together a benchmark suite
              | really suitable for developers.
        
               | vbezhenar wrote:
                | You don't want an older-generation CPU. You won't
                | clean/build your project after every change; you'll use
                | some kind of incremental build, which will use a few
                | cores. Your IDE will use a few cores. And the
                | significantly faster single-thread performance of the
                | new generation will be extremely helpful.
        
           | GordonS wrote:
           | > Many professionals, including developers, don't need such
           | CPUs too
           | 
           | While I agree that many people don't need mass multi-
           | threading, I don't count developers among those.
           | 
            | Compilation and transpilation are computationally
            | intensive, and usually take advantage of multi-threading.
            | Developers often also run multiple virtual machines and
            | containers, for example to run a database, message bus or
            | blob store.
           | 
           | As a developer, I'll take all the compute capacity I can get,
           | thank you very much!
        
           | lliamander wrote:
           | > Many professionals, including developers, don't need such
           | CPUs too.
           | 
           | True, though I think there's a distinction between "need" and
           | "can make use of". If I'm going to spend 8+ hours a day
           | working on a computer, I'd probably be willing to spend an
           | extra 20% or whatever on total system cost to eke out a bit
           | of extra performance, even if that doesn't make sense from a
           | strict cost/benefit calculation.
           | 
           | That said, there isn't really a good way for developers to
           | calibrate the hardware to their workload.
        
           | ATsch wrote:
           | It's not that developers don't need these CPUs, it's just
           | that there's a lot more gamers than developers, and most of
           | them will buy laptops or be provided workstations with Ryzen
           | PRO or Threadripper processors.
        
           | warrenm wrote:
           | Apparently you don't do development, local virtualization,
           | work on enterprise apps, or any of 1000s of other things that
           | benefit from extra horsepower
        
             | manigandham wrote:
             | They said _biggest audience_ , not the only one.
        
           | Glyptodon wrote:
           | I definitely need ram way more than I need CPU, but there are
           | still plenty of things that CPU helps me save time with as a
           | dev, and often those things are somewhat parallel. So more
           | cores is good too.
        
           | gambiting wrote:
           | >>Many professionals, including developers, don't need such
           | CPUs too.
           | 
           | Lol, sorry, but our C++ projects have an average build time
           | of 40 minutes on an 8-core/16threaded Xeon CPU in my
           | workstation. Even using Fastbuild/SN-DBS it still takes 5-10
           | minutes. We'll take any number of cores we can get, thank
           | you.
        
             | sukilot wrote:
             | 20% faster CPU isn't going to solve your problem. You
             | should look into caching and distributed compilation.
             | 
             | The biggest companies in the world writing the largest
             | software don't have build times that slow.
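       A hedged sketch of the caching-plus-distributed-compilation setup
       the comment suggests, assuming ccache and distcc are installed (the
       build-host names are placeholders, not anything from the thread):

```shell
# Route compiles through ccache; on a cache miss, hand the job to distcc.
export CCACHE_PREFIX=distcc
export CC="ccache gcc"
export CXX="ccache g++"

# Placeholder hosts: up to 8 jobs locally, 16 on each remote box.
export DISTCC_HOSTS="localhost/8 build01/16 build02/16"

# Parallelism can exceed the local core count because jobs run remotely.
make -j40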
        
             | xorfish wrote:
             | > 8-core/16threaded Xeon CPU
             | 
             | What generation and what frequency?
             | 
             | Those things can be quite disappointing for what they cost
             | compared to consumer CPUs.
             | 
             | I would expect the 5950X to be around 3-6 times faster than
             | an 8 core Xeon.
        
               | gambiting wrote:
               | It's the Xeon W-2145.
        
             | DoofusOfDeath wrote:
             | I wasn't familiar with Fastbuild or SN-DBS. Can/should they
             | be used in conjunction with ccache?
        
             | f311a wrote:
             | Yeah, but there are 10 JavaScript developers for each C/C++
             | developer.
             | 
             | I'm a Python developer and I can work from laptop. I do a
             | lot of data processing and my laptop can handle development
             | and testing. On production, some of the scripts are running
             | on 64 core machines and use vector operations via numpy and
             | scipy.
        
               | notJim wrote:
               | LOL I'm a "js developer" and the system I work on
               | requires like 10 docker containers. Not sure where this
               | idea comes from that JS developers don't need power.
        
               | rowanG077 wrote:
               | You are just wasting cycles and memory. Don't use docker
               | if you care about performance.
        
               | notJim wrote:
               | This kind of response is supremely unhelpful. I work at a
               | company with a large team, I'm not just going to tell all
               | 300+ people to drop everything and rewrite the entire dev
               | stack. Docker has become the de facto standard for dev
               | environments.
        
               | rowanG077 wrote:
               | My comment was flippant because your comment was useless.
               | Of course, if you decide to use something like Docker,
               | you force yourself into a performance corner; that would
               | happen with literally any piece of software. It's not
               | valid to then use that as a reason for needing a better
               | CPU. No, what you need is to get your development
               | environment in order.
               | 
               | Btw docker is only standard for CI systems. Never would I
               | advise anyone to actually use docker for local
               | development let alone 10 docker instances at once. It's
               | just pure insanity.
        
               | notJim wrote:
               | Why is it not valid to buy a better cpu to have a better
               | dev environment? What does validity even mean here? I was
               | never told I had to check in with you when making
               | decisions. Before docker I remember spending tons of time
               | setting up dependencies locally on machines when starting
               | a new job, or maintaining puppet scripts to do so. And
               | then since machines change over time, and run a different
               | OS, your dev environment never quite matches prod. A $500
               | CPU is drastically cheaper than the bugs, headache, and
               | time suck of maintaining a dev env.
        
               | AlfeG wrote:
               | Webpack compilation of even a hello-world project is an
               | insanely long task. If a new CPU can reduce this *hit by
               | 20% I will be happy
        
               | sroussey wrote:
               | Even JS devs "compile" their code and use multiple
               | threads. And when JS runs, every JS engine uses multiple
               | threads (parsing, running, GC). Just look at how many
               | threads chrome is using on a single web page. No need to
               | throw shade on JS devs and their equipment needs.
        
               | f311a wrote:
               | Yes, but you can't use a 105W CPU in your laptop anyway.
               | 
               | In a company where I work, almost every front-end
               | developer uses a laptop with an external display because
               | it's a portable solution.
        
               | lliamander wrote:
               | I think OP's main argument is that if you actually need a
               | lot of cores, then you should probably be offloading some
               | of that work onto a server.
               | 
               | I currently use a 6-core mobile H-class CPU for my work
               | that is a mix of Java and JS development, with a bunch of
               | browser tabs and docker containers running, large
               | IntelliJ project, etc. While my system often has a lot of
               | processes/threads running, the main cost of those is in
               | memory utilization rather than constant CPU performance.
               | 
               | I'm not saying that I can't use more CPU horsepower, but
               | it's certainly enough for my needs (for now). The main
               | benefit of having more cores for me right now is just to
               | maintain system responsiveness while running a bunch of
               | background tasks, rather than raw throughput of large
               | compile jobs.
        
               | tmd83 wrote:
               | Kind of the same setup for me. Primarily Java work on a
               | 6-core laptop. Between Chrome's absurd RAM/CPU usage,
               | IntelliJ, Gradle, and WebLogic, memory usage is just
               | super high, and CPU usage is not so little either at
               | times, even without a build running. Add anything extra
               | I'm running for diagnostics or performance tuning and
               | things get very troublesome. And macOS doesn't seem to
               | handle such pressure as gracefully as Linux (though I'm
               | not 100% sure).
        
               | lliamander wrote:
               | I'm running Linux and things generally feel pretty smooth
               | to me. It wouldn't surprise me if Linux had better multi-
               | tasking support.
        
               | tmd83 wrote:
               | I think it's less about better multitasking and more
               | about handling load when things are at the limit in some
               | fashion, whether that's excessive CPU or RAM usage.
        
             | ohazi wrote:
             | "Any number of cores" != "Fastest possible core design"
             | 
             | I also compile large, multi-compilation-unit C++ programs,
             | and it's much, much better to have 24 or 32 threads
             | clocked at 2 GHz than 8 threads clocked at 5 GHz.
             | 
             | The parent comment was talking about fast individual core
             | designs, which _are_ genuinely useful for gaming, and
             | can't be approximated by adding more cores.
        
             | mey wrote:
             | Been a long time since I've done heavy C++ development
             | (long time) but how does Fastbuild compare to Incredibuild?
        
               | gambiting wrote:
               | It's about the same, we've stopped using Incredibuild
               | some time ago because there were almost no benefits to it
               | over Fastbuild and the cost was huge.
        
           | walshemj wrote:
           | Not for content creators and other workstation type
           | applications - all those non server workstation boards are
           | going some where right?
        
           | overcast wrote:
           | You don't need it if you DO play games. I'm running an 11
           | year old Xeon that maxes every modern game at 2560x1600 on a
           | 1070. GPU is WAY more important than CPU in gaming.
        
             | leetcrew wrote:
             | > GPU is WAY more important than CPU in gaming.
             | 
             | depends on the game. plenty of popular esports titles will
             | bottleneck on the CPU with even a low end graphics card.
        
               | overcast wrote:
               | Look at any gaming benchmarks, unless they are running
               | them at low resolutions, the CPU is not the bottleneck. I
               | agree there are some fringe simulation cases, but most
               | gaming is not inhibited by CPU. Once you're up in the 2k
               | and 4k land, the GPU is the bottleneck. This video
               | focuses on 1080p performance benefits. There is zero
               | reason to be upgrading between generations unless you're
               | just burning money. Anyone paying $500+ every time there
               | is a new CPU platform, is guaranteed already running 2k
               | and 4k content.
        
               | outworlder wrote:
               | Simulation is not necessarily 'fringe'. Kerbal Space
               | Program, Oxygen Not Included, heck... Dwarf Fortress (ok,
               | the last one IS fringe).
               | 
               | You are correct that most games don't use that much CPU.
               | Most titles will run comfortably with an Intel I3 from
               | the past three generations or so, provided there are no
               | bottlenecks in the GPU. The CPU will be the bottleneck,
               | but if you are getting sufficient framerates, so what?
               | 
               | Modern CPUs are insanely powerful. Even then, I moved to
               | Ryzen - for the above mentioned titles.
        
               | overcast wrote:
               | I run Kerbal and Dwarf Fortress on an 11 year old Xeon, as
               | well as every other modern game with a 1070 at 2560x1600.
               | All the eye candy on. I'm not suggesting running a decade
               | old CPU, but it's completely silly for this presentation
               | to be pumping expensive CPUs for 1080p gaming benefits.
        
               | leetcrew wrote:
               | it's really not so clear cut outside of the few highly
               | gpu-intensive AAA titles that get released each year. I
               | play csgo (an older, but still very popular game) on a 4k
               | display. my gpu hovers around 40% at max settings, while
               | a couple cpu cores are constantly pegged.
               | 
               | it is hard to find benchmarks to show this however. the
               | reputable sites mainly focus on 1080p because this is
               | where the difference between cpus is most visible.
               | doesn't mean there isn't a meaningful difference at
               | higher resolutions, it just isn't as good a way to show
               | what they are trying to show in a cpu review.
               | 
               | > There is zero reason to be upgrading between
               | generations unless you're just burning money.
               | 
               | agreed, the speedup is almost never worth such a short
               | upgrade cycle, especially for consumers. but this is
               | moving the goalposts given the thread's context.
        
               | jyrkesh wrote:
               | The big CPU-bottlenecked esports game is CS:GO, or any
               | other Valve / Source game. Even on an older CPU with
               | mediocre single-threaded perf (i7-5820K), I can run the
               | game _fine_ on high settings, 1080p (100-250fps), but it
               | occasionally dips down below my 144hz refresh rate, which
               | can be annoying.
               | 
               | And yeah, the GPU is never pegged while playing, but
               | multiple CPU cores definitely are.
               | 
               | Also, emulation is another big place where CPU
               | bottlenecking is a thing. Look at Dolphin (GameCube/Wii),
               | Citra (3DS), and yuzu (Switch); they're all CPU bound.
               | Folks in the various subreddits will ask why their new
               | GPU isn't netting them a higher framerate, and it's
               | because the GPU is almost not even used (though in some
               | cases with better graphical effects, shaders, etc. they
               | can be beneficial)
        
               | NikolaeVarius wrote:
               | https://www.youtube.com/watch?v=dmeWo7BLN9Y According to
               | DF, 4k games are bottlenecking on the CPU via the 3090
        
               | FartyMcFarter wrote:
               | When do they say that in the video? The CPU-limited
               | comment at 10:28 is for 1080p / 1440p.
        
               | NikolaeVarius wrote:
               | 10:41, so 10 more seconds. i9-10900K bottlenecking
               | Hitman 2 at 4K max.
        
               | FartyMcFarter wrote:
               | It doesn't seem like that's 4K: the text on the top-right
               | corner clearly says 1080p or 1440p next to each
               | framerate. The narrator also says 1080p and 1440p.
               | 
               | It's definitely confusing, since there's a mention of 4K
               | elsewhere...
        
               | NikolaeVarius wrote:
               | Sure, but that drives the point further, right? Already
               | CPU bound at 1080p.
        
               | FartyMcFarter wrote:
               | No. The CPU is the bottleneck at 1080P, but the GPU might
               | become the bottleneck at a higher resolution.
        
               | leetcrew wrote:
               | not really. cpu bound at 4K implies cpu bound at 1080p,
               | but the converse is not true.
        
               | Matthias247 wrote:
               | While I agree with the general statement, I want to
               | remark that 1080p is actually 2k if you treat 2160p as
               | 4k.
               | 
               | I guess you mean 1440p - which you could maybe label as
               | 2.6k. But I don't think there is any such designation.
               | Calling it WQHD is probably the right thing.
        
             | mikepurvis wrote:
             | It depends. I threw together an HTPC a few months ago out
             | of mostly second-hand parts and it's been a mixed bag. With
             | an i5-4460 and an RX 570, I can run modern shooters
             | perfectly, but a bunch of indie games give me unexpected
             | trouble. Hades for example (isometric roguelite heavy on
             | visual effects) still gives me lots of slowdowns and
             | framedrops, and I even get jitter and issues on some really
             | old titles that I would have expected to be butter-smooth,
             | like Sonic Generations.
        
               | overcast wrote:
               | The RX 570 is definitely a budget level GPU, an upgrade
               | to that would certainly give you what you need to run
               | those games on an i5 properly.
        
             | easde wrote:
             | If you're playing at high framerates (120Hz+) you'll find
             | that your CPU is inadequate for many recent games.
             | 
             | (playing games on a 1070 and a 6 year old Xeon and feeling
             | the lack of single-threaded performance)
        
             | Rebelgecko wrote:
             | Have you tried MS Flight Sim? That's the game that
             | convinced me it's time to upgrade my trusty i5-4670k
        
         | XCSme wrote:
         | Also because Google Chrome, Microsoft Word, Visual Studio Code
         | and Slack don't have an FPS meter, and you rarely notice a few
         | dropped frames, compared to games where you can easily tell
         | performance differences.
        
         | gameswithgo wrote:
         | AMD was already ahead of intel on _every other use case_ so
         | they finally got gaming too.
        
         | andrewstuart2 wrote:
         | Assuming that your workloads are standard non-gaming workloads,
         | they will probably also benefit substantially when gaming
         | benefits. My reasoning being that a common bottleneck for
         | gaming workloads is single thread performance, and unless this
         | is highly focused on a single core clock boost, I'd expect that
         | single-threaded performance increases would be multiplied
         | appropriately by core count, though perhaps not perfectly
         | linearly.
         | 
         | When you see marketing reaching for "non-gaming" metrics, it's
         | often for highly parallelizable workloads, which benefits non-
         | gaming disproportionately, but e.g. for compiling/linking there
         | are still tasks that have to be done serially, which is where
         | that single-threaded performance boost is critical.
         | 
         | I'm definitely excited at this point to see what a Zen 3
         | threadripper can bring to the table.
        
         | x87678r wrote:
         | I'm wondering how many non-gamers actually buy desktops. Either
         | they use laptops or their company chooses one. I'm a non-gamer
         | who likes desktops but am happy with my old PC and am thinking
         | of getting a laptop for my next box.
        
           | hombre_fatal wrote:
           | I built a small form factor PC for fun, but it's pretty much
           | deadweight.
           | 
           | Having to sit down in the same chair at the same desk for an
           | entire work session is a self-inflicted dealbreaker for me.
           | It made more sense when laptops were so underpowered with
           | such bad battery life. But now, gaming and exotic workloads
           | are the only thing that really justify being tethered to a
           | machine.
           | 
           | If I could go back in time, I'd save that $1000 towards a
           | maxed out Macbook Pro and dual-boot Windows.
        
             | x87678r wrote:
             | Lol I sit at the same chair at the same desk every day.
             | Perhaps you just persuaded me what I should do next.
        
         | [deleted]
        
         | protomyth wrote:
         | Business PCs are determined by the company building the PC. The
         | consumer market that actually builds their PCs is mostly
         | gaming. They did mention content creators a couple of times.
        
         | zamalek wrote:
         | Gaming workloads are a strict superset of almost any other
         | workload (because they are extremely taxing on almost any
         | metric a platform can have). Game benchmarks are abundant and
         | reproducible. They are a good indicator of how a platform will
         | deal with almost any other software.
         | 
         | AMD did have a CAD benchmark thrown in.
         | 
         | > Threadripper
         | 
         | Not a bad choice at all (wait for that 5000 TR announcement),
         | however, a *950X comes close to TR for much less. My 3950X has
         | made a striking difference for Rust compilation times and I'm
         | strongly considering a 5950X.
        
           | pimeys wrote:
           | I also have a 3950X for Rust work and it has been the
           | fastest CPU I've ever used. It just flies every day and makes
           | Rust compilation times a non-issue.
           | 
           | That said, I really don't know whether I want the trouble
           | of finding a 5950X and the work of replacing the CPU. I
           | really don't feel like I need more CPU power at this point.
           | 
           | But replacing my 2080ti with a new fast Radeon, that would be
           | something...
        
           | dvdplm wrote:
           | Indeed. And for rust compilation, good single threaded
           | performance can have a significant impact on that last
           | annoyingly slow linking step (at least when building
           | executables).
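         On Linux, one common way to speed up that final linking step is
         to swap in a faster linker; a minimal config sketch (assumes lld
         is installed; not something from the thread):

```toml
# .cargo/config.toml -- use the LLVM linker, typically much faster
# than the default GNU ld for large binaries.
[target.x86_64-unknown-linux-gnu]
rustflags = ["-C", "link-arg=-fuse-ld=lld"]
```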
        
             | steffan wrote:
             | I've noticed the same thing on EC2 instances when
             | compiling, e.g., Alacritty: there wasn't a lot of
             | difference between 8 and 16 vCPUs, since the last step
             | took a significant portion
             | of time and was single-threaded. It's fun watching 16 (or
             | 32) CPUs maxed out though.
        
           | celrod wrote:
           | Gaming doesn't really have SIMD or FP at all though, does it?
        
             | zamalek wrote:
             | Couldn't be further from the truth. Rasterization is all
             | linear algebra. That being said, you don't really see games
             | with the likes of AVX-512.
        
               | moonchild wrote:
               | That all happens on the GPU. There's a tiny (trivial)
               | amount of linalg on the CPU and the rest is punted to
               | shaders.
        
             | sharpneli wrote:
             | Plenty of it. Going through the transform hierarchy (tons of
             | 4x4 matrix multiplications which definitely are vectorized)
             | and then the physics simulations, and that's just part of
             | it.
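             The batched 4x4 transform work described above can be
             sketched like this (a minimal illustration, not engine code;
             numpy's broadcasted matmul stands in for the hand-written
             SIMD a real engine would use):

```python
import numpy as np

# Transform hierarchy step: each node's world matrix is its parent's
# world matrix times its own local matrix. With many siblings these are
# independent 4x4 multiplies, done here in one vectorized call.
rng = np.random.default_rng(0)
local = rng.standard_normal((1000, 4, 4))  # local transforms of 1000 nodes
parent = np.eye(4)                         # the parent's world transform

world = parent @ local                     # all 1000 multiplies at once

# Sanity check: identical to multiplying one node's matrix by hand.
assert np.allclose(world[42], parent @ local[42])
```

             A real engine typically stores these matrices in a
             structure-of-arrays layout so the SIMD lanes stay full.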
        
           | PaulDavisThe1st wrote:
           | Can gaming workloads simulate the 100% of all cores that a
           | parallel large scale native C++ compilation will generate?
        
             | zamalek wrote:
             | Games are typically written in C++, so that's a yes. As for
             | the threading, older (2yr or so back) games tend to assume
             | 4 cores and don't scale past that (thanks, Intel!). More
             | recent games tend to aim at arbitrary horizontal scaling.
             | 
             | You do still get major titles that are completely single-
             | threaded.
        
               | moonchild wrote:
               | ? no
               | 
               | The parent was talking about c++ _compilation_.
               | 
               | A c++ compiler is very different from a game engine
               | written in c++.
        
               | fomine3 wrote:
               | Being written in C++ doesn't mean it's a comparable
               | workload; gaming performance is just gaming. For games,
               | real-time responsiveness matters, and single-thread
               | performance is usually what counts, even when some of
               | the work is multithreaded.
        
             | m12k wrote:
             | Modern game engines can, yes
        
         | pier25 wrote:
         | > _A non-gaming benchmark would have been helpful._
         | 
         | Like Cinebench?
        
           | ckastner wrote:
           | Fair enough, on that point.
        
         | azeirah wrote:
         | I personally love their CPUs for development: I can run a solid
         | IDE, have 3 browsers open, compile huge programs with many
         | threads, run 40 docker containers next to each other, and have
         | Discord, YouTube, Thunderbird, etc. open
         | 
         | And it keeps running smoothly! I don't know what the market
         | share of developers is vs gamers :x
        
           | skrtskrt wrote:
           | 40 docker containers?? My fully spec'd-out MacBook Pro could
           | only dream of running 8 without a hiccup
        
             | whatch wrote:
             | Which macbook do you use? Thinking about replacing my
             | thinkpad e590 with macbook pro 16
        
               | skrtskrt wrote:
               | I had (before switching jobs) the 2018 MBP with all the
               | options maxed out.
               | 
               | Was running a Docker stack of about 8 containers, plus
               | PyCharm. It worked okay-ish, but everything was maxed out
               | all the time and the fans were always spinning. Battery
               | was being drained even when on the charger. Pycharm would
               | lag for a second or two here and there.
               | 
               | Now switched jobs and was issued a 2018 MBP with only
               | 16GB RAM and the 2.2GHz i7. I have gone to the trouble to
               | run everything locally (a single 200MB Python repo +
               | Spark). I don't see the same power drain but Pycharm lags
               | VERY hard on its indexing and code insight... sometimes
               | 5-10 seconds.
        
             | distances wrote:
             | Every time I code with the MBP from work I dream of using
             | my home desktop computer instead. There's just no
             | comparison. I have just the 3700X CPU, but even that, on
             | Linux, runs circles around my work laptop.
        
             | alderz wrote:
             | Keep in mind that Docker for Mac runs in a virtual machine,
             | so it is much heavier than running it on Linux.
        
               | skrtskrt wrote:
               | Good point.
               | 
               | I have yet to take the dive into buying or building a
               | Linux workstation for development. Probably very worth
               | it, but I have found the endless fiddling you can do with
               | Linux to really distract me from doing the actual work I
               | turned on the computer to do.
        
               | read_if_gay_ wrote:
               | It's not only endless fiddling you _can_ do but
               | occasionally fiddling you _have to_ do as well.
        
             | intricatedetail wrote:
             | Mac these days is a fashion statement rather than a serious
             | tool. Add the company's shady practices (e.g. making
             | devices difficult to repair), questionable design decisions
             | (e.g. running high-voltage rails right next to low-voltage
             | signals, increasing the likelihood of failure), then making
             | it difficult to recover data and forcing you to use their
             | cloud so they can analyse your data. No thanks.
        
             | azeirah wrote:
             | Hyperbole; I meant that I never had any issues with however
             | many containers I was running, haha. I think I ran like 15
             | at most on my desktop.
        
         | znpy wrote:
         | As far as I see/understand it:
         | 
         | Gaming is a single-thread use case that resonates with a lot of
         | people and is generally easy to understand / relate to.
         | 
         | AMD could have used some other single-core use case like high
         | frequency trading, but that would not have been grasped by as
         | many people as the gaming use case.
         | 
         | Now add the huge success of youtube channels like LinusTechTips
         | and similar and you get the point: the gaming use case helps
         | deliver the message to a wider audience.
        
       | x87678r wrote:
       | from the comments:
       | 
       | >Shame that COVID-19s still here. Can't attend Intel's funeral.
        
         | _sveq wrote:
         | https://kensegall.com/wp-content/uploads/2017/10/apple.toast...
        
       | burnte wrote:
       | The FPS improvements on some of those games are huge, which
       | makes me think they've also improved memory transfer efficiency
       | as well as IPC.
        
         | CyberDildonics wrote:
         | What do you mean by "memory transfer efficiency"?
        
           | Filligree wrote:
           | Zen has traditionally been extremely hungry for memory
           | bandwidth. Memory latency is bad, so processors prefetch
           | whatever they can, but they sometimes get it wrong and fetch
           | data that doesn't get used.
           | 
           | This allows you to give an efficiency rating, and Intel is
           | better here.
           | 
           | It also means single-channel Zen is a bad idea, but that
           | doesn't stop laptop manufacturers from doing it.
        
             | CyberDildonics wrote:
             | I haven't ever seen that measured or heard of that hurting
             | memory bandwidth, do you have a link?
             | 
             | Most programs are not memory bandwidth constrained because
             | they aren't optimized well and are bottlenecked on memory
             | latency.
             | 
             | Prefetching is going to be wasted much more on programs
             | that are hopping around in memory, which will be
             | constrained by memory latency and far from having memory
             | bandwidth problems.
             | 
             | Programs that run through memory linearly with short or no
             | branches will be using all the prefetched memory.
        
               | cameron_b wrote:
               | Memory bandwidth here being relative, because part of
               | the architecture is putting many more cores behind a
               | single memory controller than before.
               | 
               | Relative to previous architectures, a core has to wait
               | in line behind more other cores if it needs to grab
               | something not in cache.
        
               | CyberDildonics wrote:
               | That would be memory latency, not memory bandwidth.
        
         | loeg wrote:
         | They've gone from 4-core CCXes with 16 MB L3 as the atomic unit
         | of CPUs, to 8-core chiplets with 32 MB L3. So, broadly,
         | workloads that fit in 32MB but not 16MB will perform much
         | better on Zen 3.
        
         | proverbialbunny wrote:
         | In the video they said they increased branch prediction quite a
         | bit, so I imagine it has more to do with this than it does with
         | latency.
        
         | dathinab wrote:
         | I think in the video they said something along that line.
        
         | ilaksh wrote:
         | Double the L3 cache size and more cores in the complex.
        
       | shusson wrote:
       | I wonder how they came up with 19%. Surely they could nudge it up
       | to 20% and no one would notice. These are synthetic tests after
       | all right?
        
         | xondono wrote:
         | 1st rule of negotiation, avoid round numbers.
         | 
         | There's a big psychological difference between asking for
         | $50,000 and asking for $52,178.23.
         | 
         | The second looks more believable because it looks like
         | something you got from "running the numbers", even if you just
         | made that up.
        
         | anonymfus wrote:
         | Surely they can also bump the price from $299, $449, $549 and
         | $799 to $300, $450, $550 and $800 respectively, and nobody will
         | notice either.
        
         | vikramkr wrote:
         | 19 sounds more "real" since it isn't a round number
        
           | jacquesm wrote:
           | It looks pretty round to me.
        
             | mrlala wrote:
             | The 9 might be round but the 1 has a bunch of pointy edges.
        
               | vikramkr wrote:
               | The secret is to print all prices and
               | marketing/performance numbers with 8 bit displays. Then
               | you will make all the money.
        
               | jacquesm wrote:
               | man round
        
         | OkGoDoIt wrote:
         | At first impression, a claimed 19% improvement feels more
         | trustworthy and data-driven than a 20%, which might be
         | written off as marketing fluff.
        
         | [deleted]
        
         | neogodless wrote:
         | > The +19% value that AMD is using is taken from internal
         | testing, using the geometric mean of 25 benchmarks involving a
         | mixture of real-world and synthetic.
         | 
         | This is from the article. So yes, some synthetic tests, and
         | some "real-world" benchmarks.
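For reference, the geometric mean is the standard way to aggregate benchmark ratios, since it treats a 2x gain and a 0.5x loss symmetrically. A minimal sketch (the uplift numbers here are made up for illustration, not AMD's):

```python
import math

def geomean(ratios):
    # exp of the mean log == nth root of the product
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

# hypothetical per-benchmark speedups vs Zen 2 (illustrative only)
uplifts = [1.15, 1.22, 1.18, 1.25, 1.16]
print(f"aggregate IPC uplift: {(geomean(uplifts) - 1) * 100:.1f}%")
```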
        
       | marcosscriven wrote:
       | What I'd love to see is an 8-core APU, for a powerful Linux
       | desktop that doesn't need especially good graphics.
        
         | gruez wrote:
         | The Ryzen 7 4700G fits the bill, but it's Zen 2.
        
           | marcosscriven wrote:
           | Sounds perfect, but I read they're OEM-only.
        
         | headmelted wrote:
         | Hoping the PS5 digital edition becomes a perfect Linux
         | workstation with an early days exploit.
        
       | satisfaction wrote:
       | No news on Threadripper?
        
       | twblalock wrote:
       | ~20% more performance for ~50% more money, based on what the
       | 3000 series is selling for. Maybe the 3000 series is a better
       | deal.
       | 
       | The price increases really make Zen 3 less compelling.
        
       | bluecalm wrote:
       | I am waiting for my 3970x to arrive next week (it was a long wait
       | to get a GPU and 256gb of RAM due to shortages) and I am already
       | thinking that maybe I should have waited just a bit longer. That
       | is after not upgrading my Ivy Bridge quad for about 7 years due
       | to lackluster offerings from Intel. It's very exciting these days
       | thanks to AMD!
        
       | buran77 wrote:
       | This looks very promising, with 19% IPC increase and keeping the
       | power envelope. They're calling it "the fastest core on the
       | market". And that's at $549 for 12 cores, $449 for 8 cores, and
       | $299 for 6 cores.
       | 
       | Off topic, it's incredible what a flat tone Mark Papermaster
       | managed to use when saying "I couldn't be more excited to
       | present...".
        
         | modeless wrote:
         | Something about the sound quality in this video makes him sound
         | exactly like a text-to-speech system. It's uncanny.
         | 
         | Single thread performance was the only caveat I cared about vs.
         | Intel. Really tempted to build a new PC now with Zen 3 and
         | Nvidia 3080. If they are actually in stock anywhere.
         | 
         | I don't understand how Intel's stock price has held up in the
         | face of their clear loss of their longstanding most important
         | asset, the lead in single thread performance. I expect Apple to
         | beat them soon as well, putting them in 3rd place.
        
         | animationwill wrote:
         | >> Off topic, it's incredible what a flat tone Mark Papermaster
         | managed to use when saying "I couldn't be more excited to
         | present...".
         | 
         | That's an awesome last name though!
        
       | all_blue_chucks wrote:
       | Glad they skipped the 4000-series branding. Now we can look
       | forward to next year's release of the 5700XT CPU to pair with the
       | current 5700XT GPU.
        
         | ehsanu1 wrote:
         | They didn't skip it, but all the 4000-series CPUs are for
         | laptops AFAIK.
        
           | xondono wrote:
           | They've skipped it _for desktops_.
        
       | intricatedetail wrote:
       | Why do they go with gaming CPUs first, when it doesn't make
       | much difference if the CPU is 6% faster? I was hoping to see a
       | product I could use in a workstation, e.g. a 32-core part with
       | better single-core speed than Intel. From that perspective the
       | launch feels disappointing. I've been needlessly holding onto
       | money since I need a new CPU. I will have to go with a top
       | Intel one. I'll try AMD next year maybe.
        
       | honkycat wrote:
       | What I am really hoping for out of the next-gen AMD offering is
       | value.
       | 
       | GPUs are so insanely expensive anymore, it is so frustrating to
       | want to upgrade my 6 year old 970 GTX and be unable to do a
       | meaningful upgrade without spending over $400.
       | 
       | Edit: 8 year old -> 6 year old Edit: Ryzen 5000 line -> next-gen
       | AMD offering
        
         | dragontamer wrote:
         | > GPUs are so insanely expensive anymore, it is so frustrating
         | to want to upgrade my 6 year old 970 GTX and be unable to do a
         | meaningful upgrade without spending over $400.
         | 
         | A $150 to $200 GTX 1660 Super or Radeon 5500 XT will be much,
         | much, MUCH faster than a GTX 970.
         | 
         | EDIT: Yeah, NVidia and AMD focuses on selling their $500 or
         | $700 or $1500 GPUs. But every generation, they eventually
         | release a $200 GPU with the same tech (just cut down and
         | slower). It's less exciting, but that's all most people need.
        
           | honkycat wrote:
           | GPU Benchmark puts the 1660 Super at +35% benchmark speeds,
           | and the 5500 XT at +12%.
           | 
           | I don't really see the point in upgrading for that level of
           | improvement when I can spend a bit more and get a much more
           | significant upgrade.
        
             | dragontamer wrote:
             | Indeed. It's a sliding scale from $150 all the way up to
             | $700. (Then a jump to the 3090). Performance scales with
             | price very well in the $150 -> $750 range, you can pick
             | pretty much whatever performance suits your budget.
             | (Especially as the "last gen" models drop in price to the
             | levels associated with their performance)
        
         | neogodless wrote:
         | I'm a little confused. The Ryzen 5000 line is CPUs. The GPUs
         | being announced on October 28th will be Radeon 6000s.
         | 
         | But maybe I'm just misunderstanding you a bit!
        
           | honkycat wrote:
           | No, I misspoke. I was talking about the next-gen GPUs but I
           | had a brain fart
        
       | nine_k wrote:
       | I only hope that the wonderful improvements in AMD chips are
       | not made possible by something as "clever" as what Intel did 10
       | years ago, which ended up exploitable in interesting ways.
        
         | formerly_proven wrote:
         | * 25 years ago
        
       | piinbinary wrote:
       | It sounds like "Big Navi" may roughly match RTX 2000-series
       | performance, but not quite meet RTX 3000-series performance
       | (which is better than I was expecting).
        
       | mcraiha wrote:
       | "This will be the only processor (at launch) with a 65 W TDP"
       | - that is a shame. The 3900 got good reviews, but it is very
       | hard to buy, while the 3900X is readily available. So I assume
       | the same will happen with the 5900.
        
         | xondono wrote:
         | It's not like TDP means anything anyway...
        
         | CyberDildonics wrote:
         | In many BIOSes you can adjust the dynamic overclocking with
         | coarse-and-easy or more nuanced settings to get what you
         | want. Often a lot of the power and heat comes from aggressive
         | dynamic overclocking, which uses a disproportionate amount of
         | power.
        
           | loeg wrote:
           | You can also underclock the high end parts by lowering the
           | voltage and/or thermal limits that govern the behavior of
           | CPB/XFR/PBO (the CPU's internal clock-setting mechanism(s) at
           | P0). That might correspond more directly with the benefits
           | associated with a 65W SKU without hampering performance more
           | than necessary.
        
         | OkGoDoIt wrote:
         | They say lower-TDP processors should be available within six
         | months. I think that makes sense; AMD wants to make the
         | marketing splash with really nice high numbers at launch.
        
           | omgwtfbyobbq wrote:
           | I wonder if the lower-TDP versions are whatever didn't make
           | the high performance cut with part of the die disabled. If
           | that's the case they would have to wait a certain amount of
           | time to offer them.
        
           | pier25 wrote:
           | You mean like a 5700X ?
        
         | loeg wrote:
         | "At launch." They're rolling out the highest margin SKUs first,
         | the rest of the lineup will fill out after Nov 5.
        
       | Aardwolf wrote:
       | They mention "wider issue in float and int engines"
       | 
       | What does that mean? wider AVX?
        
       | lhoff wrote:
       | Interesting points:
       | 
       | ZEN3:
       | 
       | - 19% IPC improvement
       | 
       | - 8 Core CCX complex with unified L3 Cache (before 4 cores shared
       | half the L3 Cache)
       | 
       | - Still 7nm process
       | 
       | Ryzen:
       | 
       | - 631 points in single-core Cinebench for the 5900X
       | 
       | - 640 points in single-core Cinebench for the 5950X
       | 
       | - 26% performance increase in 1080p gaming compared to Ryzen 3000
       | 
       | - Models (available November 5th):
       |   - 5950X (16C/32T, 4.9 GHz boost / 3.4 GHz base, 105 W TDP, $799)
       |   - 5900X (12C/24T, 4.8 GHz / 3.7 GHz, 105 W TDP, $549)
       |   - 5800X (8C/16T, 4.7 GHz / 3.8 GHz, 105 W TDP, $449)
       |   - 5600X (6C/12T, 4.6 GHz / 3.7 GHz, 65 W TDP, $299)
       | 
       | Radeon:
       | 
       | 6000 Series launch on October 28th
        
         | jeffbee wrote:
         | That cinebench score is about 6% higher than the highest score
         | on Anandtech, which is an Intel laptop part. Not sure which I
         | will consider more vaporware.
         | 
         | https://www.anandtech.com/bench/CPU-2020/2758
        
         | scuget wrote:
         | The TDP is really good. Even by "being conservative about our
         | specs" standards.
        
         | baybal2 wrote:
         | They, again, use an interesting binning trick:
         | 
         | They sell two low-bin dies on a single package as a superior
         | product to one good-bin die, and delay the two-good-die
         | release:
         | 
         | $ per core: 16C $50, 12C $46, 8C $56, 6C $50
         | 
         | $ per die: 16C $400, 12C $275, 8C $449, 6C $299
         | 
         | This time though, they decided to linearise boost clocks
         | against place in the lineup: i.e. the 8C now boosts below the
         | 12C and 16C.
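The per-core/per-die arithmetic above can be checked from the launch MSRPs. Assumptions: the 12C/16C parts carry two core chiplets and the 6C/8C parts one (per AMD's chiplet design), and figures are rounded half-up to whole dollars:

```python
def to_dollar(x):
    return int(x + 0.5)  # round half up, e.g. 274.5 -> 275

# (cores, launch MSRP in $, core-chiplet dies on the package)
lineup = [(16, 799, 2), (12, 549, 2), (8, 449, 1), (6, 299, 1)]

for cores, price, dies in lineup:
    print(f"{cores}C: ${to_dollar(price / cores)}/core, "
          f"${to_dollar(price / dies)}/die")
```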
        
           | pedrocr wrote:
           | What would be the 2 good die release? They've already
           | announced the line-topping AM4 part that uses all the cores
           | from the two dies. Isn't the next step up a four die part in
           | a Threadripper packaging on a different socket with some
           | cores disabled?
        
             | baybal2 wrote:
             | The two-good-die part is the 5950X, and I believe they
             | will delay it for a few months like the 3950X.
        
               | Nursie wrote:
               | On sale November 5th, according to the presentation.
        
               | pedrocr wrote:
               | They've already announced it. You're assuming the retail
               | availability will take a while?
        
         | vbezhenar wrote:
         | 5950X base frequency is 3.4 Ghz according to anandtech.
        
           | lhoff wrote:
           | Thanks. Wasn't shown on the slides.
        
         | greggyb wrote:
         | That 5950X (and its predecessor) seems like voodoo with the
         | core count, clock frequency, and TDP (yes, I know TDP is a
         | flawed, flawed number - it's still impressive).
        
           | zanny wrote:
           | TDP is about as meaningful on CPUs now as nm is in fab tech.
           | 
           | In practice the 3950X pulled up to ~225 W running maxed-out
           | AVX2 workloads, ~300 W if you overclocked it. The 3900X
           | pulls up to ~190 W at stock boosts. Both are called "105W
           | TDP" parts.
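For context, AMD's TDP number is not the socket power cap. On AM4 the stock sustained package limit (PPT) is widely reported as roughly 1.35x the advertised TDP, which already explains a "105 W" part drawing ~142 W; the ~225-300 W figures come from raised or overclocked limits:

```python
# Stock AM4 rule of thumb: PPT (package power tracking, the sustained
# package draw ceiling enforced by firmware) ~= 1.35 * advertised TDP.
for tdp_w in (65, 105):
    print(f"{tdp_w} W TDP -> ~{round(tdp_w * 1.35)} W PPT")
```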
        
           | GuB-42 wrote:
           | Maybe not the voodoo you are thinking about but on the GPU
           | side, the 3Dfx Voodoo was indeed a groundbreaking 3D
           | accelerator card.
           | 
           | At the time, 3Dfx's main competitors were Nvidia and ATI.
           | Nvidia eventually bought 3Dfx, and AMD bought ATI. So
           | technically, voodoo is on the side of AMD's competition.
        
             | greggyb wrote:
             | Intel's name for its CPU architecture is "Core."
             | Nevertheless, AMD discusses its CPU cores. "voodoo" and
             | "Voodoo" need not be the same (;
        
           | bob1029 wrote:
           | I second this reaction. Historically, clock speeds scaled
           | inversely with the # of cores. Seems like the efficiency is
           | overtaking other constraints at this point.
        
             | loeg wrote:
             | The achievable all-core clock will inevitably scale
             | inversely with # of cores, in practice. The advertised
             | single core and all-core clocks are some combination of
             | binning and pure marketing.
        
               | greggyb wrote:
               | With most modern PC processors, both GPU and CPU, one of
               | the primary limitations is thermal headroom. There are
               | features and technologies with varying names across
               | brands and processors that essentially do the same thing:
               | run at the maximum clock that the current thermal
               | situation will support.
               | 
               | From my personal experience, my Threadripper 3970X will
               | happily maintain ~4.4GHz all-core (rated for 4.5GHz max
               | single-core) so long as I can keep the temperature in or
               | below the 70s, with no overclocking.[0] There are power
               | limits as well, but rated performance numbers are within
               | the power limits. Overclocking can put you past the
               | marked power limits, and certainly needs ample cooling.
               | 
               | [0] Granted, I need to pump cold air into the case to
               | maintain the temperature, but that's a limitation of my
               | current thermal solution. At some point I'll probably
               | upgrade to an excessive water cooling solution (:
        
               | jrockway wrote:
               | That's an interesting piece of data and I'm glad you
               | posted it. I also use a 3970X and can't get it to 4.4GHz
               | even with all but one CCD disabled (much less all-core).
               | I am on air cooling, though, and suspect that switching
               | to water cooling would help a lot. I hit 90C almost
               | instantly under load. (I use the automatic overclocking
               | and can do 4.2GHz all core; much better than the
               | specified 3.7GHz all core.)
               | 
               | Since I don't run into Threadripper owners very often,
               | I'm wondering if yours also has a pretty high idle power?
               | Mine idles at 80-90W (reported; 200+W from the wall)
               | which is surprising to me coming from the Intel world. So
               | much electricity wasted simply because I am too lazy to
               | turn off my computer.
        
               | amarshall wrote:
               | I have a 3960X on Gigabyte Aorus Master with an RX480,
               | 1080 Ti, X520 NIC, 3x SSDs, and my total-system idle on
               | Linux is 180-210 W. I do agree that the high idle is
               | frustrating, as it dumps a lot of heat into the room.
        
               | greggyb wrote:
               | I have a hunch that I won the silicon lottery with it,
               | though I haven't confirmed with any overclocking. I'm
               | happy to dive deeper if you want. Below is a basic
               | summary.
               | 
               | I have 5x140mm intake fans, with a Noctua NH-U14S TR4-SP3
               | cooler. That runs with push-pull 140mm fans, 2000RPM
               | in/1500RPM out.
               | 
               | This reaches steady state within a few minutes under
               | load. With an open case, it will clock down to ~4.2GHz
               | after 5-10 minutes. With a closed case, that is faster.
               | For fun, I ducted the A/C vent in the room into the case
               | and cranked it. It stayed reliably up around 4.4GHz all
               | core. Technically, I think it would reach a lower steady
               | state clock/higher temp if I left it for days, as it does
               | noticeably warm the room.
               | 
               | I don't recall idle power, but yeah, it's warm.
        
               | formerly_proven wrote:
               | The difference with watercooling for CPUs seems to mostly
               | come down to lower noise, and only marginally better
               | thermals.
        
               | greggyb wrote:
               | I can get much more surface area of radiator than can
               | reasonably be reached by the heat pipes on an air cooler.
               | There are limits to how many radiators you can
               | effectively leverage in a cooling loop.
               | 
               | Additionally, a loop with a large volume of liquid offers
               | much more thermal buffer before reaching a steady state
               | temperature.
               | 
               | Most air coolers will be heat saturated within a minute
               | or so. A water cooling loop may maintain lower
               | temperature for minutes to tens of minutes. So even with
               | two solutions that otherwise reach the same steady state
               | temperatures (and therefore throttle equally), you may
               | see better real world performance out of the water
               | cooling solution.
               | 
               | I'll note, I would be building an open loop, not using an
               | AIO/closed loop cooler. My case has room for 7x140mm of
               | radiator in a couple of configurations. I would probably
               | use 5x of that in one 420mm radiator and a 280mm
               | radiator. This should offer much more cooling capacity
               | than any tower cooler.
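The thermal-buffer point above lends itself to a back-of-envelope check. Assuming ~1 kg of water in the loop and 200 W of heat arriving faster than the radiators shed it (both figures are illustrative assumptions, not measurements):

```python
# Lumped thermal mass: dT/dt = P_excess / (m * c)
mass_kg = 1.0       # roughly 1 liter of coolant in the loop (assumed)
c_water = 4186.0    # specific heat of water, J/(kg*K)
p_excess_w = 200.0  # heat absorbed beyond what the radiators reject

k_per_min = p_excess_w * 60 / (mass_kg * c_water)
print(f"loop warms ~{k_per_min:.1f} K per minute")
```

So a liter of coolant soaks up minutes of transient load before temperatures climb appreciably, while the metal of an air cooler heat-saturates in well under a minute.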
        
               | paranoidrobot wrote:
               | I can only offer anecdata, but I built a Ryzen 3600 based
               | desktop earlier in the year.
               | 
               | Initially I used the stock cooler, but it idled at ~45C,
               | and the moment I did anything approaching a load it
               | immediately shot to 90C. This was in a room with ambient
               | at around 23C.
               | 
               | After getting annoyed for a while I swapped for an AIO
               | Liquid cooler and hey-presto it now idles at 30C and when
               | maxed out - 75C.
        
               | greggyb wrote:
               | That's not really a good air vs water comparison. You'd
               | have gotten similar results if you swapped for a better
               | tower cooler as well. The stock coolers are basically
               | built to cover base clock and a bit of turbo.
        
               | baybal2 wrote:
               | The thermal wall of ~100 W/cm2 is not so much about how
               | much heat you can sink as about how much you produce.
               | 
               | The heat cannot leave the silicon itself quickly enough
               | with modern chips.
        
             | [deleted]
        
         | [deleted]
        
         | zapnuk wrote:
         | As much as I like the new generation I'm not quite sure about
         | those prices.
         | 
         | Right now I can buy:
         | 
         | - Ryzen 9 3900X (12C/24T) 3.8 GHz - 4.6 GHz => 389 EUR
         | 
         | - Ryzen 7 3700X (8C/16T) 3.9 GHz - 4.5 GHz => 299 EUR
         | 
         | It seems to me that the current Ryzen 9 3900X is insane value
         | compared to the new generation. Sure, its single-core
         | performance is lower by a meaningful amount. But I'd assume
         | that the multi-core performance is WAY better with its 12
         | cores compared to the 6/8 cores of the 5600X/5800X.
        
           | neogodless wrote:
           | If you compare MSRP at launch to retail a year later, you're
           | going to notice a big difference.
           | 
           | In the US, the Ryzen 9 3900X MSRP was $499 but is $70 less
           | now. (Initially, supply was low, and it was selling over MSRP
           | as high as ~$570.)
           | 
           | But they came with coolers... I have a Ryzen 2700X that I got
           | for $230, and I still use the stock cooler. To jump to a
           | Ryzen 5800X plus a cooler would be a huge expense. I will
           | definitely be on the sidelines for the next six months, but
           | then I'll revisit the pricing situation (once motherboard
           | manufacturers release updated 400-series firmware.)
        
             | NikolaeVarius wrote:
             | They don't ship with coolers?
             | 
             | Man the lack of a 5700x is really making it hard for me to
             | not justify getting a 3700x on sale
        
               | coder543 wrote:
               | > Man the lack of a 5700x is really making it hard for me
               | to not justify getting a 3700x on sale
               | 
               | Sounds like a win-win for AMD. Either they sell to
               | customers that demand the best absolute performance, or
               | they sell to customers that demand the best value. Since
               | they offer products to meet both demands, they don't
               | really care which type of customer you are.
               | 
               | Zen 2 is still a fantastic processor, and it will
               | certainly be more affordable than Zen 3 for the immediate
               | future.
        
               | Tuna-Fish wrote:
               | The rumor is that any CPUs with TDPs of 65W or below
               | will ship with coolers, and ones above that will ship
               | without.
        
               | zrm wrote:
               | The thing to watch is then going to be the "5700X", i.e.
               | the Zen3 version of the 3700X, which, if the analogy
               | matches, should have 8 cores and a 65W TDP. It isn't in
               | the initial slate but they left room for it in the
               | numbering.
        
               | Teknoman117 wrote:
               | Anecdotally, all of the PC gamers I know put aftermarket
               | coolers on their CPUs. The box cooler is still in the box
               | taking up space in their closet. The AMD parts especially
               | benefit from additional cooling (higher turbo clocks). I
               | think it makes sense to not include box coolers on parts
               | so high up the performance chart that using a box cooler
               | would just hamstring it.
        
             | zapnuk wrote:
             | Sure, the prices of the new generation will fall over time.
             | 
             | But I still don't quite see how the multi core performance
             | per $ of the new generation will be competitive compared to
             | their previous generation. At least at the lower end,
             | simply because you can buy more cores/threads for roughly
             | the same money - although with a lower clock speed and
             | single core performance.
             | 
             | However, I guess we will know more when meaningful
             | benchmarks are released and this ends anyways when the
             | remaining supply of 3900X is sold out.
        
               | gruez wrote:
               | >But I still don't quite see how the multi core
               | performance per $ of the new generation will be
               | competitive compared to their previous generation
               | 
               | AFAIK that was the case during the 3000 series launch.
               | The newest generation always has worse bang/buck.
        
               | zapnuk wrote:
               | The difference is that the 3000 series introduced 12 Core
               | CPUs that weren't available before AND provided increased
               | gaming performance.
               | 
               | This time around the major reason to upgrade seems to
               | be single-core performance - which I guess follows, as
               | they didn't really go into multi-core performance. But
               | we'll know more when the benchmarks come out, and
               | discussion beforehand is pretty pointless.
        
       | [deleted]
        
       | Roritharr wrote:
       | I've just upgraded my notebook to an HP EliteBook 835 G7, a 13"
       | notebook with a Ryzen 7 4750U. I've decked it out with 64 GB of
       | RAM and a 2 TB SSD. 8 cores, 16 threads, boosting to 4.1 GHz, 3
       | outputs capable of 4K 60 Hz (2x DP over USB-C, 1x HDMI 2.0), 2
       | full-size USB-A ports... and a lot more goodies, all packed in
       | a supremely well-built chassis.
       | 
       | I couldn't ask for more (ok, Thunderbolt, but that's not as
       | valuable as everything else).
       | 
       | I'm VERY happy with its performance and couldn't be more
       | grateful that AMD is providing much-needed competition in the
       | CPU market. I wouldn't have gotten a machine this powerful at
       | this size otherwise.
       | 
       | So yeah, I'll upgrade my desktop to a Ryzen 5950X once I get
       | the opportunity, even if it's just to hold Intel's feet closer
       | to the fire.
        
         | kissiel wrote:
         | I got this 4750U in the T14. The CPU performance is great, but
         | the iGPU is terrible when connected to an external 2160p60
         | screen. Animations for stuff like maximizing window have 2-3fps
         | tops. iGPU from intel in 10510U manages 10+ fps. (5.8 kernel,
         | Gnome 3.36).
        
           | Roritharr wrote:
           | This must be a software issue in Linux, I don't have anything
           | close to these problems in Windows. Perfectly fluid with 2
           | external 2160p60 screens connected.
           | 
           | WSL2 is amazing btw.
        
             | kissiel wrote:
             | Probably infamous radeon drivers. Thanks for the hope :D
        
           | qz2 wrote:
           | Got a bottom end T495s here that quite happily handles an
           | external 4K screen with Ubuntu.
        
         | cashewchoo wrote:
         | Where did you buy it from? I'm in the market for a 13" laptop
         | with 32GB of ram and a ryzen CPU, with no hardware that's not
         | Linux-friendly. So this sounds like a close fit.
         | 
         | But I can't even find the Elitebook 835 on HP's website, or on
         | Amazon.
        
           | pedrocr wrote:
           | Something that fits that description as well is the Lenovo
           | X13/T14/T14s
           | 
           | https://psref.lenovo.com/syspool/Sys/PDF/ThinkPad/ThinkPad_X.
           | ..
           | 
           | https://psref.lenovo.com/syspool/Sys/PDF/ThinkPad/ThinkPad_T.
           | ..
           | 
           | https://psref.lenovo.com/syspool/Sys/PDF/ThinkPad/ThinkPad_T.
           | ..
           | 
           | What I've yet to find is anything similar that also has a
           | more than a 1080p screen. Frustratingly the Lenovo T14/T14s
           | in Intel spec does have a 4K screen.
        
             | rowanG077 wrote:
             | It's so frustrating. I would love a 13 inch Ryzen laptop.
             | But all their screens suck in some way.
        
               | qz2 wrote:
               | I've got a bottom-end T495s which is mostly used as a
               | terminal. The screen is 1080p but quite decent. I like
               | the whole machine, to be honest.
               | 
               | I wouldn't expect the 1" smaller screen on the X395 to
               | be much of a decline in quality.
        
           | phs318u wrote:
           | Same. The best I could find were models with 16GB soldered on
           | the MB. Settled for a System 76 Lemur Pro (10th gen i7).
        
           | Roritharr wrote:
           | I ordered from a small notebook dealer in my area
           | (notebook.de) that offer to upgrade the devices if you ask
           | for it. I was looking for weeks for this special model as it
           | was the first 13" model with the right ports that offered 2
           | so-dimm slots, so I emailed them about it before it was
           | listed to be among the first to receive it.
           | 
           | Funnily enough HP in their own specsheet made the mistake of
           | declaring it as only supporting 32GB which then lead to me
           | having to very forcefully demand them to just order the
           | memory at my risk and install it anyway. Of course it works
           | beautifully.
        
       | [deleted]
        
       | dzonga wrote:
        | When are the AMD APUs going to be out? Those seem like a
        | killer deal for a CPU + GPU combo: casual gaming + dev work.
        
         | Bayart wrote:
         | Probably second half of 2021. They'll get the Threadripper and
         | non-X desktop CPUs out of the door first.
         | 
         | The Zen2 APUs and laptop chips came out a few months ago,
         | practically a year after Zen2 launched.
        
       | teruakohatu wrote:
        | The 5900X has a claimed 26% improvement for gaming. This is huge.
       | Intel is going to have to start shipping fridges with their next
       | batch of CPUs.
        
         | zamalek wrote:
         | You joke, but you really can get phase-change cooling[1]. I'm
         | eagerly awaiting the RDNA2 announcement, as I'm really hoping
         | to complete my current build as my first red box (AMD CPU, AMD
         | GPU).
         | 
         | [1]: http://www.ldcooling.com/shop/14-phase-change
        
           | bserge wrote:
           | Jesus, those prices... yeah they're pretty compact, but if
           | you do it for a desktop, you can do it for 1/2 or even 1/3 of
           | the price by getting a portable air conditioner or a mini
           | fridge (new or used) and modifying it for cooling. Way higher
           | BTU transfer, as well.
           | 
           | There were a few of these DIY projects last time I checked,
           | and they worked well. Downsides: big cooling unit right next
           | to you, loud af unless you mod the fan, as well :D
        
             | pedrocr wrote:
             | Linus Tech Tips did it with an aquarium cooler:
             | 
             | https://www.youtube.com/watch?v=HMtvEbD2MQo
             | 
             | It looks like quite a straightforward setup actually. But a
             | custom water cooling loop is almost surely a better
             | solution.
        
               | bserge wrote:
               | Interesting idea!
               | 
               | Though it seems to me a normal PC water cooler would be
               | cheaper nowadays :D
               | 
               | I watched their industrial fan experiment yesterday, that
               | seems even easier... if you could put your PC in a
               | separate room
               | 
               | https://www.youtube.com/watch?v=EM2G5vLGcQQ
        
               | rowanG077 wrote:
               | I'd actually worry about erosion with that industrial fan
               | setup.
        
               | pedrocr wrote:
               | I'm planning a new house and one of the ideas for the
               | floor plan is precisely to have a small room just behind
               | the study to put the noisy stuff and just have the
               | peripherals in the living space. And once again LTT has
               | done it by running all the PCs in the home from a single
               | rack (even a single EPYC machine with virtualization) and
               | routing fiber over the house to where the peripherals
               | need to be:
               | 
               | https://www.youtube.com/watch?v=jvzeZCZluJ0
        
           | toast0 wrote:
           | I think it was a reference to Intel's overclocking stunt.
           | 
           | https://www.tomshardware.com/news/intel-28-core-
           | processor-5g... (best reference I could find quickly)
        
         | overcast wrote:
         | 26% at 1080, which no one buying a $500+ CPU every generation
         | is running. The benefits will be minimal at 2k and 4k, with the
         | same GPU, and a decent processor within the last 5-10 years.
        
           | [deleted]
        
           | neogodless wrote:
           | While I agree with the sentiment here... this has been the
           | argument Intel has been using as a "last excuse to buy Intel
           | over AMD" - if you buy a fast enough video card, but play
           | your games at 1080p on a really high refresh rate monitor...
           | the gaming performance was better on Intel.
           | 
            | So AMD focuses on it here to say "look, your very last
            | excuse for choosing Intel over AMD is no longer valid."
           | 
           | Of course I do more "non-gaming" than gaming, so it wasn't
           | very important to me in the first place, and I don't spend
           | enough on a graphics card for this to matter. But I want a
           | lot of cores for fast compilation and great multi-tasking
           | with containers and virtual machines.
        
             | intricatedetail wrote:
             | The AMD software to control CPU doesn't work with
             | virtualisation enabled. They have been ignoring requests to
             | fix it for years.
        
             | overcast wrote:
              | Yes of course, for non-gaming, dev, and media work these
              | will be beasts. I was just confused about why they were
              | so focused on 1080p gaming performance benefits. Buying a whole new
             | setup if you're running any relatively modern CPU would be
             | a waste of money for gaming. At 1080 you're probably
             | already killing it in framerate, and at 2k+ the benefits
             | just aren't there.
        
               | read_if_gay_ wrote:
               | Because at 1080p you can actually compare the performance
               | maybe?
        
           | formerly_proven wrote:
           | What's the difference between 1080p and 2K?
        
             | p1necone wrote:
             | Generally GPU load scales with resolution and graphical
             | fidelity, while CPU load mostly just scales with framerate
             | irrespective of resolution or graphics settings - so you
             | might be CPU bottlenecked at max settings 1080p with an
             | average CPU and a mid-high end GPU, but even with the
             | current highest end GPUs and a mid range CPU you're likely
             | not bottlenecked by the CPU at 1440p or 4k because the GPU
             | isn't pushing out as many frames.
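              | 
              | A toy sketch of that scaling model (the millisecond
              | figures are purely hypothetical, just to illustrate the
              | max() behaviour):

```python
# Toy model of the CPU/GPU bottleneck described above. The 5 ms and 4 ms
# figures are made up for illustration; only the GPU term scales with pixels.
def fps(cpu_ms, gpu_ms_at_1080p, pixels_vs_1080p):
    gpu_ms = gpu_ms_at_1080p * pixels_vs_1080p  # GPU work grows with resolution
    frame_ms = max(cpu_ms, gpu_ms)              # the slower side sets the pace
    return 1000 / frame_ms

print(fps(5, 4, 1.0))  # 1080p: CPU-bound -> 200.0 fps
print(fps(5, 4, 4.0))  # 4K (4x the pixels): GPU-bound -> 62.5 fps
```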
        
             | komodo wrote:
             | 2560x1440 is often called "2k"
        
             | manigandham wrote:
             | 2K has no official designation and is sometimes used to
             | describe 1440p.
        
             | 0-_-0 wrote:
             | Nothing, AFAIK
        
             | smcgaw wrote:
             | The higher the resolution you run a game at the more likely
             | it is that the GPU becomes the bottleneck for the frame
             | rate.
        
             | overcast wrote:
             | 78% more pixels at 2560x1440, also the performance sweet
             | spot for high end GPUs.
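              | 
              | (Quick check on that pixel arithmetic:)

```python
# 2560x1440 vs 1920x1080 pixel-count ratio, per the figure quoted above.
extra = 2560 * 1440 / (1920 * 1080) - 1
print(f"{extra:.0%}")  # -> 78%
```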
        
         | logicOnly wrote:
         | What part of gaming?
         | 
         | Loading times are entirely based on hard drive/SSD times.
         | 
         | Visual takes the majority of processing and is done on the
         | video card.
         | 
         | So what exactly is improved?
        
         | theandrewbailey wrote:
         | It wouldn't be the first time Intel did that.
         | 
         | https://www.tomshardware.com/news/intel-28-core-cpu-5ghz,372...
         | 
         | > Unfortunately, it turns out that Intel overclocked the
         | 28-core processor to such an extreme that it required a one-
         | horsepower industrial water chiller.
        
       | tmd83 wrote:
        | A long time ago, perhaps up until even the Phenom II, I used
        | to hear people say AMD systems felt snappier even though they
        | weren't benchmarking as well.
        | 
        | Does anyone have a solid explanation? Was it just rumours and
        | fanboyism, or was there something to those systems? I wondered
        | about context-switch or syscall overhead: was that ever
        | meaningfully different?
        
         | erulabs wrote:
          | Way back in the day, particularly in the Athlon / Applebred
          | era, AMD chips had the first on-die memory controller for
          | consumer processors. While they had fairly significantly
          | lower IPC than some Pentiums (remember the Pentium D egg
          | cooker?), they had much better memory latency. Hard to prove
          | anything, but some people claim they were quicker.
        
       | Unsimplified wrote:
       | AMD keeping the same standard TDPs at 105W and 65W was such a
       | good design decision. Clear contrast to Samsung's oft-criticized
       | MLC to TLC move with their 980 Pro.
       | 
       | People care about both absolute TDP and power efficiency.
        
       | Thaxll wrote:
        | Zen3 performance looks really good, however the GPU ones ... I
        | think it's going to be pretty bad compared to the RTX 30xx.
        
         | ebg13 wrote:
         | > _however the GPU ones ... I think it 's going to be pretty
         | bad compare to the RTX 30xx._
         | 
         | Their teaser numbers seemed to show par performance at 4k
         | against the 3080.
        
         | Bayart wrote:
         | Their GPU numbers show parity with the 3080, which is what
         | everybody expected.
         | 
         | They probably don't have their final lineup yet, much less
         | pricing.
        
       | djsumdog wrote:
       | When is Intel's next chip announcement? Is there anything in the
       | pipes to make them more competitive with home enthusiasts again?
       | My main Linux box and NAS run Ryzen and I'm a fan, but I don't
       | want to see competition leave the market. I was hoping Intel
       | would finally release stand-alone workstation graphics cards.
        
         | vbezhenar wrote:
         | They promised 2021 Q1. Not sure about announcement date. I
         | think that they won't be competitive until 7nm and that won't
         | happen anytime soon.
        
           | leetcrew wrote:
           | imo, that was just a flailing attempt to spoil the zen 3
           | launch. the fact that they didn't even give some vague
           | details on performance in the rocket lake announcement is
           | telling. my guess is they don't currently have a path to a
           | competitive product in Q1, or they would have said so.
        
         | emddudley wrote:
         | Intel's next move is the 11th Gen Rocket Lake, set for Q1 2021.
         | Doesn't seem very compelling to me.
         | 
         | https://www.anandtech.com/show/16145/intel-confirms-rocket-l...
        
           | bhouston wrote:
           | The integrated Xe graphics that actually performs well is the
           | main game changer. For ultrabooks this could be quite nice if
           | it isn't super hot.
           | 
           | https://www.pcmag.com/opinions/intel-iris-xe-is-
           | here-5-ways-...
        
         | warrenm wrote:
         | Intel's fabled "tick-tock" releases haven't been interesting in
         | what...a decade or more?
        
           | distances wrote:
           | Intel hasn't been doing ticks in years. I haven't kept track,
           | but it's been now more like tick-tock-tock-tock-tock-tock.
        
       | gjsman-1000 wrote:
       | What is a little disappointing is the big increase in MSRP. Then
       | again, it's a pandemic and AMD is the leader in almost
       | everything, so it is somewhat justified. However, Ryzen 3000 will
       | still be around for the more bang-buck focused builders.
        
       | Finnucane wrote:
       | I was literally about to order a new pc with a 3900. So do I get
       | that or wait for this? Will it be sold out for months?
        
         | greggyb wrote:
         | Do you need the PC today?
         | 
         | Yes -> Buy it now.
         | 
         | No -> Wait for third party reviews with benchmarks that
         | represent your workload. Decide on the price/performance
         | tradeoff then.
         | 
         | In general, there will always be a better processor coming
         | around the bend. Intel promises its next release in Q1 2021.
         | Zen 4 is on the roadmap and will be coming probably in a year
         | and a bit. There are a few reasons that I think it's worth
         | waiting right now (if you can):
         | 
         | - We are less than a month from release
         | 
         | - AMD is suggesting significant performance improvements over
         | the current gen
         | 
         | - AMD's claims for its Ryzen series have largely held true when
         | third parties release their own benchmarks
         | 
         | I would not suggest pre-ordering. Always wait for third party
         | reviews and benchmarks.
        
         | dvfjsdhgfv wrote:
         | Invest in the motherboard, buy a 3600 now and replace it later,
         | maybe with something even better than 3900.
        
         | bob1029 wrote:
         | You could get the 3900 today and upgrade to the 5000 series
         | after the fact. Socket compatible.
        
           | jacquesm wrote:
           | That's not exactly free, whereas waiting for a bit has zero
           | overhead.
        
             | entropicdrifter wrote:
             | zero monetary overhead*
             | 
             | I agree with you in principle, but time and opportunity
             | have value worth considering too
        
           | boardwaalk wrote:
           | But, don't waste money on a 3900 you're going to replace. The
           | 3300x and 3600 are really good placeholder options (I have
           | the 3600 for this purpose).
        
             | Finnucane wrote:
             | That actually seems like a reasonable idea--get the cheaper
             | cpu now, which would still be a big boost over my old
             | system, then swap later when supplies allow.
        
               | myself248 wrote:
               | That's what I did last year, with a 3200G. It's a
               | functional placeholder for now, still a significant bump
               | over what it replaced, but only temporary.
               | 
               | My plan is to wait for Socket AM5 to come out, and get
               | the last-best AM4 offering to stuff into my existing
               | motherboard. That may be complicated by BIOS and chipset
               | support (which effectively means a socket-generation-
               | increment even if the physical connector is the same),
               | but we'll see how far my B450 can take me. And at the
               | time, I'll be adding a discrete GPU, since I'll surely be
               | going to a non-APU chip.
        
         | blihp wrote:
         | If you want the best bang for the buck, wait for the Black
         | Friday sales (which usually start in early Nov) when resellers
         | will probably be heavily discounting the 3xxx series. If you
         | want the absolute best performance, wait a few months on the
         | 5xxx and I'm sure you'll see some discounting. You can still
         | buy the 2xxx series new for ~$200... the old models don't
         | disappear just because the new model comes out, they get
         | cheaper.
        
       | WaxProlix wrote:
       | Cinebench R20 score of 631 is bonkers. Hopefully pricing stays
       | nearly in line with the 3000 series. Very exciting for what's
       | basically a same socket incremental update.
       | 
       | Edit: 549 usd for the 5900X, 449 for the 5800X, 299 for the 5600X
        
         | [deleted]
        
         | brundolf wrote:
         | $299 for the 5600X
        
       | koluna wrote:
       | At this point, picking AMD for your CPU becomes such a no-
       | brainer. Compounded by Intel's security issues and all.
        
         | vbezhenar wrote:
            | ECC support in AMD systems is strange. It's supported in
            | theory, but in practice there are issues: one has to
            | carefully pick a motherboard, and even then it's some kind
            | of unsupported configuration. Intel sells cheap and fast Xeons
         | with proper ECC support. I'm very interested in AMD CPUs and I
         | hope that ECC story will improve, so I can buy some kind of
         | workstation-branded motherboard and use fully supported ECC
         | configuration.
        
           | adrian_b wrote:
           | You are right, but if you buy a motherboard that claims to
           | support ECC, you will usually not have any problems.
           | 
           | For example I am using an ASUS Pro WS X570-ACE, which is a
           | reasonably priced workstation board ($300) with a Ryzen 7
           | 3700X and ECC memory.
           | 
           | ECC worked OK, without any problems. I have also used a
           | couple of ASRock MB's and ECC also worked OK on them.
           | 
           | I would much prefer more guarantees from AMD, but rather than
           | buying a slow Intel CPU I prefer a little risk with AMD.
        
           | grishka wrote:
           | Dumb question. Why would one want ECC in something that isn't
           | a server? How often do bits in memory actually flip by
           | themselves for it to be warranted?
        
             | JoeAltmaier wrote:
             | https://stackoverflow.com/questions/4109218/do-gamma-rays-
             | fr...
             | 
             | tl;dr: one bit error in 4GB every 72 hours
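              | 
              | Converted to errors per GB-hour (just unit arithmetic on
              | that figure, not a fresh measurement):

```python
# "One bit error in 4 GB every 72 hours" (the figure above), converted
# to errors per GB-hour -- unit arithmetic only, not new data.
rate = 1 / (4 * 72)
print(f"{rate:.5f} errors per GB-hour")  # -> 0.00347 errors per GB-hour
```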
        
               | coder543 wrote:
               | I wish someone with a larger server farm would count the
               | number of reported ECC errors per GB-hour and give us
               | updated numbers. That StackOverflow question is about 10
               | years old now, and I think it's relying on data even
               | older than that.
        
               | moonchild wrote:
               | Yes; per the internet archive[1], the data's at least 20
               | years old.
               | 
               | 1. https://web.archive.org/web/20010612184424/http://www.
               | boeing...
        
               | bentcorner wrote:
               | Someone once did a bit-squatting experiment and
               | "estimates that 614,400 memory errors occur per hour
               | globally".
               | 
               | https://nakedsecurity.sophos.com/2011/08/10/bh-2011-bit-
               | squa...
               | 
               | It would be interesting to repeat this experiment today.
        
           | p_l wrote:
           | The difference is that AMD doesn't disable ECC support in any
            | model line, while Intel disables it, sometimes without rhyme or reason.
           | 
           | Extra funny when you notice that certain Xeon lines are
           | actually i7 with different branding and ECC left enabled.
           | 
           | The problems with ECC on AMD comes from consumer vendors not
           | putting the time into testing, and possibly not even
           | connecting the ECC lines (remember, ECC requires putting
           | additional traces between memory controller and memory
           | slots). Then you have to deal with whatever customisation the
           | vendor of the motherboard did to firmware - their changes
           | might have resulted in effective disabling of ECC.
           | 
           | With Intel, you either have the same game as above (with the
           | non-Xeon ECC-capable parts), or pay through the nose for
           | comparable performance "workstation/enterprise" gear, as ECC
           | support being used for market segmentation by intel is pretty
           | much an open secret.
        
             | vbezhenar wrote:
             | You don't need to pay through the nose with Intel, at least
              | for the latest generation. The 10900K costs $499; the
              | 10900K with ECC is called the Xeon W-1290P and costs
              | $539. That's 8% extra. ASUS
             | Pro WS W480-Ace is $280 which is reasonable cost for a good
             | motherboard.
        
               | spockz wrote:
               | How do you discover which Xeon is the i7 version of the
               | same chip or vice versa? Is it just spelunking through
               | Intel Ark to find a same generation, clock speeds Xeon or
               | is there more to it?
        
               | vbezhenar wrote:
               | Something like that, yes. Specs are almost identical.
               | There are CPU families so I picked best models from a
               | family and compared them.
        
               | als0 wrote:
               | Would be interesting to compare die shots.
        
               | posnet wrote:
                | Many of the Xeon Ws are just rebranded i7s; others
                | might be as well. You can tell because they use the
                | DMI chipset link instead of the Xeon-exclusive UPI
                | interconnect.
        
               | StillBored wrote:
                | I picked up an i3-9100 a few months back because it was a
               | low cost processor with ECC (for an edge/embedded
               | solution). The problem then becomes the motherboard, and
               | it seems intel has just shifted the ECC tax from the
               | processor to the motherboard/chipset. That core fits on a
               | lot of low cost motherboards, but to enable ECC requires
               | about another $100 chipset tax.
        
           | dragontamer wrote:
            | ECC support is iffy at the consumer brands. It's a "we won't
           | disable it, but we won't guarantee that it works" sort of
           | deal.
           | 
           | If you want verified ECC support, you need to buy the
           | workstation chips and motherboards: Threadripper Pro or
           | EPYCs.
        
             | TwoNineA wrote:
             | The latest BIOS for ASUS Prime X370 Pro has ECC explicitly
             | as a configuration option. Seems to work in Linux. I am
             | using 2x8R ECC 2666Mhz RAM from Kingston.
        
             | JackMcMack wrote:
             | ECC is supported on the Pro series as well. My home server
             | is running Ryzen 5 Pro 4650G (yay for integrated graphics)
             | and Asrock B550M.
             | 
                | I went through the effort of using QVL memory, but actually
             | testing ECC is a bit more difficult. While ECC is supported
             | & active, memory errors are sadly not reported to the OS. I
             | remember seeing a forum post somewhere of somebody
             | overclocking/undervolting the ram to force errors, but I
             | can't seem to find it right now. There's a fine line
             | between stable, stable with recovered errors, and unstable.
        
               | vbezhenar wrote:
               | That's what I'm talking about and I wouldn't call it
               | "fully supported". I want to know about ECC statistics.
               | It's important because if I can see that ECC recovers
               | abnormally high number of errors, it's likely that I need
               | to replace RAM right now.
        
               | JackMcMack wrote:
               | I don't disagree with you, but when was the last time you
               | ever had to replace a stick of ram?
        
               | adrian_b wrote:
               | ECC statistics are available for Ryzen if you use Linux
               | (configured to load the appropriate EDAC module).
               | 
               | So on Linux, ECC is fully supported, even with Ryzen.
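                | 
                | A minimal sketch of reading those counters (the sysfs
                | layout follows the kernel's EDAC docs, but exact files
                | vary by driver, so treat the paths as an assumption):

```python
# Minimal sketch of reading Linux EDAC error counters from sysfs.
# The ce_count/ue_count layout follows the kernel's EDAC documentation,
# but exact files vary by driver -- treat the paths as an assumption.
from pathlib import Path

def edac_counts(base="/sys/devices/system/edac/mc"):
    """Return {controller: (corrected, uncorrected)} per memory controller."""
    counts = {}
    for mc in sorted(Path(base).glob("mc*")):
        ce = int((mc / "ce_count").read_text())  # corrected (recovered) errors
        ue = int((mc / "ue_count").read_text())  # uncorrected errors
        counts[mc.name] = (ce, ue)
    return counts
```

A steadily climbing ce_count is the "replace this stick" signal mentioned upthread.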
        
               | raegis wrote:
               | What operating system? On Linux, I had a memory stick
               | that was not completely inserted, and periodically I saw
               | corrected memory errors reported in the logs until I
               | fixed the issue.
        
           | baybal2 wrote:
            | A very good way to undercut low-end Xeons, which were
            | bought solely as insurance against an out-of-the-blue
            | crash.
        
           | DCKing wrote:
           | AMD guarantees ECC works with their workstation focused
           | Threadripper line. On Ryzen, it only works if you do your
           | homework picking hardware.
           | 
           | It's a shame they made the TR platform much more expensive in
           | the last generation.
        
             | henriquez wrote:
             | Is that a new thing? I have an older Threadripper and my
             | motherboard definitely doesn't support ECC (even though the
             | CPU does)
        
           | loeg wrote:
           | What Xeon and Xeon motherboard with ECC support are "cheap?"
           | 
           | In that price range, AMD markets Threadripper and Epyc, both
           | with proper ECC support.
           | 
           | ECC support in Ryzen systems is up to the motherboard
           | manufacturer, and some manufacturers advertise support very
           | clearly. E.g., at least a couple years ago, ASRock explicitly
           | supported ECC in all their Ryzen motherboards.
        
         | mhh__ wrote:
          | For a normal machine that does look to be the case, but I've
          | always found AMD's manuals and software quite lacking, so it
          | may be worth going with Intel just for the tooling (e.g.
          | performance counters seem to be much better documented on
          | Intel).
        
         | f311a wrote:
         | It's not that simple for computing. I heard that in Data
         | Science Intel is still preferred because of better AVX support.
         | 
         | There are also things like Intel MKL. A lot of software can use
         | it when compiled on a user machine.
        
           | adrian_b wrote:
           | The new Zen 3 cores are expected to have a higher AVX
           | throughput per cycle than all Intel CPUs, except the most
           | expensive models of Xeon Gold, Platinum or W and the HEDT i9
           | models that have dual AVX-512 FMA units.
           | 
           | The cheaper models with only one AVX-512 FMA unit have a
           | lower throughput, which will be exceeded by Zen 3, even at
           | the same clock frequency.
           | 
           | For multi-threaded tasks, Zen 3 CPUs will have a higher
           | clock-frequency than any Intel CPU, so it is expected that
           | any older Intel CPU will be beaten easily.
           | 
            | It remains to be seen how the Ice Lake Server CPUs, to be
            | launched before the end of the year, will perform.
            | However, miracles are not expected, because these are
           | using the older Intel 10 nm technology, not the improved one
           | used by Tiger Lake.
        
             | gnufx wrote:
             | > The cheaper models with only one AVX-512 FMA unit have a
             | lower throughput, which will be exceeded by Zen 3, even at
             | the same clock frequency.
             | 
             | I think there are relatively few with only one FMA (and
             | there's no way of interrogating them at runtime, sigh) but,
             | yes, if you know you have one, you use AVX2 for GEMM
             | kernels, as a specific example.
             | 
             | For general computational workloads, you're likely better
             | off with more AVX2 cores and high memory bandwidth, even
             | without whatever improvements there are in Zen 3.
        
           | Const-me wrote:
           | > is still preferred because of better AVX support
           | 
           | AVX1 and AVX2 performance is on par.
           | 
           | For instance, vmulpd AVX1 instruction is faster on AMD, 3
           | versus 4 cycles. vpaddd AVX2 instruction is same at 1 cycle
           | latency. vfmadd132pd FMA instruction is slightly faster on
           | Intel, 4 versus 5 cycles. Throughput is the same across these
           | two. I was looking at AMD Zen2 versus Intel Ice Lake.
           | 
           | Some Intel chips have AVX512. Still, many practical
           | applications don't need that amount of SIMD wideness, and
           | these who do are often a good fit for GPGPUs.
           | 
           | > There are also things like Intel MKL
           | 
           | There're vendor-agnostic equivalents like Eigen.
        
             | DoofusOfDeath wrote:
             | IIUC, Intel uses the term "AVX-512" as an umbrella term,
             | and different processors support different subsets of
             | "AVX-512" instructions [0].
             | 
             | AFAIK this is a break from previous Intel nomenclature,
             | where any processor supporting e.g. "SSE4.2" instructions
             | was guaranteed to support _all_ SSE4.2 instructions.
             | 
             | I'm concerned that sometimes this causes confusion when
             | talking about processor -- software compatibility.
             | 
             | [0] https://en.wikipedia.org/wiki/AVX-512
        
               | loeg wrote:
               | I think the consideration GP was trying to make is that
               | Zen (at least 1 and 2 -- and I haven't heard otherwise
               | for 3) do not support 512-bit wide AVX registers at all.
        
             | f311a wrote:
             | For some computer vision tasks, Tensorflow is much faster
             | when you have AVX512.
             | 
             | Also https://www.intel.com/content/www/us/en/artificial-
             | intellige...
        
               | Const-me wrote:
               | These results were achieved on dual-socket Xeon E5 2699v4
               | (the architecture is 5 years old and has no AVX512, they
               | optimized for AVX2) and on Xeon Phi 7250 (that thing does
                | have AVX512, but that's not so much a processor as a
                | specialized accelerator with 68 cores).
               | 
               | Also Tensorflow is awesome fit for GPGPUs and is usually
               | way faster on them.
        
             | singhrac wrote:
             | > There're vendor-agnostic equivalents like Eigen.
             | 
             | That's looking at the wrong layer of the hierarchy, I
             | think. There are many open-source linear algebra libraries,
             | but iirc they all link against something that has a
             | BLAS/LAPACK API. That might be something like MKL,
             | OpenBLAS, ATLAS, etc.
             | 
             | When I last checked, MKL was much faster than its
             | competitors, and is only available (at full speed) on Intel
             | CPUs. Has that changed?
        
               | gnufx wrote:
               | > When I last checked, MKL was much faster than its
               | competitors
               | 
               | That has never been generally true in my experience
               | measuring over the years. It has been true at times for
               | specific cases, e.g. OpenBLAS until it got avx512 support
               | on a par with MKL (at least for serial DGEMM -- I've
               | forgotten quite how the rest of level 3 goes).
        
               | Const-me wrote:
               | > iirc they all link against something that has a
               | BLAS/LAPACK API
               | 
               | Eigen can consume these I think, but they are optional.
               | It has its own implementation of these, written in
               | manually vectorized C++, with intrinsics, up to and
               | including AVX512 (controlled with macros). For
               | parallelization it uses OpenMP provided by the compiler
               | (also controlled with a macro).
               | 
               | > Has that changed?
               | 
               | It's hard to directly compare Eigen to the rest of them.
               | They don't do the same thing.
               | 
               | One feature of Eigen is lazy evaluation. Expressions like
               | a+b or a*b don't return another matrix or vector; they
               | return a placeholder object that only computes something
               | on assignment. For complicated expressions this can be a
               | huge win, e.g. r=a+b+c+d will read from a,b,c,d, compute
               | sum of the 4 on the fly, and write into r without
               | temporary copies in memory.
               | 
               | However, it also makes Eigen's source code outright
               | and hard to debug or optimize.
               | 
               | Anyway, based on the old pics there
               | http://eigen.tuxfamily.org/index.php?title=Benchmark they
               | are more or less comparable. Things like alpha*X+beta*Y
               | were much faster in Eigen (probably due to that lazy
               | evaluation thing), Hessenberg was much faster in MKL, in
               | general they are close.
        
           | gnufx wrote:
           | > There are also things like Intel MKL
           | 
           | There are also things like OpenBLAS, and BLIS (which AMD
           | support).
        
         | joshstrange wrote:
         | I'm searching now but does AMD have an alternative/answer for
         | Intel's QuickSync? Turning on HW acceleration on my Plex server
         | (so that it uses QuickSync) is a game changer. From struggling
         | to handle 3+ 1080p streams and pegging all the cores to being
         | able to do 6+ without going over a load average of 1.
        
           | toast0 wrote:
           | QuickSync is GPU accelerated encode/decode right? This
           | processor announcement is for their CPUs without GPUs, so
           | you'd need a GPU add on board, and both AMD and NVidia
           | support that. AMD's processors with GPUs (they call them
           | APUs) support that too. AMD tends to release desktop CPUs,
           | then high-end desktop/server, then laptop APUs, and finally
           | desktop APUs.
           | They only released Zen2 desktop APUs a couple months ago, and
           | they're currently OEM only and very hard to find in the US
           | (grey market imports only, AFAIK, but send me an email if I'm
           | wrong, address in profile)
        
             | joshstrange wrote:
             | QuickSync is all in the CPU (no GPU needed). IIRC it's part
             | of the Intel Graphics (built into the CPU) so maybe it's
             | not exactly fair to call QuickSync part of the CPU but it's
             | included in the physical CPU chip and I have no discrete
             | graphics card in my server right now. I know I can get a
             | GPU to offload decoding/encoding to but QuickSync is pretty
             | awesome for my use-case, and buying a graphics card has
             | its own issues (space in the case, cost of the card,
             | getting it to play nice with Plex in Docker, etc.).
        
               | toast0 wrote:
               | A GPU is needed. The Intel spec sheet for the i3-9350KF
               | doesn't show QuickSync, but i3-9350K does. The difference
               | between the two is that the F series doesn't have a GPU
               | (or it's disabled). Also unavailable on server class
               | Xeons without GPUs.
               | 
               | I agree, it's convenient to have a GPU on most CPUs, but
               | AMD puts less priority on that market, so we just have to
               | wait.
        
           | Const-me wrote:
           | Video encoders/decoders are parts of GPUs, not CPUs. Only
           | CPUs with integrated GPU have these pieces of hardware.
           | 
           | AMD is pretty comparable in that regard:
           | https://en.wikipedia.org/wiki/Video_Core_Next but I don't
           | have computers with AMD APUs or GPUs, and don't have a hands-
           | on experience with these features.
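For what it's worth, ffmpeg exposes both vendors' fixed-function encoders behind similar command-line options, so a rough side-by-side test is possible. These command fragments are a sketch: they assume an ffmpeg build with QSV and VAAPI support, real hardware to run on, and `/dev/dri/renderD128` as the render node (check your own system).

```shell
# Intel QuickSync (QSV) transcode:
ffmpeg -hwaccel qsv -c:v h264_qsv -i input.mp4 \
       -c:v h264_qsv out_qsv.mp4

# AMD VCN via VAAPI on Linux:
ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mp4 \
       -vf 'format=nv12,hwupload' -c:v h264_vaapi out_vaapi.mp4
```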
        
             | joshstrange wrote:
             | Hmm, looks like Plex doesn't have support for Video Core
             | Next yet: https://forums.plex.tv/t/feature-request-add-
             | support-for-amd...
             | 
               | I'm still probably 3-6 months away from a new server
               | build, so I'll just re-evaluate then. Honestly, I might
               | just add another storage server, leave my Intel/QS
               | server as-is, and go with an AMD build that plays nice
               | with UnRaid.
        
               | DenseComet wrote:
               | Plex supports Nvidia GPUs for hardware transcoding, so
               | you could pick up a cheaper Nvidia GPU and stick it in
               | the build, but that will probably not be possible for
               | small builds.
        
               | lostlogin wrote:
               | I wanted AMD but quicksync for a Plex server is just so
               | good. I bought an i7 NUC10 for this role and it's great.
               | Virtualised OS, Docker for Plex and it'll do 11x
               | transcodes (1080p to 720p) in hardware while also hosting
               | several other machines. The first time you pass through
               | the GPU to the VM, then into Docker, it's a bit of a
               | head-scratcher, but it's actually fine and works well.
               | 
               | It has a maximum of 64gb ram and is tiny. With an nvme
               | drive (Samsung Evo) it is really really fast.
               | 
               | The next best option as far as I was concerned was the
               | NUC8, and while I'd love to have the PSU onboard and no
               | brick, a Mac Mini is a lot of money.
               | 
               | The Nuc8 is a better option than the 10 for anything
               | that's needing actual graphics. The 10 has a very anaemic
               | GPU compared to the 8, but has 2 more CPU cores when
               | comparing i7s.
        
               | Const-me wrote:
               | Which OS are you running there? If it's Windows, does
               | your software have an option to select MS Media
               | Foundation for video encoding/decoding?
               | 
               | In my experience, GPU vendors, all 3 of them, are
               | including reasonably well-made media foundation hardware
               | transforms as a part of their GPU drivers. MF API is
               | vendor-agnostic. Apart from a few bugs I found in
               | Intel's drivers (they were in the h265 hardware encoder;
               | Intel neglected to react), the same API works with all
               | capable hardware.
               | hardware.
        
               | lostlogin wrote:
               | Plex works so well with Ubuntu that I have become a huge
               | proponent of this method. I'm not sure if it's their
               | developers having a bias for the OS, but the Ubuntu
               | version always seems to work well.
        
           | Thaxll wrote:
           | When you have 24 "HT" cores I'm not sure why you would need
           | QuickSync.
        
             | Const-me wrote:
             | Because video resolution has grown well above 1080p. I'm
             | looking at 4k monitor at the moment, recent chips have some
             | support for 8k video.
        
             | lotyrin wrote:
             | Because in addition to theoretical capacities I also care
             | about my power bill and how comfortably cool the room my
             | transcoding machine is in stays.
        
       | lliamander wrote:
       | Who here uses their own computer for work? If you work for an
       | employer, what's the agreement like that allows you to do that?
        
         | jeffbee wrote:
         | I work remotely at Goldman Sachs and it is 100% BYOD.
        
           | lliamander wrote:
           | Interesting! Follow up questions:
           | 
           | - Are there any constraints or requirements on the kind of
           | device, software installed, etc.? E.g. using a VPN,
           | antivirus.
           | 
           | - Do you use separate work and personal accounts?
           | 
           | - laptop or desktop?
           | 
           | - how does your employer feel about personal projects?
           | 
           | Thanks!
        
             | jeffbee wrote:
             | Accessing anything at work requires Citrix into a Windows
             | VM that's racked up in a dark datacenter somewhere. What
             | device anyone uses to accomplish this is their own
             | business. No VPN or AV, no MDM or any of that nonsense.
             | 
             | I personally use a Pixelbook Go with an external monitor
             | for this purpose. Really anything will do the job.
        
               | lliamander wrote:
               | Ah, so actually development is done on a virtual desktop.
               | Makes sense.
               | 
               | Currently, I do most of my work locally on a work issued
               | laptop (and external monitor) with some services
               | offloaded to AWS.
               | 
               | I wouldn't mind building a desktop PC for fun, but as I
               | don't really do high-end gaming, I couldn't see
               | justifying the expense unless I was able to do work on it
               | as well. I'd of course have to get employer sign-off, but
               | I'm just trying to figure out what that kind of agreement
               | might look like.
        
         | sosodev wrote:
         | I use my own computer because my company just doesn't have any
         | rules against it afaik. I just installed a VPN client that's
         | compatible with their server.
        
           | lliamander wrote:
           | Did they also issue a laptop?
        
             | sosodev wrote:
             | No, they asked if I needed one and I declined.
        
               | lliamander wrote:
               | Cool. Sounds like they're cool with it then. I'm sure
               | some
               | employers don't mind saving the expense.
               | 
               | If you don't mind me asking, what are the specs of your
               | computer, and are they appropriate for your workload or
               | overkill?
        
               | sosodev wrote:
               | CPU: AMD Ryzen 5 3600
               | 
               | GPU: NVIDIA GeForce GTX 1660 Ti
               | 
               | Memory: 16GB
               | 
               | Storage: Samsung Evo 970 500 GB NVMe SSD
               | 
               | OS: Pop!_OS 20.04
               | 
               | Pretty middle of the road specs but it's plenty enough
               | for the web development and light gaming that I do. :)
        
               | lliamander wrote:
               | Indeed! Faster than almost any laptop for a much better
               | price.
               | 
               | Wendell from Level1Techs has said that, for most
               | developers, the R5 3600 is all you need.
               | 
               | Personally I've been eyeing the 3900X, which is probably
               | overkill, but is such a great bargain that it would be
               | hard to pass up.
        
       ___________________________________________________________________
       (page generated 2020-10-08 23:00 UTC)