[HN Gopher] Intel's Arc GPUs will compete with GeForce and Radeo...
       ___________________________________________________________________
        
       Intel's Arc GPUs will compete with GeForce and Radeon in early 2022
        
       Author : TangerineDream
       Score  : 226 points
       Date   : 2021-08-16 15:16 UTC (7 hours ago)
        
 (HTM) web link (arstechnica.com)
 (TXT) w3m dump (arstechnica.com)
        
       | Exmoor wrote:
       | As TFA rightly points out, unless something _drastically_ changes
       | in the next ~6mo, Intel is going to launch into the most
        | favorable market situation we've seen in our lifetimes.
        | Previously, the expectation was that they needed to introduce
        | something competitive with the top-end cards from Nvidia and
        | AMD. With basically all GPUs out of stock currently, they
        | really just need to introduce something competitive with
        | almost _anything_ on the market to be able to sell as much as
        | they can ship.
        
         | NonContro wrote:
         | How long will that situation last though, with Ethereum 2.0
         | around the corner and the next difficulty bomb scheduled for
         | December?
         | 
         | https://www.reddit.com/r/ethereum/comments/olla5w/eip_3554_o...
         | 
         | Intel could be launching their cards into a GPU surplus...
         | 
         | That's discrete GPUs though, presumably the major volumes are
         | in laptop GPUs? Will Intel have a CPU+GPU combo product for
         | laptops?
        
           | dathinab wrote:
            | It's not "just" a shortage of GPUs but all kinds of
            | components.
            | 
            | And it's also not "just" caused by miners.
            | 
            | But that means if they are _really_ unlucky they could launch
            | into a situation where there is a surplus of good second-hand
            | graphics cards _and_ still shortages/price hikes on the GPU
            | components they use...
            | 
            | Though as far as I can tell they are targeting OEMs (any OEM
            | instead of a select few) and other large customers, so it
            | might not matter too much for them for this release (but it
            | probably would from the next one onward).
        
           | orra wrote:
           | Alas: Bitcoin.
        
             | cinntaile wrote:
             | You don't mine bitcoin with a GPU, those days are long
             | gone.
        
           | errantspark wrote:
           | > How long will that situation last though
           | 
            | Probably until at least 2022, because the shortage of GPUs
            | isn't solely because of crypto. Until we generally get back
            | on track tricking sand into thinking, we're not going to be
            | able to saturate demand.
            | 
            | > Will Intel have a CPU+GPU combo product for laptops?
            | 
            | What? Obviously the answer is yes; how could it possibly be
            | no? A CPU+GPU combo is the only GPU-related segment where
            | Intel currently has a product.
        
         | mhh__ wrote:
         | If they come out swinging here they could have the most
         | deserved smugness in the industry for a good while. People have
         | been rightly criticising them but wrongly writing them off.
        
         | 015a wrote:
         | Yup; three other points I'd add:
         | 
          | 1) I hate to say "year of desktop Linux" like every year, but
          | with the Steam Deck release later this year, and Valve's
          | commitment to continue investing and collaborating on Proton to
          | ensure wide-ranging game support, Linux gaming is going to grow
          | substantially throughout 2022, if only due to the new devices
          | added by Steam Decks.
          | 
          | Intel has always had fantastic Linux video driver support. If
          | Arc is competitive with the lowest-end current-gen Nvidia/AMD
          | cards (3060?), Linux gamers will love it. And when thinking
          | about Steam Deck 2 in 2022-2023, Intel becomes an option.
         | 
          | 2) The current-gen Nvidia/AMD cards are _insane_. They're
          | unbelievably powerful. But here's the kicker: Steam Deck is
          | 720p. If you go out and buy a brand new Razer/Alienware/whatever
          | gaming laptop, the most common resolution even on the high-end
          | models is 1080p (w/ high refresh rate). The Steam Hardware
          | Survey puts 1080p as the most common resolution, and IT'S NOT
          | EVEN REMOTELY CLOSE to #2 [1] (720p 8%, 1080p 67%, 1440p 8%, 4K
          | 2%) (did you know more people use Steam on macOS than on a 4K
          | monitor? lol)
         | 
          | These Nvidia/AMD cards are unprecedented overkill for most
          | gamers. People are begging for cards that can run games at
          | 1080p; Nvidia went straight to 4K, even showing off 8K gaming
          | on the 3090, and now they can't even deliver any cards that run
          | 720p/1080p. Today, we've got AMD releasing the 6600 XT,
          | advertising it as a beast for 1080p gaming [2]. This is what
          | people actually want: affordable and accessible cards to play
          | games on (whether they can keep the 6600 XT in stock remains to
          | be seen, of course). Nvidia went straight Icarus with Ampere;
          | they shot for the sun and couldn't deliver.
         | 
          | 3) More broadly, geopolitical pressure in East Asia, and
          | specifically Taiwan, should be concerning for investors in any
          | company that relies heavily on TSMC (AMD & Apple being the two
          | big ones). Intel may start by fabbing Arc there, but they
          | uniquely have the capacity to bring that production to the
          | West.
         | 
         | I am very, very long INTC.
         | 
         | [1] https://store.steampowered.com/hwsurvey/Steam-Hardware-
         | Softw...
         | 
         | [2] https://www.pcmag.com/news/amd-unveils-the-radeon-
         | rx-6600-xt...
        
           | iknowstuff wrote:
           | Intel sells expensive CPUs which are becoming useless thanks
           | to ARM - as much in consumer devices as they are in
           | datacenters, with big players designing their own ARM chips.
           | GPUs are their lifeboat. Three GPU players is better than
           | two, but I don't see much of a reason to be long Intel.
        
           | ZekeSulastin wrote:
          | ... Nvidia _did_ release lower-end cards that target the same
          | market and price point as the 6600 XT a lot _earlier_ than
          | AMD though - as far as MSRP goes, the 3060 and 3060 Ti bracket
          | the 6600 XT's $380 at $329 and $399 (not that MSRP means a
          | thing right now) and similarly bracket its performance, and
          | even the MSRP was not received well in conjunction with the
          | 1080p marketing. _Both_ manufacturers have basically told the
          | mid- and low-range market to buy a console, even if you are
          | lucky enough to get an AMD reference or Nvidia FE card.
        
             | Revenant-15 wrote:
             | I've happily taken their advice and have moved to an Xbox
             | Series S for a good 80% of my gaming needs. What gaming I
             | still do on my PC consists mainly of older games, emulators
             | and strategy games. Although I've been messing with
             | Retroarch/Duckstation on my Xbox, and it's been quite novel
             | and fun to be playing PS1 games on a Microsoft console.
        
           | pjmlp wrote:
           | Steam will hardly change the 1% status of GNU/Linux desktop.
           | 
            | Many forget that most studios don't bother to port their
            | Android games, which are mostly written using the NDK (so
            | plain ISO C and C++, GL, Vulkan, OpenSL, ...), to GNU/Linux,
            | because the market just isn't there.
        
             | reitzensteinm wrote:
             | Not disagreeing with your overall point, but it's pretty
             | rare for people to port their mobile game to PC even if
             | using Unity and all you have to do is figure out the
             | controls. Which you've probably got a beta version of just
             | to develop the game.
        
             | 015a wrote:
              | The first wave of Steam Decks sold out in minutes. They're
              | now pushing back delivery to Q2 2022. The demand for the
              | device is pretty significant; not New Console large, but
              | it's definitely big enough to be visible in the Steam
              | Hardware Survey upon release later this year, despite the
              | vast size of Steam's overall playerbase.
             | 
              | Two weeks ago, the Hardware Survey reported Linux breaching
              | 1% for the first time ever [1], for reasons not related to
              | the Deck (in fact, it's not obvious WHY Linux has been
              | growing; disappointment in the Win11 announcement may have
              | caused it, but in short, it's healthy, natural, long-term
              | growth). I would put real money up that Linux will hit 2% by
              | the January 2022 survey, and 5% by January 2023.
             | 
             | Proton short-circuits the porting argument. It works
             | fantastically for most games, with zero effort from the
             | devs.
             | 
              | We're not talking about Linux being the majority. But it's
              | definitely looking like it will see growth over the next
              | decade.
             | 
             | [1] https://www.tomshardware.com/news/steam-survey-
             | linux-1-perce...
        
               | pjmlp wrote:
               | It took 20 years to reach 1%, so...
               | 
                | I believed in it back in the Loki golden days; nowadays
                | I'd rather bet on macOS, Windows, mobile OSes and game
                | consoles.
                | 
                | It remains to be seen how the Steam Deck fares versus the
                | Steam Machines of yore.
        
               | onli wrote:
               | Don't let history blind you to the now ;)
               | 
               | It's way better now than it was back then. There was a
               | long period of good ports, which combined with the Steam
               | for Linux client made Linux gaming a real thing already.
               | But instead of fizzling out like the last time there were
               | ports, now Linux transitioned to "Run every game" without
               | needing a port. Some exceptions, but they are working on
               | it and compatibility is huge.
               | 
                | This will grow slowly but steadily now, and is ready to
                | explode if Microsoft makes one bad move (like crazy
                | Windows 11 hardware requirements, but we'll see).
                | 
                | The biggest danger to that development is GPU prices;
                | the Intel GPUs can only help there. A competent 200-buck
                | model is desperately needed to keep the PC alive as a
                | gaming platform. Right now it has to run on fumes - on
                | old hardware.
        
               | [deleted]
        
             | trangus_1985 wrote:
             | >Steam will hardly change the 1% status of GNU/Linux
             | desktop.
             | 
              | I agree. But it will change the status in the listings.
              | Steam Deck and SteamOS appliances should be broken out into
              | their own category, and I could easily see them overtaking
              | the Linux desktop.
        
         | dheera wrote:
         | > will compete with GeForce
         | 
          | > which performs a lot like the GDDR5 version of Nvidia's
          | aging, low-end GeForce GT 1030
          | 
          | Intel is trying to emulate what NVIDIA did a decade ago. Nobody
          | in the NVIDIA world speaks of GeForce and GTX anymore; RTX is
          | where it's at.
        
         | pier25 wrote:
         | Exactly. There are plenty of people that just want to upgrade
         | an old GPU and anything modern would be a massive improvement.
         | 
         | I'm still rocking a 1070 for 1080p/60 gaming and would love to
         | jump to 4K/60 gaming but just can't convince myself to buy a
         | new GPU at current prices.
        
           | leeoniya wrote:
           | i wanna get a good Alyx setup to finally try VR, but with the
           | gpu market the way it is, looks like my RX480 4GB will be
           | sticking around for another 5yrs - it's more expensive now
           | than it was 4 yrs ago (used), and even then it was already
           | 2yrs old. batshit crazy; no other way to describe it :(
        
           | mey wrote:
            | I refuse to engage with the current GPU pricing insanity, so
            | my 5900X is currently paired with a GTX 960. When Intel
            | enters the market it will be another factor in driving
            | pricing back down, so I might play Cyberpunk in 2022...
        
             | deadmutex wrote:
              | If you really want to play Cyberpunk on PC, and don't want
              | to buy a new GPU... playing it on Stadia is an option
              | (especially if you have a GPU that can support VP9
              | decoding). I played it at 4K/1080p, and it looked pretty
              | good. However, I think if you want the best graphics
              | fidelity (i.e., 4K ray tracing), then you probably do want
              | to just get a high-end video card.
             | 
             | Disclosure: Work at Google, but not on Stadia.
        
         | voidfunc wrote:
         | Intel has the manufacturing capability to really beat up
         | Nvidia. Even if the cards don't perform like top-tier cards
         | they could still win bigly here.
         | 
         | Very exciting!
        
           | pankajdoharey wrote:
            | What about design capabilities? If they had it in them, what
            | were they doing all these years? I mean, since 2000 I can't
            | remember a single GPU from Intel that wasn't already behind
            | the market.
        
             | Tsiklon wrote:
             | Raja Koduri is Intel's lead architect for their new product
             | line; prior to this he was the lead of the Radeon
             | Technologies Group at AMD, successfully delivering Polaris,
             | Vega and Navi. Navi is AMD's current GPU product
             | architecture.
             | 
             | Things seem promising at this stage.
        
           | flenserboy wrote:
            | Indeed. Something that's affordable and hits even RX 580
            | performance would grab the attention of many. _Good enough_
            | really is enough when supply is low and prices are high.
        
           | opencl wrote:
              | Intel is not even manufacturing these; they are TSMC 7nm,
              | so they are competing for the same fab capacity that
              | everyone else is using.
        
             | judge2020 wrote:
             | *AMD/Apple is using. Nvidia's always-sold-out Ampere-based
             | gaming chips are made in a Samsung fab.
             | 
             | https://www.pcgamer.com/nvidia-ampere-samsung-8nm-process/
        
               | Yizahi wrote:
                | Nvidia would also use TSMC 7nm since it is much better
                | than Samsung 8nm. So potentially they are also waiting
                | for TSMC availability.
        
               | judge2020 wrote:
               | How is it 'much better'? 7nm is not better than 8nm
               | because it has a smaller number - the number doesn't
               | correlate strongly with transistor density these days.
        
               | kllrnohj wrote:
               | Did you bother trying to do any research or comparison
               | between TSMC's 7nm & Samsung's 8nm or did you just want
               | to make the claim that numbers are just marketing?
               | Despite the fact that numbers alone were not being talked
               | about, but two specific fab processes, and thus the "it's
               | just a number!" mistake wasn't obviously being made in
               | the first place?
               | 
               | But Nvidia has Ampere on both TSMC 7nm (GA100) and
               | Samsung's 8nm (GA102). The TSMC variant has a
               | significantly higher density at 65.6M / mm2 vs. 45.1M /
                | mm2. Comparing across architectures is murky, but we
               | also know that the TSMC 7nm 6900XT clocks a lot higher
               | than the Samsung 8nm RTX 3080/3090 while also drawing
               | less power. There's of course a lot more to clock speeds
               | & power draw in an actual product than the raw fab
               | transistor performance, but it's still a data point.
               | 
               | So there's both density & performance evidence to suggest
               | TSMC's 7nm is meaningfully better than Samsung's 8nm.
               | 
               | Even going off of marketing names, Samsung has a 7nm as
               | well and they don't pretend their 8nm is just one-worse
               | than the 7nm. The 8nm is an evolution of the 10nm node
               | while the 7nm is itself a new node. According to
               | Samsung's marketing flowcharts, anyway. And analysis
               | suggests Samsung's 7nm is competitive with TSMC's 7nm.
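                | 
                | As a rough back-of-envelope check of those figures (a
                | sketch using only the densities quoted above):
                | 
                |   # Density figures quoted above: GA100 on TSMC 7nm vs.
                |   # GA102 on Samsung 8nm, in Mtransistors per mm^2.
                |   tsmc_7nm = 65.6
                |   samsung_8nm = 45.1
                |   ratio = tsmc_7nm / samsung_8nm
                |   print(f"TSMC 7nm: ~{(ratio - 1) * 100:.0f}% denser")
                |   # -> roughly 45% more transistors per mm^2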
        
               | IshKebab wrote:
                | TSMC have a 56% market share. The next closest is
                | Samsung at 18%. I think that's enough to say that
                | everyone uses them without much hyperbole.
        
               | paulmd wrote:
               | if NVIDIA cards were priced as ridiculously as AMD cards
               | they'd be sitting on store shelves too
        
               | kllrnohj wrote:
                | Nvidia doesn't price any cards other than the Founders
                | Editions, which you'll notice they both drastically cut
                | down on availability for and also didn't do _at all_ for
                | the "price sensitive" mid-range tier.
                | 
                | Nvidia's pricing as a result is completely fake. The
                | claimed "$330 3060" in fact _starts_ at $400 and rapidly
                | goes up from there, with MSRPs on 3060s as high as
                | $560.
        
               | paulmd wrote:
               | I didn't say NVIDIA did directly price cards? Doesn't
               | sound like you are doing a very good job of following the
               | HN rule - always give the most gracious possible reading
               | of a comment. Nothing I said directly implied that they
               | did, you just wanted to pick a bone. It's really quite
               | rude to put words in people's mouths, and that's why we
               | have this rule.
               | 
               | But a 6900XT is available for $3100 at my local store...
               | and the 3090 is $2100. Between the two it's not hard to
               | see why the NVIDIA cards are selling and the AMD cards
               | are sitting on the shelves, the AMD cards are 50% more
               | expensive for the same performance.
               | 
               | As for _why_ that is - which is the point I think you
               | wanted to address, and decided to try and impute into my
                | comment - who knows. Prices are "sticky" (retailers don't
               | want to mark down prices and take a loss) and AMD moves
               | fewer cards in general. Maybe that means that prices are
               | "stickier for longer" with AMD. Or maybe it's another
               | thing like Vega where AMD set the MSRP so low that
               | partners can't actually build and sell a card for a
               | profit at competitive prices. But in general, regardless
               | of why - the prices for AMD cards are generally higher,
               | and when they go down the AMD cards sell out too. The
               | inventory that is available is available because it's
               | overpriced.
               | 
               | (and for both brands, the pre-tariff MSRPs are
               | essentially a fiction at this point apart from the
               | reference cards and will probably never be met again.)
        
               | RussianCow wrote:
               | > But a 6900XT is available for $3100 at my local
               | store... and the 3090 is $2100.
               | 
               | That's just your store being dumb, then. The 6900 XT is
               | averaging about $1,500 brand new on eBay[0] while the
               | 3090 is going for about $2,500[1]. Even on Newegg, the
               | cheapest in-stock 6900 XT card is $1,700[2] while the
               | cheapest 3090 is $3,000[3]. Everything I've read suggests
               | that the AMD cards, while generally a little slower than
               | their Nvidia counterparts (especially when you factor in
               | ray-tracing), give you way more bang for your buck.
               | 
               | > the prices for AMD cards are generally higher
               | 
               | This is just not true. There may be several reasons for
               | the Nvidia cards being out of stock more often than AMD:
               | better performance; stronger brand; lower production
               | counts; poor perception of AMD drivers; specific games
               | being optimized for Nvidia; or pretty much anything else.
               | But at this point, pricing is set by supply and demand,
               | not by arbitrary MSRPs set by Nvidia/AMD, so claiming
               | that AMD cards are priced too high is absolutely
               | incorrect.
               | 
               | [0]: https://www.ebay.com/sch/i.html?_from=R40&_nkw=6900x
               | t&_sacat...
               | 
               | [1]: https://www.ebay.com/sch/i.html?_from=R40&_nkw=3090&
               | _sacat=0...
               | 
               | [2]: https://www.newegg.com/p/pl?N=100007709%20601359957&
               | Order=1
               | 
               | [3]: https://www.newegg.com/p/pl?N=100007709%20601357248&
               | Order=1
        
             | BuckRogers wrote:
             | This is a problem for AMD especially, but also Nvidia. Not
             | so much for Intel. They're just budging in line with their
             | superior firepower. Intel even bought out first dibs on
             | TSMC 3nm out from under Apple. I'll be interested to see
             | the market's reaction to this once everyone realizes that
             | Intel is hitting AMD where it hurts and sees the inevitable
             | outcome.
             | 
              | This is one of the smartest moves by Intel: make their own
              | stuff and consume production from all their competitors,
              | which do nothing but paper designs.
             | AMD took a risk not being in the fabrication business, and
             | now we'll see the full repercussions. It's a good play
             | (outsourcing) in good times, not so much when things get
             | tight like today.
        
               | wmf wrote:
               | _This is a problem for AMD especially_
               | 
               | Probably not. AMD has had their N7/N6 orders in for
               | years.
               | 
               |  _They 're just budging in line with their superior
               | firepower. Intel even bought out first dibs on TSMC 3nm
               | out from under Apple._
               | 
               | There's no evidence this is happening and people with
               | TSMC experience say it's not happening.
               | 
               |  _Nvidia and especially AMD took a risk not being in the
               | fabrication business_
               | 
               | Yes, and it paid off dramatically. If AMD stayed with
               | their in-house fabs (now GloFo) they'd probably be dead
               | on 14nm now.
        
               | BuckRogers wrote:
                | Do you have sources for any of your claims? Other than
                | going fabless being a fantastic way to cut costs and
                | management challenges while increasing long-term supply
                | line risk, none of that is anything that I've heard. Here
                | are sources for my claims.
               | 
               | AMD on TSMC 3nm for Zen5. Will be squeezed by Intel and
               | Apple- https://videocardz.com/newz/amd-3nm-zen5-apus-
               | codenamed-stri...
               | 
               | Intel consuming a good portion of TSMC 3nm-
               | https://www.msn.com/en-us/news/technology/intel-locks-
               | down-a...
               | 
               | I see zero upside with these developments for AMD, and to
               | a lesser degree, Nvidia, who are better diversified with
               | Samsung and also rumored to be in talks with fabricating
               | at Intel as well.
        
               | wmf wrote:
               | I expect AMD to start using N3 after Apple and Intel have
               | moved on to N2 (or maybe 20A in Intel's case) in 2024 so
               | there's less competition for wafers.
        
               | AnthonyMouse wrote:
               | > Will be squeezed by Intel and Apple
               | 
               | This doesn't really work. If there is more demand,
               | they'll build more fabs. It doesn't happen overnight --
               | that's why we're in a crunch right now -- but we're
               | talking about years of lead time here.
               | 
               | TSMC is also not stupid. It's better for them for their
               | customers to compete with each other instead of having to
               | negotiate with a monopolist, so their incentive is to
               | make sure none of them can crush the others.
               | 
               | > I see zero upside with these developments for AMD, and
               | to a lesser degree, Nvidia
               | 
               | If Intel uses its own fabs, Intel makes money and uses
               | the money to improve Intel's process which AMD can't use.
               | If Intel uses TSMC's fabs, TSMC makes money and uses the
               | money to improve TSMC's process which AMD does use.
        
             | Animats wrote:
             | Oh, that's disappointing. Intel has three 7nm fabs in the
             | US.
             | 
             | There's a lot of fab capacity under construction. 2-3 years
             | out, semiconductor glut again.
        
           | deaddodo wrote:
           | Where do you get that idea? The third-party fabs have far
           | greater production capacity[1]. Intel isn't even in the top
           | five.
           | 
           | They're a shared resource; however, if you're willing to pay
           | the money, you _could_ monopolize their resources and
           | outproduce anybody.
           | 
           | 1 - https://epsnews.com/2021/02/10/5-fabs-own-54-of-global-
           | semic...
        
             | wtallis wrote:
             | You're looking at the wrong numbers. The wafer capacity of
             | memory fabs and logic fabs that are only equipped for older
             | nodes aren't relevant to the GPU market. So Micron, SK
             | hynix, Kioxia/WD and a good chunk of Samsung and TSMC
             | capacity are irrelevant here.
        
           | abledon wrote:
            | It seems AMD manufactures most of its 7nm parts at TSMC, but Intel has
           | a factory coming online next year in Arizona... https://en.wi
           | kipedia.org/wiki/List_of_Intel_manufacturing_si...
           | 
            | I could see gov/military investing in or awarding more
            | contracts based on these 'locally' situated plants.
        
             | humanistbot wrote:
              | Nope, the wiki is wrong. According to Intel, the facility
              | in Chandler, AZ will start being built next year, but won't
              | be producing chips until 2024. See
             | https://www.anandtech.com/show/16573/intels-new-
             | strategy-20b...
        
           | [deleted]
        
         | chaosharmonic wrote:
         | Given that timeline and their years of existing production
         | history with Thunderbolt, Intel could also feasibly beat both
         | of them to shipping USB4 on a graphics card.
        
           | pankajdoharey wrote:
            | I suppose the better thing to do would be to ship an APU,
            | besting both Nvidia on GPU and AMD on CPU? But can they?
        
         | hughrr wrote:
         | GPU stock is rising and prices falling. It's too late now.
        
           | rejectedandsad wrote:
           | I still can't get a 3080, and the frequency of drops seems to
           | have decreased. Where are you seeing increased stock?
        
             | hughrr wrote:
             | Can get a 3080 tomorrow in UK no problems at all.
        
               | mhh__ wrote:
               | Can get but still very expensive.
        
             | [deleted]
        
           | YetAnotherNick wrote:
           | No, they aren't. They are trading at 250% of MSRP. See this
           | data:
           | 
           | https://stockx.com/nvidia-nvidia-geforce-
           | rtx-3080-graphics-c...
        
             | RussianCow wrote:
             | Anecdotally, I've noticed prices falling on the lower end.
             | My aging RX 580 was worth over $400 used at the beginning
             | of the year; it now goes for ~$300. The 5700 XT was going
             | for close to $1k used, and is more recently selling for
             | $800-900.
             | 
             | With that said, I don't know if it's a sign of the shortage
             | coming to an end; I think the release of the Ryzen 5700G
             | with integrated graphics likely helped bridge the gap for
             | people who wanted low-end graphics without paying the crazy
             | markups.
        
         | ayngg wrote:
          | I thought they were using TSMC for their GPUs, which means they
          | will be part of the same bottleneck that is affecting everyone
          | else.
        
           | teclordphrack2 wrote:
           | If they purchased a slot in the queue then they will be fine.
        
           | davidjytang wrote:
            | I believe Nvidia doesn't use TSMC, or doesn't only use TSMC.
        
             | dathinab wrote:
              | Independent of the question around TSMC, they are still
              | affected because:
              | 
              | - Shortages and price hikes caused by various effects are
              | not limited to the GPU die but also hit most other parts on
              | the card.
              | 
              | - In particular, it also affects the RAM they are using,
              | which can be a big deal wrt. pricing and availability.
        
             | mkaic wrote:
             | 30 series Nvidia cards are on Samsung silicon iirc
        
               | monocasa wrote:
               | Yeah, Samsung 8nm, which is basically Samsung 10nm++++.
        
               | abraae wrote:
               | 10nm--?
        
               | monocasa wrote:
               | The '+' in this case is a common process node trope where
               | improvements to a node over time that involve rules
               | changes become Node+, Node++, Node+++, etc. So this is a
               | node that started as Samsung 10nm, but they made enough
               | changes to it that they started marketing it as 8nm. When
               | they started talking about it, it wasn't clear if it was
               | a more manufacturable 7nm or instead a 10nm with lots of
               | improvements, so I drop the 10nm++++ to help give some
               | context.
        
               | tylerhou wrote:
               | The datacenter cards (which are about half of their
               | revenue) are running on TSMC.
        
             | ayngg wrote:
              | Yeah, they use Samsung for their current series but are
              | planning to move to TSMC for the next, iirc.
        
           | YetAnotherNick wrote:
           | Except Apple
        
         | rasz wrote:
          | You would think that. GamersNexus did try Intel's finest, and
          | it doesn't look pretty:
         | 
         | https://www.youtube.com/watch?v=HSseaknEv9Q We Got an Intel
         | GPU: Intel Iris Xe DG1 Video Card Review, Benchmarks, &
         | Architecture
         | 
         | https://www.youtube.com/watch?v=uW4U6n-r3_0 Intel GPU A Real
         | Threat: Adobe Premiere, Handbrake, & Production Benchmarks on
         | DG1 Iris Xe
         | 
          | It's below a GT 1030, with a lot of issues.
        
           | agloeregrets wrote:
            | DG1 isn't remotely related to Arc. For one, it's not even
            | using the same node or architecture.
        
             | deaddodo wrote:
             | That's not quite true. The Arc was originally known as the
             | DG2 and is the successor to the DG1. So to say it isn't
             | "remotely related" is a bit misleading, especially since we
             | have very little information on the architecture.
        
           | trynumber9 wrote:
            | For some comparison, that's a 30W, 80EU part using 70GB/s
            | memory. DG2 is supposed to be a 512EU part with over 400GB/s
            | memory. GPUs generally scale pretty well with EU count and
            | memory bandwidth. Plus it has a different architecture which
            | may be even more capable per EU.
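            | 
            | Back-of-envelope, using just the figures above (a rough
            | sketch, not a performance prediction; real games won't scale
            | linearly):
            | 
            |   # Rumored DG2 specs vs. the DG1 part quoted above.
            |   dg1_eus, dg1_bw = 80, 70      # execution units, GB/s
            |   dg2_eus, dg2_bw = 512, 400    # rumored
            |   print(f"EU count:  {dg2_eus / dg1_eus:.1f}x")   # ~6.4x
            |   print(f"Bandwidth: {dg2_bw / dg1_bw:.1f}x")     # ~5.7x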
        
           | pitaj wrote:
           | This won't be the same as the DG1
        
           | phone8675309 wrote:
           | The DG1 isn't designed for gaming, but it is better than
           | integrated graphics.
        
             | deburo wrote:
             | Just to add on that, DG1 was comparable to integrated
             | graphics, but just in a discrete form factor. It was a tiny
             | bit better because of higher frequency, I think. But even
             | then it wasn't better in all cases, if I recall correctly.
        
       | fefe23 wrote:
        | Given that the selling point most elaborated on in the press is
        | the AI upscaling, I'm worried the rest of their architecture
        | may not be up to snuff.
        
       | dragontamer wrote:
       | https://software.intel.com/content/dam/develop/external/us/e...
       | 
       | The above is Intel's Gen11 architecture whitepaper, describing
       | how Gen11 iGPUs work. I'd assume that their next-generation
       | discrete GPUs will have a similar architecture (but no longer
       | attached to CPU L3 cache).
       | 
        | I haven't really looked into Intel iGPU architecture at all. I
        | see that the whitepaper has some oddities compared to AMD /
        | NVidia GPUs. It's definitely "more different".
        | 
        | The SIMD units are apparently only 4 x 32-bit wide (compared to
        | 32-wide NVidia / RDNA or 64-wide CDNA). But they can be
        | reconfigured to be 8 x 16-bit wide instead (a feature not really
        | available on NVidia. AMD can do SIMD-inside-of-SIMD and split up
        | its registers once again however, but it's a fundamentally
        | different mechanism).
       | 
       | --------
       | 
       | Branch divergence is likely to be less of an issue with narrower
       | SIMD than its competitors. Well, in theory anyway.
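        | 
        | A toy way to see why narrower SIMD helps with divergence (an
        | illustrative sketch of the general idea, not a model of the
        | actual Gen11/Xe hardware): a SIMD group only pays the divergence
        | penalty when its lanes disagree on a branch, and narrow groups
        | disagree less often than wide ones.
        | 
        |   # Toy divergence model: a group executes both sides of a
        |   # branch only when its lanes disagree on the predicate.
        |   import random
        | 
        |   def divergent_fraction(width, n_items=1 << 16, p_taken=0.1):
        |       groups = n_items // width
        |       divergent = 0
        |       for _ in range(groups):
        |           lanes = [random.random() < p_taken for _ in range(width)]
        |           if any(lanes) and not all(lanes):  # lanes disagree
        |               divergent += 1
        |       return divergent / groups
        | 
        |   for width in (4, 8, 32, 64):
        |       print(f"SIMD{width}: ~{divergent_fraction(width):.0%} diverge")
        |   # With a 10% taken branch, SIMD4 groups diverge far less often
        |   # than SIMD32 or SIMD64 groups do.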
        
       | jscipione wrote:
        | I've been hearing Intel play this tune for years; time to show
        | us something or change the record!
        
         | mhh__ wrote:
          | They've been playing this for years because it's only really
          | now that they can actually respond to Zen and friends. Intel's
          | competitors were asleep at the wheel until 2017, and getting a
          | new chip out takes years.
        
         | jbverschoor wrote:
          | New CEO, so some press releases, but the company remains the
          | same. I am under no illusion that this will change, and
          | definitely not on such short notice.
          | 
          | They've neglected almost every market they were in. They're
          | AltaVista.
          | 
          | Uncle Roger says bye bye!
        
       | andrewmcwatters wrote:
       | Mostly unrelated, but I'm still amazed that if you bought Intel
       | at the height of the Dot-com bubble and held on, you still
       | wouldn't have broken even, even ignoring inflation.
        
       | jeffbee wrote:
        | Interesting, but the add-in-card GPU market _for graphics
        | purposes_ is so small, it's hard to get worked up about it. The
       | overwhelming majority of GPU units sold are IGPs. Intel owns
       | virtually 100% of the computer (excluding mobile) IGP market and
       | 70% of the total GPU market. You can get almost the performance
       | of Intel's discrete GPUs with their latest IGPs in "Tiger Lake"
       | generation parts. Intel can afford to nibble at the edges of the
       | discrete GPU market because it costs them almost nothing to put a
        | product out there, and to a large extent they've won the war
        | already.
        
         | selfhoster11 wrote:
         | You must be missing the gamer market that's positively starving
         | for affordable dedicated GPUs.
        
         | mirker wrote:
          | I would guess that the main selling point has to be hardware-
          | accelerated features, such as ray tracing. I agree though that
          | it seems pointless to buy a budget GPU when it's basically a
          | scaled-up iGPU. Perhaps it makes sense if you want a mid-range
          | CPU without an iGPU and can't operate it headlessly, or if you
          | have an old PC that needs a mild refresh.
        
       | jeswin wrote:
       | If Intel provides as much Linux driver support as they do for
       | their current integrated graphics lineup, we might have a new
       | favourite among Linux users.
        
         | stormbrew wrote:
          | This is the main reason I'm excited about this. I really hope
          | they continue the very open approach they've used so far, but
          | even if they start going binary blob for some of it like Nvidia
          | and (now to a lesser extent) AMD have, at least they're likely
          | to properly implement KMS and other things, because that's what
          | they've been doing already.
        
         | jogu wrote:
         | Came here to say this. This will be especially interesting if
         | there's better support for GPU virtualization to allow a
         | Windows VM to leverage the card without passing the entire card
         | through.
        
           | modeless wrote:
           | This would be worth buying one for. It's super lame that
           | foundational features like virtualization are used as
           | leverage for price discrimination by Nvidia, and hopefully
           | new competition can shake things up.
        
         | dcdc123 wrote:
         | A long time Linux graphics driver dev friend of mine was just
         | hired by Intel.
        
         | r-bar wrote:
          | They also seem to be the most willing to open up their GPU
          | sharing API, GVT-g, based on their work with their existing Xe
          | GPUs. The performance of their implementation in their first
          | generation was a bit underwhelming, but it seems like the
          | intention is there.
          | 
          | If Intel is able to put out something reasonably competitive
          | that supports GPU sharing, it could be a game changer. It
          | could change the direction of the ecosystem and force Nvidia
          | and AMD to bring sharing to their consumer-tier cards. I am
          | stoked to see where this new release takes us.
          | 
          | Level1Linux has a (reasonably) up-to-date overview of the state
          | of the GPU ecosystem that does a much better job outlining the
          | potential of this tech.
         | 
         | https://www.youtube.com/watch?v=IXUS1W7Ifys
        
         | kop316 wrote:
          | This was my thought too. If their Linux driver support for this
          | is as good as it is for their integrated GPUs, I will be
          | switching to Intel GPUs.
        
         | heavyset_go wrote:
         | Yep, their WiFi chips have good open source drivers on Linux,
         | as well. It would be nice to have a GPU option that isn't AMD
         | for open driver support on Linux.
        
         | holoduke wrote:
          | Well, every single AAA game is reflected in GPU drivers. I bet
          | they need to work on Windows drivers first. Surely they need to
          | write tons of custom driver mods for hundreds of games.
        
       | tmccrary55 wrote:
       | I'm down if it comes with open drivers or specs.
        
         | the8472 wrote:
         | If they support virtualization like they do on their iGPUs that
         | would be great and possibly drive adoption by power users. But
         | I suspect they'll use that feature for market segmentation just
         | like AMD and Nvidia do.
        
         | TechieKid wrote:
         | Phoronix has been covering the Linux driver development for the
         | cards as they happen:
         | https://www.phoronix.com/scan.php?page=search&q=DG2
        
       | arcanus wrote:
       | Always seems to be two years away, like the Aurora supercomputer
       | at Argonne.
        
         | stormbrew wrote:
         | I know March 2020 has been a very very long month but I'm
         | pretty sure we're gonna skip a bunch of calendar dates when we
         | get out of it.
        
         | re-actor wrote:
         | Early 2022 is just 4 months away actually
        
           | timbaboon wrote:
           | :O ;(
        
           | AnimalMuppet wrote:
           | Erm, last I checked, four months from now is December 2021.
        
           | midwestemo wrote:
           | Man this year is flying, I still think it's 2020.
        
             | smcl wrote:
             | Honestly, I've been guilty of treating much of the last
             | year as a loading screen. At times I've been hyper-focussed
             | on doing that lovely personal development we're all
             | supposed to do when cooped up alone at home, and at others
             | just pissing around making cocktails and talking shite with
             | my friends over social media.
             | 
             | So basically what I'm saying is - "same" :D
        
         | dubcanada wrote:
         | Early 2022 is only like 4-8 months away?
        
           | dragontamer wrote:
           | Aurora was supposed to be delivered in 2018:
           | https://www.nextplatform.com/2018/07/27/end-of-the-line-
           | for-...
           | 
           | After it was delayed, Intel said that 2020 was when they'd be
           | ready. Spoiler alert: they aren't:
           | https://www.datacenterdynamics.com/en/news/doe-confirms-
           | auro...
           | 
            | We're now looking at 2022 as the new "deadline", but we know
            | that Intel has enough clout to force a new deadline as
            | necessary. They've already slipped two deadlines; what's the
            | risk in slipping a third time?
           | 
           | ---------
           | 
           | I don't like to "kick Intel while they're down", but Aurora
           | has been a disaster for years. That being said, I'm liking a
           | lot of their OneAPI tech on paper at least. Maybe I'll give
           | it a shot one day. (AVX512 + GPU supported with one compiler,
           | in a C++-like language that could serve as a competitor to
           | CUDA? That'd be nice... but Intel NEEDS to deliver these GPUs
           | in time. Every delay is eating away at their reputation)
        
             | Dylan16807 wrote:
             | Edit: Okay I had it slightly wrong, rewritten.
             | 
             | Aurora was originally slated to use Phi chips, which are an
             | unrelated architecture to these GPUs. The delays there
             | don't say much about problems actually getting this new
             | architecture out. It's more that they were halfway through
             | making a supercomputer and then started over.
             | 
             | I could probably pin the biggest share of the blame on 10nm
             | problems, which are irrelevant to this architecture.
             | 
             | As far as _this_ architecture goes, when they announced
             | Aurora was switching, they announced 2021. That schedule,
             | looking four years out for a new architecture, has only had
             | one delay of an extra 6 months.
        
               | dragontamer wrote:
               | > I could probably pin the biggest share of the blame on
               | 10nm problems, which are irrelevant to this architecture.
               | 
               | I doubt that.
               | 
               | If Xeon Phi were a relevant platform, Intel could have
               | easily kept it... continuing to invest into the platform
               | and make it into 7nm like the rest of Aurora's new
               | design.
               | 
               | Instead, Intel chose to build a new platform from its
               | iGPU architecture. So right there, Intel made a
               | fundamental shift in the way they expected to build
               | Aurora.
               | 
               | I don't know what kind of internal meetings Intel had to
               | choose its (mostly untested) iGPU platform over its more
               | established Xeon Phi line, but that's quite a dramatic
               | change of heart.
               | 
               | ------------
               | 
               | Don't get me wrong. I'm more inclined to believe in
               | Intel's decision (they know more about their market than
                | I do), but it's still a massive shift in architecture...
               | with a huge investment into a new software ecosystem
               | (DPC++, OpenMP, SYCL, etc. etc.), a lot of which is
               | largely untested in practice (DPC++ is pretty new, all
               | else considered).
               | 
               | --------
               | 
               | > As far as this architecture goes, when they announced
               | Aurora was switching, they announced 2021. That schedule,
               | looking four years out for a new architecture, has only
               | had one delay of an extra 6 months.
               | 
               | That's fair. But the difference between Aurora-2018 vs
               | Aurora-2021 is huge.
        
           | [deleted]
        
       | pjmlp wrote:
        | I keep seeing such articles since Larrabee; better to wait and
        | see if this time it is actually any better.
        
       | RicoElectrico wrote:
        | Meanwhile, they're overloading the name of an unrelated CPU
        | architecture, incidentally used in older Intel Management
        | Engines.
        
       | cwizou wrote:
        | They still are not saying which _part_ of that lineup they want
        | to compete with, which is a good thing.
       | 
        | I still remember Pat Gelsinger telling us over and over that
        | Larrabee would compete with the high end of the GeForce/Radeon
        | offering back in the day, including when it was painfully
        | obvious to everyone that it definitely would not.
       | 
       | https://en.wikipedia.org/wiki/Larrabee_(microarchitecture)
        
         | judge2020 wrote:
         | Well there's already the DG1 which seems to compete with the
         | low-end. https://www.youtube.com/watch?v=HSseaknEv9Q
        
       | at_a_remove wrote:
       | I find myself needing, for the first time ever, a high-end video
       | card for some heavy video encoding, and when I look, they're all
       | gone, apparently in a tug of war between gamers and crypto
       | miners.
       | 
       | At the exact same time, I am throwing out a box of old video
       | cards from the mid-nineties (Trident, Diamond Stealth) and from
       | the looks of it you can list them on eBay but they don't even
       | sell.
       | 
       | Now Intel is about to leap into the fray and I am imagining
       | trying to explain all of this to the me of twenty-five years
       | back.
        
         | noleetcode wrote:
         | I will, quite literally, take those old video cards off your
         | hands. I have a hoarder mentality when it comes to old tech and
         | love collecting it.
        
           | at_a_remove wrote:
           | That involves shipping, though. It wouldn't be worth it to
           | you to have my old 14.4 Kbps modem and all of the attendant
           | junk I have.
        
         | topspin wrote:
         | "apparently in a tug of war between gamers and crypto miners"
         | 
          | That, and the oligopoly of AMD and NVidia. Their grip is so
          | tight they dictate terms to card makers. For example, you can't
          | build an NVidia GPU card unless you source the GDDR from
          | NVidia. Between them, the world supply of high-end GDDR is
          | monopolized.
          | 
          | Intel is going to deliver some badly needed competition. They
          | don't even have to approach the top of the GPU high end; just
          | deliver something that will play current games at 1080p at
          | modest settings and they'll have an instant hit. Continuing the
          | tradition of open source support Intel has had with (most) of
          | their GPU technology is something else we can hope for.
        
       | dleslie wrote:
        | The sub-heading is false; I had a dedicated Intel GPU in 1998 by
        | way of the i740.
        
         | acdha wrote:
         | Was that billed as a serious gaming GPU? I don't remember the
         | i740 as anything other than a low-budget option.
        
           | dleslie wrote:
           | It was sold as a serious gaming GPU.
           | 
            | Recall that this was an era where GPUs weren't yet a thing;
            | instead there were 2D video cards and 3D accelerators that
            | paired with them. The i740 and TNT paved the way toward GPUs;
            | while I don't recall whether either had programmable
            | pipelines, they both had 2D capability. For budget gamers, it
            | wasn't a _terrible_ choice to purchase an i740 for the
            | combined 2D/3D ability.
        
             | acdha wrote:
              | I definitely remember that era; I just don't remember it
              | having anything other than an entry-level label. It's
             | possible that this could have been due to the lackluster
             | results -- Wikipedia definitely supports the interpretation
             | that the image changed in the months before it launched:
             | 
             | > In the lead-up to the i740's introduction, the press
             | widely commented that it would drive all of the smaller
             | vendors from the market. As the introduction approached,
             | rumors of poor performance started circulating. ... The
             | i740 was released in February 1998, at $34.50 in large
             | quantities.
             | 
             | However, this suggests that it was never going to be a top-
             | end contender since it was engineered to hit a lower price
             | point and was significantly under-specced compared to the
             | competitors which were already on the market:
             | 
             | > The i740 was clocked at 66Mhz and had 2-8MB of VRAM;
             | significantly less than its competitors which had 8-32MB of
             | VRAM, allowing the card to be sold at a low price. The
             | small amount of VRAM meant that it was only used as a frame
             | buffer, hence it used the AGP interface to access the
             | system's main memory to store textures; this was a fatal
             | flaw that took away memory bandwidth and capacity from the
             | CPU, reducing its performance, while also making the card
             | slower since it had to go through the AGP interface to
             | access the main memory which was slower than its VRAM.
        
               | dleslie wrote:
                | It was never aimed at the top end, but that doesn't mean
                | it wasn't serious about being viable as a gaming device.
                | 
                | And it was; I used it for years.
        
             | smcl wrote:
              | My recollection is that the switchover to referring to them
              | as a "GPU" wasn't about integrating 2D and 3D in the same
              | card, but the point where we offloaded MUCH more
              | computation to the graphics card itself. So we're talking
              | specifically about when NVidia launched the GeForce 256 - a
              | couple of generations after the TNT.
        
           | detaro wrote:
           | That's how it turned out in practice, but it was supposed to
           | be a serious competitor AFAIK.
           | 
           | Here's an old review: https://www.anandtech.com/show/202/7
        
             | smcl wrote:
              | It's kind of amazing to me that I never really encountered
              | or read about the i740. I got _really_ into PC gaming in
              | 1997, and we got internet that same year, so I read a ton
              | and was hyper-aware of the various hardware that was
              | released, regardless of whether I could actually own any of
              | it (spoiler: as an ~11 year old, no I could not). How did
              | this sneak by me?
        
       | dkhenkin wrote:
       | But what kind of hash rates will they get?! /s
        
         | IncRnd wrote:
         | They will get 63 Dooms/Sec.
        
         | f6v wrote:
         | 20 Vitaliks per Elon.
        
       | vmception wrote:
       | I'm going to add this to the Intel GPU graveyard in advance
        
       | byefruit wrote:
       | I really hope this breaks Nvidia's stranglehold on deep learning.
       | Some competition would hopefully bring down prices at the compute
       | high-end.
       | 
        | AMD don't seem to even be trying on the software side at the
        | moment. ROCm is a mess.
        
         | pjmlp wrote:
         | You know how to break it?
         | 
         | With modern tooling.
         | 
          | Instead of forcing devs to live in the prehistoric days of C
          | dialects and printf debugging, provide polyglot IDEs with
          | graphical debugging tools capable of single-stepping GPU
          | shaders, and a rich library ecosystem.
         | 
         | Khronos got the message too late and now no one cares.
        
         | lvl100 wrote:
         | I agree 100% and if Nvidia's recent showing and puzzling focus
         | on "Omniverse" is any indication, they're operating in a
         | fantasy world a bit.
        
         | jjcon wrote:
          | I wholeheartedly agree. PyTorch did recently release AMD
          | support, which I was happy to see (though I have not tested
          | it); I'm hoping there is more to come.
         | 
         | https://pytorch.org/blog/pytorch-for-amd-rocm-platform-now-a...
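          | 
          | For anyone curious whether their installed PyTorch build is
          | the ROCm variant, something like this should work (a minimal
          | sketch, assuming a recent PyTorch where torch.version.hip is
          | populated on ROCm builds):
          | 
          |   import torch
          | 
          |   # torch.version.hip is set on ROCm builds, None on CUDA ones
          |   if torch.version.hip is not None:
          |       print("ROCm/HIP build:", torch.version.hip)
          |   elif torch.version.cuda is not None:
          |       print("CUDA build:", torch.version.cuda)
          | 
          |   # ROCm builds reuse the "cuda" device API for AMD GPUs
          |   print("GPU available:", torch.cuda.is_available())
          |   if torch.cuda.is_available():
          |       print("Device:", torch.cuda.get_device_name(0))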
        
           | byefruit wrote:
           | Unfortunately that support is via ROCm, which doesn't support
           | the last three generations (!) of AMD hardware: https://githu
           | b.com/ROCm/ROCm.github.io/blob/master/hardware....
        
             | dragontamer wrote:
             | ROCm supports Vega, Vega 7nm, and CDNA just fine.
             | 
             | The issue is that AMD has split their compute into two
             | categories:
             | 
             | * RDNA -- consumer cards. A new ISA with new compilers /
             | everything. I don't think it's reasonable to expect AMD's
             | compilers to work on RDNA, when such large changes have
             | been made to the architecture. (32-wide instead of 64-wide.
             | 1024 registers. Etc. etc.)
             | 
             | * CDNA -- based on Vega's ISA. Despite being "legacy
             | ISA", it's pretty modern in terms of capabilities. MI100 is
             | competitive against the A100. CDNA is likely going to run
             | Frontier and El Capitan supercomputers.
             | 
             | ------------
             | 
             | ROCm focused on CDNA. They've had compilers emit RDNA code,
             | but it's not "official" and still buggy. But if you went
             | for CDNA, that HIP / ROCm stuff works well enough for
             | Oak Ridge National Laboratory.
             | 
             | Yeah, CDNA is expensive ($5k for MI50 / Radeon VII, and $9k
             | for MI100). But that's the price of full-speed scientific-
             | oriented double-precision floating point GPUs these days.
        
               | byefruit wrote:
               | That makes a lot more sense, thanks. They could do with
               | making that a lot clearer on the project.
               | 
               | Still handicaps them compared to Nvidia where you can
               | just buy anything recent and expect it to work. Suspect
               | it also means they get virtually no open source
               | contributions from the community because nobody can run
               | or test it on personal hardware.
        
               | dragontamer wrote:
               | NVidia can support anything because they have a PTX-
               | translation layer between card generations, and invest
               | heavily in PTX.
               | 
               | Each assembly language from each generation of cards
               | changes. PTX recompiles the "pseudo-assembly"
               | instructions into the new assembly code each generation.
               | 
               | ---------
               | 
               | AMD has no such technology. When AMD's assembly language
               | changes (ex: from Vega into RDNA), it's a big compiler
               | change. AMD managed to keep the ISA mostly compatible
               | from the 7xxx-series GCN 1.0 cards in the early 10s
               | all the way to Vega 7nm in the late 10s... but RDNA's
               | ISA change was pretty massive.
               | 
               | I think it's only natural that RDNA was going to have
               | compiler issues.
               | 
               | ---------
               | 
               | AMD focused on Vulkan / DirectX support for its RDNA
               | cards, while its compute team focused on continuing
               | "CDNA" (which won large supercomputer contracts). So
               | that's just how the business ended up.
        
               | blagie wrote:
               | I bought an ATI card for deep learning. I'm a big fan of
               | open source. Less than 12 months later, ROCm dropped
               | support. I bought an NVidia, and I'm not looking back.
               | 
               | This makes absolutely no sense to me, and I have a Ph.D:
               | 
               | "* RDNA -- consumer cards. A new ISA with new compilers /
               | everything. I don't think its reasonable to expect AMD's
               | compilers to work on RDNA, when such large changes have
               | been made to the architecture. (32-wide instead of
               | 64-wide. 1024 registers. Etc. etc.) * CDNA -- based off
               | of Vega's ISA. Despite being "legacy ISA", its pretty
               | modern in terms of capabilities. MI100 is competitive
               | against the A100. CDNA is likely going to run Frontier
               | and El Capitan supercomputers. ROCm focused on CDNA.
               | They've had compilers emit RDNA code, but its not
               | "official" and still buggy. But if you went for CDNA,
               | that HIP / ROCm stuff works enough for the Oak Ridge
               | National Labs. Yeah, CDNA is expensive ($5k for MI50 /
               | Radeon VII, and $9k for MI100). But that's the price of
               | full-speed scientific-oriented double-precision floating
               | point GPUs these days."
               | 
               | I neither know nor care what RDNA, CDNA, A100, MI50,
               | Radeon VII, MI100, or all the other AMD acronyms are.
               | Yes, I could figure it out, but I want plug-and-play,
               | stability, and backwards-compatibility. I ran into a
               | whole different minefield with AMD. I'd need to run old
               | ROCm, downgrade my kernel, and use a different card to
               | drive monitors than for ROCm. It was a mess.
               | 
               | NVidia gave me plug-and-play. I bought a random NVidia
               | card with the highest "compute level," and was confident
               | everything would work. It does. I'm happy.
               | 
               | Intel has historically had great open source drivers, and
               | if it gives better plug-and-play and open source, I'll
               | buy Intel next time. I'm skeptical, though. For the
               | past few years, Intel has had a hard time tying its
               | own shoelaces. I can't imagine this will be different.
        
               | dragontamer wrote:
               | > Yes, I could figure it out, but I want plug-and-play,
               | stability, and backwards-compatibility
               | 
               | It's right there in the ROCm introduction.
               | 
               | https://github.com/RadeonOpenCompute/ROCm#Hardware-and-Softw...
               | 
               | > ROCm officially supports AMD GPUs that use following
               | chips:
               | 
               | > GFX9 GPUs
               | 
               | > "Vega 10" chips, such as on the AMD Radeon RX Vega 64
               | and Radeon Instinct MI25
               | 
               | > "Vega 7nm" chips, such as on the Radeon Instinct MI50,
               | Radeon Instinct MI60 or AMD Radeon VII, Radeon Pro VII
               | 
               | > CDNA GPUs
               | 
               | > MI100 chips such as on the AMD Instinct(tm) MI100
               | 
               | --------
               | 
               | The documentation of ROCm is pretty clear that it works
               | on a limited range of hardware, with "unofficial" support
               | at best on other sets of hardware.
        
               | blagie wrote:
               | Only...
               | 
               | (1) There are a million different ROCm pages and
               | introductions
               | 
               | (2) Even that page is out-of-date, and e.g. claims
               | unofficial support for "GFX8 GPUs: Polaris 11 chips, such
               | as on the AMD Radeon RX 570 and Radeon Pro WX 4100,"
               | although those were randomly disabled after ROCm 3.5.1.
               | 
               | ... if you have a Ph.D in AMD productology, you might be
               | able to figure it out. If it's merely in computer
               | science, math, or engineering, you're SOL.
               | 
               | There are now unofficial guides to downgrading to 3.5.1,
               | only 3.5.1 doesn't work with many modern frameworks, and
               | you land in a version incompatibility mess.
               | 
               | These aren't old cards either.
               | 
               | Half-decent engineer time is worth $350/hour, all in
               | (benefits, overhead, etc.). Once you've spent a week
               | futzing with AMD's mess, you're behind by the cost of ten
               | NVidia A4000 cards which Just Work.
               | 
               | As a footnote, I suspect in the long term, small
               | purchases will be worth more than the supercomputing
               | megacontracts. GPGPU is wildly underutilized right now.
               | That's mostly a gap of software, standards, and support.
               | If we can get that right, every computer could put its
               | many teraflops of computing power to work, even for
               | stupid video chat filters and whatnot.
        
               | dragontamer wrote:
               | > Half-decent engineer time is worth $350/hour, all in
               | (benefits, overhead, etc.). Once you've spent a week
               | futzing with AMD's mess, you're behind by the cost of ten
               | NVidia A4000 cards which Just Work.
               | 
               | It seems pretty simple to me if we're talking about
               | compute. The MI-cards are AMD's line of compute GPUs. Buy
               | an MI-card if you want to use ROCm with full support.
               | That's MI25, MI50, or MI100.
               | 
               | > As a footnote, I suspect in the long term, small
               | purchases will be worth more than the supercomputing
               | megacontracts. GPGPU is wildly underutilized right now.
               | That's mostly a gap of software, standards, and support.
               | If we can get that right, every computer could put its
               | many teraflops of computing power to work, even for
               | stupid video chat filters and whatnot.
               | 
               | I think you're right, but the #1 use of these devices is
               | running video games (aka: DirectX and Vulkan). Compute
               | capabilities are quite secondary at the moment.
        
               | wmf wrote:
               | Hopefully CDNA2 will be similar enough to RDNA2/3 that
               | the same software stack will work with both.
        
               | dragontamer wrote:
               | I assume the opposite is going on.
               | 
               | Hopefully the RDNA3 software stack is good enough that
               | AMD decides that CDNA2 (or CDNA3) can be based on
               | the RDNA instruction set.
               | 
               | AMD doesn't want to piss off its $100 million+ customers
               | with a crappy software stack.
               | 
               | ---------
               | 
               | BTW: AMD is reporting that parts of ROCm 4.3 are working
               | with the 6900 XT GPU (suggesting that RDNA code
               | generation is beginning to work). I know that ROCm
               | 4.0+ has made a lot of GitHub check-ins that suggest
               | AMD is now actively working on RDNA code generation.
               | It's not officially written into the ROCm
               | documentation yet; it's mostly the discussions in ROCm
               | GitHub issues that note these changes.
               | 
               | It's not official support, and it's literally years
               | late. But it's clear what AMD's current strategy is.
        
               | FeepingCreature wrote:
               | You don't think it's reasonable to expect machine
               | learning to work on new cards?
               | 
               | That's exactly the point. ML on AMD is a third-class
               | citizen.
        
               | dragontamer wrote:
               | AMD's MI100 has those 4x4 BFloat16 and FP16 matrix
               | multiplication instructions you want, with PyTorch and
               | TensorFlow compiling down into them through ROCm.
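               | 
               | From the framework side that just looks like a
               | low-precision matmul; a rough sketch (assuming a ROCm
               | build of PyTorch on an MI100, where the ROCm libraries
               | decide whether it hits the matrix-core path):
               | 
               |     import torch
               | 
               |     a = torch.randn(4096, 4096, device="cuda",
               |                     dtype=torch.bfloat16)
               |     b = torch.randn(4096, 4096, device="cuda",
               |                     dtype=torch.bfloat16)
               |     # bf16 GEMM, eligible for the MFMA instructions
               |     # on MI100-class hardware.
               |     c = a @ b
               |     print(c.dtype, c.shape)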
               | 
               | Now don't get me wrong: $9000 is a lot for a development
               | system to try out the software. NVidia's advantage is
               | that you can test out the A100 by writing software for
               | cheaper GeForce cards at first.
               | 
               | NVidia also makes it easy with the DGX computer to
               | quickly get a big A100-based computer. With AMD you
               | have to shop around with Dell, Supermicro, etc., to
               | find someone to build you that computer.
        
               | paulmd wrote:
               | > ROCm supports Vega, Vega 7nm, and CDNA just fine.
               | 
               | Yeah, but that's exactly what OP said - Vega is three
               | generations old at this point, and that is the last
               | consumer GPU (apart from the VII, which is a rebranded
               | compute card) that ROCm supports.
               | 
               | On the NVIDIA side, you can run at least basic
               | tensorflow/pytorch/etc on a consumer GPU, and that option
               | is not available on the AMD side; you have to spend
               | $5k to get a GPU that their software actually supports.
               | 
               | Not only that, but on the AMD side it's a completely
               | standalone compute card - none of the supported compute
               | cards do graphics anymore. Whereas if you buy a 3090 at
               | least you can game on it too.
        
               | Tostino wrote:
               | I really don't think people appreciate enough that,
               | for developers to care to learn about building
               | software for your platform, you need to make it
               | accessible for them to run that software. That means
               | "run on the hardware they already have". AMD really
               | needs to push to get ROCm compiling for RDNA-based
               | chips.
        
               | slavik81 wrote:
               | There's unofficial support in the rocm-4.3.0 math-libs
               | for gfx1030 (6800 / 6800 XT / 6900 XT). rocBLAS also
               | includes gfx1010, gfx1011 and gfx1012 (5000 series). If
               | you encounter any bugs in the
               | {roc,hip}{BLAS,SPARSE,SOLVER,FFT} stack with those cards,
               | file GitHub issues on the corresponding project.
               | 
               | I have not seen any problems with those cards in BLAS or
               | SOLVER, though they don't get tested as much as the
               | officially supported cards.
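               | 
               | If anyone wants a quick smoke test from Python, a rough
               | sketch (assuming a ROCm build of PyTorch, which routes
               | dense GEMMs through rocBLAS) is a cheap first check
               | before filing an issue:
               | 
               |     import torch
               | 
               |     a = torch.randn(2048, 2048, device="cuda")
               |     b = torch.randn(2048, 2048, device="cuda")
               |     gpu = (a @ b).cpu().double()
               |     ref = a.cpu().double() @ b.cpu().double()
               |     # A large discrepancy here points at the BLAS path.
               |     print((gpu - ref).abs().max().item())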
               | 
               | FWIW, I finally managed to buy an RX 6800 XT for my
               | personal rig. I'll be following up on any issues found in
               | the dense linear algebra stack on that card.
               | 
               | I work for AMD on ROCm, but all opinions are my own.
        
               | BadInformatics wrote:
               | I've mentioned this on other forums, but it would help to
               | have some kind of easily visible, public tracker for this
               | progress. Even a text file, set of GitHub issues or
               | project board would do.
               | 
               | Why? Because as-is, most people still believe support for
               | gfx1000 cards is non-existent in any ROCm library. Of
               | course that's not the case as you've pointed out here,
               | but without any good sign of forward progress, your
               | average user is going to assume close to zero support.
               | Vague comments like
               | https://github.com/RadeonOpenCompute/ROCm/issues/1542 are
               | better than nothing, but don't inspire that much
               | confidence without some more detail.
        
         | rektide wrote:
         | ROCm seems to be tolerably decent, if you are willing to spend
         | a couple of hours, and, big if, if HIP supports all the various
         | libraries you were relying on. CUDA has a huge support library,
         | and ROCm has been missing not just the small fry stuff but a
         | lot of the core stuff in that library.
         | 
         | Long term, AI (& a lot of other interests) need to serve
         | themselves. CUDA is excellently convenient, but long term I
         | have a hard time imagining there being a worthwhile future for
         | anything but Vulkan. There don't seem to be a lot of forays
         | into writing good all-encompassing libraries in Vulkan yet, nor
         | many more specialized AI/ML Vulkan libraries, so it feels
         | like we haven't really started trying yet.
        
           | rektide wrote:
           | Lot of downvotes. Anyone have any opinion? Is CUDA fine
           | forever? Is there something other than Vulkan we should also
           | try? Do you think AMD should solve every problem CUDA solves
           | for their customers too? What gives here?
           | 
           | I see a lot of resistance to the idea that we should
           | start trying to align to Vulkan, here & elsewhere. I
           | don't get it; it makes no sense, & everyone else using
           | GPUs is running as fast as they can towards Vulkan. Is it
           | just too early in the adoption curve, or do y'all think
           | there are more serious obstructions long term to building
           | a more Vulkan-centric AI/ML toolkit? It still feels
           | inevitable to me. What we are doing now feels like a
           | waste of time. I wish y'all wouldn't downvote so
           | casually, wouldn't just try to brush this viewpoint away.
        
             | BadInformatics wrote:
             | > Do you think AMD should solve every problem CUDA solves
             | for their customers too?
             | 
             | They had no choice. Getting a bunch of HPC people to
             | completely rewrite their code for a different API is a
             | tough pill to swallow when you're trying to win
             | supercomputer contracts. Would they have preferred to spend
             | development resources elsewhere? Probably, they've even got
             | their own standards and SDKs from days past.
             | 
             | > everyone else using GPU's is running fast as they can
             | towards Vulkan
             | 
             | I'm not qualified to comment on the entirety of it, but I
             | can say that basically no claim in this statement is true:
             | 
             | 1. Not everyone doing compute is using GPUs. Companies are
             | increasingly designing and releasing their own custom
             | hardware (TPUs, IPUs, NPUs, etc.)
             | 
             | 2. Not everyone using GPUs cares about Vulkan. Certainly
             | many folks doing graphics stuff don't, and DirectX is as
             | healthy as ever. There have been bits and pieces of work
             | around Vulkan compute for mobile ML model deployment, but
             | it's a tiny niche and doesn't involve discrete GPUs at all.
             | 
             | > Is it just too soon too early in the adoption curve
             | 
             | Yes. Vulkan compute is still missing many of the niceties
             | of more developed compute APIs. Tooling is one big part of
             | that: writing shaders using GLSL is a pretty big step down
             | from using whatever language you were using before (C++,
             | Fortran, Python, etc).
             | 
             | > do ya'll think there are more serious obstructions long
             | term to building a more Vulkan centric AI/ML toolkit
             | 
             | You could probably write a whole page about this, but TL;DR
             | yes. It would take _at least_ as much effort as AMD and
             | Intel put into their respective compute stacks to get
             | Vulkan ML anywhere near ready for prime time. You need to
             | have inference, training, cross-device communication,
             | headless GPU usage, reasonably wide compatibility, not
             | garbage performance, framework integration, passable
             | tooling and more.
             | 
             | Sure these are all feasible, but who has the incentive to
             | put in the time to do it? The big 3 vendors have their
             | supercomputer contracts already, so all they need to do is
             | keep maintaining their 1st-party compute stacks. Interop
             | also requires going through Khronos, which is its own
             | political quagmire when it comes to standardization. Nvidia
             | already managed to obstruct OpenCL into obscurity; why
             | would they do anything different here? Downstream libraries
             | have also poured untold millions into existing compute
             | stacks, OR rely on the vendors to implement that
             | functionality for them. This is before we even get into
             | custom hardware like TPUs that don't behave like a GPU at
             | all.
             | 
             | So in short, there is little inevitable about this at all.
             | The reason people may have been frustrated by your comment
             | is that Vulkan compute comes up all the time as some
             | silver bullet that will save us from the walled gardens of
             | CUDA and co (especially for ML, arguably the most complex
             | and expensive subdomain of them all). We'd all like it to
             | come true, but until all of the aforementioned points are
             | addressed this will remain primarily in pipe dream
             | territory.
        
           | dnautics wrote:
           | Is there any indication that ROCm has solved its
           | stability issues? I wasn't doing the testing myself, but
           | the reason we rejected ROCm a while back (2 years?) was
           | that you could get segfaults hours into an ML training
           | run, which is... frustrating, to say the least, and not
           | easily caught in quick test runs (or CI, if ML did more
           | CI).
        
         | rowanG077 wrote:
         | I think this situation can only be fixed by moving up to
         | higher-level languages that compile down to the
         | vendor-specific GPU languages. Just treat CUDA, OpenCL,
         | Vulkan compute, Metal compute (??), etc. as the assembly
         | of graphics cards.
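         | 
         | The deep learning frameworks already work roughly that way
         | at the user level: you write against a device abstraction
         | and the framework picks the backend. A rough sketch
         | (assuming PyTorch builds with CUDA or ROCm support; the
         | same script targets either):
         | 
         |     import torch
         | 
         |     # "cuda" is the device name on both CUDA and ROCm
         |     # builds, so nothing vendor-specific shows up in user
         |     # code.
         |     device = "cuda" if torch.cuda.is_available() else "cpu"
         |     x = torch.randn(1024, 1024, device=device)
         |     print(device, (x @ x).sum().item())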
        
           | T-A wrote:
           | https://www.oneapi.io/
        
             | snicker7 wrote:
             | Currently only supported by Intel.
        
           | hobofan wrote:
           | Barely anyone is writing CUDA directly these days. Just add
           | support in PyTorch and Tensorflow and you've covered probably
           | 90% of the deep learning market.
        
             | hprotagonist wrote:
             | and ONNX.
        
           | pjmlp wrote:
           | That is just part of the story.
           | 
           | CUDA wiped out OpenCL because CUDA went polyglot as of
           | version 3.0, while OpenCL kept insisting that everyone
           | should write in a C dialect.
           | 
           | They also provide great graphical debugging tools and
           | libraries.
           | 
           | Khronos waited too long to introduce SPIR, and in traditional
           | Khronos fashion, waited for the partners to provide the
           | tooling.
           | 
           | One could blame NVidia, but it isn't as if the
           | competition has done a better job.
        
       | astockwell wrote:
       | More promises tied not to something in hand but to some
       | amazing future thing. Intel has not learned one bit.
        
         | tyingq wrote:
         | _" The earliest Arc products will be released in "the first
         | quarter of 2022"_
         | 
         | That implies they do have running prototypes in hand.
        
       | bifrost wrote:
       | I'd be excited to see if you can run ARC on Intel ARC!
       | 
       | GPU Accelerated HN would be very interesting :)
        
       | [deleted]
        
       ___________________________________________________________________
       (page generated 2021-08-16 23:01 UTC)