[HN Gopher] Nvidia Unveils GeForce RTX 30 Series GPUs
       ___________________________________________________________________
        
       Nvidia Unveils GeForce RTX 30 Series GPUs
        
       Author : mkaic
       Score  : 449 points
       Date   : 2020-09-01 16:42 UTC (6 hours ago)
        
 (HTM) web link (blogs.nvidia.com)
 (TXT) w3m dump (blogs.nvidia.com)
        
       | Ninjinka wrote:
       | The 3080 requires a 750W PSU, while the 3070 only requires 650W.
       | Given I have a 650W, that might tip the scale for me.
        
         | arduinomancer wrote:
         | Heavily depends what else is in your system.
        
         | arvinsim wrote:
         | They did have a disclaimer that it depends on your PC
         | configuration. If you run something like the Ryzen 3600, it
         | might be fine.
        
         | gambiting wrote:
          | These numbers never meant anything. I run a 1050Ti on a 200W
          | PSU - nvidia recommends 450W minimum. Add the TDP of your GPU
          | and CPU, plus about 100W for accessories, and that's what you
          | actually need. Nvidia recommends a much more powerful PSU than
          | needed just in case.
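          | 
          | As a back-of-the-envelope sketch of that rule of thumb, in
          | Python (the numbers are assumed examples, not measurements):
          | 
          |     gpu_tdp = 320      # e.g. advertised RTX 3080 board power
          |     cpu_tdp = 125      # a boost-happy desktop CPU
          |     accessories = 100  # fans, drives, RAM, USB devices
          | 
          |     print(f"ballpark PSU: {gpu_tdp + cpu_tdp + accessories}W")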
        
           | kllrnohj wrote:
           | > Nvidia recommends a much more powerful PSU than needed just
           | in case.
           | 
           | The two things to watch out for here are
           | 
           | 1) Cheaper PSUs can't always actually hit their claimed
           | wattage, particularly not in real-world heat scenarios
           | 
           | 2) CPU & GPU both use the 12V rail for their power, and not
           | all PSUs can deliver all the rated wattage on the 12V rail.
           | 
            | Any decent-to-good PSU won't have either of those issues;
            | most list their rated wattage entirely on the 12V rails these
            | days.
           | 
           | So for example let's assume 250w for the GPU average and 120w
           | for the CPU average (turbo & boost & all that). A 400w PSU
           | could technically do that, particularly since if your only
           | drive is an SSD your "accessories" are basically a rounding
           | error. But if we take this 400W PSU for example:
           | https://www.newegg.com/coolmax-i-400-400w/p/N82E16817159140
           | it can only deliver 300W on the 12V rail. Not enough. By
           | comparison this EVGA 450W PSU can do a full 450W on the 12V
           | rail alone:
           | https://www.evga.com/products/product.aspx?pn=100-BR-0450-K1
           | 
           | That's a 150W useful difference in this scenario even though
           | the "rated" power only differs by 50W.
        
           | phaus wrote:
            | Yeah, I think if you run the calculations and have some room,
            | you should be good even if your PSU is only slightly above
            | the power requirements. If you run an unusual amount of
            | memory or HDDs/etc, you might want to calculate everything
            | manually rather than just assume it's 100 watts, though.
           | 
            | I have never had a PSU fail, but supposedly, unlike pretty
            | much every other component, if it fails it's possible it will
            | destroy your GPU/CPU/MB, so it makes sense to spend a little
            | extra on a good PSU.
           | 
            | Probably my best component purchase ever was a 1050W modular
            | PSU in 2014. It was an old model even then, and apparently no
            | one wanted 1000W+ power supplies back then, because it was on
            | clearance. It should still be good for a 3090 and probably
            | even a 5090 when I upgrade again in the future.
            | 
            | I paid far less than a 1000W PSU costs now.
        
           | eMSF wrote:
           | I feel like this used to be more true in the past. These days
           | CPUs can exceed their official TDP by quite a large margin,
           | and while in theory they should only do so temporarily, many
           | motherboards default to unlimited boost clocks. (Then again,
           | perhaps you're never going to fully utilize both CPU and GPU
           | at the same time...)
        
             | gambiting wrote:
             | That is correct. Nowadays a 75W TDP Intel CPU can use as
             | much as 200W for short bursts. That wasn't the case in the
             | past. However, it should still be possible to find out that
             | maximum draw value for many motherboards and pick a PSU
             | accordingly.
        
         | Macha wrote:
         | This is likely a cautious recommendation on nVidia's part.
         | 
         | A 2080 Ti + 8700k system used 450W (nvidia recommended a 650w
         | psu). While high end CPUs have gotten a bit more power hungry
         | with higher core counts on the 10900k/10700k/3900x/3950x, I'd
         | be shocked if a 650W PSU couldn't handle a mainstream CPU +
         | 3080.
         | 
         | https://www.techspot.com/review/1701-geforce-rtx-2080/page4....
         | 
         | nVidia's recommendation is based on "we don't want people
         | pissed off because they put it in a system with a 3990wx at the
         | recommended PSU capacity and it didn't work"
        
           | tobyhinloopen wrote:
            | Do note that a PSU running at max capacity might also run
            | hotter and louder.
            | 
            | Having some headroom might result in a quieter system.
        
           | redisman wrote:
            | You can see on Nvidia's product page that they are
            | calculating based on a 125W processor. My 3700X is 65W, so
            | I'll easily have enough room for a 3070 on a 600W PSU.
        
           | shajznnckfke wrote:
           | My old theory was they add so much headroom that you might be
           | able to add a second one later in SLI.
        
       | ponker wrote:
        | Jensen's stovetop is the billionaire's equivalent of a tricked-
       | out RGB setup.
        
       | ablekh wrote:
       | Very cool. I'm wondering about whether RTX 30 Series cards are
       | compatible (interfaces, form factor, etc.) with Quadro RTX 8000.
       | Thoughts?
        
       | jordache wrote:
       | dude has a lot of spatulas at his house!
        
       | Falell wrote:
       | What's the expected delay between reference card release and OEM
       | card release?
        
       | shantara wrote:
       | It's worrying to see GPU cooling system dumping the heat directly
       | onto the CPU, RAM and motherboard components. That's the only
       | thing that left me skeptical after watching the presentation.
        
         | jiofih wrote:
          | What do you mean? The fan design is pretty standard. Either way
          | it won't matter: since hot air gets extracted from the case
          | anyway, it's not gonna meaningfully change the temps of
          | anything it hits.
        
           | shantara wrote:
            | The rear fan pulls air through a cutout in the card, past the
            | heat pipes, and carries the hot air above the GPU to the
            | upper part of the motherboard. That is not in any way a
            | standard design, where the hot air is exhausted directly
            | outside the PC case.
           | 
           | https://youtu.be/ALEXVtnNEwA?t=3283
        
         | shados wrote:
         | A few weeks/months after release, 3rd parties are going to come
         | out with a bazillion different thermal solutions, including
         | self contained liquid cooling, like they always do. So that's a
         | minor issue. Just don't buy the reference model.
        
         | redisman wrote:
         | I mean that's where the air will go anyway as it rises up, just
         | more inefficiently.
        
       | 0xfaded wrote:
       | I have some CUDA code that I need to run as fast as possible, so
       | if I was going to blow $1500, my use case would imply that I
       | should go with two 3080s.
       | 
       | However, I also play around with deep learning stuff, expect to
       | do so more in the future, but don't currently follow it so
       | closely.
       | 
       | Would someone care to ponder on what difference they think a 24GB
       | gpu vs a 10GB gpu will have as a tool for deep learning dev over
       | the next 3 years?
       | 
       | For what it's worth, I'm a computer vision guy, but I did have a
       | play with DeepSpeech earlier this year.
        
         | godelski wrote:
         | Being in the ML space myself I can tell you that memory is
         | pretty important. With my current workload (vision tasks) it is
         | my largest constraint.
         | 
          | That said, rumor has it that they will announce a 20GB 3080
         | later.
        
           | abledon wrote:
            | and it's a 20GB card with the ability to do FP16, right? So
            | theoretically lots of the models can actually be 40GB?
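            | 
            | (Concretely I'm thinking of mixed precision - a minimal
            | sketch, assuming PyTorch's AMP. FP16 roughly halves the
            | bytes per activation, though AMP keeps FP32 master weights,
            | so it's not a literal doubling:)
            | 
            |     import torch
            | 
            |     model = torch.nn.Linear(4096, 4096).cuda()
            |     opt = torch.optim.SGD(model.parameters(), lr=1e-3)
            |     scaler = torch.cuda.amp.GradScaler()
            | 
            |     for _ in range(10):
            |         x = torch.randn(64, 4096, device="cuda")
            |         opt.zero_grad()
            |         with torch.cuda.amp.autocast():  # FP16 where safe
            |             loss = model(x).pow(2).mean()
            |         scaler.scale(loss).backward()    # avoid underflow
            |         scaler.step(opt)
            |         scaler.update()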
        
         | dplavery92 wrote:
         | It's not _impossible_ to distribute training across multiple
          | GPUs, but it's certainly not straightforward. And if you want
         | activations for an entire 4K image in the same model, having a
         | lot of memory on one card is your friend.
        
         | hwillis wrote:
          | I'm fairly lay, but imo GPT-3 demonstrates pretty soundly that
          | huge models are no magic bullet: it's got >2x as many
          | parameters as the human brain has neurons and it can't do long
          | division. Dogs and other animals get by just fine having less
          | than 1% as many neurons as humans.
          | 
          | Even a billion parameters is a _huge_ model, and a 2.4x
          | increase is not going to make a tremendous difference in
          | your performance. In particular the data-heavy nature of vision
          | stuff means that you'll be bottlenecked by training more than
          | memory, AFAIK (again, lay).
        
           | junipertea wrote:
           | It can't do long division because it literally can't read the
           | number inputs, due to the way text is encoded (BPE). That it
           | manages to learn any sort of arithmetic despite that is
           | pretty impressive to me.
        
           | ipsum2 wrote:
           | Model parameters and biological neurons do not map 1:1.
        
           | benlivengood wrote:
           | The human brain has about 100 trillion synapses which is a
           | closer analog to ML model parameters.
        
           | i-am-curious wrote:
           | This is a misleading comparison. You are comparing a massive
           | model with huge models. What you should be comparing are big
           | models vs medium models that a single consumer GPU will fit.
           | And - you don't need to take my word for it, there's tons of
           | papers - the bigger models definitely perform better.
        
       | ZeroCool2u wrote:
       | Wow, I wasn't planning on upgrading from a 1080 (non-Ti) but the
       | 3080 is so good and priced so well, I probably will. I just
       | ordered a 240 Hz 1440p IPS monitor and I wasn't planning to hit
       | 240 Hz, but this makes it so easy I might as well.
       | 
       | My day job is primarily ML as well, so I might just go for the
       | 3090. 24 GB of memory is a game changer for what I can do
       | locally. I really just wish Nvidia would get its shit together
       | with Linux drivers. Ubuntu has done some great work making things
       | easier and just work, but having them directly in the kernel
       | would be so much nicer.
       | 
       | One thing I'm curious about is the RTX IO feature. The slide said
       | it supports DirectStorage for Windows, but is there an equivalent
       | to this for Linux? I'm hoping someone with a little more insight
       | or an Nvidia employee may have some more information.
        
         | smileybarry wrote:
         | > One thing I'm curious about is the RTX IO feature.
         | 
         | Me too, but for a different reason: I wonder if it can work
         | with BitLocker if you're using software encryption (as hardware
         | encryption is transparent enough that the drive basically locks
         | and unlocks).
         | 
         | It would still save the CPU cost of decompression but it'd have
         | to go through RAM for (AES-NI accelerated) CPU decryption
         | either way. Maybe at that point RAM speed and latencies start
         | to matter more. Or the feature turns off altogether. Definitely
         | something to test.
        
         | marmaduke wrote:
         | > 24 GB of memory is a game changer
         | 
         | a little confused here, I see Dell offering machines with cards
         | up to 48 GB, so 24 GB seems quite nice, but not game changing.
        
           | jjcm wrote:
           | The difference is the Quadro RTX 8000 is $5500 (vs the 3090's
           | $1500 price), and the Quadro has less than half of the CUDA
           | cores that the 3090 does.
           | 
           | You can buy two of the 3090's for almost half the price of
           | the Quadro and have 4x the processing power.
           | 
           | EDIT: looks like I'm not fully correct here - nvidia changed
           | how they measure cores: https://www.reddit.com/r/hardware/com
           | ments/ikok1b/explaining...
           | 
           | Still great for the price, but not double the power
           | necessarily.
        
             | gowld wrote:
             | I thought quadro sacrifices speed to get accuracy, which
             | matters for modelling/rendering but not ML (and games)
        
         | cercatrova wrote:
         | Which monitor? I was looking at the Samsung G9.
        
           | formerly_proven wrote:
           | The G9 and G7 seem to suffer from the same problem that
           | previous VA adaptive sync monitors had, i.e. brightness
           | depends on frame rate, so the screen flickers with changing
           | framerate.
           | 
           | In addition to that they seem to have some issues with their
           | firmware trampling over its own memory, causing glitch
           | artifacts and such. I suspect they'll fix that through a
           | firmware update.
        
           | ZeroCool2u wrote:
           | The EVE Spectrum[1]. I actually pre-ordered a while back, so
           | got it for less than what it's priced at now. Their first
           | project had some issues with delivery, but they seem to have
           | done a great job with this one and I'm cautiously optimistic.
           | 
           | [1]: https://evedevices.com/pages/full-specs
        
             | cercatrova wrote:
             | Hm, Eve as a whole feels pretty suspect after what they did
             | with the V. I wonder if they'll really ship these this
             | time, but I doubt it.
        
               | ZeroCool2u wrote:
               | Yeah, I read the stories, but I also read their side of
               | the story and it seemed like they got pretty screwed with
               | the vendor.
               | 
               | Like I said, I'm cautiously optimistic and I'm also lucky
               | enough that losing out on my $100 deposit isn't a
               | financial issue for me.
        
         | [deleted]
        
         | godelski wrote:
         | > I really just wish Nvidia would get its shit together with
         | Linux drivers.
         | 
          | This has always confused me. They are pushing ML hard, yet we
          | often use Linux for that kind of work. And these cards are the
          | ones expected to be used in universities and home labs. Linux
          | drivers that "just work" would go a long way toward showing
          | that they are serious about ML development.
        
           | dragandj wrote:
          | I have several Nvidia GPUs of different generations on Linux,
          | and Nvidia drivers + CUDA "just work". They are closed source
          | - that's bad - but I don't get what people are complaining
          | about related to "just working". They DO "just work" (Arch
          | Linux, both the packaged and the blob install).
        
             | godelski wrote:
             | I'll give you a few examples.
             | 
              | - On Ubuntu 18 the HDMI connection on my laptop (1060Q) was
              | hit or miss. 20% of the time it would work if I plugged it
              | in. 50% of the time it would work if I had it plugged in
              | and rebooted. This makes giving presentations difficult.
              | 
              | - On Arch/Manjaro/Fedora/Ubuntu 16 I could never get the
              | HDMI connection working for a laptop.
              | 
              | - On all distros, difficulty getting CUDA running AND using
              | the display. Intel drivers for the display + Nvidia for
              | CUDA works, but this means I can't use my GPU when I want
              | to do some of the limited Linux gaming.
             | 
             | Laptops seem to have more problems than desktops. On
             | desktops I have many more hits than misses (if I have the
             | graphics card installed when I install the OS). Laptops
             | have just been terrible.
        
             | ChuckNorris89 wrote:
              | Because some DEs like KDE have devs behind them that don't
              | want to deal with non-open-source blobs on principle, so
              | you get a terrible experience with Nvidia. Gnome is usually
              | fine.
        
               | emilsedgh wrote:
               | Isn't it funny and ironic, since Gnome is a GNU project
               | and was created because KDE (Qt) wasn't FOSS enough.
        
               | [deleted]
        
             | kjs3 wrote:
             | I've had an M4000 in my desktop and a P620 in a side-box,
             | both running SuSE Tumbleweed for a year or so using the
             | NVIDIA drivers. Once I got the initial setup done (follow
             | the directions), I've had zero issues. Updates have been
             | smooth, though there's an extra step now (hit ENTER to
             | accept the NVIDIA license). GLMark2 score is a bit north of
             | 7000.
        
             | Miraste wrote:
             | They don't "just work" with Wayland, i.e. the future of
             | Linux desktop.
        
               | jakear wrote:
               | Is this just for gaming/etc or does the desktop
               | environment also cause issues if you're just trying to
               | use it as a highly parallel compute device?
        
               | godelski wrote:
                | The DEs are moving towards Wayland.
        
               | jakear wrote:
               | That doesn't really answer my question.... rephrased: all
               | I do with high-powered GPU's is Remote-SSH into hosted
               | Linux machines and run ML jobs. Does this wayland thing
               | have any impact there?
        
               | godelski wrote:
                | It should answer the question: the use case you're
                | describing (which you didn't state before) is a headless
                | environment, which means you aren't using a DE at all.
                | 
                | IF you are using a server with a DE, then yes, it does
                | affect you, because you are using a DE.
                | 
                | In either case, it isn't "just for gaming".
        
             | benchaney wrote:
             | They mostly work most of the time. When people say they
             | want something that "just works", they are looking for more
             | consistency than that.
             | 
             | I don't doubt that they work for you, but not everyone is
             | so lucky. I have encountered serious reliability issues
             | with their drivers.
        
               | hajimemash wrote:
               | 90% good * 90% of the time = 81% good all the time [only
               | :( ]
        
               | gavinray wrote:
                | The headache with Nvidia drivers and Linux is that
                | depending on how new your model is and what distro you're
                | using, you may or may not be able to load the GUI until
                | you've manually upgraded the kernel, disabled the Nouveau
                | drivers (if enabled), and then updated to the latest
                | nvidia-drivers-XXX package.
               | 
               | I tried to get Ubuntu/Pop_OS! 20.04 running on a dual-GPU
               | laptop that has an RTX-2060 and AMD Renoir integrated.
               | Had to modify kernel boot params to disable nouveau
               | modeset and then run script for mainline kernel upgrade +
               | drivers to get it to run. Was not a fun Saturday =/
               | 
               | But prior to this, yeah Nvidia non-open source drivers
               | have mostly "just worked" for me on the older models.
        
               | godelski wrote:
                | Something interesting I've noticed is that there is a big
                | difference in usability between desktops and laptops.
                | With desktops I have far fewer issues, but with laptops I
                | have to get lucky just to be able to use my HDMI port.
                | And good luck doing that and having CUDA support.
        
               | gavinray wrote:
               | Out of curiosity, what laptop models + distros have you
               | tried to set up?
               | 
               | On Acer Nitro 5 with Nvidia GTX-1060Ti and i5, stock
               | Ubuntu 20.04 loaded no problems, and even DisplayLink
               | driver for dual monitor, one through regular HDMI + other
               | through USB -> HDMI adapter worked (though I couldn't get
               | it to rotate display vertically).
               | 
                | The bugs I did have were it constantly re-discovering
                | network printers (which I had to disable), and having to
                | change the default wifi power-saving settings because
                | something was funky with them.
               | 
               | On Asus TUF A15 with Nvidia RTX-2060/AMD Renoir + AMD
               | Ryzen 7 4800h absolutely no distro worked out-of-the-box
               | and I needed mainline kernel + latest Nvidia drivers. But
               | after fixing that myself Pop_OS picks up everything
               | perfectly and no problems.
               | 
               | Both of them have CUDA working IIRC (at least running
               | "nvidia-smi" says it does).
        
               | godelski wrote:
               | I have an inspiron with a 1060Q and had similar problems
               | on an HP envy (forgot the card in there). I've tried a
               | bunch of distros (I've been on linux for about a decade
               | and mainly run Arch though).
               | 
               | I have heard great things about Pop and I am going to be
               | building a new machine with these new cards and giving
               | pop a try.
        
               | gavinray wrote:
               | Highly recommend Pop_OS!, it may as well be called
               | "Ubuntu, except more driver patches and performance
               | tweaks" haha.
        
               | Shared404 wrote:
               | And no Snap, although I suppose that could fall under
               | performance tweaks.
        
             | murgindrag wrote:
              | NVidia has always been a headache for me, so I run ATI on
              | Linux. I use xmonad as a window manager, and NVidia's
              | proprietary drivers just didn't handle multiple screens
              | correctly. I switched to the free drivers, and they
              | supported one fewer monitor than I was driving. I switched
              | to ATI, and then things just worked.
             | 
             | What's more surprising is that ATI doesn't take this
             | opening.
        
             | literallycancer wrote:
             | AMD just works, as in the distro includes their drivers
             | because they are open source.
             | 
             | Nvidia works if you add their repo and install the blobs,
             | maybe. If your distro is mainstream enough.
        
         | fluffything wrote:
         | > I really just wish Nvidia would get its shit together with
         | Linux drivers. Ubuntu has done some great work making things
         | easier and just work,
         | 
         | This has been my experience. Ubuntu "just works". What would
         | get better with what you propose? (speaking from ignorance)
        
           | ZeroCool2u wrote:
            | Yeah, Wayland is one of the main issues here. Things are going
           | really well generally speaking with the Wayland transition,
           | but Nvidia is single-handedly delaying the wide-spread
           | transition from X to Wayland across most distros, which is
           | super unfortunate, because it's a very important part of a
           | desktop Linux experience. Using Wayland day to day on my XPS
           | 13 with regular Intel integrated graphics is great in terms
           | of perf and battery life. There's a much larger conversation
           | to be had about X vs Wayland, but broadly speaking Wayland is
           | just better and not bogged down by design decisions made 20
           | or 30 years ago.
           | 
           | The other key benefit would be driver management. Updating
           | your drivers right now can be a nightmare. If the drivers are
           | open sourced and upstreamed into the kernel this becomes a
           | non-issue. I do understand why Nvidia doesn't want to do that
           | though. A lot of their lead right now is not just hardware
           | based, but software based. They have a choke-hold on the ML
           | ecosystem and it's a huge cash cow for them. Giving away that
           | secret sauce in their drivers, so that AMD could make their
           | cards seamlessly compatible would probably be a huge mistake
           | from a business perspective for Nvidia.
        
           | freeone3000 wrote:
              | Wayland still doesn't work with nv drivers, so you get to
           | choose between a reasonable desktop display substrate and
           | full graphics acceleration.
        
             | fluffy87 wrote:
                | KDE and Gnome support Nvidia drivers on Wayland.
        
               | freeone3000 wrote:
                | They support nouveau. Wayland doesn't support EGLStreams,
                | no matter what DE you stick in front, so the nv drivers
                | are out.
                | 
                | (Nvidia cards have two drivers on Linux: nouveau, the
                | open-source, slow, and incompatible-with-Quadro one, and
                | NV, the binary blob shipped by Nvidia that people take
                | religious exception to.)
        
               | jborean93 wrote:
                | Gnome supports Wayland over EGLStreams, just not by
                | default. I tried it out a few months ago and while it
                | "works" it was somewhat buggy, and I just went back to X.
        
         | tracker1 wrote:
          | Having gone through some hellish months on a 5700xt, AMD isn't
          | any better than Nvidia on the Linux front, though since late
          | January it has been relatively stable (on a beta Linux kernel,
          | which meant I couldn't use KVM/VirtualBox reasonably).
          | 
          | Personally, I'm looking at the RTX 3070 for 1440p, but I'm not
          | doing anything other than some casual gaming.
        
         | jacquesm wrote:
         | > I really just wish Nvidia would get its shit together with
         | Linux drivers
         | 
         | That's something I've never had a problem with when it comes to
         | NV. And I've been using their cards for years for combined
         | compute/display purposes.
        
         | sharken wrote:
         | Am in the same situation, the 1080 is still holding up well,
         | but the 3080 looks very tempting indeed.
         | 
         | Looking forward to some benchmarks that evaluate the difference
         | between PCIe 3.0 and 4.0. Getting a new motherboard with PCIe
         | 4.0 along with a CPU will up the cost considerably, so
         | hopefully that can be avoided.
         | 
          | Also considering an upgrade to the Corsair RM1000x PSU, which
          | should stay totally silent even with the RTX 3080
          | according to https://www.tweaktown.com/reviews/7376/corsair-
         | rm1000x-1000w....
         | 
         | UPDATE:
         | 
         | According to this answer, using PCIe 3.0 should not make a huge
         | difference:
         | https://www.reddit.com/r/nvidia/comments/iko4u7/geforce_rtx_...
        
           | errantspark wrote:
            | I use an eGPU, so I have a 4x instead of a 16x PCIe 3.0 link
            | with a 1080Ti, and even that doesn't make much difference.
            | Some games blit/do more system <-> GPU transfers than others
            | (APEX, I'm lookin at you) and there's a noticeable perf
            | impact, but for the most part it's negligible. I wouldn't
            | worry at all about PCIe 3.0 vs 4.0.
        
             | jasomill wrote:
             | Same. In gaming and graphics benchmarks on my 1070Ti over
             | Thunderbolt 3 (Hades Canyon Intel NUC), I've seen no more
             | than a 10%-20% drop in frame rate vs. what I'd expect from
             | equivalent benchmark results reported online, so I'd expect
             | no material difference between PCIe 3.0 x16 and PCIe 4.0
             | x16 for typical gaming applications, and, from benchmark
             | numbers I've seen, only a modest performance drop from PCIe
             | 4.0 x16 to PCIe 3.0 _x8_.
             | 
             | This assumes, of course, that the PCIe lanes in question
             | actually exist+ and are dedicated to the card, and are not
             | shared with another in-use device via a switch or similar
             | chipset shenanigans.
             | 
             | Case in point: the only time I ever had performance
             | problems with my eGPU setup was when attempting to do 4K
             | HDMI output through a Blackmagic Decklink card while
             | simultaneously using the 1070Ti for compute over the same
             | (shared) Thunderbolt bus in DaVinci Resolve.
             | 
             | As far as I know, this would not typically be a problem
             | with a single-GPU, non-Thunderbolt-connected desktop
             | system, as desktop motherboards typically have at least one
             | full-bandwidth x16 slot, which you'd almost always use for
             | the GPU.
             | 
             | + Physical x16 slots can have fewer than sixteen connected
             | PCIe lanes. For example, while my HP Z820 has four "x16"
             | slots, only three have full bandwidth; the fourth slot
             | supports x16 cards, but only at the speed of an x8 slot,
             | because only eight of the slot's sixteen PCIe lanes are
             | actually connected.
        
           | proverbialbunny wrote:
            | The major difference is offloading some of the load from the
            | CPU when using the PCIe bus.
            | 
            | If you have a spare core that isn't being used when playing a
            | video game, it can pick up that work, which should make the
            | difference between 3.0 and 4.0 somewhat indistinguishable.
        
         | option wrote:
         | 10496 CUDA cores plus 24GB should be pretty good for ML
        
           | nomel wrote:
           | edit: Wrong! Ignore/downvote!
           | 
           | > 10496 CUDA cores
           | 
           | Not quite. 5248 cores, each supporting double fp32.
        
             | XCSme wrote:
             | But they clearly state in the specs (for 3080): 8704 NVIDIA
             | CUDA(r) Cores
             | 
             | And 10496 for 3090.
        
               | nomel wrote:
               | Correct!
        
               | ohnoesjmr wrote:
               | https://www.reddit.com/r/hardware/comments/ikok1b/explain
               | ing...
               | 
                | Seems it is half the cores but double the FP32
                | instructions per core.
                | 
                | The spec talks about CUDA cores, which is their inventive
                | unit of measure.
        
         | wtallis wrote:
         | > One thing I'm curious about is the RTX IO feature. The slide
         | said it supports DirectStorage for Windows, but is there an
         | equivalent to this for Linux? I'm hoping someone with a little
         | more insight or an Nvidia employee may have some more
         | information.
         | 
         | I haven't been able to find any real technical details about
         | what DirectStorage really is, but my expectations are that it
         | will consist of:
         | 
         | 1. An asynchronous IO API allowing for low-overhead submission
         | of IO requests from the application to the kernel, including
         | batch submissions and reaping of completion events.
         | 
         | 2. Some provision for transparent compression offload, either
         | to the GPU or to a CPU-based fallback
         | 
         | 3. Optional support for peer-to-peer DMA transfers of data from
         | SSD to the GPU's VRAM
         | 
         | Linux is already the gold standard for (1) with the relatively
         | recent io_uring API, and has support for (3) to some extent
         | (P2P DMA has been a fairly obscure feature until recently).
         | 
         | There are still some pretty big unanswered questions about
         | DirectStorage. How well will it reduce the IO overhead that
         | currently allows antivirus and other programs to hook into the
         | IO stack? Will it be compatible with non-Microsoft NVMe
         | drivers, including Intel's RST drivers that are commonly used
         | for their software RAID? Microsoft doesn't seem to want to make
         | that kind of information public this year.
        
           | modeless wrote:
           | There's a slightly more detailed blog post here:
           | https://devblogs.microsoft.com/directx/directstorage-is-
           | comi...
        
             | wtallis wrote:
             | Yeah, I already read that. If you pay attention, you'll
             | notice it talks a lot about the problem they're trying to
             | solve, and very little about how exactly they're going
             | about solving it. Partly because it's apparently still in
             | flux, which calls into question what's going on with the
             | storage API for the Xbox, which really should have been
             | nailed down by now.
        
           | AaronFriel wrote:
           | Linux is the gold standard for (1)?
           | 
           | Windows NT 3.5 (1994) would beg to differ with its support
           | for IO completion ports (IOCP). I think io_uring is more
           | general and more flexible for the sort of M:N scheduling
           | systems now commonly used in programming languages, but IOCP
           | predates io_uring by 25 years!
           | 
           | The rest of your comment sounds right to me, there are
           | unanswered questions about how DirectStorage interacts with
           | filesystem filter drivers.
        
         | elipsey wrote:
         | >> My day job is primarily ML as well, so I might just go for
         | the 3090. 24 GB of memory is a game changer for what I can do
         | locally. I really just wish Nvidia would get its shit together
         | with Linux drivers.
         | 
         | What is your ML dev environment like?
        
           | ZeroCool2u wrote:
           | We have a small team of 7, so after a lot of bureaucratic
           | navigation I got us custom built desktops for local
           | experimentation with 2080 Ti's and for the rest we use your
           | typical cloud providers. Personally, I'm partial to GCP,
           | because of the TPU support and BigQuery, but AWS is fine too.
        
         | me_me_me wrote:
          | Don't buy it yet; usually the OEM versions are more optimised
          | and better value overall.
          | 
          | But I must say the 3070 is really tempting at $500 for someone
          | who is not looking to upgrade.
        
           | i-am-curious wrote:
           | Not at all. The 20 series has excellent founders cards.
        
             | sammycdubs wrote:
             | I could be wrong - but I believe Nvidia bins the chips for
             | Founders Edition cards
        
               | sincerely wrote:
               | What does it mean to bin a chip?
        
               | xxpor wrote:
               | Basically, after the chips are manufactured, they're not
               | 100% uniform. Some have better performance, some have
               | worse.
               | 
               | In this case, it means they're reserving the best chips
               | for the founders cards. In other cases, there have been
               | instances where a company has two products, a high end
               | and a low end (or medium, etc). In some of those cases,
               | people have investigated and the chips are actually
               | exactly the same, but the lower end product will have a
               | core disabled or similar, depending on the exact product.
               | That'll happen a lot of the time when the company has
               | yield issues where too many of the chips don't have
               | acceptable performance or one part of the chip is just
               | broken. They'll disable the broken portion and boom, the
               | lower end product is born. That's still a net win for
                | them because the alternative is either to throw the entire
               | thing away or spend more time improving the yield.
        
               | dastbe wrote:
               | chip manufacturing is an imperfect process, and so there
               | is variance in performance/viability of all of the
               | hardware on a chip. the higher performing chips are
               | "binned" for the top end of the price point, while the
               | lower performing chips are either binned for lower
               | performance or have some of their functionality disabled.
                | For a contrived example, a company may produce nothing but
               | quad core chips but sell those with some cores that don't
               | meet minimum performance as dual cores with the bad cores
               | physically disabled.
        
               | henriquez wrote:
               | they cherry-pick their best silicon so you can run them
               | at higher speeds if you're overclocking or lower voltage
               | if you want cooler temperatures and less power
               | consumption at stock speeds.
        
               | Covzire wrote:
                | I know for the 20 series Nvidia supplies AIBs like EVGA
                | with binned chips too; higher end cards had GPUs with
                | slightly different model numbers that typically clocked
                | better. They probably do the same for all the different
                | card makers.
        
               | BoorishBears wrote:
               | There's always a ton of speculation about it, I've seen
               | the claim NVIDIA bins for FE cards and factory
               | overclocked AIB cards together
               | 
               | At the end of the day it doesn't really matter, you're
               | paying a FE premium for early access mostly.
               | 
               | On the plus side, this time the FE card might have a top
               | tier cooling solution, which is why I'll probably be
               | caving to their FE tax (and probably plenty of others,
               | focusing on cooling was a smart move)
        
           | zachrip wrote:
           | I'm confused, wouldn't FE be considered OEM?
        
             | smileybarry wrote:
             | The terminology is a bit confusing: OEM in this context is
             | referring to aftermarket (NVIDIA partners) cards. It's
             | referred to as "OEM" because the aftermarket cards come
             | from OEMs like Asus, EVGA, etc.
        
           | numpad0 wrote:
            | I don't believe it. Old reference fans were optimized for
            | static pressure in server chassis, while gaming cards
            | optimize for low-speed, high-volume airflow, I think.
        
           | redisman wrote:
           | That flow-through cooling looks like it would be really good
           | though. I might go for the FE cards this time around.
        
             | abvdasker wrote:
              | I have a Micro-ATX case for my PC and am a little skeptical
              | that flow-through would work well for that form factor,
              | because the intakes are on the bottom of the card.
        
       | trollied wrote:
       | It's going to be very interesting to see what AMD counter with. I
       | don't think anyone was expecting pricing this aggressive.
       | 
       | Competition is great!
        
         | DivisionSol wrote:
         | I wonder what the price would've been without AMD dipping their
         | toes into the GPU market again.
         | 
          | I very much assume Nvidia cuts prices not only to compete but
          | also to crush the AMD GPU market. Nvidia's innovation is crazy
          | huge, but they probably don't want to be in the same situation
          | as Intel, where people regard the big dog as slow and profit-
          | extracting. Nvidia will do its best to eat as much market
          | share as it can with better products than AMD's GPUs.
        
         | judge2020 wrote:
         | It's targeted for a November 2020 unveil[0] so we'll have to
         | wait to see what they're coming out with, but I doubt it'll
         | stack up to the 30xx series on both price and performance (at
         | least, based on these marketing slides).
         | 
         | 0: https://www.techradar.com/news/amd-big-navi-isnt-coming-
         | unti...
        
           | BearOso wrote:
           | It has to be out before the new consoles, which are
           | reportedly coming the first two weeks of November. I think
            | the consoles are being kept under wraps at AMD's request as
            | well.
        
         | arvinsim wrote:
         | I already expected AMD to only compete in the low and mid level
         | range.
         | 
          | But NVIDIA pegged the 3070 (which is faster than the 2080 Ti) at
         | $499.
         | 
         | That's pretty hard to beat!
        
           | fluffything wrote:
           | The 2080 TI are going to overflow the used market, at
           | 100-200$ at best, so... if AMD can't top that significantly,
           | a used 2080 TI might be much better value than anything AMD
           | can offer.
        
             | kcb wrote:
             | > The 2080 TI are going to overflow the used market, at
             | 100-200$ at best
             | 
             | Not even close. 1080 ti's are still going for twice that
             | and it was $300 cheaper at launch.
        
               | selectodude wrote:
               | 1080ti has a pretty significant bump in GPU memory which
               | has helped keep the price up.
        
               | fluffything wrote:
                | Sure, but who buys a 1080 Ti or 2080 Ti today for $400
                | when an RTX 3070 costs $499?
               | 
               | Like, really, if you know somebody, let me know. I have a
               | 1080 that I want to sell.
        
             | bradlys wrote:
             | That's overly aggressive. If a 3070 is really the same
             | performance as a 2080 TI then the 2080 TI will likely go
             | used for $400.
             | 
             | People don't see graphics cards like they're brake pads.
             | They don't wear down the same way. $150 (no tax) is still
             | $150 less for what is essentially the same thing.
        
               | fluffything wrote:
               | The 3070 appears to have better perf than the 2080 TI at
               | many things though.
        
         | tmpz22 wrote:
         | GPUs are still overpriced because of Nvidia's monopoly on high-
         | end cards. The fact that we're so normalized to this after the
         | 2xxx series is kind of sad. It's like praising Apple for a
         | phone that's "only" $999.
        
           | ChuckNorris89 wrote:
           | Without competition from AMD, Intel had normalized quad core
           | chips for $800 on the high end for over 10 years.
        
             | gruez wrote:
             | >Intel had normalized quad core chips for $800 on the high
             | end for over 10 years.
             | 
             | What are you talking about? The i7-3770K (top of the line
             | 4-core chip in 2012) sold for ~$330.
        
               | nolok wrote:
               | I assume you meant desktop only, because the top of line
               | 4 core chip of 2012 was the Xeon E3 1290v2 which sold for
               | almost $900
        
         | Macha wrote:
         | The rumour mill has the top end big navi at 2080 ti levels of
         | perf. If those rumours and nvidia's claims here are accurate,
          | they're out of the high end segment, again.
         | 
         | Here's hoping that they'll outdo the rumours.
        
           | redisman wrote:
           | That would be bad for them. 3070 already seems to do that at
           | a cheap price so they would have to have some very cheap
           | cards.
        
       | baybal2 wrote:
       | It looks really huge.
        
       | mmanfrin wrote:
        | The 3080 is '2x faster than the RTX 2080', which was roughly on
        | par with a 1080 Ti (it had advantages, of course, RTX among them).
        | 
        | Three generations newer with only a 2x speedup feels like a much
        | smaller leap than the prior generations.
        
         | ebg13 wrote:
         | I missed a step here. How is the 3080 3 generations newer than
         | either the 2080 or the 1080Ti?
        
           | mmanfrin wrote:
           | 1080ti -> 20xx -> 20xx ti -> 30xx
           | 
           | There were also the 16xx cards.
           | 
           | I'm being downvoted for this.
           | 
           | 10xx
           | 
           | 16xx
           | 
           | 20xx
           | 
           | 30xx
           | 
           | That's 3 gens difference.
        
             | ebg13 wrote:
             | Ah. It seems unreasonable to me to consider the 2080 and
             | 2080Ti as different generations when they were released at
              | the same time. Also I think we should factor in that the 3080
             | at $700 costs _half_ what I paid for a 2080Ti in December
             | (The ASUS RoG Strix, a premium 3-fan model, was going for
             | like $1400 before it was discontinued). By NVidia prices,
             | the 3090 is the true price successor to the 2080Ti.
        
               | mmanfrin wrote:
               | You're right, I shouldn't likely be distinguishing the Ti
                | series (although it is usually a 'tock' generation, like
               | the just announced gen, as Ti variants were not
               | announced).
               | 
               | But that does leave the 16xx generation which was
               | released wholly on its own, in its own year.
        
         | berryjerry wrote:
          | I thought only the RTX (ray tracing) performance was twice as
          | fast, which no one really uses yet.
        
           | mmanfrin wrote:
           | If that's the case, then this is an even smaller leap.
        
       | npmaile wrote:
       | As much as I like to hate Nvidia for all of the right reasons,
       | this is pretty big and might make me compromise my morals until
       | AMD comes out with something that can compete.
        
       | tkuraku wrote:
        | When would the Quadro cards based on Ampere likely be released?
        | Any ideas?
        
       | modeless wrote:
       | I love a small feature he mentioned in the video: Monitors with
       | built-in latency measurement. You plug your mouse into the USB
       | ports on the monitor and it tells you how long it takes the image
       | to change after you move the mouse. Brilliant idea.
       | 
       | It's long overdue to have widely available metrics for latency in
       | consumer tech. Many hardware/software setups have absurdly long
       | latency for no good reason other than that it's difficult to
       | measure. People underestimate the subconscious effects it has on
       | your usage of technology.
       | 
       | I couldn't be happier that phone and monitor manufacturers are
       | finally starting to compete on refresh rates >60 Hz. It's far
       | more important than wide gamut or 8K.
        
         | dan-robertson wrote:
         | This does encourage optimising the OS or application part (eg
         | double buffering and taking another frame is slow), but it
         | doesn't measure two significant sources of latency:
         | 
         | The latency in the hardware before the signal makes it to the
         | usb (for most keyboards, even specialised gaming keyboards this
         | is like 30ms, which is 2 frames).
         | 
         | There is also the latency from when the monitor gets a signal
         | to when the pixels have perceptually finished transitioning.
         | This can be 10s of ms too.
         | 
         | So even if the OS has 0 latency, and the monitor measures that,
         | you could still easily observe a latency of say 60ms.
        
           | kllrnohj wrote:
           | > for most keyboards, even specialised gaming keyboards this
           | is like 30ms, which is 2 frames
           | 
           | Worth noting that unlike some of the other sources being
           | discussed, this source of latency is more due to physical
           | constraints than bad engineering or bloated software.
           | Keyboards are limited by the physical travel time & de-bounce
           | of the mechanical switches themselves.
           | 
           | So this is more a tradeoff of using mechanical keyboards at
           | all rather than "gah bloated electron" or whatever.
           | 
           | The other common peripheral, mice, don't have this mechanical
           | constraint. They can even achieve sub-10ms for clicks due to
           | the difference in switch expectations (
           | https://www.rtings.com/mouse/tests/control/latency )
           | 
           | EDIT: Also 30+ms seems to be quite far off. Cherry MX's specs
           | are around 5ms, and I'm getting 8ms playing around with this
           | tool: http://blog.seethis.link/scan-rate-estimator/
        
             | modeless wrote:
             | I understand keys take time to travel, but you can press
             | them faster if you want. On the other hand I've never
             | understood debounce as a justification for button latency.
             | Surely a properly designed debounce circuit would add zero
             | latency.
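              | 
              | Something like a leading-edge debounce (just a sketch of
              | the idea in Python, not how any particular keyboard
              | firmware works): report the first transition immediately,
              | then ignore the switch for a short lockout window, so no
              | delay is added to the press itself.
              | 
              |     import time
              | 
              |     LOCKOUT_S = 0.005  # assumed 5 ms of contact bounce
              | 
              |     class DebouncedKey:
              |         def __init__(self):
              |             self.state = False
              |             self.last_change = 0.0
              | 
              |         def sample(self, raw, now=None):
              |             # Pass the first edge through right away and
              |             # suppress the chatter that follows it.
              |             now = time.monotonic() if now is None else now
              |             bouncing = now - self.last_change < LOCKOUT_S
              |             if raw != self.state and not bouncing:
              |                 self.state = raw
              |                 self.last_change = now
              |             return self.state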
        
               | kllrnohj wrote:
               | > but you can press them faster if you want
               | 
               | I mean, yes but not really? There's very real speed
               | limits to the human finger, after all.
               | 
               | But the times given above are also quite far off - the
               | actual keyboard numbers seem to be more in the 5-15ms
               | range, not 30+ms. At least, my mechanical gaming keyboard
               | is hitting 8ms on this test:
               | http://blog.seethis.link/scan-rate-estimator/
        
           | modeless wrote:
           | True! I just watched this video [1] they put out with more
           | details, and it sounds like they are working with monitor
           | _and_ mouse manufacturers to build _true_ end-to-end latency
           | measurement when using supported hardware. It would be easy
           | for manufacturers to game these numbers, but hopefully Nvidia
           | will provide some enforcement to prevent that. Maybe they
           | could also add a microphone to measure audio latency because
           | that is another huge problem with modern systems.
           | 
           | The video mentions that they get 15 ms end-to-end latency on
           | a 360 Hz monitor in Fortnite. So that's what it takes for a
           | modern system to finally beat the latency that you used to
           | get on, say, an NES connected to a CRT. Still an order of
           | magnitude above the limits of perception though [2].
           | 
           | [1] https://www.youtube.com/watch?v=-cXg7GQogAE
           | 
           | [2] https://www.youtube.com/watch?v=vOvQCPLkPt4
        
         | jayd16 wrote:
          | It's not really full end-to-end latency though. I think, at
          | best, it can measure when the monitor thinks it received the
          | signal from the mouse and when it perceives the new image.
          | Latency in the monitor itself can't be measured, can it?
        
           | nightski wrote:
           | Is there any reason to believe latency within the monitor
           | itself wouldn't be relatively constant? Or would you expect
           | large variances in monitor latency? If it was a pretty narrow
           | distribution they could simply add on that constant factor.
           | I'm not sure if they do this though.
        
         | seiferteric wrote:
         | I wish monitors in general had better (and open) firmware. It
         | would be really cool if for example I could have my laptop
         | plugged into my desktop's monitor and have it show up as a
         | "window" on my desktop. All it would take would be for some
         | simple driver on my desktop to tell the monitor where to draw
         | the image when I moved the window around. Basically a better
         | version of the mostly useless PiP feature my monitor already
         | has.
        
           | slimsag wrote:
            | As with many things, this comes back to DRM. HDCP
            | specifically.
        
             | freeone3000 wrote:
             | Monitors are generally connected through DisplayPort or
             | Thunderbolt, not HDMI, so there's no reason for HDCP to
              | enter here. The card in question does have an HDMI output,
             | but advanced features are only available through
             | DisplayPort.
        
               | ChuckNorris89 wrote:
               | It might surprise you that most monitors out there,
               | especially the cheap garbage ones from Amazon/Walmart
               | that most consumers have in their homes, are connected
               | via HDMI.
        
               | tgb wrote:
               | As someone running their monitor over HDMI, this is news
               | to me. If it gives the resolution and framerate of the
               | monitor, what's the difference if it's HDMI or
               | DisplayPort?
        
               | ChuckNorris89 wrote:
                | For you as an end consumer, nothing, if the bandwidth is
                | enough with either cable. But techies like us will be
                | sticklers that one is more open than the other, not bound
                | by royalties and such.
        
               | formerly_proven wrote:
                | DisplayPort usually has something like 50% higher
                | bandwidth than the HDMI standard of the same vintage.
                | Which makes sense, since home video folks are fine with
                | like 15 pictures per second and 4:2:0 chroma
                | subsampling, but that doesn't really work for
                | computers...
        
               | jasomill wrote:
               | That, and some video cards support higher resolutions
               | over DisplayPort. For example, while my 2013 Mac Pro has
                | both HDMI and Thunderbolt/Mini DisplayPort outputs, the
               | only way to get 2160p60 HDMI output (without chroma
               | subsampling) is via a DisplayPort-to-HDMI 2.x adapter
               | (and I had to try about half a dozen different adapters
               | before I found one that does this correctly).
        
               | nailer wrote:
               | > Monitors are generally connected through DisplayPort or
               | Thunderbolt, not HDMI, so there's no reason for HDCP to
               | enter here.
               | 
               | HDCP runs over DP and Thunderbolt. But derefr's
               | discussion above is accurate.
        
             | derefr wrote:
              | In this particular case, no: the image wouldn't be sent
              | into the PC to be rendered in its framebuffer; rather, the
              | PC would just draw an empty window and report the geometry
              | of that window to the monitor. The monitor, with two HDMI
              | leads plugged in, would be responsible for compositing the
              | inputs together according to the geometry the PC reported,
              | but all internal to itself.
        
               | shawnz wrote:
               | I think they mean DRM is the reason we can't have open
               | firmware. Not that DRM is the reason we can't have
               | sophisticated PiP features.
        
               | smileybarry wrote:
               | That's how hardware-accelerated video decoding used to
               | work in Windows XP days IIRC (before GPU-based desktop
               | composition), the video player would be a blank black
               | square and the GPU would be told to draw the video on
               | those coordinates.
               | 
               | Because of how it was implemented, you could drag VLC
               | around while the video was playing and the video would
               | stay "behind" everything, with the VLC window acting as a
               | "hole" through which you could see it. (So you could move
               | the window to the left and see half a black square on the
               | left, and the left-most half of the video on the right)
               | 
               | Nowadays with desktop composition AKA DWM, Windows just
               | makes sure to black out DRM content from any frames/video
               | it sends to an app requesting to capture the screen,
               | making sure to send the video-including composed desktop
               | only to the display. (And if you have some GPU recording
               | software like NVIDIA ShadowPlay, it switches off when DRM
               | content starts playing) You can see it in action with the
               | Netflix UWP app. Of course, a bunch of DRM-respecting
               | software -- like Netflix in Google Chrome -- doesn't
               | really follow that spec and can still be screenshot/video
               | captured like any app.
        
               | formerly_proven wrote:
               | This isn't a firmware issue, because enabling this would
               | require adding the hardware to the scaler ASIC to
               | actually process multiple video streams, and to increase
               | the buffer size and bandwidth n-fold so that it can
               | synchronize and composite the unsynced video sources
               | (also introducing 1+ frames of latency).
        
           | nradov wrote:
           | Just use remote control software to connect from one computer
           | to the other and display it as a window.
        
             | seiferteric wrote:
             | I do, but now you have latency.
        
               | aaronblohowiak wrote:
               | HDMI encoder will be better than remote software, in my
               | experience (depending on the hdmi encoder)
        
         | amelius wrote:
         | > Monitors with built-in latency measurement.
         | 
         | They should start measuring the time it takes for monitors to
         | recognize the signal when you press the "Source" button.
         | 
         | Even on recent monitors, it's often 5 seconds or more.
        
           | modeless wrote:
           | Yes! I would love for more review sites to benchmark this, as
           | well as time to wake from sleep. I have a gsync monitor that
           | has only one input port and it wakes from sleep in like 0.2
           | seconds. After the experience of using it for a while, if I
           | now had to choose between gsync and fast wake from sleep I
           | would choose fast wake from sleep every time.
        
         | nomel wrote:
         | > You plug your mouse into the USB ports on the monitor and it
         | tells you how long it takes the image to change after you move
         | the mouse. Brilliant idea.
         | 
         | Related, "Your mouse is a terrible webcam", 2014:
         | https://hackaday.com/2014/01/14/your-mouse-is-a-terrible-web...
        
         | nickjj wrote:
         | Fortunately a number of monitor review sites include input
         | latency that's properly measured. For example
         | https://www.tftcentral.co.uk/.
         | 
         | They have a lot of popular models and super in depth reviews.
        
           | tgtweak wrote:
            | Rtings is pretty solid too.
        
         | ksec wrote:
         | And I am surprised this link [1] hasn't popped up yet.
         | 
         | Yes. The industry has been optimising for throughput in the
         | past decades. It is time to bring latency back to the table. I
         | want super low latency computing. Street Fighter or The King of
         | Fighters in the CRT arcade era just felt so much more responsive.
         | 
         | https://danluu.com/input-lag/
        
           | CarVac wrote:
           | Using Project Slippi, you can play Super Smash Bros Melee at
           | the same latency as on a CRT... over the internet!
           | 
           | (Melee on a CRT has 3 frames of input lag)
        
         | TaylorAlexander wrote:
         | Good thoughts all around.
         | 
         | I will say that I am excited about 8k video for computational
         | photography. I am studying computer vision for robotics and it
         | is quite clear to me that a simple spherical lens with a very
         | high resolution sensor would be a very good imaging system for
         | a real world robot.
         | 
         | I recently got a Panasonic GH5 and it shoots 18 megapixel "6k"
         | 30fps video, encoded in h265 (important for higher res). I am
         | experimenting with photogrammetry using this photo mode. The
         | initial results are very promising. In four minutes I took a
         | scan of a short street block in Oakland, and after processing
         | the photogrammetry solution I have a very good dense 3D full
         | color map of the street. The model has holes but I am slowly
         | learning how to plan a photogrammetry capture. Currently
         | computation takes hours but I am seeing more and more research
         | on ways that machine learning can improve compute times and
         | data density.
         | 
         | See how low the visual acuity is on 5.6k spherical video here:
         | https://youtu.be/nASvIYq3VkE
         | 
         | However all this is to say that very high resolution sensors
         | are a very good thing for robotics.
        
           | namibj wrote:
           | Hi there!
           | 
           | So, all free/open multi-view stereo and structure-from-motion
           | software I know of is incapable of handling rolling shutter
           | artifacts.
           | 
           | The problem seems to be that electronic global shutter
           | sensors lack (some) dynamic range compared to otherwise-equal
           | rolling shutter cameras.
           | 
           | If you'd be interested in talking more about this, contact
           | me/let me know (I'll get in touch if requested).
           | 
           | My photogrammetry experiments typically encompass low-effort
           | bulk data collection, though the search for a light-weight
           | camera to use with a prosumer-class drone and some
           | revelations about reconstruction quality issues inherent to
           | older, easily-optimized algorithms for both multi-view stereo
           | and mesh reconstruction stalled progress somewhat.
           | 
           | In general, machine learning doesn't seem to be as much of a
           | benefit as one might guess, when compared to applying the
           | resources in non-ML ways to the data.
           | 
           | Mind teasing some numbers from your street capture?
        
           | strogonoff wrote:
           | Could it be faster to skip in-camera encoding and build a 3D
           | scene based on raw scene-referred data?
        
             | namibj wrote:
             | It's not efficient to do all of this live. Skipping the
             | H.265 parts would likely be beneficial, though.
        
             | modeless wrote:
             | I think the far future of computer vision will be "event
             | cameras" feeding directly into ML systems:
             | https://en.wikipedia.org/wiki/Event_camera
        
         | jacquesm wrote:
         | That's because our operating systems treat user input as a soft
         | real time problem when actually it should be a hard real time
         | problem. That's why your window manager sometimes can be
         | unresponsive. I've used a hard real time OS for a desktop for
         | some years and the difference with consumer stuff was extreme.
         | You get used to the computer _instantly_ doing what you tell it
         | to do. None of these half second or longer delays between
          | command and result. It's like magic.
        
           | grishka wrote:
           | And in addition to that, there's the problem of some apps
           | doing too much work on the UI thread.
        
             | phkahler wrote:
             | That should affect the app, but I have seen it cripple
             | gnome desktop too. That should not happen with Wayland.
        
           | rudolph9 wrote:
           | What OS do you use?
           | 
           | And does the mouse being a ps/2 input vs USB make a
           | difference?
        
             | jacquesm wrote:
              | Right now I use Linux; unfortunately a series of corporate
              | games left QnX without a window manager and mostly useful
              | only for embedded work. But if someone were to do a 64-bit
              | port of it, it would be an awesome workstation OS even
              | today. And that's just the UI; under the hood it was very
              | impressive as well. Extremely elegant and well thought out.
        
               | ahartmetz wrote:
               | Blackberry BB10 was based on QNX. The UI "feel" was
               | fantastic.
        
               | pqb wrote:
                | Yes, it was fantastic - minimal, elegant, simple, fast,
                | had a low learning curve, and everything had a
                | description - even the icons in the action bar [0]. As
                | their competitors, namely iOS 7+ and Android 5+,
                | replaced gradients with flat design, I welcomed the more
                | balanced UI redesign on my Z10 [1]. The autocompletion
                | on their touch keyboard was probably the best [2]: you
                | had 3-5 suggestions spread around the keyboard, and to
                | use one you placed a finger below it and flicked up. On
                | the BB Passport it was even more pleasant with the
                | physical keyboard - it had a small gesture sensor and
                | reacted to swipes in the air above the keyboard.
                | 
                | On the developer side, the documentation was also
                | amazing. There are tens of full, working example
                | applications. BB10 used Qt behind the scenes (Cascades
                | UI). It also had a Python (2.x) binary on the device,
                | among other typical UNIX programs.
               | 
               | [0]: https://developer.blackberry.com/devzone/files/desig
               | n/bb10/i...
               | 
               | [1]: https://developer.blackberry.com/devzone/design/bb10
               | /10_3_vi...
               | 
               | [2]: https://developer.blackberry.com/devzone/design/bb10
               | /keyboar...
        
               | ahartmetz wrote:
               | I've been involved with Qt for a long time and got a Z10
               | as a conference freebie, but the great developer
               | documentation is actually news to me! I prefer to just
               | use phones, I develop for PC hardware for fun and
               | embedded platforms for money these days. At the time,
               | some of my coworkers helped with porting Qt to QNX. I
               | worked on a different Qt project on QNX and I was also
               | duly impressed by the technical quality and elegance of
               | the OS.
               | 
               | The BB10 UI was built on something like QML (the language
               | and runtime) from Qt 4 with their own UI elements and an
               | OpenGL based backend from the acquired company The
               | Astonishing Tribe. They had animations e.g. for slider
               | switches running in the render thread, perfect 60 fps. Qt
               | Quick (the UI framework based on QML, colloquially called
               | "QML") only got an OpenGL backend in Qt 5.
               | 
               | Another very good Qt-based phone OS (after the Nokia N9,
               | got one of these at a conference as well) that failed :(
               | 
               | By the way, the Ford Sync 3 IVI ("in-vehicle infotainment
               | system") is also based on QNX and Qt and it received
               | fairly good reviews. I think I made some tiny
               | contribution to it, if only helping a coworker with
               | something.
        
               | rudolph9 wrote:
               | Are there any open source real-time operating systems?
        
               | jacquesm wrote:
               | None with a GUI. I never released mine, and it's so far
               | behind the times now it would take major work to get it
               | ported to the 64 bit era and even then it would still be
               | a C based OS.
               | 
               | This would be one way in which the Rust crowd could
               | _really_ make a difference.
        
             | Shorel wrote:
             | > And does the mouse being a ps/2 input vs USB make a
             | difference?
             | 
             | The difference used to be very noticeable. Nowadays my PC
             | doesn't have a PS/2 port.
        
               | rudolph9 wrote:
               | Is the difference due to changes in the operating system
               | or is it due to differences in hardware usb vs ps/2
               | inputs?
        
               | mariusmg wrote:
                | Mainly due to hardware:
               | 
               | - USB works by polling for changes at fixed intervals
               | 
               | - ps/2 works with interrupts, so the OS will know
               | immediately when hw does something.
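                | 
                | For a rough sense of scale, the worst case polling can
                | add is one full interval - a back-of-the-envelope sketch
                | in C (ignoring USB stack overhead):
                | 
                |     #include <stdio.h>
                | 
                |     /* Worst case, polling adds one interval:
                |      * 1000 / rate_hz milliseconds.           */
                |     int main(void)
                |     {
                |         const int rates[] = { 125, 250, 500, 1000 };
                |         for (int i = 0; i < 4; i++)
                |             printf("%4d Hz -> +%.0f ms worst case\n",
                |                    rates[i], 1000.0 / rates[i]);
                |         return 0;
                |     }
                | 
                | So a 1000 Hz mouse adds at most about 1 ms, versus about
                | 8 ms at the old 125 Hz default.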
        
               | rudolph9 wrote:
               | Does a serial port poll or interrupt?
               | 
               | If it's possible to interrupt on a serial port, are there
               | existing examples of how to configure a mouse or keyboard
               | to interrupt over a serial port?
        
               | jacquesm wrote:
               | Depends on how you configure the chip. You don't actually
               | need a chip, you can bit-bang serial just fine if you
               | have accurate enough timing.
               | 
                | Typically a serial port would contain a small buffer
                | which would fill up; upon completion of the first byte
                | an interrupt would be generated, and you'd respond to
                | that and read out the register, freeing up room for more
                | bytes to be received. Transmit is the same but reversed:
                | as soon as a byte had left the shifter in the chip, it
                | would generate an interrupt so you could re-use that
                | space for more bytes to send.
               | 
               | This works quite well. Hardware flow control can help in
               | case the OS doesn't respond fast enough to the
               | interrupts, so you don't lose characters.
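                | 
                | For the curious, a rough sketch of that receive path for
                | a classic 16550-style UART. The register offsets are the
                | standard ones; inport8/outport8/ring_put are hypothetical
                | helpers, and wiring the handler up to the interrupt
                | controller is left out:
                | 
                |     #include <stdint.h>
                | 
                |     /* 16550-style UART at the classic COM1 base. */
                |     #define UART  0x3F8u
                |     #define RBR  (UART + 0)  /* receive buffer   */
                |     #define IER  (UART + 1)  /* interrupt enable */
                |     #define LSR  (UART + 5)  /* line status      */
                |     #define IER_RX_AVAIL    0x01
                |     #define LSR_DATA_READY  0x01
                | 
                |     /* Hypothetical helpers, assumed elsewhere.   */
                |     uint8_t inport8(uint16_t port);
                |     void    outport8(uint16_t port, uint8_t v);
                |     void    ring_put(uint8_t byte);
                | 
                |     void uart_irq_on(void)
                |     {
                |         outport8(IER, IER_RX_AVAIL);
                |     }
                | 
                |     /* Each RBR read frees room in the UART's
                |      * small FIFO for the next incoming byte.   */
                |     void uart_rx_isr(void)
                |     {
                |         while (inport8(LSR) & LSR_DATA_READY)
                |             ring_put(inport8(RBR));
                |     }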
        
               | simcop2387 wrote:
               | It can do both, but I believe you need hardware flow
               | control working to get proper interrupt behavior for
               | that. I don't think any mice actually did it that way
               | back in the day.
        
               | jacquesm wrote:
               | All serial chips that I'm familiar with since the 8 bit
               | days would do interrupts, the only time that I worked
               | without is when we were just using GPIO pins to emulate
               | serial ports.
        
               | ChuckNorris89 wrote:
                | What about USB mice with 250-1000 Hz polling rates? I
                | can't imagine it's still a problem.
        
           | carbocation wrote:
           | I feel like your comment gives voice to what I wanted to say
           | in the thread about why doctors hate their computers:
           | https://news.ycombinator.com/item?id=24336039
        
           | Dylan16807 wrote:
           | Most operating systems and software throw tons of junk into
           | the critical path. I don't believe they're anywhere near soft
           | real time, and I bet soft real time would work pretty well.
           | You can miss a couple deadlines by a couple million cycles
           | each and still have instant response.
           | 
           | (And a real-time scheduler isn't enough by itself, the many
           | layers of architecture around handling input events can still
           | be bad even if you're meeting every deadline there.)
        
           | acdha wrote:
           | This is one of the things I still miss about BeOS. On late
            | 90s hardware, the OS wasn't quite hard real-time like QNX,
            | but the worst-case latency was much better than Windows 10 or
           | MacOS on the latest hardware, even in the presence of heavy
           | resource contention -- I remember simultaneously surfing the
           | web, downloading video from a camcorder with a Firewire
           | interface which did not buffer, and compiling Mozilla. The UI
           | latency didn't change at all, and the DV transfer didn't drop
           | a packet -- everything else got throttled back as necessary,
           | of course, but that meant that, say, GCC took longer to run
           | rather than impacting what I was doing in the foreground.
        
             | outworlder wrote:
             | > I remember simultaneously surfing the web, downloading
             | video from a camcorder with a Firewire interface which did
             | not buffer, and compiling Mozilla
             | 
             | Did you forget to run the OpenGL teapot?
             | 
             | BeOS was magical. I wish I could use Haiku in my desktop
             | today.
        
           | winter_blue wrote:
           | > I've used a hard real time OS for a desktop
           | 
           | Which one?
        
           | anderspitman wrote:
           | https://64.ms/
        
           | copperx wrote:
           | I've long been of the same opinion. There's nothing that
           | would make me happier as a computer user. Delays in feedback
           | are unacceptable.
        
           | willtim wrote:
           | I feel like even "soft real time" is too generous for
           | Android.
        
             | grishka wrote:
              | Yeah, just try using your phone while Google's updating
              | some crap in the background _again_... Doesn't depend on
              | the newness of the phone either -- this has happened on
              | every Android phone I've owned since 2011.
        
             | smichel17 wrote:
             | What, you don't enjoy the back-button+muscle-memory
             | keyboard/navigation tango?
        
             | formerly_proven wrote:
             | Not limited to Android. On iOS for example Safari loading
             | some ads or whatever causes so much input lag that the
             | keyboard can't keep up and mangles the input completely.
             | And on some websites that one just shouldn't visit without
             | rigorous blockers ( _ehrm_ reddit) the device becomes so
             | sluggish to respond that taps on the close button are not
             | registered consistently any more.
        
           | formerly_proven wrote:
            | One of the funniest instances of software intentionally
            | kneecapping itself is compositors. As far as I know,
            | compositors universally treat all monitors together as a
            | single surface and just assume that they're synchronized. On
            | Windows, this causes all sorts of hilarious stuttering and
            | hitching. On Linux, compositors just vsync on the slowest
            | display.
        
             | simcop2387 wrote:
              | From what I understand, on Linux, Wayland makes it
              | possible to do this properly, but I'm not sure any
              | environment on top of it is doing it yet. I think it's in
              | a "nice to have" holding pattern for them because there's
              | still so much to do to get a stable and usable desktop
              | environment there to begin with.
        
               | juergbi wrote:
                | Should be fixed in the upcoming GNOME 3.38:
                | https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
        
               | phkahler wrote:
               | I wrote a gtk program in Rust that did all its work in
               | the wrong place. It locked up the ability to move ANY
               | windows on the desktop for several seconds. That's a
               | compositor design issue if ever there was one. Is that
               | going to get fixed?
        
             | kevincox wrote:
              | This is changing. I believe the next release of mutter
              | (GNOME) will remove the global clock on Wayland so it can
              | drive each monitor at its native frequency [1].
              | 
              | IIUC input is still driven at 60Hz, but changing this is
              | in discussion [2].
             | 
             | [1]
             | https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1285
             | [2]
             | https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/168
        
           | burlesona wrote:
           | Which OS? What is your setup?
        
             | deaddodo wrote:
             | Probably QNX or VxWorks.
        
               | jacquesm wrote:
               | QnX is correct.
        
               | tokamak-teapot wrote:
               | Is it feasible for mortals to get hold of and run QnX as
               | a desktop OS?
        
               | pas wrote:
               | Apparently not really:
               | https://news.ycombinator.com/item?id=24346411
        
               | jacquesm wrote:
               | You might find an old archived copy of it but nothing
               | that would be very useful if you intend to run a modern
               | browser and other tools like that. For curiosity's sake
               | you could.
               | 
               | There used to be a one-floppy installer with the desktop
               | on it, I'm sure you could get that to work in a VM.
               | 
               | Lots of screenshots:
               | 
               | https://www.operating-
               | system.org/betriebssystem/_english/bs-...
               | 
               | Docs:
               | 
               | https://www.mikecramer.com/qnx/qnx_6.1_docs/sysadmin/intr
               | o.h...
               | 
               | Unfortunately I can't seem to locate any disk images,
               | rumor has it there was a VMWare image floating around.
        
               | neilpanchal wrote:
               | Also RTLinux kernel:
               | https://en.wikipedia.org/wiki/RTLinux
        
               | monocasa wrote:
               | RTLinux is soft real time, FWIW.
        
               | rudolph9 wrote:
               | Can you elaborate? The wiki says it combines hard-
               | realtime with soft-realtime. It sounds like it depends
               | how you configure it but this is the first I've read of
               | it so I could be mistaken.
        
               | kijiki wrote:
               | They're just mistaken, RTLinux is hard realtime. It works
               | by running a tiny real time OS on the bare metal, and
               | then running a Linux kernel as a (lightly)
               | paravirtualized lowest priority task. You write your hard
               | realtime code to run on the RTOS, and then your soft/non
               | realtime code runs on Linux.
               | 
               | In this model, your compositor, input pipeline and (at
               | least the UI latency sensitive part of) your applications
               | would have to be ported to the RTOS, which makes this
                | pretty infeasible. But it works really well if you have
                | a hard-realtime control loop talking to a bunch of
               | non-realtime network IO or UI code. Would be a fun way to
               | build a wifi controlled quadcopter.
               | 
               | These days you probably want to use Xenomai, or possibly
               | RTAI.
        
               | monocasa wrote:
               | Yes, I was mistaken. I was thinking of the preempt_rt
               | patches.
               | 
               | Thanks for the great writeup correcting me!
        
               | als0 wrote:
               | RTLinux is a special supervisor below the kernel that
               | allows you to run "hard-realtime" programs outside of the
               | Linux environment with a lot of restrictions. Everything
               | within the Linux environment is soft. As far as I can
               | tell, it's not feasible to run the GUI stack in the more
               | primitive environment.
        
           | pixelface wrote:
            | This sounds like a really cool way to operate, and it would
            | be really interesting if you could expand on your experiences.
        
             | jacquesm wrote:
             | I worked on a project that used QnX for large volume
             | message transmission via custom telex interfaces. Millions
             | of messages on a good Monday morning. Because I like my
             | development machine to run the same OS as the servers I
             | ended up using it for my daily driver for quite a few
              | years. In the end I liked it so much that when Quantum
              | Software dawdled on their 32 bit implementation, I wrote
              | my own version of the OS.
              | 
              | One really neat demo was 250 windows the size of postage
              | stamps, each with a little bouncing line demo inside.
             | All still received their 'fair share' of time, all could
             | still be moved around and expanded as though the machine
             | was otherwise idle.
        
               | kjs3 wrote:
                | You wouldn't happen to have that OS you wrote up on
                | github or anything, would you? Big fan of QNX; I like to
                | look at OS code.
        
               | jacquesm wrote:
               | Ok. see https://jacquesmattheij.com/task.cc
               | 
               | That's the scheduler. There are a lot of moving parts to
               | running this code, I may make a little project out of
               | restoring it to life under a VM at some point.
               | 
               | Happy reading.
               | 
               | Edit: reading it myself, wow, my English was cringe
               | worthy back then. Moving to Canada certainly fixed that.
        
               | mjcohen wrote:
               | About as good as the Amiga.
        
               | YarickR2 wrote:
                | And what about IO? What will you do when everything is
                | stuck waiting for that spinning rust to position itself
                | under the drive head?
        
               | jacquesm wrote:
               | That's just another user process. So everything else will
               | continue. You have to let go of your macro kernel mindset
               | if you want to make sense of the timing in a micro kernel
               | environment. Blocking threads is fine. Just make sure
               | that you don't do that sort of stuff in your UI thread
               | and you won't even realize there _is_ such a thing as
               | disk or network IO.
               | 
               | Mouse, keyboard, screen. Those are the things that should
               | run at high priority. Everything else can - quite
               | literally - wait.
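                | 
                | A minimal Linux sketch of the "input at high priority"
                | idea - not what QnX did, just the nearest POSIX knob.
                | SCHED_FIFO needs privileges, and on a stock kernel this
                | is still only soft real time:
                | 
                |     #include <pthread.h>
                |     #include <sched.h>
                |     #include <stdio.h>
                |     #include <string.h>
                | 
                |     /* Raise the calling thread to real-time FIFO
                |      * priority so input handling preempts all
                |      * ordinary (SCHED_OTHER) work.              */
                |     static int make_rt(int prio)
                |     {
                |         struct sched_param sp;
                |         sp.sched_priority = prio;
                |         return pthread_setschedparam(pthread_self(),
                |                                      SCHED_FIFO, &sp);
                |     }
                | 
                |     int main(void)
                |     {
                |         int err = make_rt(50);
                |         if (err)
                |             fprintf(stderr, "sched: %s\n",
                |                     strerror(err));
                |         /* ... input handling loop goes here ... */
                |         return 0;
                |     }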
        
               | leguminous wrote:
               | > Mouse, keyboard, screen. Those are the things that
               | should run at high priority. Everything else can - quite
               | literally - wait.
               | 
               | I might add audio to this list.
        
               | jacquesm wrote:
               | Ah yes, of course. Sorry, I wasn't thinking clearly, just
               | had the usual UI loop and regular interaction with a
               | computer in mind, you are 100% right, audio should have a
               | very high priority. Nothing more annoying than dropouts.
               | Incidentally, the way most OSs deal with that is by
               | having very large audio buffers which in turn will give
               | you terrible latency. On a hard real time OS you could
               | reduce the size of those buffers quite a bit because you
               | can guarantee they are filled (and emptied) regularly.
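                | 
                | To put rough numbers on that trade-off - one buffer's
                | worth of delay is simply frames / sample rate (assuming
                | a typical 48 kHz stream here; real pipelines usually
                | double- or triple-buffer, which multiplies this):
                | 
                |     #include <stdio.h>
                | 
                |     int main(void)
                |     {
                |         const double rate = 48000.0;  /* Hz */
                |         const int sizes[] = { 64, 256, 1024, 4096 };
                |         for (int i = 0; i < 4; i++)
                |             printf("%5d frames -> %6.2f ms\n",
                |                    sizes[i],
                |                    1000.0 * sizes[i] / rate);
                |         return 0;
                |     }
                | 
                | That's roughly 1.3 ms for a 64-frame buffer at 48 kHz
                | versus about 85 ms for a 4096-frame one.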
        
           | pier25 wrote:
           | You mean software vs hardware? Can you elaborate?
        
             | neilpanchal wrote:
              | It has to do with the architecture of the operating
              | system. Real-time operating systems prioritize and preempt
              | tasks based on their priority value - no matter how
              | inefficient that may be - whereas operating systems such
              | as Windows and Linux optimize for throughput and overall
              | speed rather than event-based priorities.
             | 
             | More to read here:
             | https://stackoverflow.com/questions/5452371/why-isnt-
             | every-o...
        
               | jacquesm wrote:
               | Low latency > high throughput in a real time context.
        
             | joshvm wrote:
             | I assume it means the user input handler is given a
             | deterministic time slot to do stuff. Regardless of what the
             | operating system is doing, it will always schedule mouse
             | movement (or something along those lines). It's a software
             | implementation (scheduling), but it requires hardware that
             | can support it - usually you need fine-grain control over
             | interrupts. So typically nowadays you see RTOSes on
             | microcontrollers (eg ARM). Maybe more precisely you need
             | access to that fine grain control and that often means
             | getting hold of datasheets that are under strict NDA (eg
             | Broadcom).
             | 
             | RTOSes are often found in autopilot and vehicle management
             | systems controlling critical peripherals or safety critical
             | software. More mundanely there are also sensors that will
             | get upset if you don't give them undivided attention for
             | fixed periods of time (or if you interrupt them, they will
             | terminate the transfer). Image sensors are particularly
             | picky about this.
        
               | qppo wrote:
               | You can implement hard real-time scheduling using polling
               | instead of interrupts, which is usually faster anyway. It
               | just changes how you measure your tolerance, since
               | nothing is instantaneous, ever.
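                | 
                | One common shape for such a loop, sketched with plain
                | POSIX calls - poll_inputs() and control_step() are
                | placeholders, and on a desktop OS this is only as
                | deterministic as the scheduler allows (you would
                | normally pair it with an elevated priority):
                | 
                |     #define _POSIX_C_SOURCE 200809L
                |     #include <time.h>
                | 
                |     #define PERIOD_NS 1000000L  /* 1 kHz loop */
                | 
                |     void poll_inputs(void);   /* placeholder */
                |     void control_step(void);  /* placeholder */
                | 
                |     int main(void)
                |     {
                |         struct timespec next;
                |         clock_gettime(CLOCK_MONOTONIC, &next);
                |         for (;;) {
                |             poll_inputs();
                |             control_step();
                |             /* Sleep to an absolute deadline so
                |              * jitter doesn't turn into drift.  */
                |             next.tv_nsec += PERIOD_NS;
                |             if (next.tv_nsec >= 1000000000L) {
                |                 next.tv_nsec -= 1000000000L;
                |                 next.tv_sec  += 1;
                |             }
                |             clock_nanosleep(CLOCK_MONOTONIC,
                |                             TIMER_ABSTIME,
                |                             &next, NULL);
                |         }
                |     }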
        
               | ChuckNorris89 wrote:
               | This! Automotive hard real time is polling based.
        
               | qppo wrote:
               | I think most things are trending that direction, even in
               | soft/non real-time. How fast can an interrupt respond
               | these days anyway, a dozen microseconds?
        
               | ianhowson wrote:
               | Nanoseconds if you can respond inside the interrupt
               | handler ('top half' in Linux.) Microseconds to
               | milliseconds with significant variability if you need to
               | be scheduled ('bottom half'.)
        
               | jacquesm wrote:
               | And that only if your codepath is deterministic. Linux
               | can do pretty funny stuff at times.
               | 
               | I've run a plasma cutter under Linux on a run-of-the-mill
               | VIA SBC, it was actually pretty easy: just make a program
               | setuid root, use that to disable interrupts after you've
               | read all your data in and then just go wild on polled and
               | precision timed IO until you're done. Re-enable
               | interrupts and return as if nothing ever happened.
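                | 
                | Roughly like this, for anyone curious - a sketch of that
                | trick on x86 Linux, not a recommendation. It needs root,
                | precision_timed_io() is a placeholder, and cli only
                | masks interrupts on the current core:
                | 
                |     #include <stdio.h>
                |     #include <sys/io.h>   /* iopl() */
                | 
                |     static void precision_timed_io(void)
                |     {
                |         /* bounded, carefully timed polled IO */
                |     }
                | 
                |     int main(void)
                |     {
                |         /* IOPL 3: allow in/out and cli/sti */
                |         if (iopl(3) != 0) {
                |             perror("iopl");
                |             return 1;
                |         }
                |         __asm__ volatile ("cli");
                |         precision_timed_io();
                |         __asm__ volatile ("sti");
                |         return 0;
                |     }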
        
               | joshvm wrote:
                | Yep - more that whether you can actually turn interrupts
                | off is the barrier to doing real time work on a desktop
                | OS, for example. Even if you can disable them globally
                | (I think on the Pi you can with some low level C), stuff
                | goes wonky quickly, e.g. if the WiFi SoC doesn't get
                | serviced.
                | 
                | It's a good lesson to play with polling versus interrupts
                | on a micro. Surprising just how many cycles it takes to
                | jump into the ISR.
        
         | ponker wrote:
         | > It's far more important than wide gamut or 8K.
         | 
         | This is just a matter of opinion. Wide gamut is more important
         | to me, latency is more important to you.
        
           | hyko wrote:
           | ...and I'd opt for the spatial resolution, completing the
           | trifecta!
           | 
           | Thankfully, we can just have all three :)
        
             | formerly_proven wrote:
             | > Thankfully, we can just have all three :)
             | 
             | Not at this time. There are some hypothetical options like
             | the LG 27GN950, but it has poor contrast and uses
             | Displayport 1.4 compression, which is only supported in the
             | latest graphics cards. VESA and friends really made
             | DisplayPort 1.4 as close as they possibly could to false
             | advertising without actually committing it (because the
             | only relevant new thing in DP 1.4 -- DSC, display stream
             | compression -- is technically optional and no one claimed
             | they'd support DSC while saying they support DP 1.4, which
             | is pretty much the same as supporting DP 1.3, since nothing
             | of note changed).
             | 
             | And now we're looking at DisplayPort 2.0 which can already
             | barely support uncompressed 8K at 60 Hz and is basically
             | maxed out by 5K/6K 120/144 Hz. And it's unclear if the
             | presently introduced generation of GPUs even supports it,
             | or if we're going to use effectively-2013-Displayport until
             | ~2022.
             | 
             | Note how the marketing material only talks about HDMI 2.1;
             | DisplayPort isn't mentioned _once_.
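              | 
              | To make the headroom concrete, some back-of-the-envelope
              | numbers. Blanking and protocol overhead are ignored, and
              | the usable 4-lane payloads (DP 1.4 HBR3 ~25.9 Gbit/s,
              | DP 2.0 UHBR20 ~77.4 Gbit/s) are the commonly quoted
              | figures, so treat everything as approximate:
              | 
              |     #include <stdio.h>
              | 
              |     /* Raw pixel rate: w * h * refresh * bits/px. */
              |     static double gbps(double w, double h,
              |                        double hz, double bpp)
              |     {
              |         return w * h * hz * bpp / 1e9;
              |     }
              | 
              |     int main(void)
              |     {
              |         printf("4K144 10-bit: %.1f Gbit/s\n",
              |                gbps(3840, 2160, 144, 30));
              |         printf("5K120 10-bit: %.1f Gbit/s\n",
              |                gbps(5120, 2880, 120, 30));
              |         printf("8K60  10-bit: %.1f Gbit/s\n",
              |                gbps(7680, 4320, 60, 30));
              |         return 0;
              |     }
              | 
              | That's about 36, 53 and 60 Gbit/s respectively: none of
              | them fit in DP 1.4 without DSC, all of them fit in DP 2.0,
              | and at 8K60 the blanking and deeper color depths eat much
              | of what's left.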
        
               | tormeh wrote:
               | I think DisplayPort is not mentioned because Nvidia
               | hasn't upgraded their cards to 2.0. Super disappointing.
               | 4k/144Hz/HDR is enough for now, to be honest, but DP2 can
               | do 84/144Hz/HDR as long as it has DSC, 244Hz with
               | subsampling. The slow pace of development and deployment
               | has hurt everyone for sure, but I don't think the
               | standard is bad in itself - just a bit late.
        
             | Scene_Cast2 wrote:
             | This is why I'm personally looking at the LG CX 48" - low
             | latency, 120Hz, 4k, wide gamut, HDR.
        
               | formerly_proven wrote:
               | OLED TVs are in a much better space than PC LCDs, where
               | no really good options exist. Either IPS, which has poor
               | contrast, bleed and doesn't do HDR, or VA, which has good
               | contrast, but also doesn't really do HDR, and VA
               | generally has poor uniformity (nitpick) and viewing
               | angles. Some VA are pretty smeary, but that seems to have
               | cleaned up in the latest generation. TN panels are much
               | better than they used to be in the color department, but
               | it doesn't have the contrast of VA, and even poorer
               | viewing angles than VA.
               | 
               | OLED is clearly the way forward - accurate colors,
               | excellent contrast, no bleeding, no uniformity issues,
               | proper HDR, excellent response time. Except OLED doesn't
               | come to PCs.
        
               | Scene_Cast2 wrote:
               | Yep, agreed. I mentioned the CX 48 because it's the first
               | one that can somewhat reasonably be used as a monitor.
               | 
               | MicroLED is another promising tech that has a lot of OLED
               | upsides, but no burn-in issues.
        
               | namibj wrote:
                | Iiyama's MVA3 displays are, to my knowledge,
                | significantly ahead of other LCD styles (e.g. TN, IPS)
                | as far as contrast, especially from an angle, is
                | concerned.
                | 
                | Unfortunately my desk's dear centerpiece,
                | https://iiyama.com/gl_en/products/prolite-x4071uhsu-b1/ ,
                | was EOL'd about 2 years ago, and I have been unable to
                | find a replacement that's not worse while staying in the
                | 40~50" range. Any concrete suggestion would be greatly
                | appreciated.
        
       | [deleted]
        
       | iforgotpassword wrote:
       | This looks reassuring. After the first couple of rumors/teasers,
       | especially regarding power consumption, I feared that NVIDIA had
       | mostly sat on their hands and would just release something that's
       | essentially a bigger version (more execution units, RAM) of the
       | current gen. I think they did that once some years ago. Seems
       | they actually did improve on the technical level too for the 30xx
       | series. :-)
        
       | en4bz wrote:
       | All this on Samsung 8nm (~61 MT/mm2). They didn't even feel the
       | need to use TSMC 7nm (~100 MT/mm2). Probably to keep the price
       | down and to reserve capacity at TSMC for the A100.
       | 
       | This is like the anime hero/villain (depending on your
       | perspective) equivalent of fighting at half power.
        
         | npunt wrote:
         | Yeah, impressive for the node. TSMC's latest N5 process is 173
         | MT/mm2, which Apple is using now and AMD (probably in N5P form)
         | will start using next year. EUV is really a big step up.
        
         | ss248 wrote:
         | >They didn't even feel the need to use TSMC 7nm
         | 
         | They just couldn't get enough wafers. They tried to force TSMC
         | to lower the prices and it backfired.
        
         | ZeroCool2u wrote:
         | On the slide in the presentation it did say something like
         | "Samsung Nvidia Custom 8NM Process", so perhaps Nvidia made
         | such significant contributions to the process that it's not
         | really Samsung's process anymore?
        
           | wmf wrote:
           | TSMC's "customized" 12FFN process just had a larger reticle,
           | so the burden should be on Nvidia to explain any
           | customization. I don't think they deserve any benefit of the
           | doubt here.
        
             | gpm wrote:
             | Why would Nvidia willingly explain any customization?
             | Surely keeping that secret is a competitive advantage.
        
       | jjcm wrote:
       | A small thing that I haven't seen mentioned yet, but these cards
       | have native hardware decoding for AV1. With chrome just launching
       | support for AVIF this last week it seems like more and more
       | platforms are getting out of the box support for it. Nvidia is
       | also working with a lot of partners[1] on it it seems. I'm
       | currently working on a social video platform, and having a
       | unified next-gen codec that works everywhere for both images and
       | video would be SO helpful. Hopefully this trend progresses -
       | would love to be able to do some extremely high quality
       | streaming.
       | 
       | [1] https://www.nvidia.com/en-us/geforce/news/rtx-30-series-
       | av1-...
        
       | Thaxll wrote:
       | The problem with AMD is their drivers; nowadays drivers are 50%
       | of what makes a good graphics card.
        
       | fomine3 wrote:
       | Comparing the 3080 and 3070: the 3080 has about 1.46x the TFLOPS
       | of the 3070, plus more and faster VRAM. The 2080 only had about
       | 1.35x the TFLOPS of the 2070 at the same $699/$499 MSRPs. The
       | 3080 looks very competitive.
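       | 
       | Where those ratios come from, using the announced/commonly cited
       | FP32 TFLOPS figures (approximate, so take them with a grain of
       | salt):
       | 
       |     #include <stdio.h>
       | 
       |     int main(void)
       |     {
       |         const double r3080 = 29.8, r3070 = 20.4;
       |         const double r2080 = 10.1, r2070 = 7.5;
       |         printf("3080 / 3070: %.2fx\n", r3080 / r3070); /* ~1.46 */
       |         printf("2080 / 2070: %.2fx\n", r2080 / r2070); /* ~1.35 */
       |         return 0;
       |     }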
        
       | adamch wrote:
       | I wonder if the pricing will drop once AMD releases their ray-
       | tracing PC GPUs.
        
       | KingOfCoders wrote:
       | I wonder about the ML performance compared to my current setup of
       | 2080TIs.
        
       | dougmwne wrote:
       | I am curious how quickly Nvidia will add these to their GeForce
       | Now streaming servers. As of right now, they only stream in 1080p
       | and it seems this could allow them to stream 4k for about the
       | same hardware cost. I'm personally not in the market for a gaming
       | desktop, but happily subscribe to GPU as a service.
        
         | xx_alpha_xx wrote:
         | GFN has been a bear, very hit or miss. The current generation
         | of hardware is either a 1080 or a 2060. If they start adding
         | 30xx cards (which they have hinted at in the past), that would
         | be great.
        
       | system2 wrote:
       | All these crazy graphics cards, they still couldn't figure out
       | high density VR displays. I want amazing VR with super clarity.
       | Then I'd invest whatever money they want for a graphics card. LCD
       | gaming just doesn't cut it.
        
         | Kapura wrote:
         | It's not that they can't figure out high density VR displays,
         | it's that they're prohibitively expensive to produce. Display
         | miniaturisation is not a problem domain that a lot of tech is
         | focused on, so progress is necessarily slower than in the more
         | profitable areas.
        
         | lodi wrote:
         | HP Reverb G2 is coming out soon. Dual 2k x 2k displays:
         | https://www8.hp.com/us/en/vr/reverb-g2-vr-headset.html
         | 
         | Looks really great in this video:
         | https://www.youtube.com/watch?v=v4wlEbD5vxk
        
       | gallerdude wrote:
       | These look great! It's amazing how much better hardware gets
       | annually. The only thing I was hoping for that wasn't mentioned
       | was hardware-accelerated VP9 encoding, but we can't get
       | everything we want in life.
        
         | daneel_w wrote:
         | If we imagine that VP9 hardware encoding by Nvidia would hold
         | to the same standard as their H.265 hardware encoding, then we
         | can stop holding our breath, as we have not missed out on
         | anything of any value whatsoever.
         | 
         | For H.265 their encoder is fast, yes, but the quality per
         | bitrate is complete rubbish: it requires a higher bitrate than
         | a good H.264 encode yet still contrives the gruesome trick of
         | looking far worse, which entirely defeats the point of H.265.
        
           | gallerdude wrote:
           | Oh interesting, I didn't know the Nvidia encoder was regarded
           | as trash. What tools can one use to evaluate the visual
           | quality of a video file? My proxy for quality has always been
           | bitrate, but I know that bitrate is just chasing visual
           | quality anyway...
        
       | fluffything wrote:
       | * RTX 3070, 499$, faster than RTX 2080 Ti
       | 
       | * RTX 3080, 699$, 2x faster than RTX 2080
       | 
       | * RTX 3090 (Titan), 1500$, 1.5x faster than RTX Titan, 8k
       | resolution @ 60 FPS with RTX on Control.
       | 
       | ---
       | 
       | I hope that anyone who bought an RTX 2080 or similar in the last
       | 2-4 weeks bought it over Amazon and can return it.
        
         | m0zg wrote:
         | I couldn't find pricing for 3090. I seriously doubt it's less
         | than the current version of Titan. It'd also make sense - lots
         | of DL research is currently done on 2080Ti. Take that away and
         | people will buy a more expensive SKU just to get more VRAM.
         | 
         | If this pricing holds up though, I need to get a large NVDA
         | position, because they'll sell a TON of 3090s. I'll buy 8.
        
           | ralusek wrote:
           | It's in the video, confirmed $1499.
        
             | m0zg wrote:
             | Exciting! Looking forward to it, then. Time to unload my
             | 1080Tis. :-)
        
               | ralusek wrote:
               | Same.
        
               | jacquesm wrote:
               | A bit past that time I think? Yesterday would have been a
               | lot better!
        
         | m3kw9 wrote:
         | It's not out till October, so are you going to not play games,
         | or use an inferior card, for 2-3 months? They're going to be
         | sold out till whenever.
        
           | cooljacob204 wrote:
           | The 3080 and 3090 are out this month. They'll probably be
           | sold out though.
        
         | pier25 wrote:
         | Super aggressive pricing. Not sure if they are selling those at
         | a super low margin or they were ripping people off with the
         | previous gens...
         | 
         | Anyway, thank you AMD for the competition.
        
           | berryjerry wrote:
           | How is that aggressive pricing? Those are the same prices
           | they have used for previous generations.
        
           | kenhwang wrote:
           | Agreed, seems like Nvidia is preemptively trying to fend off
           | AMD with an extreme show of force. This level of
           | performance/pricing is much higher than even AMD's most
           | optimistic estimates for performance/efficiency gains for
           | their next generation.
        
           | SketchySeaBeast wrote:
           | Well, 2 generations ago the MSRP of the 1070 was $379 and the
           | 1080 was $599, so I'd say we've just gotten used to the
           | higher prices which they successfully normalized.
        
         | JackMcMack wrote:
         | As always, wait for the benchmarks before deciding to buy (or
         | return). My guess is the performance improvements are the
         | biggest for raytracing, which I personally don't care for. And
         | let's not forget the huge power draw, requiring a new power
         | connector and a 3 slot cooler.
         | 
         | The 8N manufacturing process is presumably Samsung's, which
         | will probably be beaten by TSMC 7nm.
         | 
         | I'm holding out for RDNA2.
        
           | thdrdt wrote:
           | I was wondering whether all benchmarks use the OptiX features
           | or only the CUDA features. In software like Blender this
           | makes a huge difference: OptiX can be twice as fast as CUDA.
           | So if benchmarks don't use the OptiX capabilities, they will
           | miss a lot of available speed.
        
           | alkonaut wrote:
           | Yeah, there is no way they sell a card that does 200% the fps
           | of a 2080 Ti in "normal" (non-raytracing, ultra quality)
           | settings averaged over multiple titles. If a 2080 Ti does 100
           | fps without raytracing and 30 fps with, this might do 110 fps
           | without raytracing and 60 fps with, which they'll call a
           | "100%" increase but most would consider a 10% one.
        
           | outworlder wrote:
           | > My guess is the performance improvements are the biggest
           | for raytracing, which I personally don't care for.
           | 
           | I still don't understand the raytracing craze when it comes
           | to games. Years ago when I wrote my first raytracer I might
           | have got excited about it. But we have advanced so much with
           | rasterization that I don't really understand why this is
           | something we need.
           | 
           | Raytracing complexity is tricky and it is likely to prove
           | challenging to do in a game with consistent frame rates. Soft
           | shadows are expensive even for hardware.
           | 
           | I would be more excited about some other global illumination
           | improvements, like photon mapping.
        
           | adamch wrote:
           | I think the new connector is only on the 3090.
        
             | smileybarry wrote:
             | The product page linked elsewhere in the thread[1] has
             | photos of the 3070 with the 12-pin connector and mentions
             | that all Founder Edition cards include an adapter:
             | 
             | > To that end, our engineers designed a much smaller PCB,
             | shrank the NVLink and power connectors, and still managed
             | to pack in 18-phases for improved power delivery. Don't
             | worry, we included an adapter that allows Founders Edition
             | cards to work with users' existing power supplies.
             | 
             | [1] https://www.nvidia.com/en-us/geforce/news/introducing-
             | rtx-30...
        
           | fluffything wrote:
           | > As always, wait for the benchmarks before deciding to buy
           | (or return).
           | 
           | Why would you keep a 1200$ RTX 2080 Ti or a 2500$ Titan when
           | you can get at least the same perf for 50-75% of the price
           | with the new products, and much better RTX perf ?
           | 
           | This assumes that with RTX off the new gen won't be slower
           | than the old one, but I think that's a fair assumption, and
            | if it isn't, you can always return the 30xx card in 2 weeks,
            | and buy a used 2080 Ti or Titan on ebay for pennies, once
            | the market overflows with people upgrading to the 30xx
           | series.
           | 
           | People were still asking for 900$ for a used 2080 Ti on ebay
           | this morning, and the 3080 700$ price just destroys that
           | offer. Many owners are going to try and dump these cards for
           | as much as they can get in the next two weeks. I wouldn't buy
           | a used 2080 Ti for more than 250$ today. In one month, these
           | cards are going to sell used for 100-200$. If AMD can only
           | match the 2080 Ti perf, they are going to have a very hard
           | time pricing against the used 2080 Ti market.
           | 
           | > And let's not forget the huge power draw, requiring a new
           | power connector and a 3 slot cooler.
           | 
           | That's only for the 3090 IIUC. All other cards were announced
           | as being actually much smaller than the 20xx series ones.
        
             | ses1984 wrote:
             | >Why would you keep a 1200$ RTX 2080 Ti or a 2500$ Titan
             | when you can get at least the same perf for 50-75% of the
             | price with the new products, and much better RTX perf
             | 
             | Well for one, keeping a card you already bought is free,
             | buying a new one costs money.
             | 
             | Why are you all getting so excited about the prices given
             | in a paper launch? Wait until you can actually buy one at
             | that price, which I'm guessing isn't going to be until well
             | into 2021.
        
             | mu_killnine wrote:
             | Pretty bold claim that 2080TIs are going to sell for $250
             | in a month, lol.
             | 
             | Certainly the market for them will take a beating compared
             | to new prices now. But I can't imagine it collapsing like
             | that purely based on the supply. How many people are really
             | going to drop their 2080 just because a new thing is out
             | there?
             | 
             | I'd love to be wrong, as an original 2070 owner ;)
        
               | fluffything wrote:
               | > How many people are really going to drop their 2080
               | just because a new thing is out there?
               | 
                | Pretty much everyone who paid for a 2080 Ti on launch and
                | always needs to have the latest, bestest thing.
        
             | JackMcMack wrote:
             | True, the new generation offers better price/performance vs
             | the previous generation.
             | 
             | Do note that the 3080 is still 2 weeks away (17 sept), 3090
             | 24 sept, 3070 in October.
             | 
             | I would consider getting a non-founders edition though, in
             | the past other brands have had better/more silent cooling
             | for pretty much identical pricing.
             | 
             | Edit: While the 3090 (350W) is the only one requiring a 3
             | slot cooler, all 3 use the new 12 pin power connector. The
             | 3080 founder edition power draw is still 320W, vs 220W for
             | the 2080.
        
               | binaryblitz wrote:
               | The 12 pin connector has an adapter for two 8pin cables.
               | This is a non issue unless the connector just bothers you
               | for some weird reason.
        
               | formerly_proven wrote:
               | > True, the new generation offers better
               | price/performance vs the previous generation.
               | 
               | That's not that difficult considering the RTX cards had
               | terrible price/perf.
        
           | godelski wrote:
           | > 8N manufacturing process is presumably Samsung
           | 
           | They explicitly noted that it was Samsung.
        
             | JackMcMack wrote:
             | It's not mentioned in this announcement; was it confirmed
             | somewhere in the past by Nvidia?
        
               | fluffything wrote:
               | It was mentioned in the video.
        
               | JackMcMack wrote:
               | Indeed it is. For those wondering, this (official) link
               | has much more info (and the video) than the currently
               | linked blog post.
               | 
               | [1] https://www.nvidia.com/en-
               | us/geforce/news/introducing-rtx-30...
        
         | Cacti wrote:
         | The RTX 3090 at 24GB seems like a great deal for the machine
         | learning crowd. Assuming the heat dissipation is ok.
        
           | fluffything wrote:
           | They mentioned it's 10K cooler than the RTX Titan, and 10x
           | less noise (not sure which scale they use for noise, since dB
           | are logarithmic...).
        
             | hajimemash wrote:
             | 10K = 10 Kelvin ≈ 10 degrees Celsius?
        
         | phoe-krk wrote:
         | > * RTX 3090 (Titan), 1500$, 1.5x faster than Titan, 8k
         | resolution @ 60 FPS with RTX on Control.
         | 
         | You likely meant something else; Titan can't be faster than
         | Titan.
        
           | tedunangst wrote:
           | 3090 holds the product position previously known as Titan.
        
             | dplavery92 wrote:
             | I think it holds the market position previously known as
             | xx80Ti
        
               | bonoboTP wrote:
               | Extremely confusingly the Ti stands/stood for titan as
               | well, while at the same time Nvidia also released cards
               | with the name "Titan", like the Titan X, Titan Xp, Titan
               | V, Titan RTX etc. When people say "the titans" they may
               | refer to either the "something Ti" cards or the "Titan
               | something" cards.
        
               | smileybarry wrote:
               | The video[1] mentions that it (3090) was made for users
               | in the Titan product segment, and it's introduced as a
               | Titan replacement. The Ti models will probably come later
               | as usual.
               | 
               | [1] https://www.youtube.com/watch?v=E98hC9e__Xs&t=34m14s
        
           | fluffything wrote:
           | Yeah, the naming is confusing: the RTX 3090 is the non-HPC
           | "Titan" model of this gen and, if I understood correctly, is
           | 1.5x faster than the previous-gen RTX Titan, which as of this
           | morning cost 2500$.
           | 
           | There is also a previous gen HPC "Titan V" with HBM memory,
           | but AFAICT no Ampere Titan card with HBM2 was announced.
        
             | bitL wrote:
             | There is a rumor about a 48GB version of the RTX 3090
             | replacing the original Titan RTX at a similar price. That
             | would make sense for deep learning workloads, as 24GB is
             | already too small for attention-based models.
        
             | sudosysgen wrote:
             | It's 50% faster in ray-tracing performance, not raw
             | performance, and it doesn't have big memory bandwidth
             | improvements. It's not the Titan successor; it's the
             | 2080Ti's successor.
             | 
             | You can see this because the chip name is GA102, the 2 at
             | the end indicates that this is a cut-down chip.
        
               | fluffything wrote:
               | All previous-generation Titan models have been cut-down
               | chips as well: TU102, GP102, etc.
               | 
               | With one exception: the Titan V card that I mentioned
               | which comes with the GV100 chip. But as I mentioned, that
               | one targets a very different market segment with HBM
               | memory, which the "RTX 3090" obviously doesn't target,
               | since otherwise it would come with HBM2 memory like all
               | the GA100 products.
        
               | sudosysgen wrote:
               | >All previous generation Titan models have been cut down
               | chips as well TU102, GV102, etc.
               | 
               | >With one exception:
               | 
               | This is simply wrong. The majority of Titans have been
               | full chips: Titan, Titan Z and Titan Black as well as the
               | Titan V and Titan X.
               | 
               | The other Titans were about as fast as manufacturer-
               | overclocked 80Ti models.
        
               | fluffything wrote:
               | Indeed.
               | 
               | The naming is super confusing. The 100 versions are
               | essentially HPC chips on a PCI Express board, while the
               | other 10x chips are completely different products.
        
           | addcninblue wrote:
           | I believe they're referencing past editions of Titan.
        
         | pornel wrote:
         | BTW: In the EU you have "14 day cooling off period" that gives
         | you a right to return items bought online.
        
       | mkaic wrote:
       | Here's the static launch page if anyone's interested:
       | 
       | https://www.nvidia.com/en-us/geforce/graphics-cards/30-serie...
        
         | ckastner wrote:
         | Thanks! Most importantly: "Available on September 17th"
        
           | tgb wrote:
           | That's for the 3080, the 3090 is Sep 24th, and 3070 is
           | "October", from lower down on that same page.
        
       | dang wrote:
       | We changed from https://www.nvidia.com/en-us/geforce/special-
       | event/?nvid=nv-..., which is a video of (what looks like) the
       | same material.
       | 
       | Some of the comments in this thread are about things people saw
       | in the video (spatulas?), so keep that context in mind.
        
         | mkaic wrote:
         | Yeah I posted the live announcement video and only found the
         | static launch page a few minutes after that. Apologies :)
        
           | dang wrote:
           | No worries!
        
       | sudosysgen wrote:
       | So it seems that the 3090 will be priced at 1499$. This is kind
       | of insane.
       | 
       | EDIT: For people comparing this to the Titan RTX, no. This is
       | GA102, not GA100. It's the cut-down version of Ampere. GA100 will
       | come out, and it will be even more expensive.
        
         | rrss wrote:
         | > EDIT: For people comparing this to the Titan RTX, no. This is
         | GA102, not GA100. It's the cut-down version of Ampere. GA100
         | will come out, and it will be even more expensive.
         | 
         | That doesn't mean it's not the Titan equivalent for this
         | generation. The Titan X (Pascal) and Titan Xp were both GP102,
         | and the Titan RTX was TU102. AFAIK, only the Titan V used the
         | "100" chip, and that was sorta a fluke because there was no
         | smaller Volta chip. (And the 3090 was explicitly introduced as
         | the Titan RTX replacement.)
        
           | sudosysgen wrote:
           | Actually, the Titan, Titan Z and Titan Black, as well as the
           | Titan V and Titan X, were full chips. The Titan XP, Xp and
           | RTX were basically just overclocked 80Ti chips, sometimes
           | slightly unlocked. They are the outliers, and widely regarded
           | as a scam: locking away 60$ of memory behind 1500$, with a
           | few more CUs yet about as fast as or slower in aggregate than
           | aftermarket 80Ti models.
           | 
           | It's not the Titan, because it's not the biggest chip, and
           | also, it's not called "Titan". It fits the motif of the
           | 2080Ti almost to a T.
        
         | fluffything wrote:
         | No HBM2 though.
        
         | liuliu wrote:
         | Yeah. Even if it is not a Titan, it is at a price that I am
         | glad to pay for the performance (if the 1.5x-faster-than-Titan
         | claim holds true). BERT / Transformer models are incredibly
         | memory hungry, and a sub-2k graphics card with 24GB memory is
         | great to have. Also, its number of CUDA cores seems to be
         | slightly more than the A100's; it would be interesting to see
         | benchmarks once it comes out.
         |
         | Not to mention that it still has NVLink support! The 3-slot
         | design is a bit challenging, and for a 4-card workstation I
         | need to rework my radiator mount to make space for another
         | PSU.
        
           | sudosysgen wrote:
           | Well, it's not just 1.5x faster than the Titan; it really is
           | 1.5x faster than the aftermarket 2080Tis too, since they are
           | basically the same chip anyway.
           | 
           | If you're happy paying 40% more for a 50% faster card, that's
           | okay. I just don't think it's very good for the industry.
        
         | king_magic wrote:
         | $1499 is a rounding error for ML hardware.
        
         | zamalek wrote:
         | Virtually nobody needs a 3090, much less a gamer (let's be
         | honest, though, many will buy one regardless). For the people
         | that actually do need that horsepower, it's unbelievably cheap
         | _for what you are getting._ You could have easily paid twice
         | that a year ago for less.
        
           | sudosysgen wrote:
           | Well yeah, that's just how it is because of progress. It's
           | still more expensive than last year's XX102 chip. That said,
           | it's only about 50% faster than a 2080Ti at everything except
           | ray-tracing. 50% faster, 40% more expensive.
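           | (On those numbers, that's 1.5 / 1.4, or only about 7% more
           | performance per dollar.)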
           | 
           | Back in Kepler, a 780Ti was 800$, and it had the GK110 chip,
           | which was the full-fat chip. Now, the cut-down chip costs
           | twice as much.
        
           | DoofusOfDeath wrote:
           | > Virtually nobody needs a 3090, much less a gamer (let's be
           | honest, though, many will buy one regardless).
           | 
           | That statement is absolutely true regarding me. Honestly, I
           | don't _need_ a gaming system at all, let alone one with such
           | a powerful GPU. But even in terms of my leisure-time gaming,
           | I could never justify the price difference over a much
           | cheaper card.
           | 
           | Still, I could imagine it making a lot more sense for other
           | people. E.g., pro gamers, people with big entertainment
           | budgets, or people using CUDA for number-crunching.
        
           | zimpenfish wrote:
           | > Virtually nobody needs a 3090, much less a gamer
           | 
           | I'm tempted to get one just to avoid having to think about
           | upgrading a graphics card for another 10 years. Plus I can do
           | some ML messing about as well for resume-driven development.
        
             | selectodude wrote:
             | Cheaper and better to get three $500 GPUs every three
             | years.
        
               | zimpenfish wrote:
               | Best I can get for $500 now (~£377) is an 8GB RTX 2060 -
               | 5x fewer cores, 1/3rd the RAM, 2/3rds the memory bandwidth
               | of the RTX 3090, which is £1399. Plus I really don't want
               | to upgrade my PC again for at least 5 years - just done
               | that and been reminded of why I hate it.
        
               | sudosysgen wrote:
               | Maybe you should spend 500 pounds and get a 2070 super?
        
         | Cacti wrote:
         | It's insane for gamers, but cheap for machine learning. $1500
         | for that speed and more importantly 24GB vram? Yes please.
        
           | starlust2 wrote:
           | What about for VR?
        
           | sudosysgen wrote:
           | 24GB of RAM at GDDR6X speeds, not HBM. Memory bandwidth isn't
           | scaling with memory capacity, so for a lot of ML applications
           | it's meh.
        
             | [deleted]
        
             | Cacti wrote:
             | For $1500 many are willing to trade speed for overall VRAM.
        
           | krautsourced wrote:
           | Also for GPU rendering.
        
         | Nursie wrote:
         | It's the Titan of this generation, but aimed at a slightly
         | wider audience. The RTX Titan was what, $2400? So you could say
         | it's come down a bit...
        
           | sudosysgen wrote:
           | It's not the Titan, it's a 2080Ti, upgraded. It is GA102, not
           | GA100.
        
             | Nursie wrote:
             | They even said it's the titan of this generation, but for a
             | wider audience.
             | 
             | Whether the chip number is right is pretty irrelevant.
        
               | sudosysgen wrote:
               | It isn't, because it means that there's a bigger chip.
        
             | redisman wrote:
             | I'm almost sure we're still getting the Tis and Supers next
             | year, since it's easy marketing and money.
        
         | daneel_w wrote:
         | Yeah, for consumer use these things are starting to approach
         | "smartphone fashion" consumerism levels.
        
       | nirav72 wrote:
       | did they leave out the price of the 3090 in the article or did I
       | somehow miss it? All I see is the 3070 and 3080 prices.
        
       | berryjerry wrote:
       | I find it nuts that those streamers called the game "smooth as
       | butter" at 60 fps. Even if it was 8K, there is no way 60 fps
       | could feel smooth.
        
       | Macha wrote:
       | If their performance claims are accurate, AMD has a huge hurdle
       | ahead of it, as the rumour mill only had them drawing even with
       | the 2080 Ti with Big Navi.
        
         | zamalek wrote:
         | > AMD has a huge hurdle ahead of it
         | 
         | I hope AMD can pull it off, as I am _really_ hoping to make my
         | first red box build. That being said, the performance:cost
         | ratio of the 30xx cards is mind-bogglingly attractive (assuming
         | the reviews back up NVIDIA's claims).
        
           | arvinsim wrote:
           | I will wait for benchmarks. Their 2x qualifier was RTX On.
           | Not sure if they meant general performance or RTX performance
           | only.
        
             | PaulBGD_ wrote:
             | The small text also said DLSS iirc, so non-dlss games and
             | machine learning loads probably aren't 2x.
        
             | BearOso wrote:
             | The 3070 has 20 TFLOPs of classic shader compute. The
             | 2080 Ti was 13.5 TFLOPs. I think you'll see 50% better
             | performance at the minimum.
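             | (For reference, 20 / 13.5 is about 1.48, i.e. roughly 48%
             | more raw shader throughput on paper, before any clock or
             | architecture differences.)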
        
             | chinigo wrote:
             | Digital Foundry's initial analysis _mostly_ bears this
             | claim out.
             | 
              | They found frame rates of roughly 160-190% of the previous
              | generation's for a bunch of recent games featuring both
              | RTX and traditional rendering, and about a fixed 190% in
              | Quake II RTX (which is exclusively RTX rendering).
             | 
             | https://www.youtube.com/watch?v=cWD01yUQdVA
        
             | Nursie wrote:
             | He was trying to sell it as double the RTX, double the
             | tensor cores _and_ double the general raster capability.
             | 
             | Guess the benchmarks will show us.
        
         | zucker42 wrote:
         | The rumours I've heard have them beating the 2080Ti, but not
         | enough to be competitive with the top Nvidia cards if these
         | performance claims are accurate. Plus I'd guess Nvidia will
         | launch a 3080Ti at ~$1000 sometime around the release of Big
         | Navi.
        
           | fluffything wrote:
           | Or they'll just move the 3070 and 3080 to 7nm TSMC to lower
           | the price. Or both.
        
       | tus88 wrote:
       | But can it play Crysis?
        
       | [deleted]
        
       | metalliqaz wrote:
       | 8K gaming is the ultimate gimmick. Even in the ideal conditions
       | they set up for that demo, I'm doubtful those gamers have the
       | ability to detect a significant improvement over 4K.
        
         | jiggawatts wrote:
         | In my experience, high resolution is only relevant to games
         | where the camera isn't constantly moving.
         | 
         | Think DOTA 2 or Civilization 5. They both look _amazing_ at 4K
         | and I bet would look noticeably better at 8K.
         | 
         | Especially in those two, the games' assets have enough detail to
         | allow zooming in all the way from a bird's eye view to a first
         | person view. As you crank up the rendering resolution, there's
         | plenty of "real" detail available in the models and textures.
         | You could push these games to 16K and _still_ get more quality
         | out of them.
        
         | shados wrote:
         | That's fine. It's really hard to get good frame rates in 4k
         | today. A lot of that isn't just the GPU (e.g. poorly optimized
         | games, CPU, I/O, etc.), but if it can do 8k 60fps reasonably
         | consistently on paper, then it can do 4k@60fps+ consistently
         | for real (assuming nothing else is the bottleneck).
         |
         | That makes it worthwhile. That's personally what I was waiting
         | for before upgrading my GTX 1080 Ti and my monitor.
        
         | cma wrote:
         | There already exist "half 8K" VR headsets, though (a full 16:9
         | 4K panel per eye), and you can keep seeing a difference there
         | up through around 16K per eye and at higher refresh rates than
         | they showed.
        
         | kitsunesoba wrote:
         | Agree that 8K is a bit overkill, but it'll be nice for 60FPS+
         | 5K, since 5K doubles 2560x1440 exactly and makes for a better
         | multipurpose resolution at 27" than 4K does.
        
           | shados wrote:
           | > since 5k doubles 2560x1440 exactly
           | 
           | Does that really matter? Integer scaling isn't really a thing
           | AFAIK, and game devs are more likely to test their HUD
           | layouts in 4k than 5k these days.
        
             | kitsunesoba wrote:
             | Doesn't matter much in games, but it makes a difference on
             | the desktop. Don't know about Windows but under macOS and
             | Linux integer scaling generally works more cleanly and
             | predictably than fractional scaling does.
        
               | shados wrote:
               | Ahh yeah, I don't know about Linux, but MacOS' desktop is
               | notorious for the way it scales things up.
        
         | friedman23 wrote:
         | Ultra high pixel count gaming is not a gimmick. Get a 4k
         | ultrawide form factor monitor and you have close to as many
         | pixels as are in an 8k monitor
        
         | t0mbstone wrote:
         | It's all about VR headsets
        
           | nomel wrote:
           | Until DLSS can work with VR, it's probably still out of
           | reach, beyond Roblox quality graphics.
        
           | fluffything wrote:
           | For VR 60 FPS is not enough, but maybe in the next generation
           | with another 2x leap we'll have 120 FPS at 8k.
        
             | omni wrote:
             | The interpolation tech they've come up with is not shabby;
             | 45 FPS will get you pretty far with that.
        
       | Tuganin wrote:
       | Can anyone think of cases where a GPU/processor unveiled by the
       | maker wasn't actually what they said it was once it was run in
       | real use cases?
       |
       | It has always felt to me that something similar to the car
       | emissions scandal is just waiting to happen in this industry.
        
       | dsign wrote:
       | That looks good!!! Can we upload already?
        
       | xvilka wrote:
       | Any news about that open source thing they promised to unveil
       | this year? Or did they lie, as usual?
        
       | solatic wrote:
       | Still no proper Wayland support?
       | 
       | I mean, I get that the primary market runs Windows. But some
       | people like to dual-boot.
        
         | wmf wrote:
         | Support vendors who support you. Buy AMD.
        
       | rnantes wrote:
       | Interesting that this card has 8K-capable HDMI 2.1 but not
       | DisplayPort 2.0. Wonder when we'll start seeing DP 2.0 support
       | in products; VESA said late 2020 in the press release.
        
         | Const-me wrote:
         | Indeed, the specs page says all 3 have HDMI 2.1 and 3x
         | DisplayPort 1.4a: https://www.nvidia.com/en-
         | eu/geforce/graphics-cards/30-serie...
         | 
         | However, Wikipedia says it's capable of 8K by using a lossy
         | compression scheme called Display Stream Compression (a VESA
         | standard):
         | 
         | DSC is a "visually lossless" encoding technique with up to a
         | 3:1 compression ratio. Using DSC with HBR3 transmission rates,
         | DisplayPort 1.4 can support 8K UHD (7680 x 4320) at 60 Hz
         | 
         | The quote is from there:
         | https://en.wikipedia.org/wiki/DisplayPort#1.4
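         |
         | For a rough sense of why DSC is needed here, a back-of-the-
         | envelope sketch (my own numbers, assuming 8-bit RGB and
         | ignoring blanking overhead):
         |
         |     # raw pixel data for 8K UHD at 60 Hz, 24 bits per pixel
         |     raw_gbps = 7680 * 4320 * 60 * 24 / 1e9   # ~47.8 Gbit/s
         |     # DP 1.4 HBR3: 4 lanes x 8.1 Gbit/s, 8b/10b encoding
         |     hbr3_gbps = 4 * 8.1 * 8 / 10             # ~25.9 Gbit/s
         |     print(raw_gbps > hbr3_gbps)       # True: no fit uncompressed
         |     print(raw_gbps / 3 <= hbr3_gbps)  # True: fits with 3:1 DSC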
        
         | DoofusOfDeath wrote:
         | Apologies if this question is super naive, but is there a good
         | reason for ongoing development of _both_ HDMI and DP? At least
         | for my use cases (home entertainment, and work computers) both
         | seem roughly equivalent.
         | 
         | The devices I've bought recently have tended to support HDMI
         | more than DP. So I got the impression that HDMI was "winning"
         | and DP would fade away.
         | 
         | But now it seems like vendors are moving towards video-over-
         | USB-C cables. And the "Alternate Mode protocol support matrix
         | for USB-C cables and adapters" table in this article [0] seems
         | to indicate that USB-C _cables_ have broader support for DP
         | than HDMI. Which makes me wonder if vendors will converge on
         | DP-protocol-over-USB-C-cable?
         | 
         | This makes me nostalgic for the relative simplicity of DVI.
         | 
         | [0] https://en.wikipedia.org/wiki/USB-C
        
           | EricE wrote:
           | HDMI is all about DRM. USB-C is just one of many reasons DP
           | hangs on. The ability to chain monitors is huge for digital
           | signage and other display uses too. Let's hope both continue
            | to be developed, since for computing DisplayPort is far more
            | useful and free of at least some of the DRM hell of HDMI.
        
       | debaserab2 wrote:
       | How many studios are even going to produce art assets at the
       | level of fidelity that 8K provides? These installs are going to
       | be huge.
        
         | redisman wrote:
          | The 8K demo is really just a sneak peek at the future. I doubt
          | anyone will actually be expecting to game at that resolution
          | this generation.
        
         | EricE wrote:
         | It's not always about providing the final product in 8K - 8K
         | means you can have multiple 4K windows up.
        
       | randyrand wrote:
       | I have a GTX 970 that I bought for $379 in 2015. Does Nvidia
       | still make flagship GPUs at this price point?
        
       | d33lio wrote:
       | I just want to see an OCL-Hashcat bench of the 3090 :)
        
       | polishdude20 wrote:
       | He sure likes his silicone spatulas.
        
       | valine wrote:
       | It's interesting that the 3090 is a replacement for the Titan.
       | The $1500 price tag is a bit higher than expected but considering
       | the Titan cost $2500 it doesn't seem too unreasonable.
        
         | PeterStuer wrote:
         | tbh the pricing is much better than I expected. I'll get one
         | for my new build, but probably a 3rd-party card, as I'm not
         | convinced by the cooling on the FE as of yet.
        
         | mkaic wrote:
         | In terms of just nomenclature, I think I like the consistency
         | of having all of the current lineup actually be called GeForce
         | 30XX.
        
           | Macha wrote:
           | The supers/tis/titans will come to clutter it up next year or
           | so.
        
       | rectangleboy wrote:
       | It was behind the exorbitant number of spatulas the whole time!
        
         | tbeseda wrote:
         | And then the oven 3090 reveal!
        
           | heyoni wrote:
           | It's so hot, he uses it to bake bread.
        
         | mkaic wrote:
         | Right?! I thought that was absolutely hilarious. Good to know
         | NVIDIA's got a sense of humor.
        
         | carabiner wrote:
         | Why does he have so many?
        
           | golergka wrote:
           | That's why.
        
       | gallerdude wrote:
       | Just compare how much Nvidia is pushing the limits as the
       | industry leader in GPUs to how Intel seems to be reluctantly
       | playing catch-up as the industry leader in CPUs. Leadership
       | matters.
        
         | jjoonathan wrote:
         | Nvidia isn't a fab. AMD isn't a fab. TSMC is the fab that beat
         | Intel. TSMC's customers, like AMD and NVidia, are
         | beneficiaries.
         | 
         | So far, at least. In the long term it behooves them, as
         | customers of TSMC, for TSMC to have competition.
         | 
         | Further, "leadership matters" is a somewhat ironic complaint
         | given that Intel ran face-first into a brick wall precisely
         | because they were leading. TSMC placed conservative bets on the
         | next node and Intel placed risky bets because they needed the
         | extra risk/reward to maintain leadership. Intel's bets failed
         | (in particular, cobalt wires and COAG). They chose "leadership
         | or bust" and went "bust," at least for now.
        
           | rrss wrote:
           | 'gallerdude didn't mention anything about fabs. Also, these
           | GPUs are fabricated at Samsung, so your comments about TSMC
           | are mostly irrelevant.
           | 
           | If the fab was all that mattered, AMD GPUs would be
           | dominating Nvidia, since they have been shipping GPUs using
           | TSMC 7nm (a superior process to Samsung 8nm) for over a year.
        
           | trynumber9 wrote:
           | In this case Nvidia said they're using Samsung's "8N"
           | process.
        
           | i-am-curious wrote:
           | This is such a simpleton take.
           | 
           | The whole reason AMD is able to crank out 64-core CPUs is the
           | CCX architecture - the one people laughed at. No TSMC there.
           | Not to mention other innovations like Infinity Fabric.
           |
           | In Ampere, for instance, there are so many innovations, like
           | PAM signalling, 2x shader instructions per clock, DLSS, and
           | RTX Voice.
           |
           | TSMC beat Intel, sure, but that is not the main reason why
           | Nvidia and even AMD are leading the industry. In fact, Ampere
           | is on Samsung 8N.
        
             | Havoc wrote:
             | No need for name calling...
        
       | johnwheeler wrote:
       | It's interesting the effect new graphics have on how old graphics
       | are seen. I remember when Resident Evil came out on the first
       | Playstation, at the time, I considered certain game elements like
       | fire and water indistinguishable from reality. Now it looks like
       | pixelated garbage and I ask myself how I could have ever thought
       | that.
        
         | bobthepanda wrote:
         | Part of it is also that modern TVs and monitors are much higher
         | resolution than their predecessors.
         | 
         | One interesting point of reference is the initial switch to HD
         | in the 2000s; I remember there was a bit of panic in the
         | beginning because news studios had to adjust studio lighting
         | and makeup; flaws that were not perceptible on a CRT were all
          | of a sudden extremely noticeable when blown up on a 48"
          | flatscreen.
        
           | Shorel wrote:
           | My CRT could do 1600x1200, not bad at all compared with
           | modern 1080P.
           | 
           | My point is: CRTs showed a lot more colours, had much lower
           | latency, and did not have a grid where you could see
           | individual pixels.
           | 
           | No colour banding, no lag and no screen door effect. That
           | really enhanced the content you saw when you had a CRT.
        
             | gowld wrote:
             | Nice CRT. Did your _videos_ have 1600x1200?
             | 
             | During that era, 640x480 video was the common high end.
        
           | MBCook wrote:
           | When you watch TV shows from the SD era that have been
           | scanned into HD from film, you can sometimes see this. For
           | example, Seinfeld.
           |
           | Occasionally things will be slightly out of focus, but it
           | wasn't apparent on an SD CRT, so they used the shot anyway.
           | On an HD screen you can see it, and it's kind of distracting.
        
           | keanebean86 wrote:
           | 30 Rock did a skit about HD cameras. Close to the end of this
           | clip: https://youtu.be/zoXfQupV5n8
        
           | Nokinside wrote:
           | Dogs and cats suddenly started to watch TV when refresh rate
           | and resolution increased.
        
             | tpmx wrote:
             | Presumably the TV size increase when we went from CRTs to
             | plasmas and LCD panels around the same time was a big
             | factor.
        
             | EamonnMR wrote:
             | Is this real? Would make a fascinating study.
        
               | atom-morgan wrote:
               | Based on anecdotes it seems like it. Older generations
               | always seemed to think pets had no idea what was going on
               | when viewing a TV to the point of ignoring it. I'm
               | assuming their body language suggested so.
               | 
               | Nowadays people open up koi pond videos on phones and let
               | their cats play with it. But like you mentioned, it would
               | be interesting to see a study on it.
        
               | tpmx wrote:
               | The framerate didn't increase. It was still 60 or 50 Hz.
               | The resolution did increase, but so did the size. The
               | typical angular resolution stayed pretty much constant, I
               | think.
        
           | rubber_duck wrote:
           | IMO watching a few sci-fi movies in 4k+ looks ridiculous - I
           | start noticing the difference between the CGI environment and
           | the actors, and it kills the immersion completely. It goes
           | from "that character is really there" to "this guy is LARPing
           | in front of a green screen".
        
             | colechristensen wrote:
             | A lot of it is the lighting, I think.
             | 
             | Higher resolution, better color replication, and frame rate
             | make very obvious the fact that there seems to be a magical
             | glowing orb following around the characters right behind
             | the camera. It breaks immersion; with lower quality you can
             | get away with it because it's more difficult to notice.
             | 
             | Something that I've also found more and more irritating is
             | the foley artists doing ridiculous things for sound
             | effects, especially in nature documentaries, but all over
             | the place really.
        
         | Narishma wrote:
         | Probably because low-resolution graphics look better on CRTs
         | than on flat panels.
        
           | Const-me wrote:
           | This.
           | 
           | When people say something is photo-realistic, they aren't
           | comparing it to reality; they're comparing it to photos or
           | videos _as viewed on the same display_.
           | 
           | By some metrics, even extremely expensive modern hardware is
           | very far from reality. Pretty much all games show the Sun
           | occasionally; to reproduce the same luminosity at 1m distance
           | you need kilowatts of light (assuming a 180-degree viewing
           | angle: the surface of a 1m half sphere is about 6.3 m^2, and
           | the visible-spectrum flux is about 550 W/m^2). These levels
           | are simply unsafe for home use. For example, such a display
           | is going to ignite or at least melt a keyboard if it's too
           | close.
           | 
           | Similar for dynamic range. Reality has extreme dynamic
           | ranges, i.e. both very bright and very dark areas in the same
           | scene.
           | 
           | At least modern hardware generally delivers realistic
           | resolutions, and color accuracy.
        
             | formerly_proven wrote:
             | > At least modern hardware generally delivers realistic
             | resolutions, and color accuracy.
             | 
             | I don't know, look at some flowers... most screens can't
             | show their colors. Stuff like the intense, super-saturated
             | reds and purples are basically impossible to get right in
             | sRGB, with very obvious artefacts, yet in real life there
             | are no artefacts, there is texture, detail and color, where
             | the picture only has a smear of four different reds. DCI-P3
             | and Rec.2020 might reduce the issues there, as would 10-bit
             | color.
        
               | Const-me wrote:
               | > Stuff like the intense, super-saturated reds and
               | purples are basically impossible to get right in sRGB
               | 
               | I agree, but professional monitors are close to Adobe
               | RGB, and have been for decades. Adobe RGB is close to
               | DCI-P3, and not much worse than BT.2020.
               | 
               | P.S. It doesn't help with red/purple though; Adobe RGB
               | extends sRGB mostly in the green direction.
        
         | Nokinside wrote:
         | I had the same experience with VR.
         | 
         | I had the opportunity to test the Varjo VR-2 Pro ($5,995.00)
         | https://store.varjo.com/varjo-vr-2-pro and now all consumer VR
         | products feel like total crap.
        
         | liamcardenas wrote:
         | That's interesting. Even if I look at current state of the art
         | graphics _today_ I don't think for a second that it's anything
         | close to indistinguishable from reality (not to say it's not
         | good or impressive).
        
         | sillysaurusx wrote:
         | It's even harder to convince other people. Once you realize
         | that we do not understand how to make graphics look real, the
         | pattern appears everywhere. Yet no one will acknowledge it.
         | 
         | Graphics -- even movie quality graphics -- don't look anything
         | like what a camera produces. The sole exception is when you're
         | mixing in real video with CG elements. But try to synthesize an
         | entire frame from scratch, then put it next to a video camera's
         | output, and people can tell the difference.
         | 
         | Also, screenshots are misleading. You have to test video, not
         | screencaps. Video is processed differently by the brain.
         | 
         | 10 out of 10 times, all the graphics engineers come out of the
         | woodwork going "but actually we do know how! It's a matter of
         | using so-and-so calculations and measuring BRDFs and..." None
         | of those equations work.
        
           | drran wrote:
           | Unreal engine looks pretty convincing with proper camera
           | setup: https://youtu.be/zKu1Y-LlfNQ?t=118 .
        
           | devit wrote:
           | We definitely know how to make graphics look real - look at
           | the unbiased renderers like LuxRender
           | [https://luxcorerender.org/gallery/].
           | 
           | The problem is that we can't do that in 1/60 second on
           | consumer-priced hardware, and also both scanning real objects
           | and manually modelling are expensive.
        
             | natdempk wrote:
             | Which of those images look real to you? To me, pretty much
             | all of them look strictly like rendered images or have some
             | aspect of them which gives away the fact that they are
             | rendered.
        
               | CydeWeys wrote:
               | We need a blind test where you evaluate a set of images
               | on whether you think each one is real or rendered.
        
               | natdempk wrote:
               | Heh, I agree! I'd love to try something like that if it
               | exists.
        
               | bonoboTP wrote:
               | With a caveat: the real images should not be cherry-picked
               | to look as close to CGI as possible, but the other way
               | around. Real and rendered can look indistinguishable that
               | way too, but we want real-looking CGI, not carefully
               | arranged CGI-looking reality. The biggest giveaway is the
               | simplicity and pristine sterility of the rendered scenes:
               | no mess, clutter, or irregularity. Just look at photos
               | that people haphazardly snap in the real world. Pick those
               | for a blind study and CGI is nowhere near. Pick carefully
               | lit and arranged artificial scenes with heavy post-
               | processing, like studio shoots or hotel brochures or car
               | ads, and the difference will be less obvious.
        
               | CydeWeys wrote:
               | I love this idea. Quick question for you, is this scene
               | real or rendered? https://imgur.com/a/MpKMo6j
               | 
               | If you think the answer is obvious, then CGI definitely
               | has more work to do. If not ... ??
        
               | bonoboTP wrote:
               | Looks real. The dirt on the computer, the curls of the
               | books and magazines and many other small details look
               | very real.
               | 
               | If this is CGI, then I'm impressed and want to see more
               | from where this came from.
        
             | SahAssar wrote:
             | I think that with a lot of work we can make a real scene
             | that looks like those renders, but none of those look like
             | a "normal" real photo.
        
             | stan_rogers wrote:
             | Do any of those actually look real to you? I'm not saying
             | that a convincingly realistic render is impossible, but
             | those are all obviously synthetic to me. Maybe that's just
             | a lifetime of photography at play.
        
             | sillysaurusx wrote:
             | Again, it's important to focus on video, not screenshots.
             | Video is processed differently by the brain.
             | 
             | "The Dress" illustrates how easy it is to fool people with
             | still images. Movement gives crucial context. Our visual
             | system has evolved for millions of years specifically to
             | exploit that context.
        
               | ACow_Adonis wrote:
               | Also, the style of photos most of us are used to seeing is
               | so digitally processed before publication that there's a
               | real question of what it means to have a 'real photo' to
               | approach in the first place.
               |
               | That we're capable of getting pretty close isn't that
               | surprising, because most photos have already had N layers
               | of digital effects applied, moving them closer to renders,
               | rather than the other way around.
        
           | Terr_ wrote:
           | > don't look anything like what a camera produces
           | 
           | The reverse-problem is a pet-peeve of mine: It seems many
           | people have been accidentally brainwashed by Hollywood into
           | thinking that film-camera effects are signs of "realism."
           | 
           | So then the first-person game introduces something like lens-
           | flares, and everyone goes: "OMG it's so realistic now", even
           | though the truth is the exact opposite. If you were "really
           | there" as Hero Protagonist, you wouldn't have camera-lenses
           | for eyeballs except in a cyber-punk genre.
        
           | ponker wrote:
           | > Once you realize that we do not understand how to make
           | graphics look real, the pattern appears everywhere. Yet no
           | one will acknowledge it.
           | 
           | Making graphics that look real is almost equivalent to the
           | Turing Test, I think plenty of people are willing to
           | acknowledge that it's unsolved.
        
             | Terr_ wrote:
             | > Turing Test
             | 
             | Amusingly relevant yet slightly off-topic:
             | https://existentialcomics.com/comic/357
        
           | Scene_Cast2 wrote:
           | So I've dabbled a bit in graphics, and it feels like a
           | content problem. Sure, you can have an artist pump out some
           | models and textures. But for every level of detail increase,
           | the artist must spend ~3x the time of the previous level. So
            | for example, even making just a simple dirt road look
           | _really_ good and photo-realistic would involve much more
           | time (a week? a month?) than one can reasonably spend on a
           | commercial project (where you have bajillions of assets to
           | worry about).
        
           | pdelbarba wrote:
           | I think it depends on context. There are a lot of sim games
           | where the environment is very controlled and forums are
           | littered with people who can't tell whether the occasional
           | picture is real or in-game. A forest is hard to render
           | accurately but a plane in flight is pretty trivial
        
             | i-am-curious wrote:
              | Forests have pretty much been solved. Look at games like
              | Shadow of the Tomb Raider; the forest looks amazing. The
             | difficulty now is limited to finer details, like hair. Hair
             | is still pretty much unsolved.
        
               | sillysaurusx wrote:
               | See? It's almost deterministic. People really can't
               | accept that we don't know how to do something.
               | 
               | Plop a nature video next to your forest rendering and
               | it'll become apparent just how unsolved trees are. And
               | everything else, for that matter.
               | 
               | The precise claim is this: viewers should be able to
               | identify a rendered video no better than random chance.
               | If you conduct this experiment, you'll see that real
               | videos from actual video cameras wipe the floor.
        
               | mrob wrote:
               | The motion blur will probably give it away. Accurate
               | video motion blur is computationally expensive but
               | conceptually simple. Just render at about 100 times your
               | target frame rate and average batches of frames together
               | in linear colorspace. You can speed this up by rendering
               | at a lower frame rate (e.g. 10 times your target frame
               | rate), estimating motion, and blurring along the motion
               | vectors before averaging the frames. You can further
               | speed it up by using an adaptive frame rate depending on
               | motion speed and contrast. But a lot of rendered video
               | doesn't even try. Look at a fast-moving bright point of
               | light and you'll easily see the difference.
               | 
               | (But note this is only replicating video, not reality.
               | Truly realistic motion blur requires ultra-high displayed
               | frame rates beyond the capabilities of current hardware.)
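                |
                | A minimal sketch of the brute-force version (my own code,
                | assuming a hypothetical render_frame(t) that returns a
                | linear-light RGB frame as a NumPy array):
                |
                |     import numpy as np
                |
                |     def blurred_frame(render_frame, t0, dt, subframes=100):
                |         # Average many sub-frames across one output
                |         # frame's exposure window, in linear colorspace.
                |         times = t0 + dt * np.arange(subframes) / subframes
                |         return np.mean([render_frame(t) for t in times],
                |                        axis=0)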
        
               | EForEndeavour wrote:
               | > If you conduct this experiment, you'll see that real
               | videos from actual video cameras wipe the floor.
               | 
               | To be fair: have you run such an experiment yourself, or
               | are you just assuming that this conclusion will always
               | result?
        
         | agumonkey wrote:
          | Something that hasn't changed for me is the mood. RE is still
          | as gloomy and soul-grabbing. The value is outside the pixel
          | count.
        
         | Razengan wrote:
         | > _I considered certain game elements like fire and water
         | indistinguishable from reality. Now it looks like pixelated
         | garbage and I ask myself how I could have ever thought that._
         | 
         | Imagination. Same as the people who grew up with the original
         | Atari and Sinclair Spectrum and Commodore 64.
        
         | giarc wrote:
          | I remember when I first saw a DVD on a good screen. It was the
          | first Matrix movie in a friend's basement. I was so blown away
          | by the quality jump from VHS.
        
         | jonplackett wrote:
         | I think it's something we learn.
         | 
         | I remember as a kid my younger sister watching cartoons and
         | realising she couldn't tell the difference between them and
         | live action.
         | 
         | I think as graphics get more complex our ability to distinguish
         | increases. But we'll probably hit a limit in our ability to
         | keep up sooner or later.
        
         | jamroom wrote:
         | Oh man - the same here with me - when I first got Morrowind
         | there was an option you could enable in ATI (now AMD) cards at
         | the time that made water look "real". I was able to enable that
         | and was blown away - I was like "well that's it - doesn't get
         | any more real than this". Having loaded Morrowind recently I
          | could not believe how bad it looked lol. Makes me wonder - what
         | are we "not" seeing right now that will make us think this way
         | 10-20 years from now?
        
           | lacker wrote:
           | The biggest thing to me is that video games do not have very
           | many independent objects, compared to reality. Go outside and
           | look at the real world and you will see the ground has little
           | pebbles and bits of mud and all this stuff that gets kicked
           | around. No video game will let you inspect the leaves of
           | plants to look for little bugs, the way reality will. Or for
           | indoor scenes, there is no video game that accurately
           | captures the experience of "picking up a living room with a
           | bunch of toys strewn about".
        
             | makapuf wrote:
             | > No video game will let you inspect the leaves of plants
             | to look for little bugs, the way reality will.
             | 
             | Why not? With procedural generation and LOD rendering it's
             | not impossible. Not that it's easy, but not impossible?
        
               | bonoboTP wrote:
               | Presumably you're not a native English speaker. "will"
               | here does not refer to the future, this is a subtle
               | grammatical thing that I can't explain well, but it
               | refers to presently existing games.
        
               | cyberlurker wrote:
               | Still, there is no reason I can think of that the game
                | described couldn't be created right now, besides it not
                | being a very fun/marketable game.
        
               | bonoboTP wrote:
                | But that's theoretical; the point is more how current
                | games actually look, not what people _could_ potentially
                | make with today's tech.
               | 
               | Also, some games are intentionally cartoonish as an
               | artistic choice so photorealism isn't always the goal.
        
               | bobthepanda wrote:
               | Everything has a cost, either computational or human.
               | 
               | We've probably reached the point where human cost exceeds
               | computational cost, which is to say that developing and
               | QA testing such a feature would probably cost more money
               | than it's worth. How many users of software would gain
               | from such minutiae?
        
             | Shorel wrote:
             | That would require programming lots and lots of procedural
             | generation algorithms, one for each kind of thing.
             | 
             | Sounds like a fun project for the next generations.
             | Something to play when I am older.
        
               | perseusmandate wrote:
                | Procedural generation for plants is actually pretty common
                | already. Most trees you see in video games are from
                | middleware like Realtree.
        
             | outworlder wrote:
             | > Go outside and look at the real world and you will see
             | the ground has little pebbles and bits of mud and all this
             | stuff that gets kicked around
             | 
              | We are getting better. Like snow deformation in RDR2, for
              | instance (it works even on a PS4).
             | 
             | But random bits of debris that can get kicked around - and
             | subsequently inspected - no. That's a problem.
             | 
             | > No video game will let you inspect the leaves of plants
             | to look for little bugs, the way reality will
             | 
             | That's an easier problem than tracking all the debris.
              | Before you inspect, you have no idea what will be there -
              | the computer doesn't have to know either, and it can be
              | optimized away until there's an observer.
             | 
              | Think Heisenberg's uncertainty principle, but for virtual
              | worlds.
        
           | jakear wrote:
           | As someone who is most definitely not a gamer but with
           | coworkers and friends that get big into high cost gaming
           | systems, here's where I see the biggest deltas between real
           | life and the screen:
           | 
           | - Hair. Up until very recently hair was downright awful.
           | Nowadays it's acceptable-ish, but there's still lots of room
           | for improvement, in particular in natural motion of long
           | hair.
           | 
           | - Water. I spend a big chunk of my time on the water, so I'm
           | probably more attuned to how it moves than most. Games just
           | don't have it down. In particular, I think a lot could be
           | gained by embracing it's fractal nature: in my experience, at
           | every human scale (mm to dam and everything in between) very
           | similar wave patterns exist, but games tend to have just a
           | small fixed number of "wave-layers" at various scales stacked
           | together.
           | 
           | - Clouds. I can easily spend hours just observing clouds,
           | looking at things like their shape, overall motion, internal
           | motion, composition, edge behaviors, etc, and how they change
           | over time. Game clouds are lacking in all these regards,
           | particularly the time-sensitive nature of a cloud.
           | 
           | - Foliage. In games I've seen, individual plants/etc. in
           | isolation generally look really quite decent. But the second
            | a physical object interacts with them, they very clearly
           | don't respond in the right ways. There's a lot about how
           | branches bend and leaves rustle and more that is lost.
           | Additionally, in groups of plants it's often clear that some
           | small number of models are being reused, possibly with some
           | generated randomness added. But the variety doesn't come
           | close to matching what one would really see.
           | 
           | - Human faces and expressions. These are generally really
           | bad, especially in normal gameplay (cut-scenes are sometimes
           | better)
           | 
           | Again, this is probably all just really weird stuff I notice
           | because I spend the vast majority of my time outdoors and
           | only see "HiFi" games being played very infrequently. I don't
           | think games are worse for not implementing these, but I am
           | very interested in what they'll look like 10-20 years down
           | the road.
        
           | formerly_proven wrote:
           | Hm. Many people here share their sentiments of perceiving
           | older games (or current games) as being close to
           | photorealistic. Personally it never felt that way to me. All
           | games have obvious problems where they don't behave/look
           | anywhere close to reality. If you remove the interactive
           | element and only use somewhat massaged screenshots, then,
           | sure certain elements are basically photorealistic and have
           | been for a while. For example, landscapes look pretty darn
           | realistic. Anything alive or even most man-made artefacts,
           | not so much.
           | 
            | Apart from graphics, most stuff in games is pretty rough.
            | Animations are generally bad and a long way from reaching
            | the uncanny valley of "getting close"; they're still at the
           | "abstract representation of concept" detail level. AI is dumb
           | (largely for [perceived] player-acceptance reasons). Sound is
           | generally poor; some games still don't use 3D sound. Physics
           | are "abstract representation" level again, some games still
           | have fly-through walls and vibrating items. etc. etc.
        
             | foxdev wrote:
             | I was always so confused by people saying the latest games
             | were realistic. They looked so horrible. Maybe it's because
             | my point of comparison was Nintendo games where they
             | focused more on art direction than polygon count. I read an
             | interview with someone who worked with Shigeru Miyamoto and
             | they talked about how he would come back with changes to
             | specific leaves and rocks after spending time exploring a
             | cave or forest. The attention to detail, no matter how low-
             | poly and simply-textured, made all the difference.
             | 
             | This hasn't changed. New "realistic" games still range from
             | horrible to boring-looking. I don't know if it's collective
             | delusion or if I'm missing something.
        
               | WillPostForFood wrote:
               | The new Microsoft Flight Sim goes for realism, and often
               | achieves beauty.
               | 
               | https://www.youtube.com/watch?v=isvWpUXgKgM
               | 
               | Take this as a compliment, not a criticism when I say
               | you've rigged your proposition by holding up Nintendo
               | first party games, and Shigeru Miyamoto for comparison.
               | If you were to look back at the average Nintendo game, not
               | made by Nintendo, they are mostly unremarkable.
               | 
               | https://www.youtube.com/watch?v=E7ymWKnkAxM
        
               | foxdev wrote:
               | I should have specified first party. There were some
               | third party games I liked, but few paid as much attention
               | to style and detail. It's hard and expensive to get both
               | realism and style right. I don't think it's a coincidence
               | that most games I like for the art direction come from
               | small indie studios who never had a shot at competing on
               | realism.
        
             | deaddodo wrote:
             | Yeah, I never understood these comments. I went from the
             | 8-bit generation on up and never thought PSX/Saturn/n64
             | games looked "real"; just better and more options than
             | before. I don't think I ever considered a game "realistic"
             | until the mid-late 00's.
        
             | KaoruAoiShiho wrote:
             | Yeah... these comments are kinda weird. Hate to say it but
             | maybe they should go outside more lol, computer graphics
             | always looked pretty bad to me, even the clips in today's
             | nvidia presentation look really unnatural. That's not to
             | take anything away from the technology and the massive
             | advances it represents, just compared to reality it's still
             | really far from fooling a human brain.
        
               | Nursie wrote:
               | To me the "marbles" RT demo looked amazingly real.
               | 
               | The rest, not so much, sure.
        
               | Retric wrote:
               | Just watched the RT demo video and the paintbrushes for
               | example look amazingly bad. All surfaces look flat, for
               | lack of a better word; most noticeably, it's sterile,
               | without dust.
               | The steam blasts are terrible as they miss all the
               | internal swirling you get from actual vapor. They cover
               | most of this up by moving stuff around and adding clutter
               | which tries to keep you from really focusing on anything.
               | 
               | It's basically using the same technique as hand drawn
               | animation where as long as you realize what's being
               | represented you can automatically fill in missing
               | details. However, this fails as soon as you focus on any
               | one detail.
               | 
               | Honestly, it's not bad for what it is. I mean the physics
               | engine was probably the worst part, but as a visual demo
               | that's fine.
        
               | wiz21c wrote:
               | Interestingly, many of us feel that there's a huge gap
               | between the past and now. But how many of us can
               | actually verbalize what will change in the rendering of
               | pictures in, say, 3 years?
               | 
               | For example, I can clearly see that ray tracing produces
               | better results. But it's a bit harder to tell how much
               | better it is, to find the words that describe how much
               | better it is.
               | Of course, one can say that, for example, photon tracing
               | is more physically accurate. But still, what words can we
               | use to describe how real (or not real) a still image is
               | ("more realistic" doesn't count :-))
        
               | bonoboTP wrote:
               | Game graphics look sterile, too clean, sharp, plasticky.
               | The real world is messier with less clear separation
               | between objects, things blend together more, subtle
               | shadows, surfaces are less uniform, there is more
               | clutter, dirt, wrinkles, details.
        
               | formerly_proven wrote:
               | In games and animations, everything is textured, but few
               | things actually have a texture to them :)
        
               | nightski wrote:
               | Is that an artifact of the graphics capability? Or the
               | art style? I think the two often get confused and many
               | games are specifically designed for a sterile/clean look.
        
               | bonoboTP wrote:
               | Not sure. It's hard to analyze as it's more of a visceral
               | impression. It could be in part the way natural
               | environments tend to structure themselves over time, like
               | how we throw our stuff around in natural ways.
               | 
               | But also, the design often adapts to the capabilities.
               | Games like GTA3 used to have thick fog just to hide the
               | fact that they couldn't render stuff far away in time.
               | You can say that's an artistic choice to give a smoggy
               | big city atmosphere, but clearly it was a practical
               | choice as well. Even today, game designers like to make
               | things dark, blurry and rainy, so that the un-realism
               | becomes less obvious.
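               | 
               | (For the curious: the fog trick of that era was little
               | more than a depth-based blend toward a fog colour. A
               | rough sketch, with made-up numbers:)
               | 
               |     import math
               | 
               |     # Classic exponential distance fog: blend each pixel
               |     # toward the fog colour as depth grows, so geometry
               |     # near the draw distance simply fades away.
               |     def fog_blend(color, fog_color, depth, density=0.02):
               |         f = math.exp(-density * depth)  # 1.0 near, ~0 far
               |         return tuple(f * c + (1 - f) * fc
               |                      for c, fc in zip(color, fog_color))
               | 
               |     print(fog_blend((1.0, 0.2, 0.2), (0.5, 0.5, 0.5), 10))
               |     print(fog_blend((1.0, 0.2, 0.2), (0.5, 0.5, 0.5), 200))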
        
           | pvg wrote:
           | Maybe the water stood out since Morrowind was an ugly game
           | even in its day. All the Elder Scrolls games have a well-
           | earned reputation of looking like they're populated by
           | cardboard mutants. Reminds me a bit of the old Infocom ad:
           | 
           | http://www.retroadv.it/images/03082019/Home_Computer_Magazin.
           | ..
        
             | int_19h wrote:
             | Morrowind was complicated. The characters were ugly, and
             | animations especially horrible. But the landscapes were
             | considered very beautiful by the standards of the day.
        
           | Yajirobe wrote:
           | > Makes me wonder - what are we "not" seeing right now that
           | will make us think this way 10-20 years from now?
           | 
           | How about ray tracing? If we get real-time 60fps+ ray-traced
           | computer graphics in games, that would blow what we have now
           | out of the water.
        
           | 0xffff2 wrote:
           | >Makes me wonder - what are we "not" seeing right now that
           | will make us think this way 10-20 years from now?
           | 
           | My vote would be cloth simulation and clothing clipping. I've
           | yet to see a game that comes even close to doing this
           | realistically. Imagine what happens to your sleeves when you
           | lift your arms above your head for example, or how the plates
           | of a suit of armor naturally slide over each other. In every
           | game I can think of, clothing is rigidly attached to the
           | underlying skeleton and it just stretches/clips as the
           | character moves.
           | 
           | I guess fidelity of everything else has gotten much better,
           | because I recently started noticing this and now I find it
           | very distracting in any game that has in-engine cut scenes
           | involving character closeups.
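           | 
           | To make "cloth simulation" concrete: the usual starting
           | point is a bunch of particles with distance constraints
           | (Verlet / position-based dynamics), rather than vertices
           | glued to bones. A toy 1D "rope" sketch of the idea:
           | 
           |     import math
           | 
           |     # Five particles in a chain; the first is pinned (the
           |     # "shoulder"), the rest swing under gravity while
           |     # distance constraints keep neighbours ~1.0 apart.
           |     pts  = [[float(i), 0.0] for i in range(5)]
           |     prev = [p[:] for p in pts]
           |     REST = 1.0
           | 
           |     def step(dt=0.016, g=-9.8):
           |         for i, p in enumerate(pts):
           |             if i == 0:
           |                 continue                  # pinned particle
           |             vx, vy = p[0] - prev[i][0], p[1] - prev[i][1]
           |             prev[i] = p[:]
           |             p[0] += vx
           |             p[1] += vy + g * dt * dt      # Verlet integration
           |         for _ in range(5):                # relax constraints
           |             for i in range(len(pts) - 1):
           |                 a, b = pts[i], pts[i + 1]
           |                 dx, dy = b[0] - a[0], b[1] - a[1]
           |                 d = math.hypot(dx, dy) or 1e-9
           |                 corr = (d - REST) / d * 0.5
           |                 if i != 0:
           |                     a[0] += dx * corr; a[1] += dy * corr
           |                 b[0] -= dx * corr; b[1] -= dy * corr
           | 
           |     for _ in range(60):
           |         step()
           |     print(pts[-1])   # the free end has swung down and inward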
        
           | noja wrote:
           | When you compared it recently did you use the same monitor,
           | cable (analog or digital), and resolution?
        
           | outworlder wrote:
           | YES! Morrowind was my 'shader benchmark' for some time.
        
           | int_19h wrote:
           | It was one of the first prominent uses of pixel shaders in
           | games, coinciding with NVidia (not ATI) releasing GeForce
           | models that supported them.
           | 
           | It was so noticeable because it was such a huge increase in
           | quality compared to what passed for water in games before -
           | usually some kind of blue-grey translucent texture. For the
           | first time, pixel shaders produced water that was clearly an
           | attempt to imitate water IRL, not the cartoonish
           | representation of it.
        
           | Budabellly wrote:
           | For me, it's the fidelity of human facial animation that
           | has a long way to go. There are teams doing a great job of
           | it [1], but the labor/skill required to bring one hyperreal
           | facial rig to a game or movie seems insane. I think companies
           | pursuing AR/VR applications will lead here.
           | 
           | ML creative tools stand to automate a lot of this imo.
           | 
           | [1] random example: https://snapperstech.com/
        
           | johnwheeler wrote:
           | > "well that's it - doesn't get any more real than this"
           | 
           | > Makes me wonder - what are we "not" seeing right now that
           | will make us think this way 10-20 years from now?
           | 
           | Yes - well said. That's what I was trying to convey in my
           | comment.
        
             | lwansbrough wrote:
             | I think Epic's UE5 demo is really really close to reality
             | in terms of lighting and geometry. Next 10-20 years will
             | probably see the same technology being brought into larger
             | and larger environments with more moving parts.
             | 
             | Then there's the more obvious stuff that isn't done well
             | even today: skeletal animation is still lacking and feels
             | unnatural, physics systems are still very approximate and
             | constrained - often times most things are indestructible in
             | games, fluid dynamics are still very slow/limited. Human
             | models still don't look real though, and the voice acting
             | never quite matches the mouth movement or body language.
             | 
             | I do really feel like we've crossed the uncanny valley when
             | it comes to natural scenery rendering. But a lot of what
             | makes things feel real are still missing from games.
        
               | heipei wrote:
               | I agree, it's so close that it's actually being used as
               | an interactive movie background with a big-ass 360-degree
               | screen for dynamic scene and lighting:
               | https://www.youtube.com/watch?v=Hjb-AqMD-a4
        
               | formerly_proven wrote:
               | IMHO it looks fake. The lighting is way off, it quite
               | literally looks like they're standing directly below a
               | big diffuser - which they are. The background looks like
               | a poster with one of these diffraction parallax effects.
               | 
               | It's very cool tech. But it doesn't look real.
        
               | jiggawatts wrote:
               | The camera angles in that demo were off axis.
               | 
               | Take a look at The Mandalorian series, almost all of the
               | outdoor scenes were shot using the video wall technology.
               | 
               | I've worked in computer graphics and I didn't realise the
               | sets were fake until after I finished the whole series.
        
         | Shivetya wrote:
         | I remember the progress from CGA to EGA; there was that odd
         | bridge of Tandy 16 color; and then, boom, VGA, which looked so
         | magical compared to what came before. Even when I did gray
         | scale VGA on my IBM PS/2 50z it all just felt like a big jump
         | had finally been taken.
         | 
         | Then down the road came 3dfx with Voodoo and that to me was the
         | next great leap forward. Each iteration has been leading to ray
         | tracing which is the next great leap.
         | 
         | Now screen tech just needs to become as affordable as the
         | cards that can drive it. The LG OLED we have is stunning, but
         | that is "just" 4K.
         | 
         | Just for fun, the story of the 3dfx Voodoo:
         | https://fabiensanglard.net/3dfx_sst1/index.html
        
       | ttul wrote:
       | Jensen Huang has a really baller stove.
        
         | throwaway287391 wrote:
         | You can see it in action in this video:
         | https://www.youtube.com/watch?v=So7TNRhIYJ8
        
           | flas9sd wrote:
           | As someone who, back in 2013, put failing green IGPs on
           | notebook PCBs into the oven for a reball reanimation, I feel
           | some irony seeing the chef himself at the old trick.
        
         | xbryanx wrote:
         | How many spatulas does one family need though?!
        
         | sliverdragon37 wrote:
         | His entire house is really sweet. I went there once for an
         | intern event, there's an nvidia logo on the bottom of the pool.
         | Crazy stuff.
        
         | trynewideas wrote:
         | I looked at the backsplash for way too long because at first
         | glance I thought it was a field of crucifixes. But no, it's
         | just a vineyard.
        
           | emmanueloga_ wrote:
           | backsplash! That's a word I did not know. Anyway, I couldn't
           | help noticing the backsplash too, and the kitchen enclosure;
           | it looks optimized to collect grease and make it hard to
           | clean :-/
        
         | ealexhudson wrote:
         | That stove is an Aga and in the centre of his Aga is one of
         | these RTX 30 GPUs. Keeps the whole house warm.
        
           | trumpeta wrote:
           | Then why does he need the leather jacket?
        
             | m3kw9 wrote:
             | To keep him looking cool
        
             | ChuckNorris89 wrote:
             | His leather jacket is a meme at this point. Be a shame to
             | waste it.
        
             | nomel wrote:
             | It's actually a full body oven mitt.
        
             | ralusek wrote:
             | To show off the specular highlights with RTX.
        
           | tpmx wrote:
           | The inventor of the continuously burning "AGA cooker" beast
           | was an interesting person:
           | 
           | https://en.wikipedia.org/wiki/Gustaf_Dal%C3%A9n
        
       | bitxbit wrote:
       | As someone who runs data models at home in addition to 3D
       | rendering, the 3090 is a must-buy for me. I imagine it will be
       | sold out within minutes and supply will be an issue for months.
        
       | fortran77 wrote:
       | The pricing seems very good. Our company writes a lot of CUDA
       | code, mostly for real-time audio processing. It's amazing how
       | much performance you can get with a desktop PC these days. These
       | really are Supercomputers on a card.
        
       | antpls wrote:
       | Not sure if it was mentioned so far, but the reference to photons
       | instead of electrons at the end of the presentation could point
       | to a future photonic GPU? https://www.anandtech.com/show/16010/hot-
       | chips-2020-live-blo...
        
         | zamadatix wrote:
         | I think it's more likely a reference to the fact that long-
         | distance high-speed communication is done via photons, not
         | electrons.
        
       | Ninjinka wrote:
       | The pricing is insane. $499 for a card that beats the 2080ti?
        
         | mkaic wrote:
         | And to think I was just about to buy a $400 2060 Super. I don't
         | think I'll be doing that anymore.
        
           | verst wrote:
           | About 4 months ago I picked up a 2060 Super for my ancient
           | gaming desktop while I am waiting for the 3080 or 3090 to
           | finally build a new gaming PC. Ideally I'd have a 4th gen
           | Ryzen equivalent of the 3900X.
           | 
           | The main reason my 2012 PC build has held up OK is PCIe 3.0
           | support, so for me PCIe 4.0 is a must. The only thing I ever
           | upgraded was the GPU and HDD. Went from 670 -> 980 -> 2060
           | Super. The i7-3770K (OC'd to about 4.3GHz) has held up OK.
           | The 16GB of DDR3-1866MHz is slow, however. Switching from a
           | 2TB 5200 RPM drive to a 2TB NVMe SSD (Samsung EVO 860), for
           | which I had to get a riser card since the ASRock Z77 Extreme4
           | doesn't have M.2, also made a huge difference. When I
           | upgraded to the RTX 2060 Super I ran out of PCIe 3.0 lanes,
           | and as a result my SSD is running slower, but that's OK.
           | 
           | Surprisingly, I can play the new Microsoft Flight Simulator
           | just fine in 3440x1440 resolution on High Settings.
           | Assassin's Creed Odyssey runs well in 1440p Ultra at ~60 FPS.
        
           | saberdancer wrote:
           | I almost bought a 2080 Ti. Now I have to wait :).
        
           | aligray wrote:
           | I bought one at the start of the year, it's always the way I
           | suppose.
        
             | ses1984 wrote:
             | You got to use it for most of the year.
        
               | arvinsim wrote:
               | 1 year of use is fair. I have a 5700 XT and I think I got
               | good value out of it.
        
               | wlesieutre wrote:
               | I just went and checked my RX 5700 receipt and I paid
               | $212 after tax last November. After selling the included
               | Borderlands 3 license because I'd already bought it, net
               | price under $200. GPU value of the century right there.
               | 
               | But as a lighting nerd, I do want the raytracing...
        
               | elipsey wrote:
               | >> $212 after tax
               | 
               | So they have doubled in price since then? wtf?
               | 
               | https://www.newegg.com/p/pl?N=100007709%20601341487%20601
               | 341...
        
           | easytiger wrote:
           | I recently got a 2070 super OC. It really struggles with,
           | say, COD:MW at near-4K res (3440x1440). Indeed the campaign
           | with RT on doesn't get more than ~50fps. Kinda disappointed
           | really. Get about 100fps in multiplayer with everything
           | turned off/low.
           | 
           | But the cost of a 2080ti was ridiculous especially
           | considering the open secret of its impending obsolescence.
           | 
           | AMD really need to up their marketing budget. From the
           | zeitgeist I've no idea where their lineup sits comparatively.
           | No wonder they only have 20% market share.
        
             | hellotomyrars wrote:
             | AMD still don't offer a compelling product for the high-
             | end, which Nvidia is taking full advantage of. AMD cards
             | are the most economical on the market and have their own
             | advantages but they're niche (Linux support is much better
             | for example).
             | 
             | AMD got on top of Intel by creating hardware that delivers.
             | Just a few years ago AMD CPUs were economical, but the
             | performance was abysmal, especially in single-threaded
             | applications by comparison. They didn't make a product for
             | the high-end.
             | 
             | I am eagerly awaiting what the next series of AMD cards are
             | going to be able to do. They're talking a big game for
             | sure. But Nvidia has a big software advantage as well as a
             | hardware advantage on AMD and that's likely to be a
             | sticking point for me personally on my next purchase.
             | Nvidia spends a lot of resources on working closely with
             | developers and providing them support they need to take
             | better advantage of the hardware with nvidia-specific
             | features. AMD doesn't seem to do the same, and has had much
             | higher profile issues with their drivers in my experience.
             | 
             | All that said, I hope AMD can provide a product to truly
             | compete at the high-end with Nvidia, to hopefully drive
             | prices down as GPU prices have gone up dramatically on the
             | high end.
        
         | corey_moncure wrote:
         | All of the performance claims are extremely hand-wavey. They
         | are using the deep learning and compute features to equivocate
         | about resolutions and framerates. 8K@60 isn't real, but with
         | DLSS activated it is.
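         | 
         | To put rough numbers on that (the internal render resolution
         | here is an assumption, purely for illustration):
         | 
         |     output_px   = 7680 * 4320   # "8K" frame
         |     internal_px = 2560 * 1440   # assumed DLSS input resolution
         |     print(output_px / internal_px)  # 9.0 - only ~1/9 of the
         |                                     # output pixels get shaded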
        
         | beagle3 wrote:
         | It was the 20xx series that had insane pricing (in a bad way).
         | nVidia had no competition and priced things at the highest
         | markup the market could take, segmenting it by ML, cloud,
         | mining, gaming etc.
         | 
         | Not sure what their competition is now in each vertical, but
         | apparently they believe that they need a lower price point.
        
         | m3kw9 wrote:
         | Maybe it beats it with RTX on.
        
         | zucker42 wrote:
         | Almost seems too good to be true, huh?
        
         | WrtCdEvrydy wrote:
         | That's pretty nice actually, can't wait for the 20XX to go on
         | deep discount.
        
         | tmpz22 wrote:
         | I'd wait for in-game benchmarks on high end monitors before I
         | believe that comparison from NVIDIA. Even if the specs are
         | better, games are still optimized for older-gen cards.
        
           | asutekku wrote:
           | No matter what, $499 is still a better deal than $1200 for a
           | somewhat similar card. You really don't need benchmarks for
           | that.
        
         | alkonaut wrote:
         | I'm going to assume that's because it has twice the raytracing
         | power, so it's with all the eye candy turned up (until I see
         | something that says otherwise).
        
         | highfrequency wrote:
         | Keep in mind that both the 3070 and the 3080 have less memory
           | than the 2080 Ti.
        
           | cma wrote:
           | But they also have PCI-e 4 with CPU bypass and onboard
           | decompression for streaming in data from SSD.
           | 
           | Probably a better trade-off for gaming and worse for ML
           | training.
        
             | jessermeyer wrote:
             | Source on cpu bypass and ssd decomp? That reads like ps5
             | tech.
        
               | [deleted]
        
               | cma wrote:
               | >Leveraging the advanced architecture of our new GeForce
               | RTX 30 Series graphics cards, we've created NVIDIA RTX
               | IO, a suite of technologies that enable rapid GPU-based
               | loading and game asset decompression
               | 
               | https://www.nvidia.com/en-us/geforce/news/rtx-io-gpu-
               | acceler...
               | 
               | It can't both bypass the CPU and have decompression
               | unless it is decompression on the GPU. I'm not sure if
               | it's dedicated decompression hardware or if it's using
               | the normal GPU compute.
        
               | wtallis wrote:
               | Read a bit further into that link:
               | 
               | > Specifically, NVIDIA RTX IO brings GPU-based lossless
               | decompression, allowing reads through DirectStorage to
               | remain compressed while being delivered to the GPU for
               | decompression.
               | 
               | Still not sure if it'll use fixed-function decompression
               | units on the GPU or if it's just compute shaders, but
               | it's decompressing on the GPU.
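               | 
               | Back-of-the-envelope for why that matters (drive speed
               | and compression ratio are just assumed, illustrative
               | numbers, not NVIDIA's):
               | 
               |     raw_read_gbps = 7.0   # fast PCIe 4.0 NVMe drive
               |     comp_ratio    = 2.0   # assumed lossless ratio on assets
               |     print(raw_read_gbps * comp_ratio)  # 14.0 GB/s of
               |     # decompressed data landing in VRAM, with the CPU
               |     # doing none of the inflating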
        
         | phaus wrote:
         | Allegedly beats the 2080 TI, not just the regular one.
        
           | Ninjinka wrote:
           | I've heard a few people say that, this graphic makes it look
           | about the same though: https://i.imgur.com/fbo1UIU.png
           | 
           | EDIT: I am an idiot. It does say it's faster, though the
           | chart makes it look pretty close.
        
             | trollied wrote:
             | It says "Faster than 2080TI" right underneath it...
        
               | redisman wrote:
               | Probably like faster at RT. I'll almost certainly go for
               | the 3070 - that's crazy value for not much more than I
               | paid for 2060S.
        
               | phaus wrote:
               | People are speculating that the 2x numbers come from RTX
               | and/or DLSS, but they also doubled the number of CUDA
               | cores, so it's possible it's going to be an actual raw
               | performance gain.
               | 
               | I think it's wise to be skeptical until independent
               | benchmarks are available but I would be surprised if this
               | didn't end up being the biggest performance increase in
               | Nvidia's history for a single generation, just like they
               | say it is.
        
             | paulmd wrote:
             | > I've heard a few people say that
             | 
             | probably because of, oh I don't know, _the text right next
             | to the 3070 dot saying "faster than 2080 Ti"_...
             | 
             | (and I'm really only speculating here)
             | 
             | It probably won't be like, night and day faster than a 2080
             | Ti of course, it's going to be the same _bracket_ judging
             | by the chart, but I'd expect it to usually edge out the
             | 2080 Ti by a couple percent based on the text there.
        
         | godelski wrote:
         | Everything was awesome except the 3090. $1500?!?!?!?! That's a
         | big jump from the 2080Ti pricing of $1200.
        
           | t3rabytes wrote:
           | Seems like the expectation on Reddit is that the 3090 is more
           | of a Titan replacement rather than a 2080Ti replacement and
           | that a Ti variant will come later with 20gb of mem.
        
             | saberdancer wrote:
             | From what I heard, nvidia wants to simplify naming, having
             | Ti and Super alongside the numbers makes it confusing for
             | the consumer (in their opinion), so they want to move away
             | from it. I wouldn't be surprised if the rumor is true and
             | that they will ditch the Ti/Super suffixes.
        
             | godelski wrote:
             | That is fair. It is a bigger jump from the Ti. Though most
             | of the rumor sites were suggesting $1400. The 20GB 3080
             | variant seems to be a really recent rumor, though.
             | 
             | What I did find interesting is that it does seem like the
             | $1499 on the slide could have been mistakenly shown. They
             | didn't verbally announce it, and other than that one
             | second they avoided talking about the price.
        
             | gambiting wrote:
             | It's not an expectation from Reddit, this was said
             | explicitly in the presentation - that they used to make the
             | Titan for people who wanted the best of the best, so here's
             | 3090 for that market.
        
               | t3rabytes wrote:
               | Ope, my fault -- I didn't have a chance to watch it, so I
               | was just going off of what I read on Reddit.
        
             | redisman wrote:
             | Jensen said it is the new Titan so it is actually a lot
             | cheaper than the 20-series Titan RTX at $2,500.
        
             | bitL wrote:
             | Rumor is that the RTX 3080 gets 20GB of RAM, and another
             | that the RTX 3090 gets 48GB, simply by swapping the memory
             | chips for twice-the-size ones.
        
           | fluffything wrote:
           | > Everything was awesome except the 3090. $1500?!?!?!?!
           | That's a big jump from the 2080Ti pricing of $1200.
           | 
           | Since it's a Titan model (for machine learning work, not
           | for gamers), and the last-gen "Titan RTX" costs $2,500
           | today, it's actually a big jump, but in the opposite
           | direction. Almost
           | half the price...
        
           | phaus wrote:
           | The 3080 Ti comes later. They said at the presentation that
           | the 3090 is the new Titan, so it's actually a massive price
           | drop.
           | 
           | The 3080 Ti will probably be out next year if they follow
           | typical patterns and it will have slightly better gaming
           | performance than the 3090 at a much lower price.
        
         | SketchySeaBeast wrote:
         | I don't know about last gen, but that was the same as the
         | generation before it - the 1070 beat the 980ti. It's the cycle
         | they do, and as a gamer it's why I'll never buy more than the
         | x70 card.
        
       | metalliqaz wrote:
       | The 20xx series was very obviously the "tech development" release
       | and was a terrible value. It was first gen ray tracing and thus
       | actually unequipped to fulfill its promises. The 30xx series
       | looks to be much better and is probably finally worth the upgrade
       | from 9xx and 10xx equipment.
        
       | pixxel wrote:
       | I find the naming conventions for computer parts utterly
       | confusing. I'm looking to step away from Apple and do my first
       | (AMD) PC build. Need to find a good overview to read through.
        
         | stu2b50 wrote:
         | 3080
         | 
         | 30 <- represents the generation; previously it was 20, e.g. RTX
         | 2080
         | 
         | 80 <- represents power within the generation, an 80 is near the
         | top
         | 
         | Higher generation means newer. Higher number means more
         | powerful within that generation. To compare across generations,
         | you need benchmarks.
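         | 
         | As a toy sketch (the parsing itself is just for illustration,
         | not anything official):
         | 
         |     # "RTX 3080" -> first two digits are the generation,
         |     # the remaining digits are the tier within it.
         |     def parse_model(name):
         |         digits = "".join(c for c in name if c.isdigit())
         |         return int(digits[:2]), int(digits[2:])
         | 
         |     print(parse_model("RTX 3080"))     # (30, 80)
         |     print(parse_model("RTX 2080 Ti"))  # (20, 80)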
        
         | SahAssar wrote:
         | IMO Apple's is even worse; most models just have semi-official
         | dates like "Mid 2015" and no proper consumer-facing model
         | numbers.
        
       | zapnuk wrote:
       | Very impressive. I predict that they'll be (more or less) sold
       | out until 2021 at the very least.
        
         | phaus wrote:
         | I'm glad the 3080 is launching first, so I can try getting one
         | of those, and if they are sold out I have time to think about
         | whether I want to try dropping more than twice as much on a
         | 3090 instead.
        
         | mkaic wrote:
         | I sure hope not. This launch is SUPER exciting--I really want
         | to get my hands on a 3070 once I can afford it. They better be
         | in stock by then!
        
       | lachlan-sneff wrote:
       | 3080 ($700) apparently has 238 teraflops of "tensor compute."
       | We're frighteningly close to a petaflop of compute for less than
        | a thousand USD.
        
         | arcanus wrote:
          | That is FP16 (with FP32 accumulate). It is not sufficient for
         | most HPC compute. Good for ML/AI, at least.
        
           | alkonaut wrote:
           | For all but the AI/DL crowd the move away from high precision
           | to high-power low precision is a bit sad.
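            | 
            | For a concrete feel of what FP16 gives up (plain numpy on
            | the CPU, nothing GPU-specific):
            | 
            |     import numpy as np
            | 
            |     # float16 has a 10-bit mantissa, so integers above 2048
            |     # can't all be represented; the +1 just rounds away.
            |     print(np.float16(2048) + np.float16(1))   # 2048.0
            |     print(np.float32(2048) + np.float32(1))   # 2049.0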
        
       | bogwog wrote:
       | 10,496 cores on the 3090. That's just insane.
        
       | shmerl wrote:
       | I'm waiting for RDNA 2 cards from AMD.
        
         | Nursie wrote:
         | I'm waiting for AMD to show their hand, but this is a very
         | strong first strike from nvidia.
        
           | shmerl wrote:
           | Sure, if the bold 2x performance increase claim is to be
           | believed. I'd wait for benchmarks to validate that.
           | 
            | Plus for me, Nvidia is simply DOA on Linux, due to them
            | refusing to upstream their driver and hindering Nouveau
            | from reclocking properly. So even if AMD won't outdo them,
            | I still won't
           | touch Nvidia.
        
             | BearOso wrote:
             | Ditto. But it's pretty much certain that even if the new
             | AMD cards don't match the top Nvidia performance, they're
             | still going to be competitive or better on
             | cost/performance. So you don't have to worry about missing
             | anything if you go AMD for Linux.
             | 
             | I'm just glad that we've finally gotten past that ceiling
             | at ~13 TFLOPs. Nvidia has been hobbling along for a few
             | years, so a breakthrough is nice.
        
               | Nursie wrote:
               | I think whether AMD can match price/performance is very
               | much up in the air right now.
               | 
               | _If_ nvidia's performance claims are real then that is
               | a massive challenge for AMD to meet.
        
             | uep wrote:
             | > Plus for me, Nvidia is simply DOA on Linux
             | 
             | Same. The best this announcement does for me is force AMD
             | to reduce the price of their GPUs. Which is appreciated,
             | because I am due to upgrade.
        
             | Nursie wrote:
             | Never had an issue using the proprietary driver on linux,
             | myself.
             | 
             | I know, I know, it would be nice to have a proper FOSS
             | driver, and better for integration, updates etc. But it
             | does work fine, IMHO.
        
               | shmerl wrote:
               | Depends, it works for the cases Nvidia cares about. But
               | what
               | you call integration means all other cases :) And there
               | it simply falls apart or takes decades to be fixed.
        
               | Nursie wrote:
               | In Linux, as long as I can get XFCE going, all the
               | screens at the right res and scaling levels, and the
               | CUDA drivers working, I'm generally happy :)
               | 
               | I'm probably pretty easy to please :)
        
       | Havoc wrote:
       | Guess I'm not keeping my 2070 super for long then
        
         | tobyhinloopen wrote:
         | To be fair most games run great on any RTX card. What are you
         | playing that would benefit from an upgrade?
        
       | tobyhinloopen wrote:
       | Well my 2080TI still runs Factorio without issues so I suppose I
       | don't need to upgrade
        
         | ralusek wrote:
         | Ya, but we'll know, and more importantly, you'll know. Go ahead
         | and upgrade.
        
       ___________________________________________________________________
       (page generated 2020-09-01 23:00 UTC)