fair point
so i asked this previously in another thread but i don't think i've gotten a response: does the Switch 2 have enough tensor and RT cores to actually do the things they're there for?
> I hate how so many people act like DLSS is "too new" of a technology for Nintendo to adopt even though it fits their philosophy perfectly.
>
> Also, DLSS will have been around for nearly 6 years by the time Switch 2 launches.

It's even more silly when you consider that Switch already has some games using FSR. If Switch was DLSS capable, it would've already been using it.
lmao yep. I told y'all years & months ago to expect nothing with this new switch and don't expect DLSS.
> Yes. There's not many, but put it this way - the Steam Deck is capable of RT in some scenarios, and the Switch 2 will crush it in RT acceleration. The tensor cores won't be enough to do 4K DLSS at the speed of the desktop GPUs, but the concurrent DLSS technique (and/or targeting 1440p instead) will fix that.
>
> I can see Nintendo making RT a big thing in their first-party games, since they like simple visuals that could leave room for it. Imagine a third Zelda that doesn't push the other graphical elements too hard, but switches from the current lighting system to ray-traced global illumination.

Also, people mention that 4K would take longer than a single frame's computation time, but... games running at 4K could also just run at 30fps if people really want it, so I don't think it's impossible to do.
A+ reply, you win the internet today.
idc if people have a more pessimistic or conservative outlook towards the next Switch. It's better not to wish for too much, to keep yourself and others in check. Not a big deal, I can move past it :p. I'll stay delusional 🤪
> We don't have any data to suggest it would take that long. DF's video used an underclocked laptop that was only a rough approximation, and the game they tested - Death Stranding - actually seems to have something wrong with its DLSS implementation that causes it to take longer. The 4K frame was taking over 5x longer to generate than a 1080p frame, which is very much not normal.
>
> I still expect the cost to be substantial, though, which is why I think concurrent DLSS would be the best move. It would increase latency, but running at 30fps instead of 60 does that too.

It was also a RAM-starved GPU, which I have to imagine at least somewhat contributed to the issues with DLSS in the video.
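The frame-time arithmetic behind "concurrent DLSS" is easy to sketch. This is a back-of-envelope model only - the millisecond costs below are hypothetical placeholders, not measured Switch 2 numbers:

```python
# Serial vs. concurrent DLSS frame-time arithmetic.
# All millisecond costs are hypothetical placeholders, not measured data.

def frame_budget_ms(fps: float) -> float:
    """Time available per frame at a given frame rate."""
    return 1000.0 / fps

def serial_frame_ms(render_ms: float, dlss_ms: float) -> float:
    """Render and upscale back-to-back within one frame."""
    return render_ms + dlss_ms

def concurrent_frame_ms(render_ms: float, dlss_ms: float) -> float:
    """Upscale frame N on the tensor cores while frame N+1 renders:
    throughput is set by the slower stage, at the cost of one extra
    frame of latency."""
    return max(render_ms, dlss_ms)

budget_60 = frame_budget_ms(60)            # ~16.7 ms
budget_30 = frame_budget_ms(30)            # ~33.3 ms
render_ms, dlss_4k_ms = 14.0, 6.0          # hypothetical costs

assert serial_frame_ms(render_ms, dlss_4k_ms) > budget_60      # misses 60fps serially
assert concurrent_frame_ms(render_ms, dlss_4k_ms) < budget_60  # fits when overlapped
assert serial_frame_ms(render_ms, dlss_4k_ms) < budget_30      # or just target 30fps
```

This is the thread's point in miniature: a 4K upscale that doesn't fit serially at 60fps can still fit either by overlapping it with the next frame's rendering or by dropping to 30fps.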
> Thats kinda insane? I feel everyone was expecting DLSS to be a big plus for the Switch 2 to punch above its weight. Hope Rich is wrong about that, but he did cite "multiple sources" there.

I'm gonna be honest, you wouldn't want that hardware in a game console. 1) DLSS runs on the Tensor Cores, and those are in the GPU - they sit right next to the shaders themselves if we go by the diagram Nvidia uses. They're effectively part of the ALUs per GPU core, or SM if you will.
I fully expect some games to be 4K, but probably not many. 1440p will probably be the standard resolution on the system, with 1080p for some heavy ports and the occasional 900p for really heavy ones.

Of course, this could all be wrong if there's some lighter-weight DLSS version made specifically for Switch 2 that allows 4K to be more common, maybe at a lower quality.
> I'm gonna be honest, you wouldn't want that hardware in a game console. 1) DLSS runs on the Tensor Cores, and those are in the GPU - they sit right next to the shaders themselves if we go by the diagram Nvidia uses. They're effectively part of the ALUs per GPU core, or SM if you will.
>
> The deep learning accelerator is outside of the GPU, elsewhere on the silicon. The amount of travel it would take to send data over to the DLA and then back to the GPU, back and forth, would be pretty wasteful.
>
> The DLA shows up in inference hardware built for automotive and robotics work. Anyone upset about this should just take a step back and do a little more research - look at what has a DLA, what has DLSS, and whether anything has both.
>
> Off the top of my head, zero Nvidia GeForce GPUs have a DLA, and they all run DLSS just fine. The only products with a DLA are ones built for robots and cars, a.k.a. Xavier and Tegra Orin. With that in mind - and the Switch 2 is not going to be a car - I think people can put this idea of a DLA to rest.
>
> Funnily enough, ALL the products with a DLA don't do DLSS.
>
> Makes you think….

Yea, I misunderstood what "Deep Learning Accelerators" were when I posted that - I took them for Tensor Cores. Anyone hoping that tech made specifically for automotive use would suddenly appear on a gaming console was deluding themselves.
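The "amount of travel" concern can be ballparked. A minimal sketch, assuming an illustrative buffer format and memory bandwidth - these are not actual Switch 2 figures:

```python
# Rough cost of shipping DLSS buffers from the GPU to an off-GPU DLA
# and back through shared memory. The bandwidth and buffer format are
# assumptions for illustration, not Switch 2 specs.

def transfer_ms(bytes_moved: float, bandwidth_gb_s: float) -> float:
    """Milliseconds to move a payload at a given bandwidth."""
    return bytes_moved / (bandwidth_gb_s * 1e9) * 1000

# One 4K RGBA16F color buffer (8 bytes/pixel). Real DLSS also consumes
# depth and motion vectors, so actual traffic would be higher.
color_4k = 3840 * 2160 * 8        # ~66 MB
round_trip = 2 * color_4k         # out to the DLA, result back

ms = transfer_ms(round_trip, 100) # assume ~100 GB/s shared LPDDR
assert 1.0 < ms < 2.0             # over a millisecond of a 16.7 ms frame,
                                  # before any synchronization overhead
```

Even under generous assumptions, the raw copy alone eats a meaningful slice of a 60fps frame budget, which is the post's point about keeping the work on the tensor cores inside the GPU.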
> True, the Switch 2 GPU may well have nearly double the available VRAM. I expect it to be similarly bandwidth-starved to the 2050M, though its RAM will be lower-latency, which might help. I don't really think the DLSS output will drop that low in docked mode, though. The rendering resolution, sure, but the cost of DLSS is fixed to the output resolution and doesn't increase with a heavier game. Very heavy UE5 stuff would probably still target 1440p, but in Ultra Performance Mode from 480p or so, while handheld mode goes from 360p to 1080p. And if concurrent DLSS is used, then as long as a 4K frame can be built in less than 16.6ms, there's no need to go lower AFAIK.

The reason we'll likely see some games at 1080p and 900p after DLSS is that hardware resources will always be tight on the system. Ports of heavy PS5/Xbox Series titles will likely have to run at a very low internal resolution just to get them running without completely gutting the visuals. Series S already has some shockingly low resolutions for some games, so "impossible ports" on Switch 2 using Ultra Performance at 1080p or 900p (360p internal for 1080p, 300p for 900p) seem basically guaranteed. Even if a game is running at a higher internal resolution, it may well not have the render-time budget to go beyond 1080p. For a port, the difference between upscaling 720p to 1080p or to 4K could mean the difference between a solid frame rate and poor performance.
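The internal resolutions quoted above (360p for a 1080p output, 300p for 900p) follow from the per-axis scale factor each DLSS quality mode applies. A small sketch, using the commonly documented factors (the Balanced figure is approximate):

```python
# Internal render resolution per DLSS quality mode, derived from the
# approximate per-axis scale factor each mode applies.
DLSS_SCALE = {
    "quality": 1 / 1.5,          # ~67% per axis
    "balanced": 0.58,            # ~58% per axis (approximate)
    "performance": 0.5,          # 50% per axis
    "ultra_performance": 1 / 3,  # ~33% per axis
}

def internal_res(out_w: int, out_h: int, mode: str) -> tuple[int, int]:
    """Internal resolution DLSS renders at for a given output and mode."""
    s = DLSS_SCALE[mode]
    return round(out_w * s), round(out_h * s)

# The figures quoted above: Ultra Performance from 1080p and 900p outputs.
assert internal_res(1920, 1080, "ultra_performance") == (640, 360)   # "360p"
assert internal_res(1600, 900, "ultra_performance") == (533, 300)    # "300p"
assert internal_res(3840, 2160, "performance") == (1920, 1080)
```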
I want to say yes, but just to be safe: it depends on the game/engine/mode.

DF's hypothetical tests were there to show that in docked mode (where the RT cores will matter for any RT mode in 3rd-party games), DLSS could work and scale.

As for handheld, the Steam Deck already gives a glimpse of what can be achieved there through FSR. Switch 2 has Nvidia's tools and hardware to exceed that, plus a more efficient CPU architecture and native games.

I mean, at the end of the day, beyond the "miracle" ports that came to the Switch, Nintendo's own games will be amazing :p.
> Yes. There's not many, but put it this way - the Steam Deck is capable of RT in some scenarios, and the Switch 2 will crush it in RT acceleration. The tensor cores won't be enough to do 4K DLSS at the speed of the desktop GPUs, but the concurrent DLSS technique (and/or targeting 1440p instead) will fix that.
>
> I can see Nintendo making RT a big thing in their first-party games, since they like simple visuals that could leave room for it. Imagine a third Zelda that doesn't push the other graphical elements too hard, but switches from the current lighting system to ray-traced global illumination.

I dunno, Microsoft and Sony are not doing this, and I have no doubt PS5/Series X's RT performance is beyond Switch 2.
> Interesting. But this does make me think: if this was so useless/pointless for a gaming console, why would someone like Rich, who you'd think would be familiar with all things video game graphics, even bother to mention the possibility?

He probably wasn't fully in the know as to what the DLA is, or got something crossed by accident - happens. But the DLA, all in all, isn't useful for DLSS.
> I dunno, Microsoft and Sony are not doing this, and I have no doubt PS5/Series X's RT performance is beyond Switch 2.

I mean... Xbox and PlayStation use AMD hardware, and AMD is so far behind with ray tracing that probably even a mobile Nvidia chip could beat them in this regard.
> Hmm, okay. So that means Rich is wrong, though? Because he definitely said that a "deep learning accelerator similar to the one that was in the T234" would "effectively make DLSS 'free', or at least a lot less computationally expensive". Or did he just mix stuff up?

You could probably make DLSS run on the DLA hardware, but that isn't necessary for DLSS to work, and it wouldn't make it any more "free" than running it asynchronously on the tensor cores.
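The "asynchronous but not free" point can be shown with a toy pipeline timeline: overlapping the upscale of frame N with the rendering of frame N+1 preserves throughput but adds a frame of latency - the tensor-core time is still spent, just hidden. The stage costs here are hypothetical:

```python
# Toy two-stage pipeline: shader cores render, tensor cores upscale.
# Upscaling frame N overlaps rendering frame N+1. Stage costs (ms)
# are hypothetical, for illustration only.

def pipeline_timeline(n_frames: int, render_ms: float, upscale_ms: float):
    """Return the completion time (ms) of each frame's upscaled output."""
    done = []
    render_free = 0.0    # when the shader cores are next free
    upscale_free = 0.0   # when the tensor cores are next free
    for _ in range(n_frames):
        render_end = render_free + render_ms
        render_free = render_end                       # next frame can start rendering
        upscale_start = max(render_end, upscale_free)  # wait for frame + tensor cores
        upscale_free = upscale_start + upscale_ms
        done.append(upscale_free)
    return done

done = pipeline_timeline(4, render_ms=14.0, upscale_ms=6.0)
# Per-frame latency is still render + upscale = 20 ms for the first frame,
# but steady-state throughput is one frame per max(14, 6) = 14 ms.
assert done[0] == 20.0
assert done[2] - done[1] == 14.0
```

Whether the upscale runs on tensor cores or a hypothetical DLA, the same timeline applies - the work is overlapped, not eliminated, which is why neither makes DLSS "free".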