
Dakhil

Member
Mar 26, 2019
4,459
Orange County, CA
Why do we keep getting Samsung/AMD and Switch 2 op-eds? Here's another one from today:

www.tomsguide.com

Nintendo Switch 2 could be a true powerhouse with this breakthrough

Nintendo Switch 2 could tap into the joint power of AMD and Samsung

Just to add to Onix555's and ILikeFeet's point, I think people are desperate for AMD to beat Nvidia when it comes to GPUs, like how AMD is beating Intel in terms of multi-core performance on CPUs. But so far, I think AMD is far from beating Nvidia when it comes to power efficiency and new innovative features (DLSS comes to mind).
 

z0m3le

Member
Oct 25, 2017
5,418
Just to add to Onix555's and ILikeFeet's point, I think people are desperate for AMD to beat Nvidia when it comes to GPUs, like how AMD is beating Intel in terms of multi-core performance on CPUs. But so far, I think AMD is far from beating Nvidia when it comes to power efficiency and new innovative features (DLSS comes to mind).
Yeah, there are a lot of unrealistic expectations about AMD. The Unreal Engine 5 demo ran at 1440p/30fps with dynamic resolution, and one of Epic's developers confirmed that their 9 TFLOPs RTX 2080-powered laptop ran the demo better.

Obviously the engine is a year and a half away, but PS5's 10.3 TFLOPs system is losing to a 9.4 TFLOPs Turing part in this unoptimized engine. RDNA2 doesn't seem to be a magic bullet against Nvidia's 2-year-old architecture, and this year Nvidia is launching their first 7nm product, which is supposed to eat through ray tracing without the issues Turing had.

If you tell any of this to the 'Nintendo is going back to AMD to make a powerful system' crowd, they either don't understand what was just said or they completely ignore it by saying that UE5 wasn't optimized to use PS5's ultra-fast SSD. In reality, the laptop it ran against didn't have any advantages; it just straight up beat the console, no problem. Maybe Jensen was on to something when he said that an RTX 2070 laptop GPU can match/beat PS5, since the Epic developer wasn't using DLSS when the 2080 ran the demo noticeably better.
 
Last edited:

Dakhil

Member
Mar 26, 2019
4,459
Orange County, CA
Yeah, there are a lot of unrealistic expectations about AMD. The Unreal Engine 5 demo ran at 1440p/30fps with dynamic resolution, and one of Epic's developers confirmed that their 9 TFLOPs RTX 2080-powered laptop ran the demo better.

Obviously the engine is a year and a half away, but PS5's 10.3 TFLOPs system is losing to a 9.4 TFLOPs Turing part in this unoptimized engine. RDNA2 doesn't seem to be a magic bullet against Nvidia's 2-year-old architecture, and this year Nvidia is launching their first 7nm product, which is supposed to eat through ray tracing without the issues Turing had.

If you tell any of this to the 'Nintendo is going back to AMD to make a powerful system' crowd, they either don't understand what was just said or they completely ignore it by saying that UE5 wasn't optimized to use PS5's ultra-fast SSD. In reality, the laptop it ran against didn't have any advantages; it just straight up beat the console, no problem. Maybe Jensen was on to something when he said that an RTX 2070 laptop GPU can match/beat PS5, since the Epic developer wasn't using DLSS when the 2080 ran the demo noticeably better.

And do correct me if I'm wrong, but I don't think AMD has shown anything remotely similar to DLSS 2.0 when it comes to the RDNA2 GPUs (I don't think RIS is remotely similar to DLSS 2.0).

I've also heard people say that Nintendo's going to go back to AMD (and Samsung) for the "Nintendo Switch 2" since Nvidia hasn't released a mobile, non-automotive successor to the Tegra X1 (and I think Mariko is basically the Tegra X2, but without the Denver2 cores and with the same memory bandwidth as the Tegra X1), and that Nvidia's primarily focusing on automotive SoCs for the Tegra lineup. Of course, there's no evidence that Nvidia's not going to focus on a mobile, non-automotive Tegra SoC that's suited for Nintendo's needs for the "Nintendo Switch 2".

With that being said, I'm impressed with what AMD is (allegedly) able to achieve with the mobile RDNA GPU in terms of performance. I can't wait to see AMD improve on future iterations of their mobile GPUs (if AMD continues to license their GPUs to other mobile manufacturers, like Samsung). And I'm curious to see if AMD's mobile GPUs can be as competitive as Nvidia's mobile GPUs in the future.
 

z0m3le

Member
Oct 25, 2017
5,418
And do correct me if I'm wrong, but I don't think AMD has shown anything remotely similar to DLSS 2.0 when it comes to the RDNA2 GPUs (I don't think RIS is remotely similar to DLSS 2.0).

I've also heard people say that Nintendo's going to go back to AMD (and Samsung) for the "Nintendo Switch 2" since Nvidia hasn't released a mobile, non-automotive successor to the Tegra X1 (and I think Mariko is basically the Tegra X2, but without the Denver2 cores and with the same memory bandwidth as the Tegra X1), and that Nvidia's primarily focusing on automotive SoCs for the Tegra lineup. Of course, there's no evidence that Nvidia's not going to focus on a mobile, non-automotive Tegra SoC that's suited for Nintendo's needs for the "Nintendo Switch 2".

With that being said, I'm impressed with what AMD is (allegedly) able to achieve with the mobile RDNA GPU in terms of performance. I can't wait to see AMD improve on future iterations of their mobile GPUs (if AMD continues to license their GPUs to other mobile manufacturers, like Samsung). And I'm curious to see if AMD's mobile GPUs can be as competitive as Nvidia's mobile GPUs in the future.
AMD invested a lot in their CPU business over the last decade; their GPU investment was comparatively small. Nvidia has never stopped investing large amounts of money in their architectures, and unlike Intel and AMD, Nvidia doesn't sit on their architectures for 3+ years; they are constantly moving to newer, better technology. They aren't lazy, and they have both the best graphics hardware engineers and graphics software engineers around. AMD has used open source projects to try to catch up, and they just tried to turn Radeon Rays into a closed software environment, but the backlash had them open the code up partially.

AMD stopped investing in ARM right away, too. They put out the 'Seattle' A57 cores for servers and started work on K12 (an ARM core), but dropped it in favor of focusing on Zen's launch. Maybe it was a great move, but ARM is always going to be more efficient than x86 and is going to catch up sooner or later. That's a bad thing for AMD, since their IP is really only Ryzen right now, as Radeon is an entire process node behind Nvidia when it comes to power:performance.

Getting back to Nintendo: them going with AMD would be a disaster. Nintendo will never invest what Sony or Microsoft do, so they would always be behind them. If they just have DLSS in the Switch successor, they will end up with an advantage in power:performance that will shrink the 250-watt to 25-watt design gap here, with 5nm being a large advantage too.

And yeah, sharpening software exists for both AMD and Nvidia; it's what the Nvidia Shield TV uses to enhance videos, IIRC. It's nice and can look great, and it can be applied on top of your games to achieve a sharper look, though it isn't perfect and can add some weird mistakes to the image. It also can't give more detail than what is there: a 540p rendered image has 540p's information with anything except DLSS, which can pull information from motion vectors and a 16K image to bring in 'ground truth' information that doesn't exist in the 540p render. The key example Digital Foundry gave was the 1080p native image not having the information for the hair around the ear of the character from Control, while the 540p-rendered DLSS 1080p image had it correctly displayed.
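To illustrate the informational limit being described, here's a minimal sketch of basic post-process sharpening (an unsharp mask; RIS/CAS-style filters are smarter about edges, but the principle is the same). This is an illustrative toy, not any vendor's actual implementation: it only re-weights pixels that already exist, so it cannot recover detail the original render never captured.

```python
import numpy as np

def unsharp_mask(img: np.ndarray, amount: float = 0.5) -> np.ndarray:
    """Sharpen a grayscale image with values in [0, 1]."""
    # 3x3 box blur with clamped edges.
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")
    blurred = sum(padded[dy:dy + h, dx:dx + w]
                  for dy in range(3) for dx in range(3)) / 9.0
    # The output is a weighted combination of existing pixels only:
    # edges get punchier, but no new information is created.
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)
```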
 
Last edited:

Mr. Pointy

Member
Oct 28, 2017
5,141
Is it worth it for Nintendo to go for an 18:9 (2:1) screen with a wider 1080p resolution? I guess the main issue would be that TVs are still 16:9.
 

wbloop

Member
Oct 26, 2017
2,287
Germany
Is it worth it for Nintendo to go for an 18:9 (2:1) screen with a wider 1080p resolution? I guess the main issue would be that TVs are still 16:9.
Yup. The fact that TVs are still 16:9, plus the Switch's main selling point of playing portably and on your TV with a comparable experience, rules out any chance of a wider Switch screen. Devs would also have to invest additional time optimizing their games for changing ratios on the fly.
 

ILikeFeet

DF Deet Master
Banned
Oct 25, 2017
61,987
That said, the 2080 is on par with the 2-years-older 1080 Ti without RT or tensor...
There's only so much you can do on the same node, after all. Nvidia didn't put their efforts into per-core performance, but rather into future-proofing Turing with DX12U support. If gaming Ampere was just a die-shrunk Turing, I honestly wouldn't be surprised.
 

z0m3le

Member
Oct 25, 2017
5,418
I think it's a good sign if 8+ TFLOPs of Turing matches PS5. If Switch's successor can get to 3 TFLOPs and use DLSS, it will be very close to that performance, like twice as close as Switch is to XB1 in FP32.
 

Zaimokuza

Member
May 14, 2020
979
I think it's a good sign if 8+ TFLOPs of Turing matches PS5. If Switch's successor can get to 3 TFLOPs and use DLSS, it will be very close to that performance, like twice as close as Switch is to XB1 in FP32.
What would the required GPU teraflops supposedly be, considering that PS5's objective is 4K@30fps while a portable Switch 2 is likely aiming for an upscaled 1080p@30fps?

If we put it like that, wouldn't the main issue be CPU performance, since Nvidia will have to rely on ARM's CPU cores? Can ARM CPUs rival AMD's Zen architecture?
 

z0m3le

Member
Oct 25, 2017
5,418
What would the required GPU teraflops supposedly be, considering that PS5's objective is 4K@30fps while a portable Switch 2 is likely aiming for an upscaled 1080p@30fps?

If we put it like that, wouldn't the main issue be CPU performance, since Nvidia will have to rely on ARM's CPU cores? Can ARM CPUs rival AMD's Zen architecture?
A76 cores already match the Zen architecture IIRC, but the clock speed is limited. I'll be frank, though: there shouldn't be a CPU issue with a Switch successor. A78 cores on 5nm should be clocked over 2GHz, and while they won't match PS5's CPU, you'll get close enough to deal with any sort of port. GPU offloading isn't going away either, and I think there is going to be less offloading on PS5/XSX because there isn't much those CPUs can't do, so why kill GPU performance to save the CPU? This is why the focus in speculation should stay on GPU performance, IMO; there is a limit to what you need a CPU for in games, and even exclusive PC games don't really use the CPU all that much.

Anyway, because ARM has a much smaller and simpler instruction set than x86, ARM is much more efficient; you don't need to match clocks to get good enough performance to bring any next-gen game to a Switch successor. I think it will end up at 60-80% of the performance, though. The A78 hasn't hit the market yet, so it's hard to narrow down the performance numbers here.
 

Deguello

Banned
Jan 14, 2019
269
And do correct me if I'm wrong, but I don't think AMD has shown anything remotely similar to DLSS 2.0 when it comes to the RDNA2 GPUs (I don't think RIS is remotely similar to DLSS 2.0).

I've also heard people say that Nintendo's going to go back to AMD (and Samsung) for the "Nintendo Switch 2" since Nvidia hasn't released a mobile, non-automotive successor to the Tegra X1 (and I think Mariko is basically the Tegra X2, but without the Denver2 cores and with the same memory bandwidth as the Tegra X1), and that Nvidia's primarily focusing on automotive SoCs for the Tegra lineup. Of course, there's no evidence that Nvidia's not going to focus on a mobile, non-automotive Tegra SoC that's suited for Nintendo's needs for the "Nintendo Switch 2".

With that being said, I'm impressed with what AMD is (allegedly) able to achieve with the mobile RDNA GPU in terms of performance. I can't wait to see AMD improve on future iterations of their mobile GPUs (if AMD continues to license their GPUs to other mobile manufacturers, like Samsung). And I'm curious to see if AMD's mobile GPUs can be as competitive as Nvidia's mobile GPUs in the future.

Some things to consider:

1. Supposedly, Nintendo has wanted to partner with Nvidia since 2009, almost using the Tegra line of mobile SoCs for the 3DS and only changing course when the then-current SoC, Tegra 2, just couldn't keep its thermals down. In some alternate timeline, the 3DS was delayed a year and used the Tegra 3.

2. Nintendo likes backwards compatibility, at least for one previous gen, to add value to a system launch; so much so that they potentially hamstrung the Wii U's cost-effectiveness just to include special parts for backwards compatibility, especially with the super-popular Wii. The Switch is super popular, so I imagine they would like to ensure BC for Switch 2 as well. Switching to AMD would jeopardize that immensely.

3. Nvidia CEO Jensen Huang has apparently said they would like this partnership to last "decades." Nintendo is Nvidia's ticket into the console marketplace, where they are both succeeding very well.

4. If Nvidia is making the next chip specifically for Nintendo, which is very likely, there's a better-than-even chance it will be kept under wraps for now, so it won't be anything they announce for the general market, though the things they do announce can hold clues to the capabilities of that chip. Nvidia will want to make this chip too, as sales of the Switch (and, for that matter, the Wii U) tower over sales of Nvidia's Shield tablets and microconsoles, so even a complete market collapse on the scale of the Wii U would still mean on the order of ~15 million chips.
 
Last edited:

T002 Tyrant

Member
Nov 8, 2018
9,103
All I want from a Pro model in 2021 would be

- BOTW running at 1080p, and 60fps on the Labo VR (stable all round)
- DOOM and Witcher 3 running slightly better resolutions and more stable 30fps
- The voice chat app running on Switch
- Bluetooth audio support, such as on the go

All I want from a 2023/2024 Switch 2 is

- Nintendo games to run on DLSS 2.0 at 4k
- 60fps whenever possible, but I won't get mad if it's graphically pushing the hardware
- "Impossible ports" using DLSS for better resolutions at a stableish 30fps including features such as UE5's Nanites and Lumen and some kind of Ray Tracing implementation would be nice but not expecting it to be at super high resolution even with DLSS 2.0
- Larger storage but I'm not expecting it
 

Zaimokuza

Member
May 14, 2020
979
All I want from a Pro model in 2021 would be

- BOTW running at 1080p, and 60fps on the Labo VR (stable all round)
- DOOM and Witcher 3 running slightly better resolutions and more stable 30fps
- The voice chat app running on Switch
- Bluetooth audio support, such as on the go

All I want from a 2023/2024 Switch 2 is

- Nintendo games to run on DLSS 2.0 at 4k
- 60fps whenever possible, but I won't get mad if it's graphically pushing the hardware
- "Impossible ports" using DLSS for better resolutions at a stableish 30fps including features such as UE5's Nanites and Lumen and some kind of Ray Tracing implementation would be nice but not expecting it to be at super high resolution even with DLSS 2.0
- Larger storage but I'm not expecting it

I don't understand why people in this thread keep writing that Nintendo won't upgrade their storage.
Their digital sales have been skyrocketing in recent years, they bring in more money than physical sales and are only limited by internet access and local storage: why wouldn't they try to solve one of the bottlenecks?

www.nintendolife.com

Nintendo Earned Almost $2 Billion In Digital Sales This Year, A 72% Yearly Increase

And mobile earnings have jumped, too
 

Zedark

Member
Oct 25, 2017
14,719
The Netherlands
What would the required GPU teraflops supposedly be, considering that PS5's objective is 4K@30fps while a portable Switch 2 is likely aiming for an upscaled 1080p@30fps?

If we put it like that, wouldn't the main issue be CPU performance, since Nvidia will have to rely on ARM's CPU cores? Can ARM CPUs rival AMD's Zen architecture?
If we accept the suggestion that an 8 TF NVIDIA GPU equals a 10.3 TF AMD (PS5) GPU, then the speculated 3 TF GPU would be about 40% of the PS5 in terms of GPU. A drop from 4K to 1080p is a 4x resolution drop, so that gap should be easily bridged by the 3 TF NVIDIA GPU in the Switch 2 (of course, GPU load doesn't scale 1-to-1 with resolution, but a 4x drop means a very big difference in rendering cost). Let me put that up front: I think you should be able to create a 1080p image of similar quality to what the PS5 produces.

However, the discussion doesn't stop there. First off, if we want DLSS 2.0 to construct a 4K image, a portion of the rendering frame budget needs to be allocated to it, since it can't be done in parallel with the creation of the image itself. At 30 fps, though, DLSS takes a relatively smaller portion, because the time DLSS needs is independent of the frame rate. Check out this video at the 11:58 mark: you can see that DLSS takes only 2.5 ms on an RTX 2060 Super. Let's be conservative and say that the Switch 2 GPU has enough tensor cores to do the process in 4 ms. That leaves 29 ms of frame time for the rest of the process, but instead of rendering a 4K image, it only needs to render a 1080p image in that time. The video suggests that cutting native rendering from 4K to 1080p reduces the workload by about 2.7x (16 ms -> 6 ms). So, you have a workload 2.7x smaller, to be done in 88% of the time, which means your GPU capability needs to be about 42% of the PS5's.

So, a 3 TF NVIDIA GPU seems almost good enough to produce a native 1080p image upscaled to 4K via DLSS 2.0 using the same settings as the PS5. Drop those settings a notch, and the Switch 2 is able to run native gen 9 games really well. Of course, plenty of games on PS5 are probably going to opt for upscaling methods as well instead of rendering at native 4K, so things will need to be pared back a little further on Switch 2. But it doesn't seem like a very big problem.
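As a quick sketch of that frame-budget arithmetic (every number below is an assumption carried over from the post, not a measurement):

```python
frame_budget_ms = 1000 / 30          # 30 fps target: ~33.3 ms
dlss_cost_ms = 4.0                   # assumed DLSS cost on a Switch 2
render_budget_ms = frame_budget_ms - dlss_cost_ms     # ~29.3 ms

workload_vs_4k = 1 / 2.7             # 1080p is ~2.7x cheaper than 4K
time_fraction = render_budget_ms / frame_budget_ms    # ~88% of a frame

# Fraction of PS5-class GPU throughput needed to fit the budget:
print(f"~{workload_vs_4k / time_fraction:.0%} of the PS5 GPU")  # ~42%
```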

Secondly, we don't quite know how ray tracing scales with resolution. Pure ray tracing (the path-traced Minecraft demo uses this) is dependent on the screen resolution, and thus scales 1-to-1 with it. But the implementations in next-gen engines will be hybrid ray tracing solutions, and we don't know if those scale like that as well. So, ray tracing capability is another question that's up in the air.

As for the CPU, NVIDIA is likely to use the A78 since they have the license for it, and it's 2020 for a 2023 machine, which should be feasible for sure. The A78 should produce a 6x or better improvement in CPU, which should close the gap with the PS5/XSX compared to how far away the Switch was from PS4/XB1. As a result, the CPU doesn't seem to be a huge problem per se. With a little luck, the system can come decently close to PS5/XSX (and note that PS5/XSX now use top-of-the-line CPUs rather than Jaguar cores, so PC requirements are likely to be a bit lower in relative terms, leaving some headroom in the CPU department as well).
 

T002 Tyrant

Member
Nov 8, 2018
9,103
I don't understand why people in this thread keep writing that Nintendo won't upgrade their storage.
Their digital sales have been skyrocketing in recent years, they bring in more money than physical sales and are only limited by internet access and local storage: why wouldn't they try to solve one of the bottlenecks?

www.nintendolife.com

Nintendo Earned Almost $2 Billion In Digital Sales This Year, A 72% Yearly Increase

And mobile earnings have jumped, too

They haven't upgraded since the Wii U and seem to have the philosophy that the user should be responsible for how much or little the capacity is. I'd be super surprised if they went over 32GB on a standard model.
 

ILikeFeet

DF Deet Master
Banned
Oct 25, 2017
61,987
They haven't upgraded since the Wii U and seem to have the philosophy that the user should be responsible for how much or little the capacity is. I'd be super surprised if they went over 32GB on a standard model.
They don't have that luxury, with game dev paradigms changing thanks to faster SSDs becoming the de facto storage. The problem is they don't have an expandable storage solution that can keep up without someone getting fucked on costs.
 

z0m3le

Member
Oct 25, 2017
5,418
They don't have that luxury, with game dev paradigms changing thanks to faster SSDs becoming the de facto storage. The problem is they don't have an expandable storage solution that can keep up without someone getting fucked on costs.
So, just like PS5 and XSX? There doesn't need to be a change. Nintendo has supported SD cards since pretty much the moment they were introduced; they will likely keep going and just tell the end user that AAA game X needs to be installed on internal memory (just like Sony is planning to do). There will be plenty of games that run just fine off an SD card, though; I don't expect Super Mario Party 2 to require 2GB/s data streaming.

And because the solution is to install to internal memory, I do believe they will go with at least a 256GB UFS 3.1 solution. However, I'm still thinking they go with a 512GB version/SKU, something a bit more in line with the next-gen consoles, especially since I don't expect mandatory installs for physical media again.
 

Deguello

Banned
Jan 14, 2019
269
They haven't upgraded since the Wii U and seem to have the philosophy that the user should be responsible for how much or little the capacity is. I'd be super surprised if they went over 32GB on a standard model.

If Nintendo upgrades to UFS, as is likely due to the prevalence of UFS in flagship smartphones now, they may not be able to find 32GB UFS storage plentiful enough to be cost-effective. Even the lowest-tier Samsung Galaxy phone has 128GB today. To find a flagship phone that even has 32GB as an option, one has to go back to the Samsung Galaxy S7 or the iPhone 7.

You are correct that it's not really a big deal, and power users will find the base storage lacking no matter what, but Nintendo isn't going to spend more money on a lower-specced part just because they can.
 

Thraktor

Member
Oct 25, 2017
571
What would the required GPU teraflops supposedly be, considering that PS5's objective is 4K@30fps while a portable Switch 2 is likely aiming for an upscaled 1080p@30fps?

If we put it like that, wouldn't the main issue be CPU performance, since Nvidia will have to rely on ARM's CPU cores? Can ARM CPUs rival AMD's Zen architecture?

Keep in mind that, although PS5 and XBSX are advertising 4K, in the real world I'd expect many if not all games to run at less than native 4K as the generation goes on, as the difference in perceived sharpness from 1440p/1600p to 4K isn't all that big for most people, whereas the improvements in rendering techniques from the 50%-100% more processing power per pixel should be a lot more noticeable. So, if you were hypothetically trying to build a Switch 2 which was specifically designed to get PS5/XBSX ports (something I absolutely don't think Nintendo will do), you shouldn't really be basing your specs on the assumption that PS5/XBSX are going to be running games at full 4K.

You're right that CPU would likely be the main limiting factor for ports, however the issue isn't about absolute performance, but the really tight power consumption limits that a hybrid device has to stick to if it wants reasonable battery life in handheld mode. To put it in perspective, the CPU in the original Switch consumed about 1.8W of power. Both PS5 and XBSX are using effectively a Ryzen 3700X CPU, which has a TDP of 65W. At a stretch, Nintendo might push the CPU power budget up to as high as 3W in a Switch successor, but that's still a difference in power budget of over 20x you'd have to overcome.

Even if ARM cores are more power-efficient than AMD's Ryzen cores, which they are, and even if we were talking about a 2023 device on 5nm or even 4nm, a 20 fold increase in power efficiency simply isn't on the cards. Absolute best case scenario is perhaps around a third of the PS5/XBSX's CPU performance, which would be extremely impressive with such a low power draw, but still far enough off to make a lot of ports very challenging.
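As a sanity check on that gap, the arithmetic spelled out (the 3W successor budget is the stretch figure assumed above):

```python
switch_cpu_w = 1.8       # quoted CPU power draw of the original Switch
successor_cpu_w = 3.0    # assumed upper bound for a Switch successor
console_cpu_w = 65.0     # Ryzen 3700X TDP, the PS5/XBSX proxy used above

print(f"{console_cpu_w / switch_cpu_w:.0f}x")      # ~36x vs. Switch today
print(f"{console_cpu_w / successor_cpu_w:.1f}x")   # ~21.7x, the "over 20x" gap
```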
 

z0m3le

Member
Oct 25, 2017
5,418
Keep in mind that, although PS5 and XBSX are advertising 4K, in the real world I'd expect many if not all games to run at less than native 4K as the generation goes on, as the difference in perceived sharpness from 1440p/1600p to 4K isn't all that big for most people, whereas the improvements in rendering techniques from the 50%-100% more processing power per pixel should be a lot more noticeable. So, if you were hypothetically trying to build a Switch 2 which was specifically designed to get PS5/XBSX ports (something I absolutely don't think Nintendo will do), you shouldn't really be basing your specs on the assumption that PS5/XBSX are going to be running games at full 4K.

You're right that CPU would likely be the main limiting factor for ports, however the issue isn't about absolute performance, but the really tight power consumption limits that a hybrid device has to stick to if it wants reasonable battery life in handheld mode. To put it in perspective, the CPU in the original Switch consumed about 1.8W of power. Both PS5 and XBSX are using effectively a Ryzen 3700X CPU, which has a TDP of 65W. At a stretch, Nintendo might push the CPU power budget up to as high as 3W in a Switch successor, but that's still a difference in power budget of over 20x you'd have to overcome.

Even if ARM cores are more power-efficient than AMD's Ryzen cores, which they are, and even if we were talking about a 2023 device on 5nm or even 4nm, a 20 fold increase in power efficiency simply isn't on the cards. Absolute best case scenario is perhaps around a third of the PS5/XBSX's CPU performance, which would be extremely impressive with such a low power draw, but still far enough off to make a lot of ports very challenging.
The XSX/PS5 CPU is based on the Ryzen 4900U, I thought? A 35-watt, 8-core, 16-thread part with an 8 CU GPU at 1.75GHz. You're actually likely looking at 15 to 20 watts at 3.5GHz for just the CPU in PS5.
 

Mr. Pointy

Member
Oct 28, 2017
5,141
I still think Nintendo will go for 128GB or 256GB internal memory. Whatever the largest cart size is at launch x2. It'll probably support whatever the fastest SD card standard is at the time and it'll shuffle cached data back and forth if necessary.
 

Zedark

Member
Oct 25, 2017
14,719
The Netherlands
Keep in mind that, although PS5 and XBSX are advertising 4K, in the real world I'd expect many if not all games to run at less than native 4K as the generation goes on, as the difference in perceived sharpness from 1440p/1600p to 4K isn't all that big for most people, whereas the improvements in rendering techniques from the 50%-100% more processing power per pixel should be a lot more noticeable. So, if you were hypothetically trying to build a Switch 2 which was specifically designed to get PS5/XBSX ports (something I absolutely don't think Nintendo will do), you shouldn't really be basing your specs on the assumption that PS5/XBSX are going to be running games at full 4K.

You're right that CPU would likely be the main limiting factor for ports, however the issue isn't about absolute performance, but the really tight power consumption limits that a hybrid device has to stick to if it wants reasonable battery life in handheld mode. To put it in perspective, the CPU in the original Switch consumed about 1.8W of power. Both PS5 and XBSX are using effectively a Ryzen 3700X CPU, which has a TDP of 65W. At a stretch, Nintendo might push the CPU power budget up to as high as 3W in a Switch successor, but that's still a difference in power budget of over 20x you'd have to overcome.

Even if ARM cores are more power-efficient than AMD's Ryzen cores, which they are, and even if we were talking about a 2023 device on 5nm or even 4nm, a 20 fold increase in power efficiency simply isn't on the cards. Absolute best case scenario is perhaps around a third of the PS5/XBSX's CPU performance, which would be extremely impressive with such a low power draw, but still far enough off to make a lot of ports very challenging.
Do you happen to have some indicators for how to compare different CPUs with one another? I've been trying to find a good one, and happened upon Geekbench. That website suggests that the A76 single core performance at 2.00 GHz is 2/3 that of the Ryzen 3700x single core performance at 3.6 GHz. Those numbers are wildly off from what you suggested, so I'm doubtful this benchmark gives a good indication (or perhaps I'm misreading the provided information).
 

NineTailSage

Member
Jan 26, 2020
1,449
Hidden Leaf
I still think Nintendo will go for 128GB or 256GB internal memory. Whatever the largest cart size is at launch x2. It'll probably support whatever the fastest SD card standard is at the time and it'll shuffle cached data back and forth if necessary.

I don't think the cart size will dictate the internal memory size!
As most have mentioned, whatever memory size is popular in high end phones would be cheaper for Nintendo to get as well...
 

z0m3le

Member
Oct 25, 2017
5,418
Do you happen to have some indicators for how to compare different CPUs with one another? I've been trying to find a good one, and happened upon Geekbench. That website suggests that the A76 single core performance at 2.00 GHz is 2/3 that of the Ryzen 3700x single core performance at 3.6 GHz. Those numbers are wildly off from what you suggested, so I'm doubtful this benchmark gives a good indication (or perhaps I'm misreading the provided information).
Your information is correct for that benchmark. It's a general performance indicator, and definitely better than just guessing some round number based on nothing but power draw or expectations. However, it doesn't tell you how that CPU performs in a game; it should be somewhere along those lines, but since different benchmarks can hit different bottlenecks, there is no great way to do it.

However, the A78 should have much higher performance per clock than the A76, and I'd suggest over 2GHz on 5nm is likely going to happen, since the A78 is designed for both 7nm and 5nm, and I can't imagine it being designed to run at ultra-low clocks, where something like a successor to the A55 would fit better.

[Images: Cortex-A78/A77 performance comparison charts]

Just to give an idea of the expected performance gain of the A78: the A77 is 20% faster than the A76 in some benchmarks, and we can see from the first chart that the A78 is expected to make a similar jump. So if you take that 2.0GHz A76 single-core test and scale it to the expected results for the A78, you'll find that a 2GHz A78 core would get very close to the Ryzen 3700X's 3.6GHz single-core benchmark of 1254, somewhere over 1200. I'd expect the Ryzen 4000 series in PS5, so whatever IPC increases the new Ryzen chip finds will be there, but PS5 is also a 3.5GHz chip, and I expect over 2GHz for the A78 in a Switch.
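A rough version of that extrapolation, using only numbers quoted in this thread (the 2/3 ratio is from the earlier Geekbench comparison; the two 20% generational jumps are this post's assumption):

```python
ryzen_3700x_1c = 1254              # quoted single-core score at 3.6 GHz
a76_2ghz = ryzen_3700x_1c * 2 / 3  # ~836, per the earlier Geekbench post

a77_2ghz = a76_2ghz * 1.20         # A77: ~20% over A76 (quoted)
a78_2ghz = a77_2ghz * 1.20         # assume a similar jump for the A78

print(round(a78_2ghz))             # ~1204, i.e. "somewhere over 1200"
```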

This is again not an ideal comparison, and 8 cores @ 2GHz on 5nm is probably going to draw between 2 and 3 watts, but until the A78 comes to market we can't say that for sure. For reference, an A77-powered Snapdragon 865, with 4 A77 cores over 2.4GHz (1 over 2.8GHz) and 4 A55 cores plus an Adreno 650 (a 1.25 TFLOPs FP32 GPU), draws only 5 watts on 7nm.

I actually think one thing Nintendo might do is clock 2 of the A78 cores higher as well, something like 2.4GHz on 2 cores and 2GHz on the other 6. If they use A55 cores, I'd suspect the OS cores might be clocked differently than the ones reserved for devs, but considering 8 threads is probably enough for next generation, they might not need to.

I think the SoC power budget will go up by virtue of the screen being much more energy-efficient in a Switch successor, much like the screen in the newer Switch models. The launch model drew 7.1 to 9 watts depending on how bright you had the screen; expect something better than that, leaving room for Nintendo to increase the SoC budget from ~5 watts on Switch to ~6.5 watts on the successor.
 

Simba1

Member
Dec 5, 2017
5,391
I hope Pikmin 4 is a Switch 2 launch game. I want to see those 4K raytraced fruit.

Hardly; it's possible that we will have Pikmin 4 as soon as next year, so Switch 2 could have Pikmin 5.


All I want from a Pro model in 2021 would be

- BOTW running at 1080p, and 60fps on the Labo VR (stable all round)
- DOOM and Witcher 3 running slightly better resolutions and more stable 30fps
- The voice chat app running on Switch
- Bluetooth audio support, such as on the go

BotW is a 30 FPS game; there's no way it could run at 60 FPS on a Switch Pro or even on Switch 2, because Nintendo would need to redo the game for 60 FPS.


I don't understand why people in this thread keep writing that Nintendo won't upgrade their storage.
Their digital sales have been skyrocketing in recent years, they bring in more money than physical sales and are only limited by internet access and local storage: why wouldn't they try to solve one of the bottlenecks?

www.nintendolife.com

Nintendo Earned Almost $2 Billion In Digital Sales This Year, A 72% Yearly Increase

And mobile earnings have jumped, too

Actually, people were arguing about how much Nintendo will upgrade storage, not whether they will upgrade it at all.
Yeah, digital sales are higher every year, but that doesn't mean Nintendo needs big internal memory if consumers again have the option to expand the memory themselves, like they do on the current Switch.

My bet is 128GB internal memory for Switch 2.
 
Last edited:

Simba1

Member
Dec 5, 2017
5,391
Keep in mind that, although PS5 and XBSX are advertising 4K, in the real world I'd expect many if not all games to run at less than native 4K as the generation goes on, as the difference in perceived sharpness from 1440p/1600p to 4K isn't all that big for most people, whereas the improvements in rendering techniques from the 50%-100% more processing power per pixel should be a lot more noticeable. So, if you were hypothetically trying to build a Switch 2 which was specifically designed to get PS5/XBSX ports (something I absolutely don't think Nintendo will do), you shouldn't really be basing your specs on the assumption that PS5/XBSX are going to be running games at full 4K.

You're right that CPU would likely be the main limiting factor for ports, however the issue isn't about absolute performance, but the really tight power consumption limits that a hybrid device has to stick to if it wants reasonable battery life in handheld mode. To put it in perspective, the CPU in the original Switch consumed about 1.8W of power. Both PS5 and XBSX are using effectively a Ryzen 3700X CPU, which has a TDP of 65W. At a stretch, Nintendo might push the CPU power budget up to as high as 3W in a Switch successor, but that's still a difference in power budget of over 20x you'd have to overcome.

Even if ARM cores are more power-efficient than AMD's Ryzen cores, which they are, and even if we were talking about a 2023 device on 5nm or even 4nm, a 20 fold increase in power efficiency simply isn't on the cards. Absolute best case scenario is perhaps around a third of the PS5/XBSX's CPU performance, which would be extremely impressive with such a low power draw, but still far enough off to make a lot of ports very challenging.

As always, thanks for the insight. I also think that Nintendo will not go with high clocks for Switch 2's CPU; I think they will go with something like 1.5-2GHz.

I think that Switch 2 will be closer in power to PS5/XBxS than the current Switch is to XB1/PS4, but even with DLSS I don't see Switch reaching even half of PS5's power (maybe around 1/3).
 

Dekuman

Member
Oct 27, 2017
19,046
Keep in mind that, although PS5 and XBSX are advertising 4K, in the real world I'd expect many if not all games to run at less than native 4K as the generation goes on, as the difference in perceived sharpness from 1440p/1600p to 4K isn't all that big for most people, whereas the improvements in rendering techniques from the 50%-100% more processing power per pixel should be a lot more noticeable. So, if you were hypothetically trying to build a Switch 2 which was specifically designed to get PS5/XBSX ports (something I absolutely don't think Nintendo will do), you shouldn't really be basing your specs on the assumption that PS5/XBSX are going to be running games at full 4K.

You're right that CPU would likely be the main limiting factor for ports, however the issue isn't about absolute performance, but the really tight power consumption limits that a hybrid device has to stick to if it wants reasonable battery life in handheld mode. To put it in perspective, the CPU in the original Switch consumed about 1.8W of power. Both PS5 and XBSX are using effectively a Ryzen 3700X CPU, which has a TDP of 65W. At a stretch, Nintendo might push the CPU power budget up to as high as 3W in a Switch successor, but that's still a difference in power budget of over 20x you'd have to overcome.

Even if ARM cores are more power-efficient than AMD's Ryzen cores, which they are, and even if we were talking about a 2023 device on 5nm or even 4nm, a 20 fold increase in power efficiency simply isn't on the cards. Absolute best case scenario is perhaps around a third of the PS5/XBSX's CPU performance, which would be extremely impressive with such a low power draw, but still far enough off to make a lot of ports very challenging.

Would the situation be similar to the current Switch vs PS4/XB1? How does the current CPU gap compare to your prospective gap?

Otherwise, it seems like Nintendo will be in a tough spot next gen in terms of securing multiplats again?
 

T002 Tyrant

Member
Nov 8, 2018
9,103
BotW is a 30 FPS game; there's no way it could run at 60 FPS on a Switch Pro or even on Switch 2, because Nintendo would need to redo the game for 60 FPS.

For the Labo VR version? One where better clocks than the OG Switch's could run Witcher 3 at almost a stable 60fps? With a higher-clocked or better CPU, alongside higher GPU clocks and possibly more RAM, I think it'd handle Labo VR just fine!?

I wasn't expecting the full docked version to be rocking 60fps. Unless you rendered it at a lower resolution and used DLSS 2.0 to hit the higher resolution, but DLSS 2.0 would require an entirely new GPU.
 

Zedark

Member
Oct 25, 2017
14,719
The Netherlands
Your information is correct for that benchmark. It's a general performance indicator, and definitely better than just guessing some round number based on nothing but power draw or expectations. However, it doesn't tell you how that CPU performs in a game; it should be somewhere along those lines, but since different benchmarks can hit different bottlenecks, there is no great way to do it.

However, the A78 should have much higher performance per clock than the A76, and I'd suggest over 2GHz on 5nm is likely going to happen, since the A78 is designed for both 7nm and 5nm, and I can't imagine it being designed to run at ultra-low clocks, where something like a successor to the A55 would fit better.

[Images: Cortex-A78/A77 performance comparison charts]

Just to give an idea of the expected performance gain of the A78: the A77 is 20% faster than the A76 in some benchmarks, and we can see from the first chart that the A78 is expected to make a similar jump. So if you take that 2.0GHz A76 single-core test and scale it to the expected results for the A78, you'll find that a 2GHz A78 core would get very close to the Ryzen 3700X's 3.6GHz single-core benchmark of 1254, somewhere over 1200. I'd expect the Ryzen 4000 series in PS5, so whatever IPC increases the new Ryzen chip finds will be there, but PS5 is also a 3.5GHz chip, and I expect over 2GHz for the A78 in a Switch.

This is again not an ideal comparison, and 8 cores @ 2GHz on 5nm is probably going to draw between 2 and 3 watts, but until the A78 comes to market we can't say that for sure. For reference, an A77-powered Snapdragon 865, with 4 A77 cores over 2.4GHz (1 over 2.8GHz) and 4 A55 cores plus an Adreno 650 (a 1.25 TFLOPs FP32 GPU), draws only 5 watts on 7nm.

I actually think one thing Nintendo might do is clock 2 of the A78 cores higher as well, something like 2.4GHz on 2 cores and 2GHz on the other 6. If they use A55 cores, I'd suspect the OS cores might be clocked differently than the ones reserved for devs, but considering 8 threads is probably enough for next generation, they might not need to.

I think the SoC power budget will go up by virtue of the screen being much more energy-efficient in a Switch successor, much like the screen in the newer Switch models. The launch model drew 7.1 to 9 watts depending on how bright you had the screen; expect something better than that, leaving room for Nintendo to increase the SoC budget from ~5 watts on Switch to ~6.5 watts on the successor.
Thanks, that makes sense. For the Ryzen 4900U you mentioned earlier, the benchmark is 1225 for a 2 GHz clock, so pushing that up to 3.5 GHz would suggest a 75% performance advantage per core over the A78, assuming a 2 GHz A78 is used. Conversely, if Switch 2 has 8 A78 cores at 2 GHz, then it would have 57% of the CPU performance that the PS5 has. Again, this assumes that this specific benchmark can be extrapolated to general performance (and we don't know how well it generalises!).
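Spelling that out (linear-with-clock scaling is itself an assumption, as the reply below notes):

```python
ryzen_4900u_2ghz = 1225      # quoted single-core score at 2 GHz
a78_2ghz = 1225              # assumed roughly equal at the same clock
ps5_core = ryzen_4900u_2ghz * 3.5 / 2.0   # naive scaling to 3.5 GHz

print(f"{ps5_core / a78_2ghz - 1:.0%} faster per core")    # ~75%
print(f"8x A78 vs 8x Zen 2: {a78_2ghz / ps5_core:.0%}")    # ~57%
```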

Your idea about the SoC power draw is quite interesting, for sure. I could see them offering some options for lower-tier games and for backward compatibility: you don't need top CPU and GPU clocks for those types of games, so perhaps they can offer a low-power clock configuration (this time for both CPU and GPU) to drop the SoC power draw to a very low level (perhaps sub-3W?) and give a lot of battery life for these lower-tier or previous-gen games. Then they could have higher CPU and GPU clocks in a maximum clock configuration for the most demanding games. Those games might then draw your suggested 6.5-7W, offering a tight 3-hour battery life. Obviously, all of this is an example, but I think there should be some use in dropping both CPU and GPU clocks, rather than just the GPU as the Switch 1 does.
 

Simba1

Member
Dec 5, 2017
5,391
Would the situation be similar to current Switch v PS4/XONE ? How does the current CPU gap compare to your prospective gap.

Otherwise, it seems like Nintendo will be in a tough spot next gen in terms of securing multplats again?

I think the real change for Switch is DLSS and lower resolutions than the PS5/XBxS series. Most PS5/XBxS games will be 4K, so Switch could render those games at 540-720p in portable mode and at 720p-1080p in docked mode, and DLSS could push those resolutions even further.

So regardless of the 3rd party situation, Switch 2 should be in a better position than the Switch is now.
A big difference will also be that Switch 2 will be released after a very successful platform with a 100M+ install base, while the current Switch came off the failed Wii U and its 13.5M-unit install base, so Switch 2 will most likely have much stronger support out of the gate than the current Switch had.
 

Zedark

Member
Oct 25, 2017
14,719
The Netherlands
BotW is a 30 FPS game; there's no way it could run at 60 FPS ... on Switch 2, because Nintendo would need to redo the game for 60 FPS.
I'm curious why you say that. Of course, the game won't magically run at 60 fps when a 30 fps lock is in place, but a small patch could very well lift that restriction and make it 1080p/60 fps, right? Unless they have tied the game's logic to frame rate, of course, in which case they would need to redo the game (but is there any indication for that?).
 

Simba1

Member
Dec 5, 2017
5,391
For the Labo VR version? One where better clocks than the OG Switch's could run Witcher 3 at almost a stable 60fps? With a higher-clocked or better CPU, alongside higher GPU clocks and possibly more RAM, I think it'd handle Labo VR just fine!?

I wasn't expecting the full docked version to be rocking 60fps. Unless you rendered it at a lower resolution and used DLSS 2.0 to hit the higher resolution, but DLSS 2.0 would require an entirely new GPU.

My point is that BotW is a 30 FPS game; it's made to work at 30 FPS. A higher-clocked CPU would mean nothing for games like BotW; I mean, it would mean even more stable FPS, but not double the FPS.

To run at 60 FPS even in VR mode, Nintendo needs to change how BotW operates.


I'm curious why you say that. Of course, the game won't magically run at 60 fps when a 30 fps lock is in place, but a small patch could very well lift that restriction and make it 1080p/60 fps, right? Unless they have tied the game's logic to frame rate, of course, in which case they would need to redo the game (but is there any indication for that?).

I think that's the case; every 3D Zelda game is made to work at 30 FPS, and I think that's baked into BotW's code as well.
 

iag

Member
Oct 27, 2017
1,380
Thanks, that makes sense. For the Ryzen 4900U you mentioned earlier, the benchmark is 1225 for a 2 GHz clock, so pushing that up to 3.5 GHz would suggest a 75% performance advantage per core over the A78, assuming a 2 GHz A78 is used. Conversely, if Switch 2 has 8 A78 cores at 2 GHz, then it would have 57% of the CPU performance that the PS5 has. Again, this assumes that this specific benchmark can be extrapolated to general performance (and we don't know how well it generalises!).

Your idea about the SoC power draw is quite interesting, for sure. I could see them offering some options for lower-tier games and for backward compatibility: you don't need top CPU and GPU clocks for those types of games, so perhaps they can offer a low-power clock configuration (this time for both CPU and GPU) to drop the SoC power draw to a very low level (perhaps sub-3W?) and give a lot of battery life for these lower-tier or previous-gen games. Then they could have higher CPU and GPU clocks in a maximum clock configuration for the most demanding games. Those games might then draw your suggested 6.5-7W, offering a tight 3-hour battery life. Obviously, all of this is an example, but I think there should be some use in dropping both CPU and GPU clocks, rather than just the GPU as the Switch 1 does.
You don't drop the CPU clock, so as to not mess with your game logic. Suppose you're making a racing game and your tire model does a set amount of calculations per second. If you drop the CPU clock, you'll have a different tire model on handheld than on docked. Of course it can be done, but it's just an example of why you should not do it.
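This is essentially the classic fixed-timestep game loop argument. A minimal sketch (step_tire_model and render are hypothetical stubs) of how engines pin the simulation rate independently of render speed, and why that still assumes the CPU clock can keep up:

```python
import time

def step_tire_model(dt: float) -> None:
    pass  # hypothetical physics update; always advances by exactly dt

def render() -> None:
    pass  # hypothetical draw call; runs as often as hardware allows

def run_game(sim_hz: int = 60, duration_s: float = 1.0) -> None:
    # Fixed-timestep loop: physics advances in 1/sim_hz steps, so
    # results match on fast or slow CPUs, *provided* the CPU can finish
    # sim_hz steps per second. Dropping the clock below that threshold
    # is what changes the tire model described above.
    dt = 1.0 / sim_hz
    accumulator = 0.0
    previous = start = time.perf_counter()
    while time.perf_counter() - start < duration_s:
        now = time.perf_counter()
        accumulator += now - previous
        previous = now
        while accumulator >= dt:   # catch up in fixed steps
            step_tire_model(dt)
            accumulator -= dt
        render()

run_game()
```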
 

Zedark

Member
Oct 25, 2017
14,719
The Netherlands
I think that's the case; every 3D Zelda game is made to work at 30 FPS, and I think that's baked into BotW's code as well.
Hm, if that's the case, then yeah, you won't get the game at 60 fps. It's a poor design decision to tie game logic to frame rate for this reason, and it handicaps your ability to bring the game forward easily. But well, what can you do?

SMO should look pristine in native 1080p, at least!
 

Zedark

Member
Oct 25, 2017
14,719
The Netherlands
You don't drop the CPU clock, so as to not mess with your game logic. Suppose you're making a racing game and your tire model does a set amount of calculations per second. If you drop the CPU clock, you'll have a different tire model on handheld than on docked. Of course it can be done, but it's just an example of why you should not do it.
That makes sense, but if your game isn't too demanding, then maybe Nintendo can just offer a complementary low-CPU mode when docked? For example, a low-power profile on handheld with a 1 GHz CPU clock and 384 MHz GPU, with a complementary docked profile at 1 GHz CPU and 921 MHz GPU; and a high-power profile on handheld with a 2 GHz CPU clock and 691 MHz GPU, with a complementary docked profile at 2 GHz CPU and 1.25 GHz GPU. In that case, you can target your profile of choice (low power or high power) and still have a consistent pair of power profiles for docked and undocked.
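Written out as data (all clocks are the illustrative examples above, not real specs), the key property is that the CPU clock is constant within a profile, which is what keeps game logic consistent across docked and handheld:

```python
# Hypothetical clock profiles from the example above (not real specs).
profiles = {
    "low":  {"cpu_ghz": 1.0, "gpu_mhz": {"handheld": 384, "docked": 921}},
    "high": {"cpu_ghz": 2.0, "gpu_mhz": {"handheld": 691, "docked": 1250}},
}

def clocks(profile: str, docked: bool) -> tuple[float, int]:
    p = profiles[profile]
    # CPU clock is identical in both modes; only the GPU scales.
    return p["cpu_ghz"], p["gpu_mhz"]["docked" if docked else "handheld"]

print(clocks("high", docked=True))   # (2.0, 1250)
```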
 
Apr 11, 2020
1,235
Is Sony intending to use all 8 cores for games on PS5? Aren't they using one or two cores for the OS? 8x A78 could be good for next-gen games, with 4x A55 for the OS. Nintendo could even use them for a little 'hyperthreading'-like boost. Of course, it would only be possible if Switch 2 has 8 big cores instead of 4.
 

Simba1

Member
Dec 5, 2017
5,391
Hm, if that's the case, then yeah, you won't get the game at 60 fps. It's a poor design decision to tie game logic to frame rate for this reason, and it handicaps your ability to bring the game forward easily. But well, what can you do?

SMO should look pristine in native 1080p, at least!

Nintendo thinks that 3D Zelda games don't need 60 FPS like 3D Mario does, for instance (not a single 3D Zelda was 60 FPS, while every 3D Mario except M64 is 60 FPS), and they use Zelda games to push the best graphics on a platform, which 30 FPS of course helps with too.

If SMO is 1080p on a Switch Pro, BotW would also be; they are both running at 900p dynamic resolution.
 

Dekuman

Member
Oct 27, 2017
19,046
As an aside, one thing I'm curious about is how big of a jump Wii U to Switch is from a CPU perspective. We know we got an extra core, but what is per-core performance like on Switch vs. Wii U?
 

ShadowFox08

Banned
Nov 25, 2017
3,524
As always, thanks for the insight. I also think that Nintendo will not go with high clocks for Switch 2's CPU; I think they will go with something like 1.5-2GHz.

I think that Switch 2 will be closer in power to PS5/XBxS than the current Switch is to XB1/PS4, but even with DLSS I don't see Switch reaching even half of PS5's power (maybe around 1/3).
1/3 (or even 40-50%) is basically the same gap as current gen in GPU terms, though (Switch vs. XB1).
 

z0m3le

Member
Oct 25, 2017
5,418
Thanks, that makes sense. For the Ryzen 4900U you mentioned earlier, the benchmark is 1225 for a 2 GHz clock, so pushing that up to 3.5 GHz would suggest a 75% performance advantage per core over the A78, assuming a 2 GHz A78 is used. Conversely, if Switch 2 has 8 A78 cores at 2 GHz, then it would have 57% of the CPU performance that the PS5 has. Again, this assumes that this specific benchmark can be extrapolated to general performance (and we don't know how well it generalises!).

Your idea about the SoC power draw is quite interesting, for sure. I could see them offering some options for lower-tier games and for backward compatibility: you don't need top CPU and GPU clocks for those types of games, so perhaps they can offer a low-power clock configuration (this time for both CPU and GPU) to drop the SoC power draw to a very low level (perhaps sub-3W?) and give a lot of battery life for these lower-tier or previous-gen games. Then they could have higher CPU and GPU clocks in a maximum clock configuration for the most demanding games. Those games might then draw your suggested 6.5-7W, offering a tight 3-hour battery life. Obviously, all of this is an example, but I think there should be some use in dropping both CPU and GPU clocks, rather than just the GPU as the Switch 1 does.
Per-clock single-threaded performance won't change that much between the 3700 and the 4900; the IPC gain for the new Ryzen CPUs is not 75%. The 4900U is likely hitting another bottleneck at higher clocks that we aren't seeing, but it won't be long before that CPU is on the market and we can get these scores from it at any clock.

It's also worth noting that Nintendo upped the boost mode for the portable model's GPU, so that ~5 watt launch model is likely closer to ~6 watts with something like MK11.
 

ShadowFox08

Banned
Nov 25, 2017
3,524
All I want from a Pro model in 2021 would be

- BOTW running at 1080p, and 60fps on the Labo VR (stable all round)
- DOOM and Witcher 3 running slightly better resolutions and more stable 30fps
- The voice chat app running on Switch
- Bluetooth audio support, such as on the go

All I want from a 2023/2024 Switch 2 is

- Nintendo games to run on DLSS 2.0 at 4k
- 60fps whenever possible, but I won't get mad if it's graphically pushing the hardware
- "Impossible ports" using DLSS for better resolutions at a stableish 30fps including features such as UE5's Nanites and Lumen and some kind of Ray Tracing implementation would be nice but not expecting it to be at super high resolution even with DLSS 2.0
- Larger storage but I'm not expecting it
Chances of a Pro are very slim.

And 1080p 60fps Zelda in VR is going to take a LOT of GPU power.
For one, bumping the resolution from 900p to 1080p requires a 44% increase in pixels and bandwidth. Then you'll also likely need at least twice the Switch's GPU and CPU to get to 60fps. Finally, each eye has to be accounted for: you're running 1080p twice at the same time, which again takes GPU power. BotW and Odyssey aren't running at their native resolutions on Labo VR; both are adaptive 720p.

So we are looking at at least 1.5 x 2 x 2 = a ballpark 6x in raw power to run BotW at a stable 60fps 1080p. This is something that would be possible on a 2.5 TFLOPs GPU, maybe a portable Switch 2. We aren't getting that on a Switch Pro.
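That ballpark, spelled out (the two 2x factors are the rough assumptions above):

```python
pixel_factor = (1920 * 1080) / (1600 * 900)   # 900p -> 1080p: 1.44x
fps_factor = 2.0                              # 30 fps -> 60 fps
stereo_factor = 2.0                           # one 1080p image per eye

print(pixel_factor * fps_factor * stereo_factor)  # ~5.8x, "ballpark 6x"
```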
 

z0m3le

Member
Oct 25, 2017
5,418
I do hope Nintendo uses A55 cores or their successor for the OS; it would be cool for them to bring back StreetPass, IMO.
 

fiendcode

Member
Oct 26, 2017
24,981
The Switch successor's memory size is going to depend entirely on how Nintendo/Nvidia design their storage architecture. If they go for cheaper/older expandable storage (i.e. SDXC), they'll likely go for more internal storage and mandate that games be installed there to play. If they use a newer, faster standard or move to proprietary expandable storage, they'll use whatever amount is cheapest for internal storage and let games be played from anywhere. I'd prefer the latter, but could easily see the former depending on where storage standards are in 3+ years.
 