
Next gen PS5 and next Xbox launch speculation - Secret sauces spicing 2019

When will the first 'next gen' console be revealed?

  • First half of 2019: 593 votes (15.6%)
  • Second half of 2019 (let's say post-E3): 1,361 votes (35.9%)
  • First half of 2020: 1,675 votes (44.2%)
  • 2021 :^) : 161 votes (4.2%)
  • Total voters: 3,790
  • Poll closed.
Status
Not open for further replies.
Oct 27, 2017
3,105
Germany
He calculates his TFLOP figures based on Polaris tech, though; according to Navi rumours, it looks far more potent.

According to Richard's calculations, 48 CUs will provide 11.1 TFLOPs @ 1.8 GHz, but this chart says that 48 CUs on Navi will be Vega 64 + 10-15%, and the Vega 64 is a 12.6 TFLOP card.

My gut is really starting to tell me not to expect more than the 10-12 TFLOP range.

Man, there's so much goddamn anticipation for these all-important TFLOP figures.
Don't expect 1800 MHz in a console. Lower it by 400 MHz and you'll be close to what to expect there.
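For reference, Richard's 11.1 TF number drops straight out of the standard GCN peak-FLOPs formula. A quick sketch in Python; 64 shaders per CU and 2 FLOPs per clock (one fused multiply-add) are GCN constants, while the CU counts and clocks are just the figures quoted above:

```python
# Peak FP32 throughput for a GCN-family GPU:
# each CU has 64 shaders, each shader retires 2 FLOPs per clock (one FMA).
def peak_tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

print(peak_tflops(48, 1.8))   # ~11.06, the quoted 11.1 TF for 48 CUs @ 1.8 GHz
print(peak_tflops(64, 1.54))  # ~12.6, Vega 64 (64 CUs, ~1.54 GHz boost)
```

The same formula shows why dropping the clock by 400 MHz matters: 48 CUs at 1.4 GHz lands at roughly 8.6 TF.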
 

Putty

Double Eleven
Verified
Oct 27, 2017
580
Middlesbrough
I've said it before and I'll say it again... IMO the CPU increase is the big one. WHATEVER the TF number turns out to be, what we are going to have are seriously well-balanced machines.
 
Last edited:
Oct 28, 2017
9,654
Worth a double post. New DF hotness.

This is really interesting. Might actually even check it out lol. So depending on the CUs, we're looking at anything between 10-15 TFLOPs. The pastebin rumours, which have been pretty much dead-on thus far, even down to predicting when Sony would do an initial reveal, seemed to guess the new system would be around 14 TFLOPs, but that seems almost too good to be true. That performance figure is what was rumoured for Anaconda (Microsoft's premium new system).
 
Oct 25, 2017
3,415
Don't expect 1800 MHz in a console. Lower it by 400 MHz and you'll be close to what to expect there.
It depends on the efficiency gains of Navi. Nvidia hits 1700 MHz boost clocks on 12nm in 120 W cards. https://en.wikipedia.org/wiki/GeForce_16_series

That's a lofty goal, but AMD has said they deployed Zen engineers to help address perf/Watt issues on Navi. If you assume the RX 3080 can hit Vega 64 + 15% performance in 150 W, that's either a huge architectural efficiency jump, a huge power efficiency jump, or some combo of both.
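As a back-of-the-envelope check on how big that jump would be (a sketch: Vega 64's ~295 W board power is the commonly cited reference figure, while the 150 W and +15% numbers are just the rumoured values from this post):

```python
# Implied perf/W improvement if an "RX 3080" really did Vega 64 + 15% in 150 W.
vega64_power_w = 295.0   # Vega 64 reference board power (commonly cited)
rumored_power_w = 150.0  # rumoured board power from the post
perf_vs_vega64 = 1.15    # "Vega 64 + 15%"

perf_per_watt_gain = (perf_vs_vega64 / rumored_power_w) / (1.0 / vega64_power_w)
print(f"{perf_per_watt_gain:.2f}x")  # ~2.26x perf/W over Vega 64
```

A better-than-2x perf/W jump in one generation is why the rumour reads as either a big architectural win, a big process win, or both.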
 
Oct 27, 2017
3,105
Germany
It depends on the efficiency gains of Navi. Nvidia hits 1700 MHz boost clocks on 12nm in 120 W cards. https://en.wikipedia.org/wiki/GeForce_16_series

That's a lofty goal, but AMD has said they deployed Zen engineers to help address perf/Watt issues on Navi. If you assume the RX 3080 can hit Vega 64 + 15% performance in 150 W, that's either a huge architectural efficiency jump, a huge power efficiency jump, or some combo of both.
I understand that, but you have to take into account that the SoC also runs a CPU and the memory interface. I would estimate that adds about 30 W to what the GPU consumes.

I also want to add that the cooling solutions of such GPUs (in the 10 TF ballpark) are quite different from what we see in consoles. Dual 90mm fans, for example. While I would even love to see a single Noctua NF-A12x25 solution, I doubt we will ever see such a thing ...
 
Last edited:
Oct 27, 2017
278
I guess the benefit is that you don't need to wake up the main SoC for standby tasks that need access to RAM. Makes me wonder how such a PS5 setup would handle background tasks like that.
AMD has done a lot of work on power management in the interim. I'd expect their future SoCs to be able to run with very little of the chip enabled for these tasks. Maybe one core at minimal clocks, one memory channel, the WiFi/Ethernet, and the SSD/flash would have to be powered up.
 
Oct 27, 2017
278
If you're right, Colbert (which I think you will be), a lot of people are going to be upset by these TFLOP figures.
That's because they are ignorant. Who cares what ignorant people think?

It'd be one thing if the reason why the TF rating isn't the end-all be-all of performance indicators in every case hadn't been explained in painstaking detail all over this thread. Anybody who still doesn't get it either isn't capable of understanding or is being willfully obtuse.
 
Feb 10, 2018
11,048
This is really interesting. Might actually even check it out lol. So depending on the CUs, we're looking at anything between 10-15 TFLOPs. The pastebin rumours, which have been pretty much dead-on thus far, even down to predicting when Sony would do an initial reveal, seemed to guess the new system would be around 14 TFLOPs, but that seems almost too good to be true. That performance figure is what was rumoured for Anaconda (Microsoft's premium new system).
What pastebin leaks?
 
Oct 27, 2017
37
You know, that "leak" of an 8GB HBM2 / 16GB DDR4 memory setup would perfectly fit an early dev kit that was running a Vega 56/64 in a PC.
If the rumor is correct, then Sony has a very sophisticated memory design that will deliver much better price/performance than any pure GDDR6 solution. You obviously save on cost, but just as importantly, you prevent contention between the CPU and GPU for memory access. The highest-performing gaming PCs always use separate pools of memory, and Sony using that approach would be a significant step up from the PS4 and Xbox One X designs.
 
Dec 14, 2018
690
St Kitts
He calculates his TFLOP figures based on Polaris tech, though; according to Navi rumours, it looks far more potent.

According to Richard's calculations, 48 CUs will provide 11.1 TFLOPs @ 1.8 GHz, but this chart says that 48 CUs on Navi will be Vega 64 + 10-15%, and the Vega 64 is a 12.6 TFLOP card.

My gut is really starting to tell me not to expect more than the 10-12 TFLOP range.

Man, there's so much goddamn anticipation for these all-important TFLOP figures.
Or who knows... maybe the cards in that table are clocked higher than 1.8 GHz? And 1.8 GHz is actually the downclocked console version.

Either way, I used the table when making my calculation but mistakenly mixed up Navi 12 (40 CU) and Navi 10 (48 CU).

On another note, I think people are letting whatever Vega 7 was get to them and in turn underestimating what AMD should really be able to do with a chip actually built for 7nm. Or maybe Navi is even a 7nm+ chip like Ryzen 3. Getting clocks as high as 1.8 GHz on a 7nm process shouldn't be that hard to believe, considering Nvidia could do that on a 16nm process. Maybe when Nvidia releases their own 7nm GPUs and people see those running at 2.4 GHz, they'll go back to believing that 1.8-2 GHz is what to expect from AMD.
 
Oct 27, 2017
278
If the rumor is correct, then Sony has a very sophisticated memory design that will deliver much better price/performance than any pure GDDR6 solution. You obviously save on cost, but just as importantly, you prevent contention between the CPU and GPU for memory access. The highest-performing gaming PCs always use separate pools of memory, and Sony using that approach would be a significant step up from the PS4 and Xbox One X designs.
The highest-performing gaming PCs have no other choice. You don't have enough memory bandwidth to support high-end GPUs with DDR4.
 

VX1

Member
Oct 28, 2017
3,351
Europe
I understand that, but you have to take into account that the SoC also runs a CPU and the memory interface. I would estimate that adds about 30 W to what the GPU consumes.

I also want to add that the cooling solutions of such GPUs (in the 10 TF ballpark) are quite different from what we see in consoles. Dual 90mm fans, for example. While I would even love to see a single Noctua NF-A12x25 solution, I doubt we will ever see such a thing ...
Richard also noticed, and I think you mentioned that as well, that Cerny never mentioned the GPU at all in that interview, not just the TFs, except for the ray-tracing capability.
So...yeah.
 
Feb 10, 2018
11,048
That's because they are ignorant. Who cares what ignorant people think?

It'd be one thing if the reason why the TF rating isn't the end-all be-all of performance indicators in every case hadn't been explained in painstaking detail all over this thread. Anybody who still doesn't get it either isn't capable of understanding or is being willfully obtuse.
It happens all the time. I think a lot of people here like to be disappointed; you should see the pre-Nintendo Direct threads. People expect the craziest megatons to happen, then they don't, and they have their little outrage. I find it kind of cute, to be honest.
 
Oct 27, 2017
37
Will these systems really work on 8k TVs? I find it hard to believe.
Sony implemented custom support for checkerboard rendering in the PS4 Pro to make it easier to scale games that were designed around 1080p up to 4K. Similarly, Sony could use the same checkerboard rendering technology to upscale 4K games to 8K. The upscaling would be so good that you would be hard-pressed to see the difference between it and native 8K.
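A quick sketch of the pixel math behind that idea, assuming standard 1080p/4K/8K resolutions and a PS4 Pro-style checkerboard that shades roughly half the target pixel grid each frame:

```python
def pixels(width: int, height: int) -> int:
    return width * height

full_hd = pixels(1920, 1080)  # 2,073,600
uhd_4k = pixels(3840, 2160)   # 8,294,400  (4x 1080p)
uhd_8k = pixels(7680, 4320)   # 33,177,600 (4x 4K)

# Checkerboard rendering shades roughly half the target grid per frame,
# so "checkerboard 8K" costs about as much shading work as 2x native 4K.
cbr_8k_samples = uhd_8k // 2
print(cbr_8k_samples / uhd_4k)  # 2.0
```

So the resolution jump from 4K to 8K is the same 4x step as 1080p to 4K, which is exactly the gap checkerboarding was built to bridge on the Pro.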
 
Oct 25, 2017
3,415
I understand that, but you have to take into account that the SoC also runs a CPU and the memory interface. I would estimate that adds about 30 W to what the GPU consumes.

I also want to add that the cooling solutions of such GPUs (in the 10 TF ballpark) are quite different from what we see in consoles. Dual 90mm fans, for example. While I would even love to see a single Noctua NF-A12x25 solution, I doubt we will ever see such a thing ...
Discrete GPU cards have memory controllers, memory, VRMs, and other IO chips in their total power budget. Only the VRMs and other IO circuitry are eating into the CPU budget at that point, so the delta may only be 10-15 W, if that.

AMD has done a lot of work on power management in the interim. I'd expect their future SoCs to be able to run with very little of the chip enabled for these tasks. Maybe one core at minimal clocks, one memory channel, the WiFi/Ethernet, and the SSD/flash would have to be powered up.
That level of power gating would certainly help. A 1.6 GHz base clock certainly helps too. I wonder if they undervolt in that case. Would they even need an external ARM SoC at that point? I recall RKSimon having some ARM commits recently, so perhaps they still have some of the bootloader functionality...
 
Oct 27, 2017
3,105
Germany
From December last year.

https://pastebin.com/PY9vaTsR/

Though apparently with Navi the TFLOP calculation is a little bit different (closer to Nvidia), so I'm really not sure.
The theoretical peak performance in TF doesn't change if you calculate it with the same number of CUDA cores or AMD stream processors. What differentiates the GPU architectures is real-world performance, i.e. how efficiently the hardware is actually used.
 
Oct 27, 2017
37
The highest-performing gaming PCs have no other choice. The economics of that market don't justify an APU of the class that is used in consoles.
High-end gaming PCs always have a choice. A split memory design is the best-performing design, period. CPUs and GPUs have very different memory access requirements. One size doesn't fit all, except as a compromise.
 
Oct 26, 2017
1,911
Lol... makes sense now...

I strongly doubt that engineering or qualification sample chips are what makes it into early dev kits. Early dev kits are basically PCs.

It didn't just get interesting, it got flat-out real. The only thing now is that we don't know if that's actually a PS5 APU; it could be for the next Xbox. And we don't know how many CUs it's being clocked against.
I wonder, though, if they're the forerunners of chips coming sooner for dev kits. If the PS4 is anything to go by, the first SoC-based kits should arrive about 10-12 months prior to release.

You're right, though. There's nothing to pin these chips to PS5 development necessarily. They could be Xbox chips, or something else entirely that AMD is working on.
 
OP
Phoenix Splash
Mar 23, 2018
1,501
https://www.resetera.com/threads/ne...ecret-sauces-spicing-2019.91830/post-19961180

This post gives a good summary on scalability. Comparing the PS3 + PS4 is a different circumstance from comparing Lockhart + Anaconda; Lockhart is going to be designed to scale with Anaconda.
Also, TFLOPs can punch above their weight with the correct memory system, like the 1X does (it has double the resolution of the Pro despite only having 50% more TFLOPs, while the PS4 Pro needed 100% more TFLOPs to double the PS4's resolution).
Oh I did not read that post. Thanks for pointing it out!
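The quoted scaling claim can be sanity-checked against commonly cited peak-TFLOP figures (PS4 1.84, PS4 Pro 4.2, Xbox One X 6.0). A quick sketch; the "50%" and "100%" in the post are rough, and the actual ratios come out slightly differently:

```python
tf = {"PS4": 1.84, "PS4 Pro": 4.2, "Xbox One X": 6.0}  # commonly cited peak FP32 TFLOPs

pro_vs_ps4 = tf["PS4 Pro"] / tf["PS4"]         # ~2.28x TF to step up from the base PS4
x1x_vs_pro = tf["Xbox One X"] / tf["PS4 Pro"]  # ~1.43x TF over the Pro
print(f"{pro_vs_ps4:.2f}x, {x1x_vs_pro:.2f}x")
```

So the 1X gets its resolution advantage out of a ~43% TF lead over the Pro, which is the post's point about the memory system doing a lot of the work.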


Phoenix Splash
400 pages
Time for a new thread ;)
400 pages in 3 1/2 months...not bad :)
What the heck, I didn't realize we were at 400 pages already hahaha! Just created it!

Come on in, folks!

https://www.resetera.com/threads/ne...nch-thread-my-anaconda-dont-want-none.112607/
 
Dec 14, 2018
690
St Kitts
It depends on the efficiency gains of Navi. Nvidia hits 1700 MHz boost clocks on 12nm in 120 W cards. https://en.wikipedia.org/wiki/GeForce_16_series

That's a lofty goal, but AMD has said they deployed Zen engineers to help address perf/Watt issues on Navi. If you assume the RX 3080 can hit Vega 64 + 15% performance in 150 W, that's either a huge architectural efficiency jump, a huge power efficiency jump, or some combo of both.
Actually, Nvidia was hitting 1700 MHz even on a 16nm 1080 card, albeit at around 150-300 W lol. But surely AMD should be able to do something meaningful at 7nm, especially if efficiency was one of the primary focuses of Navi. 'Cause the 64 CU limit still seems to be in place, so they have to have spent all that engineering time somewhere.
 
Oct 27, 2017
278
High-end gaming PCs always have a choice. A split memory design is the best-performing design, period. CPUs and GPUs have very different memory access requirements. One size doesn't fit all, except as a compromise.
They do not. They are still based on the standard PC platform, and that platform simply doesn't deliver enough main-memory bandwidth to support more than a modest GPU in addition to the CPU, which is why integrated GPUs don't scale up any higher than they do. They would be bottlenecked by bandwidth, and the extra processing units would be a waste of die space.
 