
When will the first 'next gen' console be revealed?

  • First half of 2019 - Votes: 593 (15.6%)
  • Second half of 2019 (let's say post-E3) - Votes: 1,361 (35.9%)
  • First half of 2020 - Votes: 1,675 (44.2%)
  • 2021 :^) - Votes: 161 (4.2%)
  • Total voters: 3,790
  • Poll closed.
Status
Not open for further replies.

anexanhume

Member
Oct 25, 2017
12,912
Maryland
I'm talking about having only GDDR6 interfaces vs. having both HBM2 and DDR4 interfaces on the chip.
Oh! Good point, and an oversight on my part, especially if they need a 4-channel DDR4 interface as detailed in the rumor. Hopefully the tradeoff is worth it if they went down that path.

And that would answer my question about how the bus is ultimately arbitrated. No separate master controller off-die or off-package. I believe the DDR4 in the Pro is managed by the ARM SoC off-package?
 

M3rcy

Member
Oct 27, 2017
702
Wasn't 14.7 TF in one of the rumors? I don't understand the Pro and 1X numbers though.

What don't you understand? The way you get those numbers is explained right before they are shown. If the Pro and One X were clocked at 1.8GHz, their TF rating would come out to those numbers. The 14.7 TF came from someone doing the exact same math being done here for a 64 CU part.
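For reference, here's that math as a quick sketch (the 64 shaders per CU and 2 FLOPs per clock are standard GCN figures, and the CU counts are the published console specs; the 1.8GHz clock is the hypothetical from this thread):

```python
# GCN GPUs pack 64 shaders (stream processors) per CU, and each shader
# can retire 2 floating-point ops per clock (one fused multiply-add).
SHADERS_PER_CU = 64
FLOPS_PER_CLOCK = 2

def teraflops(cus: int, clock_ghz: float) -> float:
    """Peak FP32 throughput in TFLOPs for a GCN-style GPU."""
    return cus * SHADERS_PER_CU * FLOPS_PER_CLOCK * clock_ghz / 1000

for name, cus in [("PS4 Pro (36 CU)", 36), ("One X (40 CU)", 40), ("64 CU part", 64)]:
    print(f"{name} @ 1.8 GHz: {teraflops(cus, 1.8):.1f} TF")
# PS4 Pro (36 CU) @ 1.8 GHz: 8.3 TF
# One X (40 CU) @ 1.8 GHz: 9.2 TF
# 64 CU part @ 1.8 GHz: 14.7 TF
```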
 

Saint-14

Banned
Nov 2, 2017
14,477
What don't you understand? The way you get those numbers is explained right before they are shown. If the Pro and One X were clocked at 1.8GHz, their TF rating would come out to those numbers. The 14.7 TF came from someone doing the exact same math being done here for a 64 CU part.
Ah right, I missed that.
 

Deleted member 12635

User requested account closure
Banned
Oct 27, 2017
6,198
Germany
Richard joins us on the Gonzalo hype train! :)

cc: BitsandBytes, Colbert
You will see, in the end DF will eventually use my baseline prediction LOL

-------------

On a side note:
Zen 2 yields are excellent; not quite as good as Zen 1, but still around 70% (Zen 1 was around 80%). In comparison, Intel sees yields of around 35% on their 28-core processors!!
 
Feb 10, 2018
17,534
interesting.......


So anywhere between 9TF and 11TF?

He calculates his TFLOP figures based on Polaris tech though; according to the Navi rumours, Navi looks far more potent.

[image: leaked Navi performance table]

According to Richard's calculations, 48 CUs will provide 11.1 TFLOPs @ 1.8GHz, but this chart says that 48 CUs on Navi will be Vega 64 + 10-15%, and the Vega 64 is a 12.6 TFLOP card.

My gut is really starting to tell me not to expect more than the 10-12 TFLOP range.

Man, there's so much goddamn anticipation for these all-important TFLOP figures.
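To put rough numbers on that gap (a sketch only: the +10-15% and 12.6 TF figures are from the leaked chart and Vega 64's published spec; the rest is just my arithmetic):

```python
# Paper TFLOPs for the rumored 48 CU Navi part at 1.8 GHz
paper_tf = 48 * 64 * 2 * 1.8 / 1000          # ~11.1 TF on paper

# The leaked chart claims Vega 64 +10-15% *performance*, not paper TF.
vega64_tf = 12.6
effective = (vega64_tf * 1.10, vega64_tf * 1.15)
print(f"paper: {paper_tf:.1f} TF, performs like: "
      f"{effective[0]:.1f}-{effective[1]:.1f} TF of Vega-class hardware")
# paper: 11.1 TF, performs like: 13.9-14.5 TF of Vega-class hardware
```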
 

M3rcy

Member
Oct 27, 2017
702
Oh! Good point, and an oversight on my part, especially if they need a 4-channel DDR4 interface as detailed in the rumor. Hopefully the tradeoff is worth it if they went down that path.

And that would answer my question about how the bus is ultimately arbitrated. No separate master controller off-die or off-package. I believe the DDR4 in the Pro is managed by the ARM SoC off-package?

Everything but the CPU, GPU and main memory pool is managed by the ARM SoC. It's a weird setup.
 

M3rcy

Member
Oct 27, 2017
702
He calculates his TFLOP figures based on Polaris tech though; according to the Navi rumours, Navi looks far more potent.

[image: leaked Navi performance table]

According to Richard's calculations, 48 CUs will provide 11.1 TFLOPs @ 1.8GHz, but this chart says that 48 CUs on Navi will be Vega 64 + 10-15%, and the Vega 64 is a 12.6 TFLOP card.

My gut is really starting to tell me not to expect more than the 10-12 TFLOP range.

Man, there's so much goddamn anticipation for these all-important TFLOP figures.

Assuming Navi is better able to translate its theoretical power into actual performance in games than Vega, it makes sense. Seems like a pretty safe assumption.
 

Deleted member 12635

User requested account closure
Banned
Oct 27, 2017
6,198
Germany
He calculates his TFLOP figures based on Polaris tech though; according to the Navi rumours, Navi looks far more potent.

[image: leaked Navi performance table]

According to Richard's calculations, 48 CUs will provide 11.1 TFLOPs @ 1.8GHz, but this chart says that 48 CUs on Navi will be Vega 64 + 10-15%, and the Vega 64 is a 12.6 TFLOP card.

My gut is really starting to tell me not to expect more than the 10-12 TFLOP range.

Man, there's so much goddamn anticipation for these all-important TFLOP figures.
Don't expect 1800 MHz in a console. Lower it by 400 MHz and you will be near what to expect there.
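A quick sketch of what that 400 MHz haircut does to the rumored CU counts, reusing the same peak-TF arithmetic from upthread:

```python
# Same peak-TF formula (CUs x 64 shaders x 2 FLOPs/clock x clock),
# at the chart's 1.8 GHz vs. a console-friendlier 1.4 GHz.
for cus in (48, 56, 64):
    hi = cus * 64 * 2 * 1.8 / 1000
    lo = cus * 64 * 2 * 1.4 / 1000
    print(f"{cus} CU: {hi:.1f} TF @ 1.8 GHz -> {lo:.1f} TF @ 1.4 GHz")
# 48 CU: 11.1 TF -> 8.6 TF
# 56 CU: 12.9 TF -> 10.0 TF
# 64 CU: 14.7 TF -> 11.5 TF
```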
 

Putty

Double Eleven
Verified
Oct 27, 2017
929
Middlesbrough
I've said it before and I'll say it again... IMO the CPU increase is the big one. WHATEVER the TF number turns out to be, what we are going to have are seriously well-balanced machines.
 
Last edited:

nib95

Contains No Misinformation on Philly Cheesesteaks
Banned
Oct 28, 2017
18,498
Worth a double post. New DF hotness.



This is really interesting. Might actually even check out, lol. So depending on the CUs, we're looking at anything between 10-15 TFLOPs. The pastebin rumours, which have been pretty much dead on thus far (even down to predicting when Sony would do an initial reveal), seemed to guess the new system would be around 14 TFLOPs, but that seems almost too good to be true. That performance figure is what was rumoured for Anaconda (Microsoft's premium new system).
 

anexanhume

Member
Oct 25, 2017
12,912
Maryland
Don't expect 1800 MHz in a console. Lower it by 400 MHz and you will be near what to expect there.
It depends on the efficiency gains of Navi. Nvidia hits 1700MHz boost clocks on 12nm in 120W cards. https://en.wikipedia.org/wiki/GeForce_16_series

That's a lofty goal, but AMD has said they deployed Zen engineers to help address perf/Watt issues on Navi. If you assume the RX 3080 can hit Vega 64 + 15% performance in 150W, that's either a huge architectural efficiency jump, a huge power efficiency jump, or some combo of both.
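Back-of-the-envelope on how big that jump would be (Vega 64's ~295W board power is its published spec; the 150W and +15% figures are from the rumor, not confirmed):

```python
# Normalize performance to Vega 64 = 1.00 and compare perf-per-watt.
vega64_perf, vega64_watts = 1.00, 295    # published board power
rumor_perf, rumor_watts = 1.15, 150      # rumored RX 3080 figures

gain = (rumor_perf / rumor_watts) / (vega64_perf / vega64_watts)
print(f"perf/W improvement over Vega 64: {gain:.2f}x")   # ~2.26x
```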
 

Deleted member 12635

User requested account closure
Banned
Oct 27, 2017
6,198
Germany
It depends on the efficiency gains of Navi. Nvidia hits 1700MHz boost clocks on 12nm in 120W cards. https://en.wikipedia.org/wiki/GeForce_16_series

That's a lofty goal, but AMD has said they deployed Zen engineers to help address perf/Watt issues on Navi. If you assume the RX 3080 can hit Vega 64 + 15% performance in 150W, that's either a huge architectural efficiency jump, a huge power efficiency jump, or some combo of both.
I understand that, but you have to take into account that the SoC also runs a CPU and the memory interface. I would estimate that adds about 30W to what the GPU consumes.

I also want to add that the cooling solutions of such GPUs (in the 10TF ballpark) are quite different to what we see in consoles. Dual 90mm fans, for example. While I would even love to see a single Noctua NF-A12x25 solution, I doubt we will ever see such a thing ...
 
Last edited:

M3rcy

Member
Oct 27, 2017
702
I guess the benefit is that you don't need to wake up the main SoC for standby tasks that need access to RAM. Makes me wonder how such a PS5 setup would handle background tasks such as that.

AMD has done a lot of work on power management in the interim. I'd expect their future SoCs to be able to run with very little of the chip enabled for these tasks. Maybe one core at minimal clocks, one memory channel, the wifi/ethernet, and the SSD/flash would have to be powered up.
 

M3rcy

Member
Oct 27, 2017
702
If you're right, Colbert (which I think you will be), a lot of people are going to be upset by these TFLOP figures.

That's because they are ignorant. Who cares what ignorant people think?

It'd be one thing if the reason why the TF rating isn't the end-all be-all of performance indicators in every case hadn't been explained in painstaking detail all over this thread. Anybody who still doesn't get it either isn't capable of understanding or is being willfully obtuse.
 
Feb 10, 2018
17,534
This is really interesting. Might actually even check out, lol. So depending on the CUs, we're looking at anything between 10-15 TFLOPs. The pastebin rumours, which have been pretty much dead on thus far (even down to predicting when Sony would do an initial reveal), seemed to guess the new system would be around 14 TFLOPs, but that seems almost too good to be true. That performance figure is what was rumoured for Anaconda (Microsoft's premium new system).

What pastebin leaks?
 

GameSeeker

Member
Oct 27, 2017
164
You know, that "leak" of an 8GB HBM2/16GB DDR4 memory setup would perfectly fit an early dev kit that was running a Vega 56/64 in a PC.

If the rumor is correct, then Sony has a very sophisticated memory design that will deliver much better price/performance than any pure GDDR6 solution. You obviously save on cost, but just as importantly, you prevent contention between the CPU and GPU for memory access. The highest-performing gaming PCs always use separate pools of memory, and Sony using that approach would be a significant step up from the PS4 and Xbox One X designs.
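As a sketch of what that split pool could look like on paper: the 4-channel DDR4 part is from the rumor, but the HBM2 stack count and pin speed and the DDR4-3200 speed below are illustrative assumptions on my part.

```python
# Hypothetical split-pool bandwidth, in GB/s.
# HBM2 pool: assume 2 stacks x 1024-bit @ 2.0 Gbps/pin (GPU pool)
hbm2_bw = 2 * 1024 * 2.0 / 8             # 512 GB/s
# DDR4 pool: 4 channels x 64-bit @ 3200 MT/s (CPU/OS pool)
ddr4_bw = 4 * 64 * 3200e6 / 8 / 1e9      # ~102 GB/s
print(f"HBM2 pool: {hbm2_bw:.0f} GB/s, DDR4 pool: {ddr4_bw:.1f} GB/s")
# HBM2 pool: 512 GB/s, DDR4 pool: 102.4 GB/s
```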
 

Pheonix

Banned
Dec 14, 2018
5,990
St Kitts
He calculates his TFLOP figures based on Polaris tech though; according to the Navi rumours, Navi looks far more potent.

[image: leaked Navi performance table]

According to Richard's calculations, 48 CUs will provide 11.1 TFLOPs @ 1.8GHz, but this chart says that 48 CUs on Navi will be Vega 64 + 10-15%, and the Vega 64 is a 12.6 TFLOP card.

My gut is really starting to tell me not to expect more than the 10-12 TFLOP range.

Man, there's so much goddamn anticipation for these all-important TFLOP figures.

Or who knows.... maybe the cards in that table are clocked higher than 1.8GHz? And 1.8GHz is actually the downclocked console version.

Either way, I used the table when making my calculation but mistakenly used Navi 12 (40 CU) and Navi 10 (48 CU).

On another note, I think people are letting whatever Vega 7 was get to them, and in turn underestimating what AMD should really be able to do with a chip actually built for 7nm. Or maybe Navi is even a 7nm+ chip like Ryzen 3. Getting clocks as high as 1.8GHz on a 7nm process shouldn't be that hard to believe, considering Nvidia could do that on a 16nm process. Maybe when Nvidia releases their own 7nm GPUs and people see those running at 2.4GHz, they will be back to believing that 1.8-2GHz is what to expect from AMD.
 

M3rcy

Member
Oct 27, 2017
702
If the rumor is correct, then Sony has a very sophisticated memory design that will deliver much better price/performance than any pure GDDR6 solution. You obviously save on cost, but just as importantly, you prevent contention between the CPU and GPU for memory access. The highest-performing gaming PCs always use separate pools of memory, and Sony using that approach would be a significant step up from the PS4 and Xbox One X designs.

The highest-performing gaming PCs have no other choice. You don't have enough memory bandwidth to support high-end GPUs with DDR4.
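Rough numbers, assuming a typical dual-channel DDR4-3200 PC and a common 256-bit/14Gbps GDDR6 card:

```python
# Typical PC main memory: 2 channels x 64-bit DDR4-3200,
# shared by the CPU *and* any integrated GPU.
ddr4_gbs = 2 * 64 * 3200e6 / 8 / 1e9     # ~51 GB/s
# Typical high-end GPU: 256-bit GDDR6 @ 14 Gbps/pin, dedicated.
gddr6_gbs = 256 * 14 / 8                 # 448 GB/s
print(f"DDR4: {ddr4_gbs:.0f} GB/s vs GDDR6: {gddr6_gbs:.0f} GB/s")
# DDR4: 51 GB/s vs GDDR6: 448 GB/s
```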
 

VX1

Member
Oct 28, 2017
6,999
Europe
I understand that, but you have to take into account that the SoC also runs a CPU and the memory interface. I would estimate that adds about 30W to what the GPU consumes.

I also want to add that the cooling solutions of such GPUs (in the 10TF ballpark) are quite different to what we see in consoles. Dual 90mm fans, for example. While I would even love to see a single Noctua NF-A12x25 solution, I doubt we will ever see such a thing ...

Richard also noticed, and I think you mentioned it as well, that Cerny never mentioned not just TFs but the GPU at all in that interview, except for the ray-tracing capability.
So... yeah.
 
Feb 10, 2018
17,534
That's because they are ignorant. Who cares what ignorant people think?

It'd be one thing if the reason why the TF rating isn't the end-all be-all of performance indicators in every case hadn't been explained in painstaking detail all over this thread. Anybody who still doesn't get it either isn't capable of understanding or is being willfully obtuse.

It happens all the time. I think a lot of people here like to be disappointed; you should see the pre-Nintendo Direct threads. People expect the craziest megatons to happen, then they don't, and they go on their little outrage. I do find it kind of cute, to be honest.
 

GameSeeker

Member
Oct 27, 2017
164
Will these systems really work on 8K TVs? I find it hard to believe.

Sony implemented custom support for checkerboard rendering in the PS4 Pro to make it easier to scale games that were designed around 1080p up to 4K. Similarly, Sony could use the same checkerboard rendering technology and upscale 4K games to 8K. The upscaling will be so good, you would be hard-pressed to see the difference between it and native 8K.
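The appeal is in the pixel counts: checkerboarding shades roughly half the target pixels each frame and reconstructs the rest. A sketch of the arithmetic (just the pixel math, not Sony's actual pipeline):

```python
# Pixels per frame at each target resolution
native_4k = 3840 * 2160          # ~8.3 MP
native_8k = 7680 * 4320          # ~33.2 MP

# Checkerboard shades ~half the target pixels and reconstructs the rest
cbr_8k = native_8k / 2           # ~16.6 MP actually shaded
print(f"8K native: {native_8k/1e6:.1f} MP, 8K checkerboard: {cbr_8k/1e6:.1f} MP "
      f"({cbr_8k/native_4k:.1f}x the shading work of native 4K)")
# 8K native: 33.2 MP, 8K checkerboard: 16.6 MP (2.0x the shading work of native 4K)
```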
 

anexanhume

Member
Oct 25, 2017
12,912
Maryland
I understand that, but you have to take into account that the SoC also runs a CPU and the memory interface. I would estimate that adds about 30W to what the GPU consumes.

I also want to add that the cooling solutions of such GPUs (in the 10TF ballpark) are quite different to what we see in consoles. Dual 90mm fans, for example. While I would even love to see a single Noctua NF-A12x25 solution, I doubt we will ever see such a thing ...

Discrete GPU cards have memory controllers, memory, VRMs and other IO chips in their total power budget. Only the delta from the VRMs and other IO circuitry is eating into the CPU budget at that point, so it may only be 10-15W, if that.

AMD has done a lot of work on power management in the interim. I'd expect their future SoCs to be able to run with very little of the chip enabled for these tasks. Maybe one core at minimal clocks, one memory channel, the wifi/ethernet, and the SSD/flash would have to be powered up.

That level of power gating would certainly help. A 1.6GHz base clock certainly helps too. I wonder if they undervolt in that case. Would they even need an external ARM SoC at that point? I recall RKSimon having some ARM commits recently, so perhaps they still have some of the bootloader functionality...
 

Deleted member 12635

User requested account closure
Banned
Oct 27, 2017
6,198
Germany
From December last year.

https://pastebin.com/PY9vaTsR/

Though apparently with Navi the TFLOP calculation is a little bit different (closer to Nvidia), so I'm really not sure.
The theoretical peak performance in TF doesn't change if you calculate it with the same number of CUDA cores or AMD stream processors. The real-world performance, though, which comes down to how efficiently the hardware is used, is what differentiates the GPU architectures.
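In other words, the peak-TF formula is vendor-neutral. Applied to two real cards (using their published core counts and boost clocks):

```python
def peak_tf(shaders: int, clock_ghz: float) -> float:
    # shaders x 2 FLOPs/clock (FMA) x clock, in TFLOPs
    return shaders * 2 * clock_ghz / 1000

print(f"GTX 1080 (2560 CUDA cores @ 1.733 GHz): {peak_tf(2560, 1.733):.1f} TF")
print(f"RX 580 (2304 stream processors @ 1.340 GHz): {peak_tf(2304, 1.340):.1f} TF")
# Same formula either way; what differs between architectures is how much
# of that theoretical peak shows up in real games.
```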
 

nib95

Contains No Misinformation on Philly Cheesesteaks
Banned
Oct 28, 2017
18,498
The theoretical peak performance in TF doesn't change if you calculate it with the same number of CUDA cores or AMD stream processors. The real-world performance, though, which comes down to how efficiently the hardware is used, is what differentiates the GPU architectures.

Agreed.
 

GameSeeker

Member
Oct 27, 2017
164
The highest-performing gaming PCs have no other choice. The economics of that market don't justify an APU of the class that is used in consoles.

High-end gaming PCs always have a choice. A split memory design is the best-performing design, period. CPUs and GPUs have very different memory access requirements. One size doesn't fit all, except as a compromise.
 

gofreak

Member
Oct 26, 2017
7,733
Lol... makes sense now....

I strongly doubt that engineering or qualification sample chips are what make it into early dev kits. Early dev kits are basically PCs.

It didn't just get interesting, it got flat-out real. The only thing now is that we don't know if that's actually a PS5 APU; it could be for the next Xbox. And we don't know how many CUs it's being clocked against.

I wonder, though, if they're the forerunners of chips coming sooner for dev kits. If PS4 is anything to go by, the first SoC-based kits should arrive about 10-12 months prior to release.

You're right though. There's nothing to pin these chips to PS5 development necessarily. They could be Xbox chips. Or something else entirely that AMD is working on.
 
OP
Phoenix Splash
Mar 23, 2018
2,654
https://www.resetera.com/threads/ne...ecret-sauces-spicing-2019.91830/post-19961180

This post gives a good summary on scalability; comparing the PS3 + PS4 is a different circumstance to comparing Lockhart + Anaconda, as Lockhart is going to be designed to scale with Anaconda.
Also, TFLOPs can punch above their weight with the correct memory system, like the 1X does (it has double the res over the Pro despite only having 50% more TFLOPs; the PS4 Pro needed 100% more TFLOPs in order to double the PS4's resolution).

Oh I did not read that post. Thanks for pointing it out!
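For reference, the TF ratios behind that quoted comparison, using the published console ratings:

```python
# Published peak FP32 ratings (TF)
ps4, ps4_pro, one_x = 1.84, 4.2, 6.0

print(f"PS4 -> Pro: {ps4_pro / ps4:.2f}x the TF")      # ~2.28x (about +128%)
print(f"Pro -> One X: {one_x / ps4_pro:.2f}x the TF")  # ~1.43x (about +43%)
# The quoted point: the One X's wider, faster memory system let it roughly
# double the Pro's resolution with a much smaller TF jump than the Pro
# needed over the base PS4.
```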


Phoenix Splash
400 pages
Time for a new thread ;)

400 pages in 3 1/2 months...not bad :)

What the heck, I didn't realize we were at 400 pages already hahaha! Just created it!

Come on in, folks!

https://www.resetera.com/threads/ne...nch-thread-my-anaconda-dont-want-none.112607/
 

Pheonix

Banned
Dec 14, 2018
5,990
St Kitts
It depends on the efficiency gains of Navi. Nvidia hits 1700 boost clocks on 12nm in 120W cards. https://en.wikipedia.org/wiki/GeForce_16_series

That's a lofty goal, but AMD has said they deployed Zen engineers to help address perf/Watt issues on Navi. If you assume RX 3080 can hit Vega 64 + 15% performance in 150W, that's either a huge architectural efficiency jump, huge power efficiency jump, or some combo of both.
Actually, Nvidia was hitting 1700MHz even on a 16nm 1080 card, albeit at around 150-300W lol. But surely AMD should be able to do something meaningful at 7nm, especially if efficiency was one of the primary focuses of Navi. 'Cause the 64 CU limit still seems to be in place, so they have to have spent all that engineering time somewhere.
 

sncvsrtoip

Banned
Apr 18, 2019
2,773
According to the Gonzalo leaks, the GPU is Navi 10 Lite, so if this leaked Navi table is correct the CU count is 48-56; at 1800MHz that's 11-12.9 TFLOPs.
 

M3rcy

Member
Oct 27, 2017
702
High-end gaming PCs always have a choice. A split memory design is the best-performing design, period. CPUs and GPUs have very different memory access requirements. One size doesn't fit all, except as a compromise.

They do not. They are still built on the standard PC platform, and that platform simply doesn't deliver enough main memory bandwidth to support more than a modest GPU in addition to the CPU, which is why integrated GPUs don't scale up any higher than they do. They would be bottlenecked by bandwidth, and the extra processing units would be a waste of die space.
 