
john miller

Member
Nov 29, 2017
90
Interesting that there's nothing about the SSD and the I/O stack in this video, considering it was like half of the Cerny presentation.

Yeah, it was strange for DF to ignore the SSD part. At the beginning of the video they said they were going to talk about the small details, but then ignored one of the major parts of Cerny's presentation.
 

Hey Please

Avenger
Oct 31, 2017
22,824
Not America
It's absolutely not like the PS3. If a developer wants to, they can basically just ignore the variable clocks and the increased SSD speeds and call it a day.

I gotta say, coming to this thread made me realize that some people forgot that games are not a continuous series of scripted events with absolutely quantifiable resource requisites per frame. There is always performance overhead set aside to account for player agency and the consequent variability; when there isn't, we see framerate drops and/or screen tearing.

Combined with things like dynamic resolution (and hopefully some tier of VRS), "SmartShift" makes ample sense as a way to divert processing power from moment to moment based on the tasks that need to be executed. After all, console games run at fixed framerate targets, which gives the system-driven power diversion mechanism the opportunity to check, every time, whether a shift would cause performance to drop below the target threshold.
 

gundamkyoukai

Member
Oct 25, 2017
21,105
That, or they just set the clocks to effectively max and change the workload (graphical effects, etc.) to maintain the performance they want, which is basically how every other system works.

Well, if it can work that way I can see a lot of 3rd-party devs doing that, since that's how things are normally done: cut back where need be.
 

MrKlaw

Member
Oct 25, 2017
33,038
It is silly for anyone to claim that the PS5 isn't powerful or that it won't have some crazy good next-generation games developed on it. But outside of the SSD itself there are clear compromises compared to the Series X that developers have to contend with, and their impact may be most pronounced once we get true next-generation games that really push the limits of what is possible with next-gen hardware.

  • The variable frequency approach is definitely a way to have it "punch above its weight" in some cases at least, but it is still something developers have to be aware of: they don't have a fixed, known power budget available for the CPU and GPU at all times, which could constrain their full vision. A game that wants to do some insane things with AI and physics may have to throttle the GPU down more than desired, cutting into the GPU compute available for ray tracing and other graphics work, and vice versa.
  • 36 CUs is pretty much guaranteed to make ray tracing less performant on the PS5 than 52 CUs on the Series X, based on how AMD does things, and the fact that Cerny didn't even mention it again in the deeper tech talk with DF is likely an indication that he knows what he should be selling his platform on, which is smart.
  • Memory bandwidth could definitely end up being a challenge given the 20 GB/s mentioned for audio, where even Cerny admits you have to be careful not to impact graphics processing, and that is on top of the up to 9 GB/s of decompressed bandwidth coming off the SSD (rough arithmetic sketched below).
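To put rough numbers on that last point, here's a back-of-envelope sketch. The 448 GB/s total is the commonly quoted PS5 GDDR6 figure rather than something from this video, and treating the audio and SSD streams as sustained worst-case concurrent peaks is my assumption:

```python
# Back-of-envelope effective bandwidth left for CPU+GPU on PS5.
# 448 GB/s is the commonly quoted GDDR6 total (assumption, not from the video);
# the 20 and 9 GB/s figures are the worst-case numbers discussed above.
total_bw      = 448  # GB/s, 16 GB GDDR6 on a 256-bit bus
audio_bw      = 20   # GB/s, Tempest audio worst case
ssd_stream_bw = 9    # GB/s, decompressed SSD data being written into RAM

remaining = total_bw - audio_bw - ssd_stream_bw
print(f"Left for CPU+GPU: {remaining} GB/s ({remaining / total_bw:.0%} of total)")
# -> Left for CPU+GPU: 419 GB/s (94% of total)
```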

What do we know about how AMD does RT? Genuinely curious, because normally you'd expect performance to be not directly related to CU count alone, but a function of CU count and frequency. If they're set up for RT differently, it'd be interesting to know.

Agree with your other points - there are caveats that devs may need to balance. *may* - some may not have any issues with clocks, but if they do, they'll need to make a call on where to prioritise power. The drop in actual performance shouldn't be too big, but it is one more thing to consider that they don't need to worry about on XSX. Probably not a huge issue - multiplatform devs are more likely to have flexible engines designed for multiple PC specifications, so some adjustment shouldn't be a big thing.
 
Dec 8, 2018
1,911
So you are looking at it backwards.

Any developer (including Sony) would ideally prefer the maximum possible GPU and CPU chip performance set at all times.

But that's simply not possible. You just are not able to reach these high clock speeds all the time under every possible workload.

The PS5 is designed to take advantage of the fact that variable performance allows the system to reach a higher performance level than it would if the same hardware had to conform to stable clocks. Stable clocks inevitably leave some performance on the table that the hardware would otherwise be able to achieve with more flexibility.

So, for example, if the SX could magically adopt this variable system with all other specs remaining the same, that would produce a stronger system in the end than the SX is now, not a weaker one. But I say "magically" because that's not possible, there would be trade offs for MS to do this that they did not want to make.

Thanks for this answer, very informative.
 

Deleted member 43

Account closed at user request
Banned
Oct 24, 2017
9,271
Boost clock smells like some bullshit to me.
It's not.
"There's enough power that both CPU and GPU can potentially run at their limits of 3.5GHz and 2.23GHz, it isn't the case that the developer has to choose to run one of them slower."

This has been quoted multiple times already and was mentioned in the article.
Workloads and clock speeds are not the same thing. You can have a workload that means having to run the CPU at a lower speed in order to maintain the top GPU clock, just like you can have a workload that allows you to keep both the CPU and GPU at max.
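A toy model of what that means in practice (purely illustrative; the wattage numbers, the linear scaling, and the GPU-first policy are all made up for the sketch, not Sony's actual algorithm):

```python
# Toy model: a fixed total power budget shared by the CPU and GPU.
# Whether clocks drop depends on what the workload draws, not on a
# forced either/or choice. All numbers are invented for illustration,
# and real power does not scale linearly with clock.
CPU_MAX_GHZ, GPU_MAX_GHZ = 3.5, 2.23
POWER_BUDGET = 200.0  # watts, made up

def clocks_for(cpu_draw_at_max: float, gpu_draw_at_max: float):
    """(cpu_ghz, gpu_ghz) for a workload whose parts would draw the given
    wattage if left at max clock."""
    if cpu_draw_at_max + gpu_draw_at_max <= POWER_BUDGET:
        return CPU_MAX_GHZ, GPU_MAX_GHZ   # light enough: both stay at max
    # Over budget: hold the GPU at max, scale the CPU down to fit
    # (one possible policy; the split could go the other way).
    scale = max((POWER_BUDGET - gpu_draw_at_max) / cpu_draw_at_max, 0.0)
    return CPU_MAX_GHZ * scale, GPU_MAX_GHZ

print(clocks_for(60, 130))  # (3.5, 2.23)   -> under budget, both at max
print(clocks_for(90, 130))  # (~2.72, 2.23) -> CPU drops to hold the GPU clock
```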
 
Oct 27, 2017
4,642
For some reason Mark Cerny/ Sony seems extremely wary of DF/ Eurogamer (which might very simply be that they have their own timeline for reveals), whereas MS seems to just have full open access and a ton of coverage lined up with them.

Which brings me to a question I have for Dark1x for the sake of transparency (and my apologies if this has already been discussed/ asked):
Is there a contractual agreement to cover the Series X between Eurogamer/ DF and Microsoft, for some kind of exclusive access (much in the same way Sony seemed to have a deal with Wired last year) ?
I don't think it's anything nefarious like that.

It's more along the lines that last gen, DF showed most games being comparable between PS3 and 360, with a minor-to-mild edge going to the 360 due to the implications of shared RAM and the difficulty of optimising for Cell. This gen, however, there was a much more significant gap, with the PS4 widely outperforming the XB1, and it reflected badly on MS in the DF face-offs. We all know the result of that.

MS decided to redeem themselves with the core gaming community by chasing power again with the 1X, and being very open with DF was a way to win back hearts and minds. I doubt there is anything more nefarious going on than the usual PR targeting the section of the audience MS wants to appeal to.
 

Straffaren666

Member
Mar 13, 2018
84
DF makes the point that increasing the clock without increasing bandwidth limits the performance gains, and as they show, for an RDNA1 GPU with the same number of CUs and bandwidth as the PS5, at 1.9GHz you already start seeing the gains become disproportional (increasing 200MHz from 1.7 to 1.9 saw twice the performance gain of going from 1.9 to 2.1).

With the large bandwidth gap it's hard to tell where the performance gap between these machines is going to land and for which workloads (kinda like how in some games the X can blast past the 40% difference thanks to its higher bandwidth).

But I do think it's not sounding as drastic as X vs. Pro.

In a benchmark like that, the workload can be bound by the CPU, the GPU and/or memory. We can rule out the CPU, since a powerful CPU is usually used when benchmarking GPUs. That leaves us with the GPU and/or memory.

If a GPU clock frequency increase of x% doesn't result in an x% performance increase, then either we're memory bound, or the actual GPU clock frequency doesn't increase as much as the overclock utility implies, or a combination of both. If we at the same time know that a memory clock frequency increase of y% doesn't result in a y% performance increase, then we know it's a combination of both; and if we know that a memory clock frequency increase of 9% results in only a 1% performance increase, we can deduce that the discrepancy in performance scaling is mostly due to the actual core clock not rising as much as the overclock utility indicates.

The 5700XT has a variable clock frequency, and unless you know the average clock frequency during the benchmark, you will most likely draw the wrong conclusions. To complicate it further, the average clock frequency depends on the characteristics of the benchmark's workload, which means it varies from one benchmark to another, and even between different resolutions of the same benchmark.
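That attribution argument can be written out explicitly. A sketch with illustrative numbers (only the 9%-to-1% memory figure comes from the post above; the rest are placeholders):

```python
# Sketch of the attribution argument above: how much of an x% clock
# increase actually shows up as performance?
def scaling_efficiency(clock_gain_pct: float, perf_gain_pct: float) -> float:
    """1.0 = perfectly clock-bound; near 0 = the clock change did nothing."""
    return perf_gain_pct / clock_gain_pct

mem_eff  = scaling_efficiency(9, 1)    # memory +9% -> perf +1% (from the post)
core_eff = scaling_efficiency(10, 4)   # core +10% -> perf +4% (placeholder)

# Memory scaling near zero means the benchmark is barely memory bound, so if
# core scaling is also well below 1.0, the leftover gap points at the *actual*
# average core clock rising less than the overclocking utility requested.
print(f"memory efficiency {mem_eff:.2f}, core efficiency {core_eff:.2f}")
```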
 

RivalGT

Member
Dec 13, 2017
6,393
The RDNA1 GPU testing seems flawed, since those cards are likely power limited. No amount of overclocking (GPU clocks or RAM) or cooling will get you better performance if you are power limited.
 

•79•

Banned
Sep 22, 2018
608
South West London, UK
I really want to know why people are so hung up on clocks having to be at 100% at all times. Is there even any game that constantly taxes the system like that?

And again, for people who didn't read about how the architecture works: it's not dependent on temperature anymore, it's dependent on power draw only, which totally changes how the processors behave, and this article again reiterates that there isn't throttling going on.

Console Top Trumps! More and higher numbers better.


Very few people commenting in here are working on games for these machines, very few understand conceptually what's going on with PS5 in particular and a number of folk are just being disingenuous.
 

riotous

Member
Oct 25, 2017
11,325
Seattle
Workloads and clock speeds are not the same thing. You can have a workload that means having to run the CPU at a lower speed in order to maintain the top GPU clock, just like you can have a workload that allows you to keep both the CPU and GPU at max.

The heavier the workload, the greater the need to lower clocks, right?

Which is what is interesting about it; the more complex the game, the less clock speed it will have access to.

The "Catch 22 APU."
 

Civzy

Member
Mar 21, 2019
142
The tests Richard discusses, using two GPUs at 9.67TF (one with fewer CUs and a higher clock, one with more CUs and a lower clock), are definitely of interest to me. I hope he's able to make a video with quite a few different sample cards. 4 extra CUs at lower clocks gave a 2.5%+ boost in performance.
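For context, a fixed TF target pins down the CU/clock pairs exactly, using the standard RDNA arithmetic of 64 shaders per CU and 2 FLOPs per clock (the 36/40 CU counts here are my guess at the kind of pairing tested, not confirmed card configs):

```python
# Clock each CU count needs to hit a fixed 9.67 TF target.
# TF = CUs * 64 shaders * 2 FLOPs/clock * GHz / 1000
def clock_for(tflops: float, cus: int) -> float:
    return tflops * 1000 / (cus * 64 * 2)

for cus in (36, 40):  # hypothetical narrow-and-fast vs wide-and-slow pairing
    print(f"{cus} CUs -> {clock_for(9.67, cus):.2f} GHz")
# 36 CUs -> 2.10 GHz
# 40 CUs -> 1.89 GHz
```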
 

Deleted member 2441

User requested account closure
Banned
Oct 25, 2017
655
It's not.

Workloads and clock speeds are not the same thing. You can have a workload that means having to run the CPU at a lower speed in order to maintain the top GPU clock, just like you can have a workload that allows you to keep both the CPU and GPU at max.

I know? You're preaching to the choir here...

I'm responding to a user who asked if the set power limit restricts both the CPU and GPU from running at max clocks concurrently. He's directly asking about clock speeds.

Does it restrict them? No, we already have explicit confirmation they can both run at max speeds simultaneously.

Can it? Yes, we already know that a decrease in power for a particular component, based on workload, can cause a frequency reduction.
 

Adder7806

Member
Dec 16, 2018
4,122
It's not.

Workloads and clock speeds are not the same thing. You can have a workload that means having to run the CPU at a lower speed in order to maintain the top GPU clock, just like you can have a workload that allows you to keep both the CPU and GPU at max.
Thank you for your explanations. Very informative. You have the patience of a saint.
 

platocplx

2020 Member Elect
Member
Oct 30, 2017
36,072
Console Top Trumps! More and higher numbers better.


Very few people commenting in here are working on games for these machines, very few understand conceptually what's going on with PS5 in particular and a number of folk are just being disingenuous.
Yeah. It seems to me that Sony went for a model of getting the best bang for your buck when it comes to performance. Running at 100% all the time is an absolute waste of system resources if the workload doesn't call for it, and I don't know any game like that; demand is almost always going up and down. What Cerny was getting at is that devs can know exactly what performance they are getting in every scene on every PS5, and distribute workloads to the CPU, GPU, or both based on the needs of the tasks on screen.
 

Deleted member 43

Account closed at user request
Banned
Oct 24, 2017
9,271
I know? You're preaching to the choir here...

I'm responding to a user who asked if the set power limit restricts both the CPU and GPU from running at max clocks concurrently. He's directly asking about clock speeds.

Does it restrict them? No, we already have explicit confirmation they can both run at max speeds simultaneously.

Can it? Yes, we already know that a decrease in power for a particular component, based on workload, can cause a frequency reduction.
I was expanding on the point, not countering you.
Thank you for your explanations. Very informative. You have the patience of a saint.
Thanks! Just trying my best.
 

Convasse

Member
Oct 26, 2017
3,814
Atlanta, GA, USA
For some reason Mark Cerny/ Sony seems extremely wary of DF/ Eurogamer (which might very simply be that they have their own timeline for reveals), whereas MS seems to just have full open access and a ton of coverage lined up with them.

Which brings me to a question I have for Dark1x for the sake of transparency (and my apologies if this has already been discussed/ asked):
Is there a contractual agreement to cover the Series X between Eurogamer/ DF and Microsoft, for some kind of exclusive access (much in the same way Sony seemed to have a deal with Wired last year) ?
Perhaps, and you may have noticed this, Sony has been less forthcoming with everyone about PS5 details? Whereas MS has been notably more hands-on and open to discussion and in-depth technical detail. The fault for that lies not with DF/Eurogamer, but with Sony. Why not continue their partnership with Wired? Sony started the next-gen ball rolling in April 2019, mind you. Many questions, few answers.
 

bcatwilly

Member
Oct 27, 2017
2,483
What do we know about how AMD does RT? Genuinely curious, because normally you'd expect performance to be not directly related to CU count alone, but a function of CU count and frequency. If they're set up for RT differently, it'd be interesting to know.

Agree with your other points - there are caveats that devs may need to balance. *may* - some may not have any issues with clocks, but if they do, they'll need to make a call on where to prioritise power. The drop in actual performance shouldn't be too big, but it is one more thing to consider that they don't need to worry about on XSX. Probably not a huge issue - multiplatform devs are more likely to have flexible engines designed for multiple PC specifications, so some adjustment shouldn't be a big thing.

We at least know that RT is dependent on CUs, as you noted, even though frequency plays some role; my general point is that no frequency advantage here is going to overcome the 16 extra CUs on the Series X, which also has greater memory bandwidth.
 

platocplx

2020 Member Elect
Member
Oct 30, 2017
36,072
Combined with things like dynamic resolution (and hopefully some tier of VRS), "SmartShift" makes ample sense as a way to divert processing power from moment to moment based on the tasks that need to be executed. After all, console games run at fixed framerate targets, which gives the system-driven power diversion mechanism the opportunity to check, every time, whether a shift would cause performance to drop below the target threshold.
Exactly this.
 

AllChan7

Tries to be a positive role model
Member
Apr 30, 2019
3,670
So from my understanding, the variable clocks are just a way to ensure efficient use of the CPU and GPU depending on the workload? That doesn't sound concerning, and I have yet to see any devs express any sort of dread regarding the PS5 architecture. Not that it'll matter once the games are shown off. If anything, the PS5 will probably be easier to develop for than the PS4 was.
 

Straffaren666

Member
Mar 13, 2018
84
Somewhere else in the GPU. RDNA is good, but not perfect of course. They sell it at specific frequencies, with certain cache amounts, a certain number of ROPs and TMUs, a certain memory bus, memory speed, etc. for a reason.

It doesn't make sense. If the GPU clock frequency increases by x%, then all the units of the GPU run x% faster, and any workload that isn't memory bound should run x% faster.
 

Deleted member 43

Account closed at user request
Banned
Oct 24, 2017
9,271
Perhaps, and you may have noticed this, Sony has been less forthcoming with everyone about PS5 details? Whereas MS has been notably more hands-on and open to discussion and in-depth technical detail. Perhaps the fault for that lies not with DF/Eurogamer, but with Sony? Why not continue their partnership with Wired? Many questions, fewer answers.
MS has been more reserved privately than Sony, but more open publicly.

They are just different strategies.
 

marecki

Member
Aug 2, 2018
251
Not an ALT and not trolling.

I will ask one simple question: has it been confirmed that the boost clocks can run at max 100% of the time for every game? If the answer is no, it causes confusion, because now you have more questions: 1) what types of games run at max 100% of the time, 2) for demanding games, what % of the time are both operating at max...

My point is Sony is being vague intentionally.
Theoretically the answer is NO, it cannot, but neither could current-gen consoles or the Series X. Cerny explains that if you define max load as the maximum possible number of transistors switching at once at the highest clock for a prolonged time (as unrealistic as that theoretical scenario is), it would simply shut down the console due to overheating.
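The physics behind that trade-off is why a small clock sacrifice buys disproportionate headroom. A rule-of-thumb sketch (dynamic power is roughly proportional to frequency times voltage squared, and voltage has to rise roughly with frequency, so power grows roughly with the cube of frequency; the cubic exponent is a textbook approximation, not a Sony figure):

```python
# Rule of thumb: dynamic power ~ f * V^2, with V rising roughly with f,
# so power scales roughly with f**3. The exponent is an approximation.
def power_saved(clock_drop_pct: float, exponent: float = 3.0) -> float:
    return 1 - (1 - clock_drop_pct / 100) ** exponent

for drop in (2, 5, 10):
    print(f"{drop}% lower clock -> ~{power_saved(drop):.0%} less power")
# 2% -> ~6%, 5% -> ~14%, 10% -> ~27%
```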
 

Smokey

Member
Oct 25, 2017
4,175
gundamkyoukai
Matt
riotous
modiz
chris 1515

When I say PS3 vibes, I'm talking about compared to the PS4, which was much more "off the shelf". Here they're doing a lot more customization on their system, which I think is cool; it just seems there's a lot of theorizing about what certain features can and can't do. The Cell was notoriously difficult to program for, and that was a different era at Sony. I was leaning more towards the overall system configuration compared to their competition. It's similar to PS3 and 360 in that respect, in my opinion.
 

jroc74

Member
Oct 27, 2017
28,992
There is no such agreement at all. We're free to do as we please with the information we gleaned during the visit. Same goes for Austin Evans who was on-site with us! It's literally just a normal kind of press visit.

I think it's just the difference between a Japanese and American company more than anything. Sony is just holding the cards close to their chest at the moment. Also, the virus situation hurt our plans for this (not that it was going to involve seeing the box or anything). The Xbox stuff just squeaked by.

I think there are just more limits here as Microsoft had already revealed a lot more before we saw anything. Sony is taking a very different approach and that's fine! We'll know more soon enough.

Thanks for the explanation. Appreciate what you, Dictator, and the DF/Eurogamer crew do.
::reads thread::

Sigh.

Yeah, isn't it great?
The systems will end up being closer than the PS4 vs. XBO or X vs. Pro.
It's absolutely not like the PS3. If a developer wants to, they can basically just ignore the variable clocks and the increased SSD speeds and call it a day.

No, it's not thermally bound in the traditional sense, it's bound by electrical draw. The system won't throttle based on heat.

Yes.

That, or they just set the clocks to effectively max and change the workload (graphical effects, etc.) to maintain the performance they want, which is basically how every other system works.
Remember, Matt gave the nod to the Series X in an early speculation OT... with an educated guess...

So don't start doubting him now, ok?
 

Dictator

Digital Foundry
Verified
Oct 26, 2017
4,930
Berlin, 'SCHLAND
It doesn't make sense. If the GPU clock frequency increases by x%, then all the units of the GPU run x% faster, and any workload that isn't memory bound should run x% faster.
There has not been much research done on this for RDNA, but you should spend some time looking at the GCN tests made regarding core frequency and memory speed. GCN also did not show the linear scaling you describe.

Same thing here is all.
 

Lukas Taves

Banned
Oct 28, 2017
5,713
Brazil
The systems will end up being closer than the PS4 vs. XBO or X vs. Pro.
I wasn't fishing for a definitive answer like this, but I can't help noticing the wording.

You mean they will end up landing there, meaning that right now they aren't that close? And why is that?
So you are looking at it backwards.

Any developer (including Sony) would ideally prefer the maximum possible GPU and CPU chip performance set at all times.

But that's simply not possible. You just are not able to reach these high clock speeds all the time under every possible workload.

The PS5 is designed to take advantage of the fact that variable performance allows the system to reach a higher performance level than it would if the same hardware had to conform to stable clocks. Stable clocks inevitably leave some performance on the table that the hardware would otherwise be able to achieve with more flexibility.

So, for example, if the SX could magically adopt this variable system with all other specs remaining the same, that would produce a stronger system in the end than the SX is now, not a weaker one. But I say "magically" because that's not possible, there would be trade offs for MS to do this that they did not want to make.
And again the wording here got me thinking.

What trade-offs would MS have needed to make to have the same system? And would it have been possible to have the same specs plus the boost system, or would those trade-offs have been reflected in the specs themselves?
 

sncvsrtoip

Banned
Apr 18, 2019
2,773
We at least know that RT is dependent on CUs as you noted even though frequency would play some role, but my general point is that no frequency advantage here is going to overcome 16 more CUs available on the Series X with greater memory bandwidth too.
It doesn't overcome it; the frequency just lowers the 1.44x CU advantage to about 1.2x.
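The arithmetic behind that, using the publicly quoted specs and plain CU-count times clock scaling (which ignores any non-linearities in how RT or other workloads actually scale):

```python
# CU advantage vs. effective compute advantage from the quoted specs.
ps5_cus, ps5_ghz = 36, 2.23    # PS5 at its peak clock
xsx_cus, xsx_ghz = 52, 1.825   # Series X fixed clock

cu_ratio = xsx_cus / ps5_cus
tf_ratio = (xsx_cus * xsx_ghz) / (ps5_cus * ps5_ghz)
print(f"CU ratio {cu_ratio:.2f}x, compute ratio {tf_ratio:.2f}x")
# -> CU ratio 1.44x, compute ratio 1.18x
```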
 

chris 1515

Member
Oct 27, 2017
7,074
Barcelona Spain
gundamkyoukai
Matt
riotous
modiz
chris 1515

When I say PS3 vibes, I'm talking about compared to the PS4, which was much more "off the shelf". Here they're doing a lot more customization on their system, which I think is cool; it just seems there's a lot of theorizing about what certain features can and can't do. The Cell was notoriously difficult to program for, and that was a different era at Sony. I was leaning more towards the overall system configuration compared to their competition. It's similar to PS3 and 360 in that respect, in my opinion.

The question is whether this is hard for developers to use. Here it is easy. I'm pretty sure getting fast load times needs work, but nothing difficult.

On top of that advantage, there are the first parties to show what the console is able to do. Sony has multiple very good studios able to show the console's strong points, with commercial and critical acclaim.
 

Deleted member 2441

User requested account closure
Banned
Oct 25, 2017
655
gundamkyoukai
Matt
riotous
modiz
chris 1515

When I say PS3 vibes, I'm talking about compared to the PS4, which was much more "off the shelf". Here they're doing a lot more customization on their system, which I think is cool; it just seems there's a lot of theorizing about what certain features can and can't do. The Cell was notoriously difficult to program for, and that was a different era at Sony. I was leaning more towards the overall system configuration compared to their competition. It's similar to PS3 and 360 in that respect, in my opinion.

It's still the same fundamental basis. x86-64 architecture on the CPU and a custom GPU using the same architecture as desktop products, same memory type as desktop GPUs, etc.

Cell was a completely different, bespoke architecture. It was different at a fundamental level.

Cerny talked about this in the presentation, about being easy to learn but having room to master. They have a common base architecture with PCs and XSX, but bespoke add-ons that developers can learn to use over time to eke more out of it.
 

HeWhoWalks

Member
Jan 17, 2018
2,522
gundamkyoukai
Matt
riotous
modiz
chris 1515

When I say PS3 vibes, I'm talking about compared to the PS4, which was much more "off the shelf". Here they're doing a lot more customization on their system, which I think is cool; it just seems there's a lot of theorizing about what certain features can and can't do. The Cell was notoriously difficult to program for, and that was a different era at Sony. I was leaning more towards the overall system configuration compared to their competition. It's similar to PS3 and 360 in that respect, in my opinion.

Which is actually one of the reasons I'm more excited about the PS5 than I ever was about the PS4. Developers also seem to be on board (unlike at any point prior to the PS3's launch).
 

Straffaren666

Member
Mar 13, 2018
84
There has not been much research done on this for RDNA, but you should spend some time looking at the GCN tests made regarding core frequency and memory speed. GCN also did not show the linear scaling you describe.

Same thing here is all.

BTW, I elaborated on this the other day. Not even under liquid nitrogen cooling to -50 degrees Celsius does the clock frequency of the 5700XT seem to be stable. See this link: https://youtu.be/JS9in-RHbjw?t=2172.
 
Oct 27, 2017
3,893
ATL
This was a great deep dive. I too wish that Sony was more forthcoming about the actual feature set of the GPU, though. I'm guessing they're waiting for AMD to reveal the full specification of RDNA2? Hopefully they reveal their box and have the full teardown soon.
 

EvilBoris

Prophet of Truth - HDTVtest
Verified
Oct 29, 2017
16,680
It's not.

Workloads and clock speeds are not the same thing. You can have a workload that means having to run the CPU at a lower speed in order to maintain the top GPU clock, just like you can have a workload that allows you to keep both the CPU and GPU at max.

I don't get it, it sounded like you just described a situation where the GPU can only run that fast if the CPU is running at a lower speed, but also a situation where that doesn't matter and they can both run at max speed.
 

Dictator

Digital Foundry
Verified
Oct 26, 2017
4,930
Berlin, 'SCHLAND
Rich made a video in 2019 looking at GCN vs RDNA scaling and comparing the two; it's neat.

But you can just read most of the great websites out there like Computerbase.de, take a look at any of their reviews for GCN cards like the R9 290X, Fury X, Vega 64, or Radeon VII, and check out their overclocking section. It will show unbalanced results when changing core and memory frequency.
BTW, I elaborated on this the other day. Not even under liquid nitrogen cooling to -50 degrees Celsius does the clock frequency of the 5700XT seem to be stable. See this link: https://youtu.be/JS9in-RHbjw?t=2172.

There is a reason Rich stopped his tests at 2100MHz.
 

MrKlaw

Member
Oct 25, 2017
33,038
We at least know that RT is dependent on CUs, as you noted, even though frequency plays some role; my general point is that no frequency advantage here is going to overcome the 16 extra CUs on the Series X, which also has greater memory bandwidth.

But what would make a difference compared to the current 15% or so TF difference? Wouldn't RT be in line with that (until we know more about how it works)?