Status
Not open for further replies.

Deleted member 63832

User requested account closure
Banned
Feb 14, 2020
420
Don't all modern GPU/CPUs raise and lower their clock speed depending on the demand?

That's what I thought this meant when I saw the presentation. I could be wrong tho.
 

zombiejames

Member
Oct 25, 2017
11,934
The problem with the PS5 specs is that it's not clear whether the given CPU and GPU clocks are a jointly sustainable reality or whether either has to cut back when the other has been at maximum clocks for a while.

It's been said before, but Sony's presentation was lacking in detail, so we have to wait for real-world performance comparisons on cross-platform titles before we can form any conclusions about the general performance delta between platforms.

Another Cerny transcript:

"That doesn't mean all games will be running 2.23GHz and 3.5GHz. When that worst-case game arrives, it will run at a lower clock speed but not too much lower. To reduce power by 10%, it only takes a couple of percent reduction in frequency. So I'd expect any down-clocking to be pretty minor."

Sounds clear to me that both can't run at their maximums simultaneously.
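For anyone who wants to sanity-check the "couple of percent of frequency buys ~10% of power" line, here's a rough back-of-the-envelope sketch in Python. It assumes dynamic power scales roughly as V² x f with voltage tracking frequency (so roughly f³); that exponent is my assumption, not something Cerny stated.

[CODE]
# Toy model: dynamic CMOS power ~ C * V^2 * f, and with DVFS the voltage
# roughly tracks frequency, so power ~ f^3. The cubic is an assumption.
def power_fraction(freq_fraction: float, exponent: float = 3.0) -> float:
    """Fraction of max power left after clocking down to freq_fraction."""
    return freq_fraction ** exponent

for drop_pct in (1, 2, 3, 4):
    f = 1 - drop_pct / 100
    print(f"{drop_pct}% lower clock -> ~{(1 - power_fraction(f)) * 100:.1f}% less power")

# Under this assumption a ~3.5% downclock frees up about 10% of the power
# budget, which is in the same ballpark as the "couple of percent" quote.
[/CODE]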
 
Dec 4, 2017
11,481
Brazil
Don't all modern GPU/CPUs raise and lower their clock speed depending on the demand?

That's what I thought this meant when I saw the presentation. I could be wrong tho.
As I understand it, it's a way to reduce energy consumption and heat without losing much performance. The PS5 will be able to reproduce the same conditions every time a game issues a given GPU or CPU workload, so you won't see reduced performance just because your PS5 is in a hot room, for example, while at the same time this approach will generate less heat. Or something like that.

If I had to speculate, I would say that Sony is either looking for a way to reduce the number of defective devices over the generation or is looking to resolve complaints about console noise.

It would be funny if such a decision that is generating "anger" for making the console weaker ends up saving the consoles from malfunctioning.
 
Last edited:

mordecaii83

Avenger
Oct 28, 2017
6,862
Another Cerny transcript:

"That doesn't mean all games will be running 2.23GHz and 3.5GHz. When that worst-case game arrives, it will run at a lower clock speed but not too much lower. To reduce power by 10%, it only takes a couple of percent reduction in frequency. So I'd expect any down-clocking to be pretty minor."

Sounds clear to me that both can't run at their maximums simultaneously.
Did you miss the 5 minutes before that where he literally talks about both chips running at their max speed a majority of the time?

Even in the quote you just posted, he says "That doesn't mean ALL games will be running at 2.23GHz AND 3.5GHz". Which means that at least some will be. Which means that it's possible.
 

nib95

Contains No Misinformation on Philly Cheesesteaks
Banned
Oct 28, 2017
18,498
Another Cerny transcript:

"That doesn't mean all games will be running 2.23GHz and 3.5GHz. When that worst-case game arrives, it will run at a lower clock speed but not too much lower. To reduce power by 10%, it only takes a couple of percent reduction in frequency. So I'd expect any down-clocking to be pretty minor."

Sounds clear to me that both can't run at their maximums simultaneously.

The transcript literally points to the opposite being true. "That doesn't mean all games will be running 2.23GHz and 3.5GHz" implies there will be games that will, which in turn means the system can run at 2.23GHz and 3.5GHz in tandem.

I suppose we'll find out more in future anyway
 

zombiejames

Member
Oct 25, 2017
11,934
Did you miss the 5 minutes before that where he literally talks about both chips running at their max speed a majority of the time?
Not at all. What I quoted is what he said right after. Sounds to me like if a developer needs the CPU to be at 3.5GHz, they can have it. If they need the GPU to be at 2.23GHz, they can have it. But if they need both at their maximum at the same time, one or both will drop by a few percent.
 

mordecaii83

Avenger
Oct 28, 2017
6,862
Not at all. What I quoted is what he said right after. Sounds to me like if a developer needs the CPU to be at 3.5GHz, they can have it. If they need the GPU to be at 2.23GHz, they can have it. But if they need both at their maximum at the same time, one or both will drop by a few percent.
You should read my edit and the post right above yours, I believe you misread the statement.
 

Phellps

Member
Oct 25, 2017
10,809
Not at all. What I quoted is what he said right after. Sounds to me like if a developer needs the CPU to be at 3.5GHz, they can have it. If they need the GPU to be at 2.23GHz, they can have it. But if they need both at their maximum at the same time, one or both will drop by a few percent.
I think what Cerny meant is that the clocks will drop to reduce heat when possible, not when the game needs that clock at its highest speed. He specifically used God of War as an example of a game that generated too much heat because both the GPU and CPU were running at their highest clocks, even though it probably didn't need all of the CPU's power at the time.
 

Castamere

Member
Oct 26, 2017
3,517
I think what Cerny meant is that the clocks will drop to reduce heat when possible, not when the game needs that clock at its highest speed. He specifically used God of War as an example of a game that generated too much heat because both the GPU and CPU were running at their highest clocks, even though it probably didn't need all of the CPU's power at the time.

Which seems like a direct response to "the fans are too loud" this gen. It seems like a big focus was making sure the system would stay cool and quiet, whereas MS wanted to stay at native 4K on almost every title.
 

tusharngf

Member
Oct 29, 2017
2,288
Lordran
I'm not a game programmer, but I do develop GPU-accelerated scientific models, so I have a few years' experience in optimizing for different GPU architectures to accelerate computation. So I have a sense of what makes something fast or slow, as I've seen it play out on various hardware.

The notion of a 'sustained TF rate' doesn't really make sense. The TF number is a peak theoretical number that will in all likelihood never actually be hit on either console (possibly occasionally for a few milliseconds at a time, but generally they'll not be computing at that rate). It's not a benchmark, it's just a way of counting components and clocks within a single number. It's actually a lot like the way a business considers the number of 'man hours' it'll take to perform a task. The actual computational throughput will be determined by things like thread occupancy and how much shared and local memory each thread requires. Since both consoles use the same architecture, this in theory affects both equally, but it's not nonsense to suggest the higher clock rate gives a bit of help to the PS5 here, as it's able to utilize local data (the data sitting in the GPU cache) and shuffle it out for the next piece of data that it needs more readily. It is an effective bandwidth increase on the RAM->cache->computation pipeline.

More technical version:
The biggest factor in speeding up a GPU is how much you can saturate the compute units (those CUs that we keep hearing about). If you can get a concurrent thread on each ALU within each CU, and have minimal or no requirement to reach back to VRAM within a kernel call to swap data in and out of the CU cache, then you can get pretty close to your peak throughput. This is, in practice, not common, as the available local storage within the CU is tiny, a few tens of KB shared between all the ALUs. For example, in the Nvidia Volta architecture (which I'm most familiar with), there is a single 256KB block of memory (arranged in 32-bit registers) for every thread running on that SM to use for data exclusive to that thread. In a perfect world, every one of the 64 CUDA cores in a single Volta SM would have its own thread, meaning each one gets ~4KB per thread to store useful data. (There is 96KB of shared memory as well, but I'll ignore this for the moment. It's extremely useful but somewhat immaterial for this explanation.) This is not generally practical, in my experience, so you're left with two options, not mutually exclusive: you can reduce the number of concurrent threads, or you can periodically swap data in and out of registers by calling back to VRAM. The former is what Cerny was alluding to when he said it's harder to fill more CUs than fewer, although I somewhat disagree with his characterization in the case where all CUs on both platforms have access to the same relative register and shared memory. I'm not familiar with RDNA2 though, so I don't want to comment on that too much. In the latter case, which is almost always necessary to some degree, you can think of an analogy to screen tearing for what happens under the hood.

Several threads are going about their business, making computations; thread A says 'oh, I need something from VRAM'. Thread A then gets paused, and thread B gets moved into its place to keep computing while the data for thread A is fetched. Meanwhile, thread B finishes its work, and thread A either is ready to keep going or isn't. This is determined by the latency of access to VRAM. Now if thread A isn't ready, most likely another thread gets moved into thread B's place and keeps going. If thread A is ready, then it'll get shuffled back in to pick up the computation where it left off with its new data. The analogy to screen tearing is this: if you have ever played without v-sync on a 60Hz monitor, and then on a 144Hz monitor, you've probably noticed that screen tearing is far less noticeable at the higher refresh rate. This is because the gap between getting new data and being able to display it is smaller. A similar analogy holds with clock speeds in a GPU. A faster clock speed will generally lead to less 'down time' in any given ALU, as it is more likely to be ready to go sooner when the requisite data is available.

What I want to point out is that NONE of this shows up in a TF metric. The underlying reality of swapping data in and out, the various bottlenecks, tradeoffs, etc., all of that is presumed to essentially not exist when discussing TF. However, this is one of the biggest considerations when doing optimization, as you have to take into account these facts of life about having threads 'stall', so to speak.

Will this make the PS5 faster in computations than the XSX? In a few cases, possibly, but in general, no, it won't. However, it does mean that the story on what that gap is isn't as simple as many here are claiming. I expect that the PS5 may generally run at a slightly lower resolution (some quick calculations would put a resolution of 3504x1971 at a hair more than 16% reduction in pixel count), but in many cases I think the gap will be closer than you'd expect from a raw TF count, because the higher clock speed does help 'in the real world' in a somewhat non-linear fashion compared to raw TF numbers, in that it makes the penalty of moving data in and out of local storage smaller. It's not a HUGE difference (at least not in most cases), but it's not nothing.
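(To put rough numbers on the "counting components and clocks" point and on that ~16% pixel figure, here's the arithmetic, assuming the publicly stated CU counts and max clocks; it's a sketch of the book-keeping, not a benchmark.)

[CODE]
# Peak FP32 TFLOPs = CUs * 64 ALUs per CU * 2 ops per clock (FMA) * clock in GHz / 1000
def peak_tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

print(f"PS5: {peak_tflops(36, 2.23):.2f} TF")   # ~10.28 TF at the max GPU clock
print(f"XSX: {peak_tflops(52, 1.825):.2f} TF")  # ~12.15 TF at the fixed clock

# The resolution comparison: 3504x1971 vs native 4K.
native_4k = 3840 * 2160
reduced = 3504 * 1971
print(f"Pixel reduction: {(1 - reduced / native_4k) * 100:.1f}%")  # ~16.7%
[/CODE]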
thank you for the explanation !!
 

Phellps

Member
Oct 25, 2017
10,809
Which seems like a direct response to "the fans are too loud" this gen. It seems like a big focus was making sure the system would stay cool and quiet, whereas MS wanted to stay at native 4K on almost every title.
Yeah, it was a severe design flaw on the PS4, even more so on the Pro.
There is always power running through the CPU and GPU, generating heat even when they're not being fully utilized, so the clocks have to be capped when full power isn't needed. That's why PC motherboards automatically manage CPU clocks when they're not at full load. Anyone can download CPUID and watch their clocks fluctuate on the fly as they use the computer.
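If anyone wants to watch their own clocks bounce around without extra tooling, here's a minimal Python sketch using psutil; it shows the same real-time frequency data those monitoring tools do (exactly what gets reported varies a bit by OS).

[CODE]
import time
import psutil  # pip install psutil

# Sample the reported CPU frequency for a few seconds; under a changing load
# you should see it move between idle and boost clocks.
for _ in range(5):
    freq = psutil.cpu_freq()
    if freq is None:
        print("frequency reporting not available on this platform")
        break
    print(f"current: {freq.current:.0f} MHz (min {freq.min:.0f}, max {freq.max:.0f})")
    time.sleep(1)
[/CODE]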
 

nib95

Contains No Misinformation on Philly Cheesesteaks
Banned
Oct 28, 2017
18,498
But isn't that the point? They don't need to push their clocks that high.

Isn't what the point? The discussion wasn't about whether they needed to, but whether they could, and if they could, what sort of frequency increases we might potentially be talking about. But yes, as they have the CU count advantage, any increased clocks would go further, though conversely the added CU count might produce more heat too.
 

Deleted member 63832

User requested account closure
Banned
Feb 14, 2020
420
As I understand it, it's a way to reduce energy consumption and heat without losing much performance. The PS5 will be able to reproduce the same conditions every time a game issues a given GPU or CPU workload, so you won't see reduced performance just because your PS5 is in a hot room, for example, while at the same time this approach will generate less heat. Or something like that.

If I had to speculate, I would say that Sony is either looking for a way to reduce the number of defective devices over the generation or is looking to resolve complaints about console noise.

It would be funny if such a decision that is generating "anger" for making the console weaker ends up saving the consoles from malfunctioning.

I would be happy to take a hit on performance if it means no more jet engine sounds on some games lol.

Hope that's what it means!
 

tusharngf

Member
Oct 29, 2017
2,288
Lordran
Don't all modern GPU/CPUs raise and lower their clock speed depending on the demand?

That's what I thought this meant when I saw the presentation. I could be wrong tho.
That is correct. There is a temperature limit on Nvidia GPUs. A GTX 970 could hit higher clocks at 100% usage under stress conditions, but the temperature will lock at 80°C. The only problem is that it reduces the life of the GPU in the long run. I used to downclock my GPU to keep it at around 70°C. Now on the PS5 side, they must have some insane cooling system to keep the clocks that high. Sony is very good at designing hardware.
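For what it's worth, on the Nvidia side you can watch exactly that temperature/clock dance yourself. A small sketch that just shells out to nvidia-smi (ships with the driver; the exact fields available can vary by GPU and driver version):

[CODE]
import subprocess

# Poll GPU temperature, SM clock and power draw once per second via nvidia-smi.
subprocess.run([
    "nvidia-smi",
    "--query-gpu=temperature.gpu,clocks.sm,power.draw",
    "--format=csv",
    "-l", "1",
])
[/CODE]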
 

Deleted member 63832

User requested account closure
Banned
Feb 14, 2020
420
That is correct. There is a temperature limit on Nvidia GPUs. A GTX 970 could hit higher clocks at 100% usage under stress conditions, but the temperature will lock at 80°C. The only problem is that it reduces the life of the GPU in the long run. I used to downclock my GPU to keep it at around 70°C. Now on the PS5 side, they must have some insane cooling system to keep the clocks that high. Sony is very good at designing hardware.

Yeah, my 2080 Ti caps at ~80°C on demanding 4K games, so that's why I thought that's what Cerny meant.
 

Hermii

Member
Oct 27, 2017
4,685
That is correct. There is a temperature limit on Nvidia GPUs. A GTX 970 could hit higher clocks at 100% usage under stress conditions, but the temperature will lock at 80°C. The only problem is that it reduces the life of the GPU in the long run. I used to downclock my GPU to keep it at around 70°C. Now on the PS5 side, they must have some insane cooling system to keep the clocks that high. Sony is very good at designing hardware.
Not very good at designing cooling systems though.

Or maybe they weren't even trying with the ps4/ pro.
 

Gay Bowser

Member
Oct 30, 2017
17,708
Yeah, these are far more closely matched consoles than we've seen in the past. All of this talk about which hardware is superior is pretty meaningless until we know pricing.

If the smaller die size allows Sony to deliver the PS5 to consumers at a lower price, then I think they absolutely made the right choice.
 

mordecaii83

Avenger
Oct 28, 2017
6,862
In the Xbox reveal video they tested the Series X in desert conditions. But MS is notorious for pushing the clocks. Let's see what they do with the base console.
Oh I'm not saying they can't push clocks (no way to know one way or the other), just that it doesn't make a lot of sense for them this time around due to already being the stronger console and aiming for silence.
 

ty_hot

Banned
Dec 14, 2017
7,176
Another Cerny transcript:

"That doesn't mean all games will be running 2.23GHz and 3.5GHz. When that worst-case game arrives, it will run at a lower clock speed but not too much lower. To reduce power by 10%, it only takes a couple of percent reduction in frequency. So I'd expect any down-clocking to be pretty minor."

Sounds clear to me that both can't run at their maximums simultaneously.
If that's your conclusion from that statement, I hope you don't ever need to do those online logic tests for job interviews.
 

Last_colossi

The Fallen
Oct 27, 2017
4,256
Australia
If the GPU clock drops by even 4% (2.14GHz), then its compute drops to 9.87TF, so... that article isn't completely wrong, but it's still hyperbolic.
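Here's the arithmetic behind that, for anyone who wants to try other percentages; a quick sketch using the publicly stated 36 CUs and 2.23GHz cap (the actual clock floor is the unknown).

[CODE]
MAX_CLOCK_GHZ = 2.23
CUS = 36

def tflops_at(downclock_pct: float) -> float:
    clock = MAX_CLOCK_GHZ * (1 - downclock_pct / 100)
    return CUS * 64 * 2 * clock / 1000  # 64 ALUs per CU, 2 ops per clock (FMA)

for pct in (0, 2, 4, 10):
    print(f"-{pct}% clock -> {tflops_at(pct):.2f} TF")
# 0% -> 10.28, 2% -> 10.07, 4% -> ~9.86 (the 9.87 above is the same figure
# with slightly different rounding), 10% -> 9.25
[/CODE]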
 

Deleted member 31133

User requested account closure
Banned
Nov 5, 2017
4,155
Another Cerny transcript:

"That doesn't mean all games will be running 2.23GHz and 3.5GHz.
"When that worst-case game arrives, it will run at a lower clock speed but not too much lower. To reduce power by 10%, it only takes a couple of percent reduction in frequency. So I'd expect any down-clocking to be pretty minor."

Sounds clear to me that both can't run at their maximums simultaneously.

I am sorry, but it appears you're misreading this quote.

That doesn't mean all games will be running 2.23GHz and 3.5GHz.

It appears to me that both can run at their maximum simultaneously, but this won't be the case for all games.
 

Deleted member 49535

User requested account closure
Banned
Nov 10, 2018
2,825
6 more months of these articles, folks. Hang in there. I assume neither console will really disappoint, and as always, people will go where the games they want to play are, regardless of power differences realized or perceived.
TBH this is what happens when you announce new consoles but don't show a single game for them. There's nothing else to talk about.

But yeah I agree, these next months are going to be embarrassing as usual.
 

mordecaii83

Avenger
Oct 28, 2017
6,862
TBH this is what happens when you announce new consoles but don't show a single game for them. There's nothing else to talk about.

But yeah I agree, these next months are going to be embarrassing as usual.
Nah, once games are shown it will switch to nonstop arguing about which console had the most impressive looking game until release and which company was showing real games versus smoke and mirrors. Then after release we finally switch back to DF comparisons.
 

Monkhaus

Banned
Apr 18, 2019
59
Usually I tend to buy all 3 systems and I'm so happy now that it will be obsolete for next gen. I just ordered an SSD yesterday, that's everything I need.
Standalone SSD. Unleash the variable Kraken.
 

Deleted member 61469

Attempted to circumvent ban with alt account
Banned
Nov 17, 2019
1,587
If the GPU clock drops by even 4% (2.14GHz), then its compute drops to 9.87TF, so... that article isn't completely wrong, but it's still hyperbolic.

I wish they would be a little more transparent on this. Like, what is the floor? What was the maximum drop you saw in testing? I suspect the range would be something like 9.2TF to 10.28TF depending on the game.
 

Alexandros

Member
Oct 26, 2017
17,811
I wish they would be a little more transparent on this. Like, what is the floor? What was the maximum drop you saw in testing? I suspect the range would be something like 9.2TF to 10.28TF depending on the game.

Yeah, I think that the vagueness of those statements is the reason for the debate. Cerny disclosed the ceiling but not the floor of the stated frequencies, so until we see benchmarks and frequency measurements it's impossible to definitively say what the floor is or how frequently the system has to throttle.
 

Castamere

Member
Oct 26, 2017
3,517
Nah, once games are shown it will switch to nonstop arguing about which console had the most impressive looking game until release and which company was showing real games versus smoke and mirrors. Then after release we finally switch back to DF comparisons.

Yep. I'm interested to see what MS does given that all of their games in the next 2 years will have the Xbox One as the lowest common denominator, on top of the fact that most of the studios they bought will have nothing substantial to show for a while.

Will it be another E3 2014 situation for MS where they show CG and tech demos? Will Sony have anything to show beyond 3rd party? Will Sony show LoU2 and GoT with PS5 enhancements? Will the coronavirus catch up to games, which have remained relatively unscathed so far? Lots of questions.
 

Bakura

Member
Oct 26, 2017
517
I'm not a game programmer, but I do develop GPU-accelerated scientific models, so I have a few years' experience in optimizing for different GPU architectures to accelerate computation. So I have a sense of what makes something fast or slow, as I've seen it play out on various hardware.

The notion of a 'sustained TF rate' doesn't really make sense. The TF number is a peak theoretical number that will in all likelihood never actually be hit on either console (possibly occasionally for a few milliseconds at a time, but generally they'll not be computing at that rate). It's not a benchmark, it's just a way of counting components and clocks within a single number. It's actually a lot like the way a business considers the number of 'man hours' it'll take to perform a task. The actual computational throughput will be determined by things like thread occupancy and how much shared and local memory each thread requires. Since both consoles use the same architecture, this in theory affects both equally, but it's not nonsense to suggest the higher clock rate gives a bit of help to the PS5 here, as it's able to utilize local data (the data sitting in the GPU cache) and shuffle it out for the next piece of data that it needs more readily. It is an effective bandwidth increase on the RAM->cache->computation pipeline.

More technical version:
The biggest factor in speeding up a GPU is how much you can saturate the compute units (those CUs that we keep hearing about). If you can get a concurrent thread on each ALU within each CU, and have minimal or no requirement to reach back to VRAM within a kernel call to swap data in and out of the CU cache, then you can get pretty close to your peak throughput. This is, in practice, not common, as the available local storage within the CU is tiny, a few tens of KB shared between all the ALUs. For example, in the Nvidia Volta architecture (which I'm most familiar with), there is a single 256KB block of memory (arranged in 32-bit registers) for every thread running on that SM to use for data exclusive to that thread. In a perfect world, every one of the 64 CUDA cores in a single Volta SM would have its own thread, meaning each one gets ~4KB per thread to store useful data. (There is 96KB of shared memory as well, but I'll ignore this for the moment. It's extremely useful but somewhat immaterial for this explanation.) This is not generally practical, in my experience, so you're left with two options, not mutually exclusive: you can reduce the number of concurrent threads, or you can periodically swap data in and out of registers by calling back to VRAM. The former is what Cerny was alluding to when he said it's harder to fill more CUs than fewer, although I somewhat disagree with his characterization in the case where all CUs on both platforms have access to the same relative register and shared memory. I'm not familiar with RDNA2 though, so I don't want to comment on that too much. In the latter case, which is almost always necessary to some degree, you can think of an analogy to screen tearing for what happens under the hood.

Several threads are going about their business, making computations; thread A says 'oh, I need something from VRAM'. Thread A then gets paused, and thread B gets moved into its place to keep computing while the data for thread A is fetched. Meanwhile, thread B finishes its work, and thread A either is ready to keep going or isn't. This is determined by the latency of access to VRAM. Now if thread A isn't ready, most likely another thread gets moved into thread B's place and keeps going. If thread A is ready, then it'll get shuffled back in to pick up the computation where it left off with its new data. The analogy to screen tearing is this: if you have ever played without v-sync on a 60Hz monitor, and then on a 144Hz monitor, you've probably noticed that screen tearing is far less noticeable at the higher refresh rate. This is because the gap between getting new data and being able to display it is smaller. A similar analogy holds with clock speeds in a GPU. A faster clock speed will generally lead to less 'down time' in any given ALU, as it is more likely to be ready to go sooner when the requisite data is available.

What I want to point out is that NONE of this shows up in a TF metric. The underlying reality of swapping data in and out, the various bottlenecks, tradeoffs, etc., all of that is presumed to essentially not exist when discussing TF. However, this is one of the biggest considerations when doing optimization, as you have to take into account these facts of life about having threads 'stall', so to speak.

Will this make the PS5 faster in computations than the XSX? In a few cases, possibly, but in general, no, it won't. However, it does mean that the story on what that gap is isn't as simple as many here are claiming. I expect that the PS5 may generally run at a slightly lower resolution (some quick calculations would put a resolution of 3504x1971 at a hair more than 16% reduction in pixel count), but in many cases I think the gap will be closer than you'd expect from a raw TF count, because the higher clock speed does help 'in the real world' in a somewhat non-linear fashion compared to raw TF numbers, in that it makes the penalty of moving data in and out of local storage smaller. It's not a HUGE difference (at least not in most cases), but it's not nothing.
This was a good read, thank you.
 

Deleted member 14927

User requested account closure
Banned
Oct 27, 2017
648
Yeah, I think that the vagueness of those statements is the reason for the debate. Cerny disclosed the ceiling but not the floor of the stated frequencies, so until we see benchmarks and frequency measurements it's impossible to definitively say what the floor is or how frequently the system has to throttle.
You're correct. The vagueness is frustrating. "Most of the time" means anywhere from 50.1% of the time to 100%.

It's not like with BC, where they say "the overwhelming majority of titles".
 

Sprat

Member
Oct 27, 2017
4,684
England
Yeah, I think that the vagueness of those statements is the reason for the debate. Cerny disclosed the ceiling but not the floor of the stated frequencies, so until we see benchmarks and frequency measurements it's impossible to definitively say what the floor is or how frequently the system has to throttle.
He said a 2% drop max to get back 10% of the power envelope. I doubt it will ever go further than that.
 

TsuWave

Member
Oct 27, 2017
6,992
Another Cerny transcript:

"That doesn't mean all games will be running 2.23GHz and 3.5GHz. When that worst-case game arrives, it will run at a lower clock speed but not too much lower. To reduce power by 10%, it only takes a couple of percent reduction in frequency. So I'd expect any down-clocking to be pretty minor."

Sounds clear to me that both can't run at their maximums simultaneously.

Your interpretation of this quote goes against what is being said. Am I missing something?
 

ThatNerdGUI

Prophet of Truth
Member
Mar 19, 2020
4,550
Dictator is not actually disagreeing. You missed the point of my post and NXGamer's in a rush to disprove it. What NXGamer is saying is that currently games rarely max out clock frequencies on both the GPU and CPU simultaneously on a frame-by-frame basis anyway. So naturally that means that what Mark Cerny said would be true, that the max potential clock speed of the GPU would be available the majority of the time.

What Dictator is discussing is both the GPU and CPU frequencies being maxed simultaneously, which is a different thing altogether, and something that NXGamer and Cerny imply rarely ever happens anyway (Cerny speaking with respect to power load).

Dictator is also presumably theorising on whether max frequencies on the CPU and GPU can be simultaneously reached, because that hasn't actually been clarified yet. In fact, in an earlier post he stated that he's not sure yet and is just assuming.

For rendering as we see it today this would be true, but with ray tracing on these consoles (which still use shader units regardless of having RT cores) I suspect this will not be the case, and GPU loads will be consistently high in games that implement RT. I think we can say for sure that the number of games that implement RT will increase quickly as the generation moves along.
 

Mubrik_

Member
Dec 7, 2017
2,725
Another Cerny transcript:

"That doesn't mean all games will be running 2.23GHz and 3.5GHz. When that worst-case game arrives, it will run at a lower clock speed but not too much lower. To reduce power by 10%, it only takes a couple of percent reduction in frequency. So I'd expect any down-clocking to be pretty minor."

Sounds clear to me that both can't run at their maximums simultaneously.

You should also keep in mind what he means by 'worst case'.

Someone correct me if I'm wrong, but using Cerny's Horizon map screen and actual gameplay comparison, his 'worst case' scenario isn't based only on having both run at 2.23 and 3.5, but also on the 'workload' on the CPU/GPU.
Which is why, if you look at the slide he showed (can't recall the exact time on YouTube), the frequency/power control was based on the 'activity' of the CPU/GPU.

Meaning you could actually always be running at these frequencies as long as the workload isn't hectic (we know these systems don't use their full TF most of the time anyway).
So in that 'worst case' scenario, the downclocking that he also calls minor will occur.
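A toy illustration of that "clocks follow activity, not temperature" idea, with entirely made-up numbers and a made-up power model, just to show the shape of it:

[CODE]
POWER_BUDGET = 1.0   # fixed power budget, arbitrary units
MAX_CLOCK = 2.23     # GHz cap

def clock_for_activity(activity: float) -> float:
    """Highest clock that keeps estimated power within the budget.

    Toy model: power ~ activity * (clock / MAX_CLOCK)^3, so activity 1.0
    at max clock exactly fills the budget. Purely illustrative.
    """
    if activity <= 0:
        return MAX_CLOCK
    allowed_fraction = (POWER_BUDGET / activity) ** (1 / 3)
    return min(MAX_CLOCK, MAX_CLOCK * allowed_fraction)

# Typical workloads stay under the budget and keep the max clock; only a
# pathological worst case (activity > 1.0 here) forces a small downclock.
for activity in (0.6, 0.9, 1.0, 1.1):
    print(f"activity {activity:.1f} -> clock {clock_for_activity(activity):.2f} GHz")
[/CODE]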

Again, someone more knowledgeable on this can expand more, I've seen some good posts across multiple threads.

Threads like this should be locked IMO
 

test_account

Member
Oct 25, 2017
4,645
As I understand it, XSX is similar to other consoles of the past, with fixed performance and varying power (wattage) consumption. This is why the fan in the system sometimes kicks up higher under heavier load.

PS5 is designed to run at constant power consumption and will flex CPU/GPU speeds as needed to stay within that power budget. Cerny said he expects that most of the time the 'peak' speeds will be attained.
The article is nonsense. Read the post by guitarNINJA to understand what's going on.
The peak TF value is a function of frequency. On Series X, the peak is always 12 because it is running at a fixed clock. The peak on PS5 is going to vary because its clock varies. Basically, this 10.3 figure is the peak of the peak, and that's the point the author is trying to make: these are not directly comparable. If you want to compare, you need some kind of average on PS5 that we can't get, so he's trying to estimate.
Thanks for the answers.

In regards to "peak of the peak", that sounds like it wont be able to reach that point very often, but didnt Cerny say that it would be able to reach that most of the time? And isnt the variable frequency something that the developer have control over?
 

PLASTICA-MAN

Member
Oct 26, 2017
23,620
In fact, if they were to approach a dynamic clocking philosophy as well, they can hit an astounding 14.6 TFLOPs figure (using the PS5 clocks)

LMAO. The amount of ignorance in this is staggering. As if there is technology that would allow that many CUs to run at that frequency, let alone stably.
 