
Vimto

Member
Oct 29, 2017
3,712
We don't know how low it will clock, but Cerny did state that they couldn't achieve a locked 2 GHz, which is just over 10% less. By far the most reasonable deduction to be made from the information we have is that under heavy load the GPU will likely need to drop below 2 GHz.

??????

Cerny never said that, wth are you talking about?
 

KeRaSh

I left my heart on Atropos
Member
Oct 26, 2017
10,246
Easily gone higher huh? So how high do you expect PC RDNA 2 chips to clock?
That's what Cerny said and I'm sure they did the tests, otherwise he wouldn't have claimed that.
Whether you believe him or not is your choice but it won't change the reality no matter what it ends up being.
However, what we know about RDNA2 so far seems to point in the direction that Cerny wasn't lying.

/edit: This guy has a take on clock speeds towards the end of his video:

 
Last edited:

ThatNerdGUI

Prophet of Truth
Member
Mar 19, 2020
4,549
Easily gone higher huh? So how high do you expect PC RDNA 2 chips to clock?
Not much, maybe not even close. At that frequency you're already past the point of diminishing returns. Silicon has a power/performance curve where, after a certain point, the power needed to increase performance skyrockets. People like to say that the PS5 GPU is "optimized" or "efficient" when in reality, at that frequency, it's neither. The N7 process doesn't fix this; it actually makes it worse.
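A rough sketch of that curve, with invented constants (only the shape matters): dynamic power goes roughly as C·f·V², and the voltage needed to hold a frequency rises as the frequency climbs, so the last few hundred MHz cost disproportionately more power.

```python
# Toy model of the power/performance curve described above. The voltage curve and
# constants are invented purely for illustration; only the shape of the trend matters.

def required_voltage(freq_ghz, base_freq=1.5, base_v=0.85):
    """Hypothetical voltage needed to hold a frequency (rises superlinearly with clock)."""
    delta = freq_ghz - base_freq
    return base_v + 0.05 * delta + 0.35 * delta ** 2

def dynamic_power(freq_ghz, capacitance=100.0):
    """Relative dynamic power, P ~ C * f * V^2 (arbitrary units)."""
    v = required_voltage(freq_ghz)
    return capacitance * freq_ghz * v * v

points = [(f, dynamic_power(f)) for f in (1.8, 2.0, 2.1, 2.23)]
for (f0, p0), (f1, p1) in zip(points, points[1:]):
    print(f"{f0:.2f} -> {f1:.2f} GHz: +{f1 / f0 - 1:.1%} clock costs +{p1 / p0 - 1:.1%} power")
```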
 

Alexandros

Member
Oct 26, 2017
17,799
People would rather read it the opposite way:
"Holy shit! Cerny is so good that he can reduce power consumption by 10% with just a few percent of downclocking."

I wonder what the system's total power draw is. In my mind it would have been way more efficient to clock the chip a couple percent lower and just boost to that frequency whenever the amount of processing load demanded it.

It's able to more closely track a power budget than a system with a fixed clock, so in that sense it's an efficient use of die space and the available parts budget. Yes, you could run the whole system at half the speed for a small fraction of the power. Of course, then you'd have a system running at half the speed and you're still paying just as much for it. So just about everyone runs silicon as fast as they can get away with reliably unless cost is no object or power efficiency is incredibly important, as when considering battery life. It doesn't matter if it's an Intel CPU, AMD APU, or Nvidia GPU: around the clock speeds devices ship at, power scales at roughly the cube of the clock. This chip isn't an exception, it's just following the same curve as everyone else.

Doesn't it seem incredibly wasteful though to spend 10% of the system's total power budget, on a platform that is bound to be constrained by power limits, just to add a couple of percentage points to the GPU's already stratospheric clocks?
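For a sense of scale under that cube-of-clock rule of thumb (a simplification; real savings depend on how far the voltage can drop with the clock), here is the arithmetic for small downclocks from the PS5 GPU's 2.23 GHz peak:

```python
# Quick arithmetic for the "power scales at roughly the cube of the clock" rule of thumb
# quoted above. Purely illustrative; real curves depend on voltage scaling.

PEAK_CLOCK_GHZ = 2.23  # PS5 GPU peak clock

for drop_pct in (1, 2, 3, 4):
    clock = PEAK_CLOCK_GHZ * (1 - drop_pct / 100)
    power_saving = 1 - (clock / PEAK_CLOCK_GHZ) ** 3
    print(f"-{drop_pct}% clock ({clock:.2f} GHz) -> roughly {power_saving:.1%} less power")
```

Under that rule, a 3-4% downclock lands in the ballpark of the ~10% power saving Cerny described, which is the trade-off being debated here.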
 
Last edited:

GhostTrick

Member
Oct 25, 2017
11,300
I wonder what the system's total power draw is. In my mind it would have been way more efficient to clock the chip a couple percent lower and just boost to that frequency whenever the amount of processing load demanded it.



Doesn't it seem incredibly wasteful though to spend 10% of the system's total power budget, on a platform that is bound to be constrained by power limits, just to add a couple of percentage points to the GPU's already stratospheric clocks?



Gotta push the Tflops over 10 for the marketing slides.
 

GhostTrick

Member
Oct 25, 2017
11,300
I mean, look at him - Mark Cerny is the slick face of PlayStation marketing. He's not at all like a mild-mannered professor of EE/game design/SWE.



It doesn't mean Cerny is a marketing person, nor that he has no tech knowledge. He clearly knows his job and he's really talented at engineering. But that's not incompatible with a bit of marketing here and there. And yes, a big 10 TFLOPs on a slide is a better sell than 9. The same way "supercharged PC architecture" is a good-sounding phrase to sell the tech they're making.
 
Oct 25, 2017
13,246
At the current stage, it's hard for me to doubt Cerny on stuff he says given how the SSD turned out.

He made a claim about the SSD back in the Wired article that lots of folks, justifiably, cast some doubt on. It was a bold claim, and yet one that holds true in a pretty crazy way even today.

With that said, it's hard for me to get a feel for a lot of the design decisions behind the PS5 until we start seeing games.
 

Mubrik_

Member
Dec 7, 2017
2,723
It doesn't mean Cerny is a marketing person, nor that he has no tech knowledge. He clearly knows his job and he's really talented at engineering. But that's not incompatible with a bit of marketing here and there. And yes, a big 10 TFLOPs on a slide is a better sell than 9. The same way "supercharged PC architecture" is a good-sounding phrase to sell the tech they're making.

"Gotta push the Tflops over 10 for the marketing slides."

So you think the console is 9tf?

And Cerny is marketing a 9tf console to developers who will work on it as 10tf? Lol

Genius lol
 

Gemüsepizza

Member
Oct 26, 2017
2,541
Doesn't it seem incredibly wasteful though to spend 10% of the system's total power budget, on a platform that is bound to be constrained by power limits, just to add a couple of percentage points to the GPU's already stratospheric clocks?

But it's only 10% of the system's total power budget when the power budget has been reached. There will be many instances where it looks like this:

- Processor load 100%, power consumption 50%, clocks 100%

or

- Processor load 100%, power consumption 75%, clocks 100%
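Read as a sketch with invented wattages (Sony hasn't published a budget), those scenarios amount to: at maximum clocks, power tracks how much of the chip the workload actually toggles, and the budget only bites at the very top end.

```python
# The scenarios above in code form, with invented numbers: at fixed maximum clocks,
# power tracks workload activity, so "100% load" can still sit well under the budget.

POWER_BUDGET_W = 200.0  # assumed SoC budget, not an official figure
PEAK_DRAW_W = 220.0     # assumed draw of a worst-case, fully saturating workload at max clocks

for activity in (0.50, 0.75, 0.90, 1.00):
    draw = PEAK_DRAW_W * activity
    verdict = "under budget, clocks stay at 100%" if draw <= POWER_BUDGET_W else "over budget, clocks come down"
    print(f"load 100%, activity {activity:.0%}: ~{draw:.0f} W -> {verdict}")
```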

Gotta push the Tflops over 10 for the marketing slides.

That's not how it works. Why do people at this point still not understand how it works?
 

Dekim

Member
Oct 28, 2017
4,300
I don't really understand this "Sony is desperate to get over 10 TFlops, so they OC'd their chip like crazy." If Sony was as concerned about getting double digit TFs for marketing reasons, then going wide and slow like MS was an option they could have easily taken at the start. You don't design a 36 CU chip and expect to get super high double digit TFlops. You'd just be barely touching double digit TFlops using such a config, and that's only after using super high clocks.
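The standard FP32 throughput formula for RDNA-class GPUs (CUs × 64 shaders × 2 ops per clock × clock) makes the point concrete: with only 36 CUs you need clocks well above 2 GHz just to clear 10 TF.

```python
# FP32 TFLOPs for an RDNA-style GPU: CUs x 64 shaders per CU x 2 ops per clock x clock (GHz).
def tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000

print(f"36 CUs @ 2.23 GHz  -> {tflops(36, 2.23):.2f} TF")   # PS5 peak: ~10.28 TF
print(f"36 CUs @ 2.00 GHz  -> {tflops(36, 2.00):.2f} TF")   # the same chip at a locked 2 GHz: ~9.2 TF
print(f"52 CUs @ 1.825 GHz -> {tflops(52, 1.825):.2f} TF")  # Series X: ~12.15 TF
```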
 

Micerider

Member
Nov 11, 2017
1,180
Gotta push the Tflops over 10 for the marketing slides.

Because that's what all designs should do. Every engineer knows that the main point is to make a number big before anything else (like having a coherent system for the task it must accomplish on a given budget?) /s

I don't really understand this "Sony is desperate to get over 10 TFlops, so they OC'd their chip like crazy." If Sony was as concerned about getting double digit TFs for marketing reasons, then going wide and slow like MS was an option they could have easily taken at the start. You don't design a 36 CU chip and expect to get super high double digit TFlops. You'd just be barely touching double digit TFlops using such a config, and that's only after using super high clocks.

Exactly. They certainly had a budget to respect. But within that budget, they had the choice to go for higher "teraflops" (or simply higher GPU performance) and a slower SSD, and decided not to...on purpose. They were not desperate to hit a figurative number that has no practical use on its own. They most likely could have gone for that too if it had been their intention; it's not like AMD shut the door on them.

The question now: was it the right decision? Well, only the games will tell.
 

Alexandros

Member
Oct 26, 2017
17,799
But it's only 10% of the system's total power budget when the power budget has been reached. There will be many instances where it looks like this:

- Processor load 100%, power consumption 50%, clocks 100%

or

- Processor load 100%, power consumption 75%, clocks 100%

The amount of power at any given time doesn't make the chosen design any less wasteful. In your scenarios, power consumption could drop by a hefty 10% by reducing the frequency a couple of percentage points. This means that the whole time the console is using its full frequencies, it is using a disproportionately large amount of power to keep the clocks a couple of percentage points higher, for a tiny performance benefit.

For me, the logical and truly efficient design would be to keep the standard frequencies at a level where power consumption scales more reasonably, and only use the highest frequency when it is absolutely necessary.
 

Brohan

The Fallen
Oct 26, 2017
2,544
Netherlands
At the end of the day they have a power budget that they are sticking to. I think they have made their choices based on affordability while also squeezing as much out of their hardware as possible, and I think they have done this in an elegant and efficient way.
 

Micerider

Member
Nov 11, 2017
1,180
At the end of the day they have a power budget that they are sticking to. I think they have made their choices based on affordability while also squeezing as much out of their hardware as possible, and I think they have done this in an elegant and efficient way.

Yes, and I don't think MS aimed at 12 TF just for the number either. They certainly wanted to be the performance leader, and that's where they landed with their design and the budget they were targeting.
 

ThatNerdGUI

Prophet of Truth
Member
Mar 19, 2020
4,549
That's what Cerny said and I'm sure they did the tests, otherwise he wouldn't have claimed that.
Whether you believe him or not is your choice but it won't change the reality no matter what it ends up being.
However, what we know about RDNA2 so far seems to point in the direction that Cerny wasn't lying.

/edit: This guy has a take on clock speeds towards the end of his video:


You shouldn't take anything MLID says as a fact. This guy is wrong more often than not, by a lot.
 

GhostTrick

Member
Oct 25, 2017
11,300
"Gotta push the Tflops over 10 for the marketing slides."

So you think the console is 9tf?

And Cerny is marketing a 9tf console to developers who will work on it as 10tf? Lol

Genius lol


That's not what I'm saying. I'm not saying it's really a 9 TF machine being marketed as a 10 TF one. But according to Cerny, a few percent of clock reduction means around a 10% drop in power consumption, which indicates the GPU was clocked beyond its optimal clock-to-power ratio in order to reach a certain figure.
 
Nov 2, 2017
2,275
Cheaper chip, cheaper cooler, cheaper power supply, and only 15% less performance.
Higher clocks increase temperature & power draw exponentially. It's very much possible the XSX runs cooler and is more power efficient despite being a bit faster. That's the big downside of going for higher clocks over going wider with CUs. The upside is that it's cheaper when it comes to the chip.
 

mordecaii83

Avenger
Oct 28, 2017
6,853
That's not what I'm saying. I'm not saying it's really a 9 TF machine being marketed as a 10 TF one. But according to Cerny, a few percent of clock reduction means around a 10% drop in power consumption, which indicates the GPU was clocked beyond its optimal clock-to-power ratio in order to reach a certain figure.
Not really, it means their current strategy (fixed power budget) allowed for higher clocks most of the time than a fixed clock speed strategy would have, so why not take advantage of it?

That's why there is actually a tangible benefit to variable clocks: you can at least sometimes get a higher clock speed than if you had to lock the clocks to what would be stable in a worst-case scenario.
 

GhostTrick

Member
Oct 25, 2017
11,300
Not really, it means their current strategy (fixed power budget) allowed for higher clocks most of the time than a fixed clock speed strategy would have, so why not take advantage of it?

That's why there is actually a tangible benefit to variable clocks: you can at least sometimes get a higher clock speed than if you had to lock the clocks to what would be stable in a worst-case scenario.


I mean, didn't Cerny say it'll be at that clock most of the time? A console usually has a limited power budget. When you're willing to pay 10% more power consumption for a few percent more clock "most of the time", that screams "attempt to get as much performance as possible no matter what". This reminds me of AMD releasing GPUs like the RX 590, which was the same as the RX 580 but with higher clocks and a far higher TDP for the sake of numbers.
 

Mubrik_

Member
Dec 7, 2017
2,723
That's not what I'm saying. I'm not saying it's really a 9 TF machine being marketed as a 10 TF one. But according to Cerny, a few percent of clock reduction means around a 10% drop in power consumption, which indicates the GPU was clocked beyond its optimal clock-to-power ratio in order to reach a certain figure.
1. What is this 'optimal clock speed' figure you have in your mind?

2. You fail to take into account that the design he went with allows this amount of continuous boost, hence your notion of an 'optimal clock speed'; it could even go higher, according to Cerny.

3. Sony doesn't seem worried about the power draw, so why should we be?

Edit: Think of the process first, not the end figure. You're clearly thinking 10 TF first and working out the process after.
 

mordecaii83

Avenger
Oct 28, 2017
6,853
I mean, didn't Cerny say it'll be at that clock most of the time? A console usually has a limited power budget. When you're willing to pay 10% more power consumption for a few percent more clock "most of the time", that screams "attempt to get as much performance as possible no matter what". This reminds me of AMD releasing GPUs like the RX 590, which was the same as the RX 580 but with higher clocks and a far higher TDP for the sake of numbers.
I'm looking at it from a different perspective. They have a specialized cooling system, and they know what that system can handle. They decided to allow the system to always use the amount of power/heat their cooling system can handle at a specified noise level.

It may only be a few percent gain, but it's "free" in this case so why not use it? The other alternative was to have it locked to a lower clock speed so that even in the worst case the system would never go above what its cooling system can handle, but they'd lose performance the vast majority of the time versus their current solution.
 

M3rcy

Member
Oct 27, 2017
702
I'm looking at it from a different perspective. They have a specialized cooling system, and they know what that system can handle. They decided to allow the system to always use the amount of power/heat their cooling system can handle at a specified noise level.

It may only be a few percent gain, but it's "free" in this case so why not use it? The other alternative was to have it locked to a lower clock speed so that even in the worst case the system would never go above what its cooling system can handle, but they'd lose performance the vast majority of the time versus their current solution.

That's a reasonable take.
 

GhostTrick

Member
Oct 25, 2017
11,300
1. What is this 'optimal clock speed' figure you have in your mind?

2. You fail to take into account that the design he went with allows this amount of continuous boost, hence your notion of an 'optimal clock speed'; it could even go higher, according to Cerny.

3. Sony doesn't seem worried about the power draw, so why should we be?

Edit: Think of the process first, not the end figure. You're clearly thinking 10 TF first and working out the process after.


1. Basically, most GPUs have a sweet spot between performance and power consumption. Past a certain threshold, any gain in clock speed and performance results in far higher power consumption.
As an example, say you have a GPU whose sweet spot is 1600 MHz at 150 W. At 1700 MHz, it might draw 180 W. That's a 6% gain in clock speed and performance for a 20% increase in power consumption, because you're above the sweet spot.

2. "The design he went with". Yeah, you can clock GPUs higher. The RX 5700 XT, for instance, is usually clocked around 1800 MHz, but you can push it to 2200 MHz. So yes, you can always push higher and higher. But by Cerny's own admission, "a few percent drop in clock speed leads to a 10% drop in power consumption". That doesn't mean the hardware is well optimised; it means they pushed for a few extra percent of performance at the cost of much higher power consumption. They're targeting a point above that optimal sweet spot.

I'm looking at it from a different perspective. They have a specialized cooling system, and they know what that system can handle. They decided to allow the system to always use the amount of power/heat their cooling system can handle at a specified noise level.

It may only be a few percent gain, but it's "free" in this case so why not use it? The other alternative was to have it locked to a lower clock speed so that even in the worst case the system would never go above what its cooling system can handle, but they'd lose performance the vast majority of the time versus their current solution.


As for why not use it, I'm not against that. But it's not efficient at all. It's not the "smart design" some people seem to imply. It's fine to go over that optimal clock speed; let's just not pretend it's something especially well done or optimized.
 

M3rcy

Member
Oct 27, 2017
702
2. You fail to take into account that the design he went with allows this amount of continuous boost, hence your notion of an 'optimal clock speed'; it could even go higher, according to Cerny.

Parts of the chip could go faster, but that's not really relevant. Every chip's clocks are limited by its weakest component.
 

Mubrik_

Member
Dec 7, 2017
2,723
1. Basically, most GPUs have a sweet spot between performance and power consumption. Past a certain threshold, any gain in clock speed and performance results in far higher power consumption.
As an example, say you have a GPU whose sweet spot is 1600 MHz at 150 W. At 1700 MHz, it might draw 180 W. That's a 6% gain in clock speed and performance for a 20% increase in power consumption, because you're above the sweet spot.

2. "The design he went with". Yeah, you can clock GPUs higher. The RX 5700 XT, for instance, is usually clocked around 1800 MHz, but you can push it to 2200 MHz. So yes, you can always push higher and higher. But by Cerny's own admission, "a few percent drop in clock speed leads to a 10% drop in power consumption". That doesn't mean the hardware is well optimised; it means they pushed for a few extra percent of performance at the cost of much higher power consumption. They're targeting a point above that optimal sweet spot.




As for why not use it, I'm not against that. But it's not efficient at all. It's not the "smart design" some people seem to imply. It's fine to go over that optimal clock speed; let's just not pretend it's something especially well done or optimized.

I disagreed with the 'optimal clock point' because the design allows them flexibility.

If your argument is strictly about the power-to-clock relationship, then I'll agree to disagree, because it doesn't seem like they're worried about the power draw, so I'm not sure why we should be.

Consoles won't always be working at maximum capacity anyway.
 

M3rcy

Member
Oct 27, 2017
702
As for why not use it, I'm not against that. But it's not efficient at all. It's not the "smart design" some people seem to imply. It's fine to go over that optimal clock speed; let's just not pretend it's something especially well done or optimized.

I disagree. The way it was implemented, it has no major drawbacks. The tricky bit was making the performance predictable and consistent in all environments and across all PS5s. That's what was well done and clever.
 

OnPorpoise

Avenger
Oct 25, 2017
1,300
I don't really understand this "Sony is desperate to get over 10 TFlops, so they OC'd their chip like crazy." If Sony was as concerned about getting double digit TFs for marketing reasons, then going wide and slow like MS was an option they could have easily taken at the start. You don't design a 36 CU chip and expect to get super high double digit TFlops. You'd just be barely touching double digit TFlops using such a config, and that's only after using super high clocks.
Agreed, it's pretty obvious Sony didn't just make all these decisions last second.

Sony was likely keenly aware of the advantages of having double-digit teraflops, but decided the lower end of that range was "enough" for their needs, and opted for 36CUs and high clock speeds from the beginning.
 

Liabe Brave

Professionally Enhanced
Member
Oct 27, 2017
1,672
GDDR6 is QDR, just like GDDR5X, hence why the 1750MT/s translate to 14GT/s (2 transfers for rise, 2 for fall).
Also the 14Gbps chips are clocked at 1750MHz, that's what both Micron and SKHynix list on their product pages.
Thanks, I stand corrected regarding quad data rate. Overall, I think my larger point stands: that your way of describing things is overprecise for most discussion contexts. A lot of people are confused by simple terms like "bandwidth", so when you start pulling out the niceties of "QDR" and "MT/s" and suggest complex formulas they lose the plot.

My version requires three inputs: bandwidth, RAM data rate, and the fact there are 8 bits per byte. That last fact is comparatively well known. RAM data rate is often written right on the chip, so pictures show it even if people don't know how to derive it. Bandwidth is usually a given number, but can be explained further using the very easy formula [number of chips at once] x 32.

I think sticking to that eliminates confusion. And not just for the less technically minded--it stops things like your repeating the error of a "1050 GHz" memory clock, and so forth.
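For what it's worth, that simple recipe does reproduce the publicly quoted figures, using 32 bits per chip, 8 bits per byte, and the 14 Gb/s data rate both consoles are reported to use:

```python
# The formula from the post above: bus width = [chips accessed at once] x 32 bits,
# and bandwidth = bus width x data rate / 8 bits per byte.

def bandwidth_gb_s(chips, data_rate_gbps):
    bus_width_bits = chips * 32
    return bus_width_bits * data_rate_gbps / 8

print(f"PS5: 8 chips @ 14 Gb/s          -> {bandwidth_gb_s(8, 14):.0f} GB/s")   # 448 GB/s
print(f"XSX fast 10 GB pool (10 chips)  -> {bandwidth_gb_s(10, 14):.0f} GB/s")  # 560 GB/s
print(f"XSX slower 6 GB pool (6 chips)  -> {bandwidth_gb_s(6, 14):.0f} GB/s")   # 336 GB/s
```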

Wouldn't contention on the XSX affect it more than on the PS5 because of the split bandwidth? Contention will cause lower bandwidth on both, but on the PS5 at least all the memory runs at the same speed (as far as we know for now), meaning that any access should be at full speed.
Either way, the system can only move 256 bits at a time. If the CPU needs some of that bandwidth, then it can't be used by the GPU. Hence contention, and lowered GPU bandwidth. On PS4, it seems that somehow this process of CPU access lowered GPU bandwidth by more than just what the CPU was taking. This is out of my depth, so I'm not sure either why that is, or whether it might have been fixed for PS5.

Agreed. We just don't know what "a little bit" is on the PS5.

We know what the "little bit" is on the Xbox: zero. It won't drop the clocks at full load.
Which is ultimately why it can't reach clockspeeds as high.

What evidence would that be? The first "leak" was an engineering sample and is meaningless, as they were never going to run at 1 GHz. If they are having issues even locking at 2 GHz, what do you expect the actual sustained clocks to be? Let's assume by "a couple" Cerny did mean 2. 2% of 2.23 GHz is 45 MHz. That brings the clock down to 2.185 GHz, which is well above the 2 GHz clock they can't sustain. How do you envision this scenario playing out?
What you and many others miss is which scenarios Mr. Cerny referred to as the usual absolute maximum power draw. We're used to thinking of power draw increasing as game complexity and visual ambition rise. This is true. But past a certain point, that very complexity starts to make games inefficient. Parallelization means work dependent on other unfinished work stalls, which is why you see framerates or IQ drop in demanding sections. Counter-intuitively, the very highest power usage comes when complex games run simpler loads. The engine has been optimized to shove as much work through as possible, so when the work is simple--like a menu screen--there are no stalls. Every single transistor ends up in use, and power peaks.

It doesn't need to. Even at a lower clockspeed, there's more than enough compute available to render the menu. But in typical thermal design with fixed clocks, you don't have a choice. The temperature a 100% saturated menu causes at 1.9GHz might match what 95% saturated gameplay draws at 2.2GHz. (Illustrative only, not real data.) But this means a cooling system good enough to cover either situation can't support a 2.2GHz fixed clock. Because a 100% saturated GPU at 2.2GHz will produce too much heat, and you can't have the console shutting off to avoid overheating whenever a player brings up the menu.

Here's where Sony's solution comes in. Remember, it adjusts clock based on the activity of the chip, not temperature. So when the chip becomes 100% saturated by a menu screen or the like, clock goes down. Performance also reduces, but it doesn't matter because you're rendering very little. When the user goes back into gameplay, saturation reduces to 95% and the clock can rise back to 2.2GHz. This is how a cooling system that can't handle sustained 2.0GHz can reach 2.2GHz with Sony's variable approach. Note that all this same logic applies to the CPU as well: due to inherent inefficiencies it should be able to sustain at 3.5GHz in game situations, and only drop with simple workloads that won't be affected.

What about when games hit higher than 95% saturation, though? Sony's profiling tools appear to suggest this is more likely on the GPU than the CPU side, so AMD SmartShift will then allow a reduction in CPU provisioning to allow 96% on the GPU. There's still a limit, though. Mr. Cerny did acknowledge that at some point there will come a game that hits 97 or 98% efficiency in gameplay, on both CPU and GPU at the same time. Here the clock must be reduced, and that game will not have 10.3TF of performance.

But it should take some time (years?) for developers to get to grips with the platform such that they deftly avoid bottlenecks to this extent (such as fully using SMT). By that point, they'll also have come up with less intensive methods to accomplish some of the same shading, so the games will still look better than titles from early in the gen, even though the compute resources are slightly less. And the talk emphasized "slightly". Because of power curves, every percentage point you drop the clockspeed you lower power draw by multiple percent.

All this complexity could be eliminated by just putting in a cooling system capable of handling 100% saturation at 2.2GHz. But this must be huge and prohibitively expensive, because neither platform attempted it. Series X has a novel wind-tunnel design, a huge fan, and a heatsink that takes up a lot of the interior volume, and can't reach even 2GHz fixed. (Neither could Sony, by their own account.)
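As a minimal sketch of that behaviour, under an invented power model (power ~ activity × clock³) and made-up wattages, clock selection against an activity-based power budget might look like this; the 95%-vs-100% saturation split is the post's illustration, not measured data.

```python
# Minimal sketch of the behaviour described above: the clock reacts to chip activity
# against a power budget, not to temperature. All figures here are invented, and the
# model (power ~ activity x clock^3) only shows the shape of the mechanism. SmartShift
# would effectively raise the GPU's budget whenever the CPU isn't using its own share.

GPU_BUDGET_W = 160.0               # assumed GPU slice of the power budget
PEAK_GHZ, FLOOR_GHZ = 2.23, 1.8    # advertised peak and an assumed lower bound

def gpu_power(clock_ghz, saturation):
    """Hypothetical power model: scales with transistor activity and ~cube of clock."""
    return 165.0 * saturation * (clock_ghz / PEAK_GHZ) ** 3

def pick_clock(saturation):
    """Highest clock whose estimated power fits inside the budget."""
    clock = PEAK_GHZ
    while gpu_power(clock, saturation) > GPU_BUDGET_W and clock > FLOOR_GHZ:
        clock -= 0.01  # shed frequency a notch at a time until we fit
    return clock

# 95%-saturated gameplay holds the peak clock; a fully saturated menu screen trips
# the budget and sheds a little frequency it didn't need anyway.
print(f"gameplay, 95% saturated: {pick_clock(0.95):.2f} GHz")
print(f"menu, 100% saturated:    {pick_clock(1.00):.2f} GHz")
```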
 

Hellshy

Member
Nov 5, 2017
1,170
1. Basically, most GPUs have a sweet spot between performance and power consumption. Past a certain threshold, any gain in clock speed and performance results in far higher power consumption.
As an example, say you have a GPU whose sweet spot is 1600 MHz at 150 W. At 1700 MHz, it might draw 180 W. That's a 6% gain in clock speed and performance for a 20% increase in power consumption, because you're above the sweet spot.

2. "The design he went with". Yeah, you can clock GPUs higher. The RX 5700 XT, for instance, is usually clocked around 1800 MHz, but you can push it to 2200 MHz. So yes, you can always push higher and higher. But by Cerny's own admission, "a few percent drop in clock speed leads to a 10% drop in power consumption". That doesn't mean the hardware is well optimised; it means they pushed for a few extra percent of performance at the cost of much higher power consumption. They're targeting a point above that optimal sweet spot.




As for why not use it, I'm not against that. But it's not efficient at all. It's not the "smart design" some people seem to imply. It's fine to go over that optimal clock speed; let's just not pretend it's something especially well done or optimized.

Well, considering it should run at those clock speeds most of the time (versus a traditional boost), plus the space this leaves on the main chip for the I/O customizations they added, it seems like smart design to me.
 

nelsonroyale

Member
Oct 28, 2017
12,124
My personal objection to calling Sony's approach efficient is that Cerny himself said that a large amount of power is needed to push those clocks. "A couple percent of downclocking saves 10% power" means that you have to spend 10% of your power budget to get those clocks 2% higher, right? I wouldn't characterize that as efficient design.

Isn't it? It means you can redirect that power to the CPU for bigger relative gains, maybe?
 

GhostTrick

Member
Oct 25, 2017
11,300
I disagree. The way it was implemented, it has no major drawbacks. The tricky bit was making the performance predictable and consistent in all environments and in all PS5's. That's what was well done and clever.


I mean, yes it does. It's inefficient. They're pushing past the efficiency point of their GPU design.

What is "well done and clever" here?


Well, considering it should run at those clock speeds most of the time (versus a traditional boost), plus the space this leaves on the main chip for the I/O customizations they added, it seems like smart design to me.



What traditional boost?
Where's the smart design in pushing clocks higher than the efficiency point?

I mean, by itself I have nothing against that. I just don't see where the smart design is here.


I disagreed with the 'optimal clock point' because the design allows them flexibility.

If your argument is strictly about the power-to-clock relationship, then I'll agree to disagree, because it doesn't seem like they're worried about the power draw, so I'm not sure why we should be.

Consoles won't always be working at maximum capacity anyway.



What flexibility?
Then again, it's fine not to care about power draw. I didn't frame it as a bad thing. It's just not the smart design some people claim. There's nothing smart about pushing clocks higher to the point of inefficiency. It almost sounds like an afterthought.
 

Alexandros

Member
Oct 26, 2017
17,799
I don't really understand this "Sony is desperate to get over 10 TFlops, so they OC'd their chip like crazy." If Sony was as concerned about getting double digit TFs for marketing reasons, then going wide and slow like MS was an option they could have easily taken at the start. You don't design a 36 CU chip and expect to get super high double digit TFlops. You'd just be barely touching double digit TFlops using such a config, and that's only after using super high clocks.

It is plausible that they learned about Microsoft's specs after the chip design had been finalized.
 

BreakAtmo

Member
Nov 12, 2017
12,815
Australia
All this complexity could be eliminated by just putting in a cooling system capable of handling 100% saturation at 2.2GHz. But this must be huge and prohibitively expensive, because neither platform attempted it. Series X has a novel wind-tunnel design, a huge fan, and a heatsink that takes up a lot of the interior volume, and can't reach even 2GHz fixed. (Neither could Sony, by their own account.)

Stupid Sony/MS. They could've hit 3GHz easy if they just stuck some RGB and icicle decals on the fans.
 

Hellshy

Member
Nov 5, 2017
1,170
What traditional boost?
Where's the smart design in pushing clocks higher than the efficiency point?

I mean, by itself I have nothing against that. I just don't see where the smart design is here.

Well, NXGamer's video shows how much the GPU clocks can fluctuate and drop on a traditional setup. Cerny is promising these higher clocks will be the norm and predictable. It's different, and many have pointed that out.

In the end, if they can keep up with the XSX GPU and have an advantage from the I/O customizations on the main chip, then it's hard to deny it was smart design. If it's noticeably behind and has no games to show its strengths, then sure, it's a fatal mistake.
 

M3rcy

Member
Oct 27, 2017
702
I mean, yes it does. It's inefficient. They're pushing past the efficiency point of their GPU design.

What is "well done and clever" here?

That they can do this without blowing their power budget, without needing massive power delivery and cooling, and without making devs work significantly harder?

Both users and developers get more performance than they would if they didn't do this. And what's clever about it is exactly what I said. It's common for CPUs/GPUs to clock up under low load when there is power and thermal headroom available and clock down when power/thermals exceed a threshold. What's different with the PS5 is that the clock changes vary in a predictable and consistent way.
 

ty_hot

Banned
Dec 14, 2017
7,176
I was reading this thread late last night and woke up with a thought: could it all have started with Cerny (or someone) thinking about how weird it is for a main menu screen to make the console go loud? With their deterministic approach, I understood that the CPU and GPU will vary frequency based on their loads (meaning, based on the instructions they will be running at a given point?). So on a main menu or map screen (or with the game paused), where there's no AI running, no graphics to push, etc., it will automatically reduce the frequency, right?
 

M3rcy

Member
Oct 27, 2017
702
I was reading this thread late last night and woke up with a thought: could it all have started with Cerny (or someone) thinking about how weird it is for a main menu screen to make the console go loud? With their deterministic approach, I understood that the CPU and GPU will vary frequency based on their loads (meaning, based on the instructions they will be running at a given point?). So on a main menu or map screen (or with the game paused), where there's no AI running, no graphics to push, etc., it will automatically reduce the frequency, right?

Not at all. When running a game, the clocks only adjust when power limits are reached (under high load); they won't lower on low load like a PC might. In fact, they'll be at their highest clocks in those situations. If devs want the system not to go HAM on a menu screen, they should cap the framerate. :)