
Kida

Member
Oct 27, 2017
1,898
From the specs it seems third-party games will be marginally better on Xbox. I expect first-party efforts will be a lot more interesting and competitive, with Sony able to design around the faster SSD and MS able to push more pixels and better ray tracing.

Overall it will be a fascinating generation and I look forward to seeing how the different approaches will play out.
 

nelsonroyale

Member
Oct 28, 2017
12,124
I know, but that's what Cerny told them.
Tom Warren is also a super reliable source directly in contact with devs and insiders.

Variable is variable, as it is a "boost" compared to a baseline clock.
Sustained or locked will always stay at those frequencies.

We'll see.

He isn't right here though, at least based on what Cerny said. You really think the CPU is going to be at max load most of the time? There is a lot more CPU to play with this gen. Also, Warren is definitely a partial commentator... it is obvious he has contacts there and bats as such.

Also, 'the 10.28 can't be sustained for long'. Based on what? Cerny claimed that a 10% drop in power results in a 2-3% drop in frequency. That kind of contradicts what you say, but the proof is in the pudding. Again, the claim here is that the max clock is effectively the typical clock, and it only drops under heavy load. It isn't likely to be under max load most of the time on the CPU... so...

Anyway, I suppose it's difficult to know how much that holds up until it is tested, but this doesn't seem to be analogous to other systems.
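A rough back-of-envelope on that 10% claim, assuming dynamic power scales roughly with the cube of frequency (a generic rule of thumb, not a confirmed PS5 power model):

```python
# Back-of-envelope on "a 10% power drop costs only 2-3% of frequency",
# assuming dynamic power scales roughly with the cube of the clock
# (a generic rule of thumb, not Sony's actual voltage/frequency curve).

PEAK_CLOCK_GHZ = 2.23  # PS5 GPU peak clock (official figure)

def clock_for_power_fraction(power_fraction: float) -> float:
    """Clock that fits in a reduced power budget under P ~ f^3."""
    return PEAK_CLOCK_GHZ * power_fraction ** (1 / 3)

for cut in (0.05, 0.10):
    clock = clock_for_power_fraction(1 - cut)
    drop = 1 - clock / PEAK_CLOCK_GHZ
    print(f"{cut:.0%} power cut -> {clock:.2f} GHz ({drop:.1%} clock drop)")
# 5% power cut -> ~1.7% clock drop; 10% -> ~3.4%. Same ballpark as the
# claim, so small power savings cost very little clock under this model.
```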
 

Kemono

▲ Legend ▲
Banned
Oct 27, 2017
7,669
Variable only means that the dev can decide how much power is used in each and every scene.

You can't use 100% of the GPU power when you already use 100% of the CPU power.
But if you want the full GPU power, you can do that.

This is not the same as boost clocks on PC hardware.
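A toy model of that tradeoff (all wattages invented for illustration; this is not Sony's actual allocator):

```python
# Toy model of a shared CPU/GPU power pool (invented numbers, purely
# illustrative; not Sony's actual allocator or real wattages).

TOTAL_BUDGET_W = 200.0  # hypothetical fixed SoC power pool
GPU_MAX_W = 160.0       # hypothetical GPU draw at its peak clock
CPU_MAX_W = 60.0        # hypothetical CPU draw at its peak clock

def gpu_power_available(cpu_draw_w: float) -> float:
    """Whatever the CPU doesn't draw is left for the GPU."""
    return min(GPU_MAX_W, TOTAL_BUDGET_W - cpu_draw_w)

print(gpu_power_available(CPU_MAX_W))  # CPU maxed: GPU gets 140 W, short of 160
print(gpu_power_available(40.0))       # lighter CPU load: GPU gets its full 160 W
```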
 

Decarb

Member
Oct 27, 2017
8,632
Where is this notion that PS5 can't maintain 10.28 TF for long coming from? It's not running on a battery, and Cerny did say in his video that cooling is designed for max load so it doesn't go into overdrive like the Pro. Just because it throttles down when not required doesn't mean it can't run at max speed at all times. Have people seen how their GPUs behave in a PC? Feels like I'm taking crazy pills here.
 
Feb 1, 2018
5,239
Europe
Yeah, I'll eat my hat if it's 9.2 TF, as that's just too wild. How can you develop a game with the chance the console would drop an entire TF? I think a lot of people are not using a bit of logic.
I don't think it drops "randomly". Cerny said you can choose your speed, but at a price: you will lose speed in other components.

It is interesting that developers can tailor their clock speed to their bottleneck; still, it can never beat a fixed-clock system like XSX, especially since XSX has higher fixed clock speeds.
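For reference, the TF numbers everyone is throwing around fall out of one formula: CUs × 64 shaders per CU × 2 FMA ops per clock × clock speed. A quick check with the published figures (note that 9.2 TF is just what 36 CUs at 2.0 GHz works out to):

```python
def teraflops(cus: int, clock_ghz: float) -> float:
    """FP32 TFLOPS = CUs * 64 shaders/CU * 2 ops per clock * clock (GHz)."""
    return cus * 64 * 2 * clock_ghz / 1000

print(f"PS5 peak:        {teraflops(36, 2.230):.2f} TF")  # ~10.28
print(f"XSX fixed:       {teraflops(52, 1.825):.2f} TF")  # ~12.15
print(f"36 CU @ 2.0 GHz: {teraflops(36, 2.000):.2f} TF")  # ~9.22
```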
 

Gemüsepizza

Member
Oct 26, 2017
2,541
Sure, but then 10.28 cannot be sustained for long, whereas XSX can have 12.15 sustained all the time, even with all the other components at max usage.

Not necessarily. On XSX, there might not be a system that is based on power budget, but there is 100% one that is based on temperature (there has to be, or the chips die). So if the XSX GPU is running at the full 12.155 TF and you hammer the CPU with AVX stuff, it will throttle the frequencies. The difference seems to be that on PS5 you don't have to worry about temperatures; it's all done with the power budget. Which means devs have complete control over it and aren't restricted by things like room temperature.
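To make the contrast concrete, a toy version of the two throttling triggers being described (invented thresholds; neither console's real control loop is public):

```python
# Toy contrast between temperature-triggered and power-triggered
# throttling (invented thresholds; neither console's real control
# loop is public).

def thermal_throttle(temp_c: float, clock_ghz: float) -> float:
    """Temperature-triggered: depends on measured heat, which depends
    on the cooler and on room temperature."""
    return clock_ghz * 0.95 if temp_c > 95.0 else clock_ghz

def power_throttle(draw_w: float, budget_w: float, clock_ghz: float) -> float:
    """Power-triggered: depends only on what the workload draws, so it
    behaves identically in a cold room and a hot one."""
    return clock_ghz * (budget_w / draw_w) if draw_w > budget_w else clock_ghz
```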
 

Shpeshal Nick

Banned
Oct 25, 2017
7,856
Melbourne, Australia
Not necessarily. On XSX, there might not be a system that is based on power budget, but there is 100% one that is based on temperature (there has to be, or the chips die). So if the XSX GPU is running at the full 12.155 TF and you hammer the CPU with AVX stuff, it will throttle the frequencies. The difference seems to be that on PS5 you don't have to worry about temperatures; it's all done with the power budget. Which means devs have complete control over it and aren't restricted by things like room temperature.

lol. Some of you make me laugh.
 

Marble

Banned
Nov 27, 2017
3,819
It's gonna be interesting.

Basically X has a small edge (~18%) in TFs, but lower clocks on its CUs, and a slight advantage in CPU clock.

And PS5 has a huge edge in IO speed.

Really curious to see both in action and what it all means for multiplatform games.
 

Deleted member 20297

User requested account closure
Banned
Oct 28, 2017
6,943
Not necessarily. On XSX, there might not be a system that is based on power budget, but there is 100% one that is based on temperature (there has to be, or the chips die). So if the XSX GPU is running at the full 12.155 TF and you hammer the CPU with AVX stuff, it will throttle the frequencies. The difference seems to be that on PS5 you don't have to worry about temperatures; it's all done with the power budget. Which means devs have complete control over it and aren't restricted by things like room temperature.
So you're saying the PS5 has an advantage over the XSX because of variable clock speed?
 
Feb 1, 2018
5,239
Europe
Not necessarily. On XSX, there might not be a system that is based on power budget, but there is 100% one that is based on temperature (there has to be, or the chips die). So if the XSX GPU is running at the full 12.155 TF and you hammer the CPU with AVX stuff, it will throttle the frequencies. The difference seems to be that on PS5 you don't have to worry about temperatures; it's all done with the power budget. Which means devs have complete control over it and aren't restricted by things like room temperature.
Where do you get that?? Lol?
 

P40L0

Member
Jun 12, 2018
7,591
Italy
That wasn't my point, and I think you know that.
I think Warren is OK, and the "variable" thing, coupled with showing only the max frequency without mentioning the minimum in an official spec sheet reveal, is not transparent at all. It can easily be read as an attempt to give the best possible answer to the competitor's earlier, superior specs reveal.

We'll see the actual difference in real games and real benchmarks anyway, especially on third parties.

That doesn't mean it can't be sustained.

Someone with an Xbox avatar not arguing in good faith? I'm shocked.
It's a Halo avatar, and Halo is also on PC, you know.
Plus I've had all the systems.

We're in a console comparison thread anyway, what do you expect?
 

Deleted member 27551

User requested account closure
Banned
Oct 30, 2017
660
I don't think it drops "randomly". Cerny said you can choose your speed, but at a price: you will lose speed in other components.

It is interesting that developers can tailor their clock speed to their bottleneck; still, it can never beat a fixed-clock system like XSX, especially since XSX has higher fixed clock speeds.
Yes, but I really can't see the maximum at full load dropping that much. It just seems insane, and there is zero proof yet either.

It's a strange one with the split memory on the Xbox, with 6 GB going to the CPU and OS. Is this going to affect overall performance in some really CPU-intensive games? I don't pretend to know a lot about all this, so it would be interesting to know.

I think it's clear the Xbox has the advantage overall, but I feel some just wanna downplay the PS5 and spread misinformation, to make it look like there is a bigger gap between these consoles than there actually is.
 

Monkhaus

Banned
Apr 18, 2019
59
I thought it would get better once the specs were released, but it got even worse.

So we can carry on with the same routine. The first suggestion to pick up is the RDNA vs. GCN debacle.

Cerny himself stated GCN != RDNA 2, so we can conclude the gap, considering only the TF, is now even bigger than between the Pro and the X, since that was a 1.8 GCN TF difference. A 1.8 RDNA 2 TF difference should be way more of a difference :) :) :)

Thanks Cerny for clearing this up as the best orator ever :) :)

Sorry Phil, I feel a little bad for this post, but it's necessary for the ongoing discussion.
 

Decarb

Member
Oct 27, 2017
8,632
The 10.28 can be sustained all the time if the dev reduces the CPU speed to match the max thermal envelope.
You're saying they designed the cooling without taking into account both the CPU and GPU running at max speed at the same time? Because he spent a lot of time talking about cooling and power, how they messed it up on PS4/Pro, and how they're fixing it.
 

Raide

Banned
Oct 31, 2017
16,596
It's literally in Cerny's talk.



It's probably a bit more comfortable for devs, because they have full control over frequencies. It also indicates that PS5 will have a very strong cooling system. But tbh I think XSX will have something similar.
They have already shown the Series X cooling. It's an advanced version of what was in the One X.
 

P40L0

Member
Jun 12, 2018
7,591
Italy
Not necessarily. On XSX, there might not be a system that is based on power budget, but there is 100% one that is based on temperature (there has to be, or the chips die). So if the XSX GPU is running at the full 12.155 TF and you hammer the CPU with AVX stuff, it will throttle the frequencies. The difference seems to be that on PS5 you don't have to worry about temperatures; it's all done with the power budget. Which means devs have complete control over it and aren't restricted by things like room temperature.
Nope, Microsoft already confirmed that the CPU and GPU will remain sustained at those frequencies even at max load, and even when playing with the console in a desert xD

This is possible thanks to how they increased the size of the console itself and laid out the internals with ventilation and cooling in mind from the beginning.
Plus it also seems to be silent thanks to that.
The main sacrifice for that sustained performance will basically be the huge form factor.
 
Oct 27, 2017
8,617
The World
Not necessarily. On XSX, there might not be a system that is based on power budget, but there is 100% one that is based on temperature (there has to be, or the chips die). So if the XSX GPU is running at the full 12.155 TF and you hammer the CPU with AVX stuff, it will throttle the frequencies. The difference seems to be that on PS5 you don't have to worry about temperatures; it's all done with the power budget. Which means devs have complete control over it and aren't restricted by things like room temperature.

That doesn't make any sense. XSX is designed to run at those frequencies, and that takes its cooling solution into account. The XSX CPU itself runs at a single frequency.

PS5 devs actually have to figure out whether they want max CPU speed or max GPU speed.
 

M4xim1l1ano

Member
Oct 29, 2017
1,094
Santiago, Stockholm, Vienna
It's literally in Cerny's talk.



It's probably a bit more comfortable for devs, because they have full control over frequencies. It also indicates that PS5 will have a very strong cooling system. But tbh I think XSX will have something similar.

Was there ever an issue before where devs complained about not having control of frequencies?

I don't know, so therefore I'm asking...
 

Deleted member 20297

User requested account closure
Banned
Oct 28, 2017
6,943
It's literally in Cerny's talk.



It's probably a bit more comfortable for devs, because they have full control over frequencies. It also indicates that PS5 will have a very strong cooling system. But tbh I think XSX will have something similar.
We already know from DF that the XSX is quiet, at least, and given how the Xbox One, Xbox One S, and Xbox One X are built, I wouldn't worry about temperature, noise, or cooling in general.
 

Kalasai

Member
Jan 16, 2018
895
France
You're saying they designed the cooling without taking into account both the CPU and GPU running at max speed at the same time? Because he spent a lot of time talking about cooling and power, how they messed it up on PS4/Pro, and how they're fixing it.
The cooling is designed around wattage. The PS5 will always draw the same watts; it's a fixed power pool, and the devs choose the GPU/CPU speeds with no possibility of going higher than the watt pool. It's a radically different design: the management is done around the power envelope, not around the heat of the CPU/GPU.
The good news here is that the thermals are taken care of; the PS5 will always make the same noise, in summer or in winter, in light-load scenes and in heavy-load scenes.
 

Muntaner

Member
May 12, 2018
956
No console warrior or plastic-box defender here, just a passionate gamer who would like to understand more about the technical topic. I'd like to try and discuss with you guys Sony's (Cerny's) choice on the GPU front, which is apparently causing all of this mess.

So the architectural approach where Sony went all in is "fewer CUs, higher clocks". In yesterday's presentation, Cerny clearly states it at the beginning of the GPU component description, saying he prefers to push clocks rather than CU count, insisting on the fact that higher clocks can perform better. Even the choice of an optimal cooling solution points in that direction. And here I think: why not go down the same path as the Xbox strategy and install more, lower-clocked CUs, using less budget for the cooling solution?

Cerny then gave a toy example where you have 36 CUs @ 1 GHz and 48 CUs @ 0.75 GHz: both have roughly the same number of teraflops (4.6), but in his opinion the first scenario with fewer CUs is far more performant for rasterization, makes better use of caches, etc. He also said that having more CUs can lead to other problems, such as it being more difficult for the dev to keep more CUs busy.
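The arithmetic in that example does check out, by the way; here's a quick check with the standard formula (the same one behind the 10.28 and 12.15 figures):

```python
def teraflops(cus: int, clock_ghz: float) -> float:
    """FP32 TFLOPS = CUs * 64 shaders/CU * 2 ops per clock * clock (GHz)."""
    return cus * 64 * 2 * clock_ghz / 1000

print(f"{teraflops(36, 1.00):.2f} TF")  # 4.61 TF
print(f"{teraflops(48, 0.75):.2f} TF")  # 4.61 TF -- same compute, but the
# 36 CU part runs its rasterizers, caches, and front end 33% faster
```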

My question for you guys is: how does this translate, in your opinion, into real-life scenarios where devs have to work with those two very, very different GPU architectures?
From what I understand, my feeling is that we'll get roughly the same performance even with the TF difference. But well, probably only side-by-side demos will settle those doubts at the end of the day.
 

Raide

Banned
Oct 31, 2017
16,596
Was there ever an issue before where devs complained about not having control of frequencies?

I don't know, so therefore I'm asking...
The issue has really just been developers being stuck with the old-ass Jaguar for so long. They had no choice; CPU-bound games had no other options. Now we have much better CPU options, but I really wonder what games developers will make that force the CPU/GPU balance to shift. In the PC space it was all about preventing bottlenecks, just so one or the other is not being held back.
 

Fiddler

Member
Oct 27, 2017
380
The 10.28 can be sustained all the time if the dev reduces the CPU speed to match the max thermal envelope.

That is also not true; they never said that, nor hinted at anything like it. The shown clock speeds are the norm. Since different tasks draw different amounts of power, if the running code would exceed the power budget, the GPU downclocks to save power and thereby never exceeds a thermal limit. AMD's SmartShift technology, which the PS5 also uses, comes on top of that: the CPU and GPU share the same power budget, and if the current tasks mean the CPU needs more power and the GPU less, that power can be transferred, and vice versa. It has nothing to do with the clocks of the components as such.
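A rough sketch of that shared-budget shifting (invented wattages; SmartShift's real policy lives in firmware and platform tuning, not in anything this simple):

```python
# Rough sketch of CPU/GPU power sharing under one budget (invented
# wattages; AMD SmartShift's real policy is firmware/platform tuning,
# not anything this simple).

TOTAL_BUDGET_W = 200.0  # hypothetical shared SoC power budget

def split_budget(cpu_demand_w: float, gpu_demand_w: float) -> tuple[float, float]:
    """Grant both demands if they fit; otherwise scale both back
    proportionally so the total never exceeds the budget."""
    total = cpu_demand_w + gpu_demand_w
    if total <= TOTAL_BUDGET_W:
        return cpu_demand_w, gpu_demand_w  # unused headroom simply shifts over
    scale = TOTAL_BUDGET_W / total
    return cpu_demand_w * scale, gpu_demand_w * scale

print(split_budget(40.0, 150.0))  # light CPU load: GPU gets all 150 W
print(split_budget(70.0, 160.0))  # both heavy: each gets scaled back
```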
 

Wollan

Mostly Positive
Member
Oct 25, 2017
8,807
Norway but living in France
My question for you guys is: how does this translate, in your opinion, into real-life scenarios where devs have to work with those two very, very different GPU architectures?
From what I understand, my feeling is that we'll get roughly the same performance even with the TF difference. But well, probably only side-by-side demos will settle those doubts at the end of the day.
This is the interesting part, where only real-world performance benchmarking (using a real game) will reveal the actual effect. From how Cerny described things, it seems like the PS5 will punch above its 'TF target' in real-world performance, closing the gap with the XSX.
 

M4xim1l1ano

Member
Oct 29, 2017
1,094
Santiago, Stockholm, Vienna
This is the interesting part, where only real-world performance benchmarking (using a real game) will reveal the actual effect. From how Cerny described things, it seems like the PS5 will punch above its 'TF target' in real-world performance, closing the gap with the XSX.

Wondering: doesn't XSX also have tech/software solutions to punch above its weight as well?
 

Kalasai

Member
Jan 16, 2018
895
France
Any devs here who can explain how the thermal management approach differs between the PS5 and Xbox, and how this can affect performance in good and bad ways?
 

Decarb

Member
Oct 27, 2017
8,632
The cooling is designed around wattage. The PS5 will always draw the same watts; it's a fixed power pool, and the devs choose the GPU/CPU speeds with no possibility of going higher than the watt pool. It's a radically different design: the management is done around the power envelope, not around the heat of the CPU/GPU.
The good news here is that the thermals are taken care of; the PS5 will always make the same noise, in summer or in winter, in light-load scenes and in heavy-load scenes.
Yeah, that's what I was wondering, because I don't think they'll screw up cooling again and leave the cooling envelope small enough that devs have to juggle CPU/GPU speeds to keep them from throttling. Especially now that fan speed is capped at a constant rate. He did mention that the GPU can go a lot higher than 2.23 GHz in their current setup and they had to cap it, so thermals/wattage don't seem like a bottleneck.
 

Wollan

Mostly Positive
Member
Oct 25, 2017
8,807
Norway but living in France
Wondering: doesn't XSX also have tech/software solutions to punch above its weight as well?
There are a lot of factors. Potentially the Xbox 'as a conservative whole' gives it an advantage vs. Sony's really-high-clocks ("there's a lot to be said about being faster") and really-fast-SSD/IO approach. We need real-world performance benchmarking!
 

haveheart

Member
Oct 25, 2017
2,076
That is also not true; they never said that, nor hinted at anything like it. The shown clock speeds are the norm. Since different tasks draw different amounts of power, if the running code would exceed the power budget, the GPU downclocks to save power and thereby never exceeds a thermal limit. AMD's SmartShift technology, which the PS5 also uses, comes on top of that: the CPU and GPU share the same power budget, and if the current tasks mean the CPU needs more power and the GPU less, that power can be transferred, and vice versa. It has nothing to do with the clocks of the components as such.

I probably missed it, but was it established that the overall power budget of the PS5 is not enough to drive both the GPU and CPU at 100%?
I watched the live stream yesterday, but this wasn't my takeaway...
 
Dec 8, 2018
1,911
My question for you guys is: how does this translate, in your opinion, into real-life scenarios where devs have to work with those two very, very different GPU architectures?
From what I understand, my feeling is that we'll get roughly the same performance even with the TF difference. But well, probably only side-by-side demos will settle those doubts at the end of the day.


It's very hard to estimate how much the increased clocks will close the already very small gap between the two, and it will probably differ from game to game depending on how well both are utilized. We probably won't have an answer until the actual games are being tested by DF and similar sites.
 

MrKlaw

Member
Oct 25, 2017
33,029
I was going to make a thread with a poll, but I'm not sure if it would've been buried or even laughed at, so I figured I'd ask here...

Which console do you think is set up better for a mid-generation upgrade like this generation's Pro and X?

Seems like Sony put a lot of focus on a ridiculously fast and custom SSD setup; however, over time off-the-shelf SSDs will match and surpass those speeds, so is it as simple as Microsoft upgrading their SSDs to match while also having higher specs in all the other areas? Is it easier for Sony to upgrade the CPU and GPU like both consoles did this generation? Is it stupid for somebody to be asking these questions months before the generation even begins?

Please note I know very little about this shit. Just wondering if it's at all possible to predict who will have an easier time upgrading in a few years, as I'm sure there may be a few people who will only jump in once the revisions release.


PS5 IMO (but I personally don't know if there will be mid-gens this time: no big resolution increase like 4K was last time, no clear process node improvements, wafers are expensive, existing consoles are likely to stay expensive for a while, they'll want to save process improvements for PS6, etc.)

You can increase CUs and get more graphics. The SSD is a relatively fixed baseline, because you can't really code for an increase without leaving the older consoles behind. Higher res/FPS is more additive, so easier to upgrade, I think. And with the PS5 being 36 CUs, I think there is potential to go bigger - that will be trickier for MS, but still doable.
 

Buenoblue

Banned
May 5, 2018
313
Let's all be honest here. Sony dropped the ball by focusing on a CU count matching the PS4 Pro for backwards compatibility, even though they don't seem to have managed full compatibility. WTF. Now they have found out about Xbox Series X's 12 TF, which let's be honest took us all by surprise, and they're trying to fudge the numbers to seem closer.

PS5 is still a beast and will have stunning-looking games, but if they wanted more power they should have just put more CUs in the PS5, not tried to run this thing at 2.23 GHz. That's faster than any PC graphics card I've ever seen. Trying to say variable clocks are a good thing is laughable. After the noise levels of my original PS4 and PS4 Pro, I've lost confidence that Cerny can deliver a fast, quiet, modern console. This thing is gonna be loud. Whereas Microsoft has delivered 12 TF and higher CPU clocks at the same noise profile as the One X.

If Sony can come in at a lower price, then maybe that's the reasoning behind the specs. The Xbox Series X does seem like a no-compromise machine, and I worry about price. I still think Sony takes this gen; for me they just have the better games. I just wish we could have a Microsoft-designed console with Sony games on it lol.
 
Feb 1, 2018
5,239
Europe
It's probably a bit more comfortable for devs, because they have full control over frequencies. It also indicates that PS5 will have a very strong cooling system. But tbh I think XSX will have something similar.
What? How is having control over frequencies a thing? What is happening in this thread?

OK, I am going to lurk for a while... I just get stressed reading all this :)
 

slsk

Member
Oct 27, 2017
247
The real winner in all of this is DigitalFoundry, who will get to make endless amounts of content comparing game performance between consoles.
 

MrKlaw

Member
Oct 25, 2017
33,029
No. 2.23 GHz is the peak clock.

Also, it's interesting that the GPU and CPU do not seem to be able to hit peak simultaneously.

We really need Sony to talk to e.g. Wired or DF and clarify this. They explain it in a way that sounds like in 90% of cases it's at that max figure, and it only rarely dips, and only by a small amount. But clearly this is something being jumped on as a big negative, with comparisons to PC GPUs and talk of only 'sometimes' reaching the declared clocks.

If that is the case, then fine - but if it really is set up to be mostly at 2.23, I'd like to see some clarification and examples.