
Bunzy

Banned
Nov 1, 2018
2,205
Yep. If the 12.1 TFLOPS console is offering performance that's equivalent to 11.4 Turing TFLOPS, we can assume very little gain.

That's assuming the 12.1 TFLOPS RDNA 2 GPU is running at max clocks all the time. The 9.7 TFLOPS 5700 XT actually runs games at lower clocks, which effectively makes it 9.3 TFLOPS. Gears on my RTX 2080 runs at a constant 1950 MHz in the benchmark tool, which works out to 11.4 TFLOPS.

This should put the PS5 around a 2070 Super if it maintains those 2.23 GHz clocks.

The 2070 Super and 2080 are about an 8 percent difference in performance, so if the consoles could hit 2070 Super level that would still be awesome. Like someone said earlier, we really won't be able to tell until the games start showing up and we can measure performance.
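For reference, here's a minimal sketch of the FP32 TFLOPS arithmetic behind those figures. The shader counts come from public spec sheets; the clocks are the ones claimed in this thread, so treat the outputs as rough estimates:

```python
# Quick sanity check of the TFLOPS figures quoted above.
# FP32 TFLOPS = shaders * 2 ops (FMA) * clock in GHz / 1000.

def tflops(shaders: int, clock_ghz: float) -> float:
    return shaders * 2 * clock_ghz / 1000

print(f"RTX 2080 @ 1.950 GHz: {tflops(2944, 1.950):.1f} TF")  # ~11.5 TF (rounded to 11.4 above)
print(f"5700 XT  @ 1.815 GHz: {tflops(2560, 1.815):.1f} TF")  # ~9.3 TF in-game
print(f"PS5      @ 2.230 GHz: {tflops(2304, 2.230):.1f} TF")  # ~10.3 TF
print(f"Series X @ 1.825 GHz: {tflops(3328, 1.825):.1f} TF")  # ~12.1 TF
```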
 

Bunzy

Banned
Nov 1, 2018
2,205
Wait
No, it seems they are different versions. The article is confusing, but the DF video is clearer: in the benchmark, this devkit version reaches 2080 level, and maybe the final version will be better.
Wait, I thought Digital Foundry said they were playing a version of Gears 5 on Series X that had more bells and whistles than even ultra PC settings. Am I remembering it wrong?
 

RivalGT

Member
Dec 13, 2017
6,385
Wait

Wait, I thought Digital Foundry said they were playing a version of Gears 5 on Series X that had more bells and whistles than even ultra PC settings. Am I remembering it wrong?
The two-week build has improvements over the PC version; that's the version that was running at mostly 60 fps with dynamic 4K resolution, I believe. The benchmark was run at the same PC settings with no improvements and at native 4K. That version ran at around 45 fps, comparable to an RTX 2080.
 

AegonSnake

Banned
Oct 25, 2017
9,566
Wait

Wait, I thought Digital Foundry said they were playing a version of Gears 5 on Series X that had more bells and whistles than even ultra PC settings. Am I remembering it wrong?
It's confusing, but there are two different versions of the Series X demo: one with all those fancy features not in the PC version, and the other with those features disabled, which runs at performance equivalent to an RTX 2080. MS only benchmarked the demo without those features to paint the Series X in a better light. I'm assuming those extra features make the game run at a worse framerate.
 

chris 1515

Member
Oct 27, 2017
7,074
Barcelona Spain
Wait

Wait, I thought Digital Foundry said they were playing a version of Gears 5 on Series X that had more bells and whistles than even ultra PC settings. Am I remembering it wrong?

There is this version, but they showed the benchmark at ultra settings, and after two weeks of work on a non-final devkit it runs as well as a 2080. I think in the end it will run at least a little bit better. I don't expect it to be comparable to a 2080 Ti, that's too big a difference, but if it's comparable to a 2080 Super that would be great; maybe it will land between a 2080 and a 2080 Super.
 

Simuly

Alt-Account
Banned
Jul 8, 2019
1,281
You're going to be disappointed if you think either is going to perform better than or equal to a 2080 Ti.

I took that to mean that the console "X-factor" of a fixed hardware spec and AAA budgets for exclusives results in games like God of War running on a very slow Jaguar CPU and an underpowered GPU, which is a crazy achievement.

So something like Horizon 2 will look like a PC game maxing out a 2080 Ti (resolution excluded). Or something like that :)
 

gozu

Member
Oct 27, 2017
10,296
America
The only console optimization bonus we saw this generation was on the CPU; the GPU behaved similarly to its desktop equivalent at the same or similar graphics settings.

I understand why you might draw this conclusion, but until we get Horizon Zero Dawn and Death Stranding and see how they run on a 7850 with 2 disabled CUs and the shittiest AMD 8-core out there or equivalent, we will not actually know. We are blessed this happens at all, as usually first party games stay first party forever. Let's settle this in a few months when those games are out.


Well, I might of course be wrong, but don't forget they will benefit from a more recent architecture and node (RDNA 2) that will be released two years and two months after the 2080 Ti came out. Let's compare the next Forza, Halo, and whatever other first-party software comes out for the MS ecosystem and see who was right.

You're going to be disappointed if you think either is going to perform better than or equal to a 2080 Ti.

I mean, the same thing was said when people wished for 10-12 TF consoles with multi-turbo-boosted SSDs :)

I think we're going to be pleasantly surprised by both Sony and MS. I am quite pleased by almost everything minus MS not investing in VR and not pushing the envelope more with their controller. But those have no impact on graphics.
 

gozu

Member
Oct 27, 2017
10,296
America
I took that to mean that the console "X-factor" of a fixed hardware spec and AAA budgets for exclusives results in games like God of War running on a very slow Jaguar CPU and an underpowered GPU, which is a crazy achievement.

So something like Horizon 2 will look like a PC game maxing out a 2080 Ti (resolution excluded). Or something like that :)

That was precisely my meaning :)
 
Jan 21, 2019
2,902
I really hope and expect next gen to blast past the current gen's texture limitations. I don't know exactly what it is, but I feel like we have hit the limit of texture fidelity, and it keeps the worlds from popping out more.

One example is Doom Eternal, a crazy good-looking game, but then I see the textures on this character and it just blows my mind that we can obviously spot pixelation on the highest PC settings. It also feels like the character model is missing depth on certain parts of her body.

K0gRWWK.jpg


Look at her fingers. I feel like next gen should really step it up here. With RT and better textures, this scene could be at CGI level and maybe even photorealistic, since it's something "unnatural".

Anyone else feeling this way? While games are still impressive looking, the lighting and textures have produced a look that feels familiar across multiple games.
 

sncvsrtoip

Banned
Apr 18, 2019
2,773
I understand why you might draw this conclusion, but until we get Horizon Zero Dawn and Death Stranding and see how they run on a 7850 with 2 disabled CUs and the shittiest AMD 8-core out there or equivalent, we will not actually know. We are blessed this happens at all, as usually first party games stay first party forever. Let's settle this in a few months when those games are out.
It would be better to compare to a PS4 Pro and an RX 470 (slightly downclocked), as support for the 7850 could be lacking nowadays (and as I said, the CPU showed a boost, so the shittiest AMD CPU will not be enough).
 

Andromeda

Member
Oct 27, 2017
4,841
There is this version, but they showed the benchmark at ultra settings, and after two weeks of work on a non-final devkit it runs as well as a 2080. I think in the end it will run at least a little bit better. I don't expect it to be comparable to a 2080 Ti, that's too big a difference, but if it's comparable to a 2080 Super that would be great; maybe it will land between a 2080 and a 2080 Super.
Was the game running with checkerboard rendering on the 2080?
 

leng jai

Member
Nov 2, 2017
15,114
I really hope and expect next gen to blast past the current gen's texture limitations. I don't know exactly what it is, but I feel like we have hit the limit of texture fidelity, and it keeps the worlds from popping out more.

One example is Doom Eternal, a crazy good-looking game, but then I see the textures on this character and it just blows my mind that we can obviously spot pixelation on the highest PC settings. It also feels like the character model is missing depth on certain parts of her body.

K0gRWWK.jpg


Look at her fingers. I feel like next gen should really step it up here. With RT and better textures, this scene could be at CGI level and maybe even photorealistic, since it's something "unnatural".

Anyone else feeling this way? While games are still impressive looking, the lighting and textures have produced a look that feels familiar across multiple games.

The Doom games are a bad example, they've never had good textures.
 

Andromeda

Member
Oct 27, 2017
4,841
The benchmark comparing the Series X and the PC (RTX 2080, Threadripper 2950X (16-core), 64 GB RAM) was an apples-to-apples comparison at a fixed 3840x2160 with plain PC ultra settings, and it showed nearly identical performance. They didn't get video footage of this.

Watch from 11:10.
They talk about deactivating the dynamic resolution on the XBX for the benchmark; it's unclear whether CBR is still used or not. But they actually say the 2080 still has the advantage over the XBX.
 

Straffaren666

Member
Mar 13, 2018
84
You should check out this video: http://www.youtube.com/watch?v=IDO-UHZebV4&t=4m40s

You need a 2450 MHz 5700 XT, which means a 12.5 TF GPU, to match a 2080. It's probably heavily bandwidth limited, but the PS5 will have even less bandwidth available, as it needs to share it.

Thanks, that's a very interesting video. Though, after watching it I was left with a couple of glaring questions that set warning bells ringing for me.

1. Nowhere in the video is it made clear that 2450 MHz was the actual average clock frequency of the overclock.
2. Did he really manage to get a rock-solid clock frequency of 2450 MHz across all three benchmarks?
3. That 2450 MHz is a suspiciously nice, round number.
4. The poor performance scaling implies the 5700 XT is pretty severely memory-BW-bound. Let's look at the TimeSpy graphics score. If we assume the stock 5700 XT is running at an average clock of 1800 MHz (which might be slightly off, but it's probably close enough for our purpose), then a ~36% GPU overclock and a ~5% memory overclock resulted in a performance gain of ~21%. So, if we're entirely non-BW-bound we should see a ~36% performance increase, and if we're entirely BW-bound we should see a ~5% performance increase. That tells us we're ~45% BW-bound in the overclock (see the quick sketch at the end of this post). I have extensive experience writing renderers and profiling/optimizing them on all console generations up to the current one. Unfortunately, I'm WFH due to the coronavirus and don't have access to any dev kits here. The percentage of time being BW-bound isn't a metric that is typically tracked or relevant when optimizing (even though it should be possible to manually compute that number from a good GPU trace), so I can't say how well it would match a typical real-world scenario, but my gut feeling is that we shouldn't be as BW-bound as the performance scaling implies, by quite a distance.

The video you link to is a recap of a live overclock stream, and the video of that stream is available on YouTube as well. I didn't look through it all, but I found very strong indications that the actual frequency of the GPU is lower than what the recap indicates. If you look at https://youtu.be/JS9in-RHbjw?t=2172, you can see the methodology he's using. I'm not familiar at all with this, but he seems to drive the GPU voltage from an envelope, and the end point of that envelope is what he's referring to as the SET of the overclock; that number does not equal the average frequency of the overclock. If you then go back to the recap video, you can see that it denotes the overclock as the 2450/915 SET.

The guy in the video is right that the only explanation for the poor performance scaling of the overclock is that it's BW-bound, but that only holds if the average clock frequency really is 2450 MHz. Does anyone know the guy performing this test? He certainly knows a lot about overclocking, but I'm not sure he knows the inner workings of a GPU, and for that reason he might not realize that it's absolutely imperative to know the average clock frequency before he can draw any conclusions about being BW-bound. From the video, I get the impression that he doesn't really think it's that important.

If someone is able to obtain the actual average frequency of both the stock 5700 XT and the overclocked one, for all three benchmarks, it's trivial to determine to what extent the overclock is BW-bound. Until then, I believe my extrapolations, which assume a low BW-bound percentage, should be pretty close to how a 10.3 TF 5700 XT would perform.
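Here's a minimal sketch of the interpolation in point 4, assuming the workload splits cleanly into a core-clock-bound part and a bandwidth-bound part (a simplification of real GPU behaviour; the input percentages are the estimates quoted above, not measurements):

```python
# Rough sketch of the reasoning in point 4: if a fraction `b` of the frame time
# scales with memory bandwidth and the rest with core clock, then
#   observed_gain = (1 - b) * core_gain + b * bw_gain
# Solving for b with the numbers quoted above:

core_gain = 0.36   # ~36% GPU clock overclock (1800 MHz -> 2450 MHz, if real)
bw_gain   = 0.05   # ~5% memory overclock
observed  = 0.21   # ~21% TimeSpy graphics score increase

b = (core_gain - observed) / (core_gain - bw_gain)
print(f"Estimated BW-bound fraction: {b:.0%}")  # ~48%, in line with the ~45% above
```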
 

Nooblet

Member
Oct 25, 2017
13,621
I really hope and expect next gen to blast past the current gen's texture limitations. I don't know exactly what it is, but I feel like we have hit the limit of texture fidelity, and it keeps the worlds from popping out more.

One example is Doom Eternal, a crazy good-looking game, but then I see the textures on this character and it just blows my mind that we can obviously spot pixelation on the highest PC settings. It also feels like the character model is missing depth on certain parts of her body.

K0gRWWK.jpg


Look at her fingers. I feel like next gen should really step it up here. With RT and better textures, this scene could be at CGI level and maybe even photorealistic, since it's something "unnatural".

Anyone else feeling this way? While games are still impressive looking, the lighting and textures have produced a look that feels familiar across multiple games.
It's more of a file size issue.
Upping the resolution on those small parts eventually ends up taking a lot of space. Look at the "high-res texture packs" that games like Siege and MHW have: they don't really add detail to easily visible areas like terrain and signs, as those end up looking the same, but the detail they do add is limited to tiny things like hands, engravings on weapons, parts of clothing, etc., and yet they end up adding 40 GB or so on top of the existing installation (a rough size calculation is sketched below).

Additionally, Doom Eternal isn't really a good example of a game with very crisp texture work; the previous one was actually quite average in the texture department (as far as clarity is concerned, not art). Games like Star Citizen, The Division 2, Deus Ex: Mankind Divided, Metro Exodus, Modern Warfare, HZD, Death Stranding, Battlefield, R6: Siege, Arkham City, and Gears 5 are what I'd call examples of great, crisp texturing this gen. I'd even add smaller games like A Plague Tale: Innocence and The Vanishing of Ethan Carter.
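As a rough size calculation (my own illustration, assuming BC7-style block compression at roughly 1 byte per pixel and a full mip chain), doubling texture resolution quadruples the footprint, which is why these packs balloon so quickly:

```python
# Back-of-the-envelope texture size math: doubling resolution quadruples the
# pixel count, so file size grows fast even with block compression.

def texture_mb(width: int, height: int, bytes_per_pixel: float, mips: bool = True) -> float:
    """Approximate size in MB; a full mip chain adds ~1/3 on top."""
    size = width * height * bytes_per_pixel
    if mips:
        size *= 4 / 3
    return size / (1024 * 1024)

# BC7-style compression is ~1 byte per pixel.
print(f"2K x 2K: {texture_mb(2048, 2048, 1):.1f} MB")   # ~5.3 MB
print(f"4K x 4K: {texture_mb(4096, 4096, 1):.1f} MB")   # ~21.3 MB
print(f"8K x 8K: {texture_mb(8192, 8192, 1):.1f} MB")   # ~85.3 MB
```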
 

Straffaren666

Member
Mar 13, 2018
84
His point is not that the PS5 would match the 2080 TF-wise, but that, with the usual slew of console optimizations and consoles having a history of punching well above their weight, the PS5 with its 10.3 TF GPU would end up eking out performance equivalent to a 2080. So basically a 10.3 TF console GPU would perform like a 12.5 TF PC GPU, and a 12 TF console GPU would perform like a 13.5/14 TF PC GPU.

Obviously, my numbers are just estimates, but I am at least certain that consoles always have punched above their weight (and likely always will). You simply can't take a 10 TF console GPU, pair it with a 10 TF PC GPU, and expect the same results.


While I love how it looks, I am not gonna get my hopes up. I just don't trust devs anymore when it comes to stuff like this. I'll believe it when you show me someone playing it, or when it's coming from a studio that I know I can trust.

I was trying to extrapolate the performance of a fixed-clock 10.3 TF 5700 XT and relate it to an RTX 2080. So in that sense, Napata's video would contradict my prediction. However, I don't believe the guy in that video is drawing the right conclusions, simply because there are strong indications he doesn't know what the average frequency actually is for each of the three benchmarks. If he doesn't know that, then the only thing his video tells us is how a 5700 XT performs when overclocked with WattMan configured with an end envelope frequency of 2450 MHz (and some other settings). Which in itself might be interesting, but not useful for extrapolating performance or determining whether the 5700 XT is memory-BW-bound or not. If you are interested, you can read my reply to Napata.
 

ILikeFeet

DF Deet Master
Banned
Oct 25, 2017
61,987
I actually have not seen a hard source for this 15% IPC perf figure that gets thrown about. Do you have it?
I think it comes from AMD saying RDNA 2 has 50% better perf/W. They said the same of RDNA 1 over GCN, which they calculated from a 15% IPC improvement and some percentage of power consumption improvement.
 

disco_potato

Member
Nov 16, 2017
3,145
I actually have not seen a hard source for this 15% IPC perf figure that gets thrown about. Do you have it?
Most RDNA 2 articles I see are also about Zen 3, which was part of the same presentation. The ~15% IPC improvement is mentioned in regard to Zen 3. The RDNA 2 IPC increase is only mentioned as part of the 50% perf/W improvement over RDNA 1. No specific official numbers.
 

amstradcpc

Member
Oct 27, 2017
1,768
I actually have not seen a hard source for this 15% IPC perf figure that gets thrown about. Do you have it?
In the B3D thread about the AMD RDNA 2 presentation, someone said the speaker mentioned that 15%. I thought it was in a slide where the remaining ~35% was attributed to higher clocks and better transistor/gate management, but I haven't found the slide...
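For what it's worth, if the quoted +50% perf/W really did decompose multiplicatively into an IPC factor and a clock/power factor, the arithmetic would look like this (the decomposition itself is an assumption for illustration, not an AMD figure):

```python
# Illustration only: how a 15% IPC gain would compound with a clock/power
# factor to reach the quoted +50% perf/W.

total_gain = 1.50   # +50% perf/W being discussed
ipc_gain   = 1.15   # the ~15% IPC figure being discussed

other = total_gain / ipc_gain
print(f"Remaining factor from clocks/power: {other:.2f}x (~{other - 1:.0%})")
# ~1.30x, i.e. ~30% -- "15% + 35%" only reaches 50% if you add the
# percentages instead of compounding them.
```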
 

ILikeFeet

DF Deet Master
Banned
Oct 25, 2017
61,987
Actually, it might be 25% more performance per clock? I'm trying to find the footnote that mentioned the 15% IPC improvement.

COMPUTEX_KEYNOTE_DRAFT_FOR_PREBRIEF.26.05.19-page-011.jpg
 

III-V

Member
Oct 25, 2017
18,827
Most RDNA 2 articles I see are also about Zen 3, which was part of the same presentation. The ~15% IPC improvement is mentioned in regard to Zen 3. The RDNA 2 IPC increase is only mentioned as part of the 50% perf/W improvement over RDNA 1. No specific official numbers.
In the B3D thread about the AMD RDNA 2 presentation, someone said the speaker mentioned that 15%. I thought it was in a slide where the remaining ~35% was attributed to higher clocks and better transistor/gate management, but I haven't found the slide...
Actually, it might be 25% more performance per clock? I'm trying to find the footnote that mentioned the 15% IPC improvement.

COMPUTEX_KEYNOTE_DRAFT_FOR_PREBRIEF.26.05.19-page-011.jpg
Thanks! Yeah, I have never seen a number specific to RDNA 2. I think there should be one, though. Some of the above (1.25x RDNA vs GCN) may not apply because of how they changed processing in general. This is really interesting, as Dictator showed that the RTX 2080 still has an advantage over the XSX. Possibly the Gears demo was still running a lot of legacy code, though, and was missing some RDNA 2 perf gains?
 

III-V

Member
Oct 25, 2017
18,827
It says:

See end-note RZ3-102. RDNA2 improvements based on AMD internal estimates.

Note that the second slide is compared to GCN.

yBEXWFA.png

KwfQZWVh.png


4QVRSk7h.png




 

nullZr0

Alt account
Banned
Mar 2, 2020
240
This discussion about the GPU clocks can't be the reaction Sony was expecting. Another thing they should clarify ASAP.
What's to clarify? It's either locked or variable. If the GPU spends most of its time at the 2.2 GHz figure, then why didn't Cerny just say the clock speed and leave it at that? Why mention that the clock speed is variable?

I've never seen something this confusing before. Even the GitHub-rumored 2.0 GHz seemed overclocked for a console GPU. This reeks of pure marketing, like MS in 2013 overclocking the Xbox One CPU by a little bit just to one-up Sony in one area. This looks purely like a move to avoid landing at the rumored specs.
 

Lady Gaia

Member
Oct 27, 2017
2,476
Seattle
What's to clarify? It's either locked or variable. If the GPU spends most of its time at the 2.2 GHz figure, then why didn't Cerny just say the clock speed and leave it at that? Why mention that the clock speed is variable?

Because it is? Cerny is a technologist who is deep in the details of the architecture, and I thought he did a fine job of explaining why higher clock speeds are normally challenging to commit to and what his team's design has done to address that. Variable doesn't have to mean constantly fluctuating; it can mean being able to respond to exceptional circumstances, and the talk seemed to suggest it was more about the latter.
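As a purely illustrative toy model (numbers made up, not Sony's actual algorithm), a clock governor that only backs off when a power budget would be exceeded behaves like "variable but rarely fluctuating":

```python
# Toy model: the clock only drops below its cap when the modelled power draw
# for the current workload would exceed a fixed budget.

POWER_BUDGET_W = 200.0   # made-up budget
MAX_CLOCK_GHZ = 2.23

def clock_for_load(power_at_max_clock: float) -> float:
    """Return the clock the governor would pick for this workload."""
    if power_at_max_clock <= POWER_BUDGET_W:
        return MAX_CLOCK_GHZ                      # typical case: stay at the cap
    # crude assumption: power scales roughly linearly with clock here
    return MAX_CLOCK_GHZ * POWER_BUDGET_W / power_at_max_clock

for load in (150, 190, 210, 230):  # watts the workload would draw at max clock
    print(f"{load:>3} W workload -> {clock_for_load(load):.2f} GHz")
```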
 

Deleted member 10847

User requested account closure
Banned
Oct 27, 2017
1,343
Because it is? Cerny is a technologist who is deep in the details of the architecture, and I thought he did a fine job of explaining why higher clock speeds are normally challenging to commit to and what his team's design has done to address that. Variable doesn't have to mean constantly fluctuating; it can mean being able to respond to exceptional circumstances, and the talk seemed to suggest it was more about the latter.

Cerny is an amazing architect, but he's also part of the PR machine. We will probably never know how variable it is, but I'm sure it's not locked 99% of the time; otherwise they wouldn't even have mentioned the word "variable".