The current gen is actually more off-the-shelf than any gen before it, so it's not best to compare it with the upcoming generation.
I don't agree but we will know soon enough, so it's rather pointless to speculate without more info.
This is a non-final devkit with an unfinished API and maybe not the final hardware. The final devkits are supposed to arrive in June.
Yep. If the 12.1 TFLOPS console is offering performance that's equivalent to 11.4 Turing TFLOPS, we can assume very small gains.
That's assuming the 12.1 TFLOPS RDNA 2.0 GPU is running at max clocks all the time. The 9.7 TFLOPS 5700 XT actually runs games at lower clocks, which effectively makes it 9.3 TFLOPS. Gears on my RTX 2080 runs at a constant 1950 MHz in the benchmark tool, which comes out to about 11.4 TFLOPS.
That should put the PS5 around a 2070 Super if it maintains those 2.23 GHz clocks.
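For anyone who wants to sanity-check these figures: peak FP32 throughput is just shaders x 2 FLOPs per clock x clock speed. Here's a quick sketch using the commonly listed shader counts (those counts are my assumption, not something stated in this thread):

    # Peak FP32 TFLOPS = shaders * 2 FLOPs/clock * clock (GHz) / 1000
    # Shader counts are the publicly listed specs (assumed, not from this thread).
    def tflops(shaders, clock_ghz):
        return shaders * 2 * clock_ghz / 1000.0

    gpus = [
        ("RTX 2080   @ 1950 MHz (observed in the Gears bench)", 2944, 1.950),
        ("RX 5700 XT @ 1905 MHz (rated boost)",                 2560, 1.905),
        ("RX 5700 XT @ 1815 MHz (typical in-game)",             2560, 1.815),
        ("RX 5700 XT @ 2450 MHz (hypothetical 2080 match)",     2560, 2.450),
        ("PS5        @ 2230 MHz (36 CUs)",                      2304, 2.230),
        ("Series X   @ 1825 MHz (52 CUs)",                      3328, 1.825),
    ]
    for name, shaders, clock in gpus:
        print(f"{name}: {tflops(shaders, clock):.2f} TF")
    # -> roughly 11.5, 9.75, 9.29, 12.5, 10.28 and 12.15 TF respectively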
Also, doesn't that version of Gears have more graphical bells and whistles than even PC Ultra? And I don't think the dev team spent a significant amount of time on it.
Wait, I thought Digital Foundry said they were playing a version of Gears 5 on Series X which had more bells and whistles than even Ultra PC settings. Am I remembering it wrong?
No, it seems they are different versions; the article is confused, but the DF video is clearer. In the benchmark, this devkit version reaches roughly 2080 level; maybe the final version will be better.
The two-week build has improvements over the PC version; that's the version that was running at mostly 60 fps with dynamic 4K res, I believe. The benchmark was run at the same PC settings with no improvements and native 4K. That version ran around 45 fps, comparable to an RTX 2080.
It's confusing, but there are two different versions of the Series X demo: one with all those fancy features not in the PC version, and the other with all those features disabled, which runs at performance equivalent to an RTX 2080. MS only benchmarked the demo without those features to paint the Series X in a better light. I am assuming those extra features make the game run at a worse framerate.
You're going to be disappointed if you think either is going to perform better than or equal to a 2080 Ti.
The only console optimization bonus we saw this generation was on the CPU; the GPU behaved similarly to its desktop equivalent at the same or similar graphics settings.
I took that to mean that the console "X-factor" of a fixed hardware spec and AAA budgets for exclusives results in games like God of War running on a very slow Jaguar CPU and an underpowered GPU, which is a crazy achievement.
So something like Horizon 2 will look like something on PC maxing out a 2080 Ti (res excluded). Or something like that :)
I understand why you might draw this conclusion, but until we get Horizon Zero Dawn and Death Stranding and see how they run on a 7850 with 2 disabled CUs and the shittiest AMD 8-core out there or equivalent, we will not actually know. We are blessed this happens at all, as usually first-party games stay first-party forever. Let's settle this in a few months when those games are out.
It would be better to compare to a PS4 Pro and an RX 470 (slightly downclocked), as support for the 7850 could be lacking nowadays (and as I said, the CPU showed a boost, so the shittiest AMD CPU will not be enough).
I mean, the same thing was said when people wished for 10-12 TF consoles with multi-turbo-boosted SSDs :)
Was the game running using checkerboard rendering on the 2080?
There is such a version, but they showed the benchmark at Ultra settings, and after two weeks of work on a non-final devkit it runs as well as a 2080. I think in the end it will run at least a little bit better. I don't expect it to be comparable to a 2080 Ti, that's too big a difference, but if it's comparable to a 2080 Super that would be great; maybe it will land between a 2080 and a 2080 Super.
I really hope and expect next gen to blast past the current gen's texture limitations. I don't know exactly what it is, but I feel like we have hit a limit of texture fidelity, and it keeps the worlds from popping out more.
One example is Doom Eternal, a crazy good-looking game, but then I see the textures on this character and it just blows my mind how we can obviously spot pixelation on the highest PC settings. It also feels like the character model is missing depth on certain parts of her body.
Look at her fingers. I feel like next gen should really step it up here. With RT and better textures, this scene could be on CGI level and maybe even photorealistic, since it's something "unnatural".
Anyone else feeling this way? While games are still impressive looking, the lighting and textures have produced a look that feels familiar across multiple games.
They talk about deactivating the dynamic resolution on the XBX for the benchmark; it's unclear if CBR is still used or not. But they actually say the 2080 still has the advantage over the XBX.
The benchmark comparing the Series X and the PC (RTX 2080, Threadripper 2950X (16 cores), 64GB RAM) was an apples-to-apples comparison at a fixed 3840x2160 with plain PC Ultra settings, and it showed nearly identical performance. They didn't get video footage of this.
Watch from 11:10.
You should check out this video: http://www.youtube.com/watch?v=IDO-UHZebV4&t=4m40s
You need a 2450 MHz 5700 XT, which means a 12.5 TF GPU, to match a 2080. It's probably heavily bandwidth-limited, but the PS5 will have even less bandwidth available since it needs to share it.
It's more of a file size issue.
Depends on the data being compressed. There are things that will compress 10:1 and more, but they're unlikely to make up the bulk of your data set.
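Just to illustrate how much the ratio depends on the data, here's a quick sketch with zlib standing in for whatever codec the consoles actually use (the sample data is made up, obviously):

    import os
    import zlib

    def ratio(data):
        """Compression ratio = original size / compressed size (zlib level 9)."""
        return len(data) / len(zlib.compress(data, 9))

    repetitive = b"GRASS_TILE_01 " * 100_000   # highly redundant placeholder data
    random_like = os.urandom(1_400_000)        # already high-entropy data

    print(f"repetitive data: {ratio(repetitive):.0f}:1")   # well beyond 10:1
    print(f"random data:     {ratio(random_like):.2f}:1")  # barely 1:1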
His point is not that the PS5 would match the 2080 TF-wise, but that with the usual slew of console optimizations, and consoles having a history of punching well above their weight, the PS5 with its 10.3 TF GPU would end up eking out performance equivalent to a 2080. So basically a 10.3 TF console GPU would perform like a 12.5 TF PC GPU, and a 12 TF console GPU would perform like a 13.5-14 TF PC GPU.
Obviously, my numbers are just estimates, but I am at least certain that consoles always have punched above their weight (and likely always will). You simply can't take a 10 TF console GPU, pair it with a 10 TF PC GPU, and expect the same results.
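To make that explicit, here is the rough multiplier those guesses imply (purely illustrative numbers from the post above, not measurements):

    # Implied "console optimization" multiplier from the estimates above.
    estimates = {
        "PS5: 10.3 TF console vs ~12.5 TF PC GPU":   12.5 / 10.3,
        "XSX: 12.0 TF console vs ~13.5-14 TF PC GPU": 13.75 / 12.0,
    }
    for label, factor in estimates.items():
        print(f"{label}: ~{factor:.2f}x")
    # -> roughly a 1.15-1.2x factor in these guesses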
While I love how it looks, I'm not going to get my hopes up. I just don't trust devs anymore when it comes to stuff like this. I'll believe it when you show me someone playing it, or when it's coming from a studio that I know I can trust.
You aren't including the 15% IPC performance increase that RDNA2 offers.
I actually have not seen a hard source for this 15% IPC figure that gets thrown about. Do you have it?
I think it comes from AMD saying RDNA2 has 50% better perf/W. They said the same of RDNA1 over GCN, which they calculated with a 15% IPC improvement and some percentage of power consumption improvement.
Seems logical. Hopefully we can get that validated. Thanks.
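If that's the logic, note the factors have to combine multiplicatively, so a 15% per-clock gain only gets you part of the way to +50%. A quick sketch (the 15% split is this thread's assumption, not an official AMD breakdown):

    # Assumed decomposition of a "+50% perf/W" claim (not an official breakdown).
    target_perf_per_watt = 1.50   # headline +50% perf/W
    per_clock_gain       = 1.15   # assumed ~15% more performance per clock

    # Whatever is left has to come from clocks / power management.
    remaining = target_perf_per_watt / per_clock_gain
    print(f"remaining factor: {remaining:.3f} (~{(remaining - 1) * 100:.0f}%)")
    # -> about 1.30, i.e. roughly +30%, since the factors multiply rather than add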
Most RDNA2 articles I see are also about Zen 3, which was part of the same presentation. A ~15% IPC improvement is mentioned in regard to Zen 3. The RDNA2 IPC increase is only mentioned as part of the 50% perf/W improvement over RDNA1. No specific official numbers.
In the B3D thread on the RDNA2 AMD presentation, someone said the speaker mentioned that 15%. I thought it was in a slide where the remaining 35% was thanks to higher clocks and better transistor gate management, but I haven't found the slide.
Actually, it might be 25% more performance per clock? I'm trying to find the footnote that mentioned the 15% IPC improvement.
That slide is from the Navi/RDNA1 announcement.
Thanks! Yeah, I have never seen the number specific to RDNA2. I think there should be some, though. Some of the above (1.25x RDNA vs GCN) may not apply because of how they changed processing in general. This is really interesting, as Dictator showed that the RTX 2080 still has an advantage over the XSX. Possibly the Gears demo was still running with a lot of legacy code, though, and was missing some RDNA2 perf gains?
yes, that's where people are getting RDNA2's IPC performance from
This discussion about the GPU's clocks can't be the reaction Sony was expecting. Another thing they should clarify ASAP.
What's to clarify? It's either locked or variable. If the GPU spends most of its time at the 2.2 GHz figure, then why didn't Cerny just say the clock speed and leave it at that? Why mention that the clock speed is variable?
Because it is? Cerny is a technologist who is deep in the details of the architecture, and I thought he did a fine job of explaining why higher clock speeds are normally challenging to commit to and what his team's design has done to address that. Variable doesn't have to mean constantly fluctuating; it can mean able to respond to exceptional circumstances, and the talk seemed to suggest it was more about the latter.