
sncvsrtoip

Banned
Apr 18, 2019
2,773
btw, if the 12TF RDNA2 XSX is around 2080 level in Gears, the performance increase per teraflop on RDNA2 is probably marginal and close to RDNA1
 

Straffaren666

Member
Mar 13, 2018
84
How is that very obvious?
The results from Anandtech are for the reference design. Those are for the Asus Strix OC. So yes, it is maintaining over 2GHz. It is the actual game clock.

Because it would vary. It would not be at a static level as your graph shows. I honestly don't know if you are trolling me or not, but I don't think we can get any further with this. Anyway, thanks for the link to the Anandtech review, it contained very useful data.
 

GhostTrick

Member
Oct 25, 2017
11,312
Because it would vary. It would not be at a static level as your graph shows. I honestly don't know if you are trolling me or not, but I don't think we can get any further with this. Anyway, thanks for the link to the Anandtech review, it contained very useful data.


It's an overclocked model; it's meant to run at that high clock. No, I'm not trolling you, but I feel like you are. I keep bringing concrete evidence that you either don't read or cherry-pick so you can run weird calculations that won't translate into performance gains. You want your 10.3 TFLOPS RX 5700 XT? You have it. It's 18% away from a 2080 in gaming. If the facts don't suit you, well, you can push them aside and go back to making crazy theories.
 

Straffaren666

Member
Mar 13, 2018
84
It's an overclocked model; it's meant to run at that high clock. No, I'm not trolling you, but I feel like you are. I keep bringing concrete evidence that you either don't read or cherry-pick so you can run weird calculations that won't translate into performance gains. You want your 10.3 TFLOPS RX 5700 XT? You have it. It's 18% away from a 2080 in gaming. If the facts don't suit you, well, you can push them aside and go back to making crazy theories.

Ok, fair enough. We simply have to agree to disagree.
 

Deleted member 56995

User requested account closure
Banned
May 24, 2019
817
Both machines will be fine. The XSX is the better machine, but they're close enough that this is the smallest gap between competing consoles we've ever seen in a generation. Ultimately, this gen is gonna come down to the games. PS5 will have some of those nice PlayStation exclusives, but Xbox Game Studios is starting to make great strides as well. Both will be brilliant.
 

sncvsrtoip

Banned
Apr 18, 2019
2,773

Alexandros

Member
Oct 26, 2017
17,811
Absolutely, I love questions :)
  1. I'm going to say a "mid-range" graphics card is something in the $250-300 range, since whatever is released there seems to end up the most popular. So, if a 2080-level card is released for $300 and there are more powerful cards above it, would that mean console GPUs are "mid-range"? Yes. I'd be surprised by that, though; I think that level will stay at $400-500.
  2. Absolutely, and I think this is actually a more relevant metric than the current product range. The 1060 came out in 2016 and is still the most popular card. Using the Wayback Machine to review previous years, the percentage hasn't even changed much. The 970 was the most popular before that, and it was released in 2014. With the economy the way it is, I think it's going to take even longer than usual for the average person to upgrade if they've got a functional computer at the moment. I'd bet the 1060 will still be the most popular card a year from now, though its share will drop a bit.
So yes, it's possible the 2080 will be a "mid-range"-level card at the end of this year, but it's unlikely given that GPU price brackets have historically crept upward over time. If the 3060 is equal to the 2080 but costs $400, that's not mid-range. And even when the 2080 does eventually become mid-range, which it will, it'll take a long time for the average gaming PC to actually reach that level.

Thank you for the detailed reply. In my opinion, your conclusions don't take into account the realities of the PC market and the average PC gamer's upgrade pattern, which is very different from the console one. PC gamers don't upgrade at fixed intervals, because the release of new hardware doesn't cut off access to games the way next-gen exclusives do on a console. Not to mention that the 20-series adoption rate is low because its main selling point, RTX, was only useful in a handful of games.
 

space_nut

Member
Oct 28, 2017
3,306
NJ
[Embedded thread link: NX Gamer: PS5 Full Spec Analysis | A new generation is Born (www.resetera.com)]

The crazy thing is that this is a simple two-week port that doesn't use any of the XSX's new features, and it's already performing like a 2080. Imagine what devs can do years from now, with whole teams coding an engine to the hardware and digging into those features.
 

sncvsrtoip

Banned
Apr 18, 2019
2,773
The crazy thing is that this is a simple two-week port that doesn't use any of the XSX's new features, and it's already performing like a 2080. Imagine what devs can do years from now, with whole teams coding an engine to the hardware and digging into those features.
For sure, 2080 level in a next-gen console is very impressive. It's just sad that we have to wait a long time for a proper XSX exclusive without the Xbox One burden.
 

Deleted member 56995

User requested account closure
Banned
May 24, 2019
817
[Embedded thread link: NX Gamer: PS5 Full Spec Analysis | A new generation is Born (www.resetera.com)]


I had watched the video and that was the conclusion I had reached. Guess I was wrong.
 

D BATCH

Member
Nov 15, 2017
148
Yup... I think what is really impressive going from RDNA1 to RDNA2 is the power consumption. It's a more efficient chip... oh, and it has RT stuff.
The efficiency is 50% better than RDNA 1, which is more than a marginal increase over the previous gen. A 50% improvement in performance per watt vs. RDNA 1 will yield big gains, and Nvidia will finally have competition. Just wrap your head around the fact that AMD has APUs that compete with top-end PCs. 2.23GHz is impressive for any GPU, much less one in a small form factor (a console). 12 TFLOPS performing like a 2080 (in Gears 5) while unoptimized (not using any RDNA 2 features)? Yeah, that's a big jump from first-gen RDNA.

Now just imagine how powerful the PC GPUs based on RDNA 2 will be.
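To put that 50% perf-per-watt figure in perspective, here's a rough sketch (the 9.75TF/225W numbers are the 5700 XT's public specs, the 1.5x multiplier is AMD's claim, and treating peak TFLOPS as "performance" is my own simplification):

```python
# What "+50% performance per watt" buys, using RDNA1 as the baseline.
rdna1_tf = 9.75          # 5700 XT peak FP32 TFLOPS
rdna1_watts = 225.0      # 5700 XT total board power

rdna1_tf_per_watt = rdna1_tf / rdna1_watts
rdna2_tf_per_watt = rdna1_tf_per_watt * 1.5   # AMD's claimed uplift

# Same 225W budget -> roughly 50% more throughput:
print(f"{rdna2_tf_per_watt * rdna1_watts:.1f} TF")   # ~14.6 TF
# Same 9.75TF throughput -> roughly two thirds the power:
print(f"{rdna1_tf / rdna2_tf_per_watt:.0f} W")       # 150 W
```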
 
Last edited:

jroc74

Member
Oct 27, 2017
28,995
Know what I find amazing?

These consoles just skipped right over RDNA 1 and are using 2.

That in itself is something to be excited for.
 

zeuanimals

Member
Nov 23, 2017
1,454
In order for games to have higher quality textures and other assets on PS5, games would have to saturate the SSD bus at (conservatively) 2.4GB/s. That's roughly 25 to 50 times faster than current-gen games stream assets, and current-gen games look fantastic. That means games could have 20 to 50 times larger assets (on an 825GB SSD), and those assets will have to fit into RAM that is only slightly larger than current gen's. I think you guys have unreasonable expectations for next-gen games. Compute and RAM are still going to be the limiters, not the SSD.

But current-gen games have to keep every asset they might need within the next 30 seconds in RAM. Next gen, assets only need to be there a few seconds ahead of use, which greatly relieves pressure on RAM since it never has to hold anything that won't be used almost immediately. So even though it's only ~14GB of RAM, games effectively always have access to all 14GB, unlike current gen.
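A back-of-the-envelope sketch of that working-set argument (the 30-second and 2-second look-ahead windows come from the discussion; the 0.4GB/s "player reach" rate is invented purely for illustration):

```python
# RAM tied up holding "might need it soon" assets, as a function of how
# far ahead the storage forces the game to prefetch.
def speculative_ram_gb(reach_gb_per_s: float, window_s: float) -> float:
    """GB resident in RAM speculatively, if the player can bring
    reach_gb_per_s of new assets into view each second."""
    return reach_gb_per_s * window_s

reach = 0.4  # GB/s of fresh assets a player can expose (assumed)
print(speculative_ram_gb(reach, 30))  # HDD-era window: ~12 GB hedged in RAM
print(speculative_ram_gb(reach, 2))   # SSD-era window: ~0.8 GB hedged
```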
 

BradGrenz

Banned
Oct 27, 2017
1,507

Lady Gaia

Member
Oct 27, 2017
2,479
Seattle
Sometimes I wonder what the point of RDNA1 was, but it did bring AMD's name back and got people excited for RDNA2.

There's nothing like trying to get a product to market to find out where the limiting factors and flaws are. Shipping RDNA1 is almost certainly what taught them important lessons they needed to finish work on RDNA2. Progress in technology is almost never along a straight line from where you were to where you need to be.
 

McFly

Member
Nov 26, 2017
2,742
No, that's not typical, that's the theoretical max without BCPack (which is used for textures, AFAIK).
Why do people keep misquoting what BCPack is?

BCPack is a texture compression format. It does not make the IO any faster than it is; it compresses textures better, so you can fit more texture data per second through the IO. The raw IO speed does not change, and the 6GB/s theoretical max does not change either; what changes is how much uncompressed data fits through when you have a good compression ratio.

And BCPack is already taken into account in the quoted theoretical max throughput, otherwise the number would be higher. That is why they said 4.8GB/s compressed even though the IO has a theoretical max of 6GB/s.
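The distinction is easy to show in code (a minimal sketch: the 2.4GB/s raw, 4.8GB/s typical, and 6GB/s max figures are the XSX numbers quoted in this thread; the 2.0x and 2.5x ratios are just those figures divided out):

```python
# Compression multiplies what each transferred byte represents;
# it never raises the raw link speed itself.
raw_gbps = 2.4  # raw SSD throughput

def effective_gbps(raw: float, compression_ratio: float) -> float:
    """Uncompressed GB of assets delivered per second at a given ratio."""
    return raw * compression_ratio

print(effective_gbps(raw_gbps, 2.0))  # typical quoted figure -> 4.8 GB/s
print(effective_gbps(raw_gbps, 2.5))  # theoretical best case -> 6.0 GB/s
```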
 

Hey Please

Avenger
Oct 31, 2017
22,824
Not America
It will be interesting to see the IPC delta between RDNA1 and 2. Of course, it's going to be somewhat muddy if commercial RDNA2 cards end up being fabricated on N7+ as opposed to the PS5's and XSX's N7P process.
 

III-V

Member
Oct 25, 2017
18,827
It will be interesting to see the IPC delta between RDNA1 and 2. Of course, it's going to be somewhat muddy if commercial RDNA2 cards end up being fabricated on N7+ as opposed to the PS5's and XSX's N7P process.
I don't think the discrete cards coming later this year will be on 7nm+. Expecting N7P. There could and likely will be other differences though.
 

AegonSnake

Banned
Oct 25, 2017
9,566
Yeah, no mention of it at the AMD FAD. I assume <10% if any.
Yep. If the 12.1 TFLOPS console is offering performance equivalent to 11.4 Turing TFLOPS, we can assume very little in the way of gains.

That's assuming the 12.1 TFLOPS RDNA 2.0 GPU is running at max clocks all the time. The 9.7 TFLOPS 5700 XT actually runs games at lower clocks, which effectively makes it 9.3 TFLOPS. Gears on my RTX 2080 runs at a constant 1950MHz in the benchmark tool, which comes out to 11.4 TFLOPS.

This should put the PS5 around a 2070 Super if it maintains those 2.23GHz clocks.
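For reference, all of these TF numbers fall out of the standard peak-FP32 formula: 2 ops per shader per clock (FMA) times shader count times clock. A quick sketch (shader counts are the public specs; the ~1825MHz 5700 XT figure is the effective game clock implied by the 9.3TF estimate above):

```python
def fp32_tflops(shaders: int, clock_mhz: float) -> float:
    """Peak FP32 throughput: 2 ops/shader/clock (FMA) x shaders x clock."""
    return 2 * shaders * clock_mhz / 1e6

print(fp32_tflops(2944, 1950))  # RTX 2080 @ 1950MHz       -> ~11.5 TF
print(fp32_tflops(2560, 1825))  # 5700 XT effective clock  -> ~9.3 TF
print(fp32_tflops(2304, 2230))  # PS5: 36 CUs @ 2.23GHz    -> ~10.3 TF
print(fp32_tflops(3328, 1825))  # XSX: 52 CUs @ 1.825GHz   -> ~12.1 TF
```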
 

mordecaii83

Avenger
Oct 28, 2017
6,862
Why do people keep misquoting what BCPack is?

BCPack is a texture compression format. It does not make the IO any faster than it is; it compresses textures better, so you can fit more texture data per second through the IO. The raw IO speed does not change, and the 6GB/s theoretical max does not change either; what changes is how much uncompressed data fits through when you have a good compression ratio.

And BCPack is already taken into account in the quoted theoretical max throughput, otherwise the number would be higher. That is why they said 4.8GB/s compressed even though the IO has a theoretical max of 6GB/s.
Disregard, I misread the post.
 

Pheonix

Banned
Dec 14, 2018
5,990
St Kitts
What is the likelihood of this actually happening? Would it be possible for developers to use smart programming or game design to try and maximise it?
I would say highly unlikely, because not all data types compress the same. And even then, any given transfer contains a mixture of data types, so some of it will compress really well and some won't. Sony and MS gave those 8-9GB/s and 4.8GB/s numbers for a reason. The 6GB/s and 22GB/s figures are just PR theoretical maxes as far as I'm concerned, depicting levels of compression that aren't quite possible yet.
I would say a good indication would be game sizes.

Speaking of sizes, though, I think we would benefit from the total removal of "in-engine" cinematics stored as video files, which are usually used to hide loading, since everything would likely be done in-engine and in real time on this hardware.
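The "mixture of data types" point is easy to make concrete (a minimal sketch; every fraction and ratio below is invented for illustration, not a measured figure):

```python
# Blended compression ratio of a transfer mixing data types.
mix = {
    # data type: (fraction of the input stream, assumed compression ratio)
    "textures": (0.60, 2.5),
    "geometry": (0.20, 1.8),
    "audio":    (0.15, 1.1),  # usually pre-compressed, little left to gain
    "misc":     (0.05, 1.0),
}

# Output bytes per input byte, summed over the mix.
out_per_in = sum(frac / ratio for frac, ratio in mix.values())
print(f"blended ratio: {1.0 / out_per_in:.2f}:1")  # ~1.86:1, well below 2.5:1
```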
 

BreakAtmo

Member
Nov 12, 2017
12,838
Australia
I would say highly unlikely, because not all data types compress the same. And even then, any given transfer contains a mixture of data types, so some of it will compress really well and some won't. Sony and MS gave those 8-9GB/s and 4.8GB/s numbers for a reason. The 6GB/s and 22GB/s figures are just PR theoretical maxes as far as I'm concerned, depicting levels of compression that aren't quite possible yet.

I would say a good indication would be game sizes.

Speaking of sizes, though, I think we would benefit from the total removal of "in-engine" cinematics stored as video files, which are usually used to hide loading, since everything would likely be done in-engine and in real time on this hardware.

Oh god, yes. CGI cutscenes might still have something of a place (though I still dislike their severe limitations, like not being able to reflect changed character and weapon models), but 1080p30 in-engine videos that look like compressed shit, take up disc space, and exist only to hide loading? Burn them all, and salt the earth where they stood.
 

Pheonix

Banned
Dec 14, 2018
5,990
St Kitts
Oh god, yes. CGI cutscenes might still have something of a place (though I still dislike their severe limitations, like not being able to reflect changed character and weapon models), but 1080p30 in-engine videos that look like compressed shit, take up disc space, and exist only to hide loading? Burn them all, and salt the earth where they stood.
As it stands, they went from making videos using full-on CGI to making "in-engine videos" that they use as cutscenes. Either way, they are data hogs. Now we should see more in-engine cinematics running in real time; there's no reason to keep putting video files on the disc unless the cutscene is totally unrelated to the actual in-game scenario, like a backstory of sorts.
You will obviously not unload the whole OS, just the memory-hungry parts of it, aka the UI.
Yup... 5GB/s isn't enough to run the OS from, but it's definitely enough to load it in and out quickly. If the XSX can make do with a 2.5GB RAM reserve for the OS, then the PS5 can make do with a 1GB reserve and still be able to move around 4GB of data into RAM in about 0.5 seconds.

I would be shocked if Sony doesn't do something like that.
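A quick sanity check on that half-second figure (assuming the low end of Sony's quoted 8-9GB/s typical compressed throughput; the 4GB OS chunk is the post's own hypothetical):

```python
os_chunk_gb = 4.0       # hypothetical chunk of OS/UI data to page back in
compressed_gbps = 8.0   # low end of the quoted typical compressed rate
print(f"{os_chunk_gb / compressed_gbps:.2f} s")  # 0.50 s
```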
 

Hey Please

Avenger
Oct 31, 2017
22,824
Not America
I'm hoping Monster Hunter World looks like that. That would be amazing.

For me, it's not the fidelity that is striking. Rather, it is the animation. Look at the hit reactions, and at how locomotion is determined by the creature's footing and mass: it simply does not slide from place to place.

This generation did not see an increase in animation quality commensurate with the increase in fidelity (barring exceptions like Death Stranding, where human foes had some of the best animations to date). It is both sad and interesting that I still remember KZ2, a PS3 game, for featuring great enemy animations, especially hit reactions, which have still not been surpassed to this day.

Here's hoping next gen delivers on that front.
 

Josh378

Member
Oct 27, 2017
3,521
For me, it's not the fidelity that is striking. Rather, it is the animation. Look at the hit reactions, and at how locomotion is determined by the creature's footing and mass: it simply does not slide from place to place.

This generation did not see an increase in animation quality commensurate with the increase in fidelity (barring exceptions like Death Stranding, where human foes had some of the best animations to date). It is both sad and interesting that I still remember KZ2, a PS3 game, for featuring great enemy animations, especially hit reactions, which have still not been surpassed to this day.

Here's hoping next gen delivers on that front.


I agree with you 100%. I'm hoping we see KZ2-style complex animation from third-party developers in the coming generation. The hardware is there, so there's no excuse.
 

gozu

Member
Oct 27, 2017
10,331
America
btw, if the 12TF RDNA2 XSX is around 2080 level in Gears, the performance increase per teraflop on RDNA2 is probably marginal and close to RDNA1

In practice, I expect the PS5 to perform at around 2080 Ti level and the XSX to slightly exceed that, thanks to the usual console optimization bonus.

The introduction of the new consoles will basically say: "This is what a game looks like when built with high-end PC specs as a baseline." It will be dope.

I have generally seen a significant lack of imagination here regarding how the new, dream-level fast SSDs will be used to revolutionize game design.

This leads me to believe a similar phenomenon will happen in the dev community, and the SSD will go under-used for years to come. I don't remember seeing UE4 or Unity demos showing awesome new techniques exclusive to PCIe 3 or PCIe 4 NVMe drives.
 

D BATCH

Member
Nov 15, 2017
148
In practice, I expect the PS5 to perform at around 2080 Ti level and the XSX to slightly exceed that, thanks to the usual console optimization bonus.

The introduction of the new consoles will basically say: "This is what a game looks like when built with high-end PC specs as a baseline." It will be dope.

I have generally seen a significant lack of imagination here regarding how the new, dream-level fast SSDs will be used to revolutionize game design.

This leads me to believe a similar phenomenon will happen in the dev community, and the SSD will go under-used for years to come. I don't remember seeing UE4 or Unity demos showing awesome new techniques exclusive to PCIe 3 or PCIe 4 NVMe drives.
No
 
Nov 2, 2017
2,275
I doubt that. The game clock varies between different games. If it was recorded for all games it would look different.

Since I don't have a life ;-), I took the performance data from Anandtech's review (https://www.anandtech.com/show/14618/the-amd-radeon-rx-5700-xt-rx-5700-review/12), since they write out the measured game clock, and recomputed the benchmark FPS as if the 5700XT were scaled up to 10.3TF. It looks like this:

Game                 Game clock   FPS: 5700XT / 10.3TF-scaled / 2070 Super   Ratio (scaled vs 2070S)
Tomb Raider          1780MHz      39.8 / 44.98 / 42.6                        1.0559
F1 2019              1800MHz      54.8 / 61.25 / 59.9                        1.0225
Assassin's Creed     1900MHz      38.3 / 40.55 / 46.2                        0.8777
Metro Exodus         1780MHz      34.7 / 39.20 / 35.0                        1.1200
Strange Brigade      1780MHz      69.6 / 78.66 / 75.0                        1.0488
Total War: TK        1830MHz      26.2 / 28.80 / 30.0                        0.9600
The Division 2       1760MHz      34.7 / 39.66 / 40.2                        0.9866
Grand Theft Auto V   1910MHz      41.0 / 43.10 / 47.5                        0.9073
Forza Horizon 4      1870MHz      60.1 / 64.65 / 58.0                        1.1147
avg: 1.0104

Unfortunately, the review doesn't include the 2080, so I used the 2070 Super instead. Across the 9 benchmarks, a 10.3TF 5700XT would be about 1.0104 times as fast as a 2070 Super.

Then I used the relative performance numbers from www.techpowerup.com to compare the 2070 Super with the 2080:

RTX 2070 Super: 114%
RTX 2080: 123%

Hence a 2080 would have about 1.23 / (1.14 * 1.0104) = 1.0678 times the performance of a 10.3TF 5700XT, and a 10.3TF 5700XT would have about 1.14 * 1.0104 = 1.152 times the performance of a stock 5700XT.

This still doesn't take the RDNA 2 architectural improvements into account, nor the bandwidth deficit caused by the CPU/SSD/audio sharing. I still believe there are good reasons to expect the PS5 GPU to perform at a level comparable to the 2080, and about ~15% above the 5700XT.
You should check out this video: http://www.youtube.com/watch?v=IDO-UHZebV4&t=4m40s

You need a 2450MHz 5700XT, which makes it a 12.5TF GPU, to match a 2080. It's probably heavily bandwidth-limited at that point, and the PS5 will have even less bandwidth available since it has to share it.
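Side note: the quoted scaling is easy to re-run. A minimal sketch, where the ~2012MHz target clock falls out of 10.3TF on the 5700 XT's 2560 shaders and the FPS data is copied from the quoted table:

```python
# Scale the 5700 XT's measured FPS up to a hypothetical 10.3TF clock,
# then compare against the 2070 Super, as in the quoted analysis.
target_mhz = 10.3e12 / (2 * 2560) / 1e6   # ~2011.7 MHz for 10.3 TF

benches = {  # game: (measured game clock MHz, 5700XT fps, 2070 Super fps)
    "Tomb Raider":        (1780, 39.8, 42.6),
    "F1 2019":            (1800, 54.8, 59.9),
    "Assassin's Creed":   (1900, 38.3, 46.2),
    "Metro Exodus":       (1780, 34.7, 35.0),
    "Strange Brigade":    (1780, 69.6, 75.0),
    "Total War: TK":      (1830, 26.2, 30.0),
    "The Division 2":     (1760, 34.7, 40.2),
    "Grand Theft Auto V": (1910, 41.0, 47.5),
    "Forza Horizon 4":    (1870, 60.1, 58.0),
}

ratios = []
for game, (clock_mhz, fps_5700xt, fps_2070s) in benches.items():
    scaled_fps = fps_5700xt * target_mhz / clock_mhz  # 10.3TF-equivalent fps
    ratios.append(scaled_fps / fps_2070s)
    print(f"{game:<20} {scaled_fps:6.2f} fps  ratio {ratios[-1]:.4f}")

print(f"avg ratio vs 2070 Super: {sum(ratios) / len(ratios):.4f}")  # ~1.01
```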
 

Corine

Member
Nov 8, 2017
870
In practice, I expect the PS5 to perform at around 2080 Ti level and the XSX to slightly exceed that, thanks to the usual console optimization bonus.

The introduction of the new consoles will basically say: "This is what a game looks like when built with high-end PC specs as a baseline." It will be dope.

I have generally seen a significant lack of imagination here regarding how the new, dream-level fast SSDs will be used to revolutionize game design.

This leads me to believe a similar phenomenon will happen in the dev community, and the SSD will go under-used for years to come. I don't remember seeing UE4 or Unity demos showing awesome new techniques exclusive to PCIe 3 or PCIe 4 NVMe drives.
You're going to be disappointed if you think either one is going to perform equal to or better than a 2080 Ti.
 

Pheonix

Banned
Dec 14, 2018
5,990
St Kitts
You should check out this video: http://www.youtube.com/watch?v=IDO-UHZebV4&t=4m40s

You need a 2450MHz 5700XT, which makes it a 12.5TF GPU, to match a 2080. It's probably heavily bandwidth-limited at that point, and the PS5 will have even less bandwidth available since it has to share it.
His point is not that the PS5 would match the 2080 TF-wise, but that with the usual slew of console optimizations, and given that consoles have a history of punching well above their weight, the PS5 with its 10.3TF GPU would end up eking out performance equivalent to a 2080. So basically, a 10.3TF console GPU would perform like a 12.5TF PC GPU, and a 12TF console GPU would perform like a 13.5-14TF PC GPU.

Obviously my numbers are just estimates, but I am at least certain that consoles always have punched above their weight (and likely always will). You simply can't take a 10TF console GPU, pair it against a 10TF PC GPU, and expect the same results.

This as a baseline for animation, interactivity, physics, and LOD would be amazing.
While I love how it looks, I'm not gonna get my hopes up. I just don't trust devs anymore when it comes to stuff like this. I'll believe it when you show me someone actually playing it, or when it's coming from a studio I know I can trust.

I am still waiting for Deep Down... 7 years later.
 

Alexandros

Member
Oct 26, 2017
17,811
His point is not that the PS5 would match the 2080 TF-wise, but that with the usual slew of console optimizations, and given that consoles have a history of punching well above their weight, the PS5 with its 10.3TF GPU would end up eking out performance equivalent to a 2080. So basically, a 10.3TF console GPU would perform like a 12.5TF PC GPU, and a 12TF console GPU would perform like a 13.5-14TF PC GPU.

Obviously my numbers are just estimates, but I am at least certain that consoles always have punched above their weight (and likely always will). You simply can't take a 10TF console GPU, pair it against a 10TF PC GPU, and expect the same results.

The current generation showed us that you actually can.