Jeremy

Member
Oct 25, 2017
6,639
One other thing just popped into my mind... On Series X, if you have games running, download speeds get reduced significantly, I've found.

Is it possible that Quick Resume games or downloading in the background could affect in-game performance? I'd imagine the system doesn't share a pool of memory/CPU resources between the active game and background tasks, so it wouldn't be an issue, but I don't know definitively. Maybe that's why some people are having a worse experience than others?
 

Pheonix

Banned
Dec 14, 2018
5,990
St Kitts
Honest question. Can you tell me why the PS5 is faster in the blue RATE sections if the overall GPU TF (which is related to clock speeds and CUs) is higher on the XSX?
That's the point...there is no such thing as "overall GPU TF".

The GPU has something called stream processors. There are 64 of them in each compute unit, 36 compute units in the PS5 and 52 in the XSX.

These cores have a specific task: they color pixels. That's an oversimplification of what they do, but the end result of their function is that pixels are colored. They do a lot of other stuff too (number crunching, particles, etc.). The coloring of pixels is just one part of a render pipeline; it's actually at the end of it. The teraflops people keep talking about are the unit of measure for a GPU's shader capability. So in that ONE area, the XSX has 12TF vs the PS5's 10TF.

But that shader core is just one component in a GPU's compute unit, and ultimately in the GPU. We also have texture mapping units, geometry processors, render output units, etc. All these things are tied to the GPU's clock speed, hence why Cerny said, "a rising tide raises all ships".

Now take the geometry processor, for instance. Like the PS5, the XSX (and all other AMD GPUs for that matter) has only one of them. However, that one geometry processor runs at a higher clock on the PS5 than on the XSX, so the PS5 ends up being able to draw polygons faster. In all those areas where both GPUs have a similar number of components, the PS5 performs better because it's clocked higher.
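To put some rough numbers on the shader part (this is just the peak ALU arithmetic, assuming the publicly quoted clocks of 1.825GHz for the XSX and up to 2.23GHz for the PS5; a sketch, not a claim about real-world performance):

```python
# Peak shader throughput = stream processors x ops per clock x clock.
# The clocks below are the publicly quoted figures, assumed here for illustration.
def peak_tflops(compute_units, clock_ghz, sp_per_cu=64, ops_per_clock=2):
    return compute_units * sp_per_cu * ops_per_clock * clock_ghz / 1000

print(f"XSX shader peak: {peak_tflops(52, 1.825):.2f} TF")  # ~12.15 TF
print(f"PS5 shader peak: {peak_tflops(36, 2.23):.2f} TF")   # ~10.28 TF

# Blocks that exist in equal numbers on both GPUs (e.g. the geometry front end)
# scale with clock alone, so there the PS5's higher clock is the whole story:
print(f"PS5 clock vs XSX clock: {2.23 / 1.825:.2f}x")       # ~1.22x
```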
 
Last edited:

pg2g

Member
Dec 18, 2018
5,126
Hopefully Ubisoft addresses some of this in a future update, though I will probably have finished the game by then.

Not really interested in the pissing contest, but there is absolutely no reason for the game to have as much screen tearing as it does on the XSX.
 

Ravage

Banned
Oct 31, 2017
1,536
I think the main takeaway is that the Series S is a terrible choice for gaming enthusiasts, and you'll be much better off sticking to your current-gen machines (especially if you have a Pro or a One X) until they stop releasing cross-gen titles.
 

nib95

Contains No Misinformation on Philly Cheesesteaks
Banned
Oct 28, 2017
18,498
That's the point...there is no such thing as "overall GPU TF".

The GPU has something called stream processors. There are 64 of them in each compute unit, 36 compute units in the PS5 and 52 in the XSX.

These cores have a specific task: they color pixels. That's an oversimplification of what they do, but the end result of their function is that pixels are colored. The coloring of pixels is just one part of a render pipeline; it's actually at the end of it. The teraflops people keep talking about are the unit of measure for a GPU's shader capability. So in that ONE area, the XSX has 12TF vs the PS5's 10TF.

But that shader core is just one component in a GPU's compute unit, and ultimately in the GPU. We also have texture mapping units, geometry processors, render output units, etc. All these things are tied to the GPU's clock speed, hence why Cerny said, "a rising tide raises all ships".

Now take the geometry processor, for instance. Like the PS5, the XSX (and all other AMD GPUs for that matter) has only one of them. However, that one geometry processor runs at a Tiger clock on the PS5 than in the XSX, so the PS5 ends up being able to draw polygons faster. In all those areas where both GPUs have a similar number of components, the PS5 performs better because it's clocked higher.

PS5 runs at a Tiger clock 🤣 Maybe that's apt lol.

As for this post: we obviously do not know why the PS5 is enjoying these multi-platform advantages, or whether it's purely API and tools related or a combination of multiple things, but I think it's important to clarify that both the Crytek dev and Dirt 5's tech director (David Springgate), just a few days ago, spoke about the PS5's GPU clockspeed when comparing the XSX with the PS5 in terms of performance.

It was the example David gave for why he's not sure it's fair to say the XSX enjoys a big power advantage. It's also something the Crytek dev touched on, e.g. how TFLOPS, being a peak number for floating point operations, isn't necessarily always relevant to games, and how in real-world examples the PS5's higher clockspeed means it's often performing better in a usable sense. Whilst I can't speak to the validity of these points, I do think it's interesting that both developers referenced the clockspeeds.

Also interesting is that in the same podcast, David actually piled praise on the Xbox's GDK, saying he's very happy with it, that it was actually faster and more stable than the XDK, and that they brought over most things from the XDK for added familiarity, despite it being in its infancy.

Richard from Digital Foundry did say that some developers he spoke to about the Series X hardware and development were "extremely happy" with the GDK, whilst others were "having problems with it".
 
Last edited:

Ether_Snake

Banned
Oct 29, 2017
11,306
The fact that many people say "I never see tearing except in a few cutscenes" and many others say "It's the worst ever, it's everywhere" makes me think there is some variable that is missing: something is causing some users to experience a lot of tearing and others not. There is no way this is just people not seeing it.
 

leng jai

Member
Nov 2, 2017
15,151
The fact that many people say "I never see tearing except in a few cutscenes" and many others say "It's the worst ever, it's everywhere" makes me think there is some variable that is missing: something is causing some users to experience a lot of tearing and others not. There is no way this is just people not seeing it.

I guess you've never been in the OT of big games which have frequent, measurable frame rate drops. There are always people saying they can't see it or that their particular game runs perfectly, no matter how bad it is.

VRR fixes the tearing specifically but even then 95% of people don't have a display that supports it anyway.
 
chandoog

OP
Member
Oct 27, 2017
20,119
The fact that many people say "I never see tearing except in a few cutscenes" and many others say "It's the worst ever, it's everywhere" makes me think there is some variable that is missing: something is causing some users to experience a lot of tearing and others not. There is no way this is just people not seeing it.

Maybe some system-level settings alleviate it. A comparison on the last page says that if you put your Xbox on 60Hz (instead of 120Hz) at the system level, it reduces tearing?

Dictator, any chance you may be able to check/verify this?
 

EggmaniMN

Banned
May 17, 2020
3,465
The reality is some people are lying because they like the product. The game has tearing. 60Hz, 120Hz, whatever, it doesn't matter. It's been shown, measured; it exists.
 

Yerffej

Prophet of Regret
Member
Oct 25, 2017
24,092
The reality is some people are lying because they like the product. The game has tearing. 60Hz, 120Hz, whatever, it doesn't matter. It's been shown, measured; it exists.
? I'm the first to notice tearing; I can't stand it. I haven't had one iota of it on Series X with my TV set to 120Hz. If it did tear, I'd have to wait until it was patched out.
 
Oct 27, 2017
5,012
That's the point...there is no such thing as "overall GPU TF".

The GPU has something called shaders. There are 64 of them in each compute unit, 36 compute units in the PS5 and 52 in the XSX.

These cores have a specific task: they color pixels. That's an oversimplification of what they do, but the end result of their function is that pixels are colored. The coloring of pixels is just one part of a render pipeline; it's actually at the end of it. The teraflops people keep talking about are the unit of measure for a GPU's shader capability. So in that ONE area, the XSX has 12TF vs the PS5's 10TF.

But that shader core is just one component in a GPU's compute unit, and ultimately in the GPU. We also have texture mapping units, geometry processors, render output units, etc. All these things are tied to the GPU's clock speed, hence why Cerny said, "a rising tide raises all ships".

Now take the geometry processor, for instance. Like the PS5, the XSX (and all other AMD GPUs for that matter) has only one of them. However, that one geometry processor runs at a Tiger clock on the PS5 than in the XSX, so the PS5 ends up being able to draw polygons faster. In all those areas where both GPUs have a similar number of components, the PS5 performs better because it's clocked higher.
I get what you're saying, but if you're going to make bullet points then you kinda should include the CU count difference. That's, historically, always been very important in determining relative GPU performance.

Generally speaking, more parallel hardware trumps higher clockspeeds in most workloads. OF COURSE, a game can have widely varying workloads in different scenes, and splitting everything up might not work so well, especially across different games. But... historically, there's been a pretty linear relationship between the number of CUs/CUDA cores and relative GPU performance, while raising clock speeds usually has diminishing returns.

These things all have situations where one benefits more than the other, but based on the last 20 years of PC benchmarks, the number of CUDA cores/CUs has been more important than the clock speed.
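As a toy illustration of that trade-off (the CU counts and clocks are the consoles' published figures, but the scaling exponent is a placeholder made up for the example, not benchmark data):

```python
# Toy scaling model only: core count is assumed to scale roughly linearly,
# while clock gains above a nominal design point are damped to mimic
# diminishing returns. The 0.7 exponent and 1.8GHz baseline are assumptions.
def toy_relative_perf(cores, clock_ghz, design_clock_ghz=1.8, clock_exponent=0.7):
    return cores * (clock_ghz / design_clock_ghz) ** clock_exponent

wide_and_slow = toy_relative_perf(52, 1.825)    # more CUs, lower clock
narrow_and_fast = toy_relative_perf(36, 2.23)   # fewer CUs, higher clock
print(f"wide/slow: {wide_and_slow:.1f}, narrow/fast: {narrow_and_fast:.1f}")
```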
 

Ether_Snake

Banned
Oct 29, 2017
11,306
The reality is some people are lying because they like the product. The game has tearing. 60Hz, 120Hz, whatever, it doesn't matter. It's been shown, measured; it exists.

Seems like most agree (I didn't read most of the thread), but it just seems to be consistently either "only in a few cutscenes" or "tearing everywhere", as if it's literally two different scenarios with no in-between, hence why I'm thinking it might be something specific leading to more tearing. No idea.

Like, could it be male Eivor with full hair causes more tearing because of his beard? 🤔

Can people who experience a lot of tearing say if they are playing with male Eivor?
 

kubus

Member
Oct 27, 2017
1,502
My apologies if this has been asked before or if this is a dumb question, but does the tearing on PS5 get better if I play this on a C9 or CX? I was under the assumption that only the Xbox supported VRR and the PS5 didn't.
 

Nolbertos

Member
Dec 9, 2017
3,368
Well, Round 1 goes to PS5. I won't be buying a PS5 or Series X until late 2021 or early 2022, but if multiplat games are gonna be better on the PS system then it's a no-brainer the PS5 will be my 3rd party machine. A bit disheartening that right outta the gate, the Xbox Series X is slightly struggling against the PS5 in Valhalla.
 
Last edited:

Myself

Member
Nov 4, 2017
1,282
That's the point...there is no such thing as "overall GPU TF".

The GPU has something called stream processors. There are 64 of them in each compute unit, 36 Compute units in the PS5 and 52 in the XSX.

These cores have a specific task: they color pixels. That's an oversimplification of what they do, but the end result of their function is that pixels are colored. They do a lot of other stuff too (number crunching, particles, etc.). The coloring of pixels is just one part of a render pipeline; it's actually at the end of it. The teraflops people keep talking about are the unit of measure for a GPU's shader capability. So in that ONE area, the XSX has 12TF vs the PS5's 10TF.

But that shader core is just one component in a GPU's compute unit, and ultimately in the GPU. We also have texture mapping units, geometry processors, render output units, etc. All these things are tied to the GPU's clock speed, hence why Cerny said, "a rising tide raises all ships".

Now take the geometry processor, for instance. Like the PS5, the XSX (and all other AMD GPUs for that matter) has only one of them. However, that one geometry processor runs at a Tiger clock on the PS5 than in the XSX, so the PS5 ends up being able to draw polygons faster. In all those areas where both GPUs have a similar number of components, the PS5 performs better because it's clocked higher.

So those things in the RATE sections don't actually exist as X per CU? I thought with unified shaders each CU has a "copy" of everything so they can do vertex shading, pixel shading, etc. - everything.
 

Penny Royal

The Fallen
Oct 25, 2017
4,166
QLD, Australia
Meh, not seeing it.

You know what IS impressive? My almost half-decade-old gaming PC giving me a better gaming experience than this brand new gen of consoles on this game. GTX 1080, and I'm running the game at high, so very likely higher settings than the next-gen consoles, and at 1440p ultrawide at mostly 60 FPS.

Or in other words, value isn't just about lower price.

And I bet you it still cost you more than $500 when you bought it.
 

Jeremy

Member
Oct 25, 2017
6,639
My apologies if this has been asked before or if this is a dumb question, but does the tearing on PS5 get better if I play this on a C9 or CX? I was under the assumption that only the Xbox supported VRR and the PS5 didn't.

Tearing exists on both versions, but it's apparently worse on the Series X.

VRR basically fixes the tearing problem on Series X.

PS5 does not support VRR yet.

If you have a VRR set, all other things being equal, get the Series X version.
If you don't have a VRR set, all other things being equal, get the PS5 version.
 
Oct 26, 2017
6,151
United Kingdom
I get what you're saying, but if you're going to make bullet points then you kinda should include the CU count difference. That's, historically, always been very important in determining relative GPU performance.

Generally speaking, more parallel hardware trumps higher clockspeeds in most workloads. OF COURSE, a game can have widely varying workloads in different scenes, and splitting everything up might not work so well, especially across different games. But... historically, there's been a pretty linear relationship between the number of CUs/CUDA cores and relative GPU performance, while raising clock speeds usually has diminishing returns.

These things all have situations where one benefits more than the other, but based on the last 20 years of PC benchmarks, the number of CUDA cores/CUs has been more important than the clock speed.

Consoles =/= PCs (where hardware is hidden away behind multiple layers of software API abstraction)

Any trends seen in PC benchmarks for discrete GPU cards don't really apply here. Performance bottlenecks on PC are wholly different to those on consoles.
 

Deleted member 1589

User requested account closure
Banned
Oct 25, 2017
8,576
Tearing exists on both versions, but it's apparently worse on the Series X.

VRR basically fixes the tearing problem on Series X.

PS5 does not support VRR yet.

If you have a VRR set, all other things being equal, get the Series X version.
If you don't have a VRR set, all other things being equal, get the PS5 version.
Pretty much.
 

Pheonix

Banned
Dec 14, 2018
5,990
St Kitts
PS5 runs at a Tiger clock 🤣 Maybe that's apt lol.

As for this post: we obviously do not know why the PS5 is enjoying these multi-platform advantages, or whether it's purely API and tools related or a combination of multiple things, but I think it's important to clarify that both the Crytek dev and Dirt 5's tech director (David Springgate), just a few days ago, spoke about the PS5's GPU clockspeed when comparing the XSX with the PS5 in terms of performance.

It was the example David gave for why he's not sure it's fair to say the XSX enjoys a big power advantage. It's also something the Crytek dev touched on, e.g. how TFLOPS, being a peak number for floating point operations, isn't necessarily always relevant to games, and how in real-world examples the PS5's higher clockspeed means it's often performing better in a usable sense. Whilst I can't speak to the validity of these points, I do think it's interesting that both developers referenced the clockspeeds.

Also interesting is that in the same podcast, David actually piled praise on the Xbox's GDK, saying he's very happy with it, that it was actually faster and more stable than the XDK, and that they brought over most things from the XDK for added familiarity, despite it being in its infancy.

Richard from Digital Foundry did say that some developers he spoke to about the Series X hardware and development were "extremely happy" with the GDK, whilst others were "having problems with it".
Haha!! Thanks for catching that lol. Blasted keyboard.

And yes, we don't really know why this is happening. So this is all just conjecture.
I get what you're saying, but if you're going to make bullet points then you kinda should include the CU count difference. That's, historically, always been very important in determining relative GPU performance.

Generally speaking, more parallel hardware trumps higher clockspeeds in most workloads. OF COURSE, a game can have widely varying workloads in different scenes, and splitting everything up might not work so well, especially across different games. But... historically, there's been a pretty linear relationship between the number of CUs/CUDA cores and relative GPU performance, while raising clock speeds usually has diminishing returns.

These things all have situations where one benefits more than the other, but based on the last 20 years of PC benchmarks, the number of CUDA cores/CUs has been more important than the clock speed.
Not if the system is designed specifically to take advantage of it.

In these cases you speak of, what you have is a GPU clocked at, say, 1500MHz, and then some tinkerers go in there and overclock it to 1800MHz or something, then talk about how they aren't seeing the expected performance gains. That happens because that GPU wasn't designed to run at those clocks.

If you design hardware from the ground up to run at higher clocks, that's going to be reflected in everything, down to how your traces are run in the chip, memory timings, etc.

The reason you see processors getting bigger instead of faster is because it's just easier to do and manage, not because going bigger is inherently better. I mean, with every new GPU we hear the same thing: go bigger instead of faster. Then the next GPU is released two years later, and not only is it bigger, it's over 50% faster. And we just write that all off to fab processes lol.

Oh... I did put the CU count difference in. And then some, because the bullet points I made touched on multiple components found inside the CU: stream processors, texture mapping units, etc. The whole point of that post was to demystify the notion that all you need to do with a GPU is look at the CU count and TFs... 'cause there is more to a GPU than that.
 
Dec 31, 2017
1,430
I get what you're saying, but if you're going to make bullet points then you kinda should include the CU count difference. That's, historically, always been very important in determining relative GPU performance.

Generally speaking, more parallel hardware trumps higher clockspeeds in most workloads. OF COURSE, a game can have widely varying workloads in different scenes, and splitting everything up might not work so well, especially across different games. But... historically, there's been a pretty linear relationship between the number of CUs/CUDA cores and relative GPU performance, while raising clock speeds usually has diminishing returns.

These things all have situations where one benefits more than the other, but based on the last 20 years of PC benchmarks, the number of CUDA cores/CUs has been more important than the clock speed.
This can easily be seen in benchmarks of Nvidia cards, most recently the 2000-series Super cards, where some have higher clock speeds than the tier above yet perform worse in gaming benchmarks than the cards with more CUDA cores, so I tend to agree with you. And if it scales that well on PC, it should scale on consoles as well, and not necessarily rely only on higher clock speeds.
 

kubus

Member
Oct 27, 2017
1,502
Tearing exists on both versions, but it's apparently worse on the Series X.

VRR basically fixes the tearing problem on Series X.

PS5 does not support VRR yet.

If you have a VRR set, all other things being equal, get the Series X version.
If you don't have a VRR set, all other things being equal, get the PS5 version.
Thanks, I do have a VRR set (two, actually) and a PS5 and Xbox Series X copy of the game. The plan was that I would play the PS5 version and my boyfriend the Series X version so we wouldn't have to "fight" over a console (these are hardcore first world problems). But now I'm having second thoughts :p. Hmmmm.

Sure would be nice if Sony added VRR support to the PS5 :(.
 

Pheonix

Banned
Dec 14, 2018
5,990
St Kitts
So those things in the RATE sections don't actually exist as X per CU? I thought with unified shaders each CU has a "copy" of everything so they can do vertex shading, pixel shading, etc. - everything.
Some are... Some aren't.

The geometry stuff isn't in the CU. The texture mapping units and RT stuff are; there are 4 TMUs in each CU and one RT core in each CU, basically. The stuff that the PS5 does better is usually in the shader engine or tied to a shader array, vs being in the CU.

But don't take this too seriously. I cannot tell you what kind of impact they have overall. But one thing is for certain: there are things the PS5 does faster. Those things combined can shave off like 2ms of a 16ms render time on the PS5, but then that just means that in the areas where the XSX would be faster, the PS5, while slower there, would have more time to do those tasks. 2ms is just an example, mind you. It could be 0.5ms for all I know.
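To put that example into frame-budget terms (again, the 16ms and 2ms figures are purely hypothetical):

```python
# Frame-budget arithmetic only; the 16 ms and 2 ms figures are the post's own
# hypothetical example, not measurements.
frame_budget_ms = 1000 / 60        # ~16.7 ms available per frame at 60 fps
baseline_ms = 16.0                 # example render time
clock_bound_saving_ms = 2.0        # hypothetical saving on clock-bound stages

new_frame_ms = baseline_ms - clock_bound_saving_ms
headroom_ms = frame_budget_ms - new_frame_ms
print(f"frame time: {new_frame_ms:.1f} ms, "
      f"headroom: {headroom_ms:.1f} ms of a {frame_budget_ms:.1f} ms budget")
```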
 

leng jai

Member
Nov 2, 2017
15,151
Tearing exists on both versions, but it's apparently worse on the Series X.

VRR basically fixes the tearing problem on Series X.

PS5 does not support VRR yet.

If you have a VRR set, all other things being equal, get the Series X version.
If you don't have a VRR set, all other things being equal, get the PS5 version.

Even without the tearing, the Series X version has worse performance and the cutscene bugs.
 
Dec 31, 2017
1,430
Haha!! Thanks for catching that lol. Blasted keyboard.

And yes, we don't really know why this is happening. So this is all just conjecture.

Not if the system is designed specifically to take advantage of it.

In these cases you speak of, what you have is a GPU clocked at, say, 1500MHz, and then some tinkerers go in there and overclock it to 1800MHz or something, then talk about how they aren't seeing the expected performance gains. That happens because that GPU wasn't designed to run at those clocks.

If you design hardware from the ground up to run at higher clocks, that's going to be reflected in everything, down to how your traces are run in the chip, memory timings, etc.

The reason you see processors getting bigger instead of faster is because it's just easier to do and manage, not because going bigger is inherently better. I mean, with every new GPU we hear the same thing: go bigger instead of faster. Then the next GPU is released two years later, and not only is it bigger, it's over 50% faster. And we just write that all off to fab processes lol.

Oh... I did put the CU count difference in. And then some, because the bullet points I made touched on multiple components found inside the CU: stream processors, texture mapping units, etc. The whole point of that post was to demystify the notion that all you need to do with a GPU is look at the CU count and TFs... 'cause there is more to a GPU than that.
Nvidia GPU clock speeds haven't really changed in the last 4 years, though. My 3070 runs at the same clock speeds as my 1080 did (around 2-2.1GHz), and benchmarks show games running that much faster and also scaling well with the bigger GPU. So why wouldn't that also apply to consoles and to the Series X? There is no reason why it shouldn't.
 

Deleted member 46804

User requested account closure
Banned
Aug 17, 2018
4,129
PS5 runs at a Tiger clock 🤣 Maybe that's apt lol.

As for this post: we obviously do not know why the PS5 is enjoying these multi-platform advantages, or whether it's purely API and tools related or a combination of multiple things, but I think it's important to clarify that both the Crytek dev and Dirt 5's tech director (David Springgate), just a few days ago, spoke about the PS5's GPU clockspeed when comparing the XSX with the PS5 in terms of performance.

It was the example David gave for why he's not sure it's fair to say the XSX enjoys a big power advantage. It's also something the Crytek dev touched on, e.g. how TFLOPS, being a peak number for floating point operations, isn't necessarily always relevant to games, and how in real-world examples the PS5's higher clockspeed means it's often performing better in a usable sense. Whilst I can't speak to the validity of these points, I do think it's interesting that both developers referenced the clockspeeds.

Also interesting is that in the same podcast, David actually piled praise on the Xbox's GDK, saying he's very happy with it, that it was actually faster and more stable than the XDK, and that they brought over most things from the XDK for added familiarity, despite it being in its infancy.

Richard from Digital Foundry did say that some developers he spoke to about the Series X hardware and development were "extremely happy" with the GDK, whilst others were "having problems with it".
I would guess developers being happy or unhappy with the GDK might come down to the specific game. A game like Dirt 5 has a completely different set of development goals compared to Valhalla.
 

Pheonix

Banned
Dec 14, 2018
5,990
St Kitts
Nvidia GPU clock speeds haven't really changed in the last 4 years, though. My 3070 runs at the same clock speeds as my 1080 did (around 2-2.1GHz), and benchmarks show games running that much faster and also scaling well with the bigger GPU. So why wouldn't that also apply to consoles and to the Series X? There is no reason why it shouldn't.
But AMD GPU clocks have...

Nvidia just reached the peak of their thermal efficiency quicker. At the end of the day you can only clock so high on any given architecture and fab process.

But seriously, just look at AMD GPU clocks over the last 4 years. I mean, their 80CU GPUs can hit 2500MHz right now, even though they are designed to run at ~2300MHz.

And I don't get the whole scaling thing. I've never said anything isn't scaling well with the XSX's bigger GPU.

All I'm saying is that there are other components in a GPU that have a fixed or similar count across some GPUs. The PS5 and XSX share some of those similarities, and in those cases, having a higher clock works better for those specific components.
 

mogwai00

Member
Mar 24, 2018
1,279
Jeez, most of the conversation about these consoles was already boring before launch, but it's even more boring now.
 
Oct 28, 2017
1,951
Apart from the resolution bump and performance on the new consoles, from the videos the game looks visually very similar to Odyssey.

I was incorrectly expecting the visual experience to be much more (beyond the resolution/performance bump), or maybe it's just the janky nature of the game.
 

leng jai

Member
Nov 2, 2017
15,151
Come on.

Stop the system wars shit. I'm trying to offer an objective summary.

Congratulations on the self-proclaimed objectivity there, mate. How is literally pointing out the other issues the Series X version has, which are highlighted in the analysis, system warring? It's not just the tearing that's the problem. I've got both systems and bought this on Series X since I thought it would run better there. Why would I war for PS5? I have the game and don't enjoy it much because of the subpar performance and bugs.
 

CrichtonKicks

Member
Oct 25, 2017
11,340
Maybe some system-level settings alleviate it. A comparison on the last page says that if you put your Xbox on 60Hz (instead of 120Hz) at the system level, it reduces tearing?

Dictator, any chance you may be able to check/verify this?

C'mon, most people don't have 120Hz TVs and the tearing reports are rampant. It's clearly a huge issue for 60Hz TV users.
 

LilScooby77

Member
Dec 11, 2019
11,285
Super disappointing for Xbox here. It's only done better than PS5 in one game (DMC). I know it's the dev kits, but that's 100% on Microsoft.
 

Myself

Member
Nov 4, 2017
1,282
Some are... Some aren't.

The geometry stuff isn't in the CU. The texture mapping units and RT stuff are; there are 4 TMUs in each CU and one RT core in each CU, basically. The stuff that the PS5 does better is usually in the shader engine or tied to a shader array, vs being in the CU.

But don't take this too seriously. I cannot tell you what kind of impact they have overall. But one thing is for certain: there are things the PS5 does faster. Those things combined can shave off like 2ms of a 16ms render time on the PS5, but then that just means that in the areas where the XSX would be faster, the PS5, while slower there, would have more time to do those tasks. 2ms is just an example, mind you. It could be 0.5ms for all I know.
Thanks.
 

Jeremy

Member
Oct 25, 2017
6,639
Congratulations on the self-proclaimed objectivity there, mate. How is literally pointing out the other issues the Series X version has, which are highlighted in the analysis, system warring? It's not just the tearing that's the problem. I've got both systems and bought this on Series X since I thought it would run better there. Why would I war for PS5? I have the game and don't enjoy it much because of the subpar performance and bugs.

No time for this. Going to save myself a headache and put you on ignore.
 

Mike991

Member
Mar 22, 2020
2
As a (multiplatform) developer, I can't give you information about the various SDKs (NDAs, etc.), but I know many developers from Ubisoft Sofia who worked on both versions like crazy right up until the release date. I think all the Ubi teams deserve applause for simultaneously releasing, what, seven (?) different versions of a huge title like AC:V in these pandemic conditions.

One more thing: the new SDK, the more general GDK, is created not just for the Series consoles and PC, but also for the Xbox One and One X. Does anyone know how the One X and One S versions are performing in comparison to the PS4 Pro and PS4 versions?


Better framerate and less screen tearing on Sony consoles.

 

BadAss2961

Banned
Oct 25, 2017
3,069
I think the main takeaway is that the Series S is a terrible choice for gaming enthusiasts, and you'll be much better off sticking to your current-gen machines (especially if you have a Pro or a One X) until they stop releasing cross-gen titles.
I never saw the point of the Series S other than to finesse casuals with a huge undercut of whatever the PS5's price was going to be.

Sony adjusted with the Digital Edition, which I'm guessing they would've rather priced at $450, and it's by far the best value of all these consoles.