Devs react to PS5 specs - Twitter edition

Nos

Member
Oct 27, 2017
169
Dictatorrr !!!!!!!!
You could have just waited for the comp videos to make your points for you! You reap what you sow and now you're dead!
 

Fizzlefry9

Member
Apr 23, 2018
67
So I'm a total ignoramus, but how is having an SSD so revolutionary? Have we not had SSDs, particularly M.2 NVMe drives, in PCs for years now? What makes these console ones so different and so drastically improved?
 

beta

Member
Dec 31, 2019
174
So I'm a total ignoramus, but how is having an SSD so revolutionary? Have we not had SSDs, particularly M.2 NVMe drives, in PCs for years now? What makes these console ones so different and so drastically improved?
They are way faster than the vast majority of PC SSDs, and they use file systems that let you actually access files quickly with little overhead. Additionally, developers can rely on the existence of this technology, which is something you cannot easily do on a PC with so many varying configurations.
 
Jan 21, 2019
1,468
I might be in the minority, but I'm so unimpressed any time I see any SC footage. Everything from the environment to the NPCs looks so lifeless, so dead, lacking any kind of atmosphere. On a technical level it's probably insane, but it does nothing for me. It's like a bland tech demo.
Despite being current-gen, games like RDR2, Death Stranding or Ghost of Tsushima look so much better to me.
I don't think we are the minority. As you say, it might be impressive on a technical basis, but there are games on base consoles that look so much better in my eyes, and I fully expect next-gen games to blow us away.
 

modiz

Member
Oct 8, 2018
9,290
So I'm a total ignoramus, but how is having an SSD so revolutionary? Have we not had SSDs, particularly M.2 NVMe drives, in PCs for years now? What makes these console ones so different and so drastically improved?
There are 2 reasons:
1) SSDs couldn't be used as a baseline before because of consoles; now it's possible, which means games can be made under the assumption of fast asset streaming.
2) In the case of the PS5, one of the things Cerny talked about is how an SSD that is 10x faster than the PS4's HDD only yields a 2x loading improvement. The reason is that data streaming is bottlenecked by a lot of processes during loading, so Sony built a custom silicon unit that handles all of these bottlenecks, meaning their SSD, which is roughly 100x faster than the PS4's HDD, actually streams data 100x as fast. There hasn't been a scenario where some component of a console got 100x faster in a long, long time.
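modiz's second point — that raw drive speed only helps up to whatever the rest of the loading pipeline can sustain — can be sketched as a toy model. All numbers below are illustrative placeholders, not actual console figures:

```python
# Toy model of loading throughput: the pipeline runs no faster than its
# slowest stage. On last-gen consoles that stage was the CPU-side work
# (decompression, decryption, copying), not the drive itself.

def effective_load_speed(drive_mb_s, cpu_pipeline_mb_s):
    """Effective loading speed is capped by the slowest pipeline stage."""
    return min(drive_mb_s, cpu_pipeline_mb_s)

hdd = 50            # rough 5,400 RPM HDD figure, MB/s
sata_ssd = 500      # ~10x the HDD
cpu_pipeline = 100  # hypothetical CPU-bound decompress/check-in budget

# A 10x-faster drive only doubles throughput here: 50 -> 100 MB/s.
print(effective_load_speed(hdd, cpu_pipeline))       # 50
print(effective_load_speed(sata_ssd, cpu_pipeline))  # 100

# Move those stages into dedicated silicon (huge pipeline budget) and
# the drive itself becomes the limiting factor again.
print(effective_load_speed(5000, 10_000))            # 5000
```

This is the shape of Cerny's 10x-drive/2x-loading observation: removing the software bottlenecks, rather than the raw drive speed alone, is what lets the full bandwidth reach the game.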
 

Jade1962

Member
Oct 28, 2017
1,605

nib95

Banned
Oct 28, 2017
12,967
Ah thanks! Would you be so kind as to link me his post here?


 

Ricky_R

The Fallen
Oct 27, 2017
2,792
Thanks breh!


Thanks to you too breh. ❤
 

gundamkyoukai

Member
Oct 25, 2017
8,441
Thing is, where did Dictator talk about "procedural texturing" in his post?
Unless I'm mistaken, I feel like this person is replying to something that was never even said.
It was in one of the other tech threads.
We also had another dev chime in on what he said about the SSD when it comes to how fast they can get assets.
 
Jan 21, 2019
1,468
Let's say we get a game that utilizes the SSD of the PS5 or Xbox so heavily that the developers have to rely on a 360° turn taking a second. Could we see a generation of games where you can't change the camera speed? (I don't believe that, but I would applaud any dev that tried to saturate those respective speeds for their exclusive games.)
 

GhostTrick

Member
Oct 25, 2017
8,196
Nib95: link to his post about the "procedural texturing", the one I'm talking about in the NXGamer thread about PS5.


I see, it's one of the listed methods.
That dev seems to claim there's no procedural texturing used in AAA games.
It seems like Far Cry 5 does?

I feel like there's a weird thing going on in that thread, trying to get a "gotcha" moment around DF and Alex, for weird reasons.
 

Dictator

Digital Foundry
Verified
Oct 26, 2017
2,744
Berlin, 'SCHLAND
I am not sure why he thinks I am talking about run time procedural texturing, as I am not wanting to talk about that? So I am not sure why he thinks I am? Perhaps I wrote it in a way where he thinks it is?

My point is (I guess no one understands it still with the way I wrote it here on ResetEra, oddly enough it seems it was understood on Beyond3D?):

Imagine you have 2 ways to prepare a texture. One in which it is a completely bespoke 4k texture baked out from a high res model. Then you have another way where it has a lower base resolution (1024) and its detail is then made up by stamped or instanced trims, decals, shared repeating detail textures. The latter is the direction modern game dev has gone, and most especially modern open world development since you are sharing much of the detail layering between objects.

The first requires a lot more space on the disk and indeed in VRAM. It represents the idea of every object being wholly bespoke, unique, and static in memory as that asset. The other type of texturing system is smaller in VRAM and on disk, and it also requires less artist time, since you are not remaking an entire asset to create variation; rather, you are changing decals, trims, base colour, etc. at run time in the editor (as was shown to DF by Cloud Imperium Games, id, and I am very sure many other game studios have switched over to this method of detail creation). I called the latter procedural, as that is how I understand it. Not procedural as in "the GPU is generating textures".

The idea that an open world game would want fully bespoke, completely unique texture detail, and would therefore need to swap out huge swathes of (texture) data as you merely turn the camera about, is antithetical to how asset reuse (trims, decals, tiling detail textures) is integral to making modern games at large scales, where they cannot spend the time to make fully bespoke textures. It is also very confusing to imagine you would need to flush such large amounts of GPU memory when turning the camera in such a game (even one with very unique textures per asset), considering you are only going to be seeing mip 0 very close to the camera. In reality you would be swapping only a number of extremely large textures, and mid-distance and far detail would perhaps not be swapping at all, or would be swapping mid-chain low-res mips. Why? To prevent aliasing, of course, which is why we use mipmaps in the first place.
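As a rough illustration of the memory argument in the post above, here is a back-of-envelope comparison of the two authoring approaches. Uncompressed RGBA8 is assumed, and the asset counts, resolutions and sharing ratio are made up purely for the sake of the example:

```python
# Back-of-envelope texture memory for the two authoring styles described
# above: fully bespoke 4K textures per asset, versus a small unique base
# texture per asset plus a shared library of trims/decals/detail maps.
# All counts and sizes are hypothetical.

def texture_bytes(side, with_mips=True):
    base = side * side * 4  # RGBA8: 4 bytes per texel
    # A full mip chain adds roughly one third on top of mip 0.
    return base * 4 // 3 if with_mips else base

assets = 1000

# 1) Fully bespoke: every asset carries its own unique 4K texture.
bespoke = assets * texture_bytes(4096)

# 2) Trim/decal approach: each asset has a unique 1K base, plus a shared
#    library of, say, 200 2K trim/detail textures reused by everything.
shared_library = 200 * texture_bytes(2048)
layered = assets * texture_bytes(1024) + shared_library

print(f"bespoke: {bespoke / 2**30:.1f} GiB")  # bespoke: 83.3 GiB
print(f"layered: {layered / 2**30:.1f} GiB")  # layered: 9.4 GiB
```

The order-of-magnitude gap, not the exact numbers, is the point: shared detail layering is what keeps open-world texture budgets (on disk and in VRAM) tractable.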
 

Fizzlefry9

Member
Apr 23, 2018
67
There are 2 reasons:
1) SSDs couldn't be used as a baseline before because of consoles; now it's possible, which means games can be made under the assumption of fast asset streaming.
2) In the case of the PS5, one of the things Cerny talked about is how an SSD that is 10x faster than the PS4's HDD only yields a 2x loading improvement. The reason is that data streaming is bottlenecked by a lot of processes during loading, so Sony built a custom silicon unit that handles all of these bottlenecks, meaning their SSD, which is roughly 100x faster than the PS4's HDD, actually streams data 100x as fast. There hasn't been a scenario where some component of a console got 100x faster in a long, long time.
Well, to be fair, being 10x the speed of a 5,400 RPM HDD isn't much to write home about. The PS4 and X1 were using SUCH antiquated HDDs it's insane. Even the X1X uses a 5,400 RPM disk!!! Craziness. At least this gen they're using very current and quite high-end components.
 

MrKlaw

Member
Oct 25, 2017
10,868
Surely faster is just better. Yes, the baseline is higher, which is great, and the lowest common denominator will be the XSX, which might mean less clear benefits, but there should still be benefits.

Like the GPU: the lowest common denominator will be the PS5, but the XSX will get some benefits in resolution etc.

Any unique function, e.g. VRS, mesh shaders, or anything that requires significant effort to implement, may be something you mostly see first parties leveraging. So hopefully most of the things being talked about are standard RDNA 2, so devs can properly dive in and play around, knowing that both consoles will support them.
 

modiz

Member
Oct 8, 2018
9,290
Well to be fair, being 10x the speed of a 5400rpm HDD isn't much to write home about. PS4 and X1 were using SUCH antiquated HDDs it's insane. Even the X1X uses a 5400rpm disc!!! Craziness. At least this gen they're using very current and quite high end components.
10x is the example that was used by Cerny: basically, if you were to use a SATA SSD in a PS4, you would only really benefit by up to 2x the speed, but with the PS5, their 100x faster SSD actually performs 100x faster than the HDD in the PS4, thanks to all of their custom hardware.
 

Fizzlefry9

Member
Apr 23, 2018
67
Also, since when did an SSD have anything to do with graphical power or rendering? I mean, I feel like I'm living in crazy land, but a good CPU and GPU are vastly more important, no? SSDs are definitely a huge help and will work great in tandem with the other components, but are we not hyping up the drive a bit too much? Again, I'm savvy but I'm no Paul Allen.
 

MrKlaw

Member
Oct 25, 2017
10,868
Also, since when did an SSD have anything to do with graphical power or rendering? I mean, I feel like I'm living in crazy land, but a good CPU and GPU are vastly more important, no? SSDs are definitely a huge help and will work great in tandem with the other components, but are we not hyping up the drive a bit too much? Again, I'm savvy but I'm no Paul Allen.
The limited RAM size increase means a fast SSD really helps to increase the effective access to assets in your game. It's also critical to keeping game install sizes under control, as you don't need to duplicate assets for faster loading.
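The de-duplication point can be put in numbers with a small sketch. On an HDD, shared assets are often copied next to every level that uses them so the drive can read them without seeking; an SSD with cheap random access can keep a single copy. The level counts and sizes below are hypothetical:

```python
# Install-size saving from de-duplicating shared assets.
# HDD layout: shared data duplicated into every level's chunk so reads
# stay sequential. SSD layout: one copy, referenced from everywhere.

level_count = 30
shared_assets_mb = 2000      # props, rocks, foliage reused by every level
unique_per_level_mb = 1000   # data genuinely unique to each level

hdd_install = level_count * (unique_per_level_mb + shared_assets_mb)
ssd_install = level_count * unique_per_level_mb + shared_assets_mb

print(hdd_install / 1000, "GB vs", ssd_install / 1000, "GB")  # 90.0 GB vs 32.0 GB
```

The saving scales with how widely assets are shared, which is why duplication-heavy open-world games stood to gain the most.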
 

sncvsrtoip

Member
Apr 18, 2019
2,313
I am not sure why he thinks I am talking about run time procedural texturing, as I am not wanting to talk about that? So I am not sure why he thinks I am? Perhaps I wrote it in a way where he thinks it is?

My point is (I guess no one understands it still with the way I wrote it here on ResetEra, oddly enough it seems it was understood on Beyond3D?):

Imagine you have 2 ways to prepare a texture. One in which it is a completely bespoke 4k texture baked out from a high res model. Then you have another way where it has a lower base resolution (1024) and its detail is then made up by stamped or instanced trims, decals, shared repeating detail textures. The latter is the direction modern game dev has gone, and most especially modern open world development since you are sharing much of the detail layering between objects.

The first requires a lot more space on the disk and indeed in VRAM. It represents the idea of every object being wholly bespoke, unique, and static in memory as that asset. The other type of texturing system is smaller in VRAM and on disk, and it also requires less artist time, since you are not remaking an entire asset to create variation; rather, you are changing decals, trims, base colour, etc. at run time in the editor (as was shown to DF by Cloud Imperium Games, id, and I am very sure many other game studios have switched over to this method of detail creation). I called the latter procedural, as that is how I understand it. Not procedural as in "the GPU is generating textures".

The idea that an open world game would want fully bespoke, completely unique texture detail, and would therefore need to swap out huge swathes of (texture) data as you merely turn the camera about, is antithetical to how asset reuse (trims, decals, tiling detail textures) is integral to making modern games at large scales, where they cannot spend the time to make fully bespoke textures. It is also very confusing to imagine you would need to flush such large amounts of GPU memory when turning the camera in such a game (even one with very unique textures per asset), considering you are only going to be seeing mip 0 very close to the camera. In reality you would be swapping only a number of extremely large textures, and mid-distance and far detail would perhaps not be swapping at all, or would be swapping mid-chain low-res mips. Why? To prevent aliasing, of course, which is why we use mipmaps in the first place.
Thanks for the detailed clarification. Maybe a Twitter conversation with Andrew? It would surely be very interesting for many.
 

Dictator

Digital Foundry
Verified
Oct 26, 2017
2,744
Berlin, 'SCHLAND
The limited RAM size increase means a fast SSD really helps to increase the effective access to assets in your game. It's also critical to keeping game install sizes under control, as you don't need to duplicate assets for faster loading.
But if you follow the logic of much of this thread, the SSD would enable unprecedented amounts of unique detail if you are swapping your VRAM constantly. Which would mean unprecedented amounts of disk space being used by games.
 

Fizzlefry9

Member
Apr 23, 2018
67
The limited RAM size increase means a fast SSD really helps to increase the effective access to assets in your game. It's also critical to keeping game install sizes under control, as you don't need to duplicate assets for faster loading.
Interesting. Well, consoles have always been known for their optimization, since they are one configuration only (unlike PC, where you can have either an amazing experience or a bad one due to the near-infinite number of configs you can have in your tower). So although I'm a PC guy first, I'm certainly excited to see how these consoles perform in practical situations.
 
Oct 25, 2017
7,494
Also, since when did an SSD have anything to do with graphical power or rendering? I mean, I feel like I'm living in crazy land, but a good CPU and GPU are vastly more important, no? SSDs are definitely a huge help and will work great in tandem with the other components, but are we not hyping up the drive a bit too much? Again, I'm savvy but I'm no Paul Allen.
Devs obviously think differently or it would not have been their most requested thing. I think CPU + SSD are way bigger leaps than the GPUs.
 

DavidDesu

Member
Oct 29, 2017
3,958
Glasgow, Scotland
To me this whole thing is about the differences and similarities, and how big those disparities really are.

When talking about TF you have the XSX with a powerful, modern GPU with the latest architecture that is leaps and bounds more powerful than current gen. The way people are talking, you'd think the PS5 was working off some outdated tech, when in reality it has virtually the exact same tech but is 2 TF behind in raw compute. They're both still hugely powerful compared to anything we have now, and the difference between raw TF numbers is what, 15/18% when comparing up/down? This gen started off with a 40% difference between XB1 and PS4.

Then we have SSD speeds. Both massively faster than what we have now, generationally different and will absolutely open up new avenues for game design and is about more than just loading times. In this new paradigm however we have one implementation that is I believe 125% faster than the other. One is really fast, one is still twice as fast as that.

Weirdly many try to downplay that latter differential and claim it will mean nothing and that the only thing to care about is that 15% difference in TF which at the end of the day will literally mean most games will run at a slightly lower dynamic resolution than the other. The games will look the same, as they do now even with a 40% difference this generation, but have slightly different resolutions (all way above 1080p as standard) and some small framerate differences.

And as for the true differential, the true generational difference between this console gen and the next, raw storage access speeds, one has more than TWICE the capability of the other.

But never fear, the difference is purely academic, apparently.
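The percentages in the post above can be sanity-checked against the announced spec figures (12.155 vs 10.28 TF; 5.5 vs 2.4 GB/s raw SSD bandwidth):

```python
# Sanity check of the "15/18%" TF gap and the SSD differential, using
# the publicly announced Series X and PS5 spec numbers.

xsx_tf, ps5_tf = 12.155, 10.28   # teraflops
xsx_ssd, ps5_ssd = 2.4, 5.5      # GB/s, raw (uncompressed)

tf_up = (xsx_tf / ps5_tf - 1) * 100    # XSX advantage over PS5
tf_down = (1 - ps5_tf / xsx_tf) * 100  # PS5 deficit vs XSX
ssd_up = (ps5_ssd / xsx_ssd - 1) * 100 # PS5 SSD advantage over XSX

print(f"{tf_up:.0f}% / {tf_down:.0f}% TF, {ssd_up:.0f}% SSD")
# 18% / 15% TF, 129% SSD
```

So the compute gap is ~15-18% depending on direction, while the raw SSD bandwidth gap is ~129%, i.e. more than a factor of two, which is the asymmetry the post is pointing at.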
 

darthkarki

Member
Feb 28, 2019
77
I am not sure why he thinks I am talking about run time procedural texturing, as I am not wanting to talk about that? So I am not sure why he thinks I am? Perhaps I wrote it in a way where he thinks it is?

My point is (I guess no one understands it still with the way I wrote it here on ResetEra, oddly enough it seems it was understood on Beyond3D?):

Imagine you have 2 ways to prepare a texture. One in which it is a completely bespoke 4k texture baked out from a high res model. Then you have another way where it has a lower base resolution (1024) and its detail is then made up by stamped or instanced trims, decals, shared repeating detail textures. The latter is the direction modern game dev has gone, and most especially modern open world development since you are sharing much of the detail layering between objects.

The first requires a lot more space on the disk and indeed in VRAM. It represents the idea of every object being wholly bespoke, unique, and static in memory as that asset. The other type of texturing system is smaller in VRAM and on Disk, and it also requires less artist time since you are not remaking an entire asset to create variation, rather you are changing decals, trims, base colour, etc. at run time in editor (as was shown to DF by Cloud Imperium Games, id, and I am very sure many other game studios have switched over to this method of detail creation). I called the later procedural, as I understand it as that. Not procedural as in "the GPU is generating textures".

The idea that an open world game would want fully bespoke completely unique details enabled by something like, textures, and therefore need to swap out huge swathes of (texture) data as you merely turn the camera about is anti-thetical to how asset reuse (trims, decals, tiling detail textures) is integral to making modern games with large scales where they cannot spend the time to make fully bespoke textures. It is also very confusing to imagine you would need to need to flush such large amounts GPU memory when turning the camera in such a game (even one with very unique textures per asset), considering you are only going to be seeing mip 0 very close to the camera. You would be swapping only a number of extremely large textures in reality, and further mid distance and far detail would perhaps not be swapping at all, or would be swapping mid chain low res mips. Why? To prevent aliasing, of course, which is why we use mipmaps as well.
I think you're very right, generally that is exactly how it will still work. I think the point is that that isn't required.

If you can't reload the entire memory when turning around, then what you are explaining has to be done. Meaning, you can't turn around and see something totally different. It has to be mostly the same as what was already in front of you, just re-arranged.

When you get to the point where you could literally reload a giant chunk of memory in the time it takes to turn around, you don't have to do that anymore. Now, as a designer, you don't have to take these constraints into consideration; you can put whatever you want wherever you want in your game. That's Sony's goal: to eliminate these creative constraints so that you just don't have to think about them anymore.
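To get a rough feel for "reload a giant chunk of memory in the time it takes to turn around": the bandwidth figures below are the announced PS5 numbers, while the turn time and the game-visible RAM share are guesses for illustration:

```python
# How much data can stream in during a quick camera turn, using the
# announced PS5 bandwidth (5.5 GB/s raw, ~8-9 GB/s typical compressed).
# Turn time and game-RAM share are hypothetical round numbers.

turn_time_s = 0.5                 # assumed quick 180° camera turn
raw_gb_s, compressed_gb_s = 5.5, 9.0
game_ram_gb = 13.5                # rough guess at the game-visible share of 16 GB

raw_loaded = raw_gb_s * turn_time_s                # 2.75 GB
compressed_loaded = compressed_gb_s * turn_time_s  # 4.5 GB

# Even half a second of streaming replaces a sizeable fraction of the
# whole game-visible RAM pool.
print(f"{compressed_loaded / game_ram_gb:.0%} of game RAM")  # 33% of game RAM
```

Under these assumptions a single turn can refill roughly a third of game memory, which is the scale of headroom behind the "put whatever you want wherever you want" framing.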
 

nib95

Banned
Oct 28, 2017
12,967
But if you follow the logic of much of this thread, the SSD would enable unprecedented amounts of unique detail if you are swapping your VRAM constantly. Which would mean unprecedented amounts of disk space being used by games.
I'm fairly confident that the space savings from duplicated data (a result of current HDD speed limitations) will be offset by the quantity and quality of assets in next-gen games. In other words, in real-world terms I'm not sure we will actually see a space saving, but at least there might be a more efficient and/or diverse use of the data actually on the disk/drive.
 

gundamkyoukai

Member
Oct 25, 2017
8,441
But if you follow the logic of much of this thread, the SSD would enable unprecedented amounts of unique detail if you are swapping your VRAM constantly. Which would mean unprecedented amounts of disk space being used by games.
Well, you will be replacing the duplicate assets with unique ones in terms of disk size.
So the question will be how much the duplicate assets made your games bigger, and how much the new data will take up.
Still, I expect games to get bigger, but not as huge as some people think.
 

gofreak

Member
Oct 26, 2017
3,836
But if you follow the logic of much of this thread, the SSD would enable unprecedented amounts of unique detail if you are swapping your VRAM constantly. Which would mean unprecedented amounts of disk space being used by games.

Hmm, yes and no, depending on the streaming model anyway. A degree of asset repetition is as much down to what's in the neighbouring 'zones' being kept in memory, and the ratio of asset sharing between neighbouring zones, as it is about having unique data across the entire install. Indeed, the procedural methods mentioned might benefit from being able, in a given area, to draw from a wider range of assets from the install than what (previously) might only have been available in memory because of your locality in the game at that point.

Side note, though: I think the not-purely-procedural aspect of what you were saying in the original post was possibly a bit lost. Procedural in this context means chopping up and 'assembling' static data; static data remains super important. We will see how higher streaming speeds might inform that, if they allow greater working sets in memory at once.
 

Sankt Ra

Member
Oct 27, 2017
2,023
But if you follow the logic of much of this thread, the SSD would enable unprecedented amounts of unique detail if you are swapping your VRAM constantly. Which would mean unprecedented amounts of disk space being used by games.
Would Sony raise their BOM "just" for faster load times if there were nearly no in-game benefit compared to an SSD solution with half the speed?

Thanks for the detailed explanations so far!
 

gundamkyoukai

Member
Oct 25, 2017
8,441
I think you're very right, generally that is exactly how it will still work. I think the point is that that isn't required.

If you can't reload the entire memory when turning around, then what you are explaining has to be done. Meaning, you can't turn around and see something totally different. It has to be mostly the same as what was already in front of you, just re-arranged.

When you get to the point where you could literally reload a giant chunk of memory in the time it takes to turn around, you don't have to do that anymore. Now, as a designer, you don't have to take these constraints into consideration; you can put whatever you want wherever you want in your game. That's Sony's goal: to eliminate these creative constraints so that you just don't have to think about them anymore.
Yep one of the points of the SSD is you don't have to do things like how they were before .
How that will change games and engines we still have to see but dev have been given more options to play around with .
 

Alexandros

Member
Oct 26, 2017
8,610
This fixation that some users have on discrediting Alex is now bordering on creepy. It might be time for the moderation team to step in and set some rules. I don't understand why it should be in any way acceptable for people to keep constantly tagging Alex and calling for him to respond to any comment, tweet or theory on the internet as if he is being put on trial. At which point does this stop being an earnest discussion in good faith and start becoming harassment?
 

Thera

Member
Feb 28, 2019
3,116
That dev seems to claim there's no procedural texturing used in AAA games.
It seems like Far Cry 5 does?
I don't think he's referring to how you build the world, but to how it's rendered.
You use procedural texturing to "generate" the world, but this still produces assets and textures in the end.

After that, you have No Man's Sky, which generates almost everything procedurally.
 

renx

Member
Jan 3, 2020
95
Jumping from an 80 MB/s HDD to a 2 GB/s SSD would have been huge, and cheaper than what Sony did.
I have to believe they invested in and designed this for a reason.
That I/O setup is supposed to manage data in a way that not even a high-end PC can currently match, or that is at least far from standard.
It may end up being overkill, or revolutionary. But I need to wait and see.
 

nib95

Banned
Oct 28, 2017
12,967
This fixation that some users have on discrediting Alex is now bordering on creepy. It might be time for the moderation team to step in and set some rules. I don't understand why it should be in any way acceptable for people to keep constantly tagging Alex and calling for him to respond to any comment, tweet or theory on the internet as if he is being put on trial. At which point does this stop being an earnest discussion in good faith and start becoming harassment?
That's what threads like this are for though, technical discussion. If you interject and offer your own technical insight or take, of course you can expect others to debate and discuss it. And if other technically minded users or developers disagree, debate or challenge what is posted, and other users want further clarification because of that, it's hardly harassment, it's instead in the pursuit of clarification, truth and accountability.

Otherwise anyone could just post anything without ever being challenged or held accountable for the relevance or truth of their details or points.

It's even more relevant when it's people in the public sphere who are actually known or specialise in technical insight. For the latter folk, I can't see why they would be strongly against discussing or debating the challenged points by other experts in the field, unless they simply didn't have the answers.
 