
RoboPlato

Member
Oct 25, 2017
6,809
The more I read about the potential of SSDs in consoles how they can improve data streaming and the user experience, the more excited I get. Really smart move to focus on this for next gen
 

Jeffram

Member
Oct 29, 2017
3,924
Great work. This might be why Sony has been so vocal about the SSD as a game changer. They probably feel they will have a performance advantage in this area, so they want to put a focus on it ahead of Microsoft's blowout.
 

travisbickle

Banned
Oct 27, 2017
2,953
Is this something that may blindside MS like the fast install PS4 had at the start of this gen? Or is it something that's been mentioned by a lot of manufacturers and is an expected standard for future consoles?
 

Deleted member 2340

User requested account closure
Banned
Oct 25, 2017
4,661
Based on this:

- There would be no need for GPU decompression, as the custom SSD has its own hardware accelerator.
- The solution means PS4 games running via BC could all benefit from fast loading by default: all PS4 games on PS5 could have super quick loading like that shown in the Spider-Man demo.

I am not a tech person

Isn't that inaccurate? The Spider-Man game was made to push the PS4 and PS4 Pro to their limits, not the PS5. Wouldn't load times on the PS5 be business as usual once PS5 games meant to push that hardware to its limit are released?

I've seen that statement so many times, and it doesn't make sense to me given the above context.
 

chris 1515

Member
Oct 27, 2017
7,074
Barcelona Spain
https://www.giantbomb.com/forums/ge...w-consoles-and-texture-decompression-1499274/

I have been under the impression it has been done solely on the CPUs of game consoles. This is further buoyed by Sony wanting to add hardware decompression (if I read that right in the OP).

It seems it is done on the GPU on MS's side, according to a verified insider here.

Nvidia did some research using CUDA for LZW decompression on the GPU; it is much faster than on the CPU:

http://on-demand.gputechconf.com/gtc/2016/posters/GTC_2016_Algorithms_AL_11_P6128_WEB.pdf

And it is possible to do it using compute on AMD GPUs.
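For reference, a plain sequential LZW codec is only a few lines; the GPU research linked above is essentially about parallelising this kind of inherently serial loop. A minimal Python sketch (illustrative only, not the Nvidia implementation):

```python
def lzw_compress(data: bytes) -> list[int]:
    """Textbook LZW: grow a dictionary of seen byte strings, emit codes."""
    table = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc
        else:
            out.append(table[w])
            table[wc] = len(table)  # next free code
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

def lzw_decompress(codes: list[int]) -> bytes:
    """Rebuild the dictionary on the fly; the else-branch handles the
    classic edge case where a code refers to the entry being built."""
    table = {i: bytes([i]) for i in range(256)}
    prev = table[codes[0]]
    out = [prev]
    for code in codes[1:]:
        entry = table[code] if code in table else prev + prev[:1]
        out.append(entry)
        table[len(table)] = prev + entry[:1]
        prev = entry
    return b"".join(out)

sample = b"TOBEORNOTTOBEORTOBEORNOT"
assert lzw_decompress(lzw_compress(sample)) == sample
```

Note how each decoded code depends on the dictionary state left by the previous one — that serial dependency is exactly what the CUDA work has to break up to get GPU speedups.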
 

melodiousmowl

Member
Jan 14, 2018
3,774
CT
I am not a tech person

Isn't that inaccurate? The Spider-Man game was made to push the PS4 and PS4 Pro to their limits, not the PS5. Wouldn't load times on the PS5 be business as usual once PS5 games meant to push that hardware to its limit are released?

I've seen that statement so many times, and it doesn't make sense to me given the above context.

There are a few things being conflated:

Old unpatched games probably just load really fast - and TBH that could just be due to the fast storage and better CPU.

I watched a bit of the Spider-Man vid, but it's not clear if they patched it in any form, so it's really all useless conjecture for now (we still don't know what the final solution is)
 

Wereroku

Member
Oct 27, 2017
6,248
I am not a tech person

Isn't that inaccurate? The Spider-Man game was made to push the PS4 and PS4 Pro to their limits, not the PS5. Wouldn't load times on the PS5 be business as usual once PS5 games meant to push that hardware to its limit are released?

I've seen that statement so many times, and it doesn't make sense to me given the above context.
No. The bandwidth increase is so high with the switch to an SSD that even if game file sizes double or triple, loading would still be massively improved compared to PS4.
 

melodiousmowl

Member
Jan 14, 2018
3,774
CT
It seems it is done on the GPU on MS's side, according to a verified insider here.

Nvidia did some research using CUDA for LZW decompression on the GPU; it is much faster than on the CPU:

http://on-demand.gputechconf.com/gtc/2016/posters/GTC_2016_Algorithms_AL_11_P6128_WEB.pdf

And it is possible to do it using compute on AMD GPUs.

Well, neat then! Honestly, the whole asset loading thing is kind of a black box without an NDA. Just from adding an SSD to the PS4, it's faster, but not as fast as it could be, so there's some bottleneck somewhere.
 
Oct 25, 2017
1,575
Also, keep in mind that SATA 3 in the PS4 maxed out at ~500 MB/s real world (with an appropriate SSD), much lower depending on the workload. Spinning hard drives probably topped out at 120 MB/s but usually sustain around 50.
Insomniac actually revealed at a presentation that because they had to account for user-swappable hard drives (and some of those could be really slow), they had to design Spider-Man around a 20 MB/s target.

Edit: I see someone already mentioned that to you.
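A quick back-of-the-envelope on those numbers (the 20 MB/s design target and rough drive speeds are the figures quoted above; the 1 GB working set is just a made-up illustration):

```python
def seconds_to_stream(megabytes: float, mb_per_s: float) -> float:
    """Time to stream a fixed amount of data at a sustained bandwidth.
    Ignores seeks, decompression and filesystem overhead."""
    return megabytes / mb_per_s

working_set = 1024.0  # hypothetical 1 GB of level data

for label, bw in [("Spider-Man 20 MB/s design target", 20),
                  ("5400rpm HDD, ~50 MB/s sustained", 50),
                  ("SATA3 SSD in a PS4, ~500 MB/s", 500)]:
    print(f"{label}: {seconds_to_stream(working_set, bw):.1f} s")
```

So the same gigabyte of assets goes from roughly a minute at the design target to a couple of seconds on a SATA SSD, before you even get to the multi-GB/s figures being discussed for next gen.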
 

Fisty

Member
Oct 25, 2017
20,227
I am not a tech person

Isn't that inaccurate? The Spider-Man game was made to push the PS4 and PS4 Pro to their limits, not the PS5. Wouldn't load times on the PS5 be business as usual once PS5 games meant to push that hardware to its limit are released?

I've seen that statement so many times, and it doesn't make sense to me given the above context.

I think the power jump for the PS5 overall would be around 5x or 6x over the PS4, but the SSD jump would be like 40x? I don't know the exact numbers obviously, but the gap from a 5400rpm HDD to an SSD will be MUCH greater than the other gaps between PS4 and PS5
 

stan423321

Member
Oct 25, 2017
8,676
So SSD manufacturers made SSDs around absurdly small sectors (given their size), and Sony has now patented fixing that mistake? F@#%.
 

DieH@rd

Member
Oct 26, 2017
10,569
I want to know how the thing is going to work when you play from a regular external HDD.
If game developers have to rely on the internal custom SSD [if every PS5 native game HAS to use it, which is most likely the case to help devs normalize the playing field], external drives will most likely be used only as "cold storage".

Which means you can have as many game installations as you like on an external drive, but to actually start one you will need to transfer it to the internal SSD. Yes, this is a small inconvenience, but it will have to be endured to ensure a level playing field for all software.

One exception could possibly be BC software. That software was not built for this new storage spec, so it could possibly be played directly from ordinary external hard drives.
 

bombshell

Banned
Oct 27, 2017
2,927
Denmark
If game developers have to rely on the internal custom SSD [if every PS5 native game HAS to use it, which is most likely the case to help devs normalize the playing field], external drives will most likely be used only as "cold storage".

Which means you can have as many game installations as you like on an external drive, but to actually start one you will need to transfer it to the internal SSD. Yes, this is a small inconvenience, but it will have to be endured to ensure a level playing field for all software.

One exception could possibly be BC software. That software was not built for this new storage spec, so it could possibly be played directly from ordinary external hard drives.
Your last paragraph is a great point that I had not thought about in my previous reply to that post.
 
OP
OP
gofreak

gofreak

Member
Oct 26, 2017
7,736
So SSD manufacturers made SSDs around absurdly small sectors (given their size), and Sony has now patented fixing that mistake? F@#%.

No, general SSDs use a data granularity that needs to be typically 'OK' for different kinds of access. It's not that this is a mistake to fix; it's just that in a box targeting one kind of application, you can take advantage of assumptions that you can't take advantage of in systems that need to be good across a range of access patterns. Or add things that would be worth adding, but that in a general space might be considered not worth it for just 'one' type of application (e.g. read-only game asset access).
 

Euler.L.

Alt account
Banned
Mar 29, 2019
906
I am not a tech person

Isn't that inaccurate? The Spider-Man game was made to push the PS4 and PS4 Pro to their limits, not the PS5. Wouldn't load times on the PS5 be business as usual once PS5 games meant to push that hardware to its limit are released?

I've seen that statement so many times, and it doesn't make sense to me given the above context.

With the custom SSD solution you can dump and fill the complete system ram in mere seconds. It's basically getting rid of an old bottleneck.
 

bombshell

Banned
Oct 27, 2017
2,927
Denmark
The use of SRAM is very much aimed at data access that is write-once, read-many. Which dovetails nicely with game installs - but would not necessarily suit general applications. A PC has to be ready for anything.

Some of the other proposals could be done - secondary CPUs and accelerators - but would need OS support.
Games today are updated often, though. Or is it still too few writes to be a disadvantage for that solution?
 

Deleted member 2340

User requested account closure
Banned
Oct 25, 2017
4,661
There are a few things being conflated:

Old unpatched games probably just load really fast - and TBH that could just be due to the fast storage and better CPU.

I watched a bit of the Spider-Man vid, but it's not clear if they patched it in any form, so it's really all useless conjecture for now (we still don't know what the final solution is)
No. The bandwidth increase is so high with the switch to an SSD that even if game file sizes double or triple, loading would still be massively improved compared to PS4.
I think the power jump for the PS5 overall would be around 5x or 6x over the PS4, but the SSD jump would be like 40x? I don't know the exact numbers obviously, but the gap from a 5400rpm HDD to an SSD will be MUCH greater than the other gaps between PS4 and PS5

Thanks I think I understand a little better now.
 

CatAssTrophy

Member
Dec 4, 2017
7,621
Texas
Thanks for this thread. Sounds like we can put the "it's just NVMe on PCIe 4" theories to bed. I'm also happy to hear that this scheme can be cost effective. Hoping for a buttload of storage in that block so that entire games can be "cached" to it for this trick.

I wonder how the PS5 OS will prioritize what goes into that storage, though. I.e.: if I add a 4TB drive to my PS5 and fill it up, but the fast storage is only 2TB, will the OS just cache my most recently used games, and delete the older data as I play different things?
 

AegonSnake

Banned
Oct 25, 2017
9,566
10GB per second is insane. Even if they don't come close to that number, we will no longer need large amounts of expensive RAM in the system. Just put in 12-16GB of VRAM and call it a day. If you can fill it up in a second and a half, then what's the need for more?
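The arithmetic behind that "second and a half" (taking the 10 GB/s figure at face value and ignoring decompression and overhead):

```python
ram_gb = 16           # hypothetical unified memory pool
bandwidth_gb_s = 10   # the raw throughput figure discussed in this thread

# Time to refill the entire pool from storage at sustained bandwidth
fill_time = ram_gb / bandwidth_gb_s
print(f"{fill_time:.1f} s to refill the whole pool")  # 1.6 s
```

For comparison, refilling the same 16 GB at a ~100 MB/s HDD rate would take around two and a half minutes, which is why streaming budgets this gen are so conservative.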
 
OP
OP
gofreak

gofreak

Member
Oct 26, 2017
7,736
Games today are updated often, though. Or is it still too few writes to be a disadvantage for that solution?

If we're talking about patches and stuff, the read-write ratio is still super high there. You're installing those patches once, the data may be read umpteen times thereafter while playing the game.

To be clear though, it's not that the write performance in game installs and patching would be slow. Indeed the opposite - the patent application actually talks about the process of writing game asset data to the disk with coarser data granularity, and how that process can be improved. It would be data written through the 'normal' virtual file system - the patent suggests save data for example - that might benefit less from these optimisations. But even there, again, it's not necessarily the case these would be disadvantaged...just not as 'advantaged' as data accessed through the file archive api.
 
Oct 25, 2017
17,904
😂😂
 

melodiousmowl

Member
Jan 14, 2018
3,774
CT
Insomniac actually revealed at a presentation that because they had to account for user-swappable hard drives (and some of those could be really slow), they had to design Spider-Man around a 20 MB/s target.

Edit: I see someone already mentioned that to you.


heh, I totally believe it; spinning HDs can have awful performance, especially on things like random reads

edit:

Just to put that in perspective: from coding to a 20 MB/s baseline to a 3000+ MB/s baseline.
 

bombshell

Banned
Oct 27, 2017
2,927
Denmark
If we're talking about patches and stuff, the read-write ratio is still super high there. You're installing those patches once, the data may be read umpteen times thereafter while playing the game.

To be clear though, it's not that the write performance in game installs and patching would be slow. Indeed the opposite - the patent application actually talks about the process of writing game asset data to the disk with coarser data granularity, and how that process can be improved. It would be data written through the 'normal' virtual file system - the patent suggests save data for example - that might benefit less from these optimisations. But even there, again, it's not necessarily the case these would be disadvantaged...just not as 'advantaged' as data accessed through the file archive api.

That makes sense, thanks.

10GB per second is insane. Even if they don't come close to that number, we will no longer need large amounts of expensive RAM in the system. Just put in 12-16GB of VRAM and call it a day. If you can fill it up in a second and a half, then what's the need for more?
Indeed. I know some people will react negatively at first to a minor RAM increase from current gen to next gen, but the insane increase in read/write I/O speeds will make a big RAM increase unnecessary.
 

monketron

Member
Oct 27, 2017
2,859
heh, I totally believe it; spinning HDs can have awful performance, especially on things like random reads

edit:

Just to put that in perspective: from coding to a 20 MB/s baseline to a 3000+ MB/s baseline.

Yep. It's definitely a reason to get excited. We're going to see a huge generational leap in how games (especially big open-world games) will look and feel on the PS5, and that's not even counting the improved CPU/GPU that will come with it.
 
Oct 25, 2017
17,904
This would definitely explain why Sony has been putting so much focus on the SSD and what it will bring to next gen.

Looking forward to it.
 
Oct 28, 2017
4,589
Damn, so if we want to upgrade the storage we have to buy it from Sony, right? If swapping is possible, that is...

I wonder, will they be M.2 style?
 

Inuhanyou

Banned
Oct 25, 2017
14,214
New Jersey
Does this patent at all address the possibility of the SSD being attached onto the motherboard with HDDs still being the standard? I figure they would consider such a setup an optimal solution due to the size concerns of games and such.
 

degauss

Banned
Oct 28, 2017
4,631
I am not a tech person

Isn't that inaccurate? The Spider-Man game was made to push the PS4 and PS4 Pro to their limits, not the PS5. Wouldn't load times on the PS5 be business as usual once PS5 games meant to push that hardware to its limit are released?

I've seen that statement so many times, and it doesn't make sense to me given the above context.

For running PS4-designed games, the PS5 should just be like a PS4 Pro visually, but with greatly reduced load times.
For running PS5-designed games, it will allow them to stream more assets more quickly (richer, more detailed worlds).
 

Pacbois

Member
Oct 27, 2017
80
If it's indeed their secret sauce, it's pretty damn exciting. It's also a smart approach: optimizing a component that was never made for consoles per se, while not making it crazy expensive.

Looks like getting Cerny on board as their hw guy was Sony's best move in years.
 

Fularu

Member
Oct 25, 2017
10,609
Interesting

I'm also fully expecting $500 2TB drives if past Sony proprietary storage media pricing is any indication.
 

Fisty

Member
Oct 25, 2017
20,227
Does this patent at all address the possibility of the SSD being attached onto the motherboard with HDDs still being the standard? I figure they would consider such a setup an optimal solution due to the size concerns of games and such.

They will definitely need some kind of HDD storage compatibility, either with a separate internal HDD or simply support for externals. Probably some kind of 3-4 game rotation like others have mentioned. Maybe for PS5 games specifically they can keep stuff like non-priority audio and cutscenes stored on a normal HDD and only keep high-usage game files in the SSD portion. Depends on the size of the SSD, I guess.
 

Hey Please

Avenger
Oct 31, 2017
22,824
Not America
As a drug-addled non-techie, I have a few queries:

1. AFAIK, SRAM is more expensive than DRAM. Given that the size of the DRAM in an SSD increases with the overall capacity, what size of SRAM cache can realistically be expected for a 1TB SSD?

2. For general-use SSDs, what is the relationship between the sizes of the DRAM cache, the look-up table and the data blocks? Does any of them have to be proportional to the others (given you mentioned that the DRAM can cache the entire look-up table)?

3. I used to think "finer granularity" corresponded to "smaller" details. The portion about how "coarser granularity", i.e. "larger data blocks", allows for accessing files as "small" as 32KB instead of 1GB is sort of breaking my brain. Does "coarser" mean the sizes of data blocks are expanded via software (firmware) in the hypothetical PS5 compared to a mass-market SSD? And if so, how does that allow for smaller data tables?

Edit:

4. The patent indicates a strong case for a non-tiered, single-storage setup which locks the end user out of swapping said internal storage. Am I correct in making that assumption?
 
Last edited:
OP
OP
gofreak

gofreak

Member
Oct 26, 2017
7,736
As a drug-addled non-techie, I have a few queries:

1. AFAIK, SRAM is more expensive than DRAM. Given that the size of the DRAM in an SSD increases with the overall capacity, what size of SRAM cache can realistically be expected for a 1TB SSD?

2. For general-use SSDs, what is the relationship between the sizes of the DRAM cache, the look-up table and the data blocks? Does any of them have to be proportional to the others (given you mentioned that the DRAM can cache the entire look-up table)?

The application says a 'typical' block size would be 4KB. For a 1TB drive, that requires a look-up table of 1GB - 0.1% of capacity.

For SRAM, the amounts would be in the megabytes, I guess.

By coarsening the data block size you can reduce the look-up table size. At one extreme, from the application:

In the present embodiment, therefore, the address conversion table size is minimized by increasing the data processing unit in response to a write request, i.e., granularity level, at least for part of data. Assuming, for example, that the write granularity level is 128 MiB and that data of each entry in the address conversion table is 4 bytes in size as described above, the data size of the address conversion table as a whole is a 1/2²⁵th of the capacity of the flash memory 20. For example, a 32-KiB (32×2¹⁰ bytes) address conversion table can express 1 TiB (2⁴⁰ bytes) of area.

Thus, storing a sufficiently small-sized address conversion table in the SRAM 24 of the flash controller 18 makes it possible to convert addresses without the mediation of an external DRAM. Making the write granularity coarser is particularly effective, for example, for game programs that are loaded from an optical disc or network, stored in the flash memory 20, and only repeatedly referred to

It goes on to describe multiple granularities that can be switched between, with the lookup tables for the first 1 or 2 being small enough to fit in SRAM, and the rest partially cached.

3. I used to think "finer granularity" corresponded to "smaller" details. The portion about how "coarser granularity", i.e. "larger data blocks", allows for accessing files as "small" as 32KB instead of 1GB is sort of breaking my brain. Does "coarser" mean the sizes of data blocks are expanded via software (firmware) in the hypothetical PS5 compared to a mass-market SSD? And if so, how does that allow for smaller data tables?

When you write data to the SSD, you split it up into blocks. If you split - say - a 16MB file into blocks 4KB in size, you get 4096 blocks: 4096 entries in a lookup table. At 4 bytes each, that's 16KB required in the lookup table for that file.

If you split that 16MB file into a much smaller number of blocks - or pack that file with many others into a single block - you can vastly reduce the lookup table size for the data.
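That table-size arithmetic, as a tiny Python sketch (the 4 KiB / 128 MiB block sizes and 4-byte entries are the figures quoted from the application; everything else is illustrative):

```python
def lookup_table_size(capacity_bytes: int, block_size_bytes: int,
                      entry_bytes: int = 4) -> int:
    """Size of a flat logical-to-physical address table:
    one entry per block, entry_bytes per entry."""
    num_blocks = capacity_bytes // block_size_bytes
    return num_blocks * entry_bytes

KIB, MIB, TIB = 2**10, 2**20, 2**40

# 1 TiB drive, conventional 4 KiB blocks -> 1 GiB table (~0.1% of capacity),
# which is why general SSDs need a DRAM chip to hold it.
print(lookup_table_size(TIB, 4 * KIB))    # 1073741824 bytes = 1 GiB

# Same drive at the patent's 128 MiB write granularity -> 32 KiB table,
# small enough to live entirely in the controller's SRAM.
print(lookup_table_size(TIB, 128 * MIB))  # 32768 bytes = 32 KiB

# The 16 MB file example above: 4096 entries * 4 bytes = 16 KB of table.
print(lookup_table_size(16 * MIB, 4 * KIB))
```

Coarsening the blocks by a factor of 32768 (4 KiB to 128 MiB) shrinks the table by the same factor, which is the whole trick.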

The patent goes into a lot more detail about how data is addressed in this context, how addresses are calculated and hashed etc.

By the way, from the Japanese patents, there is one that seems to talk in a lot more detail about using a portion of the SRAM as a general data cache (not just for address table lookups), a cache controller for that, how it compares to DRAM, etc. There doesn't seem to be an English/US application for that (yet).
 
Last edited:

jroc74

Member
Oct 27, 2017
28,996
Those saying bye to replaceable internal hard drives, maybe they can expand on what they did with the PS3 Super Slim.

It had internal storage and an empty slot for an internal hard drive.

But when you installed one, it disabled the internal storage. That wouldn't be useful for the PS5 tho.

Some interesting points brought up about external hard drives. I don't think they will take away using externals. I hope not.
 

Hieroph

Member
Oct 28, 2017
8,995
Interesting stuff. Really want to see Sony's application in action, whatever it ends up being.
 

Craiji

Member
May 26, 2018
217
So what are the chances this gets ratified in some sort of PCIe 4 SSD spec and we get user-serviceable drives? My main worry is that we get a fixed drive and can only add more space using much slower external drives.
 

Baccus

Banned
Dec 4, 2018
5,307
Remember how this gen many games are filled with pop-in? It'll be gone. Next gen is gonna be, at least at first, the ultimate form of this gen.

All those unlocked resolution and framerate games on the Pro? 4K60, super quick loading, zero pop-in remasters, baby.

It's gonna be amazing.