Based on this:
- There would be no need for GPU decompression as the custom SSD has its own hardware accelerator.
- The solution means PS4 games via BC could all benefit from the fast loading by default: all PS4 games on PS5 could have loading as quick as shown in the Spider-Man demo.
Lmfao
https://www.giantbomb.com/forums/ge...w-consoles-and-texture-decompression-1499274/
No? https://en.wikipedia.org/wiki/InfiniBand if this is what you mean. All this stuff will go over PCI - I can't imagine them trying to engineer their own solution.
I have been under the impression it has been done solely on the CPUs of game consoles. This is further buoyed by Sony wanting to add hardware decompression (if I read that right in the OP).
I am not a tech person
Isn't that inaccurate? The Spider-Man game was made to push the PS4 and PS4 Pro to their limits, not the PS5. Wouldn't load times on the PS5 be business as usual once PS5 games meant to push that hardware to its limit are released?
I've seen that statement so many times, and it doesn't make sense to me if you add the above context.
No. The bandwidth increase is so high with the switch to an SSD that even if game file size doubles or triples it would still be massively improved compared to PS4.
It seems it is done on the GPU on MS' side, according to a verified insider here.
Nvidia did some research using CUDA for LZW decompression, and on a GPU this is much faster than on the CPU:
http://on-demand.gputechconf.com/gtc/2016/posters/GTC_2016_Algorithms_AL_11_P6128_WEB.pdf
And it is possible to do it using compute on AMD GPUs.
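For anyone wondering how that parallelizes: the usual trick (and presumably what schemes like the one in that poster build on) is to compress the data as many independently compressed chunks, so each chunk can be decompressed by its own thread block / workgroup. Here's a rough CPU-side Python sketch of just that chunking idea, using zlib and a process pool as a stand-in for GPU workgroups; the chunk size and pool are illustrative assumptions, not anything taken from the Nvidia poster.

```python
# Hypothetical sketch: compress data as independent chunks so decompression can run in parallel.
# A process pool stands in for GPU thread blocks; the chunk size is an arbitrary assumption.
import zlib
from concurrent.futures import ProcessPoolExecutor

CHUNK_SIZE = 256 * 1024  # 256 KiB per independently compressed chunk (assumption)

def compress_chunks(data: bytes) -> list:
    """Split data into fixed-size chunks and compress each one independently."""
    return [zlib.compress(data[i:i + CHUNK_SIZE])
            for i in range(0, len(data), CHUNK_SIZE)]

def decompress_parallel(chunks: list) -> bytes:
    """Decompress all chunks in parallel; each worker handles one chunk,
    the same way a GPU implementation would hand chunks to thread blocks."""
    with ProcessPoolExecutor() as pool:
        return b"".join(pool.map(zlib.decompress, chunks))

if __name__ == "__main__":
    payload = b"example asset data " * 100_000
    chunks = compress_chunks(payload)
    assert decompress_parallel(chunks) == payload
```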
Insomniac at a presentation actually revealed that because they had to account for user-swappable hard drives (and some of those could be really slow), they had to design Spider-Man around a 20 MB/s target.

Also, keep in mind that SATA 3 at max in a PS4 was ~500 MB/s real world (with an appropriate SSD), much lower depending on the workload. Spinning hard drives probably topped out at 120 MB/s but usually sustain around 50.
This was too coherent for Jeff. It's actually readable and understandable. :)
I miss his insanity over random whitepapers.
I want to know how the thing is going to work when you play from an external regular HDD.
Your last paragraph is a great point that I had not thought about in my previous reply to that post.

If game developers have to rely on the internal custom SSD [if every PS5 native game HAS to use it, which is most likely the case to help devs normalize the playing field], external drives will most likely be used only as "cold storage".
Which means you can have as many game installations on an external drive as you like, but to actually start them you will need to transfer them to the internal SSD. Yes, this is a small inconvenience, but it will have to be endured to ensure a level playing field for all software.
One exception can possibly be BC software. That software was not built for this new storage spec, so it could possibly be played from ordinary external hard drives directly.
So SSD manufacturers made SSDs around absurdly small sectors (given their size) and Sony now patented fixing that mistake? F@#%.
Games today are updated often though, or is it still too few writes to be a disadvantage for that solution?

The use of SRAM is very much aimed at data access that is write-once, read-many. Which dovetails nicely with game installs - but would not necessarily suit general applications. A PC has to be ready for anything.
Some of the other proposals could be done - secondary CPUs and accelerators - but would need OS support.
There are a few things being conflated:
Old games unpatched probably just load really fast - and TBH that could just be due to fast storage and better CPU.
I watched a bit of the Spider-Man vid, but it's not clear if they patched it in any form, so it's really all useless conjecture for now (we still don't know what the final solution is).
No. The bandwidth increase is so high with the switch to an SSD that even if game file size doubles or triples it would still be massively improved compared to PS4.
I think the power jump for the PS5 overall would be around 5x or 6x over PS4, but the SSD jump would be like 40x? I don't know the exact numbers obviously, but the gap from a 5400 RPM HDD to the SSD will be MUCH greater than the other gaps between PS4 and PS5.
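Rough back-of-the-envelope on that, using the kinds of numbers thrown around in this thread; the 3000 MB/s figure is just an assumed NVMe-class baseline, not a confirmed spec:

```python
# Back-of-the-envelope bandwidth ratios; all figures are rough assumptions from this thread.
hdd_sustained = 50     # MB/s, typical sustained read on a 5400 RPM drive
hdd_peak = 120         # MB/s, rough sequential peak
ssd_assumed = 3000     # MB/s, conservative NVMe-class figure

print(f"SSD vs sustained HDD: {ssd_assumed / hdd_sustained:.0f}x")  # ~60x
print(f"SSD vs peak HDD:      {ssd_assumed / hdd_peak:.0f}x")       # ~25x
```

So even with conservative numbers the storage jump dwarfs the roughly 5-6x jump elsewhere.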
Games today are updated often though, or is it still too few writes to be a disadvantage for that solution?
Insomniac at a presentation actually revealed that because they had to account for user-swappable hard drives (and some of those could be really slow), they had to design Spider-Man around a 20 MB/s target.
Edit: I see someone already mentioned that to you.
If we're talking about patches and stuff, the read-write ratio is still super high there. You're installing those patches once, the data may be read umpteen times thereafter while playing the game.
To be clear though, it's not that the write performance in game installs and patching would be slow. Indeed the opposite - the patent application actually talks about the process of writing game asset data to the disk with coarser data granularity, and how that process can be improved. It would be data written through the 'normal' virtual file system - the patent suggests save data for example - that might benefit less from these optimisations. But even there, again, it's not necessarily the case these would be disadvantaged...just not as 'advantaged' as data accessed through the file archive api.
Indeed. I know that some people will react negatively at first to a minor RAM increase between current gen and next gen, but the insane increase in read/write I/O speeds will make a big RAM increase unnecessary.

10 GB per second is insane. Even if they don't come close to that number, we will no longer need large amounts of expensive RAM in the system. Just put in 12-16 GB of VRAM and call it a day. If you can fill it up in a second and a half, then what's the need to have more?
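Quick sanity check on that "second and a half", treating the 10 GB/s figure and a 16 GB pool as assumptions (the ~50 MB/s HDD number is the sustained figure mentioned earlier in the thread):

```python
# Time to refill the whole memory pool from storage at the quoted speeds (assumed figures).
ram_gb = 16
ssd_gb_per_s = 10        # the 10 GB/s figure quoted above
hdd_gb_per_s = 0.05      # ~50 MB/s sustained, a typical 5400 RPM HDD

print(f"SSD: {ram_gb / ssd_gb_per_s:.1f} s")   # 1.6 s
print(f"HDD: {ram_gb / hdd_gb_per_s:.0f} s")   # 320 s
```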
Heh, I totally believe it, spinning HDDs can have awful performance, especially on things like random reads.
Edit: just to put that in perspective: going from coding against a 20 MB/s baseline to a 3000+ MB/s baseline.
So the GPU decompression hmqgg told us about was bullshit, right?
No, hmqgg was talking about MS' solution; whichever Sony uses may or may not have anything to do with that.
Does this patent at all address the possibility of the SSD being attached to the motherboard with HDDs still being the standard? I figure they would consider such a setup an optimal solution due to size concerns of games and such.
No, hmqgg was talking about PS5 using GPU decompression.
Oh indeed. I saw some other posts from him, but it was more like speculation about whether Sony was also using the same thing as MS.
Here's the proof:
https://www.resetera.com/threads/xbox-game-studios-ot4-when-everyone-plays-we-all-win.94058/page-319#post-15,921
As a drug-addled non-techie, I have a few queries:
1. AFAIK, SRAM is more expensive than DRAM. Given that the size of the DRAM in an SSD increases with the overall capacity, what size of SRAM cache can realistically be expected for a 1TB SSD?
2. For general-use SSDs, what is the relationship between the sizes of the DRAM cache, the lookup table, and the data blocks? Do any of them have to be proportional to the others (given you mentioned that the DRAM can cache the entire lookup table)?
In the present embodiment, therefore, the address conversion table size is minimized by increasing the data processing unit in response to a write request, i.e., granularity level, at least for part of data. Assuming, for example, that the write granularity level is 128 MiB and that data of each entry in the address conversion table is 4 bytes in size as described above, the data size of the address conversion table as a whole is 1/2^25 of the capacity of the flash memory 20. For example, a 32-KiB (32×2^10 bytes) address conversion table can express 1 TiB (2^40 bytes) of area.
Thus, storing a sufficiently small-sized address conversion table in the SRAM 24 of the flash controller 18 makes it possible to convert addresses without the mediation of an external DRAM. Making the write granularity coarser is particularly effective, for example, for game programs that are loaded from an optical disc or network, stored in the flash memory 20, and only repeatedly referred to.
3. I used to think "finer granularity" corresponded to "smaller" details. The portion about "coarser granularity" and "larger data blocks" allowing for accessing files as "small" as 32KB instead of 1GB is sort of breaking my brain. Does "coarser" mean the sizes of data blocks are expanded via software (firmware) in the hypothetical PS5 compared to a mass-market SSD? And if so, how does that allow for smaller data tables?
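For what it's worth, here is how the arithmetic in that quoted passage works out - a minimal sketch assuming the 4-byte entries from the patent text; the 1 TiB capacity and the 4 KiB comparison case are just illustrative:

```python
# Address conversion table size = (capacity / write granularity) * bytes per entry.
# The 4-byte entry size comes from the patent text; everything else is an illustrative assumption.
def table_size(capacity_bytes, granularity_bytes, entry_bytes=4):
    return (capacity_bytes // granularity_bytes) * entry_bytes

KIB, MIB, TIB = 2**10, 2**20, 2**40

# Conventional fine-grained mapping (e.g. 4 KiB units): 1 GiB of table for a 1 TiB drive,
# which is why ordinary SSDs keep the table in external DRAM.
print(table_size(TIB, 4 * KIB) // MIB, "MiB")    # 1024 MiB

# The patent's coarse 128 MiB write granularity: the whole table shrinks to 32 KiB,
# small enough to live in the flash controller's SRAM.
print(table_size(TIB, 128 * MIB) // KIB, "KiB")  # 32 KiB
```

At least that's how I read the quoted passage: "coarser" refers to how large a unit each table entry maps, so fewer, bigger mapping units means far fewer entries, hence the tiny table.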
First thought, omg.