SFS sounds cool, but what is BCPack?
It's a compression format aimed specifically at texture data: Microsoft designed it around the BCn block-compressed formats that GPUs consume directly, so textures stay small on disk and decompress straight into a GPU-ready form.
Is this something that is only going to be used by Microsoft's internal studios, or by third parties as well? Cerny seemed to imply that Kraken was becoming heavily adopted throughout the industry.
Zlib is an industry standard for compression, and not just in games. It's used pretty much everywhere.
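For a sense of how ubiquitous it is, zlib ships in the standard library of most languages; a lossless round trip is a few lines:

```python
import zlib

# zlib is so widespread it's in Python's standard library.
# Repetitive data (like flat texture regions) compresses very well.
data = b"the same texture row repeated " * 100
packed = zlib.compress(data, level=9)

assert zlib.decompress(packed) == data   # lossless round trip
print(len(data), "->", len(packed))      # repetitive data shrinks a lot
```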
Kraken is replacing it in games, but it's still a general-purpose compression algorithm (it's part of RAD Game Tools' Oodle suite; it's not built on zlib, just a more modern take with much faster decompression).
BCPack is specialized in texture compression, and the BCn texture formats it targets are already used throughout the industry.
As for the specific implementation on Series X: it will be open to third parties, and Microsoft is already working with manufacturers of SSDs, GPUs, motherboards, etc. to make the feature available on PC as well. With DirectX 12 Ultimate they want developers using the same code on Series X and PC.
Getting 10 GB on screen quickly is likely not the PS5's primary goal. Instead, I believe we are looking at tech designed to make asset streaming off the SSD fast enough that assets only occupy RAM on demand (or close to it). No more need to pre-allocate heavy data like multiple mesh LODs just because the drive bandwidth can't keep up.
And the more RAM freed up next gen the better, especially if ray tracing really takes off. BVH trees, from my understanding, can be quite large.
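A rough back-of-envelope makes the point (illustrative assumptions: a binary BVH with ~2N-1 nodes for N triangles and 32 bytes per node, which matches common compact layouts):

```python
# Rough estimate of BVH memory footprint. Assumptions (not from any
# specific console spec): binary BVH, 2N-1 nodes for N triangles,
# 32 bytes per node.
def bvh_size_bytes(num_triangles, bytes_per_node=32):
    num_nodes = 2 * num_triangles - 1  # internal + leaf nodes of a binary tree
    return num_nodes * bytes_per_node

# A 10-million-triangle scene:
size = bvh_size_bytes(10_000_000)
print(f"{size / 2**30:.2f} GiB")  # roughly 0.6 GiB for the BVH alone
```

So a single dense scene's acceleration structure can eat a meaningful slice of a 16 GB pool.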
Well, the point is that 16 GB isn't enough for next gen, but both consoles mitigate that with the SSD: size matters less if you can refresh the entire memory fast enough.
And I believe that for practical purposes both will be fast enough that traversal speed is never the problem, and both will have as much data available as the GPU can render.
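Quick arithmetic on the publicly quoted throughput figures (PS5: 5.5 GB/s raw, ~9 GB/s compressed; Series X: 2.4 GB/s raw, ~4.8 GB/s compressed) shows what "refresh the entire memory" means in practice:

```python
# How long would it take to refill the consoles' 16 GB of RAM from SSD?
# Throughput numbers are the publicly quoted raw/compressed figures.
RAM_GB = 16

def refresh_seconds(throughput_gb_s):
    return RAM_GB / throughput_gb_s

for name, gbps in [("PS5 raw", 5.5), ("PS5 compressed", 9.0),
                   ("XSX raw", 2.4), ("XSX compressed", 4.8)]:
    print(f"{name}: {refresh_seconds(gbps):.1f} s to fill 16 GB")
```

Either way it's seconds, not minutes, which is why streaming on demand becomes viable at all.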
That's for example why Hellblade 2 may have looked so good.
Mip levels are as old as the hills.
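For reference, a mip chain is just successive halvings of the base texture, and the whole chain only adds about a third on top of the base level (illustrative sketch, assuming an uncompressed 4-bytes-per-texel format):

```python
# Sizes of a full mip chain for a square texture.
# Illustrative assumption: uncompressed RGBA8 at 4 bytes per texel.
def mip_chain_bytes(size, bytes_per_texel=4):
    levels = []
    while size >= 1:
        levels.append(size * size * bytes_per_texel)
        size //= 2  # each mip halves the resolution
    return levels

chain = mip_chain_bytes(4096)
print(len(chain))             # 13 mip levels for a 4096x4096 texture
print(sum(chain) / chain[0])  # full chain is ~1.33x the base level
```

The interesting next-gen part isn't mips themselves, it's streaming only the mip (or tile) you actually need.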
Look, Microsoft is doing interesting things, and I'm sure Sony is too with their stack. But let's not pick over tweets and words we don't understand and present them as some new magic tech. This is literally what happened in 2013 with the 'secret sauce' presentation of generic technology.
This isn't about secret sauce. It's about a feature (getting data from the SSD as fast as possible so it can be used almost like VRAM) implemented across many blocks.
The main blocks of that are:
- The SSD
- The hardware decompression block, including hardware support for a format specialized in texture compression (BCPack)
- A way for the SSD to feed RAM directly, reducing latency
- Sampler feedback extended all the way back to the SSD, so the SSD loads only the portion of the file that is actually needed instead of the whole file (basically extending tiled resources to the SSD)
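A rough sketch of that last point (hypothetical helper, not a real DirectStorage API; assuming the standard 64 KB tile size tiled resources use): instead of reading a whole mip, you read only the tiles sampler feedback flagged as touched:

```python
# Hypothetical sketch of sampler-feedback-driven streaming.
# tile_read_requests is an invented helper, not a real API: it maps
# each tile the GPU reported as sampled to a 64 KB read from the
# asset file, instead of loading the whole mip.
TILE_BYTES = 64 * 1024  # tile size used by D3D tiled resources

def tile_read_requests(sampled_tiles, mip_file_offset):
    """sampled_tiles: tile indices the sampler-feedback map flagged."""
    return [(mip_file_offset + t * TILE_BYTES, TILE_BYTES)
            for t in sorted(sampled_tiles)]

# If feedback says only 3 of a mip's 256 tiles were sampled,
# we read 192 KB instead of 16 MB:
reqs = tile_read_requests({5, 17, 42}, mip_file_offset=0)
print(len(reqs), sum(size for _, size in reqs))  # 3 requests, 196608 bytes
```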
With all this, Microsoft believes it can deliver data from the SSD at a rate that won't cause any bottleneck. Sony did some things similarly and others differently.
So it's not about secret sauce. It's about two different ways of solving the same problem, and we need more detail on Microsoft's side, because on Sony's the raw speed of the SSD alone already makes it feasible.
I think both implementations will be enough to provide a 1:1 texel-per-pixel rate, meaning each pixel on screen can have a unique texel, bar storage issues (basically what MegaTexture wanted to solve, but without the compromises it brought).
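To put a number on that (illustrative arithmetic, assuming BC7-style block compression at 1 byte per texel): the unique texels actually visible in a 4K frame are a tiny amount of data once streaming is precise enough.

```python
# Illustrative: how much texture data does "one unique texel per
# pixel" require at 4K? Assumptions: BCn block compression at
# 8 bits (1 byte) per texel; a full mip chain adds ~33%.
width, height = 3840, 2160
bytes_per_texel = 1.0   # BC7-style: 8 bpp = 1 byte per texel
mip_overhead = 4 / 3    # full mip chain on top of the base level

resident = width * height * bytes_per_texel * mip_overhead
print(f"{resident / 2**20:.1f} MiB")  # ~10.5 MiB of texels visible per frame
```

The hard part was never the per-frame amount, it's getting exactly the right texels resident in time, which is what both streaming stacks are built for.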