> The dev kit is 40GB because that's a more natural fit for a 320-bit bus than 32GB. 40GB means symmetrical memory modules and a uniform 560GB/s for the whole pool, whilst 32GB would require replicating the asymmetrical chips and split-bandwidth setup of the retail console. Given that price isn't a concern for a dev kit, they may as well max out the whole bus. Devs can always find ways to use it.
Makes me wonder if XSX was going to have 16+4GB of RAM at one point rather than 12+4. Xbox dev kits have had double the RAM of retail units before (360, Scorpio), but more than double seems excessive. Or maybe it was just simpler to use certain chips to reach or exceed their target.
I am no tech expert, but that does seem like a bad idea to me. Why have 10GB at full speed and 6GB at a lower speed? Won't that hinder game performance compared to 16GB at a unified speed?
Can I interject here: the Road to PS5 presentation clearly stated the Tempest audio was a fully custom extra element Sony designed themselves. It's basically a PS3-style Cell processor (I forget the technical terminology for it) that can handle thousands of positional audio channels.
Sort of, here's a quote from a transcription I keep going back to. It's coming in quite handy tbh.
https://playstationvr.hateblo.jp/entry/2020/03/30/181003
> We're calling the hardware unit that we built the Tempest Engine. It's based on AMD's GPU technology; we modified a compute unit in such a way as to make it very close to the SPUs in PlayStation 3.
While it's based on an AMD compute unit, he sounds pretty clear that it's Sony's design. It almost sounds like they could use it to help with PS3 BC, but it's unlikely.
To echo gofreak, the slide is ambiguous about whether MS is counting the DSP units as part of the fp32 performance. It's not a terrible thing to do; it's a more efficient way of applying standard effects, it's just not as flexible as doing everything in a compute unit the way Sony is doing.
To begin with, no; later in the gen, if stuff starts using more than 10GB of RAM, yes.
> It's cheaper. Shouldn't be too much of an issue in practice. Not everything needs a whole 560GB/s of bandwidth, and 336GB/s is still plenty fast. As long as devs fill up the "slow" memory first with the non-bandwidth-critical stuff it shouldn't cause any issues. Keep in mind, despite the split bandwidth, it's still a unified memory architecture, so data arrays can freely cross the boundary if needed.
They've also built the system to use RAM far more efficiently, as detailed both by the original breakdown and this.
It's cheaper. Shouldn't be too much of an issue in practice. Not everything needs a whole 560GB/s of bandwidth, and 336GB/s is still plenty fast. As long as devs fill up the "slow" memory first with the non-bandwidth-critical stuff it shouldn't cause any issues. Keep in mind, despite the split bandwidth, it's still a unified memory architecture, so data arrays can freely cross the boundary if needed.
Won't bandwidth in XSX drop if games use more than 10GB of VRAM?
Yep, exactly. It's only really 3.5GB in the slower pool, due to the OS reserving 2.5GB. It's really not going to be a big issue, and if it's the only way they could hit 560GB/s within budget then it's a sensible compromise.
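For anyone keeping score, the memory budget being described works out like this. A quick sketch; the pool sizes and the 2.5GB OS reservation are the figures quoted in this thread, and the assumption that the reservation comes entirely out of the slow pool is how posters here describe it:

```python
# XSX memory budget as described in the thread (illustrative figures)
TOTAL_RAM_GB = 16.0
FAST_POOL_GB = 10.0                           # runs at 560 GB/s
SLOW_POOL_GB = TOTAL_RAM_GB - FAST_POOL_GB    # 6.0, runs at 336 GB/s
OS_RESERVED_GB = 2.5                          # reportedly taken from the slow pool

game_slow = SLOW_POOL_GB - OS_RESERVED_GB     # slow-pool RAM left for games
game_total = FAST_POOL_GB + game_slow         # total RAM available to games

print(game_slow, game_total)  # 3.5 13.5
```

Which is where the "only really 3.5GB in the slower pool" and the 13.5GB game-available figures come from.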
Yes it will, but I can't see it happening until later in the new gen.
I don't think it really matters who has the better audio hardware, because both are pushing audio to new heights, and that was my point. We had so many posts suggesting someone (Sony) is finally pushing audio, while Xbox always does something with audio, and Xbox One was better in that regard than PS4. Or posts saying Xbox is brute force while Sony is smarter with their SSD and audio design. Or posts saying Xbox has no dedicated audio hardware or didn't invest a lot into audio as they haven't talked about it much... Well, it's like gofreak said a few days ago when XSX was shown at Hot Chips.
> Yep and that's what matters. Some guys should stop fighting and be happy that both companies push audio to the next level. You like to hear it.
I am now hopeful for better audio across the board for many games.
> Why did Xbox change the branding from "world's most powerful console" to "most powerful Xbox"? They know Sony have some custom features that AMD are probably going to include in RDNA 3. This doesn't mean PS5 is RDNA 3; it means AMD are taking the features and including them for themselves (Mark Cerny mentioned this already).
Not that nonsense again. It's clear PS5 uses RDNA4 already. /s
Redgamingtech has covered this lots of times already. He has good info on AMD and is known for leaking AMD stuff on his channel. He's not a clickbait, useless YTer (in my opinion).
> That bit about Quick Resume just saving the game's memory to SSD made me think... they support 3 games at once, and probably need some scratch space while swapping between them. That's probably around 50GB of the SSD being reserved by this one OS feature.
They use save states, and they are much smaller, as textures for example don't need to be stored. Don't worry about it :)
> Keep in mind, despite the split bandwidth, it's still a unified memory architecture, so data arrays can freely cross the boundary if needed.
I think this will need to be repeated until the end of this generation, as so many don't seem to know the architecture is still unified.
> DF wrote it was roughly a 2080 level of performance, but the actual performance they were told was higher. They were shown SX going through the benchmark tool exceeding 100fps. A 2080 does not reach that; it's 2080 Ti territory.
Correct, but the 60 CU Radeon VII was vastly more powerful than the 56 CU Vega 56. It was clocked higher, had more/better VRAM, and was able to offer 30% more performance for 30% more TFLOPs.
That said, I know I said earlier that we just don't know how well it scales with CUs because we don't have benchmarks. Well, I was wrong. We do have a benchmark, and one straight from MS and Digital Foundry. DF were told that the Gears 5 benchmark running on the 12 TFLOPs RDNA 2.0 XSX GPU performed equivalent to the RTX 2080.
Now we know the 5700 XT Anniversary Edition is 10.14 TFLOPs, very close to the PS5's 10.28 TFLOPs. So we can compare the Anniversary Edition to the RTX 2080 to see what the actual performance difference between the two consoles might be.
Looks like 13%.
Techpowerup shows the RTX 2080 is 11% faster than the Anniversary Edition. So we can assume roughly a 10% performance difference between the two consoles based on these benchmarks. Which means the extra 20% TFLOPs in the XSX GPU offered only around 11% more performance, meaning the extra CUs don't scale as well, or there is a bottleneck somewhere else.
AMD Radeon RX 5700 XT 50th Anniversary specs (techpowerup.com): Navi 10, 1980 MHz, 2560 cores, 160 TMUs, 64 ROPs, 8192 MB GDDR6 @ 1750 MHz, 256-bit.
The interesting thing here is that the 5700 XT does not hit its peak clocks; the 9.7 TFLOPs 5700 XT is roughly 9.3 TFLOPs at average 1.8 GHz game clocks. I wouldn't be surprised if the 1.98 GHz Anniversary Edition doesn't hit those clocks either during gameplay. Seeing as how it's offering only 4% more performance, it's actually maxing out at around 9.6 TFLOPs. But if the PS5 GPU can regularly hit 10.28 TFLOPs, then it might actually outperform the Anniversary Edition by a good 7%, which would close the gap between the PS5 and Xbox Series X GPUs even more. Of course, that doesn't line up with what Dusk Golem said, but I guess this is the best comparison we have right now, based on the RTX 2080/XSX Gears 5 benchmarks and the Anniversary Edition benchmarks. Assuming, of course, the PS5 can run at peak clocks at all times.
I am going to look into some Gears 5 benchmarks next to see if the differences there are more pronounced, because I know that the averages can sometimes be worse or better for some games.
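The TFLOP figures being compared above all come from the same formula (shader count × 2 FMA ops per cycle × clock). A quick sketch, using the CU counts and clocks quoted in this thread; the 64-shaders-per-CU figure is the standard RDNA layout:

```python
def tflops(cus, clock_ghz, shaders_per_cu=64):
    """Peak FP32 throughput: shaders * 2 ops/cycle (FMA) * clock (GHz) -> TFLOPs."""
    return cus * shaders_per_cu * 2 * clock_ghz / 1000

ae_5700xt = tflops(40, 1.980)   # 5700 XT Anniversary Edition -> ~10.14
ps5       = tflops(36, 2.230)   # PS5 at its quoted peak clock -> ~10.28
xsx       = tflops(52, 1.825)   # Series X                     -> ~12.15

# XSX carries ~20% more peak TFLOPs than the Anniversary Edition,
# which is the gap the Gears 5 / RTX 2080 comparison is probing.
print(round(xsx / ae_5700xt - 1, 2))  # 0.2
```

Peak numbers only, of course; as noted above, real cards often sit below their rated clocks in games, which is exactly why the scaling argument is hard to settle from spec sheets alone.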
The dev kit is 40GB because that's a more natural fit for a 320-bit bus than 32GB. 40GB means symmetrical memory modules and a uniform 560GB/s for the whole pool, whilst 32GB would require replicating the asymmetrical chips and split-bandwidth setup of the retail console. Given that price isn't a concern for a dev kit, they may as well max out the whole bus. Devs can always find ways to use it.
Mind you, if price wasn't an issue for the retail system and they did want to go with 20GB, it would likewise have a uniform 560GB/s of bandwidth, not a 16+4 split like you suggest. The reason the retail system has the 10+6 split is that they metaphorically started with a 20GB setup, split it in half, then subtracted 4GB from one half (which also limited the amount of the bus that half could use), leaving 10GB at full speed and 6GB at 60% speed.
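The 560/336 numbers fall straight out of the bus arithmetic. A sketch, assuming the standard GDDR6 figures (14Gbps per pin, 32-bit chips; the 6×2GB + 4×1GB chip mix is the commonly reported retail configuration):

```python
def bandwidth_gb_s(bus_width_bits, pin_rate_gbps=14):
    """GDDR6 bandwidth: pins * bits-per-pin-per-second / 8 bits-per-byte."""
    return bus_width_bits * pin_rate_gbps / 8

# All ten 32-bit chips in parallel (6x2GB + 4x1GB = 16GB on a 320-bit bus):
fast = bandwidth_gb_s(10 * 32)   # 560.0 GB/s across the first 10GB
# The upper gigabyte of the six 2GB chips only spans six chips (192 bits):
slow = bandwidth_gb_s(6 * 32)    # 336.0 GB/s across the remaining 6GB

print(fast, slow, slow / fast)   # 560.0 336.0 0.6  (the "60% speed" above)
```

The same arithmetic explains the dev kit: ten symmetrical 4GB chips give 40GB with the full 320-bit bus, hence a uniform 560GB/s.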
> I thought the whole purpose of a dev kit was to have hardware as close to the retail unit as possible? Why the extra RAM?
Development and debugging purposes; it's standard for devkits.
To start unoptimised and improve down to final spec (which it can represent on demand).
> It really depends on the map how the game performs; let us just wait till launch to see how the Series X or PS5 compare in cross-platform games with RT before saying it is like an RTX 2080. I honestly do not think it will perform as well as that GPU.
Oh man, I thought I remembered you and John talking about similar performance to the RTX 2080 in one of the Minecraft RTX videos. I might have misremembered.
> Oh man, I thought I remembered you and John talking about similar performance to the RTX 2080 in one of the Minecraft RTX videos. I might have misremembered.
Nah, the only comparison drawn to the 2080 was raster performance in Gears 5.
> I don't think it really matters who has the better audio hardware, because both are pushing audio to new heights, and that was my point. We had so many posts suggesting someone (Sony) is finally pushing audio, while Xbox always does something with audio, and Xbox One was better in that regard than PS4. Or posts saying Xbox is brute force while Sony is smarter with their SSD and audio design. Or posts saying Xbox has no dedicated audio hardware or didn't invest a lot into audio as they haven't talked about it much...
All of these posts were simply wrong, and they put too much weight on talk or marketing of certain aspects, when companies like Sony and Microsoft have their own ideas about communicating. And neither approach means PS5 or Xbox is lacking in an area. Ultimately, it doesn't matter who has the slightly better solution, because as I said, both having very good audio solutions and hardware means it gets more attractive for third-party devs to use them, and in the end we consumers benefit.
Yep and that's what matters. Some guys should stop fighting and be happy that both companies push audio to the next level. You like to hear it.
- Don't take marketing so seriously.
- Redgamingtech isn't reliable imo
No, devs even asked for this, as you can read in the DF Xbox Series X article a few months ago. Not every task needs very high bandwidth, and while 16GB at 560GB/s would be nice, the compromise to save cost while pushing bandwidth very high for the 10GB is worth it.
No, you always need some RAM for CPU, audio, ..., thus games won't suddenly use 13.5GB of RAM for the GPU.
"Where we ended up is a unit with roughly the same SIMD power and bandwidth as all eight Jaguar cores in the PS4 combined
> I thought the whole purpose of a dev kit was to have hardware as close to the retail unit as possible? Why the extra RAM?
To make it easier to debug the game.
> IIRC MC on the RTX 2080 Ti was more stable at 60, and the demo was more complex than the XSX demo. Anyway, nice to see RT on consoles. But full path tracing is still power hungry.
A Ti runs better, but the 2080 seems to be in the same ballpark as SX. From what I've seen, performance varies a lot depending on the map on PC, so unless we have the final game to make 1:1 comparisons it won't be fair.
DF wrote it was roughly a 2080 level of performance, but the actual performance they were told was higher. They were shown SX going through the benchmark tool exceeding 100fps. A 2080 does not reach that; it's 2080 Ti territory.
> Tbf, as far as I'm aware... Matt is a dev. He has access to dev kits. That doesn't mean he has access to all of Sony's internal discussions and decisions. There would be a select few people privy to those kinds of decisions, and they sure as hell wouldn't be sharing them with a random developer imo.
> Unless Matt is high up in Sony, and why would he share things on here if he was? That's the only reason I asked.
Because he knows how hardware is designed, and a last-minute change to the power budget messes up the whole chip's power-allocation design drastically. 40-50MHz, sure, but not something this drastic. Then you need to change everything again.
> DF wrote it was roughly a 2080 level of performance, but the actual performance they were told was higher. They were shown SX going through the benchmark tool exceeding 100fps. A 2080 does not reach that; it's 2080 Ti territory.
That is not what we put out. Rather, I have said that it was close to a 2080, or "broadly similar to" it, in the Gears 5 benchmark on Ultra with no dynamic res on; in actuality it performed a bit below the 2080 in the Gears 5 benchmark, as Richard has said in one of his videos. So nothing about 100 fps or whatever you say there. Regarding RT performance, we have said nothing definitive yet, since there are no definitive comparisons out there without Vsync on, so we will have to wait for the games to release. But the rumbling in my gut does not make me think it will be exactly RTX 2080 Ti tier.
And the 5700 XT is the 40 CU configuration; it has more performance than the 36 CU 5700. In fact, even when overclocked past 2GHz, the 5700 usually stays behind or at best on par with the 5700 XT.
Also, the chart you posted is for 1080p; do you have it for 4K? I assume the higher resolution will hammer the GPU more, both CUs and bandwidth, and should give a more realistic performance delta.
No. The CPU needs to access RAM too. Typical CPU bandwidth requirements are well under 100GB/s. A Core i9-9900k only has a ram bandwidth of 39.74 GiB/s!
And the CPUs have reasonably sized caches to prevent needing to access ram every cycle. The CPU will use the slower pool, leaving the faster pool for the GPU. GPUs can't use caches as effectively as CPUs can due to the amount of data they have to access compared to a CPU. It's a brilliant design for a console.
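The 39.74 GiB/s figure quoted for the i9-9900K is just dual-channel DDR4-2666 arithmetic. A sketch, assuming the usual desktop configuration (two 64-bit channels, 2666.67 MT/s effective):

```python
def dram_bandwidth(channels, bus_bits, mega_transfers):
    """Peak DRAM bandwidth in bytes/s: channels * bytes-per-transfer * transfer rate."""
    return channels * (bus_bits // 8) * mega_transfers * 1_000_000

# i9-9900K: two 64-bit channels of DDR4-2666 (2666.67 MT/s effective)
bw = dram_bandwidth(2, 64, 8000 / 3)
print(round(bw / 2**30, 2))  # 39.74 (GiB/s)
```

Which makes the point above concrete: even a high-end desktop CPU's peak memory bandwidth is well below the XSX's 336 GB/s slow pool, let alone the 560 GB/s fast pool.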
No, it will increase performance because the 10GB is much faster than the unified setup.
The slower pool is still 336 GB/s, and the difference between that and the PS5's setup is exactly the same as between the PS5 and the faster pool. That's enough bandwidth even for textures not to be much trouble, especially for lower mips targeted at distant objects.
The really important thing is the big bandwidth consumers: framebuffer, RT, etc. It matters that the GPU tasks that demand more bandwidth can use the entirety of it, and 10GB is more than enough for that.
I think it's far more likely that 3.5GB won't be enough for the low-bandwidth content, so they'll have to use the fast pool for that as well, than the other way around, where 10GB wouldn't be enough for everything that needs bandwidth and they'd have to put some of it on the slower side and take a performance hit.
Also keep in mind the Xbone had what, 1/3 of the PS4's bandwidth on the main RAM, and while games reduced everything that was bandwidth-bound on it (resolution, lower-quality post-processing, less grass/blending), I don't think there's even a single game that used lower-quality textures than PS4. That's how little textures require compared to the big bandwidth spenders.
> Oh nice, thank you. That does sound very promising!
If it goes over 10GB then all of the RAM functions at the slower speed. So the answer is devs won't go over that amount. SFS is meant to keep that kind of situation from happening; devs will be able to use higher-quality textures in smaller amounts of RAM.
Thank you for explaining, too. But what if games in 4 or 5 years' time start getting so big that 10GB of fast RAM will not be enough? Is that possible, or will that not happen for the foreseeable future?
> The really important thing is the big bandwidth consumers: framebuffer, RT, etc. It matters that the GPU tasks that demand more bandwidth can use the entirety of it, and 10GB is more than enough for that.
No lies detected, and that's why averaging the RAM in comparisons between the consoles never made any sense.
> People still seem to be attributing the "over 100fps" part to single-player when that relates to multiplayer.
Indeed. The multiplayer runs at 60fps even on the base X1, while the single-player runs at 30fps on it, so the XSX should have enough performance to run multiplayer at 120fps.
> Thank you for explaining, too. But what if games in 4 or 5 years' time start getting so big that 10GB of fast RAM will not be enough? Is that possible, or will that not happen for the foreseeable future?
Unlikely, as the dedicated audio chip and CPU will still need RAM in a few years. Neither console can use all of its RAM for the GPU in a realistic scenario.
Do people not realize that MS can price it whatever they want? That they can easily forego making a profit or breaking even on hardware in order to get more machines into more hands to facilitate more GamePass subscriptions (the real profit engine they're building)?
It kinda doesn't matter how much it cost to build in the end. That won't dictate price. Phil Spencer explicitly said this last year. That they won't be beat on price or power or something to that effect. It seems, weirdly, that some of you almost *want* it to be expensive for some reason.
> Do people not realize that MS can price it whatever they want? That they can easily forego making a profit or breaking even on hardware in order to get more machines into more hands to facilitate more GamePass subscriptions (the real profit engine they're building)?
> It kinda doesn't matter how much it cost to build in the end. That won't dictate price. Phil Spencer explicitly said this last year. That they won't be beat on price or power or something to that effect. It seems, weirdly, that some of you almost *want* it to be expensive for some reason.
The price-or-power statement wasn't about a single console. Their S and X strategy allows them to offer both the cheapest and the most powerful console. No company wants to eat a loss like the PS3; it sold 85 million units and still probably never broke even.
> but we do have benchmarks that show higher clocks don't scale performance 1:1
Ehh, the whole logic that 'the baseline is some arbitrarily chosen lower clock and we overclock the hw' would need to be demonstrated first.
> I am interested to understand how the RT hardware works, as the presentation seems to suggest it shares resources with the TMUs, which isn't exactly encouraging.
Why not? Intersection compute happening in the TMUs actually makes it scale easier (and makes it useful for a lot more than just light rays) than having to send results back and forth between some bespoke circuitry. It seems to offer a fair bit of opportunity for coherency optimizations and multiple bounces, something that RTX did not do well at all so far.
Traversal being in software also means it'll be subject to implementation efficiency, so we may well see it evolve over time considerably.
If that were the case, they would price it at $399 and would have opened preorders already.
This still doesn't mean they can just price their console however they want. People on this forum always jump to extremes to make their "points":
"If MS really wants their games available to everyone, why not put them Playstation?"
"If they can be flexible on price, why not charge $2?"
It becomes so tiresome sometimes.
> So, the Tempest chip is designed like Cell's SPUs. How many audio channels does the XSX audio chip have, then, if the Tempest chip can handle thousands of them?
The Tempest Engine can support "hundreds of sound sources", not thousands.
It sounds to me like 3D audio in PS5 is more sophisticated.
Consoles are almost never the right choice if you're already 50/50 about getting either/or.
> It really depends on the map how the game performs; let us just wait till launch to see how the Series X or PS5 compare in cross-platform games with RT before saying it is like an RTX 2080. I honestly do not think it will perform as well as that GPU.
You can be honest, Alex. You know RT performance won't be anywhere near a 2080 or even a 2070.
> I am no tech expert, but that does seem like a bad idea to me. Why have 10GB at full speed and 6GB at a lower speed? Won't that hinder game performance compared to 16GB at a unified speed?
Cost dictates what gets put into these things. They probably wanted ten 2GB chips but ended up with a mix of 1GB and 2GB chips.
> That is not what we put out. Rather, I have said that it was close to a 2080, or "broadly similar to" it, in the Gears 5 benchmark on Ultra with no dynamic res on; in actuality it performed a bit below the 2080 in the Gears 5 benchmark, as Richard has said in one of his videos. So nothing about 100 fps or whatever you say there.
Oh, thanks for the correction. I could swear I saw footage of the benchmark showing framerates around the 100fps mark; I guess I just confused myself.
> If that were the case, they would price it at $399 and would have opened preorders already.
Yep, and this goes for both companies. They won't just do one thing when there is a lot to consider from a business standpoint. Nobody has an unlimited war chest, and even if that were the case, Sony and MS wouldn't have gotten the war chest by making their decisions based around winning a war and just being lower on price no matter what. Both have to make sensible decisions, and they know price matters; thus they haven't announced them yet.
> Why not? Intersection compute happening in the TMUs actually makes it scale easier (and makes it useful for a lot more than just light rays) than having to send results back and forth between some bespoke circuitry. It seems to offer a fair bit of opportunity for coherency optimizations and multiple bounces, something that RTX did not do well at all so far.
> Traversal being in software also means it'll be subject to implementation efficiency, so we may well see it evolve over time considerably.
I don't see how Nvidia's RT cores are "bespoke" when they're bundled in the shader multiprocessor.
> Oh nice, thank you. That does sound very promising!
> Thank you for explaining, too. But what if games in 4 or 5 years' time start getting so big that 10GB of fast RAM will not be enough? Is that possible, or will that not happen for the foreseeable future?
The point I was trying to make is that they will definitely use more than 10GB, but that won't affect performance. What matters is that the fast pool is large enough to hold the data that does need the extra bandwidth, and it doesn't seem that that will be an issue.
> Ehh, the whole logic that 'the baseline is some arbitrarily chosen lower clock and we overclock the hw' would need to be demonstrated first.
There's evidence they were targeting lower clocks initially: all the leaks and tests that have proven super reliable.
Clock-to-performance scaling has always followed the same rules: you don't get 1:1 scaling just based on how much more compute your chip can do if the rest of the components aren't keeping up. This isn't a revelation to anyone.
But for this to be relevant we must assume Sony was designing for that mythical 8TFlop machine. Which isn't impossible, but no one's brought receipts for it yet. And no, tests of lower-clocked parts aren't it: at least half of the hardware in the last 20 years has documented instances of running at (often significantly) lower clocks prior to launch (including showing tech demos, and actual games, running at those lower clocks), and people never computed their performance scaling on the assumption their final designs were "overclocked". If anything, certain actual last-minute overclocks tended to be actively inflated post-launch by tech-enthusiast sites (well past what 1:1 scaling would actually give).
> If that were the case, they would price it at $399 and would have opened preorders already.
Well, that's kinda the thing. I doubt either wants to price it at $399 if it's costing more than that, but I doubt either would let the competitor have free rein at that price point. Hence the cat and mouse we're in.