
antispin

Member
Oct 27, 2017
4,780
Yeah, no way this is $499. Just accept it folks.

I'm glad they pushed the envelope. Looks very very nicely engineered, inside and out.
 

LuigiV

One Winged Slayer
Member
Oct 27, 2017
2,685
Perth, Australia
Makes me wonder if XSX was going to have 16+4GB RAM at one point rather than 12+4. Xbox dev kits have had double the RAM of retail units before (360, Scorpio), but more than double seems excessive. Or maybe it was just simpler to use certain chips to reach or exceed their target.
The dev kit is 40GB because that's a more natural fit for a 320bit bus than 32GB. 40GB means symmetrical memory modules and a uniform 560GB/s for the whole pool, whilst 32GB would require replicating the asymmetrical chips and split bandwidth setup of the retail console. Given the price isn't a concern for a dev kit they may as well max out the whole bus. Devs can always find ways to use it.

Mind you, if price wasn't an issue for the retail system and they did want to go with 20GB, it would likewise have a uniform 560GB/s bandwidth, not a 16+4 split like you suggest. The reason the retail system has the 10+6 split is because they metaphorically started with a 20GB RAM setup, split it in half, then subtracted 4GB from one half (which also limited the amount of the bus it could use), leaving 10GB at full speed and 6GB at 60% speed.
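For anyone who wants to sanity-check those figures, here's a rough back-of-the-envelope sketch (my own, assuming 14Gbps GDDR6 chips with 32-bit interfaces on a 320-bit bus, which is what the published 560/336GB/s numbers imply):

```python
# Rough check of the bandwidth figures above (my own sketch, assuming
# 14Gbps GDDR6 chips, each with a 32-bit interface, on a 320-bit bus).

GBPS_PER_PIN = 14      # GDDR6 data rate per pin (Gb/s)
BITS_PER_CHIP = 32     # interface width of a single GDDR6 chip

chip_bw = GBPS_PER_PIN * BITS_PER_CHIP / 8   # 56 GB/s per chip

# Retail Series X: six 2GB chips + four 1GB chips = 16GB.
# The first gigabyte of every chip is interleaved across all ten chips,
# giving the "fast" 10GB; the second gigabyte exists only on the six
# larger chips, so that 6GB is striped across six chips.
fast_pool_bw = 10 * chip_bw   # 560 GB/s for the 10GB pool
slow_pool_bw = 6 * chip_bw    # 336 GB/s for the 6GB pool (60% of full speed)

# A dev kit that fills the whole 320-bit bus with uniform chips (40GB)
# keeps the full 560 GB/s across the entire pool.
print(fast_pool_bw, slow_pool_bw)   # 560.0 336.0
```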
 

Nayeon

Member
Oct 29, 2017
329
The reason the retail system has the 10+6 split is because they metaphorically started with a 20GB RAM setup, split it in half, then subtracted 4GB from one half (which also limited the amount of the bus it could use), leaving 10GB at full speed and 6GB at 60% speed.
I am no tech expert, but that does seem like a bad idea to me. Why have 10GB at full speed and 6 at a lower speed? Will that not hinder game performance in comparison to 16GB at a unified speed?
 

LuigiV

One Winged Slayer
Member
Oct 27, 2017
2,685
Perth, Australia
I am no tech expert, but that does seem like a bad idea to me. Why have 10GB at full speed and 6 at a lower speed? Will that not hinder game performance in comparison to 16GB at a unified speed?
It's cheaper. Shouldn't be too much of an issue in practice. Not everything needs a whole 560GB/s of bandwidth, and 336GB/s is still plenty fast. As long as devs fill up the "slow" memory first with the non-bandwidth-critical stuff it shouldn't cause any issues. Keep in mind, despite the split bandwidth, it's still a unified memory architecture, so data arrays can freely cross the boundary if needed.
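As a purely hypothetical illustration of that "fill the slow pool first" idea (the pool sizes follow the posts below; everything else here, including the allocation list, is made up):

```python
# Toy sketch only: assign low-bandwidth data to the slow pool first,
# keep the fast pool for bandwidth-hungry GPU resources.

SLOW_POOL_GB = 3.5   # slow 6GB pool minus the ~2.5GB OS reservation
FAST_POOL_GB = 10.0

allocations = [          # (name, size in GB, bandwidth need) - all invented
    ("CPU game state",    1.0, "low"),
    ("audio buffers",     0.5, "low"),
    ("animation data",    1.0, "low"),
    ("streamed textures", 5.0, "high"),
    ("render targets",    1.5, "high"),
    ("RT accel structs",  1.0, "high"),
]

slow_used = fast_used = 0.0
for name, size, need in allocations:
    # Low-bandwidth data goes to the slow pool while it still fits;
    # everything else (and any overflow) lands in the fast pool.
    if need == "low" and slow_used + size <= SLOW_POOL_GB:
        slow_used += size
    else:
        fast_used += size

print(f"slow pool: {slow_used}/{SLOW_POOL_GB} GB, fast pool: {fast_used}/{FAST_POOL_GB} GB")
```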
 

bi0g3n3sis

Banned
Aug 10, 2020
211
Can I interject here? The Road to PS5 presentation clearly stated the Tempest audio was a fully custom-built extra element Sony designed themselves. It's basically a PS3-style Cell processor (I forget the technical terminology for it) that can handle thousands of positional audio channels.
Sort of, here's a quote from a transcription I keep going back to. It's coming in quite handy tbh.
https://playstationvr.hateblo.jp/entry/2020/03/30/181003

We're calling the hardware unit that we built the Tempest Engine.
It's based on AMD's GPU technology; we modified a compute unit in such a way as to make it very close to the SPUs in PlayStation 3.

While it's based on an AMD compute unit, he sounds pretty clear that it's Sony's design. It almost sounds like they could use it to help with PS3 BC, but it's unlikely.

To echo gofreak, the slide is ambiguous about whether MS is counting the DSP units as part of the fp32 performance. It's not a terrible thing to do; it's a more efficient way of applying standard effects, it's just not as flexible as doing everything in a compute unit like Sony is doing.

So the Tempest chip is designed like Cell's SPUs. How many audio channels does the XSX audio chip have, then, if the Tempest chip can handle thousands of them?
It sounds to me like 3D audio in PS5 is more sophisticated.


To begin with no, but later in the gen, if stuff starts using more than 10GB of RAM, yes.

Bandwidth in XSX won't drop if games use more than 10GB of VRAM?
 

ArchedThunder

Uncle Beerus
Member
Oct 25, 2017
19,062
It's cheaper. Shouldn't be too much of an issue in practice. Not everything needs a whole 560GB/s of bandwidth, and 336GB/s is still plenty fast. As long as devs fill up the "slow" memory first with the non-bandwidth-critical stuff it shouldn't cause any issues. Keep in mind, despite the split bandwidth, it's still a unified memory architecture, so data arrays can freely cross the boundary if needed.
They've also built the system to use RAM far more efficiently, as detailed both by the original breakdown and this.
 

brain_stew

Member
Oct 30, 2017
4,731
It's cheaper. Shouldn't be too much of an issue in practice. Not everything needs a whole 560GB/s of bandwidth, and 336GB/s is still plenty fast. As long as devs fill up the "slow" memory first with the non-bandwidth-critical stuff it shouldn't cause any issues. Keep in mind, despite the split bandwidth, it's still a unified memory architecture, so data arrays can freely cross the boundary if needed.

It's only really 3.5GB in the slower pool due to the OS reserving 2.5GB. It's really not going to be a big issue, and if it's the only way they could hit 560GB/s within budget then it's a sensible compromise.
 

LuigiV

One Winged Slayer
Member
Oct 27, 2017
2,685
Perth, Australia
It's only really 3.5GB in the slower pool due to the OS reserving 2.5GB. It's really not going to be a big issue, and if it's the only way they could hit 560GB/s within budget then it's a sensible compromise.
Yep exactly.

Yes it will, but I can't see it happening until later in the new gen.

Outside of maybe one or two games in the whole generation, I don't see it happening at all. As brain_stew said above, the OS takes up 2.5GB, so from a dev's perspective it's a 10 to 3.5 split. Dedicating more than 74% of the entire memory pool to bandwidth-critical data strikes me as extreme.
 

arsene_P5

Prophet of Regret
Member
Apr 17, 2020
15,438
Well, like gofreak said a few days ago when XSX was shown at Hot Chips:
I don't think it really matters who has the better audio hardware, because both are pushing audio to new heights, and that was my point. We had so many posts suggesting someone (Sony) is finally pushing audio, while Xbox always does something with audio and Xbox One was better in that regard than PS4. Or posts saying Xbox is brute force while Sony is smarter with their SSD and audio design. Or posts saying Xbox has no dedicated audio hardware or didn't invest a lot into audio because they haven't talked about it much...

All of these posts were simply wrong, and they put too much weight on marketing and what gets talked about, when companies like Sony and Microsoft have their own ideas about communicating. Neither approach means PS5 or Xbox is lacking in an area. Ultimately, it doesn't matter who has the slightly better solution, because, as I said, both having a very good audio solution and hardware means it gets more attractive for third-party devs to use them, and in the end we consumers benefit.
I am now hopeful for better audio across the board for many games.
Yep and that's what matters. Some guys should stop fighting and be happy that both companies push audio to the next level. You love to hear it.
Why did Xbox change the branding from "world's most powerful console" to "most powerful Xbox"? They know Sony have some custom features that AMD are probably going to include in RDNA 3. This doesn't mean PS5 is RDNA 3; it means AMD are taking the feature and including it for themselves (Mark Cerny mentioned this already).

Redgamingtech covered this lots of times already. He has good info on AMD and he is known for leaking AMD stuff on his channel. He's not your usual clickbait useless YTer (in my opinion).
Not that nonsense again. It's clear PS5 uses RDNA4 already. /s
  • Don't take marketing so seriously.
  • Redgamingtech isn't reliable imo
That bit about quick resume just saving the game's memory to SSD made me think...they support 3 games at once, and probably need some scratch space while swapping between them. That's probably around 50GB of the SSD being reserved by this one OS feature.
They use save states, and they are much smaller since textures, for example, don't need to be stored. Don't worry about it :)
I am no tech expert, but that does seem like a bad idea to me. Why have 10GB at full speed and 6 at a lower speed? Will that not hinder game performance in comparison to 16GB at a unified speed?
No, devs even asked for this, as you can read in the DF Xbox Series X article from a few months ago. Not every task needs very high bandwidth, and while 16GB at 560GB/s would be nice, the compromise to save cost while pushing bandwidth very high for the 10GB is worth it.
To begin with no, but later in the gen, if stuff starts using more than 10GB of RAM, yes.
No, you always need some RAM for the CPU, audio, etc., thus games won't suddenly use 13.5GB of RAM for the GPU.
 

MrKlaw

Member
Oct 25, 2017
33,059
If it was a larger split, the RAM could have been an issue. The point of unified memory is you have one pool to work from. If you start having to manage your RAM carefully to get the most from it, that's not great (Xbox One with ESRAM is perhaps an extreme example). But as mentioned, it's only a few GB after the OS takes its cut, so it should be almost a non-issue. Smart design.
 

arsene_P5

Prophet of Regret
Member
Apr 17, 2020
15,438
Keep in mind, despite the split bandwidth, it's still a unified memory architecture so data arrays can freely cross the boundary if needed.
I think this will need to be repeated till the end of this generation as so many don't seem to know the architecture is still unified.
 

ShapeGSX

Member
Nov 13, 2017
5,225
I am no tech expert, but that does seem like a bad idea to me. Why have 10GB at full speed and 6 at a lower speed? Will that not hinder game performance in comparison to 16GB at a unified speed?
No. The CPU needs to access RAM too. Typical CPU bandwidth requirements are well under 100GB/s. A Core i9-9900K only has a RAM bandwidth of 39.74 GiB/s!

And the CPUs have reasonably sized caches to prevent needing to access ram every cycle. The CPU will use the slower pool, leaving the faster pool for the GPU. GPUs can't use caches as effectively as CPUs can due to the amount of data they have to access compared to a CPU. It's a brilliant design for a console.
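That 39.74 GiB/s figure lines up with the theoretical peak for the i9-9900K's official memory spec; here's a quick check (my own arithmetic, assuming dual-channel DDR4-2666):

```python
# Theoretical peak memory bandwidth for dual-channel DDR4-2666
# (the official spec for the i9-9900K); my own arithmetic.

channels = 2
bus_width_bits = 64          # per DDR4 channel
transfers_per_sec = 2666e6   # DDR4-2666 -> 2666 MT/s

bandwidth_gb = channels * (bus_width_bits / 8) * transfers_per_sec / 1e9
bandwidth_gib = bandwidth_gb * 1e9 / 2**30

print(f"{bandwidth_gb:.1f} GB/s = {bandwidth_gib:.1f} GiB/s")
# ~42.7 GB/s, i.e. ~39.7 GiB/s: an order of magnitude below either GPU pool.
```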
 

Lukas Taves

Banned
Oct 28, 2017
5,713
Brazil
I am no tech expert, but that does seem like a bad idea to me. Why have 10GB at full speed and 6 at a lower speed? Will that not hinder game performance in comparison to 16GB at a unified speed?
No, it will increase performance, because the 10GB is much faster than a unified setup would be.

The slower pool is still 336GB/s, and the difference between that and the PS5's setup is about the same as between the PS5 and the faster pool. That's enough bandwidth for even textures not to be much trouble, especially for lower mips targeted at distant objects.

The really important things are the big bandwidth consumers: framebuffer, RT, etc. It's important that the GPU tasks that demand more bandwidth can use the entirety of that bandwidth, and 10GB is more than enough for that.

I think it's far more likely that 3.5GB won't be enough for the low-bandwidth content and they will have to use the fast pool for that as well, rather than the other way around, where 10GB wouldn't be enough for everything that needs bandwidth and they'd have to put some of it on the slower side and take a performance hit.

Also keep in mind the Xbox One had roughly 1/3 of the PS4's bandwidth on its main RAM, and while games reduced everything that was bandwidth-bound on it (resolution, lower quality post-processing, less grass/blending), I don't think there's even a single game that used lower quality textures than PS4. That's how little bandwidth textures require compared to the big bandwidth spenders.
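To put rough numbers on why render targets and similar GPU work dominate bandwidth, here's an illustrative estimate (the target sizes and touch counts are my own assumptions, not figures from either console):

```python
# Illustrative only: estimate render-target traffic for a 4K deferred-style
# frame. All sizes and touch counts below are assumptions for the sketch.

width, height, fps = 3840, 2160, 60
pixels = width * height

render_targets = [      # (name, bytes per pixel) - made-up G-buffer layout
    ("albedo",        4),
    ("normals",       8),
    ("depth",         4),
    ("motion/misc",   4),
    ("HDR lighting",  8),
]
touches_per_frame = 3   # written once, read back a couple of times (a guess)

bytes_per_frame = sum(bpp for _, bpp in render_targets) * pixels * touches_per_frame
gb_per_sec = bytes_per_frame * fps / 1e9
print(f"~{gb_per_sec:.0f} GB/s of render-target traffic")   # ~42 GB/s here

# Real frames touch these targets far more often (overdraw, shadow maps,
# post-processing chains, RT), which is how the GPU ends up consuming
# hundreds of GB/s while CPU-side data barely registers by comparison.
```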
 

Lukas Taves

Banned
Oct 28, 2017
5,713
Brazil
Correct, but the 60 CU Radeon VII was vastly more powerful than the 56 CU Vega 56: it was clocked higher, had more/better VRAM, and was able to offer 30% more performance for 30% more TFLOPs.

That said, I know I said earlier that we just don't know how well it scales with CUs because we don't have benchmarks. Well, I was wrong. We do have a benchmark, and one straight from MS and Digital Foundry. DF were told that the Gears 5 benchmark running on the 12 TFLOPs RDNA 2 XSX GPU performed equivalent to the RTX 2080.

Now we know the 5700 XT Anniversary Edition is 10.14 TFLOPs, very close to the PS5's 10.28 TFLOPs. So we can compare the Anniversary Edition to the RTX 2080 to see what the actual performance difference between the two might be.


Looks like 13%.

TechPowerUp shows the RTX 2080 is 11% faster than the Anniversary Edition. So we can assume roughly a 10% power difference between the two consoles based on these benchmarks. Which means the extra 20% TFLOPs in the XSX GPU offered only around 11% more performance, meaning the extra CUs don't scale as well or there is a bottleneck somewhere else.

AMD Radeon RX 5700 XT 50th Anniversary Specs (www.techpowerup.com): AMD Navi 10, 1980 MHz, 2560 cores, 160 TMUs, 64 ROPs, 8192 MB GDDR6, 1750 MHz, 256-bit


The interesting thing here is that the 5700 XT does not hit its peak clocks; the 9.7 TFLOPs 5700 XT is roughly 9.3 TFLOPs at its average 1.8GHz game clock. I wouldn't be surprised if the 1.98GHz Anniversary Edition doesn't hit those clocks either during gameplay; seeing as it's offering only 4% more performance, it's probably maxing out at around 9.6 TFLOPs. But if the PS5 GPU can regularly hit 10.28 TFLOPs, then it might actually outperform the Anniversary Edition by a good 7%, which would close the gap between the PS5 and Xbox Series X GPUs even more. Of course, that doesn't line up with what Dusk Golem said, but I guess this is the best comparison we have right now based on the RTX 2080/XSX Gears 5 benchmarks and the Anniversary Edition benchmarks. Assuming, of course, the PS5 can run at peak clocks at all times.

I am going to look into some Gears 5 benchmarks next to see if the differences there are more pronounced, because I know that the averages can sometimes be worse or better for some games.
DF wrote it was roughly 2080-level performance, but the actual performance they were told about was higher. They were shown SX going through the benchmark tool exceeding 100fps. A 2080 does not reach that; it's 2080 Ti territory.

And the 5700 XT is the 40-CU configuration; it has more performance than the 36-CU 5700. In fact, even when overclocked past 2GHz the 5700 usually stays behind, or at best on par with, the 5700 XT.

Also, the chart you posted is for 1080p; do you have it for 4K? I assume the higher resolution will hammer the GPU more, both CUs and bandwidth, and should give a more realistic performance delta.
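For reference, the paper TFLOPs figures being thrown around in these posts can be reproduced from the published CU counts and clocks (a quick sketch; whether those clocks are sustained in practice is a separate question, as the quoted post itself notes):

```python
# Reproduce the paper-spec FP32 TFLOPs figures used in the discussion above.
# Paper FLOPs only: real performance also depends on sustained clocks,
# bandwidth, and the rest of the pipeline.

def tflops(cus, clock_ghz):
    # 64 shader lanes per CU, 2 FP32 ops per lane per clock (fused multiply-add)
    return cus * 64 * 2 * clock_ghz / 1000

xsx      = tflops(52, 1.825)   # ~12.15 TF
ps5      = tflops(36, 2.23)    # ~10.28 TF (at its maximum clock)
xt_anniv = tflops(40, 1.98)    # ~10.14 TF (5700 XT 50th Anniversary boost spec)

print(f"XSX {xsx:.2f} TF, PS5 {ps5:.2f} TF, 5700 XT AE {xt_anniv:.2f} TF")
print(f"XSX paper advantage over the AE card: {xsx / xt_anniv - 1:.0%}")
# ~20% more paper TFLOPs versus the ~11% real-world gap quoted from TechPowerUp
# is the basis for the "compute doesn't scale 1:1" argument above.
```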
 

supercommodore

Prophet of Truth
Member
Apr 13, 2020
4,193
UK
The dev kit is 40GB because that's a more natural fit for a 320bit bus than 32GB. 40GB means symmetrical memory modules and a uniform 560GB/s for the whole pool, whilst 32GB would require replicating the asymmetrical chips and split bandwidth setup of the retail console. Given the price isn't a concern for a dev kit they may as well max out the whole bus. Devs can always find ways to use it.

Mind you, if price wasn't an issue for the retail system and they did want to go with 20GB, it would likewise have a uniform 560GB/s bandwidth, not a 16+4 split like you suggest. The reason the retail system has the 10+6 split is because they metaphorically started with a 20GB RAM setup, split it in half, then subtracted 4GB from one half (which also limited the amount of the bus it could use), leaving 10GB at full speed and 6GB at 60% speed.

I thought the whole purpose of a dev kit was to have hardware as close to the retail unit as possible? Why the extra RAM?
 

DrKeo

Banned
Mar 3, 2019
2,600
Israel
It really depends on the map how the game performs; let's just wait till launch to see how the Series X or PS5 compare in cross-platform games with RT before saying it is like an RTX 2080. I honestly do not think it will perform as well as that GPU.
Oh man, I thought I remembered you and John talking about similar performance to the RTX 2080 in one of the Minecraft RTX videos. I might have misremembered.
 

DrDeckard

Banned
Oct 25, 2017
8,109
UK

Tbf, as far as I'm aware... Matt is a dev... He has access to dev kits. That doesn't mean he has access to all of Sony's internal discussions and decisions. There would be a select few people privy to those kinds of decisions, and they sure as hell wouldn't be sharing them with a random developer imo.

Unless Matt is high up in Sony, and why would he share things on here if he was?

That's the only reason I asked.
 

gremlinz1982

Member
Aug 11, 2018
5,331
I don't think it really matters who has the better audio hardware, because both are pushing audio to new heights, and that was my point. We had so many posts suggesting someone (Sony) is finally pushing audio, while Xbox always does something with audio and Xbox One was better in that regard than PS4. Or posts saying Xbox is brute force while Sony is smarter with their SSD and audio design. Or posts saying Xbox has no dedicated audio hardware or didn't invest a lot into audio because they haven't talked about it much...

All of these posts were simply wrong, and they put too much weight on marketing and what gets talked about, when companies like Sony and Microsoft have their own ideas about communicating. Neither approach means PS5 or Xbox is lacking in an area. Ultimately, it doesn't matter who has the slightly better solution, because, as I said, both having a very good audio solution and hardware means it gets more attractive for third-party devs to use them, and in the end we consumers benefit.
Yep and that's what matters. Some guys should stop fighting and be happy that both companies push audio to the next level. You love to hear it.
Not that nonsense again. It's clear PS5 uses RDNA4 already. /s
  • Don't take marketing so seriously.
  • Redgamingtech isn't reliable imo
They use save states, and they are much smaller since textures, for example, don't need to be stored. Don't worry about it :)
No, devs even asked for this, as you can read in the DF Xbox Series X article from a few months ago. Not every task needs very high bandwidth, and while 16GB at 560GB/s would be nice, the compromise to save cost while pushing bandwidth very high for the 10GB is worth it.
No, you always need some RAM for the CPU, audio, etc., thus games won't suddenly use 13.5GB of RAM for the GPU.
Tempest Engine:

"Where we ended up is a unit with roughly the same SIMD power and bandwidth as all eight Jaguar cores in the PS4 combined."

Xbox Series X audio:

"Greater SPFP HW math than all 8 CPUs in Xbox One X."

It was said once that there is a misconception going around just because Microsoft did not name their audio chip. Looks like they have built one heck of a machine.
 

Lukas Taves

Banned
Oct 28, 2017
5,713
Brazil
IIRC MC on the RTX 2080 Ti was more stable at 60 and the demo was more complex than the XSX demo. Anyway, nice to see RT on consoles. But still, full path-traced RT is power hungry.
A Ti runs better, but the 2080 seems to be in the same ballpark as SX. From what I've seen, though, performance varies a lot depending on the map on PC, so unless we have the final game to make 1:1 comparisons it won't be fair.

But there are some interesting points:
- The SX demo was not optimized; the PC version had more than a year of work in comparison, and had further optimizations between the demo and the open beta.
- The June GDK documentation says SX RT performance is still not final. We don't know if GDK is the only way to develop for SX or if the XDK is still a thing, but if in June it was still not final, it likely wasn't final when the demo was made.
 

DjRalford

Member
Dec 14, 2017
1,529
DF wrote it was roughly 2080-level performance, but the actual performance they were told about was higher. They were shown SX going through the benchmark tool exceeding 100fps. A 2080 does not reach that; it's 2080 Ti territory.

Man, if you're expecting the XSX to be on the 2080ti level you're going to be extremely disappointed.
 

Gamer17

Banned
Oct 30, 2017
9,399
Tbf, as far as I'm aware... Matt is a dev... He has access to dev kits. That doesn't mean he has access to all of Sony's internal discussions and decisions. There would be a select few people privy to those kinds of decisions, and they sure as hell wouldn't be sharing them with a random developer imo.

Unless Matt is high up in Sony, and why would he share things on here if he was?

That's the only reason I asked.
Because he knows how hardware is designed, and a last-minute change to the power budget drastically messes up the whole chip's power allocation design. 40-50MHz, sure, but not something this drastic. Then you need to change everything again.

Sony has made the PS5 with the idea that it needs to be extremely fast, and that includes the GPU having the highest fillrate possible to use the fast data being provided to it. It has been their basic design idea from the get-go.
 

Dictator

Digital Foundry
Verified
Oct 26, 2017
4,931
Berlin, 'SCHLAND
DF wrote it was roughly 2080-level performance, but the actual performance they were told about was higher. They were shown SX going through the benchmark tool exceeding 100fps. A 2080 does not reach that; it's 2080 Ti territory.

And the 5700 XT is the 40-CU configuration; it has more performance than the 36-CU 5700. In fact, even when overclocked past 2GHz the 5700 usually stays behind, or at best on par with, the 5700 XT.

Also, the chart you posted is for 1080p; do you have it for 4K? I assume the higher resolution will hammer the GPU more, both CUs and bandwidth, and should give a more realistic performance delta.
That is not what we put out; rather, I have said that it was close to a 2080 or "broadly similar to" it in the Gears 5 benchmark on Ultra with no dynamic res on. In actuality it performed a bit below the 2080 in the Gears 5 benchmark, as Richard has said in one of his videos. So nothing about 100fps or whatever you say there. Regarding RT performance, we have said nothing definitive yet since there are no definitive comparisons out there without Vsync on, so we will have to wait for the games to release. But the rumbling in my gut does not make me think it will exactly be RTX 2080 tier.
 

Nayeon

Member
Oct 29, 2017
329
No. The CPU needs to access RAM too. Typical CPU bandwidth requirements are well under 100GB/s. A Core i9-9900K only has a RAM bandwidth of 39.74 GiB/s!

And the CPUs have reasonably sized caches to prevent needing to access ram every cycle. The CPU will use the slower pool, leaving the faster pool for the GPU. GPUs can't use caches as effectively as CPUs can due to the amount of data they have to access compared to a CPU. It's a brilliant design for a console.

Oh nice, thank you. That does sound very promising!

No, it will increase performance, because the 10GB is much faster than a unified setup would be.

The slower pool is still 336GB/s, and the difference between that and the PS5's setup is about the same as between the PS5 and the faster pool. That's enough bandwidth for even textures not to be much trouble, especially for lower mips targeted at distant objects.

The really important things are the big bandwidth consumers: framebuffer, RT, etc. It's important that the GPU tasks that demand more bandwidth can use the entirety of that bandwidth, and 10GB is more than enough for that.

I think it's far more likely that 3.5GB won't be enough for the low-bandwidth content and they will have to use the fast pool for that as well, rather than the other way around, where 10GB wouldn't be enough for everything that needs bandwidth and they'd have to put some of it on the slower side and take a performance hit.

Also keep in mind the Xbox One had roughly 1/3 of the PS4's bandwidth on its main RAM, and while games reduced everything that was bandwidth-bound on it (resolution, lower quality post-processing, less grass/blending), I don't think there's even a single game that used lower quality textures than PS4. That's how little bandwidth textures require compared to the big bandwidth spenders.

Thank you for explaining, too. But what if games in 4 or 5 years' time start getting so big that 10GB of fast RAM will not be enough? Is that possible, or will that not happen for the foreseeable future?
 

ILikeFeet

DF Deet Master
Banned
Oct 25, 2017
61,987
People still seem to be attributing the "over 100fps" part to single player when that relates to multiplayer
 

Wereroku

Member
Oct 27, 2017
6,243
Oh nice, thank you. That does sound very promising!

Thank you for explaining, too. But what if games in 4 or 5 years' time start getting so big that 10GB of fast RAM will not be enough? Is that possible, or will that not happen for the foreseeable future?
If it goes over 10GB then all of the RAM functions at the slower speed. So the answer is devs won't go over that amount. SFS is meant to keep that kind of situation from happening. Devs will be able to use higher quality textures with smaller amounts of RAM.
 
Jul 13, 2020
211
Do people not realize that MS can price it whatever they want? That they can easily forego making a profit or breaking even on hardware in order to get more machines into more hands to facilitate more GamePass subscriptions (the real profit engine they're building)?

It kinda doesn't matter how much it cost to build in the end. That won't dictate price. Phil Spencer explicitly said this last year. That they won't be beat on price or power or something to that effect. It seems, weirdly, that some of you almost *want* it to be expensive for some reason.
 

arsene_P5

Prophet of Regret
Member
Apr 17, 2020
15,438
The real important thing are the big bandwidth consumers. Framebuffer, RT, etc. It's important that for gpu tasks that demand more bandwidth that they can use the entirety of that bandwidth, and 10GB is more than enough for that.
No lies detected and that's why averaging the RAM in comparisons between the consoles never made any sense.
 

Scently

Member
Oct 27, 2017
1,464
People still seem to be attributing the "over 100fps" part to single player when that relates to multiplayer
Indeed. The multiplayer runs at 60fps even on the base X1 while the single-player runs at 30fps on it, so the XSX should have enough performance to run multiplayer at 120fps.

As for the relative GPU performance of the XSX, I expect it to perform better than the 2080S but not quite at 2080 Ti level. The Gears demo, with 2 weeks of unoptimized code, was already very, very close to the performance of a 2080. This was back in March with an unfinished GDK/XDK. At the start of a generation, the GDK, drivers, and whatnot are never at their most optimal state going into launch, so to be getting the level of performance that they are getting atm is impressive and bodes well for the future.
 

arsene_P5

Prophet of Regret
Member
Apr 17, 2020
15,438
Thank you for explaining, too. But what if games in 4 or 5 years' time start getting so big that 10GB of fast RAM will not be enough? Is that possible, or will that not happen for the foreseeable future?
Unlikely, as the dedicated audio chip and CPU will still need RAM in a few years. Both consoles can't use all their RAM for the GPU in a realistic scenario.
 
Jan 20, 2019
10,681
Do people not realize that MS can price it whatever they want? That they can easily forego making a profit or breaking even on hardware in order to get more machines into more hands to facilitate more GamePass subscriptions (the real profit engine they're building)?

It kinda doesn't matter how much it cost to build in the end. That won't dictate price. Phil Spencer explicitly said this last year. That they won't be beat on price or power or something to that effect. It seems, weirdly, that some of you almost *want* it to be expensive for some reason.

You are absolutely wrong.
 

Sia

Attempted to circumvent ban with alt account
Banned
Jun 9, 2020
825
Canada
Do people not realize that MS can price it whatever they want? That they can easily forego making a profit or breaking even on hardware in order to get more machines into more hands to facilitate more GamePass subscriptions (the real profit engine they're building)?

It kinda doesn't matter how much it cost to build in the end. That won't dictate price. Phil Spencer explicitly said this last year. That they won't be beat on price or power or something to that effect. It seems, weirdly, that some of you almost *want* it to be expensive for some reason.

If that were the case they would price it at $399 and would have opened preorders already.
 

Wereroku

Member
Oct 27, 2017
6,243
Do people not realize that MS can price it whatever they want? That they can easily forego making a profit or breaking even on hardware in order to get more machines into more hands to facilitate more GamePass subscriptions (the real profit engine they're building)?

It kinda doesn't matter how much it cost to build in the end. That won't dictate price. Phil Spencer explicitly said this last year. That they won't be beat on price or power or something to that effect. It seems, weirdly, that some of you almost *want* it to be expensive for some reason.
The price or power statement wasn't about a single console. Their S and X strategy allows them to offer both the cheapest and the most powerful console. No company wants to eat a loss like the PS3. It sold 85 million units and still probably never broke even.
 

Fafalada

Member
Oct 27, 2017
3,066
but we do have benchmarks that show higher clocks don't scale performance 1:1
Ehh, the whole logic that "the baseline is some arbitrarily chosen lower clock and we overclocked the hw" would need to be demonstrated first.
Clock-to-performance scaling has always followed the same rules: you don't get 1:1 scaling just based on how much more compute your chip can do if the rest of the components aren't keeping up. This isn't a revelation to anyone.
But for this to be relevant we must assume Sony was designing for that mythical 8TFlop machine. Which isn't impossible, but no one's brought receipts for it yet. And no, tests of lower-clocked parts aren't it: at least half of the hardware in the last 20 years has documented instances of running at (often significantly) lower clocks prior to launch (including showing tech demos, and actual games, running on those lower clocks), and people never computed their performance scaling on the assumption their final designs were "overclocked". If anything, certain actual "last-minute" overclocks were subject to being actively inflated post-launch by tech enthusiast sites (well past what 1:1 scaling would actually give).
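A toy model of that scaling point, entirely my own and not a claim about either console: if part of a frame's work is bandwidth-bound, raising only the core clock buys less than a 1:1 speedup.

```python
# Toy illustration of clock-vs-performance scaling with made-up weights.

def frame_time(clock_ghz, compute_work=1.0, bw_work=0.35):
    # The compute-bound portion scales with clock; the bandwidth-bound
    # portion does not (memory bandwidth is held constant here).
    return compute_work / clock_ghz + bw_work

base  = frame_time(2.00)
boost = frame_time(2.23)   # ~11.5% higher clock, same memory bandwidth

print(f"clock gain: {2.23 / 2.00 - 1:.1%}, frame-time gain: {base / boost - 1:.1%}")
# With these invented weights the ~11.5% clock bump yields only ~6-7% faster
# frames: scaling follows the mix of compute- and bandwidth-bound work,
# not the clock alone.
```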


I am interested to understand how the RT hardware works as the presentation seems to suggest it shares resources with the TMUs which isn't exactly encouraging
Why not? Intersection compute happening in TMUs actually makes it scale easier (and makes that useful for a lot more than just light-rays) than having to send results back and forth between some bespoke circuitry. Seems to offer a fair bit of opportunity for coherency optimizations and multiple-bounces - something that RTX did not do well at all so far.
Traversal being in software also means it'll be subject to implementation efficiency - so we may well see it evolve over time considerably.
 
Jul 13, 2020
211
If that were the case they would price it at $399 and would have opened preorders already.

People on this forum always jump to extremes to make their "points"

"If MS really wants their games available to everyone, why not put them Playstation?"

"If they can be flexible on price, why not charge $2?"

It becomes so tiresome sometimes.
 
Oct 30, 2017
8,706
People on this forum always jump to extremes to make their "points"

"If MS really wants their games available to everyone, why not put them Playstation?"

"If they can be flexible on price, why not charge $2?"

It becomes so tiresome sometimes.
This still doesn't mean they can just price their console however they want.

Of course they are doing cost benefit analysis for pricing their consoles. And yes the actual production cost of the system and how they expect the cost of production to change over time will factor into this decision.

Microsoft's strategy allows them to have the price and power advantage. But not necessarily with the same game console.
 

Timlot

Banned
Nov 27, 2019
359
So the Tempest chip is designed like Cell's SPUs. How many audio channels does the XSX audio chip have, then, if the Tempest chip can handle thousands of them?
It sounds to me like 3D audio in PS5 is more sophisticated.
The Tempest Engine can support "hundreds of sound sources", not thousands.

You're confusing that with the sound chamber array Cerny said they used to sample (record) thousands of HRTF locations. Those samples get mixed into one of the hundreds of 3D sources at its approximate location.

Sony never gave a specific number; they've just said hundreds of sources. Microsoft said at Hot Chips the XSX can support >300 sound sources.
 

Prine

Attempted to circumvent ban with alt account
Banned
Oct 25, 2017
15,724
What a beast of a machine, and even better to know XSX innovations will feed into and inform PC tools.

MS being quietly confident about audio is just icing on top; it seems they're just waiting for results to be made public. Can't wait to see what The Coalition pull off. Also looking forward to the DF multiplat analysis in November.
 

disco_potato

Member
Nov 16, 2017
3,145
Yep! XSX and PS5 it is. Can't wait for the holidays!
Consoles are almost never the right choice if you're already 50/50 about getting either/or.
If DLSS 2.0 or something similar is your main decider, then there is no reason to even consider a console.

It really depends on the map how the game performs; let's just wait till launch to see how the Series X or PS5 compare in cross-platform games with RT before saying it is like an RTX 2080. I honestly do not think it will perform as well as that GPU.
You can be honest, Alex. You know RT performance won't be anywhere near a 2080 or even a 2070.

I am no tech expert, but that does seem like a bad idea to me. Why have 10GB at full speed and 6 at a lower speed? Will that not hinder game performance in comparison to 16GB at a unified speed?
Cost dictates what gets put into these things. They probably wanted ten 2GB chips but ended up with a mix of 1GB and 2GB chips.
 

Lukas Taves

Banned
Oct 28, 2017
5,713
Brazil
That is not what we put out; rather, I have said that it was close to a 2080 or "broadly similar to" it in the Gears 5 benchmark on Ultra with no dynamic res on. In actuality it performed a bit below the 2080 in the Gears 5 benchmark, as Richard has said in one of his videos. So nothing about 100fps or whatever you say there. Regarding RT performance, we have said nothing definitive yet since there are no definitive comparisons out there without Vsync on, so we will have to wait for the games to release. But the rumbling in my gut does not make me think it will exactly be RTX 2080 tier.
Oh thanks for the correction. I could swear I saw footage of the benchmark showing framerates around the 100fps mark, I guess I just confused myself.

For the RT part I was going with the quoted performance of native 1080p @ 30-60fps for SX; considering it will only get better with the final game and final hardware, that seems to be in the same ballpark at least.

Having a console with performance similar to a 2080, especially in RT, does sound like a pipe dream though. But that might have been explained by Nvidia in the Minecraft tech deep dive. They mentioned that denoising takes more time than ray tracing, and that in order to significantly reduce the denoising it would take more than an order of magnitude more rays, so I can see a situation where the hardware that performs intersections won't be the bottleneck, but rather shader performance and bandwidth.
 

arsene_P5

Prophet of Regret
Member
Apr 17, 2020
15,438
If that were the case they would price it at $399 and would have opened preorders already.
Yep, and this goes for both companies. They won't just do one thing when there is a lot to consider from a business standpoint. Nobody has an unlimited war chest, and even if that were the case, Sony and MS wouldn't have gotten that war chest by making their decisions around winning a war and just being lower on price no matter what. Both have to make sensible decisions, and they know price matters, thus they haven't announced them yet.
 

ILikeFeet

DF Deet Master
Banned
Oct 25, 2017
61,987
Why not? Intersection compute happening in TMUs actually makes it scale easier (and makes that useful for a lot more than just light-rays) than having to send results back and forth between some bespoke circuitry. Seems to offer a fair bit of opportunity for coherency optimizations and multiple-bounces - something that RTX did not do well at all so far.
Traversal being in software also means it'll be subject to implementation efficiency - so we may well see it evolve over time considerably.
I don't see how Nvidia's RT cores are "bespoke" when they're bundled in the shader multicore
 

Lukas Taves

Banned
Oct 28, 2017
5,713
Brazil
Oh nice, thank you. That does sound very promising!



Thank you for explaining, too. But what if games in 4 or 5 years' time start getting so big that 10GB of fast RAM will not be enough? Is that possible, or will that not happen for the foreseeable future?
The point I was trying to make is that they will definitely use more than 10GB, but that won't affect performance. What matters is that the fast pool is large enough to hold the data that does need the extra bandwidth, and it doesn't seem that will be an issue.
Ehh, the whole logic that "the baseline is some arbitrarily chosen lower clock and we overclocked the hw" would need to be demonstrated first.
Clock-to-performance scaling has always followed the same rules: you don't get 1:1 scaling just based on how much more compute your chip can do if the rest of the components aren't keeping up. This isn't a revelation to anyone.
But for this to be relevant we must assume Sony was designing for that mythical 8TFlop machine. Which isn't impossible, but no one's brought receipts for it yet. And no, tests of lower-clocked parts aren't it: at least half of the hardware in the last 20 years has documented instances of running at (often significantly) lower clocks prior to launch (including showing tech demos, and actual games, running on those lower clocks), and people never computed their performance scaling on the assumption their final designs were "overclocked". If anything, certain actual "last-minute" overclocks were subject to being actively inflated post-launch by tech enthusiast sites (well past what 1:1 scaling would actually give).



Why not? Intersection compute happening in TMUs actually makes it scale easier (and makes that useful for a lot more than just light-rays) than having to send results back and forth between some bespoke circuitry. Seems to offer a fair bit of opportunity for coherency optimizations and multiple-bounces - something that RTX did not do well at all so far.
Traversal being in software also means it'll be subject to implementation efficiency - so we may well see it evolve over time considerably.
There's evidence they were targeting lower clocks initially: all the leaks and testing that have proven super reliable.

But the better question would be: what did they change from AMD's baseline design that shows they can actually support the higher clocks? Imo that's where there's not even a shred of evidence.

They still have the same bandwidth setup as the quoted 5700 (and mind you, they actually tested the chip with higher bandwidth in the GH data), and there's no evidence of new hardware features to offset that, or of a change to the cache structure.

So realistically, how could Sony bend the rules and achieve more performance gains than everyone else in the market?