
What do you think could be the memory setup of your preferred console, or one of the new consoles?

  • GDDR6

    Votes: 566 41.0%
  • GDDR6 + DDR4

    Votes: 540 39.2%
  • HBM2

    Votes: 53 3.8%
  • HBM2 + DDR4

    Votes: 220 16.0%

  • Total voters
    1,379

Your Fave

Alt account
Banned
Oct 27, 2017
31
Looking back after watching all the conferences, I think it was a brilliant marketing move for Sony to announce the PS5 a few weeks ago, and other hardware makers will surely take notice for the future.

The news would have been buried underneath all the other gaming news around E3. I feel only the techies are even talking about Scarlett right now while the PS5 dominated mainstream press cycles for a good while. To be honest, it didn't help that their reveal was basically a complete re-tread of what Sony mentioned a few weeks ago.

Not really, Google "Keanu Reeves Next Xbox".
 

Lady Gaia

Member
Oct 27, 2017
2,476
Seattle
What is better? A GPU with more CUs at a lower clock speed, or a GPU with fewer CUs at a higher clock speed? I know the latter can give you higher flops, but in practice which will have better results in games, streaming textures and shaders, etc.?

I've posted a few times on the subject. The quick summary is that more CUs are "better" in the sense that they'll yield better performance within the same thermal constraints, presuming tasks can be efficiently scheduled. Higher clock speeds are "better" in that smaller designs are easier and cheaper to manufacture, and they scale predictably (but one of those predictable factors is that pushing clocks higher drives power, and therefore heat, up much faster than linearly, which ultimately limits how high you can go.)
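To make that trade-off concrete, here's a rough back-of-envelope sketch. It assumes the usual 64 FP32 lanes per CU and 2 FLOPs per FMA for the peak number; the power model, with voltage rising once you push past an efficiency sweet spot, is purely illustrative, not measured silicon behaviour:

```python
# Toy comparison of "wide and slow" vs "narrow and fast" GPU configs.
# Peak FP32 throughput uses the standard 64 lanes/CU x 2 FLOPs per FMA.
# The power model (dynamic power ~ f * V^2, with voltage assumed to rise
# roughly linearly past a sweet-spot clock) is an illustrative assumption.

def peak_tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000  # GFLOPS -> TFLOPS

def relative_power(cus: int, clock_ghz: float, base_ghz: float = 1.5) -> float:
    # Scales with CU count and with f * V^2.
    voltage = 1.0 + 0.5 * max(0.0, clock_ghz - base_ghz)
    return cus * clock_ghz * voltage ** 2

wide   = (56, 1.5)  # more CUs, lower clock
narrow = (40, 2.1)  # fewer CUs, higher clock

for cus, clk in (wide, narrow):
    print(f"{cus} CUs @ {clk} GHz -> {peak_tflops(cus, clk):.1f} TF, "
          f"relative power {relative_power(cus, clk):.0f}")
```

In this toy model the two configurations land on the same peak TFLOPS, but the narrow-and-fast one burns noticeably more power, which is exactly the thermal argument above.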
 

anexanhume

Member
Oct 25, 2017
12,912
Maryland
🤔 from May 25th


I'm joking of course, but it would be funny if both of you are right on the money and this leak is real. I also imagine that even if this pastebin is real, final clocks may still change.
My prediction is based on precedent. The Xbox One X had clocks that roughly matched the RX 570, which was around 100 MHz lower than the RX 580. The RX 580 is the 180W card to compare to the 5700 (non-XT), so I subtracted a little more than 100 MHz from the game clock to get 1500 MHz. That's only a 300 MHz bump from the X1X.

The RX 570 is a full 30W below the RX 580 at those clocks, despite having only 4 fewer CUs. 5W of that is probably due to slower RAM, but you're probably saving 15W, normalized for CU count, just by lowering the clock ~100 MHz.
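Spelling that back-of-envelope split out in one place (the 10W attributed to the four missing CUs is my own inference from the numbers above, not a measured figure):

```python
# Rough split of the RX 570 vs RX 580 board-power gap, using the figures
# from the post above. How the remainder is divided between CU count and
# clock is an estimate, not measured data.

gap_w       = 30   # RX 580 minus RX 570 board power (W)
ram_w       = 5    # attributed to the 570's slower memory
cu_w        = 10   # attributed to 4 fewer CUs (32 vs 36), assumed
clock_w     = gap_w - ram_w - cu_w

print(f"~{clock_w} W saved from ~100 MHz lower clock alone")  # ~15 W
```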
 
Oct 27, 2017
699
User Banned (2 Weeks): System wars, trolling, history of similar infractions
I think we're at "Xbox is more powerful" today, or was that yesterday? Maybe it was PS5 then and Xbox is today?
You need to keep up, it changes by the hour...

I think the main thing when astroturfing is to ensure your chosen product is always in the conversation, no matter what the topic. At least that way there is a chance that it can be relevant once again rather than completely forgotten by potential customers.

Maybe some of the astroturfers can correct this assumption if I've got things wrong? I'm happy to take all feedback on board ;)
 

Expy

Member
Oct 26, 2017
9,860
For 4K rendering, wouldn't we start to see diminishing returns on what these GPUs can achieve after a certain point? Back when the Pro was announced, a lot of people thought 8TF was a good place to be for that objective.

I think the CPUs will be far more relevant this next generation.
Definitely referring to Polyphony there. Whether or not we're seeing PS5 Devkit footage in those videos is another story.
 

Wollan

Mostly Positive
Member
Oct 25, 2017
8,807
Norway but living in France
Quoting this from the Radeon RX 5000 family GPU thread:

[Two attached slide images from the Radeon RX 5000 presentation]

Pretty much demystifies how next-gen consoles will implement hardware ray-tracing. This is not some in-house secret sauce from the Sony or Microsoft camps specifically, but rather incoming features on the Radeon roadmap (2020-2021 on PC). We will see whether next-gen consoles are based on RDNA2, or on RDNA with the hardware ray-tracing aspect brought over early (I would bet on the latter), just like the PS4 Pro's Polaris GPU had rapid packed math implemented early despite it being a Vega roadmap feature.
 

M3rcy

Member
Oct 27, 2017
702
I don't know about that. I have no idea what happened here overnight, but when I went to sleep we were discussing 11-12 TFLOPS and 56-60 CUs, because the Scarlett APU is huge, way bigger than the base PS4, Pro and the X1X. Some estimates put it at 400 mm².

The Navi GPU revealed yesterday is only 250 mm² with 40 CUs. Even if you add 70 mm² for the Zen 2 CPU, you still have a lot of space to add more CUs.

I am back on the 12.9 TFLOPS train after that.

Scarlett - 64 CUs at 1.5 GHz - 11.5 TFLOPS
PS5 - 56 CUs at 1.8 GHz - 12.9 TFLOPS (HBM2 + DDR4 lets them push clocks higher)

I have no idea why we all thought they would settle for a 350 mm² die when they are no longer going for $399 standard-cooling consoles. These consoles are going to be $499, which means they can create a bigger die than the PS4's. With vapor chamber cooling they can push clocks to high levels as well.

You're failing to take into account the additional die area taken up by the RT hardware. In Turing, this is not insignificant.
 

Fafalada

Member
Oct 27, 2017
3,065
Say bye bye to ray-tracing support on next-gen consoles and Navi 2 GPUs.
a) consoles will have a bigger userbase than the entirety of RT-capable cards on PC within a week of launching. You can expect that's when RT adoption really starts, and CDPR has always been very port-happy with their updates.
b) in all probability, the Nextbox SDK will include a DXR implementation (it doesn't matter whether it's software or hardware based), so at least support for the MS console will be relatively simple to adopt (barring performance problems, of course). The PS5 will be the odd man out in terms of API support, but then that's been the case for the past 25 years on all non-MS consoles, so nothing new there either.
 

Deleted member 49804

User requested account closure
Banned
Nov 21, 2018
1,868
Good, you are confirming that an HBM2 + DDR4 solution has a better pJ/bit metric than an equivalent GDDR6 solution (say, 8GB HBM2 + 16GB DDR4 vs. 24GB GDDR6; note that total bandwidth is similar, but not identical, and that is intentional on the part of the electrical engineer). Given that the memory access patterns of an 8-core/16-thread Zen 2 are very different from those of a Navi GPU, I'm sure you can clearly see that the efficiency gains from separating the memory traffic into two memory pools are high. Fortunately, the two memory pools appear as one at the software API level, meaning there is no complicating factor for the game programmer.
The problem is that HBM2 + DDR4 does not have the same bandwidth.

24GB of GDDR6 at 14 Gbps on a 384-bit bus has a total bandwidth of 672 GB/s.
HBM2 in a 2-stack (2×4GB) configuration at 1000 MHz on a 2048-bit bus has a total bandwidth of 512 GB/s.
So you still need 160 GB/s of bandwidth from your DDR4 solution to match GDDR6,
and that would be a 256-bit interface with DDR4-5000 RAM (8×2GB).

HBM makes sense. HBM in combination with DDR4 does not make sense, unless you're okay with less total bandwidth.
And only in that case would you see any significant power savings over GDDR6 alone.
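For reference, those figures fall straight out of the usual bus-width × per-pin data rate arithmetic (HBM2 at a 1000 MHz clock is taken as 2 Gbps per pin, since it's double data rate):

```python
# Recomputing the bandwidth figures from the post above:
# bandwidth (GB/s) = bus width (bits) / 8 * data rate (Gbps per pin).

def bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits / 8 * gbps_per_pin

gddr6 = bandwidth_gbs(384, 14.0)   # 24GB GDDR6, 14 Gbps, 384-bit bus
hbm2  = bandwidth_gbs(2048, 2.0)   # 2 stacks @ 1000 MHz -> 2 Gbps/pin
ddr4  = bandwidth_gbs(256, 5.0)    # DDR4-5000 on a 256-bit interface

print(gddr6)        # 672.0 GB/s
print(hbm2)         # 512.0 GB/s
print(hbm2 + ddr4)  # 672.0 GB/s -> matches the GDDR6-only pool
```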
 

VX1

Member
Oct 28, 2017
7,000
Europe
My prediction is based on precedent. The Xbox One X had clocks that roughly matched the RX 570, which was around 100 MHz lower than the RX 580. The RX 580 is the 180W card to compare to the 5700 (non-XT), so I subtracted a little more than 100 MHz from the game clock to get 1500 MHz. That's only a 300 MHz bump from the X1X.

The RX 570 is a full 30W below the RX 580 at those clocks, despite having only 4 fewer CUs. 5W of that is probably due to slower RAM, but you're probably saving 15W, normalized for CU count, just by lowering the clock ~100 MHz.

Yeah, I guess we can expect something like that from Anaconda.
Any predictions for the PS5, Anex? It's a bit tricky to extrapolate from the Pro, unlike the 1X, I suppose?
 
Oct 27, 2017
4,639
I just caught up with this thread in time to see SBK was freed and re-incarcerated within a few hours...

Don't do this to ya self man.
 

Godzilla24

Member
Nov 12, 2017
3,371
I don't think it's anywhere near official that Lockhart is out of the game yet. MS just stated that they have their overall concept of Scarlett in the works, but didn't mention anything about how many consoles will be out. It's still up in the air and probably being decided in the upcoming year.
 

Luckydog

Attempting to circumvent a ban with an alt account
Banned
Oct 25, 2017
636
USA
I think the main thing when astroturfing is to ensure your chosen product is always in the conversation, no matter what the topic. At least that way there is a chance that it can be relevant once again rather than completely forgotten by potential customers.

Maybe some of the astroturfers can correct this assumption if I've got things wrong? I'm happy to take all feedback on board ;)

well that was quick
 

Lady Gaia

Member
Oct 27, 2017
2,476
Seattle
In short:
RDNA Teraflops ≠ Vega Teraflops
5700XT = 11 Vega Teraflops

It's more a matter of "peak teraflops is a terrible measure of gaming performance." There is no simple multiplier at work. The 1.25x factor people have adopted here is likely not a reliable indicator (it was a description of how many instructions can be issued per clock cycle, which will already be factored into the published peak TF number.) So no, the 5700 XT is not theoretically capable of 11 trillion floating point operations per second. What it should be capable of is spending more time closer to its actual theoretical 9.75 TF peak than an equivalent Vega GPU could with the same workload.

... and we're seeing that borne out in benchmarks. The next generation's consoles will be a significant leap forward in terms of what developers can actually get on the screen, which is what matters in the first place. The proof is in the pudding, as they say, and it's likely to be at least the end of this year before we start to see any actual games. Next E3 will be the blowout, once all the remaining big current-generation titles have already shipped.
 

anexanhume

Member
Oct 25, 2017
12,912
Maryland
Yeah, I guess we can expect something like that from Anaconda.
Any predictions for the PS5, Anex? It's a bit tricky to extrapolate from the Pro, unlike the 1X, I suppose?

No PS5 prediction, because we have no die to make guesses from, but I would guess fewer CUs based on the Gonzalo clock.

It's more a matter of "peak teraflops is a terrible measure of gaming performance." There is no simple multiplier at work. The 1.25x factor people have adopted here is likely not a reliable indicator (it was a description of how many instructions can be issued per clock cycle, which will already be factored into the published peak TF number.) So no, the 5700 XT is not theoretically capable of 11 trillion floating point operations per second. What it should be capable of is spending more time closer to its actual theoretical 9.75 TF peak than an equivalent Vega GPU could with the same workload.

... and we're seeing that borne out in benchmarks. The next generation's consoles will be a significant leap forward in terms of what developers can actually get on the screen, which is what matters in the first place. The proof is in the pudding, as they say, and it's likely to be at least the end of this year before we start to see any actual games. Next E3 will be the blowout, once all the remaining big current-generation titles have already shipped.

The 1.25x is not included in TF numbers because it doesn't affect them. The IPC number is a utilization or efficiency metric; TF is a theoretical capability measurement. The amount of theoretical compute per CU has not changed, but the way those ALUs are organized is vastly different from before.
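For anyone wanting to check the peak figures being thrown around: the quoted number is just CUs × 64 lanes × 2 FLOPs per FMA × clock, and the 1.25x IPC claim never enters it. The Vega 64 line below is included only for contrast, using its advertised boost clock:

```python
# Peak FP32 TFLOPS as usually quoted for GCN/RDNA parts:
# CUs x 64 FP32 lanes x 2 FLOPs per FMA x clock (MHz).

def peak_tflops(cus: int, clock_mhz: float) -> float:
    return cus * 64 * 2 * clock_mhz / 1e6

print(round(peak_tflops(40, 1905), 2))  # 5700 XT boost clock -> ~9.75 TF
print(round(peak_tflops(64, 1546), 2))  # Vega 64 boost clock -> ~12.66 TF
```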
 

vivftp

Member
Oct 29, 2017
19,744
I think the main thing when astroturfing is to ensure your chosen product is always in the conversation, no matter what the topic. At least that way there is a chance that it can be relevant once again rather than completely forgotten by potential customers.

Maybe some of the astroturfers can correct this assumption if I've got things wrong? I'm happy to take all feedback on board ;)
Damn, was that a new record for rebanning? 😮
 

DukeBlueBall

Banned
Oct 27, 2017
9,059
Seattle, WA
No PS5 prediction, because we have no die to make guesses from, but I would guess fewer CUs based on the Gonzalo clock.

Gonzalo was said to have Navi 10 Lite,
so 36-40 CUs at ~1800 MHz.

That being said, the addition of RT and the rumors of high TFs for the PS5 make me doubt Gonzalo was the PS5.

Gonzalo might have been Lockhart (half the ROPs), or some other announced Chinese console.
 

Godzilla24

Member
Nov 12, 2017
3,371
Are they suddenly cancelling Halo Infinite, Gears 5, Forza 8, Forza Horizon, and a possible Fable? I get that people may not like the constant Gears/Forza/Halo hammering, but if those are their major AAA franchises, and they work in a bunch of AA stuff on a steady basis for cheap with Game Pass, it seems a valid strategy. It may not be for everyone, but it could be for them. And with Spencer saying he doesn't care how many consoles they sell, only who is using their services, the Game Pass way may be good for them.
Yeah, that response didn't even make sense. Not to forget Age of Empires 4, Microsoft Flight Simulator, etc.
 

FSavage

Member
Oct 30, 2017
562
My prediction is based on precedent. The Xbox One X had clocks that roughly matched the RX 570, which was around 100 MHz lower than the RX 580. The RX 580 is the 180W card to compare to the 5700 (non-XT), so I subtracted a little more than 100 MHz from the game clock to get 1500 MHz. That's only a 300 MHz bump from the X1X.

The RX 570 is a full 30W below the RX 580 at those clocks, despite having only 4 fewer CUs. 5W of that is probably due to slower RAM, but you're probably saving 15W, normalized for CU count, just by lowering the clock ~100 MHz.

Yeah, I was definitely not saying you made your predictions off of the pastebin or anything, just a funny coincidence. Your posts in this thread have been very level-headed and reasonable. I'm just glad more people are coming around to the idea of next-gen consoles having higher clocks than previously expected.

No PS5 prediction, because we have no die to make guesses from, but I would guess fewer CUs based on the Gonzalo clock.

I actually think both consoles will have the same number of active CUs (56); any differences in clock speeds will come from cuts or innovations in other areas of the console, such as different memory and cooling setups.
 

DukeBlueBall

Banned
Oct 27, 2017
9,059
Seattle, WA
IMO Navi is capable of doing 4 SEs of 10 dual CUs each. I think next year we will see 80 CU cards.

Consoles will have 3 SEs and 60 CUs, with 54 active like Anex said.
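A quick sanity check of those configurations. The idea of disabling a few CUs for yield follows console precedent (the PS4 shipped 18 of 20 CUs active); the specific counts here are just the speculation above, nothing confirmed:

```python
# RDNA groups CUs into dual CUs (WGPs) inside shader engines (SEs).
# Configurations below mirror the speculation in the post above.

def total_cus(shader_engines: int, dual_cus_per_se: int) -> int:
    return shader_engines * dual_cus_per_se * 2

big_navi = total_cus(4, 10)   # 80 CUs
console  = total_cus(3, 10)   # 60 CUs on die
active   = console - 6        # 54 active if 6 are disabled for yield

print(big_navi, console, active)  # 80 60 54
```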
 
Feb 1, 2018
5,239
Europe
After reading the Matt Booty interview about the goal of releasing a first-party game every 2-3 months on Game Pass, I kinda got the feeling that MS is done with AAA games. AA games released quickly is their model, given the nature of Game Pass (and their recent studio acquisitions kind of show the same thing: mostly small AA studios).

Isn't that a bit alarming to Xbox gamers? AAA games take 4 to 5 years minimum, and sacrificing quality for quantity should not be the objective of the platform holder.

Anyway, this doesn't speak well to me, but hey, I haven't been their target audience since 2011, so maybe Xbox gamers are fine with this.
Maybe the gaming industry as a whole will be done with AAA gaming soon, too?
 

anexanhume

Member
Oct 25, 2017
12,912
Maryland
You're failing to take into account the additional die area taken up by the RT hardware. In Turing, this is not insignificant.
It depends on what they do. One of their research papers talks about only a 5% die cost for speeding up kd-tree traversal.

AMD has already stated they're going to target a limited number of RT effects for RDNA 2. I would expect their solution to be more compact.

What did this say again? I had the page open and it was a chart. I refreshed it and the tweet is dead now.

He keeps deleting the tweet to update the table. I edited the post to include the latest.
 

EGOMON

Member
Nov 5, 2017
924
Earth
I think the main thing when astroturfing is to ensure your chosen product is always in the conversation, no matter what the topic. At least that way there is a chance that it can be relevant once again rather than completely forgotten by potential customers.

Maybe some of the astroturfers can correct this assumption if I've got things wrong? I'm happy to take all feedback on board ;)
Wtf Spinning, you just came back! Sigh.
 
Jun 18, 2018
1,100
AMD has already stated they're going to target a limited number of RT effects for RDNA 2. I would expect their solution to be more compact.

Devs get to choose which effects RT is used for. Heck, there's even a project where it's used for supersampling along aliasing edges.

Like Nvidia's RTX cards, AMD is suggesting that there isn't the power to fully ray trace all lighting, shadows and reflections, so developers will choose where to apply it.

Nvidia had a good talk on Ray Traced Irradiance Fields at GDC this year, hinting at a good use for rays that won't be as accurate as per-pixel, but doesn't have the issue of scenes taking multiple seconds before their lighting data is complete.
 

Pheonix

Banned
Dec 14, 2018
5,990
St Kitts
The problem is that HBM2 + DDR4 does not have the same bandwidth.

24GB of GDDR6 at 14 Gbps on a 384-bit bus has a total bandwidth of 672 GB/s.
HBM2 in a 2-stack (2×4GB) configuration at 1000 MHz on a 2048-bit bus has a total bandwidth of 512 GB/s.
So you still need 160 GB/s of bandwidth from your DDR4 solution to match GDDR6,
and that would be a 256-bit interface with DDR4-5000 RAM (8×2GB).

HBM makes sense. HBM in combination with DDR4 does not make sense, unless you're okay with less total bandwidth.
And only in that case would you see any significant power savings over GDDR6 alone.
Don't forget how that bandwidth is actually being used.

One pool of GDDR6 at 672 GB/s shares that bandwidth between the GPU and CPU, and we've all heard the horror stories of memory contention in a shared pool of RAM. So even if the CPU only needs around 50 GB/s of bandwidth, it ends up costing anywhere from 75 GB/s and up.

With a split pool, all of that 512 GB/s of HBM bandwidth goes to the GPU exclusively.

I can see a setup where Sony puts in 16GB of DDR4 (8GB reserved for the OS and SSD, 8GB for games) coupled with 8GB of HBM2. If doing that is cheaper (and less power-hungry) than going with one pool of 24GB of GDDR6, I can see it happening.
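Just to make that arithmetic explicit, here's a toy version of the contention argument. The 1.5x "effective cost" for CPU traffic in a shared pool is only the 50 -> ~75 GB/s example above, not a measured penalty:

```python
# Toy model of shared-pool contention vs a split pool.
# The contention multiplier is illustrative (from the 50 -> ~75 GB/s
# example in the post above), not a measured figure.

shared_pool_bw  = 672.0   # GB/s, single GDDR6 pool
cpu_need        = 50.0    # GB/s the CPU actually wants
contention_cost = 1.5     # each CPU GB/s effectively consumes this much

gpu_left_shared = shared_pool_bw - cpu_need * contention_cost
gpu_split_pool  = 512.0   # GB/s of HBM2, dedicated to the GPU

print(f"Shared pool leaves ~{gpu_left_shared:.0f} GB/s for the GPU")      # ~597
print(f"Split pool gives the GPU {gpu_split_pool:.0f} GB/s, uncontended")
```

Whether the contention penalty ends up larger than that in practice is exactly the open question here.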
 

chowyunfatt

Banned
Oct 28, 2017
333
I think the main thing when astroturfing is to ensure your chosen product is always in the conversation, no matter what the topic. At least that way there is a chance that it can be relevant once again rather than completely forgotten by potential customers.

Maybe some of the astroturfers can correct this assumption if I've got things wrong? I'm happy to take all feedback on board ;)
What! Can't you take a light-hearted comment or what... I have no idea what you're even implying, lol.
I thought forums were supposed to be light-hearted fun?
 

chowyunfatt

Banned
Oct 28, 2017
333
ms flight simulator?
Now that's one game I would be there day one for on the new Xbox. Microsoft's conference didn't really interest me with what they showed, but that looked incredible, especially if they can pull off what it looks like they're trying to do.

Can only imagine how good that will look and feel on a 60" TV next gen.

EDIT: Sorry for the double post, my internet's jumping all over the place.
 

nelsonroyale

Member
Oct 28, 2017
12,124
So basically the latest legitimate info (from the GI editor) suggests the PS5 may be more powerful... but there still seems to be a lot of speculation that the next Xbox will be more powerful, based on what? Neither situation is confirmed in any way, but it doesn't seem like there is any confirmed insider info to indicate the latter, except for speculation based on uncertainty around the two-SKU model...

Apart from the most recent tweet by the GI editor, is there anything with decent credence that points either way? Or is it largely down to speculation, oriented to a degree by the preferences of the speculator?

I'm sure this is just a diversion. Most of the interesting detective work is simply around working out specs from the info we have, rather than the comparative stuff, obviously.
 

BreakAtmo

Member
Nov 12, 2017
12,805
Australia
Don't forget how that bandwidth is actually being used.

One pool of GDDR6 at 672 GB/s shares that bandwidth between the GPU and CPU, and we've all heard the horror stories of memory contention in a shared pool of RAM. So even if the CPU only needs around 50 GB/s of bandwidth, it ends up costing anywhere from 75 GB/s and up.

With a split pool, all of that 512 GB/s of HBM bandwidth goes to the GPU exclusively.

I can see a setup where Sony puts in 16GB of DDR4 (8GB reserved for the OS and SSD, 8GB for games) coupled with 8GB of HBM2. If doing that is cheaper (and less power-hungry) than going with one pool of 24GB of GDDR6, I can see it happening.

Yeah, this is what I was going to say. The rumour described this exact thing. The stated bandwidth of GDDR6 is not what you actually get in reality, unlike with the other method.
 

Bradbatross

Member
Mar 17, 2018
14,191
So basically the latest legitimate info (from the GI editor) suggests the PS5 may be more powerful... but there still seems to be a lot of speculation that the next Xbox will be more powerful, based on what? Neither situation is confirmed in any way, but it doesn't seem like there is any confirmed insider info to indicate the latter, except for speculation based on uncertainty around the two-SKU model...

Apart from the most recent tweet by the GI editor, is there anything with decent credence that points either way? Or is it largely down to speculation, oriented to a degree by the preferences of the speculator?

I'm sure this is just a diversion. Most of the interesting detective work is simply around working out specs from the info we have, rather than the comparative stuff, obviously.
We have that tweet, and we have other insiders saying that nothing is final, so speculation is wide open at the moment.
 

iTehDroiD

Member
Oct 28, 2017
136
Don't forget how that bandwidth is actually being used.

One pool of GDDR6 at 672 GB/s shares that bandwidth between the GPU and CPU, and we've all heard the horror stories of memory contention in a shared pool of RAM. So even if the CPU only needs around 50 GB/s of bandwidth, it ends up costing anywhere from 75 GB/s and up.

With a split pool, all of that 512 GB/s of HBM bandwidth goes to the GPU exclusively.

I can see a setup where Sony puts in 16GB of DDR4 (8GB reserved for the OS and SSD, 8GB for games) coupled with 8GB of HBM2. If doing that is cheaper (and less power-hungry) than going with one pool of 24GB of GDDR6, I can see it happening.

Wouldn't that mean no unified memory for the CPU and GPU? That's a feature Mark Cerny advertised as highly requested by devs and a big advantage of the PS4.
 

BreakAtmo

Member
Nov 12, 2017
12,805
Australia
Wouldn't that mean no unified memory for the CPU and GPU? That's a feature Mark Cerny advertised as highly requested by devs and a big advantage of the PS4.

No. The High Bandwidth Cache Controller makes the RAM appear as a single pool to developers and manages it all in the background, so the issues with split RAM pretty much go away.
 