Next-gen PS5 and next Xbox speculation launch thread |OT5| - It's in RDNA

What do you think could be the memory setup of your preferred console, or one of the new consoles?


Total voters: 1,144

PLASTICA-MAN

Member
Oct 26, 2017
7,407
Here’s my Anaconda prediction now:

3 shader engines, 6 workgroup processors, 1CU pair per SE disabled - 54 total CUs. 1500MHz. 10.4TF.
You think that both MS and Sony will further customize the architecture of current NAVI to include more CUs to have more TFlops or do you think that they will wait a little more for AMD to finalize their NAVI RDNA 2 design to put as final step in their consoles?
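For anyone wanting to sanity-check figures like the 10.4TF above: the number everyone quotes is just CUs x 64 shader cores per CU x 2 FLOPs per clock x clock speed. A minimal sketch of that arithmetic (the 64-cores-per-CU and 2-FLOPs-per-clock factors are the usual GCN/RDNA assumptions, not anything confirmed for these consoles):

```python
def tflops(cus: int, clock_ghz: float, cores_per_cu: int = 64, flops_per_clock: int = 2) -> float:
    """Peak FP32 throughput in TFLOPS for a GCN/RDNA-style GPU."""
    return cus * cores_per_cu * flops_per_clock * clock_ghz / 1000.0

print(f"54 CU @ 1.500 GHz -> {tflops(54, 1.5):.1f} TF")    # the quoted Anaconda prediction, ~10.4 TF
print(f"56 CU @ 1.800 GHz -> {tflops(56, 1.8):.1f} TF")    # a PS5 guess floating around the thread, ~12.9 TF
print(f"40 CU @ 1.172 GHz -> {tflops(40, 1.172):.1f} TF")  # Xbox One X for reference, ~6.0 TF
```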
 
Last edited:

Luckydog

Member
Oct 25, 2017
547
USA
It shouldn't really, but it constantly surprises me how much Microsoft's history of throwing money down the Xbox well has altered attitudes toward them. I mean, you seem like a perfectly normal, reasonable person. And here you are suggesting that a possible strategic move would be to make it so for the first six months, a bunch of customers use Microsoft's network resources and don't pay them any money at all. About a million people (or up to maybe 3+ million, if by "day one" you meant an edition rather than literally the first 24 hours) buying a console--that may itself be sold at a loss--and then not contributing a single penny through the entire launch window! This would be an opportunity cost of close to a billion dollars for Xbox.

Man, all the billions they've spent in (effectively) paying people to use their platforms has really skewed expectations. They've bought a ton of goodwill, that's for sure. But I'm not smart enough to see how they cash it all back in.
Microsoft has never been shy about getting people into their ecosystems with free trials. Six months is way too aggressive, but even a one-month trial of Game Pass (two weeks feels short) could be a big selling point. It feels a lot easier to swallow an expensive console when you can point to the fact that upgraders wouldn't have to buy any accessories or games at launch. No "hidden cost" of a second controller. Play our biggest hit for a few weeks... get sucked into our ecosystem... It certainly feels like they could persuade people to buy in early by alleviating day-one cost (even if they have to pay it later, little by little, to keep those games). That's just how subscriptions work: get people hooked in the short term and make it just easy enough not to cancel.
 

GameSeeker

Member
Oct 27, 2017
51
A hypothetical system with a total memory bandwidth of 700GB/s uses around the same power with either a DDR4 or GDDR6 solution.
So there is no significant power saving at all using DDR4 on a per-throughput basis.
That is the point.


HBM on the other hand is a lot better when it comes to power consumption per GB/s
Good, you are confirming that an HBM2 + DDR4 solution has a better pJ/bit metric than an equivalent GDDR6 solution (say, 8GB HBM2 + 16GB DDR4 vs. 24GB GDDR6; note that total bandwidth is similar but not identical, and that is intentional on the part of the electrical engineer). Given that the memory access patterns of an 8-core/16-thread Zen 2 are widely different from those of a Navi GPU, I'm sure you can clearly see that the efficiency gains from separating the memory traffic into two memory pools are high. Fortunately, the two memory pools appear as one at the software API level, meaning there is no complicating factor for the game programmer.
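To put rough numbers on the pJ/bit argument: interface power is just energy per bit times bits moved per second. The per-bit figures below are illustrative assumptions on my part (not vendor specs), picked only to show why a split HBM2 + DDR4 pool can come out ahead at similar total bandwidth:

```python
def interface_power_watts(bandwidth_gb_s: float, pj_per_bit: float) -> float:
    """Very rough memory interface power: GB/s -> bits/s, times energy per bit."""
    return bandwidth_gb_s * 1e9 * 8 * pj_per_bit * 1e-12

# Illustrative energy-per-bit assumptions (pJ/bit), not official figures.
GDDR6_PJ, HBM2_PJ, DDR4_PJ = 7.5, 4.0, 8.0

unified = interface_power_watts(672, GDDR6_PJ)                                        # 24GB GDDR6 pool
split   = interface_power_watts(512, HBM2_PJ) + interface_power_watts(160, DDR4_PJ)   # 8GB HBM2 + 16GB DDR4
print(f"GDDR6-only ~{unified:.0f} W, HBM2 + DDR4 ~{split:.0f} W")
```

With those placeholder numbers the split pool lands around 25-30 W versus roughly 40 W for the single GDDR6 pool; the real saving depends entirely on the actual pJ/bit of each technology.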
 

Luckydog

Member
Oct 25, 2017
547
USA
After reading the Matt Booty interview about the goal of releasing a first-party game every 2-3 months on Game Pass, I kinda got the feeling that MS is done with AAA games. AA games released quickly is their model, given the nature of Game Pass (and their recent studio acquisitions kinda show the same thing: small AA studios).

Isn't that a bit alarming to Xbox gamers? AAA games take 4 to 5 years minimum, and sacrificing quality for quantity should not be the objective of the platform holder.

Anyway, this doesn't speak well to me, but hey, I haven't been their target audience since 2011, so maybe Xbox gamers are fine with this.
Are they suddenly cancelling Halo Infinite, Gears 5, Forza 8, Forza Horizon, and a possible Fable? I get that people may not like the constant Gears/Forza/Halo hammering, but if those are their major AAA franchises, and they work in a bunch of AA stuff on a steady basis for cheap with Game Pass, it seems a valid strategy. It may not be for everyone, but it could be for them. And with Spencer saying he doesn't care how many consoles they sell, only who is using their services, the Game Pass way may be good for them.
 

AegonSnake

Member
Oct 25, 2017
4,227
So 10tflops is the maximum flops we're gonna get in PS5! Sounds disappointing to me >:(
I don't know about that. I have no idea what happened here overnight, but when I went to sleep we were discussing 11-12 TFLOPS and 56-60 CUs, because the Scarlett APU is huge, way bigger than the base PS4, Pro and X1X. Some estimates are 400 mm2.

The Navi GPU revealed yesterday is only 250 mm2 with 40 CUs. Even if you add 70 mm2 for the Zen 2 CPU, you still have a lot of space to add more CUs.

I am back on the 12.9 TFLOPS train after that.

Scarlett - 64 CU at 1.5 GHz - 11.5 TFLOPS
PS5 - 56 CU at 1.8 GHz - 12.9 TFLOPS (HBM2 + DDR4 lets them push clocks higher)

I have no idea why we all thought they would settle for a 350 mm2 die when they are no longer going for $399 standard-cooling consoles. These consoles are going to be $499, which means they can create a bigger die than the PS4's. With vapor chamber cooling they can push clocks up to high levels as well.
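As a rough check on that die-size reasoning: if you assume Navi 10 carries roughly 100 mm2 of non-CU area (memory PHYs, multimedia, front end - my own guess, not a published figure) and everything else scales with CU count, the budget works out something like this:

```python
# Napkin math for the die-size argument above. The 400 mm^2 APU and 70 mm^2
# Zen 2 figures are the thread's estimates; the ~100 mm^2 of fixed non-CU
# area assumed for Navi 10 is a guess, so treat the result as a ballpark.
APU_ESTIMATE_MM2 = 400
ZEN2_BLOCK_MM2   = 70
NAVI10_MM2, NAVI10_CUS, FIXED_MM2 = 251, 40, 100

mm2_per_cu = (NAVI10_MM2 - FIXED_MM2) / NAVI10_CUS             # ~3.8 mm^2 per CU
cu_room    = (APU_ESTIMATE_MM2 - ZEN2_BLOCK_MM2 - FIXED_MM2) / mm2_per_cu
print(f"Room for roughly {cu_room:.0f} CUs")                   # ~61 with these assumptions
```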
 

modiz

Member
Oct 8, 2018
3,732
Project Awakening is hinted at being a next-gen title in the interview. Based on the TGS trailer, this game looks very impressive.
 

Ushay

Member
Oct 27, 2017
3,111
Guys, what's your take on the mention of hardware-based ray tracing? It's pretty absent from most Navi discussions on AMD's side. Would that be a custom solution? Or just hot air by MS?
 

Expy

Member
Oct 26, 2017
3,118
Guys, what's your take on the mention of hardware-based ray tracing? It's pretty absent from most Navi discussions on AMD's side. Would that be a custom solution? Or just hot air by MS?
They both have hardware raytracing.

It's part of the Navi roadmap, so it could be custom hardware for these Navi chips, but based on the upcoming standard RT hardware for the Navi line of GPUs.
 

modiz

Member
Oct 8, 2018
3,732
Guys, what's your take on the mention of hardware-based ray tracing? It's pretty absent from most Navi discussions on AMD's side. Would that be a custom solution? Or just hot air by MS?
RDNA 2 is confirmed to have hardware ray tracing, so it's likely MS and Sony will modify their chips to include that feature (like the PS4 Pro including Vega features).
 
Oct 27, 2017
548
Welp, it's good to be back.

I've missed you fine upstanding ladies and gentlemen.

What are the current astroturfing objectives topics of conversation being kicked around?

I'm sure there's lots to dig into.

TF navel gazing is always a favourite to fall back on. Although I still don't think either Anaconda or PS5 should be below 12.0 TF for platforms that are expected to launch next year and last 7-8 years.
 

anexanhume

Member
Oct 25, 2017
4,008
You think that both MS and Sony will further customize the architecture of current NAVI to include more CUs to have more TFlops or do you think that they will wait a little more for AMD to finalize their NAVI 2 design to put as final step in their consoles?
It isn’t customization to add CUs. The architecture is inherently scalable. I think we’ll see 80 CU flagship GPUs next year.

RDNA 2 is probably not going to be named Navi 2. For now, I’d call consoles RDNA 1.5.
 

Bradbatross

Member
Mar 17, 2018
3,559
Welp, it's good to be back.

I've missed you fine upstanding ladies and gentlemen.

What are the current astroturfing objectives topics of conversation being kicked around?

I'm sure there's lots to dig into.

TF navel gazing is always a favourite to fall back on. Although I still don't think either Anaconda or PS5 should be below 12.0 TF for platforms that are expected to launch next year and last 7-8 years.
Didn't even realize you were banned for console warring. Not surprised though.
 

PLASTICA-MAN

Member
Oct 26, 2017
7,407
It isn’t customization to add CUs. The architecture is inherently scalable. I think we’ll see 80 CU flagship GPUs next year.

RDNA 2 is probably not going to be named Navi 2. For now, I’d call consoles RDNA 1.5.
My bad, I meant RDNA 2. :p

I don't know about that. I have no idea what happened here overnight, but when I went to sleep we were discussing 11-12 TFLOPS and 56-60 CUs, because the Scarlett APU is huge, way bigger than the base PS4, Pro and X1X. Some estimates are 400 mm2.

The Navi GPU revealed yesterday is only 250 mm2 with 40 CUs. Even if you add 70 mm2 for the Zen 2 CPU, you still have a lot of space to add more CUs.

I am back on the 12.9 TFLOPS train after that.

Scarlett - 64 CU at 1.5 GHz - 11.5 TFLOPS
PS5 - 56 CU at 1.8 GHz - 12.9 TFLOPS (HBM2 + DDR4 lets them push clocks higher)

I have no idea why we all thought they would settle for a 350 mm2 die when they are no longer going for $399 standard-cooling consoles. These consoles are going to be $499, which means they can create a bigger die than the PS4's. With vapor chamber cooling they can push clocks up to high levels as well.
What is better: a GPU with more CUs and a lower clock speed, or a GPU with fewer CUs and a higher clock speed? I know the latter gets you higher flops, but in practice which will have better results in games, streaming textures and shaders, etc.?
 

Pheonix

Member
Dec 14, 2018
961
St Kitts
I knew you will answer this and insist on this again. Check my previous post in this thread, please:





Even if this is what it is on paper, it's up to the devs of a given game whether to implement the NV solution as part of a partnership, or not to sign any partnership at all and go with a generally supported solution like COD did.
CD Projekt Red chose to do a partnership with NVIDIA, like with The Witcher 3. So don't expect them to offer an NV solution on consoles and ATI GPUs (obviously), or to offer a replacement solution for the others. A deal is a deal and they have to respect it.
This is purely a marketing agreement and nothing related to how the tech is developed; this is what many aren't getting, and it has happened before and will happen again.
We won't have ray tracing on consoles and ATI GPUs for games with an RTX/NVIDIA partnership unless one of two things happens: either the company that signed the partnership with NVIDIA is willing to offer ray tracing on platforms other than NVIDIA's, thus not fulfilling their deal, or NVIDIA allows the RTX implementation to work on consoles and ATI GPUs (with no hurdles) - and the chances of either happening are close to none.

Until then, it's a big NO.
Hehehe....firestarter.

They have announced RT support on the one GPU series that has native hardware support for RT. How about you wait till AMD has GPUs with native hardware support for RT before you start peddling exclusive partnerships.

There is RT, then there is what Nvidia calls it (RTX), and there is what AMD will call it. A dev saying they support RTX doesn't mean they wouldn't support RT in whatever form AMD, and hence the consoles, end up having it.
 

AegonSnake

Member
Oct 25, 2017
4,227
My bad, I meant RDNA 2. :p



What is better: a GPU with more CUs and a lower clock speed, or a GPU with fewer CUs and a higher clock speed? I know the latter gets you higher flops, but in practice which will have better results in games, streaming textures and shaders, etc.?
I've asked this several times myself. I don't think it makes a difference, tbh.

That said, I was looking at Vega 64 and Radeon VII (60 CU) benchmarks, and the Radeon VII had drastically better performance despite having fewer CUs and only around 7% more TFLOPS. But that was likely due to the 1 TB/s memory bandwidth.

 

FSavage

Member
Oct 30, 2017
454
I'm not assuming that at all, I'm going with a conservative estimation of 1600-1750 for a chip with more CUs than 5700XT has.
And on the contrary, I don't fully understand why people here think that a wider part with lower clocks would consume less than a narrower one with higher clocks - all examples we have in PC space show the exact opposite of that.
Here’s my Anaconda prediction now:

3 shader engines, 6 workgroup processors, 1CU pair per SE disabled - 54 total CUs. 1500MHz. 10.4TF.
🤔 from May 25th


I'm joking of course, but it would be funny if both of you are right on the money, and if this leak is real. I also imagine that even if this leak is real, final clocks may still change.
 

dgrdsv

Member
Oct 25, 2017
2,239
Msk / SPb, Russia
We won't have ray tracing on consoles and ATI GPUs for games with an RTX/NVIDIA partnership unless one of two things happens: either the company that signed the partnership with NVIDIA is willing to offer ray tracing on platforms other than NVIDIA's, thus not fulfilling their deal, or NVIDIA allows the RTX implementation to work on consoles and ATI GPUs (with no hurdles) - and the chances of either happening are close to none.
There is no limitation in a deal with NV on what platforms a dev may implement ray tracing on. DXR/VK ray tracing is working on Pascal GPUs right now, without any h/w acceleration; it's essentially pure GPU compute, and RT h/w just makes parts of said compute run faster.
Now, it is certainly possible that future DXR-compatible RT h/w won't be well suited to running current DXR implementations, which are made and tuned for Turing's RT cores. In some cases some patching may be required. But that's hardly NV's or the developers' fault; nobody but NV has RT h/w available right now.
 

chowyunfatt

Member
Oct 28, 2017
261
Welp, it's good to be back.

I've missed you fine upstanding ladies and gentlemen.

What are the current astroturfing objectives topics of conversation being kicked around?

I'm sure there's lots to dig into.

TF navel gazing is always a favourite to fall back on. Although I still don't think either Anaconda or PS5 should be below 12.0 TF for platforms that are expected to launch next year and last 7-8 years.
I think we're at "Xbox is more powerful", or was that yesterday? Maybe it was PS5 yesterday and Xbox today?
You need to keep up, it changes by the hour...
 

Ushay

Member
Oct 27, 2017
3,111
For 4k rendering wouldn't we start to see diminishing returns on how much these GPUs can achieve after a certain point? Back when the Pro was announced a lot of people thought 8TF was a good place to be for this objective.

I think the CPUs will be far more relevant this next generation.
 
Oct 27, 2017
548
Quick and dirty:



No need for Lockhart since the Xbox One S (and maybe the X, but I don't think so!) can take care of price-sensitive customers. MS's announcement that it won't deliver any next-gen-exclusive first-party titles in the foreseeable future strongly indicates that the Xbox One will still be around once Anaconda arrives.

Looking forward to anyone's feedback to those maps btw.
I like a good picture! They really are worth a thousand words (or wrong assumptions).

I don't believe the post-E3 image is quite right, if we disregard performance for now and concentrate solely on price.

Sony is targeting the mass-market consumer, not the performance enthusiasts. So while Lockhart is still a thing (and it is), the PS5 price will always be skewed towards that rather than towards Anaconda, even though the PS5 and Anaconda have similar specs.

Lockhart wasn't mentioned at E3 because it confuses the simple soundbite type of messaging that is required at these events. It needs more explaining to customers so they don't get the wrong idea or a bad perception of it. So better to avoid that at this stage.

I'm sure MS would be delighted if Sony bought the 'Lockhart is dead' rumour, since that might tempt Sony to play at the same price point as Anaconda, but that is never going to happen.

edit ugh. Auto correct
 

M.Bluth

Member
Oct 25, 2017
1,559
I've been thinking about this, because of that Gran Turismo SXSW ray tracing demo

but it's bloody 8k and 120fps.
I don't know if it was at SXSW, but googling the GT Sport 8K demo returns some CES 2019 results.
And here's the thing: GTPlanet.net says it's upscaled to 8K. It wasn't shown in a manner where pixel counting would be possible to make sure. So that's one.
Two, it would not be the first time PD has shown off a GT demo running on multiple consoles to hit insane resolutions and frame rates that will never make it to the retail GT versions.
 

Flutter

Member
Oct 25, 2017
4,635
I don't know if it was at SXSW, but googling the GT Sport 8K demo returns some CES 2019 results.
And here's the thing: GTPlanet.net says it's upscaled to 8K. It wasn't shown in a manner where pixel counting would be possible to make sure. So that's one.
Two, it would not be the first time PD has shown off a GT demo running on multiple consoles to hit insane resolutions and frame rates that will never make it to the retail GT versions.
Yeah, I know. There are a few; I just wonder whether it was using the devkit or not.

Didn't know it was upscaled though!
 

Your Fave

Member
Oct 27, 2017
16
Looking back after watching all the conferences, I think it was a brilliant marketing move for Sony to announce the PS5 a few weeks ago, and other hardware makers will surely take notice for the future.

The news would have been buried underneath all the other gaming news around E3. I feel only the techies are even talking about Scarlett right now while the PS5 dominated mainstream press cycles for a good while. To be honest, it didn't help that their reveal was basically a complete re-tread of what Sony mentioned a few weeks ago.
Not really, Google "Keanu Reeves Next Xbox".
 

Lady Gaia

Member
Oct 27, 2017
428
Seattle
What is better: a GPU with more CUs and a lower clock speed, or a GPU with fewer CUs and a higher clock speed? I know the latter gets you higher flops, but in practice which will have better results in games, streaming textures and shaders, etc.?
I've posted a few times on the subject. The quick summary is that more CUs are "better" in the sense that they'll yield better performance within the same thermal constraints, presuming tasks can be efficiently scheduled. Higher clock speeds are "better" in that smaller designs are easier and cheaper to manufacture, and they scale predictably (but one of those predictable factors is that clock scaling results in exponential increases in heat, which will ultimately limit how high you can go).
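A toy model of the wide-and-slow versus narrow-and-fast trade-off described above: dynamic power scales roughly with CU count x frequency x voltage squared, and voltage has to rise with frequency near the top of the range. The voltage points below are invented purely for illustration (real Navi V/F curves aren't public):

```python
# Two hypothetical configs with the same peak TFLOPS but different width/clock.
# rel_power is dynamic power up to a constant: capacitance (~CU count) * f * V^2.
def rel_power(cus: int, clock_ghz: float, volts: float) -> float:
    return cus * clock_ghz * volts ** 2

configs = {
    "wide/slow":   (56, 1.5, 0.95),   # 56 CU @ 1.5 GHz
    "narrow/fast": (42, 2.0, 1.15),   # 42 CU @ 2.0 GHz
}
for label, (cus, f, v) in configs.items():
    tf = cus * 64 * 2 * f / 1000
    print(f"{label}: {tf:.1f} TF, relative power {rel_power(cus, f, v):.0f}")
```

Both configurations land at ~10.8 TF, but with those made-up voltages the narrow/fast part burns roughly 45% more power, which is the point about thermals favouring more CUs at lower clocks.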
 
Last edited:

anexanhume

Member
Oct 25, 2017
4,008
🤔 from May 25th


I'm joking of course, but it would be funny if both of you are right on the money, and if this leak is real. I also imagine that even if this leak is real, final clocks may still change.
My prediction is based on precedent. The Xbox One X had clocks that roughly matched the RX 570, which was around 100 MHz lower than the RX 580. The RX 580 is the 180W card to compare to the 5700 (non-XT), so I subtracted a little more than 100 MHz from the game clock to get 1500 MHz. Only a 300 MHz bump from the X1X.

The RX 570 is a full 30W below the RX 580 with those clocks, despite only having 4 fewer CUs. 5W is probably due to slower RAM, but you're probably saving 15W, normalized for CU count, just by lowering the clock ~100 MHz.
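Spelling out that bookkeeping with the post's own rough numbers (the 5W RAM penalty and the ~2.5W-per-CU figure are assumptions, not measurements):

```python
RX580_W, RX570_W = 180, 150          # board power figures used in the post
delta_total = RX580_W - RX570_W      # 30 W between the two cards
ram_w       = 5                      # assumed: cost of the faster RAM on the 580
cu_w        = 4 * 2.5                # assumed: 4 extra CUs at ~2.5 W each
clock_w     = delta_total - ram_w - cu_w
print(f"~{clock_w:.0f} W attributable to the ~100 MHz higher clock")   # ~15 W
```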
 

Flutter

Member
Oct 25, 2017
4,635
....wait shit he actually said this 2 years ago?


edit: nvm, found the article and it seems he's saying the assets for gran turismo can run at 8k

 
Oct 27, 2017
548
User Banned (2 Weeks): System wars, trolling, history of similar infractions
I think we're at "Xbox is more powerful", or was that yesterday? Maybe it was PS5 yesterday and Xbox today?
You need to keep up, it changes by the hour...
I think the main thing when astroturfing is to ensure your chosen product is always in the conversation, no matter what the topic. At least that way there is a chance that it can be relevant once again rather than completely forgotten by potential customers.

Maybe some of the astroturfers can correct this assumption if I've got things wrong? I'm happy to take all feedback on board ;)
 

Expy

Member
Oct 26, 2017
3,118
For 4k rendering wouldn't we start to see diminishing returns on how much these GPUs can achieve after a certain point? Back when the Pro was announced a lot of people thought 8TF was a good place to be for this objective.

I think the CPUs will be far more relevant this next generation.
Definitely referring to Polyphony there. Whether or not we're seeing PS5 Devkit footage in those videos is another story.
 

Wollan

Mostly Positive
Member
Oct 25, 2017
3,194
Norway but living in France
Quoting this from the Radeon RX 5000 family GPU thread :

Pretty much de-mystifies how next-gen consoles will implement hardware ray-tracing. This is not some in-house secret-sauce from Sony or Microsoft camps specifically but rather incoming features on the Radeon roadmap (2020-2021 on PC). We will see if next-gen consoles will be based on RDNA2 or if they will be based on RDNA with the hardware ray-tracing aspect brought over early (I would bet on the latter). Just like the PS4 Pro Polaris GPU had rapid-packed-math implemented early despite it being a Vega GPU feature on the roadmap.
 

M3rcy

Member
Oct 27, 2017
335
I don't know about that. I have no idea what happened here overnight, but when I went to sleep we were discussing 11-12 TFLOPS and 56-60 CUs, because the Scarlett APU is huge, way bigger than the base PS4, Pro and X1X. Some estimates are 400 mm2.

The Navi GPU revealed yesterday is only 250 mm2 with 40 CUs. Even if you add 70 mm2 for the Zen 2 CPU, you still have a lot of space to add more CUs.

I am back on the 12.9 TFLOPS train after that.

Scarlett - 64 CU at 1.5 GHz - 11.5 TFLOPS
PS5 - 56 CU at 1.8 GHz - 12.9 TFLOPS (HBM2 + DDR4 lets them push clocks higher)

I have no idea why we all thought they would settle for a 350 mm2 die when they are no longer going for $399 standard-cooling consoles. These consoles are going to be $499, which means they can create a bigger die than the PS4's. With vapor chamber cooling they can push clocks up to high levels as well.
You're failing to take into account the additional die area taken up by the RT hardware. In Turing, this is not insignificant.
 

Fafalada

Member
Oct 27, 2017
1,139
Say bye bye to ray-tracing support on next-gen consoles and Navi 2 GPUs.
a) Consoles will have a bigger userbase than the entirety of RT-capable cards on PC within a week of launching. You can expect that's when RT adoption really starts, and CDPR has always been very port-happy with their updates.
b) In all probability, the nextbox SDK will include a DXR implementation (it doesn't matter whether it's software- or hardware-based), so support for the MS console at least will be relatively simple to adopt (barring performance problems, of course). PS5 will be the odd man out in terms of API support, but then that's been the case for the past 25 years on all non-MS consoles, so nothing new there either.
 

KennyX

Member
Nov 21, 2018
1,478
Good, you are confirming that an HBM2 + DDR4 solution has a better pJ/bit metric than an equivalent GDDR6 solution (say, 8GB HBM2 + 16GB DDR4 vs. 24GB GDDR6; note that total bandwidth is similar but not identical, and that is intentional on the part of the electrical engineer). Given that the memory access patterns of an 8-core/16-thread Zen 2 are widely different from those of a Navi GPU, I'm sure you can clearly see that the efficiency gains from separating the memory traffic into two memory pools are high. Fortunately, the two memory pools appear as one at the software API level, meaning there is no complicating factor for the game programmer.
The problem is that HBM2 + DDR4 does not have the same bandwidth.

24GB of GDDR6 at 14Gbps on a 384-bit bus has a total bandwidth of 672GB/s.
HBM2 in a 2-stack (2x4GB) configuration at 1000MHz on a 2048-bit bus has a total bandwidth of 512GB/s.
So you still need 160GB/s of bandwidth from your DDR4 solution to match GDDR6,
and that would be a 256-bit interface with 5000MHz DDR4 (8x2GB).

HBM makes sense. HBM in combination with DDR4 does not make sense, unless you're okay with less total bandwidth.
And only in that case would you see any significant power savings over GDDR6 alone.
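All of those figures come from the same relation, bus width x data rate per pin / 8, so they're easy to reproduce (the memory configurations are the ones from the post, not confirmed console specs):

```python
def bandwidth_gb_s(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s for a memory interface."""
    return bus_width_bits * gbps_per_pin / 8

print(bandwidth_gb_s(384, 14))   # 24GB GDDR6 @ 14 Gbps           -> 672.0
print(bandwidth_gb_s(2048, 2))   # 2-stack HBM2 @ 1000 MHz (DDR)  -> 512.0
print(bandwidth_gb_s(256, 5))    # 256-bit DDR4-5000              -> 160.0
```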
 

VX1

Member
Oct 28, 2017
3,739
Europe
My prediction is based on precedent. The Xbox One X had clocks that roughly matched the RX 570, which was around 100 MHz lower than the RX 580. The RX 580 is the 180W card to compare to the 5700 (non-XT), so I subtracted a little more than 100 MHz from the game clock to get 1500 MHz. Only a 300 MHz bump from the X1X.

The RX 570 is a full 30W below the RX 580 with those clocks, despite only having 4 fewer CUs. 5W is probably due to slower RAM, but you're probably saving 15W, normalized for CU count, just by lowering the clock ~100 MHz.
Yeah, I guess we can expect something like that from Anaconda.
Any predictions for the PS5, Anex? It's a bit trickier to extrapolate from the Pro than from the 1X, I suppose?
 

Crusadernm

Member
Nov 12, 2017
1,876
I don't think it's even near official that Lockhart is out of the game yet. MS just stated that they have their overall concept of Scarlett in the works but didn't mention anything about how many consoles will be out. It's still up in the air and probably being decided in the upcoming year.