How much money are you willing to pay for a next generation console?

  • Up to $199: 33 votes (1.5%)
  • Up to $299: 48 votes (2.2%)
  • Up to $399: 318 votes (14.4%)
  • Up to $499: 1,060 votes (48.0%)
  • Up to $599: 449 votes (20.3%)
  • Up to $699: 100 votes (4.5%)
  • I will pay anything!: 202 votes (9.1%)
  • Total voters: 2,210
Status
Not open for further replies.
Oct 27, 2017
7,178
Somewhere South
Komachi mentioned RayTracing only for Arden.


That could mean ray tracing is hardware-accelerated only on Scarlett and not PS5. Or Scarlett could have more hardware dedicated to RT, while PS5 only gets something minor, like the AMD TMU (texture unit) patent.

Because it's sub-10 TFLOPs, with only 16GB of memory. But I think they'll lose more money on each console sold compared to the PS4.

Inclined to believe Arden isn't a specific product codename, but the codename for a family of products.

I guess you're balancing between:
1) Clock a small chip fast: despite diminishing performance returns, it's at least an increase, and a smaller chip is cheaper to make with good yields. You just have to figure out how to cool it.
2) Make a bigger chip and run it at slower clocks: easier to cool, but now your costs can potentially skyrocket above a certain size.

Probably no 'perfect' answer.

There isn't. And one big thing to consider is that progression to the next node shrink isn't as certain as it used to be: not only are new nodes not bringing as much of an improvement in density and power characteristics, they're also getting considerably more expensive. That might mean any gains you make by shrinking (i.e. more chips per wafer, better yields, less expensive cooling, etc.) are wiped out by the price hike of the new node. You might be looking at a scenario where shrinking actually increases the overall price of the package, and then it's a matter of whether the other benefits of shrinking outweigh that added cost.

So, it's possible that whatever they're designing right now is what they're going to be stuck with for the long haul.
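To put rough numbers on that, here's a minimal sketch of cost per good die using a textbook dies-per-wafer approximation and a Poisson yield model. Every wafer price and defect density below is invented purely for illustration; real foundry figures aren't public.

```python
# Napkin math for the die-cost argument above. All numbers are invented
# for illustration; real wafer prices and defect densities aren't public.
import math

def dies_per_wafer(die_mm2: float, wafer_d_mm: float = 300.0) -> int:
    """Gross dies per wafer, standard approximation accounting for edge loss."""
    r = wafer_d_mm / 2
    return int(math.pi * r ** 2 / die_mm2
               - math.pi * wafer_d_mm / math.sqrt(2 * die_mm2))

def poisson_yield(die_mm2: float, defects_per_cm2: float) -> float:
    """Fraction of dies with zero defects; smaller dies yield better."""
    return math.exp(-defects_per_cm2 * die_mm2 / 100.0)

def cost_per_good_die(die_mm2: float, wafer_price: float, d0: float) -> float:
    good_dies = dies_per_wafer(die_mm2) * poisson_yield(die_mm2, d0)
    return wafer_price / good_dies

# Hypothetical shrink: a ~360mm^2 die drops to ~200mm^2, but the newer
# node's wafers cost over twice as much and start with more defects.
print(cost_per_good_die(360, wafer_price=8000, d0=0.2))   # ~$102 per die
print(cost_per_good_die(200, wafer_price=18000, d0=0.3))  # ~$107 per die
```

With those made-up inputs, the smaller die on the newer node comes out slightly more expensive, which is exactly the scenario described above.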
 

thuway

Member
Oct 27, 2017
5,168
Komachi mentioned RayTracing only for Arden.


That could mean ray tracing is hardware-accelerated only on Scarlett and not PS5. Or Scarlett could have more hardware dedicated to RT, while PS5 only gets something minor, like the AMD TMU (texture unit) patent.

Because it's sub-10 TFLOPs, with only 16GB of memory. But I think they'll lose more money on each console sold compared to the PS4.
I think this might be key!

Sony spent the bulk of their R&D on the SSD, cooling, and GPU. This would give the PS5 the magic of the SSD and the GPU performance edge.


Microsoft probably put theirs towards ray tracing, expanding those ray tracing units; hence the words of Matt Booty: 'We will have the most immersive console.'
 

VX1

Member
Oct 28, 2017
7,007
Europe
After a two week ban, I'm back. Did I miss anything? :)
(JK, I kept up with the thread)

I think Microsoft already "told" us their memory configuration in the CGI video. They have ten 14Gbps-certified chips running at an unknown clock. That means 10GB (if they go all 1GB chips), 20GB (if they go all 2GB chips), or anything in between (if they mix the chip types).

Regarding bandwidth, if the CGI trailer means anything (and the X CGI trailer was very accurate), they have a 320-bit interface, and that's written in stone because E3 2019 was already too late to change the APU silicon. From here we have 3 options:
1) They are still using 14Gbps-certified chips. Consoles always down-clock the memory a bit, so if we look at how much the X down-clocked its GDDR5 memory, we can expect something around 13.6Gbps, which will result in around 544GB/s.
2) They will upgrade to 16Gbps, which after the same down-clock will result in around 622GB/s.
3) They will upgrade to 18Gbps, which after the same down-clock will result in around 700GB/s.

IMO 544GB/s sounds like enough. The 5700XT has 448GB/s of bandwidth, and even accounting for the bandwidth the CPU takes in a shared memory setup, a Zen 2 will never use 100GB/s. So a 544GB/s bus would provide more bandwidth than the 5700XT has, which sounds great to me even for a 10TF GPU. But hey, if Microsoft is willing to spend more money on bandwidth, all power to them; developers will use it, but it sounds unnecessary.
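For reference, the arithmetic behind those three figures is just (bus width / 8) × per-pin data rate. A quick sketch, treating the 320-bit bus and the down-clocked rates above as assumptions rather than confirmed specs:

```python
# Peak GDDR6 bandwidth = (bus width / 8 bits per byte) * per-pin data rate.
# The 320-bit bus and the down-clocked rates are this post's assumptions.
def bandwidth_gb_s(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits / 8 * gbps_per_pin

for rated, downclocked in [(14, 13.6), (16, 15.56), (18, 17.5)]:
    print(f"{rated}Gbps chips at {downclocked}Gbps -> "
          f"{bandwidth_gb_s(320, downclocked):.0f} GB/s")
# 14Gbps chips at 13.6Gbps  -> 544 GB/s
# 16Gbps chips at 15.56Gbps -> 622 GB/s
# 18Gbps chips at 17.5Gbps  -> 700 GB/s
```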


It's really hard to know what clocks are possible on a new architecture that we know nothing about. It's just as hard to guess what clocks are possible on a new node like 7nm before it's been widely used. Guessing what clocks a new architecture AND a new node can do is virtually impossible. I guessed that the next-gen GPUs would be around 1550MHz and I thought I was being very optimistic; 2000MHz is a real shock. So I don't think you should blame Colbert for expecting lower clocks. For anyone who knows hardware, a 2GHz console sounds like sci-fi.

The demo was 1080p/30fps, with RT used for reflections only. Another thing to note is that the reflections are perfect reflections on smooth surfaces in an urban area, without objects with small holes in them like fences and trees, which makes it the best possible scenario for RT. So I wouldn't take that demo very seriously; it's very misleading.

ROPs sit outside the CU, not inside of it. So a 36CU GPU and a 44CU GPU will have the same number of ROPs. ROPs, just like CU performance, are affected by clock speed. So even though a 36CU GPU running at 2170MHz and a 44CU GPU running at 1775MHz have the exact same TF performance, the 36CU GPU has much better ROP performance: both have 64 ROPs, but one set is running at 1775MHz and the other at 2170MHz.

People love to talk about TF, but it's only one parameter in GPU performance.
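A quick sketch of that comparison, using the usual GCN/RDNA throughput formula (CUs × 64 lanes × 2 ops per clock) and assuming 64 ROPs on both hypothetical configs:

```python
# FLOPS = CUs * 64 lanes * 2 ops per clock (FMA); pixel fill = ROPs * clock.
# Both configs are the hypothetical ones from the post above, with 64 ROPs.
def tflops(cus: int, mhz: int) -> float:
    return cus * 64 * 2 * mhz * 1e6 / 1e12

def fill_gpix_s(rops: int, mhz: int) -> float:
    return rops * mhz * 1e6 / 1e9

for cus, mhz in [(36, 2170), (44, 1775)]:
    print(f"{cus}CU @ {mhz}MHz: {tflops(cus, mhz):.2f} TF, "
          f"{fill_gpix_s(64, mhz):.1f} Gpix/s")
# 36CU @ 2170MHz: 10.00 TF, 138.9 Gpix/s
# 44CU @ 1775MHz: 10.00 TF, 113.6 Gpix/s  (same TF, ~18% less fill rate)
```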

Welcome back :)
 

Deleted member 8784

User requested account closure
Banned
Oct 26, 2017
1,502
Inclined to believe Arden isn't a specific product codename, but the codename for a family of products.

Arden is an area in Warwickshire pretty close to Stratford. I've no idea if it's a big tinfoil-hat thing, but I've seen some people mention a Shakespeare theme with the codenames, and that name would fit?

Or slap me down if this is known / wrong, I don't really know what I'm on about.
 

chris 1515

Member
Oct 27, 2017
7,076
Barcelona Spain
Komachi mentioned RayTracing only for Arden.


That could mean ray tracing is hardware-accelerated only on Scarlett and not PS5. Or Scarlett could have more hardware dedicated to RT, while PS5 only gets something minor, like the AMD TMU (texture unit) patent.

Because it's sub-10 TFLOPs, with only 16GB of memory. But I think they'll lose more money on each console sold compared to the PS4.

Because Sony's is hardware too, but probably not an AMD solution; most probably Sony's own solution, or (less probably) one coming, at least for the ray tracing (RT) part, from PowerVR, for example... Nothing to do with the photon mapping patent, which covers another part of the global illumination process: the light transport algorithm...
 

Dizastah

Member
Oct 25, 2017
6,126
Excuse my ignorance, but is everyone saying 36 CUs because that would make the PS5 perfectly compatible with the PS4? Would a higher CU count make backwards compatibility that much more difficult?
 

sncvsrtoip

Banned
Apr 18, 2019
2,773
Excuse my ignorance, but is everyone saying 36 CUs because that would make the PS5 perfectly compatible with the PS4? Would a higher CU count make backwards compatibility that much more difficult?
I think that's the least important reason. 2GHz means the number of CUs can't be big, as power consumption would be huge, and 2GHz with 36 CUs (9.216TF) is on par with previous leaks/predictions.
 
Last edited:

DrDamn

Member
Oct 27, 2017
466
Oct 27, 2017
7,178
Somewhere South
Excuse my ignorance, but is everyone saying 36 CUs because that would make the PS5 perfectly compatible with the PS4? Would a higher CU count make backwards compatibility that much more difficult?

PS4 game code doesn't seem to be very abstracted. So, if you want to run the same code without patching or recompiling for a new target, you want the hardware to match as closely as possible, or else you're going to introduce a bunch of variance that can break things. That's how the PS4 Pro did its "BC" with the PS4, and the Boost Mode, which did as little as raise the frequency by a mere 111MHz, already broke some things in some games (a surprisingly low number of them, though).

Another thing I have to mention is that, while theoretically possible (Nvidia has done it before), AMD has never shipped a GCN GPU with 3 Shader Engines (and RDNA is a GCN evolution/derivative, kinda sorta). It was always 1, 2, or 4. Maybe Navi can do it, maybe it can't; we have no way to know right now.

That said, a 54CU GPU running at 2GHz would chug a ridiculous amount of power. That ain't happening, fam.
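A back-of-the-envelope way to see why, assuming the common rule of thumb that GPU power scales with CU count and roughly with the cube of frequency (since voltage has to rise along with clocks). This is an assumption for illustration, not a measured Navi power curve:

```python
# Rule-of-thumb power scaling: P ~ CUs * f^3, treating voltage as roughly
# proportional to frequency near the top of the V/f curve (P ~ C * V^2 * f).
def rel_power(cus: int, ghz: float, base=(36, 2.0)) -> float:
    return (cus / base[0]) * (ghz / base[1]) ** 3

def tf(cus: int, ghz: float) -> float:
    return cus * 64 * 2 * ghz / 1000  # TFLOPS

for cus, ghz in [(36, 2.0), (54, 2.0), (54, 1.7)]:
    print(f"{cus}CU @ {ghz}GHz: {tf(cus, ghz):.2f} TF, "
          f"{rel_power(cus, ghz):.2f}x the power of 36CU @ 2GHz")
# 36CU @ 2.0GHz:  9.22 TF, 1.00x
# 54CU @ 2.0GHz: 13.82 TF, 1.50x  <- the "ridiculous amount of power"
# 54CU @ 1.7GHz: 11.75 TF, 0.92x  <- wider but slower wins on efficiency
```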
 
Last edited:

Deleted member 5764

User requested account closure
Banned
Oct 25, 2017
6,574
Forgive me if this is a simple question, but do we have any insight as to why PS5's specs are leaking while we seem to have nothing on Scarlett?
 
Oct 25, 2017
13,246
Forgive me if this is a simple question, but do we have any insight as to why PS5's specs are leaking while we seem to have nothing on Scarlett?

Likely because PS5 dev kits have been out in the wild longer and have proliferated amongst more devs. We've also known for some time about some of the codenames related to the PS5 that form the basis of new leaks (e.g. Gonzalo).
 

dgrdsv

Member
Oct 25, 2017
12,062
That could mean ray tracing is hardware-accelerated only on Scarlett and not PS5. Or Scarlett could have more hardware dedicated to RT, while PS5 only gets something minor, like the AMD TMU (texture unit) patent.
Again, this isn't how it works with AMD's semi-custom division. Clients are free to choose any mix and any proportion of AMD tech they want implemented into their designs, but they can't choose between, say, two different h/w RT solutions, simply because it makes zero sense for AMD to even have two of them in the first place.
It's possible that one console will have a faster implementation of RT h/w due to, for example, a higher number of RT cores in its APU compared to the competition. But even this is rather unlikely, because the amount of RT processing h/w a chip has is directly coupled to the chip's internal and external bandwidths and its shading throughput, which are both likely to be very close for PS5 and Xb4, making such a difference basically useless in practice.
 

anexanhume

Member
Oct 25, 2017
12,918
Maryland
Komachi mentioned RayTracing only for Arden.


That could mean ray tracing is hardware-accelerated only on Scarlett and not PS5. Or Scarlett could have more hardware dedicated to RT, while PS5 only gets something minor, like the AMD TMU (texture unit) patent.

Because it's sub-10 TFLOPs, with only 16GB of memory. But I think they'll lose more money on each console sold compared to the PS4.
I trust Matt
 

RoboPlato

Member
Oct 25, 2017
6,827
I think this might be key!

Sony spent the bulk of their R&D on the SSD, cooling, and GPU. This would give the PS5 the magic of the SSD and the GPU performance edge.


Microsoft probably put theirs towards ray tracing, expanding those ray tracing units; hence the words of Matt Booty: 'We will have the most immersive console.'
I'm not sure if it'll be as clear-cut as that, but I do see a situation where the consoles have similar specs but are customized to accelerate different types of gaming workloads. Some engines/games/genres could run better on one machine than the other, which would be an interesting outcome.
 
Oct 27, 2017
7,178
Somewhere South
I can imagine Scarlett having slightly better RT simply due to having, say, 20 WGPs vs. the PS5 packing only 18 WGPs. The 5700XT has 16 more TMUs than the 5700, and if the RT cores are linked to them, that could give it a slight advantage.

That said, BVH traversal is one of those things that likes running faster rather than wider, IIRC.
 
Oct 27, 2017
699
So the PlayStation meeting is allegedly now scheduled for Feb 12, according to 4chan leaks.

Scarlett's more in-depth reveal has to arrive before that if they want to capitalise on all the "PS5 is weak" misinformation that has been circulating.

It will be difficult to run with that after the official PS5 reveal ;)
 
Nov 2, 2017
2,275
Yeah sure, but the point was that it's much more efficient than Nvidia's solution, although probably not as accurate.
Then again, game development is never about accurately replicating everything to perfection; it's about getting as close as possible for as cheap as possible, which is why I think Crytek's direction with the tech is better.
FYI, the 1660Ti, which is in the same ballpark as the V56, gets better performance in BF5 with ray-traced reflections than the V56 gets in the Neon Noir tech demo. Also, BF5 is an actual game, not a controlled setting like a tech demo or cutscene, where you can generally reach higher graphical fidelity than in real games.

Now, I wouldn't conclude anything based on one example, but the most apples-to-apples comparison we have (both cases are ray-traced reflections) points towards the opposite of what you're claiming, so I have to ask: what information are you basing your claim on?
 
Jan 17, 2019
964
Komachi mentioned RayTracing only for Arden.


That could mean ray tracing is hardware-accelerated only on Scarlett and not PS5. Or Scarlett could have more hardware dedicated to RT, while PS5 only gets something minor, like the AMD TMU (texture unit) patent.

Because it's sub-10 TFLOPs, with only 16GB of memory. But I think they'll lose more money on each console sold compared to the PS4.

But how does Komachi's tweet imply that PS5 doesn't have RT? He was only being specific about Xbox in that tweet.
 

SeanMN

Member
Oct 28, 2017
2,188
Not sure if this was posted yet - new article from Gamespot based on interviews around E3 this year.

"I think the area that we really want to focus on next generation is frame rate and playability of the games," Spencer said. "Ensuring that the games load incredibly fast, ensuring that the game is running at the highest frame rate possible. We're also the Windows company, so we see the work that goes on [for] PC and the work that developers are doing. People love 60 frames-per-second games, so getting games to run at 4K 60 [FPS] I think will be a real design goal for us.
"Another thing that will be a little bit new for us is the fact that we want to also respect the compatibility of the controllers that you already have. This generation, we came out with the Elite controller, we've done work on controllers and people have invested in personalized controllers, the things that they love and we want to make those compatible with future generations of our console as well. So really, the things that you've bought from us, whether the games or the controllers that you're using, we want to make sure those are future compatible with the highest fidelity version of our console, which at that time will obviously be the one we've just launched."

There's also another article on Gamespot with Phil Spencer focused on xCloud:
We didn't say that [a streaming console was in the works]. I think maybe some people thought that that was the disc-less one that we just shipped. We are not working on a streaming-only console right now.
 
Oct 27, 2017
4,018
Florida
A lower clock and wider GPU will be more power efficient and likely more cost efficient throughout the generation.

A large APU will enjoy cost reductions from node shrinks; a more intricate cooling solution will not.

Everyone will get at least 60 active CUs.

I don't think that's true. If you node shrink a smaller APU, it will also draw less power (and put out less heat). A node shrink lifts all boats.
 

-Le Monde-

Avenger
Dec 8, 2017
12,613
What does it mean for a potential PS5 boost mode if the system does end up using two clock speeds (PS4/Pro) to run PS4 games?

Is a PS5 boost mode reserved only for patched games?

That's the one thing I'm concerned about. Everything else sounds great.
 

thuway

Member
Oct 27, 2017
5,168
I'm not sure if it'll be as clear-cut as that, but I do see a situation where the consoles have similar specs but are customized to accelerate different types of gaming workloads. Some engines/games/genres could run better on one machine than the other, which would be an interesting outcome.
👍 Agreed. I'm just super curious about their SSD implementation. The idea that an SSD can be used as a RAM disk, or to stream data to allow richer worlds, has me intrigued!
 

BreakAtmo

Member
Nov 12, 2017
12,980
Australia
So the PlayStation meeting is allegedly now scheduled for Feb 12, according to 4chan leaks.

Scarlett's more in-depth reveal has to arrive before that if they want to capitalise on all the "PS5 is weak" misinformation that has been circulating.

It will be difficult to run with that after the official PS5 reveal ;)

I believe the same leak claimed there would be a State of Play with Star Wars Jedi: Fallen Order as one of the headliners... except that game seems to have a Microsoft Xbox marketing deal. Most likely a lazy fake.
 
Oct 27, 2017
699
I believe the same leak claimed there would be a State of Play with Star Wars Jedi: Fallen Order as one of the headliners... except that game seems to have a Microsoft Xbox marketing deal. Most likely a lazy fake.

The meeting would be scheduled for after the APU is finalised in Q1 2020, because why not?

I assume the Jedi: Fallen Order info was added to the leak to add spice and generate more discussion ;)
 
Oct 26, 2017
6,151
United Kingdom
ROPs sit outside the CU, not inside of it. So a 36CU GPU and a 44CU GPU will have the same number of ROPs. ROPs, just like CU performance, are affected by clock speed. So even though a 36CU GPU running at 2170MHz and a 44CU GPU running at 1775MHz have the exact same TF performance, the 36CU GPU has much better ROP performance: both have 64 ROPs, but one set is running at 1775MHz and the other at 2170MHz.

People love to talk about TF, but it's only one parameter in GPU performance.

This.

In fact, the whole GPU pipeline benefits from higher clocks: the front end, geometry, rasterisation, shading, and the render backends, as you mentioned.

I would guess Sony weighed the option of a much larger die and decided instead that they'd get more benefit from a smaller, faster GPU than from a wider, slower one.

I honestly can't imagine why MS wouldn't come to the same conclusion, given the data available to them will be the same.

There isn't. And one big thing to consider is that progression to the next node shrink isn't as certain as it used to be: not only are new nodes not bringing as much of an improvement in density and power characteristics, they're also getting considerably more expensive. That might mean any gains you make by shrinking (i.e. more chips per wafer, better yields, less expensive cooling, etc.) are wiped out by the price hike of the new node. You might be looking at a scenario where shrinking actually increases the overall price of the package, and then it's a matter of whether the other benefits of shrinking outweigh that added cost.

So, it's possible that whatever they're designing right now is what they're going to be stuck with for the long haul.

This.
 

Thera

Banned
Feb 28, 2019
12,876
France
What does it mean for a potential PS5 boost mode if the system does end up using two clock speeds (PS4/Pro) to run PS4 games?

Is a PS5 boost mode reserved only for patched games?

That's the one thing I'm concerned about. Everything else sounds great.
You have, with assumptions:
  • Unpatched PS4 games: can run natively / boost mode Pro / boost mode PS5 (not sure; most are already at their best on boost)
  • Patched / already Pro-enabled games: can run natively (maybe?) / Pro mode with or without downsampling / boost mode PS5
This is my take. Boost mode PS5 would allow better frame rates and/or better average resolution in games with dynamic resolution (which is most recent third-party games).
 

AegonSnake

Banned
Oct 25, 2017
9,566
It's really hard to know what clocks are possible on a new architecture that we know nothing about. It's just as hard to guess what clocks are possible on a new node like 7nm before it's been widely used. Guessing what clocks a new architecture AND a new node can do is virtually impossible. I guessed that the next-gen GPUs would be around 1550MHz and I thought I was being very optimistic; 2000MHz is a real shock. So I don't think you should blame Colbert for expecting lower clocks. For anyone who knows hardware, a 2GHz console sounds like sci-fi.
But I literally said I DON'T blame Colbert in the post I quoted. ;p
And tbh, I don't blame him.

2.0GHz is insane. IIRC, even I thought 2.0GHz was impossible in a console GPU; I was only talking about desktop GPUs going over 2.0GHz.

However, I was able to justify a 1.8GHz clock speed. I remember doing some calculations based on AMD's 7nm improvements, listed below.

[Image: AMD slide comparing 7nm vs. 14nm improvements (7nm-vega_04.jpg)]

This was before the Navi/RDNA reveal, dating back to last year. I was under the impression they would still be using the GCN arch and wouldn't be able to go above 64 CUs. To me, that meant three things:

1) A smaller node compared to the Pro, since 2x of 36 CUs would be 72, and that would've been impossible.
2) The 1.35x performance multiplier suggested clock speed increases of 1.35x.
3) This slide was in comparison to 14nm, not the 16nm the consoles used, which left even more room for CUs and power efficiency: a 2.3x improvement. If we went by the X1X (40 CUs at 1172MHz), which utilized a better form of cooling to really push the clocks, we would've seen a 92 CU GPU with a 1.58GHz clock speed. But since we were limited by the 64 CU GCN cap at the time, the chip would've been much smaller (64 vs 92 CUs on the same size die as the X1X), and they would've been able to increase the clocks by a massive amount, well over 1.58GHz.

Of course, none of this matters now, since RDNA chips already have double the transistors of the Polaris 36 CU GPUs. But back then, it wasn't out of the realm of possibility to have a 1.8GHz 60 CU Polaris GPU in a console and well over 2.0GHz in a desktop part, but no one wanted to believe it. Everyone kept pointing to the Radeon VII as proof that 7nm wouldn't have 2x density, despite it clearly looking like a stop-gap card. We are doing the same with the 5700 now, which again I feel is a stop-gap card. We won't know what the consoles can do until the RDNA 2 cards arrive next year, or at the very least when the 5800 drops with 56-60 CUs and shows how much power a 1.58GHz GPU consumes.
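For what it's worth, the napkin math in that old estimate is easy to reconstruct from the public X1X figures and the slide's multipliers:

```python
# Reconstructing that old estimate: scale the Xbox One X GPU
# (40 CUs @ 1172MHz on 16nm) by the slide's claims of ~2.3x density
# and ~1.35x frequency at the same power. Napkin math, nothing more.
x1x_cus, x1x_mhz = 40, 1172

print(f"{x1x_cus * 2.3:.0f} CUs")    # 92 CUs in the same die area
print(f"{x1x_mhz * 1.35:.0f} MHz")   # 1582 MHz, i.e. the ~1.58GHz above

# With GCN capped at 64 CUs, the leftover area budget could go to clocks:
tf_64 = 64 * 64 * 2 * (x1x_mhz * 1.35) * 1e6 / 1e12
print(f"64 CUs @ 1.58GHz = {tf_64:.1f} TF")  # ~13.0 TF before any extra OC
```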
 

VX1

Member
Oct 28, 2017
7,007
Europe

Important part:

"With their HBM2E memory set to go into mass production in 2020, SK Hynix expects the new memory to be used on "supercomputers, machine learning, and artificial intelligence systems that require the maximum level of memory performance." All of these are distinctly high-end applications where HBM2 is already being heavily used today, so HBM2E is a natural extension to that market. At the same time, it's also a reflection of the price/margins that HBM2 carries. HBM2 has remained (stubbornly) high-priced well after its release – a situation that memory manufacturers seem to be perfectly content with – and we're not expecting HBM2E to change that. So expect to see SK Hynix's HBM2E memory remain the domain of servers and other high-end equipment."
 