Next-gen PS5 and next Xbox speculation launch thread |OT6| - Mostly Fan Noise and Hot Air

How much money are you willing to pay for a next generation console?

  • Up to $199

    Votes: 29 1.4%
  • Up to $299

    Votes: 46 2.2%
  • Up to $399

    Votes: 296 14.4%
  • Up to $499

    Votes: 983 47.8%
  • Up to $599

    Votes: 420 20.4%
  • Up to $699

    Votes: 91 4.4%
  • I will pay anything!

    Votes: 191 9.3%

  • Total voters
    2,056
Oct 27, 2017
3,583
Somewhere South
It's less that SIEA is running the show and more that PlayStation is being run from the USA and they're restructuring so they have unified messaging and marketing across all regions. And you can be sure that is going to be done from HQ.

Being from marketing and advertising myself, it sucks to see that their creative teams are being dismantled. Their European marketing team has always done an amazing job.
 

PLASTICA-MAN

Member
Oct 26, 2017
8,492
To both of you...

  • The PS4 was among the first (if not the first) mainstream products to use 8Gb (1GB) GDDR5 chips back in 2013. So just because something is not being made right now (it's scheduled to go into production between 2019/2020) doesn't mean it can't make it into a next-gen console.

  • The issue with HBM isn't that there is more demand for it; it's actually that there isn't enough demand for it, and that's why prices remain high.

  • And something you both may not know about HBM3 (or HBM in general): what really makes it expensive is the fact that they use an interposer. That's literally having silicon on silicon. But it's necessary because it "was" the only way they saw to accommodate the 1024-bit/stack bus width. With HBM3, one of the game changers of that spec is that they can instead choose to use a smaller bus of 512-bit/stack.

  • Why this is great is that it means you can do away with that costly silicon interposer entirely and use an organic interposer (already used in MCM designs; the stuff that chiplets are put on) instead. These are much cheaper than silicon interposers while being able to handle significantly denser routing than PCBs. And that's just one of two ways HBM3 is going to be cheaper. It retains the same bandwidth throughput by doubling the clock speed while halving the bus width.

  • If Sony had decided to go with HBM, it would have been a decision they could have made as late as last year, and that is time enough to tape out a chip that incorporates an HBM memory controller as opposed to a GDDR one. And I am almost certain Sony will have a better bead on these things and their availability than us, or at least enough to make an informed decision.
It's hard for some to fathom. None of us here is a hardware architect. A good architect always plans hardware with a vision that can sustain unpredictable future demands, even when that is hard for the rest of us to anticipate.
 

Munki

Member
Apr 30, 2019
131
It's less that SIEA is running the show and more that PlayStation is being run from the USA and they're restructuring so they have unified messaging and marketing across all regions. And you can be sure that is going to be done from HQ.

Being from marketing and advertising myself, it sucks to see that their creative teams are being dismantled. Their European marketing team has always done an amazing job.
Perfect timing for MS to snag up some of those folks, get the Scarlett hype train goin in the EU.
 

Detective

Member
Oct 27, 2017
2,630
Sony has always put something unique and future-proof in their consoles imo. Last gen it was Blu-ray and HDMI; this gen it was the memory setup, I think. Wonder what's in store for PS5!?

Exciting times indeed.
What I'm looking forward to most is the unboxing of the consoles and that new-console smell. Don't know if it's weird but I loved it :D
Reminds me of the old days!
 
Sony has always put something unique and future-proof in their consoles imo. Last gen it was Blu-ray and HDMI; this gen it was the memory setup, I think. Wonder what's in store for PS5!?

Exciting times indeed.
What I'm looking forward to most is the unboxing of the consoles and that new-console smell. Don't know if it's weird but I loved it :D
Reminds me of the old days!
Which old days? From when you posted about Ybarra leaving MS in the Sony layoff thread?

(sorry for the hijack but I mean, it couldn't be more obvious)
 

Munki

Member
Apr 30, 2019
131
Sony has always put something unique and future-proof in their consoles imo. Last gen it was Blu-ray and HDMI; this gen it was the memory setup, I think. Wonder what's in store for PS5!?

Exciting times indeed.
What I'm looking forward to most is the unboxing of the consoles and that new-console smell. Don't know if it's weird but I loved it :D
Reminds me of the old days!
Both Sony and MS have done a garbage job of future proofing their products, that's why the Pro and X exist.
 

gremlinz1982

Member
Aug 11, 2018
1,554
To both of you...

  • The PS4 was among the first (if not the first) mainstream products to use 8Gb (1GB) GDDR5 chips back in 2013. So just because something is not being made right now (it's scheduled to go into production between 2019/2020) doesn't mean it can't make it into a next-gen console.

  • The issue with HBM isn't that there is more demand for it; it's actually that there isn't enough demand for it, and that's why prices remain high.

  • And something you both may not know about HBM3 (or HBM in general): what really makes it expensive is the fact that they use an interposer. That's literally having silicon on silicon. But it's necessary because it "was" the only way they saw to accommodate the 1024-bit/stack bus width. With HBM3, one of the game changers of that spec is that they can instead choose to use a smaller bus of 512-bit/stack.

  • Why this is great is that it means you can do away with that costly silicon interposer entirely and use an organic interposer (already used in MCM designs; the stuff that chiplets are put on) instead. These are much cheaper than silicon interposers while being able to handle significantly denser routing than PCBs. And that's just one of two ways HBM3 is going to be cheaper. It retains the same bandwidth throughput by doubling the clock speed while halving the bus width.

  • If Sony had decided to go with HBM, it would have been a decision they could have made as late as last year, and that is time enough to tape out a chip that incorporates an HBM memory controller as opposed to a GDDR one. And I am almost certain Sony will have a better bead on these things and their availability than us, or at least enough to make an informed decision.
1. Sony had already designed a console around GDDR5. When densities doubled, they simply doubled the allocation because they did not have to change anything in design. This is not the same as what you are proposing.

2. There is demand for HBM. Samsung was on record as saying that they could double output and it would still not meet demand. If demand > supply, then prices have to be high because there is more money in contention for limited goods. If supply > demand, then prices drop because there are more goods competing for limited funds and manufacturers need to undercut on price... in fact, producers are likely to try and drop prices to stimulate demand.

3. These consoles are being built to serve as mass-market devices. There is always going to be a bigger emphasis on trying to bring costs down than on innovating if that is going to drive costs north. This is why we are unlikely to see the fastest chips on the market, or the most expensive solution. It is why you are no longer seeing bespoke chips on the market like Cell or Xenos. It is now "let AMD do it, and we will customize some aspects."

4. I am not in the engineering field, but I read quite a bit when I have the time. One thing that is common is that companies do not like running over budget or making changes unless they feel they are necessary. What is the advantage of going HBM over GDDR6? What is the cost, and how is that cost going to be recouped?

Everything that goes into these consoles costs something, and the consumer either has to bear that cost, or the company has to bite the bullet. There is always a chance that they might choose HBM (simply because it is an option), but HBM3? How would they be designing a console on something that is not yet in production? How would they test chip samples to see what needs to be tweaked?
 

AegonSnake

Member
Oct 25, 2017
5,415
Digital Foundry has said that due to the makeup of these new consoles, the tflops count is not going to be as relevant as it used to be. It's worth pointing out that neither Microsoft nor Sony has gone on record with a number, and the reason for that is likely that the number itself may not be all that high (let's say 10-ish), but in reality it may perform better than the number would indicate on paper. At least that's the way the Digital Foundry guys described it in a video I recall watching a while back.
I actually just read his PS5 news breakdown and it makes zero sense to me. GPU power DOES matter. And a guy who made his name counting pixels and frames should know this.

It will matter more than ever next gen, as the CPU allows devs to simulate weather systems, hundreds of NPCs and their daily cycles, and of course destruction. The GPU will still be needed to render all that. Not to mention the massive hit ray tracing and higher resolutions can have on the GPU. We have already seen this in the Navi 5700 XT cards: decent at 1440p, useless at native 4K. And this is when running current-gen games designed around a 1.3 teraflop console. With next-gen visual features, running games is going to be an even harder task even at 1440p.

And lastly, that Spider-Man SSD demo is cool, but the GPU is still only rendering PS4-quality graphics. Who is to say that the next-gen GPU will be able to handle a next-gen Manhattan being rendered that fast?
 

Pheonix

Member
Dec 14, 2018
1,744
St Kitts
Both Sony and MS have done a garbage job of future proofing their products, that's why the Pro and X exist.
That's not true.

No matter what any console manufacturer does, their hardware becomes "outdated" the second a smaller manufacturing node becomes available. That's just how it goes. And if you look at the PS4 Pro in particular, the only thing Sony changed was the GPU. They didn't add more RAM or use a different CPU (just clocked it up), which tells you that it wasn't something done because their hardware was outdated, but rather something done to at least remain relevant.

Think people need to understand that consoles are usually made the best way they can be made at the time they are being made, and to at least be good enough to last 5-6 years. It is literally impossible to make a future-proof console if by "future" you mean 6-7 years in the tech world.
1. Sony had already designed a console around GDDR5. When densities doubled, they simply doubled the allocation because they did not have to change anything in design. This is not the same as what you are proposing.

2. There is demand for HBM. Samsung was on record as saying that they could double output and it would still not meet demand. If demand > supply, then prices have to be high because there is more money in contention for limited goods. If supply > demand, then prices drop because there are more goods competing for limited funds and manufacturers need to undercut on price... in fact, producers are likely to try and drop prices to stimulate demand.

3. These consoles are being built to serve as mass-market devices. There is always going to be a bigger emphasis on trying to bring costs down than on innovating if that is going to drive costs north. This is why we are unlikely to see the fastest chips on the market, or the most expensive solution. It is why you are no longer seeing bespoke chips on the market like Cell or Xenos. It is now "let AMD do it, and we will customize some aspects."

4. I am not in the engineering field, but I read quite a bit when I have the time. One thing that is common is that companies do not like running over budget or making changes unless they feel they are necessary. What is the advantage of going HBM over GDDR6? What is the cost, and how is that cost going to be recouped?

Everything that goes into these consoles costs something, and the consumer either has to bear that cost, or the company has to bite the bullet. There is always a chance that they might choose HBM (simply because it is an option), but HBM3? How would they be designing a console on something that is not yet in production? How would they test chip samples to see what needs to be tweaked?
Again, while more expensive than GDDR6, HBM3 can cost significantly less than HBM2, especially if applied the way it is being proposed to be applied. What that cost delta is isn't something that you or I know. But here are some advantages.

Let's say we are comparing 20GB of GDDR6 to 20GB of HBM3, and let's say the GDDR6 solution costs $100 and the HBM3 solution costs $130.
  • With more adoption, the cost of HBM3 will fall faster than GDDR6.
  • Smaller and cheaper PCB required for HBM3.
  • Less power draw, meaning more power can go to the APU.
  • Less heat generated.
  • Much higher bandwidth.
  • Less space taken up by the memory controller on the chip, meaning you will need a smaller APU and can also clock that APU higher since it will generate less heat.
Now the question is: how much will the cost savings in other areas of the system offset the cost of using HBM3? And is the resulting cost worth the benefits it gives you? What if Sony has a 320mm2 chip that cost them $150 and MS has a 370mm2 chip that cost them $190? Or Sony's PCB cost them $14 to MS's $22 PCB? All these things add up.
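To make the "it all adds up" point concrete, here is a toy bill-of-materials comparison. Every dollar figure is a hypothetical one from this post (memory $100 vs $130, APU $190 vs $150, PCB $22 vs $14), not a real cost:

```python
# Toy BOM comparison using the hypothetical figures from the post above.
# None of these numbers are real; they only illustrate how a pricier
# memory choice could be offset by savings elsewhere in the system.

def bom_total(memory, apu, pcb):
    """Sum the component costs for one console configuration."""
    return memory + apu + pcb

# GDDR6 build: cheaper memory, but (per the argument above) a bigger APU
# and a larger, more complex PCB.
gddr6_build = bom_total(memory=100, apu=190, pcb=22)

# HBM3 build: pricier memory, but a smaller APU and a simpler PCB.
hbm3_build = bom_total(memory=130, apu=150, pcb=14)

print(gddr6_build, hbm3_build)  # 312 vs 294 in this made-up example
```

Whether the deltas would actually line up this way is exactly the open question; the sketch only shows why the memory line item alone doesn't settle it.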

And you are basing everything you are saying about HBM on what we know so far of it being very expensive. But none of that takes into account the radically different application methods or design of HBM3, which were all specifically intended to make it cheaper and easier to make.

Some more additions to better explain myself.

  • HBM3 and these changes have been listed by Samsung since 2016, with a scheduled release of 2019/2020. So that's something that both Sony and MS would have been fully aware of.
  • Some more insight into the interposer-free design. Currently, HBM2 requires a silicon interposer; both it and the GPU die sit on that interposer, and the interposer is then connected to the PCB. The interposer-free design is akin to what you see with any Ryzen CCX-based CPU. But now imagine that instead of having 2 CCXs and an I/O die, you have an APU and two HBM stacks. The substrate that Ryzen CPUs are on is significantly cheaper than what you typically have as the HBM interposer.
  • However, using this cheaper substrate means you can't have traces as dense as you would find in an HBM2 interposer, which is why the bus width per stack has been dropped from 1024-bit to 512-bit.
  • Another cost slashing initiative is to reduce the number of TSVs and the buffer die.
  • All these things have been known and planned since 2016.
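The halved-bus, doubled-clock trade described in the bullets above is just the standard bandwidth formula. A small sketch (the per-pin speeds here are illustrative placeholders, not official HBM figures):

```python
def stack_bandwidth_gbps(bus_width_bits, pin_speed_gbps):
    """Peak bandwidth of one memory stack in GB/s:
    bus width (bits) * per-pin data rate (Gbps) / 8 bits-per-byte."""
    return bus_width_bits * pin_speed_gbps / 8

# HBM2-style stack: 1024-bit bus at an illustrative 2.0 Gbps/pin.
hbm2_style = stack_bandwidth_gbps(1024, 2.0)   # 256.0 GB/s

# HBM3-style stack as described above: half the bus, double the clock.
hbm3_style = stack_bandwidth_gbps(512, 4.0)    # 256.0 GB/s

# Same throughput per stack, but the narrower bus relaxes trace density
# enough to move from a silicon interposer to a cheaper organic substrate.
assert hbm2_style == hbm3_style == 256.0
```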
 
Last edited:

sncvsrtoip

Member
Apr 18, 2019
1,362
I actually just read his PS5 news breakdown and it makes zero sense to me. GPU power DOES matter. And a guy who made his name counting pixels and frames should know this.

It will matter more than ever next gen, as the CPU allows devs to simulate weather systems, hundreds of NPCs and their daily cycles, and of course destruction. The GPU will still be needed to render all that. Not to mention the massive hit ray tracing and higher resolutions can have on the GPU. We have already seen this in the Navi 5700 XT cards: decent at 1440p, useless at native 4K. And this is when running current-gen games designed around a 1.3 teraflop console. With next-gen visual features, running games is going to be an even harder task even at 1440p.

And lastly, that Spider-Man SSD demo is cool, but the GPU is still only rendering PS4-quality graphics. Who is to say that the next-gen GPU will be able to handle a next-gen Manhattan being rendered that fast?
Agree, Richard contradicts himself in one sentence: "The teraflop war is irrelevant now, as we saw in our apples-to-apples GCN vs RDNA compute face-off, not to mention how key Xbox One X titles have stacked up against PS4 Pro equivalents." - OK, he showed that RDNA has a 1.3x perf/TF advantage over Polaris, but that changes nothing - you just have to keep it in mind and not look only at the TF number when comparing different architectures, and that's quite obvious. The second sentence is bizarre to me (maybe my English is too poor), but the XOX shows that teraflops (and proper bandwidth for the TF amount) have a huge impact on game quality - especially in RDR2.
 
Last edited:

AegonSnake

Member
Oct 25, 2017
5,415
Agree, Richard contradicts himself in one sentence: "The teraflop war is irrelevant now, as we saw in our apples-to-apples GCN vs RDNA compute face-off, not to mention how key Xbox One X titles have stacked up against PS4 Pro equivalents." - OK, he showed that RDNA has a 1.3x perf/TF advantage over Polaris, but that changes nothing - you just have to keep it in mind and not look only at the TF number when comparing different architectures, and that's quite obvious. The second sentence is bizarre to me (maybe my English is too poor), but the XOX shows that teraflops (and proper bandwidth for the TF amount) have a huge impact on game quality - especially in RDR2.
lol I didn't even notice that bit. He totally contradicts himself. If anything, this gen has shown how important the teraflop wars really are. I remember how, when the gen started, CoD: Ghosts was 720p on the Xbox One while the PS4 was 1080p. And now Pro games are 1440p while most MS games spend more time at native 4K than they do at 1440p.

I really liked his GCN vs RDNA analysis, but this is weird.
 

DrKeo

Member
Mar 3, 2019
986
Israel
Yes, this is probably 10-11 TFLOPs.
Yup.

Is there a proper definition for a console monster? ;) A GPU in a console above Vega 64 could be a monster for some devs, for sure
If we are north of Vega64, I’m happy.

Official minimum gpu for rdr2: Nvidia GeForce GTX 770 2GB ;)
I'll admit it, I was wrong regarding RDR2 and the 750ti.

BUT - the AMD minimum required GPU for RDR2 is the R9 280, which is 28CUs running at 933MHz, i.e. 3.3TF - basically closer to PS4 Pro than PS4 - while the One S, a GPU with almost 1/3 of the R9 280's power, runs the game just fine. So yeah, even the One S runs RDR2 better than a 3TF+ GCN PC card, but that's the nature of console optimization vs the PC's brute force. It doesn't mean that the R9 280 isn't almost 2x more powerful than the PS4, just that the PS4 version is much more optimized.
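For reference, that 3.3TF figure falls out of the usual GCN shader math (CUs x 64 shaders x 2 FLOPs per clock x clock speed):

```python
def gcn_teraflops(cus, clock_ghz, shaders_per_cu=64, flops_per_clock=2):
    """Theoretical FP32 throughput of a GCN GPU in TFLOPs."""
    return cus * shaders_per_cu * flops_per_clock * clock_ghz / 1000

# The 28CU @ 933MHz card cited above.
print(round(gcn_teraflops(28, 0.933), 2))   # ~3.34

# Launch PS4 for comparison: 18 CUs at 800MHz.
print(round(gcn_teraflops(18, 0.800), 2))   # ~1.84
```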

In the end, that was the original point which led to this nonsensical 750ti vs PS4 discussion: that the PS4 GPU on launch day was a low-end PC GPU at best (weaker than the $139 R9 260X), and Sony (and MS) went really cheap on us in 2013.

The only sure thing is that Jim Ryan is from SIEE, was marketing director there for years, and was head of SIEE. This is not a decision purely coming from someone at SIEA.

Other than that, it is sad, as always, for the people looking for a new job.

Perfect timing for MS to snag up some of those folks, get the Scarlett hype train goin in the EU.
Actually, if you look at the replies, Aaron Greenberg and Undead Labs directed them to their hiring pages.

We had an AMD Flute benchmark; did that indicate any sort of bandwidth?
Yes, I don't remember it by heart but it was something like 540GB/s. People take from leaks whatever fits their view, you will see someone recite the Flute's 2Ghz clock speed and at the same time totally ignore the 540GB/s bandwidth.

edit:
It was 529.4GB/s :)

Nope... 399 was the problem.
The BOM was the problem. You had a PS3 sold at over a $200 loss BOM vs MSRP and a 360 sold at almost a $200 loss BOM vs MSRP, and then you got a PS4 and One sold for a profit BOM vs MSRP. When Sony and Microsoft spent ~$100 BOM on the whole APU, while the generation before that wouldn't even cover the GPU alone, and that's before adjusting for inflation - that alone tells the whole story of this generation.
 
Last edited:

Pheonix

Member
Dec 14, 2018
1,744
St Kitts
lol I didn't even notice that bit. He totally contradicts himself. If anything, this gen has shown how important the teraflop wars really are. I remember how, when the gen started, CoD: Ghosts was 720p on the Xbox One while the PS4 was 1080p. And now Pro games are 1440p while most MS games spend more time at native 4K than they do at 1440p.

I really liked his GCN vs RDNA analysis, but this is weird.
I think what he means by "the TF war is irrelevant now" isn't that TFs don't matter, but that the "value" of the new RDNA TF is different from that of the current gen. So there is no point saying "yeah, we have 10TF", because that doesn't sound that impressive when compared to a 6TF XB1X or even a 10.7TF Stadia, whereas it's significantly more powerful than both (equivalent to like 13TF).

Before, we could just say the 6TF XB1X is 4.6x better than the XB1, but now coming out and saying 10TF Scarlett is not going to have the same oomph without some further explanation.
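That "equivalent to like 13TF" figure is just Digital Foundry's rough ~1.3x RDNA-vs-GCN perf-per-TF ratio applied to a 10TF number; a one-liner makes the conversion explicit (the ratio is a benchmark-derived estimate, not a hard spec):

```python
RDNA_PERF_PER_TF_VS_GCN = 1.3  # DF's rough GCN-vs-RDNA compute face-off ratio

def gcn_equivalent_tf(rdna_tf, ratio=RDNA_PERF_PER_TF_VS_GCN):
    """Express an RDNA TF figure in 'GCN-equivalent' TF, for comparison
    against current-gen consoles like the 6TF Xbox One X."""
    return rdna_tf * ratio

print(gcn_equivalent_tf(10.0))  # 13.0 "GCN TF"
```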
 

PlayBee

Member
Nov 8, 2017
1,595
lol i didnt even notice that bit. he totally contradicts himself. if anything, this gen has shown how important tflops wars really are. I remember how when the gen started, CoD Ghosts was 720 while the ps4 was 1080p. and now pro games are 1440p while most MS games spend more time at native 4k then they do at 1440p.

i really liked his GCN vs RDNA analysis but this is weird.
I feel like you guys are twisting his message there. He's just saying don't compare the TF of the new consoles to the current gen and assume that tells the whole story, or else you'll be disappointed.
 

Thorrgal

Member
Oct 26, 2017
4,847
Damn. Thanks for the info
Team 2019 were right....
But we already knew that...the info that the PS5 got delayed due to, among other things, BC, was shared on the thread like a year ago.

I can't recall by whom, but could have been Klee? Either him or Matt. Then Benji talked about "software" reasons...I think?

Man, maybe I can't recall the exact details, but I'm 100% sure both Matt and Benji said the PS5 had been delayed a year, for various reasons, some unspecified, some mentioning BC
 

Bosch

Member
May 15, 2019
363
Yup.


If we are north of Vega64, I’m happy.


I'll admit it, I was wrong regarding RDR2 and the 750ti.

BUT - the AMD minimum required GPU for RDR2 is the R9 280, which is 28CUs running at 933MHz, i.e. 3.3TF - basically closer to PS4 Pro than PS4 - while the One S, a GPU with almost 1/3 of the R9 280's power, runs the game just fine. So yeah, even the One S runs RDR2 better than a 3TF+ GCN PC card, but that's the nature of console optimization vs the PC's brute force. It doesn't mean that the R9 280 isn't almost 2x more powerful than the PS4, just that the PS4 version is much more optimized.

In the end, that was the original point which led to this nonsensical 750ti vs PS4 discussion: that the PS4 GPU on launch day was a low-end PC GPU at best (weaker than the $139 R9 260X), and Sony (and MS) went really cheap on us in 2013.



Actually, if you look at the replies, Aaron Greenberg and Undead Labs directed them to their hiring pages.


Yes, I don't remember it by heart but it was something like 540GB/s. People take from leaks whatever fits their view, you will see someone recite the Flute's 2Ghz clock speed and at the same time totally ignore the 540GB/s bandwidth.


The BOM was the problem. You had a PS3 sold at over a $200 loss BOM vs MSRP and a 360 sold at almost a $200 loss BOM vs MSRP, and then you got a PS4 and One sold for a profit BOM vs MSRP. When Sony and Microsoft spent ~$100 BOM on the whole APU, while the generation before that wouldn't even cover the GPU alone, and that's before adjusting for inflation - that alone tells the whole story of this generation.
The tablet CPU was the biggest problem, but with $100 more on the APU we would have gotten something much better.
 

Lausebub

Member
Nov 4, 2017
416
Maybe a dumb question, but why are CPUs as expensive as GPUs when they have a smaller die, no memory, and no extra PCB?
Is it because there are just more CPUs being sold?
 

Ruslnis

Member
Feb 26, 2018
2,243
But we already knew that...the info that the PS5 got delayed due to, among other things, BC, was shared on the thread like a year ago.

I can't recall by whom, but could have been Klee? Either him or Matt. Then Benji talked about "software" reasons...I think?

Man, maybe I can't recall the exact details, but I'm 100% sure both Matt and Benji said the PS5 had been delayed a year, for various reasons, some unspecified, some mentioning BC
Oh I forgot about that yeah
 

Snakeeee

Member
Jan 20, 2019
1,949
But we already knew that...the info that the PS5 got delayed due to, among other things, BC, was shared on the thread like a year ago.

I can't recall by whom, but could have been Klee? Either him or Matt. Then Benji talked about "software" reasons...I think?

Man, maybe I can't recall the exact details, but I'm 100% sure both Matt and Benji said the PS5 had been delayed a year, for various reasons, some unspecified, some mentioning BC
It was because of PS4 sales and the software lineup.
 

console lover

Member
Feb 19, 2018
6,032
Yup.


If we are north of Vega64, I’m happy.


I'll admit it, I was wrong regarding RDR2 and the 750ti.

BUT - the AMD minimum required GPU for RDR2 is the R9 280, which is 28CUs running at 933MHz, i.e. 3.3TF - basically closer to PS4 Pro than PS4 - while the One S, a GPU with almost 1/3 of the R9 280's power, runs the game just fine. So yeah, even the One S runs RDR2 better than a 3TF+ GCN PC card, but that's the nature of console optimization vs the PC's brute force. It doesn't mean that the R9 280 isn't almost 2x more powerful than the PS4, just that the PS4 version is much more optimized.

In the end, that was the original point which led to this nonsensical 750ti vs PS4 discussion: that the PS4 GPU on launch day was a low-end PC GPU at best (weaker than the $139 R9 260X), and Sony (and MS) went really cheap on us in 2013.



Actually, if you look at the replies, Aaron Greenberg and Undead Labs directed them to their hiring pages.


Yes, I don't remember it by heart but it was something like 540GB/s. People take from leaks whatever fits their view, you will see someone recite the Flute's 2Ghz clock speed and at the same time totally ignore the 540GB/s bandwidth.

edit:
It was 529.4GB/s :)


The BOM was the problem. You had a PS3 sold at over a $200 loss BOM vs MSRP and a 360 sold at almost a $200 loss BOM vs MSRP, and then you got a PS4 and One sold for a profit BOM vs MSRP. When Sony and Microsoft spent ~$100 BOM on the whole APU, while the generation before that wouldn't even cover the GPU alone, and that's before adjusting for inflation - that alone tells the whole story of this generation.
What could the 529.4 indicate as far as type of ram/bandwidth? Sorry to keep pumping you for info lol
 

DrKeo

Member
Mar 3, 2019
986
Israel
What could the 529.4 indicate as far as type of ram/bandwidth? Sorry to keep pumping you for info lol
It actually told us more than just the speed, it had a clamshell design with 16 1GB chips, so it means 16GB with a 256-bit interface. 529.4GB/s isn't really the "on the paper" number, it came from a real-world benchmark so the real spec number could be 540GB/s or even 570GB/s. It didn't give us the final figure but it did give us a ballpark.

If I had to convert the Flute leak into an actual console memory spec, I would say it's something similar to this:
- 256-bit bus.
- 16GB.
- GDDR6 18Gbps.
- Actually running at 17Gbps (lowers the cost, heat and power draw).
- 544GB/s bus speed.
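The 544GB/s line in that sketch follows directly from pin speed times bus width; a quick sanity check (the 17Gbps downclock is the guess above, not a leaked figure):

```python
def bus_bandwidth_gbps(pin_speed_gbps, bus_width_bits):
    """Peak memory bandwidth in GB/s for a given pin speed and bus width."""
    return pin_speed_gbps * bus_width_bits / 8

# 18Gbps chips on a 256-bit bus: the rated maximum.
print(bus_bandwidth_gbps(18, 256))  # 576.0 GB/s

# Guessed 17Gbps downclock on the same 256-bit bus.
print(bus_bandwidth_gbps(17, 256))  # 544.0 GB/s
```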
 
Last edited:

VX1

Member
Oct 28, 2017
4,487
Europe
It actually told us more than just the speed, it had a clamshell design with 16 1GB chips, so it means 16GB with a 256-bit interface. 529.4GB/s isn't really the "on the paper" number, it came from a real-world benchmark so the real spec number could be 540GB/s or even 570GB/s. It didn't tell us the final figure but it did give us a ballpark.

If I had to convert the Flute leak into an actual console spec, I would say it's something similar to this:
- 256-bit bus.
- 16GB.
- GDDR6 18Gbps.
- Actually running at 17Gbps (lowers the cost, heat and power draw).
- 544GB/s bus speed.
Sounds perfectly reasonable, and like something we should expect in a (I'd say $399) PS5.
 

Pheonix

Member
Dec 14, 2018
1,744
St Kitts
Maybe a dumb question, but why are CPUs as expensive as GPUs when they have a smaller die, no memory, and no extra PCB?
Is it because there are just more CPUs being sold?
It's not a dumb question...

Honest answer? Because they can price them that high. The real head-scratcher: did you know that a GPU typically has more transistors than a CPU? Like twice as many.
 

console lover

Member
Feb 19, 2018
6,032
It actually told us more than just the speed, it had a clamshell design with 16 1GB chips, so it means 16GB with a 256-bit interface. 529.4GB/s isn't really the "on the paper" number, it came from a real-world benchmark so the real spec number could be 540GB/s or even 570GB/s. It didn't give us the final figure but it did give us a ballpark.

If I had to convert the Flute leak into an actual console memory spec, I would say it's something similar to this:
- 256-bit bus.
- 16GB.
- GDDR6 18Gbps.
- Actually running at 17Gbps (lowers the cost, heat and power draw).
- 544GB/s bus speed.
Thanks! Seriously, why have people not been talking about this more? Here we are "begging" for info and we have core count, clock speed, thread count, detailed ram breakdown. What more do we actually need? Lol
 

DrKeo

Member
Mar 3, 2019
986
Israel
Sounds perfectly reasonable, and like something we should expect in a (I'd say $399) PS5.
Yes, I would agree. 544GB/s makes a lot of sense considering the 5700XT has a 448GB/s bus, but the PS5 will also need some extra bandwidth for the CPU, which will use the same memory.

Thanks! Seriously, why have people not been talking about this more? Here we are "begging" for info and we have core count, clock speed, thread count, detailed ram breakdown. What more do we actually need? Lol
What bugs me more is that people will talk about the 2Ghz GPU and in the same post talk about HBM or more than 16GB of GDDR6. I mean, take the info or ignore it but don't pick & choose :)

BTW, a fun fact that has nothing to do with anything: the PS5's 3-layer 100GB XL Blu-ray is actually 93.2GB, not 100GB. There are 4-layer 128GB XL Blu-ray discs, which are actually 119.2GB, but it appears that the PS5 won't support them.
 
Last edited:

CosmicBolt

Member
Oct 28, 2017
573
UserBenchmark scores are usually actual in-game results, not the theoretical max.

RAM scores in UserBenchmark usually come in ~8% below max theoretical bandwidth.

Flute's UserBenchmark 16GB RAM bandwidth score is 529.6 GB/s, which is ~8% below 576 GB/s.

GDDR6 at 18Gbps on a 256-bit bus is 576 GB/s.

So it seems that if Flute is using GDDR6 RAM, it is 18Gbps.
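Spelling that arithmetic out (the ~8%-below-theoretical relationship is an observed rule of thumb for UserBenchmark RAM scores, not a documented property of the tool):

```python
def theoretical_bandwidth_gbps(pin_speed_gbps, bus_width_bits=256):
    """Rated peak bandwidth in GB/s."""
    return pin_speed_gbps * bus_width_bits / 8

measured = 529.6                          # Flute's UserBenchmark score
rated = theoretical_bandwidth_gbps(18)    # 576.0 GB/s for 18Gbps @ 256-bit
shortfall = 1 - measured / rated          # fraction below theoretical

print(rated, round(shortfall * 100, 1))   # 576.0 and ~8.1% below rated
```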
 

AegonSnake

Member
Oct 25, 2017
5,415
I feel like you guys are twisting his message there. He's just saying don't compare the TF of the new consoles to the current gen and assume that tells the whole story or else you'll be disappointed
This is what he wrote:

The teraflop war is irrelevant now, as we saw in our apples-to-apples GCN vs RDNA compute face-off, not to mention how key Xbox One X titles have stacked up against PS4 Pro equivalents. Indeed, I wouldn't be surprised to see Mark Cerny double down on the philosophy seen in the Pro, with innovative solutions and smart design just as important as raw shader count, if not more so. As a consequence of this, a smaller processor means a more cost-efficient box - and Sony got the balance just right between performance and build cost with PS4.
He's talking about the tflops war as in the tflops comparison between the next two consoles, not current-gen Polaris ones vs next-gen RDNA ones.
 

RoboPlato

Member
Oct 25, 2017
2,752
Yes, I would agree. 544GB/s makes a lot of sense considering the 5700XT has a 448GB/s bus, but the PS5 will also need some extra bandwidth for the CPU, which will use the same memory.
Think that’ll be enough to drive the GPU after the CPU takes its share? I really want next-gen to have well-fed GPUs on the bandwidth front so that we can get proper AF, effects, etc. Bandwidth seems to be becoming more and more important for modern GPUs. It especially shows on the Xbox One X.
 
Last edited:

DrKeo

Member
Mar 3, 2019
986
Israel
Userbenchmark scores are usually the actual in game results and not the max theoretical.

RAM scores in Userbenchmark are usually ~8% of max theoretical bandwidth.

Flute userbenchmark 16 gb RAM bandwidth score is 529.6 gb/s, which is ~8% of 576 gb/s.

GDDR6 18gbps at 256-bit is 576 gb/s.

So it seems if flute is using GDDR6 Ram it is 18Gbps.
I totally agree, but consoles always downclock the memory. Flute was using 18Gbps GDDR6, we know that for sure, but the PS5 will not have its 18Gbps GDDR6 modules running at 18Gbps, just like the PS4, the Pro and the X underclocked their GDDR5 chips. For example, look at the X, which uses 7Gbps modules that should have given it a 336GB/s bus, but instead MS runs them at 6.8Gbps, which gives them 326.4GB/s.

In the end, we don't really know how much Sony will downclock the 18Gbps GDDR6, but if that's the PS5's setup, then 576GB/s is the upper bound, which the PS5 probably won't reach.
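The Xbox One X example above is easy to verify with the same bandwidth arithmetic (the X uses a 384-bit bus):

```python
def bandwidth_gbps(pin_speed_gbps, bus_width_bits):
    """Peak memory bandwidth in GB/s."""
    return pin_speed_gbps * bus_width_bits / 8

# Xbox One X: 7Gbps-rated GDDR5 on a 384-bit bus...
rated = bandwidth_gbps(7.0, 384)     # 336.0 GB/s on paper
# ...but Microsoft runs the chips at 6.8Gbps.
actual = bandwidth_gbps(6.8, 384)    # 326.4 GB/s in the shipped console

print(rated, actual)
```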

Think that’ll be enough to drive the GPU? I really want next-gen to have well-fed GPUs on the bandwidth front so that we can get proper AF, effects, etc. Bandwidth seems to be becoming more and more important for modern GPUs. It especially shows on the Xbox One X.
It can be 540GB/s, it can also be 560GB/s, but not much higher than that if the Flute leak is true. The PS4 was very similar to the R9 270 (with 4CUs turned off and lower clocks), but while the 270 had a 179GB/s bus, the PS4 had 176GB/s, which was also shared with the CPU. The 5700XT has 448GB/s, so having ~100GB/s more bandwidth sounds pretty good to me considering the CPU will need much less than 100GB/s, which will leave the GPU with more bandwidth than the 5700XT. If we get a ~5700XT in the PS5, then I would say that regarding the bandwidth-to-GPU-power ratio, the PS5 is in better shape than the PS4.
 
Last edited: