
What do you think could be the memory setup of your preferred console, or one of the new consoles?

  • GDDR6: 566 votes (41.0%)
  • GDDR6 + DDR4: 540 votes (39.2%)
  • HBM2: 53 votes (3.8%)
  • HBM2 + DDR4: 220 votes (16.0%)

Total voters: 1,379

Metalane

Member
Jun 30, 2019
777
Massachusetts, USA
I think it depends on where you reside relative to the AWS servers that PSN is built on. I get speeds between 250 and over 400 Mb/s. On the other hand, I think MS has a hard cap of around 125 Mb/s, a speed I have never exceeded despite having no background tasks running. I hope it changes with the new gen.
I hope they improve it too. Would an SSD affect the download time for physical media?
 
Oct 27, 2017
4,018
Florida
Xbox devkit specs from polish forum:
CPU: Custom Zen 2 CPU / 12c/12t, 2.5GHz no turbo
RAM: 24GB GDDR6 448 GB/s (20GB available for games)
GPU: Custom Navi GPU - 44CU / Clock: 2000MHz
FPU: 4CU
SSD: 1TB
API: Next Generation LL API

Doesn't sound too reliable, but still a "leak" ;)

Serious doubts about this one:

A 12c CPU would have to be two 6-core chiplets binned from the trash heap of PC fabs to be running at only 2.5GHz, knowing what we know about the power curve on Zen 2. There isn't any real advantage to this vs. going 8-core at 3.2GHz.
 

SeanMN

Member
Oct 28, 2017
2,185
Yup.
56 CUs can fit in the estimated Scarlett SoC size.

My predictions based on everything we know so far:

CPU: Zen 2 8C/16T @ 3.2GHz
GPU: RDNA2 with 56 CUs @ 1575MHz
RAM: 16GB GDDR6 @ 560GB/s
SSD: 1TB NVMe @ 4GB/s

This is under the assumption that Scarlett features 2 Shader Engines and that each shader engine can support 28 CUs, whereas the 5700 XT has 20 CUs per SE. I have no basis for knowing what the limit here is, but my gut tells me anything in the 50+ CU range would likely have 3 Shader Engines. If the shader engines need to be symmetrical (not sure), then we'd be looking at 48 active (54 total) or 54 active (60 total) CUs.

And regarding 48 CUs: a GPU with 2 SEs and 48 CUs would be smaller than one with 3 SEs and 48 CUs, but who knows if it could be fed properly. Hard to say which would be the better option in a console.

Right now my prediction is PS5 featuring 2 SEs (40, 44, or 48 CUs) but clocked higher (Gonzalo @ 1800MHz), while Scarlett with its (supposed) big die features 3 SEs (48 or 54 CUs) and perhaps a bit lower clocks (~1650MHz), with the assumption that both end up with similar thermals.
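For a rough sense of what those CU/clock combinations translate to, here's a quick back-of-the-envelope FLOPS sketch (it assumes the standard RDNA figure of 64 shaders per CU at 2 FP32 ops per clock; the combinations are just the ones discussed above):

```python
# Back-of-the-envelope FP32 throughput for the CU/clock combos above.
# Assumes the standard RDNA figure of 64 shaders per CU and 2 FLOPs
# per shader per clock; purely illustrative.

def tflops(cus, clock_mhz):
    return cus * 64 * 2 * clock_mhz * 1e6 / 1e12

for cus, clock in [(56, 1575), (54, 1650), (48, 1650), (40, 1800)]:
    print(f"{cus} CU @ {clock} MHz -> {tflops(cus, clock):.2f} TFLOPS")
```

That puts the 56 CU @ 1575MHz prediction at ~11.3 TFLOPS and the Gonzalo-style 40 CU @ 1800MHz at ~9.2 TFLOPS.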


Directly from an Audiokinetic guy: it was not inside the PS4. Maybe they use TrueAudio only for compression/decompression...


And in the Dominic Mallinson slide, there's no trace of an AMD TrueAudio DSP.

EDIT:
Basically the reply I was working on right here.
 

Metalane

Member
Jun 30, 2019
777
Massachusetts, USA
Yes. It comes down to caching capacity. My Pro is equipped with an 860 Evo, and that undoubtedly plays a role in long periods of sustained download speeds of ~400Mb/s.
Thank goodness. My PS4 HDD is the default one and I think its cap is like 37 MB/s (it probably averages around 10-20 MB/s). RDR 2 was nearly 100 GB and it took like an hour and a half. It's one of my biggest issues this gen.
 

Hey Please

Avenger
Oct 31, 2017
22,824
Not America
Thank goodness. My PS4 HDD is the default one and I think its cap is like 37 MB/s (it probably averages around 10-20 MB/s). RDR 2 was nearly 100 GB and it took like an hour and a half. It's one of my biggest issues this gen.

That sounds about right for the majority of ~5400RPM HDDs with a 32MB cache. 37 megabytes per second is honestly not bad at all, but download sizes are getting quite insane nowadays. I wonder how it will play out next gen, given that data duplication ought not to be employed.
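As a sanity check, a quick sketch using only the figures quoted above:

```python
# Download/install time for a ~100 GB game at the rates mentioned above:
# 10-20 MB/s typical sustained for a stock 5400RPM PS4 HDD, 37 MB/s cap.
size_mb = 100 * 1000  # ~100 GB
for rate_mb_s in (10, 20, 37):
    minutes = size_mb / rate_mb_s / 60
    print(f"{rate_mb_s} MB/s -> ~{minutes:.0f} minutes")
```

~83 minutes at 20 MB/s lines up with RDR 2 taking about an hour and a half.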
 
Oct 25, 2017
3,595
This is under the assumption that Scarlett features 2 Shader Engines and that each shader engine can support 28 CUs, whereas the 5700 XT has 20 CUs per SE. I have no basis for knowing what the limit here is, but my gut tells me anything in the 50+ CU range would likely have 3 Shader Engines. If the shader engines need to be symmetrical (not sure), then we'd be looking at 48 active (54 total) or 54 active (60 total) CUs.

And regarding 48 CUs: a GPU with 2 SEs and 48 CUs would be smaller than one with 3 SEs and 48 CUs, but who knows if it could be fed properly. Hard to say which would be the better option in a console.

Right now my prediction is PS5 featuring 2 SEs (40, 44, or 48 CUs) but clocked higher (Gonzalo @ 1800MHz), while Scarlett with its (supposed) big die features 3 SEs (48 or 54 CUs) and perhaps a bit lower clocks (~1650MHz), with the assumption that both end up with similar thermals.


Basically the reply I was working on right here.

48 CUs or 54 CUs with a bit higher clocks is fine too :)
 
OP

Mecha Meister

Next-Gen Guru
Member
Oct 25, 2017
2,800
United Kingdom
Settings:
2050MHz core clock
1055mV
890MHz memory

Results:
1970MHz
169W peak
20,067 Firestrike

P.S.
Stock settings power draw shows 210+ watts in Wattman

[Wattman screenshots]
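A quick efficiency calculation from those numbers (using only the figures quoted above, with the 210W Wattman reading as the stock baseline):

```python
# Power saved by the undervolt/underclock above: 169W peak at 1970MHz
# vs. the 210+W Wattman reports at stock settings.
stock_w, tuned_w = 210, 169
firestrike = 20067
print(f"Power reduction: {(stock_w - tuned_w) / stock_w:.0%}")            # ~20%
print(f"Firestrike points per watt (tuned): {firestrike / tuned_w:.0f}")  # ~119
```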

Very interesting tests! I wonder how accurate those power readings are. NVIDIA recently released software called FrameView, which reveals chip and total board power consumption in an overlay and is allegedly able to give more accurate readings than other tools. I haven't properly researched AMD's own monitoring software, but I'm curious to find out more about it.

The 5700 series are really interesting GPUs; I'm intrigued to see the 5700 XT creeping up on my GTX 1080 Ti's performance. It also looks like it has some decent overclocking potential. I look forward to seeing more people slap some coolers on them, and to the AIB models releasing.
From looking at a couple of reviews, it appears to be about 10-15% off the performance of the GTX 1080 Ti and Radeon VII, and in some games it can match the GTX 1080 Ti's performance, from what I've seen in Gamers Nexus and Digital Foundry's looks at the GPU.

What would have really made these GPUs special is if the 5700 XT had come in at the price point the GTX 970 and RX 480 did, somewhere around £250-300.
GTX 1080 to 1080 Ti performance at that aggressive price point would be ridiculous! Although I'm uncertain of how much it costs to make these GPUs and how that would affect profit per unit, I imagine these GPUs would be flying off the shelves much like the GTX 970 did if AMD had priced them like that.

The concept of AMD "jebaiting" NVIDIA with the RX 5700 series price just seemed like bullshit to me. From when I saw their original price reveal, I was unimpressed and knew that price wasn't right. Then they revealed its launch price point.

Also, seeing as this is the best that AMD has to offer at the moment, who knows how NVIDIA could have retaliated if AMD had aggressively priced the RX 5700 series GPUs.
NVIDIA have the RTX 2060 to 2080 GPUs, which they could use to compete and lower prices if things got heated.

Some interesting tests by Hardware Unboxed comparing IPC and latency between Ryzen 3000 and Intel's i9-9900K under the same conditions (matched clock speeds).

Worth a watch/listen.

Most interesting to me was that AMD now indeed has better IPC than Intel, but struggles quite a bit with core and memory latency.

That is (as mentioned in the video) very likely the reason why Intel is still better in gaming.

One root cause of the latency issue could be the chiplet-based design, with the bandwidth/latency limitations of the Infinity Fabric AMD is using.

If a monolithic design can solve those issues, we can probably expect better latency and, as a result, better gaming performance compared to a PC counterpart at similar clocks.

This is also why I never really supported a chiplet design for next-gen consoles.
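To make the IPC-vs-clock trade-off concrete, here's a toy model (the numbers are illustrative placeholders, not the video's measured figures): single-thread throughput scales roughly with IPC × clock, which is how a small IPC lead can still lose to a clock advantage, before memory latency effects even come into play.

```python
# Toy model: relative single-thread performance ~ IPC * clock.
# Illustrative placeholder numbers, not measured values; memory
# latency penalties would come on top of this.
def relative_perf(ipc, clock_ghz):
    return ipc * clock_ghz

zen2  = relative_perf(ipc=1.05, clock_ghz=4.3)  # slightly higher IPC
intel = relative_perf(ipc=1.00, clock_ghz=5.0)  # higher sustained clock
print(f"Zen 2: {zen2:.2f}, Intel: {intel:.2f}")  # 4.52 vs 5.00
```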



Edit:

The Latency charts, for those that are too lazy to watch the whole video ;)

[Latency comparison charts from the video]


Hehe, I saw this a couple of minutes after the video went up! Amazing stuff! As I mentioned in the OP, we're at a point where AMD are able to rival Intel's offerings and even beat them depending on the workload.

For the new thread I'm going to put some more benchmarks in the OP alongside Tech Report's benchmarks, this IPC test is definitely going to make its way in there too.



Slow down, y'all!
 

AegonSnake

Banned
Oct 25, 2017
9,566
52-56 CUs I'm hoping
1650-1750MHz

I think something with these specs can be pretty power efficient from what's been shown with Navi, especially if RDNA2 is on 7nm+.
I think the SoC size estimates, your downclocking results, and Austin Evans managing to get the 8-core Ryzen to run at 34W give me hope that a big 56 CU GPU is indeed possible without even going to 7nm+.

I have a feeling that the RX 5700 and 5700 XT were supposed to be mid-range cards in the $200-300 range, but after seeing them go toe to toe with the Turing cards, AMD decided to sell them in a higher price bracket. They likely overclocked them, which explains the insanely high TDP of the stock cards and the somewhat reasonable TDPs after you underclocked them.

It's clear to me that if AMD went with a 300-400mm2 chip, they would be able to offer better performance than the RTX 2080. So why not do it? Do they really care that much about TDP when they just released a 300W GPU earlier this year? Maybe they ARE waiting for the big Navi 20 chip that will be on 7nm+ next year and have VRR and hardware RT, and this is just a stop-gap card overclocked to hell and back.
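For what it's worth, a crude console power budget from the thread's own numbers (the CPU and GPU figures come from the posts above; the misc line is my own rough allowance):

```python
# Crude console power budget. CPU and GPU figures from earlier posts;
# the misc line is a rough assumption (SSD, fan, I/O, PSU losses).
cpu_w  = 34   # 8-core Ryzen result mentioned above
gpu_w  = 169  # downclocked Navi board power, GDDR6 included
misc_w = 25   # assumption
print(f"Estimated total: ~{cpu_w + gpu_w + misc_w}W")  # ~228W
```

That still lands above a typical console envelope, so further downclocking or binning would presumably be needed.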
 
Oct 26, 2017
6,151
United Kingdom
Xbox devkit specs from polish forum:
CPU: Custom Zen 2 CPU / 12c/12t, 2.5GHz no turbo
RAM: 24GB GDDR6 448 GB/s (20GB available for games)
GPU: Custom Navi GPU - 44CU / Clock: 2000MHz
FPU: 4CU
SSD: 1TB
API: Next Generation LL API

Doesn't sound too reliable, but still a "leak" ;)

Outside of the obvious 2GHz GPU clock bullshit, this leaker failed the basic leaker stupidity test: i.e., an MS console not using a DirectX API is just... no.
 

Metalane

Member
Jun 30, 2019
777
Massachusetts, USA
I think the SoC size estimates, your downclocking results, and Austin Evans managing to get the 8-core Ryzen to run at 34W give me hope that a big 56 CU GPU is indeed possible without even going to 7nm+.

I have a feeling that the RX 5700 and 5700 XT were supposed to be mid-range cards in the $200-300 range, but after seeing them go toe to toe with the Turing cards, AMD decided to sell them in a higher price bracket. They likely overclocked them, which explains the insanely high TDP of the stock cards and the somewhat reasonable TDPs after you underclocked them.

It's clear to me that if AMD went with a 300-400mm2 chip, they would be able to offer better performance than the RTX 2080. So why not do it? Do they really care that much about TDP when they just released a 300W GPU earlier this year? Maybe they ARE waiting for the big Navi 20 chip that will be on 7nm+ next year and have VRR and hardware RT, and this is just a stop-gap card overclocked to hell and back.
I'm getting more and more certain that we're getting 2080-level performance GPU-wise next gen...
 

eathdemon

Member
Oct 27, 2017
9,607
I think the SoC size estimates, your downclocking results, and Austin Evans managing to get the 8-core Ryzen to run at 34W give me hope that a big 56 CU GPU is indeed possible without even going to 7nm+.

I have a feeling that the RX 5700 and 5700 XT were supposed to be mid-range cards in the $200-300 range, but after seeing them go toe to toe with the Turing cards, AMD decided to sell them in a higher price bracket. They likely overclocked them, which explains the insanely high TDP of the stock cards and the somewhat reasonable TDPs after you underclocked them.

It's clear to me that if AMD went with a 300-400mm2 chip, they would be able to offer better performance than the RTX 2080. So why not do it? Do they really care that much about TDP when they just released a 300W GPU earlier this year? Maybe they ARE waiting for the big Navi 20 chip that will be on 7nm+ next year and have VRR and hardware RT, and this is just a stop-gap card overclocked to hell and back.
Lack of RT. If you tried to sell a high-end card without it, you would be crushed; even AMD knows that.
 

Metalane

Member
Jun 30, 2019
777
Massachusetts, USA
OS is continually recording video to the drive, plus there may be downloads going on at the same time. So raw read speed is not representative of performance in context.
So if you have something big running in the background that's bottlenecking the HDD, the loading times, for example, would be slower? Or is each application still locked to its own data cap?

Edit: Would this also affect the boot-up time for the app?
 

Gamer17

Banned
Oct 30, 2017
9,399
I remember after Sony's E3 2018, many press were tweeting things like "first look at PS5 games" (implying Sony was showing The Last of Us Part II and Ghost of Tsushima running on PS5). Later, Digital Foundry's analysis confirmed they were running on a PS4 Pro.
If updated engines from Sucker Punch or Naughty Dog make people think it's PS5, imagine what their engines for PS5 will do. I just can't imagine it yet. Hopefully they can impress us.
 

inpHilltr8r

Member
Oct 27, 2017
3,238
Or is each application still locked with its own data cap?

I do not believe that any data rate caps are directly enforced; rather, I suspect there's OS-level prioritisation going on. As a dev, you pick a rate you think the game can achieve and design to that (x MB will take y seconds to load, so be sure to make your load halls z metres long given a maximum player speed of...). (The Spider-Man tech talk goes into this, IIRC.)
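A minimal sketch of that design math, with hypothetical numbers (the point is just that the hall must take longer to traverse than the data takes to stream in):

```python
# How long must a load hall be? All numbers are hypothetical.
data_to_load_mb     = 400  # assets for the next area
read_rate_mb_s      = 20   # the rate the dev designed against
max_player_speed_ms = 5    # metres per second at a sprint

load_time_s = data_to_load_mb / read_rate_mb_s
min_hall_m  = load_time_s * max_player_speed_ms
print(f"Load takes {load_time_s:.0f}s -> hall must be >= {min_hall_m:.0f}m")
```

With an SSD, the same 400 MB at hundreds of MB/s shrinks the hall to a few metres, which is the point everyone is making about next-gen level design.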
 

Metalane

Member
Jun 30, 2019
777
Massachusetts, USA
I remember after Sony's E3 2018, many press were tweeting things like "first look at PS5 games" (implying Sony was showing The Last of Us Part II and Ghost of Tsushima running on PS5). Later, Digital Foundry's analysis confirmed they were running on a PS4 Pro.
If updated engines from Sucker Punch or Naughty Dog make people think it's PS5, imagine what their engines for PS5 will do. I just can't imagine it yet. Hopefully they can impress us.
Yep, even some parts of RDR 2 feel next-gen, or at least ahead of their time. Imagine what ND will be able to do physics-wise!
 

Metalane

Member
Jun 30, 2019
777
Massachusetts, USA
I do not believe that any data rate caps are directly enforced; rather, I suspect there's OS-level prioritisation going on. As a dev, you pick a rate you think the game can achieve and design to that (x MB will take y seconds to load, so be sure to make your load halls z metres long given a maximum player speed of...). (The Spider-Man tech talk goes into this, IIRC.)
This will probably all change once devs get SSDs in their hands! Well, it wouldn't make data streaming obsolete; it would just radically reduce the need for it.
 

AegonSnake

Banned
Oct 25, 2017
9,566
I remember after Sony's E3 2018, many press were tweeting things like "first look at PS5 games" (implying Sony was showing The Last of Us Part II and Ghost of Tsushima running on PS5). Later, Digital Foundry's analysis confirmed they were running on a PS4 Pro.
If updated engines from Sucker Punch or Naughty Dog make people think it's PS5, imagine what their engines for PS5 will do. I just can't imagine it yet. Hopefully they can impress us.
Frankly, it was embarrassing to see journalists call TLOU2 and especially Ghost next-gen. There is nothing next-gen about the visuals. Maybe the animation in TLOU2, but that's it.
 

Metalane

Member
Jun 30, 2019
777
Massachusetts, USA
Frankly, it was embarrassing to see journalists call TLOU2 and especially Ghost next-gen. There is nothing next-gen about the visuals. Maybe the animation in TLOU2, but that's it.
What confuses me even more is people saying CP 2077 looks too good for current gen. I mean, it's dense and the scale of the game is humongous, but as far as photorealism goes, I don't think it's as good as those titles we mentioned.
 

SeanMN

Member
Oct 28, 2017
2,185
We did get some good info during this thread: some confirmations of details on Scarlett, plus details on RDNA and Zen 2. While there haven't been many leaks, we've got the basics of what these consoles will be.
 
Status
Not open for further replies.