
How much money are you willing to pay for a next generation console?

  • Up to $199: 33 votes (1.5%)
  • Up to $299: 48 votes (2.2%)
  • Up to $399: 318 votes (14.4%)
  • Up to $499: 1,060 votes (48.0%)
  • Up to $599: 449 votes (20.3%)
  • Up to $699: 100 votes (4.5%)
  • I will pay anything!: 202 votes (9.1%)

  Total voters: 2,210
Status
Not open for further replies.

Lady Gaia

Member
Oct 27, 2017
2,479
Seattle
Yes, it's just impractical because of dead address space relative to the number of address bits.

Impractical might not be quite the right sentiment. Unusual, though? Absolutely. To put another spin on this, what you get is effectively a 320-bit data bus for half the address space, while the width of the other half of your address space is dictated by how many 2GB chips you use. One 2GB chip gives you a 32-bit data bus, two give you 64 bits, etc. So you get 10GB of fast GDDR6, plus however much additional GDDR6 you add using larger chips, at a speed that varies with how much you put in. The memory controller and caching mechanism also get correspondingly more complex.

I can't think of a case where I've seen the approach used. It isn't a simple hedge against a late-breaking decision about the amount of RAM, because different decisions result in different available bandwidth. It could be annoying to develop for unless the point was to have the CPU primarily accessing the slower, narrower data bus and the GPU effectively limited to the fast, wide 10GB subset.
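To put rough numbers on the split described above, here's a quick sketch in Python. The pin speed and chip counts are illustrative assumptions, not confirmed specs; the point is just that bandwidth scales with how many 32-bit channels a given address range can stripe across.

```python
# Asymmetric-memory sketch: ten 32-bit GDDR6 channels serve the fast
# region, but only the channels backed by larger chips serve the rest.
# All numbers are illustrative assumptions.

PIN_SPEED_GBPS = 14       # assumed per-pin data rate, Gb/s
CHANNEL_WIDTH_BITS = 32   # one GDDR6 chip per 32-bit channel

def bandwidth_gbs(num_channels: int) -> float:
    """Aggregate bandwidth in GB/s when data is striped across num_channels."""
    return num_channels * CHANNEL_WIDTH_BITS * PIN_SPEED_GBPS / 8

fast = bandwidth_gbs(10)  # all ten channels: the fast 10GB region
slow = bandwidth_gbs(2)   # e.g. two 2GB chips: a 64-bit bus for the rest

print(f"fast region: {fast:.0f} GB/s, slow region: {slow:.0f} GB/s")
```

With these assumed numbers the fast region gets 560 GB/s while a two-chip slow region gets only 112 GB/s, which is why the CPU/GPU partitioning mentioned above would matter.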
 

Pheonix

Banned
Dec 14, 2018
5,990
St Kitts
LOL, they have Arcturus for the Scarlett GPU.




Yes, it's just impractical because of dead address space relative to the number of address bits. For example, with a 320-bit bus and 8Gb/4Gb chips, you would have 10 × 2^32 address possibilities, but some of those addresses would map to locations that don't physically exist. It can also prevent you from storing things in efficient blocks if you like to spread them across chips. For example, to exploit the memory's bandwidth, you could spread a texture across all 10 chips in the same address range on each individual chip. However, some of those chips would have half of their address range "dead" compared to the other chips, making it trickier to exploit the parallel bus.
Thanks, so if I am getting you right... you wouldn't be able to spread certain chunks of data across all chips to take advantage of the bandwidth that would provide, because some chips would have half of the addressable blocks? Which in turn means that at times you'd have less than your maximum theoretical bandwidth?

Well, if that is what you are saying, then that's what I have been suspecting with regards to GDDR. But I am guessing this only applies to data sets that need so much bandwidth that you would have to stripe them across all the available RAM modules.
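A toy model of the striping problem being discussed (chip capacities in abstract "blocks", not real GDDR6 parameters): interleave writes across all chips, and watch the effective stripe width drop once the smaller chips fill up.

```python
# Toy model of striping across mixed-capacity chips. Capacities are
# illustrative: eight "2Gb" chips and two "1Gb" chips. Once the small
# chips are full, only the larger chips can accept further blocks, so
# the stripe width (and thus parallel bandwidth) shrinks for the
# upper address range.

capacities = [2] * 8 + [1] * 2   # blocks each chip can hold
used = [0] * len(capacities)

stripe_width = []                # chips participating in each written row
while True:
    live = [i for i, (u, c) in enumerate(zip(used, capacities)) if u < c]
    if not live:
        break
    for i in live:
        used[i] += 1
    stripe_width.append(len(live))

print(stripe_width)
```

The result is `[10, 8]`: the first half of the address range stripes across all ten chips, the second half across only eight, which is the "less than your maximum theoretical bandwidth" caveat in a nutshell.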
 

anexanhume

Member
Oct 25, 2017
12,913
Maryland
Impractical might not be quite the right sentiment. Unusual, though? Absolutely. To put another spin on this, what you get is effectively a 320-bit data bus for half the address space, while the width of the other half of your address space is dictated by how many 2GB chips you use. One 2GB chip gives you a 32-bit data bus, two give you 64 bits, etc. So you get 10GB of fast GDDR6, plus however much additional GDDR6 you add using larger chips, at a speed that varies with how much you put in. The memory controller and caching mechanism also get correspondingly more complex.

I can't think of a case where I've seen the approach used. It isn't a simple hedge against a late-breaking decision about the amount of RAM, because different decisions result in different available bandwidth. It could be annoying to develop for unless the point was to have the CPU primarily accessing the slower, narrower data bus and the GPU effectively limited to the fast, wide 10GB subset.
Phones do it a lot. 3GB, 6GB, etc.


Thanks, so if I am getting you right... you wouldn't be able to spread certain chunks of data across all chips to take advantage of the bandwidth that would provide, because some chips would have half of the addressable blocks? Which in turn means that at times you'd have less than your maximum theoretical bandwidth?

Well, if that is what you are saying, then that's what I have been suspecting with regards to GDDR. But I am guessing this only applies to data sets that need so much bandwidth that you would have to stripe them across all the available RAM modules.

Yes. You're essentially adding caveats to your max theoretical bandwidth.
 

anexanhume

Member
Oct 25, 2017
12,913
Maryland
I don't think phones do it. Phones have some funky LPDDR4 sizes: 8Gb, 12Gb, 16Gb, 24Gb and 32Gb. So they never need to mix RAM sizes either. That's also why you don't see phones with sizes like 5GB, 7GB, 11GB, etc.
You normally only see adjacent powers of two. Accesses would be even more lopsided otherwise. But phones tend to get by on lower speed accesses of smaller size, so chip congruity isn't as essential.
 
Last edited:

DrKeo

Banned
Mar 3, 2019
2,600
Israel
I faintly remember a GeForce card which had one GDDR5 (or GDDR4, I don't remember how far back that was) chip on the back of the card, sharing a controller with another chip. So if the card had a 256-bit bus (I don't really remember how much it had), it had 9 chips: 8 around the die and one on the other side of the card, right under one of the other GDDR chips, sharing its controller. There are all sorts of weird setups :)

LOL, they have Arcturus for the Scarlett GPU.




Yes, it's just impractical because of dead address space relative to the number of address bits. For example, with a 320-bit bus and 8Gb/4Gb chips, you would have 10 × 2^32 address possibilities, but some of those addresses would map to locations that don't physically exist. It can also prevent you from storing things in efficient blocks if you like to spread them across chips. For example, to exploit the memory's bandwidth, you could spread a texture across all 10 chips in the same address range on each individual chip. However, some of those chips would have half of their address range "dead" compared to the other chips, making it trickier to exploit the parallel bus.
It basically means that 10GB will have a 320-bit bus while the rest will have less, depending on how many 1GB chips there are. For instance, 8 x 2GB modules and 2 x 1GB modules at 14Gbps will give you 18GB: 10GB with 560GB/s of bandwidth and 8GB with 448GB/s.
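That arithmetic checks out; here's the calculation spelled out (14Gbps per pin and one 32-bit channel per chip, as in the example above):

```python
# Sanity check of the 8 x 2GB + 2 x 1GB at 14Gbps example.

speed = 14   # Gb/s per pin
chan = 32    # bits of bus width per chip

# First 1GB of every chip: all 10 channels usable in parallel.
fast_gb = 10 * 1                    # 10 GB
fast_bw = 10 * chan * speed / 8     # GB/s across a 320-bit bus

# Second 1GB exists only on the eight 2GB chips.
slow_gb = 8 * 1                     # 8 GB
slow_bw = 8 * chan * speed / 8      # GB/s across a 256-bit bus

print(fast_gb + slow_gb, fast_bw, slow_bw)  # 18 560.0 448.0
```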

GameSpot has a list comparing specs of Scarlett and the PS5 based on public info... it says Scarlett has a 1.6GHz clock on the Zen 2. Where did they pull that BS number from? Lol

Edit: and it specifically says Scarlett has 16 gigs of RAM, what is going on?
Wow, that post by GameSpot. Loved that place when I first got into game forums in 2006 during peak Wii hype.

Why would they even post that? I know, the clicks, but they'd get just as many without the 1TB and 1.6GHz numbers and details.
I think they got the 1.6GHz from the Flute leak; it had a 1.6GHz base and 3.2GHz boost.
 
Last edited:

Doctor Avatar

Member
Jan 10, 2019
2,599
Who knows though, there could be different modes with lower res and higher fps.

Sounds like a lot of work to add to the development of the game, and likely less optimisation time spent on each mode, for something that the general public doesn't seem to care very much about... I think the likelihood is that games will be developed at either 30 or 60fps, as is currently the case.
 
Oct 25, 2017
17,904
Sounds like a lot of work to add to the development of the game, and likely less optimisation time spent on each mode, for something that the general public doesn't seem to care very much about... I think the likelihood is that games will be developed at either 30 or 60fps, as is currently the case.
I figure they'd just go all in on one too. Saves them time and effort and other resources.
 

More Butter

Banned
Jun 12, 2018
1,890

GameSpot has a list comparing specs of Scarlett and the PS5 based on public info... it says Scarlett has a 1.6GHz clock on the Zen 2. Where did they pull that BS number from? Lol

Edit: and it specifically says Scarlett has 16 gigs of RAM, what is going on?
I don't read a lot of sites. Isn't GameSpot supposed to be reputable? How did they publish this nonsense?
 

Thera

Banned
Feb 28, 2019
12,876
France
About a year until the next consoles come out...We should be getting some footage of next-gen games that we figure are bullshit really soon, right?
I'm ready for my Ubisoft bullshots
Honestly, I expected Ubisoft bullshots at this E3.
Maybe Ubi is being more cautious this time.
Truth is, I don't think we will see many bullshots, or they won't be that far above final software.
They have had the dev kits for a while, and games already look so damn good there is no need for it.
We already had footage of a next-gen Ubisoft game, and it is BGE2. I think the whole Watch_Dogs fiasco gave them cold feet, and that's one reason why they didn't show anything. The other reason is that E3 is now about selling the games you are releasing in the fiscal year.
 

Deleted member 20297

User requested account closure
Banned
Oct 28, 2017
6,943
Do you all really think a 1TB SSD will be enough next gen, especially without the ability to use external storage?
It depends on execution. I would not be surprised if there is an option for "long term storage" on a large external drive, but you would have to copy games over to the SSD to play. This could be very fast, though, with a tech like play ready, only this time it lets you actually play the game within a minute and not just show you the title screen and perhaps a demo room. While an HDD is still quite slow compared to an SSD, it's way faster than an ODD.
 
Oct 25, 2017
17,904
Of course it is possible to manage. The thing is that you actually have to manage it. This is of course not rocket science but has to be done by the customer.
I don't even have to really manage anything, honestly. I only have stuff I know I will get to DL'd. Once I am done, I delete it and DL something else.

It also helps that I usually get smaller games.
 

Deleted member 20297

User requested account closure
Banned
Oct 28, 2017
6,943
Right, but I am saying there is nothing to it. It isn't a hassle whatsoever.

It isn't like I am looking at how much storage I have and what will fit and what I would have to delete in order to DL something else and so on.
I think that with the introduction of whatever they changed that made certain games require more space so patches can be applied, this became more apparent to me. Anyway, it's something that has to be done by the customer, and I don't see anything you can do about it.
 

More Butter

Banned
Jun 12, 2018
1,890
Where did they get the 1.6 GHz and the Arcturus GPU from? Wasn't Navi confirmed for MS's next console directly from AMD?
It's like they read some rumors that have floated through this thread over the months and started drawing conclusions. It's so sloppy and disingenuous. Were they that desperate to get out a next-gen comparison piece? It's crazy.
 

Tappin Brews

#TeamThierry
Member
Oct 25, 2017
14,879
my favorite takeaway from the GameSpot next-gen comparison article: "Scarlett's specs suggest a much stronger console than the Xbox One X, while the PS5 sees a similar improvement over the PS4 Pro."

PHEW

wtf is this shit?
 

dgrdsv

Member
Oct 25, 2017
11,885
Could Arcturus be used as a chip/chiplet for the Scarlett cloud blades for AI/ML? Or even be a separate chip in the console itself?
With 8192 GCN SPs, Arcturus will likely end up at ~500 mm^2 on an N7+ process, with ~450W peak power consumption.
Is it possible to have a separate 500 mm^2 chip in addition to the main APU for ~600W total power consumption? Well, theoretically, yes... but...

The idea that it could be used for XCloud servers is more sound, but I'd still expect such servers to be based on future big RDNA2 GPUs instead of the old GCN5 ones. Remember that anything which aims to run next-gen games will have to support hardware raytracing, and Arcturus, being GCN5, won't.
 

gundamkyoukai

Member
Oct 25, 2017
21,136
my favorite takeaway from the GameSpot next-gen comparison article: "Scarlett's specs suggest a much stronger console than the Xbox One X, while the PS5 sees a similar improvement over the PS4 Pro."

PHEW

wtf is this shit?

LMAO, stuff like this and the past few days have shown that next gen is upon us.
The year before is always full of crazy/stupid stuff.
 

sleepr

Banned for misusing pronouns feature
Banned
Oct 30, 2017
2,965
The whole GameSpot article is a bunch of bullshit. And to think people get paid for writing these things.
 