
How much money are you willing to pay for a next generation console?

  • Up to $199 (33 votes, 1.5%)
  • Up to $299 (48 votes, 2.2%)
  • Up to $399 (318 votes, 14.4%)
  • Up to $499 (1,060 votes, 48.0%)
  • Up to $599 (449 votes, 20.3%)
  • Up to $699 (100 votes, 4.5%)
  • I will pay anything! (202 votes, 9.1%)

  Total voters: 2,210
Status
Not open for further replies.
Sep 19, 2019
2,262
Hamburg- Germany
Still thinking that the claim that 10-15 W is a world of difference, when both consoles will probably be around 180-200 W, is a funny exaggeration; if this is not so funny for you, hmm, whatever ;)

What the actual difference will be obviously remains to be seen. He only stated that it can make a world of difference, not that it will. And he was only talking about the RAM's power consumption, not the entire system's.
 

Dictator

Digital Foundry
Verified
Oct 26, 2017
4,930
Berlin, 'SCHLAND
lol i didn't even notice that bit. he totally contradicts himself. if anything, this gen has shown how important the tflops wars really are. I remember how, when the gen started, CoD Ghosts was 720p on Xbox One while the PS4 version was 1080p. and now Pro games are 1440p while most MS games spend more time at native 4K than they do at 1440p.

i really liked his GCN vs RDNA analysis but this is weird.
I am pretty sure we are noticing bigger differences due to BW in most PS4pro and X1X games.
 

gremlinz1982

Member
Aug 11, 2018
5,331
Then just define what you mean by "worlds" so I get the idea behind your comments. Phoenix was talking about the power consumption of only one part of the entire system, which has an even bigger impact than the overall power consumption, since cooling would change accordingly and so on... Laughing about 10-15 watts less power consumption from a single component that significantly impacts other parts is just wrong.
You are simply throwing numbers.

What is the difference between the base consoles? It is the RAM type and how that decision plays out in CU count, with the base PS4 having 18 CUs while the Xbox One has 12.

What is the biggest difference between the PS4 Pro and Xbox One X? Nothing apart from more RAM and 36 CUs versus 40 CUs respectively.

On the base console, Microsoft upped the clocks, and on the mid-gen refresh their console runs at much higher clocks.

I just find it odd that Sony would go for a smaller chip, couple it with a more expensive RAM type, and then make up the difference in clock speeds.
 

Dictator

Digital Foundry
Verified
Oct 26, 2017
4,930
Berlin, 'SCHLAND
I can bet that if the XOX had 4.2 TF, the difference between the two consoles would be much smaller.
Well, if you think about the compute power differences, they do not at all line up with the resolution differences we tend to see.
We have heard from developers directly that working around the lack of bandwidth feeding the GPU in the Pro is frustrating, and that is where its bottleneck lies.
 

sncvsrtoip

Banned
Apr 18, 2019
2,773
Well, if you think about the compute power differences, they do not at all line up with the resolution differences we tend to see.
We have heard from developers directly that working around the lack of bandwidth feeding the GPU in the Pro is frustrating, and that is where its bottleneck lies.
Not denying it, but the statement in the DF article that TF numbers don't matter now, with the XOX put up as an example, was rather bizarre.
 
Sep 19, 2019
2,262
Hamburg- Germany
You are simply throwing numbers.

What is the difference between the base consoles? It is the RAM type and how that decision plays out in CU count, with the base PS4 having 18 CUs while the Xbox One has 12.

What is the biggest difference between the PS4 Pro and Xbox One X? Nothing apart from more RAM and 36 CUs versus 40 CUs respectively.

On the base console, Microsoft upped the clocks, and on the mid-gen refresh their console runs at much higher clocks.

I just find it odd that Sony would go for a smaller chip, couple it with a more expensive RAM type, and then make up the difference in clock speeds.

I was just saying that a difference in "watts" can make a big difference in performance. More watts within the same architecture are caused by things like those you explained in your post. I don't know what's wrong with my "throwing of numbers".
 
Sep 19, 2019
2,262
Hamburg- Germany
Well, if you think about the compute power differences, they do not at all line up with the resolution differences we tend to see.
We have heard from developers directly that working around the lack of bandwidth feeding the GPU in the Pro is frustrating, and that is where its bottleneck lies.

It wouldn't surprise me if you have your own therapist next to you in the office. :D
 

gremlinz1982

Member
Aug 11, 2018
5,331
I was just saying that a difference in "watts" can make a big difference in performance. More watts within the same architecture are caused by things like those you explained in your post. I don't know what's wrong with my "throwing of numbers".
Just throwing numbers around is simplistic. However, the Xbox One X has also shown us that one can have a higher CU count and clock higher.

That does not square with the idea that the primary way of getting higher clocks is to go for a less power-hungry RAM type.
 

DrKeo

Banned
Mar 3, 2019
2,600
Israel
We don't really know how the R9 280 and lower GPUs will perform in this game until it's out. So saying the Xbox One S runs RDR2 better than a 3 TF GCN PC GPU is just guesswork based on the system requirements until it has been tested.

We don't even know what the minimum system requirements are targeting in terms of graphical settings as they're quite vague to say the least.
Could it be Xbox One quality settings at 900p 30 fps? Or even PS4 quality settings at 1080p 30, or even 900p to 1080p 60 fps? We don't know.

Red Dead Redemption 2 is even rumoured to feature DX12 and Vulkan, based on information some people found in the Rockstar Games Launcher. This will certainly benefit PC gamers, as good implementations of these APIs can make a huge difference to performance.
The inclusion of these will certainly make it one of the most fascinating PC releases to date!

Edit - Expanded post and made it clearer.
It's a mistake to think minimum requirements are consistent. I've seen games where the minimum requirements delivered way past console performance and I've seen games where the minimum requirements gave you 20fps at 720p.

You haven't actually lost the argument yet. It's still possible for the 750ti to match the PS4 in RDR2, but it's rather unlikely.
I agree, I just didn't want to go there in my post :)
I don't care much about the 750 Ti vs PS4 argument; I only gave the 750 Ti as an example because it was DF's GPU of choice when they tried to build a PC that mimics the PS4, so it was the first thing that popped into my head. I could have, and maybe should have, said the R7 260X (or R7 270), which was a 1.97 TF GCN 2.0 card for $139 that came out a few months before the PS4. But unfortunately, from that point the conversation digressed into a 750 Ti vs PS4 argument.

Which game had an issue?
Also Apotheon, P.T., AC Black Flag, Batman: The Telltale Series, Dead Island, Dragon Quest Builders, God Eater Resurrection, Mirror's Edge Catalyst, Shadow Complex, Trivial Pursuit Live!, Tembo the Badass Elephant, Slender: The Arrival, Grow Home and other games had problems in boost mode.
 
Last edited:

msia2k75

Member
Nov 1, 2017
601
I'm still having a hard time believing Sony would go with memory clocked at 18 Gbps, even though in terms of bandwidth delivered it makes sense.
 

Silencerx98

Banned
Oct 25, 2017
1,289
I agree, I just didn't want to go there in my post :)
I don't care much about the 750 Ti vs PS4 argument; I only gave the 750 Ti as an example because it was DF's GPU of choice when they tried to build a PC that mimics the PS4, so it was the first thing that popped into my head. I could have, and maybe should have, said the R7 260X (or R7 270), which was a 1.97 TF GCN 2.0 card for $139 that came out a few months before the PS4. But unfortunately, from that point the conversation digressed into a 750 Ti vs PS4 argument.
For what it's worth, the 750 Ti started falling significantly behind the PS4 by the end of 2015. 2016 was the year when last gen was left behind completely, with almost every AAA game being current-gen only, with the exception of sports games like FIFA. It's reasonable to say the i3/750 Ti combo was competitive against the PS4 because engines were not yet properly optimized for the strengths of current-gen consoles.

Rise of the Tomb Raider was one of the first examples where this comparison became void, with even the base XB1 version pulling ahead.
 

gofreak

Member
Oct 26, 2017
7,734
Another haptics/vibration patent - again, cast more in a VR controller context, but clearly DS5 is borrowing some of the same tech.


This patent talks about using a per-trigger vibration actuator along with a 'force presenting device' (i.e. the adaptive/programmable trigger resistance Sony has confirmed). I'm not sure it's confirmed that the DS5 will also have a vibration actuator per trigger, but I suppose it's a possibility.

[two patent diagram images attached]
 

BreakAtmo

Member
Nov 12, 2017
12,822
Australia
Mainly uncapped frames on previous gens, wanna see how crazy those old games look once you take off the technical leash, plus they need something to showcase that 8K/120fps marketing push they've been telling us lol.

Keep in mind that "8K 120fps" will not be a thing as even HDMI 2.1 cannot handle that. What they mean is that they could do one or the other (8K at 60fps or 4K at 120fps).

Well, if you think about the compute power differences, they do not at all line up with the resolution differences we tend to see.
We have heard from developers directly that working around the lack of bandwidth feeding the GPU in the Pro is frustrating, and that is where its bottleneck lies.

This is precisely what I already heard. I assume that's why, while there are some games where the X has more than double the resolution, there are others where it only offers 40% more and little else (or even has lower performance) - some games are bandwidth-starved and some aren't. I've wondered before what some games would be like on the Pro if it had the same RAM setup as the X, but no other changes.
 

DrKeo

Banned
Mar 3, 2019
2,600
Israel
For what it's worth, the 750 Ti started falling significantly behind the PS4 by the end of 2015. 2016 was the year when last gen was left behind completely, with almost every AAA game being current-gen only, with the exception of sports games like FIFA. It's reasonable to say the i3/750 Ti combo was competitive against the PS4 because engines were not yet properly optimized for the strengths of current-gen consoles.

Rise of the Tomb Raider was one of the first examples where this comparison became void, with even the base XB1 version pulling ahead.
If we want to be fair, much more powerful GCN cards fell behind consoles in games like Rise of The Tomb Raider:
https://www.eurogamer.net/articles/digitalfoundry-2016-rise-of-the-tomb-raider-pc-face-off

I mean, an R7 360 (1.6 TF GCN 2.0) is more powerful than the One, and it falls to single-digit FPS in ROTTR. You can't really compare a console GPU and PC performance 1:1 because of all the optimization console versions get. If the PS5 has a 5700 XT as its GPU, you can be sure that four years from now the PS5 will kick a 5700 XT PC's ass.

(I can't believe I'm being pulled into that 750ti VS PS4 argument again :D)

Again, for like the 4th time, if anyone has problems with the 750 Ti, let's just talk about the 260X ($139 in August 2013, 115 W TBP) or the 270 ($179 in November 2013, 150 W TBP) instead, cards no one can argue aren't more powerful than the PS4's GPU.
 
Last edited:

Thera

Banned
Feb 28, 2019
12,876
France
Hmm, didn't know that, thought HDMI 2.1 offered both 8K and 120fps at the same time, cheers for the clear up.
Oh, you can, but not with uncompressed data. You need to use VESA Display Stream Compression. From the spec:
"Q: What type of compression is supported?
A: The specification incorporates VESA DSC 1.2a link compression, which is a visually lossless compression scheme. VESA DSC 1.2a also can be used to obtain higher resolutions than 8K60/4:2:0/10-bit color, such as 8K60 RGB, 8K120 and even 10K120. VESA DSC 1.2a also supports 4Kp50/60 with the benefit of enabling operation at much lower link rates."
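
To put rough numbers on why that compression matters, here is a back-of-the-envelope sketch (my own illustration, not from the HDMI spec: it assumes 10-bit RGB and ignores blanking intervals and FRL encoding overhead, so the figures are approximate):

```python
# Approximate uncompressed video bandwidth (ignores blanking and link-encoding overhead).
def uncompressed_gbps(width, height, fps, bits_per_channel=10, channels=3):
    return width * height * fps * bits_per_channel * channels / 1e9

HDMI_2_1_GBPS = 48  # maximum FRL link rate of HDMI 2.1

for name, w, h, fps in [("4K120", 3840, 2160, 120),
                        ("8K60",  7680, 4320, 60),
                        ("8K120", 7680, 4320, 120)]:
    need = uncompressed_gbps(w, h, fps)
    verdict = "fits" if need <= HDMI_2_1_GBPS else "needs DSC (or chroma subsampling)"
    print(f"{name}: ~{need:.0f} Gbps uncompressed -> {verdict}")
```

That works out to roughly 30, 60 and 119 Gbps respectively, which lines up with the FAQ above: 4K120 fits uncompressed, 8K60 RGB and 8K120 only fit once DSC (or, for 8K60, 4:2:0 chroma subsampling) brings them under the 48 Gbps link.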
 

Sekiro

Member
Jan 25, 2019
2,938
United Kingdom
Oh, you can, but not with uncompressed data. You need to use VESA Display Stream Compression. From the spec:
"Q: What type of compression is supported?
A: The specification incorporates VESA DSC 1.2a link compression, which is a visually lossless compression scheme. VESA DSC 1.2a also can be used to obtain higher resolutions than 8K60/4:2:0/10-bit color, such as 8K60 RGB, 8K120 and even 10K120. VESA DSC 1.2a also supports 4Kp50/60 with the benefit of enabling operation at much lower link rates."

Holy shit, they support 10K120, that's insane.
 

Andromeda

Member
Oct 27, 2017
4,844
Not denying it, but the statement in the DF article that TF numbers don't matter now, with the XOX put up as an example, was rather bizarre.
As both consoles will have very similar TF numbers, as many journalists have known for a fact since E3, they are starting to change the power narrative from teraflops to immersion. Both Microsoft and Sony are doing the same. It's all about immersion now.

They had better start their messaging now to prepare their audience for next gen, even if it is indeed awkward to apply that to the XBX after all the "6 teraflops beast" superlatives they repeated during all those years.
I'm still having a hard time believing Sony would go with memory clocked at 18 Gbps, even though in terms of bandwidth delivered it makes sense.
Well, remember that was only for their devkit, which is produced in relatively small quantities. We don't know what's going to be in the final box, because they'll need to build millions of those.
 

Mitchman1411

Member
Jul 28, 2018
635
Oslo, Norway
Keep in mind that "8K 120fps" will not be a thing as even HDMI 2.1 cannot handle that. What they mean is that they could do one or the other (8K at 60fps or 4K at 120fps).
Hmm, didn't know that, thought HDMI 2.1 offered both 8K and 120fps at the same time, cheers for the clear up.
It does; you can do 10K @ 120Hz with the 48 Gbps offered by HDMI 2.1 devices supporting the full bandwidth, using DSC (Display Stream Compression). But don't believe me, believe https://www.rtings.com/tv/learn/tv/tests/inputs/hdmi-2-1:

"HDMI 2.1 significantly increases the maximum bandwidth capability of HDMI up to 48Gbps, and even more when compression is used. This allows HDMI to transmit resolutions as high as 10k @ 120Hz. "
 

Silencerx98

Banned
Oct 25, 2017
1,289
If we want to be fair, much more powerful GCN cards fell behind consoles in games like Rise of The Tomb Raider:
https://www.eurogamer.net/articles/digitalfoundry-2016-rise-of-the-tomb-raider-pc-face-off

I mean, an R7 360 (1.6 TF GCN 2.0) is more powerful than the One, and it falls to single-digit FPS in ROTTR. You can't really compare a console GPU and PC performance 1:1 because of all the optimization console versions get. If the PS5 has a 5700 XT as its GPU, you can be sure that four years from now the PS5 will kick a 5700 XT PC's ass.

(I can't believe I'm being pulled into that 750ti VS PS4 argument again :D)

Again, for like the 4th time, if anyone has problems with the 750 Ti, let's just talk about the 260X ($139 in August 2013, 115 W TBP) or the 270 ($179 in November 2013, 150 W TBP) instead, cards no one can argue aren't more powerful than the PS4's GPU.
Hey now, I'm not dragging you into the PS4 vs 750 Ti argument. I was tired of that back in 2015 :P
 

Pheonix

Banned
Dec 14, 2018
5,990
St Kitts
I am not ignoring that HBM3 can be made cheaper. But what makes something cheaper? It is refinement of the process and scaling up of production. If there is high production/supply and high demand, what usually happens is that costs can be spread over a larger canvas, so you end up with a high-volume, lower-margin business. It also allows for a faster break-even point, and because it is something that is selling extremely well, you are likely to have more players coming in and investing in capacity. Competition then brings in even more cost reductions. This, at the end of the day, is what drives economies of scale.

When the demand is weaker, you do not have the same economies of scale. We now live in a worldwide economy where people want the cheapest goods. Governments and central banks which always want to have some inflation have had a problem generating it because of just how interconnected the worldwide economy is. So, you get some niche products that are high end, that can charge a premium, but to the rest of the market, it is a bloodbath where companies are simply trying to sell enough sometimes to just stay in business.
This is why you never see new tech adopted fast enough if the cost benefit is not there. This in turn means that there is not much demand to force prices down... it is a vicious cycle. So console manufacturers will go for a known entity, not competing for the fastest chips on the market either, because those also cost more.

You started with design A and pumped money into it. You have now abandoned that and are planning a new design, which also costs money, with a more expensive RAM type whose costs will likely not come down as fast, going by how HBM prices have behaved. There are people who sign off on expenditure, and they are more concerned about the bottom line than they are about innovation.

Sony, Microsoft and Nintendo will not be chasing new tech if the price is not right. Cost, at the end of the day, is what drives the console business. Sony/Microsoft could have gone for more capable Intel CPUs in 2013, could have gone with SSD solutions or better GPUs. They did not, because they are building something for the masses.
Who said anything about starting with one design then switching to another? All I am and have been saying is that if they are using HBM, then it's a decision they made a long time ago, and it will be because they can make it work. All I have been doing since then is discussing ways in which that could or could not work. And up until now I have been having this discussion with you in good faith; it's not like I am an HBM evangelist or some shit, I couldn't care less which RAM solution they use as long as it works. I am just taking the pro-HBM stance and making a case for it so I can hear, find out or learn why such an option would be impossible. If at all it is.
Still thinking that the claim that 10-15 W is a world of difference, when both consoles will probably be around 180-200 W, is a funny exaggeration; if this is not so funny for you, hmm, whatever ;)
You with this nonsense again... Did I kill your dog or something?

I am not even talking about the power difference of the overall system, I am talking about the power difference of HBM3 vs GDDR6. And I'm saying that in a console, the power saved by using one RAM solution over the other could be channeled into something else in the system, like, SAY FOR EXAMPLE, allowing you to clock the GPU up from 1.8 GHz to 2 GHz.
 

Pheonix

Banned
Dec 14, 2018
5,990
St Kitts
Yep, doesn't make sense, but there is an HBM fetish team here ;)
If you must know, I also threw around a silly discrete CPU and GPU claim based on the rumored smaller-chip thingy. Since you seem to pick on everything I say, you must have missed that one.

You must have also missed all the times I said the whole small chip thing doesn't make any sense.
 

sncvsrtoip

Banned
Apr 18, 2019
2,773
I am not even talking about the power difference of the overall system, I am talking about the power difference of HBM3 vs GDDR6. And I'm saying that in a console, the power saved by using one RAM solution over the other could be channeled into something else in the system, like, SAY FOR EXAMPLE, allowing you to clock the GPU up from 1.8 GHz to 2 GHz.
Yeah, I know, 1.8 GHz to 2 GHz is for sure not a world of difference ;) BTW, the increase in power consumption would be bigger than that, as power doesn't scale well at such high clocks.
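
As a rough illustration of why (a simplified model, not actual silicon data): dynamic power scales roughly with C * V^2 * f, and the last few hundred MHz usually also needs extra voltage, so power climbs faster than frequency.

```python
# Simplified dynamic power model: P ~ C * V^2 * f (the capacitance term cancels in the ratio).
# The voltages below are hypothetical operating points, purely for illustration.
def relative_power(freq_ghz, volts, base_freq=1.8, base_volts=1.00):
    return (freq_ghz / base_freq) * (volts / base_volts) ** 2

print(relative_power(2.0, 1.00))  # ~1.11x: frequency alone, if voltage could stay flat
print(relative_power(2.0, 1.08))  # ~1.30x: same +11% clock, but needing ~8% more voltage
```

Real voltage/frequency curves are part-specific, but the shape is the point: most of the extra watts for 1.8 to 2.0 GHz come from the voltage bump rather than the clock itself.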
 
Last edited:

Deleted member 56995

User requested account closure
Banned
May 24, 2019
817
Updated predictions:
8c/16t Zen 2 @ 2.8 GHz
44 active CUs @ 1.85 GHz = 10.4 TF
16 GB GDDR6 (14 GB available to devs) @ 17 Gbps on a 256-bit memory bus = 544 GB/s of bandwidth
$499
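
For what it's worth, the TF and bandwidth figures in that prediction check out with the usual formulas (assuming the standard 64 shaders per CU and 2 FLOPs per shader per clock, which is how these peak numbers are normally quoted):

```python
# Peak FP32 throughput: CUs * 64 shaders/CU * 2 FLOPs per clock * clock in GHz -> TFLOPS.
def teraflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000

# GDDR6 bandwidth: per-pin data rate (Gbps) * bus width (bits) / 8 bits per byte -> GB/s.
def bandwidth_gb_s(gbps_per_pin, bus_width_bits):
    return gbps_per_pin * bus_width_bits / 8

print(teraflops(44, 1.85))       # ~10.4 TF
print(bandwidth_gb_s(17, 256))   # 544.0 GB/s
```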
 

Jeffram

Member
Oct 29, 2017
3,924
If you must know, I also threw around a silly discrete CPU and GPU claim based on the rumored smaller-chip thingy. Since you seem to pick on everything I say, you must have missed that one.

You must have also missed all the times I said the whole small chip thing doesn't make any sense.
Does the fact that we don't know anything about the ray tracing implementation leave the possibility open? If the RT cores sit outside of the CUs, then to get the most out of them it might be better to have a higher-clocked GPU, so the RT cores get more cycles in. Wouldn't that be a reason to have fewer but higher-clocked CUs?
 

DrKeo

Banned
Mar 3, 2019
2,600
Israel
Updated predictions:
8c/16t Zen 2 @ 2.8 GHz
44 active CUs @ 1.85 GHz = 10.4 TF
16 GB GDDR6 (14 GB available to devs) @ 17 Gbps on a 256-bit memory bus = 544 GB/s of bandwidth
$499
I think that's a pretty good prediction for the PS5, but I would have given the CPU a few hundred more MHz. I also think that 40 CUs at 2 GHz would be better than 44 CUs at 1.85 GHz, but it would probably cost more in terms of power consumption and heat.
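
On paper the two configurations land almost on top of each other anyway (same peak-FP32 formula of CUs * 64 * 2 * clock), so the trade-off is really narrow-and-fast versus wide-and-slow rather than raw TF; a quick check, purely illustrative:

```python
# Peak FP32 in TFLOPS: CUs * 64 shaders * 2 FLOPs/clock * GHz / 1000
print(40 * 64 * 2 * 2.00 / 1000)   # 10.24 TF
print(44 * 64 * 2 * 1.85 / 1000)   # ~10.42 TF
```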
 

modiz

Member
Oct 8, 2018
17,825

AegonSnake

Banned
Oct 25, 2017
9,566
I am pretty sure we are noticing bigger differences due to BW in most PS4pro and X1X games.
I am sure the Pro's memory bandwidth hurts some games, but come on man, RDR2 isn't rendering at native 4K just because of the extra bandwidth. From DF's own analysis of 60 fps games, the extra bandwidth mostly comes into play for 60 fps games and games targeting native 4K. I thought you guys said that 4K CB and native 4K both render the full 8 million pixels, and that the Pro's bandwidth can handle that for 30 fps games like RDR2, and yet the difference is massive anyway.
 

VX1

Member
Oct 28, 2017
7,000
Europe
I think that's a pretty good prediction for the PS5, but I would have given the CPU a few hundred more MHz. I also think that 40 CUs at 2 GHz would be better than 44 CUs at 1.85 GHz, but it would probably cost more in terms of power consumption and heat.

What he said sounds to me more like a $399 box, not a $499 box :)
 

JooJ

Member
Oct 27, 2017
576
So how realistic is the common idea people here seem to have that PS4 games will get a resolution and frame rate bump instead of just running as-is on the PS5?
 

DrKeo

Banned
Mar 3, 2019
2,600
Israel
It might be a devkit with a different RAM setup.
The PS5's silicon has been locked for almost a year now, and the memory controllers are part of the silicon, so I doubt that's subject to change. But maybe someone here knows the chip development process better than me and can tell us whether there is such a thing as a console sample having a different memory setup 16 months before release.

What he said sounds to me more like $399, not $499 box :)
Maybe, it depends on whether Sony is willing to lose money or wants to make money from day 1 :)
But honestly, I'm having a hard time estimating, considering we don't know how much Sony will be paying for the GDDR, how much they are going to spend on the APU vs the PS4, how expensive having an SSD is vs the PS4's $37 HDD, etc. It seems like we are all guessing here; who knows what the BOM is on such a console?

So how realistic is the common idea people here seem to have that PS4 games will get resolution and frame rate bump instead of just running on PS5?
I don't think that will happen without a patch. MS uses its API wrapper to run old games, which lets them play with each game's profile, but that still requires some QA to have the games run with different settings. Sony, on the other hand, is trying to match the hardware so games run as is, and I'm not sure they are going to allow this out of the box without some kind of patch. But I guess we will have to wait and see.

My expectation for the PS5's BC is that it runs PS4/Pro games as is, and that if I turn on boost mode and the game has something dynamic in it (uncapped FPS, dynamic resolution, hiccups, etc.), the PS5 will run it at the best-case scenario possible at all times.
 

DrKeo

Banned
Mar 3, 2019
2,600
Israel
We don't have any official info, so the best guess is looking at how it is handled on the Pro.
The Pro is actually a pretty good comparison, because it seems like they are trying to do the same thing on the PS5.

Has backwards compatibility with PS4 been confirmed?
Yes, by Mark Cerny in the April Wired article and now again in Famitsu a few days ago. This is a terrible Google Translate version of what Famitsu said:

To me, it sounds like they are saying, "We got most of the PS4 covered and we are trying to iron out some stuff to get all of the PS4 library working." Can anyone here who reads Japanese try to give us the "spirit" of this paragraph?
 

Dictator

Digital Foundry
Verified
Oct 26, 2017
4,930
Berlin, 'SCHLAND
I am sure the Pro's memory bandwidth hurts some games, but come on man, RDR2 isn't rendering at native 4K just because of the extra bandwidth. From DF's own analysis of 60 fps games, the extra bandwidth mostly comes into play for 60 fps games and games targeting native 4K. I thought you guys said that 4K CB and native 4K both render the full 8 million pixels, and that the Pro's bandwidth can handle that for 30 fps games like RDR2, and yet the difference is massive anyway.
I am not sure what you are saying?

Some games are more bandwidth-bound than others, especially the higher they push the resolution, of course.
 