Possibly a boosted boost mode or a max clock test.
Boost mode for PS4 Pro-patched games? To push those resolutions to native 4K and the unlocked frame rates to 60 fps? ...Maybe?
Yeah, I'm gonna press the doubt button on Pro boost. This is 6.3x faster than the OG PS4 and 2.7x faster than PS4 Pro. This is also why it makes sense that it's probably PS5: XSX is only 2.5x faster than X1X. Sony actually took a bigger jump than MS did. It's still a generational leap. They probably thought MS overcorrected with the X1X.
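For what it's worth, those multipliers don't fall out of raw TF ratios alone (9.2 / 1.84 is only 5x). They only line up if you fold in roughly a 1.25x performance-per-flop uplift for RDNA over GCN, which is my guess at the math behind them, not something stated in the thread:

```python
# Rough sketch of where the 6.3x / 2.7x / 2.5x figures could come from.
# The ~1.25x RDNA-over-GCN efficiency factor is an assumption on my part;
# raw teraflop ratios by themselves don't produce these multipliers.

GCN_TF = {"PS4": 1.84, "PS4 Pro": 4.2, "X1X": 6.0}
RDNA_EFFICIENCY = 1.25  # assumed perf-per-flop uplift of RDNA vs. GCN

def effective_uplift(rdna_tf, gcn_tf):
    """Effective speedup of an RDNA GPU over a GCN GPU."""
    return rdna_tf * RDNA_EFFICIENCY / gcn_tf

print(f"{effective_uplift(9.2, GCN_TF['PS4']):.2f}")      # -> ~6.3x (PS5 vs OG PS4)
print(f"{effective_uplift(9.2, GCN_TF['PS4 Pro']):.2f}")  # -> ~2.7x (PS5 vs PS4 Pro)
print(f"{effective_uplift(12.15, GCN_TF['X1X']):.2f}")    # -> ~2.5x (XSX vs X1X)
```

With any efficiency factor near 1.25 all three of the quoted multipliers come out within rounding distance, which is why I suspect that's the calculation being done.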
As for a max clock test... doubt that too. It wouldn't be a round number if they were testing the limits of a processor.
People forget that power budget is a thing. PS4, PS4 Pro and Xbox One X all targeted a similar TDP ceiling, so Sony probably assumed MS would do the same this time; instead, MS pushed the upper bound. PS5 is supposedly in a ~150W TDP but will be hitting closer to its peak, like the X1X did. MS went above that: their APU alone will be drawing more power than the entire X1X did by itself.
like no joke... the only Sony console that hit 200W was the PS3.
Xbox: 100W
PS2: 50W
Xbox 360: 170W
PS4: 135W
PS4 Pro: 150W
Xbox One: 115W
Xbox One X: 170W
PS5: 170W (projected)
Xbox Series X: 200W+ (projected)
Remember when everyone thought over 10 TF was a fantasy? (Well, except me; I thought they could push 18-24 TF just to send the hammer down, because of 7nm.) There was a time when 8 TF was considered the ceiling. Go back and tell your old self it was 9.2 TF and you'd be happy. MS just went above that, that's all. There's nothing to be sad about... the games are still going to look fantastic. Just look at what 1.8 TF of GCN is doing now. Look at what 4.2 TF of GCN is doing now. Imagine having a much better CPU and raytracing *for free*. Imagine Last of Us 2 but with realistic-looking hair, *for free*.
People are just freaking out because one number is higher than the other.
This is kind of a circular argument, especially when we're talking about multiplatform games. You couldn't build heavy GPGPU use into your game, for obvious reasons, if you were designing for both Xbox One and PS4. So it stands to reason that we'd see a significant difference in exclusive titles, which is what we do see: DriveClub used the extra power in various ways, and Killzone Shadow Fall and Infamous used it for things like sound reflections. The point I'm making, even with Claybook, is that the extra performance doesn't *have* to be allocated to visuals just so the game can scale down to a lower system. I do think the CPUs this time around are much better, but we also have 7 years to go with this hardware and whatever baseline we establish upfront.
We went into this generation with a low CPU baseline that really began to hurt towards the end of it; next time we'll be starting a whole new 7-year period with a 4 TF GPU baseline.
edit: actually, as I'm thinking about it, we're essentially talking about an exact replay of how it worked last gen. Games were designed for the base Xbox One and scaled up, not the other way around. We already have the example in the current gen: with similar CPUs but different GPUs, most games just get a resolution difference, maybe a frame rate increase and some other added effects, and call it a day. We see this already from the OG Xbox One up to the X1X being ~5x more powerful.
Lockhart is technically too powerful relative to the other consoles. Make a game that maxes out Lockhart at 1080p using zero GPGPU and you'd be running dynamic resolution on Xbox Series X, because it couldn't handle that scene at native 4K. XSX is only 3x the GPU power. Don't forget that. Native 4K is 4x the pixels of 1080p.
This means on an Xbox exclusive you'd have a GPGPU budget of about 1.3 TF. For a multiplatform game you'd have a GPGPU budget of around 1.9 TF (assuming PS5 @ 9.2 TF); exceed that and the game can't be ported to Lockhart. Now, last time I checked, dedicating nearly 25% of your GPU budget to GPGPU would be kinda nuts. Lockhart is not going to hold back PS5 or Xbox Series X. Full stop. If Lockhart is higher than 4 TF and closer to 4.8 TF, which I think it might be, the numbers get even crazier: a 2.4 TF GPGPU budget for exclusives and a 3 TF budget for multiplatform. This assumes you're utilizing the rest for shaders/RT in the most optimal way possible (which is why the multiplatform number is higher: XSX gets an IQ benefit and framerate stability with no extra bells and whistles).
I don't know about you, but a 1.9-3 TF budget for GPGPU usage sounds pretty darn exciting to me, considering we just came from a generation where every console is below that mark no matter what.
edit: apparently I did my math like a lazy ass... 1.9 TF should be around 2.3 TF. Whoops.
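The budget arithmetic above can be written out explicitly. The model (my reading of the argument, nothing official): the 4K console has to render the same scene at 4x the pixels Lockhart does at 1080p, while GPGPU work costs the same TF on every box because it doesn't scale with resolution. Solving 4 * (lockhart_tf - g) + g = big_console_tf for g gives the budget:

```python
# Sketch of the Lockhart GPGPU-budget model described above.
# Assumption: rendering cost scales linearly with pixel count (4x for
# native 4K vs 1080p), GPGPU cost is resolution-independent.

def gpgpu_budget(lockhart_tf, big_console_tf, pixel_scale=4.0):
    """Max TF spendable on GPGPU before the 4K console runs out of
    rendering headroom relative to the 1080p console.
    Derived from: pixel_scale * (lockhart_tf - g) + g = big_console_tf."""
    return (pixel_scale * lockhart_tf - big_console_tf) / (pixel_scale - 1)

# 4 TF Lockhart:
print(gpgpu_budget(4.0, 12.15))  # XSX exclusive -> ~1.3 TF
print(gpgpu_budget(4.0, 9.2))    # vs 9.2 TF PS5 -> ~2.3 TF (the corrected figure)

# 4.8 TF Lockhart:
print(gpgpu_budget(4.8, 12.15))  # -> ~2.4 TF
print(gpgpu_budget(4.8, 9.2))    # -> ~3.3 TF
```

Note the model reproduces the corrected 2.3 TF multiplatform figure rather than the original 1.9 TF, and it also matches the 2.4/3 TF numbers for a 4.8 TF Lockhart.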