No console warrior or plastic-box defender here, just a passionate gamer who would like to understand the technical side a bit better. I'd like to discuss with you guys Sony's (Cerny's) choice on the GPU front, which is apparently causing all of this mess.
So the architectural approach Sony went all in on is "fewer CUs, higher clocks". In yesterday's presentation, Cerny states it clearly at the start of the GPU section, saying he prefers to push clock speed rather than the number of CUs, and insisting that higher clocks can perform better. Even the investment in a beefier cooling solution points in that direction - and here I wonder: why not follow the same path as the Xbox strategy and install more CUs at lower clocks, spending less of the budget on cooling?
Cerny then gave a toy example comparing 36 CUs @ 1 GHz against 48 CUs @ 0.75 GHz: both land at roughly the same number of teraflops (about 4.6), but in his view the first scenario with fewer CUs is far more performant for rasterization, makes better use of caches, etc., while having more CUs brings its own problems, such as it being harder for devs to keep all of them busy.
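Just to sanity-check those numbers myself, here's a quick back-of-the-envelope calculation (a rough sketch, assuming the usual AMD way of counting: 64 shader ALUs per CU and 2 FLOPs per ALU per clock thanks to fused multiply-add):

# Rough peak FP32 teraflops for Cerny's toy example.
# Assumption: 64 shader ALUs per CU, 2 FLOPs per ALU per clock (FMA),
# which is how GCN/RDNA teraflop figures are normally quoted.

ALUS_PER_CU = 64
FLOPS_PER_ALU_PER_CLOCK = 2  # one fused multiply-add = two floating-point ops

def tflops(cus: int, clock_ghz: float) -> float:
    """Theoretical peak FP32 TFLOPS for a given CU count and clock speed."""
    return cus * ALUS_PER_CU * FLOPS_PER_ALU_PER_CLOCK * clock_ghz / 1000

print(f"36 CUs @ 1.00 GHz: {tflops(36, 1.00):.3f} TF")  # -> 4.608 TF
print(f"48 CUs @ 0.75 GHz: {tflops(48, 0.75):.3f} TF")  # -> 4.608 TF

So on paper the two configurations really are identical in raw compute; Cerny's whole argument is about everything that single number doesn't capture.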
My question for you guys is: how does this translate, in your opinion, into real-life scenarios where devs have to work with these two very different GPU configurations?
From what I understand, my feeling is that we'll get roughly the same performance even with the TF difference. But then again, probably only side-by-side demos will settle those doubts at the end of the day.