I mean yes but that doesn't make them a terrible measure of performance.
There are plenty of reasons why they're a poor predictor of performance; that's just one of them.
It doesn't say what you will get out of it but what the potential of the tech is.
Without knowing how much of that potential is realized in practice, does it really matter? It's also only the potential of one aspect of the rendering pipeline. Fill rate, memory bandwidth and latency, texture cache hit ratio, average pixel coverage per polygon, and plenty of other factors play equally important roles. It's just hard for most people to understand all the intricacies (heck, it's hard for the silicon architects who make a living doing this to predict how everything is going to interact without extensive simulations).
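To put a rough number on why raw FLOPS alone can mislead: here's a roofline-style back-of-the-envelope sketch (a deliberate simplification, with illustrative figures rather than any real console's specs) showing that when a workload does too little math per byte of memory traffic, a 12 TFLOPS chip and a 10 TFLOPS chip hit the same bandwidth wall and the compute gap evaporates:

```python
# Roofline-style sketch: attainable throughput is capped by whichever
# ceiling you hit first, compute or memory bandwidth. All figures are
# illustrative, not real hardware specs.

def achievable_tflops(peak_tflops, bandwidth_gb_s, flops_per_byte):
    """min(compute ceiling, bandwidth ceiling), both in TFLOPS."""
    bandwidth_ceiling = bandwidth_gb_s * flops_per_byte / 1000.0  # GFLOPS -> TFLOPS
    return min(peak_tflops, bandwidth_ceiling)

for name, peak in [("GPU A", 12.0), ("GPU B", 10.0)]:
    for intensity in (4, 16, 64):  # FLOPs performed per byte fetched
        print(f"{name} @ {intensity} FLOP/B: "
              f"{achievable_tflops(peak, 448.0, intensity):.2f} TFLOPS")
```

At 4 or 16 FLOPs per byte, both hypothetical chips are bandwidth-bound and produce identical numbers; only at high arithmetic intensity does the peak-FLOPS gap show up at all.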
There is a reason everyone uses them (MS/Sony/AMD/NVidia) and it isn't because it is a useless metric...
Everyone also uses star ratings to rate movies, megapixels to describe cameras, gigahertz to describe CPUs and they're all equally lazy and of similarly limited value without knowing a whole lot more. I assure you, it's not what the engineers who design GPUs for a living care most about, nor is it what developers pay attention to. It's a marketing measure focused on a customer base that isn't going to take the time to learn all the factors involved.
If you have a better way to communicate performance besides benchmarks please do share.
If you're going to eliminate the only really effective way to measure performance, benchmarks, then you're left with slim pickings. That's literally the gold standard for how technical teams measure the effectiveness of their designs, using real-world test cases that are believed to be representative of typical scenarios. It's like saying "if you know of a better way to reproduce besides sex please do share." Every other approach is just more complicated and requires understanding the interaction between lots of parts of the system - many of which we don't know enough about in this case.
What we do know is that the Series X likely has a raw throughput advantage, but that it's probably smaller than the 16-25% reflected in GPU peak performance and memory bandwidth for a subset of the memory, because clock speeds play a role in other parts of the rendering pipeline and the PS5 has a clear advantage there. It could be the case that variable clock speeds make a difference, too, but it's impossible to quantify yet and the one reference we have suggests that it's likely only a couple of percent in extreme cases.
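For anyone curious where those percentages come from, the arithmetic follows straightforwardly from the publicly announced specs (peak FP32 for these RDNA 2 parts is CUs × 64 ALUs × 2 FLOPs per FMA × clock):

```python
# Peak FP32 throughput from the announced console specs.
def peak_tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000.0

xsx = peak_tflops(52, 1.825)  # Series X, fixed clock  -> ~12.15 TFLOPS
ps5 = peak_tflops(36, 2.23)   # PS5, maximum boost     -> ~10.28 TFLOPS

print(xsx / ps5)     # ~1.18 -> ~18% peak-compute advantage for Series X
print(560 / 448)     # 1.25  -> 25% bandwidth advantage, but only for
                     #          the Series X's fast 10 GB memory pool
print(2.23 / 1.825)  # ~1.22 -> PS5 clocks ~22% higher, which speeds up
                     #          fixed-function stages (rasterization,
                     #          primitive setup, caches) that scale with
                     #          frequency rather than CU count
```

Those three ratios are essentially the entire basis of the headline comparison.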
Me? I'm waiting for those all-important benchmarks to satisfy my curiosity, but I also know that a 10 or 20 percent performance gap isn't a big enough difference to make a decision for me. It still comes down to games, as it almost always does, and that's what I'm most eager to see.