Console manufacturers definitely made 'flops' a big part of their technical marketing. Polys/sec was the simplistic metric at one point, then Sony pushed flops a bit with PS2, and a lot with PS3.
However, I would say that beyond what the platform holders were telling people, the picture being telegraphed to consumers through dev commentary etc. was already becoming more complicated from the 360/PS3 generation onward. If they weren't previously, people became more aware of development ease and memory as things that mattered to end results. Much was made of the split memory in the PS3 versus the unified memory in the 360.

Which is why, when you spring forward to the PS4, Cerny wasn't simply highlighting the GPU tflops figure above all else. It was one part of a bigger picture being presented. Much was also made of the 8GB of GDDR5 in particular, and of the hard disk in every box - memory management things.

In PS4/One comparisons, flops became a vaguely proportional metric for what people were seeing in real games. The difference was relatively large, and it seemed to make sense that it was the key to the difference people were seeing - and it was a big part of it - but hidden in that was just how much the memory differences between the two were hurting performance too, especially in the earlier games.
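(For context on where those headline numbers come from: the tflops figure is just arithmetic on the GPU's shader count and clock. Here's a minimal sketch using the commonly cited launch specs - the function name is just for illustration, not anything official.)

```python
# Rough sketch: how the headline "tflops" figure is derived.
# Peak FP32 throughput = shader ALUs x 2 ops per clock (an FMA counts
# as two operations) x clock speed. Specs are commonly cited launch figures.

def peak_tflops(shader_alus: int, clock_ghz: float) -> float:
    """Theoretical peak FP32 TFLOPS, counting a fused multiply-add as 2 ops."""
    return shader_alus * 2 * clock_ghz / 1000.0

ps4 = peak_tflops(1152, 0.800)  # 18 CUs x 64 lanes @ 800 MHz -> ~1.84 TF
xb1 = peak_tflops(768, 0.853)   # 12 CUs x 64 lanes @ 853 MHz -> ~1.31 TF

print(f"PS4: {ps4:.2f} TF, One: {xb1:.2f} TF, ratio: {ps4 / xb1:.2f}x")
```

Note that it's a theoretical ceiling - nothing about memory, development ease, or anything else that determines whether you can actually feed those ALUs - which is exactly why it could hide so much.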
And that's kind of how it's often been: the system with the better GPU has often also had the better memory hierarchy (360, PS4). In those situations it was easy for some observers to bundle all of that up and credit it to the flops or 'more powerful GPU' difference alone - it seemed a reasonable shorthand for the difference in general. But in deeper technical discourse, memory has been a big part of the conversation for at least two gens now, so I think it'd be wrong to say flops were the only thing anyone was ever talking about when comparing consoles.

The reason memory/data might seem like a 'bigger' topic now is that, unlike the previous two gens, we have one machine with the better GPU and one machine perhaps with the better data setup. The complexity of that can't be hidden behind one number that's apparently responsible for all of the difference between the two systems. That number will easily explain some differences (e.g. resolution), but it can't serve as the more general simplification it did in prior gens, when one system had both the better GPU and the better memory setup.
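To put a number on the 'memory setup' side of that last-gen comparison: the raw bandwidth gap between the PS4 and the One was, on paper, even larger than the flops gap. A rough sketch, again using the commonly cited launch specs:

```python
# Rough sketch: peak main-memory bandwidth from bus width and data rate.
# Bandwidth (GB/s) = bus width in bytes x effective data rate (Gbps per pin).
# Figures are the commonly cited launch specs.

def peak_bandwidth_gbs(bus_bits: int, data_rate_gbps: float) -> float:
    """Theoretical peak bandwidth in GB/s."""
    return (bus_bits / 8) * data_rate_gbps

ps4_gddr5 = peak_bandwidth_gbs(256, 5.5)    # 256-bit GDDR5 @ 5.5 Gbps -> 176 GB/s
xb1_ddr3  = peak_bandwidth_gbs(256, 2.133)  # 256-bit DDR3-2133 -> ~68 GB/s
# (The One also had 32MB of fast ESRAM to partly compensate - exactly the
# kind of complexity a single headline number hides.)

print(f"PS4: {ps4_gddr5:.0f} GB/s vs One DDR3: {xb1_ddr3:.0f} GB/s")
```

That's roughly a 2.6x gap on paper versus a ~1.4x flops gap - ESRAM narrowed it in practice, but it illustrates why crediting everything to flops alone was always a simplification.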