This is so fucking dumb. The PC video card market is really whack, with Nvidia and AMD as the only players.
ATi would have been so much better without AMD. AMD gave RTG next to no money post-acquisition while Nvidia has plowed cash into R&D. AMD basically gave ATi the busywork of sticking GPUs onto CPU dies, which ended up being a low-value wild goose chase in the end.
The biggest problem, in my opinion, was that Terascale hung around for way too long. VLIW was sensible back in the fixed-function era and would have been great pre-DX9, but it pushes all the scheduling onto the compiler: the driver has to statically find independent operations to pack into each bundle, and DX9-and-beyond shader code just doesn't have enough of them, so the scheduling proved too unwieldy. You need some pretty smart software people to cover for Terascale's deficiencies; they ultimately couldn't do it, and they kept trying for way too long.
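To make the VLIW pain concrete, here's a toy sketch (my own illustration, nothing to do with AMD's actual shader compiler) of static bundle packing: independent ops fill a 5-wide bundle completely, while a dependent chain leaves 80% of the slots empty every cycle.

```python
# Toy static VLIW slot packer (illustrative only, not real Terascale tooling).
# A VLIW bundle issues up to `slots` ops per cycle, but the compiler can only
# pack ops whose inputs are already computed -- dependent chains starve it.

def pack_bundles(ops, slots=5):
    """Greedily pack ops into VLIW bundles of width `slots`.
    Each op is (name, set_of_dependency_names). Returns a list of bundles."""
    done, bundles, remaining = set(), [], list(ops)
    while remaining:
        bundle = []
        for op in list(remaining):
            name, deps = op
            # Only issue an op if everything it reads was produced earlier.
            if deps <= done and len(bundle) < slots:
                bundle.append(name)
                remaining.remove(op)
        done.update(bundle)
        bundles.append(bundle)
    return bundles

# Five independent ops: one full bundle, 100% slot occupancy.
independent = [("a", set()), ("b", set()), ("c", set()), ("d", set()), ("e", set())]
# A five-op dependent chain: five bundles, each 1/5 full -- 20% occupancy.
chain = [("a", set()), ("b", {"a"}), ("c", {"b"}), ("d", {"c"}), ("e", {"d"})]

print(pack_bundles(independent))  # [['a', 'b', 'c', 'd', 'e']]
print(pack_bundles(chain))        # [['a'], ['b'], ['c'], ['d'], ['e']]
```

Same five ops either way; the only variable is how much instruction-level parallelism the compiler can find, which is exactly the bet Terascale lost.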
Then with GCN they tried to solve it by building the anti-Terascale. Massive SIMD. All hardware scheduling, ALL THE TIME. That blows up so much silicon budget, you still have awful software (*cough* fine wine drivers *cough*), and your occupancy still sucks because the architecture has become hack upon hack upon hack to try and fill the pipelines, and those hacks can only go so far.
As far as I can see, this ultimately has to be laid at the feet of a faulty vision built on some really bad assumptions. AMD spent a good decade working under the assumption that discrete GPUs were going away. The idea was that by the mid-2020s they could take a GPU command stream and a CPU command stream, have it all on the same die, and basically run one giant stream of instructions and data through it.
There are two immediate problems with this approach.
1) One look at the memory subsystem of a typical x86 machine would tell you this is a fool's errand. The memory bandwidth is an order of magnitude too low.
2) How the hell do you get from A to B without everything imploding in the meantime?
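Point 1 is easy to put rough numbers on. Back-of-envelope, using typical parts as illustration (not any specific AMD product):

```python
# Back-of-envelope peak memory bandwidth comparison.
# Numbers are for illustrative typical parts, not a specific product.

def peak_bandwidth_gbs(channels, bus_bytes, mtps):
    """Peak bandwidth in GB/s: channels * bus width in bytes * mega-transfers/s."""
    return channels * bus_bytes * mtps / 1000

# Dual-channel DDR4-3200 on a typical x86 desktop: 2 channels * 8 B * 3200 MT/s.
cpu_bw = peak_bandwidth_gbs(channels=2, bus_bytes=8, mtps=3200)

# A midrange-to-high-end GPU with a 256-bit GDDR6 bus at 14 Gbps per pin:
# 32 B wide * 14000 MT/s.
gpu_bw = peak_bandwidth_gbs(channels=1, bus_bytes=32, mtps=14000)

print(f"CPU {cpu_bw} GB/s, GPU {gpu_bw} GB/s, ratio {round(gpu_bw / cpu_bw, 2)}x")
# CPU 51.2 GB/s, GPU 448.0 GB/s, ratio 8.75x
```

Roughly a 9x gap, before you even talk about latency hiding. Feeding GPU-class throughput from a commodity x86 memory subsystem was never going to work.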
And they tried, valiantly, but in the end in vain. Remember heterogeneous computing? Remember how AMD hasn't uttered those words in the last five years? That didn't stop them, though. Hell, they brought in HBM2 to get this vision back on track (get the memory bandwidth and interposer sorted, drop in a CPU die and voila, Frankenstein's x86 monster!) and all it did was cost them a fuckton of money and a generation of high-end cards with zero margin.
Meanwhile, Nvidia just kept getting better at the core competency of a GPU through hard-won R&D. Solving the bounding-box problem of ray tracing was ridiculously difficult, but now Nvidia can hardware-accelerate it. AMD? Ray tracing? Meh. Have you seen our Ashes of the Singularity benchmark results?
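For context, the "bounding box problem" boils down to hammering ray-vs-box intersection tests while walking a BVH, millions of times per frame; that per-box test is what RT cores bake into silicon. A minimal sketch of the classic slab test (my own illustration, obviously not Nvidia's hardware logic):

```python
# Ray vs axis-aligned bounding box (AABB) via the slab test: the primitive
# that BVH traversal repeats millions of times per frame, and the thing
# RT hardware accelerates. Minimal sketch, not anyone's actual implementation.

def ray_hits_aabb(origin, inv_dir, box_min, box_max):
    """Slab test: clip the ray against each axis-aligned pair of planes and
    check the surviving interval is non-empty. inv_dir = 1/direction per axis
    (pass +/-inf for zero components)."""
    t_near, t_far = -float("inf"), float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t0, t1 = (lo - o) * inv, (hi - o) * inv
        if t0 > t1:
            t0, t1 = t1, t0
        t_near, t_far = max(t_near, t0), min(t_far, t1)
    # Hit if the interval is non-empty and the box isn't entirely behind us.
    return t_near <= t_far and t_far >= 0

# Ray from the origin along +x hits a unit box centered at (5, 0, 0):
print(ray_hits_aabb((0, 0, 0), (1.0, float("inf"), float("inf")),
                    (4.5, -0.5, -0.5), (5.5, 0.5, 0.5)))  # True
```

Trivial on its own; the hard part Nvidia solved is doing this (plus triangle tests and traversal stack management) in fixed-function units at full memory-system speed.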
If GPUs were trains, AMD was trying to build a maglev while Nvidia decided to just keep improving steel-on-steel. Now AMD is left billions in the hole from their APU line with barely anything to show for it, and Nvidia is selling its tech to the high end of town and making bank.