AMD is aware of what it will take to accelerate ray tracing, and already ships tools for ray-traced rendering in both professional applications and real time. It's understandable they didn't push the envelope for consumer graphics, because developer adoption is key. This paper and patent show they've been considering hardware-accelerated RT for at least 4 years:
https://pdfs.semanticscholar.org/26ef/909381d93060f626231fe7560a5636a947cd.pdf
https://patentimages.storage.googleapis.com/9a/4e/87/9e66d9a430c575/US20130328876A1.pdf
Doing RT acceleration on a separate die is likely impractical. You want the existing shader architecture modified to support it, just as Nvidia did with Turing.
AMD has been behind the curve compared to Nvidia ever since Nvidia fixed Fermi's power dissipation: Nvidia can hit higher clocks at much lower power. Part of that is architecture, sure, but Nvidia has probably also cooked up transistor-level optimizations that AMD needs to catch up on. Perhaps the Zen engineers helping the graphics side out will make baby steps towards that.
Yes, Vega 7 burning 300W to hit those numbers is bad, but by AMD's own 7nm metric, a 7nm Vega should also be able to match existing Vega 64 performance at half the power. Throw some architectural improvements on top of that and you're in the 150W range, still at 12TF, and hopefully better than Vega 7 clock for clock thanks to Navi's improvements.
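Rough sanity check of that arithmetic, as a sketch. The specific figures here are my assumptions, not from the post: Vega 64 at roughly 295W typical board power and about 12.7 TFLOPS peak FP32, plus AMD's 7nm pitch of "same performance at about half the power".

```python
# Back-of-envelope projection for a 7nm Vega-class part.
# Assumed inputs (not from the post above):
vega64_power_w = 295.0   # assumed Vega 64 typical board power
vega64_tflops = 12.7     # assumed Vega 64 peak FP32 throughput

# AMD's 7nm claim: same performance at roughly half the power.
node_power_scale = 0.5
projected_power_w = vega64_power_w * node_power_scale

print(f"~{projected_power_w:.0f} W at ~{vega64_tflops:.1f} TFLOPS")
```

That lands right around the ~150W / 12TF ballpark, before counting any architectural gains from Navi on top of the process shrink.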