It's cool, but it's still a trade-off, right? It's using the existing shader hardware to carry out the operations, so if you're doing ML you aren't doing shading on those CUs
That works well for something like a DLSS equivalent, as long as the ML inference time is less than the time the GPU saves by shading fewer pixels. It might also be necessary if RT really needs to run at lower resolutions to be efficient (as we've already seen on PC)
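The break-even logic there is simple enough to sketch. This is a toy model with made-up numbers (the 12 ms shading time, 1/4 pixel scale, and 1.5 ms ML cost are all illustrative assumptions, not real benchmarks), and it assumes shading cost scales roughly linearly with pixel count, which is a simplification:

```python
def upscaling_wins(native_shade_ms: float, pixel_scale: float, ml_ms: float) -> bool:
    """True if rendering at a fraction of native resolution plus ML upscaling
    is faster than shading natively. Assumes shading time scales roughly
    linearly with pixel count (a simplification)."""
    low_res_shade_ms = native_shade_ms * pixel_scale
    return low_res_shade_ms + ml_ms < native_shade_ms

# e.g. 12 ms of native shading, rendering at 1/4 the pixels (1080p -> 4K),
# and a hypothetical 1.5 ms of ML inference on the CUs:
print(upscaling_wins(12.0, 0.25, 1.5))  # True: 3.0 + 1.5 ms beats 12 ms
```

The same check flips to False once the ML cost eats the savings, which is why sharing the CUs between ML and shading only pays off when inference is cheap relative to the frame budget.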
There are so many new tools for devs that it'll be fascinating to see how engines develop to leverage them. ML, VRS, and mesh shaders are really big ones that could have a huge impact