In that case, why not just use ray tracing in dev kits - have developers render the lights in real time, make all the needed adjustments, and then switch RTX off so the game itself runs at steady performance?
The thing is, if the devs want dynamic lighting at runtime, they need to implement it so it runs in the shipped game, either as RT or with some other technique. They can (and sometimes do) use RT to visualize what a scene should look like, placing lights and all the other stuff - a target render - and then go back to the game code, with its runtime implementation of GI, shadows, etc., and tweak until it looks as close as possible to the target render while still being performant.
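To make "target render" concrete, here's a toy sketch of the brute-force idea (purely illustrative - the scene and every name in it are made up, this isn't any engine's actual pipeline): shoot a ray per pixel, intersect it with the scene, and shade from the light. It dumps an ASCII image of a lit sphere to the terminal:

```cpp
#include <cmath>
#include <cstdio>

struct Vec { double x, y, z; };
Vec operator+(Vec a, Vec b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec operator-(Vec a, Vec b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec operator*(Vec a, double s) { return {a.x * s, a.y * s, a.z * s}; }
double dot(Vec a, Vec b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec norm(Vec a) { return a * (1.0 / std::sqrt(dot(a, a))); }

// Ray-sphere intersection: distance along the ray, or -1 on a miss.
// Assumes dir is normalized.
double hitSphere(Vec orig, Vec dir, Vec center, double radius) {
    Vec oc = orig - center;
    double b = dot(oc, dir);
    double disc = b * b - (dot(oc, oc) - radius * radius);
    if (disc < 0) return -1;
    double t = -b - std::sqrt(disc);
    return t > 0 ? t : -1;
}

int main() {
    const int W = 64, H = 32;
    Vec sphere{0, 0, -3}; double radius = 1.0;
    Vec light{2, 2, 0};
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            // Primary ray from the camera at the origin through this pixel.
            Vec dir = norm({(x - W / 2) / double(H),
                            -(y - H / 2) / double(H), -1.0});
            double t = hitSphere({0, 0, 0}, dir, sphere, radius);
            if (t < 0) { std::putchar(' '); continue; }
            // Lambertian shading from the point light - the math does
            // the work, nothing is hand-placed or baked.
            Vec p = dir * t;
            Vec n = norm(p - sphere);
            double diff = std::fmax(0.0, dot(n, norm(light - p)));
            std::putchar(" .:-=+*#%@"[int(diff * 9.99)]);
        }
        std::putchar('\n');
    }
}
```

The point is that none of the lighting is authored: move the light and rerun, and the shading just lands where the math says it should. That's what makes it attractive as a ground-truth reference to match against.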
So the benefit of RT is more about easing game development.
The best comparison I can think of is a character in a game with ragdoll physics: you can manipulate that character in thousands of different ways and get thousands of different results, whereas pre-canned animations have to be individually created.
So RT is like the ragdoll?
Yup, you get the idea. With ragdoll you build a model with a skeleton whose joints have plausible, approximate ranges of motion, apply approximations of gravity, friction, and the other forces acting on the ragdoll, and the calculations do the rest for you. No need to hand-code the millions of possible animations, like a character folding over a railing or falling down a flight of steps. With RT you approximate how light (reflections, refractions, etc.) and materials behave, and it does the rest - see the sketch below.
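For the ragdoll half of the analogy, here's a minimal sketch assuming nothing about any real engine (every name is made up): a chain of point masses pinned at one end, integrated with Verlet steps under gravity, with distance constraints standing in for bones. The dangling pose falls out of the solver; nobody animated it:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

struct Joint { double x, y, px, py; };  // current and previous position

int main() {
    const int N = 5;                 // point masses in the chain
    const double rest = 1.0;         // "bone" length between neighbors
    const double dt = 1.0 / 60.0, gravity = -9.8;

    // Start the chain stretched out horizontally from the pinned end.
    std::vector<Joint> j;
    for (int i = 0; i < N; ++i)
        j.push_back({i * rest, 0.0, i * rest, 0.0});

    for (int step = 0; step < 120; ++step) {  // simulate two seconds
        // Verlet integration: velocity is implicit in (pos - prev pos).
        // Joint 0 is pinned to the world, so it is never integrated.
        for (int i = 1; i < N; ++i) {
            double vx = j[i].x - j[i].px, vy = j[i].y - j[i].py;
            j[i].px = j[i].x; j[i].py = j[i].y;
            j[i].x += vx;
            j[i].y += vy + gravity * dt * dt;
        }
        // Constraint relaxation: nudge each neighbor pair back toward
        // the rest length. A few passes approximate a rigid skeleton.
        for (int pass = 0; pass < 8; ++pass) {
            for (int i = 0; i + 1 < N; ++i) {
                double dx = j[i + 1].x - j[i].x, dy = j[i + 1].y - j[i].y;
                double d = std::sqrt(dx * dx + dy * dy);
                if (d < 1e-9) continue;
                double diff = (d - rest) / d;
                double wa = (i == 0) ? 0.0 : 0.5;  // pinned end absorbs nothing
                double wb = 1.0 - wa;
                j[i].x     += dx * diff * wa; j[i].y     += dy * diff * wa;
                j[i + 1].x -= dx * diff * wb; j[i + 1].y -= dy * diff * wb;
            }
        }
    }
    for (auto& p : j) std::printf("(%.2f, %.2f)\n", p.x, p.y);
}
```

Swap the chain for a full skeleton with joint limits and collision and you have the ragdoll case; swap the gravity-plus-constraints step for ray/surface interactions and you have the RT case. Same principle: encode the rules once, let the simulation generate the outcomes.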
Good explanation, but SVOGI is not a low-computation rendering technique.
I didn't say it was cheap, I said it was cheaper than RT. And it is, significantly: tracing a handful of cones through a prefiltered voxel octree is a lot less work than tracing and shading many rays per pixel.
Larrabee (or Cell) lives!
It's pretty crazy how much GPU and CPU architectures keep converging: CPUs becoming more parallel and vectorized, GPUs gaining more and more scalar features.
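For a taste of the CPU half of that convergence, here's a minimal sketch assuming an x86-64 CPU with AVX (built with something like g++ -mavx): one instruction operating on eight floats in lockstep, the same data-parallel style GPUs are built around:

```cpp
#include <immintrin.h>
#include <cstdio>

int main() {
    // 32-byte alignment is required by the aligned load/store intrinsics.
    alignas(32) float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    alignas(32) float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    alignas(32) float out[8];

    __m256 va = _mm256_load_ps(a);      // load 8 floats into one register
    __m256 vb = _mm256_load_ps(b);
    __m256 vc = _mm256_mul_ps(va, vb);  // 8 multiplies in one instruction
    _mm256_store_ps(out, vc);

    for (float f : out) std::printf("%g ", f);
    std::printf("\n");
}
```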
Crazy Ken was way ahead of his time!