
Nooblet

Member
Oct 25, 2017
13,622
Wonder if we'll see AMD's take on DLSS?
Well, we have DirectML, which is the DirectX machine learning API. So yes, we could very well get machine-learning reconstruction on AMD.

And since it's DirectX, devs can probably do it on their own instead of having to send their games to Nvidia to train the neural network, making it more accessible and common.
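As an illustrative sketch (plain NumPy, not actual DirectML or DLSS code), a reconstruction pass takes a low-resolution frame and produces a higher-resolution one. Here a hand-written bilinear filter stands in for the trained network; a DLSS-style model would be trained on low-res/high-res frame pairs to beat this baseline:

```python
import numpy as np

def bilinear_upscale(img, factor):
    """Toy stand-in for a learned reconstruction model: classic bilinear
    upscaling of a single-channel frame by an integer factor."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]      # vertical interpolation weights
    wx = (xs - x0)[None, :]      # horizontal interpolation weights
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

# Render at 540p, reconstruct to 1080p.
frame_540p = np.random.rand(540, 960)
frame_1080p = bilinear_upscale(frame_540p, 2)
print(frame_1080p.shape)  # (1080, 1920)
```

The point of the learned version is exactly that it can do better than this fixed filter by hallucinating plausible detail from training data.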
 

Egocrata

Member
Aug 31, 2019
419
Control on a good PC with graphics maxed out is just pure gaming nirvana. The one game where it TRULY shines and makes a difference, however, is Metro Exodus. It fits the game so well.
 

WorldHero

Member
Oct 27, 2017
188
I personally think current raytracing tech is overrated for the performance hit it creates. I'd rather take better quality cube maps over raytracing until it gets optimized much better.
 

Jedi2016

Member
Oct 27, 2017
15,614
I personally think current raytracing tech is overrated for the performance hit it creates.
Which is precisely why the dreams of a 60fps next-gen are already dead in the water. As long as they can hit a solid 30fps on the new systems, they'll throw everything and the kitchen sink at those graphics.
 

ILikeFeet

DF Deet Master
Banned
Oct 25, 2017
61,987
Which is precisely why the dreams of a 60fps next-gen are already dead in the water. As long as they can hit a solid 30fps on the new systems, they'll throw everything and the kitchen sink at those graphics.
Given how well stuff like DLSS and other upscaling techniques work, in addition to variable rate shading and dynamic resolution, I don't see why 60fps is off the cards. A 2060 can hit well above 60fps in Wolfenstein with upscaling and VRS, and next-gen systems will surpass that.
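The dynamic resolution part of that argument is just simple control logic. A hypothetical controller (all names and thresholds made up) might look like this: drop the render scale when a frame misses the 60fps budget, raise it again when there is headroom:

```python
def adjust_render_scale(scale, frame_ms, target_ms=16.7,
                        lo=0.5, hi=1.0, step=0.05):
    """Hypothetical dynamic-resolution controller for a 60fps target
    (16.7 ms per frame). Returns the render scale for the next frame."""
    if frame_ms > target_ms * 1.05:      # over budget -> render fewer pixels
        scale -= step
    elif frame_ms < target_ms * 0.85:    # comfortable headroom -> sharpen up
        scale += step
    return min(hi, max(lo, scale))       # clamp to the allowed range

scale = 1.0
for ms in [19.0, 18.2, 17.9, 16.1, 13.5]:  # simulated frame times
    scale = adjust_render_scale(scale, ms)
print(round(scale, 2))  # 0.9
```

Real engines do this against GPU timings with smoothing, but the principle is the same: spend resolution only when the frame budget allows it, and let upscaling hide the difference.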
 

ShadowFox08

Banned
Nov 25, 2017
3,524
Like I said before, I don't think ray tracing will be common until the generation after next. Ray tracing takes up too many resources and guarantees that 30-60fps performance is here to stay next gen.
 

ShadowFox08

Banned
Nov 25, 2017
3,524
Which is precisely why the dreams of a 60fps next-gen are already dead in the water. As long as they can hit a solid 30fps on the new systems, they'll throw everything and the kitchen sink at those graphics.
Next gen will be interesting. It's going to be the most divisive in terms of performance vs. fidelity; it's gonna be all over the place. I don't expect to see a ton of 90-120fps games for Xbox Series X and PS5, but I imagine a decent amount will be attempted at least. Cross-gen ports for the next 2-3 years could be the perfect example, like CoD.

Edit: sorry for the double post. Would have condensed
 

MysticGon

One Winged Slayer
Member
Oct 31, 2017
7,285
RT might be to next gen what bloom was to the seventh gen.

I hope it is done realistically and with restraint. I have faith in the team at Polyphony to again take the industry to school like they did with their HDR work this gen.
 

Deleted member 11276

Account closed at user request
Banned
Oct 27, 2017
3,223
DLSS is a method of supersampling, so it's not the same. That said, Microsoft does have DirectML, so there's nothing stopping AMD from using an agnostic variation of DLSS.

Aside from performance, of course. Inferencing such a complex neural network on the GPU will cost a lot of performance; that's why Nvidia has Tensor Cores. AFAIK AMD has nothing similar yet.
 

chris 1515

Member
Oct 27, 2017
7,074
Barcelona Spain
I wouldn't be surprised if we start seeing that used in games. It basically eliminates a lot of the wait period and speeds up development while still making sure it's not as demanding. It could work wonders for games with static lighting: we could have that UC4 room situation deployed everywhere in the game instead of in specific rooms. I'm sure it'd become commonplace as devs realise the benefits over fully real-time implementations or traditional prebaking, and it might actually be something we see in 60fps games next gen, like Call of Duty. And the best part is you get the consistency of RT.

A mix of old and new...I like it!

You don't need it to be fully static. For example, Horizon Zero Dawn has some GI done for multiple times of day and weather conditions for its "static lighting"; dynamic lighting has its own system. Ray tracing can help make probe placement automatic too. You can have better static lighting with the improved RAM size and SSD streaming.

A mix of old and new will be the norm for the next 6 years.
 
Last edited:

Chettlar

Member
Oct 25, 2017
13,604
Yup, this. It's like the move to 3D graphics in games, it'll become standard despite performance differences. It'll make development much easier in many respects as well. Having to "fake" real lighting is such a huge time sink.

Bingo.

Better quality lighting that literally takes up less space on disk (baked lighting takes data, right?), that takes very little time to make since it's just simulated, and that just looks so, so much better.

At some point the actual making of a game matters.
 

Nooblet

Member
Oct 25, 2017
13,622
You don't need it to be fully static. For example, Horizon Zero Dawn has some GI done for multiple times of day and weather conditions for its "static lighting"; dynamic lighting has its own system. Ray tracing can help make probe placement automatic too. You can have better static lighting with the improved RAM size and SSD streaming.

A mix of old and new will be the norm for the next 6 years.
I know that; they can use it to generate probes more accurately. Basically, my original post was talking about the wait times for both fully baked and probe methods and how RT is just going to cut that.

The possibility of actually using powerful RT hardware to check updates in real time before finalising the look and baking it, so it runs on less powerful hardware, is not something I had thought about. But now it makes sense, and it's more than likely basically how we'll see GI implemented for the next 6 years, whereas real-time RT is used on relatively less expensive features like reflections.
 
Last edited:

Pottuvoi

Member
Oct 28, 2017
3,062
I personally think current raytracing tech is overrated for the performance hit it creates. I'd rather take better quality cube maps over raytracing until it gets optimized much better.
It really depends on how it is used.
You can combine ray tracing with cube maps or baked information.

One of the big costs with ray-traced reflections is that the amount of shading explodes.
UE4 has an option to use cheaper versions of shaders within reflections.
There is nothing preventing the use of simplified geometry and baked textures and lighting to reduce it further.
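The cost argument can be sketched with made-up numbers: if reflection rays are shaded with a cheaper material model than primary rays, the total shading budget shrinks even though every ray still gets shaded. All costs below are invented units, purely for illustration:

```python
# Toy sketch: shade reflection rays with a cheaper material model than
# primary rays, trading a little accuracy inside reflections for speed.
FULL_COST, SIMPLE_COST = 12, 3   # made-up shading cost units per hit

def shade_cost(ray_type):
    # Primary rays get the full material; reflection rays a simplified one.
    return FULL_COST if ray_type == "primary" else SIMPLE_COST

rays = ["primary"] * 100 + ["reflection"] * 40
total = sum(shade_cost(r) for r in rays)
naive = FULL_COST * len(rays)    # cost if every hit used the full material
print(total, naive)  # 1320 1680
```

Same idea as the UE4 option mentioned above: the viewer rarely notices a simplified shader inside a reflection, so the saving is nearly free.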
 

dgrdsv

Member
Oct 25, 2017
11,846
It's a form of reconstruction. Nvidia's naming is absurd
It's not just a form of reconstruction, it's an antialiasing solution too. I've said this to you multiple times already but you continue to spread BS.

Well, we have DirectML, which is the DirectX machine learning API. So yes, we could very well get machine-learning reconstruction on AMD.

And since it's DirectX, devs can probably do it on their own instead of having to send their games to Nvidia to train the neural network, making it more accessible and common.
To get anything ML-based from AMD, we first need AMD hardware capable of good ML performance; APIs are secondary.
 

Pottuvoi

Member
Oct 28, 2017
3,062
It's not just a form of reconstruction, it's an antialiasing solution too. I've said this to you multiple times already but you continue to spread BS.
In many techniques, AA and reconstruction or upscaling are combined (temporal injection, UE4 temporal upsampling, etc.).

He's right that the naming is absurd; DLAA would have been a better choice, even if Force Unleashed already had a method using that name.
 

Bjones

Member
Oct 30, 2017
5,622
Raytracing will be a decent leap in visuals, but the thing is it won't be a game changer like going to 3D was. Only something like a holodeck could do that next.
 

th1nk

Member
Nov 6, 2017
6,262
I hope machine learning techniques will increase ray tracing performance big time during the course of next-gen. Is something like this possible? For example, training an AI to create the lighting for a scene without needing to ray-trace it?
 

Pottuvoi

Member
Oct 28, 2017
3,062
I hope machine learning techniques will increase ray tracing performance big time during the course of next-gen. Is something like this possible? For example, training an AI to create the lighting for a scene without needing to ray-trace it?
Should be possible.

Feasible or accurate enough to use?
We shall see when someone tries.

Something like training on direct lighting and the resulting light after propagation into a light volume or light map.
Basically, teaching the network how PRT would work in the scene and hoping the AI hallucinates close-enough results faster or at a smaller size.
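As a toy sketch of that idea (NumPy only, every name invented): for a fixed scene, one-bounce light transport is a linear operator, so even a plain least-squares fit can "learn" it from direct-lighting/bounce-lighting pairs. A neural network would aim for a nonlinear, compressed version of the same per-scene mapping:

```python
import numpy as np

# Toy stand-in: learn a fixed scene's light transport (direct lighting ->
# bounced lighting) from examples, the way PRT precomputes it analytically.
rng = np.random.default_rng(0)
n_texels = 8

# Hidden "ground truth" transport matrix the model should approximate.
T_true = rng.uniform(0.0, 0.2, size=(n_texels, n_texels))

# Training pairs: direct lighting -> one-bounce result in the same scene.
X = rng.uniform(0.0, 1.0, size=(64, n_texels))   # direct lighting samples
Y = X @ T_true.T                                  # simulated bounce light

# A linear least-squares fit recovers the transport exactly; a network
# would learn a nonlinear, smaller version of this mapping per scene.
T_fit, *_ = np.linalg.lstsq(X, Y, rcond=None)

new_direct = rng.uniform(0.0, 1.0, size=n_texels)
pred = new_direct @ T_fit          # "hallucinated" bounce lighting
truth = new_direct @ T_true.T      # what propagation would actually give
print(np.allclose(pred, truth, atol=1e-6))  # True
```

The open question in the posts above is exactly whether a learned version stays accurate enough once the transport is nonlinear and the scene is dynamic.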
 
Last edited:

Dictator

Digital Foundry
Verified
Oct 26, 2017
4,930
Berlin, 'SCHLAND
Should be possible.

Feasible or accurate enough to use?
We shall see when someone tries.

Something like training on direct lighting and the resulting light after propagation into a light volume or light map.
Basically, teaching the network how PRT would work in the scene and hoping the AI hallucinates close-enough results faster or at a smaller size.
I could imagine an AI leveraging a mix of screen-space and world-space hints to perhaps infer what a second or approximate third bounce of diffuse GI might look like.

Would it just be cheaper to do it the real way? Maybe.
 

Nooblet

Member
Oct 25, 2017
13,622
I'm actually looking forward to bounce lighting from trivial light sources. We already have bounce lighting from a global source like the sun, or from important light sources like a player-held flashlight/flamethrower. But I'd like to see them deploy it on things like lightbulbs, street lamps, etc.
 

Soap

Member
Oct 27, 2017
15,168
You will see small uses here and there, but I don't see it being truly game changing for another generation or two.
 

Monster Zero

Member
Nov 5, 2017
5,612
Southern California
I'm actually looking forward to bounce lighting from trivial light sources. We already have bounce lighting from a global source like the sun, or from important light sources like a player-held flashlight/flamethrower. But I'd like to see them deploy it on things like lightbulbs, street lamps, etc.

They are already doing that in the Metro Exodus DLC. Cyberpunk will feature that use of raytracing as well.

 

Pottuvoi

Member
Oct 28, 2017
3,062
I'm actually looking forward to bounce lighting from trivial light sources. We already have bounce lighting from a global source like the sun, or from important light sources like a player-held flashlight/flamethrower. But I'd like to see them deploy it on things like lightbulbs, street lamps, etc.
Specular bounce would be amazing as well.

I can't wait to see a Silent Hill-like game where the cone from a flashlight bounces off puddles and mirrors.

It would allow light puzzles and such.
 

laxu

Member
Nov 26, 2017
2,782
RT might be to next gen what bloom was to the seventh gen.

I hope it is done realistically and with restraint. I have faith in the team at Polyphony to again take the industry to school like they did with their HDR work this gen.

That's not how it works. Unlike with bloom effects, there is no "over the top" way of doing raytracing, because it's not an effect but a simulation of how light is reflected. The only way raytraced lighting or reflections would look over the top is if the artist decides that everything should be 100% reflective and has dramatic lighting, which they can already do with current tech by faking those things.

Raytracing can in some scenes look like nothing at all, because it looks closer to real life, so you don't pay attention to it. In Metro Exodus, for example, you mostly see it as every object being more properly grounded in its environment, due to casting the right kind of shadows and ambient occlusion. When you turn it off, a table that previously looked like it sits on a floor suddenly looks like it is floating slightly above it. We are so used to seeing this sort of thing in games that we don't acknowledge it as incorrect anymore.
 

KKRT

Member
Oct 27, 2017
1,544
I hope machine learning techniques will increase ray tracing performance big time during the course of next-gen. Is something like this possible? For example, training an AI to create the lighting for a scene without needing to ray-trace it?
Things like that are kinda actually being worked on already:

 

Yogi

Banned
Nov 10, 2019
1,806
Is it only a single bounce in Metro Exodus? No light->surface->surface->camera?
 

StudioTan

Member
Oct 27, 2017
5,836
RT might be to next gen what bloom was to the seventh gen.

I hope it is done realistically and with restraint. I have faith in the team at Polyphony to again take the industry to school like they did with their HDR work this gen.

That doesn't really make sense. Bloom is a post-processing effect, like lens flare. Ray tracing is a way of calculating light and illumination (and other things like sound and physics can use it too); there's no such thing as too much ray tracing lol.
 

Nooblet

Member
Oct 25, 2017
13,622
RT might be to next gen what bloom was to the seventh gen.

I hope it is done realistically and with restraint. I have faith in the team at Polyphony to again take the industry to school like they did with their HDR work this gen.
Not really.
Bloom is an artist's implementation of how light should interact, put in place using game technology. Ray tracing is a mathematical model of light interaction, put in place using game technology.

You can't really have too much ray tracing; what you can have is too much showcasing of a particular effect. Like Watch Dogs 3's ray-traced reflections: every reflective surface is a mirror reflection, there is no middle ground. It's either glossy like a mirror or completely diffuse with no reflection. That's an art problem rather than a ray tracing problem.
 

raketenrolf

Member
Oct 28, 2017
5,203
Germany
I feel like every time I see a video about RTX performance, the performance loss is too big for me personally. Sure, it looks amazing, but a decrease of around 50% is way too high (for me). Yes, if you have a high-end rig with an RTX 2080 Ti you can still run most things at 60fps with high graphical options, but I don't and probably never will spend that much money on a PC, so a 50% decrease would basically mean 30fps for me (and I guess on the new consoles as well). I still prefer 60fps over 30fps with eye candy.

But yeah, in the future, when the tech matures and the GPUs become better, this should be standard because it looks absolutely amazing.
 

Hey Please

Avenger
Oct 31, 2017
22,824
Not America
So, if Stadia is to compete with next-gen consoles and PC and standardize RT, how can they attain that with a GCN Vega 56 GPU? A software solution?
 

Pottuvoi

Member
Oct 28, 2017
3,062
Would be interesting to see some games use RT for generation of the g-buffer.
It should allow an easier way to fit the image properly to ultrawide and curved displays (especially if the monitor setup is scanned).

Add a fast camera for head tracking and they could work as windows into a virtual world.
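A minimal sketch of the idea (math only, everything here hypothetical): with ray tracing you can aim each pixel's primary ray wherever you like, so a cylindrically curved display can get evenly spaced *angles* across the field of view, instead of the evenly spaced flat image plane that rasterization's projection matrix assumes:

```python
import math

def curved_ray_dirs(n_px, fov_deg=90.0):
    """Generate primary-ray directions for one row of a cylindrically
    curved display: even angular steps across the horizontal FOV."""
    fov = math.radians(fov_deg)
    dirs = []
    for i in range(n_px):
        theta = -fov / 2 + fov * (i + 0.5) / n_px  # even angular steps
        dirs.append((math.sin(theta), math.cos(theta)))  # unit (x, z) dir
    return dirs

dirs = curved_ray_dirs(4)
print([round(x, 3) for x, z in dirs])  # [-0.556, -0.195, 0.195, 0.556]
```

A rasterizer would need a post-process warp to approximate this; tracing the rays directly gives the correct projection per pixel for free.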
 
Last edited:

MysticGon

One Winged Slayer
Member
Oct 31, 2017
7,285
That's not how it works. Unlike with bloom effects, there is no "over the top" way of doing raytracing, because it's not an effect but a simulation of how light is reflected. The only way raytraced lighting or reflections would look over the top is if the artist decides that everything should be 100% reflective and has dramatic lighting, which they can already do with current tech by faking those things.

Raytracing can in some scenes look like nothing at all, because it looks closer to real life, so you don't pay attention to it. In Metro Exodus, for example, you mostly see it as every object being more properly grounded in its environment, due to casting the right kind of shadows and ambient occlusion. When you turn it off, a table that previously looked like it sits on a floor suddenly looks like it is floating slightly above it. We are so used to seeing this sort of thing in games that we don't acknowledge it as incorrect anymore.
That doesn't really make sense. Bloom is a post-processing effect, like lens flare. Ray tracing is a way of calculating light and illumination (and other things like sound and physics can use it too); there's no such thing as too much ray tracing lol.
Not really.
Bloom is an artist's implementation of how light should interact, put in place using game technology. Ray tracing is a mathematical model of light interaction, put in place using game technology.

You can't really have too much ray tracing; what you can have is too much showcasing of a particular effect. Like Watch Dogs 3's ray-traced reflections: every reflective surface is a mirror reflection, there is no middle ground. It's either glossy like a mirror or completely diffuse with no reflection. That's an art problem rather than a ray tracing problem.

That's good. Thanks for the informative responses.
 

dgrdsv

Member
Oct 25, 2017
11,846
100% the future of gaming, but I believe it will be a mixed bag for next gen consoles.
I think it will do fine on next-gen consoles. What we have on PC right now shows that even the smallest amount of RT in a scene creates a "next gen" visual leap, and going further with the same implementation usually doesn't add much on top of that. So even with "weak" RT hardware, consoles should be able to produce "next gen" visuals.

The question, though, is whether the next consoles will be powerful enough to pull off something like Control's RT, where RT is used for several things at once, and if not, what the main RT usage area on PS5/XSX will be.
 

Majukun

Banned
Oct 27, 2017
4,542
All I know is that I don't care one bit about it, no matter how much marketing or graphics enthusiasts tell me it's amazing.

Much like all graphical improvements, it will be cool for a week or two and then you will stop noticing it.