Could someone please explain what Future Frame Rendering does? Why does it give such an increase in performance? Thanks!

AFAIK it lets the GPU work a tiny bit ahead of the CPU. It adds input lag, but you should see an increase in GPU usage and FPS.
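In render-loop terms it's basically a cap on how many frames the CPU is allowed to queue ahead of the GPU. A rough toy sketch of the shape of it (stubbed helpers; not Frostbite's actual code, just my guess at the pattern):

```cpp
// Toy render loop, NOT Frostbite code: stubbed helpers, just the idea.
// The CPU may run up to kMaxQueuedFrames ahead of the GPU. A deeper queue
// keeps the GPU fed (higher fps and GPU usage), but input sampled for a
// frame can show up on screen up to kMaxQueuedFrames frame-times later,
// which is the added input lag.
#include <cstdint>

constexpr uint64_t kMaxQueuedFrames = 3;  // "FFR on"; "FFR off" would be 1

static uint64_t gpuCompletedFrame = 0;    // a GPU fence would advance this

void waitForGpuFence() { ++gpuCompletedFrame; }   // stub standing in for a fence wait
void recordAndSubmitFrame(uint64_t /*frame*/) {}  // stub: build + submit command buffers

int main() {
    for (uint64_t cpuFrame = 0; cpuFrame < 1000; ++cpuFrame) {
        // Block only when the CPU would get more than kMaxQueuedFrames ahead.
        while (cpuFrame - gpuCompletedFrame >= kMaxQueuedFrames)
            waitForGpuFence();
        recordAndSubmitFrame(cpuFrame);
    }
}
```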
Yo that Future Frame rendering doing work. Dropped from 110 fps to 70 when I turned it off at the spot I tested in Rotterdam. That's on max settings at 1440p, mind you.
I'm interested to know how much it actually affects input lag.
my future frame rendering is blocked out, but i'd keep it off anyway. i'd rather have no latency over fps
I'm running a 2080 Ti and the game is above 100 FPS at 1440p, but it doesn't feel smooth and it stutters. That's with VSync off; when I turned it on, the stuttering stopped.
Will state here again that FFR being ON is the default in BF1, BF4, BFH and pretty much every other Frostbite game out there so if you didn't think those had any input lag, you might as well enable it and give yourself the much more important 30+ fps. :)
Good to know. The framerate difference is massive on my system. I'm curious to do the math on the latency; it's 3 frames being rendered according to another DICE employee on reddit... is this 3 frames in addition to the normal 1, or how exactly does the math work out in terms of how many added frames there are?
I'm trying to figure out exactly what the difference would be in latency in terms of ms instead of frames when compensating for the added framerate.
Default engine value is 3. When FFR is set to OFF, the value is 1.
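If you assume each queued frame costs roughly one frame-time, you can plug in the 110/70 fps numbers quoted earlier in the thread (back-of-the-envelope only, not a measured result):

```cpp
// Back-of-the-envelope only, not a measurement: assumes each queued frame
// adds roughly one frame-time of latency, and reuses the 110 fps (FFR on)
// and 70 fps (FFR off) numbers from earlier in the thread.
#include <cstdio>

int main() {
    double ffrOnMs  = 3 * (1000.0 / 110.0);  // 3 queued frames at ~110 fps -> ~27.3 ms
    double ffrOffMs = 1 * (1000.0 / 70.0);   // 1 queued frame at ~70 fps  -> ~14.3 ms
    std::printf("FFR on: ~%.1f ms, off: ~%.1f ms, delta: ~%.1f ms\n",
                ffrOnMs, ffrOffMs, ffrOnMs - ffrOffMs);
}
```

So by that rough model you'd be trading about 13 ms of extra input latency for the 40 extra fps; an actual answer would need hardware latency measurement.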
Does the game run better in the final release? I remember it lagging badly on my i5 4690k and GTX 980 PC, even on medium.
GTX 970, i7-4790K OC'd at 4.5 GHz on all cores, 16 GB DDR3 2400 RAM.
I can't get to 60fps in 1080p without Future Frame Rendering, which is depressing because I absolutely noticed some input delay. And yes, I use raw mouse input and have VSync disabled. On High I get 35 to 40fps without FFR.
I mean, I wanted to switch to an RTX 2070 or 2080 in January anyway, but it's still a bit depressing, especially after how well BF1 ran on my rig.
I have it enabled, because I need those 60fps. I did not know about the default FFR in BF1, the more you know. It somehow still felt a bit better in terms of input delay than BFV does now. Other than that, you guys did a kick-ass job. BF games are always the reason why I upgrade my PCs and V seems to be no exception. ;)
2080 Ti here as well. Sometimes the game was super smooth, other times I'd get stuttering despite 100-130 FPS.
Ended up changing to DX11 and lost about 50 FPS until I enabled Future Frame Rendering; now it's consistently smooth at an average of about 125 FPS. I may try DX12 + V-Sync as well to see what happens.
Hardware abstraction is something that exists for a reason, and moving to higher or lower levels of abstraction introduces a complex set of tradeoffs. I miss that aspect in the general discussion of it.
I wouldn't use DX12 since it introduces a lot of stuttering. Really a shame since I was looking forward to trying Ray Tracing but in its current state it's pretty much unplayable.
Nonetheless this game is a real looker and exactly why I bought a 2080 Ti. Getting 120-140 fps at 1440p with G-Sync is gaming heaven. The only thing that needs improvement is the heavy pop-in when parachuting into the map.
Thank you!
We haven't seen any serious performance regressions in D3D12/VK games on new GPU architectures so far, so this is a moot point. Two key issues with LL APIs are resource management (which should really be done separately for every GPU architecture out there and may lead to some performance issues when a new GPU runs old code which wasn't optimized for it) and this thing known as async compute (which should really be done separately for every single GPU, even inside one architecture).

The first one seems to be doing fine so far - likely because each next GPU arch is an improvement here, meaning that running old code won't result in a slowdown, just in a lack of significant gains. The second one matters only to AMD, as NV GPUs don't need it anyway, and AMD hasn't really updated their core arch since 2012 - hopefully when they do, it will be an improvement for them as well, to the point where they won't need async compute to saturate their shader arrays.
I'm not entirely sure what you're advocating here, because we can't really stay on DX11/OpenGL, and introducing a new industry-supported HL API that's directly tied by IHVs' (closed source) drivers to the hardware, while continuing to rely on their driver hacks to optimize individual titles, doesn't seem like a future worth having, nor am I sure the industry would want that. Anyway, I'll take the downsides to get away from that.

What I'm primarily advocating for is more awareness in public discourse of the fact that there are downsides (and they aren't just "lazy devs have to work harder").
I think it's too early to throw out the baby with the bath-water, especially based on relatively immature DX12 renderers.
To be clear, I'm not advocating for throwing out low-level APIs and going back to DX11/OpenGL (though those are still perfectly suitable for many use cases).

And I mean, do we have a choice? To make full use of multicore CPUs we will have to use LL APIs anyway, and it's not like there's any other option, as adding more CPU cores is the only way left for further CPU performance improvements. The fact that MS adds new rendering features (DXR, DXML, etc.) only to the D3D12 runtime is pretty telling here.
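For what it's worth, the multicore argument looks roughly like this in practice: every worker thread records its own command list, and only the submit is serialized. A toy sketch with a made-up CommandList type and stubs, not real D3D12/Vulkan calls:

```cpp
// Toy sketch of why LL APIs help on multicore CPUs (made-up CommandList type
// and stubs, not real API calls): each thread records its own command list
// independently; only the final submission is serialized.
#include <thread>
#include <vector>

struct CommandList { /* recorded GPU commands would live here */ };

// Stub: in a real renderer this is the expensive CPU work (draw call recording).
CommandList recordSceneChunk(int /*chunk*/) { return {}; }

// Stub: one ordered submit on a single thread keeps frame ordering well-defined.
void submitInOrder(const std::vector<CommandList>& /*lists*/) {}

int main() {
    const int kThreads = 4;
    std::vector<CommandList> lists(kThreads);
    std::vector<std::thread> workers;

    for (int t = 0; t < kThreads; ++t)
        workers.emplace_back([&lists, t] { lists[t] = recordSceneChunk(t); });
    for (auto& w : workers) w.join();

    submitInOrder(lists);
}
```

Under DX11 most of that recording effectively ends up serialized in the driver, which is exactly the CPU bottleneck being talked about.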
What's everyone's experience with "3D sound"? If I choose to enable it, do I also need to turn DTS on in my headset software (Arctis 7)?
So according to Techspot, an RX 570 can manage 60 fps at ultra settings:
Of course, that's in the singleplayer campaign while running on an i9-9900K.
Tested with an $800 CPU... I get that they don't want a bottleneck, but this could give a wrong impression, I feel.
I also miss a benchmark that shows how important cores/threads are for this game.
Next patch will introduce some rendering improvements (for Dx11, Dx12, and DXR). Can't promise that it will fix everything, but hopefully things will be a bit better. :)
For me the Narvik map runs the same. It's the most demanding map in the game, surprisingly. Usually the vegetation maps are the heaviest.
That said, I think it's absolutely not proven that you e.g. need low-level control over queuing, resource management or synchronization points to effectively use CPU-side parallelism in a graphics API. It just hasn't been done so far ;)

Well, the core idea is to remove all of this from the driver and put it into the renderer code, meaning that the driver won't need to do nearly as much work, freeing up CPU time for other things. Can it be done differently? Probably not, as for this to work it absolutely has to be a part of the renderer itself; otherwise you get the same situation where the driver needs to "guess" what the renderer wants, and that will always be a heavyweight task which is nearly impossible to parallelize, due to the need to have all the info in one thread for any "guess" to happen.
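To make the "guessing" point concrete, here's a toy sketch (made-up enum and struct, not any real API) of what it looks like when the renderer states a resource transition outright instead of the driver inferring it:

```cpp
// Toy illustration of the "driver guessing" point (made-up types, not a real
// API): in a low-level API the renderer records resource transitions
// explicitly, so no driver thread has to infer hazards from bind calls.
#include <cstdio>

enum class ResourceState { RenderTarget, ShaderResource };

struct Texture {
    const char*   name;
    ResourceState state;
};

// Explicit barrier recorded by the renderer itself.
void transition(Texture& t, ResourceState to) {
    std::printf("barrier: %s -> %s\n", t.name,
                to == ResourceState::ShaderResource ? "ShaderResource" : "RenderTarget");
    t.state = to;
}

int main() {
    Texture shadowMap{"shadowMap", ResourceState::RenderTarget};
    // ... shadow pass renders into shadowMap ...
    transition(shadowMap, ResourceState::ShaderResource);  // renderer states the hazard
    // ... main pass samples shadowMap as a texture ...
}
```

That transition is information only the renderer has up front; a DX11-style driver has to reconstruct it at runtime, in one thread.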
You have to change the overall quality setting from Auto to custom
Edit:
Are people running the binaural audio or war tapes?
I used the binaural initially but everything seemed kinda quiet so I went to war tapes. I love accurate 3D audio though.
For me it makes a big difference having it OFF.
So I've played more, and 1080p ultra is mostly fine, but there's still stuttering, and the fps tanks with big explosions on my 4.4 GHz 6600K / GTX 1070. Are there any options I should lower in particular?