
dex3108

Member
Oct 26, 2017
22,846

View: https://twitter.com/pcgamer/status/1760301048210526519

A division of Epic Games is pointing the finger at Intel's latest CPUs when it comes to an increasing number of reported Unreal Engine game crashes. Modern top-end CPUs are jammed full of cores, all running at high speeds, and it takes a lot of energy to keep them running like this. That's particularly true for Intel's most recent Core i9 processors, which are some of the most power-hungry chips around. And to eke out even more performance, motherboard vendors often use BIOS settings that push things even further.

All of this seems to be the culprit in the increased reports of game and app crashes, especially those that are built in Unreal Engine, and the only solution is to delve into those settings to calm the beast down a notch.


That's according to RAD, a division of Epic Games, which develops the Bink video codec and Oodle data compression tech used in hundreds of games on the market. It points the finger directly at Intel, too: "We believe that this is a hardware problem which affects primarily Intel 13900K and 14900K processors, less likely 13700, 14700 and other related processors as well."

The software team states that the problems have nothing to do with any code in its products or Unreal, and notes that other software, such as CineBench, Handbrake, and Visual Studio, exhibits the same issue.

If your gaming PC has crashed, giving an 'out of video memory' GPU message, and the system sports a Core i9 13th or 14th Gen processor, such as the Core i9 14900K, then the problem is possibly being caused by the CPU (despite the nature of the error). Certain i7 models, typically the unlocked K-versions, are also reportedly experiencing the problem, though to a lesser extent.

Because this isn't a software issue, your only course of action is to delve into the CPU's settings, either in the motherboard BIOS or by using Intel's Extreme Tuning Utility (XTU). One simple fix that may work is reducing the clock multiplier for the P-cores down a notch or two. For example, if the default value is x55, then dropping it to x54 or x53 may well stop the crashes from occurring.
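For a sense of scale, the multiplier maps directly to clock speed (multiplier × base clock, typically 100 MHz on these platforms). A rough sketch of the arithmetic behind the suggested fix:

```python
# Rough arithmetic behind the multiplier fix: core clock = multiplier x BCLK.
# BCLK is typically 100 MHz on these platforms.
BCLK_MHZ = 100

def core_clock_ghz(multiplier: int, bclk_mhz: int = BCLK_MHZ) -> float:
    """Effective P-core boost clock in GHz for a given multiplier."""
    return multiplier * bclk_mhz / 1000

# Dropping from x55 to x54 or x53 shaves 100-200 MHz off the boost clock,
# which is often enough headroom to restore stability.
print(core_clock_ghz(55))  # 5.5
print(core_clock_ghz(53))  # 5.3
```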

Shader compilation in Unreal Engine games and video decompression can put CPU cores under sudden, and sometimes sustained, loads so if the power limits are too high, then the processor could be fighting to remain stable.
Not every Core i9 user will be experiencing these crashes, even if they have, say, an Asus motherboard with multicore enhancement enabled. Given that it's a relatively small number of people reporting the issues, this may be a case of the affected CPUs being chips that only just passed the manufacturing tests that determine which processor model a given die is sold as.


www.pcgamer.com

There are increased reports of crashing in Unreal Engine games, etc. and Epic is blaming Intel chips

Power-hungry, high-clocked CPUs can be a tad unstable if they're not given the right BIOS settings.
 

Suedemaker

Linked the Fire
Member
Jun 4, 2019
1,776
I've been crashing with an R9 5900X so no...I don't think it's that, Timothy.
 

Vector

Member
Feb 28, 2018
6,685
The way shader compilation works is stupid and might affect some games with stuttering issues for a very long time, until CPUs have enough single-threaded performance to compile them in under a frame time.
 

cowbanana

Member
Feb 2, 2018
13,965
a Socialist Utopia
I've seen this mentioned for a few UE games on the Steam forums, when people are having crashing problems. It's always in games where I have zero crashes and always on these Intel processors with the error mentioned in the article. The first mentions of the CPUs being the issue were weeks (months?) ago.
 

Servbot24

The Fallen
Oct 25, 2017
43,384
What a nice way to title the article. Sometimes "blame" is accurate. And I bet no one in this thread will know one way or the other.
 

Mivey

Member
Oct 25, 2017
17,916
The way shader compilation works is stupid and might affect some games with stuttering issues for a very long time, until CPUs have enough single-threaded performance to compile them in under a frame time.
Shader stuttering is not a CPU power problem. It's about needing to go back and forth between CPU and GPU, potentially multiple times during the same frame. The bottleneck isn't CPU speed, it's the bandwidth needed to transfer stuff from GPU land to CPU land, and back again. I don't see that ever going away. Even if we double the internal bus bandwidth in, say, 5 years, by then you will also have much faster GPUs and CPUs, so the problem remains: copying stuff around will suddenly and massively slow things down, and thus you have stutters.
 

Cash Money

Member
Dec 8, 2023
71
Been having this issue with my i9-13900K. Prevented me from playing The Finals, had to resort to the Extreme Tuning Utility fix described in the article so I could play Tekken 8.
 

Vector

Member
Feb 28, 2018
6,685
Shader stuttering is not a CPU power problem. It's about needing to go back and forth between CPU and GPU, potentially multiple times during the same frame. The bottleneck isn't CPU speed, it's the bandwidth needed to transfer stuff from GPU land to CPU land, and back again. I don't see that ever going away. Even if we double the internal bus bandwidth in, say, 5 years, by then you will also have much faster GPUs and CPUs, so the problem remains: copying stuff around will suddenly and massively slow things down, and thus you have stutters.
I thought shader compilation was purely a CPU task but I'm also not a game dev and haven't worked extensively with computer graphics.
 

SunBroDave

"This guy are sick"
Member
Oct 25, 2017
13,310
I was gonna say, fortunately I have not had any issues over the past year since getting a 13600k, but now that I think about it, I actually don't know how many UE5 games I've played over the last year
 

ElFly

Member
Oct 27, 2017
2,736
Shader stuttering is not a CPU power problem. It's about needing to go back and forth between CPU and GPU, potentially multiple times during the same frame. The bottleneck isn't CPU speed, it's the bandwidth needed to transfer stuff from GPU land to CPU land, and back again. I don't see that ever going away. Even if we double the internal bus bandwidth in, say, 5 years, by then you will also have much faster GPUs and CPUs, so the problem remains: copying stuff around will suddenly and massively slow things down, and thus you have stutters.

uh

I wonder if GPUs could compile directly without all the back and forth
 

Unknown

Member
Oct 29, 2017
267
Shader stuttering is not a CPU power problem. It's about needing to go back and forth between CPU and GPU, potentially multiple times during the same frame. The bottleneck isn't CPU speed, it's the bandwidth needed to transfer stuff from GPU land to CPU land, and back again. I don't see that ever going away. Even if we double the internal bus bandwidth in, say, 5 years, by then you will also have much faster GPUs and CPUs, so the problem remains: copying stuff around will suddenly and massively slow things down, and thus you have stutters.

Sorry, no, that's not correct.

The reason you see shader compilation stutter in games is pretty simple: on PC, shaders are stored in an intermediate format that is platform agnostic. The GPU vendor's driver must do a conversion step ('compile') to convert them into a format compatible with that specific GPU. This takes time, and can be *very* computationally intensive.

It's up to the game engine to make sure this conversion step happens before the game tries to use a shader - otherwise the driver will need to stall and wait for the conversion before it can use it to draw something.

Unreal, historically, has been especially bad at this because it has (in UE4 at least) used a practice called 'lazy initialization', whereby each shader is typically loaded early on but only gets initialized (the GPU is told about it) right when it first gets used in a frame. The idea being you don't initialize things you don't use.

Which is great in theory if initialization is free or very cheap, but it's not. It's a CPU-heavy task due to this conversion.

Which means the GPU driver has to panic because it's basically being told "hey, please draw these 10,000 batches - oh and btw 500 of them have shaders you've never seen before". Hence a huge stall right when a new thing appears on screen.

Nothing to do with bandwidth.
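The lazy-initialization pattern described in this post can be sketched in miniature. Everything here (ShaderCache, compile_to_native, the ~10 ms cost) is illustrative, not Unreal's or any driver's actual API:

```python
import time

def compile_to_native(shader_id: str) -> str:
    """Stand-in for the driver's expensive IR -> native conversion step."""
    time.sleep(0.01)  # pretend compilation costs ~10 ms
    return f"native:{shader_id}"

class ShaderCache:
    def __init__(self, lazy: bool, shaders: list[str]):
        self.native = {}
        if not lazy:  # eager path: pay the cost up front (e.g. on a load screen)
            for s in shaders:
                self.native[s] = compile_to_native(s)

    def draw(self, shader_id: str) -> str:
        # Lazy path: the first use of a shader stalls the frame on compilation.
        if shader_id not in self.native:
            self.native[shader_id] = compile_to_native(shader_id)
        return self.native[shader_id]

shaders = [f"shader_{i}" for i in range(20)]

eager = ShaderCache(lazy=False, shaders=shaders)
t0 = time.perf_counter()
eager.draw("shader_0")          # already compiled: effectively free
eager_frame = time.perf_counter() - t0

lazy = ShaderCache(lazy=True, shaders=shaders)
t0 = time.perf_counter()
lazy.draw("shader_0")           # first use: the frame eats the whole compile cost
lazy_frame = time.perf_counter() - t0

print(f"eager frame: {eager_frame*1000:.2f} ms, lazy frame: {lazy_frame*1000:.2f} ms")
```

Scaled up to hundreds of never-before-seen shaders appearing in a single frame, the lazy path is exactly the "huge stall right when a new thing appears on screen" described above.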
 

Gloomz

Member
Oct 27, 2017
2,423
RTX 4090 / i7-13700K - I've had constant crashing issues with UE games, as well as "Out of Video Memory" problems in games like Harry Potter, The Outer Worlds: Spacer's Choice Edition, and basically any game that has to compile shaders after launching. The only thing that ended up fixing it was disabling hyperthreading in the BIOS.
 

empo

Member
Jan 27, 2018
3,182
another trash pcgamer article
literally the second paragraph of the RAD post blames it on overly optimistic BIOS settings
 

1-D_FE

Member
Oct 27, 2017
8,287
As much as I like to blame Sweeney for things, it's absurd that these latest Intel chips can use up to 400 watts of power by themselves. So, yeah, it doesn't take a lot of imagination to realize all the ways things can go wrong from a system stability POV when under load.
 

Roshin

Member
Oct 30, 2017
2,844
Sweden
I don't know enough to really comment on this. All I can add is that I haven't had any issues like those described in the OP.
 

NeoBob688

Member
Oct 27, 2017
3,661
I had general crashing on my system with a 12900K on intensive games. Turns out it was a BIOS issue - somehow memory issues with DDR5 were manifesting in memory-heavy applications. Upgrading the BIOS fixed it for me; if anyone has issues, they can try that. Beyond that, Intel CPUs have gotten insanely bad on power usage and thermals. But I don't mean to let Epic off the hook - there are probably also problems on their end. Usually it's a combination of issues.
 

Jonnax

Member
Oct 26, 2017
4,971
Why not post the source linked rather than the opinion piece ?

Intel Processor Instability Causing Oodle Decompression Failures

RAD Game Tools' web page. RAD makes Bink Video, the Telemetry Performance Visualization System, and Oodle Data Compression - all popular video game middleware.
RAD has become aware of a problem that can cause Oodle Data decompression failures, or crashes in games built with Unreal. We believe that this is a hardware problem which affects primarily Intel 13900K and 14900K processors, less likely 13700, 14700 and other related processors as well. Only a small fraction of those processors will exhibit this behavior. The problem seems to be caused by a combination of BIOS settings and the high clock rates and power usage of these processors, leading to system instability and unpredictable behavior under heavy load. As far as we can tell, there is not any software bug in Oodle or Unreal that is causing this.

Due to what seem to be overly optimistic BIOS settings, some small percentage of processors go out of their functional range of clock rate and power draw under high load, and execute instructions incorrectly. This is being seen disproportionately in Oodle Data decompression because unlike most gameplay, simulation, audio or rendering code, decompression needs to perform extra integrity checks to handle accidentally or maliciously corrupted data, and is thus likely to spot inconsistencies very soon after they occur. These decode failures then typically result in an error message. When starting an Unreal Engine-based game, the most common failure is of this type:

DecompressShader(): Could not decompress shader (GetShaderCompressionFormat=Oodle)

However, this problem does not only affect Oodle, and machines that suffer from this instability will also exhibit failures in standard benchmark and stress test programs. Any programs which heavily use the processor on many threads may cause crashes or unpredictable behavior. There have been crashes seen in RealBench, CineBench, Prime95, Handbrake, Visual Studio, and more. This problem can also show up as a GPU error message, such as spurious "out of video memory" errors, even though it is caused by the CPU.

We do not have access to diagnostic processor information that would nail down the exact cause and best workaround for this problem. It seems that many motherboard/BIOS manufacturers are shipping with settings that push the processor outside its safe operating range. Because this problem appears to affect only a small fraction of processors, some users have had success with returning their processor to the manufacturer and getting a new one which doesn't exhibit the problem. Other workarounds require using tuning utilities or modifying BIOS settings. Note that doing so incorrectly can cause damage to your system. The changes we are recommending here are, to the best of our knowledge, completely safe, but you are solely responsible for any damages or loss caused by changing these settings from their factory defaults. If you are uncomfortable or worried about using tuning utilities (even officially sanctioned ones) or changing your BIOS settings, and frequent crashes also occur in the benchmark programs mentioned previously, you should be able to return the CPU or the entire computer to the manufacturer instead.
 

jediyoshi

Member
Oct 25, 2017
5,162
ME THIS IS ME

THIS IS ME AND MY 14900K

God, I thought it was so bizarre that it was exclusively happening with UE4/5 games and nothing else.

Fortnite, crash. Forza, fine. Ready or Not, crash. Starfield, fine. Sifu, crash. Cyberpunk, fine.

I've just come to terms with using XTU to bring me down to x55.
 
Oct 27, 2017
4,112
is this why i could never play remnant 2? luckily it was through gamepass so i just uninstalled and went about my life. no other games have acted the same, luckily
 

1-D_FE

Member
Oct 27, 2017
8,287
The existence of a CPU bug which may be causing some kinds of crashes does not rule out crashes happening on other CPUs for other reasons.

Is it even a bug? It kinda sounds like it's more that UE can really hammer the CPU for certain things, and with certain systems, this can lead to instability. PCs can appear stable, but real-world scenarios can often reveal how they're not. The 5900X isn't exactly a power sipper itself. I could see how a chip like that could also begin to reveal system instabilities if it was being hammered at times.
 

Jebusman

Member
Oct 27, 2017
4,099
Halifax, NS
So like, all I'm reading is that this doesn't seem to "strictly" be a CPU problem, but a problem with motherboard vendors' out-of-the-box "auto-overclocking" pushing these CPUs, which are already at the edge in terms of stability, into a non-stable place if the chip wasn't one of the "good" ones. And it only gets noticed when put under a true stress test. If the boards were actually following the Intel guidelines to a tee, they wouldn't be exhibiting these kinds of symptoms, but the practice has become so commonplace (due to years of it mostly being non-intrusive) that almost every motherboard is going to be doing a "little" bit of tweaking by default, and the 13900K and 14900K (sometimes) don't have even that little bit of wiggle room.
 

senj

Member
Nov 6, 2017
4,525
Is it even a bug? It kinda sounds like it's more that UE can really hammer the CPU for certain things, and with certain systems, this can lead to instability. PCs can appear stable, but real-world scenarios can often reveal how they're not. The 5900X isn't exactly a power sipper itself. I could see how a chip like that could also begin to reveal system instabilities if it was being hammered at times.
If the out-of-the-box behavior of Turbo Boost (or whatever Intel calls the technology) is pushing the P-cores to clocks at which they're physically unstable, then yeah, I'd say that's a bug. Maybe not a physical erratum, but definitely something that should be addressed at the microcode or BIOS level. I don't think it's unreasonable to say "your CPU should not, at completely default clock speeds and auto-boosting behavior the end user has not intervened in, be so unstable as to crash when running a compile in Visual Studio or Handbrake or Unreal or any other normal CPU-intensive task".
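A crude sketch of the invariant that the stress testers mentioned in the RAD post rely on: the same deterministic computation, run on many threads, must produce bit-identical results. A core executing instructions incorrectly under load breaks this. (The workload below is a hypothetical stand-in, not Prime95's actual test.)

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def workload(seed: int) -> str:
    """Deterministic CPU-heavy work: repeated hashing from a fixed seed."""
    h = seed.to_bytes(8, "little")
    for _ in range(50_000):
        h = hashlib.sha256(h).digest()
    return h.hex()

# Every run of the same seed must produce the same digest. A core that
# executes instructions incorrectly under load violates this invariant,
# which is exactly what Prime95-style consistency checks look for.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(workload, [42] * 8))

print(len(set(results)) == 1)  # True on a healthy machine
```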
 

Suedemaker

Linked the Fire
Member
Jun 4, 2019
1,776
Is it even a bug? It kinda sounds like it's more that UE can really hammer the CPU for certain things, and with certain systems, this can lead to instability. PCs can appear stable, but real-world scenarios can often reveal how they're not. The 5900X isn't exactly a power sipper itself. I could see how a chip like that could also begin to reveal system instabilities if it was being hammered at times.
The real issue is that this never happened until a couple days ago, and isn't an Intel-specific problem. Never touched my BIOS settings nor overclocked anything, 1000W PSU.
 

senj

Member
Nov 6, 2017
4,525
The real issue is that this never happened until a couple days ago, and isn't an Intel-specific problem. Never touched my BIOS settings nor overclocked anything, 1000W PSU.
I mean, that sucks, but my point was just that it's not that "this isn't an Intel-specific problem"; it's that the problem you're running into isn't the same problem the article is talking about. There is AN Intel-specific problem with (mostly) the 14900K that a lot of developers have been running into, which I've also been seeing in my day job, that appears to be root-causeable to the hardware and is impacting a lot more things than Unreal:

developercommunity.visualstudio.com

Developer Community


Re: What is wrong with the 14900k?

The chip is by definition not overclocked... It quit the factory having presumably passed tests and i do not believe transport could have caused invisible damage that would present itself as nonsensical errors... Either the factory tests are too loose or the motherboard defaults(which i trust...
 

Allard

Member
Oct 25, 2017
1,937
So like, all I'm reading is that this doesn't seem to "strictly" be a CPU problem, but a problem with motherboard vendors' out-of-the-box "auto-overclocking" pushing these CPUs, which are already at the edge in terms of stability, into a non-stable place if the chip wasn't one of the "good" ones. And it only gets noticed when put under a true stress test. If the boards were actually following the Intel guidelines to a tee, they wouldn't be exhibiting these kinds of symptoms, but the practice has become so commonplace (due to years of it mostly being non-intrusive) that almost every motherboard is going to be doing a "little" bit of tweaking by default, and the 13900K and 14900K (sometimes) don't have even that little bit of wiggle room.

Turning this off is definitely the first thing I did with the i7 I got last year. It's a stupid setting, and I was warned to turn it off unless I had a proper overclocking cooling setup, which I didn't. I haven't had any crashes, but then I haven't run a ton of UE4 and UE5 games, so I can't really complain about the issue.
 

Ry.

AVALANCHE
Member
Oct 10, 2021
1,172
the planet Zebes
Shader stuttering is not a CPU power problem. It's about needing to go back and forth between CPU and GPU, potentially multiple times during the same frame. The bottleneck isn't CPU speed, it's the bandwidth needed to transfer stuff from GPU land to CPU land, and back again. I don't see that ever going away. Even if we double the internal bus bandwidth in, say, 5 years, by then you will also have much faster GPUs and CPUs, so the problem remains: copying stuff around will suddenly and massively slow things down, and thus you have stutters.

The APU future says hello. (This is why Nvidia is getting into CPUs.) Piecemeal hardware will have to go away eventually in order to remove this bottleneck issue. APUs/SoCs of all types will be the predominant form factor for pretty much everything eventually. Consoles are already going this route, though it doesn't paint a rosy picture for building your own gaming PC. What time frame this all happens in? I dunno. Probably faster than I'd like.
 

Mivey

Member
Oct 25, 2017
17,916
Sorry, no, that's not correct.

The reason you see shader compilation stutter in games is pretty simple: on PC, shaders are stored in an intermediate format that is platform agnostic. The GPU vendor's driver must do a conversion step ('compile') to convert them into a format compatible with that specific GPU. This takes time, and can be *very* computationally intensive.

It's up to the game engine to make sure this conversion step happens before the game tries to use a shader - otherwise the driver will need to stall and wait for the conversion before it can use it to draw something.

Unreal, historically, has been especially bad at this because it has (in UE4 at least) used a practice called 'lazy initialization', whereby each shader is typically loaded early on but only gets initialized (the GPU is told about it) right when it first gets used in a frame. The idea being you don't initialize things you don't use.

Which is great in theory if initialization is free or very cheap, but it's not. It's a CPU-heavy task due to this conversion.

Which means the GPU driver has to panic because it's basically being told "hey, please draw these 10,000 batches - oh and btw 500 of them have shaders you've never seen before". Hence a huge stall right when a new thing appears on screen.

Nothing to do with bandwidth.
Compiling code is generally a hard task for the CPU, though I guess I'm not too familiar with what goes into shader compilation. If it's as hard as you say, then that just adds a further big issue, in addition to needing to transfer stuff from CPU to GPU.
Though I wonder to what extent this is really so engine-specific. I think this just comes down to modern APIs like DX12 and Vulkan; I don't recall this ever being much of an issue with games using older APIs, regardless of their engine. We do see this issue crop up even outside Unreal Engine, after all.
The APU future says hello. (This is why Nvidia is getting into CPUs.) Piecemeal hardware will have to go away eventually in order to remove this bottleneck issue. APUs/SoCs of all types will be the predominant form factor for pretty much everything eventually. Consoles are already going this route, though it doesn't paint a rosy picture for building your own gaming PC. What time frame this all happens in? I dunno. Probably faster than I'd like.
If it's really a compute-heavy task, then it boils down to either having longer load times (and making sure you do your compilation while loading the level or the relevant part of it) or doing precompilation. Even an APU wouldn't help.
 

1-D_FE

Member
Oct 27, 2017
8,287
If the out-of-the-box behavior of Turbo Boost (or whatever Intel calls the technology) is pushing the P-cores to clocks at which they're physically unstable, then yeah, I'd say that's a bug. Maybe not a physical erratum, but definitely something that should be addressed at the microcode or BIOS level. I don't think it's unreasonable to say "your CPU should not, at completely default clock speeds and auto-boosting behavior the end user has not intervened in, be so unstable as to crash when running a compile in Visual Studio or Handbrake or Unreal or any other normal CPU-intensive task".

It doesn't necessarily mean the CPU is unstable. It could mean the voltages are insufficient or all kinds of stuff. Just because you buy a high-end CPU that's clocked to the absolute extreme doesn't necessarily mean your motherboard is up to snuff and can handle it without voltage changes. Beyond being a hotbox, that's why I never push my stuff to the max. And these CPUs are being pushed to their max by default, which kind of sucks from a stability POV. But it's obvious from their power draw numbers that Intel is pushing them beyond where they really should be. And it just creates a mess for everyone.
 

senj

Member
Nov 6, 2017
4,525
It doesn't necessarily mean the CPU is unstable. It could mean the voltages are insufficient or all kinds of stuff. Just because you buy a high-end CPU that's clocked to the absolute extreme doesn't necessarily mean your motherboard is up to snuff and can handle it without voltage changes. Beyond being a hotbox, that's why I never push my stuff to the max. And these CPUs are being pushed to their max by default, which kind of sucks from a stability POV. But it's obvious from their power draw numbers that Intel is pushing them beyond where they really should be. And it just creates a mess for everyone.
No, but the CPU is literally unstable. It's not a "specific motherboard failing to supply voltage" thing; it's a widespread issue with the model. That Intel forum link I posted earlier spells it out well: the CPU's core behavior is malfunctioning in very easily noticeable ways at the out-of-the-box clock speed boosts Intel has set it up with.

The CPU pushing itself beyond the max, to the point that "people cannot do their basic day job of running a compiler because the damn thing completely shits the bed by default", is absolutely the CPU being unstable by default.
 

Remark

Member
Oct 27, 2017
3,632
There's definitely something going on with UE5 and DX12.

Every UE5 game I play that uses DX12 eventually either crashes or gives an out of memory error, even though I have an RTX 4090.

The only fix I found was downclocking the performance cores on my 13900K to 52x in XTU. It's been a lot more stable since then, but I still get crashes from time to time.
 

Ry.

AVALANCHE
Member
Oct 10, 2021
1,172
the planet Zebes
Compiling code is generally a hard task for the CPU, though I guess I'm not too familiar with what goes into shader compilation. If it's as hard as you say, then that just adds a further big issue, in addition to needing to transfer stuff from CPU to GPU.
Though I wonder to what extent this is really so engine-specific. I think this just comes down to modern APIs like DX12 and Vulkan; I don't recall this ever being much of an issue with games using older APIs, regardless of their engine. We do see this issue crop up even outside Unreal Engine, after all.

If it's really a compute-heavy task, then it boils down to either having longer load times (and making sure you do your compilation while loading the level or the relevant part of it) or doing precompilation. Even an APU wouldn't help.

I think future hardware combined with better decompression algorithms and higher-bandwidth architecture could certainly mitigate a lot of these issues.

You aren't wrong, however, and I wish more game developers would simply allow for a load time here and there. But we're also moving away from that type of game development, and rasterization's days are numbered over the long run. Hopefully shader comp stuttering will eventually be viewed as just a quirk of games that came out at a certain time (i.e., now).