Relentless crashes at render time, Win10 RTX 3070

Disciple

New member
Greetings. I'm not much of a forumite. I contributed a thing or two, but now I'm a bit stuck, and hoping for crowdhelp.

I've been happily using LW2015 since back then, but work started calling for heavier hardware, so I obtained a CyberPowerPC running Windows 10 Home edition, Core i7, GeForce RTX 3070 (similar system). LW2015 ran at warp speed until the render cycle, whereupon it suffered a hardware crash every time. Animating a couple of spaceship meshes from DAZ3D, attempting 550 TIFFs at 1080i, it couldn't write more than 180 frames without crashing at random, once after only 3 frames. "Random" means crashing sometimes during the compute phase, and sometimes during write to disk, unpredictable. I've since installed every Windows update and nVidia driver I can find, and downloaded the LW2020 trial version, and now the renders reach the 300s before hardware-crashing as before. :mad:

So I hope someone can point out my mistake, subtle or obvious. Even better would be directions for fixing this unacceptable failure. Thanks in advance.

Hallelujah!
Disciple
 
A few questions:

Do these crashes occur only on specific scenes, or on any scene?
Do the crashes occur only on old scenes from a previous machine, or also on freshly created scenes on the new machine?
Do the crashes occur only on scenes with third-party meshes, or also on Modeler-created meshes?
Have you tried rendering a crashing scene without the third-party meshes?
In a troublesome scene, have you tried replacing the third-party meshes with objects created in Modeler?

From a fresh LightWave installation, I would load up a vanilla LightWave scene (no plugins or third party meshes) from the LightWave contents folder, and run a test render of 550 PNG frames to see if it crashes.

If it still crashes, then you know the problem is a hardware or driver issue rather than your scene content.
 
I can perform the test you describe, once the current one finishes, but here's what I'm juggling.

It's a race against time before the warranty ends. A single test can run over 30 hours before the crash, and the workload is beginning to pile up. I have to use third party meshes, as I have done by the hundreds using LW2015 on my older hardware. Days for extensive testing under various conditions, I just don't have 'em.
I'm hoping there's something I'm clearly missing, like "Dude, never use nVidia drivers! Only use NewTek ones! Everybody knows that!" ...or whatever.
I guess the alternate question would be, what Windows system do I need that will give me that level of performance and not fill my life with crashy headaches? I'm no hardware geek. All input welcome.

[edit] I should add that the first 2 fails prompted me to fire up my older Win10 laptop with its GTX 1050ti and render the very same sequence using LW2015 from the final frame counting backwards. It did so slowly, but without a single problem. [/edit]

Hallelujah!
Disciple
 
LW2015 ran at warp speed until the render cycle, whereupon it suffered a hardware crash every time. Animating a couple of spaceship meshes from DAZ3D, attempting 550 TIFFs at 1080i, it couldn't write more than 180 frames without crashing at random, once after only 3 frames. "Random" means crashing sometimes during the compute phase, and sometimes during write to disk, unpredictable.
Disable preview and try again.

Try LWSN instead.

Does VPR work?
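
To make the LWSN suggestion concrete, here's a rough sketch (not a drop-in script) of driving LWSN's standalone "-3" mode in small frame chunks from Python, so a crash only costs the current chunk and the failing ranges get noted for a re-render. Every path below is a placeholder, and the exact -3 argument order (config dir, content dir, scene, first frame, last frame, step) should be verified against the ScreamerNet section of your LightWave docs.

[code]
# Hypothetical sketch: render in small chunks through LWSN's standalone ("-3")
# mode so a crash only loses the current chunk, and failing ranges get logged
# for a later re-render. All paths are placeholders; verify the lwsn -3
# argument order against the ScreamerNet section of your LightWave docs.
import subprocess

LWSN = r"C:\Program Files\NewTek\LightWave\bin\lwsn.exe"   # assumed install path
CONFIG_DIR = r"C:\Users\you\.NewTek\LightWave"             # assumed config folder
CONTENT_DIR = r"D:\Projects\Spaceships"                    # your content directory
SCENE = r"D:\Projects\Spaceships\Scenes\flyby.lws"         # scene to render

FIRST, LAST, CHUNK = 1, 550, 10   # 550 frames, ten at a time

failed = []
start = FIRST
while start <= LAST:
    end = min(start + CHUNK - 1, LAST)
    cmd = [LWSN, "-3", f"-c{CONFIG_DIR}", f"-d{CONTENT_DIR}",
           SCENE, str(start), str(end), "1"]
    print(f"Rendering frames {start}-{end}")
    result = subprocess.run(cmd)
    if result.returncode != 0:
        # Note the chunk and keep going; these frames can be re-rendered later.
        print(f"LWSN exited with code {result.returncode} on frames {start}-{end}")
        failed.append((start, end))
    start = end + 1

if failed:
    print("Chunks to re-render:", failed)
[/code]

If a vanilla-content scene still dies the same way under LWSN, with Layout, preview and VPR out of the picture, that points back toward hardware or drivers.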
 
I've seen random app errors (like you describe) before. In essence: big computational load spread across multiple cores - but plagued by seemingly random spurious exceptions. An exception could happen after several hours or a few minutes. In my case it was a combination of memory configuration and CPU power parameters that needed tweaking. The fact that I was running 64GB of memory (with default BIOS memory/power settings) made the issue appear. Basically, CPU boosting caused enough transient voltage droop to memory DIMMs to create spurious memory errors - which would crash long running busy apps.

I confirmed the issue by running Prime95 for a couple hours - which showed numerous errors.

How to use Prime95 for system stability testing:

https://www.tenforums.com/tutorials/16474-prime95-stress-test-your-cpu.html

Where to get Prime95 (it's free):

https://www.mersenne.org/download/

Some googling showed that, yes, running more than 32GB of fast memory on my motherboard could require some BIOS tweaks.

In my case (an AMD Ryzen system), the fix was to go into BIOS, increase memory voltage just a bit (+0.10 V) and set Load-Line Calibration (LLC) to level 3. LLC reduces voltage droop under load changes (the lower the number, the more aggressive it is). BTW: this was 64GB of DDR4 memory running at 3200.

Another possible cause is that the vendor enabled an "easy" overclock setting in BIOS. These are often named something like "EZ Overclock", "EZ Mode", "Auto Overclock", "AI Overclock" or whatever. These "easy" overclock BIOS modes can cause problems with many CPU/memory combinations. If one is set in BIOS, turn it off and retest.

Or, you could just try dropping your memory speed a notch. For example, if you are currently running memory at 3200, drop it to 2666 or even 2400 and test. There would be a modest performance hit doing this. Anyway, if the problem goes away it may mean your memory truly needs special settings to run at the faster speed... or your memory may actually have compatibility issues.
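
One more diagnostic idea, offered only as a sketch: while Prime95 (or a render) is running, keep a running log of CPU load, reported clock and temperature so a crash can be matched against the last readings. The snippet below assumes Python with the psutil and wmi packages installed (pip install psutil wmi) and that the board publishes the ACPI thermal zone through WMI; plenty of consumer boards don't, in which case a tool like HWiNFO or Speccy is still the way to watch per-core temps.

[code]
# Sketch: once a minute, append CPU load, reported clock and (where the board
# exposes it) the ACPI thermal-zone temperature to a CSV, so a crash can be
# lined up against the last readings. Assumes: pip install psutil wmi
# Many consumer boards don't publish MSAcpi_ThermalZoneTemperature, in which
# case the temperature column stays empty and HWiNFO/Speccy is still needed.
import time
import psutil

try:
    import wmi
    sensors = wmi.WMI(namespace="root\\wmi")
except Exception:
    sensors = None  # wmi not installed, or not running on Windows

def acpi_temp_c():
    """Hottest ACPI thermal zone in Celsius, or None if unavailable."""
    if sensors is None:
        return None
    try:
        zones = sensors.MSAcpi_ThermalZoneTemperature()
        # WMI reports the value in tenths of a kelvin.
        return max(z.CurrentTemperature for z in zones) / 10.0 - 273.15
    except Exception:
        return None

with open("stress_log.csv", "w") as log:   # overwrites any previous log
    log.write("time,cpu_percent,cpu_mhz,acpi_temp_c\n")
    while True:
        load = psutil.cpu_percent(interval=1)   # averaged over one second
        freq = psutil.cpu_freq()
        temp = acpi_temp_c()
        log.write("{},{:.0f},{},{}\n".format(
            time.strftime("%H:%M:%S"),
            load,
            f"{freq.current:.0f}" if freq else "",
            f"{temp:.1f}" if temp is not None else ""))
        log.flush()
        time.sleep(59)
[/code]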
 
Some googling showed that, yes, running more than 32GB of fast memory on my motherboard could require some BIOS tweaks.
Thanks for the pointers. Drilling down through all the unfamiliar BIOS menus I could find, I failed to locate anything resembling "LLC" or overclocking, just DRAM voltage, CPU base clock and clock ratio... all at default values.

I'm starting to think the CPU is simply overheating. Your link said to watch temperatures, so I installed Speccy, and it says a render session shoots the CPU temp right up into the red. The unit sits in the same room with me with unobstructed clearance on all six sides, no heat wave ATM. I should think its design would deliver full performance without exotic climate control. Do others overheat in a standard configuration doing a standard render in a standard environment? Do I have to stick my computer in a beer fridge every time there's a render job to do?

Now I have another wordy question. I'm not a hardware geek, nor an under-the-hood wizard. I just install the software, read the manual, and use the bits I can figure out, so I'm about to sound like the densest noob in the forums.
The buzz I heard last year all said, "If you want to render in 3D, then you need GPU horsepower, and gearing up with nVidia is the way to go!" Also, "We can't render 3D with nVidias 'cause the crypto miners grabbed 'em all!" So I shopped around and obtained a box with a respectable GPU, which I'm wrangling now.
But as I look more closely than ever before at what my system is doing at render time, the question nags at me... are my nVidia horses doing anything? I don't love iRay, but I've seen how it gets a GPU boost; what about the LW renderer? I've ransacked the NewTek site and the forums, and all I see GPUs doing is interactive previews and noise reduction. In the game of rendering pixels, is the CPU the only player on the court, and the GPU just runs out to mop up the noise after the play is over? Did I put my render money into the wrong hardware, when investing in more Intel cores and DRAM would have been the better plan for improved render capacity?

Okay, mocking answers are accepted, as long as they're answers. ;)

Hallelujah!
Disciple
 
What is the temperature range you see after rendering for several frames? What do the other temperature sensors show? Generally, newer Intel CPUs do a good job throttling the CPU down when too hot. But the motherboard and memory can still get too hot if there isn't proper system cooling, which could cause system errors.

Lightwave does not natively support using a GPU for actual rendering. The only native GPU support in Lightwave (2018+) is for denoising filtering. To do rendering via GPU, you would need to purchase a 3rd party plugin/app for Lightwave, like Octane.
 
@Disciple: regardless, seeing as you mentioned previously that the end of the warranty period is nearing... you should seriously consider doing a warranty/support claim ASAP with the vendor. The system - as shipped - is apparently too unstable for your use.
 
What is the temperature range you see after rendering for several frames? What do the other temperature sensors show?
Speccy shows the CPU hovering in the mid-90s C, with motherboard and graphics staying under 40, and it's cooled with basic vents and fans, factory standard.
BIOS says the smart fans hit full speed when the CPU reaches 65C, and the CPU thermal protection setting is "Auto" instead of a number, so no idea what it's doing.

[edit=1]...and yes, I have a Wednesday appointment to haul the thing back for the second time. Too unreliable. [/edit]
[edit=2] Correction: After a few hours the graphics temperature is passing the mid-50's and slowly climbing. The motherboard's a little warmer, too. [/edit]

Hallelujah!
Disciple
 
So, it sounds like during heavy load (like rendering) your CPU stays around 95C. That's probably too hot as a sustained temperature for a CPU processing a heavy load over a long period. It could also lead to the overall system (motherboard, memory, etc.) temperature getting too hot.

A "good" CPU temp can vary from CPU to CPU and the type of use. Since I use my systems for doing heavy processing over long durations (like rendering animations), I make sure my systems have more than adequate cooling solutions. Generally speaking, I tune the system to keep the overall system/motherboard temp below 50C and the max sustained CPU temp below 80C.

At the very least, consider upgrading your CPU cooler. If you want to stick with air cooling, consider a Noctua NH-D15. It's BIG, so you'll have to talk to your PC vendor to see if it will fit your case. The NH-D15's cooling performance is impressive, at least as good as mid-level water cooling solutions. BUT: you do need a good case with good airflow and an adequate number of good fans to get the best out of an NH-D15.


Your GPU temps sound fine. Sustained GPU temps of 70C-80C are not unusual for a 3070 under heavy gaming load.
 
@Disciple: You said you got your system from Cyberpower. I checked out their site. It looks like they tend to configure their systems with a water cooling solution. If that's what your system has then I'm kind of surprised it runs so hot.

Some of their configurations include their "Venom Boost factory overclocking". If you chose that you might want to try your system without it.
 
@Disciple - RE: GPU rendering and LightWave. As @eo_neo stated, LightWave does not natively support GPU rendering and so if you really need it, you'll have to invest in a third party LW GPU renderer plug-in. However, be advised that because of the uncertainty of LightWave's future, continued support of those third party plug-ins for LightWave may be limited or currently discontinued.

What this means is that new features and bug fixes which those plug-ins may get in future updates most likely won't be available to the LightWave versions of the plug-in.

However, on the flip side, LightWave comes with fantastic render farm capabilities as standard.
 
You said you got your system from Cyberpower. I checked out their site. It looks like they tend to configure their systems with a water cooling solution. If that's what your system has then I'm kind of surprised it runs so hot.
Well, upon closer inspection, maybe that is a fluid reservoir. It does have twin conduits connected to a heat exchanger in the fan. Even the GPU case has heat pipes poking out. I don't know what propels the coolant, but the air definitely flows freely. Any evidence of overclocking I have not found.
But yes, even rendering a simple GI noise reduction test scene spikes it right up to 95C.

...be advised that because of the uncertainty of LightWave's future, continued support of those third party plug-ins for LightWave may be limited or currently discontinued.

What this means is that new features and bug fixes which those plug-ins may get in future updates most likely won't be available to the LightWave versions of the plug-in.

However, on the flip side, LightWave comes with fantastic render farm capabilities as standard.
Yeah, I think I won't look to nVidia for much in the way of rendering, except the occasional iRay still. Even the AI noise reduction disappoints. No frame-to-frame coherence, but then I don't know how there could be.

The whole setup goes back tomorrow, then I try again with more emphasis on CPU/DRAM power and a more modest GPU... and capable cooling. 🥶 Any recommendations?

[edit=1] Wouldn't ya know it? My simple 120-frame Optix test render crashed after frame 111. 🤦‍♂️ Frame_082.jpg [/edit]

Hallelujah!
Disciple
 
Regardless of CPU make, treat the stated max boost frequency as mostly marketing bait. Pay more attention to the max base frequency, as that may be the frequency your CPU drops down to when rendering.

For example, some current high-end Intel i7 & i9 CPUs spec a max boost of around 5GHz on the performance cores. These CPUs also have a pretty high TDP (max power envelope, in watts), which is a clear indication you'll need some serious cooling. Anyway, you'll likely never see anywhere near a sustained 5GHz (across all cores) when rendering... because the CPU will spend most of its time too hot & throttling the cores down to around the 3-4GHz range. That's if you have reasonable cooling bandwidth. With mid-range cooling, you'd probably be throttled down to the highest base frequency (around 2-3GHz) most of the time during rendering. Sure, if you are a tech wiz you can do some serious mods & tuning to your system to get 5GHz (or close to it) across all cores during renders. But most consumers of tech gear don't have those skills.

For the AMD Ryzens, AMD lists a seemingly low TDP (max power envelope, in watts) - but that is misleading. When you enable Ryzen's Precision Boost Overdrive in BIOS - which almost everyone does - you enable a form of dynamic overclocking, which pushes power draw well above the rated TDP. And that means way more heat and the need for serious cooling.

The key to extracting good rendering performance from the current crop of high end CPUs is tuning the system & cooling to find the right balance of max power use (CPU & memory speed) and cooling bandwidth sufficient to dissipate the watts produced (with cooling bandwidth to spare) to avoid throttling. This might mean you actually have to "tune down" your CPU & memory a bit to avoid constant overheating & throttling.

For example, I have things tuned on my Ryzen 3950X so I hit 4.1GHz across all 32 threads when rendering frames, with max CPU temp well below what triggers throttling. My CPU is capable of going beyond 4.1GHz, but if it does then cores will get too hot and will be throttled (which ruins performance). So, I've tuned things to effectively limit the power my CPU cores can consume, which limits both the max speed and temp the cores can run at. My cooling is air - via a Noctua NH-D15, in an expansive Phanteks Eclipse P500A case with multiple 140mm fans.
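
For anyone wanting a quick read on whether their own cores spend render time below base clock, here's a crude sketch along the same lines; it assumes Python with psutil installed and that you plug in your CPU's advertised base and boost clocks (the numbers below are placeholders). A large share of samples under the base clock during a render is a decent hint that heat or power limits are biting.

[code]
# Rough throttling check. Assumes psutil is installed (pip install psutil) and
# that the base/boost clocks below are replaced with your CPU's advertised
# numbers (these are placeholders). Run it while a render is going.
import time
import psutil

BASE_MHZ = 3800    # placeholder: your CPU's advertised base clock
BOOST_MHZ = 5000   # placeholder: your CPU's advertised max boost clock
SAMPLES = 300      # roughly five minutes at one sample per second

taken = 0
below_base = 0
total_mhz = 0.0
for _ in range(SAMPLES):
    freq = psutil.cpu_freq()   # package-level clock; per-core isn't exposed here
    if freq is None:
        break                  # frequency not readable on this system
    taken += 1
    total_mhz += freq.current
    if freq.current < BASE_MHZ:
        below_base += 1
    time.sleep(1)

if taken:
    print(f"average clock: {total_mhz / taken:.0f} MHz "
          f"(base {BASE_MHZ} MHz, boost {BOOST_MHZ} MHz)")
    print(f"samples below base clock: {below_base} of {taken}")
else:
    print("psutil could not read the CPU frequency on this system")
[/code]

Note that on some Windows systems psutil reports a fairly nominal clock figure, so it's worth cross-checking against the "Speed" readout in Task Manager.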
 
There's a hopeful update. [edit](Returns for refund denied. 🤨 )[/edit]
I managed to contact a tech at CyberPowerPC who essentially said yep, that's way too hot, we've seen it cause other crashes, we'll correct it under warranty. So I ship it back tomorrow hoping for the best. I've kinda formed a counterintuitive attachment to the silly machine, go figure, so I'm really wishing this to succeed. Maybe its GPU muscles will benefit some future renderer that comes my way. 💪
Thank you all for the useful input. I'd still be sitting stumped without it.

Hallelujah!
Disciple
 