LW 2018 only using 70% CPU when rendering



oatsey
03-20-2019, 04:06 PM
Can anyone tell me why LightWave 2018 only uses about 70% of my CPU when I render? I have a Threadripper 1950X and have set my render threads to 32, but I have just checked and it is not firing on all cylinders.
It seems a waste if I am not using all of my processing power. Is there some setting I am missing that would put all of my computer onto the task?

next_n00b
03-20-2019, 04:22 PM
Have you tried any LW version other than 2018, or any other rendering software? Do they show the same issue, or is it only in LW 2018?

Sensei
03-20-2019, 05:05 PM
What are you rendering, and how long does it take per frame?
If it only takes seconds per frame, the CPU may be spending part of that time writing files to disk.
Do you have an SSD or HDD? A network disk?
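
If you want to sanity-check the disk side of that, here is a minimal sketch in Python; the 50 MB size is an arbitrary stand-in for one high-res frame, not a LightWave figure:

    import os
    import time

    # Time a single frame-sized write so it can be compared against the
    # per-frame render time; 50 MB is an arbitrary stand-in for one frame.
    data = os.urandom(50 * 1024 * 1024)
    start = time.perf_counter()
    with open("frame_test.bin", "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())  # force the bytes to actually hit the disk
    print(f"wrote 50 MB in {time.perf_counter() - start:.2f} s")
    os.remove("frame_test.bin")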

gar26lw
03-20-2019, 08:10 PM
Have you applied that fix someone made to get Threadripper working properly on Windows?

oatsey
03-21-2019, 05:07 AM
I have tried 11.6 and 2015, which both seem to render at 97-100%; Blender also uses everything. I have tried different scenes in LW 2018 and they vary between 70% and 80%: they start off at 100% for a couple of seconds and then tail off.
I have SSD drives.
I have not heard of a Threadripper fix. Any idea where that is?

oatsey
03-21-2019, 05:11 AM
Sensei, I forgot to mention that this is rendering high-res stills. I have tried different tile sizes, as I wondered if there was some downtime between tiles, but it has made no difference.

gar26lw
03-21-2019, 05:40 AM
Sorry, it's for the 2990WX.

https://bitsum.com/portfolio/coreprio/

oatsey
03-21-2019, 05:48 AM
I have just done a bit more testing. If I render the same scene on a dual Xeon (24 threads), it sits at 100% throughout the render.
If I render the same scene in LW 2019, I see a small improvement: 90-95% for the preprocess, 85-90% for the second phase, and 75-80% for the adaptive sampling. There definitely seems to be a problem with a 32-thread Threadripper during the rendering/antialiasing phase.
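
For anyone who wants to log these per-phase numbers instead of eyeballing Task Manager, here is a minimal sketch using the third-party psutil package (pip install psutil); the ten-minute duration is an arbitrary example, not tied to LightWave:

    import time
    import psutil  # third-party: pip install psutil

    # Sample overall CPU utilization once per second while a render runs.
    samples = []
    for _ in range(600):                       # ~10 minutes of samples
        pct = psutil.cpu_percent(interval=1)   # averaged across all threads
        samples.append(pct)
        print(f"{time.strftime('%H:%M:%S')}  {pct:5.1f}%")

    print(f"mean utilization: {sum(samples) / len(samples):.1f}%")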

Qexit
03-21-2019, 06:58 AM
As I only have Intel CPUs, I cannot investigate the problem firsthand. However, I did find this description of the different modes available for the 1950X chip that could shed some light on the problem and a possible solution:

https://www.anandtech.com/show/11697/the-amd-ryzen-threadripper-1950x-and-1920x-review/4

The UMA/NUMA switch looks to be worth investigating.

gar26lw
03-21-2019, 07:12 AM
Tl;dr?

Qexit
03-21-2019, 07:21 AM
Well, this is the section that caught my attention:


'Legacy Compatibility Mode, on or off (off by default)
Memory Mode: UMA vs NUMA (UMA by default)

The first switch disables the cores in one of the silicon dies, but retains access to the DRAM channels and PCIe lanes. When the LCM switch is off, each core can handle two threads and the 16-core chip now has a total of 32 threads. When enabled, the system cuts half the cores, leaving 8 cores and 16 threads. This switch is primarily for compatibility purposes, as certain games (like DiRT) cannot work with more than 20 threads in a system. By reducing the total number of threads, these programs will be able to run. Turning the cores in one die off also alleviates some potential pressure in the core microarchitecture for cross communication.

The second switch, Memory Mode, puts the system into a unified memory architecture (UMA) or a non-unified memory architecture (NUMA) mode. Under the default setting, unified, the memory and CPU cores are seen as one massive block to the system, with maximum bandwidth and an average latency between the two. This makes it simple for code to understand, although the actual latency for a single instruction will be a good +20% faster or slower than the average, depending on which memory bank it is coming from.

NUMA still gives the system the full memory, but splits the memory and cores into two NUMA banks depending on which pair of memory channels is nearest the core that needs the memory. The system will keep the data for a core as near to it as possible, giving the lowest latency. For a single core, that means it will fill up the memory nearest to it first at half the total bandwidth but a low latency, then the other half of the memory at the same half bandwidth at a higher latency. This mode is designed for latency-sensitive workloads that rely on the lower latency removing a bottleneck in the workflow. For some code this matters, and in some games low latency can affect averages or 99th percentiles in benchmarks.'

Switching between UMA and NUMA should be straightforward but without access to the necessary hardware, I cannot be sure :)
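
If anyone wants to confirm which mode Windows actually ended up in after flipping the switch, here is a minimal sketch using the kernel32 call GetNumaHighestNodeNumber via Python's ctypes (Windows only):

    import ctypes

    # Ask Windows how many NUMA nodes it currently sees: one node suggests
    # UMA/distributed mode, two suggests NUMA/local mode on a 1950X.
    highest = ctypes.c_ulong(0)
    ok = ctypes.windll.kernel32.GetNumaHighestNodeNumber(ctypes.byref(highest))
    if ok:
        print(f"NUMA nodes visible to Windows: {highest.value + 1}")
    else:
        print("GetNumaHighestNodeNumber failed")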

gar26lw
03-21-2019, 07:43 AM
thanks kev

oatsey
03-21-2019, 07:47 AM
Great, thank you. I'll have a play with the settings and see how it affects things.

stoecklem
03-21-2019, 09:21 AM
I'll potentially be in the market for a Threadripper soon, so I appreciate any updates and information you find. Researching it and finding out about all the unresolved NUMA issues on Windows makes me hesitant. I'm very interested in your findings.

Dan Ritchie
03-22-2019, 10:48 AM
I read somewhere that newer Threadrippers will switch automatically between UMA and NUMA modes as needed.

Nicolas Jordan
03-22-2019, 12:02 PM
So far I haven't had any issues rendering in either LW 2018 or 2019 with my 2990WX. Seems to stay at 100%.

oatsey
03-23-2019, 09:26 AM
I could not find any NUMA or UMA setting in my BIOS, so I installed Ryzen Master and tried rendering with the current setup, Creator mode (NUMA) and Game mode (UMA); it rendered marginally faster in the current setup, which was a NUMA mode. Still, it was not using 100% of the CPU. This Threadripper is still a quick renderer: it renders in half the time of my dual Xeon X5675 machines, which is about where it should be according to the benchmarks. It is just a shame that LightWave does not appear to be making the most of the processing power.

Qexit
03-23-2019, 09:41 AM
Sorry to hear that didn't fix the problem. Hopefully, someone running a 1950X with LW under Windows 10 will be able to shed more light on the problem or suggest a solution.

Dan Ritchie
03-23-2019, 12:06 PM
Is it reaching the full range of clocks? In Task Manager I often see less than 100% utilization even when the graph shows no spikes or dips, just a saturated flat curve, but not at the full boost clock, because of some thermal throttling. But then, I'm on mobile: my mobile Ryzen goes to 3.2 GHz, but it usually settles at 3 GHz when rendering.
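
If anyone wants to check for this, here is a minimal sketch that logs the clock alongside utilization, again using the third-party psutil package; note that psutil.cpu_freq() only reports what the OS exposes, which on some systems is a nominal value rather than the live boost clock:

    import psutil  # third-party: pip install psutil

    # Log utilization and the current clock together for a minute; a flat
    # sub-maximum clock under a saturated load points at throttling rather
    # than idle threads.
    for _ in range(60):
        pct = psutil.cpu_percent(interval=1)
        freq = psutil.cpu_freq()               # may be None on some systems
        clock = f"{freq.current:.0f} MHz" if freq else "n/a"
        print(f"util {pct:5.1f}%  clock {clock}")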

fishhead
03-24-2019, 05:54 AM
Just tried it here, using a rather simple scene I am currently working on (a CG element integrated into footage). LW 2019.0.3 on Win 10 Enterprise seems to process everything at around 100%.
No noticeable difference between VPR and F9.

fishhead
03-25-2019, 06:08 AM
Okay, after doing some more CPU-heavy renderings, I have to stand slightly corrected: on average it is at around 95%, give or take 1% or 2%. Still 2019.0.3.