View Full Version : mdd render slowdown

01-29-2014, 08:32 AM
Hello wavers, I'm rendering a character animation. The render time is good enough, but if I bake the animation and render the scene with an .mdd, the render time almost triples.

The character's skin has SSS, and the scene uses DOF and PMB (photoreal motion blur).

01-29-2014, 09:21 AM
If it's the MDD that causes the render issue, it might have to do with the size of the data. If you use the native players in Layout, LightWave will load the entire MDD into RAM, which can cause a lot of network traffic. It also leaves less RAM for the renderer.

Try Denis Pontonnier's MDD Pointer (http://dpont.pagesperso-orange.fr/plugins/MDD_Pointer.html). This tool makes Layout load only the current frame with streaming data. It's not so great for interactivity but it can speed things up for rendering and make more RAM available. I use it when I need to render a scene with an exceptionally huge MDD file or several MDD files. If I'm working in the scene, I'll switch to a standard player or just disable MDD until I'm ready to render.
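To get a feel for why loading the whole cache hurts, here's a rough back-of-the-envelope calculation in Python. It assumes the commonly documented MDD layout (int32 frame and point counts, one float32 time per frame, then three float32 values per point per frame); the frame and point counts below are made-up example numbers, not from the thread.

```python
# Rough RAM estimate for a baked MDD cache. Assumes the standard MDD
# layout: two int32 counts, a float32 time per frame, and 3 float32
# values (x, y, z) per point per frame. Example numbers are illustrative.

FLOAT_SIZE = 4  # bytes per 32-bit float

def mdd_size_bytes(num_frames: int, num_points: int) -> int:
    """Approximate size of a full MDD cache on disk / in RAM."""
    header = 2 * 4                        # frame count + point count (int32)
    times = num_frames * FLOAT_SIZE       # one float32 time per frame
    frames = num_frames * num_points * 3 * FLOAT_SIZE
    return header + times + frames

def streamed_frame_bytes(num_points: int) -> int:
    """Per-frame footprint when only the current frame is resident."""
    return num_points * 3 * FLOAT_SIZE

# A 900-frame bake of a 200k-point character (hypothetical example):
full = mdd_size_bytes(num_frames=900, num_points=200_000)
per_frame = streamed_frame_bytes(num_points=200_000)
print(f"full cache: {full / 2**20:.1f} MB, streamed frame: {per_frame / 2**20:.1f} MB")
```

So a fully loaded cache can run to gigabytes that every node must pull and hold, while streaming keeps only a couple of megabytes per frame resident.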


01-29-2014, 09:27 AM
One more thing to consider: This is probably not the issue you're running into but if network rendering is slowing down considerably in general, check that your render controller is using all threads for all machines. I know in BNR, I need to set it to 0 to use all available threads. (0 is the same as setting it to Automatic in Layout. If you use a different controller, check your docs for the appropriate setting.) Without that option enabled, my network computers can take four times longer to render a scene.

01-29-2014, 09:40 AM
Greenlaw, great tips, esp. about the streaming MDD, although the controller threads is also surprising--I'd think the CONTROLLER wouldn't be all that CPU intensive.

01-29-2014, 10:05 AM
Great tips Greenlaw. I use TequilaScream, but it doesn't seem to be a RAM or CPU problem, just the render time. I suspect photoreal motion blur is the key here, but I need to run some tests first.

01-29-2014, 11:22 AM
@ Jeric, regarding the controller CPU usage, the setting is not for the controller but rather instructions that it sends to the individual LWSN nodes. Depending on your defaults, the multi-threading setting for LWSN may or may not be the same as what you use in Layout. On my farm at home, for example, if I don't have BNR set to send 0 ('Automatic'), the quad-core computers will render using only a single thread, thus taking approximately four times longer than Layout set to Automatic.

Some controllers may default to using single thread mode for compatibility with older plug-ins and processors. If you're only using modern computers and plug-ins on your network, you should definitely switch to Automatic mode. That's just my personal opinion of course--others may have had different experiences with this.


01-29-2014, 11:35 AM
@Joseba, hope you get this figured out. Remember to let us know what you discover. It's good to have as much info as possible online for future reference.

01-30-2014, 03:02 AM
Thanks for the info here Greenlaw, I had no idea about the MDD RAM issue. So if I'm rendering on 8 nodes using LWSN, the full MDD would be loaded for each rendered frame, but if I'd used MDD Pointer then just the relevant frame would be loaded? Not sure if this is a correct explanation, but what you describe would explain the slow render of a relatively simple scene I made recently.

01-30-2014, 09:06 AM
That's correct. My understanding is that with Pointer, only the data for the current frame is loaded into RAM, and when the frame changes, the MDD data is purged and replaced for the new current frame. This is mostly useful during network renders, as it can reduce long scene loading times and it can also reduce RAM usage.

Where it's not so good is when you're trying to work interactively in a scene with a huge MDD because it's constantly streaming from your drive when you scrub through frames--in this situation it's better to use a standard player or, if possible, disable the MDD player until you're ready to render.

In general, I wouldn't be too concerned about this--if you have enough RAM and a fast processor, it's not normally a big deal. Pointer is primarily useful when you have a REALLY large MDD or many MDDs, and need to cut down the overhead. I don't think about using Pointer until I see a problem. But when I need it, I'm very glad we have this tool. (Remember to tip the coder.) :)


01-30-2014, 09:12 AM
I should clarify the previous post.

When you're rendering through LWSN, the entire MDD isn't necessarily loaded repeatedly unless you've told LWSN to do that. Normally, it will only load the entire MDD for the first frame and that entire MDD is kept in RAM until the LWSN shell is finished rendering the scene. If you have your controller set to start a new LWSN shell with each frame then, yes, it will need to load the entire MDD for each frame. Which method is better probably depends on the size and speed of your network, how many people are using it, and the typical size of your scene files. (I'm not an IT guy so that's the best way I can explain it.)

With Pointer, it will only load the MDD data for a single frame for each currently rendering frame, regardless of how you have LWSN set up. That means the footprint stays small whether you're rendering one frame at a time or rendering the entire scene in a single LWSN shell.

Use your judgement--as described above, there can be benefits and downsides to any of these methods depending on the situation.

Hope this helps.


01-30-2014, 10:04 AM
Hello everyone. I am doing some tests right now.
The render time with the MDD-driven character is 3 min 7 sec; the render time of the original bone-driven character is 1 min 22 sec.
But that is not the only difference: the MDD-driven one has larger motion blur. That explains the longer render time, but then another question appears: why doesn't the baked MDD render motion blur the same way?
The scene is complex. The character is parented inside a robot with noise applied to some channels, and it moves fast at times, so I don't know for sure, but it seems the MDD bake process, or the way MDD vertices are evaluated for motion blur purposes, is not accurate enough?
I remember Modo fixed a big MDD render speed issue some time ago. I don't know whether it's similar or not.

01-30-2014, 10:59 AM
Wow! That's surprising to hear. That MDD renders motion blur so differently from bones sounds very wrong to me--and it would explain the longer render time. You might want to submit a report with content to Fogbugz.

Sorry, I can't help with that. I haven't encountered this issue because in production I routinely rely on the embedded motion vector data in exrTrader for motion blur, and I think the last time I used PR motion blur was probably five or six years ago. I wonder if I'll notice a difference in the intensity of the vector data though...I'll check when I get a chance.

BTW, which version of LightWave are you using?


01-30-2014, 11:12 AM
One other thing I remember that might cause a slow down during LWSN rendering: if you have deforming subpatch models in your scene, having a different preview and render setting can have a big impact on render time. This is because LightWave needs to do a subpatch level conversion each time it moves the object, and if the difference between levels is huge, this can take up extra time which will add up throughout the course of the scene render.

To work around this issue, simply set the subpatch preview level setting to match the render level setting--doing this eliminates the step during rendering. Note that this trick has the same considerations as when using Pointer--if you set your preview to match your render setting, interactivity may slow down considerably. You should really only do this when you see a big hit in scene rendering and then only when you're ready to render the scene. (If you have a lot of MDD's, you might want to save a separate scene.)

When I was with the Box, we had a script that would allow us to globally set the preview level for all subpatch objects in a scene to match the render level when we submitted a scene for rendering. This didn't affect the working state of the scene so we didn't have to worry about reverting our settings to 'interactive mode'. Maybe LW3DG should add this feature natively.
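A batch script like the one described above can be sketched easily, since .lws scene files are plain text. This is only a minimal illustration, not the studio's actual script: it assumes the scene stores the two levels on one line as `SubPatchLevel <display> <render>`. Open your .lws in a text editor first to confirm, as the key name can differ between LightWave versions, and the file name used below is hypothetical.

```python
# A minimal sketch of a pre-submit step: copy the render subpatch level
# over the preview (display) level in a LightWave .lws scene file, so
# LWSN skips the subpatch level conversion at render time.
# ASSUMPTION: the scene stores the levels as "SubPatchLevel <display> <render>".
import re

def match_preview_to_render(scene_text: str) -> str:
    """Rewrite 'SubPatchLevel <display> <render>' so display == render."""
    return re.sub(
        r"^(SubPatchLevel )(\d+) (\d+)$",
        lambda m: f"{m.group(1)}{m.group(3)} {m.group(3)}",
        scene_text,
        flags=re.MULTILINE,
    )

# Typical use: write the result to a render-only copy of the scene, so the
# interactive scene keeps its fast preview level (file names hypothetical):
# with open("shot010_render.lws") as src, open("shot010_render_submit.lws", "w") as out:
#     out.write(match_preview_to_render(src.read()))
```

Writing to a render-only copy mirrors the advice above: the working scene stays in 'interactive mode' and nothing has to be reverted after the submit.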


01-31-2014, 09:20 AM
Thank you very much for the tip, Greenlaw. I always wondered why the mesh-freezing render step is so slow! I ALWAYS have different subdivision preview/render levels in my scenes. That is a very good point; in this scene the mesh freezing takes almost a minute to complete. OpenGL interactivity also suffers more at Last subdivision order than at First, even if your object has no deformation at all.
We are using the very latest LW version, but I suspect this problem is not new. I've had similar slowdowns (not so critical) every time I render an MDD character with fast photoreal motion blur near the camera.
Modo's bug was the same, so maybe rendering an MDD with realistic stochastic motion blur is not as easy as it seems.

01-31-2014, 10:33 AM
??? I'm surprised (appalled) that LWSN even looks at the preview render setting at all, considering it has no user UI.

01-31-2014, 10:40 AM
I am not a programmer, but I think LW loads the "preview"-level geometry into memory when loading the scene and uses it at render time if it matches the render level. FPrime and VPR both load from there too, because it is already in memory. Just speculating.

01-31-2014, 11:02 AM
I always assumed it created the preview geometry on the fly, at scene load, and only for the user version of LW, not LWSN.

If it stored the preview geometry, where would it store it? Logically it would be in the LWO, and that would mean that LWOs would vary greatly in size depending on the preview subd level. Layout doesn't even need to load a scene if it's simply being the LWSN controller, so it wouldn't have an opportunity to create the preview mesh.

Or so I believe. If LWSN has any interaction w/the preview level, I'd suspect it's just a left-over from being a very cut-down version of Layout. That would be some EXTREMELY low-hanging fruit to eliminate a significant stumbling block.

01-31-2014, 12:04 PM
I don't know why it does that either. It probably has to do with making certain that any given scene is read and processed by LWSN exactly the way Layout would do it before rendering starts--if it ignores some value or stage in the .lws, the result might lead to a difference in rendering. I don't mean in this situation specifically but just in general.

Still, for this particular situation, yeah, I'm not sure why LWSN should care about the subpatch preview setting either. But then I'm not a programmer. Maybe a more knowledgeable user will stop by to explain.