Render Silhouette Outline Edges



I am looking to render objects with small edges on the interior of objects and a thick edge on the boundary/silhouette.

Using edge rendering covers the interior but the silhouette is not possible with the edge settings, as far as I can see.

This type of effect:


I thought adding the outline as a pixel filter, using the alpha as a mask, might work, but I'm not sure how.

Now I am thinking that using a mask might allow a nodal edge setup to dictate where the outline is thick and where it's thin?

Anyone got any ideas?
This is a quick and dirty setup that I used for a project. It's not entirely flawless, maybe it'll suit you too. :)


This is the Node Setup for the Edges Node Editor:


Basically, the Ray Cast node checks whether there is another object behind the current surface point. If the distance to the next object is greater than the value B in the Logic node, the line width is set to 1, otherwise to 0. The value B therefore has to be greater than the distance to the surrounding objects. If that makes any sense. :)
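In pseudocode terms, the Logic node's comparison boils down to a simple threshold. This is only an illustrative sketch (the function name and defaults are made up, not the actual node values); note that LightWave's Ray Cast node returns -1.0 when the ray hits nothing, which lands in the "otherwise" branch, which is why raising If False keeps a faint line there:

```python
def edge_line_width(hit_distance, threshold_b, if_false=0.0):
    """Edge line width for one shading sample, as the Logic node computes it.

    hit_distance: distance returned by the Ray Cast node (-1.0 = no hit).
    threshold_b:  the B value in the Logic node; must be larger than the
                  distance to nearby interior geometry.
    if_false:     the Logic node's If False output; raise it above 0 to
                  keep a thin line where the test fails.
    """
    # "If the distance to the next object is greater than B, line width = 1"
    return 1.0 if hit_distance > threshold_b else if_false
```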

You create the node setup for one object and then copy it to the others.


Note: You can select all desired objects in the scene editor together and paste the node setup at once.

One more note:
In order not to let the outline disappear completely, you can increase the value for If False in the Logic node. Of course you can also use all the other edge rendering options.



Hi folks, I'm looking for such a solution. I'm trying to avoid the old halftone shaders from the 2015.3 version and use nodes instead. Any suggestions?




I've put together a simple halftone shader node using dots. It's not perfect, but fun to play around with (since the dots sit on the surfaces of three-dimensional objects, they are also perspectively distorted).
I'm attaching the compound node below.


It is based on a Lambert shader. The color output is plugged into the color input of a Standard node. Diffuse and Specular of the Standard node are set to 0% and Luminosity to 100% (the node contains its own shader node, we don't want any additional shading).

How to set up the node:


The inside of the compound node:


The Halftone Shader compound node uses a Reference object called "Camera Motion" for the Dots Texture. You have to create a Null called "Camera Motion" and parent it to the camera before you load the compound node.
If it doesn't work, it is of course always possible to select the reference object in the Dots Node within the Halftone Shader Compound Node later.

You must plug a scalar node into Dots Diameter and Dots Distance for it to work.
Dots Diameter's default value is 0.45 (you can use values between 0 and 1). Dots Distance depends on the size of the object and is, well, rather small :).

Dots Contrast blurs the dots (values between 0 and 1).
Surface Color is the base color of the surface and Shadow Color is the color of the dots.
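To make the parameters concrete, here is a hypothetical sketch of what a dots texture computes per shading sample (the function and the falloff are illustrative assumptions, not the internals of LightWave's Dots node): a sample is inside a dot when its distance to the nearest grid centre is below half the diameter, and Dots Contrast widens a soft falloff instead of a hard edge.

```python
import math

def dot_value(u, v, distance, diameter=0.45, contrast=0.0):
    """1.0 inside a dot, 0.0 between dots; soft edge when contrast > 0.

    distance: spacing of the dot grid (Dots Distance).
    diameter: relative dot size, 0..1 (Dots Diameter).
    contrast: 0..1, blurs the dot edge (Dots Contrast).
    """
    # Offset from the sample to the nearest dot centre on the grid.
    du = (u % distance) - distance / 2.0
    dv = (v % distance) - distance / 2.0
    d = math.hypot(du, dv)
    radius = diameter * distance / 2.0  # diameter is relative to the spacing
    if contrast <= 0.0:
        return 1.0 if d <= radius else 0.0
    # Blurred edge: linear falloff across a band proportional to contrast.
    edge = contrast * radius
    t = (radius + edge - d) / (2.0 * edge)
    return max(0.0, min(1.0, t))
```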

That's it for now. Use at your own risk. I hope it helps you.

P.s.: The native Halftone Pixel Filter is also available in Lightwave 2020.


This can happen when you have too much time.

Halftone Dots Color Crazy compound node

The compound node is based on an Oren-Nayar and a Beckmann shader. The color output is plugged into the color input of a Standard node. Diffuse and Specular of the Standard node are set to 0% and Luminosity to 100% (the node contains its own shader node, we don't want any additional shading).

The Halftone Shader compound node uses a Reference object called "Camera Motion" for the Dots Texture. You have to create a Null called "Camera Motion" and parent it to the camera before you load the compound node.
If it doesn't work, it is of course always possible to select the reference object in the two Dots nodes within the Halftone Shader Compound node later.


You must plug a scalar node into Dots Distance for it to work.

Dots Diameter's default value is 0.45 (you can use values between 0 and 1). Dots Distance depends on the size of the object and is, well, rather small.

Dots Contrast blurs the dots (values between 0 and 1).

Dots Rotation controls the orientation of the dots (values between 0° and 360°, although the pattern repeats every 90° :) ).

Roughness is a material property and controls the roughness of the surface (0 - 100%).

With the Halftone Dots Color Crazy compound node you get three "modes" for shading.

Classic mode


Plug colors into Surface Color and Paper Color.
Surface Color is, well, the surface color (the color of the dots), and Paper Color is the color between the dots. As with any color input, a texture can also be plugged into Paper Color. The color of the lights tints the Surface Color, not the Paper Color.


To create a paper background you can map the Paper Color texture with Front Projection and use the same texture as a background image in the Compositing tab of the Backdrop window.

CelShading Color mode


With the use of Shading Color Dark, Medium and Bright you can create the classic cel-shaded look with individual, sharply delimited shadow colors. Of course, the shading still comes from the light, but the light color is ignored.
You simply plug the desired color into the respective input.


With Dark Portion you can control the proportion of the dark shadow color (0 - 100%).
With Medium Portion you can control the proportion of the medium shadow color (0 - 100%).

The two values add up; whatever remains of 100% goes to Shading Color Bright. If, for example, Dark Portion is already at 100%, nothing remains for the other two colors and they are not visible at all.
If nothing is plugged in, each of the three colors gets a one-third portion.

Note: The 0 - 100% range refers to the part of the shading that has a gradient; completely black areas always use Shading Color Dark.
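The band selection described above can be sketched like this. This is a hypothetical reconstruction of the behaviour, not the actual node network: Dark Portion and Medium Portion carve up the 0..1 shading gradient, and whatever is left goes to the bright color.

```python
def cel_band(shading, dark_portion, medium_portion):
    """Map a 0..1 shading value to 'dark', 'medium' or 'bright'.

    dark_portion / medium_portion are fractions (0..1); together they should
    not exceed 1.0, and the remainder is the bright band. Fully black
    shading (0.0) always falls in the dark band.
    """
    dark_end = min(dark_portion, 1.0)
    medium_end = min(dark_end + medium_portion, 1.0)
    if shading <= dark_end:
        return "dark"
    if shading <= medium_end:
        return "medium"
    return "bright"
```

With nothing plugged in, each portion defaults to one third; setting Dark Portion to 1.0 makes every sample dark, matching the behaviour described above.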



Darkmode inverts the scaling of the dots, and shaded areas between the dots are colored with the Darkmode Color. This gives the shading a dark impression. The Darkmode Color can, however, be any color or even a texture.

Darkmode Color overrides all other shading colors. It must have a color value of at least RGB 001/001/001 for the compound node to recognize it.


The Paper Color is not visible in Darkmode. Surface Color still works, however, and you can of course plug in a texture here too.
The light color also tints the Surface Color in Darkmode.

This compound node was created by playing around. As a result, some over-complicated node setups may have arisen. If someone finds improvements or even bugs, please report them. :)

Warning: looking inside the compound node can be unsettling for the faint of heart. :cool:

That's it for now. Use at your own risk. Have fun.


This weekend I was bombarded with a few trailers at the theater that made use of NPR. It might be useful to investigate their production methods.
Another halftone option is to make the effect in a compositor or image editor. You could use LightWave buffers to isolate different areas to receive the effect in the external editor.
This is just playing around with the Halftone Dots Color Crazy compound node. These are test renders, without any post-processing, straight from the Lightwave renderer.
The dragon head model is by Vladimir Alexandrow.


CelShading Color Mode

Classic Mode

Classic Mode with DarkMode BG plate.


P.s.: The tests also show the limitations of the shader node: perspectively distorted dots in some places.
Additional info on rendering technique in LW

Marty the Monkey + LW 11.6 nodes | NewTek Forums and want to reproduce LW11.6 shading!! | NewTek Forums

The original blog post no longer exists, so I have resurrected it here:

Behind the Scenes: Creating Marty the Monkey

Posted by John Einselen – July 24, 2013
Kaiser Permanente partnered with Vectorform to build a new tool for use in autism assessment, helping make the experience fun for kids and easier for clinicians. Structured around speech, occupation, and physical therapy mini-games, the Microsoft Kinect system was combined with therapist input to help track performance and improvement. As a key part of the experience we created a 3D animated character to act as guide, coach, and friend; Marty the Monkey.

Character design and modelling​

Backgrounds, icons, and the original 2D character were all illustrated by James Anderson. Because a large number of character animations were required, we decided it would be more efficient to animate the character in 3D, allowing us to make character changes more easily after primary animation completion, and reuse specific animations with different camera angles when needed.

Based on a front/side/back character sheet, everything was modelled in Lightwave using box and sketch modelling techniques and Catmull-Clark subdivision surfaces.

Facial features that work in 2D do not always translate well to 3D, especially smile shapes on a round object. It took a number of revisions and modifications, but I worked hard to retain as much of the spirit of the character as possible. The final model was kept as low-poly as possible, using edge sharpness to control details while keeping everything optimised for fast animation and smooth curves. Subdivision surfaces allow for very flexible geometry resolution, and for rendering the divisions were simply increased until discernible polygon edges were no longer visible.

Surfacing and shading​

Image maps were avoided; instead, modelled shapes were used to divide surfaces, with separate shaders applied to each area. This bypassed the entire UV mapping process and resulted in edge sharpness free of raster limitations. This also became important during the style development process, as skin, eyes, and fur were easily shaded with different node setups. All surfaces for the character were built in the surface node editor using grayscale values and scalar math, with specific color palettes applied at the very end of the network.

Base diffuse and specular​

Lightwave’s sub surface scattering shader solves several issues common in designing cartoon surfaces. From the technical side, SSS interpolates even the grainiest dome lights and soft shadows before further nodes are applied, allowing for sharp cell edges (via logic, gradient, or other stepped nodes) with no extra oversampling, something that’s entirely impossible using a Lambert diffuse shader. From the artistic side, SSS groups and blends areas much like an artist simplifies and combines broader shapes while overlooking unnecessary geometry details.

I chose to use Phong specular shading for the eyes due to the bigger, slightly offset, and more stylised look over the smaller, harsher, and more accurate Blinn shader (I fully admit, sometimes the Phong specular shader is fun just because it’s so classic, such as when working on the Tron: Legacy project).

Though using a gradient node would allow for multi-step cell shading and more granular control over luminosity ranges, I opted for two simple scalar nodes and a mixer. The smooth step node is constantly used for controlling value ranges, and the curve at either end of the range results in richer contrast. Mixed with a simple logic node set to return 0.0 or 1.0 based on a threshold value, I created sharp cell shading for the nose and furry areas, while retaining a small amount of shading for better depth and to lessen the harshness of the lighting.

Of note, the skin uses no such combination, relying only on the smooth step node for a softer, flatter look. I used combinations of this smooth and sharp cell shading throughout the project, depending on the feel or style needed for each material. These values were then remapped using multiplication and addition nodes, so that I could add further details in the shadow areas.
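The smooth-step/logic combination above can be sketched as follows. This is an illustrative reconstruction under assumed range and mix values (the originals aren't given in the post): a smooth step remaps the diffuse value for richer contrast, a hard threshold supplies the cel edge, and a mixer blends the two so some soft shading survives.

```python
def smoothstep(edge0, edge1, x):
    """Standard smooth-step: 0 below edge0, 1 above edge1, cubic between."""
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def cel_shade(diffuse, lo=0.2, hi=0.8, threshold=0.5, mix=0.8):
    """Blend a contrast-rich smooth ramp with a hard cel threshold."""
    smooth = smoothstep(lo, hi, diffuse)        # smooth step node
    hard = 1.0 if diffuse > threshold else 0.0  # logic node: 0.0 or 1.0
    # mix = 1.0 gives pure cel shading; lower values retain a little soft
    # shading for depth, as described for the nose and furry areas.
    return hard * mix + smooth * (1.0 - mix)
```

Dropping the logic node entirely (mix = 0) gives the softer, flatter look the post describes for the skin.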

The eye surface uses only a logic node and the Phong shader to create a bright, flat highlight. Because a dome light would still result in unresolvable noise, I implemented a split lighting scheme in Layout; a soft dome light enabled only for diffuse (affecting only the SSS nodes), and a distant light enabled only for specular (affecting only the Phong node).

Occlusion weight maps​

Using weight maps, when possible, can be much simpler than fussing with UV mapping, and significantly faster and smoother than trying to calculate ambient occlusion at render time. In this case it also allowed for art direction, applying shading to specific areas I wanted to darken, while leaving other areas bright, regardless of physical accuracy. This weight map was then combined with the diffuse shading by simple multiplication.





Raytraced edge projection​

To better match the original illustrations, I needed to improve the definition between areas of the character that were separated by depth but not by screen space, especially around the face and ears. Essentially outlining only those areas that needed extra separation, without looking like a continuous stroke. Given Lightwave’s poor native edge rendering and the inability to push edges behind geometry, that wasn’t going to be an option.

Rendering an edge effect using the depth map in post, either via Denis Pontonnier’s free image filter nodes or custom effects in your compositing application of choice, has the advantage of screen space thickness controls, but unfortunately this isn’t helpful for inclusion within the shader pipeline unless you render out pieces separately and combine in post. For a shader setup like this, it would be excruciatingly overcomplicated, and there’s always the issue of depth map aliasing requiring renders at much higher resolutions or significant antialiasing work in post.
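For reference, the depth-map approach dismissed above amounts to thresholding depth discontinuities. This is a deliberately naive sketch (real compositing filters do proper gradient estimation and antialiasing), with made-up names, just to show where the aliasing problem comes from: the mask is a hard 0/1 decision per pixel.

```python
def depth_edges(depth, threshold):
    """Return a 0/1 edge mask from a 2D depth buffer (list of rows).

    A pixel is an edge if its depth jumps by more than `threshold`
    to any 4-neighbour -- i.e. a silhouette-style discontinuity.
    """
    h, w = len(depth), len(depth[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            jumps = [abs(depth[y][x] - depth[ny][nx])
                     for ny, nx in ((y - 1, x), (y + 1, x),
                                    (y, x - 1), (y, x + 1))
                     if 0 <= ny < h and 0 <= nx < w]
            if max(jumps) > threshold:
                mask[y][x] = 1
    return mask
```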

Using an additional light positioned in front of the camera and using RGB channels to create separate diffuse light sets at render time would be one possible solution (for example, a 100% red SSS node and a 100% red dome light, along with a 100% green Lambert node and a 100% green point light), and while combining these in the node editor isn’t a problem, the Layout setup is a pain to deal with. I just wasn’t happy with the results, wanting something a little more automated and built strictly within the shader itself.

I finally settled on creating a custom edge projection setup using the RayCast node and a bit of math. The trick here is that it’s essentially hard-coded for this character and this scene (see below for a more universal solution). By taking the Ray Source from a Spot Info node, I grabbed the location of the current viewpoint, scaled it down, and shifted it up slightly. This provided a direction for the rays that would converge well in front of the camera; any pieces of geometry visible from the camera that were much further behind other pieces of geometry would return a measured hit instead of empty air. Combined with a logic node to filter out -1 (no hit at all) and a smooth step node to limit and normalise the distance traveled (and fading out results larger than I was looking for), I was able to render black lines around geometry edges.
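The geometry of that trick can be sketched as below. This is a hypothetical reconstruction of the maths described (the scale, lift, and falloff values are illustrative, not the original node settings): the viewpoint is scaled down and lifted so the cast rays converge in front of the camera, no-hit results (-1) are filtered out by the logic node, and a smooth step fades the line out with distance.

```python
import math

def edge_projection(spot_pos, camera_pos, raycast,
                    scale=0.9, lift=0.05, max_dist=0.2):
    """Return 0..1 edge darkness for one shading sample.

    spot_pos:   world position of the point being shaded (Spot Info).
    camera_pos: Ray Source from the Spot Info node.
    raycast:    function(origin, direction) -> hit distance, -1.0 for no hit.
    """
    # Scale the viewpoint down and shift it up slightly so rays converge
    # well in front of the camera instead of at it.
    target = (camera_pos[0] * scale,
              camera_pos[1] * scale + lift,
              camera_pos[2] * scale)
    direction = tuple(t - p for t, p in zip(target, spot_pos))
    length = math.sqrt(sum(c * c for c in direction)) or 1.0
    direction = tuple(c / length for c in direction)
    d = raycast(spot_pos, direction)
    if d < 0.0:                 # logic node: filter out -1 (no hit at all)
        return 0.0
    # Smooth step: normalise the distance and fade out results that are
    # larger than we are looking for.
    t = max(0.0, min(1.0, 1.0 - d / max_dist))
    return t * t * (3.0 - 2.0 * t)
```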


For a more universally useful solution, a combination of radial offsets is preferable. Previous to Lightwave 11.6 an array like this was simply too unwieldy, but using the Compound node (introduced during Siggraph 2013, prerelease available for all registered users) makes this much simpler, compiling large node networks into a single group. You can download a sample collection of nodes at the end of this article.

Final color mix​

Using the grayscale values from each shader network as an input for mixers, colors from the original illustrations were selected and mixed to create the desired highlights and shadows. This sort of absolute control over shadow tinting and hue shifts can result in rendered output that feels far more illustrative than 3D.
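The mixer stage is a plain linear interpolation between art-directed colors. A minimal sketch, assuming RGB tuples in 0..1 (the function name is illustrative, not a LightWave node):

```python
def mix_color(value, shadow_rgb, highlight_rgb):
    """Blend two hand-picked colors using a 0..1 grayscale shading value.

    value = 0.0 returns the shadow color, 1.0 the highlight color; the
    shadow color can be freely tinted/hue-shifted for an illustrative look.
    """
    return tuple(s + (h - s) * value
                 for s, h in zip(shadow_rgb, highlight_rgb))
```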

The eyes were feeling a little dead, so at the suggestion of a coworker I mixed additional white around the eyes using another weight map.

Character rigging and animation​

The character was rigged with automated bounce built into extremities (such as the ears and modelled tufts of hair), and a dual skeleton system that allowed me to use a combination of motion capture data and keyframed positions. This allowed the character to have a very specific start and end position, making the pre-rendered animation assets mix and match seamlessly and consistently within the game experience.

Motion capture was recorded, tracked, and targeted using iPi Soft’s motion capture solution, and lip-sync was animated using Papagayo (importing the data files using Mike Green’s LScript).

Rendering and delivery​

Render queues were managed using a simple Finder service in OSX, with all shots tracked in a spreadsheet, detailing progress from voice over recording (also completed in-house at Vectorform) through final image sequence delivery.

Because everything was rendered straight out of Lightwave without any need for compositing, image sequences were simply converted to the desired size and format for the development team, who then implemented the animations in the final experience.

I tried to avoid perspective distortion of the dots. This works to a certain extent. Unfortunately, it introduces new distortions depending on the focal length: barely visible at normal and longer focal lengths, but more noticeable in wide-angle shots. That, however, exceeds my knowledge of Lightwave nodes. :(


Here is a test with larger dots so you can see the correction better.

With Darkmode
