
Thread: Tone Mapping Trick

  1. #1

    Tone Mapping Trick

    I've seen some people asking about this on other forums recently, and although HDRShop has a free plugin for this, as far as I know there's only one free tonemap filter available for LW right now (Pipari's ToneReproduction); so I want to share a technique to get a very similar result using only LightWave tools. I designed this trick back in the LW6 days, but I've updated the technique for LW9, so some things can now be done more easily.

    As we know, common computer monitors can only display a very limited range of the tonal values in a radiance image. LW and almost all 3D apps convert, in a linear way, these thousands of tonal values (1 to 100,000) into an image with tonal values ranging from just 1 to 255. The problem with this linear conversion is that we get a very contrasted image. Something like this:

    This is not a problem while working; the problem comes when we display the image. So what we commonly do is work (fully or partially) in a LCS (linear color space), and just before the renderer displays the image, we apply a gamma correction with an exponent near 2 (commonly 2.2 - 2.5 for PCs and 1.6 - 1.8 for Macs, but this can vary and depends on your system and OS configuration, since the OS on a Mac, for example, already applies a correction near 1.4). The idea is to get an image with a tonal balance closer to film and the human eye:
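    The two steps above (naive linear scaling vs. gamma correction before display) can be sketched in a few lines of Python. This is a rough illustration with made-up radiance values, not anything from LW itself:

    ```python
    import numpy as np

    # Hypothetical scene-referred radiance values (arbitrary HDR range)
    hdr = np.array([0.01, 0.05, 0.18, 0.5, 1.0, 4.0])

    # Naive linear mapping: clip to [0, 1] and scale to 8 bits.
    # Dark detail gets crushed into the bottom few values.
    linear_8bit = np.clip(hdr, 0.0, 1.0) * 255.0

    # Gamma correction (exponent ~2.2) applied just before display
    # lifts the shadows toward the mid-tones.
    gamma = 2.2
    corrected_8bit = np.clip(hdr, 0.0, 1.0) ** (1.0 / gamma) * 255.0

    print(np.round(linear_8bit).astype(int))
    print(np.round(corrected_8bit).astype(int))
    ```

    Note how a radiance of 0.01 maps to roughly code value 3 linearly but to a far more visible value after the 1/2.2 exponent; that difference is the contrast problem described above.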

    However, there are cases where we need even more control over the final "look" of our images, cases where we need to show all the detail in every zone of an image (mainly in the brightest zones and in the darkest areas), as if our monitors could display the whole range that a human eye can perceive. This is what the tone mapping process does:

    To achieve this we have mainly three ways: the Exposure Blending method, which blends differently exposed LDRI versions into a single balanced image; the Global Operator method, which works on the intensity of the global tonal values (like a tonal curve, for example); and the Local Operator method, which uses each pixel's location to determine how it should be mapped depending on whether it sits in a dark or bright area. This last method tends to produce the most pleasing results (like the one shown above) and is the one most similar to my technique (although it looks more like the Exposure Blending method, my technique doesn't have its limitations), and despite being the best and most complex method, the technique itself is very simple, as you'll see:
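    To make the distinction concrete, a global operator can be sketched in one line: every pixel goes through the same curve regardless of its neighbors. This example uses the simple Reinhard-style curve L / (1 + L) purely as an illustration; it is not the technique described in this post:

    ```python
    import numpy as np

    def global_tonemap(hdr):
        """Global operator: every pixel is mapped through the same
        curve, regardless of its surroundings. Reinhard-style L/(1+L):
        bright values are compressed far more than dark ones, and the
        output always lands in [0, 1)."""
        return hdr / (1.0 + hdr)

    hdr = np.array([0.1, 1.0, 10.0, 100.0])
    print(global_tonemap(hdr))
    ```

    A local operator, by contrast, would also look at each pixel's neighborhood before deciding the mapping, which is why it preserves detail in both bright and dark regions at once.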

    The first thing we need is an HDR image (I'm using memorial.hdr from Debevec's web site)

    Clone this image twice. Since LW automatically linearizes any HDR image format, for these clones we begin by adding a 2.2 gamma correction (which is my system gamma). For the first clone, we add HDRExposure and adjust the white and black points; here we only need to worry about obtaining balanced values in the middle and dark areas:


    For the second clone we adjust the exposure in such a way that we bring down the high range of the very bright areas (we don't care about the rest):
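    The two clones amount to two different exposure windows over the same radiance data. As a rough stand-in for what the white/black point adjustment in HDRExposure does (the exact LW math is not documented here, so this is an assumed linear remap):

    ```python
    import numpy as np

    def expose(hdr, black_point, white_point):
        """Remap radiance so black_point -> 0 and white_point -> 1,
        clipping everything outside. A simplified stand-in for a
        white/black point exposure adjustment."""
        return np.clip((hdr - black_point) / (white_point - black_point),
                       0.0, 1.0)

    hdr = np.array([0.02, 0.2, 1.0, 8.0, 50.0])

    # Clone 1: balanced for the middle and dark areas;
    # the very bright zones simply blow out to white.
    dark_version = expose(hdr, black_point=0.0, white_point=1.0)

    # Clone 2: exposure pulled way down so only the very
    # bright areas keep usable detail.
    bright_version = expose(hdr, black_point=0.0, white_point=50.0)
    ```

    Each clone sacrifices one end of the range so the other end stays readable; the trick that follows is how to combine them.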


    Now we have two parts of our balanced image: the dark and bright areas are balanced in two different images. What we need now is a single image. So take note of your HDRI's resolution and make a blank image (8-bit) at the same size. You can do this in your favorite 2D app or within LW (just render a black or white image and save it as a Targa24, PNG24, etc.)
    Add a Textured Filter (Image Editor/Processing), set your HDRI and disable the MipMap option (since this is a preprocess):

    Then add the black&white filter:

  2. #2
    What we have here is a luminance map of our radiance values. We're going to use this map as a mask (alpha) to specify, per pixel, how much of each of our two balanced versions should show (we have mapped our HDRI into an LDRI because LW behaves better with LDRIs as alphas)
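    The Textured Filter plus Black&White filter combination boils down to computing a clipped luminance image. A minimal sketch, assuming Rec. 709 luma weights (the actual weights LW's filter uses may differ):

    ```python
    import numpy as np

    def luminance_mask(hdr_rgb):
        """Luminance of the radiance values (Rec. 709 weights),
        clipped to [0, 1] so it behaves like an LDRI alpha map."""
        lum = (0.2126 * hdr_rgb[..., 0]
               + 0.7152 * hdr_rgb[..., 1]
               + 0.0722 * hdr_rgb[..., 2])
        return np.clip(lum, 0.0, 1.0)
    ```

    Bright regions of the HDRI end up near 1 in the mask and dark regions near 0, which is exactly what lets it steer the blend between the two exposure clones.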

    We can do this in at least three ways: through a Textured Filter into a blank HDRI (this is the method I used in the LW6 days, but we won't use it this time); with the cool Node Image Filter from the prolific Dennis Pontonnier (which we'll use now!); or with a plane (a flat polygon at our HDRI's size).

    Then go to Effects/Processing/Add Image Filter and choose Node Image Filter. What we do here is easier to understand this way:

    We have three color layers: the first one has our version balanced for the dark areas, the second has the version balanced for the brighter areas, and the third has our luminance map, which serves here as an opacity map. We could plug this output directly into the image filter node, but then we'd have to add a WaveFilter image later to adjust the saturation; besides, we wouldn't have all the flexibility for color correction that the LW9 shader system provides. So the simplest way to get some control over this is through the Color Tool:
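    The layer setup above is, per pixel, an ordinary alpha blend of the two exposure versions weighted by the luminance map. A minimal sketch of that mix (names here are illustrative, not LW node names):

    ```python
    import numpy as np

    def tone_map_blend(dark_version, bright_version, mask):
        """Per-pixel opacity mix: where the luminance mask is high
        (bright areas of the scene), take the low-exposure version
        that preserves highlight detail; where it is low, take the
        version balanced for the mid and dark areas."""
        return dark_version * (1.0 - mask) + bright_version * mask

    # Toy 2-pixel example: one dark pixel, one bright pixel
    dark_version = np.array([0.2, 1.0])    # shadows balanced, highlight blown
    bright_version = np.array([0.0, 0.6])  # shadows crushed, highlight detailed
    mask = np.array([0.0, 1.0])            # luminance map as alpha
    print(tone_map_blend(dark_version, bright_version, mask))
    ```

    Each pixel ends up taking its value from whichever clone was balanced for its zone, which is why the result resembles a local-operator tonemap even though both inputs were made with global exposure adjustments.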

    Let's compare it with the real tonemap result:

    Considering how high the levels are in this HDR image, the results are pretty close, huh?

    Although the results are acceptable, I noticed an "error" in this approach: we set FPGamma before HDRExposure, and this is a disadvantage if, for example, we wanted to add some nodes to extend the color correction possibilities (since HDRExposure affects the image in a linear way and gamma in a non-linear way). So we fix that:

    For our first and second clones (we need to re-adjust all the parameters, but it's necessary if you want to process the image in LCS properly)
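    The reason the filter order matters is that a linear exposure scale and a non-linear gamma don't commute. A toy check, assuming a plain gain for exposure and a 1/2.2 power for gamma:

    ```python
    # Exposure (linear gain) and gamma (non-linear power) do not commute:
    # applying them in a different order gives a different pixel value.
    value = 0.25   # hypothetical linear pixel value
    gain = 2.0     # exposure adjustment (linear)
    gamma = 2.2    # display gamma

    exposure_then_gamma = (value * gain) ** (1.0 / gamma)
    gamma_then_exposure = (value ** (1.0 / gamma)) * gain

    print(exposure_then_gamma, gamma_then_exposure)
    ```

    Scaling after the gamma has been applied even pushes the value past 1.0 here, which is why exposure adjustments belong in linear space, before the gamma correction.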

    Let's compare now:

    Now things run nicely for more complex adjustments or color corrections.

    Btw, if you want to process a sequence with this technique, it's better to use a plane and map your sequence through nodes on its surface, since it's 200% faster than the Node Image Filter.

    My final advice is not to overuse tone mapping if you are reproducing a film response curve or integrating CG elements with live-action plates, since the tonemapped result is closer to our eyes' perceptual range.

    It's easier to do this with a real tone mapping filter, but I hope you find this trick useful in some way and that it helps make your work a little bit better


  3. #3
    Michael Nicholson (zapper1998) — joined Feb 2003, Spokane, Washington, USA
    wow thank you

  4. #4
    Registered User — joined Jan 2013
    ALL pictures gone! What a pity!
    Is Gerardo still here?

  5. #5

    Unfortunately many of the pictures he uploaded are gone, yes.

    + hope to see some Tone-mapping news in LW2020

  6. #6
    Chris S. (Fez) — joined Feb 2003
    Needs to be totally integrated like Corona.

