HDR question...



Integrity
11-22-2004, 03:04 PM
I know in truth this has nothing to do with Lightwave, but I was wondering how HDRIs are actually encoded. I know that when you sample a pixel in the Image Viewer it shows HDRI values as percentages, and when you use them in LScript they are decimal values in the 0-1 range. But is the actual data stored as 32-bit integer values (0-4294967295), or is it really decimal (floating point)?

When I render a quick scene with a ball of Luminosity 1000000 (or some crazy amount) against a black background, the sampled value in the render is that same value. I'm assuming the highest value is what gets recorded at the format's maximum, to keep the dynamic range within the image (so a Luminosity of 1000000 would be recorded as 4294967295). Is the white point always at a default value of 1.0 (when using the HDR Exposure plugin within Lightwave)? And if it's the highest value, not the white point, that gets stored at the maximum 32 bits allow, then when I create higher dynamic ranges (say, a Luminosity of 10 to the 9th power), won't the precision of the stored data drop, so that specific values are no longer represented accurately?
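To make the precision part of that question concrete, here's a rough sketch in plain Python (not Lightwave or LScript code, just an illustration of how 32-bit floating-point storage behaves): the relative precision stays about the same at any magnitude, so huge values get stored approximately rather than exactly.

import numpy as np

# Round-trip some large "luminance" values through 32-bit floats.
# float32 keeps roughly 7 significant decimal digits, so the absolute
# step between representable values grows with the magnitude.
for value in [1.0, 1000.0, 1e6, 1e9]:
    stored = np.float32(value)
    step = np.spacing(stored)  # gap to the next representable float32
    print(f"value={value:g}  stored={float(stored):g}  step={float(step):g}")

At a magnitude of 10 to the 9th the step works out to about 64, so if the data really is stored as 32-bit floats, that's roughly the accuracy I'd expect for individual values.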

I also don't get the reason for the two different names (LDR and HDR). LDRIs can record the same dynamic range, just at lower quality (only 256 levels). Shouldn't it really be called the resolution of the luminance, or something like that?

I've already worked out answers to all of these questions by testing, but I haven't tried everything, and I'd like to fully understand the HDR format before I go playing with the values.

Thank you.

KeenanCrane
11-22-2004, 08:18 PM
The answer is that there are a bunch of HDR formats, all of which work differently. Here's a page with a brief overview: http://luminance.londonmet.ac.uk/webhdr/formats.shtml
There isn't really a single standard, since each format maps well to a certain domain. Paul Debevec's light probe images use the RADIANCE format, and I think that's the same format Lightwave supports. OpenEXR seems to be a good choice for hardware-accelerated applications, since it's compatible with NVIDIA's 16-bit floating-point format.
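If you want to see concretely how one of these formats stores a pixel, here's a minimal sketch (plain Python, just to illustrate the idea; it's not Lightwave's or Radiance's actual code) of the shared-exponent RGBE encoding used by the RADIANCE .hdr format: three 8-bit mantissas sharing a single 8-bit exponent per pixel, rather than a full float per channel.

import math

def float_to_rgbe(r, g, b):
    # Pack three channel values into 4 bytes: 8-bit mantissas plus a shared exponent.
    brightest = max(r, g, b)
    if brightest < 1e-32:
        return (0, 0, 0, 0)
    # frexp returns brightest = m * 2**e with m in [0.5, 1).
    m, e = math.frexp(brightest)
    scale = m * 256.0 / brightest
    return (int(r * scale), int(g * scale), int(b * scale), e + 128)

def rgbe_to_float(r, g, b, e):
    # Undo the packing (only approximately -- the mantissas were quantized to 8 bits).
    if e == 0:
        return (0.0, 0.0, 0.0)
    scale = math.ldexp(1.0, e - (128 + 8))
    return (r * scale, g * scale, b * scale)

pixel = float_to_rgbe(1000000.0, 0.5, 0.001)
print(pixel)                  # (244, 0, 0, 148)
print(rgbe_to_float(*pixel))  # (999424.0, 0.0, 0.0)

So a channel value of 1000000 comes back as roughly 999424: the range such a format can cover is huge, but each pixel only carries about 8 bits of precision, which is the range-versus-resolution distinction from the original question. OpenEXR makes a similar trade with a different layout, storing a 16-bit "half" float per channel (1 sign, 5 exponent and 10 mantissa bits).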