
View Full Version : What is correct .hdr wrap for image world?



slimbolina
05-15-2005, 03:43 PM
Hi folks,

I'm a LW newbie (v8.3), but I'm struggling to find the answer to this query. I hope someone can help me, as I'm learning to composite 3D renders onto 2D photos directly in LW. It goes like this...

HDR Shop offers several transformations for .hdr images: mirror ball, light probe (angular), latitude/longitude, etc. Which one gives the correct wrapping when used with Image World, please? Does IW recognise the format automatically, or do I have to stick with a particular one? I read somewhere that 'fish-eye' is what's needed, so HDR Shop might not be fully capable in this respect. Can anyone claim to be an authority on this subject?

Thanks in advance,
Slim

Captain Obvious
05-15-2005, 03:56 PM
Image world uses light probes, I think.

slimbolina
05-16-2005, 04:31 AM
Thanks Captain,

If anyone knows different for sure, please let me know. I might try to confirm this by doing some experimental renders and zooming in close!

Slim :o

Captain Obvious
05-16-2005, 09:52 AM
I normally use light probes with it and it works as it should... I've never tried with anything else, so I don't know if it works. :p

gerardstrada
05-28-2005, 08:58 PM
Thanks Captain,

If anyone knows different for sure, please let me know. I might try to confirm this by doing some experimental renders and zooming in close!

Slim :o


Yes, Captain Obvious is right. Angular map is the way to go. However, to obtain a 100% accurate result, it's not enough to use a simple light probe (180°x360°) from a chrome ball. Image World was made to map light probe images created by taking two photographs of a mirrored ball at ninety degrees of separation and assembling the two radiance maps, so that the image represents the complete 360°x360° environment. With this type of light probe, Image World works perfectly.
If you have a light probe taken with just one shot of a chrome ball (180°x360°), it is advisable to transform the panorama to latitude/longitude and use Textured Environment (Spherical Map); this way we can recover a wider image range (270°x360°) :)
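To make the latitude/longitude target concrete, here is a rough sketch of that parameterization: azimuth maps linearly to the horizontal axis, the polar angle to the vertical axis. The helper name and the axis conventions (y up, -z straight ahead) are illustrative assumptions, not anything from LW or HDR Shop:

```python
import math

def direction_to_latlong(d, width, height):
    """Map a unit direction vector to pixel coordinates in a
    latitude/longitude panorama (assumed conventions: y up, -z forward)."""
    x, y, z = d
    theta = math.atan2(x, -z)                     # azimuth: 0 straight ahead
    phi = math.acos(max(-1.0, min(1.0, y)))       # polar angle: 0 up, pi down
    u = (theta / (2.0 * math.pi) + 0.5) * width   # 360 deg across the width
    v = (phi / math.pi) * height                  # 180 deg down the height
    return u, v
```

Straight ahead lands in the middle of the panorama, straight up on the top row, which is exactly why a full 360°x360° environment fits this format with no seams except at the back.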



Gerardo

Thomas M.
05-29-2005, 02:35 AM
Just transform your self-made .hdr (mirrored ball) into longitude/latitude. Don't use Image World!!! Use Textured Environment with spherical mapping. That's all there is to know. Going this way is 100% accurate.

Cheers
Thomas

gerardstrada
05-29-2005, 06:33 PM
If you have a .hdr (mirrored ball) taken from only one angle and you transform it to latitude/longitude, you won't obtain a 100% accurate map in any format, since you won't obtain 360°H x 360°V.
Image World perfectly maps light probes created by taking two photographs of a mirrored ball at ninety degrees of separation.
This doesn't mean that a .hdr taken from one angle can't serve to integrate CG elements with live-action plates (using Image World or Spherical Map); depending on the environment, the illumination and the materials of our surfaces, it is also possible to obtain good results with these simple light probes :)



Gerardo

Thomas M.
05-30-2005, 05:16 AM
So how do you generate this format using two hdr images? My point is that Image World doesn't recreate your original environment in a correct way. Generate an hdr image from a mirrored ball in LW (e.g. from a SkyTracer background) and use it afterwards as an environment in Image World or Textured Environment. The only way to get exactly the same image (now the environment gets reflected in the mirrored ball, not the SkyTracer background) is to use Textured Environment, spherical mapping and longitude/latitude. Perhaps you can describe how you stitch the two hdrs together so that I can give it another try with Image World. It might change something.

Cheers
Thomas

gerardstrada
05-30-2005, 12:38 PM
So how do you generate this format using two hdr images? My point is that Image World doesn't recreate your original environment in a correct way.


I understand your point, Thomas. What you describe happens because the HDRIs you are using haven't been taken from two angles, only from the front of the chrome ball.
Taking two pictures of a mirrored ball at ninety degrees of separation and assembling the two radiance maps doesn't only ensure that Image World maps the panorama perfectly; it also removes the photographer from the image. So any HDRI where the photographer doesn't appear works well with Image World. This is because Image World uses a coordinate system that is explained on Debevec's site this way:
"the center of the image is straight forward, the circumference of the image is straight backwards, and the horizontal line through the center linearly maps azimuthal angle to pixel coordinate"
understanding the azimuthal angle as the angle formed with the north-south direction.
This means that if you only take the picture from one angle, you will lose the azimuthal information and the part of the image that faces straight backwards.
Theoretically, if we use this image in Image World, we only preserve 180° vertically; but in my experience, if we convert this mirrored ball to longitude/latitude, we can recover up to 270° of the image and use it with a Spherical Map. Even this way, however, we don't have a complete panorama.
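The angular-map convention quoted above can be sketched in code like this (a rough illustration only; it assumes -z is straight forward and that the image coordinates u, v are normalized to [-1, 1] with (0, 0) at the center):

```python
import math

def angular_to_direction(u, v):
    """Map normalized angular-map (light probe) coordinates to a world
    direction, following the convention: image center = straight forward,
    circumference = straight backwards. Assumed axes: y up, -z forward."""
    r = math.hypot(u, v)          # radial distance from the image center
    if r > 1.0:
        return None               # outside the circular probe image
    phi = math.pi * r             # angle from forward: 0 at center, pi at rim
    theta = math.atan2(v, u)      # azimuth around the forward axis
    return (math.sin(phi) * math.cos(theta),
            math.sin(phi) * math.sin(theta),
            -math.cos(phi))
```

Because the angle from forward grows linearly with the radius, the backward hemisphere occupies the outer ring of the image; that's the region a single front-on chrome-ball shot can't supply, which is why the two-shot probes map cleanly and the single-shot ones don't.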




Generate an hdr image from a mirrored ball in LW (e.g. from a SkyTracer background) and use it afterwards as an environment in Image World or Textured Environment. The only way to get exactly the same image (now the environment gets reflected in the mirrored ball, not the SkyTracer background) is to use Textured Environment, spherical mapping and longitude/latitude. Perhaps you can describe how you stitch the two hdrs together so that I can give it another try with Image World. It might change something.

Cheers
Thomas


If you want to capture 360°V x 360°H of a LW scene with a single shot of a reflective ball, you won't achieve it, since you will have the same problem mentioned above.
It is better to use Enviro from Worley Labs or JJ's Special_Projection (by Juan José González).
You can also use the same technique to create light probes from two photographs. I know two techniques for this: one with Panotools in Photoshop and the other with HDRShop.
Personally (before Photoshop supported 32 bits) I preferred to do it with HDRShop, since doing it in Photoshop meant assembling each image at different exposures. The HDRShop technique is a little lengthy, so I suggest you check HDRShop's site:

http://www.ict.usc.edu/graphics/HDRShop/tutorial/tutorial5.html

It is explained in detail there, but if you have any questions just let me know :)



Gerardo