
View Full Version : EXR 2.0 support



Funke_Maximilia
08-18-2012, 10:52 AM
Please make LightWave 12 able to render to the EXR 2.0 image format, which contains deep data for deep compositing. That would be great, and it would make me like the LW renderer even more than I already do.

nickdigital
08-18-2012, 04:22 PM
Be interesting to see what Lightwolf has to say about this.

xchrisx
08-18-2012, 05:54 PM
Lightwolf already talked briefly about this on another thread:
http://forums.newtek.com/showthread.php?p=1254636#post1254636

Lightwolf
08-19-2012, 07:07 AM
Be interesting to see what Lightwolf has to say about this.
Not that much for the moment. I am in the process of writing a proof-of-concept exporter, which should work with a few caveats (one being the lack of volumetric support - both because of the lack of SDK access and because of how some volumetric plugins work).

Files will be massive, and essentially there's only Nuke to try it out with.
OpenEXR 2.0 at least provides a standardised file format, but even then the logical layout of the buffers within the file (i.e. channel naming) doesn't seem to be rigidly standardised.

An interesting topic but still early days... not just for LW but for just about anything else that's publicly available.

This will take a lot longer to be adopted than the original OpenEXR did (it took Eyeon 24 hours to add support to Fusion after the first release of OpenEXR), because the changes go way beyond just reading and writing a file.

Cheers,
Mike
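
For reference, here is a minimal sketch of what declaring and writing deep channels looks like with the OpenEXR 2.0 C++ library. This is illustrative only, not the exporter discussed in this thread; the "R/G/B/A/Z" channel names, the tiny 4x4 image and the constant sample values are assumptions made for the example, since (as noted above) the format itself doesn't mandate a particular channel layout.

#include <ImfDeepScanLineOutputFile.h>
#include <ImfDeepFrameBuffer.h>
#include <ImfChannelList.h>
#include <ImfCompression.h>
#include <ImfPartType.h>
#include <ImfHeader.h>
#include <half.h>
#include <vector>

using namespace Imf;

int main()
{
    const int width = 4, height = 4;

    // Declare the deep part; the channel names follow common convention only.
    Header header(width, height);
    header.channels().insert("R", Channel(HALF));
    header.channels().insert("G", Channel(HALF));
    header.channels().insert("B", Channel(HALF));
    header.channels().insert("A", Channel(HALF));
    header.channels().insert("Z", Channel(FLOAT));
    header.setType(DEEPSCANLINE);            // this part holds deep scanline data
    header.compression() = ZIPS_COMPRESSION; // per-scanline zip compression

    // One sample per pixel in this toy example; real renders vary per pixel.
    std::vector<unsigned int> counts(width * height, 1);
    std::vector<half>  r(width * height, half(0.5f)), g(r), b(r),
                       a(width * height, half(1.0f));
    std::vector<float> z(width * height, 10.0f);

    // Deep slices expect, per pixel, a pointer to that pixel's sample array.
    std::vector<half*>  rP(width * height), gP(width * height),
                        bP(width * height), aP(width * height);
    std::vector<float*> zP(width * height);
    for (int i = 0; i < width * height; ++i)
    {
        rP[i] = &r[i]; gP[i] = &g[i]; bP[i] = &b[i]; aP[i] = &a[i];
        zP[i] = &z[i];
    }

    // Strides assume the data window starts at (0,0), as set up above.
    DeepFrameBuffer fb;
    fb.insertSampleCountSlice(Slice(UINT, (char*)counts.data(),
                                    sizeof(unsigned int),
                                    sizeof(unsigned int) * width));
    fb.insert("R", DeepSlice(HALF,  (char*)rP.data(), sizeof(half*),
                             sizeof(half*) * width, sizeof(half)));
    fb.insert("G", DeepSlice(HALF,  (char*)gP.data(), sizeof(half*),
                             sizeof(half*) * width, sizeof(half)));
    fb.insert("B", DeepSlice(HALF,  (char*)bP.data(), sizeof(half*),
                             sizeof(half*) * width, sizeof(half)));
    fb.insert("A", DeepSlice(HALF,  (char*)aP.data(), sizeof(half*),
                             sizeof(half*) * width, sizeof(half)));
    fb.insert("Z", DeepSlice(FLOAT, (char*)zP.data(), sizeof(float*),
                             sizeof(float*) * width, sizeof(float)));

    DeepScanLineOutputFile file("deep_test.exr", header);
    file.setFrameBuffer(fb);
    file.writePixels(height);
    return 0;
}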

Lightwolf
10-09-2012, 04:15 AM
Just to bump the thread... I believe we've just written out the first deep OpenEXR 2.0 image from LightWave 3D ever.

So far so good (this is using a simple proof of concept plugin).

Now comes the fun part... finding a publicly available compositor that actually supports them. Nope, Nuke doesn't (until the 7.x release). :)

Fun, but these are surely very early days.

Cheers,
Mike

MrWyatt
10-09-2012, 01:25 PM
Just to bump the thread... I believe we've just written out the first deep OpenEXR 2.0 image from LightWave 3D ever.

So far so good (this is using a simple proof of concept plugin).

Now comes the fun part... finding a publicly available compositor that actually supports them. Nope, Nuke doesn't (until the 7.x release). :)

Fun, but these are surely very early days.

Cheers,
Mike

show us Mike.
;)

Lightwolf
10-09-2012, 01:29 PM
show us Mike.
;)
108451
This is a screenshot of the viewer, displaying every sample as an OpenGL point positioned in space. I rotated the view to make the depth clearer.

However, I just found an issue with either LW or my proof-of-concept plugin: it doesn't seem to register more than one sample coming from LW (but it should, I'm stumped).

Cheers,
Mike

MrWyatt
10-09-2012, 01:41 PM
108451
This is a screenshot of the viewer, displaying every sample as an OpenGL point positioned in space. I rotated the view to make the depth clearer.

However, I just found an issue with either LW or my proof-of-concept plugin: it doesn't seem to register more than one sample coming from LW (but it should, I'm stumped).

Cheers,
Mike

As I understand it, deep images should also contain samples from hidden/occluded/backfacing polygons, or did I misunderstand the tech? Shouldn't there literally be a sample per ray intersection in depth?

Lightwolf
10-09-2012, 01:54 PM
As I understand it, deep images should also contain samples from hidden/occluded/backfacing polygons, or did I misunderstand the tech? Shouldn't there literally be a sample per ray intersection in depth?
They can, but they don't necessarily need to. Even without that they're still potentially useful (with the right support in the compositor), since they allow you to work around all AA-related compositing issues, e.g. ID buffers, halos due to image processing using the depth buffer, etc.

Cheers,
Mike

Cageman
10-09-2012, 02:35 PM
Wow!! This feels like a winner already!! :)

Lightwolf
10-09-2012, 03:45 PM
Wow!! This feels like a winner already!! :)
It is just at the proof-of-concept stage at the moment: is it possible or not, and what are the limitations? Next would be a prototype, and then maybe a product or (major) update to exrTrader.

A long way to go.

Cheers,
Mike

Lightwolf
10-09-2012, 04:25 PM
Just to keep this updated... I've fixed an issue that I had with the saver.

As an idea of how large the files will be:
As seen earlier, I've rendered the Mini Cooper scene.
1920x1080, 4-16 AA samples (unified sampling + adaptive sampling), RGBA (as 16-bit float) plus Z (32-bit float) and a mandatory 32-bit per-pixel sample count: 98MB using the ZIPS compression scheme.
I'd consider this the best case for production-type work at 1080p per frame (as in: I doubt the frames would get smaller, but they can surely get a lot bigger).

My OpenGL viewer, using a non-optimised memory structure, needs about 1.4GB to display that image (but I'm sure that can be tweaked down).

The image has around 2 million pixels made up of 20 million individual samples. If I haven't screwed up my calculation, then the uncompressed, raw memory required just for the image should be around 320MB (8 bytes of colour per sample, 4 bytes depth, 4 bytes count = 16 bytes per sample, multiplied by 20 million) - EXR seems to compress quite effectively.

Oh, it's also fewer samples due to the adaptive sampling. If it were, say, a straight AA value of 16, it'd be around 32 million samples.

Just because I love to throw a few numbers around... but it does give you a sense of what's coming in the next couple of years. ;)

Cheers,
Mike
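
To put those numbers in code form, here is a quick back-of-the-envelope calculation using the per-sample layout described above. The figures are simply the ones quoted in this thread, not new measurements, and the buffer names are made up for the sketch.

#include <cstdio>

int main()
{
    // Figures quoted above: ~2 million pixels, ~20 million deep samples.
    const long long pixels  = 1920LL * 1080LL;
    const long long samples = 20LL * 1000 * 1000;

    // Layout described above: RGBA as 16-bit half, Z as 32-bit float,
    // plus a 32-bit count (counted per sample here, as in the estimate).
    const long long bytesPerSample = 4 * 2 + 4 + 4;   // = 16 bytes

    const long long rawBytes = bytesPerSample * samples;
    std::printf("raw sample data: ~%lld MB\n", rawBytes / (1000 * 1000)); // ~320 MB

    // A viewer that also keeps a per-pixel pointer table per channel, plus
    // general allocator overhead, can easily need several times this amount
    // (the non-optimised viewer above needs about 1.4GB for the same image).
    const long long pointerTables = 5LL * sizeof(void*) * pixels;   // R,G,B,A,Z
    std::printf("pointer tables alone: ~%lld MB\n", pointerTables / (1000 * 1000));
    return 0;
}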

Lightwolf
10-10-2012, 08:16 AM
108475
This is probably the last update for some time.

Two interesting facts about the image:
It's rendered over a blank backdrop - samples for the backdrop are not stored at all, since EXR 2.0 supports pixels with zero samples.
The "stretching" is due to photoreal motion blur. I suspect that more information would need to be a) exposed by LW and b) used by the compositor to reconstruct that properly.

Cheers,
Mike

Lightwolf
10-15-2012, 08:03 AM
For those of you not on Facebook, here are a couple of new tests:

108542
Here's a new deep image test, using the teapot sample scene.
It looks like I misinterpreted some of the documentation on what is saved in deep images. It's a lot less (but it also requires a bit more preparation when exporting).
This file is roughly 9MB, with 1-42 samples per pixel (and I think there's still potential to tweak that).
108543
This is the deep EXR 2.0 teapot scene composited entirely from the samples in the image file.
I've added that functionality to our image viewer to validate the exported images.
108544
This image displays the number of samples stored per pixel for the teapot scene. It is mainly the areas where objects overlap that get more samples.
It also seems like the saver needs to be more careful: looking at the shadowed areas underneath the bowls, these should require fewer samples as well - the saver needs a little more tweaking.

Cheers,
Mike
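
As an aside, the "composited entirely from the samples" step is conceptually simple. Here is a small sketch of how a flat pixel can be rebuilt purely from its deep samples; the struct and function names are made up for illustration, and the colour values are assumed to be premultiplied by alpha.

#include <algorithm>
#include <cstdio>
#include <vector>

// A deep pixel is just a list of samples that can be sorted by depth.
struct DeepSample { float r, g, b, a, z; };  // premultiplied colour, alpha, depth
struct FlatPixel  { float r = 0, g = 0, b = 0, a = 0; };

FlatPixel flatten(std::vector<DeepSample> samples)
{
    // Nearest samples first...
    std::sort(samples.begin(), samples.end(),
              [](const DeepSample& p, const DeepSample& q) { return p.z < q.z; });

    // ...then a standard front-to-back "over": each sample only contributes
    // through whatever transparency is left in front of it.
    FlatPixel out;
    for (const DeepSample& s : samples)
    {
        const float remaining = 1.0f - out.a;
        out.r += remaining * s.r;
        out.g += remaining * s.g;
        out.b += remaining * s.b;
        out.a += remaining * s.a;
    }
    return out;
}

int main()
{
    // A red surface at z=2 seen through a 50% grey surface at z=1.
    std::vector<DeepSample> pixel = { { 1.0f, 0.0f, 0.0f, 1.0f, 2.0f },
                                      { 0.25f, 0.25f, 0.25f, 0.5f, 1.0f } };
    FlatPixel p = flatten(pixel);
    std::printf("flattened: %.2f %.2f %.2f %.2f\n", p.r, p.g, p.b, p.a);
    return 0;
}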

Lightwolf
10-17-2012, 03:53 AM
Three more tests: isolating elements based on the object ID, and highlighting an element based on the object ID.
Notice the anti-aliased edges.
The OpenEXR image is 5.3MB on disk at 720p, now including an ID buffer as well. There are up to 7 "slices" per pixel in the image (I refuse to call them samples as the EXR docs do, because it leads to confusion - certainly in my case).
108563 108564 108565

Cheers,
Mike
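
For anyone wondering how the ID-based isolation can keep those anti-aliased edges intact, here is a rough sketch of the idea. The struct and function names are made up for illustration; the point is that samples from other objects still occlude (act as a holdout) but contribute no colour.

#include <algorithm>
#include <cstdio>
#include <vector>

// Illustrative only: a deep sample carrying an object id as an extra channel.
struct IdSample { float r, g, b, a, z; int id; };
struct Rgba     { float r = 0, g = 0, b = 0, a = 0; };

Rgba isolate(std::vector<IdSample> samples, int wantedId)
{
    std::sort(samples.begin(), samples.end(),
              [](const IdSample& p, const IdSample& q) { return p.z < q.z; });

    Rgba  out;
    float transmission = 1.0f;                   // light still reaching the eye
    for (const IdSample& s : samples)
    {
        if (s.id == wantedId)
        {
            out.r += transmission * s.r;         // selected object adds colour
            out.g += transmission * s.g;
            out.b += transmission * s.b;
            out.a += transmission * s.a;
        }
        transmission *= 1.0f - s.a;              // every sample still occludes, so
    }                                            // partially covered (AA) edge
    return out;                                  // pixels stay correct
}

int main()
{
    // Object 2 sits behind a half-transparent sliver of object 1.
    std::vector<IdSample> pixel = { { 0.5f, 0.5f, 0.5f, 0.5f, 1.0f, 1 },
                                    { 0.0f, 1.0f, 0.0f, 1.0f, 2.0f, 2 } };
    Rgba m = isolate(pixel, 2);
    std::printf("isolated object 2: %.2f %.2f %.2f %.2f\n", m.r, m.g, m.b, m.a);
    return 0;
}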

Cageman
10-17-2012, 12:29 PM
Cool! :thumbsup:

Cageman
10-17-2012, 12:34 PM
They can, but they don't necessarily need to. Even without that they're still potentially useful (with the right support in the compositor), since they allow you to work around all AA-related compositing issues, e.g. ID buffers, halos due to image processing using the depth buffer, etc.

Cheers,
Mike

Also, from what I understood from our conversation today, allowing a raytracer to sample unseen geometry (other than for reflection bounces etc.) more or less removes the optimisations such engines have, making them pretty much moot for heavy-duty raytracing if they also have to trace all this additional geometry?

As such, a scanline renderer could be used for generating the point clouds, and a raytracer for the actual rendering of the various shading models (pretty much how we do it nowadays, so to speak)?

Lightwolf
10-17-2012, 01:08 PM
Also, from what I understood from our conversation today, allowing a raytracer to sample unseen geometry (other than for reflection bounces etc.) more or less removes the optimisations such engines have, making them pretty much moot for heavy-duty raytracing if they also have to trace all this additional geometry?

As such, a scanline renderer could be used for generating the point clouds, and a raytracer for the actual rendering of the various shading models (pretty much how we do it nowadays, so to speak)?
Well... the idea of a raytracer is to only compute what is visible - tracing rays is otherwise too expensive.
Even scanline renderers sort (opaque) polygons front to back before they get rendered, so as not to compute what isn't directly visible. This is what the classic camera does as well (even though it's not quite a scanline renderer, it's close enough).

I actually don't know of a single renderer that would compute everything, especially not when it comes to production renderers.

However, you can still render different objects in passes, but you won't need matte objects because the depth merge will be perfect. That's not taking "secondary" rendering, such as refraction or reflection, into account, of course.

The only case where the full "depth" of a scene is preserved is volume rendering: density clouds that are raymarched and have no well-defined surface.

Cheers,
Mike
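
As a footnote on that "perfect depth merge": conceptually, two passes rendered separately can be combined per pixel simply by pooling their deep samples and flattening the union front to back, with no matte objects involved. A rough sketch follows; the names are illustrative rather than from any particular tool, and real deep merges also have to deal with samples that overlap in depth (e.g. from volumes).

#include <algorithm>
#include <vector>

struct Sample { float r, g, b, a, z; };      // premultiplied colour, alpha, depth

// Merge the deep samples of one pixel from two separately rendered passes.
// The result can then be flattened with the usual front-to-back "over".
std::vector<Sample> deepMerge(const std::vector<Sample>& passA,
                              const std::vector<Sample>& passB)
{
    std::vector<Sample> merged(passA);
    merged.insert(merged.end(), passB.begin(), passB.end());
    std::sort(merged.begin(), merged.end(),
              [](const Sample& p, const Sample& q) { return p.z < q.z; });
    return merged;
}

int main()
{
    std::vector<Sample> a = { { 1, 0, 0, 1, 5.0f } };   // red object at z=5
    std::vector<Sample> b = { { 0, 0, 1, 1, 2.0f } };   // blue object at z=2
    std::vector<Sample> merged = deepMerge(a, b);       // blue correctly in front
    return merged.size() == 2 ? 0 : 1;
}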