
I wish there was an Envelope for Stereo Eye Separation



h2oStudios
11-18-2009, 04:27 PM
I mean how simple is that? Just an envelope, so I can animate my Eye Separation when doing Stereoscopic Animation.

JMarc
11-19-2009, 09:43 AM
Agreed. I'm sure you are capable of the usual workarounds but to save you the trouble, I have attached my Stereo3D camera rig scene for you. Try it if you like. Unlike the built-in stereo3d setup you do have to render the Left and Right Eye cameras separately, but you can animate the interocular controller in my rig to get the effect you want. My rig is set up as converged, so if you require a parallel setup just clear the convergence point null.

I really hope LW Core has some seriously pro-level stereo 3d tools.

Best,

h2oStudios
11-20-2009, 11:02 AM
Pretty cool. Yeah, it would be great to have some serious Stereo Tools coming along.

Hieron
11-20-2009, 11:04 AM
Don't you run into issues all the time, using converged?

JMarc
11-20-2009, 11:14 AM
There can be some vertical misalignment issues but we prefer to use converged because it gives us a good head start with regard to where the screen-plane is. Everything between the camera and the Convergence point appears to be outside, floating off of the screen and everything beyond the Convergence Point appears to be within the screen window. It can all be edited in post but we like that we don't have to push convergence as far when it is shot/rendered converged.
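
For anyone trying to picture the geometry, here is a rough sketch in plain Python (not LightWave-specific; the function names and numbers are just illustrative) of how much toe-in a converged pair needs and how the convergence distance splits the scene into off-screen and in-screen depth:

import math

# Sketch of converged-rig geometry: each eye camera toes in toward the
# convergence null; objects nearer than the null read as in front of the screen.
def toe_in_angle_deg(interaxial, convergence_distance):
    """Heading each eye camera rotates toward the convergence null."""
    return math.degrees(math.atan((interaxial / 2.0) / convergence_distance))

def screen_placement(object_distance, convergence_distance):
    """Where an object appears relative to the screen plane."""
    if object_distance < convergence_distance:
        return "in front of the screen (floating off the screen)"
    if object_distance > convergence_distance:
        return "behind the screen (within the screen window)"
    return "on the screen plane"

# Example: 65 mm interaxial, convergence null 2 m away.
print(toe_in_angle_deg(0.065, 2.0))   # ~0.93 degrees of toe-in per eye
print(screen_placement(1.0, 2.0))     # in front of the screen
print(screen_placement(5.0, 2.0))     # behind the screen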

Hieron
11-20-2009, 04:19 PM
hmk...

I run into convergence issues *real soon* when using toe in..
It's not very handy that you can't see the result when working though..

I tried some weird double plane projection wackiness once to see if I could get 2 camera views projected onto 2 planes and displace them. All in order to get a good view in Lightwave itself. It would work, if you could get the result of 2 cameras at the same time. Don't think LW allows that, sadly.

JMarc
11-23-2009, 10:10 AM
Being able to view a Stereoscopic display live in the OpenGL viewport is a must for an efficient stereo workflow. Even an anaglyghic (red/blue or whichever you can) display proves very useful. Fusion has it, Maya has it. Even After Effects can do it with a native effect. Once you have worked that way it is hard to go back.

My hopes are high for LW Core to have this. I doubt NT would add this to LW 9.6 or HC at this point.
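
To show what an anaglyph preview boils down to, here is a minimal post-process sketch in Python with NumPy and Pillow, assuming two already-rendered eye frames (the file names are made up, and this is not the LightWave anaglyph filter itself, just the channel-mixing idea):

import numpy as np
from PIL import Image

# Load separately rendered left/right frames (hypothetical file names).
left = np.asarray(Image.open("left_eye.png").convert("RGB"), dtype=np.uint8)
right = np.asarray(Image.open("right_eye.png").convert("RGB"), dtype=np.uint8)

# Red channel from the left eye, green/blue (cyan) from the right eye.
anaglyph = np.empty_like(left)
anaglyph[..., 0] = left[..., 0]
anaglyph[..., 1:] = right[..., 1:]

Image.fromarray(anaglyph).save("anaglyph_preview.png")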

Hieron
11-23-2009, 03:58 PM
totally agree. And Stereoscopic 3D seems to be rising fast atm, some support for it would be greatly appreciated.

JMarc
11-24-2009, 07:55 AM
Being able to view a Stereoscopic display live in the OpenGL viewport is a must for an efficient stereo workflow. Even an anaglyghic (red/blue or whichever you can) display proves very useful. Fusion has it, Maya has it. Even After Effects can do it with a native effect. Once you have worked that way it is hard to go back.

My hopes are high for LW Core to have this. I doubt NT would add this to LW 9.6 or HC at this point.

That should have read "anaglyphic", of course. Dang time limit on edits.

jrandom
01-23-2010, 07:02 PM
I have attached my Stereo3D camera rig scene for you.

Can you help me understand how this setup was achieved? I tried to create something like this on my own, but I don't know enough about LightWave yet to pull it off. Can you point me in the right direction documentation-wise so that I can learn how this works?

jrandom
01-24-2010, 11:20 AM
That sample stereo rig... The anaglyph filter is somehow tied in with the plane-of-convergence object! How did you pull that off?

JMarc
01-25-2010, 08:41 AM
The 2 cameras in my rig are both targeted to a Null object which defines the convergence plane, or the screen plane. You can set this up by selecting a camera, pressing the "m" key on your keyboard to bring up the Motion Panel for that camera and then clicking the "Target Item" dropdown menu near the top to select the target. Next click the "Rotation" tab and change the H, P & B Controllers to "Point at Target" for the axes that you want to automatically target the null, or any other object in the scene.

Does that help?
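
As a rough illustration of what "Point at Target" is doing under the hood, here is the heading/pitch math in plain Python, assuming LightWave-style Y-up rotations (this is only a sketch of the relationship, not an actual LScript or SDK call):

import math

def point_at(camera_pos, target_pos):
    # Direction from camera to target.
    dx = target_pos[0] - camera_pos[0]
    dy = target_pos[1] - camera_pos[1]
    dz = target_pos[2] - camera_pos[2]
    heading = math.degrees(math.atan2(dx, dz))                 # turn around Y
    pitch = math.degrees(-math.atan2(dy, math.hypot(dx, dz)))  # tilt around X
    return heading, pitch

# Left eye camera at x = -0.0325 aimed at a convergence null 2 m down +Z:
print(point_at((-0.0325, 0.0, 0.0), (0.0, 0.0, 2.0)))  # small positive heading, zero pitch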

jrandom
01-25-2010, 10:41 AM
The 2 cameras in my rig are both targeted to a Null object which defines the convergence plane, or the screen plane. You can set this up by selecting a camera, pressing the "m" key on your keyboard to bring up the Motion Panel for that camera and then clicking the "Target Item" dropdown menu near the top to select the target. Next click the "Rotation" tab and change the H, P & B Controllers to "Point at Target" for the axes that you want to automatically target the null, or any other object in the scene.

Does that help?

A bit, yes. Once I discovered the motion panel a lot more became clear.

One difference I noticed when doing anaglyph previews with the middle camera is that, unlike my original tests, there is a plane of convergence now. When using an untargeted camera I get stereoscopic 3D that emerges from the screen towards the viewer, but when using the targeted center camera from the posted file I get a plane of convergence halfway between the camera and the target (giving me both into-the-screen and out-of-the-screen depth).

Because the POC for the anaglyph preview is exactly half the distance to the camera target, I'm now trying to figure out how to place a null object that always sits at 50% of the distance to the original camera target -- this would let me point the left and right cameras at that automatically-in-the-middle object so my anaglyph preview and left/right camera renders will share the same POC.

Also, is there a way to tie the center camera's stereo eye separation to the interocular object's scaling? That would let me keep the preview and left/right eye distances in sync.
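
A quick sketch of that 50%-distance relationship in plain Python (vector math only; in LightWave this would presumably be an expression or a parenting setup, and the names here are made up):

def midpoint(camera_pos, target_pos, fraction=0.5):
    # Point a given fraction of the way from the camera to its target.
    return tuple(c + fraction * (t - c) for c, t in zip(camera_pos, target_pos))

center_cam = (0.0, 1.6, 0.0)
center_target = (0.0, 1.6, 4.0)
poc_null = midpoint(center_cam, center_target)  # (0.0, 1.6, 2.0)
print(poc_null)  # aim the left/right cameras here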

JMarc
01-25-2010, 11:01 AM
The middle camera in my rig was intended to be used for central framing. The left and right cameras are the ones that should be rendered separately once you have the framing you like and the convergence and interocular distance set up the way you want.

When viewing the left/right renders in stereo 3D, any object between the camera rig and the convergence null will appear to be outside of the screen. Any object further past the convergence null will appear to be inside of the screen.

As for interocular settings... I don't think you can automate the built-in stereo 3d Interocular or tie it in with another animated channel, because it is not animatable by default so there is no channel to link with.
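
Roughly how an animatable interocular controller works around that, sketched in plain Python: key the controller null's scale (or any spare channel) and derive the two eye-camera X offsets from it each frame. The channel and function names are made up; only the relationship is the point:

def eye_offsets(controller_scale_x, base_interocular=0.065):
    # Half the scaled interocular to each side, in rig-local space.
    half = (base_interocular * controller_scale_x) / 2.0
    return -half, +half  # left camera X, right camera X

print(eye_offsets(1.0))  # (-0.0325, 0.0325) - roughly human spacing
print(eye_offsets(2.0))  # (-0.065, 0.065)   - doubled for a stronger effect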

jrandom
01-25-2010, 12:31 PM
The middle camera in my rig was intended to be used for central framing. The left and right cameras are the ones that should be rendered separately once you have the framing you like and the convergence and interocular distance set up the way you want.

Yes, but it's nice to be able to do single-frame preview renders with depth (red/cyan anaglyph) right in Lightwave w/out having to perform two separate renders plus combining them in photoshop/fcp.



When viewing the left/right renders in stereo 3D, any object between the camera rig and the convergence null will appear to be outside of the screen. Any object further past the convergence null will appear to be inside of the screen.

I actually figured this out just by thinking about it! :) I was wondering why very far objects seemed to have a large left/right divergence when in real life very distant objects converge. Once I realized that I'm also constantly changing my own POC via my eyeballs everything started to fall into place.



As for interocular settings... I don't think you can automate the built-in stereo 3d Interocular or tie it in with another animated channel, because it is not animatable by default so there is no channel to link with.

Drat. I thought this might be the case. Now I just have to figure out how to wire up my "midpoint" POC null object so I can have a nice anaglyph-preview-enabled stereoscopic camera rig. Would bones be the right way to do this?

jin choung
01-25-2010, 02:03 PM
in maya, we use parallel cameras and adjust the horizontal offset setting to set the convergence point. in lw, you can use the shift camera to do the same thing.

definitely better to do this in your 3d app rather than in compositing so that you don't lose the sides of your render.

as for animatable interaxial, it might be better to use two cameras anyway. but yeah, the inability to preview what you're doing inside of lw is a pain.... although, if you set up the CCTV shader properly on a plane and aim a separate camera at that plane, you could get a somewhat ok workflow going by just doing low res renders every now and again... anyway, yah, not great.

jin
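
As a rough illustration of the parallel/shift approach, here is the film-back offset math in plain Python. Units are assumed consistent (metres for distances, millimetres for focal length and shift) and the sign convention depends on the app, so treat this as a sketch rather than LightWave's actual Shift Camera implementation:

def film_shift_mm(interaxial_m, convergence_distance_m, focal_length_mm):
    """Horizontal film-back shift per eye for an off-axis (parallel) stereo pair."""
    return (interaxial_m / 2.0) / convergence_distance_m * focal_length_mm

# 65 mm interaxial, convergence plane 3 m away, 35 mm lens:
shift = film_shift_mm(0.065, 3.0, 35.0)
print(round(shift, 3), "mm per eye (left shifts one way, right the other)")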

jin choung
01-25-2010, 03:02 PM
I actually figured this out just by thinking about it! :) I was wondering why very far objects seemed to have a large left/right divergence when in real life very distant objects converge. Once I realized that I'm also constantly changing my own POC via my eyeballs everything started to fall into place.

it's true that in real life, you place the ZPP (zero parallax plane) on whatever you look at.

but in real life, you NEVER see divergence unless you're deliberately going "wall-eyed".

and the only time you would get divergence in stereo is when you have something that's far away AND you have an interocular distance set farther apart than your own eyes.

in real life, no matter how far away something is, your eyes would only ever get parallel. not diverge.

but in stereo, you can be asked to look through the eyes of a person whose eyes are 1 ft away from each other (hyperstereo). in that case, the person whose eyes are 1 ft apart is still fine - eyes are parallel looking at something at infinity. but for US, we would have to diverge.

so if you have human interocular, you have the same limitations of far away objects as in real life - NONE.

but if you're creating greater depth with hyperstereo and an exaggerated interocular, then you must be careful so that your farthest objects don't exceed a certain distance and thereby create divergence - which is incredibly uncomfortable simply by virtue of it being essentially absent in daily life.

jin
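
To put a number on that divergence risk, here is a back-of-the-envelope check in plain Python. It assumes a parallel/off-axis rig, and all the values are illustrative rather than taken from any rig in this thread:

def max_screen_parallax_mm(interaxial_m, convergence_m, focal_mm,
                           sensor_width_mm, screen_width_mm):
    # Positive parallax on screen for an object at infinity.
    sensor_parallax_mm = (interaxial_m * 1000.0) * focal_mm / (convergence_m * 1000.0)
    return sensor_parallax_mm * (screen_width_mm / sensor_width_mm)

EYE_SEPARATION_MM = 65.0  # typical adult interocular

# Hyperstereo example: 300 mm interaxial, converged at 10 m, 35 mm lens,
# 36 mm film back, shown on a 10 m wide cinema screen.
p = max_screen_parallax_mm(0.3, 10.0, 35.0, 36.0, 10000.0)
print(round(p, 1), "mm of positive parallax at infinity",
      "-> viewers would have to diverge" if p > EYE_SEPARATION_MM else "-> still comfortable")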

OnlineRender
01-25-2010, 03:11 PM
There was a news report on today about this... people wearing big silly glasses. Why don't they make contact lenses?

jrandom
01-25-2010, 03:46 PM
it's true that in real life, you place the ZPP (zero parallax plane) on whatever you look at.

but in real life, you NEVER see divergence unless you're deliberately going "wall-eyed".

Ah, but you can! Note some easy-to-see landmark in the distance. Now, hold out your finger at arm's length and focus on that. You'll notice that the out-of-focus landmark has now diverged into two images. (This was the experiment where I finally thought "OMG it's obvious!" and I felt like a big dummyhead for not understanding it earlier.)


but if you're creating greater depth with hyperstereo and an exaggerated interocular, then you must be careful so that your farthest objects don't exceed a certain distance and thereby create divergence - which is incredibly uncomfortable simply by virtue of it being essentially absent in daily life.

Yep, the "depth budget".

jrandom
01-25-2010, 03:56 PM
in maya, we use parallel cameras and adjust the horizontal offset setting to set the convergence point. in lw, you can use the shift camera to do the same thing.

I'll have to look up Shift Camera, as I'm not familiar with it.


as for animatable interaxial, it might be better to use two cameras anyway. but yeah, the inability to preview what you're doing inside of lw is a pain....

I've settled on using a center camera w/ anaglyph filter for setting up a shot, but use parallel cameras aimed at the POC point for final rendering of the image/animation. Since the anaglyph filter seems to work with the POC point (if it sits at 50% of the distance to the target object) I'm trying to rig up something where the parallel cameras aim at a point at 50% of the distance to the center camera's target. I'm new enough to LightWave that I don't quite know how to set this up.

For instance, I'd like to be able to move the target of the parallel cameras forward/backward and have the center camera's target automatically assume a position at double that distance so the preview anaglyph renders show the proper depth.

jin choung
01-25-2010, 04:23 PM
Ah, but you can! Note some easy-to-see landmark in the distance. Now, hold out your finger at arm's length and focus on that. You'll notice that the out-of-focus landmark has now diverged into two images.

oh - we're talking about two different things.

yes, whatever is not in the zpp (convergence point) will be double imaged in our perception. and in real life, our convergence is linked with our accommodation (focus) so it's not that noticeable.

what i mean is that you shouldn't have a situation in a stereoscopic render where the viewer must diverge HIS EYES (go walleyed) in order to fuse the images. this is where the confusion came from - just like convergence, divergence is generally not about the images but about activity of the eyes or cameras (i.e. convergence/divergence - toe in/toe out). for the images, i just generally hear it described as "double image" or "not fused".

we never ever diverge our eyes in normal life. we converge to see things closer to us. but never diverge. the farthest thing away at infinity, we see by having our eyes basically parallel. but in stereoscopy, by having a larger than human interocular and having things far away, it is possible to make an image where the viewer has to diverge their eyes to fuse.

jin