Shift Camera weirdness



jrandom
03-02-2010, 10:55 PM
So I'm building this stereoscopic camera rig, and when playing around with convergent cameras, I notice the depth distortion issue shown in the first attached image:

http://www.newtek.com/forums/attachment.php?attachmentid=82775&d=1267595439

I couldn't figure out how to get the edges to converge at the same plane as the target point... until I switched both cameras to Shift cameras. Both horizontal and vertical shift are set to 0.0, yet the edges of the image converge on the same plane as the target point (as seen in the second attached image):

http://www.newtek.com/forums/attachment.php?attachmentid=82776&d=1267595449

Okay, magic. Got it. As long as it works, I'm happy with it.

Only... after messing up my scene I decided to start from scratch, and now when I use shift cameras, they show the same problem the regular perspective cameras did in image 1.

Does anyone know how I wound up with the convergence lines in the second image, and how I might get them back? I'd prefer to do this with perspective cameras, as it would make integration with other software easier, but if I have to use shift cameras, I will.

jin choung
03-03-2010, 12:43 AM
I read your post several times and I'm still not sure what the problem is. Is the issue:

- was working but then stopped working in scratch built scene

or

- you just want to use perspective cam?

Also, diagrams would be clearer with a wider stereo base.

I don't know of any way to induce a horizontal shift on the backplate in perspective cam, but red oddity from the other thread seemed to have found a trick recently.

Jin

jrandom
03-03-2010, 01:00 AM
I read your post several times and I'm still not sure what the problem is. Is the issue:

- was working but then stopped working in scratch built scene

or

- you just want to use perspective cam?


I'd prefer to use perspective cameras but if shift cameras are the only way to correct for the depth aberration, I'll use 'em.

The real trouble is that the solution stopped working, and I don't know why. If you look at that first image, the red marker shows where the edge lines of the cameras cross -- any object at that position will show no divergence. Thing is, "no divergence" means "seen at screen depth", which is shown at the blue marker.

For some reason, the first time I switched to using shift cameras, the edge lines of the cameras intersected the same plane as the target point, which corrected this problem. All I did was switch the perspective cameras to shift cameras and uncheck "Use Cam Pitch?". Both shift amounts were 0.0.

I am unable to replicate this result. Now, even when I use shift cameras, the camera edge lines intersect well in front of the plane demarcated by the central point of convergence, just like the regular perspective cameras.


(I've solved this problem for parallel stereoscopy: just set your left camera shift amount to:


interocular_distance / (tan(hfov_in_radians / 2) * distance_to_POC)

and use that same value but negative for the right camera's shift distance. Now I just need to get something working for convergent stereoscopic setups...)
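
In case it's useful, here's the same calculation as a small Python sketch. The function name and example numbers are just placeholders for illustration; it only assumes what's described above, i.e. that the shift value is a fraction of the image-plane width and that the right camera gets the negated value:

import math

def parallel_shift(interocular_distance, hfov_radians, distance_to_poc):
    # Horizontal shift (fraction of image-plane width) for the LEFT camera
    # of a parallel stereo rig; use the negated value for the right camera.
    return interocular_distance / (math.tan(hfov_radians / 2.0) * distance_to_poc)

# Example numbers only: 65 mm interocular, ~45.24 deg hfov, POC 2 m away.
left_shift = parallel_shift(0.065, math.radians(45.24), 2.0)
right_shift = -left_shift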

jrandom
03-03-2010, 01:04 AM
Here's a screenshot that illustrates the problem more clearly via a wider base:

http://www.newtek.com/forums/attachment.php?attachmentid=82778&stc=1&d=1267603418

Those outer camera edge lines need to intersect at the plane marked by the target NULL object instead of in front of it, and for that one brief moment in time I actually had a setup that behaved that way. No idea how I got it to work, no idea how I broke it.

jin choung
03-03-2010, 01:28 AM
hmmm... never saw that equation before... thanks... if i get time i'll see if it's any better than the p=fb/i thing that i got in my maya setup.

as for the toe in, haven't done any work on that but i assume you're using some kind of expression to adjust your shift based on your convergence point locator correct?

is that expression still activated and working?

jin

jin choung
03-03-2010, 01:29 AM
oh also, you can MANUALLY adjust the shift values to converge properly right?

jin

jin choung
03-03-2010, 01:32 AM
oh.... actually, just thinking about it, if you're shifting the back plane, you ARE changing the convergence point....

so wouldn't it be something like modify the ACTUAL convergence point to be DIFFERENT from the poc marker such that when your formula and your horizontal offset kicks in, it will THEN be at your marker?

jin

jrandom
03-03-2010, 02:00 AM
hmmm... never saw that equation before... thanks... if i get time i'll see if it's any better than the p=fb/i thing that i got in my maya setup.

I had to dust off my trig skills and derive it from scratch. I was only able to do so after I realized that the shift value in LightWave is a percentage (0.0 - 1.0) of the width of the image plane.

Could you explain what p=fb/i means?


as for the toe in, haven't done any work on that but i assume you're using some kind of expression to adjust your shift based on your convergence point locator correct?

I'm assuming that's what I'll have to do. I just wish I knew why it worked that one time -- no expressions were involved; it just worked by default.


is that expression still activated and working?

For parallel cameras, yes. I posted that camera rig (parallel stereo) to that other thread, but I'll repost it here if you wanted to take a look:

Stereo Camera Parallel Basic.zip (http://www.rainybrain.org/Assets/Lightwave/Stereo%20Camera%20Parallel%20Basic.zip)

Just pull up the expressions in the graph editor to see how I wired it together. The tricky bit actually turned out to be converting the hfov into a zoom factor, since I needed the hfov for other calculations but had to give that value to the camera as a ZF. I finally tracked down some other post where the math had already been done.
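
For reference, the hfov-to-zoom-factor conversion can be sketched like this in Python. I'm assuming LightWave's zoom factor relates to the horizontal FOV as tan(hfov/2) = frame_aspect / zoom_factor, so treat this as a sketch to verify against your own camera panel rather than gospel:

import math

def hfov_to_zoom_factor(hfov_radians, frame_aspect):
    # Assumed relation (verify in LightWave): tan(hfov/2) = frame_aspect / zoom_factor
    return frame_aspect / math.tan(hfov_radians / 2.0)

# With a 4:3 frame aspect, this puts a ~45.24 deg hfov at a zoom factor of about 3.2.
zf = hfov_to_zoom_factor(math.radians(45.24), 4.0 / 3.0)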


oh also, you can MANUALLY adjust the shift values to converge properly right?

Yes, but that's next to useless if I'm animating the POC and IOC. I need it to be automated or I'll go crazy trying to work with it.


oh.... actually, just thinking about it, if you're shifting the back plane, you ARE changing the convergence point....

Aw crud, you're right. This didn't occur to me before because I thought the target point would keep the center of the image... centered.

That one time it worked with a shift of 0.0 -- it was actually doing what I wanted it to do (the second image I posted), fixing the depth issue without distorting the image or changing the POC at the center. HOW?? It's like some leprechaun snuck into my computer and magically made it work for half a day, just to taunt me. The more I look at it, the more it seems impossible that that second image should even exist.


so wouldn't it be something like modify the ACTUAL convergence point to be DIFFERENT from the poc marker such that when your formula and your horizontal offset kicks in, it will THEN be at your marker?

I think that's much closer to a real solution, but the math for that is a tad over my head. I'll keep pegging away at it, but you'd think somebody else would have solved this already.

jin choung
03-03-2010, 02:07 AM
kewl,

glad you're sorted.

the p=fb/i was in the other thread.

parallax on backplane = focal length (in mm) * stereo base (must be converted to mm) / i (distance to object or locator or whatever, also in mm [i.e. all units must be same])

of course, this will yield parallax in mms.

but in maya, the back plane horizontal shift is in pixels so divide that by (backplate width (converted from inches to mms) / horizontal resolution). that gives me the pixels of parallax that i have to shift on one camera, or half of it on both cameras, to zero out that point in space.
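
in python-ish form (a sketch with made-up example numbers, and with the mm-to-pixel step written as dividing by the backplate's mm-per-pixel):

def parallax_mm(focal_length_mm, stereo_base_mm, distance_mm):
    # p = f * b / i, with everything in the same units (mm here)
    return focal_length_mm * stereo_base_mm / distance_mm

def parallax_pixels(p_mm, backplate_width_mm, horizontal_resolution):
    # mm-per-pixel on the backplate = backplate_width_mm / horizontal_resolution
    return p_mm / (backplate_width_mm / horizontal_resolution)

# made-up example: 35 mm lens, 65 mm base, subject 2000 mm away,
# 36 mm wide backplate rendered 1920 px across
p = parallax_pixels(parallax_mm(35.0, 65.0, 2000.0), 36.0, 1920)
# shift one camera by p pixels, or each camera by p / 2, to zero out that point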

i think if we did the math, my equation would turn into yours or vice versa... you just had the math skilz to bring in tan and stuff... : )

jin

jin choung
03-03-2010, 02:08 AM
I think that's much closer to a real solution, but the math for that is a tad over my head. I'll keep pegging away at it, but you'd think somebody else would have solved this already.

every time i have to google crap about this, that's all i ever say to myself... : )

jin

jrandom
03-03-2010, 02:45 AM
...in maya, the back plane horizontal shift is in pixels...

Yikes! The problem is easier to solve with percentages since it's resolution-independent. I don't think I would have figured it out on my own if I'd had to work with pixel amounts.

jin choung
03-03-2010, 03:06 AM
Yikes! The problem is easier to solve with percentages since it's resolution-independent. I don't think I would have figured it out on my own if I'd had to work with pixel amounts.

ooops... sorry... my mistake, my nearplane and farplane need to deliver parallax to the user in pixels

(so after i set my zp and np and fp, the interface tells me that the np has -34 pixels of parallax and fp has 10 pixels of parallax and i use pixels of parallax to define my usable volume depending on the project and target venue [i do tests to bracket]).

but the horizontal shift in maya is in inches (so i just have to convert that to mms for my expressions to work).

yeah, i am decidedly NOT a math wiz and so i had to google a crapload of stuff (and buy the american cinematographer manual!) and just let it simmer awhile in the back of my brain before anything started to make sense to me. i cite a crapload of my sources in the other thread.

jin

p.s. had a thought about your toe in rig... you only get really bad keystoning distortions with wide stereobases (i guess very close objects too so expression probably tied to camera rotation angle) right?

in which case, your expression may involve scaling the presence of the horizontal offsets so that at small stereobase distances, you are using little to no horizontal shift but as you increase the stereobase, you start lowballing your convergence and make up the convergence with horizontal shift so that at extremely wide stereobases, you're basically a parallel rig?
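
something like this, maybe (a totally hypothetical python sketch of that blend idea -- a straight linear fade between full toe-in and full shift, not an exact convergence solve):

import math

def blended_rig(stereo_base, poc_dist, hfov_radians, max_base):
    # t goes 0 -> 1 as the stereo base widens: mostly toe-in when narrow,
    # mostly horizontal shift (i.e. basically a parallel rig) when wide.
    t = min(max(stereo_base / max_base, 0.0), 1.0)
    full_toe_in = math.atan((stereo_base / 2.0) / poc_dist)  # radians per camera
    # per-camera shift as a fraction of the full image width at the POC distance
    full_shift = (stereo_base / 2.0) / (2.0 * poc_dist * math.tan(hfov_radians / 2.0))
    return (1.0 - t) * full_toe_in, t * full_shift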

StereoMike
03-03-2010, 05:15 AM
I have a PDF about stereo/autostereoscopic image generation (& formulas) that I downloaded some time ago from a display vendor; the doc is not online there anymore. PM me your email address if you'd like to have it.

mike

vbk!!!
03-16-2010, 04:18 AM
Thanks Jin for the research you did and shared with us
Thanks Jrandom for your formula for the shift amount

You two made my day.

jrandom
03-16-2010, 11:07 AM
Thanks Jin for the research you did and shared with us
Thanks Jrandom for your formula for the shift amount

You two made my day.

I think there may have been something really strange going on with my original stereo camera rig, as I had to modify the formula for parallel camera shift slightly when I rebuilt it from scratch:


shift = interocular_distance / (4 * tan(hfov / 2) * distance_to_poc)

(remember to use the negative of the above value for the right-eye camera)

I think this change was necessary because of the change in shift camera behavior on my system. When the original behavior that fixed the convergent-camera depth issue disappeared, the parallel shift formula started working differently as well.
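
In code form (same sketch as before, just with the revised denominator):

import math

def parallel_shift(interocular_distance, hfov_radians, distance_to_poc):
    # Revised per-camera shift (fraction of image-plane width) for the LEFT
    # camera; use the negated value for the right camera.
    return interocular_distance / (4.0 * math.tan(hfov_radians / 2.0) * distance_to_poc)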

vbk!!!
03-16-2010, 07:12 PM
I made the same modification myself, using half of the interocular distance in your first formula.
With the zoom factor formula you can add some modification to it:
shift = (interOccDist/2)/(2*(FrameAspectRatio/ZoomFactor)*pocDist)

This way you can play with the zoom factor and get more interactive behavior from the rig.
It's just too bad there is no animation channel to play with Frame Aspect Ratio (if your pixel aspect is different from 1.0)...
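
As a Python sketch, and assuming LightWave's zoom factor satisfies tan(hfov/2) = FrameAspectRatio / ZoomFactor, this is algebraically the same shift as the revised formula above:

def parallel_shift_zf(inter_occ_dist, frame_aspect_ratio, zoom_factor, poc_dist):
    # (IOD/2) / (2 * (frameAspect / zoomFactor) * pocDist)
    # == IOD / (4 * tan(hfov/2) * pocDist) under the assumed zoom-factor relation
    return (inter_occ_dist / 2.0) / (2.0 * (frame_aspect_ratio / zoom_factor) * poc_dist)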