
View Full Version : Matchmove suggestions? Experiences?



CROUTON213
11-20-2011, 11:06 PM
Hi, any thoughts on affordable matchmove programs for use with LightWave? I am looking at boujou silver bullet.

Trevor
11-20-2011, 11:15 PM
Syntheyes
http://ssontech.com/

nikfaulkner
11-21-2011, 01:37 AM
boujou's LightWave support works perfectly (the last time I tried, which was a while ago).
I couldn't get the SynthEyes demo to be as accurate.

vncnt
11-21-2011, 02:32 AM
Syntheyes
http://ssontech.com/

Pamela G. Juust
11-21-2011, 05:29 AM
I use SynthEyes also. It works very nicely with LightWave and After Effects.

toeknee
11-21-2011, 03:25 PM
Hey LAD3D. I also think that SynthEyes is very cool, but if you are totally new and just want to play with matchmoving, you can try the Voodoo camera tracker. It's free. Personally I use PFTrack and MatchMover sometimes. But here it is:
http://www.digilab.uni-hannover.de/docs/manual.html#overview

Dexter2999
11-21-2011, 03:53 PM
FWIW, I have heard in two separate interviews with two separate VFX supervisors that matchmoving is still considered much more of a "dark art" than a science. Apparently it is routine to run footage through one piece of software and, if that doesn't work, try another.

If it is just for personal use, I would suggest the free or cheap options. If you are purchasing it in hopes of getting professional jobs I'd consider Boujou or PFTrack. They appear to be industry leaders.

CROUTON213
11-22-2011, 12:30 PM
Thank you for all the suggestions. My goal for 2012 is to learn matchmoving with one of these programs and motion capture with Kinect or PS Move. I will report on my progress...

Lightwolf
11-22-2011, 04:26 PM
One more for SynthEyes here.
I also wouldn't count too much on automatic solving, especially not with demo footage. What you get in real life is usually a lot worse, and that's where a good manual toolset comes in really handy. SynthEyes is extremely good in that respect.
That doesn't mean the automatic solving is no good; it's very decent. But automatic solving in general is prone to failure.

Cheers,
Mike

vncnt
11-23-2011, 12:48 PM
FWIW, I have heard in two separate interviews with two separate VFX supervisors that matchmoving is still considered much more of a "dark art" than a science.

Actually, there is almost always a clear reason why some shots do not solve (completely) in AUTO mode.

AUTO mode will do just fine if:
- the camera made the right movements (creating parallax through left <-> right and/or up <-> down translation);
- every detail is sharp in every frame, with a minimum of motion blur and a minimum of noise;
- enough detail is available all over the image;
- you prevent reflections, glare, and moving details (like water) from being tracked;
- you avoid tracking details in the far distance;
- the scene is well lit;
- the camera used a true progressive mode;
- the camera didn't use a CMOS sensor, or at least didn't amplify jell-o-cam effects by enabling the shutter, zooming too much, or moving fast;
- the camera did NOT use an electronic or optical image stabiliser, and you DID use a mechanical stabiliser;
- you delete "high error" trackers (see the sketch after this post);
- etc.

Poor footage tends to create poor results.
Better to invest some time and start a dialogue with the cameraman and director to improve the footage in a way that makes everybody happy: sometimes a shot will solve better if you add extra camera movement (for parallax) or better lighting conditions at the beginning or end of the shot - that way it's easy for the editor to cut the unwanted parts.

Check out the "How to Shoot" information: http://www.ssontech.com/learning.htm
and gain experience solving shots with your package of choice, to build awareness of general matchmoving limitations BEFORE you shoot (or have someone shoot) new backplates.
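
As a footnote to the "delete high-error trackers" item above, that cleanup pass is easy to picture in code. A minimal sketch in plain Python; the data layout, threshold, and function name are illustrative, not any particular tracker's API:

# Minimal sketch of pruning "high error" trackers between solves.
# The data layout and threshold are illustrative, not any package's API.

def prune_trackers(trackers, max_error_px=1.0):
    """Keep trackers whose RMS reprojection error is acceptable.

    trackers: list of (name, rms_error_px) pairs from a solve.
    """
    kept = [(name, err) for name, err in trackers if err <= max_error_px]
    print(f"kept {len(kept)} of {len(trackers)} trackers "
          f"(dropped those with error > {max_error_px} px)")
    return kept

# Typical workflow: solve, prune the worst trackers, solve again,
# and repeat until the overall error stops improving.
trackers = [("t001", 0.4), ("t002", 2.7), ("t003", 0.9), ("t004", 5.1)]
good = prune_trackers(trackers, max_error_px=1.0)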

vncnt
11-23-2011, 01:00 PM
Don't forget to set the motion blur phase in LightWave by moving all Camera keyframes by 50% of the (real) camera's shutter cycle.

Example for the Sony PMW-EX1:
When shooting 1080p25 with the Shutter set to "Off", the EX1 uses a cycle of nearly 360 degrees. So move the Camera keyframes in LightWave by +50% of one frame (i.e. to the right) and the motion blur should appear exactly right.

When shooting 1080p25 with the Shutter set to "180 degrees", move the Camera keyframes in LightWave by +25% of one frame.

When shooting 1080p25 with the Shutter set to "90 degrees", move the Camera keyframes in LightWave by +12.5% of one frame.

BTW, for EX1/EX3 sources the shutter settings are available in the metadata.

Note that the jell-o-cam effect in the EX1/EX3 increases when using small shutter angles like 90 degrees.
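
The pattern behind those three cases is just half the shutter angle expressed as a fraction of the full 360-degree cycle. A quick sketch of the arithmetic (the function name is illustrative):

# Keyframe shift for matching LightWave's motion blur phase to the
# real camera: half the shutter's open fraction of one frame.

def keyframe_offset_frames(shutter_angle_deg):
    """Fraction of one frame to shift Camera keyframes to the right."""
    return 0.5 * (shutter_angle_deg / 360.0)

for angle in (360, 180, 90):  # the three EX1 cases above
    print(f"{angle:>3} deg shutter -> +{keyframe_offset_frames(angle):.3f} of a frame")
# 360 -> +0.500, 180 -> +0.250, 90 -> +0.125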

archijam
11-23-2011, 01:03 PM
...

Excellent info in that post.

Also keep in mind that shots you see in films with shakes or vibration (think in-car footage) can often be exaggerated in post, i.e. the original footage to be tracked was as smooth as possible.

High resolution and frame rate can also help in these instances, if it suits the production (sharper details, less blur).

Imageshoppe
11-23-2011, 10:59 PM
Note that the jell-o-cam effect in the EX1/EX3 increases when using small shutter angles like 90 degrees.

Shutter speed is not a factor in the amount of sensor skew, it's just more visually apparent at the higher shutter speeds. Motion blur at the slower shutter speeds masks the effect, but in terms of the damage done to the image, slow or fast shutter, it's the same.

Also, the stock EX1/EX3 lens has a horizontal FOV of 63.274 degrees and a focal length of 4.199mm, useful info when you've shot at full wide and you simply want to plug the right values into your tracking software. This will of course vary a bit from camera to camera, but it will put you squarely in the ballpark for a good solve.
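
As a sanity check, those two numbers also pin down the effective sensor back width through the usual pinhole relation FOV = 2 * atan(w / 2f). A quick back-of-envelope using only the figures quoted above:

import math

# Recover the implied active sensor width from the quoted stock-lens
# values: horizontal FOV 63.274 degrees at focal length 4.199 mm.
fov_deg = 63.274
focal_mm = 4.199

width_mm = 2 * focal_mm * math.tan(math.radians(fov_deg) / 2)
print(f"implied active sensor width: {width_mm:.2f} mm")  # ~5.17 mm

# Inverting the relation round-trips to the quoted FOV, which is the
# value a tracker should derive from the focal length and this width.
fov_check = math.degrees(2 * math.atan(width_mm / (2 * focal_mm)))
print(f"round-trip FOV: {fov_check:.3f} degrees")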

Regards,

Jim Arthurs

vncnt
11-24-2011, 05:11 AM
Shutter speed is not a factor in the amount of sensor skew, it's just more visually apparent at the higher shutter speeds. Motion blur at the slower shutter speeds masks the effect, but in terms of the damage done to the image, slow or fast shutter, it's the same.

Correct. Not directly.

The EX1/EX3 needs time to scan the sensor multiple times so images can be stacked on top of each other to mask jell-o-cam effects.
But at high shutter speeds it simply doesn't have enough time to scan the sensor as many times as it physically could during a full cycle. As a result: less effective masking of skew.

Anyway: if you move a camera at a speed that doesn't introduce motion blur, then it probably didn't introduce too much skew either.

Imageshoppe
11-24-2011, 10:04 AM
The EX1/EX3 needs time to scan the sensor multiple times so images can be stacked on top of each other to mask jell-o-cam effects.
But at high shutter speeds it simply doesn't have enough time to scan the sensor as many times as it physically could during a full cycle. As a result: less effective masking of skew.

Actually, the sensor is only "scanned" once, no matter the shutter speed. Skew is a factor of the read/reset speed of the sensor and is a fixed amount, or rather a fixed "potential" that is revealed by any motion vectors in frame.

Years ago I did one of the first tests showing the impact of sensor skew, using both the EX1 and the RED, with the original sensor. Over time, RED has decreased the read/reset time significantly, but for the first year or so it was almost identical to the EX1.

http://ftp.datausa.com/imageshoppe/outgoing/EX1/REDvrsEX1_rollingShutter2.mov

I was curious what the EX1 tracking would be like if it had no skew at all, or at least greatly reduced skew, so I did a test: I shot a very, very slow handheld dolly toward a table top at 1/5th normal speed, then skip-printed every fifth frame, basically giving an image sequence with 1/5th the normal amount of skew. Then, after tracking, I went into LightWave and set up the same scene. The result was a perfect track, with NO fun-house-type adjustments to the modeled object, which is a 1' by 1' square box on a flat plane.

http://ftp.datausa.com/imageshoppe/outgoing/EX1/EX1_wide_undistort_NoSkew.mov
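
A back-of-envelope model ties the two observations above together: with a rolling shutter, a horizontally moving feature is sheared between the top and bottom of the frame by readout time times speed, independent of shutter speed. The readout time below is a made-up illustrative figure, not a measured EX1 value:

# Rolling-shutter skew depends only on the sensor's read/reset time
# and the motion speed, not on shutter speed. Readout time here is
# illustrative, not a measured EX1 spec.

readout_time_s = 1 / 60        # hypothetical full-sensor readout time
pan_speed_px_per_s = 1000      # horizontal feature motion in the frame

skew_px = readout_time_s * pan_speed_px_per_s
print(f"top-to-bottom shear: {skew_px:.1f} px")

# The skip-print test follows directly: shooting the move at 1/5th
# speed and keeping every fifth frame divides the motion per readout,
# and hence the skew, by 5.
print(f"with 1/5-speed skip-printing: {skew_px / 5:.1f} px")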

Also worth noting that I profiled the lens and removed all distortion before tracking, which is a whole 'nuther subject in complexity.
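
For anyone wanting to reproduce that preprocessing step, the general shape of it looks like this with OpenCV. A sketch only: the camera matrix and distortion coefficients below are placeholders, not an EX1 lens profile; in practice they come from profiling the actual lens (e.g. with a checkerboard and cv2.calibrateCamera):

import cv2
import numpy as np

# Undistort a plate frame before tracking. K and dist are placeholder
# values, not an EX1 lens profile; a real profile comes from lens
# calibration (e.g. cv2.calibrateCamera on checkerboard shots).
K = np.array([[1500.0, 0.0, 960.0],   # fx,  0, cx (placeholders)
              [0.0, 1500.0, 540.0],   #  0, fy, cy
              [0.0, 0.0, 1.0]])
dist = np.array([-0.25, 0.08, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

frame = cv2.imread("plate_0001.png")            # one frame of the plate
undistorted = cv2.undistort(frame, K, dist)
cv2.imwrite("plate_0001_undistorted.png", undistorted)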

Skew is a nightmare for tracking because it presents false information to the tracking software, which will then try to do the correct thing with this false data. Yes, there are workarounds and optical-flow plugins, but the process is worse, not better, for needing them. At the end of the day, you can't do some of the neat tricks with rolling-shutter sensors that have significant skew that you can with global-trigger CCD sensors. And, unfortunately, the vast majority of cameras now use CMOS sensors with rolling-shutter artifacts.

The good news is that every new generation has a faster read/reset than the previous one, reducing the impact of the problem with each new high-end camera system. The bad news is that so many folks try to track footage from video taken with Canon DSLRs and the like and can't understand why the point cloud looks like it was generated by a drunken sailor...

Regards,

vncnt
11-24-2011, 10:35 AM
Actually, the sensor is only "scanned" once, no matter the shutter speed.

Then why do I see +/- 5 dithered images in one frame during extreme motion?

Imageshoppe
11-24-2011, 06:09 PM
Then why do I see +/- 5 dithered images in one frame during extreme motion?

Examples? Samples? Under what conditions? I don't think I've ever seen this.

I would suspect the light source sync as the cause if you're seeing this, or perhaps failure of the long GOP MPEG if this is XDCAM HD sourced material.

Regards,

Dexter2999
11-24-2011, 06:46 PM
Worth noting that Blender is working on a matchmove solution as well.

From http://www.blendernation.com/

Camera/motion Tracking Docs now Available
Posted on November 21, 2011 by Bart

It was already mentioned in today’s developer meeting notes, but I think it deserves additional coverage: the manual for the camera/motion tracker is now available on the Blender wiki. This module is not available in the stable Blender release yet, so you’ll have to download a build from Graphicall.org.

vncnt
11-25-2011, 09:28 AM
Examples? Samples? Under what conditions? I don't think I've ever seen this.

I would suspect the light source sync as the cause if you're seeing this, or perhaps failure of the long GOP MPEG if this is XDCAM HD sourced material.

Regards,

It was shot outside, during daytime, from the passenger seat of a car.
There was high contrast between the car window and the outside.
So it can't be a light source sync problem.

MPEG-2 often causes blockiness in those situations, not dithered layers of individual images.
It's not strange that you've never seen this, because the camera movement at that moment was extreme, and images like that are likely to be cut from the final version.

I'll check next week if I can find this example.

Whelkn
07-05-2017, 04:41 PM
Another vote for SynthEyes. I've been using it since the day it came out, and it only gets better and better. It's cheap and easy to use. Tracking is not a dark art; if you can't get a track, you are doing it wrong :-)

m.d.
07-06-2017, 12:10 AM
Then why do I see +/- 5 dithered images in one frame during extreme motion?

Interframe encoding.

You are only actually recording about three full intra-coded frames per second (I-frames); the P-frames only store the motion-predicted differences between I-frames.
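
To put rough numbers on that (the GOP length here is an assumed illustrative value; the real figure depends on the codec settings):

# Rough arithmetic behind "only a few full frames per second" in a
# long-GOP codec. GOP length is an assumed value for illustration.
fps = 25
gop_length = 8                    # frames per group of pictures (assumed)
i_frames_per_second = fps / gop_length
print(f"{i_frames_per_second:.1f} intra-coded frames per second")
# ~3 per second; everything in between is motion-predicted from them.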

Also, for cameras shooting interlaced (24PsF is just interlaced with a pulldown vs. true 24p) and encoding with 4:2:0 (the EX1), you are actually averaging together pixels sampled at different points in time (usually more than double the full sensor reset time) even before they are sent to the encoder.
4:1:1, 4:2:2, and 4:4:4 all sample chroma line by line, whereas 4:2:0 samples a square block of four pixels... and unfortunately, with an interlaced camera, the lower half of that square is from at minimum 1/48th of a second later.
Mix this with MPEG blocking, rolling shutter, and interframe encoding, and you have a heap of bad data for a tracker to work with.
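
A toy illustration of that 4:2:0 point: one chroma sample covers a 2x2 pixel block, and on an interlaced source the two rows of that block come from fields captured at different times. The field interval follows the post's 1/48 s figure; the pixel values are made up:

import numpy as np

# Toy model of 4:2:0 chroma on an interlaced source: one chroma
# sample averages a 2x2 pixel block whose rows come from different
# fields. Field interval per the post's figure; values are made up.
field_interval_s = 1 / 48

block = np.array([[100.0, 104.0],   # top row: field 1 (t = 0)
                  [130.0, 134.0]])  # bottom row: field 2 (t = 1/48 s)

chroma_sample = block.mean()
print(f"stored 4:2:0 chroma sample: {chroma_sample}")
print(f"mixes pixels captured {field_interval_s * 1000:.1f} ms apart")
# 4:1:1 / 4:2:2 / 4:4:4 sample chroma line by line, so each sample
# stays inside a single field and a single capture instant.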


A vote for SynthEyes as well... I've been using it since the beta.

A tip for noisy footage is the highpass filter in SynthEyes.
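
For the curious, one common form of high-pass prefiltering is subtracting a heavily blurred copy of the frame, which flattens low-frequency lighting and gradient variation so trackers key on local detail. A rough OpenCV sketch of the general technique, not SynthEyes's actual implementation:

import cv2

# Generic high-pass prefilter: subtract a heavily blurred copy of the
# frame (plus a mid-grey bias) so only local detail remains. This is
# the general technique, not SynthEyes's actual implementation.
frame = cv2.imread("plate_0001.png", cv2.IMREAD_GRAYSCALE)
lowpass = cv2.GaussianBlur(frame, (0, 0), sigmaX=15)          # blurred copy
highpass = cv2.addWeighted(frame, 1.0, lowpass, -1.0, 128.0)  # frame - blur + 128
cv2.imwrite("plate_0001_highpass.png", highpass)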

Imageshoppe
07-06-2017, 07:42 AM
I don't think it is/was an interframe encoding issue. I owned that particular camera from release until 2015 and did codec stress tests with it for a local company that manufactured high-quality outboard recorders. I could break it "real good" and observed all manner of issues, but not that... :) Beyond fast pans, sparks, and high-shutter-speed waterfalls, one really good method of "breaking" that codec is to put the camera into video time-lapse mode, set the shutter speed high so each frame is as sharp as possible, then run around recording so that every frame is completely different from the one before. That's a situation that forces the worst behavior out of the encoder.

Here's just such a test from 2009 that I did with the EX1... notice that there's no "frame stacking" or bleeding of images between sequential frames; it's pretty remarkable it looks that good, all things considered...

http://ftp.datausa.com/imageshoppe/outgoing/EX1/35Mb_LongGOPinterval_recording.mov

I suspect that for the poster it was more an issue with the playback decoding of the clip and some mismatch in the decoding settings. It could have been temporal noise reduction, but that's unlikely in broad daylight, and again, I've never seen it in the EX1, unlike that old Panasonic AF100...

Down in the weeds a bit: Sony manufactured their own encoder chips used in the EX1/EX3, F3, and most cameras of the same vintage. This chip was capable of much more than Sony ever allowed it to deliver. The third-party recorder company I worked with bought the raw chips from Sony, used two of them together to create 4:2:2, then poured bitrate at them... in the end, the same chips that Sony was throttling down to 35 Mb/s in 4:2:0 or 50 Mb/s in 4:2:2 in camera were pushing out 220 Mb/s long-GOP and 180 Mb/s all-I-frame in their recorder!

Back to SynthEyes: as a LightWave generalist I don't track on a day-to-day basis, but SynthEyes is the program I've used most for dedicated tracking, and I really like it. I can always get back into the flow in a short period of time, and I really like the export options to Fusion. I'm fortunate to shoot most of what I have to track, so I always do precise lens calibrations, which is the difference between success and failure in many cases. Russ is a hoot and always fun to talk to when he shows at NAB.

Regards,

Jim Arthurs

m.d.
07-06-2017, 09:01 AM
I suspect that for the poster it was more an issue with the playback decoding of the clip and some mismatch in the decoding settings. It could have been temporal noise reduction, but that's unlikely in broad daylight, and again, I've never seen it in the EX1, unlike that old Panasonic AF100...



Most likely scenario... there could be ghosting from the interlaced encoding, but not spread over five frames... it sounds more like a cadence issue.

One thing for users to watch with interframe codecs: if a cut in an editor isn't exactly on an I-frame (a 1-in-8 or 1-in-10 chance), the whole GOP (group of pictures) has to be re-rendered, because the GOP has to end on an I-frame. Basically, even a simple cut in an editor can introduce a generational loss with an interframe codec before anything is rendered out. Off topic, but I have to bash interframe codecs for VFX every time I see them.

I like your timelapse stress test :)
That would make some nasty results...