View Full Version : Kinect and LW?



Samoht VII
03-11-2012, 07:38 PM
Hi Guys,

I'm working on a project atm that tracks a person's skeleton, and I want my model to replace the bones and act out what the person does, like MoCap. But I have no idea where to start.
Do Kinect and Visual Studio support LW10 files?
If it's possible, can you point me in the right direction?
I've seen a lot about iPi MoCap but have no idea if it can help.
Any useful links out there for me?

Thanks

Greenlaw
03-12-2012, 02:43 AM
I use iPi DMC to capture motion for Lightwave, and I'm currently working on my second and third 'mocap' movies using this system. You can use iPi DMC with 4 to 6 synchronized PS3 Eye cameras, a single Kinect, or two synchronized Kinects. These USB devices plug directly into your PC; the XBox or PS3 gaming console is absolutely NOT required. The PS3 Eye system is fairly accurate (with up to 360-degree coverage) and has an effective capture space of 20 x 20 ft, but it can be a bit involved to set up and use. A single Kinect is far less accurate and has a small capture space, about 7 x 7 ft, but it's very easy to set up and use--more or less 'plug-and-play'. The dual Kinect configuration also has a small capture space of 7 x 7 ft, but it can do up to 360-degree coverage. It's not as accurate as PS3 Eye because of the lower framerate (PS3 Eye is 60 fps; Kinect is 30 fps), but it's still pretty decent.

These days I prefer using dual Kinects with iPi DMC because it's very convenient and the motions I need can easily be captured in a small space. I keep the PS3 Eye system on hand in case I need bigger action moves, like running, tumbling, long walks, etc. That said, the Kinect's limited 7 x 7 ft space is just big enough to record loopable cycles, so with a bit of post-mixing it's possible to capture some 'big' motions with this system too.

With either configuration, you need a computer with a decent graphics card for tracking the motion to a skeleton. An Nvidia GeForce GTX 260 or better is recommended. I use a two-year-old GTX 460 and my tracking speed is about 0.6 seconds per frame (at the Kinect's 30 fps, that works out to roughly 18 seconds of tracking per second of capture), which is plenty fast for small productions like mine. Newer GTX cards are significantly faster.

Okay, so that's capturing. You still need to be able to retarget the data to a Lightwave character. Retargeting is the process of applying the motion from the capture skeleton to the animation skeleton, which may have completely different body proportions. Ideally, you would do this in iPi Studio (the tracking and retargeting program), export an FBX, and then use Lightwave's Load From Scene>Merge Only Motion Envelopes to transfer the motion data from the FBX to your rigged Lightwave character. This assumes you have specifically created your Lightwave rig to receive the data.
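
To make retargeting a bit more concrete: the naive approach just copies each joint's rotation from the capture skeleton onto the matching joint of your character, whose bones have different lengths. Here's a toy, self-contained C++ sketch (the types and numbers are made up for illustration, not from any real mocap API) showing why that alone moves the feet around:

#include <cstdio>
#include <cmath>

// Toy 2D skeleton: a hip at a fixed height with one straight 'leg' bone.
// Copying the capture skeleton's joint ANGLE onto a character with a
// different leg length changes where the foot lands, which is how
// foot sliding creeps into naive retargeting.
struct Leg { float hipHeight; float legLength; };

// Foot position when the leg is rotated 'angle' radians from vertical.
void footPosition(const Leg &leg, float angle, float &x, float &y)
{
    x = leg.legLength * std::sin(angle);
    y = leg.hipHeight - leg.legLength * std::cos(angle);
}

int main()
{
    Leg performer = {0.95f, 0.90f};  // average adult proportions (meters)
    Leg toddler   = {0.55f, 0.50f};  // short-legged cartoon character

    float angle = 0.5f;              // the same captured joint rotation
    float px, py, tx, ty;
    footPosition(performer, angle, px, py);
    footPosition(toddler, angle, tx, ty);

    // Same rotation, different proportions: the feet end up in different
    // places, so a real retargeter must also solve for foot contacts.
    printf("performer foot: (%.2f, %.2f)\n", px, py);
    printf("toddler foot:   (%.2f, %.2f)\n", tx, ty);
    return 0;
}

That's the whole reason dedicated retargeting tools exist: they don't just copy rotations, they also pin contacts and compensate for the proportion differences.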

I haven't done retargeting in iPi Studio; instead, I use Motion Builder. In my case, I've been retargeting motion data captured from my fairly average human body to two cartoon cats and a 'Charlie Brown'-proportioned toddler body, and Motion Builder has handled this almost perfectly every time. The retargeted data exported as FBX from MB transfers into my rig in Lightwave seamlessly--no tweaks, hierarchy changes, or renaming required; it just works.

Technically, you should be able to do this directly from iPi Studio to Lightwave, but I haven't tried it myself yet. I know Rebel Hill has tried it, and he reports that there are some incompatibilities between iPi DMC's FBX and Lightwave. I keep meaning to check this out myself, and since I'm actually doing some mocap tests now, I'll get to it in a day or two.

Another alternative is Ikinema Webanimate, which is a browser-based program. With Webanimate you upload your data for free, do the retargeting and cleanup online, and when you're satisfied with the results you can purchase the final FBX output. I haven't used Webanimate, but Rebel Hill says it works with Lightwave.

I've glossed over a few details but this is more or less the process in a big nutshell.

A couple of limitations with iPi DMC 1.0: no head or wrist tracking, and occasional problems tracking certain female body types. The lack of head and wrist capture hasn't been a problem for me; it's actually very easy to add these in MB.

If this concerns you, iPi DMC 2.0 was just announced the other day, and it's supposed to include not only head and wrist tracking but possibly fingers too. It will also feature a variety of body types, including female body types, for better tracking results. 2.0 also adds prop tracking (sword, rifle, basketball, etc.), tracking of multiple performers in a single capture, and improved accuracy. They've also added support for Kinect for Windows and the Asus Kinect clone. (iPi DMC 1.0 recently added Kinect for Windows support.) Details are a little sketchy at the moment, but the first release of 2.0 is supposed to be very soon.

Some things to be aware of on the Lightwave side:

For motion capture, joints are preferred over bones, but Lightwave joints have a few important quirks to be aware of. Most notable is the odd weight offset: the weightmap for a given joint is applied to the descendant joint (the 'tip') instead of the actual joint (the 'base'), as you would normally do with Lightwave Bones. This isn't just weird within Lightwave--it's also incompatible with other third-party joints-based systems like the ones in Motion Builder and Maya. To overcome this, you need to create an additional 'export' version of your Lightwave rig with the weightmaps applied 'incorrectly' by Lightwave's convention before you export your FBX; or, if you're like me, export the original Lightwave rig and completely redo your weightmaps in Maya so the rig is compatible with Motion Builder. Either way, the weightmap offset makes the I/O process less efficient than it should be (and weightmapping joints-based rigs more awkward and confusing in Lightwave than it needs to be), and I wish the issue would just get fixed in Lightwave.

Another issue is that exported Lightwave joints-based rigs require an extra 'dummy' joint added to the end of every joint chain, because end joints get deleted by Lightwave's I/O (I can't remember if this happens on import or export, but it really doesn't matter). Adding the 'dummy' joints is not a big deal, but it's yet another workaround for another longstanding 'bug'. (I'm not sure if this is officially a bug; it might just be a side effect of the weight offset issue, which is apparently not a bug but an undesirable Lightwave feature. I don't know; AFAIK, Newtek has never officially commented on why the weightmap offset exists, so maybe it's supposed to be this way.)

Anyway, once you've dealt with these two problems, getting mocap data into Lightwave is actually a very smooth process and a lot of fun to work with. (No sarcasm...I really mean that sincerely!) :)

Speaking of Rebel Hill, he offers a good FBX rig in his Rhiggit! system for Lightwave. Rhiggit! doesn't do retargeting but it does give you a control rig for editing your mocap data after you bring in motions that you've retargeted elsewhere. If you're not doing your mocap editing in Motion Builder or Webanimate, his FBX control rig is a good option to have in Lightwave because iPi Studio itself does not have mocap editing tools.

Alternatively, you could use IK Boost to edit your mocap in Lightwave. Personally, when I used IKB a few years ago I ran into all kinds of interface errors, but other users in these forums say it works just fine for them, so maybe it's okay now. IKB comes native with Lightwave, so if you're on a tight budget it's worth exploring.

Now, if all this sounds like more than you want to deal with, another mocap system you can look into is Jimmy|Rig Pro. This program is in public beta right now, and it features support for a single Kinect. The workflow is very simple: import a Lightwave character mesh, let J|R autorig it for you, then capture motion directly to your rig. You can edit the motion directly in J|R, and when you're pleased with the results, export a Lightwave scene or FBX file. IMO, the motion capture quality doesn't appear to be as solid as what you get from iPi DMC, but J|R Pro is probably the easiest system for an absolute beginner to wrap his head around quickly. (Note: my quality judgement is based on YouTube videos from other users; I have not used the J|R Pro Kinect mocap system myself.)

If this option interests you, I would talk to users at the J|R forums for more info. I personally have very little experience with J|R Pro and the above paragraph is pretty much all I know about it right now.

Good luck! I hope this info has been helpful to you and anybody else who's thinking about 'indie' mocap production using Lightwave. Setting up an efficient mocap pipeline can be a lot of work, but it's worth the effort. Once I got mine in place, I fretted a lot less about tedious tech stuff and was finally able to focus on the fun creative stuff.

G.

erikals
03-12-2012, 03:09 AM
I'd check out Jimmy|Rig, as it's easy; link in my signature.
For better quality, iPi is the way to go (read Greenlaw's posts on it).

Samoht VII
03-12-2012, 06:58 PM
OK, I had a look at iPi Soft, but I think I misled you. I don't really want motion capture like iPi; I want to track my skeleton with the Kinect in real time and replace my body with the character.

Greenlaw
03-13-2012, 10:49 AM
Oh, I see. Are you using Brekel, then, or some other realtime Kinect system? There are several available now or in the works--for example, Jimmy|Rig Pro and iClone are two, and Kinect plug-ins for Lightwave and DAZ have been demonstrated. I haven't used any of the realtime systems (to me, iPi results look more stable and accurate), but it is certainly possible to apply the data to a rig in an animation program like Lightwave.
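
Since you mentioned Visual Studio: if you want to roll your own realtime tracker, the Kinect for Windows SDK hands you the tracked skeleton directly. Here's a rough, untested C++ sketch of polling the skeleton stream--the names are from the native 1.x API (NuiApi.h, Kinect10.lib) as best I recall, so double-check them against the SDK documentation:

#include <windows.h>
#include <NuiApi.h>   // Kinect for Windows SDK 1.x; link against Kinect10.lib
#include <cstdio>

int main()
{
    // Start the sensor with skeleton tracking only.
    if (FAILED(NuiInitialize(NUI_INITIALIZE_FLAG_USES_SKELETON)))
        return 1;
    NuiSkeletonTrackingEnable(NULL, 0);

    // Poll a few hundred frames; a real app would run until told to stop.
    for (int frame = 0; frame < 300; ++frame)
    {
        NUI_SKELETON_FRAME skeletonFrame = {0};
        // Wait up to 100 ms for the next skeleton frame.
        if (FAILED(NuiSkeletonGetNextFrame(100, &skeletonFrame)))
            continue;

        // The sensor tracks up to NUI_SKELETON_COUNT (6) bodies at once.
        for (int i = 0; i < NUI_SKELETON_COUNT; ++i)
        {
            const NUI_SKELETON_DATA &s = skeletonFrame.SkeletonData[i];
            if (s.eTrackingState != NUI_SKELETON_TRACKED)
                continue;

            // Joint positions are in meters, in camera space. This is the
            // raw data you would map onto your character's bones/joints.
            Vector4 head = s.SkeletonPositions[NUI_SKELETON_POSITION_HEAD];
            printf("head: %.2f %.2f %.2f\n", head.x, head.y, head.z);
        }
    }

    NuiShutdown();
    return 0;
}

From there, the hard part is exactly what's discussed below: getting those joint positions onto a rig with different proportions, i.e. retargeting.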

The key is to retarget the captured data to a compatible skeleton and get that data into a common file format like .bvh, .fbx, or .dae (aka Collada). If your mocap software doesn't do retargeting, you need another program to do it for you. Motion Builder, Ikinema Webanimate, and Animeeple Forever (if you can find it) are some options that have been frequently mentioned in these forums. (FYI, iPi Studio is capable of retargeting the data directly to your own character rig before exporting.) After retargeting, you export to .fbx or .dae for Lightwave, and then use Load From Scene>Merge Motion Envelopes Only to import only the animation to your prepared Lightwave character. This assumes you have properly set up your character to receive motion this way, which means the Lightwave skeleton's hierarchy needs to match the hierarchy in the imported data.
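
If you've never looked inside a .bvh, it's just plain text, which makes the 'hierarchy must match' point easy to see. Here's a minimal made-up example (the joint names, offsets, and values are purely illustrative, not output from any particular program):

HIERARCHY
ROOT Hips
{
  OFFSET 0.00 0.00 0.00
  CHANNELS 6 Xposition Yposition Zposition Zrotation Xrotation Yrotation
  JOINT LeftUpLeg
  {
    OFFSET 0.10 0.00 0.00
    CHANNELS 3 Zrotation Xrotation Yrotation
    End Site
    {
      OFFSET 0.00 -0.40 0.00
    }
  }
}
MOTION
Frames: 2
Frame Time: 0.033333
0.0 0.9 0.0 0.0 0.0 0.0 5.0 0.0 0.0
0.0 0.9 0.0 0.0 0.0 0.0 6.0 0.0 0.0

Each line in the MOTION section is one frame of channel values in hierarchy order, which is why the receiving rig's structure has to line up with what the file declares.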

Just for example, here's what I do: export a .bvh file from iPi Studio, which is about as simple and common a format as you can get. The .bvh is brought into Motion Builder and retargeted to my own character rig there, and the results are exported as .fbx. Then, in Lightwave, I use Load From Scene>Merge Motion Envelopes Only to import the motion to my Lightwave rig. That's just one way to do it; check the programs you have, and it's likely they support some of the formats mentioned here.

Another method is to import the .bvh directly into Lightwave. This is a bit crude and less desirable because it requires a lot of 'brute force' work in Lightwave, and the results may still be poor. IMO, it's much easier to do it as described above, but if you choose to go this route, I believe the general methodology is to import your .bvh, reposition the bones to fit your mesh using Bone Tools, 'bind' the mesh (in LW terms, Use Bones From and Rest Bones), and finally edit your motion with IK Boost if needed. This isn't ideal, though, because depending on your character's shape and how much you change the skeleton, you will likely experience 'slipping' and other offset issues. (It's been a long time since I last tried this, so the steps described may not be 100% accurate.)

Hope this helps.

G.

Greenlaw
03-13-2012, 11:09 AM
Hmm. I think I just repeated myself. :)

In either case (iPi DMC method or realtime method), getting the data into Lightwave is more or less the same.

One exception may be Jimmy|Rig Pro, because it generates a compatible rig for you and saves out an .lws. It's probably the simplest and most 'LW-friendly' option. It does realtime Kinect capture (single Kinect only), can import .bvh from other programs, and can export .fbx for other programs, so it's quite flexible. I don't know if any issues exist with the current version (it's in open beta for J|R 'standard' users), but if you're interested in J|R Pro, I would carry this discussion over to the Origami Digital forums.

Good luck!

G.

marchermitte
04-09-2012, 02:00 AM
Thank you, Greenlaw, for this excellent and complete summary! It clears things up for some of us... I own Jimmy|Rig, but slipping feet is an issue I have with most of the mocap data brought into J|R.

Greenlaw
04-09-2012, 08:44 AM
Slipping feet is a common mocap problem but there are tools to prevent or fix this. Of course, it helps if your original mocap data is not slipping to begin with.

iPi DMC's Kinect capture system is pretty good about tracking feet accurately. Here's a crazy iPi DMC capture example my five-year-old created two weeks ago using two Kinects positioned at 90 degrees:

Mocap Test with Small Child (1 m tall) (http://bit.ly/HFd5oJ)

There are a couple of errors in this video, which we have since fixed in iPi Studio, but even 'as is', I thought it looked pretty good.

This was an unusual case because the performer was so short (1 meter tall) and iPi DMC 1.0's current Actor is designed for fairly 'average' adult male bodies. (FYI, iPi DMC 2.0 will introduce new 'actor models' for different body types: men, women, and children of different proportions. That should improve the tracking quality.) That said, it did a pretty decent job here.

It's interesting to point out that iPi DMC did not run into any major occlusion issues here. What did present an issue was the environment: when Sienna leans too close to it, the Kinect merges it with the data from her leg, and this briefly confuses the tracker twice. But iPi does a good job of differencing the environment from the performer, so these kinds of errors are normally minimized. (FYI, iPi Soft recommends keeping a distance of 50 cm from surrounding surfaces.)

Now, if you experience feet sliding after retargeting the motion to a character rig, that's a whole other thing--the quality of the retargeting depends on the tool you used to do it. Lightwave, for example, doesn't really have retargeting, which is why .bvh data applied directly to a Lightwave rig can look wrong and be difficult to work with. It's much more common to retarget to your rig elsewhere and then import only the motions to your original Lightwave rig via FBX.

iPi has built-in retargeting, which seems to work very well with other programs. Be aware that some Lightwave users have reported problems with iPi's FBX--I haven't had a chance to test this myself yet, so I can't really comment on it. FWIW, iPi says they will offer direct-to-.lws support later this year. Even with a .lws scene, my guess is that it will be better to load only the motion data to your rigged character in Lightwave, just like you would with an FBX file.

I like to use Motion Builder for retargeting because it works exceptionally well even with characters that have grotesque body proportions (in the case of 'Brudders', 'Charlie Brown'-style proportions). Also, the editing tools (excellent built-in control rig, animation layers, NLE, various effectors including a nodal system) are top-notch and very powerful. But as mentioned elsewhere, MB can be expensive and the learning curve steep.

J|R Pro provides its own method for retargeting, but I haven't used it so I can't make any reliable comments about it. I know that J|R has 'anti-skating' tools to fix this, so you might look into that.

I'll say the same for Ikinema Webanimate--it's supposed to be pretty good but I haven't used it so I can't really comment.

The other thing to consider is one Kinect vs. two Kinects. One Kinect does record data in three dimensions, but two Kinects give you additional depth data from a second vantage point, so you get a more complete representation of the motion.

In iPi DMC, it's technically possible to position two Kinects as far apart as 180 degrees, but because you're recording depth data, it's really only necessary to position them at 90 degrees to get full coverage (as in the 'small child' example above).

FYI, when we made 'Happy Box', we had the Kinects positioned only about 60 degrees apart and we still got very good results. (Though admittedly, the characters are not 'walking' in that short film.)

Even with a single Kinect, however, iPi DMC can be pretty good at tracking feet accurately. If you visit iPi Soft's YouTube channel, you can see some good examples there:

iPi Soft YouTube Channel (http://www.youtube.com/user/mnikonov)

The 'single Kinect' video to pay attention to is probably this one from last year, which shows data that was captured and tracked right on the floor at SIGGRAPH:

Motion capture of SIGGRAPH2011 visitors (http://www.youtube.com/watch?v=98tRav8uN30&context=C4e135ebADvjVQa1PpcFOGrkddKQXRhx_iX8yoBJK6k1QC3_r8aPs=)

You can still see a little slipping but it's really not bad for a single Kinect capture.

Hope this helps.

G.

jeric_synergy
04-09-2012, 11:00 AM
Dang, Greenlaw, thanks for the huge info-pop!! Yikes.

This could easily be a subject for Liberty3D's DVD series. Consider it, please.