
How to target to custom rig?



hdace
12-30-2013, 09:14 PM
I'm trying to find the best method for importing OptiTrack data for facial mocap, preferably using Nevron. I have a custom facial rig to target the data to. Getting the source is easy, but I can't work out how to apply the data to my custom rig. There must be a way to do it, but it isn't documented anywhere. There are several target presets, but no obvious means of selecting the channels I want.

Greenlaw said this: "I'm mainly interested in NM because it allows you to drive animation in unusual ways by feeding the data into any animateable channel." http://forums.newtek.com/showthread.php?138058-nevron-vs-ipi-motion-capture

But there's no other indication on the forum (or in the documentation) as to how to do it.

Ryan Roye
12-30-2013, 09:53 PM
The setup that Nevron is bundled with is, in my opinion, more complicated than it needs to be. In many cases you don't need more than a single node to make it happen. I prefer either BoosterLink or Cyclist to drive facial controls, because the keyframes can be manipulated in real time in the graph editor to calibrate the controls to your face (i.e., I can easily determine how wide my character's mouth should open in relation to my own without the guesswork you'd have to deal with in a nodal setup).
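Here is roughly what those graph editor keyframes are doing, expressed as a few lines of plain Python (a sketch only, with made-up numbers, not a LightWave API): map the performer's observed range for a channel onto the character's morph range.

def remap(value, src_min, src_max, dst_min, dst_max):
    """Linearly remap value from [src_min, src_max] to [dst_min, dst_max]."""
    t = (value - src_min) / (src_max - src_min)
    t = max(0.0, min(1.0, t))  # clamp so overshoot can't break the morph
    return dst_min + t * (dst_max - dst_min)

# e.g. my jaw-open channel spans 0.1..0.8, but the character's MouthOpen
# morph should only ever reach 60%:
morph_percent = remap(0.45, 0.1, 0.8, 0.0, 60.0)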

Check out this page if you haven't already, look at the animation units section:
http://msdn.microsoft.com/en-us/library/jj130970.aspx

Each animation unit is calculated from your facial image and returns a value between -1 and +1. To practice, add a single null to your scene and add a motion modifier called "Virtual Studio Trait", open "Studio", and go into the node editor for that null. Search for "Device", add that node, and double-click it; you'll need to enter:

Manager Name: Kinect For Windows
Device Name: Kinect_A (or whatever you named your Kinect)

After hitting "aquire connections", the kinect node should change and give you options... you can copy/paste the kinect node if you'd like to save yourself some typing for other items. For demonstrative purposes, search "vector", and throw in a "make vector" node, hook it up to "X", and then into "Position"... see below:

http://www.delura.tanadrine.com/image_manualupload/VectorNodeSetup.png

After doing this, you should notice that the null responds to your mouth movement once facial tracking is working... when your mouth is fully open, the null will move ONE meter along the X axis. When closed, it'll be near the origin. You can use this as a starting point to construct something usable. I hope to post some free alternative Nevron rig setups in the near future.
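Written out as pseudocode, the node graph above boils down to this (get_animation_unit is a made-up stand-in for the Device node output, just to show the data flow):

jaw_open = get_animation_unit("JawLower")  # 0..1 from the face tracker (hypothetical call)
position = (jaw_open * 1.0, 0.0, 0.0)      # "make vector" into Position, in meters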

Also, adapting a custom rig (the skeleton) to Nevron isn't documented because it mainly involves LightWave's other tools and is a process of manual targeting (i.e., translating the rotations of bones/objects to each other so that even bones with differing rotation orders yield the same results when manipulated). A lot of it comes down to "Same as Item" constraints, but it is a bit more involved than what can be explained in text.
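To give a rough idea of the rotation-order problem, here is a Python/SciPy sketch (nothing LightWave-specific, and the angles are made up): the same physical orientation yields different angle values under different Euler orders, which is why you can't just copy rotation channels between mismatched bones.

from scipy.spatial.transform import Rotation as R

# the source bone stores its rotation as ZXY Euler angles (degrees)...
src = R.from_euler('zxy', [30.0, 10.0, -5.0], degrees=True)

# ...but the target bone evaluates YXZ, so re-extract the angles in that order
retargeted = src.as_euler('yxz', degrees=True)
print(retargeted)  # same orientation, different numbers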

Hopefully this sheds at least a little light on what I consider yet another poorly-documented but powerful tool.

hdace
12-30-2013, 11:16 PM
Wow, Chazriker, you really went to town with those explanations; they should help others out there. Unfortunately, I already know all about Nevron's Kinect capture functions; I've tried them out many times over the last few months. That's why I stopped: it's just not responsive enough. The best I ever got was about 22 fps, which just doesn't cut it when it comes to lip sync. I understand all about the animation units, etc.

So I bought an OptiTrack system. It's VERY expensive, but it runs at 100 fps. Now you're talkin'!!

I was talking strictly about retargeting after motion capture in OptiTrack. It outputs FBX data, which imports into LW as nulls. I thought Nevron's retargeting plugin would help, but clearly it doesn't. I've found an old thread showing that this is not a route I can take.

http://forums.newtek.com/showthread.php?136811-NevronMotion-and-custom-rigs

So clearly I've got to translate the data manually. It doesn't look like anyone else has ever tried to import OptiTrack data into LW before, so this is going to be fun (not). OptiTrack is designed to be used with MotionBuilder or FaceRobot (which doesn't even exist anymore). My current plan is sketched below; any advice from anyone appreciated.
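Roughly, the plan is to derive scalar controls from distances between the imported nulls. A toy version in Python with NumPy (the marker names, positions, and calibration distances are all hypothetical, not an OptiTrack or LightWave API):

import numpy as np

def jaw_open_amount(chin_pos, nose_pos, rest_dist, max_dist):
    """Return a 0..1 jaw-open value from two face-marker positions."""
    d = np.linalg.norm(np.asarray(chin_pos) - np.asarray(nose_pos))
    t = (d - rest_dist) / (max_dist - rest_dist)
    return float(np.clip(t, 0.0, 1.0))

# sampled per frame from the imported FBX nulls (positions in meters):
value = jaw_open_amount([0.0, -0.05, 0.10], [0.0, 0.02, 0.11], 0.07, 0.11)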

Ryan Roye
12-31-2013, 09:37 AM
In this case, the solution you're looking for lies outside of Nevron. You'll have to use Channel Follower, Cyclist/BoosterLink, nodes, or other means of manually assigning minimum/maximum morph constraints to tell LightWave what to do with the data you've provided via FBX. The process should be quite straightforward: you only ever have to calibrate the OptiTrack nulls to LightWave once, and you can keep that as a template to drive whatever facial animation you need. My suggestion would be to play a small section of the track and tweak graph editor keyframes until the results begin to look like what you see in OptiTrack. Alternatively, you can skip the whole process and use MDD files, but of course you lose editability going that route.
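To make the "calibrate once, keep it as a template" idea concrete, here is a rough sketch in plain Python (the data layout is hypothetical, not an actual FBX or LightWave API): one pass records each channel's extremes, and every later take is normalized against that stored template.

def calibrate(frames):
    """frames: list of {channel_name: value} dicts. Returns per-channel (min, max)."""
    template = {}
    for frame in frames:
        for name, v in frame.items():
            lo, hi = template.get(name, (v, v))
            template[name] = (min(lo, v), max(hi, v))
    return template

def normalize(frame, template):
    """Map each raw channel value into 0..1 using the stored extremes."""
    out = {}
    for name, v in frame.items():
        lo, hi = template[name]
        out[name] = 0.0 if hi == lo else (v - lo) / (hi - lo)
    return out

# one calibration clip, then reuse the template for every take:
template = calibrate([{"jaw": 0.1}, {"jaw": 0.8}, {"jaw": 0.4}])
print(normalize({"jaw": 0.45}, template))  # {'jaw': 0.5}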

Nevron's retargeting is more for adapting the Genoma template bipedal animations (which greatly eases custom rig adaptation)... not so much for stringing up constraints the way you'd need to here.

As for the Kinect face tracking, I agree it isn't suitable for real-time work; at 30 FPS it can struggle to keep up with what you're doing. I find I have to slow down the clip to get upwards of 90 FPS of effective tracking... it takes a bit longer, but it yields quality results on a budget system.
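The arithmetic behind the slow-down trick, as a tiny Python sketch (the numbers are just an example): play the performance back at one-third speed so the 30 FPS tracker effectively samples 90 frames for every second of real performance, then compress the captured key times by the same factor.

def retime_keys(keys, slowdown):
    """keys: list of (time_seconds, value); slowdown: 3.0 for one-third speed."""
    return [(t / slowdown, v) for t, v in keys]

# 3 s of tracked clip = 1 s of real performance at one-third speed
original = [(0.0, 0.0), (1.5, 0.7), (3.0, 0.1)]
realtime = retime_keys(original, 3.0)  # [(0.0, 0.0), (0.5, 0.7), (1.0, 0.1)]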

hdace
12-31-2013, 06:20 PM
Thanks so much for your help, chazriker. You've given me several ideas I hadn't thought of. However, one thing you haven't mentioned is whether it's feasible to control a face using bones. I've tried it before with limited success. The OptiTrack nulls move in 3D, and it seems a waste of valuable data to apply only small chunks of it to morphs.

Slowing down the track for Nevron tracking is a clever idea; that might have helped me previously. However, there's another problem I don't like: there's no left/right separation. Also, OptiTrack allows for far more detail, like eyelids closing and cheek movement. If I can properly apply the data, that is...

Ryan Roye
01-01-2014, 11:32 AM
However, one thing you haven't mentioned is whether it is feasible to control a face using bones.

Bones can be used in exactly the same way as objects when it comes to acting as controllers for morphs. If you mean having the bones act as actual deformers rather than constraint controllers, you could try changing the bones to joints and tweaking the falloff type and strengths to get an approximately equivalent result, but I'm not sure how well that will work... fortunately, testing it should only take a minute or two.

RebelHill
01-01-2014, 11:58 AM
Yeah... you're pretty much stuffed here; there's not a lot you can do other than build the whole system yourself.

The same issue (null positions from face capture) has come up a few times over the years, but there's next to nothing you can do in LW to make use of this data, I'm afraid. You can set up the basics of either a boned face or a morph-based face, as you mention, but neither is going to be anywhere near ideal, and each will come with a slew of problems that can't easily be fixed.

hdace
01-01-2014, 06:23 PM
Thanks, guys. Yes, I meant how well bones themselves can control face movement, without morphs. RebelHill's RHiggit has face bones built in that I've tried using once or twice, then abandoned. Now's the time to try again!

I've already created a rig that removes the head movement; boy, that took a whole day. Luckily, OptiTrack supplies a headband with four special markers just for head movement, so it was a matter of figuring out how to cancel that movement out (see the sketch at the end of this post). Now to the real work.

Another lucky thing is that I've learned tons about rigging from using RHiggit over the last three years, so let's see if I'm any good at reverse engineering! Of course, it's a whole different ball of wax, or RebelHill would have done it already. I think you should do it when you get around to it; God knows I'd pay for it. It might be a while till I get version two of RHiggit, though, since I'm very happy with what I've got. I need to sit down and watch the new-features movie when I have time (when I'm rendering!).
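Going back to the head-motion cancellation: for anyone curious, the underlying math is a rigid best-fit of the four headband markers per frame, whose inverse is then applied to the face markers. A NumPy sketch of the standard approach (not literally what my rig does, and the data layout is hypothetical):

import numpy as np

def rigid_transform(rest_pts, frame_pts):
    """Best-fit rotation R and translation t mapping rest_pts onto frame_pts (Kabsch)."""
    a = np.asarray(rest_pts)   # 4x3 headband markers at rest
    b = np.asarray(frame_pts)  # same markers on the current frame
    ca, cb = a.mean(axis=0), b.mean(axis=0)
    H = (a - ca).T @ (b - cb)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t

def stabilize(face_pt, R, t):
    """Remove the head's rigid motion from one face marker."""
    return R.T @ (np.asarray(face_pt) - t)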

RebelHill
01-02-2014, 06:18 AM
I meant how well bones themselves can control face movement, without morphs. RebelHill's Rhiggit has face bones built in that I've tried using once or twice, then abandoned.

Getting bones to shape and animate a face is VERY difficult and exceedingly complicated to set up. Sure, you can do something very simple without much effort... but a really detailed facial setup... I'm not entirely sure it's possible in LW. As for the face joints in RHiggit 1, those are just there to provide a bit of secondary shaping on top of the main morph-based rig; they won't help you here, not one bit.

hdace
01-02-2014, 06:28 PM
Ah, that explains it. I always wondered about those bones. Morphs. Bones. Who cares? Sooner or later I'll have a rig that works (probably with a few dozen expressions). I don't care if it takes a week or a month, we'll get there. You guys are great. I feel like Newton standing on the shoulders of giants. Let's hope I don't fall off!!