View Full Version : nevron vs. ipi motion capture ?

10-07-2013, 08:45 AM
nevron vs. ipi motion capture ....
anyone have any non-biased opinions,
preferably based on production experience?

the demo at Siggraph seemed a little laggy, and it also jumps around a bit?

I realise Nevron does more than simply capture and output mocap files;
it's a tool for the application and refinement of mocap, but I'm just asking about the actual capture.
Does Nevron track the same person from two angles to improve the solve, or only one angle per character?


10-07-2013, 09:04 AM
If you're concerned about quality then you'll have to use multiple Kinects, which iPi supports and Nevron does not.

It would appear LW3DGroup went for the cost-effective approach with Nevron and will build on it over a longer period of time. So of course it's going to appear a little immature compared to iPi's motion capture.

10-07-2013, 10:01 AM
good answer ... thanks mate :D

Unfortunate... I would really like iPi-quality mocap, and also to try combining the Virtual Studio tools with mocap recording so I can record camera motion at the same time.


10-08-2013, 01:17 AM
If you're only using LW, Nevron is the better choice since it offers retargeting within LW and you're able to import mocap files from other sources. Hopefully it's a top priority for them to support your own rigs, like Animeeple and MotionBuilder do.

iPi is a lot more stable capturing performances, even with a single Kinect. The only drawback is that LW support is not so good: there are quite a few steps to import FBX motion envelopes if you don't have MotionBuilder. For other software, though, it's two simple clicks to import mocap from iPi into 3ds Max, for example.
Also, iPi can only retarget mocap that it created itself; you can't use the tons of mocap files already available on the net.
I don't know if they plan to add the ability to load external mocap into iPi, retarget it to a target rig, and then export to 3ds Max, LW, etc.
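At its simplest, the retargeting discussed above is just remapping rotation channels from the capture skeleton's joint names onto the target rig's joint names. Here is a minimal sketch in Python; the joint names and channel layout are made up for illustration, and a real retargeter (iPi's, Nevron's, or MotionBuilder's) also has to handle differing rest poses, joint orientations, and bone proportions:

```python
# Minimal sketch of name-based mocap retargeting: copy per-frame rotation
# channels from source-skeleton joints to the corresponding target-rig joints.
# Joint names here are hypothetical; real tools also compensate for rest-pose
# and proportion differences between the two skeletons.

# Map source (mocap) joint names to target (character rig) joint names.
JOINT_MAP = {
    "Hips": "pelvis",
    "Spine": "spine_01",
    "LeftArm": "upperarm_l",
    "RightArm": "upperarm_r",
}

def retarget(frames):
    """frames: list of {source_joint: (rx, ry, rz)} dicts, one per frame."""
    out = []
    for frame in frames:
        out.append({JOINT_MAP[j]: rot for j, rot in frame.items() if j in JOINT_MAP})
    return out

mocap = [{"Hips": (0.0, 5.0, 0.0), "LeftArm": (10.0, 0.0, 0.0)}]
print(retarget(mocap))  # -> [{'pelvis': (0.0, 5.0, 0.0), 'upperarm_l': (10.0, 0.0, 0.0)}]
```

This also shows why iPi can only retarget its own captures: the tool has to know both skeletons' naming and rest poses, and arbitrary downloaded mocap files won't match.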

10-10-2013, 08:24 AM
iPi gives the best quality mocap in its price bracket... end of. Its Kinect stuff is better than the other offerings, and using multiple cams works even better. It also has retargeting built in, and the process of getting a character out of LW, retargeting it in iPi to your mocap, and importing it back into LW is quick and easy.

10-16-2013, 10:30 AM
I bought Nevron and the Kinect for Windows, but was not very excited about the unstable hand and foot capture Nevron does. I tried iPi and was quite astonished at how precisely it works. The second thing is that I can choose the usable time range on the timeline in iPi; it is more complicated to delete the useless first frames of a capture in LW. And it's possible to export the capture to e.g. BVH. So I use iPi for capturing, export to BVH, import into LW, and do the retargeting with Nevron. I didn't know that retargeting is possible in iPi too. The results are very good.
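For anyone curious what the BVH export in that workflow actually contains: a BVH file is plain text with a HIERARCHY section (the joint tree, offsets, and channel lists) followed by a MOTION section of per-frame channel values. A minimal sketch that pulls the joint names out of a BVH header; the tiny sample skeleton below is made up for illustration, not taken from an iPi export:

```python
# Minimal sketch: extract joint names from the HIERARCHY section of a BVH
# file. BVH is plain text: ROOT/JOINT lines declare joints, "End Site"
# marks chain tips, and MOTION starts the per-frame channel data.
# The sample is a made-up two-joint skeleton for illustration.

SAMPLE_BVH = """HIERARCHY
ROOT Hips
{
  OFFSET 0.0 0.0 0.0
  CHANNELS 6 Xposition Yposition Zposition Zrotation Xrotation Yrotation
  JOINT Spine
  {
    OFFSET 0.0 10.0 0.0
    CHANNELS 3 Zrotation Xrotation Yrotation
    End Site
    {
      OFFSET 0.0 10.0 0.0
    }
  }
}
MOTION
Frames: 1
Frame Time: 0.0333333
0 0 0 0 0 0 0 0 0
"""

def bvh_joint_names(text):
    """Return joint names in declaration order from a BVH file's text."""
    names = []
    for line in text.splitlines():
        parts = line.strip().split()
        if parts and parts[0] in ("ROOT", "JOINT"):
            names.append(parts[1])
        elif parts and parts[0] == "MOTION":
            break  # frame data follows; no more joint declarations
    return names

print(bvh_joint_names(SAMPLE_BVH))  # -> ['Hips', 'Spine']
```

Because the format is this simple, it travels well between iPi, LW, and other packages, which is why BVH keeps coming up as the interchange step in this thread.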

10-23-2013, 04:03 PM
AFAIK iPi doesn't do facial capture.
Nevron (well, Virtual Studio actually) also allows you to attach an Xbox controller for extra secondary morph triggers that can be really handy (blink, swallow, keepLipsClosed), and these can be used at the same time as your facial capture.

10-24-2013, 04:11 PM
For quality kinect-based mocap data, iPi Mocap Studio is the best value around. The latest version even supports three Kinect sensors with results comparable to using six PS3 Eye cameras. I use this system simultaneously with two Kinects and three PS Move controllers (one for each hand and the head), and it works great! I have a third Kinect in our studio but haven't had the time to try the triple Kinect configuration yet. I expect to be testing the triple system next month though because we might have some work coming in soon that will require it.

I purchased Nevron Motion when it was released during Siggraph. I'm mainly interested in NM because it allows you to drive animation in unusual ways by feeding the data into any animatable channel. Also, Nevron Motion can do basic face capture directly in LightWave. Sadly, I have had even less time to do much with this system. (Sigh! Too many projects, too little time.)


10-28-2013, 05:52 AM
Greenlaw, you used the PS3 controllers to track your hand and head rotations during the kinect mocap session? How does that work basically?

10-28-2013, 08:49 AM
Greenlaw, you used the PS3 controllers to track your hand and head rotations during the kinect mocap session? How does that work basically?

Yes. Wrist and head rotations are recorded to a PC using PS3 Move controllers via Bluetooth (no gaming console needed). For this to be accurate, you need to hold the controllers firmly, but some users attach the controllers to the back of gloves to free up their fingers. I like to hold at least one controller because you can use the buttons to start and stop recording. The third controller can be attached to the head using a hat or harness. Here's my head rig:

Little Green Dog's DIY Mocap Helmet (http://littlegreendog.blogspot.com/2013/04/mocap-helmet-update.html)

Alternatively, you can use a Wii Motion Plus for this, but it's not as accurate as the PS Move's magnetometer. And, yes, this works while recording body motions using up to three Kinects simultaneously.

Currently, iPi Soft is looking into the Myo controller, which you wear on your forearm. The Myo reads electrical signals from your muscles to record wrist AND finger motions. In theory, it should work simultaneously with the three Kinects for body capture and the PS Move for the head, eliminating the PS Moves for the wrists.

After we finish our first 2D film (in a couple of weeks), we're going to focus on 'B2' again, and will record the remaining mocap for the film using the triple Kinect system and three PS Move controllers. (Funny note: we have all these game controllers but we do not own a single gaming console.)

Here's an old video that shows me using two PS Move controllers for the wrists (while recording the body with two Kinects). This was our very first test using the PS Move controllers, and I think it worked out decently.

Sister's Mocap Test (http://www.youtube.com/watch?v=jASC8IOsIqY)

FYI, no head control in this video because it was recorded last March before that feature became available. We did use the head controller in the 1 minute 'B2' excerpt we released a few months ago. Some of it was shown on the LightWave demo reel at Siggraph. Here's the excerpt:

Excerpt from 'Brudders 2': A Work In Progress (https://vimeo.com/channels/littlegreendog/68543424)

It's getting pretty crazy isn't it? :D


10-28-2013, 09:12 AM
Just so there's no misunderstanding, the iPi Mocap Studio system is not a 'realtime' system like Nevron Motion, and I know that's an important selling point for many users. After recording motion with iPi Recorder, Mocap Studio requires a separate tracking session on a computer with a decent gaming card (I use an old GTX 460). The data then needs to be retargeted for LightWave. You can retarget within iPi Mocap Studio, but I think some users have had issues with that. I retarget my iPi data in MotionBuilder and then use LIFS MOME in LightWave to merge the data to my LightWave rigs.

Nevron Motion records directly inside of LightWave and it has basic retargeting built-in. Being able to do it all within your primary 3D program certainly has its appeal. Also, Nevron Motion can do face capture while iPi Mocap Studio does not.

To sum it up, the system you choose depends on what you need it for and how much time/money you're willing to invest in setting it up and learning how to use it.

Hope this info is helpful.


10-28-2013, 09:21 AM
Oh, I forgot to mention: even without the PS Move head controller, iPi Mocap Studio can record some head rotation. That's what you see in the Sister demo video above. It's not nearly as accurate as the PS Move head data, though, because it doesn't include Z rotation.


11-02-2013, 11:46 AM
Thanks again for all those tidbits of tips. Much appreciated, Greenlaw.

12-30-2013, 07:11 AM
I'm mainly interested in NM because it allows you to drive animation in unusual ways by feeding the data into any animatable channel.


How the heck do you do that? I've just bought OptiTrack for facial mocap because the Nevron face stuff just isn't up to scratch. Trouble is, OptiTrack outputs nulls to FBX. I import that into LW and want to retarget the nulls to some face bones. But how do you set the targets? Import an .nmr file? What the heck is that?

12-30-2013, 02:15 PM
Another option, if you are limited on budget and want better results than what you get out of Nevron, is Brekel Kinect Pro Body. I did a comparison between Brekel's solution and Nevron and documented the results in another post here (http://forums.newtek.com/showthread.php?139221-What-are-peoples-impressions-of-Nevron-Motion-so-far).

Brekel Kinect Pro Body will only set you back $99, and the UI is really intuitive and all on one screen. It captures in real time and by default uses some better algorithms for capture smoothness, as well as addressing occlusion. Takes are recorded with full video/audio along with the capture, which is much easier and more organized than doing it in LightWave. You get a good set of features for $100, and you can capture 2 characters at once, although with a single Kinect the kinds of interactions possible between 2 characters are still going to be really limited. Jasper Brekel is very responsive to emails, which is really nice. He answered all my questions before I purchased it.

Of course, if you want multiple Kinects to fully address occlusion, then iPi Soft is the only solution that supports it.

BTW, the workflow of taking an FBX and pulling it into LightWave for retargeting with Nevron isn't much more difficult than using Nevron to capture. With Nevron, you still have to import the Nevron capture skeleton, so you don't save a lot in terms of steps. I also find that I do my motion capture in batches, so having that piece outside of LightWave isn't that big of a deal.

01-12-2014, 04:59 PM
I have to agree with a previous post that Nevron's live capture gave less than great results: feet and hands twisted out of shape. I switched to iPi Soft and the capture is far more accurate. I then imported the motion file as BVH into LW and used Nevron to retarget. This worked much better for me.

01-14-2014, 03:14 AM
I use iPi Soft for capture and export the result as a BVH file into LW, then manually retarget the bones and save it as a general BVH setup, as the normal BVH preset doesn't recognise it. This works great for me, and I can fine-tune it in LW. iPi Soft gives much better capture results even with a single Kinect. I am awaiting the arrival of a second Kinect so I can get even better results.
