
What are people's impressions of Nevron Motion so far?



Simon-S
12-16-2013, 03:48 AM
Hi, I'm looking into doing some home motion capture work using a Kinect, and I wondered what everyone's thoughts were on Nevron Motion now that it's been out for a while?

Is it worth the investment and is it as easy to use as it appears in all the promo videos?

What's the quality of the captured motion like, and does it require a lot of clean-up?

Also, when doing full-body tracking, is head movement tracked too?

I'm also looking at iPi Soft's mocap software, which looks pretty good, so I'm just trying to weigh up which to go for.

Cheers.

snsmoore
12-16-2013, 03:55 PM
This is a good time to ask that question. iPi Soft just announced a 30% sale, so you can get the basic edition for about $400. Here's my experience with Nevron so far.

I bought into Nevron at the beginning of August. I like the live motion capture which can be fairly easily applied to my characters. And it's a lot of fun to use motion capture to test out the deformations live. The re-targeting works well and is actually the feature I'll probably use the most.

That being said, I'm a bit disappointed with the single-Kinect motion capture quality. With a single Kinect, occlusion really is a problem. If you turn slightly, your character will contort into all sorts of unnatural positions. ;) The Nevron live capturing is also more "jerky" than other captures I've seen (on the CM site or even the ipiSoft samples). (Note: Nevron only supports a single Kinect, which is a limitation of the Kinect API implementation. If Microsoft were to add dual-Kinect support to their API, that would really improve things.)

For comparison, I took a sample .bvh file from the ipiSoft site, loaded it into a Lightwave scene and retargeted it with Nevron, and the motion was great. (That sample was captured with a two-Kinect setup, which can only be done with a "post-processed" capture solution like ipiSoft.)

I'm seriously considering adding ipiSoft mocap to my workflow since I already have Nevron for retargeting. I'm going to run some more tests tonight but I think I know where I'm leaning.

If you can afford both, Nevron + ipiSoft may make a really good combination. (I think there are other options for retargeting using IKB as well as RHiggit, but I really like how it can be done in Nevron.)

-shawn

Simon-S
12-16-2013, 04:02 PM
That's some really useful information, thanks.

I will very soon be updating my RHiggit license to version 2, which does boast (among a thousand other great features) easy retargeting of mocap data, so by the sounds of things I may not actually need Nevron at all. To be honest, the dual-Kinect support is kinda swaying me towards the ipiSoft software, at least for now. I'll give the 30-day demo a shot to see for sure.

Thanks again, very helpful :)

Simon

UnCommonGrafx
12-16-2013, 04:33 PM
I went with Nevron out of naivete, cheapskate-ness, and to support the LW3DG; were I to go back and redo it, I would go iPi first, Nevron second. I would still do Nevron and expect all the other folks to pick it up and give me a rig to work with within their system. I am thinking of Maestro and RHiggit 2.0, as those are my third-party apps of preference for animating.

Watching one's object fall into a pile of... (actually, it can be quite entertaining!) polygons is disconcerting, to say the least. Somewhere, that extra front/rear data needs to be included in the process, or Nevron will be a disappointing solution. At this juncture, it's more novel and supportive than fully production-ready.

snsmoore
12-16-2013, 06:26 PM
I went with Nevron out of naivete, cheapskate-ness, and to support the LW3DG; were I to go back and redo it, I would go iPi first, Nevron second.

I agree 100%

LW_Will
12-16-2013, 08:05 PM
I won a copy of NevronMotion at SIGGRAPH this summer. It's been a bit of a slog, but I finally figured it out. (With some help... thanks, you know who you are...)

It's just not the same thing as iPi. With Nevron, it's in real time. The things the Kinect and Nevron do best are bits of animation... more standing poses or small pieces of animation, rather than full run cycles or rolling on the floor.

As to the jitteriness of the motion, you can easily turn the dampening up... it slows the motion, but not noticeably so. Basically, you can't just plug it in, turn on the software, and then get AVATAR-level animation out of it. You have to learn how to use it as a controller.

The other thing I think Nevron will be great for is facial mocap. Having the face cage cover your mouth is pretty rad. I hope to have a full tutorial (or maybe a scene) on that one in the next month or so. ;-)

So for the investment, I think that Nevron (with retargeting) is the best.

iPi has the limitation that the mocap is a three-part process: shoot the video/Kinect footage, process the footage to extract the mocap, and then retarget the saved mocap (BVH or whatever) onto the model. Which is okay, but with a correctly made model in Nevron it's a one-step process. And with iPi you HAVE to have an nVidia card with CUDA cores on it.

Is Nevron perfect? Far from it. Is iPi perfect? No... sorry. Each has its flaws and strengths.

Also, Brekel is working on mocap. His current crop of stuff is good, but again, it is standalone and requires importing. I really like Brekel; his stuff was free until very recently, and his whole package is only $300 (the mocap is only $100, with the other two parts being facial capture and a scanner). I am very interested in seeing his work with the new Kinect... (you know, the one from the Xbox One, the Kinect 2?) In his latest video, he already had the video feed set up and working from the Kinect, with skeleton detection next.

The new Kinect has much more resolution and more sensors, but it is MUCH more expensive. I think it's something like $400 for the PC version. And the PC version of the Kinect is REQUIRED to work with PCs.

Davewriter
12-16-2013, 09:44 PM
And there is also Jimmy Rig, which has some nice things. You toss an unrigged object at it, and Jimmy Rig rigs it. It uses a single Kinect, and I think you are only three clicks away from having real-time mocap.
It has many of the same single-Kinect issues that Nevron has - BUT - you really are just a couple of clicks from starting it. There isn't any of the open "*" panel and set to "*" before trying to "*".
There is a very nice video that shows how to get it going, as well as videos on how to save and combine the clips you make. Then it all exports out as a LW scene file, or as BVH... etc.
And speaking as an idiot, it is pretty idiot-proof.
The downsides are the single camera - and that the folks at Jimmy get sidetracked by paying jobs, so there hasn't been much movement/improvement for a while.

snsmoore
12-17-2013, 10:59 AM
Also, Brekel is working on mocap. His current crop of stuff is good, but again, it is standalone and requires importing. I really like Brekel; his stuff was free until very recently, and his whole package is only $300 (the mocap is only $100, with the other two parts being facial capture and a scanner). I am very interested in seeing his work with the new Kinect... (you know, the one from the Xbox One, the Kinect 2?) In his latest video, he already had the video feed set up and working from the Kinect, with skeleton detection next.

Yeah, that's another option I haven't really explored (and it looks like it can do two characters at a time, without additional cost).

I probably will hold off on going the iPi route for now. There's lots on the horizon with Kinect 2, and I'd like to see what Newtek comes up with. In the meantime I'm hand-animating, since my main characters are less anthropomorphic and have heavy interaction. (I may not save much time by introducing motion capture for them since it would involve two captures plus audio. Synchronizing all that may be way more work.)

-shawn

jeric_synergy
12-17-2013, 11:23 AM
I want to send a BIG THANKS to everybody here: info from actual users is SO valuable!!!

Simon-S
12-17-2013, 11:47 AM
I want to send a BIG THANKS to everybody here: info from actual users is SO valuable!!!

+1 :)

LW_Will
12-17-2013, 02:14 PM
Yeah, that's another option I haven't really explored (and it looks like it can do two characters at a time, without additional cost).

I probably will hold off on going the iPi route for now. There's lots on the horizon with Kinect 2, and I'd like to see what Newtek comes up with. In the meantime I'm hand-animating, since my main characters are less anthropomorphic and have heavy interaction. (I may not save much time by introducing motion capture for them since it would involve two captures plus audio. Synchronizing all that may be way more work.)

I suggest using Nevron for grabbing some mocap, then adding animation over the whole thing. You can rough out the motion and then edit... assuming you use IKBoost. ;-)

snsmoore
12-17-2013, 04:59 PM
Yeah, I've been getting up to speed on using IKB over the last couple of months. It's the one LW animation option that seems to click with me. I'll probably add Nevron-captured animations, but mainly for background stuff or for a scene that would fit Nevron well.

Ryan Roye
12-19-2013, 02:38 PM
I just picked up NevronMotion. I don't have the Kinect camera yet (it arrives next week), but thus far I'm finding everything to work fluidly for the parts I currently have access to. I am very excited about the possibilities Nevron will bring to my workflow, even knowing the limitations of the Kinect hardware. Initially I didn't think its retargeting functions would be useful to me, but after using it for only a few minutes I found it to be incredibly convenient and time saving for situations where many different rigs are involved, and it provides a robust, standardized platform for transferring motion data between rigs.

The only thing that's missing is instructions on how to adapt a custom character rig onto the Genoma Nevron rig, which IS possible, and the work involved only ever has to be done once per rig template. I find the Nevron rig excellent for adjusting animation and for adapting other rigs to it, but less optimal for creating animation from scratch, which is what I've noticed some people were not too happy about. Initially, I found the Genoma Nevron rig quite "silly" in design... but it makes sense to me now, having worked with it in Nevron... it makes rig adaptation way more convenient because it provides so many points of reference for manual targeting.

I already have ideas on how to overcome some of the limitations of the Kinect, and I intend to create a few free tutorials for Nevron that demonstrate that stuff (i.e. how to improve facial capture tracking by manipulating the audio).

I'll be posting examples and test clips as soon as I get my Kinect in the mail.

snsmoore
12-19-2013, 08:40 PM
Initially I didn't think its retargeting functions would be useful to me, but after using it for only a few minutes I found it to be incredibly convenient and time saving for situations where many different rigs are involved, and it provides a robust, standardized platform for transferring motion data between rigs.

Yeah, the retargeting is what I've had the most success with. It works well and is great for taking in mocap from various sources (BVH/FBX). Ryan, looking forward to any tricks you find with your IKB-based workflow. (I.e., wondering how well the Nevron Genoma rig with retargeted motions will play in an IKB-centric workflow....)

I took Will's advice and checked out Brekel's capturing software, and for $100 it may be well worth it just to be able to record motion capture on two characters at once. I'd still retarget them, one at a time, using Nevron, but that at least would cut out having to get the actors' separate takes synchronized. (I'm already going to have to sync up the dialog unless I can get good sound recording at the same time as the mocap, which may not be possible even with my office's acoustic modifications.)

Edit: I did some testing with some of the Brekel sample motion files. So far I haven't had much success getting them to import cleanly. Rather than spending too much time tweaking things, my time may be best spent doing things 100% in Lightwave with Nevron, since capture and retargeting work well there. (I just need to get a smoother capture, which I think I can fine-tune.)

-shawn

genesis1
12-21-2013, 11:57 AM
I've been experimenting with Nevron and the Kinect. I'm a novice at mocap. So far I have found Nevron to be good in some ways, such as introducing me to motion capture and the simple retargeting side, but I haven't found it that accurate so far. Using the Kinect skeleton to record motion and then retargeting to the Genoma rig in my character, I found feet twisted upside down, a lot of twitching, and hands twisting in the wrong direction. I used the smooth tool but it's still twitchy.
The vids online make it look simple, but I found myself spending lots of time correcting the character's limbs. Also, when I used the other Kinect skeleton (is there a difference between them?), it wouldn't retarget at all. And when using the live side I found the response a bit slow. That is, the recorder was running slowly along the timeline, so when I played back the recorded animation it came out sped up. So it's obviously geared towards a higher-spec system than mine.

Those are just my first impressions as a novice, and I will keep working on it. But I have to say I have mixed feelings so far, as I find the twitchiness even when standing still a bit disappointing. As mentioned above, iPi would probably be better.

snsmoore
12-23-2013, 05:56 PM
I decided to do another pass of research for motion capture in Lightwave using Nevron, so I spent yesterday afternoon working on fine tuning my workflow and seeing if there were any obvious omissions to the process. I spent this afternoon doing some comparisons with Brekel as well.

Observations:

1) The Kinect does generate a somewhat noisy capture, so using the smoothing presets is necessary; otherwise you'll have lots to clean up (see the rough smoothing sketch after this list). There is some noise (an occasional "twitch" of a bone) even when I wasn't moving.
2) The video is, as expected, noisier in lower light. I wasn't sure how much this would affect the motion capture, so I took one of my 500W lights and directed it at the capture area. It didn't make a dramatic difference (the IR checkbox was off), but the room was already lit and had some light from the window (shutters were somewhat closed).
3) I did a number of movement tests, but even with the smoothing options, I felt I needed a comparison (I was thinking maybe I got a bad Kinect for Windows).
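
For anyone curious what that kind of smoothing amounts to, here's a minimal, hypothetical sketch (plain Python, nothing to do with Nevron's actual presets) of exponentially smoothing a noisy per-frame rotation channel so a one-frame bone "twitch" gets damped:

# Hypothetical illustration only - not Nevron's smoothing preset.
# Exponentially smooth a noisy per-frame angle channel.
def smooth_channel(values, alpha=0.3):
    # Smaller alpha = heavier smoothing (more lag); larger alpha = more
    # responsive but noisier. 'values' is a list of per-frame angles.
    if not values:
        return []
    smoothed = [values[0]]
    for v in values[1:]:
        smoothed.append(alpha * v + (1.0 - alpha) * smoothed[-1])
    return smoothed

# A mostly-still channel with a single-frame twitch at frame 3:
print(smooth_channel([10.0, 10.1, 9.9, 25.0, 10.0, 10.2]))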

I decided to try the free evaluation of Brekel Kinect Pro for comparison. I did 5 different motions and noted my observations from them.

Motions:

1) Bend knees (by 30 degrees or so), arms in T-pose (repeated a few times).
Nevron: The feet would very noticeably drift downward, as if they were going through the floor. The knees showed some bending, but the downward drift was a distraction. (enabling IK on the Nevron targeting window helped a little, but there was still considerable drift. The drift was worse the faster I did the movement.)
Brekel: The feet had only a very minute amount of drift. The knees showed noticeable bending.

2) Sidestep motion
(both capture tools seemed to handle this well)

3) Flapping arms (similar to performing a "jumping jack" exercise without moving the legs)
Nevron: The skeleton drifted upward the faster I did the motion. It almost felt like I could get the skeleton to fly. (Enabling IK in the Nevron targeting window helped just a little, but there was still a lot of drifting.)
Brekel: No drifting of the skeleton.

4) Walking
(Both seemed to do OK, considering the limited space. What was interesting was that Brekel could almost handle walking perpendicular to the Kinect. It seems to have a better occlusion-handling algorithm.)

5) "Hand shake" test (slight bending over of the back and moving right arm out to introduce yourself)
Nevron: Motion was ok, except Nevron favored tilting the wrist and elbow downward.
Brekel: The motion was closer to what I did, with the elbow and wrist in an arc downward (elbow "pointing" downward)

I figured out the problems I had (noted earlier) with the demo FBX imports from Brekel. (You basically have to set IK mode to ON in the Nevron retargeting window on the feet and hands, and not target any "Roll" bones. The FBXs also require a little more adjustment to get the motion capture skeleton to align with my character.)

Side note: the Brekel UI is very good. It is intuitive, and the defaults tend to get you a decent capture. It appears that Brekel uses some of his own algorithms for noise reduction and occlusion issues in his product. Also, I don't see the fact that it is a separate application as too much of a disadvantage, as motion capture tends to be a separate activity - it may not even occur on the same day as the retargeting/editing.

So for $99 more, adding Brekel for capture and using Nevron for retargeting is looking like a good solution (and you can capture two characters at once). Note: I really like using Nevron for retargeting.

-shawn

Ryan Roye
12-31-2013, 10:51 AM
After having a meaningful amount of time to play with the Kinect functions of Nevron, I'll leave my commentary:

The Kinect has some limitations: it can't detect a wrist rotating on its bank axis (it can sense pitch), and if the arms or legs overlap it can have trouble "guessing" how the skeleton should be positioned. Obviously, only one camera means you can't turn around too much... I'd say the maximum is around 40 degrees to the left or right.

For the upper body, the Kinect with Nevron is excellent and highly usable; you can do clips that require next to no cleanup. It greatly increases productivity, and it has opened some doors in terms of what I can do given my busy schedule.

The legs are another story and DO require substantial knowledge of editing and refining motion capture to remove errors (a limitation of only using one camera). I didn't put a lot of work into stabilizing the legs in my full-body example (http://www.youtube.com/watch?v=faX1H1GRthY) as it was only a quick and dirty test; those results were what I got with about 15 minutes' worth of editing. You can get good results out of Nevron/Kinect mocap in terms of leg/body positioning; you just have to know how to process the motion data with Lightwave's motion tools to fix errors.

Facial tracking in real time is not very usable because the Kinect wasn't designed to process a human face that quickly while preserving quality motion data. *However*, playing back the audio clip at 1/2 or 1/4 of its original speed and then performing the facial tracking gives the Kinect a lot more time to scan your face and can therefore yield very high-quality results, which can then be processed with Lightwave's toolset. It's not as efficient doing it this way, but it is still way faster than hand-keying.
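
To make that slow-playback trick a little more concrete, here's a rough sketch of the bookkeeping involved, assuming ffmpeg is available on your PATH; the file names and the keyframe list are placeholders, and the actual capture and retiming would of course happen in Lightwave:

import subprocess

def slow_audio(src, dst, factor=0.5):
    # ffmpeg's atempo filter changes playback speed without changing pitch
    # (values between 0.5 and 2.0 per pass)
    subprocess.run(["ffmpeg", "-y", "-i", src, "-filter:a", f"atempo={factor}", dst], check=True)

def restore_timing(keys, factor=0.5):
    # keys were captured against audio slowed by 'factor'; multiplying the
    # captured times by that same factor puts them back on the original clock
    return [(t * factor, value) for t, value in keys]

slow_audio("dialog_take01.wav", "dialog_take01_half.wav", 0.5)
print(restore_timing([(0.0, 0.1), (2.0, 0.6), (4.0, 0.2)], 0.5))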

I still have some things to figure out with regard to integrating Nevron's mocap functionality into my animation workflow, but so far I'm very satisfied with the results I'm getting relative to the cost of the software and equipment.

jeric_synergy
12-31-2013, 01:48 PM
Clever idea on the slow-playback, chaz.

samscudder
01-08-2014, 02:04 PM
The learning experience has been... interesting. The documentation seems a little sparse. Nevron is a very powerful tool, and it only has a 24-page manual (the motion retargeting section is only 2 pages long!). The folder indicated in the installation instructions didn't exist, and I had to create it, even though I had performed a default LW installation.

That aside, I've been experimenting with Nevron for a few weeks and managed to retarget a BVH to the Nevron rig. Operation was pretty straightforward: I added the rig to my model, imported an FBX I downloaded with the mocap, and used retargeting to slave the Nevron rig to the mocap rig. It took a bit of work to figure out which joints I should retarget, and the T-pose of the mocap was different from the T-pose of the model, but it wasn't too hard to sort it all out.

I also managed to get the Kinect to drive the alien head, and I think I figured out how the Kinect data is transformed in the Virtual Studio.
What would make me a very happy person is if the documentation demonstrated how to add facial mocap to the model. I've managed to convert a C3D file to an FBX (a whole load of nulls moving around), and I'm now trying to figure out how I can use this to drive morphs, how to add jaw and eye bones to the Nevron rig, and how to get the jaw to follow the facial mocap data.
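
For what it's worth, one common way to bridge tracked nulls to morphs is to normalize the distance between two markers against calibrated closed/open poses; here's a hypothetical sketch of the idea (the marker positions and calibration gaps are made up, and this isn't tied to Nevron's or Lightwave's actual tools):

# Hypothetical sketch: turn the gap between a chin null and an upper-lip
# null into a 0..1 "jaw open" morph amount, per frame.
def jaw_open(chin_y, upper_lip_y, closed_gap=0.02, open_gap=0.06):
    gap = abs(upper_lip_y - chin_y)
    t = (gap - closed_gap) / max(open_gap - closed_gap, 1e-6)
    return min(max(t, 0.0), 1.0)  # clamp to the valid morph range

# Sample the nulls' Y positions from the converted mocap data each frame
# and key the morph with the returned value.
print(jaw_open(chin_y=-0.010, upper_lip_y=0.035))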

CaptainMarlowe
01-09-2014, 12:28 AM
I bought Nevron for painless retargeting. I am quite happy with it. No problems so far. Of course, I wish the Kinect/whatever support would be extended to Mac users, but that was not my main goal when I purchased it at a reduced price during SIGGRAPH.

lwaddict
04-06-2014, 12:27 AM
I bought Nevron thinking it would be useful... Turned out to be yet another learning curve I didn't need. Other tools out there work fine. My feeling, given the support so far... It'll go the way of Core, Aura, and SpeedEdit... Just my opinion though.

jwiede
08-08-2014, 01:34 PM
My feeling, given the support so far... It'll go the way of Core, Aura, and SpeedEdit... Just my opinion though.

My opinion is similar to lwaddict's.

The Kinect functionality is just too noisy and limited compared to other (similarly priced or less expensive) capture products available, there's still no word on whether K2 will be supported, etc. The retargeting functionality is useful for basic tasks, but the documentation is quite poor, and the capture rig requirements are too limited.

I had high hopes early on that LW3DG would continually improve Nevron's capabilities and provide real, detailed, useful documentation, but alas, none of that happened. Now it seems like LW3DG has lost interest in supporting / improving / maintaining this product altogether.

This is a plugin with a price tag greater than the vast majority of commercial LW plugins available. Fit and polish should be perfect. _Everything_ should be documented, including detailed internal explanations, to enable and ease troubleshooting. New hardware support should be timely, and really there should also be an SDK provided so we can implement our own hardware support as needed. Customers should absolutely not have to beg for scraps of info on how to set things up or how it works.

LW3DG demands a premium price for Nevron, but has not delivered a premium product, nor even close (IMO).

Megalodon2.0
08-12-2014, 05:31 PM
My opinion is similar to lwaddict's.

The Kinect functionality is just too noisy and limited compared to other (similarly priced or less expensive) capture products available, there's still no word on whether K2 will be supported, etc. The retargeting functionality is useful for basic tasks, but the documentation is quite poor, and the capture rig requirements are too limited.

I had high hopes early on that LW3DG would continually improve Nevron's capabilities and provide real, detailed, useful documentation, but alas, none of that happened. Now it seems like LW3DG has lost interest in supporting / improving / maintaining this product altogether.

This is a plugin with a price tag greater than the vast majority of commercial LW plugins available. Fit and polish should be perfect. _Everything_ should be documented, including detailed internal explanations, to enable and ease troubleshooting. New hardware support should be timely, and really there should also be an SDK provided so we can implement our own hardware support as needed. Customers should absolutely not have to beg for scraps of info on how to set things up or how it works.

LW3DG demands a premium price for Nevron, but has not delivered a premium product, nor even close (IMO).
Unfortunately I have to agree. I feel that Nevron Motion will end up like SpeedEdit, Vidget and Rendition. The "lost interest" comment seems perfectly apt. I was hoping to see support for the Asus Xtion Pro Live, but there's been no word at all. I feel that I've purchased yet another piece of software that will languish and die. Oh well...