Thread: What are people's impressions of Nevron Motion so far?

  1. #1
    Super Member
    Join Date
    Jun 2010
    Location
    Derbyshire, UK
    Posts
    455

    What are people's impressions of Nevron Motion so far?

    Hi, I'm looking into doing some home motion capture work using a Kinect, and I wondered what everyone's thoughts are on Nevron Motion now that it's been out for a while?

    Is it worth the investment, and is it as easy to use as it appears in all the promo videos?

    What's the quality of the mocap like, and does it require a lot of clean-up?

    Also, when doing full-body tracking, is head movement tracked too?

    I'm also looking at the ipiSoft mocap software, which looks pretty good, so I'm just trying to weigh up which to go for.

    Cheers.
    Last edited by Simon-S; 12-16-2013 at 03:01 AM.

  2. #2
    Member snsmoore's Avatar
    Join Date
    Apr 2005
    Location
    Santa Rosa, CA
    Posts
    200
    This is a good time to ask that question. ipiSoft has just announced a 30% sale, so you can get the basic edition for about $400. Here's my experience with Nevron so far.

    I bought into Nevron at the beginning of August. I like the live motion capture, which can be applied to my characters fairly easily, and it's a lot of fun to use motion capture to test out the deformations live. The retargeting works well and is actually the feature I'll probably use the most.

    That being said, I'm a bit disappointed with the quality of single-Kinect motion capture. With a single Kinect, occlusion really is a problem: if you turn slightly, your character will contort into all sorts of unnatural positions. The Nevron live capture is also more "jerky" than other captures I've seen (on the CM site, or even the ipiSoft samples). (Note: Nevron only supports a single Kinect, which is a limitation of the Kinect API implementation. If Microsoft were to add dual-Kinect support to their API, that would really improve things.)

    For comparison, I took a sample .bvh file from the ipiSoft site, loaded it into a LightWave scene and retargeted it with Nevron, and the motion was great. (That sample was captured with a two-Kinect setup, which can only be done with a "post-processed" capture solution like ipiSoft.)
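    For anyone wondering what the retargeting step actually consumes, a .bvh is just a plain-text joint hierarchy followed by a MOTION block with a frame count, a frame time and one line of channel values per frame. Here's a rough Python sketch that reads those basics out of a file - nothing Nevron-specific, and the filename is just a placeholder:

    Code:
    # Minimal BVH inspector: prints joint count, frame count and approximate fps.
    # Illustrative sketch only -- assumes a well-formed file; "sample.bvh" is a placeholder.
    def inspect_bvh(path):
        joints = []
        frames = 0
        frame_time = 0.0
        with open(path) as f:
            for line in f:
                parts = line.split()
                if not parts:
                    continue
                if parts[0] in ("ROOT", "JOINT"):
                    joints.append(parts[1])
                elif parts[0] == "Frames:":
                    frames = int(parts[1])
                elif parts[0] == "Frame" and parts[1] == "Time:":
                    frame_time = float(parts[2])
                    break  # channel data follows; only the header info is needed here
        fps = 1.0 / frame_time if frame_time else 0.0
        print("%d joints, %d frames at ~%.1f fps" % (len(joints), frames, fps))

    inspect_bvh("sample.bvh")  # placeholder filename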

    I'm seriously considering adding ipiSoft mocap to my workflow since I already have Nevron for retargeting. I'm going to run some more tests tonight but I think I know where I'm leaning.

    If you can afford both, Nevron + ipiSoft may make a really good combination. (I think there are other options for retargeting, using IKB as well as RHiggit, but I really like how it can be done in Nevron.)

    -shawn

  3. #3
    Super Member
    Join Date
    Jun 2010
    Location
    Derbyshire, UK
    Posts
    455
    That's some really useful information, thanks.

    I'll very soon be updating my RHiggit license to version 2, which does boast (among a thousand other great features) easy retargeting of mocap data, so by the sound of things I may not actually need Nevron at all. To be honest, the dual-Kinect support is kind of swaying me towards the ipiSoft software, at least for now. I'll give the 30-day demo a shot to find out for sure.

    Thanks again, very helpful

    Simon

  4. #4
    I went Nevron out of naivete, cheapskatedness, and to support the LW3DG; were I to go back and re-do it, I would go iPi first, Nevron second. I would do Nevron and expect all the other folks to pick it up and give me a rig to work with within their system. I am thinking Maestro and RHiggit v2.0, as those are my preferred third-party apps for animating.

    Watching one's object fall into a pile of ... (actually, it can be quite entertaining!) polygons is disconcerting, to say the least. Somewhere, that extra rear/front data needs to be included in the process, or Nevron will be a disappointing solution. At this juncture, it's more novel and supportive than fully production-ready.

  5. #5
    Member snsmoore's Avatar
    Join Date
    Apr 2005
    Location
    Santa Rosa, CA
    Posts
    200
    Quote Originally Posted by UnCommonGrafx View Post
    I went Nevron out of naivete, cheapskatedness, and to support the LW3DG; were I to go back and re-do it, I would go iPi first, Nevron second.
    I agree 100%

  6. #6
    Super Member LW_Will's Avatar
    Join Date
    Feb 2003
    Location
    Somewhere, Out there...
    Posts
    1,188
    I won a copy of NevronMotion at SIGGRAPH this summer. It's been a bit of a slog, but I finally figured it out. (With some help... thanks, you know who you are...)

    It's just not the same thing as iPi. With Nevron, it's in real time. The things the Kinect and Nevron do best are bits of animation... more standing poses or small pieces of animation, rather than full run cycles or rolling on the floor.

    As to the jitteriness of the motion, you can easily turn the damping up... it slows the response, but not noticeably so. Basically, you can't just plug it in, turn on the software and get AVATAR-level animation out of it. You have to learn how to use it as a controller.
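    If you bake the capture and the curves are still noisy, a simple exponential smoothing pass over each channel is the same basic idea as turning the damping up, just applied after the fact. I have no idea how Nevron does it internally - this is just a generic Python sketch of the technique:

    Code:
    # Exponential smoothing of one motion channel (e.g. a joint's rotation keys).
    # alpha near 1.0 = barely smoothed; alpha near 0.0 = heavily smoothed (and laggy).
    # Generic illustration only, not Nevron's actual damping.
    def smooth_channel(values, alpha=0.3):
        smoothed = []
        prev = values[0]
        for v in values:
            prev = alpha * v + (1.0 - alpha) * prev
            smoothed.append(prev)
        return smoothed

    # A jittery rotation track settles toward the underlying motion:
    jittery = [10.0, 12.5, 9.8, 11.9, 10.2, 30.0, 31.5, 29.7]
    print(smooth_channel(jittery))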

    The other thing I think Nevron will be great for is facial mocap. Having the face cage cover your mouth is pretty rad. I hope to have a full tutorial (or maybe a scene) on that one in the next month or so. ;-)

    So for the investment, I think that Nevron (with retargeting) is the best.

    iPi has the limitation that the mocap is a three-part process: shoot the video/Kinect footage, process the footage to extract the mocap, and then retarget the saved mocap (BVH or whatever) onto the model. Which is okay, but with a correctly made model, Nevron is a one-step process. And for iPi you HAVE to have an nVidia card with CUDA cores.

    Is Nevron perfect? Far from it. Is iPi perfect? No... sorry. Each has its flaws and strengths.

    Also, Brekel is working with mocap. His current crop of stuff is good, but again, it is standalone and requires importing. I really like Brekel; his stuff was free until very recently, and his whole package is only $300 (the mocap part is only $100, with the other two parts being facial capture and a scanner). I am very interested in seeing his work with the new Kinect... (you know, the one from the Xbox One, the Kinect 2?). In his latest video he already had the video feed set up and working from the Kinect, with skeleton detection coming next.

    The new Kinect has much higher resolution and more sensors, but it is MUCH more expensive. I think it's something like $400 for the PC version. The PC Kinect is REQUIRED for use with PCs.
    Will Silver
    Animator
    digital TRiP
    "Frak'em all! I USE LIGHTWAVE!"

  7. #7
    And there is also Jimmy Rig, which has some nice things. You toss an unrigged object at it, and Jimmy Rig rigs it. Using a single Kinect, I think you are only three clicks away from having real-time mocap.
    It has many of the same single-Kinect issues that Nevron has - BUT - you really are just a couple of clicks from starting it. There isn't any of the open "*" panel and set to "*" before trying to "*".
    There is a very nice video that shows how to get it going, as well as videos on how to save and combine the clips you make. Then it all exports out as a LW scene file, or as BVH, etc.
    And speaking as an idiot, it is pretty idiot-proof.
    The downsides are that it is single-camera only - and that the folks at Jimmy Rig get sidetracked by paying jobs, so there hasn't been much movement/improvement for a while.

  8. #8
    Member snsmoore's Avatar
    Join Date
    Apr 2005
    Location
    Santa Rosa, CA
    Posts
    200
    Quote Originally Posted by LW_Will View Post
    Also, Brekel is working with mocap. His current crop of stuff is good, but again, it is standalone and requires importing. I really like Brekel; his stuff was free until very recently, and his whole package is only $300 (the mocap part is only $100, with the other two parts being facial capture and a scanner). I am very interested in seeing his work with the new Kinect... (you know, the one from the Xbox One, the Kinect 2?). In his latest video he already had the video feed set up and working from the Kinect, with skeleton detection coming next.
    Yeah, that's another option I haven't really explored (and it looks like it can do two characters at a time, at no additional cost).

    I'll probably hold off on going the iPi route for now. Lots on the horizon with Kinect 2, and I'd like to see what NewTek comes up with. In the meantime I'm hand-animating, since my main characters are less anthropomorphic and have heavy interaction. (I may not save much time by introducing motion capture for them, since it would involve two captures plus audio, and synchronizing all that may be way more work.)

    -shawn

  9. #9
    Axes grinder- Dongle #99
    Join Date
    Jul 2003
    Location
    Seattle
    Posts
    14,732
    I want to send a BIG THANKS to everybody here: info from actual users is SO valuable!!!
    They only call it 'class warfare' when we fight back.
    Praise to Buddha! #resist
    Chard's Credo-"Documentation is PART of the Interface"
    Film the cops. Always FILM THE COPS. Use this app.

  10. #10
    Super Member
    Join Date
    Jun 2010
    Location
    Derbyshire, UK
    Posts
    455
    Quote Originally Posted by jeric_synergy View Post
    I want to send a BIG THANKS to everybody here: info from actual users is SO valuable!!!
    +1

  11. #11
    Super Member LW_Will's Avatar
    Join Date
    Feb 2003
    Location
    Somewhere, Out there...
    Posts
    1,188
    Quote Originally Posted by snsmoore View Post
    Yeah, that's another option I haven't really explored (and it looks like it can do two characters at a time, at no additional cost).

    I'll probably hold off on going the iPi route for now. Lots on the horizon with Kinect 2, and I'd like to see what NewTek comes up with. In the meantime I'm hand-animating, since my main characters are less anthropomorphic and have heavy interaction. (I may not save much time by introducing motion capture for them, since it would involve two captures plus audio, and synchronizing all that may be way more work.)
    I suggest using Nevron for grabbing some mocap, then adding animation over the whole thing. You can rough out the motion and then edit... assuming you use IKBoost. ;-)
    Will Silver
    Animator
    digital TRiP
    "Frak'em all! I USE LIGHTWAVE!"

  12. #12
    Member snsmoore's Avatar
    Join Date
    Apr 2005
    Location
    Santa Rosa, CA
    Posts
    200
    Yeah, I've been getting up to speed on using IKB over the last couple of months. It's the one LW animation option that seems to click with me. I'll probably add Nevron-captured animations, but mainly for background stuff, or if there's a scene that Nevron would fit well.

  13. #13
    I just picked up NevronMotion. Although I don't have the Kinect camera yet (it arrives next week), so far I'm finding everything works fluidly in the parts I currently have access to. I am very excited about the possibilities Nevron will bring to my workflow, even knowing the limitations of the Kinect hardware. Initially I didn't think its retargeting functions would be useful to me, but after using it for only a few minutes I found it incredibly convenient and time-saving in situations where many different rigs are involved, and it provides a robust, standardized platform for transferring motion data between rigs.

    The only thing that's missing is instructions on how to adapt a custom character rig onto the Genoma Nevron rig, which IS possible, and the work involved only ever has to be done once per rig template. I find the Nevron rig excellent for adjusting animation and for adapting other rigs to it, but less optimal for creating animation from scratch, which is what I've noticed some people were not too happy about. Initially I found the Genoma Nevron rig quite "silly" in design... but it makes sense to me now, having worked with it in Nevron... it makes rig adaptation far more convenient because it provides so many points of reference for manual targeting.

    I already have ideas on how to overcome some of the limitations of the Kinect, and I intend to create a few free tutorials for Nevron that demonstrate that stuff (e.g. how to improve facial capture tracking by manipulating the audio).

    I'll be posting examples and test clips as soon as I get my Kinect in the mail.
    Last edited by Ryan Roye; 12-19-2013 at 02:17 PM.
    Professional-level 3d training: Ryan's Lightwave Learning
    Plugin Developer: RR Tools for Lightwave

  14. #14
    Member snsmoore's Avatar
    Join Date
    Apr 2005
    Location
    Santa Rosa, CA
    Posts
    200
    Quote Originally Posted by chazriker View Post
    Initially I didn't think its retargeting functions would be useful to me, but after using it for only a few minutes I found it incredibly convenient and time-saving in situations where many different rigs are involved, and it provides a robust, standardized platform for transferring motion data between rigs.
    Yeah, the retargeting is what I've had the most success with. It works well and is great for taking in mocap from various sources (BVH/FBX). Ryan, I'm looking forward to any tricks you find with your IKB-based workflow (i.e. I'm wondering how well the Nevron Genoma rig with retargeted motions will play in an IKB-centric workflow...).

    I took Will's advice and checked out Brekel's capture software, and for $100 it may well be worth it just to be able to record motion capture on two characters at once. I'd still retarget them one at a time using Nevron, but that at least would cut out having to get the actors' separate takes synchronized. (I'm already going to have to sync up the dialogue unless I can get good sound recording at the same time as the mocap, which may not be possible even with my office's acoustic modifications.)

    edit: Did some testing with some of the Brekel sample motion files. So far I haven't had much success getting them to import cleanly. Rather than spending too much time tweaking things, my time may be best spent doing things 100% in LightWave with Nevron, since capture and retargeting work well there. (I just need to get a smoother capture, which I think I can fine-tune.)

    -shawn
    Last edited by snsmoore; 12-19-2013 at 08:51 PM.

  15. #15
    Registered User
    Join Date
    Aug 2006
    Location
    UK
    Posts
    200
    I've been experimenting with Nevron and the Kinect. I'm a novice at mocap. So far I have found Nevron to be good in some ways, such as introducing me to motion capture and the simple retargeting side, but I haven't found it that accurate so far. Using the Kinect skeleton to record motion and then retargeting to the Genoma rig in my character, I found feet twisted upside down, a lot of twitching, and hands twisting in the wrong direction. I used the smooth tool, but it's still twitchy.
    The vids online make it look simple, but I found myself spending lots of time correcting the character's limbs. Also, when I used the other Kinect skeleton (is there a difference between them?), it wouldn't retarget at all. And when using the live side I found the response a bit slow. That is, the recorder was running slow along the timeline, so when I played back the recorded animation it came out sped up. So it's obviously geared towards a higher-spec system than mine.
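    On the speed issue: if the capture is effectively recording at a lower rate than the scene plays back at, the usual fix is to stretch the recorded keys back out by the ratio of real elapsed time to recorded timeline length (or to raise the frame time in an exported BVH by the same factor). A rough Python sketch of the idea - nothing Nevron-specific, and the numbers are made up:

    Code:
    # Retime recorded keyframes when the capture ran slower than real time.
    # If 10 seconds of performance produced only 6 seconds of timeline,
    # scaling key times by 10/6 restores the original pacing.
    # The numbers below are made up for illustration.
    def retime_keys(key_times, real_duration, recorded_duration):
        scale = real_duration / recorded_duration
        return [t * scale for t in key_times]

    recorded = [0.0, 0.5, 1.0, 2.0, 4.0, 6.0]   # key times in seconds on the timeline
    print(retime_keys(recorded, real_duration=10.0, recorded_duration=6.0))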

    Those are just my first impressions as a novice, and I will keep working on it. But I have to say I have mixed feelings so far, as I find the twitchiness even when standing still a bit disappointing. As mentioned above, iPi would probably be better.
