
Muscle Simulation - Bones driving an Image Sequence, possible?



erikals
10-08-2007, 04:05 AM
I've been playing around with the idea that I could fake muscle simulation by using a displacement map.
I would simply use Bones to drive the Image Sequence, kind of like JointMorphPlus, but with an Image Sequence instead.

I've been trying to figure out how to make this work.
The principle is pretty simple, but I just can't figure out how to make it work with LW's built-in tools.

Is there no way to "drive" an Image Sequence in Lightwave?
Right now, from what I can see, the Img.Seq. can't be controlled by e.g. a Bone or by Cycler.

There is a trick where I could envelope each frame manually, but that's a lot of work, as it would have to be done for 2x150 frames.

Any ideas on this one? http://erikalstad.com/smiley/Notsatisfied.gif

loriswave
10-08-2007, 04:29 AM
Using an image to displace a muscle is a good idea, but why a sequence?
I think you could use 2-3 images and blend them with transparency. The transparency of a texture has an envelope, so you can link the transparency envelope to a bone rotation. It's not a sequence, but I think it's a nice solution.

erikals
10-08-2007, 04:46 AM
Yep, you're right, I wouldn't need 150. I could cut it down, though I'd maybe have to use 50 or so, as it would be rendered in HD.

But blending 2-3 images wouldn't work in that case, as the "blend" would show.
http://erikalstad.com/cgtemp/Badblend.gif
The 2/3 blend would look something like this, so unfortunately it wouldn't work.

loriswave
10-08-2007, 05:04 AM
I'm not sure, but I think that while your concept is right for the "video world", with deformation or displacement things are a little different. You are working with the height of the mesh. In your gif you did a mix of A into B, but really you need to add A on top of B, so that B carries some of A's information inside it. It's difficult to explain with my poor English, but if you try it with a proper displacement map you can get a great result. Think of it not as a mix from blue to red, but black blending with black and white with white: where the muscle is flat there is black that becomes white, passing through some grey in the middle. If you blend white over black you get grey at the start and then (at 0% transparency) white. I hope I'm being clear.
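The difference between crossfading and compositing displacement maps can be sketched with a couple of toy grayscale "maps" in numpy (hypothetical 0..1 values, not LW's actual blending code):

```python
import numpy as np

# Two hypothetical single-channel displacement maps, values in 0..1
# (0 = flat/black, 1 = full muscle height/white).
flat  = np.zeros((4, 4))   # muscle relaxed: all black
bulge = np.ones((4, 4))    # muscle flexed: all white

# A plain 50% opacity crossfade halves the displacement height...
crossfade = 0.5 * flat + 0.5 * bulge   # 0.5 everywhere (grey)

# ...whereas a Lighten/Max-style composite keeps the full height
# wherever either map is raised.
lighten = np.maximum(flat, bulge)      # 1.0 everywhere (white)

print(crossfade[0, 0], lighten[0, 0])
```

This is why a straight opacity blend of two bulge maps makes the muscle sag to half height mid-transition, while a composite that keeps the brighter (higher) pixel does not.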

erikals
10-08-2007, 05:42 AM
Yup, pretty sure I know what you mean; one can achieve that with the Darken or Multiply filter, and there would be an improvement, good thinking. However, it still wouldn't blend smoothly enough with 2/3 frames.

Unfortunately the problem arises even more with a slow-motion walk.
I'm thinking of using this technique on a Dino, where the muscle groups are very big, so it is hard to make do with only a few images.

I think this is on to something, just need to make it work... :)

pooby
10-08-2007, 07:06 AM
It's very annoying that you can't control image sequences with a driver.
I asked for this as a feature for a similar reason to yours, Erikals. It would definitely work for muscles with a bit of fiddling.

I DO have another idea that should work. I have done some early testing that looks promising, but it involves doing your animation, 'filming' the muscles, then applying that back to the surface mesh as a displacement.

In stages, here's the concept:

1- Make a bunch of muscle objects, rigged to work under the skin.

2- Use the basic rig to also deform your target mesh.

3- You should end up with a mesh that contains all the moving muscles underneath. The muscles should not penetrate the target mesh, so the target mesh will need to be oversized. (It doesn't need much modelling detail, as this will be provided entirely by the muscles.)

4- This is the clever part: you use the surface mesh as a camera to make a depth map of the underlying muscles, using the surface baking camera and fog. Make the muscles 100% luminous and the fog black, and fine-tune the fog distance so the nearest point is absolutely white and the furthest point black.

5- Render the sequence (blur it a bit to get rid of any sharp edges), then bring the sequence back in and apply it with Normal Displacement to the UVs of your target mesh.

6- See if it worked or not.
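The fog setup in step 4 amounts to a linear depth-to-brightness remap. A minimal numpy sketch (made-up distances, not actual baked output) of what the depth map encodes:

```python
import numpy as np

# Hypothetical per-pixel distances from the surface mesh "camera"
# down to the underlying muscles (arbitrary units).
depth = np.array([[0.2, 0.5],
                  [0.8, 1.1]])

# Black fog over 100%-luminous muscles is a linear remap:
# nearest point -> 1.0 (white), furthest point -> 0.0 (black).
near, far = depth.min(), depth.max()
displacement = (far - depth) / (far - near)

print(displacement)  # 1.0 at the nearest pixel, 0.0 at the furthest
```

Fine-tuning the fog distance in LW is effectively choosing `near` and `far` so the full 0..1 range is used.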

erikals
10-08-2007, 07:49 AM
Thanks for suggestions : )
Heh, that was basically 96% of what I was thinking of :)
It should work, I've tested something almost the same in the past.

Cool thing is that the images need no pre-calculation.
Umf, NT really should add the driver; it shouldn't be that difficult.

Well, unless NT comes up with something smart, this is what I will do.

Thanks again, hope I'll finish the project, if so, I will post it for sure, should turn out to be something cool.

pooby
10-08-2007, 10:02 AM
By the way, you 'COULD' drive an image sequence with a bone if you wanted to spend hours setting it up.
You'd have to bring each image in the sequence in individually, then use 'Cycler' on the opacity of the channel to reveal them in a sequence timed to the rotation of the bone.
It would be a pain, but do-able.

The only problem with linking an image sequence to a null is that you won't get 'in between' frames. So whilst it's fine on broad moves, if your arm movement was very slow, you'd suddenly get a pop when it jumped from one image to the next.
So, to counter this, you'd have to have loads of in-between images.
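One way to get those in-between images is to pre-generate them offline by crossfading adjacent frames of the sequence (fine between neighbouring frames, unlike the 2/3-image blend above). A rough numpy sketch, with toy 2x2 "maps" and a hypothetical helper name:

```python
import numpy as np

def make_inbetweens(frame_a, frame_b, n):
    """Generate n evenly spaced crossfade frames between two
    adjacent displacement maps, to be written out as extra images
    so a bone-driven sequence steps in smaller increments."""
    return [frame_a + (frame_b - frame_a) * (i + 1) / (n + 1)
            for i in range(n)]

a = np.zeros((2, 2))   # hypothetical frame: muscle relaxed
b = np.ones((2, 2))    # hypothetical frame: muscle flexed
mids = make_inbetweens(a, b, 3)
print([m[0, 0] for m in mids])  # [0.25, 0.5, 0.75]
```

With enough generated in-betweens, the pop on slow moves shrinks below what's visible, at the cost of a much longer sequence.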

It's a shame you can't use the surface baking camera as a Camera projection, to apply an image for displacement to the mesh, because then you could use morphing to move the displacement of the muscles around in an organic fashion.

erikals
10-08-2007, 01:25 PM
By the way, you 'COULD' drive an image sequence with a bone if you wanted to spend hours setting it up.
You'd have to bring each image in the sequence in individually, then use 'Cycler' on the opacity of the channel to reveal them in a sequence timed to the rotation of the bone. It would be a pain, but do-able.
Was thinking about this one, and I agree, I think it should work, though it would take a Lot of time to set up. It would only have to be done once, though.

The only problem with linking an image sequence to a null is that you won't get 'in between' frames. So whilst it's fine on broad moves, if your arm movement was very slow, you'd suddenly get a pop when it jumped from one image to the next.
So, to counter this, you'd have to have loads of in-between images.
Didn't think of that one :) Umf, that's a limitation...

It's a shame you can't use the surface baking camera as a camera projection, to apply an image for displacement to the mesh, because then you could use morphing to move the displacement of the muscles around in an organic fashion.
I know... that should have been a feature request too, actually...

In the end I think the steps 1-6 you mentioned above are the way to go; it should be a pretty safe way to do it.
And if I prepare/set it up right, it shouldn't be too hard to make fixes/adjustments.

erikals
10-08-2007, 01:54 PM
...there is no way to make the displacement map use the CCTV shader, is there? (Without rendering out and then importing the sequence.)
The displacement options don't have access to the shaders, e.g. CCTV.
Also, one can't fake it using the "Textured Filter", as that doesn't have access to the shaders either.
Darn, could have saved some time.

svintaj
10-08-2007, 02:20 PM
Interesting thread! I'm also looking for a nice way to simulate muscles and other organic deformations. I know some C/LW-API and may some day try to write something for this purpose, but first I'll keep thinking a bit more. It's inspiring to hear your thoughts.

One downside with the 'bone driven image-sequence' is that you first have to render several animations, one for each limb... but, maybe it's good enough to do a 'general' muscle-animation and just re-use it for all limbs?

/ Svante

pooby
10-08-2007, 02:23 PM
CCTV couldn't work because displacement would have to happen after raytracing.
It would be great to be able to have an evolution of this idea implemented in future though.
Maybe LW could do a preliminary render pass which could be held in a buffer that is used in the scene.
The problem is, it's such a specialist request, I can't see Newtek feeling the urge to put it in.

pooby
10-08-2007, 02:29 PM
One good thing about XSI is that you can use the FX tree (a compositing app) to meddle about with images that are used by displacements and textures.
So you could have a null drive a parameter on a compositing operator.
You can morph between images (proper morphing, not just opacity blending). This makes an image-based muscle system quite possible.
In fact, you can track points on images, and have those track points drive nulls. I'm in the process of using that to make a facial motion capture system.

In LW, you don't have sophisticated image manipulation, but by using the nodal displacement you do have SOME degree of warping of images that can be driven by nulls. I haven't done much testing, but I think there might be some solutions there.

erikals
10-08-2007, 03:44 PM
Hm, hehe, have another idea now... it's actually going a bit back to SplineGod's technique using PointFit. As I recall, he said the calculation took too long on big objects, but I'm thinking why not give it a second try, as that was some years ago.
What I'm thinking of is sort of a mix between these methods...

1- Dino Object
PointFit applied under 'Properties', and the 'Muscle Object' added.

2- Muscle Object
There are several ways to do this; I'd end up using a disp. map, then apply that to a morph. Hard to explain; basically the result would be a Muscle-Morph object. Think I'd stick to the upper leg only, to avoid difficulties with overlapping polygons in the knee area; alternatively I could 'skip' the knee area and only model the Upper+Lower Leg.

3- Muscle Object
It would use the same rig as the Dino Object, then use JointMorph or Cycler.

4- Smaller muscle deformations
Smaller, less noticeable muscle deformations could be made using other methods, such as a displacement map. (Note, PointFit is not exactly a displacement map, but pretty close as far as I can see.)

Well, that's the theory. To realtime-calculate things faster, the SubDs should have a low value when animating. (Yup, PointFit is realtime, which is pretty cool.)

I'm no Pro rigger, I'm actually a pretty horrible rigger, so is what I said in point 3 valid? Can you use the same set of bones on two different objects, or do you have to clone the rig?

erikals
10-08-2007, 03:59 PM
Sorry, PolyFit was the name, not PointFit (PointFit is the Modeler version)

For the curious,
SplineGod's PolyFit testvideo
http://www.3dtrainingonline.com/support/muscle_deformer.mov

Ztreem
10-09-2007, 06:09 AM
Thanks for the link, I haven't seen that one before, looks useful. If only LW could get a lot faster deformation speed.

erikals
10-09-2007, 06:34 AM
No problemo :)

Well, for low-res objects it ain't that bad, and one can test with the final high-res version first, then animate with a low-res version, then go back and render with the high-res version again.
Also, this was some time ago; if you have a quad CPU and an alright GPU, the speed should be quite a lot faster than it was back then.

The minus here is that some hours of testing are required to get the final result, but it might very possibly be worth it.

Edit>
Oh, again, does anyone know about that point 3?
Is it possible to make a Bone affect two layers (objects) at once, or do I have to duplicate/copy the rig?

pooby
10-09-2007, 06:37 AM
I hope you get better results than I did using it

erikals
10-09-2007, 07:19 AM
I tested it in Med-res, which looked quite alright; haven't tried High-res yet...
Is that what you are referring to? That a high-res model didn't deform properly?

pooby
10-09-2007, 07:37 AM
I just about got one muscle working in an arm (but not looking very realistic, it was practically just an extended sphere). Then when I tried adding more, I realised that it just didn't want to know. I can't remember the exact details, it was a few years ago now. I think maybe it has problems with more than one collision object; I just remember that my conclusion was that it wasn't going to allow a robust system for muscle sim.
However, I'm interested in seeing what you come up with. I'm not trying to put you off.

erikals
10-09-2007, 08:17 AM
During the tests I did, I started getting into problems in some specific areas, especially the joint areas, and also in areas where the muscles interacted with special skin deformation, such as skin stretching. My conclusion was that I'd need to fake those areas, e.g. using a displacement map instead. That's in theory.
Think I'll make a rough test first, to see if this works...

erikals
10-10-2007, 08:34 AM
I forgot to mention the use of Stressmap;
this is the first test I've seen where its use has come in handy, and it looks very good too. A cool plugin imo; we'll probably see more of it in use in the future.
http://noboyama.sblo.jp/archives/200701-1.html