View Full Version : Facial animation technology



jeric_synergy
04-30-2013, 11:24 AM
JFYI:
http://www.redsharknews.com/post/item/668-how-a-uk-studio-is-redefining-facial-animation?utm_source=www.lwks.com+subscribers&utm_campaign=dc353e00d0-RSN_April_30_2013&utm_medium=email&utm_term=0_079aaa3026-dc353e00d0-74951153

erikals
05-01-2013, 02:10 PM
WANT IT!! :rock:


but is it available to the public?... (edit: nope.)



http://youtu.be/oySSgNfrfF8

Megalodon2.0
05-01-2013, 03:15 PM
IMO, it comes down to how good your morphs/blendshapes are. If they suck, it doesn't matter how good the lipsync program is.

You can get good auto lipsync in Poser, or Mimic for DAZ.

Me, I'll take TAFA - so very powerful and so easy to use.

OTOH... competition is good. :thumbsup:

erikals
05-01-2013, 03:19 PM
what i like about this is that it is automated, and looks to work very well for realistic characters.
too bad it's not available though...

it'd be nice to mix TAFA and this one...

geo_n
05-02-2013, 04:45 AM
Wow, the lipsync is very good. Hope they make a plugin for 3d appz since they are targeting games.

bazsa73
05-02-2013, 06:20 AM
heck, robotic programs do our job

Hail
05-02-2013, 07:17 AM
heck, robotic programs do our job

It does look like we will be losing our jobs to robots in a couple of years :D

ianr
05-02-2013, 08:04 AM
Book-mark & send to Mr. Grandi, why not?

Greenlaw
05-02-2013, 10:18 AM
...I'll take TAFA - so very powerful and so easy to use.

I've had a license for TAFA for years now but never applied it to production. I should take another look at it since I think I'm going to need it on my next project.

Here's a curious 2D thing I was experimenting with recently with my 6 year old daughter:

Sienna's CrazyTalk Test (http://www.youtube.com/watch?v=vWorof6tjWU)

Thankfully, she's not this creepy in person. :)

The interesting thing about this test is that the animation is entirely audio driven and, for only a few minutes' work including setup, the result is not that bad. As mentioned, it's strictly a 2D trick though.

A 3D alternative for LightWave is to use the free Papagayo (http://www.lostmarble.com/papagayo/index.shtml) program from Lost Marble. This is a 2D lipsync program developed for Anime Studio, but Mike Green (Dodgy) wrote a LightWave script called Papagayo Importer (http://www.mikegreen.name/) that converts the data for Morph Mixer.
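For the curious, the idea behind such a converter is simple enough to sketch. This isn't Dodgy's actual script, just a rough Python illustration of reading a Papagayo switch-data export (the "MohoSwitch1" .dat format Papagayo writes) and expanding it into per-frame morph weights, the kind of data a Morph Mixer importer would build keyframes from. The function names are invented.

```python
# Sketch: turn a Papagayo .dat export into per-frame morph weights.
# The .dat file is a "MohoSwitch1" header followed by "frame phoneme"
# lines; everything past parsing is illustrative, not Dodgy's script.

def load_papagayo_dat(path):
    """Return a list of (frame, phoneme) pairs from a Papagayo .dat file."""
    pairs = []
    with open(path) as f:
        f.readline()  # header line, usually "MohoSwitch1"
        for line in f:
            parts = line.split()
            if len(parts) == 2 and parts[0].isdigit():
                pairs.append((int(parts[0]), parts[1]))
    return pairs

def to_keyframes(pairs, end_frame):
    """Expand sparse (frame, phoneme) switches into per-frame weights:
    the active phoneme's morph is 1.0, all others 0.0."""
    phonemes = sorted({p for _, p in pairs})
    switch = dict(pairs)
    keys, active = [], None
    for frame in range(end_frame + 1):
        active = switch.get(frame, active)
        keys.append({ph: (1.0 if ph == active else 0.0) for ph in phonemes})
    return keys
```

A real importer would of course write these values into Morph Mixer envelopes rather than a Python list, but the hold-until-next-switch logic is the core of it.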

G.

erikals
05-02-2013, 02:58 PM
Funny stuff...

http://erikalstad.com/backup/misc.php_files/smile.gif


http://www.youtube.com/watch?v=qxXqFTaSTzQ

Surrealist.
05-02-2013, 04:22 PM
If they can get that tech working on the rest of the face then they'd have something.

erikals
05-02-2013, 04:33 PM
yes, but the frog needs a bit more work. :P

cresshead
05-02-2013, 05:08 PM
simple fix...just make all your characters that need lipsync have beaks! open-close - done!

but but..your characters are Robots you say?

simple...have a light grill for the mouth area...just on/off it for audio volume/waveform

i did this test back in 2007 with mimic for lightwave 8


http://www.youtube.com/watch?v=mFDJDM_MMvI
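The volume-driven grill trick above is easy to prototype outside LightWave. Here's a rough Python sketch (assuming a mono 16-bit WAV; the threshold value is arbitrary) that turns a waveform into one on/off mouth state per animation frame:

```python
# Toy version of the "light grill" robot mouth: compute RMS loudness
# per animation frame from a mono 16-bit WAV and threshold it to
# on/off. Threshold and fps are illustrative defaults.
import wave, struct, math

def mouth_states(wav_path, fps=24, threshold=0.05):
    """Return one True/False 'mouth open' value per animation frame."""
    with wave.open(wav_path, "rb") as w:
        rate = w.getframerate()
        n = w.getnframes()
        samples = struct.unpack("<%dh" % n, w.readframes(n))
    per_frame = rate // fps  # audio samples per animation frame
    states = []
    for i in range(0, len(samples), per_frame):
        chunk = samples[i:i + per_frame]
        rms = math.sqrt(sum(s * s for s in chunk) / len(chunk)) / 32768.0
        states.append(rms > threshold)
    return states
```

The resulting list maps straight onto an on/off envelope for the grill light; smoothing or a short hold time would stop it flickering on consonants.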

Greenlaw
05-02-2013, 05:17 PM
yes, but the frog needs a bit more work. :P
LOL!

Regarding what I believe Surrealist was actually referring to, I think something like that is entirely possible with the Speech Graphics system.

In that silly video I posted, for example, CrazyTalk is driving the entire face based on tone and rate of speech. The result is not 'fantastic' of course, but this was done with a fairly inexpensive consumer program, and I have to admit I was surprised that it worked this well.

I can see how a more sophisticated form of 'auto-emote' tech could be applied to the 3D character in the Speech Graphics demo at the beginning of this thread. As a matter of fact, I have seen something like that--does anybody remember the link Mr. Rid posted a couple of years ago?

G.

Greenlaw
05-02-2013, 05:24 PM
I found it--the product was called HapTek and it's much cruder than I remember. Nevermind. :)

Greenlaw
05-02-2013, 05:34 PM
I completely forgot about this auto lip-sync test I did with my daughter little over a year ago:


http://www.youtube.com/watch?v=lzLqzSW8Hh0

I used Annosoft's Lip-Sync Tool (http://www.annosoft.com/). It was super easy to use and seemed promising. At the time, the developer was looking for LightWave testers but I don't think they got many takers. I worked with it for a while but then got too busy at work and had to drop it. Probably should contact them again and see if they're still interested in developing for LightWave.

G.

geo_n
05-02-2013, 11:29 PM
I remember Annosoft. It's too expensive.
Nothing matches the quality of this new facial lipsync. Not even Di-O-Matic.

erikals
05-03-2013, 01:43 AM
agree, not worth $3000... i'd rather do it manually (TAFA)

... or, hope Speech Graphics becomes available soon... (for a decent price.. wishing.. )

Greenlaw
05-03-2013, 08:53 AM
Lipsync Tool is $500. The version I tested above seemed to be a beta and I don't think it was finished. You're thinking of the SDK which does cost $3000.

In any case, I wasn't saying any of the above was better than the latest tech. I was just giving a timeline of the automated lip sync software that came out before, and what's actually available now.

G.

erikals
05-03-2013, 08:56 AM
$500,... hm, more interesting...

Greenlaw
05-03-2013, 10:16 AM
$500,... hm, more interesting...
LipSync Tool was quite interesting--very easy to use and fast, even in the version I tested in the above video over a year ago. But there were a few issues that needed to be addressed to make it work completely with LightWave, which was the reason I never recorded a part 2 for that video. Unfortunately, I ran out of time for further testing (work schedule, etc.) and had to drop the experiments.

I had a feeling that I was the only LightWave user who volunteered to test it so I'm not sure if they're still actively developing for LightWave. They're very nice people, btw. I should contact them and find out what the status is.

G.

Greenlaw
05-03-2013, 10:27 AM
I also used Magpie Pro in automatic mode about 15 years ago in a proof-of-concept piece for a gaming company. It was pretty neat--I generated a two minute monologue of lipsync for this bunny character in just a few minutes. It wasn't the greatest lipsync, but we anticipated doing a few hours of lipsyncing on a tight schedule, so the technique was seriously being considered. Shortly after that, I got hired as a lead artist at a low budget movie studio (this was long before my R+H days) and had to drop that project.

The tricky thing back then was that you had to 'teach' the software your voice patterns--this meant that the system only worked accurately for performers who had their voice profiled for the system. Newer programs like Speech Graphics and LipSync Tool don't seem to care about that. (To be fair, Magpie Pro might not care any more either--I just haven't used it this way since that proof 15 years ago.)

G.

jeric_synergy
05-03-2013, 12:36 PM
I'll have to check and see if Magpie is still supported....

Greenlaw
05-03-2013, 12:53 PM
I'm about to dip back into Magpie Pro for some R&D for 'B2'--not for automated lipsync but, if I have time, I could check that feature out. We used Magpie Pro for 'Happy Box' two years ago but obviously we weren't using the 3D features. Working with Magpie Pro was very fast--the entire film was lipsynced in a few hours over two evenings. However, we might have been finished even sooner if the program hadn't crashed so much during production.

This time I'm hoping to use the 3D import/export features for 'B2'. I'm still not sure why Magpie Pro was so crashy last year (on two different computers,) but the other day I was playing with it and it seemed more stable. Let you know how it goes. If it gets crashy again during R&D, I'll probably switch to TAFA or Papagayo.

G.

bpritchard
05-03-2013, 04:04 PM
LipSync Tool was quite interesting--very easy to use and fast, even in the version I tested in the above video over a year ago.

G.

Drop me a line if you're interested in testing it. I work in the same building as Mark over at Annosoft, and we've worked together on quite a bit of the tooling. Not a lot of progress was made on the LW side beyond what you last tested, but we'd still love to get a solid LW user base!

Greenlaw
05-03-2013, 04:11 PM
Thanks for popping in! Sorry, I had to drop out of testing a while back but, yes, I'm still interested in getting this tool all sorted out and ready for LightWave. Will write privately to you soon.

G.

Rayek
05-04-2013, 02:49 AM
I'll have to check and see if Magpie is still supported....

Not too certain what's going on with MagPie - a couple of weeks ago I sent an email requesting an upgrade of my old v1 license to the newest version, and mentioned I was on a deadline. Never got an answer back.

I've been looking into TAFA again today, and it looks really nice. The MDD output test file works well in LightWave and Blender.

Is TAFA really that good? What's the opinion of the people here who have used it in production? I'm thinking about getting a license.

Megalodon2.0
05-04-2013, 02:59 AM
Is TAFA really that good? What's the opinion of the people here who have used it in production? I'm thinking about getting a license.

I absolutely LOVE IT.

It is extremely easy to use and VERY fast at creating lipsync.

Here's another thread discussing TAFA:

http://forums.newtek.com/showthread.php?135359-TAFA-still-the-best-bang-for-my-facial-animation-buck

Surrealist.
05-04-2013, 03:28 AM
What I was referring to earlier was: if you could tap into this technology to involve the entire face, with a good understanding of the key muscle groups that convey emotion, some very interesting results would be possible.

Unless you are clowning about and forcing your face into contortions, there is really very little variance from person to person in which muscles get triggered and what they do when people emote. Emotion in a face is just a simple trigger of a few muscle groups contracting in certain ways, and each emotion is just a different combination of the same contractions of the same muscles. There is really no mystery to it.

So theoretically, it would not be hard at all to map out an emotional state within which the person would be speaking, with a simple set of presets. And of course, certain phonemes within an emotional state will trigger the facial muscles in a different way, with completely different movements of the mouth and facial muscles depending on the emotion being conveyed with a particular word.
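To make the preset idea concrete, here is a toy Python sketch. All the muscle-group names and weights are invented for illustration, not taken from any real system: each emotion is just a dictionary of contraction weights over the same shared muscle groups, and speech morphs get layered on top.

```python
# Toy emotion presets: each emotion is a combination of contraction
# weights on the same shared muscle groups. All names and numbers
# are invented for illustration.
EMOTIONS = {
    "neutral": {},
    "happy":   {"zygomaticus": 0.8, "orbicularis_oculi": 0.4},
    "angry":   {"corrugator": 0.9, "levator_labii": 0.3},
}

def emote_and_speak(emotion, phoneme_weights, intensity=1.0):
    """Layer phoneme morph weights over an emotion preset.
    Overlapping channels add and are clamped to [0, 1]."""
    pose = {name: min(1.0, w * intensity)
            for name, w in EMOTIONS[emotion].items()}
    for name, w in phoneme_weights.items():
        pose[name] = min(1.0, pose.get(name, 0.0) + w)
    return pose
```

The point of the structure is exactly Surrealist's: the emotions differ only in which weights they set, so adding a new emotional state is just adding a preset, not new machinery.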

geo_n
05-04-2013, 05:05 AM
Is TAFA still being developed though? I bought Mimic some time ago and the problem with it is that it creates too many keyframes. The time to edit a Mimic export would equal doing it manually, with poorer results. I'd rather use Maestro since it's easier to use and edit. Fewer morphs to use also.

bpritchard
05-04-2013, 09:05 AM
Thanks for popping in! Sorry, I had to drop out of testing a while back but, yes, I'm still interested in getting this tool all sorted out and ready for LightWave. Will write privately to you soon.

G.

No worries Greenlaw! :) We're always around... whenever you feel like you have the time, we'll ramp back up. I still use LightWave from time to time but I'm mainly in modo, so I don't get a lot of visibility into the software myself.

Surrealist - that's one of the things we've been looking at: more emotional response. We've started developing some basic functionality related to eyebrows and other parts of the body. It's not an easy process, but it is possible.

Greenlaw
05-04-2013, 12:25 PM
Is Tafa still being developed though?

Yes, according to an email from Mac yesterday. He's been thinking about 2.0, in fact.

G.

Megalodon2.0
05-04-2013, 03:01 PM
Is TAFA still being developed though? I bought Mimic some time ago and the problem with it is that it creates too many keyframes. The time to edit a Mimic export would equal doing it manually, with poorer results. I'd rather use Maestro since it's easier to use and edit. Fewer morphs to use also.

I've also used Mimic for LightWave and agree that it adds FAR too many keyframes. I found through actually using both that TAFA was FAR better, very easy to use, and gave better results. And it was FAST. I don't know how fast Maestro is for facial animation so I can't comment on that, but TAFA is very fast, and often you can create great lipsync for a few seconds of dialog in just a few minutes. Then you can add the performance--things like blinks, smirks, nose and cheek movements, etc.--via either drag and drop or puppeteering. Since this is the program that Timothy Albee essentially designed, I'm betting it's probably the best out there for lipsyncing and facial animation that isn't automated. And even when you stack it against automated tools... it depends on how good your morphs are.

Also, you can use minimal morphs with TAFA. There was a video on YouTube that showed the use of something like only 2 or 4 morphs - it looked great.

erikals
05-04-2013, 03:45 PM
Mega, do you have a reference link to the lip-sync anim that was made?

TAFA is nice, the only thing bad about it is that it's not integrated into LW as a plugin...

Megalodon2.0
05-04-2013, 04:05 PM
Mega, do you have a reference link to the lip-sync anim that was made?

TAFA is nice, the only thing bad about it is that it's not integrated into LW as a plugin...

I actually found that 3-morph TAFA clip - and YOU posted it back in October! :)

http://www.youtube.com/watch?v=nXlI_6RYSy4

erikals
05-04-2013, 10:37 PM
ah,.. http://erikalstad.com/backup/misc.php_files/smile.gif
didn't know that was the one, not bad, it's very good considering it's only 3 morphs... http://forums.cgsociety.org/images/smilies/arteest.gif

Megalodon2.0
05-05-2013, 02:56 AM
ah,.. http://erikalstad.com/backup/misc.php_files/smile.gif
didn't know that was the one, not bad, it's very good considering it's only 3 morphs... http://forums.cgsociety.org/images/smilies/arteest.gif

I agree.

TBH, I prefer more phoneme morphs since it's easier - IMO - to create the lipsync that way. Even if you use the same morph shape for different phonemes like Free and Vee, I name two different morphs Free and Vee so I don't have to think about which morphs are which: I just look for Free when I need the F and Vee when I need the V. And of course, the more morphs you have, the more flexible your character is.

erikals
05-10-2013, 04:35 AM
somewhat related >


http://www.youtube.com/watch?v=7bX0qpsLfpE

Surrealist.
05-10-2013, 07:30 PM
They have a cool looking body mocap as well:

http://www.snapperstech.com/

vncnt
05-11-2013, 06:47 AM
I actually found that 3-morph TAFA clip - and YOU posted it back in October! :)

http://www.youtube.com/watch?v=nXlI_6RYSy4

I think this thread should have been titled "Lip-sync animation technology" instead of "Facial animation technology", because I don't see a lot of expressive facial animation going on in the examples. It looks more like Supermarionation, as used in Thunderbirds.

In many animations, lip-sync isn't even the highest priority. It's far more important to be able to use pantomime and show the audience what you want to tell them.

Mr Rid
09-15-2013, 02:19 AM
I found it--the product was called HapTek and it's much cruder than I remember. Nevermind. :)

I was searching for this again myself- AI facial and body expression, along with speech- http://forums.newtek.com/showthread.php?118315-Haptek-character-emotion
It looks like Haptek dev has rolled over into Xprevo. http://www.youtube.com/watch?v=llAH4GJ7WVA

jwiede
09-15-2013, 12:12 PM
I think this thread should have been titled "Lip-sync animation technology" instead of "Facial animation technology", because I don't see a lot of expressive facial animation going on in the examples. It looks more like Supermarionation, as used in Thunderbirds.

In many animations, lip-sync isn't even the highest priority. It's far more important to be able to use pantomime and show the audience what you want to tell them.

Agreed. In real life, people barely enunciate speech most of the time. Focusing so much on lip-sync gives the odd, over-enunciated appearance visible in so many animations these days. Focusing on expressions first, then mixing in the barest phonemic morphs needed for continuity with the speech, seems to give much more realistic results for "casual speech". Obviously, yelling, or cases where the character would explicitly enunciate (like public speaking), requires stronger application of the phoneme morphs, but those are exceptional cases. The "auto-generated lip-sync" systems seem particularly prone to such over-enunciation.

Part of the reason this stuff dives so quickly into the uncanny valley is because animators apply different "priorities" to character motion versus what happens in real life, leading to that creepy, stilted appearance. From the time we learn to speak, we're continually "experimenting" with how little effort we can put into enunciation and still produce comprehensible speech, optimizing for how to spend the least energy (just as we do with virtually everything else). Even real people can make themselves look creepy trying to over-enunciate words while speaking, over-emphasizing the movements, because normally we do the exact opposite.
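The "expressions first, phonemes barely" approach described above can be sketched as a simple weighting scheme. This is just an illustrative Python sketch (the register names and gain values are invented): the expression pose is the base layer, and phoneme morphs are mixed in at an enunciation gain that stays low for casual speech and rises only for deliberate speaking or yelling.

```python
# Expression-first blending: the expression pose is the base, and
# phoneme morphs are added at a register-dependent "enunciation"
# gain. All names and numbers are illustrative.
ENUNCIATION = {
    "casual": 0.25,   # barely-moving lips, most everyday speech
    "formal": 0.6,    # public speaking, deliberate enunciation
    "yelling": 1.0,   # full-strength phoneme shapes
}

def speech_pose(expression_pose, phoneme_pose, register="casual"):
    """Blend phoneme morphs over an expression pose at a
    register-dependent gain, clamping each channel to [0, 1]."""
    gain = ENUNCIATION[register]
    pose = dict(expression_pose)
    for name, w in phoneme_pose.items():
        pose[name] = min(1.0, pose.get(name, 0.0) + w * gain)
    return pose
```

Under this scheme the expression channels dominate by default, which matches the observation that real faces spend most of their effort on expression, not articulation.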