My first lipsync effort - C&C welcome



Sekhar
10-02-2018, 03:57 PM
Hi folks, thought I'd share my first-ever lipsync effort, made for my new matchmaking site. I did this in LW 2018, with lipsync data from Adobe Character Animator and a Python script I wrote to apply it to the character (a rough sketch of the apply step is below). I built the character myself from scratch and plan to make it the site's persona, with more animations in the future. C&C welcome.

http://matchtowed.com

Click on "What Is MatchToWed?" to view the video, and double-click on it to go full screen if you prefer.
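
For anyone curious, the apply step of the script is roughly along these lines. This is just a simplified sketch: the viseme track, morph names, and frame rate here are placeholders, not the actual Character Animator export, and the part that pushes the keyframes into Morph Mixer through LW's scripting layer is left out.

# Rough sketch of the viseme-to-morph keyframe step (simplified).

# Hypothetical viseme track: (start_time_seconds, viseme_label)
viseme_track = [
    (0.00, "rest"),
    (0.12, "M"),     # "m" in "match" -- lips fully closed
    (0.21, "Ah"),
    (0.35, "F"),     # lower lip tucked under the upper teeth
    (0.50, "rest"),
]

# Hypothetical mapping from viseme label to morph weights (0.0 - 1.0).
VISEME_MORPHS = {
    "rest": {},
    "M":    {"Mouth.Closed": 1.0},
    "Ah":   {"Mouth.Open": 0.6, "Jaw.Open": 0.4},
    "F":    {"Mouth.Dental": 1.0},
}

ALL_MORPHS = sorted({m for weights in VISEME_MORPHS.values() for m in weights})

def build_keyframes(track, fps=30.0):
    """Turn the viseme track into per-morph keyframes: {morph: [(frame, value), ...]}."""
    keys = {m: [] for m in ALL_MORPHS}
    for time_sec, viseme in track:
        frame = round(time_sec * fps)
        weights = VISEME_MORPHS.get(viseme, {})
        for morph in ALL_MORPHS:
            # Key every morph at every viseme change so unused shapes relax back to 0.
            keys[morph].append((frame, weights.get(morph, 0.0)))
    return keys

if __name__ == "__main__":
    for morph, frames in build_keyframes(viseme_track).items():
        print(morph, frames)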

TheLexx
10-02-2018, 04:12 PM
I think it's good for the type of character you've created, but to really bring it to life, the project seems like a perfect candidate for Chilton's Glycon motion capture system. Congrats anyway.

https://forums.newtek.com/showthread.php?158168-Ann-Glycon-VR-Motion-Capture&

:)

Sekhar
10-03-2018, 07:52 AM
Thanks for the link, I'll check it out for the animation part - I was planning to either animate manually or use NevronMotion (assuming it still works; I haven't seen an update in some time). Anyway, this was really about the lipsync, which you need regardless.

MonroePoteet
10-03-2018, 08:46 AM
Right off the bat, I think the lips should close completely for phonemes like "M" and "P", and the lower lip should be tucked in for "F" and "V" sounds. For example, with the current lip motion, I think "match maker" would come out as "atch aker".

mTp

Sekhar
10-09-2018, 07:27 AM

Thanks, yes, I'll be improving those shapes in the next version. However, Character Animator produces viseme data, which carries less information than phonemes to begin with, so I don't believe it will be fully accurate even if the morphs are perfect.

MonroePoteet
10-09-2018, 08:27 AM
OK, thanks. The term "viseme data" was new to me when I read your post, but a little research turned up some references:

http://manual.reallusion.com/3DXchange_6/ENU/Pipeline/04_Modify_Page/Face_Setup_Section/Setting_Lips_Shape_Data_for_G6.htm

http://manual.reallusion.com/3DXchange_6/ENU/Pipeline/04_Modify_Page/Face_Setup_Section/Basic_Lip_Shapes.htm

The second reference has the 15 basic visemes mapped to various subsets of the 15 lip/tongue shapes. The table is a little hard to read, but the light-gray viseme tag beneath each picture on the left (e.g. AE, Ah, B-M-P, etc.) cross-references to how much of each of the lip/tongue shapes across the top should be applied. For example, the first viseme, AE, shows "Open" at V(40), "Wide" at V(100), and "Lip Open" at V(40). I'm not sure exactly what the numbers mean, but I'd initially interpret them as morph percentages in LW's Morph Mixer.

The 3rd (B-M-P) and 7th (F-V) show the "Explosive" and "Dental" lip shapes at V(100), respectively, as I'd expect from my previous post.
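
If those numbers really are morph percentages, the table would translate into something like the sketch below. Just a guess on my part: the shape names come straight from the Reallusion page, and I'm treating V(n) as a plain 0-100 Morph Mixer percentage.

# Viseme -> {lip/tongue shape: percentage}, taken from the Reallusion table.
VISEME_TO_SHAPES = {
    "AE":    {"Open": 40, "Wide": 100, "Lip_Open": 40},
    "B-M-P": {"Explosive": 100},   # lips fully closed
    "F-V":   {"Dental": 100},      # lower lip tucked in
}

def morph_mixer_values(viseme, all_shapes):
    """Return a 0-100 percentage for every morph channel for the given viseme."""
    shapes = VISEME_TO_SHAPES.get(viseme, {})
    return {shape: shapes.get(shape, 0) for shape in all_shapes}

shapes = ["Open", "Wide", "Lip_Open", "Explosive", "Dental"]
print(morph_mixer_values("B-M-P", shapes))   # Explosive at 100, everything else at 0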

Anyway, thanks for the introduction to viseme data - it looks like a very powerful way of doing lipsync, given the viseme data and a set of jaw/lip/tongue morphs.

Have fun!

mTp