Can animations be synchronized with any sound?



Bernie2Strokes
04-07-2017, 04:40 AM
Hello. This idea started when I wondered how to animate an audio spectrum with any song I bring into LightWave.

I've been going over Motion Mixer and the lip-sync techniques, and wondered whether multiple objects in a scene could be animated to behave according to different sounds. Now I'm wondering if I could make a whole scene where different objects move according to whatever song I put in.

roboman
04-07-2017, 07:22 AM
Check out the several threads here about MIDI plugins, and the Animusic videos that can be found around the net. Animusic started out as demos done by a company to show how well they could do that, and it turned into a business in itself. So yes, it can be done.

TheLexx
04-07-2017, 10:58 AM
I don't want to hijack the thread, but could that also apply to decent automatic dialogue lip-sync?

Bernie2Strokes
04-07-2017, 08:58 PM
Thanks a bunch! As the idea grew, I was worried that different objects couldn't be synced to one audio source.

- - - Updated - - -


I don't want to hijack the thread, but could that also apply to decent automatic dialogue lip-sync?

It should, given the proper morphs.

MonroePoteet
04-09-2017, 05:19 PM
MIDI is very different from an audio source. MIDI (Musical Instrument Digital Interface) consists of a stream of binary commands used to control the sounds produced by MIDI sound sources, along with various other parameters (e.g. sustain, pitch bend, control wheel positions, etc.). An audio source, by contrast, is a recorded waveform, usually from a CD or an MP3 file.

LW provides an Audio Channel modifier in the Modifiers tab of the Graph Editor. This allows you to designate an audio file to affect any channel (e.g. X, Y, Z, H, P, B, scale, color, lens flare intensity, etc.) that has an Envelope associated with it (the "E" button). However, this just varies the affected channel by the intensity / loudness of the sound in the audio file. If you apply the same audio file to multiple channels, you can change a few parameters (scale, offset, filter strength), but it will still affect the various channels in basically the same way.
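If you want to see what that loudness envelope looks like outside of LW, here's a rough Python sketch of the same idea (this is just an illustration, not LightWave code; it assumes numpy and scipy are installed, and "song.wav" and the 30 fps frame rate are placeholders). It samples a WAV file's RMS loudness once per animation frame, which is essentially the kind of envelope the Audio Channel modifier derives:

import numpy as np
from scipy.io import wavfile

FPS = 30  # animation frame rate (placeholder)

rate, data = wavfile.read("song.wav")   # placeholder file name
if data.ndim > 1:
    data = data.mean(axis=1)            # mix stereo down to mono
data = data.astype(np.float64)
data /= max(np.abs(data).max(), 1e-12)  # normalize to -1..1

samples_per_frame = rate // FPS
n_frames = len(data) // samples_per_frame
for f in range(n_frames):
    chunk = data[f * samples_per_frame:(f + 1) * samples_per_frame]
    loudness = float(np.sqrt(np.mean(chunk ** 2)))  # RMS per frame
    print(f, loudness)   # one loudness value per frame, like an envelope key

Every channel driven by the same file gets essentially this same curve, which is why scale / offset / filter tweaks only get you so far.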

I've had a little luck creating narrow band-pass filters in Audacity (the open-source audio editor) to break a single audio file down into multiple WAV / MP3 files centered on particular frequencies in the original audio. The Analyze => Plot Spectrum... tool in Audacity can be used to find the prevalent frequencies in the audio spectrum, and then Effect => Equalization... can be used to create narrow band-pass filters around the primary frequencies. Each band-pass filter is applied separately and rendered to a separate WAV / MP3 file, which is then specified in a different Audio Channel modifier in LW. The Audacity Equalization can't get very narrow on the target frequency, but it helps.
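If you'd rather script the band-splitting than do it by hand in Audacity, something along these lines should work (again a sketch, not a finished tool: it assumes numpy and scipy, and the file name and band edges are placeholders you'd pick from the spectrum plot). It splits one WAV into several band-passed WAVs using Butterworth filters:

import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

rate, data = wavfile.read("song.wav")   # placeholder file name
if data.ndim > 1:
    data = data.mean(axis=1)            # mix down to mono
data = data.astype(np.float64)

# Band edges in Hz -- placeholders; choose them from the spectrum plot
bands = {"low": (40, 120), "mid": (150, 400), "high": (800, 3000)}

for name, (lo, hi) in bands.items():
    # 4th-order Butterworth band-pass, applied forward and backward
    sos = butter(4, [lo, hi], btype="bandpass", fs=rate, output="sos")
    filtered = sosfiltfilt(sos, data)
    filtered /= max(np.abs(filtered).max(), 1e-12)
    wavfile.write("song_%s.wav" % name, rate, np.int16(filtered * 32767))

Each output WAV then gets its own Audio Channel modifier, so a bass-heavy band drives one object while a vocal-range band drives another.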

The Animusic animations were almost certainly done using MIDI event streams to control the animations. I was working on a MIDI plugin for LightWave, but then my life got convoluted, so it's been back-burnered for quite a while. Hopefully I'll get to working on it again, but note that it still requires a MIDI event stream of the audio performance to drive the plugin.
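To illustrate the idea (this isn't my plugin, just a rough Python sketch using the mido library; the file name and frame rate are placeholders), pulling note events out of a MIDI file and mapping them to frame numbers for keyframes is straightforward:

import mido   # pip install mido

FPS = 30                                 # animation frame rate (placeholder)
mid = mido.MidiFile("performance.mid")   # placeholder file name

t = 0.0
for msg in mid:        # iterating a MidiFile yields delta times in seconds
    t += msg.time
    if msg.type == "note_on" and msg.velocity > 0:
        frame = round(t * FPS)
        print("note %d vel %d -> frame %d" % (msg.note, msg.velocity, frame))

Since each note event carries its pitch and velocity, you can route different notes or MIDI channels to different objects, which is exactly what loudness-only audio analysis can't give you.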

Good luck!
mTp

jwiede
04-09-2017, 06:26 PM
I don't want to hijack the thread, but could that also apply to decent automatic dialogue lip-sync?

While audio-discrimination tools like CrazyTalk have become much better at producing auto-generated lip-sync, the quality of their results still leaves a LOT to be desired even for a single speaker, let alone multiple, overlapping speakers (dialogue).

Most production-quality animation lip-sync work still involves extensive manual finishing, which says a lot.