Yup, I saw those, and I even have a Genoma version of the rig. I made my own so that I know why each bone and controller is there and how they relate to each other. Genoma puts in a lot of bones and joints that I have to go through and figure out why they're there.
I finished my rig... fixed my cyclic dependencies and have a pretty stable rig that handles what I need. Even grafted the head rig onto it... then did it again when I realized I had the wrong version of the head. The body rig itself is relatively simple (I'll decide whether I want to add a separate deform rig on top of it when I start making some animations with it)... most of my rig is in the head. I'll be posting some pics and maybe a quick video highlighting the controls in the next day or so.
Had some time and started refreshing myself on character animation while getting to know my rig. This is a scene that will keep growing as I work out more moves, so I can apply the techniques to the main project...
In this vid, it's just a walk and a turn-around...
The next phase will be walking backwards, maybe with some other activities. Once I get through the first pass and end the scene, I'll go back through and work on the face and hands (though some of that I'll do as I go, too).
Ok... so I was avoiding it... didn't wanna do it... but finally I relented...
I did an X-sheet for the project...
Luckily I was able to find a clip with just Idina Menzel singing, no music, which made the process much easier... though it still took me around 8-10 hours of scrubbing back and forth to get it all down.
I didn't have any blank X-sheets and didn't want to print out the 80+ pages I would need, so I made a spreadsheet on Google Sheets, which worked out ok (prolly woulda been quicker on paper in the end, but ahh well...)
I only put in the columns I needed and I can easily change it for future projects.
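As a side note, a blank sheet like this can be generated instead of set up by hand. A minimal Python sketch; the column names and the 24 fps rate are my assumptions, not the actual layout of the sheet above:

```python
import csv

FRAMES = 5270  # total frame count for the song
FPS = 24       # assumed frame rate
# Columns are an assumption -- swap in whatever your sheet needs.
COLUMNS = ["Frame", "Second", "Dialogue", "Action", "Camera", "Notes"]

with open("xsheet.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(COLUMNS)
    for frame in range(1, FRAMES + 1):
        # Mark whole seconds to make scrubbing easier.
        second = f"{frame // FPS}s" if frame % FPS == 0 else ""
        writer.writerow([frame, second, "", "", "", ""])
```

The resulting CSV opens straight into Google Sheets, so columns can still be added or removed per project.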
You can see the full PDF of the whole thing on my Patreon page.
To those folks who do these things for a full-length feature, my hat's off to you... lol
For further development of the digital X-sheet features in Legato, I'd like to know how you actually use your spreadsheet:
- do you prefer to create the speech transcription in one single process for an entire song or sound track?
- usually I'd animate per syllable to get the impression of natural muscle movements -> how is your detailed speech column working for you?
- do you actually use a printed copy?
- do you update the spreadsheet during or after animation? -> do you use it for "administration" or just as a guide?
- how do you apply a single event to multiple rows in the spreadsheet -> drawing arcs?
I'm thinking about new workflow features like import/sync from another scene, export to a printable format, auto-distribute functions.
When I've made them, I usually run through as much as I can at a time (before I go batty)... hehe.
I use a detailed speech column, but it's mostly so I know what's being said; I animate more to the natural movement.
In the past I've used paper sheets... I just didn't have any blank ones, and didn't wanna print all the pages I'd need to handle the 5270 frames.
I like that, digitally, I can make a change as needed and it's available to me on any device... but to be honest it's a little easier on paper, because I can use more shortcuts to indicate holding a sound, etc.
I haven't found a way with a spreadsheet to span a single event over multiple rows, sadly... I just used "Start" and "End" notes to mark something that spanned the frames between them.
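For what it's worth, those Start/End notes can at least be expanded into per-frame values mechanically after export. A rough Python sketch, assuming (frame, note) pairs with "Start: "/"End: " labels; the labeling convention is my own, not anything built into a spreadsheet or Legato:

```python
def expand_spans(rows):
    """Expand 'Start: X' / 'End: X' markers into a value on every
    frame in between. rows is a list of (frame, note) tuples."""
    out = {}
    open_spans = {}  # label -> frame where its Start marker sat
    for frame, note in rows:
        if note.startswith("Start: "):
            open_spans[note[7:]] = frame
        elif note.startswith("End: "):
            label = note[5:]
            start = open_spans.pop(label)
            # Fill every frame from Start to End with the label.
            for f in range(start, frame + 1):
                out[f] = label
    return out

# e.g. a held "oo" sound marked at frame 10 and closed at frame 14:
held = expand_spans([(10, "Start: oo"), (12, ""), (14, "End: oo")])
# held now maps frames 10..14 to "oo"
```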
Honestly, I'd love to see software made specifically for the X-sheet, to allow for the different shortcuts, curves, etc. that a spreadsheet isn't really designed for.
When adding new text (by double-clicking in the upper/middle/lower section of a frame), you can span a word or line over multiple frames.
Once you've entered the word or words, don't press Enter and don't click on another frame yet.
Just drag a target frame range with the left mouse button; the text string is assigned to multiple frames as soon as you release the button. To edit the text again (or its frame range), double-click that specific text track on its start frame.
The text tracks can also be pasted from an external LWS library. This means it's possible to create a complete transcription of the entire song/soundtrack and use it as the master for other LWS scene files.
Simply apply a Region to a frame range in the "full song" LWS scene file that represents one of the shots you want to produce. Then, in a new LWS scene file, Transfer the text track data (new or updated) to any start frame or to any specified frame range.
It shouldn't be too difficult for me to add some new features in Legato:
- to "Transfer" the audio definition of that particular section as well, just by adjusting the start offset of the audio file so it stays in sync with the text tracks in the target frame range.
- to export all text tracks per frame to a printable text file, and create arcs with a few simple ASCII characters
- to import TXT or HTML files, and (re-)distribute lines, words, syllables
- to import/export to/from CSV files
I think this should make the entire process more flexible.
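For the printable export with ASCII arcs, the idea could look something like this Python sketch; the layout and arc characters are just a guess at one possible format, not Legato's actual output:

```python
def print_xsheet(speech, hold_start, hold_end):
    """Render a frame column with an ASCII 'arc' marking a held
    sound between hold_start and hold_end (inclusive).
    speech is a {frame: text} mapping."""
    lines = []
    for frame in sorted(speech):
        if frame == hold_start:
            mark = "/"      # arc begins
        elif frame == hold_end:
            mark = "\\"     # arc ends
        elif hold_start < frame < hold_end:
            mark = "|"      # arc continues
        else:
            mark = " "
        lines.append(f"{frame:4d} {mark} {speech[frame]}")
    return "\n".join(lines)

# a held "oo" from frame 1 to 4:
sheet = print_xsheet({1: "oo", 2: "", 3: "", 4: "oo"}, 1, 4)
```

Plain text like this prints on anything, which would cover the paper-sheet use case mentioned earlier in the thread.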
Also thinking about a way to expand the mood text track (edit "Lights") with a color script, using color samples and/or external image files. It could be added to the X-script export too.
You can run some tests with the current build.
I'll see what I can do this afternoon.
I will post the video about backplate switching (with the Low/Mid/Max buttons) too if Vimeo agrees.
X-Sheets in CSV format can now be imported back into Legato, for example after modifications in a spreadsheet.
With or without Offset.
With the "Import TextTrack.txt" button, you're able to import audio transcriptions of lyrics into Legato.
Currently limited to text track #7: the Speech track.
A single Line per frame, a single Word per frame, a single Syllable per frame (work-in-progress), or a single Character per frame.
After import you can select a frame range and move or retime the text tracks.
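The retiming step above amounts to re-spreading tokens over a frame range. A small Python sketch of the idea only, not Legato's actual implementation; it places each line, word, or syllable on an evenly spaced frame:

```python
def distribute(tokens, start, end):
    """Spread tokens (lines, words, or syllables) evenly over the
    inclusive frame range start..end; returns {frame: token}.
    If there are more tokens than frames, later tokens overwrite
    earlier ones -- this is only a sketch."""
    span = end - start + 1
    placed = {}
    for i, tok in enumerate(tokens):
        frame = start + round(i * (span - 1) / max(len(tokens) - 1, 1))
        placed[frame] = tok
    return placed

# spread three words across frames 100-120:
track = distribute("Let it go".split(), 100, 120)
```

Re-running the same function with a new start/end is effectively the "move or retime" operation on a selected range.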
Decided I'm going to print my models, starting with Elsa in her coronation dress.
Set up the pose in LightWave and saved the transformed models. Did some tweaking in Modeler, then exported to Blender (to generate an OBJ in far less time than Modeler) for import into ZBrush, so I can DynaMesh and do other things to prep it as a printable model.
For the hair, I used FFX to convert to polys, then used ZBrush to make it all into a single object.
For the details on the clothing, I used displacement maps in ZBrush.
Should have the first prints going in a day or so... will try it on both my Photon and Robo R1.
Ran a test at 4K to get an idea of whether I'll be able to render at 4K or 1080p. I chose one of the longest sets to try and gauge times.
I used 1 camera sample with 0.01 adaptive, 16 samples, reflection/refraction samples at 2, and subsurface at 3.
My goal was to see if I could achieve decent results with DeNoise and RSMB... overall I think it will work.
The 600 frame render took about 4 days.
After looking at the renders and the output from AE using DeNoise and RSMB, I decided to render the hair itself...
It took about 28 hours to render the hair using 32 GI samples and 16 camera samples (they really need to update FFX).