Can you create a virtual set using Lightwave?



PabloMack
02-06-2013, 12:04 PM
I have been a Lightwave user for 9 years and a SpeedEDIT user for about 2 years. I am writing a science fiction screenplay that will use virtual sets extensively, but they will not be limited to the rather narrow kinds of sets (studios) used on news and weather programs. All of my virtual sets are designed in Lightwave. Ideally I would love to be able to create a virtual set for a live movie shoot using Lightwave, since the final rendering will be done with Lightwave anyway. Can you import a Lightwave scene into a TriCaster as a virtual set, or are the architectures incompatible? Even if I don't use the virtual studio facility in TriCaster 40, can I still use the built-in chroma key features to composite a background still or video, a foreground still or video with alpha, and insert a live video with green-screen chroma keyed in between? My aim is to have some live feedback for the director, cameraman and actors during a shoot where the live action and CG will be closely coordinated. I would like to avoid the redundant work of duplicating the virtual sets in the Lightwave and TriCaster formats. Am I barking up the wrong tree, or am I on track?

SBowie
02-06-2013, 12:17 PM
Even if I don't use the virtual studio facility in TriCaster 40, can I still use the built-in chroma key features to composite a background still or video, a foreground still or video with alpha, and insert a live video with green-screen chroma keyed in between?

This will work, yes - but creating the compiled LiveSet effects that permit zooming in and out after the fashion of the supplied sets requires the use of the (extra cost) Virtual Set Editor.

PabloMack
02-06-2013, 01:45 PM
Is it possible to import Lightwave Scenes (LWS) using this editor? Even if I could import Lightwave Objects (LWO), it would be better than having to create everything from scratch. My fear is that the features in the Virtual Set Editor are too canned to be very flexible.

SBowie
02-06-2013, 02:03 PM
Is it possible to import Lightwave Scenes (LWS) using this editor?

VSE takes layered .psd files as input and creates LiveSet projects from them. You render the various layers of your scene as background or foreground composition elements in your graphic creation weapon of choice (it could be LW, but you could use a carved-up photo, watercolor, line drawing, or what have you), arrange them - along with video input proxy layers - as required, and import them into VSE.

I think there's a video on the site depicting the VSE end of things.

PabloMack
02-07-2013, 09:05 AM
One thing I don't understand is how a virtual studio can be in one layer but have both foreground and background elements in it. In a newscast, the reporter is in front of the background part of the studio but behind a desk. To me it seems that the main video would not always be in the background but would sit between two other layers that provide the virtual set. I don't see how the TriCaster 40's architecture provides this. If your live video were always background (the BKGD layer), then you could never put anything behind it. Being able to chroma key your main video stream would be somewhat pointless, because all you could ever see through the transparency created by the chroma keying would be black.

jmmultex
02-07-2013, 09:20 AM
If you are creating the set in Lightwave, you need to pull elements apart into distinct layers. As a basic approach: first render the background without the desk and save it. Then render the desk without the background and save it. Open Photoshop and import the background layer. Add a new layer and create the 'Input A' talent layer. You can then import the foreground (desk) layer. Saving this Photoshop file gives you a basic set you can use in VSE. Keep in mind that when creating these sets, you should render from Lightwave at DOUBLE the resolution you plan to use the set at. When you zoom in to a set, you don't want to lose resolution - VSE will handle this for you, but you need to begin with the extra resolution there.
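[Editor's sketch] To make that stacking order concrete, here is a minimal offline preview in Python with Pillow. It is only an illustration of the layer order John describes - the file names, sizes and positions are invented, and this is not part of the Lightwave, Photoshop or VSE workflow itself. It renders the layers at double resolution, drops a stand-in where the 'Input A' talent will go, and composites bottom-to-top.

from PIL import Image

# Double the target video resolution (a 1920x1080 set -> 3840x2160 renders)
# so VSE can zoom in later without losing detail, as described above.
CANVAS = (3840, 2160)

# Hypothetical Lightwave renders saved as RGBA PNGs.
background = Image.open("set_background.png").convert("RGBA").resize(CANVAS)
desk_fg = Image.open("set_desk_alpha.png").convert("RGBA").resize(CANVAS)

# Stand-in for the 'Input A' talent layer: an opaque block roughly where
# the keyed talent will appear in the finished set.
talent_proxy = Image.new("RGBA", CANVAS, (0, 0, 0, 0))
talent_proxy.paste(Image.new("RGBA", (1200, 1700), (0, 200, 0, 255)), (1300, 460))

# Stack bottom-to-top: background -> talent proxy -> desk foreground.
preview = Image.alpha_composite(background, talent_proxy)
preview = Image.alpha_composite(preview, desk_fg)
preview.save("set_stack_preview.png")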

Best,
John

SBowie
02-07-2013, 09:21 AM
In general, think of compiled LiveSet effects as multi-layer compositing effects - an opaque background layer, a foreground layer (typically a desk on a transparent field), and one or more live video input layers (the number of video layers supporting keying varies by model) placed in appropriate ways between the bg and fg, or even in front of the fg (to simulate reflections, for example).

If you don't use VSE to create a LiveSet, you can still get a more limited virtual set effect using Virtual Inputs (M/Es on the 8000). All HD models below the 8000 have Virtual Inputs that support three inputs, which can be drawn from a variety of sources, including files. It's easy to stack up a bg, video layer, and foreground with alpha. The switcher handles this composition as a single input.

kltv
02-07-2013, 11:33 AM
One thing I don't understand is how a virtual studio can be in one layer but have both foreground and background elements in it. In a newscast, the reporter is in front of the background part of the studio but behind a desk. To me it seems that the main video would not always be in the background but would sit between two other layers that provide the virtual set. I don't see how the TriCaster 40's architecture provides this. If your live video were always background (the BKGD layer), then you could never put anything behind it. Being able to chroma key your main video stream would be somewhat pointless, because all you could ever see through the transparency created by the chroma keying would be black.

Check out this tutorial I wrote a few years ago for the NewTek magazine. I think it will help you visualize that. There have been some updates to how this works - mainly that you can now use a placeholder image in Photoshop to get warp and perspective without adjusting it in VSE - but it basically still works like this.

http://digital.turn-page.com/i/61737

Around page 10 or so…

Kris

PabloMack
02-07-2013, 04:40 PM
It's easy to stack up a bg, video layer, and foreground with alpha. The switcher handles this composition as a single input.

And the video layer can be individually chroma keyed using the little sprocket icon in the upper right corner to produce alpha (thus the two top layers have alpha)? And is the "switcher" you are referring to the mechanism that selects preview vs. program, or something else?


If you are creating the set in Lightwave...but you need to begin with the extra resolution there.

This all makes sense. Do I have to have VSE to perform these actions without change? I recall something someone wrote indicating that you don't have to have the add-on just to import and compile a Photoshop PSD to make a Virtual Set. Thanks.


"You've got to ask yourself one question ... 'Do I feel lucky?' Well, do ya, spammer?"

Reminds me of Dirty Harry. He says, "Do you feel lucky today? Well, do ya? Go ahead. Make my day!" I just have to feel that Steve has an S&W .44 Magnum hidden somewhere but easy to reach.

BeeVee
02-07-2013, 05:33 PM
Crikey, it used to be the *only* way you could create virtual sets... :(

B

SBowie
02-07-2013, 06:25 PM
And the video layer can be individually chroma keyed using the little sprocket icon in the upper right corner to produce alpha (thus the two top layers have alpha)? And is the "switcher" you are referring to the mechanism that selects preview vs. program, or something else?

Yes, and yes.

Do I have to have VSE to perform these actions without change?

I'm not 100% sure what you have in mind by "without change", but VSE is required to compile a LiveSet effect, which uses 'oversize' (bigger than video res) imagery to let you zoom in without pixelization. A simple keyer effect (background, keyed midground, 32-bit foreground) can be performed in a Virtual Input without VSE.

Reminds me of Dirty Harry.

Gee, that never even crossed my mind ;)
(I do live in Texas now, y'all) :2guns:.

joseburgos
02-10-2013, 04:34 PM
Crikey, it used to be the *only* way you could create virtual sets... :(

B

You could use Aura or Lightwave originally :)
edit: Well, I should say not too much after LiveSet was released - so not right away, but not far from the original release. As an FYI, I still use Aura to create SD LiveSets for clients all the time :)

Netzari
02-11-2013, 06:09 PM
Hi PabloMack, we made a virtual set in Cinema 4D and found that we can't import it into the TCXD300 without converting it with the NewTek Virtual Set Editor software. We just have the one set, 3 camera angles, and A and B monitors. Do you have any idea how much time it would take for someone to convert such a file using the NewTek Virtual Set Editor software? It seems that there are others in the same predicament; maybe there are individuals in this forum who do freelance work and can convert such files for others to use. I would imagine this would also be in NewTek's best interests, as smaller players would then be able to use the equipment effectively. :) Any solutions are very welcome.

PabloMack
02-17-2013, 03:26 PM
VSE takes layered .psd files as input and creates LiveSet projects from them. You render the various layers of your scene as background or foreground composition elements in your graphic creation weapon of choice (it could be LW, but you could use a carved-up photo, watercolor, line drawing, or what have you), arrange them - along with video input proxy layers - as required, and import them into VSE.

For my purposes, a set made of just still layers is not adequate. I can see how the background layer could be one of the "virtual video monitors", sized to cover the whole screen area, through which you could play a prerecorded video file, pipe in a live video feed, or just show a still from a file for trivial cases. The foreground layer, however, needs to be able to directly handle alpha that is embedded in and played from a video or still file. Of course, a camera feed used as a foreground would have to have its own chroma keyer (which I believe TC can handle). Otherwise, if the foreground layer couldn't handle alpha, I might have to generate a video foreground where the pixels that are supposed to be transparent are colored with the chroma key color instead, and then use the keyer to make them transparent.

Perhaps one of the virtual studios that comes with the TC40 is set up so that both the foreground and background are sized to the screen and come from still files, video files or a live feed, with the foreground handling alpha if present. This should be the standard trivial virtual set. Can someone check on this? If there is such a thing, then I may have no need to incur the extra expense of purchasing VSE, and the basic TC40 should be able to meet all of my plans.

jmmultex
02-17-2013, 04:50 PM
My apologies in advance if I am misunderstanding what you are saying...

All sets for TriCaster are made with still layers, combined with two elements bound to outside sources. You would normally set the background layer as a full-screen image, and add any layers on top of that as elements that cover only part of the screen. This is the basic model used when developing images using layers in Photoshop. When stacked one on top of another, these layers form a composite image. Virtual sets allow you to add two special additional layers that can be bound to outside video/image sources. These layers need to be named "Input A" (usually used to map in the talent) or "Input B" (for additional video or still image sources). These special layers can be a simple black rectangle that tells the TriCaster where to map the full input source, or a special UV image that allows an input source to be cut up into smaller elements or warped/distorted. (NOTE: If you want to use the UV images, your Photoshop file must be in either 16bpp or 32bpp.)

Just as in normal Photoshop, the order of the layers allows layers in front to hide parts of the layers behind them. If you have a background layer, you can put "Input A" on top of that (to display the talent) and then a layer with a desk in it on top of the talent layer to have the talent stand behind the desk. If you have an image of a monitor on the front of the desk, you can then put another layer called "Input B" on top of the desk layer, with a black box in it exactly overlaying the screen of the monitor. When built as a set using Virtual Set Editor, you can select a video of talent in front of a green screen (keyed out with LiveMatte) for Input A and then have Stills or a DDR selected as Input B.
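[Editor's sketch] As an illustration of the placeholder idea (this is not NewTek documentation - the coordinates and file names are invented), a flat black rectangle like the "Input B" layer described above could be generated with a few lines of Python/Pillow and then imported into the PSD on a layer named "Input B":

from PIL import Image, ImageDraw

CANVAS = (3840, 2160)  # same double-resolution canvas as the rest of the set

# Transparent layer the size of the full set canvas.
input_b = Image.new("RGBA", CANVAS, (0, 0, 0, 0))

# Opaque black rectangle marking where the Input B source (e.g. the monitor on
# the front of the desk) should be mapped. Coordinates are hypothetical.
monitor_box = (1500, 1550, 2350, 2000)  # left, top, right, bottom
ImageDraw.Draw(input_b).rectangle(monitor_box, fill=(0, 0, 0, 255))

input_b.save("input_b_placeholder.png")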

If you don't have Virtual Set Editor, you can still use Virtual Inputs to create a simple set. Pick one of the Virtual Inputs and choose the default "A over B" set. For Input B, select a still image as a background - this could be a JPG of a wall in an office. For Input A, select a talent video over green screen (keyed out using LiveMatte). Then choose a transparent PNG file of a desk as the overlay, creating a 3-layer virtual 'set' that can be accessed using a single selection (V1-V8, whichever Virtual Input you set it up on). If you needed an additional layer, that can be handled using a DSK outside of the Virtual Input. By carefully lining everything up, you can have a fairly believable virtual set without building a 'real' virtual set with VSE.
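[Editor's sketch] For intuition about what LiveMatte is doing when it "keys out" the green - this is not NewTek's algorithm, just a crude distance-from-green matte with made-up file names - a few lines of Python/numpy show how a green-screen frame becomes a talent layer with alpha:

import numpy as np
from PIL import Image

def crude_chroma_key(frame_path, key_rgb=(0, 177, 64), threshold=90.0):
    """Very rough green-screen key: pixels near the key color become transparent."""
    rgb = np.asarray(Image.open(frame_path).convert("RGB"), dtype=np.float32)
    dist = np.linalg.norm(rgb - np.array(key_rgb, dtype=np.float32), axis=-1)
    alpha = np.where(dist < threshold, 0, 255).astype(np.uint8)  # hard matte, no edge softening
    rgba = np.dstack([rgb.astype(np.uint8), alpha])
    return Image.fromarray(rgba, "RGBA")

# Hypothetical usage: produce the keyed 'A Layer' to stack over the background
# still and under the desk overlay, as in the A over B setup above.
crude_chroma_key("talent_greenscreen_frame.png").save("talent_keyed.png")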

Hope this clears things up. Feel free to contact me if you need some help...

Best,
John

SBowie
02-17-2013, 05:46 PM
For my purposes, a set made of just still layers is not adequate.

I don't think anyone suggested that, but when it comes to live sources in a virtual set on a TriCaster 40, you're going to have at most 2 potentially live video input layers plus one live overlay layer (which does not zoom with the LiveSet). If you're using VSE, you can include numerous still layers along with these in creative ways as elements of the composition. All layers other than the background can have an alpha channel. (Note that most of the original sets from NewTek do not support alpha for the B input, but VSE2 can create sets that do.)

John is right that you can also easily set up a multilayer composition using the A/B effect (and some of our competitors would refer to such a composition as a virtual set, though we do not, since it's really just a 'locked background image') but it will be limited to the two input layers and overlay, and it will not include any other layers (upstream of DSK channels).

PabloMack
02-18-2013, 04:05 PM
jmmultex: Given your narrative above, what if I want the desk that is sitting in front of the talent to be moving, as from a video file containing embedded alpha? Let me give you an explicit scenario. I want three layers. The background layer is a still, a video or a live feed of a forest where the leaves are blowing in the wind. The talent goes in the middle layer: an actor in front of a green screen, with the green chroma keyed out. The foreground layer is a T. rex that is approaching the actor, who is sometimes hidden from the camera behind the body of the theropod as it approaches its possible next meal. This foreground would probably come from a file where the video has embedded alpha, but possibly a live video that would also have to be chroma keyed. Then the three layers have to be composited together in real time.

I think what makes me unsure that the TC40 architecture can handle my requirements is that the foreground layer has to cover the whole screen, and the layers behind only show through where the alpha in the foreground layer makes it transparent. It seems to me that what you said is that the foreground layer video does not contain alpha, and that the only reason the other layers show through is that the foreground layer occupies only a fixed portion of the screen - but it always obscures what is underneath it, because its footprint is not moving (even though an opaque video may be playing in that fixed location of the screen).

SBowie
02-18-2013, 04:10 PM
Providing you don't need to zoom the entire composition while live (the VI overlay layer, which is where your T-Rex is, doesn't zoom), you could do this without VSE or a LiveSet effect, apart from the default A over B effect. (the difficult part would be finding a trained dinosaur - or talent willing to work with one that isn't house-broken.)

jmmultex
02-18-2013, 04:16 PM
That would not be a problem, even without Virtual Set Editor, but you will need a TriCaster with 2 DDRs. With the A over B effect selected in a Virtual Input - "VI #1":

Put the background image in Stills, and select Stills as your 'B Layer'
Put the talent video in DDR 1, apply LiveMatte to key it, and select it as the 'A Layer'
Put the 'T-Rex' video in DDR 2 (I'm assuming it's pre-keyed, since it can be hard to get T-Rexes to do what you want in front of a green screen), and add that as the overlay.

This should give you a complete composite of the elements. Set DDR 1 to auto-play, and have DDR 2 visible on the lower right tab. Switch VI1 to program, and your talent will be keyed over the background and talking. Simply press PLAY on DDR 2 when you want the T-Rex to run onto the set.
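[Editor's sketch] As a sanity check of that stacking order (background still on the bottom, keyed talent in the middle, pre-keyed T-Rex overlay on top), a single-frame mock-up in Python/Pillow looks like this - the file names are hypothetical, and on the TriCaster this composition happens live in the Virtual Input rather than in software you write:

from PIL import Image

FRAME = (1920, 1080)

# Bottom-to-top, matching the Virtual Input setup described above.
forest = Image.open("forest_background.jpg").convert("RGBA").resize(FRAME)    # 'B Layer'
talent = Image.open("talent_keyed.png").convert("RGBA").resize(FRAME)         # keyed 'A Layer'
trex = Image.open("trex_frame_with_alpha.png").convert("RGBA").resize(FRAME)  # overlay

frame = Image.alpha_composite(forest, talent)
frame = Image.alpha_composite(frame, trex)
frame.save("trex_scene_preview.png")  # the T-Rex hides the talent wherever its alpha is opaque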

Hope this helps...

-john

jmmultex
02-18-2013, 04:18 PM
Hey Steve, guess T-Rex jokes are too hard to pass up! :-)

-john

PabloMack
02-18-2013, 04:22 PM
I take it that the TC40 doesn't have two DDRs.

SBowie
02-18-2013, 04:43 PM
Hence the live T-Rex jokes. :)

PabloMack
02-18-2013, 05:08 PM
(the difficult part would be finding a trained dinosaur - or talent willing to work with one that isn't house-broken.)

I see your point. Since I am planning on doing the shoot at my house, using an untrained T. rex I am sure to get my house broken.

jmmultex
02-18-2013, 10:22 PM
Hi PabloMack,

Just to throw out another point...

If you have the talent working live, you can do everything you want on the TC-40. Instead of using the DDR to play back recorded talent, you can set "Input A" to a camera covering the talent on a green screen (keyed with LiveMatte) and use the DDR on the overlay to bring in the T-Rex. If the talent isn't live, but is recorded on an external source like a camera, you could play it back 'live' from the camera into the TC-40 and use it the same way you would a live camera feed. Good luck!

Best,
John

And on a side note:

Never forget that the TC-40 is incredibly powerful - even with fewer features than the 455/855. Speaking as someone who started working on an Amiga-based Video Toaster, I can tell you that limitations can sometimes be the grains of sand that produce pearls of genius - just be open and creative about the process, and don't let people tell you it can't be done.

PabloMack
02-20-2013, 04:02 PM
Hey jmmultex. That is a great idea. It fits well into my workflow. With the system configured in only one way, the camcorder could either feed in live or replay just the green-screened talent layer, and the TC40 wouldn't know the difference. To instantly replay a take, the system wouldn't even have to be reconfigured for the replay and then configured back to record the next take; only the camera would know the difference. The one caveat is that the cameraman would have to start the camera's record or playback of the talent in sync with whoever is starting the background and/or foreground (T. rex) videos on the TC40. There is also the need for the talent layer to be replayed alone through the TC40 on the same monitor, to be evaluated on its own merits. The foregoing alone seems acceptable, but I think it maybe gets better.

If the TC40 can also record and play back the final composite on the same monitor that was used to view the live composite, that gives me the extra flexibility I need. I have no need for the TC40 to be able to record any other channel during a take. Since the isolated talent layer was recorded in the camcorder, I can't see the need to redundantly record/replay the same thing in the TC40 during a shoot; the TC40 only needs to be able to pipe it through from the camcorder to the monitor for group review. Also important is the ability to replay the composite in order to judge the quality of the take, treating it as a "rough draft" composite; the final composite will be done in post. If we have instant replay of the composite readily available, the only reason I can think of why we might want to replay the talent layer and review a live composite is to see how well the take meshes with different foreground and/or background layers drawn from video files. For example, we might want to see how well the recorded talent video works with a different background (such as a temperate pine forest instead of a tropical rain forest) and/or a different foreground (such as an Allosaurus gracilis instead of a Tyrannosaurus rex). But that would be an infrequent need, and the live composite with manual syncing of camcorder and TC40 would be good enough. (Anyone know any A. gracilis jokes?)

This turns me back on to thinking of the TC40 as a strong choice. I just don't want to spend $5K only to find that it won't meet my requirements. I have had a serious talk with "the boss lady" about justifying that amount of money. A mistake of that magnitude would be a serious matter.

Thanks to the both of you for your quality advice and expertise.

jmmultex
02-20-2013, 04:06 PM
That's great, PabloMack. I'd love to see some of the productions that you end up doing - it definitely sounds interesting!

Best,
John

SBowie
02-20-2013, 04:11 PM
If at all possible, you should sit down with a knowledgeable reseller and get some hands-on time to make sure you're clear on things...

PabloMack
02-24-2013, 09:59 PM
If at all possible, you should sit down with a knowledgeable reseller and get some hands-on time to make sure you're clear on things...

I used to attend a monthly Lightwave Users group meeting held at BiWay Media across town in Houston. I got to know the place and met a couple of guys that work there. They would be the ones I will probably contact for this. Back then I never thought I would be buying a TriCaster. ;)

PabloMack
05-18-2013, 10:26 AM
Here is a 40-second "walk-through" of the virtual set I created with Lightwave, in case anyone is interested. It took two weeks to render, mostly because of the complex plant models and all of the glass in the "fish bowl" meeting room located in the middle of the large central lounge area. I haven't figured out how to make these reflective surfaces interact correctly in a composite with live video. Perhaps I am just asking for trouble.

http://www.youtube.com/watch?v=91zM-m96Etw

The scene "out back" the rear garage door is a video I took in New South Wales, Australia.

joseburgos
05-18-2013, 10:54 AM
I have some old posts explaining this, but I found this one that I posted last Christmas that should help you:
http://forums.newtek.com/showthread.php?132481-My-private-stash-secret-Lightwave-LiveSet-texture-tip

abdelkarim
07-14-2013, 08:15 PM
easy

PabloMack
09-26-2013, 09:00 AM
Question: In a setup like you see with a weatherman, the live radar map is a video shown behind the live talent, but in TriCaster parlance the talent is actually in the background layer. Because the map is actually behind the weatherman who is standing in front of it, I suppose it is the virtual set providing this "background behind the background", and it is not considered to be one of the "layers". In scenarios where this weather map covers the whole screen, it is essentially like a four-layer system, with one of the virtual video monitors acting as a fourth layer. The TriCaster 40 only has one DDR, so it can only play one video at a time. However, this weather map must also be a video, so it seems to me that if the TriCaster can do this, then it can actually play two recorded videos at once. This is where the background layer contains live chroma-keyed action, the middle or front layer contains a recorded video, and the virtual studio provides the background (behind what TriCaster calls the background layer). So there can effectively be the equivalent of "four layers". Is this a correct assessment, or do I misunderstand the architecture? If my understanding is correct, then I don't need the middle or foreground layer for this scenario unless I use it to add text or some other kind of aid.

kltv
09-26-2013, 10:59 AM
Typically with weather, both the live camera and the weather graphics are live video inputs. Weather folks usually have their own graphics computer that provides a normal video output. In this case, the talent video could be in any number of places depending on how you want to do it. You could load any of the two-, three- or four-layer options for M/E EFFECT in the TriCaster and place your talent in any of those background layers, above or below any other graphics in the stacking order. You could also utilize any of the keyers in the M/E. If you were using DDRs or internal graphics sources in the TriCaster to do the weather graphics, I would keep the M/E in MIX mode and use the A/B layers to do transitions between background graphics while leaving the talent keyed over the top in one of the M/E's keyers.

Kris

PabloMack
09-26-2013, 11:13 AM
This is the TriCaster 40 (specifically) that I am talking about. As I understand it, the weather map graphic does not occupy one of the three "layers" (background, middle and foreground), as it must actually appear behind the background layer, which has the live talent. Is this correct? In other words, it is background even behind the "background layer"?

PabloMack
11-12-2013, 11:45 AM
I went down to my local NewTek rep (BiWay Media) and had someone demo the TC40 for me. Really nice guy. He told me that Version 2 actually includes the Virtual Set Editor/compiler that was a $2K option for Version 1. Can someone verify this for me?

SBowie
11-12-2013, 12:36 PM
VSE (Virtual Set Editor) is still an extra cost option for all TriCasters (not just 40). Perhaps your reseller was thinking of ASC (Animation Store compiler), which is included with 40 V2, but wasn't in the earlier model.

Buddy.Hannon
11-12-2013, 12:44 PM
If you're talking about the TriCaster 40, then you have two input layers, one for the talent and one for another source (video or graphics), which would be your weather map, for example. Depending on how you set up your file, you can put your video/graphics layer behind or in front of your talent.

PabloMack
11-16-2013, 02:45 PM
VSE (Virtual Set Editor) is still an extra cost option for all TriCasters (not just 40).

I talked with BiWay's engineer and he confirms what you say. It just dawned on me: according to Joe, the VSE software comes with the TC40 but without a license, so it only runs in demo mode. A watermark is produced if you compile a virtual set. You have to pay the fee if you want full function. However, for my purposes a watermark is probably okay, because I am only planning to use the system for real-time feedback during a live shoot. The program output will not be used in the final production. So with this in mind, it looks like I might have free use of VSE anyway. I may not even need it, though.

I just ordered a TC40. It should come next week. I am all giddy....

SBowie
11-16-2013, 04:45 PM
A watermark is produced if you compile a virtual set. You have to pay the fee if you want full function.

That's true, there is a demo version on the system.

EventsHd
03-16-2015, 05:04 PM
VSE takes layered .psd files as input and creates LiveSet projects from them. You render the various layers of your scene as background or foreground composition elements in your graphic creation weapon of choice (it could be LW, but you could use a carved-up photo, watercolor, line drawing, or what have you), arrange them - along with video input proxy layers - as required, and import them into VSE.

I think there's a video on the site depicting the VSE end of things.
Hi, I had a look for this video but can't seem to find it. What site are you referring to, and any idea what the video was called? Thanks.

SBowie
03-16-2015, 06:52 PM
Hi, I had a look for this video but can't seem to find it. What site are you referring to, and any idea what the video was called? Thanks.

There have been so many new videos added that it's hard to keep up with them all, but I think I had this one in mind (which now seems to appear only on an external site):

https://www.youtube.com/watch?v=r1WgXf7D9EI

It's a bit out of date, but still helpful.