Page 2 of 5
Results 16 to 30 of 69

Thread: Realistic Camera

  1. #16
    Member
    Join Date
    May 2006
    Location
    France
    Posts
    4,019
    "hither, yon
    The distance to the currently set near and far plane, primarily for use with the preview function. " (from the older beta 9 SDK doc)

    These were used previously in the (OpenGL) preview projection matrix;
    they are no longer needed, as this is handled internally by Layout.

    Denis.

  2. #17
    NewTek Developer jameswillmott's Avatar
    Join Date
    Dec 2004
    Location
    Gold Coast
    Posts
    3,171
    Front clipping plane and rear clipping plane for OpenGL preview, if I'm not mistaken.

    EDIT: Ah, Dpont got in just before me

  3. #18
    Member
    Join Date
    May 2006
    Location
    France
    Posts
    4,019
    Trying to clarify some points about a (still utopian) port
    of the "Realistic Camera" pbrt plugin to an LW camera plugin,
    understanding the pbrt transformation matrix system
    (I'm not thinking in C++), avoiding confusion between
    film and raster coordinates, and leaving out unnecessary code.
    Correct me if I'm wrong or miss something!

    -In our lens viewer, we can parse the lens stack and store each
    lens's specifications (radius, z position, IOR, aperture).
    -In the LW camera evaluate function, we get the film position,
    the camera world position, and the world camera orientation matrix.

    -We must initialize our camera with its lens system. Raster space is
    for pixel coordinates, but we already have film coordinates in LW,
    so we must translate our system by the film-to-back-lens distance.

    -To generate a ray, we build our initial ray
    in camera space from our film coordinates, translated by the back-lens
    distance; we don't need concentric mapping (except if we want to do
    ray samples in our lens viewer).

    -We use the pbrt disk (zero radius) and the "RC" lens-component (sphere)
    functions to get the dpdu and dpdv differential vectors, and then get
    the normal at the intersection by cross product, or nothing if there's
    no intersection.

    -We compute the refracted ray with this oriented normal (for convex or
    concave surfaces), taking the IOR from the parsed lens data,
    and repeat for each lens component.

    -Finally, we transform the resulting
    ray direction into the world system with the camera matrix.

    -We don't use the weighting factor (for exposure?).

    (The "RC" reference links are at the top of this thread.)



    Denis.

  4. #19
    LightWave Engineer Jarno's Avatar
    Join Date
    Aug 2003
    Location
    New Zealand
    Posts
    597
    Quote Originally Posted by Pavlov
    apart from being Sigur Ros's first album, what is Yon, and hither too, in simple words?
    As mentioned, hither and yon are distances to the near and far clipping planes. They are conventionally called that because "near" and "far" tend to be reserved words in many C and C++ compilers, so can't be used as variable or parameter names.

    ---JvdL---
    (who missed a Sigur Ros concert to work on LW9)
    ((and it was "Von", not "Yon" ))
    ()

  5. #20
    Member
    Join Date
    May 2006
    Location
    France
    Posts
    4,019
    ..."we have already film coordinates in LW"...
    In fact we get (centered) normalized film coordinates in LW camera,
    so here we need the film resolution diagonal (from filmsize?) and real film diagonal from panel request,
    to get a scaling factor for a real film position, and then the ray
    we are evaluating.

    -Major difficulty for me, comes from different coordinate
    systems (Camera, Object, World) and each
    specific space-transformation we need for each step.
    Disk and lenscomponent function operate in object space,
    to get derivatives, but normal intersection is calculated
    in world space, these transformations are simple positive
    or negative translations on the optical axis, given the distance from
    each lenscomponent to front stack-lens.

    -From first idea about ray marcher, we keep only a basic
    function to compute lensComponent hit position (intersection).

    A lot of possible errors or mistakes in calculation...

    Wrong?Tedious?
    Could someone upload a Carl Zeiss Lens specification :-)(ascii file)?
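    The scaling factor described here can be sketched as below, assuming centred normalized film coordinates in [-0.5, 0.5] as Denis describes (Jarno's reply further down corrects this for the actual LW camera handler, which supplies metres); the function name and parameters are hypothetical:

    ```python
    import math

    def film_position_mm(nx, ny, res_x, res_y, film_diag_mm):
        """Map centred normalized film coordinates (nx, ny in [-0.5, 0.5])
        to a physical film position in millimetres, scaling by the ratio
        of the real film diagonal to the pixel-resolution diagonal."""
        scale = film_diag_mm / math.hypot(res_x, res_y)   # mm per pixel
        return nx * res_x * scale, ny * res_y * scale
    ```

    For example, the frame corner (0.5, 0.5) then lands exactly half a film diagonal from the centre, whatever the resolution.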

    Denis.

  6. #21
    Member
    Join Date
    May 2006
    Location
    France
    Posts
    4,019
    Lens specification sample:


    # Muller 16mm/f4 155.9FOV fisheye lens
    # MLD p164
    # Scaled to 10 mm from 100 mm
    # radius sep n aperture
    30.2249 0.8335 1.62 30.34
    11.3931 7.4136 1 20.68
    75.2019 1.0654 1.639 17.8
    8.3349 11.1549 1 13.42
    9.5882 2.0054 1.654 9.02
    43.8677 5.3895 1 8.14
    0 1.4163 0 6.08
    29.4541 2.1934 1.517 5.96
    -5.2265 0.9714 1.805 5.84
    -14.2884 0.0627 1 5.96
    -22.3726 0.94 1.673 5.96
    -15.0404 0 1 6.52
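    A small sketch of parsing such a file into per-element records, assuming '#' comment lines and whitespace-separated columns (radius, separation, IOR, aperture) as in the sample above; the zero-radius row is the aperture stop, and an IOR of 0 in the file is read as 1 (air). The function name and record layout are assumptions:

    ```python
    def parse_lens_file(text):
        """Parse a lens specification: one element per line with columns
        (radius, separation, ior, aperture); '#' starts a comment.
        A zero radius marks the aperture stop."""
        elements = []
        for line in text.splitlines():
            line = line.split('#', 1)[0].strip()   # drop comments
            if not line:
                continue
            radius, sep, ior, aperture = (float(v) for v in line.split())
            elements.append({
                'radius': radius,                    # signed radius, 0 = stop
                'sep': sep,                          # distance to next surface
                'ior': ior if ior != 0.0 else 1.0,   # 0 in the file means air
                'aperture': aperture,                # element diameter
            })
        return elements
    ```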


    Denis.

  7. #22
    LightWave Engineer Jarno's Avatar
    Join Date
    Aug 2003
    Location
    New Zealand
    Posts
    597
    Quote Originally Posted by dpont
    In fact we get (centered) normalized film coordinates in LW camera, so here we need the film resolution diagonal (from filmsize?) and real film diagonal from panel request
    The camera handler evaluate function is called with position in film coordinates. Film coordinates are centred on the film, but not normalized. The position is given in metres.

    ---JvdL---

  8. #23
    LightJustice Panikos's Avatar
    Join Date
    Feb 2003
    Location
    Nicosia Cyprus
    Posts
    1,727
    All this is very interesting.

    Paolo, vignetting can be added in many ways without calculations: using the Advanced Camera, an ImageFilter, ImageProcessing Foreground, or a compositing app.

    As for non-linear light sampling, Virtual Darkroom works great as a post effect, though it is a bit difficult to jump into.

    Thanks all for the useful info.
    Bonne chance

  9. #24
    LightJustice Panikos's Avatar
    Join Date
    Feb 2003
    Location
    Nicosia Cyprus
    Posts
    1,727

    TiltShift Camera

    The biggest camera was manufactured in 1900 in the USA. Due to its dimensions it was called the "Mammoth".
    The train company "Chicago & Alton" commissioned this camera in order to shoot its new luxurious train.
    The camera weighed 635 kilos and required 15 people to operate it.
    A photograph had dimensions of 1.4 x 2.4 m and required 45 litres of chemicals in the darkroom.

    Attached: camera_a.jpg (51.6 KB)
    Last edited by Panikos; 07-20-2006 at 07:47 PM.

  10. #25
    Ehhh, I know about image processing... but since FPrime cannot see them, they don't exist.

    Paolo

  11. #26
    Member
    Join Date
    May 2006
    Location
    France
    Posts
    4,019


    Just a piece of text I read about NT at SIGGRAPH:


    "With the addition of Advanced Camera Tools (ACT) in LightWave v9, LightWave now sets the standard for the ability to recreate any real-world camera and provides a selection of cameras perfect for film and television use, and ideal for architecture and visualization. The new capabilities showcased at SIGGRAPH will include two sample cameras, a RealLens and a two-point perspective. The RealLens camera applies lens distortions in images based upon data files describing the characteristics of real camera lenses. The two-point perspective camera is useful for traditional architectural renderings."

    Maybe in the next Open Beta?...

    Denis.

  12. #27
    I guess the 2-point perspective is derived from J. Willmott's (mmmh, yes, maybe he's the one, maybe I'm wrong; just look in the beta threads) "Architectural camera".

    Nice to hear they implemented it; a good viz move.

    Paolo

  13. #28
    I am Jack's cold sweat Karmacop's Avatar
    Join Date
    Feb 2003
    Location
    Bathurst, NSW, Australia
    Posts
    2,117
    Not derived from it; that'd suggest that James had something to do with it (and I'm making an assumption that he didn't). You can do 2-point perspective with the Advanced Camera too; James' camera is just quicker to set up.
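    Neither camera's internals are public, but the effect itself is simple to sketch: two-point perspective is an ordinary perspective projection onto a vertical film plane (optionally with a vertical film shift to reframe, as with a shift lens), so vertical edges never converge. A hypothetical illustration, with all names my own:

    ```python
    def two_point_project(p, focal=1.0, shift_y=0.0):
        """Project camera-space point p = (x, y, z), z > 0, onto a
        vertical film plane with an optional vertical film shift.
        Because the film plane stays vertical, vertical edges project
        to vertical lines (two-point perspective)."""
        x, y, z = p
        return (focal * x / z, focal * y / z + shift_y)
    ```

    Two points on the same vertical edge (same x and z, different y) land on the same screen x, which is exactly the "verticals stay parallel" property architectural renderings want.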

  14. #29
    LightJustice Panikos's Avatar
    Join Date
    Feb 2003
    Location
    Nicosia Cyprus
    Posts
    1,727
    I saw the recent LW9.* videos.
    Realistic Camera is cool, however I see a small weakness.

    Some image filters are HDR-sensitive, like HDR_Exposer, LWVDR, or any colour-processing filters.

    Vignetting should be applied as the last filter, as a separate plugin using values retrieved from the camera lens; otherwise vignetting becomes a parameter in invalid calculations.

    Of course, I am only guessing; I haven't tried the tool in practice or seen it in a real application.
    Last edited by Panikos; 08-18-2006 at 03:23 AM.

  15. #30
    I saw some Vray 1.5 demos; it also features nice real-world cameras.
    These seem to give the same vignette results as Maxwell. The vignetting seems to be "non-linear": it appears to respond to the image's local exposure, so it's not present everywhere, or not in the same amount.
    Adding a simple vignette in post is easy, but it is not as "scientific" as one derived from real camera calculations.
    Since we're getting real cameras, is there any chance they give this effect?

    Paolo

