Thread: Why CPUs Are Much Better at Large-Scale Rendering

  1. #1

    Intel says CPUs Are Much Better at Large-Scale Rendering

    I just got an email titled "Why CPUs Are Much Better at Large-Scale Rendering, Plus Lots More in the New Parallel Universe Magazine":

    "Intel® Rendering Framework Using Software-Defined Visualization shares how new multi- and many-core CPUs are your true ally for large rendering tasks, thanks to their lower costs and larger memory capacity. We look at how they’re terrific for software-defined data visualization—yielding 100x better rendering performance."



    A PDF of the magazine is provided here: https://software.intel.com/sites/def...lq_cid=4350433

    My question is whether GPUs will eventually overcome large-scale production issues, or whether they will forever be bottlenecked by memory access to system RAM.
    Last edited by thomascheng; 03-03-2019 at 07:29 AM.

  2. #2
    Registered User
    Join Date
    Aug 2016
    Location
    a place
    Posts
    1,848
    I don't know, but I guess it's all about context. LW seems to be used by a lot of one-man or small-team setups, where GPU is a distinct advantage. Even on larger-scale projects, I'm sure a hybrid CPU/GPU setup would be great for fast previz and a quicker turnaround to final.

  3. #3
    Almost newbie Cageman's Avatar
    Join Date
    Apr 2003
    Location
    Malmö, SWEDEN
    Posts
    7,650
    Quote Originally Posted by thomascheng View Post
    My question is whether GPUs will eventually overcome large-scale production issues, or whether they will forever be bottlenecked by memory access to system RAM.
    Eventually... when GPUs have enough VRAM that they don't need to communicate with the CPU.

    As long as the GPU has to communicate with regular RAM, it has to go through the CPU, and I don't see any way around that.
    Senior Technical Supervisor
    Cinematics Department
    Massive - A Ubisoft Studio
    -----
    Intel Core i7-4790K @ 4GHz
    16GB Ram
    GeForce GTX 1080 8GB
    Windows 10 Pro x64
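    To put rough numbers on that bandwidth gap, here is a back-of-envelope sketch in Python. The scene size and bandwidth figures are assumed typical values for illustration, not measurements from any specific hardware:

    ```python
    # Back-of-envelope comparison (all figures are rough assumed values):
    # moving an out-of-core scene over the PCIe bus vs. reading it from
    # local VRAM.

    scene_gb = 32          # assumed scene size in GB (too big for most VRAM)
    pcie_gbs = 16          # ~16 GB/s practical PCIe 3.0 x16 bandwidth (assumed)
    vram_gbs = 320         # assumed local VRAM bandwidth in GB/s

    pcie_seconds = scene_gb / pcie_gbs
    vram_seconds = scene_gb / vram_gbs

    print(f"over PCIe: {pcie_seconds:.1f} s, from VRAM: {vram_seconds:.2f} s")
    print(f"ratio: {pcie_seconds / vram_seconds:.0f}x slower over the bus")
    ```

    With these assumed numbers the bus is an order of magnitude slower than local memory, which is the core of the out-of-core rendering problem the post describes.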

  4. #4
    Registered User
    Join Date
    Mar 2012
    Location
    Socal
    Posts
    403
    GPUs work best when all the SIMD lanes are doing the same thing. Ray tracing is highly divergent: rays go off and do different things.
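    A toy cost model sketches why that divergence hurts: a warp of lanes executes in lockstep, so when lanes disagree on a branch, the hardware runs each side in turn with the other lanes masked off. The function, lane counts, and step costs below are illustrative assumptions, not real GPU timings:

    ```python
    # Toy cost model of SIMD branch divergence (an illustrative sketch,
    # not a real GPU simulator).

    def warp_steps(branch_taken, cost_if=4, cost_else=4):
        """Steps a warp spends on an if/else, given each lane's branch choice."""
        steps = 0
        if any(branch_taken):       # some lane takes the IF side
            steps += cost_if
        if not all(branch_taken):   # some lane takes the ELSE side
            steps += cost_else
        return steps

    coherent = [True] * 32                        # all 32 lanes agree (primary rays)
    divergent = [i % 2 == 0 for i in range(32)]   # lanes disagree (bounced rays)

    print(warp_steps(coherent))    # 4 steps: only one side executes
    print(warp_steps(divergent))   # 8 steps: both sides execute serially
    ```

    A CPU core running one ray at a time simply takes whichever branch applies, which is why highly divergent secondary rays favor the CPU model.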

  5. #5
    I'm not saying GPU rendering is bad, but I would rather have a CPU for accurate calculations.

    GPU renderers struggle when branch conditions come into play. SSS, volumetrics, caustics, and refraction are some of the areas that require this kind of conditional branching. Sure, there are hacks to fake it. In the end, a CPU handles IF/THEN/ELSE efficiently, where a GPU pays a steep divergence penalty. Overcoming these limitations on a GPU requires large amounts of memory to generate textures for storing data (deep texture methods), which is where photon mapping comes into play. Anyone who has used this type of renderer (Mental Ray) will understand how much memory gets sucked away.
    I was thinking of the immortal words of Socrates - "I Drank What??"
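    What a GPU actually does with IF/THEN/ELSE is predication: both sides are computed for every lane, and a per-lane mask selects which result each lane keeps. A minimal sketch of that masked-select pattern, where the depth values and the "entered the SSS medium" scenario are hypothetical:

    ```python
    # Sketch of predicated execution, the way SIMD hardware handles
    # IF/THEN/ELSE: both candidate results are computed for every lane,
    # then a condition mask picks one per lane. The branch is not skipped;
    # it is paid for on both sides.

    def predicated_select(cond, then_vals, else_vals):
        # The caller has already evaluated both paths for all lanes;
        # the mask only chooses which result each lane keeps.
        return [t if c else e for c, t, e in zip(cond, then_vals, else_vals)]

    depths = [0.2, 1.5, 0.8, 3.0]        # hypothetical per-ray hit depths
    mask = [d < 1.0 for d in depths]     # e.g. "ray entered the SSS medium"
    inside = [d * 10 for d in depths]    # THEN path, computed for all lanes
    outside = [d + 1 for d in depths]    # ELSE path, computed for all lanes

    print(predicated_select(mask, inside, outside))  # [2.0, 2.5, 8.0, 4.0]
    ```

    Executing both paths for every lane is exactly the extra work (and extra intermediate storage) the post is pointing at.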
