
Human brain rendered in 3D with UsefulProgress technology



oliversimonnet
08-24-2010, 07:20 PM
I don't know if this is LightWave, but I had to link it, it's pretty impressive.
(By the way, I don't see a brain haha)
http://www.youtube.com/watch?v=kAXoQbHTAOA

oliversimonnet
08-24-2010, 07:22 PM
A heart here as well:
http://www.youtube.com/watch?v=3v_x8HNkxxE

Elmar Moelzer
08-24-2010, 10:41 PM
LW has had much better than that for years now through our VoluMedic plugin.
Sorry for the shameless plug, but you should really have a look:
http://www.volumedic.com/

wesleycorgi
08-25-2010, 04:46 AM
LW has had much better than that for years now through our VoluMedic plugin.
Sorry for the shameless plug, but you should really have a look:
http://www.volumedic.com/

No shame in plugging cool stuff!

oliversimonnet
08-25-2010, 06:26 AM
Ah, cool plugin :)
Thanks for the info.

TheGarf
08-25-2010, 08:30 AM
The technique of displaying the scanned slices from, for instance, a CT or MR scanner as 3D voxels is not very new at all. Here at Philips we have been delivering real-time solutions for displaying those kinds of medical data sets to clinics all around the world for at least 6 years, as far as I can remember.

Take, for instance, the Brilliance Workspace (http://www.healthcare.philips.com/in_en/products/ct/products/ct_brilliance_workspace/index.wpd).

We don't support LightWave though.

Elmar Moelzer
08-25-2010, 08:50 AM
Yeah, we have not quite been doing it for 6 years yet, but almost ;)
VoluMedic also does not require special volume rendering hardware, as most solutions do...

stefanbanev
08-26-2010, 02:35 PM
The technique of displaying the scanned slices from, for instance, a CT or MR scanner as 3D voxels is not very new at all. Here at Philips we have been delivering real-time solutions for displaying those kinds of medical data sets to clinics all around the world for at least 6 years, as far as I can remember.

Take, for instance, the Brilliance Workspace (http://www.healthcare.philips.com/in_en/products/ct/products/ct_brilliance_workspace/index.wpd).

We don't support LightWave though.

6 years!!! Are you kidding? The VolPro board goes back to the '90s, and Siemens has used it since the 2000s. The major issue is the quality of interactive volume rendering, and Philips has an inferior one (at least from what I've seen at RSNA 2009). Today, the best interactive VR is provided by the multi-core CPU, not by the GPU, and the difference is quite dramatic.

TheGarf
09-02-2010, 03:17 PM
6 years!!! Are you kidding? The VolPro board goes back to the '90s, and Siemens has used it since the 2000s. The major issue is the quality of interactive volume rendering, and Philips has an inferior one (at least from what I've seen at RSNA 2009). Today, the best interactive VR is provided by the multi-core CPU, not by the GPU, and the difference is quite dramatic.

The 6 years I mentioned was a quick guess from memory; it must be much longer. I'm curious, though: what was inferior about the volume render quality? Currently, the multi-core CPU is the way to go, not least due to the large size of some of the datasets, although cards like this (http://www.nvidia.com/object/product-quadro-6000-us.html) might change things in the future.

stefanbanev
09-03-2010, 10:54 AM
The 6 years I mentioned was a quick guess from memory; it must be much longer. I'm curious, though: what was inferior about the volume render quality? Currently, the multi-core CPU is the way to go, not least due to the large size of some of the datasets, although cards like this (http://www.nvidia.com/object/product-quadro-6000-us.html) might change things in the future.

Well, the observations below are general in nature and reflect some aspects of the current state of volume rendering for medical applications.

The quality compromises made for interactive rendering are the major class differentiator. For the interpolation-classification (IC) case,
(1) the reduction of sampling density along the ray has a very profound impact on quality, and
(2) traditionally, 2D compromises contribute to quality degradation as well (for both the IC and CI cases).

1) Low sampling density along the ray:

Thin-film structures are very sensitive to this compromise. Besides obvious ones like skin tissue, more critical and more practically relevant are sharp color-gradient cases; differentiating close HU ranges solely by means of the transfer function (TF) is the most common need. A scenario such as visualizing different bone densities underneath one another is one of the common cases where this compromise is most apparent (a good example would be http://commons.wikimedia.org/wiki/File:High_Definition_Volume_Rendering.JPG). The vascular case (with contrast agent) is another example of overlapping HU ranges. The common cheat to hide low sampling density is setting a color-uniform transfer function, or pre-segmenting the vessels.

To get HDVR-level interactive quality you need to provide x16 samples per cell; clearly, without an effective adaptive sampling strategy this is too computationally expensive. I've seen people try to apply adaptivity more sophisticated than trivial space leaping, yet in my opinion nobody comes even close to HDVR. The most apparent manifestation of the low-sampling-density deficiency is colonoscopy; in many cases people just use a polygonal representation instead of direct volume rendering. Well, clearly such a drastic compromise is fine for a pragmatic mind-set, but it is quite indicative of a VR engine's limitations: try flying out of the "tube", or setting transparency to see other parts of the body; "workflow" is the common excuse to prohibit such activity.
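To make the sampling-density point concrete, here is a rough C++ sketch of the kind of adaptive ray marching being described: take coarse samples where the classified data is flat, and subdivide the step (here into 16 sub-samples) wherever consecutive samples change sharply. The structure, thresholds and toy transfer function are purely illustrative, not code from HDVR, VoluMedic or any other product.

```cpp
#include <cmath>
#include <cstdio>
#include <functional>

struct RGBA { float r = 0, g = 0, b = 0, a = 0; };

// Front-to-back compositing of one classified sample into the accumulated ray color.
static void composite(RGBA& acc, const RGBA& s, float stepLen)
{
    float alpha = 1.0f - std::pow(1.0f - s.a, stepLen); // opacity corrected for step length
    float w = (1.0f - acc.a) * alpha;
    acc.r += w * s.r; acc.g += w * s.g; acc.b += w * s.b; acc.a += w;
}

// March one ray, refining the step wherever the classified samples change fast
// (thin films, sharp transfer-function gradients between close HU ranges).
static RGBA marchRay(const std::function<RGBA(float)>& classifyAt,
                     float t0, float t1, float coarseStep, float gradientThreshold)
{
    RGBA acc;
    RGBA prev = classifyAt(t0);
    for (float t = t0; t < t1 && acc.a < 0.99f; t += coarseStep) {
        RGBA cur = classifyAt(t + coarseStep);
        // Big change across the coarse step -> take 16 fine sub-steps;
        // otherwise one coarse sample is enough (cheap, space-leaping-like regions).
        int subdiv = (std::fabs(cur.a - prev.a) > gradientThreshold) ? 16 : 1;
        float dt = coarseStep / subdiv;
        for (int i = 1; i <= subdiv; ++i)
            composite(acc, classifyAt(t + i * dt), dt);
        prev = cur;
    }
    return acc;
}

int main()
{
    // Toy data: a thin, nearly opaque "film" around t = 5 that a purely coarse march would miss.
    auto thinFilm = [](float t) {
        RGBA s;
        s.a = (std::fabs(t - 5.0f) < 0.05f) ? 0.8f : 0.01f;
        s.r = s.g = s.b = s.a;
        return s;
    };
    RGBA c = marchRay(thinFilm, 0.0f, 10.0f, 0.5f, 0.1f);
    std::printf("composited opacity along the ray: %.3f\n", c.a);
    return 0;
}
```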

2) 2D compromises

This compromise is intended to reduce the number of cast rays. The trivial case is just rendering a small image and stretching it. People try to apply more sophisticated techniques, such as emitting rays selectively from the projection plane. The results I've seen are usually only marginally better than trivial stretching and still a long way behind HDVR.
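For illustration, a toy C++ sketch of those two 2D compromises: one ray per coarse block, stretched over the block, followed by selectively casting full-resolution rays only where neighbouring coarse values differ. castRay() is a stand-in for a real ray march, and the block size and threshold are arbitrary.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

static float castRay(int x, int y)             // stand-in for a full ray march through the volume
{
    (void)y;
    return (x > 28 && x < 36) ? 1.0f : 0.1f;   // toy "image": a bright vertical stripe
}

int main()
{
    const int W = 64, H = 64, B = 4;           // full image resolution, 4x4 coarse blocks
    std::vector<float> img(W * H, 0.0f);
    int raysCast = 0;

    // Pass 1 (trivial compromise): one ray per block, value stretched over the whole block.
    for (int by = 0; by < H; by += B)
        for (int bx = 0; bx < W; bx += B) {
            float v = castRay(bx, by); ++raysCast;
            for (int y = by; y < by + B; ++y)
                for (int x = bx; x < bx + B; ++x) img[y * W + x] = v;
        }

    // Pass 2 (selective emission): cast full-resolution rays only inside blocks whose
    // coarse value differs noticeably from the block to their left.
    for (int by = 0; by < H; by += B)
        for (int bx = B; bx < W; bx += B)
            if (std::fabs(img[by * W + bx] - img[by * W + bx - B]) > 0.1f)
                for (int y = by; y < by + B; ++y)
                    for (int x = bx; x < bx + B; ++x) {
                        img[y * W + x] = castRay(x, y); ++raysCast;
                    }

    std::printf("rays cast: %d for %d pixels\n", raysCast, W * H);
    return 0;
}
```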

Stefan

Elmar Moelzer
09-03-2010, 04:17 PM
yet in my opinion nobody comes even close to HDVR. The most apparent manifestation of the low-sampling-density deficiency is colonoscopy;

Funny that you mention this: we made a version of our CPU renderer for interactive rendering in a virtual colonoscopy product. It works very well there.

Generally, I agree with the sentiment that GPU rendering is inferior to multi-core CPU rendering. VoluMedic's software renderer is a testament to this sort of thinking. We do, however, offer a GPU counterpart that is also high quality and offers more speed at higher screen resolutions. In contrast, the CPU version offers more speed at higher dataset resolutions, and it also looks much better. We have resorted to offering both and letting the user choose what is best for their needs.
I think that both have their place and their strengths. The application defines which should be the preferred method.

stefanbanev
09-03-2010, 05:35 PM
>Generally, I agree with the sentiment that GPU rendering is inferior
>to multi-core CPU rendering. VoluMedic's software renderer is a testament
>to this sort of thinking.

No kidding it is. I'm curious: how many of your customers consider GPU VR interactive quality inferior to CPU VR interactive quality, without pages of preconditions? In general, very few share such a point of view, for a good reason....

Stefan

stefanbanev
09-03-2010, 06:50 PM
Currently, the multi-core CPU is the way to go, not least due to the large size of some of the datasets, although cards like this (http://www.nvidia.com/object/product-quadro-6000-us.html) might change things in the future.

One more point I missed addressing:

>Currently, the multi-core CPU is the way to go, not
>least due to the large size of some of the datasets

The major problem of the GPU is not the memory size but its SIMD architecture. Memory size limitations are easily comprehensible and hide the actual fundamental GPU handicap, its SIMD architecture; so the memory and PCI-Express bottlenecks are quite popular excuses among GPU fans, while the SIMD limitation is rarely pointed out.

>although cards like this might change things in the future.

Well, it is just like the joke about Russia: Russia has always had a great future... and will forever ;o) "This" card is based on the same SIMD engine (several of them packed in), so it is good only for SIMD-friendly algorithms. Adaptive algorithms require an effective MIMD architecture, such as a multi-core i7 and/or Opteron 6100 but with dramatically more cores; hundreds or thousands of i7-like cores is where the 3D future is. NVIDIA is so far on the beefier-SIMD path, for apparent reasons.
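A toy model of that SIMD-vs-MIMD argument (not real GPU code; the distribution of per-ray work is made up): with adaptive sampling, a lock-step SIMD group is throttled by its slowest lane, whereas independently branching, MIMD-style cores only do the work their own rays need.

```cpp
#include <algorithm>
#include <cstdio>
#include <random>
#include <vector>

int main()
{
    const int rays = 1 << 16, warp = 32;               // rays grouped into 32-lane SIMD "warps"
    std::mt19937 rng(42);
    std::geometric_distribution<int> extraSteps(0.05);  // most rays cheap, a few expensive (adaptive work)
    std::vector<int> work(rays);
    for (int& w : work) w = 1 + extraSteps(rng);

    long long mimd = 0, simd = 0;
    for (int i = 0; i < rays; i += warp) {
        int worst = 0;
        for (int j = i; j < i + warp; ++j) {
            mimd += work[j];                            // MIMD core: exactly the work its ray needs
            worst = std::max(worst, work[j]);
        }
        simd += (long long)worst * warp;                // lock-step group: every lane waits for the worst ray
    }
    std::printf("MIMD steps: %lld, lock-step SIMD steps: %lld (%.2fx more)\n",
                mimd, simd, (double)simd / (double)mimd);
    return 0;
}
```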

Stefan

Elmar Moelzer
09-04-2010, 02:54 AM
I'm curious: how many of your customers consider GPU VR interactive quality inferior to CPU VR interactive quality, without pages of preconditions? In general, very few share such a point of view, for a good reason....

Not quite sure what you are trying to say here.
Our customers can very easily point out the difference in quality; it is like night and day. Plus, depending on who you are referring to with "your customers", our partners are also very much aware of the implications of having to support a GPU-dependent solution. That makes this a very obvious choice for them.
Though "choice" is always good and I like options. I really do. This is why we are supporting both; it gives us more flexibility. Plus, with VoluMedic, we currently have to do GPU rendering for previews, since LW does not (yet) give us a way to do interactive software rendering in the OpenGL viewports. Considering what NT demonstrated at SIGGRAPH, though, there is hope that this will change with LW 10.

stefanbanev
09-05-2010, 10:44 AM
Elmar>Though "choice" is always good and I like options.
Elmar>I really do. This is why we are supporting both.

The reason is probably more pragmatic and rational than just a diversity-for-its-own-sake motive:

- there are scenarios where GPU VR provides better interactive quality than your CPU VR engine, and these scenarios are important enough to have GPU-based VR in addition to your main CPU-based jewel.

Since you possess substantial expertise in the use of GPUs for VR, I would like to ask you a couple of questions:

1) Which single GPU in your opinion provides the best overall VR performance?

2) Which SLI GPU in your opinion provides the best overall VR performance?

3) Can you specify the rendering scenarios that are most challenging for the CPU and most favorable for the GPU, for mid-size medical CT data sets (512x512x512, 12 bits)?

Thanks,
Stefan

Elmar Moelzer
09-05-2010, 03:08 PM
- there are scenarios where GPU VR provides better interactive quality than your CPU VR engine, and these scenarios are important enough to have GPU-based VR in addition to your main CPU-based jewel.
Yes, there are some such scenarios, mainly when the use of progressive rendering techniques is not possible.
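(To illustrate what is meant by progressive rendering here: during interaction only a coarse pass is produced, and finer passes are rendered while the user is idle. A minimal C++ sketch, where renderPass() and pollUserInput() are hypothetical placeholders rather than anything from VoluMedic:)

```cpp
#include <cstdio>

static void renderPass(int width, int height)
{
    std::printf("rendered a %dx%d pass\n", width, height); // stand-in for one CPU ray-casting pass
}

static bool pollUserInput() { return false; }               // stand-in: no new input in this toy run

int main()
{
    const int fullW = 1024, fullH = 1024;
    // Start at 1/8 resolution and double each pass until full resolution,
    // restarting from the coarse pass whenever new user input arrives.
    for (int divisor = 8; divisor >= 1; divisor /= 2) {
        renderPass(fullW / divisor, fullH / divisor);
        if (pollUserInput())
            divisor = 16;  // halved to 8 by the loop update -> back to the coarse pass
    }
    return 0;
}
```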




1) Which single GPU in your opinion provides the best overall VR performance?
I can only give you a slice of information here. I have not had the opportunity to test every single GPU on the market.
We use OpenGL exclusively; no Direct3D, no CUDA, and no OpenCL (yet) either. So I can only provide you with an opinion from this point of view:
We have generally had better experiences with Nvidia than with Ati, mainly because Ati's OpenGL drivers are inferior to Nvidia's. This has been the same story for decades, and the takeover by AMD unfortunately did not improve the situation. Nvidia's lead over any of the other competing GPU manufacturers is even larger.
So (and I have unfortunately not had the opportunity to test this yet) the fastest Nvidia GPU should give the best performance with our GPU volume rendering code (results may differ for others).


2) Which SLI GPU in your opinion provides the best overall VR performance?
I don't think very highly of SLI GPU solutions; that is something for enthusiasts and gamers. It is such a small market that we don't really bother benchmarking SLI setups. Also, unless you want insanely high resolutions for your render output, SLI is not needed. It is better to have more RAM on your graphics card (and SLI solutions unfortunately do not share their RAM). So if you asked me whether I prefer two 1 GB cards in the system or one 2 GB card, I would tell you that I prefer the single 2 GB card. Otherwise, I cannot really provide you with a lot of information here, sorry.


3) Can you specify the rendering scenarios that are most challenging for the CPU and most favorable for the GPU, for mid-size medical CT data sets (512x512x512, 12 bits)?
The biggest problem with GPU volume rendering comes with scene complexity.
Examples:
1. Multiple GPU-rendered volume objects in the same scene. Things can get really tricky when they overlap.
2. Multiple 3D clipping objects that can intersect.
3. Integration and intersection with polygon geometry in the scene. In this case it does not matter whether said polygon geometry is rendered by the CPU or whether it is an OpenGL-rendered preview, like in LW's viewports.
The only way to integrate them is via the z-buffer. This of course causes problems when some of these objects are transparent.
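(A rough sketch, not VoluMedic code, of that z-buffer integration: for each pixel the volume ray is marched only up to the depth of the nearest opaque polygon, and the polygon's color is composited behind the accumulated volume color. A single depth per pixel is exactly why transparent polygons become a problem.)

```cpp
#include <algorithm>
#include <cstdio>

struct RGBA { float r, g, b, a; };

// Front-to-back "over" compositing: dst is what the ray has hit so far, src lies behind it.
static RGBA over(const RGBA& dst, const RGBA& src)
{
    float w = (1.0f - dst.a) * src.a;
    return { dst.r + w * src.r, dst.g + w * src.g, dst.b + w * src.b, dst.a + w };
}

int main()
{
    const float zPolygon  = 0.6f;                        // depth of the nearest polygon (from the z-buffer)
    const RGBA  polyColor = { 0.2f, 0.8f, 0.2f, 1.0f };

    // March the volume for one pixel, but only while in front of the polygon surface.
    RGBA acc = { 0, 0, 0, 0 };
    const float step = 0.05f;
    for (float z = 0.0f; z < std::min(zPolygon, 1.0f) && acc.a < 0.99f; z += step) {
        RGBA sample = { 0.9f, 0.3f, 0.3f, 0.1f * step };  // toy, semi-transparent volume sample
        acc = over(acc, sample);
    }
    // The opaque polygon terminates the ray; a second, transparent surface behind it
    // could not be represented with this single stored depth.
    acc = over(acc, polyColor);

    std::printf("final pixel: %.2f %.2f %.2f (alpha %.2f)\n", acc.r, acc.g, acc.b, acc.a);
    return 0;
}
```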
Most favorable for the CPU are said scenarios, and when more complex effects such as (soft) shadows, many light sources, radiosity and other rendering effects come in, particularly when all of these effects have to interact with other volume or polygon objects in the scene.
Of course the CPU is also much faster when datasets have been manipulated with tools that run on the CPU (e.g. a segmentation algorithm that is run on the CPU so that it also works with datasets that exceed the available memory on the GPU). Updating the dataset on the GPU takes time, and that reduces interactivity. The result is immediately available on the CPU, of course.
Generally, whenever data has to be sent back and forth between the GPU and the CPU, it is often faster to just do things on the CPU only.
These are just a few examples. There are many more.
Of course there are scenarios where all of this is still possible on the GPU as well and you can work around it, but some of these require you to use CUDA, or they will only run on the latest generation of graphics cards, or they will be so slow that the advantage of the GPU is almost lost (then add the disadvantages into the balance and the scales tip in favor of the CPU again).
For a practical application, requiring the latest GPU or support for the latest shader model is not very useful. We, for example, are still waiting for the day when all our customers will finally have GPUs that fully support the OpenGL 2.0 (!) standard.

stefanbanev
09-07-2010, 09:53 AM
Elmar> The biggest problem with GPU volume rendering....
Elmar> Most favorable for the CPU are...

Thank you for such an elaborate reply about the CPU's advantages. However, the scenarios where the GPU provides better interactive quality than its CPU counterpart for a typical mid-size medical data set are equally, if not more, important for seeing the complete CPU/GPU landscape. The cases where other VR solutions (not HDVR) consider the GPU to have an edge over the CPU (in the context of VR for medical applications) would be interesting to have for the GPU/CPU comparison.

Thanks anyway,
Stefan