PDA

View Full Version : Realtime 3D Processing Approach render Quality?



sysrpl
08-14-2006, 04:12 PM
Here is a story with pictures and video (http://www.codebot.org/articles/?doc=9488) of a new game engine that looks pretty darn close to render quality. Read the story, watch the video, look at the screens, decide for yourself, then let me know what you think about realtime rendering.

Lamont
08-14-2006, 04:29 PM
Looks killer. Gonna DL it when I get home.

kmaas
08-14-2006, 08:02 PM
Ah, but realtime rendering for games and rendering for LW are MUCH different things. Games are specifically built to run fast: fast models, fast algorithms, and cheats wherever you can get away with them. Non-realtime rendering, like LW's, has to be more flexible than that. It can't always do some of the cheats you can in games. I've studied game programming for years, and there are some pretty ugly hacks that they get away with. So don't be too hard on NewTek. :D

T-Light
08-14-2006, 08:21 PM
:agree: What kmaas said. Hardware's getting better all the time, but we're still some way off from replacing LW with a game engine :)

Something else from that site that I don't think enough people know about: Bush and his cronies giving the planet and the environment yet another serious kicking.

Same site as above.
http://codebot.org/articles/?doc=9480

Sorry for the hijack, but the Bush government's record on this is just appalling.

Bog
08-14-2006, 08:50 PM
With realtime engines, Your Mileage May Vary. They're geared towards one job, and doing it very well. Also, they pull huge numbers of tricks - refraction effects (for example, Half-Life 2) can be very convincing, but they're a distorted view from the glass itself, re-applied as a texture map to the glass. Nice to look at, but not accurate. But yeah, modern GPUs are shifting a huge amount of very pretty geometry - I think the average car in PGR3 for the X360 is about 200,000 polys. Heck, I remember worrying about reaching those counts! I gave one of my clients a little cup to commemorate one of their scenes reaching the half-million poly mark.
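In pseudocode terms, the refraction cheat Bog describes boils down to something like this little Python sketch. Everything here is illustrative - the function and parameter names are made up for the example, not taken from any real engine - but it shows the idea: instead of tracing rays through the glass, the engine samples a pre-rendered image of the scene behind it, offsetting each lookup by the glass surface normal.

```python
def fake_refraction(scene_texture, normals, strength=2):
    """Sample a pre-rendered background texture with per-pixel offsets
    taken from the glass normal map. Convincing to look at, but it is
    just a distorted copy of the image behind the glass, not a real
    physical refraction through it."""
    h = len(scene_texture)
    w = len(scene_texture[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            nx, ny = normals[y][x]  # surface normal in texture space
            # offset the lookup by the normal, clamped to the image edges
            sx = min(max(x + int(nx * strength), 0), w - 1)
            sy = min(max(y + int(ny * strength), 0), h - 1)
            row.append(scene_texture[sy][sx])
        out.append(row)
    return out
```

A flat normal map (all zeros) just copies the background through unchanged, which is exactly why the trick breaks down under scrutiny: the "refraction" is only ever a warp of what was already on screen.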

I was having a chat week before last with someone about using GPUs in things like LightWave. I was wibbling on that - to my understanding - game engines were written for specific shader tech - so all the pretty things in HL2 and what-have-you are written for DirectX9, generally aimed at nVidia and ATI cards. So if something like LW tried to tie into that, it'd leave Mac users high and dry and anyone not using specifically nVidia or ATI cards would be out of luck. I did mention that I was mainly just a pixel-shover, though, and might have been talking out of my hat. Seemed to make sense, though.

sysrpl
08-14-2006, 10:26 PM
With realtime engines, Your Mileage May Vary. They're geared towards one job, and doing it very well. Also, they pull huge numbers of tricks - refraction effects (for example, Half-Life 2) can be very convincing, but they're a distorted view from the glass itself, re-applied as a texture map to the glass. Nice to look at, but not accurate. But yeah, modern GPUs are shifting a huge amount of very pretty geometry - I think the average car in PGR3 for the X360 is about 200,000 polys. Heck, I remember worrying about reaching those counts! I gave one of my clients a little cup to commemorate one of their scenes reaching the half-million poly mark.

I was having a chat week before last with someone about using GPUs in things like LightWave. I was wibbling on that - to my understanding - game engines were written for specific shader tech - so all the pretty things in HL2 and what-have-you are written for DirectX9, generally aimed at nVidia and ATI cards. So if something like LW tried to tie into that, it'd leave Mac users high and dry and anyone not using specifically nVidia or ATI cards would be out of luck. I did mention that I was mainly just a pixel-shover, though, and might have been talking out of my hat. Seemed to make sense, though.

Well, I actually write graphics software on top of 3D hardware, and I wanted to point out a few things that you seem to have misstated or misunderstood.

First, both the Mac and PC have an option to use a unified 3D API, and it's called OpenGL, not DirectX.

Newtek uses OpenGL to 'render' its user interface which includes the OpenGL viewports and advanced shading previews.

The shader tech you mentioned is also now unified across the various card manufacturers, including ATI and NVIDIA among others. It's called the OpenGL Shading Language, or GLSL for short.

Yes, realtime 3D programs tend to take advantage of cheap tricks. This is no fault of realtime 3D graphics; it's just that game engines tend to be geared towards a specific purpose, not general-purpose 3D such as LightWave. By virtue of being special-purpose, game developers can take advantage of tricks to lower hardware requirements and/or increase game detail and framerate.

What does this all mean? It means that realtime 3D is not constrained to only cheap game tricks, or a single API on a specific platform. Realtime 3D is what the developers make of it, and with today's GPUs that can actually be quite a bit.

Bog
08-15-2006, 05:14 AM
Thanks for that, sysrpl! I did point out that the hat-talking-through quotient may have been higher than normal. Yes, I know that OpenGL is the Common Tongue of graphics languages, but our chat was specifically orbiting around "My PC-based computer game does this. Why doesn't LW?"

Am I right in thinking, though, that game engines get very high performance in a fairly narrow scope, as opposed to something like LightWave whose OpenGL doesn't just get to do what NewTek intended it to, but also be open to 3rd Party Fiddling (the volumetrics in OpenGL from Dynamite spring to mind)?

stone
08-16-2006, 01:33 AM
you can't really compare the two. an engine does all it can to fake stuff as much as possible, while a 3d renderer has to do everything as precisely and correctly as possible.

examples:
- shadows are always low resolution or imprecise in games. often lacking in places, always with small errors here and there.
- light never looks completely right. it can look very good, but you are limited in how many lights can hit one object, and stuff like real radiosity is still a no-go.
- the car with 200,000 polygons only has that many in extreme close-ups. as soon as it gets a bit of distance it will be reduced to maybe 5,000 polygons and a nice normal map.
- game engines force the artists to jump through hoops due to limitations: having to make level-of-detail models, optimized models for specific situations and so on.
- game engines don't have to be limited to a specific purpose, but for a specific game you can be sure they push the envelope to show a very few specific effects that are relevant only to that particular game.
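The level-of-detail point in the list above can be sketched in a few lines of Python. The distance cutoffs and polygon budgets here are made up for illustration - no real engine uses these exact numbers - but this is the shape of the trick: the full-resolution hero model is only drawn up close, and cheaper meshes (plus a normal map to fake the lost detail) get swapped in with distance.

```python
def pick_lod(distance, lods=((10.0, 200_000), (50.0, 20_000), (float("inf"), 5_000))):
    """Return the polygon budget for the first LOD level whose cutoff
    distance the object is still inside. `lods` is a sequence of
    (max_distance, poly_count) pairs, nearest first."""
    for cutoff, poly_count in lods:
        if distance <= cutoff:
            return poly_count
    return lods[-1][1]  # fall back to the cheapest mesh
```

An offline renderer can afford to skip this and tessellate everything fully; a game running twelve cars plus an environment at 60 fps cannot, which is the asymmetry the whole list is describing.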

it all boils down to the fact that in lightwave and similar you can do everything and make it look completely right. in a game engine you can't really do much, and nothing looks completely right.

they are just two different beasts with different purposes, hence aren't comparable.

/stone

Sensei
08-16-2006, 05:00 AM
First, both the Mac and PC have an option to use a unified 3D API, and it's called OpenGL, not DirectX.

Bog in his message referred directly to Half-Life 2, and that's a DirectX game. LW has used OpenGL since forever to be more easily ported to the Macintosh; therefore HL2's DirectX tricks and techniques cannot be directly used by LW, and that's what I believe Bog wanted to tell people.

Jorel
08-16-2006, 06:53 PM
light never looks completely right. it can look very good, but you are limited in how many lights can hit one object, and stuff like real radiosity is still a no-go.

Animated radiosity is a 'no go' for anyone not in possession of a renderfarm or the cash to buy time at one. Baking with normal/shadow maps is still possible for any not-moving thing, but you can put normal maps in game engines as well.

Oh, and light can never 'look right' as long as vertex-based geometry is the de facto medium for 3D. Polygons are only 'lit' via color values that are passed to each vertex the light targets, which must then be interpolated with a smoothing algorithm after the fact - it's a total approximation. It will never look 100% right.
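The vertex-versus-pixel distinction being argued over here is easy to show with toy numbers. This sketch (illustrative function names, unit normals assumed) compares Gouraud-style shading, which lights only the vertices and interpolates the resulting colors, with per-pixel (Phong-style) shading, which interpolates the normal and re-evaluates the light at each fragment. Across a coarse edge the two can disagree badly:

```python
def lambert(normal, light):
    """Diffuse term: clamped dot product of two unit vectors."""
    d = sum(n * l for n, l in zip(normal, light))
    return max(d, 0.0)

def gouraud_midpoint(n0, n1, light):
    # light the two endpoint vertices, then interpolate the colors
    return 0.5 * (lambert(n0, light) + lambert(n1, light))

def perpixel_midpoint(n0, n1, light):
    # interpolate the normal, renormalize, then light the pixel itself
    m = [0.5 * (a + b) for a, b in zip(n0, n1)]
    length = sum(c * c for c in m) ** 0.5
    m = [c / length for c in m]
    return lambert(m, light)
```

With the light pointing straight at the surface and the two vertex normals tilted away symmetrically, Gouraud averages two dim vertices and misses the bright spot between them entirely, while the per-pixel version recovers it - which is why a highlight crawling across a coarse Gouraud-shaded mesh looks wrong.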


the car with 200,000 polygons only has that many in extreme close-ups. as soon as it gets a bit of distance it will be reduced to maybe 5,000 polygons and a nice normal map.

So? You think anyone worth their salt in the animation industry isn't going to pull this same time-saving trick if they can get away with it, especially since time on their company renderfarm literally = tens of thousands of dollars?


game engines force the artists to jump through hoops due to limitations: having to make level-of-detail models, optimized models for specific situations and so on.

And lightwave doesn't do this? What did you think that LOD plugin for distance based subdivision tessellation was all about? What do you think the per-polygon tessellation plugin Newtek replaced it with is all about?


game engines don't have to be limited to a specific purpose, but for a specific game you can be sure they push the envelope to show a very few specific effects that are relevant only to that particular game.

Yeah, because everyone knows that a unified lighting model (Doom 3), HDR (Half-Life), curved surfaces (Quake 3), and cube environment mapping (Gran Turismo 3, among others) were only useful to those specific games and are totally worthless everywhere else. Right.:rolleyes:

Bog
08-16-2006, 07:01 PM
Sing it, Jorel! :D

I grok the passion, but Stone's points are well made. It is apples and oranges.

Still...there's got to be a balance to strike. Somewhere.

Ivan D. Young
08-16-2006, 08:10 PM
I think that Id Software's games are OpenGL based and have been for a long time. John Carmack started a major keep-OpenGL-in-Windows campaign against Microsoft some years ago. Also, the newer specs for OpenGL do have provisions for real time rendering as part of the standard. I could be mistaken, but I think on Windows XP only OpenGL 1.4 or 1.5 is in actual usage (depending on your video card). However, with Vista and the way that Microsoft has rewritten the driver system in the OS, being able to update to the latest and greatest of OpenGL standards will be easier than ever. As a note, there will most likely be an explosion of OpenGL tools and 3D programs since 3D is incorporated into the interface of Vista. I should say that this will mostly come not from 3D professionals, since we have 3D now, but from the average users who will experience new ways to manipulate their desktops. This will inevitably create more development, both in OpenGL and Direct3D. I am not a Microsoft fanboy, but I give them credit for at least trying to put all this stuff into average users' hands. For the 3D market it will mean more work for us. Just look at Shaxam: it's for Vista and LightWave.

Ivan D. Young
08-16-2006, 08:29 PM
As if that last post wasn't enough: OpenGL does do real time rendering; however, as said earlier, OpenGL does not use cheats. So the renders will slow down from real time as whatever is being displayed gets more complex. Also, OpenGL can render quite complex visuals, but you do not record the frames the same way as with an offline renderer. In some cases, I have heard of OpenGL renderers that do frame grabbing or something like that. So how frames would be grabbed and saved is an issue that has not really been delved into. That is why Gelato exists; it is an interface that functions like a renderer. I think the real question is not about real time rendering, but near time rendering: the ability to render with a GPU at the quality of an offline renderer, just at a slower rate than real time. Heck, I would take 2 FPS if the renders I got were of good quality. Hey, even if they were 2 frames a minute, that would be excellent.

Captain Obvious
08-16-2006, 08:39 PM
Every time someone makes a new breakthrough in hardware performance or rendering algorithms that makes everyone scream "film-quality in real-time," some of us 3D folks are going to think: hey, if I can render yesterday's film quality in real-time, let's see what happens when we turn on multiple bounces of Monte Carlo GI or something like that - and we're back to multiple hours per frame. Then we'll see another breakthrough in a few more years, and people will start using real blurred reflections instead of specular highlights. Then we'll see another few increases in hardware performance, and Maxwell will suddenly be used for rendering for film at a few hours per frame...

Heck, if nothing else, the increased performance just means we'll turn up the resolution!

Real-time film-quality rendering will not be a reality for a LONG while, unless you're looking at the film-quality of yesteryear.





Oh, and light can never 'look right' as long as vertex-based geometry is the de facto medium for 3D. Polygons are only 'lit' via color values that are passed to each vertex the light targets, which must then be interpolated with a smoothing algorithm after the fact - it's a total approximation. It will never look 100% right.
Modern GPUs can do per-pixel lighting, can't they? That's how they do stuff like normal mapping, is it not?

"The other app" actually does that. It looks cool to paint bump maps in 3D. :)

Jorel
08-17-2006, 01:54 AM
I grok the passion, but Stone's points are well made.

No they aren't. If you want to say 'game engines shall never compare to lightwave for rendering complexity' I'll be forced to agree with you, because it's true.

But if you want to support that statement, you will have to offer examples that aren't contradictory, like normal maps and LOD, which have been used both in games and in 3D software packages like Lightwave for ages.

That last bit is what really threw me - how are game designers making special effects 'relevant to a single game' when said games basically act as launch vehicles for these special effects? Take cube mapping - GT3 wasn't the first game to use it, but I'm sure it was the first game to do so on the PS2 hardware, which the naysayers said couldn't be done because the hardware was pigsh$t (the fact that they were right is beside the point), and now you can't buy a game with something shiny and metallic in it that doesn't employ cube mapping somewhere.

DragonFist
08-17-2006, 03:04 AM
I think Capt hit the nail on the head.

But I will say that I am looking forward to the advances in hardware and software for the gaming community as it can only help us.

I just read an article that spoke about two research projects being done: one by Intel and the other by some university. Both are about making ray tracing doable in realtime at resolutions and poly counts usable in games. Intel's is a software based solution in which certain algorithms take advantage of multiple cores and greatly reduce the time to calculate the rays as the threads are scaled. The university one deals with special hardware called a ray-tracing unit (RTU) that apparently is approaching speeds that make ray tracing usable in games. From what I understood, this was actual full blown ray tracing, not tricks to simulate it.

Now, as Capt. brought up, this will just mean we will add more stuff, bringing our render times back to days... err... minutes a frame, but I would love to have my current renders happening in seconds or even fractions thereof. And according to that article, the capability of that is only a year or so off. (That's not to say that we'll see it in LW in that time.)

The thing is that really, every time the bar is raised in the gaming arena, it also rises in the artist arena. I mean, you could render in near real-time now what would take 30 min per frame back in the Amiga days. (Times are an estimate to make a point, not real-world measured renders. But hey, how about getting some old LW 2.0 scenes and rendering them with LW 9? Willing to bet the terms "Knife", "Butter" and "Hot" will come to mind in a sentence, though not necessarily in that order.)

stone
08-17-2006, 07:51 AM
Animated radiosity is a 'no go' for anyone not in possession of a renderfarm or the cash to buy time at one. Baking with normal/shadow maps is still possible for any not-moving thing, but you can put normal maps in game engines as well.

actually realtime fake radiosity is possible, just not implemented in any games out there yet. of course the people with the renderfarms don't want 'fake', they want the real thing, while game engines would love the fake version - which is precisely the point illustrating the difference between the two.


Oh, and light can never 'look right' as long as vertex-based geometry is the de facto medium for 3D. Polygons are only 'lit' via color values that are passed to each vertex the light targets, which must then be interpolated with a smoothing algorithm after the fact - it's a total approximation. It will never look 100% right.

basing all lighting on vertex lighting really is a thing of the past. we already do better than this on the ps2, and upcoming titles will advance further in this area. per-pixel lighting doesn't interpolate across the polygons, and per-pixel lighting isn't exactly a new thing.


So? You think anyone worth their salt in the animation industry isn't going to pull this same time-saving trick if they can get away with it, especially since time on their company renderfarm literally = tens of thousands of dollars?

the point isn't what people do in the animation industry. the point is that the 200,000-polygon car is mostly to make people go wow; it doesn't actually render on screen in realtime.


And lightwave doesn't do this? What did you think that LOD plugin for distance based subdivision tessellation was all about? What do you think the per-polygon tesselation plugin Newtek replaced it with is all about?

sure lightwave can do it, but there is no point if you want things to look right. and while lightwave can do it, a game engine has to do it, which again is the point of the thread - game engines have to cheat, while a real renderer can render realism.


Yeah, because everyone knows that a unified lighting model (Doom3) HDR (Half-life) curved surfaces (Quake 3) Cube environment mapping (Gran Turismo 3 among others) were only useful to those specific games and a are totally worthless everywere else. right.:rolleyes:

go somewhere else and roll your eyes. it seems to me your game dev knowledge is years old and that you don't actually work in the field. i never claimed specific technology isn't useful across the board - actually i claimed the opposite. however, if you had ever tried to make an aaa game, you would know that you pick out a couple of specific features and technologies that you push to their limits, optimizing the engine to make them stand out.


Also the newer specs for OpenGL do have provisions for real time rendering, as part of the standard

there isn't really that much difference between opengl and directx. it comes down to the api and the shader languages, but simply put, you can do the same things in both. talking about whether one can do realtime rendering doesn't make sense, since it will be as realtime as the application makes it.


however as said earlier OpenGL does not use cheats

it's not the api that uses cheats, it's the programmers who use it for displaying 3d. when we use opengl for our playstation/gamecube games, we obviously cheat as much as we do with directx for the pc/xbox versions.


No they aren't. If you want to say 'game engines shall never compare to lightwave for rendering complexity' I'll be forced to agree with you, because it's true.

you haven't actually pointed out anything that i wrote which is wrong. saying 3d engines shall never compare to lightwave, however, is doomed to be wrong sooner or later.


[..]you will have to offer examples that aren't contradictory. Like normal maps and LOD which have been used both in games and 3d software packages like Lightwave for ages.

please read my post - i'm not making contradictions, and i'm not even concerned with overlapping technology. i'm only concerned with the fact that they are different beasts designed to do different things.


That last bit is what really threw me - how are game designers making special effects 'relevant to a single game' when said games basically act as launch vehicles for these special effects? Take cube mapping - GT3 wasn't the first game to use it, but I'm sure it was the first game to do so on the PS2 hardware, which the naysayers said couldn't be done because the hardware was pigsh$t (the fact that they were right is beside the point), and now you can't buy a game with something shiny and metallic in it that doesn't employ cube mapping somewhere.

we make special effects based on game content - what really gives us something that helps the gameplay, or makes it stand out. it's quite possible that we here at io interactive have the most advanced ps2 engine in the world, and with each update we push the technology further. however, while we might use normal mapping extensively in hitman on ps2, or a crowd system that allows for many hundreds of characters on screen at once, it might not be a concern to us in freedom fighters, so we aim at other effects or technologies that give that particular game a boost.

/stone

Jorel
08-17-2006, 02:12 PM
actually realtime fake radiosity is possible.

Yes it is, when you render it ahead of time and store the information in normal or shadow maps - a technique you said differentiated game engines from 3d software animation packages, even though it's commonly used in both.


just not implemented in any games out there yet.

Wait a minute - you're talking about actually calculating out radiosity? in real-time?:D :D Yeah, I'm sure a lot of people will be willing to play their games at 3 frames per hour just so they can shrug and say "well... at least it's physically accurate"


which is precisely the point illustrating the difference between the two.

Yeah. Minus the part about this technique being in common use in both games and rendered 3d animation. Right.:rolleyes:


basing all lighting on vertex lighting really is a thing of the past.

And yet, as implied by this sentence and seen everywhere including lightwave, some of the lighting is still per-vertex based, only now we have shading algorithms fancier than "90-degree smoothing angle" thrown on top of it.


we already do better than this on the ps2, and upcoming titles will advance further in this area. per-pixel lighting doesn't interpolate across the polygons, and per-pixel lighting isn't exactly a new thing.

So? The foundation for lighting objects in 3D is still per-vertex and always will be as long as we continue to employ geometry. The fact that we throw fancy shading algorithms on top of it doesn't mean it isn't done anymore. Oh, and by definition, 'per-pixel' lighting can never 100% replace per-vertex lighting in a 3d pipeline, because pixels only come into play at the end of that pipeline.


point isnt what people do in the animation industry.

It is when you’re trying to illustrate the differences between the tricks employed by game developers and 3d animators by pointing to a trick that they both use.


the point is that the 200,000-polygon car is mostly to make people go wow; it doesn't actually render on screen in realtime.

Then how does a 200k poly object get onscreen without choking the game's framerate to death?

What makes you think 200k polys is a lot of geometry? Game developers (http://www.bit-tech.net/news/2005/10/14/pgr3_poly_count_lie/) are already budgeting their in game objects for a six-figure polygon count. Things have changed dramatically since the introduction of the PS2: exponential increases in hardware power have allowed for exponential increases in polygon budget, to the point where we can now render thousands of soldiers and have a whole tropical island, complete with brush and reflective water on screen. Have you been asleep for the past half-decade or something?


sure lightwave can do it, but there is no point if you want things to look right.

Then there's no point to 3d at all. The images generated by lightwave are by definition artificial, and their complexity, and by extension proximity to reality, will always be shortchanged by what computational resources you have at your disposal. You can extend that by building your own small renderfarm, or even by buying time at one, but that gets real pricey real quick, and you don't have infinite funds.

A few talented artists will make the absolute best of what little they have to produce some truly fantastic images, but even then, there is still a gap - there will always be a difference between what you want and what you can do and that difference still exists for those who have literally spent billions trying to shrink it. Since it's not real, it can never look 'right' because everyone's an expert on what reality is/looks/behaves like.

Since doing 3D means making concessions, there's no point in rendering an object on the far horizon at full tessellation if you can cut a significant portion of its geometry out - and by extension its impact on rendertime - without anyone noticing.


and while lightwave can do it, a game engine has to do it, which again is the point of the thread - game engines has to cheat, while a real renderer can render realism.

No. Production renders are more likely than games to use LOD because, unlike games, render time has a severe impact on the bottom line of the project - for example, how many games have you played over the years that feature sh*tty framerates?


go somewhere else and roll your eyes.

No.:rolleyes:


it seems to me your game dev knowledge is years old and that you don't actually work in the field.

Appeals to authority do not an argument make.


i never claimed specific technology isn't useful across the board.

but for a specific game you can be sure they push the envelope to show a very few specific effects that are relevant only to that particular game.

Your post history begs to disagree. Why don't you tell me how an effect that, in your apparent view, is "relevant only to a particular game" is also "useful across the board"?


actually i claimed the opposite. however, if you had ever tried to make an aaa game, you would know that you pick out a couple of specific features and technologies that you push to their limits, optimizing the engine to make them stand out.

Actually, a real AAA game tries to make its GAMEPLAY stand out. Any game can have cube environment mapping or bump mapping, but how many games do you know that play as well as Jedi Knight 2 or Grand Theft Auto?


you haven't actually pointed out anything that i wrote which is wrong.

:lol:


saying 3d engines shall never compare to lightwave, however, is doomed to be wrong sooner or later.

....Because you say so? Whatever, Ms. Cleo. The rest of us without psychic powers can see that as computational prowess increases, the programmer's ability to take full advantage of that prowess increases with it. Game engines will be comparable to LightWave 9 someday, but will be totally surpassed by future versions of LightWave - just like game engines of today are comparable to early versions of LightWave.


please read my post - i'm not making contradictions.

Read your own posts. In them, you will find that you claimed that LOD and normal mapping were key differences between games and 3d animation, yet they are used extensively in both fields. This is a contradiction. I can't believe I have to illustrate that for you again.


we make special effects based on game content - what really gives us something that helps the gameplay, or makes it stand out. it's quite possible that we here at io interactive have the most advanced ps2 engine in the world, and with each update we push the technology further. however, while we might use normal mapping extensively in hitman on ps2, or a crowd system that allows for many hundreds of characters on screen at once, it might not be a concern to us in freedom fighters, so we aim at other effects or technologies that give that particular game a boost.

and when that game sells a million copies, the other game makers go "ZOMGGOLDMINE!!!111eleven" and fight to get those effects into their games so they can say their product is comparable to others in the marketplace, thus the propagation of special effects into other games.

DragonFist
08-17-2006, 02:42 PM
I think you may be reading more into stone's posts than he has typed into them.

stone
08-17-2006, 04:04 PM
Yes it is, when you render it ahead of time and store the information in normal or shadow maps - a technique you said differentiated game engines from 3d software animation packages, even though it's commonly used in both.

one basic fact you still haven't understood - i don't care about what techniques are used in what program at all. i'm saying a game engine uses certain techniques because it has to, being limited by budgets, fps, ram and so forth, and thus forced to employ cheats. a 3d program basically isn't, so it can be allowed to do stuff right instead of faking it.


Wait a minute - you're talking about actually calculating out radiosity? in real-time?:D :D Yeah, I'm sure a lot of people will be willing to play their games at 3 frames per hour just so they can shrug and say "well... at least it's physically accurate"

yes i am - and as i wrote, if you'd ever bother to actually read anything before you post, i wrote fake radiosity. and as with everything, it's a matter of whether you want to waste clock cycles on that compared to another feature. you design your games to do some things very well, but have to downgrade other areas - another fact you still don't seem capable of understanding.


Yeah. Minus the part about this technique being in common use in both games and rendered 3d animation. Right.:rolleyes:

one more time for the few, but exceptionally slow, people out there - i don't once in either of my posts care about where they are used - only why.


And yet, as implied by this sentence and seen everywhere including lightwave, some of the lighting is still per-vertex based, only now we have shading algorithms fancier than "90-degree smoothing angle" thrown on top of it.

again, you have to choose per-pixel lighting over vertex lighting. if you do so, you gain realism and complexity in the light model, but you spend render power doing so. is it worth it? most games will still do vertex lighting for basic lighting, which illustrates my point - game engines cheat because they have to.


So? The foundation for lighting objects in 3D is still per-vertex and always will be as long as we continue to employ geometry. The fact that we throw fancy shading algorithms on top of it doesn't mean it isn't done anymore. Oh, and by definition, 'per-pixel' lighting can never 100% replace per-vertex lighting in a 3d pipeline, because pixels only come into play at the end of that pipeline.

actually, pixel shading doesn't come in at the end of the pipeline, and theoretically you can do 100 percent correct light. even speaking of 'the end of the pipeline' is bogus, since it depends on the engine in question. sorry.


It is when you’re trying to illustrate the differences between the tricks employed by game developers and 3d animators by pointing to a trick that they both use.

definitely. as clearly stated, some 15 times by now, that isn't the point in any way.


Then how does a 200k poly object get onscreen without choking the game's framerate to death?

What makes you think 200k polys is a lot of geometry? Game developers (http://www.bit-tech.net/news/2005/10/14/pgr3_poly_count_lie/) are already budgeting their in game objects for a six-figure polygon count. Things have changed dramatically since the introduction of the PS2: exponential increases in hardware power have allowed for exponential increases in polygon budget

i never said 200,000 polys is a lot. on a PS2, few engines get above 75,000 a frame, though. i merely stated, for those who are able to read, that the car in question doesn't actually render at that resolution unless it's in an extreme close-up.

when you have 12 of those cars plus the environment, you have to employ level-of-detail techniques to make the game run smoothly - techniques that aren't required in a 3d renderer. the engine has to cheat; the renderer doesn't.


[..] to the point where we can now render thousands of soldiers and have a whole tropical island, complete with brush and reflective water on screen. Have you been asleep for the past half-decade or something?

no, i have been developing that technology, thankyouverymuch.


Then there's no point to 3d at all. The images generated by LightWave are by definition artificial, and their complexity - and by extension their proximity to reality - will always be short-changed by what computational resources you have at your disposal. You can extend those by building your own small render farm, or even by buying time at one, but that gets real pricey real quick, and you don't have infinite funds.

prices aren't really a concern, since money is only an artificial barrier. what people have on their desktops is, however, since you can't get around those limits at any cost and so have to cheat.


A few talented artists will make the absolute best of what little they have to produce some truly fantastic images, but even then, there is still a gap - there will always be a difference between what you want and what you can do and that difference still exists for those who have literally spent billions trying to shrink it. Since it's not real, it can never look 'right' because everyone's an expert on what reality is/looks/behaves like.

there is no reason it won't be real some day. the physics in this area is simple enough. and at a later point game engines might even catch up.


Since doing 3-D means making concessions, there's no point in rendering an object on the far horizon at full tessellation if you can cut a significant portion of its geometry out - and by extension its impact on render time - without anyone noticing.

sure you can - but you don't have to in a render. however, in a game engine you do. are you going to write anything that's actually relevant to the points i made?


No, production renders are more likely than games to use LOD because, unlike games, render time has a severe impact on the bottom line of the project - after all, how many games have you played over the years that feature choppy framerates?

all games use lod. if nothing else, then frustum culling. not all renderers do. besides, if you don't do lod in a game, you simply have to cut quality somewhere else. it's a quality tradeoff all the way through. renders aren't, necessarily.
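For readers following along, stone's two examples can be sketched in a few lines (a toy sketch with made-up distance thresholds and polygon budgets, not code from any shipping engine): frustum culling skips objects outside the view entirely, and discrete LOD swaps in coarser meshes as distance grows.

```python
# Hypothetical LOD table: (minimum camera distance, polygon budget).
# The numbers echo the thread's 200k-poly car example but are invented.
LOD_LEVELS = [(0.0, 200_000), (20.0, 20_000), (60.0, 5_000)]

def select_lod(distance):
    """Return the polygon budget for an object at the given camera distance."""
    budget = LOD_LEVELS[0][1]
    for min_dist, polys in LOD_LEVELS:
        if distance >= min_dist:
            budget = polys
    return budget

def visible_polys(object_distances, max_view_distance=100.0):
    """Crude 1-D 'frustum' cull plus LOD: total polys actually submitted."""
    total = 0
    for distance in object_distances:
        if distance > max_view_distance:  # culled: costs nothing at all
            continue
        total += select_lod(distance)     # survivors pay their LOD budget
    return total
```

A close car (distance 5) still costs its full 200,000 polys, one at distance 30 drops to 20,000, and one beyond the view distance costs nothing - which is why twelve such cars plus an environment can fit in a frame at all.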


Appeals to authority do not an argument make.

neither have your pages-long posts yet. at least not any arguments relevant to my posts.


Your post history begs to disagree. Why don't you tell me how an effect that, in your apparent view, is "relevant only to a particular game" can also be "useful across the board"?

no, it doesn't. and your quotations of me don't either, even when taken out of context.

let me give you a really simple example. hitman: blood money uses a revolutionary crowd system that even works on the PS2. it was specifically developed because it gives our game something special that we want to aim for.

the feature is still useful across the board when used in other contexts. perhaps on another platform where more cpu cycles are available, or as a cut-down version in another game where we need a differently behaving crowd.

it shouldn't really be difficult to understand.


Actually, a real AAA game tries to make its GAMEPLAY stand out. Any game can have cube environment mapping or bump mapping, but how many games do you know that play as well as Jedi Knight 2 or Grand Theft Auto?

get serious. very few games these days sell on new, innovative gameplay. almost all are shooters with a few new fancy features. what was the last triple-A game you helped develop?

any game can have cubemapping or bumpmapping, which are technically decades-old technology - but not everyone can have the same quality of lighting and shadows, streaming and so forth.

it's all a game of 'see, we can cheat more than you can and make our stuff look prettier'. while in a render it's a matter of artist capability and time, since you don't have the same limits.


...Because you say so? Whatever, Ms. Cleo. The rest of us without psychic powers can see that as computational prowess increases, the programmer's ability to take full advantage of that prowess increases with it. Game engines will be comparable to LightWave 9 someday, but will be totally surpassed by future versions of LightWave - just as game engines of today are comparable to early versions of LightWave.

photorealism isn't utopia. neither is the physics, or the knowledge of how the human eye works - i don't claim we'll get anywhere near it tomorrow. but you would be a fool to bet money that we never do.

besides, lightwave will develop to be better at producing renders and animations and at building and texturing models. game engines will take a different path and become better at building games - they are two different kinds of programs.


Read your own posts. In them, you will find that you claimed that LOD and normal mapping were key differences between games and 3d animation, yet they are used extensively in both fields. This is a contradiction. I can't believe I have to illustrate that for you again.

please quote me where i make that claim. i don't even make any claims about which can do what - i only claim that one does it because it can, the other because it has to.


and when that game sells a million copies, the other game makers go "ZOMGGOLDMINE!!!111eleven" and fight to get those effects into their games so they can say their product is comparable to others in the marketplace, thus the propagation of special effects into other games.

at least you made one valid point. nice.


I think you may be reading more into stone's posts than he has typed into them.

thank you, but it seems to be more a matter of not reading them, not being able to understand the points, and misquoting.

/stone

Jorel
08-18-2006, 01:17 PM
I think you may be reading more into stone's posts than he has typed into them.

So, I have exaggerated his position? Then certainly you can provide evidence for this. The posting history is available to everyone who can read, so if I've done this, you can certainly point it out.


one basic fact you still haven't understood - i don't care about what techniques are used in what program at all.

And yet, your first post on this issue was a bulleted list of (drum roll please...) techniques. More specifically, the ones whose use you claimed differentiated 3d animation from game development, saying that you "can't really compare the two" because games use "fake stuff as much as possible"

I'm not going to post it verbatim here because a) it's on the first flappin' page of this thread and b) I'm hoping you're not going to be foolish enough to deny the main idea of your own post. I mean, the material so far doesn't look promising, but someone who can assemble barely legible posts and operate the computer necessary to load this web-page must possess the reading comprehension of at least a child, right?


i'm saying a game engine uses certain techniques because it has to, being limited by budgets, fps, ram and so forth, and is forced to employ cheats. a 3d program basically isn't, so it can afford to do things right instead of faking them.

(Passage 1)and I'm saying the same budget, RAM, rendered frames per hour limitations and so forth force 3d animators to employ the same tricks. trying to use these tricks to differentiate the two industries when they both use them Is. A. Con-tra-dic-tion. Re-read that last part real slow and say it out loud so it sinks in, because pointing this out to you is becoming a fruitless chore.

and no, 3d programs are not totally exempt from faking it. You don't have infinite time, you don't have infinite RAM, you don't have infinite resources, so yes if you can, you will 'fake it' because only an idiot will waste precious rendering time on extravagant and unnecessary effects when he can get 98% of what he wants while spending 1/10th of the time getting there.


one more time for the few, but exceptionally slow, people out there - i don't once in either of my posts care about where they are used - only why.

and why do you care about why they're used? It couldn't be because you clearly think their usage has something to do with the differences between 3d animation and game development now... could it?

But I'm the slow one. right.:rolleyes:


yes i am - and as i wrote, if you'd ever bother to actually read anything before you post, i wrote fake radiosity.

No you aren't. If it's "fake radiosity", as you so boldly claim in the second part of this sentence, then it's different enough from 'real' radiosity not to qualify as such (which is why you call it ''fake'), thus making it a contradiction to answer 'yes you are' if someone asks you if you're doing real radiosity in real-time. If it is, then what was the point of making this distinction? Do you find contradictions fun or something?


again, you have to choose per-pixel lighting over vertex lighting.

Oh really? (http://en.wikipedia.org/wiki/Graphics_pipeline)


if you do so, you gain realism and complexity in the lighting model, but you spend render power in doing so. is it worth it? most games will still do vertex lighting for basic lighting, which illustrates my point - game engines cheat because they have to.

Please re-read passage number one. Louder this time. and in two different languages, because it apparently still has not sunk in.


actually, pixel shading doesn't come in at the end of the pipeline, and theoretically you can do 100 percent correct lighting.


The rendering pipeline is mapped onto current graphics acceleration hardware such that the input to the graphics card (GPU) is in the form of vertices. These vertices then undergo transformation and per-vertex lighting. At this point in modern GPU pipelines a custom vertex shader program can be used to manipulate the 3D vertices prior to rasterization. Once transformed and lit, the vertices undergo clipping and rasterization resulting in fragments. A second custom shader program can then be run on each fragment before the final pixel values are output to the frame buffer for display.

The graphics pipeline is well suited to the rendering process because it allows the GPU to function as a stream processor since all vertices and fragments can be thought of as independent. This allows all stages of the pipeline to be used simultaneously for different vertices or fragments as they work their way through the pipe. In addition to pipelining vertices and fragments, their independence allows graphics processors to use parallel processing units to process multiple vertices or fragments in a single stage of the pipeline at the same time.
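The quoted passage pins down a fixed ordering: vertices enter, get transformed and lit per-vertex, are rasterized into fragments, and only then are fragments shaded into final pixels. That ordering can be caricatured in a few lines (a toy sketch, not any real graphics API; all names are invented):

```python
# Toy caricature of the pipeline order described above:
# vertices -> vertex stage (transform + per-vertex work) ->
# rasterize into fragments -> fragment stage -> pixels.

def run_pipeline(vertices, vertex_shader, rasterize, fragment_shader):
    """Each stage consumes the previous stage's output; pixels only exist at the end."""
    transformed = [vertex_shader(v) for v in vertices]   # per-vertex stage
    fragments = rasterize(transformed)                   # vertices become fragments
    pixels = [fragment_shader(f) for f in fragments]     # per-fragment (pixel) stage
    return pixels
```

For example, `run_pipeline([1, 2, 3], lambda v: v * 2, list, lambda f: f + 1)` returns `[3, 5, 7]`: every value passes through the vertex stage before rasterization, and through rasterization before the fragment stage, which is the structural point the quoted text makes.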

I bolded, underlined and super-sized the most important parts, because I know reading comprehension isn't your strong suit. I considered cutting and pasting the relevant parts for your benefit, but judging from your latest material, I know you have problems detecting context properly as well.:confused:

But no worries. I posted the entire relevant passage so there's no question about the apparent magnitude of your blunders in understanding 3d technology! Yay!

so, *BZZZT* wrong. You do not get to choose to do per-pixel lighting over vertex lighting, you do not get to put pixels wherever you freakin' well please in the rendering pipeline, and you do not get to do 100% per-pixel lighting, nor will your lighting be '100%' accurate considering the massive resources you'd need (and don't have) to render 100% accurate lights in real time. Furthermore, I find it highly suspicious that someone fancying themselves a game developer wouldn't be aware of basic 3d pipeline knowledge that any gamer would know.


even speaking of the 'end of the pipeline' is bogus, since it depends on the engine in question. sorry.

Way to demonstrate your spectacular ignorance of basic 3d technology yet again. EVERY 3d pipeline in history that uses geometry follows the same formula: Vertices -> Transform -> Lighting -> Raster, and the pixels, as you can plainly see, are at the very end of the pipeline! You can't get pixels before you're done lighting, because the color information for said pixels has to come from somewhere. You can't do the transforms before the vertices enter, because then there would be nothing to transform. Pixels can't come at the beginning because they need all the information you get from following the first three steps - placing them squarely at the end! Every time! Without exception!

Even the fancy garbage we pour over the pipeline nowadays is becoming universally standardized, what with DX9 and OpenGL using universal shading languages - oh wait, you, mister game developer weren't aware of that either? Even though it's common knowledge to anyone who simply plays games?

Are you sure you make games at IO interactive? Or do you just sweep the floors there?


definitely. as clearly stated, some 15 times by now, that isn't the point in any way.

Then why did you declare 'what people do in the animation industry' not to be the point if it 'definitely' now is? If it is central to your point, then why do you, not even a sentence later, declare it not to be again?

You know it's not safe to use a computer around open chemical bottles, right? It may be fun to trip off of bleach and ammonia, but the fumes are flammable and the damage it does to the speech processing centers in your brain is permanent. Obviously.


i never said 200,000 polys is a lot.


...the car with 200,000 polygons only has that in extreme close-ups.

Not familiar with the subtleties of implication, are you? Saying that a polygon count like that will get axed unless under extreme close-up implies (It's the word of the day! everybody scream it!) that you think this is a significant enough amount of geometry to throttle the game - and throttling the frame rate of a game takes (drum roll please...) a lot of geometry!


i merely stated, for those who are able to read, that the car in question doesn't actually render at that resolution unless it's in an extreme close-up.


the point is that the 200,000-polygon car is mostly there to make people go wow; it doesn't actually render on screen in realtime.

and those of us who can read clearly see you contradicting yourself again. Obviously enough not to require explanation this time.


when you have 12 of those cars plus the environment, you have to employ level-of-detail techniques to make the game run smoothly - techniques that aren't required in a 3d renderer. the engine has to cheat; the renderer doesn't.

Please purchase a plasma torch and burn passage 1 into your forehead in reverse lettering and gaze at the nearest mirror for 24 hours. Normal knowledge absorption techniques (reading) don't appear to be working, so now you force us toward more extreme measures.


no, i have been developing that technology, thankyouverymuch.

Uhh huh. Just like I've been developing the warp core, and I got a working sun crusher in my garage. :lol:


prices aren't really a concern since money is only an artificial barrier.

So, not only are you rampantly ignorant about 3-d, you're rampantly ignorant about simple finances as well. That 'artificial barrier' would seem pretty real to someone who needs 20 DP dual core rendering nodes with 16 GB of ram each, but hasn't the cash to buy them.


what people have on their desktops is, however, since you can't get around those limits at any cost and so have to cheat.

People with those rendering nodes I was talking about still have to cheat, because now they're strapped for cash if they weren't before, and that render-node system has to start cranking out the cash cows post-haste: the faster you get your frames back, the more projects you can do, and the more money the studio makes.


there is no reason it won't be real some day. the physics in this area is simple enough. and at a later point game engines might even catch up.

Yes, yes, stone. We all know how shiny your Red Herring is. Now put it away. It's time for grown-up talk now! LOD scaling, according to you, is 'pointless' in a 3-d animation because it doesn't 'look right'. I pointed out that this means all of 3D is pointless, because none of it looks right. Mumbling something about what will happen tomorrow does not constitute a rebuttal to this argument.


sure you can - but you don't have to in a render.

You do if you want to save your company precious time and money and not get your ***** fired for wasting valuable corporate resources.

You do if you want to make the most efficient use of what little you have as a hobbyist.

You do if you aren't mentally deranged enough not to see the very visible benefit to shaving hours (or days for some of you insane render people) off of your render time by snipping a few percentage points off your total vision.


however, in a game engine you do. are you going to write anything that's actually relevant to the points i made?

Are you going to increase your reading comprehension beyond that of cold molasses? Are you going to make one post not riddled with logical fallacies and blatantly ignorant information? Are you going to close those open chemical bottles near your computer?


neither have your pages-long posts yet. at least not any arguments relevant to my posts.

Personal fiat requires a willing and receptive audience. If you think my points are irrelevant, you will address them one by one, explaining why, because your personal say-so is worth about the same as a 3dfx Voodoo1 is for gaming today.


all games use lod. if nothing else, then frustum culling. not all renderers do. besides, if you don't do lod in a game, you simply have to cut quality somewhere else. it's a quality tradeoff all the way through. renders aren't, necessarily.

1) Incorrect. Most fighting games wouldn't use LOD because the whole environment is quite limited as it is, and the most detailed objects (the fighters themselves) never leave the immediate sight of the camera. Everything else is either close up, far away, or in the middle. LOD is dependent on your distance from the camera, and if that never changes then there's no reason to waste processing time on it.

2) Incorrect. Large tradeoffs in game-quality LOD objects will never be the same as the LOD employed by, say, LightWave, which, unlike any game engine you can buy, can determine the tessellation level for each subdivision surface seamlessly. You can set the plugin up so that, as your object gets farther away, it drops geometry unnoticeably. So "huge tradeoffs" aren't necessary.
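The "seamless" behaviour described in point 2 can be sketched as a continuous rule rather than discrete LOD tiers (a made-up heuristic for illustration only; LightWave's actual subdivision controls work on their own terms): pick a subdivision level from the object's projected size on screen, so the level drifts down gradually as the object shrinks.

```python
import math

def subdivision_level(screen_size_px, px_per_poly=4.0, max_level=6):
    """Pick a subdivision level so each polygon covers roughly px_per_poly pixels.

    Invented heuristic: each subdivision level roughly quadruples the polygon
    count, so the level needed grows with log base 4 of the target polygon
    count. As the object shrinks on screen, the level falls off smoothly.
    """
    target_polys = max(screen_size_px / px_per_poly, 1.0)
    level = math.ceil(math.log(target_polys, 4))
    return max(0, min(level, max_level))
```

A tiny distant object gets level 0, a mid-size one a middle level, and no object ever exceeds the cap, so geometry is shed gradually instead of in visible jumps.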

Jorel
08-18-2006, 01:18 PM
no, it doesn't. and your quotations of me don't either, even when taken out of context.

Great! Now let me quote the part where you prove that by providing a better context to the sentences provided.


...

Oh wait... that's right. You didn't. Here's a tip, stone: when using the "out of context" trick on someone, you have to not only explain why the context is wrong, you must provide contextual information that better fits the usage of the phrase - or phrases - in question, because your personal say-so ain't worth spit.

Wait - discovering contextual information in a paragraph requires at least college-level reading comprehension skills, and if you've got them, you certainly haven't been using them in this thread. sigh. Well, no worries. I'll do the legwork for you; just fill in the blanks for the following couple of sentences and your assertions will be supported by evidence. Anything to help our friend stone!


When someone says "specific effects thats relevant only to that particular game.", it's incorrect to think they believe that special effects are relevant only to particular games because__________________ and the actual main idea is ______________. When someone says "i never claimed specific technology isnt useful across the board." it's not a contradiction for them to also say "specific effects thats relevant only to that particular game." because________________.



let me give you a really simple example. hitman: blood money uses a revolutionary crowd system that even works on the PS2. it was specifically developed because it gives our game something special that we want to aim for.

the feature is still useful across the board when used in other contexts. perhaps on another platform where more cpu cycles are available, or as a cut-down version in another game where we need a differently behaving crowd.

it shouldn't really be difficult to understand.

I know it shouldn't be. The rest of us clearly get the meaning of 'relevant only to a particular game' and why declaring those same effects "useful across the board" later is contradictory. But after all, we can't really blame you for not having proper reading ability – intelligence is the luck of the draw, and not everyone will get the longest stick.


get serious. very few games these days sell on new, innovative gameplay.

Nice Strawman. I didn't say "new and innovative"; I implied that the game must play well in order to qualify as a AAA game, and 'new' and 'innovative' are not required for that. Duh.


what was the last triple-A game you helped develop?

Appeals to authority do not an argument make. Note for the slow – the validity of any argument is not dependent on the authority of the person making it. Invalidating a premise requires contradictory evidence, or showing that the conclusion you've reached doesn't follow due to some other logical rule… wait a minute. Look who I'm talking to. :rolleyes:


any game can have cubemapping or bumpmapping, which are technically decades-old technology –

Irrelevant. The point stands – any game can have “technology” but few games are executed well enough to be worth playing over and over again.


it's all a game of 'see, we can cheat more than you can and make our stuff look prettier'. while in a render it's a matter of artist capability and time, since you don't have the same limits.

So… someone playing Far Cry on an Athlon X2 with a GeForce 7900 GT and 2 GB of RAM, who then stops to render the scene they put together in LightWave on that same Athlon X2 with a GeForce 7900 GT and 2 GB of RAM, wouldn't have the same resources? Wait – you're right, they don't, since they can't use the video card for rendering, so now they actually have fewer resources than before! But it's okay. They aren't like you, so they're probably going to compensate for their lack of resources by using tricks, for the ten-thousandth time.


photorealism isn't utopia. neither is the physics, or the knowledge of how the human eye works - i don't claim we'll get anywhere near it tomorrow. but you would be a fool to bet money that we never do.

Nice strawman. I never said 'photorealism' was impossible. I said 'game engine complexity comparable to LightWave' is being made impossible by the fact that LightWave advances are as frequent and significant as game engine advances, and LightWave is already several steps ahead.

I did say that the gap between what is real and what is rendered is ever-present, and it is. That gap will shrink over the years of advances in computer hardware and software, but it will not disappear. Reality possesses far greater resources than any computer can ever hope to, which is why that gap isn’t going anywhere.


please quote me where i make that claim.

You remember in the beginning where I hoped you would pull this trick? Well, I’m taking it back – You are foolish enough to deny the contents of your own posts! Unbelievable! You do know this isn’t live conversation, right? It’s hard to be openly dishonest about things you’ve clearly typed when your posting history is available for anyone to read.


you can't really compare the two. an engine does all it can to fake stuff as much as possible, while a 3d renderer has to do everything as precise and correct as possible.


- the car with 200,000 polygons only has that in extreme close-ups. as soon as it gets a bit of distance, it will be reduced to maybe 5,000 and a nice normal map.


- game engines force the graphics artists to jump through hoops due to limitations: having to make level-of-detail models, optimized models for specific situations, and so on.

I think that speaks for itself. Oh, and be prepared to show why if you’re about to declare these out of context, because personal say-so does not constitute evidence.


at least you made one valid point. nice.

If this point is valid, then why did you say certain special effects are only relevant to a particular game?

stone
08-18-2006, 03:05 PM
this is really becoming a waste of everyone's, and in particular my, time.

filling your replies with cleverly constructed sentences solely to dance around the points, without ever actually producing any constructive arguments, doesn't really lead either of us anywhere - so excuse me for having sorted out all your ramblings and personal assaults.


And yet, your first post on this issue was a bulleted list of (drum roll please...) techniques. More specifically, the ones whose use you claimed differentiated 3d animation from game development, saying that you "can't really compare the two" because games use "fake stuff as much as possible"

let's try once more:
- my first post doesn't use any wording that should lead you to believe my list is supposed to highlight techniques that differentiate renderers from engines.
- my first post doesn't claim anything; it simply lists a few areas where game engines use cheats to achieve results. cheats that will leave visible artifacts on a screenshot. cheats you wouldn't allow if making a high-profile render still.

it's as simple as that. you are the one, seemingly the only one in this thread, who reads more into it. you are the one inventing the contradiction where there is none. you are the one who by all means tries to make it stand out as claims about areas that differentiate the two - even despite being told repeatedly that it isn't the case. even when failing repeatedly to produce any evidence that it should be the case - let it go already.


[..] irrelevant verbal gymnastics to avoid producing any valid points [..]

yawn.


(Passage 1)and I'm saying the same budget, RAM, rendered frames per hour limitations and so forth force 3d animators to employ the same tricks. trying to use these tricks to differentiate the two industries when they both use them Is. A. Con-tra-dic-tion. Re-read that last part real slow and say it out loud so it sinks in, because pointing this out to you is becoming a fruitless chore.

you seem to fail to understand that it doesn't matter that both use them. i don't anywhere claim otherwise. unless you can actually show me where i do, please stop wasting my time.

there is no contradiction. only lack of understanding.


and no, 3d programs are not totally exempt from faking it. You don't have infinite time, you don't have infinite RAM, you don't have infinite resources, so yes if you can, you will 'fake it' because only an idiot will waste precious rendering time on extravagant and unnecessary effects when he can get 98% of what he wants while spending 1/10th of the time getting there.

in a game engine i have to fake it or it won't run. in a render engine i just have to wait longer, but it will work - it's beyond me why you keep circling around a subject that i never touched and don't even care about.

clear-cut point that everyone can understand:
- a game engine HAS to fake it.
- a render engine DOES NOT have to fake it.


[..] irrelevant verbal gymnastics to avoid producing any valid points [..]


No you aren't. If it's "fake radiosity", as you so boldly claim in the second part of this sentence, then it's different enough from 'real' radiosity not to qualify as such (which is why you call it ''fake'), thus making it a contradiction to answer 'yes you are' if someone asks you if you're doing real radiosity in real-time. If it is, then what was the point of making this distinction? Do you find contradictions fun or something?

this paragraph is utter nonsense - everything in a game engine is already faked. you still don't seem to get the basic concept.

a game engine fakes everything. it fakes the light, it fakes the shadows, it fakes the reflection map, and it fakes radiosity - otherwise it simply won't run fast enough.
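Concretely, "faking it" in an engine usually means precomputing: paying for an expensive lighting calculation once, offline, and reducing runtime "lighting" to a cheap table lookup. A toy sketch (all names and the stand-in cost function are invented):

```python
# Toy sketch of baked lighting: pay the expensive computation once,
# offline, then runtime "lighting" is just a lookup into the result.

def expensive_light(x):
    # Stand-in for a costly global-illumination-style computation.
    return sum(1.0 / (1 + (x - k) ** 2) for k in range(100))

def bake_lightmap(samples=16):
    """Offline step: precompute lighting at a coarse grid (the 'lightmap')."""
    return [expensive_light(i) for i in range(samples)]

def runtime_light(lightmap, x):
    """Runtime cheat: nearest-sample lookup instead of recomputation."""
    i = min(max(int(round(x)), 0), len(lightmap) - 1)
    return lightmap[i]
```

The runtime result is only as good as the baked grid: exact at the sample points, approximate everywhere in between, which is precisely the speed-for-accuracy trade being argued over in this thread.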


Oh really? (http://en.wikipedia.org/wiki/Graphics_pipeline)

yes really. your argument is flawed. it's beyond the scope of relevance for me to cut everything out in cardboard for you.


[..] irrelevant verbal abuse to avoid producing any valid points [..]


[..]You do not get to choose to do per-pixel lighting over vertex lighting, you do not get to put pixels where ever you freakin' well please in the rendering pipeline and you do not get to do 100% per-pixel lighting, nor will you lighting be '100%' accurate considering the massive resources you'd need (and don't have) to render 100% accurate lights in real time.

i can combine vertex and per-pixel lighting as i please in a game engine. i can do basic vertex environment lighting and add per-pixel lighting to individual objects if i please.

i can choose when i want to shade my pixels. i can even get feedback from my shading and do a second iteration or a new render pass. i am fully in control of my own render pipeline in an engine.

and i wrote theoretically 100 percent correct. but i guess you once again ignore the actual text to produce irrelevant points.


Furthermore I find it highly suspicious that someone fancying themselves a game developer wouldn't be aware of basic 3d pipeline knowledge that any gamer would know.

right.



[..] completely conceptual misunderstanding of a modern 3d engine's render pipeline [..]

considering the hard time you have understanding that 3d engines fake stuff and render engines don't have to, it's way beyond relevance trying to explain pixel/vertex shaders to you.


Even the fancy garbage we pour over the pipeline nowadays is becoming universally standardized, what with DX9 and OpenGL using universal shading languages - oh wait, you, mister game developer weren't aware of that either? Even though it's common knowledge to anyone who simply plays games?

and this is somehow supposed to be a point, or relevant to the topic at hand? you are weaving nonsense in the absence of factual points. i'm well aware of pixel shaders, but i'm not aware of why you would write this paragraph out of thin air.


Are you sure you make games at IO interactive? Or do you just sweep the floors there?

i think i know where i work.


[..] irrelevant verbal abuse to avoid producing any valid points [..]


Not familiar with the subtleties of implication, are you? Saying that a polygon count like that will get axed unless under extreme close-up implies (It's the word of the day! everybody scream it!) that you think this is a significant enough amount of geometry to throttle the game - and throttling the frame rate of a game takes (drum roll please...) a lot of geometry!

i'm not implying things. you are imagining things. if i didn't write it, then don't claim i did - if you want to invent meanings, words and entire sentences that aren't there, then please go do it somewhere else where you don't waste my time.


[..] irrelevant verbal gymnastics to avoid producing any valid points [..]


Uhh huh. Just like I've been developing the warp core, and I got a working sun crusher in my garage. :lol:

good for you. is yours on sale in your local game shop too?


So, not only are you rampantly ignorant about 3-d, you're rampantly ignorant about simple finances as well. That 'artificial barrier' would seem pretty real to someone who needs 20 DP dual core rendering nodes with 16 GB of ram each, but hasn't the cash to buy them.

Money is not an actual hindrance. It might be for your budget and mine, but it CAN be overcome. Whatever hardware people have on their desktops, and whatever consoles they have by their TVs, CAN'T be changed regardless of how much money you spend - that is why the game engine is limited and has to be designed to run on specific specifications, and that's why a renderer is theoretically limitless.


People with those rendering nodes I was talking about still have to cheat, because now they're strapped for cash if they weren't before, and that render-node system has to start cranking out the cash cows post-haste - the faster you get your frames back, the more projects you can do, and the more money the studio makes.

It's completely irrelevant that 'those people' can't afford it. It's entirely beside the point, which is the difference between the two engines.


[..] irrelevant verbal abuse to avoid producing any valid points [..]


LOD scaling, according to you, is 'pointless' in a 3D animation because it doesn't 'look right'. I pointed out that this means all of 3D is pointless, because none of it looks right. Mumbling something about what will happen tomorrow does not constitute a rebuttal to this argument.

Making up more sentences that I never wrote, I see. As before, it would help if you could point me to the place where I actually wrote that "LOD is pointless in 3D animation".
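[Editor's note: for readers following the argument, LOD (level of detail) scaling in a game engine usually just means swapping mesh variants by camera distance. A minimal sketch - the distance thresholds and mesh names are invented for illustration:]

```cpp
#include <string>

// Pick a mesh variant by distance from the camera. Games do this to keep
// the per-frame polygon count inside a fixed budget; an offline renderer
// can afford to use the full-detail mesh for every frame instead.
std::string selectLod(double distanceFromCamera) {
    if (distanceFromCamera < 10.0)
        return "car_high.mesh";    // full detail, for close-ups
    if (distanceFromCamera < 50.0)
        return "car_medium.mesh";  // reduced polygon count
    return "car_low.mesh";         // far away: a few hundred polys suffice
}
```

Real engines smooth the transitions (fading, geomorphing) so the swap is not visible as a pop, which is exactly the kind of cheat an offline renderer never needs.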


You do if you want to save your company precious time and money and not get your ***** fired for wasting valuable corporate resources.

You do if you want to make the most efficient use of what little you have as a hobbyist.

You do if you aren't too mentally deranged to see the very visible benefit of shaving hours (or days, for some of you insane render people) off your render time by sniping a few percentage points off your total vision.

All completely irrelevant to the point at hand. It's not about what you can do, or what is clever to do to make a profit. It never was, and I never even touched that subject. If you can't find any arguments relevant to what I actually post, then please just don't reply at all.

I'm ONLY concerned with the different natures of the two engines.


[..] irrelevant verbal gymnastics to avoid producing any valid points [..]


[..] irrelevant verbal abuse to avoid producing any valid points [..]

I'm sorry, but your entire post #22 is beyond the point of anything related to what I've written on the topic of this thread.

If you had left out the personal assaults and only dealt with what I actually write, I wouldn't have had to cut out all these pages of irrelevant nonsense.

/stone

Paul Lara
08-18-2006, 03:20 PM
MODERATOR STEPS IN

Don't you just hate it when people go to ridiculous lengths to say, "no.. but wait.."?

Guys. The sentence-by-sentence tit-for-tat has grown wearisome.
This thread is now closed.

Go do something productive with your weekend, ok?
We're done here.


The moderator.