Are game engines the future of rendering?

Old 03 March 2013   #1
Are game engines the future of rendering?

I bring this up because I want to hear everyone's thoughts and input on it; I'm sharing my line of thinking only to start the discussion. I'm not arguing this as true, it's just what I'm currently thinking.

Sometime around the release of Maxwell, when everyone saw how easy it was to produce awesome renders, I think a lot of people got the idea in their heads that the way forward in rendering was simply to throw more cores at it and use more computationally intensive rendering algorithms. Or, if not that, they held the idea that CPUs and core counts would become so plentiful and cheap that rendering full animations in Vray would become super fast and highly economical.

But I've started to think the world of offline raytrace rendering may actually be surpassed by realtime game rendering techniques. With the advent of UDK 4 I think we are really getting to the point of asking, 'OK, Vray still looks better, but is it really THAT much better?' When it comes down to rendering a 5 minute Vray animation in a day versus a 5 minute UDK 4 animation in 5 minutes, I'm already encountering situations where most clients would rather have it in 5 minutes at less than absolutely perfect photorealism. In the next year, in the world of architectural visualization, I can very easily see myself never rendering in Vray or Mental Ray again and outputting video from UDK 4 instead.

This isn't just a rendertime thing either, but an entire client-experience thing too. If I use a game engine to do the rendering, the client can sit there with me and, in realtime, we can adjust materials, lighting, everything. We can cycle through dozens of revisions of a design in a day, and the client can leave that same day with even a 30-minute full HD video on a thumbdrive. Given that selling point, I think it's going to be very hard to sell a client on a Vray-rendered animation. I feel like incorporating game engine rendering into my toolset will make selling visualization work a lot easier, a lot more enjoyable, and a lot more engaging for the client.

Now, how much longer is it going to take until realtime game rendering really can produce something that looks just as good as Vray? As far as I can see, realtime game technology is improving in image quality faster than raytraced rendering is progressing toward fully realtime speeds. I think game rendering methods are going to reach perfect photorealism before raytraced rendering reaches fully realtime speeds.

Of course major studios, WETA, ILM and the like won't jump onto realtime rendering in game engines quickly. But for smaller studios, individuals, and freelancers (which I am, and always intend to work at that scale), I feel like starting to focus on game rendering tech is really the way forward.

Plus, with the advent of mobile GPUs and the iPad, I am encountering more and more clients interested in realtime 3D visualizations on their iPad, where the interactivity of the visualization is FAR more valuable to them than perfect photorealism. When I really think about it, I believe this is true for a lot of people in visualization; they just haven't had the technology fully demonstrated to them. I haven't yet encountered a single client who, when shown their product or building running on an iPad 2 with baked lighting, fully interactive, does not immediately want that more than a perfectly photoreal Vray animation. Fully interactive realtime graphics take computer graphics beyond just visualization and make it a central functional tool for the client. In the world of visualization, I think utility is far more valuable than visual impressiveness alone, and interactivity has much more utility.
 
Old 03 March 2013   #2
OK... this topic gets raised a lot. There's a good chance a thread like this gets started every time someone sees the latest Unreal Engine demo or something.

BUT... what you raise are two important perspectives: The Client Perspective (or what I would call the Audience Experience) and the Production Perspective (or basically the Creator Experience).

From the Client Perspective it's all the same. And this is a tricky position to be in. Because remember that even if you are an experienced 3D animator or modeler, when you play Assassin's Creed, for example, you are engaging in the Audience Experience, and it can look just as good to you as anything from your Vray-based workflow or whatever.

From the Production Perspective, you play a game in Unreal Engine, you look around the world inside the game, and you think: "Well, it's not that bad... and on top of that, this is all realtime!"

And that's where the notion comes up time and time again: from the desire to get a near-final result in realtime.

However, as has been pointed out many times, game engines still rely on perspective-based and texture trickery to make it merely seem like they can replicate an effect that "stands up to a dramatic close-up in cinemas." Videos and screenshots don't help you prove otherwise. The solutions in many of these game engines precisely exploit the view you are given.

What I have heard (recently) is that when you try to make a film with a game engine, you end up working within a very strict set of parameters and not with the "atomic level of freedom" you have in a full-on workflow.

One classic example is the game Assassin's Creed 2. If you walk under various lighting conditions and you REALLY look carefully, you will notice that Ezio doesn't reflect light from any nearby objects and doesn't receive any true lighting from lamps. There's a standard lighting that applies to him, plus just a couple of "shadowed" shades that kick in when he walks under a doorway, enters a tunnel, or passes under a bridge.

It seems to work when you're "just playing". But from a cinematic standpoint this will not stand up to scrutiny and isn't "100% directable". In contrast, you can light straight in a 3D app without doing a lot of tricks just to make it seem like you have real lighting going on.

Another issue, for example, that I red-marked when assessing a possible move to Cinebox (Crytek's engine) was the limitation on the number of bones for an actor. There is no question that you'd need different bone layouts for different faces, and that you'd want as many bones as you need to get the performance you want. Game engines still can't give you that freedom.

Some other things are done in a way that is very different from what you are accustomed to.

This is the reason why all the VFX and CG houses have not switched to using game engines, and it's also why you don't even see video game cinematic trailers being made with game engines. There was perhaps one exception, where Blur made a trailer using the Gears of War engine, but that was a PR stunt - and there might be a good reason nothing more came of it.

The other thing is that the main backbone of game engine power - the use of dedicated video cards and GPUs - has become an increasingly popular branch of development in 3D applications. In Blender, for example, a renderer called Cycles lets you see a very good approximation of the finished render right there in the viewport in realtime, and even gives you the option to use the Cycles result to fire out image sequences.
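To give a sense of how thin that line already is, here's a minimal sketch using Blender's Python API (bpy). Attribute names can vary between Blender versions and the output path is made up, so treat it as illustrative rather than gospel:

Code:
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'              # switch the scene to the Cycles path tracer
scene.cycles.device = 'GPU'                 # render on the video card, if one is configured
scene.render.filepath = '//renders/frame_'  # hypothetical output path, relative to the .blend
scene.render.image_settings.file_format = 'PNG'

# Fire out an image sequence over the scene's frame range -
# the same scene you've been previewing interactively in the viewport.
bpy.ops.render.render(animation=True)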

That realtime capability to see the result you want instantly, allied with the fact that "everything else in your workflow is as you know it", means that while some of the principles of game engine efficiency will probably be absorbed into 3D applications as new features, the game engines themselves will probably remain just that.
__________________
"Your most creative work is pre-production, once the film is in production, demands on time force you to produce rather than create."
My ArtStation

Last edited by CGIPadawan : 03 March 2013 at 04:49 AM.
 
Old 03 March 2013   #3
CGIPadawan, I know exactly what you speak of. The studio I currently work at has an offline rendering pipeline set up with C4D and Vray, and they are getting money for realtime game engine work and are thus putting money into developing a pipeline for that, which I am currently building.

It is a constant technical back and forth to remain within the bounds of what the game engine allows. Entirely new workflows have to be developed, entirely new standards maintained, entirely new concepts of design adhered to. Which is why I say major studios, with tons invested, will not switch for a long, long time, if ever. But once you go through some of the technical R&D of ironing things out, the route of going with a game engine does not necessarily have to be much more limiting in the end result, and it actually opens other doors that are less limiting. There is a tradeoff, of course.

It should be noted that we are a visualization and design studio working primarily in interior design, which is highly suited to realtime rendering for a number of reasons; that may be why I am getting into this now while some still can't see an immediate, easy application. Certainly if you're rendering particle effects, massive exterior scenes, or anything that requires intricate character setup or dynamics, replicating that in a game engine is currently a workflow with tons of friction unless you stay well within its bounds. I think these areas of CG will be the last to move to realtime rendering, but over the next few years this pipeline is going to get refined and become more easily accessible.

But right now, in the visualization of interior design, the separation between a workflow that ends up in a game engine and one that ends up with a Vray animation is really not that huge, and there isn't that much additional constraint put on you. The freedom realtime brings to the table actually offers more versatility than what Vray can provide. I think visualization will be the first field to jump to realtime rendering, if the jump happens at all.

Now, having gone through the R&D process for interior visualization with realtime graphics and seen the benefit, I am starting to wonder what else will follow suit. The gap to jump in order to do this, and to get other things to conform to the bounds of a game engine, doesn't seem that absurd to me. What happens when there is a perfected workflow for cinema-quality character animation in realtime? What happens when a GPU dynamics system can replicate particle effects well enough to be marketable against offline rendering? I don't think that is far off. Studios that adapt to realtime will, I suspect, ultimately be much more agile and marketable.

Again, major studios, no way. But small studios and individuals? I think the payoff has significant potential. Of course it's not there yet, but the thought really floating around in my mind is: how long? At what point does the agility of realtime outweigh its pitfalls compared to offline rendering? And what technical hurdles will continue to exist, and for how long? I don't know.

I am also interested to hear other people's experiences with clients who have begun to feel that the current benefits of realtime rendering outweigh the benefits that offline rendering provides.

Last edited by techmage : 03 March 2013 at 05:10 AM.
 
Old 03 March 2013   #4
The whole "are game engines the future?" question is simply calling a spade by the wrong name.

Game engines are qualified by three things:
1) Narrow and strict input requirements and output domain
2) Specialized and GPU deferred computation
3) Highly kitted with content creation tools

When you look at it like that, sometimes those things are acceptable and sometimes they aren't.

A balance of those three will determine which one you pick.

Thank God we have choice.
You have Arnold, which is CPU heavy but can take any amount and quality of crap you throw at it, letting you avoid sacrificing artist time by throwing a few more blades at the farm (and unhappy artists waiting on renders, tweaking magic numbers just to see anything, are a lot pricier than a large stash of blades).

You have PRMan, which is heavier on artist and engineer time, but can be made to jump through hoops on fire and has lasting power.

You have UDK, or Crytek's cinema-oriented engine version, which are severely limited, but within the narrow domain they operate in are mind blowing.

And you have Octane and other engines like that, which have a relatively narrow domain, are GPU centric, but aren't as constrictive as game engines.

The future isn't any of those.
The future is the fact that nowadays we have many technologies and players of all sizes (some are two-person teams, some fifty), they ALL come at an accessible cost, and you have the never-before-seen privilege of truly choosing the best tool for the job.

Why would ANYBODY in their right mind use an engine limited to a couple gigs of VRAM, with horrible outdoor performance, for a scene requiring a single setting with tons of textures in an outdoor scenario? Just use Arnold.

Or why would anybody not use something like Octane when all you need is to turn around an interior shot with no moblur, heavy texturing, or displacement, where all you want is fast iteration of complex shading models at the speed of light in a scene that fits in a reduced memory footprint?

Why would you use either if you have a wealth of pre-existing shaders doing magic tricks, satisfactory performance, and 2000 seats of PRMan already in place for the show at hand, and don't need a superior, more intrinsically unbiased approach?

Thinking any of them will trump all the others in all scenarios, and wishing for such a thing to eventuate, is short sighted and indicative of a lack of perspective and insight into how these things work.

You have unprecedented choice between amazing products that leverage different paradigms to excel at different mixes of parameters. Choose what's best for you and drop the technophile love for the bleeding edge and the latest fad; you will be better off in the long run.

It's a question that shouldn't be asked. It's like asking whether a $100 Fiat will ever trump Jeeps or Ferraris. It sure as hell won't if you live in the mountains or if you need to race; it will if you only have $100 to spend and it gets you from A to B, though.
__________________
Come, Join the Cult http://www.cultofrig.com - Rigging from First Principles

Last edited by ThE_JacO : 03 March 2013 at 05:14 AM.
 
Old 03 March 2013   #5
Originally Posted by techmage: But I've started to think, the world of offline raytrace rendering may actually be surpassed by realtime game rendering techniques. With the advent of UDK 4 I think we are really starting to get to the point of asking, 'OK vray still looks better, is it really THAT much better?'

I think there is a kind of uncanny valley that game-rendered scenes fall into. The higher the resolution of the output and the higher the frame rate, the more clearly I see the flat diffuse textures and fake reflection maps. Game engine!

There is the question of cinema feel too: film runs at 24 frames per second, while game frame rates are usually higher, if not stuttering. Game scenes can't do proper motion blur, just some sort of fake variety at best, which I can easily pick up on.

Game engines are too sharp. They use some sort of constant blur at reduced resolution to simulate depth of field, at least in cutscenes. It doesn't look like a proper effect from a real lens.

Shadows. Every game I've seen so far does either sharp shadows or some sort of uniformly blurred shadow. My eyes are used to shadows being sharper where an object makes contact with the floor and then gradually blurring further from the point of contact.

Game engines can't do simple things like frosted glass or light scattering through the atmosphere.

Too many lens flares.

Indirect lighting isn't based on proper integration of light energy following physical laws; it's a hack to make up for the lack of GPU cycles, and not even a proper approximation.

The list can go on and on...

The net effect, IMHO, is that the brain perceives a moving scene generated by a game engine as not real. So even for a low-budget feature, I don't think you'd want to use a modern deferred rasterized game engine.

Jules
 
Old 03 March 2013   #6
Originally Posted by techmage: CGIPadawan, I know exactly what you speak of. The studio I work at currently has an offline rendering pipeline set up with C4D and Vray, and they are getting money for realtime game engine work, and thus putting money to developing a pipeline for that, which I am currently developing.


Why? Who is behind it? I can understand though if it is tied to an NDA.
But somebody is putting money behind realtime game engines to make a movie?

It seems weird that someone would push this hard for something that has, as everybody has pointed out, predefined disadvantages that are innate to its design.
__________________
"Your most creative work is pre-production, once the film is in production, demands on time force you to produce rather than create."
My ArtStation
 
Old 03 March 2013   #7
Well, I just posted in another thread here so I won't repeat myself, but I will post the link to what I said,
and I will post this picture of a real actor composited in realtime in CryEngine


gif anim http://imageshack.us/a/img21/6198/s...htthirdrest.gif


Plus I will mention again:
camera sync, raytrace-like DOF and motion blur, and others
 
Old 03 March 2013   #8
Game engines could maybe be used for environmental stuff and such. I guess there are already examples of film and broadcast work where a realtime game engine is put to use for that. But film-quality CG characters in game engines still have some way to go.

Also, game engines are constantly evolving, and hardware makers like Nvidia are demoing hardware capable of realtime raytracing, radiosity, whatnot... So maybe in 20 years, even CG characters from game engines might make it into film production.

EDIT: Oh! Sebastian beat me to it with a nice demo! Cool stuff!
__________________
"Any intelligent fool can make things bigger, more complex & more violent..." Einstein
 
Old 03 March 2013   #9
Hey, I want to chime in too!

- Companies are seriously into this. Sebastian made a good point. Crytek would not invest in CryEngine for Cinema if there were no money in it.

- When it comes to ArchViz, this is already big stuff. I think I saw a demo in Unity (of all engines, ha ha!) where a person walks into a room and you can change the wall paint, the furniture, and the furniture types. And it is awesome!

- Last but not least, I think 'game engine' and 'renderer' will merge into a brand new thing: a realtime, non-game-engine renderer. Not like current game engines, where you need to work within their limitations (especially around import/export, level modelling via brushes, and so on), but more like current renderers, only realtime. There will probably still be limitations and rules to follow, but the pipeline would be much easier. And with render engines such as Otoy's, or GPU grid computing (also Otoy and others), I think we are nearing a time where you could have a three- or four-monitor PC and one of the monitors could show the final, film-level result. And if you are viewing the master scene, any change another artist makes to a referenced model will show up automatically.

Imagine Avatar-style motion capture, except in this case what the director sees in the virtual camera is already the final result.

The advancement of GPU computing and software technology such as Clarisse (maybe a future version of it) could make this kind of thing possible.

Anyway, realtime tools such as CryEngine could make Saturday-morning cartoons easy.
 
Old 03 March 2013   #10
Originally Posted by Jules123: Game scenes can't do proper motion blur, just some sort of fake variety at best, which I can easily pick up on.

Game engines are too sharp. They use some sort of constant blur at reduced resolution to simulate depth of field, at least in cutscenes. It doesn't look like a proper effect from a real lens.

Shadows. Every game I've seen so far does either sharp shadows or some sort of uniformly blurred shadow. My eyes are used to shadows being sharper where an object makes contact with the floor and then gradually blurring further from the point of contact.


Proper motion blur :


longer mp4 movie http://img684.imageshack.us/img684/6955/y33.mp4

proper depth of field


Not the best example, but here: DOF through glass. Only raytraced DOF can do that.


Near-camera defocused objects. Again, only a proper depth of field solution can do that.


Again not the best example, but area shadows: sharper at the base and softer further away.
 
Old 03 March 2013   #11
Realtime engines have a ways to go before they catch up to CPU rendering, but I think they will. There are a lot of good examples and tech demos out there showing some pretty insane features rendered on the fly.

And then we have CPU render engines coming over to the GPU for semi-realtime viewport rendering.

It's going to happen. Twenty years from now, I don't think there'll be much distinction between the two. It's all about how clever programmers are at optimizing functions to take advantage of the GPU and parallelism. The more code becomes massively multithreaded, and the larger the graphics libraries GPUs natively support, the more it makes sense to put code on the GPU.

GPU development has shown no sign of slowing down and continues to branch out with more parallelism, since that's the primary way GPUs function, while CPUs have shown many signs of slowing down and have been forced to move into parallelism with multiple cores.

CryEngine and Unreal are the obvious engines to keep an eye on.

Last edited by sentry66 : 03 March 2013 at 04:08 PM.
 
Old 03 March 2013   #12
element

Guys, just look at Element 3D for After Effects and see how it's used in the media!! There's your realtime engine, ready to use in any kind of visual media project. And it's cheaper than any of those game engines... Now, as one of the earlier posters pointed out, it's all about choice and the needs of the project. Element 3D was good enough for the title sequence of "Fringe", but probably not the best choice for Transformers 4 VFX...
 
Old 03 March 2013   #13
Originally Posted by sebastian___: Proper motion blur :

proper depth of field
not the best example but here - dof through glass. Only raytraced dof can do that

near camera defocused objects. Again only a proper depth of field solution can do that

again not the best example but area shadows. Sharper at the base and softer further

I must explain what I mean by 'proper'. Essentially, there are currently not enough GPU or CPU cycles to do physically correct rendering in realtime.

CryEngine only has, say, 40ms to draw each frame, so it has to employ a lot of cheats. The motion blur in the image you posted is, I believe, a 2D image-processing technique where you follow the motion vector of the object and blur the surrounding pixels along the line of motion.

A physically correct renderer would instead re-render the scene at many time points while the 'shutter' is open and then average those rendered images over the frame time.
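To make the contrast concrete, here's a rough Python/numpy sketch of the two ideas (not taken from any particular engine; the function names and sample counts are made up purely for illustration):

Code:
import numpy as np

def velocity_blur(frame, velocity, samples=8):
    """Game-style cheat: smear each pixel along its 2D screen-space motion vector.
    frame:    (H, W, 3) rendered image
    velocity: (H, W, 2) per-pixel motion in pixels for this frame
    """
    h, w, _ = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    out = np.zeros((h, w, 3), dtype=np.float64)
    for i in range(samples):
        t = i / (samples - 1) - 0.5                      # step along the vector, centred on the pixel
        sx = np.clip((xs + velocity[..., 0] * t).astype(int), 0, w - 1)
        sy = np.clip((ys + velocity[..., 1] * t).astype(int), 0, h - 1)
        out += frame[sy, sx]
    return out / samples

def accumulation_blur(render_at_time, shutter_open, shutter_close, samples=16):
    """'Proper' version: average full re-renders taken while the shutter is open.
    render_at_time: callable t -> (H, W, 3) image, i.e. a complete scene render at time t
    """
    times = np.linspace(shutter_open, shutter_close, samples)
    return np.mean([render_at_time(t) for t in times], axis=0)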

DOF in a game engine is a similar 2D image-processing cheat to the motion blur: take the surrounding pixels of the rendered frame and blur them over a circle whose radius depends on how out of focus that part of the image is. For bokeh effects, draw 2D sprites on top in the shape of a hexagon or whatever aperture shape you're after.
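In the same hand-wavy spirit, a post-process gather DOF looks roughly like this. Again just a sketch: the circle-of-confusion maths is simplified and the scale from scene units to pixels is hidden in the strength and max_px parameters, which are made up:

Code:
import numpy as np

def postprocess_dof(frame, depth, focus_dist, strength=30.0, max_px=12.0, taps=16, seed=0):
    """Game-style gather DOF: blur each pixel over a disc whose radius grows
    with how far that pixel's depth is from the focus distance.
    frame: (H, W, 3) image; depth: (H, W) per-pixel depth in scene units.
    """
    h, w, _ = frame.shape
    # Circle-of-confusion radius in pixels (simplified; real engines use a thin-lens formula)
    radius = np.clip(strength * np.abs(depth - focus_dist) / np.maximum(depth, 1e-6), 0.0, max_px)
    ys, xs = np.mgrid[0:h, 0:w]
    rng = np.random.default_rng(seed)
    out = np.zeros((h, w, 3), dtype=np.float64)
    for _ in range(taps):
        ang = rng.uniform(0.0, 2.0 * np.pi, size=(h, w))
        r = radius * np.sqrt(rng.uniform(size=(h, w)))   # uniform over the disc
        sx = np.clip((xs + r * np.cos(ang)).astype(int), 0, w - 1)
        sy = np.clip((ys + r * np.sin(ang)).astype(int), 0, h - 1)
        out += frame[sy, sx]
    return out / taps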

A physically correct renderer would take each camera ray and Monte Carlo (slightly perturb) its origin and direction over the lens aperture, averaging the results from each ray sample as the rays bounce around the scene being reflected, refracted, occluded, etc. But you have to do this for all the rays going through the lens.
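For comparison, one Monte Carlo thin-lens camera ray might be generated like this (another sketch; the camera basis vectors are hard-coded here as an assumption, and a real tracer would average many such samples per pixel):

Code:
import numpy as np

def thin_lens_ray(cam_pos, pixel_dir, focus_dist, aperture_radius, rng):
    """One depth-of-field sample: jitter the ray origin over the lens aperture and
    re-aim it at the point where the pinhole ray crosses the plane of perfect focus."""
    focus_point = cam_pos + pixel_dir * focus_dist

    # Uniform sample on the aperture disc
    ang = rng.uniform(0.0, 2.0 * np.pi)
    r = aperture_radius * np.sqrt(rng.uniform())

    # Assumed camera basis (right/up); a real camera would supply its own
    right, up = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
    origin = cam_pos + right * (r * np.cos(ang)) + up * (r * np.sin(ang))

    direction = focus_point - origin
    return origin, direction / np.linalg.norm(direction)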

Again, refraction/reflection in a game is another image-processing cheat. You can take the rendered frame (already carrying the motion blur and DOF 2D cheats), cut out the bit that should be seen as a reflection, and draw it over the reflecting surface. Or you use a pre-rendered environment map as part of the material. You could probably cheat better by re-rendering just the bit the reflection can see, from the reflection's point of view, to pick up the correct lighting, and then pasting that section onto the reflecting surface. So yes, the DOF-through-refraction in the picture there is a cheat.
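The environment-map half of that cheat is about as simple as it sounds. A minimal sketch, where env_lookup stands in for whatever cubemap or lat-long texture fetch the engine actually does:

Code:
import numpy as np

def reflect(view_dir, normal):
    """Mirror the incoming view direction about the surface normal."""
    return view_dir - 2.0 * np.dot(view_dir, normal) * normal

def envmap_reflection(view_dir, normal, env_lookup):
    """Game-style reflection cheat: instead of tracing a ray back into the scene,
    just sample a pre-rendered environment map in the reflected direction."""
    r = reflect(view_dir, normal)
    return env_lookup(r / np.linalg.norm(r))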

Shadows are currently a massive problem for game engines to make look good. The best techniques so far are extensions to shadow mapping: you're limited by resolution, soft shadows take a relatively long time, there are artefacts, etc. I'm not sure what technique is used in that shot; maybe back-projection. You probably couldn't do that right now in a game engine for a complex scene, it takes too many GPU cycles. For two or three teapots you probably could.
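The contact-hardening look in Sebastian's last shot (sharp at the base, softer further away) is usually faked with something in the spirit of percentage-closer soft shadows over a plain shadow map. Roughly, with the search radius and light size as made-up parameters:

Code:
import numpy as np

def pcss_shadow(shadow_map, uv, receiver_depth, light_size, rng,
                search_radius=0.01, search_taps=16, pcf_taps=32):
    """Sketch of percentage-closer soft shadows on a depth-only shadow map.
    shadow_map:     (H, W) depths as seen from the light
    uv:             (u, v) lookup position in [0, 1]^2 for the shaded point
    receiver_depth: the shaded point's depth from the light
    Returns a lit fraction in [0, 1]; the penumbra grows with blocker distance,
    so contact shadows stay sharp and shadows soften further from the occluder.
    """
    h, w = shadow_map.shape

    def sample(u, v):
        x = int(np.clip(u * (w - 1), 0, w - 1))
        y = int(np.clip(v * (h - 1), 0, h - 1))
        return shadow_map[y, x]

    # 1) Blocker search: average depth of occluders in a fixed-size neighbourhood
    offs = rng.uniform(-search_radius, search_radius, size=(search_taps, 2))
    blockers = [d for du, dv in offs
                if (d := sample(uv[0] + du, uv[1] + dv)) < receiver_depth]
    if not blockers:
        return 1.0                                      # nothing in the way: fully lit
    avg_blocker = float(np.mean(blockers))

    # 2) Penumbra estimate from similar triangles (light -> blocker -> receiver)
    penumbra = light_size * (receiver_depth - avg_blocker) / avg_blocker

    # 3) Percentage-closer filtering over the penumbra-sized footprint
    offs = rng.uniform(-penumbra, penumbra, size=(pcf_taps, 2))
    lit = [sample(uv[0] + du, uv[1] + dv) >= receiver_depth for du, dv in offs]
    return float(np.mean(lit))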


Yeah, because there is so little time (milliseconds) to render each frame, a game engine does a lot of image-processing tricks. And since it's not based on real physics throughout the render, the net effect is that it's going to look not quite real.

But if your film is highly stylised or going for a more cartoony feel, then a game engine may be good enough.

Jules
 
Old 03 March 2013   #14
not this again...
 
Old 03 March 2013   #15
I think it is a good question, but as other folks have said, every tool has its purpose. What I mean is that game engines are built for games. They are specialized tools developed to generate believable imagery with a lot of shortcuts. For example, most game engines do not support realtime ray tracing, and until they do it will be hard to see game engines used for movies to a large degree. But at the end of the day, the reason the tools are built differently is ultimately hardware.

Game engines have historically been built around a completely different pipeline for generating frames that are output to a display device and then lost forever. That pipeline was designed around a very optimized set of data and algorithms to generate the maximum number of frames possible each second. In this pipeline, hardware is always the limiting factor, based on what is available in the average consumer-grade PC or console at the time.

CG in film, on the other hand, has historically been about reproducing reality with as much accuracy as possible per frame, to fool viewers into believing something is real when it isn't. In this pipeline hardware is not an issue, as large numbers of servers are the norm for rendering each frame, and nobody expects those frames to be rendered in realtime. In this system, the realism and fidelity of the image matter more than the time it takes to render a frame.

If you look at the history of graphics systems and hardware, you will see that we are in the midst of a cycle that is returning to the use of specialized hardware as part of the rendering process. This is going to have much more of an effect on the tools we use across the full spectrum of graphics companies than anything else, but it is going to take time for the hardware to mature and for software to be built to take advantage of it. Twenty years ago, such specialized hardware was the norm, with systems like those from Silicon Graphics built around processors containing dedicated hardware for graphics calculations. Everything about rendering was based on specialized hardware. But that philosophy slowly became obsolete and was replaced by the "brute force" mantra of ever more powerful general-purpose processors from Intel. Now most rendering is done on general-purpose hardware. But Moore's law has run into a brick wall lately, and Intel is going back to the idea of specialized processors integrated on the chip alongside more and more general-purpose cores. Hence the rise of the GPGPU and architectures like Intel's Larrabee as the next generation of supercomputing architectures.

Whether Larrabee or GPGPUs actually become prominent in the near term or not, it seems more and more likely that at some point in the future processors will consist of multiple types of cores, both general purpose and specialized. When that happens, you will see less of a distinction between the tools for games and the tools and pipelines built for movie rendering. It is already happening to some extent now, albeit slowly, in the use of GPUs for some parts of rendering in various 3D packages.
__________________
Well now, THATS a good way to put it!
 