I know of one renderer - Spectral Studio http://www.spectralpixel.com/ - which has implemented its own version of OSL, so that may be a way to go. But obviously that’s a lot of work! At least you get the benefit of shaders developed for other renderers, e.g. Cycles and V-Ray.
Well, you have C++ networks of shading nodes (very traditional) and monolithic shading languages like RSL (also very traditional). Then you have OSL, which sits somewhere in the middle, but not really. It is a shading language in which you write nodes, but it then compiles the network down to a single piece of code. That code is handed to a JIT compiler to make it run fast.
I can see a number of permutations of those elements. Sometimes a great recipe comes down to just how you mix the ingredients.
Hello Max, and congrats to everybody at Animal Logic for the great work on TLM.
Thanks also for your answers above, a very interesting discussion!
Glimpse average render time for final frames of large LEGO shots was around 10-30 minutes (a lot longer when in hybrid mode with Prman). The rendering hardware was 16-core Sandy Bridge machines (32 virtual cores).
I’m sorry, I’m just stuck on this: I can’t believe you had such low render times for TLM (with DOF and motion blur?). So my question is: what percentage of the shots, on average, was rendered in “hybrid mode”?
Because the tech was constantly evolving, getting faster and more feature rich, I don’t have precise data, so I’m reporting a very rough projection of render times based on the tech we had at the very end. One thing I must apologise for is that I don’t remember whether those 30 minutes were for a stereo or a mono render. So let’s double that to be conservative.
If I remember right, renders in “hybrid mode” took around 5-7 hours wallclock per frame (in stereo), including all the render passes that contribute to the frame.
I would say that 95% or more of the movie was rendered in “hybrid mode”. I think only a handful of shots were rendered in Prman alone at the very beginning, and a handful in pure Glimpse at the very end.
Only a few shots had motion blur. Motion blur in Glimpse adds around 8-15% to render time. DOF comes almost for free thanks to the nature of sampling in a path tracer. But we rendered without DOF and applied it in Nuke, because of the need to tune it in stereo right at the last minute.
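The “DOF almost for free” point comes from how a path tracer builds camera rays: each pixel sample already jitters its ray, so adding depth of field is just one more jitter of the ray origin across the lens aperture. Here is a minimal thin-lens sketch of the idea (the function name and the exact model are my own assumptions for illustration, not Glimpse’s implementation):

```python
import math
import random

def thin_lens_ray(px, py, aperture_radius, focal_distance):
    """Build one camera ray for pixel sample (px, py) with depth of field.

    Without DOF the ray would start at the origin and aim at (px, py, 1).
    With DOF we jitter the origin across the lens disc and re-aim the ray
    so it still passes through the in-focus point: barely any extra work.
    """
    # Point on the focal plane that this pixel sample must hit exactly.
    focus = (px * focal_distance, py * focal_distance, focal_distance)

    # Uniformly sample a point on the lens aperture (a disc at z = 0).
    r = aperture_radius * math.sqrt(random.random())
    theta = 2.0 * math.pi * random.random()
    origin = (r * math.cos(theta), r * math.sin(theta), 0.0)

    # Direction from the lens sample toward the in-focus point.
    d = tuple(f - o for f, o in zip(focus, origin))
    length = math.sqrt(sum(c * c for c in d))
    direction = tuple(c / length for c in d)
    return origin, direction
```

With `aperture_radius = 0` this degenerates to an ordinary pinhole camera, which is why turning DOF on costs almost nothing: the sampling machinery is already there.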
Thanks for the explanation! Those seem more like “common” production render times.
I’ll bounce to another question: Renderman seems to still be used in many productions, even though it’s considered slow and/or less efficient for raytracing. From your point of view, why is there still a need to use Renderman? What features can’t you have in Glimpse that make the use of Renderman still relevant?
It’s a tricky question and I can’t speak for everybody. I’m confident some studios are using it because they know the engine very well and they have tools and a pipeline that are bound to it.
It is not about the features; rather, it is how the computation is carried out that can make the difference.
Pixar showcased a production at last year’s Siggraph where they had to render lots of soap foam. I have to agree that some effects like that are much more difficult to render in a raytracer. The traditional cheating you can do in REYES can still be compelling in those cases. But for regular production, people want a simpler and more predictable renderer.
Renderman is changing too. It’s becoming a pure raytracer… Only time will tell.
First of all, congratulations for the great work on the movie. It really was awesome. Everything is awesome.
Having tried many render engines over the years of doing this job, I have always thought that the real difference in speeding up the lighting workflow effectively would be to have a realtime preview: cutting out the translation time and having a rough/noisy preview frame that gets progressively refined, but that already gives a good “glimpse” of what lighting is all about: values.
I remember the Worley FPrime on Lightwave 3D years ago, how revolutionary that was!
I think that as of now most engines have an “interactive preview”, but they don’t really have fully interactive rendering.
I think Modo is the only one that has full quality realtime progressive refinement, and I love it because everything gets updated instantly: there is basically no waiting to get a quick frame good enough for lighting evaluation.
Maybe you already said that, but does Glimpse actually render the final version of the frame right away, progressively refining it, or does it have a close enough preview that the artists use to light?
How much has this approach sped up the lighting team’s productivity?
I don’t just mean in terms of final rendering time, but in terms of the iterations that the artists were able to achieve with this way of rendering.
I imagine having something that is noisy but meaningful lets you iterate through shot notes very quickly and the whole lighting approach becomes way more organic.
Or at least this is how it is using Modo, where you can continuously change light transforms and values, getting instant results.
Do you think that this is a real value in a production and do R&D departments focus more on that, or is it just good enough to wait for frame translation and full quality rendering (talking about VFX and CGI movie production)?
Was Glimpse also used as a realtime lookdev rendering tool? I’ve found it very valuable to have fast shading iteration when lookdeving characters.
Yes, the only difference between preview and final rendering in Glimpse is the number of samples used and the order in which they fire. To me this is a very desirable property.
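That property falls naturally out of how a Monte Carlo estimator accumulates: the running mean after n samples is exactly the image a “final” render with n samples would produce, so any intermediate state is a valid preview. A toy sketch of the idea (my own illustration, not Glimpse code):

```python
import random

def render_progressive(sample_fn, max_samples, display=None):
    """Accumulate Monte Carlo samples into a running mean.

    The estimate after n samples is identical to a one-shot render that
    used n samples, so the image can be displayed at any point and will
    simply get less noisy as more samples arrive.
    """
    mean = 0.0
    for n in range(1, max_samples + 1):
        mean += (sample_fn() - mean) / n  # incremental mean update
        if display:
            display(n, mean)               # hook for preview refreshes
    return mean

# Toy "pixel": estimate the average of a noisy signal whose true value is 0.5.
random.seed(1)
estimate = render_progressive(lambda: random.random(), 10000)
```

The `display` callback stands in for the viewport refresh: preview and final differ only in how far the loop has run, not in what it computes.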
It depends how technical and organised the production is. I know that in some companies lighters are left to do quality control in place of other departments, to debug assets and versions, and to verify that the scene contains everything it is supposed to. That can pose nasty obstacles to the benefits you can get, because the majority of the time is spent troubleshooting rather than lighting.
You need good artists too. If you have the type of technical-but-not-artistic lighter I referred to in a previous post, they will spend a long time poking the lights around without understanding whether the result they are producing looks good or not, and you will get little improved productivity out of them.
I am not sure I understand the question. If you are asking if R&D within a company should spend time in making the work simpler for the departments… well, that is by definition the purpose of R&D. The challenge is to hit the narrow optimal spot between the cost of R&D and cost of running the departments. If you want to revolutionise you need really good engineers and transformational leaders, otherwise you get average solutions.
No. Lookdev was almost complete when I joined the Lego crew. But we can now! Next production!
I know you didn’t mean it this way, but lighting is so not about values! Many in the industry actually think that way and it is so far from the truth.
Lighting and camera work are the essence of cinematography. Cinematography is the essence of storytelling in moving pictures and photography (cinematography is a superset of photography). In moving pictures, storytelling is everything.
If you want to put it down to the mechanics of lights being stored as values, then everything digitally created is stored as, and manipulated through, values, including models, animation, textures, FX…
Sorry if I wasn’t clear. What I meant regarding R&D is: do you think developing a progressive refinement renderer gives an effective enough boost in production to justify the investment (in terms of time and resources) required to pursue that road, and so does it make a real difference in production compared to the standard solutions available at the moment?
Can you please expand on the difference between the order in which the samples are fired in the final vs the “preview” mode in Glimpse?
I meant values not in a mathematical sense, but as the tones any image is made of.
I was referring exactly to cinematography and photography (Ansel Adams’ Zone System, for example), but also to painting and other visual arts, where the harmony of the tones defines an image that would otherwise just be visual noise.
To define the lighting of a shot, a good artist (not just a technical one) usually needs to be able to quickly decide where to put the lights and see how they affect shadowing, in order to quickly match a shot’s lighting or set up a mood. This is way quicker and more organic with a progressive refinement renderer than with a traditional one, where placing a light, waiting for the translation, seeing the render, and starting all over again is a tedious and, in my opinion, not really productive process.
Most commercially available render engines offer some form of interactive preview these days. Ask yourself if you can use one of those and what that would mean for the company (cost of licences, training, pipelining, etc…).
Implement something in house only if nothing out there will serve the specific set of circumstances and requirements you face.
Implementing a basic render engine is simple. Making it fast is hard. Making it production ready for demanding large studios and big Hollywood productions is really, really, really hard work.
Is in-house development more convenient than buying off-the-shelf products? (We are not talking about pipeline here.) Most likely not, unless that development is part of a bigger picture and the “value” of the overall system is much bigger than the sum of the parts.
Also, whether development is worthwhile really depends on the skills of the R&D team and who is leading it. When playing at this level, as I said before, it is a very delicate balance and there is a high risk associated with it. Only knowing your people very well will tell you whether you can dare and whether it’s worthwhile. It was for us.
To visualise the refinement in progress, interactive rendering tends to scan across the whole image many times. That is less than ideal, because as you move from pixel to pixel, different geometry, different textures and different paths require access to a wide amount of data which must be fetched from memory. Every time the renderer comes back to the same pixel, none of the data it needs is in the CPU cache anymore. If you can sample one pixel at a time, the computation is faster because of the improved cache affinity. The incoherent nature of path tracing makes it all harder and reduces the benefits of the approach. Still…
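The two sample orderings being contrasted can be sketched as loop structures. This is a schematic illustration of my own, not Glimpse’s scheduler: the pass-major order sweeps the whole image once per refinement pass (good for previews, poor cache reuse), while the pixel-major order fires all of a pixel’s samples back to back (better locality, but no early whole-image preview):

```python
def pass_major(width, height, passes):
    """Interactive-style order: sweep the whole image once per pass.
    By the time the next pass returns to a pixel, its geometry and
    texture data have long been evicted from the CPU cache."""
    for p in range(passes):
        for y in range(height):
            for x in range(width):
                yield (x, y, p)

def pixel_major(width, height, passes):
    """Batch-style order: finish all samples of one pixel before moving
    on, so that pixel's data stays hot in cache across its samples."""
    for y in range(height):
        for x in range(width):
            for p in range(passes):
                yield (x, y, p)
```

Both orders do exactly the same total work; only the ordering, and therefore the memory access pattern, differs, which is the trade-off described above.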
I am hoping this Meet the Artist session is still open.
I’ve been involved with the artistic side of animation for many years now, but have always had a fascination with producing my own tools and understanding the technology I use on a daily basis.
Would you have any advice that you’ve learned over the years to pass on with regard to building a good foundation for developing rendering technologies? Perhaps some good books that have helped you understand the theory and core mathematics? (I am going through Computer Graphics: Principles and Practice and the PBRT book, but the latter can be a little overwhelming.)
Also, have you any thoughts on the rendering engine Cycles? If so, do you see any major shortfalls in it compared to an engine such as Glimpse that would make it unsuitable for big productions, e.g. lack of major technologies such as Alembic or Ptex support?
That is excellent. Artists that understand the challenges of a technical medium tend to make better choices in their daily work.
If I had to give you only one piece of advice, it would be to not feel overwhelmed or inadequate. Sure, writing a compelling and competitive production renderer is not for everybody, but writing a renderer for educational purposes is most definitely achievable.
It is easy to feel overwhelmed reading a few pages of any good rendering book. There are so many systems and disciplines that come into play.
I don’t know what your background is. You need to have a solid base in a few subjects before you take on writing a renderer or other complex systems. If you skip this step, everything is going to be so much harder.
[ul][li]Learn a bit of math: trigonometry, matrices and calculus. You don’t need to be able to solve indefinite integrals or differential equations, but being able to read and understand the formulas in papers and books is much needed. There are some excellent free open university courses at MIT and Stanford (and certainly others). You might want to check those out.[/li][li]Software engineering… You need to know fluently how to read and write C++ code. If your experience is Python scripting, sorry, but you might have to unlearn what you know. If you don’t have a good base in C/C++, study that first and take on simpler projects, like simple animation tools, deformers or other plugins that Maya, 3ds Max or your tool of choice allows you to implement. Write simple command line tools, like a tool to rename frames in a folder, or to fill missing frames by copying and/or interpolating from their neighbours. Write a frame buffer to load and display images. If you find these simple tasks difficult, you are not ready yet to take on a renderer. Writing a renderer will stress to the limit any foundation you have in software engineering, and sometimes you will have to unlearn what you think you know and learn it again (unfortunately this is not a step that can be bypassed).[/li][li]Optics and radiometric quantities. You can’t live without those, in particular geometric optics. You can find plenty of resources all over the web.[/li][/ul]The fact is that most subjects look like voodoo magic when you know nothing about them.
Many will leave the learning path feeling overwhelmed. If you stick to it and persevere, one bit at a time the picture (or should I say potion) will become clearer.
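One of the warm-up exercises suggested above, filling missing frames in a numbered sequence, can be sketched in a few lines. The filenames, padding, and the copy-nearest-frame policy here are assumptions chosen for illustration:

```python
import shutil
from pathlib import Path

def fill_missing_frames(folder, prefix="frame.", ext=".exr", pad=4):
    """Fill gaps in a numbered frame sequence by copying the nearest
    existing frame below each gap (a simple 'hold'; interpolation is
    left as a further exercise)."""
    folder = Path(folder)
    existing = sorted(
        int(p.name[len(prefix):-len(ext)])        # e.g. "frame.0003.exr" -> 3
        for p in folder.glob(f"{prefix}*{ext}")
    )
    filled = []
    for n in range(existing[0], existing[-1] + 1):
        if n in existing:
            continue
        # Hold the last frame that actually exists before this gap.
        src_n = max(e for e in existing if e < n)
        src = folder / f"{prefix}{src_n:0{pad}d}{ext}"
        dst = folder / f"{prefix}{n:0{pad}d}{ext}"
        shutil.copyfile(src, dst)
        filled.append(n)
    return filled
```

It is a small task, but it already exercises the habits a renderer will demand: parsing, file I/O, edge cases, and thinking about what happens at scale.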
The PBRT book you have is excellent. One piece of advice: you don’t need to read its chapters in order. Read here and there to get an overview, then go back to whichever part you are ready to take on. For example, sampling and reconstruction might seem like pointless pain when you don’t yet know why you are going to desperately need them.
I know almost nothing about Cycles, apart from the fact that it is open source and connected to Blender, so I can’t help you with that. I read once that it used to be fast, then it went slower and slower as more features were added to it. In Glimpse a lot of my time went into fighting that battle: adding features while not letting performance deteriorate. It has been very challenging, and that is a dark art even when you know how things work.
One thing worth saying: if it is hard to add support by yourself for something like Alembic in a renderer, then that renderer is not something worth using in production. Not because of Alembic itself, but because during production you will have to add support for random things and you will have virtually no time to do it. This is the main difference between production renderers and the thousands of renderers (either open or closed source) you find all over the world.
I hope this helps and motivates you.
Thank you for your response Max, I appreciate it immensely
I shall keep focused on building my foundations.
I really like the suggestion of taking on smaller tasks. This is something I am actively pursuing, and I believe it will help me on a lot of levels later on, when I get to the stage of implementing more complex production tools.
Currently I am using Python. Of course I will need to take the dive into C/C++ in the future, but I had thought developing advanced skills in Python would help me, as a lot of 3D applications can have complex add-ons/plug-ins developed purely in Python.
Technically you could cross the ocean in a kayak… The fact that most apps allow you to write complex plugins in Python doesn’t mean that people should.
Python is not the fastest or most efficient scripting language (far from it). Yes, as a scripting language it is easy to write and versatile enough, and for this reason people indulge in it and abuse it for the wrong reasons… like writing complex plugins, modules or entire applications with it.
When you are about to implement something, ask yourself this question: how many times is this code going to run? Once per day? Once per hour? Once per mouse click? Or once for each asset I load or visit? Is 1 millisecond per iteration fast enough? What if the code runs once per asset in a million-asset scene at scene load? At 1ms each, that would be almost 17 minutes of waiting… You don’t have scenes that big… yet… but if there is a chance, you will. Once the bad practice of abusing scripting languages is widespread, it is very hard to profile what is slow in a pipeline and bring it back under control.
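The back-of-envelope arithmetic behind that 17-minute figure is worth making explicit; the 1 ms per-call cost and the million-asset scene are illustrative assumptions, not measurements:

```python
# Back-of-envelope cost of "cheap" per-asset scripted work at scene load.
per_call_seconds = 0.001       # 1 ms of interpreted work per asset
asset_count = 1_000_000        # a (future) million-asset scene

total_minutes = per_call_seconds * asset_count / 60.0
print(f"~{total_minutes:.1f} minutes of wait at scene load")  # → ~16.7 minutes
```

The point is not the exact numbers but the habit: multiply the per-call cost by how often the code will actually run before deciding the language is “fast enough”.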
Second point, about getting too familiar/skilled with Python before getting comfortable with C++.
Python is too simple, and it encourages you to indulge in bad habits, like abusing object-oriented design and ignoring altogether the implications of memory access patterns.
If high performance computing is your goal, then most of the “good practices” you learn as an advanced Python developer are pretty much useless, and they might get in the way, because on them you constructed the foundation of the way you think about software development (remember, sometimes you have to unlearn, which can be a very hard thing to do).
Third point. I strongly dispute that writing Python code is easier and faster than writing good C++ code, if you are organized. People often compare these 2 situations:
[ol][li]well skilled and organized Python development[/li][li]low skilled and disorganized C++ development.[/li][/ol]You must compare apples with apples. Writing good C++ code is not slower than prototyping in Python.