Color Management - Filmic


I found this video pretty interesting and am wondering if there is something similar in C4D.
I’m doing more interior renderings lately and yearning to continually improve.

Thanks for your input,


This is simply the linear workflow right? So no need for it.


They’re quite different things. Linear workflow is really more of a bugfix for software that wasn’t taking gamma curves into account (i.e. virtually all software). The filmic LUTs being referred to are about how a typical camera interprets brightness in an image, particularly how bright highlights roll off and how dark areas can be boosted.

The answer to the question though is no, c4d can’t do this. Your best bet would be a 32bit render then colour grading in photoshop.


Ok. But wouldn’t LW in C4D be the basis to accomplish similar things? And then apply a LUT, for example, in post?


Don’t confuse linear workflow (the project setting) with rendering into linear colour space. They’re related topics but not the same thing. The linear color workflow setting tells c4d how to interpret and internally process colour data. Rendering that data into a linear space format is another thing you can choose to do.

In a nutshell. Keep lwf enabled in the project settings, then switch output to 32bit, this then forces c4d to output linear data for finer control in post.
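To make the “linear data, viewed through a curve” idea concrete, here is a minimal sketch of the standard sRGB transfer function that display pipelines apply to linear-light values. This is illustrative only, not C4D’s internal code; values are assumed normalized to 0..1.

```python
def linear_to_srgb(x: float) -> float:
    """Encode a linear-light value (0..1) with the standard sRGB curve."""
    if x <= 0.0031308:
        return 12.92 * x  # linear toe for very dark values
    return 1.055 * x ** (1 / 2.4) - 0.055  # gamma segment

# A linear mid-grey of 0.5 ends up much brighter once display-encoded:
print(round(linear_to_srgb(0.5), 3))  # ~0.735
```

This is why an untagged linear render looks dark and washed out: the display encoding hasn’t been applied yet, which is exactly what you control in post when working from a 32bit file.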


I watched Blender Guru’s video a few years ago and I’ve been both curious and puzzled about it ever since.

What we are talking about is color stops…the latitude between the darkest and lightest. My understanding has been that rendering to 32-bit will retain all dynamic range that is available in a render, but it won’t expand the dynamic range that gets processed. What you are saying suggests something different…that c4d will expand the range of light that gets processed. Are you sure about that?

By no means am I sure about my view, but I’m a bit dubious that changing color depth of render expands dynamic range of c4d’s processing.

Obviously 32-bit is a better choice for grading, but does it change how the image gets rendered…or just what data gets stored?


I’m not sure how blender works, but it seems to me that Blender Guru’s explanation is not correct.

AFAIK, internally all 3D renderers, including C4D, compute at least in 32bit per channel. That’s billions of values per channel.

The question is how it is then translated to 8 bit per channel for output on our monitors or into different common file formats (jpg, PNG etc…).

That’s the moment when Tone Mapping enters the game to decide how to squish in super bright highlights and very dark shadows inside the visible space.
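As a sketch of that “squishing”, here is a minimal Reinhard-style tone map, one of the simplest classic operators. It is not what C4D or Blender use internally; it just shows how values from 0 to infinity can be compressed smoothly into the displayable 0..1 range.

```python
def reinhard(x: float) -> float:
    """Map [0, inf) smoothly into [0, 1); bright values roll off instead of clipping."""
    return x / (1.0 + x)

# Highlights compress hard, shadows are barely touched:
for v in (0.1, 1.0, 4.0, 100.0):
    print(v, "->", round(reinhard(v), 3))
```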

As far as I understand, “Filmic” in Blender is just a better tone mapping curve than the default one, which seems to blow out the highlights on the monitor.

And we have that already built into C4D (R19 at least): “Tone Mapping” is a post effect that requires the render to be set to 32bit. There are various modes based on film response.

Most other renderers I know of have it too.

Note that if you render to 32 bits anyway, you don’t have to apply tone mapping in C4D and can tweak it in post in AE, Fusion, Nuke etc… to get the same “filmic” look in post.
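For what such a “filmic” curve can look like, here is the widely circulated Hejl/Burgess-Dawson approximation, a sketch of the kind of film-response curve grading tools apply. Note it is an assumption on my part that this resembles Blender’s result; it is not Blender’s actual OCIO config.

```python
def hejl_burgess_dawson(x: float) -> float:
    """Approximate filmic tone curve. The output is already gamma-encoded,
    so no extra sRGB conversion should be applied afterwards."""
    x = max(0.0, x - 0.004)  # small toe offset crushes near-blacks cleanly
    return (x * (6.2 * x + 0.5)) / (x * (6.2 * x + 1.7) + 0.06)
```

Compared with a plain gamma curve, this rolls highlights off toward white gradually rather than clipping, which is a large part of the “filmic” look being discussed.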


Eric is pretty much there. By rendering to 32bit you are capturing a practically unlimited range of f-stops. The problem is where black and white are defined, how the image rolls off the brightness as it approaches white, and how it boosts dark shades to make them more visible. Tone mapping or color mapping post effects attempt to move these white and black points around, along with the curves in between, but they’re not amazing at replicating the curves of a camera. For the most control, render to 32bit and then process the image afterwards; Photoshop’s Camera Raw filter is useful for this.

32bit does change how things get rendered, even if you don’t tonemap the image. Certain parts will get brighter and darker. Specifically motion blur in physical render will look better rendered at 32bit compared to 8. Also reflections from hdri sources will be brighter when rendered to 32bit even if they simply get clamped down to 8bit after rendering!
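A small sketch of why this happens: effects like motion blur average many sub-samples, and clamping each sample to 0..1 first (as an 8bit pipeline effectively does) throws away highlight energy, while averaging in float and clamping only at the end does not. The sample values here are hypothetical, purely for illustration.

```python
# Three hypothetical sub-samples along a motion-blur streak, one very bright.
samples = [0.2, 0.2, 5.0]

def clamp(v: float) -> float:
    return min(v, 1.0)

# 8bit-style pipeline: each sample is clamped before averaging.
clamped_first = sum(clamp(s) for s in samples) / len(samples)

# Float pipeline: average in full range, clamp only for final display.
clamped_last = clamp(sum(samples) / len(samples))

print(round(clamped_first, 3), clamped_last)  # the float pipeline keeps the streak bright
```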


Image processing requires working within a limited color space. I don’t think any rendering technology has ‘infinite’ dynamic range. The question becomes not if it is constrained—but how constrained.

A 3D renderer must operate within a defined color model.

As I understand it, “Filmic Blender” is not just tone mapping but a color management system with greater visual latitude.

Obviously there are other constraints: a monitor’s ability to display a range, file formats and bit depths…even the human eye.

Anyway…this is my understanding, which I concede could be incomplete. And what I’d like to investigate is what color model c4d is using and how it is employed.


Most digital stuff renders to RGB, with up to 64 bits per channel (“double” float precision) - giving you 192-bit color effectively. More than any screen or projector I know of can actually display.

What Blender is probably doing with “Filmic” is to apply a Bezier curve or curves to the rendering math that tries to mimic how photochemical film - good old 35mm celluloid - would record the same light. Nothing special actually. You are just offsetting how a rendered photon/ray behaves during render with a curve function, look up table (LUT) or similar.
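The LUT mechanism mentioned here can be sketched in a few lines: a 1D table of output values, sampled with linear interpolation. The toy table below is made up for illustration; real film-response LUTs have hundreds or thousands of entries.

```python
def apply_lut(x: float, lut: list[float]) -> float:
    """Look up x (0..1) in a 1D LUT, interpolating linearly between entries."""
    x = min(max(x, 0.0), 1.0)
    pos = x * (len(lut) - 1)
    i = int(pos)
    if i >= len(lut) - 1:
        return lut[-1]
    frac = pos - i
    return lut[i] * (1.0 - frac) + lut[i + 1] * frac

# A toy 5-entry "filmic-ish" LUT: lifted shadows, rolled-off highlights.
toy_lut = [0.0, 0.35, 0.6, 0.82, 0.95]
print(apply_lut(0.5, toy_lut))  # 0.5 lands exactly on the middle entry: 0.6
```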

That, or the Blender people are rendering to bog ordinary 32 Bit RGB, applying some stock image processing to those RGB values just after each pixel has rendered, and pretending that they have some extra-special “Filmic Rendering” thing going on.

Remember that these are the people who promoted one of the most idiotic UI designs in the 3D world as a “revolutionary, non-blocking UI” just a few years ago.

If you are at 32 or 64 bits, there are hundreds of tricks you can use to get a “Filmic” result.

Basically, if Photoshop or DaVinci Resolve or other color grading software can do it, you can do it very quickly during the rendering of a pixel as well. It’s just ordinary image processing functions under the hood of most of this software.

I mean SweetFX, ReShade and similar mods - also using stock image processing methods that have been in the public domain for a long time - can make 3D games look filmic in realtime. How hard is it to add that to a render engine that crawls along at a few hundred pixels per second?

So it’s a) filmic curves or LUTs applied to virtual photon/ray behavior, b) filmic curves and LUTs applied to pixels immediately post-render, or c) a bit of both.