This can be called a colouring WIP I reckon. As always, the first step is to get the values right. This is simply the most crucial aspect and comes before colour, saturation and texture. If you get the values wrong, it’s going to read badly, simple as that. That’s not to say the values have to be strictly accurate; here I’ve ignored a lot of what the 3D render was ‘telling’ me about the way the light was falling on the scene, and I think the pic works better for it.
A general rule of thumb for pics set in an atmosphere is that the further away an object is, the lighter its darkest dark is. This is one way the brain interprets distance and the hierarchy of the objects it is looking at. In other words, take an object with one side in light (effectively white, say) and the other side in shade: close to the ‘camera’ the shade can be black, but the further away the object is, the more ambient light the atmosphere mixes in, raising the shade’s value. If it’s very far away the lit side would be white, and the shaded side might only be fractionally darker than the sky / atmosphere / surroundings. This is called atmospheric perspective, and it affects colour too. A green tree is green close by, but a wooded hillside a couple of miles away has a lot of sky blue mixed in, making the green appear more like a purple. Again, this makes the pic more believable to the human eye.
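The rule of thumb above can be sketched as a simple linear blend toward the sky colour with distance. This is only a toy model with made-up numbers (real atmospheric scattering is more complicated), but it shows why a distant shadow can never stay black:

```python
def atmospheric_blend(colour, sky, distance, max_distance):
    """Mix an object's colour toward the sky colour as it recedes.

    colour, sky:  (r, g, b) tuples in the range 0.0-1.0
    distance:     how far away the object is
    max_distance: distance at which the object fully merges with the sky
    """
    t = min(distance / max_distance, 1.0)  # 0 = right at the camera, 1 = far off
    return tuple(c * (1 - t) + s * t for c, s in zip(colour, sky))

near_shadow = (0.0, 0.0, 0.0)   # a black shadow on a nearby object
sky_blue    = (0.6, 0.75, 0.9)  # an assumed sky / atmosphere colour

# Close by, the shadow stays black; far off, it lifts toward sky blue,
# so its darkest dark gets lighter the further away it is.
print(atmospheric_blend(near_shadow, sky_blue, 0, 10))  # (0.0, 0.0, 0.0)
print(atmospheric_blend(near_shadow, sky_blue, 8, 10))  # (0.48, 0.6, 0.72)
```

The same blend is also why the distant green hillside drifts toward the sky colour rather than just getting lighter.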
Again, realism isn’t always the idea. It’s only recently clicked with me, but I reckon I can get away with the fact that the tower-like strut on the ambassador ship is darker than it ‘should’ be because it is surrounded by very light areas. I increase the contrast here (making the strut darker) because I imagine the retina shrinking or the camera iris stopping down to cope with the brightness of the surrounding area. This lowers the exposure, bringing the value of the area in question down, and lets me fudge what’s going on so the pic reads better!
The topic of exposure is also something I’m only really beginning to ‘get’. When I was first struggling with digital art a couple of years back I was dismayed at how flat and unrealistic my efforts turned out. I started to consider why. A long time later I began to understand the relationship between dynamic range, exposure and the image.
In a nutshell, consider this. You’re standing in a dimly lit church, looking at a doorway, and outside is a beautiful summer’s day. You have a camera and want to take a pic of the church wall and the door. How to expose the pic? If you set the exposure on the wall so you can see the detail of the stonework, a picture hanging there and so on, you’ll end up with the doorway being nearly flat white. If you set the exposure to the brightness outside, you’ll see a rectangle of a sunlit cemetery and the rest of the pic will be black. You cannot get both ‘ranges’ of value information in one pic.
The eye, like a digital or film camera, can only select a particular range of light values (the exposure, which gets mapped to 0 - 100% of pixel luminosity) from a far greater range present in the scene (the dynamic range). Anything outside this chosen range gets ‘clipped’, pushed to the dark or light end of the exposed range respectively. This is why the buildings above the waterfall are defined in the lightest values possible; there’s very little difference between their lightest lights and their darkest darks.
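That windowing-and-clipping idea can be sketched in a few lines. The luminance figures here are invented for the church example above: everything inside the chosen window is remapped to 0.0 - 1.0 pixel luminosity, and everything outside is clipped to black or white.

```python
def expose(luminance, low, high):
    """Map a scene luminance into the exposed 0.0-1.0 range, clipping outside it."""
    t = (luminance - low) / (high - low)
    return min(max(t, 0.0), 1.0)

# Made-up scene luminances: dim church interior vs sunlit doorway.
wall, painting, doorway = 2.0, 3.0, 80.0

# Exposed for the interior: wall detail survives, doorway clips to flat white.
interior = [expose(v, 0.0, 5.0) for v in (wall, painting, doorway)]
print(interior)  # [0.4, 0.6, 1.0]

# Exposed for outdoors: the doorway reads, the interior crushes to black.
outdoors = [expose(v, 60.0, 100.0) for v in (wall, painting, doorway)]
print(outdoors)  # [0.0, 0.0, 0.5]
```

No single window covers both ends, which is exactly the ‘you cannot get both ranges in one pic’ problem.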
In my pic, then, I have chosen to ‘expose’ in favour of the shaded areas in the central ‘dip’ and the shaded areas of the buildings. This lets me bring out detail there, but means the much lighter areas bleach out. The end result (and I did say I’m still learning!) should be an image that looks more photographic, something you could expect to see in the cinema.
EDIT: Thinking about it, maybe the dynamic range IS the exposed range. Anyone care to clear this up for me?