Color scheme applied to object rendering


I've read through color schemes and theory…

But I'm confused about how to carry that into realistic rendering, in pixel-precise values of sorts.
Like, after deciding on a complementary scheme, how do light, shadow, reflectivity, and transparency fit into the scheme?
Especially with reflections and refraction between colors…
and shadow colors, which are affected by their surroundings.

This is assuming we are using 2D rendering tools, no 3D whatsoever.


If you are after very realistic lighting/values/colors, it helps a lot to use photo references. It’s possible to achieve fairly realistic results working out of your head for certain subjects, if you have studied and analyzed and painted from reality or photo references enough (by enough, I mean for years of hardcore practice doing it), but photo references or reality will always be the most accurate and reliable source to base your judgement on.


It's a bit confusing, since I want to achieve something as realistic as these,
animated and moving. Basically, 2D pretending to be 3D.
And when you say "study," it's hard to even start, since I'm not sure what to learn from the image.


Why would you want to make 2D animation look like 3D? Why not just do it in 3D? What’s the point of making 2D animation look like 3D? And it’s not possible anyway–it’ll take far too long as you’d need to paint every single frame and that’s just not feasible, unless you want to spend many decades on one single animation. There’s a reason why no one’s done it–it’s pretty much impossible, or very impractical.

If you want to learn how to paint realistic 2D, then that’s something different. Because we’re talking just one single image. If you want to do that, then this is what you need to do:

Have you actually tried to paint from reality or photo references before? If you’ve never even done it, then you need to start doing it, and lots of it.

There are specific ways different materials reflect light (plastic, metal, wood, skin, silk, etc.), as well as different types of textures. You need to actually try to paint those different materials convincingly in order to learn how to depict them accurately. So for example, you need to actually do some digital still life paintings of various types of objects in front of you, like these:

When you have painted enough of those, you'll have experience depicting a wide range of different surface materials and forms. Ideally, you'll also vary the lighting on each so you can learn how to depict different lighting conditions too, because the direction and quality of light can drastically alter how things appear.

If you don’t have a space where you can control the lighting and keep it consistent (such as using indoor light only, since outdoor light will constantly change throughout the day, and these still life studies can take many hours or days to complete), using good photo references is a good alternative (emphasis on “good” references with proper exposure, good lighting, good choice of subjects), such as these:

Other than still life, you’ll also want to tackle other subjects like people, animals, landscape, architecture, vehicles, etc. Do painting studies of those and you’ll become more familiar with different types of surface materials, forms, and lighting.

Beyond doing those still life painting studies (study means you actually paint them, not just look at them), you’ll also need to learn about the foundations of visual art related to lighting, values, colors, shapes, forms, etc. There are very specific reasons why lighting works the way it does, or why different levels of specularity appear differently, or how color bleeds in radiosity, how shadows are cast, how to maintain value coherency, how the way you render values alters the way the form reads, etc. I actually teach all this (and so much more) in my online workshop, Becoming a Better Artist:

By combining foundation knowledge with painting studies, you’ll eventually be able to paint the kind of 2D works that have the realism of 3D works.

Now, if you apply that to your question about making 2D animation look like realistic 3D animation, imagine having to paint 24 frames of realistic digital painting for every second of animation. Just one frame will take you several hours to paint. Add up all those seconds for however long your animation will be; it'll take you far too long. And you can't just change your mind during production about certain details like you can in 3D animation. In 3D you can swap out an entire character, change the design, swap out materials, change colors and lighting, alter the animation, alter the camera movement, etc., and all you have to do is redo the renders. If you do it in 2D, you'd have to completely repaint everything that needs to be changed.


Okay, I can see where you're coming from.

I still use 3D too, but now I'm looking for an alternative to compete with footage like this


Especially to compete with renders of things like fur and stone textures.

One thing I noticed while working as an engineer on a PBR renderer was that all objects share the same properties, and that
is the basis of 3D logic anyway. Everything is Blinn or Phong, really.
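For reference, the Blinn and Phong models mentioned here differ only in their specular term. A minimal sketch of both, with the vectors and shininess value invented for illustration (not taken from any particular renderer):

```python
import math

def normalize(v):
    # Scale a 3-vector to unit length.
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    # Dot product of two 3-vectors.
    return sum(x * y for x, y in zip(a, b))

def phong_specular(light, view, normal, shininess):
    # Phong: reflect the light direction about the normal,
    # then compare the reflection with the view direction.
    n_dot_l = dot(normal, light)
    reflected = tuple(2.0 * n_dot_l * n - l for n, l in zip(normal, light))
    return max(dot(reflected, view), 0.0) ** shininess

def blinn_phong_specular(light, view, normal, shininess):
    # Blinn-Phong: use the half-vector between light and view instead.
    half = normalize(tuple(l + v for l, v in zip(light, view)))
    return max(dot(normal, half), 0.0) ** shininess

# Invented example vectors: light up-and-to-the-side, camera straight on.
light = normalize((1.0, 1.0, 1.0))
view = (0.0, 0.0, 1.0)
normal = (0.0, 0.0, 1.0)

p = phong_specular(light, view, normal, 32)
b = blinn_phong_specular(light, view, normal, 32)
print(p, b)  # Blinn-Phong produces the wider, softer highlight here
```

With the same shininess exponent, the half-vector form falls off more gently, which is why the two models need different exponents to match visually.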

What do you think of this drawing? It takes only a few seconds to a few minutes at best,
though it's lacking the fur and textures to inbetween.

Unless it's a 3D game, all 3D renders basically end up as a 2D image anyway. With that logic,
I'm trying to build something natural to drawing, mostly automated, to generate tiny details
with high density: fur, textures, decals, lens flares, and so on.

Taking life references, I only know that every object is roughly separated into highlights,
midtones, and dark tones, and the transitions in between are microscopic.
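That highlight/midtone/dark separation can be sketched as a simple posterize step. The cutoff values below are assumptions placed by eye, not standard numbers:

```python
def tone_band(value, dark_cutoff=0.33, highlight_cutoff=0.75):
    """Classify a 0-1 luminance value into one of three tone bands.

    The cutoffs are illustrative assumptions; a painter would place
    them by eye depending on the lighting of the scene.
    """
    if value < dark_cutoff:
        return "dark"
    if value < highlight_cutoff:
        return "midtone"
    return "highlight"

# A smooth gradient collapses into three readable bands; the
# "inbetween" transitions occupy only narrow slivers of the range.
samples = [i / 10 for i in range(11)]
print([tone_band(v) for v in samples])
```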

I wonder if there is a solution for this yet?

Indeed, the pose and angle can't be changed too much, which is the drawback.
It would be nice if there were a way to make a drawing "pulled" into depth by adding z-coordinates
to the imagery.
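Something like this "pulling" does exist in 2D game rendering as normal-map lighting: paint a grayscale z (height) channel for the drawing, derive per-pixel normals from its gradients, then relight the flat image from any direction. A toy sketch with a made-up 3x3 height bump:

```python
import math

def normals_from_height(height, strength=1.0):
    # Derive per-pixel normals from a 2D height grid using finite
    # differences (clamped at the edges).
    rows, cols = len(height), len(height[0])
    normals = []
    for y in range(rows):
        row = []
        for x in range(cols):
            dx = height[y][min(x + 1, cols - 1)] - height[y][max(x - 1, 0)]
            dy = height[min(y + 1, rows - 1)][x] - height[max(y - 1, 0)][x]
            n = (-dx * strength, -dy * strength, 1.0)
            length = math.sqrt(sum(c * c for c in n))
            row.append(tuple(c / length for c in n))
        normals.append(row)
    return normals

def relight(normals, light):
    # Simple Lambert term per pixel: brightness = max(N . L, 0).
    length = math.sqrt(sum(c * c for c in light))
    l = tuple(c / length for c in light)
    return [[max(sum(nc * lc for nc, lc in zip(n, l)), 0.0) for n in row]
            for row in normals]

# Made-up height "bump" painted onto a 3x3 patch of a drawing.
height = [[0.0, 0.0, 0.0],
          [0.0, 1.0, 0.0],
          [0.0, 0.0, 0.0]]
normals = normals_from_height(height)
lit = relight(normals, (1.0, 0.0, 0.5))  # invented light, coming from the right
print(lit[1][0], lit[1][2])  # left slope falls dark, right slope catches light
```

The light direction can be changed per frame without repainting the drawing, which is exactly the limited "2.5D" relighting being asked about; rotation of the pose is still not possible.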


What you’re talking about sounds more like procedurally generated textures, and that’s already available in 3D. There’s no way to do that in 2D, especially if you also want lighting information applied, because if there’s form, the surface texture itself will have to be generated with different lighting depending on where it sits on the form (at the highlight, mid tone, in the shadow, with ambient bounced light). This is the kind of thing people use 3D for. And as you already mentioned, you wouldn’t be able to do any kind of rotation, and with such limited application, what’s the point?

And images aren’t just a bunch of textures. There’s shape, form, lighting, color, perspective, anatomy, body language, facial expressions, etc., so even if you can procedurally generate textures, you can’t generate all those other aspects of the image–someone still has to actually draw and paint all that, not to mention have the artistic knowledge and skill to do so.

There’s also the alternative of using photobashing, but that still requires the same set of knowledge and skills. Someone who can’t draw or paint at a high level won’t be able to photobash convincingly because all the photo elements will end up mismatched in lighting, perspective, and other issues (I’ve seen this too often from people who try to do “matte paintings” but have no actual drawing and painting skills or basic visual foundation knowledge).


Yeah, I guess there isn't yet.

The point is to make it easier for the 2D artists in my department to produce something in
3D with their 2D skill set.
But I guess there isn't a way.

The closest to what we're looking for is probably Quill in VR, but the process
is RAM-expensive, and it requires VR, which isn't very natural.

I guess that's it then… thank you for your assistance :smiley:


Wow, this footage is really stunning! I never managed to make mine look that realistic.


The OP’s friend emailed me asking about the same thing, and I think his questions and my answers will help illuminate this subject, so I’ll post them here below, for those who are interested in this topic.

OP’s friend’s questions:

The hardest part is that 3D work does not seem to be automatically equipped with PBR capability. Even the artists on our team can make more convincing artwork than our 3D software, especially Maya and 3ds Max.

These programs do a great job with rigging and inbetweening, but it's very hard to get good rendering out of them.

Animation works in vectors.

Rotation in 3D space is basically 2D vectors on screen moving around.

That one is not a big problem.

The hardest part now seems to be building good colored artwork.
Things like glass, water, and crystal are the hardest. But even with these subjects, the light that goes through them works linearly, from 0 to 1, then in xyz.
If we can build it in 2D with speed, 3D follows. And that is how concept art functions in the pipeline, right?

My answers:

That logic is flawed because 2D artists who work in realistic styles rely heavily on photo references, and often also use photobashing, which means they are, in fact, still trying to emulate realistic 3-dimensional space. So basically, instead of 3D software doing all the calculations that result in realistic rendering of lighting and form and surface materials, the 2D artists are doing it in their heads and then painting the result. They are like human 3D renderers, and still have to make all the assessments about the location of the light sources in relation to the surfaces being rendered, the turning of the forms, and the surface property characteristics. When they work from photo references, the photos themselves depict realistic 3-dimensional spaces. The camera only captures the image in 2D, but it’s “mother nature” that did all the calculations that provided the “rendering,” while the camera only “printed out” the results. When they photobash, they are taking the textures and lighting information from photos, and again, those come from realistic 3-dimensional spaces. When they paint from scratch, they are still referring to the 3-dimensional reality that they have studied and analyzed and are now trying to replicate. Everything is based on realistic 3-dimensional spaces and how real-life physics works in our reality.

The only way you can create software that does the same thing 2D artists do is to also have it make decisions thinking in 3-dimensional space; otherwise, it’s impossible to do the calculations required. This is because 2D artists who work in realistic styles are thinking in realistic 3-dimensional spaces. They think about light source location, radiosity/color bleed, ambient light, specularity, texture roughness, turning of form, local colors, ambient occlusion, subsurface scattering, and natural color shifts. Those are all the same things that 3D rendering has to consider, except 2D artists do it in their heads while using photo references or photobashing, or observations of reality.
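That mental checklist maps closely onto the terms a renderer sums per pixel. A rough sketch, with all weights invented for illustration:

```python
def shade(ambient, diffuse, specular, bounce, occlusion):
    """Combine the terms a painter juggles mentally into one pixel value.

    All inputs are 0-1 scalars; occlusion darkens the ambient/bounce
    contribution the way crevices receive less sky and bounced light.
    The grouping and weights are invented for illustration only.
    """
    indirect = (ambient + bounce) * occlusion
    direct = diffuse + specular
    return min(indirect + direct, 1.0)

# A pixel deep in a crevice on the shadow side stays dark:
print(shade(ambient=0.2, diffuse=0.0, specular=0.0, bounce=0.15, occlusion=0.3))
# A pixel at the highlight clips to full brightness:
print(shade(ambient=0.2, diffuse=0.6, specular=0.5, bounce=0.0, occlusion=1.0))
```

Whether the sum is computed by a GPU or estimated by eye, the same terms have to be accounted for, which is the point being made above.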

And no, that’s not the function of concept art. Concept art is about visual design. It’s about determining how everything should be designed in a game or movie or TV show. The monsters, the locations, the characters, the weapons, the vehicles, the props, etc. It’s not about trying to paint realistically, because what if you need concept art for a stylized cartoon? That doesn’t require realism but you still have to do all the concept art to design everything that can be seen in the movie/game.

The way you’re thinking is very misguided, because you don’t understand how the basic foundation of visual art works, or how 2D artists think and work. You are thinking like a programmer and not an artist, and unless you gain more experience as an artist and actually try to paint realistically from photo references, use photobashing, and paint from your head by observing reality (and not just dabble in it, but actually do many painting studies in realistic styles), you’re not going to understand why your mentality is so misguided.