Meet Max Liani from Animal Logic and his renderer, Glimpse.


With an incredible $330 million taken so far at the box office (and a 96% positive rating on Rotten Tomatoes), Warner’s The Lego Movie is the CG flick of the moment. But we all know this kind of monumental success isn’t possible without hours and hours of work by experienced, talented and hard-working artists. This week in MTA we have an exclusive treat - an interview and a chance to talk with Animal Logic’s Max Liani. Max is one of the key reasons all those tiny bricks look so real. Why? He wrote a renderer especially to get them that way. From scratch. Not bad, huh? To find out more about Max Liani, his journey and his renderer Glimpse, read on. Remember, you can ask Max questions in this thread, or you can log into our live webinar session with Max at 6pm PST (LA time) on Tues 25th March. Registration essential - click here to register. (Here is a timezone converter, so you can work out what time that is for you.)

Q: Thanks for joining us Max! Can you tell us how you got your break as an artist, and what inspired you to join the industry?

A: I’ve always loved movies. But when I was a kid I never imagined I would make my living from them. Of course I was inspired by the spectacular imagery of Jurassic Park and Toy Story, but that was so far from where I was. I grew up in Italy, where the visual effects industry is not much to speak of.
My passion developed in the early ’90s when I was 16, taking photos and creating some rather unlikely compositions of cubes and spheres with my shiny new 386 PC.

Q: How did you learn? It must have been difficult to find material to learn from - much harder than it is today.

A: It was hard back then to find information outside a good university. Books were difficult to find and very expensive. None of us had an internet connection at home and there were no tutorials, no schools to teach me. I studied on my own and met a good friend who shared my passion; we refined our skills by collaborating on projects. I went to my very first real job interview holding a couple of 3.5” floppy disks in my hand… that was all I had… and I got the job! I spent most of my time on those Silicon Graphics workstations I couldn’t possibly buy. I studied software engineering to implement solutions to the technical problems I faced. I started teaching students.
Jumping between small companies, 10 years went by like that.

Q: You’re in Australia working at Animal Logic at the moment. How did you get that break after 10 years in Italy?

A: Despite all the hard work, it had been difficult to get into the big game. I needed a decent reel, good skills, some connections and some lucky timing. A friend I knew from one of my classes told me that Animal Logic was looking for people for the upcoming Happy Feet, back in 2005. My reel was weak, but he pushed from the inside to get me hired. He knew I was good; I only needed a chance to prove myself. I took a plane to Australia on a Saturday, with my beloved wife, and started working on Monday morning, jet-lagged like hell after a 24-hour flight.

For the first time in a long while I was surrounded by people with great experience. I felt like a dry sponge, absorbing everything I could. In 6 months I went from mid lighter to senior lighter to key lighter. I was trusted with some of the strangest and most visually complex sequences and hero shots in the movie. When production ended, most artists were let go; I was one of the few lighters chosen to stay.

Q: Why do you think that was?

A: The fact that for most of my life I had to find solutions by myself certainly taught me how to connect the dots. To me, knowing how things get done is not enough; I want to know how things really work, from a theoretical standpoint, I mean. Knowing what is inside the box allows me to think outside of it.
In most studios, it doesn’t matter how good you are when you get in. Far more important is the pace at which you improve once you are there. Now that I’m on the other side of the desk, running the interviews, I value an artist who comes in as a junior and in 6 months performs as a “mid” more than an artist who starts as a “high mid” but doesn’t improve much from there. Does that make sense?

Q: How has your role developed since you’ve been at Animal Logic?

A: After a couple of years at Animal Logic I started to innovate. I designed from first principles three generations of our shading system, used in every production since 2007. I grew from senior lighter on “Happy Feet”, to lead on “Legend of the Guardians: The Owls of Ga’Hoole”, to supervisor on “Walking with Dinosaurs”. It has been a constant learning path. When the lesson was not artistic or technical, it was about management, diplomacy and trust. In general, when you want to make changes you make friends and supporters, but some people might still be unhappy. You might be the best artist or engineer in the world, but you have to learn a great deal about diplomacy and respect if you want to succeed in a team.

Lighting work on “Walking with Dinosaurs”.

Q: Ok, so let’s talk about The Lego Movie. You stepped out of your regular role for this movie to create something new and very necessary. Can you tell us about this?

A: During production of The Lego Movie I stepped aside from my supervision role to fill a much more needed position. We already had a great team in Lighting Supervisor Craig Welsh and Art Director Grant Freckelton. What the production needed were technical solutions from outside the box.

Our facility was using Pixar’s RenderMan. It’s a good engine. Unfortunately, what we had to render for The Lego Movie was a toxic combination of everything that makes that engine suffer. I’m talking about raytracing without radiosity caching (specular and glossy reflections and refractions), subsurface scattering and dense geometry: all of it, everywhere.

Think about “plastic”. Most of us will say, “How hard can it be?” But the fact is everybody has an imprinted idea from childhood of what that Lego plastic looks like! There is nowhere to hide. Traditional CG lighting tricks won’t work. Some approximate global illumination won’t cut it. The last thing we wanted was for this movie to look like… well, “CG”.

Q: So what led you down the path of writing your own renderer? Surely it can’t have been your first solution?

A: We discussed many possible solutions, most of which would have pushed us out of time or budget, or had too little margin for success. I’m sure there are other renderers out there that would have coped with the specific mix of rendering challenges we were facing, but the truth is you do not switch render engines during a production. It’s like “the first rule” in Fight Club. You could do that in a small studio, for a small production where artists work almost entirely with off-the-shelf tools. In large studios there are millions of dollars invested in proprietary tools, pipeline and automation. It takes years to adopt a technology as pervasive as a new render engine, unless, of course, you write your own renderer to be compatible with existing assets and processes. But even that takes years. In the end it comes down to an intuition of what you feel you are going to need in a few years’ time, so that you are ready when the time comes. This takes us back to the end of 2010.

Q: It began before “Lego”?

A: When I was assigned lighting supervision of Walking with Dinosaurs 3D, I spent one year in research and development before hiring my crew. I was pushing to renew our shading system to better serve both the “Walking with Dinosaurs” production and the upcoming “The Great Gatsby”. So for one year my effort went first-hand into researching, designing and developing our physically based shading and lighting system, “PHX” (before PRMan had any). But I wanted to give lighters some edge. I wanted to push the boundaries of the quality we could deliver within the budget we had. In my own time I began developing an experimental interactive path tracer under the name “Glimpse”.

Lighting work on “Gatsby”

Q: Why the name “Glimpse”?

A: Artists can hardly be creative if they don’t see what they are doing, if they are unable to explore the creative process. For a lighter, Glimpse is the rationalisation of that process. It is the compelling need to work interactively; to “have a glimpse” of what the lighting looks like.

Q: Can you explain how you went about achieving this lighting glimpse within your renderer?

A: You can argue there are many ways to achieve that. I believed stochastic path tracing was the solution. My crew, for the first time, was able to work interactively. I could do my rounds, ask them to change something, give them my approval on the spot and tell them to render overnight.

Q: How is that different from relighting systems like Lpics or “Lightspeed”, or even modern OpenGL rendering?

A: Stochastic path tracing is fundamentally the very same algorithm that produces the final high-quality result. It allows very fast interaction without any setup or pre-computation. You press “render” and in 1 second you are there. It won’t do 60 frames per second, but if you sculpt a piece of geometry to “paint with light and shadows”, or scrub the timeline to get to another pose of your characters, it’s accurate and instantaneous.
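That "press render and you are there" behaviour falls out of the progressive nature of Monte Carlo estimation: every extra sample refines the same running mean, with no precomputation step. A minimal sketch of the idea (the uniform `radiance_sample` here is a hypothetical stand-in for a real path-traced pixel estimate, not anything from Glimpse):

```python
import random

def radiance_sample():
    # Hypothetical stand-in for one noisy path-traced estimate of a
    # pixel; its expected value is the true radiance (here 0.5).
    return random.random()

def progressive_render(n_passes):
    # Incremental running mean: after 1 pass you already have a usable
    # (noisy) image; every further pass only reduces the noise.
    mean = 0.0
    for i in range(1, n_passes + 1):
        mean += (radiance_sample() - mean) / i
    return mean
```

The same loop serves both the 1-second preview (a few passes) and the final frame (thousands of passes), which is why no separate relighting setup is needed.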

Q: But that was just to preview lighting. How did you make use of it for final rendering?

A: That’s right. If I’d gone for some OpenGL solution I would have been stuck.
For Lego we just had to push the boundaries once more. I needed to turn my “experimental” engine into a solid production renderer, and we had less than a year to do so, while people needed to use it all along from day one. I’m talking about everything we take for granted: a robust API, scene file format, plugins, programmable shaders, procedural geometry, per-light channels, extra output channels, statistics and error logging, unit tests, stereo rendering, depth of field, and the list goes on and on. Plus, we were mainly two engineers. We had some help from other R&D engineers with pipelining the tool and with some other math-heavy challenges, but for most of the ride it was me and the talented Luke Emrose. We had to write it, and we also had to integrate it inside PRMan, so that we could migrate our rendering one feature at a time while supporting the production crew. At first it was just shadow casting, quickly followed by global illumination, then subsurface scattering. As we switched more over to Glimpse, the quality of our imagery improved substantially and the lighting setup got simpler and quicker, while rendering got faster and faster.

Q: So now you have a complete renderer?

A: Can I answer with a “yes and no”? Towards the end of the movie, I believe 2 or 3 weeks before the end, we had our renderer! We achieved more than we planned for. Unfortunately not all lighters switched over, because it was a bit late in the game, and some kept using the hybrid approach. But some brave fellows did, and we had a small bundle of shots with background/midground and even close-up elements and characters rendered directly in Glimpse.
To better answer your question, we have a great renderer for hard surfaces. It doesn’t handle volumetrics, hair or particles yet. So we are still using PRMan plus Glimpse for many of our daily challenges.

Please make Max Liani welcome in our thread - ask him any questions you like! And don’t forget to sign up for the live webinar here.


I guess I’m going to break the ice :slight_smile:

I want to start with a thanks to Kirsty Parkin for running the interview and to Raffaele Fragapane for suggesting this happen.

Many thanks to Eric Veach for his inspirational work on the path-integral formulation of light transport. His dissertation opened my mind.

Thanks to Matt Pharr and Greg Humphreys for their amazing PBRT book. It has been my bible for years now. Thanks Matt for the honor of sharing lunch with me in Anaheim.

Thanks to Carsten Benthin, Ingo Wald and Sven Woop for teaching me how to write better and faster code.

Thanks to Wenzel Jakob, whom I have never had the honor of meeting, for his inspirational work on Mitsuba.

Thanks to Jacopo Pantaleoni for showing me, many years ago, that implementing a render engine is lots of fun and that it can be done.

Thanks to Juan Cañada for telling me, when I needed it most, how much potential there was in my early interactive rendering work.


As a Glimpse end user, I can’t emphasise enough how fast it is. That’s one of the biggest things I miss moving back to commercial ray tracers. Glimpse actually seemed faster than the Maya viewport.

Onya Max!


Hehehe, cheers Nick. Truth must be told: the Maya viewport is very slow when there are many objects. That is also why Autodesk is pushing Viewport 2.0 as a replacement (such a weird name).
But it’s true. Geometry in the Lego scenes was so dense we couldn’t use wireframe in the viewport, only bounding boxes. So lighters were using Glimpse with the “ambient occlusion” integrator to navigate the set and see what it looked like.

There was this test scene I used… it was about 2 million objects, each object 5 million polygons. The bounding boxes in the viewport were taking about 30 seconds to redraw. The only way to move the camera or the light was to use “set view selected” and see nothing but the lights and the ground plane. In Glimpse it rendered at about 2-3 frames per second (noisy, but not pixellated) with indirect illumination :slight_smile:

I’m going to show some of the test images in the webinar.



It might be good to mention how much memory that scene used too. The figures I remember seeing were shockingly low!



Good point. I don’t remember the memory stats of that particular test. It was quite some time ago, and lots of things have changed since then.
I’ll re-run the test when I go back to the office on Monday. I’ll have some stats ready for the webinar.


Hello Max!

Thank you for taking the time to do this session. I’m sure a lot of people share the same sentiment, but I have to say that the Lego Movie is simply amazing! Truly incredible work.

I was wondering whether you could talk about the architecture of Glimpse a little bit. How can Glimpse handle so much data (dense models, textures, SSS, etc.) and not slow down significantly? Most rendering software seems to have some type of caching period, so an instantaneous preview sounds quite amazing. Was the GPU used at all to accelerate the process?

Thanks again for your time, and looking forward to the webinar!


Thank you. There were some remarkable people working on the project. For me it was lots of pressure and responsibility, but it was worth it.

I hope you understand I can’t reveal too much about Glimpse internals without breaking confidentiality. I must respect that.

Glimpse does indeed slow down. I can get even 30fps on a simple scene at VGA resolution, but as the scene gets intricate that performance might drop to 1 or 1/2 fps. Speed is not directly proportional to primitive/object count; it is roughly O(log N), meaning that as you add more and more, performance tends to stabilise at some level instead of degrading steadily.
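That O(log N) behaviour is what a hierarchical acceleration structure such as a BVH gives you: per-ray traversal cost grows with tree depth, not with raw object count. A toy illustration (idealised balanced-tree depth only; a real BVH traversal visits more nodes than this):

```python
import math

def bvh_depth(n_objects):
    # Depth of an idealised balanced bounding-volume hierarchy:
    # each level halves the set of objects a ray must consider.
    return math.ceil(math.log2(max(n_objects, 1)))

# Going from 2 million to 2 billion objects (1000x more data) adds
# only about 10 levels of traversal, which is why interactive
# performance "stabilizes" instead of degrading linearly.
```

So a ray through a 2-billion-object scene does roughly 31 levels of box tests instead of 21, not a thousand times more work.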

2 million objects is little data for the system. I have rendered up to 30 million objects on my laptop (16GB of memory). I never had the time to do a full load test.
Half a billion non-instanced polygons were not even close to filling our 64GB blades (I guess double that would have come close). With instancing it becomes almost nonsense to measure. The scene I was referring to in the previous post was 10 trillion triangles (25 million raw triangles instanced many times over) and I was far, far away from being worried. I believe I roughly estimated I could render 1 quadrillion (is that even a word? 1,000 trillion…) before filling up the memory with instance transforms. But this is just some LOL math rather than something that would be required in production (for now…)
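That LOL math can be reproduced on a napkin. Assuming each instance costs only a transform plus a reference to the shared master geometry (the 48-byte and 16-byte figures below are illustrative guesses, not Glimpse's actual layout), a 64GB blade holds on the order of a billion instances, and the visible triangle count is instances times master-mesh size:

```python
# Illustrative per-instance cost: a 4x3 affine transform of 32-bit
# floats (48 bytes) plus a pointer/index to the shared mesh (16 bytes).
BYTES_PER_INSTANCE = 48 + 16
RAM_BYTES = 64 * 2**30                   # a 64GB render blade

max_instances = RAM_BYTES // BYTES_PER_INSTANCE   # ~1 billion instances

# With a 25M-triangle master mesh, the *visible* triangle count is:
visible_triangles = max_instances * 25_000_000    # tens of quadrillions
```

Under these assumptions the memory for the instanced triangles themselves is paid once, which is why the quadrillion-scale estimate is plausible at all.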

When I say “instantaneous” I really mean that the user won’t feel bothered by the wait. A few seconds of startup feels pretty good when we were used to waiting 5 minutes just for the frame translation to finish and the rendering to start. Glimpse updates only what really changed, so after the initial “within seconds” startup time, everything else is generally processed in a matter of milliseconds.

No GPU computing yet; rather, very well threaded and vectorized execution of most components with minimal to no thread locking. I began studying CUDA and OpenCL only very recently. It’s going to be another interesting learning path.


Now this is the type of “Meet the Artist” I’d really like to see more often! I wish I noticed it sooner. Thanks a lot to CGS and you Max for taking the time and providing this opportunity to share the knowledge with the community.

In fact, I have quite a lot of questions but I’m going to try and keep it way brief. (:

I’m wondering what areas of rendering you see that could be improved/optimized in a way that would not get too technical, while retaining some artistic quality that would keep an artist motivated and encouraged throughout the development process. I’m basically asking because I’m about to start working on my thesis and am trying to find an (almost) definite route for my research that won’t leave me lost and stuck later, so your input would indeed have an impact on my decision about which direction to follow.

Another question out of interest: do you see fuzzy logic having any potential role in shading or light transport? I believe it has had some effect in depth perception though.

Last but not least, I’m curious what Jacopo Pantaleoni showed you that made implementing a render engine fun for you? :slight_smile:

I think that’s it for now and thanks a lot once again.


It’s a good question, and to me it seems to come from an academic perspective. You see, that is not the way my brain works. I see a problem, I think about what I could do about it, I try to fix it. To me it is much harder to think about what I could do when I don’t yet have a problem in front of me.

That said, definitely follow your passion first!! It’s going to show through your work if you are passionate about what you do/discover.

When talking about rendering engineering, perhaps the most “artistic” part of it is “content creation” and shading. When I was showing my wife my achievements in light transport, or in performance engineering, the typical conversation was:
me: “hey, look at this!!!”
her: “it looks the same as last month…”
randomly chosen from:
me: “no, there is less noise now!!”
me: “yes but it’s faster…”
me: “yes but it uses half the memory”
me: “yes but I rewrote the whole subsystem to be more […] and now it eventually works again…”
her: “ok…? Cool…? I guess…”

Speaking of shading, there has been a lot more research into how to efficiently simulate light transport, or cast rays fast, and less into how to efficiently evaluate complex materials.

Well, fuzzy logic is a very generic construct. It’s like saying “a hammer” or “a wrench”. I can see complex logic being of great interest in content synthesis, where you compute something complex once, then store its result. You could go much further, to neural networks and genetic algorithms. But that is not the sort of logic you want to carry through to shading, especially if you’re planning to make use of it in stochastic sampling. In that case it is far more efficient and scalable to have minimal to no logic in your algorithm, for 2 main reasons:
1. it’s going to be evaluated billions of times in a frame.
2. it would most likely produce high code divergence (and therefore poor performance) when executed on massively parallel architectures (unless your algorithm does just data modulation).
Does that answer your question?

2 words: his passion. It goes back 15 years, to when we got to know each other. I was a 3d artist trying my best to make compelling imagery. He was a uni student implementing his own renderer, “Lightflow”, which at the time was capable of effects no commercial renderer could match. I remember thinking: “One day I’m going to make something like that”. After that I bought a couple of absurdly expensive rendering books filled with “hieroglyphs” and told myself, “I’m going to be happy the day I can read this stuff”.


First off …awesome show and awesome work!

It’s a very bright movie with lots and lots of lights. How did you handle importance sampling the lights and/or adaptively reduce shadow rays ?

Totally understandable if this is a secret sauce and not willing to answer!

Once again, you have a very awesome renderer!


Thank you for all the “awesome”.

The technique we used is so simple I don’t consider it part of the “secret sauce”.

Yes, we had lots of lights. Lights were embedded into the assets. If there was a model with 20 bright bricks, it came with 20 area lights. Glimpse has only area lights (and infinite ones of course, like dome and distant).

Basically, we estimate the irradiance contribution of one light at a time and apply termination or Russian roulette if it falls under some user-defined threshold. The estimate is much quicker to compute than the light itself, but it is close enough to apply the heuristic.
Russian roulette will give you variance, but your estimator will still be unbiased. On the other side, termination is biased (it dims small lights’ contributions even further), but it carries no additional variance, so this is the technique we used the most (we didn’t want tiny speculars to flicker).
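The trade-off between the two strategies can be sketched in a few lines. Here `cheap_estimate` stands in for the quick irradiance estimate and `threshold` is a made-up user parameter; the point is that roulette reweights by 1/p to keep the expectation unchanged, while termination simply drops the light:

```python
import random

def light_value(cheap_estimate, threshold, mode, rng=random.random):
    # Above the threshold: always pay for the full light evaluation
    # (represented here by just returning the estimate itself).
    if cheap_estimate >= threshold:
        return cheap_estimate
    if mode == "terminate":
        # Biased: dim lights are dropped outright, but no extra variance.
        return 0.0
    # Russian roulette: evaluate with probability p and reweight by 1/p,
    # so the expected value is unchanged (unbiased, but noisier).
    p = cheap_estimate / threshold
    return cheap_estimate / p if rng() < p else 0.0
```

Averaging many roulette evaluations of a light with estimate 0.1 under threshold 1.0 converges to 0.1, while termination always returns 0.0 for it: exactly the bias/variance trade described above.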

In general we got a 2-5x performance improvement in scenes with hundreds or thousands of lights. Still not ideal but that is what I could implement with the time I had. Certainly something I’ll have to revisit in the future.


It would be great to get your opinion on Open Shading Language and SeExpr, both of which seem to be gaining popularity (for pattern generation at least). Or do you think good C++ is the future of shading? Having used Arnold for a bit now, my opinions are starting to change a bit :wink:

It might be good to discuss some of the broader similarities / differences with other renderers - particularly Arnold.

Also is this webinar going to be recorded? I am not planning on being awake at 1am! (not that I am the target audience anyway).



Hi Simon.

C++ is not a language that offers native constructs for parallel programming. You can get a lot of mileage with a good scheduler and by wrapping intrinsics inside a well-inlined math library, but even with that, implementing otherwise simple algorithms with simple conditional logic and simple data structures gains an extra complexity that becomes too much for a regular artist/shader-writer. You know what I’m talking about if you have ever tried to vectorize some non-trivial code.

Modern compilation technology allows higher-level languages to offer that level of abstraction along with efficient computation. In light of this, OSL comes up a bit short.
Sure, it is faster than a traditional C++ implementation of a shading network, but it is still far from the theoretical throughput of the hardware it runs on. I guess that was driven by which component of the renderer is the slowest: if the renderer spends 80% of its time in the ray casting core, then optimizing the shading language further gives you little benefit.
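That 80/20 intuition is Amdahl's law. A quick sketch of the arithmetic, using the hypothetical split from the answer above:

```python
def overall_speedup(fraction, component_speedup):
    # Amdahl's law: overall speedup when a component taking `fraction`
    # of total frame time is made `component_speedup` times faster.
    return 1.0 / ((1.0 - fraction) + fraction / component_speedup)

# If shading is only 20% of a frame, even an infinitely fast shading
# language caps the whole-frame win at 1 / 0.8 = 1.25x.
```

A 2x faster shading language on that split yields only about a 1.11x frame speedup, which is why renderer authors optimise the ray casting core first.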

That said, the shading language and the rest of the rendering architecture must compute data in a way that is compatible with each other. I.e. if you want vectorized execution of your shaders, you must vectorize your renderer too, and often that brings you to a completely different rendering architecture (no, I’m not talking about REYES).

For the better or the worse, and with respect to which tool?

Glimpse and Arnold at a glance?
They are both path tracers, they both use instancing as a first-class primitive, and they both choose not to cache computation (apart from texture tiles, which aren’t really “computation” anyway, but I thought I’d make that clear) to allow for better scalability. They both compute beautiful motion blur.

Arnold is a production renderer that can do interactive preview. Glimpse is an interactive renderer that can compute final frames.
It might seem the same, but there is a substantial difference.

From a “physics” perspective, Glimpse is stricter than Arnold. In Arnold you can still have point lights, or non-Fresnel-driven reflections, and little things like that. You can see that as a positive or a negative.
Glimpse is more open through its plugin system; the main frame renderer itself is a plugin of the system.
Arnold handles a wider variety of primitives, e.g. volumetrics and hair. We are going to support those in the future…
Glimpse is much simpler to use than Arnold (believe it or not). In part because we don’t handle all the features that Arnold does, but mostly because of a different design (part of the secret sauce) that makes lots of controls redundant.
I don’t want to go into the specifics of each individual feature. In the end, I don’t have a product to sell.
I guess more specific questions would work better in this format.


Thanks for answering my question, and I understand that you can’t mention too much. Glimpse still sounds instantaneous, especially with such incredibly short translation times. It seems like it would be quite disruptive to all the technology that’s currently out there, if you were to release it as a product :slight_smile: .

As a side question, I was wondering if you could talk about the average render times per frame. When you began to build Glimpse, was the main goal to accelerate both preview and final frame rendering? It seems that with the ever increasing complexity and advancement in technology, render times have been stable at about 30 hours per frame.


That’s the idea, yes. If I render for 60 minutes I can afford to wait 2 minutes for frame translation. But if a meaningful render (preview quality) takes 1 second and translation still takes 2 minutes, then it is pointless.

The problem with data translation is that it is often a serial process. If you can parallelize it, you get a big step forward.

We ran into this problem during production. Glimpse was extracting data from Maya very fast. At some point we added support for procedurals in Glimpse; we wanted to bypass Maya and read models directly from our cache files. We quickly got confirmation of how slow those were. They were old designs, not supporting multiple threads and not scaling well. This sparked a whole lot of work in most parts of the pipeline. Surely R&D hates me by now :wink:

I hope not! I hope it would be stimulating! I’ll give you a fact. I was testing a commercial renderer (won’t say which, but one of the majors). Using the very same mid-size scene, Glimpse was taking 2 seconds to get the data out of Maya; the other product’s translator alone was spending 2 minutes. I reported the problem back to the developers, and at first I was told Maya’s slow API was to blame. Then I said how quickly I could get the same data out. They spent several days profiling and working hard, and got their translator down to 6 seconds I think, to the enjoyment of their customers.
Sometimes you need an example of how fast something could be to motivate you to push further.

Glimpse’s average render time for large final-frame LEGO shots was around 10-30 minutes (a lot longer when in the hybrid mode with PRMan). Rendering hardware was 16-core Sandy Bridge (32 virtual cores).

My original goal was educational; I wanted to understand more. Then I saw the potential and wanted to have a sophisticated interactive preview renderer. But I was also gazing at making a full render engine, so when I saw the opportunities lining up over the years, I worked very hard to get there.

The reason render times have stayed constant over the years is simply that that is what production deems “acceptable”. If it renders faster, more will get pushed inside each single shot, or fewer optimizations will be done, because optimizing is time consuming (from both a content and a technology point of view).


Hey Max,

It seems there are a lot of renderers competing now and there is no magic bullet. Aside from Glimpse, what most excites you about current rendering solutions/tech?

Where do you see rendering software/tech/raytracing going commercially in the next 10-20 years?


Magic bullet = monopoly, and monopoly ~ evil. I’m glad there is no such thing as a magic bullet.

Consider that there is a very wide range of users, from the preset drag&drop type to the script-everything type. Each one dislikes the workflow of the other. It is very unlikely one product would serve everyone well.

I’m very satisfied to see the movie industry recognising raytracing as the most appropriate solution these days. Up to 3-4 years ago you could get in trouble just by pronouncing that word in some places…
I’m very excited about how GPUs are evolving from realtime graphics to supercomputing. Massive parallel architectures are changing how we think and write mainstream software.

Ouch, this is a really tough question. I’m certainly going to disappoint with my answer, either by being too conservative or by being proven wrong by time. 10-20 years ago nobody knew we would be walking around with supercomputers in our pockets. Now we have smartphones that run full HD 3d graphics.

I can see CPUs and GPUs merging into a single heterogeneous architecture. That would define new standards for computing and establish new rendering architectures that could take advantage of it.

Some say point rendering will gain momentum, some say voxels… I’m not weighing in on that discussion. There are some other exotic forms of rendering, like pencil tracing and cone tracing. You never know if they are going to become popular at some point… see what happened with subdivision surfaces, which were deemed exotic for nearly two decades after the late ’70s before becoming the most common form of modeling.

I will state the obvious in saying rendering will get faster and simpler to operate. In the last decade I saw a proliferation of lighting/rendering TDs, some with little to no artistic talent, building their careers on a technical understanding of overly complicated products and architectures, while some fine artists were left to struggle. If rendering gets simple, those TDs will be of little help in a production, which will favor people who can create beautiful works of art instead. If the reader identifies himself/herself in that, I would strongly advise rethinking their area of interest:

1. become more of a pipeline TD.
2. learn a great deal about art and cinematography.
3. toughen up further and become a render engineer.


For the better, with respect to coding for Arnold. The “everything is a node” concept is great, and the C++ you need to know generally isn’t much more than for RSL. But when you need them, you have the tools and libraries of C++ in a much more accessible form (rather than going through DSOs).

Being able to run C++ debugger and profiling tools on shader code is definitely a great step forward.



In general I agree with that.

But let’s consider how a traditional C++ shading system will scale. Let’s take a 16-core Sandy Bridge with 8-lane vector instructions. It is, in a way, a 128-way parallel computer. The shading API generally uses only 16 of those 128 ways. The math library can be written to compute color and vector operations using SSE instructions (4 lanes), but a large chunk of the code is still scalar (int, float, conditional logic). So that API will use between 1/4 and 1/8 of the theoretical throughput.

Last year Intel released the Xeon Phi, which has 16-lane vector instructions, and it won’t be long before those instructions become part of mainstream CPUs. At that point the shader is using 1/8 to 1/16 of the computing power. OK, my calculations are a bit wonky, but you can see where I am going.
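The lane arithmetic can be made concrete. These numbers just restate the example above (16 cores, 8-wide AVX, 4-wide SSE):

```python
CORES = 16
AVX_LANES = 8                       # 8-wide single-precision AVX
TOTAL_WAYS = CORES * AVX_LANES      # the "128-way parallel computer"

# Scalar code threaded across all cores uses one lane per core:
scalar_utilization = CORES / TOTAL_WAYS             # 1/8
# Color/vector math hand-written with 4-lane SSE:
sse_utilization = (CORES * 4) / TOTAL_WAYS          # 1/2

# A typical shading API mixes SSE math with lots of scalar logic, so
# it lands between these bounds, hence the 1/4-to-1/8 estimate; a
# 16-lane machine halves every one of these fractions again.
```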

Vectorized code that takes full advantage of the computing power is not simple to write in C++. So when you ask me whether C++ is the future of shading, I’d say no: it’s the present.

What is your opinion about this?