Old 09-19-2013, 03:44 AM   #1
MisterS
seeking knowledge
 
David Spittle
Civil Design Drafter
Brisbane, Australia
 
Join Date: Jan 2008
Posts: 813
Deep from point position pass?

Is this possible?

I've been playing with Vray3.0's deep channel and it's pretty sweet how easy it is to comp stuff together.

Problem is that a few of my plugins are not ready for Vray3, and 2.4 won't output the deep channel.

So I'm trying to establish if/how this can be done without a deep channel. If it's not possible then no problem, I'll create the appropriate mattes. If it is, then I'd like to learn how.

So far I can get the point cloud from my position pass.

How would I position, for example, a sphere plugged into a ScanlineRender node between two objects in my render? Can it be an automatic process of just placing objects correctly in 3D space, using the point cloud as a guide, or is some user intervention required?
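For reference, a minimal Nuke Python sketch of the kind of setup being asked about (node classes, knob names and input indices are assumptions from memory - check the input labels on your own nodes). The point cloud only acts as a visual guide for placing the geometry by hand; without deep data the final combine falls back to a depth-based ZMerge, with all the edge problems that implies:

```python
# Sketch only - assumes a beauty render carrying a depth channel and a world position layer.
import nuke

beauty = nuke.nodes.Read(file='/path/to/beauty_with_depth_and_P.exr')   # placeholder path

# Point cloud from the position pass, used purely as a placement guide in the 3D viewer.
points = nuke.nodes.PositionToPoints(inputs=[beauty])   # select the position layer on the node

# Geometry to slot between the rendered objects - translate is set by eye against the point cloud.
sphere = nuke.nodes.Sphere()
sphere['translate'].setValue([0.0, 0.0, -5.0])          # placeholder values

cam = nuke.nodes.Camera2()                              # should match the render camera
render = nuke.nodes.ScanlineRender()
render.setInput(1, sphere)                              # obj/scn input - assumed index, check labels
render.setInput(2, cam)                                 # cam input - assumed index, check labels

# Without a deep channel the combine is z-based, so edges/AA will suffer.
zmerge = nuke.nodes.ZMerge(inputs=[beauty, render])
```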
 
Old 09-20-2013, 05:16 AM   #2
earlyworm
car unenthusiast
 
Will Earl
craftsperson
Grizzly Country, Canada
 
Join Date: Mar 2005
Posts: 1,685
I don't think you can convert points to deep directly - you might be able to do it via the ScanlineRender node, by re-rendering the points and outputting deep data, but even if you can, there is probably going to be some quality loss.

Have you tried the ZMatte node? It's a bit old-fashioned, but it does occasionally produce acceptable results given a beauty pass with a depth channel.
 
Old 09-20-2013, 05:58 AM   #3
beaker
Meep!
 
Deke Kincaid
VR Pipeline Supervisor
DD
Los Angeles, USA
 
Join Date: Apr 2002
Posts: 8,540
There is DeepFromImage or DeepFromFrames. The problem is that a position pass suffers from the same issues as z does: you lose the ability to deal with semi-transparency, motion blur and anti-aliased edges. There are a few tools on Nukepedia, though, for other methods of converting images to deep.
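As a rough sketch of that conversion (file paths are placeholders; DeepFromImage builds one deep sample per pixel from the flat image's depth channel, so the limitations above still apply):

```python
# Rough sketch - convert a flat render with a depth channel into a single-sample deep stream.
import nuke

flat = nuke.nodes.Read(file='/path/to/beauty_with_depth.exr')    # placeholder path

# One deep sample per pixel taken from the flat depth channel - semi-transparency,
# motion blur and anti-aliased edges are no better than a plain z-based workflow.
deep = nuke.nodes.DeepFromImage(inputs=[flat])

# From here it can at least be combined with genuinely deep renders.
other  = nuke.nodes.DeepRead(file='/path/to/deep_render.exr')    # placeholder path
merged = nuke.nodes.DeepMerge(inputs=[deep, other])
```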
__________________
-deke
 
Old 09-26-2013, 01:07 AM   #4
MisterS
seeking knowledge
 
David Spittle
Civil Design Drafter
Brisbane, Australia
 
Join Date: Jan 2008
Posts: 813
Sorry for the delayed response, just moved house.

Earlyworm - Where did you find the zmatte node?

Beaker - You just confirmed what I was going to try - whether I can get good AA from the position pass (still trying to get my head around a lot of things). Turns out I can get a deep channel from a nightly build of vray 2.0. Go vray!

On a similar note - any ideas where I can learn how to rebuild my beauty pass from render elements but using deep? I assumed it would be as per oldskool methods and all I needed to do was:

DeepToImage > Unpremult > shuffle and comp everything > Premult > DeepFromImage > and then plug this into my DeepMerge node.

Note: the last DeepMerge node has a Card3D plugged into a DeepFromImage set to the correct depth. Plugging my original deep EXR2 into the DeepMerge node comps as expected, with the card between the two teapots. But when I plug my render-elements comp in, the appearance is not correct.

Sorry for the crappy screen grab - not hooked up to the web at the moment.

Some notes about the script: I've set the Shuffle elements to rgb out, the merges are all rgba, and the premults apply to all channels.

The problem with the final image is that the Card3D (with a CheckerBoard node) looks as if it has been multiplied over the teapot render, i.e. not between them at the correct depth.

Edit: sent from my iPhone - hope it makes sense, I hate typing on these things!

Last edited by MisterS : 09-26-2013 at 01:27 AM.
 
Old 09-26-2013, 01:26 AM   #5
beaker
Meep!
 
Deke Kincaid
VR Pipeline Supervisor
DD
Los Angeles, USA
 
Join Date: Apr 2002
Posts: 8,540
You should use the DeepRecolor node. In essence, it projects the color data onto the deep data.
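Something along these lines, as a sketch (node names are placeholders, and the DeepRecolor input order is an assumption - check which arrow is the deep input and which is the colour input):

```python
# Sketch of the DeepRecolor approach: rebuild the beauty flat, then project it back onto the deep data.
import nuke

deep           = nuke.nodes.DeepRead(file='/path/to/deep_render.exr')   # placeholder path
rebuilt_beauty = nuke.toNode('RenderElementsComp')                      # flat rebuild of the beauty (placeholder name)

recolor = nuke.nodes.DeepRecolor()
recolor.setInput(0, deep)             # deep input - assumed index, check labels
recolor.setInput(1, rebuilt_beauty)   # colour input - assumed index, check labels

# The recoloured deep stream then feeds the DeepMerge with the Card3D element.
card_deep = nuke.toNode('DeepFromImage_Card')                           # placeholder name
merged = nuke.nodes.DeepMerge(inputs=[recolor, card_deep])
```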
__________________
-deke
 
Old 09-26-2013, 01:47 AM   #6
MisterS
seeking knowledge
 
David Spittle
Civil Design Drafter
Brisbane, Australia
 
Join Date: Jan 2008
Posts: 813
Aha! That worked, thanks.

With deep images, do I still need to unpremult and then premult for colour corrections?
I'm getting a light halo (rendered against a black background), but it goes away if I disable the Unpremult and Premult nodes.
 
Old 09-26-2013, 02:43 AM   #7
MisterS
seeking knowledge
 
David Spittle
Civil Design Drafter
Brisbane, Australia
 
Join Date: Jan 2008
Posts: 813
Another problem is that the DeepRecolor is darkening my image compared to the beauty render.

Actually, the real problem is trying to do decent research browsing the web on an iPhone screen instead of my usual 30" monitors.

I've found that by disabling the alpha channel on any of my Merge nodes, or by disabling my DeepRecolor node, I get the correct result. Not sure if there is a legitimate reason for this behavior.

Last edited by MisterS : 09-26-2013 at 03:02 AM.
 
Old 09-26-2013, 06:56 AM   #8
earlyworm
car unenthusiast
 
Will Earl
craftsperson
Grizzly Country, Canada
 
Join Date: Mar 2005
Posts: 1,685
Sorry, I was thinking of ZMerge. Anyway, looks like you're sussed on deeps.

When adding your secondaries back together, just add the rgb channels together and then copy the alpha back in before your DeepRecolor node.
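Roughly like this, as a sketch (the element node names are placeholders, and the Merge 'output' and Copy channel knobs are from memory):

```python
# Sketch: sum the render elements in rgb only, then copy the beauty alpha back in
# before the DeepRecolor, so the alphas don't get plussed together along with the rgb.
import nuke

diffuse  = nuke.toNode('Diffuse')      # placeholder render-element Reads
specular = nuke.toNode('Specular')
beauty   = nuke.toNode('Beauty')       # carries the correct alpha

plus = nuke.nodes.Merge2(operation='plus', inputs=[diffuse, specular])
plus['output'].setValue('rgb')         # only add the colour channels

copy_alpha = nuke.nodes.Copy()
copy_alpha.setInput(0, plus)           # B stream - assumed index, check the A/B labels
copy_alpha.setInput(1, beauty)         # A stream (alpha source) - assumed index
copy_alpha['from0'].setValue('rgba.alpha')
copy_alpha['to0'].setValue('rgba.alpha')

# copy_alpha then feeds the colour input of the DeepRecolor node.
```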
 
Old 09-26-2013, 08:12 AM   #9
MisterS
seeking knowledge
 
David Spittle
Civil Design Drafter
Brisbane, Australia
 
Join Date: Jan 2008
Posts: 813
Sussed is a strong word, I'm getting there!

Copying my alpha did the trick, thanks. I have another question though:

Out of interest - if you have the deep information in your EXR, why can I not simply add a camera to my Card3D node, set the filmback and focal length to match my vray camera, and get a match?

I've been using the NukeEm script on other occasions, which works great, but I figured that if everything is in camera space then you wouldn't need to export your Max camera.

I noticed there was a mismatch between the DeepToPoints and Card3D compared to my final image, which looks visually correct, i.e. composited at the correct depth. Like I said, NukeEm will fix this, but I'm trying to understand things better.
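For what it's worth, a sketch of a manual camera match (knob names from the standard Nuke camera; all values are placeholders that would have to come from the Max/vray camera). Filmback and focal length alone aren't enough - the camera transform, plus the unit/axis conventions between Max and Nuke, have to line up too, which is presumably why exporting the camera with NukeEm fixes it:

```python
# Sketch of a manual camera match - transform and filmback both need to agree with the render camera.
import nuke

cam = nuke.nodes.Camera2()
cam['focal'].setValue(40.0)                    # focal length from the vray camera (placeholder)
cam['haperture'].setValue(36.0)                # horizontal filmback in mm (placeholder)
cam['vaperture'].setValue(24.0)                # vertical filmback in mm (placeholder)
cam['translate'].setValue([0.0, 1.6, 10.0])    # placeholder - must match the render camera
cam['rotate'].setValue([0.0, 0.0, 0.0])        # placeholder

deep   = nuke.nodes.DeepRead(file='/path/to/deep_render.exr')   # placeholder path
points = nuke.nodes.DeepToPoints()
points.setInput(0, deep)                       # deep input - assumed index, check labels
points.setInput(1, cam)                        # camera input - assumed index, check labels
```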
 
Old 09-26-2013, 09:18 AM   #10
MisterS
seeking knowledge
 
David Spittle
Civil Design Drafter
Brisbane, Australia
 
Join Date: Jan 2008
Posts: 813
I spoke too soon - it's still not quite right. I've attached another image.

Again, sorry for the image quality. There's a slight white halo which appears to come from the rawlighting (divide) node.

The shadows are identical whether the three nodes are enabled or disabled, so they're not the culprit.

 
Old 10-04-2013, 02:19 AM   #11
MisterS
seeking knowledge
 
David Spittle
Civil Design Drafter
Brisbane, Australia
 
Join Date: Jan 2008
Posts: 813
One thing I'm still trying to get my head around is whether I can fully eliminate halos using a deep workflow.

Example: I created a multimatte element in a test scene and rendered the scene in one pass. The objects are all the same distance from my camera, and some are overlapping. I want to use the multimatte to CC my 2D elements, similar to the above post (I never resolved those AA issues either).

DeepRead > DeepColorCorrect (multimatte channel, 0 gain on the G and B channels) > DeepToImage > Shuffle the multimatte R channel to 'redshuffle.r'

The above isolates the red channel from my multimatte pass, which is then piped in as a mask to a Grade node in my 2D stream.

This appears to work for selectively applying drastic color correction to overlapping objects.

However, I get a light halo on any objects that were rendered against the black background and have been CCed using this multimatte. All overlapping rendered objects appear to be halo-free.
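For reference, the chain above as a Nuke Python sketch (the 'multimatte' and 'redshuffle' layer names are placeholders from the post, and the DeepColorCorrect gains are left to the node's panel):

```python
# Sketch of the described chain: DeepRead > DeepColorCorrect > DeepToImage > Shuffle,
# with the result used as a mask on a Grade in the 2D stream.
import nuke

deep = nuke.nodes.DeepRead(file='/path/to/deep_with_multimatte.exr')   # placeholder path

# Gain on the multimatte G/B channels is zeroed in the node's panel, as in the post.
cc   = nuke.nodes.DeepColorCorrect(inputs=[deep])
flat = nuke.nodes.DeepToImage(inputs=[cc])

nuke.Layer('redshuffle', ['redshuffle.r'])     # make sure the target layer exists (placeholder name)
shuf = nuke.nodes.Shuffle(inputs=[flat])
shuf['in'].setValue('multimatte')              # placeholder layer name from the render
shuf['out'].setValue('redshuffle')

# 2D stream: the shuffled matte drives a Grade's mask input.
beauty = nuke.toNode('BeautyComp')             # placeholder name
grade  = nuke.nodes.Grade()
grade.setInput(0, beauty)
grade.setInput(1, shuf)                        # mask input - pick redshuffle.r in the mask dropdown
```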
 
Old 10-04-2013, 02:42 AM   #12
MisterS
seeking knowledge
 
David Spittle
Civil Design Drafter
Brisbane, Australia
 
Join Date: Jan 2008
Posts: 813
Turns out there's no difference between a regular Shuffle of the multimatte and the over-complicated method I tried. Also, I had forgotten to unpremult before my Grade node.

On closer inspection there are still halos around my CCs. Is there a workflow to eliminate these?
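The usual pattern for a masked CC on a premultiplied render, as a sketch (node names are placeholders; whether it kills the halo entirely depends on how well the matte's edges match the beauty's anti-aliased alpha):

```python
# Sketch: unpremult before the colour correction, premult after, matte on the Grade's mask input.
import nuke

beauty = nuke.toNode('BeautyComp')          # placeholder - the premultiplied 2D comp
matte  = nuke.toNode('MultimatteShuffle')   # placeholder - the shuffled multimatte channel

unpre = nuke.nodes.Unpremult(inputs=[beauty])
unpre['channels'].setValue('all')

grade = nuke.nodes.Grade()
grade.setInput(0, unpre)
grade.setInput(1, matte)                    # mask input - select the matte channel in the mask dropdown

pre = nuke.nodes.Premult(inputs=[grade])
```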
 
Old 10-04-2013, 02:42 AM   #13
CGTalk Moderation
Expert
CGTalk Forum Leader
 
Join Date: Sep 2003
Posts: 1,066,478
Thread automatically closed

This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.
 