Hello.
I really need help with this one!
I have a large scene in 3D Studio that I rendered in two parts, two RPF files with all the necessary channels, but I’m having some trouble compositing them in C4.
I’m using a composite component but it doesn’t merge the channels.
I want to apply a 3D Fog effect to the whole image but I don’t know how to do it!
If I apply the effect to each part before the composite the result isn’t correct.
How is this supposed to be done??
Is there any component that will composite all channels?
I’m fairly new to Combustion and I guess I could try to create a capsule for this, but I really don’t understand a lot of the stuff involved, like the coverage channel, which has a visible effect on how cleanly things merge.
Can someone help?!
Thanks
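For context, a depth-based fog boils down to something like the sketch below (plain Python/NumPy, nothing Combustion-specific, with made-up parameter names). It shows why the effect needs one Z value for every pixel of the final composite, which is part of why fogging each part separately and then compositing can give a different result.

```python
import numpy as np

# A rough, hypothetical linear fog, assuming:
#   rgb : float image, shape (H, W, 3), values 0..1
#   z   : plain distance from the camera per pixel, shape (H, W)
#         (larger = farther; an RPF Z channel may need negating/rescaling first)
# fog_near / fog_far are made-up parameters for where fog starts and saturates.
def apply_fog(rgb, z, fog_color=(0.7, 0.7, 0.75), fog_near=100.0, fog_far=1000.0):
    t = np.clip((z - fog_near) / (fog_far - fog_near), 0.0, 1.0)  # fog amount per pixel
    fog = np.asarray(fog_color, dtype=rgb.dtype)
    # blend each pixel toward the fog colour by its own fog amount
    return rgb * (1.0 - t)[..., None] + fog * t[..., None]
```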
How to composite RPF channels??
I don’t think it makes sense to render out RPFs in that case. Try rendering two TGA or SGI sequences, and render out the Z-depth and coverage for the whole scene. After that you can merge the two RGBA passes and apply a G-Buffer Builder to the merged footage to assign the extra channels.
If you don’t want to re-render the scene, use a G-Buffer Extract operator to extract the Z-depths and coverage, merge them and commit them to disk. Then merge the RPFs and apply the G-Buffer Builder to the merged footage as well.
Hope that helps.
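For illustration, the per-pixel depth merge that both of these options ultimately rely on could look roughly like this NumPy sketch. Array names are hypothetical, it assumes the convention that a larger Z value is nearer the camera (matching the “Max = nearest” observation later in the thread), and antialiased edges and coverage are ignored here.

```python
import numpy as np

# Sketch of a per-pixel depth merge of two passes, assuming:
#   rgba_a, rgba_b : float images, shape (H, W, 4)
#   z_a, z_b       : matching Z-depth buffers, shape (H, W), where a LARGER
#                    value means nearer the camera (flip the test if yours differ)
def depth_merge(rgba_a, z_a, rgba_b, z_b):
    a_in_front = z_a >= z_b                        # per-pixel depth test
    rgba = np.where(a_in_front[..., None], rgba_a, rgba_b)
    z = np.maximum(z_a, z_b)                       # merged Z for the G-Buffer Builder / fog
    return rgba, z
```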
Thanks Neo!
Unfortunately I think there’s a lack of options for working with RPF files, because I really think it would make sense to use them, but if there’s no functionality for that, what can we do.
I’m going to try what you suggested! I hope I don’t have problems with pixel coverage.
You said I could also extract the channels from the RPFs, merge them and commit them to disk. I guess this would also be possible without committing to disk, right?
If that’s the case, do you think it would be possible to create a capsule for this?
I’m trying this right now but I’m still having some trouble merging the channels. I can’t find a way to get the alpha from the RPF sources.
Well, do you think this can be done? Correctly, of course, without any coverage problems…
It would be much simpler to just render the rpf files.
Thanks again
Sure, it could be done without committing to disk; the setup would be the same. Creating a capsule is a good idea if you have another ten shots with the same problem. As far as I know a capsule is limited to one input footage only, so you might have to create two or more capsules. I’m not in front of Combustion at the moment; I’ll try tomorrow and give some feedback if it works.
Regards
Neo
Hi Neo
You can have capsules with several inputs. Take a look at the “Advanced Merge” capsule.
Well…the problem is that I can’t even do it with normal components.
Right now I would just like to join the Z-depth channels, but I have no clue how to do it.
One source may not be entirely in front of the other, but theoretically it should be possible to do it in that case too, right?
I guess that to merge the Z-depth channels the resulting pixel should be whichever of the two has the higher value, but for coverage and the rest I have no clue either.
Does anyone know how to do this?
Thanks
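A very rough sketch of one way the auxiliary channels themselves might be merged, along the lines of the per-pixel “higher value wins” guess above. It assumes larger Z means nearer, and simply carries over the coverage of whichever footage wins the depth test, which is only an approximation around overlapping edges.

```python
import numpy as np

# Guess at merging the G-buffer channels of two footages (z and coverage),
# assuming larger Z = nearer the camera.
def merge_gbuffer_channels(z_a, cov_a, z_b, cov_b):
    a_in_front = z_a >= z_b
    z = np.where(a_in_front, z_a, z_b)             # same result as np.maximum(z_a, z_b)
    coverage = np.where(a_in_front, cov_a, cov_b)  # crude: nearer layer's coverage
    return z, coverage
```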
If I understand you right, you get aliasing and coverage problems when compositing the RPFs with 3D Depth turned on. I read a making-of several months ago where they rendered the Z and coverage channels at twice the resolution and scaled them down to get rid of the artifacts while 3D compositing. I know this doesn’t solve your actual problem, but rendering out the channels is quite a bit faster than re-rendering the RGBA image. Hope that helps.
Cheers
Neogeo
Hello Neo
The main task I’m trying to achieve is to composite two RPFs along with their channels.
The Composite component strips all the channels.
The problems I’m having at each step are:
1st - Really stupid, but how do I extract the alpha channel from an RPF?
“Show G-Buffer” doesn’t have an option for this.
2nd - The alpha channel information may not be enough to merge the two RPFs, because I can’t guarantee that one footage is completely in front of the other, so for each pixel I need to evaluate the highest of the two corresponding depths.
3rd - The Z-depth isn’t supposed to be antialiased; the coverage channel is there for that. I don’t really know how coverage works internally.
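As a rough illustration of what the coverage channel appears to be for (an assumption about its meaning, not a description of how Combustion implements it): Z stays aliased with one depth per pixel, while coverage says how much of the pixel the front-most fragment actually covers, so edges can be blended against whatever lies behind.

```python
import numpy as np

# fg_rgb / bg_rgb: front and back colours at each pixel, shape (H, W, 3)
# coverage: how much of each pixel the front fragment covers, 0..1, shape (H, W)
def edge_blend(fg_rgb, bg_rgb, coverage):
    w = coverage[..., None]          # per-pixel blend weight from the coverage channel
    return fg_rgb * w + bg_rgb * (1.0 - w)
```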
Another thing is about capsules! Not that important, because they’re not absolutely necessary, but it’s annoying.
I can apply a Color Correct (just an example) to each footage, then merge the two and create a capsule. As you can see, it accepts two inputs.
If instead of the Color Correct I use a Show G-Buffer and then merge, I can’t create the capsule! Strange…
PS: As I’m writing this I noticed that if I keep the Color Correct before the Show G-Buffer, I can create the capsule…
At least for the 1st I have a solution. I have created a capsule for merging several alpha channels out of another footage. Thanks for the tip about several inputs, I hadn’t realized that before. More later on, I have to finish a job first.
Apply the capsule to a solid layer with transparency checked off.
Use the Compound RGB Arithmetic controls (Operator “Set”, Input “Alpha”), then use Set Matte or Compound Alpha Arithmetic to reassign the alpha channel.
2nd - The alpha channel information may not be enough to merge the two RPFs, because I can’t guarantee that one footage is completely in front of the other, so for each pixel I need to evaluate the highest of the two corresponding depths.
You can use the Z-depth information as an alpha channel (Show G-Buffer >> Set Matte). The problem can be the different scales of the Z-depth channels. I believe you would have to render again to extract the Z-depth for the whole scene.
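One hedged way around the scale problem is to normalise both Z buffers against a single shared near/far range before using them as mattes, instead of letting each buffer use its own range. A small NumPy sketch (ignoring any empty-pixel sentinel values for simplicity):

```python
import numpy as np

# Map Z into 0..1 against a shared range so two footages stay comparable.
def z_to_matte(z, z_near, z_far):
    return np.clip((z - z_near) / (z_far - z_near), 0.0, 1.0)

# usage sketch: pick one range covering both renders
# z_near, z_far = min(z_a.min(), z_b.min()), max(z_a.max(), z_b.max())
# matte_a = z_to_matte(z_a, z_near, z_far)
# matte_b = z_to_matte(z_b, z_near, z_far)
```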
Another thing is about capsules! Not that important, because they’re not absolutely necessary, but it’s annoying.
I can apply a Color Correct (just an example) to each footage, then merge the two and create a capsule. As you can see, it accepts two inputs.
If instead of the Color Correct I use a Show G-Buffer and then merge, I can’t create the capsule! Strange…
Are you using the Merge operator?
Hello.
Thanks again.
I asked about extracting the alpha from an RPF, but meanwhile I found out about Set Matte and Compound Alpha Arithmetic. I also wanted this in order to merge the two Z-buffer channels, but I found that using RGB Arithmetic with “Max” should work better for that, because the result is the pixel with the maximum luminance, which is the nearest one.
Nevertheless, because normally the Z-buffer isn’t antialiased and also has some “X values” (which I don’t know how to work with), I still can’t get a decent result.
Strangely, if I use Show G-Buffer to extract the Z-depth from an RPF and then use it again in a G-Buffer Builder, the result is poor. I think it’s because these “X values” are lost.
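Purely as a guess about those “X values”: if they are a sentinel marking pixels with no geometry, then a plain Max merge can only be trusted where both footages actually contain geometry, so one hedged approach is to mask the empty pixels out explicitly. The sentinel value below is hypothetical; check what your RPFs really store.

```python
import numpy as np

EMPTY = -1.0e30   # hypothetical "no geometry here" sentinel, larger Z = nearer

def merge_z_ignoring_empty(z_a, z_b, empty=EMPTY):
    a_valid = z_a > empty
    b_valid = z_b > empty
    z = np.full_like(z_a, empty)                  # stays empty where neither has geometry
    z[a_valid & ~b_valid] = z_a[a_valid & ~b_valid]
    z[b_valid & ~a_valid] = z_b[b_valid & ~a_valid]
    both = a_valid & b_valid
    z[both] = np.maximum(z_a[both], z_b[both])    # nearest of the two where both exist
    return z
```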
I’m having so much trouble understanding the way some things behave that I think writing about them would just be too confusing.
Does anyone want to have a go at merging these two files in order to get a correct fog (or any other 3D effect, for that matter)?
Here’s a link to a zip containing the files.
http://clientes.netvisao.pt/ruquita/compositing/mergechannels.zip
There’s also an RPF with the whole scene and a complete Z-depth file, as well as the CWS where I’m testing.
In the upper part of the CWS schematic you can see how the messed-up “X values” create aliasing.
If someone is willing to give it a try, that would be great.
Thanks