View Full Version : ".rpf" advantage
06 June 2009, 11:34 AM
HEY GUYS! Why do all the professionals export the .rpf format from 3D software for compositing? What will happen if I export JPEG or QuickTime or other files?
06 June 2009, 10:09 PM
RPF is the Rich Pixel Format, a 3ds Max file format. First of all, when you export your renders you don't want anything compressed, which is why pros don't use JPEGs or .mov files. You shouldn't compress before you edit or composite, since the footage will just get compressed again. RPFs retain extra data in the file, such as per-pixel velocity. If you have a compositor that can take advantage of it, you can do things like post motion blur, which saves render time out of 3ds Max. That's not the only reason, though; there are other things it can retain as well. Think of it as a container with more than color information, which gives you more options in post rather than having to re-render to correct issues.
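To make the "post motion blur from a velocity pass" idea concrete, here is a minimal NumPy sketch of what a compositor can do with per-pixel velocity data. This is a hypothetical illustration of the technique, not any particular application's actual implementation; the function name and sample count are made up for the example.

```python
import numpy as np

def post_motion_blur(color, velocity, samples=8):
    """Smear each pixel along its screen-space velocity vector.

    color:    (H, W, 3) float array, the beauty pass.
    velocity: (H, W, 2) float array, per-pixel (dx, dy) motion in pixels,
              i.e. the kind of data an RPF velocity channel carries.
    """
    h, w = color.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    out = np.zeros_like(color)
    for i in range(samples):
        # Sample positions from -0.5 to +0.5 of the velocity vector,
        # centering the blur on the pixel's original position.
        t = i / (samples - 1) - 0.5
        sx = np.clip((xs + velocity[..., 0] * t).round().astype(int), 0, w - 1)
        sy = np.clip((ys + velocity[..., 1] * t).round().astype(int), 0, h - 1)
        out += color[sy, sx]
    return out / samples
```

The point of the render-time saving: the 3D renderer writes a sharp frame plus cheap velocity data once, and the blur amount can then be tweaked repeatedly in the compositor without re-rendering.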
06 June 2009, 10:18 AM
Actually, RPF is a bit outdated (personally, I don't know of anybody who still uses it professionally). Usually passes are rendered out as TGA/TIFF or OpenEXR sequences; with OpenEXR you can also embed different passes in one file.
06 June 2009, 11:40 AM
RiKToR (http://forums.cgsociety.org/member.php?u=115316) & Tagger (http://forums.cgsociety.org/member.php?u=44510) thank you for info!
06 June 2009, 12:10 AM
Isn't RPF/RLA the only format that supports the G-/Z-buffer out of 3ds Max? If you're using Combustion, DF or AE, they are still great file formats to use because of the multiple-buffer support. If you use Nuke or Shake, then not so much, since they don't support it.
06 June 2009, 03:08 AM
I used to use RPF and RLA for the coverage attribute out of Max. You can use it to recreate cameras in AE as an easy way to comp over your background images. I would always render out another image though, because I found the RPF/RLA images to have really bad edges.
07 July 2009, 12:34 PM
It is imperative that you capture your source material at the highest resolution and accuracy possible. When there are multiple channels of information, e.g. alpha, velocity, color, Z-depth, you need them all and in their original form... distinct from one another although in just one file. No compression, no aliasing, no adjustment of output levels, because all of those processes are both "lossy" and "noisy."
Output-file formats that are designed for "capture" have all of these characteristics ... and as a consequence, the files are quite enormous.
When you finally "mix down" the finished sequence, the very last step undoubtedly will be to produce a deliverable version in a file-format that is designed for "delivery." But this needs to be the very last step, and you should produce it from a "final cut" file that is in a capture format. This final step is the only step where you "throw information away."
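The "lossy and noisy" point is easy to verify: every JPEG generation discards information, so compressing, editing, and compressing again compounds the damage. A small sketch with Pillow and NumPy (a synthetic noise frame stands in for a render here; any source image would do):

```python
import io
import numpy as np
from PIL import Image

# A synthetic "render": random noise is a worst case for JPEG's DCT compression.
rng = np.random.default_rng(0)
original = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)

def jpeg_roundtrip(arr, quality=90):
    """Encode an array as JPEG in memory and decode it again."""
    buf = io.BytesIO()
    Image.fromarray(arr).save(buf, format='JPEG', quality=quality)
    buf.seek(0)
    return np.asarray(Image.open(buf))

gen1 = jpeg_roundtrip(original)   # delivery-style compression, generation 1
gen2 = jpeg_roundtrip(gen1)       # "edit" the compressed file, compress again

err1 = np.abs(original.astype(int) - gen1.astype(int)).mean()
err2 = np.abs(original.astype(int) - gen2.astype(int)).mean()
print(f"mean error after 1 generation: {err1:.2f}, after 2: {err2:.2f}")
```

Working in a capture format (EXR, TIFF, etc.) until the final delivery step keeps that per-generation error at zero.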