Lightprobe help


derkoi
05-04-2006, 04:19 PM
Hi all,

I'm getting a few problems with making my own HDRI probes, mainly when rendering.

Here's the work flow so far:

I take a photo of the background plate

Then I position the light probe and take photos of it, using a Nikon D50 on a tripod. I change the exposure from -5.0 up to +5.0 EV in 1-stop increments, so I get -5.0, -4.0, etc. I use JPEG format; although I've read I should use RAW format, I haven't yet. I have an 8-inch silver Christmas bauble for the probe.

I use Photoshop CS2 to build the HDRI, then trim around the edges of the probe and save as a Radiance .hdr file.

When I import the probe I notice the colours seem oversaturated. When I hit render using radiosity in Lightwave 8, I get horrible blotches, and the lighting doesn't seem to have the same depth as the commercial HDR images I've used.

Any ideas where I'm going wrong?

I can upload the .hdr probe and/or a render of the blotches I'm getting if required.

Thanks for your time.

h2o
05-05-2006, 11:09 AM
Does the HDRI still look good in CS2, or is it oversaturated there too?

derkoi
05-05-2006, 05:45 PM
The HDRI in CS2 looks fine. I think I may have it now: I used the camera RAW file type and took the photos at 2/3-stop intervals, and it seems to work better. I'm going to experiment more though...

rendermaniac
05-07-2006, 12:34 AM
Digital cameras tend to apply some sort of correction when saving as JPEGs, which can cause problems if you are making an HDR image. This is the other reason RAW files are used (the first is the higher dynamic range they have anyway): you are getting raw pixel values that haven't been messed with by the camera.

Also, if you don't have enough range, or things are moving (e.g. clouds), then you can get artefacts.
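To illustrate the response-curve point, here is a minimal sketch (my own, not from this thread) of undoing the transfer curve so JPEG pixel values become roughly proportional to scene light before HDR assembly. It assumes the camera's JPEG curve is close to sRGB, which real cameras only approximate:

```python
# Undo an sRGB-like transfer curve so JPEG pixel values are
# roughly proportional to scene light before HDR assembly.
# Assumption: the camera's JPEG curve is approximately sRGB;
# real cameras apply their own proprietary curves.

def srgb_to_linear(v):
    """v is a normalized pixel value in [0, 1]."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

# A mid-grey JPEG value (~0.5) corresponds to far less than
# half the light: the curve is strongly non-linear.
print(round(srgb_to_linear(0.5), 3))  # → 0.214
```

This is why merging JPEGs as if they were linear gives muddy, low-contrast probes: the encoded values overstate the darker tones.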

Simon

derkoi
05-07-2006, 02:34 AM
OK, cheers for the help. So what you're saying is, I should always shoot in RAW, and maybe shoot every full stop as opposed to every 2/3 stop?

JulianS
05-07-2006, 05:39 AM
I can help you with the f-stops.
Go to the R&D section of my website and take a look in the Techniques link under
"F-Stops needed for a good HDRI sequence (http://www.creating3d.com/rnd/HDRI%20Exposure%20Sheet.htm)"

http://www.creating3d.com/rnd/index.html

Here is an example of one of the tables you can find there:

9 exposures, iris fixed at f/4.0:
1/500 1/250 1/125 1/60 1/30 1/15 1/8 1/4 1/2
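A sequence like the one above can be generated in a few lines of Python (my own illustration, not from Julian's site): at a fixed f-stop, each full stop simply doubles the exposure time. Note that cameras label the middle speeds nominally (1/60, 1/30, 1/15); the exact doublings are 2/125, 4/125, 8/125 of a second.

```python
from fractions import Fraction

def bracket(fastest, n_stops):
    """Shutter times at a fixed f-stop: each step is one full
    stop, i.e. double the previous exposure time."""
    return [Fraction(fastest) * 2 ** i for i in range(n_stops)]

# Nine exposures starting at 1/500 s, matching the table above.
seq = bracket("1/500", 9)
print([str(s) for s in seq])
# → ['1/500', '1/250', '1/125', '2/125', '4/125', '8/125', '16/125', '32/125', '64/125']
```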

http://www.creating3d.com/rnd/images/HDRI_Seq.jpg

http://www.creating3d.com/rnd/images/HDRI_Tripot_example.gif

derkoi
05-07-2006, 08:23 AM
Thanks Julian, that's really helpful!

So I lock the iris on an f-stop and change the exposure time? I'll have to look into how this is done with my Nikon D50.

Cheers

gerardo
05-08-2006, 09:57 AM
Using RAW file types is better for sure, but consider that the main reason you are seeing differences between Photoshop and Lightwave is that PSCS2 gamma-encodes your image, while Lightwave linearizes your HDRI. You need to take that into account when you assemble your HDR from your image sequence and when you work within LW.



Gerardo

h2o
05-08-2006, 12:55 PM
The D70 has a bracketing function; it's really useful and convenient.

I always use it to take three RAW files (+2, 0, -2) quickly.

I don't know whether the D50 has this or not :shrug:

Tlock
05-10-2006, 12:19 AM
gerardo, you are correct up to a point. When shooting with most digital cameras, the process of saving to any non-RAW file format will apply a camera response curve to the image. This is done to emulate the behaviour film has when capturing light. A CCD in a digital camera captures light in a linear manner, so no response curve needs to be applied.

Now here is where I think your real problem is: when you are using *.hdr files, a gamma correction is applied during the saving process, and this is true of all HDR applications; HDR Shop, Photoshop CS2, and Artizen HDR all do this. Since Lightwave didn't test its *.hdr format against the rest of these applications, it doesn't apply a reverse correction to the image when loading it. To avoid this, when you export out to the *.hdr file format, always use 1.0 for the gamma value and you shouldn't have a problem with Lightwave.

Note: this WILL AFFECT all other applications, since Lightwave is the one doing it differently. Good luck.
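If you do end up with an .hdr that has a display gamma baked into its pixel values, the fix can be sketched like this (a hypothetical example; the gamma value and the small array stand in for a real loaded float image):

```python
import numpy as np

# If an application baked a 1/gamma display encoding (say 2.2)
# into the pixel values when saving an .hdr, raising the values
# to that power recovers the linear data.
GAMMA = 2.2  # assumed encoding gamma; check your tool's export settings

def undo_baked_gamma(pixels, gamma=GAMMA):
    """Undo a baked-in 1/gamma encoding: v_linear = v ** gamma."""
    return np.power(pixels, gamma)

encoded = np.array([0.0, 0.5, 1.0]) ** (1 / GAMMA)  # simulate the bake
restored = undo_baked_gamma(encoded)
print(np.allclose(restored, [0.0, 0.5, 1.0]))  # → True
```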

rendermaniac
05-10-2006, 10:09 AM
Thanks Julian, that's really helpful!

So I lock the iris on an f-stop and change the exposure time? I'll have to look into how this is done with my Nikon D50.

Cheers

If you change the iris, then your depth of field will change between exposures. This will mean the pixels will not line up, as the brighter exposures will be more blurred. The combining algorithm depends on having a pixel correspondence between exposures (which is what makes doing it with film much harder).
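For reference, a toy version of the combining algorithm (my own sketch, assuming linear, perfectly aligned inputs; the hat weighting follows the spirit of Debevec and Malik's method, which may differ from what your HDR tool actually uses):

```python
import numpy as np

def merge_hdr(exposures, times):
    """exposures: list of float arrays in [0, 1]; times: shutter
    times in seconds. Returns per-pixel radiance estimates via a
    weighted average that downweights near-black and near-white
    pixels (they carry the least reliable information)."""
    num = np.zeros_like(exposures[0])
    den = np.zeros_like(exposures[0])
    for img, t in zip(exposures, times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # hat weight, peak at mid-grey
        num += w * img / t                 # radiance estimate = value / time
        den += w
    return num / np.maximum(den, 1e-8)

# The same scene radiance seen through two shutter speeds:
a = np.array([0.1, 0.4])   # 1/100 s
b = np.array([0.2, 0.8])   # 1/50 s (twice the light)
print(merge_hdr([a, b], [0.01, 0.02]))  # → [10. 40.]
```

Misaligned pixels break the per-pixel correspondence this averaging relies on, which is exactly why a changing iris (or moving clouds) produces artefacts.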

gerardo
05-10-2006, 12:36 PM
gerardo, you are correct up to a point. When shooting with most digital cameras, the process of saving to any non-RAW file format will apply a camera response curve to the image. This is done to emulate the behaviour film has when capturing light. A CCD in a digital camera captures light in a linear manner, so no response curve needs to be applied.

I agree, if you are referring to the HDRI assembly and not to the RAW file conversion. Let's remember that bit depth doesn't mean high dynamic range, and a camera RAW file is still considered LDR. There are several reasons for this, but just consider that a RAW file's dynamic range is theoretically 16 bits (65,000:1). That sounds good, but most real cameras' bit depth is no more than 12 bits (4,096:1), and again, this is speaking theoretically: if we take the noise factor into account, the dynamic range is often barely 1,000:1. Some cameras' RAW files do allow slightly wider ranges in conversion, and apps like Capture One or Bibble can help us get better results depending on the image and lighting conditions, but even so, the room is too small to be considered HDR. Besides, considering our camera and display limitations (the ratio of these cameras is limited, the contrast ratio of our standard monitors is even more limited, and our systems and processing packages are gamma-encoded), we still need to tone-map the images in the conversion, and here is indeed where using RAW files gives us a great advantage. How we adjust this is important, mainly if we are making HDRIs to light and integrate CG elements with live-action plates :)
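These contrast ratios translate directly into stops of dynamic range (stops = log2 of the ratio); a quick check:

```python
import math

# Contrast ratios from the post above, expressed as stops of
# dynamic range (one stop = a doubling of light).
for label, ratio in [("16-bit theoretical", 65536),
                     ("12-bit sensor", 4096),
                     ("after noise", 1000)]:
    print(f"{label}: {math.log2(ratio):.1f} stops")
# → 16.0, 12.0 and 10.0 stops respectively
```

A bright outdoor scene with direct sun can span well over 17 stops, which is why a single RAW frame, at roughly 10 usable stops, still needs bracketing.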



Now here is where I think your real problem is: when you are using *.hdr files, a gamma correction is applied during the saving process, and this is true of all HDR applications; HDR Shop, Photoshop CS2, and Artizen HDR all do this. Since Lightwave didn't test its *.hdr format against the rest of these applications, it doesn't apply a reverse correction to the image when loading it. To avoid this, when you export out to the *.hdr file format, always use 1.0 for the gamma value and you shouldn't have a problem with Lightwave.

Note: this WILL AFFECT all other applications, since Lightwave is the one doing it differently. Good luck.

There is a misunderstanding, I think. Lightwave's linearization is not a problem at all; in fact, we need it if we are working in LCS (linear colour space).
What happens with HDRIs is that Lightwave assumes the HDR has been made in a non-LCS workflow (which is commonly the case), in other words, that it has been created in a gamma-encoded space and this has altered its gamut, so Lightwave tries to correct this, since for "proper" HDR lighting the image must have linear values. On the other hand, most of the image processing programs we use to make HDRIs assume that 32-bit images don't have any gamma encoding. So PSCS2, HDRShop, etc., don't apply a "gamma correction" when we save the images; what happens is that they gamma-encode the images when displaying them at 8 bits (which Lightwave doesn't do with its rendered images, though it can). Thus we work and save the images under this paradigm almost without noticing it. We can specify the display curve in most of these programs (and in some of them we can even blend colours in a colorimetrically correct way). This manner of displaying the image is the reason why an HDRI looks good in Photoshop, for example, but looks too contrasty in Lightwave. So don't try to "linearize" the HDRI for LW in an external app, seeing that LW does it by itself (besides, that will look bad in other apps). What we have to do in this case is choose whether we are going to work in LCS or not, and use the tools and procedures that facilitate that kind of workflow.



Gerardo

Tlock
05-10-2006, 02:48 PM
gerardo, I was referring specifically to HDRI assembly. You are correct that bit depth doesn't determine the dynamic range of the image; it is rather a placeholder for storing HDR images. That is why a direct conversion from 8-bit to 32-bit doesn't mean the image is now HDR.

In regards to Lightwave, I don't know much. But just to clarify: an image editor like PSCS2 assumes that an HDR image is the raw data, so a gamma correction needs to be applied when loading. When you save the file back to *.hdr it will reverse this process; otherwise the image would remain gamma-corrected when saved, and the next image editor that opened it would apply an additional gamma correction. Since Lightwave is trying to render a real-world image, it uses the HDR data directly; its concern is not to make the HDRI monitor-ready. So what you said about Lightwave makes sense to me now.

gerardo
05-11-2006, 08:30 AM
Yep, that gamma conversion is just for display purposes; it doesn't affect the image internally, and it is present in all image processing packages, as you say. The curious thing about PSCS2 with some formats (TIFF, HDR, etc.) is that it adjusts the display curve according to our system gamma and not the colour space we are working in (as is common with other packages).
Btw, Lightwave's linearization is the "correct" way to work with HDRIs for IBL; as far as I know, all 3D packages work the same way. There may be some packages that gamma-encode the HDRIs for display purposes, but internally they should work in LCS. The main Lightwave difference is that it doesn't gamma-encode the output rendered images (automatically), so we need to deal with that if we are working in LCS. There are some third-party plugins (some free, others commercial) with preview systems that let us adjust LUT settings or otherwise help with this compensation; we can also do it in an image processing program if we save our output images in some FP format, or we can use Lightwave's own tools to solve it. We can even choose which aspects will be blended in LCS and which will not. I guess most of these methods are applicable to most 3D packages too :)
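A minimal sketch of that compensation: applying a display gamma to linear render output ourselves (a plain 2.2 power curve as a stand-in for a proper sRGB or LUT-based preview; the array is a hypothetical render):

```python
import numpy as np

# A renderer working in linear colour space produces values that
# look too dark and contrasty on a standard monitor until a
# display gamma is applied. Simple 2.2 encode as a stand-in for
# a proper sRGB curve or LUT.
def encode_for_display(linear, gamma=2.2):
    return np.clip(linear, 0.0, 1.0) ** (1.0 / gamma)

render = np.array([0.0, 0.18, 1.0])   # linear mid-grey is ~0.18
print(np.round(encode_for_display(render), 3))
# mid-grey lands near 0.46, where a monitor shows it as mid-tone
```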



Gerardo

CGTalk Moderation
05-11-2006, 08:30 AM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.