The Red Digital Camera


#21

Hey Colin,
What's your opinion, then, on color fidelity from downsampling? I mean, the camera can record what, 10 bit at 4K? So downsampling to 2K would mean roughly 4 color samples per pixel at 10 bit…
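A back-of-the-envelope version of that idea in Python (the function name and toy values are mine, just to illustrate the 4-samples-per-output-pixel averaging):

```python
# Hypothetical sketch: averaging non-overlapping 2x2 blocks of a
# single-channel "4K" plane down to "2K", so each output pixel is
# built from 4 source samples. Values are 10-bit codes (0-1023);
# the average is kept in float so the extra samples can buy
# sub-code precision.

def downsample_2x(plane):
    """Average non-overlapping 2x2 blocks of a 2D list of samples."""
    h, w = len(plane), len(plane[0])
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            block = (plane[y][x] + plane[y][x + 1] +
                     plane[y + 1][x] + plane[y + 1][x + 1])
            row.append(block / 4.0)  # 4 samples feed one output pixel
        out.append(row)
    return out

# A 4x4 toy plane becomes 2x2:
src = [[100, 102, 200, 204],
       [104, 106, 208, 212],
       [300, 300, 400, 400],
       [300, 300, 400, 404]]
print(downsample_2x(src))  # [[103.0, 206.0], [300.0, 401.0]]
```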


#22

The sensor might be capable of wide dynamic range, but the data format won't be able to represent it. (Well, as far as I understand: even if it's got 4 colour samples per pixel, sampled at 10 bits per channel, that's only about 1000 million levels, compared to floating point, which has something like 9x10^99 levels.)
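For what it's worth, the arithmetic behind that "1000 million" figure (the numbers here are worked out from the post, not taken from any spec):

```python
# 10 bits per channel gives 2**10 levels per channel; three 10-bit
# channels together address 2**30 combinations -- roughly the
# "1000 million" figure above.

levels_per_channel = 2 ** 10                 # 1024
rgb_combinations = levels_per_channel ** 3   # 2**30
print(rgb_combinations)  # 1073741824

# By contrast, a single-precision float spans roughly 1e-38 to 3.4e38
# with ~2**32 distinct bit patterns: the win is range, not raw count.
```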

Plus you're limited by the fact that 4:4:4 colour sampling eats a huge amount of bandwidth, and doing HDRI eats even more.

The idea of it outputting straight to a NAS/SAN is pretty neat, especially if they manage to shove a Fibre Channel or 1/10 gig Ethernet port on it.


#23

I shall call it… CAM BOT!


#24

My comment about reducing lighting costs is purely because it’s an electronic camera.

High quality professionally done lighting looks great on film. I’m assuming that electronic cameras are a lot more forgiving of bad lighting than film is. Therefore, some production companies may opt for easier and cheaper lighting set-ups, using this Red camera. I can see more productions switching from film, and using the Red Digital Camera instead.

I predict CMOS image sensors will replace the CCDs being used today. Early CMOS sensors may have been noisy, but I think the technology holds more potential than CCD. Eventually everything will be CMOS.


#25

Unfortunately, electronic imaging has historically been even less forgiving, as the dynamic range has been considerably lower, making bad lighting harder to compensate for. Maybe this will change with this (and other) new breed of 4K CMOS cameras, but I wouldn't get my hopes up.


#26

Lighting is not a factor of post, but of proper production. Whether or not video is more forgiving of bad lighting doesn't matter.
Bad lighting is a matter of poor production values, just like poor art direction and poor cinematography.


#27

All digital cameras, even CMOS ones, add a lot of noise in very low-light situations (especially in the blue channel). Even the Red doesn't have enough range for extreme dark situations the way film does. Having a larger range of stops doesn't make much of a difference if it doesn't work in float. Raw still usually holds only 12 to 14 bits, clamped, not float.

A David Fincher movie is a prime example (Panic Room). The guy underexposes everything by 2 stops and uses soft Kino Flos for light. The subtlety of that type of shooting would get lost in digital, and we're talking noisy, like blue-channel chunks the size of your fist. Look at a movie like Collateral, which was shot all in digital. That movie pushed digital about as far as it could go into the darks, and it was pretty damn noisy with some nasty artifacting in there. Luckily it worked fairly well with the style of the movie and its subject matter.


#28

Hey guys, this whole thing isn't that new! Arri is already selling a similar product.


#29

Ok, I'm not an expert, but I think this is pretty much wrong. First of all, Collateral is not a good example for judging the qualities of the Viper. The noise was added on purpose (they pushed the gain)! So we don't know whether it could have been exposed with less noise. (One of the reasons they shot digital was to keep backgrounds less blurred under low-light conditions, and the larger DOF helped achieve this.)
If RED claims higher dynamic range, they mean as compared to other video cameras (usually between 8-10 f-stops), so 11-15 would get close to film latitude.
Considering 10-bit log has been the standard format for digitized film images for years and is still the standard bit depth for digital intermediates, I'd guess it would be a reasonable bit depth for RED's dynamic range.
Plus there are usually lots of steps performed in-camera to convert the originally higher bit depth to the final output (color matrix, knee, etc.).
And to be precise, RAW just means the unprocessed (see above) signal; it says nothing about the bit depth or whether it's clamped or float. It could be any. In fact, when I shot HDRIs with the Spheron camera, the images were RAW at first too.

-k

EDIT: Actually, Fincher shot a commercial with the Viper too. It looks similar to the stuff he did before, so that doesn't really seem to be a problem.

http://www.claudiomiranda.com/heineken.html
Rumour is he will shoot his next movie digital too.


#30

That was my point: you can push film much further without revealing the noise and other artifacts. Also, I said that the look they ended up with lent itself well to what they were going after (i.e. it was on purpose). This isn't a bad thing, but it is a current limitation of the medium.

Other things to note,

  1. There aren't any float codecs as far as I know, and only a few that support logarithmic space. Yes, the Viper does, but it writes out raw DPX files. Red is talking about writing out to some type of codec.
  2. Most of the DIs out there are working at 10-bit lin, not log (big difference). It still limits how far you can push things in the DI.
  3. You said 10 bit is good enough for DI, but there is a huge difference there. When you shoot on film and then scan it for DI, you get to pick the best 10 bits to use. When you are shooting digital at 10 bit, you don't get to choose those bits; it's going to chop out information no matter what you do. At least in the DI process you can pick which information is useless.

You shouldn’t get all bent out of shape just because I’m saying that the Viper and other technologies currently aren’t as good as film. It’s a known fact. This will be easily resolved in the next 5 years though.


#31

And my point was that Collateral is not a good example, since they might have been able to just expose it correctly. It's rather difficult to say whether it was actually necessary(!) to amplify the signal.

Not an expert, but I thought the whole fuss about (3D) display LUTs was to keep the data in log.

Not quite. As I said, what you get on “tape” already went through a lot of processing to “compress” your range into the final output's bandwidth. Even the Viper's FilmStream is not raw. Plus, I might be wrong about the DI pipeline, but don't you usually simply scan the film and do the grading with the now-limited range? I can't imagine the scanning process being like a best-light TK session with the client present, deciding what to “throw away” and what to “keep”. So in that respect a DI would be identical to digital capture.

I know this. I “shot” a no-budget movie with the Viper (don't ask) and I know its limitations. In many respects 35mm film is still technically superior to digital capture. As you said, it's a known fact. I was not on a mission to advocate anything; I just felt some info might not be correct.

-k


#32

A 3D LUT simply gives 3 separate values for R, G, B, where a 1D LUT gives an equal curve for the entire image.

The Viper is raw; that's what is so cool about it. It captures direct to DPX files on a hard drive and bypasses tape entirely.

From looking around the Lustre page on the Autodesk site, it apparently works in float colorspace now. Previously it worked only in 10 bit. This is no different than compositing in something other than float (8-, 10-, or 16-bit clamped). AE and Discreet (FFI, C*, etc…) people have been doing it for years.

Scans are done as 10-bit Cineon/DPX log files; to convert log to lin, you give it a black-and-white-point range to grab the color (95-685 by default). So if you use those settings, anything in 0-94 and 686-1023 is clamped. This is exactly like the loglin node in Shake. You just adjust that range to compensate for the whites or blacks so you're not cutting off too much data at the top and bottom. That way the DI work is done with the “good bits” you've destined for it.
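That log-to-lin step can be sketched like this (a Cineon-style conversion with the same 95/685 black/white points; the constants 0.002 density per code and 0.6 gamma are the standard Cineon ones, but treat this as an illustration, not Shake's exact internals):

```python
# Map a 10-bit Cineon log code to scene-linear: 0.0 at the black
# point, 1.0 at the white point, everything outside clamped -- the
# "chopping off" described above.

def cineon_to_lin(code, black=95, white=685, density=0.002, gamma=0.6):
    """10-bit Cineon log code -> linear, clamped to [black, white]."""
    code = max(black, min(white, code))   # codes outside the range clamp
    soft = 10 ** ((black - white) * density / gamma)
    return (10 ** ((code - white) * density / gamma) - soft) / (1 - soft)

print(cineon_to_lin(95))    # 0.0: the black point
print(cineon_to_lin(685))   # 1.0: the white point
print(cineon_to_lin(1023))  # 1.0: everything above 685 is chopped off
```

Adjusting `black` and `white` is exactly the "pick the range" move the post describes.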


#33

I probably misunderstood this post, but AFAIK Lustre (aka 5D/Colorfront Colossus) worked in 10-bit log, as does Da Vinci. That's the whole point of these products: real-time color grading of a 10-bit DI without converting to linear and clamping. Unmodified footage goes through without losing any data. There would be no conversion to linear except for display.


#34

They all convert to linear; none actually work in log space. The math just doesn't work properly and would give strange results (even Cineon worked in linear internally). Actually, if you read carefully, it says on the Lustre product page that it has “Logarithmic (printer light) and linear (telecine) style color correction”. That's not the same as processing in log space.

Oh, and I researched around to check whether what I said about working in 10-bit lin was right or wrong. I was right and wrong :slight_smile: 5D Colossus/Lustre originally converted 10-bit log to 16-bit linear files internally (not 10-bit lin as I previously said). Lustre 2.5 added 32-bit float support. It only works at 16 bit on 1K proxy images to get real-time feedback, and then in the conform, with Burn nodes, it renders the final 2K files, which are processed in float.


#35

No. A normal LUT already supports different look-up values/curves for the three different channels.
I think 3D LUTs are more complicated than that.
wikipedia:


A 3D LUT is defined as a 3D Lattice deformer, which deforms 3D RGB color cube. Often 17x17x17 cubes are used as 3D LUTs. Most of the time RGB 10bit/component log images are used as the input for 3D LUTs. An interpolation engine is needed for calculating the values, which are between vertices, defined by the 3D LUT cube. Current products utilize trilinear interpolation for calculating these values.
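The lattice-plus-interpolation scheme that quote describes can be sketched minimally like this (an identity lattice so the result is checkable; the names are mine):

```python
# A 17x17x17 lattice of RGB output values, sampled with trilinear
# interpolation, as in the quote. The lattice here is the identity
# transform, so the lookup should return its input -- a handy sanity
# check for any 3D-LUT code.

N = 17
# lattice[r][g][b] -> (R, G, B), identity mapping on [0, 1]
lattice = [[[(r / (N - 1), g / (N - 1), b / (N - 1)) for b in range(N)]
            for g in range(N)] for r in range(N)]

def apply_3dlut(rgb, lut, n=N):
    # locate the input inside the cube: cell index + fractional offset
    idx, frac = [], []
    for c in rgb:
        p = min(max(c, 0.0), 1.0) * (n - 1)
        i = min(int(p), n - 2)
        idx.append(i)
        frac.append(p - i)
    (ri, gi, bi), (rf, gf, bf) = idx, frac
    out = [0.0, 0.0, 0.0]
    # trilinear: blend the 8 lattice vertices surrounding the input
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                w = ((rf if dr else 1 - rf) *
                     (gf if dg else 1 - gf) *
                     (bf if db else 1 - bf))
                v = lut[ri + dr][gi + dg][bi + db]
                for k in range(3):
                    out[k] += w * v[k]
    return out

print(apply_3dlut((0.25, 0.5, 0.8), lattice))  # ~ (0.25, 0.5, 0.8)
```

A real grading LUT just swaps the identity lattice for one whose vertices encode the colour transform.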

Taken from the Viper specs:
12-bit linear A-D conversion, mapped to 10-bit logarithmic signals for downstream processing
That's what I mean: the originally higher bit depth (12) gets converted to 10-bit log with a fixed LUT. That's what they call “RAW” or “FilmStream”. That's why some people think it might make sense not to shoot FilmStream and instead use the in-camera processing tools to get the most out of the original data. The quoted spec, however, might indicate that it's not the 12-bit linear data that enters the processing pipeline but rather the FilmStream. In which case FilmStream would be the best you could get out, but still not the initial “raw” data. It's marketing.
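To illustrate that 12-bit-linear-to-10-bit-log mapping (the camera's actual curve isn't public; this borrows Cineon-style constants purely as an example):

```python
# Sketch of a fixed LUT mapping 12-bit linear codes (0-4095) to
# 10-bit log codes, with 4095 landing on a 685 "white" code.
# The point of the log curve: it spends its limited 10-bit codes
# evenly per stop, so shadows get far more codes than a straight
# linear truncation would give them.
import math

def lin12_to_log10(code12, white=685, density=0.002, gamma=0.6):
    """12-bit linear code -> 10-bit log code (Cineon-style constants)."""
    lin = max(code12, 1) / 4095.0                 # avoid log10(0)
    code = white + (gamma / density) * math.log10(lin)
    return int(round(max(0, min(1023, code))))

# One stop near the bottom of the range ...
print(lin12_to_log10(64) - lin12_to_log10(32))      # 90
# ... gets as many codes as one stop at the very top.
print(lin12_to_log10(4095) - lin12_to_log10(2048))  # 90
```

In a linear truncation, the bottom stop (codes 32-64) would collapse into a handful of output values, which is exactly the information the post says gets "chopped out".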

So what? It's exactly what happens in the internal processing of digital cameras. You have higher-bit-depth data from the CCDs and convert it to lower-bit-depth output with gamma, knee, etc. to get the most out of the signal.
The only way to get more out of the scanning process (as opposed to video, assuming they had the same dynamic range) would be to adjust the scanner's LUT for the downconversion to 10-bit log for every specific shot (as in a supervised TK session). I might be wrong, but that's not what happens.

-k


#36

This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.