A Site for Photographers by Photographers

Linear RAW data

Candy McG , Feb 22, 2007; 03:42 p.m.


I have some larger questions about RAW, but I believe this specific one may sort things out for me, or at least lead to some pointed follow-up questions.

It is said that if one viewed the RAW histogram of a file in its original, linear (meaning non-gamma-corrected) form, the data would be skewed to the left of the histogram and the image would appear dark and not as contrasty. I think I have my brain wrapped around why the image would appear darker, but why would the histogram be pushed to the left? If one could answer this in terms of the popular ETTR (expose to the right) methodology, it would be appreciated.



Emre Safak , Feb 22, 2007; 03:50 p.m.

It is merely encoded in linear gamma. The color management engine then converts it to your monitor's color space (color temperature and gamma). What you said would be true of a naive application that did not do the appropriate color space conversions.

In other words, you can encode your data with any gamma you want without it looking wrong (dark or bright), so long as you use a color-managed application.

Rainer T , Feb 22, 2007; 04:46 p.m.

To explain this, I have to simplify things a bit ...

Human perception is "nonlinear". This means that if you hear a tone and then a second tone twice as loud, the energy needed to produce the second tone is more than twice as much.

If you see light and a second later you see the brightness doubling, the number of photons per square mm (or square inch) has more than doubled.

Now consider the sensor of a digital camera ... imagine that a single sensor cell just 'counts' photons. If the brightness doubles for a human, the number of photons has more than doubled for the sensor.

The relation between the perception of the sensor and the human perception can be expressed by a function (the gamma function).

Since the perception functions are not identical, the histograms of the two perceptions will not be identical either. So why does the linear histogram of a raw file have so many values on the left side (making it so important to expose to the right)?

The numerical values of the sensor have a certain range ... 0 to 4095 for a sensor with 12-bit A/D conversion. Half of this range (2048-4095) is obviously dedicated to the last doubling of the number of photons (which is not even twice as bright for humans). Another quarter (1024-2047) is dedicated to another photon doubling (again not even twice as bright for humans).

If you expose for the highlights (so that they are just not blown), the brightest portions in an image would have the sensor value of 4095. One f-stop below this (still quite bright), the value is already below 2048 ... and so on.

So half of the uncorrected linear raw-histogram would just be for highlights.
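Rainer's level counting can be sketched in a few lines of Python. This is illustrative only; a 12-bit linear encoding (values 0-4095) is assumed, as in his example:

```python
# Count how many of the 4096 levels of a 12-bit linear encoding
# are devoted to each f-stop below clipping.
def levels_per_stop(bit_depth=12, stops=6):
    top = 2 ** bit_depth           # 4096 levels: values 0-4095
    counts = []
    for s in range(stops):
        hi = top // (2 ** s)       # upper bound of this stop
        lo = hi // 2               # one stop down = half the signal
        counts.append(hi - lo)     # levels devoted to this stop
    return counts

print(levels_per_stop())  # [2048, 1024, 512, 256, 128, 64]
```

Each successive stop down from clipping gets half the remaining levels, which is exactly why the top stop alone occupies the entire right half of a linear histogram's scale.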

It should be noted that the histogram displayed in the camera is always a corrected histogram.

I hope that made some sense ... Rainer

Candy McG , Feb 22, 2007; 04:53 p.m.


I'm not sure if you understood the question or I didn't understand your answer. In case it's on your end, some edification: RAW data is always encoded with a gamma of 1.0, meaning yes, it's linear. For it to be seen properly, a conversion takes place when you process your file. This gamma correction of about 0.45 distributes the data out and makes the image look "normal" to us, because that's the way our vision works, with this sensitivity curve. Every JPEG taken with a digital camera has already had this curve applied to it, and even while viewing RAW files in their viewers (such as Adobe Camera Raw), most display the preview image and histogram as what they would be after conversion, not the actual RAW image as it would look, or its histogram.

So when I say the image looks dark, it's because it "looks" dark to us because it hasn't been corrected yet. It has nothing to do with color management; indeed, I don't care how sophisticated the software is, it's not going to correct our eyes! My question was: I know why it looks dark, but why is the histogram, which should be spread out as far as I can figure, hugging the left of the dial?

Emre Safak , Feb 22, 2007; 05:50 p.m.

It should not be spread out evenly because brightness drops with the square of the distance. Gamma correction is a clever way of coping with this. Otherwise we would be spending all our energy trying to figure out what is going on in the shadows, where all the information is huddled. Gamma encoding is what makes the histogram even.
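The redistribution Emre describes can be sketched numerically. A plain power curve with an exponent of roughly 1/2.2 is assumed here, standing in for whatever encoding curve an actual converter applies:

```python
# Gamma-encode normalized linear values (0.0-1.0) with an exponent
# of roughly 1/2.2 and see how the scale is redistributed.
def gamma_encode(linear, gamma=1 / 2.2):
    return linear ** gamma

# One stop below clipping (linear 0.5) encodes well above mid-scale:
print(round(gamma_encode(0.5), 3))   # 0.73
# A deep shadow at 1% of full scale is lifted to about 12%:
print(round(gamma_encode(0.01), 3))  # 0.123
```

The dark end of the linear range gets stretched across much more of the encoded scale, which is what evens out the displayed histogram.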

Candy McG , Feb 22, 2007; 06:15 p.m.


Thanks for the reply, but this last bit is precisely where I have trouble:

"So half of the uncorrected linear raw-histogram would just be for highlights."

So why would the linear RAW histo be pushed to the left if half of its values are in the upper register (2048-4095)?

Just so we get our terms right, what are you putting on the x and y axes of the histogram, and why? It seems that you're putting f-stops on the x....? To me, I would have "photons" on the y and the respective levels (0-4095) on the x. A "normal" scene would, to the sensor, be evenly distributed across the whole histogram, or as you say, half the information would be in the upper register, meaning the right-hand side. If we were looking at it with respect to f-stops on the x axis, I suppose it would be compressed.....? I'm still blocked here, as you can see. Some help?


We all know gamma encoding is a way of distributing the info; my question is why that info is packed to the left in the first place. And I think we are indeed spending all of our time trying to figure out what's going on in the shadows, if the "expose to the right" method is correct.

Kuryan Thomas , Feb 22, 2007; 06:55 p.m.

I think the camera histogram is gamma-corrected. So if it were showing non-gamma-corrected data, it would be pushed to the left.

In fact, this is one reason why camera histograms can be misleading. At least on most Nikons, they show the histogram for the JPEG image that would result if the raw file were converted using the current in-camera settings. So even your contrast, white balance, and other settings that have no effect on the raw file will affect the histogram.

I believe that if the histogram were shown without any gamma correction, i.e., as a straight linear bin count of sensor values straight off the AD converter(s), it would not be skewed to the left. Since the typical camera uses a 12-bit ADC, you would get a histogram whose x axis ran from 0 through 4095 and a count of the number of pixels at each value. No color information could be shown for Bayer-interpolated cameras, because each pixel is just a linear grayscale value. To even have a concept of color, the raw data must be colorimetrically interpreted and gamma-corrected.

ETTR works because there are more levels between 2048 and 4096 than there are between 1 and 2. So imagine a light source slowly increasing in brightness and being recorded as a linear value. There would be a step from 1 to 2 - a doubling - that couldn't be resolved any further. There would be more than 2,000 steps from 2048 to 4096 - also a doubling - so you'd get a much better tonal ramp, with less posterization, in that last f-stop than you would in the first f-stop.

Plus, at lower light levels, the sensor's output is more noisy.
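Kuryan's posterization argument can be illustrated with a toy quantizer: the same one-stop tonal ramp recorded near clipping versus four stops underexposed retains far fewer distinct raw levels in the dark version. This is a sketch of the arithmetic only, not a model of any real sensor:

```python
# Quantize a smooth one-stop tonal ramp with a 12-bit linear ADC
# at two exposures: near clipping vs. four stops underexposed.
def distinct_levels(peak, samples=1000):
    # ramp covering one f-stop: from peak/2 up to peak
    raw = {int(peak / 2 + (peak / 2) * i / (samples - 1))
           for i in range(samples)}
    return len(raw)  # distinct integer raw levels recorded

print(distinct_levels(4095))  # ETTR: 1000 distinct levels survive
print(distinct_levels(255))   # 4 stops down: only 129 levels
```

The underexposed ramp has to make do with a fraction of the tonal steps, which is where the visible posterization comes from.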

Kuryan Thomas , Feb 22, 2007; 07:05 p.m.

I believe that if the histogram were shown without any gamma correction, i.e., as a straight linear bin count of sensor values straight off the AD converter(s), it would not be skewed to the left.

Thinking about this some more, I think I may be wrong in my statement above. It could be that the histogram is pushed to the left even for a non-gamma corrected histogram - simply because, at a properly exposed level, there are more dark tones than bright tones. It's just that our gamma-corrected eyes "push up" the dark tones and we see a "normal" scene.

I admit I don't know for sure.
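Kuryan's hunch can be tested with a toy model: spread tones evenly on a perceptual (gamma-2.2) scale, decode them to linear light, and most of them land in the lower half of the linear range. A plain gamma of 2.2 is assumed; this is synthetic data, not a real scene:

```python
# Tones evenly spaced on a perceptual (gamma-2.2) scale, decoded
# back to linear light: the bulk falls into the lower half.
tones = [i / 100 for i in range(101)]   # even perceptual spread
linear = [t ** 2.2 for t in tones]      # decode to linear light
below_half = sum(1 for v in linear if v < 0.5)
print(below_half, "of", len(linear), "linear values are below 0.5")
```

So even a scene whose tones look evenly distributed to us produces a linear histogram weighted toward the left.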

Andrew Rodney , Feb 22, 2007; 09:04 p.m.

No, you're correct: the camera histogram represents a gamma-corrected image, your JPEG based on the camera settings, NOT the Raw file (which would look odd, all pushed to the left).


Mike Blume , Feb 23, 2007; 12:35 a.m.

Candy, you aren't the only one confused by this information. First of all, half of any histogram (raw or processed) is composed of highlights simply because of the way "highlights" is defined, i.e. the upper 50% of the intensity distribution. Let me begin by describing my understanding of histograms:

A histogram is a graphical representation of a population distribution. The X axis is divided into any number of non-overlapping bins (intervals), and each of these bins is assigned a value or range of values based on a variable of the population under consideration. Thus, if age were the variable, the first bin might contain all individuals up to and including 5 years of age; the next bin all those greater than 5 years but not more than 10 years in age; and so on. The Y axis is simply the number of individual instances which fall within the category defined on the X axis. Also note that most histograms use a linear scale on the X axis, i.e. each bin has the same absolute width (there may be exceptions, but they don't concern us here).

The common histogram we deal with in photography has the X-axis bins defined as equal intervals on a scale of intensity, and this scale will usually encompass the lowest to the highest intensity values that can be measured. The X axis is not logarithmic (f-stop). The Y axis then represents the number of individuals (pixels or photosites) whose intensity falls within the defined range.
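Mike's definition translates directly into code. A minimal binning sketch over the 12-bit intensity range (the bin count and range here are arbitrary choices, not anything a camera actually uses):

```python
# A minimal histogram: equal-width bins on the intensity (X) axis,
# pixel counts on the Y axis.
def histogram(values, bins=8, lo=0, hi=4096):
    width = (hi - lo) / bins
    counts = [0] * bins
    for v in values:
        idx = min(int((v - lo) / width), bins - 1)  # clamp top edge
        counts[idx] += 1
    return counts

print(histogram([0, 100, 2048, 4000, 4095]))  # [2, 0, 0, 0, 1, 0, 0, 2]
```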

Now Rainer has provided an excellent explanation of the difference between intensity as it is detected by the physical sensor (number of photons), and intensity as it is sensed by the human eye. In constructing a histogram from this information we must first decide which of these intensity values we wish to graph. First choose the so called "linear" values, i.e. those as recorded by the sensor, and let's call this the "physical" histogram (the raw histogram). Next let us consider how this histogram would change if we substitute the intensity values as sensed by the eye and let's call this the "sensory" histogram .

This is where, for me, the explanation begins to get confusing. Consider the pixels whose intensity on the physical histogram falls between 4095-4096. These are the "brightest" pixels and would remain so when translated to the sensory histogram, so their position would not change. Next consider the pixels whose intensity on the physical histogram falls between 2047-2048 (halfway on the scale). As Rainer explains it, these pixels would be sensed as less than half as bright, and consequently their position on the sensory histogram would shift to the left, i.e. be situated at some value less than half of the full scale. And the same translation from physical values to sensory values, for all values greater than zero and less than full scale, would result in the bins of pixels being shifted to the left on the intensity (X) axis. Now I know that is just the opposite of what has been described, and therein lies my confusion.
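The direction of the shift can be checked with a line of arithmetic: gamma correction for display raises normalized values to a power less than one, which moves a mid-scale value to the right, not the left. A plain 1/2.2 power curve is assumed:

```python
# Map the mid-scale raw value 2048 through a plain 1/2.2 power
# curve and see which way it moves on the 0-4095 scale.
raw = 2048
normalized = raw / 4095                # 0.0-1.0
corrected = normalized ** (1 / 2.2)    # display gamma correction
print(round(corrected * 4095))         # 2989: right of center
```

In other words, going from the physical histogram to the corrected one spreads values rightward; the leftward shift would happen in the opposite direction, going from perceptual values back to linear.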

While I am in an expostulatory mode, let me comment on another common practice in describing the information captured by digital sensors (or film for that matter). This involves a mathematical sleight-of-hand which attempts to make profound that which is obvious. I am referring to the frequently repeated statement that the highlights of an image contain fully half of all information and that the shadows contain very little information, as if this revelation should be enlightening if not surprising. If you start out by dividing the information into grossly unequal-size containers (not analogous to the bins in our histogram) and define the size of these containers on a log (base 2) scale, then this is nothing more than belaboring the obvious. I keep thinking that I must be missing something, and if someone could explain what that is, I would be most grateful.


