Using Image Calibration to Reduce Noise in Digital Images
by Jeff Medkeff
This article is about a method of reducing noise in images taken with digital cameras -
especially long exposure and/or high ISO images taken with digital SLRs. The simplified
workflow presented here is derived from more advanced techniques of noise reduction (known
as calibration) that are used in scientific and technical imaging. Because of
limitations in the software tools commonly available to mainstream photographers, and
because of limitations to the image-making equipment, there are several significant
differences between the technique described here and a proper calibration of a scientific
image. Still, the method here can result in significant aesthetic improvement to many
digital images that suffer from noise.
I believe that understanding the sources of noise in digital sensors, and understanding
how the noise changes under different conditions, and how the design of the camera might
affect both of these things, will significantly increase the success rate when reducing
noise in digital images using this calibration method. For that reason I've included a
section on the theory and engineering considerations that impact these techniques. But if
you don't agree that knowing the theory is useful, you can just skip
to the workflow. However, there is a lot that can go wrong during image calibration.
Many photographers consider it to be an advanced technique that takes a lot of knowledge
and skill to perform successfully. Calibration can fail in a very ugly way when something
is done wrong - but it works when you know all the variables that can affect it, and
attend to them.
A note concerning the images: The noise-only images in this article were taken
with a Canon 10D. All of them have had levels adjusted so as to make the noise especially
conspicuous. Please do not consider the visual appearance of these images to be
representative of the actual noise you will encounter in a typical exposure with this (or
any other) camera! The process of adjusting levels greatly exaggerates the appearance of
noise; this is by design. Note, though, that all related images have had their levels
adjusted in an identical manner; this is so those images can be compared to one another
without any differences in processing affecting the visibility of noise. Other than
conversion from RAW with a daylight color temperature setting, adjustment of levels, and
bicubic resampling to make the size manageable, no processing has been done on any image.
The fundamental part in a digital camera sensor is the photosite. This is the
part of the sensor that actually detects light when you are taking an image. The photosite
achieves the detection of light by converting as many photons that strike it as possible
into electrons [1]. These electrons are then stored until the exposure is
completed. Once the exposure is over, the charge at each photosite is measured, and the
measurement is converted into a digital value. This measurement process is called readout.
There are two common types of sensor in digital cameras - CCDs, and CMOS sensors. They
differ in several significant respects.
In a CCD, the electrons in each photosite are transferred, in 'bucket-brigade' style,
to one corner of the chip, where the readout is performed. CCDs use a single amplifier (or
set of amplifiers) to read out the entire chip, so each photosite sends its charge to the
same amplifiers that all the other pixels use.
In CCDs, this readout circuitry sits on top of the photosites and partially obscures
them, so that some of the light falling on a sensor doesn't make it to the photosite to be
detected. Two methods have been devised to address this. One is shaving the CCD chip until
it is very thin, and then mounting it upside down so that light enters the CCD from the
bottom (so that the readout circuitry is then on the back of the chip, underneath the
photosites). CCDs in this configuration are called "back-illuminated" and are
found only on very expensive cameras. Another technique is to place a microlens above the
photosite and its adjacent readout circuitry, which redirects some of the light that would
otherwise strike the circuitry into the photosite instead.
In CMOS sensors, each photosite's amplifier and related circuitry are adjacent to the
photosite, directly on the sensor. CMOS sensors therefore share the CCD's problem of a
significant amount of sensor area being taken up by devices that are not sensitive to
light, but with CMOS sensors the problem is usually quite a bit worse. With CMOS
sensors, the microlens method is very commonly used to help overcome this.
Sources of noise in digital SLRs:
There are four main sources of noise in digital camera images:
- Dark noise: Dark noise is an accumulation of heat-generated electrons in the sensor,
which end up in the photosites and contribute a snow-like appearance to the image. The
related term "dark current" refers to the rate of generation of these electrons,
most of which come from boundaries between silicon and silicon dioxide in the sensor.
- Readout noise aka Bias Noise: Constructing an image from the sensor's photosites
requires that the charge in each photosite be measured, and converted to a digital value.
Making this measurement is part of the process of "reading out" the sensor. But
doing so is an imperfect process. The amount of charge in the photosite is too small to be
measured without prior amplification, and this is the main source of trouble: no perfect
amplifier has been invented, and the amplifiers used on digital imaging sensors add a
little bit of noise, similar to static in a radio signal, to the charge they are
amplifying. The readout amplifier in a sensor is the main contributor to readout noise.
- Photon noise, aka Poisson noise: Photon noise is caused by the differences in arrival
time of light to the sensor. If photons arrived at a constant rate, as though they were
being delivered to the photosite by a conveyor belt at an efficient factory, then there
would be no photon noise. But that isn't how it works. Photons arrive at the photosite
irregularly. One pixel might be lucky enough to be hit with 100 photons in a given amount
of time, while its neighbor only receives 80. If the photo is of an evenly illuminated
surface, this photon noise will show up as one pixel having an improperly low value
compared to an adjacent one.
- Random noise: The remaining noise is traceable to erroneous fluctuations in voltage or
current in the camera's circuitry, to electromagnetic interference, and who-knows-what.
Random noise will vary from image to image and is a result of many influences. One of the
most significant might be random variation in the way electronic components operate at
different times, temperatures, and conditions. Whatever the case, random noise is almost
always infinitesimal - in most modern digital cameras, random noise will not be detectable
in an 8-bit image; it may be barely measurable in a 16-bit image but will very rarely be
visible in a conventional photo.
In addition to these sources of noise, variations in photosite sensitivity across the
sensor, as well as shadows cast on the sensor by dust and dirt, can appear to contribute
"noise" to the image in the form of snow or regions of greater or lesser
apparent sensitivity. Those interested in reducing this form of pseudo-noise may wish to
research the topic of flat fields, but I won't cover it here because these problems are
mostly solved in digital SLRs (as long as the sensor is clean).
Finally, if a cosmic ray strikes a sensor during an exposure, it can result in a very
hot pixel or a spurious streak in the image. This too might look like noise, but it isn't
- it is a legitimate detection of a high-energy particle by a sensor efficient at
detecting high-energy particles.
Characteristics of Photon, Dark, and Bias Noise:
Photon noise is pseudo-random. The arrival times of photons at a photosite describe a
Poisson distribution, and there is essentially nothing post-exposure that can be done
about photon noise. However, the impact of photon noise in the resulting image will be
greater with (a) fast shutter speeds, (b) dimly lit subjects, and/or (c) high
amplification of the signal. So to reduce the visibility of photon noise, longer exposure
times, brighter illumination, and low ISO settings may help.
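The square-root behavior behind these recommendations can be illustrated with a short
simulation. This is only a sketch, not specific to any camera; the photon counts are
arbitrary stand-ins for a short versus a long exposure.

```python
import numpy as np

# Simulate photon (Poisson) noise at many photosites imaging a uniformly
# lit subject. The two mean counts are illustrative only.
rng = np.random.default_rng(42)

for mean_photons in (80, 8000):
    counts = rng.poisson(mean_photons, size=100_000)
    relative_noise = counts.std() / counts.mean()
    print(f"{mean_photons:5d} photons: relative noise ~ {relative_noise:.3f}")
```

With one hundred times as many photons, the relative noise drops by roughly a factor of
ten, which is why longer exposures and brighter subjects reduce the visibility of photon
noise.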
Dark noise accumulates over time, and does so in a very convenient manner: an exposure
time twice as long can be expected to have roughly twice the amount of dark noise. In part
for this reason, long-exposure photographs are troublesome with some digital cameras; but
the increase of dark noise over time suggests a strategy for dealing with the problem.
Dark noise, 32 minute exposure taken at 22° C. Taken with a Canon
10D, levels adjusted to increase noise visibility and image resampled to a manageable size.
Dark noise, 62 minute exposure otherwise identical to the above. As
theory predicts, the dark noise in this image is almost exactly double that in the
previous image in terms of individual pixel values.
Dark noise is caused by heat-generated electrons making their way into the photosites,
so the temperature of the camera's sensor also affects the amount of dark noise in the
images. As the temperature of the sensor goes up, dark noise increases. Different sensors
behave differently, but in general, increasing the temperature of a sensor by six to ten
degrees C will result in the dark noise in the resulting image doubling. While this is a
nonlinear effect, it is at least easy to describe mathematically.
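Taken together, the time and temperature behavior described above can be captured in a
small back-of-the-envelope formula. This is only a sketch: the doubling interval varies
by sensor, and the 8° C value used here is purely illustrative.

```python
# Estimate how dark noise scales with exposure time and sensor temperature,
# assuming (as described above) linear growth with time and a doubling of
# dark current every `doubling_c` degrees C. The 8 C doubling interval is
# an assumption for illustration; real sensors vary.
def dark_noise_scale(time_ratio, temp_delta_c, doubling_c=8.0):
    return time_ratio * 2.0 ** (temp_delta_c / doubling_c)

print(dark_noise_scale(2.0, 0.0))   # twice the exposure, same temperature
print(dark_noise_scale(1.0, 8.0))   # same exposure, sensor 8 C warmer
print(dark_noise_scale(2.0, 16.0))  # twice the exposure and 16 C warmer
```

Under these assumptions, doubling the exposure or warming the sensor by one doubling
interval each double the dark noise, and the two effects multiply.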
Dark noise is not random; in fact, it is highly repeatable. A given photosite on a
sensor will accumulate almost exactly the same amount of dark noise from one exposure to
the next, as long as temperature and exposure duration do not vary.
Bias noise is also highly repeatable - but since it is a result of reading out the
sensor, it does not even depend on shooting conditions being the same. Practically the
only variable affecting readout noise in a digital camera exposure is the amount of
amplifier gain. As long as the amplifier gain remains the same, readout noise will be
nearly identical from shot to shot. In general, doubling amplifier gain can be expected to
approximately double the amount of readout noise.
In digital cameras, photographers have nearly direct control over amplifier gain by
adjusting the ISO setting. Increasing ISO increases amplifier gain, and reducing ISO
reduces gain. As you would expect, bias noise in digital images is usually less
conspicuous when lower ISO settings are used. At any given ISO setting, the bias noise is
going to be very nearly the same from one image to the next.
Bias noise at 1600 ISO in a Canon 10D. Levels adjusted and image
resampled to a manageable size.
Bias noise at 3200 ISO in a Canon 10D, otherwise identical to above.
Bias noise in this frame is approximately double that in the previous frame.
Since dark and bias noise are not random and are consistent from image to image,
techniques have been developed to allow scientific and technical imagers to remove these
sources of noise from their images. This process is called "calibrating" the
image. Dealing with both dark and bias noise involves making two special images and
subtracting them from the photo. The first image is a bias frame - a zero-duration
exposure in which the sensor is reset and immediately read out, without any light falling
on the sensor and with no time gap between the reset and readout. The image that this
process creates is a snapshot of what the sensor's bias noise looks like, since the only
contribution to the resulting image is the readout amplifier's static.
The other special image is the dark frame. This is most commonly an exposure of the
same duration, taken at the same sensor temperature, as the photo. Since no light is
allowed to fall on the sensor, the resulting image shows only an accumulation of dark
noise (plus bias noise - since to get the image you have to read out the sensor). For
various reasons, in most scientific imagery the bias and dark frames are generated as
separate steps and subtracted from the photo separately, in a defined sequence.
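The sequence just described amounts to simple pixel arithmetic. The following sketch
uses synthetic, noiseless arrays to show the bookkeeping; real frames would of course
also contain random variation.

```python
import numpy as np

# Sketch of the scientific calibration sequence described above, using
# synthetic arrays in place of real frames. Note that the dark frame as
# read out contains bias noise too, so the bias is removed from it first.
h, w = 4, 4
signal  = np.full((h, w), 1000.0)      # the light we actually want
bias    = np.full((h, w), 50.0)        # readout (bias) noise pattern
thermal = np.full((h, w), 120.0)       # dark-noise accumulation

light_frame = signal + thermal + bias  # what the camera records
dark_frame  = thermal + bias           # lens capped, same duration
bias_frame  = bias                     # zero-duration readout

thermal_only = dark_frame - bias_frame                  # step 1: isolate dark noise
calibrated   = light_frame - bias_frame - thermal_only  # step 2: subtract both

print(np.allclose(calibrated, signal))  # only the signal remains
```

Subtracting the bias from the dark frame first is what allows the dark frame to be
scaled for a different exposure time without scaling the bias along with it.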
However, most digital cameras do not allow a zero-duration exposure without the use of
special software - such as testing software used by camera service departments, or
expensive software written specifically for science and engineering applications, which
might require that physical modifications be made to the camera to operate properly. For
this reason, most photographers who are calibrating their digital SLR images are doing so
with a single combined bias and dark frame, taken as an exposure at the same ISO, shutter
duration, and ambient temperature as the photograph.
A bias frame (3200 ISO).
A bias frame shot approximately an hour later at the same ISO.
The result of subtracting the second bias frame from the first.
Theory suggests that this process should result in a nearly black image, save for any
remaining random noise. The dramatic reduction in noise is clearly visible in this
calibrated image. All three images' levels have been adjusted identically.
CMOS sensors allow the placement of both photosites and transistors on the sensor
itself. (CCDs cannot have any processing circuitry built into the sensor - just
transfer gates and the like, which are controlled by off-sensor control circuitry.)
Because of this, CMOS sensors generally have at least the readout amplifier built in to
the photosite. There may be other transistors as well, which perform other processing
steps. It is now very common for a CMOS sensor to include noise-reduction circuitry
directly on the sensor alongside the readout amplifier. In some designs, a sort of small
dummy photosite, shaded from light, is used to quantify the likely dark noise level in the
actual photosite, and this quantity is subtracted during readout. In other designs, a
constant - corresponding to the tested dark current of the sensor - is subtracted from the
photosite value during readout. If anything like this is happening, expectations such as
"dark noise will double with twice the exposure duration" may turn out to be incorrect.
In addition, this on-sensor circuitry can be designed to subtract the amount of bias
noise that the sensor designer expects will be contributed to that particular pixel. This
is a design-time decision, so bias noise may still be introduced due to manufacturing
variations, erroneous expectations on the part of the designer, changes in other circuitry
at a later point in development that the designer decided not to compensate for, and so
forth. In any case, if bias noise is being addressed in a CMOS sensor camera - and it is
being aggressively dealt with in all known current DSLRs - the relationship between
ISO and readout noise in a particular camera's images might not be as simple or as
repeatable as expected.
Note that both of these kinds of on-sensor processing affect the camera's RAW image.
That is to say, the RAW image is not necessarily "exactly what the sensor
detected," as is often said. Instead, it is exactly what the sensor detected, plus or
minus whatever built-in, on-sensor processing is being done in that particular camera. The
raw image lacks any post-readout processing, of course - the point is that on CMOS sensors
some processing may be unavoidable, and its effects will be present in the raw data.
Of course, in-camera processing after readout alters the noise profile a great deal as
well. No JPEG image can have its noise reduced by the calibration steps described here -
the data is too drastically altered by compression to allow dark or bias subtraction to
work right. In-camera resampling, resizing, binning [2], sharpening, and noise
reduction will all change the appearance of the noise in the image and the way it varies
by exposure time, temperature, and ISO setting.
Despite this, digital camera photos can often be beneficially calibrated to reduce
noise. Although aggressive noise reduction is probably occurring in any modern camera
either during or just after readout, the residuum of noise that is not addressed is often
largely non-random and consistent from shot to shot.
A ten-second exposure, taken with the lens covered, at ISO 3200.
An identical shot taken about two hours later.
The result of subtracting the second shot from the first. Theory
predicts that this will result in a nearly noise-free image, which is clearly visible
here. All three images' levels have been set identically.
In practical terms, for the average digital SLR user, a combined bias-dark frame is the
only feasible calibration frame to apply to their images.
In Photoshop, there are probably a dozen ways to subtract one image from another. In
the workflow below, I will describe how to do it using layers. Use whatever method works for you.
In the following, a "photo" is a picture of a subject that you want to
calibrate. A "calibration frame" is a special image of the camera's noise
characteristics that you will subtract from the photo - in this case a combined bias and dark frame.
Taking the photo and calibration frame:
- Set the camera to take RAW format images.
- Turn off any in-camera sharpening. (Contrary to popular opinion or general rules of
thumb, with some cameras the RAW image will be affected by in-camera sharpening.)
- Set the camera properly, paying special attention to ISO and exposure time.
- Take the photo.
- Put the dust cap on the lens.
- Put the eyepiece cover, if available, on the eyepiece so that no light can get in from
the back end of the camera.
- Double check that the ISO and exposure time settings are the same as used when taking
the photo in step 4. (Taking the calibration frame at a different lens aperture is not
recommended, since this can introduce random noise of a different profile than that in the photo.)
- Wrap the camera with a dark towel or other fabric. (This may be overkill if the lens cap
is good - use your judgment, but ensure no light reaches the sensor while performing the exposure.)
- Take the exposure.
- Open the photo in your raw conversion software.
- Select a white balance for the photo.
- Make no other alterations in the raw conversion. In particular, do not modify levels
in such a way as to clip dark values, and do not allow the RAW converter to apply sharpening or noise reduction.
- Open your calibration frame in the raw conversion software and apply the same conversion
settings to it as will be used for the photo.
- Convert the raw images to maximum bit depth TIFF files.
- Load both the TIFF files in Photoshop.
- Select the calibration frame.
- Press "ctrl-a" or choose "All" from the Select menu to select
all of the calibration frame.
- Press "ctrl-c" or choose "copy" from the Edit menu to copy
the calibration frame to the clipboard.
- Select the photo.
- Press "ctrl-v" or choose "paste" from the Edit menu to paste
the calibration frame into your photo as a new layer.
- Close the calibration file.
- Select the new layer in your photo - the one that was created by copying the calibration
frame (we will call this the calibration layer).
- Open the Blending Options dialogue (or make the following adjustments at the top
of the Layers window).
- Select "difference" for Blend Mode.
- Select 100% for Opacity.
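For those who prefer working outside Photoshop, the subtraction those steps perform can
be sketched in a few lines of numpy. The pixel values here are synthetic; in practice
you would load the two converted TIFF files with an image library of your choice. The
sketch subtracts in a wider integer type and clips, so that pixels darker than the
calibration frame clamp to black rather than wrapping around (Photoshop's difference
mode instead takes the absolute value).

```python
import numpy as np

# Synthetic 16-bit stand-ins for the converted photo and calibration frame.
photo       = np.array([[5000, 5200], [5100, 6000]], dtype=np.uint16)
calibration = np.array([[ 200,  400], [ 300,  250]], dtype=np.uint16)

# Subtract in int32, then clip back into the 16-bit range, so that any
# pixel darker than the calibration frame clamps to black.
calibrated = np.clip(
    photo.astype(np.int32) - calibration.astype(np.int32),
    0, 65535,
).astype(np.uint16)

print(calibrated)
```

The result is the photo with the repeating dark and bias noise pattern removed,
pixel by pixel and channel by channel.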
At this point, your photo is calibrated, and if the photo lacks significant amounts of
photon noise and random noise, it should look significantly better than it did before.
A 1:1 crop of a photo taken in poor lighting in a coffee shop at a high ISO setting.
The same part of the photo after calibration.
You can now proceed however you like, as long as you follow a simple pair of rules:
- If you use adjustment layers, place them above both the photo (background) layer
and the calibration layer. Putting an adjustment layer in between the two will destroy the calibration.
- If you make any destructive alterations to the image, flatten the image first. A
"destructive alteration" is anything in Photoshop that changes the image, for
which an adjustment layer is not available.
If you are not very familiar with Photoshop, or the two rules don't make a lot of
sense, it is probably best to just flatten the image immediately after calibration.
Of course, after you have done the calibration you can still apply noise-reduction
software, such as Noise Ninja, to further attack photon noise and random noise.
If there are problems:
- If you suddenly lost a lot of dynamic range in your image, and white stuff turned gray,
but at least the snowy noise disappeared, congratulations - this is expected. You are
after all subtracting from the pixel values in the original image. This means that getting
the proper exposure in the first place is even more critical if you want to maximize
dynamic range. If you have a severely underexposed image in which the brightest value is
only halfway to the right in the histogram, you can expect those highlights to move even
farther to the left after calibration. Calibration is not a good way to rescue impossible
images; it can only help reduce the appearance of noise in a well-exposed image that lacks
significant photon and random noise.
- If your hot (bright) grainy pixels have turned to dark grainy pixels, reduce the opacity
of the calibration layer. You might find that somewhat lesser opacity results in a good
calibration. If no opacity level does any good for your image, it may be time to blame
photon and random noise (possibly exacerbated by the photo being badly underexposed?), and
give up by moving on to Noise Ninja or the like.
- If you find that a few unusually hot pixels in the calibration frame are punching dark
holes in your photo after calibration, you might calibrate using a tool like Blackframe NR freeware, which
detects and corrects this condition.
- If you see a moiré pattern in your image after calibration - especially in dark portions
of the photo - you can take the usual steps to filter this out. You might protest that the
moiré wasn't there in your original photo. You are right; it was obscured by noise. By
calibrating out the noise, you are now showing just how few bits you were using to
represent that portion of the photo. You will just have to deal with this, as it is one of
the costs of having a nearly noise-free image.
- If your image erupts with a case of what look like JPEG artifacts, clusters of bright
pixels, rivers of dark areas, bad halos, and the like, then something has gone badly
wrong. Possibly your calibration frame was taken with significantly different camera
settings or at a significantly different temperature. Possibly you have turned on some
mysterious (to the author) adaptive or heuristic noise-reduction feature of the camera or
RAW converter, which is making the noise profile vary wildly. Maybe you mistakenly
selected the wrong blending mode for the calibration layer. Possibly you have tried to
calibrate a JPEG. Frequently when this happens the camera is found to be sharpening the
image. Go back through the process and try to figure out what went wrong. If you can't
figure it out, you might re-convert your RAW frames to linear TIFFs and see if subtraction
works with those.
Advanced calibration methods:
- You can take multiple calibration frames if you want. Doing so reduces the impact of
random noise in the calibration frame, and results in a better sample of the repeating
bias and dark noise. The way to do it is to take your multiple calibration frames and
"median combine" them. (This is not the same as using a median filter in
Photoshop.) The process of median combination makes a list of each pixel value in each
channel of each calibration frame at each pixel location, and builds a new calibration
frame by selecting the median value from that list for that pixel and channel. Various
software packages (Maxim DL, GIMP) allow convenient median combination of TIFF images, but
as far as I know Photoshop is not one of them.
- You can take your calibration frame at a different exposure time, ISO, or temperature if
you like, and scale the frame accordingly. For example, you can take a calibration frame
at the same ISO and temperature, but at twice the exposure duration. You can then divide
the pixel values in that image by two before calibrating the photo. However, check
"complicating factors" above for why this might not work well.
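Both advanced methods boil down to simple array operations. The sketch below uses small
synthetic arrays in place of real calibration frames; the exposure times and pixel
values are made up for illustration.

```python
import numpy as np

# 1. Median combination: stack several calibration frames and take the
#    per-pixel median, which rejects random-noise outliers while keeping
#    the repeating dark and bias pattern.
frames = np.stack([
    np.array([[100.0, 102.0], [ 99.0, 101.0]]),
    np.array([[101.0, 100.0], [100.0, 250.0]]),   # 250: a random outlier
    np.array([[ 99.0, 101.0], [101.0, 100.0]]),
])
master = np.median(frames, axis=0)
print(master)   # the outlier at [1, 1] is rejected

# 2. Scaling: a dark frame shot at twice the photo's exposure time can be
#    divided by two, assuming dark noise really does grow linearly with
#    time (on-sensor noise reduction, discussed above, can break this).
dark_20s = master          # pretend the master frame was a 20 s exposure
dark_10s = dark_20s / 2.0  # estimated dark frame for a 10 s photo
print(dark_10s)
```

In real use each frame would be a full-resolution, per-channel array loaded from a
TIFF, but the arithmetic is identical.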
[1] There are a few sensor types in which photons result in a charge dissipation rather
than an accumulation - but to keep the scope of this discussion manageable, I'll just
assume every sensor works the same way.
[2] Binning is technically impossible in most CMOS sensors and must be simulated
digitally after readout. It is, however, very commonly available in CCDs.
© Copyright 2004 Jeff Medkeff