"From Light to Ink" featured the work of Canon Inspirers and contest winners, all printed using Canon's imagePROGRAF printers. The gallery show revolved around the discussion of printing photographs...
Getting photographs right in the camera is a combination of using your imagination, creativity, art, and technique. In Part 3 of this three part series, we focus on shooting strategy and the role of...
There seems to be a lot of confusion among some new digital camera owners
about exactly what the difference is between RAW, JPEG and TIFF files. This
article is intended to be a very basic guide to these file types and how they are
related in a typical digital camera.
First some basics
The digital sensor in the majority of digital cameras is what is known as a
BAYER PATTERN sensor. This relates to the arrangement of red, green and blue
sensitive areas. A typical sensor looks like this:
Each pixel in the sensor responds to either red, green or blue light and there
are 2 green sensitive pixels for each red and blue pixel. There are more green
pixels because the eye is more sensitive to green, so the green channel is the
most important. The sensor measures the intensity of light falling on it. The
green pixels measure the green light, the red the red and the blue the blue. The
readout from the sensor is of the form color:intensity for each individual pixel,
where color can be red, green or blue and intensity runs from 0 to 4095 (for a
12-bit analog to digital converter).
A conventional digital image has pixels which can be red, green, blue or any
one of millions of other colors, so to generate such an image from the data
output by the sensor, a significant amount of signal processing is required. This
processing is called Bayer interpolation because it must interpolate (i.e.
calculate) what the color of each pixel should be. The color and intensity of
each pixel is calculated based on the relative strengths of the red, green and
blue channel data from all the neighboring pixels. Each pixel in the converted
image now has three parameters: red:intensity, blue:intensity and
green:intensity. In the end the calculated image looks something like this:
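A toy sketch of Bayer interpolation may make this concrete. This is not any camera's actual algorithm (real converters use far more sophisticated interpolation); it simply averages the nearest neighbors of each color, using hypothetical 12-bit values:

```python
# Toy bilinear demosaic of an RGGB Bayer mosaic (a sketch, not a real
# converter's algorithm). Each sensor pixel holds a single 12-bit
# intensity (0-4095); its color is implied by its position in the
# repeating 2x2 RGGB tile.

def bayer_color(row, col):
    """Color sampled at (row, col) in an RGGB pattern."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

def demosaic(mosaic):
    """Estimate a full (R, G, B) triple for every pixel by averaging the
    nearest neighbors (including the pixel itself) that sampled each color."""
    h, w = len(mosaic), len(mosaic[0])
    out = []
    for r in range(h):
        row_out = []
        for c in range(w):
            sums = {"R": 0, "G": 0, "B": 0}
            counts = {"R": 0, "G": 0, "B": 0}
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < h and 0 <= cc < w:
                        color = bayer_color(rr, cc)
                        sums[color] += mosaic[rr][cc]
                        counts[color] += 1
            row_out.append(tuple(sums[k] // counts[k] for k in "RGB"))
        out.append(row_out)
    return out

# A 4x4 mosaic of 12-bit readings under flat gray light:
flat = [[2048] * 4 for _ in range(4)]
rgb = demosaic(flat)
print(rgb[1][1])   # flat input -> every channel interpolates back to 2048
```

The key point is visible in the code: the sensor delivers one number per pixel, and the three numbers per pixel of a normal image are calculated, not measured.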
RAW data (which Nikon call NEF data) is the output from each of the original
red, green and blue sensitive pixels of the image sensor, after being read out of
the array by the array electronics and passing through an analog to digital
converter. The readout electronics collect and amplify the sensor data and it's
at this point that "ISO" (relative sensor speed) is set. If readout is done with
little amplification, that corresponds to a low ISO (say ISO 100), while if the
data is read out with a lot of amplification, that corresponds to a high ISO
setting (say ISO 3200). As far as I know, RAW isn't an acronym; it doesn't stand
for anything, it just means raw, unprocessed data.
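The role of amplification can be sketched with illustrative numbers (the gain figures below are hypothetical, not real sensor characteristics):

```python
# Sketch of ISO as readout gain (illustrative numbers, not a real sensor
# model): the same captured analog signal is amplified more at a high ISO
# setting, and the digitized result is clipped to the 12-bit ceiling.

ADC_MAX = 4095

def read_out(sensor_signal, gain):
    """Amplify the analog signal, then digitize it to a 12-bit value."""
    return min(round(sensor_signal * gain), ADC_MAX)

signal = 120.0                   # dim scene: small analog signal
print(read_out(signal, 1.0))     # low gain ("ISO 100") -> 120
print(read_out(signal, 32.0))    # high gain ("ISO 3200") -> 3840
print(read_out(200.0, 32.0))     # a brighter area clips at 4095
```

This is also why ISO, unlike white balance, can't be changed later: the amplification happens before the data is ever written to the card.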
Now one of two things can be done with the RAW data. It can be stored on the
memory card, or it can be further processed to yield a JPEG image. The diagram
below shows the processes involved:
If the data is stored as a JPEG file, it goes through the Bayer interpolation,
is modified by in camera set parameters such as white balance, saturation,
sharpness, contrast, etc., is subjected to JPEG compression and then stored. The
advantage of saving JPEG data is that the file size is smaller and the file can
be directly read by many programs or even sent directly to a printer. The
disadvantage is that there is a quality loss, the amount of loss depending on how
much compression is used. The more compression, the smaller the file but the
lower the image quality. Lightly compressed JPEG files can save a significant
amount of space and lose very little quality.
RAW to JPEG or TIFF conversion
If you save the RAW data, you can then convert it to a viewable JPEG or TIFF
file at a later time on a PC. The process is shown in the diagram below:
You'll see this is pretty similar to the first diagram, except now you're
doing all the processing on a PC rather than in the camera. Since it's on a PC
you can now pick whatever white balance, contrast, saturation, sharpness etc. you
want. So here's the first advantage of saving RAW data. You can change many of
the shooting parameters AFTER exposure. You can't change the exposure (obviously)
and you can't change the ISO, but you can change many other parameters.
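Why white balance in particular can be changed after the fact may be clearer in code: during conversion it amounts to per-channel scaling of the linear sensor values. The multiplier sets below are purely illustrative, not real camera presets:

```python
# Why white balance can change after exposure when you keep the RAW data:
# in conversion it is just a per-channel gain applied to the linear sensor
# values (the multiplier values here are illustrative, not real presets).

def apply_white_balance(rgb, multipliers):
    """Scale linear (R, G, B) by per-channel gains, clipped to 12 bits."""
    return tuple(min(round(v * m), 4095) for v, m in zip(rgb, multipliers))

raw_pixel = (1000, 2000, 1500)     # linear 12-bit values for one pixel

daylight = (2.0, 1.0, 1.5)         # hypothetical multiplier sets
tungsten = (1.3, 1.0, 2.4)

print(apply_white_balance(raw_pixel, daylight))   # (2000, 2000, 2250)
print(apply_white_balance(raw_pixel, tungsten))   # (1300, 2000, 3600)
```

With a JPEG, one such scaling has already been applied and baked into 8-bit data, so later corrections have to work on top of it; with RAW you simply pick different multipliers and convert again.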
A second advantage of shooting a RAW file is that you can also perform the
conversion to an 8-bit or 16-bit TIFF file. TIFF files are larger than JPEG
files, but they retain the full quality of the image. They can be compressed or
uncompressed, but the compression scheme is lossless, meaning that although the
file gets a little smaller, no information is lost. This is a tricky concept for
some people, but here's a simple example of lossless compression. Take this
string of digits: 1433333788888892
Is there a way to store this that doesn't lose any digits, but takes less
space? The answer is yes. One way would be as follows: 143⁵78⁶92
Here the string 33333 has been replaced by 3⁵, meaning a string of five 3s,
and the string 888888 has been replaced by 8⁶, meaning a string of six 8s.
You've stored exactly the same data, but the "compressed" version takes up less
space. This is similar (but not identical) to the way lossless TIFF compression
works.
I said above that the data could be stored as an 8-bit or 16-bit TIFF file. RAW
files from most high-end digital cameras contain 12-bit data, which means that
there can be 4096 different intensity levels for each pixel. In an 8-bit file
(such as a JPEG), each pixel can have one of 256 different intensity levels.
Actually, 256 levels is enough for a finished image, and all printing is done at the 8-bit level, so
you might ask what the point is of having 12 bit data. The answer is that it
allows you to perform a greater range of manipulation to the image without
degrading the quality. You can adjust curves and levels to a greater extent, then
convert back to 8-bit data for printing. If you want to access all 12 bits of the
original RAW file, you can convert to a 16-bit TIFF file. Why not a 12-bit TIFF
file? Because there's no such thing! Actually what you do is put the 12 bit data
in a 16 bit container. It's a bit like putting a quart of liquid in a gallon jug,
you get to keep all the liquid but you have some free space. Putting the 12-bit
data in an 8-bit file is like pouring that quart of liquid into a pint container.
It won't all fit so you have to throw some away.
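The container analogy maps directly onto bit arithmetic. A sketch:

```python
# The "containers" above, as bit arithmetic: a 12-bit sample (0-4095) fits
# in a 16-bit value with room to spare, but squeezing it into 8 bits
# discards the low 4 bits, and that information cannot be recovered.

sample = 0b1010_1011_0110          # a 12-bit sensor value: 2742

in_16bit = sample                  # 16-bit TIFF: stored intact (converters
                                   # often shift left 4 bits instead, which
                                   # is equally reversible)
in_8bit = sample >> 4              # 8-bit file: keep only the top 8 bits

restored = in_8bit << 4            # best possible reconstruction: 2736
print(sample, in_8bit, restored)   # the low 4 bits (here 0110 = 6) are gone
assert in_16bit == sample          # the gallon jug: nothing lost
assert restored != sample          # the pint container: something was
```

This is also why 12-bit data survives heavier curves and levels adjustments: with 4096 levels instead of 256, large tonal stretches still leave enough distinct values to avoid visible banding when you finally convert down to 8 bits.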
When to shoot RAW, when to shoot JPEG?
The main reason to shoot JPEG is that you get more shots on a memory card and
it's faster, both in camera and afterwards. If you shoot RAW files you have to
then convert them to TIFF or JPEG on a PC before you can view or print them. If
you have hundreds of images, this can take some time. If you know you have the
correct exposure and white balance as well as the optimum camera set parameters,
then a high quality JPEG will give you a print just as good as one from a
converted RAW file, so you may as well shoot JPEG.
You shoot RAW when you expect to have to do some post exposure processing. If
you're not sure about exposure or white balance, or if you want to maintain the
maximum latitude for post-exposure processing, then you'll want to shoot
RAW files, convert to 16-bit TIFF, do all your processing, then convert to 8-bit
files for printing. You lose nothing by shooting RAW except for time and the
number of images you can fit on a memory card.
Note that some cameras can store a JPEG image along with the RAW file. This is
the best of both worlds, you have a JPEG image which you can quickly extract from
the file, but you also have the RAW data which you can later convert and process
if there's a problem with the JPEG. The disadvantage is, of course, that this
takes up even more storage space. Many cameras also store a small "thumbnail"
along with the RAW file which can be read and displayed quickly without having to
do a full RAW conversion just to see what's in the file.