
16 bit vs 8 bit?

Nathan Gardner , Aug 27, 2009; 04:06 p.m.

Should I be saving my RAW photos as a 16 bit TIFF or an 8 bit? I am totally ignorant about this. What is the difference between the two?


Mike Blume , Aug 27, 2009; 04:08 p.m.

If you intend to do any significant amount of editing/postprocessing of these images, then saving at 16 bits is preferred.
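To see why 16 bit holds up better under editing, here's a rough sketch (mine, not from the thread): darken a full gradient to 25% and stretch it back, mimicking a destructive levels adjustment, and count how many distinct tones survive in each bit depth.

```python
# A sketch: one destructive "edit" round trip at a given bit depth.
def round_trip(levels):
    # Scale every value down to 25%, round to integers (as integer
    # storage forces), then scale back up.
    darkened = [round(v * 0.25) for v in range(levels)]
    restored = [round(v * 4) for v in darkened]
    return len(set(restored))  # distinct tones left after the round trip

print(round_trip(256))    # 8-bit:  65 distinct tones remain -> visible banding
print(round_trip(65536))  # 16-bit: 16385 remain, still far more than 256
```

The 16 bit file loses the same *fraction* of its range, but what's left still exceeds the 256 levels an 8 bit output needs, so no banding shows when you finally convert down.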

Rene GM , Aug 27, 2009; 04:16 p.m.

These days it may be recommended to save the photos in DNG format instead.

Mendel Leisk , Aug 27, 2009; 07:38 p.m.

As long as you've got the RAW it's not that important.

Why are you saving TIFF format, as opposed to JPEG? Are you doing a fair amount of editing on the TIFF? If so, I'd first produce the TIFF from the RAW in 16 bit, then do the editing. If there's a fair bit of labour put into the editing process and you're adamant about keeping the TIFF, maybe I'd keep it 16 bit. OTOH, if you're just cranking out the TIFFs through something like ACR, and not doing any editing on them, 8 bit should be fine.

But if there's not a lot of labour in the RAW conversions, I wouldn't bother with TIFF (of any bit depth); just go straight to JPEG.

Andrew Rodney , Aug 29, 2009; 11:55 a.m.

Christopher Hanlon , Aug 29, 2009; 06:35 p.m.

If you keep your RAW photos around, and presumably the ability to read that type of RAW image, then saving 16 bit or 8 bit depends on your end goal. FWIW, RAW formats aren't all the same, and it may be worthwhile to convert the RAW image to a standard format - provided you can find a way to do so without losing any information. If the manufacturer of the camera that the RAW image came from suddenly goes belly up, you could conceivably lose any method of reading that manufacturer's RAW images. (FWIW, I think that's improbable - but perhaps there are some RAW images from KM that can't be read any more.)

The difference between 8 bit and 16 bit is, obviously, 8 bits. In this case 8/16 bits refers to the amount of data available per channel of the image. 8 bits determines the possible values a channel can contain: 2^8 = 256, so assuming an RGB colorspace, each channel (i.e. red, green, or blue) can hold a value between 0 and 255 (think of values as 'shades' of a certain color). A 16 bit channel can contain 2^16 = 65536 possible values. You can see that a 16 bit per channel image can contain a lot more data than an 8 bit per channel image.
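Nothing camera-specific here - just the arithmetic from the paragraph above, written out:

```python
# Shades per channel at each bit depth, and total RGB combinations.
per_channel_8 = 2 ** 8        # 256 shades per channel
colors_8 = per_channel_8 ** 3 # 16,777,216 RGB combinations

per_channel_16 = 2 ** 16        # 65,536 shades per channel
colors_16 = per_channel_16 ** 3 # ~2.8e14 RGB combinations

print(per_channel_8, per_channel_16)  # 256 65536
```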

Now, as to whether software does 8 bit or 16 bit image operations: regardless of what the software marketing says, you can never really be sure what the heck happens internally. Most software does not use a standard format to store its data internally - for instance, a TIFF image read into Photoshop is most certainly not stored in the TIFF format internally.

Image manipulation is primarily *math* based. And thus - evilly - most pixel data may be converted into a more math-friendly form, which will likely be a normalized 32 bit floating point value. So an 8 bit R value of 33 becomes 0.12890625 (i.e. 33/256). The same value in 16 bit form is 8448 (i.e. 0.12890625 * 65536). Let's say you have a 16 bit image where at one pixel R = 8447. To convert that to 8 bit you do the following: 8447/65536 * 256. Using floating point math the result is 32.9960...; for 8 bit per channel storage this number needs to be turned into an integer (I'll assume the storage is integer based and not floating point). Assuming the software rounds the number, the value becomes 33.
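The arithmetic above can be sketched in a few lines (using the post's own divisors of 256 and 65536; real converters typically use 255 and 65535):

```python
# 16-bit -> 8-bit conversion via a normalized floating point intermediate,
# following the post's worked example.
def to_8bit(value_16):
    normalized = value_16 / 65536   # the "math friendly" 0..1 form
    return round(normalized * 256)  # back to an 8-bit integer, rounded

print(to_8bit(8447))  # 8447/65536 * 256 = 32.996... -> rounds to 33
print(33 / 256)       # 0.12890625, the normalized form of R = 33
```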
