Don't Delete Those Old 1.3 MP Image Files Just Yet.

Jay F , Mar 01, 2010; 08:45 a.m.

This is an interesting bit of high-tech math with a photo application:


Cheers! Jay



Matthew Newton , Mar 01, 2010; 09:27 a.m.

Interesting. I can certainly see the applications, and they definitely mentioned them. The only issue I see is the computing power required. I agree that for some things, such as space-probe image capture/enhancement, it could prove invaluable and revolutionary. Same thing with image enhancement. However, I don't see this becoming a common thing for digital cameras or for 'compression'. As mentioned, it takes a couple of hours of computer time to enhance a single MRI image. I'm not sure of the resolution of an MRI image, but I'd imagine a 12 MP image is at least on par.
I really don't want to snap a couple of thousand pictures at 'lower resolutions' to save battery power and memory card space and then have to spend the next year 24/7 processing the images up to spec.
Computing power only grows at roughly a doubling every 18-20 months. At that rate, to get processing times down to maybe 10 seconds an image for post-capture enhancement, you're looking at something like 40+ years of computing improvements. Of course, in 40 years you might still have a 20 MP sensor but be creating 200 MP images in the end because of enhancement. Personally, I see this as a wonderful replacement for up-resing an image when you need to print really big.
I do wonder what the limit of the algorithm is for up-resing or other enhancement before the error introduced actually becomes significant. A 4x enhancement, 10x, 100x, 1,000x? I can't imagine it is truly infinite. How could you turn a 50-pixel 'image' into a 12 MP image with reasonable accuracy (or, for that matter, without a few trillion petaflops of processing)? The answer might be that you can't.
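The back-of-envelope above is easy to check; here is a minimal sketch, assuming a two-hour baseline per image (the figure mentioned for an MRI) and an 18-to-20-month doubling of compute. The result depends entirely on those assumptions; larger output sizes or batches of thousands of frames would push the figure much higher.

```python
import math

baseline_s = 2 * 3600   # "a couple of hours of computer time" per image
target_s = 10           # desired per-image processing time

speedup = baseline_s / target_s            # 720x speedup needed
doublings = math.log2(speedup)             # ~9.5 doublings of compute
years_fast = doublings * 1.5               # 18-month doubling period
years_slow = doublings * (20 / 12)         # 20-month doubling period

print(f"{doublings:.1f} doublings -> {years_fast:.0f}-{years_slow:.0f} years")
```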

David W. Griffin , Mar 01, 2010; 09:48 a.m.

Reminds me of those CSI/NCIS shows that show some grainy low-resolution photo and magically transform it into a close-up with the ability to read a license plate. Maybe this makes that actually feasible given enough time. Maybe it just gives us a new version of Genuine Fractals.

Steve Gubin , Mar 01, 2010; 09:51 a.m.

Fascinating. The medical applications are, of course, most important. But there is some great potential here for photographic applications....to say nothing of photographic debates!

Having lost some original RAW, TIFF and JPEG files to a hard drive crash, this would be a good way to resurrect them from lo-res copies. And what potential would Compressed Sensing have for original files? A theoretical increase in resolution? Imagine the noise reduction potential of CS. What would a digital scan of Niepce's 1826 photograph look like if run through a CS engine?

Tom Mann , Mar 01, 2010; 11:49 a.m.

OK, who's going to write a PS plugin for this algorithm so that we can all play with it?

Tom M.

Bob Atkins , Mar 01, 2010; 12:17 p.m.

I'd wait to see examples of actual photographic applications (not simulated applications) before getting too excited. The press often "simplify" things to the level of misrepresentation. The technique does look like it has some applications in scientific imaging, but I'm not so sure it's capable of turning that 1MP image of your cat into something suitable for a 20x30" print.

Think about this too - If you can turn a low res 1MP image into a high res 16MP image (or even a 2MP image), then you can probably turn an image shot with a $100 kit zoom lens into the equivalent of the same image shot with a $2500 prime lens. In fact that's probably a much easier task based on what's being reported in the "popular press". Personally I'm skeptical. Not about the technique and its application to scientific imaging, but about the extrapolations that are being made.

I just don't see a "digital Diana" camera giving results similar to a consumer DSLR, even with a bank of parallel processing PCs filling up your darkroom.

Ben Goren , Mar 01, 2010; 12:52 p.m.

Information theory is quite clear on the matter. One cannot create from whole cloth that which is not there to begin with. Any journalist who seems to quote a reputable scientist who says otherwise is writing fluff, and any seemingly reputable scientist who actually says otherwise is selling you perpetual-motion digital snake oil.

That writ, there’s nothing that says there isn’t room for huge improvement in our interpolation algorithms. If you think about it, a skilled artist should be fully capable of taking a blurry, grainy, faded postage stamp and painting a detailed, realistic wall-sized mural of the same scene. As a child, I listened to distant radio stations playing music and had no trouble hearing the music through the hiss and pops.

And there’s no theoretical reason a computer couldn’t do the same.

Just don’t fool yourself into thinking that it’s an accurate representation of the scene. It may be surprisingly good, and almost certainly more than “good enough” for lots of purposes. But it’s still made-up fake data filling in the holes.

And, yes, damnit. I do want the PS plugin! NOW!



Tom Mann , Mar 01, 2010; 01:37 p.m.

BG: "... Information theory is quite clear on the matter. One cannot create from whole cloth which is not there to begin with. ... Just don’t fool yourself into thinking that it’s an accurate representation of the scene. It may be surprisingly good, and almost certainly more than “good enough” for lots of purposes. But it’s still made-up fake data filling in the holes. ..."
All absolutely true. I think that what we are starting to see, whether it's this algorithm, fractal-based algorithms, or something else, is the beginning of what might be called "smart filling-in".

A sufficiently smart algorithm could figure out which parts of the original image contain the bark of trees, OOF branches in the background, skin, the iris of an eye, grass, dirt, light direction, etc., and then look up in a suitably large database very high resolution information on texture, color, shadowing, etc. for that material, and then fill in the gaps between your original pixels with new pixels synthesized from the DB.

This will be a blending of traditional photography with automated computer graphics and is almost exactly what Ben's hypothetical painter would do to paint his mural from a postage stamp. I think that in the near future, we are going to see huge advances in this area, and the results will be instantly accepted by the majority of the users of cameras (e.g., look at the success of "Avatar" in computer graphics), but nevertheless, still "fake". This will be yet another hurdle for traditional photographers to grapple with.

Just my $0.02,

Tom M.
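The "smart filling-in" idea described above is essentially example-based super-resolution: replace each low-res patch with the high-res patch from a reference database whose downsampled version matches it best. A toy sketch of that lookup, assuming a synthetic random image as the "database" (all names and sizes here are illustrative, not any real product's method):

```python
import numpy as np

rng = np.random.default_rng(1)

def downsample(img):
    """2x box downsample."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Build a database of (low-res patch, high-res patch) pairs from a reference image.
ref_hi = rng.random((32, 32))
ref_lo = downsample(ref_hi)
P = 2                                     # low-res patch size; high-res patch is 2P
lo_patches, hi_patches = [], []
for i in range(ref_lo.shape[0] - P + 1):
    for j in range(ref_lo.shape[1] - P + 1):
        lo_patches.append(ref_lo[i:i + P, j:j + P].ravel())
        hi_patches.append(ref_hi[2 * i:2 * i + 2 * P, 2 * j:2 * j + 2 * P])
lo_patches = np.array(lo_patches)

def enhance(lo_img):
    """Tile the input with low-res patches; paste in the high-res
    counterpart of each patch's nearest neighbour in the database."""
    hi = np.zeros((lo_img.shape[0] * 2, lo_img.shape[1] * 2))
    for i in range(0, lo_img.shape[0], P):
        for j in range(0, lo_img.shape[1], P):
            q = lo_img[i:i + P, j:j + P].ravel()
            best = np.argmin(((lo_patches - q) ** 2).sum(axis=1))
            hi[2 * i:2 * i + 2 * P, 2 * j:2 * j + 2 * P] = hi_patches[best]
    return hi

lo_input = downsample(rng.random((16, 16)))   # stand-in for a low-res capture
hi_guess = enhance(lo_input)                  # plausible detail, not recovered truth
```

Note that `hi_guess` is exactly what Ben warns about: it looks detailed, but the detail is borrowed from the database, not recovered from the scene.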

Michael Attewell , Mar 01, 2010; 02:43 p.m.

I wonder if this algorithm could be used for better noise reduction on photos? Very interesting concept with many applications.

Bob Atkins , Mar 01, 2010; 04:05 p.m.

Reading a little more, it sounds like the principle is maybe somewhat similar to the well-known "minimum entropy" approach:

The key to finding the single correct representation is a notion called sparsity, a mathematical way of describing an image’s complexity, or lack thereof. A picture made up of a few simple, understandable elements — like solid blocks of color or wiggly lines — is sparse; a screenful of random, chaotic dots is not. It turns out that out of all the bazillion possible reconstructions, the simplest, or sparsest, image is almost always the right one or very close to it.

This appears to suggest that the technique fills in the "gaps" using the simplest possible functions. By doing so it obviously doesn't increase the amount of information in the image, nor does it increase the image resolution. It does make the image look better. There is no suggestion that it's looking for "textures" and filling in "texture" data from other known images. I suppose that's possible, but it's not what this is about.

It seems, based on this simple "journalistic" explanation, that it's not going to work on sparsely sampled high resolution images, but it would work on sparsely sampled low resolution images. By photographic standards, X-rays and MRI images are pretty low resolution.
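The sparsity idea quoted above can be demonstrated on a toy signal. A minimal sketch of compressed-sensing recovery, assuming a signal that is sparse in its own coordinates, random measurements, and iterative soft-thresholding (ISTA) as the solver that picks the sparsest consistent reconstruction; real imaging systems use a transform basis (e.g. wavelets) and far better solvers:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 200, 60, 5                     # signal length, measurements, nonzeros

# A k-sparse "image": bazillions of reconstructions fit 60 measurements,
# but only this one (or something very close) is sparse.
support = rng.choice(n, size=k, replace=False)
x_true = np.zeros(n)
x_true[support] = rng.choice([-1.0, 1.0], k) * rng.uniform(1.0, 2.0, k)

A = rng.normal(size=(m, n)) / np.sqrt(m)  # random measurement matrix
y = A @ x_true                            # the undersampled measurements

# ISTA: gradient step on the data fit, then shrink toward sparsity.
L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the gradient
lam = 0.01                                # sparsity weight
x = np.zeros(n)
for _ in range(2000):
    z = x + (A.T @ (y - A @ x)) / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative error: {err:.3f}")       # near-exact recovery from 30% samples
```

The same machinery explains the noise-reduction question above: with noisy measurements, the sparsest consistent reconstruction tends to discard the noise, which is not sparse.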
