Screen2.0

archive // 2005.01.14 01:38:41 [her]

Photo Tools #07: Interview with Chris Russ, author of the "Optipix 3" filter collection

Eckehart Röscheisen spoke with Chris Russ, author of the "Optipix 3" filter collection and active in this field for 20 years, about the technological background of image processing.

Screen2.0: "Refocus" is propably the most remarkable plug-in in your "Optipix 3" collection. What is the basic difference between your product "Refocus" and "Focus Magic" (www.focusmagic.com)?

Chris Russ: I don't know exactly what Focus Magic is doing. It appears to be some kind of iterative deconvolution.

Screen2.0: How does "Refocus" work in principle?

Russ: We're doing Wiener deconvolution with a Gaussian-shaped Point Spread Function and showing a live preview, so people can interactively find a setting that works for them.
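
For readers who want to experiment, here is a minimal sketch of Wiener deconvolution with a Gaussian PSF in plain NumPy. Optipix's actual implementation is not public; the function names and the radius/noise parameters below are illustrative assumptions only.

```python
import numpy as np

def gaussian_psf(shape, radius):
    """Centered 2-D Gaussian point spread function, normalized to sum to 1."""
    y, x = np.indices(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    g = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * radius ** 2))
    return g / g.sum()

def wiener_refocus(image, radius=2.0, noise=0.01):
    """Deblur a grayscale float image by damped division in the frequency domain."""
    psf = gaussian_psf(image.shape, radius)
    H = np.fft.fft2(np.fft.ifftshift(psf))  # PSF -> optical transfer function
    G = np.fft.fft2(image)
    # Wiener filter: F = G * conj(H) / (|H|^2 + K); K damps noisy frequencies
    F = G * np.conj(H) / (np.abs(H) ** 2 + noise)
    return np.real(np.fft.ifft2(F))
```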

Screen2.0: That is quite a bit of math. Interested readers can find more background on the algorithms used here: http://ict.ewi.tudelft.nl/old/html/education/courses/et4_085/sheets/WDMF.pdf. What exactly is a Point Spread Function?

Russ: A Point Spread Function (PSF) is what you get when you photograph a point (make a black point on a white sheet of paper). Ideally this should be one pixel. However, it will often spread out to neighboring pixels. This effect is often seen with headlights when driving at night. It is caused by either a filter or more likely the sides of the aperture in the lens, but what would ordinarily be a single point (like a star) is spread out. By modeling this function we can mostly undo the effects, as long as the noise is minimal.
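
A quick way to see this effect in miniature (a sketch, assuming SciPy is available): blur a single white pixel with a Gaussian and watch the energy spread to its neighbors, just as a star spreads across sensor pixels.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

point = np.zeros((9, 9))
point[4, 4] = 1.0                            # an ideal single-pixel "star"
blurred = gaussian_filter(point, sigma=1.0)  # what the camera actually records
print(np.round(blurred, 3))                  # energy has spread to neighboring pixels
```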

Screen2.0: What are the principal trends that you see in image processing?

Russ: Because computer power has increased so much in recent years, it is now possible to provide FFT tools (Fast Fourier Transform, the algorithm needed to process images efficiently in the frequency domain) that are fast. When I was a graduate student, we would perform FFTs that took over a week to run on our lab computer. Now, equivalent FFTs run in seconds.

I would expect to see more of these techniques that were previously computationally prohibitive work their way into the mainstream. The biggest challenge is to present a user interface (UI) that is usable. I certainly cannot present a photographer with the Power Spectrum, but there are some extremely powerful techniques he could use with one, including halftone removal, better blurring and sharpening, noise removal, etc. If I can only find a way to reduce the process to two or three sliders...
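
Computing the power spectrum Russ mentions takes only a few lines with NumPy's FFT; as he says, the hard part is wrapping a usable UI around it. A sketch:

```python
import numpy as np

def power_spectrum(image):
    """Log-scaled power spectrum of a 2-D image, zero frequency centered."""
    F = np.fft.fftshift(np.fft.fft2(image))  # 2-D FFT, DC term moved to center
    return np.log1p(np.abs(F) ** 2)          # log scale so it is viewable
```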

Screen2.0: What is the main challenge in developing algorithms like "refocusing" images?

Russ: One of the biggest problems with deconvolution is handling noise. The equation involves division -- often division by a noisy small number. The artifacts that one sees as a result look like "ringing" around the edges of objects. They can also look like "ripples." Anything that you can do to get a better image -- shooting RAW, using a lower ISO and a longer exposure, brighter lights, frame averaging -- makes a difference in your eventual ability to deconvolve.
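
The division problem Russ describes can be sketched in two lines (illustrative NumPy, following the Wiener sketch above, where G and H are the FFTs of the image and the PSF): a naive inverse filter divides by H even where it is nearly zero, amplifying noise into exactly the ringing he mentions, while the Wiener noise term K damps those frequencies.

```python
import numpy as np

def naive_inverse(G, H):
    return G / H                                  # blows up wherever |H| is tiny -> ringing

def wiener_inverse(G, H, K=0.01):
    return G * np.conj(H) / (np.abs(H) ** 2 + K)  # damped division suppresses the artifacts
```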

An example is the Hubble Telescope. When it was first launched, it had the wrong optics (actually the mirror was formed using the wrong function). Roughly 85% of the light coming in was out-of-focus. However, they managed to get good images anyway. How was this possible? We knew exactly what the error in the mirror was -- we could deconvolve with the correct Point Spread Function and get a good image. Now that Hubble has a correcting optic in the light path, it is collecting ALL of the light and the images are that much better, but for those years before the first servicing mission they were able to use the telescope productively.

Screen2.0: When can your "refocusing by deconvolution" plug-in be successfully applied?

Russ: If the image has a lot of noise in it, you will not get good results.

Last week I had a forensic examiner ask me about deconvolving a video image. On a good day, you might have 5 or 6 bits of real information in a video frame. This one had even less. There was no way we were going to get a good image. What you see in the movies is simply Science Fiction.

Screen2.0: Why does "Refocus" require images to be smaller than 4000 by 4000 pixels? That is a real constraint with today's digital cameras, for example the 12-megapixel models.

Russ: Because a 4000x4000 image will round up to 4096x4096 (FFTs require power-of-two sizes: 256x256, 512x512, 1024x1024, etc.) and internally we use approximately 128 bytes per pixel for a color image. This translates to 2GB of RAM or virtual memory. The next size up would use an 8192x8192 FFT and approximately 8GB of RAM, which is more than a single application can use.
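
These numbers are easy to check with a back-of-the-envelope sketch (the 128 bytes per pixel is Russ's figure for Optipix's internal color representation; the function name is illustrative):

```python
def fft_memory_gb(width, height, bytes_per_pixel=128):
    """Round up to a square power-of-two FFT size and estimate RAM use in GB."""
    size = 1 << max((width - 1).bit_length(), (height - 1).bit_length())
    return size * size * bytes_per_pixel / 2**30

print(fft_memory_gb(4000, 4000))  # 4096x4096 -> 2.0 GB
print(fft_memory_gb(4100, 4100))  # 8192x8192 -> 8.0 GB
```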

I am investigating ways around the limit, but they are either extremely difficult to code (using tiles makes matching the color/contrast settings in each tile difficult) or very slow (using my own Virtual Memory).

There is a lot of demand to make it work with larger images. I am working on it.

Screen2.0: Do you have any particular tips for using "Refocus"?

Russ: The plugin works pretty well, actually. :) I've noticed that Adobe uses very soft bicubic functions for resizing, and that if you use Refocus with a radius of 0.5 and a noise setting of 0.1 (or less), you can sharpen the images quite a bit after changing Image Size. This is true for making images smaller as well as larger. Generally, you will get better results working in 16 bits.

Screen2.0: Your "Optipix 3" bundle has a lot more to offer than refocusing images. What makes your package unique compared to other products like "Power Retouche" or "FixerBundle"?

Russ: I'm an engineer, not a marketeer, so I really couldn't tell you why we're better or worse than other folks. However, I've tried to put together a package that works well, covers a lot of bases (building HDR images, compressing them to something usable, sharpening, etc.), and provides a useful workflow at a reasonable price.

That is the key: Workflow. I try to tell people that if they're spending a lot of time in Photoshop repeating the same steps over and over, then they have a good candidate for automation. People should unload the repetitive tasks to the computer so they can do what they're good at -- in this case taking pictures or composition or enhancement.

This is also a request -- if you have repetitive things that should be automated, I'd like to hear about them. I write custom plugins, too. This was one of the reasons behind my free plugin "Select Edges": I got irritated with the process of using Adobe's Find Edges function to select just the edges of an image so I could apply Unsharp Mask afterwards, and created this plugin. We had over 20,000 downloads in the first month, so I guess I wasn't the only person who was irritated.
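
The workflow behind "Select Edges" can be approximated in a few lines. This is not Russ's plug-in code, just a sketch of the same idea using Pillow (file names are hypothetical): build an edge mask, then composite a sharpened copy back through it, so only the edges get sharpened.

```python
from PIL import Image, ImageFilter

img = Image.open("photo.jpg")                             # hypothetical input file
edges = img.convert("L").filter(ImageFilter.FIND_EDGES)   # grayscale edge mask
sharp = img.filter(ImageFilter.UnsharpMask(radius=2, percent=150))
Image.composite(sharp, img, edges).save("photo_sharp.jpg")  # sharpen edges only
```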

Screen2.0: What is the focus of your scientific imaging products?

Russ: "Fovea Pro" turns Photoshop into a complete and inexpensive Quantitative Image Analysis system. One of our competitors is "Image Pro Plus" (approximately US$ 5000). It is expensive and hard to use. Everybody already has Photoshop and it is an ideal platform for doing scientific processing and measurement. The only thing you cannot do in Photoshop is automate a stage or camera to acquire a lot of images.

Screen2.0: How do you think scientific algorithms from "Fovea Pro" have influenced your "Optipix 3" bundle?

Russ: Tremendously. Optipix is a spin-off of many things that we've learned from our scientific line. (I've been writing scientific image processing software since 1979, starting on the Apple II. My father is the author of "The Image Processing Handbook.") Since I'm also a photographer, this was my attempt to customize the scientific tools for other photographers and make them a lot easier to use.

 
