Russell Croman Astrophotography  
  • NEW! StarShrink
    A sophisticated star-sharpening plug-in for Adobe Photoshop®

  • GradientXTerminator
    A light-pollution gradient removal plug-in for Adobe Photoshop®
  • Preserving Star Colors
    How to avoid loss of star color during image processing (originally presented at the 2004 Advanced Imaging Conference).

Links to Other Astrophotographers' Websites

Other Links

  • Trezora Glass
    Beautiful fine art glass jewelry that draws inspiration from our universe. Check out their Galaxy collection. The artists are friends of mine.

  • Oceanside Photo & Telescope
    Astronomy Telescopes from Meade, Celestron, TeleVue, Takahashi, accessories and eyepieces, CCD imaging cameras from SBIG and more.

  • Anacortes Telescope & Wild Bird
    Another online astro retailer I like.

  • AstroMart
    A very cool online classified ad and discussion forum service.

  • Astronomics
    Yet another online astro retailer.

LLRGB Processing Flow

Here is my current processing flow for LLRGB images. There are two guiding principles that determine how I proceed with the data. The first is that the signal-to-noise ratio (SNR) in the data must be meticulously preserved at every processing step. It is extremely easy to perform a processing step in such a way that noise is actually introduced into the data. You worked very hard to acquire your image data... maximize its potential.

The second principle is that the final image must not look processed. This is of course very subjective. An example of what would look processed to me would be the dark halos that appear around stars in over-sharpened images.

With those principles in mind, here is the procedure by which I currently process most of my images:

  1. Calibrate, align and combine raw images for each channel (L, R, G, and B).
    1. Keep all images, including darks and flats, in floating point format at all times to preserve SNR in the faint areas.
    2. Do any necessary de-blooming or bad pixel repair prior to alignment and combination.
    3. Use manual two-star alignment for all alignment operations, taking care not to use saturated stars.
    4. Use sigma-rejection for all combination operations, including darks and flats (see above plug-ins for implementations of this algorithm).
  2. Do a preliminary alignment of the L, R, G and B channels using MaxIm. Images should still be in floating-point representation.
  3. Crop the images such that no image has any black edges.
  4. Perform the following color balancing steps on each color channel image:
    1. Multiply the image by the camera gain factor (for my ST-10XME, with the SBIG standard RGB filters, the factors are RGB = 1.00:0.88:1.28).
    2. Measure and subtract the sky foreground level, minus 100 counts as a pedestal. (For example, if the sky foreground is 257 counts, subtract 157. This prevents any negative values that would get clipped.)
    3. Multiply the image by the atmospheric extinction correction factor. This is determined by noting the average altitude of the object for this color channel and looking up the atmospheric extinction factor in, say, the Handbook of Astronomical Image Processing. The correction factor is the reciprocal of the extinction factor.
    4. Subtract the error introduced into the pedestal by the previous step. (For example, if the correction factor was 114%, the 100-count pedestal will become 114 counts. 14 counts must be subtracted from the image to correct for this or improper color balance will result.)
  5. Combine the channel images into an LRGB color image using MaxIm.
  6. Optional: If the colored star halos are misaligned (multi-colored), do a precision alignment of the color channel images to the luminance image using RegiStar, and recombine in MaxIm.
  7. Boost the saturation to taste.
  8. Close all but the luminance and LRGB images. Make sure these are saved in floating-point FITS format as master copies. Perform all subsequent steps on secondary copies of these two files.
  9. Perform a DDP-style stretch on the luminance image.
    1. Set the background level by hand to just below the image background (avoid clipping).
    2. Set the mid-point level by mouse-click on a moderately faint part of the object of interest. This may need to be adjusted by hand.
    3. Don't do any sharpening as part of DDP. This is accomplished by selecting the "user" kernel filter, and setting the filter coefficients to 1.0 in the center, and zeros elsewhere. All we want out of DDP right now is a non-linear stretch.
  10. Perform a DDP on the LRGB image using the same parameters as for the luminance image.
    1. Boost saturation after DDP if needed. (DDP tends to wash out the colors.)
  11. Now it is finally okay to save the images in something other than a floating-point format. Save both images in 16-bit TIFF format.

    Reason: the DDP operation boosted the contrast in the faint areas such that there are now large differences in pixel values in these areas. Prior to this, saving in integer format would have introduced excessive rounding errors (noise).

  12. Open both the luminance and LRGB TIFF images in Photoshop.
  13. Perform a "big unsharp mask" on the luminance image:
    1. First, drop the output white level to about 200 using the levels command.
    2. Invoke the unsharp mask filter. Set the radius to 250 pixels and the percentage to between 20 and 40.
    3. Make sure nothing important (e.g., core of galaxy) got clipped. If it did, undo, drop the white level again, and repeat the filter.
  14. Perform smaller unsharp masks as desired to increase contrast in features of interest. I usually do a 100- or 50-pixel radius unsharp mask, followed perhaps by a 10-pixel radius, all at fairly low percentages. Again, the image should not "look processed."
  15. Adjust the levels and gamma to very close to what you would like for the final image.
  16. Save this image in 16-bit TIFF format with a new file name.
  17. Convert the image to 8-bit format, and save in a new file as the base layer for your final luminance image. (Note: you can leave it in 16-bit format in Photoshop CS, a major advantage.)
  18. Do some form of fine sharpening on the 16-bit luminance image. This could be Lucy-Richardson deconvolution in your program of choice, or simply some fine unsharp masking. Don't worry about mild mottling or increased noise showing up in the faint areas... this will be dealt with later.
  19. Load the sharpened image into Photoshop. Convert it to 8-bit format. Select all, copy, and paste it onto your base layer created above. (Again, you don't need to convert to 8-bit in Photoshop CS.)
  20. Do the following steps to blend the sharpened foreground image with the unsharpened (and thus smoother) background image:
    1. Use the Select->Color Range command to select the background areas of the image.
    2. Feather the selection by a hefty amount (40-200 pixels, depending on image size and content).
    3. Make sure the top (sharpened) layer is selected in the layer window.
    4. Hit "delete." This will make the background area of the top layer transparent, allowing the smoother background of the unsharpened image to show through.
    5. Be willing to undo and re-do these steps with different settings, etc., until you have the luminance image looking the way you want it.
  21. Save this layered luminance image to a file, retaining the layers in case you want to adjust it later.
  22. Flatten the layered luminance image.
  23. Copy and paste the flattened luminance image onto the color image you loaded earlier.
  24. Set the blending mode to "luminosity." This will apply the color of the background (color) image, while retaining the brightness information of the luminance image you built.
  25. You might need to reduce the opacity of the luminance layer and/or increase the saturation of the color layer to get the color looking how you want it.
  26. Season to taste, flatten, and save!
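The per-channel color-balance arithmetic in step 4 can be sketched in Python. This is a minimal illustration, assuming float numpy arrays in raw camera counts; the gain and extinction values used below are the worked examples from the text, not universal constants.

```python
import numpy as np

def color_balance(channel, gain, sky_level, extinction_correction, pedestal=100.0):
    """Apply steps 4.1-4.4: camera gain, sky subtraction with a pedestal,
    atmospheric extinction correction, then pedestal-error correction."""
    img = channel * gain                                   # 4.1: camera gain factor
    img = img - (sky_level - pedestal)                     # 4.2: leave a 100-count pedestal
    img = img * extinction_correction                      # 4.3: extinction correction
    img = img - pedestal * (extinction_correction - 1.0)   # 4.4: fix the inflated pedestal
    return img
```

With the numbers from the text (sky at 257 counts, 114% extinction correction), a pixel sitting exactly at the sky level comes out at exactly the 100-count pedestal, which is the point of step 4.4.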

M42 Luminance Processing

M42 has such a huge contrast range that I used luminance layering in Photoshop, in addition to DDP stretching, to reduce that range so all of the detail could be seen.

Basically, I created four versions of the image, each stretched to give detail to increasingly brighter areas. The base layer has detail in the dimmest portions, but has the core and most of the surrounding area totally saturated. The next image has the dim areas nearly black, and the next-brightest part of the nebula in good contrast. Again the core is saturated. Two more images, working in towards the core, proceed in the same fashion. The last image has just the Trapezium and the small area right around it in good detail... the rest of that image is nearly black.

In Photoshop, you can start with the base layer and then paste the next image on top of it, and then delete the dim portions of that second image. This is done using the "select color range" tool, feathering the result by, say, 100 pixels, and hitting "delete." This lets the dim outer areas show through from the base layer. The procedure continues in a similar fashion for the rest of the images -- paste, select, feather, delete. It takes some fiddling to get the selection and feathering parameters right, but as you can see it can be made fairly seamless.
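The paste/select/feather/delete cycle can be approximated in numpy. This is a rough sketch, assuming float images in [0, 1]; instead of Photoshop's feathered color-range selection, a smooth ramp between two illustrative brightness thresholds (lo, hi) serves as the blend mask.

```python
import numpy as np

def blend_layers(base, stretched, lo=0.2, hi=0.5):
    """Show the harder-stretched layer only where the base layer is bright,
    ramping the mask smoothly so the seam stays invisible."""
    mask = np.clip((base - lo) / (hi - lo), 0.0, 1.0)  # feathered selection
    return base * (1.0 - mask) + stretched * mask
```

Pixels darker than lo keep the base layer untouched; pixels brighter than hi come entirely from the stretched layer; everything in between is cross-faded, which is what the feathering accomplishes in Photoshop.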

Ha/RGB Processing

This is my current flow for combining H-alpha and RGB or LRGB images.

  1. Start with well-processed H-alpha (as grayscale) and RGB images. Process each separately at first.
  2. Register the two images.
  3. Convert the H-alpha image to RGB.
  4. Delete (make black) the green and blue channels.
  5. Select all, copy, and paste this red image onto the RGB image.
  6. Change the blend mode on this layer to "lighten."
  7. Adjust the histogram (mainly gamma) of the H-alpha layer to bring its brightness up a bit (gamma ~1.3).
  8. Flatten this image.
  9. Select all, copy, and paste this image onto the original grayscale H-alpha image.
  10. Change the blend mode to "lighten."
  11. Flatten this image.
  12. Select all, copy, and paste this image onto the RGB+H-alpha image.
  13. Change the blend mode to "luminosity."
  14. Change the opacity to ~50%.
  15. Season to taste.
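The two blend modes this flow relies on can be sketched for float RGB arrays of shape (H, W, 3) in [0, 1]. These are simplified stand-ins for Photoshop's behavior, not exact reproductions; in particular, the luminosity blend here uses a plain per-pixel channel mean as the luminance.

```python
import numpy as np

def lighten(bottom, top):
    """'Lighten' keeps the brighter of the two layers, per channel."""
    return np.maximum(bottom, top)

def luminosity(bottom, top, opacity=1.0):
    """'Luminosity' takes brightness from the top layer and color from the
    bottom layer, with an opacity control as in step 14."""
    lum_b = bottom.mean(axis=-1, keepdims=True)
    lum_t = top.mean(axis=-1, keepdims=True)
    blended = bottom + (lum_t - lum_b)           # shift brightness, keep color
    out = bottom + opacity * (blended - bottom)  # apply layer opacity
    return np.clip(out, 0.0, 1.0)
```

With "lighten," the red H-alpha layer only brightens the red channel where it exceeds the RGB data, which is why the green and blue channels are zeroed first in step 4.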

Solar Image Processing

Here is my current processing flow for solar images taken using the Canon D60:

Take a number of individual raw-format images and...

  1. convert to 16-bit TIFF
  2. extract only the red channel from each, saving as 16-bit grayscale
  3. align and combine (average mode) in MaxIm DL
  4. save result as 16-bit TIFF
  5. load in PS and compress the output levels a bit (to avoid clipping in next step)
  6. 50 iterations Van Cittert deconvolution in Images Plus, 9x9 PSF
  7. back in PS, slight (about 10%) unsharp mask at radius 250 to handle limb darkening, boost contrast on disk
  8. about 70% unsharp mask at radius 1 to sharpen things up a touch
  9. adjust levels for detail in photosphere
  10. convert to 8-bit, save as base layer in a new file
  11. undo a few steps, adjust levels for chromosphere and prominences
  12. save as a new file
  13. open base layer image
  14. convert edge image to 8-bit, select all, paste onto base layer image
  15. select main disk using "select color range"
  16. adjust selection to edge of photosphere
  17. feather one or two pixels, delete
  18. re-do the last few steps about five times until it looks right (this is the trickiest part)
  19. save layered image to another file (to be able to back-track if needed)
  20. flatten image, save as a new file
  21. convert to RGB mode
  22. duplicate layer, set top layer blend mode to "luminosity"
  23. add color to bottom layer using curves tool (zero blue, bend red upward, bend green downward)
  24. make any final tweaks to luminance (top) layer and color layer
  25. flatten image
  26. celebrate with a trip to the refrigerator!

Astrophotography Books

  • The New CCD Astronomy, Ron Wodaski