AstroPy work 9/30/07


New code: AstroPy.093007.tar.gz (all scripts).

Subtraction of bracket-gamma composite frame

This past week I've been working on the bracket-gamma subtraction for the 7/24/07 data. For more information about constructing the br-gm composite frames, see the documentation for stack.py. Two different br-gm frames were created from two sets of images. The script 'subbrgm1.py' shows an initial attempt to remove the reflected light glare from the continuum-k images using these frames:
> subbrgm1.py

(Show initial stats of data (072407/aligned/im0215.a.fits) and brgm frames:)

 data: min=       0.000000, max=    7626.549707, mean=     978.230662, std=    1198.549834
brgm1: min=       0.000000, max=   10842.000626, mean=     955.766348, std=    1418.421822
brgm2: min=       0.000000, max=    5561.934213, mean=     931.196243, std=    1136.828770

(brgm3 = (brgm1 + brgm2) / 2:)

brgm3: min=       0.000000, max=    7231.927343, mean=     943.481296, std=    1158.323276

(Perform subtractions, e.g. sub1 = data - brgm1:)

 sub1: min=   -9017.182093, max=    3780.796769, mean=      22.464315, std=    1230.958634
 sub2: min=   -4363.185074, max=    3587.835446, mean=      47.034419, std=     564.946476
 sub3: min=   -4437.959262, max=    3038.857669, mean=      34.749367, std=     778.957375
 
(Get background and foreground averages:)
 
 data: bg=     205.476465,  fg=    1795.071213
 sub1: bg=    -210.056639,  fg=    1125.333054
 sub2: bg=      14.411019,  fg=     903.235478
 sub3: bg=     -97.822810,  fg=    1014.284266
 
(Rescale subtractions to produce same bg and fg values as data:)
 
rescaling...
 sub1: min=  -10278.175193, max=    4956.029545, mean=     482.260155, std=    1465.284171
 sub2: min=   -7623.520238, max=    6596.273485, mean=     263.820927, std=    1010.363680
 sub3: min=   -5998.115132, max=    4688.903398, mean=     394.969042, std=    1113.405876
 data: bg=     205.476465,  fg=    1795.071213
 sub1: bg=     205.476465,  fg=    1795.071213
 sub2: bg=     205.476465,  fg=    1795.071213
 sub3: bg=     205.476465,  fg=    1795.071213
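The rescaling step maps each subtraction linearly so that its background and foreground box averages match those of the data. Here is a minimal numpy sketch of that mapping; the box coordinates are hypothetical stand-ins for the actual blue-box positions used in subbrgm1.py:

```python
import numpy as np

def box_mean(im, y0, y1, x0, x1):
    """Average over a rectangular region (one of the blue boxes)."""
    return im[y0:y1, x0:x1].mean()

def rescale(sub, data, bg_box, fg_box):
    """Linearly remap sub so its bg/fg box means match those of data."""
    d_bg, d_fg = box_mean(data, *bg_box), box_mean(data, *fg_box)
    s_bg, s_fg = box_mean(sub, *bg_box), box_mean(sub, *fg_box)
    gain = (d_fg - d_bg) / (s_fg - s_bg)
    return (sub - s_bg) * gain + d_bg
```

After this remapping, all three subtractions report the same bg and fg values as the data, as in the listing above.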

Here are the results of the various subtractions. The two blue boxes in each image show where the background and foreground averages are calculated, so that the overall range of the resulting images can be maintained. Note that while maintaining the dynamic range, the average of each subtracted image is reduced, indicating that part of the reflected component has been removed:


Note that the least glare is removed by brgm1, the most by brgm2, and an intermediate amount by brgm3 = (brgm1 + brgm2) / 2. However, also note that the removal of glare leaves behind a considerable amount of systematic noise, especially near the terminator. Most of the remainder of this page will be devoted to tracking down the cause of this residual noise, and attenuating it in the production brgm runs. The eventual conclusion (see below) is that it is the result of insufficient flat fielding of the image, especially in the high intensity part of the video response.

Here are histograms of the data and rescaled subtracted images, from -5000 to 5000:

The script 'subbrgm2.py' shows an enlargement and histogram of a region of the sub2 subtraction near the terminator:

> subbrgm2.py
 data: min=       0.000000, max=    7626.549707, mean=     978.230662, std=    1198.549834
 brgm: min=       0.000000, max=    5561.934213, mean=     931.196243, std=    1136.828770
  sub: min=   -4363.185074, max=    3587.835446, mean=      47.034419, std=     564.946476
 data: bg=     205.476465,  fg=    1795.071213
  sub: bg=      14.411019,  fg=     903.235478
rescaling...
   sub: min=   -7623.520238, max=    6596.273485, mean=     263.820927, std=    1010.363680
   sub: bg=     205.476465,  fg=    1795.071213
subset: min=   -1319.445685, max=    2156.316799, mean=     541.093934, std=     330.678975
hist3 max at 360.0

The subregion histogram looks like the full image histogram (range = -2000 to 2000), and there are very few negative values. Both the mean and the mode are above the background value, although the background is only about 1 std down from the mean (i.e. about 1/6 of the pixels are below background). There is nothing in either the stats or histogram to indicate the source of the pathological noise which is clearly visible.

Enlargements and histograms (range = 0 to 8000) of the same subregions in the data and the brgm2 composite begin to suggest the cause of the noise:

> subbrgm3.py
    data: min=       0.000000, max=    7626.549707, mean=     978.230662, std=    1198.549834
   brgm2: min=       0.000000, max=    5561.934213, mean=     931.196243, std=    1136.828770
 subset1: min=       2.046169, max=    5857.370015, mean=    2211.724459, std=    1148.258395
hist1 max at 1360.0
 subset2: min=      10.873922, max=    5323.800074, mean=    2009.652393, std=    1083.810468
hist2 max at 1200.0

The stats and histograms of the subregions of both frames are very similar, suggesting that taking the difference between them would tend to accentuate any small variation (e.g. noise) between them. Indeed, although the brgm2 image subset appears fairly smooth, the data subset appears to contain a slight medium resolution semi-random digital pattern.

This effect can be seen more clearly if the subsets are further enlarged and the displayed pixel values are allowed to wrap around the gray scale instead of being clipped (each fringe is 500 apart):
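The wrap-around display can be reproduced by taking pixel values modulo the fringe spacing rather than clipping them; a one-line sketch, assuming the 500-count fringe spacing mentioned above:

```python
import numpy as np

def wrap_display(im, fringe=500.0):
    """Map pixel values onto a repeating 0-1 gray ramp instead of
    clipping; each full fringe of counts wraps once around the scale."""
    return np.mod(im, fringe) / fringe
```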

Thus it appears that the resulting subtraction noise is intrinsic to the data itself. This can be seen even more clearly if a high-pass Fourier filter is used to enhance the noise. The filter is of the form 1 - exp(-(r / 64)**2) and is shown below:

> noise.py 072407/aligned/im0215.a.fits 
    data: max =     7626.549707, min =        0.000000, mean =      978.230662
  filter: max =        1.000000, min =        0.000000, mean =        0.950913
filtered: max =     4183.110735, min =    -1856.812946, mean =        0.000000, std =       99.082014
 subset1: max =     5857.370015, min =        2.046169, mean =     2211.724459, std =     1148.258395
 subset2: max =      966.454187, min =     -626.125968, mean =        0.427639, std =       68.979611
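The high-pass filter 1 - exp(-(r / 64)**2) can be sketched as a multiplicative mask in the Fourier domain. This is a minimal reconstruction, not the actual noise.py code; the centering convention is an assumption:

```python
import numpy as np

def highpass(im, scale=64.0):
    """Apply the high-pass filter 1 - exp(-(r/scale)**2) in the
    Fourier domain, leaving only small-scale (noise-like) structure."""
    ny, nx = im.shape
    y = np.arange(ny) - ny // 2
    x = np.arange(nx) - nx // 2
    r2 = y[:, None]**2 + x[None, :]**2
    filt = 1.0 - np.exp(-r2 / scale**2)
    # shift the filter so its zero-frequency notch matches fft2's layout
    F = np.fft.fft2(im) * np.fft.ifftshift(filt)
    return np.real(np.fft.ifft2(F))
```

Since the filter is zero at zero frequency, a constant (DC) image is removed entirely, which is why the filtered mean above is 0.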


The noise appears to consist of correlated horizontal bits in a byte, and correlated vertical bits every byte. If one compares the noise in 3 consecutive images (214-216) which have been flattened --but not filled or shift-averaged--, it appears to be identical:

This suggests that the noise is the residue of insufficient flat-fielding. Indeed, both the intercept and slope frames show similar bit correlations (see makeflat.py page). However, the noise in 3 consecutive flattened sky images is not the same, nor is the repeated bit correlation apparent:

Also note that the stars show up in the flattened planet! This suggests that the sky flats should be dithered a bit (or a lot) more. If the flattened --but not rescaled-- sky frames are used to make another intercept and slope frame, the results are:

> makeflat2.py 072407 im0141.a.fits im0150.a.fits 0.25 im0161.a.fits im0170.a.fits 1.0
reading 072407/raw/im0141.a.fits
reading 072407/raw/im0142.a.fits
reading 072407/raw/im0143.a.fits
reading 072407/raw/im0144.a.fits
reading 072407/raw/im0145.a.fits
reading 072407/raw/im0146.a.fits
reading 072407/raw/im0147.a.fits
reading 072407/raw/im0148.a.fits
reading 072407/raw/im0149.a.fits
reading 072407/raw/im0150.a.fits
reading 072407/raw/im0161.a.fits
reading 072407/raw/im0162.a.fits
reading 072407/raw/im0163.a.fits
reading 072407/raw/im0164.a.fits
reading 072407/raw/im0165.a.fits
reading 072407/raw/im0166.a.fits
reading 072407/raw/im0167.a.fits
reading 072407/raw/im0168.a.fits
reading 072407/raw/im0169.a.fits
reading 072407/raw/im0170.a.fits
flat1 is (10, 512, 512), mean = 0.24024870267, std = 0.0490800244368
flat2 is (10, 512, 512), mean = 0.960994720474, std = 0.196424823298
av1 is (512, 512), mean = 0.24024870267, std = 0.04840188262, median = 0.25
av2 is (512, 512), mean = 0.960994720474, std = 0.193607509436, median = 1.0
exposure1 = 0.25, exposure2 = 1.0
slope is (512, 512), mean = 0.960994690406, std = 0.193607503698, median = 1.0
intercept is (512, 512), mean = 3.00681610923e-08, std = 1.11127341815e-05, median = 0.0
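The per-pixel slope and intercept follow from fitting a line exactly through the two (exposure, counts) points of the averaged sky flats. A sketch of that step, assuming the averaged flats have already been normalized as above:

```python
import numpy as np

def fit_line(av1, av2, exp1, exp2):
    """Per-pixel linear response counts = slope * exposure + intercept,
    fit exactly through the two averaged sky flats."""
    slope = (av2 - av1) / (exp2 - exp1)
    intercept = av1 - slope * exp1
    return slope, intercept
```

With normalized flats averaging 0.25 and 1.0 at exposures 0.25 and 1.0, this yields slope = 1.0 and intercept = 0.0, matching the medians in the listing above.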


The median values of 0.25 and 1.0 are the correct exposure times for the sky flats, and the median values of 0.0 and 1.0 are the correct values for the (now normalized) intercept and slope. There is a slight variation in the intercept and slope. If the two are confined to their masked values and plotted from +-0.00000001 about their medians, the following are obtained:

masked av1 is (512, 512), mean = 0.25000002347, std = 8.50201730066e-06, median = 0.25
masked av2 is (512, 512), mean = 1.00000000002, std = 2.26026794351e-08, median = 1.0
masked slope is (512, 512), mean = 0.999999968727, std = 1.13360154937e-05, median = 1.0
masked intercept is (512, 512), mean = 3.12885809382e-08, std = 1.13360136629e-05, median = 0.0

This variation is probably due to floating point math error, rather than to anything intrinsic in the images. Replacing the intercept with its average in the flattening process has little effect on the resulting noise:

However, replacing the slope with its average has a profound effect:

Therefore it seems likely that the slope has not been correctly computed --or, more likely, is not a straight line-- and this is the cause of the residual noise in the images and in the subsequent brgm subtraction. This hypothesis is supported by the observation that the noise occurs mostly near the bright crescent, where the raw image counts are highest. Having a third set of sky flats with counts in excess of 10000 might solve this problem by allowing a parabola, rather than a line, to be fit to the response curve of each pixel. The two sets of current sky flats have average counts of 105.38 and 1222.54, whereas the raw counts in the clouds under the glare (and in the noise) are from about 10000 to 40000:
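With a third flat set, a parabola could be fit to each pixel's response instead of a line. A sketch of how that might look using np.polyfit; the exposure values and flat arrays here are hypothetical:

```python
import numpy as np

def fit_parabola(flats, exposures):
    """Fit counts = a*t**2 + b*t + c per pixel from three (or more)
    averaged sky flats taken at different exposure times t."""
    ny, nx = flats[0].shape
    y = np.stack([f.ravel() for f in flats])          # (n_exposures, n_pixels)
    coeffs = np.polyfit(np.asarray(exposures), y, 2)  # (3, n_pixels)
    a, b, c = (coeffs[i].reshape(ny, nx) for i in range(3))
    return a, b, c
```

With exactly three exposures the quadratic passes through all three points; a linear pixel would simply come out with a near zero.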

One way to reduce the subtraction noise is to multiply the brgm frame by a scale factor. For example, here are 3 subtractions using brgm2 at factors of 0.5, 0.75, and 1.0:


Each increasing scale removes more of the glare, but also increases the resulting noise. Another way to reduce the noise is to shift-average the data before subtraction. Here is data - brgm2 * 0.85 with the data pre-shifted and averaged by +-0.5, 1.0, and 1.5 pixels:
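One plausible reading of the shift-average operation, assuming it means averaging the image with copies offset by +-d pixels along each axis (the actual implementation may differ):

```python
import numpy as np
from scipy.ndimage import shift

def shift_average(im, d):
    """Average the image with four copies shifted +-d pixels along each
    axis; subpixel shifts are spline-interpolated by scipy."""
    shifted = [shift(im, (dy, dx), mode='nearest')
               for dy, dx in ((d, 0), (-d, 0), (0, d), (0, -d))]
    return (im + sum(shifted)) / 5.0
```

Larger d suppresses more of the pixel-scale noise at the cost of more blurring, as seen in the comparison images.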

Although this has the undesirable effect of slightly blurring the data, it does reduce the noise. One way to ameliorate the overall blurring effect is to use a 'blended blur' which mixes the data with a blurred version of itself in proportion to the brightness of the image at every location. This reduces the noise in the glare while leaving the darker parts of the image largely unaffected. Since the clouds are the dark parts of the image, this alters their edges the least.

Here is an example of pre-blurring the data with a convolution filter of the form exp(-(r**2) / s) for s = 12.5, 25.0, and 50.0. The blending equation was blend = (im + blur * (im / 1000)) / (1.0 + (im / 1000)). The subtraction was blend - brgm2 * 0.85:
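The blending equation is taken directly from the text; the kernel exp(-(r**2) / s) is implemented here as a separable numpy convolution (the original script's implementation is not shown, so treat this as a sketch):

```python
import numpy as np

def gaussian_blur(im, s):
    """Separable convolution with the kernel exp(-r**2 / s)."""
    half = int(np.ceil(3.0 * np.sqrt(s / 2.0)))
    x = np.arange(-half, half + 1)
    k = np.exp(-x**2 / s)
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, im)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, out)

def blended_blur(im, s=25.0):
    """Brightness-weighted mix of the image with a blurred copy:
    blend = (im + blur * (im / 1000)) / (1.0 + (im / 1000)).
    Bright (glare) regions get the blur; dark (cloud) regions stay sharp."""
    blur = gaussian_blur(im, s)
    w = im / 1000.0
    return (im + blur * w) / (1.0 + w)
```

The subtraction is then blended_blur(data, 25.0) - brgm2 * 0.85, per the parameters chosen below.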

The final filter chosen was for s = 25. This provides a compromise between reducing the glare and increasing contrast on the one hand, and increasing the subtraction noise on the other. Here is a comparison of the original data, the blended subtraction, and the full (noisy) subtraction:

The blended subtraction can also be performed using the brgm1 frame. Here is a comparison using the same filter, but subtracting brgm1 * 2.25 (left):

Still to do

  1. Perform production runs of brgm subtraction.
  2. Continue with automatic night alignment.
  3. Create additional Python/PyQt registration tools.
  4. Perform all steps of image processing on other days in 7/07.


© Sky Coyote 2007