Estimating image sharpness via cusp autocorrelation -- 3


Summary

I've used the 2-d value vs. radius autocorrelation scatter points from last time to compute a single number that estimates the sharpness of the image (or at least of the cusp). This number is computed by:

  1. Binning the scatter points to the nearest integer radius,
  2. Calculating the standard deviation of the autocorrelation values at each radius,
  3. Multiplying each standard deviation by the number of points in its bin and summing,
  4. Summing the results for both cusps (sketched in the code below).

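For reference, here is a minimal NumPy sketch of steps 1-4 above. The names ('cusp_sharpness', 'r', 'v') are mine, not actual 'cuspsharp.py' code; 'r' and 'v' stand for the radii and autocorrelation values of the scatter points from last time:

    import numpy as np

    def cusp_sharpness(r, v):
        # 1. Bin each scatter point to the nearest integer radius.
        bins = np.round(r).astype(int)
        total = 0.0
        for b in np.unique(bins):
            vals = v[bins == b]
            # 2-3. Std of the values at this radius, weighted by the
            # number of points in the bin.
            total += vals.std() * len(vals)
        return total

    # 4. The combined sharpness is the sum over both cusps:
    # combined = cusp_sharpness(r1, v1) + cusp_sharpness(r2, v2)
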
So, the resulting number has no meaningful dimension, nor is it scaled or normalized to anything. It simply provides a way to rank images. See below for a suggestion about how to scale and normalize these numbers to make them more meaningful. Here is sample output:

> cuspsharp.py -c 072407/intermediates/coords.171-276.processed.txt 072407/processed/im0194.a.fits
Cusps at (r, theta):
   (200.65, 64.26)
   (192.02, 240.14)
Autocorr max at: (10, 10)
Autocorr max = 1.000000, min = 0.115417, mean = 0.527830, sum =   232.773020
std[ 0] =     0.000000, n =    1
std[ 1] =     0.017715, n =    8
std[ 2] =     0.018575, n =   12
std[ 3] =     0.022085, n =   16
std[ 4] =     0.039980, n =   32
std[ 5] =     0.042537, n =   28
std[ 6] =     0.055852, n =   40
std[ 7] =     0.058357, n =   40
std[ 8] =     0.066608, n =   48
std[ 9] =     0.072902, n =   68
std[10] =     0.070679, n =   56
std[11] =     0.093825, n =   44
std[12] =     0.096887, n =   24
std[13] =     0.099207, n =   20
std[14] =     0.103738, n =    4
Weighted sum = 28.721976
Sharpness 1 = 28.721976
Autocorr max at: (10, 10)
Autocorr max = 1.000000, min = 0.102731, mean = 0.501882, sum =   221.330067
std[ 0] =     0.000000, n =    1
std[ 1] =     0.020209, n =    8
std[ 2] =     0.022035, n =   12
std[ 3] =     0.025029, n =   16
std[ 4] =     0.040935, n =   32
std[ 5] =     0.043180, n =   28
std[ 6] =     0.055519, n =   40
std[ 7] =     0.057294, n =   40
std[ 8] =     0.063905, n =   48
std[ 9] =     0.068426, n =   68
std[10] =     0.064789, n =   56
std[11] =     0.085311, n =   44
std[12] =     0.087255, n =   24
std[13] =     0.088437, n =   20
std[14] =     0.091610, n =    4
Weighted sum = 27.189577
Sharpness 2 = 27.189577
Combined sharpness = 55.911552
Cusp 1 max at:   (19, 0)
Cusp 2 max at:   (18, 3)
Cusps at (x, y):
   (441.56, 432.24)
   (258.82, 84.97)

I've run this program on images 171-276 from 7/24/07 and then sorted the results. 10 images were rejected because 'roughalign.py' couldn't get a bead on the limb (out of frame), and 7 more were rejected because the estimated circle was off by enough to give bad cusp locations. However, the 'best' image was one for which the circle estimate was questionable, and yet the cusp-finding procedure worked well. Here is sorted output for the entire run.

Here are the new results in order, sampled at intervals of roughly 5 'sharpness units':

072407/processed/im0194.a.fits      55.911552
072407/processed/im0249.a.fits      49.769526
072407/processed/im0257.a.fits      45.185305
072407/processed/im0275.a.fits      40.033374
072407/processed/im0201.a.fits      35.717362
072407/processed/im0233.a.fits      30.287413
072407/processed/im0217.a.fits      25.575608

While the overall progression from sharp to blurry images seems correct, it is still questionable whether the pairwise ranking of two images with similar sharpness scores agrees with visual estimates. Nevertheless, this technique does seem to be a viable alternative to the frequency-based technique I was using previously.

Eliot has suggested a further enhancement: creating synthetic 'ideal cusps' and cross-correlating them with the actual cusps in order to determine both their location and their sharpness (i.e. their degree of similarity to the ideal). I have yet to do this, but if successful it should provide a way to scale and normalize cusp sharpness to values from 1.00 (ideal synthetic cusp) down to 0.00 (random noise). This will be my next task.
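
For what it's worth, the similarity measure would presumably be a zero-mean normalized cross-correlation against the synthetic template. Here is a hypothetical sketch (the function names and arguments are mine, not part of 'cuspsharp.py'): sliding the template over the search region and taking the peak gives the cusp location, and the peak value itself is the normalized sharpness.

    import numpy as np

    def ncc(patch, template):
        # Zero-mean normalized cross-correlation of an image patch
        # against a synthetic 'ideal cusp' template of the same shape.
        # Returns a value in [-1, 1]: 1.0 = identical to the ideal
        # cusp, ~0.0 = uncorrelated (random noise).
        p = patch - patch.mean()
        t = template - template.mean()
        denom = np.sqrt((p * p).sum() * (t * t).sum())
        return float((p * t).sum() / denom) if denom > 0 else 0.0

    def find_cusp(image, template):
        # Slide the template over every position; the argmax of the
        # NCC gives the cusp location (x, y), the max its sharpness.
        th, tw = template.shape
        best, best_xy = -1.0, (0, 0)
        for y in range(image.shape[0] - th + 1):
            for x in range(image.shape[1] - tw + 1):
                score = ncc(image[y:y+th, x:x+tw], template)
                if score > best:
                    best, best_xy = score, (x, y)
        return best_xy, best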


© Sky Coyote 2007.