Week 5: a bad case of bad pixels

This week I wanted to run our reduction steps on another night of data, both to move toward reducing all the data we have and to make sure the code and steps will actually work across all the data we’ll be working with.

The first thing I did was create a function I called “timesample,” which filters a list of files by the time they were taken. The user inputs the window(s) of time they’d like to capture, and the function returns lists containing only the images taken within the specified time frame(s). For example, if you want to create a stack of one cycle of images but have a list of a whole night’s worth of images, you simply pass the start and end time for the cycle and the function returns the list of images for your stack. I’ve included the code below.

import datetime as dt
from astropy.io import fits

def timesample(samples, images):
    '''
    This function takes a list of tuples containing start and end times
    in UT format and a list of image paths.
    
    It returns a list of lists, each containing the images taken between
    the corresponding tuple's start and end times.
    
    example input:
    timesample([('02:07','03:01'), ('04:13','05:05'), ('06:07','06:45')], 
    imagelist)
    example output:
    [[imagelist[0]], imagelist[1:3], []]
    '''
    lists = []
    for sample in samples:
        print('collecting images taken between: '+str(sample))
        start = dt.datetime.strptime(sample[0],'%H:%M').time()
        end = dt.datetime.strptime(sample[1],'%H:%M').time()
        sample_imgs = []
        for im in images:
            head = fits.getheader(im)
            # DATE-OBS is formatted like 'YYYY-MM-DD HH:MM:SS.sss'
            date, time = head['DATE-OBS'].split(' ')
            hours, mini, sec = time.split(':')
            time = dt.datetime.strptime(hours+':'+mini,'%H:%M').time()
            if start <= time <= end:
                sample_imgs.append(im)
        lists.append(sample_imgs)
    return lists
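
Here’s a quick usage sketch (the directory and filename pattern are hypothetical placeholders for wherever your night’s images live):

import glob

# gather the night's images (hypothetical path pattern)
imagelist = sorted(glob.glob('data/jan19/*.fits'))

# pull out each observing cycle's images by its start/end times
cycles = timesample([('02:07','03:01'), ('04:13','05:05'),
                     ('06:07','06:45')], imagelist)

for i, cycle in enumerate(cycles):
    print('cycle '+str(i)+': '+str(len(cycle))+' images')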

I cleaned up the structure of sorting, bias correcting, flatfielding, and then registering and stacking images, and ran it on the first night of data, Jan 19th. All was running smoothly until I checked our median combinations. They were blank, filled with row upon row of NaN (Not a Number) values. I checked the bias-corrected, flatfielded files and they looked normal, so I isolated the issue to the cross-correlation and median-combination function.

As it turns out, a few spots on the HDI CCD contain pixels that have died, meaning their flatfield values are very low (less than 1). Dividing a normal number in a science image by these very low values produces regions of very large pixel values (for example, 10 divided by 0.1 equals 100). This creates a very sharp gradient that the new cross-correlation function does not handle well; in response, the function returns NaN values that then propagate throughout the entire image.
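
To make the failure mode concrete, here’s a toy demonstration (with made-up numbers, not our actual flatfield values) of how one dead pixel blows up under division, and how a single NaN then poisons a plain median combine:

import numpy as np

flat = np.array([1.02, 0.98, 0.001, 1.01])  # one dead pixel in the flat
science = np.array([10., 10., 10., 10.])
print(science / flat)  # ~[9.8, 10.2, 10000., 9.9] -- the dead pixel blows up

# once a NaN sneaks in, a plain median combine propagates it
stack = np.array([[1., 2., np.nan],
                  [1., 2., 3.]])
print(np.median(stack, axis=0))     # [ 1.  2. nan]
print(np.nanmedian(stack, axis=0))  # [ 1.  2.  3.] -- the NaN-aware version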

The most egregious example of a bad pixel spot, which I’ve dubbed the “eldritch evil.” For comparison, normal stars have been labeled as well.

Once we found the regions of bad pixels on our detector, the way to handle them was to create a pixel mask; imagine patching your jeans with a strip of denim. Thankfully our instructor created an HDI-specific pixel mask and gave us a function, “badpixelcorrect,” which I’ve included below.

import numpy as np
from scipy import ndimage

def badpixelcorrect(data_arr, badpixelmask, speed = 'fast'):
    '''
    badpixelcorrect
    -----------------
    Performs a simple bad pixel correction, replacing bad pixels with image 
    median.
    Input image and bad pixel mask image must be the same image dimension.
    inputs
    ----------------
    data_arr      : (matrix of floats) input image
    badpixelmask  : (matrix of floats) mask of values 1.0 or 0.0, where 1.0 
    corresponds to a bad pixel            
    speed         : (str) whether to calculate the median filtered image 
    (computationally intensive), or simply take the image median. Default = 
    'fast'
    outputs
    ---------------
    corr_data     : (matrix of floats) image corrected for bad pixels
    '''
    corr_data = data_arr.copy()
    if speed == 'slow':
        # smooth the science image by a median filter to generate
        # replacement pixels
        median_data = ndimage.median_filter(data_arr, size=(30,30))
        # replace the bad pixels with the median of the local 30x30 box
        corr_data[badpixelmask == 1] = median_data[badpixelmask == 1]
    else:
        # replace the bad pixels with the global image median
        corr_data[badpixelmask == 1] = np.nanmedian(data_arr)
    return corr_data

The result of running the same image through the “badpixelcorrect” function: the eldritch evil has been defeated!

This function replaces troublesome pixels, as indicated by a pixel mask (essentially a grid that says whether a given pixel in the image is good or bad), with the median pixel value of the image – a fairly good measure of the background level of the image.
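
Applying it is then a one-liner once the mask is loaded; the filenames below are hypothetical placeholders (our actual mask was supplied by our instructor):

from astropy.io import fits

# load a flatfielded science frame and the HDI bad pixel mask
# (both filenames are hypothetical placeholders)
data = fits.getdata('hltau_r_flat.fits')
mask = fits.getdata('hdi_badpixelmask.fits')

clean = badpixelcorrect(data, mask, speed='fast')
fits.writeto('hltau_r_clean.fits', clean, overwrite=True)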

With that debacle settled, I was able to pass timesample-defined lists of images into our cross-correlation function and get real-valued images again! It’s important to align all our images to one reference image so they can be easily compared, analyzed, and combined, so I chose to align to the first night’s R-band image, and stacked each cycle (~3 per night). The results are a bit dimmer than a stack of a full night’s worth of data, but this means more finely sampled photometry later on in our analysis.
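
Our actual cross-correlation function came from the course materials, but a minimal sketch of the register-and-stack idea looks like this, here using scikit-image’s phase_cross_correlation as a stand-in:

import numpy as np
from scipy import ndimage
from skimage.registration import phase_cross_correlation

def register_and_stack(reference, frames):
    '''Align each frame to the reference image via cross correlation,
    then median combine the aligned frames.'''
    aligned = []
    for frame in frames:
        # estimate the (row, col) shift needed to register frame to reference
        shift, error, diffphase = phase_cross_correlation(reference, frame)
        # apply that shift to bring the frame into alignment
        aligned.append(ndimage.shift(frame, shift))
    # NaN-aware median combine of the aligned frames
    return np.nanmedian(aligned, axis=0)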

We also determined which members of the HL Tau field are foreground/background stars, and which are members of the Taurus association. We did this by overlaying on-sky coordinates (Right Ascension and Declination) on each pixel in our image of HL Tau using Astrometry.net, creating a list of coordinates for each star in our image, and then comparing those coordinates to a list of known Taurus association members on VizieR (a data catalog service). Doing this narrows down the targets whose photometry we will have to extract in the coming weeks. A sketch of the crossmatch step is below, followed by a table of the confirmed members in our HL Tau field.
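
The crossmatch itself boils down to a nearest-neighbor search on the sky. A minimal sketch with astropy (the coordinates and the 2″ tolerance below are made up for illustration):

import astropy.units as u
from astropy.coordinates import SkyCoord

# positions measured from our Astrometry.net solution, and member
# positions pulled from the VizieR table (all values hypothetical)
ours = SkyCoord(ra=[67.91, 68.10]*u.deg, dec=[18.23, 18.30]*u.deg)
members = SkyCoord(ra=[67.91, 68.97]*u.deg, dec=[18.23, 18.14]*u.deg)

# for each of our stars, find the nearest catalog member and its separation
idx, sep2d, _ = ours.match_to_catalog_sky(members)
matched = sep2d < 2.0*u.arcsec  # keep only matches within tolerance
print(idx, sep2d.arcsec, matched)

The on-sky separation returned here is essentially the delta r discussed below.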

Table of confirmed Taurus association members

Interesting to note is the delta r column, which shows how far off our estimates of RA and Dec were from the catalog’s. Also interesting is the disk column, which details the catalog’s record of whether or not the star has a circumstellar disk, and what type it may be. Because Taurus is such a young star-forming region, the stars in our field haven’t had the time to eat up or blow away their circumstellar disks, and therefore the disks we do see are labeled “full,” meaning they don’t have any holes or structure indicative of more mature systems.

A stacked R-band image of our HL Tau field in blue, with Taurus members

This week I skimmed a number of papers to prepare for my final proposal submission, but one that really caught my eye was not directly related to my proposal. The paper, recently posted to arXiv, is titled “13C17O suggests gravitational instability in the HL Tau disc” and is authored by Alice S. Booth and John D. Ilee. It details the second-ever measurement of a rare isotopologue of carbon monoxide, 13C17O, which improves the disk mass estimate for the system and implies gravitational instability and clumping in the disk.

Figure 1 from the paper, showing the clumpy distribution of 13C17O gas in the disk. This, along with a refined measurement of the disk mass and modeling of the disk’s radial Toomre Q parameter, implies portions of the disk are unstable.

This is interesting because, if you look carefully, HL Tau is (in addition to being the namesake of our image field) included in the sample of stars in our images. Our photometric monitoring of this object will likely be influenced by this clumpy disk.

Until next week, clear skies 🙂


