heart of darkness (dark subtraction)

In this series, I’ll walk through each of the steps necessary to go from taking data at an (optical) telescope to doing science with that data. This article goes into detail about dark current and the dark subtraction process. If you’d like to follow along and create your own image of the NGC 663 cluster, you can clone the github repository. I’ll be learning something too, because I haven’t studied the NGC 663 cluster in detail. The data I use in this explainer series was taken at the Amherst College Observatory in October 2019, with an 11in Cassegrain telescope and an SBIG CCD, and the sorted raw data is available to download from this website as a tarball. Much of the code I present here is modeled after code handed down to me by Sarah Betti and Kim Ward-Duong, who taught me most of what I know about data reduction (thanks guys!).

The first post in this series, which explains bias correction, is posted here.

Thermal noise

A CCD (discussed in the previous post) not only introduces inherent uncertainty and bias voltage noise into the images it gathers, but also introduces “thermal noise.” The camera, telescope, and the immediate environment, like all other warm bodies, emit photons. These thermal photons can get collected in the pixels of the CCD, but they do not arrive from the sky and therefore do not represent astrophysical signal. In order to accurately measure the photon flux of an astrophysical source, astronomers must therefore account for thermal noise in their instruments.

More precisely, thermal vibrations of the crystal lattice of the solids that comprise a CCD produce electron-hole pairs; the electrons gather in the pixels and are read out with the rest of the signal.

One way of mitigating thermal noise is to cool the CCD, so that fewer thermal photons are emitted near the detector and therefore fewer are collected while an image is being exposed. Even when a CCD is cooled substantially (for example, ACO’s Beta telescope’s CCD is cooled to about -20°C), significant thermal noise can still be present in images.

In most cases, thermal noise in an image grows linearly with time, because the CCD stays at a roughly constant temperature during observations. In a CCD, the rate of electron-hole pair generation depends on temperature as d(pairs)/dt ∝ T^(3/2) exp(−a/kT), where a is a constant and k is the Boltzmann constant. From this proportionality, you can see that if the temperature is locally constant, the generation rate is constant, and therefore pairs accumulate linearly over time. What this means is that the amount of thermal signal gathered in an image increases in a constant, predictable way. This is great news, because it means we can easily account for the thermal noise in our images and subtract it out with a simple piece of python code.
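To get a feel for why cooling the detector helps so much, here’s a quick back-of-the-envelope calculation using the proportionality above. The activation energy of 1.1 eV is my own assumption (roughly the silicon band gap); real devices will differ, but the qualitative conclusion holds:

```python
import math

def pair_generation_rate(T, a_eV=1.1):
    """Relative electron-hole pair generation rate, ~ T^1.5 * exp(-a/kT).

    T is in kelvin; a_eV is an assumed activation energy in electronvolts.
    Only ratios of this quantity are meaningful.
    """
    k_eV = 8.617e-5  # Boltzmann constant in eV/K
    return T**1.5 * math.exp(-a_eV / (k_eV * T))

# compare a room-temperature CCD (20 C) to one cooled to -20 C
warm = pair_generation_rate(293.15)
cold = pair_generation_rate(253.15)
print(f'cooling suppresses dark current by a factor of ~{warm / cold:.0f}')
```

With these assumed numbers, cooling by 40°C cuts the dark current by roughly three orders of magnitude, which is why even modest thermoelectric cooling pays off.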

The process of accounting for thermal noise is referred to as “dark subtraction,” and involves taking calibration images called “dark frames.” Dark frames take their name from the way they are collected: to isolate the thermal noise of the camera, the telescope and camera are shuttered, so that no outside signal is gathered. The CCD is exposed in the dark.

Like bias correction, dark subtraction involves taking a series of dark frames, taking their median, and subtracting that median from every image we take. Dark frames are commonly exposed for the same amount of time as our science images, so that they gather the same amount of thermal noise. In some cases, when dark frames of the same exposure time are unavailable, a dark frame can be scaled to the exposure time of the science image (because thermal noise grows linearly; note this only works on bias-subtracted darks, since the bias level does not scale with exposure time).
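In numpy terms, the whole procedure boils down to a median along the stack axis and one subtraction. Here’s a minimal sketch with fake data standing in for the arrays that fits.getdata would return (the shapes and exposure times are made up for illustration):

```python
import numpy as np

# stack of five fake 4x4 dark frames, each nominally a 60 s exposure
rng = np.random.default_rng(0)
darks = rng.poisson(lam=100, size=(5, 4, 4)).astype(float)

# master dark: pixel-wise median across the stack
master_dark = np.median(darks, axis=0)

# a fake 120 s science frame; scale the 60 s master dark to match
science = rng.poisson(lam=250, size=(4, 4)).astype(float)
scale = 120.0 / 60.0  # exposure-time ratio (valid because dark current grows linearly)
dark_subtracted = science - scale * master_dark
```

The median (rather than the mean) is what makes the master dark robust against cosmic-ray hits in any individual dark frame.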

Dark subtraction

Dark subtraction is functionally very simple; the majority of the following code involves simply reading in the data, preparing it for subtraction, and saving the resulting dark subtracted images. The dark subtraction step only occurs in one line! This script relies on the mediancombine.py file written in the last post, so be sure to have completed that step before utilizing this script (or writing your own).

# import statements
import os
import glob
import numpy as np
from astropy.io import fits
from mediancombine import mediancombine
from bias import bias_subtract, run_master_bias


def dark_subtract(filename, path_to_dark, outpath):
    '''
    performs dark subtraction on your flat/science fields.
    '''

    # open the flat/science field data and header
    frame_data = fits.getdata(filename)
    frame_header = fits.getheader(filename)

    # open the master dark frame with the same exposure time as your data.
    master_dark_data = fits.getdata(path_to_dark)

    # subtract off the dark current
    if frame_header['EXPTIME'] != fits.getheader(path_to_dark)['EXPTIME']:
        scale = frame_header['EXPTIME']/fits.getheader(path_to_dark)['EXPTIME']
        master_dark_data = scale * master_dark_data

    dark_subtracted = frame_data - master_dark_data

    new_filename = os.path.basename(filename)
    # write to disk
    fits.writeto(outpath+'/d'+new_filename, dark_subtracted, frame_header,
                 overwrite=True)
    return


def run_master_dark(fil):
    '''
    create master darks for each exposure time
    '''
    masterbiaspath = fil+'/Masters/MasterBias.fit'
    masterdarkpath = fil+'/Masters/'

    darkmaster_test = glob.glob(f'{fil}/Masters/MasterDark*.fit')
    if len(darkmaster_test) == 0:
        print('Making Master Darks')
        # create master dark
        dark_outpath = fil+'/darks'
        b_dark_test = glob.glob(fil+'/darks/b_*.fit')
        for im in b_dark_test:
            os.remove(im)
        dark_fits = glob.glob(fil+'/darks/*.fit')

        # bias subtract darks
        for darks in dark_fits:
            bias_subtract(darks, masterbiaspath, dark_outpath)

        # median combine bias subtracted dark frames with same exposure time
        b_dark_fits = glob.glob(fil+'/darks/b_*.fit')

        # sort darks into folders based on exposure time
        for b_darks in b_dark_fits:
            exptime = fits.getheader(b_darks)['EXPTIME']
            filname = os.path.basename(b_darks)

            if not os.path.exists(fil+'/darks/darks'+str(exptime)):
                os.makedirs(fil+'/darks/darks'+str(exptime))

            if not os.path.exists(fil+'/darks/darks'+str(exptime)+'/'+filname):
                os.rename(b_darks, fil+'/darks/darks'+str(exptime)+'/'+filname)

        # glob all folders
        b_dark_exptime_folder = glob.glob(fil+'/darks/darks*')
        # for each exposure time, take median of darks and write to disk
        for exp_folder in b_dark_exptime_folder:
            dark_time = os.path.basename(exp_folder)
            time = dark_time.split('s')[-1]
            print(f'exposure time {time}')
            b_dark_exptime_fits = glob.glob(exp_folder+'/*.fit')
            median_dark_exptime = mediancombine(b_dark_exptime_fits)
            print('path to dark: '+masterdarkpath+'MasterDark'+time+'.fit')
            fits.writeto(masterdarkpath+'MasterDark'+time+'.fit',
                         median_dark_exptime,
                         header=fits.getheader(b_dark_exptime_fits[0]),
                         overwrite=True)
    else:
        print(f'Master Darks: {darkmaster_test}')
    return


def bias_dark_sub_images(fil, targs):
    '''
    bias and dark subtract science targets
    '''
    for target in targs:
        print()
        print('------------o------------')
        print('target: ', target)
        print()
        masterbiaspath = fil+'/Masters/MasterBias.fit'
        masterpath = fil+'/Masters/'

        # bias subtract targets
        bias_images = glob.glob(f'{fil}\\{target}\\b_*.fit')
        scidata = glob.glob(fil+'\\'+target+'\\*.fit')

        filters = []

        if len(bias_images) != len(scidata):
            [os.remove(im) for im in bias_images]
            print('Bias subtracting ')
            for sci_image in scidata:
                filtername = fits.getheader(sci_image)['FILTER'][0]
                sci_outpath = fil+'\\'+target+'\\'+filtername+'band'
                if not os.path.exists(sci_outpath):
                    os.makedirs(sci_outpath)
                bias_subtract(sci_image, masterbiaspath, sci_outpath)
                filters.append(filtername)
        filters = np.unique(filters)

        # dark subtract bias targets
        for filtername in filters:
            b_scidata = glob.glob(fil+'\\'+target+'\\'+filtername+'band\\b_*.fit')

            dark_images = glob.glob(f'{fil}\\{target}\\{filtername}band\\db*.fit')

            if len(dark_images) != len(b_scidata):
                print('Dark subtracting ', filtername, ' band')
                [os.remove(im) for im in dark_images]
                sci_outpath = fil+'\\'+target+'\\'+filtername+'band'
                for b_sci_image in b_scidata:
                    exptime = fits.getheader(b_sci_image)['EXPTIME']
                    if os.path.exists(masterpath+'MasterDark'+str(exptime
                                                                  )+'.fit'):
                        masterdark = masterpath+'MasterDark'+str(exptime
                                                                 )+'.fit'
                    else:
                        masterdark = glob.glob(masterpath+'MasterDark*.fit'
                                               )[-1]

                    dark_subtract(b_sci_image, masterdark, sci_outpath)

    return


if __name__ == '__main__':
    # Running as a script
    root = os.getcwd()
    fil = root+'\\Data\\20191025'
    targs = ['NGC 663', 'Entered Coordinates']
    run_master_bias(fil)
    run_master_dark(fil)
    bias_dark_sub_images(fil, targs)

Note that this script includes the bias subtraction step, so you can simply run dark.py and complete the two previous steps in one go. This will be the same for the next scripts as well, so if you are using these scripts instead of importing the functions to make your corrections in an interactive environment, make sure you keep track of your files and aren’t running the script on already bias-subtracted data (or you’ll be over-subtracting).
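If you’re worried about accidentally re-running a step, one simple safeguard is to check the filename prefix before subtracting. This is just a sketch based on the b_/db naming convention the script above uses; adapt it to however you name your files:

```python
import os

def needs_bias_subtraction(filename):
    """Return True if this frame has not yet been bias subtracted,
    judging purely by the b_/db filename prefixes used in this series."""
    name = os.path.basename(filename)
    return not (name.startswith('b_') or name.startswith('db'))

print(needs_bias_subtraction('ngc663_V_001.fit'))     # True
print(needs_bias_subtraction('b_ngc663_V_001.fit'))   # False
print(needs_bias_subtraction('db_ngc663_V_001.fit'))  # False
```

A filename check like this is crude (a FITS header keyword recording each processing step would be more robust), but it catches the most common mistake.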

From here, our images are ready to be flatfielded.

The next post will detail “flat fielding” and bring us to the end of our data reduction journey. From there, we will analyze our data and enjoy the fruits of our labor. Until then, you can star the github repo or follow this blog. Clear skies, and happy astronomizing!
