Week 2: Overscan, don’t overthink

This week we were tasked with correcting our images for bias. Below, I first cover what bias is and why it's important; then I discuss my pseudocode and present my bias correction function.

Biasing is an electronics term describing the application of a voltage or current to a circuit (or part of one) to create more favorable operating conditions. As a piece of electronics, the Charge-Coupled Device (CCD) camera we took images with applies a bias to its pixels in order to avoid negative values (which require more space to read and store than positive integers).

In a perfect world, this would result in every pixel in every image having the same single offset from zero; we could simply subtract that value from our images after taking them to recover the “true” pixel values. A bias frame essentially attempts to capture this value. By taking a zero-second exposure, letting no light or heat build up on the detector, a bias frame should have pixel values equal to only the applied bias voltage; you can then subtract this (ideally uniform) image from a science image to remove the bias. In reality, the bias voltage isn’t uniformly applied, and a CCD’s amplifier cannot convert voltages to digital units with perfect accuracy. In this way, a bias frame captures more information to help us mitigate these sources of error.
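In that idealized uniform-bias case, the correction is a single array subtraction. Here is a minimal NumPy sketch with toy numbers (not real HDI data) showing that subtracting a perfect bias frame recovers the true pixel values:

```python
import numpy as np

# Toy illustration: a "true" scene plus a uniform applied bias offset
rng = np.random.default_rng(0)
true_image = rng.poisson(50, size=(4, 4)).astype(float)  # idealized sky counts
bias_level = 3000.0                                      # uniform bias offset

raw_image = true_image + bias_level        # what the detector would report
bias_frame = np.full((4, 4), bias_level)   # a perfect zero-second exposure

corrected = raw_image - bias_frame         # recovers the true pixel values
```

Real bias frames are not this uniform, which is exactly why we combine many of them and monitor the level with overscan.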

Readout refers to the CCD’s process of recording the voltage induced in a given pixel and converting that voltage into some unit of ‘counts’ (which gives a sense of the relative brightness one pixel records versus another). A signal is induced in a pixel when photons from an astronomical source strike it and (via the photoelectric effect) generate some voltage across the pixel. You could imagine that other processes within the camera might cause thermal photons to strike a pixel, adding voltage; that there is some inherent uncertainty in the measurement of that voltage; or that, depending on how long it takes for a given pixel to be read out (in a camera with thousands of pixels), the voltage might change. These are some of the sources of noise in images.

A bias frame is then an (imperfect but effective) way to capture any spurious readout patterns or effects and remove them from science images, in addition to correcting for the applied bias voltage. Let’s look at some examples from the last night of our observing run.

An example bias frame from Jan. 26th, 2020. Pixel values are colored according to the colorbar on the left-hand side of the image. Notice the subtle patterns in the image.
A master bias (median combination) for the Jan. 26th, 2020 observations.
A kernel density estimate (a smoothed-out histogram) of the pixel values in the master bias. Overplotted in black is the median value, ~3000. We can infer that the bias voltage pushes the detector values up by about 3000 counts.
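The master bias above is a median combination of the night's bias frames. A short sketch of that step, using a simulated stack in place of real frames (real ones would come from `fits.getdata`), might look like:

```python
import numpy as np

# Simulated stack of 10 bias frames: the ~3000-count bias level plus read noise
# (the shape and noise level here are illustrative, not actual HDI values)
rng = np.random.default_rng(1)
bias_stack = 3000.0 + rng.normal(0.0, 5.0, size=(10, 64, 64))

# Median-combine along the stack axis; the median is robust to outliers
# (e.g. cosmic-ray hits) that would skew a mean combination
master_bias = np.median(bias_stack, axis=0)
```

Using the median rather than the mean is what keeps a single bad frame or cosmic-ray hit from contaminating the master bias.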

In order to monitor the bias level throughout the night, the HDI camera employs overscan. At the edge of each image it reads out a strip of a hundred or so bias pixels, essentially capturing a tiny piece of a bias frame alongside every science image. By taking the median of this region, we get an approximate measure of the bias level for that image. The ratio between this overscan median and the master bias median indicates how far the bias level has drifted between the beginning of the night (when bias frames are taken) and the observation. We can then correct for this drift by scaling our master bias by this ratio before performing the bias subtraction. Our pseudocode from last week’s class lays this process out:

1) night's bias frames  →  median combine = master bias
2) overscan region → median = numerator
3) master bias → median = denominator
4) numerator/denominator = scale factor
5) bias corrected image = image - (scale factor * master bias)

Our Python code follows the same structure:

import numpy as np
from astropy.io import fits

def biascorrect(path_to_img, path_to_master_bias):
    """
    Subtract a master bias, scaled by the overscan level, from a given
    HDI image.

    The two arguments are:
    1) a path to a science image (the minuend) and
    2) a path to a master bias image (a factor of the subtrahend)

    The function uses the median of the overscan corner to scale the master
    bias before bias correcting the image.

    The corrected image is then written to a FITS file with a '_b' suffix.

    *Note that this process is specific to both the image dimensions of an
    HDI image and a CCD that records overscan columns*
    """
    # gather image information
    img = fits.getdata(path_to_img)
    head = fits.getheader(path_to_img)
    mbias = fits.getdata(path_to_master_bias)

    # scale factor: overscan median of this image / master bias median
    scale = np.nanmedian(img[4100:4140, 4100:4140]) / np.nanmedian(mbias)

    # perform subtraction of the scaled master bias
    bias_sub_img = img - scale * mbias

    # save the subtracted image
    new_img_name = path_to_img.replace(".fits", "_b.fits")
    fits.writeto(new_img_name, bias_sub_img, head, overwrite=True)

    # let the user know what's happened
    print('Wrote bias subtracted image to ' + new_img_name)
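To correct a whole night of data, the function can be called in a loop over the science frames. The paths below are hypothetical placeholders (not the actual directory layout from our run), and this assumes `biascorrect` is defined as above:

```python
from glob import glob

# Hypothetical paths; substitute the actual locations of your data
science_frames = sorted(glob("jan26/*.fits"))
master_bias_path = "jan26/master_bias.fits"

for frame in science_frames:
    # writes each corrected image as *_b.fits alongside the original
    biascorrect(frame, master_bias_path)
```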

The results can be seen in this visual-band image of HL Tau, first with bias and then without. Notice how the colorbar values shift from being centered around 3000 to around 0.

A raw science image of HL Tau in V-band from Jan. 26th 2020
A bias corrected image of HL Tau in the V-band from the Jan. 26th 2020 observations. The background has been corrected for the bias voltage, as can be seen via the colorbar.

The feedback I received on my proposal outline from Owen was really helpful! He suggested making the context of my abstract and science justification sections align and pose similar questions, and homing in on what I plan to accomplish with the project: what products I want to create and what results I expect. He also told me I had a good scientific justification.

My next steps for my proposal are as follows:

  • Research: do brown dwarfs (BDs) have the same angular momentum problem as main-sequence (MS) stars?
  • Rework abstract to align with scientific justification
  • Decide what is most important to get out of data, focus the project scope

In addition to the Herbst 2007 paper, I found a number of references for my outline. The one I chose for this week is Esplin et al. 2014, “A WISE Survey of Circumstellar Disks in Taurus,” which I hope to use to determine how we can quantify the presence of a disk around a target, and how we might use that data to make an interesting comparison between the rotations we find for disk and non-disk systems in our field.

My annotated bibliography for Herbst 2007 is attached as a pdf here.



Comments

2 responses to “Week 2: Overscan, don’t overthink”

  1. Katherine

    So you are imagining various possible margins of error so you can correct for them later? Don’t mind me. I have no idea what I’m talking about.
    But I’m going to return to this and reread it until I see a glimmer of light far off on the edge of the universe and can say, Ah, ha, I think I’ve got it.
    Or just, Ha, I think it’s got me. Time will tell. Laterz, GrK

    1. willb

      Close but not quite! We dedicate a few columns on the margin of our images to measuring the current noise level of the camera, and we use that measurement to refine our corrections we perform in preprocessing our images (trust me, no one knows what we’re talking about!)
