Week 3: Trouble in Taurus

This past week was Valentine's Day, and I wanted to make something pretty, so I made a preliminary three-color image of our HL Tau field, which includes L1551 (a really pretty molecular cloud) and a handful of Herbig-Haro (HH) objects: dramatic outflows, jets, and other signs of star-forming activity. More on that later in the post.

In order to make a color image of the field, I had to make sure the data was properly reduced (bias subtracted, but also corrected for dust, debris, optical aberrations and more) and then registered (all the images are aligned to one another). Then I can stack multiple images together and assign them different colors, depending on the kind of filter each image was taken through.

First, I had to finish correcting each image. After bias correction (detailed in the last post), the error inherent to the Half Degree Imager (HDI) camera has been mitigated, but errors incurred from other obstacles along the light's path to our detector still remain. Aside from building better telescopes, hoping for clear conditions, or using space-based telescopes, we don't have many ways of mitigating the uncertainty we accrue from the light's path through our atmosphere.

What we can account for at the WIYN 0.9m is the variation in the pixel-to-pixel sensitivity of the detector. This helps mitigate problems caused by dust or other foreign material in the optics, the shape of the optics and the distortions those shapes create in the image, and badly behaved pixels or patterns of pixels. We create a "flat-field" by uniformly illuminating the detector, which reveals those variations: if the same number of photons is passed to two pixels, and one reads a pixel value of 100 while the other reads 10, we then know how sensitive our pixels are relative to each other and can correct for that variation (important to do, because we want to measure the brightness of stars in our images).

Flat-fields, like any other image we take on the telescope, are first bias-corrected. We assume the pixel-to-pixel variation of our detector won't change much during a given night of observing, so we take a number of flat-fields at the beginning of the night, combine them, and use these "master flats" to correct each subsequent science image. We take flat-fields in each filter we observe in, because different wavelengths of light are often affected differently by obstacles in their path, and this results in a different pixel-to-pixel variation depending on the filter. As an illustration, you could imagine a more energetic photon barging past an obstacle while a less energetic photon gives the obstacle a wide berth; you could also imagine one pixel in the detector being more inclined to let in certain types of photons while another pixel has a different preference. By taking flat-fields in each filter we account for these effects.

HDI R-band master flat for Jan. 26th, 2020. Notice the line of dim pixels through the center of the image, the multiple dust spots and speckles, as well as the vignetting at the corners of the image.

As you can see in this example master flat, there are many pixel-to-pixel variations to account for in our images. Noticeable are the vignetting around the corners of the image, the various dust donuts, and, if we zoom in… the dreaded strand of hair. "What the heck is a dust donut?" I imagine hearing you ask. Because the dust is so close to the camera compared to the objects the telescope focuses on (stars incredibly far away), the dust is so far out of focus that it appears as circular donuts of darkened pixels. Zooming into the center of the image reveals another obstacle… The Hair.

Ominous music as The Hair is revealed.

We normalize our master flats to the median of the image, so that a master flat-field's pixel values are centered around 1. This way, we can divide our science images by our master flat, and pixels with obstructions (which will have flat-field values < 1) will be slightly boosted, while pixels that are too sensitive (values > 1) will be toned down. A great illustration of this process is the following video, which shows the same science image blinked between uncorrected and flat-fielded versions. You can see the vignetting disappear in the first half of the video, and the hair disappear as the flat-fielding is applied in the second half.

I proceeded to apply this flat-fielding procedure to my night of data, following my bias correction from last week. Because this process was almost exactly the same as what I had done in a previous class, it went off without a hitch. My only modifications to the code from that class were to make it easier to pass path strings to the functions. Below is my code for creating a master flat and for flat-fielding a science image.

# imports needed for these functions
from astropy.io import fits
import numpy as np

# flatfield division function
def flatfield(filename, masterflat, color=''):
    '''
    This function takes an image file and divides it by a master flat image
    file.
    '''

    # read in data, headers.
    data = fits.getdata(filename)
    header = fits.getheader(filename)
    mflat = fits.getdata(masterflat)
    filt = color
    # divide science by flat
    div = data/mflat
    # write to fits file (assumes a Windows-style path with one folder level)
    path, file = filename.split('\\')
    file = file.replace('.fits', '')
    newpath = path+'\\'+filt+'-flat-'+file
    fits.writeto(newpath+'.fits', div, header, overwrite=True)
    # affirm the code monkey
    print(filename+' Successfully flattened')
    return

# norm combine flats
def norm_combine(filelist, save='n', name='med'):
    '''
    This function takes in a list of image files and adds them to a cube
    with dimensions img[y], img[x], len(filelist), median-combines them
    along the 3rd axis, and normalizes the result into a master image.
    The function can be commanded (save='y'/'n') to write out the median
    frame or return it to be held in memory.
    A name for the written file can be specified if writing via save='y'.
    '''
    # saves length of list as variable
    n = len(filelist)

    # gather first image info
    first_frame_data = fits.getdata(filelist[0])
    first_frame_head = fits.getheader(filelist[0])

    # saves shape of images as variable
    imsize_y, imsize_x = first_frame_data.shape

    # creates empty stack of depth n
    fits_stack = np.zeros((imsize_y, imsize_x, n))

    # adds all images in list to stack
    for ii in range(0, n):
        im = fits.getdata(filelist[ii])
        fits_stack[:, :, ii] = im

    # takes median of stack, saves as var, normalizes the median
    med_frame = np.median(fits_stack, axis=2)
    med_frame = med_frame/np.median(med_frame)

    # save or return
    if save == 'y':
        fits.writeto(name+'.fits', med_frame, first_frame_head,
                     overwrite=True)
        print('Median file written as '+name)
        return
    elif save == 'n':
        print('Returned median combination')
        return med_frame
    else:
        print('Incorrect save argument given, please input either y or n.'
              '\n Will return file in memory to preserve disk space')
        return med_frame
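
For context, here's roughly how these two functions string together for a night of data. The folder name and filename patterns below are hypothetical stand-ins, not my actual file names:

import glob

# gather the (already bias-corrected) R-band flats for the night
# ('data\\' and the filename patterns are hypothetical)
r_flat_list = glob.glob('data\\b-flat-r*.fits')

# build and save the R-band master flat
norm_combine(r_flat_list, save='y', name='data\\masterflat_r')

# flat-field every R-band science image with that master flat
for sci in glob.glob('data\\b-hltau-r*.fits'):
    flatfield(sci, 'data\\masterflat_r.fits', color='r')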

The next step in the data reduction process is to register the images, aligning each image to every other image so that we can combine them as we see fit. I did this following a previous class's instructions to make my rough color-composite image for Valentine's Day, but I'll detail those steps when we finalize them next week.
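Since I'm saving the details for next week, here is only a minimal sketch of what registration can look like, using the astroalign package; that choice of tool, and the filenames, are assumptions for illustration, not necessarily what we'll end up using:

# a minimal registration sketch using astroalign (not necessarily our final method)
import astroalign as aa
from astropy.io import fits

reference_frame = fits.getdata('data\\r-flat-b-hltau-r001.fits')   # hypothetical reference frame
frame_to_align = fits.getdata('data\\ha-flat-b-hltau-ha001.fits')  # hypothetical frame to align

# astroalign matches star patterns between the two frames and returns the
# first argument transformed onto the second argument's pixel grid
registered, footprint = aa.register(frame_to_align, reference_frame)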

The result of aligning, combining, and colorizing a night's worth of our observations of the region surrounding HL Tau was pretty striking.

A rough composite RGB image of one night of data on our HL Tau field; Red is R-band, Green is H-alpha continuum, Blue is H-alpha

I used our narrowband hydrogen emission filter for the blue channel and the R filter for the red channel (green barely shows up because I foolishly used a very dim narrowband filter for the green channel). With only a night of data, the molecular cloud L1551 to the south of the HL Tau system is revealed in beautiful detail, along with a couple of other interesting features.
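For anyone curious how a composite like this can be assembled in Python, here's a rough sketch using astropy's make_lupton_rgb. The stacked-frame filenames and the stretch values are placeholders, not my actual settings:

from astropy.io import fits
from astropy.visualization import make_lupton_rgb

# hypothetical registered, stacked frames for each channel
r = fits.getdata('data\\stack_r.fits')       # R-band -> red
g = fits.getdata('data\\stack_hacont.fits')  # H-alpha continuum -> green
b = fits.getdata('data\\stack_ha.fits')      # H-alpha -> blue

# make_lupton_rgb scales and combines the three channels into an 8-bit RGB image;
# stretch and Q control the brightness scaling and are placeholder values here
rgb = make_lupton_rgb(r, g, b, stretch=0.5, Q=10, filename='hltau_rgb.png')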

Zooming in on the molecular cloud L1551 and the HH28 outflow, which initially appears to be originating from the pink star at the center of the lower-right inset

I had intended to read a paper (Esplin et al. 2014) which discusses circumstellar disks in the Taurus star-forming region, but after making this false-color image I was really taken with the structure of the molecular cloud. I did some digging into L1551 and found there's a lot of interesting stuff going on. I ended up reading a lot of papers (Graham & Heyer 1990, Carkner et al. 1996, Devine et al. 1999, Schneider et al. 2011, and Takakuwa et al. 2017), but for the purposes of this blog prompt I'll focus on Devine et al. 1999 (in part because their data were also taken at Kitt Peak National Observatory).

The HL Tau field, labeled after my bushwhack through the SIMBAD catalog. Note a misprint: "IRS" appears as "IRC" in this image, which I made while watching TV.

First, I used SIMBAD to identify as many of the interesting-looking objects in the field as I could before diving into published results. Carkner et al. 1996 contend that the bright star to the northeast of HL Tau – HD 285845 – is a foreground, high proper motion star. More interestingly, they also contend that LP 415-1166 is a high proper motion foreground star, which is only superimposed by chance onto the HH 28 outflow feature.

“Source 13 is LP 415-1165 [sic, 1166], a foreground dM star… It is projected onto the L1551 IRS 5 bipolar flow in the middle of the cloud (Cudworth, Herbig 1979).” If LP 415-1166 isn’t responsible for the butterfly-like outflow, then what is?

Devine et al. would contend that it's a "deeply embedded class 0 source" called L1551 NE. According to the paper, it was previously thought that both HH 28 (our little butterfly) and HH 29, another knot of hydrogen emission snuggled up against L1551, were driven by a jet propagated from L1551 IRS 5. Both sources are complex, active protostars: L1551 IRS 5 is a pair of circumstellar disks surrounded by a circumbinary envelope (Pyo et al. 2009, Schneider et al. 2011) which produce the powerful jets that illuminate much of the L1551 cloud, while L1551 NE is a similar pair of circumstellar disks surrounded by arcs and spiral arms of circumbinary material (Takakuwa et al. 2017).

Figure 1 from Devine et al., below, shows the alignment of the L1551 NE system and its jet relative to the HH 28/29 outflows, as well as the sources they've revealed that appear to be driven by L1551 IRS 5. What's incredible about this system is that it is being illuminated by various jets (HH 30, L1551 NE, L1551 IRS 5), each of which produces gorgeous features in the surrounding ionized material.

Devine et al. 1999, Figure 1: the nearly parallel jets from these two active systems are likely responsible for different HH features in the field.

Devine et al. use the positions as well as the proper motions of the clouds to support this argument, contending that because HH 29 has proper motions directed away from L1551 NE, it is likely being generated by that jet. All this goes to show just how interconnected star-forming regions are and how intense growing up can be.

This coming week is gonna be a bit tight, because I have to have a draft of my project proposal done by Wednesday night. Hopefully I can quit looking at the clouds and get to writing!

