what does an exoplanet look like?

“not much!” “ohheehehehehehe”

What does an exoplanet look like? Our current technology allows us to resolve the surfaces of the planets in our own solar system, but not the surfaces of planets orbiting other stars. So, we don’t know for certain what any exoplanet “looks like” in the same sense as we know what Jupiter “looks like” to our eye as we stare at it through a telescope’s eyepiece.

Nevertheless, large telescopes have allowed us to take unresolved pictures of exoplanets around a few dozen nearby stars, in many different colors of light. Indirect techniques have revealed the signatures of many thousands of exoplanets by their small influence on their host stars. Some of these observations are sensitive to variations in temperature or composition across the surfaces of these planets, giving us very, very coarse maps of the temperature or weather on these worlds.

This raises a classic science communication conundrum: how can one represent technical, scientific data accurately, honestly, and intuitively?

In astronomy, as in all other fields of science, data takes many steps of processing, analysis, and interpretation on its way from a telescope to a peer-reviewed publication. I work to take pictures of exoplanets. The burden of proof for any scientific claim I make about that work is decided by peer review and scientific consensus, but what is the burden of explainability for my work? There is not yet a widely accepted, workable answer to that question.

If you downloaded the uncalibrated, unprocessed, “raw” images of a galaxy from the James Webb Space Telescope and opened them in your computer’s default image viewer, you’d see an unimpressive black field of dark pixels pocked by tiny bright specks. These bright pixels are referred to as cosmic ray hits (literally rogue high-energy particles striking the detector). The astrophysical signals you see in vibrant press release images or astrophotographic prints are the result of calibrating, cleaning, and stacking many separate images and assigning them RGB (or HSV) colors. The calibration and cleaning processes applied to images are typically done algorithmically: astronomers have mathematical models that can very accurately describe the noise and stochastic imperfections in our data, and can subtract these modeled noise sources from our observations to reveal otherwise hidden astrophysical signals. These steps are relatively “objective” because they are intended to reveal the “true” signals buried in noisy data.

Aesthetic, artistic choices go into mapping (invisible, to the human eye) infrared colors onto visible colors, and stretching or resizing these images to fit prints, computer screens, or three dimensional models. These choices are often quite subjective, and can prompt both useful discussions about the nature of the observations and pointed criticism from skeptical audiences.
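
To make the “cleaning and stacking” step concrete, here is a minimal sketch (in Python, with hypothetical variable names) of how many calibrated exposures of the same field might be combined while rejecting cosmic ray hits, which appear in only one exposure at a time. Real pipelines do considerably more, but a sigma-clipped average captures the basic idea.

```python
# A minimal sketch of the "cleaning and stacking" step, assuming we already
# have a stack of calibrated exposures of the same field as a 3D array.
# Cosmic-ray hits appear in only one exposure, so a sigma-clipped average
# rejects them while keeping the persistent astrophysical signal.
import numpy as np
from astropy.stats import sigma_clip

def stack_exposures(cube, sigma=5.0):
    """cube: array of shape (n_exposures, ny, nx) of calibrated images."""
    clipped = sigma_clip(cube, sigma=sigma, axis=0)    # mask outlier pixels
    return np.ma.mean(clipped, axis=0).filled(np.nan)  # average the survivors

# stacked = stack_exposures(calibrated_cube)  # hypothetical input array
```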

Regardless of the inherent subjectivity of the final few steps taken when translating astrophysical images into paper figures and aesthetically pleasing images, the telescope and method of observation impose their own massive barriers to the interpretability of an experiment’s data. The very telescope that one uses, and the color of light one chooses to observe in, among other scientific choices, will dramatically influence how a given observation appears. The absence or presence of signals in the data used for scientific analysis will warrant explanation to the public when the observations are presented.

If I want to directly image an exoplanet, I have to contend with that planet’s host star, only a fraction of a thousandth of a degree (less than an arcsecond) away from the planet on the sky, and typically a thousand to a million times brighter than the planet. The most common refrain is that we attempt to image a firefly next to a lighthouse, viewed from across the continent. My science relies on “starlight suppression” using a coronagraph or interferometer, and “starlight subtraction” using optical and statistical modeling, to remove the signal of the host star and reveal its planet in my images.
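
As a toy illustration (not my actual pipeline), “starlight subtraction” in its simplest form amounts to scaling a reference image of the stellar PSF to best match the science frame and subtracting it; the variable names below are hypothetical.

```python
# A toy version of "starlight subtraction": scale a reference star image to
# best match the science frame in a least-squares sense, then subtract it.
# Real pipelines use far more sophisticated optical and statistical models.
import numpy as np

def subtract_starlight(science, reference):
    """science, reference: 2D arrays of the target and a reference star PSF."""
    scale = np.sum(science * reference) / np.sum(reference**2)
    return science - scale * reference

# residual = subtract_starlight(target_image, reference_star_image)
```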

A cartoon of exoplanet imaging – a simple coronagraph is placed in front of a bright star, revealing a faint exoplanet nearby. NASA

This means the triumphant, end-result images I work towards include very little signal from the star itself. I might create a figure for a paper where I mark the location of the star with a cartoon, but this decision can draw raised eyebrows (or, more often, lighthearted jokes) from viewers suspicious of a NASA branded image with a cartoon star at the center.

My image of the cold, giant exoplanet 14 Herculis c, from JWST. A coronagraph blocks the central star, which is indicated by a cartoon circle and star, and labeled. The exoplanet is labeled, and appears as a point of light to the lower left, outside the coronagraph. NASA, ESA, CSA, STScI, William Balmer (JHU), Daniella Bardalez Gagliuffi (Amherst College)

Besides the difficulty of indicating the presence of a very real, but very obscured star, the “look” of an unresolved planet depends strongly on the geometry of the telescope and instrument optics used to take the image. This is because, thanks to the wave nature of light, a given train of optics will spread and refocus light in a very particular pattern, called a (don’t get mad) “point spread function” or PSF. Even your eyeball, with its lens and cornea, has a characteristic PSF.

PSF models for different telescope apertures, showing how each component of the telescope contributes to the pattern of a single point source seen by JWST. R. B. Makidon, S. Casertano, C. Cox & R. van der Marel, STScI/NASA/AURA (from JWST-STScI-001157, SM-12)
The PSF of a human eye with different levels of aberration, for three sizes of pupil. G. Westheimer (1970), Optica Acta
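
For the curious, here is a minimal sketch of where a PSF comes from: in the far-field (Fraunhofer) approximation, the PSF is the squared magnitude of the Fourier transform of the telescope’s aperture, so a hexagonal, segmented pupil like JWST’s yields spikes while a circular pupil yields Airy rings. The grid and pupil sizes below are arbitrary.

```python
# A minimal sketch of computing a PSF from a telescope aperture: the far-field
# PSF is the squared magnitude of the Fourier transform of the pupil.
import numpy as np

def psf_from_aperture(aperture):
    """aperture: 2D array, 1 where the pupil passes light, 0 elsewhere."""
    field = np.fft.fftshift(np.fft.fft2(aperture))  # complex field at focus
    psf = np.abs(field) ** 2                        # intensity pattern
    return psf / psf.max()

# Example: a simple circular pupil on a 512x512 grid (Airy-ring-like PSF)
n = 512
y, x = np.indices((n, n)) - n // 2
circular_pupil = (np.hypot(x, y) < 50).astype(float)
airy_like_psf = psf_from_aperture(circular_pupil)
```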

Single, bright sources of light (like headlights or a street-lamp at night) make the aberrations of the PSF of your eye more noticeable. The same goes for astronomical images. An image of a large galaxy might not take on a very hexagonal-looking shape, because the PSF is “convolved” with the shape of the galaxy itself, and each part of the galaxy is relatively faint on its own. The PSF blends together across the extended source and you’re left with a beautiful picture. Nearby bright stars, lying in between JWST and said galaxy, will appear quite sharp and exhibit the typical six spikes of the telescope’s PSF.

This JWST/NIRCam image of Stephan’s Quintet of galaxies illustrates this effect. Nearby, photobombing stars (e.g. top right) have sharp PSF diffraction features. The galaxies themselves are bright, but no one part of any of a galaxy is bright enough compared to the rest to exhibit the characteristic diffraction pattern. NASA, ESA, CSA, STScI.
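
Said another way, an observed image is (roughly) the true scene convolved with the PSF, so a point-like star reproduces the full spiky pattern while a smooth, extended galaxy washes it out. A rough sketch, with hypothetical model arrays:

```python
# A rough sketch of why stars show diffraction spikes but extended galaxies
# don't: the observed image is the true scene convolved with the PSF. A single
# bright pixel (a star) reproduces the full spiky PSF; a broad, smooth source
# blurs the spikes away.
import numpy as np
from scipy.signal import fftconvolve

def observe(scene, psf):
    """Convolve a model scene with a normalized PSF (both 2D arrays)."""
    return fftconvolve(scene, psf / psf.sum(), mode="same")

# star = a delta-function scene; galaxy = a broad Gaussian blob (hypothetical)
# star_image = observe(star, jwst_like_psf)      # shows the spikes
# galaxy_image = observe(galaxy, jwst_like_psf)  # spikes blend out
```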

The PSF of any telescope changes depending on the color of light being observed, because redder colors of light have a lower spatial resolution, since they physically have a longer wavelength. As you increase the wavelength of a JWST PSF, you can see the central core and wings become fatter. This tends to “round out” or blur longer wavelength images from the telescope.

PSFs computed for different near infrared filters on JWST. F stands for filter, and the number stands for the wavelength in microns, e.g. 0.9 or 2.77 microns. W and M stand for wide or medium, filters that span a wider or narrower range of colors in the near infrared spectrum. Medium band filters, which admit a narrower range of wavelengths, appear sharper because their diffraction pattern is smeared across fewer wavelengths. Filters centered on shorter wavelengths also appear sharper overall, because short wavelengths have higher resolution than long wavelengths. JWST Documentation
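
The scaling behind this is simple: the angular resolution of a telescope goes roughly as 1.22 λ/D. A quick back-of-the-envelope check, assuming JWST’s ~6.5 meter aperture:

```python
# Back-of-the-envelope resolution scaling: theta ~ 1.22 * wavelength / diameter.
# Longer wavelengths give coarser (fatter) PSFs; 6.5 m is roughly JWST's aperture.
import numpy as np

def resolution_arcsec(wavelength_m, diameter_m=6.5):
    theta_rad = 1.22 * wavelength_m / diameter_m
    return np.degrees(theta_rad) * 3600.0

print(resolution_arcsec(2.0e-6))   # ~0.08 arcsec at 2 microns
print(resolution_arcsec(4.6e-6))   # ~0.18 arcsec at 4.6 microns
```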

Even after the telescope aperture, the coronagraphs needed to block bright starlight and reveal faint planets impart their own strange optical footprints onto JWST’s PSF. I wrote about seeing a physical copy of the coronagraphs I use in my own research in Tucson earlier this year. On JWST, these coronagraphs create a “position dependent” PSF that changes depending on how far away from the center of the coronagraph a source is. These coronagraphic PSFs have a distinctly “flower petal” shape thanks to the strange interactions between the coronagraphic optics and the telescope aperture.

In the course of an exoplanet observation, scientists often take multiple images in different filters to assess what kinds of light a planet is emitting or absorbing. Planets are extremely faint in optical light, but relatively bright in the infrared, so the colors they emit can’t be seen by humans. We have to map infrared colors to visible colors in order to make false-color composites (this is true for nearly any JWST image, but not necessarily true for a telescope like Hubble, which observes UV and optical light). My preference is to assign shorter infrared wavelengths to bluer colors, and longer wavelengths to redder colors, though other astronomers have different opinions.
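
A minimal sketch of what that color assignment can look like in practice, assuming three aligned, calibrated filter images (the variable names below are hypothetical) mapped to the red, green, and blue channels:

```python
# A minimal sketch of building a false-color composite from three aligned,
# calibrated infrared filter images: shortest wavelength -> blue, longest -> red.
import numpy as np

def false_color(short_img, medium_img, long_img, stretch=99.5):
    """Each input is a 2D image; returns an (ny, nx, 3) RGB array in [0, 1]."""
    channels = []
    for img in (long_img, medium_img, short_img):    # red, green, blue order
        hi = np.nanpercentile(img, stretch)          # clip the brightest pixels
        channels.append(np.clip(img / hi, 0.0, 1.0))
    return np.dstack(channels)

# rgb = false_color(f410m_image, f430m_image, f460m_image)  # hypothetical names
```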

This combination of effects results in the image I shared a few years ago, of JWST’s first image of an exoplanet (not the first image of an exoplanet, those were taken on the ground nearly 20 years ago, but the first image of an exoplanet using JWST). The gas giant planet HIP 65426 b is the white source in the center, just to the lower left of the star symbol. You can see the strange flower petal shape of the PSF both in the planet’s shape and in the blue-tinged background stars in the upper right corner.

This flower petal patterned PSF isn’t always visible – sometimes our detections of planets are difficult enough that only the very core of the PSF stands out, while the rest of the PSF is buried under the residual noise from the bright starlight. These cases are actually easier to explain, because the very core of the PSF is still round and looks enough like the point source you would expect from a simple circular aperture. That was the case for this image of the gas giant AF Leporis b we took last year. The annotations here help guide the viewer’s understanding a bit more, too.

Image of AF Lep (the host star). Zoomed in, the JWST image of AF Lep b. Kyle Franson (UT Austin), Digitized Sky Survey

This position dependent coronagraphic PSF proved difficult to explain during the first year of JWST’s operation, and so I’ve been looking into ways to get around it. In principle, the billion-dollar space-borne telescope is stable and powerful enough that we could apply a “deconvolution,” mathematically undoing the blur of a model PSF (in Fourier space, this amounts to dividing by the PSF’s transform), so that individual points of light become just that, points. Deconvolution has a troubled past in astronomy, especially after the scandalously misshapen Hubble mirror resulted in a fundamentally aberrated first generation of operations for the then premier space telescope.
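
In its crudest form, that Fourier-space division looks like the sketch below; naive division blows up wherever the PSF’s transform is near zero, so a small regularization term stands in for the more careful (e.g. Wiener or iterative) methods used in practice.

```python
# A toy illustration of deconvolution by division in Fourier space. The small
# eps term is a crude stand-in for proper Wiener filtering or iterative methods.
import numpy as np

def fourier_deconvolve(image, psf, eps=1e-3):
    """image, psf: 2D arrays of the same shape; psf centered and summing to 1."""
    otf = np.fft.fft2(np.fft.ifftshift(psf))  # optical transfer function
    deconv = np.fft.fft2(image) * np.conj(otf) / (np.abs(otf) ** 2 + eps)
    return np.real(np.fft.ifft2(deconv))
```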

The very first image taken a few weeks after the launch of the Hubble Space Telescope (HST) showed evidence for spherical aberration in the HST optics. The results of the next month of testing proved conclusively that the HST primary mirror has about 1/2 wave RMS of spherical aberration (λ = 5000 Å). The recently published paper by Burrows et al. (1991) gives a detailed description of the HST spherical aberration. In this paper we briefly summarize the problem and discuss its effects on HST imaging science.

The Problem

The HST primary mirror is too flat.

The HST Spherical Aberration and Its Effects on Images
Richard L. White & Christopher J. Burrows, STScI

Most people reflect on Hubble as one of the greatest astronomical observatories of all time, largely due to the fantastical images of nebulae, galaxies, and planets released by the observatory over its 30+ year lifetime (knock on wood). Few remember the disaster of the first generation optics, thanks to the corrective lenses and mirrors installed in the later instruments, but the observatory’s mirror is misshapen, dramatically so. This is due to a notorious error made by the contractor paid to construct the mirror, Perkin-Elmer, involving an offset lens in the “reflective null corrector” calibration device used to measure the surface of the mirror during finishing.

HST Wide-Field/Planetary Camera images of Saturn; the unprocessed “raw” image is shown on the left, the deconvolved image is shown on the right. From the abstract of the paper, “Owing to relatively high signal levels over most of the planet’s image, a dramatic improvement in the visibility of image detail was achieved by deconvolving the raw images, which had suffered severely from the spherical aberration of the HST optics. The deconvolved images are superior in quality to anything now achievable with ground-based telescopes. On Saturn, the polar hexagon seen by the Voyager spacecraft is still there, but some of the structure of the belts and zones has changed. The B-ring spokes were not visible.” Westphal et al. 1991, ApJL

Before corrective optics could be shipped to the telescope by Space Shuttle astronauts, astronomers came up with ways to sharpen their aberrated HST data. Because HST images are unaffected by the Earth’s atmosphere, the (very aberrated) PSF of the telescope is incredibly stable. With a high fidelity model of the aberrated PSF, astronomers could deconvolve their images and recapture their inherent resolution. This process introduces additional noise into the final images, however, meaning that the data is no longer as deep or sensitive as it would have been if the mirror hadn’t been manufactured incorrectly in the first place.

So, when I started fielding questions from the public and from my colleagues about the strange coronagraphic PSF on JWST, I reached out to a colleague working on deconvolution for their own science. Dr. Lawson works on circumstellar disks, which are belts of dust and ice in orbit around stars. He uses deconvolution to best understand the shape of these disks in his images, and so the deconvolution of one or two planets in a JWST image was a walk in the park for his software. This was very important for a crowded planetary system I studied earlier this year, HR 8799.

HR 8799 is a star that hosts four known giant exoplanets, b, c, d, and e. The first three were discovered in 2008, from two ground-based telescopes observing in the near infrared (1-2 microns).

The discovery of HR 8799, as seen from the ground. Blue and green are 1 and 1.5 micron filters, and red is a 2 micron filter. C. Marois (NRC), B. Macintosh (Keck)
Images of HR 8799 bcde and 51 Eri b from JWST. Balmer et al. (2025)

In our initial JWST images of the system after starlight subtraction, the flower-petal PSFs from the outer three planets, b, c, and d, all overlapped, and covered up the innermost planet e. We could model and subtract out the outer planets to reveal the inner planet, which was sufficient for our scientific aims (modeling all the planets, and subtracting them from the image, allows us to measure how bright they all are). But explaining this Lovecraftian set of three or four overlapping evil eyes was proving challenging at meetings and conferences, especially explaining our apparent detection of one planet buried under the light from three others. An iterative deconvolution algorithm employed by Dr. Lawson, known as Richardson-Lucy (R-L) deconvolution, paired with our model of the coronagraphic PSF, worked like a charm.
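
For readers who want to try something similar, scikit-image ships a Richardson-Lucy routine; the sketch below is a hypothetical, self-contained toy (keeping in mind that the true coronagraphic PSF is position dependent, which a single-PSF deconvolution only approximates).

```python
# A minimal, hypothetical sketch of the Richardson-Lucy step using scikit-image.
# Note: R-L assumes non-negative data and a single, shift-invariant PSF, so it
# only approximates JWST's position dependent coronagraphic PSF.
import numpy as np
from scipy.signal import fftconvolve
from skimage.restoration import richardson_lucy

# Synthetic stand-ins: two "planets" blurred by a broad Gaussian "PSF".
scene = np.zeros((64, 64))
scene[20, 20] = scene[44, 30] = 1.0
yy, xx = np.indices((15, 15)) - 7
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
psf /= psf.sum()
blurred = fftconvolve(scene, psf, mode="same")

# Iteratively sharpen the blurred image back toward points of light.
deconvolved = richardson_lucy(blurred, psf, num_iter=50, clip=False)
```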

Deconvolved image of HR 8799 bcde from JWST, derived from the previous data. First, the deconvolution, then, a circular re-smoothing of the deconvolved image. Balmer et al. (2025)

We made the choice to re-smooth our images after the deconvolution, since individual pixels were often difficult to see on different screens or across conference rooms. The fact that these points of light are smoothed into little round blobs doesn’t mean they are “resolved” in the same way that Saturn is resolved in the HST image above, unfortunately. This is one misunderstanding that became a theme when these images were released publicly, especially since the pixel-y data shown above was subsequently smoothed further by the Space Telescope Science Institute press team. The shortest wavelengths in the set are assigned the bluest colors, and the longest, the reddest colors. The relative brightnesses of the four planets are preserved, so that if one were significantly brighter, it would appear so. The individual images were combined and labeled, making the final false color image used in the press release.
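
The re-smoothing itself is just a convolution with a small circular kernel; a sketch using astropy, with an arbitrary kernel width and hypothetical variable names:

```python
# A sketch of the re-smoothing step (an aesthetic choice, not new science):
# convolve the deconvolved, pixel-scale image with a small circular Gaussian
# so each planet reads as a round dot instead of one or two bright pixels.
from astropy.convolution import Gaussian2DKernel, convolve

kernel = Gaussian2DKernel(x_stddev=1.5)  # width in pixels; arbitrary choice
# smoothed = convolve(deconvolved_image, kernel)  # hypothetical variable name
```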

HR 8799 bcde as seen by JWST in the F410M (blue), F430M (green), and F460M (red) filters. NASA, ESA, CSA, STScI, William Balmer (JHU), Laurent Pueyo (STScI), Marshall Perrin (STScI)

In the near-infrared, the planets appear strikingly similar, all “red” meaning that they are brighter at 2 microns than they are at 1 micron. In the mid-infrared wavelengths we observed at with JWST, the four planets are strikingly distinct from one another, each a slightly different hue ranging from dark blue (b) to light off-white (d) to ruddy orange (e). This is because of subtle differences in the chemistry and clouds in the upper atmosphere of each planet, perhaps owing to their different orbital distances from their host star. This was a super exciting result, and one that I was glad we could emphasize in our “public” facing image of the system.

A Reddit conversation underneath a repost of our HR 8799 JWST image

So, what do these exoplanets ‘look like’? I’m not sure. They don’t look like anything when they first come down in raw images from the telescope; they’re swamped by the bright light of their host star, which has been partially blocked by a coronagraph.

ibid.

If you mean, “What would they look like if we could travel to them?” We likely won’t have a proper answer for a very long time.

If you mean, “What do they look like before they were deconvolved?” They look like strange flower petals thanks to JWST’s mirror and its coronagraphs.

ibid.

If you mean, “Are they really blue and white?” No, but we’ve assigned infrared light visible colors, so we can show you these images, and we’ve done our best to make that color assignment indicate something scientific (planet b is relatively brighter at 4.1 microns than 4.6 microns, whereas planet c is similarly bright across all the filters – these scientific facts help us learn about how different these planets are compared to one another).

ibid.

Astronomers often try to strike a balance between showcasing the wonder and beauty of their observations and maintaining the scientific veracity of their work as it is presented to the public. It doesn’t always work, and a given representation might not resonate with every audience. It’s something worth trying, and trying again, I think. Seeing people’s responses to these images that carry my name (and the names of my colleagues) can sometimes frustrate, or embarrass, or confuse me, but more often than not, it warms my spirit and reminds me why we continue to push to do this kind of cutting edge astronomy. Widening our perspective on the kinds of solar systems in our galaxy tends to widen our perspective on what is possible in our own lives, on our own world.

until next time, clear skies.

