Here’s what the James Webb Space Telescope will observe next
The world came together last week in a rare show of international unity to stare in wonder at the first scientific images produced by the James Webb Space Telescope. Decades in the making and the result of the efforts of thousands of people from around the globe, the telescope is set to revolutionize astronomy by allowing us to peer deeper into the cosmos than ever before.
Webb has the largest mirror ever launched into space, as well as the largest sunshield, and it is the most powerful space telescope ever built. The first images are just a taste of what this remarkable piece of technology is capable of doing. So to find out more about what future scientific research will be enabled by this behemoth, we spoke to Mark McCaughrean, Webb Interdisciplinary Scientist at the European Space Agency.
McCaughrean will be one of the first researchers to use Webb for his work on the Orion Nebula, and he has been involved in planning the telescope for more than 20 years. He told us all about how Webb will push the frontiers of astronomy and enable discoveries we haven’t even begun to imagine.
Seeing the universe in infrared
When astronomers first began to imagine Webb in the 1980s, they had a specific plan in mind: They wanted a cosmology research tool to look back at the earliest galaxies in the universe.
Scientists knew these early galaxies were out there, and nearly within reach, because the Hubble Space Telescope had already observed some fairly early ones. Observing in visible light, Hubble could identify hundreds of these galaxies, which formed within a few hundred million years of the Big Bang. But those galaxies had already formed; researchers wanted to look back even further, to see them actually forming.
To do that, they needed a tool that could look beyond visible light, into the infrared. That’s because the earliest galaxies gave off visible light just as galaxies do today, but the universe has been expanding ever since, which means the galaxies we see in the sky are moving away from us. The farther away a galaxy is, the longer its light has taken to reach us and the faster it is receding, and that recession causes a phenomenon called redshift.
Much like the Doppler effect, in which a sound’s perceived pitch changes as its source moves toward or away from the listener, the wavelength of light is stretched as its source recedes from us. The light shifts toward the redder end of the spectrum, hence the name redshift.
The very oldest galaxies, then, have light that is redshifted so much that it no longer arrives as visible light at all. Instead, it arrives as infrared, which is the wavelength range in which Webb operates.
This is how Webb is able to detect and identify the very earliest galaxies. If Webb can see a galaxy that shines brightly in the infrared but is dim or invisible to primarily visible-light telescopes like Hubble, researchers can be confident they have found an extremely redshifted galaxy, meaning one that is very far away and whose light set out when the universe was very young.
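To put rough numbers on that stretch, here’s a minimal Python sketch. The rest wavelengths and redshifts are illustrative picks, and the quoted instrument range is only approximate:

```python
# Minimal sketch: how cosmological redshift moves light into the infrared.
# The rest wavelengths and redshifts below are illustrative choices.

WEBB_RANGE_MICRONS = (0.6, 28.0)  # approximate combined wavelength coverage of Webb's instruments

def observed_wavelength(rest_microns: float, z: float) -> float:
    """Observed wavelength after redshift z: lambda_obs = lambda_rest * (1 + z)."""
    return rest_microns * (1 + z)

# Green visible light (~0.5 micron) and far-ultraviolet Lyman-alpha light (~0.12 micron).
for label, rest in [("visible (0.5 um)", 0.5), ("Lyman-alpha (0.12 um)", 0.1216)]:
    for z in (1, 6, 13):
        obs = observed_wavelength(rest, z)
        in_range = WEBB_RANGE_MICRONS[0] <= obs <= WEBB_RANGE_MICRONS[1]
        print(f"{label} at z={z}: observed at {obs:.2f} um "
              f"({'within' if in_range else 'outside'} Webb's range)")
```

Light that left a young galaxy as visible or even ultraviolet radiation arrives billions of years later at several times its original wavelength, squarely in the territory of Webb’s near- and mid-infrared instruments.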
Even in the first deep field image from Webb, you can see some extremely distant, extremely early galaxies. The galaxy cluster that is the focus of the image is seen as it was 4.6 billion years ago, and because of its mass, it bends spacetime around it. Light coming from galaxies behind the cluster is bent as well, so the cluster acts like a magnifying glass in an effect called gravitational lensing. Light from some of the galaxies in this deep field has been traveling for around 13 billion years, meaning they formed within the first billion years of the universe.
Expanding to do more
Though Webb was originally conceived as a cosmology tool, it soon expanded to become far more than that.
Over the decades of planning for Webb, its designers realized that the tool they were building could serve far more fields than cosmology alone. They added new instruments, like MIRI, which observes in the mid-infrared rather than the near-infrared and is more useful for studying star and planet formation than for cosmology. That difference brings its own challenges: MIRI has different detectors from the other instruments and requires its own dedicated cooler. But, along with the other instruments, it expands what Webb can do into a whole range of possibilities.
“The original focus of the telescope was much more on the high redshift universe,” McCaughrean summed up. “That was the highest goal, to find these first stars and galaxies that formed after the Big Bang. Everything else after that is a ‘nice to have.’ But over the progress of the project, we managed to turn that into four themes: cosmology, star formation, planetary science, and galaxy evolution. And we made sure that the observatory would be capable of all of those.”
Cameras and spectrographs
Webb has four instruments on board: the Near-Infrared Camera or NIRCam, the Near-Infrared Spectrograph or NIRSpec, the Near-Infrared Imager and Slitless Spectrograph or NIRISS, and the Mid-Infrared Instrument or MIRI. There’s also a sensor called the Fine Guidance Sensor (FGS), which helps to point the telescope in the right direction.
The instruments are a mix of cameras and spectrographs. A spectrograph splits incoming light into its component wavelengths so you can see which wavelengths have been emitted or absorbed, and that lets you work out what an object is made of from the light it gives off.
While the images taken by the cameras garner the most public attention, the spectrographs shouldn’t be underestimated as a scientific tool. Around half of the currently allocated observing time is dedicated to spectroscopy, for tasks like analyzing the composition of exoplanet atmospheres. Partly, that’s because it takes more time to take a spectrum of an object than to take an image of it, and partly that’s because spectroscopy can do things that imaging can’t.
Cameras and spectrographs work together as well, as the filters used in imaging are useful for selecting objects to study with the spectrographs.
“Imagine you do a deep field, taking some deep images with NIRCam,” McCaughrean explained. “Then you use different filters to select out candidates, because there are going to be way too many things to look at in that field one by one with spectroscopy. So you need the imaging to find the candidates,” such as by looking at the colors in an image to decide that a given object is, say, a high redshift galaxy and not a faint nearby star.
This has already been demonstrated in practice with Webb’s first deep field image. The imaging was done with NIRCam, which picked up vast numbers of galaxies both near and far in one stunning image. Then particular targets, like a galaxy whose light has traveled for more than 13 billion years, were picked out and observed with the NIRSpec spectrograph, gathering data about the early galaxy’s composition and temperature.
“It’s such a beautiful, clean spectrum,” McCaughrean said. “Nobody’s ever seen anything like that before from anywhere. So we now know that this machine works incredibly powerfully.”
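As a rough illustration of that color-based candidate selection, here is a hedged Python sketch of a “dropout” style cut. The object names, fluxes, and thresholds are all invented for the example; real surveys rely on carefully calibrated photometry and much more stringent criteria:

```python
# Toy sketch of "dropout" candidate selection: a very high-redshift galaxy
# appears bright in redder filters but vanishes in bluer ones. All object
# names and numbers here are hypothetical, chosen only to illustrate the idea.

candidates = {
    # object id: measured fluxes (arbitrary units) in a blue and a red filter
    "obj_001": {"blue_filter": 0.02, "red_filter": 3.1},   # drops out in blue
    "obj_002": {"blue_filter": 2.80, "red_filter": 3.0},   # visible in both
    "obj_003": {"blue_filter": 0.01, "red_filter": 0.02},  # too faint everywhere
}

BLUE_LIMIT = 0.1   # below this, treat the object as undetected in the blue filter
RED_MINIMUM = 1.0  # require a solid detection in the red filter

high_z_candidates = [
    name
    for name, fluxes in candidates.items()
    if fluxes["blue_filter"] < BLUE_LIMIT and fluxes["red_filter"] > RED_MINIMUM
]

print(high_z_candidates)  # ['obj_001'] -> worth following up with spectroscopy
```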
Multiple modes
To understand Webb’s full capabilities, you should know that the four instruments don’t have just one mode each – they can be used in multiple ways to look at different targets. In total, there are 17 modes between the four instruments, and each of these had to be tested and verified before the telescope was declared ready to start science operations.
For example, take the NIRSpec instrument. It can perform several types of spectroscopy, including fixed-slit spectroscopy, a highly sensitive mode for investigating individual targets (such as analyzing the light given off by a kilonova, the explosion produced when two neutron stars merge), and integral field unit spectroscopy, which captures a spectrum for every pixel across a small area to provide contextual information about a target (like an extremely distant galaxy that has been warped by gravitational lensing).
The third type of spectroscopy NIRSpec does is something really special called multi-object spectroscopy. It uses tiny window-like shutters arranged into a format called a microshutter array. “They’re basically small devices a couple of centimeters across, of which we have four. In each one of those devices, there are 65,000 little individual shutters,” McCaughrean said.
Each of these shutters can be individually controlled to open or close, allowing researchers to select what parts of a field they are looking at. To use these microshutters, researchers first take an image using another instrument like NIRCam to select the objects of interest. Then they command the shutters corresponding to these objects of interest to open, while the others remain closed.
This allows the light from the targets, such as particular galaxies, to shine through onto the telescope’s detectors, without allowing light from the background to leak through as well. “By only opening the door where the galaxy is and closing all the other doors, when the light comes through from that object, it gets spread out into a spectrum, and you don’t have all the other light coming through,” McCaughrean said. “That makes it more sensitive.”
This multi-object spectroscopy can be used to look at particular galaxies in deep field images, which is especially useful for studying the earliest galaxies that are highly redshifted. And this method is capable of getting spectra from up to 100 objects at once – making it a very efficient way to collect data.
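To make the bookkeeping concrete, here is a simplified Python sketch of turning a list of targets from an earlier image into a set of open shutters. The grid dimensions are only approximate, and the coordinate mapping is a stand-in, not the real NIRSpec planning software:

```python
# Simplified sketch of building a microshutter configuration. The real
# microshutter assembly has four quadrants of tiny shutters and dedicated
# planning tools; the mapping below is a placeholder for illustration only.

ROWS, COLS = 171, 365  # approximate shutter grid of a single quadrant

def shutter_for_position(x: float, y: float) -> tuple[int, int]:
    """Map a normalized position (0..1, 0..1) within the quadrant to a shutter (row, col)."""
    row = min(int(y * ROWS), ROWS - 1)
    col = min(int(x * COLS), COLS - 1)
    return row, col

# Targets picked from an earlier image, given as normalized positions.
targets = [(0.12, 0.80), (0.55, 0.33), (0.91, 0.07)]

# Start with every shutter closed, then open only those covering a target.
open_shutters = {shutter_for_position(x, y) for x, y in targets}

closed = ROWS * COLS - len(open_shutters)
print(f"{len(open_shutters)} shutters open, {closed} shutters closed")
```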
Dealing with too much light
As the microshutters demonstrate, one tricky part of working with highly sensitive instruments is dealing with too much light. Take the work James Webb will do on Jupiter in its first few months of operation – it’s actually very hard to image the rings and moons around Jupiter because the planet itself is so bright. If the faint object you are trying to observe is next to a very bright one, it can blow out your readings so all you see is light from the brighter object.
A similar problem arises when you try to observe distant exoplanets, which are very dim compared to the stars they orbit. To deal with this challenge, James Webb has another trick up its sleeve called coronagraphy.
Both NIRCam and MIRI have coronagraphy modes, the simplest form of which is to place a small metal disk in front of the bright object to block out its light. Then you can observe the other, dimmer light sources around it more easily. But this approach has its limitations: if the bright object moves around behind the disk, its light can spill out over the edges and ruin the observations. You could make the disk smaller so it blocks out just the central brightest point of the object, but then you’d still have a lot of excess light to deal with. You could make the disk bigger, but then it would block out other objects that are close to the bright object.
So there’s another form of this coronagraphy mode that uses hardware called the four-quadrant phase mask. “This is a very clever piece of optics,” McCaughrean said. “It doesn’t have a metal disk, but it has four different pieces of glass which impart different phases into the light coming in. When we think about light as a wave, rather than as photons, light has a phase. If you put the bright source right on the cross where those four different phase plates meet, you can work it out such that the light will actually cancel from the star, due to the wave interference effect.”
That means that if you line it up just right so the bright object is exactly in the middle of these quadrants, the light from the star will be canceled out, but the light from other objects like planets will still be visible. That makes it ideal for observing exoplanets orbiting close to their host stars that might otherwise be impossible to see.
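Here is a deliberately crude numerical toy, not real diffraction modeling, that captures the symmetry at work: a source centered on the crossing of the four phase quadrants cancels itself out, while a source offset from the crossing survives. The Gaussian blob and the simple sum are stand-ins for the star’s image and the downstream optics:

```python
import numpy as np

# Toy model of a four-quadrant phase mask. A symmetric Gaussian blob stands in
# for the star's focal-plane image, and the pi phase shift on two diagonal
# quadrants is represented by multiplying by -1. Summing the masked field is a
# crude proxy for how much light survives; the real calculation involves
# proper diffraction and a Lyot stop.

n = 501  # odd grid size so the quadrant crossing sits exactly on a sample point
coords = np.linspace(-5.0, 5.0, n)
x, y = np.meshgrid(coords, coords)

# +1 in two opposite quadrants, -1 in the other two (the pi phase shift).
quadrant_mask = np.sign(x) * np.sign(y)

def surviving_light(x0: float, y0: float) -> float:
    """Residual amplitude (proxy) for a source centered at (x0, y0)."""
    blob = np.exp(-((x - x0) ** 2 + (y - y0) ** 2))
    return abs(np.sum(blob * quadrant_mask))

print(f"star centered on the mask crossing: {surviving_light(0.0, 0.0):.6f}")
print(f"planet offset from the crossing:    {surviving_light(1.5, 1.5):.6f}")
# The centered source cancels almost perfectly; the offset source does not.
```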
Making use of time
Yet another way to handle a mix of bright and dim objects is to take multiple readings over time. Unlike the camera in your phone, which takes a picture and then immediately resets its sensor, Webb’s detectors can be read out multiple times without resetting.
“So we can take a series of pictures over time with the same detector, as it builds up the light from the faint sources,” McCaughrean explained. “But when we look at the data, we can use the first images for the bright sources before they saturate, and then keep building up light from the faint sources and get the sensitivity. It effectively extends the dynamic range by reading the detectors out multiple times.”
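Here is a minimal sketch of that idea with invented numbers: keep every read, estimate bright pixels from the reads taken before they saturate, and let faint pixels build up over the whole series. The count rates, read times, and saturation level are assumptions for the example, and the real pipeline fits slopes to each ramp with proper noise models:

```python
# Sketch of extending dynamic range with multiple non-destructive reads.
# All numbers are invented for illustration.

SATURATION = 65000                     # counts at which a pixel stops responding (assumed)
READ_TIMES = [10, 20, 30, 40, 50, 60]  # seconds after the start of the exposure

def simulate_ramp(rate: float) -> list[float]:
    """Accumulated counts at each read for a source with a constant count rate."""
    return [min(rate * t, SATURATION) for t in READ_TIMES]

def estimate_rate(ramp: list[float]) -> float:
    """Estimate counts per second using only the reads taken before saturation."""
    unsaturated = [(t, c) for t, c in zip(READ_TIMES, ramp) if c < SATURATION]
    t_last, c_last = unsaturated[-1]   # simple estimate from the last good read
    return c_last / t_last

bright = simulate_ramp(5000.0)  # saturates partway through the exposure
faint = simulate_ramp(3.0)      # keeps accumulating for the full exposure

print(f"bright source: {estimate_rate(bright):.1f} counts/s, from the early reads")
print(f"faint source:  {estimate_rate(faint):.1f} counts/s, from the full ramp")
```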
Another mode the instruments can operate in is called time series observations, which essentially means taking many readings one after another to capture objects that change over time. That’s useful for capturing objects that flare or flash, such as highly magnetized neutron stars called magnetars, or for watching an exoplanet pass across the face of its host star in an event called a transit.
“As a planet transits in front of the star, you want to catch it at the edges of the transit as well as in the middle of the transit,” McCaughrean said. “So you just keep watching it, and you keep taking data.”
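For a sense of the signals these transit observations chase, here is a quick back-of-the-envelope sketch. The radii are round illustrative values, not targets from Webb’s observing program:

```python
# Back-of-the-envelope transit depth: the fraction of a star's light blocked
# by a transiting planet is roughly (R_planet / R_star) squared.
# Radii below are round illustrative numbers in kilometers.

SUN_RADIUS_KM = 696_000
JUPITER_RADIUS_KM = 69_900
EARTH_RADIUS_KM = 6_371

def transit_depth(planet_radius_km: float, star_radius_km: float) -> float:
    return (planet_radius_km / star_radius_km) ** 2

print(f"Jupiter-size planet, sun-size star: {transit_depth(JUPITER_RADIUS_KM, SUN_RADIUS_KM):.3%} dip")
print(f"Earth-size planet, sun-size star:   {transit_depth(EARTH_RADIUS_KM, SUN_RADIUS_KM):.5%} dip")
# Dips this small are why a long, uninterrupted, stable series of readings matters.
```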
One challenge with this method is that it requires the telescope to stay pointed with near-perfect stability, because even a slight drift would introduce noise into the data. The good news is that the telescope is performing extremely well at locking onto a target and holding steady, thanks to the Fine Guidance Sensor, which locks onto guide stars and corrects for any disturbances, such as the solar wind.
Challenges in working with Webb
As with every piece of technology, there are limits on what Webb can do. One of the big practical limitations for scientists using Webb is the amount of data they can get back from the telescope. Unlike Hubble, which orbits the Earth, Webb orbits the sun at a spot called the second Lagrange point, or L2.
That’s around 1 million miles from Earth, so Webb is equipped with a powerful radio antenna that can send data back at a rate of 28 megabits per second. That’s pretty impressive. As McCaughrean pointed out, it is substantially faster than the hotel Wi-Fi we were using for our conversation, despite covering a vastly greater distance. But it’s still far less than the rate at which the instruments can collect data.
The observatory does have a small amount of solid state storage, around 60 GB, which can record data for a short time if the instruments are collecting more data than can be sent back, acting as a buffer. That might not sound like a lot compared to the kind of storage you typically get on a phone or laptop, but the requirements of hardware that is safe against radiation and can stand up to decades of use are rather different.
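Putting those two figures together gives a feel for the bottleneck. The quick sketch below treats the 28 megabits per second and roughly 60 GB numbers as given and ignores overheads, compression, and contact scheduling:

```python
# Rough arithmetic on Webb's data bottleneck, using the figures quoted above.

downlink_mbit_per_s = 28   # radio downlink rate
buffer_gb = 60             # approximate onboard solid-state storage

buffer_bits = buffer_gb * 8e9                               # gigabytes -> bits
seconds_to_drain = buffer_bits / (downlink_mbit_per_s * 1e6)

print(f"Draining a full {buffer_gb} GB buffer takes about "
      f"{seconds_to_drain / 3600:.1f} hours of continuous downlink")
# Roughly 4.8 hours: if the instruments fill the buffer faster than that on
# average, some data has to be left behind.
```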
This limitation means researchers have to be selective about which data they prioritize for downlink from the telescope, choosing only what is most vital for their needs. You might wonder why Webb isn’t positioned closer to Earth in that case, but the L2 orbit is essential to the way the observatory operates, and the reason comes down to temperature.
“People think space is cold, well, not if you’re next to a big object which is heating you up every day like the Earth or the sun,” McCaughrean said. “So if you want to look in the infrared, you need to make sure your telescope is incredibly cold, so it’s not emitting at the wavelengths that you’re trying to detect.” That’s why Webb has an enormous sunshield to help keep it cool, and why it is at L2 so the sunshield can block out heat from both the sun and the Earth.
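One quick way to see why temperature matters is Wien’s displacement law, which gives the wavelength at which a warm object glows most brightly. The temperatures below are rough illustrative values, not engineering figures:

```python
# Wien's displacement law: peak emission wavelength (microns) ~= 2898 / T (kelvin).
# Temperatures are rough illustrative values.

WIEN_CONSTANT_UM_K = 2898.0

def peak_wavelength_um(temperature_k: float) -> float:
    return WIEN_CONSTANT_UM_K / temperature_k

for label, temp in [("room-temperature telescope (290 K)", 290.0),
                    ("sunshield-cooled Webb optics (about 40 K)", 40.0)]:
    print(f"{label}: glows most brightly near {peak_wavelength_um(temp):.0f} microns")

# A warm telescope peaks near 10 microns, right in the mid-infrared range Webb
# observes; cooling it pushes that glow to far longer wavelengths, out of the way.
```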
“We’ve built an observatory that needs to be at L2, it needs to be there to get cold, so it can deliver this science. And because it’s at L2, we only have a certain bandwidth,” McCaughrean explained. “There’s no such thing as a free lunch, let’s put it that way.”
The community decides
The first year of Webb observations is carefully planned out. In the first five months of science operations, the telescope will work on early release science programs, which are designed to push the limits of Webb’s hardware and show what it is capable of. Within its first year, it will also work on the programs selected for Cycle 1, including research into exoplanets, black holes, deep fields, and more.
Beyond that, though, the future work to be done using Webb is largely open. Researchers submit proposals for what data they want to collect using Webb, and these proposals are peer-reviewed to select those that are most scientifically interesting. “The community decides what gets done with the observatory,” McCaughrean said.
This community involvement has already changed the way Webb is used — for example, exoplanet research currently takes up about one-third of available observing time in the first round of research. When McCaughrean and his colleagues were planning out how Webb could be used in the early 2000s, they didn’t imagine there would be anywhere near this much exoplanet research being done because so few exoplanets had been discovered at that time.
This makes Webb different from missions with a very specific purpose, like ESA’s Gaia observatory, which is designed specifically to make a 3D map of the galaxy, and more like Hubble, which was designed to meet many research needs. “It’s very definitely a general-purpose observatory,” McCaughrean said. “You only have to look at Hubble and how it has evolved over the years. Partly through putting new instruments on, but mostly through the scientific community deciding that there are different priorities and different areas that need to be done.”
This flexibility is possible because Webb is designed to be useful for research into a whole lot of fields — including applications we haven’t thought of yet. Webb is projected to last at least 20 years, and we’ve barely begun to explore what it could do in that time.
“That’s the exciting thing. If you build a very powerful, very capable general-purpose observatory, it’s in many ways limited just by the creativity of the community,” McCaughrean said. “Webb is what we make of it now.”