It has become exceedingly clear, over the past few months, that NASA’s James Webb Space Telescope does exactly what it set out to do. Just as its creators had hoped, the multibillion-dollar machine is flawlessly “unfolding the universe” by revealing cosmic light we cannot see with our own eyes — and its excellent results make even the most unlikely of stargazers feel alive.
Because of this gold-plated telescope, Twitter went wild one day over a blurry red dot. For 48 hours, people worldwide were gawking at a galaxy born shortly after the birth of time itself. It would appear that, thanks to the technological prowess of the JWST, humanity stands united over stardust.
But here’s the thing.
Amid the awe, scientists from the Massachusetts Institute of Technology warn that we ought to consider one crucial scientific consequence of having a superhero telescope.
If the JWST is like a zero-to-100 ’scope upgrade, they wonder, is it possible our science models need a zero-to-100 reboot, too? Are the datasets scientists have been using for decades unable to match the device’s power, and therefore falling short of revealing what the telescope is trying to tell us?
“The data we will be getting from the JWST will be incredible, but … our insights will be limited if our models don’t match it in quality,” Clara Sousa-Silva, a quantum astrochemist at the Center for Astrophysics, Harvard & Smithsonian, told CNET.
And according to a new study she co-authored, published Thursday in the journal Nature Astronomy, the answer is yes.
More specifically, this paper suggests some of the light-parsing tools scientists normally use to understand exoplanet atmospheres aren’t totally equipped to deal with the JWST’s exceptional light data. In the long run, such a hindrance may impact the most massive JWST quest of all: the hunt for extraterrestrial life.
“Currently, the model we use to decrypt spectral information is not up to par with the precision and quality of data we have from the James Webb telescope,” Prajwal Niraula, graduate student at MIT’s department of Earth, atmospheric and planetary sciences and co-author of the study, said in a statement. “We need to up our game.”
Here’s one way to think about the conundrum.
Imagine pairing the newest, most powerful Xbox console with the very first iteration of a TV. (Yes, I know it’s an extreme hypothetical.) The Xbox would be trying to send the TV awesome high-resolution, colorful, beautiful graphics to show us, but the TV wouldn’t have the capacity to display any of it.
I wouldn’t be surprised if the TV straight up exploded. But the point is you wouldn’t know what the Xbox is trying to provide for you unless you got an equally high-res TV.
Similarly, in the realm of exoplanet discoveries, scientists feed reams of deep-space light data, meaning photons, into models built on “opacity.” Opacity measures how easily photons pass through a material, and it differs depending on things like the light’s wavelength and the material’s temperature and pressure.
Every such interaction leaves behind a telltale signature of the photon’s journey, and therefore, when it comes to exoplanets, of the kind of chemical atmosphere those photons passed through to reach the light detector. That’s how scientists reverse-calculate, from light data, what an exoplanet’s atmosphere is composed of.
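To make the idea concrete, here’s a minimal sketch of that reverse calculation in Python. The single “absorption band,” every number in it and the fake planet are invented purely for illustration; real retrievals rely on vast opacity databases and radiative-transfer physics, not a lone Gaussian dip.

```python
# Toy illustration of "reverse-calculating" an atmosphere from light data.
# Everything here (the Gaussian "absorption band," its numbers, the fake planet)
# is invented for illustration, not taken from the study.
import numpy as np
from scipy.optimize import curve_fit

def transit_depth(wavelength_um, abundance, temperature_k):
    """Hypothetical transmission spectrum: a baseline dip in starlight plus one
    absorption band whose depth tracks gas abundance and whose width (very
    loosely) tracks atmospheric temperature."""
    baseline = 0.010                   # planet's disk blocks ~1% of the star's light
    band_center_um = 1.4               # pretend water band at 1.4 microns
    band_width_um = 0.05 * np.sqrt(temperature_k / 300.0)
    band_depth = 0.002 * abundance     # more absorbing gas, deeper dip
    return baseline + band_depth * np.exp(
        -0.5 * ((wavelength_um - band_center_um) / band_width_um) ** 2)

# Simulate a noisy "observed" spectrum from a planet whose truth we chose ourselves.
rng = np.random.default_rng(0)
wavelengths = np.linspace(1.0, 1.8, 200)
true_abundance, true_temperature = 0.25, 600.0
observed = (transit_depth(wavelengths, true_abundance, true_temperature)
            + rng.normal(0.0, 5e-5, wavelengths.size))

# "Retrieval": fit the model back to the noisy spectrum to recover those properties.
best_fit, _ = curve_fit(transit_depth, wavelengths, observed, p0=[0.1, 400.0])
print(f"Recovered abundance ~ {best_fit[0]:.2f}, temperature ~ {best_fit[1]:.0f} K")
```

Real retrievals invert far messier spectra against enormous opacity tables, but the logic is the same: find the atmosphere whose predicted light signature best matches what the detector recorded.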
In this case, the detector in question sits aboard the James Webb Space Telescope. But in the team’s new study, after putting the most commonly used opacity model to the test, the researchers found that JWST light data hits what they call an “accuracy wall.”
The model wasn’t sensitive enough to distinguish whether a planet has an atmospheric temperature of 300 or 600 kelvin, the researchers say, or whether a certain gas takes up 5% or 25% of the atmosphere. That’s no small distinction: per Niraula, it “matters in order for us to constrain planetary formation mechanisms and reliably identify biosignatures.”
That is, evidence of alien life.
“We need to work on our interpretive tools,” Sousa-Silva said, “so that we don’t find ourselves seeing something amazing through JWST but not knowing how to interpret it.”
Further, the team found that its models can disguise their own uncertainty. A few adjustments can easily paper over it, making results look like a good fit when they’re actually wrong.
“We found that there are enough parameters to tweak, even with a wrong model, to still get a good fit, meaning you wouldn’t know that your model is wrong and what it’s telling you is wrong,” Julien de Wit, assistant professor at MIT’s EAPS and study co-author, said in a statement.
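Here’s a hedged toy demonstration of that trap, again with invented models and numbers rather than anything from the study: a deliberately “wrong” model with a couple of extra knobs fits the simulated data down to the noise level, yet lands on the wrong abundance.

```python
# Toy demonstration of the trap: a "wrong" model with extra free parameters can
# fit the data about as well as the noise allows while returning the wrong answer.
# The "right" and "wrong" physics below are both made up for illustration.
import numpy as np
from scipy.optimize import curve_fit

def true_model(wavelength_um, abundance):
    """Pretend correct physics: band depth grows with the square root of abundance."""
    band = 0.004 * np.sqrt(abundance) * np.exp(-0.5 * ((wavelength_um - 1.4) / 0.05) ** 2)
    return 0.010 + band

def wrong_model(wavelength_um, abundance, slope, offset):
    """Pretend wrong physics: linear band depth, plus two extra knobs to tweak."""
    band = 0.004 * abundance * np.exp(-0.5 * ((wavelength_um - 1.4) / 0.05) ** 2)
    return offset + slope * (wavelength_um - 1.4) + band

rng = np.random.default_rng(1)
wavelengths = np.linspace(1.0, 1.8, 200)
noise_level = 5e-5
observed = true_model(wavelengths, 0.25) + rng.normal(0.0, noise_level, wavelengths.size)

# Fit the wrong model anyway and check how "good" the fit looks.
best_fit, _ = curve_fit(wrong_model, wavelengths, observed, p0=[0.1, 0.0, 0.01])
residual_scatter = np.std(observed - wrong_model(wavelengths, *best_fit))

print(f"Wrong model's best-fit abundance: {best_fit[0]:.2f} (truth was 0.25)")
print(f"Residual scatter: {residual_scatter:.1e} vs. noise level {noise_level:.1e}")
```

The residuals look perfectly healthy, which is exactly de Wit’s point: nothing in the fit itself tells you the model, and therefore its answer, is wrong.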
Going forward, the team urges that opacity models be improved to keep pace with our spectacular JWST revelations, calling in particular for crossover studies between astronomy and spectroscopy.
“There is so much that could be done if we knew perfectly how light and matter interact,” Niraula says. “We know that well enough around the Earth’s conditions, but as soon as we move to different types of atmospheres, things change, and that’s a lot of data, with increasing quality, that we risk misinterpreting.”
De Wit compares the current opacity model to the Rosetta Stone, the ancient slab that let scholars decipher Egyptian hieroglyphs. So far, he explains, this Rosetta Stone has held up well enough, including for data from the Hubble Space Telescope.
“But now that we’re going to the next level with Webb’s precision,” the researcher said, “our translation process will prevent us from catching important subtleties, such as those making the difference between a planet being habitable or not.”
As Sousa-Silva puts it, “it’s a call to improve our models, so that we will not miss the subtleties of data.”