there’s an exercise, exercise 3.8, in *Stellar Interiors: Physical Principles, Structure, and Evolution (2nd ed)* by Hansen, Kawaler, and Trimble (HKT) that’s stuck in my brain. let me try to explain.

the exercise states,

Section 3.6 discusses “imperfections” in equations of state that make life difficult for the stellar modeler. One of these imperfections arises from electrostatic interactions between ions. These cause modifications in the ideal gas equation of state. The severity of the modifications depends on density and temperature in the sense that low temperatures and/or high densities means you have to work harder…

pg 188

basically, when you want to explain how pressure and temperature are related inside a star, normally you can say “it behaves like an ideal gas.” You might remember the ideal gas from high school chemistry: for some nicely behaved gases, when you turn up the temperature or the density, the pressure the gas exerts goes up proportionally. You probably saw it in the form PV = nRT, but in physics we often write it as P = nkT. The ideal gas is ideal because its constituent particles don’t interact with each other, aside from elastic collisions. Turns out real gases aren’t like that, and the exercise brings up a specific complication. Since the gas is really a bunch of atoms and ions with electrons and protons, the electrical forces between particles can become (more) important at lower temperatures or higher densities.

what do we mean by important?

the exercise goes on to ask you to use a specific theory, the Debye–Hückel theory, to modify the ideal gas law to account for electrostatic interactions. it turns out that this modification takes the form of

P = nkT(1 − B)

where B is a function of temperature, density, and composition. When B is small, this “equation of state” basically equals the ideal gas law, because 1 minus a tiny number is basically 1, and 1 times nkT gives you nkT back. For certain temperatures, densities, and compositions, however, B can significantly affect the relationship between P, n, and T (k is a constant), causing it to deviate from the ideal gas law.
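for the curious, here’s a rough sketch (mine, not the book’s solution) of what computing B can look like in python. it assumes a fully ionized, pure-hydrogen gas and the standard Debye–Hückel pressure correction ΔP = −kTκ³/(24π), where κ is the inverse Debye length; under those assumptions B = κ³/(24πn). the function name and the sample numbers are just illustrative.

```python
import math

# CGS constants
k_B = 1.381e-16   # Boltzmann constant, erg/K
e   = 4.803e-10   # electron charge, esu
m_H = 1.673e-24   # hydrogen mass, g

def debye_hueckel_B(rho, T):
    """Fractional pressure correction B for a fully ionized,
    pure-hydrogen gas, so that P = n k T (1 - B).

    Uses the standard Debye-Hueckel result
    Delta P = -k T kappa^3 / (24 pi), with kappa the inverse
    Debye length summed over electrons and protons.
    """
    n_e = rho / m_H          # electron number density, cm^-3
    n   = 2.0 * n_e          # total particles: electrons + protons
    # kappa^2 = (4 pi e^2 / k T) * sum_s n_s Z_s^2, with Z = 1 for both species
    kappa_sq = (4.0 * math.pi * e**2 / (k_B * T)) * 2.0 * n_e
    return kappa_sq**1.5 / (24.0 * math.pi * n)

# roughly solar-center conditions: rho ~ 150 g/cm^3, T ~ 1.5e7 K
print(f"B ~ {debye_hueckel_B(150.0, 1.5e7):.3f}")
```

at roughly solar-center conditions this gives B of order a percent: small, but not zero, which is exactly the borderline territory the exercise is poking at. and since B scales like ρ^(1/2) T^(−3/2), colder and denser interiors push it higher.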

the mass of a star determines how hot and dense it is, so the exercise wants you to plot B (the deviation from an ideal gas) against mass (in units of solar masses). the point being, there will be some masses where B is a big enough number to become important.

when is B “big enough” to become “important”?

obviously, a rigorous theoretician (not me) would always consider B “big enough,” probably because they have access to high-powered computers and can make as many extra calculations as they want. physics and astronomy didn’t always have access to powerful computers, though, and it was important to carefully pick and choose your battles. deciding where and when you make key approximations is basically the most important skill an astronomer can have, even in the age of big computers.

i suppose the skill is picked up by repeated exposure. it becomes a sort of common sense that an astronomer gains a grasp of over their career, by seeing it applied in textbooks and in their field of research. really, it becomes a question of effort vs return. in this case, that’s how much longer it takes a computer to calculate the evolution of a star assuming the Debye–Hückel electrostatic modification applies rather than the plain ideal gas law, weighed against how wrong you’d be if you ignored the modification. if you were studying a star many times the mass of the sun, B is so small that your theoretical result probably wouldn’t be that different from some observation you can make. usually, because you’re comparing imperfect models or equations to imperfect measurements, an astronomer does some delicate balancing between the uncertainty of the measurement and the complexity, accuracy, and effort that goes into the theory.

for the past year, I’ve been writing a paper on some observations. most of the paper has been written and finished since last march, but i’ve been learning and trying to model one facet of this dataset (there are often multiple measurements you can make from any given observation) since then. learning, applying a new and usually more complex model to the data, screwing up, and then learning more (ad nauseam). i hadn’t really put much thought towards when i should stop until this past month.

i have an imperfect dataset, and i’ve been comparing permutations of imperfect models to the data. these models take months to generate (lots of math can take a computer a long time to compute). i’m under some (not a lot, but some) pressure to publish this paper before fall 2023 for grad school reasons, so it’s become necessary to make this subjective approximation.

it was excruciating. i want to be able to say something definite about this measurement, but so much of my final conclusions are couched in qualifiers. “if we assume this, then this,” and “when we tried this, this broke,” and “we generally saw this, which makes sense because of this.” i wanted to make the models more complex, to better describe this part of the data or to account for this particular effect. but doing so would mean adding months to a project, for perhaps diminishing returns on the accuracy of the analysis.

turns out that doing real science is a lot more contextual and interrelated than that, and the best you can do is lay all your assumptions and your data and your theory out on the table so that when the context changes, someone else in the smarter and better future can still make use of what you did.

i think a lot of physics students become interested in the subject because, compared to messy chemistry or biology, the answers to most early physics exercises are clean, the equations concise and analytic, and most everything is derived from first principles. i know that was part of the appeal to me in high school. i could take a few equations and apply them to any (in hindsight, carefully constructed) problem and get a satisfying answer.

it’s only later that we expose young physicists and astronomers to these key exercises. maybe that does them a disservice. these are problems that say: here’s an issue that’s more about diminishing returns. here’s an approximation. here’s some uncertainty and weirdness and complexity that you can’t make a neat equation out of. what you can do is kinda fudge it and gesture towards a solution. now, *make a judgement call*. that kind of skill is what real, everyday, 9-to-5 science hinges on.

exercise 3.8 in HKT isn’t even really that nasty; Debye–Hückel theory and this modification to the equation of state have a mostly analytic form (meaning you can write a closed-form equation in terms of constants and variables). it just asks you to plot this weird equation and compare it to the nicer equation. then, the kicker: it asks you, “when is this extra complexity important to consider?”

i pulled the plug on my analysis for this paper a few weeks ago, and sent my first draft to my co-authors. they don’t seem agonized over where i decided to stop, or the concessions i’ve made. for now, i’ll take that as a sign that i shouldn’t be too worried either. i suppose we’ll see what the reviewer has to say.

until then, clear skies. -WB