The Limits of Scientific Modeling

Written by Mary Anne Wentink

When I was about nine years old, the computer scientists at Michelson Laboratory at NOTS China Lake set up a demo for a public open house on Armed Forces Day. The computer played checkers, and I was that particularly obnoxious kid who greeted this wonder of system-level programming with the question: “Does it play chess?” The scientist monitoring the display took me to one side to explain in all of his adult wisdom that computers would never be smart enough to play chess.

Over time, I became a computer nerd myself and trained and worked with some of the best in the business. Year by year, we tested the limits of our medium as its capabilities grew exponentially. In layman's terms, our machines were becoming smart: smart enough to send a man to the moon and launch the space shuttle, and smart enough to play chess.

Smart, but not intelligent. Sure, in computer terms they could now process millions if not billions of instructions per second, but note the word “instructions.” No computer built to date has the intelligence to formulate its own hypotheses and test them. Every computer depends on its programming and on the input it is given to process a solution, whether that is calculating your grocery tab, predicting climate variations, or charting the course of a pandemic.

Programming is complex, and complexity invites error. Bugs can take years to reveal themselves, and faulty input (garbage in, garbage out, or GIGO for short) remains the bane of computer processing. Together or separately, bugs and GIGO can produce results that appear reasonable but are false at their core. Hence, some of the models early in this coronavirus pandemic predicted as many as four million deaths; no wonder our leaders panicked. They, like their advisors, trusted computer models that we now know were processing iffy inputs.
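To see how little garbage it takes, consider a minimal sketch in Python. It is not any real epidemiological model, and the numbers are invented for illustration; the point is that perfectly correct code, fed a slightly wrong input, produces a wildly wrong answer.

```python
# A minimal illustration of GIGO: a hypothetical exponential projection.
# The model and its numbers are invented; only the arithmetic is real.

def project_cases(initial_cases: float, daily_growth_rate: float, days: int) -> float:
    """Project case counts forward assuming constant exponential growth."""
    return initial_cases * (1 + daily_growth_rate) ** days

# Suppose the true daily growth rate is 8%, but noisy early data says 12%.
true_rate = 0.08
measured_rate = 0.12

good_input = project_cases(1_000, true_rate, days=60)      # ~101,000 cases
iffy_input = project_cases(1_000, measured_rate, days=60)  # ~898,000 cases

print(f"Projection from good input: {good_input:,.0f} cases")
print(f"Projection from iffy input: {iffy_input:,.0f} cases")
# The code is bug-free, yet an input off by four percentage points
# yields a projection nearly nine times too large: garbage in, garbage out.
```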

For once, we should not blame the politicians’ reactions. Sure, some really overdid it, but that is a discussion for another day. More culpable are the advisors and the computer “scientists” who assured them that the models were to be trusted over instinct, common sense, and experiment. Computers don’t lie, they said.

In recent years this belief has extended far beyond that narrow purview. Much of science has been corrupted by this mindset, and any scientist who questions certain models is called a “denier” or a “sell-out.” When scientists tell (and worse, teach) other scientists that experimentation doesn’t matter, that the models all agree and who are you to question them, science is no longer science but a belief system.

The revelation that the initial coronavirus models were deeply flawed has pulled back the curtain on the supposed computer know-it-all. Models are great tools, but only tools: tools that can be misused and are always manipulated, sometimes with the best intentions, but manipulated all the same. Every programmer writes code, and every modeler builds models and datasets, with certain preconceptions and with the expectation that the results will support their hypotheses. Is it any wonder that the models so often do?

I hope that young man who spoke to me 60-some years ago lived long enough to see Deep Blue beat Garry Kasparov in 1997. His learned assertion was shattered; let us hope the coronavirus modeling fiasco shatters today’s certainties just as thoroughly and provides a much-needed lesson to modern scientists. Science, to be science, must continue to question all of its hypotheses, including the results of its own modeling.