The history of wildfire modeling
Jen Ciarochi writes (and creates silly illustrations) for Triplebyte’s Compiler blog. Her background is in neuroscience, but she’s known to nerd out about basically any topic that merges science and technology.
What are the origins of fire modeling? Who actually runs fire models today? Why aren’t fire models as impactful as weather models, and will they ever get there?
Fire modeling was born in the 1940s, against the backdrop of World War II, the looming Cold War, and a fire-phobic Forest Service. The pioneer of fire modeling was a mechanical engineer named Wallace Fons, who built wind tunnels and crib fires to study the behavior and properties of fire.
Fons noted that fire spreads by successively heating neighboring fuel particles up to ignition temperature. He reasoned that the rate of fire spread is largely controlled by how long the fuel type takes to ignite and how far apart the fuel particles are.
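As a back-of-envelope statement of this idea (my formalization, not Fons's actual 1946 equations): if adjacent particles sit a distance \(d\) apart and each takes a time \(t_i\) to heat to ignition, the steady rate of spread is roughly

```latex
R \approx \frac{d}{t_i}
```

where \(R\) is the spread rate, \(d\) the particle spacing, and \(t_i\) the time to ignition. Denser fuel beds and slower-igniting fuels both slow the fire down.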
In 1946, Fons published the first mathematical model of wildfire spread. The model applied the energy conservation equation to a uniform fuel bed exposed to fire, and found a logarithmic relationship between the rate of fire spread and the temperature of the fuel bed. Despite the model’s flaws (it linearized the contribution of radiation, i.e., ignored the fourth power of temperature in the radiation heat transfer equations), it was validated by experiments with pine needles.
Like most other accomplished fire scientists in the country at the time, Fons worked for the Forest Service—which was and is part of the United States Department of Agriculture (USDA). Since its establishment in 1905, the Forest Service had essentially become a hegemon, and was waging a well-documented “war on fire.”
The Forest Service controlled fire science, with the singular goal of suppressing wildfires; whether that goal was also the motivation behind the first fire model isn’t totally clear, but it undoubtedly played a role in the agency’s decision to employ Fons.
The birth of fire modeling coincided with the end of WWII, when the focus of fire research shifted from suppressing fire to weaponizing it.
After the war, authorities were convinced that the next war would also be a fire war, and it’s pretty easy to understand why. Japan had launched bomb-carrying hydrogen balloons (called Fu-Gos, or “fire balloons”) against the US in an attempt to start wildfires. While the campaign was largely unsuccessful, it was the longest-range attack the world had ever seen. Then there were the firestorms: massive, bomb-induced fires that generated hurricane-force winds. After Allied forces bombed Dresden, Hamburg, and Tokyo, unexpected firestorms raged through the cities. The Hiroshima atomic strike produced yet another firestorm, which destroyed over four square miles.
Recognizing the need to understand fire, the federal government began investing heavily in multidisciplinary fire research and large-scale field experiments—and continued to do so through much of the Cold War. The Forest Service became actively involved in nuclear blast tests, employing the country’s best fire scientists. While the US didn’t release another major fire model until the 1960s, the war-inspired boost to fire research uncovered fundamental knowledge about fire that formed the basis of future fire models.
Fons himself, who worked for the Forest Service until his death in 1963, was involved in several classified experiments studying the impact of detonation-induced fires on forests and other materials.
One of these experiments, part of Operation Tumbler-Snapper, explored whether trees can provide safety from a nuclear blast. To measure how trees bend or break under shockwave forces, Fons drove trees around at fixed speeds in the bed of a specially equipped truck. The Forest Service extended this work to study blast effects on an artificial pine forest they set up in Nevada. A video of this experiment (around 32:05) shows several men who all seem to be able to singlehandedly lift AN ENTIRE TREE...until you notice the crane in the corner later in the video.
Can we also pause to appreciate the irony of these incredibly smoky film transitions?
Anyway, the first fire modeler was apparently also an asset to national defense; in 1961, then Vice President Lyndon Johnson presented Fons with the USDA Superior Service Award for “notable pioneering contributions to forest fire research and to national defense including the establishment of the thermal and blast effects of nuclear explosions on forests and other natural cover.”
Some of the Forest Service’s other fire-related initiatives during this time were less fruitful. For example, they launched Project Skyfire in 1953 with the goal of preventing fires by modifying weather. Specifically, researchers tried to suppress lightning by seeding thunderstorms with silver iodide.
In the 1960s and 1970s, several countries released new fire models, with the US, Australia, Russia, and Canada leading these efforts. Like Fons’s model, many of the newer models were physical—based on the laws of fluid mechanics, combustion, and heat transfer.
However, Australia and Russia also released the first empirical (McArthur, 1966) and semi-empirical (Molchanov, 1957) models. This new wave of fire models, based on statistical correlations from experiments and historical wildfire studies, was made possible by the data gathered over the previous decades.
One of the most influential fire models is Dick Rothermel’s semi-empirical model of forward fire spread, published in 1972. Rothermel, an aeronautical engineer turned USDA fire modeler, based his fire spread model on Frandsen’s 1971 heat balance model, as well as data from wind tunnel experiments and Australian wildfires. The model calculates the rate of forward fire spread by dividing a heat source term by a heat sink term.
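In standard notation (as laid out in Andrews 2018, cited below), Rothermel's spread equation is

```latex
R = \frac{I_R \, \xi \, (1 + \phi_w + \phi_s)}{\rho_b \, \varepsilon \, Q_{ig}}
```

where \(R\) is the rate of spread, the numerator is the heat source (\(I_R\) the reaction intensity, \(\xi\) the propagating flux ratio, and \(\phi_w\), \(\phi_s\) dimensionless wind and slope factors), and the denominator is the heat sink (\(\rho_b\) the fuel bed bulk density, \(\varepsilon\) the effective heating number, and \(Q_{ig}\) the heat of preignition). Wind and slope multiply the spread rate; dense, hard-to-preheat fuel divides it.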
The Rothermel equations described so many wildfires well that the Forest Service implemented them in the first release of the National Fire Danger Rating System (NFDRS), which initially consisted of lookup tables and nomograms. Using paper and pencil, firefighters manually plugged in the wind speed and slope angle to estimate the speed and direction of fire spread.
Today, the NFDRS is computerized, but still based on Rothermel’s groundbreaking equations. In fact, the Rothermel model is the backbone of every fire model used in a field application today.
Another fire modeling breakthrough of the 1970s was the use of Huygens principle of wave propagation to model fire spread in all directions. Huygens principle, originally proposed to describe traveling light waves, treats each point on the edge of a wavefront as an independent source of secondary wavelets that propagate the wave.
Applied to fire modeling, Huygens principle simulates fire spread using wavelets (typically elliptical wavelets). At each time point, the wind-slope vector determines the shape and orientation of each ellipse, while the fuel conditions determine their size (spread rate). The wavelets form a kind of envelope around the original fire perimeter, and the outer edge of this envelope is the new fire front.
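To make the wavelet idea concrete, here is a minimal sketch of one Huygens propagation step under simplifying assumptions: uniform fuel, constant wind, and a convex fire shape. The function names are mine, and the convex-hull envelope is a deliberate shortcut; real simulators solve Richards' (1990) envelope equations instead.

```python
import math

def convex_hull(pts):
    """Andrew's monotone-chain convex hull (counter-clockwise)."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def cross(o, p, q):
        return (p[0] - o[0]) * (q[1] - o[1]) - (p[1] - o[1]) * (q[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def huygens_step(perimeter, dt, head_rate, wind_dir, ecc=0.9, n_samples=24):
    """One Huygens-style propagation step. Every perimeter point emits an
    elliptical wavelet whose major axis is aligned with the wind-slope
    vector and whose rear focus sits at the emitting point, so the fire
    runs fastest downwind. The new front is the outer envelope of all
    wavelets, approximated here by a convex hull (valid only for convex
    fire shapes)."""
    a = head_rate * dt / (1 + ecc)       # semi-major axis
    b = a * math.sqrt(1 - ecc ** 2)      # semi-minor axis (flanking spread)
    c = ecc * a                          # center offset: ignition point = rear focus
    cos_w, sin_w = math.cos(wind_dir), math.sin(wind_dir)
    points = list(perimeter)
    for (x, y) in perimeter:
        cx, cy = x + c * cos_w, y + c * sin_w      # wavelet center, shifted downwind
        for k in range(n_samples):
            t = 2 * math.pi * k / n_samples
            ex, ey = a * math.cos(t), b * math.sin(t)     # point in ellipse frame
            points.append((cx + ex * cos_w - ey * sin_w,  # rotate into wind frame
                           cy + ex * sin_w + ey * cos_w))
    return convex_hull(points)
```

Starting from a single ignition point and calling `huygens_step` repeatedly grows the familiar egg-shaped fire perimeter, stretched downwind: the head of the fire advances `head_rate * dt` per step while the rear barely backs into the wind.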
Sanderlin and Sunderson were the first to apply Huygens principle to fire modeling. Their computerized “radial fire propagation model,” published in 1975, projected fire growth using a three-dimensional wind field and a gridded fuel and topography landscape. In 1982, Hal Anderson at the Missoula Fire Sciences Laboratory applied Huygens principle to perimeter data from a test fire. To this day, Huygens principle is one of the two most common methods for propagating fire (the other method spreads the fire based on direct contact with, or close proximity to, neighboring cells).
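The neighboring-cell alternative can be sketched as a tiny cellular automaton. This is my own minimal illustration, not any operational model: a single ignition probability stands in for everything a real model derives from fuel, wind, and slope.

```python
import random

def spread_step(grid, p_ignite=0.6, rng=None):
    """One step of a toy cellular-automaton fire spread model: each
    burning cell burns out and ignites its 4-connected unburned
    neighbors with probability p_ignite.
    Cell states: 0 = unburned fuel, 1 = burning, 2 = burned out."""
    rng = rng or random.Random(0)
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1:
                new[r][c] = 2                      # burning cell burns out
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if (0 <= nr < rows and 0 <= nc < cols
                            and grid[nr][nc] == 0 and rng.random() < p_ignite):
                        new[nr][nc] = 1            # neighbor ignites
    return new
```

Iterating `spread_step` marches the fire outward cell by cell, which is why this family of models pairs naturally with gridded GIS fuel and terrain layers.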
While fire modeling was making a comeback, the Forest Service was eating humble pie. Other federal agencies had grown weary of the Forest Service’s monopoly on fire science, and were eager to implement their own policies. The National Park Service, the Department of the Interior, and the National Science Foundation all became involved in fire research. At the same time, the Forest Service’s funding from the defense department, which had been flowing in steadily since the end of WWII, started to dry up.
With this shift in fire research came a pivot in the attitude toward prescribed burning (setting intentional fires), which had been gradually reemerging as a forest management strategy since the 1940s. Wildfires, once viewed as nothing more than a threat to life and valuable forest resources, were increasingly being recognized as a vital part of the earth system.
As this funding-fueled frenzy of fire research came to a close, many questions about fire physics and chemistry were left unresolved. Nonetheless, the impending era of computers greatly advanced fire modeling in the decades that followed.
Before computers, people forecast fire growth using physical maps, nomograms, spread rate calculations, and vectors of slope and wind effects. With the advent of computers came computerized fire simulation models, which converted the existing 1D point models of forward fire spread into 2D planar models that propagate the whole fire perimeter across a landscape. The Forest Service released the first wildland fire behavior prediction program, called BEHAVE, in 1984. BEHAVE was based on the Rothermel equations and initially programmed on a TI-59 calculator.
A lack of fuel and terrain data, however, severely limited early fire spread software. In the 1990s, remote sensing capabilities, Geographic Information Systems (GIS), and greater computing power revived interest in fire behavior modeling. BEHAVE and other fire modeling software packages were integrated with GIS, bringing landscape data into fire simulation.
In the 90s and early aughts, researchers released several new GIS-based fire simulators. Notable examples in the US included Dynafire (1991), Firemap (1992), FARSITE (1993), Burn (1994), and Embyr (2000). With the exception of Embyr, each of these models was based on the Rothermel equations.
In 1996, Cristina Vega-Garcia and colleagues published the first application of machine learning to wildfire modeling. They used an artificial neural network, trained and tested on historical wildfire data, to predict human-caused wildfires in Alberta, Canada. Using the weather index, regional size, and district as input data, their model correctly predicted where fires wouldn’t occur 85% of the time and where they would occur 78% of the time.
It was also in 1996 that National Science Foundation researcher Terry Clark showed that fire spread models could be coupled with numerical atmospheric models. This coupling allowed fire to interact with the atmosphere and “create its own weather” in simulations, as it does in the real world. The atmospheric humidity, temperature, wind speed, and wind direction affect the fire environment, while the smoke, heat fluxes, and moisture fluxes from the fire influence the atmosphere.
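The feedback loop can be caricatured in a few lines. This is a toy scalar sketch with invented coefficients, nothing like the 3D atmospheric dynamics CAWFE or WRF-SFIRE actually solve, but it captures the two-way structure: wind accelerates the fire, and the fire's heat flux strengthens the local wind.

```python
def coupled_step(fire_front, wind_speed, dt=1.0):
    """One step of a toy two-way fire-atmosphere coupling.
    All coefficients are made up for illustration."""
    base_rate = 0.5                                   # no-wind spread rate (assumed)
    spread_rate = base_rate * (1 + 0.4 * wind_speed)  # wind pushes the fire
    fire_front += spread_rate * dt                    # fire advances
    heat_flux = 10.0 * spread_rate                    # faster spread, more heat
    wind_speed += 0.01 * heat_flux * dt               # heat drives fire-induced winds
    return fire_front, wind_speed
```

Iterating this loop, the wind keeps strengthening as the fire accelerates, which is exactly the "fire creates its own weather" behavior that uncoupled models cannot reproduce.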
Clark’s model—called CAWFE—ushered in a new generation of coupled fire-atmosphere models that fall largely into one of two camps. The first camp, exemplified by CAWFE and WRF-SFIRE, pairs a simplified empirical fire spread model with a 3D numerical weather prediction model (with a resolution of hundreds of meters or more).
The second camp of coupled fire-atmosphere models includes models like the Wildland-Urban Interface Fire Dynamics Simulator (WFDS) and HIGRAD/FIRETEC. These programs pair fire models with a computational fluid dynamics (CFD) model, simulating turbulent airflow at a very high resolution (single meters) over a relatively small area.
This brings you pretty much up to speed on the history of fire modeling in the US, but where does the field stand today?
Fire modeling today
In this final section, I address some of the most interesting questions about fire modeling today.
- Who actually runs fire models?
- How are fire models used for real firefighting?
- Why aren’t fire models better, and how can they improve?
Who actually runs fire models?
Spoiler alert: it’s not firefighters. The US manages wildfire responses through the Incident Command System, an interdepartmental effort that was established in the 1970s after devastating California wildfires. Under this system, a Fire Analyst (or Fire Behavior Analyst) runs a fire model and relays the highlights of the model output to the Incident Commander (IC).
The IC coordinates an emergency response based on many streams of information, one of which is the fire model output. In addition to the model, the IC must also consider where crews are and whether they are safe, which structures are the most at-risk, how the fire can be accessed, where the nearest water sources are, what the weather is like, what type of terrain they’re dealing with, and so on. The IC uses all this information to make quick decisions about where resources should be focused and what crews on the ground should do.
How are fire models used for real firefighting?
Firefighting organizations in the US use several types of fire models for real-world wildfire management; notable examples include FARSITE (now part of FlamMap) and the Wildland Fire Decision Support System (WFDSS), built in 2009. All fires under federal government jurisdiction are run through WFDSS.
That being said, we still suppress 97% of the wildfires in the US (it’s the other 3% that cause all the devastation in the news). Since modeling a fire that won’t spread is futile, analysts model only about 1% of wildfires (and about 3% of those on federal lands). As such, the most common application of fire model output by far is staging: deciding where to position firefighting resources based on where wildfires are most likely to occur.
When it comes to modeling ongoing wildfires in real time (rather than predicting where they will start), most operational fire models are pretty basic and rely on simple input data. These systems use empirical 1D fire spread models, which are faster and less complex than their physical counterparts. As computers get faster and more powerful, physical models are increasingly being incorporated into fire spread simulations. Some of the more complex models, like the coupled fire-atmosphere model WRF-SFIRE, are already fast enough for real-time use!
Wildfire management teams are also leveraging AI-based tools. For example, the California Department of Forestry and Fire Protection (CalFire) is using Wildfire Analyst Enterprise—developed by the startup Technosylva—to predict wildfire behavior. Wildfire Analyst Enterprise uses fire spread models and machine learning to compare current and historical fires, then uses this information to predict where a fire will go and when it will get there.
At the end of August 2020, CalFire Battalion Chief Jon Heggie sent firefighters and equipment to Felton, California after Wildfire Analyst Enterprise predicted that the CZU Lightning Complex fire would spread there. The early intervention saved many homes.
Why aren’t fire models better, and how can they improve?
Right now, the biggest barrier standing in the way of better fire models is a lack of knowledge about the physics and chemistry of fire—particularly large-scale wildfires.
Ironically, between the excessive fuel buildup from decades of wildfire suppression and the hotter, drier seasons of recent decades, our wildfires are starting to look more and more like the highly unpredictable, bomb-induced fires we studied so intently after WWII.
Fortunately, the gradual reintegration of prescribed burning as a forest management strategy provides an excellent opportunity to boost fire science and improve fire models. To understand why, consider the closely related problem of modeling weather. Fire and weather are intimately interlinked, and in many ways, fire modeling is a weather modeling problem. Fire and weather (unlike earthquakes, for example) can both be directly observed. Why, then, has weather modeling outpaced fire modeling?
Aside from funding, another closely related reason is data. Every day, weather modelers wake up to more weather data, which they can use to help validate their models. The same is not even remotely true for fire modelers. Remember, we still suppress 97% of wildfires in this country, so it’s really difficult for fire modelers to validate their models and gather data at scales relevant to modeling real wildfires.
For this reason, the historical shift in the prescribed burning policy is really exciting for fire modeling. Prescribed burns are much more similar to real wildfires than fires in laboratory settings, yet much easier to collect data from. After all, we know exactly when, where, and how prescribed burns are starting. Couple this with faster, more powerful computers and better remote sensing technology (e.g., LIDAR), and fire modeling is well-poised to rapidly improve in the near future—IF these efforts are sufficiently funded.
While scientists and policymakers alike now recognize prescribed burning as the most broadly cost-effective fire management strategy, this paradigm shift has not been accompanied by a commensurate increase in prescribed burning. In the Western US, prescribed burning activity has actually remained stable or even decreased between 1998 and 2018.
In areas that are carrying out more prescribed burns, the federal government isn’t leading the effort; serious wildfires are increasingly forcing federal agencies to devote more of their resources to fire suppression. In the last five years, the Bureau of Indian Affairs was the only federal agency that allocated over 25% of its fire suppression budget for prescribed burning; it was also the only federal agency to considerably ramp up prescribed burning activity. In the Southeast, where prescribed burning increased the most, 70% of the burns were led by non-federal organizations.
To make matters worse, many states canceled planned prescribed burns in 2020 due to COVID-19, out of well-founded concern for how diminished air quality could worsen the pandemic. So, while prescribed burns can improve fire science, which can in turn improve fire models, we aren’t exactly on the right track to get there.
Fire models, like fires, don’t develop in a vacuum. Like the physical landscape, the political landscape in the United States shapes how Americans deal with—and model—wildfires.
The 40s and 50s saw the birth of fire modeling and war-driven improvements to fire science. As the Cold War raged on, the 60s and 70s witnessed a flourishing of new fire models. The 80s and 90s brought fire simulators that elevated those 1D models to two dimensions. The 2000s brought coupled fire-atmosphere models, increasingly faster computers, and new AI tools.
What we need now is a better scientific grasp of large-scale wildland fires, and more prescribed burns to help us get there.
- Andrews, P.L. “The Rothermel surface fire spread model and associated developments: A comprehensive explanation.” 2018.
- “A Century of Wildland Fire Research,” 2017. https://doi.org/10.17226/24792.
- Ciarochi, Jen, and Adam Kochanski (co-developer of the coupled fire-atmosphere model WRF-SFIRE). Fire Modeling Interview. Personal, August 14, 2020.
- Duane, Daniel. “The West's Infernos Are Melting Our Sense of How Fire Works.” Wired. 2020. Conde Nast. https://www.wired.com/story/west-coast-california-wildfire-infernos/.
- Fons, Wallace L. and T.G. Storey, “Operation Castle, Project 3.3, Blast Effects on Tree Stand: Report to the Test Director.” 1955. WT-921. Washington, DC: USDA Forest Service, Division of Fire Research.
- Fons, Wallace L., Sauer, F.M., and W.Y. Pong, “Blast Effects on Forest Stands by Nuclear Weapons,” Technical Report AFSWP-971 (Washington, DC: USDA Forest Service, Division of Fire Research, 1957).
- Jain, P., Coogan, S., Subramanian, S.G., Crowley, M., Taylor, S., and Mike D. Flannigan. “A Review of Machine Learning Applications in Wildfire Science and Management.” Environmental Reviews, 2020, 1–28. https://doi.org/10.1139/er-2020-0019.
- Kochanski, A.K., Jenkins, M.A., Mandel, J., Beezley, J.D., Clements, C.B., and S. Krueger. “Evaluation of WRF-SFIRE Performance with Field Observations from the FireFlux Experiment.” Geoscientific Model Development 6, no. 4 (2013): 1109–26. https://doi.org/10.5194/gmd-6-1109-2013.
- Kolden, Crystal A. “We’re Not Doing Enough Prescribed Fire in the Western United States to Mitigate Wildfire Risk.” Fire 2, no. 2 (2019): 30. https://doi.org/10.3390/fire2020030.
- Pastor, E. “Mathematical Models and Calculation Systems for the Study of Wildland Fire Behaviour.” Progress in Energy and Combustion Science 29, no. 2 (2003): 139–53. https://doi.org/10.1016/s0360-1285(03)00017-0.
- Sauer, F.M., Arnold, R.K., Fons, W.L., and C.C. Chandler, “Operation UPSHOT-KNOTHOLE, Nevada Proving Grounds, Project 8.11b, Ignition and Persistent Fires Resulting from Atomic Explosions—Exterior Kindling Fuels: Report to the Test Director.” 1953. WT-775. Washington, DC: USDA Forest Service, Division of Fire Research.
- Sullivan, Andrew L. “Wildland Surface Fire Spread Modelling, 1990 - 2007. 3: Simulation and Mathematical Analogue Models.” International Journal of Wildland Fire 18, no. 4 (2009): 387. https://doi.org/10.1071/wf06144.
- ———. “Wildland Surface Fire Spread Modelling, 1990 - 2007. 1: Physical and Quasi-Physical Models.” International Journal of Wildland Fire 18, no. 4 (2009): 349. https://doi.org/10.1071/wf06143.
- ———. “Wildland Surface Fire Spread Modelling, 1990 - 2007. 2: Empirical and Quasi-Empirical Models.” International Journal of Wildland Fire 18, no. 4 (2009): 369. https://doi.org/10.1071/wf06142.
- “United States Strategic Bombing Survey: The Effects of the Atomic Bombs on Hiroshima and Nagasaki.” June 30, 1946.
- Weise, D.R. and T.R. Fons. “Wallace L. Fons: Fire Research Pioneer.” 2014. Forest History Today.