Over 20 years of numerical groundwater modelling

In the late 1990s, groundwater modelling, whilst not in its infancy, was still a relatively new field, and the models developed to simulate activities such as mining, pumping or resource development were comparatively simple. The limited capability and capacity of desktop computers at the time significantly constrained both computation and functionality.

By Andrew Durick

Historically, the other limitation of particular note was simulating dewatered or unsaturated conditions, which occur around excavations. MODFLOW does have a ‘drying and rewetting’ function; however, this invariably proves numerically unstable because of the threshold used to trigger the conversion between wet and dry states. Furthermore, a cell converting to dry usually increases the chance of neighbouring cells going dry, an effect referred to as cascading dry cells.
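
To make the mechanism concrete, here is a minimal Python sketch of threshold-based wet/dry switching and the cascade it can produce. The grid, heads, threshold and head-update rule are all invented for illustration – this is not MODFLOW’s actual algorithm:

    import numpy as np

    # Toy sketch of cascading dry cells near an excavation (not MODFLOW source).
    # A cell converts to 'dry' once its head drops below the cell bottom plus a
    # wetting threshold. Dewatering at one end pulls down the neighbouring head,
    # that cell dries, the drawdown front moves on, and so on - the cascade.
    n = 10
    bottoms = np.zeros(n)               # flat cell bottoms (illustrative units)
    heads = np.full(n, 2.0)             # initial heads; no recharge in this toy
    wet = np.ones(n, dtype=bool)
    THRESH = 0.25                       # illustrative wetting threshold
    heads[0] = -1.0                     # excavation cell held below its bottom
    wet[0] = False

    for step in range(30):
        new = heads.copy()
        for i in np.where(wet)[0]:
            left = heads[i - 1] if i > 0 else heads[i]
            right = heads[i + 1] if i < n - 1 else heads[i]
            new[i] = (left + heads[i] + right) / 3.0   # crude drawdown diffusion
        heads = new
        newly_dry = wet & (heads < bottoms + THRESH)
        if newly_dry.any():
            heads[newly_dry] = bottoms[newly_dry]      # dry cell drains to its bottom
            wet[newly_dry] = False                     # and drops out of the solution
            print(f"step {step}: cells {np.where(newly_dry)[0].tolist()} went dry")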

In the late 1990s, the bulk of groundwater modellers were using graphical user interfaces (GUIs) such as PMWin, Visual MODFLOW and GMS. A handful of users had developed customised code to generate a series of input files, and batch files to run those files with the MODFLOW executable. Utilities such as those developed by John Doherty were early examples of simple code to manipulate and transform input and output data. During this time I was lucky to work with John Doherty and his utilities at the then Queensland Department of Primary Industries, while at the same time gaining exposure to regional and local scale modelling for groundwater resource developments.
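
The flavour of that scripting was simple: write the input arrays, then hand the run to the executable, just as the old batch files did. A hedged Python sketch of the idea – the file names, array contents, executable name and command line below are all placeholders, not a real MODFLOW input format:

    import subprocess
    from pathlib import Path

    # Sketch of the scripted workflow: generate an input file, then batch-run
    # the model. Everything here is hypothetical - real MODFLOW packages have
    # their own fixed formats, and the executable name/arguments are placeholders.
    workdir = Path("run01")
    workdir.mkdir(exist_ok=True)

    # write a simple whitespace-delimited array, e.g. a conductivity field
    with open(workdir / "hk_layer1.dat", "w") as f:
        for _ in range(40):
            f.write(" ".join(["5.0"] * 60) + "\n")

    # hand the run off to the model executable, as a batch file would
    subprocess.run(["modflow.exe", "model.nam"], cwd=workdir, check=True)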

When I joined AGE in 2006, I brought this methodology of using code to manipulate and process data, making the development of complex models easier. This approach enabled numerous start/stop models and time-variant changes to hydraulic properties – critical to simulating the impacts of longwall mining. This was occurring at AGE well before any commercial time-variant hydraulic property package was available.
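
As a sketch of the start/stop idea – with run_stage() a hypothetical stand-in for writing inputs, invoking the model and reading heads back – each mining stage is run in turn, the conductivity arrays are enhanced where a longwall panel has been extracted, and the next stage restarts from the saved heads:

    import numpy as np

    def run_stage(hk, starting_heads):
        # placeholder: in practice this writes the input files, runs the model
        # executable for one stage, and reads the final heads back
        return starting_heads  # stub so the sketch executes end-to-end

    hk = np.full((3, 40, 60), 1e-2)      # conductivity per layer/row/column (toy)
    heads = np.full((3, 40, 60), 50.0)   # initial heads (toy)
    panels = [((10, 20), (5, 15)), ((10, 20), (15, 25))]  # invented panel extents

    for rows, cols in panels:
        heads = run_stage(hk, heads)     # simulate up to this panel's extraction
        # enhance conductivity in the fractured zone above the extracted panel
        hk[0:2, rows[0]:rows[1], cols[0]:cols[1]] *= 100.0

    heads = run_stage(hk, heads)         # final stage after the last panel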

Also in 2006, our modelling transitioned from the standard MODFLOW and FEFLOW software packages to being dominated by MODFLOW SURFACT. SURFACT provided the ability to simulate unsaturated conditions and bypassed the inherent problems of standard MODFLOW’s drying–rewetting function. Transitioning existing models from MODFLOW to MODFLOW SURFACT was simple because the majority of the model setup remained the same.

In the recent past, we have again seen a transition at AGE, this time from SURFACT to the latest version of MODFLOW – MODFLOW-USG, with ‘USG’ referring to its support for unstructured grids. This offers clear advantages for our involvement in mining applications, where complex or sub-cropping geologies generally need to be represented. The ability to truncate a model layer where the geology pinches out in real life is something that was missing from all previous versions of MODFLOW. In the past, we developed a method of thinning and ‘wrapping’ these non-existent layers, which were assigned appropriate hydraulic properties, but it would have been better to have the overlying and underlying units directly connected.

The other advantage of MODFLOW-USG is the step away from rectangular/orthogonal meshes to variably shaped meshes. We have found that the majority of our work is ideally suited to the Voronoi mesh option, where we can add detail to the model where we have data and where refinement is required, and then rapidly coarsen the grid away from those areas. The Voronoi mesh, combined with the truncation of layers, results in fewer model cells and therefore shorter run times – important for calibration and other analyses involving large numbers of model runs.
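
As a rough illustration of that grading, the snippet below uses scipy.spatial.Voronoi to build a mesh whose generator points are dense near an area of interest (say, a pit at the origin) and coarsen geometrically away from it; the spacings and extents are invented for the example, and a production mesh would also need boundary clipping and quality control:

    import numpy as np
    from scipy.spatial import Voronoi

    # Graded Voronoi mesh sketch: rings of generator points, with the target
    # cell size growing geometrically with distance from the area of interest.
    points = [(0.0, 0.0)]
    spacing = 25.0                           # target cell size at the pit (m)
    radius = spacing
    while radius < 10_000.0:                 # out to the model boundary
        count = max(6, int(2 * np.pi * radius / spacing))
        angles = np.linspace(0.0, 2 * np.pi, count, endpoint=False)
        points += [(radius * np.cos(a), radius * np.sin(a)) for a in angles]
        spacing *= 1.5                       # coarsen the grid away from the pit
        radius += spacing

    vor = Voronoi(np.array(points))
    print(f"{len(points)} generator points, {len(vor.vertices)} Voronoi vertices")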

Model calibration in the last 20 years has certainly been revolutionised by PEST. While PEST is a great tool, it must be used appropriately and with constraints that relate back to the field data and the resulting conceptualisation. PEST is continuing to evolve, with ever-expanding functionality supporting the changing demands of not only calibrating a model, but also quantifying the potential wrongness in model predictions. This level of wrongness is more commonly referred to as uncertainty.
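
As a toy illustration of what such calibration software automates (this is not PEST itself, which is model-independent and far more sophisticated), the snippet below fits two invented aquifer parameters to invented head observations. Note that K and R only enter through their ratio, so the fit is non-unique – exactly the kind of problem that makes constraints tied back to field data essential:

    import numpy as np
    from scipy.optimize import least_squares

    x_obs = np.array([100.0, 300.0, 500.0, 700.0, 900.0])   # bore locations (m)
    h_obs = np.array([49.7, 47.8, 43.7, 37.8, 29.9])        # observed heads (m)

    def residuals(params):
        K, R = params  # conductivity and recharge (toy units; only R/K is identifiable)
        simulated = 50.0 - R * x_obs**2 / (2.0 * K)         # toy 1-D head profile
        return simulated - h_obs

    fit = least_squares(residuals, x0=[1.0, 1e-4],
                        bounds=([0.01, 1e-6], [100.0, 1e-2]))
    print("estimated K and R:", fit.x)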

Uncertainty analysis has become a focus for the industry in the last five years, and this will certainly continue now that requirements for it are written into legislative guidelines. Uncertainty analysis was not commonly undertaken within the industry five years ago because the Monte Carlo techniques available at the time were too cumbersome to apply efficiently. Sensitivity analysis was used as a substitute, extended from the usual calibration period into the predictive simulation to examine the potential variation in model results. At AGE, we have embraced the challenge of uncertainty analysis by applying clever techniques (both linear and nonlinear) to address these requirements. We have also purchased ‘supercomputing’ resources so that the large number of model runs required during the analysis can be completed as efficiently as possible.
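
A minimal Monte Carlo sketch of the idea – the parameter distributions and the ‘prediction’ function below are invented stand-ins for a full model run, which is why each of the hundreds of runs matters for compute time:

    import numpy as np

    # Monte Carlo predictive uncertainty sketch: sample uncertain parameters,
    # evaluate the prediction for each sample, report a percentile range.
    rng = np.random.default_rng(42)
    n_runs = 1000

    log10_K = rng.normal(loc=0.0, scale=0.5, size=n_runs)  # uncertain conductivity
    storage = rng.uniform(1e-4, 1e-2, size=n_runs)         # uncertain storage

    def predicted_drawdown(K, S):
        # stand-in for a model run; in practice this is a complete simulation
        return 2.0 * np.log10(1.0 + K / S)                 # toy relation

    results = predicted_drawdown(10.0**log10_K, storage)
    lo, mid, hi = np.percentile(results, [5, 50, 95])
    print(f"drawdown 5th/50th/95th percentiles: {lo:.2f} / {mid:.2f} / {hi:.2f} m")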

In terms of modelling at AGE over the last 20 years, we have thankfully seen some significant changes. We would not like to be answering today’s questions with only the tools available 20 years ago. It is an exciting time for modelling at AGE, and it is mind-boggling to think where we could be in another 20 years – with software, with computational power and with the types of questions needing answers.