Agricultural systems models worldwide are increasingly being used to explore options and solutions for the food security, climate change adaptation and mitigation, and carbon trading problem domains. APSIM (Agricultural Production Systems sIMulator) is one such model that continues to be applied and adapted to this challenging research agenda. From its inception twenty years ago, APSIM has evolved into a framework containing many of the key models required to explore changes in agricultural landscapes, with capability ranging from simulation of gene expression through to multi-field farms and beyond. Keating et al. (2003) described many of the fundamental attributes of APSIM in detail. Much has changed in the last decade, and the APSIM community has been exploring novel scientific domains and utilising software developments in social media, web and mobile applications to provide simulation tools adapted to new demands. This paper updates the earlier work of Keating et al. (2003) and chronicles the changing external challenges and opportunities placed on APSIM during the last decade. It also explores and discusses how APSIM has been evolving into a “next generation” framework with improved features and capabilities that allow its use in many diverse topics.
Air quality forecasters, emergency responders, aviation interests, government agencies, and the atmospheric research community are among those who require access to tools to analyze and predict the transport and dispersion of pollutants in the atmosphere. Because of this need, the unique web-based Real-time Environmental Applications and Display sYstem (READY) has been under continuous development since 1997 to provide access to a suite of tools for producing air parcel trajectory and dispersion model results and displaying meteorological data. READY provides a “quasi-operational” portal to run the HYSPLIT atmospheric transport and dispersion model and interpret its results. Typical user applications include modeling the release of hazardous pollutants and volcanic ash, forest fire and prescribed burn smoke forecasting, poor air quality events, and various climatological studies. In addition, READY provides the user with quick access to meteorological data interpolated to the location of interest, helping in the interpretation of the HYSPLIT model results.
In order to use environmental models effectively for management and decision-making, it is vital to establish an appropriate level of confidence in their performance. This paper reviews techniques available across various fields for characterising the performance of environmental models, with a focus on numerical, graphical and qualitative methods. General classes of direct value comparison, coupling real and modelled values, preserving data patterns, indirect metrics based on parameter values, and data transformations are discussed. In practice, environmental modelling requires the use and implementation of workflows that combine several methods, tailored to the model purpose and dependent upon the data and information available. A five-step procedure for performance evaluation of models is suggested, with the key elements including: (i) (re)assessment of the model's aim, scale and scope; (ii) characterisation of the data for calibration and testing; (iii) visual and other analysis to detect under- or non-modelled behaviour and to gain an overview of overall performance; (iv) selection of basic performance criteria; and (v) consideration of more advanced methods to handle problems such as systematic divergence between modelled and observed values. ► Numerical, graphical and qualitative methods for characterising performance of environmental models are reviewed. ► A structured, iterative workflow that combines several evaluation methods is suggested. ► Selection of methods must be tailored to the model scope and purpose, and quality of data and information available.
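As an illustration of step (iv), basic performance criteria such as bias, root mean square error (RMSE) and Nash–Sutcliffe efficiency (NSE) are straightforward to compute once observed and modelled values are paired. The following is a minimal sketch in Python; the metric choice and the synthetic data are illustrative and not taken from the paper.

```python
import numpy as np

def performance_criteria(observed, modelled):
    """Basic direct value comparison metrics for paired observed/modelled series."""
    observed = np.asarray(observed, dtype=float)
    modelled = np.asarray(modelled, dtype=float)
    residuals = modelled - observed
    bias = residuals.mean()                                  # mean error
    rmse = np.sqrt((residuals ** 2).mean())                  # root mean square error
    # Nash-Sutcliffe efficiency: 1 is perfect, 0 means no better than the observed mean
    nse = 1.0 - (residuals ** 2).sum() / ((observed - observed.mean()) ** 2).sum()
    return {"bias": bias, "rmse": rmse, "nse": nse}

# synthetic observed and modelled series
obs = np.array([2.1, 3.4, 5.0, 4.2, 3.1])
mod = np.array([2.4, 3.1, 4.6, 4.8, 2.9])
print(performance_criteria(obs, mod))
```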
Sensitivity Analysis (SA) investigates how the variation in the output of a numerical model can be attributed to variations of its input factors. SA is increasingly being used in environmental modelling for a variety of purposes, including uncertainty assessment, model calibration and diagnostic evaluation, dominant control analysis and robust decision-making. In this paper we review the SA literature with the goal of providing: (i) a comprehensive view of SA approaches, also in relation to other methodologies for model identification and application; (ii) a systematic classification of the most commonly used SA methods; (iii) practical guidelines for the application of SA. The paper aims to deliver an introduction to SA for non-specialist readers, together with practical advice and best-practice examples from the literature, and to stimulate discussion within the community of SA developers and users regarding the setting of good practices and the definition of priorities for future research.
The development and application of evolutionary algorithms (EAs) and other metaheuristics for the optimisation of water resources systems has been an active research field for over two decades. Research to date has emphasized algorithmic improvements and individual applications in specific areas (e.g. model calibration, water distribution systems, groundwater management, river-basin planning and management, etc.). However, there has been limited synthesis of shared problem traits, common EA challenges, and the advances needed across major applications. This paper clarifies the current status and future research directions for better solving key water resources problems using EAs. Advances in understanding fitness landscape properties and their effects on algorithm performance are critical. Future EA-based applications to real-world problems require a fundamental shift of focus towards improving problem formulations, understanding general theoretic frameworks for problem decompositions, major advances in EA computational efficiency, and, most importantly, aiding real decision-making in complex, uncertain application contexts.
The design and implementation of effective environmental policies need to be informed by a holistic understanding of the system processes (biophysical, social and economic), their complex interactions, and how they respond to various changes. Models, integrating different system processes into a unified framework, are seen as useful tools to help analyse alternatives with stakeholders, assess their outcomes, and communicate results in a transparent way. This paper reviews five common approaches or model types that have the capacity to integrate knowledge by developing models that can accommodate multiple issues, values, scales and uncertainty considerations, as well as facilitate stakeholder engagement. The approaches considered are: system dynamics, Bayesian networks, coupled component models, agent-based models and knowledge-based models (also referred to as expert systems). We start by discussing several considerations in model development, such as the purpose of model building, the availability of qualitative versus quantitative data for model specification, the level of spatio-temporal detail required, and treatment of uncertainty. These considerations and a review of applications are then used to develop a framework that aims to assist modellers and model users in the choice of an appropriate modelling approach for their integrated assessment applications and that enables more effective learning in interdisciplinary settings.
Bayesian inference has found widespread application and use in science and engineering to reconcile Earth system models with data, including prediction in space (interpolation), prediction in time (forecasting), assimilation of observations and deterministic/stochastic model output, and inference of the model parameters. Bayes' theorem states that the posterior probability, p(H|Ỹ), of a hypothesis H is proportional to the product of the prior probability, p(H), of this hypothesis and the likelihood, L(H|Ỹ), of the same hypothesis given the new observations Ỹ, or p(H|Ỹ) ∝ p(H) L(H|Ỹ). In science and engineering, H often constitutes some numerical model, ℱ(x), which summarizes, in algebraic and differential equations, state variables and fluxes, all knowledge of the system of interest, and the unknown parameter values x are subject to inference using the data Ỹ. Unfortunately, for complex system models the posterior distribution is often high dimensional and analytically intractable, and sampling methods are required to approximate the target. In this paper I review the basic theory of Markov chain Monte Carlo (MCMC) simulation and introduce a MATLAB toolbox of the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm developed by Vrugt et al. (2008a, 2009a) and used for Bayesian inference in fields ranging from physics, chemistry and engineering, to ecology, hydrology, and geophysics. This MATLAB toolbox provides scientists and engineers with an arsenal of options and utilities to solve posterior sampling problems involving (among others) bimodality, high-dimensionality, summary statistics, bounded parameter spaces, dynamic simulation models, formal/informal likelihood functions (GLUE), diagnostic model evaluation, data assimilation, Bayesian model averaging, distributed computation, and informative/noninformative prior distributions. The DREAM toolbox supports parallel computing and includes tools for convergence analysis of the sampled chain trajectories and post-processing of the results. Seven different case studies illustrate the main capabilities and functionalities of the MATLAB toolbox.
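The DREAM toolbox described above is implemented in MATLAB; the fragment below is not DREAM but a minimal random-walk Metropolis sampler in Python, included only to make the idea of approximating a posterior by MCMC concrete. The toy linear model, the uniform prior bounds, the unit-variance Gaussian likelihood and the proposal scale are all illustrative assumptions.

```python
import numpy as np

def log_posterior(theta, data):
    """log(prior * likelihood) for a toy model y = theta[0] + theta[1] * x."""
    theta = np.asarray(theta, dtype=float)
    if np.any(theta < -10) or np.any(theta > 10):   # uniform prior on [-10, 10]^2
        return -np.inf
    x, y = data
    resid = y - (theta[0] + theta[1] * x)
    return -0.5 * np.sum(resid ** 2)                 # Gaussian likelihood, sigma = 1

def metropolis(log_post, data, theta0, n_iter=5000, step=0.2, seed=1):
    """Random-walk Metropolis: propose a jump, accept with probability min(1, p_new/p_old)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta, data)
    chain = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        proposal = theta + rng.normal(scale=step, size=theta.size)
        lp_prop = log_post(proposal, data)
        if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis acceptance rule
            theta, lp = proposal, lp_prop
        chain[i] = theta
    return chain

x = np.linspace(0, 1, 20)
y = 1.0 + 2.0 * x + np.random.default_rng(0).normal(scale=0.1, size=x.size)
samples = metropolis(log_posterior, (x, y), theta0=[0.0, 0.0])
print(samples[2500:].mean(axis=0))                   # posterior mean after burn-in
```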
openair is an R package primarily developed for the analysis of air pollution measurement data but which is also of more general use in the atmospheric sciences. The package consists of many tools for importing and manipulating data, and undertaking a wide range of analyses to enhance understanding of air pollution data. In this paper we consider the development of the package with the purpose of showing how air pollution data can be analysed in more insightful ways. Examples are provided of importing data from UK air pollution networks, source identification and characterisation using bivariate polar plots, quantitative trend estimates and the use of functions for model evaluation purposes. We demonstrate how air pollution data can be analysed quickly and efficiently and in an interactive way, freeing time to consider the problem at hand. One of the central themes of openair is the use of conditioning plots and analyses, which greatly enhance inference possibilities. Finally, some consideration is given to future developments.
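openair itself is written in R, so the sketch below is not the package's API; it only illustrates the idea behind a bivariate polar plot, namely binning pollutant concentrations by wind direction and wind speed and mapping the bin means onto polar axes. The synthetic data, the 10-degree and 1 m/s bin widths, and the plotting choices are all assumptions for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n = 5000
wd = rng.uniform(0, 360, n)        # wind direction (degrees)
ws = rng.gamma(2.0, 2.0, n)        # wind speed (m/s)
# synthetic concentrations with a "source" near 90 degrees and moderate wind speeds
conc = 20 + 30 * np.exp(-((wd - 90) / 30) ** 2) * np.exp(-((ws - 4) / 2) ** 2) + rng.normal(0, 3, n)

# mean concentration in wind-direction (10 degree) by wind-speed (1 m/s) bins
dir_bins = np.arange(0, 361, 10)
ws_bins = np.arange(0, 11, 1)
mean_conc = np.full((len(ws_bins) - 1, len(dir_bins) - 1), np.nan)
for i in range(len(ws_bins) - 1):
    for j in range(len(dir_bins) - 1):
        sel = (ws >= ws_bins[i]) & (ws < ws_bins[i + 1]) & (wd >= dir_bins[j]) & (wd < dir_bins[j + 1])
        if sel.any():
            mean_conc[i, j] = conc[sel].mean()

# polar surface: angle = wind direction, radius = wind speed, colour = mean concentration
theta, r = np.meshgrid(np.deg2rad(dir_bins), ws_bins)
ax = plt.subplot(projection="polar")
ax.set_theta_zero_location("N")    # 0 degrees at the top, as in a wind rose
ax.set_theta_direction(-1)         # angles increase clockwise
mesh = ax.pcolormesh(theta, r, mean_conc)
plt.colorbar(mesh, label="mean concentration")
plt.show()
```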
Global Sensitivity Analysis (GSA) is increasingly used in the development and assessment of environmental models. Here we present a Matlab/Octave toolbox for the application of GSA, called SAFE (Sensitivity Analysis For Everybody). It implements several established GSA methods and allows for easily integrating others. All methods implemented in SAFE support the assessment of the robustness and convergence of sensitivity indices. Furthermore, SAFE includes numerous visualisation tools for the effective investigation and communication of GSA results. The toolbox is designed to make GSA accessible to non-specialist users, and to provide a fully commented code for more experienced users to complement their own tools. The documentation includes a set of workflow scripts with practical guidelines on how to apply GSA and how to use the toolbox. SAFE is open source and freely available for academic and non-commercial purposes. Ultimately, SAFE aims to contribute to improving the diffusion and quality of GSA practice in the environmental modelling community.
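SAFE is distributed as Matlab/Octave code; the Python fragment below is independent of SAFE and is only meant to illustrate the kind of GSA method it implements. It is a crude elementary-effects (Morris-style) screening of a toy three-parameter function; the model, sample sizes and step size are illustrative assumptions.

```python
import numpy as np

def model(x):
    """Toy model: factor 0 dominates, factor 2 is nearly irrelevant."""
    return 5 * x[0] + 2 * x[1] ** 2 + 0.1 * x[2]

def elementary_effects(model, n_params, n_base=50, delta=0.1, seed=0):
    """Crude Morris-style screening on the unit hypercube.
    Returns the mean absolute elementary effect (mu*) of each factor."""
    rng = np.random.default_rng(seed)
    ee = np.zeros((n_base, n_params))
    for b in range(n_base):
        x = rng.uniform(0, 1 - delta, n_params)   # random base point, leaving room for +delta
        y0 = model(x)
        for i in range(n_params):                 # perturb one factor at a time from the base point
            x_i = x.copy()
            x_i[i] += delta
            ee[b, i] = abs(model(x_i) - y0) / delta
    return ee.mean(axis=0)

print(elementary_effects(model, n_params=3))      # largest value flags the most influential factor
```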
This paper updates and builds on ‘Modelling with Stakeholders’ (Voinov and Bousquet, 2010), which demonstrated the importance of, and demand for, stakeholder participation in resource and environmental modelling. This position paper returns to the concepts of that publication and reviews the progress made since 2010. A new development is the wide introduction and acceptance of social media and web applications, which dramatically changes the context and scale of stakeholder interactions and participation. Technology advances make it easier to incorporate information in interactive formats via visualization and games to augment participatory experiences. Citizens as stakeholders are increasingly demanding to be engaged in planning decisions that affect them and their communities, at scales from local to global. How people interact with and access models and data is rapidly evolving. In turn, this requires changes in how models are built, packaged, and disseminated: citizens are less in awe of experts and external authorities, and they are increasingly aware of their own capabilities to provide inputs to planning processes, including models. The continued acceleration of environmental degradation and natural resource depletion accompanies these societal changes, even as there is a growing acceptance of the need to transition to alternative, possibly very different, lifestyles. Substantive transitions cannot occur without significant changes in human behaviour and perceptions. The important and diverse roles that models can play in guiding human behaviour, and in disseminating and increasing societal knowledge, are a feature of stakeholder processes today.
This paper reviews state-of-the-art empirical, hydrodynamic and simple conceptual models for determining flood inundation. It explores their advantages and limitations, highlights the most recent advances and discusses future directions. It also addresses how uncertainty is analysed under the various approaches and identifies opportunities for handling it better. The aim is to inform scientists new to the field, and to help emergency response agencies, water resources managers, insurance companies and other decision makers keep up to date with the latest developments. Guidance is provided for selecting the most suitable method/model for solving practical flood-related problems, taking into account the specific outputs required for the modelling purpose, the data available and computational demands. Multi-model, multi-discipline approaches are recommended in order to further advance this research field.
Stakeholder engagement, collaboration or participation, and shared learning or fact-finding have become buzzwords, and hardly any environmental assessment or modelling effort today can be presented without some kind of reference to stakeholders and their involvement in the process. This is clearly a positive development, but in far too many cases stakeholders have merely been paid lip service and their engagement has consequently been quite nominal. Nevertheless, it is generally agreed that better decisions are implemented with less conflict and more success when they are driven by stakeholders, that is, by those who will be bearing their consequences. Participatory modelling, with its various types and clones, has emerged as a powerful tool that can (a) enhance stakeholders' knowledge and understanding of a system and its dynamics under various conditions, as in collaborative learning, and (b) identify and clarify the impacts of solutions to a given problem, usually related to supporting decision making, policy, regulation or management. In this overview paper we first look at the different types of stakeholder modelling and compare participatory modelling to other frameworks that involve stakeholder participation. Based on that, and on the experience of the projects reported in this issue and elsewhere, we draw some lessons and generalisations. We conclude with an outline of some future directions.
There is an increasing need for environmental management advice that is wide-scoped, covering various interlinked policies, and realistic about the uncertainties related to the possible management actions. To achieve this, efficient decision support integrates the results of pre-existing models. Many environmental models are deterministic, but the uncertainty of their outcomes needs to be estimated when they are utilized for decision support. We review various methods that have been or could be applied to evaluate the uncertainty related to deterministic models' outputs. We cover expert judgement, model emulation, sensitivity analysis, temporal and spatial variability in the model outputs, the use of multiple models, and statistical approaches, and evaluate when these methods are appropriate and what must be taken into account when utilizing them. The best way to evaluate the uncertainty depends on the definitions of the source models and the amount and quality of information available to the modeller.
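One of the simpler routes mentioned, characterising the output uncertainty of a deterministic model, can be illustrated by Monte Carlo propagation: sample the uncertain inputs from assumed distributions, run the deterministic model once per sample, and summarise the spread of the outputs. The toy first-order decay model and the input distributions in the Python sketch below are assumptions chosen only for illustration.

```python
import numpy as np

def decay_model(c0, k, t=5.0):
    """Deterministic first-order decay: concentration remaining after time t."""
    return c0 * np.exp(-k * t)

rng = np.random.default_rng(42)
n = 10000
# uncertain inputs described by (assumed) probability distributions
c0 = rng.normal(loc=100.0, scale=10.0, size=n)            # initial concentration
k = rng.lognormal(mean=np.log(0.2), sigma=0.3, size=n)    # decay rate

outputs = decay_model(c0, k)       # one deterministic model run per input sample

# summarise the uncertainty induced in the model output
print("mean:", outputs.mean())
print("90% interval:", np.percentile(outputs, [5, 95]))
```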
Integrated environmental modeling (IEM) is inspired by modern environmental problems, decisions, and policies and enabled by transdisciplinary science and computer capabilities that allow the environment to be considered in a holistic way. The problems are characterized by the extent of the environmental system involved, dynamic and interdependent nature of stressors and their impacts, diversity of stakeholders, and integration of social, economic, and environmental considerations. IEM provides a science-based structure to develop and organize relevant knowledge and information and apply it to explain, explore, and predict the behavior of environmental systems in response to human and natural sources of stress. During the past several years a number of workshops were held that brought IEM practitioners together to share experiences and discuss future needs and directions. In this paper we organize and present the results of these discussions. IEM is presented as a landscape containing four interdependent elements: applications, science, technology, and community. The elements are described from the perspective of their role in the landscape, current practices, and challenges that must be addressed. Workshop participants envision a global scale IEM community that leverages modern technologies to streamline the movement of science-based knowledge from its sources in research, through its organization into databases and models, to its integration and application for problem solving purposes. Achieving this vision will require that the global community of IEM stakeholders transcend social and organizational boundaries and pursue greater levels of collaboration. Among the highest priorities for community action are the development of standards for publishing IEM data and models in forms suitable for automated discovery, access, and integration; education of the next generation of environmental stakeholders, with a focus on transdisciplinary research, development, and decision making; and providing a web-based platform for community interactions (e.g., continuous virtual workshops). ► A roadmap for the future of integrated environmental modeling (IEM) is presented. ► IEM landscape consists of applications, science, technology, and community elements. ► IEM current practices, issues, and challenges are described. ► A call for science and technology standards established by the global IEM community.
The GIS software sector has developed rapidly over the last ten years. Open Source GIS applications are gaining relevant market shares in academia, business, and public administration. In this paper, we illustrate the history and features of a key Open Source GIS, the Geographical Resources Analysis Support System (GRASS). GRASS has been under development for more than 28 years, has strong ties to academia, and its review mechanisms have led to the integration of well-tested and documented algorithms into a joint GIS suite which has been used regularly for environmental modelling. The development is community-based, with developers distributed globally. Through the use of an online source code repository, mailing lists and a Wiki, users and developers communicate in order to review existing code and develop new methods. In this paper, we provide a functionality overview of the more than 400 modules available in the latest stable GRASS software release. This new release runs natively on common operating systems (MS Windows, GNU/Linux, Mac OS X), giving basic and advanced functionality to casual and expert users. In the second part, we review selected publications with a focus on environmental modelling to illustrate the wealth of use cases for this open and free GIS.
Environmental policies in Europe have successfully eliminated the most visible and immediate harmful effects of air pollution in the last decades. However, there is ample and robust scientific evidence that even at present rates Europe’s emissions to the atmosphere pose a significant threat to human health, ecosystems and the global climate, though in a less visible and immediate way. As many of the ‘low hanging fruits’ have been harvested by now, further action will place higher demands on economic resources, especially at a time when resources are strained by an economic crisis. In addition, interactions and interdependencies of the various measures could even lead to counter-productive outcomes of strategies if they are ignored. Integrated assessment models, such as the GAINS (Greenhouse gas – Air pollution Interactions and Synergies) model, have been developed to identify portfolios of measures that improve air quality and reduce greenhouse gas emissions at least cost. Such models bring together scientific knowledge and quality-controlled data on future socio-economic driving forces of emissions, on the technical and economic features of the available emission control options, on the chemical transformation and dispersion of pollutants in the atmosphere, and the resulting impacts on human health and the environment. The GAINS model and its predecessor have been used to inform the key negotiations on air pollution control agreements in Europe during the last two decades. This paper describes the methodological approach of the GAINS model and its components. It presents a recent policy analysis that explores the likely future development of emissions and air quality in Europe in the absence of further policy measures, and assesses the potential and costs for further environmental improvements. To inform the forthcoming negotiations on the revision of the Gothenburg Protocol of the Convention on Long-range Transboundary Air Pollution, the paper discusses the implications of alternative formulations of environmental policy targets on a cost-effective allocation of further mitigation measures. ► An integrated assessment model for air pollution control has been applied in European negotiations. ► The model identifies the least-cost emission control measures that achieve air quality targets. ► Air quality will improve up to 2020, but threats to human health and environment will persist. ► Cost-effective measures are available to further protect living conditions and the environment.
Mathematical modelers from different disciplines and regulatory agencies worldwide agree on the importance of a careful sensitivity analysis (SA) of model-based inference. The most popular SA practice seen in the literature is that of 'one-factor-at-a-time' (OAT). This consists of analyzing the effect of varying one model input factor at a time while keeping all others fixed. While the shortcomings of OAT are known from the statistical literature, its widespread use among modelers raises concern about the quality of the associated sensitivity analyses. The present paper introduces a novel geometric proof of the inefficiency of OAT, with the purpose of providing the modeling community with a convincing and possibly definitive argument against OAT. Alternatives to OAT are indicated which are based on statistical theory, drawing from experimental design, regression analysis and sensitivity analysis proper.
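One way to appreciate the geometric argument is to note that an OAT design which moves one factor at a time away from a central baseline never leaves the hypersphere inscribed in the unit hypercube of the input space, and the fraction of the hypercube volume occupied by that hypersphere collapses as the number of factors grows. The short Python calculation below computes this volume fraction; it illustrates the flavour of the argument rather than reproducing the paper's proof.

```python
import math

def inscribed_sphere_volume_fraction(k):
    """Volume of the radius-1/2 hypersphere inscribed in the unit hypercube,
    expressed as a fraction of the hypercube volume (which is 1), for k factors."""
    return math.pi ** (k / 2) * 0.5 ** k / math.gamma(k / 2 + 1)

for k in (2, 3, 5, 10, 20):
    print(k, inscribed_sphere_volume_fraction(k))
# roughly 0.785 for k = 2 but only about 0.0025 for k = 10:
# an OAT design leaves almost all of a high-dimensional input space unexplored
```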
Landslide susceptibility assessment of the Uttarakhand area of India has been carried out by applying five machine learning methods, namely Support Vector Machines (SVM), Logistic Regression (LR), Fisher's Linear Discriminant Analysis (FLDA), Bayesian Network (BN), and Naïve Bayes (NB). Performance of these methods has been evaluated using the ROC curve and statistical index-based methods. Analysis and comparison of the results show that all five landslide models performed well for landslide susceptibility assessment (AUC = 0.910–0.950). However, the SVM model (AUC = 0.950) has the best performance in comparison to the other landslide models, followed by the LR model (AUC = 0.922), the FLDA model (AUC = 0.921), the BN model (AUC = 0.915), and the NB model (AUC = 0.910).
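The form of this AUC comparison (though not the reported numbers) can be reproduced with standard tooling; the Python sketch below trains three of the five classifier types on a synthetic landslide/non-landslide dataset and scores each with ROC AUC. The data, feature count and hyperparameters are placeholders, not the Uttarakhand database.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import roc_auc_score

# synthetic stand-in for landslide conditioning factors (slope, aspect, lithology, ...)
X, y = make_classification(n_samples=2000, n_features=10, n_informative=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "SVM": SVC(probability=True, random_state=0),
    "LR": LogisticRegression(max_iter=1000),
    "NB": GaussianNB(),
}
for name, clf in models.items():
    clf.fit(X_train, y_train)
    scores = clf.predict_proba(X_test)[:, 1]     # susceptibility-like probability per cell
    print(name, "AUC =", round(roc_auc_score(y_test, scores), 3))
```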
Spatially continuous data of environmental variables are often required for environmental sciences and management. However, information for environmental variables is usually collected by point sampling, particularly in mountainous regions and deep ocean areas. Thus, methods generating such spatially continuous data by using point samples become essential tools. Spatial interpolation methods (SIMs) are, however, often data-specific or even variable-specific. Many factors affect the predictive performance of the methods, and previous studies have shown that their effects are not consistent. Hence it is difficult to select an appropriate method for a given dataset. This review aims to provide guidelines and suggestions regarding the application of SIMs to environmental data by comparing the features of the commonly applied methods, which fall into three categories, namely: non-geostatistical interpolation methods, geostatistical interpolation methods and combined methods. Factors affecting the performance, including sampling design, sample spatial distribution, data quality, correlation between primary and secondary variables, and interaction among factors, are discussed. A total of 25 commonly applied methods are then classified based on their features to provide an overview of the relationships among them. These features are quantified and then clustered to show similarities among these 25 methods. An easy-to-use decision tree for selecting an appropriate method from these 25 methods is developed based on data availability, data nature, expected estimation, and features of the method. Finally, a list of software packages for spatial interpolation is provided.
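As a concrete instance of the non-geostatistical category reviewed here, the Python sketch below implements plain inverse distance weighting (IDW) from scattered point samples to a regular grid. The sample coordinates, values, power parameter and grid extent are illustrative assumptions.

```python
import numpy as np

def idw(xy_samples, values, xy_targets, power=2.0):
    """Inverse distance weighted interpolation from point samples to target locations."""
    xy_samples = np.asarray(xy_samples, dtype=float)
    values = np.asarray(values, dtype=float)
    xy_targets = np.asarray(xy_targets, dtype=float)
    # pairwise distances between every target location and every sample point
    d = np.sqrt(((xy_targets[:, None, :] - xy_samples[None, :, :]) ** 2).sum(axis=2))
    d = np.maximum(d, 1e-12)          # avoid division by zero exactly at a sample point
    w = 1.0 / d ** power              # closer samples receive larger weights
    return (w * values).sum(axis=1) / w.sum(axis=1)

# five point samples of some environmental variable over a 10 x 10 study area
samples = [(1, 1), (8, 2), (5, 5), (2, 8), (9, 9)]
obs = [3.0, 7.5, 5.2, 4.1, 8.8]

# interpolate onto a 50 x 50 regular grid
gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
grid = idw(samples, obs, np.column_stack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
print(grid.shape, float(grid.min()), float(grid.max()))
```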
Data collection for landslide susceptibility modeling is often prohibitively difficult and expensive. This is one reason why, for quite some time, landslides have been described and modelled on the basis of spatially distributed values of landslide-related attributes. This paper presents a landslide susceptibility analysis in the Klang Valley area, Malaysia, using a back-propagation artificial neural network model. A landslide inventory map with a total of 398 landslide locations was constructed using data from various sources. Of the 398 landslide locations, 318 (80%), recorded before the year 2004, were used for training the neural network model, and the remaining 80 (20%) locations (post-2004 events) were used for accuracy assessment. Topographical and geological data and satellite images were collected, processed, and constructed into a spatial database using GIS and image processing. Eleven factors related to landslide occurrence were selected: slope angle, slope aspect, curvature, altitude, distance to roads, distance to rivers, lithology, distance to faults, soil type, landcover and the normalized difference vegetation index value. An artificial neural network method was applied to calculate the weight (relative importance) of each factor for landslide occurrence. Each thematic layer's weight was determined by the back-propagation training method, and landslide susceptibility indices (LSI) were calculated using the trained back-propagation weights. To assess the factor effects, the weights were calculated three times: first using all 11 factors, then recalculating after removal of the 4 factors with the smallest weights, and thirdly after removal of the 3 next least influential factors. The effect of the weights on landslide susceptibility was verified using the landslide location data. The results revealed that all factors had relatively positive effects on the landslide susceptibility maps in the study. The validation results showed sufficient agreement between the computed susceptibility maps and the existing data on landslide areas. The distribution of landslide susceptibility zones derived from the ANN shows trends similar to those obtained by the same authors using GIS-based susceptibility procedures (the frequency ratio and logistic regression methods), and indicates that the ANN results are better than those of the earlier methods. Among the three cases, the best accuracy (94%) was obtained with the 7-factor weighting, whereas the 11-factor weighting showed the worst accuracy (91%).
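The workflow described above (training on pre-2004 events, scoring every cell with a susceptibility index, validating against later events) can be sketched with a generic back-propagation multilayer perceptron. The Python example below uses scikit-learn on synthetic factor data rather than the Klang Valley database, so the eleven features, the network size and the resulting accuracy are placeholders only.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# synthetic stand-in for the 11 conditioning factors (slope angle, aspect, curvature, ...)
X, y = make_classification(n_samples=1000, n_features=11, n_informative=7, random_state=1)

# 80/20 split, mirroring the pre-2004 training / post-2004 validation partition
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)

scaler = StandardScaler().fit(X_train)      # back-propagation trains more reliably on scaled inputs
mlp = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=1)
mlp.fit(scaler.transform(X_train), y_train)

# landslide susceptibility index: predicted probability of the landslide class
lsi = mlp.predict_proba(scaler.transform(X_test))[:, 1]
print("validation accuracy:", accuracy_score(y_test, (lsi > 0.5).astype(int)))
```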