Google Earth Engine is a cloud-based platform for planetary-scale geospatial analysis that brings Google's massive computational capabilities to bear on a variety of high-impact societal issues including deforestation, drought, disaster, disease, food security, water management, climate monitoring and environmental protection. It is unique in the field as an integrated platform designed to empower not only traditional remote sensing scientists, but also a much wider audience that lacks the technical capacity needed to utilize traditional supercomputers or large-scale commodity cloud computing resources. (C) 2017 The Author(s). Published by Elsevier Inc.
Landsat 8, a NASA and USGS collaboration, acquires global moderate-resolution measurements of the Earth's terrestrial and polar regions in the visible, near-infrared, short wave, and thermal infrared. Landsat 8 extends the remarkable 40-year Landsat record and has enhanced capabilities including new spectral bands in the blue and cirrus cloud-detection portions of the spectrum, two thermal bands, improved sensor signal-to-noise performance and associated improvements in radiometric resolution, and an improved duty cycle that allows collection of a significantly greater number of images per day. This paper introduces the current (2012-2017) Landsat Science Team's efforts to establish an initial understanding of Landsat 8 capabilities and the steps ahead in support of priorities identified by the team. Preliminary evaluation of Landsat 8 capabilities and identification of new science and applications opportunities are described with respect to calibration and radiometric characterization; surface reflectance; surface albedo; surface temperature, evapotranspiration and drought; agriculture; land cover, condition, disturbance and change; fresh and coastal water; and snow and ice. Insights into the development of derived 'higher-level' Landsat products are provided in recognition of the growing need for consistently processed, moderate spatial resolution, large area, long-term terrestrial data records for resource management and for climate and global change studies. The paper concludes with future prospects, emphasizing the opportunities for land imaging constellations by combining Landsat data with data collected from other international sensing systems, and consideration of successor Landsat mission requirements. (C) 2014 The Authors. Published by Elsevier Inc.
Leaf area index (LAI) data collected in a needle-leaf forest site near Ruokolahti, Finland, during a field campaign on June 14-21, 2000, were used to validate the Moderate Resolution Imaging Spectroradiometer (MODIS) LAI algorithm. The field LAI data were first related to 30-m resolution Enhanced Thematic Mapper Plus (ETM+) images using empirical methods to create a high-resolution LAI map. The analysis of empirical approaches indicates that preliminary segmentation of the image, followed by empirical modeling with the resulting patches, was an effective approach to developing an LAI validation surface. Comparison of the aggregated high-resolution LAI map and corresponding MODIS LAI retrievals suggests satisfactory behavior of the MODIS LAI algorithm, although variation in the MODIS LAI product is higher than expected. The MODIS algorithm, adjusted to high resolution, generally overestimates the LAI due to the influence of the understory vegetation. This indicates the need for improvements in the algorithm. An improved correlation between field measurements and the reduced simple ratio (RSR) suggests that the shortwave infrared (SWIR) band may provide valuable information for needle-leaf forests. (C) 2004 Elsevier Inc. All rights reserved.
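The reduced simple ratio mentioned above scales the conventional NIR/red simple ratio by a SWIR term; a minimal sketch of the commonly cited form follows (the band values and the SWIR scaling extremes are illustrative assumptions, not values from the study):

```python
import numpy as np

def reduced_simple_ratio(nir, red, swir, swir_min, swir_max):
    """Reduced simple ratio (RSR): the simple ratio NIR/red, scaled down
    as SWIR reflectance rises from its closed-canopy minimum toward its
    open-canopy maximum."""
    scale = (swir_max - swir) / (swir_max - swir_min)
    return (nir / red) * scale

# toy reflectances for a dense needle-leaf pixel
print(round(reduced_simple_ratio(0.30, 0.03, 0.10, 0.05, 0.25), 2))
```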
The remote sensing science and application communities have developed increasingly reliable, consistent, and robust approaches for capturing land dynamics to meet a range of information needs. Statistically robust and transparent approaches for assessing accuracy and estimating area of change are critical to ensure the integrity of land change information. We provide practitioners with a set of “good practice” recommendations for designing and implementing an accuracy assessment of a change map and estimating area based on the reference sample data. The good practice recommendations address the three major components: sampling design, response design and analysis. The primary good practice recommendations for assessing accuracy and estimating area are: (i) implement a probability sampling design that is chosen to achieve the priority objectives of accuracy and area estimation while also satisfying practical constraints such as cost and available sources of reference data; (ii) implement a response design protocol that is based on reference data sources that provide sufficient spatial and temporal representation to accurately label each unit in the sample (i.e., the “reference classification” will be considerably more accurate than the map classification being evaluated); (iii) implement an analysis that is consistent with the sampling design and response design protocols; (iv) summarize the accuracy assessment by reporting the estimated error matrix in terms of proportion of area and estimates of overall accuracy, user's accuracy (or commission error), and producer's accuracy (or omission error); (v) estimate area of classes (e.g., types of change such as wetland loss or types of persistence such as stable forest) based on the reference classification of the sample units; (vi) quantify uncertainty by reporting confidence intervals for accuracy and area parameters; (vii) evaluate variability and potential error in the reference classification; and (viii) document deviations from good practice that may substantially affect the results. An example application is provided to illustrate the recommended process.
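The core computations behind recommendations (iv) and (v) can be sketched compactly: express the error matrix in proportions of area, then derive overall, user's, and producer's accuracies and reference-based area estimates. The sample counts and stratum weights below are hypothetical illustrations, not data from the paper:

```python
import numpy as np

# Hypothetical sample counts n[i, j]: map class i (rows) vs. reference
# class j (columns), for two classes: change (0) and no-change (1).
n = np.array([[66, 0],
              [5, 153]], dtype=float)
W = np.array([0.02, 0.98])   # assumed map-class area proportions (strata weights)

# Error matrix in terms of proportion of area (recommendation iv)
p = W[:, None] * n / n.sum(axis=1, keepdims=True)

overall = np.trace(p)                    # overall accuracy
users = np.diag(p) / p.sum(axis=1)       # user's accuracy (1 - commission error)
producers = np.diag(p) / p.sum(axis=0)   # producer's accuracy (1 - omission error)
area = p.sum(axis=0)                     # area proportions from the reference
                                         # classification (recommendation v)
print(overall, users, producers, area)
```

Confidence intervals for these estimators (recommendation vi) follow from standard stratified-sampling variance formulas, which are omitted here for brevity.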
Land surface temperature (LST) is one of the key parameters in the physics of land surface processes from local through global scales. The importance of LST is being increasingly recognized and there is a strong interest in developing methodologies to measure LST from space. However, retrieving LST is still a challenging task since the LST retrieval problem is ill-posed. This paper reviews the current status of selected remote sensing algorithms for estimating LST from thermal infrared (TIR) data. A brief theoretical background of the subject is presented along with a survey of the algorithms employed for obtaining LST from space-based TIR measurements. The discussion focuses on TIR data acquired from polar-orbiting satellites because of their widespread use, global applicability and higher spatial resolution compared to geostationary satellites. The theoretical framework and methodologies used to derive the LST from the data are reviewed followed by the methodologies for validating satellite-derived LST. Directions for future research to improve the accuracy of satellite-derived LST are then suggested. (c) 2012 Elsevier Inc. All rights reserved.
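Many of the polar-orbiter LST algorithms reviewed in that paper are split-window methods, which exploit the differential atmospheric absorption in two adjacent thermal bands near 11 and 12 μm. A generic sketch of that functional form follows; the coefficients are illustrative placeholders, not values from any published algorithm:

```python
def split_window_lst(t11, t12, emis_mean, emis_diff, c):
    """Generic split-window form: LST from two TIR brightness temperatures
    (~11 and ~12 um), mean surface emissivity, and emissivity difference.
    In practice the coefficients c0..c4 are fitted from radiative transfer
    simulations; the values passed below are purely illustrative."""
    c0, c1, c2, c3, c4 = c
    return (t11
            + c0
            + c1 * (t11 - t12)
            + c2 * (t11 - t12) ** 2
            + c3 * (1.0 - emis_mean)
            + c4 * emis_diff)

# illustrative call with dummy coefficients
lst = split_window_lst(295.0, 293.0, 0.98, 0.005, (1.0, 2.0, 0.5, 50.0, -100.0))
print(lst)
```

The ill-posedness noted in the abstract shows up here directly: emissivity must be known or estimated before the two brightness temperatures can be converted to a single LST.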
Identification of clouds, cloud shadows and snow in optical images is often a necessary step toward their use. Recently a new program (named Fmask) designed to accomplish these tasks was introduced for use with images from Landsats 4-7 (Zhu & Woodcock, 2012). This paper presents the following: (1) improvements in the Fmask algorithm for Landsats 4-7; (2) a new version for use with Landsat 8 that takes advantage of the new cirrus band; and (3) a prototype algorithm for Sentinel-2 images. Though Sentinel-2 images do not have a thermal band to help with cloud detection, the new cirrus band is found to be useful for detecting clouds, especially thin cirrus clouds. By adding a new cirrus cloud probability and removing the steps that use the thermal band, the Sentinel-2 scenario achieves significantly better results than the Landsats 4-7 scenario for all 7 images tested. For Landsat 8, almost all the Fmask algorithm components are the same as for Landsats 4-7, except that a new cirrus cloud probability is calculated using the new cirrus band, which improves detection of thin cirrus clouds. Landsat 8 results are better than the Sentinel-2 scenario, with 6 out of 7 test images showing higher accuracies. (C) 2014 Elsevier Inc. All rights reserved.
The surface reflectance, i.e., satellite-derived top of atmosphere (TOA) reflectance corrected for the temporally, spatially and spectrally varying scattering and absorbing effects of atmospheric gases and aerosols, is needed to monitor the land surface reliably. For this reason, surface reflectance, and not TOA reflectance, is used to generate the great majority of global land products, for example, from the Moderate Resolution Imaging Spectroradiometer (MODIS) and Visible Infrared Imaging Radiometer Suite (VIIRS) sensors. Even if atmospheric effects are minimized by sensor design, they are still challenging to correct. In particular, the strong impact of aerosols in the visible and near infrared spectral range can be difficult to correct, because aerosols can be highly discrete in space and time (e.g., smoke plumes) and because of their complex scattering and absorbing properties, which vary spectrally and with aerosol size, shape, chemistry and density. This paper presents the Landsat 8 Operational Land Imager (OLI) atmospheric correction algorithm that has been developed using the Second Simulation of the Satellite Signal in the Solar Spectrum Vectorial (6SV) model, refined to take advantage of the narrow OLI spectral bands (compared to Thematic Mapper/Enhanced Thematic Mapper (TM/ETM+)), improved radiometric resolution and signal-to-noise. In addition, the algorithm uses the new OLI Coastal aerosol band (0.433-0.450 μm), which is particularly helpful for retrieving aerosol properties, as it covers shorter wavelengths than the conventional Landsat TM and ETM+ blue bands. A cloud and cloud shadow mask has also been developed using the "cirrus" band (1.360-1.390 μm) available on OLI, and the thermal infrared bands from the Thermal Infrared Sensor (TIRS) instrument.
The performance of the OLI surface reflectance product is analyzed over the Aerosol Robotic Network (AERONET) sites using accurate atmospheric correction (based on in situ measurements of the atmospheric properties), by comparison with the MODIS Bidirectional Reflectance Distribution Function (BRDF) adjusted surface reflectance product, and by comparison of OLI-derived broadband albedo with United States Surface Radiation Budget Network (US SURFRAD) measurements. The results presented clearly show an improvement of the Landsat 8 surface reflectance product over the ad hoc Landsat 5/7 LEDAPS product. Published by Elsevier Inc.
A new method called Fmask (Function of mask) for cloud and cloud shadow detection in Landsat imagery is provided. Landsat Top of Atmosphere (TOA) reflectance and Brightness Temperature (BT) are used as inputs. Fmask first uses rules based on cloud physical properties to separate Potential Cloud Pixels (PCPs) and clear-sky pixels. Next, a normalized temperature probability, spectral variability probability, and brightness probability are combined to produce a probability mask for clouds over land and water separately. Then, the PCPs and the cloud probability mask are used together to derive the potential cloud layer. The darkening effect of the cloud shadows in the Near Infrared (NIR) Band is used to generate a potential shadow layer by applying the flood-fill transformation. Subsequently, 3D cloud objects are determined via segmentation of the potential cloud layer and assumption of a constant temperature lapse rate within each cloud object. The view angle of the satellite sensor and the illuminating angle are used to predict possible cloud shadow locations and select the one that has the maximum similarity with the potential cloud shadow mask. If the scene has snow, a snow mask is also produced. For a globally distributed set of reference data, the average Fmask overall cloud accuracy is as high as 96.4%. The goal is development of a cloud and cloud shadow detection algorithm suitable for routine usage with Landsat images. (C) 2011 Elsevier Inc. All rights reserved.
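The heart of the Fmask land-cloud probability is the multiplication of per-pixel probabilities (temperature, spectral variability, brightness). A deliberately simplified toy sketch of that idea follows; the published algorithm derives its normalizations from scene statistics, so the thresholds and band combinations below are illustrative assumptions only:

```python
import numpy as np

def land_cloud_probability(bt, ndvi, ndsi, t_low=272.15, t_high=300.0):
    """Toy sketch of the Fmask idea of multiplying per-pixel probabilities.
    bt: brightness temperature (K); ndvi/ndsi: spectral indices.
    The temperature normalization bounds are illustrative placeholders."""
    # colder pixels are more cloud-like
    temp_prob = np.clip((t_high - bt) / (t_high - t_low), 0.0, 1.0)
    # spectrally "flat" pixels (low |NDVI| and |NDSI|) are more cloud-like
    variability_prob = 1.0 - np.maximum(np.abs(ndvi), np.abs(ndsi))
    return temp_prob * variability_prob

bt = np.array([298.0, 275.0])    # warm vegetated pixel vs. cold flat pixel
ndvi = np.array([0.6, 0.05])
ndsi = np.array([-0.2, 0.02])
print(land_cloud_probability(bt, ndvi, ndsi))
```

In the full algorithm this probability is thresholded together with the Potential Cloud Pixel test, and shadow matching then operates on the resulting 3D cloud objects.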
MODIS global evapotranspiration (ET) products by Mu et al. [Mu, Q., Heinsch, F. A., Zhao, M., & Running, S. W. (2007). Development of a global evapotranspiration algorithm based on MODIS and global meteorology data. Remote Sensing of Environment, 111, 519-536. doi: 10.1016/j.rse.2007.04.015] are the first regular 1-km² land surface ET dataset for the 109.03 million km² of global vegetated land areas at an 8-day interval. In this study, we have further improved the ET algorithm of Mu et al. (2007a, hereafter called the old algorithm) by 1) simplifying the calculation of vegetation cover fraction; 2) calculating ET as the sum of daytime and nighttime components; 3) adding soil heat flux calculation; 4) improving estimates of stomatal conductance, aerodynamic resistance and boundary layer resistance; 5) separating dry canopy surface from the wet; and 6) dividing soil surface into saturated wet surface and moist surface. We compared the improved algorithm with the old one both globally and locally at 46 eddy flux towers. The global annual total ET over the vegetated land surface is 62.8 × 10³ km³, which agrees very well with other reported estimates of 65.5 × 10³ km³ over the terrestrial land surface and is much higher than the 45.8 × 10³ km³ estimated with the old algorithm. For ET evaluation at eddy flux towers, the improved algorithm reduces the mean absolute error (MAE) of daily ET from 0.39 mm day⁻¹ to 0.33 mm day⁻¹ when driven by tower meteorological data, and from 0.40 mm day⁻¹ to 0.31 mm day⁻¹ when driven by GMAO data, a global meteorological reanalysis dataset. MAE values for the improved ET algorithm are 24.6% and 24.1% of the ET measured from towers, within the range (10-30%) of the reported uncertainties in ET measurements, implying an enhanced accuracy of the improved algorithm. Compared to the old algorithm, the improved algorithm increases the skill score of tower-driven ET estimates from 0.50 to 0.55, and from 0.46 to 0.53 for GMAO-driven ET.
Based on these results, the improved ET algorithm has a better performance in generating global ET data products, providing critical information on global terrestrial water and energy cycles and environmental changes.
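The tower-based evaluation above rests on simple error statistics; a minimal sketch of the MAE computation, including MAE expressed as a percentage of measured ET (the 24.6%/24.1% figures quoted above), follows. The ET values are hypothetical, not tower data from the study:

```python
import numpy as np

def mae(pred, obs):
    """Mean absolute error of daily ET (mm/day), as used to compare
    modelled ET against eddy-flux-tower measurements."""
    return np.mean(np.abs(np.asarray(pred) - np.asarray(obs)))

obs = np.array([2.0, 3.0, 4.0, 1.0])     # hypothetical tower ET (mm/day)
pred = np.array([2.3, 2.8, 4.4, 0.9])    # hypothetical modelled ET (mm/day)
err = mae(pred, obs)
print(err, 100 * err / obs.mean())       # MAE, and MAE as % of measured ET
```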
Global Monitoring for Environment and Security (GMES) is a joint initiative of the European Commission (EC) and the European Space Agency (ESA), designed to establish a European capacity for the provision and use of operational monitoring information for environment and security applications. ESA's role in GMES is to provide the definition and the development of the space- and ground-related system elements. The GMES Sentinel-2 mission provides continuity to services relying on multi-spectral high-resolution optical observations over global terrestrial surfaces. The key mission objectives for Sentinel-2 are: (1) to provide systematic global acquisitions of high-resolution multi-spectral imagery with a high revisit frequency, (2) to provide enhanced continuity of the multi-spectral imagery provided by the SPOT (Satellite Pour l'Observation de la Terre) series of satellites, and (3) to provide observations for the next generation of operational products such as land-cover maps, land change detection maps, and geophysical variables. Consequently, Sentinel-2 will directly contribute to the Land Monitoring, Emergency Response, and Security services. The corresponding user requirements have driven the design toward a dependable multi-spectral Earth-observation system featuring the Multi Spectral Instrument (MSI) with 13 spectral bands spanning from the visible and the near infrared to the short wave infrared. The spatial resolution varies from 10 m to 60 m depending on the spectral band, with a 290 km field of view. This unique combination of high spatial resolution, wide field of view and spectral coverage will represent a major step forward compared to current multi-spectral missions. The mission foresees a series of satellites, each having a 7.25-year lifetime, over a 15-year period starting with the launch of Sentinel-2A foreseen in 2013.
During full operations, two identical satellites will be maintained in the same orbit with a phase delay of 180 degrees, providing a revisit time of five days at the equator. This paper provides an overview of the GMES Sentinel-2 mission, including a technical system concept overview, image quality, Level-1 data processing and operational applications. (C) 2012 Elsevier Inc. All rights reserved.
A new algorithm for Continuous Change Detection and Classification (CCDC) of land cover using all available Landsat data is developed. It is capable of detecting many kinds of land cover change continuously as new images are collected and providing land cover maps for any given time. A two-step cloud, cloud shadow, and snow masking algorithm is used for eliminating "noisy" observations. A time series model that has components of seasonality, trend, and break estimates surface reflectance and brightness temperature. The time series model is updated dynamically with newly acquired observations. Due to the differences in spectral response for various kinds of land cover change, the CCDC algorithm uses a threshold derived from all seven Landsat bands. When the difference between observed and predicted images exceeds a threshold three consecutive times, a pixel is identified as land surface change. Land cover classification is done after change detection. Coefficients from the time series models and the Root Mean Square Error (RMSE) from model estimation are used as input to the Random Forest Classifier (RFC). We applied the CCDC algorithm to one Landsat scene in New England (WRS Path 12, Row 31). All available (a total of 519) Landsat images acquired between 1982 and 2011 were used. A stratified random sample design was used for assessing the change detection accuracy, with 250 pixels selected within areas of persistent land cover and 250 pixels selected within areas of change identified by the CCDC algorithm. The accuracy assessment shows that CCDC results were accurate for detecting land surface change, with a producer's accuracy of 98% and a user's accuracy of 86% in the spatial domain, and a temporal accuracy of 80%. Land cover reference data were used as the basis for assessing the accuracy of the land cover classification. The land cover map with 16 categories resulting from the CCDC algorithm had an overall accuracy of 90%. (C) 2014 Elsevier Inc. All rights reserved.
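The seasonality-plus-trend time series model at the core of CCDC can be sketched as an ordinary least-squares fit of harmonic and linear terms; the paper's full model and its change test are more elaborate, so the single-harmonic form and the synthetic reflectance series below are a minimal illustration only:

```python
import numpy as np

def fit_harmonic_trend(t, y, period=365.25):
    """Least-squares fit of a simple seasonality + trend model,
        y(t) ~ a0 + a1*cos(2*pi*t/T) + b1*sin(2*pi*t/T) + c1*t,
    in the spirit of the CCDC time-series model. Returns the fitted
    coefficients and the RMSE (which CCDC feeds to its classifier)."""
    w = 2.0 * np.pi * t / period
    X = np.column_stack([np.ones_like(t), np.cos(w), np.sin(w), t])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    rmse = np.sqrt(np.mean((X @ coef - y) ** 2))
    return coef, rmse

# synthetic reflectance: mean 0.3, seasonal amplitude 0.05, slow trend
t = np.arange(0.0, 5 * 365.25, 16.0)            # ~16-day revisit over 5 years
y = 0.3 + 0.05 * np.cos(2 * np.pi * t / 365.25) + 1e-5 * t
coef, rmse = fit_harmonic_trend(t, y)
print(coef.round(4), rmse)
```

Change detection then compares new observations against the model prediction; a persistent departure (three consecutive exceedances, per the abstract) flags land surface change.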
Information related to land cover is immensely important to global change science. In the past decade, data sources and methodologies for creating global land cover maps from remote sensing have evolved rapidly. Here we describe the datasets and algorithms used to create the Collection 5 MODIS Global Land Cover Type product, which is substantially changed relative to Collection 4. In addition to using updated input data, the algorithm and ancillary datasets used to produce the product have been refined. Most importantly, the Collection 5 product is generated at 500-m spatial resolution, providing a four-fold increase in spatial resolution relative to the previous version. In addition, many components of the classification algorithm have been changed. The training site database has been revised, land surface temperature is now included as an input feature, and ancillary datasets used in post-processing of ensemble decision tree results have been updated. Further, methods used to correct classifier results for bias imposed by training data properties have been refined, techniques used to fuse ancillary data based on spatially varying prior probabilities have been revised, and a variety of methods have been developed to address limitations of the algorithm for the urban, wetland, and deciduous needleleaf classes. Finally, techniques used to stabilize classification results across years have been developed and implemented to reduce year-to-year variation in land cover labels not associated with land cover change. Results from a cross-validation analysis indicate that the overall accuracy of the product is about 75% correctly classified, but that the range in class-specific accuracies is large. Comparison of Collection 5 maps with Collection 4 results shows substantial differences arising from increased spatial resolution and changes in the input data and classification algorithm. (C) 2009 Elsevier Inc. All rights reserved.
The importance of characterizing, quantifying, and monitoring land cover, land use, and their changes has been widely recognized by global and environmental change studies. Since the early 1990s, three U.S. National Land Cover Database (NLCD) products (circa 1992, 2001, and 2006) have been released as free downloads for users. The NLCD 2006 also provides land cover change products between 2001 and 2006. To continue providing updated national land cover and change datasets, a new initiative in developing NLCD 2011 is currently underway. We present a new Comprehensive Change Detection Method (CCDM) designed as a key component for the development of NLCD 2011, along with the research results from two exemplar studies. The CCDM integrates spectral-based change detection algorithms, including a Multi-Index Integrated Change Analysis (MIICA) model and a novel change model called Zone, which extract change information from two Landsat image pairs. The MIICA model is the core module of the change detection strategy and uses four spectral indices (CV, RCVMAX, dNBR, and dNDVI) to obtain the changes that occurred between two image dates. The CCDM also includes a knowledge-based system, which uses critical information on historical and current land cover conditions and trends and the likelihood of land cover change, to combine the changes from MIICA and Zone. For NLCD 2011, the improved and enhanced change products obtained from the CCDM provide critical information on the location, magnitude, and direction of potential change areas and serve as a basis for further characterizing land cover changes for the nation. An accuracy assessment from the two study areas shows 100% agreement between the CCDM-mapped no-change class and the reference dataset, and 18% and 82% disagreement for the change class for WRS path/rows p22r39 and p33r33, respectively.
The strength of the CCDM is that the method is simple, easy to operate, widely applicable, and capable of capturing a variety of natural and anthropogenic disturbances potentially associated with land cover changes on different landscapes. (C) 2013 Elsevier Inc. All rights reserved.
Landsat occupies a unique position in the constellation of civilian earth observation satellites, with a long and rich scientific and applications heritage. With nearly 40 years of continuous observation - since launch of the first satellite in 1972 - the Landsat program has benefited from insightful technical specification, robust engineering, and the necessary infrastructure for data archive and dissemination. Chiefly, the spatial and spectral resolutions have proven of broad utility and have remained largely stable over the life of the program. The foresighted acquisition and maintenance of a global image archive has proven to be of unmatched value, providing a window into the past and fueling the monitoring and modeling of global land cover and ecological change. In this paper we discuss the evolution of the Landsat program as a global monitoring mission, highlighting in particular the recent change to an open (free) data policy. The new data policy is revolutionizing the use of Landsat data, spurring the creation of robust standard products and new science and applications approaches. Open data access also promotes increased international collaboration to meet the Earth observing needs of the 21st century. Crown Copyright (C) 2012 Published by Elsevier Inc. All rights reserved.
New and previously unimaginable Landsat applications have been fostered by a policy change in 2008 that made analysis-ready Landsat data free and open access. Since 1972, Landsat has been collecting images of the Earth, with the early years of the program constrained by onboard satellite and ground systems, as well as limitations across the range of required computing, networking, and storage capabilities. Rather than robust on-satellite storage for transmission via high-bandwidth downlink to a centralized storage and distribution facility, as with Landsat-8, a network of receiving stations was utilized: one operated by the U.S. government, the others operated by a community of International Cooperators (ICs). ICs paid a fee for the right to receive and distribute Landsat data, and over time more Landsat data was held outside the archive of the United States Geological Survey (USGS) than was held inside, much of it unique. Recognizing the critical value of these data, the USGS began a Landsat Global Archive Consolidation (LGAC) initiative in 2010 to bring these data into a single, universally accessible, centralized global archive, housed at the Earth Resources Observation and Science (EROS) Center in Sioux Falls, South Dakota. The primary LGAC goals are to inventory the data held by ICs, acquire the data, and ingest and apply standard ground station processing to generate an L1T analysis-ready product. As of January 1, 2015 there were 5,532,454 images in the USGS archive. LGAC has contributed approximately 3.2 million of those images, more than doubling the original USGS archive holdings. Moreover, an additional 2.3 million images have been identified to date through the LGAC initiative and are in the process of being added to the archive. The impact of LGAC is significant and, in terms of images in the collection, analogous to that of having had two additional Landsat-5 missions.
As a result of LGAC, there are regions of the globe that now have markedly improved Landsat data coverage, resulting in an enhanced capacity for mapping, monitoring change, and capturing historic conditions. Although future missions can be planned and implemented, the past cannot be revisited, underscoring the value and enhanced significance of historical Landsat data and the LGAC initiative. The aim of this paper is to report the current status of the global USGS Landsat archive, document the existing and anticipated contributions of LGAC to the archive, and characterize the current acquisitions of Landsat-7 and Landsat-8. Landsat-8 is adding data to the archive at an unprecedented rate, as nearly all terrestrial images are now collected. We also offer key lessons learned so far from the LGAC initiative, plus insights regarding other critical elements of the Landsat program looking forward, such as acquisition, continuity, temporal revisit, and the importance of continuing to operationalize the Landsat program. Crown Copyright (C) 2015 Published by Elsevier Inc. All rights reserved.
In this study we evaluate the skill of a new, merged soil moisture product (ECV_SM) that has been developed in the framework of the European Space Agency's Water Cycle Multi-mission Observation Strategy and Climate Change Initiative projects. The product combines in a synergistic way the soil moisture retrievals from four passive (SMMR, SSM/I, TMI, and AMSR-E) and two active (ERS AMI and ASCAT) coarse-resolution microwave sensors into a global data set spanning the period 1979-2010. The evaluation uses ground-based soil moisture observations from 596 sites of 28 historical and active monitoring networks worldwide. Besides providing conventional measures of agreement, we use the triple collocation technique to assess random errors in the data set. The average Spearman correlation coefficient between ECV_SM and all in-situ observations is 0.46 for the absolute values and 0.36 for the soil moisture anomalies, but differences between networks and time periods are very large. Unbiased root-mean-square differences and triple collocation errors show less variation between networks, with average values around 0.05 and 0.04 m³ m⁻³, respectively. The ECV_SM quality shows an upward trend over time, but a consistent decrease of all performance metrics is observed for the period 2007-2010. Comparing the skill of the merged product with the skill of the individual input products shows that the merged product performs similarly to or better than the individual input products, except with regard to the ASCAT product, compared to which the performance of ECV_SM is inferior. The cause of the latter is most likely a combination of the mismatch in sampling time between the satellite observations and in-situ measurements on the one hand, and the resampling and scaling strategy used to integrate the ASCAT product into ECV_SM on the other. The results of this study will be used to further improve the scaling and merging algorithms for future product updates. (C) 2014 Elsevier Inc. All rights reserved.
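The triple collocation technique used above estimates the random error variance of each of three collocated datasets, assuming a common underlying signal and mutually independent errors. A minimal sketch in covariance notation, demonstrated on synthetic soil moisture series (the noise levels are illustrative, not the study's results):

```python
import numpy as np

def triple_collocation_errors(x, y, z):
    """Covariance-notation triple collocation: for three collocated datasets
    with independent zero-mean errors and a common signal, each dataset's
    error variance equals its variance minus a cross-covariance ratio."""
    c = np.cov(np.vstack([x, y, z]))
    ex2 = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
    ey2 = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
    ez2 = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
    # clip tiny negative sampling artifacts before the square root
    return np.sqrt([max(ex2, 0.0), max(ey2, 0.0), max(ez2, 0.0)])

rng = np.random.default_rng(42)
truth = 0.2 + 0.05 * rng.standard_normal(20000)      # synthetic soil moisture
x = truth + 0.010 * rng.standard_normal(20000)       # low-noise product
y = truth + 0.020 * rng.standard_normal(20000)       # medium-noise product
z = truth + 0.040 * rng.standard_normal(20000)       # high-noise product
print(triple_collocation_errors(x, y, z).round(3))
```

Note that this basic form assumes the three datasets share the same dynamic range; in practice (as in the ECV_SM evaluation) rescaling between products is part of the procedure.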
In using traditional digital classification algorithms, a researcher typically encounters serious issues in identifying urban land cover classes from high-resolution data. A typical approach is to use spectral information alone, ignoring spatial information and groups of pixels that need to be considered together as an object. We used QuickBird image data over a central region in the city of Phoenix, Arizona to examine whether an object-based classifier can accurately identify urban classes. To test whether spectral information alone is sufficient for urban classification, we used spectra of the selected classes from randomly selected points to examine whether they can be effectively discriminated. The overall accuracy based on spectral information alone reached only about 63.33%. We employed five different classification procedures within the object-based paradigm, which separates spatially and spectrally similar pixels at different scales. The classifiers used to assign land covers to segmented objects include membership functions and the nearest neighbor classifier. The object-based classifier achieved a high overall accuracy (90.40%), whereas the most commonly used decision rule, the maximum likelihood classifier, produced a lower overall accuracy (67.60%). This study demonstrates that the object-based classifier is a significantly better approach than classical per-pixel classifiers. Further, this study reviews the application of different parameters for segmentation and classification, combined use of composite and original bands, selection of different scale levels, and choice of classifiers. Strengths and weaknesses of the object-based prototype are presented, and we provide suggestions to avoid or minimize uncertainties and limitations associated with the approach. (C) 2011 Elsevier Inc. All rights reserved.
The knowledge of impervious surfaces, especially the magnitude, location, geometry, and spatial pattern of impervious surfaces and the perviousness-imperviousness ratio, is significant to a range of issues and themes in environmental science central to global environmental change and human-environment interactions. Impervious surface data are important for urban planning and environmental and resources management. Therefore, remote sensing of impervious surfaces in urban areas has recently attracted unprecedented attention. In this paper, various digital remote sensing approaches to extract and estimate impervious surfaces are examined. Discussions focus on the mapping requirements of urban impervious surfaces. In particular, the impacts of spatial, geometric, spectral, and temporal resolutions on estimation and mapping are addressed, as is the selection of an appropriate estimation method based on remotely sensed data characteristics. This literature review suggests that major approaches over the past decade include pixel-based (image classification, regression, etc.), sub-pixel based (linear spectral unmixing, imperviousness as the complement of vegetation fraction, etc.), object-oriented algorithms, and artificial neural networks. Techniques such as data/image fusion, expert systems, and contextual classification methods have also been explored. The majority of research efforts have been made for mapping urban landscapes at various scales and on the spatial resolution requirements of such mapping. In contrast, there is less interest in the spectral and geometric properties of impervious surfaces. More research is also needed to better understand temporal resolution, change and evolution of impervious surfaces over time, and temporal requirements for urban mapping. It is suggested that the models, methods, and image analysis algorithms in urban remote sensing have been largely developed for imagery of medium resolution (10-100 m).
The advent of high spatial resolution satellite images, spaceborne hyperspectral images, and LiDAR data is stimulating new research ideas and driving future research trends with new models and algorithms. (C) 2011 Elsevier Inc. All rights reserved.
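Among the sub-pixel approaches listed above, "imperviousness as the complement of vegetation fraction" is simple enough to sketch: scale NDVI between bare-surface and full-vegetation end-members, convert it to a vegetation fraction, and take the complement over urban pixels. The NDVI end-members and the squared scaling below are illustrative assumptions, not values from any specific study:

```python
import numpy as np

def impervious_fraction(ndvi, ndvi_soil=0.05, ndvi_veg=0.70):
    """'Imperviousness as the complement of vegetation fraction': scale
    NDVI to a vegetation fraction fv, then return 1 - fv for urban pixels.
    End-member NDVI values are illustrative and scene-dependent."""
    fv = np.clip((ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil), 0.0, 1.0) ** 2
    return 1.0 - fv

print(impervious_fraction(np.array([0.05, 0.375, 0.70])).round(2))
```

Note the implicit assumption that non-vegetated urban area is impervious; bare soil inside the scene violates it, which is one reason the review calls for more attention to the spectral properties of impervious surfaces.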
Classifying surface cover types and analyzing changes are among the most common applications of remote sensing. One of the most basic classification tasks is to distinguish water bodies from dry land surfaces. Landsat imagery is among the most widely used sources of data in remote sensing of water resources, and although several techniques of surface water extraction using Landsat data are described in the literature, their application is constrained by low accuracy in various situations. Moreover, with techniques such as single-band thresholding and two-band indices, identifying an appropriate threshold yielding the highest possible accuracy is a challenging and time-consuming task, as threshold values vary with location and time of image acquisition. The purpose of this study was therefore to devise an index that consistently improves water extraction accuracy in the presence of various sorts of environmental noise and at the same time offers a stable threshold value. Thus we introduce a new Automated Water Extraction Index (AWEI) that improves classification accuracy in areas that include shadow and dark surfaces, which other classification methods often fail to classify correctly. We tested the accuracy and robustness of the new method using Landsat 5 TM images of several water bodies in Denmark, Switzerland, Ethiopia, South Africa and New Zealand. The Kappa coefficient and omission and commission errors were calculated to evaluate accuracies. The performance of the classifier was compared with that of the Modified Normalized Difference Water Index (MNDWI) and Maximum Likelihood (ML) classifiers. In four out of five test sites, the classification accuracy of AWEI was significantly higher than that of MNDWI and ML (P-value < 0.01). AWEI improved accuracy by reducing commission and omission errors by about 50% compared to those resulting from MNDWI and about 25% compared to ML classifiers. In addition, the new method was shown to have a fairly stable optimal threshold value.
Therefore, AWEI can be used for extracting water with high accuracy, especially in mountainous areas where deep shadow caused by the terrain is an important source of classification error. (C) 2013 Elsevier Inc. All rights reserved.
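The abstract does not list the index formulas; the sketch below uses the commonly cited coefficients for the two AWEI variants (a shadow-suppressing form and a non-shadow form), so verify them against the paper before use. The toy reflectances are illustrative:

```python
def awei(blue, green, nir, swir1, swir2, shadow=True):
    """Automated Water Extraction Index on surface reflectances.
    shadow=True  -> AWEI_sh, the variant designed to suppress shadow pixels;
    shadow=False -> AWEI_nsh, for scenes without significant shadow.
    Coefficients are the commonly cited ones, not quoted from the abstract."""
    if shadow:
        return blue + 2.5 * green - 1.5 * (nir + swir1) - 0.25 * swir2
    return 4.0 * (green - swir1) - (0.25 * nir + 2.75 * swir2)

# toy reflectances: water is relatively bright in green, dark in NIR/SWIR
water = awei(0.06, 0.08, 0.03, 0.01, 0.01)
land = awei(0.05, 0.08, 0.30, 0.25, 0.15)
print(water > 0, land < 0)   # water scores positive, land negative
```

The relatively stable zero threshold implied here is the property the study emphasizes, in contrast to scene-specific thresholds needed by single-band methods.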