Northern Prairie Wildlife Research Center
Understanding of remote sensing models and their interrelationships can benefit from a system view of the image-forming process (Swain and Davis 1978). An important concept is the distinction between the scene, which is real and exists on the earth's surface, and the image, a collection of spatially arranged measurements from the scene (Strahler et al. 1986). The purpose of a remote sensing model is to provide a conceptual and explicit framework for inferring the characteristics of the scene from the image. A remote sensing model may be generalized as having three components: a scene model, an atmospheric model, and a sensor model.
A scene model quantifies the relationships of the objects or targets of interest and their interactions with radiation through the processes of reflectance, transmittance, absorbance, and emittance. Characteristics of the scene objects could include their type, size, number, and spatial and temporal distributions. The model also must consider the background or nontarget components of the scene, including shadow.
An atmospheric model describes the transformation of the radiance by molecular and aerosol scattering and by gaseous absorption along the path from the sun to the earth's surface and from the surface to the spacecraft. If an atmospheric model is omitted, the parameters developed to extract information from the image are not transferable, and the entire procedure must be repeated for other images. Several methods for the normalization or radiometric calibration of remotely sensed data have been developed (Ahern et al. 1987, Schott et al. 1988, Chavez 1989, Tanre et al. 1990).
The sensor model quantifies how the instrument collects the measurements of the scene and includes four key parameters: spectral, spatial, and temporal resolution, and view angle (Duggin 1985). The spectral resolution of the sensor specifies what wavelengths of the electromagnetic spectrum are measured. The spatial resolution specifies the size of the area on the ground from which the measurements that comprise the image are derived. The spatial resolution relative to the spatial structure of the scene objects determines the appropriate analysis methods for scene inference (Woodcock and Strahler 1987). The temporal resolution specifies the frequency with which images are obtained in time. View angle is an important component of the imaging geometry. View angle and illumination geometry (solar zenith and azimuth angles) are important determinants of the measured reflectance since adjustments in observation and illumination geometry result in different sampling of the bidirectional reflectance distribution function, the most fundamental property describing the reflection characteristics of a surface (Silva 1978). Multidirectional observation of this reflectance anisotropy will be possible with the new generation of sensors (Ormsby and Soffen 1989).
Digital image processing, the numerical manipulation of digital images, includes procedures for preprocessing, enhancement, and information extraction. Preprocessing involves procedures applied to the original data before enhancement or information extraction. Calibration of image radiometry for atmospheric conditions and illumination and view geometry, the correction of geometric distortions and georegistration of the image, and noise suppression are examples of image-preprocessing procedures (Schowengerdt 1983).
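One widely used radiometric preprocessing step is dark-object subtraction, in the spirit of Chavez (1989): the darkest pixels in a scene (deep shadow or clear water) are assumed to owe their nonzero values to additive atmospheric path radiance, which is then removed. The sketch below is a simplified, single-band illustration with hypothetical digital numbers, not a full implementation of any published procedure.

```python
import numpy as np

def dark_object_subtract(band):
    """Subtract the per-band minimum (the assumed 'dark object') to
    reduce the additive path radiance contributed by atmospheric
    scattering. A simplified sketch of dark-object subtraction."""
    return band - band.min()

# Hypothetical single-band digital numbers with a constant haze offset.
band = np.array([[12, 14],
                 [13, 60]])
corrected = dark_object_subtract(band)
```

In practice the dark-object value is chosen per spectral band, and more refined variants model how the haze offset should decrease with wavelength rather than subtracting the raw minimum.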
Image enhancement involves the application of procedures designed to facilitate the interpretation of images. These procedures include contrast and color manipulations and spatial-filtering methods (Schowengerdt 1983). The "Tasseled Cap" is a well-known spectral transformation that derives new variables that allow vegetation and soils information to be extracted, displayed, and understood more easily (Crist et al. 1986). Hodgson et al. (1988) used this transformation with Landsat TM data in a study of wood stork foraging habitat. Jackson (1983) provided a general procedure to develop spectral indices for user-defined features in a scene.
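Spectral transformations of this kind are linear: each derived variable is a weighted sum of the original bands. The sketch below uses hypothetical two-band pixel spectra and illustrative coefficients (not the published Tasseled Cap weights) to show the mechanics.

```python
import numpy as np

# Hypothetical 2-band pixel spectra (rows = pixels, columns = bands).
pixels = np.array([[0.10, 0.40],   # vegetated pixel: low red, high NIR
                   [0.30, 0.32]])  # bare-soil pixel

# Illustrative coefficients (NOT the published Tasseled Cap values):
# each row defines one derived variable as a weighted sum of the bands.
coeffs = np.array([[ 0.6,  0.8],   # a "brightness"-like axis
                   [-0.8,  0.6]])  # a "greenness"-like axis

derived = pixels @ coeffs.T  # one row of new variables per pixel
```

With coefficients chosen as above, the second derived variable separates vegetation from soil more cleanly than either original band alone, which is the point of such transformations.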
The development of scene models for extracting information from remotely sensed data requires an understanding of the image-forming process. Strahler et al. (1986) provided a framework for identifying appropriate scene models given the characteristics of the image and the scene. The most common information-extraction methods used with remote sensing data are spectral classifiers in which each pixel is processed independently of its neighbors or location in the image. A discrete scene model is appropriate when the scene objects are larger than the spatial resolution of the sensor.
The parameter estimation process for spectral classifiers can be generalized as being supervised or unsupervised (Swain and Davis 1978, Schowengerdt 1983). In supervised classification, a sample of image elements for each land-cover class is used to estimate parameters, typically a mean vector and covariance matrix, for input to the classifier. In unsupervised training, a clustering algorithm is used to partition a sample of the data into populations of pixels with similar reflectance, which are referred to as spectral classes; parameters are then estimated for these spectral classes (Richards and Kelly 1984). The analyst then attempts to establish a correspondence between the spectral classes and the land-cover classes. A statistics file consisting of a mean vector and covariance matrix for each land-cover class then is input to a classification algorithm. The output from a maximum likelihood classification, a common method that produces results having the minimum probability of error over the entire set of data classified, is an image in which each pixel is assigned the label of the land-cover class for which the a posteriori probability is maximal. An enhancement to the standard output from the maximum likelihood classification would be to create a raster for each land-cover class wherein the pixel value would be the a posteriori probability of membership in that category. The result is a probabilistic digital map of the geographic distribution for each land-cover class. This would increase the computational and storage requirements, but technological progress in these areas is rapid (Faust et al. 1991).
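The a posteriori probabilities described above follow from Bayes' rule applied to per-class multivariate Gaussian models. The sketch below, with hypothetical class statistics in a two-band feature space, computes one probability raster per class and the conventional maximum-posterior label image.

```python
import numpy as np

def posterior_probabilities(pixels, means, covs, priors):
    """A posteriori class probabilities for each pixel under per-class
    multivariate Gaussian models; a sketch of the probabilistic output
    of a maximum likelihood classifier."""
    n_classes = len(means)
    likes = np.empty((pixels.shape[0], n_classes))
    for k in range(n_classes):
        d = pixels - means[k]
        inv = np.linalg.inv(covs[k])
        norm = 1.0 / np.sqrt(
            (2 * np.pi) ** pixels.shape[1] * np.linalg.det(covs[k]))
        maha = np.einsum('ij,jk,ik->i', d, inv, d)  # Mahalanobis distances
        likes[:, k] = priors[k] * norm * np.exp(-0.5 * maha)
    # Normalize so each pixel's class probabilities sum to one:
    return likes / likes.sum(axis=1, keepdims=True)

# Two hypothetical land-cover classes in a 2-band feature space.
means = [np.array([0.1, 0.5]), np.array([0.4, 0.2])]
covs = [np.eye(2) * 0.01, np.eye(2) * 0.01]
priors = [0.5, 0.5]
pixels = np.array([[0.12, 0.48],
                   [0.38, 0.22]])

post = posterior_probabilities(pixels, means, covs, priors)
labels = post.argmax(axis=1)  # standard hard-classification output
```

Each column of `post` corresponds to one of the per-class probability rasters suggested in the text; `labels` is the conventional single-map output.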
In a continuous scene model, the scene objects are smaller than the resolution element of the sensor. A relationship between the reflectance and a property of a scene, such as canopy coverage, is established and used to estimate the property in each pixel in a continuous fashion. Mixture models are a type of continuous scene model, in which the objective is to estimate the proportions of scene objects in each pixel. Mixture models have been used for a variety of resource inventories, including waterfowl habitat (Work and Gilmer 1976), rangeland vegetation and soil cover (Pech et al. 1986), and wintering geese (Strong et al. 1991).
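In the linear form of the mixture model, a pixel's spectrum is modeled as a proportion-weighted sum of "endmember" spectra, and the proportions are recovered by inverting that relationship. The sketch below uses hypothetical two-band endmembers and an unconstrained least-squares solution; operational mixture models typically also enforce non-negativity and sum-to-one constraints on the fractions.

```python
import numpy as np

# Hypothetical endmember spectra (columns) for two scene components,
# e.g. vegetation and bare soil, measured in two bands (rows).
endmembers = np.array([[0.05, 0.30],   # band 1 reflectance
                       [0.50, 0.35]])  # band 2 reflectance

# A mixed pixel constructed as 60% vegetation, 40% soil.
mixed = endmembers @ np.array([0.6, 0.4])

# Estimate the proportions by least squares (unconstrained sketch).
fractions, *_ = np.linalg.lstsq(endmembers, mixed, rcond=None)
```

With as many (or more) bands as endmembers and linearly independent endmember spectra, the proportions are recoverable; noise and nonlinear mixing complicate the real problem.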
Spectral-spatial scene models exploit the spatial structure of images as well as their spectral characteristics to infer the properties and processes at the land surface. A variety of spectral-spatial models is available. Some of these scene models segment the image into contiguous groups of pixels that meet a spectral similarity criterion and perform the classification using all the pixels of the feature (Strahler et al. 1986). Other spectral-spatial models exploit a measure of image texture or the spatial autocorrelation function as an additional feature in the classification process (Shih and Schowengerdt 1983, Pickup and Chewings 1988).
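A simple example of a texture measure of the kind used as an additional classification feature is the local variance of pixel values in a moving window: homogeneous regions score near zero, while spatially structured or noisy regions score high. The sketch below is a minimal, illustrative implementation, not any specific published texture operator.

```python
import numpy as np

def local_variance(image, size=3):
    """Per-pixel variance within a size x size window; a simple texture
    feature that can supplement spectral features in classification."""
    pad = size // 2
    padded = np.pad(image, pad, mode='edge')
    out = np.empty(image.shape, dtype=float)
    rows, cols = image.shape
    for i in range(rows):
        for j in range(cols):
            out[i, j] = padded[i:i + size, j:j + size].var()
    return out

smooth = np.full((5, 5), 10.0)  # homogeneous region: zero texture
rough = np.random.default_rng(0).integers(0, 255, (5, 5)).astype(float)

tex_smooth = local_variance(smooth)
tex_rough = local_variance(rough)
```

Stacking such a texture band with the spectral bands gives the classifier a spatial feature without changing the per-pixel classification machinery.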
Spectral-temporal models use the change in the spectral properties of images acquired at different times to infer properties or processes at the land surface. The "Tasseled Cap" is an example of a spectral-temporal model of the phenological development of agricultural crops that can be used to identify crops and forecast yields (Kauth and Thomas 1976, Wiegand et al. 1986). Time series of the normalized difference vegetation index (NDVI), calculated from the red and infrared spectral reflectance measurements of the AVHRR sensor, have been used to describe and map the intra- and inter-year phenological dynamics of biomes at regional, continental, and global scales (Justice et al. 1985), to infer net primary productivity (Goward et al. 1985), and to measure the dynamics of vegetation at the transition zones between biomes (Tucker et al. 1991). Various techniques for detecting change (Singh 1989) use images acquired at different times to infer changes in land cover.
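The NDVI time series described above is computed per pixel from the red and near-infrared bands; its seasonal trajectory encodes phenology. The sketch below uses hypothetical monthly reflectances for a single pixel to illustrate the index and the extraction of peak greenness timing.

```python
import numpy as np

def ndvi(red, nir):
    """Normalized difference vegetation index from red and
    near-infrared reflectance: (NIR - red) / (NIR + red)."""
    return (nir - red) / (nir + red)

# Hypothetical monthly red/NIR reflectances tracing a growing season.
red = np.array([0.20, 0.12, 0.08, 0.10, 0.18])
nir = np.array([0.25, 0.35, 0.48, 0.40, 0.28])

series = ndvi(red, nir)            # NDVI time series for one pixel
peak_month = int(series.argmax())  # index of peak greenness
```

Applied across an image stack, the same calculation yields the multitemporal NDVI surfaces used in the biome-scale studies cited above.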
The flow of information between remote sensing and GIS should not be one-way. The accuracy of information derived from remote sensing can benefit from access to accurate spatial data within a geographic information system. Integration of the parallel technologies of GIS and remote sensing will be important to the fullest maturation of both areas.