Northern Prairie Wildlife Research Center
We required a classification of wetland basins to estimate breeding duck populations and a classification of upland and wetland nesting habitat to estimate duck production. Wetland was treated differently from the other habitat classes. The National Wetland Inventory mapped wetland and upland on the plots as a special project. Wetland was mapped according to the classification and definitions in Cowardin et al. (1979) and, with some exceptions, according to the current mapping conventions of the National Wetland Inventory. The exceptions were that no codes for unknown water regime or mixed classes were allowed. Except for the addition of a unique number for each polygon in each basin, wetland mapping of the plots was essentially identical to the National Wetland Inventory operational inventory (Wilen 1990).

Cowardin (1982) illustrated the difference between classifications of wetland area and of wetland basins. Available data for constructing pair-wetland regressions were based on various basin classifications; therefore, we had to translate the cover classes mapped by the National Wetland Inventory into basin classes. The technique was a simplification of the rules of Stewart and Kantrud (1971) for forming pond classes from wetland zones. Their pond classes (equivalent to our basin classes; Table 2) were derived from the water regime of the zone with the most permanence and an areal cover of 5% or more.

Our algorithm first summed the area of all wetland polygons in a basin by a unique identifier coded at the time of digitization. It then searched for the polygon with the most permanent water regime, regardless of polygon size. If two or more polygons had the same water regime, the algorithm selected the largest. That polygon became the basin-naming polygon and was used to determine the class of the basin (Table 2). If the basin contained only one wetland polygon, that polygon became the basin-naming polygon by default.
| Basin class | Basin-naming polygonᵃ |
|---|---|
| Temporary basin | Water regime temporary (a)ᵇ or saturated (b) |
| Seasonal basin | Water regime seasonal (c) |
| Semipermanent basin | Water regime semipermanent (f) |
| Lake | System Lacustrine (L) or water regime intermittently exposed (h) or permanent (g) |

ᵃThe basin-naming polygon is the polygon with the most permanent water regime in a wetland basin.
ᵇLetters in parentheses refer to symbols on National Wetland Inventory maps.
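The basin-classification algorithm described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the polygon attributes (basin identifier, water-regime code, system code, area) and the permanence ordering are assumed from Table 2.

```python
# Hypothetical sketch of the basin-naming algorithm (Table 2).
# Water-regime codes ordered from least to most permanent; codes h and g
# (and any Lacustrine-system polygon) name a Lake.
PERMANENCE = {"a": 1, "b": 1, "c": 2, "f": 3, "h": 4, "g": 4}
BASIN_CLASS = {1: "Temporary basin", 2: "Seasonal basin",
               3: "Semipermanent basin", 4: "Lake"}

def classify_basins(polygons):
    """polygons: list of dicts with 'basin', 'regime', 'system', 'area' keys."""
    basins = {}
    for p in polygons:
        basins.setdefault(p["basin"], []).append(p)
    result = {}
    for basin_id, polys in basins.items():
        # Sum the area of all wetland polygons in the basin.
        total_area = sum(p["area"] for p in polys)
        # Pick the polygon with the most permanent water regime;
        # ties are broken by taking the largest polygon.
        namer = max(polys, key=lambda p: (PERMANENCE[p["regime"]], p["area"]))
        if namer["system"] == "L":
            cls = "Lake"
        else:
            cls = BASIN_CLASS[PERMANENCE[namer["regime"]]]
        result[basin_id] = {"class": cls, "area": total_area}
    return result

polys = [
    {"basin": 7, "regime": "c", "system": "P", "area": 0.4},
    {"basin": 7, "regime": "f", "system": "P", "area": 0.9},
    {"basin": 8, "regime": "a", "system": "P", "area": 0.2},
]
print(classify_basins(polys))
```

A basin with seasonal and semipermanent polygons is thus named by the semipermanent polygon, while a basin with a single temporary polygon is named by that polygon by default.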
Habitat on each plot was interpreted, mapped, and digitized by the National Wetland Inventory. The mapping was a special project conducted for us prior to production of standard wetland maps by the National Wetland Inventory. Data were from high-altitude (1:63,360), color-infrared photographs taken during the late 1970's and early 1980's. All features on the plots were delineated with a 5-aught pen on acetate overlays. Wherever possible, areas were shown as closed polygons, but some features such as roads, trails, and rock piles had to be shown as lines or points because of the small scale. At the time of digitization, a unique basin number was added as an attribute to all polygons in a single wetland basin. Polygons, linear features, and points were transferred to 1:24,000 USGS quadrangle maps by a Bausch and Lomb Zoom Transfer Scope.
Fig. 2. Regression of number of blue-winged teal (Anas discors), gadwall (Anas strepera), northern shoveler (Anas clypeata), and northern pintail (Anas acuta) pairs on pond size during 1987-90 in the prairie pothole region of the United States.
The resulting maps were then digitized on a digitizing tablet and converted to Map Overlay and Statistical System files (Pywell and Niedzwiadek 1980). A second set of 1:24,000 plot maps showing landownership boundaries was prepared from data on file at realty offices of the U.S. Fish and Wildlife Service. These maps were also digitized into Map Overlay and Statistical System files. The two files were overlaid to produce a file with the landownership attributes of all polygons. From these files, text files containing a single record for each polygon were produced.
Because the remote-sensing-based system required that all habitats have some area, line and point features were buffered by multiplying length by average feature width (linear features) or by computing π times the radius squared (point features). The following average dimensions, determined from aerial photographs, were used for buffering linear and point data on maps: 8.2 m width for shelterbelts; 14.6 m diameter for rock piles and for brush or grass areas too small to enclose with a polygon; 14.6 m width for linear wetland basins; and 15.3 m diameter for point wetland basins. Linear road features were buffered for the width of the road surface, which we equated to barren habitat, and for the distance from the road surface to the border of the right-of-way. Distances from the center line of the road were 3.1 m to the edge of the road surface and 10.1 m to the far edge of the right-of-way on gravel roads; 3.8 m and 19.1 m, respectively, on hard-surface roads; and 6.1 m and 19.8 m on railroads. The average width of fence rows and vegetated strips between fields was 3.1 m.
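The buffering arithmetic can be illustrated with a short sketch. The feature names are hypothetical labels; the widths and diameters are the averages reported above, in metres.

```python
import math

# Average dimensions (m) reported in the text; feature names are illustrative.
WIDTH = {"shelterbelt": 8.2, "linear_wetland": 14.6, "fence_row": 3.1}
DIAMETER = {"rock_pile": 14.6, "point_wetland": 15.3}

def buffer_line(length_m, feature):
    """Area (m^2) of a linear feature: length x average width."""
    return length_m * WIDTH[feature]

def buffer_point(feature):
    """Area (m^2) of a point feature: pi times the radius squared."""
    r = DIAMETER[feature] / 2.0
    return math.pi * r * r

print(buffer_line(100.0, "shelterbelt"))   # 100 m of shelterbelt -> 820 m^2
print(buffer_point("point_wetland"))       # one point wetland basin
```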
Areas for buffered linear features and points were added to the text files derived from polygon data. This inflated the total plot area. All polygon areas were then rescaled to the true plot area by calculating a correction factor (true plot area ÷ inflated plot area) and multiplying the area of each polygon by that factor.
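The rescaling step amounts to a proportional correction, sketched below with illustrative numbers rather than actual plot data.

```python
# Buffered lines and points inflate the summed plot area, so every polygon
# area is multiplied by (true plot area / inflated plot area) so that the
# corrected areas again sum to the true plot area.
def rescale_areas(areas, true_plot_area):
    factor = true_plot_area / sum(areas)
    return [a * factor for a in areas]

areas = [40.0, 35.0, 30.0]              # inflated total: 105 units
corrected = rescale_areas(areas, 100.0) # true plot area: 100 units
print(corrected, sum(corrected))
```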
The maps and databases contained data on all wetland basins (wet or dry) present when the photographs were taken. We assumed that the maps contained no errors of omission or commission. The remote-sensing-based system also required that we know the numbers and sizes of all ponds (wetland basins with water) each spring. We selected aerial video taken during flights in early May of each year as the technique to obtain these data. Although video lacks the fine resolution of photographs, it has advantages over them (Sidle and Ziewitz 1990). Video is less expensive than aerial photography. Because a monitor is in the aircraft, the user can guide the pilot to the target area and knows whether the target area was recorded. The data are ready for immediate use at the completion of the flight. The Map and Image Processing System software (Miller et al. 1990) allows instant capture of images in digital form directly from the video signal (unlike photographs, which must be scanned to produce digital data).
We used a Panasonic D 5000 video camera with a 5.9-mm Angenieux lens, a Panasonic AG 2400 video recorder, and a Panasonic CT 500V monitor. The camera was controlled by a Panasonic WVCR12 controller board and was mounted in a custom-designed aluminum mount that allowed leveling and rotation of the camera to correct for crabbing of the aircraft. The short lens did not allow use of the automatic white balance and iris; the aperture had to be set before the flight, and the camera was white-balanced just before or during the flight.
We used several aircraft, including Cessna 172, 172RG, and 182 models and a Partenavia Surveyor. A variety of camera port sizes and locations was used. Camera ports from 12.7 to 30.5 cm in diameter proved adequate, but larger ports were easier to use because the chance of including part of the aircraft skin in the video frame was smaller. Video data were obtained from an altitude of 3,812.5 m above ground level. This altitude, combined with our lens, permitted a 5.2-km swath width and adequate room for navigational error.
Analysis of Image Data
Video data were captured in digital form by a microcomputer with a Targa 16 image-capture board and The Map and Image Processing System (MIPS) software. This procedure produced a 16-bit composite raster. After capture, we classified ponds with the Feature Mapping procedure in MIPS. The objective was to classify all areas covered by water, including vegetation growing in water. The procedure required skill in interpretation and knowledge of local wetland conditions. The percent of the wetland basin covered by water was recorded during the counting of breeding pairs and furnished ground truth. Feature Mapping in MIPS can be used in an automated mode or by drawing boundaries of a pond on the screen with a mouse. Video data seldom furnish sufficient spectral separation for completely automated classification of a scene. We picked and classified training pixels, known from ground observation to contain water, until errors of commission began to appear. It was then necessary to begin on-screen interpretation by drawing boundaries around areas interpreted as wet. Where the basin contained emergent vegetation, we looked for water along the shore or in openings in the vegetation, a good indication that the vegetation was underlain by water. Sun-glint problems were resolved by referring to the original video tape and observing sun glint move across the scene as the aircraft passed over the wetland basin. Cloud-shadow problems were overcome by comparing the relation between the darker shading of water and the lighter shading of upland in shadowed and clear areas.
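The semi-automated idea behind the training-pixel step can be sketched in miniature. This is not the MIPS Feature Mapping procedure itself, only an assumed illustration: pixels spectrally close to known water training pixels are flagged as water, and the analyst stops adding training pixels once dry areas begin to be misclassified (errors of commission).

```python
# Loose sketch of training-pixel water classification (illustrative only;
# the image, training locations, and tolerance are hypothetical).
def water_mask(image, training, tolerance):
    """image: 2-D list of pixel brightness values.
    training: list of (row, col) pixels known from ground truth to be water.
    Returns a boolean mask where True marks pixels classified as water."""
    mean = sum(image[r][c] for r, c in training) / len(training)
    return [[abs(v - mean) <= tolerance for v in row] for row in image]

img = [
    [20, 22, 90],   # low values: dark (water); high values: bright (upland)
    [21, 85, 88],
]
mask = water_mask(img, training=[(0, 0), (1, 0)], tolerance=5)
print(mask)
```

When spectral separation is insufficient, as the text notes for video data, this kind of thresholding fails and on-screen boundary drawing must take over.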
Interpretation is subject to errors, and consistency among interpreters is important. Two people interpreted the same scenes for most video data to identify errors in the classification of the amount of water. When inconsistencies occurred, the area was reclassified.