Sunday, December 11, 2016

Spectral Signature Analysis and Resource Monitoring

Introduction

Spectral Signature Analysis is based on one of the most basic principles of remote sensing: electromagnetic radiation that strikes an object is either reflected, absorbed, or transmitted. The reflectance of an object differs depending on the object and on how it absorbs, reflects, or transmits the radiation that strikes it. When the reflectance of electromagnetic radiation is graphed after EMR strikes an object, the resulting curve is called a spectral signature curve, because the curve of reflected energy is unique to different objects. The ability to analyze and extract reflectance information can help remote sensing analysts identify objects, vegetation, soil types, mineral conditions, and many other geospatial phenomena.

With advanced modeling we can monitor vegetation and plant moisture content, as well as soil, soil moisture content, and soil mineral content, along with other geospatial phenomena, without ever leaving a computer.

Methods

Using Landsat ETM+ data, we identified and measured 12 different surfaces for signature analysis: standing water, moving water, forest, riparian vegetation, crops, urban grass, dry soil, moist soil, rock, asphalt highway, airport runway, and concrete.

Using ERDAS and the Drawing tool, a polygon can be created on any surface of an image. Then, using the Raster tab's Signature Editor, a graph can be generated based on the mean spectral reflectance of that surface. Below is the spectral signature curve for standing water (Figure 1); the blue band is reading higher than it should because the Landsat ETM+ data have not been corrected for scattering or haze.

Figure 1. Spectral Signature Curve for Standing Water in ERDAS Imagine, with Landsat ETM+ data.
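To make the idea concrete, here is a minimal sketch (not the ERDAS implementation itself) of what the Signature Editor is doing when it reports a mean signature: averaging each band over the pixels inside a drawn polygon. The array names are hypothetical stand-ins for a band-stacked image and a polygon mask.

import numpy as np

# A minimal sketch of what the Signature Editor reports: the mean value of each
# band over the pixels inside a drawn polygon. "image" and "polygon_mask" are
# hypothetical stand-ins for a band-stacked raster and the drawn polygon.
def mean_signature(image, polygon_mask):
    bands = image.shape[0]
    return np.array([image[b][polygon_mask].mean() for b in range(bands)])

# Example with synthetic data: 6 bands, 100 x 100 pixels.
rng = np.random.default_rng(0)
image = rng.integers(0, 255, size=(6, 100, 100)).astype(float)
polygon_mask = np.zeros((100, 100), dtype=bool)
polygon_mask[40:60, 40:60] = True            # stand-in for the drawn polygon
print(mean_signature(image, polygon_mask))   # one mean value per band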
Another thing that can be accomplished is comparing the conditions of surfaces. For example, we can look at the spectral signature curve of dry soil compared with moist soil. Because of the water in the moist soil, its spectral reflectance curve differs from that of dry soil. We know that water absorbs middle infrared (MIR) energy, which is band 5 on this spectral reflectance curve. Therefore we can distinguish the two curves in the graph below (Figure 2): the moist soil curve shows a dip in reflectance at band 5 that the dry soil curve does not.

Figure 2. Dry Soil and Moist soil spectral reflectance curve.
In addition to comparing spectral signatures, an analyst can also employ computer modeling and algorithms to derive details about the surfaces being observed. Two such examples are a model for identifying Ferrous Minerals (iron) in soil and the Normalized Difference Vegetation Index (NDVI).

The Ferrous Mineral model is based on the equation:
FM = MIR1 / NIR
that is, the ferrous mineral value equals the Middle Infrared 1 (MIR1) band divided by the Near Infrared (NIR) band.

The Normalized Difference Vegetation Index is based on the equation:
NDVI = (NIR - RED) / (NIR + RED)
that is, NDVI equals the Near Infrared band minus the Red band, divided by the Near Infrared band plus the Red band.

Both of these models are built into ERDAS, so the analyst just has to run the model, which will output an image which can then be used in further analysis (Figure 4 and Figure 5, below). 
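For readers who want to see the band math outside of ERDAS, below is a hedged sketch of the two ratios using NumPy arrays in place of rasters; the band array names are hypothetical.

import numpy as np

# Hedged sketch of the two band-ratio models described above, using NumPy
# arrays in place of ERDAS rasters. "nir", "red", and "mir1" are hypothetical
# arrays holding the NIR, red, and MIR band values as floats.
def ferrous_minerals(mir1, nir):
    # FM = MIR1 / NIR; guard against division by zero.
    return np.divide(mir1, nir, out=np.zeros_like(mir1), where=nir != 0)

def ndvi(nir, red):
    # NDVI = (NIR - RED) / (NIR + RED), ranging from -1 to +1.
    denom = nir + red
    return np.divide(nir - red, denom, out=np.zeros_like(nir), where=denom != 0)

# Example with synthetic reflectance values.
rng = np.random.default_rng(1)
nir = rng.uniform(10, 200, size=(50, 50))
red = rng.uniform(10, 200, size=(50, 50))
mir1 = rng.uniform(10, 200, size=(50, 50))
print(ferrous_minerals(mir1, nir).mean(), ndvi(nir, red).mean())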

Results
Figure 3. The Spectral Reflectance Curves of 12 surfaces.

When comparing all 12 of the spectral reflectance curves from the surfaces listed above (Figure 3), the outcome appears to be a cacophony of curves. Most information is derived not from the individual curves themselves, but from comparing the curves in relation to each other. Here we can see that the spectral reflectance curve for standing water is higher than that of moving water. At first this would seem not to make sense, since standing water and moving water are made up of the same substance. However, once we understand the nature of spectral reflectance, we can determine that other factors come into play even when comparing extremely similar surfaces. The spectral reflectance of water is actually composed of volume reflectance (the water column), bottom reflectance, and surface reflectance, and these components aggregate to produce a particular water body's spectral reflectance. Beyond differences in these three reflectance components, another factor could be the difference in the water's surface. Moving water has a rougher, less uniform surface than a calm lake. This difference in surface condition produces a more diffuse reflection from rough water, as opposed to the specular reflection that occurs on calm water, and a diffuse reflection does not produce as high a reflectance curve for that surface.

As you can see, it is the analysis resulting from the comparison of spectral reflectance curves that allows the analyst to recognize the patterns and processes that lead to the identification of surfaces.

Both the Ferrous Mineral and NDVI models output an image that can be brought into a mapping program and classified into unique values. Here we classified the values the models generated into equal interval classes and then mapped both of the desired classes.

The Ferrous Mineral soils map (Figure 4) shows that the overall concentration of ferrous minerals is in the western portion of Chippewa and Eau Claire Counties, as depicted by the darker orange and yellow colors. The highest concentration is just northwest of Lake Wissota.

Figure 4. Ferrous Mineral Map of Chippewa and Eau Claire Counties.
The NDVI map displays how dense the vegetation is in both Chippewa and Eau Claire counties. The densest vegetation is in the eastern portion of the counties and is located in roughly the same areas as the low ferrous mineral values in Figure 4. This may explain why the ferrous mineral readings for these soils are generally low: perhaps the dense vegetation blocks the spectral reflectance of the soil. The areas with the least dense vegetation are in the western portions of the counties, which are also the more populated areas with built-up urban development.
Figure 5. Normalized Difference Vegetation Index of Chippewa and Eau Claire Counties.
Sources

United States Geological Survey, Earth Resources Observation and Science Center (Satellite Image). 2000. 


Tuesday, December 6, 2016

Photogrammetry

Introduction

Photogrammetry is the study of the geometric characteristics of aerial photographs. In order to extract correct and accurate geospatial information from imagery, the images need to be free of errors and spatially accurate. Specifically, photos need to be free from three major errors: Perspective Convergence, Scale Variation, and Relief Displacement.

Perspective Convergence is an error that occurs when pixels are not the same size. This error stems from camera tilt, where the camera captures an image at an angle; the tilt makes pixels farther from the camera appear compressed.
Figure 1. Perspective Convergence.

Scale Variation results from differences in terrain, resulting in various scales for the different elevations of terrain captured in the aerial imagery. 


Figure 2. Scale Variation. 
Relief Displacement is an error in which tall objects appear to lean toward or away from the principal point of the photograph. If an object rises above the datum, it appears to lean away from the principal point; if it sits below the datum, it appears to lean toward the principal point. The farther an object extends above or below the datum, the greater the displacement, and the farther an object is from the principal point, the greater its displacement as well.
Figure 3. Relief Displacement. As objects increase in distance from the principal point, the relief displacement increases. In the image to the right, the points have been corrected for the relief displacement error.

Figure 4. Relief Displacement in an Aerial Photograph. Relief displacement as depicted on the upper campus of UWEC. The tower is leaning away from the principal point and will have to be corrected.
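Relief displacement can also be quantified. A commonly used relationship is d = (r x h) / H, where d is the displacement on the photo, r is the radial distance of the object's top from the principal point, h is the object's height above the datum, and H is the flying height above the datum. The short example below uses made-up numbers purely for illustration.

# A small worked example of the standard relief displacement relationship,
# d = (r * h) / H. The numbers below are hypothetical, not from this lab.
def relief_displacement(r, h, H):
    return r * h / H

r = 0.08    # radial distance on the photo, in meters (8 cm from the principal point)
h = 50.0    # object height above the datum, in meters
H = 1200.0  # flying height above the datum, in meters
print(relief_displacement(r, h, H))  # about 0.0033 m (3.3 mm) of displacement on the photo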

Once the errors have been removed, the images are plotted as an anaglyph, a type of three-dimensional image, and used for Digital Elevation Models or Digital Terrain Models.

Orthorectification is the simultaneous removal of positional and elevation errors from one or more satellite images. The process also obtains real-world X, Y, and Z coordinates for the pixels of aerial photographs.

Methods

We wanted to compare two anaglyphs built from two different elevation sources. One uses a Digital Elevation Model (DEM) at 10 meter spatial resolution, while the other uses a LiDAR-derived Digital Surface Model (DSM) at 2 meter spatial resolution.

For the first anaglyph, a photo of Eau Claire was combined with the DEM shown below (Figure 5). The second anaglyph was also a photo of Eau Claire, but this time it was combined with the LiDAR-derived DSM (Figure 6). As noted above, not only will the anaglyphs differ because they are based on different elevation sources (a DEM versus a DSM), but the spatial resolution of the output images will also differ.

Figure 5. Anaglyph 1: Derived from DEM.

Figure 6. Anaglyph 2: LiDAR-Derived DSM.

For orthorectification, we will use the ERDAS Imagine Leica Photogrammetry Suite to create a planimetrically true orthoimage.

We will use two corrected orthoimages as the source of ground control for the panchromatic imagery at 10 meter spatial resolution. The images displayed below are the images to be corrected; both are SPOT panchromatic images. When displayed in one viewer, the two panchromatic images share the same space, one overlapping the other.

Figure 7. Original Images Before Orthorectification.
After loading one of the panchromatic images and a control image, ground control points are placed throughout the panchromatic image using the known, corrected orthoimages.

Figure 8. Ground Control Points.
After placing all of the control points, the second panchromatic image can be added to the block model. Once ground control points are added to the second panchromatic image, automatic tie point collection can occur, followed by triangulation of the images. The output images then need to be resampled and finally output using the orthoresampling process, which generates an output image of both panchromatic images with spatially accurate X, Y, and Z coordinates.

Results

Both anaglyphs are displayed below. The anaglyph on the left is Anaglyph 1, which was created using the DEM. You will only be able to see the 3D effect if you have polarized glasses. However, the 3D effect is stronger in Anaglyph 2, which is built from higher spatial resolution source imagery and a DSM, and whose source images do not contain relief displacement. If you have polarized glasses, you will be able to detect the differences in the three-dimensional effect of the images.
Figure 9. Anaglyph 1 and Anaglyph 2. Anaglyph 1 uses an Eau Claire image at 1 meter spatial resolution and a DEM (left). Anaglyph 2 uses an image of Eau Claire at 2 meter spatial resolution and a LiDAR-derived DSM (right).

After completing the orthorectification model, both panchromatic images are overlaid into one image (Figure 10), and the geographic features in the photo below line up.
Figure 10. Orthorectified SPOT Panchromatic Images. After the tool has been run, both SPOT panchromatic images are in the correct position, orthorectified, and overlapping. From a distance we can see the borders of the images.

A close-up of the two orthorectified panchromatic images (Figure 11) shows a seamless transition between the two images.
Figure 11. Closeup of the Orthorectified SPOT Panchromatic Images. When zoomed in on the overlap of the two orthorectified images, we can see a high degree of spatial accuracy at the boundaries, where a seamless transition between features exists.

Sources

Chippewa County Government | LiDAR-derived Surface Model of Chippewa County, WI (DSM).

Eau Claire County Government | LiDAR-derived Surface Model of Eau Claire, WI (DSM). 

ERDAS Imagine | SPOT Satellite Images ( Panchromatic Images A and B). 2009.

ERDAS Imagine | Digital Elevation Model (DEM), (Palm Springs, CA DEM). 2009.

ERDAS Imagine | National Aerial Photography Program. (NAPP Images at 2 meter resolution). 2009.

Geog 258: Maps and GIS. http://gis.depaul.edu/shwang/teaching/geog258/lec7.htm Retrieved December 7, 2016.

United States Department of Agriculture | National Agriculture Imagery Program (NAIP). 2005.

United States Department of Agriculture | USDA Natural Resources Conservation Service. (Eau Claire Digital Elevation Model (DEM)). 2010. 

Thursday, November 17, 2016

Geometric Correction

Goal and Background

Before analysis of remotely sensed images can begin, the images need to be preprocessed to ensure the planimetric accuracy of the image, and thus of the biophysical and sociocultural information the image contains. There are two broad types of errors, Internal Errors and External Errors. Internal errors are systematic and stem from the sensor itself. External errors are non-systematic and usually stem from the atmosphere. Internal errors fall into three broad types: image offset, scanning system induced variation, and scanning system one-dimensional relief displacement. Due to their systematic nature, most internal errors are predictable and can be corrected by the data vendor before the data is given to the analyst.

External errors, on the other hand, fall into two types: altitude changes and attitude changes. Altitude changes are caused by gradual changes in the altitude of a remote sensing platform as it collects data. Increases in altitude cause a decrease in spatial resolution (smaller scale) in the resulting image, while decreases in altitude cause an increase in spatial resolution (larger scale). Attitude changes are caused by changes in roll, pitch, and yaw.

Here is a short video explaining roll, pitch, and yaw in an aircraft.


As a plane flies through the air, various atmospheric conditions cause it to alter its roll, pitch, and yaw so that it can stay airborne. These changes alter the sensor's orientation and affect data collection. While gyro stabilization is employed in aircraft remote sensing and can compensate for changes in roll and pitch, a gyro stabilizer does not account for errors in yaw. External errors are non-systematic and must be corrected using tools or ancillary data before accurate information can be derived from the image.

The goal of this lab is to understand how to correct these external errors in remotely sensed images using ERDAS. The methods employed are Image to Map Rectification and Image to Image Rectification, where the correction takes place via ground control points and polynomial equations. Ground Control Points (GCPs) are placed on the uncorrected image and simultaneously on the reference image at the same locations. A reference image may be a known corrected image or an accurate map.

The GCPs are placed systematically using known features for reference, such as intersections, airports, or road crossings. It is important to note that GCPs should never be placed on lakes or other water bodies, or on vegetation, as these areas are too variable and change too frequently to be relied upon as GCPs. The number of GCPs to use depends on the size of the imagery, the extent of the distortion, and the order of the polynomial used in the correction process. A higher order polynomial produces a more robustly corrected image but requires a larger number of GCPs to compute the correction. Once at least the minimum number of GCPs has been placed (using about 1.5 times the minimum is recommended for greater accuracy), we need to make sure that the Root Mean Square Error (RMSE), a measure of the accuracy of GCP placement between the uncorrected image and the reference image, is very small. The acceptable RMSE varies with the order of polynomial used, but generally needs to be lower than 0.5.
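As a rough illustration of the numbers involved, the sketch below computes the minimum GCP count for a polynomial of a given order, (t + 1)(t + 2) / 2, and a total RMSE for a hypothetical set of GCP residuals; the coordinates are invented for the example and are not from this lab.

import numpy as np

def min_gcps(order):
    # A polynomial of order t has (t + 1)(t + 2) / 2 coefficients per axis,
    # so at least that many GCPs are required (3 for order 1, 10 for order 3).
    return (order + 1) * (order + 2) // 2

def total_rmse(source_xy, retransformed_xy):
    # RMSE = square root of the mean squared distance between where each GCP
    # was placed and where the fitted transformation puts it.
    residuals = source_xy - retransformed_xy
    return float(np.sqrt(np.mean(np.sum(residuals**2, axis=1))))

print(min_gcps(1), min_gcps(3))  # 3 10
source = np.array([[100.0, 200.0], [150.0, 120.0], [300.0, 310.0], [90.0, 80.0]])
fitted = source + np.array([[0.2, -0.1], [-0.3, 0.2], [0.1, 0.1], [0.0, -0.2]])
print(total_rmse(source, fitted))  # roughly 0.24, well under the 0.5 pixel target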

Methods
The procedure for placing GCPs is as follows:
1. Locate candidate points and collect GCPs.
2. Compute and test the transformation.
3. Create an output image file with new coordinate information; pixels are resampled in the process.

The first image that needs to be corrected is a Digital Raster Graphic of the Chicago area from the United States Geological Survey; the reference image is a map of Chicago. A first order polynomial was used for this model, so a minimum of 3 GCPs was needed. The total acceptable RMS error was anything under 2.0, but ideally the RMS error should be under 0.5.

Figure 1. Image to Map Rectification. The uncorrected image of Chicago is in the left window, while the accurate map is in the right window. Once the minimum number of GCPs is placed, the total RMS control point error appears in the lower right-hand corner of the screen (Control Point Error).


Once the GCPs were placed, the geometric correction model was run with the nearest neighbor resampling technique.

The next image that needs to be corrected is of Sierra Leone; the photo is not in the correct planimetric (X,Y) position, as you can see in the picture below. The top image is the uncorrected photo and the bottom image is the reference photo. This demonstrates the second method of geometric correction, Image to Image Rectification.

Figure 2. Image to Image Rectification. Here the uncorrected and corrected images are stacked on top of each other. One can note the error in planimetric (X,Y) space of the uncorrected image (top), as it does not line up with the corrected image (bottom).

For this model a third order polynomial was used, which requires a minimum of 10 GCPs; again we used more than the minimum, placing 12 GCPs. This time the RMS error needed to be below 0.5, which was achieved by fine-tuning the placement of the GCPs used in the calculation.
Figure 3. Image to Image Rectification of Sierra Leone. When using a third order polynomial, the minimum number of GCPs increases to 10.


Results


For the Chicago image we can see both the uncorrected and corrected images side by side (Figure 4); the uncorrected image is on the left and the corrected image is on the right. While the images look similar, the corrected image has undergone nearest neighbor resampling, which assigns each cell in the corrected image the value of the closest cell in the uncorrected image if the two were overlapped. It is also important to note that a first order polynomial correction is not as robust as higher order polynomial corrections and is used for images with minimal inaccuracies, so we would not expect to see a drastic change in the Chicago DRG once it has been corrected.

Figure 4. Comparison of the original uncorrected image of Chicago (left) and the corrected image (right).

The Sierra Leone image, on the other hand, had a much greater degree of inaccuracy than the Chicago image (Figure 2). Therefore, the Sierra Leone image underwent a higher order correction in the form of a third order polynomial model. After the image to image geometric correction, the Sierra Leone image is rectified (Figure 5) and in the correct planimetric (X,Y) position, and can now be used for analysis. Bilinear interpolation was used in this model, which resulted in a somewhat hazy-looking output. While the haze may affect analysis, it could be corrected by choosing a different interpolation method or by using a haze reduction tool. In this instance the haze is less important than correcting the planimetric position of the original data.
Figure 5. Comparison of the reference image for Sierra Leone (left) and the corrected image for Sierra Leone (right). The corrected image now lines up planimetrically with the reference image.

Sources


Illinois Geospatial Data Clearing House [USGS, 7.5 Minute Digital Raster Graphic (DRG)]. Retrieved November 16, 2016, from https://clearinghouse.isgs.illinois.edu/

USA | Earth Resources Observation and Science (EROS) Center. (Landsat TM Image for Eastern Sierra Leone). Retrieved November 16, 2016, from http://eros.usgs.gov/usa

YouTube. Airplane control - Roll, Pitch, Yaw. Retrieved November 16, 2016, from http://youtu.be/pQ24NtnaLl8

Thursday, November 10, 2016

Light Detection And Ranging

Background and Goals


LiDAR stands for Light Detection and Ranging and is a method used in remote sensing to capture three-dimensional data about the characteristics of the Earth's surface. LiDAR can be captured by a variety of remote sensing platforms, such as airplanes, drones, satellites (no longer used), and ground units. The data that LiDAR captures is referred to as a "point cloud", which looks exactly like it sounds: many points of captured data in 3D space. LiDAR point clouds can be used to create a variety of models, such as Digital Surface Models (DSMs) and Digital Terrain Models (DTMs), and can also be used in a variety of analyses of Earth's man-made and natural features. Here is a background video on the basics of LiDAR and some of the uses that LiDAR can have for data analysis.


In this lab we will look at the processing of LiDAR data into various models, such as DTMs and DSMs, as well as other images that can be derived from the LiDAR point cloud.

Methods

First, we are given the following scenario: you are a GIS manager working on a project for the City of Eau Claire, WI, USA. You have acquired a LiDAR point cloud in LAS format for a portion of the City of Eau Claire. You want to perform an initial quality check on the data by looking at its area and coverage, and also verify the current classification of the LiDAR points.

The LAS dataset first needed to be added to ArcMap and its properties examined. When a new LAS dataset is brought into ArcMap, statistics need to be calculated from the header information included in each .las file. This step is important because the statistics are used for quality assurance and quality control of the overall dataset and the individual .las files.

The next step is to assign both the vertical (Z) and horizontal (XY) coordinate systems, which should be included in the LAS dataset's metadata. The metadata also includes the units of the data that make up the point cloud. Once the coordinate systems and units are determined and assigned, the data is ready to be analyzed.

To make sure that the data cloud has been correctly spatially located, a shapefile can be added and used as a reference check.



Figure 1. Tiles of the LAS dataset after the statistics have been calculated, the horizontal and vertical coordinate systems have been assigned, and the shapefile has been added to verify that the LAS dataset is spatially correct.

The LAS dataset is displayed as tiles because the point cloud contains an extremely large number of data points, and displaying them constantly would require a great deal of computational power and time to redraw the points over and over as the dataset is examined (Figure 1). Instead, the dataset is displayed as tiles and does not show the point cloud until the analyst zooms in to a specific tile.
  

Below is an example of the point cloud (Figure 2). Here the point cloud displays data points symbolized by elevation from the first returns. For comparison, the basemap of the UWEC campus is also shown (Figure 3). By comparing the two images we can see the elevation of the buildings represented in the point cloud, as well as the difference between upper and lower campus: upper campus is the orange area in the point cloud, lower campus the green area, and the yellow area in between represents the hill and the change in elevation between the two.
Figure 2. The LAS dataset point cloud. The points displayed show elevation based on first return data. The area is the UWEC lower campus (green area with orange/yellow buildings), the upper campus (yellow area), and the surrounding area.
Figure 3. World Imagery Basemap. This is the same area shown in Figure 2, for reference.


The data in the point cloud can be separated by returns. Returns occur when the LiDAR pulse strikes an object and sends back multiple pulses of information.

Figure 4. Number of Returns. This image depicts how a LiDAR pulse can interact with various objects in its path and produce multiple returns.

The next task was to look at the LAS dataset in 3D view. Using the 3D view, a box can be created that allows the analyst to see the data in profile. The image shown below (Figure 5) is the profile view of a pedestrian footbridge that crosses the Chippewa River, connecting the UWEC campus.

Figure 5. LAS 3D Viewer. This image depicts a cross-sectional 3D view of the first returns of the LiDAR dataset. The area is the pedestrian footbridge that runs across the Chippewa River.

Similar to viewing specific areas in profile, we can create elevation surfaces using a related technique. There are two models of interest for this report: the Digital Surface Model (DSM) and the Digital Terrain Model (DTM). The tool used to create both models is the same, the raster to surface tool; however, the returns chosen determine which output is produced.

For the DSM we choose the first returns, because in this model we want to see the variation in terrain from the three-dimensional objects on the Earth's surface (Figure 6).

For the DTM we choose ground returns, because we want to model the ground surface itself, not the three-dimensional man-made or natural objects that would obscure it (a simple sketch of this return selection is shown below).
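The sketch below illustrates the return selection idea with a tiny, made-up point set rather than a real LAS file: first returns feed the DSM, and ground-classified returns feed the DTM.

import numpy as np

# A minimal sketch of choosing returns: keep first returns to grid a DSM
# (building tops and canopy) and ground-classified returns to grid a DTM
# (bare earth). "points" is a hypothetical structured array standing in
# for a LAS point cloud; real LAS files carry the same kinds of fields.
points = np.array(
    [(10.0, 5.0, 260.0, 1, 5),   # first return off a rooftop
     (10.0, 5.0, 245.0, 2, 2),   # last return, classified as ground (class 2)
     (12.0, 6.0, 246.0, 1, 2)],  # single return directly from the ground
    dtype=[("x", float), ("y", float), ("z", float),
           ("return_number", int), ("classification", int)],
)

dsm_points = points[points["return_number"] == 1]   # first returns -> surface model
dtm_points = points[points["classification"] == 2]  # ground returns -> terrain model

print(dsm_points["z"])  # elevations that would be gridded into the DSM
print(dtm_points["z"])  # elevations that would be gridded into the DTM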

Once we have a DSM or DTM, we can run tools to enhance our analysis of the LiDAR data, such as creating a hillshade model. A variety of raster tools are available in the 3D Analyst toolbox in ArcGIS.
Figure 6. Digital Surface Model with hillshade effect. The area depicted is of the confluence of the Eau Claire and Chippewa Rivers. 


Once both models are complete, we can layer them on top of each other and use the swipe tool to "peel away" the DSM and see the DTM underneath it (Figure 7). By doing this we can compare and contrast both models simultaneously.

Figure 7. Digital Terrain Model and Digital Surface Model with hillshade effects. Here both the DSM and DTM are displayed at the same time. By using the swipe tool we can peel back the DSM and see the DTM underneath it. 
The last objective for this report was to use the LiDAR returns to create a LiDAR intensity image, using the LAS dataset to raster tool.
Figure 8. LiDAR Intensity Image.

 Results

LiDAR is a very powerful tool in remote sensing, and it is not only interesting but also fun to perform analysis with. However, it must be understood how LiDAR works if a robust analysis is to be performed. LiDAR point clouds hold a great deal of data, and the tools we have to process that data let us explore a remotely sensed world in 3D, as well as create various models we can use to analyze the world around us. We can create models of the Earth's bare surface (Figure 9) and map fluvial features or assess flood risk in areas of low elevation.

Figure 9. Digital Terrain Model With Hillshade Effects
Or we can see the features, both man-made and natural, that dot the landscape (Figure 10). The amazing thing about a digital surface model is that it can allow us to see places we may never go, and it has many possibilities for analyzing and understanding our world.
Figure 10. Digital Surface Model With Hillshade Effects
               

Data sources
Maribeth Price, 2014. Mastering ArcGIS, 6th Edition. Eau Claire County Shapefile.

[The Science of Measuring Ecosystem: NEON Education]. (2014, June 18th). LiDAR – Introduction to Light Detection and Ranging. Retrieved November 10, 2016, from  https://youtu.be/EYbhNSUnIdU

"What is Light Detection and Ranging" (2016, July 30). A Complete Guide to LiDAR: Light Detection and Ranging - GIS Geography. Retrieved November 10, 2016, from http://gisgeography.com/lidar-light-detection-and-ranging/

"Basemap" ESRI 2011. ArcGIS Desktop: Release 10. Redlands, CA: Environmental Systems Research Institute.

Eau Claire County, 2013. [LIDAR Point Cloud Data and Tile Index]. 

Tuesday, November 1, 2016

Miscellaneous Image Functions

Goals and Background:

The goals of this lab are to apply various image-processing techniques to our study of remotely sensed images by accomplishing the following tasks:

1.     Use various methods to create a study area derived from a larger satellite image scene.

2.     Use spatial resolution techniques to optimize an image for improved image interpretation.

3.     Use radiometric enhancement techniques to enhance image quality for improved image interpretation.

4.     Link images in Erdas to Google Earth, thereby taking advantage of Google Earth's ability to serve as a high-resolution image interpretation key for ancillary information.

5.     Introduce resampling techniques.

6.     Explore the differences in Erdas Mosaicking tools by examining the tool’s output.

7.     Introduce Binary change techniques, as well as using various GIS platforms to interpret those changes.
Methods: 

There are two ways to create an Image Subset in Erdas, by using the Inquire box and by using a shapefile. In order to create an inquire box you can right click on a satellite image in Erdas and select the inquire box option. The inquire box can then be moved and manipulated to the specific area of interest that you would like to subset. Once the area has been determined, using the Subset and Chip tool and selecting Create Subset Image will allow you to make any final adjustments and to finalize the creation of the Image subset (Figure 2).

Creating an Image Subset with a shapefile is a very similar process. By overlaying a shapefile on a satellite image, we can use the same Subset and Chip tool to create an output file whose borders match the shapefile (Figure 3).

Pan sharpening is a great way to improve image quality for interpretation. Merging an image with a higher resolution image increases clarity while still retaining the original image's data. In Erdas, using the Resolution Merge raster tool, two images can be pan sharpened with a multiplicative algorithm and the nearest neighbor resampling technique, producing a higher resolution image (Figure 4).
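As a rough illustration of the multiplicative idea (a simplified sketch, not the exact Erdas Resolution Merge algorithm), each multispectral band can be multiplied by the co-registered panchromatic band and rescaled back to its original value range; the arrays below are synthetic.

import numpy as np

# Hedged sketch of a multiplicative pan sharpen. It assumes the multispectral
# bands have already been resampled to the panchromatic pixel grid; the array
# names and values are hypothetical.
def multiplicative_pansharpen(ms_bands, pan):
    sharpened = ms_bands * pan  # band-by-band product with the pan band
    out = np.empty_like(sharpened, dtype=float)
    for i, (band, fused) in enumerate(zip(ms_bands, sharpened)):
        # Rescale each fused band so its output range matches its input range.
        scale = (band.max() - band.min()) / (fused.max() - fused.min())
        out[i] = (fused - fused.min()) * scale + band.min()
    return out

rng = np.random.default_rng(2)
ms_bands = rng.uniform(20, 200, size=(3, 64, 64))  # 3 resampled multispectral bands
pan = rng.uniform(20, 200, size=(64, 64))          # co-registered panchromatic band
print(multiplicative_pansharpen(ms_bands, pan).shape)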

Another problem often encountered by analysts is atmospheric haze, which results in cloudy images. In order to correct haze, a radiometric enhancement technique is used, known as Haze Reduction. This technique brings out the true color of the satellite image and results in a clearer image (Figure 5).
Linking Google Earth to a satellite image in Erdas provides the analyst with extra information for the image interpretation process. Google Earth can provide this information easily, as it uses up-to-date, GeoEye high-resolution satellite data. Simply by clicking the Connect to Google Earth option and selecting Sync Views, an analyst gains a very powerful image interpretation key (Figure 6).

If an analyst needs to increase or decrease the size of an image's pixels, a resampling technique is required. Resampling up decreases the size of the pixels, while resampling down increases the size of the pixels. By selecting "Resample Pixel Size" in the spatial tools of the Raster tab in Erdas, an analyst can set both the resampled pixel size and the interpolation method used to fill the new pixels. Different interpolation methods have positives and negatives (Figure 7).
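Below is a small illustration of resampling up by nearest neighbor, where each new pixel simply takes the value of the closest original pixel; bilinear interpolation would instead average the four nearest original pixels. This is a conceptual sketch, not the Erdas implementation, and the input array is made up.

import numpy as np

# Minimal sketch of nearest neighbor resampling up: each new, smaller pixel
# takes the value of the closest original pixel, so no new values are invented.
def resample_up_nearest(band, factor=2):
    rows, cols = band.shape
    # Map each output pixel back to the nearest input pixel index.
    row_idx = np.arange(rows * factor) // factor
    col_idx = np.arange(cols * factor) // factor
    return band[np.ix_(row_idx, col_idx)]

band = np.array([[10.0, 20.0],
                 [30.0, 40.0]])
print(resample_up_nearest(band))  # a 4 x 4 image with each value repeated in 2 x 2 blocks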

An additional issue a remote sensing analyst may encounter is that a study area may be too large to be covered by a single satellite image. To fix this, the analyst must mosaic two images together, creating a larger scene to analyze. Erdas offers two options for image mosaicking: Mosaic Express and MosaicPro. Both options require the image files to be added in a particular way: before the images are added, the Multiple tab must be clicked and the "multiple images in virtual mosaic" radio button selected, or the mosaic tools will not run. Once the images are added, the analyst selects either the Mosaic Express tool or the MosaicPro tool to run the mosaicking operation. Mosaic Express can be run with little input, whereas MosaicPro needs many options specified in order to complete a more accurate image mosaic (Figure 8 and Figure 9).

Remote sensing can be used to analyze the change in land cover over time using a powerful model known as Binary Change Detection, or image differencing. By comparing the brightness values of two remote sensing images taken years apart, an analyst can model the differences in land cover that have occurred. Using the two input operators interface in Erdas, an analyst can difference images taken many years apart and run the model to see the change in land cover. An analyst can also estimate which pixels will be flagged as change by setting thresholds on the difference image's histogram. Here a threshold rule based on the mean and standard deviation of the image (taken from its metadata) is used: the upper change threshold is the mean plus 1.5 times the standard deviation, and the lower change threshold is the mean minus 1.5 times the standard deviation, a negative value. Pixels beyond these limits on the histogram are considered changed (Figure 1).

Figure 1. Binary Difference Calculation. The histogram depicts the original image, before processing. The red dashed lines indicate the Lower Change Threshold (-24.3) and the Upper Change Threshold (71.6), calculated as the mean ± 1.5 standard deviations, with the Lower Change Threshold always negative in value. During the binary difference process, changed pixels fall to the left and right of these thresholds, respectively.


The analyst can use Model Maker to do the calculation by simply subtracting the newer image from the older image and adding the constant. The output of that model can then be fed into another model, which creates a binary image of the changed pixels using a conditional argument: EITHER 1 IF (the output image > change/no change threshold value) OR 0 OTHERWISE. By importing that output into ArcMap, an analyst can create a map showing the land cover change over time (Figure 10).
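For readers who prefer to see the logic spelled out, here is a hedged sketch of the same difference-and-threshold idea in NumPy. It omits the constant that the ERDAS workflow adds to keep values positive (with symmetric thresholds the same pixels are flagged either way), and the input arrays are synthetic, so it is an illustration of the approach rather than the Model Maker graph itself.

import numpy as np

# Hedged sketch of binary change detection on two hypothetical single-band
# brightness images from an older and a newer date.
def binary_change(older, newer, k=1.5):
    diff = newer.astype(float) - older.astype(float)  # image differencing
    mean, std = diff.mean(), diff.std()
    lower, upper = mean - k * std, mean + k * std     # change/no-change thresholds
    # EITHER 1 IF the difference falls outside the thresholds, 0 OTHERWISE.
    return ((diff < lower) | (diff > upper)).astype(np.uint8), (lower, upper)

rng = np.random.default_rng(3)
older = rng.integers(0, 255, size=(100, 100))
newer = older + rng.normal(0, 10, size=(100, 100))    # mostly unchanged pixels
newer[40:50, 40:50] += 120                            # a patch of simulated change
change_mask, thresholds = binary_change(older, newer)
print(thresholds, change_mask.sum())                  # thresholds and flagged pixel count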

Results:

The various methods of creating a study area derived from a larger satellite image scene can be seen below. The subsets follow the boundaries of the inquire box and the shapefile, respectively. In both cases the result is a smaller image containing just the area of interest.


Figure 2. Subsetting with the use of an Inquire box. Left Viewer, Original image of the Eau Claire area from 2011 with Inquire box (gold). Right Viewer, Eau Claire subset image extracted from the Eau Claire area image, boundaries consist of the inquire box from the left viewer.

Figure 3. Subsetting with the use of a shapefile. Left Viewer, Original image of the Eau Claire area from 2011 with an overlaid shapefile. Right Viewer, New subset image whose boundaries consist of the AOI from the Eau Claire shapefile.   


The pan sharpening technique results in an image (Figure 4, right) that now has the same pixel size as the panchromatic band (15m x 15m) and has darker tones and colors to aid in the image interpretation process.

Figure 4. Image Fusion. The left viewer contains an image of Eau Claire and Chippewa counties in its original form. The right viewer contains the same image that has been pan sharpened.

Similar to the pan sharpening process, the haze reduction process also results in truer image color to help in the image interpretation process; however, haze reduction does not resize the image's pixels.

Figure 5. Radiometric Enhancement. Left Viewer, original image (zoomed in) of the city of Eau Claire. Right viewer, image after undergoing haze reduction.


By linking Erdas to Google Earth we can see just how powerful a selective key Google Earth can be. The image below displays a zoomed-in view of downtown Eau Claire, shown both in the typical view of a satellite band image and as it appears in Google Earth imagery. As far as image interpretation goes, Google Earth can be an extremely valuable tool.


Figure 6. Linking the Image Viewer to Google Earth. The viewer on the left displays downtown Eau Claire. The viewer on the right is linked to Google Earth and displays downtown Eau Claire at a linked extent. This showcases how useful Google Earth can be in the image interpretation process.


When zoomed out, it is very hard to tell the difference between the original image and the interpolated image; both the nearest neighbor and bilinear interpolation results are indistinguishable from their originals at that distance. However, when viewed up close, the differences between the interpolation methods are striking. Nearest neighbor interpolation assigns each cell in the new image the value of the closest cell in the original image, so most cell values in the interpolated image match the old image. Bilinear interpolation, however, averages the values of the four closest cells of the old image, which results in a "blurry" interpolated image where the cells blend into one another (below is an example of what both interpolation methods look like close up).

Figure 7. Resampling Up. The left viewer shows the original image (30mx30m) compared to the right viewer, which has the Resampled Up image (15mx15m). The method of Interpolation is Nearest Neighbor, which means that the values for the cells in the new image were determined by the closest cells of the old image.

  
While Mosaic Express is faster and less computationally intensive than MosaicPro, the results of the two tools are drastically different. Mosaic Express leaves the distinct border between the two images intact and is unable to adjust the images to have similar color outputs. MosaicPro is able to do both of these things; however, it requires more inputs from the analyst in order to complete the operation. Despite the extensive inputs, MosaicPro is the preferred choice for mosaicking in image interpretation because of the higher quality of the output image produced.

Figure 8 and Figure 9. Comparison of Image Mosaicking Tools from Erdas: Mosaic Express and MosaicPro. The viewer on the left displays two images mosaicked with the Mosaic Express tool. The viewer on the right displays two images mosaicked with the MosaicPro tool.

After completing the calculations for the binary change detection, the image can be loaded into ArcMap and turned into a proper map, with base data to put the land cover changes into context. Looking at the distribution of land cover change, the largest changes take place around urban areas, such as small towns and suburbs. Further analysis may reveal that this is due to population shifts, with people moving toward more urban areas. Alternatively, it may be that forested areas were increasingly converted to farmland throughout the area during this time.

Although the actual answer may be neither hypothesis, the model can depict many stories caused by the changes in land cover over time.

Figure 10. Mapping Pixel Changes via Spatial Modeler and ArcGIS. The map depicts change in pixel values from August 1991 to August 2011. Green areas indicate pixels that changed.



Sources:

Extension: Remote Sensing Resampling Methods, Retrieved 11/1/2016.

“What Are you Looking at?” Discovery Station. Retrieved 11/1/2016.
http://www.smithsonianconference.org/climate/wp-contents/uploads/2009/09/ImageInterpretation.pdf

Satellite images are from Earth Resources Observation and Science
Center, United States Geological Survey. Shapefile is from Mastering ArcGIS 6th edition Dataset
by Maribeth Price, McGraw Hill. 2014.


Change in Land Cover from 1991-2011 map sources: Esri, DeLorme, GEBCO, NOAA, NGDC, and other contributors. Cartography by Paul Cooper.