Thursday, November 17, 2016

Geometric Correction

Goal and Background

Before analysis of remotely sensed images can begin, the images need to be preprocessed to ensure planimetric accuracy, and thus the accuracy of the biophysical and sociocultural information the images contain. There are two broad types of errors: internal errors and external errors. Internal errors are systematic and stem from the sensor itself. External errors are non-systematic and usually stem from the atmosphere. Internal errors fall into three broad types: image offset, scanning-system-induced variation, and scanning-system one-dimensional relief displacement. Due to their systematic nature, most internal errors are predictable and can be corrected by the data vendor before the data is delivered to the analyst.

External errors, by contrast, are categorized into two types: altitude changes and attitude changes. Altitude changes are caused by gradual changes in the altitude of a remote sensing platform as it collects data. Increases in altitude cause a decrease in spatial resolution (smaller scale) in the resulting image, while decreases in altitude cause an increase in spatial resolution (larger scale). Attitude changes are caused by changes in roll, pitch, and yaw.

Here is a short video explaining roll, pitch, and yaw in an aircraft.


As a plane flies through the air, various atmospheric conditions cause it to alter its roll, pitch, and yaw so that the plane can stay airborne. However, these changes alter the sensor's orientation and affect data collection. While gyro stabilization is employed in aircraft remote sensing and can compensate for changes in roll and pitch, a gyro stabilizer does not account for errors in yaw. Because external errors are non-systematic, they must be corrected with tools or ancillary data before accurate information can be derived from the image.

The goal of this lab is to understand how to correct for these external errors in remotely sensed images using ERDAS. The methods employed are Image to Map Rectification and Image to Image Rectification, where the correction takes place via ground control points and polynomial equations. Ground Control Points (GCPs) are placed on the uncorrected image and simultaneously at the same locations on a reference image. A reference image may be a known corrected image or an accurate map.

The GCPs are placed systematically using known features for reference, such as intersections, airports, or road crossings. It is important to note that GCPs should never be placed on lakes or other water bodies, or on vegetation, as these features are too variable and change too frequently to be relied upon as GCPs. The number of GCPs to use depends on the size of the imagery, the extent of the distortion, and the order of polynomial used in the correction process. A higher-order polynomial outputs a more robustly corrected image, but requires a larger number of GCPs to compute the correction. Once at least the minimum number of GCPs has been placed (using 1.5 times the minimum is recommended for greater accuracy), we need to make sure that we have a very small Root Mean Square Error (RMSE), which measures the accuracy of GCP placement between the uncorrected image and the reference image. The acceptable RMSE varies with the order of polynomial used in the correction, but generally needs to be lower than 0.5.
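As a rough illustration, the RMSE is the square root of the mean squared distance between where each GCP lands after transformation and where it sits on the reference image. A minimal sketch in Python, assuming numpy and entirely hypothetical coordinates:

```python
import numpy as np

# hypothetical GCP positions (pixels): transformed points vs. reference points
predicted = np.array([[102.3, 50.8], [200.1, 149.6], [310.4, 299.9]])
reference = np.array([[102.0, 51.0], [200.0, 150.0], [310.0, 300.0]])

# RMSE = sqrt(mean of squared point-to-point distances)
residuals = predicted - reference
rmse = np.sqrt(np.mean(np.sum(residuals**2, axis=1)))
print(f"Total RMSE: {rmse:.3f} pixels")  # generally should be below 0.5
```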

Methods
The procedure for placing GCPs is as follows:
1.      Locate candidate points and collect GCPs
2.      Compute and test the transformation (a sketch of this computation follows the list)
3.      Create an output image file with new coordinate information; pixels are resampled in the process.
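A first-order polynomial is an affine transformation with six unknown coefficients (three per axis), which is why at least 3 GCPs are needed; in general, an order-t polynomial requires (t+1)(t+2)/2 GCPs, so a third-order model needs 10. A hedged least-squares sketch of the first-order fit, assuming numpy and hypothetical GCP coordinates:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares first-order fit: x' = a0 + a1*x + a2*y (and likewise for y')."""
    A = np.column_stack([np.ones(len(src)), src[:, 0], src[:, 1]])
    coef_x, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
    coef_y, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)
    return coef_x, coef_y

# hypothetical GCPs: pixel coordinates in the uncorrected image -> map coordinates
src = np.array([[10.0, 20.0], [400.0, 35.0], [220.0, 380.0], [50.0, 300.0]])
dst = np.array([[500010.0, 4300020.0], [500400.0, 4300040.0],
                [500220.0, 4300380.0], [500050.0, 4300300.0]])
coef_x, coef_y = fit_affine(src, dst)
```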

The first image that needs to be corrected is a Digital Raster Graphic of the Chicago area from the United States Geological Survey, and the reference image is a map of Chicago. A first-order polynomial was used for this model, so a minimum of 3 GCPs was needed. The acceptable total RMS error was anything under 2.0, but ideally the RMS error should be under 0.5.

Figure 1. Image to Map Rectification. The uncorrected image of Chicago is in the left window, while the accurate map is in the right window. Once the minimum number of GCPs is placed, the total RMS control point error appears in the lower right-hand corner of the screen (Control Point Error).


Once the GCPs were placed, the geometric correction model was run with the nearest neighbor resampling technique.

The next image that needs to be corrected is of Sierra Leone; the photo is not in the correct planimetric (X, Y) position, as the picture below shows. The top image is the uncorrected photo, and the bottom image is the reference photo. This illustrates the second method of geometric correction, Image to Image Rectification.

Figure 2. Image to Image Rectification. Here the uncorrected and corrected images are stacked on top of each other. Note the error in planimetric (XY) space in the uncorrected image (top), which does not line up with the corrected image (bottom).

For this model a third-order polynomial was used, which required a minimum of 10 GCPs; again more than the minimum were used, in this case 12 GCPs. This time the RMS error needed to be below 0.5, which was achieved by fine-tuning the placement of the GCPs used in the calculation.
Figure 3. Image to Image Rectification of Sierra Leone. When using a third-order polynomial, the minimum number of GCPs increases to 10.


Results


For the Chicago image we can see both the uncorrected and corrected images side by side (Figure 4). The uncorrected image is on the left-hand side and the corrected image is on the right-hand side. While the images look similar, the corrected image has undergone the nearest neighbor interpolation method, which assigns each cell in the corrected image the value of the closest cell in the uncorrected image, if the two were overlapped. It is also important to note that a first-order polynomial correction is not as robust as higher-order polynomial corrections and is used for images with minimal inaccuracies, so we would not expect to see a drastic change in the Chicago DRG once it has been corrected.

Figure 4. Comparison of the original uncorrected image of Chicago (left) and the corrected image (right).

The Sierra Leone image, on the other hand, had a much greater degree of inaccuracy than the Chicago image (Figure 2). Therefore, the Sierra Leone image underwent a higher-order correction in the form of a third-order polynomial model. After the image to image geometric correction, Sierra Leone is rectified (Figure 5) and in the correct planimetric (X, Y) position, and can now be used for analysis. Bilinear interpolation was used in this model, which resulted in a somewhat hazy output. While the haze may affect analysis, it could be corrected by choosing a different interpolation method or by using a haze reduction tool. In this instance haze is not as important as correcting the planimetric position of the original data.
Figure 5. Comparison of the reference image for Sierra Leone (left) and the corrected image for Sierra Leone (right). The corrected image now lines up planimetrically with the reference image.

Sources


Illinois Geospatial Data Clearing House [USGS, 7.5 Minute Digital Raster Graphic (DRG)]. Retrieved November 16, 2016, from https://clearinghouse.isgs.illinois.edu/

USA | Earth Resources Observation and Science (EROS) Center. (Landsat TM Image for Eastern Sierra Leone). Retrieved November 16, 2016, from http://eros.usgs.gov/usa

YouTube. Airplane control - Roll, Pitch, Yaw. Retrieved November 16, 2016, from http://youtu.be/pQ24NtnaLl8

Thursday, November 10, 2016

Light Detection And Ranging

Background and Goals


LiDAR stands for Light Detection and Ranging, and is a method used in remote sensing to capture three-dimensional data about the characteristics of the Earth's surface. LiDAR can be captured by a variety of remote sensing platforms, such as airplanes, drones, satellites (no longer used), and ground units. The data that LiDAR captures is referred to as a "point cloud", which looks exactly like it sounds: many points of captured data in 3D space. LiDAR point clouds can be used to create a variety of models, such as Digital Surface Models (DSMs) and Digital Terrain Models (DTMs), and can also be used in a variety of analyses of Earth's man-made and natural features. Here is a background video on the basics of LiDAR and some of the uses that LiDAR can have for data analysis.


In this lab we will look at the processing of LiDAR data into various models, such as DTMs and DSMs, as well as other images which can be derived from the LiDAR point cloud.

Methods

First, we are given the following scenario: you are a GIS manager working on a project for the City of Eau Claire, WI, USA. You have acquired a LiDAR point cloud in LAS format for a portion of the city. You want to perform an initial quality check on the data by looking at its area and coverage, and also verify the current classification of the LiDAR points.

To begin, the LAS dataset needed to be added to ArcMap and its properties examined. When a new LAS dataset is brought into ArcMap, statistics need to be calculated from the header information included in each .las file. This step is important because the statistics are used for quality assurance and quality control of the overall dataset and the individual .las files.
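Outside of ArcMap, a similar quick check can be sketched in Python. This is a hedged example assuming the laspy package and a hypothetical tile name:

```python
import laspy
import numpy as np

las = laspy.read("eau_claire_tile.las")  # hypothetical file name

# header statistics used for QA/QC
print("points:", las.header.point_count)
print("extent mins:", las.header.mins)   # [x, y, z]
print("extent maxs:", las.header.maxs)

# verify which classification codes are present in the tile
codes, counts = np.unique(np.asarray(las.classification), return_counts=True)
print(dict(zip(codes.tolist(), counts.tolist())))
```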

The next step is to assign both the vertical (Z) and horizontal (XY) coordinate systems, which should be included in the LAS dataset's metadata. The metadata also includes the units of the data which make up the point cloud. Once the coordinate systems and units are assigned, the data is ready to be analyzed.

To make sure that the data cloud has been correctly spatially located, a shapefile can be added and used as a reference check.



Figure 1. Tiles of the LAS dataset after the statistics have been calculated, the horizontal and vertical coordinate systems have been assigned, and the shapefile has been added to make sure the LAS dataset is spatially correct.

The reason that the LAS dataset is displayed as tiles is that the point cloud contains an extremely large number of data points, and displaying them constantly would take a great deal of computational power and time, redrawing the points over and over as the dataset is examined (Figure 1). Instead, the dataset is displayed as tiles and does not show the point cloud until a specific tile is zoomed in to.
  

Below we can see an example of the point cloud (Figure 2). Here the point cloud is displaying data points organized by elevation, based on first returns. For comparison, we can also see the basemap of the UWEC campus (Figure 3). By comparing the two images we can see the elevation of the buildings represented in the point cloud, as well as the difference between upper and lower campus: upper campus is the orange area in the point cloud, lower campus is the green area, and the yellow area in between represents the hill and the change in elevation between the two.
Figure 2. The LAS dataset point cloud. The points displayed show elevation based on first return data. The area is the UWEC lower campus (green area with orange/yellow buildings), the upper campus (yellow area), and the surrounding area.
Figure 3. World Imagery Basemap. This is the same area shown in Figure 2, for reference.


The data in the point cloud can be separated by returns. Multiple returns occur when a LiDAR pulse strikes more than one object along its path, with each object reflecting back part of the pulse's energy (Figure 4).

Figure 4. Number of Returns. This image depicts how a LiDAR pulse can interact with various objects in its path and produce multiple returns.
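As an illustration, first returns (used later for the DSM) can be separated from the rest of the cloud in Python; again a hedged sketch assuming laspy and a hypothetical file name:

```python
import laspy
import numpy as np

las = laspy.read("eau_claire_tile.las")  # hypothetical file name

return_number = np.asarray(las.return_number)
number_of_returns = np.asarray(las.number_of_returns)

first = return_number == 1                 # first returns: tops of canopy and roofs
last = return_number == number_of_returns  # last returns: often the ground
print("first returns:", int(first.sum()), "of", las.header.point_count)
```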

The next task was to look at the LAS dataset in 3D view. Using the 3D view, a box can be drawn which allows the analyst to see the data in profile. The image shown below (Figure 5) is the profile view of a pedestrian footbridge which crosses the Chippewa River, connecting the UWEC campus.

Figure 5. LAS 3D Viewer. This image depicts a cross-sectional view in 3D of the first returns of the LiDAR dataset. The area is the pedestrian footbridge that runs across the Chippewa River.

Similar to viewing specific areas in profile, we can create elevation models using a related technique. There are two models of interest for this report: the Digital Surface Model (DSM) and the Digital Terrain Model (DTM). The tool used to create both models is the same, the raster to surface tool; however, the choice of returns determines which output is produced.

For the DSM we want to choose the first returns, because in this model we want to see the three-dimensional objects on the Earth's surface and the variation in their heights (Figure 6).

For the DTM we want to choose ground returns, because we want to model the ground surface itself, not the three-dimensional man-made or natural objects which would obscure it.
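Conceptually, both models are grids of per-cell elevations drawn from different subsets of the point cloud. A minimal sketch of the idea, assuming laspy and numpy, a hypothetical file name, and no filling of empty cells (which the real tools handle):

```python
import laspy
import numpy as np

las = laspy.read("eau_claire_tile.las")  # hypothetical file name
x, y, z = np.asarray(las.x), np.asarray(las.y), np.asarray(las.z)
rn = np.asarray(las.return_number)
cls = np.asarray(las.classification)

def grid_max(xp, yp, zp, cell=1.0):
    """Bin points into a raster, keeping the highest elevation per cell."""
    col = ((xp - x.min()) / cell).astype(int)   # shared origin keeps rasters aligned
    row = ((y.max() - yp) / cell).astype(int)
    ras = np.full((int((y.max() - y.min()) / cell) + 1,
                   int((x.max() - x.min()) / cell) + 1), np.nan)
    for r, c, v in zip(row, col, zp):
        if np.isnan(ras[r, c]) or v > ras[r, c]:
            ras[r, c] = v
    return ras

dsm = grid_max(x[rn == 1], y[rn == 1], z[rn == 1])     # first returns: surface features
dtm = grid_max(x[cls == 2], y[cls == 2], z[cls == 2])  # ASPRS class 2: ground returns
```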

Once we have a DSM or DTM, we can run tools to enhance our analysis of the LiDAR data, such as creating a hillshade model. There are a variety of tools to use in the 3D Analyst raster toolset in ArcGIS.
Figure 6. Digital Surface Model with hillshade effect. The area depicted is of the confluence of the Eau Claire and Chippewa Rivers. 


Once both models are complete, we can layer them on top of each other and use the swipe tool to "peel away" the DSM to see the DTM underneath it (Figure 7). By doing this we can compare and contrast both models simultaneously.

Figure 7. Digital Terrain Model and Digital Surface Model with hillshade effects. Here both the DSM and DTM are displayed at the same time. By using the swipe tool we can peel back the DSM and see the DTM underneath it. 
The last objective for this report was to use the LiDAR returns to create a LiDAR intensity image, using the LAS dataset to raster tool.
Figure 8. LiDAR Intensity Image.
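Intensity is just another per-point attribute, so the same gridding idea applies. A short sketch under the same assumptions (laspy, hypothetical file name), averaging first-return intensity per cell:

```python
import laspy
import numpy as np

las = laspy.read("eau_claire_tile.las")  # hypothetical file name
first = np.asarray(las.return_number) == 1
x = np.asarray(las.x)[first]
y = np.asarray(las.y)[first]
inten = np.asarray(las.intensity)[first].astype(float)

cell = 1.0
col = ((x - x.min()) / cell).astype(int)
row = ((y.max() - y) / cell).astype(int)
sums = np.zeros((row.max() + 1, col.max() + 1))
hits = np.zeros_like(sums)
np.add.at(sums, (row, col), inten)   # accumulate intensity per cell
np.add.at(hits, (row, col), 1)       # count points per cell
intensity_image = np.where(hits > 0, sums / np.maximum(hits, 1), np.nan)
```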

Results

LiDAR is a very powerful tool in remote sensing, and it is not only interesting to perform analysis with, but also genuinely fun. However, how LiDAR works must be understood if a robust analysis is to be performed. LiDAR point clouds hold a great deal of data, and the tools we have to process that data let us explore a remotely sensed world in 3D, as well as create various models that we can use to analyze the world around us. We can create models of the Earth's bare surface (Figure 9), and map fluvial features or assess flood risk in areas of low elevation.

Figure 9. Digital Terrain Model With Hillshade Effects
Or we can see the features, both man-made and natural, which dot the landscape (Figure 10). The amazing thing about a digital surface model is that it allows us to see places we may never go, and it has many possibilities for analyzing and understanding our world.
Figure 10. Digital Surface Model With Hillshade Effects
               

Data sources
Maribeth Price, 2014. Mastering ArcGIS 6th Edition. Eau Claire County Shapefile.

[The Science of Measuring Ecosystem: NEON Education]. (2014, June 18th). LiDAR – Introduction to Light Detection and Ranging. Retrieved November 10, 2016, from  https://youtu.be/EYbhNSUnIdU

"What is Light Detection and Ranging" (2016, July 30). A Complete Guide to LiDAR: Light Detection and Ranging - GIS Geography. Retrieved November 10, 2016, from http://gisgeography.com/lidar-light-detection-and-ranging/

"Basemap" ESRI 2011. ArcGIS Desktop: Release 10. Redlands, CA: Environmental Systems Research Institute.

Eau Claire County, 2013. [LIDAR Point Cloud Data and Tile Index]. 

Tuesday, November 1, 2016

Miscellaneous Image Functions

Goals and Background:

The goals of this lab are to apply various image-processing tasks to our study of remotely sensed images by accomplishing the following:

1.     Use various methods to create a study area derived from a larger satellite image scene.

2.     Use spatial resolution techniques to optimize an image for improved image interpretation.

3.     Use radiometric enhancement techniques to enhance image quality for improved image interpretation.

4.     Link images in Erdas to Google Earth, thereby taking advantage of Google Earth's ability to serve as a high-resolution image interpretation key for ancillary information.

5.     Introduce resampling techniques.

6.     Explore the differences in Erdas mosaicking tools by examining each tool's output.

7.     Introduce Binary change techniques, as well as using various GIS platforms to interpret those changes.
Methods: 

There are two ways to create an image subset in Erdas: by using an inquire box, or by using a shapefile. To create an inquire box, right click on a satellite image in Erdas and select the inquire box option. The inquire box can then be moved and manipulated to cover the specific area of interest that you would like to subset. Once the area has been determined, using the Subset and Chip tool and selecting Create Subset Image will allow you to make any final adjustments and finalize the creation of the image subset (Figure 2).

Creating an image subset with a shapefile is a very similar process. By overlaying a shapefile on a satellite image, we can use the Subset and Chip tool in the same way to create an output image subset with the same borders as the shapefile (Figure 3).
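For comparison, the same clip-by-shapefile operation can be sketched outside Erdas in Python; this assumes the rasterio and fiona packages and hypothetical file names:

```python
import fiona
import rasterio
import rasterio.mask

# hypothetical file names for the AOI shapefile and the scene to subset
with fiona.open("eau_claire_aoi.shp") as shp:
    shapes = [feature["geometry"] for feature in shp]

with rasterio.open("landsat_scene.img") as src:
    clipped, transform = rasterio.mask.mask(src, shapes, crop=True)
    meta = src.meta.copy()
    meta.update(height=clipped.shape[1], width=clipped.shape[2], transform=transform)

with rasterio.open("subset.img", "w", **meta) as dst:
    dst.write(clipped)
```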

Pan sharpening is a great way of improving image quality for interpretation. Merging an image with a higher-resolution image increases clarity while still retaining the original image's data. In Erdas, using the raster tool "Resolution Merge", two images can be pan sharpened using a multiplicative algorithm and the nearest neighbor resampling technique, producing a higher-resolution image (Figure 4).
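A hedged numpy sketch of the multiplicative idea: the coarser band is first brought to the panchromatic grid (nearest neighbor here, via simple pixel duplication for the exact 2x case), then multiplied by the pan band and rescaled. The data are hypothetical.

```python
import numpy as np

def multiplicative_pansharpen(ms_band, pan):
    """Multiplicative merge of one 30 m band with a 15 m panchromatic band."""
    # nearest neighbor upsampling for an exact 2x factor: duplicate each pixel
    up = np.repeat(np.repeat(ms_band, 2, axis=0), 2, axis=1).astype(np.float64)
    product = up * pan                  # the multiplicative algorithm itself
    # rescale the product back to 8-bit brightness values
    return np.round(product / product.max() * 255).astype(np.uint8)

# hypothetical toy data: a 4x4 multispectral band and an 8x8 pan image
ms_band = np.random.randint(0, 256, (4, 4))
pan = np.random.randint(0, 256, (8, 8))
sharp = multiplicative_pansharpen(ms_band, pan)
```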

Another problem often encountered by analysts is atmospheric haze, which results in cloudy images. In order to correct haze, a radiometric enhancement technique is used, known as Haze Reduction. This technique brings out the true color of the satellite image and results in a clearer image (Figure 5).
Linking Google Earth to a satellite image in Erdas can provide the analyst with extra information during the image interpretation process. Google Earth can provide this information easily, as it uses up-to-date, high-resolution GeoEye satellite data. Simply by clicking on the Connect to Google Earth option and selecting Sync Views, an analyst gains a very powerful image interpretation key (Figure 6).

If an analyst is required to increase or decrease the size of an image's pixels, a resampling technique is required. Resampling up decreases the size of the pixels, while resampling down increases the size of the pixels. By selecting "Resample Pixel Size" in the spatial tools of the raster tab in Erdas, an analyst can choose both the resampling direction and how the new pixels will be interpolated. Different interpolation methods have trade-offs (Figure 7).

An additional issue a remote sensing analyst may encounter is that the study area of interest may be too large to be covered by a single satellite image. To fix this, the analyst must mosaic two images together, creating a larger scene to analyze. Erdas presents two options for image mosaicking: Mosaic Express and MosaicPro. Both options require the image files to be added in a particular way: before the images are added, the Multiple tab must be clicked and the "Multiple Images in Virtual Mosaic" radio button selected, or the mosaic tools will not run. Once the images are added, the analyst selects either the Mosaic Express tool or the MosaicPro tool to run the mosaicking operation. Mosaic Express can be run with little input, whereas MosaicPro needs many options specified in order to complete a more accurate image mosaic (Figure 8 and Figure 9).

Remote sensing can be used to analyze the change in land cover over time using a powerful model known as Binary Change Detection, or image differencing. By comparing the brightness values of two remote sensing images taken years apart, an analyst can model the differences in land cover that have occurred. Using the two-input-operators interface in Erdas, an analyst can difference images from many years apart and run the model to see the change in land cover. The analyst can also estimate which data will change by determining thresholds from the difference image's histogram. Using the mean of the image values taken from the metadata, the upper change threshold is the mean plus 1.5 times the standard deviation, and the lower change threshold is the mean minus 1.5 times the standard deviation (which comes out negative for a difference image). These thresholds bound the unchanged data on the histogram (Figure 1).

Figure 1. Binary Difference Calculation. The histogram depicts the original image, before processing. The red dashed lines indicate the Lower Change Threshold (-24.3) and Upper Change Threshold (71.6), calculated as the mean ± 1.5 (standard deviation), with the lower threshold negative in value. Pixels falling to the left and right of the thresholds, respectively, are flagged as change during the binary difference process.


The analyst can use Model Maker to do the calculation by simply subtracting the newer image from the older image and adding the constant. The output of that model can then be fed into another model, which creates a binary image of the changed pixels using a conditional argument: EITHER 1 IF (the output image > change/no change threshold value) OR 0 OTHERWISE. By importing that output into ArcMap, an analyst can create a map showing the land cover change over time (Figure 10).
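A hedged numpy sketch of the same difference-and-threshold logic, using random arrays as stand-ins for the two dates (ERDAS adds a constant to keep the difference positive; signed integers make that unnecessary here):

```python
import numpy as np

# hypothetical brightness values for the same band on two dates, ~20 years apart
older = np.random.randint(0, 256, (1000, 1000)).astype(np.int32)
newer = np.random.randint(0, 256, (1000, 1000)).astype(np.int32)

diff = older - newer                 # difference image (signed, so no constant needed)
mean, std = diff.mean(), diff.std()
upper = mean + 1.5 * std             # upper change threshold
lower = mean - 1.5 * std             # lower change threshold

# EITHER 1 IF outside the thresholds, 0 OTHERWISE
changed = np.where((diff > upper) | (diff < lower), 1, 0)
print("changed pixels:", int(changed.sum()))
```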

Results:

The various methods of creating a study area derived from a larger satellite image scene can be seen below. Each subset takes the boundaries of the inquire box or the shapefile, respectively. In both cases the result is a smaller image which contains just the area of interest.


Figure 2. Subsetting with the use of an Inquire box. Left Viewer, Original image of the Eau Claire area from 2011 with Inquire box (gold). Right Viewer, Eau Claire subset image extracted from the Eau Claire area image, boundaries consist of the inquire box from the left viewer.

Figure 3. Subsetting with the use of a shapefile. Left Viewer, Original image of the Eau Claire area from 2011 with an overlaid shapefile. Right Viewer, New subset image whose boundaries consist of the AOI from the Eau Claire shapefile.   


The pan sharpening technique results in an image (Figure 4, right) which now has the same pixel size as the panchromatic band (15 m x 15 m) and has darker tones and colors that aid the image interpretation process.

Figure 4. Image Fusion. The left viewer contains an image of Eau Claire and Chippewa counties in its original form. The right viewer contains the same image that has been pan sharpened.

Similar to the pan sharpening process, the haze reduction process also results in truer image color to help in the image interpretation process; however, haze reduction does not resize the image's pixels.

Figure 5. Radiometric Enhancement. Left Viewer, original image (zoomed in) of the city of Eau Claire. Right viewer, image after undergoing haze reduction.


By linking Erdas to Google Earth we can see just how powerful an interpretation key Google Earth can be. The image below displays a zoomed-in view of downtown Eau Claire, both in the typical view of a satellite band image and as it appears in Google Earth imagery. As far as image interpretation goes, Google Earth can be an extremely valuable tool.


Figure 6. Linking the Image Viewer to Google Earth. The viewer on the left displays downtown Eau Claire. The viewer on the right is linked to Google Earth and displays downtown Eau Claire at a linked distance. This showcases how useful Google Earth can be in the image interpretation process.


Viewed at full extent, it is very hard to tell the difference between the original image and the interpolated image; both the nearest neighbor and bilinear interpolation results are indistinguishable from their originals. When zoomed in close, however, the differences between the interpolation methods are striking. Nearest neighbor interpolation assigns each cell of the new image the value of the closest cell of the original image, so most cell values in the interpolated image match the old image. Bilinear interpolation instead averages the values of the 4 closest cells of the old image, which results in a very "blurry" interpolated image where the cells blend into one another (a sketch comparing the two methods close up follows below).
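A hedged illustration of both methods on a tiny synthetic array, assuming scipy; order=0 gives nearest neighbor and order=1 gives bilinear interpolation:

```python
import numpy as np
from scipy.ndimage import zoom

img = np.array([[10, 50], [90, 130]], dtype=float)  # tiny 2x2 "image"

nearest = zoom(img, 2, order=0)   # nearest neighbor: blocks of duplicated values
bilinear = zoom(img, 2, order=1)  # bilinear: values blend between neighbors

print(nearest)   # crisp blocks, same values as the original
print(bilinear)  # smooth gradients, the "blurry" look described above
```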

Figure 7. Resampling Up. The left viewer shows the original image (30 m x 30 m); the right viewer shows the resampled-up image (15 m x 15 m). The interpolation method is nearest neighbor, meaning the values for the cells in the new image were taken from the closest cells of the old image.

  
While Mosaic Express is faster and not as computationally intensive as MosaicPro, the results of the two tools are drastically different. Mosaic Express leaves the distinct border between the two images intact and is unable to adjust the images to have similar color outputs. MosaicPro can do both of these things, though it requires more inputs from the analyst to complete the operation. Despite the extensive inputs, MosaicPro is the preferred choice for mosaicking because of the higher quality of the output image it produces.

Figure 8 and Figure 9. Comparison of image mosaicking tools in Erdas: Mosaic Express and MosaicPro. The viewer on the left displays two images mosaicked with the Mosaic Express tool; the viewer on the right displays the same images mosaicked with MosaicPro.

After completing the binary change detection calculations, the image can be loaded into ArcMap and turned into a proper map, with base data to put the land cover changes into context. Looking at the distribution of land cover change, the largest changes take place around urban areas, such as small towns and suburbs. Further analysis may reveal that this is due to population shifts, with people moving toward more urban areas. Alternatively, it may be that forested areas were increasingly converted to farmland throughout the region during this time.

Although the actual answer may be neither hypothesis, the model can depict many stories caused by the changes in land cover over time.

Figure 10. Mapping pixel changes via the spatial modeler and ArcGIS. The map depicts change in pixel values between August 1991 and August 2011. Green areas indicate pixels which changed.



Sources:

Extension: Remote Sensing Resampling Methods. Retrieved November 1, 2016.

"What Are You Looking at?" Discovery Station. Retrieved November 1, 2016, from http://www.smithsonianconference.org/climate/wp-contents/uploads/2009/09/ImageInterpretation.pdf

Satellite images are from the Earth Resources Observation and Science Center, United States Geological Survey. Shapefile is from the Mastering ArcGIS 6th Edition dataset by Maribeth Price, McGraw Hill, 2014.


Change in Land Cover from 1991-2011 map sources are from Esri, DeLorme, GEBCO, NOAA NGDC, and other contributors. Cartography by Paul Cooper.