Tuesday, May 9, 2017

Lab 8: Spectral Signature Analysis

Background and Goals
The goal of this lab is to introduce students to the process of collecting and analyzing various spectral signatures from satellite images. This includes digitizing areas of different materials in order to collect their unique spectral signatures, then graphing and analyzing the results. It also includes creating NDVI images and analyzing ferrous mineral distribution.


Methods
To begin the lab, I brought in an image of western Wisconsin. Once the image was in the viewer, I digitized AOIs (Areas of Interest) across many different land surface features. These features included agricultural fields both planted and unplanted, different types of forest, rocks, and urban development. Once the AOIs were created, I brought them into the Signature Editor (Figure 1), where further analysis could be done. From the Signature Editor I graphed the different materials and analyzed their signatures across different bands. I then compared the materials' spectral signatures against each other, both among similar features and among features that are different (Figure 2).
Figure 1. Signature Analysis Table
Figure 2. Graph showing the different spectral signatures
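As a rough illustration of what the Signature Editor computes for each AOI, the sketch below pulls the mean digital number per band inside a digitized polygon. This is a minimal numpy/rasterio sketch with hypothetical file names (western_wisconsin.img, forest_aoi_mask.npy), not the Erdas workflow itself.

```python
import numpy as np
import rasterio

# Hypothetical inputs: a multiband scene and a boolean AOI mask with the
# same dimensions (e.g., a rasterized version of a digitized polygon).
with rasterio.open("western_wisconsin.img") as src:
    bands = src.read()                      # shape: (band, row, col)

aoi_mask = np.load("forest_aoi_mask.npy")   # True inside the AOI

# Mean digital number per band inside the AOI -- the feature's spectral
# signature, analogous to one row in the Signature Editor table.
signature = bands[:, aoi_mask].mean(axis=1)
for band_num, mean_dn in enumerate(signature, start=1):
    print(f"Band {band_num}: {mean_dn:.1f}")
```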

After analyzing the spectral signatures that different features reflect, my next task was to perform resource monitoring, in the form of vegetation health monitoring, by creating an NDVI (Normalized Difference Vegetation Index) image and a ferrous mineral content image of the Eau Claire-Chippewa Falls area. To do this I brought in an image of the area and used the NDVI tool to create the new NDVI image and the Indices tool to create an image showing the abundance of ferrous minerals. The new images were in black and white, so I imported them into ArcMap, classified the data, and created the maps below.
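For reference, NDVI is the normalized difference of the near-infrared and red bands, and the ferrous minerals index is, as I understand the Indices tool, a SWIR/NIR band ratio. A minimal numpy equivalent, assuming a hypothetical Landsat TM scene with TM band ordering:

```python
import numpy as np
import rasterio

# Hypothetical Landsat TM scene of the Eau Claire-Chippewa Falls area.
with rasterio.open("eau_claire_tm.img") as src:
    red  = src.read(3).astype("float32")    # TM band 3 (red)
    nir  = src.read(4).astype("float32")    # TM band 4 (near infrared)
    swir = src.read(5).astype("float32")    # TM band 5 (shortwave IR)

eps = 1e-6  # guard against division by zero in empty pixels

# NDVI = (NIR - Red) / (NIR + Red); high values indicate healthy
# vegetation, values near zero bare soil or urban surfaces.
ndvi = (nir - red) / (nir + red + eps)

# Ferrous mineral ratio (SWIR / NIR); higher values suggest greater
# ferrous mineral abundance.
ferrous = swir / (nir + eps)
```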


Figure 3. Map displaying the NDVI for the Eau Claire-Chippewa Falls Area
Figure 4. Map displaying the distribution of Ferrous Minerals in the Eau Claire-Chippewa Falls Area
Sources

Earth Resources Observation and Science Center, United States Geological Survey

Tuesday, May 2, 2017

Lab 7


Goals and Background
The goal of this lab was to demonstrate the ability to perform a variety of photogrammetric tasks on satellite and other aerial images. This includes the calculations behind image scales, areas of objects, and relief displacement. The lab also includes stereoscopy, creation of anaglyph images, and orthorectification of aerial and satellite images.

Methods
The first section of the lab involved calculating scales of images, measurements of objects, and relief displacement for distorted objects in images. To calculate scales for aerial images, I used the height at which the image was captured and the focal length of the camera lens to compute the scale of the images. To calculate the area of objects, I used the Measure Perimeters and Areas tools to digitize polygons around features and record their measurements. To measure the relief displacement of a smoke stack located on campus, I used the height of the tower, the height of the camera, and the radial distance from the top of the tower to the principal point, which I measured to be 0.38 inches.
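The two formulas behind this section are scale = focal length / flying height (in matching units) and relief displacement d = h * r / H, where h is the object height, r the radial distance on the photo, and H the flying height. A quick worked sketch with illustrative numbers (not the actual lab values):

```python
# Illustrative numbers only; the real values come from the lab handout.
focal_length_ft  = 152 / 304.8        # a 152 mm lens expressed in feet
flying_height_ft = 3980.0             # altitude above the local datum

# Photo scale: S = f / H (focal length over flying height, same units).
scale = focal_length_ft / flying_height_ft
print(f"scale = 1:{1 / scale:,.0f}")

# Relief displacement: d = (h * r) / H. The result is in the same
# units as r (here inches, matching the 0.38 in measurement above).
object_height_ft = 380.0
radial_dist_in   = 0.38
displacement_in  = (object_height_ft * radial_dist_in) / flying_height_ft
print(f"displacement = {displacement_in:.3f} in")
```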

The second section of the lab involved reading stereoscopic images and creating anaglyph images. To begin this section I was given two images: one had relief displacement, and the other had already been corrected. The objective of this section was to point out the distortions and compare them to the corrected image. I brought the two images into two separate viewers and zoomed in to the same objects in both. When looking at the two images side by side, it was obvious that the image on the left had relief displacement errors. The next part of the lab involved creating two anaglyph images. The first anaglyph image was created using images that had relief displacement, and the second was created using a Digital Surface Model (DSM) derived from a LiDAR point cloud (Figure 1). After creating the two anaglyph images and looking at them with 3D glasses, the effect that relief displacement can have on the accuracy of a stereoscopic image was obvious. The first image was severely distorted, and elevation changes were exaggerated and unrealistic compared to the second anaglyph derived from the LiDAR data.
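Erdas Imagine's anaglyph tool handled this step in the lab; as a sketch of the underlying idea, a red/cyan anaglyph is just the left image in the red channel and the right image in the green and blue channels (hypothetical grayscale inputs left.tif and right.tif):

```python
import numpy as np
import rasterio

# Load a hypothetical stereo pair as single-band grayscale arrays.
with rasterio.open("left.tif") as l, rasterio.open("right.tif") as r:
    left  = l.read(1)
    right = r.read(1)

# Left eye drives the red channel, right eye the green/blue (cyan)
# channels; viewed through red/cyan glasses, the offset reads as depth.
anaglyph = np.dstack([left, right, right])
```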

The next section of the lab involved orthorectifying images. This involved creating a new block file in Erdas Imagine and setting accurate parameters for that block file so that the orthorectification process could be done correctly. I chose the correct Horizontal Reference Coordinate System, Spheroid, Datum, and Projection Type. Once the parameters were correctly entered into the model, I began to bring in the images and collect Ground Control Points (GCPs) (Figure 2). After collecting GCPs from multiple images, storing them in the block file, and verifying the accuracy of the points, I used the Automated Tie Point generator to create tie points based on the GCPs that I collected earlier. After running the tool and verifying that the tie points were in the correct positions, I used the Ortho Resampling Process tool to complete the orthorectification. Once the tool had finished, I brought the new images into a new viewer (Figure 3).

Results
The images below show the results from the lab above.

Figure 1. Anaglyph Created using a Digital Surface Model

Figure 2. Collection of GCPs during the Orthorectification Process
Figure 3. Final Orthorectified Image
Sources
National Agriculture Imagery Program (NAIP) images are from United States Department of Agriculture, 2005.

Digital Elevation Model (DEM) for Eau Claire, WI is from United States Department of Agriculture Natural Resources Conservation Service, 2010.

LiDAR-derived surface models (DSM) for sections of Eau Claire and Chippewa are from Eau Claire County and Chippewa County governments respectively.

SPOT satellite images are from Erdas Imagine, 2009.

Digital elevation model (DEM) for Palm Springs, CA is from Erdas Imagine, 2009.
National Aerial Photography Program (NAPP) 2 meter images are from Erdas Imagine, 2009.


Sunday, April 16, 2017

Lab 6

Goals and Background
The purpose of this lab is to introduce students to geometric correction using both image-to-map and image-to-image rectification.

Methods
To begin the lab, I imported an image of the Chicago area into Erdas Imagine. This image was distorted and needed to be corrected using image-to-map rectification. To do this, I used the Control Points function under the Multispectral tools tab. This function allowed me to place Ground Control Points (GCPs) onto another image to improve its spatial accuracy. I chose to place the GCPs onto a reference map and perform a first order polynomial transformation. I then brought in the digital reference map and placed four GCPs spaced across the images, placing them at locations such as road intersections. After adjusting the GCPs to make sure they had minimal error, I ran the Display Resample Image tool, which created my newly resampled image using the nearest neighbor method.
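The first order polynomial transformation solved by the Control Points function is an affine model, x' = a0 + a1*x + a2*y (and likewise for y'), fit by least squares over the GCPs; three points determine it exactly, so four allow an error estimate. A minimal numpy sketch with made-up coordinates:

```python
import numpy as np

# Hypothetical GCP pairs: (col, row) in the distorted image vs. (x, y)
# on the reference map.
src = np.array([[120, 340], [980, 310], [150, 900], [940, 880]], float)
dst = np.array([[402100, 4638800], [427900, 4639600],
                [403000, 4621900], [426800, 4622500]], float)

# Design matrix [1, col, row]; one least-squares fit per map coordinate.
A = np.column_stack([np.ones(len(src)), src])
coef_x, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
coef_y, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)

# RMS error over the GCPs -- the "minimal error" checked in the lab.
pred = np.column_stack([A @ coef_x, A @ coef_y])
rmse = np.sqrt(((pred - dst) ** 2).sum(axis=1).mean())
print(f"RMSE = {rmse:.2f} map units")
```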

The next section of the lab involved following a similar process to correct a spatially distorted image. Instead of using a reference map as in the previous section, I used an image of the same area. I brought in the image, used the Control Points function, and adjusted the settings to create a third order polynomial transformation. I brought in the reference image and began to plot my GCPs. Because I was performing a third order polynomial transformation, which has ten coefficients per coordinate, I needed to place a minimum of 10 GCPs. I plotted 12 to ensure that the new image would be more spatially accurate. Once the GCPs were adjusted, I resampled the image using Bilinear Interpolation, which made the image more spatially accurate but reduced the contrast in the newly created image.

Results
Image-to-map rectification

Image-to-image rectification

Resampled image using Bilinear Interpolation


Friday, April 7, 2017

Lab 5

Background and Goals
The objective of this lab was to gain exposure to the processing and data structure of LiDAR data. This included processing various surface and terrain models and creating intensity images and similar models using derivative products from point cloud data. Data was presented in LAS file format.

Methods
The first section of the lab was to create a LAS database in ArcMap using the LAS files in the class folder. Once I created the database, I calculated the statistics of the data. The next step needed to bring the data into ArcMap was to assign the data a coordinate system for both XY and Z. To find the correct coordinate system for the data, I looked in the metadata for the LAS files, where I found the correct coordinate systems. For the XY (horizontal) coordinates the coordinate system was D_North_American_1983, and for the Z (vertical) it was the North American Vertical Datum of 1988.
I then imported the newly created LAS dataset into ArcMap (Figure 1). To make sure the data was spatially located correctly, I overlaid the LAS dataset with a shapefile that contained the outline of Eau Claire. After verifying that the data was indeed correct, I proceeded to the next section of the lab.

The next section of the lab involved using the LAS to Raster and Raster Surface tools in ArcMap to create both Digital Surface Models (DSMs) and Digital Terrain Models (DTMs). To create the DSM, I first used the LAS to Raster tool to convert my LAS dataset into a raster image that the Raster Surface tool could process. In the LAS to Raster tool, I used the elevation field, set the cell assignment to Maximum and the Void Fill Method to Natural Neighbor, and changed the cell size to 6.56 feet (~2 meters). Once the raster tool had processed the image, I brought the new image into a blank ArcMap document, where I ran the Hillshade Raster Surface tool to create the DSM image (Figure 2).
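For readers who prefer scripting, the same DSM steps can be sketched with arcpy (the lab used the ArcMap dialogs; the paths are hypothetical, and the parameter strings follow the LAS Dataset to Raster and Hillshade tool syntax as I understand it):

```python
import arcpy

arcpy.CheckOutExtension("3D")  # Hillshade lives in the 3D Analyst toolbox

# Rasterize the LAS dataset: binning with MAXIMUM keeps the highest
# return per cell (surface features), NATURAL_NEIGHBOR fills voids,
# and 6.56 ft cells correspond to roughly 2 meters.
arcpy.LasDatasetToRaster_conversion(
    "eau_claire.lasd", "dsm_raw.tif", "ELEVATION",
    "BINNING MAXIMUM NATURAL_NEIGHBOR", "FLOAT",
    "CELLSIZE", 6.56)

# Hillshade the raster so surface features stand out visually.
arcpy.HillShade_3d("dsm_raw.tif", "dsm_hillshade.tif", 315, 45)
```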

To create the DTM, I filtered the original LAS dataset to show only the Ground category and used the LAS to Raster tool again with parameters similar to those used to create the DSM image. I set the Interpolation Method to Binning, the Cell Assignment Type to Minimum, the Void Fill Method to Natural Neighbor, and the Sampling Cell value to 6.56 feet. I then used the Raster Surface tool and processed the image using the Hillshade option, creating the image in Figure 3.

The DTM shows information only at the ground level and excludes surface features such as buildings and vegetation. DSMs are useful for identifying surface features and determining spatial relationships between them, while DTMs are better for studying the shape and topography of the actual bare surface.


I then created an intensity image from the LAS data. Intensity images record the strength (peak voltage) of the return captured by the sensor, which can aid in identifying classified LiDAR data. To create this model, I filtered the LAS dataset to First Return. I then used the LAS to Raster tool and set the Value Field to Intensity, the Binning Assignment to Average, the Void Fill to Natural Neighbor, and the Cell Size to 6.56 feet. Once the image had finished processing, I converted it to a TIFF file so I could open it in Erdas Imagine (Figure 4).
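A matching arcpy sketch for the intensity step (again with hypothetical paths, and assuming the standard MakeLasDatasetLayer / LAS Dataset to Raster signatures):

```python
import arcpy

# Filter the LAS dataset to first returns only.
arcpy.MakeLasDatasetLayer_management(
    "eau_claire.lasd", "first_returns", return_values=["1"])

# Rasterize intensity: binning AVERAGE, natural-neighbor void fill,
# 6.56 ft (~2 m) cells -- matching the parameters described above.
arcpy.LasDatasetToRaster_conversion(
    "first_returns", "intensity.tif", "INTENSITY",
    "BINNING AVERAGE NATURAL_NEIGHBOR", "INT",
    "CELLSIZE", 6.56)
```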

Results
Figure 1. Point Cloud

Figure 2. Digital Surface Model

Figure 3. Digital Terrain Model

Figure 4. Intensity Image
Sources
LiDAR point cloud and Tile Index are from Eau Claire County, 2013.
Eau Claire County shapefile is from Mastering ArcGIS 6th Edition dataset by Maribeth Price, 2014.

Tuesday, March 28, 2017

Lab 4


Goal and Background:
This lab was designed for students to demonstrate their ability to perform multiple functions on remotely sensed images. Such processes include producing an area of interest (AOI), optimizing radiometric data for the purposes of visual interpretation, applying radiometric enhancement techniques, linking images to Google Earth, resampling satellite images, image mosaicking, and binary change detection.

Methods
The first section of the lab was designed to show the ability to delineate an area of interest, or AOI. This was done in two parts; the first part was to select the area of interest by using the Inquire Box tool to draw a rectangle around an area, in this case the Eau Claire area. The second part of this section involved importing a shapefile of both Eau Claire and Chippewa counties. To complete this section, I imported the shapefile and made it translucent so that the image could be seen underneath.

The next section of the lab was to increase the apparent resolution of remotely sensed images. This again was done in multiple parts. The first part of this section involved pan sharpening. To do this, I opened two separate viewers in Erdas Imagine. In the first viewer, I brought in an image of the Eau Claire region, and in the second viewer I brought in an image of the same area in the panchromatic band. Using the Pan Sharpen tool, I applied the Nearest Neighbor method of resampling to create a new pan sharpened image. The next part of the lab was to reduce the haze in a remotely sensed image. To do this I used the Haze Reduction tool located under the radiometric tools.
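The Pan Sharpen tool did the merging in the lab; as an illustration of the general idea (not the specific algorithm Erdas applies), a Brovey-style merge rescales each multispectral band by its share of the total brightness and multiplies by the panchromatic band. This assumes the multispectral bands have already been resampled to the panchromatic pixel size:

```python
import numpy as np

def brovey_pan_sharpen(red, green, blue, pan, eps=1e-6):
    # Brovey-style pan sharpen: each band's fraction of total brightness,
    # scaled by the higher-resolution panchromatic band. Inputs are float
    # arrays already resampled to the panchromatic resolution.
    total = red + green + blue + eps
    return np.dstack([band / total * pan for band in (red, green, blue)])
```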

The following section of the lab was to link images in Erdas Imagine to Google Earth for the purpose of creating a selective key that could aid in image interpretation. To do this, I opened two separate viewers. I brought an image of the Eau Claire area into the first viewer. In the second viewer, I used the Connect to Google Earth tool to link the viewer to Google Earth. Once Google Earth was brought into the second viewer, I synchronized the two viewers together.

I was then tasked with resampling an image of the Eau Claire area using the Nearest Neighbor and Bilinear Interpolation methods. To do this I used the Resample Pixel Size tool, where I changed the dimensions of the pixels from 30x30 meters to 15x15 meters. For the first image I chose the Nearest Neighbor method, and for the second image I chose the Bilinear Interpolation method, creating two new images; Bilinear Interpolation produced the more visually pleasing image.
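A quick way to see the difference between the two methods is scipy's ndimage.zoom, where order=0 is nearest neighbor and order=1 is bilinear (a sketch with a synthetic band, not the Erdas tool):

```python
import numpy as np
from scipy import ndimage

# Synthetic single band; zoom factor 2 mimics going from 30 m to 15 m.
band = np.random.randint(0, 255, (512, 512)).astype("float32")

nearest  = ndimage.zoom(band, 2, order=0)  # blocky, preserves original DNs
bilinear = ndimage.zoom(band, 2, order=1)  # smoother, averages new values
```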

Following the previous section, I was tasked with mosaicking two images. This was done using the Mosaic Express and Mosaic Pro tools. The Mosaic Express tool produced a better image, with smoother transitions between the two images, than Mosaic Pro.

The final section of the lab was to show land use changes in the Eau Claire area between 1991 and 2011. I brought in an image for each of the years into separate viewers. I then used an image differencing tool to create a new image that displayed the differences between the two images, and examined the image metadata and histograms to detect where change had occurred in the data. I then created a model that would produce a new image displaying the values from the 2011 image that had changed relative to the 1991 image. I then took the newly produced image and imported it into ArcMap, where I overlaid it over an image of the Eau Claire area that I colored light grey to improve the contrast between the two layers.
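The differencing and thresholding logic can be sketched in numpy (hypothetical files and band choice; the lab implemented this as an Erdas model, and the threshold is commonly read off the difference-image histogram as mean plus or minus 1.5 standard deviations):

```python
import numpy as np
import rasterio

# Hypothetical images of the Eau Claire area for the two years.
with rasterio.open("ec_1991.img") as a, rasterio.open("ec_2011.img") as b:
    band_1991 = a.read(4).astype("float32")
    band_2011 = b.read(4).astype("float32")

diff = band_2011 - band_1991

# Flag "change" where the difference falls outside mean +/- 1.5 sigma.
mu, sigma = diff.mean(), diff.std()
change_mask = (diff < mu - 1.5 * sigma) | (diff > mu + 1.5 * sigma)
```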

Results
The images below show the results of the sections discussed above.

Subsetted Image





Pan Sharpened Image
(Original: Left, Pan Sharpened: Right)

Haze Reduction

Linked Views
Mosaicked Images
Binary Change Detection
Credits
Earth Resources Observation and Science Center, United States Geological Survey. Shapefile is from Mastering ArcGIS 6th Edition dataset by Maribeth Price, McGraw Hill, 2014.
