Sunday, December 14, 2014

Term Project


Edge Enhancement and Vegetation Change of

the Tampa Bay Region from 1995 to 2010

Joel Weber      Introduction to Remote Sensing        December 14, 2014

 

Introduction

The Tampa Bay Region (Figure 1) is located on the west-central portion of the Florida peninsula. It consists of four adjacent counties surrounding Tampa Bay: Hillsborough, Pinellas, Pasco, and Manatee Counties. The overall 2010 population of the region, according to the U.S. Census Bureau (2014), was 2,933,298, roughly a 50% increase from the estimated 1995 population of 1,948,063. The population density varies significantly by county: Pinellas County has the highest density at 3,300 people/mi2, while Manatee County has a density of only 356 people/mi2 (U.S. Census Bureau, 2014).

Figure 1  The counties located within the Tampa Bay Region include Hillsborough, Pinellas, Pasco, and Manatee Counties.

This significant growth in population and density from 1995 to 2010 has led to various land use changes, and it is important to monitor those changes to determine environmental impacts in the area. Therefore, I will use satellite imagery from Landsat 5 to determine some environmental impacts of the population growth and urban sprawl between 1995 and 2010. This will be achieved by downloading the Landsat 5 data from the USGS Global Visualization Viewer for each year, mosaicking the images together, and creating a subset of the region. Then I will run a Normalized Difference Vegetation Index for both years to determine where vegetated areas are located. I will also run a nonlinear edge enhancement for both years to determine where new roads and subdivisions have been added. Finally, I will calculate where the land surface has significantly changed using binary change detection.

Remote Sensing Analysis

            The first step in any remote sensing analysis is downloading data from an online source. For my analysis I used the USGS Global Visualization Viewer to download Landsat 5 data from 1995 and 2010. It is important to select images with consistent timing between both years (i.e., images from the same month of each year), and to select data with relatively low atmospheric distortion, to provide a more accurate analysis. Since my study area covers more than one image scene, I had to download three images from 1995 and three images from 2010. It was frustrating to locate high-quality images from similar months for the given years. After finding acceptable images from both years, I downloaded and unzipped the files. The next step is to convert the individual .TIFF bands that were downloaded into .img images that can be used in Erdas. This was done by importing bands 1-5 and 7 from the Landsat 5 satellite into Erdas and stacking the layers into one image.

Next, I had to mosaic the three images from 1995 together and the three images from 2010 together using "MosaicPro" in Erdas (Figure 2). This was also frustrating because there are several settings the user must get right in order to create a seamless mosaic. For my mosaics (Figure 3) I used histogram matching of overlap areas with a feather function; the several other settings I tried did not produce quality mosaics.

Figure 2  The MosaicPro tool was used to mosaic the three image scenes that covered my study area for each year.

Figure 3  Mosaics of the three image scenes for 1995 (left) and 2010 (right), created using MosaicPro in Erdas.

After the mosaics were created, the next step is to create a subset of the area of interest. To do this I imported a shapefile of the four counties in the Tampa Bay Region into Erdas (Figure 4). Then I created an .aoi file using the "Save As AOI Layer As…" function. Finally, this .aoi file was used to create a subset of the Tampa Bay Region (Figure 5), which will be used for further analysis.

Figure 4  Shapefile used to create a subset of the four counties in the Tampa Bay Region from 1995 (left) and 2010 (right).
Figure 5  Subsets of the Tampa Bay Region from 1995 (left) and 2010 (right) used in analysis.

 

            Next, I ran a Normalized Difference Vegetation Index (NDVI) on the Tampa Bay Region for 1995 and 2010 (Figure 6). NDVI uses an equation to calculate how much vegetation is in each pixel. The NDVI equation is

NDVI = (NIR - Red) / (NIR + Red)

where NIR = reflectance in the near-infrared band and Red = reflectance in the red band. Pixels with high vegetation will appear lighter in color and pixels with low vegetation will appear darker. It is important to use images taken at a similar time of year, because images taken in different months will have significantly different NIR reflectance values, which would lead to inaccurate analysis.

Figure 6  NDVI values of the Tampa Bay Region show how the amount of vegetation has changed from 1995 (left) to 2010 (right).
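As a simple illustration of the per-pixel calculation above, the NDVI equation can be sketched in a few lines of NumPy (a hypothetical sketch for clarity; the actual analysis here was run in Erdas):

```python
import numpy as np

def ndvi(nir, red):
    """Compute NDVI = (NIR - Red) / (NIR + Red) per pixel.

    Pixels where both bands are zero (e.g. image fill areas) are
    left at 0 to avoid division by zero."""
    nir = nir.astype(float)
    red = red.astype(float)
    denom = nir + red
    out = np.zeros_like(denom)
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out

# Toy example: a vegetated pixel (high NIR) next to a bare pixel
nir = np.array([[200.0, 60.0]])
red = np.array([[40.0, 50.0]])
print(ndvi(nir, red))  # high value for vegetation, near zero for bare ground
```

Vegetated pixels come out near +0.7, bare or built surfaces near 0, and water typically negative, which matches the light/dark pattern described above.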

             Next, I ran a non-directional edge enhancement technique on the images from both years (Figure 7). This process shows where dramatic changes in brightness values occur over a relatively short distance. Edges are commonly found along river banks and roadways. Comparing non-directional edges from two different years can provide detail about where new roads are being built and, possibly, where population is increasing. Figure 7 shows where new roads were built in the lower portion of Manatee County as urban sprawl occurs.

Figure 7  Non-directional edge enhancement showing where dramatic changes in brightness values occur over a short distance, providing insight into where urban sprawl took place between 1995 (left) and 2010 (right).
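To illustrate the idea of a non-directional edge image (this is a simplified sketch using plain brightness differences, not the exact pair of 3x3 filters Erdas applies), horizontal and vertical gradients can be combined into one orientation-independent magnitude:

```python
import numpy as np

def nondirectional_edge(img):
    """Non-directional edge sketch: combine horizontal and vertical
    brightness differences into a single magnitude, so edges of any
    orientation (roads, river banks) produce a strong response."""
    img = img.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]   # horizontal difference
    gy[1:-1, :] = img[2:, :] - img[:-2, :]   # vertical difference
    return np.sqrt(gx**2 + gy**2)

# A vertical brightness step (e.g. the edge of a new road) lights up
step = np.tile([10.0, 10.0, 90.0, 90.0], (4, 1))
print(nondirectional_edge(step))
```

Flat areas return zero while the brightness step returns a large magnitude, which is why comparing edge images between years highlights newly built roads.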

            Finally, I ran a binary change detection between 1995 and 2010 to determine where vegetation has changed (Figure 8). To determine where change has occurred, we simply subtract the NIR brightness values of one image from the NIR brightness values of the other. Then we apply a change threshold to determine whether or not each area has changed. After the 1995 image has been subtracted from the 2010 image, we use the histogram of the difference image to determine the threshold; in this case we use the mean brightness value plus 1.5 times the standard deviation. Any areas with NIR difference values above the threshold are considered changed.
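The binary change rule just described can be sketched in NumPy (an illustrative sketch only; the actual work was done in Erdas):

```python
import numpy as np

def binary_change(nir_new, nir_old, k=1.5):
    """Binary change detection: difference the NIR bands, then flag
    pixels whose difference exceeds mean + k * std of the difference
    image. (A symmetric lower threshold, mean - k * std, is often
    added as well to catch decreases in reflectance.)"""
    diff = nir_new.astype(float) - nir_old.astype(float)
    threshold = diff.mean() + k * diff.std()
    return diff > threshold  # True where change is flagged

# Demo: one strongly brightened pixel in an otherwise unchanged scene
old = np.full((5, 5), 100.0)
new = old.copy()
new[2, 2] = 180.0
print(binary_change(new, old))  # flags only the changed pixel
```

Because the threshold is derived from the difference image's own histogram, it adapts to each scene pair rather than using a fixed cutoff.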

Conclusion

            In conclusion, all remote sensing projects begin with data acquisition; then images must be mosaicked together and a subset of the study area created. Finally, various analysis techniques can be run. I found that NDVI values were much lower in 2010 than in 1995 throughout the entire image. This could be caused by urban sprawl in some areas and by differences in crop production in more rural areas. I also found a large increase in edges in the lower portion of Manatee County and other suburban areas between 1995 and 2010, while areas that were already densely populated show similar edges between the two years. This can be attributed to urban sprawl as populations increase. Finally, the binary change detection between 1995 and 2010 shows where vegetation has changed (Figure 9). Very little change has occurred in dense urban areas, while significant changes have occurred in rural farmland and suburban areas.

Figure 8  Binary change values from subtracting 1995 NIR reflectance values from 2010 NIR reflectance values.

Text Box: Figure 9  Binary change used to determine where vegetation has changed between 1995 and 2010.

 

 

 

References

 

U.S. Census Bureau. (2014). State and county quickfacts: Florida. Retrieved December 12, 2014, from http://quickfacts.census.gov/qfd/states/12000.html.

U.S. Geological Survey. (2014). Global visualization viewer. Retrieved December 10, 2014, from http://glovis.usgs.gov/.

Thursday, December 11, 2014

Lab 8: Spectral Signature Analysis

Spectral Signature Analysis

Goal and Background

The main goal of Lab 8 is to gain experience in measuring and interpreting spectral signatures of various earth surfaces.

Methods

In this lab we created spectral signatures of 12 common surfaces found on the earth's surface. These surfaces include standing water, moving water, vegetation, riparian vegetation, crops, urban grass, dry soil, moist soil, rock, asphalt highways, airport runways, and concrete parking lots. To create a spectral signature curve we find each surface, use the drawing tool to create a polygon inside that surface (Figure 1), and then use the "Signature Editor" tool to create a spectral signature of that land surface.
Figure 1  Using the polygon draw tool to create spectral signatures
 
 
 

Results

Figure 2 below shows the spectral signatures of all 12 surfaces that were collected. As you can see, the majority of surfaces have similar spectral signatures in the visible spectrum. The largest differences in the signatures occur in band 5 (middle infrared). As you would expect, the reflectance of water starts highest in the blue band, decreases across the green and red bands, and becomes negligible across the NIR, MIR, and thermal bands. You can also see that vegetation has a typical spectral signature, with very high reflectance in the NIR band. The differences in reflectance between similar surfaces can be attributed to differences in color, water content, and algae content.
Figure 2  Spectral signatures of 12 common earth surface features.

 

Conclusion

In conclusion, the signatures of the surfaces are most similar in the visible portion of the spectrum and have the largest differences in reflectance in the MIR portion. Therefore, to differentiate between surfaces it is best to use reflectances in the MIR portion.

Sources

Satellite image is from Earth Resources Observation and Science
Center, United States Geological Survey.
 
 


Wednesday, December 3, 2014

Lab 7: Photogrammetry

Lab 7: Photogrammetry

 
 

Goal and Background

Lab 7 consists of several main goals. Part 1 is aimed at understanding how to calculate image scale based on the altitude of a flight path and the focal length of the camera used to take aerial photographs. Part 1 also looks at calculating relief displacement based on the height of an object and its distance from the principal point of the image. Part 2 of Lab 7 involves creating a 3D anaglyph of Eau Claire, WI. Part 3 uses the Erdas Imagine Leica Photogrammetric Suite (LPS) to create orthorectified images.

Methods

 Part 1

Using the equation scale = focal length / flying height, we are able to calculate the scale of an aerial photograph. Given a focal length of 152 mm and a flight path 19,204 ft above the land surface, we can calculate the scale using simple unit conversions. Next we calculate relief displacement using the equation displacement = (height of the object × radial distance) / flying height of the camera.
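The two equations above can be checked with a short script (units handled explicitly; the denominator comes out near 38,500, and any small difference from a hand calculation is rounding in the unit conversion):

```python
# Photo scale: S = f / H, with focal length and flying height in the
# same units (here both converted to millimeters)
focal_mm = 152.0
flying_height_ft = 19204.0
scale_denominator = (flying_height_ft * 304.8) / focal_mm  # 1 ft = 304.8 mm
print(f"scale ~ 1:{scale_denominator:,.0f}")

# Relief displacement: d = (h * r) / H, where h is the object height,
# r the radial distance from the principal point measured on the photo,
# and H the flying height. h and H share units; d comes out in r's units.
def relief_displacement(h, r, H):
    return h * r / H
```

For example, a hypothetical 100 ft object measured 2 inches from the principal point under a 1,000 ft flying height would displace 0.2 inches on the photo.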

Part 2

Part 2 of Lab 7 uses a digital elevation model of an area and combines it with a geometrically correct aerial photograph to create a 3D anaglyph that shows general surface features.  

 

Part 3

 Part 3 of the lab was learning how to use previously orthorectified images to geometrically correct other images from the Palm Springs, CA area. We start by defining the projection of the image to be corrected. Next we collect ground control points on the already-corrected image and on the image that needs to be corrected. After collecting 10 GCPs, we also collect GCPs from another geometrically correct orthophoto and a digital elevation model to create elevation coordinates. Next, we use LPS to generate 40 automatic tie points in the images. Finally, we use triangulation and resampling to create the orthorectified photograph.
 
 

Results

 Part 1

Based on the scale equation above, we can calculate that the scale of the Eau Claire_West-se image is 1:38498. We can also conclude, based on the relief displacement equation above, that the smoke stack located on the UWEC campus has a relief displacement of 0.35 inches on the image.
 

Part 2

Using 3D glasses we are able to distinguish some of the 3D features in the anaglyph of Eau Claire. These features are only noticeable when zoomed in. Figure 1 below shows a zoomed-in screenshot of the anaglyph focusing on the UWEC campus area. There is a significant change in elevation in the forested area that spans horizontally across the image.
Figure 1 3D anaglyph of the UWEC campus area.

Part 3

 Figure 2 below is the final output of the orthorectification of the aerial photo of Palm Springs, CA. The orthophoto has a very high degree of spatial accuracy.
 
 
Figure 2 Orthorectified photo of Palm Springs, CA

Sources

 
National Agriculture Imagery Program (NAIP) images are from
United States Department of Agriculture, 2005.
 
Digital Elevation Model (DEM) for Eau Claire, WI is from United States Department of 
Agriculture Natural Resources Conservation Service, 2010.
 
Spot satellite images are from Erdas Imagine, 2009.
 
Digital elevation model (DEM) for Palm Spring, CA is from Erdas Imagine, 2009.
 
National Aerial Photography Program (NAPP) 2 meter images are from Erdas Imagine, 2009.
 

 



Friday, October 31, 2014

Lab 4: Miscellaneous Image Functions 1


Lab 4: Miscellaneous Image Functions

 

Goal

     The goal of Lab 4 is to become familiar with image functions in Erdas Imagine 2013. The functions we will be looking at include creating a subset of an image, image fusion techniques, radiometric enhancement techniques, and resampling pixel size, which are basic functions that can be used in any remote sensing project. We will be employing these techniques on false color satellite images taken by Landsat satellites over the Eau Claire, WI area (Figure 1).


 

Methods

     The first image function we looked at is creating an image subset, which decreases file size and speeds up image processing. Subsets can be created in two ways. The first technique creates a subset using an inquire box: you create a rectangular box around the study area and extract the pixels found within the box (Figure 2). The second method creates a subset by extracting the pixels located within an Area of Interest shapefile. This second method is useful for creating subsets with oddly shaped boundaries, such as county boundaries or lakes.

     The next image function we looked at is pan-sharpening. This technique combines coarse-resolution multispectral images with high-resolution panchromatic images to create a high-resolution color image (Figure 3). To pansharpen an image we selected the "Resolution Merge" option under the "Pan Sharpen" tool. We then input the high-resolution panchromatic image, the multispectral image, and the output file location. Next we selected the pansharpening method, in this case multiplicative, and the resampling method, nearest neighbor. Finally, we ran the model and added the pansharpened image into the view to compare it to the original multispectral image (Figure 4).
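The idea behind the multiplicative merge can be sketched as follows. This is a simplified NumPy illustration of the general approach, not Erdas' exact implementation, and it assumes the multispectral bands have already been resampled to the panchromatic pixel size:

```python
import numpy as np

def multiplicative_pansharpen(ms, pan):
    """Multiplicative resolution-merge sketch: multiply each
    multispectral band by the co-registered panchromatic band, which
    injects the pan band's fine spatial detail, then rescale each
    band back to 0-255 for display. `ms` is (bands, rows, cols)."""
    fused = ms.astype(float) * pan.astype(float)[np.newaxis, :, :]
    fmin = fused.min(axis=(1, 2), keepdims=True)
    fmax = fused.max(axis=(1, 2), keepdims=True)
    span = np.where(fmax > fmin, fmax - fmin, 1.0)  # guard flat bands
    return 255.0 * (fused - fmin) / span
```

The multiplication is also why this method tends to increase contrast in the fused image, as noted in the results below.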

     The third technique we looked at is haze reduction. To apply a haze filter we selected the "Haze Reduction" option under the Radiometric tab, then chose the input image and the output location. In this case we accepted all the default settings and ran the program.


     Finally, we looked at resampling an image. This technique allows you to change the pixel size of an image to match the pixel size of another image. In this lab we resampled an image of Eau Claire with 30 meter spatial resolution down to 20 meters using the nearest neighbor and bilinear interpolation methods.

 

Results

     The process of creating an image subset is really important for decreasing file size and increasing computation speed. The two most common ways to create subsets are the use of an inquire box (Figure 2) and the use of a shapefile of the study area (Figure 3). Inquire boxes are good for creating rectangular subsets. However, in many cases the study area is irregularly shaped, and in these cases it is appropriate to use a shapefile to extract the subset. Figure 3 is an image subset that was created using a shapefile of Eau Claire and Chippewa Counties in West Central Wisconsin.



Figure 2  Image subset of Eau Claire, WI using an inquire box.
  
Figure 3  Image subset of Eau Claire and Chippewa County using a shapefile.

     The next technique we examined is pan sharpening. Pan sharpening increases the spatial resolution of the reflective bands of an image by combining them with the panchromatic band. The results of a pansharpened image (Figure 4) are increased spatial resolution and increased contrast.
Figure 4 The effect of pan sharpening an image (left) compared to the original image (right).

     The next technique we examined is haze reduction (Figure 5). This technique reduces haze in an image, which can be useful in areas where significant haze is present. However, in areas where no haze is present, the image becomes less clear. Because of this problem, we only apply haze reduction to sections of the image that contain a significant amount of haze.
Figure 5 Haze reduction of an image (left) compared to the original image (right).

 

     Finally, the last image function we examined is resampling. Resampling is a mathematical technique used to change pixel size, and it is frequently used when comparing two images with different pixel sizes. In this lab we looked at two common resampling methods (Figure 6). The nearest neighbor technique (left) assigns each output pixel the brightness value of the closest input pixel, which makes it useful for resampling thematic rasters. As you can see below, the output image has a rough appearance and curved features often have a stepped appearance. The other technique we looked at is bilinear interpolation (right), which creates the output image by averaging the four closest input pixels. Bilinear interpolation creates a much smoother image that is more spatially accurate. However, it also reduces image contrast.
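A minimal nearest-neighbor sketch (hypothetical; Erdas performs this internally) shows why the method introduces no new brightness values, each output pixel simply copying the closest input pixel:

```python
import numpy as np

def resample_nearest(img, out_shape):
    """Nearest-neighbor resampling sketch: map each output row/column
    back to the nearest input row/column and copy its value. No new
    brightness values are invented, which is why this method is safe
    for thematic (categorical) rasters."""
    rows = np.arange(out_shape[0]) * img.shape[0] // out_shape[0]
    cols = np.arange(out_shape[1]) * img.shape[1] // out_shape[1]
    return img[np.ix_(rows, cols)]

# A 2x2 image resampled to 3x3 (e.g. 30 m pixels down to 20 m)
img = np.array([[1, 2], [3, 4]])
print(resample_nearest(img, (3, 3)))
# → [[1 1 2]
#    [1 1 2]
#    [3 3 4]]
```

The duplicated pixels in the output are exactly what produces the rough, stepped appearance described above; bilinear interpolation smooths this out by averaging the four nearest inputs instead.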
Figure 6  Common resampling techniques of Nearest Neighbor (left) and Bilinear Interpolation (right).

  

Conclusion

     In conclusion, there are many image functions that can be used to enhance satellite images. Creating a subset reduces file size and increases processing speed, pansharpening increases the spatial resolution of reflective images, haze filters reduce the haze that distorts an image, and resampling changes pixel size so that images from different sources can be compared.