Tuesday, May 6, 2014

Remote Sensing Lab 8

Goal and Background - The goal of this final lab of the semester is to learn how to identify surface features using their spectral signatures. We learned how to collect spectral signatures from satellite images and interpret their spectral graphs.

Methods and Results - We used an image of the Eau Claire region of western Wisconsin and looked for the following features:
 
1. Standing Water
2. Moving water
3. Vegetation
4. Riparian vegetation
5. Crops
6. Urban Grass
7. Dry soil (uncultivated)
8. Moist soil (uncultivated)
9. Rock
10. Asphalt highway
11. Airport runway
12. Concrete surface (Parking lot)
 
We went through the image and digitized polygons for each of the features, then used the Create Signatures tool to create graphs of their spectral signatures. Figure 1 below shows each individual signature (click to zoom in). Each feature has a distinctive signature that can be used to identify it using remote sensing.
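As a rough illustration of what happens behind the Create Signatures tool (Erdas's actual implementation is more involved), a spectral signature is essentially the mean value of each band over the pixels inside a digitized polygon. A minimal numpy sketch with made-up values:

```python
import numpy as np

def mean_signature(image, mask):
    """Average the pixel values of each band inside a digitized polygon.

    image: (rows, cols, bands) array of DNs or reflectance values.
    mask:  (rows, cols) boolean array, True inside the polygon.
    Returns a 1-D array of length `bands` -- the spectral signature.
    """
    return image[mask].mean(axis=0)

# Toy 2x2 image with 3 bands; the "polygon" covers the top row only.
img = np.array([[[10, 50, 90], [20, 60, 80]],
                [[99, 99, 99], [99, 99, 99]]], dtype=float)
poly = np.array([[True, True], [False, False]])
sig = mean_signature(img, poly)  # -> [15., 55., 85.]
```

Plotting one such vector per feature against band number gives the kind of signature graphs shown in figure 1.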
fig 1
 
Sources - All satellite images provided by Cyril Wilson.

Tuesday, April 29, 2014

Remote Sensing Lab 7

Goal and Background - The goal of this lab is to learn the skills of Photogrammetry, Stereoscopy, and Orthorectification. These are all practiced using the program Erdas Imagine 2013.

Methods and Results - The first part of the lab covered scale, measurements, and relief displacement. We took an aerial photo of Eau Claire, WI and found the scale of the image by measuring a feature with a ruler and comparing that measurement to the feature's actual size. We also learned how to determine the scale from just the altitude of the aerial photograph and the focal length of the lens: scale = focal length / (flying altitude − ground elevation). Next we measured the perimeter and area of a local lagoon by digitizing the lagoon as a polygon. The last section of the first part dealt with relief displacement. The displaced object is a smokestack in Eau Claire, which appears to lean because it is offset from the principal point, the point on the image directly beneath the aircraft when the picture was taken. The equation is: relief displacement = (real-world height of the object × radial distance in the image) / height of the aerial camera above the local datum, where the radial distance is the distance between the principal point and the object. Once the relief displacement is calculated, the object can be corrected by shifting it toward or away from the principal point.
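The two equations above can be sketched in Python; the numbers are made-up illustrations, not the lab's actual measurements:

```python
def photo_scale(focal_length, altitude, ground_elevation):
    """Scale of a vertical aerial photo: S = f / (H - h)."""
    return focal_length / (altitude - ground_elevation)

def relief_displacement(obj_height, radial_dist, flying_height):
    """d = (h * r) / H -- displacement on the image, in r's units."""
    return obj_height * radial_dist / flying_height

# A 152 mm lens flown at 6,000 m over terrain at 500 m elevation:
s = photo_scale(0.152, 6000.0, 500.0)
print(f"scale = 1:{1 / s:.0f}")

# A 60 m smokestack, 0.10 m from the principal point on the image,
# photographed from 1,200 m above the local datum:
d = relief_displacement(60.0, 0.10, 1200.0)
print(f"displacement = {d * 1000:.1f} mm")
```

Note that the displacement comes out in the same units as the radial distance, since the object height and flying height cancel.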

The second part of the lab was learning to make a stereoscopic image in Erdas Imagine. I took an image of Eau Claire and a digital elevation model (DEM) of the same area, and using the Anaglyph Generation tool I created a new file. When viewed with anaglyph (red/cyan) glasses, the new file conveys a 3D image of the area.

In the third and final part of the lab, we learned orthorectification using the Erdas Imagine Leica Photogrammetry Suite (LPS). We had two images of an area that overlap each other but did not match up. To match them, we first took the first image and created twelve GCPs to tie the image to a datum and a projected coordinate system. Next I used a DEM to add the elevation of each point. I then repeated the process with the second image, matching it to the first, so that both had the correct coordinate data. Figure 1 shows the twelve points that I connected to both images.

fig 1

Next we used the automatic tie point generator to connect the images and then ran the Triangulation tool. The output LPS screen is shown below in figure 2.
fig 2

Lastly I brought the images into the viewer in Erdas to check whether the image boundaries were fixed and accurate. The image below (fig 3) shows the two images with their new projections.

fig 3

The degree of accuracy at the boundaries of the images is very high. Zoomed out you can see a line between the images, but zoomed in you can barely tell where the two images border each other in certain areas. On the west border the boundary twists and turns over the mountains yet remains accurate.

Sources - All satellite images provided by Cyril Wilson.

Thursday, April 17, 2014

Remote Sensing Lab 6

Goal and Background - The goal of this lab is to practice geometric correction. There are two types of geometric correction: image-to-map rectification and image-to-image registration. Both are performed in the program Erdas Imagine 2013.

Methods and Results - We started with a distorted satellite image of Chicago from 2000 that we wanted to correct. We brought a topographic map of the Chicago area into Erdas alongside the distorted satellite image so we could correct it using image-to-map rectification. We opened the Geometric Correction tool and selected a first order polynomial model with a nearest neighbor resampling method. A first order polynomial model needs only three ground control points (GCPs), but we should always collect more than the minimum, so we used four. The image below (fig 1) shows the four GCPs. We then performed the geometric correction, and the output image is no longer distorted.
fig 1
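Behind the dialog, a first order polynomial model is an affine transform (x' = a0 + a1·x + a2·y, and likewise for y') fit to the GCPs by least squares. A minimal sketch with hypothetical GCP coordinates, not the lab's actual points:

```python
import numpy as np

def fit_first_order(src, dst):
    """Least-squares fit of x' = a0 + a1*x + a2*y (and the same for y')."""
    src = np.asarray(src, float)
    A = np.column_stack([np.ones(len(src)), src[:, 0], src[:, 1]])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(dst, float), rcond=None)
    return coeffs  # shape (3, 2): one column per output coordinate

# Four hypothetical GCPs: image (col, row) -> map (easting, northing).
src = [(0, 0), (100, 0), (0, 100), (100, 100)]
dst = [(500000, 4900000), (503000, 4900000),
       (500000, 4897000), (503000, 4897000)]
c = fit_first_order(src, dst)

# Transform the pixel at the center of the block of GCPs:
x, y = 50, 50
east  = c[0, 0] + c[1, 0] * x + c[2, 0] * y
north = c[0, 1] + c[1, 1] * x + c[2, 1] * y
```

With more GCPs than the three-point minimum, the least-squares fit averages out small placement errors instead of passing exactly through every point, which is why collecting extras is good practice.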

In the second part of the lab we used a severely distorted image of a region in Sierra Leone from 1991. For this image we used the image-to-image registration method. Since this image is more distorted than the previous one, we used a third order polynomial model with a bilinear interpolation resampling method. A third order polynomial is more complex and requires at least ten GCPs, so we used twelve to be on the safe side. The image below (fig 2) shows the twelve GCPs. We then performed the geometric correction, and the output image is no longer distorted.

fig 2
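The minimum GCP counts quoted above follow from the number of coefficients in a polynomial of two variables: an order-t model has (t + 1)(t + 2) / 2 terms per output coordinate, so it needs at least that many control points. A one-line check:

```python
def min_gcps(order):
    """Minimum GCPs for a polynomial geometric model of the given order."""
    return (order + 1) * (order + 2) // 2

print([min_gcps(t) for t in (1, 2, 3)])  # [3, 6, 10]
```

This matches the three GCPs needed for the first order Chicago correction and the ten needed here.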

Sources - All satellite images provided by Cyril Wilson.

Thursday, April 10, 2014

Remote Sensing Lab 5

Goal and Background - This lab focused on analytic processes such as image enhancement, binary change detection, image mosaicking, band ratios, and spatial modeling, all performed in Erdas Imagine 2013.

Methods and Results - The first skill we learned was image mosaicking. I took two satellite images that overlap spatially, both taken in May 1995. We first used the Mosaic Express tool (fig. 1) and then the MosaicPro tool (fig. 2). The mosaic made with MosaicPro comes out better because of its color corrections, which blend the two images more smoothly along the shared boundaries.
fig. 1

fig. 2
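The smoother seams can be illustrated with a simple feathering blend, a linear weight ramp across the overlap zone. This is only a sketch of the idea, not MosaicPro's actual color correction algorithm:

```python
import numpy as np

def feather_blend(left, right, overlap):
    """Blend two same-height strips whose last/first `overlap` columns
    cover the same ground. Weights ramp linearly across the overlap so
    the seam fades out instead of appearing as a hard line."""
    w = np.linspace(1.0, 0.0, overlap)  # left image's weight per column
    blend = w * left[:, -overlap:] + (1 - w) * right[:, :overlap]
    return np.hstack([left[:, :-overlap], blend, right[:, overlap:]])

a = np.full((2, 4), 10.0)  # left strip, uniform brightness 10
b = np.full((2, 4), 30.0)  # right strip, uniform brightness 30
m = feather_blend(a, b, overlap=3)
# Each row steps smoothly: 10, 10, 20, 30, 30 -- no abrupt seam.
```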

The second section deals with Band Ratios. We used the normalized difference vegetation index (NDVI) on an image of the Eau Claire area (fig. 3). It shows vegetation as the very white portions of the output image. The dark areas indicate rivers, roads, urban areas, and less healthy vegetation.
fig. 3
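The NDVI itself is a simple band ratio, (NIR − Red) / (NIR + Red). A minimal sketch with hypothetical reflectance values:

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red); healthy vegetation -> near +1."""
    nir = np.asarray(nir, float)
    red = np.asarray(red, float)
    return (nir - red) / (nir + red + 1e-10)  # epsilon avoids divide-by-zero

# Hypothetical reflectances for vegetation, bare soil, and water:
nir = np.array([0.50, 0.30, 0.02])
red = np.array([0.08, 0.25, 0.04])
print(ndvi(nir, red))  # -> approximately [0.72, 0.09, -0.33]
```

Vegetation reflects strongly in NIR and absorbs red, so it scores high (bright in the output image), while water and built surfaces score near or below zero (dark).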

The next part introduced us to spatial image enhancement. The low frequency image from Sierra Leone on the left (fig. 4) shows how hard it is to see details when there is little contrast. The image on the right is the enhanced image and shows much more contrast. It is a very dark image, but we learned how to fix that in a later part.
fig. 4

We then performed a Laplacian convolution filter on another image of Sierra Leone (fig. 5), which increases the contrast at discontinuities. It brings out features such as roads, cities, and rivers.
fig. 5
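The idea of a Laplacian filter can be sketched with a small hand-rolled convolution; the kernel below is a common choice, though the weights Erdas uses may differ:

```python
import numpy as np

# A common 3x3 Laplacian kernel: responds to local intensity changes.
KERNEL = np.array([[ 0, -1,  0],
                   [-1,  4, -1],
                   [ 0, -1,  0]], dtype=float)

def laplacian(img):
    """Apply the kernel to each interior pixel (borders left at zero)."""
    out = np.zeros_like(img, dtype=float)
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.sum(KERNEL * img[i-1:i+2, j-1:j+2])
    return out

# Flat image with one bright vertical edge, like a road against soil:
img = np.array([[10, 10, 50, 50],
                [10, 10, 50, 50],
                [10, 10, 50, 50],
                [10, 10, 50, 50]], dtype=float)
resp = laplacian(img)
# Response is 0 in the flat areas and -40 / +40 straddling the edge,
# which is why discontinuities like roads and rivers pop out.
```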

In the next section we practiced spectral enhancement. We performed a minimum-maximum contrast stretch (fig. 6) on an image of Eau Claire that had a Gaussian histogram. On a second image of Eau Claire, an NIR image, we used a piecewise contrast stretch (fig. 7). We also performed a histogram equalization, which spread a low frequency histogram across the whole range, creating a high contrast image.
fig. 6

fig. 7
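Both enhancements are short formulas: the min-max stretch linearly remaps the observed value range onto the full display range, and histogram equalization redistributes levels via the cumulative histogram. A minimal sketch with toy brightness values:

```python
import numpy as np

def minmax_stretch(img, out_min=0.0, out_max=255.0):
    """Linearly remap [img.min(), img.max()] onto the full output range."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) * (out_max - out_min) + out_min

def hist_equalize(img, levels=256):
    """Spread a narrow histogram across all levels via the CDF."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum() / img.size
    return np.round(cdf[img] * (levels - 1)).astype(int)

narrow = np.array([[100, 110], [120, 130]])   # low-contrast DNs
stretched = minmax_stretch(narrow.astype(float))  # now spans 0..255
equalized = hist_equalize(narrow)                 # levels spread by rank
```

The stretch preserves the shape of a Gaussian histogram while widening it; equalization instead flattens the histogram, which is why it turns a low frequency histogram into a high contrast image.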

The final part dealt with binary change detection and image differencing. We took an image of the Eau Claire region from 1991 and another of the same area from 2011, and we wanted to find the parts that changed in those 20 years. We first created a difference image (fig. 8) to show which pixels changed from 1991 to 2011.
fig. 8

We then used the Model Maker tool to create a model that removed the pixels that stayed the same from 1991 to 2011, producing a difference image showing only the areas that changed in that period. I then opened ArcMap and overlaid the image of the changed areas on top of a map of the region (fig. 9), showing the areas that changed between 1991 and 2011 in red. It showed that the areas that changed were all agricultural lands.
fig. 9
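The thresholding step that the model performs can be sketched as follows; the 1.5-standard-deviation cutoff is an illustrative choice, not necessarily the value used in the lab:

```python
import numpy as np

def binary_change(img_old, img_new, n_std=1.5):
    """Flag pixels whose brightness difference is unusually large.

    The difference image is thresholded at mean +/- n_std standard
    deviations -- a common rule for separating real change from noise.
    """
    diff = img_new.astype(float) - img_old.astype(float)
    mu, sigma = diff.mean(), diff.std()
    return np.abs(diff - mu) > n_std * sigma

a = np.array([[50, 50, 50], [50, 50, 50]])    # 1991 brightness values
b = np.array([[50, 52, 50], [50, 50, 120]])   # 2011: one pixel changed a lot
changed = binary_change(a, b)
# Only the large jump is flagged; the small +2 wiggle is treated as noise.
```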

Sources - All satellite images provided by Cyril Wilson.

Thursday, March 27, 2014

Remote Sensing Lab 4

Goal and Background - This lab focused on image subsetting to extract an area of interest (AOI), haze reduction on satellite images, linking to Google Earth, and resampling of images to improve resolution, all within the program Erdas Imagine 2013.

Methods and Results - For the first part of the lab we learned how to create a subset image using the inquire box and the subset and chip tool. I first selected the AOI using the inquire box tool; then, using the subset and chip tool I created a subset image from the inquire box (see fig. 1).

fig. 1
The next section was the same as the first but used a shapefile as the AOI. I first took a shapefile of a specific AOI and put it on top of a satellite image. I then used the paste from selected object tool to select the area of the satellite image that lies within the AOI shapefile. Next, I used the subset and chip tool from the previous section to create a subset image of the shapefile AOI (see fig. 2).

fig. 2
Part two of the lab was about image fusion. I took a reflective image with a spatial resolution of 30 meters and, using the resolution merge tool, merged it with a panchromatic image with a 15 meter spatial resolution. This created a new pan-sharpened image with higher spatial resolution.
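One simple way to think about pan-sharpening is the Brovey transform, which scales each multispectral band by the ratio of the panchromatic band to the band average. This is only an illustration of the concept; the Resolution Merge tool offers its own algorithms:

```python
import numpy as np

def brovey_sharpen(ms, pan):
    """Brovey-style pan-sharpening sketch.

    ms:  (rows, cols, 3) multispectral bands, already resampled onto
         the panchromatic grid.
    pan: (rows, cols) high-resolution panchromatic band.
    Each band is scaled by pan / mean(bands), injecting the pan image's
    spatial detail while roughly preserving the band ratios (colors).
    """
    intensity = ms.mean(axis=2) + 1e-10   # epsilon avoids divide-by-zero
    return ms * (pan / intensity)[..., None]

ms = np.full((2, 2, 3), 0.2)               # flat multispectral patch
pan = np.array([[0.2, 0.4], [0.2, 0.2]])   # pan sees one bright feature
sharp = brovey_sharpen(ms, pan)
# The bright pan pixel doubles the corresponding multispectral values,
# so the fine feature now shows up in the color image.
```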

Part three focused on radiometric enhancement techniques. I took an image and, using the haze reduction tool, reduced the haze, making the image clearer and more colorful.

Part four was about linking the image viewer in Erdas to Google Earth. I opened Google Earth in Erdas by clicking connect to Google Earth under the Google Earth tab. I then opened an image in the image viewer and clicked match GE to view and then sync GE to view. This link can help with interpreting images, since the Google Earth imagery has such high spatial resolution and 3D capabilities.

The final part was on resampling. I took an image with 30 meter spatial resolution and used two different resampling techniques to lower the pixel size. The first technique was nearest neighbor, which creates a blocky pixel pattern (see fig. 3). The second technique was bilinear interpolation, which resampled the pixels into smaller 20 meter uniform pixels, giving us a more detailed image.
fig 3
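The difference between the two techniques can be sketched directly; the values are toy numbers, and bilinear is shown in 1-D for brevity:

```python
import numpy as np

def resample_nearest(img, factor):
    """Nearest neighbor: each output pixel copies the closest input
    pixel -- fast, but it produces the blocky pattern seen in fig. 3."""
    idx = (np.arange(img.shape[0] * factor) / factor).astype(int)
    jdx = (np.arange(img.shape[1] * factor) / factor).astype(int)
    return img[np.ix_(idx, jdx)]

def resample_bilinear_1d(row, factor):
    """Bilinear: interpolate between neighboring pixels, giving
    smoother, more uniform output values."""
    x = np.arange(len(row) * factor) / factor
    return np.interp(x, np.arange(len(row)), row)

img = np.array([[10, 30], [30, 50]], dtype=float)
nn = resample_nearest(img, 2)           # values repeat in 2x2 blocks
bl = resample_bilinear_1d(np.array([10.0, 30.0]), 2)
# nn's first row: [10, 10, 30, 30]  (blocky steps)
# bl:             [10, 20, 30, 30]  (smooth ramp between samples)
```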

Sources - All images provided by Professor Wilson.