




This report describes the process of stitching images together to create panoramic views of a country road in northern Iraq and the Singapore skyline. It covers image loading, patch creation, correlation coefficient calculation, image alignment, and display of the final results. Two different approaches are used: one based on overlapping regions and correlation coefficients, and the other based on points of interest and image transformations.
Name: Sofie Lindblom (Sofli222) Lab Partner: Emma Edvardsson (Emmed608)
In contrast to previous labs, the mini project leaves the method undefined and only a few guidelines are given. It is up to each group to apply the knowledge obtained so far in the course to solve the tasks. We decided to try two different approaches: the first shows a panoramic view of a country road in northern Iraq and the second displays a view of the Singapore skyline.
The images only vary in the x-direction, whereas in lab 2 the images were also shifted in the y-direction. Considering that knowledge about shifting in both the x- and y-direction has already been demonstrated, and that it is more interesting to process images of one's own selection, we hope that shifting in only the x-direction provides enough proof of understanding.
First the images are loaded and converted to gray scale using the MATLAB command 'rgb2gray'. The results are shown in figures 1 and 2 below.
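A minimal sketch of this loading step is given below; the file names are assumptions, since the report does not list them.

    % Minimal sketch of the loading step; the file names are assumed.
    part1 = rgb2gray(imread('iraq_part1.jpg'));
    part2 = rgb2gray(imread('iraq_part2.jpg'));
    part3 = rgb2gray(imread('iraq_part3.jpg'));
    figure, imshow(part1)   % displayed in figure 1
    figure, imshow(part2)   % displayed in figure 2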
Figure 1
Figure 2
Since this is a panoramic image it is desired to stitch the parts together to recreate the original view, starting by attaching part 1 to part 2 and thereafter attaching the resulting image to part 3.
A patch selected from part 2 is compared with patches in part 1. The correlation coefficient for each comparison is calculated and saved in an array. The corresponding code can be found in ‘Iraq.pdf’.
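A minimal sketch of this comparison is shown below; it assumes the parts have the same height and that the patch is taken from the left edge of part 2 (the variable names and the patch width are assumptions).

    % Slide a patch from the left edge of part 2 across part 1 and record
    % the correlation coefficient for every horizontal offset.
    patchWidth = 50;                            % assumed overlap width in pixels
    patch2 = part2(:, 1:patchWidth);            % patch from the left edge of part 2
    nShifts = size(part1, 2) - patchWidth + 1;
    c = zeros(1, nShifts);
    for x = 1:nShifts
        patch1 = part1(:, x:x+patchWidth-1);    % candidate patch in part 1
        c(x) = corr2(patch1, patch2);           % 2-D correlation coefficient
    end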
In order to stitch the parts into one single image, part 1 is appended to part 2 where the correlation coefficient obtains its maximum value. The resulting image is displayed in figure 3.
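Continuing the sketch above, the parts can be joined at the column where the correlation peaks; this assumes part 1 lies to the left of part 2.

    [~, xBest] = max(c);                         % best horizontal offset in part 1
    stitched = [part1(:, 1:xBest-1), part2];     % part 1 up to the overlap, then part 2
    figure, imshow(stitched)                     % cf. figure 3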
Figure 3
The same procedure as carried out in Step C is performed with the resulting image in figure 3 and part 3.
The above example stitches several images into one panoramic picture; however, it does not address the full challenge of the given task. For this reason a second attempt was made with different images. In this part two images were merged into one panoramic picture by using points of interest in both images. The steps are described in more detail below.
The images are loaded and converted to gray scale in the same way as in the Iraq example; the code is found in ‘Singapore.pdf’ and the corresponding result in figure 5.
Figure 5
Points of interest were selected using imtool in MATLAB. The points were chosen so that each point had a representation in both images and a significant attribute, e.g. a corner or a line. The chosen points are marked with red numbers in figures 6 and 7 below. Eight points per picture were selected, motivated by the fact that this number gave a fairly good result in a lab performed earlier in the course.
Figure 6
Figure 7
By converting the coordinates of the picked points to homogeneous coordinates, a transformation matrix could be calculated. The transformation matrix maps the coordinates of the first image onto the coordinates of the second image. The overlapping regions are removed and the shifted image is shown in figure 8. The corresponding code is found in ‘Singapore.m’.
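A minimal sketch of this calculation is shown below, assuming pts1 and pts2 are 8-by-2 matrices of manually picked [x y] coordinates in image 1 and image 2 (all names are assumptions, not the code in ‘Singapore.m’).

    n  = size(pts1, 1);
    P1 = [pts1, ones(n, 1)];       % homogeneous coordinates, image 1
    P2 = [pts2, ones(n, 1)];       % homogeneous coordinates, image 2
    T  = P1 \ P2;                  % least-squares solution of P1*T = P2
    T(:, 3) = [0; 0; 1];           % force a valid affine matrix
    tform   = affine2d(T);
    shifted = imwarp(im1, tform);  % image 1 transformed into image 2's frame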
Figure 8
The final result will have a small error margin since the points of interest were selected by hand and not automatically generated. When selecting pixels by hand there is a risk of a shift of a few pixels between the points selected in the two images. By automatically generating points through an image processing algorithm this error can be avoided. Possible algorithms are corner and line detectors such as the Harris detector and the Hough transform. What these algorithms have in common is that they detect sharp local changes in intensity, represented by isolated points, lines or edges. The code from lecture 8, ‘HarrisDemo.m’, was modified to generate points of interest for the two images. Figures 11 and 12 show image 1 and image 2, respectively.
Figure 11
Figure 12
From figures 11 and 12, points of interest can be selected and processed in a similar manner as in this example.
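As an illustration of the automatic alternative, a minimal sketch using the Computer Vision Toolbox Harris detector is given below; this is not the lecture code, and the toolbox availability and variable names (gray-scale images im1 and im2) are assumptions.

    corners1 = detectHarrisFeatures(im1);    % Harris corners in image 1
    corners2 = detectHarrisFeatures(im2);    % Harris corners in image 2
    best1 = corners1.selectStrongest(8);     % keep the 8 strongest points,
    best2 = corners2.selectStrongest(8);     % matching the manual selection
    figure, imshow(im1), hold on, plot(best1)
    figure, imshow(im2), hold on, plot(best2)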
When researching algorithms for stitching images one frequently comes across the approach using the Scale Invariant Feature Transform (SIFT) algorithm together with the RANdom SAmple Consensus (RANSAC) method. SIFT is used to obtain key points in the images and to compute a set of matching points. RANSAC then estimates a homography which transforms the images such that the points match. Using this homography, the images are transformed and thereafter appended. The process is iterative, so a wide panorama consisting of many images can be created. There are several examples available for download that perform this type of stitching. We did not pursue this approach since most of the options were written for a different version of MATLAB than the one we use, and because the code is quite advanced, which would have led to a lot of copying and little understanding. Although it can be concluded that the results of this method are of a higher standard, we will try it at home.
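For completeness, a minimal sketch of such a feature-based pipeline in MATLAB is shown below; this is not the downloaded code referred to above. SURF stands in for SIFT, estimateGeometricTransform uses a RANSAC-style estimator, and the Computer Vision Toolbox is assumed to be available.

    pts1 = detectSURFFeatures(im1);
    pts2 = detectSURFFeatures(im2);
    [f1, v1] = extractFeatures(im1, pts1);
    [f2, v2] = extractFeatures(im2, pts2);
    pairs = matchFeatures(f1, f2);                              % matching key points
    m1 = v1(pairs(:, 1));
    m2 = v2(pairs(:, 2));
    tform = estimateGeometricTransform(m2, m1, 'projective');   % RANSAC-style homography
    warped = imwarp(im2, tform);                                % image 2 in image 1's frame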