






DOI: 10.5121/ijcsa.2014.4606
ABSTRACT

The retina is a layer at the back of the eyeball that plays the main role in visualization; any disease affecting it can lead to severe problems. Segmentation of the blood vessels and classification of retinal vessels into arteries and veins are essential for the detection of various diseases such as diabetic retinopathy. This paper discusses various existing methodologies for classifying the vessels of a retinal fundus image into arteries and veins, which are helpful for the detection of such diseases. This classification is the basis for the AVR calculation, i.e. the ratio of the average diameter of arteries to veins. Diabetic retinopathy causes abnormally wide veins, which leads to a low AVR; diseases such as high blood pressure and pancreatic disorders also produce an abnormal AVR. Classification of blood vessels into arteries and veins is therefore important. Retinal fundus images are available in publicly available databases such as DRIVE [5], INSPIRE-AVR [6], and VICAVR [7].
Keywords: Retinal Image, Fundus, Preprocessing, Vessel Segmentation, Classification.
1. INTRODUCTION

In [1], the author describes the differences between arteries and veins. The blood vessels of the retina are divided into two types: arteries and veins. Arteries transport oxygen-rich blood to the organs of the body, while veins transport blood low in oxygen; in fundus images arteries appear brighter and veins darker. For the diagnosis of various diseases it is essential to distinguish arteries from veins. An abnormal ratio of the calibers of arteries to veins is an important symptom of diseases such as diabetic retinopathy, high blood pressure, and pancreatic disorders. For example, diabetic patients have abnormally wide veins, whereas pancreatic patients have narrowed arteries and high-blood-pressure patients have thickened arteries. To detect these diseases the retina has to be examined routinely, and the blood vessels have to be segmented before they can be classified into arteries and veins. In general, as mentioned in [1], there are four important differences between arteries and veins.
2. VARIOUS METHODOLOGIES FOR ARTERY AND VEIN CLASSIFICATION
2.1. Methodology Proposed in [2]

The artery/vein classification methodology proposed in [2] consists of three main steps. In the first step, several image enhancement techniques are applied to improve the images. A specific feature extraction process is then employed to separate major arteries from veins; feature extraction and vessel classification are applied not to each vessel point but to each small vessel segment. Finally, the results of the previous step are improved by a post-processing step that uses structural characteristics of the retinal vascular network: some incorrectly labelled vessels are relabelled correctly based on adjacent vessels or on the other vessels connected to them.
Figure 1. Stages of the method [2]: image enhancement, vessel segmentation and thinning, feature extraction, artery/vein classification, and post-processing.
Image enhancement is employed to enhance the contrast between arteries and veins in the retinal images and is considered an important step in [2]. A histogram matching algorithm is applied to normalize colour across images. Histogram matching takes two images as input, a source image A and a reference image R, and returns an output image B: image A is transformed into image B using the histogram matching algorithm so that the histogram of B matches that of R.
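Histogram matching can be sketched as follows. This is a minimal single-channel illustration in NumPy, not the authors' implementation; a colour image would be processed channel by channel (or on a luminance channel).

```python
import numpy as np

def match_histograms(source, reference):
    """Remap source intensities so their distribution follows the reference.

    Illustrative single-channel sketch of histogram matching: each source
    intensity level is mapped to the reference level with the closest CDF value.
    """
    # Unique intensity levels, their positions, and their counts
    src_vals, src_idx, src_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)

    # Empirical CDFs of both images
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size

    # Map each source CDF value onto the reference intensity scale
    mapped = np.interp(src_cdf, ref_cdf, ref_vals)
    return mapped[src_idx].reshape(source.shape)
```

After this mapping the output image B has (approximately) the same intensity distribution as the reference R, which is what normalizes colour across the dataset.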
Table 1. Best features extracted from vessels [2]
The post-processing stage, the last step in this methodology, consists of two steps. First, structural knowledge at bifurcation and crossover points is used to find connected vessels of the same type. This structural knowledge includes two rules: if a bifurcation point joins three vessel segments, all three must be of the same type; and if two vessels cross each other, one must be an artery and the other a vein. In the second step, the numbers of vessel points labelled as artery and as vein are counted for each detected vessel sub-tree and the dominant label of that sub-tree is found. If the number of vessel pixels carrying the dominant label exceeds a threshold, the dominant label is assigned to all vessel points of that sub-tree.
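The dominant-label voting of the second step can be sketched as below. This is an illustrative reconstruction, not the authors' code; the threshold value is an assumption.

```python
def relabel_subtree(pixel_labels, threshold=0.7):
    """Assign the dominant label of a vessel sub-tree to all of its pixels.

    pixel_labels: list of 'A'/'V' labels, one per vessel point of the sub-tree.
    threshold: fraction of pixels the dominant label must exceed (assumed value).
    Returns the relabelled list, unchanged when the vote is inconclusive.
    """
    arteries = pixel_labels.count('A')
    veins = pixel_labels.count('V')
    total = len(pixel_labels)
    dominant, count = ('A', arteries) if arteries >= veins else ('V', veins)
    if total and count / total > threshold:
        # Dominant label wins: relabel the whole sub-tree
        return [dominant] * total
    return list(pixel_labels)
```

A sub-tree labelled 80% artery would thus be relabelled entirely as artery, while an even split is left untouched.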
2.2. Methodology Proposed in [3]

The artery/vein classification methodology proposed in [3] is an algorithm that exploits the peculiarities of retinal images. By applying a divide et impera (divide-and-conquer) approach, a concentric zone around the optic disc is partitioned into quadrants, so that a more robust local classification analysis can be performed. The results obtained by this technique were compared with a manual classification provided on a validation set of 443 vessels. The overall classification error drops from 12% to 7% when the examination is based only on the diagnostically important retinal vessels.
2.2.1. Image preprocessing and vessel tracking
A previously developed algorithm [11] is used in this methodology. It analyzes the background area of the retinal image to detect changes of contrast and luminosity and, through an estimation of their local statistical properties, derives a compensation for their drifts. The first task is to extract the vessel network from the retinal fundus image, which is often achieved through a vessel tracking procedure; here a previously developed sparse tracking algorithm [12] is used.
2.2.2. Divide et impera partitioning
The local nature of the A/V classification procedure and the symmetry of the vessel network layout are exploited by partitioning the retina into four regions. Each region should contain a reasonably similar number of veins and arteries, and in each of them the two types of vessels should show significant local differences in their features. A concentric zone is identified around the optic disc and the fundus image is then partitioned into four regions, each containing one of the main arcs of the A/V network, as shown in Figure 2.
Figure 2. Principal arcades of the retinal vessel network [3]
To perform this partitioning, the position of the optic disc and its approximate diameter are first identified, either manually or automatically as in [13] or [14]. The identified cardinal axes divide the retinal image into four quadrants Quad_i, i = 1, ..., 4. For each quadrant Quad_i, the algorithm automatically detects the 5 vessels with the largest mean diameter, named S1, S2, S3, S4, and S5. The partitioned retinal image is shown in Figure 3. This method selects only the main vessels and thereby avoids confusing small arterioles and venules. The balanced presence of veins and arteries holds in all four quadrants only when the main vessels and their branches are considered.
Figure 3. Partitioned Retinal Image [3]
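The quadrant assignment and main-vessel selection can be sketched as follows. This is an illustrative reconstruction under assumed data structures (the 'pos' and 'diameter' fields are not the authors' representation).

```python
from math import atan2, pi

def pick_main_vessels(vessels, disc_center, per_quadrant=5):
    """Group vessels into four quadrants around the optic disc and keep the
    `per_quadrant` vessels with the largest mean diameter in each.

    vessels: list of dicts with 'pos' (x, y) and 'diameter' (assumed fields).
    disc_center: (x, y) coordinates of the optic disc.
    """
    cx, cy = disc_center
    quadrants = {1: [], 2: [], 3: [], 4: []}
    for v in vessels:
        x, y = v['pos']
        angle = atan2(y - cy, x - cx)             # angle w.r.t. the disc center
        quad = int((angle + pi) // (pi / 2)) + 1  # map [-pi, pi] onto 1..4
        quadrants[min(quad, 4)].append(v)
    # Keep only the widest vessels per quadrant (S1..S5 for per_quadrant=5)
    return {q: sorted(vs, key=lambda v: v['diameter'], reverse=True)[:per_quadrant]
            for q, vs in quadrants.items()}
```

Selecting only the widest vessels per quadrant is what keeps small arterioles and venules out of the local classification step.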
2.2.3. Feature Extraction
The author of this methodology [3] performed an extensive statistical analysis to find the most discriminant features for A/V classification. The mean of the hue values and the variance of the red values were found to be the best features for classifying a vessel as artery or vein. The fact that the artery and vein classes can be differentiated by the average hue and the homogeneity of their red component also agrees with medical experience: when two vessels close to each other are compared for classification, the one with the darker red colour is classified as the vein; if this difference is not significant enough, the one with the lowest degree of uniformity is classified as the artery.
Figure 4. Graph generation. (a) Original image; (b) segmented vessel; (c) Centerline image; (d) Extracted graph. [4]
2.3.2.1. Vessel Segmentation
For extracting the graph, the vessel segmentation result has to be used; it is also used for estimating vessel calibers. The method proposed by Mendonça et al. [15], adapted for the segmentation of high-resolution images [16], is used for segmenting the retinal vessels.
2.3.2.2. Vessel Centerline Extraction
To obtain the centerline image, the iterative thinning algorithm described in [17] is applied to the vessel segmentation result. This algorithm repeatedly removes border pixels from the segmented image until the object shrinks to a minimally connected stroke. The segmented image is shown in Figure 4(b) and its centerline image in Figure 4(c).
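As a stand-in illustration of iterative thinning, the classic Zhang-Suen scheme can be written as below. This is an assumption for illustration: [17] describes its own algorithm, which need not be Zhang-Suen.

```python
import numpy as np

def zhang_suen_thin(img):
    """Iteratively peel border pixels until a one-pixel-wide skeleton remains.

    img: 2-D 0/1 numpy array. Classic Zhang-Suen thinning, shown here as a
    stand-in for the iterative thinning algorithm of [17].
    """
    img = np.pad(img.astype(np.uint8), 1)
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            rows, cols = img.shape
            for y in range(1, rows - 1):
                for x in range(1, cols - 1):
                    if not img[y, x]:
                        continue
                    # 8-neighbours in clockwise order: p2 (north) .. p9 (north-west)
                    p = [img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
                         img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1]]
                    b = int(sum(p))                     # non-zero neighbours
                    a = sum((p[i] == 0 and p[(i+1) % 8] == 1) for i in range(8))
                    if not (2 <= b <= 6 and a == 1):
                        continue
                    if step == 0:
                        cond = p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0
                    else:
                        cond = p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0
                    if cond:
                        to_delete.append((y, x))
            for y, x in to_delete:    # delete only after the full scan
                img[y, x] = 0
            changed = changed or bool(to_delete)
    return img[1:-1, 1:-1]
```

On a filled rectangle this peels the shape down to a roughly one-pixel-wide centerline, which matches the "minimally connected stroke" described above.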
2.3.2.3. Graph Extraction
In the next step, the graph nodes are extracted from the centerline image by finding the intersection points and the endpoints (terminal points). Intersection points are centerline pixels with more than two neighbours; endpoints are pixels with only one neighbour. To find the links between nodes (the vessel segments), all intersection points and their neighbours are removed from the centerline image; the result is an image of separate components, each of which is a vessel segment. Each vessel segment is then represented by a link between two nodes. The graph extracted from the centerline image of Figure 4(c) is shown in Figure 4(d).
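The node-detection rule above can be sketched directly. This is a minimal NumPy illustration of the neighbour-count criterion, not the code of [4].

```python
import numpy as np

def graph_nodes(centerline):
    """Find intersection points (>2 neighbours) and endpoints (1 neighbour)
    in a 0/1 centerline image, following the rule described in the text."""
    c = np.pad(centerline.astype(int), 1)
    # Count the 8-neighbours of every pixel by summing shifted copies
    neigh = sum(np.roll(np.roll(c, dy, 0), dx, 1)
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0))
    # Note: pixels adjacent to a crossing also qualify as intersections,
    # which is why the text removes intersection points *and* their neighbours
    intersections = (c == 1) & (neigh > 2)
    endpoints = (c == 1) & (neigh == 1)
    return intersections[1:-1, 1:-1], endpoints[1:-1, 1:-1]
```

Removing the intersection pixels and their neighbours then splits the centerline into connected components, each becoming one link of the graph.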
2.3.2.4. Graph Modification
As a result of the segmentation and centerline extraction processes, the extracted graph may misrepresent parts of the vascular structure. The extracted graph should therefore be corrected whenever one of the typical errors defined in [18] is identified.
2.3.3. Graph Analysis
The output of the graph analysis is a decision on the type of each node. The node classification algorithm starts by extracting the following node information: node degree, the angles between the links, the orientation of each link, the degree of the adjacent nodes, and the vessel caliber at each link. Node analysis distinguishes four cases depending on the degree of the node; the four cases and their possible node types are shown in Table 2. After deciding on the type of each node, all links that belong to a particular vessel are identified and labelled. The final result is the assignment of two labels in each separate subgraph: the links of subgraph 1 are assigned labels C11 and C12, the links of subgraph 2 labels C21 and C22, and so on.
Cases                        | Possible node types
Case 1 – Nodes of degree 2   | Connecting point, Meeting point
Case 2 – Nodes of degree 3   | Bifurcation point, Meeting point
Case 3 – Nodes of degree 4   | Bifurcation point, Meeting point, Crossing point
Case 4 – Nodes of degree 5   | Crossing point

Table 2. The four cases and their possible node types [4]
The vessel structural information embedded in the graph representation is used in the labelling phase described above. Based on this labelling, the final goal is to assign one of the two labels of each subgraph to the artery class (A) and the other to the vein class (V). To allow this final discrimination between the A and V classes, both structural information and vessel intensity information are used. The 30 features listed in Table 3 are measured for each centerline pixel and normalized to zero mean and unit standard deviation; some of these features were previously used in [3], [19]. The authors tested classifiers such as quadratic discriminant analysis (QDA), linear discriminant analysis (LDA), and k-nearest neighbour (kNN) on the INSPIRE-AVR dataset. Sequential forward floating selection is used for feature selection: it starts with an empty feature set and then improves the performance of the classifier by adding or removing features.
Nr.   | Features
1-3   | Red, Green and Blue intensities of the centerline pixel.
4-6   | Hue, Saturation and Intensity of the centerline pixel.
7-9   | Mean of Red, Green and Blue intensities in the vessel.
10-12 | Mean of Hue, Saturation and Intensity in the vessel.
13-15 | Standard deviation of Red, Green and Blue intensities in the vessel.
16-18 | Standard deviation of Hue, Saturation and Intensity in the vessel.
19-22 | Maximum and minimum of Red and Green intensities in the vessel.
23-30 | Intensity of the centerline pixel in Gaussian-blurred (σ = 2, 4, 8, 16) versions of the Red and Green planes.
Table 3. List of features measured for each centreline pixel [4].
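A minimal sketch of this classification stage, normalizing features to zero mean and unit standard deviation and applying a kNN majority vote, is shown below. This is illustrative only; the real system uses the 30 features of Table 3 and the tuned classifiers named above.

```python
import numpy as np

def normalize(features):
    """Zero-mean, unit-standard-deviation normalization of a feature matrix
    (rows = centerline pixels, columns = features)."""
    return (features - features.mean(axis=0)) / features.std(axis=0)

def knn_predict(train_x, train_y, query, k=3):
    """Label a query point by majority vote among its k nearest neighbours."""
    dists = np.linalg.norm(train_x - query, axis=1)
    nearest = train_y[np.argsort(dists)[:k]]
    values, counts = np.unique(nearest, return_counts=True)
    return values[np.argmax(counts)]
```

Normalization matters here because the 30 features mix intensity scales; without it the Euclidean distances of the kNN vote would be dominated by whichever channel happens to have the largest range.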
Authors
S. Maheswari received her B.E (CSE) in 2009 from Anna University and is pursuing her M.E (CSE) at Dr. Sivanthi Aditanar College of Engineering, Tiruchendur. Her area of interest is medical image processing, and she is an active student member of CSI.

S. V. Anandhi received her B.E (CSE) and M.E (CSE) in 2006 and 2010, respectively. Her area of interest is image processing.