Automated identification of diabetic retinopathy has followed a number of paths. For accurate automated lesion detection, the first step is to identify images that are not suitable for automated assessment. This is followed by lesion or disease classification. For appropriate referral of individuals, accurate recognition of a lesion and of its extent is important. The extent of a specific lesion and whether more than one type of lesion is present typify disease progression. In this sense, visual word dictionary analysis enables automated image analysis without pre-processing the images for specific lesions; training can draw on images of different resolutions and from different cameras, and tolerates some noise in the images. We demonstrate that the method, combined with machine learning, allows feature fusion for identifying one or multiple lesions within a single image at an accuracy level appropriate for clinical referral. In addition, the same general visual dictionary methodology can be applied to identify blurred images that are not suitable for automated assessment, triggering alternative actions from the operator, such as acquiring new images or referring the case to a qualified reader. For the detection of hard exudates, our approach achieved an area under the curve (AUC) of 94.7%, while for superficial hemorrhage detection the AUC achieved is 83.2%. The best performance reached for quality analysis is 87.4%.
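To make the general pipeline concrete, below is a minimal sketch of a bag-of-visual-words image classifier evaluated with AUC, in the spirit of the approach described above. It is not the authors' implementation: raw image patches stand in for their local descriptors, scikit-learn's KMeans builds the visual dictionary, a linear SVM scores each image, and synthetic noise images are placeholders for retinal photographs; only the evaluation metric matches the abstract.

```python
# Sketch of a visual-words pipeline (assumed components, not the paper's exact method).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def extract_patches(image, patch=8, step=16):
    """Sample square patches on a regular grid and flatten them into local descriptors."""
    descriptors = []
    for y in range(0, image.shape[0] - patch, step):
        for x in range(0, image.shape[1] - patch, step):
            descriptors.append(image[y:y + patch, x:x + patch].ravel())
    return np.asarray(descriptors, dtype=np.float32)

def encode(image, dictionary):
    """Bag-of-visual-words signature: normalized histogram of word assignments."""
    words = dictionary.predict(extract_patches(image))
    hist = np.bincount(words, minlength=dictionary.n_clusters).astype(np.float32)
    return hist / max(hist.sum(), 1.0)

# Placeholder data: synthetic grayscale "fundus images" with binary lesion labels.
images = [rng.random((128, 128)) for _ in range(40)]
labels = np.tile([0, 1], 20)
train_idx, test_idx = np.arange(30), np.arange(30, 40)

# 1) Build the visual-word dictionary by clustering descriptors pooled from training images.
pooled = np.vstack([extract_patches(images[i]) for i in train_idx])
dictionary = KMeans(n_clusters=50, n_init=10, random_state=0).fit(pooled)

# 2) Encode every image as a word histogram and train a per-lesion classifier.
X = np.array([encode(img, dictionary) for img in images])
clf = SVC(kernel="linear", probability=True, random_state=0)
clf.fit(X[train_idx], labels[train_idx])

# 3) Evaluate with the area under the ROC curve, the metric quoted in the abstract.
scores = clf.predict_proba(X[test_idx])[:, 1]
print(f"AUC: {roc_auc_score(labels[test_idx], scores):.3f}")
```

In a multi-lesion setting like the one described in the abstract, one such detector would presumably be trained per target (hard exudates, superficial hemorrhages, image quality) over the same dictionary representation, with their outputs fused for the referral decision.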
@inproceedings{jelinek2013quality, author = "Hebert F. Jelinek and Ramon Pires and Rafael Padilha and Siome Goldenstein and Jacques Wainer and Anderson Rocha", title = "Quality control and multi-lesion detection in automated retinopathy classification using a visual words dictionary", booktitle = "International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)", year = 2013, address = "Osaka, Japan", }