Comparing Pixel N-grams and Bag of Visual Word Features for the Classification of Diabetic Retinopathy
, Andrew Stranieri, Herbert Jelinek
Pages: 1–7
The extraction of Bag of Visual Words (BoVW) features from retinal images for automated classification has been shown to be effective but computationally expensive. Histogram and covariance matrix features do not generally result in models with the same predictive accuracy as BoVW and are still computationally expensive. The discovery of features that enable accurate image classification on computationally constrained devices such as smartphones would open new and promising applications for image classification. For example, smartphone retinal cameras could conceivably make diabetic retinopathy screening widely available, and potentially reduce undiagnosed retinopathy, if this could be achieved with computationally simple classification algorithms. A novel image feature extraction technique inspired by N-grams in text mining, called 'Pixel N-grams', is described that can serve this purpose. Results on mammogram and texture classification have shown high accuracy despite the reduced computational complexity. However, retinal scan classification results using Pixel N-grams lag behind BoVW approaches. An explanation for the relatively poor performance of Pixel N-grams on diabetic retinopathy, drawing on concepts associated with the No Free Lunch theorem, is presented.
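To illustrate the idea behind the Pixel N-grams feature, the following is a minimal sketch (not the authors' implementation): pixel intensities are quantized into a small number of levels, and a histogram is built over sequences of n consecutive quantized pixels along image rows, directly analogous to character or word N-grams in text mining. The function name, quantization scheme, and row-wise scanning order are illustrative assumptions.

```python
import numpy as np

def pixel_ngram_histogram(image, n=2, levels=8):
    """Illustrative Pixel N-gram feature: histogram over n-grams of
    quantized pixel intensities, scanned along image rows.

    Assumptions (not from the paper): 8-bit grayscale input,
    uniform quantization, row-wise horizontal n-grams.
    """
    # Quantize 0..255 intensities into `levels` discrete bins.
    q = np.clip((image.astype(np.float64) / 256.0 * levels).astype(int),
                0, levels - 1)
    # One histogram bin per possible n-gram of quantized values.
    hist = np.zeros(levels ** n, dtype=int)
    for row in q:
        for i in range(len(row) - n + 1):
            # Encode the n-gram (v0, v1, ...) as a base-`levels` index.
            idx = 0
            for v in row[i:i + n]:
                idx = idx * levels + v
            hist[idx] += 1
    return hist

# Toy example: 2x4 grayscale image, bigrams over 2 intensity levels.
img = np.array([[0, 0, 255, 255],
                [128, 128, 128, 128]], dtype=np.uint8)
features = pixel_ngram_histogram(img, n=2, levels=2)
```

The resulting fixed-length histogram can feed any standard classifier; unlike BoVW, no keypoint detection, descriptor computation, or codebook clustering is required, which is the source of the reduced computational cost the abstract refers to.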
About the journal
Journal: ACM International Conference Proceeding Series