Open Access (CC BY-NC-ND 3.0 license). Published by De Gruyter, November 14, 2014

Application of Image Processing in Fruit and Vegetable Analysis: A Review

Shiv Ram Dubey and Anand Singh Jalal

Abstract

Images are an important source of data and information in the agricultural sciences. The use of image-processing techniques has outstanding implications for the analysis of agricultural operations. Fruit and vegetable classification is one of the major applications that can be utilized in supermarkets to automatically detect the kinds of fruits or vegetables purchased by customers and to determine the appropriate price for the produce. On-site training is the underlying prerequisite for this type of system, because its users generally have little or no expert knowledge. We explored various methods used in addressing fruit and vegetable classification and in recognizing fruit disease problems. We surveyed image-processing approaches used for fruit disease detection, segmentation and classification. We also compared the performance of state-of-the-art methods under two scenarios, i.e., fruit and vegetable classification and fruit disease classification. The methods surveyed in this paper are able to distinguish among different kinds of fruits and their diseases that are very similar in color and texture.

1 Introduction

In the agricultural sciences, images are an important source of data and information. Until recently, photography was the primary method used to reproduce and report such data. However, it is difficult to process or quantify photographic data mathematically. Digital image analysis and image-processing technology help to circumvent these problems owing to the advances in computers and microelectronics associated with traditional photography. These tools aid in improving images from the microscopic to the telescopic visual range and offer a scope for their analysis. Several applications of image-processing technology have been developed for agricultural operations. These applications involve the implementation of camera-based hardware systems or color scanners for inputting the images. Computer-based image processing is undergoing rapid evolution with ever-changing computing systems. The dedicated imaging systems available in the market, where the user can press a few keys and get the results, are not very versatile and, more importantly, carry a high price tag. Additionally, it is often unclear how their results are produced. In this paper, we surveyed the literature for the solutions proposed to address these problems, and framed classification problems in the most realistic way possible.

Achieving near-human levels of recognition remains a "grand challenge" for computer vision. The classification of fruits and vegetables is useful in supermarkets, where prices for fruits purchased by a customer can be set automatically. Fruit and vegetable classification can also be utilized in computer vision for the automatic sorting of fruits from a set consisting of different kinds of fruits. Picking out different kinds of vegetables and fruits is a routine task in supermarkets, where the cashier must identify not only the species of a particular fruit or vegetable (i.e., banana, apple, pear), but also its variety (i.e., Golden Delicious, Jonagold, Fuji) to determine its price. Pre-packaging products partially addresses this problem, but consumers often prefer to pick the produce themselves, in which case it cannot be packaged and must be weighed. Assigning codes to each kind of fruit and vegetable is a common solution, but this approach requires memorizing a large number of codes, and failure to recall them correctly results in pricing errors. As an aid to the cashier, many supermarkets issue a small booklet with pictures and codes, but flipping through the booklet is time-consuming.

In this paper, we review several image features and image descriptors in the literature, and present a system to address the problem by installing a camera at supermarkets that can recognize fruits and vegetables on the basis of color and texture cues. Formally, the system must generate a list of possible species and varieties for an image of a fruit or vegetable. The input image contains a fruit or vegetable of a single variety, in a random position and in any number. Objects inside a plastic bag can add hue shifts and specular reflections. Given the variety of produce and the impossibility of predicting which types of fruits and vegetables will be sold, training should be done on site by someone with little or no technical knowledge. Hence, the system must be able to achieve high accuracy using only a few training examples. Monitoring the health of fruits and trees and detecting their diseases is critical for sustainable agriculture. To the best of our knowledge, no sensor is commercially available for the real-time assessment of the health conditions of trees. Scouting is the most widely used method for monitoring stress in trees, but it is an expensive, time-consuming and labor-intensive process. Polymerase chain reaction, a molecular technique used for the identification of fruit diseases, is a possible solution, but it requires detailed sampling and processing procedures.

Early detection of crop diseases can facilitate their control through proper management approaches, such as vector control through fungicide applications, disease-specific chemical applications and pesticide applications, thereby increasing productivity and profit. The classical approach for detection and identification of fruit diseases is observation by experts with the naked eye. In some developing countries, consultation with experts is a time-consuming and costly affair because of their distant locations and limited availability. Automatic detection of fruit diseases is necessary to recognize the symptoms of diseases as soon as they appear on the growing fruits. Fruit diseases that appear during harvesting can cause significant losses in yield and quality. For example, soybean rust, a fungal disease, has caused significant economic losses; removing even 20% of the infection could benefit farmers by approximately 11 million dollars [58]. An early detection system for fruit diseases can help decrease such losses and can halt the further spread of diseases.

The various types of fruit diseases determine the quality, quantity and stability of yield. Fruit diseases not only reduce yield, but can also destroy a variety and result in its withdrawal from cultivation. Fruit diseases appear as spots on the fruits and, if not treated immediately, can cause severe losses. Excessive use of pesticides to control fruit diseases increases the levels of toxic residues on agricultural products, and this practice has also been identified as a major contributor to groundwater contamination. Pesticides are also among the most expensive components of the production cost, and their use poses significant risks to the health of ecosystems and consumers. Therefore, their application must be minimized. In this paper, we reviewed the approaches designed to detect fruit diseases as soon as they manifest so that proper management treatment can be applied.

Much work has been done to automate the visual inspection of fruits with respect to size and color. However, detection of defects on fruits using images is replete with problems owing to the natural variations in skin color among different types of fruits, large differences in types of defects, and the difficulty of recognizing symptoms when they are obscured by the stem or calyx. To determine what control measures to implement to avoid similar losses, it is important to recognize the type of diseases occurring on the produce. Some fruit diseases can also infect other parts of the tree, affecting twigs, leaves and branches. Precise segmentation of the tree is required for accurate detection of defects. Early detection of fruit diseases, that is, before the onset of symptoms, could be a valuable source of information for designing and executing proper pest management strategies and disease control measures to prevent the development and spread of fruit diseases.

In this paper, we explore the use of image-processing and computer vision techniques in the food and farming industry. Our main objective is to review the approaches proposed to recognize the types of fruits and vegetables and their diseases as seen in images.

2 Literature Review

In this section, we review the studies done on image categorization, fruit recognition, fruit and vegetable classification, and fruit disease identification using images. Fruit and vegetable classification and fruit disease identification can be seen as instances of image categorization. Most studies on fruit recognition or fruit disease detection considered color and texture properties for the categorization. Most investigations on fruit recognition were done with the fruits located on trees, but we restrict our review to the approaches proposed for the classification of different types of fruits and vegetables. Most of the work on fruit disease detection in the literature is restricted to the detection of a single type of disease only. In this section, we discuss several approaches used by researchers, with the aim of surveying the latest research related to the problems formulated in this paper.

2.1 Issues and Challenges

We survey the methods used in the recognition of fruits and vegetables and in the identification of fruit diseases and note the issues and challenges that each method tries to address. These issues and challenges may serve as the basis for evaluating the different methods. The input images may contain fruits or vegetables of more than one variety in an arbitrary position and in any number. Many kinds of fruits and vegetables exhibit significant variations in shape, texture and color depending on their ripeness. For example, oranges can be green, yellow, or patchy and brown. Because using just one image feature to separate fruits and vegetables into classes might not be sufficient, it is necessary to extract and combine those features that are useful for recognizing the produce. The produce might be inside a plastic bag, which can add hue shifts and specular reflections. Different classifiers may produce different results, so the selection of the type of classifier to use must also be addressed. Many classifiers in the literature are inherently binary, whereas the produce classification problem involves more than two classes, so extending a binary classifier to the multiclass scenario is a major issue. Subtracting the background may be necessary to reduce scene complexities, such as variations in illumination, sensor capturing artifacts, background clutter, shading and shadows. Consequently, the result of the system depends heavily on the efficiency of the image segmentation method. The performance of a fruit disease recognition system likewise depends on the defect segmentation, so precise defect segmentation is required. The number of training examples is also worth considering, because more training examples require more time to train the system. The system should perform well even when trained with few examples.
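A standard way to use binary classifiers in this multiclass setting is a one-vs-rest decomposition: one binary scorer per class, with the most confident scorer winning. The sketch below is illustrative rather than any surveyed system's implementation; `centroid_scorer` is a hypothetical toy binary learner standing in for, e.g., a binary SVM.

```python
import numpy as np

def train_one_vs_rest(X, y, train_binary):
    """Train one binary scorer per class (one-vs-rest).

    train_binary(X, labels01) must return a function mapping a
    feature vector to a real-valued confidence for the positive class.
    """
    models = {}
    for c in sorted(set(y)):
        labels01 = np.array([1 if yi == c else 0 for yi in y])
        models[c] = train_binary(X, labels01)
    return models

def predict_one_vs_rest(models, x):
    # The class whose binary scorer is most confident wins.
    return max(models, key=lambda c: models[c](x))

# Toy binary learner (hypothetical): negative distance to the
# positive-class centroid serves as the confidence score.
def centroid_scorer(X, labels01):
    pos = X[labels01 == 1].mean(axis=0)
    return lambda x: -np.linalg.norm(x - pos)

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0], [0.0, 5.0]])
y = ["apple", "apple", "banana", "banana", "pear"]
models = train_one_vs_rest(X, y, centroid_scorer)
print(predict_one_vs_rest(models, np.array([4.9, 5.1])))  # prints "banana"
```

In practice the binary learner would be one of the classifiers discussed in this survey (SVM, BPNN, decision tree), and the feature vectors would come from the color/texture descriptors reviewed below.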

2.2 Fruit and Vegetable Recognition and Classification

A number of studies have been conducted on image categorization. Veggie-Vision [5] was an initial attempt to develop a produce recognition system for use in supermarkets. The system could analyze color, texture and density, combining these cues to obtain more discriminative information. Density was calculated by dividing the weight by the area of the fruit. The reported accuracy was approximately 95% when color and texture features were combined, but the top four responses were used to achieve this result. Rocha et al. [59] developed an approach that could combine many features and classifiers. The authors approached the multi-class classification problem as a set of binary classification problems, in such a way that one can assemble diverse features and classifier approaches custom-tailored to parts of the problem. They achieved a classification accuracy of up to 99% for some fruits, but they fused three features, namely, border-interior classification (BIC), color coherence vector (CCV), and Unser features, and used the top two responses to achieve it. Their method obtained poor results for some types of fruits and vegetables, such as the Fuji apple. Arivazhagan et al. [2] combined color and texture features to classify the produce. They used a minimum-distance classifier and achieved 86% accuracy over their dataset with 15 different types of produce. Faria et al. [22] presented a framework for classifier fusion for the automatic recognition of produce in supermarkets. They combined low-cost classifiers trained for specific classes of interest to enhance the recognition rate. Chowdhury et al. [11] recognized 10 different vegetables using color histogram and statistical texture features. They obtained a classification accuracy of up to 96.55% using a neural network as the classifier. Danti et al. [15] classified 10 types of leafy vegetables using the BPNN classifier with a success rate of 96.40%.
To form the feature vector, they first cropped and resized the image and then extracted the mean and range of the hue and saturation channels of the HSV image. Suresha et al. [63] obtained 95% classification accuracy over a dataset with eight different types of vegetables using texture measures in the RGB color space. They used watershed segmentation to extract the region of interest as a pre-processing step and a decision tree classifier for training and classification.
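Descriptors of this kind are simple to compute. A minimal sketch of the mean/range hue-saturation feature described above, assuming the input is already an H-S-V array (channel order and value scaling here are assumptions, not the authors' exact setup):

```python
import numpy as np

def hs_mean_range_features(hsv_image):
    """Feature vector from the H and S channels of an HSV image:
    per-channel mean and range (max - min), in the spirit of the
    leafy-vegetable descriptor described above."""
    feats = []
    for ch in (0, 1):  # channel 0 = hue, channel 1 = saturation (assumed order)
        plane = hsv_image[:, :, ch].astype(float)
        feats += [plane.mean(), plane.max() - plane.min()]
    return np.array(feats)

# Tiny synthetic 2x2 "image" with H, S, V channels.
img = np.zeros((2, 2, 3))
img[:, :, 0] = [[10, 20], [30, 40]]      # hue values
img[:, :, 1] = [[100, 100], [200, 200]]  # saturation values
print(hs_mean_range_features(img))  # [ 25.  30. 150. 100.]
```

The resulting four-dimensional vector would then feed a classifier such as the BPNN used by Danti et al.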

Dubey [16] and Dubey and Jalal [17, 20] proposed a framework for recognizing and classifying images of 15 different types of produce. The approach involves segmenting an image to extract the region of interest, and then calculating features from that segmented region, which are then used in training and classification by a multi-class support vector machine. Moreover, they proposed an improved sum and difference histogram (ISADH) texture feature for this kind of problem. The ISADH outperformed the other image color and texture features. Arefi et al. [1] developed a segmentation algorithm to guide a robot arm in picking ripe tomatoes using image-processing techniques. They prepared a machine vision system to acquire images of tomatoes. The algorithm works in two phases: (1) the background is subtracted in the RGB color space and the ripe tomato is extracted using a combination of the RGB, HSI, and YIQ color spaces; and (2) the ripe tomato is localized using morphological features of the image. They achieved an accuracy of up to 96.36% for 110 tomato images.
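Sum and difference histograms, the basis of the ISADH feature mentioned above, replace a 2-D co-occurrence matrix with two cheap 1-D histograms over pixel-pair sums and differences. The sketch below shows only the underlying Unser-style histograms and a few derived statistics; the exact ISADH formulation differs.

```python
import numpy as np

def sum_diff_histograms(gray, dx=1, dy=0, levels=256):
    """Sum and difference histograms for one pixel displacement (dx, dy).
    Sums lie in [0, 2*(levels-1)], differences in [-(levels-1), levels-1]."""
    h, w = gray.shape
    a = gray[0:h - dy, 0:w - dx].astype(int)
    b = gray[dy:h, dx:w].astype(int)
    hs = np.bincount((a + b).ravel(), minlength=2 * levels - 1)
    hd = np.bincount((a - b).ravel() + levels - 1, minlength=2 * levels - 1)
    return hs / hs.sum(), hd / hd.sum()  # normalised to probabilities

def unser_features(hs, hd, levels=256):
    # A few classic statistics derived from the two histograms.
    i = np.arange(2 * levels - 1)                 # sum-histogram bins
    j = np.arange(2 * levels - 1) - (levels - 1)  # difference-histogram bins
    mean = 0.5 * (i * hs).sum()
    contrast = (j ** 2 * hd).sum()
    energy = (hs ** 2).sum() * (hd ** 2).sum()
    return mean, contrast, energy

gray = np.array([[0, 1], [0, 1]])
hs, hd = sum_diff_histograms(gray, levels=2)
print(unser_features(hs, hd, levels=2))  # (0.5, 1.0, 1.0)
```

Such statistics, computed per color channel, form a texture descriptor of the kind fed to the multi-class SVM above.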

Fruit detection greatly affects a robot's harvesting efficiency because orchards are unstructured environments with changing lighting conditions. Bulanon et al. [10] enhanced the portion occupied by fruit in images using a red chromaticity coefficient and adopted a circle detection method for classifying individual fruits. To improve fruit visibility, they acquired multiple views from different viewing angles for a portion of a tree canopy. Fruit visibility improved from 50% to about 90% by acquiring multiple views. Date palm fruits are popular in the Middle East and have a growing international presence. Sorting of dates can be a tedious job and a key process in the date palm industry. Haidar et al. [27] presented a method for classifying dates automatically based on pattern recognition and computer vision. Using a set of 15 carefully chosen visual features extracted from the images, they tried multiple classification methods, whose performance ranged from 89% to 99%. Jimenez et al. [29] developed a method that can identify spherical fruits in natural environments with difficult conditions: occlusions, shadows, bright areas, and overlapping fruits. Range and attenuation data are sensed by a laser range-finder sensor, and the 3-D position of the fruit, together with its radius and reflectance, is obtained after the recognition steps.
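The red chromaticity coefficient used for fruit enhancement is the brightness-normalized red component, r = R/(R + G + B), which emphasises red fruit pixels regardless of illumination level. A small sketch (the original paper's exact preprocessing is not reproduced here):

```python
import numpy as np

def red_chromaticity(rgb):
    """Red chromaticity r = R / (R + G + B) per pixel.
    High values indicate strongly red (fruit-like) pixels."""
    rgb = rgb.astype(float)
    total = rgb.sum(axis=2)
    total[total == 0] = 1.0  # avoid division by zero on black pixels
    return rgb[:, :, 0] / total

# One red "fruit" pixel and one green "leaf" pixel.
img = np.array([[[200, 10, 10], [10, 200, 10]]])
r = red_chromaticity(img)
print(r)  # fruit pixel ~0.91, leaf pixel ~0.045
```

Thresholding this map gives a fruit-candidate mask on which a circle detector can then operate.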

Vegetable quality is often measured in terms of color, shape, mass, firmness, size and absence of bruises, which can also serve as a basis for classifying fruits. Lino et al. [39] classified lemons and tomatoes on the basis of their size and color. Liu et al. [40] developed a system for recognizing peaches in a natural setting. The system first obtains an image of a red peach region and then uses matching expansion to recognize the entire region. Potential center points of the fitted circle are calculated as the intersections of the perpendicular bisectors of chords on the contour. Finally, the center point and radius of the fitted peach circle are obtained by computing statistics over the potential center points. Patrasa et al. [51] examined variations in antioxidant profiles among fruits and vegetables using pattern recognition tools. Classification is done on the basis of global antioxidant activity, levels of antioxidant groups (ascorbic acid, total anthocyanins, total phenolics), and quality parameters (moisture, instrumental color). Using hierarchical cluster analysis and principal component analysis (PCA), their system can discover the interrelationships among the parameters and thus identify different fruits and vegetables. Patel et al. [50] developed a fruit detection method using an improved algorithm that computes multiple features. The algorithm assigns different weights to features such as the color, intensity, edge and orientation of the input image. The approximate locations of fruits within an image are represented by the weighted combination of these features. The algorithm detected up to 90% of fruits in images taken from different positions on a tree.
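The circle-fitting idea described for peach recognition can be illustrated with circumcenters of contour-point triples: the circumcenter is exactly the intersection of the perpendicular bisectors of two chords. The robust aggregation below (median over random triples) is an assumption for illustration, not Liu et al.'s exact statistic.

```python
import numpy as np

def circumcenter(p1, p2, p3):
    """Centre of the circle through three points, i.e. the intersection
    of the perpendicular bisectors of two chords."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return np.array([ux, uy])

def fit_circle(contour, samples=50, seed=0):
    """Centre/radius estimate: median of circumcenters of random
    contour-point triples, then mean distance to the centre."""
    rng = np.random.default_rng(seed)
    centres = []
    for _ in range(samples):
        i, j, k = rng.choice(len(contour), 3, replace=False)
        centres.append(circumcenter(contour[i], contour[j], contour[k]))
    centre = np.median(centres, axis=0)
    radius = np.mean(np.linalg.norm(contour - centre, axis=1))
    return centre, radius

# Noise-free points on a circle of radius 5 centred at (10, 10).
t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
pts = np.stack([10 + 5 * np.cos(t), 10 + 5 * np.sin(t)], axis=1)
centre, radius = fit_circle(pts)
```

On a real, partially occluded contour, the median makes the estimate robust to outlier candidate centres.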

Thermal imaging is an approach for converting the pattern of invisible radiation of an object into visible images to facilitate the extraction and analysis of features. If temperature differences can be used to assist in the analysis, diagnosis, or evaluation of a product or process, then infrared thermal imaging technology can be successfully applied. The potential uses of thermal imaging in the food and agriculture industry include detecting diseases and pathogens in plants, predicting water stress in crops, predicting fruit yield, planning irrigation scheduling, detecting bruises in fruits and vegetables, evaluating the maturity of fruits, monitoring temperature distribution during cooking, and detecting foreign bodies in food material. Vadivambal and Jayas [67] reviewed the application of thermal imaging in the food and agriculture industry and highlighted the potential of thermal imaging techniques in various agricultural processes. The main advantage of infrared thermal imaging is the non-contact, non-destructive, and non-invasive nature of the technique in finding the distribution of temperature in a short period of time. Seng and Mirisaee [61] combined different methods that can analyze color, size, and shape to increase the accuracy of recognition. Using the nearest neighbor classifier for the classification, they achieved an accuracy of up to 90%.

Researchers have begun to explore how mobile devices can be conveniently used for recording nutritional intake. Rahman et al. [57] introduced a concept for integrating the camera of mobile phones to capture the images of food consumed. These images can be processed automatically to identify the food items present in the image. They generated texture features from food images and demonstrated that this feature leads to greater accuracy for a mobile phone-based dietary assessment system.

2.3 Fruit Disease Recognition and Classification

Automatic detection of fruit diseases is necessary to recognize disease symptoms as soon as they appear on growing fruits. Fruit diseases can develop after harvesting, thereby causing major losses in yield and quality. To determine what disease control measures to take to avoid similar losses during the next harvesting period, it is crucial to recognize the symptoms of fruit diseases. Some diseases can also affect other parts of the tree, infecting twigs, leaves and branches. Some common diseases of apple fruits are apple scab, apple rot and apple blotch [28]. Apple scabs are gray or brown corky spots. Apple rot infections produce slightly sunken, circular brown or black spots that may be covered by a red halo. Apple blotch is a fungal disease and appears on the fruit surface as dark spots with irregular or lobed edges.

Because consumers demand high-quality food products, a fast, accurate and objective method for determining food quality is needed. A promising alternative is the use of computer vision as a cost-effective, non-destructive and automated technique. This inspection approach, based on image analysis and processing, has found a variety of applications in the food and agriculture industry. Brosnan and Sun [6, 7] reviewed the progress that had been made in the development of computer vision, and emphasized the important aspects of the image-processing technique coupled with recent developments throughout the agricultural and food industry. Bennedsen and Peterson [3] developed a machine vision system for sorting apples according to surface defects, including bruises. The system could detect surface defects using a combination of a routine based on artificial neural networks and principal components, and three different threshold segmentation routines. When evaluated using eight apple varieties, the routines were able to detect 77%–91% of the individual defects and were able to measure 78%–92.7% of the total defective area. Bennedsen et al. [4] rotated apples in front of a camera to capture multiple images and efficiently removed the dark areas on the apple surface. Pydipati et al. [55] identified four types of citrus diseases against uninfected fruits using color co-occurrence methods and generalized squared distance in HSV color space, and achieved more than 95% accuracy. Kim et al. [31] introduced an approach for classifying grapefruit peel diseases. Their dataset consisted of six common grapefruit peel diseases and normal peel. The region of interest is obtained by cropping, after which intensity texture features are computed and classified using discriminant analysis. They achieved 96% accuracy.
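Color co-occurrence methods build a co-occurrence matrix per channel and derive texture statistics from it. A minimal grey-level version is sketched below; the HSV per-channel extension and the original papers' exact feature sets are not reproduced.

```python
import numpy as np

def glcm(gray, dx=1, dy=0, levels=8):
    """Grey-level co-occurrence matrix for one pixel offset (dx, dy),
    normalised to a joint probability table. 'Color co-occurrence'
    methods apply the same construction per colour channel."""
    h, w = gray.shape
    a = gray[0:h - dy, 0:w - dx]
    b = gray[dy:h, dx:w]
    m = np.zeros((levels, levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1)  # count co-occurring pairs
    return m / m.sum()

def glcm_stats(p):
    """Classic Haralick-style statistics from a normalised GLCM."""
    i, j = np.indices(p.shape)
    contrast = ((i - j) ** 2 * p).sum()
    energy = (p ** 2).sum()
    homogeneity = (p / (1.0 + np.abs(i - j))).sum()
    return contrast, energy, homogeneity

gray = np.array([[0, 0, 1, 1], [0, 0, 1, 1]])
contrast, energy, homogeneity = glcm_stats(glcm(gray, levels=2))
```

Concatenating such statistics over channels and offsets yields the feature vector that a discriminant or distance-based classifier then separates.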

Bulanon et al. [8] explored the variation in temperature at different times of the day in a citrus canopy as a potential approach for detecting oranges. Using a thermal infrared camera, they monitored the tree canopy in 24-h cycles and measured surface temperature, ambient temperature and relative humidity. The temperature profiles of both the canopy and fruit demonstrated large temperature gradients from afternoon (16:00) until midnight. Using image-processing techniques, they efficiently segmented the thermal images of the fruits during the period with the largest temperature difference. In another study, Bulanon et al. [9] fused a thermal image with a visible image of an orange canopy to improve fruit detection. A thermal infrared camera captured the thermal image while a digital color camera acquired the visible image. They applied two image-fusion approaches, namely, fuzzy logic and the Laplacian pyramid transform (LPT). The fuzzy logic approach performed better than the LPT, and both fusion approaches improved detection as compared with using thermal images alone.

Qin et al. [56] developed a hyperspectral imaging system for acquiring reflectance images of citrus samples in the spectral region ranging from 450 to 930 nm. They performed spectral information divergence (SID) classification of hyperspectral images of grapefruits to distinguish canker from normal fruit and other citrus surface conditions by quantifying the spectral similarities against a predetermined canker reference spectrum. They reported an overall classification accuracy of 96.2% using an optimized SID threshold value of 0.008. Crowe and Delwiche [12] proposed the use of three cameras that can sense reflectance in the visible region and narrow bands in the near-infrared region for simultaneous color evaluation and detection of fruit defects. The visible region was used for color grading; a narrow band centered at 780 nm was used to identify concavities with structured illumination, while a second band centered at 750 nm was used to detect dark spots under diffuse illumination. In another study, Crowe and Delwiche [13] combined two near-infrared images of a fruit with a real-time pipeline image-processing system. The structured-illumination information helped distinguish defects from concavities. They estimated the total projected area of defects on each fruit and classified the defects accordingly based on the defect pixel total.
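Spectral information divergence compares two spectra by treating each normalised spectrum as a probability distribution and summing the two relative entropies. A sketch of the measure itself (the hyperspectral acquisition and per-pixel pipeline are omitted):

```python
import numpy as np

def spectral_info_divergence(x, y, eps=1e-12):
    """SID(x, y) = D(p||q) + D(q||p), where p and q are the two
    reflectance spectra normalised to sum to one. Identical spectra
    give 0; larger values mean greater spectral dissimilarity."""
    p = np.clip(x / x.sum(), eps, None)
    q = np.clip(y / y.sum(), eps, None)
    return float((p * np.log(p / q)).sum() + (q * np.log(q / p)).sum())

reference = np.array([0.2, 0.5, 0.3])   # stand-in canker reference spectrum
pixel = np.array([0.3, 0.4, 0.3])       # stand-in pixel spectrum
sid = spectral_info_divergence(pixel, reference)
```

In the scheme described above, a pixel is labelled canker when its SID against the canker reference spectrum falls below the tuned threshold (0.008 in Qin et al.).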

Near-infrared hyperspectral imaging (NIR-HSI) is an emerging technique that combines imaging techniques with the classical NIR spectroscopy to obtain spatial and spectral information simultaneously from a field or a sample. The technique is fast, non-polluting, non-destructive, and relatively inexpensive per capita cost of analysis. Dale et al. [14] reviewed the use of NIR-HSI in agriculture and in quality control of agricultural food products. There is growing interest in the application of HSI in quality and safety assessments of agricultural food products. Lorente et al. [42] surveyed the literature for the use of hyperspectral imaging in the inspection of fruit and vegetables. They explained the different approaches to acquire the images and their use in the inspection of the internal and external features of produce.

Fernando et al. [23] used an unsupervised method based on a multivariate image analysis strategy that uses PCA to generate a reference eigenspace from a matrix obtained by unfolding spatial and color data from defect-free peel samples. Moreover, they introduced a multi-resolution concept to speed up the process. They tested about 120 samples of mandarins and oranges from four different cultivars, namely, Marisol, Fortune, Clemenules, and Valencia. They reported a success rate of 91.5% in detecting individual defects, and 94.2% in detecting both damaged and sound samples. Dubey and Jalal [18, 19] developed another method for detecting and classifying fruit diseases using image-processing techniques. The method could detect the region of defect using an image segmentation technique based on k-means clustering, and then extract the features from that segmented region for use by a multi-class support vector machine for training and classification purposes.
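The k-means defect segmentation step can be sketched as plain k-means over pixel colour vectors, with one cluster then taken as the defect region (e.g., the darkest one). The deterministic farthest-point initialization below is an assumption for reproducibility, not the cited authors' choice.

```python
import numpy as np

def kmeans_segment(pixels, k=2, iters=20):
    """Cluster colour vectors with Lloyd's algorithm.
    Returns per-pixel cluster labels and the cluster centres."""
    # Deterministic init: first pixel, then repeatedly the pixel
    # farthest from all centres chosen so far (farthest-point seeding).
    centres = [pixels[0]]
    for _ in range(k - 1):
        d = np.stack([np.linalg.norm(pixels - c, axis=1)
                      for c in centres]).min(axis=0)
        centres.append(pixels[d.argmax()])
    centres = np.array(centres, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if (labels == c).any():
                centres[c] = pixels[labels == c].mean(axis=0)
    return labels, centres

# Two well-separated colour populations (dark "defect", bright "sound").
pixels = np.concatenate([
    20.0 + np.arange(10)[:, None] * np.ones((1, 3)),
    200.0 + np.arange(10)[:, None] * np.ones((1, 3)),
])
labels, centres = kmeans_segment(pixels, k=2)
```

After clustering, features are extracted from the defect cluster only, which is what feeds the multi-class SVM in the approach described above.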

Gabriel and Aguilera [24] proposed a pattern recognition method for automatic detection of stem and calyx ends and damaged blueberries. The method could extract color and geometrical features. In selecting the best features, they tested five algorithms and found that the best classifiers were the support vector machine and linear discriminant analysis. Using these classifiers, they were able to determine the orientation of blueberries in 96.8% of the cases. The average performance for mechanically damaged, shriveled, and fungally decayed blueberries was reported as 86%, 93.3%, and 97%, respectively. Apple fecal contamination is an important food safety issue. Kim et al. [32] were able to detect fecal-contaminated apples using a hyperspectral reflectance imaging technique in conjunction with PCA. To detect apples with fecal contamination, they identified three visible and two near-infrared wavelengths that could potentially be implemented in multispectral imaging systems. In another study, Kim et al. [33] reported that multispectral fluorescence approaches can also be used to detect fecal contamination on apple surfaces. Pujari et al. [52, 53] identified diseased fruits, to support the grading of normal fruits, using a BPNN classifier with color- and texture-based features in the RGB and YCbCr color spaces. They obtained a success rate of nearly 88%. Pujari et al. [54] also combined color and texture features in the RGB color space with an ANN/knowledge-based classifier to classify powdery mildew on six types of fruits. Using only color-based features and a decision tree, Kanakaraddi et al. [30] computed the severity level of pathogenic diseases in chili fruits.

Kleynen et al. [34] developed a multispectral vision system in the visible/NIR range with four wavelength bands. They classified fruit defects into four categories: slight, more serious, leading to rejection, and recent bruises. They used a correlation pattern matching algorithm to detect stem-ends and calyxes. For defect segmentation, they applied a pixel classification approach and non-parametric models of defective and non-defective fruits based on Bayes theorem. They achieved good classification rates for apples with serious defects and recent bruises. Leemans et al. [35] proposed an approach based on color information for detecting defects in "Golden Delicious" apples. The approach involves the following steps. First, a model is generated on the basis of the variability in the normal color of the variety. Afterwards, each pixel of an image is compared with the model to segment the defects. Any pixel that matches the model is considered healthy tissue. Leemans et al. [36] proposed a segmentation process based on Bayesian classification that uses the information enclosed in a color image of a bi-color apple. The process could segment most defects, such as bruises, bitter pit, fungi attack, scar tissue, frost damage, scab and insect attack.
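Per-pixel Bayesian classification of the kind described above can be sketched with one Gaussian colour model per tissue class and a maximum-posterior decision per pixel. The diagonal-Gaussian assumption, the class names and the toy training pixels below are illustrative, not the cited authors' models.

```python
import numpy as np

class GaussianPixelBayes:
    """Per-pixel Bayes classifier: each class (e.g. defective vs. sound
    tissue) is a diagonal Gaussian over colour values; a pixel gets the
    class with the highest posterior probability."""

    def fit(self, samples_by_class):
        self.classes = list(samples_by_class)
        self.params = {}
        n_total = sum(len(s) for s in samples_by_class.values())
        for c, s in samples_by_class.items():
            s = np.asarray(s, dtype=float)
            # Mean, variance (floored to avoid zeros) and class prior.
            self.params[c] = (s.mean(axis=0), s.var(axis=0) + 1e-6,
                              len(s) / n_total)
        return self

    def log_posterior(self, x, c):
        mu, var, prior = self.params[c]
        return (np.log(prior)
                - 0.5 * np.sum(np.log(2 * np.pi * var))
                - 0.5 * np.sum((x - mu) ** 2 / var))

    def predict(self, x):
        return max(self.classes, key=lambda c: self.log_posterior(x, c))

# Hypothetical training pixels: dark-brown defect vs. bright sound peel.
model = GaussianPixelBayes().fit({
    "defect": [[60, 40, 30], [65, 42, 28], [58, 39, 33]],
    "sound": [[180, 160, 60], [175, 158, 64], [182, 161, 59]],
})
print(model.predict(np.array([62, 41, 30])))  # prints "defect"
```

Running `predict` over every pixel yields the defect mask; non-parametric variants replace the Gaussians with histogram-based class densities.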

An automated bruise detection system can help the fruit industry in reducing potential economic losses while providing food products of higher quality to consumers. Lu [43] explored the potential of near-infrared hyperspectral imaging in the spectral region between 900 and 1700 nm for detecting bruises on apples. Using a near-infrared hyperspectral imaging system, they were able to detect both new and old bruises on apples.

Li et al. [38] developed a lighting transform method based on a low-pass Butterworth filter with a cutoff frequency D0 = 7: the frequency response of the filter is maximally flat up to the cutoff frequency, after which it decreases. They applied this method to transform the non-uniform intensity values on spherical oranges into uniform intensity values over the whole fruit surface. They found that a ratio method and an R and G component combination, coupled with a big-area and elongated-region removal algorithm, could effectively discriminate stem-ends from defects. Mehl et al. [44] presented multispectral techniques using hyperspectral image analysis for detecting defects on three apple cultivars: Red Delicious, Golden Delicious, and Gala. The techniques involved (1) designing a multispectral imaging system from hyperspectral image analysis to characterize spectral features of apples for the specific selection of filters and (2) multispectral imaging for fast detection of apple contaminations. Using these techniques, they found good separation between the normal and contaminated Golden Delicious and Gala varieties, but rather limited separation for the Red Delicious variety. Mehl et al. [45] extended their earlier study by developing a hyperspectral imaging technique for the detection of defects and contaminations on the surface of apples. Li et al. [37] developed an experimental hardware system based on computer imaging technology for sorting defects on the surface of apples. The hardware system can inspect simultaneously four sides of each apple on the sorting line. They also developed methods for image background removal, defect segmentation and identification of stem-end and calyx areas. They argued that the adoption of the experimental hardware system is practical and feasible.
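The Butterworth lighting transform mentioned at the start of this discussion can be sketched in the frequency domain: a low-pass filter estimates the slowly varying illumination, which is then divided out of the image. The filter order and the division-based correction are assumptions here; the original pipeline differs in details.

```python
import numpy as np

def butterworth_lowpass(shape, d0=7, n=2):
    """Butterworth low-pass transfer function H = 1 / (1 + (D/D0)^(2n)):
    maximally flat (H close to 1) for frequency distances D below the
    cutoff D0, then falling off smoothly."""
    rows, cols = shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)  # distance from DC
    return 1.0 / (1.0 + (D / d0) ** (2 * n))

def flatten_lighting(gray, d0=7, n=2, eps=1e-6):
    """Estimate the slowly varying illumination via the low-pass filter
    and divide it out, evening intensity across a curved fruit surface."""
    F = np.fft.fftshift(np.fft.fft2(gray))
    H = butterworth_lowpass(gray.shape, d0, n)
    illum = np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))
    return gray / (np.abs(illum) + eps)

# On a uniformly lit (constant) patch the correction is itself constant.
gray = np.full((16, 16), 50.0)
flat = flatten_lighting(gray)
```

On a real orange image, dividing out the low-pass estimate removes the gradual brightness falloff toward the fruit's limb, so a single defect threshold works across the whole surface.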
In the current citrus industry, color and caliper are the key features for the automatic classification of fruits using computer vision approaches. However, human inspection is still the primary means of detecting defects on the surface of citrus. Lopez et al. [41] presented a computer vision system capable of detecting defects and also classifying the type of flaw in citrus. The system applies the Sobel gradient to the images to segment the faulty zones. Afterwards, color and texture features are extracted in different color spaces. They employed several classification techniques and obtained promising results.

Ouyang et al. [47] developed a synthesis segmentation algorithm for real-time online segmentation of images of strawberry diseases in greenhouses. The algorithm eliminates the effects of uneven illumination through the “top-hat” transform and removes noise by median filtering. It computes the complete area of the fruit image by means of gray-level morphology, logical operations, Otsu thresholding, and mean shift segmentation. Subsequently, the algorithm normalizes the extracted eigenvalues and uses the eigenvectors of the samples to train a support vector machine and a BP neural network. Their results indicate that the support vector machine has higher recognition accuracy than the BP neural network. Panli [48] developed a method for segmenting stems using mathematical morphology. The method applies an attention selection model based on the phase of the Fourier transform to extract the fruit saliency map; in this way, defects on the fruit surface are detected. Finally, the method uses a support vector machine for classification based on the color and texture of the defective part, and a good classification accuracy was obtained.

Schatzki et al. [60] used X-ray imaging for detecting defects in apples, in which apples are characterized as infected or not based on their appearance in X-ray images. Human observers inspected sets of X-ray images for a given cultivar/orientation, and the recognition rates were recorded. When still X-ray images were viewed on a computer screen, acceptable recognition was found: 50% of the defective apples were recognized, while 5% of the supposedly good apples were classified as defective. Several thresholding- and classification-based approaches have been used for pixel-wise segmentation of surface defects on “Jonagold” apples. According to Unay and Gosselin [65], segmentation accuracy improves when pixels are represented by their neighborhood. The authors add that multi-layer perceptrons are more promising than the other techniques in terms of computational expense and segmentation accuracy, and their approach generates more precise results on healthy fruits.

Zhang and Wu [68] proposed a fruit classification method based on a multi-class kernel support vector machine (kSVM). The method works as follows: a split-and-merge algorithm removes the background of each image; a feature space is then composed from the color histogram, texture, and shape features extracted from each fruit image; and PCA is used to reduce the dimensionality of the feature space. The authors constructed three kinds of multi-class SVMs, namely, the directed acyclic graph SVM, max-wins-voting SVM, and winner-takes-all SVM, and chose three kinds of kernels: the Gaussian radial basis kernel, homogeneous polynomial kernel, and linear kernel. They reported that the max-wins-voting SVM with the Gaussian radial basis kernel recorded the best classification accuracy of 88.2%. Tian et al. [64] developed a multiple classifier system (MCS) based on the support vector machine for the recognition of wheat leaf diseases. Three different features, namely, color, texture, and shape, are used as training sets. These features are classified by the low-level classifiers of the MCS into different mid-level categories, which are partly described by the symptoms of crop diseases. Mid-level features are then extracted from these mid-level categories produced by the low-level classifiers. Finally, high-level SVMs, which correct the errors made by the individual feature SVMs, are trained to improve recognition performance. Their approach obtained a good recognition rate compared with the other classifiers for wheat leaf diseases.

3 Performance Comparison

In this section, we compare the most common methods used for fruit and fruit disease recognition. Figure 1 shows the flow diagram of these methods, which operate in two phases: training and testing. Both require some pre-processing (i.e., image and defect segmentation and feature extraction). Overall, the system works in three steps. For fruit recognition, fruit images are first segmented into foreground and background. For fruit disease recognition, infected fruit parts are segmented from the image of the diseased fruit. In both cases, the features of the images are then extracted. Finally, a multi-class support vector machine (MSVM) is trained to classify fruits and vegetables into one of the classes and to recognize fruit diseases.

Figure 1. Fruit and Fruit Disease Recognition System [17, 18].


3.1 Image Segmentation and Defect Segmentation

Image segmentation is a convenient and effective method for detecting foreground objects in images with a stationary background. Background subtraction is a commonly used class of techniques for segmenting objects of interest in a scene, and this task has been widely studied in the literature. Background subtraction can be seen as a two-object image segmentation method, which often needs to cope with variations in illumination and sensor capturing artifacts such as blur. Specular reflections, background clutter, shading, and shadows in the images are the major factors that must be addressed. Therefore, it may be necessary to perform image segmentation by focusing only on the object’s description to reduce scene complexity. Rocha et al. [59] used a background subtraction method based on the K-means clustering technique. Among several image segmentation techniques, K-means-based segmentation provides a good trade-off between segmentation quality and computational cost. Some examples of image segmentation results are shown in Figure 2.
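As an illustration of this step, a minimal K-means color clustering can be sketched as follows. This is a simplified stand-in for the K-means-based segmentation discussed above, not the exact implementation of Rocha et al. [59]; the deterministic brightness-based initialization is our own assumption, chosen for reproducibility:

```python
import numpy as np

def kmeans_segment(img, k=2, iters=10):
    """Cluster the pixels of an RGB image into k color clusters and
    return the per-pixel cluster labels (k=2 gives a foreground/
    background split for images with a fairly uniform background)."""
    pixels = np.asarray(img, dtype=np.float64).reshape(-1, 3)
    # deterministic initialization: spread centers over pixel brightness
    order = np.argsort(pixels.sum(axis=1))
    centers = pixels[order[np.linspace(0, len(pixels) - 1, k).astype(int)]].copy()
    for _ in range(iters):
        # assign every pixel to its nearest center, then update centers
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean(axis=0)
    return labels.reshape(np.asarray(img).shape[:2])
```

The same routine can be run with k set to 3 or 4 when segmenting defects rather than whole fruits.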

Figure 2. Some Image Segmentation Result on the Fruit and Vegetable Images [20].


Precise defect segmentation is required in fruit disease classification. If the segmentation is imprecise, the features of the non-infected region will dominate over the features of the infected region. Whereas K-means clustering segments the defects of infected fruit images with three or four clusters, fruit background subtraction requires only two clusters. In defect segmentation, a single channel and two clusters are not sufficient, so more than two clusters and more than one channel of the color images should be considered for precise disease segmentation. In this experiment, images are partitioned into three or four clusters, one of which contains the majority of the diseased parts. The number of clusters is decided by empirical observation: once a number of clusters c is deemed sufficient for a particular problem, human intervention is no longer required for further processing, that is, the process becomes fully automated. In our case, two and four clusters suffice for classifying fruits and fruit diseases, respectively. Figure 3 shows some of the defect segmentation results using the K-means clustering technique.

Figure 3. Some Defect Segmentation Results on the Diseased Fruit [19].


3.2 Feature Extraction

Some state-of-the-art color and texture features are extracted and used to validate the accuracy and efficiency of the system. The features used in the classification of fruits and vegetables and their diseases are the global color histogram (GCH), color coherence vector (CCV), border/interior classification (BIC), local binary pattern (LBP), completed local binary pattern (CLBP), Unser’s feature (UNSER), and improved sum and difference histogram (ISADH).

  1. Global Color Histogram (GCH): The GCH is the simplest approach for encoding the information present in an image [25]. A GCH is a set of ordered values, one for each distinct color, representing the probability of a pixel being of that color. Uniform normalization and quantization are used to avoid scaling bias and to reduce the number of distinct colors [25].
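A compact sketch of a GCH with uniform quantization (4 levels per RGB channel is an illustrative choice, not a value prescribed by the text):

```python
import numpy as np

def gch(img, bins=4):
    """Global color histogram: uniformly quantize each RGB channel to
    `bins` levels, then return the normalized probability of each of the
    bins**3 quantized colors."""
    q = np.asarray(img).astype(np.int32) // (256 // bins)    # uniform quantization
    idx = (q[..., 0] * bins + q[..., 1]) * bins + q[..., 2]  # one index per color
    hist = np.bincount(idx.ravel(), minlength=bins ** 3).astype(np.float64)
    return hist / hist.sum()                                 # probabilities sum to 1
```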

  2. Color Coherence Vector (CCV): Pass et al. [49] presented an approach for comparing images based on CCVs. Color coherence is defined as the degree to which pixels of a given color are members of a large region with homogeneous color; these regions are referred to as coherent regions. Coherent pixels belong to such a contiguous region, whereas incoherent pixels do not. To compute CCVs, the method blurs and discretizes the image’s color space to eliminate small variations between neighboring pixels. Afterwards, it finds the connected components in the image to classify the pixels within a given color bucket as either coherent or incoherent. After classifying the image pixels, CCV computes two color histograms: one for coherent pixels and another for incoherent pixels. The two histograms are stored as a single histogram.
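A sketch of the CCV computation, assuming SciPy is available for connected-component labeling; the coherence threshold `tau` is an illustrative choice (the blurring step is omitted for brevity):

```python
import numpy as np
from scipy import ndimage

def ccv(img, bins=4, tau=10):
    """Color coherence vector: for each quantized color, count pixels in
    connected regions of at least tau pixels (coherent) and the rest
    (incoherent); the two histograms form one feature vector."""
    q = np.asarray(img).astype(np.int32) // (256 // bins)
    idx = (q[..., 0] * bins + q[..., 1]) * bins + q[..., 2]
    coherent = np.zeros(bins ** 3, dtype=np.int64)
    incoherent = np.zeros(bins ** 3, dtype=np.int64)
    for color in np.unique(idx):
        labeled, n = ndimage.label(idx == color)   # 4-connected components
        for comp in range(1, n + 1):
            size = int((labeled == comp).sum())
            if size >= tau:
                coherent[color] += size
            else:
                incoherent[color] += size
    return np.concatenate([coherent, incoherent])
```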

  3. Border/Interior Classification (BIC): To compute BIC, the method classifies image pixels as border or interior. A pixel is classified as interior if its four neighbors (top, bottom, left, and right) have the same quantized color; otherwise, it is classified as border. After the image pixels are classified, two color histograms are computed: one for border pixels and another for interior pixels [62].
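The border/interior rule above can be sketched as follows (edge pixels are handled here by replicating the image border, an assumption of this sketch):

```python
import numpy as np

def bic(img, bins=4):
    """Border/interior classification: quantize colors, mark a pixel
    'interior' when its 4 neighbours share its quantized color, and
    return the border histogram followed by the interior histogram."""
    q = np.asarray(img).astype(np.int32) // (256 // bins)
    idx = (q[..., 0] * bins + q[..., 1]) * bins + q[..., 2]
    p = np.pad(idx, 1, mode='edge')             # replicate colors at the image edge
    interior = ((p[:-2, 1:-1] == idx) & (p[2:, 1:-1] == idx) &
                (p[1:-1, :-2] == idx) & (p[1:-1, 2:] == idx))
    n = bins ** 3
    h_border = np.bincount(idx[~interior], minlength=n)
    h_interior = np.bincount(idx[interior], minlength=n)
    return np.concatenate([h_border, h_interior])
```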

  4. Local Binary Pattern (LBP): Given a pixel in the input image, the LBP is computed by comparing it with its neighbors as follows [46]:

    (1) $\mathrm{LBP}_{N,R}=\sum_{n=0}^{N-1}s(v_n-v_c)\,2^n,\qquad s(x)=\begin{cases}1,&x\ge 0\\0,&x<0\end{cases}$

    where v_c is the value of the central pixel, v_n is the value of its nth neighbor, R is the radius of the neighborhood, and N is the total number of neighbors. If the coordinate of v_c is (0, 0), then the coordinates of v_n are (R cos(2πn/N), R sin(2πn/N)). The values of neighbors that do not fall exactly on the image grid may be estimated by interpolation. Let the size of the image be I×J. After the LBP code of each pixel is computed, a histogram is created to represent the texture image:

    (2) $H(k)=\sum_{i=1}^{I}\sum_{j=1}^{J}f\big(\mathrm{LBP}_{N,R}(i,j),k\big),\quad k\in[0,K],\qquad f(x,y)=\begin{cases}1,&x=y\\0,&\text{otherwise}\end{cases}$

    where K is the maximal LBP code value. In this experiment, N and R are set to 8 and 1, respectively, to compute the LBP feature.
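Equations (1) and (2) with N = 8 and R = 1 (where no interpolation is needed) can be sketched as:

```python
import numpy as np

def lbp_8_1(gray):
    """LBP codes (N=8, R=1) for the interior pixels of a grayscale image
    (Eq. 1), plus the LBP histogram H(k) for k in [0, 255] (Eq. 2)."""
    g = np.asarray(gray, dtype=np.int32)
    vc = g[1:-1, 1:-1]                          # central pixels v_c
    # the 8 radius-1 neighbours, ordered counter-clockwise from the right
    shifts = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
              (0, -1), (1, -1), (1, 0), (1, 1)]
    code = np.zeros_like(vc)
    for n, (di, dj) in enumerate(shifts):
        vn = g[1 + di:g.shape[0] - 1 + di, 1 + dj:g.shape[1] - 1 + dj]
        code += ((vn - vc) >= 0).astype(np.int32) << n   # s(v_n - v_c) * 2^n
    hist = np.bincount(code.ravel(), minlength=256)      # H(k), K = 255
    return code, hist
```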

  5. Completed Local Binary Pattern (CLBP): LBP considers only the signs of the local differences (i.e., the difference of each pixel from its neighbors), whereas CLBP considers both the signs (S) and magnitudes (M) of the local differences, as well as the original center gray level (C) value [26]. The CLBP feature is the combination of three components, namely, CLBP_S, CLBP_M, and CLBP_C. CLBP_S is the same as the original LBP and codes the sign information of the local differences. CLBP_M codes the magnitude information of the local differences:

    (3) $\mathrm{CLBP\_M}_{N,R}=\sum_{n=0}^{N-1}t(m_n,c)\,2^n,\qquad t(x,c)=\begin{cases}1,&x\ge c\\0,&x<c\end{cases}$

    where m_n = |v_n − v_c| is the magnitude of the local difference and c is a threshold, set to the mean value over the input image in this experiment. CLBP_C codes the original center gray level value:

    (4) $\mathrm{CLBP\_C}_{N,R}=t(g_c,c_I),\qquad t(x,c)=\begin{cases}1,&x\ge c\\0,&x<c\end{cases}$

    where g_c is the gray level of the center pixel and the threshold c_I is set to the average gray level of the input image. In this experiment, N and R are set to 8 and 1, respectively, to compute the CLBP feature.
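The three CLBP components with N = 8 and R = 1 can be sketched as follows. Setting the magnitude threshold c to the mean local difference magnitude follows Guo et al. [26] and is an assumption of this sketch:

```python
import numpy as np

def clbp_8_1(gray):
    """CLBP_S (sign), CLBP_M (magnitude, Eq. 3) and CLBP_C (center,
    Eq. 4) codes with N=8, R=1 for the interior pixels of a gray image."""
    g = np.asarray(gray, dtype=np.int32)
    center = g[1:-1, 1:-1]
    shifts = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
              (0, -1), (1, -1), (1, 0), (1, 1)]
    diffs = np.stack([g[1 + di:g.shape[0] - 1 + di,
                        1 + dj:g.shape[1] - 1 + dj] - center
                      for di, dj in shifts])            # v_n - v_c for all n
    weights = (1 << np.arange(8))[:, None, None]        # 2^n
    c = np.abs(diffs).mean()                            # magnitude threshold (assumed mean m_n)
    clbp_s = ((diffs >= 0) * weights).sum(axis=0)          # sign component (= LBP)
    clbp_m = ((np.abs(diffs) >= c) * weights).sum(axis=0)  # magnitude component
    clbp_c = (center >= g.mean()).astype(np.int32)         # center component
    return clbp_s, clbp_m, clbp_c
```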

  6. Unser’s Feature (UNSER): To extract the Unser feature, the method first finds the sums and differences of intensity values over a displacement (d1, d2) of an image, then calculates two histograms (the sum histogram and the difference histogram) and stores them as a single histogram [66].
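For 8-bit images and a displacement of (0, 1) (an illustrative choice of d), the sum and difference histograms can be computed as:

```python
import numpy as np

def unser(gray, d=(0, 1)):
    """Unser's feature: histograms of the sums (range 0..510) and of the
    shifted differences (-255..255, stored as 0..510) of pixel pairs
    separated by the displacement d=(d1, d2), concatenated into one vector."""
    g = np.asarray(gray, dtype=np.int32)
    d1, d2 = d
    # the two members of every pixel pair separated by (d1, d2)
    a = g[max(d1, 0):g.shape[0] + min(d1, 0), max(d2, 0):g.shape[1] + min(d2, 0)]
    b = g[max(-d1, 0):g.shape[0] + min(-d1, 0), max(-d2, 0):g.shape[1] + min(-d2, 0)]
    h_sum = np.bincount((a + b).ravel(), minlength=511)
    h_diff = np.bincount((a - b + 255).ravel(), minlength=511)
    return np.concatenate([h_sum, h_diff])
```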

  7. Improved Sum and Difference Histogram (ISADH): Dubey and Jalal [20] developed an efficient ISADH texture feature to encode the neighboring information of a pixel on the basis of the sum and difference histogram. The sums and differences are first calculated with the neighboring pixels in the x-direction, and these outputs are then processed in the y-direction by again calculating both sums and differences. By considering the x- and y-directions separately, the algorithm efficiently encodes the relation of each pixel with its neighbors in both directions.
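The exact construction is given in [20]; a loose sketch of the idea, our own simplification rather than the authors' implementation, is:

```python
import numpy as np

def isadh_sketch(gray):
    """Illustrative ISADH-style feature: form sum and difference maps
    along x, process each map again along y, and concatenate the four
    resulting histograms (details differ from Dubey and Jalal [20])."""
    g = np.asarray(gray, dtype=np.int32)
    maps_x = (g[:, 1:] + g[:, :-1], g[:, 1:] - g[:, :-1])  # x-direction sum/difference
    hists = []
    for m in maps_x:
        for out in (m[1:, :] + m[:-1, :], m[1:, :] - m[:-1, :]):  # y-direction pass
            out = out - out.min()               # shift to non-negative bin indices
            hists.append(np.bincount(out.ravel()))
    return np.concatenate(hists)
```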

3.3 Training and Classification

Rocha et al. [59] recently presented a unified approach that can combine many features and classifiers. The authors approached the multi-class classification problem as a set of binary classification problems, in such a way that one can assemble diverse features and classifier approaches custom-tailored to parts of the problem. They defined class binarization as a mapping of a multi-class problem onto two-class problems (divide-and-conquer) and referred to each binary classifier as a base learner. For an N-class problem, N×(N–1)/2 binary classifiers are needed, where N is the number of classes. The ijth binary classifier uses the patterns of class i as positive and the patterns of class j as negative examples. To find the final outcome, they calculate the minimum distance of the generated vector of binary outcomes to the binary pattern (ID) representing each class: a test case is assigned to the class whose ID is at minimum distance from the vector of binary outcomes.

Their approach can be understood through a simple three-class problem. Let the three classes be x, y, and z. Three binary classifiers consisting of two classes each (i.e., x×y, x×z, and y×z) are used as base learners, and each binary classifier is trained with the training images. Each class receives a unique ID, as shown in Table 1. Populating the table is straightforward. First, we perform the binary comparison x×y and tag the class x with the outcome +1, the class y with –1, and set the remaining entries in that column to 0. Thereafter, we repeat the procedure for the classifier x×z and tag the class x with +1, the class z with –1, and the remaining entries in that column with 0. We repeat this procedure for the binary classifier y×z, tagging the class y with +1, the class z with –1, and the remaining entries in that column with 0, where the entry 0 means a “don’t care” value. Finally, each row represents the unique ID of that class (e.g., y = [–1, +1, 0]). Each binary classifier produces a binary response for any input example. For example, if the outcomes for the binary classifiers x×y, x×z, and y×z are +1, –1, and +1, respectively, then the input example will belong to the class whose ID has the minimum distance to the vector [+1, –1, +1]. The final answer is given by the minimum distance of

Table 1.

Unique ID of Each Class.

      x×y   x×z   y×z
x     +1    +1     0
y     –1     0    +1
z      0    –1    –1

min dist({+1, –1, +1}, ({+1, +1, 0}, {–1, 0, +1}, {0, –1, –1})).
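Under the assumption that distances are computed only over the non-zero ("care") entries of each ID, the decoding step can be sketched as:

```python
import numpy as np

# unique IDs from Table 1 (columns: x×y, x×z, y×z)
IDS = {'x': np.array([+1, +1, 0]),
       'y': np.array([-1, 0, +1]),
       'z': np.array([0, -1, -1])}

def decode(outcomes):
    """Return the class whose ID is closest (L1 distance over the
    non-zero, i.e. non-"don't care", positions) to the vector of
    binary classifier outcomes."""
    o = np.asarray(outcomes)
    def dist(cid):
        care = cid != 0                 # skip the "don't care" positions
        return int(np.abs(o[care] - cid[care]).sum())
    return min(IDS, key=lambda name: dist(IDS[name]))
```

For example, outcomes of [+1, +1, +1] decode to class x, since x's ID matches both of its care positions exactly; ties, if any, must be broken by some additional rule.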

We used the MSVM as a set of binary SVMs for the training and classification in both problems.

3.4 Results

To demonstrate the performance of the system, we used a dataset of fruits and vegetables obtained from a supermarket, comprising 15 categories and a total of 2615 images: plum (264), Agata potato (201), Asterix potato (181), cashew (210), onion (75), orange (103), Tahiti lime (104), kiwi (157), Fuji apple (212), Granny Smith apple (155), watermelon (192), honeydew melon (145), nectarine (247), Spanish pear (158), and diamond peach (211) (Figure 4). The dataset contained fruit images under different lighting conditions with varying numbers of elements in an image, as well as pose differences among the fruits, cropping, and partial occlusion. These variations were retained to make the dataset more realistic. A dataset of diseased apple fruits was also considered, which included four categories and a total of 391 images: apple blotch (104), apple rot (107), apple scab (100), and normal apples (80) (Figure 5). The variations in the type and color of the apples made this dataset more realistic as well. In the experiment, different numbers of images per class were used for training. The average error was computed as the sum of the average errors of each class divided by the total number of classes. Figure 6 shows the average error for the classification of fruits and vegetables for the different features in both the RGB and HSV color spaces, whereas Figure 7 shows the average accuracy obtained for the classification of apple fruit diseases. The x-axis represents the number of images per class used for training, and the y-axis represents the average error/average accuracy. The results illustrate that GCH has the highest average error (Figure 6) and the lowest average accuracy (Figure 7) for both the RGB and HSV color images, because GCH captures only color information and does not consider the relation among neighboring pixels.

Figure 4. Data Set used in the Fruit and Vegetable Classification Problem [20].


Figure 5. Some Images of the Apples Infected with Scab, Rot, Blotch, and Normal Apples of the Dataset [19].


Figure 6. Average Fruit and Vegetable Classification Error using the GCH, CCV, BIC, UNSER, and ISADH Features Considering MSVM as a Base Learner in the RGB and HSV Color Spaces [20].


Figure 7. Apple Fruit Disease Average Classification Accuracy using GCH, CCV, LBP, and CLBP Features Considering MSVM as a Base Learner in the RGB and HSV Color Spaces [19].


The average error for CCV is less than that for GCH (Figure 6), and the average accuracy of CCV is higher than that of GCH (Figure 7), because CCV exploits the concept of coherent and incoherent regions. The BIC feature has a lower average classification error than CCV because it considers the values of the four neighboring pixels. The UNSER feature has a lower average classification error than GCH, CCV, and BIC, and correspondingly a higher classification accuracy than GCH and CCV (Figure 6). When the MSVM is trained with 60 images per fruit or vegetable, the reported errors for fruit and vegetable classification in the RGB color space are 9.43%, 7.58%, 5.84%, 5.65%, and 4.56% for GCH, CCV, BIC, UNSER, and ISADH, respectively. LBP performs better than GCH and CCV because it incorporates the neighboring information in a rotation-invariant manner (Figure 7). CLBP performs very well in both color spaces (Figure 7). It is also observed across the plots that both features perform better in the HSV color space than in the RGB color space. For the ISADH feature with 60 training examples per class, the reported classification error is 4.56% in RGB and 1.48% in HSV for fruit and vegetable classification. The ISADH texture feature outperforms the other features because it combines the neighboring information of the x-direction with that of the y-direction (Figure 6). For disease recognition, the CLBP feature is the better choice.

We also compared the performance of the SVM and KNN classifiers using the ISADH texture feature in both the RGB and HSV color spaces. Figure 8 illustrates the comparison for the fruit and vegetable classification problem, and Figure 9 compares the results for the fruit disease recognition problem. The value of K for the KNN classifier is set to one. Across the plots, the SVM classifier performs better than the well-known KNN classifier in both the RGB and HSV color spaces. On the basis of these results gathered from several research papers, we argue that the ISADH texture feature combined with the MSVM, in either the RGB or HSV color space, is best suited for solving fruit and fruit disease classification problems.
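A minimal sketch of such a comparison, assuming scikit-learn is available; the toy Gaussian feature vectors stand in for real ISADH features, and the even/odd train/test split is an illustrative choice:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def compare_svm_knn(X, y):
    """Train an RBF-kernel SVM and a 1-NN classifier on the even rows of
    a feature matrix and report each one's accuracy on the odd rows."""
    results = {}
    for name, clf in (('svm', SVC(kernel='rbf')),
                      ('knn', KNeighborsClassifier(n_neighbors=1))):
        clf.fit(X[::2], y[::2])
        results[name] = float((clf.predict(X[1::2]) == y[1::2]).mean())
    return results

# toy stand-in for ISADH feature vectors of two well-separated classes
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (40, 16)), rng.normal(4.0, 1.0, (40, 16))])
y = np.array([0] * 40 + [1] * 40)
```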

Figure 8. Comparison of SVM and KNN Classifier using the ISADH Texture Feature in Both the RGB and HSV Color Spaces for Fruit and Vegetable Classification Problem [20].


Figure 9. Comparison of SVM and KNN Classifier using the ISADH Texture Feature in Both the RGB and HSV Color Spaces for Fruit Disease Recognition Problem [21].


3.5 Comparison Among Existing Methods

We also compared the performance of different approaches for addressing fruit and vegetable classification (Table 2) and fruit disease recognition (Table 3) problems. We made the comparisons on the basis of the number of categories in the database, pre-processing steps involved, features extracted, color space used, classifiers used, and accuracy achieved.

Table 2.

Comparison of Existing Fruit and Vegetable Classification Methods.

Reference | Dataset (no. of categories) | Pre-processing | Features | Color space | Training | Evaluation criteria | Average accuracy
Dubey and Jalal [20] | 15 | K-means with 2 clusters | ISADH | HSV | Multiclass SVM | Accuracy | 99%
Rocha et al. [59] | 15 | K-means with 2 clusters | GCH + CCV + BIC + Unser (fusion) | HSV | Multiclass SVM | Average error | 97%
Arivazhagan et al. [2] | 15 | Cropping | Co-occurrence features such as contrast, energy, local homogeneity, cluster shade, and cluster prominence | HSV | Minimum distance classifier | Recognition rate | 86%
Chowdhury et al. [11] | 10 | — | Color histogram + texture | HSV | Neural networks | Accuracy | 96.55%
Faria et al. [22] | 15 | K-means with 2 clusters | Color, texture, and shape | HSV | Classifier fusion | Accuracy | 98.8% ± 0.9
Danti et al. [15] | 10 | Cropping and resizing | Mean and range of hue and saturation | HSV | BPNN classifier | Accuracy | 96.40%
Suresha et al. [63] | 8 | Watershed segmentation | Texture features | RGB | Decision-tree classifier | Accuracy | 95%

Table 3.

Comparison of Existing Fruit Disease Recognition Methods.

Reference | Dataset (no. of categories) | Pre-processing | Features | Color space | Training | Evaluation criteria | Average accuracy
Dubey and Jalal [21] | 4 | K-means with 3 and 4 clusters | ISADH + gradient filters | HSV | Multiclass SVM + KNN | Accuracy and AUC | >99%
Pujari et al. [52] | 2 (normal and anthracnose-affected fruit) | K-means | Texture features | RGB | BPNN classifier | Accuracy | 84.65% (normal), 76.6% (anthracnose-affected)
Pujari et al. [53] | 2 (normal and affected fruit) | — | Color features + GLCM | YCbCr | BPNN classifier | Accuracy | 89.15% (normal), 88.58% (affected)
Pujari et al. [54] | 6 | Shade correction, artifact removal, and formatting | Color + texture features | RGB | ANN/knowledge-base classifier | Accuracy | 87.80%
Kim et al. [31] | 6 | ROI cropping | Intensity texture features | HSV | Discriminant analysis | Accuracy | 96%
Pydipati et al. [55] | 4 | Edge detection | Color co-occurrence methods | HSV | Generalized squared distance | Accuracy | >95%
Kanakaraddi et al. [30] | 4 | Median filtering | Color features | RGB | Decision tree | Disease severity | —

4 Conclusion

In this paper, we reviewed the progress made in the application of information and communication technology in the agriculture and food industries. Specifically, we explored several computer vision and image-processing approaches adopted for the classification of fruits and vegetables and their diseases. Most of these approaches involve three main steps: (1) background subtraction, (2) feature extraction, and (3) training and classification. We also surveyed the literature for image-processing-based solutions that use color and texture features for the automatic recognition and classification of fruits and vegetables and their diseases. These techniques likewise proceed in three steps: image and defect segmentation is performed using the K-means clustering method; features are then extracted from the segmented image and the infected region; and finally, the images are classified into one of the fruit or disease classes. For the evaluation of these methods, a total of 15 types of fruits and vegetables and three types of apple diseases were considered. Average classification errors of 1% and 3% were reported for fruit and vegetable classification and fruit disease classification, respectively.

In this review, only a single type of fruit was considered, and only one type of disease was present in a fruit or an image. In the future, we will extend this work towards identifying different species and varieties of fruits and vegetables in a single image, and perhaps also towards identifying different diseases in an image of a produce item. Another possible direction is the implementation of such systems in real-life scenarios. To improve classification accuracy, several features, such as the shape, color, and texture of the produce, may also be considered.


Corresponding author: Shiv Ram Dubey, GLA University – Computer Engineering and Applications, 17KM Stone, NH-2, Chaumuhan, Mathura, Uttar Pradesh 281406, India, e-mail:

Bibliography

[1] A. Arefi, A. M. Motlagh, K. Mollazade and R. F. Teimourlou, Recognition and localization of ripen tomato based on machine vision, Asian J. Crop Sci.5 (2011), 1144–1149.Search in Google Scholar

[2] S. Arivazhagan, R. N. Shebiah, S. S. Nidhyanandhan and L. Ganesan, Fruit recognition using color and texture features, J. Emerg. Trends Comput. Inform. Sci.1 (2010), 90–94.Search in Google Scholar

[3] B. S. Bennedsen and D. L. Peterson, Performance of a system for apple surface defect identification in near-infrared images, Biosyst. Eng.90 (2005), 419–431.10.1016/j.biosystemseng.2004.12.005Search in Google Scholar

[4] B. S. Bennedsen, D. L. Peterson and A. Tabb, Identifying defects in images of rotating apples, Comput. Electron. Agr.48 (2005), 92–102.10.1016/j.compag.2005.01.003Search in Google Scholar

[5] R. M. Bolle, J. H. Connell, N. Haas, R. Mohan and G. Taubin, Veggievision: a produce recognition system, in: Proceedings of the 3rd IEEE Workshop on Applications of Computer Vision, pp. 1–8, Sarasota, USA, 1996.Search in Google Scholar

[6] T. Brosnan and D.-W. Sun, Inspection and grading of agricultural and food products by computer vision systems – a review, Comput. Electron. Agr.36 (2002), 193–213.10.1016/S0168-1699(02)00101-1Search in Google Scholar

[7] T. Brosnan and D.-W. Sun, Improving quality inspection of food products by computer vision – a review, J. Food Eng.61 (2004), 3–16.10.1016/S0260-8774(03)00183-3Search in Google Scholar

[8] D. M. Bulanon, T. F. Burks and V. Alchanatis, Study of temporal variation in citrus canopy using thermal imaging for citrus fruit detection, Biosyst. Eng.101 (2008), 161–171.10.1016/j.biosystemseng.2008.08.002Search in Google Scholar

[9] D. M. Bulanon, T. F. Burks and V. Alchanatis, Image fusion of visible and thermal images for fruit detection, Biosyst. Eng.103 (2009a), 12–22.10.1016/j.biosystemseng.2009.02.009Search in Google Scholar

[10] D. M. Bulanon, T. F. Burks and V. Alchanatis, Improving fruit detection for robotic fruit harvesting, in: ISHS Acta Horticulturae 824: International Symposium on Application of Precision Agriculture for Fruits and Vegetables, pp. 329–336, 2009.10.17660/ActaHortic.2009.824.39Search in Google Scholar

[11] M. T. Chowdhury, M. S. Alam, M. A. Hasan and M. I. Khan, Vegetables detection from the glossary shop for the blind, IOSR J. Electr. Electron. Eng.8 (2013), 43–53.10.9790/1676-0834353Search in Google Scholar

[12] T. G. Crowe and M. J. Delwiche, Real-time defect detection in fruit – part I: Design concepts and development of protoype hardware, T. Am. Soc. Agr. Biol. Eng.39 (1996), 2299–2308.10.13031/2013.27740Search in Google Scholar

[13] T. G. Crowe and M. J. Delwiche, Real-time defect detection in fruit – part II: an algorithm and performance of a protoype system, T. Am. Soc. Agri. Biol. Eng.39 (1996), 2309–2317.10.13031/2013.27741Search in Google Scholar

[14] L. M. Dale, A. Thewis, C. Boudry, I. Rotar, P. Dardenne, V. Baeten and J. A. F. Piernac, Hyperspectral imaging applications in agriculture and agro-food product quality and safety control: a review, Appl. Spectrosc. Rev.48 (2013), 142–159.10.1080/05704928.2012.705800Search in Google Scholar

[15] A. Danti, M. Madgi and B. S. Anami, Mean and range color features based identification of common Indian leafy vegetables. Int. J. Sign. Proc. Image Proc. Pattern Recogn.5 (2012), 151–160.Search in Google Scholar

[16] S. R. Dubey, Automatic recognition of fruits and vegetables and detection of fruit diseases, Unpublished Master’s Theses, GLA University Mathura, India, 2012.Search in Google Scholar

[17] S. R. Dubey and A. S. Jalal, Robust approach for fruit and vegetable classification, Procedia Eng.38 (2012), 3449–3453.10.1016/j.proeng.2012.06.398Search in Google Scholar

[18] S. R. Dubey and A. S. Jalal, Detection and classification of apple fruit diseases using complete local binary patterns, in: Proceedings of the 3rd International Conference on Computer and Communication Technology, MNNIT Allahabad, India, pp. 346–351, 2012.10.1109/ICCCT.2012.76Search in Google Scholar

[19] S. R. Dubey and A. S. Jalal, Adapted approach for fruit disease identification using images, Int. J. Comput. Vision Image Proc.2 (2012), 51–65.10.4018/ijcvip.2012070104Search in Google Scholar

[20] S. R. Dubey and A. S. Jalal, Species and variety detection of fruits and vegetables from images, Int. J. Appl. Pattern Recog.1 (2013), 108–126.10.1504/IJAPR.2013.052343Search in Google Scholar

[21] S. R. Dubey and A. S. Jalal, Fruit disease recognition using improved sum and difference histogram from images, Int. J. Appl. Pattern Recog.1 (2014), 199–220.10.1504/IJAPR.2014.063759Search in Google Scholar

[22] F. A. Faria, J. A. dos Santos, A. Rocha and R. S. Torres, Automatic classifier fusion for produce recognition, in: Proceeding of 25th SIBGRAPI Conference on Graphics, Patterns and Images, pp. 152–259, 2012.10.1109/SIBGRAPI.2012.42Search in Google Scholar

[23] L.-G. Fernando, A.-G. Gabriela, J. Blasco, N. Aleixos and J.-M. Valiente, Automatic detection of skin defects in citrus fruits using a multivariate image analysis approach, Comput. Electr. Agr.71 (2010), 189–197.10.1016/j.compag.2010.02.001Search in Google Scholar

[24] A. L. V. Gabriel and J. M. Aguilera, Automatic detection of orientation and diseases in blueberries using image analysis to improve their postharvest storage quality, Food Control33 (2013), 166–173.10.1016/j.foodcont.2013.02.025Search in Google Scholar

[25] R. Gonzalez and R. Woods, Digital image processing,3rd edition, Prentice-Hall, Upper Saddle River, NJ, USA, 2007.Search in Google Scholar

[26] Z. Guo, L. Zhang and D. Zhang, A completed modeling of local binary pattern operator for texture classification, IEEE T. Image Process.19 (2010), 1657–1663.10.1109/TIP.2010.2044957Search in Google Scholar

[27] A. Haidar, H. Dong and N. Mavridis, Image-based date fruit classification, in: Proceedings of the 4th International Congress on Ultra Modern Telecommunications and Control Systems and Workshops, pp. 357–363, 2012.10.1109/ICUMT.2012.6459693Search in Google Scholar

[28] J. Hartman, Apple fruit diseases appearing at harvest, Plant Pathology Fact Sheet, College of Agriculture, University of Kentucky. Retrieved December 2012 from , 2010.Search in Google Scholar

[29] A. R. Jiménez, A. K. Jain, R. Ceres and J. L. Pons, Automatic fruit recognition: a survey and new results using range/attenuation images, Pattern Recogn. 32 (1999), 1719–1736. doi: 10.1016/S0031-3203(98)00170-8.

[30] S. Kanakaraddi, P. Iliger, A. Gaonkar, M. Alagoudar and A. Prakash, Analysis and grading of pathogenic disease of chilli fruit using image processing, in: Proceedings of the International Conference on Advances in Engineering & Technology, pp. 46–50, 2014.

[31] D. G. Kim, T. F. Burks, J. Qin and D. M. Bulanon, Classification of grapefruit peel diseases using color texture feature analysis, Int. J. Agr. Biol. Eng. 2 (2009), 41–50.

[32] M. S. Kim, A. M. Lefcourt, K. Chao, Y. R. Chen, I. Kim and D. E. Chan, Multispectral detection of fecal contamination on apples based on hyperspectral imagery: Part I. Application of visible and near-infrared reflectance imaging, T. Am. Soc. Agr. Eng. 45 (2002), 2027–2037. doi: 10.13031/2013.11414.

[33] M. S. Kim, A. M. Lefcourt, Y. R. Chen, I. Kim, D. E. Chan and K. Chao, Multispectral detection of fecal contamination on apples based on hyperspectral imagery: Part II. Application of hyperspectral fluorescence imaging, T. Am. Soc. Agr. Eng. 45 (2002), 2039–2047. doi: 10.13031/2013.11416.

[34] O. Kleynen, V. Leemans and M.-F. Destain, Development of a multi-spectral vision system for the detection of defects on apples, J. Food Eng. 69 (2005), 41–49. doi: 10.1016/j.jfoodeng.2004.07.008.

[35] V. Leemans, H. Magein and M.-F. Destain, Defect segmentation on ‘golden delicious’ apples by using colour machine vision, Comput. Electron. Agr. 20 (1998), 117–130. doi: 10.1016/S0168-1699(98)00012-X.

[36] V. Leemans, H. Magein and M.-F. Destain, Defect segmentation on ‘jonagold’ apples using colour vision and a Bayesian classification method, Comput. Electron. Agr. 23 (1999), 43–53. doi: 10.1016/S0168-1699(99)00006-X.

[37] Q. Li, M. Wang and W. Gu, Computer vision based system for apple surface defect detection, Comput. Electron. Agr. 36 (2002), 215–223. doi: 10.1016/S0168-1699(02)00093-5.

[38] J. Li, X. Rao, F. Wang, W. Wu and Y. Ying, Automatic detection of common surface defects on oranges using combined lighting transform and image ratio methods, Postharvest Biol. Technol. 82 (2013), 59–69. doi: 10.1016/j.postharvbio.2013.02.016.

[39] A. C. L. Lino, J. Sanches and I. M. D. Fabbro, Image processing techniques for lemons and tomatoes classification, Bragantia 67 (2008), 785–789. doi: 10.1590/S0006-87052008000300029.

[40] Y. Liu, B. Chen and J. Qiao, Development of a machine vision algorithm for recognition of peach fruit in natural scene, T. Am. Soc. Agr. Biol. Eng. 54 (2011), 695–702. doi: 10.13031/2013.36472.

[41] J. J. Lopez, C. Maximo and A. Emanuel, Computer-based detection and classification of flaws in citrus fruits, Neural Comput. Appl. 20 (2011), 975–981. doi: 10.1007/s00521-010-0396-2.

[42] D. Lorente, N. Aleixos, J. Gómez-Sanchis, S. Cubero, O. L. García-Navarrete and J. Blasco, Recent advances and applications of hyperspectral imaging for fruit and vegetable quality assessment, Food Bioprocess Tech. 5 (2012), 1121–1142. doi: 10.1007/s11947-011-0725-1.

[43] R. Lu, Detection of bruises on apples using near-infrared hyperspectral imaging, T. Am. Soc. Agr. Eng. 46 (2003), 523–530. doi: 10.13031/2013.12941.

[44] P. M. Mehl, K. Chao, M. Kim and Y.-R. Chen, Detection of defects on selected apple cultivars using hyperspectral and multispectral image analysis, Appl. Eng. Agr. 18 (2002), 219–226. doi: 10.13031/2013.7790.

[45] P. M. Mehl, Y.-R. Chen, M. S. Kim and D. E. Chan, Development of hyperspectral imaging technique for the detection of apple surface defects and contaminations, J. Food Eng. 61 (2004), 67–81. doi: 10.1016/S0260-8774(03)00188-2.

[46] T. Ojala, M. Pietikäinen and T. Mäenpää, Multiresolution gray-scale and rotation invariant texture classification with local binary patterns, IEEE T. Pattern Anal. 24 (2002), 971–987. doi: 10.1109/TPAMI.2002.1017623.

[47] C. Ouyang, D. Li, J. Wang, S. Wang and Y. Han, The research of the strawberry disease identification based on image processing and pattern recognition, Int. Fed. Info. Proc. 392 (2013), 69–77. doi: 10.1007/978-3-642-36124-1_9.

[48] H. E. Panli, Fruit surface defects detection and classification based on attention model, J. Comput. Info. Sys. 8 (2012), 4233–4240.

[49] G. Pass, R. Zabih and J. Miller, Comparing images using color coherence vectors, in: Proceedings of the ACM Multimedia, pp. 65–73, 1996. doi: 10.1145/244130.244148.

[50] H. N. Patel, R. K. Jain and M. V. Joshi, Fruit detection using improved multiple features based algorithm, Int. J. Comp. Appl. 13 (2011), 1–5. doi: 10.5120/1756-2395.

[51] A. Patras, N. P. Brunton, G. Downey, A. Rawson, K. Warriner and C. Gernigon, Application of principal component and hierarchical cluster analysis to classify fruits and vegetables commonly consumed in Ireland based on in vitro antioxidant activity, J. Food Compos. Anal. 24 (2011), 250–256. doi: 10.1016/j.jfca.2010.09.012.

[52] J. D. Pujari, R. Yakkundimath and A. S. Byadgi, Grading and classification of anthracnose fungal disease of fruits based on statistical texture features, Int. J. Advanced Sci. Tech. 52 (2013), 121–132.

[53] J. D. Pujari, R. Yakkundimath and A. S. Byadgi, Reduced color and texture features based identification and classification of affected and normal fruits’ images, Int. J. Agr. Food Sci. 3 (2013), 119–127.

[54] J. D. Pujari, R. Yakkundimath and A. S. Byadgi, Recognition and classification of produce affected by identically looking powdery mildew disease, Acta Tech. Agr. 17 (2014), 29–34. doi: 10.2478/ata-2014-0007.

[55] R. Pydipati, T. F. Burks and W. S. Lee, Identification of citrus disease using color texture features and discriminant analysis, Comput. Electron. Agr. 52 (2006), 49–59. doi: 10.1016/j.compag.2006.01.004.

[56] J. Qin, T. F. Burks, M. A. Ritenour and W. G. Bonn, Detection of citrus canker using hyperspectral reflectance imaging with spectral information divergence, J. Food Eng. 93 (2009), 183–191. doi: 10.1016/j.jfoodeng.2009.01.014.

[57] M. H. Rahman, M. R. Pickering, D. Kerr, C. J. Boushey and E. J. Delp, A new texture feature for improved food recognition accuracy in a mobile phone based dietary assessment system, in: Proceedings of the IEEE International Conference on Multimedia and Expo Workshops, pp. 418–423, 2012. doi: 10.1109/ICMEW.2012.79.

[58] M. J. Roberts, D. Schimmelpfennig, E. Ashley and M. Livingston, The value of plant disease early-warning systems: a case study of USDA’s soybean rust coordinated framework, Economic Research Service, 18, United States Department of Agriculture, 2006. Retrieved December 2011.

[59] A. Rocha, D. C. Hauagge, J. Wainer and S. Goldenstein, Automatic fruit and vegetable classification from images, Comput. Electron. Agr. 70 (2010), 96–104. doi: 10.1016/j.compag.2009.09.002.

[60] T. F. Schatzki, R. P. Haff, R. Young, I. Can, L.-C. Le and N. Toyofuku, Defect detection in apples by means of x-ray imaging, T. Am. Soc. Agr. Eng. 40 (1997), 1407–1415. doi: 10.13031/2013.21367.

[61] W. C. Seng and S. H. Mirisaee, A new method for fruits recognition system, in: Proceedings of the International Conference on Electrical Engineering and Informatics, pp. 130–134, 2009.

[62] R. Stehling, M. Nascimento and A. Falcao, A compact and efficient image retrieval approach based on border/interior pixel classification, in: Proceedings of the ACM Conference on Information and Knowledge Management, pp. 102–109, 2002. doi: 10.1145/584792.584812.

[63] M. Suresha, K. S. Sandeep Kumar and G. Shiva Kumar, Texture features and decision trees based vegetables classification, in: IJCA Proceedings on National Conference on Advanced Computing and Communications, pp. 21–26, 2012.

[64] Y. Tian, C. Zhao, S. Lu and X. Guo, SVM-based multiple classifier system for recognition of wheat leaf diseases, in: Proceedings of the World Automation Congress, pp. 189–193, 2012.

[65] D. Unay and B. Gosselin, Automatic defect detection of ‘jonagold’ apples on multi-spectral images: a comparative study, Postharvest Biol. Technol. 42 (2006), 271–279. doi: 10.1016/j.postharvbio.2006.06.010.

[66] M. Unser, Sum and difference histograms for texture classification, IEEE T. Pattern Anal. 8 (1986), 118–125. doi: 10.1109/TPAMI.1986.4767760.

[67] R. Vadivambal and D. S. Jayas, Applications of thermal imaging in agriculture and food industry – a review, Food Bioprocess Tech. 4 (2011), 186–199. doi: 10.1007/s11947-010-0333-5.

[68] Y. Zhang and L. Wu, Classification of fruits using computer vision and a multiclass support vector machine, Sensors 12 (2012), 12489–12505. doi: 10.3390/s120912489.

Received: 2014-4-1
Published Online: 2014-11-14
Published in Print: 2015-12-1

©2015 by De Gruyter

This article is distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.