
Flower segmentation with level sets evolution controlled by colour, texture and shape features

Article: 1323572 | Received 23 Nov 2016, Accepted 24 Apr 2017, Published online: 05 May 2017

Abstract

This work proposes a pre-informed Chan–Vese based level sets algorithm. The pre-information comprises fused colour, texture and shape features of the object. The aim is to use this algorithm to segment flower images and extract meaningful features that will help in classification of floral content. Shape pre-information is modelled manually using advanced image processing tools, local binary pattern features make up the texture pre-information, and the Red, Green and Blue colour channels of the object provide the colour pre-information. All pre-defined object information is fused to form a high-dimensional subspace defining the object characteristics. Testing the algorithm on flower image data-sets shows a jump in the information content of the resulting segmentation output compared to other models in this category. Segmentation of flowers is important for recognition, classification and quality assessment of the ever increasing volumes in floral markets.


Public Interest Statement

Flower segmentation and classification is a challenging task for computer vision engineers. A tool is required that can capture the full range of parameters needed for classification. This work is the first step towards the development of the iFlower tool for recognition of floral species. This mobile-based application can greatly help people understand the natural environment, contribute to helping it flourish and prevent species from going extinct. This work focuses on a segmentation model for flowers in the natural environment using their color, texture and shape information. Shape is extracted manually in this work; a fully automated process is being developed by our team.

1. Introduction

Flowers induce instantaneous and prolonged effects on the emotions, mood, behaviour and memory of both male and female human beings (Haviland-Jones, Rosario, Wilson, & McGuire, 2005). The authors studied the reactions flowers cause during their contact with humans in three different ways and concluded that human happiness is directly linked to flowers. This is the reason for a 30% increase in the world floriculture market every year, and a 25% increase per annum in India (Oommen, 2015). The other side of the story is the losses incurred because flowers do not last long after they are cut from the plant. Storage, temperature, sorting, packaging and transportation are some of the causes of a market loss of nearly 25% every year (van Meeteren, 2009).

Computer vision based algorithms are capable of determining the quality of a flower during its journey from blossoming to the final consumer market. In this work we limit ourselves to the first stage in the development of a complete floral quality tester using computer vision models. The first and most complicated task is to extract the flower into a lower-dimensional subspace for classification. The binary segmentation of the flower is performed using a higher-dimensional feature subspace consisting of the colour, texture and shape characteristics of the image objects. The proposed method is evaluated on the Oxford flower database available at http://www.robots.ox.ac.uk/~vgg/data/flowers/.

The world contains close to 250,000 species of flowers. Classification of these species is largely at the discretion of botanists; even people involved in the floral trade are often unable to classify them correctly. An image is enough to classify the floral content with the help of a guide book and an expert botanist, yet people still find it difficult to identify a particular flower species when they capture a picture of it with a digital camera. If they have the name of the flower, it is easy to find information about the species using the Google search engine, but the link between the photographed flower and its name is missing. Hence, this paper investigates the first step in the process of automatic classification of flora from images of flowers captured by digital cameras.

Vision computing applications have grown at an enormous pace in the last decade and agriculture (Gomes & Leta, 2012) is no exception. Pest detection, grading (Effendi, Ramli, Ghani, & Yaakob, 2009), lesion estimation (Niphadkar, Burks, Qin, & Ritenour, 2013), yield prediction (Dana & Ivo, 2008) and flower quality estimation (Benmehaia, Khedidja, & Bentchikou, 2016) leading to good harvesting are the major areas (Sharma, Sharma, & Srivastava, 2016). For floral image processing, a fair amount of research has been published in the past decade. The problem has been approached from two different perspectives: knowledge-based and data-based approaches. Data-based approaches use the information in the digital picture, such as color, shape, features and resolution, as the parameters for segmentation. Knowledge-based models use some form of pre-information about the object being segmented.

Nilsback and Zisserman (2007) use an image-specific color distribution that detects the shape of the flowers in the image automatically. The algorithm starts with a color descriptor and transforms it into a foreground/background segmentation that initializes a generic shape model. The generic shape model is applicable across multiple classes and viewpoints. Previously, Das, Manmatha, and Riseman (1999) used the color of the flowers as domain knowledge for segmentation. That algorithm also learns an image background model from the periphery of the image, and it works well when the flower in the image differs from the background by over 50%.

However, the Oxford database used in our work does not follow this rule, and the creators of the database propose the algorithm in Nilsback and Zisserman (2006). There, the authors develop a visual vocabulary of flowers based on color, texture and shape (CTS) information. This model overcomes the ambiguities that arise during flower classification due to ambient lighting variations, petal shape deformations, color changes and occlusion during image capture. Our work in this paper also builds on pre-knowledge of the shape, color and texture of the flower.

The methods in the literature for flower segmentation project the segmented flower onto the color plane, which makes classification with an automatic classifier difficult. In this work we build a flower feature database that can be used for classification with Hidden Markov Models (HMMs) or Artificial Neural Networks (ANNs). The Oxford data-set consists of 17 different flower species. Flower classification is a challenging task even for human eyes, as can be observed from Figure 1, where flower images from the data-set at http://www.robots.ox.ac.uk/~vgg/data/flowers/ are stacked together for reference.

Figure 1. Sample data-set.


Extracting useful segments for classification from these randomly captured images is an uphill task for computer vision scientists. Here we propose to use the CTS of the flower as pre-information for the segmentation cost function. The literature has used all these parameters as features for segmentation, but the disadvantage lies with the algorithms, which fail to extract the fine lines on the surface of the flowers shown in Figure 2.

Figure 2. Detail information that is missing in the flower segmentation algorithms.


Shape-only or texture-only classification of flowers produces ambiguous results. Figure 3 shows a shape prior, together with texture priors from the gray-level co-occurrence matrix (GLCM) (Kishore & Rajesh Kumar, 2012) and from Gabor filters in four different orientations (Kishore, Prasad, & Kishore, 2013). These texture models produce excellent texture segmentation outputs for most image processing problems, but Figure 3 shows that they cannot recover the detailed information highlighted in Figure 2. The shape model gives only the global region of the flower, which alone cannot drive the segmentation process.

Figure 3. Shape, GLCM and Gabor texture features of a flower.


The GLCM and Gabor texture features lose information that is vital for flower classification. In this work we introduce a mixed feature as pre-information for the level set function. The mixed feature is made up of shape, texture and color. For color, the Red, Green and Blue (RGB) planes are used. Shapes are hand-modelled from the original images of flowers. For texture we use Local Binary Pattern (LBP) features (Ahonen, Hadid, & Pietikainen, 2006; Guo, Zhang, & Zhang, 2010; Zhao & Pietikainen, 2007) instead of GLCM or Gabor features. The mixed feature image of a flower from the data-set is shown in Figure 4.

Figure 4. Mixed feature using CTS images.


The pre-information about the flower in CTS form is provided as a constraint in the formulation of the level sets. Level set methods were introduced by Osher and colleagues (Osher & Fedkiw, 2006) and popularized for segmentation by Vese and Chan (2002). A number of level set variants with shape priors (Saito, Nawano, & Shimizu, 2016) and texture priors (Wu, Gan, Lin, Zhang, & Chang, 2015) have been popular with the image processing research community, and combinations of CTS features have been used extensively by computer vision researchers for complex image segmentation (Cremers, Osher, & Soatto, 2006; Cremers, Rousson, & Deriche, 2007; Hu et al., 2013).

All the previous level set models used these features separately during level set formulation. Here we propose to fuse all the pre-information and provide it to the level set as a single constraint instead of three separate functions. Section 2 introduces the proposed model, Section 3 discusses the results and Section 4 concludes the work.

The flow chart describing the methodology used for segmenting flower images is shown in Figure 5.

Figure 5. Flow chart describing the proposed flower segmentation model.


2. Multi-feature level set formulation

CTS features form a knowledge base for the level set function to operate on in the image plane. Previous algorithms used three different functions to incorporate these features in the level set formulation; in this work we propose a single term that incorporates all of them. A brief review of level sets and of the feature extraction models makes up this section.

2.1. Color feature

Previous works extracted RGB vectors from the image plane to construct a size(RGB) × 3 feature subspace. Other methods convert the RGB space to the Lab, HSI or HSV color space and extract color features there. The model proposed in this work saves computing power initially by avoiding this step: the idea is to use each of the R, G and B planes separately during contour propagation. The level set is formulated on these three sub-planes, as elaborated in the level sets section.
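As a minimal sketch of this per-plane treatment (assuming the image is available as a NumPy array; the function name is ours, not from the paper):

```python
import numpy as np

def split_colour_planes(image_rgb):
    """Return the R, G and B planes as separate float arrays.

    The proposed model evolves the level set on each of these sub-planes
    directly, instead of first converting the image to Lab or HSV.
    """
    image_rgb = np.asarray(image_rgb, dtype=np.float64)  # expected shape (rows, cols, 3)
    return [image_rgb[:, :, n] for n in range(3)]
```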

2.2. Texture features

Nature creates different varieties of flowers based on colors and textures. Several texture extraction models have been considered for flower segmentation: the gray-level co-occurrence matrix (GLCM), Gabor filters, wavelet filters and LBP. The results of our analysis in Figures 3 and 6 show that LBP features computed on the RGB planes provide a better texture description of the flower than the other three.

LBP compares each pixel with a pre-defined neighbourhood to summarize the local structure of the image. For an intensity image, a pixel is I(x, y) ∈ ℝ+, where (x, y) gives the pixel position; the RGB image is I(x, y, N) ∈ ℝ+, where N indexes the RGB colour planes. The neighbourhood of a pixel can vary from a 3 × 3 window with radius r = 1 to a neighbourhood of 12 pixels with r = 2.5. The LBP code for a centre pixel (x_c, y_c, N) is given by

$$\mathrm{LBP}(x_c, y_c, N) = \sum_{i=1}^{N}\sum_{p=1}^{P} s\left(g_p - g_c\right)2^{p}\tag{1}$$

$$s(x) = \begin{cases} 1, & x \geq 0 \\ 0, & \text{otherwise} \end{cases}\tag{2}$$

where g_c is the gray value of the centre pixel at (x_c, y_c) and g_p is the gray value of the pixels in the neighbourhood of g_c. The value of N indexes the colour plane and P gives the number of pixels in the neighbourhood of g_c. Figure 6 gives the LBP calculation for the flower image in Figure 6(a).
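A small sketch of Equations (1) and (2) for one colour plane, assuming the basic 3 × 3 neighbourhood (P = 8, r = 1); an off-the-shelf implementation such as scikit-image's local_binary_pattern could be substituted:

```python
import numpy as np

def lbp_plane(plane):
    """Basic 3 x 3 LBP code (P = 8, r = 1) for one colour plane, per Equations (1)-(2)."""
    plane = np.asarray(plane, dtype=np.float64)
    code = np.zeros(plane.shape, dtype=np.uint8)
    # The 8 neighbours g_p of the centre pixel g_c, visited in a fixed order.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for p, (dy, dx) in enumerate(offsets):
        # np.roll wraps at the image border, which is adequate for a sketch.
        neighbour = np.roll(np.roll(plane, dy, axis=0), dx, axis=1)
        # s(g_p - g_c): 1 where the neighbour is >= the centre pixel, 0 otherwise.
        code |= ((neighbour - plane) >= 0).astype(np.uint8) << p
    return code

def lbp_rgb(image_rgb):
    """Apply the LBP operator to each RGB plane, giving a colour-texture image as in Figure 6(c)."""
    return np.dstack([lbp_plane(image_rgb[:, :, n]) for n in range(3)])
```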

Figure 6. (a) Original flower image, (b) LBP process for 3 × 3 window and (c) the resultant LBP resulting in a color texture image.


The LBP texture features are invariant to the monotonic gray-level changes that are predominant in flower images. Computational simplicity also makes LBP ideal for this work, as the next stage involves a more computationally intensive algorithm in the form of level sets. This colour-texture flower data forms the pre-information for the level set function.

2.3. Shape features

Shape features are a deciding factor for object classification, and shape segmentation is the most challenging task for flower images. Pre-information on shape is obtained from hand-segmented flower images: from the data-set of flower images, the external shapes of the flowers are segmented, resulting in binary shape filters as shown in Figure 3.
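One plausible way to convert such a hand-segmented binary mask into a shape prior for the level set is a signed distance transform; this is a sketch under that assumption, since the paper only states that the hand-drawn masks act as binary shape filters:

```python
import numpy as np
from scipy import ndimage

def shape_prior_from_mask(binary_mask):
    """Signed distance map from a hand-segmented flower mask.

    Negative inside the flower and positive outside, matching the sign
    convention used for the level set in Section 2.4.
    """
    mask = np.asarray(binary_mask, dtype=bool)
    dist_outside = ndimage.distance_transform_edt(~mask)  # distance to the flower region
    dist_inside = ndimage.distance_transform_edt(mask)    # distance to the background
    return dist_outside - dist_inside
```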

2.4. Level set formulation

The level set model, introduced by Caselles, Kimmel, and Sapiro (1997) and a few others, is an image segmentation model based on a closed contour spreading in the image plane and adhering to object edges. A contour of arbitrary shape Θ is implicitly defined in the image plane O:{o(x, y) ∈ ℝ²} as the zero level set of an embedding function ϕ: O → ℝ:

$$\Theta = \left\{(x, y) \in O \;\middle|\; \phi(x, y) = 0\right\}\tag{3}$$

The level set function ϕ evolves instead of the contour Θ itself, which brings two advantages: no marker points need to be tracked, and the active contour is immune to topological changes in the image plane. Here we focus on the Chan–Vese (CV) level set functional (Vese & Chan, 2002), formulated from Chan and Vese (2001), in the image plane I: Θ → ℝ+ as

$$E_{CV}(\phi) = \int_{\Theta}\left|I(x) - C^{+}\right|^{2}H(\phi(x))\,dx + \int_{\Theta}\left|I(x) - C^{-}\right|^{2}\left(1 - H(\phi(x))\right)dx + \lambda\int_{\Theta}\left|\nabla H(\phi(x))\right|dx\tag{4}$$

I(x, y) is the 2D image plane, written I(x) for brevity, and H(ϕ(x)) is the Heaviside step function. C⁺ is the average intensity of the pixels in the positive ϕ region, taken as constant, and C⁻ is the corresponding constant of the negative ϕ region. The last term in Equation (4) keeps the contour smooth during evolution, and λ is a proportionality constant setting the minimum separation required between boundaries. The first two terms constitute the external energy, representing the error between the image and the piecewise constant approximation of the evolving level set. Gradient descent minimization of the level set ϕ gives the curve evolution expression

$$\frac{\partial\phi}{\partial t} = -\frac{\partial E_{CV}}{\partial\phi} = \delta(\phi)\left[\lambda\,\nabla\cdot\frac{\nabla\phi}{\left|\nabla\phi\right|} - \left|I(x) - C^{+}\right|^{2} + \left|I(x) - C^{-}\right|^{2}\right]\tag{5}$$

The CV model introduced the δ(ϕ) term, which helps the level set detect even internal edges. Applying the level set model in (5) to flowers in various unconstrained images gives the segmentation outputs shown in Figure 7.
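A compact sketch of one Chan–Vese update following Equation (5), using a smoothed Heaviside/delta pair; the regularization width eps and the time step dt are our assumptions, not values from the paper:

```python
import numpy as np

def heaviside(phi, eps=1.0):
    """Smoothed Heaviside function H(phi)."""
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))

def dirac(phi, eps=1.0):
    """Smoothed delta function, the derivative of the Heaviside above."""
    return (eps / np.pi) / (eps ** 2 + phi ** 2)

def curvature(phi):
    """div(grad(phi) / |grad(phi)|), the smoothing term in Equation (5)."""
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx ** 2 + gy ** 2) + 1e-8
    return np.gradient(gx / norm, axis=1) + np.gradient(gy / norm, axis=0)

def chan_vese_step(phi, image, lam=1.0, dt=0.5, eps=1.0):
    """One gradient-descent step of the Chan-Vese functional (Equations 4 and 5)."""
    H = heaviside(phi, eps)
    c_pos = (image * H).sum() / (H.sum() + 1e-8)              # C+: mean inside the contour
    c_neg = (image * (1 - H)).sum() / ((1 - H).sum() + 1e-8)  # C-: mean outside the contour
    force = lam * curvature(phi) - (image - c_pos) ** 2 + (image - c_neg) ** 2
    return phi + dt * dirac(phi, eps) * force
```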

Figure 7. (a) Original flower image, (b) level set evolution and (c) segmented flowers using the CV model.


The outputs in Figure 7 show incomplete flower segments after an average of 1,000 iterations. A shape and texture prior model, used as a learning basis for the level set, focuses the segmentation of the flower on the RGB sub-planes, which is useful for post-processing recognition. Cremers et al. (2006) introduced the signed distance function for shape-encoded level sets, which is modified in this work to incorporate texture information. To establish a unique relationship between the surrounding level set ϕ and a pre-defined shape, texture, colour (STC) model φ_STC, it is assumed that ϕ < 0 inside φ_STC, ϕ > 0 outside φ_STC and ϕ = 1 everywhere else. The model φ_STC is defined as a combination of the shape and texture models taken across the RGB colour planes; the colour model is user dependent and can also be the HSV or Lab colour space. The combined level set function is

$$\varphi_{STC} = \sum_{N=1}^{3}\left(\varphi_{S} + \varphi_{T}\right)\tag{6}$$
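A sketch of how the fused prior φ_STC of Equation (6) might be assembled per colour plane from the signed-distance shape prior and the LBP colour-texture image; the normalization of the texture term is our assumption, since the paper only states that the shape and texture models are combined across the RGB planes:

```python
import numpy as np

def stc_prior(shape_prior, lbp_texture_rgb):
    """Fused shape-texture-colour prior, one map per RGB plane (Equation 6).

    shape_prior     : signed distance map from the hand-segmented mask (phi_S)
    lbp_texture_rgb : LBP colour-texture image of shape (rows, cols, 3)  (phi_T)
    """
    priors = []
    for n in range(3):
        texture = lbp_texture_rgb[:, :, n].astype(np.float64)
        # Centre and scale the texture term so it perturbs, rather than swamps, the shape prior.
        texture = (texture - texture.mean()) / (texture.std() + 1e-8)
        priors.append(shape_prior + texture)
    return priors
```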

There are many ways to define this signed distance function (Kraft, 2016; Laadhari, Saramito, & Misbah, 2016; Sussman, Smereka, & Osher, 1994), of which we use the most widely applied one, with constraints on scaling, rotation and translation. In this work, we propose to use the initial contour ϕ and the shape prior contour φ_STC to compute the level set area difference as in Cremers et al. (2006):

$$d^{2}(\phi, \varphi_{STC}) = \sum_{i=1}^{N}\int_{\Theta}\left(H(\phi(x)) - H(\varphi_{STC}(x))\right)^{2}dx\tag{7}$$

where N is the number of colour sub-planes in which the image is defined. The defined distance function is independent of image size, nonnegative, symmetric and satisfies the triangle inequality. Local energy minimization between ϕ₀ and φ_STC maximizes the possibility of finding the correct shape in cluttered backgrounds. The affine transformations are defined by the current STC ϕ₀. The curve evolution expression is obtained by applying the Euler–Lagrange equation to (7) as

$$\frac{\partial\phi_{0}}{\partial t} = \sum_{i=1}^{N}2\,\delta(\phi_{0})\left(H(\varphi_{STC}) - H(\phi_{0})\right)\tag{8}$$

where δ is the delta function and t is an artificial time step. Finally, combining the STC prior energy term in (7) and the CV level set functional in (4), we get the total energy function of the level set as

$$E_{T} = \sum_{i=1}^{N}\left[\zeta E_{CV} + (1 - \zeta)E_{STC}\right]\tag{9}$$

Here ζ controls the influence of the STC prior energy relative to the image energy. For a single shape prior, the energy functional used for algorithm development, derived from the evolution Equations (5) and (8), is

$$\frac{\partial\phi}{\partial t} = \sum_{i=1}^{N}\left\{\zeta\,\delta(\phi)\left[\lambda\,\nabla\cdot\frac{\nabla\phi}{\left|\nabla\phi\right|} - \left|I(x) - C^{+}\right|^{2} + \left|I(x) - C^{-}\right|^{2}\right] + 2(1 - \zeta)\left(H(\varphi_{STC}) - H(\phi)\right)\right\}\tag{10}$$

where C⁺ and C⁻ are updated iteratively in each discrete time step using the expressions

$$C^{+} = \sum_{i=1}^{N}\frac{\int_{\Theta}I\,H(\phi)\,dx}{\int_{\Theta}H(\phi)\,dx}\tag{11}$$

$$C^{-} = \sum_{i=1}^{N}\frac{\int_{\Theta}I\left(1 - H(\phi)\right)dx}{\int_{\Theta}\left(1 - H(\phi)\right)dx}\tag{12}$$
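Putting the pieces together, one iteration of the proposed evolution in Equation (10), with the per-plane constants of Equations (11) and (12), could look roughly as follows; the helper functions repeat the Chan–Vese sketch above, and the time step, ζ value and normalizations are assumptions rather than the authors' exact implementation:

```python
import numpy as np

# Helpers repeated from the Chan-Vese sketch so this block runs on its own.
def heaviside(phi, eps=1.0):
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))

def dirac(phi, eps=1.0):
    return (eps / np.pi) / (eps ** 2 + phi ** 2)

def curvature(phi):
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx ** 2 + gy ** 2) + 1e-8
    return np.gradient(gx / norm, axis=1) + np.gradient(gy / norm, axis=0)

def stc_level_set_step(phi, planes, priors, zeta=0.5, lam=1.0, dt=0.5, eps=1.0):
    """One gradient-descent step of Equation (10), summed over the RGB planes.

    phi    : current level set function
    planes : the three colour planes of the image
    priors : the three fused STC prior maps (phi_STC per plane)
    zeta   : weight balancing the image energy against the STC prior energy
    """
    update = np.zeros_like(phi)
    H = heaviside(phi, eps)
    for image, prior in zip(planes, priors):
        c_pos = (image * H).sum() / (H.sum() + 1e-8)              # Equation (11)
        c_neg = (image * (1 - H)).sum() / ((1 - H).sum() + 1e-8)  # Equation (12)
        image_force = lam * curvature(phi) - (image - c_pos) ** 2 + (image - c_neg) ** 2
        prior_force = 2.0 * (heaviside(prior, eps) - H)           # shape/texture attraction, Equation (8)
        update += zeta * dirac(phi, eps) * image_force + (1 - zeta) * prior_force
    return phi + dt * update
```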

The proposed level set model is tested on different flower images from the data-set (http://www.robots.ox.ac.uk/~vgg/data/flowers/).

3. Results and discussion

The following parameters of the proposed level set functional need adjustment during the simulation trials. The STC weight ζ is set based on the complexity of the image planes; it controls the distribution of contour movement on the sub-planes related to shape and texture. ζ is around 0.634 for images captured under poor ambient lighting or with backgrounds that dominate the flower foreground, while for high-quality images it is kept very low, around 0.12. In this work the value of ζ ranges from 0.1 to 0.7.

The level set stopping criterion follows from the gradient descent algorithm: contour evolution stops when the error between the current and previous iterations falls below a pre-defined threshold, set to 0.0001. The performance of the proposed method is estimated using the structural similarity index (SSIM) (Kishore, Sastry, & Rahman, 2016) against other methods such as the Gabor texture prior (GTP) (Shah & Chauhan, 2016) and the GLCM texture prior (GLCMTP) (Lloyd, Marshall, Moore, & Rosin, 2016).
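The stopping test and the SSIM comparison can be reproduced with standard tooling; scikit-image provides structural_similarity. The 0.0001 threshold follows the text, while the averaging used for the error measure is our assumption:

```python
import numpy as np
from skimage.metrics import structural_similarity

def has_converged(phi_new, phi_old, tol=1e-4):
    """Stop the evolution when the mean change of the level set falls below tol."""
    return np.mean(np.abs(phi_new - phi_old)) < tol

def segmentation_ssim(segmented, ground_truth):
    """Global SSIM between the binary segmentation and the hand-segmented ground truth."""
    segmented = np.asarray(segmented, dtype=np.float64)
    ground_truth = np.asarray(ground_truth, dtype=np.float64)
    return structural_similarity(segmented, ground_truth, data_range=1.0)
```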

The segmentation results for various flower images in the data-set are presented in Figure 8 along with their shape and texture priors. Local binary pattern based texture is used in all simulations.

Figure 8. Flower segmentation process.

Notes: FI: flower image, TCFI: texture colour FI, SFI: shape flower image, IC: initial contour, 29 itr: 29th iteration captured, 81 itr: 81st iteration captured, 181 itr: 181st iteration captured, FS: segmented flower.

Figure 8 shows the initial contour in yellow and the final contour in green; both contours are projected onto the image under the IC heading. The final iteration count for each flower image is different, even though the images collected in Figure 8 are shown at common iteration counts. The last column shows the binary segmented flower image that can be used for the classification process. Visual analysis of the segmented flowers indicates the quality of the proposed method: a very clear picture of the flower is obtained, showing its internal texture and shape. The internal stem of the Anthurium flower is fully extracted despite the cluttering influence of the flower texture. This near-perfect segmentation can lead to good classification rates. Figure 9 shows more test results on flower images from the data-set at http://www.robots.ox.ac.uk/~vgg/data/flowers/.

Figure 9. Segmentation results of the proposed algorithm on flowers of different sizes, shapes, textures and colors.


Figure 9 gives the performance of the algorithm when tested on images of different colour, shape and texture. The resulting flower images can be presented to a classifier. For quantitative performance measurement we use SSIM between the segmented output and the hand-segmented ground truth, which is extracted by an expert using Photoshop.

The superiority of the proposed local binary pattern texture (LBPT) segmentation is established by comparing its performance with GTP (Shah & Chauhan, 2016) and GLCMTP (Lloyd et al., 2016). The performance indicators are SSIM and the number of iterations. SSIM measures the local and global similarity between images; the hand-segmented image is used as the reference and the segmented flower as the test image. The local similarity 3D map for a flower image in the last row of Figure 9 is shown in Figure 10.

Figure 10. Showing local SSIM using a 3D mesh plot.


The quality of the segmentation result for the last row, and its closeness to the actual segmentation, can be seen from the plot in Figure 10. There is a small variation at the centre of the flower, as the stem is masked by the flower itself. Except for that one central region, the other portions of the flower match the hand-segmented flower image exactly.

Table 1 gives the global SSIM values for flowers in the database for LBPT, GTP and GLCMTP. SSIM values lie in the range [0, 1]: a score of 0 indicates no match and 1 a perfect match.

Table 1. Performance indicators for the flower segmentation algorithms using level sets

The average SSIM value over all the images in the data-set for the proposed algorithm (LBPT-LS) is around 0.9282, with an average of 189 iterations. The average SSIM values for GTP-LS and GLCMTP-LS are 0.7798 and 0.7801 respectively, with average iteration counts of 221 and 228. These values could be improved further by combining these methods and by testing on more complicated flower image structures.

4. Conclusion

A novel method for flower image segmentation is presented in this work. Flower images are among the most complicated structures nature creates and are difficult for machines to interpret. The task is accomplished using level sets that are pre-informed about the colour, shape and texture of the flower. LBP features supply the texture information, the shape is hand-segmented, and the level set is evolved in the RGB colour planes using the shape and texture information. SSIM and the number of iterations are used as performance measures for comparing the proposed level set with the Gabor texture filter based level set and the GLCM based level set. The proposed method using LBP textures outperforms the other two texture models for flower segmentation. In future, a more comprehensive model is required to segment flowers and extract meaningful features in real time. A deep learning model is being developed for classifying a large data-set of flowers.


Erratum

This article was originally published with errors. This version has been corrected. Please see Erratum https://doi.org/10.1080/23311916.2017.1333255.

Additional information

Funding

The authors received no direct funding for this research.

Notes on contributors

Syed Inthiyaz

Syed Inthiyaz was born in Vijayawada, Krishna District, AP, India. He received his BTech in ECE from JNT University Hyderabad, AP, India, and his MTech from JNT University, Kakinada. His research interests include image processing and video processing.

B.T.P. Madhav

B.T.P. Madhav was born in AP, India, in 1981. He received BSc, MSc, MBA and MTech degrees from Nagarjuna University, AP, India, in 2001, 2003, 2007 and 2009, respectively, and his PhD in the field of antennas from K L University. He is currently working as a Professor in the Department of Electronics and Communication Engineering. He has published more than 240 papers in international and national journals and conferences. His research interests include antennas, liquid crystal applications and wireless communications.

P.V.V. Kishore

P.V.V. Kishore (MIEEE’07) received his PhD degree in Electronics and Communications Engineering from Andhra University College of Engineering in 2013 and his MTech from Cochin University in 2003. He is currently a full professor and the Multimedia and Vision Computing Research Chair in the ECE Department at K L University.

References

  • Ahonen, T., Hadid, A., & Pietikainen, M. (2006). Face description with local binary patterns: Application to face recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28, 2037–2041. doi:10.1109/TPAMI.2006.244
  • Benmehaia, R., Khedidja, D., & Bentchikou, M. E. M. (2016). Estimation of the flower buttons per inflorescences of grapevine (Vitis vinifera L.) by image auto-assessment processing. African Journal of Agricultural Research, 11, 3203–3209.
  • Caselles, V., Kimmel, R., & Sapiro, G. (1997). Geodesic active contours. International Journal of Computer Vision, 22, 61–79. doi:10.1023/A:1007979827043
  • Chan, T. F., & Vese, L. A. (2001). Active contours without edges. IEEE Transactions on Image Processing, 10, 266–277. doi:10.1109/83.902291
  • Cremers, D., Osher, S. J., & Soatto, S. (2006). Kernel density estimation and intrinsic alignment for shape priors in level set segmentation. International Journal of Computer Vision, 69, 335–351. doi:10.1007/s11263-006-7533-5
  • Cremers, D., Rousson, M., & Deriche, R. (2007). A review of statistical approaches to level set segmentation: Integrating color, texture, motion and shape. International Journal of Computer Vision, 72, 195–215. doi:10.1007/s11263-006-8711-1
  • Dana, W., & Ivo, W. (2008). Computer image analysis of seed shape and seed color for flax cultivar description. Computers and Electronics in Agriculture, 61, 126–135. doi:10.1016/j.compag.2007.10.001
  • Das, M., Manmatha, R., & Riseman, E. M. (1999). Indexing flower patent images using domain knowledge. IEEE Intelligent Systems, 14, 24–33. doi:10.1109/5254.796084
  • Effendi, Z., Ramli, R., Ghani, J. A., & Yaakob, Z. (2009). Development of Jatropha curcas color grading system for ripeness evaluation. European Journal of Scientific Research, 30, 662–669.
  • Gomes, J. F. S., & Leta, F. R. (2012). Applications of computer vision techniques in the agriculture and food industry: A review. European Food Research and Technology, 235, 989–1000. doi:10.1007/s00217-012-1844-2
  • Guo, Z., Zhang, L., & Zhang, D. (2010). A completed modeling of local binary pattern operator for texture classification. IEEE Transactions on Image Processing, 19, 1657–1663.
  • Haviland-Jones, J., Rosario, H. H., Wilson, P., & McGuire, T. R. (2005). An environmental approach to positive emotion: Flowers. Evolutionary Psychology, 3, 104–132.
  • Hu, W., Zhou, X., Li, W., Luo, W., Zhang, X., & Maybank, S. (2013). Active contour-based visual tracking by integrating colors, shapes, and motions. IEEE Transactions on Image Processing, 22, 1778–1792.
  • Kishore, P. V. V., & Rajesh Kumar, P. (2012). Segment, track, extract, recognize and convert sign language videos to voice/text. International Journal of Advanced Computer Science and Applications (IJACSA), 3, 35–47.
  • Kishore, P. V. V., Prasad, M. V. D., & Kishore, S. R. C. (2013). Conglomeration of hand shapes and texture information for recognizing gestures of Indian sign language using feed forward neural networks. International Journal of Engineering and Technology (IJET), 5, 3742–3756.
  • Kishore, P. V. V., Sastry, A., & Rahman, Z. U. (2016). Double technique for improving ultrasound medical images. Journal of Medical Imaging and Health Informatics, 6, 667–675. doi:10.1166/jmihi.2016.1743
  • Kraft, D. (2016). Self-consistent gradient flow for shape optimization. Optimization Methods and Software, 1–13. doi:10.1080/10556788.2016.1171864
  • Laadhari, A., Saramito, P., & Misbah, C. (2016). An adaptive finite element method for the modeling of the equilibrium of red blood cells. International Journal for Numerical Methods in Fluids, 80, 397–428. doi:10.1002/fld.v80.7
  • Lloyd, K., Marshall, D., Moore, S. C., & Rosin, P. L. (2016). Detecting violent crowds using temporal analysis of GLCM texture. arXiv preprint arXiv:1605.05106.
  • Nilsback, M.-E., & Zisserman, A. (2006). A visual vocabulary for flower classification. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06) (Vol. 2, pp. 1447–1454). IEEE.
  • Nilsback, M.-E., & Zisserman, A. (2007). Delving into the whorl of flower segmentation. In BMVC (pp. 1–10).
  • Niphadkar, N. P., Burks, T. F., Qin, J., & Ritenour, M. A. (2013). Estimation of citrus canker lesion size using hyperspectral reflectance imaging. International Journal of Agricultural and Biological Engineering, 6, 41–51.
  • Oommen, A. (2015). Floriculture report—Flower power. Agricultural and Processed Food Products Export Development Authority (pp. 51–53). Noida: A Government of India Enterprise.
  • Osher, S., & Fedkiw, R. (2006). Level set methods and dynamic implicit surfaces (Vol. 153). Berlin: Springer Science & Business Media.
  • Saito, A., Nawano, S., & Shimizu, A. (2016). Joint optimization of segmentation and shape prior from level-set-based statistical shape model, and its application to the automated segmentation of abdominal organs. Medical Image Analysis, 28, 46–65. doi:10.1016/j.media.2015.11.003
  • Shah, S. A., & Chauhan, N. C. (2016). Techniques for detection and analysis of tumours from brain MRI images: A review. Journal of Biomedical Engineering and Medical Imaging, 3, 9.
  • Sharma, A., Sharma, V. K., & Srivastava, D. K. (2016). Overlapped flowers yield detection using computer-based interface. Perspectives in Science, 8, 25–27. doi:10.1016/j.pisc.2016.01.009
  • Sussman, M., Smereka, P., & Osher, S. (1994). A level set approach for computing solutions to incompressible two-phase flow. Journal of Computational Physics, 114, 146–159. doi:10.1006/jcph.1994.1155
  • van Meeteren, U. (2009). Causes of quality loss of cut flowers—A critical analysis of postharvest treatments. Acta Horticulturae, 847, 27–36. doi:10.17660/ActaHortic.2009.847.2
  • Vese, L. A., & Chan, T. F. (2002). A multiphase level set framework for image segmentation using the Mumford and Shah model. International Journal of Computer Vision, 50, 271–293. doi:10.1023/A:1020874308076
  • Wu, Q., Gan, Y., Lin, B., Zhang, Q., & Chang, H. (2015). An active contour model based on fused texture features for image segmentation. Neurocomputing, 151, 1133–1141. doi:10.1016/j.neucom.2014.04.085
  • Zhao, G., & Pietikainen, M. (2007). Dynamic texture recognition using local binary patterns with an application to facial expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29, 915–928. doi:10.1109/TPAMI.2007.1110