Haar Like Features Pdf Download
The effective real-time face detection framework proposed by Viola and Jones gained much popularity due to its computational efficiency and simplicity. A notable variant replaces the original Haar-like features with MB-LBP (Multi-Block Local Binary Pattern) features, which are defined by the local binary pattern operator; both detector types are integrated into the OpenCV library. However, each descriptor and its evaluation method has its own strengths and weaknesses. In this paper, an enhanced two-layer face detector composed of both Haar-like and MB-LBP features is presented. Haar-like features are employed as a coarse filter, but with a new evaluation involving a dual threshold. The already established MB-LBP features are arranged as the fine filter of the detector. The Gentle AdaBoost learning algorithm is deployed to train the proposed detector so that it reaches its full classification and performance potential. Experiments show that in the early stages of classification, Haar features with a dual threshold are more discriminative than MB-LBP and the original Haar-like features with respect to the number of features required and the computation involved. Benchmarking the proposed detector demonstrates an overall 12% higher detection rate at a 17% false-alarm rate over using MB-LBP features alone, while running with a 3× speedup.
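The dual-threshold evaluation described above can be contrasted with the single-threshold decision stump of the original Viola-Jones framework. The sketch below is a hypothetical illustration (the function names, thresholds, and the exact accept/reject convention are assumptions, not the paper's definitions): a single-threshold stump splits the feature axis once, while a dual-threshold stump accepts only values falling inside an interval.

```python
def single_threshold_stump(value, t, polarity=1):
    """Classic Viola-Jones weak classifier: one threshold, one polarity.
    Returns +1 (face-like) or -1 (non-face)."""
    return 1 if polarity * value < polarity * t else -1

def dual_threshold_stump(value, t_low, t_high):
    """Hypothetical dual-threshold weak classifier: a feature value is
    accepted as face-like only if it lies inside [t_low, t_high]."""
    return 1 if t_low <= value <= t_high else -1
```

Intuitively, a dual threshold can carve out a band of feature values typical of faces in a single weak classifier, where a single-threshold stump would need two classifiers to express the same interval.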
Haar-like features are simple digital image features that were introduced in a real-time face detector [1]. These features can be efficiently computed at any scale in constant time using an integral image [1]. After that, a small number of critical features is selected from this large set of potential features (e.g., using the AdaBoost learning algorithm as in [1]). The following example shows the mechanism used to build this family of descriptors.
The value of the descriptor is equal to the difference between the sum of the intensity values in the green rectangle and the sum in the red one; that is, the sum of the pixel intensities in the red area is subtracted from that of the green area. In practice, Haar-like features are placed at all possible locations of an image, and a feature value is computed for each of these locations.
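The green-minus-red computation can be sketched directly as pixel sums. The snippet below is a minimal illustration, assuming a row-major list-of-lists grayscale image and a vertical two-rectangle feature (left half standing in for the "green" rectangle, right half for the "red"); the function names are ours, not from any library.

```python
def pixel_sum(img, r0, c0, h, w):
    """Sum of pixel intensities in the h-by-w rectangle with top-left (r0, c0)."""
    return sum(img[r][c] for r in range(r0, r0 + h)
                         for c in range(c0, c0 + w))

def two_rect_haar_feature(img, r0, c0, h, w):
    """Vertical two-rectangle Haar-like feature: sum over the left ('green')
    rectangle minus the sum over the adjacent right ('red') rectangle."""
    return pixel_sum(img, r0, c0, h, w) - pixel_sum(img, r0, c0 + w, h, w)

# A bright-left / dark-right patch yields a large positive response:
patch = [[9, 9, 1, 1],
         [9, 9, 1, 1]]
response = two_rect_haar_feature(patch, 0, 0, 2, 2)  # 36 - 4 = 32
```

Computed naively like this, the cost grows with the rectangle area, which is exactly what the integral-image trick (discussed below) avoids.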
In this paper, a real-time face detection and recognition system is introduced for applications and services in ubiquitous network environments. The system is realized with a Haar-like feature algorithm and a Hidden Markov Model (HMM) algorithm, using communication between a WPS (Wearable Personal Station) 350 MHz development board and a Pentium III 800 MHz main server, which communicate with each other over Bluetooth. In experiments, the system identifies faces with 96% accuracy on 480 images of 48 different people and demonstrates successful interaction between the WPS board and the main server for an intelligent face recognition service in ubiquitous network environments.
However, as you can imagine, using a fixed sliding window and sliding it across every (x, y)-coordinate of an image, followed by computing these Haar-like features, and finally performing the actual classification can be computationally expensive.
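To see why exhaustive scanning is expensive, it helps to count the candidate windows. The sketch below is a rough back-of-the-envelope counter, assuming a hypothetical 24×24 base detector, a scale factor of 1.25, and a 1-pixel step (all assumed parameters, not taken from any particular detector's defaults):

```python
def count_windows(img_w, img_h, win=24, scale=1.25, step=1):
    """Count sliding-window evaluations for a win-by-win detector scanned
    over every (x, y) position of the image at every scale until the
    window no longer fits."""
    total = 0
    size = float(win)
    while size <= min(img_w, img_h):
        s = int(size)
        # number of valid top-left corners horizontally and vertically
        total += ((img_w - s) // step + 1) * ((img_h - s) // step + 1)
        size *= scale
    return total
```

Even a modest 320×240 image produces hundreds of thousands of candidate windows, and each window requires evaluating many features, which is why the cascade structure that rejects most windows in the first few stages matters so much in practice.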
We used the Viola-Jones algorithm to develop a Leishmania parasite detection system. The algorithm includes three procedures: feature extraction, integral image creation, and classification. Haar-like features are used as the descriptors. An integral image was used as an abstract representation of the image that significantly speeds up the algorithm. The AdaBoost technique was used to select the discriminative features and to train the classifier.
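AdaBoost's feature-selection step amounts to picking, at each round, the weak classifier with the lowest weighted error on the training samples. The sketch below is a minimal illustration of that inner search (the data layout and the "predict +1 if value ≥ threshold" convention are our assumptions for illustration, not the paper's implementation):

```python
def best_stump(feature_values, labels, weights, thresholds):
    """Minimal sketch of AdaBoost's per-round feature selection: among all
    candidate (feature, threshold) pairs, return the one with the lowest
    weighted classification error.

    feature_values[f][i] is the value of feature f on sample i;
    labels[i] is +1 (face) or -1 (non-face); weights[i] sums to 1.
    """
    best = None
    for f, vals in enumerate(feature_values):
        for t in thresholds:
            # weighted error of the stump "predict +1 if value >= t"
            err = sum(w for v, y, w in zip(vals, labels, weights)
                      if (1 if v >= t else -1) != y)
            if best is None or err < best[0]:
                best = (err, f, t)
    return best  # (weighted error, feature index, threshold)
```

In the full algorithm this search is repeated for each boosting round, with sample weights re-weighted to emphasize the examples the previous stumps misclassified.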
By categorizing the subsections using Haar-like features, we can create integral images. The reason for the categorization is to eliminate unwanted sections of the image and shorten processing time. To compute the sum of the pixel values in the subsections, array references are used. A single-rectangle sub-window needs four array references, while two, three, and four adjacent rectangle sub-windows need six, eight, and nine references, respectively. For an image of size \(R \times C\), the integral image value \(ii\left( R,C \right)\) is produced in a single pass as the sum of the pixel values above and to the left of \(\left( R,C \right)\). Once the integral image representation ii of the original image I is computed, the sum of the original pixel values within any rectangle can be obtained with a constant number of array lookups. Therefore, as shown in Fig. 2, to compute the sum of pixel values in subsection S1, (r1, c1) is needed and is computed as mentioned below:
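The single-pass construction and the four-reference rectangle sum described above can be sketched as follows. This is a minimal pure-Python illustration assuming a list-of-lists image with 0-based inclusive coordinates; the function names are ours:

```python
def integral_image(img):
    """ii[r][c] = sum of img pixels above and to the left of (r, c),
    inclusive, built in a single pass with the recurrence
    ii[r][c] = img[r][c] + ii[r-1][c] + ii[r][c-1] - ii[r-1][c-1]."""
    R, C = len(img), len(img[0])
    ii = [[0] * C for _ in range(R)]
    for r in range(R):
        for c in range(C):
            ii[r][c] = (img[r][c]
                        + (ii[r - 1][c] if r else 0)
                        + (ii[r][c - 1] if c else 0)
                        - (ii[r - 1][c - 1] if r and c else 0))
    return ii

def rect_sum_ii(ii, r0, c0, r1, c1):
    """Sum of the rectangle spanning (r0, c0)..(r1, c1) inclusive, from at
    most FOUR array references, regardless of the rectangle's size."""
    total = ii[r1][c1]
    if r0:
        total -= ii[r0 - 1][c1]
    if c0:
        total -= ii[r1][c0 - 1]
    if r0 and c0:
        total += ii[r0 - 1][c0 - 1]
    return total
```

This constant-time rectangle sum is what makes evaluating thousands of Haar-like features per window affordable: the feature cost no longer depends on the rectangle's area, only on the handful of corner lookups.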