Research Article | Volume 4 Issue 2 (July-Dec, 2024) | Pages 1 - 6
Recognizing Approach Using Hand Geometry
Faculty of Biotechnology, Al-Qasim Green University, Babylon, Al Qasim, Iraq
Under a Creative Commons license
Open Access
Received
July 9, 2024
Revised
July 26, 2024
Accepted
Aug. 21, 2024
Published
Oct. 10, 2024
Abstract

Hand recognition is the process of identifying or verifying a person using their hands. The main challenge, and the motivation for this research, is choosing a reliable and accurate hand identification technique. Various machine learning algorithms have been used; deep learning, a subset of machine learning methods, can intelligently analyze biometric data. Two distinct models are proposed in this research for hand image identification. The first is based on AlexNet features and error-correcting output codes (ECOC) with a support vector machine: it proceeds through image acquisition, preprocessing, feature extraction using AlexNet, and classification using ECOC-SVM. The second builds a Hand Net model for the hand identification process, in which the layers of the network perform both feature extraction and classification. Both proposed techniques have been shown to be successful in the hand identification process.

INTRODUCTION

The human hand has sufficient anatomical features to serve as a personal identification device. Biometric technologies have been used to address security and identity systems around the world, and they play an important role in the networked society. Compared to other features, measuring hand geometry has advantages: hand geometry contains more information than a fingerprint and is more reliable and easier to record on low-resolution, low-cost devices for simple civil and commercial applications [1, 2]. Machine learning, in particular deep learning, is perhaps one of the hottest topics in artificial intelligence at the moment and has shown significant progress over some of its competitors [3]. To obtain more reliable and accurate systems, machine learning techniques have been applied to biometrics and its application areas [4]. Thus, new powerful algorithms, such as deep learning algorithms, are excellent candidates for solving difficult biometric problems. In this research, hand recognition features are extracted and used as a biometric technique to analyze and determine human authenticity.

RELATED WORKS

This section reviews some previously proposed methods related to hand geometry recognition. A peg-based imaging scheme was used to obtain 16 features, including the lengths and widths of the fingers, the aspect ratio of the palm to the fingers, and the thickness of the hand [5]. Information fusion was applied at the feature extraction stage, with a Bayesian back-propagation neural network reporting the matching scores; the experimental results of that study revealed a false acceptance rate (FAR) of 10% and a false rejection rate (FRR) of 0%. Support Vector Machines (SVM) and k-Nearest Neighbor (k-NN) classifiers were proposed in [6], and the results indicate that this strategy has lower computational cost and better accuracy in human identification [7]. A new hybrid model for a biometric system was proposed in which the middle finger knuckle print serves as an additional significant signature; recognition performance using Euclidean distance was about 97 percent [8]. An averaged convolutional neural network was developed to classify hand gestures, trained and tested on 3750 fixed images of hand gestures that differ in rotation, illumination, and noise; the model reached 99.73 percent accuracy [9]. A back-propagation neural network with different training methods was also proposed, employing a morphological method to extract features; the experiments used a collection of 500 images (50 subjects, 10 images each), and the proposed approach achieved approximately 96.41 percent accuracy [10]. Finally, using the Hong Kong University of Science and Technology's hand image database, analysis of variance was employed to measure the discriminative power of each feature, with the hand images of the first 100 subjects used in the experiments.
The most discriminative eigenvector features were retained, and Euclidean distance was employed to evaluate recognition performance. According to the experimental data, the average recognition rate is 91.7% with three eigenvector features and 94.2% with six [11].

METHODOLOGY

This study aims to propose an automatic hand identification system that will facilitate and increase the flexibility of security applications that use hand images to identify individuals. The extraction of useful features and information is a critical step in a variety of pattern recognition and computer vision tasks. In this study, the major stages of hand recognition are feature extraction and classification, and two techniques are proposed for them, both based on machine learning algorithms. The first technique uses the AlexNet network structure to train on the data and extract features from it, with ECOC combined with SVM as a strong classifier; the second, called Hand Net, relies on building a CNN model used for the whole hand recognition process. The proposed hand identification system consists of four main stages: image acquisition, preprocessing, feature extraction, and classification.

 

  1. Image Acquisition Stage

Image acquisition is the process of obtaining an image from a hardware source such as a digital camera, mobile phone, or handheld capture device. In this research, two distinct datasets were collected, each with its own requirements; the samples were taken under a variety of conditions in terms of lighting, resolution, background, capture device, and number of samples acquired. The first set consists of 10 categories of 100 images each, captured with a mobile camera at different dimensions and combined to a single pixel size, while the second set consists of 50 categories of 20 images each, captured with a digital camera. Images were taken with all the fingers of the right hand held close together, without pegs constraining the hand's natural pose. To improve the accuracy of the system, the hand angle should not deviate by more than 30 degrees, and high-quality right-hand images were stored in the database. Figure 1 shows some sample images of the right hand.

Figure 1: Sample Image of Right Hand.


 

  2. Image Pre-processing

Image preprocessing is needed so that every image is presented at the same height and width, allowing deep features to be extracted from every kind of hand image. Each image is first rotated by at most 30 degrees to normalize its orientation, then resized to 227 × 227 (or 225 × 225) pixels, a suitable input size for the CNN model. The resized image is then enhanced with histogram equalization to increase its contrast; this equalization is an important step that transforms the values of the intensity image.
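The resizing and histogram-equalization steps can be sketched as follows. This is a minimal NumPy version for illustration; the paper does not specify the actual implementation, and the nearest-neighbour resize is an assumption.

```python
import numpy as np

def equalize_histogram(img):
    """Histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    # Stretch the CDF so intensities spread over the full 0-255 range
    cdf_min = cdf[cdf > 0][0]
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

def resize_nearest(img, size=(227, 227)):
    """Nearest-neighbour resize to the CNN's expected input size."""
    h, w = img.shape
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    return img[rows[:, None], cols]

# Example: a synthetic grayscale hand image standing in for a real capture
img = (np.arange(100 * 80) % 256).reshape(100, 80).astype(np.uint8)
out = resize_nearest(equalize_histogram(img))
```

After equalization the intensity range spans the full 0–255 scale, and the resized output matches the 227 × 227 input size mentioned above.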

  3. Feature Extraction

Feature extraction assists in extracting the most useful features from the data; two methods are utilized to efficiently reduce the amount of data, as detailed below.

 

  3.1. The AlexNet Features

Rather than using the pre-trained AlexNet model directly as a classifier, the activations of the trained AlexNet model were used to extract attributes from the training data. The features are taken from the network's fully connected layer FC7, whose output has one value per node; this layer yields 4096 extracted traits. Through experiments on the training data, the best 100 of these traits were identified and selected for use in the classification problem.

 

  3.2. The Proposed Hand Net Features

Feature extraction tries to extract the characteristics that aid in the identification of individuals; the retrieved features serve as the starting point for the classification process. In this paper, the feature extraction part of the proposed Hand Net model consists of four successive convolutional blocks. Each block comprises three layers: a convolutional layer with the Rectified Linear Unit (ReLU) activation function, a max pooling layer, and a dropout layer.
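A minimal PyTorch sketch of the described block structure follows. The channel widths, kernel size, and 0.25 dropout rate are illustrative assumptions, since the text does not specify them; only the block layout (conv + ReLU, max pooling, dropout, repeated four times) comes from the paper.

```python
import torch
from torch import nn

def conv_block(c_in, c_out):
    # One Hand Net block: convolution + ReLU, max pooling, dropout
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Dropout(0.25),  # rate is an assumption, not from the paper
    )

class HandNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # Four successive conv blocks; widths are illustrative guesses
        self.blocks = nn.Sequential(
            conv_block(3, 16),
            conv_block(16, 32),
            conv_block(32, 64),
            conv_block(64, 128),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(num_classes),  # SoftMax follows at classification time
        )

    def forward(self, x):
        return self.head(self.blocks(x))

model = HandNet(num_classes=10).eval()
with torch.no_grad():
    out = model(torch.randn(2, 3, 227, 227))   # two preprocessed hand images
# out has shape (2, 10): one score per subject class
```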

 

  4. Classification

Classification can be accomplished with machine learning algorithms such as support vector machines, neural networks, logistic regression, etc. In this paper, classification is applied in two ways, according to the two proposed models.

 

  4.1. AlexNet Classification Using SVM-ECOC

The SVM algorithm addresses the binary classification problem, where there are only two target classes (e.g., yes/no, 0/1, black/white, cat/dog). In practice, however, many problems involve multiple classes, and since the support vector machine can only handle binary problems directly, it must be extended. The Error-Correcting Output Codes (ECOC) algorithm is a powerful method for multi-class classification; combined with the SVM method, it can significantly increase classification accuracy on the extracted features. SVM can also be adapted to multi-class problems through one-versus-rest and one-versus-one schemes, which divide the multi-class problem into a fixed number of binary classification problems. In contrast to one-versus-one, the ECOC technique encodes each class as a binary code word; this redundant representation allows the additional models to act as "error-correcting" predictors, which can lead to better predictive performance.
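The ECOC-SVM combination described above can be sketched with scikit-learn's `OutputCodeClassifier`. The iris data, linear kernel, and `code_size` value are illustrative stand-ins for the extracted hand features and the paper's actual configuration.

```python
from sklearn.datasets import load_iris
from sklearn.multiclass import OutputCodeClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)  # stand-in for the extracted hand features

# code_size > 1 creates redundant binary SVM problems; the redundancy in the
# code words is what gives the "error-correcting" effect described above.
ecoc_svm = OutputCodeClassifier(SVC(kernel="linear"), code_size=4,
                                random_state=0)
ecoc_svm.fit(X, y)
acc = ecoc_svm.score(X, y)
```

Each class receives a binary code word; a test sample is assigned to the class whose code word is closest to the vector of binary SVM outputs.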

 

  4.2. Hand Net Classification Using the SoftMax Classifier

SoftMax is the final function in the network for multi-class classification. The activation function turns the class scores into probabilities: the score of each class is first computed with the usual affine equation x = w·a + b, and each exponentiated score is then divided by the sum over all classes, as in equation (1).

P(y = i | x) = exp(x_i) / Σ_j exp(x_j)        … (1)
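Equation (1) can be checked numerically. The class scores below are illustrative values, not outputs of the Hand Net model.

```python
import numpy as np

def softmax(x):
    # Subtracting the max is for numerical stability only;
    # it does not change the result of equation (1).
    e = np.exp(x - np.max(x))
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])   # x = w*a + b for each class (illustrative)
probs = softmax(scores)
```

The outputs are positive, sum to one, and the largest score receives the largest probability, which is why SoftMax serves as the final classification layer.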

RESULTS AND DISCUSSION
  1. Dataset Description

Three datasets were used in the experiments. Two are distinct datasets collected specifically for this work, each with its own conditions and variables concerning lighting, background, image size, the device used, and the number of people and samples taken per person. The third is a standard database developed by a group of researchers for determining biometric parameters from hand images [16]. Of the collected datasets, the first contains ten people with 100 photos each, and the second contains 50 people with twenty photos each. All datasets contain images at different positions and different sizes, within an angle of 30 degrees. The subjects are females and males between the ages of 22 and 65 in general, with pictures in JPG format collected in a controlled environment.

 

  2. Experiment Results

The results obtained from the two proposed models are organized into experimental results for the AlexNet model and the Hand Net model. Different evaluation criteria are adopted: accuracy, F-score, precision, true positive rate (TPR), and false positive rate (FPR). In all experiments, the data is divided into 70% for training and 30% for testing.
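The evaluation criteria listed above can be computed as follows. The labels are illustrative, not taken from the paper's experiments, and the TPR/FPR macro-averaging is one common convention (the paper does not state which it uses).

```python
import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score)

# Illustrative true and predicted labels standing in for one test split
y_true = np.array([0, 0, 1, 1, 1, 2, 2, 2, 2, 0])
y_pred = np.array([0, 0, 1, 1, 2, 2, 2, 2, 1, 0])

acc = accuracy_score(y_true, y_pred)
prec = precision_score(y_true, y_pred, average="macro")
f1 = f1_score(y_true, y_pred, average="macro")

# Per-class TPR and FPR from the confusion matrix, macro-averaged
cm = confusion_matrix(y_true, y_pred)
tpr = np.mean(np.diag(cm) / cm.sum(axis=1))
fpr = np.mean((cm.sum(axis=0) - np.diag(cm)) / (cm.sum() - cm.sum(axis=1)))
```

The same five numbers are what the tables below report for each dataset.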

  2.1. Experiment Results for Hand Recognition Using AlexNet Features and ECOC-SVM

The findings for the 70% training and 30% testing split of the images in the datasets are presented in Table 1.


 

 

Table 1: Recognition performance of the AlexNet features from FC7, measured with different metrics, using 70% training and 30% testing.

Datasets             Accuracy   F-score   Precision   TPR       FPR
Property Dataset 1   99.6 %     97.32 %   97.24 %     97.40 %   0.2 %
Property Dataset 2   99.25 %    97.11 %   97.12 %     97.21 %   0.3 %
Hand Dataset [16]    90.5 %     89.53 %   89.27 %     92.54 %   0.4 %

 


 

As shown in the previous table, Property Dataset 1 recorded the lowest error rate, 0.2%, indicating that the classification method was quite accurate during training and recognition. The results on the other datasets were also robust, despite the scarcity of data used to train them. Figure (2) shows the results for each criterion on each dataset.

Figure 2: Performance obtained by dividing the data into 70% for training and 30% for testing, according to the system performance criteria.

As Figure (2) shows, the datasets perform well despite the many differences between them in data size, image resolution, and lighting.

 

  2.2. Hand Recognition Experiments Using the Hand Net Model

As with the previous model, this approach was applied to the three datasets, splitting the data into 70% training and 30% testing, with six epochs and a learning rate of 10⁻⁵. The results of the Hand Net model fall into three categories:

  • Property Dataset 1

In this case there are 700 training images and 300 testing images, with 10 classes (one per subject). The 700 training images are further separated into 90% training and 10% validation (600 images for training, 100 images for validation). According to the results, the obtained recognition rate is 98.97% accuracy; the loss rate and other performance measures for the hand images are shown in Figures (3, 4). In addition, Table (2) presents additional criteria for evaluating the quality of the results.

 

Figure 3: Accuracy and loss rate for Property Dataset 1.

Figure (4): Flowchart of evaluation of Property Dataset 1.

Table 2: System performance on Property Dataset 1 with the data divided into 70% training and 30% testing.

Dataset              Accuracy   F-score   Precision   TPR      FPR
Property Dataset 1   98.97%     98.69%    98.97 %     99.06%   0.2 %

  • Property Dataset 2

In this case there are 50 classes, with 700 images in training and 300 images in testing. The training images are divided into 80% training images and 20% validation images (500 training images and 200 validation images). According to the results, the obtained recognition rate is 90% accuracy; the loss rate and other performance measures for the hand images are shown in Figures (5, 6). In addition, Table (3) presents additional criteria for evaluating the quality of the results.

Figure 5: Accuracy and loss rate for Property Dataset 2.

Figure (6): Flowchart of evaluation of Property Dataset 2.

Table 3: System performance on Property Dataset 2 with the data divided into 70% training and 30% testing.

Dataset              Accuracy   F-score   Precision   TPR     FPR
Property Dataset 2   90%        99%       91.5%       9.5%    0.19%

 

 


 

 

 

  • Hand dataset

In this case there are 50 classes, with 700 training images and 300 testing images. The training images are divided into 90% training images and 10% validation images (600 training images and 100 validation images). According to the results, the obtained recognition rate is 98% accuracy; the loss rate and other performance measures for the hand images are shown in Figures (7, 8). In addition, Table (4) presents additional criteria for evaluating the quality of the results.

Figure 7: The accuracy and loss rate of hand dataset.

Figure (8): Flowchart of evaluation of the Hand dataset.

 

Table 4: System performance on the Hand dataset with the data divided into 70% training and 30% testing.

Dataset        Accuracy   F-score   Precision   TPR    FPR
Hand Dataset   97.7%      96.8%     98 %        97%    0.05%

 

CONCLUSION AND FUTURE WORK

In this paper, two methods for hand recognition are proposed: the first is based on AlexNet features with ECOC+SVM for classification, and the second builds a hand recognition model called Hand Net. The experiments were conducted mainly on datasets collected by the authors, because publicly available human hand datasets are limited. One of the first conclusions is that the technique is suitable for working with small and medium datasets: the SVM classifier, with the least amount of training, is able to correctly predict the identity of the hand, while the convolutional neural network remains a popular deep learning network for visual identification tasks. Like all deep learning techniques, CNNs depend heavily on the quantity and quality of the training data.

 

Conflict of Interest: The authors declare that they have no conflict of interest

 

Funding: No funding sources

 

Ethical approval: The study was approved by the Al-Qasim Green University, Babylon, Al Qasim, Iraq.

REFERENCES
  1. Iqbal, M., and B. Qadir. "Biometrics Technology - Attitudes & Influencing Factors When Trying to Adopt This Technology in Blekinge Healthcare." April, 2012.

  2. Pato, J. N., L. I. Millett, and W. Biometrics. "Optometric Recognition." Vol. 4, no. 8, 1927.

  3. Liu, Y., W. Sun, and L. J. Durlofsky. "A Deep-Learning-Based Geological Parameterization for History Matching Complex Models." Mathematical Geosciences, vol. 51, no. 6, 2019, pp. 725-766. doi: 10.1007/s11004-019-09794-9.

  4. Almabdy, S., and L. Elrefaei. "Deep Convolutional Neural Network-Based Approaches for Face Recognition." Applied Sciences, vol. 9, no. 20, 2019, doi: 10.3390/app9204397.

  5. Jain, A. K., A. Ross, and S. Pankanti. "A Prototype Hand Geometry Based Verification System." Proceedings of the 2nd International Conference on Audio- and Video-Based Biometric Person Authentication, 1999, pp. 166-171.

  6. Al-nima, R. R. "Design a Biometric Identification System Based on the Fusion of Hand Geometry and Backhand Patterns." 2010, pp. 169-180.

  7. Sánchez-Ávila, D.-S.-S., G. B. del Pozo, and J. Guerra-Casanova. "Unconstrained and Contactless Hand Geometry Biometrics." Sensors, vol. 11, no. 11, 2011, pp. 10143-10164. doi: 10.3390/s111110143.

  8. Mathivanan, B., V. Palanisamy, and S. Selvarajan. "A Hybrid Model for Human Recognition System Using Hand Dorsum Geometry and Finger-Knuckle-Print." Journal of Computer Science, vol. 8, no. 11, 2012, pp. 1814-1821. doi: 10.3844/jcssp.2012.1814.1821.

  9. Kika, A., and A. Koni. "Hand Gesture Recognition Using Convolutional Neural Network and Histogram of Oriented Gradients Features." CEUR Workshop Proceedings, vol. 2280, 2018, pp. 75-79.

  10. Shawkat, S. A., K. S. L. Al-badri, and A. I. Turki. "The New Hand Geometry System and Automatic Identification." Periodicals of Engineering and Natural Sciences, vol. 7, no. 3, 2019, pp. 996-1008. doi: 10.21533/pen.v7i3.632.

  11. Lockett, A. J. "Performance Analysis." Nature Computational Series, 2020, pp. 239-262. doi: 10.1007/978-3-662-62007-6_10.

  12. Mohammed, A. F., Nahi, H. A., Mosa, A. M., and Kadhim, I. "Secure E-Healthcare System Based on Biometric Approach." Data and Metadata, vol. 2, 2023, p. 56.

  13. Mohammed, A. F., Hashim, S. M., and Jebur, I. K. "The Diagnosis of COVID-19 in CT Images Using Hybrid Machine Learning Approaches (CNN & SVM)." Periodicals of Engineering and Natural Sciences (PEN), vol. 10, no. 2, 2022, pp. 376-387.

  14. Hashim, S. M., H. A. Nahi, and A. F. Mohammed. "Steganalysis of JPEG Images Based on 2D-Gabor Filters and Feature Dimensionality Reduction." 3rd Information Technology To Enhance E-learning and Other Applications (IT-ELA), 2022, pp. 18-23. doi: 10.1109/IT-ELA57378.2022.10107942.

  15. Mohammed, Amal Fadhil, Akmam Majed Mosa, and Jebur, I. K. "Analysis of the Effect of Social Media on University Education Through COVID-19 Pandemic Using the Naive Bayes Algorithm." LC International Journal of STEM, vol. 3, no. 3, 2022, pp. 1-11.

  16. "Hands-and-Palm-Images-Dataset." Kaggle, https://www.kaggle.com/shyambhu/hands-and-palm-images-dataset. Accessed 1 Dec. 2024.
