Abstract

Sign language is a combination of complex hand movements, body postures, and facial expressions. However, only a limited number of people can understand and use it. To reduce this burden, a computer-aided sign language recognition system for fingerspelling, based on a convolutional neural network (CNN), is proposed. We compared two CNN architectures, ResNet-50 and DenseNet-121, for classifying an American Sign Language dataset, and also tested several data-splitting proportions. The experimental results show that the ResNet-50 architecture with an 80:20 split for training and testing achieves the best performance, with an accuracy of 0.999913, sensitivity of 0.998966, precision of 0.998958, specificity of 0.999955, F1-score of 0.999913, and error of 0.0000898.
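
For illustration, a minimal sketch of such a comparison is given below, assuming a PyTorch/torchvision setup with ImageNet-pretrained ResNet-50 and DenseNet-121 backbones, an ImageFolder-style dataset directory, and a 29-class fingerspelling label set; the dataset path, class count, and hyperparameters are assumptions for illustration only, not details taken from this work.

```python
# Hypothetical sketch: comparing ResNet-50 and DenseNet-121 on an ASL image
# dataset with an 80:20 train/test split. Paths, class count, and
# hyperparameters are assumptions, not taken from the paper.
import torch.nn as nn
from torch.utils.data import random_split, DataLoader
from torchvision import datasets, transforms, models

NUM_CLASSES = 29  # e.g. 26 letters plus extra tokens (assumed)

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# ImageFolder expects one sub-directory per class label (path is hypothetical).
dataset = datasets.ImageFolder("asl_alphabet", transform=transform)

# 80:20 split for training and testing, as reported in the abstract.
n_train = int(0.8 * len(dataset))
train_set, test_set = random_split(dataset, [n_train, len(dataset) - n_train])
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)
test_loader = DataLoader(test_set, batch_size=64)

def build(name: str) -> nn.Module:
    """Return a pretrained backbone with its classifier head replaced."""
    if name == "resnet50":
        model = models.resnet50(weights="IMAGENET1K_V1")
        model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
    else:
        model = models.densenet121(weights="IMAGENET1K_V1")
        model.classifier = nn.Linear(model.classifier.in_features, NUM_CLASSES)
    return model

for name in ("resnet50", "densenet121"):
    model = build(name)
    # ... train (e.g. cross-entropy loss with Adam), then evaluate accuracy,
    # sensitivity, precision, specificity, and F1-score on test_loader.
```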