Video Anomaly Classification Using Convolutional Neural Network
Abstract
The use of surveillance videos is increasingly popular in city monitoring systems. However, the analysis of surveillance footage still generally relies on conventional methods, in which trained personnel must continuously monitor and review video to identify abnormal events. As a result, the conventional approach is time-consuming, resource-intensive, and costly. A system that automatically detects video anomalies is therefore needed to reduce the heavy reliance on human resources for video monitoring. This research employs deep learning to classify anomalies in videos. The detection process first converts each video into images by extracting its individual frames; a Convolutional Neural Network (CNN) model is then used to classify anomalous events within the video. Testing with the DenseNet121 and EfficientNetV2 architectures yielded accuracies of 99.89% and 98.24%, respectively. These results indicate that DenseNet121 outperforms EfficientNetV2 on this task.
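A minimal sketch of the described pipeline is given below, assuming OpenCV for frame extraction and a Keras DenseNet121 backbone with an added classification head; the video path, frame sampling step, and number of anomaly classes are illustrative assumptions rather than details taken from the paper.

```python
# Sketch: extract frames from a video with OpenCV, then classify each frame
# with a DenseNet121-based CNN in Keras. Paths and hyperparameters are assumed.
import cv2
import numpy as np
from tensorflow.keras.applications import DenseNet121
from tensorflow.keras.applications.densenet import preprocess_input
from tensorflow.keras import layers, models

NUM_CLASSES = 14          # assumed number of anomaly classes
IMG_SIZE = (224, 224)     # DenseNet121 default input size

def extract_frames(video_path, step=30):
    """Read a video and return every `step`-th frame resized for the CNN."""
    cap = cv2.VideoCapture(video_path)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            frames.append(cv2.resize(frame, IMG_SIZE))
        idx += 1
    cap.release()
    return np.array(frames, dtype=np.float32)

# DenseNet121 backbone pretrained on ImageNet, with a new classification head.
base = DenseNet121(include_top=False, weights="imagenet",
                   input_shape=(*IMG_SIZE, 3), pooling="avg")
model = models.Sequential([
    base,
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

# Example inference on one video (the path is hypothetical).
frames = preprocess_input(extract_frames("surveillance_clip.mp4"))
probs = model.predict(frames)   # per-frame class probabilities
print(probs.argmax(axis=1))     # predicted anomaly class per frame
```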