Optimized Image-Based Event Detection via K-Fold Learning and Minimum-Boundary Deep CNNs
DOI: https://doi.org/10.64252/yegcn228

Keywords: Convolutional Neural Networks, Deep Learning, Event Detection, Image Analysis, Minimum Boundary Hyperplane, Pattern Recognition

Abstract
Detecting events from real-world photographs and videos is a significant and intricate endeavour. Historically, forecasts about traffic, population dynamics, and environmental hazards have depended mostly on sensor data rather than on direct image analysis. Images are first transformed into binary data derived from photographs or video sources. Despite extensive research on event identification, current methodologies often suffer accuracy losses stemming from unreliable social-network data, imbalanced datasets, restricted application scope, and inadequate semantic feature extraction. This paper presents a Minimum Boundary Deep Convolutional Neural Network (MB-deep CNN) to overcome these limitations. The model improves performance by extracting information from image intensity, directional patterns, gradient patterns, and spatial properties, enabling efficient pixel-level translation. The MB-deep CNN segments and recognises events directly from visual inputs, improving accuracy and surpassing the limitations of prior approaches.
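The feature families named in the abstract — intensity, gradient patterns, and directional patterns — can be illustrated with a minimal sketch. This is a hypothetical NumPy example of how such per-pixel feature maps might be computed from a grayscale image; it is not the paper's actual implementation, and the function name `extract_features` is an assumption for illustration only.

```python
import numpy as np

def extract_features(image):
    """Hypothetical sketch: derive intensity, gradient-magnitude, and
    directional (edge-orientation) maps from a grayscale image, the
    kinds of features the abstract describes feeding the MB-deep CNN."""
    img = image.astype(np.float64)
    intensity = img / 255.0                  # normalised pixel intensity
    gy, gx = np.gradient(img)                # finite-difference gradients (rows, cols)
    magnitude = np.hypot(gx, gy)             # gradient pattern (edge strength)
    direction = np.arctan2(gy, gx)           # directional pattern (edge orientation)
    # Stack into an H x W x 3 feature tensor, one channel per feature family
    return np.stack([intensity, magnitude, direction], axis=-1)

# Example: a synthetic 8x8 image with a single vertical edge
img = np.zeros((8, 8))
img[:, 4:] = 255.0
features = extract_features(img)
print(features.shape)  # (8, 8, 3)
```

In practice such maps would be stacked as input channels (or learned implicitly by early convolutional layers) before segmentation and event classification.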