Innovative methods of automotive crash detection through audio recognition using neural network algorithms
DOI:
https://doi.org/10.20535/2411-1031.2024.12.2.317938
Keywords:
artificial intelligence, convolutional neural networks, audio signal processing
Abstract
The automatic eCall system has been mandatory in the European Union since 2018. This requirement means that all new passenger vehicles released on the European market after this date must be equipped with a digital emergency response service that automatically notifies emergency services in case of an accident through the Automatic Crash Notification (ACN) system. Since the response of emergency services (police, ambulance, etc.) to such calls is extremely expensive, the task arises of improving the accuracy of such reports by verifying that an accident has actually occurred.

Nowadays, most car manufacturers detect an emergency by analyzing the information coming from built-in accelerometer sensors. As a result, sudden braking that avoids an accident is quite often mistakenly identified as an emergency and leads to a false call to emergency services. Some car manufacturers equip their high-end vehicles with automatic collision notification, which mainly monitors airbag deployment in order to detect a severe collision and calls for assistance through embedded cellular radios. To reduce costs, some third-party solutions offer the installation of under-the-hood boxes, windscreen boxes and/or OBD-II dongles with an embedded acceleration sensor, a third-party SIM card, and a proprietary algorithm to detect bumps. Nevertheless, relying on acceleration data may lead to false predictions: road bumps, potholes, and poor road conditions trigger false positives, whereas collisions from behind while the vehicle is standing still may be classified as normal acceleration. Acceleration data is also not suitable for identifying vehicle side impacts. In many cases emergency braking helps to avoid a collision, yet the acceleration data would be very similar to the data observed in an actual accident, leading to the conclusion that a crash has occurred. As a result, the average accuracy of today's car crash detection algorithms remains limited: acceptable, but with considerable room for further improvement, since each additional percent of accuracy would provide substantial cost savings. That is why the task of increasing the accuracy of collision detection remains urgent.

In this article, we describe an innovative approach to the recognition of car accidents based on convolutional neural networks that classify soundtracks recorded inside the car when road accidents occur, under the assumption that every crash produces a sound. The soundtrack inside the car can be recorded with built-in microphones as well as with the driver's smartphone, hands-free car kits, or dash cameras, which would drastically reduce the cost of the hardware required for this task. Moreover, modern smartphones are equipped with accelerometers, which can serve as a trigger for starting the analysis of the soundtrack by the neural network, thus saving the smartphone's computing resources. The accuracy of crash detection can be further improved by using multiple sound sources: modern automobiles may be equipped with various devices capable of recording the audio inside the car, namely the built-in microphone of the hands-free speaking system, the mobile phones of the driver and/or passengers, dash-cam recording devices, smart rear-view mirrors, etc.
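As an illustration of how such a pipeline could be assembled (a minimal sketch under stated assumptions, not the implementation described in the article), the following Python/PyTorch fragment converts a short in-cabin audio clip into a log-mel spectrogram and classifies it with a small convolutional network, running the classifier only when an accelerometer reading exceeds a trigger threshold. The names and values (CrashCNN, classify_clip, ACCEL_TRIGGER_G, the 2-second window, the layer sizes) are illustrative assumptions, not taken from the article.

import torch
import torch.nn as nn
import torchaudio

SAMPLE_RATE = 16_000       # assumed microphone sampling rate
CLIP_SECONDS = 2           # assumed analysis window around the trigger event
ACCEL_TRIGGER_G = 2.5      # assumed acceleration threshold (in g) that starts audio analysis

# Log-mel spectrogram front end: raw waveform -> 2-D time-frequency image for the CNN.
mel = torchaudio.transforms.MelSpectrogram(sample_rate=SAMPLE_RATE, n_fft=1024,
                                           hop_length=256, n_mels=64)
to_db = torchaudio.transforms.AmplitudeToDB()

class CrashCNN(nn.Module):
    """Small 2-D CNN over log-mel spectrograms with two output classes: crash / no crash."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                     # global pooling -> fixed-size vector
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

def classify_clip(model: CrashCNN, waveform: torch.Tensor) -> bool:
    """Return True if the clip (shape: 1 x samples) is classified as a crash."""
    spec = to_db(mel(waveform)).unsqueeze(0)             # -> (batch=1, channel=1, mels, frames)
    with torch.no_grad():
        probs = model(spec).softmax(dim=-1)
    return bool(probs[0, 1] > probs[0, 0])

if __name__ == "__main__":
    model = CrashCNN().eval()                            # untrained; for structural illustration only
    accel_magnitude_g = 3.1                              # hypothetical peak accelerometer reading
    if accel_magnitude_g >= ACCEL_TRIGGER_G:             # run the CNN only after the trigger fires
        clip = torch.randn(1, SAMPLE_RATE * CLIP_SECONDS)  # stand-in for the recorded audio clip
        print("crash detected" if classify_clip(model, clip) else "no crash")

When several recording devices are available (hands-free microphone, passengers' smartphones, dash cam), the per-device class probabilities could simply be averaged before the final decision, which is one straightforward way to exploit the multiple sound sources mentioned above.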
References
S. Sumit, “A Comprehensive Guide to Convolutional Neural Networks ‒ the ELI5 way”, Towards Data Science, 2018. [Online]. Available: https://towardsdatascience.com/a-comprehensive-guide-to-convolutional-neural-networks-the-eli5-way-3bd2b1164a53. Accessed on: Nov. 19, 2024.
P. Lyashchynsky, and P. Lyashchynsky, “Automated synthesis of convolutional neural network structures”, Problèmes et perspectives d'introduction de la recherche scientifique innovante, vol. 2, pp. 114-116, 2019. doi: http://dx.doi.org/10.36074/29.11.2019.v2.
A.O. Dashkevich, “Research of multilayer neural networks for automatic feature extraction in solving the problem of pattern recognition”, Scientific Bulletin of TDATU, vol. 2, no. 6, pp. 134-139, 2019. [Online]. Available: https://nauka.tsatu.edu.ua/e-journals-tdatu/pdf6t2/20.pdf. Accessed on: Nov. 27, 2024.
O. Gencoglu, T. Virtanen, and H. Huttunen, “Recognition of acoustic events using deep neural networks”, in Proc. 22nd European Signal Processing Conference (EUSIPCO), Lisbon, 2014. [Online]. Available: https://homepages.tuni.fi/tuomas.virtanen/papers/dnn_eusipco2014.pdf. Accessed on: May 19, 2014.
S. Abdoli, P. Cardinal, and A. Koerich, “End-to-end environmental sound classification using a 1D convolutional neural network”, Expert Systems with Applications, vol. 136, pp. 252-263, Dec. 2019. doi: http://dx.doi.org/10.1016/j.eswa.2019.06.040.
M. Paciorek, A. Klusek, and P. Wawryka, “Effective Car Collision Detection with Mobile Phone Only”, in Proc. International Conference on Computational Science, Krakow, 2021, pp. 303-317. doi: http://dx.doi.org/10.1007/978-3-030-77980-1_24.
M. Sammarco, and M. Detyniecki, “Car Accident Detection and Reconstruction Through Sound Analysis with Crashzam”, in Proc. International Conference on Smart Cities and Green ICT Systems, Poland, 2019, pp. 159-180. doi: http://dx.doi.org/10.1007/978-3-030-26633-2_8.
L. Dobulyak, D. Ferbey, and S. Kostenko, “The use of deep learning in environmental sound classification tasks”, Scientific Bulletin of Uzhhorod University. Series “Mathematics and Informatics”, no. 41(2), pp. 118-127, 2022. doi: https://doi.org/10.24144/2616-7700.2022.41(2).118-127.
V.Y. Kutkovetsky, Pattern Recognition. Mykolaiv, Ukraine: Petro Mohyla National University, 2017.
S.O. Subbotin, Neural networks: theory and practice: study guide. Zhytomyr, Ukraine: O.O. Yevenok Publ., 2020.
M.A. Novotarsky, and B.B. Nesterenko, Artificial neural networks: computation. Kyiv, Ukraine: Institute of Mathematics of National Academy of Sciences of Ukraine, 2004.
A. Francl, and J. McDermott, “Deep neural network models of sound localization reveal how perception is adapted to real-world environments”, Nature Human Behaviour, vol. 6, pp. 111-133, 2022. doi: https://doi.org/10.1038/s41562-021-01244-z.
A. Podda, R. Balia, L. Pompianu, S. Carta, G. Fenu, and R. Saia, “CARgram: CNN-based accident recognition from road sounds through intensity-projected spectrogram analysis”, Digital Signal Processing, vol. 147, pp. 1-10, 2024. doi: https://doi.org/10.1016/j.dsp.2024.104431.
Road traffic injuries report, World Health Organization, 2021. [Online]. Available: https://www.who.int/health-topics/road-safety/children-and-young-people#tab=tab_1. Accessed on: Nov. 19, 2024.
J. Salamon, C. Jacoby, and J. Bello, “A Dataset and Taxonomy for Urban Sound”, in Proc. 22nd ACM International Conference on Multimedia (MM '14), Orlando, FL, USA, 2014, pp. 1041-1044. doi: http://dx.doi.org/10.1145/2647868.2655045.
H. Champion et al., “Automatic crash notification and the urgency algorithm: its history, value, and use”, Topics in Emergency Medicine, vol. 26, no. 2, pp. 143-156, 2004.
S. Haria, S. Anchaliya, V. Gala, and T. Maru, “Car crash prevention and detection system using sensors and smart poles”, in Proc. 2nd International Conference on Intelligent Computing and Control Systems (ICICCS), IEEE, Madurai, 2018, pp. 800-804. doi: http://dx.doi.org/10.1109/ICCONS.2018.8663017.
M. Huang, M. Wang, X. Liu, R. Kan, and H. Qiu, “Environmental Sound Classification Framework Based on L-mHP Features and SE-ResNet50 Network Model”, Symmetry, vol. 15, no. 5, pp. 1045-1052, 2023. doi: https://doi.org/10.3390/sym15051045.
S. Rovetta, Z. Mnasri, and F. Masulli, “Detection of hazardous road events from audio streams: an ensemble outlier detection approach”, in Proc. IEEE Conference on Evolving and Adaptive Intelligent Systems (EAIS), Bari, 2020, pp. 1-6. doi: https://doi.org/10.1109/EAIS48028.2020.9122704.
Y. Arslan, and H. Canbolat, “Performance of deep neural networks in audio surveillance”, in Proc. 6th IEEE International Conference on Control Engineering & Information Technology (CEIT), Istanbul, 2018, pp. 1-5. doi: https://doi.org/10.1109/CEIT.2018.8751822.
X. Zhang, Y. Chen, M. Liu, and C. Huang, “Acoustic traffic event detection in long tunnels using fast binary spectral features”, Circuits, Systems, and Signal Processing, vol. 39, pp. 2994-3006, 2020. doi: https://doi.org/10.1007/s00034-019-01294-9.
B. Kumar, A. Basit, M. Kiruba, R. Giridharan, and S. Keerthana, “Road accident detection using machine learning”, in Proc. IEEE International Conference on System, Computation, Automation and Networking (ICSCAN), Puducherry, 2021, pp. 1-5. doi: https://doi.org/10.1109/ICSCAN53069.2021.9526546.
P. Foggia, A. Saggese, N. Strisciuglio, M. Vento, and V. Vigilante, “Detecting sounds of interest in roads with deep networks”, in Proc. International Conference on Image Analysis and Processing, Springer, 2019, pp. 583-592. doi: https://doi.org/10.1007/978-3-030-30645-8_53.
P. Foggia, N. Petkov, A. Saggese, N. Strisciuglio, and M. Vento, “Reliable detection of audio events in highly noisy environments”, Pattern Recognit Lett., vol. 65, pp. 22-28, 2015. doi: http://dx.doi.org/10.1016/j.patrec.2015.06.026.
License
Copyright (c) 2024 Collection "Information Technology and Security"
This work is licensed under a Creative Commons Attribution 4.0 International License.
The authors published in this collection agree to the following terms:
- The authors retain the right of authorship of their work and grant the collection the right of first publication, with the work licensed under the Creative Commons Attribution License, which allows others to freely distribute the published work with obligatory reference to the authors of the original work and to its first publication in this collection.
- The authors have the right to conclude a separate agreement on exclusive distribution of the work in the form in which it was published in this collection (for example, to deposit the work in an institutional digital repository or to publish it as part of a monograph), provided that reference is made to its first publication in this collection.
- The journal's policy allows and encourages authors to post the manuscript of the work on the Internet (for example, in institutional repositories or on personal websites), both before submission of the manuscript to the editorial office and during its editorial processing, as this contributes to productive scientific discussion and has a positive effect on the efficiency and dynamics of citation of the published work (see The Effect of Open Access).