Use of large language models to identify fake information

Authors

  • Dmytro Lande, Institute of Special Communication and Information Protection of the National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”, Kyiv, Ukraine, https://orcid.org/0000-0003-3945-1178
  • Vira Hyrda, Institute of Special Communication and Information Protection of the National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”, Kyiv, Ukraine, https://orcid.org/0000-0002-3858-4086

DOI:

https://doi.org/10.20535/2411-1031.2024.12.2.315743

Keywords:

cybersecurity, generative language models, ChatGPT, information classification, artificial intelligence

Abstract

In recent years, the field of artificial intelligence has undergone a genuine revolution with the emergence of large language models (LLMs) such as GPT-4, Llama-3, and Gemini, which have been successfully applied to a wide range of tasks, from text generation to data analysis. This article examines how such models can be used to detect fake information in the context of cybersecurity, using the ChatGPT chatbot as a case study. A swarm of virtual experts was created on top of a large language model; it generated informational messages on cybersecurity topics (both fake and truthful) and assessed each of them as either “fake” or “true.” For analysis, a semantic network was constructed and then visualized with Gephi. Two datasets of messages were analyzed: one created by human experts and the other by artificial experts. Each assessment was converted into numerical form, and the Hamming distance between the resulting vectors was used to validate the results and measure the rate of agreement between assessments. Building the semantic network revealed key concepts in the field of cybersecurity and the relationships between them. The swarm of artificial experts generated a dataset of messages with fake and truthful content, which was assessed both by the artificial experts themselves and by a human expert. Analysis of the Hamming distance between these assessments showed that artificial intelligence has potential for detecting fake information; at this stage, however, its performance still requires human oversight and correction.
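
To make the validation step concrete, the sketch below shows one straightforward way to encode the experts’ verdicts and compare them with the Hamming distance, as described above. This is a minimal illustration in Python, not code from the article; the function names and sample labels are hypothetical.

    def encode(labels):
        """Map 'fake'/'true' labels to a binary vector (1 = fake, 0 = true)."""
        return [1 if label == "fake" else 0 for label in labels]

    def hamming_distance(a, b):
        """Count positions where two equal-length bit vectors differ."""
        return sum(x != y for x, y in zip(a, b))

    # Illustrative verdicts for five messages (not data from the article).
    ai_labels    = ["fake", "true", "fake", "fake", "true"]  # swarm of artificial experts
    human_labels = ["fake", "true", "true", "fake", "true"]  # human expert

    ai_vec, human_vec = encode(ai_labels), encode(human_labels)
    distance = hamming_distance(ai_vec, human_vec)
    accuracy = 1 - distance / len(ai_vec)  # share of matching assessments

    print(f"Hamming distance: {distance}, agreement: {accuracy:.0%}")

On the sample vectors above, the distance is 1 and the agreement is 80%, i.e., the artificial and human experts disagree on one message out of five.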

Author Biographies

Dmytro Lande, Institute of Special Communication and Information Protection of the National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”, Kyiv

doctor of technical sciences, professor, professor at the cybersecurity and application of information systems and technologies academic department

Vira Hyrda, Institute of Special Communication and Information Protection of the National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”, Kyiv

postgraduate student

Published

2024-12-26

How to Cite

Lande, D., & Hyrda, V. (2024). Use of large language models to identify fake information. Collection "Information Technology and Security", 12(2), 236–242. https://doi.org/10.20535/2411-1031.2024.12.2.315743

Issue

Vol. 12 No. 2 (2024)

Section

ARTIFICIAL INTELLIGENCE IN THE CYBERSECURITY FIELD