New AI Software Tackles Deepfake Threats to Personal Security

by Geny Caloisi

A research project involving the University of Portsmouth has led to the development of ‘DeepGuard’, an advanced software tool designed to identify AI-generated images and combat the growing risks posed by deepfake technology.

Realistic AI-generated images, whether created from text descriptions or manipulated within videos, present a significant challenge to personal security. The risks range from identity theft to fraudulent document creation and the misuse of personal likenesses. As AI-generated content becomes more sophisticated, distinguishing between real and fake imagery is increasingly difficult.

DeepGuard, developed through a research collaboration that includes the University of Portsmouth’s Artificial Intelligence and Data Science (PAIDS) Research Centre, provides a solution. The tool uses a combination of three AI techniques—binary classification, ensemble learning, and multi-class classification—to analyse and accurately differentiate between genuine and artificially created images. Crucially, it can also trace the origins of manipulated visuals.
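To make the three-technique combination concrete, the sketch below shows one plausible way such a pipeline could be wired together in scikit-learn: an ensemble of weak detectors votes on whether an image is genuine, and a separate multi-class model attributes flagged fakes to a likely generator. Everything here is an illustrative assumption rather than DeepGuard's actual architecture, and random placeholder vectors stand in for real image features, so the printed scores are meaningless.

```python
# A minimal sketch of the binary / ensemble / multi-class pipeline described
# above. Feature extraction, class labels, and model choices are hypothetical,
# not DeepGuard's implementation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder data: 1,000 images as 128-dim feature vectors. In practice these
# would come from a CNN backbone or handcrafted forensic features.
X = rng.normal(size=(1000, 128))
y_binary = rng.integers(0, 2, size=1000)   # 0 = genuine, 1 = AI-generated
y_source = rng.integers(0, 4, size=1000)   # hypothetical generator family ID

X_tr, X_te, yb_tr, yb_te, ys_tr, ys_te = train_test_split(
    X, y_binary, y_source, test_size=0.2, random_state=0
)

# 1) Binary classification (genuine vs. fake), strengthened by
# 2) ensemble learning: two detectors combined by soft voting.
detector = VotingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=1000)),
        ("forest", RandomForestClassifier(n_estimators=100, random_state=0)),
    ],
    voting="soft",
)
detector.fit(X_tr, yb_tr)
print("fake/real accuracy:", detector.score(X_te, yb_te))

# 3) Multi-class classification: attribute images flagged as fake to a
#    likely generator family, i.e. trace the origin of the manipulation.
tracer = RandomForestClassifier(n_estimators=100, random_state=0)
tracer.fit(X_tr, ys_tr)
flagged = detector.predict(X_te) == 1
if flagged.any():
    print("predicted sources for flagged images:", tracer.predict(X_te[flagged])[:5])
```

Soft voting is used here so the ensemble averages class probabilities rather than hard labels, which tends to be more robust when individual detectors disagree; the origin tracer runs only on images the ensemble has already flagged as fake.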

The technology has broad applications, from assisting law enforcement in investigating cyber fraud to supporting media organisations in verifying the authenticity of images used in news reporting. By helping to prevent the spread of misinformation and ensure accountability in digital content, DeepGuard could play a key role in safeguarding individuals and institutions from reputational harm.

The project is led by Dr Gueltoum Bendiab and Yasmine Namani from the Department of Electronics at the University of Frères Mentouri in Algeria, in collaboration with Dr Stavros Shiaeles from the University of Portsmouth’s PAIDS Research Centre and School of Computing.

Dr Shiaeles explained the urgency of tackling deepfake threats: “With AI’s rapid advancements, spotting fake images with the human eye alone is becoming impossible. Manipulated visuals can be used to forge documents, spread false information, undermine elections, and even incite harm. Criminals are taking advantage of these technologies, from blackmail operations to profit-driven social media scams where AI-generated characters replace real people. DeepGuard, and its future iterations, will provide a crucial tool for verifying image authenticity across different sectors.”

The research, published in the Journal of Information Security and Applications, contributes to the wider academic study of deepfake detection. The team reviewed 255 research papers published between 2016 and 2023, assessing various image manipulation techniques—including facial and body alterations, changes in expressions, and AI-generated voice replication. Their findings provide a foundation for further innovation in combating digital deception.

For more details, visit the University of Portsmouth website: http://www.port.ac.uk/
