Can ChatGPT stop deepfakes?

Deepfakes are among the new challenges the world is grappling with, and several notable personalities have already been their victims. AI itself is emerging as a viable countermeasure: experts believe AI can detect fake images, including those generated by the very same class of tools, more effectively than existing methods. While clear detection methods are still limited or largely a work in progress, experts believe that AI's ability to comprehend images makes it an ideal candidate for the task.

Researchers at the University at Buffalo ran tests on OpenAI's ChatGPT, and it turns out that ChatGPT does a decent job of identifying fake images and deepfakes. The LLM was evaluated alongside Google's Gemini, and the conclusion is that its natural language processing makes it a more practical detection tool. Siwei Lyu, the lead researcher, said that what sets it apart is its ability to explain its findings in a way that is also comprehensible to humans.

Lyu also said that LLMs are doing a decent job despite not being specifically designed for the task; their semantic knowledge makes them well suited to it. He believes more effort can be invested in improving LLMs for deepfake identification.

LLMs essentially rely on a large database of captioned images to learn the relationship between words and images. In this way, said Shan Jai, an assistant lab director at the UB Media Forensic Lab, images become their language.

The lab's team tested GPT-4 and Gemini and found that GPT-4 was accurate 79.5% of the time, though only when detecting synthetic artifacts, and accurate 77.2% of the time on StyleGAN-generated images. ChatGPT could also explain its decision-making in plain language, allowing users to understand both the process and the result.

While ChatGPT has emerged as a promising candidate here, the team noted that it also has drawbacks, which is natural given that it was not originally developed to detect deepfakes. The research reveals that its focus is solely on semantic-level abnormalities, which can make it a double-edged sword for deepfake detection.

Another drawback that has come to light is that LLMs can sometimes refuse to analyze an image. According to the study, ChatGPT has even responded by apologizing for being unable to assist with the query: its programming prevents it from answering when it does not reach a certain level of confidence.
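The refusal behavior described above matters when scoring a detector: a "sorry, I can't help" reply is neither a correct nor an incorrect verdict, so it should be excluded rather than counted as a miss. This sketch does that; the refusal phrases are illustrative guesses, not ChatGPT's actual wording.

```python
# Illustrative refusal markers; real model refusals vary in phrasing.
REFUSAL_MARKERS = ("i'm sorry", "i cannot", "i can't assist", "unable to help")


def is_refusal(reply: str) -> bool:
    """Return True if the reply looks like a refusal rather than a verdict."""
    text = reply.strip().lower()
    return any(marker in text for marker in REFUSAL_MARKERS)


def accuracy_excluding_refusals(replies: list[str], labels: list[str]) -> float:
    """Score 'Yes...'/'No...' replies against 'fake'/'real' labels, skipping refusals."""
    scored = correct = 0
    for reply, label in zip(replies, labels):
        if is_refusal(reply):
            continue  # a refusal is not a wrong answer; leave it out of the denominator
        verdict = "fake" if reply.strip().lower().startswith("yes") else "real"
        scored += 1
        correct += verdict == label
    return correct / scored if scored else 0.0
```

Reporting the refusal rate alongside accuracy would make clear how often the model opted out rather than judged.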

Both the race for AI and the malicious use of deepfakes are gaining momentum. It is imperative to sharpen AI-backed detection tools as early as possible, or deepfakes may win the race and ruin the technology for everyone.

AI-generated content is not entirely detrimental, however. Many users have applied artificial intelligence (AI) to generate visually appealing images and have deployed them prudently.

ToAI Team
Fueled by a shared fascination with Artificial Intelligence, the Times Of AI journalist team brings together researchers, writers, and analysts. Through in-depth analysis of the latest advancements, investigation of ethical considerations around AI development, AI governance, machine learning, data science, automation, and cybersecurity, and discussions of AI's future impact across sectors, we aim to empower a broad audience with the knowledge they need to navigate this rapidly evolving field.
