Researchers from the University at Buffalo, in collaboration with the University at Albany and the Chinese University of Hong Kong, have discovered that large language models (LLMs) may be more effective at detecting deepfakes than state-of-the-art algorithms. The LLMs were tested on identifying deepfakes of human faces; although they were not designed for this purpose, their semantic knowledge makes them well suited to the task.

The study found that LLMs such as ChatGPT and Google’s Gemini could accurately detect synthetic artifacts in images produced by several different generation methods, performing comparably to earlier deepfake detection algorithms. However, LLMs may struggle to capture statistical differences at the signal level, which limits their detection ability in some cases. Some models, Gemini in particular, occasionally gave nonsensical explanations for their analyses or refused to analyze images altogether.
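The kind of prompting the study describes can be sketched in a few lines. This is an illustrative guess at the setup, not the researchers' actual code: it assumes the OpenAI Python client, the `gpt-4o` model name, and a made-up prompt; the testable part simply builds the image-plus-text request payload.

```python
import base64

# Hypothetical prompt asking a multimodal LLM to look for deepfake artifacts.
# The wording is an assumption for illustration, not taken from the study.
PROMPT = (
    "Examine this face image for signs of AI generation, such as "
    "inconsistent lighting, irregular skin texture, or asymmetric features. "
    "Answer 'real' or 'synthetic' and briefly explain your reasoning."
)

def build_messages(image_path: str) -> list:
    """Encode an image file and build a chat-style request payload
    containing both the text prompt and the image as a data URL."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("ascii")
    return [{
        "role": "user",
        "content": [
            {"type": "text", "text": PROMPT},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
        ],
    }]

# Sending the request requires an API key and network access:
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4o", messages=build_messages("face.jpg"))
# print(reply.choices[0].message.content)
```

Because the model answers in free-form natural language, this approach yields an explanation alongside the verdict, which is the property the researchers highlight as distinct from conventional detectors.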

Despite these limitations, the researchers suggest that fine-tuning LLMs for deepfake detection could improve their performance and make them more practical tools for users and developers. While LLMs are not yet as accurate as dedicated detection algorithms, their natural language processing capabilities and semantic knowledge offer a distinct approach to deepfake detection. By leveraging these strengths, LLMs like ChatGPT could play a valuable role in combating the spread of AI-generated misinformation in the future.
