Deepfake detection improves when algorithms are more aware of demographic diversity


Deepfakes, which essentially put words in someone else's mouth in a very believable way, are becoming more sophisticated by the day and increasingly hard to spot. Recent examples include fake nude images of Taylor Swift, an audio recording of President Joe Biden telling New Hampshire residents not to vote, and a video of Ukrainian President Volodymyr Zelenskyy calling on his troops to lay down their arms. Although companies have created detectors to help spot deepfakes, studies have found that biases in the data used to train these tools can lead to certain demographic groups being unfairly targeted. A deepfake…

This story continues at The Next Web
