Google is working on removing AI deepfakes from its search results
Google is upgrading its safety features to make it easier to remove deepfakes from search while also preventing them from ranking highly in results. Users can already request the removal of explicit deepfakes, but Google now wants to make the process easier: when a removal request succeeds, it will also filter explicit results from related searches and remove duplicates of the reported image.
Google will also demote websites that repeatedly host AI deepfakes in its search rankings.
In a blog post announcing the changes, Google product manager Emma Higham said, “This approach has worked well for other types of harmful content, and our testing shows that it will be a valuable way to reduce fake explicit content in search results.”
The search giant said past updates have reduced exposure to explicit image results for deepfake-related queries by over 70% this year. “With these changes, people can read about the impact deepfakes are having on society, rather than see pages with actual non-consensual fake images,” she said.
Earlier, in May, Google began removing advertisers that promote deepfake porn services. In 2022, it expanded the types of “doxxing” content eligible for removal, and in August 2023 it began blurring sexually explicit imagery in search results by default.
Non-consensual AI deepfakes have become a growing concern for tech firms. Meta’s Oversight Board recently investigated the company for failing to adequately handle sexually explicit deepfakes of real women.