Synthetic human-like fakes

== 2020's synthetic human-like fakes ==
[[File:Appearance of Queen Elizabeth II stolen by Channel 4 in Dec 2020 (screenshot at 191s).png|thumb|right|480px|In Dec 2020 Channel 4 aired a Queen-like fake, i.e. they had thieved the appearance of Queen Elizabeth II using deepfake methods.]]
* '''2022''' | '''<font color="green">counter-measure</font>''' | [https://arxiv.org/pdf/2206.12043.pdf '''Protecting President Zelenskyy against deep fakes''' a 2022 preprint at arxiv.org] by Matyáš Boháček of Johannes Kepler Gymnasium and [[w:Hany Farid]], the dean and head of the [[w:University of California, Berkeley School of Information|w:Berkeley School of Information at the University of California, Berkeley]]. This brief paper describes their automated digital look-alike detection system and evaluates its efficacy and reliability in comparison to humans with untrained eyes. Their work provides automated evaluation tools to catch so-called "deep fakes", and their motivation seems to have been to find automated armor against disinformation warfare waged on humans and humanity. Automated digital [[Glossary#Media forensics|media forensics]] is a very good idea explored by many. The Boháček and Farid 2022 detection system works by evaluating both facial and gestural mannerisms to distinguish non-human content from content that is human in origin (see the illustrative sketch after this list).
* '''2021''' | Science and demonstration | At NeurIPS 2021, held virtually in December, researchers from Nvidia and [[w:Aalto University]] presented their paper [https://nvlabs.github.io/stylegan3/ '''''Alias-Free Generative Adversarial Networks (StyleGAN3)''''' at nvlabs.github.io] and the associated [https://github.com/NVlabs/stylegan3 implementation] in [[w:PyTorch]]; the results are deceptively human-like in appearance (see the usage sketch below). [https://nvlabs-fi-cdn.nvidia.com/stylegan3/stylegan3-paper.pdf StyleGAN3 paper as .pdf at nvlabs-fi-cdn.nvidia.com]
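
Boháček and Farid's actual classifier is not reproduced here; the following is a minimal illustrative sketch, '''not''' their system. It assumes per-video facial and gestural mannerism features have already been extracted, and trains an off-the-shelf [[w:scikit-learn]] logistic-regression classifier to separate authentic footage of a protected person from fakes. The random arrays below are hypothetical stand-ins for real extracted features.

<syntaxhighlight lang="python">
# Illustrative sketch only: a simple classifier over behavioral feature
# vectors, standing in for the facial + gestural mannerism features
# described by Bohacek and Farid (2022). The random data is a placeholder
# for real extracted features; none of this is the authors' code.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical: 200 authentic clips and 200 fakes, each summarised as a
# 128-dimensional mannerism feature vector (e.g. head-pose, facial action
# unit, and hand-gesture statistics).
authentic = rng.normal(loc=0.0, scale=1.0, size=(200, 128))
fakes = rng.normal(loc=0.5, scale=1.0, size=(200, 128))

X = np.vstack([authentic, fakes])
y = np.array([0] * 200 + [1] * 200)  # 0 = authentic, 1 = fake

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
</syntaxhighlight>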
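
For a sense of how accessible StyleGAN3 synthesis is, the NVlabs/stylegan3 README documents a short sampling pattern, which the sketch below mirrors. It assumes a CUDA-capable GPU, the stylegan3 repository on the Python path (its pickles reference the repo's torch_utils and dnnlib packages), and a downloaded pretrained network pickle; the filename used here is an assumption.

<syntaxhighlight lang="python">
# Minimal sketch of sampling from a pretrained StyleGAN3 generator,
# following the pattern documented in the NVlabs/stylegan3 README.
# Requires the stylegan3 repo checkout on the Python path and a GPU;
# the pickle filename below is an assumption.
import pickle
import torch

with open('stylegan3-t-ffhq-1024x1024.pkl', 'rb') as f:
    G = pickle.load(f)['G_ema'].cuda()  # generator as a torch.nn.Module

z = torch.randn([1, G.z_dim]).cuda()    # random latent code
c = None                                # class labels; unused for FFHQ
img = G(z, c)                           # NCHW float32 image in [-1, +1]
print(img.shape)                        # e.g. torch.Size([1, 3, 1024, 1024])
</syntaxhighlight>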