Protecting President Zelenskyy against deep fakes

<section begin=what-is-it />[https://arxiv.org/abs/2206.12043 '''''Protecting President Zelenskyy against Deep Fakes''''', a 2022 preprint at arxiv.org] by Matyáš Boháček of Johannes Kepler Gymnasium and [[w:Hany Farid]], dean and head of the [[w:University of California, Berkeley School of Information|w:Berkeley School of Information at the University of California, Berkeley]]. This brief paper describes their automated digital look-alike detection system and evaluates its efficacy and reliability in comparison to humans with untrained eyes. Their work provides automated evaluation tools to catch so-called "deep fakes"; their motivation seems to have been to build automated defenses against disinformation warfare directed at humanity. Automated digital [[Glossary#Media forensics|media forensics]] is a promising approach explored by many. The Boháček and Farid 2022 detection system works by evaluating both facial mannerisms and gestural mannerisms to distinguish synthetic footage from footage that is human in origin.<section end=what-is-it />
 
[https://arxiv.org/pdf/2206.12043.pdf '''''Protecting President Zelenskyy against deep fakes''''', a 2022 preprint .pdf at arxiv.org], submitted for publication on Friday 2022-06-24 by Hany Farid.
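The general idea of mannerism-based detection can be illustrated with a toy sketch: build a behavioral "fingerprint" (mean and spread of per-clip features such as facial action units, head pose, or gesture statistics) from authentic videos of the protected person, then flag clips whose features deviate strongly from that profile. This is a minimal illustration under assumed feature vectors and thresholds, not the authors' actual pipeline, which is described in the preprint itself.

```python
import numpy as np

def build_behavioral_profile(feature_matrix):
    """Compute a simple behavioral profile (per-feature mean and std)
    from a matrix of per-clip feature vectors extracted from videos
    known to be authentic. Shape: (num_clips, num_features)."""
    mu = feature_matrix.mean(axis=0)
    sigma = feature_matrix.std(axis=0) + 1e-8  # avoid division by zero
    return mu, sigma

def is_likely_fake(features, profile, threshold=3.0):
    """Flag a clip as a likely fake when its features deviate strongly
    from the authentic profile (mean absolute z-score above threshold).
    The threshold is an arbitrary illustrative choice."""
    mu, sigma = profile
    z = np.abs((features - mu) / sigma)
    return float(z.mean()) > threshold

# Illustrative usage with synthetic feature data:
rng = np.random.default_rng(0)
authentic_clips = rng.normal(0.0, 1.0, size=(100, 16))
profile = build_behavioral_profile(authentic_clips)

real_clip = rng.normal(0.0, 1.0, size=16)   # matches the profile
fake_clip = rng.normal(8.0, 1.0, size=16)   # strongly deviating mannerisms
```

In the real system the features would come from video analysis (e.g. facial and gestural measurements over a clip), and the classifier would be trained rather than thresholded by hand; this sketch only shows the fingerprint-and-deviation idea.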


[[Category:Science]]
[[Category:Antifake]]
[[Category:2022]]