Protecting President Zelenskyy against deep fakes

From Stop Synthetic Filth! wiki

Protecting President Zelenskyy against deep fakes ('Protecting President Zelenskyy against Deep Fakes' at arxiv.org)[1] is a 2022 paper by Matyáš Boháček of Johannes Kepler Gymnasium and w:Hany Farid, the dean and head of the w:Berkeley School of Information at the University of California, Berkeley. This brief paper describes their automated digital look-alike detection system and evaluates its efficacy and reliability in comparison to humans with untrained eyes. Their work provides automated evaluation tools to catch so-called "deep fakes", and their motivation seems to have been to build automated armor against disinformation warfare waged on humans and humanity. Automated digital media forensics is a very good idea explored by many. The Boháček and Farid 2022 detection system works by evaluating both facial and gestural mannerisms in order to distinguish authentic footage of the President from synthetic imitations.
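
The person-specific, behavioral idea behind such a detector can be illustrated with a small sketch: summarize each short clip of the person as a vector of facial and gestural mannerism statistics, learn what is normal for that person from authentic footage only, and flag clips that deviate. The feature extraction step (e.g. with a tool such as OpenFace or OpenPose), the one-class SVM, and all names and numbers below are illustrative assumptions, not the authors' actual pipeline.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Stand-in features: one row per clip, columns are hypothetical mannerism
# statistics (mean facial action-unit intensities, head-pose velocity,
# hand-gesture frequency, ...). In practice these would come from a video
# feature extractor run over authentic and suspect footage.
authentic_clips = rng.normal(loc=0.0, scale=1.0, size=(500, 32))  # real footage of the person
suspect_clips = rng.normal(loc=1.5, scale=1.0, size=(20, 32))     # clips to be verified

# Train only on authentic footage, so anything that deviates from the
# person's learned facial/gestural mannerisms is flagged as a possible fake.
detector = make_pipeline(StandardScaler(), OneClassSVM(nu=0.05, kernel="rbf", gamma="scale"))
detector.fit(authentic_clips)

labels = detector.predict(suspect_clips)  # +1 = consistent with mannerisms, -1 = flagged
print(f"Flagged {int(np.sum(labels == -1))} of {len(suspect_clips)} clips as possible deep fakes")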

Protecting President Zelenskyy against deep fakes is a 2022 paper, available as a .pdf at arxiv.org, submitted for publication on Friday 2022-06-24 by Hany Farid.

See also

  * Further work by Boháček and Farid: Protecting world leaders against deep fakes using facial, gestural, and vocal mannerisms (2022)
  * Against digital sound-alikes and any fake human-like voices: Detecting deep-fake audio through vocal tract reconstruction (2022)

References

  1. Boháček, Matyáš; Farid, Hany (2022-06-14). "Protecting President Zelenskyy against Deep Fakes". arXiv:2206.12043 [cs.CV].

Categories: Science | Antifake | Countermeasure | 2022