Synthetic human-like fakes

* '''2022''' | '''<font color="green">counter-measure</font>''' | [https://arxiv.org/pdf/2206.12043.pdf '''''Protecting President Zelenskyy against deep fakes''''' a 2022 preprint at arxiv.org] by Matyáš Boháček of Johannes Kepler Gymnasium and [[w:Hany Farid]], the dean and head of [[w:University of California, Berkeley School of Information|w:Berkeley School of Information at the University of California, Berkeley]]. This brief paper describes their automated digital look-alike detection system and evaluates its efficacy and reliability in comparison to humans with untrained eyes. Their work provides automated evaluation tools to catch so-called "deep fakes", and their motivation seems to have been to find automated armor against disinformation warfare directed at humans and humanity. Automated digital [[Glossary#Media forensics|media forensics]] is a very good idea explored by many. The Boháček and Farid 2022 detection system works by evaluating both facial and gestural mannerisms to distinguish fabricated footage from footage that is human in origin.  
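The behavioral-biometric idea described above can be illustrated with a toy sketch: summarize per-frame facial/gestural features of known-authentic footage into a "mannerism signature", then flag clips whose signature deviates too far from it. This is only a minimal illustration of the general approach, not the actual Boháček and Farid system; the feature extraction, function names, and threshold here are all assumptions for demonstration.

```python
import numpy as np

def mannerism_signature(frames):
    """Collapse per-frame feature vectors (rows: frames, columns: e.g.
    facial-landmark or gesture measurements) into a fixed-length
    'mannerism signature': the per-feature mean and standard deviation."""
    frames = np.asarray(frames, dtype=float)
    return np.concatenate([frames.mean(axis=0), frames.std(axis=0)])

def is_probably_fake(candidate_frames, reference_signature, threshold=1.0):
    """Flag a clip whose mannerism signature is farther than `threshold`
    (Euclidean distance, a hypothetical cutoff) from the signature built
    from known-authentic footage of the protected person."""
    sig = mannerism_signature(candidate_frames)
    return float(np.linalg.norm(sig - reference_signature)) > threshold

# Toy demonstration with synthetic feature data standing in for real clips.
rng = np.random.default_rng(0)
authentic = rng.normal(0.0, 0.1, size=(100, 4))   # known-genuine footage
reference = mannerism_signature(authentic)
similar = rng.normal(0.0, 0.1, size=(100, 4))     # same "mannerisms"
impostor = rng.normal(2.0, 0.5, size=(100, 4))    # deviating "mannerisms"
```

In this sketch `is_probably_fake(similar, reference)` stays under the cutoff while `is_probably_fake(impostor, reference)` exceeds it; a real system would learn the features and decision boundary from data rather than using a fixed distance threshold.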


* '''2022''' | '''<font color="green">science / review of counter-measures</font>''' | [https://www.mdpi.com/1999-4893/15/5/155 '''''A Review of Modern Audio Deepfake Detection Methods: Challenges and Future Directions''''' at mdpi.com]<ref name="Audio Deepfake detection review 2022">
 
{{cite journal
| last1      = Almutairi
| first1    = Zaynab
| last2      = Elgibreen
| first2    = Hebah
| date      = 2022-05-04
| title      = A Review of Modern Audio Deepfake Detection Methods: Challenges and Future Directions
| url        = https://www.mdpi.com/1999-4893/15/5/155
| journal    = [[w:Algorithms (journal)]]
| volume    =
| issue      =
| pages      =
| doi        = 10.3390/a15050155
| access-date = 2022-10-18
}}
 
</ref>, a review of audio deepfake detection methods by researchers Zaynab Almutairi and Hebah Elgibreen of the [[w:King Saud University]], Saudi Arabia, published in [[w:Algorithms (journal)]] by the [[w:MDPI]] (Multidisciplinary Digital Publishing Institute) on Wednesday 2022-05-04. The article belongs to the Special Issue [https://www.mdpi.com/journal/algorithms/special_issues/Adversarial_Federated_Machine_Learning ''Commemorative Special Issue: Adversarial and Federated Machine Learning: State of the Art and New Perspectives'' at mdpi.com].


* '''2022''' | '''<font color="green">science / counter-measure</font>''' | [https://arxiv.org/abs/2203.15563 '''''Attacker Attribution of Audio Deepfakes''''' at arxiv.org], a pre-print presented at the [https://www.interspeech2022.org/ Interspeech 2022 conference] organized by the [[w:International Speech Communication Association]] in Korea, September 18–22, 2022.