Synthetic human-like fakes



== 2020's synthetic human-like fakes ==
[[File:Appearance of Queen Elizabeth II stolen by Channel 4 in Dec 2020 (screenshot at 191s).png|thumb|right|480px|In Dec 2020 Channel 4 aired a Queen-like fake i.e. they had thieved the appearance of Queen Elizabeth II using deepfake methods.]]


* '''2022''' | <font color="orange">'''science'''</font> and <font color="green">'''demonstration'''</font> | [[w:OpenAI]][https://openai.com/ (.com)] published [[w:ChatGPT]], a conversational AI accessible with a free account at [https://chat.openai.com/ chat.openai.com]. The initial version was published on 2022-11-30.

* '''2022''' | '''<font color="green">brief report of counter-measures</font>''' | {{#lst:Protecting world leaders against deep fakes using facial, gestural, and vocal mannerisms|what-is-it}} Publication date 2022-11-23.

* '''2022''' | '''<font color="green">counter-measure</font>''' | {{#lst:Who Are You (I Really Wanna Know)? Detecting Audio DeepFakes Through Vocal Tract Reconstruction|what-is-it}}
:{{#lst:Who Are You (I Really Wanna Know)? Detecting Audio DeepFakes Through Vocal Tract Reconstruction|original-reporting}} Presented to peers in August 2022 and to the general public in September 2022.

* '''2022''' | '''<font color="green">counter-measure</font>''' | {{#lst:Protecting President Zelenskyy against deep fakes|what-is-it}} The preprint was published in February 2022 and submitted to [[w:arXiv]] in June 2022.

* '''2022''' | '''<font color="green">science / review of counter-measures</font>''' | [https://www.mdpi.com/1999-4893/15/5/155 '''''A Review of Modern Audio Deepfake Detection Methods: Challenges and Future Directions''''' at mdpi.com]<ref name="Audio Deepfake detection review 2022">
{{cite journal
| last1       = Almutairi
| first1      = Zaynab
| last2       = Elgibreen
| first2      = Hebah
| date        = 2022-05-04
| title       = A Review of Modern Audio Deepfake Detection Methods: Challenges and Future Directions
| url         = https://www.mdpi.com/1999-4893/15/5/155
| journal     = [[w:Algorithms (journal)]]
| volume      =
| issue       =
| pages       =
| doi         = 10.3390/a15050155
| access-date = 2022-10-06
}}
</ref>, a review of audio deepfake detection methods by researchers Zaynab Almutairi and Hebah Elgibreen of [[w:King Saud University]], Saudi Arabia, published in [[w:Algorithms (journal)]] by [[w:MDPI]] (Multidisciplinary Digital Publishing Institute) on Wednesday 2022-05-04. The article belongs to the Special Issue [https://www.mdpi.com/journal/algorithms/special_issues/Adversarial_Federated_Machine_Learning ''Commemorative Special Issue: Adversarial and Federated Machine Learning: State of the Art and New Perspectives'' at mdpi.com].

* '''2022''' | '''<font color="green">science / counter-measure</font>''' | [https://arxiv.org/abs/2203.15563 '''''Attacker Attribution of Audio Deepfakes''''' at arxiv.org], a preprint presented at the [https://www.interspeech2022.org/ Interspeech 2022 conference], organized by the [[w:International Speech Communication Association]] in Korea, September 18-22, 2022.


* '''2021''' | Science and demonstration | At NeurIPS 2021, held virtually in December, researchers from Nvidia and [[w:Aalto University]] presented their paper [https://nvlabs.github.io/stylegan3/ '''''Alias-Free Generative Adversarial Networks (StyleGAN3)''''' at nvlabs.github.io] and an associated [https://github.com/NVlabs/stylegan3 implementation] in [[w:PyTorch]]; the results are deceptively human-like in appearance. [https://nvlabs-fi-cdn.nvidia.com/stylegan3/stylegan3-paper.pdf StyleGAN3 paper as .pdf at nvlabs-fi-cdn.nvidia.com]
* '''2021''' | '''<font color="red">crime / fraud</font>''' | {{#lst:Synthetic human-like fakes|2021 digital sound-alike enabled fraud}}
* '''<font color="green">2020</font>''' | '''<font color="green">counter-measure</font>''' | The [https://incidentdatabase.ai/ '''''AI Incident Database''''' at incidentdatabase.ai] was introduced on 2020-11-18 by the [[w:Partnership on AI]].<ref name="PartnershipOnAI2020">https://www.partnershiponai.org/aiincidentdatabase/</ref>


* '''2020''' | '''Controversy''' / '''Public service announcement''' | Channel 4 thieved the appearance of Queen Elizabeth II using deepfake methods. The product of synthetic human-like fakery originally aired on Channel 4 on 25 December at 15:25 GMT.<ref name="Queen-like deepfake 2020 BBC reporting">https://www.bbc.com/news/technology-55424730</ref> [https://www.youtube.com/watch?v=IvY-Abd2FfM&t=3s View on YouTube]