== 2020's synthetic human-like fakes ==
* '''2022''' | <font color="orange">'''science'''</font> and <font color="green">'''demonstration'''</font> | [[w:OpenAI]][https://openai.com/ (.com)] published [[w:ChatGPT]], a conversational AI accessible with a free account at [https://chat.openai.com/ chat.openai.com]. The initial version was published on 2022-11-30.
* '''2022''' | '''<font color="green">brief report of counter-measures</font>''' | {{#lst:Protecting world leaders against deep fakes using facial, gestural, and vocal mannerisms|what-is-it}} Publication date 2022-11-23.
* '''2022''' | '''<font color="green">counter-measure</font>''' | {{#lst:Who Are You (I Really Wanna Know)? Detecting Audio DeepFakes Through Vocal Tract Reconstruction|what-is-it}}
:{{#lst:Who Are You (I Really Wanna Know)? Detecting Audio DeepFakes Through Vocal Tract Reconstruction|original-reporting}}. Presented to peers in August 2022 and to the general public in September 2022.
* '''2022''' | '''<font color="green">counter-measure</font>''' | {{#lst:Protecting President Zelenskyy against deep fakes|what-is-it}} Preprint published in February 2022 and submitted to [[w:arXiv]] in June 2022.
* '''2022''' | '''<font color="green">science / review of counter-measures</font>''' | [https://www.mdpi.com/1999-4893/15/5/155 '''''A Review of Modern Audio Deepfake Detection Methods: Challenges and Future Directions''''' at mdpi.com]<ref name="Audio Deepfake detection review 2022">
{{cite journal
| last1 = Almutairi
| first1 = Zaynab
| last2 = Elgibreen
| first2 = Hebah
| date = 2022-05-04
| title = A Review of Modern Audio Deepfake Detection Methods: Challenges and Future Directions
| url = https://www.mdpi.com/1999-4893/15/5/155
| journal = [[w:Algorithms (journal)]]
| doi = 10.3390/a15050155
| access-date = 2022-10-18
}}
</ref>, a review of audio deepfake detection methods by researchers Zaynab Almutairi and Hebah Elgibreen of [[w:King Saud University]], Saudi Arabia, published in [[w:Algorithms (journal)]] on 2022-05-04 by [[w:MDPI]] (Multidisciplinary Digital Publishing Institute). The article belongs to the Special Issue [https://www.mdpi.com/journal/algorithms/special_issues/Adversarial_Federated_Machine_Learning ''Commemorative Special Issue: Adversarial and Federated Machine Learning: State of the Art and New Perspectives'' at mdpi.com]
* '''2022''' | '''<font color="green">science / counter-measure</font>''' | [https://arxiv.org/abs/2203.15563 '''''Attacker Attribution of Audio Deepfakes''''' at arxiv.org], a preprint presented at the [https://www.interspeech2022.org/ Interspeech 2022 conference] organized by the [[w:International Speech Communication Association]] in Korea, 18–22 September 2022.
* '''2021''' | '''Science and demonstration''' | At NeurIPS 2021, held virtually in December, researchers from Nvidia and [[w:Aalto University]] presented their paper [https://nvlabs.github.io/stylegan3/ '''''Alias-Free Generative Adversarial Networks (StyleGAN3)''''' at nvlabs.github.io] and an associated [https://github.com/NVlabs/stylegan3 implementation] in [[w:PyTorch]]. The results are deceptively human-like in appearance. [https://nvlabs-fi-cdn.nvidia.com/stylegan3/stylegan3-paper.pdf StyleGAN3 paper as .pdf at nvlabs-fi-cdn.nvidia.com]
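StyleGAN3 above is a generative adversarial network (GAN): a generator and a discriminator trained against each other. As a minimal sketch of that adversarial game only (this is NOT StyleGAN3's architecture; both players here are toy one-parameter linear models in plain NumPy, and the "data" is a simple Gaussian):

```python
# Toy sketch of the adversarial training idea behind GANs, the model
# family StyleGAN3 belongs to. Real GANs use deep networks; here both
# players are tiny linear models so the whole game fits in NumPy.
import numpy as np

rng = np.random.default_rng(0)

w, b = 1.0, 0.0   # generator g(z) = w*z + b   (toy stand-in for a deep net)
a, c = 0.0, 0.0   # discriminator D(x) = sigmoid(a*x + c)
lr = 0.05

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(2000):
    z = rng.normal(size=32)            # latent noise
    fake = w * z + b                   # generated samples
    real = rng.normal(4.0, 1.0, 32)    # "real" data: N(4, 1)

    # Discriminator step: ascend mean(log D(real) + log(1 - D(fake)))
    d_real = sigmoid(a * real + c)
    d_fake = sigmoid(a * fake + c)
    a += lr * np.mean((1.0 - d_real) * real - d_fake * fake)
    c += lr * np.mean((1.0 - d_real) - d_fake)

    # Generator step: ascend mean(log D(fake)) (non-saturating loss)
    d_fake = sigmoid(a * fake + c)
    grad_fake = (1.0 - d_fake) * a     # d/d(fake) of log D(fake)
    w += lr * np.mean(grad_fake * z)
    b += lr * np.mean(grad_fake)

# The generator's output mean (b, since z has zero mean) should have
# drifted from 0 toward the real data's mean of 4.
print(f"generator mean after training: {b:.2f}")
```

The same minimax dynamic, scaled up to convolutional networks and image data, is what produces the photorealistic faces the entry describes.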
|last=Rosner
|first=Helen
|author-link=[[w:Helen Rosner]]
|date=2021-07-15
|title=A Haunting New Documentary About Anthony Bourdain
* '''2021''' | '''<font color="red">crime / fraud</font>''' | {{#lst:Synthetic human-like fakes|2021 digital sound-alike enabled fraud}}
* '''<font color="green">2020</font>''' | '''<font color="green">counter-measure</font>''' | The [https://incidentdatabase.ai/ '''''AI Incident Database''''' at incidentdatabase.ai] was introduced on 2020-11-18 by the [[w:Partnership on AI]].<ref name="PartnershipOnAI2020">https://www.partnershiponai.org/aiincidentdatabase/</ref>
[[File:Appearance of Queen Elizabeth II stolen by Channel 4 in Dec 2020 (screenshot at 191s).png|thumb|right|480px|In Dec 2020 Channel 4 aired a Queen-like fake, i.e. they had stolen the appearance of Queen Elizabeth II using deepfake methods.]]
* '''2020''' | '''Controversy''' / '''Public service announcement''' | Channel 4 stole the appearance of Queen Elizabeth II using deepfake methods. The product of synthetic human-like fakery originally aired on Channel 4 on 25 December at 15:25 GMT.<ref name="Queen-like deepfake 2020 BBC reporting">https://www.bbc.com/news/technology-55424730</ref> [https://www.youtube.com/watch?v=IvY-Abd2FfM&t=3s View on YouTube]
* <font color="red">'''1999'''</font> | <font color="red">'''institute founded'''</font> | The '''[[w:Institute for Creative Technologies]]''' was founded by the [[w:United States Army]] at the [[w:University of Southern California]]. It collaborates with the [[w:United States Army Futures Command]], [[w:United States Army Combat Capabilities Development Command]], [[w:Combat Capabilities Development Command Soldier Center]] and [[w:United States Army Research Laboratory]].<ref name="ICT-about">https://ict.usc.edu/about/</ref> In 2016 [[w:Hao Li]] was appointed to direct the institute.
* '''1997''' | '''technology / science''' | [https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/human/bregler-sig97.pdf '''''Video rewrite: Driving visual speech with audio''''' at www2.eecs.berkeley.edu]<ref name="Bregler1997">
{{cite journal
| last1 = Bregler
| first1 = Christoph
| last2 = Covell
| first2 = Michele
| last3 = Slaney
| first3 = Malcolm
| date = 1997-08-03
| title = Video Rewrite: Driving Visual Speech with Audio
| url = https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/human/bregler-sig97.pdf
| journal = SIGGRAPH '97: Proceedings of the 24th annual conference on Computer graphics and interactive techniques
| pages = 353–360
| doi = 10.1145/258734.258880
| access-date = 2022-09-09
}}
</ref><ref group="1st seen in" name="Bohacek-Farid-2022">
Protecting President Zelenskyy against deep fakes https://arxiv.org/pdf/2206.12043.pdf
</ref> Christoph Bregler, Michele Covell and Malcolm Slaney presented their work at ACM SIGGRAPH 1997. [https://www.dropbox.com/sh/s4l00z7z4gn7bvo/AAAP5oekFqoelnfZYjS8NQyca?dl=0 Download video evidence of ''Video rewrite: Driving visual speech with audio'' Bregler et al 1997 from dropbox.com], [http://chris.bregler.com/videorewrite/ view the author's site at chris.bregler.com], [https://dl.acm.org/doi/10.1145/258734.258880 paper at dl.acm.org], [https://www.researchgate.net/publication/220720338_Video_Rewrite_Driving_Visual_Speech_with_Audio paper at researchgate.net]
* '''1994''' | movie | [[w:The Crow (1994 film)]] was the first film production to make use of [[w:digital compositing]] of a computer-simulated representation of a face onto scenes filmed using a [[w:body double]]. Necessity was the muse, as the actor [[w:Brandon Lee]], who portrayed the protagonist, was tragically killed in an accident on set during filming.