= Timeline of synthetic human-like fakes =
See the #SSFWIKI '''[[Mediatheque]]''' for viewing media that is or is probably to do with synthetic human-like fakes.

== 2020's synthetic human-like fakes ==
* '''2023''' | '''<font color="orange">Real-time digital look-and-sound-alike crime</font>''' | In April a man in northern China was defrauded of 4.3 million yuan by a criminal employing a digital look-and-sound-alike pretending to be his friend on a video call made with a stolen messaging service account.<ref name="Reuters real-time digital look-and-sound-alike crime 2023"/>
* '''2023''' | '''<font color="orange">Election meddling with digital look-alikes</font>''' | The [[w:2023 Turkish presidential election]] saw numerous deepfake controversies.
** "''Ahead of the election in Turkey, President Recep Tayyip Erdogan showed a video linking his main challenger Kemal Kilicdaroglu to the militant Kurdish organization PKK.''" [...] "''Research by DW's fact-checking team in cooperation with DW's Turkish service shows that the video at the campaign rally was '''manipulated''' by '''combining two separate videos''' with totally different backgrounds and content.''" [https://www.dw.com/en/fact-check-turkeys-erdogan-shows-false-kilicdaroglu-video/a-65554034 reports dw.com]
* '''2023''' | March 7th | '''<font color="red">science / demonstration</font>''' | Microsoft researchers submitted a paper for publication outlining their [https://arxiv.org/abs/2303.03926 '''Cross-lingual neural codec language modeling system''' at arxiv.org] dubbed [https://www.microsoft.com/en-us/research/project/vall-e-x/vall-e-x/ '''VALL-E X''' at microsoft.com], which extends VALL-E's capabilities to be cross-lingual while maintaining the same "''emotional tone''" from sample to fake.
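As background to the VALL-E X entry above: neural codec language modeling casts zero-shot text-to-speech as conditional language modeling over discrete audio-codec tokens rather than over raw waveforms. In simplified notation (mine, not the papers'), with phoneme sequence ''x'', an acoustic prompt ''C̃'' extracted from the short enrollment sample, and codec tokens ''c''<sub>''t''</sub>:

```latex
% Autoregressive factorization used (in simplified form) by the VALL-E family:
% the model predicts discrete audio-codec tokens conditioned on the text and
% on the codec tokens of the short enrollment sample, so the synthesized
% output inherits the sampled speaker's voice and "emotional tone".
p(C \mid x, \tilde{C}) \;=\; \prod_{t=1}^{T} p\!\left(c_t \mid c_{<t},\, x,\, \tilde{C};\, \theta\right)
```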
* '''2023''' | January 5th | '''<font color="red">science / demonstration</font>''' | Microsoft researchers announced [https://www.microsoft.com/en-us/research/project/vall-e/ '''''VALL-E''''' - Neural Codec Language Models are Zero-Shot Text to Speech Synthesizers (at microsoft.com)], which is able to thieve a voice from only '''3 seconds of sample''' and is also able to mimic the "''emotional tone''" of the sample the synthesis is produced from.<ref> {{cite web | url = https://arstechnica.com/information-technology/2023/01/microsofts-new-ai-can-simulate-anyones-voice-with-3-seconds-of-audio/ | title = Microsoft’s new AI can simulate anyone’s voice with 3 seconds of audio | last = Edwards | first = Benj | date = 2023-01-10 | website = [[w:Arstechnica.com]] | publisher = Arstechnica | access-date = 2023-05-05 | quote = For the paper's conclusion, they write: "Since VALL-E could synthesize speech that maintains speaker identity, it may carry potential risks in misuse of the model, such as spoofing voice identification or impersonating a specific speaker. To mitigate such risks, it is possible to build a detection model to discriminate whether an audio clip was synthesized by VALL-E. We will also put Microsoft AI Principles into practice when further developing the models." }} </ref>
* '''2023''' | January 1st | '''<font color="green">Law</font>''' | {{#lst:Law on sexual offences in Finland 2023|what-is-it}}
* '''2022''' | <font color="orange">'''science'''</font> and <font color="green">'''demonstration'''</font> | [[w:OpenAI]][https://openai.com/ (.com)] published [[w:ChatGPT]], a conversational AI accessible with a free account at [https://chat.openai.com/ chat.openai.com]. The initial version was published on 2022-11-30.
* '''2022''' | '''<font color="green">brief report of counter-measures</font>''' | {{#lst:Protecting world leaders against deep fakes using facial, gestural, and vocal mannerisms|what-is-it}} Publication date 2022-11-23.
* '''2022''' | '''<font color="green">counter-measure</font>''' | {{#lst:Detecting deep-fake audio through vocal tract reconstruction|what-is-it}}
:{{#lst:Detecting deep-fake audio through vocal tract reconstruction|original-reporting}}. Presented to peers in August 2022 and to the general public in September 2022.
* '''2022''' | <font color="orange">'''disinformation attack'''</font> | In June 2022 a fake digital look-and-sound-alike in the appearance and voice of [[w:Vitali Klitschko]], mayor of [[w:Kyiv]], held fake video phone calls with several European mayors. The Germans determined that the video phone call was fake by contacting the Ukrainian officials. This attempted covert disinformation attack was originally reported by [[w:Der Spiegel]].<ref>https://www.theguardian.com/world/2022/jun/25/european-leaders-deepfake-video-calls-mayor-of-kyiv-vitali-klitschko</ref><ref>https://www.dw.com/en/vitali-klitschko-fake-tricks-berlin-mayor-in-video-call/a-62257289</ref>
* '''2022''' | science | [[w:DALL-E]] 2, a successor designed to generate more realistic images at higher resolutions that "can combine concepts, attributes, and styles", was published in April 2022.<ref>{{Cite web |title=DALL·E 2 |url=https://openai.com/dall-e-2/ |access-date=2023-04-22 |website=OpenAI |language=en-US}}</ref> ([https://en.wikipedia.org/w/index.php?title=DALL-E&oldid=1151136107 Wikipedia])
* '''2022''' | '''<font color="green">counter-measure</font>''' | {{#lst:Protecting President Zelenskyy against deep fakes|what-is-it}} Preprint published in February 2022 and submitted to [[w:arXiv]] in June 2022.
* '''2022''' | '''<font color="green">science / review of counter-measures</font>''' | [https://www.mdpi.com/1999-4893/15/5/155 '''''A Review of Modern Audio Deepfake Detection Methods: Challenges and Future Directions''''' at mdpi.com]<ref name="Audio Deepfake detection review 2022"> {{cite journal | last1 = Almutairi | first1 = Zaynab | last2 = Elgibreen | first2 = Hebah | date = 2022-05-04 | title = A Review of Modern Audio Deepfake Detection Methods: Challenges and Future Directions | url = https://www.mdpi.com/1999-4893/15/5/155 | journal = [[w:Algorithms (journal)]] | doi = 10.3390/a15050155 | access-date = 2022-10-18 }} </ref>, a review of audio deepfake detection methods by researchers Zaynab Almutairi and Hebah Elgibreen of the [[w:King Saud University]], Saudi Arabia, published in [[w:Algorithms (journal)]] on Wednesday 2022-05-04 by the [[w:MDPI]] (Multidisciplinary Digital Publishing Institute). This article belongs to the Special Issue [https://www.mdpi.com/journal/algorithms/special_issues/Adversarial_Federated_Machine_Learning ''Commemorative Special Issue: Adversarial and Federated Machine Learning: State of the Art and New Perspectives'' at mdpi.com].
* '''2022''' | '''<font color="green">science / counter-measure</font>''' | [https://arxiv.org/abs/2203.15563 '''''Attacker Attribution of Audio Deepfakes''''' at arxiv.org], a pre-print presented at the [https://www.interspeech2022.org/ Interspeech 2022 conference], organized by the [[w:International Speech Communication Association]] in Korea, September 18-22 2022.
* '''2021''' | Science and demonstration | At NeurIPS 2021, held virtually in December, researchers from Nvidia and [[w:Aalto University]] presented their paper [https://nvlabs.github.io/stylegan3/ '''''Alias-Free Generative Adversarial Networks (StyleGAN3)''''' at nvlabs.github.io] and the associated [https://github.com/NVlabs/stylegan3 implementation] in [[w:PyTorch]], and the results are deceptively human-like in appearance. [https://nvlabs-fi-cdn.nvidia.com/stylegan3/stylegan3-paper.pdf StyleGAN3 paper as .pdf at nvlabs-fi-cdn.nvidia.com]
* '''2021''' | Entertainment | The Swedish pop band [[w:ABBA]] published an album in September and will be performing shows where the music is live and real, but the visuals will be [[#Age analysis and rejuvenating and aging syntheses|rejuvenated]] [[#Digital look-alikes|digital look-alikes]] of the band members displayed to the fans with [[w:holography]] technology. ABBA used [[w:Industrial Light & Magic]] as the purveyor of technology. [[w:Industrial Light & Magic]] was acquired by [[w:The Walt Disney Company]] in 2012 as part of their acquisition of [[w:Lucasfilm]].
* '''2021''' | Controversy | July 2021 saw the release of [[w:Roadrunner: A Film About Anthony Bourdain]] and soon controversy arose, as the director [[w:Morgan Neville]] admitted to [[w:Helen Rosner]], a food writer for [[w:The New Yorker]], that he had contracted an AI company to thieve [[w:Anthony Bourdain]]'s voice and had used it to insert audio that sounded like him, without declaring it as faked.<ref name="NewYorker 2020"> {{Cite magazine |last=Rosner |first=Helen |author-link=[[w:Helen Rosner]] |date=2021-07-15 |title=A Haunting New Documentary About Anthony Bourdain |url=https://www.newyorker.com/culture/annals-of-gastronomy/the-haunting-afterlife-of-anthony-bourdain |url-status=live |access-date=2021-08-25 |magazine=[[w:The New Yorker]] |language=en-US }} </ref><ref group="1st seen in"> Witness newsletter I subscribed to at https://www.witness.org/get-involved/ </ref>
* '''2021''' | Science | [https://arxiv.org/pdf/2102.05630.pdf '''''Voice Cloning: a Multi-Speaker Text-to-Speech Synthesis Approach based on Transfer Learning''''' .pdf at arxiv.org], a paper submitted in Feb 2021 by researchers from the [[w:University of Turin]].<ref group="1st seen in" name="ConnectedPapers suggestion on Google Transfer learning 2018" />
* '''2021''' | '''<font color="red">crime / fraud</font>''' | {{#lst:Synthetic human-like fakes|2021 digital sound-alike enabled fraud}}
* '''2021''' | science and demonstration | '''DALL-E''', a [[w:deep learning]] model developed by [[w:OpenAI]] to generate digital images from [[w:natural language]] descriptions, called "prompts", was published in January 2021. DALL-E uses a version of [[w:GPT-3]] modified to generate images. (Adapted from [https://en.wikipedia.org/w/index.php?title=DALL-E&oldid=1151136107 Wikipedia])
* '''<font color="green">2020</font>''' | '''<font color="green">counter-measure</font>''' | The [https://incidentdatabase.ai/ '''''AI Incident Database''''' at incidentdatabase.ai] was introduced on 2020-11-18 by the [[w:Partnership on AI]].<ref name="PartnershipOnAI2020">https://www.partnershiponai.org/aiincidentdatabase/</ref>
[[File:Appearance of Queen Elizabeth II stolen by Channel 4 in Dec 2020 (screenshot at 191s).png|thumb|right|480px|In Dec 2020 Channel 4 aired a Queen-like fake i.e. they had thieved the appearance of Queen Elizabeth II using deepfake methods.]]
* '''2020''' | '''Controversy''' / '''Public service announcement''' | Channel 4 thieved the appearance of Queen Elizabeth II using deepfake methods. The product of synthetic human-like fakery originally aired on Channel 4 on 25 December at 15:25 GMT.<ref name="Queen-like deepfake 2020 BBC reporting">https://www.bbc.com/news/technology-55424730</ref> [https://www.youtube.com/watch?v=IvY-Abd2FfM&t=3s View in YouTube]
* '''2020''' | reporting | [https://www.wired.co.uk/article/deepfake-porn-websites-videos-law "''Deepfake porn is now mainstream. And major sites are cashing in''" at wired.co.uk] by Matt Burgess. Published August 2020.
* '''2020''' | demonstration | '''[https://moondisaster.org/ Moondisaster.org]''' (full film embedded in website), a project by the [https://virtuality.mit.edu/ Center for Advanced Virtuality] of the [[w:Massachusetts Institute of Technology|w:MIT]] published in July 2020, makes use of various methods of making a synthetic human-like fake. Alternative place to watch: [https://www.youtube.com/watch?v=LWLadJFI8Pk ''In Event of Moon Disaster - FULL FILM'' at youtube.com]
** [https://www.cnet.com/news/mit-releases-deepfake-video-of-nixon-announcing-nasa-apollo-11-disaster/ Cnet.com July 2020 reporting ''MIT releases deepfake video of 'Nixon' announcing NASA Apollo 11 disaster'']
* '''2020''' | US state law | {{#lst:Laws against synthesis and other related crimes|California2020}}
* '''2020''' | Chinese legislation | {{#lst:Laws against synthesis and other related crimes|China2020}}

== 2010's synthetic human-like fakes ==
* '''2019''' | science and demonstration | At the December 2019 NeurIPS conference, [https://aliaksandrsiarohin.github.io/first-order-model-website/ '''''First Order Motion Model for Image Animation''''' (website at aliaksandrsiarohin.github.io)] ([https://proceedings.neurips.cc/paper/2019/file/31c0b36aef265d9221af80872ceb62f9-Paper.pdf paper], [https://github.com/AliaksandrSiarohin/first-order-model github]), a novel method for making animated fakes of anything with AI, was presented.<ref group="1st seen in">https://www.technologyreview.com/2020/08/28/1007746/ai-deepfakes-memes/</ref>
** Reporting: [https://www.technologyreview.com/2020/08/28/1007746/ai-deepfakes-memes/ '''''Memers are making deepfakes, and things are getting weird''''' at technologyreview.com], 2020-08-28 by Karen Hao.
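At its core, the First Order Motion Model is a keypoint-based motion-transfer method: the motion field near each learned keypoint is approximated to first order, i.e. as a translation of the keypoint plus a local linear (Jacobian) term. A minimal numpy sketch of that local-affine approximation, with hypothetical toy values (not the authors' implementation):

```python
import numpy as np

def local_affine_motion(z, kp_src, kp_drv, jacobian):
    """First-order approximation of the backward motion field around one keypoint:
    maps a driving-frame coordinate z to a source-frame coordinate,
    T(z) ~= kp_src + J (z - kp_drv)."""
    return kp_src + jacobian @ (z - kp_drv)

# Toy example: the keypoint moved by a pure translation (identity Jacobian).
kp_src = np.array([10.0, 20.0])   # keypoint location in the source frame
kp_drv = np.array([12.0, 25.0])   # same keypoint in the driving frame
J = np.eye(2)                     # local linear part of the motion (here: none)

z = np.array([13.0, 26.0])        # a pixel near the keypoint in the driving frame
print(local_affine_motion(z, kp_src, kp_drv, J))  # -> [11. 21.]
```

In the full method a network predicts several such keypoints and Jacobians per frame and a generator inpaints the warped source image; the sketch only shows the warp each keypoint contributes.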
* '''2019''' | demonstration | In September 2019 [[w:Yle]], the Finnish [[w:public broadcasting company]], aired a result of experimental [[w:journalism]], [https://yle.fi/uutiset/3-10955498 '''a deepfake of the President in office'''] [[w:Sauli Niinistö]], in its main news broadcast for the purpose of highlighting the advancing disinformation technology and the problems that arise from it.
* '''2019''' | US state law | {{#lst:Laws against synthesis and other related crimes|Texas2019}}
* '''2019''' | US state law | {{#lst:Laws against synthesis and other related crimes|Virginia2019}}
* '''2019''' | Science | [https://arxiv.org/pdf/1809.10460.pdf '''''Sample Efficient Adaptive Text-to-Speech''''' .pdf at arxiv.org], a 2019 paper from Google researchers, published as a conference paper at the [[w:International Conference on Learning Representations]] (ICLR).<ref group="1st seen in" name="ConnectedPapers suggestion on Google Transfer learning 2018"> https://www.connectedpapers.com/main/8fc09dfcff78ac9057ff0834a83d23eb38ca198a/Transfer-Learning-from-Speaker-Verification-to-Multispeaker-TextToSpeech-Synthesis/graph</ref>
* '''2019''' | science and demonstration | [https://arxiv.org/pdf/1905.09773.pdf '''Speech2Face: Learning the Face Behind a Voice''' at arXiv.org], a system for generating likely facial features based on the voice of a person, presented by the [[w:MIT Computer Science and Artificial Intelligence Laboratory]] at the 2019 [[w:Conference on Computer Vision and Pattern Recognition|w:CVPR]]. [https://github.com/saiteja-talluri/Speech2Face Speech2Face at github.com] This may develop into something that really causes problems. [https://neurohive.io/en/news/speech2face-neural-network-predicts-the-face-behind-a-voice/ "Speech2Face: Neural Network Predicts the Face Behind a Voice" reporting at neurohive.io], [https://belitsoft.com/speech-recognition-software-development/speech2face "Speech2Face Sees Voices and Hears Faces: Dreams Come True with AI" reporting at belitsoft.com]
* '''2019''' | <font color="red">'''crime'''</font> | '''[[w:Fraud]]''' with [[#digital sound-alikes|digital sound-alike]] technology surfaced in 2019. See [https://www.washingtonpost.com/technology/2019/09/04/an-artificial-intelligence-first-voice-mimicking-software-reportedly-used-major-theft/?noredirect=on '''''An artificial-intelligence first: Voice-mimicking software reportedly used in a major theft'''''], a 2019 Washington Post article, or [https://www.forbes.com/sites/jessedamiani/2019/09/03/a-voice-deepfake-was-used-to-scam-a-ceo-out-of-243000/ '''''A Voice Deepfake Was Used To Scam A CEO Out Of $243,000''''' at Forbes.com] (2019-09-03).
* '''2019''' | demonstration | [http://whichfaceisreal.com/ '''''Which Face is Real?''''' at whichfaceisreal.com] is an easily unnerving game by [http://ctbergstrom.com/ Carl Bergstrom] and [https://jevinwest.org/ Jevin West] where you need to '''try to distinguish''' from a pair of photos '''which is real and which is not'''. A part of the "tools" of the [https://callingbullshit.org/ Calling Bullshit] course taught at the [[w:University of Washington]]. <font color="green">'''Relevancy: certain'''</font>
* '''2019''' | demonstration | '''[https://www.thispersondoesnotexist.com/ 'Thispersondoesnotexist.com']''' (since February 2019) by Philip Wang. It showcases a [[w:StyleGAN]] at the task of making an endless stream of pictures that look like no-one in particular, but are eerily human-like. <font color="green">'''Relevancy: certain'''</font>
* '''2019''' | action | [[w:Nvidia]] [[w:open source]]s [[w:StyleGAN]], a novel [[w:generative adversarial network]].<ref name="Medium2019"> {{Cite web |url=https://medium.com/syncedreview/nvidia-open-sources-hyper-realistic-face-generator-stylegan-f346e1a73826 |title= NVIDIA Open-Sources Hyper-Realistic Face Generator StyleGAN |date= 2019-02-09 |website= [[Medium.com]] |access-date= 2020-07-13 }} </ref>
* '''<font color="green">2018</font>''' | '''<font color="green">counter-measure</font>''' | In September 2018 Google added “'''involuntary synthetic pornographic imagery'''” to its '''ban list''', allowing anyone to request that the search engine block results that falsely depict them as “nude or in a sexually explicit situation.”<ref name="WashingtonPost2018"> {{cite web |url= https://www.washingtonpost.com/technology/2018/12/30/fake-porn-videos-are-being-weaponized-harass-humiliate-women-everybody-is-potential-target/ |title= Fake-porn videos are being weaponized to harass and humiliate women: 'Everybody is a potential target' |last= Harwell |first= Drew |date= 2018-12-30 |publisher= [[w:The Washington Post]] |access-date= 2020-07-13 |quote= In September [of 2018], Google added “involuntary synthetic pornographic imagery” to its ban list }} </ref> '''[https://support.google.com/websearch/answer/9116649?hl=en Information on removing involuntary fake pornography from Google at support.google.com]''' if it shows up in Google, and the '''[https://support.google.com/websearch/troubleshooter/3111061#ts=2889054%2C2889099%2C2889064%2C9171203 form to request removing involuntary fake pornography at support.google.com]'''; select "''I want to remove: A fake nude or sexually explicit picture or video of myself''".
[[File:GoogleLogoSept12015.png|thumb|right|300px|[[w:Google]]'s logo. Google Research demonstrated their '''[https://google.github.io/tacotron/publications/speaker_adaptation/ sound-like-anyone-machine]''' at the '''2018''' [[w:Conference on Neural Information Processing Systems]] (NeurIPS). It requires only 5 seconds of sample to steal a voice.]]
* '''<font color="red">2018</font>''' | <font color="red">science</font> and <font color="red">demonstration</font> | The work [http://papers.nips.cc/paper/7700-transfer-learning-from-speaker-verification-to-multispeaker-text-to-speech-synthesis '''Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis'''] ([https://arxiv.org/abs/1806.04558 at arXiv.org]) was presented at the 2018 [[w:Conference on Neural Information Processing Systems]] ('''NeurIPS'''). The pre-trained model is able to steal voices from a sample of only '''5 seconds''' with almost convincing results.
* '''2018''' | science | [https://arxiv.org/abs/1710.10196 '''Progressive Growing of GANs for Improved Quality, Stability, and Variation''' at arxiv.org] ([https://arxiv.org/pdf/1710.10196.pdf .pdf]), colloquially known as ProGANs, were presented by Nvidia researchers at the [https://iclr.cc/Conferences/2018 2018 ICLR] ([[w:International Conference on Learning Representations]]).
* '''2018''' | demonstration | At the 2018 [[w:World Internet Conference]] in [[w:Wuzhen]] the [[w:Xinhua News Agency]] presented two digital look-alikes made to the resemblance of its real news anchors Qiu Hao ([[w:Chinese language]])<ref name="TheGuardian2018"> {{cite web | url = https://www.theguardian.com/world/2018/nov/09/worlds-first-ai-news-anchor-unveiled-in-china | title = World's first AI news anchor unveiled in China | last = Kuo | first = Lily | date = 2018-11-09 | access-date = 2020-07-13 }} </ref> and Zhang Zhao ([[w:English language]]). The digital look-alikes were made in conjunction with [[w:Sogou]].<ref name="BusinessInsider2018"> {{cite web | url = https://businessinsider.com/ai-news-anchor-created-by-china-xinhua-news-agency-2018-11 | title = China created what it claims is the first AI news anchor — watch it in action here | last = Hamilton | first = Isobel Asher | date = 2018-11-09 | access-date = 2020-07-13 }} </ref> Neither the [[w:speech synthesis]] used nor the gesturing of the digital look-alike anchors was good enough to deceive the watcher into mistaking them for real humans imaged with a TV camera.
* '''2018''' | action | [https://schiff.house.gov/imo/media/doc/2018-09%20ODNI%20Deep%20Fakes%20letter.pdf '''Deep Fakes letter to the Office of the Director of National Intelligence''' at schiff.house.gov], a letter sent to the [[w:Director of National Intelligence]] on 2018-09-13 by congresspeople [[w:Adam Schiff]], [[w:Stephanie Murphy]] and [[w:Carlos Curbelo]], requesting that a report be compiled on the synthetic human-like fakes situation, what the threats are and what the solutions could be.<ref group="1st seen in">[https://uk.pcmag.com/security/117402/us-lawmakers-ai-generated-fake-videos-may-be-a-security-threat '''''US Lawmakers: AI-Generated Fake Videos May Be a Security Threat''''' at uk.pcmag.com], 2018-09-13 reporting by Michael Kan</ref>
* '''2018''' | controversy / demonstration | The [[w:deepfake]]s controversy surfaced, in which [[w:Pornographic film|porn video]]s were doctored utilizing [[w:deep learning|w:deep machine learning]] so that the face of the actress was replaced by the software's opinion of what another person's face would look like in the same pose and lighting.
* '''2017''' | science | '''[http://grail.cs.washington.edu/projects/AudioToObama/ 'Synthesizing Obama: Learning Lip Sync from Audio' at grail.cs.washington.edu]'''. At SIGGRAPH 2017 Supasorn Suwajanakorn et al. of the [[w:University of Washington]] presented an audio-driven digital look-alike of the upper torso of Barack Obama. It was driven only by a voice track as source data for the animation, after the training phase to acquire [[w:lip sync]] and wider facial information from [[w:training material]] consisting of 2D videos with audio had been completed.<ref name="Suw2017">{{Citation | last = Suwajanakorn | first = Supasorn | last2 = Seitz | first2 = Steven | last3 = Kemelmacher-Shlizerman | first3 = Ira | title = Synthesizing Obama: Learning Lip Sync from Audio | publisher = [[University of Washington]] | year = 2017 | url = http://grail.cs.washington.edu/projects/AudioToObama/ | access-date = 2020-07-13 }} </ref> <font color="green">'''Relevancy: certain'''</font>
* '''2016''' | movie | '''[[w:Rogue One]]''' is a Star Wars film for which digital look-alikes of actors [[w:Peter Cushing]] and [[w:Carrie Fisher]] were made. In the film their appearance would appear to be of the same age as the actors were during the filming of the original 1977 ''[[w:Star Wars (film)]]'' film.
* '''2016''' | science / demonstration | [[w:DeepMind]]'s [[w:WaveNet]], owned by [[w:Google]], also demonstrated the ability to steal people's voices.
[[File:Adobe Corporate Logo.png|thumb|right|300px|[[w:Adobe Inc.]]'s logo. We can thank Adobe for publicly demonstrating their sound-like-anyone-machine in '''2016''' before an implementation was sold to criminal organizations.]]
{{#ev:youtube|I3l4XLZ59iw|420px|left|'''''#[[w:Adobe Voco]]. Adobe Audio Manipulator Sneak Peak with [[w:Jordan Peele]]''''' (at Youtube.com). November 2016 demonstration of Adobe's unreleased sound-like-anyone-machine, the '''[[w:Adobe Voco]]''', at the [[w:Adobe MAX]] 2016 event in [[w:San Diego]], [[w:California]]. The original Adobe Voco '''required 20 minutes of sample''' to '''thieve a voice'''.}}
* '''<font color="red">2016</font>''' | <font color="red">science</font> and demonstration | '''[[w:Adobe Inc.]]''' publicly demonstrates '''[[w:Adobe Voco]]''', a '''sound-like-anyone machine''': [https://www.youtube.com/watch?v=I3l4XLZ59iw '#VoCo. Adobe Audio Manipulator Sneak Peak with Jordan Peele | Adobe Creative Cloud' on Youtube]. The original Adobe Voco required '''20 minutes''' of sample '''to thieve a voice'''. <font color="green">'''Relevancy: certain'''</font>
* '''2016''' | science | '''[http://www.niessnerlab.org/projects/thies2016face.html 'Face2Face: Real-time Face Capture and Reenactment of RGB Videos' at Niessnerlab.org]''' A paper (with videos) on the semi-real-time 2D video manipulation with gesture forcing and lip sync forcing synthesis by Thies et al., Stanford. <font color="green">'''Relevancy: certain'''</font>
* '''2016''' | music video | [https://www.youtube.com/watch?v=tMQHAy0HUDo '''''Plug''''' by Kube at youtube.com] - a 2016 music video by [[w:Kube (rapper)]] ([[w:fi:Kube]]) that showed deepfake-like technology this early. The video was uploaded on 2016-09-15 and is directed by Faruk Nazeri.
* '''2015''' | Science | [https://arxiv.org/abs/1411.7766v3 '''''Deep Learning Face Attributes in the Wild''''' at arxiv.org], presented at the 2015 [[w:International Conference on Computer Vision]].
* '''2015''' | movie | In ''[[w:Furious 7]]'' a digital look-alike of the actor [[w:Paul Walker]], who died in an accident during the filming, was made by [[w:Weta Digital]] to enable the completion of the film.<ref name="thr2015"> {{cite web | url = http://www.hollywoodreporter.com/behind-screen/furious-7-how-peter-jacksons-784157 | title = 'Furious 7' and How Peter Jackson's Weta Created Digital Paul Walker | last = Giardina | first = Carolyn | date = 2015-03-25 | work=[[The Hollywood Reporter]] | access-date = 2020-07-13 }} </ref>
* '''2014''' | science | [[w:Ian Goodfellow]] et al. presented the principles of the '''[[w:generative adversarial network]]'''. '''GAN'''s made the headlines in early 2018 with the [[w:deepfake]]s controversies.
* '''2013''' | demonstration | At the 2013 SIGGRAPH [[w:Activision]] and USC presented "Digital Ira", a [[w:real time computing|real-time]] digital face look-alike of Ari Shapiro, an ICT USC research scientist,<ref name="reform_youtube2015"> {{cite AV media | title = ReForm - Hollywood's Creating Digital Clones | medium = youtube | publisher = The Creators Project | date = 2020-07-13 | url = https://www.youtube.com/watch?v=lTC3k9Iv4r0 }} </ref> utilizing the USC light stage X by Ghosh et al. for both reflectance field and motion capture.<ref name="Deb2013">{{cite web | last = Debevec | first = Paul | title = Digital Ira SIGGRAPH 2013 Real-Time Live | url = http://gl.ict.usc.edu/Research/DigitalIra/ | accessdate = 2017-07-13}} </ref> The end result, both precomputed and rendered in real time with a modern game [[w:Graphics processing unit|w:GPU]], is shown [http://gl.ict.usc.edu/Research/DigitalIra/ here] and looks fairly realistic.
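The principles Goodfellow et al. presented in 2014 are captured by the GAN minimax objective: a generator ''G'', which maps noise ''z'' to fakes, and a discriminator ''D'', which estimates the probability that its input is real, play a two-player game:

```latex
% GAN value function (Goodfellow et al., 2014): D is trained to tell real
% samples x ~ p_data from fakes G(z), z ~ p_z, while G is trained to fool D.
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z(z)}\left[\log\left(1 - D(G(z))\right)\right]
```

At the game's equilibrium the generator's output distribution matches the data distribution, which is why GAN-made faces became hard to tell from photographs.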
* '''2013''' | demonstration | '''[https://ict.usc.edu/pubs/Scanning%20and%20Printing%20a%203D%20Portrait%20of%20President%20Barack%20Obama.pdf 'Scanning and Printing a 3D Portrait of President Barack Obama' at ict.usc.edu]'''. A 7D model and a 3D bust were made of President Obama with his consent. <font color="green">'''Relevancy: certain'''</font>
* '''2011''' | <font color="green">'''Law in Finland'''</font> | Distribution, attempted distribution and also possession of '''synthetic [[w:Child sexual abuse material|CSAM]]''' was '''criminalized''' on Wednesday 2011-06-01, upon the initiative of the [[w:Vanhanen II Cabinet]]. These protections against CSAM were moved into 19 §, 20 § and 21 § of Chapter 20 when the [[Law on sexual offences in Finland 2023]] was improved and gathered into Chapter 20 upon the initiative of the [[w:Marin Cabinet]].

== 2000's synthetic human-like fakes ==
* '''2010''' | movie | [[w:Walt Disney Pictures]] released a sci-fi sequel entitled ''[[w:Tron: Legacy]]'' with a digitally rejuvenated digital look-alike made of the actor [[w:Jeff Bridges]] playing the [[w:antagonist]] [[w:List of Tron characters#CLU|w:CLU]].
* '''2009''' | movie | A digital look-alike of a younger [[w:Arnold Schwarzenegger]] was made for the movie ''[[w:Terminator Salvation]]'', though the end result was critiqued as unconvincing. Facial geometry was acquired from a 1984 mold of Schwarzenegger.
* '''2009''' | demonstration | [http://www.ted.com/talks/paul_debevec_animates_a_photo_real_digital_face.html Paul Debevec: ''Animating a photo-realistic face'' at ted.com] Debevec et al. presented new digital likenesses, made by [[w:Image Metrics]], this time of actress [[w:Emily O'Brien]], whose reflectance was captured with the USC light stage 5. At 00:04:59 you can see two clips, one with the real Emily shot with a real camera and one with a digital look-alike of Emily, shot with a simulation of a camera - <u>which is which is difficult to tell</u>. Bruce Lawmen was scanned using USC light stage 6 in a still position and also recorded running there on a [[w:treadmill]]. Many, many digital look-alikes of Bruce are seen running fluently and looking natural at the ending sequence of the TED talk video.<ref name="Deb2009">[http://www.ted.com/talks/paul_debevec_animates_a_photo_real_digital_face.html In this TED talk video] at 00:04:59 you can see ''two clips, one with the real Emily shot with a real camera and one with a digital look-alike of Emily, shot with a simulation of a camera - <u>Which is which is difficult to tell</u>''. Bruce Lawmen was scanned using USC light stage 6 in still position and also recorded running there on a [[w:treadmill]]. Many, many digital look-alikes of Bruce are seen running fluently and natural looking at the ending sequence of the TED talk video.</ref> The motion looks fairly convincing contrasted to the clunky run in [[w:Animatrix#Final Flight of the Osiris|w:''Animatrix: Final Flight of the Osiris'']], which was [[w:state-of-the-art]] in 2003, if photorealism was the intention of the [[w:animators]].
* '''2004''' | movie | The '''[[w:Spider-man 2]]''' (and '''[[w:Spider-man 3]]''', 2007) films. Relevancy: The films include a [[Synthetic human-like fakes#Digital look-alike|digital look-alike]] made of actor [[w:Tobey Maguire]] by [[w:Sony Pictures Imageworks]].<ref name="Pig2005">{{cite web | last = Pighin | first = Frédéric | title = Siggraph 2005 Digital Face Cloning Course Notes | url = http://pages.cs.wisc.edu/~lizhang/sig-course-05-face/COURSE_9_DigitalFaceCloning.pdf | accessdate = 2020-06-26}} </ref>
* '''2003''' | short film | [[w:The Animatrix#Final Flight of the Osiris|w:''The Animatrix: Final Flight of the Osiris'']], state-of-the-art would-be human likenesses that did not quite fool the watcher, made by [[w:Square Pictures#Square Pictures|w:Square Pictures]].
[[File:The-matrix-logo.svg|thumb|left|300px|Logo of the [[w:The Matrix (franchise)]]]] [[File:BSSDF01_400.svg|thumb|left|300px|Traditional [[w:Bidirectional reflectance distribution function|w:BRDF]] vs. [[w:subsurface scattering|subsurface scattering]] inclusive BSSRDF i.e. [[w:Bidirectional scattering distribution function#Overview of the BxDF functions|w:Bidirectional scattering-surface reflectance distribution function]]. <br/><br/> An analytical BRDF must take into account the subsurface scattering, or the end result '''will not pass human testing'''.]] * '''2003''' | movie(s) | The '''[[w:Matrix Reloaded]]''' and '''[[w:Matrix Revolutions]]''' films. Relevancy: '''First public display''' of '''[[synthetic human-like fakes#Digital look-alikes|digital look-alikes]]''' that are virtually '''indistinguishable from''' the '''real actors'''. [https://www.researchgate.net/publication/215518319_Universal_Capture_-_Image-based_Facial_Animation_for_The_Matrix_Reloaded ''''''Universal Capture - Image-based Facial Animation for "The Matrix Reloaded"'''''' at researchgate.net] (2003) {{#ev:youtube|3qIXIHAmcKU|640px|right|Music video for '''''Bullet''''' by [[w:Covenant (band)|w:Covenant]] from 2002. Here you can observe the classic "''skin looks like cardboard''"-bug that stopped the pre-reflectance capture era versions from passing human testing.}} * '''2002''' | music video | '''[https://www.youtube.com/watch?v=3qIXIHAmcKU 'Bullet' by Covenant on Youtube]''' by [[w:Covenant (band)]] from their album [[w:Northern Light (Covenant album)]]. Relevancy: Contains the best upper-torso digital look-alike of Eskil Simonsson (vocalist) that their organization could procure at the time. Here you can observe the '''classic "''skin looks like cardboard''"-bug''' (assuming this was not intended) that '''thwarted efforts to''' make digital look-alikes that '''pass human testing''' before the '''reflectance capture and dissection in 1999''' by [[w:Paul Debevec]] et al. 
at the [[w:University of Southern California]] and the subsequent development of the '''"Analytical [[w:bidirectional reflectance distribution function|w:BRDF]]"''' by ESC Entertainment, a company set up for the '''sole purpose''' of '''making the cinematography''' for the 2003 films Matrix Reloaded and Matrix Revolutions '''possible''', led by George Borshukov. == 1990's synthetic human-like fakes == [[File:Institute for Creative Technologies (logo).jpg|thumb|left|156px|Logo of the '''[[w:Institute for Creative Technologies]]''' founded in 1999 at the [[w:University of Southern California]] by the [[w:United States Army]]]] * <font color="red">'''1999'''</font> | <font color="red">'''science'''</font> | '''[http://dl.acm.org/citation.cfm?id=344855 'Acquiring the reflectance field of a human face' paper at dl.acm.org]''' [[w:Paul Debevec]] et al. of [[w:University of Southern California|w:USC]] did the '''first known reflectance capture''' of '''the human face''' with their extremely simple [[w:light stage]]. They presented their method and results at [[w:SIGGRAPH]] 2000. The scientific breakthrough required finding the [[w:subsurface scattering|w:subsurface light component]] (the simulated models glow slightly from within), which can be found using the knowledge that light reflected from the oil-to-air layer retains its [[w:Polarization (waves)|polarization]] while the subsurface light loses it. Thus, equipped with only a movable light source, a movable video camera, two polarizers and a computer program doing relatively simple math, the last piece required to reach photorealism was acquired.<ref name="Deb2000"/> * <font color="red">'''1999'''</font> | <font color="red">'''institute founded'''</font> | The '''[[w:Institute for Creative Technologies]]''' was founded by the [[w:United States Army]] at the [[w:University of Southern California]]. 
It collaborates with the [[w:United States Army Futures Command]], [[w:United States Army Combat Capabilities Development Command]], [[w:Combat Capabilities Development Command Soldier Center]] and [[w:United States Army Research Laboratory]].<ref name="ICT-about">https://ict.usc.edu/about/</ref> In 2016 [[w:Hao Li]] was appointed to direct the institute. * '''1997''' | '''technology / science''' | [https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/human/bregler-sig97.pdf ''''Video rewrite: Driving visual speech with audio'''' at www2.eecs.berkeley.edu]<ref name="Bregler1997" /><ref group="1st seen in" name="Bohacek-Farid-2022"> PROTECTING PRESIDENT ZELENSKYY AGAINST DEEP FAKES https://arxiv.org/pdf/2206.12043.pdf </ref> Christoph Bregler, Michele Covell and Malcolm Slaney presented their work at ACM SIGGRAPH 1997. [https://www.dropbox.com/sh/s4l00z7z4gn7bvo/AAAP5oekFqoelnfZYjS8NQyca?dl=0 Download video evidence of ''Video rewrite: Driving visual speech with audio'' by Bregler et al. 1997 from dropbox.com], [http://chris.bregler.com/videorewrite/ view the author's site at chris.bregler.com], [https://dl.acm.org/doi/10.1145/258734.258880 paper at dl.acm.org], [https://www.researchgate.net/publication/220720338_Video_Rewrite_Driving_Visual_Speech_with_Audio paper at researchgate.net] * '''1994''' | movie | [[w:The Crow (1994 film)]] was the first film production to make use of [[w:digital compositing]] of a computer-simulated representation of a face onto scenes filmed using a [[w:body double]]. Necessity was the muse, as [[w:Brandon Lee]], the actor portraying the protagonist, was tragically killed in an accident on set during filming. == 1970's synthetic human-like fakes == {{#ev:vimeo|16292363|480px|right|''[[w:A Computer Animated Hand|w:A Computer Animated Hand]]'' is a 1972 short film by [[w:Edwin Catmull]] and [[w:Fred Parke]]. 
This was the first time that [[w:computer-generated imagery]] was used in film to animate moving human likenesses.}} * '''1976''' | movie | ''[[w:Futureworld]]'' reused parts of ''A Computer Animated Hand'' on the big screen. * '''1972''' | entertainment | '''[https://vimeo.com/59434349 'A Computer Animated Hand' on Vimeo]'''. [[w:A Computer Animated Hand]] by [[w:Edwin Catmull]] and [[w:Fred Parke]]. Relevancy: This was the '''first time''' that [[w:computer-generated imagery]] was used in film to '''animate''' a moving '''human-like appearance'''. * '''1971''' | science | '''[https://interstices.info/images-de-synthese-palme-de-la-longevite-pour-lombrage-de-gouraud/ 'Images de synthèse : palme de la longévité pour l’ombrage de Gouraud' (still photos)]'''. [[w:Henri Gouraud (computer scientist)]] made the first [[w:computer graphics]] [[w:geometry]] [[w:digitization]] and representation of a human face. The model was his wife, Sylvie Gouraud. The 3D model was a simple [[w:wire-frame model]] and he applied [[w:Gouraud shading]] to produce the '''first known representation''' of '''human-likeness''' on a computer.<ref>{{cite web|title=Images de synthèse : palme de la longévité pour l'ombrage de Gouraud|url=http://interstices.info/jcms/c_25256/images-de-synthese-palme-de-la-longevite-pour-lombrage-de-gouraud}}</ref> == 1960's synthetic human-like fakes == * '''1961''' | demonstration | The first singing by a computer was performed by an [[w:IBM 704]]; the song was ''[[w:Daisy Bell]]'', written in 1892 by British songwriter [[w:Harry Dacre]]. Go to [[Mediatheque#1961]] to view. == 1930's synthetic human-like fakes == [[File:Homer Dudley (October 1940). "The Carrier Nature of Speech". Bell System Technical Journal, XIX(4);495-515. 
-- Fig.5 The voder being demonstrated at the New York World's Fair.jpg|thumb|left|300px|'''[[w:Voder]]''' demonstration pavilion at the [[w:1939 New York World's Fair]]]] * '''1939''' | demonstration | The '''[[w:Voder]]''' (''Voice Operating Demonstrator'') from the [[w:Bell Labs|w:Bell Telephone Laboratory]] was the first time that [[w:speech synthesis]] was done electronically, by breaking speech down into its acoustic components. It was invented by [[w:Homer Dudley]] in 1937–1938 and developed from his earlier work on the [[w:vocoder]]. (Wikipedia) == 1770's synthetic human-like fakes == [[File:Kempelen Speakingmachine.JPG|right|thumb|300px|A replica of [[w:Wolfgang von Kempelen]]'s [[w:Wolfgang von Kempelen's Speaking Machine]], built 2007–09 at the Department of [[w:Phonetics]], [[w:Saarland University]], [[w:Saarbrücken]], Germany. This machine added models of the tongue and lips, enabling it to produce [[w:consonant]]s as well as [[w:vowel]]s.]] * '''1791''' | science | '''[[w:Wolfgang von Kempelen's Speaking Machine]]''' of [[w:Wolfgang von Kempelen]] of [[w:Pressburg]], [[w:Hungary]], described in a 1791 paper, was [[w:bellows]]-operated.<ref>''Mechanismus der menschlichen Sprache nebst der Beschreibung seiner sprechenden Maschine'' ("Mechanism of the human speech with description of its speaking machine", J. B. Degen, Wien).</ref> This machine added models of the tongue and lips, enabling it to produce [[w:consonant]]s as well as [[w:vowel]]s. 
(based on [[w:Speech synthesis#History]]) * '''1779''' | science / discovery | [[w:Christian Gottlieb Kratzenstein]] won the first prize in a competition announced by the [[w:Russian Academy of Sciences]] for '''models''' he built of the '''human [[w:vocal tract]]''' that could produce the five long '''[[w:vowel]]''' sounds.<ref name="Helsinki"> [http://www.acoustics.hut.fi/publications/files/theses/lemmetty_mst/chap2.html History and Development of Speech Synthesis], Helsinki University of Technology, Retrieved on November 4, 2006 </ref> (Based on [[w:Speech synthesis#History]]) ----
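The polarization trick in the 1999 reflectance capture entry above is simple enough to sketch: the cross-polarized photograph blocks the surface (oil-to-air) reflection, leaving only the depolarized subsurface light, so subtracting it from the parallel-polarized photograph isolates the specular component. The following is a minimal illustrative sketch in numpy; the function and variable names are assumptions for illustration, not code from Debevec et al.:

```python
import numpy as np

def separate_reflectance(parallel_pol: np.ndarray, cross_pol: np.ndarray):
    """Split skin reflectance into specular and subsurface (diffuse)
    components from two photographs of the same face taken through
    a polarizing filter oriented parallel vs. crossed to the source.

    Surface reflection retains the source's polarization, so the
    crossed filter blocks it; subsurface light is depolarized and
    passes either filter equally.
    """
    # The cross-polarized image contains only subsurface light,
    # so it serves directly as the diffuse estimate.
    diffuse = cross_pol
    # Subtracting it from the parallel-polarized image leaves the
    # surface (specular) reflection; clip away sensor-noise negatives.
    specular = np.clip(parallel_pol - cross_pol, 0.0, None)
    return specular, diffuse
```

This per-pixel subtraction is the "extremely simple math" the entry refers to; a real light-stage pipeline additionally repeats it for every light direction to build the full reflectance field.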
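The [[w:Gouraud shading]] used in the 1971 entry above can likewise be sketched in a few lines: lighting is evaluated only at the vertices of each polygon, and the resulting colors are linearly interpolated across the interior. A minimal illustrative sketch (names are assumptions, not Gouraud's original code):

```python
import numpy as np

def gouraud_shade(vertex_colors: np.ndarray, barycentric: np.ndarray) -> np.ndarray:
    """Gouraud shading of one point inside a triangle.

    vertex_colors: (3, 3) array, one RGB color per vertex (already lit).
    barycentric:   (3,) barycentric coordinates of the point to shade.

    Returns the linearly interpolated color at that point.
    """
    # A barycentric-weighted sum of the three vertex colors.
    return barycentric @ vertex_colors
```

At a vertex (barycentric coordinates like (1, 0, 0)) the vertex color is returned exactly; at the centroid the three vertex colors are averaged, which is what smooths the faceted look of a flat-shaded wire-frame model.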