Synthetic human-like fakes



* '''1972''' | entertainment | '''[https://vimeo.com/59434349 'A Computer Animated Hand' on Vimeo]'''. [[w:A Computer Animated Hand]] by [[w:Edwin Catmull]] and [[w:Fred Parke]]. Relevancy: This was the '''first time''' that [[w:computer-generated imagery|computer-generated imagery]] was used in film to '''animate''' a moving '''human-like appearance'''.
* '''1976''' | movie | ''[[w:Futureworld]]'' reused parts of ''A Computer Animated Hand'' on the big screen.


=== 1990's ===
* '''1994''' | movie | [[w:The Crow (1994 film)|The Crow]] was the first film production to make use of [[w:digital compositing]] of a computer-simulated representation of a face onto scenes filmed using a [[w:body double]]. Necessity was the muse, as the actor [[w:Brandon Lee]], portraying the protagonist, had been accidentally killed on set during filming.


[[File:BSSDF01_400.svg|thumb|left|300px|Traditional [[w:Bidirectional reflectance distribution function|BRDF]] vs. [[w:subsurface scattering|subsurface scattering]] inclusive BSSRDF i.e. [[w:Bidirectional scattering distribution function#Overview of the BxDF functions|Bidirectional scattering-surface reflectance distribution function]]. An analytical BRDF must take into account the subsurface scattering, or the end result '''will not pass human testing'''.]]
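As a rough sketch of the distinction the figure illustrates (standard rendering-literature notation, not anything specific to this wiki): a BSSRDF <math>S</math> lets light enter the skin at one point <math>x_i</math> and leave it at another point <math>x_o</math>, so the outgoing radiance is integrated over the surrounding surface area <math>A</math> as well as over incoming directions <math>\Omega</math>:

:<math>L_o(x_o, \omega_o) = \int_A \int_{\Omega} S(x_i, \omega_i; x_o, \omega_o)\, L_i(x_i, \omega_i)\, (n \cdot \omega_i)\, \mathrm{d}\omega_i\, \mathrm{d}A(x_i)</math>

A plain BRDF <math>f_r(x, \omega_i, \omega_o)</math> is the special case where light is assumed to leave from the same point it arrived at, which is why skin rendered with only a BRDF lacks the soft translucency that subsurface scattering gives real faces.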
=== 2010's ===
* '''2013''' | demonstration | '''[https://ict.usc.edu/pubs/Scanning%20and%20Printing%20a%203D%20Portrait%20of%20President%20Barack%20Obama.pdf 'Scanning and Printing a 3D Portrait of President Barack Obama' at ict.usc.edu]'''. A 3D model and a 3D-printed bust were made of President Obama with his consent. <font color="green">'''Relevancy: certain'''</font>
* '''2013''' | demonstration | At SIGGRAPH 2013 [[w:Activision]] and USC presented "Digital Ira", a [[w:real time computing|real-time]] digital face look-alike of Ari Shapiro, an ICT USC research scientist,<ref name="reform_youtube2015">
{{cite AV media
| people =
| title = ReForm - Hollywood's Creating Digital Clones
| medium = youtube
| publisher = The Creators Project
| location =
| date = 2020-07-13
| url = https://www.youtube.com/watch?v=lTC3k9Iv4r0
}}
</ref> utilizing the USC light stage X by Ghosh et al. for both reflectance field and motion capture.<ref name="Deb2013">{{cite web
  | last = Debevec
  | first = Paul
  | title = Digital Ira SIGGRAPH 2013 Real-Time Live
  | website =
  | date =
  | url = http://gl.ict.usc.edu/Research/DigitalIra/
  | format =
  | doi =
  | accessdate =  2017-07-13}}
</ref> The end result, both precomputed and rendered in real time on a modern game [[w:Graphics processing unit|GPU]], is shown [http://gl.ict.usc.edu/Research/DigitalIra/ here] and looks fairly realistic.
* '''2014''' | science | [[w:Ian Goodfellow]] et al. presented the principles of the [[w:generative adversarial network]] (GAN), sketched below. GANs made the headlines in early 2018 with the [[w:deepfake]]s controversies.
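As a rough sketch of those principles, in the notation of the 2014 paper: training is a two-player minimax game in which a generator <math>G</math> maps random noise <math>z</math> to fake samples and a discriminator <math>D</math> tries to tell the fakes from real data:

:<math>\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_\mathrm{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]</math>

When training succeeds, <math>G</math> produces samples the discriminator can no longer distinguish from real data, which is the capability the later face- and voice-faking tools build on.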


* '''2016''' | science | '''[http://www.niessnerlab.org/projects/thies2016face.html 'Face2Face: Real-time Face Capture and Reenactment of RGB Videos' at Niessnerlab.org]'''. A paper (with videos) on semi-real-time 2D video manipulation with gesture forcing and lip-sync forcing synthesis by Thies et al., Stanford. <font color="green">'''Relevancy: certain'''</font>


* '''2016''' | science / demonstration | [[w:DeepMind]]'s [[w:WaveNet]], owned by [[w:Google]], also demonstrated the ability to steal people's voices (sketched below).
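As a rough sketch of how such a model imitates a voice (notation roughly follows the WaveNet paper): the network models raw audio one sample at a time, autoregressively, conditioned on extra information <math>h</math> such as a speaker identity:

:<math>p(x \mid h) = \prod_{t=1}^{T} p(x_t \mid x_1, \ldots, x_{t-1}, h)</math>

Condition on a target speaker, learned from recordings of that speaker, and the same model generates new speech in that person's voice.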
[[File:Adobe Corporate Logo.png|thumb|right|300px|[[w:Adobe Inc.]]'s logo. We can thank Adobe for publicly demonstrating their sound-like-anyone-machine in '''2016''' before an implementation was sold to criminal organizations.]]
* '''<font color="red">2016</font>''' | <font color="red">science</font> and demonstration | '''[[w:Adobe Inc.]]''' publicly demonstrates '''[[w:Adobe Voco]]''', a '''sound-like-anyone machine''' [https://www.youtube.com/watch?v=I3l4XLZ59iw '#VoCo. Adobe Audio Manipulator Sneak Peak with Jordan Peele | Adobe Creative Cloud' on Youtube]. THe original Adobe Voco required '''20 minutes''' of sample '''to thieve a voice'''. <font color="green">'''Relevancy: certain'''</font>.


* '''2017''' | science | '''[http://grail.cs.washington.edu/projects/AudioToObama/ 'Synthesizing Obama: Learning Lip Sync from Audio' at grail.cs.washington.edu]'''. At SIGGRAPH 2017, Supasorn Suwajanakorn et al. of the [[w:University of Washington]] presented an audio-driven digital look-alike of the upper torso of Barack Obama. After a training phase that acquired [[w:lip sync]] and wider facial information from [[w:training material]] consisting of 2D videos with audio, the animation was driven only by a voice track as source data.<ref name="Suw2017">{{Citation
  | year = 2017
  | url = http://grail.cs.washington.edu/projects/AudioToObama/
  | access-date = 2020-07-13 }}
</ref> <font color="green">'''Relevancy: certain'''</font>


* '''2018''' | controversy / demonstration | The [[w:deepfake]]s controversy surfaced, where [[w:Pornographic film|porn video]]s were doctored utilizing [[w:deep learning|deep machine learning]] so that the face of the actress was replaced by the software's opinion of what another person's face would look like in the same pose and lighting.


* '''<font color="red">2018</font>''' | <font color="red">science</font> and demonstration | '''[[w:Adobe Inc.]]''' publicly demonstrates '''[[w:Adobe Voco]]''', a '''sound-like-anyone machine''' [https://www.youtube.com/watch?v=I3l4XLZ59iw '#VoCo. Adobe Audio Manipulator Sneak Peak with Jordan Peele | Adobe Creative Cloud' on Youtube]. THe original Adobe Voco required '''20 minutes''' of sample '''to thieve a voice'''. <font color="green">'''Relevancy: certain'''</font>.
* '''2018''' | demonstration | At the 2018 [[w:World Internet Conference]] in [[w:Wuzhen]] the [[w:Xinhua News Agency]] presented two digital look-alikes made to the resemblance of its real news anchors Qiu Hao ([[w:Chinese language]])<ref name="TheGuardian2018">
{{cite web
| url = https://www.theguardian.com/world/2018/nov/09/worlds-first-ai-news-anchor-unveiled-in-china
| title = World's first AI news anchor unveiled in China
| last = Kuo
| first =  Lily
| date = 2018-11-09
| website =
| access-date = 2020-07-13
| quote = }}
</ref> and Zhang Zhao ([[w:English language]]). The digital look-alikes were made in conjunction with [[w:Sogou]].<ref name="BusinessInsider2018">
{{cite web
| url = https://businessinsider.com/ai-news-anchor-created-by-china-xinhua-news-agency-2018-11
| title = China created what it claims is the first AI news anchor — watch it in action here
| last = Hamilton
| first =  Isobel Asher
| date = 2018-11-09
| website =
| access-date = 2020-07-13
| quote = }}
</ref> Neither the [[w:speech synthesis]] used nor the gesturing of the digital look-alike anchors was good enough to deceive a viewer into mistaking them for real humans imaged with a TV camera.


[[File:GoogleLogoSept12015.png|thumb|right|300px|[[w:Google|Google]]'s logo. Google Research demonstrated their '''[https://google.github.io/tacotron/publications/speaker_adaptation/ sound-like-anyone-machine]''' at the '''2018''' [[w:Conference on Neural Information Processing Systems|Conference on Neural Information Processing Systems]] (NeurIPS). It requires only 5 seconds of sample audio to steal a voice.]]




* '''2019''' | crime | [https://www.washingtonpost.com/technology/2019/09/04/an-artificial-intelligence-first-voice-mimicking-software-reportedly-used-major-theft/?noredirect=on 'An artificial-intelligence first: Voice-mimicking software reportedly used in a major theft'], a 2019 Washington Post article
* '''2019''' | action | [[w:Nvidia]] [[w:open source]]s [[w:StyleGAN]], a novel [[w:generative adversarial network]].<ref name="Medium2019">
{{Cite web
|url=https://medium.com/syncedreview/nvidia-open-sources-hyper-realistic-face-generator-stylegan-f346e1a73826 
|title= NVIDIA Open-Sources Hyper-Realistic Face Generator StyleGAN
|last=
|first=
|date= 2019-02-09
|website= [[Medium.com]]
|access-date= 2020-07-13
}}</ref>


* '''2019''' | demonstration | '''[https://www.thispersondoesnotexist.com/ 'Thispersondoesnotexist.com']''' (since February 2019) by Philip Wang. It showcases a [[w:StyleGAN]] at the task of making an endless stream of pictures that look like no-one in particular, but are eerily human-like. <font color="green">'''Relevancy: certain'''</font>
* '''2019''' | demonstration | '''[http://whichfaceisreal.com/ 'Which Face is real?' at whichfaceisreal.com]''' is an easily unnerving game by [http://ctbergstrom.com/ Carl Bergstrom] and [https://jevinwest.org/ Jevin West] where you need to '''try to distinguish''' from a pair of photos '''which is real and which is not'''. A part of the "tools" of the [https://callingbullshit.org/ Calling Bullshit] course taught at the [[w:University of Washington]]. <font color="green">'''Relevancy: certain'''</font>


* '''2019''' | In September 2019 [[w:Yle]], the Finnish [[w:public broadcasting company]], aired a piece of experimental [[w:journalism]]: [https://yle.fi/uutiset/3-10955498 a deepfake] of the President in office, [[w:Sauli Niinistö]], in its main news broadcast, to highlight the advancing disinformation technology and the problems that arise from it.
----




* '''2003''' | movie(s) | The '''[[w:Matrix Reloaded]]''' and '''[[w:Matrix Revolutions]]''' films. Relevancy: '''First public display''' of '''[[synthetic human-like fakes#Digital look-alikes|digital look-alikes]]''' that are virtually '''indistinguishable from''' the '''real actors'''.
* '''2003''' | short film | [[w:The Animatrix#Final Flight of the Osiris|''The Animatrix: Final Flight of the Osiris'']], a [[w:state-of-the-art]] attempt at human likenesses that does not quite fool the viewer, made by [[w:Square Pictures|Square Pictures]].


* '''2004''' | movie | The '''[[w:Spider-man 2]]''' (and '''[[w:Spider-man 3]]''', 2007) films. Relevancy: The films include a [[Synthetic human-like fakes#Digital look-alike|digital look-alike]] made of actor [[w:Tobey Maguire]] by [[w:Sony Pictures Imageworks]].<ref name="Pig2005">{{cite web


* '''2006''' | music video | '''[https://www.youtube.com/watch?v=hC_sqi9oocI 'John The Revelator' by Depeche Mode (official music video) on Youtube]''' by [[w:Depeche Mode]] from the single [[w:John the Revelator / Lilian]]. Relevancy: [[Biblical explanation - The books of Daniel and Revelations#Revelation 13|Book of Revelations]].
* '''2009''' | demonstration | Debevec et al. presented new digital likenesses, made by [[w:Image Metrics]], this time of actress [[w:Emily O'Brien]], whose reflectance was captured with the USC light stage 5.<ref name="Deb2009">[http://www.ted.com/talks/paul_debevec_animates_a_photo_real_digital_face.html In this TED talk video] at 00:04:59 you can see ''two clips, one with the real Emily shot with a real camera and one with a digital look-alike of Emily, shot with a simulation of a camera - <u>which is which is difficult to tell</u>''. Bruce Lawmen was scanned using USC light stage 6 in a still position and also recorded running on a [[w:treadmill]]. Many digital look-alikes of Bruce are seen running fluently and looking natural in the ending sequence of the TED talk video.</ref> The motion looks fairly convincing contrasted with the clunky run in [[w:The Animatrix#Final Flight of the Osiris|''The Animatrix: Final Flight of the Osiris'']], which was [[w:state-of-the-art]] in 2003 if photorealism was the intention of the animators.
* '''2009''' | movie | A digital look-alike of a younger [[w:Arnold Schwarzenegger]] was made for the movie ''[[w:Terminator Salvation]]'' though the end result was critiqued as unconvincing. Facial geometry was acquired from a 1984 mold of Schwarzenegger.
* '''2010''' | movie | [[w:Walt Disney Pictures]] released a sci-fi sequel entitled ''[[w:Tron: Legacy]]'' with a digitally rejuvenated look-alike of the actor [[w:Jeff Bridges]] playing the [[w:antagonist]] [[w:List of Tron characters#CLU|CLU]].


=== 2010's ===


* '''2013''' | music video | '''[https://www.youtube.com/watch?v=ZWrUEsVrdSU 'Before Your Very Eyes' by Atoms For Peace (official music video) on Youtube]''' by [[w:Atoms for Peace (band)]] from their album [[w:Amok (Atoms for Peace album)]]. Relevancy: Watch the video
* '''2015''' | movie | In ''[[w:Furious 7]]'' a digital look-alike of the actor [[w:Paul Walker]], who died in an accident during filming, was made by [[w:Weta Digital]] to enable the completion of the film.<ref name="thr2015">
{{cite web
| url = http://www.hollywoodreporter.com/behind-screen/furious-7-how-peter-jacksons-784157
| title = 'Furious 7' and How Peter Jackson's Weta Created Digital Paul Walker
| last = Giardina
| first =  Carolyn
| date = 2015-03-25
| work=[[The Hollywood Reporter]]
| access-date = 2020-07-13
| quote = }}
</ref>


* '''2016''' | music video |'''[https://www.youtube.com/watch?v=ElvLZMsYXlo 'Voodoo In My Blood' (official music video) by Massive Attack on Youtube]''' by [[w:Massive Attack]] and featuring [[w:Tricky]] from the album [[w:Ritual Spirit]]. Relevancy: '''How many machines''' can you see in the same frame at times? If you answered one, look harder and make a more educated guess.