<section begin=definitions-of-synthetic-human-like-fakes />
When the '''[[Glossary#No camera|camera does not exist]]''' but the subject imaged with a simulation of a (movie) camera deceives the watcher into believing it is some living or dead person, it is a '''[[#Digital look-alikes|digital look-alike]]'''.

When it cannot be determined by human testing or media forensics whether some fake voice is a synthetic fake of some person's voice or an actual recording of that person's real voice, it is a pre-recorded '''[[#Digital sound-alikes|digital sound-alike]]'''.
<section end=definitions-of-synthetic-human-like-fakes />
[[File:BlV1999-morphable-model-till-match-low-res-rip.png|thumb|left|460px|Image 2 (low resolution rip)
<small>Image 2 by Blanz and Vetter – Copyright ACM 1999 – http://dl.acm.org/citation.cfm?doid=311535.311556 – Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page.</small>]]
[[File:Saint John on Patmos.jpg|thumb|right|360px|link=Biblical explanation - The books of Daniel and Revelation|See <big>'''[[Biblical explanation - The books of Daniel and Revelation]]'''</big> for the advance warning for our time that we were given in the 6th century BC and again in the 1st century.
<br/><br/>
'Saint John on Patmos' pictures [[w:John of Patmos]] on [[w:Patmos]] writing down the visions to make the [[w:Book of Revelation]]. Picture from folio 17 of the [[w:Très Riches Heures du Duc de Berry]] (1412-1416) by the [[w:Limbourg brothers]]. Currently located at the [[w:Musée Condé]], 40 km north of Paris, France.]]
== Digital look-alikes ==
{{#ev:youtube|LWLadJFI8Pk|400px|right|It is recommended that you watch ''In Event of Moon Disaster - FULL FILM'' (2020) at the [https://moondisaster.org/ '''moondisaster.org''' project website] (where it has interactive portions) by the [https://virtuality.mit.edu/ Center for Advanced Virtuality] of the [[w:Massachusetts Institute of Technology|w:MIT]]}}
These industrially produced pornographic delusions are causing great human suffering, especially to their direct victims, but they are also tearing our communities and societies apart, sowing blind rage, perceptions of deepening chaos and feelings of powerlessness, and provoking violence. This '''hate illustration''' increases and strengthens hate thinking, hate speech and hate crimes, tears our fragile social constructions apart, and with time perverts humankind's view of humankind into an almost unrecognizable shape, unless we interfere with resolve.
=== List of possible naked digital look-alike attacks ===
* "''Unconscious and injected''"-attack (Digital look-alike gets "disease")
=== Age analysis and rejuvenating and aging syntheses ===
* [https://arxiv.org/abs/2002.03750 '''''An Overview of Two Age Synthesis and Estimation Techniques''''' at arxiv.org] [https://arxiv.org/pdf/2002.03750.pdf (.pdf)], submitted for review on 2020-01-26
* [https://www.sciencedirect.com/science/article/abs/pii/S0925231220309942 '''''Dual Reference Age Synthesis''''' at sciencedirect.com] [https://arxiv.org/pdf/1908.02671.pdf (preprint at arxiv.org)], published on 2020-10-21 in [[w:Neurocomputing (journal)]]
* [https://ieeexplore.ieee.org/document/6084154 '''''A simple automatic facial aging/rejuvenating synthesis method''''' at ieeexplore.ieee.org] [https://www.researchgate.net/publication/220755281_Automatic_Facial_AgingRejuvenating_Synthesis_Method read free at researchgate.net], published in the proceedings of the 2011 IEEE International Conference on Systems, Man and Cybernetics
* [https://ieeexplore.ieee.org/document/5406526 '''''Age Synthesis and Estimation via Faces: A Survey''''' at ieeexplore.ieee.org] (paywall) [https://www.researchgate.net/publication/46288561_Age_Synthesis_and_Estimation_via_Faces_A_Survey at researchgate.net], published November 2010
=== Temporal limit of digital look-alikes ===
[[File:Institut Lumière - CINEMATOGRAPHE Camera.jpg|thumb|left|120px|A picture of the 1895 [[w:Cinematograph]]]]

[[w:History of film technology]] has information on where this temporal limit lies.

Digital look-alikes cannot be used to attack people who existed before the technological invention of film. For moving pictures the breakthrough is attributed to [[w:Auguste and Louis Lumière]]'s [[w:Cinematograph]], premiered in Paris on 28 December '''1895''', though this was only the commercial and popular breakthrough, as even earlier moving pictures exist. (adapted from [[w:History of film]])
The '''[[w:Kinetoscope]]''' is an even earlier motion picture exhibition device. A prototype for the Kinetoscope was shown to a convention of the National Federation of Women's Clubs on May 20, 1891.<ref name="memory.loc.gov">
{{cite web
|publisher=[[w:Library of Congress]]
|website=Memory.loc.gov
|url=http://memory.loc.gov/ammem/edhtml/edmvhist.html
|title=Inventing Entertainment: The Early Motion Pictures and Sound Recordings of the Edison Companies
|access-date=2020-12-09
}}
</ref> The first public demonstration of the Kinetoscope was held at the [[w:Brooklyn Museum|Brooklyn Institute of Arts and Sciences]] on '''May 9''', '''1893'''. ([[w:Kinetoscope|Wikipedia]])<ref name="memory.loc.gov"/>
----
== Digital sound-alikes ==
[[File:Helsingin-Sanomat-2012-David-Martin-Howard-of-University-of-York-on-apporaching-digital-sound-alikes.jpg|right|thumb|338px|A picture of a newspaper clipping titled "''Voice-terrorist could mimic a leader''" from a 2012 [[w:Helsingin Sanomat]] article warning that the sound-like-anyone machines are approaching. Thanks to [https://pure.york.ac.uk/portal/en/researchers/david-martin-howard(ecfa9e9e-1290-464f-981a-0c70a534609e).html Prof. David Martin Howard] of the [[w:University of York]], UK and the anonymous editor for the heads-up.]]
Living people can defend<ref group="footnote" name="judiciary maybe not aware">Whether a suspect can defend against faked synthetic speech that sounds like him/her depends on how up-to-date the judiciary is. If no information and instructions about digital sound-alikes have been given to the judiciary, they likely will not believe the defense of denying that the recording is of the suspect's voice.</ref> themselves against digital sound-alikes by denying the things the digital sound-alike says if they are presented to the target, but dead people cannot. Digital sound-alikes offer criminals new disinformation attack vectors and wreak havoc on provability.
For these reasons the bannable '''raw materials''', i.e. covert voice models, '''[[Law proposals to ban covert modeling|should be prohibited by law]]''' in order to protect humans from abuse by criminal parties.

=== Documented digital sound-alike attacks ===
* Sound-like-anyone technology has found its way into the hands of criminals: in '''2019''' [[w:NortonLifeLock|Symantec]] researchers knew of 3 cases where the technology had been used for '''[[w:crime]]'''
** [https://www.bbc.com/news/technology-48908736 '''"Fake voices 'help cyber-crooks steal cash'"''' at bbc.com], July 2019 reporting<ref name="BBC2019">
{{cite web
|url= https://www.bbc.com/news/technology-48908736
|title= Fake voices 'help cyber-crooks steal cash'
|date= 2019-07-08
|website= [[w:bbc.com]]
|publisher= [[w:BBC]]
|access-date= 2020-07-22
}}
</ref>
** [https://www.washingtonpost.com/technology/2019/09/04/an-artificial-intelligence-first-voice-mimicking-software-reportedly-used-major-theft/ '''"An artificial-intelligence first: Voice-mimicking software reportedly used in a major theft"''' at washingtonpost.com] documents a [[w:fraud]] committed with a digital sound-like-anyone machine, September 2019 reporting.<ref name="WaPo2019">
{{cite web
|url= https://www.washingtonpost.com/technology/2019/09/04/an-artificial-intelligence-first-voice-mimicking-software-reportedly-used-major-theft/
|title= An artificial-intelligence first: Voice-mimicking software reportedly used in a major theft
|last= Harwell
|first= Drew
|date= 2019-09-04
|website= [[w:washingtonpost.com]]
|publisher= [[w:Washington Post]]
|access-date= 2020-07-22
}}
</ref>
----
* In '''2018''' at the '''[[w:Conference on Neural Information Processing Systems]]''' (NeurIPS) the work [http://papers.nips.cc/paper/7700-transfer-learning-from-speaker-verification-to-multispeaker-text-to-speech-synthesis 'Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis'] ([https://arxiv.org/abs/1806.04558 at arXiv.org]) was presented. The pre-trained model is able to steal voices from a sample of only '''5 seconds''' with almost convincing results.

Observe how good the "VCTK p240" system is at deceiving listeners into thinking that a person is doing the talking.
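The transfer-learning approach named above works by first training a speaker-verification encoder that maps any short utterance to a fixed-length embedding vector, and then conditioning the speech synthesizer on that embedding. A minimal sketch of the verification side of that idea, where the `embed` function is a hypothetical stand-in for the trained neural encoder (cosine-similarity scoring itself is standard in d-vector speaker verification):

```python
import numpy as np

def embed(utterance_features: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a trained speaker encoder: collapses a
    (frames x features) matrix into one L2-normalized embedding vector.
    A real encoder is a neural network, not a mean over frames."""
    v = utterance_features.mean(axis=0)
    return v / np.linalg.norm(v)

def same_speaker(emb_a: np.ndarray, emb_b: np.ndarray,
                 threshold: float = 0.75) -> bool:
    """Verification decision by cosine similarity of unit-length embeddings.
    The threshold is illustrative; real systems tune it on held-out data."""
    return float(np.dot(emb_a, emb_b)) >= threshold
```

Because embeddings of utterances by the same speaker cluster together, a synthesizer conditioned on one such embedding can imitate a voice from only seconds of audio, which is what makes the 5-second result above possible.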
{{#Widget:Iframe - Audio samples from Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis by Google Research}}
The Iframe above is transcluded from [https://google.github.io/tacotron/publications/speaker_adaptation/ 'Audio samples from "Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis"' at google.github.io], the audio samples of a sound-like-anyone machine presented at the 2018 [[w:NeurIPS]] conference by Google researchers.
=== Digital sing-alikes ===

The video to the right, [https://www.youtube.com/watch?v=0sR1rU3gLzQ 'This AI Clones Your Voice After Listening for 5 Seconds' by '2 minute papers' at YouTube], describes the voice thieving machine presented by Google Research at [[w:NeurIPS|w:NeurIPS]] 2018.

{{#ev:youtube|0sR1rU3gLzQ|640px|right|The video [https://www.youtube.com/watch?v=0sR1rU3gLzQ 'This AI Clones Your Voice After Listening for 5 Seconds' by '2 minute papers' at YouTube] describes the voice thieving machine by Google Research presented at [[w:NeurIPS|w:NeurIPS]] 2018.}}

As of 2020 the digital sing-alikes may not yet be here, but when we hear a faked singing voice and cannot hear that it is fake, then we will know. An ability to sing does not seem to add much hostile capability compared to the ability to thieve spoken word.
* [https://arxiv.org/abs/1910.11690 '''''Fast and High-Quality Singing Voice Synthesis System based on Convolutional Neural Networks''''' at arxiv.org], a 2019 singing voice synthesis technique using [[w:convolutional neural network|w:convolutional neural networks (CNN)]]. Accepted into the 2020 [[w:International Conference on Acoustics, Speech, and Signal Processing|International Conference on Acoustics, Speech, and Signal Processing (ICASSP)]].
* [http://compmus.ime.usp.br/sbcm/2019/papers/sbcm-2019-7.pdf '''''State of art of real-time singing voice synthesis''''' at compmus.ime.usp.br], presented at the 2019 [http://compmus.ime.usp.br/sbcm/2019/program/ 17th Brazilian Symposium on Computer Music]
* [http://theses.fr/2017PA066511 '''''Synthesis and expressive transformation of singing voice''''' at theses.fr] [https://www.theses.fr/2017PA066511.pdf (as .pdf)], a 2017 doctoral thesis by [http://theses.fr/227185943 Luc Ardaillon]
* [http://mtg.upf.edu/node/512 '''''Synthesis of the Singing Voice by Performance Sampling and Spectral Models''''' at mtg.upf.edu], a 2007 journal article in the [[w:IEEE Signal Processing Society]]'s Signal Processing Magazine
* [https://www.researchgate.net/publication/4295714_Speech-to-Singing_Synthesis_Converting_Speaking_Voices_to_Singing_Voices_by_Controlling_Acoustic_Features_Unique_to_Singing_Voices '''''Speech-to-Singing Synthesis: Converting Speaking Voices to Singing Voices by Controlling Acoustic Features Unique to Singing Voices''''' at researchgate.net], a November 2007 paper published in the IEEE conference on Applications of Signal Processing to Audio and Acoustics
* [[w:Category:Singing software synthesizers]]
----
* [https://www.forbes.com/sites/bernardmarr/2019/05/06/artificial-intelligence-can-now-copy-your-voice-what-does-that-mean-for-humans/#617f6d872a2a '''"Artificial Intelligence Can Now Copy Your Voice: What Does That Mean For Humans?"''' May 2019 reporting at forbes.com] on [[w:Baidu Research]]'s attempt at the sound-like-anyone machine demonstrated at the 2018 [[w:NeurIPS]] conference.
=== Temporal limit of digital sound-alikes ===
[[File:Edison_and_phonograph_edit1.jpg|thumb|left|210px|[[w:Thomas Edison]] and his early [[w:phonograph]]. Cropped from [[w:Library of Congress]] copy, ca. 1877, (probably 18 April 1878)]]

The temporal limit of whom, dead or living, the digital sound-alikes can attack is defined by the '''[[w:history of sound recording]]'''. The article starts by mentioning that the invention of the [[w:phonograph]] by [[w:Thomas Edison]] in '''1877''' is considered the start of sound recording.
The '''phonautograph''' is the earliest known device for recording [[w:sound]]. Previously, tracings had been obtained of the sound-producing vibratory motions of [[w:tuning forks]] and other objects by physical contact with them, but not of actual sound waves as they propagated through air or other media. Invented by Frenchman [[w:Édouard-Léon Scott de Martinville]], it was patented on March 25, '''1857'''.<ref name="NPR-Phonautograph">
{{Cite news
|url=https://www.npr.org/templates/story/story.php?storyId=89380697
|title=1860 'Phonautograph' Is Earliest Known Recording
|last=Flatow
|first=Ira
|date=April 4, 2008
|work=NPR
|access-date=2012-12-09
|language=en}}
</ref>
Apparently, it did not occur to anyone before the 1870s that the recordings, called '''phonautograms''', contained enough information about the sound that they could, in theory, be '''used to recreate it'''. Because the phonautogram tracing was an insubstantial two-dimensional line, direct physical playback was impossible in any case. Several phonautograms recorded '''before 1861''' were successfully played back as sound in '''2008''' by optically scanning them and using a computer to process the scans into digital audio files. ([[w:Phonautograph|Wikipedia]])
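The 2008 playback described above amounts to treating the optically scanned tracing as a sequence of stylus-displacement samples and rendering those as digital audio. A rough sketch of that final rendering step using only Python's standard library; the `trace` list here is a synthetic stand-in tone, not a real phonautogram scan:

```python
import math
import struct
import wave

def displacements_to_wav(displacements, path, sample_rate=44100):
    """Render a 1-D displacement trace as a 16-bit mono WAV file."""
    peak = max(abs(x) for x in displacements) or 1.0
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)      # mono
        wav.setsampwidth(2)      # 16-bit samples
        wav.setframerate(sample_rate)
        for x in displacements:
            sample = int(32767 * x / peak)       # normalize to full scale
            wav.writeframes(struct.pack("<h", sample))

# Stand-in trace: one second of a 440 Hz tone; a real pipeline would feed in
# displacement values measured from the optical scan of the phonautogram.
trace = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(44100)]
displacements_to_wav(trace, "phonautogram.wav")
```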
[[File:Spectrogram-19thC.png|thumb|right|640px|A [[w:spectrogram]] of a male voice saying 'nineteenth century']]
* '''2020''' | The winners of the [https://venturebeat.com/2020/06/12/facebook-detection-challenge-winners-spot-deepfakes-with-82-accuracy/ Deepfake Detection Challenge reach 82% accuracy in detecting synthetic human-like fakes]<ref name="VentureBeat2020">https://venturebeat.com/2020/06/12/facebook-detection-challenge-winners-spot-deepfakes-with-82-accuracy/</ref>
* '''2019''' | At the annual Finnish [[w:Ministry of Defence (Finland)|w:Ministry of Defence]]'s [https://www.defmin.fi/en/frontpage/overview/ministry_of_defence/departments_and_units/defence_policy_department/scientific_advisory_board_for_defence_%28matine%29#d779859d '''Scientific Advisory Board for Defence''' ('''MATINE''')] public research seminar, a research group presented their work [https://www.defmin.fi/files/4755/1315MATINE_seminaari_21.11.pdf '''''Synteettisen median tunnistus''''' at defmin.fi] (''Recognizing synthetic media''). They built on earlier work on how to automatically detect synthetic human-like fakes, and their work was funded with a grant from MATINE.
* '''2019''' | '''[[w:Conference on Neural Information Processing Systems|w:NeurIPS]]''' | [[w:Facebook, Inc.]] [https://spectrum.ieee.org/tech-talk/artificial-intelligence/machine-learning/facebook-ai-launches-its-deepfake-detection-challenge '''"Facebook AI Launches Its Deepfake Detection Challenge"''' at spectrum.ieee.org], [[w:IEEE Spectrum]]. More reporting at [https://venturebeat.com/2019/12/11/facebook-microsoft-and-others-launch-deepfake-detection-challenge/ '''''"Facebook, Microsoft, and others launch Deepfake Detection Challenge"''''' at venturebeat.com]
* '''2019''' | '''CVPR''' | [https://sites.google.com/view/mediaforensics2019/home '''2019''' CVPR: '''Workshop on Media Forensics''']
=== SSF! wiki proposed countermeasure to synthetic porn: Adequate Porn Watcher AI (transcluded) ===
Transcluded main contents from [[Adequate Porn Watcher AI (concept)]]
{{#lstx:Adequate Porn Watcher AI (concept)|See_also}}
=== Possible legal response: Outlawing digital sound-alikes (transcluded) ===

=== 2020's synthetic human-like fakes ===
* '''<font color="green">2020</font>''' | '''<font color="green">counter-measure</font>''' | On 2020-11-18 the [[w:Partnership on AI]] introduced the [https://incidentdatabase.ai/ '''''AI Incident Database''''' at incidentdatabase.ai].<ref name="PartnershipOnAI2020">https://www.partnershiponai.org/aiincidentdatabase/</ref>
* '''2020''' | reporting | [https://www.wired.co.uk/article/deepfake-porn-websites-videos-law "''Deepfake porn is now mainstream. And major sites are cashing in''" at wired.co.uk] by Matt Burgess. Published August 2020.
</ref>
* '''<font color="green">2018</font>''' | '''<font color="green">counter-measure</font>''' | In September 2018 Google added “'''involuntary synthetic pornographic imagery'''” to its '''ban list''', allowing anyone to request the search engine block results that falsely depict them as “nude or in a sexually explicit situation.”<ref name="WashingtonPost2018">
{{cite web
An analytical BRDF must take into account the subsurface scattering, or the end result '''will not pass human testing'''.]]
* '''2003''' | movie(s) | The '''[[w:Matrix Reloaded]]''' and '''[[w:Matrix Revolutions]]''' films. Relevancy: '''First public display''' of '''[[synthetic human-like fakes#Digital look-alikes|digital look-alikes]]''' that are virtually '''indistinguishable from''' the '''real actors'''. [https://www.researchgate.net/publication/215518319_Universal_Capture_-_Image-based_Facial_Animation_for_The_Matrix_Reloaded '''''Universal Capture - Image-based Facial Animation for "The Matrix Reloaded"''''' at researchgate.net] (2003)
{{#ev:youtube|3qIXIHAmcKU|640px|right|Music video for '''''Bullet''''' by [[w:Covenant (band)|w:Covenant]] from 2002. Here you can observe the classic "''skin looks like cardboard''"-bug that stopped the pre-reflectance capture era versions from passing human testing.}}
[[File:Institute for Creative Technologies (logo).jpg|thumb|left|156px|Logo of the '''[[w:Institute for Creative Technologies]]''' founded in 1999 in the [[w:University of Southern California]] by the [[w:United States Army]]]]
* <font color="red">'''1999'''</font> | <font color="red">'''science'''</font> | '''[http://dl.acm.org/citation.cfm?id=344855 'Acquiring the reflectance field of a human face' paper at dl.acm.org]''' [[w:Paul Debevec]] et al. of [[w:University of Southern California|w:USC]] did the '''first known reflectance capture''' over '''the human face''' with their extremely simple [[w:light stage]]. They presented their method and results in [[w:SIGGRAPH]] 2000. The scientific breakthrough required finding the [[w:subsurface scattering|w:subsurface light component]] (the simulation models are glowing from within slightly), which can be found using the knowledge that light reflected from the oil-to-air layer retains its [[w:Polarization (waves)|polarization]] while the subsurface light loses its polarization. So, equipped with only a movable light source, a movable video camera, 2 polarizers and a computer program doing extremely simple math, the last piece required to reach photorealism was acquired.<ref name="Deb2000"/>
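The "extremely simple math" mentioned above can be illustrated as follows: a cross-polarized photograph blocks the polarization-preserving specular (oil-to-air) reflection and so records only subsurface light, while a parallel-polarized photograph records both, so subtracting one image from the other isolates the specular component. A hedged numpy sketch on synthetic image arrays (ignoring the factor-of-two bookkeeping a real pipeline would need for the depolarized light):

```python
import numpy as np

def separate_reflectance(parallel_img: np.ndarray, cross_img: np.ndarray):
    """Split a face image into specular and subsurface components.

    parallel_img: captured with the camera polarizer parallel to the light's,
                  so it contains specular + subsurface light.
    cross_img:    captured with the polarizer crossed, so the polarization-
                  preserving specular reflection is blocked and only the
                  (depolarized) subsurface light remains.
    """
    subsurface = cross_img
    # Clip at zero: sensor noise can make the difference slightly negative,
    # but there is no such thing as negative light.
    specular = np.clip(parallel_img - cross_img, 0.0, None)
    return specular, subsurface
```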
----

== Footnotes ==