When the '''[[Glossary#No camera|camera does not exist]]''', but the subject being imaged with a simulation of a (movie) camera deceives the watcher to believe it is some living or dead person, it is a '''digital look-alike'''.
When it cannot be determined by human testing whether some fake voice is a synthetic fake of some person's voice or an actual recording made of that person's real voice, it is a '''digital sound-alike'''.
[[File:BlV1999-morphable-model-till-match-low-res-rip.png|thumb|left|460px|Image 2 (low resolution rip)
<br/>(1) Sculpting a morphable model to one single picture
<br/>(2) Produces 3D approximation
<small>Image 2 by Blanz and Vetter – Copyright ACM 1999 – http://dl.acm.org/citation.cfm?doid=311535.311556 – Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page.</small>]]
[[File:Saint John on Patmos.jpg|thumb|right|360px|See <big>'''[[Biblical explanation - The books of Daniel and Revelation]]'''</big> for the advance warning for our time that we were given in 6th century BC and then again in 1st century.
<br/><br/>
'Saint John on Patmos' pictures [[w:John of Patmos]] on [[w:Patmos]] writing down the visions to make the [[w:Book of Revelation]]. Picture from folio 17 of the [[w:Très Riches Heures du Duc de Berry]] (1412-1416) by the [[w:Limbourg brothers]]. Currently located at the [[w:Musée Condé]] 40 km north of Paris, France.]]
== Digital look-alikes ==
{{#ev:youtube|LWLadJFI8Pk|640px|right|It is recommended that you watch ''In Event of Moon Disaster - FULL FILM'' (2020) at the [https://moondisaster.org/ '''moondisaster.org''' project website] (where it has interactive portions) by the [https://virtuality.mit.edu/ Center for Advanced Virtuality] of the [[w:Massachusetts Institute of Technology|w:MIT]]}}
=== Introduction to digital look-alikes ===
[[File:The-diffuse-reflection-deducted-from-the-specular-reflection-Debevec-2000.png|thumb|left|260px|Subtraction of the diffuse reflection from the specular reflection yields the specular component of the model's reflectance.
<br /><br />
<small>[[:File:Deb-2000-reflectance-separation.png|Original picture]] by [[w:Paul Debevec]] et al. - Copyright ACM 2000 https://dl.acm.org/citation.cfm?doid=311779.344855</small>]]
In the cinemas we have seen digital look-alikes for over 15 years. These digital look-alikes have "clothing" (a simulation of clothing is not clothing) or "superhero costumes" and "superbaddie costumes", and they don't need to care about the laws of physics, let alone laws of physiology. It is generally accepted that digital look-alikes made their public debut in the sequels of The Matrix i.e. [[w:The Matrix Reloaded]] and [[w:The Matrix Revolutions]] released in 2003. It can be considered almost certain that it was not possible to make these before the year 1999, as the final piece of the puzzle to make a (still) digital look-alike that passes human testing, the [[Glossary#Reflectance capture|reflectance capture]] over the human face, was made for the first time in 1999 at the [[w:University of Southern California]] and was presented to the crème de la crème
{{Q|Do you think that was [[w:Hugo Weaving]]'s left cheekbone that [[w:Keanu Reeves]] punched in with his right fist?|Trad|The Matrix Revolutions}}
=== The problems with digital look-alikes ===
=== 'Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis' 2018 by Google Research (external transclusion) ===
* In '''2018''' at the '''[[w:Conference on Neural Information Processing Systems]]''' (NeurIPS) the work [http://papers.nips.cc/paper/7700-transfer-learning-from-speaker-verification-to-multispeaker-text-to-speech-synthesis 'Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis'] ([https://arxiv.org/abs/1806.04558 at arXiv.org]) was presented. The pre-trained model is able to steal voices from a sample of only '''5 seconds''' with almost convincing results.
The iframe below is transcluded from [https://google.github.io/tacotron/publications/speaker_adaptation/ 'Audio samples from "Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis"' at google.github.io], the audio samples of a sound-like-anyone machine presented at the 2018 [[w:NeurIPS]] conference by Google researchers.
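To make the pipeline a little more concrete: the system described above feeds a short reference recording through a speaker-verification encoder to obtain a fixed-size embedding, and the text-to-speech synthesizer is then conditioned on that embedding. The sketch below is only a toy stand-in for the learned encoder (it uses simple MFCC statistics instead of a neural network, and the file names are made up), but it shows how a few seconds of audio collapse into a single vector.
<syntaxhighlight lang="python">
# Toy sketch only: a real system uses a trained neural speaker-verification
# encoder; here crude MFCC statistics stand in for it. File names are
# illustrative. Requires the numpy and librosa packages.
import numpy as np
import librosa

def toy_speaker_embedding(path):
    """Collapse a short speech recording into one fixed-size vector."""
    y, sr = librosa.load(path, sr=16000)             # ~5 seconds of speech
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two recordings of the same speaker should land closer together in this
# embedding space than recordings of different speakers.
emb_a = toy_speaker_embedding("target_speaker_sample.wav")
emb_b = toy_speaker_embedding("other_speaker_sample.wav")
print("similarity:", cosine_similarity(emb_a, emb_b))
</syntaxhighlight>
In the actual work the analogous (learned) embedding is what lets the synthesizer speak arbitrary text in the voice of the person whose 5-second sample was encoded.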
# Victim #4 - Our judiciary which prosecutes and possibly convicts the innocent victim #1.
Thus it is high time to act and to '''[[Law proposals to ban covert modeling|criminalize the covert modeling of human voice!]]'''
=== Examples of speech synthesis software not quite able to fool a human yet ===
=== Documented digital sound-alike attacks ===
* Sound-like-anyone technology found its way into the hands of criminals: in '''2019''' [[w:NortonLifeLock|Symantec]] researchers knew of 3 cases where the technology had been used for '''[[w:crime]]'''.
** [https://www.bbc.com/news/technology-48908736 '''"Fake voices 'help cyber-crooks steal cash''''" at bbc.com] July 2019 reporting <ref name="BBC2019">
{{cite web
----
The below video 'This AI Clones Your Voice After Listening for 5 Seconds' by '2 minute papers' describes the voice thieving machine presented by Google Research in [[w:NeurIPS|w:NeurIPS]] 2018.
{{#ev:youtube|0sR1rU3gLzQ|640px|right|Video 'This AI Clones Your Voice After Listening for 5 Seconds' by '2 minute papers' describes the voice thieving machine by Google Research in [[w:NeurIPS|w:NeurIPS]] 2018.}}
[[File:Spectrogram-19thC.png|thumb|right|640px|A [[w:spectrogram]] of a male voice saying 'nineteenth century']]
== Text syntheses ==
[[w:Chatbot]]s have existed for a long time, but only now, armed with AI, are they becoming more deceptive.
In [[w:natural language processing]], development in [[w:natural-language understanding]] leads to more cunning [[w:natural-language generation]] AI.
[[w:OpenAI]]'s [[w:OpenAI#GPT|w:Generative Pre-trained Transformer]] ('''GPT''') is a left-to-right [[w:transformer (machine learning model)]]-based [[w:Natural-language generation|text generation]] model succeeded by [[w:OpenAI#GPT-2|w:GPT-2]] and [[w:OpenAI#GPT-3|w:GPT-3]].
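As a concrete illustration of what "left-to-right text generation" means in practice, the minimal sketch below samples a continuation token by token from the freely available GPT-2 weights; it assumes the Hugging Face ''transformers'' and ''torch'' packages are installed, and the prompt string is of course arbitrary.
<syntaxhighlight lang="python">
# Minimal sketch of left-to-right (autoregressive) text generation with GPT-2.
# Assumes the "transformers" and "torch" packages are installed.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "When the camera does not exist,"
inputs = tokenizer(prompt, return_tensors="pt")

# Each new token is predicted from the tokens to its left, then appended.
output_ids = model.generate(
    **inputs,
    max_length=60,
    do_sample=True,                       # sample instead of greedy decoding
    top_p=0.9,                            # nucleus sampling
    pad_token_id=tokenizer.eos_token_id,  # silences the missing-pad warning
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
</syntaxhighlight>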
''' Reporting / announcements '''
=== Organizations against synthetic human-like fakes ===
[[File:DARPA_Logo.jpg|thumb|left|240px|The Defense Advanced Research Projects Agency, better known as [[w:DARPA]], has been active in the field of countering synthetic fake video for longer than the public has been aware of the problems existing.]]
* '''[[w:DARPA]]''' [https://www.darpa.mil/program/media-forensics '''DARPA program''': ''''Media Forensics'''' ('''MediFor''') at darpa.mil] aims to develop technologies for the automated assessment of the integrity of an image or video and for integrating these into an end-to-end media forensics platform. Archive.org first crawled their homepage in [https://web.archive.org/web/20160630154819/https://www.darpa.mil/program/media-forensics June '''2016''']<ref name="IA-MediFor-2016-crawl">https://web.archive.org/web/20160630154819/https://www.darpa.mil/program/media-forensics</ref>.
* '''[[w:University of Colorado Denver]]''' is the home of the [https://artsandmedia.ucdenver.edu/areas-of-study/national-center-for-media-forensics/about-the-national-center-for-media-forensics '''National Center for Media Forensics''' at artsandmedia.ucdenver.edu], which offers a [https://artsandmedia.ucdenver.edu/areas-of-study/national-center-for-media-forensics/media-forensics-graduate-program Master's degree program], [https://artsandmedia.ucdenver.edu/areas-of-study/national-center-for-media-forensics/training-courses training courses] and [https://artsandmedia.ucdenver.edu/areas-of-study/national-center-for-media-forensics/national-center-for-media-forensics-research scientific basic and applied research]. [https://artsandmedia.ucdenver.edu/areas-of-study/national-center-for-media-forensics/faculty-staff Faculty staff at the NCMF]
[[File:Connie Leyva 2015.jpg|thumb|left|240px|[[w:California]] [[w:California State Senate|w:Senator]] [[w:Connie Leyva]] introduced [https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=201920200SB564 California Senate Bill SB 564] in Feb '''2019'''. It has been [https://www.sagaftra.org/action-alert-support-california-bill-end-deepfake-porn endorsed by SAG-AFTRA], but has not yet passed.]]
* '''[[w:SAG-AFTRA]]''' [https://www.sagaftra.org/action-alert-support-california-bill-end-deepfake-porn SAG-AFTRA ACTION ALERT: '''"Support California Bill to End Deepfake Porn"''' at sagaftra.org '''endorses'''] [https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=201920200SB564 California Senate Bill SB 564] introduced to the [[w:California State Senate]] by [[w:California]] [[w:Connie Leyva|w:Senator Connie Leyva]] in Feb '''2019'''.
=== Events against synthetic human-like fakes ===
* '''2020''' | '''[[w:Conference on Computer Vision and Pattern Recognition]] (CVPR)''' | [https://sites.google.com/view/wmediaforensics2020/home 2020 Conference on Computer Vision and Pattern Recognition: ''''Workshop on Media Forensics'''' at sites.google.com], a '''June 2020''' workshop at the [[w:Conference on Computer Vision and Pattern Recognition]].
* '''2019''' | '''[[w:Conference on Neural Information Processing Systems|w:NeurIPS]]''' | [[w:Facebook, Inc.]] [https://spectrum.ieee.org/tech-talk/artificial-intelligence/machine-learning/facebook-ai-launches-its-deepfake-detection-challenge '''"Facebook AI Launches Its Deepfake Detection Challenge"''' at spectrum.ieee.org]
* '''2019''' | '''CVPR''' | [https://sites.google.com/view/mediaforensics2019/home '''2019''' CVPR: ''''Workshop on Media Forensics'''']
* '''Annual''' (?) | '''[[w:National Institute of Standards and Technology]] (NIST)''' | [https://www.nist.gov/itl/iad/mig/media-forensics-challenge NIST: ''''Media Forensics Challenge'''' at nist.gov], an iterative research challenge by the [[w:National Institute of Standards and Technology]] with the ongoing challenge being the 2nd one in action. [https://www.nist.gov/itl/iad/mig/media-forensics-challenge-2019-0 The evaluation criteria for the 2019 iteration are being formed].
* '''2018''' | '''[[w:European Conference on Computer Vision|w:European Conference on Computer Vision (ECCV)]]''' [https://sites.google.com/view/wocm2018/home ECCV 2018: ''''Workshop on Objectionable Content and Misinformation'''' at sites.google.com], a workshop at the '''2018''' [[w:European Conference on Computer Vision]] in [[w:Munich]], focused on objectionable content detection e.g. [[w:nudity]], [[w:pornography]], [[w:violence]], [[w:hate]], [[w:Child sexual abuse|w:children exploitation]] and [[w:terrorism]] among others and on addressing misinformation problems when people are fed [[w:disinformation]] and they punt it on as misinformation. Announced topics included [[w:Outline of forensic science|w:image/video forensics]], [[w:detection]]/[[w:analysis]]/[[w:understanding]] of [[w:Counterfeit|w:fake]] images/videos, [[w:misinformation]] detection/understanding: mono-modal and [[w:Multimodality|w:multi-modal]], adversarial technologies and detection/understanding of objectionable content.
* '''2018''' | '''[[w:National Institute of Standards and Technology|w:NIST]]''' [https://www.nist.gov/itl/iad/mig/media-forensics-challenge-2018 NIST ''''Media Forensics Challenge 2018'''' at nist.gov] was the second annual evaluation to support research and help advance the state of the art for image and video forensics technologies – technologies that determine the region and type of manipulations in imagery (image/video data) and the phylogenic process that modified the imagery.
* '''2017''' | '''[[w:National Institute of Standards and Technology|w:NIST]]''' [https://www.nist.gov/itl/iad/mig/nimble-challenge-2017-evaluation NIST ''''Nimble Challenge 2017'''' at nist.gov]
=== Studies against synthetic human-like fakes ===
* [https://arxiv.org/abs/2001.06564 ''''Media Forensics and DeepFakes: an overview'''' at arXiv.org] [https://arxiv.org/pdf/2001.06564.pdf (as .pdf at arXiv.org)], a '''2020''' '''review''' on the subject of digital look-alikes and media forensics
* [https://scholarship.law.duke.edu/cgi/viewcontent.cgi?article=1333&context=dltr ''''DEEPFAKES: False pornography is here and the law cannot protect you'''' at scholarship.law.duke.edu] by Douglas Harris, published in [https://scholarship.law.duke.edu/dltr/vol17/iss1/ Duke Law & Technology Review - Volume 17 on '''2019'''-01-05] by the [[w:Duke University School of Law]] of [[w:Duke University]]
''' Search for more '''
=== 2020's synthetic human-like fakes ===
* '''2020''' | reporting | [https://www.wired.co.uk/article/deepfake-porn-websites-videos-law "''Deepfake porn is now mainstream. And major sites are cashing in''" at wired.co.uk] by Matt Burgess. Published August 2020.
* '''2020''' | demonstration | '''[https://moondisaster.org/ Moondisaster.org]''' (full film embedded in website) project by the [https://virtuality.mit.edu/ Center for Advanced Virtuality] of the [[w:Massachusetts Institute of Technology|w:MIT]] published in July 2020, makes use of various methods of making a synthetic human-like fake. Alternative place to watch: [https://www.youtube.com/watch?v=LWLadJFI8Pk ''In Event of Moon Disaster - FULL FILM'' at youtube.com]
** [https://www.cnet.com/news/mit-releases-deepfake-video-of-nixon-announcing-nasa-apollo-11-disaster/ Cnet.com July 2020 reporting ''MIT releases deepfake video of 'Nixon' announcing NASA Apollo 11 disaster'']
[[File:Marc Berman.jpg|thumb|120px|left|Homie [[w:Marc Berman|w:Marc Berman]], a righteous fighter for our human rights in this age of industrial disinformation filth and a member of the [[w:California State Assembly]], most loved for authoring [https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201920200AB602 AB-602], which came into effect on Jan 1 2020, banning both the manufacturing and [[w:digital distribution]] of synthetic pornography without the [[w:consent]] of the people depicted.]] | |||
* '''2020''' | US state law | January 1 <ref name="KFI2019">
|quote=}}
</ref> the [[w:California]] [[w:State law (United States)|w:US state law]] [https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201920200AB602 AB-602] came into effect banning the manufacturing and [[w:digital distribution]] of synthetic pornography without the [[w:consent]] of the people depicted. AB-602 provides victims of synthetic pornography with [[w:injunction|w:injunctive relief]] and poses legal threats of [[w:statutory damages|w:statutory]] and [[w:punitive damages]] on [[w:criminal]]s making or distributing synthetic pornography without consent. The bill AB-602 was signed into law by California [[w:Governor (United States)|w:Governor]] [[w:Gavin Newsom]] on October 3 2019 and was authored by [[w:California State Assembly]] member [[w:Marc Berman]].<ref name="CNET2019">
{{cite web
</ref>
[[File:Marcus Simon.jpeg|thumb|right|108px|Homie [[w:Marcus Simon|w:Marcus Simon]] ([http://marcussimon.com/ marcussimon.com]) is a Member of the [[w:Virginia House of Delegates]] and a true pioneer in legislating against synthetic filth.]] | |||
* '''2019''' | US state law | Since July 1 <ref>
}}
</ref> [[w:Virginia]] [[w:criminalization|w:has criminalized]] the sale and dissemination of unauthorized synthetic pornography, but not the manufacture.<ref name="Virginia2019Chapter515">
{{cite web
| url = https://law.lis.virginia.gov/vacode/18.2-386.2/
| quote = }}
</ref>, as [https://law.lis.virginia.gov/vacode/18.2-386.2/ § 18.2-386.2 titled ''''''Unlawful dissemination or sale of images of another; penalty.''''''] became part of the '''[[w:Code of Virginia]]'''. The law text states: "''Any person who, with the [[w:Intention (criminal law)|w:intent]] to [[w:coercion|w:coerce]], [[w:harassment|w:harass]], or [[w:intimidation|w:intimidate]], [[w:Malice_(law)|w:malicious]]ly [[w:dissemination|w:disseminates]] or [[w:sales|w:sells]] any videographic or still image created by any means whatsoever that depicts another person who is totally [[w:nudity|nude]], or in a state of undress so as to expose the [[w:sex organs|w:genitals]], pubic area, [[w:buttocks]], or female [[w:breast]], where such person knows or has reason to know that he is not [[w:license]]d or [[w:authorization|w:authorized]] to disseminate or sell such videographic or still image is guilty of a Class 1 [[w:Misdemeanor#United States|w:misdemeanor]].''".<ref name="Virginia2019Chapter515"/> The identical bills were [https://lis.virginia.gov/cgi-bin/legp604.exe?191+sum+HB2678 House Bill 2678] presented by [[w:Delegate (American politics)|w:Delegate]] [[w:Marcus Simon]] to the [[w:Virginia House of Delegates]] on January 14 2019 and three days later an identical [https://lis.virginia.gov/cgi-bin/legp604.exe?191+sum+SB1736 Senate bill 1736] was introduced to the [[w:Senate of Virginia]] by Senator [[w:Adam Ebbin]].
* '''2019''' | science and demonstration | [https://arxiv.org/pdf/1905.09773.pdf ''''Speech2Face: Learning the Face Behind a Voice'''' at arXiv.org] a system for generating likely facial features based on the voice of a person, presented by the [[w:MIT Computer Science and Artificial Intelligence Laboratory]] at the 2019 [[w:Conference on Computer Vision and Pattern Recognition|w:CVPR]]. [https://github.com/saiteja-talluri/Speech2Face Speech2Face at github.com] This may develop into something that really causes problems. [https://neurohive.io/en/news/speech2face-neural-network-predicts-the-face-behind-a-voice/ "Speech2Face: Neural Network Predicts the Face Behind a Voice" reporting at neurohive.io], [https://belitsoft.com/speech-recognition-software-development/speech2face "Speech2Face Sees Voices and Hears Faces: Dreams Come True with AI" reporting at belitsoft.com]
* '''2019''' | <font color="red">'''crime'''</font> | '''[[w:Fraud]]''' with [[#digital sound-alikes|digital sound-alike]] technology surfaced in 2019. See [https://www.washingtonpost.com/technology/2019/09/04/an-artificial-intelligence-first-voice-mimicking-software-reportedly-used-major-theft/?noredirect=on ''''''An artificial-intelligence first: Voice-mimicking software reportedly used in a major theft''''''], a 2019 Washington Post article or [https://www.forbes.com/sites/jessedamiani/2019/09/03/a-voice-deepfake-was-used-to-scam-a-ceo-out-of-243000/ ''''''A Voice Deepfake Was Used To Scam A CEO Out Of $243,000'''''' at Forbes.com] (2019-09-03)
</ref> '''[https://support.google.com/websearch/answer/9116649?hl=en Information on removing involuntary fake pornography from Google at support.google.com]''' if it shows up in Google and the '''[https://support.google.com/websearch/troubleshooter/3111061#ts=2889054%2C2889099%2C2889064%2C9171203 form to request removing involuntary fake pornography at support.google.com]''', select "''I want to remove: A fake nude or sexually explicit picture or video of myself''"
[[File:GoogleLogoSept12015.png|thumb|right|300px|[[w:Google]]'s logo. Google Research demonstrated their '''[https://google.github.io/tacotron/publications/speaker_adaptation/ sound-like-anyone-machine]''' at the '''2018''' [[w:Conference on Neural Information Processing Systems|w:Conference on Neural Information Processing Systems]] (NeurIPS). It requires only 5 seconds of sample to steal a voice.]]
* '''<font color="red">2018</font>''' | <font color="red">science</font> and <font color="red">demonstration</font> | The work [http://papers.nips.cc/paper/7700-transfer-learning-from-speaker-verification-to-multispeaker-text-to-speech-synthesis ''''Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis''''] ([https://arxiv.org/abs/1806.04558 at arXiv.org]) was presented at the 2018 [[w:Conference on Neural Information Processing Systems]] ('''NeurIPS'''). The pre-trained model is able to steal voices from a sample of only '''5 seconds''' with almost convincing results.
</ref> Neither the [[w:speech synthesis]] used nor the gesturing of the digital look-alike anchors was good enough to deceive the watcher to mistake them for real humans imaged with a TV camera.
* '''2018''' | controversy / demonstration | The [[w:deepfake]]s controversy surfaced where [[w:Pornographic film|porn video]]s were doctored utilizing [[w:deep learning|w:deep machine learning]] so that the face of the actress was replaced by the software's opinion of what another person's face would look like in the same pose and lighting.
* '''2017''' | science | '''[http://grail.cs.washington.edu/projects/AudioToObama/ 'Synthesizing Obama: Learning Lip Sync from Audio' at grail.cs.washington.edu]'''. In SIGGRAPH 2017 Supasorn Suwajanakorn et al. of the [[w:University of Washington]] presented an audio driven digital look-alike of the upper torso of Barack Obama. It was driven only by a voice track as source data for the animation after the training phase to acquire [[w:lip sync]] and wider facial information from [[w:training material]] consisting of 2D videos with audio had been completed.<ref name="Suw2017">{{Citation
[[File:Adobe Corporate Logo.png|thumb|right|300px|[[w:Adobe Inc.]]'s logo. We can thank Adobe for publicly demonstrating their sound-like-anyone-machine in '''2016''' before an implementation was sold to criminal organizations.]]
{{#ev:youtube|I3l4XLZ59iw|420px|left|'''''#[[w:Adobe Voco]]. Adobe Audio Manipulator Sneak Peak with [[w:Jordan Peele]]''''' (at Youtube.com). November 2016 demonstration of Adobe's unreleased sound-like-anyone-machine, the '''[[w:Adobe Voco]]''', at the [[w:Adobe MAX]] 2016 event in [[w:San Diego]], [[w:California]]. The original Adobe Voco '''required 20 minutes of sample''' to '''thieve a voice'''.}}
* '''<font color="red">2016</font>''' | <font color="red">science</font> and demonstration | '''[[w:Adobe Inc.]]''' publicly demonstrates '''[[w:Adobe Voco]]''', a '''sound-like-anyone machine''' [https://www.youtube.com/watch?v=I3l4XLZ59iw '#VoCo. Adobe Audio Manipulator Sneak Peak with Jordan Peele | Adobe Creative Cloud' on Youtube]. The original Adobe Voco required '''20 minutes''' of sample '''to thieve a voice'''. <font color="green">'''Relevancy: certain'''</font>.
* '''2014''' | science | [[w:Ian Goodfellow]] et al. presented the principles of a '''[[w:generative adversarial network]]'''. '''GAN'''s made the headlines in early 2018 with the [[w:deepfake]]s controversies.
* '''2013''' | demonstration | At the 2013 SIGGRAPH [[w:Activision]] and USC presented "Digital Ira", a [[w:real time computing|real-time]] digital face look-alike of Ari Shapiro, an ICT USC research scientist,<ref name="reform_youtube2015">
{{cite AV media
| people =
| doi =
| accessdate = 2017-07-13}}
</ref> The end result, both precomputed and rendered in real time with the most modern game [[w:Graphics processing unit|w:GPU]]s, is shown [http://gl.ict.usc.edu/Research/DigitalIra/ here] and looks fairly realistic.
* '''2013''' | demonstration | A '''[https://ict.usc.edu/pubs/Scanning%20and%20Printing%20a%203D%20Portrait%20of%20President%20Barack%20Obama.pdf 'Scanning and Printing a 3D Portrait of President Barack Obama' at ict.usc.edu]'''. A 7D model and a 3D bust were made of President Obama with his consent. <font color="green">'''Relevancy: certain'''</font>
=== 2000's synthetic human-like fakes ===
* '''2010''' | movie | [[w:Walt Disney Pictures]] released a sci-fi sequel entitled ''[[w:Tron: Legacy]]'' with a digitally rejuvenated digital look-alike made of the actor [[w:Jeff Bridges]] playing the [[w:antagonist]] [[w:List of Tron characters#CLU|w:CLU]].
* '''2009''' | movie | A digital look-alike of a younger [[w:Arnold Schwarzenegger]] was made for the movie ''[[w:Terminator Salvation]]'' though the end result was critiqued as unconvincing. Facial geometry was acquired from a 1984 mold of Schwarzenegger.
* '''2009''' | demonstration | [http://www.ted.com/talks/paul_debevec_animates_a_photo_real_digital_face.html Paul Debevec: ''''Animating a photo-realistic face'''' at ted.com] Debevec et al. presented new digital likenesses, made by [[w:Image Metrics]], this time of actress [[w:Emily O'Brien]] whose reflectance was captured with the USC light stage 5. At 00:04:59 you can see two clips, one with the real Emily shot with a real camera and one with a digital look-alike of Emily, shot with a simulation of a camera - <u>Which is which is difficult to tell</u>. Bruce Lawmen was scanned using USC light stage 6 in still position and also recorded running there on a [[w:treadmill]]. Many, many digital look-alikes of Bruce are seen running fluently and natural looking at the ending sequence of the TED talk video. <ref name="Deb2009">[http://www.ted.com/talks/paul_debevec_animates_a_photo_real_digital_face.html In this TED talk video] at 00:04:59 you can see ''two clips, one with the real Emily shot with a real camera and one with a digital look-alike of Emily, shot with a simulation of a camera - <u>Which is which is difficult to tell</u>''. Bruce Lawmen was scanned using USC light stage 6 in still position and also recorded running there on a [[w:treadmill]]. Many, many digital look-alikes of Bruce are seen running fluently and natural looking at the ending sequence of the TED talk video.</ref> Motion looks fairly convincing contrasted to the clunky run in the [[w:Animatrix#Final Flight of the Osiris|w:''Animatrix: Final Flight of the Osiris'']] which was [[w:state-of-the-art]] in 2003 if photorealism was the intention of the [[w:animators]].
* '''2004''' | movie | The '''[[w:Spider-man 2]]''' (and '''[[w:Spider-man 3]]''', 2007) films. Relevancy: The films include a [[Synthetic human-like fakes#Digital look-alike|digital look-alike]] made of actor [[w:Tobey Maguire]] by [[w:Sony Pictures Imageworks]].<ref name="Pig2005">{{cite web
</ref>
* '''2003''' | short film | [[w:The Animatrix#Final Flight of the Osiris|w:''The Animatrix: Final Flight of the Osiris'']], a [[w:state-of-the-art]] want-to-be human likeness not quite fooling the watcher, made by [[w:Square Pictures#Square Pictures|w:Square Pictures]].
[[File:The-matrix-logo.svg|thumb|left|300px|Logo of the [[w:The Matrix (franchise)]]]] | |||
[[File:BSSDF01_400.svg|thumb|left|300px|Traditional [[w:Bidirectional reflectance distribution function|w:BRDF]] vs. [[w:subsurface scattering|subsurface scattering]] inclusive BSSRDF i.e. [[w:Bidirectional scattering distribution function#Overview of the BxDF functions|w:Bidirectional scattering-surface reflectance distribution function]]. <br/><br/> | |||
An analytical BRDF must take into account the subsurface scattering, or the end result '''will not pass human testing'''.]] | |||
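To spell out the contrast in the caption above in standard textbook notation (not markup taken from the films' production pipeline): a BRDF <math>f_r</math> only relates light arriving and leaving at the same surface point, while a BSSRDF <math>S</math> lets light enter the skin at one point <math>\mathbf{x}_i</math> and exit at another <math>\mathbf{x}_o</math>, which is what the subsurface glow of real skin requires.
:<math>L_o(\mathbf{x}, \omega_o) = \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\, L_i(\mathbf{x}, \omega_i)\, (\mathbf{n} \cdot \omega_i)\, \mathrm{d}\omega_i</math>
:<math>L_o(\mathbf{x}_o, \omega_o) = \int_{A} \int_{\Omega} S(\mathbf{x}_i, \omega_i; \mathbf{x}_o, \omega_o)\, L_i(\mathbf{x}_i, \omega_i)\, (\mathbf{n} \cdot \omega_i)\, \mathrm{d}\omega_i\, \mathrm{d}A(\mathbf{x}_i)</math>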
* '''2003''' | movie(s) | The '''[[w:Matrix Reloaded]]''' and '''[[w:Matrix Revolutions]]''' films. Relevancy: '''First public display''' of '''[[synthetic human-like fakes#Digital look-alikes|digital look-alikes]]''' that are virtually '''indistinguishable from''' the '''real actors'''.
* '''2002''' | music video | '''[https://www.youtube.com/watch?v=3qIXIHAmcKU 'Bullet' by Covenant on Youtube]''' by [[w:Covenant (band)]] from their album [[w:Northern Light (Covenant album)]]. Relevancy: Contains the best upper-torso digital look-alike of Eskil Simonsson (vocalist) that their organization could procure at the time. Here you can observe the '''classic "''skin looks like cardboard''"-bug''' (assuming this was not intended) that '''thwarted efforts to''' make digital look-alikes that '''pass human testing''' before the '''reflectance capture and dissection in 1999''' by [[w:Paul Debevec]] et al. at the [[w:University of Southern California]] and subsequent development of the '''"Analytical [[w:bidirectional reflectance distribution function|BRDF]]"''' (quote-unquote) by ESC Entertainment, a company set up for the '''sole purpose''' of '''making the cinematography''' for the 2003 films Matrix Reloaded and Matrix Revolutions '''possible''', lead by George Borshukov. | {{#ev:youtube|3qIXIHAmcKU|640px|right|Music video for '''''Bullet''''' by [[w:Covenant (band)|w:Covenant]] from 2002. Here you can observe the classic "''skin looks like cardboard''"-bug that stopped the pre-reflectance capture era versions from passing human testing.}} | ||
* '''2002''' | music video | '''[https://www.youtube.com/watch?v=3qIXIHAmcKU 'Bullet' by Covenant on Youtube]''' by [[w:Covenant (band)]] from their album [[w:Northern Light (Covenant album)]]. Relevancy: Contains the best upper-torso digital look-alike of Eskil Simonsson (vocalist) that their organization could procure at the time. Here you can observe the '''classic "''skin looks like cardboard''"-bug''' (assuming this was not intended) that '''thwarted efforts to''' make digital look-alikes that '''pass human testing''' before the '''reflectance capture and dissection in 1999''' by [[w:Paul Debevec]] et al. at the [[w:University of Southern California]] and subsequent development of the '''"Analytical [[w:bidirectional reflectance distribution function|w:BRDF]]"''' (quote-unquote) by ESC Entertainment, a company set up for the '''sole purpose''' of '''making the cinematography''' for the 2003 films Matrix Reloaded and Matrix Revolutions '''possible''', led by George Borshukov.
=== 1990's synthetic human-like fakes ===
[[File:Institute for Creative Technologies (logo).jpg|thumb|left|156px|Logo of the '''[[w:Institute for Creative Technologies]]''' founded in 1999 in the [[w:University of Southern California]] by the [[w:United States Army]]]] | |||
[[File:Deb2000-light-stage-low-res-rip.png|thumb|left|304px|Original [[w:light stage]] used in the 1999 reflectance capture by [[w:Paul Debevec|Debevec]] et al.<br /><br /> | |||
It consists of two rotary axes with height and radius control. Light source and a polarizer were placed on one arm and a camera and the other polarizer on the other arm. | |||
<br /><br /> | |||
<small>Original image by Debevec et al. – Copyright ACM 2000 – https://dl.acm.org/citation.cfm?doid=311779.344855 – Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page.</small>]] | |||
* <font color="red">'''1999'''</font> | <font color="red">'''science'''</font> | '''[http://dl.acm.org/citation.cfm?id=344855 'Acquiring the reflectance field of a human face' paper at dl.acm.org ]''' [[w:Paul Debevec]] et al. of [[w:University of Southern California|w:USC]] did the '''first known reflectance capture''' over '''the human face''' with their extremely simple [[w:light stage]]. They presented their method and results in [[w:SIGGRAPH]] 2000. The scientific breakthrough required finding the [[w:subsurface scattering|w:subsurface light component]] (the simulation models are glowing from within slightly), which can be found using the knowledge that light that is reflected from the oil-to-air layer retains its [[w:Polarization (waves)|polarization]] and the subsurface light loses its polarization. So, equipped only with a movable light source, a movable video camera, 2 polarizers and a computer program doing extremely simple math, the last piece required to reach photorealism was acquired (a minimal sketch of this separation follows after this list).<ref name="Deb2000"/>
* <font color="red">'''1999'''</font> | <font color="red">'''institute founded'''</font> | The '''[[w:Institute for Creative Technologies]]''' was founded by the [[w:United States Army]] in the [[w:University of Southern California]]. It collaborates with the [[w:United States Army Futures Command]], [[w:United States Army Combat Capabilities Development Command]], [[w:Combat Capabilities Development Command Soldier Center]] and [[w:United States Army Research Laboratory]].<ref name="ICT-about">https://ict.usc.edu/about/</ref> In 2016 [[w:Hao Li]] was appointed to direct the institute.
* '''1994''' | movie | [[w:The Crow (1994 film)]] was the first film production to make use of [[w:digital compositing]] of a computer simulated representation of a face onto scenes filmed using a [[w:body double]]. Necessity was the muse as the actor [[w:Brandon Lee]] portraying the protagonist was tragically killed accidentally on-stage.
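Below is a minimal sketch of the polarization-based separation described in the 1999 entry above. It assumes two already-aligned photographs of the same face under a polarized light source, one taken through a parallel-oriented polarizer and one through a crossed polarizer; the file names and the plain per-pixel subtraction are illustrative assumptions, not the exact pipeline of Debevec et al.
<syntaxhighlight lang="python">
# Minimal sketch, not the authors' pipeline: separate specular from
# subsurface reflection using two polarizer orientations.
# Assumed inputs: aligned photos "parallel.png" and "crossed.png" taken
# under a polarized light source. Requires numpy and imageio.
import numpy as np
import imageio.v3 as iio

parallel = iio.imread("parallel.png").astype(np.float32)  # specular + subsurface
crossed = iio.imread("crossed.png").astype(np.float32)    # subsurface only

# Light bounced off the oil-to-air layer keeps its polarization, so the
# crossed polarizer blocks it; depolarized subsurface light passes through
# both orientations (roughly halved each time).
specular = np.clip(parallel - crossed, 0.0, None)
subsurface = 2.0 * crossed

iio.imwrite("specular.png", np.clip(specular, 0, 255).astype(np.uint8))
iio.imwrite("subsurface.png", np.clip(subsurface, 0, 255).astype(np.uint8))
</syntaxhighlight>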
=== 1970's synthetic human-like fakes ===
{{#ev:vimeo|16292363|480px|right|''[[w:A Computer Animated Hand|w:A Computer Animated Hand]]'' is a 1972 short film by [[w:Edwin Catmull]] and [[w:Fred Parke]]. This was the first time that [[w:computer-generated imagery]] was used in film to animate likenesses of moving human appearance.}} | |||
* '''1976''' | movie | ''[[w:Futureworld]]'' reused parts of ''A Computer Animated Hand'' on the big screen. | * '''1976''' | movie | ''[[w:Futureworld]]'' reused parts of ''A Computer Animated Hand'' on the big screen. | ||
* '''1972''' | entertainment | '''[https://vimeo.com/59434349 'A Computer Animated Hand' on Vimeo]'''. [[w:A Computer Animated Hand]] by [[w:Edwin Catmull]] and [[w:Fred Parke]]. Relevancy: This was the '''first time''' that [[w: | * '''1972''' | entertainment | '''[https://vimeo.com/59434349 'A Computer Animated Hand' on Vimeo]'''. [[w:A Computer Animated Hand]] by [[w:Edwin Catmull]] and [[w:Fred Parke]]. Relevancy: This was the '''first time''' that [[w:computer-generated imagery]] was used in film to '''animate''' moving '''human-like appearance'''. | ||
* '''1971''' | science | '''[https://interstices.info/images-de-synthese-palme-de-la-longevite-pour-lombrage-de-gouraud/ 'Images de synthèse : palme de la longévité pour l’ombrage de Gouraud' (still photos)]'''. [[w:Henri Gouraud (computer scientist)]] made the first [[w:Computer graphics]] [[w:geometry]] [[w:digitization]] and representation of a human face. The model was his wife, Sylvie Gouraud. The 3D model was a simple [[w:wire-frame model]] and he applied [[w:Gouraud shading]] to produce the '''first known representation''' of '''human-likeness''' on a computer (a sketch of the shading idea follows below this list).<ref>{{cite web|title=Images de synthèse : palme de la longévité pour l'ombrage de Gouraud|url=http://interstices.info/jcms/c_25256/images-de-synthese-palme-de-la-longevite-pour-lombrage-de-gouraud}}</ref>
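The Gouraud shading mentioned in the 1971 entry computes lighting only at a polygon's vertices and then linearly interpolates the resulting intensities across the interior pixels. Below is a minimal sketch of that idea for a single triangle; the vertex normals, light direction and the tiny software rasterizer are made-up example values and simplifications, not Gouraud's original code.

<syntaxhighlight lang="python">
# Minimal Gouraud-shading sketch (illustrative, not Gouraud's original code):
# Lambertian intensity is computed per vertex, then linearly interpolated
# across the triangle's pixels using barycentric coordinates.
import numpy as np

def vertex_intensity(normal, light_dir, ambient=0.1):
    """Diffuse (Lambertian) intensity at one vertex."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    return ambient + max(0.0, float(np.dot(n, l)))

def gouraud_triangle(image, pts, shade):
    """Fill a 2D triangle, interpolating the three per-vertex intensities."""
    (x0, y0), (x1, y1), (x2, y2) = pts
    xmin, xmax = int(min(x0, x1, x2)), int(max(x0, x1, x2))
    ymin, ymax = int(min(y0, y1, y2)), int(max(y0, y1, y2))
    area = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)
    for y in range(ymin, ymax + 1):
        for x in range(xmin, xmax + 1):
            # Barycentric weights of the pixel (x, y) inside the triangle
            w0 = ((x1 - x) * (y2 - y) - (x2 - x) * (y1 - y)) / area
            w1 = ((x2 - x) * (y0 - y) - (x0 - x) * (y2 - y)) / area
            w2 = 1.0 - w0 - w1
            if w0 >= 0 and w1 >= 0 and w2 >= 0:
                image[y, x] = w0 * shade[0] + w1 * shade[1] + w2 * shade[2]

# Made-up example: one triangle of a face mesh, lit by one directional light.
light = np.array([0.0, 0.0, 1.0])
normals = [np.array([0.0, 0.3, 1.0]), np.array([0.3, 0.0, 1.0]), np.array([-0.3, -0.3, 1.0])]
shades = [vertex_intensity(n, light) for n in normals]
canvas = np.zeros((64, 64))
gouraud_triangle(canvas, [(5, 5), (58, 12), (30, 58)], shades)
</syntaxhighlight>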
Line 528: | Line 545: | ||
=== 1770's synthetic human-like fakes ===
[[File:Kempelen Speakingmachine.JPG|right|thumb|300px|A replica of [[w:Wolfgang von Kempelen]]'s [[w:Wolfgang von Kempelen's Speaking Machine]], built 2007–09 at the Department of [[w:Phonetics]], [[w:Saarland University]], [[w:Saarbrücken]], Germany. This machine added models of the tongue and lips, enabling it to produce [[w:consonant]]s as well as [[w:vowel]]s.]]
* '''1791''' | science | '''[[w:Wolfgang von Kempelen's Speaking Machine]]''' of [[w:Wolfgang von Kempelen]] of [[w:Pressburg]], [[w:Hungary]], described in a 1791 paper, was [[w:bellows]]-operated.<ref>''Mechanismus der menschlichen Sprache nebst der Beschreibung seiner sprechenden Maschine'' ("Mechanism of the human speech with description of its speaking machine", J. B. Degen, Wien).</ref> This machine added models of the tongue and lips, enabling it to produce [[w:consonant]]s as well as [[w:vowel]]s. (based on [[w:Speech synthesis#History]])
Line 562: | Line 579: | ||
=== 2000's media perhaps about synthetic human-like fakes ===
* '''2006''' | music video | [https://www.youtube.com/watch?v=hC_sqi9oocI ''''''John The Revelator'''''' by '''Depeche Mode''' (official music video) on Youtube] by [[w:Depeche Mode]] from the single [[w:John the Revelator / Lilian]]. Relevancy: the [[Biblical explanation - The books of Daniel and Revelation#Revelation 13|Book of Revelation]].
Line 568: | Line 584: | ||
* '''2005''' | music video | [https://www.youtube.com/watch?v=wwvLlEtxX3o ''''''Only'''''' by '''Nine Inch Nails''' at Youtube.com] [[w:Only (Nine Inch Nails song)]] by [[w:Nine Inch Nails]]. Relevancy: check the lyrics, check the video.
* '''2005''' | short film | [https://www.youtube.com/watch?v=zl6hNj1uOkY ''''''Doll Face'''''' by '''Andy Huang''' on youtube.com] was uploaded on 2007-02-19 by [http://www.andrewthomashuang.com/ Andrew Thomas Huang (.com)]. There are various unofficial uploads that use the 'Doll Face' visuals with different music.
* '''2001''' | music video | [https://www.youtube.com/watch?v=dbB-mICjkQM ''''''Plug In Baby'''''' by '''Muse''' on youtube.com] by [[w:Muse (band)]] from their album [[w:Origin of Symmetry]]. Relevancy: See video
Line 576: | Line 593: | ||
* '''1998''' | music video | [https://www.youtube.com/watch?v=XbByxzZ-4dI ''''''Rabbit in Your Headlights'''''' by '''UNKLE''' on Youtube] by [[w:Unkle]] featuring [[w:Thom Yorke]] on vocals. [[w:Rabbit in Your Headlights#Music video|Wikipedia on the 'Rabbit in Your Headlights' music video]]. Relevancy: contains shots that would have '''injured''' or '''killed''' a '''human actor'''.
* '''1998''' | music | [https://www.youtube.com/watch?v=sjrnKG1j3Eg ''''''New Model No. 15'''''' by '''Marilyn Manson''' (lyrics in video) on Youtube] by [[w:Marilyn Manson (band)]] from the album [[w:Mechanical Animals]]. Relevancy: The '''lyrics''' are obviously about '''[[digital look-alikes]]''' approaching.
* '''1998''' | music video | [https://www.youtube.com/watch?v=FC-Kos_b1sE ''''''The Dope Show'''''' by '''Marilyn Manson''' (lyric video) on Youtube] [https://www.youtube.com/watch?v=5R682M3ZEyk (official music video)] by [[w:Marilyn Manson (band)]] from the album [[w:Mechanical Animals]]. Relevancy: '''lyrics'''
* '''1996''' | music | [https://www.youtube.com/watch?v=LMK9z5jyKbQ ''''''Dead Cities'''''' by '''The Future Sound of London''' on Youtube] - Title track from the 1996 [[w:The Future Sound of London]] album [[w:Dead Cities (album)]]. Relevancy: You need to listen
* '''1990''' | music (video) | [https://www.youtube.com/watch?v=JG9CXQxhfL4 ''''''Daydreaming'''''' by '''Massive Attack''' on youtube.com] [[w:Daydreaming (Massive Attack song)]] by [[w:Massive Attack]]. Relevancy: "''But what happen when the bomb drops Down...?''"
=== 1980's media perhaps about synthetic human-like fakes ===
Line 599: | Line 618: | ||
=== 2nd century BC media perhaps about synthetic human-like fakes ===
* [[w:2nd century BC]] | scripture | The '''[[w:Book of Daniel]]''' was put in writing.
* See [[Biblical explanation - The books of Daniel and Revelation#Daniel 7|Biblical explanation - The books of Daniel and Revelation § Daniel 7]]. '''Caution''' to reader: contains '''explicit''' written information about the beasts.
Line 607: | Line 626: | ||
[[File:Daniel's vision of the four beasts from the sea and the Ancient of Days - Silos Apocalypse (1109), f.240 - BL Add MS 11695.jpg|thumb|left|360px|Image taken from Silos Apocalypse. Originally published/produced in Spain (Silos), 1109.<br/><br/>
[[Biblical explanation - The books of Daniel and Revelation#Daniel 7|Daniel 7]], Daniel's vision of the three beasts <sup>[[Biblical explanation - The books of Daniel and Revelation#Daniel 7:1-6 - Three beasts|Dan 7:1-6]]</sup> and the fourth beast <sup>[[Biblical explanation - The books of Daniel and Revelation#Daniel 7:7-8 - The fourth beast|Dan 7:7-8]]</sup> from the sea and the [[w:Ancient of Days]]<sup>[[Biblical explanation - The books of Daniel and Revelation#The Ancient of Days|Dan 7:9-10]]</sup>]]
* [[w:6th century BC]] | scripture | '''[[w:Daniel (biblical figure)]]''' was in [[w:Babylonian captivity]] when he had the visions in which God first warned us of synthetic human-like fakes.