Synthetic human-like fakes
When the [[Glossary#No camera|camera does not exist]], but the subject being imaged with a simulation of a (movie) camera deceives the watcher into believing it is some living or dead person, it is a '''digital look-alike'''.

When it cannot be determined by human testing whether some fake voice is a synthetic fake of some person's voice, or an actual recording made of that person's real voice, it is a '''digital sound-alike'''.
[[File:BlV1999-morphable-model-till-match-low-res-rip.png|thumb|right|460px|Image 2 (low resolution rip)
<br/>(1) Sculpting a morphable model to one single picture
<br/>(2) Produces 3D approximation
<small>Image 2 by Blanz and Vetter – Copyright ACM 1999 – http://dl.acm.org/citation.cfm?doid=311535.311556 – Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page.</small>]]
[[File:Saint John on Patmos.jpg|thumb|left|360px|See [[Biblical explanation - The books of Daniel and Revelation]] for the advance warning for our time that we were given in the 6th century BC and then again in the 1st century.
<br/><br/>
'Saint John on Patmos' pictures [[w:John of Patmos]] on [[w:Patmos]] writing down the visions to make the [[w:Book of Revelation]]. Picture from folio 17 of the [[w:Très Riches Heures du Duc de Berry]] (1412-1416) by the [[w:Limbourg brothers]]. Currently located at the [[w:Musée Condé|Musée Condé]], 40 km north of Paris, France.]]
== Digital look-alikes ==
{{#ev:youtube|LWLadJFI8Pk|640px|right|It is recommended that you watch ''In Event of Moon Disaster - FULL FILM'' (2020) at the [https://moondisaster.org/ '''moondisaster.org''' project website] by the [https://virtuality.mit.edu/ Center for Advanced Virtuality] of the [[w:Massachusetts Institute of Technology|MIT]]}}
[[File:The-diffuse-reflection-deducted-from-the-specular-reflection-Debevec-2000.png|thumb|left|260px|Subtraction of the diffuse reflection from the specular reflection yields the specular component of the model's reflectance.
<br /><br />
<small>[[:File:Deb-2000-reflectance-separation.png|Original picture]] by [[w:Paul Debevec|Debevec]] et al. - Copyright ACM 2000 https://dl.acm.org/citation.cfm?doid=311779.344855</small>]]
In the cinemas we have seen digital look-alikes for over 15 years. These digital look-alikes have "clothing" (a simulation of clothing is not clothing) or "superhero costumes" and "superbaddie costumes", and they don't need to care about the laws of physics, let alone the laws of physiology. It is generally accepted that digital look-alikes made their public debut in the sequels of The Matrix, i.e. [[w:The Matrix Reloaded]] and [[w:The Matrix Revolutions]], released in 2003. It can be considered almost certain that it was not possible to make these before 1999, as the final piece of the puzzle for making a (still) digital look-alike that passes human testing, the [[Glossary#Reflectance capture|reflectance capture]] of the human face, was achieved for the first time in 1999 at the [[w:University of Southern California]] and was presented to the crème de la crème of the computer graphics field at their annual gathering, SIGGRAPH 2000.<ref name="Deb2000">
{{cite book
}}</ref>
{{Q|Do you think that was [[w:Hugo Weaving|Hugo Weaving]]'s left cheekbone that [[w:Keanu Reeves|Keanu Reeves]] punched in with his right fist?|Trad|The Matrix Revolutions}}
=== The problems with digital look-alikes ===
Extremely unfortunately for humankind, organized criminal leagues that possess the '''weapons capability''' of making believable-looking '''synthetic pornography''' are producing '''synthetic terror porn'''<ref group="footnote" name="About the term synthetic terror porn">It is terminologically more precise, more inclusive and more useful to talk about 'synthetic terror porn', if we want to talk about things with their real names, than about 'synthetic rape porn', because synthesizing recordings of consensual-looking sex scenes can also be terroristic in intent.</ref> on industrial production pipelines, by animating digital look-alikes and distributing the results in the murky Internet in exchange for money stacks that are getting thinner and thinner as time goes by.

These industrially produced pornographic delusions are causing great human suffering, especially in their direct victims, but they are also tearing our communities and societies apart, sowing blind rage, perceptions of deepening chaos and feelings of powerlessness, and provoking violence. This '''hate illustration''' increases and strengthens hate thinking, hate speech and hate crimes, tears our fragile social constructions apart and with time perverts humankind's view of humankind into an almost unrecognizable shape, unless we interfere with resolve.

For these reasons the bannable '''raw materials''', i.e. covert models, needed to produce this disinformation terror on the information-industrial production pipelines '''[[Law proposals to ban covert modeling|should be prohibited by law]]''' in order to protect humans from arbitrary abuse by criminal parties.
=== List of possible naked digital look-alike attacks ===
* The classic "''portrayal of as if in involuntary sex''"-attack. (Digital look-alike "cries")
* "''Sexual preference alteration''"-attack. (Digital look-alike "smiles")
* "''Cutting / beating''"-attack (Constructs a deceptive history for genuine scars)
* "''Mutilation''"-attack (Digital look-alike "dies")
* "''Unconscious and injected''"-attack (Digital look-alike gets a "disease")
----
== Digital sound-alikes ==
Living people can defend<ref group="footnote" name="judiciary maybe not aware">Whether a suspect can defend against faked synthetic speech that sounds like them depends on how up-to-date the judiciary is. If no information and instructions about digital sound-alikes have been given to the judiciary, they will likely not believe the defense denying that the recording is of the suspect's voice.</ref> themselves against a digital sound-alike by denying the things the digital sound-alike says, if the fakes are presented to the target, but dead people cannot. Digital sound-alikes offer criminals new disinformation attack vectors and wreak havoc on provability.
----
=== 'Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis' 2018 by Google Research (external transclusion) ===
* In '''2018''' at the '''[[w:Conference on Neural Information Processing Systems|Conference on Neural Information Processing Systems]]''' (NeurIPS) the work [http://papers.nips.cc/paper/7700-transfer-learning-from-speaker-verification-to-multispeaker-text-to-speech-synthesis 'Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis'] ([https://arxiv.org/abs/1806.04558 at arXiv.org]) was presented. The pre-trained model is able to steal voices from a sample of only '''5 seconds''' with almost convincing results.

The Iframe below is transcluded from [https://google.github.io/tacotron/publications/speaker_adaptation/ 'Audio samples from "Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis"' at google.github.io], the audio samples of a sound-like-anyone machine presented at the 2018 [[w:NeurIPS]] conference by Google researchers.

Observe how good the "VCTK p240" system is at deceiving a listener into thinking that a person is doing the talking.
{{#Widget:Iframe - Audio samples from Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis by Google Research}}
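The transfer-learning idea in the paper above rests on a '''speaker encoder''' that maps any short utterance to a fixed-length embedding vector; the synthesizer is then conditioned on that vector, and speaker-verification systems compare embeddings with cosine similarity. A minimal pure-Python sketch of the comparison step (the <code>embed</code> function here is an invented toy stand-in for illustration only; the encoder in the paper is a trained neural network):

```python
import math

def embed(samples, dim=4):
    """Toy 'speaker embedding': mean of the signal chopped into `dim`
    equal frames. A real speaker encoder is a trained neural network."""
    n = len(samples) // dim
    return [sum(samples[i * n:(i + 1) * n]) / n for i in range(dim)]

def cosine_similarity(a, b):
    """Speaker-verification score: cosine of the angle between embeddings."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Two "utterances" from the same toy speaker, and one from a different one
utt_a = [math.sin(0.1 * i) for i in range(400)]
utt_b = [math.sin(0.1 * i + 0.05) for i in range(400)]  # same "speaker", slight shift
utt_c = [math.sin(0.37 * i) for i in range(400)]        # different "speaker"

same = cosine_similarity(embed(utt_a), embed(utt_b))
diff = cosine_similarity(embed(utt_a), embed(utt_c))
```

With a trained encoder, a high score means "same speaker" — which is exactly the signal the synthesizer exploits to clone a voice from a 5-second sample.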
[[File:Helsingin-Sanomat-2012-David-Martin-Howard-of-University-of-York-on-apporaching-digital-sound-alikes.jpg|left|thumb|338px|A picture of a cut-away titled "''Voice-terrorist could mimic a leader''" from a 2012 [[w:Helsingin Sanomat]] warning that the sound-like-anyone machines are approaching. Thank you to homie [https://pure.york.ac.uk/portal/en/researchers/david-martin-howard(ecfa9e9e-1290-464f-981a-0c70a534609e).html Prof. David Martin Howard] of the [[w:University of York]], UK and the anonymous editor for the heads-up.]]
----
=== Example of a hypothetical 4-victim digital sound-alike attack ===
A very simple example of a digital sound-alike attack is as follows:

Someone uses a digital sound-alike to call somebody's voicemail from an unknown number and to speak, for example, illegal threats. In this example there are at least four victims:
# Victim #1 - The person whose voice has been stolen into a covert model and a digital sound-alike made from it to frame them for crimes
# Victim #2 - The person to whom the illegal threat is presented in recorded form by a digital sound-alike that deceptively sounds like victim #1
# Victim #3 - Our law enforcement systems, as they are put to chase after and interrogate the innocent victim #1
# Victim #4 - Our judiciary, which prosecutes and possibly convicts the innocent victim #1.
Thus it is high time to act and to '''[[Law proposals to ban covert modeling|criminalize the covert modeling of human appearance and voice!]]'''
=== Examples of speech synthesis software not quite able to fool a human yet ===
Some other contenders to create digital sound-alikes exist, though as of 2019 their speech synthesis in most use scenarios does not yet fool a human, because the results contain telltale signs that give them away as a speech synthesizer.
* '''[https://lyrebird.ai/ Lyrebird.ai]''' [https://www.youtube.com/watch?v=xxDBlZu__Xk (listen)]
* '''[https://candyvoice.com/ CandyVoice.com]''' [https://candyvoice.com/demos/voice-conversion (test with your choice of text)]
* '''[https://cstr-edinburgh.github.io/merlin/ Merlin]''', a [[w:neural network]] based speech synthesis system by the Centre for Speech Technology Research at the [[w:University of Edinburgh]]
* [https://papers.nips.cc/paper/8206-neural-voice-cloning-with-a-few-samples ''''Neural Voice Cloning with a Few Samples'''' at papers.nips.cc], [[w:Baidu Research]]'s shot at a sound-like-anyone machine did not convince in '''2018'''
=== Reporting on the sound-like-anyone-machines ===
* [https://www.forbes.com/sites/bernardmarr/2019/05/06/artificial-intelligence-can-now-copy-your-voice-what-does-that-mean-for-humans/#617f6d872a2a '''"Artificial Intelligence Can Now Copy Your Voice: What Does That Mean For Humans?"''' May 2019 reporting at forbes.com] on [[w:Baidu Research]]'s attempt at the sound-like-anyone machine demonstrated at the 2018 [[w:NeurIPS]] conference.
=== Documented digital sound-alike attacks ===
* Sound-like-anyone technology has found its way into the hands of criminals: in '''2019''' [[w:NortonLifeLock|Symantec]] researchers knew of 3 cases where the technology had been used for '''[[w:crime|crime]]'''
** [https://www.bbc.com/news/technology-48908736 '''"Fake voices 'help cyber-crooks steal cash'"''' at bbc.com] July 2019 reporting <ref name="BBC2019">
{{cite web
|url= https://www.bbc.com/news/technology-48908736
|quote= }}
</ref>
** [https://www.washingtonpost.com/technology/2019/09/04/an-artificial-intelligence-first-voice-mimicking-software-reportedly-used-major-theft/ '''"An artificial-intelligence first: Voice-mimicking software reportedly used in a major theft"''' at washingtonpost.com] documents a [[w:fraud]] committed with a digital sound-like-anyone machine, July 2019 reporting.<ref name="WaPo2019">
{{cite web
|url= https://www.washingtonpost.com/technology/2019/09/04/an-artificial-intelligence-first-voice-mimicking-software-reportedly-used-major-theft/
|publisher= [[w:Washington Post]]
|access-date= 2019-07-22
|quote= }}
</ref>
----
The video below, 'This AI Clones Your Voice After Listening for 5 Seconds' by '2 minute papers', describes the voice-thieving machine presented by Google Research at [[w:NeurIPS|NeurIPS]] 2018.

{{#ev:youtube|0sR1rU3gLzQ|640px|right|Video 'This AI Clones Your Voice After Listening for 5 Seconds' by '2 minute papers' describes the voice-thieving machine by Google Research at [[w:NeurIPS|NeurIPS]] 2018.}}
[[File:Spectrogram-19thC.png|thumb|right|640px|A [[w:spectrogram|spectrogram]] of a male voice saying 'nineteenth century']]
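A spectrogram like the one pictured is built by sliding a short window along the audio and taking the magnitude of the [[w:discrete Fourier transform|discrete Fourier transform]] of each window. A minimal pure-Python sketch (real tools use the FFT with overlapping, tapered windows; the frame length and test signal here are illustrative):

```python
import math

def dft_magnitudes(frame):
    """Magnitude of the DFT of one frame, for bins 0 .. len(frame)//2."""
    n = len(frame)
    mags = []
    for k in range(n // 2 + 1):
        re = sum(frame[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(frame[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    return mags

def spectrogram(samples, frame_len=64):
    """Chop the signal into non-overlapping frames; each row of the result
    is the spectrum of one time slice."""
    frames = [samples[i:i + frame_len]
              for i in range(0, len(samples) - frame_len + 1, frame_len)]
    return [dft_magnitudes(f) for f in frames]

# A pure tone with exactly 8 cycles per 64-sample frame -> energy in bin 8
signal = [math.sin(2 * math.pi * 8 * t / 64) for t in range(256)]
spec = spectrogram(signal)
peak_bin = max(range(len(spec[0])), key=lambda k: spec[0][k])  # expect bin 8
```

In a voice spectrogram such as the one above, the horizontal bands are the harmonics of the speaker's pitch; a synthetic voice can leave statistical traces in exactly this representation.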
== Text synthesis ==
[[w:Chatbot]]s have existed for a long time, but only now, armed with AI, are they becoming more deceptive.

In [[w:natural language processing]], development in [[w:natural-language understanding]] leads to more cunning [[w:natural-language generation]] AI.
[[w:OpenAI]]'s [[w:OpenAI#GPT|Generative Pre-trained Transformer]] ('''GPT''') is a left-to-right [[w:Transformer (machine learning model)|transformer]]-based [[w:Natural-language generation|text generation]] model, succeeded by [[w:OpenAI#GPT-2|GPT-2]] and [[w:OpenAI#GPT-3|GPT-3]].
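'Left-to-right' generation means the model emits one token at a time, each conditioned on the text generated so far. That sampling loop can be sketched with a toy character-level model (the hand-written <code>BIGRAMS</code> table below is an invented stand-in for GPT's transformer, purely for illustration):

```python
import random

# Toy 'model': probability of the next character given only the previous one.
# A real GPT conditions on the whole preceding context with a transformer.
BIGRAMS = {
    "a": {"b": 0.9, "a": 0.1},
    "b": {"a": 0.9, "b": 0.1},
}

def generate(prompt, length, seed=0):
    """Autoregressive loop: sample the next token, append it, repeat."""
    rng = random.Random(seed)
    text = prompt
    for _ in range(length):
        dist = BIGRAMS[text[-1]]          # condition on generated text so far
        chars, weights = zip(*dist.items())
        text += rng.choices(chars, weights=weights)[0]
    return text

sample = generate("a", 10)
```

Scaling this loop up — a transformer instead of a lookup table, subword tokens instead of characters, billions of parameters — is what makes GPT-3's output hard to tell from human writing.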
''' Reporting / announcements '''
* [https://www.technologyreview.com/2020/08/14/1006780/ai-gpt-3-fake-blog-reached-top-of-hacker-news/ ''''A college kid’s fake, AI-generated blog fooled tens of thousands. This is how he made it.'''' at technologyreview.com] '''August 2020''' reporting in the [[w:MIT Technology Review]] by Karen Hao about GPT-3.
* [https://analyticssteps.com/blogs/detection-fake-and-false-news-text-analysis-approaches-and-cnn-deep-learning-model '''"Detection of Fake and False News (Text Analysis): Approaches and CNN as Deep Learning Model"''' at analyticsteps.com], a 2019 summary written by Shubham Panth.
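Text-analysis detection approaches like those surveyed above commonly start from a bag-of-words classifier baseline. A minimal [[w:naive Bayes classifier|naive Bayes]] sketch in pure Python (the tiny training corpus and its labels are invented purely for illustration; real systems train on large labeled datasets, e.g. with the CNN models the article discusses):

```python
import math
from collections import Counter

def train(docs):
    """docs: list of (text, label). Returns per-label word counts,
    per-label word totals, and per-label document counts (priors)."""
    counts, totals, labels = {}, Counter(), Counter()
    for text, label in docs:
        labels[label] += 1
        counts.setdefault(label, Counter())
        for word in text.lower().split():
            counts[label][word] += 1
            totals[label] += 1
    return counts, totals, labels

def classify(text, counts, totals, labels):
    """Pick the label maximizing log prior plus summed log word
    likelihoods, with add-one smoothing over the shared vocabulary."""
    vocab = {w for c in counts.values() for w in c}
    n_docs = sum(labels.values())
    best, best_score = None, float("-inf")
    for label in labels:
        score = math.log(labels[label] / n_docs)
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1)
                              / (totals[label] + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

# Invented toy corpus, purely illustrative
training = [
    ("shocking miracle cure doctors hate", "fake"),
    ("you will not believe this shocking secret", "fake"),
    ("council approves annual budget report", "real"),
    ("study published in peer reviewed journal", "real"),
]
model = train(training)
verdict = classify("shocking secret cure", *model)
```

The same scoring structure — prior plus per-feature evidence — underlies far more capable detectors; only the feature extractor changes.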
== Countermeasures against synthetic human-like fakes ==
<section begin=APW_AI-transclusion />
=== Organizations against synthetic human-like fakes ===
[[File:DARPA_Logo.jpg|thumb|left|240px|The Defense Advanced Research Projects Agency, better known as [[w:DARPA|DARPA]], has been active in the field of countering synthetic fake video for longer than the public has been aware of the problems existing.]]
* '''[[w:DARPA]]''' [https://www.darpa.mil/program/media-forensics '''DARPA program''': ''''Media Forensics'''' ('''MediFor''') at darpa.mil] aims to develop technologies for the automated assessment of the integrity of an image or video and to integrate these in an end-to-end media forensics platform. Archive.org first crawled their homepage in [https://web.archive.org/web/20160630154819/https://www.darpa.mil/program/media-forensics June '''2016''']<ref name="IA-MediFor-2016-crawl">https://web.archive.org/web/20160630154819/https://www.darpa.mil/program/media-forensics</ref>.
* [https://www.darpa.mil/program/semantic-forensics '''DARPA program''': ''''Semantic Forensics'''' ('''SemaFor''') at darpa.mil] aims to counter synthetic disinformation by developing systems for detecting semantic inconsistencies in forged media. They state that they hope to create technologies that "will help identify, deter, and understand adversary disinformation campaigns". More information at [[w:Duke University]]'s [https://researchfunding.duke.edu/semantic-forensics-semafor '''Research Funding database: Semantic Forensics (SemaFor)''' at researchfunding.duke.edu] and some at [https://www.grants.gov/web/grants/view-opportunity.html?oppId=319894 '''Semantic Forensics grant opportunity''' (closed Nov 2019) at grants.gov]. Archive.org first crawled their website in [https://web.archive.org/web/20191108090036/https://www.darpa.mil/program/semantic-forensics November '''2019''']<ref name="IA-SemaFor-2019-crawl">https://web.archive.org/web/20191108090036/https://www.darpa.mil/program/semantic-forensics</ref>.
* '''[[w:University of Colorado Denver]]''' is the home of the [https://artsandmedia.ucdenver.edu/areas-of-study/national-center-for-media-forensics/about-the-national-center-for-media-forensics '''National Center for Media Forensics''' at artsandmedia.ucdenver.edu], which offers a [https://artsandmedia.ucdenver.edu/areas-of-study/national-center-for-media-forensics/media-forensics-graduate-program Master's degree program], [https://artsandmedia.ucdenver.edu/areas-of-study/national-center-for-media-forensics/training-courses training courses] and [https://artsandmedia.ucdenver.edu/areas-of-study/national-center-for-media-forensics/national-center-for-media-forensics-research scientific basic and applied research]. [https://artsandmedia.ucdenver.edu/areas-of-study/national-center-for-media-forensics/faculty-staff Faculty staff at the NCMF]
[[File:Connie Leyva 2015.jpg|thumb|left|240px|[[w:California|California]] [[w:California State Senate|Senator]] [[w:Connie Leyva|Connie Leyva]] introduced [https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=201920200SB564 California Senate Bill SB 564] in Feb '''2019'''. It has been [https://www.sagaftra.org/action-alert-support-california-bill-end-deepfake-porn endorsed by SAG-AFTRA], but has not yet passed.]]
* '''[[w:SAG-AFTRA]]''' [https://www.sagaftra.org/action-alert-support-california-bill-end-deepfake-porn SAG-AFTRA ACTION ALERT: '''"Support California Bill to End Deepfake Porn"''' at sagaftra.org '''endorses'''] [https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=201920200SB564 California Senate Bill SB 564] introduced to the [[w:California State Senate|California State Senate]] by [[w:California|California]] [[w:Connie Leyva|Senator Connie Leyva]] in Feb '''2019'''.
=== Events against synthetic human-like fakes ===
* '''2020''' | '''[[w:Conference on Computer Vision and Pattern Recognition|CVPR]]''' | [https://sites.google.com/view/wmediaforensics2020/home 2020 Conference on Computer Vision and Pattern Recognition: ''''Workshop on Media Forensics'''' at sites.google.com], a '''June 2020''' workshop at the [[w:Conference on Computer Vision and Pattern Recognition]].
* '''2019''' | '''[[w:Conference on Neural Information Processing Systems|NeurIPS]]''' | [[w:Facebook, Inc.|Facebook, Inc.]] [https://spectrum.ieee.org/tech-talk/artificial-intelligence/machine-learning/facebook-ai-launches-its-deepfake-detection-challenge '''"Facebook AI Launches Its Deepfake Detection Challenge"''' at spectrum.ieee.org]
* '''2019''' | '''CVPR''' | [https://sites.google.com/view/mediaforensics2019/home '''2019''' CVPR: ''''Workshop on Media Forensics'''']
* '''Annual''' (?) | '''[[w:National Institute of Standards and Technology]] (NIST)''' | [https://www.nist.gov/itl/iad/mig/media-forensics-challenge NIST: ''''Media Forensics Challenge'''' at nist.gov], an iterative research challenge by the [[w:National Institute of Standards and Technology]] with the ongoing challenge being the 2nd one in action. [https://www.nist.gov/itl/iad/mig/media-forensics-challenge-2019-0 The evaluation criteria for the 2019 iteration are being formed.]
* '''2018''' | '''[[w:European Conference on Computer Vision|ECCV]]''' | [https://sites.google.com/view/wocm2018/home ECCV 2018: ''''Workshop on Objectionable Content and Misinformation'''' at sites.google.com], a workshop at the '''2018''' [[w:European Conference on Computer Vision]] in [[w:Munich]], focused on detection of objectionable content, e.g. [[w:nudity]], [[w:pornography]], [[w:violence]], [[w:hate]], [[w:Child sexual abuse|children exploitation]] and [[w:terrorism]] among others, and on addressing misinformation problems when people are fed [[w:disinformation]] and pass it on as misinformation. Announced topics included [[w:Outline of forensic science|image/video forensics]], [[w:detection]]/[[w:analysis]]/[[w:understanding]] of [[w:Counterfeit|fake]] images/videos, [[w:misinformation]] detection/understanding (mono-modal and [[w:Multimodality|multi-modal]]), adversarial technologies and detection/understanding of objectionable content.
* '''2018''' | '''[[w:National Institute of Standards and Technology|NIST]]''' | [https://www.nist.gov/itl/iad/mig/media-forensics-challenge-2018 NIST ''''Media Forensics Challenge 2018'''' at nist.gov] was the second annual evaluation to support research and help advance the state of the art for image and video forensics technologies – technologies that determine the region and type of manipulations in imagery (image/video data) and the phylogenic process that modified the imagery.
* '''2017''' | '''[[w:National Institute of Standards and Technology|NIST]]''' | [https://www.nist.gov/itl/iad/mig/nimble-challenge-2017-evaluation NIST ''''Nimble Challenge 2017'''' at nist.gov]
=== Studies against synthetic human-like fakes ===
* [https://arxiv.org/abs/2001.06564 ''''Media Forensics and DeepFakes: an overview'''' at arXiv.org] [https://arxiv.org/pdf/2001.06564.pdf (as .pdf at arXiv.org)], a '''2020''' '''review''' on the subject of digital look-alikes and media forensics
* [https://scholarship.law.duke.edu/cgi/viewcontent.cgi?article=1333&context=dltr ''''DEEPFAKES: False pornography is here and the law cannot protect you'''' at scholarship.law.duke.edu] by Douglas Harris, published in [https://scholarship.law.duke.edu/dltr/vol17/iss1/ Duke Law & Technology Review - Volume 17 on '''2019'''-01-05] by [[w:Duke University|Duke University]] [[w:Duke University School of Law|School of Law]]
''' Search for more '''
* [[w:Law review]]
** [[w:List of law reviews in the United States]]
=== Companies against synthetic human-like fakes ===
* '''[https://cyabra.com/ Cyabra.com]''' is an AI-based system that helps organizations be on guard against disinformation attacks<ref group="1st seen in" name="ReutersDisinfomation2020">https://www.reuters.com/article/us-cyber-deepfake-activist/deepfake-used-to-attack-activist-couple-shows-new-disinformation-frontier-idUSKCN24G15E</ref>. [https://www.reuters.com/article/us-cyber-deepfake-activist/deepfake-used-to-attack-activist-couple-shows-new-disinformation-frontier-idUSKCN24G15E Reuters.com reporting] from July 2020.
<section end=APW_AI-transclusion />
=== SSF! wiki proposed countermeasure to synthetic porn: Adequate Porn Watcher AI (transcluded) ===
Transcluded from [[Adequate Porn Watcher AI]]
{{#lstx:Adequate Porn Watcher AI|See_also}}
=== Possible legal response: Outlawing digital sound-alikes (transcluded) ===
Transcluded from [[User:Juho Kunsola/Law proposals#Law proposal to ban covert modeling of human voice|Juho's proposal on banning digital sound-alikes]]
{{#section-h:User:Juho Kunsola/Law proposals|Law proposal to ban covert modeling of human voice}}
== Timeline of synthetic human-like fakes ==
=== 2020's synthetic human-like fakes ===
* '''2020''' | demonstration | '''[https://moondisaster.org/ Moondisaster.org]''' (full film embedded in website), a project by the [https://virtuality.mit.edu/ Center for Advanced Virtuality] of the [[w:Massachusetts Institute of Technology|MIT]], makes a synthetic human-like fake in the appearance and almost in the sound of Nixon. Alternative place to watch: [https://www.youtube.com/watch?v=LWLadJFI8Pk ''In Event of Moon Disaster - FULL FILM'' at youtube.com]
** [https://www.cnet.com/news/mit-releases-deepfake-video-of-nixon-announcing-nasa-apollo-11-disaster/ Cnet.com July 2020 reporting ''MIT releases deepfake video of 'Nixon' announcing NASA Apollo 11 disaster'']
[[File:Marc Berman.jpg|thumb|120px|left|Homie [[w:Marc Berman|Marc Berman]], a righteous fighter for our human rights in this age of industrial disinformation filth and a member of the [[w:California State Assembly]], most loved for authoring [https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201920200AB602 AB-602], which came into effect on Jan 1 2020, banning both the manufacturing and [[w:Digital distribution|distribution]] of synthetic pornography without the [[w:consent]] of the people depicted.]]
* '''2020''' | US state law | On January 1 <ref name="KFI2019">
{{cite web
|url= https://kfiam640.iheart.com/content/2019-12-30-here-are-the-new-california-laws-going-into-effect-in-2020/
|title= Here Are the New California Laws Going Into Effect in 2020
|last= Johnson
|first= R.J.
|date= 2019-12-30
|website= [[KFI]]
|publisher= [[iHeartMedia]]
|access-date= 2020-07-13
|quote= }}
</ref> the [[w:California]] [[w:State law (United States)|state law]] [https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201920200AB602 AB-602] came into effect, banning the manufacturing and [[w:Digital distribution|distribution]] of synthetic pornography without the [[w:consent]] of the people depicted. AB-602 provides victims of synthetic pornography with [[w:injunction|injunctive relief]] and poses legal threats of [[w:statutory damages|statutory]] and [[w:punitive damages]] on [[w:criminal]]s making or distributing synthetic pornography without consent. The bill AB-602 was signed into law by California [[w:Governor (United States)|Governor]] [[w:Gavin Newsom]] on October 3, 2019 and was authored by [[w:California State Assembly]] member [[w:Marc Berman]].<ref name="CNET2019">
{{cite web
| last = Mihalcik
| first = Carrie
| title = California laws seek to crack down on deepfakes in politics and porn
| website = [[w:cnet.com]]
| publisher = [[w:CNET]]
| date = 2019-10-04
| url = https://www.cnet.com/news/california-laws-seek-to-crack-down-on-deepfakes-in-politics-and-porn/
| access-date = 2020-07-13
}}
</ref>
* '''2020''' | Chinese legislation | On January 1 the Chinese law requiring that synthetically faked footage bear a clear notice of its fakeness came into effect. Failure to comply could be considered a [[w:crime]], the [[w:Cyberspace Administration of China]] stated on its website. China announced this new law in November 2019.<ref name="Reuters2019">
{{cite web
| url = https://www.reuters.com/article/us-china-technology/china-seeks-to-root-out-fake-news-and-deepfakes-with-new-online-content-rules-idUSKBN1Y30VU
| title = China seeks to root out fake news and deepfakes with new online content rules
| date = 2019-11-29
| website = [[w:Reuters.com]]
| publisher = [[w:Reuters]]
| access-date = 2020-07-13
| quote = }}
</ref> The Chinese government seems to be reserving the right to prosecute both users and [[w:online video platform]]s failing to abide by the rules.<ref name="TheVerge2019">
{{cite web
| url = https://www.theverge.com/2019/11/29/20988363/china-deepfakes-ban-internet-rules-fake-news-disclosure-virtual-reality
| title = China makes it a criminal offense to publish deepfakes or fake news without disclosure
| last = Statt
| first = Nick
| date = 2019-11-29
| website =
| publisher = [[w:The Verge]]
| access-date = 2020-07-13
| quote = }}
</ref>
=== 2010's synthetic human-like fakes ===
* '''2019''' | demonstration | In September 2019 [[w:Yle]], the Finnish [[w:public broadcasting company]], aired a result of experimental [[w:journalism]]: [https://yle.fi/uutiset/3-10955498 '''a deepfake of the President in office'''], [[w:Sauli Niinistö]], in its main news broadcast, for the purpose of highlighting the advancing disinformation technology and the problems that arise from it.
* '''2019''' | US state law | Since September 1 the [[w:Texas]] senate bill [https://capitol.texas.gov/tlodocs/86R/billtext/html/SB00751F.htm '''SB 751''']'s [[w:amendment]]s to the election code have been in effect, giving [[w:candidates]] in [[w:elections]] a 30-day protection period before the elections during which making and distributing digital look-alikes or synthetic fakes of the candidates is an offense. The law text defines the subject of the law as "''a video, created with the intent to deceive, that appears to depict a real person performing an action that did not occur in reality''".<ref name="TexasSB751">
{{cite web
|url= https://capitol.texas.gov/BillLookup/History.aspx?LegSess=86R&Bill=SB751
|title= Relating to the creation of a criminal offense for fabricating a deceptive video with intent to influence the outcome of an election
|last=
|first=
|date= 2019-06-14
|website=
|publisher= [[w:Texas]]
|access-date= 2020-07-13
|quote= In this section, "deep fake video" means a video, created with the intent to deceive, that appears to depict a real person performing an action that did not occur in reality}}
</ref>
[[File:Marcus Simon.jpeg|thumb|right|108px|[[w:Marcus Simon|Marcus Simon]] ([http://marcussimon.com/ marcussimon.com]) is a Member of the [[w:Virginia House of Delegates]] and a pioneer in legislating against synthetic filth.]]
* '''2019''' | US state law | Since July 1 <ref>
{{Cite web
| url=https://www.fauquier.com/news/new-state-laws-go-into-effect-july/article_6e2e16c8-96b7-11e9-88d0-83a8852ef3eb.html
| title=New state laws go into effect July 1
}}
</ref> [[w:Virginia]] [[w:criminalization|has criminalized]] the sale and dissemination of unauthorized synthetic pornography, but not the manufacture.<ref name="Virginia2019Chapter515">
{{cite web
| url = https://law.lis.virginia.gov/vacode/18.2-386.2/
| title = § 18.2-386.2. Unlawful dissemination or sale of images of another; penalty.
| last =
| first =
| date =
| website =
| publisher = [[w:Virginia]]
| access-date = 2020-07-13
| quote = }}
</ref> as [https://law.lis.virginia.gov/vacode/18.2-386.2/ § 18.2-386.2, titled '''''Unlawful dissemination or sale of images of another; penalty.'''''] became part of the '''[[w:Code of Virginia]]'''. The law text states: "''Any person who, with the [[w:Intention (criminal law)|intent]] to [[w:coercion|coerce]], [[w:harassment|harass]], or [[w:intimidation|intimidate]], [[w:Malice_(law)|malicious]]ly [[w:dissemination|disseminates]] or [[w:sales|sells]] any videographic or still image created by any means whatsoever that depicts another person who is totally [[w:nudity|nude]], or in a state of undress so as to expose the [[w:sex organs|genitals]], pubic area, [[w:buttocks]], or female [[w:breast]], where such person knows or has reason to know that he is not [[w:license]]d or [[w:authorization|authorized]] to disseminate or sell such videographic or still image is guilty of a Class 1 [[w:Misdemeanor#United States|misdemeanor]].''"<ref name="Virginia2019Chapter515"/> Identical bills were introduced: [https://lis.virginia.gov/cgi-bin/legp604.exe?191+sum+HB2678 House Bill 2678], presented by [[w:Delegate (American politics)|Delegate]] [[w:Marcus Simon]] to the [[w:Virginia House of Delegates]] on January 14 2019, and three days later [https://lis.virginia.gov/cgi-bin/legp604.exe?191+sum+SB1736 Senate Bill 1736], introduced to the [[w:Senate of Virginia]] by Senator [[w:Adam Ebbin]].
* '''2019''' | science and demonstration | [https://arxiv.org/pdf/1905.09773.pdf ''''Speech2Face: Learning the Face Behind a Voice'''' at arXiv.org] a system for generating likely facial features based on the voice of a person, presented by the [[w:MIT Computer Science and Artificial Intelligence Laboratory]] at the 2019 [[w:Conference on Computer Vision and Pattern Recognition|CVPR]]. [https://github.com/saiteja-talluri/Speech2Face Speech2Face at github.com] This may develop into something that really causes problems. [https://neurohive.io/en/news/speech2face-neural-network-predicts-the-face-behind-a-voice/ "Speech2Face: Neural Network Predicts the Face Behind a Voice" reporting at neurohive.io], [https://belitsoft.com/speech-recognition-software-development/speech2face "Speech2Face Sees Voices and Hears Faces: Dreams Come True with AI" reporting at belitsoft.com]
* '''2019''' | <font color="red">'''crime'''</font> | '''[[w:Fraud]]''' with [[#digital sound-alikes|digital sound-alike]] technology surfaced in 2019. See [https://www.washingtonpost.com/technology/2019/09/04/an-artificial-intelligence-first-voice-mimicking-software-reportedly-used-major-theft/?noredirect=on ''''''An artificial-intelligence first: Voice-mimicking software reportedly used in a major theft''''''], a 2019 Washington Post article or [https://www.forbes.com/sites/jessedamiani/2019/09/03/a-voice-deepfake-was-used-to-scam-a-ceo-out-of-243000/ ''''''A Voice Deepfake Was Used To Scam A CEO Out Of $243,000'''''' at Forbes.com] (2019-09-03)
</ref>
* '''<font color="red">2018</font>''' | '''<font color="red">counter-measure</font>''' | In September 2018 Google added “'''involuntary synthetic pornographic imagery'''” to its '''ban list''', allowing anyone to request the search engine block results that falsely depict them as “nude or in a sexually explicit situation.”<ref name="WashingtonPost2018">
</ref> '''[https://support.google.com/websearch/answer/9116649?hl=en Information on removing involuntary fake pornography from Google at support.google.com]''' if it shows up in Google and the '''[https://support.google.com/websearch/troubleshooter/3111061#ts=2889054%2C2889099%2C2889064%2C9171203 form to request removing involuntary fake pornography at support.google.com]''', select "''I want to remove: A fake nude or sexually explicit picture or video of myself''"
[[File:GoogleLogoSept12015.png|thumb|right|300px|[[w:Google|Google]]'s logo. Google Research demonstrated their '''[https://google.github.io/tacotron/publications/speaker_adaptation/ sound-like-anyone-machine]''' at the '''2018''' [[w:Conference on Neural Information Processing Systems|Conference on Neural Information Processing Systems]] (NeurIPS). It requires only 5 seconds of sample to steal a voice.]]
* '''<font color="red">2018</font>''' | <font color="red">science</font> and <font color="red">demonstration</font> | The work [http://papers.nips.cc/paper/7700-transfer-learning-from-speaker-verification-to-multispeaker-text-to-speech-synthesis ''''Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis''''] ([https://arxiv.org/abs/1806.04558 at arXiv.org]) was presented at the 2018 [[w:Conference on Neural Information Processing Systems]] ('''NeurIPS'''). The pre-trained model is able to steal voices from a sample of only '''5 seconds''' with almost convincing results.
* '''2018''' | demonstration | At the 2018 [[w:World Internet Conference]] in [[w:Wuzhen]] the [[w:Xinhua News Agency]] presented two digital look-alikes made to the resemblance of its real news anchors Qiu Hao ([[w:Chinese language]])<ref name="TheGuardian2018">
</ref> Neither the [[w:speech synthesis]] used nor the gesturing of the digital look-alike anchors were good enough to deceive the watcher to mistake them for real humans imaged with a TV camera.
* '''2018''' | controversy / demonstration | The [[w:deepfake]]s controversy surfaced where [[w:Pornographic film|porn video]]s were doctored utilizing [[w:deep learning|deep machine learning]] so that the face of the actress was replaced by the software's opinion of what another person's face would look like in the same pose and lighting.
* '''2017''' | science | '''[http://grail.cs.washington.edu/projects/AudioToObama/ 'Synthesizing Obama: Learning Lip Sync from Audio' at grail.cs.washington.edu]'''. At SIGGRAPH 2017 Supasorn Suwajanakorn et al. of the [[w:University of Washington]] presented an audio driven digital look-alike of the upper torso of Barack Obama. It was driven only by a voice track as source data for the animation after the training phase to acquire [[w:lip sync]] and wider facial information from [[w:training material]] consisting of 2D videos with audio had been completed.<ref name="Suw2017"></ref>
[[File:Adobe Corporate Logo.png|thumb|right|300px|[[w:Adobe Inc.]]'s logo. We can thank Adobe for publicly demonstrating their sound-like-anyone-machine in '''2016''' before an implementation was sold to criminal organizations.]]
* '''<font color="red">2016</font>''' | <font color="red">science</font> and demonstration | '''[[w:Adobe Inc.]]''' publicly demonstrates '''[[w:Adobe Voco]]''', a '''sound-like-anyone machine''' [https://www.youtube.com/watch?v=I3l4XLZ59iw '#VoCo. Adobe Audio Manipulator Sneak Peak with Jordan Peele | Adobe Creative Cloud' on Youtube]. The original Adobe Voco required '''20 minutes''' of sample '''to thieve a voice'''. <font color="green">'''Relevancy: certain'''</font>.
* '''2016''' | science | '''[http://www.niessnerlab.org/projects/thies2016face.html 'Face2Face: Real-time Face Capture and Reenactment of RGB Videos' at Niessnerlab.org]''' A paper (with videos) on the semi-real-time 2D video manipulation with gesture forcing and lip sync forcing synthesis by Thies et al., Stanford. <font color="green">'''Relevancy: certain'''</font>
* '''2015''' | movie | In ''[[w:Furious 7]]'' a digital look-alike of the actor [[w:Paul Walker]], who died in an accident during the filming, was made by [[w:Weta Digital]] to enable the completion of the film.<ref name="thr2015"></ref>
* '''2014''' | science | [[w:Ian Goodfellow]] et al. presented the principles of a '''[[w:generative adversarial network]]'''. '''GAN'''s made the headlines in early 2018 with the [[w:deepfake]]s controversies.
* '''2013''' | demonstration | At the 2013 SIGGRAPH [[w:Activision]] and USC presented "Digital Ira", a [[w:real time computing|real time]] digital face look-alike of Ari Shapiro, an ICT USC research scientist,<ref name="reform_youtube2015">
{{cite AV media
| people =
| doi =
| accessdate = 2017-07-13}}
</ref> The end result, both precomputed and rendered in real time with the most modern game [[w:Graphics processing unit|GPU]]s, is shown [http://gl.ict.usc.edu/Research/DigitalIra/ here] and looks fairly realistic.
* '''2013''' | demonstration | '''[https://ict.usc.edu/pubs/Scanning%20and%20Printing%20a%203D%20Portrait%20of%20President%20Barack%20Obama.pdf 'Scanning and Printing a 3D Portrait of President Barack Obama' at ict.usc.edu]'''. A 3D model and a 3D bust were made of President Obama with his consent. <font color="green">'''Relevancy: certain'''</font>
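The 2014 entry above names the [[w:generative adversarial network]] principle behind many of the later fakes in this timeline: a generator learns to fool a discriminator that is simultaneously learning to tell real data from generated data. A minimal illustrative sketch of that adversarial loop (a toy construction on 1-D numbers with hand-derived gradients, not any production deepfake system):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Real data: 1-D samples the generator must learn to imitate.
REAL_MEAN, REAL_STD = 3.0, 0.5

# Generator g(z) = a*z + b, discriminator D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0
w, c = 0.0, 0.0
lr = 0.02

for step in range(4000):
    real = rng.normal(REAL_MEAN, REAL_STD, 64)
    z = rng.normal(0.0, 1.0, 64)
    fake = a * z + b

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) + np.mean(-d_fake * fake))
    c += lr * (np.mean(1 - d_real) + np.mean(-d_fake))

    # Generator: gradient ascent on log D(fake) (non-saturating loss).
    d_fake = sigmoid(w * fake + c)
    gx = (1 - d_fake) * w          # dL/d(fake sample)
    a += lr * np.mean(gx * z)      # chain rule through fake = a*z + b
    b += lr * np.mean(gx)

# After training, generated samples should have drifted toward the real mean.
fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10000) + b))
print(round(fake_mean, 2))
```

The generator never sees the real data directly; it only receives gradient signal through the discriminator, which is the core of the adversarial idea.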
=== 2000's synthetic human-like fakes ===
[[File:The-matrix-logo.svg|thumb|right|300px|Logo of the [[w:The Matrix (franchise)]]]]
* '''2010''' | movie | [[w:Walt Disney Pictures]] released a sci-fi sequel entitled ''[[w:Tron: Legacy]]'' with a digitally rejuvenated digital look-alike made of the actor [[w:Jeff Bridges]] playing the [[w:antagonist]] [[w:List of Tron characters#CLU|CLU]].
* '''2009''' | movie | A digital look-alike of a younger [[w:Arnold Schwarzenegger]] was made for the movie ''[[w:Terminator Salvation]]'', though the end result was critiqued as unconvincing. Facial geometry was acquired from a 1984 mold of Schwarzenegger.
* '''2009''' | demonstration | [http://www.ted.com/talks/paul_debevec_animates_a_photo_real_digital_face.html Paul Debevec: ''''Animating a photo-realistic face'''' at ted.com] Debevec et al. presented new digital likenesses, made by [[w:Image Metrics]], this time of actress [[w:Emily O'Brien]] whose reflectance was captured with the USC light stage 5. At 00:04:59 you can see two clips, one with the real Emily shot with a real camera and one with a digital look-alike of Emily, shot with a simulation of a camera - <u>Which is which is difficult to tell</u>. Bruce Lawmen was scanned using USC light stage 6 in still position and also recorded running there on a [[w:treadmill]]. Many, many digital look-alikes of Bruce are seen running fluently and natural looking at the ending sequence of the TED talk video.<ref name="Deb2009">[http://www.ted.com/talks/paul_debevec_animates_a_photo_real_digital_face.html In this TED talk video] at 00:04:59 you can see ''two clips, one with the real Emily shot with a real camera and one with a digital look-alike of Emily, shot with a simulation of a camera - <u>Which is which is difficult to tell</u>''. Bruce Lawmen was scanned using USC light stage 6 in still position and also recorded running there on a [[w:treadmill]]. Many, many digital look-alikes of Bruce are seen running fluently and natural looking at the ending sequence of the TED talk video.</ref> Motion looks fairly convincing contrasted to the clunky run in the [[w:Animatrix#Final Flight of the Osiris|''Animatrix: Final Flight of the Osiris'']] which was [[w:state-of-the-art]] in 2003 if photorealism was the intention of the [[w:animators]].
* '''2004''' | movie | The '''[[w:Spider-man 2]]''' (and '''[[w:Spider-man 3]]''', 2007) films. Relevancy: The films include a [[Synthetic human-like fakes#Digital look-alike|digital look-alike]] made of actor [[w:Tobey Maguire]] by [[w:Sony Pictures Imageworks]].<ref name="Pig2005">
</ref>
* '''2003''' | short film | [[w:The Animatrix#Final Flight of the Osiris|''The Animatrix: Final Flight of the Osiris'']], a [[w:state-of-the-art]] attempt at human likenesses that did not quite fool the watcher, made by [[w:Square Pictures#Square Pictures|Square Pictures]].
* '''2003''' | movie(s) | The '''[[w:Matrix Reloaded]]''' and '''[[w:Matrix Revolutions]]''' films. Relevancy: '''First public display''' of '''[[synthetic human-like fakes#Digital look-alikes|digital look-alikes]]''' that are virtually '''indistinguishable from''' the '''real actors'''.
* '''2002''' | music video | '''[https://www.youtube.com/watch?v=3qIXIHAmcKU 'Bullet' by Covenant on Youtube]''' by [[w:Covenant (band)]] from their album [[w:Northern Light (Covenant album)]]. Relevancy: Contains the best upper-torso digital look-alike of Eskil Simonsson (vocalist) that their organization could procure at the time. Here you can observe the '''classic "''skin looks like cardboard''"-bug''' (assuming this was not intended) that '''thwarted efforts to''' make digital look-alikes that '''pass human testing''' before the '''reflectance capture and dissection in 1999''' by [[w:Paul Debevec]] et al. at the [[w:University of Southern California]] and subsequent development of the '''"Analytical [[w:bidirectional reflectance distribution function|BRDF]]"''' (quote-unquote) by ESC Entertainment, a company set up for the '''sole purpose''' of '''making the cinematography''' for the 2003 films Matrix Reloaded and Matrix Revolutions '''possible''', led by George Borshukov.
=== 1990's synthetic human-like fakes ===
[[File:BSSDF01_400.svg|thumb|left|300px|Traditional [[w:Bidirectional reflectance distribution function|BRDF]] vs. [[w:subsurface scattering|subsurface scattering]] inclusive BSSRDF i.e. [[w:Bidirectional scattering distribution function#Overview of the BxDF functions|Bidirectional scattering-surface reflectance distribution function]]. <br/><br/>
An analytical BRDF must take into account the subsurface scattering, or the end result '''will not pass human testing'''.]]
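The caption's point can be illustrated numerically. A pure surface BRDF such as Lambert goes pitch black the instant the light grazes past 90°, which reads as cardboard; cheap approximations of subsurface scatter, such as the "wrap lighting" trick sketched below (an illustrative stand-in chosen here for brevity, not the analytical BRDF the timeline refers to), let some light bleed past the terminator:

```python
import numpy as np

angles = np.linspace(0.0, np.pi, 7)   # angle between surface normal and light direction
cos_t = np.cos(angles)

# Pure Lambertian surface reflection: clamps to pitch black at the terminator.
lambert = np.maximum(cos_t, 0.0)

# "Wrap lighting": a cheap stand-in for subsurface scatter that lets
# some light wrap past the terminator, softening the falloff.
wrap = 0.5
wrapped = np.maximum((cos_t + wrap) / (1.0 + wrap), 0.0)

# At 90 degrees (index 3) Lambert is already black while the wrapped
# term still returns light; skin shaded without any such term looks hard-edged.
print(round(lambert[3], 6), round(wrapped[3], 6))  # prints: 0.0 0.333333
```

Real subsurface models (BSSRDFs) integrate light entering at one surface point and exiting at another; the wrap term only mimics the visible softening.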
* <font color="red">'''1999'''</font> | <font color="red">'''science'''</font> | '''[http://dl.acm.org/citation.cfm?id=344855 'Acquiring the reflectance field of a human face' paper at dl.acm.org]''' [[w:Paul Debevec]] et al. of [[w:University of Southern California|USC]] did the '''first known reflectance capture''' over '''the human face''' with their extremely simple [[w:light stage]]. They presented their method and results in [[w:SIGGRAPH]] 2000. The scientific breakthrough required finding the [[w:subsurface scattering|subsurface light component]] (the simulation models are glowing from within slightly), which can be found using the knowledge that light reflected from the oil-to-air layer retains its [[w:Polarization (waves)|polarization]] while the subsurface light loses its polarization. So, equipped only with a movable light source, a movable video camera, 2 polarizers and a computer program doing extremely simple math, the last piece required to reach photorealism was acquired.<ref name="Deb2000"/>
* <font color="red">'''1999'''</font> | <font color="red">'''institute founded'''</font> | The '''[[w:Institute for Creative Technologies]]''' was founded by the [[w:United States Army]] in the [[w:University of Southern California]]. It collaborates with the [[w:United States Army Futures Command]], [[w:United States Army Combat Capabilities Development Command]], [[w:Combat Capabilities Development Command Soldier Center]] and [[w:United States Army Research Laboratory]].<ref name="ICT-about">https://ict.usc.edu/about/</ref> In 2016 [[w:Hao Li]] was appointed to direct the institute.
* '''1994''' | movie | [[w:The Crow (1994 film)|The Crow]] was the first film production to make use of [[w:digital compositing]] of a computer simulated representation of a face onto scenes filmed using a [[w:body double]]. Necessity was the muse, as the actor [[w:Brandon Lee]] portraying the protagonist was accidentally killed on set during filming.
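The polarization observation in the 1999 reflectance-capture entry reduces to per-pixel subtraction: a cross-polarized photograph blocks the polarization-preserving surface reflection and records only the depolarized subsurface light, so subtracting it from a parallel-polarized photograph isolates the specular component. A toy numpy sketch on synthetic data (an illustration of the principle only, not Debevec et al.'s actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic ground truth for a 4x4 patch of "skin":
subsurface = rng.uniform(0.2, 0.6, (4, 4))   # depolarized light, seen by both polarizer settings
specular = rng.uniform(0.0, 0.3, (4, 4))     # polarization-preserving surface reflection

# What the two camera polarizer orientations record:
parallel = subsurface + specular   # passes both components
cross = subsurface                 # blocks the polarized surface reflection

# The "extremely simple math": one subtraction per pixel.
recovered_specular = parallel - cross
recovered_subsurface = cross

print(np.allclose(recovered_specular, specular))  # prints: True
```

With the two components separated per light direction, each can be modeled with its own reflectance term, which is what makes the rendered skin stop looking like cardboard.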
=== 1970's synthetic human-like fakes === | |||
{{#ev:vimeo|16292363|480px|right|''[[w:A Computer Animated Hand|A Computer Animated Hand]]'' is a 1972 short film by [[w:Edwin Catmull|Edwin Catmull]] and [[w:Fred Parke|Fred Parke]]. This was the first time that [[w:computer-generated imagery|computer-generated imagery]] was used in film to animate moving human-like appearance.}}
* '''1976''' | movie | ''[[w:Futureworld]]'' reused parts of ''A Computer Animated Hand'' on the big screen. | |||
* '''1972''' | entertainment | '''[https://vimeo.com/59434349 'A Computer Animated Hand' on Vimeo]'''. [[w:A Computer Animated Hand]] by [[w:Edwin Catmull]] and [[w:Fred Parke]]. Relevancy: This was the '''first time''' that [[w:computer-generated imagery|computer-generated imagery]] was used in film to '''animate''' moving '''human-like appearance'''. | |||
* '''1971''' | science | '''[https://interstices.info/images-de-synthese-palme-de-la-longevite-pour-lombrage-de-gouraud/ 'Images de synthèse : palme de la longévité pour l’ombrage de Gouraud' (still photos)]'''. [[w:Henri Gouraud (computer scientist)]] made the first [[w:Computer graphics]] [[w:geometry]] [[w:digitization]] and representation of a human face. The model was his wife, Sylvie Gouraud. The 3D model was a simple [[w:wire-frame model]] and he applied [[w:Gouraud shading]] to produce the '''first known representation''' of '''human-likeness''' on computer.<ref>{{cite web|title=Images de synthèse : palme de la longévité pour l'ombrage de Gouraud|url=http://interstices.info/jcms/c_25256/images-de-synthese-palme-de-la-longevite-pour-lombrage-de-gouraud}}</ref>
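Gouraud shading, from the 1971 entry above, evaluates lighting only at the three vertices of a triangle and linearly interpolates the resulting intensities across the interior. A small sketch using barycentric coordinates (illustrative, not Gouraud's original code):

```python
import numpy as np

def gouraud_shade(p, verts, vert_intensities):
    """Intensity at point p inside a triangle: barycentric blend of the
    intensities that were computed (e.g. by a Lambert term) at the vertices."""
    a, b, c = (np.asarray(v, dtype=float) for v in verts)
    p = np.asarray(p, dtype=float)
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    w1 = (d11 * d20 - d01 * d21) / denom
    w2 = (d00 * d21 - d01 * d20) / denom
    w0 = 1.0 - w1 - w2
    i0, i1, i2 = vert_intensities
    return w0 * i0 + w1 * i1 + w2 * i2

tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
intensities = (0.0, 1.0, 0.5)   # per-vertex lighting results

print(gouraud_shade((0.0, 0.0), tri, intensities))  # vertex a: 0.0
print(gouraud_shade((0.5, 0.5), tri, intensities))  # midpoint of edge b-c: 0.75
```

Because the interpolation is linear, a coarse wire-frame face shaded this way looks smooth but cannot reproduce sharp specular highlights inside a triangle, which is one reason later look-alikes needed measured reflectance.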
=== 1770's synthetic human-like fakes === | |||
[[File:Kempelen Speakingmachine.JPG|right|thumb|300px|A replica of [[w:Wolfgang von Kempelen|Kempelen]]'s [[w:Wolfgang von Kempelen's Speaking Machine|speaking machine]], built 2007–09 at the Department of [[w:Phonetics|Phonetics]], [[w:Saarland University|Saarland University]], [[w:Saarbrücken|Saarbrücken]], Germany. This machine added models of the tongue and lips, enabling it to produce [[w:consonant|consonant]]s as well as [[w:vowel|vowel]]s]] | |||
* '''1791''' | science | '''[[w:Wolfgang von Kempelen's Speaking Machine]]''' of [[w:Wolfgang von Kempelen]] of [[w:Pressburg]], [[w:Hungary]], described in a 1791 paper was [[w:bellows]]-operated.<ref>''Mechanismus der menschlichen Sprache nebst der Beschreibung seiner sprechenden Maschine'' ("Mechanism of the human speech with description of its speaking machine", J. B. Degen, Wien).</ref> This machine added models of the tongue and lips, enabling it to produce [[w:consonant]]s as well as [[w:vowel]]s. (based on [[w:Speech synthesis#History]]) | |||
* '''1779''' | science / discovery | [[w:Christian Gottlieb Kratzenstein]] won the first prize in a competition announced by the [[w:Russian Academy of Sciences]] for '''models''' he built of the '''human [[w:vocal tract]]''' that could produce the five long '''[[w:vowel]]''' sounds.<ref name="Helsinki"> | |||
[http://www.acoustics.hut.fi/publications/files/theses/lemmetty_mst/chap2.html History and Development of Speech Synthesis], Helsinki University of Technology, Retrieved on November 4, 2006 | |||
</ref> (Based on [[w:Speech synthesis#History]]) | |||
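Kratzenstein's resonators and Kempelen's machine were, in modern terms, mechanical formant synthesizers: a sound source shaped by vocal-tract resonances. The same idea can be sketched digitally by filtering a glottal pulse train through second-order resonators; the formant frequencies below for the vowel /a/ are assumed textbook averages, not measurements of either machine:

```python
import numpy as np

FS = 16000  # sampling rate in Hz

def resonator(x, freq, bw):
    """Second-order IIR filter resonating at freq (Hz) with bandwidth bw (Hz)."""
    r = np.exp(-np.pi * bw / FS)
    a1 = 2.0 * r * np.cos(2.0 * np.pi * freq / FS)
    a2 = -r * r
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = x[n] + a1 * y[n - 1] + a2 * y[n - 2]
    return y

# Glottal source: a ~120 Hz pulse train, playing the role of bellows and reed.
src = np.zeros(FS // 4)          # 250 ms of signal
src[:: FS // 120] = 1.0

# Assumed average formant frequencies and bandwidths for the vowel /a/.
voice = src
for f, bw in [(730, 90), (1090, 110), (2440, 170)]:
    voice = resonator(voice, f, bw)
voice = voice / np.max(np.abs(voice))
```

The resulting spectrum is dominated by energy near the first formant, which is exactly the property Kratzenstein's tube resonators were built to produce acoustically.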
---- | |||
== Media perhaps about synthetic human-like fakes == | |||
This is a chronological listing of media that probably have to do with [[synthetic human-like fakes]].
The links currently include scripture, science, demonstrations, music videos, music, entertainment and movies. | |||
=== 2020's media perhaps about synthetic human-like fakes === | |||
* '''2022''' | movie | '''''[[w:The Matrix 4]]''''' ('''2022''') will be the 4th installment of the [[w:The Matrix (franchise)]]. Relevancy: High likelihood of relevance, but unknown as this film is not yet ready or released. | |||
=== 2010's media perhaps about synthetic human-like fakes === | |||
* '''2018''' | music video | [https://www.youtube.com/watch?v=X8f5RgwY8CI&list=PLxKHVMqMZqUTgHYRSXfZN_JjItBzVTCau ''''''Simulation Theory'''''' album by '''Muse''' on Youtube] by [[w:Muse (band)]] from the [[w:Simulation Theory (album)]]. '''Obs.''' "The Pause," "Watch What I Do" and "The Interlude" are not part of the album. Relevancy: Whole album | |||
* '''2016''' | music video |[https://www.youtube.com/watch?v=ElvLZMsYXlo ''''''Voodoo In My Blood'''''' (official music video) by '''Massive Attack''' on Youtube] by [[w:Massive Attack]] and featuring [[w:Tricky]] from the album [[w:Ritual Spirit]]. Relevancy: '''How many machines''' can you see in the same frame at times? If you answered one, look harder and make a more educated guess. | |||
* '''2016''' | music video | [https://www.youtube.com/watch?v=8r31DFrFs5A ''''''The Spoils'''''' by '''Massive Attack''' on Youtube] by [[w:Massive Attack]] featuring [[w:Hope Sandoval]]. [[w:The Spoils (song)|Wikipedia on The Spoils (song)]] Relevancy: The video '''contains synthesis''' of '''human-like''' likenesses. | |||
* '''2013''' | music | [https://www.youtube.com/watch?v=VvT8ydMiETc ''''''In Two'''''' by the '''Nine Inch Nails''' (lyric video) on Youtube] by [[w:Nine Inch Nails]] from the album [[w:Hesitation Marks]]. Relevancy: The '''lyrics''' seem to be about '''appearance theft'''. | |||
* '''2013''' | music | [https://www.youtube.com/watch?v=Rn3W6ok-IhE ''''''Copy of A'''''' by the '''Nine Inch Nails''' (lyric video) on Youtube] by [[w:Nine Inch Nails]] from the album [[w:Hesitation Marks]]. Relevancy: The '''lyrics''' seem to be about '''appearance theft'''.
* '''2013''' | music video | [https://www.youtube.com/watch?v=ZWrUEsVrdSU ''''''Before Your Very Eyes'''''' by '''Atoms For Peace''' (official music video) on Youtube] by [[w:Atoms for Peace (band)]] from their album [[w:Amok (Atoms for Peace album)]]. Video was made by [http://www.andrewthomashuang.com/ Andrew Thomas Huang (.com)] Relevancy: Watch the video
=== 2000's media perhaps about synthetic human-like fakes ===
* '''2007''' | short film | [https://www.youtube.com/watch?v=zl6hNj1uOkY ''''''Doll Face'''''' by '''Andy Huang''' on youtube.com] was uploaded on 2007-02-19 by [http://www.andrewthomashuang.com/ Andrew Thomas Huang (.com)]. There are various unofficial videos using 'Doll Face' as graphics, but with different music. | |||
* '''2006''' | music video | [https://www.youtube.com/watch?v=hC_sqi9oocI '''''John The Revelator''''' by '''Depeche Mode''' (official music video) on Youtube] by [[w:Depeche Mode]] from the single [[w:John the Revelator / Lilian]]. Relevancy: [[Biblical explanation - The books of Daniel and Revelation#Revelation 13|Book of Revelation]].
* '''2005''' | music video | [https://www.youtube.com/watch?v=wwvLlEtxX3o '''''Only''''' by the '''Nine Inch Nails''' at Youtube.com] [[w:Only (Nine Inch Nails song)]] by the [[w:Nine Inch Nails]]. Relevancy: check the lyrics, check the video.
* '''2001''' | music video | [https://www.youtube.com/watch?v=dbB-mICjkQM '''''Plug In Baby''''' by '''Muse''' on youtube.com] by [[w:Muse (band)]] from their album [[w:Origin of Symmetry]]. Relevancy: See video.
* '''2001''' | music video | [https://www.youtube.com/watch?v=lWIeVTs94rI '''''Evolution Revolution Love''''' by '''Tricky''' on Youtube] by [[w:Tricky (musician)]] from the album [[w:Blowback (album)|Blowback]] and featuring [[w:Ed Kowalczyk]]. Relevancy: See video.
=== 1990s media perhaps about synthetic human-like fakes ===
* '''1998''' | music video | [https://www.youtube.com/watch?v=XbByxzZ-4dI '''''Rabbit in Your Headlights''''' by '''UNKLE''' on Youtube] by [[w:Unkle]] and featuring [[w:Thom Yorke]] on vocals. [[w:Rabbit in Your Headlights#Music video|Wikipedia on the 'Rabbit in Your Headlights' music video]]. Relevancy: Contains shots that would have '''injured''' or '''killed''' a '''human actor'''.
* '''1998''' | music | [https://www.youtube.com/watch?v=sjrnKG1j3Eg '''''New Model No. 15''''' by '''Marilyn Manson''' (lyrics in video) on Youtube] by [[w:Marilyn Manson (band)]] from the album [[w:Mechanical Animals]]. Relevancy: The '''lyrics''' are obviously about '''[[digital look-alikes]]''' approaching.
* '''1998''' | music video | [https://www.youtube.com/watch?v=FC-Kos_b1sE '''''The Dope Show''''' by '''Marilyn Manson''' (lyric video) on Youtube] [https://www.youtube.com/watch?v=5R682M3ZEyk (official music video)] by [[w:Marilyn Manson (band)]] from the album [[w:Mechanical Animals]]. Relevancy: '''lyrics'''.
* '''1990''' | music (video) | [https://www.youtube.com/watch?v=JG9CXQxhfL4 '''''Daydreaming''''' by '''Massive Attack''' on youtube.com] [[w:Daydreaming (Massive Attack song)]] by [[w:Massive Attack]]. Relevancy: foreseeing "''But what happen when the bomb drops Down...?''"
=== 1980s media perhaps about synthetic human-like fakes ===
* '''1986''' | music video | [https://www.youtube.com/watch?v=6epzmRZk6UU '''''Paranoimia''''' by '''Art of Noise''' on Youtube]. [[w:Paranoimia]] by [[w:Art of Noise]] featuring Max Headroom, from the album [[w:In Visible Silence]]. Relevancy: Contains a state-of-the-art (for the era) '''synthetic human-like character''', [[w:Max Headroom]].
* '''1983''' | music video | [https://www.youtube.com/watch?v=O0lIlROWro8 '''''Musique Non-Stop''''' by '''Kraftwerk''' on Youtube], made in 1983 but published only in '''1986''', by [[w:Kraftwerk]] from the album [[w:Electric Café]]. Relevancy: Contains state-of-the-art (for the era) '''[[digital look-alikes]]''' of the band members.
=== 1st century media perhaps about synthetic human-like fakes ===
* '''[[w:1st century]]''' | scripture | '''[[w:Jesus]] teaches''' about things that are yet to come in
*# '''[[w:Matthew 24]]''',
*# '''[[w:The Sheep and the Goats]]''' and
*# '''[[w:Mark 13]]'''.
* '''1st century''' | scripture | '''[[w:2 Thessalonians 2]]''' is the second chapter of the [[w:Second Epistle to the Thessalonians]]. It is traditionally attributed to [[w:Paul the Apostle]], with [[w:Saint Timothy]] as a co-author. See [[Biblical explanation - The books of Daniel and Revelation#2 Thessalonians 2|Biblical explanation - The books of Daniel and Revelation § 2 Thessalonians 2]]. '''Caution''' to reader: contains '''explicit''' written information about the beasts.
* '''1st century''' | scripture | '''[[w:Book of Revelation]]'''. The task of writing down and smuggling out this early warning of what is to come is given by God to his servant John, who was imprisoned on the island of [[w:Patmos]]. See [[Biblical explanation - The books of Daniel and Revelation#Revelation 13|Biblical explanation - The books of Daniel and Revelation § Revelation 13]]. '''Caution''' to reader: contains '''explicit''' written information about the beasts.
=== 3rd century BC media perhaps about synthetic human-like fakes ===
* [[w:3rd century BC]] | scripture | The '''[[w:Book of Daniel]]''' was put in writing. See [[Biblical explanation - The books of Daniel and Revelation#Daniel 7|Biblical explanation - The books of Daniel and Revelation § Daniel 7]]. '''Caution''' to reader: contains '''explicit''' written information about the beasts.
=== 6th century BC media perhaps about synthetic human-like fakes ===
[[File:Daniel's vision of the four beasts from the sea and the Ancient of Days - Silos Apocalypse (1109), f.240 - BL Add MS 11695.jpg|thumb|left|360px|Image taken from Silos Apocalypse. Originally published/produced in Spain (Silos), 1109.<br/><br/>
[[Biblical explanation - The books of Daniel and Revelation#Daniel 7|Daniel 7]], Daniel's vision of the three beasts <sup>[[Biblical explanation - The books of Daniel and Revelation#Daniel 7:1-6 - Three beasts|Dan 7:1-6]]</sup> and the fourth beast <sup>[[Biblical explanation - The books of Daniel and Revelation#Daniel 7:7-8 - The fourth beast|Dan 7:7-8]]</sup> from the sea and the [[w:Ancient of Days|Ancient of Days]] <sup>[[Biblical explanation - The books of Daniel and Revelation#The Ancient of Days|Dan 7:9-10]]</sup>]]
* [[w:6th century BC]] | scripture | '''[[w:Daniel (biblical figure)]]''' was in [[w:Babylonian captivity]] when he had the visions in which God first warned us of synthetic human-like fakes.
* His testimony was put into written form in the [[#3rd century BC media perhaps about synthetic human-like fakes|3rd century BC]].
== Footnotes ==
<references group="footnote" />
== 1st seen in ==
<references group="1st seen in" />
== References ==
<references />