Synthetic human-like fakes

'''Definitions'''
<section begin=definitions-of-synthetic-human-like-fakes />
When the '''[[Glossary#No camera|camera does not exist]]''', but the subject being imaged with a simulation of a (movie) camera deceives the watcher into believing it is some living or dead person, it is a '''[[Synthetic human-like fakes#Digital look-alikes|digital look-alike]]'''.


In 2017-2018 this started to be referred to as [[w:deepfake]], even though altering video footage of humans with a computer with a deceiving effect is actually 20 years older than the name "deep fakes" or "deepfakes".<ref name="Bohacek and Farid 2022 protecting against fakes">


{{cite journal
| last1      = Boháček
| first1    = Matyáš
| last2      = Farid
| first2    = Hany
| date      = 2022-11-23
| title      = Protecting world leaders against deep fakes using facial, gestural, and vocal mannerisms
| url        = https://www.pnas.org/doi/10.1073/pnas.2216035119
| journal    = [[w:Proceedings of the National Academy of Sciences of the United States of America]]
| volume    = 119
| issue      = 48
| pages      =
| doi        = 10.1073/pnas.2216035119
| access-date = 2023-01-05
}}
</ref><ref name="Bregler1997">
{{cite journal
| last1      = Bregler
| first1    = Christoph
| last2      = Covell
| first2    = Michele
| last3      = Slaney
| first3    = Malcolm
| date      = 1997-08-03
| title      = Video Rewrite: Driving Visual Speech with Audio
| url        = https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/human/bregler-sig97.pdf
| journal    = SIGGRAPH '97: Proceedings of the 24th annual conference on Computer graphics and interactive techniques
| volume    =
| issue      =
| pages      = 353-360
| doi        = 10.1145/258734.258880
| access-date = 2022-09-09
}}
</ref>
When it cannot be determined by human testing or media forensics whether some fake voice is a synthetic fake of some person's voice or an actual recording of that person's real voice, it is a pre-recorded '''[[Synthetic human-like fakes#Digital sound-alikes|digital sound-alike]]'''. This is now commonly referred to as an [[w:audio deepfake]].
A '''real-time digital look-and-sound-alike''' in a video call was used to defraud a substantial amount of money in 2023.<ref name="Reuters real-time digital look-and-sound-alike crime  2023">
{{cite web
| url = https://www.reuters.com/technology/deepfake-scam-china-fans-worries-over-ai-driven-fraud-2023-05-22/
| title = 'Deepfake' scam in China fans worries over AI-driven fraud
| last =
| first =
| date = 2023-05-22
| website = [[w:Reuters.com]]
| publisher = [[w:Reuters]]
| access-date = 2023-06-05
| quote =
}}
</ref>
<section end=definitions-of-synthetic-human-like-fakes />
::[[Synthetic human-like fakes|Read more about '''synthetic human-like fakes''']], see and support '''[[organizations and events against synthetic human-like fakes]]''' and what they are doing, what kinds of '''[[Laws against synthesis and other related crimes]]''' have been formulated, [[Synthetic human-like fakes#Timeline of synthetic human-like fakes|examine the SSFWIKI '''timeline''' of synthetic human-like fakes]] or [[Mediatheque|view the '''Mediatheque''']].
[[File:Screenshot at 27s of a moving digital-look-alike made to appear Obama-like by Monkeypaw Productions and Buzzfeed 2018.png|thumb|right|480px|link=Mediatheque/2018/Obama's appearance thieved - a public service announcement digital look-alike by Monkeypaw Productions and Buzzfeed|{{#lst:Mediatheque|Obama-like-fake-2018}}]]


[[File:BlV1999-morphable-model-till-match-low-res-rip.png|thumb|right|460px|Image 2 (low resolution rip) shows a 1999 technique for sculpting a morphable model until it matches the target's appearance.
<br/>(1) Sculpting a morphable model to one single picture
<br/>(2) Produces 3D approximation
<br/><br/>
<small>Image 2 by Blanz and Vetter – Copyright ACM 1999 – http://dl.acm.org/citation.cfm?doid=311535.311556 – Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page.</small>]]
[[File:Saint John on Patmos.jpg|thumb|right|360px|See <big>'''[[Biblical explanation - The books of Daniel and Revelation]]'''</big> for the advance warning for our time that was given in the 6th century BC and again in the 1st century.
<br/><br/>
'Saint John on Patmos' pictures [[w:John of Patmos]] on [[w:Patmos]] writing down the visions to make the [[w:Book of Revelation]]. Picture from folio 17 of the [[w:Très Riches Heures du Duc de Berry]] (1412-1416) by the [[w:Limbourg brothers]]. Currently located at the [[w:Musée Condé]], 40 km north of Paris, France.]]


== Digital look-alikes ==


{{#ev:youtube|LWLadJFI8Pk|640px|right|It is recommended that you watch ''In Event of Moon Disaster - FULL FILM'' (2020) at the [https://moondisaster.org/ '''moondisaster.org''' project website] (where it has interactive portions) by the [https://virtuality.mit.edu/ Center for Advanced Virtuality] of the [[w:Massachusetts Institute of Technology|w:MIT]]}}




<small>[[:File:Deb-2000-reflectance-separation.png|Original picture]] by [[w:Paul Debevec]] et al. - Copyright ACM 2000 https://dl.acm.org/citation.cfm?doid=311779.344855</small>]]


In the cinemas we have seen digital look-alikes for 20 years. These digital look-alikes have "clothing" (a simulation of clothing is not clothing) or "superhero costumes" and "superbaddie costumes", and they don't need to care about the laws of physics, let alone the laws of physiology. It is generally accepted that digital look-alikes made their public debut in the sequels of The Matrix, i.e. [[w:The Matrix Reloaded]] and [[w:The Matrix Revolutions]], released in 2003. It can be considered almost certain that it was not possible to make these before the year 1999, as the final piece of the puzzle to make a (still) digital look-alike that passes human testing, the [[Glossary#Reflectance capture|reflectance capture]] over the human face, was made for the first time in 1999 at the [[w:University of Southern California]] and was presented to the crème de la crème of the computer graphics field in their annual gathering SIGGRAPH 2000.<ref name="Deb2000">
{{cite book
}}</ref>


{{Q|Do you think that was [[w:Hugo Weaving]]'s left cheekbone that [[w:Keanu Reeves]] punched in with his right fist?|Trad|The Matrix Revolutions}}


=== The problems with digital look-alikes ===


Extremely unfortunately for humankind, organized criminal leagues that possess the '''weapons capability''' of making believable-looking '''synthetic pornography''' are producing on industrial production pipelines '''synthetic terror porn'''<ref group="footnote" name="About the term synthetic terror porn">It is terminologically more precise, more inclusive and more useful to talk about 'synthetic terror porn', if we want to talk about things with their real names, than 'synthetic rape porn', because also synthesizing recordings of consensual-looking sex scenes can be terroristic in intent.</ref> by animating digital look-alikes and distributing it in the murky Internet in exchange for money stacks that are getting thinner and thinner as time goes by.

These industrially produced pornographic delusions are causing great human suffering, especially in their direct victims, but they are also tearing our communities and societies apart, sowing blind rage, perceptions of deepening chaos and feelings of powerlessness, and provoking violence. This '''hate illustration''' increases and strengthens hate feeling, hate thinking, hate speech and hate crimes, tears our fragile social constructions apart and with time perverts humankind's view of humankind into an almost unrecognizable shape, unless we interfere with resolve.

'''Child-like sexual abuse images'''

Sadly, by 2023 there is a market for synthetic human-like sexual abuse material that looks like children. [https://www.bbc.com/news/uk-65932372 '''''Illegal trade in AI child sex abuse images exposed''''' at bbc.com] (2023-06-28) reports [[w:Stable Diffusion]] being abused to produce this kind of images. The [[w:Internet Watch Foundation]] also reports on the alarming existence of production of synthetic human-like sex abuse material portraying minors. See [https://www.iwf.org.uk/news-media/news/prime-minister-must-act-on-threat-of-ai-as-iwf-sounds-alarm-on-first-confirmed-ai-generated-images-of-child-sexual-abuse/ '''''Prime Minister must act on threat of AI as IWF ‘sounds alarm’ on first confirmed AI-generated images of child sexual abuse''''' at iwf.org.uk] (2023-08-18).

For these reasons the bannable '''raw materials''', i.e. covert models, needed to produce this disinformation terror on the information-industrial production pipelines '''[[Law proposals to ban covert modeling|should be prohibited by law]]''' in order to protect humans from arbitrary abuse by criminal parties.


=== List of possible naked digital look-alike attacks ===

* The classic "''portrayal of as if in involuntary sex''"-attack. (Digital look-alike "cries")
* "''Sexual preference alteration''"-attack. (Digital look-alike "smiles")
* "''Cutting / beating''"-attack (Constructs a deceptive history for genuine scars)
* "''Mutilation''"-attack (Digital look-alike "dies")
* "''Unconscious and injected''"-attack (Digital look-alike gets "disease")

=== Fixing the problems from digital look-alikes ===

We need to move on 3 fields: [[Laws against synthesis and other related crimes|legal]], technological and cultural.

'''Technological''': A computer vision system like [[FacePinPoint.com]] for seeking unauthorized pornography / nudes existed from 2017 to 2021 and could be revived if funding is found. It was a service practically identical to the SSFWIKI original concept [[Adequate Porn Watcher AI (concept)]].

'''Legal''': Legislators around the planet have been waking up to the reality that not everything that seems to be a video of people is a video of people, and various laws have been passed to protect humans and humanity from the menaces of synthetic human-like fakes, mostly digital look-alikes so far, but hopefully humans will be protected also from other aspects of synthetic human-like fakes by laws. See [[Laws against synthesis and other related crimes]].
 
=== Age analysis and rejuvenating and aging syntheses ===
 
* [https://arxiv.org/abs/2002.03750 '''''An Overview of Two Age Synthesis and Estimation Techniques''''' at arxiv.org] [https://arxiv.org/pdf/2002.03750.pdf (.pdf)], submitted for review on 2020-01-26
* [https://www.sciencedirect.com/science/article/abs/pii/S0925231220309942 '''''Dual Reference Age Synthesis''''' at sciencedirect.com] [https://arxiv.org/pdf/1908.02671.pdf (preprint at arxiv.org)], published on 2020-10-21 in [[w:Neurocomputing (journal)]]
* [https://ieeexplore.ieee.org/document/6084154 '''''A simple automatic facial aging/rejuvenating synthesis method''''' at ieeexplore.ieee.org] [https://www.researchgate.net/publication/220755281_Automatic_Facial_AgingRejuvenating_Synthesis_Method (read free at researchgate.net)], published in the proceedings of the 2011 IEEE International Conference on Systems, Man and Cybernetics
* [https://ieeexplore.ieee.org/document/5406526 '''''Age Synthesis and Estimation via Faces: A Survey''''' at ieeexplore.ieee.org] (paywall) [https://www.researchgate.net/publication/46288561_Age_Synthesis_and_Estimation_via_Faces_A_Survey (at researchgate.net)], published November 2010
 
=== Temporal limit of digital look-alikes ===
[[File:Institut Lumière - CINEMATOGRAPHE Camera.jpg|thumb|left|120px|A picture of the 1895 [[w:Cinematograph]]]]
 
[[w:History of film technology]] has information about where the border is.
 
Digital look-alikes cannot be used to attack people who existed before the technological invention of film. For moving pictures the breakthrough is attributed to [[w:Auguste and Louis Lumière]]'s [[w:Cinematograph]] premiered in Paris on 28 December '''1895''', though this was only the commercial and popular breakthrough, as even earlier moving pictures exist. (adapted from [[w:History of film]])
 
The '''[[w:Kinetoscope]]''' is an even earlier motion picture exhibition device. A prototype for the Kinetoscope was shown to a convention of the National Federation of Women's Clubs on May 20, 1891.<ref name="memory.loc.gov">
 
{{cite web
|publisher=[[w:Library of Congress]]
|website=Memory.loc.gov
|url=http://memory.loc.gov/ammem/edhtml/edmvhist.html
|title=Inventing Entertainment: The Early Motion Pictures and Sound Recordings of the Edison Companies
|access-date=2020-12-09
}}
 
</ref> The first public demonstration of the Kinetoscope was held at the [[w:Brooklyn Museum|Brooklyn Institute of Arts and Sciences]] on '''May 9''', '''1893'''. ([[w:Kinetoscope|Wikipedia]])<ref name="memory.loc.gov"/>


----


== Digital sound-alikes ==

Living people can defend<ref group="footnote" name="judiciary maybe not aware">Whether a suspect can defend against faked synthetic speech that sounds like him/her depends on how up-to-date the judiciary is. If no information and instructions about digital sound-alikes have been given to the judiciary, they likely will not believe the defense of denying that the recording is of the suspect's voice.</ref> themselves against a digital sound-alike by denying the things the digital sound-alike says if they are presented to the target, but dead people cannot. Digital sound-alikes offer criminals new disinformation attack vectors and wreak havoc on provability.

=== University of Florida published an antidote to synthetic human-like fake voices in 2022 ===

'''2022''' saw a brilliant '''<font color="green">counter-measure</font>''' presented to peers at the 31st [[w:USENIX]] Security Symposium, 10-12 August 2022, by the [[w:University of Florida]]: <u><big>'''[[Detecting deep-fake audio through vocal tract reconstruction]]'''</big></u>.

The university's foundation has applied for a patent. Let us hope that they will [[w:copyleft]] the patent, as this protective method needs to be rolled out to protect humanity.

'''Below transcluded [[Detecting deep-fake audio through vocal tract reconstruction|from the article]]'''

{{#lst:Detecting deep-fake audio through vocal tract reconstruction|what-is-it}} {{#lst:Detecting deep-fake audio through vocal tract reconstruction|original-reporting}}

'''This new counter-measure needs to be rolled out to humans to protect humans against the fake human-like voices.'''

{{#lst:Detecting deep-fake audio through vocal tract reconstruction|embed}}

=== On known history of digital sound-alikes ===
[[File:Helsingin-Sanomat-2012-David-Martin-Howard-of-University-of-York-on-apporaching-digital-sound-alikes.jpg|right|thumb|338px|A picture of a cut-away titled "''Voice-terrorist could mimic a leader''" from a 2012 [[w:Helsingin Sanomat]] warning that the sound-like-anyone machines are approaching. Thank you to [https://pure.york.ac.uk/portal/en/researchers/david-martin-howard(ecfa9e9e-1290-464f-981a-0c70a534609e).html Prof. David Martin Howard] of the [[w:University of York]], UK and the anonymous editor for the heads-up.]]

The first English-speaking digital sound-alikes were introduced in 2016 by Adobe and Deepmind, but neither was made publicly available.

<section begin=GoogleTransferLearning2018 />
Then in '''2018''' at the '''[[w:Conference on Neural Information Processing Systems]]''' (NeurIPS) the work [http://papers.nips.cc/paper/7700-transfer-learning-from-speaker-verification-to-multispeaker-text-to-speech-synthesis 'Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis'] ([https://arxiv.org/abs/1806.04558 at arXiv.org]) was presented. The pre-trained model is able to steal voices from a sample of only '''5 seconds''' with almost convincing results.

The Iframe below is transcluded from [https://google.github.io/tacotron/publications/speaker_adaptation/ '''''Audio samples from "Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis"''''' at google.github.io], the audio samples of a sound-like-anyone machine presented at the 2018 [[w:NeurIPS]] conference by Google researchers.

Have a listen.

{{#Widget:Iframe - Audio samples from Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis by Google Research}}

Observe how good the "VCTK p240" system is at deceiving the listener into thinking that a person is doing the talking.
<section end=GoogleTransferLearning2018 />
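The 2018 system described above works in three stages: a speaker encoder distills a short reference clip into a fixed-size voice embedding, a synthesizer generates a spectrogram from text conditioned on that embedding, and a vocoder renders the waveform. The sketch below shows only that pipeline structure; every class is a toy stand-in (the real components are a speaker-verification d-vector network, a Tacotron-style synthesizer and a WaveNet vocoder), and all numbers are illustrative.

```python
# Structural sketch (NOT the authors' code) of the three-stage voice-cloning
# pipeline in 'Transfer Learning from Speaker Verification to Multispeaker
# Text-To-Speech Synthesis' (NeurIPS 2018). All classes are toy stand-ins.

class SpeakerEncoder:
    def embed(self, reference_audio):
        # Real system: a d-vector from a speaker-verification network.
        # Toy stand-in: summarize the clip as its mean sample value.
        return [sum(reference_audio) / len(reference_audio)]

class Synthesizer:
    def spectrogram(self, text, speaker_embedding):
        # Real system: attention-based text-to-spectrogram generation,
        # conditioned on the voice embedding. Toy stand-in: one "frame"
        # per character, scaled by the embedding.
        return [[ord(c) * e for e in speaker_embedding] for c in text]

class Vocoder:
    def waveform(self, spec):
        # Real system: a neural vocoder. Toy stand-in: flatten the frames.
        return [v for frame in spec for v in frame]

def clone_voice(reference_audio, text):
    """A few seconds of reference audio in, "speech" in that voice out."""
    embedding = SpeakerEncoder().embed(reference_audio)
    return Vocoder().waveform(Synthesizer().spectrogram(text, embedding))

samples = [0.1, 0.2, 0.3]  # stands in for ~5 seconds of reference audio
print(len(clone_voice(samples, "hi")))  # one toy value per character
```

The point of the design, and the reason 5 seconds suffice, is that the speaker encoder is trained separately on a large speaker-verification corpus, so the synthesizer never needs many samples of the target voice.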
----
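The University of Florida counter-measure mentioned above works by estimating, from the audio itself, the vocal tract that would have had to produce it, and rejecting audio whose implied "speaker" is anatomically impossible. The following is a minimal sketch of that idea only; the real system fits an acoustic model to the speech, while here the cross-sectional area estimates and the plausible range are made-up numbers for illustration.

```python
# Illustrative sketch of anatomical-plausibility checking, the idea behind
# 'Detecting deep-fake audio through vocal tract reconstruction' (USENIX
# Security 2022). The area values and range below are hypothetical.

PLAUSIBLE_AREA_CM2 = (0.1, 15.0)  # assumed human vocal tract segment range

def anatomically_plausible(area_estimates, plausible=PLAUSIBLE_AREA_CM2):
    """True when every estimated tract segment fits a human vocal tract."""
    lo, hi = plausible
    return all(lo <= a <= hi for a in area_estimates)

def classify(area_estimates):
    # A real detector would score many frames; this is a single-shot check.
    if anatomically_plausible(area_estimates):
        return "possibly genuine"
    return "deepfake suspected"

print(classify([0.8, 2.5, 4.0, 6.5]))   # all segments within the human range
print(classify([0.8, 2.5, 40.0, 6.5]))  # one anatomically impossible segment
```

The design choice worth noting is that the check is grounded in physiology rather than in artifacts of any particular synthesizer, which is why such a method can generalize to synthesizers it has never seen.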


=== Example of a hypothetical 4-victim digital sound-alike attack ===

A very simple example of a digital sound-alike attack is as follows:

Someone puts a digital sound-alike to call somebody's voicemail from an unknown number and to speak, for example, illegal threats. In this example there are at least four victims:

# Victim #1 - The person whose voice has been stolen into a covert model and a digital sound-alike made from it to frame them for crimes
# Victim #2 - The person to whom the illegal threat is presented in a recorded form by a digital sound-alike that deceptively sounds like victim #1
# Victim #3 - It could also be viewed that victim #3 is our law enforcement systems, as they are put to chase after and interrogate the innocent victim #1
# Victim #4 - Our judiciary, which prosecutes and possibly convicts the innocent victim #1

Thus it is high time to act and to '''[[Law proposals to ban covert modeling|criminalize the covert modeling of the human voice!]]'''

{{#ev:youtube|0sR1rU3gLzQ|640px|right|The [https://www.youtube.com/watch?v=0sR1rU3gLzQ video 'This AI Clones Your Voice After Listening for 5 Seconds' by '2 minute papers' at YouTube] describes the voice thieving machine presented by Google Research at [[w:NeurIPS]] 2018.}}


=== Examples of speech synthesis software not quite able to fool a human yet ===

Some other contenders to create digital sound-alikes exist, though as of 2019 their speech synthesis in most use scenarios does not yet fool a human, because the results contain tell-tale signs that give them away as coming from a speech synthesizer.

* '''[https://lyrebird.ai/ Lyrebird.ai]''' [https://www.youtube.com/watch?v=xxDBlZu__Xk (listen)]
* '''[https://candyvoice.com/ CandyVoice.com]''' [https://candyvoice.com/demos/voice-conversion (test with your choice of text)]
* '''[https://cstr-edinburgh.github.io/merlin/ Merlin]''', a [[w:neural network]] based speech synthesis system by the Centre for Speech Technology Research at the [[w:University of Edinburgh]]
* [https://papers.nips.cc/paper/8206-neural-voice-cloning-with-a-few-samples '''Neural Voice Cloning with a Few Samples''' at papers.nips.cc], [[w:Baidu Research]]'s shot at a sound-like-anyone-machine, did not convince in '''2018'''

=== Reporting on the sound-like-anyone-machines ===
* [https://www.forbes.com/sites/bernardmarr/2019/05/06/artificial-intelligence-can-now-copy-your-voice-what-does-that-mean-for-humans/#617f6d872a2a '''"Artificial Intelligence Can Now Copy Your Voice: What Does That Mean For Humans?"''' May 2019 reporting at forbes.com] on [[w:Baidu Research]]'s attempt at the sound-like-anyone-machine demonstrated at the 2018 [[w:NeurIPS]] conference.

=== Documented crimes with digital sound-alikes ===
In 2019 reports of crimes being committed with digital sound-alikes started surfacing. As of January 2022 no reports of other types of attack than fraud have been found.

==== 2019 digital sound-alike enabled fraud ====
By 2019 digital sound-like-anyone technology had found its way into the hands of criminals. In '''2019''' [[w:NortonLifeLock|Symantec]] researchers knew of 3 cases where digital sound-alike technology had been used for '''[[w:crime]]'''.<ref name="Washington Post reporting on 2019 digital sound-alike fraud" />

Of these crimes the most publicized was a fraud case in March 2019 in which 220,000€ were defrauded with the use of a real-time digital sound-alike.<ref name="WSJ original reporting on 2019 digital sound-alike fraud" /> The company that was the victim of this fraud had bought some kind of cyberscam insurance from the French insurer [[w:Euler Hermes]], and the case came to light when Mr. Rüdiger Kirsch of Euler Hermes informed [[w:The Wall Street Journal]] about it.<ref name="Forbes reporting on 2019 digital sound-alike fraud" />


''' Reporting on the 2019 digital sound-alike enabled fraud '''
* [https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402 '''''Fraudsters Used AI to Mimic CEO’s Voice in Unusual Cybercrime Case''''' at wsj.com] original reporting, date unknown, updated 2019-08-30<ref name="WSJ original reporting on 2019 digital sound-alike fraud">


{{cite web
|url=https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402
|title=Fraudsters Used AI to Mimic CEO’s Voice in Unusual Cybercrime Case
|last=Stupp
|first=Catherine
|date=2019-08-30
|website=[[w:wsj.com]]
|publisher=[[w:The Wall Street Journal]]
|access-date=2022-01-01
|quote=}}


</ref>
* [https://www.bbc.com/news/technology-48908736 '''"Fake voices 'help cyber-crooks steal cash'"''' at bbc.com] July 2019 reporting<ref name="BBC reporting on 2019 digital sound-alike fraud">
{{cite web
  |url= https://www.bbc.com/news/technology-48908736
  |title= Fake voices 'help cyber-crooks steal cash'
  |quote= }}
</ref>
* [https://www.washingtonpost.com/technology/2019/09/04/an-artificial-intelligence-first-voice-mimicking-software-reportedly-used-major-theft/ '''"An artificial-intelligence first: Voice-mimicking software reportedly used in a major theft"''' at washingtonpost.com] documents a [[w:fraud]] committed with a digital sound-like-anyone-machine, July 2019 reporting.<ref name="Washington Post reporting on 2019 digital sound-alike fraud">
{{cite web
  |url= https://www.washingtonpost.com/technology/2019/09/04/an-artificial-intelligence-first-voice-mimicking-software-reportedly-used-major-theft/
  |publisher= [[w:Washington Post]]
  |access-date= 2019-07-22
  |quote=Researchers at the cybersecurity firm Symantec said they have found at least three cases of executives’ voices being mimicked to swindle companies. Symantec declined to name the victim companies or say whether the Euler Hermes case was one of them, but it noted that the losses in one of the cases totaled millions of dollars.}}
</ref>
* [https://www.forbes.com/sites/jessedamiani/2019/09/03/a-voice-deepfake-was-used-to-scam-a-ceo-out-of-243000/ '''''A Voice Deepfake Was Used To Scam A CEO Out Of $243,000''''' at forbes.com], 2019-09-03 reporting<ref name="Forbes reporting on 2019 digital sound-alike fraud">


{{cite web
|url=https://www.forbes.com/sites/jessedamiani/2019/09/03/a-voice-deepfake-was-used-to-scam-a-ceo-out-of-243000/
|title=A Voice Deepfake Was Used To Scam A CEO Out Of $243,000
|last=Damiani
|first=Jesse
|date=2019-09-03
|website=[[w:Forbes.com]]
|publisher=[[w:Forbes]]
|access-date=2022-01-01
|quote=According to a new report in The Wall Street Journal, the CEO of an unnamed UK-based energy firm believed he was on the phone with his boss, the chief executive of the firm’s German parent company, when he followed the orders to immediately transfer €220,000 (approx. $243,000) to the bank account of a Hungarian supplier. In fact, the voice belonged to a fraudster using AI voice technology to spoof the German chief executive. Rüdiger Kirsch of Euler Hermes Group SA, the firm’s insurance company, shared the information with WSJ.}}
</ref>
The video below, 'This AI Clones Your Voice After Listening for 5 Seconds' by '2 minute papers', describes the voice-thieving machine presented by Google Research at [[w:NeurIPS|w:NeurIPS]] 2018.
 
==== 2020 digital sound-alike fraud attempt ====
{{#ev:youtube|0sR1rU3gLzQ|640px|right|Video 'This AI Clones Your Voice After Listening for 5 Seconds' by '2 minute papers' describes the voice thieving machine by Google Research in [[w:NeurIPS|w:NeurIPS]] 2018.}}
In June 2020 fraud was attempted with a poor-quality pre-recorded digital sound-alike delivered via voicemail. ([https://soundcloud.com/jason-koebler/redacted-clip '''Listen to a redacted clip''' at soundcloud.com]) The recipient at a tech company did not believe the voicemail to be real and alerted the company, which realized that someone had tried to scam it. The company called in Nisos to investigate the issue. Nisos analyzed the evidence and was certain the audio was fake, noting that it had aspects of a cut-and-paste job to it. Nisos prepared [https://www.nisos.com/blog/synthetic-audio-deepfake/ a report titled '''''"The Rise of Synthetic Audio Deepfakes"''''' at nisos.com] on the issue and shared it with Motherboard, part of [[w:Vice (magazine)]], prior to its release.<ref name="Vice reporting on 2020 digital sound-alike fraud attempt">
 
{{cite web
|url=https://www.vice.com/en/article/pkyqvb/deepfake-audio-impersonating-ceo-fraud-attempt
|title=Listen to This Deepfake Audio Impersonating a CEO in Brazen Fraud Attempt
|last=Franceschi-Bicchierai
|first=Lorenzo
|date=2020-07-23
|website=[[w:Vice.com]]
|publisher=[[w:Vice (magazine)]]
|access-date=2022-01-03
|quote=}}
 
 
</ref>
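Nisos did not publish its exact methodology, but the "cut-and-paste" verdict hints at the kind of signal inspection involved. The sketch below is a hypothetical illustration, not Nisos's actual analysis: it flags frame boundaries where the short-time energy of the audio jumps abruptly, one crude cue that segments from different recordings may have been spliced together.

```python
import math

def short_time_energy(samples, frame_len=160):
    """Mean squared amplitude of each non-overlapping frame."""
    return [
        sum(s * s for s in samples[i:i + frame_len]) / frame_len
        for i in range(0, len(samples) - frame_len + 1, frame_len)
    ]

def splice_candidates(samples, frame_len=160, jump_db=20.0):
    """Sample indices of frame boundaries where the energy jumps by more
    than jump_db decibels -- a crude cue for cut-and-paste audio edits."""
    energies = short_time_energy(samples, frame_len)
    flags = []
    for i in range(1, len(energies)):
        prev = max(energies[i - 1], 1e-12)  # avoid log(0) on silence
        cur = max(energies[i], 1e-12)
        if abs(10.0 * math.log10(cur / prev)) > jump_db:
            flags.append(i * frame_len)
    return flags

# Toy signal: a quiet tone with an abruptly loud segment pasted onto it.
quiet = [0.01 * math.sin(2 * math.pi * 440 * t / 8000) for t in range(1600)]
loud = [0.9 * math.sin(2 * math.pi * 440 * t / 8000) for t in range(1600)]
print(splice_candidates(quiet + loud))  # flags the boundary at sample 1600
```

Real forensic tools of course look at many more cues, e.g. spectral continuity, room reverberation and codec artifacts.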
 
==== 2021 digital sound-alike enabled fraud ====
 
<section begin=2021 digital sound-alike enabled fraud />The second publicly known fraud committed with a digital sound-alike<ref group="1st seen in" name="2021 digital sound-alike fraud case">https://www.reddit.com/r/VocalSynthesis/</ref> took place on Friday 2021-01-15. A bank in Hong Kong was manipulated into wiring money to numerous bank accounts by using a voice stolen from one of their client company's directors. The fraudsters managed to defraud $35 million of the U.A.E.-based company's money.<ref name="Forbes reporting on 2021 digital sound-alike fraud">https://www.forbes.com/sites/thomasbrewster/2021/10/14/huge-bank-fraud-uses-deep-fake-voice-tech-to-steal-millions/</ref> This case came to light when Forbes saw [https://www.documentcloud.org/documents/21085009-hackers-use-deep-voice-tech-in-400k-theft a document] in which the U.A.E. financial authorities were seeking administrative assistance from the US authorities in order to recover a small portion of the defrauded money that had been sent to bank accounts in the USA.<ref name="Forbes reporting on 2021 digital sound-alike fraud" />
 
'''Reporting on the 2021 digital sound-alike enabled fraud'''
 
* [https://www.forbes.com/sites/thomasbrewster/2021/10/14/huge-bank-fraud-uses-deep-fake-voice-tech-to-steal-millions/ '''''Fraudsters Cloned Company Director’s Voice In $35 Million Bank Heist, Police Find''''' at forbes.com] 2021-10-14 original reporting
* [https://www.unite.ai/deepfaked-voice-enabled-35-million-bank-heist-in-2020/ '''''Deepfaked Voice Enabled $35 Million Bank Heist in 2020''''' at unite.ai]<ref group="1st seen in" name="2021 digital sound-alike fraud case" /> reporting updated on 2021-10-15
* [https://www.aiaaic.org/aiaaic-repository/ai-and-algorithmic-incidents-and-controversies/usd-35m-voice-cloning-heist '''''USD 35m voice cloning heist''''' at aiaaic.org], October 2021 AIAAIC repository entry
<section end=2021 digital sound-alike enabled fraud />
 
'''More fraud cases with digital sound-alikes'''
* [https://www.washingtonpost.com/technology/2023/03/05/ai-voice-scam/ '''''They thought loved ones were calling for help. It was an AI scam.''''' at washingtonpost.com], March 2023 reporting
 
=== Example of a hypothetical 4-victim digital sound-alike attack ===
A very simple example of a digital sound-alike attack is as follows:
 
Someone puts a digital sound-alike to call somebody's voicemail from an unknown number and to speak, for example, illegal threats. In this example there are at least four victims:
 
# Victim #1 - The person whose voice has been stolen into a covert model and a digital sound-alike made from it to frame them for crimes
# Victim #2 - The person to whom the illegal threat is presented in a recorded form by a digital sound-alike that deceptively sounds like victim #1
# Victim #3 - It could also be viewed that our law enforcement systems are victims, as they are put to chase after and interrogate the innocent victim #1
# Victim #4 - Our judiciary which prosecutes and possibly convicts the innocent victim #1.
 
=== Examples of speech synthesis software not quite able to fool a human yet ===
There are other contenders for creating digital sound-alikes, though as of 2019 their speech synthesis in most use scenarios does not yet fool a human, because the results contain telltale signs that give them away as coming from a speech synthesizer.
 
* '''[https://lyrebird.ai/ Lyrebird.ai]''' [https://www.youtube.com/watch?v=xxDBlZu__Xk (listen)]
* '''[https://candyvoice.com/ CandyVoice.com]''' [https://candyvoice.com/demos/voice-conversion (test with your choice of text)]
* '''[https://cstr-edinburgh.github.io/merlin/ Merlin]''', a [[w:neural network]] based speech synthesis system by the Centre for Speech Technology Research at the [[w:University of Edinburgh]]
* [https://papers.nips.cc/paper/8206-neural-voice-cloning-with-a-few-samples ''''Neural Voice Cloning with a Few Samples'''' at papers.nips.cc], [[w:Baidu Research]]'s shot at a sound-like-anyone-machine, did not convince in '''2018'''
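What the telltale signs are varies by synthesizer, but one classic give-away is excessive regularity: natural voices exhibit small cycle-to-cycle pitch variation ("jitter") that naive synthesis lacks. The toy sketch below, invented for illustration and not a production detector, estimates jitter from zero crossings and distinguishes a perfectly periodic tone from one whose pitch wanders:

```python
import math
import random

def pitch_periods(samples):
    """Distances (in samples) between successive positive-going
    zero crossings, i.e. rough pitch-period estimates."""
    crossings = [
        i for i in range(1, len(samples))
        if samples[i - 1] < 0.0 <= samples[i]
    ]
    return [b - a for a, b in zip(crossings, crossings[1:])]

def jitter(samples):
    """Mean absolute cycle-to-cycle period change, relative to the mean
    period. Natural voices show measurable jitter; naive synthesis
    tends toward zero."""
    periods = pitch_periods(samples)
    diffs = [abs(b - a) for a, b in zip(periods, periods[1:])]
    return (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

rate, f0 = 16000, 100.0
# Perfectly periodic "robotic" tone (phase offset keeps samples off exact zero).
robotic = [math.sin(2 * math.pi * f0 * t / rate + 0.1) for t in range(16000)]

# Crude stand-in for a natural voice: the fundamental frequency wanders.
random.seed(0)
phase, wobbly = 0.0, []
for _ in range(16000):
    phase += 2 * math.pi * (f0 + random.uniform(-20, 20)) / rate
    wobbly.append(math.sin(phase))

print(jitter(robotic), jitter(wobbly))
```

Note that modern neural synthesizers reproduce much of this natural variation, so simple cues like this are not reliable on their own.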
 
=== Temporal limit of digital sound-alikes ===
[[File:Edison_and_phonograph_edit1.jpg|thumb|left|210px|[[w:Thomas Edison]] and his early [[w:phonograph]]. Cropped from [[w:Library of Congress]] copy, ca. 1877,  (probably 18 April 1878)]]
 
The temporal limit of whom, dead or living, the digital sound-alikes can attack is defined by the '''[[w:history of sound recording]]'''.
 
That article notes that the invention of the [[w:phonograph]] by [[w:Thomas Edison]] in '''1877''' is considered the start of sound recording.
 
The '''phonautograph''' is the earliest known device for recording [[w:sound]]. Previously, tracings had been obtained of the sound-producing vibratory motions of [[w:tuning forks]] and other objects by physical contact with them, but not of actual sound waves as they propagated through air or other media. Invented by Frenchman [[W:Édouard-Léon Scott de Martinville]], it was patented on March 25, '''1857'''.<ref name="NPR-Phonautograph">
 
{{Cite news
|url=https://www.npr.org/templates/story/story.php?storyId=89380697
|title=1860 'Phonautograph' Is Earliest Known Recording
|last=Flatow
|first=Ira|date=April 4, 2008|work=NPR
|access-date=2012-12-09
|language=en}}
 
</ref>
 
Apparently, it did not occur to anyone before the 1870s that the recordings, called '''phonautograms''', contained enough information about the sound that they could, in theory, be '''used to recreate it'''. Because the phonautogram tracing was an insubstantial two-dimensional line, direct physical playback was impossible in any case. Several phonautograms recorded '''before 1861''' were successfully played as sound in '''2008''' by optically scanning them and using a computer to process the scans into digital audio files. ([[w:Phonautograph|Wikipedia]])
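The 2008 playback essentially treated the scanned trace as amplitude over time. The sketch below illustrates only the very last step of such a pipeline (the actual 2008 restoration was far more involved): writing a sequence of amplitude values, here a stand-in sine wave, to a playable WAV file with Python's standard library.

```python
import math
import struct
import wave

def trace_to_wav(amplitudes, path, sample_rate=8000):
    """Write floats in [-1.0, 1.0] -- e.g. heights read off a scanned
    phonautogram trace -- to a 16-bit mono WAV file."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)   # mono
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(sample_rate)
        w.writeframes(b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, a)) * 32767))
            for a in amplitudes
        ))

# Stand-in for a digitized trace: one second of a 440 Hz tone.
trace = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(8000)]
trace_to_wav(trace, "phonautogram.wav")
```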


[[File:Spectrogram-19thC.png|thumb|right|640px|A [[w:spectrogram]] of a male voice saying 'nineteenth century']]
=== What should we do about digital sound-alikes? ===
Living people can defend<ref group="footnote" name="judiciary maybe not aware">Whether a suspect can defend against faked synthetic speech that sounds like him/her depends on how up-to-date the judiciary is. If no information and instructions about digital sound-alikes have been given to the judiciary, they likely will not believe the defense of denying that the recording is of the suspect's voice.</ref> themselves against a digital sound-alike by denying the things it says, if the fake is presented to them, but dead people cannot. Digital sound-alikes offer criminals new disinformation attack vectors and wreak havoc on provability.
For these reasons the bannable '''raw materials''', i.e. covert voice models, '''[[Law proposals to ban covert modeling|should be prohibited by law]]''' in order to protect humans from abuse by criminal parties.
It is high time to act and to '''[[Law proposals to ban covert modeling|criminalize the covert modeling of the human voice!]]'''


== Text syntheses ==
[[w:Chatbot]]s and [[w:spamming]] have existed for a long time, but only now, armed with AI, are they becoming truly deceptive.


In [[w:natural language processing]], development in [[w:natural-language understanding]] leads to more cunning [[w:natural-language generation]] AI.
'''[[w:Large language model]]s''' ('''LLM''') are [[w:language model]]s consisting of a [[w:Artificial neural network|w:neural network]] with a very large number of parameters.

[[w:OpenAI]]'s [[w:OpenAI#GPT|w:Generative Pre-trained Transformer]] ('''GPT''') is a left-to-right [[w:transformer (machine learning model)]]-based [[w:Natural-language generation|text generation]] model, succeeded by [[w:OpenAI#GPT-2|w:GPT-2]] and [[w:OpenAI#GPT-3|w:GPT-3]].
November 2022 saw the publication of OpenAI's '''[[w:ChatGPT]]''', a conversational artificial intelligence.


'''[[w:Bard (chatbot)]]''' is a conversational [[w:generative artificial intelligence]] [[w:chatbot]] developed by [[w:Google]], based on the [[w:LaMDA]] family of [[w:large language models]]. It was developed as a direct response to the rise of [[w:OpenAI]]'s [[w:ChatGPT]], and was released in March 2023. ([https://en.wikipedia.org/w/index.php?title=Bard_(chatbot)&oldid=1152361586 Wikipedia])
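Whether GPT, ChatGPT or Bard, the core loop is the same left-to-right generation: predict a distribution over the next token from the preceding context, sample from it, append, repeat. The toy character-level bigram model below illustrates just that loop; it is a deliberately tiny stand-in, not a transformer, and the corpus is invented for this example.

```python
import random
from collections import Counter, defaultdict

def train_bigram(text):
    """For each character, count which characters follow it."""
    model = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        model[a][b] += 1
    return model

def generate(model, start, length, rng):
    """Left-to-right generation: repeatedly sample the next character
    from the distribution conditioned on the current one."""
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:        # dead end: no observed continuation
            break
        chars, weights = zip(*followers.items())
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

corpus = "the theme of the thesis is the thing "
model = train_bigram(corpus)
print(generate(model, "t", 30, random.Random(0)))
```

Real LLMs condition on thousands of preceding tokens with billions of parameters instead of one preceding character, but the generation loop is the same shape.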
''' Reporting / announcements ''' (in reverse chronology)
* [https://blogs.microsoft.com/blog/2023/02/07/reinventing-search-with-a-new-ai-powered-microsoft-bing-and-edge-your-copilot-for-the-web/ '''''Reinventing search with a new AI-powered Microsoft Bing and Edge, your copilot for the web''''' at blogs.microsoft.com] '''February 2023''' (2023-02-07). The new improved Bing, available only in Microsoft's Edge browser, is reportedly based on a language model refined from GPT-3.5.<ref>https://www.theverge.com/2023/2/7/23587454/microsoft-bing-edge-chatgpt-ai</ref>
 
* [https://openai.com/blog/new-ai-classifier-for-indicating-ai-written-text '''New AI classifier for indicating AI-written text''' at openai.com], a '''January 2023''' blog post about OpenAI's AI classifier for detecting AI-written texts.
 
* [https://openai.com/blog/chatgpt '''''Introducing ChatGPT''''' at openai.com] '''November 2022''' (2022-11-30)


* [https://www.technologyreview.com/2020/08/14/1006780/ai-gpt-3-fake-blog-reached-top-of-hacker-news/ ''''A college kid’s fake, AI-generated blog fooled tens of thousands. This is how he made it.'''' at technologyreview.com] '''August 2020''' reporting in the [[w:MIT Technology Review]] by Karen Hao about GPT-3.
* [https://analyticssteps.com/blogs/detection-fake-and-false-news-text-analysis-approaches-and-cnn-deep-learning-model '''"Detection of Fake and False News (Text Analysis): Approaches and CNN as Deep Learning Model"''' at analyticsteps.com], a 2019 summary written by Shubham Panth.


=== Detectors for synthesized texts ===

The introduction of [[w:ChatGPT]] by OpenAI brought the need for software to detect machine-generated texts.

'''Try AI plagiarism detection for free'''

* [https://contentdetector.ai/ '''AI Content Detector''' at contentdetector.ai] - ''AI Content Detector - Detect ChatGPT Plagiarism'' ('''try for free''')
* [https://platform.openai.com/ai-text-classifier '''AI Text Classifier''' at platform.openai.com] - ''The AI Text Classifier is a fine-tuned GPT model that predicts how likely it is that a piece of text was generated by AI from a variety of sources, such as ChatGPT.'' ('''free account required''')
* [https://gptradar.com/ '''GPT Radar''' at gptradar.com] - ''AI text detector app'' ('''try for free''')<ref group="1st seen in" name="Wordlift.io 2023">https://wordlift.io/blog/en/best-plagiarism-checkers-for-ai-generated-content/</ref>
* [https://gptzero.me/ '''GPTZero''' at gptzero.me] - ''The World's #1 AI Detector with over 1 Million Users'' ('''try for free''')
* [https://copyleaks.com/plagiarism-checker '''Plagiarism Checker''' at copyleaks.com] - ''Plagiarism Checker by Copyleaks'' ('''try for free''')<ref group="1st seen in" name="Wordlift.io 2023" />
* https://gowinston.ai/ - ''The most powerful AI content detection solution'' ('''free-tier available''')<ref group="1st seen in" name="Wordlift.io 2023" />
* [https://www.zerogpt.com/ '''ZeroGPT''' at zerogpt.com]<ref group="1st seen in" name="Wordlift.io 2023" /> - ''GPT-4 And ChatGPT detector by ZeroGPT: detect OpenAI text - ZeroGPT the most Advanced and Reliable Chat GPT and GPT-4 detector tool'' ('''try for free''')

'''For-a-fee AI plagiarism detection tools'''

* https://originality.ai/ - ''The Most Accurate AI Content Detector and Plagiarism Checker Built for Serious Content Publishers''<ref group="1st seen in" name="Wordlift.io 2023" />
* https://www.turnitin.com/ - ''Empower students to do their best, original work''<ref group="1st seen in" name="Wordlift.io 2023" />

== Handwriting syntheses ==

Handwriting syntheses could be used:
# Defensively, to hide one's handwriting style from public view
# Offensively, to thieve somebody else's handwriting style

If the handwriting-like synthesis passes human and media forensics testing, it is a '''digital handwrite-alike'''.

Here we find a '''risk''' similar to the one that materialized when '''[[w:speaker recognition]] systems''' turned out to be instrumental in the development of '''[[#Digital sound-alikes|digital sound-alikes]]'''. After the knowledge needed to recognize a speaker was [[w:Transfer learning|w:transferred]] into a generative task in 2018 by Google researchers, we can no longer effectively determine for English speakers which recording is of human origin and which is of machine origin.

'''Handwriting-like syntheses''':
[[w:Recurrent neural network]]s (RNN) seem to be a popular choice for this task.

* [https://github.com/topics/handwriting-synthesis GitHub topic '''handwriting-synthesis'''] has 29 public repositories as of September 2021.
* [https://github.com/topics/handwriting-generation GitHub topic '''handwriting-generation'''] has 21 public repositories as of September 2021.

* [https://www.sciencedirect.com/science/article/abs/pii/S0031320319303814 '''''Deep imitator: Handwriting calligraphy imitation via deep attention networks''''' at sciencedirect.com], published in [[w:Pattern Recognition (journal)]] in August '''2020'''.

* [https://greydanus.github.io/2016/08/21/handwriting/ '''Scribe''' - ''Generating Realistic Handwriting with TensorFlow'' at greydanus.github.io] blog post published on '''2016'''-08-21. [https://github.com/greydanus/scribe '''Scribe code''' at github.com]

* [https://dl.acm.org/doi/10.1145/2886099 '''''My Text in Your Handwriting''''' at dl.acm.org], a system from [[w:University College London]] published on '''2016'''-05-18 in [[w:ACM Transactions on Graphics]].<ref group="1st seen in">https://www.ucl.ac.uk/news/2016/aug/new-computer-programme-replicates-handwriting via Google search for "ai handwriting generator"</ref>

* [https://arxiv.org/abs/1308.0850 '''''Generating Sequences With Recurrent Neural Networks''''' at arxiv.org] by Alex Graves, published on '''2013'''-08-04 in Neural and Evolutionary Computing.
:# [https://www.cs.toronto.edu/~graves/handwriting.html '''''Recurrent neural network handwriting generation demo''''' at cs.toronto.edu] is a demonstration site for the publication
:# [https://www.calligrapher.ai/ '''Calligrapher.ai''' - ''Realistic computer-generated handwriting''] - The user may control parameters: speed, legibility, stroke width and style. The domain is registered by some organization in Iceland and the website offers no about-page<ref group="1st seen in">https://seanvasquez.com/handwriting-generation redirects to Calligrapher.ai - seen in https://www.reddit.com/r/MachineLearning/comments/gh9cbg/p_generate_handwriting_with_an_inbrowser/</ref>. According to [https://www.reddit.com/r/MachineLearning/comments/gh9cbg/p_generate_handwriting_with_an_inbrowser/ this reddit post] Calligrapher.ai is based on Graves' 2013 work, but "''adds an [[w:inference]] model to allow for sampling latent style vectors (similar to the VAE model used by SketchRNN)''".<ref>https://www.reddit.com/r/MachineLearning/comments/gh9cbg/p_generate_handwriting_with_an_inbrowser/</ref>

''' Handwriting recognition '''

* '''[[w:Handwriting recognition]]''' ('''HWR'''), also known as '''Handwritten Text Recognition''' ('''HTR'''), is the ability of a computer to receive and interpret intelligible [[w:handwriting|w:handwritten]] input (Wikipedia)
* '''[[w:Intelligent word recognition]]''', or '''IWR''', is the recognition of unconstrained handwritten words.<ref>

{{Cite web
|url=https://www.efilecabinet.com/what-is-iwr-intelligent-word-recognition-how-is-it-related-to-document-management/
|title=What is IWR? (Intelligent Word Recognition)
|date=2016-01-04
|website=eFileCabinet
|language=en-US
|access-date=2021-09-21
}}

</ref> (Wikipedia)

* [https://github.com/topics/handwriting-recognition GitHub topic '''handwriting-recognition'''] contains 238 repositories as of September 2021.

== Singing syntheses ==

As of 2020 the '''digital sing-alikes''' may not yet be here, but when we hear a faked singing voice and cannot hear that it is fake, then we will know. An ability to sing does not seem to add much hostile capability compared to the ability to thieve spoken word.

* [https://arxiv.org/abs/1910.11690 ''''''Fast and High-Quality Singing Voice Synthesis System based on Convolutional Neural Networks'''''' at arxiv.org], a 2019 singing voice synthesis technique using [[w:convolutional neural network|w:convolutional neural networks (CNN)]]. Accepted into the 2020 [[w:International Conference on Acoustics, Speech, and Signal Processing|International Conference on Acoustics, Speech, and Signal Processing (ICASSP)]].
* [http://compmus.ime.usp.br/sbcm/2019/papers/sbcm-2019-7.pdf ''''''State of art of real-time singing voice synthesis'''''' at compmus.ime.usp.br] presented at the 2019 [http://compmus.ime.usp.br/sbcm/2019/program/ 17th Brazilian Symposium on Computer Music]
* [http://theses.fr/2017PA066511 ''''''Synthesis and expressive transformation of singing voice'''''' at theses.fr] [https://www.theses.fr/2017PA066511.pdf as .pdf], a 2017 doctorate thesis by [http://theses.fr/227185943 Luc Ardaillon]
* [http://mtg.upf.edu/node/512 ''''''Synthesis of the Singing Voice by Performance Sampling and Spectral Models'''''' at mtg.upf.edu], a 2007 journal article in the [[w:IEEE Signal Processing Society]]'s Signal Processing Magazine
* [https://www.researchgate.net/publication/4295714_Speech-to-Singing_Synthesis_Converting_Speaking_Voices_to_Singing_Voices_by_Controlling_Acoustic_Features_Unique_to_Singing_Voices ''''''Speech-to-Singing Synthesis: Converting Speaking Voices to Singing Voices by Controlling Acoustic Features Unique to Singing Voices'''''' at researchgate.net], a November 2007 paper published in the IEEE conference on Applications of Signal Processing to Audio and Acoustics

* [[w:Category:Singing software synthesizers]]

== Countermeasures against synthetic human-like fakes ==

<section begin=APW_AI-transclusion />
=== Organizations against synthetic human-like fakes ===

[[File:DARPA_Logo.jpg|thumb|left|240px|The Defense Advanced Research Projects Agency, better known as [[w:DARPA]], has been active in the field of countering synthetic fake video for longer than the public has been aware of the problems existing.]]

* '''[[w:DARPA]]''' [https://www.darpa.mil/program/media-forensics '''DARPA program''': ''''Media Forensics'''' ('''MediFor''') at darpa.mil] aims to develop technologies for the automated assessment of the integrity of an image or video and to integrate these in an end-to-end media forensics platform. Archive.org first crawled their homepage in [https://web.archive.org/web/20160630154819/https://www.darpa.mil/program/media-forensics June '''2016''']<ref name="IA-MediFor-2016-crawl">https://web.archive.org/web/20160630154819/https://www.darpa.mil/program/media-forensics</ref>.

* [https://www.darpa.mil/program/semantic-forensics '''DARPA program''': ''''Semantic Forensics'''' ('''SemaFor''') at darpa.mil] aims to counter synthetic disinformation by developing systems for detecting semantic inconsistencies in forged media. They state that they hope to create technologies that "will help identify, deter, and understand adversary disinformation campaigns". More information at [[w:Duke University]]'s [https://researchfunding.duke.edu/semantic-forensics-semafor '''Research Funding database: Semantic Forensics (SemaFor)''' at researchfunding.duke.edu] and some at [https://www.grants.gov/web/grants/view-opportunity.html?oppId=319894 '''Semantic Forensics grant opportunity''' (closed Nov 2019) at grants.gov]. Archive.org first crawled their website in [https://web.archive.org/web/20191108090036/https://www.darpa.mil/program/semantic-forensics November '''2019''']<ref name="IA-SemaFor-2019-crawl">https://web.archive.org/web/20191108090036/https://www.darpa.mil/program/semantic-forensics</ref>

* The [https://artsandmedia.ucdenver.edu/areas-of-study/national-center-for-media-forensics/about-the-national-center-for-media-forensics '''National Center for Media Forensics''' at artsandmedia.ucdenver.edu] at the '''[[w:University of Colorado Denver]]''' offers a [https://artsandmedia.ucdenver.edu/areas-of-study/national-center-for-media-forensics/media-forensics-graduate-program Master's degree program], [https://artsandmedia.ucdenver.edu/areas-of-study/national-center-for-media-forensics/training-courses training courses] and [https://artsandmedia.ucdenver.edu/areas-of-study/national-center-for-media-forensics/national-center-for-media-forensics-research scientific basic and applied research]. [https://artsandmedia.ucdenver.edu/areas-of-study/national-center-for-media-forensics/faculty-staff Faculty staff at the NCMF]

[[File:Connie Leyva 2015.jpg|thumb|left|240px|[[w:California]] [[w:California State Senate|w:Senator]] [[w:Connie Leyva]] introduced [https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=201920200SB564 California Senate Bill SB 564] in Feb '''2019'''. It has been [https://www.sagaftra.org/action-alert-support-california-bill-end-deepfake-porn endorsed by SAG-AFTRA], but has not yet passed.]]

* '''[[w:SAG-AFTRA]]''' [https://www.sagaftra.org/action-alert-support-california-bill-end-deepfake-porn SAG-AFTRA ACTION ALERT: '''"Support California Bill to End Deepfake Porn"''' at sagaftra.org '''endorses'''] [https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=201920200SB564 California Senate Bill SB 564] introduced to the [[w:California State Senate]] by [[w:California]] [[w:Connie Leyva|w:Senator Connie Leyva]] in Feb '''2019'''.

=== Events against synthetic human-like fakes ===

* '''2020''' | '''[[w:Conference on Computer Vision and Pattern Recognition]] (CVPR)''' | [https://sites.google.com/view/wmediaforensics2020/home 2020 CVPR: ''''Workshop on Media Forensics'''' at sites.google.com], a '''June 2020''' workshop at the [[w:Conference on Computer Vision and Pattern Recognition]].

* '''2019''' | '''[[w:Conference on Neural Information Processing Systems|w:NeurIPS]]''' | [[w:Facebook, Inc.]] [https://spectrum.ieee.org/tech-talk/artificial-intelligence/machine-learning/facebook-ai-launches-its-deepfake-detection-challenge '''"Facebook AI Launches Its Deepfake Detection Challenge"''' at spectrum.ieee.org]

* '''2019''' | '''CVPR''' | [https://sites.google.com/view/mediaforensics2019/home '''2019''' CVPR: ''''Workshop on Media Forensics'''']

* '''Annual''' (?) | '''[[w:National Institute of Standards and Technology]] (NIST)''' | [https://www.nist.gov/itl/iad/mig/media-forensics-challenge NIST: ''''Media Forensics Challenge'''' at nist.gov], an iterative research challenge by the [[w:National Institute of Standards and Technology]] with the ongoing challenge being the 2nd one in action. [https://www.nist.gov/itl/iad/mig/media-forensics-challenge-2019-0 The evaluation criteria for the 2019 iteration are being formed.]

* '''2018''' | '''[[w:European Conference on Computer Vision|w:European Conference on Computer Vision (ECCV)]]''' | [https://sites.google.com/view/wocm2018/home ECCV 2018: ''''Workshop on Objectionable Content and Misinformation'''' at sites.google.com], a workshop at the '''2018''' [[w:European Conference on Computer Vision]] in [[w:Munich]], had focus on objectionable content detection, e.g. [[w:nudity]], [[w:pornography]], [[w:violence]], [[w:hate]], [[w:Child sexual abuse|w:children exploitation]] and [[w:terrorism]] among others, and on addressing misinformation problems when people are fed [[w:disinformation]] and they punt it on as misinformation. Announced topics included [[w:Outline of forensic science|w:image/video forensics]], [[w:detection]]/[[w:analysis]]/[[w:understanding]] of [[w:Counterfeit|w:fake]] images/videos, [[w:misinformation]] detection/understanding: mono-modal and [[w:Multimodality|w:multi-modal]], adversarial technologies and detection/understanding of objectionable content

* '''2018''' | '''[[w:National Institute of Standards and Technology|w:NIST]]''' | [https://www.nist.gov/itl/iad/mig/media-forensics-challenge-2018 NIST ''''Media Forensics Challenge 2018'''' at nist.gov] was the second annual evaluation to support research and help advance the state of the art for image and video forensics technologies – technologies that determine the region and type of manipulations in imagery (image/video data) and the phylogenic process that modified the imagery.

* '''2017''' | '''[[w:National Institute of Standards and Technology|w:NIST]]''' | [https://www.nist.gov/itl/iad/mig/nimble-challenge-2017-evaluation NIST ''''Nimble Challenge 2017'''' at nist.gov]

=== Studies against synthetic human-like fakes ===

* [https://arxiv.org/abs/2001.06564 ''''Media Forensics and DeepFakes: an overview'''' at arXiv.org] [https://arxiv.org/pdf/2001.06564.pdf (as .pdf at arXiv.org)], a '''2020''' '''review''' on the subject of digital look-alikes and media forensics

* [https://scholarship.law.duke.edu/cgi/viewcontent.cgi?article=1333&context=dltr ''''DEEPFAKES: False pornography is here and the law cannot protect you'''' at scholarship.law.duke.edu] by Douglas Harris, published in [https://scholarship.law.duke.edu/dltr/vol17/iss1/ Duke Law & Technology Review - Volume 17 on '''2019'''-01-05] by the [[w:Duke University School of Law]]

''' Search for more '''
* [[w:Law review]]
** [[w:List of law reviews in the United States]]

=== Companies against synthetic human-like fakes ===

* '''[https://cyabra.com/ Cyabra.com]''' is an AI-based system that helps organizations be on guard against disinformation attacks<ref group="1st seen in" name="ReutersDisinfomation2020">https://www.reuters.com/article/us-cyber-deepfake-activist/deepfake-used-to-attack-activist-couple-shows-new-disinformation-frontier-idUSKCN24G15E</ref>. [https://www.reuters.com/article/us-cyber-deepfake-activist/deepfake-used-to-attack-activist-couple-shows-new-disinformation-frontier-idUSKCN24G15E Reuters.com reporting] from July 2020.

<section end=APW_AI-transclusion />

=== SSF! wiki proposed countermeasure to synthetic porn: Adequate Porn Watcher AI (transcluded) ===
Transcluded from [[Adequate Porn Watcher AI]]

{{#lstx:Adequate Porn Watcher AI|See_also}}

=== Possible legal response: Outlawing digital sound-alikes (transcluded) ===
Transcluded from [[User:Juho Kunsola/Law proposals#Law proposal to ban covert modeling of human voice|Juho's proposal on banning digital sound-alikes]]

----
{{#section-h:User:Juho Kunsola/Law proposals|Law proposal to ban covert modeling of human voice}}

== Timeline of synthetic human-like fakes ==
See the #SSFWIKI '''[[Mediatheque]]''' for viewing media that is, or probably is, to do with synthetic human-like fakes.

=== 2020's synthetic human-like fakes ===


* '''2023''' | '''<font color="orange">Real-time digital look-and-sound-alike crime</font>''' | In April a man in northern China was defrauded of 4.3 million yuan by a criminal employing a digital look-and-sound-alike pretending to be his friend on a video call made with a stolen messaging service account.<ref name="Reuters real-time digital look-and-sound-alike crime  2023"/>
* '''2020''' | reporting | [https://www.wired.co.uk/article/deepfake-porn-websites-videos-law "''Deepfake porn is now mainstream. And major sites are cashing in''" at wired.co.uk] by Matt Burgess. Published August 2020.


* '''2023''' | '''<font color="orange">Election meddling with digital look-alikes</font>''' | The [[w:2023 Turkish presidential election]] saw numerous deepfake controversies.
** "''Ahead of the election in Turkey, President Recep Tayyip Erdogan showed a video linking his main challenger Kemal Kilicdaroglu to the militant Kurdish organization PKK.''" [...] "''Research by DW's fact-checking team in cooperation with DW's Turkish service shows that the video at the campaign rally was '''manipulated''' by '''combining two separate videos''' with totally different backgrounds and content.''" [https://www.dw.com/en/fact-check-turkeys-erdogan-shows-false-kilicdaroglu-video/a-65554034 reports dw.com]

* '''2020''' | demonstration | '''[https://moondisaster.org/ Moondisaster.org]''' (full film embedded in website) project by the [https://virtuality.mit.edu/ Center for Advanced Virtuality] of the [[w:Massachusetts Institute of Technology|w:MIT]] published in July 2020, makes use of various methods of making a synthetic human-like fake. Alternative place to watch: [https://www.youtube.com/watch?v=LWLadJFI8Pk ''In Event of Moon Disaster - FULL FILM'' at youtube.com]
** [https://www.cnet.com/news/mit-releases-deepfake-video-of-nixon-announcing-nasa-apollo-11-disaster/ Cnet.com July 2020 reporting ''MIT releases deepfake video of 'Nixon' announcing NASA Apollo 11 disaster'']
[[File:Marc Berman.jpg|thumb|120px|left|Homie [[w:Marc Berman|w:Marc Berman]], a righteous fighter for our human rights in this age of industrial disinformation filth and a member of the [[w:California State Assembly]], most loved for authoring [https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201920200AB602 AB-602], which came into effect on Jan 1 2020, banning both the manufacturing and [[w:digital distribution]] of synthetic pornography without the [[w:consent]] of the people depicted.]]


* '''2023''' | March 7 th | '''<font color="red">science / demonstration</font>''' | Microsoft researchers submitted a paper for publication outlining their [https://arxiv.org/abs/2303.03926 '''Cross-lingual neural codec language modeling system''' at arxiv.org] dubbed [https://www.microsoft.com/en-us/research/project/vall-e-x/vall-e-x/ '''VALL-E X''' at microsoft.com], that extends upon VALL-E's capabilities to be cross-lingual and also maintaining the same "''emotional tone''" from sample to fake.
* '''2020''' | US state law | January 1 <ref name="KFI2019">


* '''2023''' | January 5th | '''<font color="red">science / demonstration</font>''' | Microsoft researchers announced [https://www.microsoft.com/en-us/research/project/vall-e/ '''''VALL-E''''' - Neural Codec Language Models are Zero-Shot Text to Speech Synthesizers (at microsoft.com)] that is able to thieve a voice from only '''3 seconds of sample''' and it is also able to mimic the "''emotional tone''" of the sample the synthesis if produced of.<ref>
{{cite web
{{cite web
| url = https://arstechnica.com/information-technology/2023/01/microsofts-new-ai-can-simulate-anyones-voice-with-3-seconds-of-audio/
|url= https://kfiam640.iheart.com/content/2019-12-30-here-are-the-new-california-laws-going-into-effect-in-2020/
| title = Microsoft’s new AI can simulate anyone’s voice with 3 seconds of audio
|title= Here Are the New California Laws Going Into Effect in 2020
| last = Edwards
|last= Johnson
| first = Benj
|first= R.J.
| date = 2023-01-10
|date= 2019-12-30
| website = [[w:Arstechnica.com]]
|website= [[KFI]]
| publisher = Arstechnica
|publisher= [[iHeartMedia]]
| access-date = 2023-05-05
|access-date= 2020-07-13
| quote = For the paper's conclusion, they write: "Since VALL-E could synthesize speech that maintains speaker identity, it may carry potential risks in misuse of the model, such as spoofing voice identification or impersonating a specific speaker. To mitigate such risks, it is possible to build a detection model to discriminate whether an audio clip was synthesized by VALL-E. We will also put Microsoft AI Principles into practice when further developing the models."
|quote=}}
}}
</ref>


* '''2023''' | January 1st | '''<font color="green">Law</font>''' | {{#lst:Law on sexual offences in Finland 2023|what-is-it}}
</ref> the [[w:California]] [[w:State law (United States)|w:US state law]] [https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201920200AB602 AB-602] came into effect banning the manufacturing and [[w:digital distribution]] of synthetic pornography without the [[w:consent]] of the people depicted.  AB-602 provides victims of synthetic pornography with [[w:injunction|w:injunctive relief]] and poses legal threats of [[w:statutory damages|w:statutory]] and [[w:punitive damages]] on [[w:criminal]]s making or distributing synthetic pornography without consent. The bill AB-602 was signed into law by California [[w:Governor (United States)|w:Governor]] [[w:Gavin Newsom]] on October 3 2019 and was authored by [[w:California State Assembly]] member [[w:Marc Berman]].<ref name="CNET2019">


* '''2022''' | <font color="orange">'''science'''</font> and <font color="green">'''demonstration'''</font> | [[w:OpenAI]][https://openai.com/ (.com)] published [[w:ChatGPT]], a discutational AI accessible with a free account at [https://chat.openai.com/ chat.openai.com]. Initial version was published on 2022-11-30.
{{cite web
 
| last = Mihalcik
* '''2022''' | '''<font color="green">brief report of counter-measures</font>''' | {{#lst:Protecting world leaders against deep fakes using facial, gestural, and vocal mannerisms|what-is-it}} Publication date 2022-11-23.
| first = Carrie
 
| title = California laws seek to crack down on deepfakes in politics and porn
* '''2022''' | '''<font color="green">counter-measure</font>''' | {{#lst:Detecting deep-fake audio through vocal tract reconstruction|what-is-it}}
| website = [[w:cnet.com]]
:{{#lst:Detecting deep-fake audio through vocal tract reconstruction|original-reporting}}. Presented to peers in August 2022 and to the general public in September 2022.
| publisher = [[w:CNET]]
 
| date = 2019-10-04
* '''2022''' | <font color="orange">'''disinformation attack'''</font> | In June 2022 a fake digital look-and-sound-alike in the appearance and voice of [[w:Vitali Klitschko]], mayor of [[w:Kyiv]], held fake video phone calls with several European mayors. The Germans determined that the video phone call was fake by contacting the Ukrainian officials. This attempt at covert disinformation attack was originally reported by [[w:Der Spiegel]].<ref>https://www.theguardian.com/world/2022/jun/25/european-leaders-deepfake-video-calls-mayor-of-kyiv-vitali-klitschko</ref><ref>https://www.dw.com/en/vitali-klitschko-fake-tricks-berlin-mayor-in-video-call/a-62257289</ref>
| url = https://www.cnet.com/news/california-laws-seek-to-crack-down-on-deepfakes-in-politics-and-porn/
 
| access-date = 2020-07-13
* '''2022''' | science | [[w:DALL-E]] 2, a successor designed to generate more realistic images at higher resolutions that "can combine concepts, attributes, and styles" was published in April 2022.<ref>{{Cite web |title=DALL·E 2 |url=https://openai.com/dall-e-2/ |access-date=2023-04-22 |website=OpenAI |language=en-US}}</ref> ([https://en.wikipedia.org/w/index.php?title=DALL-E&oldid=1151136107 Wikipedia])
 
 
* '''2022''' | '''<font color="green">counter-measure</font>''' | {{#lst:Protecting President Zelenskyy against deep fakes|what-is-it}} Preprint published in February 2022 and submitted to [[w:arXiv]] in June 2022
 
* '''2022''' | '''<font color="green">science / review of counter-measures</font>''' | [https://www.mdpi.com/1999-4893/15/5/155 ''''''A Review of Modern Audio Deepfake Detection Methods: Challenges and Future Directions'''''' at mdpi.com]<ref name="Audio Deepfake detection review 2022">
 
{{cite journal
| last1      = Almutairi
| first1    = Zaynab
| last2      = Elgibreen
| first2    = Hebah
| date      = 2022-05-04
| title      = A Review of Modern Audio Deepfake Detection Methods: Challenges and Future Directions
| url        = https://www.mdpi.com/1999-4893/15/5/155
| journal    = [[w:Algorithms (journal)]]
| volume    =
| issue      =
| pages      =
| doi        = https://doi.org/10.3390/a15050155
| access-date = 2022-10-18
}}
}}


</ref>, a review of audio deepfake detection methods by researchers Zaynab Almutairi and Hebah Elgibreen of the [[w:King Saud University]], Saudi Arabia published in [[w:Algorithms (journal)]] on Wednesday 2022-05-04 published by the [[w:MDPI]] (Multidisciplinary Digital Publishing Institute). This article belongs to the Special Issue [https://www.mdpi.com/journal/algorithms/special_issues/Adversarial_Federated_Machine_Learning ''Commemorative Special Issue: Adversarial and Federated Machine Learning: State of the Art and New Perspectives'' at mdpi.com]
</ref>
* '''2020''' | Chinese legislation | On January 1 Chinese law requiring that synthetically faked footage should bear a clear notice about its fakeness came into effect. Failure to comply could be considered a [[w:crime]] the [[w:Cyberspace Administration of China]] stated on its website. China announced this new law in November 2019.<ref name="Reuters2019">


* '''2022''' | '''<font color="green">science / counter-measure</font>''' | [https://arxiv.org/abs/2203.15563 ''''''Attacker Attribution of Audio Deepfakes'''''' at arxiv.org], a pre-print was presented at the [https://www.interspeech2022.org/ Interspeech 2022 conference] organized by [[w:International Speech Communication Association]] in Korea September 18-22 2022.
{{cite web
| url = https://www.reuters.com/article/us-china-technology/china-seeks-to-root-out-fake-news-and-deepfakes-with-new-online-content-rules-idUSKBN1Y30VU
| title = China seeks to root out fake news and deepfakes with new online content rules
| last =
| first =
| date = 2019-11-29
| website = [[w:Reuters.com]]
| publisher = [[w:Reuters]]
| access-date = 2020-07-13
| quote = }}


* '''2021''' | Science and demonstration | In the NeurIPS 2021 held virtually in December researchers from Nvidia and [[w:Aalto University]] present their paper [https://nvlabs.github.io/stylegan3/ '''''Alias-Free Generative Adversarial Networks (StyleGAN3)''''' at nvlabs.github.io] and associated [https://github.com/NVlabs/stylegan3 implementation] in [[w:PyTorch]] and the results are deceivingly human-like in appearance. [https://nvlabs-fi-cdn.nvidia.com/stylegan3/stylegan3-paper.pdf StyleGAN3 paper as .pdf at nvlabs-fi-cdn.nvidia.com]
</ref> The Chinese government seems to be reserving the right to prosecute both users and [[w:online video platform]]s failing to abide by the rules. <ref name="TheVerge2019">


* '''2021''' | Entertainment | The Swedish pop band [[w:ABBA]] published an album in September and will be performing shows where the music is live and real, but the visuals will be [[#Age analysis and rejuvenating and aging syntheses|rejuvenated]] [[#Digital look-alikes|digital look-alikes]] of the band members displayed to the fans with [[w:holography]] technology. ABBA used [[w:Industrial Light & Magic]] as the purveyor of technology. [[w:Industrial Light & Magic]] was acquired by [[w:The Walt Disney Company]] in 2012 as part of their acquisition [[w:Lucasfilm]].
{{cite web
| url = https://www.theverge.com/2019/11/29/20988363/china-deepfakes-ban-internet-rules-fake-news-disclosure-virtual-reality
| title = China makes it a criminal offense to publish deepfakes or fake news without disclosure
| last = Statt
| first = Nick
| date = 2019-11-29
| website =
| publisher = [[w:The Verge]]
| access-date = 2020-07-13
| quote = }}


* '''2021''' | Controversy | July 2021 saw the release of [[w:Roadrunner: A Film About Anthony Bourdain]] and soon controversy arose, as the director [[w:Morgan Neville]] admitted to [[w:Helen Rosner]], a food writer for [[w:The New Yorker]] that he had contracted an AI company to thieve [[w:Anthony Bourdain]]'s voice and used it to insert audio that sounded like him, without declaring it as faked.<ref name="NewYorker 2020">
{{Cite magazine
|last=Rosner
|first=Helen
|author-link=[[w:Helen Rosner]]
|date=2021-07-15
|title=A Haunting New Documentary About Anthony Bourdain
|url=https://www.newyorker.com/culture/annals-of-gastronomy/the-haunting-afterlife-of-anthony-bourdain
|url-status=live
|access-date=2021-08-25
|magazine=[[w:The New Yorker]]
|language=en-US
}}
</ref><ref group="1st seen in">
Witness newsletter I subscribed to at https://www.witness.org/get-involved/
</ref>
</ref>


* '''2021''' | Science | [https://arxiv.org/pdf/2102.05630.pdf '''''Voice Cloning: a Multi-Speaker Text-to-Speech Synthesis Approach based on Transfer Learning''''' .pdf at arxiv.org], a paper submitted in Feb 2021 by researchers from the [[w:University of Turin]].<ref group="1st seen in" name="ConnectedPapers suggestion on Google Transfer learning 2018" />


* '''2021''' | '''<font color="red">crime / fraud</font>''' | {{#lst:Synthetic human-like fakes|2021 digital sound-alike enabled fraud}}


* '''2021''' | science and demonstration | '''DALL-E''', a [[w:deep learning]] model developed by [[w:OpenAI]] to generate digital images from [[w:natural language]] descriptions, called "prompts", was published in January 2021. DALL-E uses a version of [[w:GPT-3]] modified to generate images. (Adapted from [https://en.wikipedia.org/w/index.php?title=DALL-E&oldid=1151136107 Wikipedia])
* '''<font color="green">2020</font>''' | '''<font color="green">counter-measure</font>''' | The [https://incidentdatabase.ai/ ''''''AI Incident Database'''''' at incidentdatabase.ai] was introduced on 2020-11-18 by the [[w:Partnership on AI]].<ref name="PartnershipOnAI2020">https://www.partnershiponai.org/aiincidentdatabase/</ref>

[[File:Appearance of Queen Elizabeth II stolen by Channel 4 in Dec 2020 (screenshot at 191s).png|thumb|right|480px|In Dec 2020 Channel 4 aired a Queen-like fake i.e. they had thieved the appearance of Queen Elizabeth II using deepfake methods.]]


* '''2020''' | '''Controversy''' / '''Public service announcement''' | Channel 4 thieved the appearance of Queen Elizabeth II using deepfake methods. The product of synthetic human-like fakery originally aired on Channel 4 on 25 December at 15:25 GMT.<ref name="Queen-like deepfake 2020 BBC  reporting">https://www.bbc.com/news/technology-55424730</ref> [https://www.youtube.com/watch?v=IvY-Abd2FfM&t=3s View in YouTube]


* '''2020''' | reporting | [https://www.wired.co.uk/article/deepfake-porn-websites-videos-law "''Deepfake porn is now mainstream. And major sites are cashing in''" at wired.co.uk] by Matt Burgess. Published August 2020.
* '''2020''' | demonstration | '''[https://moondisaster.org/ Moondisaster.org]''' (full film embedded in website) project by the [https://virtuality.mit.edu/ Center for Advanced Virtuality] of the [[w:Massachusetts Institute of Technology|w:MIT]] published in July 2020, makes use of various methods of making a synthetic human-like fake. Alternative place to watch: [https://www.youtube.com/watch?v=LWLadJFI8Pk ''In Event of Moon Disaster - FULL FILM'' at youtube.com]
** [https://www.cnet.com/news/mit-releases-deepfake-video-of-nixon-announcing-nasa-apollo-11-disaster/ Cnet.com July 2020 reporting ''MIT releases deepfake video of 'Nixon' announcing NASA Apollo 11 disaster'']
* '''2020''' | US state law | {{#lst:Laws against synthesis and other related crimes|California2020}}
* '''2020''' | Chinese legislation | {{#lst:Laws against synthesis and other related crimes|China2020}}

=== 2010's synthetic human-like fakes ===
* '''2019''' | science and demonstration | At the December 2019 NeurIPS conference, a novel method for making animated fakes of anything with AI [https://aliaksandrsiarohin.github.io/first-order-model-website/ '''''First Order Motion Model for Image Animation''''' (website at aliaksandrsiarohin.github.io)], [https://proceedings.neurips.cc/paper/2019/file/31c0b36aef265d9221af80872ceb62f9-Paper.pdf (paper)] [https://github.com/AliaksandrSiarohin/first-order-model (github)] was presented.<ref group="1st seen in">https://www.technologyreview.com/2020/08/28/1007746/ai-deepfakes-memes/</ref>
** Reporting [https://www.technologyreview.com/2020/08/28/1007746/ai-deepfakes-memes/ '''''Memers are making deepfakes, and things are getting weird''''' at technologyreview.com], 2020-08-28 by Karen Hao.
* '''2019''' | demonstration | In September 2019 [[w:Yle]], the Finnish [[w:public broadcasting company]], aired a result of experimental [[w:journalism]], [https://yle.fi/uutiset/3-10955498 '''a deepfake of the President in office'''] [[w:Sauli Niinistö]] in its main news broadcast for the purpose of highlighting the advancing disinformation technology and problems that arise from it.
* '''2019''' | US state law | {{#lst:Laws against synthesis and other related crimes|Texas2019}}
* '''2019''' | US state law | {{#lst:Laws against synthesis and other related crimes|Virginia2019}}
* '''2019''' | Science | [https://arxiv.org/pdf/1809.10460.pdf '''''Sample Efficient Adaptive Text-to-Speech''''' .pdf at arxiv.org], a 2019 paper from Google researchers, published as a conference paper at [[w:International Conference on Learning Representations]] (ICLR)<ref group="1st seen in" name="ConnectedPapers suggestion on Google Transfer learning 2018"> https://www.connectedpapers.com/main/8fc09dfcff78ac9057ff0834a83d23eb38ca198a/Transfer-Learning-from-Speaker-Verification-to-Multispeaker-TextToSpeech-Synthesis/graph</ref>


* '''2019''' | science and demonstration | [https://arxiv.org/pdf/1905.09773.pdf ''''Speech2Face: Learning the Face Behind a Voice'''' at arXiv.org], a system for generating likely facial features based on the voice of a person, presented by the [[w:MIT Computer Science and Artificial Intelligence Laboratory]] at the 2019 [[w:Conference on Computer Vision and Pattern Recognition|w:CVPR]]. [https://github.com/saiteja-talluri/Speech2Face Speech2Face at github.com] This may develop into something that causes real problems. [https://neurohive.io/en/news/speech2face-neural-network-predicts-the-face-behind-a-voice/ "Speech2Face: Neural Network Predicts the Face Behind a Voice" reporting at neurohive.io], [https://belitsoft.com/speech-recognition-software-development/speech2face "Speech2Face Sees Voices and Hears Faces: Dreams Come True with AI" reporting at belitsoft.com]
</ref>


* '''<font color="green">2018</font>''' | '''<font color="green">counter-measure</font>''' | In September 2018 Google added “'''involuntary synthetic pornographic imagery'''” to its '''ban list''', allowing anyone to request the search engine block results that falsely depict them as “nude or in a sexually explicit situation.”<ref name="WashingtonPost2018">


{{cite web


* '''<font color="red">2018</font>''' | <font color="red">science</font> and <font color="red">demonstration</font> | The work [http://papers.nips.cc/paper/7700-transfer-learning-from-speaker-verification-to-multispeaker-text-to-speech-synthesis ''''Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis''''] ([https://arxiv.org/abs/1806.04558 at arXiv.org]) was presented at the 2018 [[w:Conference on Neural Information Processing Systems]] ('''NeurIPS'''). The pre-trained model is able to steal voices from a sample of only '''5 seconds''' with almost convincing results.
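As an illustration only (this is not the cited paper's implementation, whose components are neural networks — a d-vector speaker encoder, a Tacotron-style synthesizer and a WaveNet vocoder): the key idea of the transfer-learning approach above is to separate a ''speaker encoder'', which maps a short voice sample to a fixed-size embedding, from a ''synthesizer'' conditioned on that embedding. A toy sketch of that separation, with made-up arithmetic standing in for the neural networks:

```python
# Toy sketch of embedding-conditioned speech synthesis (illustrative only).
from statistics import mean

def speaker_encoder(frames):
    """Collapse a variable-length list of 'audio frames' into a
    fixed-size speaker embedding (per-dimension mean here, a crude
    stand-in for a neural d-vector)."""
    dims = len(frames[0])
    return tuple(mean(f[d] for f in frames) for d in range(dims))

def synthesizer(text, speaker_embedding):
    """Pretend synthesizer: conditions its 'spectrogram' on the
    embedding so the same text yields speaker-specific output."""
    timbre = sum(speaker_embedding)
    return [(ord(ch) % 7) * timbre for ch in text]

# Two 'speakers', each represented by a few 2-D frames of sample audio.
alice = [(0.9, 0.1), (0.8, 0.2)]
bob = [(0.1, 0.5), (0.2, 0.4)]

emb_alice = speaker_encoder(alice)
emb_bob = speaker_encoder(bob)

spec_alice = synthesizer("hello", emb_alice)
spec_bob = synthesizer("hello", emb_bob)

print(spec_alice != spec_bob)  # same text, different 'voices': prints True
```

Because only the small embedding carries speaker identity, a short sample (the paper's 5 seconds) suffices to "clone" a voice once the synthesizer has been trained on many speakers.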
* '''2018''' | science | [https://arxiv.org/abs/1710.10196 '''Progressive Growing of GANs for Improved Quality, Stability, and Variation''' at arxiv.org] ([https://arxiv.org/pdf/1710.10196.pdf .pdf]), colloquially known as ProGANs were presented by Nvidia researchers at the [https://iclr.cc/Conferences/2018 2018 ICLR]. [[w:International Conference on Learning Representations]]


* '''2018''' | demonstration | At the 2018 [[w:World Internet Conference]] in [[w:Wuzhen]] the [[w:Xinhua News Agency]] presented two digital look-alikes made to the resemblance of its real news anchors Qiu Hao ([[w:Chinese language]])<ref name="TheGuardian2018">
  | quote = }}
</ref> Neither the [[w:speech synthesis]] used nor the gesturing of the digital look-alike anchors was good enough to deceive the watcher into mistaking them for real humans imaged with a TV camera.
* '''2018''' | action | [https://schiff.house.gov/imo/media/doc/2018-09%20ODNI%20Deep%20Fakes%20letter.pdf '''Deep Fakes letter to the Office of the Director of National Intelligence''' at schiff.house.gov], a letter sent to the [[w:Director of National Intelligence]] on 2018-09-13 by congresspeople [[w:Adam Schiff]], [[w:Stephanie Murphy]] and [[w:Carlos Curbelo]] requesting that a report be compiled on the synthetic human-like fakes situation, what the threats are and what the solutions could be.<ref group="1st seen in">[https://uk.pcmag.com/security/117402/us-lawmakers-ai-generated-fake-videos-may-be-a-security-threat ''''''US Lawmakers: AI-Generated Fake Videos May Be a Security Threat'''''' at uk.pcmag.com], 2018-09-13 reporting by Michael Kan</ref>


* '''2018''' | controversy / demonstration | The [[w:deepfake]]s controversy surfaced, in which [[w:Pornographic film|porn video]]s were doctored utilizing [[w:deep learning|w:deep machine learning]] so that the face of the actress was replaced by the software's estimate of what another person's face would look like in the same pose and lighting.


* '''2016''' | music video | [https://www.youtube.com/watch?v=tMQHAy0HUDo ''''''Plug'''''' by Kube at youtube.com] - A 2016 music video by [[w:Kube (rapper)]] ([[w:fi:Kube]]) that showcases deepfake-like technology this early. The video was uploaded on 2016-09-15 and was directed by Faruk Nazeri.
* '''2015''' | Science | [https://arxiv.org/abs/1411.7766v3 ''''''Deep Learning Face Attributes in the Wild'''''' at arxiv.org] presented at the 2015 [[w:International Conference on Computer Vision]]


* '''2015''' | movie | In ''[[w:Furious 7]]'' a digital look-alike of the actor [[w:Paul Walker]], who died in an accident during filming, was made by [[w:Weta Digital]] to enable the completion of the film.<ref name="thr2015">
* '''2013''' | demonstration | A '''[https://ict.usc.edu/pubs/Scanning%20and%20Printing%20a%203D%20Portrait%20of%20President%20Barack%20Obama.pdf 'Scanning and Printing a 3D Portrait of President Barack Obama' at ict.usc.edu]'''. A 3D model and a 3D bust were made of President Obama with his consent. <font color="green">'''Relevancy: certain'''</font>


* '''2011''' | <font color="green">'''Law in Finland'''</font> | Distribution and attempt of distribution and also possession of '''synthetic [[w:Child sexual abuse material|CSAM]]''' was '''criminalized''' on Wednesday 2011-06-01, upon the initiative of the [[w:Vanhanen II Cabinet]]. These protections against CSAM were moved into 19 §, 20 § and 21 § of Chapter 20 when the [[Law on sexual offences in Finland 2023]] was improved and gathered into Chapter 20 upon the initiative of the [[w:Marin Cabinet]].
=== 2000's synthetic human-like fakes ===


* '''2010''' | movie | [[w:Walt Disney Pictures]] released a sci-fi sequel entitled ''[[w:Tron: Legacy]]'' with a digitally rejuvenated digital look-alike made of the actor [[w:Jeff Bridges]] playing the [[w:antagonist]] [[w:List of Tron characters#CLU|w:CLU]].
An analytical BRDF must take into account the subsurface scattering, or the end result '''will not pass human testing'''.]]


* '''2003''' | movie(s) | The '''[[w:Matrix Reloaded]]''' and '''[[w:Matrix Revolutions]]''' films. Relevancy: '''First public display''' of '''[[synthetic human-like fakes#Digital look-alikes|digital look-alikes]]''' that are virtually '''indistinguishable from''' the '''real actors'''. [https://www.researchgate.net/publication/215518319_Universal_Capture_-_Image-based_Facial_Animation_for_The_Matrix_Reloaded ''''''Universal Capture - Image-based Facial Animation for "The Matrix Reloaded"'''''' at researchgate.net] (2003)


{{#ev:youtube|3qIXIHAmcKU|640px|right|Music video for '''''Bullet''''' by [[w:Covenant (band)|w:Covenant]] from 2002. Here you can observe the classic "''skin looks like cardboard''"-bug that stopped the pre-reflectance capture era versions from passing human testing.}}
* '''2002''' | music video | '''[https://www.youtube.com/watch?v=3qIXIHAmcKU 'Bullet' by Covenant on Youtube]''' by [[w:Covenant (band)]] from their album [[w:Northern Light (Covenant album)]]. Relevancy: Contains the best upper-torso digital look-alike of Eskil Simonsson (vocalist) that their organization could procure at the time. Here you can observe the '''classic "''skin looks like cardboard''"-bug''' (assuming this was not intended) that '''thwarted efforts to''' make digital look-alikes that '''pass human testing''' before the '''reflectance capture and dissection in 1999''' by [[w:Paul Debevec]] et al. at the [[w:University of Southern California]] and the subsequent development of the '''"Analytical [[w:bidirectional reflectance distribution function|w:BRDF]]"''' (quote-unquote) by ESC Entertainment, a company set up for the '''sole purpose''' of '''making the cinematography''' for the 2003 films Matrix Reloaded and Matrix Revolutions '''possible''', led by George Borshukov.
* '''2002''' | music video | '''[https://www.youtube.com/watch?v=3qIXIHAmcKU 'Bullet' by Covenant on Youtube]''' by [[w:Covenant (band)]] from their album [[w:Northern Light (Covenant album)]]. Relevancy: Contains the best upper-torso digital look-alike of Eskil Simonsson (vocalist) that their organization could procure at the time. Here you can observe the '''classic "''skin looks like cardboard''"-bug''' (assuming this was not intended) that '''thwarted efforts to''' make digital look-alikes that '''pass human testing''' before the '''reflectance capture and dissection in 1999''' by [[w:Paul Debevec]] et al. at the [[w:University of Southern California]] and subsequent development of the '''"Analytical [[w:bidirectional reflectance distribution function|w:BRDF]]"''' (quote-unquote) by ESC Entertainment, a company set up for the '''sole purpose''' of '''making the cinematography''' for the 2003 films Matrix Reloaded and Matrix Revolutions '''possible''', lead by George Borshukov.


== 1990's synthetic human-like fakes ==


[[File:Institute for Creative Technologies (logo).jpg|thumb|left|156px|Logo of the '''[[w:Institute for Creative Technologies]]''' founded in 1999 in the [[w:University of Southern California]] by the [[w:United States Army]]]]
[[File:Deb2000-light-stage-low-res-rip.png|thumb|left|304px|Original [[w:light stage]] used in the 1999 reflectance capture by [[w:Paul Debevec|Debevec]] et al.<br /><br />
It consists of two rotary axes with height and radius control. Light source and a polarizer were placed on one arm and a camera and the other polarizer on the other arm.
<br /><br />
<small>Original image by Debevec et al. – Copyright ACM 2000 – https://dl.acm.org/citation.cfm?doid=311779.344855 – Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page.</small>]]


* <font color="red">'''1999'''</font> | <font color="red">'''science'''</font> | '''[http://dl.acm.org/citation.cfm?id=344855 'Acquiring the reflectance field of a human face' paper at dl.acm.org]''' [[w:Paul Debevec]] et al. of [[w:University of Southern California|w:USC]] did the '''first known reflectance capture''' of '''the human face''' with their extremely simple [[w:light stage]]. They presented their method and results at [[w:SIGGRAPH]] 2000. The scientific breakthrough was isolating the [[w:subsurface scattering|w:subsurface light component]] (which makes the simulated skin appear to glow slightly from within), exploiting the fact that light reflected from the oil-to-air layer retains its [[w:Polarization (waves)|polarization]] while light that has scattered beneath the skin loses it. Equipped with only a movable light source, a movable video camera, two polarizers and a computer program doing extremely simple math, they acquired the last piece required to reach photorealism.<ref name="Deb2000"/>
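The polarization-based separation can be illustrated with a toy sketch: a cross-polarized capture blocks the still-polarized specular (oil-to-air) reflection and records only the depolarized subsurface light, while a parallel-polarized capture records both, so subtracting the two images isolates the specular layer. This is only a minimal illustration of the principle, with invented image data, not Debevec et al.'s actual pipeline:

```python
import numpy as np

# Invented ground-truth components standing in for real captures.
rng = np.random.default_rng(0)
subsurface = rng.uniform(0.1, 0.6, size=(4, 4))  # depolarized, scattered in skin
specular = rng.uniform(0.0, 0.3, size=(4, 4))    # polarized, oil-to-air reflection

# Cross-polarized image: the camera-side polarizer blocks the specular
# component (it kept the source's polarization), passing only subsurface light.
cross = subsurface.copy()
# Parallel-polarized image: both components pass.
parallel = subsurface + specular

# "Extremely simple math": the specular layer is the difference of the two.
recovered_specular = parallel - cross
assert np.allclose(recovered_specular, specular)
```

In the actual method such captures are repeated for many light directions, which is what yields the full reflectance field of the face.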


* <font color="red">'''1999'''</font> | <font color="red">'''institute founded'''</font> | The '''[[w:Institute for Creative Technologies]]''' was founded by the [[w:United States Army]] in the [[w:University of Southern California]]. It collaborates with the [[w:United States Army Futures Command]], the [[w:United States Army Combat Capabilities Development Command]], the [[w:Combat Capabilities Development Command Soldier Center]] and the [[w:United States Army Research Laboratory]].<ref name="ICT-about">https://ict.usc.edu/about/</ref> In 2016 [[w:Hao Li]] was appointed to direct the institute.
* '''1997''' | '''technology / science''' | [https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/human/bregler-sig97.pdf '''''Video rewrite: Driving visual speech with audio''''' at www2.eecs.berkeley.edu]<ref name="Bregler1997" /><ref group="1st seen in" name="Bohacek-Farid-2022">
PROTECTING PRESIDENT ZELENSKYY AGAINST DEEP FAKES https://arxiv.org/pdf/2206.12043.pdf
</ref> Christoph Bregler, Michele Covell and Malcolm Slaney presented their work at ACM SIGGRAPH 1997. [https://www.dropbox.com/sh/s4l00z7z4gn7bvo/AAAP5oekFqoelnfZYjS8NQyca?dl=0 Download video evidence of ''Video rewrite: Driving visual speech with audio'' by Bregler et al. 1997 from dropbox.com], [http://chris.bregler.com/videorewrite/ view the author's site at chris.bregler.com], [https://dl.acm.org/doi/10.1145/258734.258880 paper at dl.acm.org], [https://www.researchgate.net/publication/220720338_Video_Rewrite_Driving_Visual_Speech_with_Audio paper at researchgate.net]
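The core idea of ''Video Rewrite'' was to index existing mouth footage by triphone (a phoneme together with its two neighbours) and reorder that footage to match the phoneme sequence of new audio. A toy sketch of that lookup idea follows, with invented clip names and phoneme labels; the real system additionally morphs between clips and falls back to the nearest matching triphone when an exact one is missing:

```python
# Toy index of existing mouth footage, keyed by triphone. Clip IDs and
# phoneme labels are invented for illustration.
footage = {
    ("sil", "h", "eh"): "clip_017",
    ("h", "eh", "l"): "clip_042",
    ("eh", "l", "ow"): "clip_008",
}

def triphones(phonemes):
    """Sliding window of three consecutive phonemes."""
    return [tuple(phonemes[i:i + 3]) for i in range(len(phonemes) - 2)]

# Phonemes extracted from the *new* audio drive the footage reordering.
new_audio_phonemes = ["sil", "h", "eh", "l", "ow"]
clip_sequence = [footage[t] for t in triphones(new_audio_phonemes)]
print(clip_sequence)  # ['clip_017', 'clip_042', 'clip_008']
```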


* '''1994''' | movie | [[w:The Crow (1994 film)]] was the first film production to make use of [[w:digital compositing]] of a computer-simulated representation of a face onto scenes filmed using a [[w:body double]]. Necessity was the muse, as the actor [[w:Brandon Lee]], portraying the protagonist, was tragically killed in an on-set accident.


== 1970's synthetic human-like fakes ==


{{#ev:vimeo|16292363|480px|right|''[[w:A Computer Animated Hand|w:A Computer Animated Hand]]'' is a 1972 short film by [[w:Edwin Catmull]] and [[w:Fred Parke]]. This was the first time that [[w:computer-generated imagery]] was used in film to animate likenesses of moving human appearance.}}
* '''1971''' | science | '''[https://interstices.info/images-de-synthese-palme-de-la-longevite-pour-lombrage-de-gouraud/ 'Images de synthèse : palme de la longévité pour l'ombrage de Gouraud' (still photos)]'''. [[w:Henri Gouraud (computer scientist)]] made the first [[w:Computer graphics]] [[w:geometry]] [[w:digitization]] and representation of a human face. The model was his wife, Sylvie Gouraud. The 3D model was a simple [[w:wire-frame model]] and he applied [[w:Gouraud shading]] to produce the '''first known representation''' of '''human-likeness''' on computer.<ref>{{cite web|title=Images de synthèse : palme de la longévité pour l'ombrage de Gouraud|url=http://interstices.info/jcms/c_25256/images-de-synthese-palme-de-la-longevite-pour-lombrage-de-gouraud}}</ref>
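Gouraud's technique computes lighting only at the vertices and linearly interpolates the resulting intensities across each face. A minimal sketch of that interpolation, with illustrative normals and light direction (not Gouraud's original data):

```python
import numpy as np

# Directional light and per-vertex unit normals (illustrative values).
light_dir = np.array([0.0, 0.0, 1.0])
vertex_normals = np.array([
    [0.0, 0.0, 1.0],
    [0.6, 0.0, 0.8],
    [0.0, 0.6, 0.8],
])

# Lambertian intensity evaluated once per vertex: normal . light.
vertex_intensity = vertex_normals @ light_dir  # [1.0, 0.8, 0.8]

def shade(bary):
    """Gouraud shading: interpolate vertex intensities with barycentric
    coordinates (w0, w1, w2) instead of re-lighting every pixel."""
    return float(np.dot(bary, vertex_intensity))

print(shade(np.array([1/3, 1/3, 1/3])))  # centroid intensity, ~0.867
```

Because intensities rather than normals are interpolated, sharp specular highlights inside a triangle are missed, a limitation later addressed by [[w:Phong shading]].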


== 1960's synthetic human-like fakes ==
 
* '''1961''' | demonstration | The first singing by a computer was performed by an [[w:IBM 704]] and the song was [[w:Daisy Bell]], written in 1892 by British songwriter [[w:Harry Dacre]]. Go to [[Mediatheque#1961]] to view.
 
== 1930's synthetic human-like fakes ==
[[File:Homer Dudley (October 1940). "The Carrier Nature of Speech". Bell System Technical Journal, XIX(4);495-515. -- Fig.5 The voder being demonstrated at the New York World's Fair.jpg|thumb|left|300px|'''[[w:Voder]]''' demonstration pavilion at the [[w:1939 New York World's Fair]]]]
 
* '''1939''' | demonstration | '''[[w:Voder]]''' (''Voice Operating Demonstrator'') from the [[w:Bell Labs|w:Bell Telephone Laboratory]] was the first time that [[w:speech synthesis]] was done electronically by breaking speech down into its acoustic components. It was invented by [[w:Homer Dudley]] in 1937–1938, building on his earlier work on the [[w:vocoder]]. (Wikipedia)
 
== 1770's synthetic human-like fakes ==


[[File:Kempelen Speakingmachine.JPG|right|thumb|300px|A replica of [[w:Wolfgang von Kempelen]]'s [[w:Wolfgang von Kempelen's Speaking Machine]], built 2007–09 at the Department of [[w:Phonetics]], [[w:Saarland University]], [[w:Saarbrücken]], Germany. This machine added models of the tongue and lips, enabling it to produce [[w:consonant]]s as well as [[w:vowel]]s]]
----


== Media perhaps about synthetic human-like fakes ==
This is a chronological listing of media that probably relate to [[synthetic human-like fakes]].
 
The links currently include scripture, science, demonstrations, music videos, music, entertainment and movies.
 
=== 2020's media perhaps about synthetic human-like fakes ===
 
* '''2022''' | movie | '''''[[w:The Matrix 4]]''''' ('''2022''') will be the 4th installment in [[w:The Matrix (franchise)]]. Relevancy: likely high, but unknown, as the film is not yet finished or released.
 
=== 2010's media perhaps about synthetic human-like fakes  ===
 
* '''2018''' | music video | [https://www.youtube.com/watch?v=X8f5RgwY8CI&list=PLxKHVMqMZqUTgHYRSXfZN_JjItBzVTCau '''''Simulation Theory''''' album by '''Muse''' on Youtube] by [[w:Muse (band)]] from the [[w:Simulation Theory (album)]]. '''Note:''' "The Pause", "Watch What I Do" and "The Interlude" are not part of the album. Relevancy: the whole album
 
* '''2016''' | music video |[https://www.youtube.com/watch?v=ElvLZMsYXlo ''''''Voodoo In My Blood'''''' (official music video) by '''Massive Attack''' on Youtube] by [[w:Massive Attack]] and featuring [[w:Tricky]] from the album [[w:Ritual Spirit]]. Relevancy: '''How many machines''' can you see in the same frame at times? If you answered one, look harder and make a more educated guess.
 
* '''2016''' | music video | [https://www.youtube.com/watch?v=8r31DFrFs5A ''''''The Spoils'''''' by '''Massive Attack''' on Youtube] by [[w:Massive Attack]] featuring [[w:Hope Sandoval]]. [[w:The Spoils (song)|Wikipedia on The Spoils (song)]] Relevancy: The video '''contains synthesis''' of '''human-like''' likenesses.
 
* '''2013''' | music | [https://www.youtube.com/watch?v=VvT8ydMiETc ''''''In Two'''''' by the '''Nine Inch Nails''' (lyric video) on Youtube] by [[w:Nine Inch Nails]] from the album [[w:Hesitation Marks]]. Relevancy: The '''lyrics''' seem to be about '''appearance theft'''.
 
* '''2013''' | music | [https://www.youtube.com/watch?v=Rn3W6ok-IhE ''''''Copy of A'''''' by the '''Nine Inch Nails''' (lyric video) on Youtube] by [[w:Nine Inch Nails]] from the album [[w:Hesitation Marks]]. Relevancy: The '''lyrics''' seem to be about '''appearance theft'''.
 
* '''2013''' | music video | [https://www.youtube.com/watch?v=ZWrUEsVrdSU ''''''Before Your Very Eyes'''''' by '''Atoms For Peace''' (official music video) on Youtube] by [[w:Atoms for Peace (band)]] from their album [[w:Amok (Atoms for Peace album)]]. Video was made by [http://www.andrewthomashuang.com/ Andrew Thomas Huang (.com)] Relevancy: Watch the video
 
=== 2000's media perhaps about synthetic human-like fakes  ===
 
* '''2006''' | music video | [https://www.youtube.com/watch?v=hC_sqi9oocI ''''''John The Revelator'''''' by '''Depeche Mode''' (official music video) on Youtube] by [[w:Depeche Mode]] from the single [[w:John the Revelator / Lilian]]. Relevancy: [[Biblical explanation - The books of Daniel and Revelation#Revelation 13|Book of Revelations]].
 
* '''2005''' | music video | [https://www.youtube.com/watch?v=wwvLlEtxX3o '''''Only''''' by the '''Nine Inch Nails''' at Youtube.com] [[w:Only (Nine Inch Nails song)]] by the [[w:Nine Inch Nails]]. Relevancy: check the lyrics, check the video
 
* '''2005''' | short film | [https://www.youtube.com/watch?v=zl6hNj1uOkY ''''''Doll Face'''''' by '''Andy Huang''' on youtube.com] was uploaded on 2007-02-19 by [http://www.andrewthomashuang.com/ Andrew Thomas Huang (.com)]. There are various unofficial videos using 'Doll Face' as graphics, but with different music.
* '''2001''' | music video | [https://www.youtube.com/watch?v=dbB-mICjkQM ''''''Plug In Baby'''''' by '''Muse''' on youtube.com] by [[w:Muse (band)]] from their album [[w:Origin of Symmetry]]. Relevancy: See video
 
* '''2001''' | music video | [https://www.youtube.com/watch?v=lWIeVTs94rI ''''''Evolution Revolution Love'''''' by '''Tricky''' on Youtube] by [[w:Tricky (musician)]] from the [[w:Blowback (album)]] and featuring [[w:Ed Kowalczyk]]. Relevancy: See video
 
=== 1990's media perhaps about synthetic human-like fakes  ===
 
* '''1998''' | music video | [https://www.youtube.com/watch?v=XbByxzZ-4dI '''''Rabbit in Your Headlights''''' by '''UNKLE''' on Youtube] by [[w:Unkle]] featuring [[w:Thom Yorke]] on vocals. [[w:Rabbit in Your Headlights#Music video|Wikipedia on 'Rabbit in Your Headlights' music video]]. Relevancy: Contains shots that would have '''injured''' or '''killed''' a '''human actor'''.
 
* '''1998''' | music | [https://www.youtube.com/watch?v=sjrnKG1j3Eg ''''''New Model No. 15'''''' by '''Marilyn Manson''' (lyrics in video) on Youtube] by [[w:Marilyn Manson (band)]] from the album [[w:Mechanical Animals]]. Relevancy: The '''lyrics''' are obviously about '''[[digital look-alikes]]''' approaching.
 
* '''1998''' |  music video | [https://www.youtube.com/watch?v=FC-Kos_b1sE ''''''The Dope Show'''''' by '''Marilyn Manson''' (lyric video) on Youtube] [https://www.youtube.com/watch?v=5R682M3ZEyk (official music video)] by [[w:Marilyn Manson (band)]] from the album [[w:Mechanical Animals]]. Relevancy: '''lyrics'''
 
* '''1996''' | music | [https://www.youtube.com/watch?v=LMK9z5jyKbQ ''''''Dead Cities'''''' by '''The Future Sound of London''' on Youtube] - Title track from the 1996 [[w:The Future Sound of London]] album [[w:Dead Cities (album)]]. Relevancy: You need to listen
 
* '''1990''' | music (video) | [https://www.youtube.com/watch?v=JG9CXQxhfL4 ''''''Daydreaming'''''' by '''Massive Attack''' on youtube.com] [[w:Daydreaming (Massive Attack song)]] by [[w:Massive Attack]] Relevancy: "''But what happen when the bomb drops Down...?''"
 
=== 1980's media perhaps about synthetic human-like fakes ===
 
* '''1986''' | music video | [https://www.youtube.com/watch?v=6epzmRZk6UU '''''Paranoimia''''' by '''Art of Noise''' on Youtube]. [[w:Paranoimia]] by [[w:Art of Noise]] featuring Max Headroom, from the album [[w:In Visible Silence]]. Relevancy: Contains a state-of-the-art (for the era) '''synthetic human-like character''', [[w:Max Headroom]].
 
* '''1983''' | music video | [https://www.youtube.com/watch?v=O0lIlROWro8 '''''Musique Non-Stop''''' by '''Kraftwerk''' on Youtube], made in 1983 but published only in '''1986''', by [[w:Kraftwerk]] from the album [[w:Electric Café]]. Relevancy: Contains state-of-the-art (for the era) '''[[digital look-alikes]]''' of the band members.
 
=== 1st century media perhaps about synthetic human-like fakes  ===
* '''[[w:1st century]]''' | scripture | '''[[w:Jesus]] teaches''' about things that are yet to come in
*# '''[[w:Matthew 24]]'''
*# '''[[w:The Sheep and the Goats]]''' and
*# '''[[w:Mark 13]]'''.
 
*'''1st century''' | scripture | '''[[w:2 Thessalonians 2]]''' is the second chapter of the [[w:Second Epistle to the Thessalonians]]. It is traditionally attributed to [[w:Paul the Apostle]], with [[w:Saint Timothy]] as a co-author.  See [[Biblical explanation - The books of Daniel and Revelation#2 Thessalonians 2|Biblical explanation - The books of Daniel and Revelation § 2 Thessalonians 2]] '''Caution''' to reader: contains '''explicit''' written information about the beasts
 
*'''1st century''' | scripture | '''[[w:Book of Revelation]]'''. The task of writing down and smuggling out this early warning of what is to come is given by God to his servant John, who was imprisoned on the island of [[w:Patmos]].  See [[Biblical explanation - The books of Daniel and Revelation#Revelation 13|Biblical explanation - The books of Daniel and Revelation § Revelation 13]]. '''Caution''' to reader: contains '''explicit''' written information about the beasts.
 
 
=== 2nd century BC media perhaps about synthetic human-like fakes  ===
 
* [[w:2nd century BC]] | scripture | The '''[[w:Book of Daniel]]''' was put in writing.
* See [[Biblical explanation - The books of Daniel and Revelation#Daniel 7|Biblical explanation - The books of Daniel and Revelation § Daniel 7]]. '''Caution''' to reader: contains '''explicit''' written information about the beasts.
 
=== 6th century BC media perhaps about synthetic human-like fakes ===
 
[[File:Daniel's vision of the four beasts from the sea and the Ancient of Days - Silos Apocalypse (1109), f.240 - BL Add MS 11695.jpg|thumb|left|360px|Image taken from Silos Apocalypse. Originally published/produced in Spain (Silos), 1109.<br/><br/>
[[Biblical explanation - The books of Daniel and Revelation#Daniel 7|Daniel 7]], Daniel's vision of the three beasts <sup>[[Biblical explanation - The books of Daniel and Revelation#Daniel 7:1-6 - Three beasts|Dan 7:1-6]]</sup> and the fourth beast <sup>[[Biblical explanation - The books of Daniel and Revelation#Daniel 7:7-8 - The fourth beast|Dan 7:7-8]]</sup> from the sea and the [[w:Ancient of Days]]<sup>[[Biblical explanation - The books of Daniel and Revelation#The Ancient of Days|Dan 7:9-10]]</sup>]]
 
* [[w:6th century BC]] | scripture | '''[[w:Daniel (biblical figure)]]''' was in [[w:Babylonian captivity]] when he had the visions in which God first warned us of synthetic human-like fakes. His testimony was put into written form in the 2nd century BC.
 
== Footnotes ==
<references group="footnote" />  


== Contact information of organizations ==
Please contact [[Organizations, studies and events against synthetic human-like fakes|these organizations]] and tell them to work harder against the disinformation weapons.
 
== 1st seen in ==
<references group="1st seen in" />


 
== References ==
<references />