Synthetic human-like fakes
<section begin=definitions-of-synthetic-human-like-fakes />
When the '''[[Glossary#No camera|camera does not exist]]''', but the subject being imaged with a simulation of a (movie) camera deceives the watcher into believing it is some living or dead person, it is a '''[[#Digital look-alikes|digital look-alike]]'''.

When it cannot be determined by human testing or media forensics whether some fake voice is a synthetic fake of some person's voice or an actual recording of that person's real voice, it is a pre-recorded '''[[#Digital sound-alikes|digital sound-alike]]'''.

[[Synthetic human-like fakes|Read more about synthetic human-like fakes]], [[Synthetic human-like fakes#Timeline of synthetic human-like fakes|examine timeline of synthetic human-like fakes]] or [[Mediatheque|view Mediatheque]].
<section end=definitions-of-synthetic-human-like-fakes />

[[File:Screenshot at 27s of a moving digital-look-alike made to appear Obama-like by Monkeypaw Productions and Buzzfeed 2018.png|thumb|right|480px|link=Mediatheque/2018/Obama's appearance thieved - a public service announcement digital look-alike by Monkeypaw Productions and Buzzfeed|{{#lst:Mediatheque|Obama-like-fake-2018}}]]
<small>[[:File:Deb-2000-reflectance-separation.png|Original picture]] by [[w:Paul Debevec]] et al. - Copyright ACM 2000 https://dl.acm.org/citation.cfm?doid=311779.344855</small>]]

In the cinemas we have seen digital look-alikes for over 15 years. These digital look-alikes have "clothing" (a simulation of clothing is not clothing) or "superhero costumes" and "superbaddie costumes", and they don't need to care about the laws of physics, let alone the laws of physiology. It is generally accepted that digital look-alikes made their public debut in the sequels to The Matrix, i.e. [[w:The Matrix Reloaded]] and [[w:The Matrix Revolutions]], released in 2003. It can be considered almost certain that it was not possible to make these before 1999, as the final piece of the puzzle needed to make a (still) digital look-alike that passes human testing, the [[Glossary#Reflectance capture|reflectance capture]] over the human face, was achieved for the first time in 1999 at the [[w:University of Southern California]] and was presented to the crème de la crème of the computer graphics field in their annual gathering SIGGRAPH 2000.<ref name="Deb2000">
{{cite book
=== The problems with digital look-alikes ===

Extremely unfortunately for humankind, organized criminal leagues that possess the '''weapons capability''' of making believable-looking '''synthetic pornography''' are producing '''synthetic terror porn'''<ref group="footnote" name="About the term synthetic terror porn">It is terminologically more precise, more inclusive and more useful to talk about 'synthetic terror porn', if we want to talk about things with their real names, than 'synthetic rape porn', because also synthesizing recordings of consensual-looking sex scenes can be terroristic in intent.</ref> on industrial production pipelines by animating digital look-alikes and distributing it in the murky Internet in exchange for money stacks that are getting thinner and thinner as time goes by.

These industrially produced pornographic delusions are causing great human suffering, especially to their direct victims, but they are also tearing our communities and societies apart, sowing blind rage and perceptions of deepening chaos, feeding feelings of powerlessness and provoking violence. This '''hate illustration''' increases and strengthens hate thinking, hate speech and hate crimes, and tears our fragile social constructions apart; with time it will pervert humankind's view of humankind into an almost unrecognizable shape, unless we interfere with resolve.

=== List of possible naked digital look-alike attacks ===

* The classic "''portrayal of as if in involuntary sex''"-attack. (Digital look-alike "cries")
* "''Sexual preference alteration''"-attack. (Digital look-alike "smiles")
* "''Cutting / beating''"-attack (Constructs a deceptive history for genuine scars)
* "''Mutilation''"-attack (Digital look-alike "dies")
* "''Unconscious and injected''"-attack (Digital look-alike gets "disease")
=== Age analysis and rejuvenating and aging syntheses ===
== Digital sound-alikes ==

[[File:Helsingin-Sanomat-2012-David-Martin-Howard-of-University-of-York-on-apporaching-digital-sound-alikes.jpg|right|thumb|338px|A picture of a cut-away titled "''Voice-terrorist could mimic a leader''" from a 2012 [[w:Helsingin Sanomat]] warning that the sound-like-anyone machines are approaching. Thank you to homie [https://pure.york.ac.uk/portal/en/researchers/david-martin-howard(ecfa9e9e-1290-464f-981a-0c70a534609e).html Prof. David Martin Howard] of the [[w:University of York]], UK and the anonymous editor for the heads-up.]]

Living people can defend<ref group="footnote" name="judiciary maybe not aware">Whether a suspect can defend against faked synthetic speech that sounds like him/her depends on how up-to-date the judiciary is. If no information and instructions about digital sound-alikes have been given to the judiciary, they likely will not believe the defense of denying that the recording is of the suspect's voice.</ref> themselves against a digital sound-alike by denying the things the digital sound-alike says, if the fakes are presented to them, but dead people cannot. Digital sound-alikes offer criminals new disinformation attack vectors and wreak havoc on provability.

For these reasons the bannable '''raw materials''', i.e. covert voice models, '''[[Law proposals to ban covert modeling|should be prohibited by law]]''' in order to protect humans from abuse by criminal parties.
=== Documented digital sound-alike attacks ===

* Sound-like-anyone technology found its way into the hands of criminals: in '''2019''' [[w:NortonLifeLock|Symantec]] researchers knew of 3 cases where the technology had been used for '''[[w:crime]]'''.
** [https://www.bbc.com/news/technology-48908736 '''"Fake voices 'help cyber-crooks steal cash''''" at bbc.com] July 2019 reporting<ref name="BBC2019">
{{cite web
|url= https://www.bbc.com/news/technology-48908736
|quote= }}
</ref>
** [https://www.washingtonpost.com/technology/2019/09/04/an-artificial-intelligence-first-voice-mimicking-software-reportedly-used-major-theft/ '''"An artificial-intelligence first: Voice-mimicking software reportedly used in a major theft"''' at washingtonpost.com] documents a [[w:fraud]] committed with a digital sound-like-anyone-machine, September 2019 reporting.<ref name="WaPo2019">
{{cite web
|url= https://www.washingtonpost.com/technology/2019/09/04/an-artificial-intelligence-first-voice-mimicking-software-reportedly-used-major-theft/
|publisher= [[w:Washington Post]]
|access-date= 2019-07-22
|quote= }}
</ref>
----

=== 'Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis' 2018 by Google Research (external transclusion) ===

<section begin=GoogleTransferLearning2018 />
* In '''2018''' at the '''[[w:Conference on Neural Information Processing Systems]]''' (NeurIPS) the work [http://papers.nips.cc/paper/7700-transfer-learning-from-speaker-verification-to-multispeaker-text-to-speech-synthesis 'Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis'] ([https://arxiv.org/abs/1806.04558 at arXiv.org]) was presented. The pre-trained model is able to steal voices from a sample of only '''5 seconds''' with almost convincing results.

Observe how good the "VCTK p240" system is at deceiving the listener into thinking that a real person is doing the talking.

{{#Widget:Iframe - Audio samples from Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis by Google Research}}

The Iframe above is transcluded from [https://google.github.io/tacotron/publications/speaker_adaptation/ 'Audio samples from "Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis"' at google.github.io], the audio samples of a sound-like-anyone machine presented at the 2018 [[w:NeurIPS]] conference by Google researchers.
<section end=GoogleTransferLearning2018 />

The [https://www.youtube.com/watch?v=0sR1rU3gLzQ video 'This AI Clones Your Voice After Listening for 5 Seconds' by '2 minute papers' at YouTube] to the right describes the voice thieving machine presented by Google Research at [[w:NeurIPS|w:NeurIPS]] 2018.

{{#ev:youtube|0sR1rU3gLzQ|640px|right|The [https://www.youtube.com/watch?v=0sR1rU3gLzQ video 'This AI Clones Your Voice After Listening for 5 Seconds' by '2 minute papers' at YouTube] describes the voice thieving machine by Google Research at [[w:NeurIPS|w:NeurIPS]] 2018.}}
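The pipeline the paper describes has a characteristic shape: a speaker encoder squeezes a reference recording of any length into a fixed-length "speaker embedding", which then conditions the synthesizer. The following pure-Python toy is a sketch of that shape only, not the paper's method: all function names are hypothetical, and the "encoder" is a trivial statistics pass standing in for the trained LSTM encoder of the real system.

```python
# Toy sketch of the speaker-encoder -> synthesizer pipeline shape.
# Illustrative only; real systems use trained neural networks.
import math

def speaker_encoder(samples, frame=160, dims=4):
    """Collapse a waveform of ANY length into a fixed-length embedding.

    Here we just take per-frame energy statistics, so the output size
    never depends on how long the reference recording is."""
    frames = [samples[i:i + frame]
              for i in range(0, len(samples) - frame + 1, frame)]
    energies = [sum(s * s for s in f) / frame for f in frames]
    mean = sum(energies) / len(energies)
    var = sum((e - mean) ** 2 for e in energies) / len(energies)
    return [mean, var, max(energies), min(energies)][:dims]

def synthesize(text, speaker_embedding):
    """Stand-in for the synthesizer: in the real system the embedding
    conditions the decoder at every generation step."""
    return {"text": text, "conditioned_on": speaker_embedding}

# A fake 5-second "recording" at 16 kHz is enough for a fixed embedding:
five_seconds = [math.sin(0.01 * n) for n in range(16000 * 5)]
emb = speaker_encoder(five_seconds)
out = synthesize("hello", emb)
```

The point of the sketch is the fixed-length bottleneck: because the embedding's size is independent of the input length, a 5-second sample conditions the synthesizer just as well as an hour of audio would, which is what makes such short samples sufficient.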
----
=== Example of a hypothetical 4-victim digital sound-alike attack ===

# Victim #3 - It could also be viewed that victim #3 is our law enforcement systems, as they are put to chase after and interrogate the innocent victim #1.
# Victim #4 - Our judiciary, which prosecutes and possibly convicts the innocent victim #1.

Thus it is high time to act and to '''[[Law proposals to ban covert modeling|criminalize the covert modeling of the human voice!]]'''
=== Examples of speech synthesis software not quite able to fool a human yet ===

* '''[https://cstr-edinburgh.github.io/merlin/ Merlin]''', a [[w:neural network]] based speech synthesis system by the Centre for Speech Technology Research at the [[w:University of Edinburgh]]
* [https://papers.nips.cc/paper/8206-neural-voice-cloning-with-a-few-samples '''Neural Voice Cloning with a Few Samples''' at papers.nips.cc], [[w:Baidu Research]]'s shot at a sound-like-anyone-machine, did not convince in '''2018'''

=== Reporting on the sound-like-anyone-machines ===

* [https://www.forbes.com/sites/bernardmarr/2019/05/06/artificial-intelligence-can-now-copy-your-voice-what-does-that-mean-for-humans/#617f6d872a2a '''"Artificial Intelligence Can Now Copy Your Voice: What Does That Mean For Humans?"''' May 2019 reporting at forbes.com] on [[w:Baidu Research]]'s attempt at the sound-like-anyone-machine demonstrated at the 2018 [[w:NeurIPS]] conference.
=== Temporal limit of digital sound-alikes ===

[[File:Spectrogram-19thC.png|thumb|right|640px|A [[w:spectrogram]] of a male voice saying 'nineteenth century']]
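A spectrogram like the one pictured is simply the magnitude of short-time Fourier transforms taken over successive windows of the waveform. A minimal pure-Python sketch follows; it is for clarity only (real tools use a fast Fourier transform, e.g. scipy.signal.spectrogram, rather than this O(n²) direct DFT):

```python
# Minimal spectrogram: rows are time frames, columns are frequency bins.
import math

def dft_magnitudes(frame):
    """Magnitude of the discrete Fourier transform of one window."""
    n = len(frame)
    mags = []
    for k in range(n // 2 + 1):          # keep only non-negative frequencies
        re = sum(frame[t] * math.cos(-2 * math.pi * k * t / n) for t in range(n))
        im = sum(frame[t] * math.sin(-2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    return mags

def spectrogram(samples, window=64, hop=32):
    """Slide a window over the signal, transforming each position."""
    return [dft_magnitudes(samples[i:i + window])
            for i in range(0, len(samples) - window + 1, hop)]

# A 440 Hz tone sampled at 8 kHz: energy should pile up in one bin,
# since each bin spans 8000 / 64 = 125 Hz (440 Hz falls near bin 3-4).
sr = 8000
tone = [math.sin(2 * math.pi * 440 * t / sr) for t in range(512)]
spec = spectrogram(tone)
peak_bin = max(range(len(spec[0])), key=lambda k: spec[0][k])
```

For a voice, the bins with most energy trace out the harmonics visible as the bright bands in the image above.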
== Singing syntheses ==

As of 2020 the '''digital sing-alikes''' may not yet be here, but when we hear a faked singing voice and cannot hear that it is fake, then we will know. An ability to sing does not seem to add many hostile capabilities compared to the ability to thieve the spoken word.

* [https://arxiv.org/abs/1910.11690 '''''Fast and High-Quality Singing Voice Synthesis System based on Convolutional Neural Networks''''' at arxiv.org], a 2019 singing voice synthesis technique using [[w:convolutional neural network|w:convolutional neural networks (CNN)]]. Accepted into the 2020 [[w:International Conference on Acoustics, Speech, and Signal Processing|International Conference on Acoustics, Speech, and Signal Processing (ICASSP)]].
* [http://compmus.ime.usp.br/sbcm/2019/papers/sbcm-2019-7.pdf '''''State of art of real-time singing voice synthesis''''' at compmus.ime.usp.br], presented at the 2019 [http://compmus.ime.usp.br/sbcm/2019/program/ 17th Brazilian Symposium on Computer Music]
* [http://theses.fr/2017PA066511 '''''Synthesis and expressive transformation of singing voice''''' at theses.fr] ([https://www.theses.fr/2017PA066511.pdf as .pdf]), a 2017 doctorate thesis by [http://theses.fr/227185943 Luc Ardaillon]
* [http://mtg.upf.edu/node/512 '''''Synthesis of the Singing Voice by Performance Sampling and Spectral Models''''' at mtg.upf.edu], a 2007 journal article in the [[w:IEEE Signal Processing Society]]'s Signal Processing Magazine
* [https://www.researchgate.net/publication/4295714_Speech-to-Singing_Synthesis_Converting_Speaking_Voices_to_Singing_Voices_by_Controlling_Acoustic_Features_Unique_to_Singing_Voices '''''Speech-to-Singing Synthesis: Converting Speaking Voices to Singing Voices by Controlling Acoustic Features Unique to Singing Voices''''' at researchgate.net], a November 2007 paper published in the IEEE conference on Applications of Signal Processing to Audio and Acoustics
* [[w:Category:Singing software synthesizers]]
== Text syntheses ==

[[w:Chatbot]]s have existed for a long time, but only now, armed with AI, are they becoming more deceiving.

In [[w:natural language processing]], development in [[w:natural-language understanding]] leads to more cunning [[w:natural-language generation]] AI.

[[w:OpenAI]]'s [[w:OpenAI#GPT|w:Generative Pre-trained Transformer]] ('''GPT''') is a left-to-right [[w:transformer (machine learning model)]]-based [[w:Natural-language generation|text generation]] model succeeded by [[w:OpenAI#GPT-2|w:GPT-2]] and [[w:OpenAI#GPT-3|w:GPT-3]].
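"Left-to-right" generation means the model produces text autoregressively: each next token is sampled from a probability distribution conditioned on everything generated so far. The sketch below shows only that sampling loop, with a hand-written bigram table standing in for the transformer (a real GPT computes the distribution on the fly from the whole left context; the table and its words are purely illustrative):

```python
# Autoregressive sampling loop, the mechanism GPT-style models share.
import random

# Toy "model": P(next word | previous word).
BIGRAMS = {
    "<s>":    {"the": 1.0},
    "the":    {"camera": 0.5, "voice": 0.5},
    "camera": {"lies": 1.0},
    "voice":  {"lies": 1.0},
    "lies":   {"</s>": 1.0},
}

def sample_next(prev, rng):
    """Draw one word from the conditional distribution."""
    dist = BIGRAMS[prev]
    r, acc = rng.random(), 0.0
    for word, p in dist.items():
        acc += p
        if r < acc:
            return word
    return word  # guard against floating-point round-off

def generate(rng, max_len=10):
    """Repeat: condition on what exists, sample, append, until end token."""
    out, prev = [], "<s>"
    for _ in range(max_len):
        word = sample_next(prev, rng)
        if word == "</s>":
            break
        out.append(word)
        prev = word
    return " ".join(out)

text = generate(random.Random(0))
```

Because each step is a draw from a distribution, the same model can produce different fluent continuations on different runs, which is part of why generated text can be hard to fingerprint.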
''' Reporting / announcements '''

* [https://www.technologyreview.com/2020/08/14/1006780/ai-gpt-3-fake-blog-reached-top-of-hacker-news/ ''''A college kid’s fake, AI-generated blog fooled tens of thousands. This is how he made it.'''' at technologyreview.com] '''August 2020''' reporting in the [[w:MIT Technology Review]] by Karen Hao about GPT-3.

''' External links '''

* [https://analyticssteps.com/blogs/detection-fake-and-false-news-text-analysis-approaches-and-cnn-deep-learning-model '''"Detection of Fake and False News (Text Analysis): Approaches and CNN as Deep Learning Model"''' at analyticssteps.com], a 2019 summary written by Shubham Panth.
== Handwriting syntheses ==

* [https://arxiv.org/abs/1308.0850 '''''Generating Sequences With Recurrent Neural Networks''''' at arxiv.org] by Alex Graves, published on '''2013'''-08-04 in Neural and Evolutionary Computing.
:# [https://www.cs.toronto.edu/~graves/handwriting.html '''''Recurrent neural network handwriting generation demo''''' at cs.toronto.edu] is a demonstration site for the publication.
:# [https://www.calligrapher.ai/ '''Calligrapher.ai''' - ''Realistic computer-generated handwriting''] - The user may control parameters: speed, legibility, stroke width and style. The domain is registered by some organization in Iceland and the website offers no about-page<ref group="note">https://seanvasquez.com/handwriting-generation redirects to Calligrapher.ai - seen in https://www.reddit.com/r/MachineLearning/comments/gh9cbg/p_generate_handwriting_with_an_inbrowser/</ref>. According to [https://www.reddit.com/r/MachineLearning/comments/gh9cbg/p_generate_handwriting_with_an_inbrowser/ this reddit post] Calligrapher.ai is based on Graves' 2013 work, but "''adds an [[w:inference]] model to allow for sampling latent style vectors (similar to the VAE model used by SketchRNN)''".<ref>https://www.reddit.com/r/MachineLearning/comments/gh9cbg/p_generate_handwriting_with_an_inbrowser/</ref>
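Graves' 2013 approach generates handwriting one pen movement at a time: at each step the recurrent network outputs the parameters of a 2-D Gaussian mixture plus an end-of-stroke probability, and the next (dx, dy, pen-up) offset is sampled from them. The pure-Python sketch below shows only that sampling step; the constant mixture parameters stand in for the RNN's per-step output, and all names are illustrative, not from Graves' code:

```python
# Sampling step of a mixture-density handwriting model (toy version).
import random

def sample_offset(weights, means, stds, p_pen_up, rng):
    """Draw one (dx, dy, pen_up) triple from a diagonal Gaussian mixture."""
    # Pick a mixture component proportionally to its weight.
    r, acc, idx = rng.random(), 0.0, 0
    for i, w in enumerate(weights):
        acc += w
        if r < acc:
            idx = i
            break
    dx = rng.gauss(means[idx][0], stds[idx][0])
    dy = rng.gauss(means[idx][1], stds[idx][1])
    pen_up = 1 if rng.random() < p_pen_up else 0
    return dx, dy, pen_up

def rollout(steps, rng):
    """Accumulate sampled offsets into absolute pen positions."""
    x = y = 0.0
    path = []
    for _ in range(steps):
        # In the real model these parameters come from the RNN state;
        # here they are constants (a rightward drift) for illustration.
        dx, dy, pen_up = sample_offset(
            weights=[0.7, 0.3],
            means=[(1.0, 0.0), (0.0, 0.5)],
            stds=[(0.1, 0.1), (0.1, 0.1)],
            p_pen_up=0.05,
            rng=rng,
        )
        x, y = x + dx, y + dy
        path.append((x, y, pen_up))
    return path

path = rollout(50, random.Random(42))
```

Because every offset is a random draw, two rollouts from the same model differ slightly, which is what gives the synthesized handwriting its natural-looking variation.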
''' Handwriting recognition '''

* '''[[w:Handwriting recognition]]'''
* '''[[w:Intelligent word recognition]]''', or '''IWR''', is the recognition of unconstrained handwritten words.<ref>
{{Cite web

* [https://github.com/topics/handwriting-recognition GitHub topic '''handwriting-recognition'''] contains 238 repositories as of September 2021.
== Countermeasures against synthetic human-like fakes ==

<section begin=APW_AI-transclusion />

=== Organizations against synthetic human-like fakes ===
[[File:DARPA_Logo.jpg|thumb|right|240px|The Defense Advanced Research Projects Agency, better known as [[w:DARPA]], has been active in the field of countering synthetic fake video for longer than the public has been aware that the problems exist.]]

* '''[[w:DARPA]]''' ([https://www.darpa.mil/ darpa.mil]) [https://contact.darpa.mil/ contact form]<ref group="contact">
* '''The Defense Advanced Research Projects Agency'''
* Contact form https://contact.darpa.mil/
* Email: outreach@darpa.mil
* Defense Advanced Research Projects Agency
* 675 North Randolph Street
* Arlington, VA 22203-2114
* Phone 1-703-526-6630
</ref> [https://www.darpa.mil/program/media-forensics '''DARPA program''': ''''Media Forensics'''' ('''MediFor''') at darpa.mil] aims to develop technologies for the automated assessment of the integrity of an image or video and to integrate these in an end-to-end media forensics platform. Archive.org first crawled their homepage in [https://web.archive.org/web/20160630154819/https://www.darpa.mil/program/media-forensics June '''2016''']<ref name="IA-MediFor-2016-crawl">https://web.archive.org/web/20160630154819/https://www.darpa.mil/program/media-forensics</ref>.
* [https://www.darpa.mil/program/semantic-forensics '''DARPA program''': ''''Semantic Forensics'''' ('''SemaFor''') at darpa.mil] aims to counter synthetic disinformation by developing systems for detecting semantic inconsistencies in forged media. They state that they hope to create technologies that "will help identify, deter, and understand adversary disinformation campaigns". More information at [[w:Duke University]]'s [https://researchfunding.duke.edu/semantic-forensics-semafor '''Research Funding database: Semantic Forensics (SemaFor)''' at researchfunding.duke.edu] and some at [https://www.grants.gov/web/grants/view-opportunity.html?oppId=319894 '''Semantic Forensics grant opportunity''' (closed Nov 2019) at grants.gov]. Archive.org first crawled their website in [https://web.archive.org/web/20191108090036/https://www.darpa.mil/program/semantic-forensics November '''2019''']<ref name="IA-SemaFor-2019-crawl">https://web.archive.org/web/20191108090036/https://www.darpa.mil/program/semantic-forensics</ref>.
* '''[[w:University of Colorado Denver]]''''s College of Arts & Media<ref group="contact">
* '''National Center for Media Forensics''' at https://artsandmedia.ucdenver.edu
* Email: CAM@ucdenver.edu
* College of Arts & Media
* National Center for Media Forensics
* CU Denver
* Arts Building
* Suite 177
* 1150 10th Street
* Denver, CO 80204
* USA
* Phone 1-303-315-7400
</ref> is the home of the [https://artsandmedia.ucdenver.edu/areas-of-study/national-center-for-media-forensics/about-the-national-center-for-media-forensics '''National Center for Media Forensics''' at artsandmedia.ucdenver.edu], which offers a [https://artsandmedia.ucdenver.edu/areas-of-study/national-center-for-media-forensics/media-forensics-graduate-program Master's degree program], [https://artsandmedia.ucdenver.edu/areas-of-study/national-center-for-media-forensics/training-courses training courses] and [https://artsandmedia.ucdenver.edu/areas-of-study/national-center-for-media-forensics/national-center-for-media-forensics-research scientific basic and applied research]. [https://artsandmedia.ucdenver.edu/areas-of-study/national-center-for-media-forensics/faculty-staff Faculty staff at the NCMF]
* [https://www.clemson.edu/centers-institutes/watt/hub/index.html '''Media Forensics Hub''' at clemson.edu]<ref group="contact">
* '''Media Forensics Hub''' at Clemson University clemson.edu
* Media Forensics Hub
* Clemson University
* Clemson, South Carolina 29634
* USA
* Phone 1-864-656-3311
</ref> at the Watt Family Innovation Center of the '''[[w:Clemson University]]''' has the aims of promoting multi-disciplinary research, collecting and facilitating discussion and ideation of challenges and solutions. They provide [https://www.clemson.edu/centers-institutes/watt/hub/resources/ resources], [https://www.clemson.edu/centers-institutes/watt/hub/connect-collab/research.html research], [https://www.clemson.edu/centers-institutes/watt/hub/connect-collab/education.html media forensics education] and are running a [https://www.clemson.edu/centers-institutes/watt/hub/connect-collab/wg-disinfo.html '''Working Group''' on '''disinformation'''].<ref group="contact">mediaforensics@clemson.edu</ref>
* [https://lab.witness.org/ '''The WITNESS Media Lab''' at lab.witness.org] by [[w:Witness (organization)]] ([https://www.witness.org/get-involved/ contact form])<ref group="contact">
* '''WITNESS''' (Media Lab)
* Contact form https://www.witness.org/get-involved/ incl. mailing list subscription possibility
* WITNESS
* 80 Hanson Place, 5th Floor
* Brooklyn, NY 11217
* USA
* Phone: 1.718.783.2000
</ref>, a human rights non-profit organization based out of Brooklyn, New York, has been actively against synthetic filth since 2018. They work both in awareness raising and in media forensics.
** [https://lab.witness.org/projects/osint-digital-forensics/ '''Open-source intelligence digital forensics''' - ''How do we work together to detect AI-manipulated media?'' at lab.witness.org]. "''In February '''2019''' WITNESS in association with [[w:George Washington University]] brought together a group of leading researchers in [[Glossary#Media forensics|media forensics]] and [[w:detection]] of [[w:deepfakes]] and other [[w:media manipulation]] with leading experts in social newsgathering, [[w:User-generated content]] and [[w:open-source intelligence]] ([[w:OSINT]]) verification and [[w:fact-checking]].''" (website)
** [https://lab.witness.org/projects/synthetic-media-and-deep-fakes/ '''Prepare, Don’t Panic: Synthetic Media and Deepfakes''' at lab.witness.org] is a summary page for WITNESS Media Lab's ongoing work against synthetic human-like fakes. Their work was launched in '''2018''' with the first multi-disciplinary convening around deepfakes preparedness, which led to the writing of the [http://witness.mediafire.com/file/q5juw7dc3a2w8p7/Deepfakes_Final.pdf/file '''report''' “'''Mal-uses of AI-generated Synthetic Media and Deepfakes: Pragmatic Solutions Discovery Convening'''”] (dated 2018-06-11). [https://blog.witness.org/2018/07/deepfakes/ '''''Deepfakes and Synthetic Media: What should we fear? What can we do?''''' at blog.witness.org]
[[File:Connie Leyva 2015.jpg|thumb|right|240px|[[w:California]] [[w:California State Senate|w:Senator]] [[w:Connie Leyva]] sponsored [https://leginfo.legislature.ca.gov/faces/billCompareClient.xhtml?bill_id=201920200SB564&showamends=false '''California Senate Bill SB 564''' - ''Depiction of individual using digital or electronic technology: sexually explicit material: cause of action''] in Feb '''2019'''. It is identical to Assembly Bill 602 authored by [[w:Marc Berman]]. The bill was [https://www.sagaftra.org/action-alert-support-california-bill-end-deepfake-porn endorsed by SAG-AFTRA]. It became law on 1 January 2020 in the [[w:California Civil Code|w:California Civil Code]] of the [[w:California Codes]].]] | |||
* '''Screen Actors Guild - American Federation of Television and Radio Artists''' - '''[[w:SAG-AFTRA]]''' ([https://www.sagaftra.org/ sagaftra.org], [https://servicesagaftra.custhelp.com/app/ask contact form])<ref group="contact">
* '''Screen Actors Guild - American Federation of Television and Radio Artists''' at https://www.sagaftra.org/
* Screen Actors Guild - American Federation of Television and Radio Artists
* 5757 Wilshire Boulevard, 7th Floor
* Los Angeles, California 90036
* USA
* Phone: 1-855-724-2387
* Email: info@sagaftra.org
* https://www.sagaftra.org/contact-us
</ref> [https://www.sagaftra.org/action-alert-support-california-bill-end-deepfake-porn SAG-AFTRA ACTION ALERT: '''"Support California Bill to End Deepfake Porn"''' at sagaftra.org '''endorses'''] [https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=201920200SB564 California Senate Bill SB 564] introduced to the [[w:California State Senate]] by [[w:California]] [[w:Connie Leyva|w:Senator Connie Leyva]] in Feb '''2019'''.
=== Organizations possibly against synthetic human-like fakes ===

Originally harvested from the study [https://www.europarl.europa.eu/RegData/etudes/STUD/2020/634452/EPRS_STU(2020)634452_EN.pdf The ethics of artificial intelligence: Issues and initiatives (.pdf)] by the [[w:European Parliamentary Research Service]], published on the [[w:Europa (web portal)]] in March 2020.<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020">
{{cite web
|url= https://www.europarl.europa.eu/RegData/etudes/STUD/2020/634452/EPRS_STU(2020)634452_EN.pdf
|title= The ethics of artificial intelligence: Issues and initiatives
|last=
|first=
|date= March 2020
|website= [[w:Europa (web portal)]]
|publisher= [[w:European Parliamentary Research Service]]
|access-date= 2021-02-17
|quote= This study deals with the ethical implications and moral questions that arise from the development and implementation of artificial intelligence (AI) technologies.}}
</ref>
* [https://ieai.mcts.tum.de/ '''INSTITUTE FOR ETHICS IN ARTIFICIAL INTELLIGENCE''' at ieai.mcts.tum.de]<ref group="contact"> | |||
* '''INSTITUTE FOR ETHICS IN ARTIFICIAL INTELLIGENCE''' | |||
Visitor’s address | |||
* Marsstrasse 40 | |||
* D-80335 Munich | |||
Postal address | |||
* INSTITUTE FOR ETHICS IN ARTIFICIAL INTELLIGENCE | |||
* Arcisstrasse 21 | |||
* D-80333 Munich | |||
* Germany | |||
Email | |||
* ieai(at)mcts.tum.de | |||
Website | |||
* https://ieai.mcts.tum.de | |||
</ref> received initial funding from [[w:Facebook]] in 2019.<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/> [https://www.linkedin.com/company/ieaitum/ IEAI on LinkedIn.com] | |||
* [https://ethical.institute/ '''The Institute for Ethical AI & Machine Learning''' at ethical.institute]([https://ethical.institute/#contact contact form] asks a lot of questions)<ref group="contact"> | |||
'''The Institute for Ethical AI & Machine Learning''' | |||
Website https://ethical.institute/ | |||
Email | |||
* a@ethical.institute | |||
Contacted | |||
* 2021-08-14 used the contact form at https://ethical.institute/#contact | |||
</ref><ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/> [https://www.linkedin.com/company/the-institute-for-ethical-machine-learning/ The Institute for Ethical AI & Machine Learning on LinkedIn.com] | |||
* [https://www.buckingham.ac.uk/research-the-institute-for-ethical-ai-in-education/ '''The Institute for Ethical AI in Education''' at buckingham.ac.uk]<ref group="contact"> | |||
* '''The Institute for Ethical AI in Education''' | |||
From | |||
* https://www.buckingham.ac.uk/contact-us | |||
Mail | |||
* The University of Buckingham | |||
* The Institute for Ethical AI in Education | |||
* Hunter Street | |||
* Buckingham | |||
* MK18 1EG | |||
* United Kingdom | |||
</ref><ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/> | |||
* [https://futureoflife.org/ '''Future of Life Institute''' at futureoflife.org] ([https://futureoflife.org/contact/ contact form] with also mailing list)<ref group="contact"> | |||
'''Future of Life Institute''' | |||
Contact form | |||
* https://futureoflife.org/contact/ | |||
* No physical contact info | |||
Contacted | |||
* 2021-08-14 | Subscribed to newsletter | |||
</ref> received funding from private donors.<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/> See [[w:Future of Life Institute]] for more info. | |||
* [https://www.ai-gakkai.or.jp/en/ '''The Japanese Society for Artificial Intelligence''' ('''JSAI''') at ai-gakkai.or.jp]<ref group="contact"> | |||
'''The Japanese Society for Artificial Intelligence''' | |||
Contact info | |||
* https://www.ai-gakkai.or.jp/en/about/info/ | |||
Mail | |||
* The Japanese Society for Artificial Intelligence | |||
* 402, OS Bldg. | |||
* 4-7 Tsukudo-cho, Shinjuku-ku, Tokyo 162-0821 | |||
* Japan | |||
Phone | |||
* 03-5261-3401 | |||
</ref> Publication: Ethical guidelines.<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/> | |||
* [https://ai-4-all.org/ '''AI4All''' at ai-4-all.org] ([https://ai-4-all.org/contact/ contact form] with also mailing list subscription) <ref group="contact"> | |||
* '''AI4ALL''' | |||
Mail | |||
* AI4ALL | |||
* 548 Market St | |||
* PMB 95333 | |||
* San Francisco, California 94104 | |||
* USA | |||
Contacted: | |||
* 2021-08-14 | Subscribed to mailing list | |||
</ref> funded by [[w:Google]]<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/> [https://www.linkedin.com/company/ai4allorg/ AI4All on LinkedIn.com] | |||
* [https://thefuturesociety.org/ '''The Future Society''' at thefuturesociety.org] ([https://thefuturesociety.org/contact/ contact form] with also mailing list subscription)<ref group="contact"> | |||
* '''The Future Society''' at thefuturesociety.org | |||
Contact | |||
* https://thefuturesociety.org/contact/ | |||
* No physical contact info | |||
</ref><ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/>. Their activities include policy research, educational & leadership development programs, advisory services, seminars & summits and other special projects to advance the responsible adoption of Artificial Intelligence (AI) and other emerging technologies. [https://www.linkedin.com/company/thefuturesociety/ The Future Society on LinkedIn.com] | |||
* [https://ainowinstitute.org/ '''The AI Now Institute''' at ainowinstitute.org] ([https://ainowinstitute.org/contact.html contact form] and possibility to subscribe to mailing list)<ref group="contact">
* '''The AI Now Institute''' at ainowinstitute.org
Contact | |||
* https://ainowinstitute.org/contact.html | |||
Email | |||
* info@ainowinstitute.org | |||
Contacted | |||
* 2021-08-14 | Subscribed to mailing list | |||
</ref> at [[w:New York University]]<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/>. Their work is licensed under a '''Creative Commons''' Attribution-NoDerivatives 4.0 International License. [https://www.linkedin.com/company/ai-now-institute/about/ The AI Now Institute on LinkedIn.com]
* [https://www.partnershiponai.org/ '''Partnership on AI''' at partnershiponai.org] ([https://www.partnershiponai.org/contact/ contact form])<ref group="contact"> | |||
* '''Partnership on AI''' at partnershiponai.org | |||
Contact | |||
* https://www.partnershiponai.org/contact/ | |||
Mail | |||
* Partnership on AI | |||
* 115 Sansome St, Ste 1200, | |||
* San Francisco, CA 94104 | |||
* USA | |||
</ref> is based in the USA and funded by technology companies. They provide [https://www.partnershiponai.org/resources/ resources] and have a large number of high-caliber [https://www.partnershiponai.org/partners/ partners]. See [[w:Partnership on AI]] and [https://www.linkedin.com/company/partnershipai/ Partnership on AI on LinkedIn.com] for more info.
* [https://responsiblerobotics.org/ '''The Foundation for Responsible Robotics''' at responsiblerobotics.org] ([https://responsiblerobotics.org/contact/ contact form])<ref group="contact"> | |||
* '''The Foundation for Responsible Robotics''' at responsiblerobotics.org | |||
Contact form | |||
* https://responsiblerobotics.org/contact/ | |||
Email | |||
* info@responsiblerobotics.org | |||
</ref> is based in [[w:Netherlands]].<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/> [https://www.linkedin.com/company/foundation-for-responsible-robotics/about/ The Foundation for Responsible Robotics on LinkedIn.com] | |||
* [https://ai4people.eu/ '''AI4People''' at ai4people.eu] ([https://ai4people.eu/contact-us/ contact form])<ref group="contact"> | |||
* '''AI4People''' at ai4people.eu | |||
Contact form | |||
* https://ai4people.eu/contact-us/ | |||
* No physical contact info | |||
</ref> is a multi-stakeholder forum based in [[w:Belgium]].<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/> [https://www.linkedin.com/company/ai-for-people/ AI4People on LinkedIn.com]
* [https://aiethicsinitiative.org/ '''The Ethics and Governance of Artificial Intelligence Initiative''' at aiethicsinitiative.org] is a joint project of the [https://www.media.mit.edu/ MIT Media Lab] and the [https://cyber.harvard.edu/ Harvard Berkman-Klein Center for Internet and Society] and is based in the USA.<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/> | |||
* [https://www.saidot.ai/ '''Saidot''' at saidot.ai] is a Finnish company offering a platform for AI transparency, explainability and communication.<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/> [https://www.linkedin.com/company/saidot/ Saidot on LinkedIn.com] | |||
* [https://www.eu-robotics.net/ '''euRobotics''' at eu-robotics.net] is funded by the [[w:European Commission]].<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/> | |||
* [https://www.gov.uk/government/organisations/centre-for-data-ethics-and-innovation '''Centre for Data Ethics and Innovation''' at gov.uk], part of the Department for Digital, Culture, Media & Sport, is financed by the UK government. [https://cdei.blog.gov.uk/ '''Centre for Data Ethics and Innovation Blog''' at cdei.blog.gov.uk]<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/> [https://www.linkedin.com/company/centre-for-data-ethics-innovation/ Centre for Data Ethics and Innovation on LinkedIn.com]
* [http://sigai.acm.org/ '''ACM Special Interest Group on Artificial Intelligence''' at sigai.acm.org] is a [[w:Special Interest Group]] on AI by [[w:Association for Computing Machinery|ACM]]. [http://sigai.acm.org/aimatters/blog/ '''''AI Matters: A Newsletter of ACM SIGAI'' blog''' at sigai.acm.org] and [http://sigai.acm.org/aimatters/ the newsletter that the blog gets its contents from]<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/>
* [https://ethicsinaction.ieee.org/ '''IEEE Ethics in Action - in Autonomous and Intelligent Systems''' at ethicsinaction.ieee.org] (mailing list subscription on website)<ref group="contact"> | |||
* '''IEEE Ethics in Action - in Autonomous and Intelligent Systems''' at ethicsinaction.ieee.org | |||
Email | |||
* aiopps@ieee.org | |||
</ref> | |||
* [https://www.counterhate.com/ '''The Center for Countering Digital Hate''' at counterhate.com] (subscribe to mailing list on website)<ref group="contact">
Email | |||
* info@counterhate.com | |||
Contacted | |||
* 2021-08-14 | Subscribed to mailing list | |||
</ref> is an international not-for-profit NGO that seeks to disrupt the architecture of online hate and misinformation with offices in London and Washington DC. | |||
* [https://carnegieendowment.org/specialprojects/counteringinfluenceoperations '''Partnership for Countering Influence Operations''' ('''PCIO''') at carnegieendowment.org] ([https://carnegieendowment.org/about/?fa=contact contact form])<ref group="contact"> | |||
* '''Carnegie Endowment for International Peace - Partnership for Countering Influence Operations''' ('''PCIO''') at carnegieendowment.org | |||
* Contact form https://carnegieendowment.org/about/?fa=contact | |||
Mail | |||
* Carnegie Endowment for International Peace | |||
* Partnership for Countering Influence Operations | |||
* 1779 Massachusetts Avenue NW | |||
* Washington, DC 20036-2103 | |||
* USA | |||
Phone | |||
* 1-202-483-7600 | |||
Fax | |||
* 1-202-483-1840 | |||
</ref> is a partnership by the [[w:Carnegie Endowment for International Peace]].
* [https://www.unglobalpulse.org/ '''UN Global Pulse''' at unglobalpulse.org] is the [[w:United Nations]] Secretary-General’s initiative on big data and artificial intelligence for development, humanitarian action, and peace.
* [https://www.humane-ai.eu/ '''humane-ai.eu'''] by [https://www.k4all.org/ '''Knowledge 4 All Foundation Ltd.''' at k4all.org]<ref group="contact"> | |||
*'''Knowledge 4 All Foundation Ltd.''' - https://www.k4all.org/ | |||
* Betchworth House | |||
* 57-65 Station Road | |||
* Redhill, Surrey, RH1 1DL | |||
* UK | |||
</ref> | </ref> | ||
=== Other essential developments ===
* [https://www.montrealdeclaration-responsibleai.com/ '''The Montréal Declaration for a Responsible Development of Artificial Intelligence''' at montrealdeclaration-responsibleai.com]<ref group="contact">
* '''The Montréal Declaration for a Responsible Development of Artificial Intelligence''' at montrealdeclaration-responsibleai.com
Phone
* 1-514-343-6111, ext. 29669
Email
* declaration-iaresponsable@umontreal.ca
</ref> and the same site in French: [https://www.declarationmontreal-iaresponsable.com/ '''La Déclaration de Montréal IA responsable''' at declarationmontreal-iaresponsable.com]<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/>
* [https://uniglobalunion.org/ '''UNI Global Union''' at uniglobalunion.org] is based in [[w:Nyon]], [[w:Switzerland]] and deals mainly with labor issues to do with AI and robotics.<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/> [https://www.linkedin.com/company/uni-global-union/ UNI Global Union on LinkedIn.com] | |||
* [https://cordis.europa.eu/project/id/IST-2000-26048 '''European Robotics Research Network''' at cordis.europa.eu] funded by the [[w:European Commission]].<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/> | |||
* [https://www.eu-robotics.net/ '''European Robotics Platform''' at eu-robotics.net] is funded by the [[w:European Commission]]. See [[w:European Robotics Platform]] and [[w:List of European Union robotics projects#EUROP]] for more info.<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/> | |||
=== Events against synthetic human-like fakes === | |||
* '''UPCOMING 2022''' | '''[[w:European Conference on Computer Vision]]''' in Tel Aviv, Israel
* '''UPCOMING 2021''' | '''[[w:Conference on Neural Information Processing Systems]]''' [https://neurips.cc/ '''NeurIPS 2021''' at neurips.cc], virtual in December 2021.
* '''2020 - ONGOING''' | '''[[w:National Institute of Standards and Technology]]''' ('''NIST''') ([https://www.nist.gov/ nist.gov]) ([https://www.nist.gov/about-nist/contact-us contacting NIST]) | Open Media Forensics Challenge presented in [https://www.nist.gov/itl/iad/mig/open-media-forensics-challenge '''Open Media Forensics Challenge''' at nist.gov] and [https://mfc.nist.gov/ '''Open Media Forensics Challenge''' ('''OpenMFC''') at mfc.nist.gov]<ref group="contact"> | |||
Email:
* mfc_poc@nist.gov
</ref> - ''Open Media Forensics Challenge Evaluation (OpenMFC) is an open evaluation series organized by the NIST to assess and measure the capability of media forensic algorithms and systems.''<ref>https://www.nist.gov/itl/iad/mig/open-media-forensics-challenge</ref>
* '''2021''' | '''[[w:Conference on Computer Vision and Pattern Recognition]] (CVPR)''' 2021 | [https://sites.google.com/view/mediaforensics2021 2021 Conference on Computer Vision and Pattern Recognition: ''''Workshop on Media Forensics'''' at sites.google.com], a '''June 2021''' workshop at the Conference on Computer Vision and Pattern Recognition.
* '''2020''' | [http://cvpr2020.thecvf.com/ '''CVPR''' 2020] | [https://sites.google.com/view/wmediaforensics2020/home 2020 Conference on Computer Vision and Pattern Recognition: ''''Workshop on Media Forensics'''' at sites.google.com], a '''June 2020''' workshop at the Conference on Computer Vision and Pattern Recognition.
* '''2020''' | The winners of the [https://venturebeat.com/2020/06/12/facebook-detection-challenge-winners-spot-deepfakes-with-82-accuracy/ Deepfake Detection Challenge reach 82% accuracy in detecting synthetic human-like fakes]<ref name="VentureBeat2020">https://venturebeat.com/2020/06/12/facebook-detection-challenge-winners-spot-deepfakes-with-82-accuracy/</ref>
* '''2019''' | At the annual Finnish [[w:Ministry of Defence (Finland)|w:Ministry of Defence]]'s '''Scientific Advisory Board for Defence''' ('''MATINE''') public research seminar, a research group presented their work [https://www.defmin.fi/files/4755/1315MATINE_seminaari_21.11.pdf '''''Synteettisen median tunnistus''''' at defmin.fi] (''Recognizing synthetic media''). They developed on earlier work on how to automatically detect synthetic human-like fakes and their work was funded with a grant from MATINE.
* '''2019''' | '''[[w:NeurIPS]] 2019''' | [[w:Facebook, Inc.]] [https://spectrum.ieee.org/tech-talk/artificial-intelligence/machine-learning/facebook-ai-launches-its-deepfake-detection-challenge '''"Facebook AI Launches Its Deepfake Detection Challenge"''' at spectrum.ieee.org] [[w:IEEE Spectrum]]. More reporting at [https://venturebeat.com/2019/12/11/facebook-microsoft-and-others-launch-deepfake-detection-challenge/ '''''"Facebook, Microsoft, and others launch Deepfake Detection Challenge"''''' at venturebeat.com] | |||
* '''2019''' | '''[https://cvpr2019.thecvf.com/ CVPR 2019]''' | [https://sites.google.com/view/mediaforensics2019/home '''2019''' CVPR: ''''Workshop on Media Forensics''''] | |||
* '''2017'''-'''2020''' | '''NIST''' | [https://www.nist.gov/itl/iad/mig/media-forensics-challenge NIST: ''''Media Forensics Challenge'''' ('''MFC''') at nist.gov], an iterative research challenge by the [[w:National Institute of Standards and Technology]] ([https://www.nist.gov/itl/iad/mig/media-forensics-challenge-2019-0 the evaluation criteria for the 2019 iteration are being formed]). Succeeded by the '''Open Media Forensics Challenge'''.
* '''2018''' | '''[[w:European Conference on Computer Vision|w:European Conference on Computer Vision (ECCV)]]''' [https://sites.google.com/view/wocm2018/home ECCV 2018: ''''Workshop on Objectionable Content and Misinformation'''' at sites.google.com], a workshop at the '''2018''' [[w:European Conference on Computer Vision]] in [[w:Munich]], focused on detection of objectionable content, e.g. [[w:nudity]], [[w:pornography]], [[w:violence]], [[w:hate]], [[w:Child sexual abuse|w:children exploitation]] and [[w:terrorism]] among others, and on addressing the misinformation problems that arise when people are fed [[w:disinformation]] and pass it on as misinformation. Announced topics included [[w:Outline of forensic science|w:image/video forensics]], [[w:detection]]/[[w:analysis]]/[[w:understanding]] of [[w:Counterfeit|w:fake]] images/videos, [[w:misinformation]] detection/understanding (mono-modal and [[w:Multimodality|w:multi-modal]]), adversarial technologies and detection/understanding of objectionable content.
* '''2018''' | '''NIST''' [https://www.nist.gov/itl/iad/mig/media-forensics-challenge-2018 NIST ''''Media Forensics Challenge 2018'''' at nist.gov] was the second annual evaluation to support research and help advance the state of the art for image and video forensics technologies – technologies that determine the region and type of manipulations in imagery (image/video data) and the phylogenic process that modified the imagery. | |||
* '''2017''' | '''NIST''' [https://www.nist.gov/itl/iad/mig/nimble-challenge-2017-evaluation NIST ''''Nimble Challenge 2017'''' at nist.gov] | |||
* '''2016''' | '''Nimble Challenge 2016''' - NIST released the Nimble Challenge’16 (NC2016) dataset as the MFC program kickoff dataset (where NC is the former name of MFC).<ref>https://www.nist.gov/itl/iad/mig/open-media-forensics-challenge</ref>
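The evaluations listed above all score the same basic task: a media-forensics algorithm takes an image or video and outputs a judgment of whether it has been manipulated. The sketch below is a deliberately tiny illustration of that task, not a description of any challenge-winning system; the simulated "real" and "fake" images, the noise-residual features, and all numbers in it are invented for demonstration only.

```python
# Toy media-forensics detector: logistic regression on hand-crafted
# noise-residual statistics. Purely illustrative -- actual challenge
# entries use deep networks trained on large corpora of manipulated media.
import numpy as np

rng = np.random.default_rng(0)

def noise_residual_features(images):
    """Crude forensic features: mean and variance of the high-frequency
    residual (image minus the average of its four neighbors). Synthesis
    pipelines often leave statistical traces in this residual."""
    feats = []
    for img in images:
        blurred = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
                   np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4.0
        residual = img - blurred
        feats.append([residual.mean(), residual.var()])
    return np.array(feats)

# Simulated data: "real" images are plain noise; "fake" images are
# smoothed, which suppresses the high-frequency residual.
real = [rng.normal(size=(16, 16)) for _ in range(200)]
fake = [np.convolve(rng.normal(size=256), np.ones(5) / 5, "same").reshape(16, 16)
        for _ in range(200)]
X = noise_residual_features(real + fake)
y = np.array([0] * 200 + [1] * 200)  # 0 = real, 1 = fake

# Logistic regression fitted by plain gradient descent.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = p - y
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

accuracy = (((X @ w + b) > 0) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

Real detectors replace the hand-crafted statistics with deep networks trained on large datasets of genuine and manipulated media; the point of the sketch is only the shape of the problem that OpenMFC-style evaluations measure.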
=== Studies against synthetic human-like fakes === | |||
* [https://www.cbinsights.com/research/future-of-information-warfare/ ''''Disinformation That Kills: The Expanding Battlefield Of Digital Warfare'''' at cbinsights.com], a '''2020'''-10-21 research brief on disinformation warfare by [[w:CB Insights]], a private company that provides [[w:market intelligence]] and [[w:business analytics]] services | |||
* [https://arxiv.org/abs/2001.06564 ''''Media Forensics and DeepFakes: an overview'''' at arXiv.org] [https://arxiv.org/pdf/2001.06564.pdf (as .pdf at arXiv.org)], an overview of the subject of digital look-alikes and media forensics published in August '''2020''' in [https://ieeexplore.ieee.org/xpl/tocresult.jsp?isnumber=9177372 Volume 14 Issue 5 of IEEE Journal of Selected Topics in Signal Processing]. [https://ieeexplore.ieee.org/document/9115874 ''''Media Forensics and DeepFakes: An Overview'''' at ieeexplore.ieee.org] (paywalled, free abstract)
* [https://scholarship.law.duke.edu/cgi/viewcontent.cgi?article=1333&context=dltr ''''DEEPFAKES: False pornography is here and the law cannot protect you'''' at scholarship.law.duke.edu] by Douglas Harris, published in [https://scholarship.law.duke.edu/dltr/vol17/iss1/ Duke Law & Technology Review - Volume 17 on '''2019'''-01-05] by [[w:Duke University]] [[w:Duke University School of Law]] | |||
''' Search for more ''' | |||
* [[w:Law review]] | |||
** [[w:List of law reviews in the United States]] | |||
=== Reporting against synthetic human-like fakes === | |||
* [https://news.berkeley.edu/2019/06/18/researchers-use-facial-quirks-to-unmask-deepfakes/ '''''Researchers use facial quirks to unmask ‘deepfakes’''''' at news.berkeley.edu], 2019-06-18 reporting by Kara Manke, published in the ''Politics & society, Research, Technology & engineering'' section of Berkeley News of [[w:University of California, Berkeley|w:UC Berkeley]].
=== Companies against synthetic human-like fakes === | |||
See [[resources]] for more. | |||
* '''[https://cyabra.com/ Cyabra.com]''' is an AI-based system that helps organizations guard against disinformation attacks<ref group="1st seen in" name="ReutersDisinfomation2020">https://www.reuters.com/article/us-cyber-deepfake-activist/deepfake-used-to-attack-activist-couple-shows-new-disinformation-frontier-idUSKCN24G15E</ref>. [https://www.reuters.com/article/us-cyber-deepfake-activist/deepfake-used-to-attack-activist-couple-shows-new-disinformation-frontier-idUSKCN24G15E Reuters.com reporting] from July 2020.
<section end=APW_AI-transclusion /> | |||
=== SSF! wiki proposed countermeasure to weaponized synthetic pornography: Outlaw unauthorized synthetic pornography (transcluded) === | |||
Transcluded from [[Current and possible laws and their application#Law proposal to ban visual synthetic filth|Juho's proposal for banning unauthorized synthetic pornography]] | |||
{{#section-h:Current and possible laws and their application|Law proposal to ban visual synthetic filth}} | |||
=== SSF! wiki proposed countermeasure to weaponized synthetic pornography: Adequate Porn Watcher AI (concept) (transcluded) ===
Transcluded main contents from [[Adequate Porn Watcher AI (concept)]] | |||
{{#lstx:Adequate Porn Watcher AI (concept)|See_also}} | |||
=== SSF! wiki proposed countermeasure to digital sound-alikes: Outlawing digital sound-alikes (transcluded) === | |||
Transcluded from [[Current and possible laws and their application#Law proposal to ban unauthorized modeling of human voice|Juho's proposal on banning digital sound-alikes]] | |||
{{#section-h:Current and possible laws and their application|Law proposal to ban unauthorized modeling of human voice}} | |||
== Timeline of synthetic human-like fakes == | |||
See the #SSFWIKI '''[[Mediatheque]]''' for viewing media that is, or probably is, connected to synthetic human-like fakes.
=== 2020's synthetic human-like fakes === | |||
* '''2021''' | Entertainment | The Swedish pop band [[w:ABBA]] published an album in September and will be performing shows where the music is live and real, but the visuals will be [[#Age analysis and rejuvenating and aging syntheses|rejuvenated]] [[#Digital look-alikes|digital look-alikes]] of the band members displayed to the fans with [[w:holography]] technology. ABBA used [[w:Industrial Light & Magic]] as the purveyor of technology. [[w:Industrial Light & Magic]] was acquired by [[w:The Walt Disney Company]] in 2012 as part of their acquisition of [[w:Lucasfilm]].
|last=Rosner
|first=Helen
|author-link=Helen Rosner
|date=2021-07-15
|title=A Haunting New Documentary About Anthony Bourdain
* '''2021''' | Science | [https://arxiv.org/pdf/2102.05630.pdf '''''Voice Cloning: a Multi-Speaker Text-to-Speech Synthesis Approach based on Transfer Learning''''' .pdf at arxiv.org], a paper submitted in Feb 2021 by researchers from the [[w:University of Turin]].<ref group="1st seen in" name="ConnectedPapers suggestion on Google Transfer learning 2018" />
* '''<font color="green">2020</font>''' | '''<font color="green">counter-measure</font>''' | On 2020-11-18 the [[w:Partnership on AI]] introduced the [https://incidentdatabase.ai/ '''''AI Incident Database''''' at incidentdatabase.ai].<ref name="PartnershipOnAI2020">https://www.partnershiponai.org/aiincidentdatabase/</ref>
* '''2020''' | reporting | [https://www.wired.co.uk/article/deepfake-porn-websites-videos-law "''Deepfake porn is now mainstream. And major sites are cashing in''" at wired.co.uk] by Matt Burgess. Published August 2020.
* '''2020''' | demonstration | '''[https://moondisaster.org/ Moondisaster.org]''' (full film embedded in website) project by the [https://virtuality.mit.edu/ Center for Advanced Virtuality] of the [[w:Massachusetts Institute of Technology|w:MIT]] published in July 2020, makes use of various methods of making a synthetic human-like fake. Alternative place to watch: [https://www.youtube.com/watch?v=LWLadJFI8Pk ''In Event of Moon Disaster - FULL FILM'' at youtube.com]
** [https://www.cnet.com/news/mit-releases-deepfake-video-of-nixon-announcing-nasa-apollo-11-disaster/ Cnet.com July 2020 reporting ''MIT releases deepfake video of 'Nixon' announcing NASA Apollo 11 disaster''] | |||
* '''2020''' | US state law | {{#lst:Current and possible laws and their application|California2020}} | |||
* '''2020''' | Chinese legislation | {{#lst:Current and possible laws and their application|China2020}} | |||
=== 2010's synthetic human-like fakes === | |||
* '''2019''' | demonstration | In September 2019 [[w:Yle]], the Finnish [[w:public broadcasting company]], aired a result of experimental [[w:journalism]], [https://yle.fi/uutiset/3-10955498 '''a deepfake of the President in office'''] [[w:Sauli Niinistö]] in its main news broadcast for the purpose of highlighting the advancing disinformation technology and problems that arise from it.
* '''2019''' | US state law | {{#lst:Current and possible laws and their application|Texas2019}}
* '''2019''' | US state law | {{#lst:Current and possible laws and their application|Virginia2019}}
* '''2019''' | Science | [https://arxiv.org/pdf/1809.10460.pdf '''''Sample Efficient Adaptive Text-to-Speech''''' .pdf at arxiv.org], a 2019 paper from Google researchers, published as a conference paper at the [[w:International Conference on Learning Representations]] (ICLR)<ref group="1st seen in" name="ConnectedPapers suggestion on Google Transfer learning 2018"> https://www.connectedpapers.com/main/8fc09dfcff78ac9057ff0834a83d23eb38ca198a/Transfer-Learning-from-Speaker-Verification-to-Multispeaker-TextToSpeech-Synthesis/graph</ref>
* '''2019''' | science and demonstration | [https://arxiv.org/pdf/1905.09773.pdf ''''Speech2Face: Learning the Face Behind a Voice'''' at arXiv.org], a system for generating likely facial features based on the voice of a person, presented by the [[w:MIT Computer Science and Artificial Intelligence Laboratory]] at the 2019 [[w:Conference on Computer Vision and Pattern Recognition|w:CVPR]]. [https://github.com/saiteja-talluri/Speech2Face Speech2Face at github.com] This may develop into something that really causes problems. [https://neurohive.io/en/news/speech2face-neural-network-predicts-the-face-behind-a-voice/ "Speech2Face: Neural Network Predicts the Face Behind a Voice" reporting at neurohive.io], [https://belitsoft.com/speech-recognition-software-development/speech2face "Speech2Face Sees Voices and Hears Faces: Dreams Come True with AI" reporting at belitsoft.com]
* '''<font color="red">2018</font>''' | <font color="red">science</font> and <font color="red">demonstration</font> | The work [http://papers.nips.cc/paper/7700-transfer-learning-from-speaker-verification-to-multispeaker-text-to-speech-synthesis ''''Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis''''] ([https://arxiv.org/abs/1806.04558 at arXiv.org]) was presented at the 2018 [[w:Conference on Neural Information Processing Systems]] ('''NeurIPS'''). The pre-trained model is able to steal voices from a sample of only '''5 seconds''' with almost convincing results.
* '''2018''' | demonstration | At the 2018 [[w:World Internet Conference]] in [[w:Wuzhen]] the [[w:Xinhua News Agency]] presented two digital look-alikes made to the resemblance of its real news anchors Qiu Hao ([[w:Chinese language]])<ref name="TheGuardian2018">
* '''2013''' | demonstration | A '''[https://ict.usc.edu/pubs/Scanning%20and%20Printing%20a%203D%20Portrait%20of%20President%20Barack%20Obama.pdf 'Scanning and Printing a 3D Portrait of President Barack Obama' at ict.usc.edu]'''. A 7D model and a 3D bust were made of President Obama with his consent. Relevancy: <font color="green">'''certain'''</font>
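The 2018 NeurIPS transfer-learning result above rests on a three-stage pipeline described in that paper: a speaker-verification encoder compresses a short voice sample into a fixed-size embedding, a sequence-to-sequence synthesizer generates a spectrogram conditioned on that embedding and the target text, and a neural vocoder renders the spectrogram as audio. The sketch below shows only that data flow; every "model" in it is a stub, and all dimensions, helper functions and numbers are illustrative assumptions, not the paper's actual networks.

```python
# Structural sketch of the 2018 "Transfer Learning from Speaker
# Verification to Multispeaker Text-To-Speech Synthesis" pipeline.
# The three stages are real; the tiny stub "models" below are not --
# they only show how data flows between the stages.
import numpy as np

EMBED_DIM = 256   # speaker-embedding size (illustrative)
N_MELS = 80       # mel-spectrogram channels (a common choice)

def speaker_encoder(waveform: np.ndarray) -> np.ndarray:
    """Stub for the speaker-verification network: maps a short
    enrollment sample (even ~5 s) to a fixed-size embedding."""
    # Stand-in: random projection of crude per-frame statistics.
    frames = waveform[: len(waveform) // 160 * 160].reshape(-1, 160)
    stats = np.concatenate([frames.mean(1), frames.std(1)])
    proj = np.random.default_rng(0).normal(size=(EMBED_DIM, len(stats)))
    embedding = proj @ stats
    return embedding / np.linalg.norm(embedding)  # unit-norm embedding

def synthesizer(text: str, embedding: np.ndarray) -> np.ndarray:
    """Stub for the sequence-to-sequence synthesizer: text plus speaker
    embedding -> mel spectrogram (frames x mel channels)."""
    n_frames = 10 * len(text)  # pretend each character yields 10 frames
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    return rng.normal(size=(n_frames, N_MELS)) + embedding[:N_MELS]

def vocoder(mel: np.ndarray) -> np.ndarray:
    """Stub for the neural vocoder: spectrogram -> waveform
    (here just 200 samples per spectrogram frame)."""
    return np.repeat(mel.mean(1), 200)

# Pipeline: 5 seconds of "enrollment" audio at 16 kHz feeds only the
# encoder stage; the synthesizer and vocoder never see the raw sample.
enrollment = np.random.default_rng(1).normal(size=5 * 16000)
embedding = speaker_encoder(enrollment)
mel = synthesizer("hello world", embedding)
audio = vocoder(mel)
print(embedding.shape, mel.shape, audio.shape)
```

The structural point the sketch illustrates is what makes 5-second voice theft possible: only the small encoder ever sees the target's audio, so the heavy synthesizer and vocoder can be trained once on many speakers and reused unmodified for a new voice.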
=== 2000's synthetic human-like fakes ===
* '''2010''' | movie | [[w:Walt Disney Pictures]] released a sci-fi sequel entitled ''[[w:Tron: Legacy]]'' with a digitally rejuvenated digital look-alike made of the actor [[w:Jeff Bridges]] playing the [[w:antagonist]] [[w:List of Tron characters#CLU|w:CLU]]. | * '''2010''' | movie | [[w:Walt Disney Pictures]] released a sci-fi sequel entitled ''[[w:Tron: Legacy]]'' with a digitally rejuvenated digital look-alike made of the actor [[w:Jeff Bridges]] playing the [[w:antagonist]] [[w:List of Tron characters#CLU|w:CLU]]. | ||
* '''2002''' | music video | '''[https://www.youtube.com/watch?v=3qIXIHAmcKU 'Bullet' by Covenant on YouTube]''' by [[w:Covenant (band)]] from their album [[w:Northern Light (Covenant album)]]. Relevancy: Contains the best upper-torso digital look-alike of Eskil Simonsson (vocalist) that their organization could procure at the time. Here you can observe the '''classic "''skin looks like cardboard''"-bug''' (assuming this was not intended) that '''thwarted efforts to''' make digital look-alikes that '''pass human testing''' before the '''reflectance capture and dissection in 1999''' by [[w:Paul Debevec]] et al. at the [[w:University of Southern California]] and the subsequent development of the '''"Analytical [[w:bidirectional reflectance distribution function|w:BRDF]]"''' (quote-unquote) by ESC Entertainment, a company set up for the '''sole purpose''' of '''making the cinematography''' for the 2003 films ''Matrix Reloaded'' and ''Matrix Revolutions'' '''possible''', led by George Borshukov.
=== 1990's synthetic human-like fakes ===
[[File:Institute for Creative Technologies (logo).jpg|thumb|left|156px|Logo of the '''[[w:Institute for Creative Technologies]]''' founded in 1999 in the [[w:University of Southern California]] by the [[w:United States Army]]]]
* <font color="red">'''1999'''</font> | <font color="red">'''institute founded'''</font> | The '''[[w:Institute for Creative Technologies]]''' was founded by the [[w:United States Army]] in the [[w:University of Southern California]]. It collaborates with the [[w:United States Army Futures Command]], [[w:United States Army Combat Capabilities Development Command]], [[w:Combat Capabilities Development Command Soldier Center]] and the [[w:United States Army Research Laboratory]].<ref name="ICT-about">https://ict.usc.edu/about/</ref> In 2016 [[w:Hao Li]] was appointed to direct the institute.
* '''1994''' | movie | [[w:The Crow (1994 film)]] was the first film production to make use of [[w:digital compositing]] of a computer-simulated representation of a face onto scenes filmed using a [[w:body double]]. Necessity was the muse, as the actor [[w:Brandon Lee]], portraying the protagonist, was accidentally killed on set during filming.
=== 1970's synthetic human-like fakes ===
{{#ev:vimeo|16292363|480px|right|''[[w:A Computer Animated Hand|w:A Computer Animated Hand]]'' is a 1972 short film by [[w:Edwin Catmull]] and [[w:Fred Parke]]. This was the first time that [[w:computer-generated imagery]] was used in film to animate likenesses of moving human appearance.}}
* '''1971''' | science | '''[https://interstices.info/images-de-synthese-palme-de-la-longevite-pour-lombrage-de-gouraud/ 'Images de synthèse : palme de la longévité pour l’ombrage de Gouraud' (still photos)]'''. [[w:Henri Gouraud (computer scientist)]] made the first [[w:Computer graphics]] [[w:geometry]] [[w:digitization]] and representation of a human face. The model was his wife, Sylvie Gouraud. The 3D model was a simple [[w:wire-frame model]] and he applied [[w:Gouraud shading]] to produce the '''first known representation''' of '''human-likeness''' on computer.<ref>{{cite web|title=Images de synthèse : palme de la longévité pour l'ombrage de Gouraud|url=http://interstices.info/jcms/c_25256/images-de-synthese-palme-de-la-longevite-pour-lombrage-de-gouraud}}</ref>
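The core of Gouraud's 1971 technique is simple enough to sketch in a few lines: a lighting intensity is computed once per vertex, then linearly interpolated across each polygon, hiding the flat-faceted look of a wire-frame mesh. Below is a minimal, hedged illustration in Python with NumPy; the vertex normals and light direction are made-up example data, not anything from Gouraud's face model:

```python
import numpy as np

def lambert(normal, light_dir):
    """Per-vertex Lambertian intensity: max(0, n·l) with unit vectors."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    return max(0.0, float(np.dot(n, l)))

def gouraud_shade(bary, vertex_intensities):
    """Gouraud shading: linearly interpolate the three vertex
    intensities across the triangle via barycentric coordinates."""
    return float(np.dot(bary, vertex_intensities))

# Illustrative data: one triangle, light shining along +z.
light = np.array([0.0, 0.0, 1.0])
normals = [np.array([0.0, 0.0, 1.0]),
           np.array([1.0, 0.0, 1.0]),
           np.array([0.0, 1.0, 1.0])]

# Intensity is evaluated once per vertex...
iv = np.array([lambert(n, light) for n in normals])
# ...then every interior pixel only interpolates, e.g. the centroid:
center = gouraud_shade(np.array([1/3, 1/3, 1/3]), iv)
```

The design point is economy: per-pixel work is a weighted sum, which is why the method was practical on 1971 hardware (and why it smooths intensity but cannot produce sharp specular highlights inside a polygon).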
=== 1960's synthetic human-like fakes ===
* '''1961''' | demonstration | The first singing by a computer was performed by an [[w:IBM 704]] and the song was [[w:Daisy Bell]], written in 1892 by British songwriter [[w:Harry Dacre]]. Go to [[Mediatheque#1961]] to view.
=== 1930's synthetic human-like fakes ===
[[File:Homer Dudley (October 1940). "The Carrier Nature of Speech". Bell System Technical Journal, XIX(4);495-515. -- Fig.5 The voder being demonstrated at the New York World's Fair.jpg|thumb|left|300px|'''[[w:Voder]]''' demonstration pavilion at the [[w:1939 New York World's Fair]]]]
* '''1939''' | demonstration | '''[[w:Voder]]''' (''Voice Operating Demonstrator'') from the [[w:Bell Labs|w:Bell Telephone Laboratory]] was the first time that [[w:speech synthesis]] was done electronically by breaking speech down into its acoustic components. It was invented by [[w:Homer Dudley]] in 1937–1938 and built on his earlier work on the [[w:vocoder]]. (Wikipedia)
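The Voder's principle, an excitation source shaped by operator-controlled resonant filters, survives today as source-filter speech synthesis. The following is a toy sketch of that idea in Python with NumPy/SciPy, not a reconstruction of the Voder itself; the formant frequencies and bandwidths are illustrative textbook-style values, not Voder specifications:

```python
import numpy as np
from scipy.signal import lfilter

def resonator(freq, bandwidth, fs):
    """Two-pole resonator coefficients: a crude formant filter."""
    r = np.exp(-np.pi * bandwidth / fs)        # pole radius from bandwidth
    theta = 2.0 * np.pi * freq / fs            # pole angle from center frequency
    a = [1.0, -2.0 * r * np.cos(theta), r * r]
    b = [1.0 - r]                              # rough gain normalization
    return b, a

def toy_vowel(f0=110.0, formants=((730.0, 90.0), (1090.0, 110.0)),
              fs=8000, dur=0.2):
    """Source-filter synthesis: a buzz source shaped by formant resonators."""
    n = int(fs * dur)
    t = np.arange(n) / fs
    source = np.sign(np.sin(2.0 * np.pi * f0 * t))  # crude glottal 'buzz'
    out = source
    for freq, bw in formants:                        # cascade the resonators
        b, a = resonator(freq, bw, fs)
        out = lfilter(b, a, out)
    return out / (np.max(np.abs(out)) + 1e-12)       # normalize to [-1, 1]

samples = toy_vowel()
```

A Voder operator did with keys and a wrist bar what the loop above does in code: selected the excitation and steered the resonances in real time, which is why skilled operators took about a year to train.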
=== 1770's synthetic human-like fakes ===
[[File:Kempelen Speakingmachine.JPG|right|thumb|300px|A replica of [[w:Wolfgang von Kempelen]]'s [[w:Wolfgang von Kempelen's Speaking Machine]], built 2007–09 at the Department of [[w:Phonetics]], [[w:Saarland University]], [[w:Saarbrücken]], Germany. This machine added models of the tongue and lips, enabling it to produce [[w:consonant]]s as well as [[w:vowel]]s]]
----
== Footnotes ==
<references group="footnote" />
== 1st seen in ==
<references group="1st seen in" />
== Contact information of organizations ==
<references group="contact" />
== References ==
<references />