Page history
24 July 2020
+ embedded 'In Event of Moon Disaster - FULL FILM'
+275
→2020's synthetic human-like fakes: + 2020 | demonstration | In Event of Moon Disaster - FULL FILM at youtube.com by the moondisaster.org project of the MIT Center for Advanced Virtuality, which presents a synthetic human-like fake in the appearance and almost in the sound of Nixon.
+618
23 July 2020
→2010's synthetic human-like fakes: + "Speech2Face: Neural Network Predicts the Face Behind a Voice" reporting at neurohive.io + "Speech2Face Sees Voices and Hears Faces: Dreams Come True with AI" reporting at belitsoft.com
+352
→2010's synthetic human-like fakes: + Speech2Face at github.com
+106
22 July 2020
+ === 2020's synthetic human-like fakes ===
+43
→2010's synthetic human-like fakes: + 2019 | science and demonstration | 'Speech2Face: Learning the Face Behind a Voice' at arXiv.org, a system for generating likely facial features based on the voice of a person, presented by the w:MIT Computer Science and Artificial Intelligence Laboratory at the 2019 CVPR. This may develop into something that really causes problems.
+449
fix
m+1
→Examples of speech synthesis software not quite able to fool a human yet: + https://papers.nips.cc/paper/8206-neural-voice-cloning-with-a-few-samples '''Neural Voice Cloning with a Few Samples''' at papers.nips.cc, w:Baidu Research's shot at a sound-like-anyone-machine did not convince in '''2018'''
+233
→Countermeasures against synthetic human-like fakes: + File:Connie Leyva 2015.jpg
+433
→2010's synthetic human-like fakes: + File:Marc Berman.jpg
+530
mv content around unchanged
+7
→Reporting on the sound-like-anyone-machines: + '''"An artificial-intelligence first: Voice-mimicking software reportedly used in a major theft"''' at washingtonpost.com documents a w:fraud committed with a digital sound-like-anyone-machine, July 2019 reporting.<ref name="WaPo2019">
+807
→Reporting on the sound-like-anyone-machines: + https://www.bbc.com/news/technology-48908736 Fake voices ''''help cyber-crooks steal cash'''' at bbc.com <ref name="BBC2019">
+604
→Digital sound-alikes: + link to "Artificial Intelligence Can Now Copy Your Voice: What Does That Mean For Humans?" May 2019 reporting at forbes.com on w:Baidu Research's attempt at a sound-like-anyone-machine demonstrated at the 2018 w:NeurIPS conference.
+443
→Text synthesis: + [https://analyticssteps.com/blogs/detection-fake-and-false-news-text-analysis-approaches-and-cnn-deep-learning-model
+308
→Events against synthetic human-like fakes: + w:Facebook, Inc.
+36
→Events against synthetic human-like fakes: + '''2019''' at the '''NeurIPS''' "Facebook AI Launches Its Deepfake Detection Challenge" at spectrum.ieee.org
+306
fmt
m−65
→Countermeasures against synthetic human-like fakes: + link to faculty staff page at the National Center for Media Forensics
+127
fmt + California SB 564 read to senate in Feb 2019
+18
→Organizations against synthetic human-like fakes: fmt link
m+67
→Organizations against synthetic human-like fakes: + '''w:SAG-AFTRA''' endorsed California Senate Bill SB 564, introduced in the w:California State Senate by w:California State Senator Connie Leyva.
+410
→Studies against synthetic human-like fakes: + ''' Search for more ''' + w:Law review + w:List of law reviews in the United States
+94
→Countermeasures against synthetic human-like fakes: + added subheadings
+144
→Countermeasures against synthetic human-like fakes: fmt
m+205
→Countermeasures against synthetic human-like fakes: + 'DEEPFAKES: False pornography is here and the law cannot protect you' at scholarship.law.duke.edu by Douglas Harris, published in Duke Law & Technology Review - Volume 17 on 2019-01-05 by Duke University School of Law
+422
→Countermeasures against synthetic human-like fakes: starting on reverse chronologization of the '''Events'''
+369
→Countermeasures against synthetic human-like fakes: + Archive.org's first crawl of SemaFor was in November 2019, the same time as the application period for the grants at grants.gov ended
+405
21 July 2020
20 July 2020
moved a few pics to the left side
m−2
→Timeline of synthetic human-like fakes: tweak subheadings + minor fmt
m+150
more precise wording regarding the National Center for Media Forensics
m−23
→Countermeasures against synthetic human-like fakes: + Archive.org first crawled [MediFor] homepage in June 2016 + <ref name="IA-MediFor-2016">
+176
→Countermeasures against synthetic human-like fakes
+255
→Countermeasures against synthetic human-like fakes: + https://researchfunding.duke.edu/semantic-forensics-semafor
+207
→Text synthesis: + ''''OpenAI’s latest AI text generator GPT-3 amazes early adopters'''' at siliconangle.com '''July 2020''' reporting on GPT-3
+230
→Text synthesis: + '''OpenAI releases the full version of GPT-2''' at openai.com in '''August 2019'''
+138
→Text synthesis: + ''''OpenAI releases curtailed version of GPT-2 language model'''' at venturebeat.com
+309
→Text synthesis: + abbreviation GPT for "Generative Pre-trained Transformer"
m+12
→Text synthesis: + w:OpenAI's Generative Pre-trained Transformer is a left-to-right transformer-based text generation model succeeded by GPT-2 and GPT-3
+269
+ == Text synthesis == + w:Chatbots + w:natural language processing + w:natural-language understanding + w:natural-language generation
+284
fmt caption
m+12
+ == 1st seen in == + <references group="1st seen in" />
+54
moved some content under == Timeline of synthetic human-like fakes ==
+17
→2000's: Bringing the link to the video Paul Debevec: ''''Animating a photo-realistic face'''' at ted.com out of the ref and onto the text
+615
→1990's: fixing a redirected Wikipedia link
m+11
→Timeline of synthetic human-like fakes: 1999 + The '''w:Institute for Creative Technologies''' was founded by the w:US Army at the w:University of Southern California. It collaborates with the w:United States Army Futures Command, w:United States Army Combat Capabilities Development Command, w:Combat Capabilities Development Command Soldier Center and w:United States Army Research Laboratory + <ref name="ICT-about">
+574
moving the countermeasures Adequate Porn Watcher AI and the possible legal response of outlawing the possession of models of other people's voices without permission under
+67
→Countermeasures against synthetic human-like fakes: + '''Semantic Forensics grant opportunity''' (closed Nov 2019) at grants.gov
+147
→Countermeasures against synthetic human-like fakes: + DARPA program: 'Semantic Forensics (SemaFor) at darpa.mil aims to counter synthetic disinformation by developing systems for detecting semantic inconsistencies in forged media. They state that they hope to create technologies that "will help identify, deter, and understand adversary disinformation campaigns".
+379