Synthetic human-like fakes
When the camera does not exist, but a simulation of a (movie) camera produces imagery of a subject that deceives the watcher into believing it is some living or dead person, the result is a digital look-alike.
When it cannot be determined by human testing or media forensics whether a recording is a synthetic fake of some person's voice or an actual recording of that person's real voice, it is a pre-recorded digital sound-alike.
Digital look-alikes
Introduction to digital look-alikes
In the cinemas we have seen digital look-alikes for over 15 years. These digital look-alikes wear "clothing" (a simulation of clothing is not clothing) or "superhero costumes" and "superbaddie costumes", and they need not care about the laws of physics, let alone the laws of physiology. It is generally accepted that digital look-alikes made their public debut in the sequels of The Matrix, i.e. w:The Matrix Reloaded and w:The Matrix Revolutions, released in 2003. It can be considered almost certain that it was not possible to make these before the year 1999, as the final piece of the puzzle needed to make a (still) digital look-alike that passes human testing, the reflectance capture over the human face, was achieved for the first time in 1999 at the w:University of Southern California and was presented to the crème de la crème of the computer graphics field at their annual gathering, SIGGRAPH 2000.[1]
“Do you think that was w:Hugo Weaving's left cheekbone that w:Keanu Reeves punched in with his right fist?”
The problems with digital look-alikes
Most unfortunately for humankind, organized criminal leagues that possess the weapons capability of making believable-looking synthetic pornography are producing synthetic terror porn[footnote 1] on industrial production pipelines, animating digital look-alikes and distributing the results on the murky Internet in exchange for money stacks that are getting thinner and thinner as time goes by.
These industrially produced pornographic delusions are causing great human suffering, especially to their direct victims, but they are also tearing our communities and societies apart, sowing blind rage and perceptions of deepening chaos, feeding feelings of powerlessness and provoking violence. This hate illustration increases and strengthens hate thinking, hate speech and hate crimes, tears our fragile social constructions apart and with time perverts humankind's view of humankind into an almost unrecognizable shape, unless we interfere with resolve.
List of possible naked digital look-alike attacks
- The classic "portrayal of as if in involuntary sex"-attack. (Digital look-alike "cries")
- "Sexual preference alteration"-attack. (Digital look-alike "smiles")
- "Cutting / beating"-attack (Constructs a deceptive history for genuine scars)
- "Mutilation"-attack (Digital look-alike "dies")
- "Unconscious and injected"-attack (Digital look-alike gets "disease")
Age analysis and rejuvenating and aging syntheses
- 'An Overview of Two Age Synthesis and Estimation Techniques' at arxiv.org (.pdf), submitted for review on 2020-01-26
- 'Dual Reference Age Synthesis' at sciencedirect.com (preprint at arxiv.org) published on 2020-10-21 in w:Neurocomputing (journal)
- 'A simple automatic facial aging/rejuvenating synthesis method' at ieeexplore.ieee.org read free at researchgate.net, published at the proceedings of the 2011 IEEE International Conference on Systems, Man and Cybernetics
- 'Age Synthesis and Estimation via Faces: A Survey' at ieeexplore.ieee.org (paywall), read free at researchgate.net, published November 2010
Temporal limit of digital look-alikes
w:History of film technology has information about where the border is.
Digital look-alikes cannot be used to attack people who lived before the technological invention of film. For moving pictures the breakthrough is attributed to w:Auguste and Louis Lumière's w:Cinematograph, premiered in Paris on 28 December 1895, though this was only the commercial and popular breakthrough, as even earlier moving pictures exist. (adapted from w:History of film)
The w:Kinetoscope is an even earlier motion picture exhibition device. A prototype for the Kinetoscope was shown to a convention of the National Federation of Women's Clubs on May 20, 1891.[2] The first public demonstration of the Kinetoscope was held at the Brooklyn Institute of Arts and Sciences on May 9, 1893. (Wikipedia)[2]
Digital sound-alikes
Living people can defend[footnote 2] themselves against a digital sound-alike by denying the things the digital sound-alike says, if those are presented to the target, but dead people cannot. Digital sound-alikes offer criminals new disinformation attack vectors and wreak havoc on provability.
For these reasons the bannable raw materials, i.e. covert voice models, should be prohibited by law in order to protect humans from abuse by criminal parties.
Documented digital sound-alike attacks
- Sound-like-anyone technology has found its way into the hands of criminals: in 2019 Symantec researchers knew of 3 cases where the technology had been used for w:crime
- "Fake voices 'help cyber-crooks steal cash'" at bbc.com July 2019 reporting [3]
- "An artificial-intelligence first: Voice-mimicking software reportedly used in a major theft" at washingtonpost.com documents a w:fraud committed with digital sound-like-anyone-machine, July 2019 reporting.[4]
'Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis' 2018 by Google Research (external transclusion)
- In 2018 at the w:Conference on Neural Information Processing Systems (NeurIPS) the work 'Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis' (at arXiv.org) was presented. The pre-trained model is able to steal voices from a sample of only 5 seconds with almost convincing results
Observe how good the "VCTK p240" system is at deceiving the listener into thinking that a real person is doing the talking.
The Iframe above is transcluded from 'Audio samples from "Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis"' at google.github.io, the audio samples of a sound-like-anyone machine presented at the 2018 w:NeurIPS conference by Google researchers.
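The pipeline behind such sound-like-anyone machines has three stages: a speaker encoder trained for speaker verification squeezes a short reference clip into a fixed-size voice embedding, a synthesizer generates a mel spectrogram conditioned on that embedding and the target text, and a vocoder turns the spectrogram into a waveform. Below is a minimal Python sketch of this flow; the class names and shapes are hypothetical stand-ins for illustration, not the API of the paper's implementation.

```python
# Hypothetical sketch of the three-stage voice-cloning pipeline from the
# NeurIPS 2018 paper; all names and shapes are illustrative placeholders.
import numpy as np

class SpeakerEncoder:
    """Maps a short reference clip to a fixed-size voice embedding."""
    def embed(self, waveform: np.ndarray) -> np.ndarray:
        # A real encoder is a network trained for speaker verification;
        # here we just return a dummy unit-norm vector.
        v = np.random.default_rng(0).standard_normal(256)
        return v / np.linalg.norm(v)

class Synthesizer:
    """Generates a mel spectrogram from text, conditioned on the embedding."""
    def synthesize(self, text: str, speaker_embedding: np.ndarray) -> np.ndarray:
        n_frames = 20 * len(text)        # placeholder duration model
        return np.zeros((80, n_frames))  # 80 mel bands of silence here

class Vocoder:
    """Turns a mel spectrogram into an audio waveform."""
    def infer(self, mel: np.ndarray) -> np.ndarray:
        return np.zeros(mel.shape[1] * 256)  # placeholder waveform

# Five seconds of reference audio suffice for the embedding step.
reference_clip = np.zeros(5 * 22050)
embedding = SpeakerEncoder().embed(reference_clip)
mel = Synthesizer().synthesize("Hello, this is not really me.", embedding)
waveform = Vocoder().infer(mel)
```

The key point is that only the tiny embedding carries the victim's voice identity; the synthesizer and vocoder are trained once on many speakers and reused for any new voice.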
Digital sing-alikes
The video 'This AI Clones Your Voice After Listening for 5 Seconds' by '2 minute papers' at YouTube describes the voice thieving machine presented by Google Research at w:NeurIPS 2018.
As of 2020 digital sing-alikes may not yet be here, but when we hear a faked singing voice and cannot tell that it is fake, then we will know they have arrived. An ability to sing does not seem to add much hostile capability compared to the ability to thieve spoken word.
- 'Fast and High-Quality Singing Voice Synthesis System based on Convolutional Neural Networks' at arxiv.org, a 2019 singing voice synthesis technique using w:convolutional neural networks (CNN). Accepted into the 2020 International Conference on Acoustics, Speech, and Signal Processing (ICASSP).
- 'State of art of real-time singing voice synthesis' at compmus.ime.usp.br presented at the 2019 17th Brazilian Symposium on Computer Music
- 'Synthesis and expressive transformation of singing voice' at theses.fr as .pdf a 2017 doctorate thesis by Luc Ardaillon
- 'Synthesis of the Singing Voice by Performance Sampling and Spectral Models' at mtg.upf.edu, a 2007 journal article in the w:IEEE Signal Processing Society's Signal Processing Magazine
- 'Speech-to-Singing Synthesis: Converting Speaking Voices to Singing Voices by Controlling Acoustic Features Unique to Singing Voices' at researchgate.net, a November 2007 paper published in the IEEE conference on Applications of Signal Processing to Audio and Acoustics
Example of a hypothetical 4-victim digital sound-alike attack
A very simple example of a digital sound-alike attack is as follows:
Someone uses a digital sound-alike to call somebody's voicemail from an unknown number and to speak, for example, illegal threats. In this example there are at least two victims, arguably four:
- Victim #1 - The person whose voice has been stolen into a covert model and a digital sound-alike made from it to frame them for crimes
- Victim #2 - The person to whom the illegal threat is presented in a recorded form by a digital sound-alike that deceptively sounds like victim #1
- Victim #3 - It could also be viewed that victim #3 is our law enforcement systems as they are put to chase after and interrogate the innocent victim #1
- Victim #4 - Our judiciary which prosecutes and possibly convicts the innocent victim #1.
Thus it is high time to act and to criminalize the covert modeling of human voice!
Examples of speech synthesis software not quite able to fool a human yet
There are some other contenders for creating digital sound-alikes, though as of 2019 their speech synthesis in most use scenarios does not yet fool a human, because the results contain tell-tale signs that give them away as a speech synthesizer.
- Lyrebird.ai (listen)
- CandyVoice.com (test with your choice of text)
- Merlin, a w:neural network based speech synthesis system by the Centre for Speech Technology Research at the w:University of Edinburgh
- 'Neural Voice Cloning with a Few Samples' at papers.nips.cc, w:Baidu Research's shot at a sound-like-anyone-machine, did not convince in 2018
Reporting on the sound-like-anyone-machines
- "Artificial Intelligence Can Now Copy Your Voice: What Does That Mean For Humans?" May 2019 reporting at forbes.com on w:Baidu Research'es attempt at the sound-like-anyone-machine demonstrated at the 2018 w:NeurIPS conference.
Temporal limit of digital sound-alikes
The temporal limit of whom, dead or living, the digital sound-alikes can attack is defined by the w:history of sound recording.
The article starts by mentioning that the invention of the w:phonograph by w:Thomas Edison in 1877 is considered the start of sound recording.
The phonautograph is the earliest known device for recording w:sound. Previously, tracings had been obtained of the sound-producing vibratory motions of w:tuning forks and other objects by physical contact with them, but not of actual sound waves as they propagated through air or other media. Invented by Frenchman w:Édouard-Léon Scott de Martinville, it was patented on March 25, 1857.[5]
Apparently, it did not occur to anyone before the 1870s that the recordings, called phonautograms, contained enough information about the sound that they could, in theory, be used to recreate it. Because the phonautogram tracing was an insubstantial two-dimensional line, direct physical playback was impossible in any case. Several phonautograms recorded before 1861 were successfully played as sound in 2008 by optically scanning them and using a computer to process the scans into digital audio files. (Wikipedia)
Text syntheses
w:Chatbots have existed for a long time, but only now, armed with AI, are they becoming truly deceptive.
In w:natural language processing, development in w:natural-language understanding leads to more cunning w:natural-language generation AI.
w:OpenAI's w:Generative Pre-trained Transformer (GPT) is a left-to-right w:transformer (machine learning model)-based text generation model, succeeded by w:GPT-2 and w:GPT-3.
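As a concrete illustration of left-to-right generation, the sketch below samples a continuation from the publicly released GPT-2 weights via the Hugging Face transformers library; the prompt and sampling parameters are arbitrary choices for demonstration, not anything prescribed by OpenAI.

```python
# Minimal sketch: autoregressive (left-to-right) text generation with GPT-2.
# Requires: pip install transformers torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The trouble with synthetic media is"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Each new token is sampled conditioned only on the tokens to its left.
output = model.generate(
    input_ids,
    max_length=60,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```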
Reporting / announcements
- 'A college kid’s fake, AI-generated blog fooled tens of thousands. This is how he made it.' at technologyreview.com August 2020 reporting in the w:MIT Technology Review by Karen Hao about GPT-3.
- 'OpenAI’s latest AI text generator GPT-3 amazes early adopters' at siliconangle.com July 2020 reporting on GPT-3
- OpenAI releases the full version of GPT-2 at openai.com in November 2019
- 'OpenAI releases curtailed version of GPT-2 language model' at venturebeat.com, August 2019 reporting on the release of a curtailed version of GPT-2
External links
- "Detection of Fake and False News (Text Analysis): Approaches and CNN as Deep Learning Model" at analyticsteps.com, a 2019 summmary written by Shubham Panth.
Countermeasures against synthetic human-like fakes
Organizations against synthetic human-like fakes
- w:DARPA (darpa.mil, contact form[contact 1]) program: 'Media Forensics' (MediFor) at darpa.mil aims to develop technologies for the automated assessment of the integrity of an image or video and to integrate these in an end-to-end media forensics platform. Archive.org first crawled their homepage in June 2016.[6]
- DARPA program: 'Semantic Forensics' (SemaFor) at darpa.mil aims to counter synthetic disinformation by developing systems for detecting semantic inconsistencies in forged media. They state that they hope to create technologies that "will help identify, deter, and understand adversary disinformation campaigns". More information at w:Duke University's Research Funding database: Semantic Forensics (SemaFor) at researchfunding.duke.edu and some at Semantic Forensics grant opportunity (closed Nov 2019) at grants.gov. Archive.org first crawled their website in November 2019.[7]
- w:University of Colorado Denver's College of Arts & Media[contact 2] is the home of the National Center for Media Forensics at artsandmedia.ucdenver.edu, which offers a Master's degree program, training courses and scientific basic and applied research. Faculty staff at the NCMF
- Media Forensics Hub at clemson.edu[contact 3] at the Watt Family Innovation Center of w:Clemson University aims to promote multi-disciplinary research and to collect and facilitate discussion and ideation of challenges and solutions. They provide resources, research and media forensics education, and are running a Working Group on disinformation.[contact 4]
- The WITNESS Media Lab at lab.witness.org by w:Witness (organization) (contact form[contact 5]), a human rights non-profit organization based out of Brooklyn, New York, has been actively working against synthetic filth since 2018. They work both in awareness raising as well as media forensics.
- Open-source intelligence digital forensics - How do we work together to detect AI-manipulated media? at lab.witness.org. "In February 2019 WITNESS in association with w:George Washington University brought together a group of leading researchers in media forensics and w:detection of w:deepfakes and other w:media manipulation with leading experts in social newsgathering, w:User-generated content and w:open-source intelligence (w:OSINT) verification and w:fact-checking." (website)
- Prepare, Don’t Panic: Synthetic Media and Deepfakes at lab.witness.org is a summary page for WITNESS Media Lab's ongoing work against synthetic human-like fakes. Their work was launched in 2018 with the first multi-disciplinary convening around deepfakes preparedness, which led to the writing of the report “Mal-uses of AI-generated Synthetic Media and Deepfakes: Pragmatic Solutions Discovery Convening” (dated 2018-06-11). Deepfakes and Synthetic Media: What should we fear? What can we do? at blog.witness.org
- Screen Actors Guild - American Federation of Television and Radio Artists - w:SAG-AFTRA (sagaftra.org, contact form[contact 6]) endorses, via SAG-AFTRA ACTION ALERT: "Support California Bill to End Deepfake Porn" at sagaftra.org, California Senate Bill SB 564, introduced to the w:California State Senate by w:California w:Senator Connie Leyva in Feb 2019.
Organizations possibly against synthetic human-like fakes
Originally harvested from the study The ethics of artificial intelligence: Issues and initiatives (.pdf) by the w:European Parliamentary Research Service, published on the w:Europa (web portal) in March 2020.[1st seen in 1]
- INSTITUTE FOR ETHICS IN ARTIFICIAL INTELLIGENCE at ieai.mcts.tum.de[contact 7] received initial funding from w:Facebook in 2019.[1st seen in 1] IEAI on LinkedIn.com
- The Institute for Ethical AI & Machine Learning at ethical.institute (contact form asks a lot of questions)[contact 8][1st seen in 1] The Institute for Ethical AI & Machine Learning on LinkedIn.com
- Future of Life Institute at futureoflife.org (contact form with also mailing list)[contact 10] received funding from private donors.[1st seen in 1] See w:Future of Life Institute for more info.
- The Japanese Society for Artificial Intelligence (JSAI) at ai-gakkai.or.jp[contact 11] Publication: Ethical guidelines.[1st seen in 1]
- AI4All at ai-4-all.org (contact form with also mailing list subscription) [contact 12] funded by w:Google[1st seen in 1] AI4All on LinkedIn.com
- The Future Society at thefuturesociety.org (contact form with also mailing list subscription)[contact 13][1st seen in 1]. Their activities include policy research, educational & leadership development programs, advisory services, seminars & summits and other special projects to advance the responsible adoption of Artificial Intelligence (AI) and other emerging technologies. The Future Society on LinkedIn.com
- The Ai Now Institute at ainowinstitute.org (contact form and possibility to subscribe to mailing list)[contact 14] at w:New York University[1st seen in 1]. Their work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License. The Ai Now Institute on LinkedIn.com
- Partnership on AI at partnershiponai.org (contact form)[contact 15] is based in the USA and funded by technology companies. They provide resources and have a vast amount and high caliber of partners. See w:Partnership on AI and Partnership on AI on LinkedIn.com for more info.
- The Foundation for Responsible Robotics at responsiblerobotics.org (contact form)[contact 16] is based in w:Netherlands.[1st seen in 1] The Foundation for Responsible Robotics on LinkedIn.com
- AI4People at ai4people.eu (contact form)[contact 17] is a multi-stakeholder forum based in w:Belgium.[1st seen in 1] AI4People on LinkedIn.com
- The Ethics and Governance of Artificial Intelligence Initiative at aiethicsinitiative.org is based in the USA.[1st seen in 1]
- Saidot at saidot.ai is a Finnish company offering a platform for AI transparency, explainability and communication.[1st seen in 1] Saidot on LinkedIn.com
- euRobotics at eu-robotics.net is funded by the w:European Commission.[1st seen in 1]
- Centre for Data Ethics and Innovation at gov.uk is financed by the UK government. Centre for Data Ethics and Innovation Blog at cdei.blog.gov.uk[1st seen in 1] Centre for Data Ethics and Innovation on LinkedIn.com
- ACM Special Interest Group on Artificial Intelligence at sigai.acm.org is a w:Special Interest Group on AI by ACM.[1st seen in 1]
- IEEE Ethics in Action - in Autonomous and Intelligent Systems at ethicsinaction.ieee.org
- The Center for Countering Digital Hate at counterhate.com is an international not-for-profit NGO that seeks to disrupt the architecture of online hate and misinformation, with offices in London and Washington DC.
- Partnership for Countering Influence Operations (PCIO) at carnegieendowment.org is a partnership by the w:Carnegie Endowment for International Peace.
Other essential developments
- The Montréal Declaration for a Responsible Development of Artificial Intelligence at montrealdeclaration-responsibleai.com and the same site in French: La Déclaration de Montréal IA responsable at declarationmontreal-iaresponsable.com[1st seen in 1]
- UNI Global Union at uniglobalunion.org is based in w:Nyon, w:Switzerland and deals mainly with labor issues to do with AI and robotics.[1st seen in 1] UNI Global Union on LinkedIn.com
- European Robotics Research Network at cordis.europa.eu funded by the w:European Commission.[1st seen in 1]
- European Robotics Platform at eu-robotics.net is funded by the w:European Commission. See w:European Robotics Platform and w:List of European Union robotics projects#EUROP for more info.[1st seen in 1]
Events against synthetic human-like fakes
- ONGOING | w:National Institute of Standards and Technology (NIST) | Open Media Forensics Challenge (OpenMFC) at mfc.nist.gov and Open Media Forensics Challenge at nist.gov - an open evaluation series organized by NIST to assess and measure the capability of media forensic algorithms and systems.[8]
- 2021 | w:Conference on Computer Vision and Pattern Recognition (CVPR) | 2021 Conference on Computer Vision and Pattern Recognition: 'Workshop on Media Forensics' at sites.google.com, a June 2021 workshop at the Conference on Computer Vision and Pattern Recognition.
- 2020 | CVPR | 2020 Conference on Computer Vision and Pattern Recognition: 'Workshop on Media Forensics' at sites.google.com, a June 2020 workshop at the Conference on Computer Vision and Pattern Recognition.
- 2020 | The winners of the Deepfake Detection Challenge reach 82% accuracy in detecting synthetic human-like fakes[9]
- 2019 | At the annual Finnish w:Ministry of Defence's Scientific Advisory Board for Defence (MATINE) public research seminar, a research group presented their work 'Synteettisen median tunnistus' at defmin.fi (Recognizing synthetic media). They built on earlier work on how to automatically detect synthetic human-like fakes, and their work was funded with a grant from MATINE.
- 2019 | w:NeurIPS | w:Facebook, Inc. "Facebook AI Launches Its Deepfake Detection Challenge" at spectrum.ieee.org w:IEEE Spectrum. More reporting at "Facebook, Microsoft, and others launch Deepfake Detection Challenge" at venturebeat.com
- 2019 | CVPR | 2019 CVPR: 'Workshop on Media Forensics'
- 2017-2020 | w:National Institute of Standards and Technology (NIST) | NIST: 'Media Forensics Challenge' (MFC) at nist.gov, an iterative research challenge by the w:National Institute of Standards and Technology; at the time of writing, the evaluation criteria for the 2019 iteration were being formed.
- 2018 | w:European Conference on Computer Vision (ECCV) | ECCV 2018: 'Workshop on Objectionable Content and Misinformation' at sites.google.com, a workshop at the 2018 w:European Conference on Computer Vision in w:Munich, focused on detecting objectionable content, e.g. w:nudity, w:pornography, w:violence, w:hate, w:children exploitation and w:terrorism among others, and on addressing the misinformation problems that arise when people are fed w:disinformation and punt it on as misinformation. Announced topics included w:image/video forensics, w:detection/w:analysis/w:understanding of w:fake images/videos, w:misinformation detection/understanding: mono-modal and w:multi-modal, adversarial technologies and detection/understanding of objectionable content.
- 2018 | w:NIST NIST 'Media Forensics Challenge 2018' at nist.gov was the second annual evaluation to support research and help advance the state of the art for image and video forensics technologies – technologies that determine the region and type of manipulations in imagery (image/video data) and the phylogenic process that modified the imagery.
- 2016 | Nimble Challenge 2016 - NIST released the Nimble Challenge’16 (NC2016) dataset as the MFC program kickoff dataset (where NC is the former name of MFC).[10]
Studies against synthetic human-like fakes
- 'Disinformation That Kills: The Expanding Battlefield Of Digital Warfare' at cbinsights.com, a 2020-10-21 research brief on disinformation warfare by w:CB Insights, a private company that provides w:market intelligence and w:business analytics services
- 'Media Forensics and DeepFakes: an overview' at arXiv.org (as .pdf at arXiv.org), an overview on the subject of digital look-alikes and media forensics published in August 2020 in Volume 14 Issue 5 of IEEE Journal of Selected Topics in Signal Processing. 'Media Forensics and DeepFakes: An Overview' at ieeexplore.ieee.org (paywalled, free abstract)
- 'DEEPFAKES: False pornography is here and the law cannot protect you' at scholarship.law.duke.edu by Douglas Harris, published in Duke Law & Technology Review - Volume 17 on 2019-01-05 by the w:Duke University School of Law
Reporting against synthetic human-like fakes
- 'Researchers use facial quirks to unmask ‘deepfakes’' at news.berkeley.edu, 2019-06-18 reporting by Kara Manke published in the Politics & society, Research, Technology & engineering section of Berkeley News of w:UC Berkeley.
Companies against synthetic human-like fakes
See resources for more.
- Cyabra.com is an AI-based system that helps organizations be on guard against disinformation attacks[1st seen in 2]. Reuters.com reporting from July 2020.
SSF! wiki proposed countermeasure to weaponized synthetic pornography: Outlaw unauthorized synthetic pornography (transcluded)
Transcluded from Juho's proposal for banning unauthorized synthetic pornography
§1 Models of human appearance
A model of human appearance means
- A realistic 3D model
- A 7D bidirectional reflectance distribution function model (see the dimensionality note after this list)
- A direct-to-2D capable w:machine learning model
- Or a model made with any technology whatsoever, that looks deceptively like the target person.
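One way to unpack the '7D' above (an interpretation for orientation, not a definition from the proposal): a spatially varying, spectral BRDF takes seven arguments,

$$ f_r(u, v, \theta_i, \phi_i, \theta_r, \phi_r, \lambda) $$

two surface coordinates $(u, v)$, two incident-direction angles, two exitant-direction angles and the wavelength $\lambda$, giving seven dimensions in total.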
§2 Producing synthetic pornography
Making projections, still or videographic, where targets are portrayed nude or in a sexual situation from models of human appearance defined in §1, without the express consent of the targets, is illegal.
§3 Distributing synthetic pornography
Distributing, making available, public display, purchase, sale, yielding, import and export of non-authorized synthetic pornography defined in §2 are punishable.[footnote 3]
§4 Aggravated producing and distributing synthetic pornography
If the media described in §2 or §3 is made or distributed with the intent to frame for a crime or for blackmail, the crime should be judged as aggravated.
Afterword
The original idea I had was to ban both the raw materials i.e. the models to make the visual synthetic filth and also the end product weaponized synthetic pornography, but then in July 2019 it appeared to me that Adequate Porn Watcher AI (concept) could really help in this age of industrial disinformation if it were built, trained and operational. Banning modeling of human appearance was in conflict with the revised plan.
It is safe to assume that collecting permissions to model each pornographic recording is not plausible, so an interesting question is whether we can ban covert modeling from non-pornographic pictures, while still retaining the ability to model all porn found on the Internet.
If banning the modeling of people's appearance from non-pornographic images/videos without explicit permission is to be pursued, the ban must be formulated so that it does not make Adequate Porn Watcher AI (concept) illegal / impossible. This would seem to lead to a weird situation where modeling a human from non-pornographic media would be illegal, but modeling from pornography legal.
SSF! wiki proposed countermeasure to weaponized synthetic pornography: Adequate Porn Watcher AI (concept) (transcluded)
Transcluded main contents from Adequate Porn Watcher AI (concept)
Adequate Porn Watcher AI (APW_AI) is an w:AI and w:computer vision concept to search for any and all porn that should not be by watching and modeling all porn ever found on the w:Internet thus effectively protecting humans by exposing covert naked digital look-alike attacks and also other contraband.
Obs. A service identical to APW_AI used to exist - FacePinPoint.com
The method and the effect
The method by which APW_AI would provide safety and security to its users is that they can briefly upload a model they've gotten of themselves, and the APW_AI will then either report that nothing matching was found or be of the opinion that something matching was found.
If people are able to check whether there is synthetic porn that looks like themselves, the products of the synthetic hate-illustration industrialists lose destructive potential, and the attacks that do happen are less destructive as they are exposed by the APW_AI, thus decimating the monetary value of these disinformation weapons to the criminals.
If you feel comfortable leaving your model with the good people at the benefactor for safekeeping, you get alerted and helped if you ever get attacked with a synthetic porn attack. A minimal sketch of the matching step follows.
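The sketch below assumes faces are compared as fixed-size embeddings under cosine similarity; the index layout, the stand-in embeddings and the threshold are hypothetical placeholders, since APW_AI is only a concept.

```python
# Hypothetical sketch of APW_AI's lookup: compare an uploaded face model
# (as an embedding) against an index built from all porn found online.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_matches(uploaded: np.ndarray,
                 index: dict[str, np.ndarray],
                 threshold: float = 0.8) -> list[str]:
    """Return identifiers of indexed items resembling the uploaded model."""
    return [item_id for item_id, emb in index.items()
            if cosine_similarity(uploaded, emb) >= threshold]

# Toy usage with random vectors standing in for a real face encoder.
rng = np.random.default_rng(42)
index = {f"video-{i}": rng.standard_normal(512) for i in range(1000)}
uploaded_model = rng.standard_normal(512)
matches = find_matches(uploaded_model, index)
print("something matching found" if matches else "nothing matching found")
```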
Rules
Looking up whether matches are found for anyone else's model is forbidden, and this should probably be enforced with a w:biometric w:facial recognition system app that checks that the model you want checked is yours and that you are awake.
Definition of adequacy
An adequate implementation should be nearly free of false positives, very good at finding true positives and able to process more porn than is ever uploaded.
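To see why 'nearly free of false positives' is a hard requirement at this scale, consider a rough base-rate calculation (illustrative numbers, not measurements): if the system screens $N$ new items per day with a per-item false-positive rate $f$, the expected number of false alarms per day is

$$ \mathbb{E}[\text{false alarms}] = N \cdot f, \qquad \text{e.g. } N = 10^{7},\ f = 10^{-4} \implies 1000 \text{ per day}, $$

so even a seemingly small $f$ must be pushed far lower for the reports to stay trustworthy.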
What about the people in the porn-industry?
People who openly do porn can help by opting-in to help in the development by providing training material and material to test the AI on. People and companies who help in training the AI naturally get credited for their help.
There are of course lots of people-questions to this and those questions need to be identified by professionals of psychology and social sciences.
History
The idea of APW_AI occurred to User:Juho Kunsola on Friday 2019-07-12. Subsequently (the next day) this discovery caused the scrapping of the plea to ban covert modeling of human appearance, as that would have rendered APW_AI legally impossible.
Countermeasures elsewhere
Partial transclusion from Organizations, studies and events against synthetic human-like fakes
Companies against synthetic filth
- Alecto AI at alectoai.com[1st seen in 3], a provider of AI-based face information analytics, founded in 2021 in Palo Alto.
- Facenition.com, an NZ company founded in 2019 with an ingenious method for hunting fake human-like images. It has probably been purchased, merged or licensed by ThatsMyFace.com.
- ThatsMyFace.com[1st seen in 3], an Australian company.[contacted 1] Previously, another company in the USA had this same name and domain name.[11]
A service identical to APW_AI used to exist - FacePinPoint.com
Partial transclusion from FacePinPoint.com
FacePinPoint.com was a for-a-fee service from 2017 to 2021 for pointing out where in pornography sites a particular face appears, or, in the case of synthetic pornography, where a digital look-alike makes make-believe of a face or body appearing.[contacted 2] The inventor and founder of FacePinPoint.com, Mr. Lionel Hagege, registered the domain name in 2015[12], when he set out to research the feasibility of his action plan idea against non-consensual pornography.[13] The description of how FacePinPoint.com worked is the same as Adequate Porn Watcher AI (concept)'s description.
SSF! wiki proposed countermeasure to digital sound-alikes: Outlawing digital sound-alikes (transcluded)
Transcluded from Juho's proposal on banning digital sound-alikes
Motivation: The current situation where the criminals can freely trade and grow their libraries of stolen voices is unwise.
§1 Unauthorized modeling of a human voice
Acquiring a model of a human's voice that deceptively resembles some dead or living person's voice, as well as its possession, purchase, sale, yielding, import and export without the express consent of the target, are punishable.
§2 Application of unauthorized voice models
Producing and making available media from covert voice models defined in §1 is punishable.
§3 Aggravated application of unauthorized voice models
If the produced media is for a purpose to
- frame a human target or targets for crimes,
- attempt extortion, or
- defame the target,
the crime should be judged as aggravated.
Timeline of synthetic human-like fakes
2020's synthetic human-like fakes
- 2021 | Science | Voice Cloning: a Multi-Speaker Text-to-Speech Synthesis Approach based on Transfer Learning .pdf at arxiv.org, a paper submitted in Feb 2021 by researchers from the w:University of Turin.[1st seen in 4]
- 2020 | counter-measure | On 2020-11-18 the w:Partnership on AI introduced the 'AI Incident Database' at incidentdatabase.ai.[14]
- 2020 | reporting | "Deepfake porn is now mainstream. And major sites are cashing in" at wired.co.uk by Matt Burgess. Published August 2020.
- 2020 | demonstration | Moondisaster.org (full film embedded in website) project by the Center for Advanced Virtuality of the w:MIT published in July 2020, makes use of various methods of making a synthetic human-like fake. Alternative place to watch: In Event of Moon Disaster - FULL FILM at youtube.com
- 2020 | US state law | On January 1 2020[15] the w:California w:US state law "AB-602 Depiction of individual using digital or electronic technology: sexually explicit material: cause of action." came into effect in the civil code of the w:California Codes, banning the manufacturing and w:digital distribution of synthetic pornography without the w:consent of the people depicted. AB-602 provides victims of synthetic pornography with w:injunctive relief and poses legal threats of w:statutory and w:punitive damages on w:criminals making or distributing synthetic pornography without consent. The bill AB-602 was signed into law by California w:Governor w:Gavin Newsom on October 3 2019 and was authored by w:California State Assemblymember w:Marc Berman; an identical Senate bill was coauthored by w:California Senator w:Connie Leyva.[16][17] AB602 at trackbill.com
- 2020 | Chinese legislation | On Wednesday January 1 2020 a Chinese law requiring that synthetically faked footage bear a clear notice about its fakeness came into effect. Failure to comply could be considered a w:crime, the w:Cyberspace Administration of China (cac.gov.cn) stated on its website. China announced this new law in November 2019.[18] The Chinese government seems to be reserving the right to prosecute both users and w:online video platforms failing to abide by the rules.[19]
2010's synthetic human-like fakes
- 2019 | demonstration | In September 2019 w:Yle, the Finnish w:public broadcasting company, aired a result of experimental w:journalism, a deepfake of the President in office w:Sauli Niinistö in its main news broadcast for the purpose of highlighting the advancing disinformation technology and problems that arise from it.
- 2019 | US state law | On September 1 2019 w:Texas Senate bill SB 751 - Relating to the creation of a criminal offense for fabricating a deceptive video with intent to influence the outcome of an election - w:amendments to the election code, came into effect in the w:Law of Texas, giving w:candidates in w:elections a 30-day protection period before elections during which making and distributing digital look-alikes or synthetic fakes of the candidates is an offense. The law text defines the subject of the law as "a video, created with the intent to deceive, that appears to depict a real person performing an action that did not occur in reality".[20] SB 751 was introduced to the Senate by w:Bryan Hughes (politician).[21]
- 2019 | US state law | Since July 1 2019[22] w:Virginia has criminalized the sale and dissemination of unauthorized synthetic pornography, but not the manufacture,[23] as section § 18.2-386.2, titled 'Unlawful dissemination or sale of images of another; penalty.', became part of the w:Code of Virginia.
Code of Virginia (TOC) » Title 18.2. Crimes and Offenses Generally » Chapter 8. Crimes Involving Morals and Decency » Article 5. Obscenity and Related Offenses » Section § 18.2-386.2. Unlawful dissemination or sale of images of another; penalty
The section § 18.2-386.2. Unlawful dissemination or sale of images of another; penalty. of Virginia is as follows:
A. Any w:person who, with the w:intent to w:coerce, w:harass, or w:intimidate, w:maliciously w:disseminates or w:sells any videographic or still image created by any means whatsoever that w:depicts another person who is totally w:nude, or in a state of undress so as to expose the w:genitals, pubic area, w:buttocks, or female w:breast, where such person knows or has reason to know that he is not w:licensed or w:authorized to disseminate or sell such w:videographic or w:still image is w:guilty of a Class 1 w:misdemeanor.
- For purposes of this subsection, "another person" includes a person whose image was used in creating, adapting, or modifying a videographic or still image with the intent to depict an actual person and who is recognizable as an actual person by the person's w:face, w:likeness, or other distinguishing characteristic.
B. If a person uses w:services of an w:Internet service provider, an electronic mail service provider, or any other information service, system, or access software provider that provides or enables computer access by multiple users to a computer server in committing acts prohibited under this section, such provider shall not be held responsible for violating this section for content provided by another person.
C. Venue for a prosecution under this section may lie in the w:jurisdiction where the unlawful act occurs or where any videographic or still image created by any means whatsoever is produced, reproduced, found, stored, received, or possessed in violation of this section.
D. The provisions of this section shall not preclude prosecution under any other w:statute.[23]
The identical bills were House Bill 2678, presented by w:Delegate w:Marcus Simon to the w:Virginia House of Delegates on January 14 2019, and Senate bill 1736, introduced three days later to the w:Senate of Virginia by Senator w:Adam Ebbin.
- 2019 | Science | Sample Efficient Adaptive Text-to-Speech .pdf at arxiv.org, a 2019 paper from Google researchers, published as a conference paper at w:International Conference on Learning Representations (ICLR)[1st seen in 4]
- 2019 | science and demonstration | 'Speech2Face: Learning the Face Behind a Voice' at arXiv.org, a system for generating likely facial features based on the voice of a person, presented by the w:MIT Computer Science and Artificial Intelligence Laboratory at the 2019 w:CVPR. Speech2Face at github.com. This may develop into something that really causes problems. "Speech2Face: Neural Network Predicts the Face Behind a Voice" reporting at neurohive.io, "Speech2Face Sees Voices and Hears Faces: Dreams Come True with AI" reporting at belitsoft.com
- 2019 | crime | w:Fraud with digital sound-alike technology surfaced in 2019. See 'An artificial-intelligence first: Voice-mimicking software reportedly used in a major theft', a 2019 Washington Post article or 'A Voice Deepfake Was Used To Scam A CEO Out Of $243,000' at Forbes.com (2019-09-03)
- 2019 | demonstration | 'Which Face is real?' at whichfaceisreal.com is an easily unnerving game by Carl Bergstrom and Jevin West where you need to try to distinguish from a pair of photos which is real and which is not. A part of the "tools" of the Calling Bullshit course taught at the w:University of Washington. Relevancy: certain
- 2019 | demonstration | 'Thispersondoesnotexist.com' (since February 2019) by Philip Wang. It showcases a w:StyleGAN at the task of making an endless stream of pictures that look like no-one in particular, but are eerily human-like. Relevancy: certain
- 2019 | action | w:Nvidia w:open sources w:StyleGAN, a novel w:generative adversarial network.[24]
- 2018 | counter-measure | In September 2018 Google added “involuntary synthetic pornographic imagery” to its ban list, allowing anyone to request the search engine block results that falsely depict them as “nude or in a sexually explicit situation.”[25] Information on removing involuntary fake pornography from Google at support.google.com if it shows up in Google and the form to request removing involuntary fake pornography at support.google.com, select "I want to remove: A fake nude or sexually explicit picture or video of myself"
- 2018 | science and demonstration | The work 'Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis' (at arXiv.org) was presented at the 2018 w:Conference on Neural Information Processing Systems (NeurIPS). The pre-trained model is able to steal voices from a sample of only 5 seconds with almost convincing results.
- 2018 | demonstration | At the 2018 w:World Internet Conference in w:Wuzhen the w:Xinhua News Agency presented two digital look-alikes made to the resemblance of its real news anchors Qiu Hao (w:Chinese language)[26] and Zhang Zhao (w:English language). The digital look-alikes were made in conjunction with w:Sogou.[27] Neither the w:speech synthesis used nor the gesturing of the digital look-alike anchors were good enough to deceive the watcher to mistake them for real humans imaged with a TV camera.
- 2018 | action | Deep Fakes letter to the Office of the Director of National Intelligence at schiff.house.gov, a letter sent to the w:Director of National Intelligence on 2018-09-13 by congresspeople w:Adam Schiff, w:Stephanie Murphy and w:Carlos Curbelo requesting a report be compiled on the synthetic human-like fakes situation and what are the threats and what could be the solutions.[1st seen in 5]
- 2018 | controversy / demonstration | The w:deepfakes controversy surfaced, where porn videos were doctored utilizing w:deep machine learning so that the face of the actress was replaced by the software's opinion of what another person's face would look like in the same pose and lighting.
- 2017 | science | 'Synthesizing Obama: Learning Lip Sync from Audio' at grail.cs.washington.edu. In SIGGRAPH 2017, Supasorn Suwajanakorn et al. of the w:University of Washington presented an audio-driven digital look-alike of the upper torso of Barack Obama. It was driven only by a voice track as source data for the animation, after the training phase to acquire w:lip sync and wider facial information from w:training material consisting of 2D videos with audio had been completed.[28] Relevancy: certain
- 2016 | movie | w:Rogue One is a Star Wars film for which digital look-alikes of actors w:Peter Cushing and w:Carrie Fisher were made. In the film their appearance would appear to be of the same age as the actors were during the filming of the original 1977 w:Star Wars (film).
- 2016 | science / demonstration | w:DeepMind's w:WaveNet owned by w:Google also demonstrated ability to steal people's voices
- 2016 | science and demonstration | w:Adobe Inc. publicly demonstrates w:Adobe Voco, a sound-like-anyone machine: '#VoCo. Adobe Audio Manipulator Sneak Peak with Jordan Peele | Adobe Creative Cloud' on Youtube. The original Adobe Voco required 20 minutes of sample audio to thieve a voice. Relevancy: certain.
- 2016 | science | 'Face2Face: Real-time Face Capture and Reenactment of RGB Videos' at Niessnerlab.org A paper (with videos) on the semi-real-time 2D video manipulation with gesture forcing and lip sync forcing synthesis by Thies et al, Stanford. Relevancy: certain
- 2016 | music video | 'Plug' by Kube at youtube.com - A 2016 music video by w:Kube (rapper) (w:fi:Kube), that shows deepfake-like technology this early. Video was uploaded on 2016-09-15 and is directed by Faruk Nazeri.
- 2015 | Science | 'Deep Learning Face Attributes in the Wild' at arxiv.org presented at the 2015 w:International Conference on Computer Vision
- 2015 | movie | In w:Furious 7, a digital look-alike of the actor w:Paul Walker, who died in an accident during filming, was made by w:Weta Digital to enable the completion of the film.[29]
- 2014 | science | w:Ian Goodfellow et al. presented the principles of a w:generative adversarial network. GANs made the headlines in early 2018 with the w:deepfakes controversies.
- 2013 | demonstration | At the 2013 SIGGRAPH, w:Activision and USC presented "Digital Ira", a w:real time computing digital face look-alike of Ari Shapiro, an ICT USC research scientist,[30] utilizing the USC light stage X by Ghosh et al. for both reflectance field and motion capture.[31] The end result, shown both precomputed and rendered in real time on the most modern game w:GPU of the time, looks fairly realistic.
- 2013 | demonstration | 'Scanning and Printing a 3D Portrait of President Barack Obama' at ict.usc.edu. A 7D model and a 3D bust were made of President Obama with his consent. Relevancy: certain
2000's synthetic human-like fakes
- 2010 | movie | w:Walt Disney Pictures released a sci-fi sequel entitled w:Tron: Legacy with a digitally rejuvenated digital look-alike made of the actor w:Jeff Bridges playing the w:antagonist w:CLU.
- 2009 | movie | A digital look-alike of a younger w:Arnold Schwarzenegger was made for the movie w:Terminator Salvation, though the end result was critiqued as unconvincing. Facial geometry was acquired from a 1984 mold of Schwarzenegger.
- 2009 | demonstration | Paul Debevec: 'Animating a photo-realistic face' at ted.com. Debevec et al. presented new digital likenesses, made by w:Image Metrics, this time of actress w:Emily O'Brien, whose reflectance was captured with the USC light stage 5. At 00:04:59 you can see two clips, one with the real Emily shot with a real camera and one with a digital look-alike of Emily, shot with a simulation of a camera; which is which is difficult to tell. Bruce Lawmen was scanned using USC light stage 6 in still position and was also recorded running there on a w:treadmill. Many digital look-alikes of Bruce are seen running fluently and looking natural in the ending sequence of the TED talk video.[32] The motion looks fairly convincing contrasted to the clunky run in the w:Animatrix: Final Flight of the Osiris, which was w:state-of-the-art in 2003, if photorealism was the intention of the w:animators.
- 2004 | movie | The w:Spider-man 2 (and w:Spider-man 3, 2007) films. Relevancy: The films include a digital look-alike made of actor w:Tobey Maguire by w:Sony Pictures Imageworks.[33]
- 2003 | short film | w:The Animatrix: Final Flight of the Osiris, a w:state-of-the-art want-to-be human likeness not quite fooling the watcher, made by w:Square Pictures.
- 2003 | movie(s) | The w:Matrix Reloaded and w:Matrix Revolutions films. Relevancy: First public display of digital look-alikes that are virtually indistinguishable from the real actors. 'Universal Capture - Image-based Facial Animation for "The Matrix Reloaded"' at researchgate.net (2003)
- 2002 | music video | 'Bullet' by Covenant on Youtube by w:Covenant (band) from their album w:Northern Light (Covenant album). Relevancy: Contains the best upper-torso digital look-alike of Eskil Simonsson (vocalist) that their organization could procure at the time. Here you can observe the classic "skin looks like cardboard"-bug (assuming this was not intended) that thwarted efforts to make digital look-alikes that pass human testing before the reflectance capture and dissection in 1999 by w:Paul Debevec et al. at the w:University of Southern California and the subsequent development of the "Analytical w:BRDF" (quote-unquote) by ESC Entertainment, a company set up for the sole purpose of making the cinematography for the 2003 films Matrix Reloaded and Matrix Revolutions possible, led by George Borshukov.
1990's synthetic human-like fakes
- 1999 | science | 'Acquiring the reflectance field of a human face' paper at dl.acm.org. w:Paul Debevec et al. of w:USC did the first known reflectance capture over the human face with their extremely simple w:light stage. They presented their method and results in w:SIGGRAPH 2000. The scientific breakthrough required finding the w:subsurface light component (the simulation models are glowing from within slightly), which can be found using the knowledge that light reflected from the oil-to-air layer retains its w:Polarization (waves) while the subsurface light loses its polarization. So, equipped only with a movable light source, a movable video camera, 2 polarizers and a computer program doing extremely simple math, the last piece required to reach photorealism was acquired (a toy sketch of this polarization separation follows this list).[1]
- 1999 | institute founded | The w:Institute for Creative Technologies was founded by the w:United States Army in the w:University of Southern California. It collaborates with the w:United States Army Futures Command, w:United States Army Combat Capabilities Development Command, w:Combat Capabilities Development Command Soldier Center and w:United States Army Research Laboratory.[34] In 2016 w:Hao Li was appointed to direct the institute.
- 1994 | movie | w:The Crow (1994 film) was the first film production to make use of w:digital compositing of a computer simulated representation of a face onto scenes filmed using a w:body double. Necessity was the muse, as the actor w:Brandon Lee, portraying the protagonist, was tragically killed in an on-set accident during filming.
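The polarization trick in the 1999 reflectance-capture entry above reduces to two lines of image arithmetic: the cross-polarized capture contains only the depolarized subsurface component, while the parallel-polarized capture contains subsurface plus the polarization-preserving specular reflection, so a subtraction isolates the specular layer. A toy sketch, assuming two aligned floating-point captures:

```python
# Toy sketch of specular / subsurface separation via polarization
# difference imaging, as used in light-stage reflectance capture.
# parallel_img and cross_img are assumed to be aligned HxWx3 captures
# taken through parallel- and cross-oriented polarizers.
import numpy as np

def separate_reflectance(parallel_img: np.ndarray, cross_img: np.ndarray):
    # Cross-polarized light went through a depolarizing bounce under the
    # skin, so it approximates the subsurface (diffuse) component.
    subsurface = cross_img
    # Specular reflection off the oil-to-air layer keeps its polarization,
    # so it survives in the parallel capture and remains in the difference.
    specular = np.clip(parallel_img - cross_img, 0.0, None)
    return subsurface, specular

# Toy usage with synthetic data in place of real captures.
rng = np.random.default_rng(1)
parallel = rng.uniform(0.0, 1.0, (4, 4, 3))
cross = parallel * rng.uniform(0.0, 1.0, (4, 4, 3))  # cross <= parallel
subsurface, specular = separate_reflectance(parallel, cross)
```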
1970's synthetic human-like fakes
- 1976 | movie | w:Futureworld reused parts of A Computer Animated Hand on the big screen.
- 1972 | entertainment | 'A Computer Animated Hand' on Vimeo. w:A Computer Animated Hand by w:Edwin Catmull and w:Fred Parke. Relevancy: This was the first time that w:computer-generated imagery was used in film to animate moving human-like appearance.
- 1971 | science | 'Images de synthèse : palme de la longévité pour l’ombrage de Gouraud' (still photos). w:Henri Gouraud (computer scientist) made the first w:Computer graphics w:geometry w:digitization and representation of a human face. The model was his wife, Sylvie Gouraud. The 3D model was a simple w:wire-frame model and he applied w:Gouraud shading to produce the first known representation of human-likeness on computer (a toy sketch of Gouraud shading follows this list).[35]
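For context on what Gouraud's contribution actually computes, the sketch below shades a point of one triangle the Gouraud way: a Lambertian intensity is evaluated at each vertex, then linearly interpolated across the face via barycentric coordinates. This is a textbook restatement in Python, not Gouraud's original code.

```python
# Toy sketch of w:Gouraud shading on one triangle: per-vertex Lambertian
# lighting followed by barycentric interpolation of the intensities.
import numpy as np

def lambert(normal: np.ndarray, light_dir: np.ndarray) -> float:
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    return max(float(np.dot(n, l)), 0.0)

def gouraud_shade(vertex_normals, light_dir, bary) -> float:
    """Intensity at the point with the given barycentric coordinates."""
    vertex_intensities = [lambert(n, light_dir) for n in vertex_normals]
    return float(np.dot(bary, vertex_intensities))

# Toy usage: three vertex normals, one light, shade the triangle centroid.
normals = [np.array([0.0, 0.0, 1.0]),
           np.array([0.3, 0.0, 1.0]),
           np.array([0.0, 0.3, 1.0])]
light = np.array([0.5, 0.5, 1.0])
print(gouraud_shade(normals, light, bary=np.array([1/3, 1/3, 1/3])))
```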
1770's synthetic human-like fakes
- 1791 | science | w:Wolfgang von Kempelen's Speaking Machine, built by von Kempelen of w:Pressburg, w:Hungary, and described in a 1791 paper, was w:bellows-operated.[36] The machine added models of the tongue and lips, enabling it to produce w:consonants as well as w:vowels. (based on w:Speech synthesis#History)
- 1779 | science / discovery | w:Christian Gottlieb Kratzenstein won the first prize in a competition announced by the w:Russian Academy of Sciences for models he built of the human w:vocal tract that could produce the five long w:vowel sounds.[37] (Based on w:Speech synthesis#History)
Footnotes
- ↑ It is terminologically more precise, more inclusive and more useful to talk about 'synthetic terror porn', if we want to talk about things with their real names, than about 'synthetic rape porn', because synthesizing recordings of consensual-looking sex scenes can also be terroristic in intent.
- ↑ Whether a suspect can defend against faked synthetic speech that sounds like him/her depends on how up-to-date the judiciary is. If no information and instructions about digital sound-alikes have been given to the judiciary, they likely will not believe the defense of denying that the recording is of the suspect's voice.
- ↑ People who are found in possession of this synthetic pornography should probably not be penalized, but rather advised to get some help.
1st seen in
- ↑
"The ethics of artificial intelligence: Issues and initiatives" (PDF). w:Europa (web portal). w:European Parliamentary Research Service. March 2020. Retrieved 2021-02-17.
This study deals with the ethical implications and moral questions that arise from the development and implementation of artificial intelligence (AI) technologies.
- ↑ https://www.reuters.com/article/us-cyber-deepfake-activist/deepfake-used-to-attack-activist-couple-shows-new-disinformation-frontier-idUSKCN24G15E
- ↑ https://spectrum.ieee.org/deepfake-porn
- ↑ https://www.connectedpapers.com/main/8fc09dfcff78ac9057ff0834a83d23eb38ca198a/Transfer-Learning-from-Speaker-Verification-to-Multispeaker-TextToSpeech-Synthesis/graph
- ↑ 'US Lawmakers: AI-Generated Fake Videos May Be a Security Threat' at uk.pcmag.com, 2018-09-13 reporting by Michael Kan
Contact information of organizations
- ↑
- The Defense Advanced Research Projects Agency
- Contact form https://contact.darpa.mil/
- Email: outreach@darpa.mil
- Defense Advanced Research Projects Agency
- 675 North Randolph Street
- Arlington, VA 22203-2114
- Phone 1-703-526-6630
- ↑
- National Center for Media Forensics at https://artsandmedia.ucdenver.edu
- Email: CAM@ucdenver.edu
- College of Arts & Media
- National Center for Media Forensics
- CU Denver
- Arts Building
- Suite 177
- 1150 10th Street
- Denver, CO 80204
- USA
- Phone 1-303-315-7400
- ↑
- Media Forensics Hub at Clemson University clemson.edu
- Media Forensics Hub
- Clemson University
- Clemson, South Carolina 29634
- USA
- Phone 1-864-656-3311
- ↑ mediaforensics@clemson.edu
- ↑
- WITNESS (Media Lab)
- Contact form https://www.witness.org/get-involved/ incl. mailing list subscription possibility
- WITNESS
- 80 Hanson Place, 5th Floor
- Brooklyn, NY 11217
- USA
- Phone: 1.718.783.2000
- ↑
- Screen Actors Guild - American Federation of Television and Radio Artists at https://www.sagaftra.org/
- Screen Actors Guild - American Federation of Television and Radio Artists
- 5757 Wilshire Boulevard, 7th Floor
- Los Angeles, California 90036
- USA
- Phone: 1-855-724-2387
- Email: info@sagaftra.org
- ↑
- INSTITUTE FOR ETHICS IN ARTIFICIAL INTELLIGENCE
- Marsstrasse 40
- D-80335 Munich
- INSTITUTE FOR ETHICS IN ARTIFICIAL INTELLIGENCE
- Arcisstrasse 21
- D-80333 Munich
- Germany
- ieai(at)mcts.tum.de
- ↑
The Institute for Ethical AI & Machine Learning
Website https://ethical.institute/
Email
- a@ethical.institute
- 2021-08-14 used the contact form at https://ethical.institute/#contact
- ↑
- The Institute for Ethical AI in Education
- The University of Buckingham
- The Institute for Ethical AI in Education
- Hunter Street
- Buckingham
- MK18 1EG
- United Kingdom
- ↑
Future of Life Institute
Contact form
- No physical contact info
- 2021-08-14 | Subscribed to newsletter
- ↑
The Japanese Society for Artificial Intelligence
Contact info
Mail
- The Japanese Society for Artificial Intelligence
- 402, OS Bldg.
- 4-7 Tsukudo-cho, Shinjuku-ku, Tokyo 162-0821
- Japan
- 03-5261-3401
- ↑
- AI4ALL
- AI4ALL
- 548 Market St
- PMB 95333
- San Francisco, California 94104
- USA
- 2021-08-14 | Subscribed to mailing list
- ↑
- The Future Society at thefuturesociety.org
- No physical contact info
- ↑
- The Ai Now Institute at ainowinstitute.org
- info@ainowinstitute.org
- 2021-08-14 | Subscribed to mailing list
- ↑
- Partnership on AI at partnershiponai.org
- Partnership on AI
- 115 Sansome St, Ste 1200,
- San Francisco, CA 94104
- USA
- ↑
- The Foundation for Responsible Robotics at responsiblerobotics.org
- info@responsiblerobotics.org
- ↑
- AI4People at ai4people.eu
- No physical contact info
References
- ↑ Debevec, Paul (2000). "Acquiring the reflectance field of a human face". Proceedings of the 27th annual conference on Computer graphics and interactive techniques - SIGGRAPH '00. ACM. pp. 145–156. doi:10.1145/344779.344855. ISBN 978-1581132083. Retrieved 2020-06-27.
- ↑ "Inventing Entertainment: The Early Motion Pictures and Sound Recordings of the Edison Companies". Memory.loc.gov. w:Library of Congress. Retrieved 2020-12-09.
- ↑ "Fake voices 'help cyber-crooks steal cash'". w:bbc.com. w:BBC. 2019-07-08. Retrieved 2020-07-22.
- ↑ Harwell, Drew (2019-09-04). "An artificial-intelligence first: Voice-mimicking software reportedly used in a major theft". w:washingtonpost.com. w:Washington Post. Retrieved 2020-07-22.
- ↑ Flatow, Ira (April 4, 2008). "1860 'Phonautograph' Is Earliest Known Recording". NPR. Retrieved 2012-12-09.
- ↑ https://web.archive.org/web/20160630154819/https://www.darpa.mil/program/media-forensics
- ↑ https://web.archive.org/web/20191108090036/https://www.darpa.mil/program/semantic-forensics
- ↑ https://www.nist.gov/itl/iad/mig/open-media-forensics-challenge
- ↑ https://venturebeat.com/2020/06/12/facebook-detection-challenge-winners-spot-deepfakes-with-82-accuracy/
- ↑ https://www.nist.gov/itl/iad/mig/open-media-forensics-challenge
- ↑ https://www.crunchbase.com/organization/thatsmyface-com
- ↑ whois facepinpoint.com
- ↑ https://www.facepinpoint.com/aboutus
- ↑ https://www.partnershiponai.org/aiincidentdatabase/
- ↑ Johnson, R.J. (2019-12-30). "Here Are the New California Laws Going Into Effect in 2020". KFI. iHeartMedia. Retrieved 2021-01-23.
- ↑ "AB 602 - California Assembly Bill 2019-2020 Regular Session - Depiction of individual using digital or electronic technology: sexually explicit material: cause of action". openstates.org. openstates.org. Retrieved 2021-03-24.
- ↑ Mihalcik, Carrie (2019-10-04). "California laws seek to crack down on deepfakes in politics and porn". w:cnet.com. w:CNET. Retrieved 2021-01-23.
- ↑ "China seeks to root out fake news and deepfakes with new online content rules". w:Reuters.com. w:Reuters. 2019-11-29. Retrieved 2021-01-23.
- ↑ Statt, Nick (2019-11-29). "China makes it a criminal offense to publish deepfakes or fake news without disclosure". w:The Verge. Retrieved 2021-01-23.
- ↑
"Relating to the creation of a criminal offense for fabricating a deceptive video with intent to influence the outcome of an election". w:Texas. 2019-06-14. Retrieved 2021-01-23.
In this section, "deep fake video" means a video, created with the intent to deceive, that appears to depict a real person performing an action that did not occur in reality
- ↑ https://capitol.texas.gov/BillLookup/History.aspx?LegSess=86R&Bill=SB751
- ↑ "New state laws go into effect July 1".
- ↑ "§ 18.2-386.2. Unlawful dissemination or sale of images of another; penalty". w:Virginia. Retrieved 2021-01-23.
- ↑ "NVIDIA Open-Sources Hyper-Realistic Face Generator StyleGAN". Medium.com. 2019-02-09. Retrieved 2020-07-13.
- ↑
Harwell, Drew (2018-12-30). "Fake-porn videos are being weaponized to harass and humiliate women: 'Everybody is a potential target'". w:The Washington Post. Retrieved 2020-07-13.
In September [of 2018], Google added “involuntary synthetic pornographic imagery” to its ban list
- ↑ Kuo, Lily (2018-11-09). "World's first AI news anchor unveiled in China". Retrieved 2020-07-13.
- ↑ Hamilton, Isobel Asher (2018-11-09). "China created what it claims is the first AI news anchor — watch it in action here". Retrieved 2020-07-13.
- ↑ Suwajanakorn, Supasorn; Seitz, Steven; Kemelmacher-Shlizerman, Ira (2017), Synthesizing Obama: Learning Lip Sync from Audio, University of Washington, retrieved 2020-07-13
- ↑ Giardina, Carolyn (2015-03-25). "'Furious 7' and How Peter Jackson's Weta Created Digital Paul Walker". The Hollywood Reporter. Retrieved 2020-07-13.
- ↑ ReForm - Hollywood's Creating Digital Clones (youtube). The Creators Project. 2020-07-13.
- ↑ Debevec, Paul. "Digital Ira SIGGRAPH 2013 Real-Time Live". Retrieved 2017-07-13.
- ↑ In this TED talk video at 00:04:59 you can see two clips, one with the real Emily shot with a real camera and one with a digital look-alike of Emily, shot with a simulation of a camera - Which is which is difficult to tell. Bruce Lawmen was scanned using USC light stage 6 in still position and also recorded running there on a w:treadmill. Many, many digital look-alikes of Bruce are seen running fluently and natural looking at the ending sequence of the TED talk video.
- ↑ Pighin, Frédéric. "Siggraph 2005 Digital Face Cloning Course Notes" (PDF). Retrieved 2020-06-26.
- ↑ https://ict.usc.edu/about/
- ↑ "Images de synthèse : palme de la longévité pour l'ombrage de Gouraud".
- ↑ Mechanismus der menschlichen Sprache nebst der Beschreibung seiner sprechenden Maschine ("Mechanism of the human speech with description of its speaking machine", J. B. Degen, Wien).
- ↑ History and Development of Speech Synthesis, Helsinki University of Technology, Retrieved on November 4, 2006