}}</ref>
{{#lst:Quotes|MatrixTrad}}
=== The problems with digital look-alikes ===
These industrially produced pornographic delusions are causing great human suffering, especially in their direct victims, but they are also tearing our communities and societies apart, sowing blind rage, perceptions of deepening chaos and feelings of powerlessness, and provoking violence. This '''hate illustration''' increases and strengthens hate thinking, hate speech and hate crimes, tears our fragile social constructions apart and with time perverts humankind's view of humankind into an almost unrecognizable shape, unless we interfere with resolve.
=== List of possible naked digital look-alike attacks ===
Living people can defend<ref group="footnote" name="judiciary maybe not aware">Whether a suspect can defend against faked synthetic speech that sounds like him/her depends on how up-to-date the judiciary is. If no information and instructions about digital sound-alikes have been given to the judiciary, they likely will not believe the defense of denying that the recording is of the suspect's voice.</ref> themselves against a digital sound-alike by denying the things the digital sound-alike says, if they are presented to the target, but dead people cannot. Digital sound-alikes offer criminals new disinformation attack vectors and wreak havoc on provability.
For these reasons the bannable '''raw materials''', i.e. covert voice models, '''[[Law proposals to ban covert modeling|should be prohibited by law]]''' in order to protect humans from abuse by criminal parties.
=== Documented digital sound-alike attacks ===
=== 'Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis' 2018 by Google Research (external transclusion) ===
<section begin=GoogleTransferLearning2018 />
* In '''2018''' at the '''[[w:Conference on Neural Information Processing Systems]]''' (NeurIPS) the work [http://papers.nips.cc/paper/7700-transfer-learning-from-speaker-verification-to-multispeaker-text-to-speech-synthesis 'Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis'] ([https://arxiv.org/abs/1806.04558 at arXiv.org]) was presented. The pre-trained model is able to steal voices from a sample of only '''5 seconds''' with almost convincing results.
The Iframe above is transcluded from [https://google.github.io/tacotron/publications/speaker_adaptation/ 'Audio samples from "Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis"' at google.github.io], the audio samples of a sound-like-anyone machine presented at the 2018 [[w:NeurIPS]] conference by Google researchers.
<section end=GoogleTransferLearning2018 />
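To make the structure of such voice-cloning systems concrete, below is a minimal, self-contained Python sketch of the three stages the paper describes: a speaker encoder that turns a short reference clip into a fixed-size embedding, a synthesizer that maps text plus that embedding to a mel spectrogram, and a vocoder that renders the spectrogram as a waveform. Every function name here is a hypothetical stand-in written for this illustration, not the Google implementation; the real stages are large neural networks, and the toy encoder below only mimics the interfaces, not the capability.

<syntaxhighlight lang="python">
import numpy as np

def toy_speaker_encoder(waveform, frame=400):
    """Hypothetical stand-in for the speaker encoder: summarize a clip as an
    L2-normalized, fixed-size vector (here from crude per-frame statistics)."""
    n = len(waveform) // frame
    frames = waveform[: n * frame].reshape(n, frame)
    feats = np.concatenate([frames.mean(axis=1), frames.std(axis=1)])
    vec = np.resize(feats, 256)                 # force a fixed dimensionality
    return vec / (np.linalg.norm(vec) + 1e-9)

def synthesizer_stub(text, speaker_embedding):
    """Placeholder for the Tacotron-style synthesizer: text + embedding -> mel frames."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal((len(text) * 5, 80)) + speaker_embedding[:80]

def vocoder_stub(mel, sr=16000):
    """Placeholder for the WaveNet-style vocoder: mel frames -> waveform samples."""
    return np.repeat(mel.mean(axis=1), sr // 80)

if __name__ == "__main__":
    # A "5 second" reference clip; random noise stands in for recorded speech.
    reference_clip = np.random.default_rng(0).standard_normal(5 * 16000)
    embedding = toy_speaker_encoder(reference_clip)                    # stage 1
    mel = synthesizer_stub("Text the target never said.", embedding)  # stage 2
    waveform = vocoder_stub(mel)                                      # stage 3
    print(embedding.shape, mel.shape, waveform.shape)
</syntaxhighlight>

The point of the sketch is the division of labour: once a speaker encoder has been trained on a speaker-verification task, conditioning the synthesizer on a new voice only requires computing one embedding from a few seconds of reference audio, which is why such short samples can suffice.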
=== Digital sing-alikes ===
=== Organizations against synthetic human-like fakes ===
[[File:DARPA_Logo.jpg|thumb|right|240px|The Defense Advanced Research Projects Agency, better known as [[w:DARPA]], has been active in the field of countering synthetic fake video for longer than the public has been aware that the problems exist.]]
* '''[[w:DARPA]]''' [https://www.darpa.mil/program/media-forensics '''DARPA program''': ''''Media Forensics'''' ('''MediFor''') at darpa.mil] aims to develop technologies for the automated assessment of the integrity of an image or video and integrating these in an end-to-end media forensics platform. Archive.org first crawled their homepage in [https://web.archive.org/web/20160630154819/https://www.darpa.mil/program/media-forensics June '''2016''']<ref name="IA-MediFor-2016-crawl">https://web.archive.org/web/20160630154819/https://www.darpa.mil/program/media-forensics</ref>.
* '''[[w:University of Colorado Denver]]''' is the home of the [https://artsandmedia.ucdenver.edu/areas-of-study/national-center-for-media-forensics/about-the-national-center-for-media-forensics '''National Center for Media Forensics''' at artsandmedia.ucdenver.edu]. The NCMF offers a [https://artsandmedia.ucdenver.edu/areas-of-study/national-center-for-media-forensics/media-forensics-graduate-program Master's degree program], [https://artsandmedia.ucdenver.edu/areas-of-study/national-center-for-media-forensics/training-courses training courses] and [https://artsandmedia.ucdenver.edu/areas-of-study/national-center-for-media-forensics/national-center-for-media-forensics-research scientific basic and applied research]. [https://artsandmedia.ucdenver.edu/areas-of-study/national-center-for-media-forensics/faculty-staff Faculty staff at the NCMF]
* [https://www.clemson.edu/centers-institutes/watt/hub/index.html '''Media Forensics Hub''' at clemson.edu] at the Watt Family Innovation Center of the '''[[w:Clemson University]]''' aims to promote multi-disciplinary research and to collect and facilitate discussion and ideation of challenges and solutions. They provide [https://www.clemson.edu/centers-institutes/watt/hub/resources/ resources], [https://www.clemson.edu/centers-institutes/watt/hub/connect-collab/research.html research], [https://www.clemson.edu/centers-institutes/watt/hub/connect-collab/education.html media forensics education] and are running a [https://www.clemson.edu/centers-institutes/watt/hub/connect-collab/wg-disinfo.html '''Working Group''' on '''disinformation'''].<ref group="contact">mediaforensics@clemson.edu</ref>
* [https://lab.witness.org/ '''The WITNESS Media Lab''' at lab.witness.org] by [[w:Witness (organization)]], a human rights non-profit organization based in Brooklyn, New York, has been actively working against synthetic filth since 2018. They work both in awareness raising and in media forensics.
** [https://lab.witness.org/projects/osint-digital-forensics/ '''Open-source intelligence digital forensics''' - ''How do we work together to detect AI-manipulated media?'' at lab.witness.org]. "''In February '''2019''' WITNESS in association with [[w:George Washington University]] brought together a group of leading researchers in [[Glossary#Media forensics|media forensics]] and [[w:detection]] of [[w:deepfakes]] and other [[w:media manipulation]] with leading experts in social newsgathering, [[w:User-generated content]] and [[w:open-source intelligence]] ([[w:OSINT]]) verification and [[w:fact-checking]].''" (website)
** [https://lab.witness.org/projects/synthetic-media-and-deep-fakes/ '''Prepare, Don’t Panic: Synthetic Media and Deepfakes''' at lab.witness.org] is a summary page for WITNESS Media Lab's ongoing work against synthetic human-like fakes. Their work was launched in '''2018''' with the first multi-disciplinary convening around deepfakes preparedness, which led to the writing of the [http://witness.mediafire.com/file/q5juw7dc3a2w8p7/Deepfakes_Final.pdf/file '''report''' “'''Mal-uses of AI-generated Synthetic Media and Deepfakes: Pragmatic Solutions Discovery Convening'''”] (dated 2018-06-11). [https://blog.witness.org/2018/07/deepfakes/ '''''Deepfakes and Synthetic Media: What should we fear? What can we do?''''' at blog.witness.org]
[[File:Connie Leyva 2015.jpg|thumb|right|240px|[[w:California]] [[w:California State Senate|w:Senator]] [[w:Connie Leyva]] sponsored [https://leginfo.legislature.ca.gov/faces/billCompareClient.xhtml?bill_id=201920200SB564&showamends=false '''California Senate Bill SB 564''' - ''Depiction of individual using digital or electronic technology: sexually explicit material: cause of action''] in Feb '''2019'''. It is identical to Assembly Bill 602 authored by [[w:Marc Berman]]. The bill was [https://www.sagaftra.org/action-alert-support-california-bill-end-deepfake-porn endorsed by SAG-AFTRA]. It became law on 1 January 2020 in the [[w:California Civil Code|w:California Civil Code]] of the [[w:California Codes]].]]
* '''[[w:SAG-AFTRA]]''' [https://www.sagaftra.org/action-alert-support-california-bill-end-deepfake-porn SAG-AFTRA ACTION ALERT: '''"Support California Bill to End Deepfake Porn"''' at sagaftra.org '''endorses'''] [https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=201920200SB564 California Senate Bill SB 564] introduced to the [[w:California State Senate]] by [[w:California]] [[w:Connie Leyva|w:Senator Connie Leyva]] in Feb '''2019'''.
=== Organizations possibly against synthetic human-like fakes ===
Originally harvested from the study [https://www.europarl.europa.eu/RegData/etudes/STUD/2020/634452/EPRS_STU(2020)634452_EN.pdf The ethics of artificial intelligence: Issues and initiatives (.pdf)] by the [[w:European Parliamentary Research Service]], published on the [[w:Europa (web portal)]] in March 2020.<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020">
{{cite web | |||
|url= https://www.europarl.europa.eu/RegData/etudes/STUD/2020/634452/EPRS_STU(2020)634452_EN.pdf | |||
|title= The ethics of artificial intelligence: Issues and initiatives | |||
|last= | |||
|first= | |||
|date= March 2020 | |||
|website= [[w:Europa (web portal)]] | |||
|publisher=[[w:European Parliamentary Research Service]] | |||
|access-date=2021-02-17 | |||
|quote=This study deals with the ethical implications and moral questions that arise from the development and implementation of artificial intelligence (AI) technologies.}} | |||
</ref> | |||
* [https://ieai.mcts.tum.de/ '''INSTITUTE FOR ETHICS IN ARTIFICIAL INTELLIGENCE''' at ieai.mcts.tum.de] received initial funding from [[w:Facebook]] in 2019.<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/> [https://www.linkedin.com/company/ieaitum/ IEAI on LinkedIn.com] | |||
* [https://ethical.institute/ '''The Institute for Ethical AI & Machine Learning''' at ethical.institute]<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/> [https://www.linkedin.com/company/the-institute-for-ethical-machine-learning/ The Institute for Ethical AI & Machine Learning on LinkedIn.com] | |||
* [https://www.buckingham.ac.uk/research-the-institute-for-ethical-ai-in-education/ '''The Institute for Ethical AI in Education''' at buckingham.ac.uk]<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/> | |||
* [https://futureoflife.org/ '''Future of Life Institute''' at futureoflife.org] received funding from private donors.<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/> See [[w:Future of Life Institute]] for more info. | |||
* [https://www.ai-gakkai.or.jp/ '''The Japanese Society for Artificial Intelligence''' ('''JSAI''') at ai-gakkai.or.jp]. Publication: Ethical guidelines.<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/> | |||
* [https://ai-4-all.org/ '''AI4All''' at ai-4-all.org] funded by [[w:Google]]<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/> [https://www.linkedin.com/company/ai4allorg/ AI4All on LinkedIn.com] | |||
* [https://thefuturesociety.org/ '''The Future Society''' at thefuturesociety.org]<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/> [https://www.linkedin.com/company/thefuturesociety/ The Future Society on LinkedIn.com] | |||
* [https://ainowinstitute.org/ '''The AI Now Institute''' at ainowinstitute.org] at [[w:New York University]]<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/> [https://www.linkedin.com/company/ai-now-institute/about/ The AI Now Institute on LinkedIn.com]
* [https://www.partnershiponai.org/ '''Partnership on AI''' at partnershiponai.org] is based in the USA and funded by technology companies. See [[w:Partnership on AI]] and [https://www.linkedin.com/company/partnershipai/ Partnership on AI on LinkedIn.com] for more info. | |||
* [https://responsiblerobotics.org/ '''The Foundation for Responsible Robotics''' at responsiblerobotics.org] is based in [[w:Netherlands]].<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/> [https://www.linkedin.com/company/foundation-for-responsible-robotics/about/ The Foundation for Responsible Robotics on LinkedIn.com] | |||
* [https://ai4people.eu/ '''AI4People''' at ai4people.eu] is a multi-stakeholder forum based in [[w:Belgium]].<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/> [https://www.linkedin.com/company/ai-for-people/ AI4People on LinkedIn.com]
* [https://aiethicsinitiative.org/ '''The Ethics and Governance of Artificial Intelligence Initiative''' at aiethicsinitiative.org] is based in the USA.<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/> | |||
* [https://www.saidot.ai/ '''Saidot''' at saidot.ai] is a Finnish company offering a platform for AI transparency, explainability and communication.<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/> [https://www.linkedin.com/company/saidot/ Saidot on LinkedIn.com] | |||
* [https://www.eu-robotics.net/ '''euRobotics''' at eu-robotics.net] is funded by the [[w:European Commission]].<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/> | |||
* [https://www.gov.uk/government/organisations/centre-for-data-ethics-and-innovation '''Centre for Data Ethics and Innovation''' at gov.uk] is financed by the UK government. [https://cdei.blog.gov.uk/ '''Centre for Data Ethics and Innovation Blog''' at cdei.blog.gov.uk]<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/> [https://www.linkedin.com/company/centre-for-data-ethics-innovation/ Centre for Data Ethics and Innovation on LinkedIn.com]
* [http://sigai.acm.org/ '''ACM Special Interest Group on Artificial Intelligence''' at sigai.acm.org] is a [[w:Special Interest Group]] on AI by [[w:Association for Computing Machinery|ACM]].<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/> | |||
* [https://ethicsinaction.ieee.org/ '''IEEE Ethics in Action - in Autonomous and Intelligent Systems''' at ethicsinaction.ieee.org] | |||
* [https://www.counterhate.com/ '''The Center for Countering Digital Hate''' at counterhate.com] is an international not-for-profit NGO that seeks to disrupt the architecture of online hate and misinformation with offices in London and Washington DC. | |||
* [https://carnegieendowment.org/specialprojects/counteringinfluenceoperations '''Partnership for Countering Influence Operations''' ('''PCIO''') at carnegieendowment.org] is a partnership by the [[w:Carnegie Endowment for International Peace]] | |||
=== Other essential developments === | |||
* [https://www.montrealdeclaration-responsibleai.com/ '''The Montréal Declaration for a Responsible Development of Artificial Intelligence''' at montrealdeclaration-responsibleai.com] and the same site in French [https://www.declarationmontreal-iaresponsable.com/ '''La Déclaration de Montréal IA responsable''' at declarationmontreal-iaresponsable.com]<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/>
* [https://uniglobalunion.org/ '''UNI Global Union''' at uniglobalunion.org] is based in [[w:Nyon]], [[w:Switzerland]] and deals mainly with labor issues to do with AI and robotics.<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/> [https://www.linkedin.com/company/uni-global-union/ UNI Global Union on LinkedIn.com] | |||
* [https://cordis.europa.eu/project/id/IST-2000-26048 '''European Robotics Research Network''' at cordis.europa.eu] funded by the [[w:European Commission]].<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/> | |||
* [https://www.eu-robotics.net/ '''European Robotics Platform''' at eu-robotics.net] is funded by the [[w:European Commission]]. See [[w:European Robotics Platform]] and [[w:List of European Union robotics projects#EUROP]] for more info.<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/> | |||
=== Events against synthetic human-like fakes ===
* '''ONGOING''' | '''[[w:National Institute of Standards and Technology]]''' (NIST) | [https://www.nist.gov/itl/iad/mig/open-media-forensics-challenge '''Open Media Forensics Challenge''' at nist.gov] - ''Open Media Forensics Challenge Evaluation (OpenMFC) is an open evaluation series organized by the NIST to assess and measure the capability of media forensic algorithms and systems.''<ref>https://www.nist.gov/itl/iad/mig/open-media-forensics-challenge</ref>
* '''2021''' | '''[[w:Conference on Computer Vision and Pattern Recognition]] (CVPR)''' | [https://sites.google.com/view/mediaforensics2021 2021 Conference on Computer Vision and Pattern Recognition: ''''Workshop on Media Forensics'''' at sites.google.com], a '''June 2021''' workshop at the Conference on Computer Vision and Pattern Recognition.
* '''2020''' | '''CVPR''' | [https://sites.google.com/view/wmediaforensics2020/home 2020 Conference on Computer Vision and Pattern Recognition: ''''Workshop on Media Forensics'''' at sites.google.com], a '''June 2020''' workshop at the Conference on Computer Vision and Pattern Recognition.
* '''2020''' | The winners of the [https://venturebeat.com/2020/06/12/facebook-detection-challenge-winners-spot-deepfakes-with-82-accuracy/ Deepfake Detection Challenge reach 82% accuracy in detecting synthetic human-like fakes]<ref name="VentureBeat2020">https://venturebeat.com/2020/06/12/facebook-detection-challenge-winners-spot-deepfakes-with-82-accuracy/</ref>
* '''2017''' | '''[[w:National Institute of Standards and Technology|w:NIST]]''' [https://www.nist.gov/itl/iad/mig/nimble-challenge-2017-evaluation NIST ''''Nimble Challenge 2017'''' at nist.gov]
* '''2016''' | '''Nimble Challenge 2016'''<ref>https://www.nist.gov/itl/iad/mig/open-media-forensics-challenge</ref>
=== Studies against synthetic human-like fakes ===
* [https://www.cbinsights.com/research/future-of-information-warfare/ ''''Disinformation That Kills: The Expanding Battlefield Of Digital Warfare'''' at cbinsights.com], a '''2020'''-10-21 research brief on disinformation warfare by [[w:CB Insights]], a private company that provides [[w:market intelligence]] and [[w:business analytics]] services
* [https://arxiv.org/abs/2001.06564 ''''Media Forensics and DeepFakes: an overview'''' at arXiv.org] [https://arxiv.org/pdf/2001.06564.pdf (as .pdf at arXiv.org)], an overview on the subject of digital look-alikes and media forensics published in August '''2020''' in [https://ieeexplore.ieee.org/xpl/tocresult.jsp?isnumber=9177372 Volume 14 Issue 5 of IEEE Journal of Selected Topics in Signal Processing]. [https://ieeexplore.ieee.org/document/9115874 ''''Media Forensics and DeepFakes: An Overview'''' at ieeexplore.ieee.org] (paywalled, free abstract)
* [https://scholarship.law.duke.edu/cgi/viewcontent.cgi?article=1333&context=dltr ''''DEEPFAKES: False pornography is here and the law cannot protect you'''' at scholarship.law.duke.edu] by Douglas Harris, published in [https://scholarship.law.duke.edu/dltr/vol17/iss1/ Duke Law & Technology Review - Volume 17 on '''2019'''-01-05] by [[w:Duke University]]'s [[w:Duke University School of Law]]
<section end=APW_AI-transclusion />
=== SSF! wiki proposed countermeasure to weaponized synthetic pornography: Outlaw unauthorized synthetic pornography (transcluded) ===
Transcluded from [[Current and possible laws and their application#Law proposal to ban visual synthetic filth|Juho's proposal for banning unauthorized synthetic pornography]]
{{#section-h:Current and possible laws and their application|Law proposal to ban visual synthetic filth}}
=== SSF! wiki proposed countermeasure to weaponized synthetic pornography: Adequate Porn Watcher AI (concept) (transcluded) ===
Transcluded main contents from [[Adequate Porn Watcher AI (concept)]]
{{#lstx:Adequate Porn Watcher AI (concept)|See_also}}
=== SSF! wiki proposed countermeasure to digital sound-alikes: Outlawing digital sound-alikes (transcluded) ===
Transcluded from [[Current and possible laws and their application#Law proposal to ban unauthorized modeling of human voice|Juho's proposal on banning digital sound-alikes]]
{{#section-h:Current and possible laws and their application|Law proposal to ban unauthorized modeling of human voice}}
== Timeline of synthetic human-like fakes ==
=== 2020's synthetic human-like fakes ===
* '''2021''' | Science | [https://arxiv.org/pdf/2102.05630.pdf '''''Voice Cloning: a Multi-Speaker Text-to-Speech Synthesis Approach based on Transfer Learning''''' .pdf at arxiv.org], a paper submitted in Feb 2021 by researchers from the [[w:University of Turin]].<ref group="1st seen in" name="ConnectedPapers suggestion on Google Transfer learning 2018" />
* '''<font color="green">2020</font>''' | '''<font color="green">counter-measure</font>''' | On 2020-11-18 the [[w:Partnership on AI]] introduced the [https://incidentdatabase.ai/ ''''''AI Incident Database'''''' at incidentdatabase.ai].<ref name="PartnershipOnAI2020">https://www.partnershiponai.org/aiincidentdatabase/</ref>
** [https://www.cnet.com/news/mit-releases-deepfake-video-of-nixon-announcing-nasa-apollo-11-disaster/ Cnet.com July 2020 reporting ''MIT releases deepfake video of 'Nixon' announcing NASA Apollo 11 disaster'']
* '''2020''' | US state law | {{#lst:Current and possible laws and their application|California2020}}
* '''2020''' | Chinese legislation | {{#lst:Current and possible laws and their application|China2020}}
=== 2010's synthetic human-like fakes ===
* '''2019''' | demonstration | In September 2019 [[w:Yle]], the Finnish [[w:public broadcasting company]], aired a result of experimental [[w:journalism]], [https://yle.fi/uutiset/3-10955498 '''a deepfake of the President in office'''] [[w:Sauli Niinistö]] in its main news broadcast for the purpose of highlighting the advancing disinformation technology and problems that arise from it.
* '''2019''' | US state law | {{#lst:Current and possible laws and their application|Texas2019}}
* '''2019''' | US state law | {{#lst:Current and possible laws and their application|Virginia2019}}
* '''2019''' | Science | [https://arxiv.org/pdf/1809.10460.pdf '''''Sample Efficient Adaptive Text-to-Speech''''' .pdf at arxiv.org], a 2019 paper from Google researchers, published as a conference paper at [[w:International Conference on Learning Representations]] (ICLR)<ref group="1st seen in" name="ConnectedPapers suggestion on Google Transfer learning 2018"> https://www.connectedpapers.com/main/8fc09dfcff78ac9057ff0834a83d23eb38ca198a/Transfer-Learning-from-Speaker-Verification-to-Multispeaker-TextToSpeech-Synthesis/graph
</ref>
* '''2019''' | science and demonstration | [https://arxiv.org/pdf/1905.09773.pdf ''''Speech2Face: Learning the Face Behind a Voice'''' at arXiv.org] a system for generating likely facial features based on the voice of a person, presented by the [[w:MIT Computer Science and Artificial Intelligence Laboratory]] at the 2019 [[w:Conference on Computer Vision and Pattern Recognition|w:CVPR]]. [https://github.com/saiteja-talluri/Speech2Face Speech2Face at github.com] This may develop into something that really causes problems. [https://neurohive.io/en/news/speech2face-neural-network-predicts-the-face-behind-a-voice/ "Speech2Face: Neural Network Predicts the Face Behind a Voice" reporting at neurohive.io], [https://belitsoft.com/speech-recognition-software-development/speech2face "Speech2Face Sees Voices and Hears Faces: Dreams Come True with AI" reporting at belitsoft.com]
* '''2016''' | music video | [https://www.youtube.com/watch?v=tMQHAy0HUDo ''''''Plug'''''' by Kube at youtube.com] - A 2016 music video by [[w:Kube (rapper)]] ([[w:fi:Kube]]) that shows deepfake-like technology this early. The video was uploaded on 2016-09-15 and was directed by Faruk Nazeri.
* '''2015''' | Science | [https://arxiv.org/abs/1411.7766v3 ''''''Deep Learning Face Attributes in the Wild'''''' at arxiv.org] presented at the 2015 [[w:International Conference on Computer Vision]]
* '''2015''' | movie | In ''[[w:Furious 7]]'' a digital look-alike of the actor [[w:Paul Walker]], who died in an accident during the filming, was made by [[w:Weta Digital]] to enable the completion of the film.<ref name="thr2015">