Organizations, studies and events against synthetic human-like fakes

Here you can find [[#Organizations against synthetic human-like fakes|organizations]], [[#Studies against synthetic human-like fakes|studies]] and [[#Events against synthetic human-like fakes|events]] against [[synthetic human-like fakes]] and also [[#Organizations for media forensics|organizations and curricula for media forensics]].


The [[Synthetic human-like fakes#Timeline of synthetic human-like fakes|SSFWIKI timeline of synthetic human-like fakes]] lists both positive and negative developments in reverse chronological order.


For laws in effect and bills in planning against synthetic filth see [[Laws against synthesis and other related crimes]].
 
 


<section begin=core organizations />
= Organizations against synthetic human-like fakes =
== AI incident repositories ==
* <section begin=incidentdatabase.ai />The [https://incidentdatabase.ai/ '''''AI Incident Database''''' at incidentdatabase.ai] was introduced on 2020-11-18 by the [[w:Partnership on AI]].<ref name="PartnershipOnAI2020">https://www.partnershiponai.org/aiincidentdatabase/</ref><section end=incidentdatabase.ai />




</ref> was founded by Charlie Pownall. The [https://www.aiaaic.org/aiaaic-repository '''AIAAIC repository''' at aiaaic.org] contains extensive reporting on problematic uses of AI.<section end=AIAAIC.org /> The domain name aiaaic.org was registered on Tuesday 2021-02-23.<ref>whois aiaaic.org</ref> The AIAAIC repository is a free, open resource which anyone can use, copy, redistribute and adapt under the terms of its [https://creativecommons.org/licenses/by/4.0/ CC BY 4.0 license].<ref>https://charliepownall.com/ai-algorithimic-incident-controversy-database/</ref>
* <section begin=oecd.ai />[https://oecd.ai/en/ '''The OECD.AI Policy Observatory''' at oecd.ai], in conjunction with the Patrick J McGovern Foundation, provides the [https://oecd.ai/en/incidents '''OECD AI Incidents Monitor''' ('''AIM''') at oecd.ai]<section end=oecd.ai />
* <section begin=AJL.org />The [[w:Algorithmic Justice League]] is also accepting reports of AI harms at [https://report.ajl.org/ report.ajl.org]<section end=AJL.org />


== Help for victims of image- or audio-based abuse ==
<section begin=cybercivilrights.org />* [https://cybercivilrights.org/ '''Cyber Civil Rights Initiative''' at cybercivilrights.org], a US-based NGO.<ref group="contact" name="CCRI">
Contact '''Cyber Civil Rights Initiative''' at cybercivilrights.org
* https://www.facebook.com/CyberCivilRightsInitiative


</ref> [https://cybercivilrights.org/about/ '''History / Mission / Vision''' of cybercivilrights.org]. [https://cybercivilrights.org/faqs-usvictims/ '''''Get help now''''' - '''CCRI Safety Center''' at cybercivilrights.org] - '''CCRI Image Abuse Helpline''' - ''If you are a victim of image-based sexual abuse (IBSA), please call the CCRI Image Abuse Helpline at 1-844-878-2274, which is available free of charge, 24/7.''
** [https://cybercivilrights.org/existing-laws/ '''Existing Nonconsensual Pornography, Sextortion, and Deep Fake Laws''' at cybercivilrights.org]
*** [https://cybercivilrights.org/deep-fake-laws/ '''Deep Fake Laws''' in the USA at cybercivilrights.org]
*** [https://cybercivilrights.org/sextortion-laws/ '''Sextortion Laws''' in the USA at cybercivilrights.org]
*** [https://cybercivilrights.org/nonconsensual-pornagraphy-laws/ '''Nonconsensual Pornography Laws''' in the USA at cybercivilrights.org]<section end=cybercivilrights.org />


* <section begin=Report Remove />[https://www.childline.org.uk/info-advice/bullying-abuse-safety/online-mobile-safety/remove-nude-image-shared-online/ '''Report Remove: ''Remove a nude image shared online''''' at childline.org.uk]<ref group="1st seen in">https://www.iwf.org.uk/our-technology/report-remove/</ref>. Report Remove is a service for under-19-year-olds by [[w:Childline]], a UK service by the [[w:National Society for the Prevention of Cruelty to Children]] (NSPCC), powered by technology from the [[w:Internet Watch Foundation]]. - ''Childline is here to help anyone under 19 in the UK with any issue they’re going through.'' Info on [https://www.iwf.org.uk/our-technology/report-remove/ '''Report Remove''' at iwf.org.uk]<section end=Report Remove />


* <section begin=badassarmy.org />[https://badassarmy.org/ '''Battling Against Demeaning and Abusive Selfie Sharing''' at badassarmy.org] have compiled a [https://badassarmy.org/revenge-porn-laws-by-state/ '''list of revenge porn laws by US states''']<section end=badassarmy.org />
== Awareness and countermeasures ==
 
* [https://www.iwf.org.uk/ The '''Internet Watch Foundation''' at iwf.org.uk]<ref group="contact">
The '''Internet Watch Foundation''' at iwf.org.uk
</ref> [https://www.sagaftra.org/action-alert-support-california-bill-end-deepfake-porn SAG-AFTRA ACTION ALERT: '''"Support California Bill to End Deepfake Porn"''' at sagaftra.org '''endorses'''] [https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=201920200SB564 California Senate Bill SB 564], introduced to the [[w:California State Senate]] by [[w:California]] [[w:Connie Leyva|Senator Connie Leyva]] in February '''2019'''.


== Organizations for media forensics ==
 
[[File:DARPA_Logo.jpg|thumb|right|240px|The Defense Advanced Research Projects Agency, better known as [[w:DARPA]], has been active in the field of countering synthetic fake video since before the public became aware that such problems exist.]]




<section begin=other organizations />
== Organizations possibly against synthetic human-like fakes ==


Originally harvested from the study [https://www.europarl.europa.eu/RegData/etudes/STUD/2020/634452/EPRS_STU(2020)634452_EN.pdf The ethics of artificial intelligence: Issues and initiatives (.pdf)] by the [[w:European Parliamentary Research Service]], published on the [[w:Europa (web portal)]] in March 2020.<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020">
"The ethics of artificial intelligence: Issues and initiatives" (PDF). [[w:Europa (web portal)]]. [[w:European Parliamentary Research Service]]. March 2020. Retrieved 2021-02-17. ''This study deals with the ethical implications and moral questions that arise from the development and implementation of artificial intelligence (AI) technologies.''
</ref>


== Services that should get back to the task at hand - FacePinPoint.com ==
Transcluded from [[FacePinPoint.com]]
{{#lst:FacePinPoint.com|FacePinPoint.com}}

'''FacePinPoint.com''' was a for-a-fee service from 2017 to 2021 for pointing out where in pornography sites a particular face appears, or, in the case of synthetic pornography, where a digital look-alike makes make-believe of a face or body appearing.[contacted 7] The inventor and founder of FacePinPoint.com, Mr. Lionel Hagege, registered the domain name in 2015[6], when he set out to research the feasibility of his action plan idea against non-consensual pornography.[7] The description of how FacePinPoint.com worked is the same as the description of [[Adequate Porn Watcher AI (concept)]].
 
== Other essential developments ==
* [https://www.montrealdeclaration-responsibleai.com/ '''The Montréal Declaration for a Responsible Development of Artificial Intelligence''' at montrealdeclaration-responsibleai.com]<ref group="contact">


* [https://www.eu-robotics.net/ '''European Robotics Platform''' at eu-robotics.net] is funded by the [[w:European Commission]]. See [[w:European Robotics Platform]] and [[w:List of European Union robotics projects#EUROP]] for more info.<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/>


= Studies against synthetic human-like fakes =
 
== Detecting deep-fake audio through vocal tract reconstruction ==
{{#lst:Detecting deep-fake audio through vocal tract reconstruction|what-is-it}}

'''''Detecting deep-fake audio through vocal tract reconstruction''''' is an epic scientific work against fake human-like voices from the [[w:University of Florida]], published to peers in August 2022.

The Office of Naval Research (ONR) at nre.navy.mil of the USA funded this breakthrough science.

The work '''''Who Are You (I Really Wanna Know)? Detecting Audio DeepFakes Through Vocal Tract Reconstruction''''' at usenix.org (presentation page, the version included in the proceedings[8] and slides), by researchers of the Florida Institute for Cybersecurity Research (FICS) at fics.institute.ufl.edu in the [[w:University of Florida]], received funding from the [[w:Office of Naval Research]] and was presented in August 2022 at the 31st [[w:USENIX Security Symposium]].

This work was done by PhD student Logan Blue, Kevin Warren, Hadi Abdullah, Cassidy Gibson, Luis Vargas, Jessica O’Dell, Kevin Butler and Professor Patrick Traynor.

The University of Florida Research Foundation Inc has filed for and received a US patent titled 'Detecting deep-fake audio through vocal tract reconstruction', registration number US20220036904A1 (at patents.google.com), with 20 claims. The patent application was published on Thursday 2022-02-03, was approved on 2023-07-04 and has an adjusted expiration date of 2041-12-29.
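The gist, per the paper: genuine speech is produced by a physical vocal tract, so the resonance structure of real speech is constrained by human anatomy, whereas a deepfake generator is under no such constraint. Below is a minimal illustrative sketch of that general signal-processing idea, '''not''' the authors' actual pipeline: it estimates the resonance frequencies (formants) of one audio frame with textbook linear predictive coding and checks them against plausibility bands. The LPC order and the bands are assumptions made for this illustration only.

<syntaxhighlight lang="python">
# Illustrative sketch only - NOT the pipeline of Blue et al. (2022).
# Idea: estimate vocal tract resonances (formants) from a speech frame via
# linear predictive coding (LPC), then check them against bands that are
# plausible for human anatomy. The bands below are assumed for illustration.
import numpy as np

def lpc_coefficients(frame: np.ndarray, order: int) -> np.ndarray:
    """Autocorrelation-method LPC via the Levinson-Durbin recursion."""
    n = len(frame)
    r = np.correlate(frame, frame, mode="full")[n - 1:n + order]  # lags 0..order
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err
        new_a = a.copy()
        for j in range(1, i):
            new_a[j] = a[j] + k * a[i - j]
        new_a[i] = k
        a = new_a
        err *= 1.0 - k * k
    return a

def formants(frame: np.ndarray, fs: float, order: int = 12) -> np.ndarray:
    """Resonance frequencies (Hz) from the roots of the LPC polynomial."""
    a = lpc_coefficients(frame * np.hamming(len(frame)), order)
    roots = np.roots(a)
    roots = roots[np.imag(roots) > 0.01]          # keep one root per conjugate pair
    return np.sort(np.angle(roots) * fs / (2.0 * np.pi))

def plausible_for_a_human(freqs: np.ndarray) -> bool:
    """Assumed, illustrative bands for the first two formants of adult speech."""
    f = [x for x in freqs if 90.0 < x < 5000.0]
    return len(f) >= 2 and 200.0 <= f[0] <= 1000.0 and 500.0 <= f[1] <= 3000.0

if __name__ == "__main__":
    fs = 16000
    frame = np.random.default_rng(0).standard_normal(512)  # stand-in for a real speech frame
    f = formants(frame, fs)
    print(f, plausible_for_a_human(f))
</syntaxhighlight>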
 
== Protecting world leaders against deep fakes using facial, gestural, and vocal mannerisms ==
{{#lst:Protecting world leaders against deep fakes using facial, gestural, and vocal mannerisms|what-is-it}}
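Per its title, the study builds person-specific baselines of facial, gestural and vocal mannerisms from authentic footage and flags video that deviates from them. Below is a minimal sketch of that general pattern only, under stated assumptions: the mannerism features are stubbed out with random numbers, and the use of a one-class SVM with these parameters is an illustrative choice, not the authors' model.

<syntaxhighlight lang="python">
# Minimal sketch of person-specific behavioral baselining in the spirit of
# Bohacek & Farid (2022): fit a one-class model on features extracted from
# authentic footage of one person, then flag clips that fall outside it.
# Feature extraction is stubbed out with random data; in the real work the
# features are facial, gestural and vocal mannerisms.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
authentic = rng.normal(0.0, 1.0, size=(200, 16))  # stand-in: 16 mannerism features per clip
suspect = rng.normal(2.5, 1.0, size=(5, 16))      # stand-in: clips to verify

scaler = StandardScaler().fit(authentic)
model = OneClassSVM(kernel="rbf", nu=0.05).fit(scaler.transform(authentic))

pred = model.predict(scaler.transform(suspect))   # +1 = consistent with baseline, -1 = outlier
print(["consistent" if p == 1 else "flagged" for p in pred])
</syntaxhighlight>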
 
== Protecting President Zelenskyy against deep fakes ==
{{#lst:Protecting President Zelenskyy against deep fakes|what-is-it}}
 
== Other studies against synthetic human-like fakes ==
* [https://www.icct.nl/sites/default/files/2023-12/The%20Weaponisation%20of%20Deepfakes.pdf '''''The Weaponisation of Deepfakes - Digital Deception by the Far-Right''''' at icct.nl], an [[w:International Centre for Counter-Terrorism]] policy brief by Ella Busch and Jacob Ware, published in December 2023.
 
* [https://media.defense.gov/2023/Sep/12/2003298925/-1/-1/0/CSI-DEEPFAKE-THREATS.PDF '''''Contextualizing Deepfake Threats to Organizations''''' - '''Cybersecurity Information Sheet''' at media.defense.gov], published 2023-09-12
 
* [https://www.dhs.gov/sites/default/files/publications/increasing_threats_of_deepfake_identities_0.pdf '''''Increasing Threats of Deepfake Identities''''' at dhs.gov] by the [[w:United States Department of Homeland Security]]
 
* [https://www.dhs.gov/sites/default/files/2022-10/AEP%20DeepFake%20PHASE2%20FINAL%20corrected20221006.pdf '''''Increasing Threats of Deepfake Identities''' – '''Phase 2: Mitigation Measures''''' at dhs.gov]
 
* [https://link.springer.com/article/10.1007/s13347-023-00657-0 '''''Deepfake Pornography and the Ethics of Non-Veridical Representations''''' at link.springer.com], a 2023 research article, published in [[w:Philosophy & Technology]] on 2023-08-23. (paywalled)
 
* [https://digitalcommons.law.uidaho.edu/cgi/viewcontent.cgi?article=1252&context=idaho-law-review '''''NONCONSENSUAL DEEPFAKES: DETECTING AND REGULATING THE RISING THREAT TO PRIVACY''''' at digitalcommons.law.uidaho.edu] by Natalie Lussier, published in the Idaho Law Review in January 2022
 
* [https://www.cbinsights.com/research/future-of-information-warfare/ '''''Disinformation That Kills: The Expanding Battlefield Of Digital Warfare''''' at cbinsights.com], a '''2020'''-10-21 research brief on disinformation warfare by [[w:CB Insights]], a private company that provides [[w:market intelligence]] and [[w:business analytics]] services
 
* [https://arxiv.org/abs/2001.06564 '''''Media Forensics and DeepFakes: an overview''''' at arXiv.org] [https://arxiv.org/pdf/2001.06564.pdf (as .pdf at arXiv.org)], an overview of the subject of digital look-alikes and media forensics, published in August '''2020''' in [https://ieeexplore.ieee.org/xpl/tocresult.jsp?isnumber=9177372 Volume 14, Issue 5 of the IEEE Journal of Selected Topics in Signal Processing]. [https://ieeexplore.ieee.org/document/9115874 '''''Media Forensics and DeepFakes: An Overview''''' at ieeexplore.ieee.org] (paywalled, free abstract)
 
* [https://scholarship.law.duke.edu/cgi/viewcontent.cgi?article=1333&context=dltr '''''DEEPFAKES: False pornography is here and the law cannot protect you''''' at scholarship.law.duke.edu] by Douglas Harris, published in [https://scholarship.law.duke.edu/dltr/vol17/iss1/ Duke Law & Technology Review - Volume 17 on '''2019'''-01-05] by the [[w:Duke University School of Law]]
 
'''Legal information compilations'''
{{#lst:Laws against synthesis and other related crimes|anti-fake-law-compilations}}
 
More studies can be found in the [[Synthetic human-like fakes#Timeline of synthetic human-like fakes|SSFWIKI Timeline of synthetic human-like fakes]].
 
''' Search for more '''
* [[w:Law review]]
** [[w:List of law reviews in the United States]]
 
''' Reporting against synthetic human-like fakes '''
* [https://news.berkeley.edu/2019/06/18/researchers-use-facial-quirks-to-unmask-deepfakes/ '''''Researchers use facial quirks to unmask ‘deepfakes’''''' at news.berkeley.edu], 2019-06-18 reporting by Kara Manke, published in the ''Politics & society, Research, Technology & engineering'' section of Berkeley News of [[w:University of California, Berkeley|UC Berkeley]].


''' Companies against synthetic human-like fakes '''
See [[resources]] for more.


* '''[https://cyabra.com/ Cyabra.com]''' is an AI-based system that helps organizations be on guard against disinformation attacks<ref group="1st seen in" name="ReutersDisinfomation2020">https://www.reuters.com/article/us-cyber-deepfake-activist/deepfake-used-to-attack-activist-couple-shows-new-disinformation-frontier-idUSKCN24G15E</ref>. [https://www.reuters.com/article/us-cyber-deepfake-activist/deepfake-used-to-attack-activist-couple-shows-new-disinformation-frontier-idUSKCN24G15E Reuters.com reporting] from July 2020.


= Events against synthetic human-like fakes =
== Upcoming events ==
In reverse chronological order 


== Ongoing events ==


* '''2020 - ONGOING''' | '''[[w:National Institute of Standards and Technology]]''' ('''NIST''') ([https://www.nist.gov/ nist.gov]) ([https://www.nist.gov/about-nist/contact-us contacting NIST]) | Open Media Forensics Challenge presented in [https://www.nist.gov/itl/iad/mig/open-media-forensics-challenge '''Open Media Forensics Challenge''' at nist.gov] and [https://mfc.nist.gov/ '''Open Media Forensics Challenge''' ('''OpenMFC''') at mfc.nist.gov]<ref group="contact">


</ref> - ''Open Media Forensics Challenge Evaluation (OpenMFC) is an open evaluation series organized by NIST to assess and measure the capability of media forensic algorithms and systems.''<ref>https://www.nist.gov/itl/iad/mig/open-media-forensics-challenge</ref>
== Past events ==
* '''2023''' | [https://worldantibullyingforum.com '''World Anti-Bullying Forum'''] October 25-27 in North Carolina, U.S.A.
* '''2022''' | '''[[w:European Conference on Computer Vision]]''' in Tel Aviv, Israel
* '''2022''' | [https://conferences.sigcomm.org/hotnets/2022/ '''HotNets 2022: Twenty-First ACM Workshop on Hot Topics in Networks''' at conferences.sigcomm.org] was held on November 14-15, 2022 in Austin, Texas, USA. Presented at HotNets 2022 was a notable paper, [https://farid.berkeley.edu/downloads/publications/hotnets22.pdf '''''Global Content Revocation on the Internet: A Case Study in Technology Ecosystem Transformation''''' at farid.berkeley.edu].
* '''2022''' | [https://www.interspeech2022.org/ '''INTERSPEECH 2022''' at interspeech2022.org], organized by the [[w:International Speech Communication Association]], was held on 18-22 September 2022 in Korea. The work [https://arxiv.org/abs/2203.15563 '''''Attacker Attribution of Audio Deepfakes''''' at arxiv.org] was presented there.
* '''2022''' | [https://law.yale.edu/isp/events/technologies-deception '''Technologies of Deception''' at law.yale.edu], a conference hosted by the [[w:Information Society Project]] (ISP), was held at Yale Law School in New Haven, Connecticut, on March 25-26, 2022<ref>https://law.yale.edu/isp/events/technologies-deception</ref>
* '''2021''' | '''[[w:Conference on Neural Information Processing Systems]]''' [https://neurips.cc/ '''NeurIPS 2021''' at neurips.cc] was held virtually in December 2021. No good countermeasure technology seems to have come out of it; on the problematic side, [[w:StyleGAN]]3 was presented there.


* '''2021''' | '''[[w:Conference on Computer Vision and Pattern Recognition]] (CVPR)''' 2021 [https://cvpr2021.thecvf.com/ '''CVPR 2021''' at cvpr2021.thecvf.com]


* '''2020''' | The winners of the [https://venturebeat.com/2020/06/12/facebook-detection-challenge-winners-spot-deepfakes-with-82-accuracy/ Deepfake Detection Challenge reached 82% accuracy in detecting synthetic human-like fakes]<ref name="VentureBeat2020">https://venturebeat.com/2020/06/12/facebook-detection-challenge-winners-spot-deepfakes-with-82-accuracy/</ref>
* '''2020''' | [https://www.ftc.gov/news-events/events/2020/01/you-dont-say-ftc-workshop-voice-cloning-technologies '''''You Don't Say: An FTC Workshop on Voice Cloning Technologies''''' at ftc.gov] was held on Tuesday 2020-01-28 - [https://venturebeat.com/2020/01/29/ftc-voice-cloning-seminar-crime-use-cases-safeguards-ai-machine-learning/ reporting at venturebeat.com]


* '''2019''' | At the annual Finnish [[w:Ministry of Defence (Finland)|w:Ministry of Defence]]'s '''Scientific Advisory Board for Defence''' ('''MATINE''') public research seminar, a research group presented their work [https://www.defmin.fi/files/4755/1315MATINE_seminaari_21.11.pdf '''''Synteettisen median tunnistus''''' at defmin.fi] (''Recognizing synthetic media''). They built on earlier work on how to automatically detect synthetic human-like fakes, and their work was funded with a grant from MATINE.
* '''2018''' | '''NIST''' '''''Media Forensics Challenge 2018''''' at nist.gov was the second annual evaluation to support research and help advance the state of the art for image and video forensics technologies - technologies that determine the region and type of manipulations in imagery (image/video data) and the phylogenic process that modified the imagery.


* '''2016''' | '''Nimble Challenge 2016''' - NIST released the Nimble Challenge’16 (NC2016) dataset as the MFC program kickoff dataset (NC being the former name of MFC).<ref>https://www.nist.gov/itl/iad/mig/open-media-forensics-challenge</ref>


<section end=other organizations />


= SSFWIKI proposed countermeasure to weaponized synthetic pornography: Outlaw unauthorized synthetic pornography (transcluded) =
  Transcluded from [[Laws against synthesis and other related crimes#Law proposal to ban visual synthetic filth|Juho's proposal for banning unauthorized synthetic pornography]]


{{#section-h:Laws against synthesis and other related crimes|Law proposal to ban visual synthetic filth}}

== §1 Models of human appearance ==
A model of human appearance means

== §2 Producing synthetic pornography ==
Making projections, still or videographic, in which targets are portrayed nude or in a sexual situation, from models of human appearance defined in §1, without the express consent of the targets, is illegal.

== §3 Distributing synthetic pornography ==
Distributing, making available, public display, purchase, sale, yielding, import and export of non-authorized synthetic pornography defined in §2 are punishable.[footnote 1]

== §4 Aggravated producing and distributing synthetic pornography ==
If the media described in §2 or §3 is made or distributed with the intent to frame for a crime or for blackmail, the crime should be judged as aggravated.

== Afterwords ==
The original idea was to ban both the raw materials, i.e. the models for making visual synthetic filth, and the end product, weaponized synthetic pornography. But then in July 2019 it appeared to me that [[Adequate Porn Watcher AI (concept)]] could really help in this age of industrial disinformation if it were built, trained and operational; banning the modeling of human appearance was in conflict with that revised plan.

It is safe to assume that collecting permissions to model each pornographic recording is not plausible, so an interesting question is whether we can ban covert modeling from non-pornographic pictures while still retaining the ability to model all porn found on the Internet.

If banning the modeling of people's appearance from non-pornographic images/videos without explicit permission is to be pursued, it must be formulated so that it does not make [[Adequate Porn Watcher AI (concept)]] illegal / impossible. This would seem to lead to a weird situation where modeling a human from non-pornographic media would be illegal, but modeling from pornography legal.


= SSFWIKI proposed countermeasure to weaponized synthetic pornography: Adequate Porn Watcher AI (concept) (transcluded) =
  Transcluded main contents from [[Adequate Porn Watcher AI (concept)]]


{{#lstx:Adequate Porn Watcher AI (concept)|See_also}}

'''Adequate Porn Watcher AI''' ('''APW_AI''') is an [[w:AI]] and [[w:computer vision]] concept to search for any and all porn that should not be, by watching and modeling all porn ever found on the [[w:Internet]], thus effectively protecting humans by exposing covert naked digital look-alike attacks, as well as other contraband.

Obs. A service identical to APW_AI used to exist - [[FacePinPoint.com]]

== The method and the effect ==
The method by which APW_AI would provide safety and security to its users is that they can briefly upload a model they've gotten of themselves, and the APW_AI will then report either that nothing matching was found or that something matching was found.

If people are able to check whether there is synthetic porn that looks like themselves, the products of the synthetic hate-illustration industrialists lose destructive potential, and the attacks that do happen are less destructive, as they are exposed by the APW_AI; this decimates the monetary value of these disinformation weapons to the criminals.

If you feel comfortable leaving your model with the good people at the benefactor for safekeeping, you get alerted and helped if you ever come under a synthetic porn attack.

== Rules ==
Looking up whether matches are found for anyone else's model is forbidden, and this should probably be enforced with a [[w:biometric]] [[w:facial recognition system]] app that checks that the model you want checked is yours and that you are awake.

== Definition of adequacy ==
An adequate implementation should be nearly free of false positives, very good at finding true positives and able to process more porn than is ever uploaded.

== What about the people in the porn-industry? ==
People who openly do porn can help by opting in to the development, providing training material and material to test the AI on. People and companies who help in training the AI naturally get credited for their help.

There are of course lots of people-questions here, and those questions need to be identified by professionals of psychology and social sciences.

== History ==
The idea of APW_AI occurred to User:Juho Kunsola on Friday 2019-07-12. Subsequently (the next day) this discovery caused the scrapping of the plea to ban covert modeling of human appearance, as that would have rendered APW_AI legally impossible.
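Since no implementation of APW_AI exists, the following is only a toy sketch of the matching machinery the concept implies: index face embeddings computed from crawled frames, then answer a user's query by nearest-neighbour search in that embedding space. <code>embed_face</code> is a hypothetical stand-in for a real face-embedding model, and the 0.35 cosine-distance threshold is an arbitrary illustrative value; an adequate implementation would also need the liveness-checked lookup gate and the false-positive requirements described above.

<syntaxhighlight lang="python">
# Toy sketch of the kind of matching APW_AI would need: index face
# embeddings computed from crawled porn frames, then answer a user's
# "does anything match my model?" query by nearest-neighbour search.
# embed_face() is a hypothetical stand-in for a real face-embedding model;
# the 0.35 match threshold is an illustrative assumption.
import numpy as np

def embed_face(image: np.ndarray) -> np.ndarray:
    """Hypothetical face-embedding function returning a unit vector."""
    v = np.resize(image.astype(float).ravel()[:128], 128)
    n = np.linalg.norm(v)
    return v / n if n else v

class FakeExposureIndex:
    def __init__(self):
        self.embeddings = []   # one unit vector per indexed frame
        self.sources = []      # where each frame was found

    def index_frame(self, image: np.ndarray, source_url: str) -> None:
        self.embeddings.append(embed_face(image))
        self.sources.append(source_url)

    def query(self, user_model: np.ndarray, threshold: float = 0.35) -> list:
        """Return sources whose cosine distance to the user's model is below threshold."""
        if not self.embeddings:
            return []
        d = 1.0 - np.stack(self.embeddings) @ user_model
        return [self.sources[i] for i in np.flatnonzero(d < threshold)]

idx = FakeExposureIndex()
rng = np.random.default_rng(1)
idx.index_frame(rng.random((64, 64)), "https://example.invalid/frame-1")
print(idx.query(embed_face(rng.random((64, 64)))) or "nothing matching found")
</syntaxhighlight>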


= SSFWIKI proposed countermeasure to digital sound-alikes: Outlawing digital sound-alikes (transcluded) =
  Transcluded from [[Laws against synthesis and other related crimes#Law proposal to ban unauthorized modeling of human voice|Juho's proposal on banning digital sound-alikes]]


{{#section-h:Laws against synthesis and other related crimes|Law proposal to ban unauthorized modeling of human voice}}

'''Motivation''': The current situation, where criminals can freely trade and grow their libraries of stolen voices, is unwise.

== §1 Unauthorized modeling of a human voice ==
Acquiring such a model of a human's voice that deceptively resembles some dead or living person's voice, as well as its possession, purchase, sale, yielding, import and export without the express consent of the target, are punishable.

== §2 Application of unauthorized voice models ==
Producing and making available media from covert voice models defined in §1 is punishable.

== §3 Aggravated application of unauthorized voice models ==
If the produced media is made for a purpose to
* frame a human target or targets for crimes,
* attempt extortion or
* defame the target,
the crime should be judged as aggravated.
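To make concrete what "deceptively resembles some dead or living person's voice" could mean operationally (an assumption of this sketch, not part of the law proposal): compare a candidate voice against reference recordings of the protected person in a speaker-embedding space and apply a similarity threshold. <code>embed_voice</code> is a hypothetical stand-in for a real speaker-embedding model; the 0.25 threshold is illustrative.

<syntaxhighlight lang="python">
# Sketch of how "deceptively resembles" could be operationalized: compare a
# candidate voice against a protected person's reference recordings in a
# speaker-embedding space. embed_voice() is a hypothetical stand-in for a
# real speaker-embedding model; the 0.25 threshold is an assumption.
import numpy as np

def embed_voice(samples: np.ndarray) -> np.ndarray:
    """Hypothetical speaker-embedding function returning a unit vector."""
    v = np.resize(samples.astype(float), 64)
    n = np.linalg.norm(v)
    return v / n if n else v

def deceptively_resembles(candidate: np.ndarray, references: list,
                          threshold: float = 0.25) -> bool:
    """True if cosine distance to any reference falls under the threshold."""
    c = embed_voice(candidate)
    return any(1.0 - float(embed_voice(r) @ c) < threshold for r in references)

rng = np.random.default_rng(2)
refs = [rng.standard_normal(16000) for _ in range(3)]  # stand-ins for recordings
print(deceptively_resembles(rng.standard_normal(16000), refs))
</syntaxhighlight>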


----
== About this article ==
Transcluded in this article are
* [[FacePinPoint.com]], a crucial past service by Lionel Hagege
* [[Adequate Porn Watcher AI (concept)]], an AI concept practically identical with FacePinPoint.com
* [[Laws against synthesis and other related crimes#Law proposal to ban visual synthetic filth|A law proposal against synthetic non-consensual pornography]]
* [[Laws against synthesis and other related crimes#Law proposal to ban unauthorized modeling of human voice|A law proposal against digital sound-alikes]]. 
In [[resources]] there are likely a few services that could fit here.


== Footnotes ==
# People who are found in possession of this synthetic pornography should probably not be penalized, but rather advised to get some help.

== Contact information ==
''Please contact these organizations and tell them to work harder against the disinformation weapons.''

# Contact '''Artificial Intelligence Algorithmic Automation Incidents Controversies''' ('''AIAAIC''') at aiaaic.org - Snail mail: AIAAIC, The Bradfield Centre, 184 Cambridge Science Park, Cambridge, CB4 0GA, United Kingdom
# Contact '''Cyber Civil Rights Initiative''' at cybercivilrights.org - CCRI is located in Coral Gables, Florida, USA.
# The '''Internet Watch Foundation''' at iwf.org.uk - From https://www.iwf.org.uk/contact-us/ - Snail mail: Internet Watch Foundation, Discovery House, Vision Park, Chivers Way, Histon, Cambridge, CB24 9ZR, UK - Office phone: +44 (0)1223 20 30 30, lines open 8:30-16:30 Monday to Friday (UK time) - Media enquiries: media [at] iwf.org.uk
# '''Partnership on AI''' at partnershiponai.org - Mail: Partnership on AI, 115 Sansome St, Ste 1200, San Francisco, CA 94104, USA
# '''WITNESS''' - Mail: WITNESS, 80 Hanson Place, 5th Floor, Brooklyn, NY 11217, USA - Phone: 1.718.783.2000
# '''Screen Actors Guild - American Federation of Television and Radio Artists''' - Mail: 5757 Wilshire Boulevard, 7th Floor, Los Angeles, California 90036, USA - Phone: 1-855-724-2387 - Email: info@sagaftra.org
# '''Defense Advanced Research Projects Agency''' - Email: outreach@darpa.mil - Mail: 675 North Randolph Street, Arlington, VA 22203-2114, USA - Phone: 1-703-526-6630
# '''National Center for Media Forensics''', College of Arts & Media, CU Denver - Email: CAM@ucdenver.edu - Mail: Arts Building, Suite 177, 1150 10th Street, Denver, CO 80204, USA - Phone: 1-303-315-7400
# '''Media Forensics Hub''' at Clemson University (clemson.edu) - Mail: Media Forensics Hub, Clemson University, Clemson, South Carolina 29634, USA - Phone: 1-864-656-3311 - Email: mediaforensics@clemson.edu
# '''Institute for Ethics in Artificial Intelligence''' - Visitor's address: Marsstrasse 40, D-80335 Munich - Postal address: Institute for Ethics in Artificial Intelligence, Arcisstrasse 21, D-80333 Munich, Germany - Email: ieai(at)mcts.tum.de
# '''The Institute for Ethical AI & Machine Learning''' - Website: https://ethical.institute/ - Email: a@ethical.institute
# '''The Institute for Ethical AI in Education''' - Mail: The University of Buckingham, The Institute for Ethical AI in Education, Hunter Street, Buckingham, MK18 1EG, United Kingdom
# '''Future of Life Institute''' - Contact form - No physical contact info
# '''The Japanese Society for Artificial Intelligence''' - Mail: 402, OS Bldg., 4-7 Tsukudo-cho, Shinjuku-ku, Tokyo 162-0821, Japan - Phone: 03-5261-3401
# '''AI4ALL''' - Mail: AI4ALL, 548 Market St, PMB 95333, San Francisco, California 94104, USA
# '''The Future Society''' at thefuturesociety.org - Contact form - No physical contact info
# '''The AI Now Institute''' at ainowinstitute.org - Email: info@ainowinstitute.org
# '''The Foundation for Responsible Robotics''' at responsiblerobotics.org - Contact form - Email: info@responsiblerobotics.org
# '''AI4People''' at ai4people.eu - Contact form - No physical contact info
# '''IEEE Ethics in Action - in Autonomous and Intelligent Systems''' at ethicsinaction.ieee.org - Email: aiopps@ieee.org
# '''The Center for Countering Digital Hate''' at counterhate.com - Email: info@counterhate.com
# '''Carnegie Endowment for International Peace''' - '''Partnership for Countering Influence Operations''' ('''PCIO''') at carnegieendowment.org - Mail: Carnegie Endowment for International Peace, Partnership for Countering Influence Operations, 1779 Massachusetts Avenue NW, Washington, DC 20036-2103, USA - Phone: 1-202-483-7600 - Fax: 1-202-483-1840
# '''Knowledge 4 All Foundation Ltd.''' - https://www.k4all.org/ - Mail: Betchworth House, 57-65 Station Road, Redhill, Surrey, RH1 1DL, UK
# '''The Montréal Declaration for a Responsible Development of Artificial Intelligence''' at montrealdeclaration-responsibleai.com - Phone: 1-514-343-6111, ext. 29669 - Email: declaration-iaresponsable@umontreal.ca
# '''NIST''' Open Media Forensics Challenge - Email: mfc_poc@nist.gov

== Contacted ==
# Contacted '''Artificial Intelligence Algorithmic Automation Incidents Controversies''' ('''AIAAIC''') at aiaaic.org
#* 2022-01-04 | Sent them email to info [at] aiaaic.org mentioning their site is down and thanking them for their effort
#* 2022-01-04 | Received reply from Charlie. Replied back asking what "AIAAIC" is short for, and Charlie Pownall responded "Artificial Intelligence Algorithmic Automation Incidents Controversies"
# Contacted '''The Institute for Ethical AI & Machine Learning'''
# Contacted '''Future of Life Institute'''
#* 2021-08-14 | Subscribed to newsletter
# Contacted '''AI4ALL'''
#* 2021-08-14 | Subscribed to mailing list
# Contacted '''The AI Now Institute''' at ainowinstitute.org
#* 2021-08-14 | Subscribed to mailing list
# Contacted '''The Center for Countering Digital Hate''' at counterhate.com
#* 2021-08-14 | Subscribed to mailing list
# Contacted Lionel Hagege / '''FacePinPoint.com'''
#* 2022-10-07 - Sent LinkedIn mail to Lionel about https://theconversation.com/deepfake-audio-has-a-tell-researchers-use-fluid-dynamics-to-spot-artificial-imposter-voices-189104, as I had promised to do when I found out that someone has found something that sounds like it could work against the digital sound-alikes.
#* 2022-02-22 - 2022-02-28 - We exchanged some messages with Mr. Hagege. I got some information to him that I thought he really should know, and also got a very hope-bearing answer to my question about the IP: Mr. Hagege owns it after the dissolution of the company in 2021, and he would be interested in getting at it again, but only if sufficient financing were supplied.
#* 2022-02-21 - Second contact - I got another free month of LinkedIn Pro and sent LinkedIn mail to Mr. Hagege, explaining the pressing need for FacePinPoint.com to make a return, as a public good. Hoping to get an answer.
#* 2021-12-18 - First contact - I managed to reach someone at FacePinPoint via their Facebook chat and they told me that they could not find funding and needed to shut shop. Their email does not seem to be working, as both my emails (info@ and webmaster@) bounced with "Relay access denied" (in reply to RCPT TO command).
== 1st seen in ==
# https://www.iwf.org.uk/our-technology/report-remove/
# "The ethics of artificial intelligence: Issues and initiatives" (PDF). [[w:Europa (web portal)]]. [[w:European Parliamentary Research Service]]. March 2020. Retrieved 2021-02-17. ''This study deals with the ethical implications and moral questions that arise from the development and implementation of artificial intelligence (AI) technologies.''
# https://www.reuters.com/article/us-cyber-deepfake-activist/deepfake-used-to-attack-activist-couple-shows-new-disinformation-frontier-idUSKCN24G15E


== References ==
# https://www.partnershiponai.org/aiincidentdatabase/
# whois aiaaic.org
# https://charliepownall.com/ai-algorithimic-incident-controversy-database/
# https://web.archive.org/web/20160630154819/https://www.darpa.mil/program/media-forensics
# https://web.archive.org/web/20191108090036/https://www.darpa.mil/program/semantic-forensics (archived 2019-11-08)
# whois facepinpoint.com
# https://www.facepinpoint.com/aboutus
# Blue, Logan; Warren, Kevin; Abdullah, Hadi; Gibson, Cassidy; Vargas, Luis; O’Dell, Jessica; Butler, Kevin; Traynor, Patrick (August 2022). "Detecting deep-fake audio through vocal tract reconstruction". Proceedings of the 31st USENIX Security Symposium: 2691-2708. ISBN 978-1-939133-31-1. Retrieved 2022-10-06.
# Boháček, Matyáš; Farid, Hany (2022-11-23). "Protecting world leaders against deep fakes using facial, gestural, and vocal mannerisms". [[w:Proceedings of the National Academy of Sciences of the United States of America]]. 119 (48). doi:10.1073/pnas.2216035119. Retrieved 2023-01-05.
# Boháček, Matyáš; Farid, Hany (2022-06-14). "Protecting President Zelenskyy against Deep Fakes". arXiv:2206.12043 [cs.CV].
# Lawson, Amanda (2023-04-24). "A Look at Global Deepfake Regulation Approaches". responsible.ai. Responsible Artificial Intelligence Institute. Retrieved 2024-02-14.
# Williams, Kaylee (2023-05-15). "Exploring Legal Approaches to Regulating Nonconsensual Deepfake Pornography". techpolicy.press. Retrieved 2024-02-14.
# Owen, Aled (2024-02-02). "Deepfake laws: is AI outpacing legislation?". onfido.com. Onfido. Retrieved 2024-02-14.
# Pirius, Rebecca (2024-02-07). "Is Deepfake Pornography Illegal?". Criminaldefenselawyer.com. [[w:Nolo (publisher)]]. Retrieved 2024-02-22.
# Rastogi, Janvhi (2023-10-16). "Deepfake Pornography: A Legal and Ethical Menace". tclf.in. The Contemporary Law Forum. Retrieved 2024-02-14.
# https://www.nist.gov/itl/iad/mig/open-media-forensics-challenge
# https://law.yale.edu/isp/events/technologies-deception
# https://venturebeat.com/2020/06/12/facebook-detection-challenge-winners-spot-deepfakes-with-82-accuracy/
# https://www.nist.gov/itl/iad/mig/open-media-forensics-challenge
