Organizations, studies and events against synthetic human-like fakes

Here you can find [[#Organizations against synthetic human-like fakes|organizations]], [[#Studies against synthetic human-like fakes|studies]] and [[#Events against synthetic human-like fakes|events]] against [[synthetic human-like fakes]] and also [[#Organizations for media forensics|organizations and curricula for media forensics]].


The [[Synthetic human-like fakes#Timeline of synthetic human-like fakes|SSFWIKI timeline of synthetic human-like fakes]] lists both positive and negative developments in reverse chronological order.


For laws in effect and bills in planning against synthetic filth, see [[Laws against synthesis and other related crimes]].
 
 


<section begin=core organizations />
= Organizations against synthetic human-like fakes =
== AI incident repositories ==
* <section begin=incidentdatabase.ai />The [https://incidentdatabase.ai/ '''''AI Incident Database''''' at incidentdatabase.ai] was introduced on 2020-11-18 by the [[w:Partnership on AI]].<ref name="PartnershipOnAI2020">https://www.partnershiponai.org/aiincidentdatabase/</ref><section end=incidentdatabase.ai />




</ref> was founded by Charlie Pownall. The [https://www.aiaaic.org/aiaaic-repository '''AIAAIC repository''' at aiaaic.org] contains extensive reporting on problematic uses of AI.<section end=AIAAIC.org /> The domain name aiaaic.org was registered on 2021-02-23.<ref>whois aiaaic.org</ref> The AIAAIC repository is a free, open resource which anyone can use, copy, redistribute and adapt under the terms of its [https://creativecommons.org/licenses/by/4.0/ CC BY 4.0 license].<ref>https://charliepownall.com/ai-algorithimic-incident-controversy-database/</ref>
* <section begin=oecd.ai />[https://oecd.ai/en/ '''The OECD.AI Policy Observatory''' at oecd.ai], in conjunction with the Patrick J McGovern Foundation, provides the [https://oecd.ai/en/incidents '''OECD AI Incidents Monitor''' ('''AIM''') at oecd.ai]<section end=oecd.ai />
* <section begin=AJL.org />The [[w:Algorithmic Justice League]] is also accepting reports of AI harms at [https://report.ajl.org/ report.ajl.org]<section end=AJL.org />


== Help for victims of image- or audio-based abuse ==
<section begin=cybercivilrights.org />* [https://cybercivilrights.org/ '''Cyber Civil Rights Initiative''' at cybercivilrights.org], a US-based NGO.<ref group="contact" name="CCRI">
Contact '''Cyber Civil Rights Initiative''' at cybercivilrights.org
* https://www.facebook.com/CyberCivilRightsInitiative


</ref> [https://cybercivilrights.org/about/ '''History / Mission / Vision''' of cybercivilrights.org]. [https://cybercivilrights.org/faqs-usvictims/ '''''Get help now''''' - '''CCRI Safety Center''' at cybercivilrights.org] - '''CCRI Image Abuse Helpline''' - ''If you are a victim of Image-Based Sexual Abuse (IBSA), please call the CCRI Image Abuse Helpline at 1-844-878-2274, which is available free of charge, 24/7.''
<section begin=cybercivilrights.org law compilations />* [https://cybercivilrights.org/existing-laws/ '''Existing Nonconsensual Pornography, Sextortion, and Deep Fake Laws''' at cybercivilrights.org]
** [https://cybercivilrights.org/deep-fake-laws/ '''Deep Fake Laws''' in the USA at cybercivilrights.org]
** [https://cybercivilrights.org/sextortion-laws/ '''Sextortion Laws''' in the USA at cybercivilrights.org]
** [https://cybercivilrights.org/nonconsensual-pornagraphy-laws/ '''Nonconsensual Pornography Laws''' in the USA at cybercivilrights.org]<section end=cybercivilrights.org law compilations /><section end=cybercivilrights.org />


* <section begin=Report Remove />[https://www.childline.org.uk/info-advice/bullying-abuse-safety/online-mobile-safety/remove-nude-image-shared-online/ '''Report Remove: ''Remove a nude image shared online''''' at childline.org.uk]<ref group="1st seen in">https://www.iwf.org.uk/our-technology/report-remove/</ref>. Report Remove is a service for under-19-year-olds by [[w:Childline]], a UK service by the [[w:National Society for the Prevention of Cruelty to Children]] (NSPCC), powered by technology from the [[w:Internet Watch Foundation]]. ''Childline is here to help anyone under 19 in the UK with any issue they’re going through.'' Info on [https://www.iwf.org.uk/our-technology/report-remove/ '''Report Remove''' at iwf.org.uk]<section end=Report Remove />


== Awareness and countermeasures ==
* <section begin=badassarmy.org />[https://badassarmy.org/ '''Battling Against Demeaning and Abusive Selfie Sharing''' at badassarmy.org] have compiled a [https://badassarmy.org/revenge-porn-laws-by-state/ '''list of revenge porn laws by US states''']<section end=badassarmy.org />
 
* [https://www.iwf.org.uk/ The '''Internet Watch Foundation''' at iwf.org.uk]<ref group="contact">
The '''Internet Watch Foundation''' at iwf.org.uk
</ref> [https://www.sagaftra.org/action-alert-support-california-bill-end-deepfake-porn SAG-AFTRA ACTION ALERT: '''"Support California Bill to End Deepfake Porn"''' at sagaftra.org '''endorses'''] [https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=201920200SB564 California Senate Bill SB 564], introduced to the [[w:California State Senate]] by California [[w:Connie Leyva|Senator Connie Leyva]] in February '''2019'''.


== Organizations for media forensics ==
 
[[File:DARPA_Logo.jpg|thumb|right|240px|The Defense Advanced Research Projects Agency, better known as [[w:DARPA]], has been active in countering synthetic fake video for longer than the public has been aware that such problems exist.]]




<section begin=other organizations />
== Organizations possibly against synthetic human-like fakes ==


Originally harvested from the study [https://www.europarl.europa.eu/RegData/etudes/STUD/2020/634452/EPRS_STU(2020)634452_EN.pdf ''The ethics of artificial intelligence: Issues and initiatives'' (.pdf)] by the [[w:European Parliamentary Research Service]], published on the [[w:Europa (web portal)]] in March 2020.<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020">
</ref>


== Services that should get back to the task at hand - FacePinPoint.com ==
Transcluded from [[FacePinPoint.com]]
{{#lst:FacePinPoint.com|FacePinPoint.com}}
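The transclusions in this article rely on MediaWiki's Labeled Section Transclusion parser functions, which keep a single canonical copy of each transcluded page while letting overview articles embed it. A minimal sketch of the mechanism, using section and page names that appear in this article:

```wikitext
On the source page, mark the reusable span with a named section:

<section begin=FacePinPoint.com />…content to share…<section end=FacePinPoint.com />

On any other page, pull content in:

{{#lst:FacePinPoint.com|FacePinPoint.com}}
<!-- {{#lst:}} includes the labeled section -->

{{#lstx:Adequate Porn Watcher AI (concept)|See_also}}
<!-- {{#lstx:}} includes the page EXCEPT the labeled section -->

{{#section-h:Laws against synthesis and other related crimes|Law proposal to ban visual synthetic filth}}
<!-- {{#section-h:}} includes a section by its heading name -->
```

When the source page is edited, every page transcluding it updates automatically, which is why the law proposals and the FacePinPoint.com description are maintained in one place each.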
 
== Other essential developments ==
* [https://www.montrealdeclaration-responsibleai.com/ '''The Montréal Declaration for a Responsible Development of Artificial Intelligence''' at montrealdeclaration-responsibleai.com]<ref group="contact">


* [https://www.eu-robotics.net/ '''European Robotics Platform''' at eu-robotics.net] is funded by the [[w:European Commission]]. See [[w:European Robotics Platform]] and [[w:List of European Union robotics projects#EUROP]] for more info.<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/>


= Studies against synthetic human-like fakes =
 
== Detecting deep-fake audio through vocal tract reconstruction ==
{{#lst:Detecting deep-fake audio through vocal tract reconstruction|what-is-it}}
 
== Protecting world leaders against deep fakes using facial, gestural, and vocal mannerisms ==
* {{#lst:Protecting world leaders against deep fakes using facial, gestural, and vocal mannerisms|what-is-it}}
 
== Protecting President Zelenskyy against deep fakes ==
* {{#lst:Protecting President Zelenskyy against deep fakes|what-is-it}}
 
== Other studies against synthetic human-like fakes ==
* [https://www.icct.nl/sites/default/files/2023-12/The%20Weaponisation%20of%20Deepfakes.pdf '''''The Weaponisation of Deepfakes - Digital Deception by the Far-Right''''' at icct.nl], an [[w:International Centre for Counter-Terrorism]] policy brief by Ella Busch and Jacob Ware. Published in December 2023.
 
* [https://media.defense.gov/2023/Sep/12/2003298925/-1/-1/0/CSI-DEEPFAKE-THREATS.PDF '''''Contextualizing Deepfake Threats to Organizations''''' - '''Cybersecurity Information Sheet''' at media.defense.gov]
 
* [https://www.dhs.gov/sites/default/files/publications/increasing_threats_of_deepfake_identities_0.pdf '''''Increasing Threats of Deepfake Identities''''' at dhs.gov] by the [[w:United States Department of Homeland Security]]
 
* [https://www.dhs.gov/sites/default/files/2022-10/AEP%20DeepFake%20PHASE2%20FINAL%20corrected20221006.pdf '''''Increasing Threats of Deepfake Identities''' – '''Phase 2: Mitigation Measures''''' at dhs.gov]
 
* [https://link.springer.com/article/10.1007/s13347-023-00657-0 '''''Deepfake Pornography and the Ethics of Non-Veridical Representations''''' at link.springer.com], a 2023 research article, published in [[w:Philosophy & Technology]] on 2023-08-23. (paywalled)
 
* [https://digitalcommons.law.uidaho.edu/cgi/viewcontent.cgi?article=1252&context=idaho-law-review '''''Nonconsensual Deepfakes: Detecting and Regulating the Rising Threat to Privacy''''' at digitalcommons.law.uidaho.edu] by Natalie Lussier, published in the Idaho Law Review in January 2022
 
* [https://www.cbinsights.com/research/future-of-information-warfare/ '''''Disinformation That Kills: The Expanding Battlefield Of Digital Warfare''''' at cbinsights.com], a '''2020'''-10-21 research brief on disinformation warfare by [[w:CB Insights]], a private company that provides [[w:market intelligence]] and [[w:business analytics]] services
 
* [https://arxiv.org/abs/2001.06564 '''''Media Forensics and DeepFakes: an overview''''' at arXiv.org] [https://arxiv.org/pdf/2001.06564.pdf (as .pdf at arXiv.org)], an overview of digital look-alikes and media forensics published in August '''2020''' in [https://ieeexplore.ieee.org/xpl/tocresult.jsp?isnumber=9177372 Volume 14, Issue 5 of the IEEE Journal of Selected Topics in Signal Processing]. [https://ieeexplore.ieee.org/document/9115874 '''''Media Forensics and DeepFakes: An Overview''''' at ieeexplore.ieee.org] (paywalled, free abstract)
 
* [https://scholarship.law.duke.edu/cgi/viewcontent.cgi?article=1333&context=dltr '''''DEEPFAKES: False pornography is here and the law cannot protect you''''' at scholarship.law.duke.edu] by Douglas Harris, published in [https://scholarship.law.duke.edu/dltr/vol17/iss1/ Duke Law & Technology Review - Volume 17 on '''2019'''-01-05] by the [[w:Duke University School of Law]]
 
'''Legal information compilations'''
{{#lst:Laws against synthesis and other related crimes|anti-fake-law-compilations}}
 
More studies can be found in the [[Synthetic human-like fakes#Timeline of synthetic human-like fakes|SSFWIKI Timeline of synthetic human-like fakes]].
 
''' Search for more '''
* [[w:Law review]]
** [[w:List of law reviews in the United States]]
 
''' Reporting against synthetic human-like fakes '''
* [https://news.berkeley.edu/2019/06/18/researchers-use-facial-quirks-to-unmask-deepfakes/ '''''Researchers use facial quirks to unmask ‘deepfakes’''''' at news.berkeley.edu], 2019-06-18 reporting by Kara Manke published in the ''Politics & society, Research, Technology & engineering'' section of Berkeley News of [[w:University of California, Berkeley|UC Berkeley]].


''' Companies against synthetic human-like fakes '''
See [[resources]] for more.


* '''[https://cyabra.com/ Cyabra.com]''' is an AI-based system that helps organizations be on the guard against disinformation attacks<ref group="1st seen in" name="ReutersDisinfomation2020">https://www.reuters.com/article/us-cyber-deepfake-activist/deepfake-used-to-attack-activist-couple-shows-new-disinformation-frontier-idUSKCN24G15E</ref>. [https://www.reuters.com/article/us-cyber-deepfake-activist/deepfake-used-to-attack-activist-couple-shows-new-disinformation-frontier-idUSKCN24G15E Reuters.com reporting] from July 2020.


= Events against synthetic human-like fakes =
In reverse chronological order 


== Ongoing events ==


* '''2020 - ONGOING''' | '''[[w:National Institute of Standards and Technology]]''' ('''NIST''') ([https://www.nist.gov/ nist.gov]) ([https://www.nist.gov/about-nist/contact-us contacting NIST]) | Open Media Forensics Challenge presented in [https://www.nist.gov/itl/iad/mig/open-media-forensics-challenge '''Open Media Forensics Challenge''' at nist.gov] and [https://mfc.nist.gov/ '''Open Media Forensics Challenge''' ('''OpenMFC''') at mfc.nist.gov]<ref group="contact">


</ref> - ''Open Media Forensics Challenge Evaluation (OpenMFC) is an open evaluation series organized by NIST to assess and measure the capability of media forensic algorithms and systems.''<ref>https://www.nist.gov/itl/iad/mig/open-media-forensics-challenge</ref>
== Past events ==
* '''2023''' | [https://worldantibullyingforum.com '''World Anti-Bullying Forum'''] October 25-27 in North Carolina, U.S.A.
* '''2022''' | '''[[w:European Conference on Computer Vision]]''' in Tel Aviv, Israel
* '''2022''' | [https://conferences.sigcomm.org/hotnets/2022/ '''HotNets 2022: Twenty-First ACM Workshop on Hot Topics in Networks''' at conferences.sigcomm.org], November 14-15, 2022, Austin, Texas, USA. Presented at HotNets 2022 was a notable paper, [https://farid.berkeley.edu/downloads/publications/hotnets22.pdf '''''Global Content Revocation on the Internet: A Case Study in Technology Ecosystem Transformation''''' at farid.berkeley.edu]
* '''2022''' | [https://www.interspeech2022.org/ '''INTERSPEECH 2022''' at interspeech2022.org], organized by the [[w:International Speech Communication Association]], was held on 18-22 September 2022 in Korea. The work [https://arxiv.org/abs/2203.15563 '''''Attacker Attribution of Audio Deepfakes''''' at arxiv.org] was presented there.
* '''2022''' | [https://law.yale.edu/isp/events/technologies-deception '''Technologies of Deception''' at law.yale.edu], a conference hosted by the [[w:Information Society Project]] (ISP) was held at Yale Law School in New Haven, Connecticut, on March 25-26, 2022<ref>https://law.yale.edu/isp/events/technologies-deception</ref>
* '''2021''' | '''[[w:Conference on Neural Information Processing Systems]]''' [https://neurips.cc/ '''NeurIPS 2021''' at neurips.cc] was held virtually in December 2021. On the problematic side, [[w:StyleGAN]]3 was presented there.


* '''2021''' | '''[[w:Conference on Computer Vision and Pattern Recognition]] (CVPR)''' 2021 [https://cvpr2021.thecvf.com/ '''CVPR 2021''' at cvpr2021.thecvf.com]


* '''2020''' | The winners of the [https://venturebeat.com/2020/06/12/facebook-detection-challenge-winners-spot-deepfakes-with-82-accuracy/ Deepfake Detection Challenge reach 82% accuracy in detecting synthetic human-like fakes]<ref name="VentureBeat2020">https://venturebeat.com/2020/06/12/facebook-detection-challenge-winners-spot-deepfakes-with-82-accuracy/</ref>
* '''2020''' | [https://www.ftc.gov/news-events/events/2020/01/you-dont-say-ftc-workshop-voice-cloning-technologies '''''You Don't Say: An FTC Workshop on Voice Cloning Technologies''''' at ftc.gov] was held on Tuesday 2020-01-28 - [https://venturebeat.com/2020/01/29/ftc-voice-cloning-seminar-crime-use-cases-safeguards-ai-machine-learning/ reporting at venturebeat.com]


* '''2019''' | At the annual Finnish [[w:Ministry of Defence (Finland)|Ministry of Defence]]'s '''Scientific Advisory Board for Defence''' ('''MATINE''') public research seminar, a research group presented their work [https://www.defmin.fi/files/4755/1315MATINE_seminaari_21.11.pdf '''''Synteettisen median tunnistus''''' at defmin.fi] (''Recognizing synthetic media''). They built on earlier work on how to automatically detect synthetic human-like fakes; their work was funded with a grant from MATINE.


* '''2016''' | '''Nimble Challenge 2016''' - NIST released the Nimble Challenge’16 (NC2016) dataset as the MFC program kickoff dataset (where NC is the former name of MFC).<ref>https://www.nist.gov/itl/iad/mig/open-media-forensics-challenge</ref>


<section end=other organizations />


= SSFWIKI proposed countermeasure to weaponized synthetic pornography: Outlaw unauthorized synthetic pornography (transcluded) =
  Transcluded from [[Laws against synthesis and other related crimes#Law proposal to ban visual synthetic filth|Juho's proposal for banning unauthorized synthetic pornography]]


{{#section-h:Laws against synthesis and other related crimes|Law proposal to ban visual synthetic filth}}


= SSFWIKI proposed countermeasure to weaponized synthetic pornography: Adequate Porn Watcher AI (concept) (transcluded) =
  Transcluded main contents from [[Adequate Porn Watcher AI (concept)]]


{{#lstx:Adequate Porn Watcher AI (concept)|See_also}}


= SSFWIKI proposed countermeasure to digital sound-alikes: Outlawing digital sound-alikes (transcluded) =
  Transcluded from [[Laws against synthesis and other related crimes#Law proposal to ban unauthorized modeling of human voice|Juho's proposal on banning digital sound-alikes]]


{{#section-h:Laws against synthesis and other related crimes|Law proposal to ban unauthorized modeling of human voice}}


----
== About this article ==
Transcluded in this article are
* [[FacePinPoint.com]], a crucial past service by Lionel Hagege
* [[Adequate Porn Watcher AI (concept)]], an AI concept practically identical with FacePinPoint.com
* [[Laws against synthesis and other related crimes#Law proposal to ban visual synthetic filth|A law proposal against synthetic non-consensual pornography]]
* [[Laws against synthesis and other related crimes#Law proposal to ban unauthorized modeling of human voice|A law proposal against digital sound-alikes]]. 
In [[resources]] there are likely a few services that could fit here.


== Footnotes ==
== 1st seen in ==
<references group="1st seen in" />


== References ==
<references />