Organizations, studies and events against synthetic human-like fakes

Here you can find [[#Organizations against synthetic human-like fakes|organizations]], [[#Studies against synthetic human-like fakes|studies]] and [[#Events against synthetic human-like fakes|events]] against [[synthetic human-like fakes]] and also [[#Organizations for media forensics|organizations and curricula for media forensics]].

The [[Synthetic human-like fakes#Timeline of synthetic human-like fakes|SSFWIKI timeline of synthetic human-like fakes]] lists both positive and negative developments in reverse chronological order.

For laws in effect and bills in planning against synthetic filth see [[Laws against synthesis and other related crimes]].


<section begin=core organizations />
= Organizations against synthetic human-like fakes =
== AI incident repositories ==
 
<section begin=APW_AI-transclusion />
* <section begin=incidentdatabase.ai />The [https://incidentdatabase.ai/ '''''AI Incident Database''''' at incidentdatabase.ai] was introduced on 2020-11-18 by the [[w:Partnership on AI]].<ref name="PartnershipOnAI2020">https://www.partnershiponai.org/aiincidentdatabase/</ref><section end=incidentdatabase.ai />


</ref> was founded by Charlie Pownall. The [https://www.aiaaic.org/aiaaic-repository '''AIAAIC repository''' at aiaaic.org] contains extensive reporting on problematic uses of AI.<section end=AIAAIC.org /> The domain name aiaaic.org was registered on Tuesday 2021-02-23.<ref>whois aiaaic.org</ref> The AIAAIC repository is a free, open resource which anyone can use, copy, redistribute and adapt under the terms of its [https://creativecommons.org/licenses/by/4.0/ CC BY 4.0 license].<ref>https://charliepownall.com/ai-algorithimic-incident-controversy-database/</ref>
* <section begin=oecd.ai />[https://oecd.ai/en/ '''The OECD.AI Policy Observatory''' at oecd.ai], in conjunction with the Patrick J. McGovern Foundation, provides the [https://oecd.ai/en/incidents '''OECD AI Incidents Monitor''' ('''AIM''') at oecd.ai]<section end=oecd.ai />


* <section begin=AJL.org />The [[w:Algorithmic Justice League]] is also accepting reports of AI harms at [https://report.ajl.org/ report.ajl.org]<section end=AJL.org />
 
== Help for victims of image or audio based abuse ==
<section begin=cybercivilrights.org />* [https://cybercivilrights.org/ '''Cyber Civil Rights Initiative''' at cybercivilrights.org], a US-based NGO.<ref group="contact" name="CCRI">
Contact '''Cyber Civil Rights Initiative''' at cybercivilrights.org
* https://www.facebook.com/CyberCivilRightsInitiative
</ref> [https://cybercivilrights.org/about/ '''History / Mission / Vision''' of cybercivilrights.org]. [https://cybercivilrights.org/faqs-usvictims/ '''''Get help now''''' - '''CCRI Safety Center''' at cybercivilrights.org] - '''CCRI Image Abuse Helpline''' - ''If you are a victim of Image-Based Sexual Abuse (IBSA), please call the CCRI Image Abuse Helpline at 1-844-878-2274, which is available free of charge, 24/7.''
<section begin=cybercivilrights.org law compilations />* [https://cybercivilrights.org/existing-laws/ '''Existing Nonconsensual Pornography, Sextortion, and Deep Fake Laws''' at cybercivilrights.org]
** [https://cybercivilrights.org/deep-fake-laws/ '''Deep Fake Laws''' in the USA at cybercivilrights.org]
** [https://cybercivilrights.org/sextortion-laws/ '''Sextortion Laws''' in the USA at cybercivilrights.org]
** [https://cybercivilrights.org/nonconsensual-pornagraphy-laws/ '''Nonconsensual Pornography Laws''' in the USA at cybercivilrights.org]<section end=cybercivilrights.org law compilations /><section end=cybercivilrights.org />


* <section begin=badassarmy.org />[https://badassarmy.org/ '''Battling Against Demeaning and Abusive Selfie Sharing''' at badassarmy.org] have compiled a [https://badassarmy.org/revenge-porn-laws-by-state/ '''list of revenge porn laws by US states''']<section end=badassarmy.org />
* <section begin=Report Remove />[https://www.childline.org.uk/info-advice/bullying-abuse-safety/online-mobile-safety/remove-nude-image-shared-online/ '''Report Remove: ''Remove a nude image shared online''''' at childline.org.uk]<ref group="1st seen in">https://www.iwf.org.uk/our-technology/report-remove/</ref>. Report Remove is a service for under-19s by [[w:Childline]], a UK service by the [[w:National Society for the Prevention of Cruelty to Children]] (NSPCC) and powered by technology from the [[w:Internet Watch Foundation]]. - ''Childline is here to help anyone under 19 in the UK with any issue they’re going through.'' Info on [https://www.iwf.org.uk/our-technology/report-remove/ '''Report Remove''' at iwf.org.uk]<section end=Report Remove />
 
== Awareness and countermeasures ==
* [https://www.iwf.org.uk/ The '''Internet Watch Foundation''' at iwf.org.uk]<ref group="contact">
The '''Internet Watch Foundation''' at iwf.org.uk
 
From https://www.iwf.org.uk/contact-us/
 
'''Snail mail'''
 
* Internet Watch Foundation
* Discovery House
* Vision Park
* Chivers Way
* Histon
* Cambridge
* CB24 9ZR
* UK
 
* Office Phone +44 (0)1223 20 30 30. - Phone lines are open between 8:30 - 16:30 Monday to Friday (UK time)
* Media enquiries Email media [at] iwf.org.uk
 
 
 
</ref> - The [[w:Internet Watch Foundation]] is a UK charity that seeks to minimise the availability of online sexual abuse content, specifically child sexual abuse images and videos hosted anywhere in the world and non-photographic child sexual abuse images hosted in the UK. [https://www.iwf.org.uk/about-us/ "About us" at iwf.org.uk], [https://www.iwf.org.uk/our-technology/ "Our technology" at iwf.org.uk]


* [https://www.partnershiponai.org/ '''Partnership on AI''' at partnershiponai.org] ([https://www.partnershiponai.org/contact/ contact form])<ref group="contact">
</ref>
** [https://lab.witness.org/projects/osint-digital-forensics/ '''Open-source intelligence digital forensics''' - ''How do we work together to detect AI-manipulated media?'' at lab.witness.org]. "''In February '''2019''' WITNESS in association with [[w:George Washington University]] brought together a group of leading researchers in [[Glossary#Media forensics|media forensics]] and [[w:detection]] of [[w:deepfakes]] and other [[w:media manipulation]] with leading experts in social newsgathering, [[w:User-generated content]] and [[w:open-source intelligence]] ([[w:OSINT]]) verification and [[w:fact-checking]].''" (website)
** [https://lab.witness.org/projects/synthetic-media-and-deep-fakes/ '''Prepare, Don’t Panic: Synthetic Media and Deepfakes''' at lab.witness.org] is a summary page for WITNESS Media Lab's ongoing work against synthetic human-like fakes. Their work was launched in '''2018''' with the first multi-disciplinary convening around deepfakes preparedness, which led to the writing of the [http://witness.mediafire.com/file/q5juw7dc3a2w8p7/Deepfakes_Final.pdf/file '''report''' “'''Mal-uses of AI-generated Synthetic Media and Deepfakes: Pragmatic Solutions Discovery Convening'''”] (dated 2018-06-11). [https://blog.witness.org/2018/07/deepfakes/ '''''Deepfakes and Synthetic Media: What should we fear? What can we do?''''' at blog.witness.org]
* '''[[w:Financial Coalition Against Child Pornography]]''' could be interested in also cutting off payment channels for sites distributing non-consensual synthetic pornography.


* [https://www.realitydefender.ai/ '''Reality Defender''' at realitydefender.ai] - ''Enterprise-Grade Deepfake Detection Platform'' for many stakeholders
</ref> [https://www.sagaftra.org/action-alert-support-california-bill-end-deepfake-porn SAG-AFTRA ACTION ALERT: '''"Support California Bill to End Deepfake Porn"''' at sagaftra.org '''endorses'''] [https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=201920200SB564 California Senate Bill SB 564] introduced to the [[w:California State Senate]] by [[w:California]] [[w:Connie Leyva|Senator Connie Leyva]] in Feb '''2019'''.


== Organizations for media forensics ==
 
[[File:DARPA_Logo.jpg|thumb|right|240px|The Defense Advanced Research Projects Agency, better known as [[w:DARPA]], has been active in countering synthetic fake video since long before the public became aware that such problems exist.]]


* Phone 1-864-656-3311


</ref> at the Watt Family Innovation Center of the '''[[w:Clemson University]]''' aims to promote multi-disciplinary research and to collect and facilitate discussion and ideation of challenges and solutions. They provide [https://www.clemson.edu/centers-institutes/watt/hub/resources/ resources], [https://www.clemson.edu/centers-institutes/watt/hub/connect-collab/research.html research], [https://www.clemson.edu/centers-institutes/watt/hub/connect-collab/education.html media forensics education] and are running a [https://www.clemson.edu/centers-institutes/watt/hub/connect-collab/wg-disinfo.html '''Working Group''' on '''disinformation'''].<ref group="contact">mediaforensics@clemson.edu</ref><section end=core organizations />


<section begin=other organizations />
== Organizations possibly against synthetic human-like fakes ==


Originally harvested from the study [https://www.europarl.europa.eu/RegData/etudes/STUD/2020/634452/EPRS_STU(2020)634452_EN.pdf The ethics of artificial intelligence: Issues and initiatives (.pdf)] by the [[w:European Parliamentary Research Service]], published on the [[w:Europa (web portal)]] in March 2020.<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020">
</ref>


== Services that should get back to the task at hand - FacePinPoint.com ==
Transcluded from [[FacePinPoint.com]]
{{#lst:FacePinPoint.com|FacePinPoint.com}}

== Other essential developments ==
* [https://www.montrealdeclaration-responsibleai.com/ '''The Montréal Declaration for a Responsible Development of Artificial Intelligence''' at montrealdeclaration-responsibleai.com]<ref group="contact">
</ref>
* [https://www.eu-robotics.net/ '''European Robotics Platform''' at eu-robotics.net] is funded by the [[w:European Commission]]. See [[w:European Robotics Platform]] and [[w:List of European Union robotics projects#EUROP]] for more info.<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/>


= Studies against synthetic human-like fakes =
 
== Detecting deep-fake audio through vocal tract reconstruction ==
{{#lst:Detecting deep-fake audio through vocal tract reconstruction|what-is-it}}
 
== Protecting world leaders against deep fakes using facial, gestural, and vocal mannerisms ==
* {{#lst:Protecting world leaders against deep fakes using facial, gestural, and vocal mannerisms|what-is-it}}
 
== Protecting President Zelenskyy against deep fakes ==
* {{#lst:Protecting President Zelenskyy against deep fakes|what-is-it}}
 
== Other studies against synthetic human-like fakes ==
* [https://www.icct.nl/sites/default/files/2023-12/The%20Weaponisation%20of%20Deepfakes.pdf '''''The Weaponisation of Deepfakes - Digital Deception by the Far-Right''''' at icct.nl], an [[w:International Centre for Counter-Terrorism]] policy brief by Ella Busch and Jacob Ware. Published in December 2023.
 
* [https://media.defense.gov/2023/Sep/12/2003298925/-1/-1/0/CSI-DEEPFAKE-THREATS.PDF '''''Contextualizing Deepfake Threats to Organizations''''' - '''Cybersecurity Information Sheet''' at media.defense.gov]
 
* [https://www.dhs.gov/sites/default/files/publications/increasing_threats_of_deepfake_identities_0.pdf '''''Increasing Threats of Deepfake Identities''''' at dhs.gov] by the [[w:United States Department of Homeland Security]]
 
* [https://www.dhs.gov/sites/default/files/2022-10/AEP%20DeepFake%20PHASE2%20FINAL%20corrected20221006.pdf '''''Increasing Threats of Deepfake Identities''' – '''Phase 2: Mitigation Measures''''' at dhs.gov]
 
* [https://link.springer.com/article/10.1007/s13347-023-00657-0 '''''Deepfake Pornography and the Ethics of Non-Veridical Representations''''' at link.springer.com], a 2023 research article, published in [[w:Philosophy & Technology]] on 2023-08-23. (paywalled)
 
* [https://digitalcommons.law.uidaho.edu/cgi/viewcontent.cgi?article=1252&context=idaho-law-review '''''NONCONSENSUAL DEEPFAKES: DETECTING AND REGULATING THE RISING THREAT TO PRIVACY''''' at digitalcommons.law.uidaho.edu] by Natalie Lussier, published in Idaho Law Review January 2022
 
* [https://www.cbinsights.com/research/future-of-information-warfare/ '''''Disinformation That Kills: The Expanding Battlefield Of Digital Warfare''''' at cbinsights.com], a '''2020'''-10-21 research brief on disinformation warfare by [[w:CB Insights]], a private company that provides [[w:market intelligence]] and [[w:business analytics]] services
 
* [https://arxiv.org/abs/2001.06564 '''''Media Forensics and DeepFakes: an overview''''' at arXiv.org] [https://arxiv.org/pdf/2001.06564.pdf (as .pdf at arXiv.org)], an overview of digital look-alikes and media forensics published in August '''2020''' in [https://ieeexplore.ieee.org/xpl/tocresult.jsp?isnumber=9177372 Volume 14 Issue 5 of IEEE Journal of Selected Topics in Signal Processing]. [https://ieeexplore.ieee.org/document/9115874 '''''Media Forensics and DeepFakes: An Overview''''' at ieeexplore.ieee.org] (paywalled, free abstract)
 
* [https://scholarship.law.duke.edu/cgi/viewcontent.cgi?article=1333&context=dltr '''''DEEPFAKES: False pornography is here and the law cannot protect you''''' at scholarship.law.duke.edu] by Douglas Harris, published in [https://scholarship.law.duke.edu/dltr/vol17/iss1/ Duke Law & Technology Review - Volume 17 on '''2019'''-01-05] by the [[w:Duke University School of Law]]
 
'''Legal information compilations'''
{{#lst:Laws against synthesis and other related crimes|anti-fake-law-compilations}}
 
More studies can be found in the [[Synthetic human-like fakes#Timeline of synthetic human-like fakes|SSFWIKI Timeline of synthetic human-like fakes]]
 
''' Search for more '''
* [[w:Law review]]
** [[w:List of law reviews in the United States]]


''' Reporting against synthetic human-like fakes '''
* [https://news.berkeley.edu/2019/06/18/researchers-use-facial-quirks-to-unmask-deepfakes/ '''''Researchers use facial quirks to unmask ‘deepfakes’''''' at news.berkeley.edu], 2019-06-18 reporting by Kara Manke published in the ''Politics & society, Research, Technology & engineering'' section of Berkeley News of [[w:University of California, Berkeley|UC Berkeley]].


''' Companies against synthetic human-like fakes '''
See [[resources]] for more.
 
* '''[https://cyabra.com/ Cyabra.com]''' is an AI-based system that helps organizations stay on guard against disinformation attacks<ref group="1st seen in" name="ReutersDisinfomation2020">https://www.reuters.com/article/us-cyber-deepfake-activist/deepfake-used-to-attack-activist-couple-shows-new-disinformation-frontier-idUSKCN24G15E</ref>. [https://www.reuters.com/article/us-cyber-deepfake-activist/deepfake-used-to-attack-activist-couple-shows-new-disinformation-frontier-idUSKCN24G15E Reuters.com reporting] from July 2020.
 
= Events against synthetic human-like fakes =
== Upcoming events ==
In reverse chronological order 
 
== Ongoing events ==


* '''2020 - ONGOING''' | '''[[w:National Institute of Standards and Technology]]''' ('''NIST''') ([https://www.nist.gov/ nist.gov]) ([https://www.nist.gov/about-nist/contact-us contacting NIST]) | Open Media Forensics Challenge presented in [https://www.nist.gov/itl/iad/mig/open-media-forensics-challenge '''Open Media Forensics Challenge''' at nist.gov] and [https://mfc.nist.gov/ '''Open Media Forensics Challenge''' ('''OpenMFC''') at mfc.nist.gov]<ref group="contact">
</ref> - ''Open Media Forensics Challenge Evaluation (OpenMFC) is an open evaluation series organized by the NIST to assess and measure the capability of media forensic algorithms and systems.''<ref>https://www.nist.gov/itl/iad/mig/open-media-forensics-challenge</ref>
== Past events ==
* '''2023''' | [https://worldantibullyingforum.com '''World Anti-Bullying Forum'''] October 25-27 in North Carolina, U.S.A.
* '''2022''' | '''[[w:European Conference on Computer Vision]]''' in Tel Aviv, Israel
* '''2022''' | [https://conferences.sigcomm.org/hotnets/2022/ '''HotNets 2022: Twenty-First ACM Workshop on Hot Topics in Networks''' at conferences.sigcomm.org] - November 14-15, 2022 — Austin, Texas, USA. Presented at HotNets 2022 was a very interesting paper, [https://farid.berkeley.edu/downloads/publications/hotnets22.pdf '''''Global Content Revocation on the Internet: A Case Study in Technology Ecosystem Transformation''''' at farid.berkeley.edu]
* '''2022''' | [https://www.interspeech2022.org/ '''INTERSPEECH 2022''' at interspeech2022.org] organized by the [[w:International Speech Communication Association]] was held on 18-22 September 2022 in Korea. The work [https://arxiv.org/abs/2203.15563 '''''Attacker Attribution of Audio Deepfakes''''' at arxiv.org] was presented there.
* '''2022''' | [https://law.yale.edu/isp/events/technologies-deception '''Technologies of Deception''' at law.yale.edu], a conference hosted by the [[w:Information Society Project]] (ISP) was held at Yale Law School in New Haven, Connecticut, on March 25-26, 2022<ref>https://law.yale.edu/isp/events/technologies-deception</ref>
* '''2021''' | '''[[w:Conference on Neural Information Processing Systems]]''' [https://neurips.cc/ '''NeurIPS 2021''' at neurips.cc], was held virtually in December 2021. I haven't seen any good tech coming from there in 2021. On the problematic side [[w:StyleGAN]]3 was presented there.


* '''2021''' | '''[[w:Conference on Computer Vision and Pattern Recognition]] (CVPR)''' 2021 [https://cvpr2021.thecvf.com/ '''CVPR 2021''' at cvpr2021.thecvf.com]
Line 455: Line 541:


* '''2020''' | The winners of the [https://venturebeat.com/2020/06/12/facebook-detection-challenge-winners-spot-deepfakes-with-82-accuracy/ Deepfake Detection Challenge reach 82% accuracy in detecting synthetic human-like fakes]<ref name="VentureBeat2020">https://venturebeat.com/2020/06/12/facebook-detection-challenge-winners-spot-deepfakes-with-82-accuracy/</ref>
* '''2020''' | [https://www.ftc.gov/news-events/events/2020/01/you-dont-say-ftc-workshop-voice-cloning-technologies '''''You Don't Say: An FTC Workshop on Voice Cloning Technologies''''' at ftc.gov] was held on Tuesday 2020-01-28 - [https://venturebeat.com/2020/01/29/ftc-voice-cloning-seminar-crime-use-cases-safeguards-ai-machine-learning/ reporting at venturebeat.com]


* '''2019''' | At the annual Finnish [[w:Ministry of Defence (Finland)|Ministry of Defence]]'s '''Scientific Advisory Board for Defence''' ('''MATINE''') public research seminar, a research group presented their work [https://www.defmin.fi/files/4755/1315MATINE_seminaari_21.11.pdf '''''Synteettisen median tunnistus''''' at defmin.fi] (Recognizing synthetic media). They built on earlier work on how to automatically detect synthetic human-like fakes and their work was funded with a grant from MATINE.
Line 472: Line 560:
* '''2016''' | '''Nimble Challenge 2016''' - NIST released the Nimble Challenge’16 (NC2016) dataset as the MFC program kickoff dataset (where NC is the former name of MFC).<ref>https://www.nist.gov/itl/iad/mig/open-media-forensics-challenge</ref>


<section end=other organizations />


= SSFWIKI proposed countermeasure to weaponized synthetic pornography: Outlaw unauthorized synthetic pornography (transcluded) =
Transcluded from [[Laws against synthesis and other related crimes#Law proposal to ban visual synthetic filth|Juho's proposal for banning unauthorized synthetic pornography]]


{{#section-h:Laws against synthesis and other related crimes|Law proposal to ban visual synthetic filth}}


= SSFWIKI proposed countermeasure to weaponized synthetic pornography: Adequate Porn Watcher AI (concept) (transcluded) =
Transcluded main contents from [[Adequate Porn Watcher AI (concept)]]


{{#lstx:Adequate Porn Watcher AI (concept)|See_also}}


= SSFWIKI proposed countermeasure to digital sound-alikes: Outlawing digital sound-alikes (transcluded) =
Transcluded from [[Laws against synthesis and other related crimes#Law proposal to ban unauthorized modeling of human voice|Juho's proposal on banning digital sound-alikes]]


{{#section-h:Laws against synthesis and other related crimes|Law proposal to ban unauthorized modeling of human voice}}


----


<section end=APW_AI-transclusion />
== About this article ==


Transcluded in this article are
* [[FacePinPoint.com]], a crucial past service by Lionel Hagege
* [[Adequate Porn Watcher AI (concept)]], an AI concept practically identical with FacePinPoint.com
* [[Laws against synthesis and other related crimes#Law proposal to ban visual synthetic filth|A law proposal against synthetic non-consensual pornography]]
* [[Laws against synthesis and other related crimes#Law proposal to ban unauthorized modeling of human voice|A law proposal against digital sound-alikes]]

In [[resources]] there are likely a few services that could fit here.
 
----


== Footnotes ==
== 1st seen in ==
<references group="1st seen in" />


== References ==
<references />