Organizations, studies and events against synthetic human-like fakes
Here you can find organizations, workshops, events and services against [[synthetic human-like fakes]], and also organizations and curricula for media forensics.

Transcluded in this article are
* [[FacePinPoint.com]], a crucial past service by Lionel Hagege
* [[Adequate Porn Watcher AI (concept)]], an AI concept practically identical to FacePinPoint.com
* [[Laws against synthesis and related crimes#Law proposal to ban visual synthetic filth|A law proposal against synthetic non-consensual pornography]]
* [[Laws against synthesis and related crimes#Law proposal to ban unauthorized modeling of human voice|A law proposal against digital sound-alikes]].

For laws and bills in planning against synthetic filth, see [[Laws against synthesis and related crimes]].

In [[resources]] there are likely a few services that would fit here.
=== Services that should get back to the task at hand - FacePinPoint.com ===
Transcluded from [[FacePinPoint.com]]

{{#lst:FacePinPoint.com|FacePinPoint.com}}
<section begin=core organizations />
=== Organizations against synthetic human-like fakes ===

'''AI incident repositories'''
* <section begin=incidentdatabase.ai />The [https://incidentdatabase.ai/ '''''AI Incident Database''''' at incidentdatabase.ai] was introduced on 2020-11-18 by the [[w:Partnership on AI]].<ref name="PartnershipOnAI2020">https://www.partnershiponai.org/aiincidentdatabase/</ref><section end=incidentdatabase.ai />
</ref> was founded by Charlie Pownall. The [https://www.aiaaic.org/aiaaic-repository '''AIAAIC repository''' at aiaaic.org] contains extensive reporting on problematic uses of AI.<section end=AIAAIC.org /> The domain name aiaaic.org was registered on Tuesday 2021-02-23.<ref>whois aiaaic.org</ref> The AIAAIC repository is a free, open resource which anyone can use, copy, redistribute and adapt under the terms of its [https://creativecommons.org/licenses/by/4.0/ CC BY 4.0 license].<ref>https://charliepownall.com/ai-algorithimic-incident-controversy-database/</ref>
'''Help for victims of image- or audio-based abuse'''
<section begin=cybercivilrights.org />* [https://cybercivilrights.org/ '''Cyber Civil Rights Initiative''' at cybercivilrights.org], a US-based NGO.<ref group="contact" name="CCRI">
Contact '''Cyber Civil Rights Initiative''' at cybercivilrights.org
* https://www.facebook.com/CyberCivilRightsInitiative
</ref> [https://cybercivilrights.org/about/ '''History / Mission / Vision''' of cybercivilrights.org]
** [https://cybercivilrights.org/faqs-usvictims/ '''''Get help now''''' - '''CCRI Safety Center''' at cybercivilrights.org] - '''CCRI Image Abuse Helpline''' - ''If you are a victim of Image-Based Sexual Abuse (IBSA), please call the CCRI Image Abuse Helpline at 1-844-878-2274, which is available free of charge, 24/7.''
** [https://cybercivilrights.org/existing-laws/ '''Existing Nonconsensual Pornography, Sextortion, and Deep Fake Laws''' at cybercivilrights.org]
*** [https://cybercivilrights.org/deep-fake-laws/ '''Deep Fake Laws''' in the USA at cybercivilrights.org]
*** [https://cybercivilrights.org/sextortion-laws/ '''Sextortion Laws''' in the USA at cybercivilrights.org]
*** [https://cybercivilrights.org/nonconsensual-pornagraphy-laws/ '''Nonconsensual Pornography Laws''' in the USA at cybercivilrights.org]<section end=cybercivilrights.org />
* <section begin=Report Remove />[https://www.childline.org.uk/info-advice/bullying-abuse-safety/online-mobile-safety/remove-nude-image-shared-online/ '''Report Remove: ''Remove a nude image shared online''''' at childline.org.uk]<ref group="1st seen in">https://www.iwf.org.uk/our-technology/report-remove/</ref>. Report Remove is a service for under-19-year-olds by [[w:Childline]], a UK service by the [[w:National Society for the Prevention of Cruelty to Children]] (NSPCC), powered by technology from the [[w:Internet Watch Foundation]]. - ''Childline is here to help anyone under 19 in the UK with any issue they’re going through.'' Info on [https://www.iwf.org.uk/our-technology/report-remove/ '''Report Remove''' at iwf.org.uk]<section end=Report Remove />
* <section begin=badassarmy.org />[https://badassarmy.org/ '''Battling Against Demeaning and Abusive Selfie Sharing''' at badassarmy.org] has compiled a [https://badassarmy.org/revenge-porn-laws-by-state/ '''list of revenge porn laws by US states''']<section end=badassarmy.org />

'''Awareness and countermeasures'''
* [https://www.iwf.org.uk/ The '''Internet Watch Foundation''' at iwf.org.uk]<ref group="contact">
The '''Internet Watch Foundation''' at iwf.org.uk
</ref> [https://www.sagaftra.org/action-alert-support-california-bill-end-deepfake-porn SAG-AFTRA ACTION ALERT: '''"Support California Bill to End Deepfake Porn"''' at sagaftra.org '''endorses'''] [https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=201920200SB564 California Senate Bill SB 564], introduced to the [[w:California State Senate]] by [[w:California]] [[w:Connie Leyva|Senator Connie Leyva]] in February '''2019'''.
=== Organizations for media forensics ===
[[File:DARPA_Logo.jpg|thumb|right|240px|The Defense Advanced Research Projects Agency, better known as [[w:DARPA]], has been active in countering synthetic fake video for longer than the public has been aware that the problem exists.]]
<section begin=other organizations />
=== Organizations possibly against synthetic human-like fakes ===
Originally harvested from the study [https://www.europarl.europa.eu/RegData/etudes/STUD/2020/634452/EPRS_STU(2020)634452_EN.pdf The ethics of artificial intelligence: Issues and initiatives (.pdf)] by the [[w:European Parliamentary Research Service]], published on the [[w:Europa (web portal)]] in March 2020.<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020">
</ref>
=== Other essential developments ===
* [https://www.montrealdeclaration-responsibleai.com/ '''The Montréal Declaration for a Responsible Development of Artificial Intelligence''' at montrealdeclaration-responsibleai.com]<ref group="contact">
* [https://www.eu-robotics.net/ '''European Robotics Platform''' at eu-robotics.net] is funded by the [[w:European Commission]]. See [[w:European Robotics Platform]] and [[w:List of European Union robotics projects#EUROP]] for more info.<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/>
=== Events against synthetic human-like fakes ===

==== Upcoming events ====
In reverse chronological order

* '''UPCOMING 2023''' | [https://worldantibullyingforum.com '''World Anti-Bullying Forum'''] [https://worldantibullyingforum.com/news/open-call-for-hosting-world-anti-bullying-forum-2023/ ''Open Call for Hosting World Anti-Bullying Forum 2023'' at worldantibullyingforum.com]
* '''UPCOMING 2022''' | '''[[w:European Conference on Computer Vision]]''' in Tel Aviv, Israel
==== Ongoing events ====

* '''2020 - ONGOING''' | '''[[w:National Institute of Standards and Technology]]''' ('''NIST''') ([https://www.nist.gov/ nist.gov]) ([https://www.nist.gov/about-nist/contact-us contacting NIST]) | Open Media Forensics Challenge presented in [https://www.nist.gov/itl/iad/mig/open-media-forensics-challenge '''Open Media Forensics Challenge''' at nist.gov] and [https://mfc.nist.gov/ '''Open Media Forensics Challenge''' ('''OpenMFC''') at mfc.nist.gov]<ref group="contact">
</ref> - ''Open Media Forensics Challenge Evaluation (OpenMFC) is an open evaluation series organized by the NIST to assess and measure the capability of media forensic algorithms and systems.''<ref>https://www.nist.gov/itl/iad/mig/open-media-forensics-challenge</ref>
==== Past events ====
* '''2022''' | [https://law.yale.edu/isp/events/technologies-deception '''Technologies of Deception''' at law.yale.edu], a conference hosted by the [[w:Information Society Project]] (ISP), held at Yale Law School in New Haven, Connecticut, on March 25-26, 2022<ref>https://law.yale.edu/isp/events/technologies-deception</ref>
* '''2021''' | '''[[w:Conference on Neural Information Processing Systems]]''' [https://neurips.cc/ '''NeurIPS 2021''' at neurips.cc], was held virtually in December 2021. I haven't seen any good tech coming from there in 2021. On the problematic side [[w:StyleGAN]]3 was presented there.
* '''2016''' | '''Nimble Challenge 2016''' - NIST released the Nimble Challenge’16 (NC2016) dataset as the MFC program kickoff dataset (NC is the former name of MFC).<ref>https://www.nist.gov/itl/iad/mig/open-media-forensics-challenge</ref>
=== Studies against synthetic human-like fakes ===

* [https://www.cbinsights.com/research/future-of-information-warfare/ ''''Disinformation That Kills: The Expanding Battlefield Of Digital Warfare'''' at cbinsights.com], a '''2020'''-10-21 research brief on disinformation warfare by [[w:CB Insights]], a private company that provides [[w:market intelligence]] and [[w:business analytics]] services
* [https://arxiv.org/abs/2001.06564 ''''Media Forensics and DeepFakes: an overview'''' at arXiv.org] [https://arxiv.org/pdf/2001.06564.pdf (as .pdf at arXiv.org)], an overview of digital look-alikes and media forensics published in August '''2020''' in [https://ieeexplore.ieee.org/xpl/tocresult.jsp?isnumber=9177372 Volume 14 Issue 5 of IEEE Journal of Selected Topics in Signal Processing]. [https://ieeexplore.ieee.org/document/9115874 ''''Media Forensics and DeepFakes: An Overview'''' at ieeexplore.ieee.org] (paywalled, free abstract)
* [https://scholarship.law.duke.edu/cgi/viewcontent.cgi?article=1333&context=dltr ''''DEEPFAKES: False pornography is here and the law cannot protect you'''' at scholarship.law.duke.edu] by Douglas Harris, published in [https://scholarship.law.duke.edu/dltr/vol17/iss1/ Duke Law & Technology Review - Volume 17 on '''2019'''-01-05] by the [[w:Duke University School of Law]]

'''Search for more'''
* [[w:Law review]]
** [[w:List of law reviews in the United States]]

=== Reporting against synthetic human-like fakes ===

* [https://news.berkeley.edu/2019/06/18/researchers-use-facial-quirks-to-unmask-deepfakes/ '''''Researchers use facial quirks to unmask ‘deepfakes’''''' at news.berkeley.edu], 2019-06-18 reporting by Kara Manke published in the ''Politics & society, Research, Technology & engineering'' section of Berkeley News of [[w:University of California, Berkeley|UC Berkeley]].

=== Companies against synthetic human-like fakes ===
See [[resources]] for more.

* '''[https://cyabra.com/ Cyabra.com]''' offers an AI-based system that helps organizations guard against disinformation attacks<ref group="1st seen in" name="ReutersDisinfomation2020">https://www.reuters.com/article/us-cyber-deepfake-activist/deepfake-used-to-attack-activist-couple-shows-new-disinformation-frontier-idUSKCN24G15E</ref>. [https://www.reuters.com/article/us-cyber-deepfake-activist/deepfake-used-to-attack-activist-couple-shows-new-disinformation-frontier-idUSKCN24G15E Reuters.com reporting] from July 2020.
<section end=other organizations />
=== SSFWIKI proposed countermeasure to weaponized synthetic pornography: Outlaw unauthorized synthetic pornography (transcluded) ===
Transcluded from [[Laws against synthesis and related crimes#Law proposal to ban visual synthetic filth|Juho's proposal for banning unauthorized synthetic pornography]]

{{#section-h:Laws against synthesis and related crimes|Law proposal to ban visual synthetic filth}}
=== SSFWIKI proposed countermeasure to weaponized synthetic pornography: Adequate Porn Watcher AI (concept) (transcluded) ===
Transcluded main contents from [[Adequate Porn Watcher AI (concept)]]
{{#lstx:Adequate Porn Watcher AI (concept)|See_also}} | {{#lstx:Adequate Porn Watcher AI (concept)|See_also}} | ||
=== SSFWIKI proposed countermeasure to digital sound-alikes: Outlawing digital sound-alikes (transcluded) ===
Transcluded from [[Laws against synthesis and related crimes#Law proposal to ban unauthorized modeling of human voice|Juho's proposal on banning digital sound-alikes]]

{{#section-h:Laws against synthesis and related crimes|Law proposal to ban unauthorized modeling of human voice}}
----
== Footnotes ==
== 1st seen in == | == 1st seen in == | ||
<references group="1st seen in" /> | <references group="1st seen in" /> | ||
== References == | == References == | ||
<references /> | <references /> |