Here you can find [[#Organizations against synthetic human-like fakes|organizations]], [[#Studies against synthetic human-like fakes|studies]] and [[#Events against synthetic human-like fakes|events]] against [[synthetic human-like fakes]] and also [[#Organizations for media forensics|organizations and curricula for media forensics]].

The [[Synthetic human-like fakes#Timeline of synthetic human-like fakes|SSFWIKI timeline of synthetic human-like fakes]] lists both positive and negative developments in reverse chronological order.

For laws in effect and bills in planning against synthetic filth see [[Laws against synthesis and other related crimes]].

<section begin=core organizations />
= Organizations against synthetic human-like fakes =

== AI incident repositories ==
* <section begin=incidentdatabase.ai />The [https://incidentdatabase.ai/ '''''AI Incident Database''''' at incidentdatabase.ai] was introduced on 2020-11-18 by the [[w:Partnership on AI]].<ref name="PartnershipOnAI2020">https://www.partnershiponai.org/aiincidentdatabase/</ref><section end=incidentdatabase.ai />
</ref> was founded by Charlie Pownall. The [https://www.aiaaic.org/aiaaic-repository '''AIAAIC repository''' at aiaaic.org] contains extensive reporting on problematic uses of AI.<section end=AIAAIC.org /> The domain name aiaaic.org was registered on Tuesday 2021-02-23.<ref>whois aiaaic.org</ref> The AIAAIC repository is a free, open resource which anyone can use, copy, redistribute and adapt under the terms of its [https://creativecommons.org/licenses/by/4.0/ CC BY 4.0 license].<ref>https://charliepownall.com/ai-algorithimic-incident-controversy-database/</ref>
* <section begin=oecd.ai />[https://oecd.ai/en/ '''The OECD.AI Policy Observatory''' at oecd.ai], in conjunction with the Patrick J McGovern Foundation, provides the [https://oecd.ai/en/incidents '''OECD AI Incidents Monitor''' ('''AIM''') at oecd.ai]<section end=oecd.ai />
* <section begin=AJL.org />The [[w:Algorithmic Justice League]] is also accepting reports of AI harms at [https://report.ajl.org/ report.ajl.org]<section end=AJL.org />

== Help for victims of image- or audio-based abuse ==
<section begin=cybercivilrights.org />* [https://cybercivilrights.org/ '''Cyber Civil Rights Initiative''' at cybercivilrights.org], a US-based NGO.<ref group="contact" name="CCRI">
Contact '''Cyber Civil Rights Initiative''' at cybercivilrights.org
* https://www.facebook.com/CyberCivilRightsInitiative
</ref> [https://cybercivilrights.org/about/ '''History / Mission / Vision''' of cybercivilrights.org]. [https://cybercivilrights.org/faqs-usvictims/ '''''Get help now''''' - '''CCRI Safety Center''' at cybercivilrights.org] - '''CCRI Image Abuse Helpline''' - ''If you are a victim of Image-Based Sexual Abuse (IBSA), please call the CCRI Image Abuse Helpline at 1-844-878-2274, which is available free of charge, 24/7.''
<section begin=cybercivilrights.org law compilations />* [https://cybercivilrights.org/existing-laws/ '''Existing Nonconsensual Pornography, Sextortion, and Deep Fake Laws''' at cybercivilrights.org]
** [https://cybercivilrights.org/deep-fake-laws/ '''Deep Fake Laws''' in the USA at cybercivilrights.org]
** [https://cybercivilrights.org/sextortion-laws/ '''Sextortion Laws''' in the USA at cybercivilrights.org]
** [https://cybercivilrights.org/nonconsensual-pornagraphy-laws/ '''Nonconsensual Pornography Laws''' in the USA at cybercivilrights.org]<section end=cybercivilrights.org law compilations /><section end=cybercivilrights.org />
* <section begin=Report Remove />[https://www.childline.org.uk/info-advice/bullying-abuse-safety/online-mobile-safety/remove-nude-image-shared-online/ '''Report Remove: ''Remove a nude image shared online''''' at childline.org.uk]<ref group="1st seen in">https://www.iwf.org.uk/our-technology/report-remove/</ref>. Report Remove is a service for under-19s by [[w:Childline]], a UK service by the [[w:National Society for the Prevention of Cruelty to Children]] (NSPCC), powered by technology from the [[w:Internet Watch Foundation]]. - ''Childline is here to help anyone under 19 in the UK with any issue they’re going through.'' Info on [https://www.iwf.org.uk/our-technology/report-remove/ '''Report Remove''' at iwf.org.uk]<section end=Report Remove />
== Awareness and countermeasures ==
* [https://www.iwf.org.uk/ The '''Internet Watch Foundation''' at iwf.org.uk]<ref group="contact">
The '''Internet Watch Foundation''' at iwf.org.uk
</ref> [https://www.sagaftra.org/action-alert-support-california-bill-end-deepfake-porn SAG-AFTRA ACTION ALERT: '''"Support California Bill to End Deepfake Porn"''' at sagaftra.org '''endorses'''] [https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=201920200SB564 California Senate Bill SB 564] introduced to the [[w:California State Senate]] by [[w:California]] [[w:Connie Leyva|Senator Connie Leyva]] in Feb '''2019'''.
== Organizations for media forensics ==
[[File:DARPA_Logo.jpg|thumb|right|240px|The Defense Advanced Research Projects Agency, better known as [[w:DARPA]], has been active in the field of countering synthetic fake video for longer than the public has been aware that such problems exist.]]
<section begin=other organizations />
== Organizations possibly against synthetic human-like fakes ==
Originally harvested from the study [https://www.europarl.europa.eu/RegData/etudes/STUD/2020/634452/EPRS_STU(2020)634452_EN.pdf The ethics of artificial intelligence: Issues and initiatives (.pdf)] by the [[w:European Parliamentary Research Service]], published on the [[w:Europa (web portal)]] in March 2020.<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020">
</ref>

== Services that should get back to the task at hand - FacePinPoint.com ==
Transcluded from [[FacePinPoint.com]]
{{#lst:FacePinPoint.com|FacePinPoint.com}}

== Other essential developments ==
* [https://www.montrealdeclaration-responsibleai.com/ '''The Montréal Declaration for a Responsible Development of Artificial Intelligence''' at montrealdeclaration-responsibleai.com]<ref group="contact">
* [https://www.eu-robotics.net/ '''European Robotics Platform''' at eu-robotics.net] is funded by the [[w:European Commission]]. See [[w:European Robotics Platform]] and [[w:List of European Union robotics projects#EUROP]] for more info.<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/>
= Studies against synthetic human-like fakes =
== Detecting deep-fake audio through vocal tract reconstruction ==
{{#lst:Detecting deep-fake audio through vocal tract reconstruction|what-is-it}}

== Protecting world leaders against deep fakes using facial, gestural, and vocal mannerisms ==
{{#lst:Protecting world leaders against deep fakes using facial, gestural, and vocal mannerisms|what-is-it}}

== Protecting President Zelenskyy against deep fakes ==
{{#lst:Protecting President Zelenskyy against deep fakes|what-is-it}}

== Other studies against synthetic human-like fakes ==
* [https://www.icct.nl/sites/default/files/2023-12/The%20Weaponisation%20of%20Deepfakes.pdf '''''The Weaponisation of Deepfakes - Digital Deception by the Far-Right''''' at icct.nl], an [[w:International Centre for Counter-Terrorism]] policy brief by Ella Busch and Jacob Ware. Published in December 2023.
* [https://media.defense.gov/2023/Sep/12/2003298925/-1/-1/0/CSI-DEEPFAKE-THREATS.PDF '''''Contextualizing Deepfake Threats to Organizations''''' - '''Cybersecurity Information Sheet''' at media.defense.gov]
* [https://www.dhs.gov/sites/default/files/publications/increasing_threats_of_deepfake_identities_0.pdf '''''Increasing Threats of Deepfake Identities''''' at dhs.gov] by the [[w:United States Department of Homeland Security]]
* [https://www.dhs.gov/sites/default/files/2022-10/AEP%20DeepFake%20PHASE2%20FINAL%20corrected20221006.pdf '''''Increasing Threats of Deepfake Identities''' – '''Phase 2: Mitigation Measures''''' at dhs.gov]
* [https://link.springer.com/article/10.1007/s13347-023-00657-0 '''''Deepfake Pornography and the Ethics of Non-Veridical Representations''''' at link.springer.com], a research article published in [[w:Philosophy & Technology]] on 2023-08-23. (paywalled)
* [https://digitalcommons.law.uidaho.edu/cgi/viewcontent.cgi?article=1252&context=idaho-law-review '''''NONCONSENSUAL DEEPFAKES: DETECTING AND REGULATING THE RISING THREAT TO PRIVACY''''' at digitalcommons.law.uidaho.edu] by Natalie Lussier, published in Idaho Law Review in January 2022
* [https://www.cbinsights.com/research/future-of-information-warfare/ '''''Disinformation That Kills: The Expanding Battlefield Of Digital Warfare''''' at cbinsights.com], a '''2020'''-10-21 research brief on disinformation warfare by [[w:CB Insights]], a private company that provides [[w:market intelligence]] and [[w:business analytics]] services
* [https://arxiv.org/abs/2001.06564 '''''Media Forensics and DeepFakes: an overview''''' at arXiv.org] [https://arxiv.org/pdf/2001.06564.pdf (as .pdf at arXiv.org)], an overview of digital look-alikes and media forensics published in August '''2020''' in [https://ieeexplore.ieee.org/xpl/tocresult.jsp?isnumber=9177372 Volume 14, Issue 5 of the IEEE Journal of Selected Topics in Signal Processing]. [https://ieeexplore.ieee.org/document/9115874 '''''Media Forensics and DeepFakes: An Overview''''' at ieeexplore.ieee.org] (paywalled, free abstract)
* [https://scholarship.law.duke.edu/cgi/viewcontent.cgi?article=1333&context=dltr '''''DEEPFAKES: False pornography is here and the law cannot protect you''''' at scholarship.law.duke.edu] by Douglas Harris, published in [https://scholarship.law.duke.edu/dltr/vol17/iss1/ Duke Law & Technology Review - Volume 17 on '''2019'''-01-05] by [[w:Duke University School of Law]]
'''Legal information compilations'''
{{#lst:Laws against synthesis and other related crimes|anti-fake-law-compilations}}

More studies can be found in the [[Synthetic human-like fakes#Timeline of synthetic human-like fakes|SSFWIKI Timeline of synthetic human-like fakes]].

''' Search for more '''
* [[w:Law review]]
** [[w:List of law reviews in the United States]]

''' Reporting against synthetic human-like fakes '''
* [https://news.berkeley.edu/2019/06/18/researchers-use-facial-quirks-to-unmask-deepfakes/ '''''Researchers use facial quirks to unmask ‘deepfakes’''''' at news.berkeley.edu] 2019-06-18 reporting by Kara Manke published in the ''Politics & society, Research, Technology & engineering'' section of Berkeley News of [[w:University of California, Berkeley|UC Berkeley]].
''' Companies against synthetic human-like fakes '''
See [[resources]] for more.
* '''[https://cyabra.com/ Cyabra.com]''' is an AI-based system that helps organizations guard against disinformation attacks<ref group="1st seen in" name="ReutersDisinfomation2020">https://www.reuters.com/article/us-cyber-deepfake-activist/deepfake-used-to-attack-activist-couple-shows-new-disinformation-frontier-idUSKCN24G15E</ref>. [https://www.reuters.com/article/us-cyber-deepfake-activist/deepfake-used-to-attack-activist-couple-shows-new-disinformation-frontier-idUSKCN24G15E Reuters.com reporting] from July 2020.
= Events against synthetic human-like fakes =

== Upcoming events ==
In reverse chronological order

== Ongoing events ==
* '''2020 - ONGOING''' | '''[[w:National Institute of Standards and Technology]]''' ('''NIST''') ([https://www.nist.gov/ nist.gov]) ([https://www.nist.gov/about-nist/contact-us contacting NIST]) | Open Media Forensics Challenge presented in [https://www.nist.gov/itl/iad/mig/open-media-forensics-challenge '''Open Media Forensics Challenge''' at nist.gov] and [https://mfc.nist.gov/ '''Open Media Forensics Challenge''' ('''OpenMFC''') at mfc.nist.gov]<ref group="contact">
</ref> - ''Open Media Forensics Challenge Evaluation (OpenMFC) is an open evaluation series organized by the NIST to assess and measure the capability of media forensic algorithms and systems.''<ref>https://www.nist.gov/itl/iad/mig/open-media-forensics-challenge</ref>
== Past events ==
* '''2023''' | [https://worldantibullyingforum.com '''World Anti-Bullying Forum'''] October 25-27 in North Carolina, U.S.A.
* '''2022''' | '''[[w:European Conference on Computer Vision]]''' in Tel Aviv, Israel
* '''2016''' | '''Nimble Challenge 2016''' - NIST released the Nimble Challenge’16 (NC2016) dataset as the MFC program kickoff dataset (where NC is the former name of MFC).<ref>https://www.nist.gov/itl/iad/mig/open-media-forensics-challenge</ref>
<section end=other organizations />
= SSFWIKI proposed countermeasure to weaponized synthetic pornography: Outlaw unauthorized synthetic pornography (transcluded) =
Transcluded from [[Laws against synthesis and other related crimes#Law proposal to ban visual synthetic filth|Juho's proposal for banning unauthorized synthetic pornography]]
{{#section-h:Laws against synthesis and other related crimes|Law proposal to ban visual synthetic filth}}

= SSFWIKI proposed countermeasure to weaponized synthetic pornography: Adequate Porn Watcher AI (concept) (transcluded) =
Transcluded main contents from [[Adequate Porn Watcher AI (concept)]]
{{#lstx:Adequate Porn Watcher AI (concept)|See_also}}

= SSFWIKI proposed countermeasure to digital sound-alikes: Outlawing digital sound-alikes (transcluded) =
Transcluded from [[Laws against synthesis and other related crimes#Law proposal to ban unauthorized modeling of human voice|Juho's proposal on banning digital sound-alikes]]
----
== About this article ==
Transcluded in this article are
* [[FacePinPoint.com]], a crucial past service by Lionel Hagege
* [[Adequate Porn Watcher AI (concept)]], an AI concept practically identical to FacePinPoint.com
* [[Laws against synthesis and other related crimes#Law proposal to ban visual synthetic filth|A law proposal against synthetic non-consensual pornography]]
* [[Laws against synthesis and other related crimes#Law proposal to ban unauthorized modeling of human voice|A law proposal against digital sound-alikes]]

In [[resources]] there are likely a few more services that could fit here.
== Footnotes ==
== 1st seen in ==
<references group="1st seen in" />

== References ==
<references />