* [[FacePinPoint.com]], a crucial past service by Lionel Hagege
* [[Adequate Porn Watcher AI (concept)]], an AI concept practically identical with FacePinPoint.com
* [[Laws against synthesis and other related crimes#Law proposal to ban visual synthetic filth|A law proposal against synthetic non-consensual pornography]]
* [[Laws against synthesis and other related crimes#Law proposal to ban unauthorized modeling of human voice|A law proposal against digital sound-alikes]]

For laws and bills in planning against synthetic filth see [[Laws against synthesis and other related crimes]].

In [[resources]] there are likely a few services that would fit here.
<section begin=core organizations />
=== Organizations against synthetic human-like fakes ===

'''AI incident repositories'''
* <section begin=incidentdatabase.ai />The [https://incidentdatabase.ai/ '''''AI Incident Database''''' at incidentdatabase.ai] was introduced on 2020-11-18 by the [[w:Partnership on AI]].<ref name="PartnershipOnAI2020">https://www.partnershiponai.org/aiincidentdatabase/</ref><section end=incidentdatabase.ai />
</ref>, a human rights non-profit organization based in Brooklyn, New York, has been actively working against synthetic filth since 2018. They work both in awareness raising and in media forensics.
** [https://lab.witness.org/projects/osint-digital-forensics/ '''Open-source intelligence digital forensics''' - ''How do we work together to detect AI-manipulated media?'' at lab.witness.org]. "''In February '''2019''' WITNESS in association with [[w:George Washington University]] brought together a group of leading researchers in [[Glossary#Media forensics|media forensics]] and [[w:detection]] of [[w:deepfakes]] and other [[w:media manipulation]] with leading experts in social newsgathering, [[w:User-generated content]] and [[w:open-source intelligence]] ([[w:OSINT]]) verification and [[w:fact-checking]].''" (website)
** [https://lab.witness.org/projects/synthetic-media-and-deep-fakes/ '''Prepare, Don’t Panic: Synthetic Media and Deepfakes''' at lab.witness.org] is a summary page for WITNESS Media Lab's ongoing work against synthetic human-like fakes. Their work was launched in '''2018''' with the first multi-disciplinary convening around deepfakes preparedness, which led to the writing of the [http://witness.mediafire.com/file/q5juw7dc3a2w8p7/Deepfakes_Final.pdf/file '''report''' “'''Mal-uses of AI-generated Synthetic Media and Deepfakes: Pragmatic Solutions Discovery Convening'''”] (dated 2018-06-11). See also [https://blog.witness.org/2018/07/deepfakes/ '''''Deepfakes and Synthetic Media: What should we fear? What can we do?''''' at blog.witness.org].
* '''[[w:Financial Coalition Against Child Pornography]]''' could be interested in also taking down payment channels for sites distributing non-consensual synthetic pornography.
<section begin=other organizations />
=== Organizations possibly against synthetic human-like fakes ===

Originally harvested from the study [https://www.europarl.europa.eu/RegData/etudes/STUD/2020/634452/EPRS_STU(2020)634452_EN.pdf The ethics of artificial intelligence: Issues and initiatives (.pdf)] by the [[w:European Parliamentary Research Service]], published on the [[w:Europa (web portal)]] in March 2020.<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020">
* [https://www.eu-robotics.net/ '''European Robotics Platform''' at eu-robotics.net] is funded by the [[w:European Commission]]. See [[w:European Robotics Platform]] and [[w:List of European Union robotics projects#EUROP]] for more info.<ref group="1st seen in" name="EU-Parl-Ethical-AI-Study-2020"/>

=== Events against synthetic human-like fakes ===

==== Upcoming events ====
In reverse chronological order
* '''UPCOMING 2023''' | [https://worldantibullyingforum.com '''World Anti-Bullying Forum'''] [https://worldantibullyingforum.com/news/open-call-for-hosting-world-anti-bullying-forum-2023/ ''Open Call for Hosting World Anti-Bullying Forum 2023'' at worldantibullyingforum.com]
* '''UPCOMING 2022''' | '''[[w:European Conference on Computer Vision]]''' in Tel Aviv, Israel

==== Ongoing events ====
* '''2020 - ONGOING''' | '''[[w:National Institute of Standards and Technology]]''' ('''NIST''') ([https://www.nist.gov/ nist.gov]) ([https://www.nist.gov/about-nist/contact-us contacting NIST]) | Open Media Forensics Challenge presented in [https://www.nist.gov/itl/iad/mig/open-media-forensics-challenge '''Open Media Forensics Challenge''' at nist.gov] and [https://mfc.nist.gov/ '''Open Media Forensics Challenge''' ('''OpenMFC''') at mfc.nist.gov]<ref group="contact">
</ref> - ''Open Media Forensics Challenge Evaluation (OpenMFC) is an open evaluation series organized by the NIST to assess and measure the capability of media forensic algorithms and systems.''<ref>https://www.nist.gov/itl/iad/mig/open-media-forensics-challenge</ref>
==== Past events ====
* '''2022''' | [https://law.yale.edu/isp/events/technologies-deception '''Technologies of Deception''' at law.yale.edu], a conference hosted by the [[w:Information Society Project]] (ISP), held at Yale Law School in New Haven, Connecticut, on March 25-26, 2022<ref>https://law.yale.edu/isp/events/technologies-deception</ref>
* '''2021''' | '''[[w:Conference on Neural Information Processing Systems]]''' [https://neurips.cc/ '''NeurIPS 2021''' at neurips.cc] was held virtually in December 2021. I haven't seen any good tech coming from there in 2021. On the problematic side, [[w:StyleGAN]]3 was presented there.
* '''2021''' | '''[[w:Conference on Computer Vision and Pattern Recognition]] (CVPR)''' 2021 [https://cvpr2021.thecvf.com/ '''CVPR 2021''' at cvpr2021.thecvf.com]
* '''2020''' | The winners of the [https://venturebeat.com/2020/06/12/facebook-detection-challenge-winners-spot-deepfakes-with-82-accuracy/ Deepfake Detection Challenge reach 82% accuracy in detecting synthetic human-like fakes]<ref name="VentureBeat2020">https://venturebeat.com/2020/06/12/facebook-detection-challenge-winners-spot-deepfakes-with-82-accuracy/</ref>
* '''2020''' | [https://www.ftc.gov/news-events/events/2020/01/you-dont-say-ftc-workshop-voice-cloning-technologies '''''You Don't Say: An FTC Workshop on Voice Cloning Technologies''''' at ftc.gov] was held on Tuesday 2020-01-28 - [https://venturebeat.com/2020/01/29/ftc-voice-cloning-seminar-crime-use-cases-safeguards-ai-machine-learning/ reporting at venturebeat.com]
* '''2019''' | At the annual Finnish [[w:Ministry of Defence (Finland)|Ministry of Defence]]'s '''Scientific Advisory Board for Defence''' ('''MATINE''') public research seminar, a research group presented their work [https://www.defmin.fi/files/4755/1315MATINE_seminaari_21.11.pdf '''''Synteettisen median tunnistus''''' at defmin.fi] (''Recognizing synthetic media''). They built on earlier work on how to automatically detect synthetic human-like fakes, and their work was funded with a grant from MATINE.
* '''2016''' | '''Nimble Challenge 2016''' - NIST released the Nimble Challenge’16 (NC2016) dataset as the MFC program kickoff dataset (where NC is the former name of MFC).<ref>https://www.nist.gov/itl/iad/mig/open-media-forensics-challenge</ref>

=== Studies against synthetic human-like fakes ===
* [https://www.cbinsights.com/research/future-of-information-warfare/ '''''Disinformation That Kills: The Expanding Battlefield Of Digital Warfare''''' at cbinsights.com], a '''2020'''-10-21 research brief on disinformation warfare by [[w:CB Insights]], a private company that provides [[w:market intelligence]] and [[w:business analytics]] services
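The headline accuracy figures reported for detection challenges such as the Deepfake Detection Challenge above are, at their simplest, the fraction of test items the detector labels correctly. A minimal sketch of that computation (all scores, labels and the threshold below are made-up illustrations, not data from any actual challenge):

```python
# Illustrative sketch: how a headline accuracy figure (e.g. the 82% of the
# Deepfake Detection Challenge winners) is computed from detector outputs.
# All numbers below are invented for demonstration purposes.

def detection_accuracy(predictions, ground_truth):
    """Fraction of items the detector classified correctly (1 = fake, 0 = real)."""
    correct = sum(p == t for p, t in zip(predictions, ground_truth))
    return correct / len(ground_truth)

# A detector typically emits a "fakeness" score per video; hard labels
# are obtained by thresholding the score, here at 0.5.
scores = [0.91, 0.15, 0.67, 0.42, 0.88]
predictions = [1 if s >= 0.5 else 0 for s in scores]
ground_truth = [1, 0, 1, 1, 1]

print(detection_accuracy(predictions, ground_truth))  # prints 0.8
```

Real evaluations such as NIST's OpenMFC also report threshold-independent measures (e.g. ROC curves) rather than a single accuracy number, since accuracy depends on the chosen threshold and on the mix of real and fake items in the test set.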
** [[w:List of law reviews in the United States]]

=== Reporting against synthetic human-like fakes ===
* [https://news.berkeley.edu/2019/06/18/researchers-use-facial-quirks-to-unmask-deepfakes/ '''''Researchers use facial quirks to unmask ‘deepfakes’''''' at news.berkeley.edu], 2019-06-18 reporting by Kara Manke, published in the ''Politics & society, Research, Technology & engineering'' section of Berkeley News of [[w:University of California, Berkeley|UC Berkeley]].

=== Companies against synthetic human-like fakes ===

See [[resources]] for more.
=== SSFWIKI proposed countermeasure to weaponized synthetic pornography: Outlaw unauthorized synthetic pornography (transcluded) ===
Transcluded from [[Laws against synthesis and other related crimes#Law proposal to ban visual synthetic filth|Juho's proposal for banning unauthorized synthetic pornography]]

{{#section-h:Laws against synthesis and other related crimes|Law proposal to ban visual synthetic filth}}

=== SSFWIKI proposed countermeasure to weaponized synthetic pornography: Adequate Porn Watcher AI (concept) (transcluded) ===
=== SSFWIKI proposed countermeasure to digital sound-alikes: Outlawing digital sound-alikes (transcluded) ===
Transcluded from [[Laws against synthesis and other related crimes#Law proposal to ban unauthorized modeling of human voice|Juho's proposal on banning digital sound-alikes]]

{{#section-h:Laws against synthesis and other related crimes|Law proposal to ban unauthorized modeling of human voice}}

----