Adequate Porn Watcher AI
Adequate Porn Watcher AI (APW_AI) is a working title for a w:AI system that would watch and model all pornography ever found on the Internet, in order to police porn for contraband and, especially, to protect humans by exposing digital look-alike attacks.
The purpose of the APW_AI is to provide safety and security to its users, who can briefly upload a model they've gotten of themselves; the APW_AI will then report either that nothing matching was found or that, in its opinion, something matching was found.
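The lookup described above could, in principle, reduce to comparing an uploaded model against an index of models extracted from crawled material and returning only a binary verdict. The following is a minimal illustrative sketch under that assumption; the embedding vectors, the toy index, the `check_model` function and the 0.9 similarity threshold are all hypothetical, not any real APW_AI API.

```python
# Hypothetical sketch: a user's uploaded model is reduced to an embedding
# vector and compared against an index of embeddings; the system answers
# only "something matching found" or "nothing matching found", keeping
# details of any match internal. All values here are illustrative.
from math import sqrt

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def check_model(user_embedding, index, threshold=0.9):
    """Return only a binary verdict, as the article describes."""
    for candidate in index:
        if cosine_similarity(user_embedding, candidate) >= threshold:
            return "something matching found"
    return "nothing matching found"

# Toy index of embeddings extracted from crawled material (made up).
index = [(0.1, 0.9, 0.4), (0.8, 0.1, 0.6)]

print(check_model((0.82, 0.09, 0.58), index))  # near the second entry
print(check_model((0.0, 0.0, 1.0), index))     # near nothing in the index
```

A production system would of course need robust perceptual embeddings and an approximate-nearest-neighbour index rather than a linear scan, but the binary match/no-match interface stays the same.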
If people are able to check whether synthetic porn that looks like them exists, the products of synthetic hate-illustration industrialists lose much of their destructive potential: attacks that do happen are exposed by the APW_AI, which decimates the monetary value of these disinformation weapons to the criminals.
Looking up whether matches are found for anyone else's model is forbidden, and this should probably be enforced with a facial-biometric app that verifies both that the model you want checked is of you and that you are awake.
If you feel comfortable leaving your model with the good people at the benefactor for safekeeping, you will be alerted and helped if you are ever attacked with synthetic porn.
People who openly do porn can help by opting in to the development effort, providing material for training and testing the AI. People and companies who help train the AI are naturally credited for their help.
An adequate implementation should be nearly free of false positives, very good at finding true positives, and able to process more porn than is ever uploaded.
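The adequacy criteria above can be made measurable as a true-positive rate and a false-positive rate over a labeled test set. The sketch below shows how such rates would be computed; the labels and predictions are invented for illustration only.

```python
# Illustrative measurement of the article's adequacy criteria:
# near-zero false-positive rate, high true-positive rate.
# The labeled test data below is made up.
def rates(labels, predictions):
    """labels/predictions: True means 'matching material present/reported'."""
    tp = sum(1 for l, p in zip(labels, predictions) if l and p)
    fp = sum(1 for l, p in zip(labels, predictions) if not l and p)
    fn = sum(1 for l, p in zip(labels, predictions) if l and not p)
    tn = sum(1 for l, p in zip(labels, predictions) if not l and not p)
    tpr = tp / (tp + fn) if tp + fn else 0.0  # sensitivity / recall
    fpr = fp / (fp + tn) if fp + tn else 0.0  # false-alarm rate
    return tpr, fpr

labels      = [True, True, True, False, False, False, False, False]
predictions = [True, True, False, False, False, False, False, False]
tpr, fpr = rates(labels, predictions)
print(f"true-positive rate: {tpr:.2f}, false-positive rate: {fpr:.2f}")
# → true-positive rate: 0.67, false-positive rate: 0.00
```

"Nearly free of false positives" then means an FPR very close to zero even over an enormous volume of queries, which is a much stricter bar than a low percentage, since the system would process more material than is ever uploaded.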
There are of course many human questions around this, and those questions need to be identified by professionals in psychology and the social sciences.
Partial transclusions from Synthetic human-like fakes below
Organizations against synthetic human-like fakes
- The w:DARPA program 'Media Forensics (MediFor)' at darpa.mil aims to develop technologies for the automated assessment of the integrity of an image or video and to integrate these into an end-to-end media forensics platform. Archive.org first crawled their homepage in June 2016.
- The w:DARPA program 'Semantic Forensics (SemaFor)' at darpa.mil aims to counter synthetic disinformation by developing systems for detecting semantic inconsistencies in forged media. They state that they hope to create technologies that "will help identify, deter, and understand adversary disinformation campaigns". More information at w:Duke University's research funding database: Semantic Forensics (SemaFor) at researchfunding.duke.edu and some at the Semantic Forensics grant opportunity (closed Nov 2019) at grants.gov. Archive.org first crawled their website in November 2019.
- The National Center for Media Forensics at artsandmedia.ucdenver.edu at the w:University of Colorado Denver offers a Master's degree program, training courses, and basic and applied scientific research. Faculty staff at the NCMF
- w:SAG-AFTRA's ACTION ALERT: "Support California Bill to End Deepfake Porn" at sagaftra.org endorses California Senate Bill SB 564, introduced to the California State Senate by Senator Connie Leyva in February 2019.
Events against synthetic human-like fakes
- 2020 | CVPR | 2020 Conference on Computer Vision and Pattern Recognition: 'Workshop on Media Forensics' at sites.google.com, a June 2020 workshop at the w:Conference on Computer Vision and Pattern Recognition.
- 2019 | NeurIPS | Facebook, Inc. "Facebook AI Launches Its Deepfake Detection Challenge" at spectrum.ieee.org
- 2019 | CVPR | 2019 CVPR: 'Workshop on Media Forensics'
- Annual (?) | w:National Institute of Standards and Technology (NIST) | NIST: 'Media Forensics Challenge' at nist.gov, an iterative research challenge by the w:National Institute of Standards and Technology; the ongoing challenge is the second one in action, and the evaluation criteria for the 2019 iteration are being formed.
- 2018 | ECCV | ECCV 2018: 'Workshop on Objectionable Content and Misinformation' at sites.google.com, a workshop at the 2018 w:European Conference on Computer Vision.
Studies against synthetic human-like fakes
- 'Media Forensics and DeepFakes: an overview' at arXiv.org (as .pdf at arXiv.org), a 2020 review on the subject of digital look-alikes and media forensics
- 'DEEPFAKES: False pornography is here and the law cannot protect you' at scholarship.law.duke.edu by Douglas Harris, published in Duke Law & Technology Review - Volume 17 on 2019-01-05 by Duke University School of Law
Search for more
Companies against synthetic human-like fakes
- Cyabra.com offers an AI-based system that helps organizations guard against disinformation attacks[1st seen in 1]. Reuters.com reported on it in July 2020.
Biblical explanation - In the books of Daniel and Revelation, specifically Daniel 7 and Revelation 13, we are warned of this age of industrial filth.
Revelation 19:20 says that the beast is taken prisoner; can we achieve this without the APW_AI?
1st seen in