Please sign and share the petition 'Tighten regulation on taking, making and faking explicit images' at Change.org, initiated by Helen Mort and addressed to the w:Law Commission (England and Wales), to properly update UK laws against synthetic filth. Only a name and an email address are required to sign; there is no nationality requirement.
Adequate Porn Watcher AI (concept)
Adequate Porn Watcher AI (APW_AI) is an w:AI and w:computer vision concept to search for any and all porn that should not exist, by watching and modeling all porn ever found on the w:Internet, thus effectively protecting humans by exposing covert naked digital look-alike attacks, as well as other contraband.
The method and the effect
The method by which APW_AI would provide safety and security to its users is that they can briefly upload a model they have made of themselves, and APW_AI will then report either that nothing matching was found or that, in its opinion, something matching was found.
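The lookup flow described above can be sketched as follows. Everything here (the model representation, the similarity measure, the threshold, all names) is a hypothetical illustration of the concept, not a specification of any real system.

```python
# Hypothetical sketch of the APW_AI lookup flow: a user uploads a model
# of themselves, and the system reports whether anything in its corpus
# of modeled porn matches it closely enough.

from dataclasses import dataclass


@dataclass
class Model:
    """Stand-in for an appearance model: a tuple of feature values."""
    features: tuple


def similarity(a, b):
    """Toy similarity: inverse of summed absolute feature differences."""
    d = sum(abs(x - y) for x, y in zip(a.features, b.features))
    return 1.0 / (1.0 + d)


def check_model(user_model, corpus, threshold=0.9):
    """Report whether any modeled item resembles the uploaded model."""
    for item in corpus:
        if similarity(user_model, item) >= threshold:
            return "something matching found"
    return "nothing matching found"


corpus = [Model((0.1, 0.2, 0.3)), Model((0.9, 0.8, 0.7))]
print(check_model(Model((0.1, 0.2, 0.3)), corpus))  # something matching found
print(check_model(Model((0.5, 0.5, 0.5)), corpus))  # nothing matching found
```

A real implementation would of course operate on learned appearance embeddings at Internet scale; this only illustrates the binary "found / not found" answer the concept calls for.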
If people are able to check whether synthetic porn that looks like them exists, the products of synthetic hate-illustration industrialists lose much of their destructive potential: the attacks that do happen are less damaging because they are exposed by APW_AI, which decimates the monetary value of these disinformation weapons to the criminals.
If you feel comfortable leaving your model with the good people at the benefactor for safekeeping, you will be alerted and helped if you are ever attacked with synthetic porn.
Looking up whether matches are found for anyone else's model is forbidden, and this should probably be enforced with a w:biometric w:facial recognition app that verifies both that the model you want checked is yours and that you are awake.
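The access-control rule above reduces to a simple conjunction, sketched below. Both checks are stand-ins for real biometric identification and liveness-detection systems; the function and parameter names are hypothetical.

```python
# Hypothetical sketch of the lookup access-control rule: a lookup is
# permitted only when a live facial scan of the requester matches the
# registered owner of the model being checked.

def may_look_up(requester_face_id, model_owner_face_id, liveness_passed):
    """Allow a lookup only for one's own model, by an awake person."""
    return liveness_passed and requester_face_id == model_owner_face_id


print(may_look_up("face-123", "face-123", liveness_passed=True))   # True
print(may_look_up("face-123", "face-456", liveness_passed=True))   # False: not your model
print(may_look_up("face-123", "face-123", liveness_passed=False))  # False: liveness failed
```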
Definition of adequacy
An adequate implementation should be nearly free of false positives, very good at finding true positives and able to process more porn than is ever uploaded.
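The three adequacy criteria above can be expressed as a concrete check. The metric names are standard (false positive rate, recall), but the thresholds and evaluation counts below are invented for illustration only.

```python
# Sketch of testing an implementation against the adequacy criteria:
# nearly free of false positives, very good at finding true positives,
# and able to process more porn than is ever uploaded.

def adequacy(tp, fp, fn, tn, processed_per_day, uploaded_per_day,
             max_fpr=0.001, min_recall=0.95):
    """Return True if all three adequacy criteria hold (thresholds are
    illustrative assumptions, not values from the concept)."""
    fpr = fp / (fp + tn) if (fp + tn) else 0.0      # false positive rate
    recall = tp / (tp + fn) if (tp + fn) else 0.0   # true positive rate
    keeps_up = processed_per_day > uploaded_per_day  # outpaces uploads
    return fpr <= max_fpr and recall >= min_recall and keeps_up


# Hypothetical evaluation counts and daily volumes:
print(adequacy(tp=950, fp=1, fn=50, tn=99_999,
               processed_per_day=2_000_000, uploaded_per_day=1_500_000))  # True
```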
What about the people in the porn-industry?
People who openly do porn can help by opting in to the development, providing training material and material to test the AI on. People and companies who help train the AI naturally get credited for their help.
There are, of course, many human questions raised by this, and those questions need to be identified by professionals in psychology and the social sciences.
The idea of APW_AI occurred to User:Juho Kunsola on Friday 2019-07-12. Subsequently (the next day) this discovery caused the scrapping of the plea to ban covert modeling of human appearance, as such a ban would have rendered APW_AI legally impossible.
Traditional porn blocking, as done by w:some countries, seems to use w:DNS to deny access to porn sites: the resolver checks whether the domain name matches an item in a porn-site database, and if it does, returns an unroutable address, usually w:0.0.0.0.
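The hosts-file / DNS-blocking mechanism described above can be sketched in a few lines. The blocklist contents and domain names below are hypothetical examples; real deployments use the large curated blocklists listed in the following sections.

```python
# Minimal sketch of hosts-file-based DNS blocking: entries map blocked
# domains to the unroutable address 0.0.0.0, and the resolver consults
# the blocklist before giving a real answer.

UNROUTABLE = "0.0.0.0"


def parse_hosts(lines):
    """Parse hosts-file lines like '0.0.0.0 example.org' into a set of
    blocked domains, ignoring comments and blank lines."""
    blocked = set()
    for line in lines:
        line = line.split("#", 1)[0].strip()
        if not line:
            continue
        parts = line.split()
        if parts[0] == UNROUTABLE:
            blocked.update(parts[1:])
    return blocked


def resolve(domain, blocked, real_lookup):
    """Return the unroutable address for blocked domains, otherwise
    delegate to the real DNS lookup."""
    if domain in blocked:
        return UNROUTABLE
    return real_lookup(domain)


blocklist = ["# example blocklist", "0.0.0.0 blocked.example", "0.0.0.0 another.example"]
blocked = parse_hosts(blocklist)
print(resolve("blocked.example", blocked, lambda d: "93.184.216.34"))  # 0.0.0.0
print(resolve("allowed.example", blocked, lambda d: "93.184.216.34"))  # 93.184.216.34
```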
Topics on github.com
- Topic "porn-block" on github.com (8 repositories as of 2020-09)[1st seen in 1]
- Topic "pornblocker" on github.com (13 repositories as of 2020-09)[1st seen in 2]
- Topic "porn-filter" on github.com (35 repositories as of 2020-09)[1st seen in 3]
Curated lists and databases
- 'Awesome-Adult-Filtering-Accountability' at github.com, curated by wesinator, is a list of tools and resources for adult content/porn accountability and filtering[1st seen in 1]
- 'Pornhosts' at github.com by Import-External-Sources is a hosts-file-formatted version of a w:Response policy zone (RPZ) zone file. It describes itself as "a consolidated anti porn hosts file" and states its mission as "an endeavour to find all porn domains and compile them into a single hosts to allow for easy blocking of porn on your local machine or on a network."[1st seen in 1]
- 'Amdromeda blocklist for Pi-hole' at github.com by Amdromeda[1st seen in 1] lists ca. 50 MB worth of just porn host names, in three parts: 1 (16.6 MB), 2 (16.8 MB), 3 (16.9 MB) (as of 2020-09)
- 'Pihole-blocklist' at github.com by mhakim[1st seen in 3] 1
- 'superhostsfile' at github.com by universalbyte is an ongoing effort to chart out "negative" hosts.[1st seen in 2]
- 'hosts' at github.com by StevenBlack is a hosts file for negative sites. It is updated constantly from these sources and lists 559k porn and other dodgy hosts (1.64 MB) (as of 2020-09)
- 'Porn-domains' at github.com by Bon appétit was (as of 2020-09) last updated in March 2019 and lists more than 22k domains.
Porn blocking services
Software for nudity detection
- 'PornDetector' at github.com by bakwc consists of two porn image (nudity) detectors, both written in w:Python (programming language)[1st seen in 3]: pcr.py uses w:scikit-learn and the w:OpenCV Open Source Computer Vision Library, whereas nnpcr.py uses w:TensorFlow and reaches a higher accuracy.
- 'Laravel 7 Google Vision restringe pornografia detector de faces' (Laravel 7 Google Vision restricts pornography, face detector), a porn-restriction app in Portuguese at github.com by thelesson, utilizes the Google Vision API to help site maintainers stop users from uploading porn; it was written for the MiniBlog w:Laravel blog app.
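In the spirit of the detectors above, here is a deliberately simplified skin-tone-heuristic pre-filter. Real detectors such as pcr.py use trained classifiers over proper image features; the RGB rule and the threshold below are illustrative assumptions, not tuned values.

```python
# Crude nudity pre-filter: flag an image for closer (human or
# classifier) review when the fraction of skin-colored pixels is high.

def is_skin(r, g, b):
    """A classic baseline RGB skin-tone rule (illustrative only)."""
    return (r > 95 and g > 40 and b > 20
            and r > g and r > b
            and (r - min(g, b)) > 15)


def skin_ratio(pixels):
    """Fraction of skin pixels in an iterable of (r, g, b) tuples."""
    pixels = list(pixels)
    if not pixels:
        return 0.0
    return sum(is_skin(*p) for p in pixels) / len(pixels)


def flag_image(pixels, threshold=0.4):
    """True if the image's skin coverage exceeds the threshold."""
    return skin_ratio(pixels) >= threshold


skin = [(200, 140, 120)] * 60        # skin-toned pixels
background = [(30, 60, 200)] * 40    # blue background pixels
print(flag_image(skin + background))  # True  (60% skin coverage)
print(flag_image(background))         # False
```

Such heuristics produce many false positives (faces, beaches, wood tones), which is why the projects above moved to learned classifiers.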
Links regarding pornography
- w:Pornography laws by region
- w:Internet pornography
- w:Legal status of Internet pornography
- w:Sex and the law
- Reasons for w:opposition to pornography include w:religious views on pornography, w:feminist views of pornography, and claims of w:effects of pornography, such as w:pornography addiction. (Wikipedia as of 2020-09-19)
Technical means of censorship and how to circumvent
- w:Internet censorship and w:internet censorship circumvention
- w:Content-control software (Internet filter), a common approach to w:parental control.
- w:Accountability software
- w:Employee monitoring is often automated using w:employee monitoring software
- A w:wordfilter (sometimes referred to as just "filter" or "censor") is a script typically used on w:Internet forums or w:chat rooms that automatically scans users' posts or comments as they are submitted and automatically changes or w:censors particular words or phrases. (Wikipedia as of 2020-09)
- w:Domain fronting is a technique for w:internet censorship circumvention that uses different w:domain names in different communication layers of an w:HTTPS connection to discreetly connect to a different target domain than is discernible to third parties monitoring the requests and connections. (Wikipedia 2020-09-22)
- w:Internet censorship in China and w:some tips on how to evade internet censorship in China
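The wordfilter mentioned in the list above can be sketched in a few lines; the banned-word list here is a hypothetical example.

```python
import re

# Minimal sketch of a forum/chat wordfilter: scan a submitted post and
# replace each listed word with asterisks of the same length.

BANNED = ["badword", "worseword"]
PATTERN = re.compile(r"\b(" + "|".join(map(re.escape, BANNED)) + r")\b",
                     re.IGNORECASE)


def wordfilter(text):
    """Censor banned words, preserving their length as asterisks."""
    return PATTERN.sub(lambda m: "*" * len(m.group(0)), text)


print(wordfilter("This badword should go."))  # This ******* should go.
```

Word boundaries (`\b`) keep the filter from mangling substrings inside innocent words, a classic failure mode of naive filters (the w:Scunthorpe problem).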
Partial transclusions from Synthetic human-like fakes below
Organizations against synthetic human-like fakes
- w:DARPA program: 'Media Forensics' (MediFor) at darpa.mil aims to develop technologies for the automated assessment of the integrity of an image or video and to integrate these into an end-to-end media forensics platform. Archive.org first crawled their homepage in June 2016.
- DARPA program: 'Semantic Forensics' (SemaFor) at darpa.mil aims to counter synthetic disinformation by developing systems for detecting semantic inconsistencies in forged media. They state that they hope to create technologies that "will help identify, deter, and understand adversary disinformation campaigns". More information at w:Duke University's Research Funding database: Semantic Forensics (SemaFor) at researchfunding.duke.edu and some at Semantic Forensics grant opportunity (closed Nov 2019) at grants.gov. Archive.org first crawled their website in November 2019.
- The National Center for Media Forensics at artsandmedia.ucdenver.edu at the w:University of Colorado Denver offers a Master's degree program, training courses, and scientific basic and applied research. Faculty staff at the NCMF
- w:SAG-AFTRA's ACTION ALERT: "Support California Bill to End Deepfake Porn" at sagaftra.org endorses California Senate Bill SB 564, introduced to the w:California State Senate by w:California w:Senator Connie Leyva in February 2019.
Organizations possibly against synthetic human-like fakes
Originally harvested from the study The ethics of artificial intelligence: Issues and initiatives (.pdf) by the w:European Parliamentary Research Service, published on the w:Europa (web portal) in March 2020.[1st seen in 4]
- INSTITUTE FOR ETHICS IN ARTIFICIAL INTELLIGENCE at ieai.mcts.tum.de received initial funding from w:Facebook in 2019.[1st seen in 4] IEAI on LinkedIn.com
- The Institute for Ethical AI & Machine Learning at ethical.institute[1st seen in 4] The Institute for Ethical AI & Machine Learning on LinkedIn.com
- Future of Life Institute at futureoflife.org received funding from private donors.[1st seen in 4] See w:Future of Life Institute for more info.
- The Japanese Society for Artificial Intelligence (JSAI) at ai-gakkai.or.jp. Publication: Ethical guidelines.[1st seen in 4]
- The AI Now Institute at ainowinstitute.org at w:New York University[1st seen in 4] The AI Now Institute on LinkedIn.com
- Partnership on AI at partnershiponai.org is based in the USA and funded by technology companies. See w:Partnership on AI and Partnership on AI on LinkedIn.com for more info.
- The Foundation for Responsible Robotics at responsiblerobotics.org is based in the w:Netherlands.[1st seen in 4] The Foundation for Responsible Robotics on LinkedIn.com
- AI4People at ai4people.eu, based in w:Belgium, is a multi-stakeholder forum.[1st seen in 4] AI4People on LinkedIn.com
- The Ethics and Governance of Artificial Intelligence Initiative at aiethicsinitiative.org is based in the USA.[1st seen in 4]
- Saidot at saidot.ai is a Finnish company offering a platform for AI transparency, explainability and communication.[1st seen in 4] Saidot on LinkedIn.com
- Centre for Data Ethics and Innovation at gov.uk, financed by the UK government. Centre for Data Ethics and Innovation Blog at cdei.blog.gov.uk[1st seen in 4] Centre for Data Ethics and Innovation on LinkedIn.com
- ACM Special Interest Group on Artificial Intelligence at sigai.acm.org is a w:Special Interest Group on AI by ACM.[1st seen in 4]
Other essential developments
- The Montréal Declaration for a Responsible Development of Artificial Intelligence at montrealdeclaration-responsibleai.com and the same site in French, La Déclaration de Montréal IA responsable, at declarationmontreal-iaresponsable.com[1st seen in 4]
- UNI Global Union at uniglobalunion.org is based in w:Nyon, w:Switzerland and deals mainly with labor issues to do with AI and robotics.[1st seen in 4] UNI Global Union on LinkedIn.com
- European Robotics Research Network at cordis.europa.eu funded by the w:European Commission.[1st seen in 4]
- European Robotics Platform at eu-robotics.net is funded by the w:European Commission. See w:European Robotics Platform and w:List of European Union robotics projects#EUROP for more info.[1st seen in 4]
Events against synthetic human-like fakes
- 2020 | w:Conference on Computer Vision and Pattern Recognition (CVPR) | CVPR 2020: 'Workshop on Media Forensics' at sites.google.com, a June 2020 workshop at the w:Conference on Computer Vision and Pattern Recognition.
- 2020 | The winners of the Deepfake Detection Challenge reach 82% accuracy in detecting synthetic human-like fakes
- 2019 | At the annual Finnish w:Ministry of Defence's Scientific Advisory Board for Defence (MATINE) public research seminar, a research group presented their work 'Synteettisen median tunnistus' at defmin.fi (Recognizing synthetic media). They built on earlier work on how to automatically detect synthetic human-like fakes, and their work was funded with a grant from MATINE.
- 2019 | w:NeurIPS | w:Facebook, Inc. "Facebook AI Launches Its Deepfake Detection Challenge" at spectrum.ieee.org (w:IEEE Spectrum). More reporting at "Facebook, Microsoft, and others launch Deepfake Detection Challenge" at venturebeat.com
- 2019 | CVPR | 2019 CVPR: 'Workshop on Media Forensics'
- Annual (?) | w:National Institute of Standards and Technology (NIST) | NIST: 'Media Forensics Challenge' at nist.gov, an iterative research challenge by the w:National Institute of Standards and Technology. The ongoing challenge is the second in the series, and the evaluation criteria for the 2019 iteration are being formed.
- 2018 | w:European Conference on Computer Vision (ECCV) | ECCV 2018: 'Workshop on Objectionable Content and Misinformation' at sites.google.com, a workshop at the 2018 w:European Conference on Computer Vision in w:Munich, focused on objectionable-content detection, e.g. w:nudity, w:pornography, w:violence, w:hate, w:child exploitation and w:terrorism, and on addressing the misinformation problems that arise when people are fed w:disinformation and punt it on as misinformation. Announced topics included w:image/video forensics; w:detection/w:analysis/w:understanding of w:fake images/videos; w:misinformation detection/understanding, both mono-modal and w:multi-modal; adversarial technologies; and detection/understanding of objectionable content.
- 2018 | w:NIST | 'Media Forensics Challenge 2018' at nist.gov was the second annual evaluation to support research and help advance the state of the art for image and video forensics technologies – technologies that determine the region and type of manipulations in imagery (image/video data) and the phylogenic process that modified the imagery.
Studies against synthetic human-like fakes
- 'Media Forensics and DeepFakes: an overview' at arXiv.org (as .pdf at arXiv.org), an overview on the subject of digital look-alikes and media forensics published in August 2020 in Volume 14 Issue 5 of IEEE Journal of Selected Topics in Signal Processing. 'Media Forensics and DeepFakes: An Overview' at ieeexplore.ieee.org (paywalled, free abstract)
- 'DEEPFAKES: False pornography is here and the law cannot protect you' at scholarship.law.duke.edu by Douglas Harris, published in Duke Law & Technology Review, Volume 17, on 2019-01-05 by the w:Duke University School of Law
Search for more
Companies against synthetic human-like fakes
- Cyabra.com is an AI-based system that helps organizations be on guard against disinformation attacks[1st seen in 5]. Reuters.com reporting from July 2020.
Sources for technologies
A map of technologies courtesy of Samsung Next, linked from 'Why it’s time to change the conversation around synthetic media' at venturebeat.com[1st seen in 6]
Biblical explanation - The books of Daniel and Revelation, wherein Daniel 7 and Revelation 13 we are warned of this age of industrial filth.
In Revelation 19:20 it says that the beast is taken prisoner; can we achieve this without APW_AI?
1st seen in
- Seen first in https://github.com/topics/porn-block, meta for actual use. The topic was stumbled upon.
- Seen first in https://github.com/topics/pornblocker. Saw this originally when looking at the https://github.com/topics/porn-block topic.
- Seen first in https://github.com/topics/porn-filter. Saw this originally when looking at the https://github.com/topics/porn-block topic.
"The ethics of artificial intelligence: Issues and initiatives" (PDF). w:Europa (web portal). w:European Parliamentary Research Service. March 2020. Retrieved 2021-02-17.
This study deals with the ethical implications and moral questions that arise from the development and implementation of artificial intelligence (AI) technologies.
- venturebeat.com, found via some Facebook AI & ML group or page; sorry, don't know precisely which right now.