Adequate Porn Watcher AI (concept): Difference between revisions

From Stop Synthetic Filth! wiki
Revision as of 14:57, 29 July 2019

Adequate Porn Watcher AI (APW_AI) is a working title for an AI that would watch and model all porn ever found on the Internet in order to police contraband porn.

The purpose of the APW_AI is to provide safety and security to its users, who can briefly upload a model they have made of themselves; the APW_AI will then report either that nothing matching was found or that something matching was found.
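The query flow described above could be sketched as a similarity lookup: a user's uploaded facial model (represented here as a numeric embedding) is compared against the library of models extracted from indexed porn, and any entry above a similarity threshold is reported as a possible match. All names, the embedding representation, and the threshold value below are illustrative assumptions, not a specification of the APW_AI.

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length numeric vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def check_model(user_embedding, indexed_embeddings, threshold=0.85):
    """Hypothetical sketch of the APW_AI lookup: return every indexed
    embedding that matches the user's model above the threshold.
    An empty result corresponds to "nothing matching found"."""
    matches = []
    for i, emb in enumerate(indexed_embeddings):
        sim = cosine_similarity(user_embedding, emb)
        if sim >= threshold:
            matches.append((i, sim))
    return matches
```

A real system would use learned face embeddings and an approximate-nearest-neighbour index rather than a linear scan, but the report-match-or-nothing contract would be the same.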

If people are able to check whether there is synthetic porn that looks like themselves, the products of the synthetic hate-illustration industrialists lose much of their destructive potential, as attacks can be exposed by the APW_AI, decimating the monetary value of these disinformation weapons to the criminals.

Looking up whether matches are found for anyone else's model is forbidden; this should probably be enforced with a facial biometric app that verifies both that the model you want checked is of you and that you are awake.

If you feel comfortable leaving your model with the benefactor for safekeeping, you will be alerted and offered help if you are ever attacked with synthetic porn.

Adequate means the system must be nearly free of false positives and able to process more content than is ever uploaded.
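The adequacy criterion above combines two measurable requirements: a false-positive rate kept near zero, and a processing rate that keeps up with the upload rate. A minimal sketch of such a check follows; the function name, parameters, and the particular false-positive threshold are assumptions made for illustration, not figures from the concept itself.

```python
def is_adequate(false_positives, total_checks,
                processed_per_day, uploaded_per_day,
                max_fp_rate=1e-6):
    """Illustrative adequacy test for an APW_AI-like system:
    nearly free of false positives, and processing throughput
    at least equal to the rate of new uploads."""
    fp_rate = false_positives / total_checks if total_checks else 0.0
    keeps_up = processed_per_day >= uploaded_per_day
    return fp_rate <= max_fp_rate and keeps_up
```

Under these assumptions, a system failing either requirement, whether accuracy or throughput, would not count as adequate.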