Adequate Porn Watcher AI (concept)

'''Adequate Porn Watcher AI''' ('''APW_AI''') is an [[w:Artificial intelligence|w:AI]] concept to search for any and all '''porn that should not be''' by watching and modeling '''all porn''' ever found on the [[w:Internet]], thus effectively '''protecting humans''' by '''exposing [[Synthetic human-like fakes#List of possible naked digital look-alike attacks|covert naked digital look-alike attacks]]''' and other contraband.


''' The method and the effect '''
 
The method by which '''APW_AI''' would provide <font color="blue">'''safety'''</font> and security to its users is that they can briefly upload a model they have obtained of themselves, and the APW_AI will then either report <font color="green">'''nothing matching found'''</font> or be of the opinion that <font color="red">'''something matching found'''</font>.
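To make the mechanism concrete, below is a minimal sketch in Python of the kind of lookup described above: an uploaded appearance model (represented here as a face embedding) is compared against embeddings extracted from crawled material. The function names, the embedding representation and the 0.75 similarity threshold are illustrative assumptions, not a specification of APW_AI.

<syntaxhighlight lang="python">
# Minimal sketch of the check described above. All names, the embedding
# representation and the 0.75 threshold are illustrative assumptions.
import numpy as np

def check_own_model(user_embedding: np.ndarray,
                    indexed_embeddings: np.ndarray,
                    threshold: float = 0.75) -> str:
    """Compare a user's appearance-model embedding against embeddings
    extracted from crawled material and report the verdict."""
    # Normalize so that the dot product equals cosine similarity.
    user = user_embedding / np.linalg.norm(user_embedding)
    index = indexed_embeddings / np.linalg.norm(indexed_embeddings, axis=1, keepdims=True)
    similarities = index @ user
    if np.any(similarities >= threshold):
        return "something matching found"
    return "nothing matching found"

# Example with random stand-in embeddings (no real data involved).
rng = np.random.default_rng(0)
print(check_own_model(rng.normal(size=128), rng.normal(size=(1000, 128))))
</syntaxhighlight>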


If people are '''able to check''' whether there is '''synthetic porn''' that looks like themselves, the synthetic hate-illustration industrialists' product loses much of its destructive potential, and the attacks that do happen are less destructive because they are exposed by the APW_AI; this '''decimates the monetary value''' of these disinformation weapons to the '''criminals'''.

''' Rules '''


'''Looking up''' whether matches are found for '''anyone else's model''' is '''forbidden''', and this should probably be enforced with a facial biometric app that checks that the model you want checked is yours and that you are awake.
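A sketch of how such an enforcement gate might look is given below: a lookup is only authorized when a live selfie matches the model being queried and a liveness/awareness test passes. The distance threshold, the stub liveness test and all function names are assumptions made for illustration only.

<syntaxhighlight lang="python">
# Illustrative enforcement gate for the "your own model only" rule.
# The threshold, the stub liveness test and all names are assumptions.
import numpy as np

def embeddings_match(a: np.ndarray, b: np.ndarray, max_distance: float = 0.6) -> bool:
    """True if two face embeddings plausibly belong to the same person."""
    return float(np.linalg.norm(a - b)) <= max_distance

def is_awake_and_live(selfie_frames: list) -> bool:
    """Stand-in for a liveness/awareness check (blink prompts, head turns, ...)."""
    return len(selfie_frames) > 0  # placeholder logic only

def authorize_lookup(model_embedding: np.ndarray,
                     live_selfie_embedding: np.ndarray,
                     selfie_frames: list) -> bool:
    """Allow a lookup only for the user's own model, verified with live biometrics."""
    return (is_awake_and_live(selfie_frames)
            and embeddings_match(model_embedding, live_selfie_embedding))
</syntaxhighlight>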


If you feel comfortable leaving your model with the good people at the benefactor for safekeeping, you will get alerted and helped if you ever get attacked with a synthetic porn attack.

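Under the same assumptions as above, the safekeeping and alerting idea could look roughly like the following periodic re-check loop; the interval, the callback names and the notification text are placeholders, not part of the concept as written.

<syntaxhighlight lang="python">
# Rough sketch of opt-in safekeeping: the benefactor keeps the model and
# re-checks it against newly indexed material, alerting the owner on a match.
# The interval, callbacks and message text are placeholder assumptions.
import time

def monitor_stored_model(model_id: str, check_fn, notify_fn,
                         interval_seconds: int = 3600) -> None:
    """Periodically re-check a stored model and alert its owner on a match."""
    while True:
        if check_fn(model_id) == "something matching found":
            notify_fn(model_id, "A possible synthetic porn attack was detected.")
        time.sleep(interval_seconds)
</syntaxhighlight>
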
''' Definition of adequacy '''
 
An ''adequate'' implementation should be nearly free of false positives, very good at finding true positives, and able to process more porn than is ever uploaded.
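To make those three criteria concrete, here is one way they could be turned into a measurable check; the specific target figures (0.1 % false positives, 99 % true positives) and the throughput comparison are assumptions chosen only to illustrate what ''adequate'' might mean in numbers.

<syntaxhighlight lang="python">
# Illustrative adequacy check. The target figures are assumptions picked to
# make the three criteria concrete, not official requirements of APW_AI.
def is_adequate(false_positive_rate: float,
                true_positive_rate: float,
                hours_processed_per_day: float,
                hours_uploaded_per_day: float) -> bool:
    """Apply the three adequacy criteria from the text to measured figures."""
    return (false_positive_rate <= 0.001               # nearly free of false positives
            and true_positive_rate >= 0.99             # very good at finding true positives
            and hours_processed_per_day > hours_uploaded_per_day)  # keeps up with uploads

# Example: 0.05 % false positives, 99.5 % recall, 30 000 h/day processed
# versus 24 000 h/day uploaded would count as adequate under these assumptions.
print(is_adequate(0.0005, 0.995, 30_000, 24_000))
</syntaxhighlight>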
 
''' What about the people in the porn-industry? '''


People who openly do porn can opt in to help in the development by providing training material and material to test the AI on. People and companies who help in training the AI naturally get credited for their help.




There are of course lots of people questions around this, and those questions need to be identified by professionals in psychology and the social sciences.

''' History '''


The idea of APW_AI occurred to [[User:Juho Kunsola]] on Friday 2019-07-12. The next day this discovery caused the scrapping of [[w:User:Juho_Kunsola/Law_proposals#Law_proposal_to_ban_covert_modeling_of_human_appearance|the plea to ban covert modeling of human appearance]], as that would have rendered APW_AI legally impossible.