Adequate Porn Watcher AI (concept)

From Stop Synthetic Filth! wiki
'''Adequate Porn Watcher AI''' ('''APW_AI''') is an [[w:Artificial intelligence|w:AI]] and [[w:Computer vision|w:computer vision]] concept to search for any and all '''porn that should not be''' by watching and modeling '''all porn''' ever found on the [[w:Internet]] thus effectively '''protecting humans''' by '''exposing [[Synthetic human-like fakes#List of possible naked digital look-alike attacks|covert naked digital look-alike attacks]] ''' and also other contraband.  


Note: '''[[#A service identical to APW_AI used to exist - FacePinPoint.com]]'''


''' The method and the effect '''


The method by which '''APW_AI''' would provide <font color="blue">'''safety'''</font> and security to its users is that they can briefly upload a model they have gotten of themselves, and the APW_AI will either say <font color="green">'''nothing matching found'''</font> or be of the opinion that <font color="red">'''something matching found'''</font>.
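The lookup described above can be sketched in code. Everything here - the feature vectors, the <code>PornLibrary</code> class and the similarity threshold - is a hypothetical illustration of the concept, not a description of any existing system:

```python
# Minimal sketch of the APW_AI lookup flow: a user's model is reduced to a
# feature vector, compared against an indexed library of vectors modeled
# from porn, and the answer is binary. All names and the threshold value
# are hypothetical placeholders.

def cosine_similarity(a, b):
    """Cosine similarity of two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

class PornLibrary:
    """Toy in-memory stand-in for the library of modeled porn."""
    def __init__(self, vectors):
        self.vectors = vectors  # feature vectors extracted from found porn

    def check(self, user_vector, threshold=0.95):
        """Return 'something matching found' or 'nothing matching found'."""
        for v in self.vectors:
            if cosine_similarity(user_vector, v) >= threshold:
                return "something matching found"
        return "nothing matching found"

library = PornLibrary([[0.1, 0.9, 0.2], [0.7, 0.1, 0.6]])
print(library.check([0.1, 0.9, 0.2]))   # identical vector -> match
print(library.check([0.9, 0.05, 0.1]))  # dissimilar vector -> no match
```

A real system would use a nearest-neighbour index rather than a linear scan, but the user-visible contract is exactly this binary answer.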


If people are <font color="green">'''able to check'''</font> whether there is '''[[Glossary#Synthetic pornography|synthetic porn]]''' that looks like them, the synthetic hate-illustration industrialists' products <font color="green">'''lose destructive potential'''</font>: the attacks that do happen are less destructive because they are exposed by the APW_AI, which <font color="green">'''decimates the monetary value'''</font> of these disinformation weapons to the <font color="red">'''criminals'''</font>.
 
If you feel comfortable leaving your model with the good people at the benefactor for safekeeping, you will be alerted and helped if you ever get attacked with a synthetic porn attack.
 
''' Rules'''
 
'''Looking up''' whether matches are found for '''anyone else's model''' is '''forbidden'''. This should probably be enforced with a [[w:Biometrics|w:biometric]] [[w:facial recognition system]] app that checks that the model you want checked is of you and that you are awake.
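The ownership rule could be sketched as follows; the embedding vectors, the distance metric and the threshold are hypothetical placeholders for a real face-recognition pipeline with liveness detection:

```python
# Hedged sketch of the rule above: a lookup is permitted only if a live
# facial biometric matches the face in the model being checked, and a
# liveness check confirms the user is awake. The embeddings and the
# distance threshold are made-up illustrations.

def euclidean(a, b):
    """Euclidean distance between two equal-length embedding vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def lookup_allowed(live_selfie_embedding, model_face_embedding,
                   liveness_confirmed, max_distance=0.6):
    """Permit a lookup only against your own model, while awake."""
    if not liveness_confirmed:
        return False  # user may be asleep or coerced; refuse
    return euclidean(live_selfie_embedding, model_face_embedding) <= max_distance

print(lookup_allowed([0.1, 0.2, 0.3], [0.12, 0.18, 0.31], True))  # own face
print(lookup_allowed([0.1, 0.2, 0.3], [0.9, 0.8, 0.7], True))     # not own face
```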
 
''' Definition of adequacy '''
 
An ''adequate'' implementation should be nearly free of false positives, very good at finding true positives, and able to process more porn than is ever uploaded.
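To make the adequacy requirement concrete, a back-of-the-envelope calculation; the upload volume and throughput figures are assumptions for the sake of the example, not statistics:

```python
# Illustrative arithmetic for the adequacy requirement. The daily upload
# volume and the per-second throughput are assumed figures.
uploads_per_day = 1_000_000         # assumed volume of new porn items per day
false_positives_per_million = 1     # target false-positive rate

# "Nearly free of false positives" at this scale still means roughly one
# false alarm per day, keeping human review of positives feasible:
expected_false_positives_per_day = (
    uploads_per_day / 1_000_000 * false_positives_per_million
)

# "Able to process more than is ever uploaded": throughput must exceed
# the upload rate. For example, at 20 items per second:
items_per_second = 20
capacity_per_day = items_per_second * 86_400  # seconds in a day

print(expected_false_positives_per_day)    # 1.0
print(capacity_per_day > uploads_per_day)  # True: 1,728,000 > 1,000,000
```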
 
''' What about the people in the porn-industry? '''
 
People who openly do porn can help the development by opting in to provide training material and material to test the AI on. People and companies who help train the AI naturally get credited for their help.
 
There are of course many human questions around this, and those questions need to be identified by professionals in psychology and the social sciences.
 
''' History '''
 
The idea of APW_AI occurred to [[User:Juho Kunsola]] on Friday 2019-07-12. Subsequently (the next day) this caused the scrapping of [[User:Juho_Kunsola/Law_proposals#Law_proposal_to_ban_covert_modeling_of_human_appearance|the plea to ban covert modeling of human appearance]], as that would have rendered APW_AI legally impossible.
 
= Countermeasures elsewhere =
Partial transclusion from [[Organizations, studies and events against synthetic human-like fakes]]
 
== Companies against synthetic filth ==
{{#lst:Organizations, studies and events against synthetic human-like fakes|companies-against-synthetic-filth}}
 
== A service identical to APW_AI used to exist - FacePinPoint.com ==
Partial transclusion from [[FacePinPoint.com]]
 
{{#lst:FacePinPoint.com|FacePinPoint.com}}
<section begin=See_also />
 
= Resources =
''' Tools '''
* '''[[w:PhotoDNA]]''' is an image-identification technology used for detecting [[w:child pornography]] and other illegal content reported to the [[w:National Center for Missing & Exploited Children]] (NCMEC) as required by law.<ref>
 
{{cite web
|url=https://www.theguardian.com/technology/2014/aug/07/microsoft-tip-police-child-abuse-images-paedophile
|title=Microsoft tip led police to arrest man over child abuse images
|work=[[w:The Guardian]]
|date=2014-08-07
}}
 
</ref> It was developed by [[w:Microsoft Research]] and [[w:Hany Farid]], professor at [[w:Dartmouth College]], beginning in 2009. ([https://en.wikipedia.org/w/index.php?title=PhotoDNA&oldid=1058600051 Wikipedia])
 
* The '''[[w:Child abuse image content list]]''' (CAIC List) is a list of URLs and image hashes provided by the [[w:Internet Watch Foundation]] to its partners to enable the blocking of [[w:child pornography]] & [[w:Obscene Publications Acts|w:criminally obscene adult content]] in the UK and by major international technology companies. ([https://en.wikipedia.org/w/index.php?title=Child_abuse_image_content_list&oldid=968491079 Wikipedia]).
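PhotoDNA's actual algorithm is proprietary, but the tools above all rely on robust image hashing of one kind or another. A toy "average hash" from the same family, operating on a grayscale image given as a 2D list of pixel values, illustrates the idea; it is not PhotoDNA:

```python
# Toy "average hash": each pixel contributes one bit (above/below the
# image mean), so small edits such as recompression barely change the
# hash, while a structurally different image changes many bits.

def average_hash(pixels):
    """Return a bit string over a 2D grayscale image (values 0-255)."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming(h1, h2):
    """Number of differing bits between two equal-length hashes."""
    return sum(c1 != c2 for c1, c2 in zip(h1, h2))

img = [[10, 200], [220, 30]]
slightly_edited = [[12, 198], [225, 28]]  # small edit, same structure
different = [[200, 10], [30, 220]]        # inverted structure

h = average_hash(img)
print(hamming(h, average_hash(slightly_edited)))  # 0: robust to small edits
print(hamming(h, average_hash(different)))        # 4: clearly different image
```

Real systems hash a normalized (resized, grayscaled) image and match within a small Hamming distance rather than requiring exact equality.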
 
''' Legal '''
* [[w:Outline of law]]
* [[w:List of national legal systems]]
* [[w:List of legislatures by country]]
 
== Traditional porn-blocking ==
Traditional porn-blocking, as done by [[w:Pornography laws by region|w:some countries]], seems to use [[w:Domain Name System|w:DNS]] to deny access to porn sites: the resolver checks whether the domain name matches an entry in a database of porn sites and, if it does, returns an unroutable address, usually [[w:0.0.0.0]].
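The DNS-based blocking described above can be sketched as follows; the blocklist entries and the stand-in resolver are made-up examples:

```python
# Sketch of traditional DNS-based blocking: if a queried domain (or any
# parent domain) is on the blocklist, answer with the unroutable 0.0.0.0
# instead of the real address. Blocklist contents are made-up examples.

BLOCKLIST = {"example-porn-site.test", "another-blocked.test"}

def resolve(domain, real_lookup):
    """Return 0.0.0.0 for blocked domains, else the real address."""
    labels = domain.lower().rstrip(".").split(".")
    # Check the domain and every parent, so subdomains are blocked too.
    for i in range(len(labels) - 1):
        if ".".join(labels[i:]) in BLOCKLIST:
            return "0.0.0.0"
    return real_lookup(domain)

fake_dns = lambda d: "93.184.216.34"  # stand-in for a real DNS query
print(resolve("www.example-porn-site.test", fake_dns))  # 0.0.0.0
print(resolve("example.com", fake_dns))                 # 93.184.216.34
```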
 
''' Topics on github.com '''
* [https://github.com/topics/porn-block Topic '''''"porn-block"''''' on github.com] (8 repositories as of 2020-09)<ref group="1st seen in" name="Github-topic-porn-block">Seen first in https://github.com/topics/porn-block, meta for actual use. The topic was stumbled upon.</ref>
* [https://github.com/topics/pornblocker Topic '''''"pornblocker"''''' on github.com] (13 repositories as of 2020-09)<ref group="1st seen in" name="Github-topic-pornblocker">Seen first in https://github.com/topics/pornblocker Saw this originally when looking at https://github.com/topics/porn-block Topic</ref>
* [https://github.com/topics/porn-filter Topic '''''"porn-filter"''''' on github.com] (35 repositories as of 2020-09)<ref group="1st seen in" name="Github-topic-porn-filter">Seen first in https://github.com/topics/porn-filter Saw this originally when looking at https://github.com/topics/porn-block Topic</ref>
 
''' Curated lists and databases '''
* [https://github.com/wesinator/awesome-Adult-Filtering-Accountability '''''Awesome-Adult-Filtering-Accountability''''' a list at github.com curated by ''wesinator''] - a list of tools and resources for adult content/porn accountability and filtering<ref group="1st seen in" name="Github-topic-porn-block" />
* [https://github.com/Import-External-Sources/pornhosts '''''Pornhosts''''' at github.com by ''Import-External-Sources''] is a hosts-file formatted file of the [[w:Response policy zone]] (RPZ) zone file. It describes itself as "''a consolidated anti porn hosts file''" and states its mission as "''an endeavour to find all porn domains and compile them into a single hosts to allow for easy blocking of porn on your local machine or on a network.''"<ref group="1st seen in" name="Github-topic-porn-block" />
* [https://github.com/Amdromeda/Blocklist-Pi-Hole '''''Amdromeda blocklist for Pi-hole''''' at github.com by ''Amdromeda'']<ref group="1st seen in" name="Github-topic-porn-block" /> lists 50 MB worth of porn host names alone: [https://github.com/Amdromeda/Blocklist-Pi-Hole/blob/master/Porn%20pages%20(Part_1).txt 1] (16.6 MB) [https://github.com/Amdromeda/Blocklist-Pi-Hole/blob/master/Porn%20pages%20(Part_2).txt 2] (16.8 MB) [https://github.com/Amdromeda/Blocklist-Pi-Hole/blob/master/Porn%20pages%20(Part_3).txt 3] (16.9 MB) (as of 2020-09)
* [https://github.com/mhhakim/pihole-blocklist '''''Pihole-blocklist''''' at github.com by ''mhhakim'']<ref group="1st seen in" name="Github-topic-porn-filter" /> [https://raw.githubusercontent.com/mhhakim/pihole-blocklist/master/porn.txt 1]
* [https://github.com/universalbyte/superhostsfile '''''superhostsfile''''' at github.com by ''universalbyte''] is an ongoing effort to chart out "negative" hosts.<ref group="1st seen in" name="Github-topic-pornblocker" />
* [https://github.com/StevenBlack/hosts '''''hosts''''' at github.com by ''StevenBlack''] is a hosts file for negative sites. It is updated constantly from [https://github.com/StevenBlack/hosts/tree/master/data these sources] and lists 559k porn and other dodgy hosts (1.64 MB) (as of 2020-09)
* [https://github.com/Bon-Appetit/porn-domains '''''Porn-domains''''' at github.com by ''Bon appétit''] was (as of 2020-09) last updated in March 2019 and lists more than 22k domains.
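Most of the blocklists above ship in hosts-file format: one "address hostname" pair per line, with <code>#</code> comments. A minimal parser for that format (the sample entries are made up):

```python
# Minimal parser for hosts-file formatted blocklists like the ones above:
# "address host [host ...]" per line, '#' starts a comment.

def parse_hosts(text):
    """Return the set of hostnames from hosts-file formatted text."""
    hosts = set()
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        parts = line.split()
        if len(parts) >= 2:
            hosts.update(parts[1:])  # one address may map to several names
    return hosts

sample = """
# consolidated anti porn hosts file (made-up sample entries)
0.0.0.0 bad-example.test
0.0.0.0 another-bad.test www.another-bad.test  # aliases on one line
"""
print(sorted(parse_hosts(sample)))
```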
 
''' Porn blocking services '''
* '''[[w:Pi-hole]]''' - '''https://pi-hole.net/''' - ''Network-wide Ad Blocking''
 
''' Software for nudity detection '''
* [https://github.com/bakwc/PornDetector '''''PornDetector''''' at github.com by ''bakwc''] contains ''two porn (nudity) image detectors'', both written in [[w:Python (programming language)]]<ref group="1st seen in" name="Github-topic-porn-filter" />. pcr.py uses [[w:scikit-learn]] and the [[w:OpenCV|w:OpenCV Open Source Computer Vision Library]], whereas nnpcr.py uses [[w:TensorFlow]] and reaches a higher accuracy.
* [https://github.com/thelesson/Miniblog-Laravel-7-Google-Vision-detecta-faces-e-restringe-pornografia '''''Laravel 7 Google Vision restringe pornografia detector de faces''''' at github.com by ''thelesson''] is a porn restriction app (in Portuguese) that uses the [https://cloud.google.com/vision Google Vision API] to help site maintainers stop users from uploading porn; it was written for the [https://github.com/madskristensen/MiniBlog MiniBlog] [[w:Laravel]] blog app.
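For contrast with the learned detectors above, the classic baseline for nudity detection is a skin-pixel ratio heuristic. The RGB rule and the threshold below are rough textbook heuristics, not the approach of any of the listed tools:

```python
# Classic skin-ratio baseline for nudity detection: classify pixels as
# skin-toned with a rough RGB rule, then flag images whose skin-pixel
# ratio exceeds a threshold. Far less accurate than learned detectors.

def is_skin(r, g, b):
    """One common textbook RGB skin-tone rule (very approximate)."""
    return r > 95 and g > 40 and b > 20 and r > g and r > b and abs(r - g) > 15

def skin_ratio(pixels):
    """Fraction of (r, g, b) pixels classified as skin."""
    return sum(is_skin(*p) for p in pixels) / len(pixels)

def probably_nude(pixels, threshold=0.4):
    return skin_ratio(pixels) > threshold

skin_heavy = [(200, 150, 120)] * 8 + [(10, 10, 10)] * 2  # 80% skin tones
landscape = [(30, 120, 60)] * 10                         # green, no skin

print(probably_nude(skin_heavy))  # True
print(probably_nude(landscape))   # False
```

Heuristics like this fail on faces, beaches and skin-toned backgrounds, which is why the tools above moved to learned models.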
 
== Links regarding pornography censorship ==
* [[w:Pornography laws by region]]
* [[w:Internet pornography]]
* [[w:Legal status of Internet pornography]]
* [[w:Sex and the law]]
 
''' Against pornography '''
* Reasons for [[w:opposition to pornography]] include [[w:Religious views on pornography|w:religious views on pornography]], [[w:Feminist views of pornography|w:feminist views of pornography]], and claims of [[w:Effects of pornography|w:effects of pornography]], such as [[w:pornography addiction]]. (Wikipedia as of 2020-09-19)
 
''' Technical means of censorship and how to circumvent '''
* [[w:Internet censorship]] and [[w:internet censorship circumvention]]
* [[w:Content-control software]] (Internet filter), a common approach to [[w:parental controls|w:parental control]].
** [[w:Comparison of content-control software and providers]]
* [[w:Accountability software]]
* [[w:Employee monitoring]] is often automated using [[w:employee monitoring software]]
* A [[w:wordfilter]] (sometimes referred to as just "''filter''" or "''censor''") is a script typically used on [[w:Internet forum]]s or [[w:chat room]]s that automatically scans users' posts or comments as they are submitted and automatically changes or [[w:censorship|w:censors]] particular words or phrases. (Wikipedia as of 2020-09)
* [[w:Domain fronting]] is a technique for [[w:internet censorship]] circumvention that uses different [[w:domain names]] in different communication layers of an [[w:HTTPS|w:HTTPS connection]] to discreetly connect to a different target domain than is discernable to third parties monitoring the requests and connections. (Wikipedia 2020-09-22)
* [[w:Internet censorship in China]] and [[w:Internet censorship in China#Evasion|w:some tips to how to evade internet censorship in China]]
 
= Sources for technologies =
{|
| https://venturebeat.com/wp-content/uploads/2020/08/Synthethic-Media-Landscape.jpg
|-
| A map of technologies courtesy of Samsung Next, linked from [https://venturebeat.com/2020/08/12/why-its-time-to-change-the-conversation-around-synthetic-media/ '''''Why it’s time to change the conversation around synthetic media''''' at venturebeat.com]<ref group="1st seen in">venturebeat.com found via some Facebook AI & ML group or page yesterday. Sorry, don't know precisely right now.</ref>
|}
 
= See also =
{| class="wikitable"
|-
|
* '''[[Main Page]]''' and '''[[synthetic human-like fakes|synthetic human-like fakes]]''' i.e. '''[[synthetic human-like fakes#Digital look-alikes|digital look-alikes]]''' and '''[[synthetic human-like fakes#Digital sound-alikes|digital sound-alikes]]''' so far, audio samples from a '''[https://google.github.io/tacotron/publications/speaker_adaptation/ sound-like-anyone machine]''' from 2018, '''[[synthetic human-like fakes#Media perhaps about synthetic human-like fakes|media perhaps about synthetic human-like fakes]]''' and '''[[how to protect yourself and others from covert modeling]]'''.
 
[[File:Deb2000-reflectance-separation-2-rows.png|thumb|320px|center|link=[[Main Page]]|Image 1: Separating specular and diffuse reflected light
 
<br/><br />
 
(a) Normal image in dot lighting
<br/><br/>
(b) Image of the diffuse reflection, which is captured by placing a vertical polarizer in front of the light source and a horizontal polarizer in front of the camera
<br/><br/>
(c) Image of the highlight specular reflection which is caught by placing both polarizers vertically
<br/><br/>
(d) The difference of c and b yields the specular highlight component
<br/><br/>
Images are scaled to seem to be the same luminosity.
<br/><br/>
Original image by Debevec et al. – Copyright ACM 2000 – https://dl.acm.org/citation.cfm?doid=311779.344855 – <small>Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page.</small>]]
 
|
'''[[Biblical connection - Revelation 13 and Daniel 7]]''', wherein, in '''[[Biblical connection - Revelation 13 and Daniel 7#Daniel 7|Daniel 7]]''' and '''[[Biblical connection - Revelation 13 and Daniel 7#Revelation 13|Revelation 13]]''', we are warned of this age of industrial filth.
 
In '''Revelation 19:20''' it says that the '''beast is taken prisoner'''; can we achieve this without '''APW_AI'''?
 
[[File:Saint John on Patmos.jpg|thumb|center|link=[[Biblical connection - Revelation 13 and Daniel 7]]|320px|'Saint John on Patmos' pictures [[w:John of Patmos]] on [[w:Patmos]] writing down the visions to make the [[w:Book of Revelation]]
<br/><br/>
'Saint John on Patmos' from folio 17 of the [[w:Très Riches Heures du Duc de Berry]] (1412-1416) by the [[w:Limbourg brothers]]. Currently located at the [[w:Musée Condé]] 40km north of Paris, France.]]
|}
 
= References =
<references />
 
= 1st seen in =
<references group="1st seen in" />
 
<section end=See_also />
