Adequate Porn Watcher AI (concept)


    Adequate Porn Watcher AI (APW_AI) is an w:AI and w:computer vision concept for finding any and all porn that should not exist by watching and modeling all porn ever found on the w:Internet, thus effectively protecting humans by exposing covert naked digital look-alike attacks as well as other contraband.

    Note: a service identical to APW_AI used to exist - FacePinPoint.com (see below).

    The method and the effect

    The method by which APW_AI would provide safety and security to its users is that a user briefly uploads a model they have made of themselves, and APW_AI then reports either that nothing matching was found or that something matching was found.
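    A minimal sketch of what this lookup could look like, assuming the user's "model" has been reduced to a face-embedding vector and the service keeps an index of embeddings extracted from crawled material. The function names, the index and the threshold below are illustrative assumptions, not part of the concept text:

        # Hypothetical sketch of the APW_AI lookup: compare a user's
        # embedding against an index of embeddings from crawled porn.
        import numpy as np

        MATCH_THRESHOLD = 0.85  # illustrative cosine-similarity cutoff

        def cosine_similarity(a, b):
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

        def lookup(user_embedding, index):
            """Return the verdict the concept describes: match or no match."""
            for indexed_embedding in index:
                if cosine_similarity(user_embedding, indexed_embedding) >= MATCH_THRESHOLD:
                    return "something matching found"
            return "nothing matching found"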

    If people are able to check whether synthetic porn that looks like them exists, the products of the synthetic hate-illustration industrialists lose much of their destructive potential: attacks that do happen are less damaging because APW_AI exposes them, which in turn decimates the monetary value of these disinformation weapons to the criminals.

    If you feel comfortable leaving your model with the benefactor for safekeeping, you will be alerted and helped if you are ever targeted with a synthetic porn attack.

    Rules

    Looking up matches for anyone else's model is forbidden. This should probably be enforced with a w:biometric w:facial recognition system app that verifies both that the model you want checked is of you and that you are awake.
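    A sketch of how such an enforcement gate might look, again assuming embeddings. Everything here (names, threshold, the liveness flag) is an illustrative assumption, and the liveness detection itself is assumed to happen elsewhere in the app:

        # Hypothetical ownership-and-wakefulness gate: a lookup is only
        # authorized if a fresh live camera capture of the requester
        # matches the model they want checked and liveness is confirmed.
        import numpy as np

        OWNERSHIP_THRESHOLD = 0.9  # illustrative; stricter than the lookup

        def same_person(model_embedding, live_embedding):
            similarity = float(np.dot(model_embedding, live_embedding)
                               / (np.linalg.norm(model_embedding)
                                  * np.linalg.norm(live_embedding)))
            return similarity >= OWNERSHIP_THRESHOLD

        def authorize_lookup(model_embedding, live_embedding, liveness_confirmed):
            return liveness_confirmed and same_person(model_embedding, live_embedding)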

    Definition of adequacy

    An adequate implementation should be nearly free of false positives, very good at finding true positives, and able to process more porn than is ever uploaded.
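    These three requirements map onto standard evaluation quantities: precision near 1 (almost no false positives), high recall (most true positives found), and throughput above the upload rate. A minimal adequacy test, with all thresholds illustrative rather than taken from the concept:

        # Sketch of an adequacy test for an APW_AI implementation.
        def adequate(true_pos, false_pos, false_neg,
                     processed_per_day, uploaded_per_day):
            precision = true_pos / (true_pos + false_pos)  # ~1: nearly no false positives
            recall = true_pos / (true_pos + false_neg)     # high: finds true positives
            keeps_up = processed_per_day > uploaded_per_day
            return precision >= 0.999 and recall >= 0.95 and keeps_up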

    What about the people in the porn industry?

    People who openly do porn can help by opting in to the development effort, providing training material and material to test the AI on. People and companies who help train the AI naturally get credited for their help.

    There are of course many human questions around this, and those questions need to be identified by professionals in psychology and the social sciences.

    History

    The idea of APW_AI occurred to User:Juho Kunsola on Friday 2019-07-12. The next day, this discovery caused the scrapping of the plea to ban covert modeling of human appearance, as such a ban would have rendered APW_AI legally impossible.


    Resources

    Tools

    Legal

    Traditional porn-blocking

    Traditional porn-blocking, as done by some countries, seems to use w:DNS to deny access to porn sites: the resolver checks whether the domain name matches an entry in a database of porn sites, and if it does, it returns an unroutable address, usually w:0.0.0.0.
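    A minimal sketch of this blocking scheme, assuming a resolver that consults a local blocklist before doing a real lookup; the blocklist entry is a placeholder, and real deployments would consult a maintained database:

        # DNS-level blocking as described above: answer with the
        # unroutable 0.0.0.0 for blocklisted domains, otherwise resolve.
        import socket

        BLOCKLIST = {"blocked-example.test"}  # placeholder entry

        def resolve(domain):
            if domain.lower().rstrip(".") in BLOCKLIST:
                return "0.0.0.0"  # unroutable address: access denied
            return socket.gethostbyname(domain)  # normal resolution

        print(resolve("blocked-example.test"))  # -> 0.0.0.0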

    Topics on github.com

    Curated lists and databases

    Porn blocking services

    Software for nudity detection

    Links regarding pornography censorship

    Against pornography

    Technical means of censorship and how to circumvent

    Countermeasures elsewhere

    Below are partial transclusions from Organizations, studies and events against synthetic human-like fakes.


    Organizations against synthetic human-like fakes

    AI incident repositories

    Help for victims of image or audio based abuse

    Awareness and countermeasures

    Organizations for media forensics

    The Defense Advanced Research Projects Agency, better known as w:DARPA, has been active in the field of countering synthetic fake video for longer than the public has been aware that the problems exist.

    A service identical to APW_AI used to exist - FacePinPoint.com

    Transcluded from FacePinPoint.com
    


    FacePinPoint.com was a for-a-fee service, operating from 2017 to 2021, for pointing out where on pornography sites a particular face appears, or, in the case of synthetic pornography, where a digital look-alike makes make-believe of a face or body appearing.[contacted 2] The inventor and founder of FacePinPoint.com, Mr. Lionel Hagege, registered the domain name in 2015[7], when he set out to research the feasibility of his action plan idea against non-consensual pornography.[8] The description of how FacePinPoint.com worked is the same as the description of Adequate Porn Watcher AI (concept).


    Organizations possibly against synthetic human-like fakes

    Originally harvested from the study The ethics of artificial intelligence: Issues and initiatives (.pdf) by the w:European Parliamentary Research Service, published on the w:Europa (web portal) in March 2020.[1st seen in 5]

    Services that should get back to the task at hand - FacePinPoint.com

    Transcluded from FacePinPoint.com; the description is identical to the one given above.

    Other essential developments

    Studies against synthetic human-like fakes

    Who Are You (I Really Wanna Know)? Detecting Audio DeepFakes Through Vocal Tract Reconstruction

    Protecting world leaders against deep fakes using facial, gestural, and vocal mannerisms (2022)

    Protecting President Zelenskyy against deep fakes

    Other studies against synthetic human-like fakes

    More studies can be found in the SSFWIKI Timeline of synthetic human-like fakes.

    Search for more

    Reporting against synthetic human-like fakes

    Companies against synthetic human-like fakes

    See resources for more.

    Events against synthetic human-like fakes

    Upcoming events

    In reverse chronological order

    Ongoing events

    Past events


    • 2019 | At the annual public research seminar of the Finnish w:Ministry of Defence's Scientific Advisory Board for Defence (MATINE), a research group presented their work 'Synteettisen median tunnistus' (Recognizing synthetic media) at defmin.fi. They built on earlier work on how to automatically detect synthetic human-like fakes, and their work was funded with a grant from MATINE.
    • 2018 | NIST's 'Media Forensics Challenge 2018' at nist.gov was the second annual evaluation to support research and help advance the state of the art in image and video forensics technologies – technologies that determine the region and type of manipulations in imagery (image/video data) and the phylogenic process that modified the imagery.
    • 2016 | Nimble Challenge 2016 - NIST released the Nimble Challenge '16 (NC2016) dataset as the kickoff dataset of the MFC program (Nimble Challenge being the former name of the Media Forensics Challenge).[16]


    Sources for technologies

    [Image: Synthethic-Media-Landscape.jpg] A map of technologies, courtesy of Samsung Next, linked from 'Why it’s time to change the conversation around synthetic media' at venturebeat.com[1st seen in 7]

    See also

    Image 1: Separating specular and diffuse reflected light

    (a) Normal image in dot lighting

    (b) Image of the diffuse reflection, captured by placing a vertical polarizer in front of the light source and a horizontal polarizer in front of the camera

    (c) Image containing both the diffuse reflection and the specular highlight, captured by placing both polarizers vertically

    (d) Subtraction of b from c, which yields the specular component

    Images are scaled to appear equally luminous.
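    The subtraction in panel (d) amounts to a simple decomposition; the notation below is added here for clarity and is not part of the original caption:

        % Parallel polarizers (c) pass both diffuse and specular light;
        % crossed polarizers (b) block the specular reflection.
        \[
          I_{\text{parallel}} = I_{\text{diffuse}} + I_{\text{specular}},
          \qquad
          I_{\text{cross}} = I_{\text{diffuse}}
        \]
        \[
          \Rightarrow\ I_{\text{specular}} = I_{\text{parallel}} - I_{\text{cross}}
        \]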

    Original image by Debevec et al. – Copyright ACM 2000 – https://dl.acm.org/citation.cfm?doid=311779.344855 – Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page.

    Biblical connection - in Daniel 7 and Revelation 13 we are warned of this age of industrial filth.

    Revelation 19:20 says that the beast is taken prisoner; can we achieve this without APW_AI?

    'Saint John on Patmos' pictures w:John of Patmos on w:Patmos writing down the visions to make the w:Book of Revelation.

    'Saint John on Patmos' is from folio 17 of the w:Très Riches Heures du Duc de Berry (1412-1416) by the w:Limbourg brothers. It is currently located at the w:Musée Condé, 40 km north of Paris, France.

    References

    1. "Microsoft tip led police to arrest man over child abuse images". w:The Guardian. 2014-08-07.
    2. https://www.partnershiponai.org/aiincidentdatabase/
    3. whois aiaaic.org
    4. https://charliepownall.com/ai-algorithimic-incident-controversy-database/
    5. https://web.archive.org/web/20160630154819/https://www.darpa.mil/program/media-forensics
    6. https://web.archive.org/web/20191108090036/https://www.darpa.mil/program/semantic-forensics
    7. whois facepinpoint.com
    8. https://www.facepinpoint.com/aboutus
    9. whois facepinpoint.com
    10. https://www.facepinpoint.com/aboutus
    11. Boháček, Matyáš; Farid, Hany (2022-11-23). "Protecting world leaders against deep fakes using facial, gestural, and vocal mannerisms". w:Proceedings of the National Academy of Sciences of the United States of America. 119 (48). doi:10.1073/pnas.2216035119. Retrieved 2023-01-05.
    12. Boháček, Matyáš; Farid, Hany (2022-06-14). "Protecting President Zelenskyy against Deep Fakes". arXiv:2206.12043 [cs.CV].
    13. https://www.nist.gov/itl/iad/mig/open-media-forensics-challenge
    14. https://law.yale.edu/isp/events/technologies-deception
    15. https://venturebeat.com/2020/06/12/facebook-detection-challenge-winners-spot-deepfakes-with-82-accuracy/
    16. https://www.nist.gov/itl/iad/mig/open-media-forensics-challenge

    1st seen in

    1. Seen first in https://github.com/topics/porn-block, meta for actual use. The topic was stumbled upon.
    2. Seen first in https://github.com/topics/pornblocker, originally while looking at the https://github.com/topics/porn-block topic.
    3. Seen first in https://github.com/topics/porn-filter, originally while looking at the https://github.com/topics/porn-block topic.
    4. https://www.iwf.org.uk/our-technology/report-remove/
    5. "The ethics of artificial intelligence: Issues and initiatives" (PDF). w:Europa (web portal). w:European Parliamentary Research Service. March 2020. Retrieved 2021-02-17. This study deals with the ethical implications and moral questions that arise from the development and implementation of artificial intelligence (AI) technologies.
    6. https://www.reuters.com/article/us-cyber-deepfake-activist/deepfake-used-to-attack-activist-couple-shows-new-disinformation-frontier-idUSKCN24G15E
    7. venturebeat.com, found via some Facebook AI & ML group or page; the precise source was not recorded.


