Adequate Porn Watcher AI (concept)

From Stop Synthetic Filth! wiki
Revision as of 20:10, 17 December 2023

Adequate Porn Watcher AI (APW_AI) is an w:AI and w:computer vision concept to search for any and all porn that should not exist, by watching and modeling all porn ever found on the w:Internet, thus effectively protecting humans by exposing covert naked digital look-alike attacks and also other contraband.

Obs. A service identical to APW_AI used to exist: FacePinPoint.com

The method and the effect

The method by which APW_AI would provide safety and security to its users is that they can briefly upload a model they have made of themselves, and the APW_AI will then report either that nothing matching was found or that, in its opinion, something matching was found.
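The lookup step described above can be sketched as a nearest-neighbor search over face embeddings. The following is a minimal illustration of that contract, not the APW_AI implementation: the embedding dimension, the `SIMILARITY_THRESHOLD` value, and the function names are all hypothetical.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.85  # hypothetical cutoff for declaring a match

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_model(user_embedding: np.ndarray, corpus: list) -> str:
    """Compare the user's uploaded model against the indexed corpus and
    answer in the two phrasings the concept describes."""
    best = max((cosine_similarity(user_embedding, e) for e in corpus),
               default=0.0)
    if best >= SIMILARITY_THRESHOLD:
        return "something matching found"
    return "nothing matching found"
```

A production system would use an approximate nearest-neighbor index rather than a linear scan, but the interface is the same: the user only ever learns whether a match to their own model exists.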

If people are able to check whether there is synthetic porn that looks like themselves, the synthetic hate-illustration industrialists' products lose destructive potential, and the attacks that do happen are less destructive because they are exposed by the APW_AI, thus decimating the monetary value of these disinformation weapons to the criminals.

If you feel comfortable leaving your model with the good people at the benefactor for safekeeping, you get alerted and helped if you ever come under a synthetic porn attack.

Rules

Looking up whether matches are found for anyone else's model is forbidden, and this should probably be enforced with a w:biometric w:facial recognition system app that checks that the model you want checked is yours and that you are awake.

Definition of adequacy

An adequate implementation should be nearly free of false positives, very good at finding true positives and able to process more porn than is ever uploaded.
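The definition of adequacy above maps onto standard retrieval metrics: precision close to one (nearly free of false positives), high recall (very good at finding true positives), and throughput at least equal to the upload rate. A minimal sketch follows; the threshold values are assumptions for illustration, not part of the concept.

```python
def is_adequate(true_pos: int, false_pos: int, false_neg: int,
                processed_per_day: float, uploaded_per_day: float,
                min_precision: float = 0.999,
                min_recall: float = 0.95) -> bool:
    """Check the three adequacy criteria: nearly free of false positives
    (precision), very good at finding true positives (recall), and able
    to process more porn than is ever uploaded (throughput)."""
    detected = true_pos + false_pos
    actual = true_pos + false_neg
    precision = true_pos / detected if detected else 1.0
    recall = true_pos / actual if actual else 1.0
    return (precision >= min_precision
            and recall >= min_recall
            and processed_per_day >= uploaded_per_day)
```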

What about the people in the porn-industry?

People who openly do porn can help by opting in to the development, providing training material and material to test the AI on. People and companies who help in training the AI naturally get credited for their help.

There are of course lots of human questions around this, and those questions need to be identified by professionals of psychology and the social sciences.

History

The idea of APW_AI occurred to User:Juho Kunsola on Friday 2019-07-12. Subsequently (the next day) this discovery caused the scrapping of the plea to ban covert modeling of human appearance, as that would have rendered APW_AI legally impossible.


Resources

Tools

Legal

Traditional porn-blocking

Traditional porn-blocking, as done by w:some countries, seems to use w:DNS to deny access to porn sites: the resolver checks whether the domain name matches an item in a database of porn sites and, if it does, returns an unroutable address, usually w:0.0.0.0.
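The DNS-level blocking described above can be sketched in a few lines. This is an illustrative model of the mechanism, not any country's actual resolver; the blocklist contents and the function names are made up.

```python
# Hypothetical domain database of blocked sites
BLOCKLIST = {"blocked-porn-site.example"}

UNROUTABLE = "0.0.0.0"  # the unroutable address typically returned

def resolve(domain: str, upstream: dict) -> str:
    """DNS-level blocking: names found in the blocklist resolve to the
    unroutable 0.0.0.0; everything else resolves via `upstream`, a
    dict standing in for a real recursive resolver."""
    name = domain.lower().rstrip(".")
    if name in BLOCKLIST:
        return UNROUTABLE
    return upstream.get(name, "NXDOMAIN")
```

Because the check happens at name resolution, it is trivially circumvented by using a different resolver or the site's raw IP address, which is one reason such blocking stays shallow.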

Topics on github.com

Curated lists and databases

Porn blocking services

Software for nudity detection

Links regarding pornography censorship

Against pornography

Technical means of censorship and how to circumvent

Countermeasures elsewhere

Partial transclusions from Organizations, studies and events against synthetic human-like fakes below


Companies against synthetic human-like fakes

I have been searching for an antidote to the synthetic human-like fakes since 2003, and on Friday 2019-07-12 it occurred to me that a service, like I have described later on in Adequate Porn Watcher AI (concept), could very well be the answer to "How to defend against the covert disinformation attacks with fake human-like images?". There is good progress on the legislative side, but laws that cannot be humanely policed would end up dead letters with negligible de facto effect, hence the need for some computer vision AI help.

Candidates for the ultimate defensive weapon against the digital look-alike attacks


Organizations against synthetic human-like fakes

w:Fraunhofer Society's Fraunhofer Institute for Applied and Integrated Security (AISEC) has been developing automated tools for catching synthetic human-like fakes.

AI incident repositories

Help for victims of image or audio based abuse

Awareness and countermeasures

Organizations for media forensics

The Defense Advanced Research Projects Agency, better known as w:DARPA, has been active in the field of countering synthetic fake video for longer than the public has been aware that the problems exist.

A service identical to APW_AI used to exist - FacePinPoint.com

Transcluded from FacePinPoint.com


FacePinPoint.com was a for-a-fee service, from 2017 to 2021, for pointing out where on pornography sites a particular face appears, or, in the case of synthetic pornography, where a digital look-alike makes make-believe of a face or body appearing.[contacted 3] The inventor and founder of FacePinPoint.com, Mr. Lionel Hagege, registered the domain name in 2015[8], when he set out to research the feasibility of his action-plan idea against non-consensual pornography.[9] The description of how FacePinPoint.com worked is the same as Adequate Porn Watcher AI (concept)'s description.


Organizations possibly against synthetic human-like fakes

Originally harvested from the study The ethics of artificial intelligence: Issues and initiatives (.pdf) by the w:European Parliamentary Research Service, published on the w:Europa (web portal) in March 2020.[1st seen in 7]

Services that should get back to the task at hand - FacePinPoint.com

Transcluded from FacePinPoint.com

FacePinPoint.com was a for-a-fee service, from 2017 to 2021, for pointing out where on pornography sites a particular face appears, or, in the case of synthetic pornography, where a digital look-alike makes make-believe of a face or body appearing.[contacted 9] The inventor and founder of FacePinPoint.com, Mr. Lionel Hagege, registered the domain name in 2015[10], when he set out to research the feasibility of his action-plan idea against non-consensual pornography.[11] The description of how FacePinPoint.com worked is the same as Adequate Porn Watcher AI (concept)'s description.

Other essential developments

Studies against synthetic human-like fakes

Detecting deep-fake audio through vocal tract reconstruction

Detecting deep-fake audio through vocal tract reconstruction is an epic scientific work against fake human-like voices, from the w:University of Florida, published to peers in August 2022.

The Office of Naval Research (ONR) at nre.navy.mil of the USA funded this breakthrough science.

The work, Who Are You (I Really Wanna Know)? Detecting Audio DeepFakes Through Vocal Tract Reconstruction at usenix.org (presentation page, version included in the proceedings[12], and slides), by researchers of the Florida Institute for Cybersecurity Research (FICS) at fics.institute.ufl.edu in the w:University of Florida, received funding from the w:Office of Naval Research and was presented on 2022-08-11 at the 31st w:USENIX Security Symposium.

This work was done by PhD student Logan Blue, Kevin Warren, Hadi Abdullah, Cassidy Gibson, Luis Vargas, Jessica O’Dell, Kevin Butler and Professor Patrick Traynor.

The University of Florida Research Foundation Inc has filed for and received a US patent titled 'Detecting deep-fake audio through vocal tract reconstruction', registration number US20220036904A1 (link to patents.google.com), with 20 claims. The patent application was published on Thursday 2022-02-03, was approved on 2023-07-04, and has an adjusted expiration date of Sunday 2041-12-29.

Protecting world leaders against deep fakes using facial, gestural, and vocal mannerisms

Protecting President Zelenskyy against deep fakes

Other studies and reports against synthetic human-like fakes

Legal information compilations


More studies can be found in the SSFWIKI Timeline of synthetic human-like fakes

Search for more

Reporting against synthetic human-like fakes

Companies against synthetic human-like fakes

See resources for more.

Events against synthetic human-like fakes

Upcoming events

In reverse chronological order

Ongoing events

Past events


  • 2019 | At the annual Finnish w:Ministry of Defence's Scientific Advisory Board for Defence (MATINE) public research seminar, a research group presented their work 'Synteettisen median tunnistus' (Recognizing synthetic media) at defmin.fi. They developed on earlier work on how to automatically detect synthetic human-like fakes, and their work was funded with a grant from MATINE.
  • 2018 | NIST's 'Media Forensics Challenge 2018' at nist.gov was the second annual evaluation to support research and help advance the state of the art for image and video forensics technologies – technologies that determine the region and type of manipulations in imagery (image/video data) and the phylogenic process that modified the imagery.
  • 2016 | Nimble Challenge 2016 - NIST released the Nimble Challenge'16 (NC2016) dataset as the MFC program kickoff dataset (where NC is the former name of MFC).[24]


Sources for technologies

A map of technologies (Synthethic-Media-Landscape.jpg), courtesy of Samsung Next, linked from 'Why it’s time to change the conversation around synthetic media' at venturebeat.com[1st seen in 10]

See also

Image 1: Separating specular and diffuse reflected light

(a) Normal image in dot lighting

(b) Image of the diffuse reflection, which is caught by placing a vertical polarizer in front of the light source and a horizontal one in front of the camera

(c) Image of the highlight specular reflection which is caught by placing both polarizers vertically

(d) The difference of c and b yields the specular highlight component

Images are scaled to appear to have the same luminosity.

Original image by Debevec et al. – Copyright ACM 2000 – https://dl.acm.org/citation.cfm?doid=311779.344855 – Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page.

Biblical connection - Revelation 13 and Daniel 7: in Daniel 7 and Revelation 13 we are warned of this age of industrial filth.

In Revelation 19:20 it says that the beast is taken prisoner; can we achieve this without APW_AI?

'Saint John on Patmos' pictures w:John of Patmos on w:Patmos writing down the visions to make the w:Book of Revelation

'Saint John on Patmos' from folio 17 of the w:Très Riches Heures du Duc de Berry (1412-1416) by the w:Limbourg brothers. Currently located at the w:Musée Condé 40km north of Paris, France.

References

  1. "Microsoft tip led police to arrest man over child abuse images". w:The Guardian. 2014-08-07.
  2. https://www.crunchbase.com/organization/thatsmyface-com
  3. https://www.partnershiponai.org/aiincidentdatabase/
  4. whois aiaaic.org
  5. https://charliepownall.com/ai-algorithimic-incident-controversy-database/
  6. https://web.archive.org/web/20160630154819/https://www.darpa.mil/program/media-forensics
  7. https://web.archive.org/web/20191108090036/https://www.darpa.mil/program/semantic-forensics
  8. whois facepinpoint.com
  9. https://www.facepinpoint.com/aboutus
  10. whois facepinpoint.com
  11. https://www.facepinpoint.com/aboutus
  12. Blue, Logan; Warren, Kevin; Abdullah, Hadi; Gibson, Cassidy; Vargas, Luis; O’Dell, Jessica; Butler, Kevin; Traynor, Patrick (August 2022). "Detecting deep-fake audio through vocal tract reconstruction". Proceedings of the 31st USENIX Security Symposium: 2691–2708. ISBN 978-1-939133-31-1. Retrieved 2022-10-06.
  13. Boháček, Matyáš; Farid, Hany (2022-11-23). "Protecting world leaders against deep fakes using facial, gestural, and vocal mannerisms". w:Proceedings of the National Academy of Sciences of the United States of America. 119 (48). doi:10.1073/pnas.221603511. Retrieved 2023-01-05.
  14. Boháček, Matyáš; Farid, Hany (2022-06-14). "Protecting President Zelenskyy against Deep Fakes". arXiv:2206.12043 [cs.CV].
  15. Lawson, Amanda (2023-04-24). "A Look at Global Deepfake Regulation Approaches". responsible.ai. Responsible Artificial Intelligence Institute. Retrieved 2024-02-14.
  16. Williams, Kaylee (2023-05-15). "Exploring Legal Approaches to Regulating Nonconsensual Deepfake Pornography". techpolicy.press. Retrieved 2024-02-14.
  17. Owen, Aled (2024-02-02). "Deepfake laws: is AI outpacing legislation?". onfido.com. Onfido. Retrieved 2024-02-14.
  18. Pirius, Rebecca (2024-02-07). "Is Deepfake Pornography Illegal?". Criminaldefenselawyer.com. w:Nolo (publisher). Retrieved 2024-02-22.
  19. Rastogi, Janvhi (2023-10-16). "Deepfake Pornography: A Legal and Ethical Menace". tclf.in. The Contemporary Law Forum. Retrieved 2024-02-14.
  20. https://www.nist.gov/itl/iad/mig/open-media-forensics-challenge
  21. https://law.yale.edu/isp/events/technologies-deception
  22. https://venturebeat.com/2020/06/12/facebook-detection-challenge-winners-spot-deepfakes-with-82-accuracy/
  23. https://www.nist.gov/itl/iad/mig/open-media-forensics-challenge

1st seen in

  1. 1.0 1.1 1.2 1.3 Seen first in https://github.com/topics/porn-block, meta for actual use. The topic was stumbled upon.
  2. 2.0 2.1 Seen first in https://github.com/topics/pornblocker Saw this originally when looking at https://github.com/topics/porn-block Topic
  3. 3.0 3.1 3.2 Seen first in https://github.com/topics/porn-filter Saw this originally when looking at https://github.com/topics/porn-block Topic
  4. 4.0 4.1 https://spectrum.ieee.org/deepfake-porn
  5. Deutsche Welle English https://www.dw.com/en/live-tv/channel-english
  6. https://www.iwf.org.uk/our-technology/report-remove/
  7. 7.00 7.01 7.02 7.03 7.04 7.05 7.06 7.07 7.08 7.09 7.10 7.11 7.12 7.13 7.14 7.15 7.16 7.17 7.18 7.19 "The ethics of artificial intelligence: Issues and initiatives" (PDF). w:Europa (web portal). w:European Parliamentary Research Service. March 2020. Retrieved 2021-02-17. This study deals with the ethical implications and moral questions that arise from the development and implementation of artificial intelligence (AI) technologies.
  8. Saw a piece on the 51% show on France24 English regarding pornographic "deep-fakes" in July 2024 and searched up the report.
  9. https://www.reuters.com/article/us-cyber-deepfake-activist/deepfake-used-to-attack-activist-couple-shows-new-disinformation-frontier-idUSKCN24G15E
  10. venturebeat.com found via some Facebook AI & ML group or page yesterday. Sorry, don't know precisely right now.


