Organizations, studies and events against synthetic human-like fakes: Difference between revisions

From Stop Synthetic Filth! wiki

Revision as of 14:14, 2 January 2022

Countermeasures against synthetic human-like fakes

Organizations that should get back to the task at hand - FacePinPoint.com

Transcluded from FacePinPoint.com

FacePinPoint.com was a for-a-fee service, operating from 2017 to 2021, for pointing out where on pornography sites a particular face appears or, in the case of synthetic pornography, where a digital look-alike makes make-believe of a face or body appearing.[contacted 1] The inventor and founder of FacePinPoint.com, Mr. Lionel Hagege, registered the domain name in 2015,[1] when he set out to research the feasibility of his action-plan idea against non-consensual pornography.[2] The description of how FacePinPoint.com worked is the same as the description of Adequate Porn Watcher AI (concept).


Organizations against synthetic human-like fakes

Organizations for media forensics

The Defense Advanced Research Projects Agency, better known as w:DARPA, has been active in the field of countering synthetic fake video since before the public became aware that the problems existed.

Organizations possibly against synthetic human-like fakes

Originally harvested from the study The ethics of artificial intelligence: Issues and initiatives (.pdf) by the w:European Parliamentary Research Service, published on the w:Europa (web portal) in March 2020.[1st seen in 1]

Other essential developments

Events against synthetic human-like fakes

  • 2019 | At the annual public research seminar of the Finnish w:Ministry of Defence's Scientific Advisory Board for Defence (MATINE), a research group presented their work 'Synteettisen median tunnistus' ('Recognizing synthetic media') at defmin.fi. They built on earlier work on how to automatically detect synthetic human-like fakes, and their work was funded with a grant from MATINE.
  • 2018 | NIST's 'Media Forensics Challenge 2018' at nist.gov was the second annual evaluation to support research and help advance the state of the art in image and video forensics technologies – technologies that determine the region and type of manipulations in imagery (image/video data) and the phylogenic process that modified the imagery.
  • 2016 | Nimble Challenge 2016 - NIST released the Nimble Challenge '16 (NC2016) dataset as the kickoff dataset of the MFC program (NC being the former name of MFC).[8]

Studies against synthetic human-like fakes

Search for more

Reporting against synthetic human-like fakes

Companies against synthetic human-like fakes

See resources for more.


SSF! wiki proposed countermeasure to weaponized synthetic pornography: Outlaw unauthorized synthetic pornography (transcluded)

Transcluded from Juho's proposal for banning unauthorized synthetic pornography


§1 Models of human appearance

A model of human appearance means

§2 Producing synthetic pornography

Making projections, still or videographic, in which targets are portrayed nude or in a sexual situation, from models of human appearance defined in §1, without the express consent of the targets, is illegal.

§3 Distributing synthetic pornography

Distributing, making available, public display, purchase, sale, yielding, import and export of non-authorized synthetic pornography defined in §2 are punishable.[footnote 1]

§4 Aggravated producing and distributing synthetic pornography

If the media described in §2 or §3 is made or distributed with the intent to frame for a crime or for blackmail, the crime should be judged as aggravated.

Afterwords

The original idea I had was to ban both the raw materials, i.e. the models used to make the visual synthetic filth, and the end product, weaponized synthetic pornography. But in July 2019 it appeared to me that Adequate Porn Watcher AI (concept) could really help in this age of industrial disinformation if it were built, trained and operational. Banning the modeling of human appearance was in conflict with the revised plan.

It is safe to assume that collecting permissions to model each pornographic recording is not plausible, so an interesting question is whether we can ban covert modeling from non-pornographic pictures while still retaining the ability to model all porn found on the Internet.

If banning the modeling of people's appearance from non-pornographic images and videos without explicit permission is to be pursued, it must be formulated so that it does not make Adequate Porn Watcher AI (concept) illegal or impossible. This would seem to lead to a weird situation where modeling a human from non-pornographic media would be illegal, but modeling from pornography would be legal.


SSF! wiki proposed countermeasure to weaponized synthetic pornography: Adequate Porn Watcher AI (concept) (transcluded)

Transcluded main contents from Adequate Porn Watcher AI (concept)

Adequate Porn Watcher AI (APW_AI) is a w:AI and w:computer vision concept to search for any and all porn that should not be there, by watching and modeling all porn ever found on the w:Internet, thus effectively protecting humans by exposing covert naked digital look-alike attacks and also other contraband.

Obs. A service identical to APW_AI used to exist - FacePinPoint.com

The method and the effect

The method by which APW_AI would provide safety and security to its users is that they can briefly upload a model they have gotten of themselves, and the APW_AI will then either report that nothing matching was found or be of the opinion that something matching was found.
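The match/no-match lookup described above can be thought of as a nearest-neighbour search over appearance models. The following is purely an illustrative sketch under assumptions of my own (face embeddings compared by cosine similarity, a made-up threshold, and the function name `find_matches`); the APW_AI concept itself does not specify an implementation.

```python
import numpy as np

def find_matches(user_model: np.ndarray,
                 indexed_models: np.ndarray,
                 threshold: float = 0.85) -> list:
    """Return indices of indexed appearance models whose cosine
    similarity to the user's uploaded model exceeds the threshold.

    Hypothetical sketch: a real system would use learned face
    embeddings (e.g. 512-dimensional vectors) and an approximate
    nearest-neighbour index instead of a dense matrix product.
    """
    # Normalize so that a dot product equals cosine similarity.
    q = user_model / np.linalg.norm(user_model)
    db = indexed_models / np.linalg.norm(indexed_models, axis=1, keepdims=True)
    similarities = db @ q
    return [int(i) for i in np.where(similarities >= threshold)[0]]

# "Nothing matching found" corresponds to an empty result list.
db = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])
print(find_matches(np.array([1.0, 0.05]), db))  # → [0, 2]
```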

If people are able to check whether there is synthetic porn that looks like themselves, the synthetic hate-illustration industrialists' product loses destructive potential, and the attacks that do happen are less destructive because they are exposed by the APW_AI, which decimates the monetary value of these disinformation weapons to the criminals.

If you feel comfortable leaving your model with the good people at the benefactor for safekeeping, you get alerted and helped if you are ever attacked with a synthetic porn attack.

Rules

Looking up whether matches are found for anyone else's model is forbidden, and this should probably be enforced with a w:biometric w:facial recognition system app that checks that the model you want checked is yours and that you are awake.

Definition of adequacy

An adequate implementation should be nearly free of false positives, very good at finding true positives and able to process more porn than is ever uploaded.
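The three adequacy criteria above could be expressed as numbers. As a minimal sketch, assuming conventional precision/recall terminology and made-up metric names that are not part of the concept text:

```python
def adequacy_report(tp: int, fp: int, fn: int,
                    processed_hours: float, uploaded_hours: float) -> dict:
    """Summarize the three adequacy criteria numerically.

    Hypothetical mapping: the concept text states the criteria
    only qualitatively.
    """
    # "Nearly free of false positives" -> precision close to 1.
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    # "Very good at finding true positives" -> recall close to 1.
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    # "Able to process more porn than is ever uploaded" -> throughput check.
    keeps_up = processed_hours >= uploaded_hours
    return {"precision": precision, "recall": recall, "keeps_up": keeps_up}

print(adequacy_report(tp=98, fp=1, fn=4,
                      processed_hours=30_000.0, uploaded_hours=24_000.0))
```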

What about the people in the porn-industry?

People who openly do porn can help by opting in to the development, providing training material and material to test the AI on. People and companies who help train the AI naturally get credited for their help.

There are of course lots of people-questions to this and those questions need to be identified by professionals of psychology and social sciences.

History

The idea of APW_AI occurred to User:Juho Kunsola on Friday 2019-07-12. Subsequently (the next day) this discovery caused the scrapping of the plea to ban covert modeling of human appearance, as that would have rendered APW_AI legally impossible.

Countermeasures elsewhere

Partial transclusion from Organizations, studies and events against synthetic human-like fakes

Companies against synthetic filth

A service identical to APW_AI used to exist - FacePinPoint.com

Partial transclusion from FacePinPoint.com


FacePinPoint.com was a for-a-fee service, operating from 2017 to 2021, for pointing out where on pornography sites a particular face appears or, in the case of synthetic pornography, where a digital look-alike makes make-believe of a face or body appearing.[contacted 2] The inventor and founder of FacePinPoint.com, Mr. Lionel Hagege, registered the domain name in 2015,[9] when he set out to research the feasibility of his action-plan idea against non-consensual pornography.[10] The description of how FacePinPoint.com worked is the same as the description of Adequate Porn Watcher AI (concept).


SSF! wiki proposed countermeasure to digital sound-alikes: Outlawing digital sound-alikes (transcluded)

Transcluded from Juho's proposal on banning digital sound-alikes


Motivation: The current situation, in which criminals can freely trade and grow their libraries of stolen voices, is unwise.

§1 Unauthorized modeling of a human voice

Acquiring a model of a human's voice that deceptively resembles some dead or living person's voice, as well as the possession, purchase, sale, yielding, import and export of such a model without the express consent of the target, are punishable.

§2 Application of unauthorized voice models

Producing and making available media from covert voice models defined in §1 is punishable.

§3 Aggravated application of unauthorized voice models

If the produced media is made for the purpose of

  • framing a human target or targets for crimes,
  • attempting extortion, or
  • defaming the target,

the crime should be judged as aggravated.


