Adequate Porn Watcher AI (concept)
Adequate Porn Watcher AI (APW_AI) is an w:AI and w:computer vision concept to search for any and all porn that should not be, by watching and modeling all porn ever found on the w:Internet, thus effectively protecting humans by exposing covert naked digital look-alike attacks and other contraband.
Note: #A service identical to APW_AI used to exist - FacePinPoint.com (see below)
The method and the effect
The method by which APW_AI would provide safety and security to its users is that they can briefly upload a model they've gotten of themselves, and the APW_AI will then either report nothing matching found or be of the opinion that something matching was found.
If people are able to check whether there is synthetic porn that looks like them, the product of the synthetic hate-illustration industrialists loses destructive potential: the attacks that do happen are less destructive because they are exposed by the APW_AI, which thus decimates the monetary value of these disinformation weapons to the criminals.
If you feel comfortable leaving your model with the good people at the benefactor for safekeeping, you get alerted and helped if you are ever attacked with a synthetic porn attack.
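To make the matching step concrete, below is a minimal sketch, assuming the "model" boils down to a face-embedding vector and that matching is done by cosine similarity over everything the crawler has indexed. The function names and the 0.75 threshold are illustrative assumptions, not a specification of any real system.

```python
# Minimal sketch of the APW_AI matching step (illustrative only).
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_model(user_embedding: np.ndarray,
                indexed_embeddings: list[np.ndarray],
                threshold: float = 0.75) -> str:
    """Compare the user's uploaded model against the crawled index
    and report either a match or the all-clear."""
    for candidate in indexed_embeddings:
        if cosine_similarity(user_embedding, candidate) >= threshold:
            return "something matching found"
    return "nothing matching found"
```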
Rules
Looking up whether matches are found for anyone else's model is forbidden, and this should probably be enforced with a w:biometric w:facial recognition app that checks both that the model you want checked is of you and that you are awake. A sketch of this rule follows.
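Here is a sketch of how that rule could be enforced, assuming face embeddings again plus a separate liveness check; every name here is a hypothetical illustration, not a real mechanism.

```python
# Hypothetical enforcement of the ownership rule: a query is allowed
# only if a live capture of the requester matches the model being
# checked AND a liveness check shows the requester is awake.
import numpy as np

def embeddings_match(a: np.ndarray, b: np.ndarray,
                     threshold: float = 0.75) -> bool:
    sim = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return sim >= threshold

def may_query(live_capture: np.ndarray, queried_model: np.ndarray,
              liveness_check_passed: bool) -> bool:
    """Enforce 'the model is yours and you are awake'."""
    return liveness_check_passed and embeddings_match(live_capture,
                                                      queried_model)
```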
Definition of adequacy
An adequate implementation should be nearly free of false positives, very good at finding true positives, and able to process more porn than is ever uploaded. These three criteria map onto standard classifier metrics, as sketched below.
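One way to make the three criteria measurable, assuming standard confusion-matrix counts; a sketch, not a benchmark definition.

```python
def precision(tp: int, fp: int) -> float:
    # "nearly free of false positives" => precision close to 1.0
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    # "very good at finding true positives" => recall close to 1.0
    return tp / (tp + fn)

def keeps_up(processed_per_day: float, uploaded_per_day: float) -> bool:
    # "able to process more porn than is ever uploaded"
    return processed_per_day > uploaded_per_day
```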
What about the people in the porn-industry?
People who openly do porn can help by opting in to the development, providing training material and material to test the AI on. People and companies who help in training the AI naturally get credited for their help.
There are, of course, many human questions here, and those questions need to be identified by professionals in psychology and the social sciences.
History
The idea of APW_AI occurred to User:Juho Kunsola on Friday 2019-07-12. Subsequently (the next day) this discovery caused the scrapping of the plea to ban covert modeling of human appearance, as that would have rendered APW_AI legally impossible.
Resources
Tools
- w:PhotoDNA is an image-identification technology used for detecting w:child pornography and other illegal content reported to the w:National Center for Missing & Exploited Children (NCMEC) as required by law.[1] It was developed by w:Microsoft Research and w:Hany Farid, professor at w:Dartmouth College, beginning in 2009. (Wikipedia) An illustrative sketch of the robust-hashing idea behind such tools follows this list.
- The w:Child abuse image content list (CAIC List) is a list of URLs and image hashes provided by the w:Internet Watch Foundation to its partners to enable the blocking of w:child pornography & w:criminally obscene adult content in the UK and by major international technology companies. (Wikipedia).
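PhotoDNA's exact algorithm is proprietary, but it belongs to the family of robust (perceptual) hashes, where near-duplicate images yield near-equal hashes. A minimal difference-hash (dHash) sketch using Pillow illustrates the general idea only; it is far less robust than PhotoDNA.

```python
# Illustrative difference-hash (dHash), NOT PhotoDNA: shrink the image,
# record whether each pixel is brighter than its right-hand neighbour,
# and compare hashes by Hamming distance.
from PIL import Image

def dhash(path: str, size: int = 8) -> int:
    img = Image.open(path).convert("L").resize((size + 1, size))
    px = list(img.getdata())  # row-major, size+1 columns per row
    bits = 0
    for row in range(size):
        for col in range(size):
            left = px[row * (size + 1) + col]
            right = px[row * (size + 1) + col + 1]
            bits = (bits << 1) | (left > right)
    return bits

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")  # small distance => likely same image
```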
Legal

- w:Outline of law
- w:List of national legal systems
- w:List of legislatures by country
Traditional porn-blocking
Traditional porn-blocking done by w:some countries seems to use w:DNS to deny access to porn sites: if the domain name matches an item in a porn-site database, the resolver returns an unroutable address, usually w:0.0.0.0. A toy illustration follows.
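A toy sketch of that blocking technique, assuming a plain set of blocked domain names; real deployments use resolver software (such as Pi-hole, listed below) or RPZ zone files rather than a script like this.

```python
# Toy DNS sinkhole: answer 0.0.0.0 for blocklisted domains,
# resolve everything else normally.
import socket

BLOCKLIST = {"blocked-example.test"}  # hypothetical entry

def resolve(domain: str) -> str:
    if domain.lower().rstrip(".") in BLOCKLIST:
        return "0.0.0.0"  # unroutable sinkhole address
    return socket.gethostbyname(domain)  # normal resolution
```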
Topics on github.com
- Topic "porn-block" on github.com (8 repositories as of 2020-09)[1st seen in 1]
- Topic "pornblocker" on github.com (13 repositories as of 2020-09)[1st seen in 2]
- Topic "porn-filter" on github.com (35 repositories as of 2020-09)[1st seen in 3]
Curated lists and databases
- 'Awesome-Adult-Filtering-Accountability' a list at github.com curated by wesinator - a list of tools and resources for adult content/porn accountability and filtering[1st seen in 1]
- 'Pornhosts' at github.com by Import-External-Sources is a hosts-file formatted version of a w:Response policy zone (RPZ) zone file. It describes itself as "a consolidated anti porn hosts file" and states its mission as "an endeavour to find all porn domains and compile them into a single hosts to allow for easy blocking of porn on your local machine or on a network."[1st seen in 1]
- 'Amdromeda blocklist for Pi-hole' at github.com by Amdromeda[1st seen in 1] lists 50MB worth of just porn host names, in three parts: part 1 (16.6MB), part 2 (16.8MB), part 3 (16.9MB) (as of 2020-09)
- 'Pihole-blocklist' at github.com by mhakim[1st seen in 3]
- 'superhostsfile' at github.com by universalbyte is an ongoing effort to chart out "negative" hosts.[1st seen in 2]
- 'hosts' at github.com by StevenBlack is a hosts file for negative sites. It is updated constantly from its upstream sources and lists 559k (1.64MB) of porn and other dodgy hosts (as of 2020-09)
- 'Porn-domains' at github.com by Bon appétit was (as of 2020-09) last updated in March 2019 and lists more than 22k domains.
Porn blocking services
- w:Pi-hole - https://pi-hole.net/ - Network-wide Ad Blocking
Software for nudity detection
- 'PornDetector' at github.com by bakwc consists of two porn image (nudity) detectors, both written in w:Python (programming language)[1st seen in 3]. pcr.py uses w:scikit-learn and the w:OpenCV Open Source Computer Vision Library, whereas nnpcr.py uses w:TensorFlow and reaches a higher accuracy. (An illustrative sketch of the classical approach follows this list.)
- 'Laravel 7 Google Vision restringe pornografia detector de faces' ("Laravel 7 Google Vision restricts pornography, face detector"), a porn restriction app in Portuguese at github.com by thelesson, utilizes the Google Vision API to help site maintainers stop users from uploading porn. It was written for the MiniBlog w:Laravel blog app.
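Below is a hedged sketch of the classical approach that pcr.py represents (hand-crafted colour features plus a scikit-learn classifier). It illustrates the idea only and is not the repository's actual code; the HSV skin-tone range is an assumption.

```python
# Illustrative nudity-detector skeleton: a crude skin-pixel-ratio
# feature fed to a scikit-learn classifier.
import cv2
import numpy as np
from sklearn.linear_model import LogisticRegression

def skin_ratio(path: str) -> float:
    """Fraction of pixels falling inside a rough HSV skin-tone range."""
    hsv = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array((0, 40, 60)), np.array((25, 180, 255)))
    return float(np.count_nonzero(mask)) / mask.size

def train(paths: list[str], labels: list[int]) -> LogisticRegression:
    X = np.array([[skin_ratio(p)] for p in paths])  # 1 feature per image
    return LogisticRegression().fit(X, labels)      # 1 = porn, 0 = not
```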
Links regarding pornography censorship
- w:Pornography laws by region
- w:Internet pornography
- w:Legal status of Internet pornography
- w:Sex and the law
Against pornography
- Reasons for w:opposition to pornography include w:religious views on pornography, w:feminist views of pornography, and claims of w:effects of pornography, such as w:pornography addiction. (Wikipedia as of 2020-09-19)
Technical means of censorship and how to circumvent
- w:Internet censorship and w:internet censorship circumvention
- w:Content-control software (Internet filter), a common approach to w:parental control.
- w:Accountability software
- w:Employee monitoring is often automated using w:employee monitoring software
- A w:wordfilter (sometimes referred to as just "filter" or "censor") is a script typically used on w:Internet forums or w:chat rooms that automatically scans users' posts or comments as they are submitted and automatically changes or w:censors particular words or phrases. (Wikipedia as of 2020-09) A minimal example follows this list.
- w:Domain fronting is a technique for w:internet censorship circumvention that uses different w:domain names in different communication layers of an w:HTTPS connection to discreetly connect to a different target domain than is discernable to third parties monitoring the requests and connections. (Wikipedia 2020-09-22)
- w:Internet censorship in China and w:some tips to how to evade internet censorship in China
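A minimal wordfilter of the kind described in the list above; the filter table is a hypothetical example.

```python
# Minimal wordfilter: scan a submitted post and replace listed words
# before the post is stored or displayed.
import re

BANNED = {"badword": "****"}  # hypothetical filter table

def wordfilter(post: str) -> str:
    for word, replacement in BANNED.items():
        post = re.sub(rf"\b{re.escape(word)}\b", replacement, post,
                      flags=re.IGNORECASE)
    return post

print(wordfilter("This BadWord gets censored."))  # This **** gets censored.
```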
Countermeasures elsewhere
Partial transclusions from Organizations and events against synthetic human-like fakes below
Companies against synthetic human-like fakes
I have been searching for an antidote to the synthetic human-like fakes since 2003, and on Friday 2019-07-12 it occurred to me that a service, like I have described later on in Adequate Porn Watcher AI (concept), could very well be the answer to "How to defend against the covert disinformation attacks with fake human-like images?". There is good progress on the legislative side, but laws that cannot be humanely policed would end up dead letters with negligible de facto effect, hence the need for some computer vision AI help.
Candidates for the ultimate defensive weapon against the digital look-alike attacks
- Alecto AI at alectoai.com[1st seen in 4], a provider of an AI-based face information analytics, founded in 2021 in Palo Alto.
- Facenition.com, an NZ company founded in 2019 with an ingenious method to hunt for the fake human-like images. It has probably been purchased, merged or licensed by ThatsMyFace.com
- ThatsMyFace.com[1st seen in 4], an Australian company.[contacted 1] Previously, another company in the USA had this same name and domain name.[2]
Organizations against synthetic human-like fakes
w:Fraunhofer Society's Fraunhofer Institute for Applied and Integrated Security (AISEC) has been developing automated tools for catching synthetic human-like fakes.
- Deepfake-Total.com, an anti-audiofake service made by the Cognitive Security Technologies (CST) department of Fraunhofer AISEC.[1st seen in 5]
- Deepfakes: AI systems reliably expose manipulated audio and video at aisec.fraunhofer.de
- Fields of expertise - Cognitive Security Technologies (CST) department of Fraunhofer AISEC at aisec.fraunhofer.de
- Fraunhofer Institute for Applied and Integrated Security at aisec.fraunhofer.de
- Official website of the Fraunhofer Society at Fraunhofer.de
AI incident repositories
- The 'AI Incident Database' at incidentdatabase.ai was introduced on 2020-11-18 by the w:Partnership on AI.[3]
- Artificial Intelligence Algorithmic Automation Incidents Controversies at aiaaic.org[contact 1][contacted 2] was founded by Charlie Pownall. The AIAAIC repository at aiaaic.org contains a wealth of reporting on different problematic uses of AI. The domain name aiaaic.org was registered on Tuesday 2021-02-23.[4] The AIAAIC repository is a free, open resource which anyone can use, copy, redistribute and adapt under the terms of its CC BY 4.0 license.[5]
- The OECD.AI Policy Observatory at oecd.ai, in conjunction with the Patrick J McGovern Foundation, provides the OECD AI Incidents Monitor (AIM) at oecd.ai
- The w:Algorithmic Justice League is also accepting reports of AI harms at report.ajl.org
Help for victims of image or audio based abuse
- Cyber Civil Rights Initiative at cybercivilrights.org, a US-based NGO.[contact 2] History / Mission / Vision of cybercivilrights.org. Get help now - CCRI Safety Center at cybercivilrights.org - CCRI Image Abuse Helpline - If you are a victim of Image-Based Sexual Abuse (IBSA), please call the CCRI Image Abuse Helpline at 1-844-878-2274, which is available free of charge, 24/7.
- Existing Nonconsensual Pornography, Sextortion, and Deep Fake Laws at cybercivilrights.org
- Report Remove: Remove a nude image shared online at childline.org.uk[1st seen in 6]. Report Remove is a service for under-19-year-olds by w:Childline, a UK service by the w:National Society for the Prevention of Cruelty to Children (NSPCC), powered by technology from the w:Internet Watch Foundation. Childline is here to help anyone under 19 in the UK with any issue they’re going through. Info on Report Remove at iwf.org.uk
Awareness and countermeasures
- The Internet Watch Foundation at iwf.org.uk[contact 3] - The w:Internet Watch Foundation is a UK charity that seeks to minimise the availability of online sexual abuse content, specifically child sexual abuse images and videos hosted anywhere in the world and non-photographic child sexual abuse images hosted in the UK. "About us" at iwf.org.uk, "Our technology" at iwf.org.uk
- Partnership on AI at partnershiponai.org (contact form)[contact 4] is based in the USA and funded by technology companies. They provide resources and have a vast amount and high caliber of partners. See w:Partnership on AI and Partnership on AI on LinkedIn.com for more info.
- The WITNESS Media Lab at lab.witness.org by w:Witness (organization) (contact form)[contact 5], a human rights non-profit organization based out of Brooklyn, New York, has been actively against synthetic filth since 2018. They work both in awareness raising as well as media forensics.
- Open-source intelligence digital forensics - How do we work together to detect AI-manipulated media? at lab.witness.org. "In February 2019 WITNESS in association with w:George Washington University brought together a group of leading researchers in media forensics and w:detection of w:deepfakes and other w:media manipulation with leading experts in social newsgathering, w:User-generated content and w:open-source intelligence (w:OSINT) verification and w:fact-checking." (website)
- Prepare, Don’t Panic: Synthetic Media and Deepfakes at lab.witness.org is a summary page for WITNESS Media Lab's ongoing work against synthetic human-like fakes. Their work was launched in 2018 with the first multi-disciplinary convening around deepfakes preparedness, which led to the writing of the report “Mal-uses of AI-generated Synthetic Media and Deepfakes: Pragmatic Solutions Discovery Convening” (dated 2018-06-11). Deepfakes and Synthetic Media: What should we fear? What can we do? at blog.witness.org
- w:Financial Coalition Against Child Pornography could be interested in taking down payment possibilities also for sites distributing non-consensual synthetic pornography.
- Reality Defender at realitydefender.ai - Enterprise-Grade Deepfake Detection Platform for many stakeholders
- Civic AI Security Program at civai.org, an independent nonprofit based in California, working on raising awareness. See https://deepfake.civai.org/ for their "make your deepfake" public service announcement service.
- Screen Actors Guild - American Federation of Television and Radio Artists - w:SAG-AFTRA (sagaftra.org, contact form)[contact 6] SAG-AFTRA ACTION ALERT: "Support California Bill to End Deepfake Porn" at sagaftra.org endorses California Senate Bill SB 564, introduced to the w:California State Senate by w:California w:Senator Connie Leyva in Feb 2019.
Organizations for media forensics
- w:DARPA (darpa.mil, contact form)[contact 7] DARPA program: 'Media Forensics' (MediFor) at darpa.mil aims to develop technologies for the automated assessment of the integrity of an image or video and to integrate these in an end-to-end media forensics platform. Archive.org first crawled their homepage in June 2016[6].
- DARPA program: 'Semantic Forensics' (SemaFor) at darpa.mil aims to counter synthetic disinformation by developing systems for detecting semantic inconsistencies in forged media. They state that they hope to create technologies that "will help identify, deter, and understand adversary disinformation campaigns". More information at w:Duke University's Research Funding database: Semantic Forensics (SemaFor) at researchfunding.duke.edu and some at Semantic Forensics grant opportunity (closed Nov 2019) at grants.gov. Archive.org first crawled their website in November 2019[7]
- The National Center for Media Forensics at artsandmedia.ucdenver.edu, housed in the w:University of Colorado Denver's College of Arts & Media[contact 8], offers a Master's degree program, training courses and scientific basic and applied research. Faculty staff at the NCMF
- Media Forensics Hub at clemson.edu[contact 9] at the Watt Family Innovation Center of the w:Clemson University has the aims of promoting multi-disciplinary research, collecting and facilitating discussion and ideation of challenges and solutions. They provide resources, research, media forensics education and are running a Working Group on disinformation.[contact 10]
A service identical to APW_AI used to exist - FacePinPoint.com
Transcluded from FacePinPoint.com
FacePinPoint.com was a for-a-fee service from 2017 to 2021 for pointing out where in pornography sites a particular face appears, or, in the case of synthetic pornography, where a digital look-alike makes make-believe of a face or body appearing.[contacted 3] The inventor and founder of FacePinPoint.com, Mr. Lionel Hagege, registered the domain name in 2015[8], when he set out to research the feasibility of his action plan idea against non-consensual pornography.[9] The description of how FacePinPoint.com worked is the same as Adequate Porn Watcher AI (concept)'s description.
Organizations possibly against synthetic human-like fakes
Originally harvested from the study The ethics of artificial intelligence: Issues and initiatives (.pdf) by the w:European Parliamentary Research Service, published on the w:Europa (web portal) in March 2020.[1st seen in 7]
- INSTITUTE FOR ETHICS IN ARTIFICIAL INTELLIGENCE at ieai.mcts.tum.de[contact 11] received initial funding from w:Facebook in 2019.[1st seen in 7] IEAI on LinkedIn.com
- The Institute for Ethical AI & Machine Learning at ethical.institute (contact form asks a lot of questions)[contact 12][contacted 4][1st seen in 7] The Institute for Ethical AI & Machine Learning on LinkedIn.com
- Future of Life Institute at futureoflife.org (contact form with also mailing list)[contact 14][contacted 5] received funding from private donors.[1st seen in 7] See w:Future of Life Institute for more info.
- The Japanese Society for Artificial Intelligence (JSAI) at ai-gakkai.or.jp[contact 15] Publication: Ethical guidelines.[1st seen in 7]
- AI4All at ai-4-all.org (contact form with also mailing list subscription) [contact 16][contacted 6] funded by w:Google[1st seen in 7] AI4All on LinkedIn.com
- The Future Society at thefuturesociety.org (contact form with also mailing list subscription)[contact 17][1st seen in 7]. Their activities include policy research, educational & leadership development programs, advisory services, seminars & summits and other special projects to advance the responsible adoption of Artificial Intelligence (AI) and other emerging technologies. The Future Society on LinkedIn.com
- The AI Now Institute at ainowinstitute.org (contact form and possibility to subscribe to mailing list)[contact 18][contacted 7] at w:New York University[1st seen in 7]. Their work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License. The AI Now Institute on LinkedIn.com
- The Foundation for Responsible Robotics at responsiblerobotics.org (contact form)[contact 19] is based in w:Netherlands.[1st seen in 7] The Foundation for Responsible Robotics on LinkedIn.com
- AI4People at ai4people.eu (contact form)[contact 20], based in w:Belgium, is a multi-stakeholder forum.[1st seen in 7] AI4People on LinkedIn.com
- The Ethics and Governance of Artificial Intelligence Initiative at aiethicsinitiative.org is a joint project of the MIT Media Lab and the Harvard Berkman-Klein Center for Internet and Society and is based in the USA.[1st seen in 7]
- Saidot at saidot.ai is a Finnish company offering a platform for AI transparency, explainability and communication.[1st seen in 7] Saidot on LinkedIn.com
- euRobotics at eu-robotics.net is funded by the w:European Commission.[1st seen in 7]
- Centre for Data Ethics and Innovation at gov.uk, part of Department for Digital, Culture, Media & Sport is financed by the UK govt. Centre for Data Ethics and Innovation Blog at cdei.blog.gov.uk[1st seen in 7] Centre for Data Ethics and Innovation on LinkedIn.com
- ACM Special Interest Group on Artificial Intelligence at sigai.acm.org is a w:Special Interest Group on AI by ACM. 'AI Matters: A Newsletter of ACM SIGAI -blog at sigai.acm.org and the newsletter that the blog gets its contents from[1st seen in 7]
- IEEE Ethics in Action - in Autonomous and Intelligent Systems at ethicsinaction.ieee.org (mailing list subscription on website)[contact 21]
- The Center for Countering Digital Hate at counterhate.com (subscribe to mailing list on website)[contact 22][contacted 8] is an international not-for-profit NGO that seeks to disrupt the architecture of online hate and misinformation, with offices in London and Washington DC.
- Partnership for Countering Influence Operations (PCIO) at carnegieendowment.org (contact form)[contact 23] is a partnership by the w:Carnegie Endowment for International Peace
- UN Global Pulse at unglobalpulse.org is w:United Nations Secretary-General’s initiative on big data and artificial intelligence for development, humanitarian action, and peace.
- humane-ai.eu by Knowledge 4 All Foundation Ltd. at k4all.org[contact 24]
Services that should get back to the task at hand - FacePinPoint.com
Transcluded from FacePinPoint.com - see the section "A service identical to APW_AI used to exist - FacePinPoint.com" above for the description.
Other essential developments
- The Montréal Declaration for a Responsible Development of Artificial Intelligence at montrealdeclaration-responsibleai.com[contact 25] and the same site in French, La Déclaration de Montréal IA responsable, at declarationmontreal-iaresponsable.com[1st seen in 7]
- UNI Global Union at uniglobalunion.org is based in w:Nyon, w:Switzerland and deals mainly with labor issues to do with AI and robotics.[1st seen in 7] UNI Global Union on LinkedIn.com
- European Robotics Research Network at cordis.europa.eu funded by the w:European Commission.[1st seen in 7]
- European Robotics Platform at eu-robotics.net is funded by the w:European Commission. See w:European Robotics Platform and w:List of European Union robotics projects#EUROP for more info.[1st seen in 7]
Studies against synthetic human-like fakes
Detecting deep-fake audio through vocal tract reconstruction
Detecting deep-fake audio through vocal tract reconstruction is an epic scientific work against fake human-like voices from the w:University of Florida, published to peers in August 2022.
The work Who Are You (I Really Wanna Know)? Detecting Audio DeepFakes Through Vocal Tract Reconstruction at usenix.org, presentation page, version included in the proceedings[12] and slides from researchers of the Florida Institute for Cybersecurity Research (FICS) at fics.institute.ufl.edu in the w:University of Florida received funding from the w:Office of Naval Research and was presented on 2022-08-11 at the 31st w:USENIX Security Symposium.
This work was done by PhD student Logan Blue, Kevin Warren, Hadi Abdullah, Cassidy Gibson, Luis Vargas, Jessica O’Dell, Kevin Butler and Professor Patrick Traynor.
The University of Florida Research Foundation Inc has filed for and received a US patent titled 'Detecting deep-fake audio through vocal tract reconstruction', registration number US20220036904A1 (link to patents.google.com), with 20 claims. The patent application was published on Thursday 2022-02-03 and was approved on 2023-07-04; the patent has an adjusted expiration date of Sunday 2041-12-29.
Protecting world leaders against deep fakes using facial, gestural, and vocal mannerisms
- Protecting world leaders against deep fakes using facial, gestural, and vocal mannerisms is a brief report by Matyáš Boháček and Hany Farid on their recent work published on Wednesday 2022-11-23 in w:Proceedings of the National Academy of Sciences of the United States of America (PNAS). 'Protecting world leaders against deep fakes using facial, gestural, and vocal mannerisms' at pnas.org[13]
Protecting President Zelenskyy against deep fakes
- Protecting President Zelenskyy against deep fakes - 'Protecting President Zelenskyy against Deep Fakes' at arxiv.org[14] by Matyáš Boháček of Johannes Kepler Gymnasium and w:Hany Farid, the dean and head of w:Berkeley School of Information at the University of California, Berkeley. This brief paper describes their automated digital look-alike detection system and evaluates its efficacy and reliability in comparison to humans with untrained eyes. Their work provides automated evaluation tools to catch so-called "deep fakes", and their motivation seems to have been to find automation armor against disinformation warfare targeting humans and humanity. Automated digital media forensics is a very good idea explored by many. The Boháček and Farid 2022 detection system works by evaluating both facial and gestural mannerisms to tell clips of non-human origin from those of human origin; an illustrative sketch of this family of approaches follows.
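Boháček and Farid's actual pipeline is described in their paper; the sketch below only illustrates the general shape of mannerism-based detection (behavioral features extracted from facial/gestural tracks, a one-class model trained on genuine footage of the protected person). All names, shapes and parameters are illustrative assumptions, not the authors' code.

```python
# Illustrative mannerism-based detector: clips whose movement
# statistics fall outside the profile learned from genuine footage
# are flagged as suspect.
import numpy as np
from sklearn.svm import OneClassSVM

def mannerism_features(landmarks: np.ndarray) -> np.ndarray:
    """landmarks: (frames, points, 2) facial/gestural tracks.
    Features: mean and std of per-point movement speed."""
    speeds = np.linalg.norm(np.diff(landmarks, axis=0), axis=2)
    return np.concatenate([speeds.mean(axis=0), speeds.std(axis=0)])

def train_profile(genuine_clips: list[np.ndarray]) -> OneClassSVM:
    X = np.stack([mannerism_features(c) for c in genuine_clips])
    return OneClassSVM(nu=0.05).fit(X)

def is_suspect(profile: OneClassSVM, clip: np.ndarray) -> bool:
    return profile.predict(mannerism_features(clip)[None, :])[0] == -1
```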
Other studies and reports against synthetic human-like fakes
- Deepfake Porn Is Leading to a New Protection Industry at spectrum.ieee.org, a July 2024 piece in w:IEEE Spectrum.
- Briefing Paper: Deepfake Image-Based Sexual Abuse, Tech Facilitated Sexual Exploitation and the Law at equalitynow.org[1st seen in 8], a January 2024 briefing paper by w:Equality Now and Alliance For Universal Digital Rights at audri.org
- The Weaponisation of Deepfakes - Digital Deception by the Far-Right at icct.nl, an w:International Centre for Counter-Terrorism policy brief by Ella Busch and Jacob Ware. Published in December 2023.
- Increasing Threats of Deepfake Identities at dhs.gov by the w:United States Department of Homeland Security
- Deepfake Pornography and the Ethics of Non-Veridical Representations at link.springer.com, a 2023 research article, published in w:Philosophy & Technology on 2023-08-23. (paywalled)
- NONCONSENSUAL DEEPFAKES: DETECTING AND REGULATING THE RISING THREAT TO PRIVACY at digitalcommons.law.uidaho.edu by Natalie Lussier, published in Idaho Law Review January 2022
- 'Disinformation That Kills: The Expanding Battlefield Of Digital Warfare' at cbinsights.com, a 2020-10-21 research brief on disinformation warfare by w:CB Insights, a private company that provides w:market intelligence and w:business analytics services
- 'Media Forensics and DeepFakes: an overview' at arXiv.org (as .pdf at arXiv.org), an overview on the subject of digital look-alikes and media forensics published in August 2020 in Volume 14 Issue 5 of IEEE Journal of Selected Topics in Signal Processing. 'Media Forensics and DeepFakes: An Overview' at ieeexplore.ieee.org (paywalled, free abstract)
- 'DEEPFAKES: False pornography is here and the law cannot protect you' at scholarship.law.duke.edu by Douglas Harris, published in Duke Law & Technology Review - Volume 17 on 2019-01-05 by w:Duke University School of Law
Legal information compilations
- Organizations, studies and events against synthetic human-like fakes
- A Look at Global Deepfake Regulation Approaches at responsible.ai[15] April 2023 compilation and reporting by Amanda Lawson of the Responsible Artificial Intelligence Institute.
- The High Stakes of Deepfakes: The Growing Necessity of Federal Legislation to Regulate This Rapidly Evolving Technology at legaljournal.princeton.edu[16] compilation and reporting by Caroline Quirk. PLJ is Princeton’s only student-run law review.
- Exploring Legal Approaches to Regulating Nonconsensual Deepfake Pornography at techpolicy.press[17] May 2023 compilation and reporting by Kaylee Williams
- Deepfake AI laws for USA at foundationra.com, Sextortion laws for USA at foundationra.com and Revenge porn laws for USA at foundationra.com compilations by Foundation RA
- Deepfake laws: is AI outpacing legislation? at onfido.com[18] February 2024 summary and compilation by Aled Owen, Director of Global Policy at Onfido (for-profit)
- Is Deepfake Pornography Illegal? at criminaldefenselawyer.com [19] by Rebecca Pirius is a good bring-together of the current illegality/legality situation in the USA federally and state-wise. Published by w:Nolo (publisher), updated Feb 2024
- Deepfake Pornography: A Legal and Ethical Menace at tclf.in[20] October 2023 compilation and reporting by Janvhi Rastogi, published in The Contemporary Law Forum.
More studies can be found in the SSFWIKI Timeline of synthetic human-like fakes
Search for more
Reporting against synthetic human-like fakes
- 'Researchers use facial quirks to unmask ‘deepfakes’' at news.berkeley.edu 2019-06-18 reporting by Kara Manke published in Politics & society, Research, Technology & engineering-section in Berkley News of w:UC Berkeley.
Companies against synthetic human-like fakes

See resources for more.
- Cyabra.com is an AI-based system that helps organizations be on guard against disinformation attacks[1st seen in 9]. Reuters.com reporting from July 2020.
Events against synthetic human-like fakes
Upcoming events
In reverse chronological order
Ongoing events
- 2020 - ONGOING | w:National Institute of Standards and Technology (NIST) (nist.gov) (contacting NIST) | Open Media Forensics Challenge presented in Open Media Forensics Challenge at nist.gov and Open Media Forensics Challenge (OpenMFC) at mfc.nist.gov[contact 26] - Open Media Forensics Challenge Evaluation (OpenMFC) is an open evaluation series organized by the NIST to assess and measure the capability of media forensic algorithms and systems.[21]
Past events
- 2023 | World Anti-Bullying Forum October 25-27 in North Carolina, U.S.A.
- 2022 | w:European Conference on Computer Vision in Tel Aviv, Israel
- 2022 | HotNets 2022: Twenty-First ACM Workshop on Hot Topics in Networks at conferences.sigcomm.org - November 14-15, 2022 — Austin, Texas, USA. Presented at HotNets 2022 was a very interesting paper on 'Global Content Revocation on the Internet: A Case Study in Technology Ecosystem Transformation' at farid.berkeley.edu
- 2022 | INTERSPEECH 2022 at interspeech2022.org organized by w:International Speech Communication Association was held on 18-22 September 2022 in Korea. The work 'Attacker Attribution of Audio Deepfakes' at arxiv.org was presented there
- 2022 | Technologies of Deception at law.yale.edu, a conference hosted by the w:Information Society Project (ISP) was held at Yale Law School in New Haven, Connecticut, on March 25-26, 2022[22]
- 2021 | w:Conference on Neural Information Processing Systems NeurIPS 2021 at neurips.cc was held virtually in December 2021. I haven't seen any good tech coming from there in 2021. On the problematic side, w:StyleGAN3 was presented there.
- 2021 | w:Conference on Computer Vision and Pattern Recognition (CVPR) 2021 CVPR 2021 at cvpr2021.thecvf.com
- CVPR 2021 research areas visualization by Joshua Preston at public.tableau.com
- 2021 'Workshop on Media Forensics' in CVPR 2021 at sites.google.com, a June 2021 workshop at the Conference on Computer Vision and Pattern Recognition.
- 2020 | CVPR 2020 | 2020 Conference on Computer Vision and Pattern Recognition: 'Workshop on Media Forensics' at sites.google.com, a June 2020 workshop at the Conference on Computer Vision and Pattern Recognition.
- 2020 | The winners of the Deepfake Detection Challenge reach 82% accuracy in detecting synthetic human-like fakes[23]
- 2020 | You Don't Say: An FTC Workshop on Voice Cloning Technologies at ftc.gov was held on Tuesday 2020-01-28 - reporting at venturebeat.com
- 2019 | At the annual Finnish w:Ministry of Defence's Scientific Advisory Board for Defence (MATINE) public research seminar, a research group presented their work 'Synteettisen median tunnistus' at defmin.fi (Recognizing synthetic media). They developed on earlier work on how to automatically detect synthetic human-like fakes, and their work was funded with a grant from MATINE.
- 2019 | w:NeurIPS 2019 | w:Facebook, Inc. "Facebook AI Launches Its Deepfake Detection Challenge" at spectrum.ieee.org w:IEEE Spectrum. More reporting at "Facebook, Microsoft, and others launch Deepfake Detection Challenge" at venturebeat.com
- 2017-2020 | NIST | NIST: 'Media Forensics Challenge' (MFC) at nist.gov, an iterative research challenge by the w:National Institute of Standards and Technology; at the time of writing, the evaluation criteria for the 2019 iteration were being formed. Succeeded by the Open Media Forensics Challenge.
- 2018 | w:European Conference on Computer Vision (ECCV) ECCV 2018: 'Workshop on Objectionable Content and Misinformation' at sites.google.com, a workshop at the 2018 w:European Conference on Computer Vision in w:Munich, focused on objectionable content detection, e.g. w:nudity, w:pornography, w:violence, w:hate, w:children exploitation and w:terrorism among others, and on addressing misinformation problems when people are fed w:disinformation and they punt it on as misinformation. Announced topics included w:image/video forensics, w:detection/w:analysis/w:understanding of w:fake images/videos, w:misinformation detection/understanding: mono-modal and w:multi-modal, adversarial technologies and detection/understanding of objectionable content
- 2018 | NIST NIST 'Media Forensics Challenge 2018' at nist.gov was the second annual evaluation to support research and help advance the state of the art for image and video forensics technologies – technologies that determine the region and type of manipulations in imagery (image/video data) and the phylogenic process that modified the imagery.
- 2017 | NIST NIST 'Nimble Challenge 2017' at nist.gov
- 2016 | Nimble Challenge 2016 - NIST released the Nimble Challenge’16 (NC2016) dataset as the MFC program kickoff dataset (where NC is the former name of MFC).[24]
Sources for technologies
A map of technologies courtesy of Samsung Next, linked from 'Why it’s time to change the conversation around synthetic media' at venturebeat.com[1st seen in 10]
See also
Biblical connection - Revelation 13 and Daniel 7, wherein, in Daniel 7 and Revelation 13, we are warned of this age of industrial filth. In Revelation 19:20 it says that the beast is taken prisoner; can we achieve this without APW_AI?
References
- ↑ "Microsoft tip led police to arrest man over child abuse images". w:The Guardian. 2014-08-07.
- ↑ https://www.crunchbase.com/organization/thatsmyface-com
- ↑ https://www.partnershiponai.org/aiincidentdatabase/
- ↑ whois aiaaic.org
- ↑ https://charliepownall.com/ai-algorithimic-incident-controversy-database/
- ↑ https://web.archive.org/web/20160630154819/https://www.darpa.mil/program/media-forensics
- ↑ https://web.archive.org/web/20191108090036/https://www.darpa.mil/program/semantic-forensics
- ↑ whois facepinpoint.com
- ↑ https://www.facepinpoint.com/aboutus
- ↑ whois facepinpoint.com
- ↑ https://www.facepinpoint.com/aboutus
- ↑ Blue, Logan; Warren, Kevin; Abdullah, Hadi; Gibson, Cassidy; Vargas, Luis; O’Dell, Jessica; Butler, Kevin; Traynor, Patrick (August 2022). "Detecting deep-fake audio through vocal tract reconstruction". Proceedings of the 31st USENIX Security Symposium: 2691–2708. ISBN 978-1-939133-31-1. Retrieved 2022-10-06.
- ↑ Boháček, Matyáš; Farid, Hany (2022-11-23). "Protecting world leaders against deep fakes using facial, gestural, and vocal mannerisms". w:Proceedings of the National Academy of Sciences of the United States of America. 119 (48). doi:10.1073/pnas.221603511. Retrieved 2023-01-05.
- ↑ Boháček, Matyáš; Farid, Hany (2022-06-14). "Protecting President Zelenskyy against Deep Fakes". arXiv:2206.12043 [cs.CV].
- ↑ Lawson, Amanda (2023-04-24). "A Look at Global Deepfake Regulation Approaches". responsible.ai. Responsible Artificial Intelligence Institute. Retrieved 2024-02-14.
- ↑ Quirk, Caroline (2023-06-19). "The High Stakes of Deepfakes: The Growing Necessity of Federal Legislation to Regulate This Rapidly Evolving Technology". legaljournal.princeton.edu. Princeton Legal Journal. Retrieved 2024-02-14.
- ↑ Williams, Kaylee (2023-05-15). "Exploring Legal Approaches to Regulating Nonconsensual Deepfake Pornography". techpolicy.press. Retrieved 2024-02-14.
- ↑ Owen, Aled (2024-02-02). "Deepfake laws: is AI outpacing legislation?". onfido.com. Onfido. Retrieved 2024-02-14.
- ↑ Pirius, Rebecca (2024-02-07). "Is Deepfake Pornography Illegal?". Criminaldefenselawyer.com. w:Nolo (publisher). Retrieved 2024-02-22.
- ↑ Rastogi, Janvhi (2023-10-16). "Deepfake Pornography: A Legal and Ethical Menace". tclf.in. The Contemporary Law Forum. Retrieved 2024-02-14.
- ↑ https://www.nist.gov/itl/iad/mig/open-media-forensics-challenge
- ↑ https://law.yale.edu/isp/events/technologies-deception
- ↑ https://venturebeat.com/2020/06/12/facebook-detection-challenge-winners-spot-deepfakes-with-82-accuracy/
- ↑ https://www.nist.gov/itl/iad/mig/open-media-forensics-challenge
1st seen in
- ↑ 1.0 1.1 1.2 1.3 Seen first in https://github.com/topics/porn-block, meta for actual use. The topic was stumbled upon.
- ↑ 2.0 2.1 Seen first in https://github.com/topics/pornblocker Saw this originally when looking at https://github.com/topics/porn-block Topic
- ↑ 3.0 3.1 3.2 Seen first in https://github.com/topics/porn-filter Saw this originally when looking at https://github.com/topics/porn-block Topic
- ↑ 4.0 4.1 https://spectrum.ieee.org/deepfake-porn
- ↑ Deutsche Welle English https://www.dw.com/en/live-tv/channel-english
- ↑ https://www.iwf.org.uk/our-technology/report-remove/
- ↑ 7.00 7.01 7.02 7.03 7.04 7.05 7.06 7.07 7.08 7.09 7.10 7.11 7.12 7.13 7.14 7.15 7.16 7.17 7.18 7.19 "The ethics of artificial intelligence: Issues and initiatives" (PDF). w:Europa (web portal). w:European Parliamentary Research Service. March 2020. Retrieved 2021-02-17. This study deals with the ethical implications and moral questions that arise from the development and implementation of artificial intelligence (AI) technologies.
- ↑ Saw a piece on the 51% show on France24 English regarding pornographic "deep-fakes" in July 2024 and searched up the report.
- ↑ https://www.reuters.com/article/us-cyber-deepfake-activist/deepfake-used-to-attack-activist-couple-shows-new-disinformation-frontier-idUSKCN24G15E
- ↑ venturebeat.com found via some Facebook AI & ML group or page yesterday. Sorry, don't know precisely right now.