Adequate Porn Watcher AI (concept): Difference between revisions

From Stop Synthetic Filth! wiki
'''Adequate Porn Watcher AI''' ('''APW_AI''') is an [[w:Artificial intelligence|w:AI]] and [[w:Computer vision|w:computer vision]] concept to search for any and all '''porn that should not be''' by watching and modeling '''all porn''' ever found on the [[w:Internet]], thus effectively '''protecting humans''' by '''exposing [[Synthetic human-like fakes#List of possible naked digital look-alike attacks|covert naked digital look-alike attacks]]''' and also other contraband.
Obs. '''[[#A service identical to APW_AI used to exist - FacePinPoint.com]]'''


''' The method and the effect '''
The method by which '''APW_AI''' would provide <font color="blue">'''safety'''</font> and security to its users is that they can briefly upload a model they've gotten of themselves, and the APW_AI will either say <font color="green">'''nothing matching found'''</font> or be of the opinion that <font color="red">'''something matching found'''</font>.
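
The check described here can be sketched as a nearest-neighbour search over appearance models. The following minimal illustration assumes a model is reduced to a fixed-length feature vector and that cosine similarity against a tuned threshold is the matching criterion; both are assumptions for illustration, not part of the concept itself.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity of two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def check_model(user_model, indexed_models, threshold=0.9):
    """Return the verdict APW_AI would give for a briefly uploaded model:
    scan the index of models built from crawled material and report whether
    any candidate is similar enough to the user's model."""
    for candidate in indexed_models:
        if cosine_similarity(user_model, candidate) >= threshold:
            return "something matching found"
    return "nothing matching found"
```

In a real system the index would hold models extracted from crawled material, and the threshold would have to be tuned toward the adequacy requirement of near-zero false positives.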


If people are <font color="green">'''able to check'''</font> whether there is '''[[Glossary#Synthetic pornography|synthetic porn]]''' that looks like themselves, the synthetic hate-illustration industrialists' product <font color="green">'''loses destructive potential'''</font> and the attacks that do happen are less destructive, as they are exposed by the APW_AI, thus <font color="green">'''decimating the monetary value'''</font> of these disinformation weapons to the <font color="red">'''criminals'''</font>.


If you feel comfortable leaving your model with the good people at the benefactor for safekeeping, you will be alerted and helped if you are ever targeted with a synthetic porn attack.
The idea of APW_AI occurred to [[User:Juho Kunsola]] on Friday 2019-07-12. Subsequently (the next day) this discovery caused the scrapping of [[User:Juho_Kunsola/Law_proposals#Law_proposal_to_ban_covert_modeling_of_human_appearance|the plea to ban covert modeling of human appearance]], as that would have rendered APW_AI legally impossible.


= A service identical to APW_AI that used to exist - FacePinPoint.com =
<section begin=FacePinPoint.com />
[https://www.facepinpoint.com/home '''FacePinPoint''' at facepinpoint.com] was founded in 2017 by Lionel Hagege and officially launched on Saturday 2017-10-28.<ref>https://www.linkedin.com/pulse/facepinpoint-officially-launched-october-28th-lionel-hagege/</ref><ref>https://web.archive.org/web/20211225155243/https://www.facepinpoint.com/aboutus</ref>

The [https://www.facepinpoint.com/how-it-works description of how it worked] ([https://web.archive.org/web/20211224134106/https://www.facepinpoint.com/how-it-works archived]) is the same as [[Adequate Porn Watcher AI (concept)|APW_AI]]'s description, but unfortunately they have closed the service.

The domain name was registered in 2015,<ref>whois facepinpoint.com</ref> when Lionel set out to research the feasibility of his mission.<ref>https://www.facepinpoint.com/aboutus</ref> Facepinpoint Inc. was registered in Delaware on 2017-08-11 and had an office in Beverly Hills.<ref>https://businesssearch.sos.ca.gov/CBS/SearchResults?filing=&SearchType=CORP&SearchCriteria=Facepinpoint+Inc.&SearchSubType=Keyword</ref> The website still states copyright 2017. Facepinpoint Inc. filed for surrender on Friday 2021-06-11.<ref>https://businesssearch.sos.ca.gov/Document/RetrievePDF?Id=04054616-30737954</ref>

As of Dec 2021 their website is still up, but the AI service is no longer available, registrations are closed and their contact form no longer works.

In Dec 2021 I managed to reach someone at FacePinPoint and they told me that they could not find funding and needed to shut shop.

* https://www.facepinpoint.com/home
* https://www.facepinpoint.com/how-it-works
* https://www.facepinpoint.com/aboutus - Lionel Hagege explains why he got on the mission
* https://www.facepinpoint.com/ncplaws - non-consensual pornography laws by state in the USA, compiled by [https://www.cagoldberglaw.com/ C.A. Goldberg, PLLC at cagoldberglaw.com]

''' Contact FacePinPoint.com and tell them to get back to fulfilling their mission '''
* [https://www.linkedin.com/in/lionel-hagege-1b8400102/ Lionel Hagege on linkedin.com]
* https://twitter.com/facepinpoint
* https://www.facebook.com/facepinpoint
* https://www.instagram.com/facepinpoint/

[[File:FacePinPoint-dot-com-how-it-works.png|thumb|center|1200px|FacePinPoint.com used to do the same thing as [[Adequate Porn Watcher AI (concept)|APW_AI]] has been planned to do: to look at mind-boggling amounts of pornography in order to protect its users from their likeness being used or shown somewhere it is not supposed to be.]]
<section end=FacePinPoint.com />

= Resources =
<section begin=See_also />
''' Tools '''
* '''[[w:PhotoDNA]]''' is an image-identification technology used for detecting [[w:child pornography]] and other illegal content reported to the [[w:National Center for Missing & Exploited Children]] (NCMEC) as required by law.<ref>{{cite web
|url=https://www.theguardian.com/technology/2014/aug/07/microsoft-tip-police-child-abuse-images-paedophile
|title=Microsoft tip led police to arrest man over child abuse images
|work=[[w:The Guardian]]
|date=2014-08-07
}}</ref> It was developed by [[w:Microsoft Research]] and [[w:Hany Farid]], professor at [[w:Dartmouth College]], beginning in 2009. ([https://en.wikipedia.org/w/index.php?title=PhotoDNA&oldid=1058600051 Wikipedia])
* The '''[[w:Child abuse image content list]]''' (CAIC List) is a list of URLs and image hashes provided by the [[w:Internet Watch Foundation]] to its partners to enable the blocking of [[w:child pornography]] & [[w:Obscene Publications Acts|w:criminally obscene adult content]] in the UK and by major international technology companies. ([https://en.wikipedia.org/w/index.php?title=Child_abuse_image_content_list&oldid=968491079 Wikipedia])

''' Legal '''
* [[w:Outline of law]]
* [[w:List of national legal systems]]
* [[w:List of legislatures by country]]
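
Hash-based matching of the kind PhotoDNA performs can be illustrated with a much simpler (and much weaker) perceptual hash. PhotoDNA's actual algorithm is proprietary; the average-hash sketch below is only a generic stand-in showing the principle: derive a compact fingerprint that survives small changes to the image, then compare fingerprints by Hamming distance.

```python
def average_hash(pixels):
    """Perceptual hash of a grayscale image given as a 2-D list of 0-255
    values: each bit records whether a pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Number of differing bits between two equal-length hashes."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

# Two nearly identical 2x2 images hash to identical fingerprints,
# while a very different image lands far away.
img = [[200, 10], [220, 30]]
img_recompressed = [[198, 12], [221, 29]]
img_other = [[10, 200], [30, 220]]

assert hamming_distance(average_hash(img), average_hash(img_recompressed)) == 0
assert hamming_distance(average_hash(img), average_hash(img_other)) == 4
```

Real systems hash a normalized, downscaled version of the image and use far more robust transforms; the comparison-by-distance step is the part this sketch shares with them.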
== Traditional porn-blocking ==
Traditional porn-blocking done by [[w:Pornography laws by region|w:some countries]] seems to use [[w:Domain Name System|w:DNS]] to deny access: if the domain name matches an entry in a database of porn sites, the resolver returns an unroutable address, usually [[w:0.0.0.0]].
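
That resolver-side mechanism can be sketched in a few lines. The domain names below are hypothetical placeholders, not an actual blocklist; real deployments consult large curated databases.

```python
# Hypothetical porn-site database; real deployments use large curated lists.
BLOCKED_DOMAINS = {"example-porn-site.test", "another-blocked.test"}

UNROUTABLE = "0.0.0.0"

def resolve(domain, real_lookup):
    """DNS-level blocking: return an unroutable address for listed domains,
    otherwise defer to the real resolver."""
    if domain in BLOCKED_DOMAINS:
        return UNROUTABLE
    return real_lookup(domain)

# A listed domain resolves to the unroutable address:
addr = resolve("example-porn-site.test", real_lookup=lambda d: "93.184.216.34")
# addr == "0.0.0.0"
```

Unlisted domains pass through to the real resolver unchanged, which is why this approach blocks whole sites rather than individual images.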
* [https://github.com/thelesson/Miniblog-Laravel-7-Google-Vision-detecta-faces-e-restringe-pornografia '''Laravel 7 Google Vision restringe pornografia detector de faces''' porn restriction app in Portuguese at github.com by ''thelesson''], which utilizes the [https://cloud.google.com/vision Google Vision API] to help site maintainers stop users from uploading porn, written for the [https://github.com/madskristensen/MiniBlog MiniBlog] [[w:Laravel]] blog app.
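
The moderation pattern such apps implement can be sketched independently of any particular vendor. Below, `classify_image` is a stand-in for a real classifier (such as the Google Vision API's SafeSearch signal), and the threshold and return shape are assumptions for illustration.

```python
def gate_upload(image_bytes, classify_image, threshold=0.8):
    """Reject an upload when the classifier's estimated probability
    that the image is pornographic meets or exceeds the threshold."""
    score = classify_image(image_bytes)  # expected in 0.0 .. 1.0
    if score >= threshold:
        return {"accepted": False, "reason": "flagged as pornographic"}
    return {"accepted": True, "reason": None}

# With a stub classifier standing in for the remote API:
verdict = gate_upload(b"...", classify_image=lambda img: 0.95)
# verdict["accepted"] is False
```

Keeping the classifier behind a plain callable makes the gate testable offline and lets a site swap vendors without touching the upload path.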


== Links regarding pornography censorship ==
* [[w:Pornography laws by region]]
* [[w:Internet pornography]]


== Countermeasures elsewhere ==
Partial transclusions from [[Organizations and events against synthetic human-like fakes]] below
 
{{#lst:Organizations and events against synthetic human-like fakes|core organizations}}
 
=== A service identical to APW_AI used to exist - FacePinPoint.com ===
Transcluded from [[FacePinPoint.com]]
 
{{#lst:FacePinPoint.com|FacePinPoint.com}}


{{#lst:Organizations and events against synthetic human-like fakes|other organizations}}


= Sources for technologies =


|
'''[[Biblical connection - Revelation 13 and Daniel 7]]''', wherein, in '''[[Biblical connection - Revelation 13 and Daniel 7#Daniel 7|Daniel 7]]''' and '''[[Biblical connection - Revelation 13 and Daniel 7#Revelation 13|Revelation 13]]''', we are warned of this age of industrial filth.


In '''Revelation 19''':'''20''' it says that the '''beast is taken prisoner'''; can we achieve this without '''APW_AI'''?


[[File:Saint John on Patmos.jpg|thumb|center|link=[[Biblical connection - Revelation 13 and Daniel 7]]|320px|'Saint John on Patmos' pictures [[w:John of Patmos]] on [[w:Patmos]] writing down the visions to make the [[w:Book of Revelation]]
<br/><br/>
'Saint John on Patmos' from folio 17 of the [[w:Très Riches Heures du Duc de Berry]] (1412-1416) by the [[w:Limbourg brothers]]. Currently located at the [[w:Musée Condé]] 40km north of Paris, France.]]

Revision as of 11:10, 27 April 2022

Adequate Porn Watcher AI (APW_AI) is an w:AI and w:computer vision concept to search for any and all porn that should not be by watching and modeling all porn ever found on the w:Internet, thus effectively protecting humans by exposing covert naked digital look-alike attacks and also other contraband.

Obs. #A service identical to APW_AI used to exist - FacePinPoint.com

The method and the effect

The method by which APW_AI would provide safety and security to its users is that they can briefly upload a model they've gotten of themselves, and the APW_AI will either say nothing matching found or be of the opinion that something matching found.

If people are able to check whether there is synthetic porn that looks like themselves, the synthetic hate-illustration industrialists' product loses destructive potential and the attacks that do happen are less destructive, as they are exposed by the APW_AI, thus decimating the monetary value of these disinformation weapons to the criminals.

If you feel comfortable leaving your model with the good people at the benefactor for safekeeping, you will be alerted and helped if you are ever targeted with a synthetic porn attack.

Rules

Looking up whether matches are found for anyone else's model is forbidden, and this should probably be enforced with a w:biometric w:facial recognition system app that checks that the model you want checked is yours and that you are awake.
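
This rule amounts to an authorisation gate in front of every lookup. A minimal sketch, assuming the biometric app reports two booleans (an identity match between the requester and the submitted model, and a passed liveness/awakeness check); both checks are stand-ins for real biometric subsystems, not a specification of them.

```python
def lookup_allowed(model_owner_verified, liveness_verified):
    """Only permit a lookup when the requester owns the model AND is
    verifiably awake; both flags come from a separate biometric step."""
    return model_owner_verified and liveness_verified

assert lookup_allowed(True, True) is True
assert lookup_allowed(True, False) is False   # asleep, or a spoofed camera feed
assert lookup_allowed(False, True) is False   # someone else's model
```

The point of requiring both flags is that neither an identity match alone nor a liveness check alone stops a coerced or covert lookup.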

Definition of adequacy

An adequate implementation should be nearly free of false positives, very good at finding true positives and able to process more porn than is ever uploaded.
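
Why "nearly free of false positives" matters can be made concrete with base-rate arithmetic: when true matches are rare among the material checked, even a small false-positive rate swamps the true alarms. The rates below are illustrative assumptions, not measurements.

```python
def precision(tpr, fpr, prevalence):
    """Share of alarms that are true matches, given a true-positive rate,
    a false-positive rate and the prevalence of genuine matches."""
    true_alarms = tpr * prevalence
    false_alarms = fpr * (1.0 - prevalence)
    return true_alarms / (true_alarms + false_alarms)

# Even a 99%-sensitive detector with a 1% false-positive rate is mostly
# wrong when only 1 in 10,000 checked items is a genuine match:
p = precision(tpr=0.99, fpr=0.01, prevalence=0.0001)
# p is roughly 0.0098, i.e. under 1% of alarms would be real matches
```

This is the standard base-rate argument; it is why the adequacy definition above demands near-zero false positives before anything else.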

What about the people in the porn-industry?

People who openly do porn can help by opting in to the development, providing training material and material to test the AI on. People and companies who help in training the AI naturally get credited for their help.

There are of course many human questions in this, and those questions need to be identified by professionals of psychology and the social sciences.

History

The idea of APW_AI occurred to User:Juho Kunsola on Friday 2019-07-12. Subsequently (the next day) this discovery caused the scrapping of the plea to ban covert modeling of human appearance, as that would have rendered APW_AI legally impossible.


Resources

Tools

Legal

Traditional porn-blocking

Traditional porn-blocking done by w:some countries seems to use w:DNS to deny access: if the domain name matches an entry in a database of porn sites, the resolver returns an unroutable address, usually w:0.0.0.0.

Topics on github.com

Curated lists and databases

Porn blocking services

Software for nudity detection

Links regarding pornography censorship

Against pornography

Technical means of censorship and how to circumvent

Countermeasures elsewhere

Partial transclusions from Organizations and events against synthetic human-like fakes below


Companies against synthetic human-like fakes

I have been searching for an antidote to the synthetic human-like fakes since 2003, and on Friday 2019-07-12 it occurred to me that a service like the one I have described in Adequate Porn Watcher AI (concept) could very well be the answer to "How to defend against the covert disinformation attacks with fake human-like images?". There is good progress on the legislative side, but laws that cannot be humanely policed would end up dead letters with negligible de facto effect, hence the need for some computer vision AI help.

Candidates for the ultimate defensive weapon against the digital look-alike attacks


Organizations against synthetic human-like fakes

w:Fraunhofer Society's Fraunhofer Institute for Applied and Integrated Security (AISEC) has been developing automated tools for catching synthetic human-like fakes.

AI incident repositories

Help for victims of image or audio based abuse

Awareness and countermeasures

Organizations for media forensics

The Defense Advanced Research Projects Agency, better known as w:DARPA, has been active in the field of countering synthetic fake video for longer than the public has been aware of the problems existing.

A service identical to APW_AI used to exist - FacePinPoint.com

Transcluded from FacePinPoint.com


FacePinPoint.com was a for-a-fee service from 2017 to 2021 for pointing out where in pornography sites a particular face appears, or in the case of synthetic pornography, a digital look-alike makes make-believe of a face or body appearing.[contacted 3] The inventor and founder of FacePinPoint.com, Mr. Lionel Hagege registered the domain name in 2015[8], when he set out to research the feasibility of his action plan idea against non-consensual pornography.[9] The description of how FacePinPoint.com worked is the same as Adequate Porn Watcher AI (concept)'s description.


Organizations possibly against synthetic human-like fakes

Originally harvested from the study The ethics of artificial intelligence: Issues and initiatives (.pdf) by the w:European Parliamentary Research Service, published on the w:Europa (web portal) in March 2020.[1st seen in 7]

Services that should get back to the task at hand - FacePinPoint.com

Transcluded from FacePinPoint.com

FacePinPoint.com was a for-a-fee service from 2017 to 2021 for pointing out where in pornography sites a particular face appears, or in the case of synthetic pornography, a digital look-alike makes make-believe of a face or body appearing.[contacted 9] The inventor and founder of FacePinPoint.com, Mr. Lionel Hagege registered the domain name in 2015[10], when he set out to research the feasibility of his action plan idea against non-consensual pornography.[11] The description of how FacePinPoint.com worked is the same as Adequate Porn Watcher AI (concept)'s description.

Other essential developments

Studies against synthetic human-like fakes

Detecting deep-fake audio through vocal tract reconstruction

Detecting deep-fake audio through vocal tract reconstruction is an epic scientific work against fake human-like voices from the w:University of Florida, published to peers in August 2022.

The Office of Naval Research (ONR) at nre.navy.mil of the USA funded this breakthrough science.

The work Who Are You (I Really Wanna Know)? Detecting Audio DeepFakes Through Vocal Tract Reconstruction at usenix.org (presentation page, version included in the proceedings[12] and slides), by researchers of the Florida Institute for Cybersecurity Research (FICS) at fics.institute.ufl.edu in the w:University of Florida, received funding from the w:Office of Naval Research and was presented on 2022-08-11 at the 31st w:USENIX Security Symposium.

This work was done by PhD student Logan Blue, Kevin Warren, Hadi Abdullah, Cassidy Gibson, Luis Vargas, Jessica O’Dell, Kevin Butler and Professor Patrick Traynor.

The University of Florida Research Foundation Inc has filed for and received a US patent titled 'Detecting deep-fake audio through vocal tract reconstruction', registration number US20220036904A1 (link to patents.google.com), with 20 claims. The patent application was published on Thursday 2022-02-03, was approved on 2023-07-04 and has an adjusted expiration date of Sunday 2041-12-29.

Protecting world leaders against deep fakes using facial, gestural, and vocal mannerisms

Protecting President Zelenskyy against deep fakes

Other studies and reports against synthetic human-like fakes

Legal information compilations


More studies can be found in the SSFWIKI Timeline of synthetic human-like fakes

Search for more

Reporting against synthetic human-like fakes

Companies against synthetic human-like fakes

See resources for more.

Events against synthetic human-like fakes

Upcoming events

In reverse chronological order

Ongoing events

Past events


  • 2019 | At the annual Finnish w:Ministry of Defence's Scientific Advisory Board for Defence (MATINE) public research seminar, a research group presented their work 'Synteettisen median tunnistus' at defmin.fi (Recognizing synthetic media). They developed on earlier work on how to automatically detect synthetic human-like fakes and their work was funded with a grant from MATINE.
  • 2018 | NIST 'Media Forensics Challenge 2018' at nist.gov was the second annual evaluation to support research and help advance the state of the art for image and video forensics technologies – technologies that determine the region and type of manipulations in imagery (image/video data) and the phylogenic process that modified the imagery.
  • 2016 | Nimble Challenge 2016 - NIST released the Nimble Challenge’16 (NC2016) dataset as the MFC program kickoff dataset (where NC is the former name of MFC). [24]


Sources for technologies

[Image: Synthethic-Media-Landscape.jpg] A map of technologies courtesy of Samsung Next, linked from 'Why it’s time to change the conversation around synthetic media' at venturebeat.com[1st seen in 10]

See also

Image 1: Separating specular and diffuse reflected light

(a) Normal image in dot lighting

(b) Image of the diffuse reflection, which is caught by placing a vertical polarizer in front of the light source and a horizontal polarizer in front of the camera

(c) Image of the highlight specular reflection which is caught by placing both polarizers vertically

(d) Subtraction of b from c, which yields the specular component

Images are scaled to appear to have the same luminosity.

Original image by Debevec et al. – Copyright ACM 2000 – https://dl.acm.org/citation.cfm?doid=311779.344855 – Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page.

Biblical connection - Revelation 13 and Daniel 7, wherein, in Daniel 7 and Revelation 13, we are warned of this age of industrial filth.

In Revelation 19:20 it says that the beast is taken prisoner; can we achieve this without APW_AI?

'Saint John on Patmos' pictures w:John of Patmos on w:Patmos writing down the visions to make the w:Book of Revelation

'Saint John on Patmos' from folio 17 of the w:Très Riches Heures du Duc de Berry (1412-1416) by the w:Limbourg brothers. Currently located at the w:Musée Condé 40km north of Paris, France.

References

  1. "Microsoft tip led police to arrest man over child abuse images". w:The Guardian. 2014-08-07.
  2. https://www.crunchbase.com/organization/thatsmyface-com
  3. https://www.partnershiponai.org/aiincidentdatabase/
  4. whois aiaaic.org
  5. https://charliepownall.com/ai-algorithimic-incident-controversy-database/
  6. https://web.archive.org/web/20160630154819/https://www.darpa.mil/program/media-forensics
  7. https://web.archive.org/web/20191108090036/https://www.darpa.mil/program/semantic-forensics November
  8. whois facepinpoint.com
  9. https://www.facepinpoint.com/aboutus
  10. whois facepinpoint.com
  11. https://www.facepinpoint.com/aboutus
  12. Blue, Logan; Warren, Kevin; Abdullah, Hadi; Gibson, Cassidy; Vargas, Luis; O’Dell, Jessica; Butler, Kevin; Traynor, Patrick (August 2022). "Detecting deep-fake audio through vocal tract reconstruction". Proceedings of the 31st USENIX Security Symposium: 2691–2708. ISBN 978-1-939133-31-1. Retrieved 2022-10-06.
  13. Boháček, Matyáš; Farid, Hany (2022-11-23). "Protecting world leaders against deep fakes using facial, gestural, and vocal mannerisms". w:Proceedings of the National Academy of Sciences of the United States of America. 119 (48). doi:10.1073/pnas.221603511. Retrieved 2023-01-05.
  14. Boháček, Matyáš; Farid, Hany (2022-06-14). "Protecting President Zelenskyy against Deep Fakes". arXiv:2206.12043 [cs.CV].
  15. Lawson, Amanda (2023-04-24). "A Look at Global Deepfake Regulation Approaches". responsible.ai. Responsible Artificial Intelligence Institute. Retrieved 2024-02-14.
  16. Williams, Kaylee (2023-05-15). "Exploring Legal Approaches to Regulating Nonconsensual Deepfake Pornography". techpolicy.press. Retrieved 2024-02-14.
  17. Owen, Aled (2024-02-02). "Deepfake laws: is AI outpacing legislation?". onfido.com. Onfido. Retrieved 2024-02-14.
  18. Pirius, Rebecca (2024-02-07). "Is Deepfake Pornography Illegal?". Criminaldefenselawyer.com. w:Nolo (publisher). Retrieved 2024-02-22.
  19. Rastogi, Janvhi (2023-10-16). "Deepfake Pornography: A Legal and Ethical Menace". tclf.in. The Contemporary Law Forum. Retrieved 2024-02-14.
  20. https://www.nist.gov/itl/iad/mig/open-media-forensics-challenge
  21. https://law.yale.edu/isp/events/technologies-deception
  22. https://venturebeat.com/2020/06/12/facebook-detection-challenge-winners-spot-deepfakes-with-82-accuracy/
  23. https://www.nist.gov/itl/iad/mig/open-media-forensics-challenge

1st seen in

  1. 1.0 1.1 1.2 1.3 Seen first in https://github.com/topics/porn-block, meta for actual use. The topic was stumbled upon.
  2. 2.0 2.1 Seen first in https://github.com/topics/pornblocker Saw this originally when looking at https://github.com/topics/porn-block Topic
  3. 3.0 3.1 3.2 Seen first in https://github.com/topics/porn-filter Saw this originally when looking at https://github.com/topics/porn-block Topic
  4. 4.0 4.1 https://spectrum.ieee.org/deepfake-porn
  5. Deutsche Welle English https://www.dw.com/en/live-tv/channel-english
  6. https://www.iwf.org.uk/our-technology/report-remove/
  7. 7.00 7.01 7.02 7.03 7.04 7.05 7.06 7.07 7.08 7.09 7.10 7.11 7.12 7.13 7.14 7.15 7.16 7.17 7.18 7.19 "The ethics of artificial intelligence: Issues and initiatives" (PDF). w:Europa (web portal). w:European Parliamentary Research Service. March 2020. Retrieved 2021-02-17. This study deals with the ethical implications and moral questions that arise from the development and implementation of artificial intelligence (AI) technologies.
  8. Saw a piece on the 51% show on France24 English regarding pornographic "deep-fakes" in July 2024 and searched up the report.
  9. https://www.reuters.com/article/us-cyber-deepfake-activist/deepfake-used-to-attack-activist-couple-shows-new-disinformation-frontier-idUSKCN24G15E
  10. venturebeat.com found via some Facebook AI & ML group or page yesterday. Sorry, don't know precisely right now.


