Organizations and events against synthetic human-like fakes
Here you can find organizations, workshops, events and services against synthetic human-like fakes, as well as organizations and curricula for media forensics.
Transcluded in this article are
- FacePinPoint.com, a crucial past service by Lionel Hagege
- Adequate Porn Watcher AI (concept), an AI concept practically identical to FacePinPoint.com
- A law proposal against synthetic non-consensual pornography
- A law proposal against digital sound-alikes.
For laws and bills in planning against synthetic filth see Laws against synthesis and other related crimes.
In resources there are likely a few services that would fit here.
Services that should get back to the task at hand - FacePinPoint.com[edit | edit source]
Transcluded from FacePinPoint.com
FacePinPoint.com was a service, active from 2017 to 2021, for pointing out where on pornography sites a particular face appears, or, in the case of synthetic pornography, where a digital look-alike makes make-believe of a face or body appearing.[contacted 1] The inventor and founder of FacePinPoint.com, Mr. Lionel Hagege, registered the domain name in 2015, when he set out to research the feasibility of his action plan against non-consensual pornography. The description of how FacePinPoint.com worked matches the description of Adequate Porn Watcher AI (concept).
Organizations against synthetic human-like fakes[edit | edit source]
AI incident repositories
- The 'AI Incident Database' at incidentdatabase.ai was introduced on 2020-11-18 by the w:Partnership on AI.
- Artificial Intelligence Algorithmic Automation Incidents Controversies at aiaaic.org[contact 1][contacted 2] was founded by Charlie Pownall. The AIAAIC repository at aiaaic.org contains extensive reporting on problematic uses of AI. The domain name aiaaic.org was registered on Tuesday 2021-02-23. The AIAAIC repository is a free, open resource which anyone can use, copy, redistribute and adapt under the terms of its CC BY 4.0 license.
Help for victims of image or audio based abuse
- Cyber Civil Rights Initiative at cybercivilrights.org, a US-based NGO.[contact 2] History / Mission / Vision of cybercivilrights.org
- Get help now - CCRI Safety Center at cybercivilrights.org - CCRI Image Abuse Helpline - If you are a victim of Image-Based Sexual Abuse (IBSA), please call the CCRI Image Abuse Helpline at 1-844-878-2274, which is available free of charge, 24/7.
- Existing Nonconsensual Pornography, Sextortion, and Deep Fake Laws at cybercivilrights.org
- Report Remove: Remove a nude image shared online at childline.org.uk[1st seen in 1]. Report Remove is a service for under-19-year-olds by w:Childline, a UK service by the w:National Society for the Prevention of Cruelty to Children (NSPCC), powered by technology from the w:Internet Watch Foundation. - Childline is here to help anyone under 19 in the UK with any issue they’re going through. Info on Report Remove at iwf.org.uk
- Battling Against Demeaning and Abusive Selfie Sharing at badassarmy.org has compiled a list of revenge porn laws by US state
Awareness and countermeasures
- The Internet Watch Foundation at iwf.org.uk[contact 3] - The w:Internet Watch Foundation is a UK charity that seeks to minimise the availability of online sexual abuse content, specifically child sexual abuse images and videos hosted anywhere in the world and non-photographic child sexual abuse images hosted in the UK. "About us" at iwf.org.uk, "Our technology" at iwf.org.uk
- Partnership on AI at partnershiponai.org (contact form)[contact 4] is based in the USA and funded by technology companies. They provide resources and have a large number of high-caliber partners. See w:Partnership on AI and Partnership on AI on LinkedIn.com for more info.
- The WITNESS Media Lab at lab.witness.org by w:Witness (organization) (contact form)[contact 5], a human rights non-profit organization based out of Brooklyn, New York, has been actively working against synthetic filth since 2018. They work both in awareness raising and in media forensics.
- Open-source intelligence digital forensics - How do we work together to detect AI-manipulated media? at lab.witness.org. "In February 2019 WITNESS in association with w:George Washington University brought together a group of leading researchers in media forensics and w:detection of w:deepfakes and other w:media manipulation with leading experts in social newsgathering, w:User-generated content and w:open-source intelligence (w:OSINT) verification and w:fact-checking." (website)
- Prepare, Don’t Panic: Synthetic Media and Deepfakes at lab.witness.org is a summary page for WITNESS Media Lab's ongoing work against synthetic human-like fakes. Their work was launched in 2018 with the first multi-disciplinary convening around deepfakes preparedness, which led to the writing of the report “Mal-uses of AI-generated Synthetic Media and Deepfakes: Pragmatic Solutions Discovery Convening” (dated 2018-06-11). Deepfakes and Synthetic Media: What should we fear? What can we do? at blog.witness.org
- w:Financial Coalition Against Child Pornography could be interested in taking down payment possibilities also for sites distributing non-consensual synthetic pornography.
- Reality Defender at realitydefender.ai - Enterprise-Grade Deepfake Detection Platform for many stakeholders
- Screen Actors Guild - American Federation of Television and Radio Artists - w:SAG-AFTRA (sagaftra.org contact form)[contact 6]. SAG-AFTRA ACTION ALERT: "Support California Bill to End Deepfake Porn" at sagaftra.org endorses California Senate Bill SB 564, introduced to the w:California State Senate by w:California w:Senator Connie Leyva in February 2019.
Organizations for media forensics[edit | edit source]
- w:DARPA (darpa.mil) contact form[contact 7] DARPA program: 'Media Forensics' (MediFor) at darpa.mil aims to develop technologies for the automated assessment of the integrity of an image or video and to integrate these into an end-to-end media forensics platform. Archive.org first crawled their homepage in June 2016.
- DARPA program: 'Semantic Forensics' (SemaFor) at darpa.mil aims to counter synthetic disinformation by developing systems for detecting semantic inconsistencies in forged media. They state that they hope to create technologies that "will help identify, deter, and understand adversary disinformation campaigns". More information at w:Duke University's Research Funding database: Semantic Forensics (SemaFor) at researchfunding.duke.edu and some at Semantic Forensics grant opportunity (closed Nov 2019) at grants.gov. Archive.org first crawled their website in November 2019.
- w:University of Colorado Denver's College of Arts & Media[contact 8] is home to the National Center for Media Forensics at artsandmedia.ucdenver.edu, which offers a Master's degree program, training courses, and basic and applied scientific research. Faculty staff at the NCMF
- Media Forensics Hub at clemson.edu[contact 9] at the Watt Family Innovation Center of the w:Clemson University aims to promote multi-disciplinary research and to collect and facilitate discussion and ideation of challenges and solutions. They provide resources, research and media forensics education, and run a Working Group on disinformation.[contact 10]
Organizations possibly against synthetic human-like fakes[edit | edit source]
Originally harvested from the study The ethics of artificial intelligence: Issues and initiatives (.pdf) by the w:European Parliamentary Research Service, published on the w:Europa (web portal) in March 2020.[1st seen in 2]
- INSTITUTE FOR ETHICS IN ARTIFICIAL INTELLIGENCE at ieai.mcts.tum.de[contact 11] received initial funding from w:Facebook in 2019.[1st seen in 2] IEAI on LinkedIn.com
- The Institute for Ethical AI & Machine Learning at ethical.institute (contact form asks a lot of questions)[contact 12][contacted 3][1st seen in 2] The Institute for Ethical AI & Machine Learning on LinkedIn.com
- Future of Life Institute at futureoflife.org (contact form and mailing list)[contact 14][contacted 4] received funding from private donors.[1st seen in 2] See w:Future of Life Institute for more info.
- The Japanese Society for Artificial Intelligence (JSAI) at ai-gakkai.or.jp[contact 15] Publication: Ethical guidelines.[1st seen in 2]
- AI4All at ai-4-all.org (contact form and mailing list subscription)[contact 16][contacted 5], funded by w:Google.[1st seen in 2] AI4All on LinkedIn.com
- The Future Society at thefuturesociety.org (contact form and mailing list subscription)[contact 17][1st seen in 2]. Their activities include policy research, educational & leadership development programs, advisory services, seminars & summits and other special projects to advance the responsible adoption of Artificial Intelligence (AI) and other emerging technologies. The Future Society on LinkedIn.com
- The AI Now Institute at ainowinstitute.org (contact form and mailing list subscription)[contact 18][contacted 6] at w:New York University.[1st seen in 2] Their work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License. The AI Now Institute on LinkedIn.com
- The Foundation for Responsible Robotics at responsiblerobotics.org (contact form)[contact 19] is based in w:Netherlands.[1st seen in 2] The Foundation for Responsible Robotics on LinkedIn.com
- AI4People at ai4people.eu (contact form)[contact 20] is a multi-stakeholder forum based in w:Belgium.[1st seen in 2] AI4People on LinkedIn.com
- The Ethics and Governance of Artificial Intelligence Initiative at aiethicsinitiative.org is a joint project of the MIT Media Lab and the Harvard Berkman Klein Center for Internet & Society and is based in the USA.[1st seen in 2]
- Saidot at saidot.ai is a Finnish company offering a platform for AI transparency, explainability and communication.[1st seen in 2] Saidot on LinkedIn.com
- Centre for Data Ethics and Innovation at gov.uk, part of Department for Digital, Culture, Media & Sport is financed by the UK govt. Centre for Data Ethics and Innovation Blog at cdei.blog.gov.uk[1st seen in 2] Centre for Data Ethics and Innovation on LinkedIn.com
- ACM Special Interest Group on Artificial Intelligence at sigai.acm.org is a w:Special Interest Group on AI by the ACM. 'AI Matters: A Newsletter of ACM SIGAI' blog at sigai.acm.org and the newsletter from which the blog gets its contents[1st seen in 2]
- IEEE Ethics in Action - in Autonomous and Intelligent Systems at ethicsinaction.ieee.org (mailing list subscription on website)[contact 21]
- The Center for Countering Digital Hate at counterhate.com (subscribe to mailing list on website)[contact 22][contacted 7] is an international not-for-profit NGO that seeks to disrupt the architecture of online hate and misinformation, with offices in London and Washington DC.
- Partnership for Countering Influence Operations (PCIO) at carnegieendowment.org (contact form)[contact 23] is a partnership by the w:Carnegie Endowment for International Peace
- UN Global Pulse at unglobalpulse.org is the w:United Nations Secretary-General’s initiative on big data and artificial intelligence for development, humanitarian action, and peace.
- humane-ai.eu by Knowledge 4 All Foundation Ltd. at k4all.org[contact 24]
Other essential developments[edit | edit source]
- The Montréal Declaration for a Responsible Development of Artificial Intelligence at montrealdeclaration-responsibleai.com[contact 25] and the same site in French, La Déclaration de Montréal IA responsable at declarationmontreal-iaresponsable.com[1st seen in 2]
- UNI Global Union at uniglobalunion.org is based in w:Nyon, w:Switzerland and deals mainly with labor issues to do with AI and robotics.[1st seen in 2] UNI Global Union on LinkedIn.com
- European Robotics Research Network at cordis.europa.eu funded by the w:European Commission.[1st seen in 2]
- European Robotics Platform at eu-robotics.net is funded by the w:European Commission. See w:European Robotics Platform and w:List of European Union robotics projects#EUROP for more info.[1st seen in 2]
Events against synthetic human-like fakes[edit | edit source]
Upcoming events[edit | edit source]
In reverse chronological order
- UPCOMING 2023 | World Anti-Bullying Forum Open Call for Hosting World Anti-Bullying Forum 2023 at worldantibullyingforum.com
- UPCOMING 2022 | w:European Conference on Computer Vision in Tel Aviv, Israel
- UPCOMING 2022 | HotNets 2022: Twenty-First ACM Workshop on Hot Topics in Networks at conferences.sigcomm.org - November 14-15, 2022 — Austin, Texas, USA. Presented at HotNets 2022 will be a very interesting paper on 'Global Content Revocation on the Internet: A Case Study in Technology Ecosystem Transformation' at farid.berkeley.edu
Ongoing events[edit | edit source]
- 2020 - ONGOING | w:National Institute of Standards and Technology (NIST) (nist.gov) (contacting NIST) | Open Media Forensics Challenge presented in Open Media Forensics Challenge at nist.gov and Open Media Forensics Challenge (OpenMFC) at mfc.nist.gov[contact 26] - Open Media Forensics Challenge Evaluation (OpenMFC) is an open evaluation series organized by the NIST to assess and measure the capability of media forensic algorithms and systems.
Past events[edit | edit source]
- 2022 | INTERSPEECH 2022 at interspeech2022.org organized by w:International Speech Communication Association was held on 18-22 September 2022 in Korea. The work 'Attacker Attribution of Audio Deepfakes' at arxiv.org was presented there
- 2022 | Technologies of Deception at law.yale.edu, a conference hosted by the w:Information Society Project (ISP) was held at Yale Law School in New Haven, Connecticut, on March 25-26, 2022
- 2021 | w:Conference on Neural Information Processing Systems NeurIPS 2021 at neurips.cc, was held virtually in December 2021. I haven't seen any good tech coming from there in 2021. On the problematic side w:StyleGAN3 was presented there.
- 2021 | w:Conference on Computer Vision and Pattern Recognition (CVPR) 2021 CVPR 2021 at cvpr2021.thecvf.com
- CVPR 2021 research areas visualization by Joshua Preston at public.tableau.com
- 2021 'Workshop on Media Forensics' in CVPR 2021 at sites.google.com, a June 2021 workshop at the Conference on Computer Vision and Pattern Recognition.
- 2020 | CVPR 2020 | 2020 Conference on Computer Vision and Pattern Recognition: 'Workshop on Media Forensics' at sites.google.com, a June 2020 workshop at the Conference on Computer Vision and Pattern Recognition.
- 2020 | The winners of the Deepfake Detection Challenge reach 82% accuracy in detecting synthetic human-like fakes
- 2020 | You Don't Say: An FTC Workshop on Voice Cloning Technologies at ftc.gov was held on Tuesday 2020-01-28 - reporting at venturebeat.com
- 2019 | At the annual public research seminar of the Finnish w:Ministry of Defence's Scientific Advisory Board for Defence (MATINE), a research group presented their work 'Synteettisen median tunnistus' at defmin.fi (Recognizing synthetic media). They built on earlier work on how to automatically detect synthetic human-like fakes; their work was funded with a grant from MATINE.
- 2019 | w:NeurIPS 2019 | w:Facebook, Inc. "Facebook AI Launches Its Deepfake Detection Challenge" at spectrum.ieee.org w:IEEE Spectrum. More reporting at "Facebook, Microsoft, and others launch Deepfake Detection Challenge" at venturebeat.com
- 2017-2020 | NIST: 'Media Forensics Challenge' (MFC) at nist.gov, an iterative research challenge by the w:National Institute of Standards and Technology. Succeeded by the Open Media Forensics Challenge.
- 2018 | w:European Conference on Computer Vision (ECCV) ECCV 2018: 'Workshop on Objectionable Content and Misinformation' at sites.google.com, a workshop at the 2018 w:European Conference on Computer Vision in w:Munich, focused on objectionable content detection, e.g. w:nudity, w:pornography, w:violence, w:hate, w:children exploitation and w:terrorism among others, and on addressing the misinformation problems that arise when people are fed w:disinformation and pass it on as misinformation. Announced topics included w:image/video forensics, w:detection/w:analysis/w:understanding of w:fake images/videos, w:misinformation detection/understanding (mono-modal and w:multi-modal), adversarial technologies and detection/understanding of objectionable content.
- 2018 | NIST: 'Media Forensics Challenge 2018' at nist.gov was the second annual evaluation to support research and help advance the state of the art for image and video forensics technologies – technologies that determine the region and type of manipulations in imagery (image/video data) and the phylogenic process that modified the imagery.
- 2017 | NIST: 'Nimble Challenge 2017' at nist.gov
- 2016 | Nimble Challenge 2016 - NIST released the Nimble Challenge’16 (NC2016) dataset as the MFC program kickoff dataset (where NC is the former name of MFC).
Studies against synthetic human-like fakes[edit | edit source]
- 'Disinformation That Kills: The Expanding Battlefield Of Digital Warfare' at cbinsights.com, a 2020-10-21 research brief on disinformation warfare by w:CB Insights, a private company that provides w:market intelligence and w:business analytics services
- 'Media Forensics and DeepFakes: an overview' at arXiv.org (as .pdf at arXiv.org), an overview on the subject of digital look-alikes and media forensics published in August 2020 in Volume 14 Issue 5 of IEEE Journal of Selected Topics in Signal Processing. 'Media Forensics and DeepFakes: An Overview' at ieeexplore.ieee.org (paywalled, free abstract)
- 'DEEPFAKES: False pornography is here and the law cannot protect you' at scholarship.law.duke.edu by Douglas Harris, published in Duke Law & Technology Review - Volume 17 on 2019-01-05 by w:Duke University w:Duke University School of Law
Search for more
Reporting against synthetic human-like fakes[edit | edit source]
- 'Researchers use facial quirks to unmask ‘deepfakes’' at news.berkeley.edu, 2019-06-18 reporting by Kara Manke, published in the Politics & society, Research, Technology & engineering section of Berkeley News of w:UC Berkeley.
Companies against synthetic human-like fakes[edit | edit source]
See resources for more.
- Cyabra.com is an AI-based system that helps organizations guard against disinformation attacks[1st seen in 3]. Reuters.com reporting from July 2020.
SSFWIKI proposed law against unauthorized synthetic pornography (transcluded)[edit | edit source]
Transcluded from Juho's proposal for banning unauthorized synthetic pornography
§1 Models of human appearance[edit | edit source]
A model of human appearance means
- A realistic 3D model
- A 7D bidirectional reflectance distribution function model
- A direct-to-2D capable w:machine learning model
- Or a model made with any technology whatsoever that looks deceptively like the target person.
§2 Producing synthetic pornography[edit | edit source]
Making projections, still or videographic, where targets are portrayed nude or in a sexual situation, from models of human appearance defined in §1, without the express consent of the targets, is illegal.
§3 Distributing synthetic pornography[edit | edit source]
Distributing, making available, public display, purchase, sale, yielding, import and export of non-authorized synthetic pornography defined in §2 are punishable.[footnote 1]
§4 Aggravated producing and distributing synthetic pornography[edit | edit source]
If the media described in §2 or §3 is made or distributed with the intent to frame for a crime or for blackmail, the crime should be judged as aggravated.
Afterwords[edit | edit source]
The original idea I had was to ban both the raw materials, i.e. the models used to make the visual synthetic filth, and the end product, weaponized synthetic pornography, but in July 2019 it appeared to me that Adequate Porn Watcher AI (concept) could really help in this age of industrial disinformation if it were built, trained and operational. Banning the modeling of human appearance was in conflict with the revised plan.
It is safe to assume that collecting permissions to model each pornographic recording is not plausible, so an interesting question is whether we can ban covert modeling from non-pornographic pictures while still retaining the ability to model all porn found on the Internet.
If banning the modeling of people's appearance from non-pornographic images/videos without explicit permission is to be pursued, it must be formulated so that it does not make Adequate Porn Watcher AI (concept) illegal / impossible. This would seem to lead to a weird situation where modeling a human from non-pornographic media would be illegal, but modeling from pornography legal.
SSFWIKI proposed countermeasure to weaponized synthetic pornography: Adequate Porn Watcher AI (concept) (transcluded)[edit | edit source]
Transcluded main contents from Adequate Porn Watcher AI (concept)
Adequate Porn Watcher AI (APW_AI) is an w:AI and w:computer vision concept for searching for any and all porn that should not be, by watching and modeling all porn ever found on the w:Internet, thus effectively protecting humans by exposing covert naked digital look-alike attacks, as well as other contraband.
The method and the effect
The method by which the APW_AI would provide safety and security to its users is that they can briefly upload a model they've gotten of themselves, and the APW_AI will then report either that nothing matching was found or that something matching was found.
If people are able to check whether there is synthetic porn that looks like themselves, the synthetic hate-illustration industrialists' products lose destructive potential, and the attacks that do happen are less destructive, as they are exposed by the APW_AI; this decimates the monetary value of these disinformation weapons to the criminals.
If you feel comfortable leaving your model with the good people at the benefactor for safekeeping, you get alerted and helped if you ever get attacked with a synthetic porn attack.
Looking up whether matches are found for anyone else's model is forbidden; this should probably be enforced with a w:biometric w:facial recognition app that checks that the model you want checked is yours and that you are awake.
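The match/no-match lookup described above can be sketched as an embedding-similarity search. This is a minimal illustration under stated assumptions, not the concept's specification: the idea of comparing face embeddings, the cosine-similarity metric and the 0.6 threshold are all assumptions for the sake of the example.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (plain lists of floats)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def check_model(user_embedding, indexed_embeddings, threshold=0.6):
    """Compare an uploaded face model against embeddings extracted from
    crawled porn; report only match / no match, never anyone else's data."""
    for candidate in indexed_embeddings:
        if cosine_similarity(user_embedding, candidate) >= threshold:
            return "something matching found"
    return "nothing matching found"
```

In a real system the linear scan would be replaced by an approximate nearest-neighbour index, since the corpus to search would be enormous.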
Definition of adequacy
An adequate implementation should be nearly free of false positives, very good at finding true positives and able to process more porn than is ever uploaded.
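The adequacy criteria just stated can be made concrete with the standard precision and recall definitions; the function names and the example thresholds below are illustrative assumptions, not figures from the concept.

```python
def precision(tp: int, fp: int) -> float:
    """Fraction of reported matches that are real; near 1.0 means
    nearly free of false positives."""
    return tp / (tp + fp) if (tp + fp) else 1.0

def recall(tp: int, fn: int) -> float:
    """Fraction of real matches that are found; high recall means
    very good at finding true positives."""
    return tp / (tp + fn) if (tp + fn) else 1.0

def is_adequate(tp, fp, fn, processed_per_day, uploaded_per_day,
                min_precision=0.999, min_recall=0.95):
    # Adequacy: nearly no false positives, most true positives found,
    # and throughput exceeding the rate at which porn is uploaded.
    return (precision(tp, fp) >= min_precision
            and recall(tp, fn) >= min_recall
            and processed_per_day > uploaded_per_day)
```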
What about the people in the porn-industry?
People who openly do porn can help by opting in to help the development by providing training material and material to test the AI on. People and companies who help in training the AI naturally get credited for their help.
There are of course lots of people-questions to this and those questions need to be identified by professionals of psychology and social sciences.
The idea of APW_AI occurred to User:Juho Kunsola on Friday 2019-07-12. Subsequently (the next day) this discovery caused the scrapping of the plea to ban covert modeling of human appearance, as that would have rendered APW_AI legally impossible.
SSFWIKI proposed countermeasure to digital sound-alikes: Outlawing digital sound-alikes (transcluded)[edit | edit source]
Transcluded from Juho's proposal on banning digital sound-alikes
Motivation: The current situation, where criminals can freely trade and grow their libraries of stolen voices, is unwise.
§1 Covert voice models[edit | edit source]
Acquiring a model of a human's voice that deceptively resembles some dead or living person's voice, as well as its possession, purchase, sale, yielding, import and export without the express consent of the target, are punishable.
§2 Producing from covert voice models[edit | edit source]
Producing and making available media from covert voice models defined in §1 is punishable.
§3 Aggravated offences[edit | edit source]
If the produced media is intended to
- frame a human target or targets for a crime,
- attempt extortion, or
- defame the target,
the crime should be judged as aggravated.
Footnotes[edit | edit source]
- People who are found in possession of this synthetic pornography should probably not be penalized, but rather advised to get some help.
Contact information[edit | edit source]
Please contact these organizations and tell them to work harder against the disinformation weapons.
Contact Artificial Intelligence Algorithmic Automation Incidents Controversies (AIAAIC) at aiaaic.org
- The Bradfield Centre
- 184 Cambridge Science Park
- Cambridge, CB4 0GA
- United Kingdom
Contact Cyber Civil Rights Initiative at cybercivilrights.org
- Contact form https://cybercivilrights.org/contact-us/
- CCRI is located in Coral Gables, Florida, USA.
The Internet Watch Foundation at iwf.org.uk
- Internet Watch Foundation
- Discovery House
- Vision Park
- Chivers Way
- CB24 9ZR
- Office Phone +44 (0)1223 20 30 30. Phone lines are open between 8:30 and 16:30, Monday to Friday (UK time)
- Media enquiries Email media [at] iwf.org.uk
Partnership on AI at partnershiponai.org
- Partnership on AI
- 115 Sansome St, Ste 1200,
- San Francisco, CA 94104
- WITNESS (Media Lab)
- Contact form https://www.witness.org/get-involved/ incl. mailing list subscription possibility
- 80 Hanson Place, 5th Floor
- Brooklyn, NY 11217
- Phone: 1.718.783.2000
- Screen Actors Guild - American Federation of Television and Radio Artists at https://www.sagaftra.org/
- Screen Actors Guild - American Federation of Television and Radio Artists
- 5757 Wilshire Boulevard, 7th Floor
- Los Angeles, California 90036
- Phone: 1-855-724-2387
- Email: firstname.lastname@example.org
- The Defense Advanced Research Projects Agency
- Contact form https://contact.darpa.mil/
- Email: email@example.com
- Defense Advanced Research Projects Agency
- 675 North Randolph Street
- Arlington, VA 22203-2114
- Phone 1-703-526-6630
- National Center for Media Forensics at https://artsandmedia.ucdenver.edu
- Email: CAM@ucdenver.edu
- College of Arts & Media
- National Center for Media Forensics
- CU Denver
- Arts Building
- Suite 177
- 1150 10th Street
- Denver, CO 80204
- Phone 1-303-315-7400
- Media Forensics Hub at Clemson University clemson.edu
- Media Forensics Hub
- Clemson University
- Clemson, South Carolina 29634
- Phone 1-864-656-3311
- INSTITUTE FOR ETHICS IN ARTIFICIAL INTELLIGENCE
- Marsstrasse 40
- D-80335 Munich
- INSTITUTE FOR ETHICS IN ARTIFICIAL INTELLIGENCE
- Arcisstrasse 21
- D-80333 Munich
The Institute for Ethical AI & Machine Learning
- The Institute for Ethical AI in Education
- The University of Buckingham
- The Institute for Ethical AI in Education
- Hunter Street
- MK18 1EG
- United Kingdom
Future of Life Institute
- No physical contact info
The Japanese Society for Artificial Intelligence
- The Japanese Society for Artificial Intelligence
- 402, OS Bldg.
- 4-7 Tsukudo-cho, Shinjuku-ku, Tokyo 162-0821
AI4All at ai-4-all.org
- 548 Market St
- PMB 95333
- San Francisco, California 94104
- The Future Society at thefuturesociety.org
- No physical contact info
- The AI Now Institute at ainowinstitute.org
- The Foundation for Responsible Robotics at responsiblerobotics.org
- AI4People at ai4people.eu
- No physical contact info
- IEEE Ethics in Action - in Autonomous and Intelligent Systems at ethicsinaction.ieee.org
- Carnegie Endowment for International Peace - Partnership for Countering Influence Operations (PCIO) at carnegieendowment.org
- Contact form https://carnegieendowment.org/about/?fa=contact
- Carnegie Endowment for International Peace
- Partnership for Countering Influence Operations
- 1779 Massachusetts Avenue NW
- Washington, DC 20036-2103
- Knowledge 4 All Foundation Ltd. - https://www.k4all.org/
- Betchworth House
- 57-65 Station Road
- Redhill, Surrey, RH1 1DL
- The Montréal Declaration for a Responsible Development of Artificial Intelligence at montrealdeclaration-responsibleai.com
- 1-514-343-6111, ext. 29669
Contacted[edit | edit source]
Contacted Lionel Hagege / FacePinPoint.com
- 2022-10-07 - Sent LinkedIn-mail to Lionel about https://theconversation.com/deepfake-audio-has-a-tell-researchers-use-fluid-dynamics-to-spot-artificial-imposter-voices-189104 as I had promised to do so when I find out that someone has found something that sounds like it could work against the digital sound-alikes.
- 2022-02-22 - 2022-02-28 - We exchanged some messages with Mr. Hagege. I got some information to him that I thought he really should know, and also got a very hope-bearing answer to my question about the IP: Mr. Hagege owns it after the company's dissolution in 2021, and he would be interested in getting at it again, but only if sufficient financing were supplied.
- 2022-02-21 - 2nd contact - I got another free month of LinkedIn pro and sent LinkedIn mail to Mr. Hagege, explaining the pressing need for FacePinPoint.com to make a return, as a public good. Hoping to get an answer.
- 2021-12-18 - first contact - I managed to reach someone at FacePinPoint via their Facebook chat and they told me that they could not find funding and needed to shut shop.
- An email attempt bounced: "Relay access denied (in reply to RCPT TO command)".
Contacted Artificial Intelligence Algorithmic Automation Incidents Controversies (AIAAIC) at aiaaic.org
- 2022-01-04 | Sent them email to info [at] aiaaic.org mentioning their site is down and thanking them for their effort
- 2022-01-04 | Received reply from Charlie. Replied back asking what is "AIAAIC" short for and Charlie Pownall responded "Artificial Intelligence Algorithmic Automation Incidents Controversies"
Contacted The Institute for Ethical AI & Machine Learning
- 2021-08-14 used the contact form at https://ethical.institute/#contact
Contacted Future of Life Institute
- 2021-08-14 | Subscribed to newsletter
- 2021-08-14 | Subscribed to mailing list
Contacted The AI Now Institute at ainowinstitute.org
- 2021-08-14 | Subscribed to mailing list
Contacted The Center for Countering Digital Hate at counterhate.com
- 2021-08-14 | Subscribed to mailing list
1st seen in[edit | edit source]
"The ethics of artificial intelligence: Issues and initiatives" (PDF). w:Europa (web portal). w:European Parliamentary Research Service. March 2020. Retrieved 2021-02-17.
This study deals with the ethical implications and moral questions that arise from the development and implementation of artificial intelligence (AI) technologies.
References[edit | edit source]
- whois facepinpoint.com
- whois aiaaic.org
- https://web.archive.org/web/20191108090036/https://www.darpa.mil/program/semantic-forensics (archived November 2019)