This is the '''SSFWIKI glossary''' with limited dictionary function. See '''[[resources]]''' for examples you will often find linked for your convenience.


'''Glossaries elsewhere'''
* <section begin=US-Congress-glossary />[https://www.congress.gov/help/legislative-glossary '''Glossary of Legislative Terms''' at congress.gov]<section end=US-Congress-glossary />
* <section begin=UK-Parliament-glossary />[https://www.parliament.uk/site-information/glossary/ '''Glossary''' at parliament.uk]<section end=UK-Parliament-glossary />
* <section begin=Law-Society-glossary />[https://www.lawsociety.org.uk/public/for-public-visitors/resources/glossary '''Legal glossary''' at lawsociety.org.uk]<section end=Law-Society-glossary />
* [https://equalitynow.org/online-sexual-exploitation-and-abuse-a-glossary-of-terms/ '''''Online Sexual Exploitation and Abuse: a Glossary of Terms''''' at equalitynow.org]<ref group="1st seen in">https://equalitynow.org/resource/briefing-paper-deepfake-image-based-sexual-abuse-tech-facilitated-sexual-exploitation-and-the-law/</ref>, an NGO founded in 1992, [[w:Equality Now]]
 
= Association for Computing Machinery =


The '''[[w:Association for Computing Machinery]]''' ('''ACM''') is a US-based international [[w:learned society]] for [[w:computing]]. It was founded in 1947, and is the world's largest scientific and educational computing society. (Wikipedia)
= Appearance and voice theft =
Appearance is stolen with [[digital look-alikes]] and voice is stolen with [[digital sound-alikes]]. These are new and very extreme forms of identity theft. Ban covert modeling, as well as possession of and doing anything with a model of a human's voice, but do not ban the [[Adequate Porn Watcher AI (concept)]].
----
= Audio forensics =
{{Q|'''[[w:Audio forensics]]''' is the field of [[w:forensic science]] relating to the acquisition, analysis, and evaluation of [[w:sound recording]]s that may ultimately be presented as admissible evidence in a court of law or some other official venue.<ref>{{cite web|url=http://www.soundonsound.com/sos/jan10/articles/forensics.htm|title=An Introduction To Forensic Audio|author=Phil Manchester|date=January 2010|publisher=Sound on Sound}}</ref><ref name=maherieee2009>{{cite journal|last=Maher|first=Robert C.|title=Audio forensic examination: authenticity, enhancement, and interpretation|journal=IEEE Signal Processing Magazine|volume=26|issue=2|pages=84–94|date=March 2009|doi=10.1109/msp.2008.931080|s2cid=18216777}}</ref><ref name=gelfandwired2007>{{cite web|url=https://www.wired.com/science/discoveries/news/2007/10/audio_forensics |title=Audio Forensics Experts Reveal (Some) Secrets |date=10 October 2007 |author=Alexander Gelfand |publisher=Wired Magazine |url-status=dead |archiveurl=https://web.archive.org/web/20120408153708/http://www.wired.com/science/discoveries/news/2007/10/audio_forensics |archivedate=2012-04-08 }}</ref><ref name=":0">{{Cite book |last=Maher |first=Robert C. |title=Principles of forensic audio analysis |publisher=Springer |year=2018 |isbn=9783319994536 |location=Cham, Switzerland |pages= |oclc=1062360764}}</ref>|Wikipedia|[[w:Audio forensics]]<ref group="permalink">https://en.wikipedia.org/w/index.php?title=Audio_forensics&oldid=1096442057 loaned on Friday 2022-10-21</ref>}}


----
* Covertly modeling the '''human voice'''
There is ongoing work to model e.g. a '''human's style of writing''', but this is probably not as drastic a threat as the covert modeling of appearance and of voice.
----
= Cyberbullying =
* '''[[w:Cyberbullying]]''' or '''cyberharassment''' is a form of [[w:bullying]] or [[w:harassment]] using [[w:Electronic communication network|w:electronic means]]. Cyberbullying and cyberharassment are also known as '''online bullying'''. ([https://en.wikipedia.org/w/index.php?title=Cyberbullying&oldid=1065285058 Wikipedia])
----
= DARPA =
[[File:DARPA_Logo.jpg|thumb|right|240px|The Defense Advanced Research Projects Agency, better known as [[w:DARPA|DARPA]], has been active in the field of countering synthetic fake video for longer than the public has been aware that [[Synthetic human-like fakes|the problems]] exist.]]
The '''Defense Advanced Research Projects Agency''' ('''[[w:DARPA]]''') is an agency of the [[w:United States Department of Defense]] responsible for the development of emerging technologies for use by the military. (Wikipedia)
* [https://www.darpa.mil/program/media-forensics '''DARPA program: 'Media Forensics (MediFor)'''' at darpa.mil] since 2016
* [https://www.darpa.mil/program/semantic-forensics '''DARPA program: 'Semantic Forensics (SemaFor)'''' at darpa.mil] since 2019
See '''[[wikidata:Q207361]]''' for translations, descriptions and links to WMF wikis about DARPA.


----


See '''[[wikidata:Q49473179]]''' for translations, descriptions and links to WMF wikis about deepfakes.
----
* français | fr | French | les sonne-mêmes numeriques
* svenska | sv | Swedish | digitala dubbla ljud
----
= Generative artificial intelligence =
{{Q|'''[[w:Generative artificial intelligence]]''' or '''generative AI''' (also ''GenAI'') is a type of [[w:artificial intelligence]] (AI) system capable of generating text, images, or other media in response to [[w:Prompt engineering|w:prompts]].|[https://en.wikipedia.org/w/index.php?title=Generative_artificial_intelligence&oldid=1152362282 Wikipedia]|[[w:Generative artificial intelligence]]}}
Examples of generative AI systems that generate images or scenery from [[w:Prompt engineering|w:prompts]] given by the user. These generated images may contain synthetic human-like fakes placed in fairly realistic-looking scenarios:
* [[w:Midjourney]]
* [[w:Stable Diffusion]]
Examples of [[w:large language model]]s being used to generate conversational AIs:
* [[w:ChatGPT]] by OpenAI based on OpenAI's [[w:Generative pre-trained transformer]] (GPT)
* [[w:Bard (chatbot)]] by Google and based on Google's [[w:LaMDA]]
Generative AI encompasses more than conversational AI, as it can also create images and scenery.
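As a hedged illustration of the image-generating kind of generative AI listed above, the sketch below prompts an openly available text-to-image diffusion model through the Hugging Face ''diffusers'' library. The model identifier, the prompt and the output file name are illustrative assumptions, not recommendations of this wiki.
<syntaxhighlight lang="python">
# Hedged sketch: text-to-image generation with an openly available diffusion model.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed publicly hosted model identifier
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU is available

# The prompt is free text; the result is a synthetic image that never passed through a camera.
image = pipe("a photorealistic portrait of a person who does not exist").images[0]
image.save("synthetic_portrait.png")
</syntaxhighlight>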
----


= Generative adversarial network =


[[File:Woman 1.jpg|alt=An image generated by StyleGAN, a generative adversarial network (GAN), that looks deceptively like a portrait of a young woman.|thumb|250x250px|An image generated by [[w:StyleGAN]], a [[w:generative adversarial network]] (GAN), that looks deceptively like a portrait of a young woman.]]


{{Q|A '''generative adversarial network''' ('''GAN''') is a class of [[w:machine learning|machine learning]] systems. Two [[w:neural network|neural network]]s contest with each other in a [[w:zero-sum game|zero-sum game]] framework. This technique can generate photographs that look at least superficially authentic to human observers,<ref name="GANs">{{cite arXiv |eprint=1406.2661|title=Generative Adversarial Networks|first1=Ian |last1=Goodfellow |first2=Jean |last2=Pouget-Abadie |first3=Mehdi |last3=Mirza |first4=Bing |last4=Xu |first5=David |last5=Warde-Farley |first6=Sherjil |last6=Ozair |first7=Aaron |last7=Courville |first8=Yoshua |last8=Bengio |class=cs.LG |year=2014 }}</ref> having many realistic characteristics. It is a form of [[w:unsupervised learning|unsupervised learning]].<ref name="ITT_GANs">{{cite arXiv |eprint=1606.03498|title=Improved Techniques for Training GANs|last1=Salimans |first1=Tim |last2=Goodfellow |first2=Ian |last3=Zaremba |first3=Wojciech |last4=Cheung |first4=Vicki |last5=Radford |first5=Alec |last6=Chen |first6=Xi |class=cs.LG |year=2016 }}</ref>|Wikipedia|[[w:generative adversarial network|generative adversarial networks]]}}
See '''[[wikidata:Q25104379]]''' for translations, descriptions and links to WMF wikis about GANs.
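To make the zero-sum setup in the quotation concrete, here is a minimal training-step sketch in PyTorch. The network sizes, the data shape and the hyperparameters are illustrative assumptions, not a recipe for realistic image synthesis.
<syntaxhighlight lang="python">
# Minimal GAN sketch: a generator and a discriminator trained as adversaries.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g. flattened 28x28 images (assumption)

G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, data_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    n = real_batch.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

    # Discriminator: label real samples 1 and generated samples 0.
    fake = G(torch.randn(n, latent_dim)).detach()
    loss_d = bce(D(real_batch), ones) + bce(D(fake), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: try to make the discriminator output 1 for generated samples.
    fake = G(torch.randn(n, latent_dim))
    loss_g = bce(D(fake), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
</syntaxhighlight>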


----
= Human-like image synthesis =
 
Human-like image synthesis is a dangerous technology. ''Human-like image synthesis'' would be semantically less wrong than ''[[w:human image synthesis]]''.


{{Q|'''Human image synthesis''' can be applied to make believable and even [[w:photorealism|photorealistic]] human-likenesses, moving or still. This has effectively been the situation since the early [[w:2000s (decade)|2000s]]. Many films using [[w:computer generated imagery|computer generated imagery]] have featured synthetic images of human-like characters [[w:digital compositing|digitally composited]] onto the real or other simulated film material.|Wikipedia|[[w:Human image synthesis|Human image syntheses]]}}


See '''[[wikidata:Q17118711]]''' for translations, descriptions and links to WMF wikis about human image synthesis and please contribute.
----


[[File:Institute for Creative Technologies (logo).jpg|thumb|right|156px|Logo of the '''[[w:Institute for Creative Technologies|Institute for Creative Technologies]]''']]


See '''[[wikidata:Q6039265]]''' for translations, descriptions and links to WMF wikis about ICT.


----
= Large language model =
{{Q|A '''[[w:large language model]]''' ('''LLM''') is a [[w:language model]] consisting of a [[w:Artificial neural network|w:neural network]] with many parameters (typically billions of weights or more), trained on large quantities of unlabeled text using [[w:self-supervised learning]] or [[w:semi-supervised learning]].|[https://en.wikipedia.org/w/index.php?title=Large_language_model&oldid=1152606161 Wikipedia]|[[w:large language model]]}}
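As a hedged illustration, the sketch below prompts a small, openly available language model through the Hugging Face ''transformers'' library. The model choice and the prompt are illustrative assumptions; large proprietary LLMs are usually reached through vendor APIs instead.
<syntaxhighlight lang="python">
# Hedged sketch: text generation with a small open language model ("gpt2" is illustrative).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("A glossary is", max_new_tokens=30, num_return_sequences=1)
print(result[0]["generated_text"])
</syntaxhighlight>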


----
= Light stage =


{{Q|A '''light stage''' or '''light cage''' is equipment used for [[w:3D modeling|shape]], [[w:texture mapping|texture]], reflectance and [[w:motion capture|motion capture]] often with [[w:structured light|structured light]] and a [[w:multi-camera setup|multi-camera setup]].|Wikipedia|[[w:light stage|light stage]]s}}


See '''[[wikidata:Q17097238]]''' for translations, descriptions and links to WMF wikis about light stages.


----
'''MATINE''' ([[w:fi:MATINE]]) is the [https://www.defmin.fi/en/frontpage/overview/ministry_of_defence/departments_and_units/defence_policy_department/scientific_advisory_board_for_defence_%28matine%29#d779859d '''Scientific Advisory Board for Defence'''] of the [[w:Ministry of Defence (Finland)|w:Ministry of Defence of Finland]]. MATINE is an abbreviation of '''''MA'''anpuolustuksen '''TI'''eteellinen '''NE'''uvottelukunta'' and it arranges an annual public research seminar. In 2019 a research group funded by MATINE presented their work [https://www.defmin.fi/files/4755/1315MATINE_seminaari_21.11.pdf 'Synteettisen median tunnistus' at defmin.fi] (Recognizing synthetic media).


As of 2021-08-11 there is no Wikidata item for translations, descriptions and links to WMF wikis about MATINE.


----
'''Media forensics''' deals with ascertaining the authenticity of media.


{{Q|Wikipedia does not have an article on media forensics, but it does have one on [[w:audio forensics]]|juboxi|2022-10-21}}


----
{{Q|A '''niqab''' or '''niqāb''' ("[face] veil"; also called a '''ruband''') is a garment of clothing that covers the face, worn by some [[w:muslim women|muslim women]] as a part of a particular interpretation of [[w:hijab|hijab]] (modest dress).|Wikipedia|[[w:Niqāb|Niqābs]]}}


See '''[[wikidata:Q210583]]''' for translations, descriptions and links to WMF wikis about niqābs.


----
'''Relighting''' means applying a completely different [[w:lighting]] situation to an image or video which has already been imaged. As of 2020-09 the English Wikipedia does not have an article on relighting.


Since 2021-03-15, [[w:Relighting]] has been a redirect to [[w:Polynomial texture mapping]].
 
'''[[w:Polynomial texture mapping]]''' ('''PTM'''), also known as '''Reflectance Transformation Imaging''' (RTI), is a technique of [[w:digital imaging]] and [[w:interactive media|w:interactively]] displaying objects under varying [[w:lighting]] conditions to reveal surface phenomena. ([https://en.wikipedia.org/w/index.php?title=Polynomial_texture_mapping&oldid=1021704138 Wikipedia])
 
Past: As of 2020-11-19 Wikipedia does not have an article on '''relighting'''.


----
= Sexual bullying =
* '''[[w:Sexual bullying]]''' is a type of [[w:bullying]] and [[w:sexual harassment]] that occurs in connection with a person's [[w:sex]], body,  [[w:sexual orientation]] or with [[w:sexual activity]]. It can be [[w:physical abuse|w:physical]], [[w:verbal abuse|w:verbal]], and/or [[w:emotional abuse]]. ([https://en.wikipedia.org/w/index.php?title=Sexual_bullying&oldid=1063305355 Wikipedia])
----
= SISE =
'''SISE''' refers to the [[#Stopping Internet Sexual Exploitation Act]] - a House of Commons of Canada bill introduced in 2022.
----
= SISEA =
* [[#Stop Internet Sexual Exploitation Act]] - a US Senate bill in the 2019-2020 session
----
= Spectrogram =
[[File:Spectrogram-19thC.png|thumb|right|360px|A [[w:spectrogram|spectrogram]] of a male voice saying 'nineteenth century']]
'''[[w:Spectrogram]]s''' are used extensively in the fields of [[w:music]], [[w:linguistics]], [[w:sonar]], [[w:radar]], [[w:speech processing]], [[w:seismology]], and others. Spectrograms of audio can be used to identify spoken words [[w:phonetics|phonetic]]ally, and to analyse the [[w:Animal communication|various calls of animals]]. (Wikipedia)


See '''[[wikidata:Q103865657]]''' for translations, descriptions and links to WMF wikis about spectrograms.
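A hedged sketch of producing such a spectrogram from a speech recording with SciPy and Matplotlib follows; the WAV file name is an assumption for illustration.
<syntaxhighlight lang="python">
# Hedged sketch: compute and plot a spectrogram of a (mono) speech recording.
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

fs, samples = wavfile.read("nineteenth_century.wav")  # hypothetical mono WAV file
f, t, Sxx = spectrogram(samples.astype(float), fs=fs, nperseg=1024)

plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12), shading="auto")  # power in dB
plt.xlabel("Time [s]")
plt.ylabel("Frequency [Hz]")
plt.title("Spectrogram")
plt.show()
</syntaxhighlight>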


----
See '''[[wikidata:]]''' for translations, descriptions and links to WMF wikis about speech syntheses.


----
= Stop Internet Sexual Exploitation Act =
The Stop Internet Sexual Exploitation Act (SISEA) was a bill introduced in the 2019-2020 session of the US Senate.
* [https://www.congress.gov/bill/116th-congress/senate-bill/5054?r=1&s=1 US Senate Bill '''''S.5054 - Stop Internet Sexual Exploitation Act''''' at congress.gov]
= Stopping Internet Sexual Exploitation Act =
* [https://www.parl.ca/DocumentViewer/en/44-1/bill/C-270/first-reading House of Commons of Canada bill C-270 '''''Stopping Internet Sexual Exploitation Act''''' at parl.ca], first read to the Commons on Thursday 2022-04-28. According to [https://www.townandcountrytoday.com/local-news/mp-submits-private-members-bill-for-second-time-5346232 townandcountrytoday.com] the author of the bill introduced an identical bill C-302 on Thursday 2021-05-27, but that got killed off by oncoming federal elections
----


= Synthetic pornography =
'''Synthetic pornography''' is a '''strong technological hallucinogen'''.


== Synthetic terror porn ==
'''Synthetic terror porn''' is pornography synthesized with terrorist intent. '''Synthetic rape porn''' is probably by far the most prevalent form of this, but it must be noted that synthesizing '''consensual-looking sex scenes''' can also be '''terroristic''' in intent and effect.
 
----


{{Q|'''Transfer learning (TL)''' is a research problem in [[w:machine learning|machine learning]] (ML) that focuses on storing knowledge gained while solving one problem and applying it to a different but related problem.|Wikipedia|[[w:Transfer learning|Transfer learning]]}}


See '''[[wikidata:Q6027324]]''' for translations, descriptions and links to WMF wikis about transfer learning.
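A minimal sketch of the idea with PyTorch and torchvision: reuse a network pretrained on ImageNet and retrain only a new final layer for a different but related task. The target class count and the frozen backbone are illustrative assumptions, and a recent torchvision is assumed.
<syntaxhighlight lang="python">
# Hedged transfer-learning sketch: reuse pretrained ImageNet features for a new task.
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")  # knowledge gained on ImageNet

for param in model.parameters():                  # freeze the pretrained layers
    param.requires_grad = False

model.fc = nn.Linear(model.fc.in_features, 5)     # new head for the related task (5 classes assumed)
# Only model.fc.parameters() would now be passed to the optimizer and trained.
</syntaxhighlight>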


----
Please see '''[[Resources#List of voice changers]]''' for some alternatives.


See '''[[wikidata:Q4224062]]''' for translations, descriptions and links to WMF wikis about voice changers.


----
= Permalinks =
<references group="permalink"/>


= References =
<references />
= 1st seen in =
<references group="1st seen in"/>
[[Category:Listing]]
[[Category:In English]]
[[Category:In Finnish]]
[[Category:Suomeksi]]

Latest revision as of 16:34, 16 December 2024

This is the SSFWIKI glossary with limited dictionary function. See resources for examples you will often find linked for your convenience.

Glossaries elsewhere

Association for Computing Machinery

The w:Association for Computing Machinery (ACM) is a US-based international w:learned society for w:computing. It was founded in 1947, and is the world's largest scientific and educational computing society. (Wikipedia)

See wikidata:Q127992 for translations, descriptions and links to WMF wikis about ACM.


Adequate Porn Watcher AI

Adequate Porn Watcher AI (concept) is a 2019 concept for an AI that would protect humans against visual synthetic filth by ripping the disinformation filth into revealing light.

Main article Adequate Porn Watcher AI (concept).

Dictionary entries

  • English | en | English | Adequate Porn Watcher AI
  • eesti | et | Estonian | Piisav Pornovaataja AI on 2019. aasta tehisintellekti idee, mis kaitseks inimesi sünteetilise visuaalse saasta eest, rebides desinformatsioonisaasta valguse.
  • suomi | fi | Finnish | Adequate Porn Watcher AI -tekoälykonsepti (eng) on 2019 konsepti tekoälystä, joka suojelisi ihmisiä synteettistä visuaalista saastaa repimällä disinformaatiosaastan paljastavaan valoon.
  • français | fr | French | l’IA Observateur Adéquat de Porno (concept) (Adequate Porn Watcher AI en anglais) est une idée pour une IA qui protégerait les humains contre les saletés synthétiques visuelles en arrachent les saletés de désinformation en lumière révélatrice.
  • svenska | sv | Swedish | Adekvat Porr Tittare AI är en idé från 2019 om artificiell intelligens som skulle skydda människor från syntetisk visuell orenhet genom att riva desinformationsorenhet till avslöjande ljus.



Appearance and voice theft

Appearance is stolen with digital look-alikes and voice is stolen with digital sound-alikes. These are new and very extreme forms of identity theft. Ban covert modeling, as well as possession of and doing anything with a model of a human's voice, but do not ban the Adequate Porn Watcher AI (concept).


Audio forensics

w:Audio forensics is the field of w:forensic science relating to the acquisition, analysis, and evaluation of w:sound recordings that may ultimately be presented as admissible evidence in a court of law or some other official venue.[1][2][3][4]



Bidirectional reflectance distribution function

Diagram showing vectors used to define the w:BRDF.

“The bidirectional reflectance distribution function (BRDF) is a function of four real variables that defines how light is reflected at an opaque surface. It is employed in the optics of real-world light, in computer graphics algorithms, and in computer vision algorithms.”

~ Wikipedia on BRDF


A BRDF model is a seven-dimensional model containing the geometry, textures and reflectance of the subject.

The seven dimensions of the BRDF model are as follows:

  • 3 Cartesian coordinates X, Y, Z
  • 2 for the entry angle of the light
  • 2 for the exit angle of the light.

See wikidata:Q856980 for translations, descriptions and links to WMF wikis about BRDF.
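A hedged sketch of evaluating a very simple BRDF (a Lambertian diffuse term plus a Phong-style specular lobe): the directions are unit vectors, which encode the 2 + 2 angles listed above, and the albedo and shininess values are illustrative assumptions, not part of any measured model.
<syntaxhighlight lang="python">
# Hedged sketch: a toy BRDF evaluated for one incoming and one outgoing direction.
import numpy as np

def simple_brdf(w_in, w_out, normal, albedo=0.8, specular=0.2, shininess=32.0):
    """Reflectance for light arriving from w_in and leaving along w_out (unit vectors)."""
    w_in, w_out, normal = (np.asarray(v, dtype=float) for v in (w_in, w_out, normal))
    diffuse = albedo / np.pi                              # Lambertian term
    mirrored = 2.0 * normal * normal.dot(w_in) - w_in     # mirror direction of w_in
    spec = specular * max(mirrored.dot(w_out), 0.0) ** shininess
    return diffuse + spec

# Example: light from 45 degrees, camera straight above a surface with normal +Z.
print(simple_brdf([0.707, 0.0, 0.707], [0.0, 0.0, 1.0], [0.0, 0.0, 1.0]))
</syntaxhighlight>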


Burqa

Some humans in w:burqas at the Bornholm burka happening

“A burqa, also known as chadri or paranja in Central Asia, is an enveloping outer garment worn by women in some Islamic traditions to cover themselves in public, which covers the body and the face.”

~ Wikipedia on burqas


See wikidata:Q167884 for translations, descriptions and links to WMF wikis about burqas.


Covert modeling

Covert modeling refers to covertly modeling aspects of a subject, i.e. modeling them without express consent.

Main known cases are covertly modeling the human appearance and covertly modeling the human voice.

There is ongoing work to model e.g. a human's style of writing, but this is probably not as drastic a threat as the covert modeling of appearance and of voice.


Cyberbullying


DARPA

The Defense Advanced Research Projects Agency, better known as DARPA, has been active in the field of countering synthetic fake video for longer than the public has been aware that the problems exist.

The Defense Advanced Research Projects Agency (w:DARPA) is an agency of the w:United States Department of Defense responsible for the development of emerging technologies for use by the military. (Wikipedia)

See wikidata:Q207361 for translations, descriptions and links to WMF wikis about DARPA.


Deepfake

A side-by-side comparison of videos. To the left, a scene from the 2013 motion picture w:Man of Steel (film). To the right, the same scene modified using w:deepfake technology.

Man of Steel produced by DC Entertainment and Legendary Pictures, distributed by Warner Bros. Pictures. Modification done by Reddit user "derpfakes".


“Deepfake (a portmanteau of "deep learning" and "fake") is a technique for human image synthesis based on artificial intelligence. It is used to combine and superimpose existing images and videos onto source images or videos using a machine learning technique called a "generative adversarial network" (GAN).”

~ Wikipedia on Deepfakes


See wikidata:Q49473179 for translations, descriptions and links to WMF wikis about deepfakes.


Digital look-alike

When the camera does not exist, but the subject being imaged with a simulation of a (movie) camera deceives the watcher into believing it is some living or dead person, it is a digital look-alike. An alternative term is look-like-anyone-machine.

Saying "digital look-alike of X" would imply possession, but "digital look-alike made of X" is better suited, unless the target really is in possession of it.


Dictionary entries

  • English | en | English | Digital look-alikes
  • eesti | et | Estonian | digitaalsed duplikaadid
  • suomi | fi | Finnish | digitaaliset kaksoiskuvajaiset
  • français | fr | French | les sosies numériques
  • svenska | sv | Swedish | digitala dupletter

Digital sound-alike

When it cannot be determined by human testing whether some synthesized recording is a simulation of some person's speech or a recording made of that person's actual real voice, it is a pre-recorded digital sound-alike. An alternative term is sound-like-anyone-machine.


Dictionary entries

  • English | en | English | Digital sound-alikes
  • eesti | et | Estonian | digitaalsed topelthelid
  • suomi | fi | Finnish | digitaaliset kaksoisäänet
  • français | fr | French | les sonne-mêmes numeriques
  • svenska | sv | Swedish | digitala dubbla ljud

Generative artificial intelligence

“w:Generative artificial intelligence or generative AI (also GenAI) is a type of w:artificial intelligence (AI) system capable of generating text, images, or other media in response to w:prompts.”


Examples of generative AI systems that generate images or scenery from w:prompts given by the user. These generated images may contain synthetic human-like fakes placed in fairly realistic-looking scenarios.

Examples of w:large language models being used to generate conversational AIs

Generative AI encompasses more than conversational AI, as it can also create images and scenery.


Generative adversarial network

An image generated by w:StyleGAN, a w:generative adversarial network (GAN), that looks deceptively like a portrait of a young woman.

“A generative adversarial network (GAN) is a class of machine learning systems. Two neural networks contest with each other in a zero-sum game framework. This technique can generate photographs that look at least superficially authentic to human observers,[5] having many realistic characteristics. It is a form of unsupervised learning.[6]”


See wikidata:Q25104379 for translations, descriptions and links to WMF wikis about GANs.


Human-like image synthesis

Human-like image synthesis is a dangerous technology. Human-like image synthesis would be semantically less wrong than w:human image synthesis.

“Human image synthesis can be applied to make believable and even photorealistic human-likenesses, moving or still. This has effectively been the situation since the early 2000s. Many films using computer generated imagery have featured synthetic images of human-like characters digitally composited onto the real or other simulated film material.”

~ Wikipedia on Human image syntheses


See wikidata:Q17118711 for translations, descriptions and links to WMF wikis about human image synthesis and please contribute.


Institute for Creative Technologies

The Institute for Creative Technologies was founded in 1999 at the University of Southern California by the United States Army. It collaborates with the w:United States Army Futures Command, w:United States Army Combat Capabilities Development Command, w:Combat Capabilities Development Command Soldier Center and w:United States Army Research Laboratory.

See wikidata:Q6039265 for translations, descriptions and links to WMF wikis about ICT.


Large language model

“A w:large language model (LLM) is a w:language model consisting of a w:neural network with many parameters (typically billions of weights or more), trained on large quantities of unlabeled text using w:self-supervised learning or w:semi-supervised learning.”



Light stage

Original w:light stage used in the 1999 reflectance capture by Debevec et al.

It consists of two rotary axes with height and radius control. A light source and a polarizer were placed on one arm, and a camera and the other polarizer on the other arm.

Original image by Debevec et al. – Copyright ACM 2000 – https://dl.acm.org/citation.cfm?doid=311779.344855
The ESPER LightCage - 3D face scanning rig is a modern w:light stage

“A light stage or light cage is equipment used for shape, texture, reflectance and motion capture often with structured light and a multi-camera setup.”

~ Wikipedia on light stages


See wikidata:Q17097238 for translations, descriptions and links to WMF wikis about light stages.


MATINE

MATINE (w:fi:MATINE) is the Scientific Advisory Board for Defence of the w:Ministry of Defence of Finland. MATINE is an abbreviation of MAanpuolustuksen TIeteellinen NEuvottelukunta and it arranges an annual public research seminar. In 2019 a research group funded by MATINE presented their work 'Synteettisen median tunnistus' at defmin.fi (Recognizing synthetic media).

As of 2021-08-11 there is no Wikidata item for translations, descriptions and links to WMF wikis about MATINE.


Media forensics

Media forensics deals with ascertaining the authenticity of media.

“Wikipedia does not have an article on media forensics, but it does have one on w:audio forensics”

~ juboxi on 2022-10-21



Niqāb

Image of a human wearing a w:niqāb

“A niqab or niqāb ("[face] veil"; also called a ruband) is a garment of clothing that covers the face, worn by some muslim women as a part of a particular interpretation of hijab (modest dress).”

~ Wikipedia on Niqābs


See wikidata:Q210583 for translations, descriptions and links to WMF wikis about niqābs.


No camera

No camera (!) refers to the fact that a simulation of a camera is not a camera. People should realize the differences, and thus the different restrictions imposed by many types of laws, e.g. those of physics and physiology. Analogously see #No microphone, usually seen below this entry.


No microphone

No microphone is needed when using synthetic voices, as you just model them without needing to capture anything. Analogously see the entry #No camera, usually seen above this entry.


Reflectance capture

Reflectance capture is made by measuring the reflected light for each incoming light direction and every exit direction, often with many different wavelengths. Using polarisers allows separate capture of the specular and the diffuse reflected light. The first known reflectance capture over the human face was made in 1999 by Paul Debevec et al. at the w:University of Southern California.

As of 2020-11-19 Wikipedia does not have an article on reflectance capture.
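A hedged sketch of the polarisation trick described above: with the polarisers crossed only the diffuse reflection reaches the camera, while with them parallel both components do, so subtracting the two captures isolates the specular part. The image arrays here are random stand-ins for real captures.
<syntaxhighlight lang="python">
# Hedged sketch: separating diffuse and specular reflection from two polarised captures.
import numpy as np

def separate_reflectance(parallel_img, cross_img):
    diffuse = cross_img                                    # cross-polarised: diffuse only
    specular = np.clip(parallel_img - cross_img, 0, None)  # the remainder is specular
    return diffuse, specular

# One (incoming direction, outgoing direction) sample of a reflectance data set:
parallel = np.random.rand(64, 64)  # stand-in for a parallel-polarised capture
cross = 0.6 * parallel             # stand-in for a cross-polarised capture
diffuse, specular = separate_reflectance(parallel, cross)
</syntaxhighlight>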


Relighting

Each image shows a face in synthesized lighting. The lower images represent the captured illumination map. The images are generated by taking a dot product of each pixel's reflectance function with the illumination map.

Original image Copyright ACM 2000 – http://dl.acm.org/citation.cfm?doid=311779.344855

Relighting means applying a completely different w:lighting situation to an image or video which has already been imaged. As of 2020-09 the English Wikipedia does not have an article on relighting.

Since 2021-03-15, w:Relighting has been a redirect to w:Polynomial texture mapping.

w:Polynomial texture mapping (PTM), also known as Reflectance Transformation Imaging (RTI), is a technique of w:digital imaging and w:interactively displaying objects under varying w:lighting conditions to reveal surface phenomena. (Wikipedia)

Past: As of 2020-11-19 Wikipedia does not have an article on relighting.
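A hedged sketch of the relighting idea from the caption above: if each pixel's reflectance function is stored as one weight per basis lighting direction, relighting reduces to a dot product of those weights with a new illumination map. The array shapes and the random data are illustrative assumptions.
<syntaxhighlight lang="python">
# Hedged sketch: relighting as a per-pixel dot product with an illumination map.
import numpy as np

n_lights, height, width = 32, 480, 640
reflectance = np.random.rand(height, width, n_lights)  # per-pixel reflectance function
illumination = np.random.rand(n_lights)                # new illumination map

relit = reflectance @ illumination                     # dot product over the lighting axis -> (height, width)
</syntaxhighlight>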


Sexual bullying


SISE

SISE refers to the #Stopping Internet Sexual Exploitation Act - a House of Commons of Canada bill introduced in 2022.


SISEA


Spectrogram

A spectrogram of a male voice saying 'nineteenth century'

w:Spectrograms are used extensively in the fields of w:music, w:linguistics, w:sonar, w:radar, w:speech processing, w:seismology, and others. Spectrograms of audio can be used to identify spoken words phonetically, and to analyse the various calls of animals. (Wikipedia)

See wikidata:Q103865657 for translations, descriptions and links to WMF wikis about spectrograms.


Speech synthesis

“Speech synthesis is the artificial production of human speech.”

~ Wikipedia on speech syntheses


See wikidata: for translations, descriptions and links to WMF wikis about speech syntheses.
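A hedged sketch of basic speech synthesis with the offline pyttsx3 library, which drives whatever text-to-speech engine the operating system provides; the spoken sentence is an arbitrary example.
<syntaxhighlight lang="python">
# Hedged sketch: speech synthesis via the operating system's installed TTS engine.
import pyttsx3

engine = pyttsx3.init()
engine.say("Speech synthesis is the artificial production of human speech.")
engine.runAndWait()
</syntaxhighlight>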


Stop Internet Sexual Exploitation Act

The Stop Internet Sexual Exploitation Act (SISEA) was a bill introduced in the 2019-2020 session of the US Senate.

Stopping Internet Sexual Exploitation Act


Synthetic pornography

Synthetic pornography is a strong technological hallucinogen.

Synthetic terror porn

Synthetic terror porn is pornography synthesized with terrorist intent. Synthetic rape porn is probably by far the most prevalent form of this, but it must be noted that synthesizing consensual-looking sex scenes can also be terroristic in intent and effect.


Transfer learning

“Transfer learning (TL) is a research problem in machine learning (ML) that focuses on storing knowledge gained while solving one problem and applying it to a different but related problem.”

~ Wikipedia on Transfer learning


See wikidata:Q6027324 for translations, descriptions and links to WMF wikis about transfer learning.


Voice changer

“The term voice changer (also known as voice enhancer) refers to a device which can change the tone or pitch of or add distortion to the user's voice, or a combination and vary greatly in price and sophistication.”

~ Wikipedia on voice changers


Please see Resources#List of voice changers for some alternatives.

See wikidata:Q4224062 for translations, descriptions and links to WMF wikis about voice changers.
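A hedged sketch of a purely software voice changer: shifting the pitch of a recording with librosa. The input file name, the output file name and the amount of shift are illustrative assumptions.
<syntaxhighlight lang="python">
# Hedged sketch: pitch-shifting a voice recording as a simple voice changer.
import librosa
import soundfile as sf

y, sr = librosa.load("input_voice.wav", sr=None)            # hypothetical recording
shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=4)  # raise pitch by 4 semitones
sf.write("changed_voice.wav", shifted, sr)
</syntaxhighlight>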


Permalinks

References

  1. Phil Manchester (January 2010). "An Introduction To Forensic Audio". Sound on Sound.
  2. Maher, Robert C. (March 2009). "Audio forensic examination: authenticity, enhancement, and interpretation". IEEE Signal Processing Magazine. 26 (2): 84–94. doi:10.1109/msp.2008.931080. S2CID 18216777.
  3. Alexander Gelfand (10 October 2007). "Audio Forensics Experts Reveal (Some) Secrets". Wired Magazine. Archived from the original on 2012-04-08.
  4. Maher, Robert C. (2018). Principles of forensic audio analysis. Cham, Switzerland: Springer. ISBN 9783319994536. OCLC 1062360764.
  5. Goodfellow, Ian; Pouget-Abadie, Jean; Mirza, Mehdi; Xu, Bing; Warde-Farley, David; Ozair, Sherjil; Courville, Aaron; Bengio, Yoshua (2014). "Generative Adversarial Networks". arXiv:1406.2661 [cs.LG].
  6. Salimans, Tim; Goodfellow, Ian; Zaremba, Wojciech; Cheung, Vicki; Radford, Alec; Chen, Xi (2016). "Improved Techniques for Training GANs". arXiv:1606.03498 [cs.LG].

1st seen in