This is the '''SSF! wiki glossary'''. See '''[[resources]]''' for examples you will often find linked for your convenience.
= Adequate Porn Watcher AI =
See [[Adequate Porn Watcher AI]]


= Appearance and voice theft =
Appearance is stolen with [[digital look-alikes]] and voice is stolen with [[digital sound-alikes]]. These are new and very extreme forms of identity theft. Ban covert modeling, as well as possession of or doing anything with a covertly made model of a human's voice, but do not ban the [[Adequate Porn Watcher AI]].
 
----
 
= Bidirectional reflectance distribution function =
 
[[File:BRDF_Diagram.svg|thumb|right|300px|Diagram showing vectors used to define the [[w:Bidirectional reflectance distribution function|w:BRDF]].]]


{{Q|The '''bidirectional reflectance distribution function''' ('''BRDF''') is a function of four real variables that defines how light is reflected at an [[w:Opacity (optics)|opaque]] surface. It is employed in the [[w:optics|optics]] of real-world light, in [[w:computer graphics|computer graphics]] algorithms, and in [[w:computer vision|computer vision]] algorithms.|Wikipedia|[[w:bidirectional reflectance distribution function|BRDF]]}}

A BRDF model is a 7-dimensional model containing the geometry, textures and reflectance of the subject. The seven dimensions of the BRDF model are as follows:
* 3 cartesian coordinates (X, Y, Z)
* 2 for the entry angle of the light
* 2 for the exit angle of the light.
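As a minimal sketch of what evaluating a BRDF looks like in code (not taken from any capture pipeline discussed on this wiki), the Python below evaluates a textbook Lambertian plus Blinn-Phong model for one pair of directions; all parameter names and values are purely illustrative. In a captured 7-dimensional model, the albedo and specular parameters would instead be looked up per surface point (X, Y, Z) from textures.

<syntaxhighlight lang="python">
# Illustrative sketch: evaluate a simple analytic BRDF for one pair of
# light directions. The four angular variables are encoded in the unit
# vectors w_in and w_out; albedo/specular/shininess are arbitrary values.
import numpy as np

def blinn_phong_brdf(w_in, w_out, normal, albedo=0.8, specular=0.04, shininess=32.0):
    """Evaluate a Lambertian + Blinn-Phong BRDF for unit vectors w_in and w_out."""
    n = normal / np.linalg.norm(normal)
    diffuse = albedo / np.pi                                # ideal diffuse (Lambertian) term
    h = w_in + w_out
    h = h / np.linalg.norm(h)                               # half-vector between the two directions
    spec = specular * max(np.dot(n, h), 0.0) ** shininess   # glossy highlight term
    return diffuse + spec

# Light arriving from 45 degrees, camera looking straight down at the surface.
w_in = np.array([0.0, np.sin(np.pi / 4), np.cos(np.pi / 4)])
w_out = np.array([0.0, 0.0, 1.0])
print(blinn_phong_brdf(w_in, w_out, normal=np.array([0.0, 0.0, 1.0])))
</syntaxhighlight>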
----


= Burqa =
[[File:20180613 Folkemodet Bornholm burka happening 0118 (42739707262).jpg|thumb|right|300px|Some humans in [[w:burqa]]s at the Bornholm burka happening]]
{{Q|A '''burqa''', also known as '''chadri''' or [[w:paranja|paranja]] in [[w:Central Asia|Central Asia]], is an enveloping outer garment worn by women in some Islamic traditions to cover themselves in public, which covers the body and the face.|Wikipedia|[[w:burqa|burqa]]s}}
----
 
= Covert modeling =
'''Covert modeling''' refers to covertly modeling aspects of a subject, i.e. without express consent.


The main known cases are:
* Covertly modeling the '''human appearance''' into 7-dimensional [[#Bidirectional reflectance distribution function|Bidirectional reflectance distribution function]] model or other type of model.
* Covertly modeling the '''human voice'''
There is ongoing work to model e.g. a '''human's style of writing''', but this is probably not as drastic a threat as the covert modeling of appearance and of voice.
----
 
= Deepfake =
[[File:Deepfake example.gif|thumb|right|313px|link=https://en.wikipedia.org/wiki/File:Deepfake_example.gif|A side-by-side comparison of videos. To the left, a scene from the 2013 motion picture ''[[w:Man of Steel (film)]]''. To the right, the same scene modified using [[w:deepfake]] technology.
<br /><br />
Man of Steel produced by DC Entertainment and Legendary Pictures, distributed by Warner Bros. Pictures. Modification done by Reddit user "derpfakes".
<br /><br />
<small>This is a sample from a copyrighted video recording. The person who uploaded this work and first used it in an article, and subsequent people who use it in articles, assert that this qualifies as fair use.</small>]]
 
{{Q|'''Deepfake''' (a [[w:portmanteau|portmanteau]] of "[[w:deep learning|deep learning]]" and "fake") is a technique for [[w:human image synthesis|human image synthesis]] based on [[w:artificial intelligence|artificial intelligence]]. It is used to combine and [[w:Superimposition|superimpose]] existing images and videos onto source images or videos using a machine learning technique called a "[[w:generative adversarial network|generative adversarial network]]" (GAN).|Wikipedia|[[w:Deepfake|Deepfake]]s}}


----
 
= DARPA =
[[File:DARPA_Logo.jpg|thumb|right|240px|The Defense Advanced Research Projects Agency, better known as [[w:DARPA|DARPA]], has been active in the field of countering synthetic fake video for longer than the public has been aware that [[Synthetic human-like fakes|the problems]] exist.]]
The '''Defense Advanced Research Projects Agency''' ('''[[w:DARPA]]''') is an agency of the [[w:United States Department of Defense]] responsible for the development of emerging technologies for use by the military. (Wikipedia)
* [https://www.darpa.mil/program/media-forensics '''DARPA program: Media Forensics (MediFor)''' at darpa.mil] since 2016
* [https://www.darpa.mil/program/semantic-forensics '''DARPA program: Semantic Forensics (SemaFor)''' at darpa.mil] since 2019
 
----
 
= Digital look-alike =
When the camera does not exist, but the subject being imaged with a simulation of a (movie) camera deceives the watcher into believing it is some living or dead person, it is a '''[[digital look-alikes|digital look-alike]]'''.


----
= Digital sound-alike =
When it cannot be determined by human testing whether a recording is a synthesized simulation of some person's speech or a recording of that person's actual voice, it is a '''[[digital sound-alikes|digital sound-alike]]'''.
----
= Generative adversarial network =
[[File:Woman 7.jpg|alt=An image generated by StyleGAN, a generative adversarial network (GAN), that looks deceptively like a portrait of a young woman.|thumb|250x250px|An image generated by [[w:StyleGAN]], a [[w:generative adversarial network]] (GAN), that looks deceptively like a portrait of a young woman.]]
{{Q|A '''generative adversarial network''' ('''GAN''') is a class of [[w:machine learning|machine learning]] systems. Two [[w:neural network|neural network]]s contest with each other in a [[w:zero-sum game|zero-sum game]] framework. This technique can generate photographs that look at least superficially authentic to human observers,<ref name="GANs">{{cite arXiv |eprint=1406.2661|title=Generative Adversarial Networks|first1=Ian |last1=Goodfellow |first2=Jean |last2=Pouget-Abadie |first3=Mehdi |last3=Mirza |first4=Bing |last4=Xu |first5=David |last5=Warde-Farley |first6=Sherjil |last6=Ozair |first7=Aaron |last7=Courville |first8=Yoshua |last8=Bengio |class=cs.LG |year=2014 }}</ref> having many realistic characteristics. It is a form of [[w:unsupervised learning|unsupervised learning]].<ref name="ITT_GANs">{{cite arXiv |eprint=1606.03498|title=Improved Techniques for Training GANs|last1=Salimans |first1=Tim |last2=Goodfellow |first2=Ian |last3=Zaremba |first3=Wojciech |last4=Cheung |first4=Vicki |last5=Radford |first5=Alec |last6=Chen |first6=Xi |class=cs.LG |year=2016 }}</ref>|Wikipedia|[[w:generative adversarial network|generative adversarial networks]]}}
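The adversarial setup can be made concrete with a toy example. The following is a minimal sketch in PyTorch, not drawn from any deepfake software: a tiny generator learns to imitate a one-dimensional toy distribution while a discriminator tries to tell its output from real samples. All network sizes and hyperparameters are arbitrary illustrations.

<syntaxhighlight lang="python">
# Minimal illustrative GAN: generator vs. discriminator in a zero-sum game,
# trained on samples from a 1-D normal distribution instead of face images.
import torch
import torch.nn as nn

latent_dim = 8
generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0           # "real" samples from N(3, 0.5)
    fake = generator(torch.randn(64, latent_dim))   # synthesized samples

    # Discriminator tries to tell real from fake.
    d_loss = (bce(discriminator(real), torch.ones(64, 1))
              + bce(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator tries to make the discriminator call its output real.
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print(generator(torch.randn(5, latent_dim)).detach())   # samples near 3.0 after training
</syntaxhighlight>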
----
= Human image synthesis =
{{Q|'''Human image synthesis''' can be applied to make believable and even [[w:photorealism|photorealistic]] renditions of human-likenesses, moving or still. This has effectively been the situation since the early [[w:2000s (decade)|2000s]]. Many films using [[w:computer generated imagery|computer generated imagery]] have featured synthetic images of human-like characters [[w:digital compositing|digitally composited]] onto the real or other simulated film material.|Wikipedia|[[w:Human image synthesis|Human image syntheses]]}}
----
= Institute for Creative Technologies =
The '''[[w:Institute for Creative Technologies|Institute for Creative Technologies]]''' was founded in 1999 at the [[w:University of Southern California|University of Southern California]] by the [[w:United States Army|United States Army]]. It collaborates with the [[w:United States Army Futures Command]], [[w:United States Army Combat Capabilities Development Command]], [[w:Combat Capabilities Development Command Soldier Center]] and [[w:United States Army Research Laboratory]].
[[File:Institute for Creative Technologies (logo).jpg|thumb|right|156px|Logo of the '''[[w:Institute for Creative Technologies|Institute for Creative Technologies]]''']]
----


= Light stage =
 
[[File:Deb2000-light-stage-low-res-rip.png|thumb|left|304px|Original [[w:light stage]] used in the 1999 reflectance capture by [[w:Paul Debevec|Debevec]] et al.<br /><br />
 
It consists of two rotary axes with height and radius control. A light source and a polarizer were placed on one arm, and a camera and the other polarizer on the other arm.
<br /><br />
<small>Original image by Debevec et al. – Copyright ACM 2000 – https://dl.acm.org/citation.cfm?doid=311779.344855 – Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page.</small>]]
 
[[File:ESPER_LightCage.jpg|thumb|right|300px|The ESPER LightCage, a 3D face scanning rig, is a modern [[w:light stage|light stage]]]]
{{Q|A '''light stage''' or '''light cage''' is equipment used for [[w:3D modeling|shape]], [[w:texture mapping|texture]], reflectance and [[w:motion capture|motion capture]] often with [[w:structured light|structured light]] and a [[w:multi-camera setup|multi-camera setup]].|Wikipedia|[[w:light stage|light stage]]s}}
----
= Media forensics =
'''Media forensics''' deals with ascertaining the authenticity of media.
{{Q|Wikipedia does not have an article on [[w:Media forensics]]|juboxi|2019-04-05}}


----
= Niqāb =
[[File:Muslim woman in Yemen.jpg|thumb|left|200px|Image of a human wearing a '''[[w:niqāb]]''']]
{{Q|A '''niqab''' or '''niqāb''' ("[face] veil"; also called a '''ruband''') is a garment of clothing that covers the face, worn by some [[w:muslim women|muslim women]] as a part of a particular interpretation of [[w:hijab|hijab]] (modest dress).|Wikipedia|[[w:Niqāb|Niqābs]]}}
----
= No camera =
'''No camera''' ('''!''') refers to the fact that a simulation of a camera is not a camera, and is therefore not bound by the restrictions that real cameras are subject to, e.g. the laws of physics and of physiology. Analogously see [[#No microphone]], usually seen below this entry.
----
= No microphone =
'''No microphone''' is needed when using synthetic voices, as the voice is simply modeled and never needs to be captured. Analogously see the entry [[#No camera]], usually seen above this entry.
----
= Reflectance capture =
'''Reflectance capture''' is done by measuring the reflected light for each incoming light direction and every exit direction, often at many different wavelengths. Using polarisers allows the specular and the diffuse reflected light to be captured separately. The first known reflectance capture of the human face was made in 1999 by Paul Debevec et al. at the [[w:University of Southern California]].
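The capture procedure can be sketched as a loop over light and camera directions. In the illustrative Python below, <code>move_light_to()</code>, <code>move_camera_to()</code> and <code>capture_frame()</code> are hypothetical stand-ins for a real rig's control and camera API; only the loop structure and the polariser-based separation follow the description above.

<syntaxhighlight lang="python">
# Purely illustrative reflectance capture loop; the rig/camera functions below
# are hypothetical stubs, not any real light stage's API.
import itertools
import numpy as np

def move_light_to(elev, azim):
    """Hypothetical rig control; a real light stage API would go here."""
    pass

def move_camera_to(elev, azim):
    """Hypothetical rig control."""
    pass

def capture_frame(polariser):
    """Hypothetical camera call; returns a fake 2x2 grayscale frame."""
    base = np.full((2, 2), 0.3)
    return base + (0.2 if polariser == "parallel" else 0.0)

# Sample the hemisphere of directions coarsely: (elevation, azimuth) pairs in degrees.
elevations = np.linspace(0, 80, 5)
azimuths = np.linspace(0, 315, 8)
directions = list(itertools.product(elevations, azimuths))

reflectance_table = {}
for light_dir, view_dir in itertools.product(directions, directions):
    move_light_to(*light_dir)
    move_camera_to(*view_dir)
    # Two exposures with parallel/crossed polarisers let diffuse and specular
    # reflection be separated afterwards.
    diffuse_plus_specular = capture_frame(polariser="parallel")
    diffuse_only = capture_frame(polariser="crossed")
    reflectance_table[(light_dir, view_dir)] = {
        "diffuse": diffuse_only,
        "specular": diffuse_plus_specular - diffuse_only,
    }
</syntaxhighlight>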
----
= Relighting =
[[File:Deb2000-relighting-low-res-cropped.png|thumb|right|420px|Each image shows a face under synthesized lighting. The lower images represent the captured illumination maps. The images are generated by taking a dot product of each pixel's reflectance function with the illumination map.<br /><br />
<small>Original image Copyright ACM 2000 – http://dl.acm.org/citation.cfm?doid=311779.344855 – Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page.</small>]]
'''Relighting''' means applying a completely different [[w:lighting]] situation to an image or video that has already been captured. As of 2020-09 the English Wikipedia does not have an article on relighting.
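The dot-product formulation in the image caption can be sketched in a few lines of NumPy. The arrays below are random stand-ins for captured data, not Debevec et al.'s data; the point is only that relighting reduces to a per-pixel dot product over the lighting directions.

<syntaxhighlight lang="python">
# Illustrative relighting sketch: each pixel stores a reflectance function
# sampled at n_lights basis directions; relighting is a dot product of that
# function with an illumination map sampled at the same directions.
import numpy as np

height, width, n_lights = 4, 4, 64

# Hypothetical captured data: one reflectance value per pixel per basis light.
reflectance = np.random.rand(height, width, n_lights)

# A novel illumination map: how much light arrives from each basis direction.
illumination = np.random.rand(n_lights)

# Relit image: per-pixel dot product over the lighting dimension.
relit = reflectance @ illumination          # shape (height, width)
print(relit.shape)
</syntaxhighlight>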
----
= Spectrogram =
[[File:Spectrogram-19thC.png|thumb|right|360px|A [[w:spectrogram|spectrogram]] of a male voice saying 'nineteenth century']]
'''[[w:Spectrogram]]s''' are used extensively in the fields of [[w:music]], [[w:linguistics]], [[w:sonar]], [[w:radar]], [[w:speech processing]], [[w:seismology]], and others. Spectrograms of audio can be used to identify spoken words [[w:phonetics|phonetic]]ally, and to analyse the [[w:Animal communication|various calls of animals]]. (Wikipedia)
----
= Speech synthesis =
{{Q|'''Speech synthesis''' is the artificial production of human [[w:speech|speech]]|Wikipedia|[[w:Speech synthesis|speech syntheses]]}}
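As a minimal illustration, the sketch below produces speech from text with <code>pyttsx3</code>, one example of an offline text-to-speech library for Python; any speech synthesis engine could be substituted.

<syntaxhighlight lang="python">
# Minimal text-to-speech sketch using the pyttsx3 library.
import pyttsx3

engine = pyttsx3.init()                  # picks a platform TTS backend
engine.setProperty("rate", 150)          # speaking rate in words per minute
engine.say("Speech synthesis is the artificial production of human speech.")
engine.runAndWait()                      # block until the utterance finishes
</syntaxhighlight>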
----
= Synthetic porn =
'''Synthetic pornography''' is a '''strong technological hallucinogen'''.
== Synthetic terror porn ==
'''Synthetic terror porn''' is pornography synthesized with terrorist intent. '''Synthetic rape porn''' is probably by far the most prevalent form of this, but it must be noted that synthesizing '''consensual-looking sex scenes''' can also be '''terroristic''' in intent and effect.
----
 
= Transfer learning =
{{Q|'''Transfer learning (TL)''' is a research problem in [[w:machine learning|machine learning]] (ML) that focuses on storing knowledge gained while solving one problem and applying it to a different but related problem.|Wikipedia|[[w:Transfer learning|Transfer learning]]}}
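To make this concrete, here is a minimal sketch using PyTorch and torchvision (the <code>weights=</code> API of torchvision ≥ 0.13): a network pre-trained on ImageNet is reused, its feature extractor frozen, and only a new two-class head is trained; the random batch stands in for a real dataset of the related problem.

<syntaxhighlight lang="python">
# Minimal transfer learning sketch: reuse ImageNet knowledge, retrain the head.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # knowledge from ImageNet

# Freeze the pre-trained feature extractor.
for param in model.parameters():
    param.requires_grad = False

# Replace the classifier head with one sized for the new, related problem.
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a random batch standing in for real data.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
loss = loss_fn(model(images), labels)
optimizer.zero_grad(); loss.backward(); optimizer.step()
print(float(loss))
</syntaxhighlight>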
 
----
 
= Voice changer =
{{Q|The term '''voice changer''' (also known as voice enhancer) refers to a device which can change the tone or pitch of or add distortion to the user's voice, or a combination and vary greatly in price and sophistication.|Wikipedia|[[w:Voice changer|voice changers]]}}
 
Please see '''[[Resources#List of voice changers]]''' for some alternatives.
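As a rough illustration of what the simplest voice changers do internally, the sketch below shifts pitch by naively resampling a waveform with NumPy and SciPy. The input and output file names are hypothetical, and a real voice changer would additionally preserve duration, e.g. with a phase vocoder; this toy version also speeds the audio up (the classic "chipmunk" effect).

<syntaxhighlight lang="python">
# Crude pitch-shifting sketch; file names are hypothetical placeholders.
import numpy as np
from scipy.io import wavfile

def naive_pitch_shift(samples, semitones):
    """Resample a mono waveform so its pitch moves by the given semitones."""
    factor = 2 ** (semitones / 12)                  # frequency ratio
    old_idx = np.arange(len(samples))
    new_idx = np.arange(0, len(samples), factor)    # shorter if pitched up
    return np.interp(new_idx, old_idx, samples.astype(np.float64))

fs, voice = wavfile.read("input_voice.wav")         # hypothetical mono recording
shifted = naive_pitch_shift(voice, semitones=4)
wavfile.write("output_voice.wav", fs, shifted.astype(np.int16))
</syntaxhighlight>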
 
----
 
= References =
<references />
