Glossary

This is the BMC! wiki glossary. See Resources for examples that you will often find linked, collected for your convenience.

Appearance and voice theft

Appearance is stolen with digital look-alikes and voice is stolen with digital sound-alikes. These are new and very extreme forms of identity theft. Ban covert modeling, as well as possessing or doing anything with a model of a human's voice, but do not ban the Adequate Porn Watcher AI.


Bidirectional reflectance distribution function

 
Diagram showing vectors used to define the w:BRDF.

“The bidirectional reflectance distribution function (BRDF) is a function of four real variables that defines how light is reflected at an opaque surface. It is employed in the optics of real-world light, in computer graphics algorithms, and in computer vision algorithms.”

~ Wikipedia on BRDF


A BRDF model is a 7-dimensional model containing the geometry, textures and reflectance of the subject.

The seven dimensions of the BRDF model are as follows (see the sketch after this list):

  • 3 spatial dimensions (Cartesian X, Y, Z)
  • 2 for the entry angle of the light
  • 2 for the exit angle of the light.
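
A minimal sketch of how such a 7-parameter reflectance function might look in code; the Lambertian-plus-Phong material and all parameter names here are illustrative assumptions, not part of any actual captured model:

```python
import numpy as np

def spherical_to_dir(theta, phi):
    """Convert (theta, phi) angles into a unit direction vector."""
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def brdf(x, y, z, theta_in, phi_in, theta_out, phi_out,
         albedo=0.8, specular=0.2, shininess=32):
    """Toy 7-parameter reflectance: 3 spatial coordinates plus 2 angles for
    the incoming and 2 for the outgoing light direction. A real captured
    model would look up measured data at (x, y, z) instead of using the
    constant material parameters assumed here."""
    n = np.array([0.0, 0.0, 1.0])                  # surface normal (local frame)
    w_in = spherical_to_dir(theta_in, phi_in)      # incoming light direction
    w_out = spherical_to_dir(theta_out, phi_out)   # outgoing (view) direction
    diffuse = albedo / np.pi                       # Lambertian term
    r = 2 * np.dot(n, w_in) * n - w_in             # mirror reflection of w_in
    spec = specular * max(np.dot(r, w_out), 0.0) ** shininess
    return diffuse + spec

# Light arriving 30 degrees off the normal, viewed from the mirror direction.
print(brdf(0.0, 0.0, 0.0, np.radians(30), 0.0, np.radians(30), np.pi))
```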

Burqa

 
Some humans in w:burqas at the Bornholm burka happening

“A burqa, also known as chadri or paranja in Central Asia, is an enveloping outer garment worn by women in some Islamic traditions to cover themselves in public, which covers the body and the face.”

~ Wikipedia on burqas



Covert modeling

Covert modeling refers to covertly modeling aspects of a subject, i.e. without express consent.

The main known cases are:

  • Covertly modeling the human appearance into a 7-dimensional [[#Bidirectional reflectance distribution function|bidirectional reflectance distribution function]] model or another type of model.
  • Covertly modeling the human voice.

There is ongoing work to model e.g. a human's style of writing, but this is probably not as drastic a threat as the covert modeling of appearance and voice.


Deepfake

“Deepfake (a portmanteau of "deep learning" and "fake") is a technique for human image synthesis based on artificial intelligence. It is used to combine and superimpose existing images and videos onto source images or videos using a machine learning technique called a "generative adversarial network" (GAN).”

~ Wikipedia on Deepfakes



Digital look-alike

When the camera does not exist, but the subject being imaged with a simulation of a (movie) camera deceives the watcher into believing it is some living or dead person, it is a digital look-alike.


Digital sound-alike

When human testing cannot determine whether a synthesized recording is a simulation of some person's speech or a recording made of that person's actual voice, it is a digital sound-alike.


Generative adversarial network

“A generative adversarial network (GAN) is a class of machine learning systems. Two neural networks contest with each other in a zero-sum game framework. This technique can generate photographs that look at least superficially authentic to human observers,[1][2] having many realistic characteristics. It is a form of unsupervised learning.[3]”

~ Wikipedia on generative adversarial networks
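
The following is a minimal, hedged sketch of the idea described above, written with PyTorch (an assumption; the original papers use their own setups): a generator learns to imitate a simple "real" data distribution while a discriminator learns to tell real from generated samples.

```python
import torch
import torch.nn as nn

# Toy GAN: the generator maps noise to 1-D samples, the discriminator
# outputs the probability that a sample is real. They are trained
# against each other, as in the zero-sum game described above.
latent_dim = 8
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(500):
    # "Real" data: samples from a Gaussian the generator should learn to imitate.
    real = torch.randn(64, 1) * 2.0 + 3.0
    fake = G(torch.randn(64, latent_dim))

    # Discriminator step: label real samples 1, generated samples 0.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Generator step: try to make the discriminator output 1 for fakes.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()
```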



Human image synthesis

“Human image synthesis can be applied to make believable and even photorealistic renditions of human-likenesses, moving or still. This has effectively been the situation since the early 2000s. Many films using computer generated imagery have featured synthetic images of human-like characters digitally composited onto the real or other simulated film material.”

~ Wikipedia on Human image syntheses



Light stage

 
The ESPER LightCage - 3D face scanning rig is a modern w:light stage

“A light stage or light cage is equipment used for shape, texture, reflectance and motion capture often with structured light and a multi-camera setup.”

~ Wikipedia on light stages


Media forensics

Media forensics deals with ascertaining the authenticity of media.

“Wikipedia does not have an article on w:Media forensics”

~ juboxi on 2019-04-05



Niqāb

 
Image of a human wearing a w:niqāb

“A niqab or niqāb ("[face] veil"; also called a ruband) is a garment of clothing that covers the face, worn by some Muslim women as a part of a particular interpretation of hijab (modest dress).”

~ Wikipedia on Niqābs



No camera

No camera (!) refers to the fact that a simulation of a camera is not a camera. People should realize the difference, and thus the different restrictions imposed by many kinds of laws, e.g. those of physics and physiology. Analogously, see #No microphone, usually found below this entry.


No microphone

No microphone is needed when using synthetic voices, as the voice is simply modeled rather than captured. Analogously, see the entry #No camera, usually found above this entry.


Reflectance capture

Reflectance capture is done by measuring the reflected light for each incoming light direction and every exit direction, often at many different wavelengths. Using polarisers allows the specular and the diffuse reflected light to be captured separately. The first known reflectance capture of the human face was made in 1999 by Paul Debevec et al. at the w:University of Southern California.
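
A rough sketch of the bookkeeping such a capture implies, assuming hypothetical, discretised measurements; the array layout and the function name are illustrative only and do not describe the actual pipeline used by Debevec et al.:

```python
import numpy as np

# Discretise the hemisphere of incoming and outgoing light directions.
n_theta, n_phi = 16, 32

# One reflectance value per (incoming direction, outgoing direction) pair,
# stored separately for the diffuse and specular components that the
# polarisers allow to be split apart.
diffuse = np.zeros((n_theta, n_phi, n_theta, n_phi))
specular = np.zeros((n_theta, n_phi, n_theta, n_phi))

def record_sample(table, theta_in, phi_in, theta_out, phi_out, value):
    """Store one measured reflectance value in the 4-angle table."""
    ti = min(int(theta_in / (np.pi / 2) * n_theta), n_theta - 1)
    pin = min(int(phi_in / (2 * np.pi) * n_phi), n_phi - 1)
    to = min(int(theta_out / (np.pi / 2) * n_theta), n_theta - 1)
    pout = min(int(phi_out / (2 * np.pi) * n_phi), n_phi - 1)
    table[ti, pin, to, pout] = value

# Example: one (made-up) measurement with the light almost overhead.
record_sample(diffuse, 0.1, 0.0, 0.3, 1.0, 0.42)
```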


Speech synthesis

“Speech synthesis is the artificial production of human speech.”

~ Wikipedia on speech syntheses
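
As a small illustration of the concept, the sketch below uses the third-party pyttsx3 library (an assumption; any other text-to-speech engine would do) to produce speech from text alone:

```python
import pyttsx3  # assumed to be installed: pip install pyttsx3

engine = pyttsx3.init()   # pick the platform's default text-to-speech backend
engine.say("Speech synthesis is the artificial production of human speech.")
engine.runAndWait()       # block until the utterance has been spoken aloud
```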



Synthetic terror porn

Synthetic terror porn is pornography synthesized with terrorist intent. Synthetic rape porn is probably by far the most prevalent form of this, but it must be noted that synthesizing consensual-looking sex scenes can also be terroristic in intent and effect.


Voice changer

“The term voice changer (also known as voice enhancer) refers to a device which can change the tone or pitch of, or add distortion to, the user's voice, or do a combination of these; such devices vary greatly in price and sophistication.”

~ Wikipedia on voice changers


Please see Resources#List of voice changers for some alternatives.
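
For illustration, a crude pitch change can be sketched in a few lines of Python; this naive resampling approach is an assumption for demonstration only and, unlike a proper voice changer, it also changes the duration of the audio:

```python
import numpy as np

def naive_pitch_shift(samples, factor):
    """Crude pitch change by resampling: factor > 1 raises the pitch,
    factor < 1 lowers it. A real voice changer would preserve timing."""
    positions = np.arange(0, len(samples) - 1, factor)
    return np.interp(positions, np.arange(len(samples)), samples)

# Demo on a synthetic 220 Hz tone standing in for a voice recording.
sample_rate = 16000
t = np.arange(sample_rate) / sample_rate        # one second of audio
voice = np.sin(2 * np.pi * 220 * t)
higher = naive_pitch_shift(voice, 1.5)          # plays back a fifth higher
print(len(voice), len(higher))
```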


References

  1. Cite error: Invalid <ref> tag; no text was provided for refs named GANnips
  2. Goodfellow, Ian; Pouget-Abadie, Jean; Mirza, Mehdi; Xu, Bing; Warde-Farley, David; Ozair, Sherjil; Courville, Aaron; Bengio, Yoshua (2014). "Generative Adversarial Networks". arXiv:1406.2661 [cs.LG].
  3. Salimans, Tim; Goodfellow, Ian; Zaremba, Wojciech; Cheung, Vicki; Radford, Alec; Chen, Xi (2016). "Improved Techniques for Training GANs". arXiv:1606.03498 [cs.LG].