Main Page

From Stop Synthetic Filth! wiki
Image 1: Separating specular and diffuse reflected light

(a) Normal image in dot lighting

(b) Image of the diffuse reflection, which is caught by placing a vertical polarizer in front of the light source and a horizontal polarizer in front of the camera

(c) Image of the highlight specular reflection, which is caught by placing both polarizers vertically

(d) Subtraction of b from c, which yields the specular component

Images are scaled to appear to be of the same luminosity.

Original image by Debevec et al. – Copyright ACM 2000 – https://dl.acm.org/citation.cfm?doid=311779.344855 – Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page.
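Panel (d) amounts to a simple per-pixel subtraction: the cross-polarized (diffuse-only) capture is subtracted from the parallel-polarized capture, leaving the specular component. A minimal sketch in plain Python with toy 2×2 "images" (the pixel values are illustrative assumptions, not data from the original figure):

```python
# Toy 2x2 "images" (pixel values in [0, 1]; illustrative assumptions).
# Cross-polarized capture: diffuse reflection only (panel b).
diffuse = [[0.30, 0.40],
           [0.20, 0.50]]
# Parallel-polarized capture: diffuse plus specular (panel c).
full = [[0.90, 0.45],
        [0.25, 0.95]]

# Panel (d): per-pixel subtraction leaves the specular component;
# clamp at zero to guard against sensor noise.
specular = [[max(f - d, 0.0) for f, d in zip(f_row, d_row)]
            for f_row, d_row in zip(full, diffuse)]

for row in specular:
    print([round(p, 2) for p in row])
```

The clamp matters in practice: noise can make the diffuse capture locally brighter than the parallel capture, and a negative "specular" value is physically meaningless.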


Welcome to Stop Synthetic Filth! wiki - A wiki about discovering ways of stopping and minimizing the damage from synthetic human-like fakes (transcluded below), i.e. digital look-alikes and digital sound-alikes that result from covert modeling, i.e. the thieving of the human appearance and of the naked human voice.

Biblical explanation - The books of Daniel and Revelation, wherein, in Daniel 7 and Revelation 13, we are warned of this age of industrial filth, are for those who believe in Jesus.

Adequate Porn Watcher AI (transcluded below) is a concept for an AI to protect the humans against synthetic filth attacks by looking for porn that should not be.

How to protect yourself and others from covert modeling (transcluded below)

Law proposals to ban covert modeling | Current laws and their application | Atheist explanation | Resources | Glossary | About this wiki | SSF! Wordpress

SSF! wiki is an open non-profit public service by Juho Kunsola (my official website)


Introduction

Since the early 00's it has become (nearly) impossible to determine, in still or moving pictures, what is an image of a human imaged with a (movie) camera and what, on the other hand, is a simulation of an image of a human imaged with a simulation of a camera. When there is no camera and the target being imaged with a simulation looks deceptively like some real human, dead or living, it is a digital look-alike.

Now, in the late 2010's, the equivalent thing is happening to our voices, i.e. they can be stolen to some extent with the 2016 prototypes like w:Adobe Inc.'s w:Adobe Voco and w:Google's w:DeepMind w:WaveNet and made to say anything. When it is not possible to determine, by human testing or testing with technological means, what is a recording of some living or dead person's real voice and what is a simulation, it is a digital sound-alike. 2018 saw the publication of Google Research's sound-like-anyone machine (transcluded below) at the w:NeurIPS conference, and by the end of 2019 Symantec research had learned of 3 cases where digital sound-alike technology had been used for crimes.[1]

Therefore it is high time to act and to criminalize covert modeling of the naked human voice and synthesis from a covert voice model!

Covert modeling poses growing threats to

  1. The right to be the only one that looks like me (compromised by digital look-alikes)
  2. The right to be the only one able to make recordings that sound like me (compromised by digital sound-alikes)

And these developments have various severe effects on the right to privacy, provability by audio and video evidence and deniability.

Contents

Synthetic human-like fakes (transcluded)

Transcluded from Synthetic human-like fakes

When the camera does not exist, but the subject being imaged with a simulation of a (movie) camera deceives the watcher into believing it is some living or dead person, it is a digital look-alike.

When it cannot be determined by human testing whether some fake voice is a synthetic fake of some person's voice or an actual recording made of that person's actual real voice, it is a digital sound-alike.


Image 2 (low resolution rip)
(1) Sculpting a morphable model to one single picture
(2) Produces 3D approximation
(3) Texture capture
(4) The 3D model is rendered back to the image with weight gain
(5) With weight loss
(6) Looking annoyed
(7) Forced to smile
Image 2 by Blanz and Vetter – Copyright ACM 1999 – http://dl.acm.org/citation.cfm?doid=311535.311556 – Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page.
See Biblical explanation - The books of Daniel and Revelation for the advance warning for our time that we were given in the 6th century BC and then again in the 1st century.

'Saint John on Patmos' pictures w:John of Patmos on w:Patmos writing down the visions to make the w:Book of Revelation. Picture from folio 17 of the w:Très Riches Heures du Duc de Berry (1412-1416) by the w:Limbourg brothers. Currently located at the Musée Condé 40km north of Paris, France.

Digital look-alikes

A Computer Animated Hand is a 1972 short film by Edwin Catmull and Fred Parke. This was the first time that computer-generated imagery was used in film to animate a moving likeness of human appearance.

Introduction to digital look-alikes

Image 1 (above): subtraction of the diffuse reflection from the full (specular-inclusive) reflection yields the specular component of the model's reflectance.

Original picture by Debevec et al. - Copyright ACM 2000 - https://dl.acm.org/citation.cfm?doid=311779.344855

In the cinemas we have seen digital look-alikes for over 15 years. These digital look-alikes have "clothing" (a simulation of clothing is not clothing) or "superhero costumes" and "superbaddie costumes", and they don't need to care about the laws of physics, let alone the laws of physiology. It is generally accepted that digital look-alikes made their public debut in the sequels of The Matrix, i.e. w:The Matrix Reloaded and w:The Matrix Revolutions, released in 2003. It can be considered almost certain that it was not possible to make these before the year 1999, as the final piece of the puzzle needed to make a (still) digital look-alike that passes human testing, the reflectance capture over the human face, was achieved for the first time in 1999 at the w:University of Southern California and was presented to the crème de la crème of the computer graphics field at their annual gathering SIGGRAPH 2000.[2]


“Do you think that was Hugo Weaving's left cheekbone that Keanu Reeves punched in with his right fist?”

~ Trad on The Matrix Revolutions


The problems with digital look-alikes

Extremely unfortunately for humankind, organized criminal leagues that possess the weapons capability of making believable-looking synthetic pornography are producing synthetic terror porn[footnote 1] on industrial production pipelines, animating digital look-alikes and distributing the results in the murky Internet in exchange for money stacks that are getting thinner and thinner as time goes by.

These industrially produced pornographic delusions are causing great humane suffering, especially in their direct victims, but they are also tearing our communities and societies apart, sowing blind rage and perceptions of deepening chaos, creating feelings of powerlessness and provoking violence. This hate illustration increases and strengthens hate thinking, hate speech and hate crimes, tears our fragile social constructions apart and with time perverts humankind's view of humankind into an almost unrecognizable shape, unless we interfere with resolve.

For these reasons the bannable raw materials i.e. covert models, needed to produce this disinformation terror on the information-industrial production pipelines, should be prohibited by law in order to protect humans from arbitrary abuse by criminal parties.

List of possible naked digital look-alike attacks

  • The classic "portrayal of as if in involuntary sex"-attack. (Digital look-alike "cries")
  • "Sexual preference alteration"-attack. (Digital look-alike "smiles")
  • "Cutting / beating"-attack (Constructs a deceptive history for genuine scars)
  • "Mutilation"-attack (Digital look-alike "dies")
  • "Unconscious and injected"-attack (Digital look-alike gets "disease")

Digital sound-alikes

Living people can defend[footnote 2] themselves against digital sound-alikes by denying the things a digital sound-alike says, if they are presented to the target, but dead people cannot. Digital sound-alikes offer criminals new disinformation attack vectors and wreak havoc on provability.


'Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis' 2018 by Google Research (external transclusion)

The iframe below is transcluded from 'Audio samples from "Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis"' at google.github.io, the audio samples of a sound-like-anyone machine presented at the 2018 w:NeurIPS conference by Google researchers.

Observe how good the "VCTK p240" system is at deceiving a listener into thinking that a person is doing the talking.

A picture of a cut-away titled "Voice-terrorist could mimic a leader" from a 2012 w:Helsingin Sanomat warning that the sound-like-anyone machines are approaching. Thank you to homie Prof. David Martin Howard of the w:University of York, UK and the anonymous editor for the heads-up.

Example of a hypothetical 4-victim digital sound-alike attack

A very simple example of a digital sound-alike attack is as follows:

Someone puts a digital sound-alike to call somebody's voicemail from an unknown number and to speak, for example, illegal threats. In this example there are at least four victims:

  1. Victim #1 - The person whose voice has been stolen into a covert model and a digital sound-alike made from it to frame them for crimes
  2. Victim #2 - The person to whom the illegal threat is presented in a recorded form by a digital sound-alike that deceptively sounds like victim #1
  3. Victim #3 - It could also be viewed that victim #3 is our law enforcement systems as they are put to chase after and interrogate the innocent victim #1
  4. Victim #4 - Our judiciary which prosecutes and possibly convicts the innocent victim #1.

Thus it is high time to act and to criminalize the covert modeling of human appearance and voice!

Examples of speech synthesis software not quite able to fool a human yet

Some other contenders to create digital sound-alikes exist, though as of 2019 their speech synthesis in most use scenarios does not yet fool a human, because the results contain tell-tale signs that give them away as a speech synthesizer.

Reporting on the sound-like-anyone-machines


Documented digital sound-alike attacks


The below video 'This AI Clones Your Voice After Listening for 5 Seconds' by '2 minute papers' describes the voice thieving machine presented by Google Research at NeurIPS 2018.
A spectrogram of a male voice saying 'nineteenth century'
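A spectrogram like the one pictured is a sequence of short-time Fourier transforms of the audio. A minimal sketch with NumPy on a synthetic tone (the 440 Hz signal, sample rate and window sizes are assumptions, not the pictured recording):

```python
import numpy as np

sample_rate = 8000                         # Hz (assumed)
t = np.arange(sample_rate) / sample_rate
signal = np.sin(2 * np.pi * 440.0 * t)     # one second of a 440 Hz tone

frame = 256                                # samples per analysis window
hop = 128                                  # step between windows
window = np.hanning(frame)

# Short-time Fourier transform: one magnitude spectrum per frame.
frames = [signal[i:i + frame] * window
          for i in range(0, len(signal) - frame + 1, hop)]
spectrogram = np.abs(np.fft.rfft(frames, axis=1))

# The loudest frequency bin should sit near 440 Hz
# (bin resolution here is sample_rate / frame = 31.25 Hz).
peak_bin = spectrogram.mean(axis=0).argmax()
print(peak_bin * sample_rate / frame)      # close to 440 Hz
```

Plotting `spectrogram` with time on one axis and frequency on the other gives the familiar picture; a voice shows up as stacked harmonics rather than a single line.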

Text synthesis

w:Chatbots have existed for a long time, but only now, armed with AI, are they becoming more deceptive.

In w:natural language processing, development in w:natural-language understanding leads to more cunning w:natural-language generation AI.

w:OpenAI's Generative Pre-trained Transformer (GPT) is a left-to-right transformer-based text generation model, succeeded by GPT-2 and GPT-3.
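"Left-to-right" generation, the scheme the GPT family uses, means each token is predicted from only the tokens before it. A toy bigram sketch in Python makes the principle concrete (the training sentence and greedy decoding are illustrative assumptions; a real transformer conditions on the whole prefix with learned weights, not a count table):

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Count, for every word, which words follow it (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length):
    """Greedy left-to-right generation: always emit the most
    frequent continuation of the last emitted word."""
    out = [start]
    for _ in range(length - 1):
        options = follows.get(out[-1])
        if not options:
            break                      # no known continuation
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(generate("the", 4))
```

Replacing the count table with a neural network that scores every possible next token, and greedy picking with sampling, gives the GPT-style recipe in outline.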

Reporting / announcements

External links


Countermeasures against synthetic human-like fakes

Organizations against synthetic human-like fakes

The Defense Advanced Research Projects Agency, better known as DARPA, has been active in the field of countering synthetic fake video for longer than the public has been aware that the problems exist.
California Senator Connie Leyva introduced California Senate Bill SB 564 in Feb 2019. It has been endorsed by SAG-AFTRA, but has not yet passed.

Events against synthetic human-like fakes

Studies against synthetic human-like fakes

Search for more

Companies against synthetic human-like fakes


SSF! wiki proposed countermeasure to synthetic porn: Adequate Porn Watcher AI (transcluded)

Transcluded from Adequate Porn Watcher AI

Adequate Porn Watcher AI (APW_AI) is a working title for a w:AI to watch and model all porn ever found on the Internet, in order to police porn for contraband and especially to protect humans by exposing digital look-alike attacks.

The purpose of the APW_AI is to provide safety and security to its users, who can briefly upload a model they've gotten of themselves; the APW_AI will then either say that nothing matching was found or be of the opinion that something matching was found.

If people are able to check whether there is synthetic porn that looks like themselves, the synthetic hate-illustration industrialists' products lose destructive potential, and the attacks that do happen are less destructive, as they are exposed by the APW_AI; this decimates the monetary value of these disinformation weapons to the criminals.

Looking up whether matches are found for anyone else's model is forbidden, and this should probably be enforced with a facial biometric app that checks that the model you want checked is yours and that you are awake.

If you feel comfortable leaving your model with the good people at the benefactor for safekeeping, you get alerted and helped if you ever get attacked with a synthetic porn attack.

People who openly do porn can help by opting-in to help in the development by providing training material and material to test the AI on. People and companies who help in training the AI naturally get credited for their help.

An adequate implementation should be nearly free of false positives, very good at finding true positives and able to process more porn than is ever uploaded.
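The matching step described above can be pictured as nearest-neighbour search over appearance models with a strict similarity threshold, so false positives stay near zero. A hypothetical sketch (the embedding vectors, the `check_own_model` helper and the threshold are all assumptions for illustration, not a specification of the APW_AI):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy appearance embeddings for material the AI has already modeled.
indexed_models = {
    "clip-0001": [0.9, 0.1, 0.3],
    "clip-0002": [0.1, 0.8, 0.4],
}

def check_own_model(user_model, threshold=0.98):
    """Return matches only above a strict threshold, so the answer
    stays 'nothing matching found' unless a match is near certain."""
    return [clip for clip, emb in indexed_models.items()
            if cosine_similarity(user_model, emb) >= threshold]

print(check_own_model([0.89, 0.11, 0.31]))   # close to clip-0001
print(check_own_model([0.0, 1.0, 0.0]))      # matches nothing
```

The high threshold is the "nearly free of false positives" requirement in code form; processing "more porn than is ever uploaded" is then a throughput question for the indexing side.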

There are of course lots of people-questions to this and those questions need to be identified by professionals of psychology and social sciences.


Possible legal response: Outlawing digital sound-alikes (transcluded)

Transcluded from Juho's proposal on banning digital sound-alikes


§1 Covert modeling of a human voice

Acquiring a model of a human voice that deceptively resembles some dead or living person's voice, as well as possessing, purchasing, selling, yielding, importing or exporting such a model, without the express consent of the target, is punishable.

§2 Application of covert voice models

Producing and making available media from a covert voice model is punishable.

§3 Aggravated application of covert voice models

If the produced media is used for the purpose of framing a human target or targets for crimes, or of defaming the target, the crime should be judged as aggravated.


Timeline of synthetic human-like fakes

2020's synthetic human-like fakes

Homie Marc Berman, a righteous fighter for our human rights in this age of industrial disinformation filth and a member of the w:California State Assembly, is most loved for authoring AB-602, which came into effect on Jan 1, 2020, banning both the manufacturing and distribution of synthetic pornography without the w:consent of the people depicted.
In Event of Moon Disaster - FULL FILM (2020) by the moondisaster.org project by the Center for Advanced Virtuality of the MIT

2010's synthetic human-like fakes

  • 2019 | US state law | Since September 1, w:Texas senate bill SB 751 w:amendments to the election code have been in effect, giving w:candidates in w:elections a 30-day protection period before the election, during which making and distributing digital look-alikes or synthetic fakes of the candidates is an offense. The law text defines the subject of the law as "a video, created with the intent to deceive, that appears to depict a real person performing an action that did not occur in reality"[10]
  • 2019 | demonstration | 'Thispersondoesnotexist.com' (since February 2019) by Philip Wang. It showcases a w:StyleGAN at the task of making an endless stream of pictures that look like no-one in particular, but are eerily human-like. Relevancy: certain
Google's logo. Google Research demonstrated their sound-like-anyone-machine at the 2018 Conference on Neural Information Processing Systems (NeurIPS). It requires only a 5-second sample to steal a voice.
  • 2018 | controversy / demonstration | The w:deepfakes controversy surfaces, where porn videos were doctored utilizing deep machine learning so that the face of the actress was replaced by the software's opinion of what another person's face would look like in the same pose and lighting.
w:Adobe Inc.'s logo. We can thank Adobe for publicly demonstrating their sound-like-anyone-machine in 2016 before an implementation was sold to criminal organizations.
  • 2013 | demonstration | At the 2013 SIGGRAPH, w:Activision and USC presented "Digital Ira" in real time, a digital face look-alike of Ari Shapiro, an ICT USC research scientist,[19] utilizing the USC light stage X by Ghosh et al. for both reflectance field and motion capture.[20] The end result, both precomputed and rendered in real time on the most modern game GPUs of the time, looks fairly realistic.

2000's synthetic human-like fakes

  • 2009 | movie | A digital look-alike of a younger w:Arnold Schwarzenegger was made for the movie w:Terminator Salvation though the end result was critiqued as unconvincing. Facial geometry was acquired from a 1984 mold of Schwarzenegger.
  • 2009 | demonstration | Paul Debevec: 'Animating a photo-realistic face' at ted.com Debevec et al. presented new digital likenesses, made by w:Image Metrics, this time of actress w:Emily O'Brien, whose reflectance was captured with the USC light stage 5. At 00:04:59 you can see two clips, one with the real Emily shot with a real camera and one with a digital look-alike of Emily, shot with a simulation of a camera - which is which is difficult to tell. Bruce Lawmen was scanned using USC light stage 6 in a still position and also recorded running there on a w:treadmill. Many, many digital look-alikes of Bruce are seen running fluently and looking natural in the ending sequence of the TED talk video.[21] The motion looks fairly convincing, contrasted to the clunky run in the Animatrix: Final Flight of the Osiris, which was w:state-of-the-art in 2003 if photorealism was the intention of the w:animators.
  • 2002 | music video | 'Bullet' by Covenant on Youtube by w:Covenant (band) from their album w:Northern Light (Covenant album). Relevancy: Contains the best upper-torso digital look-alike of Eskil Simonsson (vocalist) that their organization could procure at the time. Here you can observe the classic "skin looks like cardboard"-bug (assuming this was not intended) that thwarted efforts to make digital look-alikes that pass human testing before the reflectance capture and dissection in 1999 by w:Paul Debevec et al. at the w:University of Southern California, and the subsequent development of the "Analytical BRDF" (quote-unquote) by ESC Entertainment, a company set up for the sole purpose of making the cinematography for the 2003 films Matrix Reloaded and Matrix Revolutions possible, led by George Borshukov.

1990's synthetic human-like fakes

Traditional BRDF vs. subsurface scattering inclusive BSSRDF i.e. Bidirectional scattering-surface reflectance distribution function.

An analytical BRDF must take into account the subsurface scattering, or the end result will not pass human testing.
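For intuition, a BRDF is a function that says how much light a surface point reflects for given light and view directions. The sketch below contrasts a plain Lambertian diffuse term with "wrap lighting", a cheap trick sometimes used to fake the softness that subsurface scattering gives skin (the wrap term is an illustrative assumption for this page, not the analytical BRDF mentioned above):

```python
def lambert(n_dot_l):
    """Plain Lambertian diffuse term: hard cutoff at the terminator
    (where the surface normal becomes perpendicular to the light)."""
    return max(n_dot_l, 0.0)

def wrapped_lambert(n_dot_l, wrap=0.5):
    """Wrap lighting: light 'bleeds' past the terminator, a cheap
    stand-in for the softness subsurface scattering gives skin."""
    return max((n_dot_l + wrap) / (1.0 + wrap), 0.0)

# Just past the terminator (n·l slightly negative): plain Lambert
# is pitch black while the wrapped term still glows faintly.
print(lambert(-0.1), wrapped_lambert(-0.1))
```

The hard black terminator of the plain term is exactly what makes skin rendered with a surface-only BRDF look like painted plastic; a full solution uses a BSSRDF that models light entering at one point and exiting at another.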
  • 1994 | movie | The Crow was the first film production to make use of w:digital compositing of a computer simulated representation of a face onto scenes filmed using a w:body double. Necessity was the muse, as the actor w:Brandon Lee, portraying the protagonist, was tragically killed accidentally on set.

1970's synthetic human-like fakes

  • 1976 | movie | w:Futureworld reused parts of A Computer Animated Hand on the big screen.

1770's synthetic human-like fakes

A replica of Kempelen's speaking machine, built 2007–09 at the Department of Phonetics, Saarland University, Saarbrücken, Germany. This machine added models of the tongue and lips, enabling it to produce consonants as well as vowels

Media perhaps about synthetic human-like fakes

This is a chronological listing of media that probably have to do with synthetic human-like fakes.

The links currently include scripture, science, demonstrations, music videos, music, entertainment and movies.

2020's media perhaps about synthetic human-like fakes

  • 2022 | movie | w:The Matrix 4 (2022) will be the 4th installment of the w:The Matrix (franchise). Relevancy: High likelihood of relevance, but unknown as this film is not yet ready or released.

2010's media perhaps about synthetic human-like fakes

2000's media perhaps about synthetic human-like fakes

1990's media perhaps about synthetic human-like fakes

1980's media perhaps about synthetic human-like fakes

1st century media perhaps about synthetic human-like fakes


3rd century BC media perhaps about synthetic human-like fakes

6th century BC media perhaps about synthetic human-like fakes

Image taken from Silos Apocalypse. Originally published/produced in Spain (Silos), 1109.

Daniel 7, Daniel's vision of the three beasts Dan 7:1-6 and the fourth beast Dan 7:7-8 from the sea and the Ancient of DaysDan 7:9-10

Footnotes

  1. It is terminologically more precise, more inclusive and more useful to talk about 'synthetic terror porn', if we want to call things by their real names, than about 'synthetic rape porn', because synthesizing recordings of consensual-looking sex scenes can also be terroristic in intent.
  2. Whether a suspect can defend against faked synthetic speech that sounds like him/her depends on how up-to-date the judiciary is. If no information and instructions about digital sound-alikes have been given to the judiciary, they likely will not believe the defense of denying that the recording is of the suspect's voice.

1st seen in

References

  1. 1.0 1.1 https://www.washingtonpost.com/technology/2019/09/04/an-artificial-intelligence-first-voice-mimicking-software-reportedly-used-major-theft/
  2. 2.0 2.1 Template:Cite book
  3. Template:Cite web
  4. https://web.archive.org/web/20160630154819/https://www.darpa.mil/program/media-forensics
  5. https://web.archive.org/web/20191108090036/https://www.darpa.mil/program/semantic-forensics
  6. Template:Cite web
  7. Template:Cite web
  8. Template:Cite web
  9. Template:Cite web
  10. Template:Cite web
  11. Template:Cite web
  12. 12.0 12.1 Template:Cite web
  13. Template:Cite web
  14. Template:Cite web
  15. Template:Cite web
  16. Template:Cite web
  17. Template:Citation
  18. Template:Cite web
  19. Template:Cite AV media
  20. Template:Cite web
  21. In this TED talk video at 00:04:59 you can see two clips, one with the real Emily shot with a real camera and one with a digital look-alike of Emily, shot with a simulation of a camera - Which is which is difficult to tell. Bruce Lawmen was scanned using USC light stage 6 in still position and also recorded running there on a w:treadmill. Many, many digital look-alikes of Bruce are seen running fluently and natural looking at the ending sequence of the TED talk video.
  22. Template:Cite web
  23. https://ict.usc.edu/about/
  24. Template:Cite web
  25. Mechanismus der menschlichen Sprache nebst der Beschreibung seiner sprechenden Maschine ("Mechanism of the human speech with description of its speaking machine", J. B. Degen, Wien).
  26. History and Development of Speech Synthesis, Helsinki University of Technology, Retrieved on November 4, 2006

How to protect yourself and others from covert modeling (transcluded)

Transcluded from How to protect yourself and others from covert modeling

Do not agree and do not be fooled into having your reflectance field captured on a light stage, such as the ESPER LightCage in the picture.

“I feel pretty confident that mister photograph man will not be selling much of my data to the no camera scene.”

~ Honestly made up quote on the protecting power of e.g. niqāb

“If your whole industry's shared secret is that digital look-alikes are going to pass human testing (i.e. people in the delusion that they are seeing images of humans) then popular culture product such as w:The Matrix will appear. It is widely known that meetings of surfaces, especially soft ones are very very difficult to do convincingly. Look-alikes of eyes meeting look-alikes of eye-lids are additionally hard to do, because there is also a liquid phase in the equation.”

~ Juboxi on sunglasses


“Sunglasses or sun glasses (informally called shades) are a form of w:protective eyewear designed primarily to prevent bright w:sunlight and w:high-energy visible light from damaging or discomforting the eyes.”

~ Wikipedia on sunglasses
Some humans in burqas at the Bornholm burqa happening

Help in case of appearance theft

Information on removing involuntary fake pornography from Google at support.google.com if it shows up in Google. Form for removing involuntary fake pornography at support.google.com, select 'I want to remove: A fake nude or sexually explicit picture or video of myself'

Google added “involuntary synthetic pornographic imagery” to its ban list in September 2018, allowing anyone to request the search engine block results that falsely depict them as “nude or in a sexually explicit situation.”[1]


On 3rd of October 2019 California outlawed, with AB-602, the use of w:human image synthesis technologies to make fake pornography without the consent of the people depicted. The law was authored by Assembly member w:Marc Berman.[2]


Protect your appearance from covert modeling

  • Avoid uploading facial and full body photos and video of yourself to services where they are exposed to the whole Internet.
  • If you need to upload photos, wear protective clothing, e.g. niqāb or burqa or protective accessories e.g. sunglasses.
  • Consider getting a non-photorealistic w:avatar of your liking and use pictures of it to shield your appearance.
  • Do not agree or get fooled to having your reflectance captured in a light stage.

Protect your voice from covert modeling

  • Avoid uploading unaltered recordings of your w:human voice to services where they are exposed to the whole Internet.
  • Consider altering the voice of your recordings if you must upload to the Internet with a voice changer or synthetic voice that does not match any human's voice.
  • Avoid getting recorded by parties whose identity and reliability you cannot verify, especially if they do not expressly state how, where and for what purpose they will use the recording.
  • Ask for a voice changer to be applied if getting recorded for something that will be publicly broadcast.
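The simplest voice changers alter pitch by resampling. A naive pure-Python sketch using linear interpolation (the resampling factor and toy samples are assumptions; real voice changers use formant-preserving methods so the result stays intelligible while no longer matching the speaker's voice):

```python
def resample(samples, factor):
    """Read the signal at 'factor' times normal speed via linear
    interpolation; factor > 1 raises pitch (and shortens the audio)."""
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        # Blend the two neighbouring samples around the read position.
        out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)
        pos += factor
    return out

samples = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]  # one toy cycle
shifted = resample(samples, 1.25)     # read ~25% faster -> higher pitch
print(len(samples), len(shifted))     # the shifted copy is shorter
```

Naive resampling shifts formants along with pitch, which is why it sounds "chipmunked"; that is fine for the protective purpose here, since the goal is precisely that the published audio no longer matches your voice.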

Protect your mind from the products of covert modeling

  • Teach your loved ones the 2 key media literacy skills for this age of industrial disinformation:
    1. Not everything that looks like a video of people is actually a video of people
    2. Not everything that sounds like a recording of a known human's voice is actually a recording of that person's voice.
  • Don't watch porn. Visiting a dodgy porn site or a few carries a hefty risk of seeing some digital look-alikes.
  • Be critical of gossip about stuff claimed seen on the Internet.

Protect others from the products of covert modeling

How to protect the humankind from products of covert appearance modeling

Adequate Porn Watcher AI is the name of a concept in development for an AI to look at and model all findable porn, to protect humankind and individual humans by modeling what it sees.

The reason why APW_AI makes sense is that, if you can trust the service providers to keep your own model safe, it will alert you when it finds something that really looks like a match, thereby lifting the looming threat of digital look-alike attacks, making the attacks that do take place considerably less destructive and thus also lowering their monetary value to the criminals.

How to protect our societies from covert modeling

Contact your representatives

Contact your representatives and ask them ...

  • If they are aware of these new classes of disinformation weapons?
  • What is their position on the question of criminalizing covert modeling?
  • If steps are being taken to protect the judiciary from covert modeling?
  • What, if anything, they are doing to put this hyper-modern lawlessness under some check?
  • To talk with colleagues and also publicly about the problems caused by covert modeling.

If they don't believe you, ask them to...

  • Work for the creation of a fact-finding taskforce to ascertain the truth that synthetic terror porn has already been used as a weapon for a long time.
  • Create a fact-finding taskforce to determine whether implementations of digital sound-alike technology are available to criminals and whether they are being used to commit crimes.

How to protect judiciaries from covert modeling

Digital look-alikes and digital sound-alikes technologies prompt some changes to w:rules of evidence and updates to what should be deemed deniable.

Recordings that sound like someone saying something may not be genuine, and therefore the suspect should be allowed to state to the court: "I never said that thing you have on tape."

Pictures and videos that look like someone doing something may not be genuine, and therefore the suspect should be allowed to state to the court: "I am not in that image/video."

If media forensics proves beyond suspicion the genuineness of the media in question, or if a credible witness to its creation is found, the media should be considered evidence.

References

Stop Synthetic Filth! wiki is hosted in Finland by Juho Kunsola at a hosting business that uses electricity from renewable sources only. (Check)