Synthetic human-like fakes

When no camera ever existed, but a simulation of a (movie) camera produces imagery that deceives the viewer into believing it shows some living or dead person, that imagery is a digital look-alike.

When it cannot be determined by human testing or media forensics whether a fake voice is a synthetic fake of some person's voice or an actual recording of that person's real voice, it is a pre-recorded digital sound-alike. | Read more about synthetic human-like fakes, examine the timeline of synthetic human-like fakes or view the Mediatheque


This is not a picture of Obama: the video this screenshot comes from does not show Obama, but a synthetic human-like fake, more precisely a pre-recorded digital look-alike.

Click on the picture, or on Obama's appearance thieved - a public service announcement digital look-alike by Monkeypaw Productions and Buzzfeed, to view an April 2018 public service announcement: a moving digital look-alike made to appear Obama-like. The video is accompanied by an imitator sound-alike and was made by w:Monkeypaw Productions (.com) in conjunction with w:BuzzFeed (.com). You can also view the same video at YouTube.com.[1]
Image 2 (low-resolution rip) shows a 1999 technique for sculpting a morphable model until it matches the target's appearance:
(1) Sculpting a morphable model to one single picture
(2) Produces a 3D approximation
(3) The 3D model is rendered back to the image with weight gain
(4) Texture capture
(5) With weight loss
(6) Looking annoyed
(7) Forced to smile
Image 2 by Blanz and Vetter – Copyright ACM 1999 – http://dl.acm.org/citation.cfm?doid=311535.311556 – Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page.
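As a rough summary of the 1999 technique (a sketch of the idea, not Blanz and Vetter's exact notation): the morphable model expresses any face as a linear combination of example face shapes and textures, and fitting is analysis-by-synthesis — render the current guess and adjust the coefficients until the rendering matches the photograph.

\[ S(\alpha) = \bar{S} + \sum_{i=1}^{m} \alpha_i \, s_i \qquad T(\beta) = \bar{T} + \sum_{i=1}^{m} \beta_i \, t_i \]

\[ \min_{\alpha, \beta, \rho} \; \sum_{x,y} \big\| I_{\text{input}}(x,y) - I_{\text{rendered}}(x,y;\, \alpha, \beta, \rho) \big\|^2 + \lambda \, E_{\text{prior}}(\alpha, \beta) \]

Here \( \rho \) collects rendering parameters such as pose and illumination, and the prior term keeps the recovered coefficients face-like.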

Digital look-alikes

It is recommended that you watch In Event of Moon Disaster - FULL FILM (2020) at the moondisaster.org project website (where it has interactive portions); it was made by the Center for Advanced Virtuality of w:MIT.


Introduction to digital look-alikes

Image 1: Separating specular and diffuse reflected light

(a) Normal image under point lighting

(b) Image of the diffuse reflection, obtained by placing a vertical polarizer in front of the light source and a horizontal one in front of the camera

(c) Image of the diffuse and specular reflections together, obtained by placing both polarizers vertically

(d) Subtraction of b from c, which yields the specular component

Images are scaled to appear equally luminous.

Original image by Debevec et al. – Copyright ACM 2000 – https://dl.acm.org/citation.cfm?doid=311779.344855 – Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page.
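The separation itself is plain image arithmetic: the cross-polarized photograph contains only the diffuse reflection, the parallel-polarized one contains diffuse plus specular, and subtracting the former from the latter leaves the specular component. A minimal sketch in Python/NumPy, assuming two aligned photographs saved under hypothetical file names:

import numpy as np
import imageio.v3 as iio

# Cross-polarized shot: the polarizers block the specular reflection,
# leaving only the diffuse component.
diffuse = iio.imread("cross_polarized.png").astype(np.float64)

# Parallel-polarized shot: both diffuse and specular components pass.
diffuse_plus_specular = iio.imread("parallel_polarized.png").astype(np.float64)

# Subtracting the registered images isolates the specular component.
specular = np.clip(diffuse_plus_specular - diffuse, 0.0, 255.0)

iio.imwrite("specular.png", specular.astype(np.uint8))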

In the cinemas we have seen digital look-alikes for over 15 years. These digital look-alikes have "clothing" (a simulation of clothing is not clothing) or "superhero costumes" and "superbaddie costumes", and they don't need to care about the laws of physics, let alone the laws of physiology. It is generally accepted that digital look-alikes made their public debut in the sequels of The Matrix, i.e. w:The Matrix Reloaded and w:The Matrix Revolutions, released in 2003. It can be considered almost certain that it was not possible to make these before the year 1999, as the final piece of the puzzle needed to make a (still) digital look-alike that passes human testing, the reflectance capture over the human face, was achieved for the first time in 1999 at the w:University of Southern California and was presented to the crème de la crème of the computer graphics field at their annual gathering, SIGGRAPH 2000.[2]


“Do you think that was w:Hugo Weaving's left cheekbone that w:Keanu Reeves punched in with his right fist?”

~ Trad on The Matrix Revolutions



The problems with digital look-alikes

Most unfortunately for humankind, organized criminal leagues that possess the weapons capability of making believable-looking synthetic pornography are producing synthetic terror porn[footnote 1] on industrial production pipelines, by animating digital look-alikes and distributing it in the murky Internet in exchange for money stacks that are getting thinner and thinner as time goes by.

These industrially produced pornographic delusions are causing great human suffering, especially in their direct victims, but they are also tearing our communities and societies apart, sowing blind rage and perceptions of deepening chaos, feeding feelings of powerlessness and provoking violence. This hate illustration increases and strengthens hate thinking, hate speech and hate crimes, tears our fragile social constructions apart, and with time perverts humankind's view of humankind into an almost unrecognizable shape, unless we interfere with resolve.

List of possible naked digital look-alike attacks

  • The classic "portrayal of as if in involuntary sex"-attack. (Digital look-alike "cries")
  • "Sexual preference alteration"-attack. (Digital look-alike "smiles")
  • "Cutting / beating"-attack (Constructs a deceptive history for genuine scars)
  • "Mutilation"-attack (Digital look-alike "dies")
  • "Unconscious and injected"-attack (Digital look-alike gets "disease")

Age analysis and rejuvenating and aging syntheses

Temporal limit of digital look-alikes

A picture of the 1895 w:Cinematograph

w:History of film technology has information about where the border is.

Digital look-alikes cannot be used to attack people who existed before the technological invention of film. For moving pictures the breakthrough is attributed to w:Auguste and Louis Lumière's w:Cinematograph premiered in Paris on 28 December 1895, though this was only the commercial and popular breakthrough, as even earlier moving pictures exist. (adapted from w:History of film)

The w:Kinetoscope is an even earlier motion picture exhibition device. A prototype for the Kinetoscope was shown to a convention of the National Federation of Women's Clubs on May 20, 1891.[3] The first public demonstration of the Kinetoscope was held at the Brooklyn Institute of Arts and Sciences on May 9, 1893. (Wikipedia)[3]



Digital sound-alikes

A picture of a cut-away titled "Voice-terrorist could mimic a leader" from a 2012 w:Helsingin Sanomat warning that the sound-like-anyone machines are approaching. Thank you to homie Prof. David Martin Howard of the w:University of York, UK and the anonymous editor for the heads-up.

The first English-speaking digital sound-alikes were introduced in 2016 by Adobe and DeepMind, but neither was made publicly available.

Then in 2018, at the w:Conference on Neural Information Processing Systems (NeurIPS), the work 'Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis' (at arXiv.org) was presented. The pre-trained model is able to steal a voice from a sample of only 5 seconds, with almost convincing results.
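In outline, the presented system chains three networks: a speaker encoder distills a few seconds of reference speech into a fixed-length embedding, a sequence-to-sequence synthesizer generates a mel spectrogram for arbitrary text conditioned on that embedding, and a neural vocoder renders the spectrogram into a waveform. The sketch below only illustrates this data flow; the three classes are dummy stand-ins invented for illustration, not the paper's published code.

import numpy as np

class SpeakerEncoder:
    """Stand-in for the speaker-verification network (dummy)."""
    def embed(self, wav: np.ndarray) -> np.ndarray:
        # The real encoder is trained on a speaker-verification task;
        # here we just return a fixed-size placeholder embedding.
        return np.zeros(256)

class Synthesizer:
    """Stand-in for the Tacotron-style spectrogram synthesizer (dummy)."""
    def synthesize(self, text: str, speaker_embedding: np.ndarray) -> np.ndarray:
        # The real synthesizer attends over the text and is conditioned
        # on the speaker embedding; we return a placeholder mel spectrogram.
        return np.zeros((80, 10 * max(len(text), 1)))

class Vocoder:
    """Stand-in for the WaveNet-style neural vocoder (dummy)."""
    def infer(self, mel: np.ndarray) -> np.ndarray:
        return np.zeros(mel.shape[1] * 200)  # placeholder waveform samples

def clone_voice(reference_wav: np.ndarray, text: str) -> np.ndarray:
    embedding = SpeakerEncoder().embed(reference_wav)  # ~5 s of audio suffices
    mel = Synthesizer().synthesize(text, embedding)
    return Vocoder().infer(mel)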

The iframe below is transcluded from 'Audio samples from "Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis"' at google.github.io: the audio samples of a sound-like-anyone machine presented at the 2018 w:NeurIPS conference by Google researchers.

Observe how good the "VCTK p240" system is at deceiving a listener into thinking that a real person is doing the talking.


The video to the right, 'This AI Clones Your Voice After Listening for 5 Seconds' by '2 minute papers' at YouTube, describes the voice-thieving machine presented by Google Research at w:NeurIPS 2018.

Documented crimes with digital sound-alikes

In 2019 reports of crimes committed with digital sound-alikes started surfacing. As of January 2022, no reports of attack types other than fraud have been found.

2019 digital sound-alike enabled fraud

By 2019, digital sound-alike technology had found its way into the hands of criminals. In 2019, Symantec researchers knew of 3 cases where digital sound-alike technology had been used for w:crime.[4]

Of these crimes the most publicized was a fraud case in March 2019 where €220,000 was defrauded with the use of a real-time digital sound-alike.[5] The company that was the victim of this fraud had bought some kind of cyberscam insurance from the French insurer w:Euler Hermes, and the case came to light when Mr. Rüdiger Kirsch of Euler Hermes informed w:The Wall Street Journal about it.[6]

Reporting on the 2019 digital sound-alike enabled fraud

2021 digital sound-alike enabled fraud

The 2nd publicly known fraud done with a digital sound-alike[1st seen in 1] took place on Friday 2021-01-15. A bank in Hong Kong was manipulated into wiring money to numerous bank accounts by using a voice stolen from one of their client company's directors. The fraudsters managed to defraud $35 million of the U.A.E.-based company's money.[8]

Reporting on the 2021 digital sound-alike enabled fraud

  • Fraudsters Cloned Company Director’s Voice In $35 Million Bank Heist, Police Find at forbes.com (https://www.forbes.com/sites/thomasbrewster/2021/10/14/huge-bank-fraud-uses-deep-fake-voice-tech-to-steal-millions/) - 2021-10-14 original reporting
  • Deepfaked Voice Enabled $35 Million Bank Heist in 2020 at unite.ai (https://www.unite.ai/deepfaked-voice-enabled-35-million-bank-heist-in-2020/) - reporting updated on 2021-10-15


What should we do about digital sound-alikes?

Living people can defend[footnote 2] themselves against a digital sound-alike by denying the things it says, if they are confronted with them, but dead people cannot. Digital sound-alikes offer criminals new disinformation attack vectors and wreak havoc on provability.

For these reasons the bannable raw materials, i.e. covert voice models, should be prohibited by law in order to protect humans from abuse by criminal parties.


Example of a hypothetical 4-victim digital sound-alike attack

A very simple example of a digital sound-alike attack is as follows:

Someone puts a digital sound-alike to call somebody's voicemail from an unknown number and to speak, for example, illegal threats. In this example there are at least four victims:

  1. Victim #1 - The person whose voice has been stolen into a covert model and a digital sound-alike made from it to frame them for crimes
  2. Victim #2 - The person to whom the illegal threat is presented in a recorded form by a digital sound-alike that deceptively sounds like victim #1
  3. Victim #3 - It could also be viewed that victim #3 is our law enforcement systems as they are put to chase after and interrogate the innocent victim #1
  4. Victim #4 - Our judiciary which prosecutes and possibly convicts the innocent victim #1.

Thus it is high time to act and to criminalize the covert modeling of human voice!

Examples of speech synthesis software not quite able to fool a human yet

Other contenders to create digital sound-alikes exist, though as of 2019 their speech synthesis in most use scenarios does not yet fool a human, because the results contain telltale signs that give them away as speech synthesizers.

Reporting on the sound-like-anyone-machines

Temporal limit of digital sound-alikes

w:Thomas Edison and his early w:phonograph. Cropped from w:Library of Congress copy, ca. 1877, (probably 18 April 1878)

The temporal limit of whom, dead or living, the digital sound-alikes can attack is defined by the w:history of sound recording.

The w:history of sound recording article starts by mentioning that the invention of the w:phonograph by w:Thomas Edison in 1877 is considered the start of sound recording.

The phonautograph is the earliest known device for recording w:sound. Previously, tracings had been obtained of the sound-producing vibratory motions of w:tuning forks and other objects by physical contact with them, but not of actual sound waves as they propagated through air or other media. Invented by Frenchman w:Édouard-Léon Scott de Martinville, it was patented on March 25, 1857.[9]

Apparently, it did not occur to anyone before the 1870s that the recordings, called phonautograms, contained enough information about the sound that they could, in theory, be used to recreate it. Because the phonautogram tracing was an insubstantial two-dimensional line, direct physical playback was impossible in any case. Several phonautograms recorded before 1861 were successfully played as sound in 2008 by optically scanning them and using a computer to process the scans into digital audio files. (Wikipedia)

A w:spectrogram of a male voice saying 'nineteenth century'
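For readers who want to reproduce such a picture: a spectrogram is the power of a short-time Fourier transform over sliding windows, and standard signal-processing libraries compute it directly. A minimal sketch with SciPy and Matplotlib, assuming a mono WAV recording under a hypothetical file name:

import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, samples = wavfile.read("nineteenth_century.wav")  # hypothetical file
if samples.ndim > 1:
    samples = samples[:, 0]  # keep one channel if the file is stereo

# Power spectral density over sliding windows of the signal.
f, t, Sxx = spectrogram(samples, fs=rate)

plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12))  # power in dB
plt.xlabel("Time [s]")
plt.ylabel("Frequency [Hz]")
plt.show()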

Singing syntheses

As of 2020 the digital sing-alikes may not yet be here, but when we hear a faked singing voice and cannot tell that it is fake, then we will know. An ability to sing does not seem to add many hostile capabilities compared to the ability to thieve spoken word.


Text syntheses

w:Chatbots have existed for a long time, but only now, armed with AI, are they becoming more deceptive.

In w:natural language processing, development in w:natural-language understanding leads to more cunning w:natural-language generation AI.

w:OpenAI's w:Generative Pre-trained Transformer (GPT) is a left-to-right w:transformer (machine learning model)-based text generation model, succeeded by w:GPT-2 and w:GPT-3.
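"Left-to-right" means the model produces one token at a time, each conditioned only on the tokens before it. A minimal sketch of sampling from the openly released GPT-2 weights via the Hugging Face transformers library (an illustration of the decoding loop, not OpenAI's own tooling):

from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("Synthetic media is", return_tensors="pt")
# Each new token is sampled conditioned only on the text to its left.
outputs = model.generate(**inputs, max_length=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))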

Reporting / announcements

External links

Handwriting syntheses

Handwriting syntheses could be used

  1. Defensively, to hide one's handwriting style from public view
  2. Offensively, to thieve somebody else's handwriting style

If the handwriting-like synthesis passes human and media forensics testing, it is a digital handwrite-alike.

Here we find a risk similar to the one that materialized when w:speaker recognition systems turned out to be instrumental in the development of digital sound-alikes. After the knowledge needed to recognize a speaker was w:transferred into a generative task in 2018 by Google researchers, we can no longer effectively determine for English speakers which recording is of human origin and which is of machine origin.

Handwriting-like syntheses: w:Recurrent neural networks (RNN) seem to be a popular choice for this task; a toy sketch follows the list of demos below.


  1. Recurrent neural network handwriting generation demo at cs.toronto.edu is a demonstration site for Alex Graves' 2013 publication
  2. Calligrapher.ai - Realistic computer-generated handwriting - The user may control parameters: speed, legibility, stroke width and style. The domain is registered by some organization in Iceland and the website offers no about-page[1st seen in 3]. According to this reddit post Calligrapher.ai is based on Graves' 2013 work, but "adds an w:inference model to allow for sampling latent style vectors (similar to the VAE model used by SketchRNN)".[10]
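Graves' 2013 approach represents handwriting as a sequence of pen offsets (dx, dy, pen-up) and trains a recurrent network to predict each next offset from the ones before it; the real model emits a mixture density rather than a point estimate. A heavily simplified toy sketch in PyTorch (a stand-in that regresses the next offset directly, not Graves' published model):

import torch
import torch.nn as nn

class HandwritingRNN(nn.Module):
    # Toy stand-in: regresses the next pen offset directly instead of
    # emitting the mixture density used by Graves (2013).
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.lstm = nn.LSTM(input_size=3, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3)  # (dx, dy, pen_up)

    def forward(self, strokes: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(strokes)  # strokes: (batch, time, 3)
        return self.head(out)        # predicted next offset at every step

model = HandwritingRNN()
batch = torch.randn(8, 100, 3)               # dummy stroke sequences
pred = model(batch[:, :-1])                  # predict each following offset
loss = nn.functional.mse_loss(pred, batch[:, 1:])
loss.backward()                              # one toy training step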

Handwriting recognition

Countermeasures against synthetic human-like fakes

Organizations that should get back to the task at hand - FacePinPoint.com

Transcluded from FacePinPoint.com

FacePinPoint.com was a for-a-fee service, from 2017 to 2021, for pointing out where on pornography sites a particular face appears, or, in the case of synthetic pornography, where a digital look-alike makes make-believe of a face or body appearing.[contacted 1] The inventor and founder of FacePinPoint.com, Mr. Lionel Hagege, registered the domain name in 2015[12], when he set out to research the feasibility of his action plan idea against non-consensual pornography.[13] The description of how FacePinPoint.com worked is the same as the description of Adequate Porn Watcher AI (concept).


Organizations against synthetic human-like fakes

Organizations for media forensics

The Defense Advanced Research Projects Agency, better known as w:DARPA, has been active in the field of countering synthetic fake video for longer than the public has been aware that the problems exist.

Organizations possibly against synthetic human-like fakes

Originally harvested from the study The ethics of artificial intelligence: Issues and initiatives (.pdf) by the w:European Parliamentary Research Service, published on the w:Europa (web portal) in March 2020.[1st seen in 4]

Other essential developments

Events against synthetic human-like fakes

  • 2019 | At the annual public research seminar of the Finnish w:Ministry of Defence's Scientific Advisory Board for Defence (MATINE), a research group presented their work 'Synteettisen median tunnistus' at defmin.fi (Recognizing synthetic media). They built on earlier work on how to automatically detect synthetic human-like fakes, and their work was funded with a grant from MATINE.
  • 2018 | NIST's 'Media Forensics Challenge 2018' at nist.gov was the second annual evaluation to support research and help advance the state of the art for image and video forensics technologies – technologies that determine the region and type of manipulations in imagery (image/video data) and the phylogenic process that modified the imagery.
  • 2016 | Nimble Challenge 2016 - NIST released the Nimble Challenge '16 (NC2016) dataset as the MFC program kickoff dataset (NC being the former name of MFC).[19]

Studies against synthetic human-like fakes

Search for more

Reporting against synthetic human-like fakes

Companies against synthetic human-like fakes

See resources for more.


SSF! wiki proposed countermeasure to weaponized synthetic pornography: Outlaw unauthorized synthetic pornography (transcluded)

Transcluded from Juho's proposal for banning unauthorized synthetic pornography


§1 Models of human appearance

A model of human appearance means

§2 Producing synthetic pornography

Making projections, still or videographic, where targets are portrayed nude or in a sexual situation, from models of human appearance defined in §1, without the express consent of the targets, is illegal.

§3 Distributing synthetic pornography

Distributing, making available, public display, purchase, sale, yielding, import and export of non-authorized synthetic pornography defined in §2 are punishable.[footnote 3]

§4 Aggravated producing and distributing synthetic pornography

If the media described in §2 or §3 is made or distributed with the intent to frame for a crime or for blackmail, the crime should be judged as aggravated.

Afterwords

The original idea I had was to ban both the raw materials, i.e. the models, used to make the visual synthetic filth and also the end product, weaponized synthetic pornography, but then in July 2019 it occurred to me that Adequate Porn Watcher AI (concept) could really help in this age of industrial disinformation if it were built, trained and operational. Banning the modeling of human appearance was in conflict with the revised plan.

It is safe to assume that collecting permissions to model each pornographic recording is not plausible, so an interesting question is whether we can ban covert modeling from non-pornographic pictures while still retaining the ability to model all porn found on the Internet.

If banning the modeling of people's appearance from non-pornographic images/videos without explicit permission is to be pursued, the ban must be formulated so that it does not make Adequate Porn Watcher AI (concept) illegal / impossible. This would seem to lead to a weird situation where modeling a human from non-pornographic media would be illegal, but modeling from pornography legal.


SSF! wiki proposed countermeasure to weaponized synthetic pornography: Adequate Porn Watcher AI (concept) (transcluded)

Transcluded main contents from Adequate Porn Watcher AI (concept)

Adequate Porn Watcher AI (APW_AI) is an w:AI and w:computer vision concept to search for any and all porn that should not be, by watching and modeling all porn ever found on the w:Internet, thus effectively protecting humans by exposing covert naked digital look-alike attacks as well as other contraband.

Obs. A service identical to the APW_AI used to exist - FacePinPoint.com

The method and the effect

The method by which the APW_AI would provide safety and security to its users is that they can briefly upload a model they've gotten of themselves, and then the APW_AI will either say that nothing matching was found or be of the opinion that something matching was found.

If people are able to check whether there is synthetic porn that looks like them, the products of the synthetic hate-illustration industrialists lose destructive potential, and the attacks that do happen are less destructive, as they are exposed by the APW_AI, which decimates the monetary value of these disinformation weapons to the criminals.

If you feel comfortable leaving your model with the good people at the benefactor for safekeeping, you get alerted and helped if you ever get attacked with a synthetic porn attack.

Rules

Looking up whether matches are found for anyone else's model is forbidden, and this should probably be enforced with a w:biometric w:facial recognition system app that checks that the model you want checked is yours and that you are awake.

Definition of adequacy

An adequate implementation should be nearly free of false positives, very good at finding true positives and able to process more porn than is ever uploaded.
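Conceptually, the matching step is a nearest-neighbour lookup in an embedding space, and "adequacy" is a question of where the decision threshold sits between false positives and missed true positives. A toy sketch with NumPy (the 512-dimensional embeddings and the 0.75 threshold are illustrative assumptions, not a specification of the APW_AI):

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_model(user_embedding, indexed_embeddings, threshold=0.75):
    # Compare the uploaded appearance model against everything the
    # watcher has indexed; report only match / no match.
    for candidate in indexed_embeddings:
        if cosine_similarity(user_embedding, candidate) >= threshold:
            return "something matching found"
    return "nothing matching found"

index = [np.random.randn(512) for _ in range(1000)]  # dummy index
print(check_model(np.random.randn(512), index))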

What about the people in the porn-industry?

People who openly do porn can help by opting-in to help in the development by providing training material and material to test the AI on. People and companies who help in training the AI naturally get credited for their help.

There are of course lots of people-questions to this and those questions need to be identified by professionals of psychology and social sciences.

History

The idea of the APW_AI occurred to User:Juho Kunsola on Friday 2019-07-12. Subsequently (the next day) this discovery caused the scrapping of the plea to ban covert modeling of human appearance, as that would have rendered the APW_AI legally impossible.

Countermeasures elsewhere

Partial transclusion from Organizations, studies and events against synthetic human-like fakes

Companies against synthetic filth


A service identical to APW_AI used to exist - FacePinPoint.com

Partial transclusion from FacePinPoint.com


FacePinPoint.com was a for-a-fee service, from 2017 to 2021, for pointing out where on pornography sites a particular face appears, or, in the case of synthetic pornography, where a digital look-alike makes make-believe of a face or body appearing.[contacted 3] The inventor and founder of FacePinPoint.com, Mr. Lionel Hagege, registered the domain name in 2015[21], when he set out to research the feasibility of his action plan idea against non-consensual pornography.[22] The description of how FacePinPoint.com worked is the same as the description of Adequate Porn Watcher AI (concept).


SSF! wiki proposed countermeasure to digital sound-alikes: Outlawing digital sound-alikes (transcluded)

Transcluded from Juho's proposal on banning digital sound-alikes


Motivation: The current situation, where criminals can freely trade and grow their libraries of stolen voices, is unwise.

§1 Unauthorized modeling of a human voice

Acquiring a model of a human's voice that deceptively resembles some dead or living person's voice, as well as its possession, purchase, sale, yielding, import and export, without the express consent of the target, are punishable.

§2 Application of unauthorized voice models

Producing and making available media from covert voice models defined in §1 is punishable.

§3 Aggravated application of unauthorized voice models

If the produced media is intended to

  • frame a human target or targets for crimes
  • attempt extortion or
  • defame the target,

the crime should be judged as aggravated.



Timeline of synthetic human-like fakes

See the #SSFWIKI Mediatheque for viewing media that is, or probably is, to do with synthetic human-like fakes.

2020's synthetic human-like fakes

  • 2021 | crime / fraud | The 2nd publicly known fraud done with a digital sound-alike[1st seen in 9] took place on Friday 2021-01-15. A bank in Hong Kong was manipulated into wiring money to numerous bank accounts by using a voice stolen from one of their client company's directors. The fraudsters managed to defraud $35 million of the U.A.E.-based company's money.[8]

Reporting on the 2021 digital sound-alike enabled fraud

  • 2020 | Chinese legislation | On Wednesday, January 1, 2020, a Chinese law requiring that synthetically faked footage bear a clear notice about its fakeness came into effect. Failure to comply could be considered a w:crime, the w:Cyberspace Administration of China (cac.gov.cn) stated on its website. China announced this new law in November 2019.[28] The Chinese government seems to be reserving the right to prosecute both users and w:online video platforms failing to abide by the rules.[29]


2010's synthetic human-like fakes


Code of Virginia (TOC) » Title 18.2. Crimes and Offenses Generally » Chapter 8. Crimes Involving Morals and Decency » Article 5. Obscenity and Related Offenses » Section § 18.2-386.2. Unlawful dissemination or sale of images of another; penalty

The section § 18.2-386.2. Unlawful dissemination or sale of images of another; penalty of the Code of Virginia is as follows:

A. Any w:person who, with the w:intent to w:coerce, w:harass, or w:intimidate, w:maliciously w:disseminates or w:sells any videographic or still image created by any means whatsoever that w:depicts another person who is totally w:nude, or in a state of undress so as to expose the w:genitals, pubic area, w:buttocks, or female w:breast, where such person knows or has reason to know that he is not w:licensed or w:authorized to disseminate or sell such w:videographic or w:still image is w:guilty of a Class 1 w:misdemeanor.

For purposes of this subsection, "another person" includes a person whose image was used in creating, adapting, or modifying a videographic or still image with the intent to depict an actual person and who is recognizable as an actual person by the person's w:face, w:likeness, or other distinguishing characteristic.

B. If a person uses w:services of an w:Internet service provider, an electronic mail service provider, or any other information service, system, or access software provider that provides or enables computer access by multiple users to a computer server in committing acts prohibited under this section, such provider shall not be held responsible for violating this section for content provided by another person.

C. Venue for a prosecution under this section may lie in the w:jurisdiction where the unlawful act occurs or where any videographic or still image created by any means whatsoever is produced, reproduced, found, stored, received, or possessed in violation of this section.

D. The provisions of this section shall not preclude prosecution under any other w:statute.[33]

The identical bills were House Bill 2678, presented by w:Delegate w:Marcus Simon to the w:Virginia House of Delegates on January 14, 2019, and, three days later, the identical Senate bill 1736, introduced to the w:Senate of Virginia by Senator w:Adam Ebbin.

  • 2019 | demonstration | 'Thispersondoesnotexist.com' (since February 2019) by Philip Wang. It showcases a w:StyleGAN at the task of making an endless stream of pictures that look like no-one in particular, but are eerily human-like. Relevancy: certain
w:Google's logo. Google Research demonstrated their sound-like-anyone machine at the 2018 w:Conference on Neural Information Processing Systems (NeurIPS). It requires only a 5-second sample to steal a voice.
  • 2018 | controversy / demonstration | The w:deepfakes controversy surfaced, in which porn videos were doctored utilizing w:deep machine learning so that the face of the actress was replaced by the software's opinion of what another person's face would look like in the same pose and lighting.
w:Adobe Inc.'s logo. We can thank Adobe for publicly demonstrating their sound-like-anyone-machine in 2016 before an implementation was sold to criminal organizations.
w:Adobe Voco. Adobe Audio Manipulator Sneak Peak with w:Jordan Peele (at Youtube.com). A November 2016 demonstration of Adobe's unreleased sound-like-anyone machine, w:Adobe Voco, at the w:Adobe MAX 2016 event in w:San Diego, w:California. The original Adobe Voco required 20 minutes of sample audio to thieve a voice.
  • 2013 | demonstration | At SIGGRAPH 2013, w:Activision and USC presented "Digital Ira", a w:real time computing digital face look-alike of Ari Shapiro, an ICT USC research scientist,[40] utilizing the USC light stage X by Ghosh et al. for both reflectance field and motion capture.[41] The end result, both precomputed and rendered in real time with the then-most-modern game w:GPU shown here, looks fairly realistic.

2000's synthetic human-like fakes

  • 2009 | movie | A digital look-alike of a younger w:Arnold Schwarzenegger was made for the movie w:Terminator Salvation, though the end result was critiqued as unconvincing. The facial geometry was acquired from a 1984 mold of Schwarzenegger.
  • 2009 | demonstration | Paul Debevec: 'Animating a photo-realistic face' at ted.com. Debevec et al. presented new digital likenesses, made by w:Image Metrics, this time of actress w:Emily O'Brien, whose reflectance was captured with the USC light stage 5. At 00:04:59 you can see two clips, one with the real Emily shot with a real camera and one with a digital look-alike of Emily, shot with a simulation of a camera - which is which is difficult to tell. Bruce Lawmen was scanned using USC light stage 6 in a still position and also recorded running there on a w:treadmill. Many, many digital look-alikes of Bruce are seen running fluently and looking natural in the ending sequence of the TED talk video.[42] The motion looks fairly convincing, contrasted with the clunky run in w:The Animatrix: Final Flight of the Osiris, which was w:state-of-the-art in 2003, if photorealism was the intention of the w:animators.
Traditional w:BRDF vs. subsurface-scattering-inclusive BSSRDF, i.e. w:Bidirectional scattering-surface reflectance distribution function.

An analytical BRDF must take into account the subsurface scattering, or the end result will not pass human testing.
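In the standard rendering formalism the difference is where the light is allowed to exit. A BRDF \( f_r \) relates light arriving and leaving at the same surface point:

\[ L_o(x, \omega_o) = \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \, (n \cdot \omega_i) \, d\omega_i \]

while a BSSRDF \( S \) also integrates over the surrounding surface area, letting light enter at one point and, after scattering beneath the skin, exit at another:

\[ L_o(x_o, \omega_o) = \int_{A} \int_{\Omega} S(x_i, \omega_i; x_o, \omega_o) \, L_i(x_i, \omega_i) \, (n \cdot \omega_i) \, d\omega_i \, dA(x_i) \]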
Music video for Bullet by w:Covenant from 2002. Here you can observe the classic "skin looks like cardboard"-bug that stopped the pre-reflectance capture era versions from passing human testing.
  • 2002 | music video | 'Bullet' by Covenant on Youtube, by w:Covenant (band), from their album w:Northern Light (Covenant album). Relevancy: contains the best upper-torso digital look-alike of Eskil Simonsson (vocalist) that their organization could procure at the time. Here you can observe the classic "skin looks like cardboard" bug (assuming this was not intended) that thwarted efforts to make digital look-alikes that pass human testing before the reflectance capture and dissection in 1999 by w:Paul Debevec et al. at the w:University of Southern California, and the subsequent development of the "analytical w:BRDF" (quote-unquote) by ESC Entertainment, a company set up for the sole purpose of making the cinematography of the 2003 films Matrix Reloaded and Matrix Revolutions possible, led by George Borshukov.

1990's synthetic human-like fakes

1970's synthetic human-like fakes

w:A Computer Animated Hand is a 1972 short film by w:Edwin Catmull and w:Fred Parke. This was the first time that w:computer-generated imagery was used in film to animate likenesses of moving human appearance.
  • 1976 | movie | w:Futureworld reused parts of A Computer Animated Hand on the big screen.

1960's synthetic human-like fakes

1930's synthetic human-like fakes

w:Voder demonstration pavilion at the w:1939 New York World's Fair

1770's synthetic human-like fakes

A replica of w:Wolfgang von Kempelen's Speaking Machine, built 2007–09 at the Department of w:Phonetics, w:Saarland University, w:Saarbrücken, Germany. This machine added models of the tongue and lips, enabling it to produce w:consonants as well as w:vowels.

Footnotes

  1. It is terminologically more precise, more inclusive and more useful to talk about 'synthetic terror porn', if we want to call things by their real names, than about 'synthetic rape porn', because synthesizing recordings of consensual-looking sex scenes can also be terroristic in intent.
  2. Whether a suspect can defend against faked synthetic speech that sounds like him/her depends on how up-to-date the judiciary is. If no information and instructions about digital sound-alikes have been given to the judiciary, they will likely not believe the defense when it denies that the recording is of the suspect's voice.
  3. People who are found in possession of this synthetic pornography should probably not be penalized, but rather advised to get some help.

Contact information of organizations

Please contact these organizations and tell them to work harder against the disinformation weapons

    • WITNESS
    • 80 Hanson Place, 5th Floor
    • Brooklyn, NY 11217
    • USA
    • Phone: 1.718.783.2000
  1. Contact AIAAIC at aiaaic.org Snail mail
    • AIAAIC
    • The Bradfield Centre
    • 184 Cambridge Science Park
    • Cambridge, CB4 0GA
    • United Kingdom
    • Screen Actors Guild - American Federation of Television and Radio Artists
    • 5757 Wilshire Boulevard, 7th Floor
    • Los Angeles, California 90036
    • USA
    • Phone: 1-855-724-2387
    • Email: info@sagaftra.org
    • Email: outreach@darpa.mil
    • Defense Advanced Research Projects Agency
    • 675 North Randolph Street
    • Arlington, VA 22203-2114
    • Phone 1-703-526-6630
    • Email: CAM@ucdenver.edu
    • College of Arts & Media
    • National Center for Media Forensics
    • CU Denver
    • Arts Building
    • Suite 177
    • 1150 10th Street
    • Denver, CO 80204
    • USA
    • Phone 1-303-315-7400
    • Media Forensics Hub at Clemson University clemson.edu
    • Media Forensics Hub
    • Clemson University
    • Clemson, South Carolina 29634
    • USA
    • Phone 1-864-656-3311
  2. mediaforensics@clemson.edu
    • INSTITUTE FOR ETHICS IN ARTIFICIAL INTELLIGENCE
    Visitor’s address
    • Marsstrasse 40
    • D-80335 Munich
    Postal address
    • INSTITUTE FOR ETHICS IN ARTIFICIAL INTELLIGENCE
    • Arcisstrasse 21
    • D-80333 Munich
    • Germany
    Email
    • ieai(at)mcts.tum.de
    Website
  3. The Institute for Ethical AI & Machine Learning Website https://ethical.institute/ Email
    • a@ethical.institute
    Contacted
    • The Institute for Ethical AI in Education
    From Mail
    • The University of Buckingham
    • The Institute for Ethical AI in Education
    • Hunter Street
    • Buckingham
    • MK18 1EG
    • United Kingdom
  4. Future of Life Institute Contact form
    • No physical contact info
    Contacted
    • 2021-08-14 | Subscribed to newsletter
  5. The Japanese Society for Artificial Intelligence Contact info Mail
    • The Japanese Society for Artificial Intelligence
    • 402, OS Bldg.
    • 4-7 Tsukudo-cho, Shinjuku-ku, Tokyo 162-0821
    • Japan
    Phone
    • 03-5261-3401
    • AI4ALL
    Mail
    • AI4ALL
    • 548 Market St
    • PMB 95333
    • San Francisco, California 94104
    • USA
    Contacted:
    • 2021-08-14 | Subscribed to mailing list
    • The Future Society at thefuturesociety.org
    Contact
    • No physical contact info
    • The Ai Now Institute at ainowinstitute.org
    Contact Email
    • info@ainowinstitute.org
    Contacted
    • 2021-08-14 | Subscribed to mailing list
    • Partnership on AI at partnershiponai.org
    Contact Mail
    • Partnership on AI
    • 115 Sansome St, Ste 1200,
    • San Francisco, CA 94104
    • USA
    • The Foundation for Responsible Robotics at responsiblerobotics.org
    Contact form Email
    • info@responsiblerobotics.org
    • AI4People at ai4people.eu
    Contact form
    • No physical contact info
    • IEEE Ethics in Action - in Autonomous and Intelligent Systems at ethicsinaction.ieee.org
    Email
    • aiopps@ieee.org
  6. Email
    • info@counterhate.com
    Contacted
    • 2021-08-14 | Subscribed to mailing list
    • Carnegie Endowment for International Peace - Partnership for Countering Influence Operations (PCIO) at carnegieendowment.org
    Mail
    • Carnegie Endowment for International Peace
    • Partnership for Countering Influence Operations
    • 1779 Massachusetts Avenue NW
    • Washington, DC 20036-2103
    • USA
    Phone
    • 1-202-483-7600
    Fax
    • 1-202-483-1840
    • Knowledge 4 All Foundation Ltd. - https://www.k4all.org/
    • Betchworth House
    • 57-65 Station Road
    • Redhill, Surrey, RH1 1DL
    • UK
    • The Montréal Declaration for a Responsible Development of Artificial Intelligence at montrealdeclaration-responsibleai.com
    Phone
    • 1-514-343-6111, ext. 29669
    Email
    • declaration-iaresponsable@umontreal.ca
  7. Email:
    • mfc_poc@nist.gov

1st seen in

  1. https://www.reddit.com/r/VocalSynthesis/
  2. https://www.ucl.ac.uk/news/2016/aug/new-computer-programme-replicates-handwriting via Google search for "ai handwriting generator"
  3. https://seanvasquez.com/handwriting-generation redirects to Calligrapher.ai - seen in https://www.reddit.com/r/MachineLearning/comments/gh9cbg/p_generate_handwriting_with_an_inbrowser/
  4. "The ethics of artificial intelligence: Issues and initiatives" (PDF). w:Europa (web portal). w:European Parliamentary Research Service. March 2020. Retrieved 2021-02-17. This study deals with the ethical implications and moral questions that arise from the development and implementation of artificial intelligence (AI) technologies.
  5. https://www.reuters.com/article/us-cyber-deepfake-activist/deepfake-used-to-attack-activist-couple-shows-new-disinformation-frontier-idUSKCN24G15E
  6. https://spectrum.ieee.org/deepfake-porn
  7. Witness newsletter I subscribed to at https://www.witness.org/get-involved/
  8. https://www.connectedpapers.com/main/8fc09dfcff78ac9057ff0834a83d23eb38ca198a/Transfer-Learning-from-Speaker-Verification-to-Multispeaker-TextToSpeech-Synthesis/graph
  9. https://www.reddit.com/r/VocalSynthesis/
  10. https://www.technologyreview.com/2020/08/28/1007746/ai-deepfakes-memes/
  11. 'US Lawmakers: AI-Generated Fake Videos May Be a Security Threat' at uk.pcmag.com, 2018-09-13 reporting by Michael Kan


References

  1. "You Won't Believe What Obama Says In This Video!". w:YouTube. w:BuzzFeed. 2018-04-17. Retrieved 2022-01-05. We're entering an era in which our enemies can make anyone say anything at any point in time.
  2. Debevec, Paul (2000). "Acquiring the reflectance field of a human face". Proceedings of the 27th annual conference on Computer graphics and interactive techniques - SIGGRAPH '00. ACM. pp. 145–156. doi:10.1145/344779.344855. ISBN 978-1581132083. Retrieved 2020-06-27.
  3. "Inventing Entertainment: The Early Motion Pictures and Sound Recordings of the Edison Companies". Memory.loc.gov. w:Library of Congress. Retrieved 2020-12-09.
  4. Harwell, Drew (2020-04-16). "An artificial-intelligence first: Voice-mimicking software reportedly used in a major theft". w:washingtonpost.com. w:Washington Post. Retrieved 2019-07-22. Researchers at the cybersecurity firm Symantec said they have found at least three cases of executives’ voices being mimicked to swindle companies. Symantec declined to name the victim companies or say whether the Euler Hermes case was one of them, but it noted that the losses in one of the cases totaled millions of dollars.
  5. Stupp, Catherine (2019-08-30). "Fraudsters Used AI to Mimic CEO's Voice in Unusual Cybercrime Case". w:wsj.com. w:The Wall Street Journal. Retrieved 2022-01-01.
  6. Damiani, Jesse (2019-09-03). "A Voice Deepfake Was Used To Scam A CEO Out Of $243,000". w:Forbes.com. w:Forbes. Retrieved 2022-01-01. According to a new report in The Wall Street Journal, the CEO of an unnamed UK-based energy firm believed he was on the phone with his boss, the chief executive of the firm’s German parent company, when he followed the orders to immediately transfer €220,000 (approx. $243,000) to the bank account of a Hungarian supplier. In fact, the voice belonged to a fraudster using AI voice technology to spoof the German chief executive. Rüdiger Kirsch of Euler Hermes Group SA, the firm’s insurance company, shared the information with WSJ.
  7. "Fake voices 'help cyber-crooks steal cash'". w:bbc.com. w:BBC. 2019-07-08. Retrieved 2020-07-22.
  8. https://www.forbes.com/sites/thomasbrewster/2021/10/14/huge-bank-fraud-uses-deep-fake-voice-tech-to-steal-millions/
  9. Flatow, Ira (April 4, 2008). "1860 'Phonautograph' Is Earliest Known Recording". NPR. Retrieved 2012-12-09.
  10. https://www.reddit.com/r/MachineLearning/comments/gh9cbg/p_generate_handwriting_with_an_inbrowser/
  11. "What is IWR? (Intelligent Word Recognition)". eFileCabinet. 2016-01-04. Retrieved 2021-09-21.
  12. whois facepinpoint.com
  13. https://www.facepinpoint.com/aboutus
  14. whois aiaaic.org
  15. https://web.archive.org/web/20160630154819/https://www.darpa.mil/program/media-forensics
  16. https://web.archive.org/web/20191108090036/https://www.darpa.mil/program/semantic-forensics
  17. https://www.nist.gov/itl/iad/mig/open-media-forensics-challenge
  18. https://venturebeat.com/2020/06/12/facebook-detection-challenge-winners-spot-deepfakes-with-82-accuracy/
  19. https://www.nist.gov/itl/iad/mig/open-media-forensics-challenge
  20. https://www.crunchbase.com/organization/thatsmyface-com
  21. whois facepinpoint.com
  22. https://www.facepinpoint.com/aboutus
  23. Rosner, Helen (2021-07-15). "A Haunting New Documentary About Anthony Bourdain". w:The New Yorker. Retrieved 2021-08-25.
  24. https://www.partnershiponai.org/aiincidentdatabase/
  25. Johnson, R.J. (2019-12-30). "Here Are the New California Laws Going Into Effect in 2020". KFI. iHeartMedia. Retrieved 2021-01-23.
  26. "AB 602 - California Assembly Bill 2019-2020 Regular Session - Depiction of individual using digital or electronic technology: sexually explicit material: cause of action". openstates.org. openstates.org. Retrieved 2021-03-24.
  27. Mihalcik, Carrie (2019-10-04). "California laws seek to crack down on deepfakes in politics and porn". w:cnet.com. w:CNET. Retrieved 2021-01-23.
  28. "China seeks to root out fake news and deepfakes with new online content rules". w:Reuters.com. w:Reuters. 2019-11-29. Retrieved 2021-01-23.
  29. Statt, Nick (2019-11-29). "China makes it a criminal offense to publish deepfakes or fake news without disclosure". w:The Verge. Retrieved 2021-01-23.
  30. "Relating to the creation of a criminal offense for fabricating a deceptive video with intent to influence the outcome of an election". w:Texas. 2019-06-14. Retrieved 2021-01-23. In this section, "deep fake video" means a video, created with the intent to deceive, that appears to depict a real person performing an action that did not occur in reality
  31. https://capitol.texas.gov/BillLookup/History.aspx?LegSess=86R&Bill=SB751
  32. "New state laws go into effect July 1".
  33. "§ 18.2-386.2. Unlawful dissemination or sale of images of another; penalty". w:Virginia. Retrieved 2021-01-23.
  34. "NVIDIA Open-Sources Hyper-Realistic Face Generator StyleGAN". Medium.com. 2019-02-09. Retrieved 2020-07-13.
  35. Harwell, Drew (2018-12-30). "Fake-porn videos are being weaponized to harass and humiliate women: 'Everybody is a potential target'". w:The Washington Post. Retrieved 2020-07-13. In September [of 2018], Google added “involuntary synthetic pornographic imagery” to its ban list
  36. Kuo, Lily (2018-11-09). "World's first AI news anchor unveiled in China". Retrieved 2020-07-13.
  37. Hamilton, Isobel Asher (2018-11-09). "China created what it claims is the first AI news anchor — watch it in action here". Retrieved 2020-07-13.
  38. Suwajanakorn, Supasorn; Seitz, Steven; Kemelmacher-Shlizerman, Ira (2017), Synthesizing Obama: Learning Lip Sync from Audio, University of Washington, retrieved 2020-07-13
  39. Giardina, Carolyn (2015-03-25). "'Furious 7' and How Peter Jackson's Weta Created Digital Paul Walker". The Hollywood Reporter. Retrieved 2020-07-13.
  40. ReForm - Hollywood's Creating Digital Clones (youtube). The Creators Project. 2020-07-13.
  41. Debevec, Paul. "Digital Ira SIGGRAPH 2013 Real-Time Live". Retrieved 2017-07-13.
  42. In this TED talk video at 00:04:59 you can see two clips, one with the real Emily shot with a real camera and one with a digital look-alike of Emily, shot with a simulation of a camera - Which is which is difficult to tell. Bruce Lawmen was scanned using USC light stage 6 in still position and also recorded running there on a w:treadmill. Many, many digital look-alikes of Bruce are seen running fluently and natural looking at the ending sequence of the TED talk video.
  43. Pighin, Frédéric. "Siggraph 2005 Digital Face Cloning Course Notes" (PDF). Retrieved 2020-06-26.
  44. https://ict.usc.edu/about/
  45. "Images de synthèse : palme de la longévité pour l'ombrage de Gouraud".
  46. Mechanismus der menschlichen Sprache nebst der Beschreibung seiner sprechenden Maschine ("Mechanism of the human speech with description of its speaking machine", J. B. Degen, Wien).
  47. History and Development of Speech Synthesis, Helsinki University of Technology, Retrieved on November 4, 2006

