When it cannot be determined by human testing whether a voice is a synthetic fake of some person's voice or an actual recording of that person's real voice, it is a '''digital sound-alike'''.

Living people can defend¹ themselves against a digital sound-alike by denying the things the digital sound-alike says, if the fabricated recording is presented to them, but dead people cannot. Digital sound-alikes offer criminals new disinformation attack vectors and wreak havoc on provability.
[[File:Spectrogram-19thC.png|thumb|right|640px|A [[w:spectrogram|spectrogram]] of a male voice saying 'nineteenth century']]
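The spectrogram shown here can be reproduced from any short voice recording. Below is a minimal sketch in Python of how such a figure is computed, assuming a mono WAV file with the hypothetical filename <code>nineteenth_century.wav</code> and the NumPy, SciPy and Matplotlib libraries; it is only an illustration of the figure above, not part of any sound-alike system.

<syntaxhighlight lang="python">
# Minimal sketch: compute and plot a spectrogram of a short voice recording.
# Assumes a mono WAV file named "nineteenth_century.wav" (hypothetical filename).
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, samples = wavfile.read("nineteenth_century.wav")   # sample rate in Hz, PCM samples
if samples.ndim > 1:                                     # keep only the first channel if stereo
    samples = samples[:, 0]

# Short-time Fourier analysis: frequencies, frame times and power per time-frequency bin
freqs, times, power = spectrogram(samples, fs=rate, nperseg=1024, noverlap=512)

plt.pcolormesh(times, freqs, 10 * np.log10(power + 1e-12), shading="auto")  # dB scale
plt.xlabel("Time [s]")
plt.ylabel("Frequency [Hz]")
plt.title("Spectrogram of a male voice saying 'nineteenth century'")
plt.colorbar(label="Power [dB]")
plt.show()
</syntaxhighlight>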
----

== Timeline of digital sound-alikes ==
* In '''2016''' [[w:Adobe Inc.]]'s [[w:Adobe Voco|Voco]], an unreleased prototype, was publicly demonstrated. ([https://www.youtube.com/watch?v=I3l4XLZ59iw&t=5s View and listen to the Adobe MAX 2016 presentation of Voco])
* In '''2016''' [[w:DeepMind]]'s [[w:WaveNet]], owned by [[w:Google]], also demonstrated the ability to steal people's voices.
* In '''2018''' at the [[w:Conference on Neural Information Processing Systems|Conference on Neural Information Processing Systems]] the work [http://papers.nips.cc/paper/7700-transfer-learning-from-speaker-verification-to-multispeaker-text-to-speech-synthesis 'Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis'] ([https://arxiv.org/abs/1806.04558 at arXiv.org]) was presented. The pre-trained model is able to steal a voice from a sample of only '''5 seconds''' with almost convincing results. (A structural sketch of this pipeline is given below, after the timeline.)
** Listen [https://google.github.io/tacotron/publications/speaker_adaptation/ 'Audio samples from "Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis"']
** View [https://www.youtube.com/watch?v=0sR1rU3gLzQ Video summary of the work at YouTube: 'This AI Clones Your Voice After Listening for 5 Seconds']
* As of '''2019''' Symantec research knows of 3 cases where digital sound-alike technology '''has been used for crimes'''.<ref name="WaPo2019">https://www.washingtonpost.com/technology/2019/09/04/an-artificial-intelligence-first-voice-mimicking-software-reportedly-used-major-theft/</ref>
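To make the three-stage architecture of the 2018 work above easier to follow, the sketch below lays out its data flow in Python: a speaker encoder maps roughly 5 seconds of reference speech to a fixed-length voice embedding, a synthesizer produces a mel spectrogram for arbitrary text conditioned on that embedding, and a vocoder converts the spectrogram into a waveform. All three stages are represented here by dummy stand-in functions that return random arrays; this is not the paper's code, uses no real model and produces only noise, so it only illustrates the structure of such systems.

<syntaxhighlight lang="python">
# Structural sketch of the pipeline described in 'Transfer Learning from Speaker
# Verification to Multispeaker Text-To-Speech Synthesis' (2018). The three model
# stages are replaced with dummy stand-ins so that only the data flow is shown;
# this is NOT the paper's code and it produces only random noise.
import numpy as np

EMBEDDING_DIM = 256   # size of the fixed-length speaker embedding
N_MELS = 80           # number of mel-spectrogram channels (a common choice)
SAMPLE_RATE = 16000   # assumed sample rate of the reference recording

def speaker_encoder(reference_audio: np.ndarray) -> np.ndarray:
    """Stage 1: map ~5 seconds of reference speech to a fixed-length voice embedding."""
    return np.random.rand(EMBEDDING_DIM)        # dummy stand-in for the trained encoder

def synthesizer(text: str, voice_embedding: np.ndarray) -> np.ndarray:
    """Stage 2: produce a mel spectrogram of the text, conditioned on the embedding."""
    n_frames = 50 * max(len(text) // 10, 1)
    return np.random.rand(N_MELS, n_frames)     # dummy stand-in for the trained synthesizer

def vocoder(mel_spectrogram: np.ndarray) -> np.ndarray:
    """Stage 3: convert the mel spectrogram into a raw audio waveform."""
    n_samples = mel_spectrogram.shape[1] * 200
    return np.random.randn(n_samples)           # dummy stand-in for the trained vocoder

# Data flow of a digital sound-alike: a few seconds of someone's voice in,
# arbitrary speech in (an imitation of) that voice out.
reference_clip = np.random.randn(5 * SAMPLE_RATE)   # stands in for 5 s of recorded voice
embedding = speaker_encoder(reference_clip)
mel = synthesizer("Arbitrary text the person never said.", embedding)
waveform = vocoder(mel)
print(embedding.shape, mel.shape, waveform.shape)
</syntaxhighlight>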
----

== Examples of speech synthesis software not quite able to fool a human yet ==
There are some other contenders working to create digital sound-alikes, though as of 2019 their speech synthesis in most use scenarios does not yet fool a human, because the results contain tell-tale signs that give them away as coming from a speech synthesizer.
* '''[https://lyrebird.ai/ Lyrebird.ai]''' [https://www.youtube.com/watch?v=xxDBlZu__Xk (listen)]
* '''[https://candyvoice.com/ CandyVoice.com]''' [https://candyvoice.com/demos/voice-conversion (test with your choice of text)]
* '''[https://cstr-edinburgh.github.io/merlin/ Merlin]''', a [[w:neural network]]-based speech synthesis system by the Centre for Speech Technology Research at the [[w:University of Edinburgh]]
== Documented digital sound-alike attacks ==
* [https://www.washingtonpost.com/technology/2019/09/04/an-artificial-intelligence-first-voice-mimicking-software-reportedly-used-major-theft/?noredirect=on 'An artificial-intelligence first: Voice-mimicking software reportedly used in a major theft'], a 2019 Washington Post article
----

== Example of a hypothetical digital sound-alike attack ==
A very simple example of a digital sound-alike attack is as follows:

Someone uses a digital sound-alike to call somebody's voicemail from an unknown number and to speak, for example, illegal threats. In this example there are at least two victims:
# Victim #1 - The person whose voice has been covertly stolen into a model, from which a digital sound-alike was made to frame them for crimes
# Victim #2 - The person to whom the illegal threat is presented in recorded form by a digital sound-alike that deceptively sounds like victim #1
# Victim #3 - It could also be viewed that victim #3 is our law enforcement systems, as they are put to chase after and interrogate the innocent victim #1
# Victim #4 - Our judiciary, which prosecutes and possibly convicts the innocent victim #1
Thus it is high time to act and to '''[[Law proposals to ban covert modeling|criminalize the covert modeling of human appearance and voice!]]'''

----
== See also in Ban Covert Modeling! wiki ==

* [[How to protect yourself from covert modeling]]
* [[Digital look-alikes]]

----
Footnote 1. Whether a suspect can defend against faked synthetic speech that sounds like them depends on how up-to-date the judiciary is. If no information and instructions about digital sound-alikes have been given to the judiciary, it is unlikely to believe a defense that denies the recording is of the suspect's voice.