When human listeners cannot determine whether a synthesized recording is a simulation of some person's speech or a recording of that person's actual voice, the simulation is a '''digital sound-alike'''.

Living people can defend¹ themselves against a digital sound-alike by denying the things it appears to say when the fabricated recording is presented to them, but dead people cannot. Digital sound-alikes offer criminals new disinformation attack vectors and wreak havoc on the provability of audio recordings.

----
== Examples of speech synthesis software capable of making digital sound-alikes ==

* [[w:Adobe Inc.]]'s [[w:Adobe Voco|Voco]], an unreleased prototype publicly demonstrated in 2016 ([https://www.youtube.com/watch?v=I3l4XLZ59iw&t=5s view and listen to the Adobe MAX 2016 presentation of Voco])

* [[w:DeepMind]]'s [[w:WaveNet]] (DeepMind was acquired by [[w:Google]] in 2014)

Neither of these software packages is available to the masses at large according to the "official truth", but as is well known, software has a high tendency to get pirated very quickly.
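
While Voco was never released and the original WaveNet implementation was not published, WaveNet-based voices did later become reachable through Google's Cloud Text-to-Speech service. The following is a minimal sketch, assuming the <code>google-cloud-texttospeech</code> Python client and valid service-account credentials; it produces a stock WaveNet voice, not a sound-alike of any particular person.

<syntaxhighlight lang="python">
# Minimal sketch: WaveNet-quality synthesis via Google Cloud Text-to-Speech.
# Assumes the google-cloud-texttospeech client library is installed and
# GOOGLE_APPLICATION_CREDENTIALS points to valid credentials.
# This yields a stock voice, NOT a clone of any specific person's voice.
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()

synthesis_input = texttospeech.SynthesisInput(
    text="This sentence was never spoken by a human."
)
voice = texttospeech.VoiceSelectionParams(
    language_code="en-US",
    name="en-US-Wavenet-D",  # one of the published stock WaveNet voices
)
audio_config = texttospeech.AudioConfig(
    audio_encoding=texttospeech.AudioEncoding.LINEAR16
)

response = client.synthesize_speech(
    input=synthesis_input, voice=voice, audio_config=audio_config
)
with open("wavenet_sample.wav", "wb") as f:
    f.write(response.audio_content)
</syntaxhighlight>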
----

== Examples of speech synthesis software not quite able to fool a human yet ==

There are other contenders for creating digital sound-alikes, though as of 2019 their speech synthesis does not yet fool a human in most use scenarios, because the results contain telltale signs that give them away as the output of a speech synthesizer. (One informal way of looking for such artifacts is sketched after the list below.)
* [https://lyrebird.ai/ Lyrebird.ai] [https://www.youtube.com/watch?v=xxDBlZu__Xk (listen)]

* [https://candyvoice.com/ CandyVoice.com] [https://candyvoice.com/demos/voice-conversion (test with your choice of text)]
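
As an informal illustration of looking for such telltale signs, the sketch below plots a mel spectrogram of a recording for visual inspection (for example, unnaturally smooth harmonics or missing high-frequency detail can stand out). It assumes the Python libraries <code>librosa</code> and <code>matplotlib</code>; the filename <code>suspect.wav</code> is a hypothetical example. This is an aid for the eye and ear, not a reliable detector.

<syntaxhighlight lang="python">
# Informal sketch: visualize a recording's mel spectrogram to eyeball
# possible speech-synthesis artifacts. Assumes librosa and matplotlib
# are installed; "suspect.wav" is a hypothetical example file.
import librosa
import librosa.display
import matplotlib.pyplot as plt
import numpy as np

y, sr = librosa.load("suspect.wav", sr=None)  # keep the native sample rate

S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
S_db = librosa.power_to_db(S, ref=np.max)  # convert power to decibels

plt.figure(figsize=(10, 4))
librosa.display.specshow(S_db, sr=sr, x_axis="time", y_axis="mel")
plt.colorbar(format="%+2.0f dB")
plt.title("Mel spectrogram of the recording under inspection")
plt.tight_layout()
plt.show()
</syntaxhighlight>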
== Example of a digital sound-alike attack ==

A very simple example of a digital sound-alike attack is as follows:

Someone uses a digital sound-alike to call somebody's voicemail from an unknown number and to speak, for example, illegal threats. In this example there are at least two victims:
# Victim #1 - The person whose voice has been stolen into a covert model and used to make a digital sound-alike that frames them for crimes
# Victim #2 - The person to whom the illegal threat is presented in recorded form by a digital sound-alike that deceptively sounds like victim #1
# Victim #3 - It could also be argued that our law enforcement systems are victims, as they are put to chase after and interrogate the innocent victim #1
# Victim #4 - Likewise our judiciary, which prosecutes and possibly convicts the innocent victim #1
Thus it is high time to act and to '''[[Law proposals to ban covert modeling|criminalize the covert modeling of human appearance and voice]]!'''

----
== See also in Ban Covert Modeling! wiki ==

* [[How to protect yourself from covert modeling]]
* [[Digital look-alikes]]

== See also in Wikipedia ==

* [[w:Speech synthesis]]
----

Footnote 1. Whether a suspect can defend against faked synthetic speech that sounds like them depends on how up-to-date the judiciary is. If the judiciary has been given no information or instructions about digital sound-alikes, it is likely not to believe a defense that denies the recording is of the suspect's voice.