Digital sound-alikes
Revision as of 14:58, 3 April 2019
When human testing cannot determine whether a synthesized recording is a simulation of some person's speech or a recording of that person's actual voice, it is a digital sound-alike.
Living people can defend¹ themselves against a digital sound-alike by denying the things the digital sound-alike says, if the claims are presented to the target, but dead people cannot. Digital sound-alikes offer criminals new disinformation attack vectors and wreak havoc on provability.
Example of a digital sound-alike attack
A very simple example of a digital sound-alike attack is as follows:
Someone uses a digital sound-alike to call somebody's voicemail from an unknown number and to speak, for example, illegal threats. In this example there are at least two victims:
- Victim #1 - The person whose voice has been stolen into a covert model and a digital sound-alike made from it to frame them for crimes
- Victim #2 - The person to whom the illegal threat is presented in a recorded form by a digital sound-alike that deceptively sounds like victim #1
- Victim #3 - It could also be viewed that victim #3 is our law enforcement system, as it is made to chase after and interrogate the innocent victim #1
- Victim #4 - Our judiciary which prosecutes and possibly convicts the innocent victim #1.
Thus it is high time to act and to criminalize the covert modeling of human appearance and voice!
Examples of software capable of making a digital sound-alike
- w:Adobe Inc.'s Voco unreleased prototype publicly demonstrated in 2016
- w:DeepMind's w:WaveNet (DeepMind was acquired by w:Google in 2014)
Neither of these programs is available to the masses at large according to the "official truth", but software has a well-known tendency to get pirated very quickly.
Footnote 1. Whether a suspect can defend against faked synthetic speech that sounds like him/her depends on how up-to-date the judiciary is. If no information and instructions about digital sound-alikes have been given to the judiciary, they likely will not believe the defense of denying that the recording is of the suspect's voice.