
Now, in the late 2010s, the equivalent thing is happening to our '''voices''', i.e. they '''can be stolen''' to some extent with 2016 prototypes like [[w:Adobe Inc.]]'s [[w:Adobe Voco|w:Adobe Voco]] and [[w:Google]]'s [[w:DeepMind]] [[w:WaveNet]] and '''made to say anything'''. When it is not possible to determine, by human listening or by technological means, whether an audio clip is a recording of some living or dead person's real voice or a simulation, it is a '''[[digital sound-alikes|digital sound-alike]]'''. 2018 saw the publication of Google Research's sound-like-anyone machine [[#'Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis' 2018 by Google Research (external transclusion)|(transcluded below)]] at the [[w:NeurIPS]] conference, and by the end of '''2019''' Symantec research had learned of three cases where digital sound-alike technology '''had been used for crimes'''.<ref name="WaPo2019">
{{cite web
|url= https://www.washingtonpost.com/technology/2019/09/04/an-artificial-intelligence-first-voice-mimicking-software-reportedly-used-major-theft/
|title= An artificial-intelligence first: Voice-mimicking software reportedly used in a major theft
|last= Harwell
|first= Drew
|date= 2019-09-04
|website= [[w:washingtonpost.com]]
|publisher= [[w:Washington Post]]
|access-date= 2021-01-23
|quote= Thieves used voice-mimicking software to imitate a company executive’s speech and dupe his subordinate into sending hundreds of thousands of dollars to a secret account, the company’s insurer said}}
</ref>


Therefore it is high time to '''act''' and to build the '''[[Adequate Porn Watcher AI (concept)]]''' to protect humanity from visual synthetic filth and to '''[[Law proposals#Law proposal to ban covert modeling of human voice|criminalize covert modeling of the naked human voice and synthesis from a covert voice model]]!'''