Synthetic human-like fakes

  | quote = }}
</ref> Neither the [[w:speech synthesis]] used nor the gesturing of the digital look-alike anchors was good enough to deceive viewers into mistaking them for real humans imaged with a TV camera.
* '''<font color="red">2018</font>''' | '''<font color="red">counter-measure</font>''' | In September 2018 Google added “'''involuntary synthetic pornographic imagery'''” to its '''ban list''', allowing anyone to request that the search engine block results that falsely depict them as “nude or in a sexually explicit situation.”<ref name="WashingtonPost2018">
{{cite web
|url= https://www.washingtonpost.com/technology/2018/12/30/fake-porn-videos-are-being-weaponized-harass-humiliate-women-everybody-is-potential-target/
|title= Fake-porn videos are being weaponized to harass and humiliate women: 'Everybody is a potential target'
|last= Harwell
|first= Drew
|date= 2018-12-30
|website=
|publisher= [[w:The Washington Post]]
|access-date= 2020-07-13
|quote= In September [of 2018], Google added “involuntary synthetic pornographic imagery” to its ban list
}}
</ref> See the '''[https://support.google.com/websearch/answer/9116649?hl=en information on removing involuntary fake pornography at support.google.com]''' if such imagery shows up in Google search results, and the '''[https://support.google.com/websearch/troubleshooter/3111061#ts=2889054%2C2889099%2C2889064%2C9171203 form to request removal of involuntary fake pornography at support.google.com]''', in which you should select "''I want to remove: A fake nude or sexually explicit picture or video of myself''".


[[File:GoogleLogoSept12015.png|thumb|right|300px|[[w:Google|Google]]'s logo. Google Research demonstrated their '''[https://google.github.io/tacotron/publications/speaker_adaptation/ sound-like-anyone-machine]''' at the '''2018''' [[w:Conference on Neural Information Processing Systems|Conference on Neural Information Processing Systems]] (NeurIPS). It requires only a 5-second sample to steal a voice.]]


* '''<font color="red">2018</font>''' | <font color="red">science</font> and <font color="red">demonstration</font> | The work [http://papers.nips.cc/paper/7700-transfer-learning-from-speaker-verification-to-multispeaker-text-to-speech-synthesis 'Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis'] ([https://arxiv.org/abs/1806.04558 at arXiv.org]) was presented at the 2018 [[w:Conference on Neural Information Processing Systems|Conference on Neural Information Processing Systems]] (NeurIPS). The pre-trained model is able to steal voices from a sample of only '''5 seconds''' of speech, with almost convincing results.
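The paper describes a three-stage pipeline: a speaker encoder trained on a speaker-verification task turns a short reference recording into a fixed-length embedding, that embedding conditions a Tacotron 2-style text-to-spectrogram synthesizer, and a WaveNet vocoder renders the waveform. The sketch below is a minimal, hypothetical illustration of that structure written for this article, not the authors' implementation; the class names, layer sizes and tensor shapes are assumptions chosen for readability.

<syntaxhighlight lang="python">
# Illustrative sketch of the SV2TTS idea (not the paper's code).
# Stage 1: speaker encoder -> embedding; stage 2: embedding-conditioned
# synthesizer -> mel spectrogram; stage 3 (the vocoder) is omitted here.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpeakerEncoder(nn.Module):
    """LSTM over mel frames -> L2-normalised speaker embedding."""
    def __init__(self, n_mels=40, hidden=256, emb_dim=256):
        super().__init__()
        self.lstm = nn.LSTM(n_mels, hidden, num_layers=3, batch_first=True)
        self.proj = nn.Linear(hidden, emb_dim)

    def forward(self, mels):                  # mels: (batch, frames, n_mels)
        _, (h, _) = self.lstm(mels)
        return F.normalize(self.proj(h[-1]), dim=1)  # unit-length embedding

class Synthesizer(nn.Module):
    """Toy stand-in: the speaker embedding is concatenated with the text
    encoding at every step and decoded to mel-spectrogram frames."""
    def __init__(self, vocab=64, emb_dim=256, n_mels=80):
        super().__init__()
        self.text_emb = nn.Embedding(vocab, 128)
        self.decoder = nn.GRU(128 + emb_dim, 256, batch_first=True)
        self.to_mel = nn.Linear(256, n_mels)

    def forward(self, text_ids, speaker_emb):
        t = self.text_emb(text_ids)                       # (b, chars, 128)
        s = speaker_emb.unsqueeze(1).expand(-1, t.size(1), -1)
        out, _ = self.decoder(torch.cat([t, s], dim=-1))
        return self.to_mel(out)                           # predicted mels

# ~5 seconds of reference audio as mel frames (random stand-in data).
ref_mels = torch.randn(1, 500, 40)
speaker_emb = SpeakerEncoder()(ref_mels)        # the "stolen" voice identity
mels = Synthesizer()(torch.randint(0, 64, (1, 30)), speaker_emb)
print(mels.shape)  # a real system would feed this to a vocoder such as WaveNet
</syntaxhighlight>

The "transfer learning" of the title is that the speaker encoder is trained separately for speaker verification and then frozen, which is what lets the synthesizer generalise to voices it never saw during its own training.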


* '''2019''' | crime | [https://www.washingtonpost.com/technology/2019/09/04/an-artificial-intelligence-first-voice-mimicking-software-reportedly-used-major-theft/?noredirect=on 'An artificial-intelligence first: Voice-mimicking software reportedly used in a major theft'], a 2019 Washington Post article