Synthetic human-like fakes

# Defensively, to hide one's handwriting style from public view
# Offensively, to steal somebody else's handwriting style
Here we find a risk similar to the one that materialized when [[w:speaker recognition]] systems turned out to be instrumental in the development of [[#Digital sound-alikes|digital sound-alikes]]. After the knowledge needed to recognize a speaker was [[w:Transfer learning|w:transferred]] into a generative task by Google researchers in 2018, we can no longer effectively determine, for English speakers, which recording is of human origin and which is of machine origin.
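The sketch below illustrates that transfer-learning pattern in Python with PyTorch: an encoder trained for speaker recognition is reused, frozen, to condition a generative synthesizer on a target voice. All module names, dimensions, and the checkpoint path are illustrative assumptions for this sketch, not the architecture of the actual 2018 system.

<syntaxhighlight lang="python">
# Minimal sketch: reuse a speaker-recognition encoder (recognition task)
# to condition a toy synthesizer (generative task). Hypothetical names/sizes.
import torch
import torch.nn as nn


class SpeakerEncoder(nn.Module):
    """Maps a mel-spectrogram of reference speech to a fixed speaker embedding."""

    def __init__(self, n_mels: int = 80, embed_dim: int = 256):
        super().__init__()
        self.rnn = nn.LSTM(n_mels, embed_dim, batch_first=True)

    def forward(self, mel: torch.Tensor) -> torch.Tensor:
        # mel: (batch, frames, n_mels); use the last hidden state as the embedding
        _, (h, _) = self.rnn(mel)
        return nn.functional.normalize(h[-1], dim=-1)


class ConditionedSynthesizer(nn.Module):
    """Toy text-to-speech-features model conditioned on a speaker embedding."""

    def __init__(self, vocab: int = 100, embed_dim: int = 256, n_mels: int = 80):
        super().__init__()
        self.text_embed = nn.Embedding(vocab, embed_dim)
        self.decoder = nn.GRU(embed_dim * 2, embed_dim, batch_first=True)
        self.to_mel = nn.Linear(embed_dim, n_mels)

    def forward(self, text: torch.Tensor, speaker: torch.Tensor) -> torch.Tensor:
        # Broadcast the speaker embedding to every text position (the conditioning step)
        t = self.text_embed(text)                           # (batch, T, E)
        s = speaker.unsqueeze(1).expand(-1, t.size(1), -1)  # (batch, T, E)
        out, _ = self.decoder(torch.cat([t, s], dim=-1))
        return self.to_mel(out)                             # predicted mel frames


encoder = SpeakerEncoder()
# In practice the encoder weights would come from a model pretrained on
# speaker verification, e.g. (hypothetical checkpoint path):
# encoder.load_state_dict(torch.load("speaker_verification_encoder.pt"))
for p in encoder.parameters():
    p.requires_grad = False  # transfer learning: the recognition knowledge is reused frozen

synth = ConditionedSynthesizer()
reference_mel = torch.randn(1, 120, 80)    # stand-in for a few seconds of the target voice
text_ids = torch.randint(0, 100, (1, 30))  # stand-in for tokenized input text
with torch.no_grad():
    speaker_embedding = encoder(reference_mel)
predicted_mel = synth(text_ids, speaker_embedding)  # speech features imitating the target voice
</syntaxhighlight>

Only the synthesizer would be trained on the generative task; the frozen encoder supplies the voice identity, which is what makes the recognition knowledge reusable for imitation.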


Some syntheses:
* Recurrent neural network handwriting generation demo at cs.toronto.edu, a demonstration site for the publication ''Generating Sequences With Recurrent Neural Networks'' at arxiv.org