Synthetic human-like fakes
'''Definitions''' <section begin=definitions-of-synthetic-human-like-fakes /> When the '''[[Glossary#No camera|camera does not exist]]''', but the subject being imaged with a simulation of a (movie) camera deceives the viewer into believing it is some living or dead person, it is a '''[[Synthetic human-like fakes#Digital look-alikes|digital look-alike]]'''. In 2017–2018 this started to be referred to as a [[w:deepfake]], even though altering video footage of humans with a computer to deceptive effect is actually some 20 years older than the name "deep fakes" or "deepfakes".<ref name="Bohacek and Farid 2022 protecting against fakes"> {{cite journal | last1 = Boháček | first1 = Matyáš | last2 = Farid | first2 = Hany | date = 2022-11-23 | title = Protecting world leaders against deep fakes using facial, gestural, and vocal mannerisms | url = https://www.pnas.org/doi/10.1073/pnas.2216035119 | journal = [[w:Proceedings of the National Academy of Sciences of the United States of America]] | volume = 119 | issue = 48 | pages = | doi = 10.1073/pnas.2216035119 | access-date = 2023-01-05 }} </ref><ref name="Bregler1997"> {{cite journal | last1 = Bregler | first1 = Christoph | last2 = Covell | first2 = Michele | last3 = Slaney | first3 = Malcolm | date = 1997-08-03 | title = Video Rewrite: Driving Visual Speech with Audio | url = https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/human/bregler-sig97.pdf | journal = SIGGRAPH '97: Proceedings of the 24th annual conference on Computer graphics and interactive techniques | volume = | issue = | pages = 353-360 | doi = 10.1145/258734.258880 | access-date = 2022-09-09 }} </ref> When it cannot be determined by human testing or media forensics whether some fake voice is a synthetic fake of some person's voice or an actual recording of that person's real voice, it is a pre-recorded '''[[Synthetic human-like fakes#Digital sound-alikes|digital sound-alike]]'''. This is now commonly referred to as an [[w:audio deepfake]].
'''Real-time digital look-and-sound-alike''' in a video call was used to defraud a substantial amount of money in 2023.<ref name="Reuters real-time digital look-and-sound-alike crime 2023"> {{cite web | url = https://www.reuters.com/technology/deepfake-scam-china-fans-worries-over-ai-driven-fraud-2023-05-22/ | title = 'Deepfake' scam in China fans worries over AI-driven fraud | last = | first = | date = 2023-05-22 | website = [[w:Reuters.com]] | publisher = [[w:Reuters]] | access-date = 2023-06-05 | quote = }} </ref> <section end=definitions-of-synthetic-human-like-fakes /> ::[[Synthetic human-like fakes|Read more about '''synthetic human-like fakes''']], see and support '''[[organizations and events against synthetic human-like fakes]]''' and what they are doing, what kinds of '''[[Laws against synthesis and other related crimes]]''' have been formulated, [[Synthetic human-like fakes#Timeline of synthetic human-like fakes|examine the SSFWIKI '''timeline''' of synthetic human-like fakes]] or [[Mediatheque|view the '''Mediatheque''']]. [[File:Screenshot at 27s of a moving digital-look-alike made to appear Obama-like by Monkeypaw Productions and Buzzfeed 2018.png|thumb|right|480px|link=Mediatheque/2018/Obama's appearance thieved - a public service announcement digital look-alike by Monkeypaw Productions and Buzzfeed|{{#lst:Mediatheque|Obama-like-fake-2018}}]] [[File:BlV1999-morphable-model-till-match-low-res-rip.png|thumb|right|460px|Image 2 (low resolution rip) shows a 1999 technique for sculpting a morphable model until it matches the target's appearance.
<br/>(1) Sculpting a morphable model to one single picture <br/>(2) Produces 3D approximation <br/>(3) Texture capture <br/>(4) The 3D model is rendered back to the image with weight gain <br/>(5) With weight loss <br/>(6) Looking annoyed <br/>(7) Forced to smile <small>Image 2 by Blanz and Vetter – Copyright ACM 1999 – http://dl.acm.org/citation.cfm?doid=311535.311556 – Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page.</small>]]
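The caption above describes the core idea of morphable-model fitting: a face is represented as a mean shape plus weighted basis deformations, and "sculpting" means solving for the weights that make the model match the target image. The following is a minimal, hypothetical sketch of that idea only — a toy linear model fitted to synthetic landmark data by least squares, not the actual analysis-by-synthesis pipeline of the 1999 Blanz–Vetter paper:

```python
import numpy as np

# Toy linear morphable model: a "face" is the mean shape plus a
# weighted sum of basis deformations. All data here are synthetic
# placeholders (5 landmark points in 2D, 3 basis components).
rng = np.random.default_rng(0)
n_points, n_components = 5, 3
mean_shape = rng.normal(size=(n_points, 2))
basis = rng.normal(size=(n_components, n_points, 2))

# Hypothetical target: landmarks of the face the model should match.
true_coeffs = np.array([0.5, -1.0, 0.25])
target = mean_shape + np.tensordot(true_coeffs, basis, axes=1)

# "Sculpting until it matches" = solving for the coefficients that
# minimize the distance between model landmarks and target landmarks.
A = basis.reshape(n_components, -1).T       # (2*n_points, n_components)
b = (target - mean_shape).ravel()
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)

fitted = mean_shape + np.tensordot(coeffs, basis, axes=1)
```

In this noiseless toy setting the least-squares solve recovers the coefficients exactly; the real technique additionally models texture and camera parameters and iterates against rendered images rather than bare landmarks.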