Scratchpad

From Stop Synthetic Filth! wiki
'''INCOMING'''


* [https://www.youtube.com/watch?v=94d_h_t2QAA '''''Donald Sherman orders a pizza using a talking computer, Dec 4, 1974''''' at Youtube.com]. This speech synthesizer, developed at [[w:Michigan State University]], is claimed to be the first one in history used as a speech [[w:prosthesis]], on '''1974'''-12-04. Domino's missed its chance to make history by hanging up on the call, thinking it was just a prank; Mr. Mike's pizza thus became the first pizza parlor to take a pizza order placed via speech synthesis.
* [https://jmpelletier.com/the-birth-of-the-synthetic-voice/ '''''The Birth of the Synthetic Voice''''' at jmpelletier.com]<ref group="1st seen in">https://www.reddit.com/r/VocalSynthesis/</ref>, written by Jean-Marc Pelletier, holds the view that the first voice synthesis took place in 1931 and was not done with computers.
* https://vision.berkeley.edu/posts/deep-fakes
* https://uk.pcmag.com/features/95255/when-ai-blurs-the-line-between-reality-and-fiction - 2018-06-07
* https://www.lawfareblog.com/alls-clear-deepfakes-think-again by [[w:Bobby Chesney|Robert "Bobby" Chesney]], [[w:Danielle Citron]] and [[w:Hany Farid]], published on Monday 2020-05-11 on the [[w:Lawfare (blog)]] of [[w:Lawfare]].<ref name="Lawfare2020">
{{Cite web
|url=https://www.lawfareblog.com/alls-clear-deepfakes-think-again
|title=All’s Clear for Deepfakes: Think Again
|last=Chesney
|first=Bobby
|last2=Citron
|first2=Danielle
|last3=Farid
|first3=Hany
|date=2020-05-11
|website=Lawfare (blog)
|publisher=Lawfare
|access-date=2021-08-24
|language=en
|archive-url=https://web.archive.org/web/20210824154249/https://www.lawfareblog.com/alls-clear-deepfakes-think-again
|archive-date=2021-08-24
|quote=Consider the fate that befell journalist and human rights activist Rana Ayyub. When a deepfake sex video appeared in April 2018 showing Ayyub engaged in a sex act in which she never engaged, the video spread like wildfire. Within 48 hours, the video appeared on more than half of the cellphones in India. Ayyub’s Facebook profile and Twitter account were overrun with death and rape threats. Posters disclosed her home addressed and claimed that she was available for anonymous sex.
}}
</ref>
* https://www.researchgate.net/scientific-contributions/Supasorn-Suwajanakorn-2056892335
* https://blogs.microsoft.com/on-the-issues/2020/09/01/disinformation-deepfakes-newsguard-video-authenticator/ - announcing the '''Microsoft Video Authenticator''' on 2020-09-01
* https://www.techrepublic.com/article/deepfakes-microsoft-and-others-in-big-tech-are-working-to-bring-authenticity-to-videos-photos/
* https://www.ted.com/talks/danielle_citron_how_deepfakes_undermine_truth_and_threaten_democracy by [[w:Danielle Citron]] at the TEDSummit 2019
* [[w:Existential risk from artificial general intelligence]]
* https://www.humane-ai.eu/


{{#ev:youtube|k8X_Em-NQn0|640px|right|[[w:Maeil Broadcasting Network]] in South Korea published an AI news anchor in '''November 2020''', made in conjunction with MoneyBrain. You can enable YouTube's auto-generated '''English subtitles''' by turning on closed captioning (CC) and then selecting them via the settings icon on the video control bar.}}
{{#ev:youtube|dCKbRCUyop8|640px|right|'''''Face editing with Generative Adversarial Networks''''' by ''Arxiv Insights'' on YouTube. Premiered '''2019-09'''-13}}
 
{{#ev:youtube|cQ54GDm1eL0|640px|right|An '''April 2018''' digital look-and-sound-alike made of President Obama, uploaded by BuzzFeedVideo}}
 
''' Videos on display '''
* [[w:Maeil Broadcasting Network]] in South Korea published an AI news anchor. It is a digital look-and-sound-alike that is also scripted by an AI
* [https://www.youtube.com/watch?v=dCKbRCUyop8 '''''Face editing with Generative Adversarial Networks''''' by ''Arxiv Insights'' on YouTube], premiered 2019-09-13
* [https://www.youtube.com/watch?v=cQ54GDm1eL0 '''''You Won’t Believe What Obama Says In This Video!''''' at YouTube], an April 2018 digital look-and-sound-alike made of President Obama, uploaded by BuzzFeedVideo


''' Reporting '''
* [https://uk.pcmag.com/security/117402/us-lawmakers-ai-generated-fake-videos-may-be-a-security-threat '''''US Lawmakers: AI-Generated Fake Videos May Be a Security Threat''''' at uk.pcmag.com], 2018-09-13 reporting by Michael Kan
* [https://www.washingtonpost.com/technology/2020/01/07/dating-apps-need-women-advertisers-need-diversity-ai-companies-offer-solution-fake-people/ '''''Dating apps need women. Advertisers need diversity. AI companies offer a solution: Fake people''''' at washingtonpost.com], 2020-01-07 technology reporting by Drew Harwell


== AI ==
* [https://www.transcend.org/tms/2013/09/shall-we-play-a-game-the-rise-of-the-military-entertainment-complex/ '''''"Shall We Play a Game?: The Rise of the Military-Entertainment Complex"''''' at transcend.org]<ref group="1st seen in">[[w:Institute for Creative Technologies]]</ref> (2013-09-23)
* '''''"Synthetic media, ethics, & commercial potential"''''' at samsungnext.com (2020-11-07), noticed to be 404 on 2022-02-22. [https://web.archive.org/web/20210307052547/https://www.samsungnext.com/blog/the-ethics-of-synthetic-media '''''"Synthetic media, ethics, & commercial potential"''''' archived at archive.org / originally at samsungnext.com]
* [https://venturebeat.com/2020/05/12/facebook-is-using-more-ai-to-detect-hate-speech/ '''''"Facebook is using more AI to detect hate speech"''''' at venturebeat.com] (2020-05-12)
* Harvest [https://venturebeat.com/wp-content/uploads/2020/08/Synthethic-Media-Landscape.jpg this chart] for info on companies and their products and services.
== Assessments ==
* [https://carnegieendowment.org/2020/07/08/deepfakes-and-synthetic-media-in-financial-system-assessing-threat-scenarios-pub-82237 '''''Deepfakes and Synthetic Media in the Financial System: Assessing Threat Scenarios''''' at carnegieendowment.org], a 2020-07-08 assessment that identifies some types of criminality that can be committed using [[synthetic human-like fakes]].




== Content moved elsewhere ==

=== Digital look-alikes, lighting capture, simulation and relighting ===

=== Digital sound-alikes ===


== Synthetic media landscape ==

* https://www.syntheticmedialandscape.com/ by Samsung Next. Downloading the guide requires contact info. It is the source of https://venturebeat.com/wp-content/uploads/2020/08/Synthethic-Media-Landscape.jpg, a chart of the industry.

== Sources for links ==

== Misc. ==

A side-by-side comparison of videos: to the left, a scene from the 2013 motion picture [[w:Man of Steel (film)]]; to the right, the same scene modified using [[w:deepfake]] technology.

''Man of Steel'' was produced by DC Entertainment and Legendary Pictures and distributed by Warner Bros. Pictures. The modification was done by Reddit user "derpfakes".

This is a sample from a copyrighted video recording. The person who uploaded this work and first used it in an article, and subsequent people who use it in articles, assert that this qualifies as fair use.

== References ==

<references />

== 1st seen in ==

<references group="1st seen in" />