Synthetic human-like fakes: Difference between revisions

m
(→‎2020's synthetic human-like fakes: The Florida university researchers published [the popular article] Tuesday 2022-09-20 and permanently w:copylefted it under Creative Commons Attribution-NoDerivatives (CC-BY-ND))
Line 393:
}}


</ref> and [https://www.usenix.org/system/files/sec22_slides-blue.pdf slides] from researchers of the Florida Institute for Cybersecurity Research (FICS) at the [[w:University of Florida]] received funding from the [[w:Office of Naval Research]] and were presented in August 2022 at the [[w:USENIX]] Security Symposium. The scientists wrote an article on their work, titled [https://theconversation.com/deepfake-audio-has-a-tell-researchers-use-fluid-dynamics-to-spot-artificial-imposter-voices-189104 '''''Deepfake audio has a tell – researchers use fluid dynamics to spot artificial imposter voices''''' at theconversation.com]<ref name="The Conversation 2022">
{{cite web
|url=https://theconversation.com/deepfake-audio-has-a-tell-researchers-use-fluid-dynamics-to-spot-artificial-imposter-voices-189104
Line 407:
|quote=By estimating the anatomy responsible for creating the observed speech, it’s possible to identify whether the audio was generated by a person or a computer.}}


</ref> that was published on Tuesday 2022-09-20 and permanently [[w:copyleft]]ed under Creative Commons Attribution-NoDerivatives (CC-BY-ND).
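The general idea the researchers describe is that the acoustics of a recording constrain the anatomy that could have produced it, so audio whose implied anatomy is not physiologically plausible is suspect. Below is a minimal illustrative sketch of that kind of check, not the FICS researchers' actual pipeline: it fits an all-pole (linear-prediction) vocal-tract filter to short frames and flags frames whose lowest resonances fall outside rough textbook ranges for human formants. The frame sizes, thresholds and frequency ranges are assumptions made here for illustration.

<syntaxhighlight lang="python">
# Illustrative sketch only, not the University of Florida pipeline.
# Source-filter view of speech: fit an all-pole (LPC) vocal-tract filter to
# each frame, read off its resonance frequencies, and flag frames whose
# lowest resonances are implausible for a human vocal tract.
import numpy as np
import librosa  # used here only for loading audio and for librosa.lpc


def frame_resonances(frame, sr, order=12):
    """Resonance frequencies (Hz) of an LPC filter fitted to one frame."""
    a = librosa.lpc(frame * np.hamming(len(frame)), order=order)
    roots = [r for r in np.roots(a) if np.imag(r) > 0]
    freqs = sorted(np.angle(r) * sr / (2 * np.pi) for r in roots)
    return [f for f in freqs if f > 90.0]  # discard near-DC artefacts


def implausible_fraction(y, sr, f1_range=(200, 1000), f2_range=(500, 3000)):
    """Fraction of non-silent frames whose first two resonances fall outside
    rough textbook F1/F2 ranges (the ranges are assumptions of this sketch)."""
    win, hop = int(0.025 * sr), int(0.010 * sr)
    bad = total = 0
    for start in range(0, len(y) - win, hop):
        frame = y[start:start + win]
        if np.sqrt(np.mean(frame ** 2)) < 0.01:  # skip near-silent frames
            continue
        f = frame_resonances(frame, sr)
        if len(f) < 2:
            continue
        total += 1
        if not (f1_range[0] <= f[0] <= f1_range[1]
                and f2_range[0] <= f[1] <= f2_range[1]):
            bad += 1
    return bad / total if total else 0.0


# y, sr = librosa.load("suspect_utterance.wav", sr=16000)
# print("share of implausible frames:", implausible_fraction(y, sr))
</syntaxhighlight>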


* '''2022''' | '''<font color="green">counter-measure</font>''' | [https://arxiv.org/pdf/2206.12043.pdf '''''Protecting President Zelenskyy against deep fakes''''' a 2022 preprint at arxiv.org] by Matyáš Boháček of Johannes Kepler Gymnasium and [[w:Hany Farid]], the dean and head of of [[w:University of California, Berkeley School of Information|w:Berkeley School of Information at the University of California, Berkeley]]. This brief paper describes their automated digital look-alike detection system and evaluate its efficacy and reliability in comparison to humans with untrained eyes. Their work provides automated evaluation tools to catch so called "deep fakes" and their motivation seems to have been to find automation armor against disinformation warfare against humans and the humanity. Automated digital [[Glossary#Media forensics|media forensics]] is a very good idea explored by many.  Boháček and Farid 2022 detection system works by evaluating both facial mannerisms as well as gestural mannerisms to detect the non-human ones from the ones that are human in origin.  
* '''2022''' | '''<font color="green">counter-measure</font>''' | [https://arxiv.org/pdf/2206.12043.pdf '''''Protecting President Zelenskyy against deep fakes''''' a 2022 preprint at arxiv.org] by Matyáš Boháček of Johannes Kepler Gymnasium and [[w:Hany Farid]], the dean and head of of [[w:University of California, Berkeley School of Information|w:Berkeley School of Information at the University of California, Berkeley]]. This brief paper describes their automated digital look-alike detection system and evaluate its efficacy and reliability in comparison to humans with untrained eyes. Their work provides automated evaluation tools to catch so called "deep fakes" and their motivation seems to have been to find automation armor against disinformation warfare against humans and the humanity. Automated digital [[Glossary#Media forensics|media forensics]] is a very good idea explored by many.  Boháček and Farid 2022 detection system works by evaluating both facial mannerisms as well as gestural mannerisms to detect the non-human ones from the ones that are human in origin.  
