Detecting deep-fake audio through vocal tract reconstruction
<section begin=what-is-it />'''''[[Who Are You (I Really Wanna Know)? Detecting Audio DeepFakes Through Vocal Tract Reconstruction]]''''' is an epic scientific work against fake human-like voices from the [[w:University of Florida]], published in August 2022.


The work is available at usenix.org: the paper [https://www.usenix.org/system/files/sec22fall_blue.pdf '''''Who Are You (I Really Wanna Know)? Detecting Audio DeepFakes Through Vocal Tract Reconstruction'''''], its [https://www.usenix.org/conference/usenixsecurity22/presentation/blue presentation page], and the [https://www.usenix.org/system/files/sec22-blue.pdf version included in the proceedings]<ref name="University of Florida 2022">


This work was done by PhD student Logan Blue, Kevin Warren, Hadi Abdullah, Cassidy Gibson, Luis Vargas, Jessica O’Dell, Kevin Butler, and Professor Patrick Traynor.
 
<section end=what-is-it />
The University of Florida researchers' work was funded by the [[w:Office of Naval Research]].


----
''' Original reporting '''


<section begin=original-reporting />PhD student Logan Blue and Professor Patrick Traynor wrote an article about the work for the general public, titled [https://theconversation.com/deepfake-audio-has-a-tell-researchers-use-fluid-dynamics-to-spot-artificial-imposter-voices-189104 '''''Deepfake audio has a tell – researchers use fluid dynamics to spot artificial imposter voices''''' at theconversation.com]<ref name="The Conversation 2022">
{{cite web
|url=https://theconversation.com/deepfake-audio-has-a-tell-researchers-use-fluid-dynamics-to-spot-artificial-imposter-voices-189104
|quote=By estimating the anatomy responsible for creating the observed speech, it’s possible to identify whether the audio was generated by a person or a computer.}}


</ref> that was published on Tuesday 2022-09-20 and permanently licensed under Creative Commons Attribution-NoDerivatives (CC BY-ND).<section end=original-reporting />
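The researchers' key insight, as they describe it, is that one can estimate the anatomy that produced a speech signal and then ask whether that anatomy is humanly possible. The sketch below is not the authors' code and simplifies their fluid-dynamics approach: it uses the classical lossless-tube model of the vocal tract, in which LPC reflection coefficients (from the Levinson-Durbin recursion) map to ratios of cross-sectional areas via A<sub>i+1</sub>/A<sub>i</sub> = (1 − k<sub>i</sub>)/(1 + k<sub>i</sub>). The plausibility bounds and the toy input signal are illustrative assumptions only.

```python
# Hedged sketch (NOT the paper's implementation): estimate relative
# vocal-tract tube areas from audio via LPC, then sanity-check that the
# implied anatomy stays within (assumed) plausible bounds.
import math

def autocorr(x, lag):
    """Autocorrelation of signal x at a given lag."""
    return sum(x[n] * x[n - lag] for n in range(lag, len(x)))

def reflection_coeffs(x, order):
    """Levinson-Durbin recursion; returns the PARCOR/reflection coefficients."""
    r = [autocorr(x, i) for i in range(order + 1)]
    a = [0.0] * (order + 1)
    ks = []
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err
        ks.append(k)
        new_a = a[:]
        new_a[i] = k
        for j in range(1, i):
            new_a[j] = a[j] + k * a[i - j]
        a = new_a
        err *= (1.0 - k * k)
    return ks

def tube_areas(ks, glottal_area=1.0):
    """Relative cross-sectional areas of the concatenated-tube model."""
    areas = [glottal_area]
    for k in ks:
        areas.append(areas[-1] * (1.0 - k) / (1.0 + k))
    return areas

# Toy input: a decaying vowel-like resonance stands in for real speech.
fs = 8000.0
x = [math.exp(-n / 400.0) * math.sin(2 * math.pi * 700.0 * n / fs)
     for n in range(800)]
ks = reflection_coeffs(x, order=8)
areas = tube_areas(ks)
# A detector in this spirit would flag segments whose estimated areas
# imply an anatomically impossible tract (bounds here are made up).
plausible = all(0.01 < a < 100.0 for a in areas)
print(plausible)
```

The paper itself goes further, applying fluid-dynamics modeling rather than plain LPC, but the shape of the argument is the same: deepfake generators match the sound, not the physical system that would have to produce it.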


Below is an exact copy of the [https://theconversation.com/deepfake-audio-has-a-tell-researchers-use-fluid-dynamics-to-spot-artificial-imposter-voices-189104 original article], as republished in the [https://stop-synthetic-filth.org/wp/republished/amazing-method-and-results-from-university-of-florida-scientists-in-2022-against-the-menaces-of-digital-sound-alikes-audio-deepfakes/ SSF! WordPress post titled "''Amazing method and results from University of Florida scientists in 2022 against the menaces of digital sound-alikes / audio deepfakes''"]. Thank you to the original writers for the wisdom of licensing the article under CC BY-ND.