Detecting deep-fake audio through vocal tract reconstruction

Revision as of 21:31, 26 October 2022

'Who Are You (I Really Wanna Know)? Detecting Audio DeepFakes Through Vocal Tract Reconstruction' is a scientific work against the voice-thieving machines from the w:University of Florida, published in 2022.

The work, Who Are You (I Really Wanna Know)? Detecting Audio DeepFakes Through Vocal Tract Reconstruction at usenix.org (presentation page, version included in the proceedings[1] and slides), comes from researchers of the Florida Institute for Cybersecurity Research (FICS) at the w:University of Florida. It received funding from the w:Office of Naval Research and was presented in August 2022 at the w:USENIX Security Symposium.


Original reporting

PhD student Logan Blue and professor Patrick Traynor wrote an article for the general public on the work, titled Deepfake audio has a tell – researchers use fluid dynamics to spot artificial imposter voices at theconversation.com[2], that was published on Tuesday 2022-09-20 and permanently licensed under Creative Commons Attribution-NoDerivatives (CC BY-ND).

Below is an exact copy of the article as republished on the SSF! WordPress under the title "Amazing method and results from University of Florida scientists in 2022 against the menaces of digital sound-alikes / audio deepfakes". Thank you to the original writers for having the wisdom to license the article under CC BY-ND.

References

  1. Blue, Logan; Warren, Kevin; Abdullah, Hadi; Gibson, Cassidy; Vargas, Luis; O’Dell, Jessica; Butler, Kevin; Traynor, Patrick (August 2022). "Who Are You (I Really Wanna Know)? Detecting Audio DeepFakes Through Vocal Tract Reconstruction". Proceedings of the 31st USENIX Security Symposium: 2691–2708. ISBN 978-1-939133-31-1. Retrieved 2022-10-06.
  2. Blue, Logan; Traynor, Patrick (2022-09-20). "Deepfake audio has a tell – researchers use fluid dynamics to spot artificial imposter voices". theconversation.com. w:The Conversation (website). Retrieved 2022-10-05. By estimating the anatomy responsible for creating the observed speech, it's possible to identify whether the audio was generated by a person or a computer.