Detecting deep-fake audio through vocal tract reconstruction
'Detecting deep-fake audio through vocal tract reconstruction' is an epic scientific work against fake human-like voices from the w:University of Florida, published in a peer-reviewed venue in August 2022.
The work Detecting deep-fake audio through vocal tract reconstruction (presentation page, version included in the proceedings[1] and slides at usenix.org), by researchers of the Florida Institute for Cybersecurity Research (FICS) at fics.institute.ufl.edu in the w:University of Florida, received funding from the w:Office of Naval Research and was presented in August 2022 at the 31st w:USENIX Security Symposium.
This work was done by PhD student Logan Blue, Kevin Warren, Hadi Abdullah, Cassidy Gibson, Luis Vargas, Jessica O’Dell, Kevin Butler and Professor Patrick Traynor.
The University of Florida Research Foundation Inc has filed for and received a US patent titled 'Detecting deep-fake audio through vocal tract reconstruction', registration number US20220036904A1 (link to patents.google.com), with 20 claims. The patent application was published on Thursday 2022-02-03. The patent was granted on 2023-07-04 and has an adjusted expiration date of 2041-12-29.
Original reporting
PhD student Logan Blue and Professor Patrick Traynor wrote an article for the general public on the work, titled Deepfake audio has a tell – researchers use fluid dynamics to spot artificial imposter voices, at theconversation.com[2]. It was published on Tuesday 2022-09-20 and permanently w:copylefted under Creative Commons Attribution-NoDerivatives (CC BY-ND).
Below is an exact copy of the article as republished on the SSF! wordpress under the title "Amazing method and results from University of Florida scientists in 2022 against the menaces of digital sound-alikes / audio deepfakes". Thank you to the original writers for having the wisdom to license the article under CC BY-ND.
Reactions
Juho's reaction
“This is brilliant new science from the University of Florida against the human-like voice-thieving systems in 2022! 👏🏻 👏🏻 👏🏻 Thank you to the University of Florida and their Florida Institute for Cybersecurity Research (FICS), and a very warm thank you to the respective scientists for this ground-breaking science against the voice-thieving systems. May ☮️ be with you.
This looks like an awesome start on future automated armor for us humans against the voice-thieving systems!
These methods, based on an extremely innovative application of existing scientific knowledge and on realizing which questions to ask, the system the University of Florida researchers built to spot fake synthesized voices, and the way this system was designed, implemented, tested and found highly effective against the voice-thieving machines of today, all give us hope in humanity's struggle to stay human despite the menaces of the synthetic human-like fakes.
This was an issue needing solving, as the digital sound-alikes / audio "deepfakes" / audio "deep fakes" / voice-thieving synthesis systems are known to have been used for crimes since March 2019. Office of Naval Research funding was a natural source of funding for this innovation, as it is clear that protecting the US Navy and US Marines audio communications networks against any voice-forging adversaries is of high importance.
People who were aware of the voice-thieving-machines problem had been waiting for something like this, and it is more than I, at least, expected. 🥳 🥳
Cheers American taxpayers and DoD for funding this breakthrough science!
Please get this technology against the fake voices to humans to protect the humans and humanity! 🫵🏻”
See also
- Work by Boháček and Farid 'Protecting President Zelenskyy against deep fakes' 2022 to protect President Zelenskyy against digital look-alike attacks
- Further work by Boháček and Farid 'Protecting world leaders against deep fakes using facial, gestural, and vocal mannerisms' 2022
External links
- Florida Institute for Cybersecurity Research (FICS) at fics.institute.ufl.edu in the University of Florida. @uf_fics at twitter.com
- The Advanced Computing Systems Association at usenix.org founded in 1975. @usenix at twitter.com
- Office of Naval Research (ONR) at nre.navy.mil in the USA. @USNavyResearch at twitter.com
References
- ↑ Blue, Logan; Warren, Kevin; Abdullah, Hadi; Gibson, Cassidy; Vargas, Luis; O’Dell, Jessica; Butler, Kevin; Traynor, Patrick (August 2022). "Detecting deep-fake audio through vocal tract reconstruction". Proceedings of the 31st USENIX Security Symposium: 2691–2708. ISBN 978-1-939133-31-1. Retrieved 2022-10-06.
- ↑ Blue, Logan; Traynor, Patrick (2022-09-20). "Deepfake audio has a tell – researchers use fluid dynamics to spot artificial imposter voices". theconversation.com. w:The Conversation (website). Retrieved 2022-10-05.
By estimating the anatomy responsible for creating the observed speech, it’s possible to identify whether the audio was generated by a person or a computer.
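The idea of estimating the speaker's anatomy from audio can be sketched with a classical acoustic-tube inversion. This is an illustrative sketch only, not the authors' published pipeline: it maps linear-prediction reflection coefficients of a speech frame to the relative cross-sectional areas of a concatenated lossless-tube model of the vocal tract; anatomically implausible area profiles would then hint at synthetic speech. The function names and the `lip_area` parameter are invented for illustration, and sign conventions for the area recursion vary across the literature.

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations from autocorrelation r.
    Returns the LPC polynomial and the reflection (PARCOR) coefficients."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    e = r[0]                      # prediction error energy
    k = np.zeros(order)
    for i in range(1, order + 1):
        # partial correlation between forward and backward predictors
        acc = r[i] + np.dot(a[1:i], r[1:i][::-1])
        ki = -acc / e
        k[i - 1] = ki
        a[1:i] = a[1:i] + ki * a[1:i][::-1]   # order update of the predictor
        a[i] = ki
        e *= (1.0 - ki * ki)
    return a, k

def vocal_tract_areas(frame, order=12, lip_area=1.0):
    """Estimate relative cross-sectional areas of a lossless-tube vocal
    tract model from one speech frame (Wakita-style inversion).
    `lip_area` only fixes the scale; the shape is what matters."""
    frame = frame * np.hamming(len(frame))
    # biased autocorrelation, lags 0..order
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:][:order + 1]
    _, k = levinson_durbin(r, order)
    # each reflection coefficient relates adjacent tube sections:
    # A[m+1]/A[m] = (1 + k[m]) / (1 - k[m])  (one common sign convention)
    areas = [lip_area]
    for ki in k:
        areas.append(areas[-1] * (1.0 + ki) / (1.0 - ki))
    return np.array(areas)
```

A detector in this spirit would compare the estimated area profile against ranges plausible for human anatomy; since |k| < 1 for a stable predictor, the recursion always yields positive, finite areas, and only their shape carries the anatomical signal.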