Synthetic human-like fakes
{{#ev:youtube|0sR1rU3gLzQ|640px|right|Video [https://www.youtube.com/watch?v=0sR1rU3gLzQ 'This AI Clones Your Voice After Listening for 5 Seconds' by '2 minute papers' at YouTube] describes the voice thieving machine by Google Research in [[w:NeurIPS|w:NeurIPS]] 2018.}}
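The system covered in the video clones a voice in three stages: a speaker encoder compresses a few seconds of reference speech into a fixed-size voice embedding, a synthesizer turns text plus that embedding into a mel spectrogram, and a vocoder renders the spectrogram into a waveform. The sketch below is only a conceptual illustration of that data flow; the function names, array shapes and stub bodies are placeholders introduced here, not Google Research's code.

<syntaxhighlight lang="python">
# Conceptual sketch of a three-stage voice-cloning pipeline
# (speaker encoder -> synthesizer -> vocoder). Every function below is a
# placeholder stub that only demonstrates the data flow; none of this is
# the actual NeurIPS 2018 model.
import numpy as np

def speaker_encoder(reference_audio: np.ndarray) -> np.ndarray:
    """Stub: map a few seconds of reference audio to a fixed-size speaker embedding."""
    return np.zeros(256)  # a 256-dimensional embedding is a typical size

def synthesizer(text: str, speaker_embedding: np.ndarray) -> np.ndarray:
    """Stub: produce a mel spectrogram of `text` spoken in the embedded voice."""
    frames = 20 * max(len(text.split()), 1)   # placeholder: more words, more frames
    return np.zeros((frames, 80))             # 80 mel bins is a common choice

def vocoder(mel_spectrogram: np.ndarray) -> np.ndarray:
    """Stub: render the spectrogram to a waveform (a neural vocoder in practice)."""
    return np.zeros(mel_spectrogram.shape[0] * 256)  # placeholder hop of 256 samples

# A recording of roughly five seconds of the target speaker is enough to
# condition the whole pipeline.
reference = np.zeros(16000 * 5)               # 5 seconds at 16 kHz (placeholder silence)
embedding = speaker_encoder(reference)
mel = synthesizer("Arbitrary text spoken in the cloned voice.", embedding)
waveform = vocoder(mel)
print(embedding.shape, mel.shape, waveform.shape)
</syntaxhighlight>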
In November 2024, Nvidia researchers announced that they had built and trained a [https://fugatto.github.io/ Foundational Generative Audio Transformer (Opus 1) at fugatto.github.io], or Fugatto for short. The researchers state: ''Fugatto is a versatile audio synthesis and transformation model capable of following free-form text instructions with optional audio inputs.''<ref>https://research.nvidia.com/publication/2024-11_fugatto-1-foundational-generative-audio-transformer-opus-1</ref>
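No public API for Fugatto is assumed here; the sketch below is a hypothetical stand-in that only illustrates the interface the researchers describe: a free-form text instruction, an optional audio input to transform, and audio as output. The function <code>generate_audio</code> and its parameters are illustrative assumptions, not Nvidia's actual code.

<syntaxhighlight lang="python">
# Hypothetical stand-in for a text-instruction audio model of the kind Fugatto's
# authors describe (free-form text instructions with optional audio inputs).
# `generate_audio` is NOT Nvidia's API; it exists only to illustrate the interface.
from typing import Optional
import numpy as np

def generate_audio(instruction: str,
                   audio_input: Optional[np.ndarray] = None,
                   sample_rate: int = 48000,
                   seconds: float = 2.0) -> np.ndarray:
    """Return an audio array; transform `audio_input` if given, else synthesise from scratch."""
    if audio_input is not None:
        # Transformation mode: a real model would alter the given audio per the instruction.
        return audio_input.copy()
    # Synthesis mode: a real model would generate new audio from the instruction alone.
    return np.zeros(int(sample_rate * seconds))

# Synthesis from an instruction alone, and transformation of an existing clip:
new_clip = generate_audio("the sound of rain on a tin roof")
altered = generate_audio("make this voice sound cheerful", audio_input=np.zeros(48000))
print(new_clip.shape, altered.shape)
</syntaxhighlight>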
=== Documented crimes with digital sound-alikes ===
==== 2021 digital sound-alike enabled fraud ====
<section begin=2021 digital sound-alike enabled fraud />The second publicly known fraud done with a digital sound-alike<ref group="1st seen in" name="2021 digital sound-alike fraud case">https://www.reddit.com/r/VocalSynthesis/</ref> took place on Friday 2021-01-15. A bank in Hong Kong was manipulated into wiring money to numerous bank accounts by criminals using a voice stolen from one of their client company's directors. They managed to defraud $35 million of the U.A.E.-based company's money.<ref name="Forbes reporting on 2021 digital sound-alike fraud">https://www.forbes.com/sites/thomasbrewster/2021/10/14/huge-bank-fraud-uses-deep-fake-voice-tech-to-steal-millions/</ref> This case came to light when Forbes saw [https://www.documentcloud.org/documents/21085009-hackers-use-deep-voice-tech-in-400k-theft a document] in which the U.A.E. financial authorities were seeking administrative assistance from the US authorities towards recovering a small portion of the defrauded money that had been sent to bank accounts in the USA.<ref name="Forbes reporting on 2021 digital sound-alike fraud" />
'''Reporting on the 2021 digital sound-alike enabled fraud'''
It is high time to act and to '''[[Law proposals to ban covert modeling|criminalize the covert modeling of human voice!]]'''
== Digital look-and-sound-alikes ==

=== Real-time digital look-and-sound-alike fraud in 2023 ===
A '''real-time digital look-and-sound-alike''' in a video call was used to defraud a substantial amount of money in 2023.<ref name="Reuters real-time digital look-and-sound-alike crime 2023">
{{cite web
| url = https://www.reuters.com/technology/deepfake-scam-china-fans-worries-over-ai-driven-fraud-2023-05-22/
| title = 'Deepfake' scam in China fans worries over AI-driven fraud
| last =
| first =
| date = 2023-05-22
| website = [[w:Reuters.com]]
| publisher = [[w:Reuters]]
| access-date = 2023-06-05
| quote =
}}
</ref>

=== Real-time digital look-and-sound-alike fraud in 2024 ===
'''Reporting'''
* [https://www.ft.com/content/b977e8d4-664c-4ae4-8a8e-eb93bdf785ea '''''Arup lost $25mn in Hong Kong deepfake video conference scam''''' at ft.com], reporting by Cheng Leng and Chan Ho-him, 2024-05-17
* [https://edition.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk/index.html '''''Finance worker pays out $25 million after video call with deepfake "chief financial officer"''''' at edition.cnn.com], February 2024 reporting by Heather Chen and Kathleen Magramo, CNN
* [https://edition.cnn.com/2024/05/16/tech/arup-deepfake-scam-loss-hong-kong-intl-hnk/index.html '''''British engineering giant Arup revealed as $25 million deepfake scam victim''''' at edition.cnn.com], May 2024 reporting by Kathleen Magramo, CNN
----
== Text syntheses ==
== 2020's synthetic human-like fakes ==
* '''2024''' | '''<font color="red">text-to-video model</font>''' | '''[[w:Sora (text-to-video model)]]''', a [[w:text-to-video model]] developed by [[w:OpenAI]] that has worrying levels of realism, was published in 2024 and released to paying ChatGPT subscribers in December 2024.
* '''2023''' | '''<font color="orange">Real-time digital look-and-sound-alike crime</font>''' | In April a man in northern China was defrauded of 4.3 million yuan by a criminal employing a digital look-and-sound-alike pretending to be his friend on a video call made with a stolen messaging service account.<ref name="Reuters real-time digital look-and-sound-alike crime 2023"/>