South Korea approves bill targeting deepfake sexual exploitation

[Image: Activists shout slogans and hold up masks during a protest against deepfake porn in Seoul, August 30, 2024. /CFP]

A parliamentary committee in South Korea has approved a bill imposing prison sentences for the possession or viewing of deepfake sexual content, according to Yonhap news agency. The bill is in response to rising public concern over digital sex crimes, particularly after the discovery of Telegram chat rooms where AI-generated deepfake pornography featuring female students and staff was shared.

The revised law imposes severe penalties for individuals who possess, purchase, store or view deepfake sexual content, with potential prison sentences of up to three years or fines of up to 30 million won (about $22,500). Lawmakers from both the ruling and opposition parties included a provision to exempt individuals who “unknowingly” come into contact with such materials from penalties.

The committee also advanced revisions to the act protecting children from sexual crimes, introducing harsher punishments for those who use exploitative material to blackmail or coerce minors. Under the new provisions, sentences for blackmail will increase to a minimum of three years, while coercion could result in sentences of five years or more.

The revisions also address the sexual violence prevention act, outlining the government’s responsibility to remove illegally recorded content and assist victims in reintegrating into society.

The increased efforts to combat sexual deepfakes coincide with the case of Telegram founder Pavel Durov, who is under formal investigation in France as authorities look into organized crime activity associated with the messaging platform.

[Image: A journalist views an example of a deepfake video manipulated using artificial intelligence by Carnegie Mellon University researchers at his desk in Washington, D.C., U.S., January 25, 2019. /CFP]

Deepfake technology refers to artificial intelligence used to create convincing fake images, videos and audio recordings. It can manipulate existing content, swapping one person's likeness for another's, or generate entirely new scenarios in which individuals appear to say or do things they never actually did. The greatest danger of deepfakes lies in their ability to spread false information that appears to come from trusted sources.

“I’ve always been careful when it comes to personal information or photos,” said Ines Kwon, a university student. “But with so many crimes these days involving AI and deepfake technology, I find myself being even more guarded.”

Last week, South Korea’s National Police Agency announced plans to invest 2.7 billion won ($2 million) annually over the next three years to develop deep-learning technology for detecting digitally fabricated content, including deepfakes and voice cloning. The agency will also upgrade its current software for monitoring AI-generated videos.

The urgency for these measures is underscored by alarming statistics: cases of sexual cyberviolence have surged 11-fold this year compared to 2018, yet only 16 arrests have been made from 793 reported deepfake crimes since 2021, according to police data.

“Six years have passed since I was a victim of AI-generated deepfake pornography. As I transitioned from a student to a working professional, it would be a lie to say the incident hasn’t left its mark on me,” said an anonymous deepfake crime victim.

“However, when I think about other women who might suffer the same or even more severe harm, a fire ignites within me to become stronger and stand up against it.”

President Yoon Suk-yeol said that deepfakes represent a serious crime that undermines social harmony and urged relevant ministries to take decisive action. In a joint statement in late August, 84 women’s organizations emphasized that the root cause of the deepfake crisis is “structural gender discrimination,” advocating for gender equality as a fundamental solution.

Oh Kyung-Jin, general secretary of Korean Women’s Associations United, said the fundamental problem is the “culture of misogyny,” where women are seen merely as objects of sex, targets of crime and forms of entertainment, rather than as equal citizens, especially among teenagers.
