The rapid rise of artificial intelligence, or AI, technology has taken the world by storm and led to the exploitation of individuals, such as Taylor Swift. This new wave of technology most recently fueled the rapid spread of inappropriate deepfake photographs of Swift, which were viewed by millions all over the world. Deepfakes are photographs or videos in which a person’s face or body has been digitally altered to appear like somebody else’s.
The use of AI technology to alter and sexualize a person through a photograph or video has become far more frequent in recent years. According to The Guardian, the photographs of Swift were seen 47 million times before they were wiped from all social media platforms.
The use of technology to create artificially altered photos and videos has led United States politicians to demand stricter legislation to criminalize the act. Women, in particular, have become common victims of AI deepfake photographs being spread on social media. This occurs when the creator takes an image of the victim, often stolen from her social media, and grafts it onto an artificially generated body.
The result is often a pornographic, sexualized image of the victim.
“AI tends to oversexualize women, especially with the TikTok filters, more so than they already are, and I think it opens the door for a lot of sick creeps,” said Payton Cobb, a senior criminal justice major.
The harm of producing AI-generated images is that the individual’s reputation and character can be seriously compromised because they are being portrayed as somebody they are not. This could also lead to false information circulating and to the public’s opinion of the individual being swayed.
“The people that are creating AI-generated images and videos can use them to perpetuate certain viewpoints and biases that can lead to the manipulation of others,” said Adam Lamparello, a criminal justice professor at GC.
While this issue of AI technology targeting women is nothing new, the Swift scandal does seem to have prompted U.S. politicians to speak up on the matter at large.
“For years, women have been targets of deep fakes [without] their consent,” said Yvette D. Clarke, a Democratic New York congresswoman. “And [with] advancements in AI, creating deepfakes is easier & cheaper. This is an issue both sides of the aisle and even Swifties should be able to come together to solve.”
Currently, individual U.S. states have their own legislative protections against the misuse of AI technology, but there has recently been a push to address the issue through federal law.
Lamparello thinks that pushing this into federal law would be a bad idea because each state has different problem areas it needs to address.
“A one-size-fits-all solution is not going to be able to account for those problem areas,” Lamparello said.
The push for stricter laws stems from the fact that AI-generated images and videos often involve breaches of individuals’ privacy and potential copyright infringement. By recreating images, videos and even music that closely resemble copyrighted material, the creator may be violating copyright law. One example is when an AI creator used the vocals of Drake and The Weeknd to create an artificially generated song called “Heart on My Sleeve.”
The song was met with many different reactions from fans. According to The Independent, most fans were shocked at the quality of the music while also unnerved by how closely it matched the artists’ voices. The song was not labeled as AI-generated, so many fans believed it was a real, unreleased track. It ended up receiving 15 million views on TikTok but was eventually taken down for copyright violation.
Whether or not the creator had innocent intentions, it does not change the fact that they took the voices of the artists and used them to make money without permission.
These deepfake photos have also begun to extend to advertisements featuring celebrities without their consent.
According to The Guardian, in October 2023, Tom Hanks had to warn his Instagram followers not to fall for a phony dentistry advertisement that was using his likeness to promote the company without his consent.
These cases pose an ongoing problem. Without their knowledge, celebrities are being associated with these companies. Should something negative come out about the company or advertisement, celebrities like Hanks will be held responsible as well, since people believe they are endorsing it. At the end of the day, the question that needs to be asked about AI technology is whether the risks outweigh the rewards. That is something only time will tell.