Technology has been advancing in Silicon Valley at an unprecedented pace. While AI has existed for decades, it has never operated at today's scale. It is being used in a variety of ways, from creating a new Beatles song to informing climate-related decisions. The world is experiencing an AI explosion, and some argue that it has been advancing too fast.
The Artificial Intelligence Index Report 2023, written by the Stanford Institute for Human-Centered Artificial Intelligence, found that 36% of surveyed researchers fear that AI could cause a catastrophe. According to AIAAIC, or AI, Algorithmic and Automation Incidents and Controversies, AI incidents and controversies have increased 26-fold since 2012. These incidents are not trivial by any means and have spread globally like wildfire. One moment the researchers note is the 2022 deepfake of Ukrainian President Volodymyr Zelenskyy surrendering, which was widely shared over social media.
While AI can be used positively, it will continue to have harmful implications because of the systems that power it. BLOOM, a 176-billion parameter language model, has been estimated to emit 50.5 tons of carbon dioxide equivalent, or CO2-eq. To put that into perspective, one ton of CO2-eq roughly equals a one-way flight from Paris to New York for a single passenger, meaning BLOOM's footprint amounts to approximately 430 hours of flight time. The irony is not lost: AI systems meant to advance research through language models and to inform climate-related decisions are themselves pushing out a colossal amount of greenhouse gases.
On an individual level, AI has had a hand in numerous papers and articles. While students' use of GPT in schools and universities has become widespread, major publications have also begun to use AI on a very noticeable scale.
Sports Illustrated recently came under fire for publishing articles on its website attributed to authors who do not exist. The company denied a report stating that those stories were written by AI. Other journalism controversies stemming from AI this year include failed AI experiments at the Gannett newspaper chain and the CNET technology website. For an industry built on truth and transparency, having AI actively involved in journalism has left human journalists worried about the security of their jobs and readers questioning the reliability of powerhouse publications.
Highly publicized and discussed has been the 148-day strike by the Screen Actors Guild – American Federation of Television and Radio Artists, or SAG-AFTRA, and the Writers Guild of America, or WGA. Actors and writers around the nation walked out in protest against the use of AI in Hollywood. They demanded safeguards against AI, including protections against using actors' likenesses without their permission and against creating AI-generated background actors. AI could put writers and actors out of work as Hollywood attempts to normalize its use in their industry.
AI is difficult to predict, control and understand, and more than 33,000 people have signed a petition on the Future of Life Institute's website to pause giant AI experiments for the good of humanity. It is not just the average person who has raised concerns over AI. Geoffrey Hinton, widely known as "the godfather of AI," issued a warning in a recent BBC interview after he resigned from Google, calling AI chatbots "quite scary."
“Right now, they’re not more intelligent than us, as far as I can tell,” Hinton said. “But I think they soon may be. And given the rate of progress, we expect things to get better quite fast. So we need to worry about that.”
While AI has proven to be not all bad, depending on one's perspective, such as giving the world a new Beatles song, it remains an open question whether its positive impact across industries is significant enough to excuse its harmful and lasting effects. How necessary is it to have AI heavily aiding individuals and companies? Will it make employees' jobs obsolete now that it has proven its capabilities? Is a human component in the products we consume entirely necessary? Has AI proven so beneficial to the good of our world that it outweighs the negative consequences? The questions surrounding AI and its ethics will continue to multiply as the world sees it used in more industries and new ways every day. Because of AI's rapid progress, completely stopping its use seems impossible, and it may be time to turn our focus to mitigating its risks to avoid further danger to the world.