In a recent decision by the Biden administration, Elizabeth Kelly, a former economic policy adviser to President Joe Biden, has been named to lead the AI Safety Institute at the National Institute of Standards and Technology, or NIST.
In this position, Kelly is tasked with developing standardized safety tests for artificial intelligence, or AI, technologies aimed at businesses and consumers.
“Elizabeth Kelly is a great choice to lead the AI Safety Institute, which is essential for governing and benefitting from AI,” said Arati Prabhakar, director of the White House Office of Science and Technology Policy and a former NIST director. “She brings an understanding of the real-world implications of AI — how the use of this powerful technology affects people and business.”
Kelly’s appointment comes amid AI’s rapid expansion over the past decade, which has directly affected many Americans as the technology is integrated into business and consumer products. That reach makes the healthy incorporation of AI all the more important, and institutions such as NIST indispensable.
“We must have a firm understanding of the technology, its current and emerging capabilities and limitations,” said Elham Tabassi, the chief AI advisor for NIST. “NIST is taking the lead to create the science, practice and policy of AI safety and trustworthiness.”
Although the specifics of Kelly’s safety codes are not yet known, some expect them to address common ethical concerns surrounding AI, including its capacity to replace creative jobs, threaten data privacy and produce content that can diminish essential skills and understanding.
Though such safety codes may seem like a setback for AI development, Hussain Alibrahim, a computer science professor at GC, believes they can push innovation and utility to new lengths.
“Regulations play a role in ensuring that AI applications are developed and utilized in a manner prioritizing privacy protection and avoiding any biases or discrimination,” Alibrahim said. “Familiarity with the framework can guide AI developers in designing systems that not only fosters innovation but also complies with ethical standards, resulting in the creation of higher quality and sustainable technology.”
These ethically minded innovations can take the form of many different AI products. Whatever form they take, however, the results should act as tools meant to improve the quality of services rather than replace workers.
“One major apprehension revolves around the loss of jobs due to automation,” Alibrahim said. “Roles that involve tasks are especially susceptible, which can pose social challenges.”
Education, in particular, stands at the forefront of AI’s influence and of changing AI regulation. The field has already seen controversy over classroom rules and policies, specifically around the use of AI chatbots, which can generate text in response to a given prompt.
Some GC students share Alibrahim’s ethical standard for using and regulating AI: even chatbots could improve the quality of education if treated as learning tools rather than replacement teachers.
“There are a lot of redundant assignments in school, but both from the students’ and faculty’s perspectives, AI should be used to boost the quality of education rather than replace education,” said Will Turner, a sophomore accounting major. “This [AI regulation] can be a starting place for new and better ideas.”
Nonetheless, as Kelly’s work on developing a code of AI regulations suggests, the general consensus among proponents of AI regulation is that oversight is a necessity rather than a want. AI is capable of a great many things, and with regulation, those possibilities can be channeled in the most desirable directions.