As artificial intelligence rolls out to the public, a debate is emerging over its use in higher education.
Educators point to the risks of students cheating with ChatGPT, the artificial intelligence app that can write essays, solve math problems and create art. But some educators also see the possibilities of embracing the new technology.
Elizabeth Blakey, an associate professor of journalism at California State University, Northridge, sees the program as a potential tool in her classroom.
“Why not teach the students how to use ChatGPT effectively?” Blakey said. “It’s here to stay. Fear isn’t going to make AI go away.”
Blakey recently showed the students in her graduate-level mass communications class how to use ChatGPT to prepare research proposals. The results were encouraging, she said.
“I knew that my students were worried about doing research proposals. So, I said, ‘why don’t we just ask ChatGPT?’” Blakey said. “I pointed out the ways it’s helpful and the ways that it’s not helpful.”
One major problem is that ChatGPT can produce factual inaccuracies called “hallucinations.” This means that its facts cannot be relied on. But, Blakey said, ChatGPT can produce detailed, worthwhile outlines of projects, such as research proposals.
Using the innovative technology, she said, reduced the anxiety students were feeling about accomplishing a new task. They learned about research proposals and new technology in the process.
While there is no question that AI will influence teaching and learning in the coming years, some have called for regulations on how far the technology's use should extend in classrooms.
Those considering the pros and cons of AI include the California State University’s Academic Senate, which represents faculty across the system’s 23 campuses. In March, the Senate passed a resolution calling for a working group on AI in higher education, to be formed by the end of August.
The committee will be tasked with investigating AI's potential in the classroom, exploring professional development opportunities for faculty, identifying best practices to ensure academic integrity, and coordinating the university's response across campuses.
While some oversight of AI is needed, Blakey said, the concerns regarding mass plagiarism and academic integrity are overblown.
Blakey, who has subject-matter expertise in media history and media law, said she believes the fear over AI is similar to fears raised about other new technologies in the past.
“I’ve noticed that there was a generalized fear about AI and that something bad is going to happen because of it. But that type of fear or panic happens with every new technology,” Blakey said. “There’s always some sort of moral panic that the new technology is going to change the way we do things and take over, when it’s quite the opposite. Because of TV, we didn’t become zombies. We didn’t stop listening to the radio.”
Instead of putting legal restrictions on technology in its infancy, now is a good time to learn about AI and explore its possibilities, she added.
“My view is that this is another new technology, and we’re exploring it,” Blakey said. “I think the focus shouldn’t be on whether students are copying from AI, but instead on teaching them how to play with it, to use it. Rather than trying to check what they’re doing wrong, let’s figure out the worthwhile ways of using it.”
CSUN journalism professor David Blumenkrantz, who teaches visual culture and photography, also thinks the new technology needs to be embraced.
He gave students in his class the option to use AI technology to create a digital magazine. The assignment, however, included a stipulation that students had to explain which parts of their creations were AI-generated and why AI was a good choice.
“ChatGPT is just another tool in the toolbox, like Google and Wikipedia, for students to use,” Blakey said. “We don’t know its full potential yet, and our students will be leading us in that discovery.”