This school year, Fremd policy has undergone a notable change regarding the use of artificial intelligence (AI). School policy now allows and supports the integration of AI tools, like ChatGPT, within the school environment. Currently, the only AI tools approved for classroom use are ChatGPT and Adobe Firefly, the company’s image-generating model.
Given the rapid growth of generative artificial intelligence, some believe that integrating these tools into an educational environment is a logical step to enhance learning and to utilize new technologies.
Shannon Denna, a Fremd Computer Science teacher and the sponsor of the Fremd Coding Club/Girls Who Code (which specializes in AI/ML topics), believes that the changes are good for both students and teachers.
“I really like that we are not only allowed to but encouraged to teach our students how to use these AI tools appropriately,” Denna said. “When the technology first came out, many staff members both at the high school and university level were apprehensive about how it would hinder the learning and be used for cheating. I’m trying to help students learn how they can use quality prompts to generate the information they want but also use it as a way of explaining complex topics.”
Students can now use ChatGPT in the classroom to learn more efficiently by asking it to explain concepts, generate practice questions, and more. In language classes, students can converse with ChatGPT in the target language to sharpen their conversational skills. In English classes, students are allowed to use ChatGPT to provide feedback on writing drafts or other assignments, give suggestions about restructuring, and grade drafts based on rubrics.
Additionally, teachers are incorporating AI in lessons. In English classes, students can use Firefly to create images based on their written descriptions, among other activities.
Denna also incorporates AI into her own lessons.
“I’ve used it before to generate an entire lesson plan around a discussion I wanted to have in class and to help get ideas for games or icebreakers,” Denna said. “I definitely see it as something that, when used appropriately, can be used as guided notes for students.”
These policy updates come at a time of significant breakthroughs in the field of artificial intelligence, with recent advances reverberating through all aspects of life, including public education.
Denna reflects on these changes, while also noting that AI developers are making progress toward responsible use.
“AI companies are starting to put safeguards in place so that there is a safe use for students,” Denna said. “It shows that they’re trying to find ways to make it acceptable at the educational level.”
There are obvious restrictions on the use of these tools. Students are not permitted to ask ChatGPT to write essays for them or claim AI-generated images from Firefly as their own work.
Parthiv Mudragada, a sophomore at Fremd, is enthusiastic about these changes.
“I personally believe that these changes are a good thing because they allow more opportunities for students to learn in an effective way,” Mudragada said. “However, there is the issue of academic dishonesty.”
Although these tools offer many advantages, their use also comes with certain risks. Large Language Models (LLMs) like ChatGPT may sometimes generate factually inaccurate or misleading information, known as “hallucinations.” Image-generating models like Adobe Firefly may produce images with errors, especially in depictions of people (documented mistakes include three-legged figures, deformed faces, and the like). Thus, students are expected to verify any information provided by these tools.
Denna acknowledges the potential for AI in the future.
“If students learn how to use AI well, it can be a very powerful tool,” Denna said.