Posted 8:12 a.m. Thursday, Nov. 13, 2025
School of Education task force prepares future teachers for ethical, effective AI use
Imagine you’re a high school student reading the instructions for a new homework assignment: Write an essay on the impact of the Industrial Revolution.
You sit at your desk, tempted to open an AI chatbot. With one short prompt, you could generate a polished essay in seconds — no reading, no note-taking, no thinking required.
But you pause. Should it really be that easy? And more importantly, will you actually understand or remember anything that was written?
A 2025 MIT Media Lab study compared essay writing with a large language model (LLM) to more traditional approaches: using a search engine or writing unaided. Over four months, students who relied on LLMs like OpenAI’s ChatGPT consistently underperformed on neural, linguistic and behavioral measures.
Findings like these highlight a growing challenge for educators: helping students harness AI as a learning tool without letting it do the learning for them.
“It is our responsibility in the School of Education to prepare future teachers for the realities they will face in their own classrooms,” says Kimberly Morris, associate professor of Spanish and World Language Education. “AI isn’t going away. Current and future educators need to know how to use it effectively and ethically.”
Building AI literacy across the School of Education
To address these challenges, the School of Education (SOE) established an AI Task Force last year. The group’s goal is to help faculty, staff and teacher candidates better understand artificial intelligence and integrate it thoughtfully into teaching and learning.
The task force studies emerging research; discusses challenges and experiences with AI; invites guest speakers from local school districts, the Cooperative Educational Service Agency (CESA) and other organizations; and pilots new initiatives designed to promote AI literacy and transparency.
“The School of Education is taking a critical lens on how we integrate AI in our programs,” Morris explains. “Rather than accepting it universally, we’re focused on integrating it in ethical and responsible ways — recognizing both its potential to enhance learning and the limitations it can pose on our students’ development and our own.”
Intentional AI training prepares SOE students to use technological tools more confidently in their coursework and future classrooms at a time when AI and big data rank among the top skills for career readiness, according to the World Economic Forum’s Future of Jobs Report 2025.
Clarifying expectations through an AI use scale
One practical outcome of the task force’s work is a pilot AI usage scale being introduced in course syllabi. The scale helps both students and instructors clearly define expectations for when and how AI can be used in assignments.
The scale includes five levels:
- No AI
- AI-assisted idea generation and structuring
- AI-assisted editing
- AI task completion, human evaluation
- Full AI
Originally developed by researchers and recommended by a CESA representative, the scale doesn’t require faculty to use AI in their classes. Instead, it offers a transparent framework. Instructors who prefer not to incorporate AI can simply mark all assignments as “No AI.”
“This approach helps me make expectations clearer for my students — and they appreciate that,” says Morris. “It’s a way of acknowledging AI’s presence while supporting AI literacy and ethical training for future educators.”
Learning through experience
In a Language Teaching Methods course this semester, Morris is training teacher candidates to integrate AI into their lessons, with a focus on high-leverage teaching practices. They experiment with different tools for lesson planning, instruction and assessment, and even practice teaching their own students how to use AI responsibly to support their learning.
“Using AI requires a critical understanding of where and how the tool adds value and maximizes learning,” says Morris.