Practical Prompts + Good Practice

This section provides structured reflection prompts and an AI Ethics Checklist for educators and learners, addressing transparency, data governance, and cultural representation in AI-enabled teaching. It also includes a case study examining bias in AI-generated materials, demonstrating a kaitiakitanga-based response framework that emphasises co-designing culturally accurate content and advocating for systemic improvements in the AI systems used within educational contexts.

Created by Graeme Smith and Liza Kohunui

Ethical AI practice doesn’t emerge from rules alone — it emerges from the questions we ask.

These prompts help anchor both educators and learners in reflective, values-led use.

Prompts for Educators

  • “Have I disclosed when and why I’m using AI in my teaching?”

  • “Does this tool align with our values and Te Tiriti obligations?”

  • “Who holds the data, and can I explain that clearly to learners?”

Prompts for Learners

  • “How did AI help (or hinder) your thinking?”

  • “Whose voice is missing from this AI response?”

  • “If you were teaching an AI about your culture, what would it need to know?”

Suggested Good Practice

Include a simple AI Ethics Checklist whenever you introduce a new tool or activity.

Your checklist might ask:

  • Transparency: Is the tool clear about how it uses data?

  • Agency: Can learners use the tool critically, not passively?

  • Identity: Does the tool uplift or erase learner identity?

  • Safety: Are there any risks of misrepresentation, harm, or privacy loss?

A checklist makes expectations visible and normalises reflective practice across the class.

Case Study — Bias in AI-Generated Content

Scenario

You use AI to generate a set of learning materials.

The content includes stereotypes about Pacific peoples and completely overlooks te reo Māori perspectives.

🪶 Kaitiakitanga Response — Acting with Integrity

How to respond:

  • Acknowledge the harm — don’t minimise or ignore it.

  • Turn it into a learning opportunity:

Invite ākonga to identify what’s missing, what’s misrepresented, or what feels culturally off.

  • Co-design improvements:

Work with learners, colleagues, or community experts to rebuild the content accurately and respectfully.

  • Document and report patterns of bias to your institution or IT teams so systemic issues can be addressed.

Key Lesson

Bias in AI is not just a technical issue — it is cultural, relational, and ethical.

As kaitiaki, educators must critique outputs, correct them, and advocate for better tools.

Whaiwhakaaro | Reflection

Have you encountered bias in AI outputs?

How did you respond?

What might you do differently now, with a kaitiaki mindset?