Five Core Principles for Ethical AI in Education
Created by Graeme Smith and Liza Kohunui
This section establishes a framework of five foundational principles for ethical artificial intelligence integration in educational practice: transparency, fairness, accountability, data sovereignty, and learner agency. These principles are situated within Aotearoa New Zealand's obligations under Te Tiriti o Waitangi and equity considerations for Māori, Pasifika, and neurodiverse learners. The content introduces the Four Pillars of Kaitiakitanga as a practical framework for addressing learner autonomy, knowledge provenance, and institutional accountability in AI-enabled educational environments.
These five principles blend global ethics standards with Aotearoa-specific responsibilities — especially around equity, sovereignty, and relational practice.
🪻 The Aotearoa Lens
In Aotearoa, AI ethics is not just a digital issue — it is a Te Tiriti and equity issue.
- Te Tiriti o Waitangi requires protection of mātauranga Māori, engagement with iwi, and upholding rangatiratanga.
- Māori Data Sovereignty affirms the rights of Māori to govern their own data — including what is shared with AI platforms.
- Pacific, neurodiverse, and disabled learners face elevated risks if AI assumptions go unchallenged.
A useful institutional provocation:
“Does your organisation have an AI ethics policy that includes cultural safety, data sovereignty, and learner agency?”
If not — this module may be the starting point for that kōrero.
🪶 Kaupapa Māori Lens – Four Pillars of Kaitiakitanga in AI Ethics
The following Four Pillars offer clear, actionable guidance for educators, grounded in Māori data sovereignty and kaupapa Māori ethics. These pillars help us embody kaitiakitanga — guardianship — when making decisions about AI in learning spaces.
Pillar 1: Te Mana o te Tangata | Upholding Human Dignity
Principle:
AI must serve people, not reduce them to data points or automated judgments.
In Practice:
- Avoid tools that surveil, shame, or diminish mana.
- Ask: “Would I want this used on my own whānau?”
- Centre relationships, not efficiency.
Pillar 2: Te Mana Motuhake o te Mātauranga | Data Sovereignty
Principle:
Learners — and Māori in particular — have the right to control their own data.
In Practice:
- Know where data is stored and who has access.
- Gain genuine informed consent, not just a tick-box.
- Uphold Māori Data Sovereignty principles (Te Mana Raraunga).
Pillar 3: Te Whakapapa o te Mōhiotanga | Transparency of Source
Principle:
Knowledge has whakapapa.
AI-generated content does not — it has no living genealogy.
In Practice:
- Make AI use visible, discussable, and open to critique.
- Teach learners to trace sources, not simply accept outputs.
- Value knowledge that comes from people, place, and relationship.
Pillar 4: Te Kōrero Pono | Truth and Accountability
Principle:
We remain accountable for how AI is used — even when it automates parts of our work.
In Practice:
- Don’t hide behind “the AI did it.”
- Review outputs for bias, cultural harm, or inaccuracy.
- Create space for feedback, correction, and relational repair.

The five core principles can be summarised as follows:
| Principle | What It Means in Practice |
|---|---|
| Transparency | Be clear when AI is used — don’t hide it in tools, decisions, or marking. |
| Fairness | Watch for bias or exclusion; redesign activities and prompts if needed. |
| Accountability | Educators remain responsible — even when AI assists with tasks. |
| Data Sovereignty | Know where learner data goes. Uphold Māori data rights and protections. |
| Agency | Give learners options — don’t force AI use without meaningful choice. |