Ethics, Privacy, and Bias — A Starter Guide for Educators

This section identifies key ethical risks in AI-enabled teaching, including bias, privacy concerns, opacity, and overreliance on algorithmic systems. It emphasises grounding practice in relational values such as whanaungatanga (relationships), manaakitanga (care and respect), and kaitiakitanga (guardianship). It guides educators to recognise these risks and apply culturally grounded values when making informed, dignity-centred decisions that protect the mana of learners and data sovereignty in educational contexts.

Created by Graeme Smith and Liza Kohunui

🪻 Why Ethics Matters in AI-Enabled Education

Education is built on trust.

AI is built on data.

When these meet, ethics becomes essential.

AI tools can be helpful, creative, and fast — but they are not neutral.

They reflect:

  • the biases of their datasets

  • the values (or blind spots) of their designers

  • the priorities of the institutions that deploy them

These factors shape what AI includes, excludes, distorts, or over-represents.
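To make the dataset point concrete, here is a minimal Python sketch. It is not from the original guide: the tiny corpus and its perspective labels are invented purely for illustration. It simply counts whose viewpoint a collection of training snippets carries, showing how a skewed dataset over-represents one perspective before any model is even trained.

```python
# Minimal illustration of representational bias in a training dataset.
# The corpus and perspective labels below are invented for this sketch.
from collections import Counter

corpus = [
    ("Assessment should be standardised testing.", "dominant"),
    ("Rubrics keep marking objective.", "dominant"),
    ("Essays demonstrate individual achievement.", "dominant"),
    ("Learning is collective and relational.", "marginalised"),
]

# Tally how often each perspective appears in the training material.
counts = Counter(label for _, label in corpus)
total = sum(counts.values())

for label, n in counts.most_common():
    print(f"{label}: {n}/{total} ({n / total:.0%})")

# Output: dominant: 3/4 (75%), marginalised: 1/4 (25%).
# Anything "learned" from this corpus will surface the dominant view
# far more often, which is what "not neutral" looks like in practice.
```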

Key Ethical Risks with AI in Education

These are not theoretical concerns.

They appear in classrooms every day — in marking, feedback, content generation, lesson design, and the subtle shaping of whose voice appears (and whose disappears).

Risk          | What It Looks Like
Bias          | AI reproduces dominant-culture perspectives and marginalises others
Opacity       | Learners can’t see how outputs or decisions are made
Privacy       | Learner data is stored offshore, sometimes without full informed consent
Overreliance  | Tutors or students treat AI as “truth”, replacing relational judgment
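The privacy row is one place where a small technical habit can help. Below is a minimal sketch, not from the original guide, of redacting obvious identifiers from learner work before it is pasted into an external AI tool. The redact helper, the REPLACEMENTS patterns, and the sample text are all hypothetical, and real de-identification needs far more care than this.

```python
# A rough sketch of stripping obvious identifiers from learner text before
# it leaves your institution. Patterns and names here are illustrative only.
import re

REPLACEMENTS = {
    r"[\w.+-]+@[\w-]+\.[\w.]+": "[EMAIL]",              # email addresses
    r"\b\d{2,4}[-\s]?\d{3}[-\s]?\d{3,4}\b": "[PHONE]",  # rough phone pattern
}

def redact(text: str, learner_names: list[str]) -> str:
    """Replace known learner names and common identifiers with placeholders."""
    for name in learner_names:
        text = re.sub(re.escape(name), "[LEARNER]", text, flags=re.IGNORECASE)
    for pattern, placeholder in REPLACEMENTS.items():
        text = re.sub(pattern, placeholder, text)
    return text

sample = "Aroha Smith (aroha@example.com) asked for feedback on her draft."
print(redact(sample, learner_names=["Aroha Smith"]))
# -> "[LEARNER] ([EMAIL]) asked for feedback on her draft."
```

Even with a step like this, questions of offshore storage and informed consent remain: redaction reduces exposure, but it does not by itself resolve raraunga rangatiratanga.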

A Cultural Anchor for Ethics

In Aotearoa, ethics must be grounded in relational values, not just imported tech standards.

Value           | How It Shapes AI Ethics
Whanaungatanga  | Prioritise relationships and care in digital choices
Manaakitanga    | Honour dignity in how tools represent people
Whakamanatanga  | Uplift learner identity, not overwrite it
Māramatanga     | Encourage clarity and informed reflection on tools
Pūmautanga      | Promote trustworthiness, not shortcuts

A practical ethical test:

“Would I feel comfortable if this AI tool made a judgment about me — or my learners — without our voice or context?”

If the answer is no, the tool — or the way it’s used — needs rethinking.

🪶 Kaupapa Māori Lens – Kaitiakitanga | Guardianship of Learning Spaces

In te ao Māori, ethics is inseparable from kaitiakitanga — the responsibility to protect the mauri (life force) of the learning environment.

When we bring AI into our teaching, we influence:

  • Mana o te ākonga — learner dignity and authority

  • Raraunga rangatiratanga — the sovereignty of learner data

  • Whakaaro matatika — the accuracy, fairness, and cultural integrity of knowledge

  • Whanaungatanga — the trust and relational depth that learning depends on

Educators are not passive adopters of technology.

We are guardians of the spaces where knowledge is shaped.

Whaiwhakaaro | Reflection

What values guide your decisions about technology in teaching?

How do they align with kaupapa Māori principles?
