Real-World Ethical Dilemmas in Education
This section examines ethical challenges in AI-enabled education through three applied scenarios: student use of generative AI, algorithmic bias in content generation, and data privacy in third-party tools. It emphasises the practical application of ethical principles in everyday teaching decisions, positioning educators as guardians responsible for protecting learner data, identity, and educational autonomy within institutional contexts.
Created by Graeme Smith and Liza Kohunui
Ethical issues don’t just appear in policy documents — they surface in the everyday decisions we make with learners.
Here are three scenarios drawn from real teaching contexts that illustrate how AI raises practical ethical tensions.
Scenario 1: Student Use of AI
A learner rewrites their assignment using ChatGPT.
The writing improves — but no longer sounds like them.
What to consider:
- Was this support or outsourcing?
- Did AI obscure the student's actual level of understanding?
- Is your academic integrity policy clear about generative AI?
Relational response:
Ask: “How did you use this tool, and what did you learn in the process?”
Scenario 2: Biased Content Generation
An AI quiz tool generates stereotyped or culturally insensitive questions.
What to consider:
- What dataset was the tool trained on?
- Does it reflect diverse Aotearoa learners?
- Can students audit the content as part of their learning?
Class activity:
“Spot the bias.”
Students often recognise ethical gaps faster than the tools do.
Scenario 3: Data Privacy in Third-Party Tools
A tool stores student work offshore without clear consent.
What to consider:
- Was informed consent obtained?
- Can data be anonymised or minimised? (See the sketch after this scenario.)
- Is this aligned with your organisation's privacy and Treaty of Waitangi obligations?
Institutional kōrero:
Ask: “Do we have guidance for offshore data handling?”
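For teams with technical support, here is a minimal Python sketch of what "minimise and anonymise" can look like in practice: stripping a student record down to the fields a tool actually needs, and replacing the student ID with a one-way pseudonym before anything leaves your organisation. The field names, the allow-list, and the salt value are all hypothetical, and any real implementation should follow your institution's privacy policy and Māori data sovereignty guidance.

```python
# Illustrative sketch: minimise and pseudonymise a student record before it
# is sent to a third-party tool. All field names and values are hypothetical.
import hashlib

# A secret held onshore by your institution and never shared with the vendor
# (hypothetical placeholder value).
INSTITUTION_SALT = "replace-with-a-locally-held-secret"

def pseudonymise(student_id: str) -> str:
    """Return a one-way hash so the vendor never receives the real student ID."""
    digest = hashlib.sha256((INSTITUTION_SALT + student_id).encode("utf-8"))
    return digest.hexdigest()[:12]

def minimise(record: dict) -> dict:
    """Keep only the fields the tool genuinely needs; drop everything else."""
    allowed = {"submission_text", "course_code"}  # hypothetical allow-list
    return {key: value for key, value in record.items() if key in allowed}

record = {
    "student_id": "S1234567",
    "name": "Aroha Ngata",
    "email": "aroha@example.ac.nz",
    "course_code": "EDU101",
    "submission_text": "My essay draft...",
}

safe_record = minimise(record)
safe_record["pseudonym"] = pseudonymise(record["student_id"])
print(safe_record)  # only this minimised, pseudonymised record goes offshore
```

The key design choice is that the link between pseudonyms and real identities stays onshore with your organisation, so consent, correction, and deletion requests remain under your control rather than the vendor's.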
🪶 Applying Kaitiakitanga — Three Foundational Questions
Before using any AI tool in your teaching, apply these three pātai as a kaitiaki — a guardian of your learners’ mana, data, and dignity.
1. Mana — “Does this tool respect the dignity of my learners?”
What to look for:
- Does it uphold learners as whole people, not just metrics?
- Could it be used to surveil or rank students publicly?
- Does it preserve learner identity, or flatten diverse voices?
Warning signs:
- Tools that shame or expose learner struggles
- AI that categorises students in ways that diminish potential
- Systems that prioritise efficiency over relationships
Reflection:
“Would I want this tool used with my own whānau or tamariki?”
2. Pūtaketanga — “Do I understand where this tool’s ‘knowledge’ comes from?”
What to look for:
- What data was this model trained on?
- Whose voices are included, and whose are missing?
- Does it represent Māori, Pasifika, or diverse cultures accurately?
- Can you trace the source, or is it a black box?
Warning signs:
- Tools that cannot explain their data origins
- AI that tokenises or misrepresents te reo Māori
- Outputs that contradict or erase mātauranga Māori
Reflection:
“If this AI taught my students about tikanga, would I trust what it said?”
3. Rangatiratanga — “Does this tool support or undermine learner sovereignty and agency?”
What to look for:
- Does it encourage critical thinking or replace it?
- Do learners control their data, or does the platform own it?
- Does it build capability or dependency?
Warning signs:
- Tools that discourage questioning
- Systems that lock learner data behind proprietary walls
- Platforms that reduce choice, voice, or decision-making
Reflection:
“Does this tool grow rangatira — self-determined learners — or compliant users?”