Teacher Agency and AI: 4 Questions for the Future of ELT
Introduction: More Than Warnings

A recent UK government report on AI in schools and further education carried a bold title: “The biggest risk is doing nothing.” While the urgency is warranted (and the report is a valuable read), the framing is strikingly uninviting. For educators already navigating a fast-changing educational landscape of platformisation and policy reform, a do-something-now mentality can feel more like pressure than support.
In English Language Teaching (ELT), conversations around artificial intelligence often hover between two poles: blind optimism and paralysing fear. What’s missing is space. Space for educators to explore critically and without intimidation. Space to reflect on what is being gained, what might be lost, and what teaching is for in the first place.
This article began as a response to that missing space. Over the past few months, I’ve been exploring teacher agency in the context of AI and automation, drawing on insights from psychology, design theory, ELT research, and even aviation. In the process, four key questions have surfaced, not as conclusions, but as provocations:
- What does teacher agency really mean in an age of automation?
- Are mindset, AI literacy, and technical fluency central to how teachers experience agency?
- What can we learn from other industries where automation has reshaped control and responsibility?
- And what would it mean to measure teacher purpose as AI takes on more of the “what” and “how”?

What follows is an attempt to work through those questions. It is not a prescriptive model but an invitation to discussion, inviting teachers, trainers, leaders, and designers to think beyond the hype and toward a more grounded, human-centred vision of AI in ELT.
Background
Agency is often mistaken for freedom, the unrestrained ability to choose or act. But in educational contexts, including in ELT, agency is far more complex. It is not a fixed trait but a dynamic capacity to act with purpose, shaped by context, structures, and relationships.

Priestley, Biesta, and Robinson (2015) define teacher agency as an ecological phenomenon, something that emerges from the interplay between an individual’s capacities, their professional environment, and the systems that surround them. In this framing, agency is situated, temporal, and relational. It is not about autonomy in a vacuum, but about influence, intent, and meaningful participation.
This matters in ELT because language education is rarely neutral. It is shaped by global exams, platform design, commercial content, policy mandates, and increasing reliance on digital infrastructure. These structures often constrain how teachers work, even when they promise flexibility and innovation.
The COVID-19 pandemic magnified this tension. Teachers adapted rapidly to remote instruction, experimenting with new tools while managing burnout, blurred boundaries, and changing learner needs. In the process, new expectations emerged for digital fluency, content creation, student wellbeing, and more. As highlighted in my MA dissertation on post-pandemic teacher competencies, these shifts demanded not just new skills but new identities and a renewed clarity of purpose.
And now, AI enters the frame. It is faster, more powerful, and more integrated than the technological transformations that preceded it.
1. What Does Teacher Agency Really Mean in an Age of Automation?
Teacher agency has always been a contested space, caught between policy, pedagogy, and practice. But in an age of automation, it is being reshaped in quieter, subtler ways. As AI tools enter the classroom, agency is no longer just about what teachers choose to do, but about what they are allowed to do, expected to do, or quietly relieved of doing.
The risk is not that teachers will be replaced. It’s that they’ll be displaced, nudged to the margins of decision-making while still being held responsible for outcomes. When lesson plans are auto-generated, assessments are pre-marked, and feedback is templated, teachers may no longer feel like authors of the learning experience. They become implementers rather than designers.
And yet, agency is not simply the inverse of automation. It’s not a binary choice. Instead, it’s about how decisions are shared between humans and systems, and what professional judgment looks like when many choices are pre-coded.

This idea is well illustrated in Lisa Feldman Barrett’s theory of constructed emotion: our brains are not reactive but predictive, constantly constructing meaning from past experience and present context. Teaching, too, is predictive. It involves interpreting classroom cues, adjusting in real time, and navigating uncertainty. These acts of sense-making and adaptation sit at the core of teacher agency.
The challenge with AI is that it can flatten that uncertainty. It reduces complexity into categories, recommendations, or scripted paths. And while that can help with efficiency, it can also remove the need for interpretation, which is the very work that defines teaching.
To preserve agency in this new landscape, we may need to rethink it as a negotiated process: a continual act of resisting over-automation, reclaiming judgment, and making visible the decisions that matter. But it is important to remember that agency is not lost all at once. It is eroded gradually, when systems do too much thinking for teachers, or when teachers cease to view thinking as part of their role.
2. Are Mindset, AI Literacy, and Technical Fluency Central to How Teachers Experience Agency?
When discussing digital transformation in education, it’s tempting to assume that better training equals better outcomes, and that once teachers “know how to use the tools,” the problems will be solved. But agency is not the same as access, nor is it guaranteed by competence.
Many teachers today are technically fluent. They can operate platforms, navigate digital environments, and even prompt AI tools. But despite this fluency, many do not feel empowered. Instead, they report feeling overwhelmed, uncertain, or sidelined. And in many cases, it is as if technology is something being done to them rather than with them.

This disconnect points to something deeper: the difference between functional literacy and critical literacy (Janks, 2010). Functional AI literacy is about knowing how to use a tool. Critical literacy is about understanding how that tool makes decisions, what assumptions it encodes, and what pedagogical choices are being embedded without consent.
Mindset plays a key role here. Dweck’s growth mindset model reminds us that belief in one’s ability to learn and adapt is central to professional resilience. But in the context of AI, an uncritical growth mindset can be dangerous. Enthusiastically adopting AI without questioning it can accelerate the erosion of agency, not prevent it.
Instead, teachers need a dual mindset:
- One that is open to experimentation, but
- Anchored in scepticism, ethics, and pedagogy.
My MA dissertation touched on this directly. Teachers’ digital competencies were not just technical; they were reflective. Those who adapted most effectively weren’t just skilled with tools. They understood their role within a changing system. They could locate themselves professionally, ethically, and pedagogically within new expectations.
So yes, mindset, literacy, and fluency all matter. But only when they are channelled through the core of teacher agency: a commitment to teaching as an intentional, relational, and meaning-making act.
Without that purpose, technological fluency risks becoming just faster compliance.
3. What Can We Learn from Other Industries Where Automation Has Reshaped Control and Responsibility?
Education is not the first industry to grapple with the promises and perils of automation. In fact, sectors like aviation, healthcare, and industrial engineering have spent decades studying what happens when humans and machines share cognitive responsibility. The results are sobering and relevant to ELT.
One concept that stood out during my Human–Computer Interaction (HCI) course was the idea of “designed agency”: the extent to which a system either preserves or erodes the user’s sense of control. What became clear is that agency isn’t something users bring to the system; it’s something the system can either support or suppress.

Take aviation, for example. The “out-of-the-loop” performance problem describes how excessive reliance on autopilot can lead to decreased situational awareness. Pilots may become excellent monitors of systems but lose the sharpness required for manual intervention during emergencies. The skills don’t just atrophy; they are de-prioritised by the design itself.
Or consider healthcare. Clinical decision-support systems often provide suggestions or alerts based on algorithmic risk models. When these systems are trusted blindly (or, conversely, ignored due to poor calibration), critical errors can occur. And responsibility becomes blurred: if the machine suggested it, who is accountable for the outcome?
These dilemmas are not just theoretical. They are embedded in system design choices made long before the product reaches the user. Trade-offs between automation and agency are weighed during the design process, balancing efficiency, scalability, trust, and human oversight. Once these trade-offs are set, they shape how users behave, think, and feel.
So what does this mean for education?
It means that teachers’ agency is being shaped before they even open the app. If a platform generates the lesson structure, selects the material, and prompts intervention based on predictive analytics, then the space for teacher judgment is narrowed by design. Even if a teacher could override or modify it, the default path is optimised to make that unnecessary or undesirable.
The HCI course demonstrated how system designers map out the “intended mental model” of the user: how much initiative are they expected to take? What feedback loops reinforce control or deference? These questions are critical, yet rarely openly discussed in the ELT sector, where commercial EdTech tools often prioritise usability over pedagogy.
The lesson here? Agency is not lost at the point of use; it is negotiated upstream. It is in the assumptions built into the tools we choose, design, or accept. If we want teachers to feel empowered, we must start much earlier by embedding teacher voice, values, and variability into the very architecture of educational technology.
Because once the path is paved for them, it takes conscious resistance to walk another way.
4. What Would It Mean to Measure Teacher Purpose as AI Takes on More of the “What” and “How”?
As AI systems become more embedded in curriculum design, lesson planning, and assessment, one might ask: what’s left for the teacher?
The answer, arguably, is everything that matters. But that only holds true if we’re willing to shift our focus from tracking inputs and outcomes to recognising and valuing purposeful pedagogical decision-making within the tools we use.
In traditional performance frameworks, teacher effectiveness is often measured by delivery and results. In its most simplistic sense, performance is measured by content coverage, task completion, and learner progress. These metrics align neatly with AI’s strengths because they are easily quantifiable. So it is easy (and tempting) to automate these markers.
Teacher purpose is harder to pin down, but not impossible. It is relational, ethical, and intuitive. It emerges in moments of hesitation, redirection, adaptation, and improvisation. It is not what is done, but why it is done, and for whom.
During the HCI course, I explored how user intent is often interpreted through action. Designers try to infer what users want based on clicks, sequences, and timing. But intent and purpose are not the same. In education, especially, teachers often act against the dominant flow of a system. They skip a task, rephrase a question, or pause to respond to a learner’s emotional cue.

These acts of pedagogical divergence are where purpose becomes visible. But current systems (human and technological) are rarely designed to detect and record them, let alone value them.
So what would it mean to measure teacher purpose?
It might start with a shift in where and how we look. Instead of only quantifying outcomes, we could:
- Document teacher reasoning, asking not just what decisions were made, but why.
- Create space for narrative reflection, where teachers describe moments of tension, adaptation, or resistance.
These first two points are already established practices in well-designed and well-managed teacher development and education programmes, and they often contribute to building reflective teaching practices. But technology may be able to add something else…
- Use AI to surface patterns of divergence. When teachers frequently override system recommendations, that signal is worth listening to.
- Rethink feedback loops, designing platforms that ask teachers to confirm, question, or contextualise algorithmic suggestions (a rough sketch of how such signals might be captured follows this list).
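To make the last two points a little more concrete, here is a minimal sketch of how a platform might log teacher responses to algorithmic suggestions and summarise where they most often diverge. Everything in it is hypothetical: the event structure, the action labels, and the data are illustrations, not any real platform’s API.

```python
# Hypothetical sketch: logging teacher responses to algorithmic suggestions
# and surfacing patterns of divergence. Names and data are illustrative only.
from collections import Counter
from dataclasses import dataclass

@dataclass
class SuggestionEvent:
    suggestion_type: str    # e.g. "next_task", "feedback_template", "grouping"
    teacher_action: str     # "accepted", "modified", or "overridden"
    teacher_note: str = ""  # optional free-text reason, useful for narrative reflection

def divergence_report(events: list[SuggestionEvent]) -> dict[str, float]:
    """Share of suggestions the teacher did not simply accept, per suggestion type.
    A high value is a signal worth listening to, not an error to train away."""
    totals: Counter = Counter()
    diverged: Counter = Counter()
    for event in events:
        totals[event.suggestion_type] += 1
        if event.teacher_action != "accepted":
            diverged[event.suggestion_type] += 1
    return {kind: diverged[kind] / totals[kind] for kind in totals}

# Invented example data
log = [
    SuggestionEvent("next_task", "accepted"),
    SuggestionEvent("next_task", "overridden", "class needed more speaking practice"),
    SuggestionEvent("feedback_template", "modified", "softened the tone for this learner"),
    SuggestionEvent("feedback_template", "overridden", "template ignored the learner's question"),
]
print(divergence_report(log))  # {'next_task': 0.5, 'feedback_template': 1.0}
```

The point of the sketch is not the code but the design stance it implies: an override is treated as professional judgment to be recorded and revisited, not as noise to be smoothed away.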
From an HCI perspective, this aligns with a model of meaningful human control (MHC): systems that support intentional interaction, value human context, and allow users to modify outcomes based on principles, not just preferences.
In ELT, this would mean treating teacher purpose not as a side-effect, but as a design criterion: building platforms that invite teachers to embed their values into the workflow, rather than forcing them to work around the system when something isn’t fit for their context or purpose.
If we fail to do this, we risk reducing teaching to coordination. But if we succeed, we might create systems that actually enhance teacher purpose and free up space for it to thrive.
Conclusion: Building Agency as AI Evolves
Agency is not a fixed trait. It’s a process that is enacted, negotiated, sometimes resisted, but always shaped by context. In the age of AI, that context is shifting fast, but speed does not have to mean surrender.
Throughout this piece, four questions have guided the exploration:
- What does teacher agency mean when decisions are pre-scripted by design?
- How do mindset, AI literacy, and fluency influence not just what teachers can do, but what they believe they can do?
- What lessons can we learn from other industries where human judgment has been displaced — and sometimes degraded — by automation?
- And how might we begin to measure teacher purpose in ways that value professional judgment, not just procedural compliance?
I believe these aren’t abstract concerns. They are active tensions in classrooms, platforms, and institutions right now. And while ELT may not have the scale or lobbying power of other sectors, it does have something just as powerful: a global network of reflective, adaptive, and values-driven professionals.
If English language education is to remain human-centred, then AI must support the people who teach it, not just the systems that deliver it. That means embedding agency into the tools, policies, and pedagogies we design. It means being honest about what we’re gaining and what we are losing.

And perhaps, as the role of AI expands, what matters most is not grand declarations or binary debates, but small spaces where educators can pause, question, and share. Spaces where purpose is not assumed but explored. And where agency is not declared but nurtured.
So, just maybe, building a thoughtful, supportive community of practice amidst the noise of automation might be the most meaningful step we can take.
References
- Bandura, A. (2001) Social cognitive theory: An agentic perspective. Annual Review of Psychology, 52, pp.1–26.
- Barrett, L.F. (2017) How Emotions Are Made: The Secret Life of the Brain. Boston: Houghton Mifflin Harcourt.
- Dweck, C.S. (2006) Mindset: The New Psychology of Success. New York: Random House.
- Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., … and Vayena, E. (2018) AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), pp.689–707. DOI: https://doi.org/10.1007/s11023-018-9482-5
- Janks, H. (2010) Literacy and Power. London: Routledge.
- Priestley, M., Biesta, G. and Robinson, S. (2015) Teacher Agency: An Ecological Approach. London: Bloomsbury Academic.
- Segura Harvey, B. (2023) Teacher competencies post COVID-19: What constitutes an effective online teacher? London: British Council. Available at: https://www.teachingenglish.org.uk/sites/teacheng/files/2023-09/MDA%202023_Brighton_Beatrice_Segura_%20Harvey.pdf
- UK Department for Education (2024) The biggest risk is doing nothing: Insights from early adopters of Artificial Intelligence in schools and further education colleges. London: DfE. Available at: https://www.gov.uk/government/publications/ai-in-schools-and-further-education-findings-from-early-adopters
Other Interesting Reads
- Luckin, R., Holmes, W., Griffiths, M. and Forcier, L.B. (2016) Intelligence Unleashed: An argument for AI in education. London: Pearson.
Available at: https://www.pearson.com/content/dam/corporate/global/pearson-dot-com/files/innovation/Intelligence-Unleashed-Publication.pdf
- Selwyn, N. (2019) Should Robots Replace Teachers? AI and the Future of Education. Cambridge: Polity Press.
- Holmes, W., Bialik, M. and Fadel, C. (2019) Artificial Intelligence in Education: Promises and Implications for Teaching and Learning. Boston: Center for Curriculum Redesign.
Available at: https://curriculumredesign.org/wp-content/uploads/AIED-Book-Excerpt-CCR.pdf
- Williamson, B. and Eynon, R. (2020) Historical threads, missing links, and future directions in AI in education. Learning, Media and Technology, 45(3), pp.223–235.
Available at: https://ora.ox.ac.uk/objects/uuid:ccba00ff-ddc0-4ffd-91ca-5670ece5414e/files/rf7623c73p
- Tsai, Y.S., Poquet, O., Gašević, D., Dawson, S. and Pardo, A. (2020) Complexity leadership in learning analytics: Drivers, challenges and opportunities. British Journal of Educational Technology, 51(6), pp.2401–2419.
Available at: https://www.researchgate.net/publication/334510273_Complexity_leadership_in_learning_analytics_Drivers_challenges_and_opportunities
- van de Oudeweetering, K. and Voogt, J. (2018) Teachers’ conceptualization and enactment of twenty-first century competences: Exploring dimensions for new curricula. The Curriculum Journal, 29(1), pp.116–133.
DOI: https://doi.org/10.1080/09585176.2017.1369136
- Winne, P.H. and Azevedo, R. (2014) Metacognition. In: R.K. Sawyer, ed., The Cambridge Handbook of the Learning Sciences, 2nd ed. Cambridge: Cambridge University Press.