AI and Education: What we risk when AI becomes the teacher
Introduction
AI has impacted almost every industry to some degree, some far more than others. Education, however, has yet to settle on a clear position towards AI. Views on the topic are polarised: some make the case that inefficiencies and declining grades demand tech-based solutions, while others argue that education should not be treated as a commodity to be commercialised by the tech machine.
My case for teacher-led education is based on the fundamental ontological differences between artificial intelligence and human intelligence. The process of education can’t and shouldn’t be limited to efficiency, metrics and performance. The process of education involves other meaningful traits and experiences that rely on human interaction. Creativity, debate, academic stimulation, and critical thinking are messy, meandering and inefficient processes shaped by experience.
AI threatens to replace not education itself, but the processes that make education meaningful. In this post, I will explore a set of ethical issues raised by AI’s expansive growth in the education space. There are two parts to this blog. The first explores AI’s ramifications for education itself. The second explores arguments for and against AI as a stand-in for human teachers. With AI’s advances, intelligence is becoming ever more abstracted from humans, and tech industry goals aimed at maximum efficiency may work directly against the education of future generations.
AI’s advance in education
Education, broadly defined, is the cultivation of critical thinking, intellectual curiosity, and the ability to analyse and solve problems. It involves acquiring knowledge across various fields, understanding context to make informed decisions, and possessing a lifelong disposition to continue learning.
Tech advancements have been crucial for broadening access to education. The internet and social media have expanded the availability of educational resources, and AI has taken this a step further, scaling and transforming access to include real-time, interactive instruction. At face value, ChatGPT, Claude and Gemini can provide the information and tools for an education instantaneously. There is a real fear amongst educators and academics that such unfettered access may have the opposite effect, slowly eroding the learning faculties of future students. To go further still, greater access to education will not necessarily translate into more educated people. The crux of the issue lies in the following question: is education a means to an end or an end in itself?
The drive for efficiency – The Alpha School case study
AI-based education systems are designed to improve the efficiency of how students attain knowledge. This is to improve grades, cut down teaching time, and reduce the associated costs. The hyperfocus on efficiency, which has gripped almost every other industry, comes with unique challenges in the delivery of education.
Alpha School, founded by MacKenzie and Andrew Price in 2014, is a network of schools that claims to educate students into the top 0.1 per cent of students in the US. The project started long before the AI boom, but Alpha School's rapid scaling of AI-based learning methods makes it an informative case study for this discussion. It has replaced teachers with AI-based algorithms, and all academic material is covered in only two hours per school day. ‘Guides’ have replaced teachers, with roles strictly limited to monitoring the systems in place. All aspects of student performance are monitored by AI agents, down to the movements of their mouse cursors. The data is then fed into apps that read and summarise student performance, which in turn optimise the next set of lessons. It is a remarkable example of how technology can be harnessed for learning.
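The mechanics being described here are not exotic. As a rough illustration of the idea (this is my own sketch, not Alpha's actual system; every name and signal below is a hypothetical stand-in), a pipeline that reduces raw activity telemetry to the metrics a next lesson gets optimised against might look like this:

```python
from dataclasses import dataclass

@dataclass
class LessonTelemetry:
    # Hypothetical signals of the kind such systems are said to collect
    questions_attempted: int
    questions_correct: int
    seconds_active: float
    cursor_idle_seconds: float  # e.g. inferred from mouse movement

def summarise(t: LessonTelemetry) -> dict:
    """Reduce raw activity to the headline metrics a lesson planner would consume."""
    accuracy = (t.questions_correct / t.questions_attempted
                if t.questions_attempted else 0.0)
    # Treat idle cursor time as a crude proxy for disengagement
    engagement = 1.0 - min(1.0, t.cursor_idle_seconds / max(t.seconds_active, 1.0))
    return {"accuracy": round(accuracy, 2), "engagement": round(engagement, 2)}

print(summarise(LessonTelemetry(20, 17, 3600, 540)))
# → {'accuracy': 0.85, 'engagement': 0.85}
```

The point of the sketch is how lossy the reduction is: everything the essay goes on to defend (curiosity, digression, struggle) falls outside these two numbers.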
Alpha School is responding to a real issue of educational disparity in the US, which also exists in the UK. A significant proportion of my time tutoring is allocated to helping students fill educational gaps, due in part to their schooling. The financial stresses on public schools are mounting, with large class sizes and increased demands on teachers. Students have varying needs, with many often out of their depth, unable to keep pace with their peers. It is becoming increasingly difficult for overstretched teachers in many public schools to adequately provide the attention and quality of teaching needed for their students. The Alpha School and other AI-based educational programmes claim to have the solution to this structural problem.
My own experience as a tutor contradicts these approaches to education. The focus on student-driven metrics can damage a student’s relationship with their education. At Alpha School, when a student is struggling with a lesson, the guide is there to ‘encourage the child to figure it out on their own, oftentimes through YouTube links or online searches’. Alpha has a strict policy that its staff do not teach, purportedly to foster the student’s control over their own education. While independence matters academically, this is a lot of responsibility to place on a child so young. Independence is a learned skill, and fostering it in students has long been an accepted part of a teacher’s role.
The drive to make students dictate their own education through algorithms carries a wider risk of desocialisation. It feeds the idea that seeking knowledge is a solitary pursuit and that asking for help is unnecessary, even detrimental to progress. There have been many cases of students dropping out of these schools due to anxiety over AI-set learning targets. One parent, whose child’s grades had improved, ‘noticed a change in both her children’ after they started at Alpha. Although on paper they met their academic targets, their attitude shifted from curiosity about knowledge to an unhealthy fixation on metrics. Education has always been social in nature. School and learning are opportunities to be with peers, to debate, and to work through problems and projects together: skills essential for socialisation in the workforce. AI engines, by contrast, are programmed to be hyper-validating. They are built to be helpful and to avoid confrontation, which feels agreeable and supportive. Educational institutions promote healthy discomfort for good reason: being agreed with all the time will not instil the temperament needed for future development.
Programmes like Alpha appear to have plunged headlong into an exclusively tech-based education, making students responsible for their own learning and treating data as the main driver of their progression. To me, this comes across as an experimental project with children as data points, and it entirely misses what education has represented to educators, students and generations before. The image of children confined to rooms, communicating exclusively with AI bots, is one most parents and students are simply not prepared for.
Learning through struggle
Extreme examples like Alpha School draw attention to the difficulty of quantifying the skills imparted by a traditional educational model — skills that, in my experience as a tutor, have proven critical to students’ academic and personal development. The foundation of academic progress is built on struggle and problem solving; an extreme fixation on efficiency and targets disregards this essential component. My students often become overly fixated on perfect accuracy or on answering questions quickly. A fundamental part of teaching, less apparent to students themselves (particularly younger ones), is helping a student identify their own mistakes. Many factors can cause a mistake: miscomprehension of the question, losing focus, or the pressure of time limits. I often find students repeat the same mistake if I only provide the solution; when the solution is discussed, they have a better chance of consolidating the information properly. This is why I spend a significant amount of time in my lessons probing my students to find where the issue lies. Familiarising oneself with the struggle of learning is essential to growth.
Another value that is harder to programme into AI is the value of inefficiency in education. Much of the time, creative ideas don’t emerge on demand, and a good educator allows space for this curiosity. If an AI agent’s task is to teach ‘problem A’ and a student wants to raise other questions that emerge from ‘problem A’, the AI will struggle to facilitate that exploration. Veering off topic is a deeply integral part of learning. Getting lost in ideas follows no mapped-out route; it grows from experience, external influence and social triggers. The AI will be fixated on moving students towards measurable goals, potentially harming a student’s capacity to explore around the topic. And to what extent does this restrain student independence when academic syllabuses have already been dictated by pre-conceived metrics? Veering off topic is valuable, as is the ability to connect one idea with another. How effectively can an AI program facilitate this? A more appropriate quandary is why we would want AI to do this in the first place.
Overreliance, complacency and academic integrity
While AI promises efficiency and support, it also introduces a fundamental tension between assistance and dependency. A more practical issue arises with students’ overreliance on AI tools. The speed of its computation can create a veneer of academic authority, and students may struggle to second-guess the information presented. Part of academic study, particularly in research and higher education, is the ability to discern quality in one’s sources. AI still makes mistakes, so it is critical for students to stay cautious and apply their critical faculties to the information they receive. Another issue is ease of access, which can work against a person’s educational development. Younger students may rush to use AI without exercising self-restraint, which shows the importance of tighter controls so that bad habits aren’t developed early. It is very difficult to exercise restraint when AI’s capability is a few clicks away. A good teacher will push their students to take the first step in enquiry themselves.
The data suggests that AI hasn’t necessarily made students more dishonest. However, universities are having to find ways of counteracting plagiarism, which has become harder to detect. Certain tools, such as GPTZero and Turnitin, are used by teachers to flag AI-assisted plagiarism, though some of these programmes are still in early development and their effectiveness is questionable. A more pressing concern lies not in the technology itself, but in how students are adapting their use of it. Insights from other university tutors suggest that while AI-generated work was once easy to detect due to blatant misuse, it is now becoming increasingly difficult to identify, implying that students are becoming more tactical in how they use AI for their assignments. This is an example of AI producing novel problems and forcing academic institutions to act.
Role, perspective and experience
We should consider what it actually means to be a teacher, and what students risk losing. A teacher is more than an information processor. Teachers are mentors, academic advisors, authorities and, to many students, influential figures; they help a student find their place in the world. This raises an interesting question about AI: if no emotional interaction exists between student and agent, can students meaningfully respond to the authority of an AI agent? The student-teacher relationship is unique in how it is structured. Unlike the parent-child relationship, it has clear boundaries that allow knowledge and values to flow, enabling the conditions for authority and intellectual trust.
Education, in this sense, is not merely the transfer of information, solutions, or data. It is knowledge shaped by understanding, perspective, experience, and interpretation. The source of knowledge matters. The distinction between human and AI teaching raises a philosophical issue. Information from AI will always appear objective, detached from any lived experience, with no stake in what is being communicated. By contrast, a teacher’s knowledge is situated in their own perspective. It is embedded in the way the teacher frames the information: the methods they choose, the emphasis they place, and the explanations they prioritise. In this way, the student inevitably encounters not just the content, but a way of knowing.
From personal experience: when I was 10, my tutor taught me a very simple maths trick that differed from the method I had learned in school. It involved a very fine detail in how I wrote out a maths problem. On reflection, it was not essential, and I would not have missed out had I never been introduced to it. But my tutor’s idiosyncrasy has stayed with me, not because it was particularly effective, but because it was unique to their experience. Knowledge morphs from one person to another, with slight adaptations based on each person’s experience. This is a beautiful and intimate process that will be lost with AI.
Moreover, there is an organic process to learning that isn’t always planned or directed. The flow of a lesson can meander, with secondary or tertiary aspects being highlighted; I have found this in my own tutoring. For instance, when teaching a quantitative reasoning problem, I might notice that the student misread the question, wrote illegibly, missed a specific step in their working, or spent too much time on the wrong detail. These points might not relate directly to the solution, but they are critical secondary skills that require human intervention. If I simply provided the solution from start to finish, the student would miss insights that apply to other areas. For this to work well, a teacher needs to be attentive, observe, and intervene appropriately. These nuances will be very difficult to build into AI systems, at least any time soon. Teachers can also provide emotional regulation and reassurance when a student is frustrated.
Inequality and the AI divide
AI is often heralded as a solution to challenges that have overloaded schools and educational institutions. However, a new form of inequality in education appears to be emerging. Students from academically advantaged backgrounds are able to use AI far more frequently and effectively, with 93.4% of non-first-generation students reporting AI use compared to 14.7% of first-generation students. This suggests that prior exposure to academic environments may influence how confidently and strategically students engage with AI tools. As a result, claims that AI will automatically “level the playing field” in education appear overstated. AI doesn’t just give information—it rewards those who already know how to think with it.
Students who already possess strong critical thinking skills can use AI as a tool for refinement and acceleration, while others risk becoming passive recipients of information. This creates a new form of educational inequality: not in access to knowledge, but in the ability to engage with it. Socioeconomic factors are at play here too. Users with more resources are more likely to question AI outputs, whereas those with less are more likely to accept them at face value. Many tech companies encourage young people in schools and universities to familiarise themselves with AI tools, yet many children will not be able to, and some will be intimidated. This puts unhealthy pressure on them to adapt to and utilise AI in daily life, sustaining the loop of issues discussed earlier in this post.
AI to augment, not replace
Thus far, I have shown what is lost when AI-based approaches to education are taken to an extreme. However, as AI tools increasingly pervade the workplace, it would be an equally extreme stance to suggest there is no place for advanced technology in education at all. AI can bring real progress if the drive for better educational metrics is balanced against other values. Many useful tech education startups have emerged, some doing positive work with public schools and local government authorities. Many aspects of learning are transferable into an AI learning tool; I use many myself and recommend that other tutors and teachers do so too. But as teachers, we need greater awareness of when to use these tools. Many students still prefer being taught directly, and this should always be available to them. I balance the two and interchange often.
AI certainly has value in its augmentation, rather than complete replacement, of education. Its value for educators comes primarily in time and resource efficiency from tasks that take the focus away from the teaching itself. There is an abundance of performance tracking applications that are very helpful in constructing profiles of students. I utilise many of these also—useful information for parents that would have taken hours to collect can now be done with a few inputs and prompts. Other uses are for homework planning and coordination, exercise question generation, resource preparation, structuring lessons and more.
However, even for certain administrative functions such as marking, humans still have a place. For humanities and the more creative subjects, teachers remain important in the marking process: there is room for interpretation, and attempting to make marking fully objective in such subjects will not produce fair outcomes. For marking STEM papers, by contrast, teachers will likely be happy to hand the task over. Even for clearer use cases like academic research, there is subjectivity in how we research and in the points we find valuable for a project. AI can miss information that a human would deem valuable, so skills like reading and comprehension remain important and should still be taught—perhaps more so given the amount of false and AI-driven information now circulating.
On the topic of assessing AI’s use case, we need to ask teachers and educators what they need assistance with and what they deem should still be under their responsibility. This is a more mature approach that will find the best of both worlds. Stigmas around the use of such tools should be removed, with teachers having more flexibility with AI tools. There is also room for teachers to be educated on how to utilise AI to their advantage. The main takeaway should be for teachers to be included in this transition as they are the ultimate authorities in education.
A teacher vs AI case study
I want to make a parallel case study comparing how a human teacher’s lesson would differ from an AI-based one. Take, as the example both lessons address, a standard economics question on scarcity: why aren’t there enough resources for everyone?
The AI lesson would likely begin by introducing key terms and then guide the student toward applying them to the problem. The process would be efficient and closely tied to the question, with clearly defined targets. However, rather than simply following a fixed linear path, the AI would adapt the sequence of tasks based on the student’s responses, adjusting difficulty and focus in real time. To determine whether the student is progressing, the system relies on data generated by the student, such as answers, error patterns, time spent, and completion rates. These inputs are then used to produce measurable performance indicators, making the process highly effective in tracking progress and identifying weaknesses on measurable aspects such as accuracy, definitions, and structured reasoning. While this allows for efficient optimisation of performance, it will prioritise what can be quantified over the more complex, less measurable processes involved in deeper understanding.
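The adaptive behaviour just described can be caricatured in a few lines of code. This is a deliberate oversimplification, every name and threshold here is my own assumption rather than any real product's logic, but it makes the point concrete: the system only ever steers against what it can measure.

```python
def choose_next_difficulty(difficulty: int,
                           recent_results: list[bool],
                           avg_seconds: float) -> int:
    """Adjust task difficulty from answer history and time spent.

    recent_results: correctness of the last few answers, newest last.
    """
    accuracy = sum(recent_results) / len(recent_results)
    if accuracy >= 0.8 and avg_seconds < 60:
        return difficulty + 1          # progressing quickly: harder tasks
    if accuracy < 0.4:
        return max(1, difficulty - 1)  # struggling: step back a level
    return difficulty                  # otherwise hold steady

# Four of the last five answers correct, answered briskly: difficulty rises
print(choose_next_difficulty(3, [True, True, False, True, True], 40))  # → 4
```

Notice what the rule cannot see: a student who got questions wrong because they were chasing an interesting tangent looks identical to one who was simply lost.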
In contrast, my approach would begin by discussing the premise of the question—why there aren’t enough resources for everyone—and exploring what the student can bring to the discussion. I would take time to clarify terms and ensure definitions are properly understood, but not exclusively as an end in themselves. Instead, I would encourage the student to build their own case, developing their reasoning through dialogue. I would also allow space for ambiguity, recognising that economic concepts do not always fit neatly into fixed definitions, and that understanding often develops through questioning and refinement. Rather than moving directly toward a correct answer, I would respond to the student’s tone, hesitation, and confidence, adjusting the pace and direction of the lesson in a way that reflects their understanding. This provides the best of both worlds: it ensures the student develops the key knowledge and detail required for their exams, while also fostering the student’s curiosity about how and why they are studying in the first place. This allows the student to take some ownership of the learning process, where ideas are not simply delivered but developed collaboratively. There is a deeper philosophical question here: can a student truly embrace the idea that an AI is fostering their own agency of thought or will it instead feel as though their direction is ultimately deterministic? AI might be adaptive and non-linear, but will the student experience genuine interaction or will it still feel like a set of instructions being followed?
Conclusion
In recent years, the sheer pace of AI’s development has caught many off guard and left many worried about their futures in education. It is important that we start having more conversations about what we are trying to achieve with AI. Are we aiming to build the most powerful tools for AI’s own sake, or to better the human experience of education?
There is currently too much fixation on proving that AI works in education. While AI offers unique solutions to schooling challenges that need addressing, there is more to education than maximising children’s exam performance. Left unchecked, AI risks diluting what should be the focus of an enhanced educational landscape: creating critical thinkers, preserving a space where values still have a role, motivating agency of thought, allowing risk and experimentation, and accepting mistakes as part of learning. Efficiency comes with costs, and it is critical that metrics don’t override experience.
The oversimplistic notion that AI is the solution for our education system falls into the same pitfalls as previous technological advancements. There are no simple fixes. AI has scaled growth and expanded education in ways that have boosted human potential. But to call AI our saviour is to ignore that educational outcomes rest on a multitude of complex factors: government decision-making, social stigmas, educational biases, motivational issues, widening socio-economic disparities, challenges with student mental health, and so on.
Human growth, cultivated through human-based education, was the necessary precursor to AI and we should celebrate its continued role within it.
I leave you with a question to ponder: If education is no longer something we do, but something done for us, what becomes of the learner and the world they seek to understand?