Unpacking the ethics of using AI in primary and secondary education: a systematic literature review

Abstract

This paper provides a systematic review of the literature discussing the ethics of using artificial intelligence in primary and secondary education (AIPSED). Although recent advances in AI have led to increased interest in its use in education, discussions about the ethical implications of this new development are dispersed. Our literature review consolidates discussions that occurred in different epistemic communities interested in AIPSED and offers an ethical analysis of the debate. The review followed the PRISMA-Ethics guidelines and included 48 sources published between 2016 and 2023. Using a thematic approach, we subsumed the ethical implications of AIPSED under seventeen categories, with four outlining potential positive developments and thirteen identifying perceived negative consequences. We argue that empirical research and in-depth engagement with ethical theory and philosophy of education is needed to adequately assess the challenges introduced by AIPSED.

Locations

  • AI and Ethics
  • OSF Preprints


This paper presents the key conclusions to the forthcoming edited book The Ethics of Artificial Intelligence in Education: Practices, Challenges and Debates (August 2022, Routledge). As well as highlighting the key contributions to the book, it discusses the key questions and the grand challenges for the field of AI in Education (AIED) in the context of ethics and ethical practices within the field. The book itself presents diverse perspectives from outside and from within AIED as a way of achieving a broad perspective on the key ethical issues for AIED and a deep understanding of work conducted to date by the AIED community.
This paper provides a comprehensive review of Artificial Intelligence (AI) integration in K-12 education, examining current implementations, policy frameworks, and emerging challenges across pedagogical, technical, and policy dimensions. We analyze over 40 recent publications (2024-2025) from academic journals, government reports, and industry whitepapers to identify key trends in AI adoption across primary and secondary education systems. The analysis highlights Generative AI as the most widely adopted paradigm in educational settings, with Agentic AI emerging as a significant secondary focus, and identifies key trends in architectural approaches while noting underrepresented technical frameworks. Our review reveals three critical dimensions of AI in education: (1) pedagogical applications, including personalized learning and administrative automation; (2) policy and ethical considerations at federal and state levels; and (3) infrastructure requirements for successful implementation. We highlight the rapid growth of Generative AI (GenAI) tools in classrooms alongside persistent concerns about equity, data privacy, and teacher preparedness, and propose a readiness framework that balances pedagogical value against implementation complexity. Recommendations emphasize professional development, privacy-preserving architectures, and international governance standards to guide responsible adoption through 2030.
The paper concludes with strategic recommendations for policymakers, educators, and technology developers, emphasizing teacher professional development, privacy-preserving technologies, and international collaboration to ensure responsible AI integration that enhances rather than replaces human instruction. This review synthesizes critical insights for navigating the evolving landscape of AI in education while maintaining human-centered priorities.
This study offers a comprehensive examination of the scientific production related to the integration of artificial intelligence (AI) in education using qualitative research methods, an emerging intersection that reflects growing interest in understanding the pedagogical, ethical, and methodological implications of AI in educational contexts. Grounded in a theoretical framework that emphasizes the potential of AI to support personalized learning, augment instructional design, and facilitate data-driven decision-making, the study applies a systematic literature review and bibliometric analysis to 630 publications indexed in Scopus between 2014 and 2024. Results show a significant increase in scholarly output, particularly since 2020, with notable contributions from authors and institutions in the United States, China, and the United Kingdom. High-impact research is found in top-tier journals, and dominant themes include health education, higher education, and the use of AI for feedback and assessment. The findings also highlight the role of semi-structured interviews, thematic analysis, and interdisciplinary approaches in capturing the nuanced impacts of AI integration. The study concludes that qualitative methods remain essential for critically evaluating AI's role in education, reinforcing the need for ethically sound, human-centered, and context-sensitive applications of AI technologies in diverse learning environments.
The present study aims to analyze the impact of artificial intelligence (AI) in university education, with the objective of synthesizing its most effective applications, identifying its advantages and limitations, and exploring the ethical, technical, and pedagogical challenges associated with its implementation. The review covers recent research on AI-driven transformations of teaching methods, learning experiences, and internal educational policies. We carried out a systematic analysis of 20 relevant studies, using a methodology based on searching and selecting academic literature in databases such as Scopus, EBSCO, and Web of Science. The data were grouped according to common approaches and then compared to identify overlaps, debates, and research gaps. The results show that AI has significantly improved the personalization of learning and assessment processes, although challenges related to accessibility, algorithmic biases, and acceptance by educational stakeholders remain. The main conclusions highlight the need for ethical and sustainable approaches to integrating AI in diverse educational contexts, as well as for future research that addresses equity and cultural adaptation issues in its use. Received: 5 March 2025 / Accepted: 26 April 2025 / Published: 8 May 2025
The use of artificial intelligence (AI) in education is becoming increasingly widespread, offering numerous benefits while also raising ethical dilemmas. This study aims to explore teachers' and students' perceptions of AI implementation in education, highlighting its benefits, challenges, and ethical implications. Using a qualitative approach, the research involved in-depth interviews with 20 teachers and students from various high schools in Indonesia. Data were analyzed using thematic coding techniques to identify key patterns in their responses. The findings indicate that AI can enhance learning efficiency, facilitate curriculum personalization, and support educational administration management. However, concerns exist regarding algorithmic bias, personal data security, and the impact on social interactions in learning. Teachers emphasized the need for clear regulations on AI usage, while students were more focused on the benefits that technology can offer. This study concludes that although AI holds great potential in education, strict policies and ethical guidelines are necessary to ensure its balanced use and to prevent it from replacing the human role in the learning process. These findings provide valuable insights for policymakers and education practitioners in designing more ethical and effective AI implementation strategies.
This systematic review explores the definitions and research surrounding Fairness, Accountability, Transparency, and Ethics (FATE) in Artificial Intelligence (AI) within higher education literature. The study aims to provide a comprehensive synthesis of FATE's evolution and the challenges and opportunities identified across the analyzed papers. The review encompasses 25 SCOPUS articles published between 2019 and 2023. FATE definitions were classified as either technical or descriptive, with some studies offering multiple definitions. Findings indicate a predominance of descriptive definitions, particularly for fairness, which are primarily quantitative. Ethics definitions are mostly qualitative, while fairness research tends to be quantitative. Future research could bridge the divide between experts and the public by integrating technical and descriptive definitions and combining qualitative and quantitative approaches.
This chapter examines the perspectives of classroom teachers at Science and Art Centers regarding artificial intelligence (AI). Employing a case study design and qualitative methodology, the research collects insights from 18 teachers during the 2023-2024 academic year through semi-structured interviews, followed by content analysis. Findings indicate that AI enhances classroom processes, reduces teacher workloads, and boosts student engagement. However, challenges such as inadequate technological infrastructure, insufficient resources, and internet connectivity issues hinder effective AI integration. Teachers also express concerns about potential ethical dilemmas associated with AI applications. The chapter underscores the necessity for comprehensive in-service training, practical guidance, and high-quality technological resources to optimize AI utilization. Furthermore, it emphasizes the importance of establishing ethical guidelines and usage protocols to address these concerns and foster a responsible approach to AI in educational settings.
The rapid advancement of Artificial Intelligence (AI) has profoundly transformed education, revolutionizing teaching methodologies, student engagement, and institutional decision-making. However, despite the increasing integration of AI-driven technologies such as intelligent tutoring systems, adaptive learning platforms, and generative AI tools like ChatGPT, there remains a pressing need for a systematic evaluation of research developments in this field. This study conducts a comprehensive bibliometric analysis to map global research trends, theoretical foundations, and key innovations in AI-powered education. Utilizing data from Scopus and Web of Science (WoS), this study analyzes 23,132 valid research papers using ScientoPy, a specialized bibliometric tool. The findings reveal a substantial surge in AI-related educational research, particularly post-2020, transitioning from theoretical discussions to practical applications in personalized learning, machine learning-driven assessments, and ethical AI governance. The United States, China, and the United Kingdom emerge as the leading contributors, while prolific publication sources include the ACM International Conference Proceeding Series, Sustainability, and Education and Information Technologies. Citation analysis identifies influential studies that have shaped AI-driven educational policies, with prominent works addressing AI-assisted evaluations, the ethical challenges of AI in academic settings, and the role of generative AI in shaping modern pedagogy.
This research also highlights dominant theoretical frameworks, including Self-Determination Theory and Activity Theory, offering insights into the cognitive and behavioral aspects of AI-enhanced learning environments. Moreover, emerging trends indicate growing research interest in AI applications in K-12 education, interdisciplinary collaborations, and the ethical implications of AI-generated content in academia. By presenting a data-driven bibliometric perspective, this study serves as a critical resource for educators, policymakers, and researchers, fostering informed decision-making and promoting the responsible integration of AI in educational landscapes.
In recent years, artificial intelligence (AI) has emerged as one of the most potent tools to promote digital transformation in education. The effective use of AI in education requires robust policy frameworks and ethical oversight mechanisms. Providing an overview of the transformative role of AI in education, this chapter discusses the key requirements for the ethical use of technology in support of inclusive and equitable education systems, and offers policy recommendations to maximize the potential of AI in creating equitable, fair, and inclusive educational environments. Special emphasis is placed on the role of interdisciplinary collaboration among stakeholders to ensure that AI systems are aligned with human-centered values and educational goals. The chapter draws attention to a data-driven framework addressing the most debated issues of bias, data privacy, inclusion, risk, and fair and equitable approaches in education, in order to establish an inclusive AI ecosystem in light of global standards, international policies, and evidence-based practices.
There is significant discussion about the opportunities and issues associated with the use of artificial intelligence in higher education. However, issues such as academic integrity and privacy continue to dominate the conversation. This has limited how well instructors and higher education institutions can identify the conditions necessary to support AI use that benefits all students. The present synthesis of qualitative evidence explores the available evidence from 2019-2024 to consider the opportunities and conditions of AI use in higher education learning. The result is five synthesized findings addressing this issue at both institutional and learning design levels, considering subcategories ranging from access to interactions to future learning. Based on the synthesized findings, we propose the AIMED model, which addresses the conditions necessary to create opportunities for AI-tool use in higher education for all students. Implications and future research are explored.
The integration of artificial intelligence (AI) in education is altering teaching and requires educators to develop new competencies. Through a systematic review of 247 articles published between 2022 and 2024, this chapter explores three major themes, namely AI and Teacher Competency Development, Challenges in Reskilling and Upskilling, and Strategies for Effective Reskilling. It has become evident that educators must acquire digital literacy, data analytics skills, and AI-specific pedagogical strategies, with a corresponding need to address ethical concerns around algorithmic bias, data privacy, and equity of access. Moreover, institutional resistance, inequitable digital literacy, and especially the fast evolution of AI, which often outpaces existing training frameworks, are identified as major concerns. This chapter therefore proposes to embed AI training in continuous professional development, enhance interdisciplinary collaborations, and adapt frameworks such as P21, TPACK, and DigCompEdu to the specific needs of AI.
Artificial intelligence (AI) is transforming special education by enabling personalized learning pathways and innovative assistive technologies. However, its growing use raises critical ethical concerns, including algorithmic bias, data privacy, and fairness. Biased algorithms can lead to misdiagnoses or inappropriate learning recommendations, while the collection of sensitive student data increases privacy risks. Many educators also lack the training to critically assess AI-generated outputs. Ensuring inclusive and transparent AI design is essential to providing equal opportunities and avoiding the reinforcement of educational disparities. Policymakers, developers, and educators must collaborate to establish clear, enforceable guidelines that protect student rights and promote ethical AI use. This chapter explores the expanding role of AI technologies in special education by advocating for a balanced approach that supports innovation while prioritizing ethical responsibility and inclusion.
The position and readiness of pre-service preschool and primary teachers to use artificial intelligence (AI) in the study process is becoming an increasingly widely studied area. It is therefore important to analyse how students perceive AI opportunities, strengths, weaknesses, and threats. The qualitative study reported here analysed the position of pre-service preschool and primary teachers on the use of AI in the study process, emphasising strengths, opportunities, weaknesses, and threats. A hundred and twelve first-cycle university students participated in the study, which was conducted during the spring semester of 2024. An instrument of four open-ended questions was applied, and the collected verbal data were analysed using quantitative content analysis. The research revealed that AI is valued as an important tool for improving the efficiency of studies. Respondents highlighted AI's ability to quickly systematise and present information, save time, automate tasks, and promote creativity and the application of innovative teaching/learning methods. Possibilities also emerged to use AI to personalise teaching content and create interactive learning environments. At the same time, problems of using AI were highlighted, including the lack of reliability of information, the limitations of technological capabilities, and a potential decline in creativity, independence, and critical thinking skills. Ethical issues, such as plagiarism and lack of academic integrity, as well as data privacy concerns, were also raised by respondents.
The research results show that it is necessary to integrate AI teaching/learning elements into study programmes in order to develop students' competencies to use AI tools effectively. The need for clear ethical guidelines to ensure the responsible use of AI is also emphasised. The study showed that AI can be an effective tool for improving educational processes; however, its use must be carefully balanced with the management of potential threats.
The transition of Artificial Intelligence (AI) from a lab-based science to live human contexts brings into sharp focus many historic socio-cultural biases, inequalities, and moral dilemmas. Many questions that have been raised regarding the broader ethics of AI are also relevant for AI in Education (AIED). AIED raises further specific challenges related to the impact of its technologies on users, how such technologies might be used to reinforce or alter the way that we learn and teach, and what we, as a society and as individuals, value as outcomes of education. This chapter discusses key ethical dimensions of AI and contextualises them within AIED design and engineering practices to draw connections between the AIED systems we build, the questions about human learning and development we ask, the ethics of the pedagogies we use, and the considerations of values that we promote in and through AIED within a wider socio-technical system.
This PRISMA-based systematic review analyzes how artificial intelligence (AI) and Machine Learning (ML) are integrated into educational institutions, examining the challenges and opportunities associated with their adoption. Through a structured selection process, 27 relevant studies published between 2019 and 2023 were analyzed. The results indicate that AI adoption in education remains uneven, with significant barriers such as limited teacher training, technological accessibility gaps, and ethical concerns. However, findings also highlight promising applications, including AI-driven adaptive learning systems, intelligent tutoring, and automated assessment tools that enhance personalized education. The geographical analysis reveals that most research on AI in education originates from North America, Europe, and East Asia, while developing regions remain underrepresented. Without strategic integration, the uneven implementation of AI in education may widen social inequalities, limiting access to innovative learning opportunities for disadvantaged populations. Consequently, this study underscores the urgent need for policies and teacher training programs to ensure equitable AI adoption in education, fostering an inclusive and technologically prepared learning environment.
In recent years, the use of artificial intelligence (AI) has been the subject of an increasingly intense debate in almost all fields of science. Education is no exception, including the public and higher education sectors. In our study, we explore the intersection of public education and teacher training in higher education, to see how we can most effectively prepare today's student teachers to become tomorrow's practising teachers. We touch on the problems, challenges and fears of traditional higher education in relation to the use of AI by students, teachers and researchers, and take stock of the potential uses and limitations of AI in the everyday work of teachers in public education. In the context of AI, we also consider the paradigm of "learning by teaching - teaching by learning" to be particularly important, as technological change is forcing us all into new roles, whether as students, academics or practising teachers. Particular emphasis is placed on pedagogical process design, preparation, and the renewal of assessment and evaluation. Our method is mainly document analysis, reviewing the relevant national and international literature. We then describe the design, methods and tools of our research, which will start in 2025.
Artificial Intelligence (AI) emerges as a critical technological intervention that promises to revolutionize educational approaches, pedagogical methodologies, and learning experiences. Traditional educational models are increasingly inadequate in addressing the complex, dynamic skills requirements of a rapidly evolving global knowledge economy. The integration of AI technologies represents a pivotal mechanism for reimagining educational delivery, personalization, and strategic innovation. This systematic review examines the conceptual structure of artificial intelligence in education (AIED) research, focusing on AI applications, research topics, and research design elements. Following PRISMA guidelines, the study analyzed peer-reviewed articles published between 2020 and 2024 from major academic databases; the PRISMA methodology consists of a 27-item checklist and guidelines that enhance the quality of reporting and the credibility of systematic reviews. The findings reveal that AI applications in education primarily cluster into four categories: emerging technologies, intelligent evaluation and management, personalized tutoring and adaptive learning, and profiling and prediction. The investigation of research topics indicates a strong emphasis on system and application design, followed by studies on the adoption and acceptance, impacts, and challenges of AIED. Analysis of research designs shows that while descriptive and survey methods dominate the field, there is limited use of experimental and mixed-methods approaches.
The theoretical foundations of AIED research demonstrate its multidisciplinary nature, drawing from fields such as education, psychology, mathematics, and sociology, though many studies lack robust theoretical grounding. Research contexts predominantly focus on higher education and K-12 settings, with minimal attention to preschool education. This review contributes to the understanding of AIED's current landscape and suggests future research directions, including the need to incorporate emerging AI technologies, strengthen research in underrepresented educational contexts, enhance methodological rigour, deepen theoretical contributions, and foster interdisciplinary collaboration. These insights are particularly valuable for educational stakeholders navigating the transformation toward Education 4.0 and beyond.
The integration of generative artificial intelligence (GAI) in education has been met with both excitement and concern. According to a 2023 survey by the World Economic Forum, over 60% of educators in advanced economies are now using some form of artificial intelligence (AI) in their classrooms, a significant increase from just 20% five years ago (World Economic Forum, 2023). The rapid adoption of AI technologies in education highlights their potential to revolutionize the learning experience. AI tools, such as intelligent tutoring systems and adaptive learning platforms, offer personalized educational experiences that can meet the unique needs of each student. However, with this potential come significant ethical concerns, particularly regarding academic integrity. The International Center for Academic Integrity reported that 58% of students admitted to using AI tools to complete assignments dishonestly, highlighting the urgency of addressing these ethical concerns (International Center for Academic Integrity, 2023). This statistic underscores a critical issue: while AI has the potential to enhance education, its misuse can undermine the very foundations of academic integrity. With tools that can generate text, solve problems, and even assist with research, students may find it easier to engage in plagiarism or other forms of cheating. This shift challenges traditional educational values, as it blurs the line between original work and AI-generated content (Mohammadkarimi, 2023). Curriculum designers are thus faced with the challenge of integrating AI in ways that uphold ethical standards and promote genuine learning.
This requires balancing the innovative potential of AI tools with a commitment to academic integrity, ensuring that technology enhances rather than undermines the educational experience. To navigate this landscape responsibly, it is essential to revisit established ethical frameworks and educational theories. The ethical principles guiding our use of technology in education have remained consistent, even as the tools themselves have evolved. By referencing seminal works and foundational theories, we can demonstrate that the core values of honesty, fairness, and responsibility are timeless. For example, deontological ethics, as articulated by Immanuel Kant, emphasizes the importance of adhering to moral principles such as honesty and integrity, rather than the consequences of actions (Kant, 1785). In the context of AI in education, deontological ethics would require that the use of AI respects fundamental moral principles. For example, it would be crucial to ensure that AI systems are designed and implemented in ways that uphold students' rights to privacy, ensure fairness, and avoid deception. Adhering to these principles would be seen as morally obligatory, regardless of the potential benefits or drawbacks of AI in educational settings. Similarly, consequentialism, as articulated by John Stuart Mill, evaluates actions based on their outcomes. Mill's version of consequentialism, known as utilitarianism, argues that the best actions are those that promote happiness or well-being. In the context of AI in education, applying Mill's consequentialist principles would involve assessing how the use of AI impacts educational outcomes.
If AI can be used to enhance learning, provide personalized educational experiences, or address inequalities and inequities in education, then its use would be considered morally justified according to Mill's framework, as it promotes overall well-being and positive outcomes for students. These ethical frameworks provide a robust foundation for the responsible use of GAI in modern educational settings.

Moreover, educational theories such as constructivist learning and Self-Determination Theory (SDT) offer valuable insights into how AI can be used to enhance learning. Constructivist learning theory posits that students construct knowledge through active engagement with content, a process that can be greatly facilitated by AI tools. This approach emphasizes the importance of hands-on activities and interactions, which help students form meaningful connections with new information (Hein, 1991). AI tools can significantly enhance this constructivist approach by providing personalized and interactive learning experiences. SDT, on the other hand, emphasizes the importance of autonomy, competence, and relatedness in fostering intrinsic motivation among students (Deci & Ryan, 2000). Integrating AI tools that align with the principles of SDT can help create a more engaging and supportive learning environment.

This discussion will explore how GAI can be integrated into education in ways that support rather than erode academic integrity. By examining the ethical frameworks of deontological ethics and consequentialism, along with educational theories such as constructivist learning and SDT, we will argue that AI, when used responsibly, can enhance digital literacy, foster intrinsic motivation, and support genuine knowledge construction.
The principles discussed in older foundational papers remain relevant, showing that ethical guidelines established decades ago still hold value in today's technologically advanced classrooms (Floridi & Taddeo, 2016; Ryan & Deci, 2017). The goal is to illustrate that the ethical use of GAI in education not only preserves but can also enhance academic integrity. Through responsible integration and ethical education, AI can empower students to become motivated, ethical, and engaged learners, well prepared for the complexities of the modern world. By grounding our arguments in established ethical and educational theories, we can provide a comprehensive framework for understanding the potential benefits and challenges of AI in education.

The integration of GAI in education raises significant concerns about its potential to disrupt traditional assessment methods. The ability of GAI to generate essays, problem solutions, and even creative works has sparked fears of plagiarism and academic dishonesty, challenging conventional forms of evaluation such as take-home exams, essays, and homework assignments. These concerns are valid: the ease with which students can submit AI-generated content without truly engaging in the learning process threatens to undermine academic integrity (Popenici and Kerr, 2017). However, the disruptive nature of GAI also presents an opportunity to reimagine assessment practices in ways that prioritize authentic learning and deeper understanding. The rise of AI necessitates a shift away from traditional assessments focused on rote memorization and information recall, toward more authentic assessment methods that require students to demonstrate higher-order thinking skills.
For example, project-based tasks, real-world problem-solving activities, oral presentations, and open-ended assignments that demand personal reflection and original insights can reduce the likelihood of misuse and encourage students to engage meaningfully with course material (Borenstein and Howard, 2020). Furthermore, GAI can play a constructive role in formative assessment by providing personalized feedback throughout the learning process. AI-driven tools can help students revise drafts, practice skills, and receive immediate guidance on areas needing improvement, fostering a deeper connection to the material. This approach transforms GAI from a potential threat into a valuable asset that supports continuous learning and skill development. Additionally, incorporating self-assessment and metacognitive practices, in which students reflect on their progress and learning strategies, can ensure that AI augments rather than diminishes students' active participation in their education.

It is also essential to address the ethical considerations involved in using AI for assessment. Concerns such as data privacy, algorithmic bias, and the fairness of AI-generated evaluations must be taken seriously (Borenstein and Howard, 2020). Developing clear institutional policies that set boundaries on acceptable AI use in assessments can help maintain fairness and transparency. These policies should include guidelines for combining AI insights with human judgment to ensure that assessments reflect not only the outputs of AI but also the educator's understanding of the student's abilities and efforts. By embracing these strategies, educators and institutions can harness the potential of GAI to enhance assessments while maintaining academic integrity.
This balanced approach allows for the responsible integration of AI in education, ensuring that it supports meaningful learning experiences and prepares students to navigate an AI-driven world with integrity.

Constructivist learning theory posits that learners construct knowledge through experiences and reflections, actively engaging with content to build understanding. GAI, with its advanced capabilities, aligns well with this theory, offering tools that promote exploration, interaction, and personalized learning paths. Contrary to the belief that AI erodes academic integrity, some scholars argue that AI, when used thoughtfully, has the potential to enhance educational experiences by providing personalized learning opportunities and supporting students' individual learning needs (Weller, 2020). While Weller does not claim that AI inherently fosters critical thinking or deeper understanding, his discussion highlights the potential of AI in educational settings, suggesting that it could complement traditional teaching methods to improve learning outcomes.

GAI tools, such as intelligent tutoring systems and adaptive learning platforms, provide students with tailored educational experiences. These systems analyze individual learning patterns and adapt content to meet specific needs, ensuring that students engage with material at an appropriate level of difficulty (Woolf, 2010). For instance, an AI-powered math tutor can identify a student's weaknesses in algebra and offer targeted exercises to address these gaps. This personalized approach not only supports knowledge construction but also encourages students to take ownership of their learning journey (Shute & Zapata-Rivera, 2012).

In a classroom setting, imagine a high school history class studying the Industrial Revolution. The educator integrates a GAI tool that generates interactive timelines and simulations based on historical data.
Students can manipulate variables within these simulations to observe the effects on industrial growth, labor conditions, and economic development. Through this exploration, they construct a deeper understanding of the era's complexities. Instead of passively receiving information, students actively engage with content, reflecting on the consequences of different actions and decisions (Kumar et al., 2024).

Another example is in language arts, where a GAI tool assists students in creative writing. By analyzing a student's writing style and providing real-time feedback on grammar, tone, and narrative structure, the AI helps students refine their skills (Song & Song, 2023). Additionally, it can suggest plot developments or character traits, sparking students' creativity and encouraging them to think critically about their stories. This interactive process supports constructivist principles by allowing students to experiment, reflect, and build upon their ideas (Bereiter & Scardamalia, 1989).

Critics argue that AI tools may encourage academic dishonesty by making it easier for students to produce work with minimal effort. However, this perspective overlooks the potential for AI to promote genuine learning when used appropriately. Rather than replacing student effort, AI can enhance the learning process by offering personalized support, immediate feedback, and adaptive content, which fosters deeper engagement and better learning outcomes (Nazaretsky et al., 2022). For instance, in a science class, AI-powered lab assistants can guide students through virtual experiments, providing explanations and prompting them to hypothesize, analyze data, and draw conclusions. Such interactions encourage active learning and promote a deeper understanding of scientific concepts and processes, rather than merely supplying answers (de Jong & van Joolingen, 1998).
Additionally, as Al Darayseh (2023) notes, AI tools designed with input from educators help align the technology with pedagogical objectives, embedding ethical considerations to reduce the risk of academic dishonesty. Furthermore, it is important to acknowledge that AI is transforming science education and pedagogy, and the ethical implementation of these tools must reflect this shift to support genuine learning experiences while safeguarding academic integrity (Holstein et al., 2018; Erduran, 2023).

Moreover, GAI can facilitate collaborative learning, another key aspect of constructivist theory. In a project-based learning environment, students can use AI tools to collaboratively develop presentations or reports. AI can assist by organizing information, suggesting relevant sources, and providing feedback on the clarity and coherence of their work (Kreijns et al., 2003). This collaborative process encourages students to engage in dialogue, share perspectives, and build knowledge collectively.

To further illustrate, consider a classroom where students are tasked with developing a business plan. An AI tool can generate market analysis reports, financial projections, and strategic recommendations based on input from the students. As they interact with the AI and with each other, they learn to critically evaluate information, make informed decisions, and adapt their plans. This dynamic, interactive process is at the heart of constructivist learning, fostering not only knowledge construction but also critical thinking and problem-solving skills (Jonassen, 1995).

At present, multiple AI-powered tools already in wide use among students have significant potential to enhance a constructivist learning experience. One example is ChatGPT.
According to Rasul et al. (2023), ChatGPT supports the constructivist principle that learners construct their own understanding of knowledge by enabling students to explore and experiment with ideas, ask questions, and receive immediate feedback. This interactive engagement helps students connect deeply with the content, refine their comprehension, and apply their learning in meaningful ways, ultimately enriching their educational experience.

Similarly, according to Mota-Valtierra et al. (2019), a constructivist approach is a strong fit for teaching AI topics because it emphasizes building on prior knowledge and encouraging active learning. Their article outlines an innovative approach to teaching artificial intelligence through a constructivist methodology, focusing specifically on multilayer perceptrons (MLPs). After implementing the course in different majors, their statistical analysis underscores the success of the proposed methodology in enhancing student learning and providing a more consistent educational experience: the increase in average grades and the reduction in standard deviation highlight the approach's effectiveness in improving both individual performance and overall learning outcomes.

In conclusion, GAI aligns with constructivist learning theory by providing tools that facilitate exploration, interaction, and personalized learning. Rather than promoting dishonesty, AI can enhance academic integrity by supporting genuine learning experiences. Through personalized feedback, interactive simulations, and collaborative projects, AI empowers students to take an active role in their education, constructing knowledge in meaningful and engaging ways. By embracing these technologies, educators can create enriching learning environments that prepare students for the complexities of the modern world (Papert & Harel, 1991).

The rise of GAI in education has sparked discussions on its ethical implications and the importance of fostering digital literacy.
By examining ethical frameworks such as deontological ethics and consequentialism, we can argue that responsible use of GAI in the classroom can enhance students' digital literacy and prepare them to navigate the digital world ethically and effectively (Floridi & Taddeo, 2016; Stahl, 2012).

Deontological ethics, which focuses on adherence to moral rules or duties, provides a foundation for integrating AI responsibly in education. This framework emphasizes principles such as honesty, fairness, and respect for others (Kant, 1785). In the context of GAI, this means ensuring that AI tools are used to support and enhance learning rather than to replace students' efforts or promote dishonesty. For instance, in a high school history class studying the Industrial Revolution, an AI tool can generate interactive timelines and simulations based on historical data. Educators can emphasize the importance of using these tools ethically, encouraging students to engage with the material thoughtfully and critically. By adhering to principles of honesty and integrity, students learn to use AI as a supplementary resource that enhances their understanding rather than as a shortcut to completing assignments (Johnson, 2020).

Consequentialism, as articulated by John Stuart Mill in Utilitarianism (1861), evaluates the morality of actions based on their outcomes. While Mill did not discuss AI, the principles of this framework can still be applied to contemporary debates about its use in education. By aiming to maximize positive outcomes, such as enhanced learning, critical thinking, and digital literacy, educators and curriculum designers can advocate for the responsible integration of AI.
Emphasizing these benefits underscores how AI tools can contribute to better educational results and foster more informed digital citizens. In a language arts classroom, for example, a GAI tool can assist students in creative writing by providing real-time feedback on grammar, tone, and narrative structure. Educators can guide students to use this feedback to improve their writing skills, fostering a deeper understanding of language and storytelling. The positive outcomes of enhanced writing abilities and critical engagement with AI tools illustrate the ethical benefits of responsible AI use (Borenstein & Howard, 2020).

To further promote digital literacy, it is crucial to educate students and educators on the ethical use of AI tools. This involves teaching them to understand how AI works, the potential biases and limitations of AI systems, and the importance of using AI responsibly (Brey, 2012). By fostering a culture of digital literacy, educators empower students to navigate the digital world with a critical and ethical mindset. Consider a science class where an AI-powered lab assistant guides students through virtual experiments. Educators can use this opportunity to discuss the ethical considerations of AI in scientific research, such as data privacy, bias, and the importance of accurate data interpretation. By engaging in these discussions, students develop a nuanced understanding of the role of AI in science and the ethical responsibilities of using AI in research (Floridi, 2013).

Moreover, collaborative projects can further enhance digital literacy and ethical awareness. In a project-based learning environment, students can use AI tools to develop presentations or reports collaboratively. Educators can emphasize the importance of ethical collaboration, such as giving credit to sources, avoiding plagiarism, and ensuring that all team members contribute fairly.
This approach not only enhances students' digital literacy but also instills ethical values that are essential in the digital age (Ess, 2015). For instance, in a business class where students are tasked with developing a business plan, an AI tool can generate market analysis reports and financial projections. Educators can guide students to critically evaluate the AI-generated data, discuss the ethical implications of using AI in business decision-making, and ensure transparency and accountability in their work. This process helps students understand the ethical dimensions of AI and develop skills to use AI responsibly in their future careers (Mittelstadt et al., 2016).

The ethical frameworks of deontological ethics and consequentialism provide valuable insights into the responsible use of GAI in education. By emphasizing principles such as honesty, fairness, and positive outcomes, educators can foster digital literacy and ethical awareness among students. Teaching students to understand and navigate the ethical implications of AI tools prepares them to contribute positively to the digital world, ensuring that they use AI to enhance learning and uphold ethical standards. Through responsible AI integration and ethical education, we can create a generation of digitally literate and ethically aware individuals ready to thrive in a technologically advanced society (Moor, 1985).

The integration of AI in education holds great promise for enhancing learning experiences but raises profound ethical questions. The need for careful ethical reflection is underscored in The Ethics of Artificial Intelligence in Education: Practices, Challenges, and Debates, which argues that educators, researchers, and stakeholders must engage in ongoing dialogue to navigate the complexities of AI in educational contexts (Holmes and Porayska-Pomsta, 2022).
Smuha (2022) points out that for AI in education to be ethically responsible, it must adhere to key principles such as fairness, accountability, and transparency. These principles are vital in mitigating biases and preventing AI from perpetuating or amplifying existing educational inequalities. Furthermore, the concept of Trustworthy AI, as discussed by Smuha, is crucial in ensuring that AI systems foster inclusivity and do not marginalize vulnerable student populations (Smuha, 2022). Similarly, Brossi et al. (2022) raise concerns about the uncertain impact of AI on learners' cognitive development and the risk of disempowering educators through over-automation of pedagogical processes, pointing to the need for ethical frameworks that avoid automating ineffective or inequitable practices. Williamson (2024) expands on this by highlighting the socio-political context of AI in education, warning against the assumption that technological innovations are inherently beneficial. Instead, he emphasizes that AI must be viewed as a socially embedded tool that could exacerbate educational inequities if not critically examined. The potential for AI to affect power dynamics, access, and social equity necessitates that educators and policymakers rigorously reflect on its broader implications, including how AI systems might reinforce or challenge existing educational structures.

Mouta et al. (2023) offer a practical step forward in addressing these concerns through their participatory futures approach, which is designed to help educators ethically integrate AI into their teaching environments. Using the Delphi method to gather diverse perspectives, their study presents hypothetical future scenarios that help educators and stakeholders reflect on the broader implications of AI in education.
This approach ensures that the benefits of AI are balanced with ethical considerations related to privacy, bias, and the societal impacts of AI on education, promoting a thoughtful and inclusive implementation of AI technologies.

Further supporting this ethical stance, the European Commission's Ethics Guidelines for Trustworthy AI (2019) lays out seven key requirements for Trustworthy AI, including human agency, privacy, transparency, and fairness. These guidelines align closely with the need to ensure that AI systems in education promote fairness and inclusivity rather than exacerbate inequities in educational access and outcomes. The guidelines also emphasize the importance of continuous monitoring and accountability to ensure that AI systems remain aligned with these ethical principles. By stressing the importance of transparency, diversity, and non-discrimination, the guidelines reinforce the participatory frameworks put forth by Mouta et al. (2023), which advocate for an inclusive, ethical approach to AI integration in education.

Further reinforcing these ethical considerations, Floridi et al. (2018), in their "AI4People" framework, emphasize the importance of a principled approach to AI that integrates ethical foundations such as beneficence, non-maleficence, autonomy, justice, and explicability. These principles align with the need for AI in education to promote well-being and inclusivity while avoiding harm, respecting user autonomy, ensuring fair access to AI benefits, and fostering transparency. The framework also highlights that the potential risks of AI include the erosion of human agency and privacy, making it essential for educational AI systems to be designed in ways that support rather than undermine student autonomy and self-determination.
By embedding these principles into the development and deployment of AI, educators and policymakers can more effectively navigate the ethical challenges posed by AI in educational contexts, ultimately fostering a "Good AI Society" that supports human flourishing.

SDT posits that individuals are most motivated when their needs for autonomy, competence, and relatedness are met. GAI, with its capability to provide personalized feedback and tailored learning resources, can significantly support SDT by fostering intrinsic motivation among students. By empowering students to take control of their learning, AI can enhance engagement and academic integrity (Deci & Ryan, 2000; Ryan & Deci, 2017).

GAI can enhance students' sense of autonomy by offering them more control over their learning process. In a high school history class studying the Industrial Revolution, an AI tool can create interactive timelines and simulations. Students can explore these tools at their own pace, choosing which aspects of the Industrial Revolution to delve into more deeply. This self-directed exploration encourages students to take ownership of their learning, fostering a sense of autonomy (Reeve, 2006). For example, a student interested in labor conditions during the Industrial Revolution might use the AI tool to simulate different labor policies and observe their impacts. This personalized exploration helps students develop a deeper understanding of historical complexities, driven by their own curiosity and interests (Niemiec & Ryan, 2009).

GAI tools can also support the need for competence by providing personalized feedback that helps students improve their skills and knowledge. In a language arts classroom, an AI-driven writing assistant can analyze a student's work and provide targeted feedback on grammar, tone, and narrative structure.
This real-time, individualized feedback helps students understand their strengths and areas for improvement, fostering a sense of competence (Black & Deci, 2000). Imagine a student writing a short story: the AI tool can suggest improvements in plot development and character interactions, guiding the student to refine their narrative. As students see their writing improve through this iterative process, they gain confidence in their abilities, which enhances their intrinsic motivation to engage with the subject matter (Vansteenkiste et al., 2004).

GAI can also facilitate relatedness by enabling collaborative learning and providing opportunities for meaningful interactions. In a project-based learning environment, AI tools can help students work together on presentations or reports. For instance, in a science class, an AI-powered lab assistant can guide groups of students through virtual experiments, encouraging collaboration and discussion (Ryan & Powelson, 1991). Consider a group of students using AI to simulate a chemical reaction: the AI provides each group member with specific tasks and prompts them to share their findings and discuss results. This collaborative process fosters a sense of relatedness, as students work together to achieve common goals and learn from each other (Jang et al., 2016).

By fostering intrinsic motivation through autonomy, competence, and relatedness, GAI can also promote academic integrity. When students are genuinely interested and engaged in their learning, they are less likely to resort to dishonest practices. Personalized learning experiences make education more relevant and enjoyable, reducing the temptation to cheat (Deci et al., 1991). In a history class, for example, students using AI to explore the Industrial Revolution are likely to develop a genuine interest in the subject. This intrinsic motivation drives them to produce original work and engage deeply with the material.
Similarly, in a language arts class, students motivated by the desire to improve their writing skills are more likely to take pride in their work and avoid plagiarism (Vansteenkiste & Ryan, 2013).

In a business class where students develop business plans using AI-generated market analysis reports and financial projections, educators can emphasize the importance of ethical decision-making and transparency. The AI tool provides personalized insights, allowing students to explore various business strategies and their consequences. This hands-on learning approach fosters intrinsic motivation by making the subject matter relevant and engaging (Ryan & Deci, 2000). For instance, a student interested in starting a sustainable business can use AI to analyze the environmental impact of different business models. This personalized exploration helps the student develop a deeper understanding of sustainability in business, driven by their own interests and values (Deci & Ryan, 2008).

GAI, by supporting the principles of SDT, can foster intrinsic motivation among students. Through personalized feedback and tailored learning resources, AI empowers students to take control of their learning, enhancing their sense of autonomy, competence, and relatedness. This intrinsic motivation not only increases engagement but also promotes academic integrity. By integrating AI tools in educational settings, educators can create enriching learning environments that prepare students for the complexities of the modern world, ensuring that they are motivated, ethical, and engaged learners (Ryan & Deci, 2019).

The integration of GAI in education has sparked significant debate regarding its impact on academic integrity. Critics argue that AI tools facilitate dishonesty by providing easy shortcuts for students to complete assignments. However, a closer examination of established educational theories and ethical frameworks reveals a different perspective.
When used responsibly, GAI can foster intrinsic motivation, enhance digital literacy, and support constructivist learning principles, thereby promoting academic integrity rather than eroding it.

The integration of GAI in various educational fields, including computer science, engineering, medical education, and communication, is revolutionizing teaching and learning. In computer science education, AI technologies such as GitHub Copilot offer significant benefits in fostering creativity, enhancing learning efficiency, and supporting advanced projects. In engineering education, GAI leverages advanced chatbots and text-generation models to enhance learning and problem-solving capabilities, while cloud-based frameworks and social robots provide scalable resources, interactive learning environments, and personalized support. GAI also has the potential to revolutionize medical education by enhancing clinical training, improving diagnostic accuracy, supporting personalized medicine, and advancing public health education. Finally, GAI models hold great potential to enhance communication education across journalism, media, and healthcare: by supporting content generation, data analysis, creative development, and patient communication, GAI tools can provide valuable learning experiences and improve productivity (Bahroun et al., 2023).

GAI holds immense potential to transform education by enhancing teaching, learning, and educational processes. However, to fully realize these benefits, it is essential to address issues of responsible and ethical usage, potential biases, and academic integrity.
By developing comprehensive guidelines, promoting transparency, mitigating bias, and fostering critical thinking skills, educators and institutions can ensure that AI technologies contribute positively to a technologically advanced, inclusive, and effective educational landscape (Bahroun et al., 2023).

SDT posits that students are most motivated when their needs for autonomy, competence, and relatedness are met. GAI can significantly enhance these aspects, fostering intrinsic motivation among students. When students are intrinsically motivated, they are more likely to engage deeply with the material and maintain academic integrity. AI tools enhance autonomy by allowing students to control their learning process. In a history class, for instance, students can use AI-generated interactive timelines and simulations to explore different aspects of the Industrial Revolution at their own pace. This self-directed exploration encourages students to take ownership of their learning journey, which promotes a genuine interest in the subject matter. Such autonomy reduces the likelihood of dishonest behavior, as students are motivated by curiosity and a desire to learn.

Moreover, AI tools support competence by providing personalized feedback that helps students improve their skills. In a language arts classroom, an AI-driven writing assistant can analyze a student's work and offer specific suggestions for improvement. This real-time feedback not only enhances the student's writing skills but also builds their confidence. When students see tangible improvements in their abilities, their intrinsic motivation to engage with the subject matter increases. This motivation fosters academic integrity, as students take pride in their work and are less inclined to plagiarize or cheat.

GAI also facilitates relatedness by enabling collaborative learning. In project-based learning environments, AI tools can help students work together more effectively.
For example, in a science class, an AI-powered lab assistant can guide groups through virtual experiments, encouraging discussion and collaboration. This collaborative process fosters a sense of community and shared purpose among students, which supports their intrinsic motivation to learn and succeed together. When students feel connected to their peers and their learning objectives, they are more likely to adhere to ethical standards and maintain academic integrity.

Digital literacy is essential in today's technology-driven world, and GAI can play a crucial role in fostering this skill. Ethical frameworks such as deontological ethics and consequentialism provide valuable insights into the responsible use of AI in education, emphasizing the importance of honesty, fairness, and positive outcomes. Deontological ethics, which focuses on adherence to moral principles, underscores the need to use AI tools responsibly. Educators can teach students to use AI ethically by emphasizing principles such as honesty and integrity. For instance, when using AI-generated simulations in a history class, educators can guide students to engage thoughtfully with the material, ensuring that their use of AI supports genuine learning rather than shortcuts. By instilling these ethical values, educators help students understand the importance of maintaining academic integrity.

Consequentialism, which evaluates the morality of actions based on their outcomes, further supports the responsible use of AI in education. The ethical use of AI should aim to produce positive educational outcomes, such as enhanced learning, critical thinking, and digital literacy. In a language arts classroom, an AI writing assistant can provide constructive feedback that helps students refine their writing skills. This positive outcome not only improves their competence but also instills a sense of responsibility in using AI tools ethically.
When students see the benefits of using AI to enhance their skills, they are more likely to use these tools responsibly, maintaining academic integrity.

Moreover, educating students on the ethical use of AI tools is crucial for fostering digital literacy. In a science class, an AI-powered lab assistant can guide students through virtual experiments, prompting discussions on ethical considerations such as data privacy and accuracy. By engaging in these discussions, students develop a nuanced understanding of the role of AI in scientific research and the ethical responsibilities that come with it. This awareness empowers students to navigate the digital world ethically and effectively, reducing the likelihood of dishonest behavior.

Constructivist learning theory emphasizes that students construct knowledge through experiences and reflections. GAI aligns well with this theory, offering tools that promote exploration, interaction, and personalized learning paths. By supporting constructivist principles, AI enhances academic integrity by encouraging deeper understanding and critical thinking.

In a history class studying the Industrial Revolution, an AI tool that generates interactive timelines and simulations allows students to manipulate variables and observe outcomes. This hands-on exploration helps students construct a deeper understanding of historical complexities. Rather than passively receiving information, students actively engage with the content, reflecting on the consequences of different actions. This active engagement fosters a genuine interest in learning, reducing the temptation to cheat.

Similarly, in a language arts classroom, a GAI tool that provides real-time feedback on writing helps students improve their narrative skills. By experimenting with different plot developments and character traits, students engage in a creative process that aligns with constructivist principles.
This interactive learning experience encourages students to think critically about their stories, fostering a deeper understanding of language and storytelling. When students are genuinely invested in their learning process, they are less likely to engage in dishonest practices.

Collaborative learning, another key aspect of constructivist theory, is also enhanced by GAI. In project-based learning environments, AI tools can facilitate collaboration by organizing information, suggesting relevant sources, and providing feedback on the clarity of students' work. For example, in a business class, an AI tool can help students develop a business plan by generating market analysis reports and financial projections. This collaborative process encourages students to engage in dialogue, share perspectives, and build knowledge collectively. When students work together to achieve common goals, they are more likely to adhere to ethical standards and maintain academic integrity.

GAI, when integrated responsibly in education, does not erode academic integrity. Instead, it fosters intrinsic motivation, enhances digital literacy, and supports constructivist learning principles. By promoting autonomy, competence, and relatedness, AI tools help students develop a genuine interest in their subjects, reducing the likelihood of dishonest behavior. Ethical education and personalized feedback further empower students to navigate the digital world responsibly, ensuring that they use AI tools to enhance their learning rather than as shortcuts. Through interactive and collaborative learning experiences, GAI encourages deeper understanding and critical thinking, ultimately promoting academic integrity in today's educational landscape.

To provide practical guidance for using AI in education, we recommend focusing on integrating AI in ways that support established educational goals while adhering to ethical guidelines.
Transparency is crucial in this process, as educators must actively involve students in understanding how AI tools are being used, what their limitations are, and why ethical use is important. This includes making the need to understand and actively utilize AI an explicit part of program objectives, course objectives, and learning outcomes, ensuring that its integration aligns with educational goals such as developing digital literacy and critical thinking skills. By discussing potential biases, data privacy concerns, and the limitations of AI-generated content, educators foster a culture of critical engagement in which students learn to use AI responsibly and ethically rather than relying on it blindly. This proactive approach equips students with the discernment and integrity needed to navigate an AI-driven world.

Professional development for educators is crucial for the effective integration of AI in education. Governments and administrative bodies must apply sustained, concerted pressure to make this a priority, and alongside that pressure they need to invest sufficiently in resources and provide support and encouragement to ensure that such training is effective and widespread. Training programs should equip educators with practical skills for using AI tools, while also covering ethical considerations like data privacy, algorithmic bias, and the limitations of AI-generated feedback. By mandating and funding professional development, policymakers and administrators can ensure that educators are well prepared to navigate the potential risks and benefits of AI. This comprehensive support empowers educators to guide students in using AI tools responsibly, fostering genuine learning and upholding academic integrity, rather than allowing misuse or over-reliance on technology to take root.

Finally, an iterative approach to integrating AI is crucial, and this must be encouraged at the policymaking level as well.
Educators should continuously assess the impacts of AI on learning outcomes and be prepared to adjust their strategies accordingly. This involves collecting feedback from students, reviewing the effectiveness of AI tools, and making necessary changes to ensure AI contributes to meaningful educational experiences. Policymakers can support this process by implementing guidelines and providing resources that promote regular evaluation and adaptation of AI integration practices in schools. By emphasizing these practical steps at both the classroom and policy levels, educators can incorporate AI in ways that not only enhance learning but also foster responsible, ethical engagement with technology.
This study examines the ethical and philosophical implications of integrating artificial intelligence (AI) in education. In today's digital era, AI holds promise for enhancing educational quality and accessibility through personalized learning, data analysis, and task automation. However, it raises ethical concerns, such as data privacy, algorithmic bias, and reduced human interaction. To explore these issues, we conducted a literature review of academic sources, including journals and research reports. Our findings suggest that while AI can improve learning efficiency and personalization, significant concerns remain regarding student data privacy and diminished teacher-student interaction. Algorithmic bias presents a risk of exacerbating educational disparities. These results align with constructivist theory, which supports adaptive learning, yet conflict with humanist theory, which values human interaction. This research underscores the importance of addressing ethical and philosophical considerations in AI's educational integration. It suggests the need for policies that are both ethical and inclusive and calls for stricter regulations to tackle privacy and bias issues. Additionally, training educators to use AI effectively and ethically is essential. This study contributes to the discourse by highlighting these critical aspects and offering a foundation for developing responsible AI applications in education.
This study investigates the acceptability of different artificial intelligence (AI) applications in education from a multi-stakeholder perspective, including students, teachers, and parents. Acknowledging the transformative potential of AI in education, it addresses concerns related to data privacy, AI agency, transparency, explainability and the ethical deployment of AI. Through a vignette methodology, participants were presented with four scenarios where AI's agency, transparency, explainability, and privacy were manipulated. After each scenario, participants completed a survey that captured their perceptions of AI's global utility, individual usefulness, justice, confidence, risk, and intention to use each scenario's AI if available. The data collection comprising a final sample of 1198 multi-stakeholder participants was distributed through a partner institution and social media campaigns and focused on individual responses to four AI use cases. A mediation analysis of the data indicated that acceptance and trust in AI varies significantly across stakeholder groups. We found that the key mediators between high and low levels of AI's agency, transparency, and explainability, as well as the intention to use the different educational AI, included perceived global utility, justice, and confidence. The study highlights that the acceptance of AI in education is a nuanced and multifaceted issue that requires careful consideration of specific AI applications and their characteristics, in addition to the diverse stakeholders' perceptions.
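The product-of-coefficients logic behind such a mediation analysis (a manipulated AI characteristic affecting intention to use through a perceived mediator such as confidence) can be illustrated with a minimal sketch. The data below are simulated purely for illustration; the variable names, effect sizes, and sample have no relation to the study's actual design or results:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.integers(0, 2, n).astype(float)      # simulated condition: low vs high AI transparency
m = 0.6 * x + rng.normal(0, 1, n)            # simulated mediator, e.g. perceived confidence
y = 0.5 * m + 0.1 * x + rng.normal(0, 1, n)  # simulated outcome, e.g. intention to use

def ols_slope(pred, resp):
    """Least-squares slope of resp on pred (with an intercept term)."""
    X = np.column_stack([np.ones_like(pred), pred])
    return np.linalg.lstsq(X, resp, rcond=None)[0][1]

a = ols_slope(x, m)                          # path X -> M
X2 = np.column_stack([np.ones(n), x, m])
b = np.linalg.lstsq(X2, y, rcond=None)[0][2] # path M -> Y, controlling for X
indirect = a * b                             # product-of-coefficients estimate
print(round(indirect, 2))
```

A real analysis of this kind would add covariates, bootstrap confidence intervals for the indirect effect, and multiple mediators tested jointly; the sketch only shows why the indirect effect is the product of the two paths.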
ABSTRACT There are many ethical decisions in the practice of health research and care, and in the creation of policy and guidelines. We argue that those charged with making such decisions need a new genre of review. The new genre is an application of the systematic review, which was developed over decades to inform medical decision‐makers about what the totality of studies that investigate links between smoking and cancer, for example, implies about whether smoking causes cancer. We argue that there is a need for similarly inclusive and rigorous reviews of reason‐based bioethics, which uses reasoning to address ethical questions. After presenting a brief history of the systematic review, we reject the only existing model for writing a systematic review of reason‐based bioethics, which holds that such a review should address an ethical question. We argue that such a systematic review may mislead decision‐makers when a literature is incomplete, or when there are mutually incompatible but individually reasonable answers to the ethical question. Furthermore, such a review can be written without identifying all the reasons given when the ethical questions are discussed, their alleged implications for the ethical question, and the attitudes taken to the reasons. The reviews we propose address instead the empirical question of which reasons have been given when addressing a specified ethical question, and present such detailed information on the reasons. We argue that this information is likely to improve decision‐making, both directly and indirectly, and also the academic literature. We explain the limitations of our alternative model for systematic reviews.
There is a growing recognition of the value of synthesising qualitative research in the evidence base in order to facilitate effective and appropriate health care. In response to this, methods for undertaking these syntheses are currently being developed. Thematic analysis is a method that is often used to analyse data in primary qualitative research. This paper reports on the use of this type of analysis in systematic reviews to bring together and integrate the findings of multiple qualitative studies. We describe thematic synthesis, outline several steps for its conduct and illustrate the process and outcome of this approach using a completed review of health promotion research. Thematic synthesis has three stages: the coding of text 'line-by-line'; the development of 'descriptive themes'; and the generation of 'analytical themes'. While the development of descriptive themes remains 'close' to the primary studies, the analytical themes represent a stage of interpretation whereby the reviewers 'go beyond' the primary studies and generate new interpretive constructs, explanations or hypotheses. The use of computer software can facilitate this method of synthesis; detailed guidance is given on how this can be achieved. We used thematic synthesis to combine the studies of children's views and identified key themes to explore in the intervention studies. Most interventions were based in school and often combined learning about health benefits with 'hands-on' experience. The studies of children's views suggested that fruit and vegetables should be treated in different ways, and that messages should not focus on health warnings. Interventions that were in line with these suggestions tended to be more effective.
Thematic synthesis enabled us to stay 'close' to the results of the primary studies, synthesising them in a transparent way, and facilitating the explicit production of new concepts and hypotheses. We compare thematic synthesis to other methods for the synthesis of qualitative research, discussing issues of context and rigour. Thematic synthesis is presented as a tried and tested method that preserves an explicit and transparent link between conclusions and the text of primary studies; as such it preserves principles that have traditionally been important to systematic reviewing.
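As a purely illustrative sketch (the studies, codes, and theme names below are invented, not drawn from the review above), the mechanics of the first two stages of thematic synthesis, rolling line-by-line codes up into descriptive themes, can be expressed in a few lines of Python:

```python
from collections import Counter

# Stage 1: line-by-line codes assigned to text segments of primary studies
# (study identifiers and codes are invented placeholders)
coded_segments = [
    ("study_1", "taste matters"), ("study_1", "health warnings rejected"),
    ("study_2", "taste matters"), ("study_2", "fruit differs from vegetables"),
    ("study_3", "health warnings rejected"), ("study_3", "fruit differs from vegetables"),
]

# Stage 2: reviewers group related codes under descriptive themes
# (the grouping is a hypothetical judgement call, normally made by the review team)
theme_map = {
    "taste matters": "food preferences",
    "health warnings rejected": "views on health messages",
    "fruit differs from vegetables": "food preferences",
}

def descriptive_themes(segments, mapping):
    """Tally how many coded segments fall under each descriptive theme."""
    return Counter(mapping[code] for _, code in segments)

print(descriptive_themes(coded_segments, theme_map))
```

The third stage, generating analytical themes, is interpretive and cannot be mechanised this way; software such as the tools discussed in the paper supports the bookkeeping, not the interpretation.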
Modern standards for evidence-based decision making in clinical care and public health still rely solely on eminence-based input when it comes to normative ethical considerations. Manuals for clinical guideline development or health technology assessment (HTA) do not explain how to search, analyze, and synthesize relevant normative information in a systematic and transparent manner. In the scientific literature, however, systematic or semi-systematic reviews of ethics literature already exist, and scholarly debate on their opportunities and limitations has recently bloomed. A systematic review was performed of all existing systematic or semi-systematic reviews for normative ethics literature on medical topics. The study further assessed how these reviews report on their methods for search, selection, analysis, and synthesis of ethics literature. We identified 84 reviews published between 1997 and 2015 in 65 different journals and demonstrated an increasing publication rate for this type of review. While most reviews reported on different aspects of search and selection methods, reporting was much less explicit for aspects of analysis and synthesis methods: 31 % did not fulfill any criteria related to the reporting of analysis methods; for example, only 25 % of the reviews reported the ethical approach needed to analyze and synthesize normative information. While reviews of ethics literature are increasingly published, their reporting quality for analysis and synthesis of normative information should be improved. Guiding questions are: What was the applied ethical approach and technical procedure for identifying and extracting the relevant normative information units? What method and procedure was employed for synthesizing normative information?
Experts and stakeholders from bioethics, HTA, guideline development, health care professionals, and patient organizations should work together to further develop this area of evidence-based health care.
(Semi-)systematic approaches to finding, analysing, and synthesising ethics literature on medical topics are still in their infancy. However, our recent systematic review showed that the rate of publication of such (semi-)systematic reviews has increased in the last two decades. This is not only true for reviews of empirical ethics literature, but also for reviews of normative ethics literature. In the latter case, there is currently little in the way of standards and guidance available. Therefore, the methods and reporting strategies of such reviews vary greatly. The purpose of the follow-up study we present was to obtain deeper methodological insight into the ways reviews of normative literature are actually conducted and to analyse the methods used. Our search in the PubMed, PhilPapers, and Google Scholar databases led to the identification of 183 reviews of ethics literature published between 1997 and 2015, of which 84 were identified as reviews of normative and mixed literature. Qualitative content analysis was used to extract and synthesise descriptions of search, selection, quality appraisal, analysis, and synthesis methods. We further assessed quantitatively how often certain methods (e.g. search strategies, data analysis procedures) were used by the reviews. The overall reporting quality varies among the analysed reviews and was generally poor even for major criteria regarding the search and selection of literature. For example, only 24 (29%) used a PRISMA flowchart. Also, only 55 (66%) reviews mentioned the information unit they sought to extract, and 12 (14%) stated an ethical approach as the theoretical basis for the analysis.
Interpretable information on the synthesis method was given by 47 (60%); the most common methods applied were qualitative methods commonly used in social science research (83%). Reviews which fail to provide sufficient relevant information to readers have reduced methodological transparency regardless of actual methodological quality. In order to increase the internal validity (i.e. reproducibility) as well as the external validity (i.e. utility for the intended audience) of future reviews of normative literature, we suggest more accurate reporting regarding the goal of the review, the definition of the information unit, the ethical approach used, and technical aspects.
Abstract Artificial intelligence and data analysis (AIDA) are increasingly entering the field of education. Within this context, the subfield of learning analytics (LA) has, since its inception, had a strong emphasis upon ethics, with numerous checklists and frameworks proposed to ensure that student privacy is respected and potential harms avoided. Here, we draw attention to some of the assumptions that underlie previous work in ethics for LA, which we frame as three tensions. These assumptions have the potential of leading to both the overcautious underuse of AIDA as administrators seek to avoid risk, or the unbridled misuse of AIDA as practitioners fail to adhere to frameworks that provide them with little guidance upon the problems that they face in building LA for institutional adoption. We use three edge cases to draw attention to these tensions, highlighting places where existing ethical frameworks fail to inform those building LA solutions. We propose a pilot open database that lists edge cases faced by LA system builders as a method for guiding ethicists working in the field towards places where support is needed to inform their practice. This would provide a middle space where technical builders of systems could more deeply interface with those concerned with policy, law and ethics and so work towards building LA that encourages human flourishing across a lifetime of learning.

Practitioner Notes

What is already known about this topic: Applied ethics has a number of well-established theoretical groundings that we can use to frame the actions of ethical agents, including deontology, consequentialism and virtue ethics.
Learning analytics has developed a number of checklists, frameworks and evaluation methodologies for supporting trusted and ethical development, but these are often not adhered to by practitioners. Laws like the General Data Protection Regulation (GDPR) apply to fields like education, but the complexity of this field can make them difficult to apply.

What this paper adds: Evidence of tensions and gaps in existing ethical frameworks and checklists to support the ethical development and implementation of learning analytics. A set of three edge cases that demonstrate places where existing work on the ethics of AI in education has failed to provide guidance. A "practical ethics" conceptualisation that draws on virtue ethics to support practitioners in building learning analytics systems.

Implications for practice and/or policy: Those using AIDA in education should collect and share example edge cases to support development of practical ethics in the field. A multiplicity of ethical approaches are likely to be useful in understanding how to develop and implement learning analytics ethically in practical contexts.
Abstract According to various international reports, Artificial Intelligence in Education (AIEd) is one of the currently emerging fields in educational technology. Whilst it has been around for about 30 years, it is still unclear for educators how to make pedagogical advantage of it on a broader scale, and how it can actually impact meaningfully on teaching and learning in higher education. This paper seeks to provide an overview of research on AI applications in higher education through a systematic review. Out of 2656 initially identified publications for the period between 2007 and 2018, 146 articles were included for final synthesis, according to explicit inclusion and exclusion criteria. The descriptive results show that most of the disciplines involved in AIEd papers come from Computer Science and STEM, and that quantitative methods were the most frequently used in empirical studies. The synthesis of results presents four areas of AIEd applications in academic support services, and institutional and administrative services: 1. profiling and prediction, 2. assessment and evaluation, 3. adaptive systems and personalisation, and 4. intelligent tutoring systems. The conclusions reflect on the almost complete lack of critical reflection on challenges and risks of AIEd, the weak connection to theoretical pedagogical perspectives, and the need for further exploration of ethical and educational approaches in the application of AIEd in higher education.
This article examines benefits and risks of Artificial Intelligence (AI) in education in relation to fundamental human rights. The article is based on an EU scoping study [Berendt, B., A. Littlejohn, P. Kern, P. Mitros, X. Shacklock, and M. Blakemore. 2017. Big Data for Monitoring Educational Systems. Luxembourg: Publications Office of the European Union. https://publications.europa.eu/en/publication-detail/-/publication/94cb5fc8-473e-11e7-aea8-01aa75ed71a1/]. The study takes into account the potential for AI and ‘Big Data’ to provide more effective monitoring of the education system in real-time, but also considers the implications for fundamental human rights and freedoms of both teachers and learners. The analysis highlights a need to balance the benefits and risks as AI tools are developed, marketed and deployed. We conclude with a call to embed consideration of the benefits and risks of AI in education as technology tools into the development, marketing and deployment of these tools. There are questions around who – which body or organisation – should take responsibility for regulating AI in education, particularly since AI impacts not only data protection and privacy, but on fundamental rights in general. Given AI’s global impact, it should be regulated at a trans-national level, with a global organisation such as the UN taking on this role.
The introduction of artificial intelligence in education (AIED) is likely to have a profound impact on the lives of children and young people. This article explores the different types of artificial intelligence (AI) systems in common use in education, their social context and their relationship with the growth of commercial knowledge monopolies. This in turn is used to highlight data privacy rights issues for children and young people, as defined by the 2018 General Data Protection Regulations (GDPR). The article concludes that achieving a balance between fairness, individual pedagogic rights (Bernstein, 2000), data privacy rights and effective use of data is a difficult challenge, and one not easily supported by current regulation. The article proposes an alternative, more democratically aware basis for artificial intelligence use in schools.
We discuss the new challenges and directions facing the use of big data and artificial intelligence (AI) in education research, policy-making, and industry. In recent years, applications of big data and AI in education have made significant headways. This highlights a novel trend in leading-edge educational research. The convenience and embeddedness of data collection within educational technologies, paired with computational techniques have made the analyses of big data a reality. We are moving beyond proof-of-concept demonstrations and applications of techniques, and are beginning to see substantial adoption in many areas of education. The key research trends in the domains of big data and AI are associated with assessment, individualized learning, and precision education. Model-driven data analytics approaches will grow quickly to guide the development, interpretation, and validation of the algorithms. However, conclusions from educational analytics should, of course, be applied with caution. At the education policy level, the government should be devoted to supporting lifelong learning, offering teacher education programs, and protecting personal data. With regard to the education industry, reciprocal and mutually beneficial relationships should be developed in order to enhance academia-industry collaboration. Furthermore, it is important to make sure that technologies are guided by relevant theoretical frameworks and are empirically tested.
Lastly, in this paper we advocate an in-depth dialogue between supporters of “cold” technology and “warm” humanity so that it can lead to greater understanding among teachers and students about how technology, and specifically, the big data explosion and AI revolution can bring new opportunities (and challenges) that can be best leveraged for pedagogical practices and learning.
Artificial intelligence (AI) is impacting education in many different ways. From virtual assistants for personalized education, to student or teacher tracking systems, the potential benefits of AI for education often come with a discussion of its impact on privacy and well-being. At the same time, the social transformation brought about by AI requires reform of traditional education systems. This article discusses what a responsible, trustworthy vision for AI is and how this relates to and affects education.
There is a wide diversity of views on the potential for artificial intelligence (AI), ranging from overenthusiastic pronouncements about how it is imminently going to transform our lives to alarmist predictions about how it is going to cause everything from mass unemployment to the destruction of life as we know it. In this article, I look at the practicalities of AI in education and at the attendant ethical issues it raises. My key conclusion is that AI in the near- to medium-term future has the potential to enrich student learning and complement the work of (human) teachers without dispensing with them. In addition, AI should increasingly enable such traditional divides as ‘school versus home’ to be straddled with regard to learning. AI offers the hope of increasing personalization in education, but it is accompanied by risks of learning becoming less social. There is much that we can learn from previous introductions of new technologies in school to help maximize the likelihood that AI can help students both to flourish and to learn powerful knowledge. Looking further ahead, AI has the potential to be transformative in education, and it may be that such benefits will first be seen for students with special educational needs. This is to be welcomed.
Abstract While Artificial Intelligence in Education (AIED) research has at its core the desire to support student learning, experience from other AI domains suggests that such ethical intentions are not by themselves sufficient. There is also the need to consider explicitly issues such as fairness, accountability, transparency, bias, autonomy, agency, and inclusion. At a more general level, there is also a need to differentiate between doing ethical things and doing things ethically, to understand and to make pedagogical choices that are ethical, and to account for the ever-present possibility of unintended consequences. However, addressing these and related questions is far from trivial. As a first step towards addressing this critical gap, we invited 60 of the AIED community’s leading researchers to respond to a survey of questions about ethics and the application of AI in educational contexts. In this paper, we first introduce issues around the ethics of AI in education. Next, we summarise the contributions of the 17 respondents, and discuss the complex issues that they raised. Specific outcomes include the recognition that most AIED researchers are not trained to tackle the emerging ethical questions. A well-designed framework for engaging with ethics of AIED that combined a multidisciplinary approach and a set of robust guidelines seems vital in this context.
To accompany the special issue in Artificial Intelligence and Education, this article presents a short history of research in the field and summarises emerging challenges. We highlight key paradigm shifts that are becoming possible but also the need to pay attention to theory, implementation and pedagogy while adhering to ethical principles. We conclude by drawing attention to international co-operation structures in the field that can support the interdisciplinary perspectives and methods required to undertake research in the area.
With all the world's information literally at our fingertips and big data and artificial intelligence (AI) quickly gaining traction in education, trust in data quality and AI-powered decision-making is rapidly emerging as an essential issue in 21st-century education. The potential of artificial intelligence in education is immense, but not without risk, which can dampen stakeholder trust, potentially limiting its use. This chapter discusses the benefits and challenges of AI in education, current and emerging applications of AI at all educational levels, and best practice resources for increasing stakeholder trust in and responsible use of AI for managing the educational enterprise and preparing learners for the careers of today and tomorrow.
This exploratory review attempted to gather evidence from the literature by shedding light on the emerging phenomenon of conceptualising the impact of artificial intelligence in education. The review utilised the PRISMA framework to review the analysis and synthesis process encompassing the search, screening, coding, and data analysis strategy of 141 items included in the corpus. Key findings extracted from the review incorporate a taxonomy of artificial intelligence applications with associated teaching and learning practice and a framework for helping teachers to develop and self-reflect on the skills and capabilities envisioned for employing artificial intelligence in education. Implications for ethical use and a set of propositions for enacting teaching and learning using artificial intelligence are demarcated. The findings of this review contribute to developing a better understanding of how artificial intelligence may enhance teachers’ roles as catalysts in designing, visualising, and orchestrating AI-enabled teaching and learning, and this will, in turn, help to proliferate AI-systems that render computational representations based on meaningful data-driven inferences of the pedagogy, domain, and learner models.
Systematic reviews (SR) are very well elaborated and established for synthesizing statistical information, for example of clinical studies, for determining whether a clinical intervention is effective. SRs are also becoming more and more popular in bioethics. However, the established approach of conducting and reporting a SR cannot be transferred directly to corresponding work on ethically sensitive questions. This is because the object of investigation is not statistical information, but conceptual or normative information, e.g., ethical norms, principles, arguments or conclusions. There is some evidence that the quality of reporting of SRs on ethics literature could be improved in many regards. Although insufficient reporting is not a problem specific to bioethics, as poor study reports are also very common in SRs in e.g. medicine, authors of such SRs have the possibility to follow a reporting guideline – the well-established statement on Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). For SRs on ethics literature, the PRISMA Statement can only be partially adopted due to the different type of information searched and analyzed. Thus, an international group of authors with years of experience in conducting and reviewing SRs on ethics literature adapted PRISMA for its application in the field of bioethics (“PRISMA-Ethics”). As methods stemming from qualitative research are often used for analysis and synthesis in SRs on ethics literature, elements of the ENTREQ Guideline were also incorporated in PRISMA-Ethics. The resulting reporting guideline has 22 items and is intended to provide authors of SRs on ethics literature with all information necessary for an adequate reporting of their SRs.
It also allows readers, reviewers and journal editors to critically evaluate the presented results and the conclusions drawn. In this paper, we explain the rationale and give examples for each item. While we acknowledge heterogeneity in how to conduct a SR on ethics literature, we still maintain that there is a need for general reporting standards for improving the transparency, understandability and verifiability of such SRs. We invite authors of SRs on ethics literature to test PRISMA-Ethics and to evaluate its usefulness. We hope for a critical discussion of the guideline and welcome its broad implementation.
Abstract This chapter traces the ethical issues around applying artificial intelligence (AI) in education from the early days of artificial intelligence in education in the 1970s to the current state of this field, including the increasing sophistication of the system interfaces and the rise in data use and misuse. While in the early days most tools were largely learner-facing, now there are tools that are teacher-facing, supporting their management of the classroom, and administrator-facing, assisting in their management of cohorts of students. Learner-facing tools now take into account the affective and motivational aspects of learning as well as the cognitive. The rise of data collection and its associated analytic tools has enabled the development of dashboards for the dynamic management and reflective understanding of learners, teachers, and administrators. Ethical issues hardly figured in the early days of the field but now they loom large. This is because of the legitimate fears that learners’ and teachers’ autonomy will be compromised, that learner data will be collected and potentially misappropriated for other purposes, and that AI will introduce extra biases into educational decisions and increase existing inequity, and also because of the scary reputation that AI has in general.
A developing trend is the integration of artificial intelligence (AI) into education. As a result of the widespread adoption of educational AI, the ethical risks it poses have raised growing public concern. The use of educational AI poses three key ethical risks: the risk of educational data security; the risk of deconstructing the teacher-student role structure and educational inequality; and the risk of alienation from educational goals. We herein advocate as countermeasures the redefinition of teachers’ fundamental duties, the instruction of students in the sensible use of AI, and the promotion of effective regulation of AI’s deployment in education.
The study seeks to understand how the AI ecosystem might be implicated in a form of knowledge production which reifies particular kinds of epistemologies over others. Using text mining and thematic analysis, this paper offers a horizon scan of the key themes that have emerged over the past few years during the AIEd debate. We begin with a discussion of the tools we used to experiment with digital methods for data collection and analysis. This paper then examines how AI in education systems are being conceived, hyped, and potentially deployed into global education contexts. Findings are categorised into three themes in the discourse: (1) geopolitical dominance through education and technological innovation; (2) creation and expansion of market niches, and (3) managing narratives, perceptions, and norms.
Our data-driven decision processes reduce diversity and complexity. Data analysis is dependent on large homogeneous data sets. This leads to bias against outliers and small minorities. Most Artificial Intelligence (AI) amplifies and automates this pattern. This worsens disparity and blind spots in education and research. Data is about the past; automated decisions based on data exacerbate past patterns. The disruption in education caused by the COVID-19 pandemic offers an opportunity to consider what it is we want AI to amplify and automate. Is this the trajectory we wish to accelerate using machine learning? How will this prepare students to navigate out of crises to come and the changes in society brought about by machine intelligence?
Artificial Intelligence in Education (AIEd) has experienced a rapid rise in the past decade. This systematic review is the first examining the use of AIEd in K-12, including 169 extant studies from 2011 to 2021. This study provides contextual information from the research, such as the educational disciplines, educational levels, research purposes, methodologies, year published and who the AI was intended to support. The grounded coding revealed affordances fitting into three main themes of AIEd connecting to pedagogies (e.g., gaming, personalization), administration (e.g., diagnostic tools), and subject content. Challenges in AIEd K-12 included negative perceptions, lack of student and teacher technology skills, ethical concerns, and issues with the ease of use and design of the AI tools.
Educational technologies, and the systems of schooling in which they are deployed, enact particular ideologies about what is important to know and how learners should learn. As Artificial Intelligence—in education and beyond—may contribute to inequitable outcomes for marginalized communities, approaches have been developed to evaluate and mitigate AI's harmful impacts. However, we argue in this chapter that the dominant paradigm of evaluating fairness on the basis of performance disparities in AI models is inadequate for confronting the systemic inequities that educational AI systems (re)produce. We draw on lenses of structural injustice informed by critical theory and Black feminist scholarship to critically interrogate several widely studied and adopted categories of educational AI; and we explore how they are bound up in and reproduce historical legacies of structural injustice and inequity, regardless of the parity of their models' performance. We close with alternative visions for a more equitable future for educational AI.
The development of educational AI (AIED) systems has often been motivated by their potential to promote educational equity and reduce achievement gaps across different groups of learners – for example, by scaling up the benefits of one-on-one human tutoring to a broader audience, or by filling gaps in existing educational services. Given these noble intentions, why might AIED systems have inequitable impacts in practice? In this chapter, we discuss four lenses that can be used to examine how and why AIED systems risk amplifying existing inequities. Building from these lenses, we outline possible paths towards more equitable futures for AIED, while highlighting debates surrounding each proposal. In doing so, we hope to provoke new conversations around the design of equitable AIED, and to push ongoing conversations in the field forward.
Two of the central principles underlying the professional ethics of teachers are that they should do their best for their students and not exploit their positions of responsibility in so doing. Recent criticisms of some deployments of AIED systems have claimed that both these principles have been violated. This chapter briefly examines these ethical criticisms, and then goes on to explore some of the ethical dilemmas that can be encountered in "doing one's best". Both human teachers and AIED systems sometimes feel the need to challenge, discomfort, confuse or provide false feedback to their students in the short term as a means to increase learning in the long term.
Artificial Intelligence (AI) applications are entering all domains of our lives, including education. Besides benefits, the use of AI can also entail ethical risks, which are increasingly appearing on legislators' agendas. Many of these risks are context-specific and increase when vulnerable individuals are involved, asymmetries of power exist, and human rights and democratic values are at stake. Surprisingly, regulators thus far have paid only little attention to the specific risks arising in the context of AI in education (AIED). In this chapter, I assess the ethical challenges posed by AIED, taking as a normative framework the seven requirements for Trustworthy AI set out in the Ethics Guidelines of the European Commission's High-Level Expert Group on AI. After an overview of the Guidelines' broader context, I examine each requirement in the educational domain and assess the pitfalls that should be addressed. I pay particular attention to the role of education in shaping people's minds, and the manner in which this role can be used both to empower and exploit individuals. I note that AIED's main strength – offering education on a wider scale through more flexible and individualized learning methods – also constitutes a liability when left unchecked. Finally, I discuss various pathways that policymakers should consider to foster Trustworthy AIED beyond the adoption of guidelines, before concluding.
If we believe that AI can bring positive innovation to education, learning, and teaching then we must look at the risks and consequences too. Not doing so will be harmful to the social and technological progress our society desperately needs right now. With this in mind, this chapter focuses on the intrinsic role of education as the main leveller for improving equal opportunity and social mobility. Regardless of our background, it is through schooling that we can achieve access to knowledge, work, and financial security, with all that that entails. Therefore, the introduction of AI in education policy and practice needs to be seen through the lenses of the role of education in our society, thus ascertaining whether the AI can act as a springboard for opportunity, to further or hinder the role of education. The chapter focuses on several potential risks, including the erosion of human agency and the perpetuation of existing social divides, with a view to informing a cautious yet optimistic approach to the deployment of AI in education.
The advancement of artificial intelligence in education (AIED) has the potential to transform the educational landscape and influence the role of all involved stakeholders. In recent years, the applications of AIED have been gradually adopted to progress our understanding of students' learning and enhance learning performance and experience. However, the adoption of AIED has led to increasing ethical risks and concerns regarding several aspects such as personal data and learner autonomy. Despite the recent announcement of guidelines for ethical and trustworthy AIED, the debate revolves around the key principles underpinning ethical AIED. This paper aims to explore whether there is a global consensus on ethical AIED by mapping and analyzing international organizations' current policies and guidelines. In this paper, we first introduce the opportunities offered by AI in education and potential ethical issues. Then, thematic analysis was conducted to conceptualize and establish a set of ethical principles by examining and synthesizing relevant ethical policies and guidelines for AIED. We discuss each principle and associated implications for relevant educational stakeholders, including students, teachers, technology developers, policymakers, and institutional decision-makers. The proposed set of ethical principles is expected to serve as a framework to inform and guide educational stakeholders in the development and deployment of ethical and trustworthy AIED as well as catalyze future development of related impact studies in the field.
Abstract In light of fast‐growing popular, political and professional discourses around AI in education, this article outlines five broad areas of contention that merit closer attention in future discussion and decision‐making. These include: (1) taking care to focus on issues relating to 'actually existing' AI rather than the overselling of speculative AI technologies; (2) clearly foregrounding the limitations of AI in terms of modelling social contexts, and simulating human intelligence, reckoning, autonomy and emotions; (3) foregrounding the social harms associated with AI use; (4) acknowledging the value‐driven nature of claims around AI; and (5) paying closer attention to the environmental and ecological sustainability of continued AI development and implementation. Thus, in contrast to popular notions of AI as a neutral tool, the argument is made for engaging with the ongoing use of AI in education as a political action that has varying impacts on different groups of people in various educational contexts.
Abstract Recent developments in Artificial Intelligence (AI) have generated great expectations for the future impact of AI in education and learning (AIED). Often these expectations have been based on misunderstanding current technical possibilities, lack of knowledge about state‐of‐the‐art AI in education, and exceedingly narrow views on the functions of education in society. In this article, we provide a review of existing AI systems in education and their pedagogic and educational assumptions. We develop a typology of AIED systems and describe different ways of using AI in education and learning, show how these are grounded in different interpretations of what AI and education is or could be, and discuss some potential roadblocks on the AIED highway.