This literature review looks at the report produced by the Council of Europe, which discusses the use of AI in education through the lens of the Council's objectives around democracy, the rule of law and human rights.
The Review:
The Council of Europe’s ministers noted in 2019 that AI’s impact on education was increasing and that, though this brought opportunities, there were also threats. They therefore commissioned a report which, first, looked at ‘the application and the teaching of AI in education, which we refer to collectively as “AI and education” (AI&ED)’; second, looked at ‘AI&ED through the lens of the Council of Europe’s core values: human rights, democracy and the rule of law’; and third, took a ‘critical approach to AI&ED, considering both the opportunities and the challenges’.
Their aim was to provide a holistic view of AI&ED, place power in the hands of educators and provide an unbiased view of the future of AIED.
Their definition of AIED was useful. They differentiated it in the following way:
The connections between AI and education: “learning with AI” (learner-supporting, teacher-supporting and system-supporting AI), using AI to “learn about learning” (sometimes known as learning analytics) and “learning about AI” (repositioned as the human and technological dimensions of AI literacy).
Though they acknowledge that AI offers opportunities, they feel there are also many threats with the potential to overpower educators and undermine education and citizens by reducing critical thinking and autonomy, and that AI should therefore not be widely used in schools. This stance almost leaves no space for the positive impact AIED may have in schools and, made right from the start of their review, immediately signals the bias of a report which should be critical and fair.
They take note of the plethora of tools produced by profit-making companies and balance this against the backdrop of the use of AI in education. They say their aim is to provide a critical analysis, rather than a glowing recommendation or report, in order to protect users.
In order to understand the viewpoint of the writers, context is important: they produce their report within the Council’s ‘Digital Citizenship Education Project (DCE), which aims to empower children through education and active participation in the increasingly digital society’. Understanding this context puts the report in perspective, especially given their sometimes overly critical approach to AIED and the fact that their audience is policy makers, educators and governments.
The report was guided by the following questions (all through the lens of the Council of Europe’s core values):
What is meant by AI and education, what does it involve, and what are its potential benefits? What key issues and potential risks may arise in this context, and what are the possible mitigations?
What are the gaps in what is known, documented and reported, and what questions still need to be asked?
They state that technology is complex and non-linear, with ‘dangerous unforeseen consequences’, almost as though a bogeyman waits for all who enter this room of technology, which their report claims to unravel or decipher for its audience.
They provide a preferred definition of AI, which is useful for this review and shows how European policy makers view AI, something important to any discussion.
They use a definition by UNICEF which is derived from that of the Organisation for Economic Co-operation and Development (OECD) member states:
AI refers to machine-based systems that can, given a set of human-defined objectives, make predictions, recommendations, or decisions that influence real or virtual environments. AI systems interact with us and act on our environment, either directly or indirectly. Often, they appear to operate autonomously, and can adapt their behaviour by learning about the context. (UNICEF 2021: 16)
The report prefers this definition because they feel it does not depend only on data; it includes ‘rule-based or symbolic AI and any new paradigm of AI that might emerge in future years’; and it depends on human beings to drive it. Their criticism of the definition is that it implies AI may be able to learn on its own, which they say will not happen (Rehak 2021).
In fact, they believe machine learning will soon reach its development ceiling and be unable to progress further. An interesting idea, as Professor Geoffrey Hinton, the man called the ‘godfather of AI’, declares AI will soon be superintelligent. Will that be the point at which it reaches its ceiling?
They disagree with the concept of AI as a panacea and state that it must be seen as a tool which can have ‘positive impacts’ but also has various limitations, such as a lack of accuracy in some cases, a tendency to change its output as soon as it receives a different stimulus, and bias.
It is interesting to note they find the name ‘artificial’ problematic, as it implies that the ‘creation of a non-human intelligence is possible’ and that it has the capacity to ‘learn’ on its own.
They seem to have missed the fact that AI is supposed to have the capacity to learn on its own: the more information it gathers from various sources, the more it learns and changes its responses. They say ChatGPT can dish out nonsense in certain situations; however, it is learning and changing constantly.
They make the important decision not just to define or discuss AI in terms of its technological capacity but to address its ‘sociotechnical’ characteristics, as it is created, developed and used within human processes and contexts, and they feel any look at AI must include both sides.
The paper does not see AI as a simple threat to jobs but takes a more nuanced view, suggesting it may even create new ones. They mention Frey and Osborne's study of around 700 occupations and note the ‘“hidden ghost work” of AI: the data cleaning, image labelling and content moderation being undertaken by usually poorly-paid workers in developing economies (Gent 2019; Raval 2019)’. They state that the impact of AI on jobs calls for more research.
They critique the idea that AI is an answer to many of education's problems, such as ‘the lack of qualified teachers, student underachievement and the growing achievement gap between rich and poor learners’. Instead, they feel several issues need to be understood first, including: ‘the aims of using AI in education, where it is used, by whom (by individuals, institutions or industry), how it is operationalised, at what levels (from the single learner to whole classrooms, collaborative networks and national and transnational levels), how it works and so on’. Thus they would like a more holistic view or understanding of the context of AI in education prior to its deployment.
They provide a useful breakdown of AI into four categories for anyone involved in AIED:
1. “Learning with AI”, which involves the use of AI-driven tools in teaching and learning for both students and tutors. These include tutoring systems, chatbots, learning network orchestration and environments for learners, and recruitment and timetabling for tutors. They do not feel the market has provided enough tools or materials to support teachers, apart from in the area of smart curation.
The writers feel the early claims of how AI would improve learning have remained aspirational and have not materialised.
They feel that although the use of AIED has increased around the world, in areas as varied as physics, maths, programming and languages and across various types of hardware such as mobile phones and headsets, its effectiveness has not been demonstrated, so such claims serve only as marketing material for the multimillion-dollar companies that develop these tools. They feel that though AIED has been around for 30 years, it has shown no clear pedagogical advantage nor provided any meaningful change or impact. They say there is little evidence to support the tools' efficacy and list the following as exceptions:
‘the homework-oriented ITS ASSISTments (Roschelle et al. 2017), the geometry Cognitive Tutor (Pane et al. 2010), and Multi Smart Øving (Egelandsdal et al. 2019; Kynigos 2019)’.
They mention the discussion around the usefulness of AI where there are teacher shortages, which they say may be a short-term solution that fails to address the root causes of the shortage.
They mention the bias and inequality of AIED but state that this is a problem of the context within which AI operates and is not caused by AI, a useful point to make, as elsewhere the report appears to blame AI itself.
2. “Using AI to learn about learning”, which uses AI to analyse data about how learners learn and how systems are designed, in order to inform programming and design or to support functions such as admissions, retention and planning.
The paper suggests that the ‘European Commission’s Ethics guidelines for trustworthy AI should be applied to AIED systems too’, as trust in the systems will increase teachers' use of the tools. When teachers know the tools are accredited, they feel more confident using them, and the authors suggest the onus of ensuring the tools are trustworthy should rest not on the tutors but on the developers. This is not a realistic position, as developers are not qualified to accredit themselves. Accreditation is the responsibility of educators, policy makers and governments, who must ensure developers design within the parameters of the guidelines provided.
They suggest the argument that AI will save teachers time (in grading, assessment, monitoring student outputs and so on), even to the point of replacing them, is unproven, and that its impact on the quality of teaching and learning has not been evidenced.
While AI’s recent growth was fed by access to large amounts of data, AI is now itself a large collector of data, including data about how users interact with technology, the answers they provide, the sites they visit, how they move their mouse and so on. Apparently a single session, with a child interacting with an AI or other electronic education system (such as a MOOC or a serious game, Hwang et al. 2020), can generate ‘around 5-10 million actionable data points per student each day’.
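To make the scale of that claim concrete, here is a minimal sketch of the kind of interaction logging such systems rely on. It is an illustrative assumption, not the report's or any vendor's actual schema: the event names, fields and rates are invented for the example.

```python
# Hypothetical sketch: how a learning platform might log interaction events.
# Event kinds, fields and rates are illustrative assumptions only.
import time
from dataclasses import dataclass, field

@dataclass
class Event:
    student_id: str
    kind: str        # e.g. "page_view", "mouse_move", "answer_submitted"
    payload: dict
    timestamp: float = field(default_factory=time.time)

events: list[Event] = []

def record(student_id: str, kind: str, **payload) -> None:
    events.append(Event(student_id, kind, payload))

record("s-001", "page_view", page="fractions-intro")
record("s-001", "mouse_move", x=120, y=340)                   # fine-grained telemetry
record("s-001", "answer_submitted", item="q1", correct=False)

# At ~20 raw events per second over a six-hour school day, a single student
# already generates hundreds of thousands of events; derived features per
# event multiply that towards the millions of "data points" the report cites.
print(20 * 60 * 60 * 6)  # 432000 raw events
```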
This uninhibited access to data and information is an issue for the writers, who note that the data is collected by large private-sector companies that use it to develop tools which are then sold for profit, profit which does not trickle down to learners, tutors or institutions. They feel this is a transfer of power from the public to the private sector, a commercialisation of AI-in-education tools by private companies with little oversight from public or policy-making bodies.
They suggest that little is written about how AIED impacts policy makers and nation states. They ask how these tools should be managed by states, and whether the designers should be allowed to continue as profit-making bodies or should make their content open source, given that they take information from the public to build these money-making tools. These are key questions, though they may be coming too late for the West.
3. “Learning about AI”, which involves ‘increasing the AI knowledge and skills of learners of all ages (that is, from primary education, through secondary, to tertiary) and their teachers, covering the techniques of AI (e.g. ML) and technologies of AI (e.g. natural language processing), together with the statistics and coding on which it all depends (Miao and Holmes 2021a)’. They refer to this as the ‘technological dimension’.
They suggest it is imperative that as people are taught about the technical nature of AI they also receive equal information on the human dimensions of AI, not as an add-on but as an important and equal aspect of learning about AI.
They feel AIED tool developers treat learning as a purely cognitive exercise, in which the content for learning is dictated by policy makers, produced in books, taught by teachers and assessed in exams.
They feel this definition of education is limiting, as it fails to take into consideration the development of the child as envisaged by the United Nations Convention on the Rights of the Child (UNCRC), which provides a more holistic view of the child, including their talents and mental and physical capacities. They feel the Council of Europe and other policy makers have not provided adequate guidance for educators on all the different facets which should be taught and which are relevant for the 21st century, essentially allowing educators to decide what they teach about AI based on what the objective of education is for them, leaving it open to interpretation.
They feel more should be done to encourage literacies in other areas, such as scientific, IT and financial literacies, rather than maintaining a focus on functional skills such as reading and writing.
They suggest that whilst not many people are being asked to become AI technologists, everyone should have a basic knowledge of AI:
‘The world’s citizens need to understand what the impact of AI might be, what AI can do and what it cannot do, when AI is useful and when its use should be questioned, and how AI might be steered for the public good. (Miao and Holmes 2021a: 6)’.
Thus, they also feel people must be taught about the roles of humans in the development of AI, and about how AI is used by those in power to make decisions which impact their lives.
4. “Preparing for AI”, which involves ensuring that people everywhere are prepared for the possible impacts of AI on their lives, including issues of bias, ethics and the impact on jobs. They suggest this should be integrated into learning with AI, so that as learners learn to use AI tools they are also taught about the implications for their lives. They refer to this as ‘the human dimension’.
AI applications and pedagogy
Despite research on other effective forms of learning (Entwistle 2000), such as guided discovery learning (Gagné and Brown 1963), productive failure (Kapur 2008), project-based learning (Kokotsaki et al. 2016) and active learning (Matsushita 2018), producers of AI tools for education have remained traditional, focusing on cognitive and behaviourist learning methods rather than being creative or innovative, which the authors say undermines independence, critical thinking and self-development. They give e-proctoring as an example of a tool which brings nothing new to the table.
Despite this criticism of AIED, it can be argued that the developers are simply reproducing what the education system wants: how the majority of educational establishments approach pedagogy.
They argue the personalisation of AI in education has not worked. Some AI producers used to claim to be the ‘Netflix of education’, providing personalised education, but what they provided were personalised pathways to the same end, not personalised learning leading to individual outcomes.
AI applications and identifying learners at risk
AI could be used to monitor attendance levels or to identify students at risk of dropping out, but this must be balanced against the need for privacy, less intrusion, the avoidance of labelling, and data protection.
AI applications and the developing brain
Data protection laws at the moment govern the use of data for identification, not the kind of processing that shapes behaviour. Yet such processing is used to influence behaviour, and its impact is greater on young people who are still developing their own values and identities. The issue is that if this has not been addressed in the current climate, with all the access young people already have to technology, it is unlikely to be addressed amid the proliferation of AI. Once again, policy makers are late to the game.
AI applications and learner agency
They are concerned about the freedom and agency of learners, but they write as though children and young people have no independent thinking capacity, no filters through which they can receive and process information or interact with AI. This echoes Bandura's theory and, although true in some cases, it faces many barriers as a blanket description of the way children and young people learn.
AI applications for children with disabilities
They admit that although AI tools in this area can have limitations, reproducing what is already available, they also have many advantages in supporting learners with disabilities (Drigas and Ioannidou 2013); ‘for example, to diagnose dyslexia (Kohli and Prasad 2010), attention deficit hyperactivity disorder (ADHD) (Anuradha et al. 2010) and autism spectrum disorder (Stevens et al. 2019), and to support the inclusion of children with neuro-diversity (Porayska-Pomsta et al. 2018)’.
AI applications and parents
The article says parents want to be involved in the learning experience of students but provides no reference for this statement. The authors feel students are highly influenced by AI technology in classrooms but do not know to what extent they are impacted. Their research should provide some answers here rather than present assumptions.
The writers feel the design of AI tools in education aims to influence what children learn, as it dictates what comes up in a search. This is hardly a new problem or an impact unique to AI: it has been happening on Google and other search engines for a long time.
They present a negative view which makes parents fearful rather than bringing them alongside to work with educational settings to ensure their wards are comfortable and safe. They imply AI will damage your child's behaviour. They contest the idea that algorithms can safely predict human behaviour, describe AI as something which can systematically harm the user, and say parents have little recourse to deal with this.
AI applications as “high-risk”
They identify the AI applications which children use as ‘high risk’; no wonder people are so fearful of them. They suggest these applications should be subject to compliance requirements covering data governance, transparency, human oversight, robustness and accuracy. This is not new either, as the internet already has such measures, which can be replicated. They also refer to AI systems as though these create themselves, with no human involvement in their creation.
They feel using AIED tools to make predictions and determine learner grades can be discriminatory, as such tools may base decisions on characteristics such as gender and race.
Apart from the issues of data and privacy, they feel technology is ‘shaping children in ways that schools and parents cannot see’, and that neither parents nor children have any power over this. They give the example of the University of Buckingham monitoring students' social media posts, a very intrusive exercise.
Research shows emotions affect learning, but there is as yet no research data on how AIED affects emotions.
They discuss the potential use of AI to diagnose mental health issues, predict behaviour, monitor students or apply facial recognition, and mention that the European Data Protection Supervisor has called for a ban on these uses of AI, which would include educational settings. A full blanket ban seems a bit drastic, as it would also mean the benefits are lost.
AI and digital safeguarding
The irony is that the tools which they suggest are a threat to personal liberty are the very tools that keep users safe online. They say that, through monitoring, one can predict behaviours which could lead to radicalisation or sexual exploitation. They also say this monitoring makes learners feel less safe and leads them to alter their behaviour to change their digital footprint. They add that schools claim this surveillance helps them predict how learners will transition into work.
The ethics of AI
They discuss the ethics of AI, say all citizens should pay attention to it, and admit this area of ethics is complicated. They mention it has received a lot of attention within the field (Boddington 2017; Whittaker et al. 2018; Winfield and Jirotka 2018) and more widely (e.g. the House of Lords, UNESCO, the World Economic Forum).
Apparently, institutions which look at ethics in AI have been set up, including ‘the Ada Lovelace Institute, the AI Ethics Initiative, the AI Ethics Lab, AI Now, and DeepMind Ethics and Society, to name just a few’.
In 2019, Jobin and colleagues identified 84 published sets of ethical principles for AI, which they concluded converged on five areas: transparency, justice and fairness, non-maleficence, responsibility and privacy. However, the authors say these all remain open to interpretation in different contexts, in both the development and the use of AI.
Despite this, they feel the impact of AI can be seen in education in the way it may screen candidates, with a single automated ‘no’ negatively affecting someone's life. But this already happens and is not new.
They say the ‘social ills’ of computing will not disappear just because there are codes of ethics. This is fair to say about almost everything in life.
The article further states that although universities tend to have robust ethics requirements, they do not have equally robust requirements in place for AI. This can be explained by the newness of AI, an area where policy makers themselves are vague in their definitions and requirements. It would be interesting to look at the guidelines for the use of AI recently published by the EU to see what guidance they provide.
If, as they suggest, companies are using children for their own commercial benefit, then governments need to set the rules and companies need to abide by them. The issue goes beyond private companies not abiding by ethics to the lack of clarity from policy makers and governments about what those ethics are, leaving them open to definition and interpretation.
They suggest ethics in AI must go beyond data collection and privacy, because it also involves the ethics of education, to include: ‘the ethics of teacher expectations, of resource allocations (including teacher expertise), of gender and ethnic biases, of behaviour and discipline, of the accuracy and validity of assessments, of what constitutes useful knowledge, of teacher roles, of power relations between teachers and their students, and of particular approaches to pedagogy (teaching and learning, such as instructionism and constructivism)’ (ibid.: 521).
They mention that ethics in education has been developed for over 20 years, whilst ethics in healthcare has been developed over a much longer period; AI is recent, which explains the slowness in developing ethics for AI in education. This should not be a cause for polarisation or for disparaging the private sector, but an exciting moment to create something new for generations to come, and policy makers must wake up to the urgency.
‘In summary, the ethics of AI and education is complex but under-researched and without oversight or regulation – despite its potential impact on pedagogy, quality education, agency and children’s developing minds. Accordingly, “multi-stakeholder co-operation, with Council of Europe oversight, remains key to ensuring that ethical guidelines are applied to AI in education, especially as it affects the well-being of young people and other vulnerable groups”.’
The authors ask an interesting question: ‘For whom does an AI system work? The learners, the schools, the education system, the commercial players, or politicians and other decision makers?’ They suggest that the ethics of AI is less about the technology and more about the people designing and using it. A valid point, as AI is just a tool, like a hammer or a car: it is about who designed it, why, and for whom. Thus the question, for whom does AI work, is one of ‘AI loyalty’.
They suggest that all stakeholders, such as parents, policy makers, civil society, industry, children and teachers, should be involved in the design of the technology used in AIED. Designers must already have testers and case studies to work from, but remember that they are first responding to market demand. Governments need to set the rules and let companies produce within them, the same as for every other technology or product.
Political and economic drivers
The article suggests that with AIED being readily available, it will lead to a downgrading of the quality of education. This is far-fetched and hard to defend, especially as they fail to present adequate evidence to support it. AI systems and tools are built the way they are because that is how the education system is built; it is not a creation of their own.
The writers posit that AIED producers are profit-focused rather than focused on the educational benefit of learners, calling the use of AI in schools ‘privatisation by stealth’. They seem to forget these companies operate within a capitalist system and respond to market demand: they would not produce what the market does not require, and the market will not purchase what it does not need.
They say AIED will exacerbate inequalities between rich and poor and for disabled and marginalised communities, yet admit elsewhere in the document that these inequalities are created by global systems.
They suggest teachers, educators and policy makers also need training to decide which tools to use, which should indeed be the case. Interestingly, in the ‘call for evidence’ by the Department for Education, teachers themselves identify education and training in the use of AI tools as an important requirement.
Evaluating AI in education
The writers decry the absence of empirical research on the use of AIED. They feel there is not enough robust data on which policy can be formed, and a lack of information on the impact of AIED on learners. This is true: a current look at research in the field indicates a dearth of research on the impact of AIED on learners.
They feel schools and governments have decided to use AIED without adequate information, and usually decide to police its use after harm has been done rather than before.
They decry the lack of accreditation of the tools and the access private companies have to student and teacher data, which they call ‘data rent’.
They suggest that many teachers do not have enough knowledge of AI tools to make the right decisions about their use. This is true, as indicated earlier; however, teachers use AI tools to augment their own teaching. Earlier the authors mentioned that AI tools are designed as cognitive and behavioural tools, much like the common didactic means of teaching used in schools. Teachers therefore use tools which support what they do, not tools they do not understand or cannot use.
They suggest that AIED designers may go out of business and that this is a threat to AIED. This is a weak position, as it is not unique to AIED companies: businesses go bust across all sectors.
The writers suggest that as AIED impacts learners' mental capacities and health, the tools should be assessed to ensure efficacy and safety. As with all educational interventions, this should be the norm, not an exception or a novelty.
AIED colonialism
‘In 2020, despite the coronavirus pandemic, venture capital (VC) investments in AI start-ups reached a total of US$75 billion for the year, of which around US$2 billion was invested in AI in education companies’, mostly in the US. It is these companies that are selling their approaches globally, creating what has been called AIED colonialism.
The tools produced in the West have been sold globally without consideration of cultural differences or nuances. This is an interesting criticism: commercial enterprises are set up to make profit and, though there must be ethical considerations in doing business, as expected across all sectors, it is difficult to see how a profit-seeking, privately invested company would invest in being culturally appropriate if there is no profit in doing so.
It must be incumbent on governments and policy makers to make this happen and to ensure that what is supplied into their territories is culturally appropriate. Commercial organisations will respond to market demands made on them, not necessarily to emotional appeals.
They suggest Google's dominance with Google Classroom is problematic. It is true that dominance by one company is usually not good for business, variety or innovation.
They suggest that the global use of English, for example, will lead to lower school attainment, giving sub-Saharan Africa as an example where, they claim, lower attainment is linked to language; yet some of the most educated people in the world are from sub-Saharan Africa. Once again, the appeal should be to the markets, not the companies, to produce in the various languages of the globe. Companies will do so if market and profit demand it; otherwise there is no obligation to create in another's language, especially when there are so many languages and it might mean losing money.
They conclude that AIED monitoring cannot be left to profit-making companies but must fall to policy makers and governments, which is correct. These should make the rules, as they do with books and the curriculum; AIED designers will deliver according to the brief, since they want clients to purchase their products.
AI, education, human rights, democracy and the rule of law
The article encourages governments to take a cautious approach to the adoption of AIED and to minimise risks to human rights, especially children's rights, since education can enhance rights when ‘enjoyed fully’ or affect them ‘negatively if not’. This is true, and it is a solid argument for the importance of education in general.
They add that rights must be looked at in the context within which they are applied, paying attention to groups whose rights may be curtailed. This is correct; however, the world acts as if inequalities did not exist before AI arrived. AI did not create inequality, bias and prejudice: it walked into a world that is already racist, biased and prejudiced. The tools do not need fixing; the structures, the people, the world need fixing, and since that is too large and almost impossible a task, it is unlikely AI can fix it.
They discuss the rights in the United Nations Convention on the Rights of the Child (UNCRC), the most widely ratified human rights treaty in the world and the basis for the protection of children's rights everywhere.
They do, however, note its weaknesses, such as weak monitoring and enforcement mechanisms globally, and conclude that children must be educated about their rights.
Human rights, AI and education
The article admits ‘There is little substantive literature that focuses specifically on, or even mentions in any meaningful way, AI, education and human rights’, an interesting situation given the constant mention of a lack of literature even though research in AIED has been undertaken for more than 25 years (see Ido Roll & Ruth Wylie, ‘Evolution and Revolution in Artificial Intelligence in Education’, 2016).
The report posits that one of the calls for the use of AIED arises where there is a lack of teachers, or of well-qualified teachers, especially in rural areas, but adds that AI tools will not solve this problem, which has deeper roots in socio-political and economic contexts. Although this is a correct analysis, it also describes a positive rather than a negative deployment of AI: AI could alleviate some of the issues, which in this example are caused by deeper societal limitations. If AI did not cause them, then the fact that it can help alleviate some of them is useful; it should be used, not vilified.
Right to human dignity
‘In the context of AI and education, this human right implies that the teaching, assessment and accreditation of learning, and all related pedagogical and other educational decisions, should not be delegated to an AI system, unless it can be shown that doing so does not risk violating the dignity of the participating children. Instead, all such tasks should be carried out by human teachers’.
The onus is on policy makers, AI developers or teachers to prove it will not negatively affect the dignity of the learner, which is not a difficult thing to do once the right guidelines are in place in institutions. For example, most learning institutions have IT policies which can be replicated or extended to include AI.
Right to autonomy
The writers insist on the right of students not to be subjected to automated decisions and the right to contest these decisions.
There is no problem with this, except that automated decision-making has already been in play for a long time across many industries, like banking and insurance.
They state that a dependence on AI to profile children and determine their learning pathways could be problematic as AI could get it wrong, negatively impacting children’s psychological, mental and emotional wellbeing.
They say that since old data is used to train AI, this could be problematic, noting that grades awarded during COVID had to be amended afterwards. They suggest children should have the right to refuse AI in the classroom but do not provide a reason why this should be the case. If children will use the textbook provided by the teacher, why refuse the AI tool? The gatekeeping should happen with the authorities long before the tool arrives in the classroom. The child will not even know what works best for them, because in school the choices about what and how they learn are made for them before they arrive in the classroom.
Right not to suffer from discrimination (fairness and bias)
From design to use, AI must be non-discriminatory and accessible to all. This is the ideal, but we do not live in that world, and AI cannot create that world for us.
They discuss how bias is inherent in data sets which perpetuate historic biases and stereotyping.
It can create positive discrimination for disabled learners by including them, but it can also create negative discrimination by excluding others.
They suggest companies profiling learners could get it wrong, a valid point which applies to other educational tools and is not unique to AIED.
Right to privacy and right to data protection
Data collection can go either way, with negative and positive uses of the data collected. On the positive side, more people can be reached and there is more data for wider decision making; on the negative side, personal information may be used in ways that affect learners in the future. The opportunities come with risks, and a decision must be made as to whether the risks are worth the benefits: one which all institutions must make.
Although AI can support mental health by helping learners move from a negative to a positive state of mind, the authors feel the exclusion of the teacher from the loop takes away the personal touch and the constant assessment of the situation.
The issue with data is not just its collection but how it is stored, who has access, how it may be used in future, whether it is used by others who never had permission, and how it is anonymised.
Right to transparency and explainability
Their issue is that data or AI tools are often anonymous and lack clear proprietorship, so there is an absence of ownership or accountability, making it difficult for teachers to challenge AI decisions.
Right to withhold or withdraw consent
The right to consent is not new, but issues around being able to withdraw consent once it has been given and acted upon remain undecided. Hence the value of learning about AI for all stakeholders prior to agreement: read the fine print. It is of course understood that such terms can be ambiguous, which then leads to issues around exploitation, especially if money has been offered.
Right to be protected from economic exploitation
Their example: when a child makes a song they own the rights, but when they create with AI, ownership is unclear. This is an issue around the copyright and ownership of content used via AI. One suggestion is that developers should maintain a record of where they mine content from and deliver a system of reimbursement to the originators.
They call for the rights of the child to be embedded in policies on the ethics of AI. That would depend on what the rules are around ownership in AI, a discussion still to be had. Note also that data about the child involves the stakeholders around the child, such as the family and the school, and their personal data too.
The issues discussed here can be far simpler than the document implies. Once global rules are made around the use of AI content, they will be implemented by developers and users, as happens with most things, such as curricula, books, vehicles and computers. Once regulated, people will in the main follow the rules.
They discuss how the accumulation of data in the hands of AI developers makes them powerful. They feel this is not yet prolific among education providers but is growing, and they note that organisations tend to adopt a single solution, which creates the potential for monopoly. Given that the West is a capitalist society, this happens within that system and so will need to be regulated like other industries.
Rights of parents
They suggest parents may allow the child to use tools which harvest large amounts of data, thereby waiving the child's right to privacy, knowingly or unknowingly.
Can parents refuse to let their child use AI tools in school? This has not yet been tested in law, and of course refusal may affect the child's learning if others have the tools. But consent must always be sought, not assumed, and schools must ensure this.
Remedies and redress
They suggest that children must be able to enforce their rights when these come into conflict with AIED; however, most rights cannot be enforced by children themselves. They are exercised by the people responsible for them, such as parents or learning institutions.
AI, education and democracy
‘Regarding the role of digital technologies in modern society and their potential negative impact on democracy, Diamond noted that “once hailed as a great force for human empowerment and liberation, social media – and the various related digital tools that enable people to search for, access, accumulate, and process information – have rapidly come to be regarded as a major threat to democratic stability and human freedom” (2019: 20).’
That is a serious claim to make. What is the proof of it? If anything, social media has been a force for alternative voices, the practice of democracy and the opportunity for marginalised voices to have a platform, fight for freedom and press for democratic transparency. Examples are the protests in Kenya and the diverse Black voices telling stories, on all issues important to them and not imposed by hegemonic powers, that are not often found in mainstream Western media.
There are some risks here in terms of cyber-attacks, negative propaganda and inauthentic behaviour, but to date these are unproven in any major sense and are not caused solely by AI.
They also agree it can be a force for the good of democratic principles. AI is not only influenced by humans but implemented by humans; therefore humans can set the laws and policies which regulate its use, especially within education.
Democracy and AI in education
Ethics in AI started being discussed 20 years ago but was not fully pursued, partly because of the need for diversity and cultural sensitivity.
They suggest that if democracy is to live up to its ideal of being for all, then AI should be for all, yet there is a disparity, as this is not the case. Yes, there will be a disparity, but it is not caused by AI. If democracy does not really exist in its ideal form in the world, it is unlikely to do so with AI.
They say the democratisation of education is embodied by public schools, that is, equal access, making sure all children have the same access; but of course this differs even among public schools, depending on the socio-economic location of the school.
They argue that AIED follows cognitive and behavioural systems of education when it should be more connectivist or social constructivist in practice, but this does not depend on AI; it depends on how the learning institution is set up. AI will replicate it or work within it, but AI cannot manufacture it.
They suggest AI replicates global inequalities as it does not take cognisance of diversity around the world and since it is trained on past data it replicates past prejudices as well.
This is true, but developing countries need to develop their own AI systems and tools; that is how equality can be created across AIED globally. Yes, ML systems learn from the past, as they must obviously be trained on historical data, but remember that data is constantly being updated.
Critical reflections
The article posits that AI has issues, especially in relation to AIED, but so far the criticisms levied are clichéd and not unique to AI: they apply to the internet, global systems, corporate bodies, capitalism and the other structures within which the world operates.
They say AIED has always made a claim to personalisation but question whether this is a good thing. Could it not cause division in society, in terms of both attainment and access? Is it really personalised when its data collection is centralised, so that the information is homogenised rather than personalised? They add that since the classroom is a preparation for the world, it is currently unclear how AIED affects learners' preparedness.
AI, education and the rule of law
They provide a list of international legal frameworks protecting human rights, disability rights and the rights of the child but they do not provide any examples of how or where these are being breached due to the use of AIED.
No legal frameworks currently govern AIED in particular, but the authors feel such frameworks are important given AI's issues around data rights, profiling and cross-cultural impact on education. It is important to note, however, that there are already several laws within which designers must operate, including the international and human rights laws that govern nations.
There are also the GDPR provisions which already govern designers.
School owners (e.g. municipalities) are required to carry out a data protection impact assessment (DPIA) that identifies and evaluates the risks associated with the use of digital tools in schools when the tools engage in certain processing operations. Thus, a DPIA is required for all ‘learning with AI’ learner- and teacher-supporting AI systems, which adds another layer of protection.
With respect to the values of the Council of Europe, Schwemer, Tomada and Pasini write: ‘it is noteworthy that the proposal does not follow a rights-based approach, which would, for example, introduce new rights for individuals that are subject to decisions made by AI systems. Instead, it focuses on regulating providers and users of AI systems in a product regulation-akin manner’ (2021: 6).
This is the way forward.
In a survey at a German university assessing students' responses to AI versus human assessment and decision making, students stated they trusted AI more, expecting an absence of bias.
The ‘European Digital Competence Framework for Citizens – DigComp 2.2’ (Vuorikari et al. 2022) expects all people to be AI literate in understanding how their data is processed, but not many people are, in this or in any other area.
AI and grade prediction
The article presents an interesting discussion of how AI negatively impacted learners' grades for the International Baccalaureate. Due to COVID, students could not take exams, so grades were based on coursework, teachers' predictions and past performance data. This performance data was based not on the learner but on the school, which meant that schools that usually did well, mainly those in higher socio-economic locations, received better grades; this was considered unfair, as were the results.
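The unfairness is easy to see in a few lines. The following is a minimal, hypothetical sketch, not the IB's actual algorithm: the blending weight, grade scale and function name are invented for illustration.

```python
# Hypothetical sketch: blending a school's historical average into an
# individual prediction. Weights and numbers are illustrative assumptions.

def predict_grade(coursework: float, teacher_prediction: float,
                  school_history: float, w_school: float = 0.4) -> float:
    """Blend individual evidence with the school's past average."""
    individual = (coursework + teacher_prediction) / 2
    return (1 - w_school) * individual + w_school * school_history

# Two equally strong students (identical coursework and teacher prediction)...
strong_student = dict(coursework=6.5, teacher_prediction=6.5)

# ...attending schools with different historical averages:
affluent = predict_grade(**strong_student, school_history=6.0)
disadvantaged = predict_grade(**strong_student, school_history=4.0)

print(round(affluent, 2))       # 6.3
print(round(disadvantaged, 2))  # 5.5 <- penalised for the school's past, not their own work
```

The student at the historically lower-performing school loses nearly a full grade for identical work, which is the school-level bias the article describes.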
Biometric data use in schools
The Finnish data authority and the Swedish data authority both took action against schools for using students' biometric data to identify them. They felt that the schools' claim of having consent from the students and their parents did not account for the imbalance of authority between schools and students, and that it was unnecessary for the schools to use that information to identify the students.
In Finland the data was collected for impact assessment and in Sweden it was for school lunches.
Critical reflections
The critical reflections of the authors were as follows:
· There is an absence of research information on AIED
· Training is required for all stakeholders
· They call for law schools to include AI in their courses, for example AI and healthcare, AI and war, etc.
Open questions/recommendations
Open questions/recommendations from the article:
The issue of the use of data should be addressed, as consent to it is not freely given in educational settings, given the power dynamics between students and institutions.
The writers feel there are three challenges to be addressed:
· Can children be required to use AI systems which can exploit their data?
· If there are other ways of learning than AIED, can these not be deployed instead?
· If parents refuse to give consent in certain schools, could the schools insist by making the tools compulsory?
Conclusion
In their conclusion, the authors say that AIED should not be discussed only in terms of the technology but also in terms of its human impact, design and use.
In relation to AIED, they presented the following points:
· Given the very critical nature of the document, the writers did admit that AIED in itself is not problematic, just its deployment and use in the hands of humans.
· They also call for further research on the impact of AI on education.
· They suggest that AI tools are mainly commercially driven and may work outside the system; however, if they are used in schools it is because they work for the schools, otherwise they would not be used.
· They suggest parents always be given the right to choose, but this is not unique: parents are given an option when schools introduce new ideas, subjects or tools.
· They suggest that tool designers should embed ethics in the tools; however, remember that marketing and money are biased. Change that and you change everything. AI simply reflects the world it lives in.
· They say that children should not be forced to be research subjects. ‘Forced’ is the wrong word, as that is unlikely.
· They suggest that data rights should remain with the learners and not with the designers who mine the information, and that, at the very least, if data is mined from public schools then the tools produced should be open source. That is a discussion worth holding with the many stakeholders. Schools and governments could make this a requirement of using the tools, or is it too late?
· They call for education for stakeholders including policy makers, educators and parents on AI tools in order to facilitate a better understanding, discussion and deployment of AIED tools, a practice present for most new interventions.
In Conclusion
From the start they say this is a critical document, but at times the criticism is questionable: some of the criticisms levied against AIED appear too harsh, such as suggesting AIED should not be used in case manufacturers go out of business, expecting manufacturers to produce in many languages and to be responsible for equality in a world where it does not exist, and labelling AIED a ‘high-risk’ activity.
The usefulness of the document lies in its discussion of human rights, the rule of law and democracy, which provides another perspective on the deployment of AIED, with the rights and needs of learners at the fore of any decisions or considerations about learning tools, including AI tools.
Additionally, they discuss concepts such as ‘data rent’, ‘AI loyalty’ and ‘AI colonialism’, ideas not readily discussed in the mainstream but which are all important considerations in AIED.
The writers have called for the education of policy makers, which is important, and also for an understanding of the fact that AI exists within an unjust society.
It is important that the rules, laws and requirements that apply to human beings are upheld across the world. Once this is done and clarity is provided by policy makers in relation to AIED, commercial producers will produce tools that meet these criteria, knowing that failure to do so could lose them business and profit.
As the writers have said, there is a need for further research on the impact of AIED on learners; this is important, as otherwise the debate remains mere theory in the absence of an understanding of that impact.
Wayne Holmes, Jen Persson, Irene-Angelica Chounta, Barbara Wasson and Vania Dimitrova, Artificial Intelligence and Education: A critical view through the lens of human rights, democracy and the rule of law, Council of Europe, 2022.