Cambridge University Press & Assessment has established six guidelines to ensure the ethical use of AI in English language education. The guidance follows growing concern about the role of AI in English learning and assessment.
According to a recent YouGov poll, the British public's main concerns about the use of AI in English proficiency tests are an increased risk of cheating and a failure to measure the right language skills (39 per cent each).
Cambridge's stance is built on a human-centred approach to AI, which recognises the critical role of people in both language learning and high-quality assessment.
The organisation also calls for greater care to ensure AI is fair and inclusive, and that security, privacy, and consent are continually prioritised.
The six principles of AI in English language education are:
AI must consistently meet human examiners’ standards
AI-based language learning and assessment systems must be trained on inclusive data to ensure they are fair and free from bias
Data privacy and consent are non-negotiable
Learners need to know when and how AI is used to determine their results
Language learning must remain a human endeavour
The environmental impact of AI must be considered in how it is developed and used