%0 Journal Article %T The Use of Large Language Models in Foreign Language Education %A Anna Shcherbakova %A Andrey Shcherbakov %J Open Access Library Journal %V 13 %N 4 %P 1-8 %@ 2333-9721 %D 2026 %I Open Access Library %R 10.4236/oalib.1115245 %X Large language models (LLMs), such as GPT-based systems, are transforming foreign language education by enabling personalized practice, instant feedback, and immersive interactions beyond traditional computer-assisted language learning (CALL) tools. The article outlines LLMs' advantages over rule-based systems, analyzes applications across listening, speaking, reading, and writing skills, and addresses risks like overreliance and bias. It emphasizes pedagogy-aligned integration to maximize benefits while mitigating ethical concerns. For listening and pronunciation, LLMs generate customized audio scripts via text-to-speech, improving comprehension but relying on quality TTS integration. In speaking, they simulate dialogues to build fluency and reduce anxiety, though they lack real-time pragmatics. Reading benefits from adaptive texts and glosses for vocabulary building, while writing leverages drafting and revision feedback, enhancing efficiency in large classes. Cognitive risks include shallow learning and assessment disruption; biases from training data affect cultural representation; privacy issues arise from data usage. Design principles advocate framing LLMs as supplements, embedding metacognitive scaffolds, clear guidelines, and teacher training. Learner attitudes are positive, especially for out-of-class practice, but effects vary by proficiency. LLMs hold promise for scalable language learning when critically structured, calling for longitudinal research on long-term competence.
%K Large Language Model %K Artificial Intelligence %K Computer-Assisted Language Learning (CALL) %U http://www.oalib.com/paper/6893845