The International Monetary Fund (IMF) has expanded its online learning program, offering over 100 Massive Open Online Courses (MOOCs) to support economic and financial policymaking worldwide. This paper explores the application of Artificial Intelligence (AI), specifically Large Language Models (LLMs), to the analysis of qualitative feedback from participants in these courses. By fine-tuning a pre-trained LLM on expert-annotated text data, we develop models that efficiently classify open-ended survey responses with accuracy comparable to that of human coders. The models’ robust performance across multiple languages, including English, French, and Spanish, demonstrates their versatility. Key insights from the analysis include a preference for shorter, modular content, with notable variation across genders, and the significant impact of language barriers on learning outcomes. These and other findings from unstructured learner feedback inform the continuous improvement of the IMF’s online courses, aligning with its capacity development goals of enhancing economic and financial expertise globally.
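To make the fine-tuning step concrete, the sketch below shows one plausible setup: adapting a multilingual pre-trained encoder for survey-response classification with the Hugging Face transformers library. The model name (xlm-roberta-base), the label set, and the example responses are illustrative assumptions, not the paper's actual configuration or data.

```python
# Minimal sketch of fine-tuning a multilingual pre-trained model to classify
# open-ended survey responses. Model, labels, and examples are hypothetical.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "xlm-roberta-base"  # multilingual encoder (assumed choice)
LABELS = ["content_length", "language_barrier", "other"]  # hypothetical codes

# Toy expert-annotated responses in English, French, and Spanish
# (placeholders for illustration only, not the paper's survey data).
examples = {
    "text": [
        "The modules were too long; shorter videos would help.",
        "Les sous-titres en français faciliteraient la compréhension.",
        "El contenido fue útil, pero el idioma fue una barrera.",
    ],
    "label": [0, 1, 1],
}

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=len(LABELS))

def tokenize(batch):
    # Truncate/pad free-text responses to a fixed length for batching.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

train_dataset = Dataset.from_dict(examples).map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="survey-classifier",
                           num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=train_dataset,
)
trainer.train()
```

Because the pre-trained encoder is already multilingual, a single fine-tuned classifier can, in principle, label responses in English, French, and Spanish without training separate per-language models, which is one way the cross-language performance described above could be achieved.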