SQUREL at TSAR 2025 Shared Task: CEFR-Controlled Text Simplification with Prompting and Reinforcement Fine-Tuning
Conference proceeding   Open access   Peer reviewed


Daria Sokova, Anastasiia Bezobrazova and Constantin Orasan
Proceedings of the Fourth Workshop on Text Simplification, Accessibility and Readability (TSAR 2025), pp. 242–250
The Fourth Workshop on Text Simplification, Accessibility and Readability, held at the 2025 Conference on Empirical Methods in Natural Language Processing (Suzhou, China, 4–9 November 2025)
November 2025

Abstract

Keywords: Text simplification, Natural Language Processing, Large Language Models, Reinforcement Learning
This paper summarises our team's submissions to the TSAR 2025 Shared Task on Readability-Controlled Text Simplification, which aims to produce simplifications that balance reduced linguistic complexity, meaning preservation, and fluency while meeting a predefined target readability level. We propose two methods for CEFR-controlled text simplification: a setup that employs reinforcement fine-tuning of large language models (LLMs), and a conservative lexical pipeline that relies on prompting LLMs to simplify sentences.
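The prompting-based approach described in the abstract can be illustrated with a minimal sketch. This is not the authors' actual pipeline: the prompt wording, the function name `build_simplification_prompt`, and the validation logic are all illustrative assumptions about how a sentence and a target CEFR level might be turned into an LLM instruction.

```python
# Minimal sketch of prompt construction for CEFR-controlled simplification.
# The prompt text and level handling are assumptions, not the paper's method.

CEFR_LEVELS = ("A1", "A2", "B1", "B2", "C1", "C2")

def build_simplification_prompt(sentence: str, target_level: str) -> str:
    """Build an instruction prompt asking an LLM to rewrite a sentence
    at a given CEFR readability level while preserving its meaning."""
    if target_level not in CEFR_LEVELS:
        raise ValueError(f"Unknown CEFR level: {target_level}")
    return (
        f"Rewrite the following sentence so that it is readable at CEFR "
        f"level {target_level}. Preserve the original meaning and keep "
        f"the text fluent.\n\n"
        f"Sentence: {sentence}\n"
        f"Simplified:"
    )
```

The returned string would then be sent to an LLM of choice; a conservative pipeline like the one the abstract mentions would typically also post-check the output (e.g. rejecting rewrites that drift from the source meaning) before accepting it.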
PDF: 2025.tsar-1.21 (169.69 kB), Author's Accepted Manuscript, CC BY 4.0, Open Access
Conference website: https://2025.emnlp.org/
Workshop website: https://tsar-workshop.github.io/

