Abstract
This paper summarises our team's submissions to the TSAR 2025 Shared Task on Readability-Controlled Text Simplification, which aims to produce simplifications that balance reduced linguistic complexity, meaning preservation, and fluency while meeting a predefined target readability level. We propose two methods for CEFR-controlled text simplification: a setup that employs reinforcement fine-tuning of large language models (LLMs), and a conservative lexical pipeline that relies on prompting LLMs to simplify sentences.