Abstract
In recent years, considerable research has been devoted to applying neural
models to natural language generation (NLG). The central objective is to
generate text that is linguistically natural and human-like while retaining
control over the generation process. This paper
presents a comprehensive, task-agnostic survey of recent advances in neural
text generation, which we categorize into four key areas: data construction,
neural frameworks, training and inference strategies, and evaluation metrics.
Examining these aspects together yields a holistic overview of the progress
made in the field. Furthermore, we explore
future directions for neural text generation, including neural pipelines and
the incorporation of background knowledge, which offer promising opportunities
to further enhance NLG systems. Overall, this survey consolidates the current
state of the art in neural text generation and highlights potential avenues
for future research and development.