Language Modeling and Understanding Through Paraphrase Generation and Detection
Doctoral thesis
Date of Examination: 2025-11-28
Date of issue: 2025-12-18
Advisor: Prof. Dr. Bela Gipp
Referee: Prof. Dr. Bela Gipp
Referee: Prof. Dr. Tobias Meisen
Referee: Prof. Dr. Lisa Beinborn
Files in this item
Name: Dissertation_Jan_Philip_Wahle.pdf
Size: 12.2 MB
Format: PDF
Abstract
Language enables humans to share knowledge, reason about the world, and pass on strategies for survival and innovation across generations. At the heart of this process is not just the ability to communicate but also the remarkable flexibility in how we express ourselves. We can convey the same thought in virtually infinite ways using different words and structures; this ability to rephrase and reformulate expressions is known as paraphrasing. Modeling paraphrases is a keystone of meaning in computational language models: constructing variations of a text that either preserve or alter its meaning demonstrates strong semantic understanding. If computational language models are to represent meaning, they must understand and control, at a fine granularity, the aspects that preserve meaning as opposed to those that change it. Yet most existing approaches reduce paraphrasing to a binary decision between two texts or to producing a single rewrite of a source, obscuring which linguistic factors are responsible for meaning preservation. In this thesis, I propose that decomposing paraphrases into their constituent linguistic aspects (paraphrase types) offers a more fine-grained and cognitively grounded view of semantic equivalence. I show that even advanced machine learning models struggle with this task. Yet, when explicitly trained on paraphrase types, models achieve stronger performance on related paraphrase tasks and downstream applications. For example, in plagiarism detection, language models trained on paraphrase types surpass human baselines: 89.6% accuracy compared to 78.4% on plagiarism cases from Wikipedia, and 66.5% compared to 55.7% on plagiarized scientific papers from arXiv. In identifying duplicate questions on Quora, models trained with paraphrase types improve over models trained on binary pairs.
Furthermore, I demonstrate that these models can act as prompt engineers, reformulating instructions to boost performance across tasks, yielding average gains of 6.4% in title generation, 6.0% in text completion, and 6.3% in summarization. These results reveal that learning paraphrase types not only strengthens paraphrase understanding but also generalizes to plagiarism detection, authorship verification, commonsense reasoning, and prompt optimization. Beyond these applications, paraphrase-aware models hold the potential to improve semantic understanding in other areas, such as summarization and semantic evaluation more broadly. I conclude that decomposing paraphrases into specific linguistic transformations provides a path toward more robust and semantically grounded language models. This work offers a foundation for training models that can represent meaning beyond surface-level patterns.
Keywords: Natural Language Processing; Deep Learning; Text Generation; Paraphrase Generation
