Fine-tuning large language models (LLMs) on niche text corpora has emerged as a crucial step in adapting them to specialized research tasks. This study investigates fine-tuning approaches for LLMs applied to technical text. We evaluate the impact of factors such as training data, model architecture, and hyperparameter tuning,