TY  - JOUR
T1  - PEFT Methods for Domain Adaptation
AU  - Jin, Lee You
AU  - Koo, Yoon Kyung
AU  - Dam, Chung Woo
JO  - The Transactions of the Korea Information Processing Society
PY  - 2025
DA  - 2025/1/30
DO  - 10.3745/TKIPS.2025.14.4.239
KW  - Parameter-Efficient Fine-tuning
KW  - Domain Adaptation
KW  - Deep Learning
KW  - Large Language Model
KW  - LoRA
KW  - MoRA
AB  - This study identifies the incorporation of domain-specific knowledge into models as the biggest obstacle to deploying Large Language Models (LLMs) in industrial settings. To mitigate this issue, the study compared model performance when domain knowledge was additionally trained using two PEFT methods: LoRA, the most widely used, and MoRA, which enables the model to learn more knowledge. In addition, training time was reduced by securing high-quality data and loading it efficiently. The findings provide practical guidelines for developing efficient domain-specific language models with limited computing resources.
ER  - 