PEFT Methods for Domain Adaptation 


Vol. 14, No. 4, pp. 239-247, Apr. 2025
https://doi.org/10.3745/TKIPS.2025.14.4.239


  Abstract

This study identifies incorporating domain-specific knowledge into the model as the biggest obstacle to deploying Large Language Models (LLMs) in industrial settings. To mitigate this issue, the study compared model performance when domain knowledge was additionally trained with MoRA, whose high-rank updates allow the model to absorb more new knowledge, and LoRA, the most widely used of the various PEFT methods. In addition, training time was reduced by securing high-quality data and loading it efficiently. The findings provide practical guidelines for developing efficient domain-specific language models with limited computing resources.
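To illustrate the contrast between the two adapters, the following is a minimal PyTorch sketch, not the authors' implementation: LoRA adds a rank-r update B·A to a frozen weight, while MoRA trains a single square r̂×r̂ matrix applied after compressing the input. The reshape-based compression shown here is one of the schemes discussed in the MoRA paper; the class names and hyperparameters (r, alpha, r_hat) are illustrative assumptions.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen base layer plus a low-rank update: h = W0 x + (alpha/r) * B A x."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                               # train only the adapter
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no-op at start
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * ((x @ self.A.T) @ self.B.T)


class MoRALinear(nn.Module):
    """Frozen base layer plus one square r_hat x r_hat matrix M, applied
    chunk-wise after reshaping the input (one of the compression schemes
    in the MoRA paper). Assumes a square layer (d_in == d_out)."""

    def __init__(self, base: nn.Linear, r_hat: int = 256):
        super().__init__()
        assert base.in_features == base.out_features
        assert base.in_features % r_hat == 0
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False
        self.r_hat = r_hat
        self.M = nn.Parameter(torch.zeros(r_hat, r_hat))          # zero init: no-op at start

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # compress: fold the feature dimension into chunks of size r_hat
        chunks = x.reshape(*x.shape[:-1], -1, self.r_hat)
        # high-rank update: the same square matrix acts on every chunk
        delta = (chunks @ self.M.T).reshape(*x.shape[:-1], -1)
        # decompress is just the inverse reshape, so delta already matches d_out
        return self.base(x) + delta


# Usage: wrap a square projection layer and train only the adapter parameters.
layer = nn.Linear(1024, 1024)
lora = LoRALinear(layer, r=8)        # 2 * 8 * 1024 = 16,384 trainable parameters
mora = MoRALinear(layer, r_hat=128)  # 128 * 128 = 16,384: same budget, higher rank
x = torch.randn(4, 1024)
print(lora(x).shape, mora(x).shape)  # torch.Size([4, 1024]) for both
```

Note the trade-off the sketch makes visible: at an equal parameter budget, LoRA's update has rank at most r (here 8), while MoRA's update can reach rank r̂ (here 128), which is the property the study relies on for storing more domain knowledge.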

  Cite this article

[IEEE Style]

Y. J. Lee, K. K. Yoon, and W. D. Chung, "PEFT Methods for Domain Adaptation," The Transactions of the Korea Information Processing Society, vol. 14, no. 4, pp. 239-247, Apr. 2025. DOI: https://doi.org/10.3745/TKIPS.2025.14.4.239.

[ACM Style]

Lee You Jin, Yoon Kyung Koo, and Chung Woo Dam. 2025. PEFT Methods for Domain Adaptation. The Transactions of the Korea Information Processing Society, 14, 4 (2025), 239-247. DOI: https://doi.org/10.3745/TKIPS.2025.14.4.239.