A Survey on the Latest Research Trends in Retrieval-Augmented Generation 


Vol. 13, No. 9, pp. 429-436, Sep. 2024
https://doi.org/10.3745/TKIPS.2024.13.9.429


  Abstract

As Large Language Models (LLMs) continue to advance, effectively harnessing their potential has become increasingly important. Trained on vast datasets, LLMs can generate text across a wide range of topics, making them useful in applications such as content creation, machine translation, and chatbots. However, they often struggle to generalize when specific or specialized knowledge is required, and updating them with the latest information after training remains a significant hurdle. To address these issues, Retrieval-Augmented Generation (RAG) models have been introduced. These models enhance response generation by retrieving information from continuously updated external databases, thereby reducing the hallucination phenomenon often seen in LLMs while improving efficiency and accuracy. This paper presents the foundational architecture of RAG, reviews recent research trends aimed at enhancing the retrieval capabilities of LLMs through RAG, and discusses evaluation techniques. It also explores performance optimization and real-world applications of RAG across various industries. Through this analysis, the paper aims to propose future research directions for the continued development of RAG models.
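The retrieve-then-augment loop described above can be sketched as follows. This is a hedged toy illustration only: a bag-of-words similarity stands in for a learned embedding model, an in-memory list stands in for the external database, and a prompt template stands in for the generator LLM. None of these names or choices come from the surveyed systems.

```python
# Minimal retrieve-then-generate sketch of the RAG pattern (illustrative;
# real systems use a trained retriever, a vector index, and an LLM).
from collections import Counter
import math

def embed(text):
    """Toy 'embedding': lowercase term-frequency counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, corpus, k=1):
    """Return the k corpus documents most similar to the query."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, corpus):
    """Augment the user query with retrieved context before generation."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# The 'external database' here is just a list of strings.
corpus = [
    "RAG retrieves documents from an external database at inference time.",
    "Transformers use self-attention over token sequences.",
]
prompt = build_prompt("How does RAG use an external database?", corpus)
print(prompt)
```

Because the context is fetched at inference time rather than baked into model weights, updating the knowledge base requires only editing `corpus` (or its real-world counterpart, a document index), which is the property the abstract highlights for reducing stale answers and hallucination.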

  Cite this article

[IEEE Style]

E. Lee and H. Bae, "A Survey on the Latest Research Trends in Retrieval-Augmented Generation," The Transactions of the Korea Information Processing Society, vol. 13, no. 9, pp. 429-436, 2024. DOI: https://doi.org/10.3745/TKIPS.2024.13.9.429.

[ACM Style]

Eunbin Lee and Ho Bae. 2024. A Survey on the Latest Research Trends in Retrieval-Augmented Generation. The Transactions of the Korea Information Processing Society, 13, 9, (2024), 429-436. DOI: https://doi.org/10.3745/TKIPS.2024.13.9.429.