|
|
{ |
|
|
"File Number": "1049", |
|
|
"Title": "DSLR: Document Refinement with Sentence-Level Re-ranking and Reconstruction to Enhance Retrieval-Augmented Generation", |
|
|
"Limitation": "While our DSLR shows significant improvements in RAG performance, it is important to recognize that there is still room for further improvement. First, although we aim to preserve the original contextual integrity with the reconstruction step, there is a risk of unintentionally removing important sentences that might contain query-relevant information. We believe that developing more advanced re-ranking models to more accurately capture relevance scores could address this, which we leave as valuable future work. Second, since DSLR aims to refine the set of retrieved documents, there might be a bottleneck stemming from the initial retrieval step; the overall performance can be negatively affected by incorrectly retrieved documents. Therefore, future work may focus on developing a more precise retrieval module. Since the DSLR framework is composed of off-the-shelf modules, we believe that its overall performance will improve concurrently with the development of these modules.", |
|
|
"abstractText": "Recent advancements in Large Language Models (LLMs) have significantly improved their performance across various Natural Language Processing (NLP) tasks. However, LLMs still struggle with generating non-factual responses due to limitations in their parametric memory. Retrieval-Augmented Generation (RAG) systems address this issue by incorporating external knowledge with a retrieval module. Despite their successes, however, current RAG systems face challenges with retrieval failures and the limited ability of LLMs to filter out irrelevant information. Therefore, in this work, we propose DSLR (Document Refinement with Sentence-Level Re-ranking and Reconstruction), an unsupervised framework that decomposes retrieved documents into sentences, filters out irrelevant sentences, and reconstructs them again into coherent passages. We experimentally validate DSLR on multiple opendomain QA datasets and the results demonstrate that DSLR significantly enhances the RAG performance over conventional fixed-size passage. Furthermore, our DSLR enhances performance in specific, yet realistic scenarios without the need for additional training, providing an effective and efficient solution for refining retrieved documents in RAG systems.", |
|
|
"1 Introduction": "Recent advancements in Large Language Models (LLMs) (Brown et al., 2020; OpenAI, 2023b; Touvron et al., 2023) have significantly expanded their capabilities across diverse knowledge-intensive tasks in Natural Language Processing (NLP), such as Question Answering (QA) (Kwiatkowski et al., 2019; Joshi et al., 2017; Rajpurkar et al., 2016). However, despite these capabilities, LLMs still face challenges such as generating plausible yet non-factual responses, known as hallucination, due to their reliance on limited parametric memory\n* Corresponding author\n(Mallen et al., 2023). Also, it is noted that this parametric memory is static, as LLMs can learn knowledge only up to the specific date on which the training was completed. Therefore, these limitations restrict their adaptability to long-tailed or ever-evolving domains (Kasai et al., 2023) and to unseen knowledge outside their training data (Baek et al., 2023).\nRetrieval-Augmented Generation (RAG) (Khandelwal et al., 2020; Lewis et al., 2020; Borgeaud et al., 2022; Shi et al., 2023b) has been introduced as an effective solution to address such problems. Specifically, RAG enhances LLMs by integrating non-parametric memories fetched from external knowledge bases using a retrieval module, which helps LLMs’ responses grounded on factual evidence and makes them more up-to-date.\nWhile the efficacy of RAG depends on the performance of the retrieval module, the instability of LLMs in incorporating the retrieved knowledge is also a critical challenge to RAG. To be specific, retrieved documents sometimes contain irrelevant information (Cho et al., 2023), and LLMs often struggle to effectively filter out such redundant details and focus on the most query-relevant knowledge (Shi et al., 2023a; Li et al., 2023; Liu et al., 2023; Wu et al., 2024), which leads to the failure of the overall RAG systems. Therefore, it is crucial to investigate how to effectively refine retrieved documents before augmenting them with LLMs, ensuring that the LLMs are not distracted by irrelevant information within retrieved documents.\nRe-ranking the order of the retrieved document set (Nogueira et al., 2020; Qin et al., 2023a) or refining them into new documents (Wang et al., 2023; Xu et al., 2024) can be considered as solutions. However, they generally require high computational costs for training additional re-ranking or refining models. Another proposed solution is to reduce the retrieval granularity from passage-level to sentence-level which can help eliminate redun-\n73\ndant information within passages (Lee et al., 2021a; Chen et al., 2023). However, this might also inadvertently remove important contextual information, which is crucial for accurately answering the given queries (Choi et al., 2021). Therefore, we should explore a novel method that can effectively and efficiently filter out irrelevant information while maintaining the necessary contextual details.\nIn this work, we introduce an unsupervised DSLR (Document Refinement with SentenceLevel Re-ranking and Reconstruction) framework that consists of three steps: 1) decomposition, 2) reranking, and 3) reconstruction. Specifically, after retrieving the passage-level document, the DSLR framework operates by first decomposing the retrieved document into sentences for finer granularity and then filtering out the irrelevant sentences based on their re-ranking scores from the ranking models, including off-the-shelf retrievers and re-rankers. 
Finally, the remaining sentences are reconstructed into a single document to preserve the original contextual information. Note that DSLR is an unsupervised refinement framework, which does not require any additional training for the re-ranking or reconstruction steps. The overall DSLR framework is illustrated in Figure 1.\nWe validate our framework across a diverse range of open-domain QA benchmarks, which include three general QA datasets and three specific QA datasets that require domain-specific or ever-evolving knowledge. Our experimental results show that DSLR significantly enhances the overall RAG performance and is comparable to, or even outperforms, the supervised baseline approaches. Specifically, when evaluated with specific QA datasets, DSLR shows high robustness in realistic settings. Furthermore, a detailed analysis demonstrates the effectiveness of each proposed step and how it contributes to the overall performance.\nOur contributions in this work are threefold:\n• We point out that recent RAG systems are largely vulnerable to redundant knowledge within fixed-size passage-level retrieved documents and that the existing refining strategies generally require additional training steps.\n• We propose the DSLR framework, which incorporates sentence-level re-ranking and reconstruction to effectively remove redundant knowledge that negatively affects the RAG system.\n• We show that DSLR is highly effective and efficient even without additional training steps in both general and specific scenarios.", |
|
|
"2 Related Work": "Information Retrieval. Information Retrieval (IR) is the task of searching for query-relevant documents from a large corpus (Ponte and Croft, 1998), which has been widely applied for both search systems and various NLP tasks such as open-domain QA (Petroni et al., 2021). IR models can be categorized into sparse retrievers (Salton and Buckley, 1988; Robertson and Zaragoza, 2009), which use lexical metrics to calculate relevance scores between queries and documents, and dense retrievers (Karpukhin et al., 2020; Izacard et al., 2022), which embed queries and documents into a dense space that captures semantic relationships but requires significant computational resources (Jeong et al., 2022).\nIn order to further enhance retrieval performance, additional strategies have been proposed. Specifically, the re-ranking strategy improves retrieval per-\nformance by recalculating relevance scores using an additional re-ranking model (Nogueira and Cho, 2019; Nogueira et al., 2020; Zhuang et al., 2023), and then reordering the documents based on these scores. Recently, LLMs have shown remarkable re-ranking performance by generating relevance labels without requiring further fine-tuning (Liang et al., 2022; Qin et al., 2023b).\nWhile the aforementioned work on IR (Wang et al., 2019; Karpukhin et al., 2020) generally assumes fixed-size, 100-word passages as the document length, some work has explored an optimal level of retrieval granularity (Seo et al., 2019; Lee et al., 2021a; Jeong et al., 2023; Chen et al., 2023). These approaches validate that a finegrained level of granularity, containing only the knowledge needed to answer the query, can enhance the overall performance by excluding redundant details in the lengthy retrieved documents. However, reducing retrieval granularity to the sentence level can disrupt the original context and result in a loss of the document’s coherence (Choi et al., 2021). In addition, sentence-level retrieval generally requires a much larger index size compared to passage-level retrieval (Lee et al., 2021b). By contrast, we investigate a novel framework for effectively re-ranking sentences within retrieved passage-level documents and then reconstructing the re-ranked sentences to preserve contextual integrity.\nRetrieval-Augmented Generation. RAG has emerged as a promising solution for addressing LLMs’ hallucination issues by leveraging external knowledge fetched by the retrieval module. Specifically, RAG incorporates retrieval modules that reduce the need to update the parameters of LLMs and help them generate accurate and reliable responses (Khandelwal et al., 2020; Lewis et al., 2020; Borgeaud et al., 2022; Shi et al., 2023b). Additionally, various real-world applications integrate RAG as a core component when deploying LLMbased services (OpenAI, 2023a; Chase, 2022; Qin et al., 2024). However, they still have limitations due to the imperfections of the retrieval module within RAG, where the retrieved documents containing query-irrelevant information can negatively lead the LLMs to generate inaccurate answers.\nTo address them, several studies have attempted to leverage the capabilities of LLMs to enhance their resilience against irrelevant knowledge. 
These approaches include crafting specialized prompts (Press et al., 2023; Cho et al., 2023), training plug-in knowledge verification models (Baek et al., 2023), adaptively retrieving the required knowledge (Jeong et al., 2024; Asai et al., 2024; Yu et al., 2023b), and augmenting knowledge using the capabilities of the LLM itself (Yu et al., 2023a). Among the promising solutions, recent studies show that further refining the retrieved documents into fine-grained knowledge can improve RAG performance (Xu et al., 2024; Wang et al., 2024, 2023; Jin et al., 2024). However, such refinement strategies generally require additional fine-tuning on a specific dataset, which might result in limited generalizability and high computational cost. By contrast, our proposed refinement framework removes irrelevant information with unsupervised sentence-level re-ranking and reconstruction steps by using off-the-shelf ranking models without requiring additional training costs.", |
|
|
"3 Method": "In this section, we describe a novel framework DSLR for enhancing the precision of retrieval results through sentence-level ranking and reconstruction, integrated into the RAG system. Note that DSLR does not require additional training.", |
|
|
"3.1 Preliminaries": "We first introduce the general RAG system, which consists of three steps: the retrieval step, the reranking step, and the generation step. Note that all\nsteps focus on passage-level documents.", |
|
|
"3.1.1 Retrieval Step": "The retrieval step searches for a potentially relevant document set D to the given query q from a retrieval corpus C consisting of millions of documents. This retrieval step is conventionally performed using a sparse retriever S, such as BM25, which is widely used for processing large corpora due to its low latency. The sparse retriever S fetches the relevant documents having high relevant scores based on lexical values such as document length or unique word count. Formally, we define the retrieval step as:\nD = Retrieve(q, C;S) = {d1, d2, ..., dn}\nwhere dk represents a document having the topk score among the retrieval corpus C for a given query q, and n denotes the size of D, generally ranging from tens to hundreds.", |
|
|
"3.1.2 Re-ranking Step": "While the sparse retriever S can efficiently handle a large corpus, it cannot consider semantic similarities, thereby limiting its retrieval performance for lexically different but semantically relevant pairs. To address this, the re-ranking step aims for more precise retrieval results by reordering the retrieved document set D using the ranking model R. This model transforms D into a newly ordered document set D′ based on relevance scores with a query q, capturing semantic meanings that could not be addressed in the retrieval step with S. Formally, we define the re-ranking step as:\nD′ = Re-rank(q,D;R) = {d′1, . . . , d′m}\nwhere d′k represents the document that has top-k relevance score among D and m ≪ n, indicating that the subset D′ contains significantly fewer documents than the original set D.", |
|
|
"3.1.3 Generation Step": "After the re-ranking step, the document set D′ is augmented to the LLM M with the supporting documents to generate the correct answer a for the given query q. The generation step can be formalized as:\na = Generate(q,D′;M) In RAG systems, the three key steps are designed to retrieve the most query-relevant knowledge for LLMs, typically at the passage level. However, this\nfixed granularity can overlook finer relevance between queries and individual sentences. Therefore, in this work, we introduce a fine-grained, sentencelevel ranking strategy in the re-ranking step, aiming to reduce distractions from irrelevant information and enhance answer accuracy.", |
|
|
"3.2 Document Refinement with Sentence-Level Re-ranking and Reconstruction (DSLR)": "We propose a novel unsupervised refinement framework, Document Refinement with Sentence-Level Re-ranking and Reconstruction (DSLR), designed to assess the fine-grained relevance of individual sentences within a passage and reconstruct to preserve the original contextual coherence. Figure 2 illustrates examples generated by each step in our DSLR framework.", |
|
|
"3.2.1 Sentence Decomposition and Re-ranking": "After the retrieval step (§3.1.1), we conduct sentence-level re-ranking for the documents within the retrieved set D. First, each document di ∈ D is decomposed into a sentence set Si = {sj}lj=1, where sj represents the j-th sentence in document di and l is the number of sentences in di. Then, the passage-level retrieved set D is redefined to the sentence-level retrieved set S = ∪ni=1Si. For instance, as illustrated in Figure 2, a passage retrieved for a query “How many episodes in \"Grace and Frankie\" Season 1?\" is decomposed into three sentences s1, s2, and s3 during the sentence decomposition step.\nTo extract sentences containing relevant information for a query q, we initially perform re-ranking to assess relevance scores at the sentence level. Sentences in S with scores below a predefined threshold T are deemed irrelevant and removed, resulting in a refined set S ′. The sentence-level re-ranking is formally defined as follows:\nS ′ = Re-rank(q,S;R) = {s′1, . . . , s′m} where each s′k is a sentence from S whose relevance score exceeds T . Figure 2 demonstrates the reordering of sentences, highlighting the exclusion of s′3 due to its insufficient relevance score. Note that this step of the DSLR framework utilizes offthe-shelf ranking models, which are identical to those used in passage-level re-ranking.", |
|
|
"3.2.2 Contextual Reconstruction": "While the sentence decomposition and re-ranking steps select the top-m relevant sentences for the\nquery q, these sentences may lack contextual relationships to one another, as these steps can disrupt the original contextual flow of the passage by discarding some sentences. Instead of following a widely used approach of simply concatenating these sentences in descending order of their relevance scores, we propose to reconstruct them into the contextually organized set, S∗, to reflect the order in which they were originally positioned before being decomposed from passages, ensuring the original coherence and logical flow:\nS∗ = Reconstruction(S ′,S) = {s∗1, . . . , s∗m}\nwhere s∗i is the sentence included in S ′ and i denotes the relative position of s∗i within S . As shown in Figure 2, the remaining two sentences are reconstructed in their original order by switching their positions to preserve the context before the sentence re-ranking step. Then, LLM M generates the answer a for a given query q with S∗ formalized as: a = Generate(q,S∗;M).", |
|
|
"4 Experiment Setups": "In this section, we describe the experimental setup for evaluating DSLR across various scenarios. We provide additional details in Appendix A.", |
|
|
"4.1 Models": "Retriever. We use BM25 (Robertson and Zaragoza, 2009) as a passage-level retriever, which is a widely used sparse retriever due to its notable performance with high efficiency. The retriever fetches the top1 passage-level query-relevant document from an external corpus, which serves as the baseline document. Re-ranker. We operationalize a variety of ranking models as re-rankers, including off-the-shelf retrievers, fine-tuned re-rankers, and LLMs. 1) Sparse Retriever: We use BM25 (Robertson and Zaragoza, 2009) as a sentence-level re-ranker. Note that BM25 is only applied at the sentence level, as it is primarily utilized in the retrieval step. 2) Dense Retriever: We utilize two representative dense retrievers, Contriever (Izacard et al., 2022) and DPR (Karpukhin et al., 2020), which are better at capturing the semantic similarity between documents and queries than sparse retrievers. 3) Supervised Re-ranker1: We employ two\n1It is important to note that the terms ‘supervised’ and ‘unsupervised’ in this context refer to the models being trained on document ranking tasks, and not on document refinement tasks.\nsupervised re-ranking models based on T5 (Raffel et al., 2020), MonoT5 (Nogueira et al., 2020) and RankT5 (Zhuang et al., 2023). These models are specifically trained for pointwise document ranking tasks. 4) Unsupervised Re-ranker1: We explore Relevance Generation (RG) (Liang et al., 2022), a pointwise ranking method using the inherent ranking ability of LLMs, validating its effectiveness in scenarios lacking extensive labeled data. We use LLama2-13b-chat (Touvron et al., 2023) as a ranking model for RG. Reader. We use the instruction-tuned, open-source LLM LLama2-13b-chat as our reader. To generate the final answer, the document is prepended to the system prompt.", |
|
|
"4.2 Datasets": "We evaluate our DSLR across 6 open-domain QA datasets, including both general and specific domains. First, we conduct our experiment using the development set of Natural Questions (NQ) (Kwiatkowski et al., 2019), TriviaQA (TQA) (Joshi et al., 2017), and SQuAD (SQD) (Rajpurkar et al., 2016), consisting of queries with general topics. Additionally, we incorporate specialized datasets such as RealtimeQA (RQA) (Kasai et al., 2023), SciQ (SQ) (Welbl et al., 2017), and BioASQ (BASQ) (Tsatsaronis et al., 2015; Krithara et al., 2023) for evaluating the generalizability of our proposed method. In detail, RQA includes questions that are updated periodically to test our system’s ability to handle ever-evolving knowledge. In addition, SQ and BASQ are domainspecific datasets in science and biology, respectively. Specifically, for BASQ, we selectively use the questions from the BioASQ6 challenge (task b) that are suitable for yes/no and factoid responses. We report the effectiveness of our framework with Accuracy (Acc), which determines whether the prediction contains golden answers, following Asai et al. (2024).", |
|
|
"4.3 Implementation Details": "The threshold T , used to remove irrelevant content, was determined empirically by sampling 1,000 random entries from each of the NQ, TQA, and SQD training sets and setting T to the relevance score at the 90th percentile. Detailed values of T for various models are provided in Table 5. The retrieval corpus for NQ, TQA, and SQD is a preprocessed Wikipedia dump from Dec. 20, 2018 following Karpukhin et al. (2020), and for BASQ\nand RQA, we use their own retrieval corpora. To be specific, BASQ used the BEIR (v1.0.0) 2 BioASQ corpus, specializing in biomedical information retrieval. For the RQA dataset, spanning from 2022 to 2023, we use the search documents provided at the time of dataset creation through the Google Cloud Search (GCS) API to align the periods of the queries and answers. When implementing each component in DSLR, we decompose passage-level documents into sentences using the Sentencizer from Spacy3. All predictions in our experiments are generated via greedy decoding.", |
|
|
"5 Experimental Results and Analyses": "In this section, we show the overall experimental results with in-depth analyses of our framework.\nMain Results. First of all, Table 1 shows that our DSLR-refined top-1 document consistently outperforms the original top-1 document across all datasets and scenarios, despite reduced token counts. This confirms our hypothesis that the redundant information within the fix-sized passages adversely affects the RAG performance and highlights the importance of providing only query-\n2https://github.com/beir-cellar/beir 3https://spacy.io/\nrelevant information in RAG with finer-grained sentences.\nFurthermore, DSLR also shows performance enhancement over specialized datasets, such as ever-evolving RQA and domain-specific SQ and BASQ datasets. Specifically, the re-rankers based on pre-trained models such as T5 and the LLM demonstrate remarkable performance improvement. Given that DSLR requires no additional training, the robust and effective performance suggests its applicability to diverse real-world scenarios, particularly where queries frequently change across different timelines and domains.\nDSLR in Multiple Passages. To assess the effectiveness and efficiency of DSLR in multiple passages, we gradually increased the number of documents N and compared the performance, token count, and end-to-end (E2E) latency4 of the original top-N documents with those refined by DSLR.\nAs shown in the left panel of Figure 3, both sets of documents show consistent performance improvements as N increases. However, DSLR consistently outperforms the original documents across all N levels, with more notable differences\n4These experiments were conducted using four V100 GPUs.\nat lower N values. This suggests that DSLR can significantly enhance performance in RAG, even as the number of documents increases.\nDue to the quadratic increase in memory and time requirements with the number of tokens in transformer-based LLMs, reducing the token count is crucial for improving efficiency (Vaswani et al., 2017). As depicted in the center and right panels of Figure 3, DSLR substantially reduces the token count compared to the original documents, with the difference becoming more significant as N increases. This reduction in tokens also decreases E2E latency in all scenarios except top-1. Notably, at top-10, while the performance difference is minimal (39.6 vs. 39.7), the token count reduction from 1,713 to 577 (nearly 2.97 times) and the corresponding E2E latency reduction from 7.382 seconds to 5.422 seconds (nearly 2 seconds) demonstrate that DSLR can enhance both performance and efficiency in RAG. Detailed results are available in Table 14.\nImpact of Threshold Adjustment. To examine the impact of varying T , we adjusted the threshold in increments of 10, starting from the 10th percentile, and measured the resulting performance. Additionally, to explore the theoretical maximum performance of our method, we configured an oracle setting where any correct response, regardless of the threshold setting, was counted as correct.\nAs shown in Figure 4, increasing the threshold T generally improves performance by removing irrelevant content, thus reducing the number of tokens. However, our experimental results revealed that the performance at the 90th percentile threshold was 29.4, while a lower 80th percentile threshold yielded better performance at 29.9. 
This indicates that an overly stringent threshold can also remove essential information, suggesting that task-specific threshold fine-tuning could improve results.\nFurthermore, in the oracle setting, accuracy significantly improved to 34.1, and the token count was reduced to 77. This shows a marked performance improvement over the best performing threshold (80th percentile), with a similar reduction in tokens. This result implies that dynamically adjusting the threshold based on the query could achieve substantial performance improvements with a comparable number of tokens, suggesting an area for future work. Detailed results are available in Table 15.", |
|
|
"Token Distribution and Refinement Strategies.": "The left panel of Figure 5 displays the distribution of token counts in documents refined by DSLR. Unlike methods that trim passages to a fixed length, DSLR reduces token counts based on a relevance score threshold, resulting in a wide distribution of token counts, with many instances nearly devoid of external knowledge. The average token count postrefinement is 46. We analyzed performance by comparing this approach with cases where passages are consistently cut to 46 tokens: one where passages are simply truncated at 46 tokens, another using sentence-level re-ranking to select the most relevant sentences up to 46 tokens, and a third where sentences are randomly cut to 46 tokens.\nAs demonstrated in the right panel of Figure 5, DSLR, which trims content based on relevance, significantly outperforms methods that trim to a fixed length, improving scores from 25.3 to 33.7. This\nsuggests that trimming based on relevance score thresholds, rather than a fixed length, is more effective. This method accommodates the variability in the amount of relevant information per query, indicating that non-essential content should be dynamically removed.", |
|
|
"Effectiveness of Sentence-Level Re-ranking.": "To assess the effectiveness of sentence-level reranking within our framework, we compared it to conventional passage-level re-ranking using the same context length in RAG, under an initial top-100 retrieval setting. Figure 6 demonstrates that sentence-level re-ranking markedly outperforms passage-level re-ranking by enhancing performance through increased information density at a finer granularity. Additionally, while dense retrievers and fine-tuned ranking models demonstrate improvements as re-rankers, BM25 as a re-ranker significantly decreases the performance. This highlights the limitations of keyword-matching approaches for assessing low-granularity, sentencelevel relevance, underscoring the necessity for semantic understanding in sentence ranking tasks. Moreover, off-the-shelf ranking models, originally designed for passage-level relevance assessment, are also effective at determining relevance at the\nmore granular level of individual sentences. Interestingly, even though it is not specifically trained for ranking tasks, the unsupervised re-ranker using LLMs shows remarkable performance in sentencelevel re-ranking.\nAblation Studies on the Sentence-Level Reranking and Reconstruction Steps. To see how each step in DSLR contributes to the overall performance, we conduct the ablation studies, the results shown in Table 2, for the sentence-level re-ranking and reconstruction steps. These studies were uniquely tailored to the variable token counts reduced by DSLR, rather than using a fixed length.\nFirst, we examine the impact of removing the sentence-level re-ranking step. In this scenario, after initially retrieving the top-1 passage, the results are decomposed into sentences. Subsequently, these sentences are randomly used as sources for generating answers. The performance drastically drops from 33.7 to 30.6 on the NQ, highlighting the crucial role of sentence-level re-ranking, which helps effectively filter out query-irrelevant information based on relevance scores.\nFurthermore, we explore the effectiveness of the reconstruction step. The performance also drops from 64.1 to 63.8 on the TQA. This finding is similar to those from Choi et al. (2021), which suggests that removing contextual coherence negatively affects the performance. Therefore, in DSLR, reconstructing the order of sentences to reflect their original sequence within the retrieved passage is an essential step. Interestingly, the widely used approach of prepending external knowledge in descending order of relevance scores is not effective in our sentence-level refinement framework, showing similar results to a randomly ordered setting.\nin blue.\nComparative Analysis of Document Refining methods: Evaluating RECOMP and DSLR. We further compare our DSLR to the concurrent supervised refinement method, RECOMP (Xu et al., 2024), which requires additional training steps for refining the retrieved documents. To be specific, RECOMP is designed to refine the retrieved passages by either abstractively or extractively summarizing them with additional models. Note that due to significant differences between supervised and unsupervised schemes, directly comparing DSLR with RECOMP on an apples-to-apples basis is difficult. However, to ensure as fair a comparison as possible, we evaluate both refining methods under the same conditions by adopting a two-sentence extraction context length, following the extractive setting used for RECOMP. 
Additionally, RECOMP’s extractive compressor, which requires Contriever to be fine-tuned on specific datasets, shares similarities with our DSLR implementation that also uses Contriever, though ours is not additionally fine-tuned.\nFigure 7 shows the results of the comparison between DSLR and RECOMP in both in-domain and out-of-domain settings. While RECOMP shows robust performance on the in-domain datasets where it is particularly trained, its performance drops drastically in the out-of-domain settings, notably for BASQ from 54 to 47.9. This indicates the challenges of dataset-specific tuning for supervised refinement methods. On the other hand, our DSLR with RankT5 and RG shows robust performance even without additional training steps for refinement.\nCase Study. We conduct a case study of the DSLR framework in Table 3. Specifically, a conventional fixed-size passage may contain distractors, such as unrelated knowledge and irrelevant conceptual details about Nitrogen (highlighted in red). Note that, although the retrieved passage-level document includes ‘Oxygen’, which is the correct answer to the given query, the LLM used as the reader fails to generate the accurate answer by being distracted by irrelevant information. On the other hand, DSLR effectively filters out such query-irrelevant sentences. Furthermore, DSLR also helps focus on the information closely related to the query (highlighted in blue), thus correctly generating the answer.", |
|
|
"6 Conclusion": "In this work, we present DSLR, a novel unsupervised document refinement framework that enhances the performance of RAG systems. The DSLR framework aids RAG systems to generate more accurate answers by decomposing passages into sentences, re-ranking them based on each relevance score, and then reconstructing them to preserve the continuity and coherence of the context. Our comprehensive experiments on multiple QA datasets show that DSLR consistently outperforms the conventional approaches of using fixedsize passage in RAG, especially in ever-evolving and domain-specific contexts. Our ablation studies highlight the importance of sentence-level reranking and contextual reconstruction for improvement on RAG. We believe that DSLR suggests a promising research direction for refining document retrieval without additional training, together with potential applications across a wide range of knowledge-intensive NLP tasks by integrating more diverse retrieval or ranking models.", |
|
|
"Ethics Statement": "The experimental results on DSLR validate the effectiveness of sentence-level re-ranking and reconstruction in RAG. However, since RAG requires processing a large amount of textual data, we should always be aware of the documents containing sensitive or private information when applying it to real-world scenarios. While it is not within the scope of our study, we believe that developing filtering strategies to mitigate such problems is essential.", |
|
|
"Acknowledgments": "This work was supported by Institute for Information and Communications Technology Promotion (IITP) grant funded by the Korea government (No. 2018-0-00582, Prediction and augmentation of the credibility distribution via linguistic analysis and automated evidence document collection), Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (RS-2023-00275747), and the Artificial Intelligence Industrial Convergence Cluster Development project funded by the Ministry of Science and ICT (MSIT, Korea) & Gwangju Metropolitan City.", |
|
|
"A.1 Datasets": "Table 4 shows the number of queries of the datasets utilized in our experiments. Following (Karpukhin et al., 2020), we used the development sets of the NQ, TQA, and SQD datasets. The SQ dev-set was also employed. For RQA, we selected answerable queries from documents available on GCS spanning from 2022 to 2023. In BASQ, we selectively employed questions from BioASQ6 challenge (task b) that permitted either factoid or yes/no responses to ensure accuracy.", |
|
|
"A.2 Models": "To construct the retrieval system for our RAG model, we employed BM25 with Pyserini5, using pre-indexed corpora provided by the framework. To improve answer generation across datasets, we include document titles to provide context to the LLM, following Asai et al. (2024). Additionally, recognizing that sentences alone may offer insufficient context, we also included document titles in the reranking process to further ensure contextual richness.\nTo select models for our re-ranking experiments, we considered a range of realistic scenarios and selected representative models from three key categories: dense retrieval, supervised re-ranking, and unsupervised re-ranking. Specifically, for dense retrieval, we chose DPR and Contriever. In the category of supervised re-ranking, we used the established pointwise ranking models MonoT5 and RankT5. For unsupervised re-ranking, we employed RG, a widely used pointwise re-ranking method. Additionally, acknowledging the significance of latency in practical settings, we favored pointwise methods to efficiently manage the computational overhead associated with processing and decomposing passages into sentences.\n5https://github.com/castorini/pyserini", |
|
|
"A.2.1 Model Weights": "All model weights were sourced from Hugging Face, and the models were used without any additional training. Below, we list the specific Hugging Face model names corresponding to the weights employed in our experiments: DPR:\n- facebook/dpr-question_encoder-multiset-base - facebook/dpr-ctx_encoder-multiset-base\nContriever: - facebook/contriever MonoT5: - castorini/monot5-base-msmarco RankT5: - Soyoung97/RankT5-base RECOMP: - fangyuan/nq_abstractive_compressor - fangyuan/nq_extractive_compressor - fangyuan/tqa_abstractive_compressor - fangyuan/tqa_extractive_compressor - fangyuan/hotpotqa_abstractive_compressor - fangyuan/hotpotqa_extractive_compressor LLama2-13b-chat: - meta-llama/Llama-2-13b-chat-hf", |
|
|
"A.2.2 Threshold T for Each Model": "As shown in the Figure 8, the distribution of relevance scores varies significantly across models. Experimentally, we sampled 1,000 entries each from the training sets of the NQ, TQA, and SQD datasets to set the 90th percentile threshold T . Sentences scoring below this threshold were removed. Although it is possible to sample from the training set in each experiment to establish new thresholds, our experiments conducted in Section 5 across various thresholds consistently yielded better performance than using the top-1 documents directly. Therefore, the thresholds established in this experiment could be used as the standard. The specific values are listed in the accompanying Table 5.", |
|
|
"A.3 Prompt Templates": "For a fair comparison, we fixed the prompt templates. In this section, we introduce these fixed templates.", |
|
|
"A.3.1 QA Prompt Template": "We use a QA template for open-domain queries from the publicly available llama-index6. Below is the QA prompt template used in our experiments:\nQA Prompt Template for LLMs\n[INST] We have provided context information below. ——————— {context_str} ——————— Given this information, please answer the question: {query_str} [/INST]", |
|
|
"A.3.2 RG Ranking Prompt Template": "We use an RG Ranking Prompt Template following (Liang et al., 2022). Below is the RG Ranking prompt template used in our experiments:\n6https://www.llamaindex.ai/\nRanking Prompt Template for LLMs\n[INST] Passage: ——————— {title_str} {document_str} ——————— Query: {query_str} Does the passage answer the query? Answer ‘Yes’ or ‘No’ [/INST]", |
|
|
"B.1 Main Result on Top-5 documents": "In Table 6, we compared the performance of DSLRrefined documents for the top-5 settings with original documents in RAG. While DSLR remained effective, the margin of performance improvement was less significant than the top-1 setting, suggesting that increasing the volume of documents can modestly enhance performance. However, DSLR managed to maintain similar or better performance while significantly reducing token count, thus improving efficiency. In this setting, models like MonoT5, RankT5, and RG, based on pre-trained models, outperformed traditional models such as BM25, Contriever, and DPR, likely due to the superior capability of sentence-level re-ranking.", |
|
|
"B.2 Detailed Results for the Comparative Analysis of Document Refining Methods:": "Evaluating RECOMP and DSLR\nTable 7 provides a detailed comparison between the RECOMP and DSLR frameworks. RECOMP focuses on minimizing token usage in RAG without sacrificing performance, utilizing a fine-tuned Contriever for extractive compression and a T5-large for abstractive compression. By contrast, DSLR enhances RAG performance by eliminating redundant content. Although their different objectives pose a challenge for direct comparison, both aim to extract essential information effectively. To ensure a fair comparison, we aligned the context length to two sentences and refined the top-5 documents, mirroring RECOMP’s methodology. Our experiments utilized the LLama2-13b-chat model as the reader to maintain consistency. This analysis underscores the importance of zero-shot refinement approaches in advancing document refinement for RAG.\nB.3 DSLR with Proprietary Models\nWe evaluated the performance of DSLR in proprietary LLMs with larger parameter sizes and undisclosed data and training processes, specifically testing on GPT-3.5-turbo7 and Claude-3-haiku8 using the same settings for the top-1 document. As shown in Table 8, consistent with previous findings, DSLR significantly enhanced performance by simply eliminating irrelevant content at the sentence level from the original document. Additionally, since these models calculate API costs on a per-token basis, the substantial reduction in token count9 is expected to\n7gpt-3.5-turbo-1106 8claude-3-haiku-20240307 9Due to the unavailability of the tokenizers for gpt-3.5turbo and claude-3-haiku, token counts were necessarily performed using the LlamaTokenizer.\nsignificantly decrease API costs.", |
|
|
"B.4 Sentence-Level Re-ranking Results": "In DSLR, the sentence-level re-ranking step is crucial for enhancing performance. We evaluated this approach against conventional passage-level reranking within the RAG framework, maintaining identical context lengths (L). Initial retrievals were configured for top-{20, 100}, followed by analyses at L = {100, 500}. These settings were chosen because 100 and 500 words represent typical lengths for segments in top-1 and top-5 passage-level rerankings, respectively. Notably, when counting words, only the content is considered, excluding titles.", |
|
|
"B.4.1 Comparative Performance of Sentence-Level vs. Passage-Level Re-Ranking": "The results presented in Table 9 demonstrate that sentence-level re-ranking consistently outperforms passage-level re-ranking across all settings, except when using BM25.", |
|
|
"B.4.2 Effectiveness of Sentence-Level Re-Ranking in Varying Conditions": "Table 10 shows the sentence-level and passagelevel re-ranking over various context lengths L. Table 11 shows performance in top-{5, 10, 20, 50, 100} settings adjusted for L = 100 and L = 500. Our experiments on the NQ dataset indicate that sentence-level re-ranking is effective across diverse conditions, omitting the less effective BM25 reranking.", |
|
|
"B.4.3 Effectiveness of Sentence-Level Re-Ranking on the Gold Answer Hit Rate": "We present detailed results for the Gold Answer Hit Rate in Table 12. The rate is binary, assigned 1 if the re-ranked context contains the gold answer, and 0 otherwise, averaged over all dataset entries for each L.", |
|
|
"B.4.4 Ablation Studies on Various Models": "Table 13 explores the significance of each step under various models in the initial top-100 retrieval and L=500 setting. The absence of the sentencelevel re-ranking (SR) highlights its necessity in filtering irrelevant information, while excluding the reconstruction (RC) step demonstrates its crucial role in enhancing answer generation accuracy." |
|
|
} |