{
"File Number": "106",
"Title": "LoRAMoE: Alleviating World Knowledge Forgetting in Large Language Models via MoE-Style Plugin",
"7 Limitations": "In this section, we discuss the potential limitations of our proposed method LoRAMoE. Firstly, although we have demonstrated the effectiveness of LoRAMoE in alleviating world knowledge forgetting while enhancing the downstream ability of the LLMs with SFT, we limit the model size to 7B due to resource and time constraints. Further work will be conducted on the larger LLMs, to understand the influence of large-scale SFT on these LLMs and to boost their multitasking abilities. Secondly, the localized balancing constraint can softly constrain the type of experts and balance the experts utilization. However, we haven’t studied the case where there are more experts types for a more fine-grained task category. Future work will be conducted on a more fine-grained understanding of the influence of SFT and the utilization of LoRAMoE.",
"abstractText": "Supervised fine-tuning (SFT) is a crucial step for large language models (LLMs), enabling them to align with human instructions and enhance their capabilities in downstream tasks. Substantially increasing instruction data is a direct solution to align the model with a broader range of downstream tasks or notably improve its performance on a specific task. However, we find that large-scale increases in instruction data can damage the world knowledge previously stored in LLMs. To address this challenge, we propose LoRAMoE, a novel framework that introduces several low-rank adapters (LoRA) and integrates them by using a router network, like a plugin version of Mixture of Experts (MoE). It freezes the backbone model and forces a portion of LoRAs to focus on leveraging world knowledge to solve downstream tasks, to alleviate world knowledge forgetting. Experimental results show that, as the instruction data increases, LoRAMoE can significantly improve the ability to process downstream tasks, while maintaining the world knowledge stored in the LLM. Our code is available at https://github.com/ Ablustrund/LoRAMoE.",
"1 Introduction": "Supervised fine-tuning (SFT) provides a pivotal technique to make large language models (LLMs) follow human instructions and improve their performance of downstream tasks (Chung et al., 2022; Ouyang et al., 2022). Although some studies (Zhou et al., 2023; Cao et al., 2023) indicate that LLMs trained on a little data can follow instructions well, increasing the amount of data is a straightforward\n* Equal contribution. † Corresponding author.\nway to enhance their ability to multiple downstream tasks or improve their performance on a specific task, as shown in the left of Figure 1.\nHowever, the large-scale increase in instruction data can destroy the world knowledge stored in LLMs, as illustrated in the right of Figure 1. Specifically, as the amount of instruction data increases, we observe a notable decline in performance on Closed-Book Question Answering (CBQA) datasets, which are used to measure world knowledge in LLMs (Touvron et al., 2023; Neeman et al., 2022). In the paradigm of supervised fine-tuning, the conflict between maintaining world knowledge inside LLMs and improving their performance on downstream tasks by scaling up instruction data has not been thoroughly examined.\nIn this paper, we propose LoRAMoE, a novel framework for SFT, to enhance the models’ capability of solving downstream tasks, while alleviat-\n1932\ning world knowledge forgetting during the training phase. LoRAMoE is a Mixture-of-Experts-style (MoE-style) plugin, which introduces several lowrank adapters (LoRA) (Hu et al., 2021) as experts and integrates them by using a router network. The router network automatically assigns weights to experts, which can improve the LLM’s performance on multiple downstream tasks.\nTo demonstrate the efficacy of our proposed method, we conduct extensive experiments across a range of downstream tasks. Experiment results show that LoRAMoE can significantly improve LLM’s ability to address the various downstream tasks by fine-tuning the model on a large amount of instruction data, while maintaining the world knowledge stored in the model. In addition, we further evaluate our method by visualizing the expert weight for tasks. The result indicates that LoRAMoE adequately alleviates world knowledge forgetting and achieves an improvement of models by fostering collaboration among experts. The main contributions of our paper are as follows:\n1. We find that significantly increasing the amount of instruct data during the SFT phase can damage the world knowledge inside the LLMs. The need for improvement in downstream tasks by scaling up instruction data conflicts with maintaining the world knowledge inside the model.\n2. We introduce LoRAMoE, a novel framework for SFT, which introduces LoRAs as experts and integrates them by the router. LoRAMoE can enhance the model’s ability to address downstream tasks, while alleviating the world knowledge forgetting.\n3. Extensive Experiments demonstrate the efficacy of our proposed approach in multi-tasks and mitigating the forgetting of world knowledge inside the model. The visualization experiment shows that LoRAMoE can achieve an improvement by fostering collaboration among experts.",
"2 Motivation": "In this section, we verify that a large-scale SFT can cause irreversible damage to world knowledge within the LLMs while improving the LLMs’ performance in various downstream tasks.",
"2.1 A Diverging Trend": "We constructed a dataset containing seven categories of tasks with a total of five million training samples, and used it to conduct SFT on a Llama2-7B model. The implementation details are described in Appendix A. During the expansion of fine-tuning data, we observed a diverging trend in the performance across two types of tasks, as shown in Figure 2:\nAcross downstream tasks such as summarization, Natural Language Inference (NLI), machine translation, and others, the performance of the fine-tuned model initially showed a magnificent increase and eventually stabilized at a promising level. However, when it comes to closed-book QA (CBQA) tasks that are used as world knowledge benchmark (Touvron et al., 2023; Neeman et al., 2022), the model’s performance catastrophically declines under the baseline Notably, with the training data expanding, a contiguous decline can be witnessed. Moreover, this decline will occur earlier if the test set is filtered.1 Appendix B case with a larger dataset including more tasks shows an even steeper drop on world knowledge benchmarks, although performance remains competitive on others.",
"2.2 Irreversible Knowledge Forgetting": "In this section, we dissect the reason behind the decline on these world knowledge benchmarks during the expansion of fine-tuning data. We find this results from the occurrence of irreversible knowledge forgetting inside the LLM.\nThe performance on world knowledge benchmarks highly relies on the knowledge and skills learned during pre-training phase. To investigate the relationship between the performance on world knowledge benchmarks and the knowledge embedded in pre-trained models (Petroni et al.,\n1Considering previous work that has noted train-test overlap in CBQA datasets (Lewis et al., 2020), we elaborately select parts of the CBQA dataset without train-test overlap for our testing set, namely Filtered NQ and Filtered TriviaQA.\n2019; Roberts et al., 2020; AlKhamissi et al., 2022), we conduct fine-tuning solely on the CBQA dataset with 250k samples and run evaluation on the test sets without training-testing overlap (e.g. Filtered NQ and Filtered TriviaQA). Results in Figure 3 show initial training boosts performance significantly, especially the first 1% (approximately 1k samples), with limited gains thereafter. This is because early fine-tuning aligns existing knowledge with new instructions, improving CBQA results. However, due to minimal training-testing data overlap, adding more samples doesn’t further enhance performance. Thus, a model’s benchmark success relies on world knowledge acquired from the pretraining.\nGiven this, it is naturally assumed that the diminished performance on knowledge benchmark stems from the damage of knowledge stored in the LLM due to large-scale instruction tuning. To verify the hypothesis, we sequentially fine-tuned a model using two datasets, first excluding CBQA data, then with CBQA data. Results presented in Table 1 show a great decline in knowledge capabilities versus the original LLM. This indicates that the world knowledge within the model was compromised during the first stage of large-scale fine-tuning, resulting in the model’s inability to forge the alignment between human instructions and the already destroyed knowledge in the subsequent stage of fine-tuning solely with CBQA.\nTo sum up, the pursuit of enhancing performance on downstream tasks through the expansion of training data conflicts with the preservation of world knowledge within the model in vanilla SFT.",
"3 LoRAMoE": "In this section, we elaborate on the methodological details of LoRAMoE, which is an MoE-style plugin and introduced Localized Balancing Constraint during the training phase to alleviate the\nworld knowledge, as shown in Figure 4.",
"3.1 Architecture": "The left of Figure 4 illustrates the forward process of the standard MoE architecture (Shazeer et al., 2016; Fedus et al., 2021; Lepikhin et al., 2020). In the MoE, the router assigns weights of experts according to the data, allowing them to divide their labor to complete the forward process (Jacobs et al., 1991). The key sight of LoRAMoE is that we freeze the backbone model to maintain world knowledge and introduce experts to leverage this knowledge to address tasks, while improving the performance on multiple downstream tasks. Additionally, we utilize the LoRA (Hu et al., 2021) as the architecture of the expert to improve training and inference efficiency.\nFormally, for the traditional transformers architecture, the forward propagation process of the feed-forward neural (FFN) network block can be simplified as follows:\nf(x) = x+ fFNN(x). (1)\nThe matrix operation of the linear layer in this forward propagation can be expressed as:\no = Wx = W0x+∆Wx (2)\nwhere W0 ∈ Rdin×dout represents the parameter matrix of the backbone model and ∆W ∈ Rdin×dout denotes the updated parameter during the training phase. For LoRAMoE, we replace the linear layer in the FFN block with the MoE-style plugin, which makes experts collaborate to address tasks. During the training phase, we freeze the backbone to maintain the world knowledge and only update ∆W . Consider the LoRAMoE layer containing N experts, which is denoted as {Ei}Ni=1, the forward process of the layer can be mathematically expressed as follows:\no = W0x+∆Wx = W0x+ N∑\ni=1\nG(x)iEi(x) (3)\nwhere Ei(·) and G(·) = Softmax(xWg) represent the i-th expert and the router in the LoRAMoE layer, respectively. The Wg is the trainable parameter matrix of the route network. By this, the experts and the outer work in tandem, enabling the experts to develop varied capabilities and efficiently handle diverse types of tasks.\nIn addition, LoRA has been proven to be both effective and efficient for the SFT phase of LLMs (Wang et al., 2023a; Liu et al., 2022; Pan et al., 2022). To enhance the efficiency and resource conservation of the fine-tuning process, we replace the\nparameter matrix of the experts with a low-rank format. Specifically, the matrix ∆WE ∈ Rdin×dout of the expert E(·) in the LoRAMoE layer can be written as follows:\n∆WE = BA (4)\nwhere A ∈ Rdin×r, B ∈ Rr×dout , and the rank r ≪ min(din, dout). LoRA contributes to a significant reduction in the trainable parameters, thereby enhancing efficiency and saving costs during the fine-tuning process.\nOverall, the forward process of the LoRAMoE layer replaced the traditional FFN layer can be represented as:\no = W0x+ α\nr\nN∑\ni=1\nωi ·BiAix (5)\nwhere ωi denotes the weight of i-th expert and α is the constant hyper-parameter, approximately equivalent to the learning rate.",
"3.2 Localized Balancing Constraint": "The imbalance of the experts’ utilization is a typical problem in MoE (Shazeer et al., 2016; Fedus et al., 2021), which is also observed in our proposed method, as shown in Figure 5. The conventional solution is balancing expert utilization (Shazeer et al., 2016), which involves making the coefficient of variation of the experts’ importance as the loss function. However, this method assumes all the training samples are under the same distribution,\nwhich ignores the fact that samples may be from different distributions such as the question-answering task and other downstream tasks, more detailed analysis and conceptual proof in Appendix C.\nConsidering the mixed characteristics of data distributions are important, during the training phase, we introduce localized balancing constraint, a novel balancing expert utilization method to make a portion of experts focus more on leveraging world knowledge to solve tasks. As shown in Figure 6, during the fine-tuning phase, we softly constrain experts to concentrate on two aspects, one of which focuses on leveraging world knowledge by learning on its related datasets, while another focuses on other downstream tasks. In addition, all experts within the same aspects are balanced such as balancing expert utilization.\nFormally, we define the importance matrix Q of the LoRAMoE layer and Qn,m denotes the sum of router values of the n-th expert for the m-th training sample in a batch, which can be represented as follows:\nQn,m =\nTm∑\nj=1 G(xj)i = exp(ωji /τ)∑N k=1 exp(ω j i /τ)\n(6)\nwhere N and Tm denote the number of experts and the number of tokens of m-th training sample, respectively. xj is the hidden input of the j-th token. We then define the coefficient matrix I with the same size of Q, corresponding to the importance matrix Q. In,m denotes the importance coefficient of Qn,m, which can be written as follows:\nIn,m = { 1 + δ, Typee(n) = Types(m) 1− δ, Typee(n) ̸= Types(m)\n(7)\nwhere δ ∈ [0, 1] controls the degree of imbalance between experts types. Typee(n) and Types(m) are pre-defined target type of n-th expert and the task type of m-th training sample in a batch, respectively.\nWe categorize the instruction data into two distinct types: world knowledge-related tasks such as TriviaQA, and other downstream tasks such as Flores. Then, we enable a portion of experts to learn on world knowledge-related tasks to align human instructions with world knowledge, while making other experts focus more on enhancing the performance of downstream tasks. Formally, suppose that Ii,k and Ij,k denote the importance coefficient of the i-th and j-th expert for the k-th sample, respectively. If experts are in the same group, their values at corresponding positions in the coefficient matrix are identical, i.e., Ii,k = Ij,k. This indicates that these experts have the same importance because they are assigned to focus on learning the same type of tasks. On the contrary, the values of experts from distinct groups at their coefficient matrix are different, i.e., Ii,k ̸= Ij,k.\nThe localized balancing constraint loss Llbc is defined to measure the dispersion of the weighted importance matrix Z = I ◦Q, which can be mathematically represented as:\nLlbc = σ 2(Z)\nµ(Z) (8)\nwhere σ2(Z) and µ(Z) represent the variance and mean of Z, respectively. Specifically, if a specific sample is from the world knowledge-related dataset, experts focusing on solving this type will have larger values in the coefficient matrix I. 
Optimizing the loss Llbc reducing can make corresponding experts learn more from this sample and be assigned a larger weight by the router. Meanwhile, experts solving the same type of task are balanced such as Shazeer et al. (2016). In addition, the constraint is soft to encourage cooperation among experts to preserve the capacity for generalization.\nOverall, localized balancing constraint Llbc achieves a localized balance between two types of experts: one specializes in leveraging world knowledge by training more on world knowledge-related datasets, while the other concentrates on various downstream tasks. The loss of LoRAMoE can be represented as follows:\nLtotal = L+ βLlbc (9)\nwhere L is the next-token prediction loss of LLMs and β controls the strength of localized balancing constraint. In the training phase, we freeze the backbone model and the trainable parameters are only those of the experts and routers within the LoRAMoE layers. In the inference process, the router automatically assigns weights to all experts, which avoids the need for pre-specified data types.",
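A minimal sketch of the localized balancing constraint in Eqs. (6)-(8), assuming a gate tensor like the one returned by the layer sketch above (so the temperature τ is already folded into the router softmax). The function name and the integer encoding of expert/sample types are hypothetical, not from the paper's code.

```python
# Sketch (under stated assumptions) of L_lbc = var(Z) / mean(Z) with Z = I * Q.
import torch


def localized_balancing_loss(gate, expert_types, sample_types, delta=0.1):
    """gate:         (batch, tokens, N) router softmax weights of one LoRAMoE layer
    expert_types: (N,) long tensor, e.g. 0 = world-knowledge expert, 1 = downstream expert
    sample_types: (batch,) long tensor with the task type of each sample
    """
    # Q[n, m] = sum over the tokens of sample m of the router weight of expert n (Eq. 6)
    Q = gate.sum(dim=1).T                                         # (N, batch)
    # I[n, m] = 1 + delta if the expert's target type matches the sample's task type,
    # otherwise 1 - delta (Eq. 7)
    match = expert_types.unsqueeze(1) == sample_types.unsqueeze(0)
    I = (1.0 - delta) + (2.0 * delta) * match.float()
    Z = I * Q
    return Z.var(unbiased=False) / Z.mean()                       # Eq. 8
```

During training this term would simply be added to the language-modeling loss with weight β, as in Eq. (9), e.g. `loss = lm_loss + beta * localized_balancing_loss(gate, expert_types, sample_types)`.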
"4.1 Experiment Setup": "In this section, we introduce the training implementation for LoRAMoE. We only replace the linear layer in the feed-forward neural network of LLM with the LoRAMoE layer, initializing each layer with six experts, of which three experts are dedicated to addressing downstream tasks, and the other three are responsible for leveraging world knowledge in the base model by learning on its related tasks. The hyperparameters for control constraint strength β and degree of imbalance δ are both set to 0.1. For LoRA settings, the α, and r are set to 32 and four for the main result, respectively. The dropout is 0.05, and the learning rate is 2e − 4. The training dataset is the 3 million set the same as the one described in Appendix A, so as the evaluation settings. We freeze the parameters of the base model, rendering only the experts and router in LoRAMoE trainable. The batch size per node is set to 16.",
"4.2 Main Results": "Table 2 displays the performance of LoRAMoE and compares this result with the outcomes of directly applying SFT to the model or utilizing LoRA tuning. The results show that the language model with LoRAMoE gets good performance on both world knowledge benchmarks and others, indicating its effectiveness in avoiding knowledge forgetting while improving multi-tasking abilities.\nFor world knowledge benchmarks, contrary to the catastrophic collapse seen in Section 2, LoRAMoE not only avoids this issue but also surpasses the model fine-tuned solely with the CBQA dataset. LoRAMoE shows a significant performance boost on world knowledge benchmarks over vanilla SFT, with up to a 63.9% improvement and an average increase of 35.3%.\nFor other downstream tasks, LoRAMoE is capable of achieving performance close to or even surpassing that of direct SFT. For instance, in all read-\ning comprehension tasks (i.e., Race, ReCoRD, multiRC), LoRAMoE achieved superior performance.\nWe also compare our method against PEFT by single LoRA. The knowledge forgetting also occurred during the single LoRA-tuning, as it is essentially the same as vanilla SFT (Hu et al., 2021). Compared with a single LoRA, multiple collaborative LoRAs in LoRAMoE enhance both world knowledge retention and multitasking performance. They offer an average boost of 30.9% in world knowledge benchmarks and 8.4% in other downstream tasks.\nBesides, Llbc improves outcomes for LoRAMoE in the vast majority of tasks, both world knowledge benchmarks and others. Notably, for reading comprehension, NLI, and the original CBQA dataset, the benefits of this method were quite substantial, up to 17.6%. This indicates capability partitioning in the expert group benefits the performance in multi-task learning.",
"4.3 Sensitivity Analysis": "In this section, we analyze the parameter sensitivity of LoRAMoE. Keeping other settings constant, we vary the number of experts and the rank of LoRA. The average performance with varied parameter settings on all test sets including the world knowledge benchmark and all other downstream tasks is shown in Table 3. In Appendix D there are detailed results.\nAs the number of trainable parameters increases,\nperformance is generally stable. the number of 6 experts is the most beneficial choice, as more experts do not lead to higher performance. While the increase in LoRA rank improves the model’s capabilities somewhat, it brings about an exponential rise in trainable parameters.",
"4.4 Visualizing the Experts Utilization": "To confirm the effectiveness of LoRAMoE in specializing the experts with two types, we visualize their weight assigned by the router when encountered with data from downstream tasks and knowledge benchmarks respectively, as illustrated in Figure 7.\nThere is a distinct contrast in the utilization of the two types of experts when dealing with world knowledge benchmarks and other down-\nHotpotQA\nFiltered NQ\nWSC\nFiltered TriviaQA\nFlores\nRace-high\nReCoRD\nExperts type 1 Experts type 2\nFigure 7: Visualization of routers’ weight on different types of data, where type 1 refers to the experts dedicated to aligning the world knowledge in the base model with the human instruction and type 2 refers to the experts that focus on downstream tasks. The utilization rate of the type of experts diverged significantly across tasks.\nstream tasks. This suggests that the routers can automatically allocate specific tasks to experts with corresponding abilities during the inference phase. Specifically, the experts requested to leverage world knowledge are greatly employed in world knowledge benchmarks (e.g., TriviaQA, Natural Questions, and HotpotQA), underscoring their vital role in preventing world knowledge forgetting. This corresponds to the fact we state in Section 2 that supervised fine-tuning boosts the model’s capabilities in these tasks by associating pre-stored world knowledge in the model with human instructions. On the other hand, experts assigned to focus on enhancing performance in downstream tasks are given increased prominence when encountering these tasks. Through this visualized result, we find that some downstream tasks still require experts of another type. It is reasonable. For example, in reading comprehension tasks, the knowledge learned by the model during pre-training can better assist in making factual judgments. This phenomenon is even more pronounced in language-based tasks. In the WSC task (Levesque et al., 2012), the router allocates an average of about 45% of its attention to the experts responsible for world knowledge.",
"5 Related Work": "Parameter-Efficient Fine-tuning. With the size of language models growing larger, parameterefficient fine-tuning (PEFT) (He et al., 2021) has become crucial for resource savings. Researchers have proposed several approaches such as LoRA (Hu et al., 2021), adapters (Houlsby et al., 2019), and prompt learning (Lester et al., 2021), to en-\nhance fine-tuning efficiency. PEFT based on lowrank adapters (Hu et al., 2021) is popular and widely used, which introduces two trainable lowrank matrices in each fully connected layer, to achieve significant savings in training resources without adding additional inference computation cost. We apply low-rank techniques to the structure of experts to save resource consumption.\nMixture-of-Experts. The mixture of Experts (MoE) replaces the feed-forward neural network layer with sparsely activated experts, which significantly enlarges the model without remarkably increasing the computational cost (Jacobs et al., 1991). Currently, the token-level MoE architectures are widely used in pre-trained language models and vision models (Shazeer et al., 2016; Lepikhin et al., 2020; Du et al., 2022; Riquelme et al., 2021). In addition, researchers (Zhou et al., 2022; Chi et al., 2022) aim to investigate the router selection problem in MoE. Unlike these efforts to expand the model size and address the selection problem, we propose an MoE-style framework for multitask learning and maintaining the world knowledge stored in LLMs.\nMulti-LoRA Architecture. Researchers also have utilized multiple LoRAs for enhanced model performance. Huang et al. (2023) propose LoraHub to choose different LoRA combinations for task generalization. MOELoRA (Liu et al., 2023) leverage LoRA and MoE for task-specific tuning and multitasking, especially in healthcare. However, these methods need the data type as the input during the inference phase, which limits the application of the model to other tasks. Chen et al. (2023a) first introduces multiple LoRA serving systems and Sheng et al. (2023) proposes S-LoRA, a system that can serve thousands of LoRA adapters from a single machine. Chen et al. (2023b) introduces several experts to enhance the model’s ability for multimodal learning. Unlike these approaches, LoRAMoE introduces an MoE-style plugin and Localize Balancing Constraint to tackle world knowledge forgetting in LLMs, while enhancing the model’s ability to multi-task learning.",
"6 Conclusion": "In this paper, we first delve into the conflict between improving LLM’s performance on downstream tasks by scaling up data during the SFT phase and discouraging world knowledge forgetting. To address this conflict, we then introduce Lo-\nRAMoE, a novel framework for SFT, which introduces LoRAs as experts and integrates them by the router. Extensive experimental results demonstrate that LoRAMoE can foster collaboration among experts to enhance the model’s performance of downstream tasks, while preserving the world knowledge inside it.",
"8 Acknowledgements": "The authors wish to thank the anonymous reviewers for their helpful comments. This work was partially funded by National Natural Science Foundation of China (No.62206057,62076069), Shanghai Rising-Star Program (23QA1400200), Natural Science Foundation of Shanghai (23ZR1403500), Program of Shanghai Academic Research Leader under grant 22XD1401100.",
"A Details about Experiment Implementation": "Datasets. The seven tasks are closed-book question answering (CBQA), coreference resolution,\nnatural language inference (NLI), abstract summarization, multi-lingual translation, reading comprehension, and text classification. Table 4 shows the composition of the 3-million-sample dataset. The five million fine-tuning data we use includes three million versions and their variants from data augmentation strategies. The 1-million-sample version is the subset of the original 3-million-sample dataset.\nEvaluation. We utilize the opencompass6 framework to run the evaluation process on the aforementioned tasks. Notably, considering previous work that has noted train-test overlap in CBQA datasets (Lewis et al., 2020), we elaborately select parts of the CBQA dataset without train-test overlap for our testing set, namely Filtered NQ and Filtered TriviaQA, to analyze the world knowledge of models better.",
"B The World Knowledge of LLM Further Declines after Being Trained with More Data": "With the task types increasing, there is an inevitable trend to increase the amount of SFT training data. To further verify that a large-scale SFT training process can lead to knowledge forgetting of LLM as stated in Section 2, we construct a much larger dataset containing ten million training samples. In addition to the dataset from the previous section, we also added the following tasks:\n• Named Entity Recognition: sampled from Wang et al. (2023b). Contains 17 different NER tasks.\n• Program Execution: sampled from Wang et al. (2022). Contains 90 different tasks requiring the LLM to understand the instructions about a program and execute it.\n• Question Generation: sampled from a existing huggingface dataset 7. Given a context, the LLM needs to generate an appropriate question based on the answer.\n• Text2sql: sampled from two existing huggingface datasets8. Given a description in natural language, the LLM needs to generate an appropriate sequence of SQL.\n6https://opencompass.org.cn/ 7https://huggingface.co/datasets/qa_zre 8https://huggingface.co/datasets/Clinton/\nText-to-sql-v1, https://huggingface.co/datasets/ cfq\n• Toxic Classification: sampled from a existing huggingface datasets9.\nAfter training the LLaMa-2-7b on this 10- million-sample dataset with the same experiment setup with Appendix A, we find the LLM exhibit a greater knowledge-forgetting but a promising performance in other tasks apart from knowledge benchmarks.",
"C Mixed Distribution Dilemmas for Expert Balancing": "When fine-tuning MoE without any constraints, the router mechanism often converges to a state in which a small number of experts receive a disproportionately large share of preferences by the router, as depicted in Figure 5. This imbalance among experts presents a challenge to correct, as experts that receive greater routing weights in the early stages of training undergo more rapid optimization, thereby garnering increased preferences from the router. A similar phenomenon has been documented in the work presented in Shazeer et al. (2016) and Fedus et al. (2021).\nA conventional solution for balancing experts utilization involves employing the coefficient of\n9https://huggingface.co/datasets/google/civil_ comments\nvariation of the experts’ importance as the loss function, aimed at equalizing the significance of each expert (Shazeer et al., 2016). This solution assumes that the distribution of training samples for optimising MoE is a single distribution, which inherently eliminates the necessity of considering the diverse origins of data distribution. Specifically, this traditional approach simplifies the modeling process by assuming homogeneity in data sources that often do not align with fine-tuning data containing both factual knowledge QA and other downstream tasks. Therefore, such simplification can lead to significant biases, particularly when encountering datasets with varied distributional characteristics.\nTraditional balancing constraints, which aim to allocate a uniform distribution of training samples across all experts, can lead to inaccurate parameter estimation. This is because such constraints do not account for the intrinsic differences in data representation and importance across various categories. Recognizing the disparate nature of data distributions, LoRAMoE strategically assigns data to experts, not uniformly, but based on the observed imbalances. This allocation is governed by a set of weights that are calibrated to reflect the varying significance and representation of different data categories within the overall dataset.\nSuch a specialized allocation method is pivotal in addressing the challenges posed by uneven data distributions. By tailoring the distribution of training samples to each expert based on the inherent disparities in the data, LoRAMoE facilitates a more accurate and representative parameter estimation. This nuanced approach to data distribution allows for a more effective fitting of the model to diverse data subsets, significantly enhancing the model’s predictive accuracy and generalization capability. This strategy is particularly effective in scenarios where data imbalance could otherwise lead to skewed learning and generalization errors, ensuring that each data category is appropriately represented and modeled within the overall system.\nTo illustrate the concept with a simplified model, let’s assume our training data is sampled from a mixture of two Gaussian distributions. The means (µ1, µ2) and variances (σ21, σ 2 2) of these distributions are implicit. The proportion of training data from each distribution is denoted as p1andP2 where p1 + p2 = 1, without loss of generality, we assume that p1 ≤ p2. 
When a MoE model fits the proposed distribution with balanced weights m, the likelihood of the model given the data can be expressed\nas:\nL(X) = ∏\nx∈X1\n( mN ( x;µ′1, σ ′2 1 )\n+(1−m)N ( x;µ′2, σ ′2 2 )) × ∏\nx∈X2\n( mN ( x;µ′1, σ ′2 1 )\n+(1−m)N ( x;µ′2, σ ′2 2 )) , (10)\nwhere Card(X1) : Card(X2) = p1 : p2. Using N1(x) and N2(x) for N ( x;µ ′ 1, σ ′2 1 ) and N ( x;µ\n′ 2, σ ′2 2\n) ,\nThe optimal mean value for µ ′ 1 satisfies the following conditions, whose value is 0 when the fitted distribution is in the same family of mixed distributions N(θ, p1) as the sampling distribution:\n∂ logL(X)\n∂µ′1 =\n∑\nx∈X1∪X2\n∂\n∂µ′1 log (mN1(x)\n+(1−m)N2(x))\n= ∑\nx∈X1∪X2\n( x− µ′1 σ′21 )\n× mN1(x) mN1(x) + (1−m)N2(x) , (11)\nIn equation 10, we can replace part of the summation with the empirical estimate of the mean of the input x. For an ideal routing network, there must exist a distribution Ni such that the data allocated to this distribution is independently and identically distributed with one of the peaks in the sampling distribution. Let’s assume this distribution to be N2. In this case, if m ≥ p1, then the fitting result for distribution µ1′ will be µ′1 = (p1µ1+(m−p1)µ2)/m. Based on the chain rule of differential derivation, we end up with:\nd logL dm = ∂ logL\n∂µ′1\ndµ′1 dm\n=\n( ∑\nx∈X1∪X2\n( x− µ′1 σ′21 )\n× mN1(x) mN1(x) + (1−m)N2(x)\n)\n× p1(µ2 − µ1) m2\n≤0, (12)\nThe inverse result can be derived similarly. Therefore, the best training error is achieved only when the mixing coefficient m of the prior distribution is consistent with the actual sampling distribution weight p1.",
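The appendix argument can be checked numerically with a small sketch: draw data from a two-Gaussian mixture with proportions (p_1, p_2) and compare the average log-likelihood under different forced mixing coefficients m. As a simplification, the component means and variances are fixed at their true values rather than refitted, and all numbers below are arbitrary illustrative choices, not values from the paper.

```python
# Numerical sketch of the intuition behind Eqs. (10)-(12): a forced "balanced"
# mixing coefficient m explains the data worse than m equal to the true proportion p1.
import numpy as np

rng = np.random.default_rng(0)
p1, mu1, mu2, sigma = 0.2, -2.0, 2.0, 1.0
n = 20000
z = rng.random(n) < p1                      # True -> sample from component 1
data = np.where(z, rng.normal(mu1, sigma, n), rng.normal(mu2, sigma, n))

def gauss(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def avg_loglik(m):
    # mixture likelihood with the component means/variances fixed at the truth
    return np.mean(np.log(m * gauss(data, mu1, sigma) + (1 - m) * gauss(data, mu2, sigma)))

for m in (0.1, 0.2, 0.5, 0.8):
    print(f"m={m:.1f}  avg log-likelihood={avg_loglik(m):.4f}")
# The largest value is obtained at m close to p1 = 0.2, while forcing m = 0.5
# lowers the likelihood, matching the conclusion of the appendix.
```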
"D Detalied Results of Sensitivity Study": "Table 6 shows the detailed results presented in Section 4.3."
}