|
|
{ |
|
|
"File Number": "1056", |
|
|
"Title": "Analysing zero-shot temporal relation extraction on clinical notes using temporal consistency", |
|
|
"Limitation": "Limitations\nThe gold relations annotated in the dataset are only three, coarse-grained and not well-defined with respect to when they start and end. The consistency analysis we performed is based on rules, which are connected to the definition of relations and their starting and end points. So in order to make sure that the consistency is calculated accurately, we used a set of 5 well-defined fine-grained relations. However, for evaluating the results we need to map the 5 relations to the original set of 3. This, in some cases, could lead to an inaccurate comparison between the gold and the predicted relations. Also, for the prompts, we used only the set of questions mentioned in Section 3.2 and did not perform any prompt tuning. Experimenting with different ways of formulating the questions could help in finding prompts that yield better results. Another research direction could be to add instructions to the prompts for uniqueness and transitivity towards obtaining consistent predictions.", |
|
|
"abstractText": "This paper presents the first study for temporal relation extraction in a zero-shot setting focusing on biomedical text. We employ two types of prompts and five Large Language Models (LLMs; GPT-3.5, Mixtral, Llama 2, Gemma, and PMC-LLaMA) to obtain responses about the temporal relations between two events. Our experiments demonstrate that LLMs struggle in the zero-shot setting, performing worse than fine-tuned specialized models in terms of F1 score. This highlights the challenging nature of this task and underscores the need for further research to enhance the performance of LLMs in this context. We further contribute a novel comprehensive temporal analysis by calculating consistency scores for each LLM. Our findings reveal that LLMs face challenges in providing responses consistent with the temporal properties of uniqueness and transitivity. Moreover, we study the relation between the temporal consistency of an LLM and its accuracy, and whether the latter can be improved by solving temporal inconsistencies. Our analysis shows that even when temporal consistency is achieved, the predictions can remain inaccurate.", |
|
|
"1 Introduction": "Reasoning regarding the temporality of events detected in a text (e.g., understanding their duration, frequency, and order) is an essential part of natural language understanding (Allen, 1983; Wenzel and Jatowt, 2023). Event ordering can be approached as identifying temporal relations between two events, a task often referred to as temporal relation extraction (TempRE). This task can also be applied to medical text (BioTempRE), e.g., clinical notes written by clinicians regarding a patient’s visit, and various medical events such as symptoms, treatments, tests, and other medical terms (see Figure 1). BioTempRE has numerous useful applications in\nFigure 1: An example of three event pairs annotated with temporal relations. In the right part, the order of the events with respect to time (t) is shown and the consistency of uniqueness and transitivity.\nhealthcare and can assist in medical diagnosis, including adverse drug event detection and medical history construction (Sun et al., 2013; Gumiel et al., 2021; Haq et al., 2021; Tu et al., 2023). Current state-of-the-art methods perform supervised learning, which requires annotated datasets (Wang et al., 2022; Yao et al., 2022; Knez and Žitnik, 2024). However, acquiring high-quality annotated data for TempRE poses significant challenges causing problems to existing datasets like missing relations and low inter-annotator agreement (Ning et al., 2017). In the biomedical domain, this challenge is aggravated by the need for expert knowledge and the sensitive nature of medical data.\nIn TempRE, there are important properties that emerge from the temporal nature of this task and determine the relations between events (see Figure 1). Such properties are symmetry (e.g., A BEFORE B ⇒ B AFTER A) and transitivity (e.g., A BEFORE B and B BEFORE C ⇒ A BEFORE C). Additionally, we identify the property of uniqueness: each\npair of events can have only one temporal relation since they are mutually exclusive. These properties can be utilized to enforce global temporal consistency on a model’s predictions: for example, on a unified output of different classifiers (Chambers et al., 2014; Tang et al., 2013), on a model that operates locally (i.e., with one pair of events as input, Ning et al. (2017)), or on predicted relations between different types of events (Wang et al., 2022).\nRecently, Large Language Models (LLMs) have shown remarkable performance in several tasks even in a zero-shot setting, which helps to tackle the need for training data (Bubeck et al., 2023; Wei et al., 2022a). Numerous works experiment with predictions of LLMs and study their reasoning abilities and the impact of various prompts in different tasks (Wu et al., 2023b; Jain et al., 2023; Tan et al., 2023). Despite the success of LLMs, studies show that these models continue to face challenges in temporal reasoning, especially in TempRE (Tan et al., 2023; Jain et al., 2023; Yuan et al., 2023), as well as in biomedical tasks (Wu et al., 2023b). In zero-shot TempRE, Yuan et al. (2023) employed different prompts for ChatGPT and found that it has a considerably lower performance compared to standard supervised methods. They also report ChatGPT’s tendency to provide temporally inconsistent responses, but only in terms of symmetry and did not perform an evaluation of temporal consistency specifically. 
Furthermore, to the best of our knowledge, we are the first to investigate the temporal reasoning capabilities of LLMs on medical data.\nIn this paper, we perform zero-shot BioTempRE on clinical notes (i.e., medical texts documenting patients’ visits) by using prompts consisting of a clinical note and questions regarding which temporal relation exists between a pair of events.1 We experiment with two different prompting strategies (BatchQA and CoT) and five widely-used LLMs (GPT-3.5, Mixtral 8x7B, Llama2 70B, Gemma 7B, and PMC-LLaMA 13B). Our findings reveal that LLMs perform poorly in this task, with a difference of approximately 0.2 in F1 score compared to supervised models. Furthermore, we calculate consistency scores for uniqueness and transitivity for each LLM in order to assess their temporal consistency and its impact on accuracy. Consistency is later enforced on the predictions with an Integer Linear Programming (ILP) method, revealing that solving the inconsistencies does not improve the F1 score.\n1We do not perform event detection and instead consider the events in each text already known.\nOverall, our contributions are:\n• To the best of our knowledge, this is the first study of zero-shot BioTempRE.\n• We provide extensive quantitative results of two types of prompts and five different LLMs.\n• We perform a novel temporal consistency analysis by calculating consistency scores for temporal properties.\n• We study how temporal consistency relates to accuracy and enforce it using an ILP method.\n• The code and data, containing the prompts and the raw and processed responses of the LLMs for around 600,000 pair instances, will be publicly shared for further analysis.2",
|
|
"2.1 Temporal Relation Extraction": "Multiple studies on addressing TempRE have applied temporal properties to classifiers’ predictions, either during training or at inference time, aiming to improve their performance (Tang et al., 2013; Chambers et al., 2014; Ning et al., 2017, 2018; Wang et al., 2022). Other works have also employed linguistic properties or properties based on causality (Chambers et al., 2014; Ning et al., 2018). Ning et al. (2018) formulated temporal, causal, and linguistic properties as constraints for an ILP method. Later, Liu et al. (2021) showed that ILP constraints can improve temporal consistency, although in certain cases, the F1 score may decrease.\nTemporal Relation Extraction in the Medical Domain. The 2012 Informatics for Integrating Biology and the Bedside (i2b2) challenge was the first to address the BioTempRE task (Sun et al., 2013). The best-performing method involved merging predictions from different SVM and CRF classifiers with regard to temporal consistency (Tang et al., 2013). Following challenges at SemEval, called Clinical TempEval, were organized from 2015 to 2017 (Bethard et al., 2015, 2016, 2017) and utilized the THYME corpus (Styler IV et al.,\n2https://github.com/vasilikikou/consistent_ bioTempRE\n2014).3 In 2015 and 2016, the best-performing methods were CRF- and SVM- based (Velupillai et al., 2015; Lee et al., 2016; Khalifa et al., 2016), while in 2017 the winning approach employed an LSTM (Tourille et al., 2017). Following approaches have utilized BERT (Lin et al., 2019; Haq et al., 2021; Tu et al., 2023) for relation classification given a text and an event pair. Recently, Knez and Žitnik (2024) introduced a multimodal method in which, they constructed a graph with medical information and then, they combined textual representations (extracted by BERT) and graph representations (extracted by a GNN). However, even though temporal consistency has been used in existing TempRE works, it has not been utilized for analyzing the performance of a model by calculating consistency scores.", |
|
|
"2.2 Zero-Shot Temporal Relation Extraction": "Zero-shot learning (Xian et al., 2019) enables models to execute tasks without explicit training, a capability demonstrated by scaling models since GPT-3 (Brown et al., 2020; Wei et al., 2022a). Instruction tuning techniques (Wei et al., 2022a) further enhance zero-shot learning in LLMs. Recent openly available LLMs like LLama (Touvron et al., 2023) and Mixtral (Jiang et al., 2024) narrow the gap with closed-source models, while chain-of-thought (CoT) prompting (Wei et al., 2022b) has enhanced their ability to handle complex tasks. Research studies have shown that the temporal reasoning tasks remain challenging for LLMs (Jain et al., 2023), and specifically for TempRE, where Yuan et al. (2023) explored zero-shot TempRE with ChatGPT and found that it yields a large performance gap compared to supervised methods. However, previous research has not analyzed zero-shot TempRE in the medical domain or the temporal consistency and its impact on the performance of zeroshot TempRE - both gaps we aim to fill in our work. In this paper, we calculate consistency scores and study their connection to the F1 scores.", |
|
|
"3.1 Problem Formulation": "Given a text document D and a set of events E = {e1, .., e|E|} mentioned in the text, we create pairs of events, which are represented by the\n3The i2b2 dataset is publicly available. The THYME corpus is provided upon request, however our requests were not answered.\nset P = {p1, .., pi, .., p|P |}, where pi indicates the ith pair, 1 ≤ i ≤ |P |. BioTempRE aims at assigning the appropriate temporal relation r to the corresponding pair of events. Each pi ∈ P is described by two distinct events {ej , ek}, where 1 ≤ j, k ≤ |E|. Furthermore, each event e ∈ E is characterized by the points in time at which it began and finished. These temporal points are denoted as b and f , respectively.\nFollowing the work of Ning et al. (2018), we employ the same relation scheme, which consists of 5 different types of temporal relations r: before, after, includes, is included, and simultaneously, represented by the label set RT = {rB, rA, rI , rII , rS}. We choose this set of relations based on the fact that they are fine-grained and well-defined, and hence, suitable for creating temporal rules for our analysis. An rB temporal relation indicates that b(ej) < b(ek) and f(ej) < f(ek) , while an rA temporal relation signifies that b(ej) > b(ek) and f(ej) > f(ek). Furthermore, rI indicates that b(ej) ≤ b(ek) and that f(ej) ≥ f(ek), and rII signifies that b(ej) ≥ b(ek) and that f(ej) ≤ f(ek). Finally, rS signifies that b(ej) = b(ek) and f(ej) = f(ek).", |
|
|
"3.2 Zero-shot BioTempRE": "We experiment with two different types of prompting: Batch-of-Questions (BatchQA) and Chain-ofThought (CoT) (Wei et al., 2022b; Yuan et al., 2023) (see Figure 4 in Appendix A). In both, we start with a preamble consisting of the document text (D) and an instruction. Then, we introduce questions regarding the temporal relations for a pair of events pi consisting of events ej and ek.4 We formulate the question for each relation based on its temporal definition, as follows:\n• BEFORE: Did ej start before ek started and end before ek ended?\n• AFTER: Did ej start after ek started and end after ek ended?\n• INCLUDES: Did ek start and end while ej was happening?\n• IS INCLUDED: Did ej start and end while ek was happening?\n• SIMULTANEOUS: Did ej and ek start and end at the same time?\n4The questions were ordered randomly.\nWe also specify the desired output format by adding “Answer with Yes or No.” to the end of each question. For each event pair there is an independent interaction with the LLM, and depending on the type of prompt the questions mentioned above are sent to the LLM in one or multiple prompts.\nBatch-of-Questions (BatchQA). In BatchQA, a single prompt is sent to the LLM. In the preamble, after the document D, this instruction follows: “Given document D, answer the following questions ONLY with Yes or No.”. Next, all the questions regarding the temporal relations are added in the same prompt. The expected model response includes five Yes/No answers for each of the questions.\nChain-of-Thought (CoT). We use the same format of temporal prompts as in Yuan et al. (2023) (based on their examples in the paper), and we formulate the questions for the set of the 5 temporal relations used in Ning et al. (2018). The first prompt is the preamble composed of the document D and the question “Given the document D, are ej and ek referring to the same event? Answer ONLY with Yes or No.”. If the response is No, then the questions are sent, each one after another, as they are defined above. If the response is Yes, the phrase “In that event,” is appended at the beginning of each question.", |
|
|
"4.1 Data": "In our experiments, we use the dataset created for the 2012 i2b2 challenge, which consists of 310 discharge summaries, 190 for training and 120 for testing. The texts were initially annotated with 8 fine-grained relations but due to low inter-annotator agreement these relations were merged to the following three: before, after and overlap. Each discharge text contains 30.8 sentences on average, with each sentence having an average number of 17.7 tokens. The average number of tokens per discharge text is 514.\nThe i2b2 dataset contains three types of events: 1) medical events, 2) time expressions, and 3) the dates of admission and discharge. The average number of medical events per discharge summary is 86.7, while the average number of time expressions is 10.5. The admission and discharge dates are included in each text; however, in a few cases, one of them might be missing. The annotators of i2b2\nhave assigned temporal relations to 27,540 pairs of events (gold pairs).\nAn important step in TempRE is to identify the pairs of events for which the models will decide if there is a relation expressed or not since it would not be feasible to check for every pair of events mentioned in a document. In order to generate candidate event pairs, we follow the approach of the best-performing method in the i2b2 challenge (Tang et al., 2013). This is a rule-based approach, which creates pairs consisting of every event and the admission and discharge dates, every two consecutive events within the same sentence, and events in the same as well as in different sentences based on linguistic rules. The generated candidate pairs are 60,840 in total, from which 28.16% appears also in the gold pairs.\nThe five relations we use in our experiments (see Section 3) are different from the gold ones existing in the dataset. In order to evaluate the prediction of our methods, we map the five relations to the three gold ones as follows: before → before, after → after, includes → overlap, is included → overlap and simultaneously → overlap.", |
|
|
"4.2 Methods": "LLMs We employ the following five (one closedsource and four open-weight) models of various sizes: GPT-3.5 (“ChatGPT”),5 Gemma 7B (Team et al., 2024),6 Mixtral 8x7B (Jiang et al., 2024),7 Llama2 70B (Touvron et al., 2023),8 and PMCLLaMA 13B, which is pre-trained on medical text (Wu et al., 2023a).9 PMC-LLaMA is only instruction-tuned on QA data (respond to one question at a time) and thus does not follow the format of BatchQA prompts, which expect multiple outputs. Therefore, we use it only for CoT. The experiments were costly in terms of time (and money for GPT-3.5), especially for CoT, where each question is sent separately. The running times ranged from three hours (Gemma BatchQA) to 7 days (Llama CoT) (see more details in Appendix A).\nBaselines We implement a rule-based baseline, called W-order, where only the before and after\n5https://openai.com/index/ gpt-3-5-turbo-fine-tuning-and-api-updates/\n6https://huggingface.co/google/gemma-1. 1-7b-it\n7https://huggingface.co/mistralai/ Mixtral-8x7B-Instruct-v0.1\n8https://huggingface.co/meta-llama/ Llama-2-70b-chat-hf\n9https://huggingface.co/axiong/PMC_LLaMA_13B\nrelations are predicted for each event pair based on the order in which the events are mentioned in the text. A combination of the predictions of each LLM with the W-order predictions is also implemented. In cases where the LLM gives a negative or uncertain prediction for all the relations, the prediction of W-order is used instead.", |
|
|
"5 Zero-shot TempRE results": "To evaluate the correctness of the predicted relations, we calculate the precision, recall and F1 scores. For each pair of events pi = (ej , ek), we check if the predicted relation ri matches the gold relation. Hence, we calculate the triple (ej , ek, ri) match between the predictions and the ground truth. In Table 1, the results for the gold and candidate pairs are presented. In order to perform a fair comparison, considering that not every candidate pair of events has a gold annotation (and therefore it is unknown whether a prediction is correct or wrong), we only evaluate those generated candidate pairs that are also contained in the gold pairs. If a gold pair does not exist in the generated candidate pairs, there is no prediction for it, and that would affect the recall score negatively. In the Supervised setting, we show scores reported by the corresponding papers. Knez and Žitnik (2024) do not mention event detection or candidate pair generation, hence we assume they used the gold pairs. On the other hand, we show the results from Haq et al. (2021) and Tu et al. (2023) in the Candidate column since they operate on events they have detected in the text.\nOur experiments demonstrate that the best performing methods are the same for the gold and the candidate pairs. As expected, the F1 score of the methods when the candidate pairs are used is lower, mostly due to the decrease in recall. The best performing method is Llama CoT + W-order in terms of F1 score. On the other hand, Mixtral CoT achieves the best precision score and Gemma BatchQA + W-order the best recall. Overall, the supervised methods consistently outperform the methods in the zero-shot setting, with an average difference of approximately 20% F1 score. In general, most LLMs (except for Gemma) exhibit improved performance when the CoT prompting approach is used. However, in an LLM-based comparison, we observe that the performance varies depending on the type of prompt used. For example, Llama with CoT has the highest F1 score, but when BatchQA\nis used, the score drops almost in half. Moreover, the combination of W-order predictions with the zero-shot methods yields improvements in recall and F1 score, but in most cases, it harms precision. Notably, PMC-LLaMA, the medical LLM we employed, has low results and is often outperformed by the general domain LLMs, showing no advantage from pre-training on biomedical text.", |
|
|
"6 Temporal consistency analysis": "Considering the temporal nature of the TempRE task, we investigate the impact of incorporating the following two temporal properties in the zero-shot setting: 1. uniqueness, requiring that each event pair has exactly one relation, and 2. transitivity (see transitivity rules in Table 4 in the Appendix). First, we evaluate the zero-shot methods based on their consistency, i.e., if their predictions follow the temporal properties or not. Then, we use ILP to enforce temporal consistency on the predictions. Specifically, we examine the following three questions:\n• How consistent are different zero-shot methods?\n• How is the temporal consistency of the predictions connected to their correctness?\n• Can the predictions be improved by a temporal constraint-based ILP method?\nHow consistent are different zero-shot methods? We calculate two consistency scores: one for uniqueness cU and one for transitivity cT , which show the percentage of cases where the corresponding temporal property was not violated. The consistency score for uniqueness is calculated based on the number of pairs as follows:\ncU =\n∑P i=1 pi,|ri|=1\n|P | , (1)\nwhere only the pairs pi with a singular predicted relation ri are considered. In Table 2, the consistency scores for uniqueness are reported. Furthermore, we present the number of pairs for which no relation was predicted (# 0) and the number of pairs with more than one predicted relation (# >1). We observe that all the models struggle to keep consistency, especially because of assigning more than one relation to a pair. For the majority of the evaluated LLMs, this occurs for at least 50% of the pairs\nand can go up to 97% (Gemma BatchQA). In this evaluation, we also find that there is no clear winner among the LLMs or the prompt types, since the same LLM shows different levels of consistency with different prompt types. The combination with the highest consistency for uniqueness is Llama with BatchQA. The consistency score for transitivity is calculated based on triples of event pairs in the following form: ((ei, ej), (ej , ek), (ei, ek)). We first find these triples in the dataset and then obtain the relations predicted for them. If r1, r2 and r3 are the pre-\ndictions for each respective pair in the triple, then for r3, it should hold that r3 ∈ trans(r1, r2).10 If it does not hold, then we have a transitivity violation. Therefore cT is calculated as:\ncT =\n∑|Tr| i=1 ti,r3∈trans(r1,r2)\n|Tr| , (2)\nwhere Tr is the set of transitivity triples and, for each triple ti, the transitivity for the predicted relations holds.11 Table 2 showcases the cT scores for each of the evaluated methods. Similar to uniqueness, Llama BatchQA demonstrates the highest consistency for transitivity. In general, for all LLMs, we observe that the BatchQA approach yields higher transitivity consistency scores than CoT.\nHow is the temporal consistency of the predictions connected to their correctness? When comparing the consistency scores with F1, we observe a contradiction. Models that have high consistency have a lower F1 score. In particular, Llama BatchQA is the most consistent in terms of uniqueness and transitivity, but has one of the lowest F1 scores. Especially for the candidate pairs, the F1 score is even lower than the rule-based baseline, yet cU is 70.58% and cT is 80.05%. Moreover, Llama CoT, which is the best in terms of F1 score, has low consistency with around only 30% of predictions being unique and 60% correct transitivity triples. 
These insights suggest that temporal consistency does not always mean correctness.\nCan the predictions be improved by a temporal constraint-based ILP approach? Following the approach proposed by Ning et al. (2017, 2018), we implemented an ILP step that uses the temporal properties as constraints and changes inconsistent predictions so that the constraints are not violated.12 This study aims to investigate whether enforcing consistency can improve the accuracy of the predictions. First, we assign a confidence score sc to each triple (ei, ej , rk),∀rk ∈ RT . The score sc for a pair of events p = (ei, ej) equals 1, if the relation was predicted from the model, and 0.2 otherwise. Next, we create a binary vector, which is\n10The transitive relations for the relation set we used can be found in Table 4 in Appendix A.\n11Triples where at least one pair was not assigned a relation are excluded from this calculation.\n12For the ILP implementation we used the Gurobi optimizer (https://www.gurobi.com/solutions/ gurobi-optimizer/).\noptimized with ILP. We refer to it as the indicator I(pi, ri) ∈ [0, 1], ∀p ∈ P, r ∈ RT . We formulate the constraints as follows:\n• Uniqueness: ∑ p∈P,r∈RT I(p, r) = 1 (3)\n• Symmetry:\nI(pi, ri) = I(p s i , r̄i) (4)\nwhere pi = (ei, ej) and psi = (ej , ei), and r̄i is the reverse relation of ri.13\n• Transitivity:\nI((ei, ej), r1)+I((ej , ek), r2)− ∑\nr3∈tr(r1,r2)\n≤ 1\n(5)\nwhere r1, r2, r3 ∈ RT and trans(r1, r2) is the set of relations that are the transitive of relations r1 and r2.\nThe objective of the ILP method is to maximize the confidence score sc based on the indicator I:\n∧ I = argmax ∑ p∈P ∑ r∈RT sc(p, r)I(p, r) (6)\nAs shown in Table 2, when the ILP reasoning step is applied, the consistency scores for both uniqueness and transitivity reach 100%. We applied this step to the predictions of Llama BatchQA and Llama CoT, which are the models with the highest contradiction between F1 and consistency. In Table 3, we show the results before and after applying the temporal constraints. Even though the consistency of the predictions after reasoning is 100%, the F1 score decreases slightly for BatchQA and by 0.066 for CoT. This means that the predictions are temporally consistent, but they are not accurate. To get a better understanding of this issue, Figure 2 demonstrates two examples of transitivity triples for which the predictions violate the transitivity constraint. This indicates that at least one of the three predictions is incorrect and needs to change. In the top example, the first two relations were correct, but these relations were changed by the ILP step, resulting in only one relation being\n13The reverse of each relation can be found in Table 5 in Appendix A.\ncorrect in the consistent predictions. In the bottom example, only one relation was changed to enforce consistency. This resulted in two correctly predicted relations after the ILP, but still the first relation remained incorrect. This analysis highlights our previous observation regarding the relation between consistency and accuracy, and points out to the need of aligning these two aspects more effectively in order for models to achieve an improved performance in temporal reasoning.", |
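For concreteness, the ILP step above can be sketched with the Gurobi Python API. The helper names (pairs, score, sym, trans, triples) and the exact data layout are illustrative assumptions; only the uniqueness, symmetry, and transitivity constraints and the confidence-maximising objective mirror Equations 3-6.

```python
# A minimal gurobipy sketch of the consistency-enforcing ILP (Equations 3-6).
# The input containers are assumptions for illustration, not the paper's code.
import gurobipy as gp
from gurobipy import GRB

RELATIONS = ["before", "after", "includes", "is_included", "simultaneous"]

def enforce_consistency(pairs, score, sym, trans, triples):
    """
    pairs:   list of (e_i, e_j) tuples, with both orderings of each pair present
    score:   dict {(pair, r): confidence}, 1.0 if r was predicted for pair, else 0.2
    sym:     dict mapping each relation to its reverse, e.g. "before" -> "after"
    trans:   dict {(r1, r2): set of relations r3 allowed by transitivity}
    triples: list of pair triples ((e_i, e_j), (e_j, e_k), (e_i, e_k))
    """
    model = gp.Model("temporal_consistency")
    # Binary indicator I(p, r): relation r is assigned to pair p.
    I = {(p, r): model.addVar(vtype=GRB.BINARY) for p in pairs for r in RELATIONS}

    # Uniqueness (Eq. 3): each pair receives exactly one relation.
    for p in pairs:
        model.addConstr(gp.quicksum(I[p, r] for r in RELATIONS) == 1)

    # Symmetry (Eq. 4): (e_i, e_j, r) implies (e_j, e_i, reverse(r)).
    for (a, b) in pairs:
        for r in RELATIONS:
            model.addConstr(I[(a, b), r] == I[(b, a), sym[r]])

    # Transitivity (Eq. 5): r1 and r2 force r3 into their transitive set.
    for p1, p2, p3 in triples:
        for r1 in RELATIONS:
            for r2 in RELATIONS:
                model.addConstr(
                    I[p1, r1] + I[p2, r2]
                    - gp.quicksum(I[p3, r3] for r3 in trans[(r1, r2)]) <= 1)

    # Objective (Eq. 6): keep as many confident predictions as possible.
    model.setObjective(
        gp.quicksum(score[p, r] * I[p, r] for p in pairs for r in RELATIONS),
        GRB.MAXIMIZE)
    model.optimize()
    return {p: r for p in pairs for r in RELATIONS if I[p, r].X > 0.5}
```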
|
|
"7 Pairs distance-based analysis": "Since clinical notes contain long texts (see Section 4.1), we perform an analysis based on the dis-\ntance of event pairs for the best-performing LLM (Llama CoT). First, we calculate the distance in terms of characters between the events for all the gold pairs. Then, we sort the pairs by their distances and split them to 10 bins, so that each bin contains roughly the same amount of pairs. Finally, the F1 score is calculated for the prediction of the pairs contained in each bin. Figure 3 depicts the barplot with the bars representing the pairs in the specific distance range and the corresponding F1 scores. We observe that 37.5% of the pairs have a distance of 0 to 100 characters. Larger distances appear less frequently and hence the range of distance is smaller in the first bars, while the last bars have larger ranges. There is no consistent decrease in F1 score as the distance increases, meaning that the model is not affected by the distance of events in the text.", |
|
|
"8 Conclusion": "In this paper, we performed BioTempRE on clinical notes in a zero-shot setting employing five different LLMs. Two types of prompts were used, namely BatchQA and CoT, in order to obtain LLMs’ responses. The zero-shot performance of all LLMs is lower compared to supervised learning methods. Moreover, we perform a temporal evaluation by calculating the consistency score of each LLM for the temporal properties of uniqueness and transivity. We find that, in general, LLMs’ predictions are temporally inconsistent and, interestingly, the model with the higher consistency scores (Llama\nBatchQA) has one of the lowest F1 scores. An ILP method utilized to enforce consistency on the models’ predictions fails to improve their accuracy. These findings indicate the importance of the relation between temporal consistency and correctness, emphasizing the need for further study in order to assist temporal reasoning.", |
|
|
"Acknowledgments": "This research has been funded by the Vienna Science and Technology Fund (WWTF)[10.47379/VRG19008] ”Knowledgeinfused Deep Learning for Natural Language Processing”.\nLimitations\nThe gold relations annotated in the dataset are only three, coarse-grained and not well-defined with respect to when they start and end. The consistency analysis we performed is based on rules, which are connected to the definition of relations and their starting and end points. So in order to make sure that the consistency is calculated accurately, we used a set of 5 well-defined fine-grained relations. However, for evaluating the results we need to map the 5 relations to the original set of 3. This, in some cases, could lead to an inaccurate comparison between the gold and the predicted relations. Also, for the prompts, we used only the set of questions mentioned in Section 3.2 and did not perform any prompt tuning. Experimenting with different ways of formulating the questions could help in finding prompts that yield better results. Another research direction could be to add instructions to the prompts for uniqueness and transitivity towards obtaining consistent predictions.", |
|
|
"A Appendix": "Technical details Getting responses from GPT3.5 for all the pairs for both types of prompts costed around 800$ and lasted 27 hours. For the opensource models we used a single H100 GPU, and for the rest two H100 GPUs. The running time for each model was:\n• Mixtral 8x7B BatchQA: 6 hours\n• Mixtral 8x7B CoT: 48 hours\n• Llama2 70B BatchQA: 24 hours\n• Llama2 70B CoT: 7 days\n• Gemma 7B BatchQA: 3 hours\n• Gemma 7B CoT: 25 hours\n• PMC-Llama 13B CoT: 2.5 days" |
|
|
} |