update apr

- README.md +47 -45
- loader.py +46 -1
- xCodeEval.py +1 -0
README.md CHANGED
@@ -1,57 +1,42 @@
 ---
 annotations_creators:
 - expert-generated
 language:
 - code
 - en
 language_creators:
 - found
 - expert-generated
 license:
 - cc-by-nc-4.0
 multilinguality:
 - multilingual
 pretty_name: xCodeEval
 size_categories:
 - 1M<n<10M
 - 10M<n<100M
 source_datasets:
 - original
 tags:
 - programming-language
 - code
 - program-synthesis
 - automatic-code-repair
 - code-retrieval
 - code-translation
 - code-classification
 task_categories:
 - translation
 - token-classification
 - text2text-generation
 - text-retrieval
 - text-generation
 - text-classification
 - feature-extraction
 - question-answering
-task_ids: []
-configs:
-
-- tag_classification
-- code_compilation
-- program_synthesis
-- code_translation
-- apr
-- retrieval_code_code
-- retrieval_nl_code
-- retrieval_corpus
-- problem_descriptions
-- unittest_db
 ---
 
 
-**NOTE**: Please ignore the Dataset Preview.
-
 [github](https://github.com/ntunlp/xCodeEval)
 
 # xCodeEval
@@ -63,11 +48,27 @@ This repository contains the sample code and data link for xCodeEval [paper](htt
 
 # Data Download
 
+Currently this repository supports huggingface [`load_dataset()`](https://huggingface.co/docs/datasets/v1.11.0/package_reference/loading_methods.html#datasets.load_dataset) api. Follow the following example to load dataset for individual examples.
+
+```
+import datasets
+
+prog_synthesis_dataset = datasets.load_dataset("NTU-NLP-sg/xCodeEval", "program_synthesis")
+code_translation_dataset = datasets.load_dataset("NTU-NLP-sg/xCodeEval", "code_translation")
+tag_classification_dataset = datasets.load_dataset("NTU-NLP-sg/xCodeEval", "tag_classification")
+apr_dataset = datasets.load_dataset("NTU-NLP-sg/xCodeEval", "apr")
+pcode_compilation_dataset = datasets.load_dataset("NTU-NLP-sg/xCodeEval", "code_compilation")
+retrieval_code_code_dataset = datasets.load_dataset("NTU-NLP-sg/xCodeEval", "retrieval_code_code")
+retrieval_nl_code_dataset = datasets.load_dataset("NTU-NLP-sg/xCodeEval", "retrieval_nl_code")
+retrieval_corpus_dataset = datasets.load_dataset("NTU-NLP-sg/xCodeEval", "retrieval_corpus")
+
+```
+
+Data can be also downloaded as a git LFS repo from huggingface.
 
 
 You can download the full data using the following command.
 
 ```
 GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/NTU-NLP-sg/xCodeEval
@@ -83,8 +84,6 @@ cd xCodeEval
 git lfs pull --include "apr/test/*"
 ```
 
-**NOTE**: Currently we don't support huggingface `load_dataset()` module. At this moment use `git lfs` to download the data.
-
 
 We propose 7 Tasks.
 
@@ -96,8 +95,11 @@ We propose 7 Tasks.
 6. [Code-Code Retrieval](https://github.com/ntunlp/xCodeEval/blob/main/retrieval.md)
 7. [NL-Code Retrieval](https://github.com/ntunlp/xCodeEval/blob/main/retrieval.md)
 
+
 # Common Data for different tasks
 
+If you are not using huggingface [`load_dataset()`](https://huggingface.co/docs/datasets/v1.11.0/package_reference/loading_methods.html#datasets.load_dataset) api, you may need to link some data with different tasks.
+
 
 We have two data files that are required for multiple tasks.
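
As a quick check that the new `load_dataset()` path works, a single config can be loaded and inspected. A minimal sketch, assuming a `train` split exists (the split name is not confirmed by this diff) and using the `code_compilation` field names from the `xCodeEval.py` change below:

```python
import datasets

# Load one config; the split name "train" is an assumption.
cc = datasets.load_dataset("NTU-NLP-sg/xCodeEval", "code_compilation", split="train")

# Field names follow the _TEXT_FEATURES entry patched in xCodeEval.py:
# file_name, lang, lang_cluster, source_code, compilation_error, code_uid.
example = cc[0]
print(example["lang"], example["lang_cluster"], example["file_name"])
```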
loader.py CHANGED
@@ -1,6 +1,51 @@
-
 import datasets
 
+SHORT_LANG_MAP = {
+    "GNU C++": "C++",
+    "GNU C++17": "C++",
+    "MS C++ 2017": "C++",
+    "MS C++": "C++",
+    "Java 8": "Java",
+    "Java 6": "Java",
+    "GNU C++11": "C++",
+    "Java 11": "Java",
+    "GNU C++14": "C++",
+    "Mono C#": "C#",
+    "GNU C": "C",
+    "Python 3": "Python",
+    "PyPy 3": "Python",
+    "GNU C11": "C",
+    "Go": "Go",
+    "Rust": "Rust",
+    "PyPy 2": "Python",
+    "Python 2": "Python",
+    "MS C#": "C#",
+    "Kotlin": "Kotlin",
+    "GNU C++0x": "C++",
+    "Java 7": "Java",
+    "Node.js": "Javascript",
+    ".NET Core C#": "C#",
+    "PHP": "PHP",
+    "GNU C++17 Diagnostics": "C++",
+    "Clang++17 Diagnostics": "C++",
+    "JavaScript": "Javascript",
+    "Ruby": "Ruby",
+    "C# 10": "C#",
+    "C# 8": "C#",
+    "Clang++20 Diagnostics": "C++",
+    "GNU C++17 (64)": "C++",
+    "GNU C++20 (64)": "C++",
+    "Java 17": "Java",
+    "Kotlin 1.4": "Kotlin",
+    "Kotlin 1.5": "Kotlin",
+    "Kotlin 1.6": "Kotlin",
+    "Kotlin 1.7": "Kotlin",
+    "PyPy 3-64": "Python",
+    "Python 3 + libs": "Python",
+    "Ruby 3": "Ruby",
+    "Rust 2021": "Rust",
+}
+
 prog_synthesis_dataset = datasets.load_dataset("NTU-NLP-sg/xCodeEval", "program_synthesis")
 code_translation_dataset = datasets.load_dataset("NTU-NLP-sg/xCodeEval", "code_translation")
 tag_classification_dataset = datasets.load_dataset("NTU-NLP-sg/xCodeEval", "tag_classification")
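
The new `SHORT_LANG_MAP` collapses compiler/runtime identifiers ("GNU C++17", "PyPy 3", ...) into coarse language names. A minimal sketch of how it could be applied, assuming examples carry a `lang` field as in the task configs; the helper name is hypothetical:

```python
def add_lang_cluster(example):
    # Fall back to the raw value for any identifier not covered by the map.
    example["lang_cluster"] = SHORT_LANG_MAP.get(example["lang"], example["lang"])
    return example

# Hypothetical usage with one of the datasets loaded above:
# tag_classification_dataset = tag_classification_dataset.map(add_lang_cluster)
```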
xCodeEval.py CHANGED
@@ -287,6 +287,7 @@ _TEXT_FEATURES = {
     "code_compilation": {
         "file_name",
         "lang",
+        "lang_cluster",
         "source_code",
         "compilation_error",
         "code_uid",
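
With `lang_cluster` now part of the `code_compilation` features, downstream code can select a whole language family without enumerating compiler variants. A sketch under the same split-name assumption as above:

```python
import datasets

cc = datasets.load_dataset("NTU-NLP-sg/xCodeEval", "code_compilation", split="train")

# One comparison replaces matching "GNU C++17", "MS C++", "Clang++17 Diagnostics", ...
cpp_only = cc.filter(lambda ex: ex["lang_cluster"] == "C++")
print(len(cpp_only))
```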