---
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
task_categories:
- text-classification
task_ids:
- sentiment-classification
pretty_name: TweetTopicMulti
dataset_info:
  config_name: tweet_topic_multi
  features:
  - name: text
    dtype: string
  - name: date
    dtype: string
  - name: label
    sequence:
      class_label:
        names:
          '0': arts_&_culture
          '1': business_&_entrepreneurs
          '2': celebrity_&_pop_culture
          '3': diaries_&_daily_life
          '4': family
          '5': fashion_&_style
          '6': film_tv_&_video
          '7': fitness_&_health
          '8': food_&_dining
          '9': gaming
          '10': learning_&_educational
          '11': music
          '12': news_&_social_concern
          '13': other_hobbies
          '14': relationships
          '15': science_&_technology
          '16': sports
          '17': travel_&_adventure
          '18': youth_&_student_life
  - name: label_name
    sequence: string
  - name: id
    dtype: string
  splits:
  - name: test_2020
    num_bytes: 231142
    num_examples: 573
  - name: test_2021
    num_bytes: 666444
    num_examples: 1679
  - name: train_2020
    num_bytes: 1864206
    num_examples: 4585
  - name: train_2021
    num_bytes: 595183
    num_examples: 1505
  - name: train_all
    num_bytes: 2459389
    num_examples: 6090
  - name: validation_2020
    num_bytes: 233321
    num_examples: 573
  - name: validation_2021
    num_bytes: 73135
    num_examples: 188
  - name: train_random
    num_bytes: 1860509
    num_examples: 4564
  - name: validation_random
    num_bytes: 233541
    num_examples: 573
  - name: test_coling2022_random
    num_bytes: 2250137
    num_examples: 5536
  - name: train_coling2022_random
    num_bytes: 2326257
    num_examples: 5731
  - name: test_coling2022
    num_bytes: 2247725
    num_examples: 5536
  - name: train_coling2022
    num_bytes: 2328669
    num_examples: 5731
  download_size: 6377923
  dataset_size: 17369658
configs:
- config_name: tweet_topic_multi
  data_files:
  - split: test_2020
    path: tweet_topic_multi/test_2020-*
  - split: test_2021
    path: tweet_topic_multi/test_2021-*
  - split: train_2020
    path: tweet_topic_multi/train_2020-*
  - split: train_2021
    path: tweet_topic_multi/train_2021-*
  - split: train_all
    path: tweet_topic_multi/train_all-*
  - split: validation_2020
    path: tweet_topic_multi/validation_2020-*
  - split: validation_2021
    path: tweet_topic_multi/validation_2021-*
  - split: train_random
    path: tweet_topic_multi/train_random-*
  - split: validation_random
    path: tweet_topic_multi/validation_random-*
  - split: test_coling2022_random
    path: tweet_topic_multi/test_coling2022_random-*
  - split: train_coling2022_random
    path: tweet_topic_multi/train_coling2022_random-*
  - split: test_coling2022
    path: tweet_topic_multi/test_coling2022-*
  - split: train_coling2022
    path: tweet_topic_multi/train_coling2022-*
  default: true
---
# Dataset Card for "cardiffnlp/tweet_topic_multi"

## Dataset Description

- **Paper:** [https://arxiv.org/abs/2209.09824](https://arxiv.org/abs/2209.09824)
- **Dataset:** Tweet Topic Dataset
- **Domain:** Twitter
- **Number of Classes:** 19

### Dataset Summary

This is the official repository of TweetTopic (["Twitter Topic Classification", COLING 2022 main conference](https://arxiv.org/abs/2209.09824)), a topic classification dataset on Twitter with 19 labels.
Each instance of TweetTopic comes with a timestamp ranging from September 2019 to August 2021.
See [cardiffnlp/tweet_topic_single](https://huggingface.co/datasets/cardiffnlp/tweet_topic_single) for the single-label version of TweetTopic.
The tweet collection used in TweetTopic is the same as the one used in [TweetNER7](https://huggingface.co/datasets/tner/tweetner7).
The dataset is also integrated into [TweetNLP](https://tweetnlp.org/).
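
The dataset can be loaded directly with the `datasets` library. The snippet below is a minimal sketch, assuming `datasets` is installed; it uses the `train_all` split described later in this card.

```python
from datasets import load_dataset

# Load the multi-label topic dataset; "train_all" combines the 2020 and 2021 training data.
dataset = load_dataset("cardiffnlp/tweet_topic_multi", split="train_all")

print(dataset[0]["text"])
print(dataset[0]["label_name"])
```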

### Preprocessing

We pre-process tweets before the annotation to normalize some artifacts, converting URLs into a special token `{{URL}}` and non-verified usernames into `{{USERNAME}}`.
For verified usernames, we keep the account name and wrap it with the symbols `{@` and `@}` (e.g. `@herbiehancock` becomes `{@herbiehancock@}`).
For example, a tweet

```
Get the all-analog Classic Vinyl Edition
of "Takin' Off" Album from @herbiehancock
via @bluenoterecords link below:
http://bluenote.lnk.to/AlbumOfTheWeek
```

is transformed into the following text.

```
Get the all-analog Classic Vinyl Edition
of "Takin' Off" Album from {@herbiehancock@}
via {@bluenoterecords@} link below: {{URL}}
```

A simple function to format tweets is shown below.
```python
import re

from urlextract import URLExtract

extractor = URLExtract()

def format_tweet(tweet):
    # mask web urls
    urls = extractor.find_urls(tweet)
    for url in urls:
        tweet = tweet.replace(url, "{{URL}}")
    # format twitter account
    tweet = re.sub(r"\b(\s*)(@[\S]+)\b", r'\1{\2@}', tweet)
    return tweet

target = """Get the all-analog Classic Vinyl Edition of "Takin' Off" Album from @herbiehancock via @bluenoterecords link below: http://bluenote.lnk.to/AlbumOfTheWeek"""
target_format = format_tweet(target)
print(target_format)
# Get the all-analog Classic Vinyl Edition of "Takin' Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below: {{URL}}
```
### Data Splits

| split                    | number of texts | description |
|:-------------------------|----------------:|:------------|
| test_2020                |   573 | test dataset from September 2019 to August 2020 |
| test_2021                |  1679 | test dataset from September 2020 to August 2021 |
| train_2020               |  4585 | training dataset from September 2019 to August 2020 |
| train_2021               |  1505 | training dataset from September 2020 to August 2021 |
| train_all                |  6090 | combined training dataset of `train_2020` and `train_2021` |
| validation_2020          |   573 | validation dataset from September 2019 to August 2020 |
| validation_2021          |   188 | validation dataset from September 2020 to August 2021 |
| train_random             |  4564 | training dataset randomly sampled from `train_all` with the same size as `train_2020` |
| validation_random        |   573 | validation dataset randomly sampled from `validation_all` with the same size as `validation_2020` |
| test_coling2022_random   |  5536 | random split used in the COLING 2022 paper |
| train_coling2022_random  |  5731 | random split used in the COLING 2022 paper |
| test_coling2022          |  5536 | temporal split used in the COLING 2022 paper |
| train_coling2022         |  5731 | temporal split used in the COLING 2022 paper |

For the temporal-shift setting, models should be trained on `train_2020` with `validation_2020` as the validation set and evaluated on `test_2021`.
In general, models should be trained on `train_all` (the most representative training set) with `validation_2021` as the validation set and evaluated on `test_2021`.

**IMPORTANT NOTE:** To obtain results comparable with those of the COLING 2022 Tweet Topic paper, please use `train_coling2022` and `test_coling2022` for the temporal-shift setting, and `train_coling2022_random` and `test_coling2022_random` for the random split (the coling2022 splits do not have a validation set).
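
As an illustration, the general setting described above can be assembled as follows; this is a sketch assuming the `datasets` library, with split names taken from the table above.

```python
from datasets import load_dataset

# General setting: train on train_all, validate on validation_2021, evaluate on test_2021.
train, validation, test = load_dataset(
    "cardiffnlp/tweet_topic_multi",
    split=["train_all", "validation_2021", "test_2021"],
)

print(len(train), len(validation), len(test))  # expected: 6090 188 1679
```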
### Models

| model | training data | F1 | F1 (macro) | Accuracy |
|:------|:--------------|---:|-----------:|---------:|
| [cardiffnlp/roberta-large-tweet-topic-multi-all](https://huggingface.co/cardiffnlp/roberta-large-tweet-topic-multi-all) | all (2020 + 2021) | 0.763104 | 0.620257 | 0.536629 |
| [cardiffnlp/roberta-base-tweet-topic-multi-all](https://huggingface.co/cardiffnlp/roberta-base-tweet-topic-multi-all) | all (2020 + 2021) | 0.751814 | 0.600782 | 0.531864 |
| [cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-multi-all](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-multi-all) | all (2020 + 2021) | 0.762513 | 0.603533 | 0.547945 |
| [cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-multi-all](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-multi-all) | all (2020 + 2021) | 0.759917 | 0.59901 | 0.536033 |
| [cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-multi-all](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-multi-all) | all (2020 + 2021) | 0.764767 | 0.618702 | 0.548541 |
| [cardiffnlp/roberta-large-tweet-topic-multi-2020](https://huggingface.co/cardiffnlp/roberta-large-tweet-topic-multi-2020) | 2020 only | 0.732366 | 0.579456 | 0.493746 |
| [cardiffnlp/roberta-base-tweet-topic-multi-2020](https://huggingface.co/cardiffnlp/roberta-base-tweet-topic-multi-2020) | 2020 only | 0.725229 | 0.561261 | 0.499107 |
| [cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-multi-2020](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-multi-2020) | 2020 only | 0.73671 | 0.565624 | 0.513401 |
| [cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-multi-2020](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-multi-2020) | 2020 only | 0.729446 | 0.534799 | 0.50268 |
| [cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-multi-2020](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-multi-2020) | 2020 only | 0.731106 | 0.532141 | 0.509827 |

The model fine-tuning script can be found [here](https://huggingface.co/datasets/cardiffnlp/tweet_topic_multi/blob/main/lm_finetuning.py).
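
For inference, any of the fine-tuned checkpoints above can be used with the `transformers` text-classification pipeline. The sketch below is illustrative only; the sigmoid activation and the 0.5 threshold are assumptions on our part, not the paper's evaluation protocol.

```python
from transformers import pipeline

# Multi-label topic prediction with one of the checkpoints listed in the table above.
classifier = pipeline(
    "text-classification",
    model="cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-multi-all",
    top_k=None,                   # return scores for all 19 labels
    function_to_apply="sigmoid",  # score each label independently (multi-label)
)

scores = classifier("Just finished the new album and the guitar work is incredible")[0]
predicted = [s["label"] for s in scores if s["score"] > 0.5]  # assumed threshold
print(predicted)
```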
## Dataset Structure

### Data Instances

An example instance from the training data looks as follows.

```python
{
    "date": "2021-03-07",
    "text": "The latest The Movie theater Daily! {{URL}} Thanks to {{USERNAME}} {{USERNAME}} {{USERNAME}} #lunchtimeread #amc1000",
    "id": "1368464923370676231",
    "label": [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    "label_name": ["film_tv_&_video"]
}
```
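
The `label` field is a multi-hot vector over the 19 classes, while `label_name` lists the active class names. If needed, the vector can be decoded back to names via the dataset's feature metadata; a small sketch, assuming the dataset is loaded with `datasets`:

```python
from datasets import load_dataset

dataset = load_dataset("cardiffnlp/tweet_topic_multi", split="train_all")

# "label" is a Sequence of ClassLabel, so the inner feature exposes int2str.
class_label = dataset.features["label"].feature
example = dataset[0]
names = [class_label.int2str(i) for i, flag in enumerate(example["label"]) if flag]
print(names)  # should match example["label_name"]
```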

### Labels

| <span style="font-weight:normal">0: arts_&_culture</span> | <span style="font-weight:normal">5: fashion_&_style</span> | <span style="font-weight:normal">10: learning_&_educational</span> | <span style="font-weight:normal">15: science_&_technology</span> |
|-----------------------------|---------------------|----------------------------|--------------------------|
| 1: business_&_entrepreneurs | 6: film_tv_&_video  | 11: music                  | 16: sports               |
| 2: celebrity_&_pop_culture  | 7: fitness_&_health | 12: news_&_social_concern  | 17: travel_&_adventure   |
| 3: diaries_&_daily_life     | 8: food_&_dining    | 13: other_hobbies          | 18: youth_&_student_life |
| 4: family                   | 9: gaming           | 14: relationships          |                          |

Annotation instructions can be found [here](https://docs.google.com/document/d/1IaIXZYof3iCLLxyBdu_koNmjy--zqsuOmxQ2vOxYd_g/edit?usp=sharing).
The label2id dictionary can be found [here](https://huggingface.co/datasets/cardiffnlp/tweet_topic_multi/blob/main/dataset/label.multi.json).

### Citation Information

```
@inproceedings{dimosthenis-etal-2022-twitter,
    title = "{T}witter {T}opic {C}lassification",
    author = "Antypas, Dimosthenis and
      Ushio, Asahi and
      Camacho-Collados, Jose and
      Neves, Leonardo and
      Silva, Vitor and
      Barbieri, Francesco",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics"
}
```