---
dataset_info:
  features:
  - name: doc
    dtype: string
  - name: split
    dtype: string
  - name: domain
    dtype: string
  splits:
  - name: train
    num_bytes: 26636395
    num_examples: 395892
  - name: test
    num_bytes: 8601871
    num_examples: 114099
  download_size: 5977728
  dataset_size: 35238266
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
license: cc-by-nc-sa-4.0
pretty_name: hyperprobe-dataset
task_categories:
- feature-extraction
language:
- en
tags:
- decoding
- probing
- llms
- concepts
- analogy
size_categories:
- 100K<n<1M
---
# Dataset Card for Hyperdimensional Probe

This repository contains the official datasets of "Hyperdimensional Probe: Decoding LLM Representations via Vector Symbolic Architectures".

This work combines symbolic representations and neural probing to introduce the Hyperdimensional Probe, a new paradigm for decoding the LLM vector space into human-interpretable features, consistently extracting meaningful concepts across models and inputs.
## Datasets

- Corpus of factual and linguistic analogies (input-completion tasks): `saturnMars/hyperprobe-dataset-analogy`
- SQuAD-based corpus (question-answering tasks): `saturnMars/hyperprobe-dataset-squad`
## Dataset Details

### Dataset Description

This repository includes our synthetic corpora for the training and experimental stages.
### Information

- **Curated by:** Marco Bronzini
- **Funded by:** IpaziaAi
- **Language(s) (NLP):** English
- **License:** Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
### Dataset Sources

- **Repository:** github.com/Ipazia-AI/hyperprobe
- **Paper:** Hyperdimensional Probe: Decoding LLM Representations via Vector Symbolic Architectures
## Examples

### Train data
```json
[
    [" 10 : 1 = 60 : 6",
     " 10 : 100 = 12 : 144",
     " plato : kepler = philosopher : mathematician",
     " significant : successful = significantly : successfully",
     " important : importantly = subsequent : subsequently"],
    [" 10 : 100 = 28 : 784",
     " coyote : canine = cat : feline",
     " coyote : canine = cow : bovid",
     " sold : oversold = played : overplayed",
     " sold : oversold = populated : overpopulated"],
    [" 10 : 1 = 80 : 8",
     " rarely : quietly = rare : quiet",
     " rarely : rare = calmly : calm",
     " rarely : rare = critically : critical",
     " youngest : young = sweetest : sweet"]
]
```
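Each training entry is a colon-delimited analogy string (`a : b = c : d`). As a minimal sketch (not part of the official codebase, and assuming the whitespace-padded format shown above), one way to split an entry into its four terms:

```python
def parse_analogy(entry: str) -> tuple[str, str, str, str]:
    """Split an ' a : b = c : d' analogy string into its four terms."""
    left, right = entry.split("=")            # two pairs around '='
    a, b = (term.strip() for term in left.split(":"))
    c, d = (term.strip() for term in right.split(":"))
    return (a, b, c, d)

print(parse_analogy(" plato : kepler = philosopher : mathematician"))
# → ('plato', 'kepler', 'philosopher', 'mathematician')
```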
### Test data
```json
{
    "capital_world": [
        " Athens is to Greece as Baghdad is to Iraq",
        " Athens is to Greece as Bangkok is to Thailand"],
    "currency": [
        " Algeria is to dinar as Angola is to kwanza",
        " Algeria is to dinar as Brazil is to real"],
    "family": [
        " boy is to girl as brother is to sister",
        " boy is to girl as dad is to mom"],
    "comparative": [
        " bad is to worse as big is to bigger",
        " bad is to worse as bright is to brighter"],
    "verb_Ving_3pSg": [
        " adding is to adds as advertising is to advertises",
        " adding is to adds as appearing is to appears"],
    "male_female": [
        " actor is to actress as batman is to batwoman",
        " actor is to actress as boy is to girl"]
}
```
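The test entries use a prose template (`A is to B as C is to D`) instead of the colon notation. A minimal sketch (again an illustrative helper, not part of the released code) for recovering the four terms:

```python
def parse_prose_analogy(entry: str) -> tuple[str, str, str, str]:
    """Split an 'A is to B as C is to D' analogy string into its four terms."""
    left, right = entry.strip().split(" as ")  # two pairs around 'as'
    a, b = left.split(" is to ")
    c, d = right.split(" is to ")
    return (a, b, c, d)

print(parse_prose_analogy(" Athens is to Greece as Baghdad is to Iraq"))
# → ('Athens', 'Greece', 'Baghdad', 'Iraq')
```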
## Source Data

These corpora were generated using two knowledge bases: the Google Analogy Test Set, distributed by TensorFlow under the Apache License 2.0, and BATS, released under the CC BY-NC 4.0 License.

See the GitHub repository to reconstruct the corpora from these two knowledge bases.
## Citation

If you use any of these datasets in your research, please cite the following work:

```bibtex
@misc{bronzini2025hyperdimensional,
      title={Hyperdimensional Probe: Decoding LLM Representations via Vector Symbolic Architectures},
      author={Marco Bronzini and Carlo Nicolini and Bruno Lepri and Jacopo Staiano and Andrea Passerini},
      year={2025},
      eprint={2509.25045},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
**APA:** Bronzini, M., Nicolini, C., Lepri, B., Staiano, J., & Passerini, A. (2025). Hyperdimensional Probe: Decoding LLM Representations via Vector Symbolic Architectures. arXiv:2509.25045.