DevOps Q&A Dataset v1.0
Overview
A high-quality dataset of 25,670 DevOps question-and-answer examples collected from GitHub repositories, Stack Exchange, and official documentation.
Statistics
- Total examples: 25,670
- Average quality score: ~0.82
- Deduplication: exact duplicates removed via MD5 hashing (a minimal sketch follows this list)
- Categories: Docker, Kubernetes, CI/CD, Cloud, Linux, Terraform, Ansible
- Sources: StackExchange/HuggingFace (70%), GitHub Repositories (29%), Official Documentation (~1%)
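The exact-duplicate step mentioned above can be approximated with the Python standard library. This is only an illustrative sketch: the text normalization rules and the choice of hashing the question plus answer are assumptions, not the collection pipeline's actual code.

import hashlib

def record_hash(record: dict) -> str:
    # Hash the normalized question/answer text.
    # Lowercasing and whitespace stripping are illustrative assumptions.
    text = (record.get("question", "") + "\n" + record.get("answer", "")).strip().lower()
    return hashlib.md5(text.encode("utf-8")).hexdigest()

def deduplicate(records: list[dict]) -> list[dict]:
    # Keep the first occurrence of each MD5 hash, drop exact duplicates.
    seen, unique = set(), []
    for rec in records:
        h = record_hash(rec)
        if h not in seen:
            seen.add(h)
            unique.append(rec)
    return unique

sample = [
    {"question": "How to fix OOMKilled errors in Kubernetes?", "answer": "Check pod memory limits..."},
    {"question": "How to fix OOMKilled errors in Kubernetes?", "answer": "Check pod memory limits..."},
]
print(len(deduplicate(sample)))  # -> 1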
Use Cases
- Fine-tuning LLMs for DevOps automation (see the formatting sketch after this list)
- Training specialized models for technical documentation
- DevOps support chatbot development
- Infrastructure-as-Code assistant
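For the fine-tuning use case, records can be flattened into prompt/completion pairs. The prompt template below is a hypothetical example, not a prescribed format; adapt it to your fine-tuning framework.

def to_instruction_pair(example: dict) -> dict:
    # Hypothetical prompt template: prefix the question with its category tag.
    return {
        "prompt": f"[{example['category']}] {example['question']}",
        "completion": example["answer"],
    }

print(to_instruction_pair({
    "question": "How to fix OOMKilled errors in Kubernetes?",
    "answer": "1. Check pod memory limits...",
    "category": "kubernetes",
}))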
Dataset Schema
{
"question": "How to fix OOMKilled errors in Kubernetes?",
"answer": "1. Check pod memory limits...",
"category": "kubernetes",
"difficulty": "intermediate",
"quality_score": 0.89,
"source": "github"
}
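Downstream pipelines may want to validate records against this shape before use. The model below is a hypothetical sketch using pydantic; it is not part of the dataset's tooling, and the non-empty-answer rule is an assumption.

from pydantic import BaseModel, field_validator

class QARecord(BaseModel):
    question: str
    answer: str
    category: str
    difficulty: str
    quality_score: float
    source: str

    @field_validator("answer")
    @classmethod
    def answer_not_empty(cls, v: str) -> str:
        # Reject empty or whitespace-only answers (illustrative rule).
        if not v.strip():
            raise ValueError("Answer is too short or empty")
        return v

record = QARecord(
    question="How to fix OOMKilled errors in Kubernetes?",
    answer="1. Check pod memory limits...",
    category="kubernetes",
    difficulty="intermediate",
    quality_score=0.89,
    source="github",
)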
Download
from datasets import load_dataset
dataset = load_dataset("Skilln/devops-qa-dataset")
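Once loaded, the fields from the schema above can be used to slice the data. The split name and the quality threshold below are assumptions chosen for illustration.

from datasets import load_dataset

dataset = load_dataset("Skilln/devops-qa-dataset", split="train")  # split name is an assumption

# Keep only high-quality Kubernetes examples (0.8 threshold chosen for illustration).
kubernetes_high_quality = dataset.filter(
    lambda ex: ex["category"] == "kubernetes" and ex["quality_score"] >= 0.8
)
print(len(kubernetes_high_quality))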
License
CC-BY-SA 4.0 (compatible with source licenses)
Citation
@dataset{devops_qa_2026,
title={DevOps Q&A Dataset},
author={Skilln},
year={2026},
url={https://huggingface.co/datasets/Skilln/devops-qa-dataset}
}