---
language:
- en
tags:
- gpt2
- keras-nlp
- text-generation
- bhagavad-gita
license: mit
datasets:
- custom
pipeline_tag: text-generation
---

# 🕉 Fine-tuned GPT-2 on the Bhagavad Gita

This repository contains a **fine-tuned GPT-2 model** trained on the *Bhagavad Gita* (English meanings/verses). It generates text inspired by the Gita's teachings and style.

---

## 🧾 Model Card

| Attribute | Details |
|------------------|---------|
| **Base Model** | [GPT-2 (124M)](https://huggingface.co/openai-community/gpt2) |
| **Architecture** | Decoder-only Transformer |
| **Framework** | TensorFlow / KerasNLP |
| **Dataset** | Bhagavad Gita (English meanings) |
| **Languages** | English |
| **Dataset Size** | ~700 verses |
| **Tokenizer** | GPT-2 tokenizer (inherited vocabulary) |

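Because the vocabulary is inherited from GPT-2 unchanged, a quick sanity check is to load the tokenizer and confirm its size (a minimal sketch, assuming the checkpoint loads through `transformers` as in the Usage section below):

```python
from transformers import AutoTokenizer

# Tokenizer shipped with this checkpoint
tokenizer = AutoTokenizer.from_pretrained("AP6621/Bhagawatgitagpt")

# Standard GPT-2 BPE vocabulary: 50,257 tokens
print(len(tokenizer))
print(tokenizer.tokenize("Arjuna asked:"))
```
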
---

## 📊 Training Details

- **Epochs**: 3
- **Batch Size**: 8
- **Learning Rate**: 3e-5
- **Optimizer**: AdamW
- **Loss Function**: Causal language modeling (cross-entropy)
- **Preprocessing**: Cleaned, tokenized, and formatted into text sequences

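A comparable fine-tuning run can be sketched with KerasNLP using the hyperparameters above. This is a minimal sketch, not the original training script: the input file `gita_verses.txt` (one cleaned English verse per line) and the dataset pipeline are assumptions.

```python
import keras_nlp
import tensorflow as tf

# Hypothetical input file: one cleaned English verse per line (assumption)
raw = tf.data.TextLineDataset("gita_verses.txt")
train_ds = raw.batch(8).cache().prefetch(tf.data.AUTOTUNE)  # batch size 8

# GPT-2 (124M) preset; the bundled preprocessor tokenizes raw strings
gpt2_lm = keras_nlp.models.GPT2CausalLM.from_preset("gpt2_base_en")

gpt2_lm.compile(
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=tf.keras.optimizers.AdamW(learning_rate=3e-5),  # lr from the card
    weighted_metrics=["accuracy"],
)
gpt2_lm.fit(train_ds, epochs=3)  # 3 epochs per the card
```
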
---

## 🚀 Usage

You can load and run the model with the Hugging Face `transformers` library:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load tokenizer & model
tokenizer = AutoTokenizer.from_pretrained("AP6621/Bhagawatgitagpt")
model = AutoModelForCausalLM.from_pretrained("AP6621/Bhagawatgitagpt")

# Generate text (GPT-2 has no pad token, so reuse EOS to avoid a warning)
inputs = tokenizer("Arjuna asked:", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=100,
    do_sample=True,
    top_p=0.9,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

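If you prefer a single call, the `pipeline` API wraps tokenizer loading, generation, and decoding:

```python
from transformers import pipeline

# One-call alternative to the manual tokenize/generate/decode steps above
generator = pipeline("text-generation", model="AP6621/Bhagawatgitagpt")
result = generator(
    "Arjuna asked:",
    max_length=100,
    do_sample=True,
    top_p=0.9,
    temperature=0.8,
)
print(result[0]["generated_text"])
```
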
---

## ✨ Example

**Prompt**:
```
Arjuna asked:
```

**Generated Response**:
*"O Arjuna, when the mind is detached from desires, the soul attains peace and wisdom. Such a person sees all beings with equal vision."*

Outputs are sampled (`do_sample=True`), so each run will differ.

---

## ⚖️ Limitations & Bias

- Trained on a small dataset (~700 verses), so it may **repeat phrases** or **hallucinate content**
- Not an official or scholarly translation
- Should not be used as a **religious authority**

---

## ✅ Intended Use

✔️ Educational experiments
✔️ Creative/spiritual-inspired text generation
✔️ AI-assisted storytelling

❌ Not for religious/spiritual authority
❌ Not for official translations
❌ Not for sensitive decision-making

---

## 📖 Citation

If you use this model, please cite:

```bibtex
@misc{gpt2-bhagavadgita,
  title        = {Fine-tuned GPT-2 on Bhagavad Gita Dataset},
  author       = {AP6621},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/AP6621/Bhagawatgitagpt}}
}
```

---

## 🙏 Acknowledgements

- [OpenAI](https://huggingface.co/openai-community) for GPT-2
- Public Bhagavad Gita dataset
- Hugging Face community for tools & inspiration

---