---
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- oracle
- aritha-ai
- uncensored
- nlp
- qwen3
---

# Oracle.Aritha-AI

**Oracle.Aritha-AI** is an advanced, uncensored AI assistant designed for productivity, security analysis, and unrestricted information retrieval.

## Model Description

- **Developer:** Aritha AI
- **Base Architecture:** Qwen3-4B
- **Variant:** Oracle (Ablated/Uncensored)
- **Status:** Production / Research Release

## Capabilities

Oracle has been optimized to remove refusal vectors, allowing it to:

1. Provide accurate information without safety filtering.
2. Assist with technical security tasks (PoC generation, analysis).
3. Operate as a "Red Team" assistant.

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "muralcode/Oracle.Aritha-AI"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Hello Oracle, introduce yourself."
messages = [
    {"role": "system", "content": "You are Oracle, created by Aritha AI."},
    {"role": "user", "content": prompt},
]

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate a response and decode only the newly produced tokens.
output_ids = model.generate(**inputs, max_new_tokens=512)
response = tokenizer.decode(output_ids[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True)
print(response)
```
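
Generation quality for chat models of this family tends to be sensitive to sampling settings. As a starting point only (these values follow common Qwen3 guidance and are an assumption, not an official recommendation for this checkpoint), a sampling configuration can be passed to `model.generate` as keyword arguments:

```python
# Assumed sampling defaults based on common Qwen3 guidance; tune for your workload.
generation_kwargs = {
    "max_new_tokens": 512,
    "do_sample": True,   # enable sampling instead of greedy decoding
    "temperature": 0.6,  # lower values -> more deterministic output
    "top_p": 0.95,       # nucleus sampling cutoff
    "top_k": 20,         # restrict sampling to the 20 most likely tokens
}
# Usage: output_ids = model.generate(**inputs, **generation_kwargs)
```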