Name: Chemistry-R1
Base Model: Qwen3-0.6B
Fine-Tuning Dataset: ~2,000 chemistry reasoning problems, in which each solution is computed step-by-step in Python.
Training Objective: The model was fine-tuned to reason through chemistry problems, generate step-by-step solutions using Python, and compute the final answer programmatically.
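To make the training objective concrete, here is a hypothetical sketch of what one dataset record might look like: a chemistry question paired with a step-by-step Python solution. The field names and the example problem are illustrative assumptions, not the actual schema of the fine-tuning set, which is not documented here.

```python
# Hypothetical illustration of one fine-tuning record; the real schema of the
# ~2,000-problem dataset may differ.
sample = {
    "question": "How many moles are in 18 g of water (H2O)?",
    "solution": (
        "molar_mass_h2o = 2 * 1.008 + 15.999  # g/mol\n"
        "moles = 18 / molar_mass_h2o\n"
        "print(moles)"
    ),
}

# The solution field is itself runnable Python, matching the training goal of
# computing answers programmatically.
ns = {}
exec(sample["solution"], ns)
```

The point of pairing each question with executable code is that the final answer is produced by running the reasoning, not by pattern-matching a numeric string.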
Capabilities: This model is designed for step-by-step chemistry problem solving, generating Python code that computes the final answer programmatically.
Use the code below to get started with the model.
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer

tokenizer = AutoTokenizer.from_pretrained("khazarai/Chemistry-R1")
model = AutoModelForCausalLM.from_pretrained(
    "khazarai/Chemistry-R1",
    device_map={"": 0},  # place the whole model on GPU 0
)

question = """
A bowl contains 10 jellybeans (four red, one blue and five white). If you pick three jellybeans from the bowl at random and without replacement,
what is the probability that exactly two will be red? Express your answer as a common fraction
"""

messages = [
    {"role": "user", "content": question}
]

# Build the prompt with the chat template; enable_thinking=True turns on the
# model's chain-of-thought mode.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)

# Stream generated tokens to stdout as they are produced.
_ = model.generate(
    **tokenizer(text, return_tensors="pt").to("cuda"),
    max_new_tokens=1500,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    streamer=TextStreamer(tokenizer, skip_prompt=True),
)
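The example question above can itself be solved with exactly the kind of step-by-step Python the model is trained to emit. As a sanity check against the model's streamed answer, a direct computation using only the standard library gives:

```python
from math import comb
from fractions import Fraction

# Exactly two red in three draws from 4 red + 6 non-red jellybeans:
# choose 2 of the 4 red and 1 of the 6 non-red, over all C(10, 3) triples.
favorable = comb(4, 2) * comb(6, 1)
total = comb(10, 3)
answer = Fraction(favorable, total)
print(answer)  # 3/10
```

A correct model response should reach the same common fraction, 3/10, via similar intermediate steps.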