# Pacific-Prime: Python Node

Pure Python specialist fine-tuned from Pacific-Prime Code (I64 architecture, 1.5B parameters).
## Skills
- Python basics & standard library
- Algorithms & data structures
- Object-oriented programming
- Decorators & generators
- List comprehensions
- File I/O & error handling
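To illustrate the skill areas above, here is a short, self-contained sketch of the kind of Python the model targets (a hand-written example for this card, not model output or dataset material), combining a decorator, a generator, and basic error handling:

```python
from functools import wraps

def memoize(func):
    """Decorator that caches results keyed by the argument tuple."""
    cache = {}
    @wraps(func)
    def wrapper(*args):
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]
    return wrapper

@memoize
def fib(n):
    """Return the n-th Fibonacci number (recursion made cheap by memoize)."""
    if n < 0:
        raise ValueError("n must be non-negative")
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def evens(limit):
    """Generator yielding even numbers below limit."""
    for i in range(limit):
        if i % 2 == 0:
            yield i

print(fib(30))          # 832040
print(list(evens(10)))  # [0, 2, 4, 6, 8]
```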
## Training
- Architecture: I64 (Complexity-Deep)
- Parameters: 1.5B
- Base model: pacific-prime-code (checkpoint epoch 70)
- Method: Full SFT (no LoRA)
- Dataset: python_code_instructions_18k_alpaca (18K samples)
- Epochs: 1000
- Max context: 4096 tokens
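For quick reference, the hyperparameters above can be collected into a plain dictionary. The key names here are illustrative only, not the actual training script's configuration keys:

```python
# Hypothetical summary of the training setup listed above;
# key names are illustrative, not taken from the training code.
TRAINING_CONFIG = {
    "architecture": "I64 (Complexity-Deep)",
    "parameters": "1.5B",
    "base_model": "pacific-prime-code",
    "base_checkpoint_epoch": 70,
    "method": "full_sft",  # full fine-tune, no LoRA adapters
    "dataset": "python_code_instructions_18k_alpaca",
    "num_samples": 18_000,
    "epochs": 1000,
    "max_context_tokens": 4096,
}

# Rough scale of the run: total samples seen across all epochs.
total_samples_seen = TRAINING_CONFIG["num_samples"] * TRAINING_CONFIG["epochs"]
print(total_samples_seen)  # 18000000
```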
## Inference with vLLM-I64

Use our custom vLLM engine with native I64 support:

### vllm-i64

```bash
git clone https://github.com/Complexity-ML/vllm-i64.git
cd vllm-i64
pip install -e .
```

```python
from vllm import LLM, SamplingParams

model = LLM(model="Pacific-Prime/python-node")
params = SamplingParams(temperature=0.7, max_tokens=4096)

prompt = "User: Write a Python function to find the longest common subsequence of two strings.\nAssistant:"
output = model.generate([prompt], params)
print(output[0].outputs[0].text)
```
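The example above uses a plain `User:`/`Assistant:` turn format. A small helper keeps prompts consistent; the template is inferred from that example and is an assumption, not a documented chat template:

```python
def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the User:/Assistant: turn format
    used in the inference example above (assumed, not official)."""
    return f"User: {instruction}\nAssistant:"

prompt = build_prompt("Write a Python function that reverses a string.")
print(prompt)
```

Passing a list of such prompts to `model.generate` runs them as a single batch.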
## Serve Your Own I64 Model

Trained your own I64 model with complexity-deep? Serve it with vllm-i64:

```python
from vllm import LLM, SamplingParams

model = LLM(model="/path/to/your/i64-model")
params = SamplingParams(temperature=0.7, max_tokens=4096)

output = model.generate(["User: Hello!\nAssistant:"], params)
print(output[0].outputs[0].text)
```
## Links

- Complexity Framework – ML framework for building & training I64 models
- Complexity-Deep – Training framework & architecture
- vllm-i64 – Inference engine for I64 models
## License

CC BY-NC 4.0