Update README.md
**Ollama**: We've released a [Q4_K_M GGUF quantization of Instinct](https://huggingface.co/continuedev/instinct-GGUF) for efficient local inference. Try it with [Continue's Ollama integration](https://docs.continue.dev/guides/ollama-guide).
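
If you want to try the GGUF without writing a Modelfile, Ollama can pull models straight from Hugging Face. A minimal sketch, assuming the repo's Q4_K_M file is exposed under the tag below (check the repo's available files):

```bash
# Pull the quantized weights directly from Hugging Face
# (the :Q4_K_M tag is an assumption; check the repo for the exact tag)
ollama pull hf.co/continuedev/instinct-GGUF:Q4_K_M

# Quick smoke test from the terminal before wiring it into Continue
ollama run hf.co/continuedev/instinct-GGUF:Q4_K_M "def fibonacci(n):"
```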
Besides Ollama, there are many ways to plug a local model into Continue; internally we used an endpoint served by [SGLang](https://github.com/sgl-project/sglang). Serve the model with either of the options below, then [connect it with Continue](https://docs.continue.dev/guides/how-to-self-host-a-model).
**SGLang**: `python3 -m sglang.launch_server --model-path continuedev/instinct --load-format safetensors`
**vLLM**: `vllm serve continuedev/instinct --served-model-name instinct --load-format safetensors`
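
Both servers expose an OpenAI-compatible API once they're up, so you can sanity-check the endpoint before pointing Continue at it. A minimal sketch, assuming default ports (vLLM on 8000; SGLang on 30000) and the `instinct` served model name from the vLLM command above:

```bash
# Completion request against the local vLLM endpoint
# (for SGLang, swap the port to 30000 and the model name to continuedev/instinct)
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "instinct",
        "prompt": "def binary_search(arr, target):",
        "max_tokens": 64
      }'
```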