Deploy Deepseek R1 671B param model using Tensorfuse
Deepseek-R1 is an advanced large language model designed to handle a wide range of conversational and generative tasks. It
has proven capabilities in various benchmarks and excels in complex reasoning. In this guide, we will walk you through deploying
the Deepseek-R1 671B parameter model on your cloud account using Tensorfuse. We will be using H100 GPUs for this example; however,
it is super easy to deploy on other GPUs as well (see Tip).
Although this guide focuses on deploying the 671B param model, you can easily adapt the instructions to deploy any distilled version
of Deepseek-R1, including the 1.5B, 7B, and 70B param models. See the table at the end of the guide or visit the GitHub repo.
Key strengths of Deepseek-R1 include:
• High Performance on Evaluations: Achieves strong results on industry-standard benchmarks.
• Advanced Reasoning: Handles multi-step logical reasoning tasks with minimal context.
• Multilingual Support: Pretrained on diverse linguistic data, making it adept at multilingual understanding.
• Scalable Distilled Models: Smaller distilled variants (1.5B, 7B, 32B, 70B) offer cheaper deployment options without a steep drop in quality.
Below is a quick snapshot of benchmark scores for Deepseek-R1:
| Benchmark | Deepseek-R1 (671B) | Remarks |
| --- | --- | --- |
| MMLU | 90.8% | Near state-of-the-art |
| AIME 2024 (Pass@1) | 79.8% | Mathematical and reasoning abilities |
| LiveCodeBench (Pass@1-CoT) | 65.9% | Excels at multi-step reasoning |
The combination of these strengths makes Deepseek-R1 an excellent choice for production-ready applications, from chatbots to enterprise-level data analytics.
To deploy the model on Tensorfuse, you need three things:
• Your code (in this example, the vLLM API server code that ships with the Docker image).
• Your environment (as a Dockerfile).
• A deployment configuration (deployment.yaml).
We will also add token-based authentication to our service, compatible with OpenAI client libraries. We will store the authentication token (VLLM_API_KEY) as a Tensorfuse secret. Unlike some other models, Deepseek-R1 671B does not require a separate Hugging Face token, so we can skip that step.
Generate a random string that will be used as your API authentication token. Store it as a secret in Tensorfuse using the command below. For the purpose of this demo, we will be using vllm-key as your API key.
Ensure that you use a randomly generated token in production. You can quickly generate one
with `openssl rand -base64 32`; remember to keep it safe, as Tensorfuse secrets are opaque.
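A sketch of the commands involved, assuming a `tensorkube secret create <name> KEY=VALUE` syntax (check the Tensorfuse secrets documentation for the exact form):

```bash
# Generate a random token and keep it somewhere safe (Tensorfuse secrets are opaque)
openssl rand -base64 32

# Store the token as a Tensorfuse secret so the container can read it as VLLM_API_KEY.
# NOTE: the secret name and KEY=VALUE syntax shown here are illustrative; verify the
# exact arguments with `tensorkube secret create --help`.
tensorkube secret create vllm-token VLLM_API_KEY=vllm-key
```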
We will use a vLLM OpenAI-compatible image as our base image; it comes with all the necessary
dependencies to run vLLM. The official image is published on Docker Hub as vllm/vllm-openai, and the Dockerfile below uses a patched build of it, tensorfuse/vllm-openai:v0.8.4-patched.
Dockerfile
```dockerfile
# Dockerfile for Deepseek-R1-671B
FROM tensorfuse/vllm-openai:v0.8.4-patched

# Enable HF Hub Transfer for faster model downloads
ENV HF_HUB_ENABLE_HF_TRANSFER 1

# Expose port 80
EXPOSE 80

# DeepSeek-R1 model configuration:
# - Using deepseek-ai/DeepSeek-R1 model with bfloat16 dtype (~1400GB GPU memory required)
# - Running on 8 GPUs with tensor parallelism
# - Max 4096 tokens to avoid OOM errors
# - CPU offload of 80GB needed as 8 H100s are not sufficient
# - Using 95% GPU memory utilization
# - Server runs on port 80
# - API key read from environment variable for authentication
ENTRYPOINT python3 -m vllm.entrypoints.openai.api_server \
    --model deepseek-ai/DeepSeek-R1 \
    --dtype bfloat16 \
    --trust-remote-code \
    --tensor-parallel-size 8 \
    --max-model-len 4096 \
    --port 80 \
    --cpu-offload-gb 80 \
    --gpu-memory-utilization 0.95 \
    --api-key ${VLLM_API_KEY}
```
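Optionally, you can sanity-check that the image builds before handing it to Tensorfuse; the tag name below is arbitrary:

```bash
# Build the image locally to catch Dockerfile errors early (optional)
docker build -t deepseek-r1-671b .
```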
We’ve configured the vLLM server with numerous CLI flags tailored to our specific use case. A comprehensive list of
vLLM’s server flags is available in the vLLM documentation for further reference, and if you have questions about selecting flags for production, the Tensorfuse Community is an excellent place to seek guidance.
Although you can deploy Tensorfuse apps from the command line, it is always recommended to keep a config file so
that you can follow a GitOps approach to deployment.
Don’t forget the readiness endpoint in your config. Tensorfuse uses this endpoint to ensure that your service is healthy.
If no readiness endpoint is configured, Tensorfuse defaults to probing the /readiness path on port 80, which can cause issues if your app is not listening on that path.
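For illustration, a minimal sketch of what deployment.yaml could look like. The field names (gpus, gpu_type, readiness, and so on) are assumptions here, not the verified Tensorfuse schema, so consult the Tensorfuse configuration reference; the /health path is the health-check endpoint exposed by vLLM's OpenAI-compatible server.

```yaml
# Illustrative sketch only: field names are assumptions, not the verified
# Tensorfuse schema. Check the Tensorfuse configuration reference.
gpus: 8              # matches --tensor-parallel-size 8 in the Dockerfile
gpu_type: h100
min_scale: 1
max_scale: 3
port: 80             # must match the port the vLLM server listens on
readiness:
  httpGet:
    path: /health    # vLLM's OpenAI-compatible server exposes /health
    port: 80
```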
Now you can deploy your service using the following command:
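The --config-file flag shown below is an assumption about the tensorkube CLI; verify the exact flag name with `tensorkube deploy --help`:

```bash
# Deploy the service described by the Dockerfile and deployment.yaml in this directory
tensorkube deploy --config-file ./deployment.yaml
```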
Voila! Your autoscaling production LLM service is ready. Only authenticated requests will be served by your endpoint. Once the deployment is successful, you can see the status of your app by running:
```bash
tensorkube deployment list
```
And that’s it! You have successfully deployed one of the strongest open-source reasoning models available today.
Remember to configure a TLS endpoint with a custom domain before going to production.
To test it out, replace YOUR_APP_URL with the endpoint shown in the output of the above command and run:
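The request below uses curl against the OpenAI-compatible completions endpoint that vLLM exposes, authenticating with the vllm-key token created earlier:

```bash
curl -X POST "YOUR_APP_URL/v1/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer vllm-key" \
  -d '{
        "model": "deepseek-ai/DeepSeek-R1",
        "prompt": "Hello, Deepseek R1! How are you today?",
        "max_tokens": 200
      }'
```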
Because vLLM is compatible with the OpenAI API, you can use OpenAI’s client libraries
as well. Here’s a sample snippet using Python:
```python
# Note: this snippet uses the legacy (pre-1.0) openai client interface
# (module-level api_base/api_key and openai.Completion).
import openai

# Replace with your actual URL and token
base_url = "YOUR_APP_URL/v1"
api_key = "vllm-key"

openai.api_base = base_url
openai.api_key = api_key

response = openai.Completion.create(
    model="deepseek-ai/DeepSeek-R1",
    prompt="Hello, Deepseek R1! How are you today?",
    max_tokens=200
)
print(response)
```
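If you are on the newer openai>=1.0 client, the equivalent call goes through an OpenAI client object instead of the module-level globals; a minimal sketch with the same placeholder URL and key:

```python
from openai import OpenAI

# Replace with your actual URL and token
client = OpenAI(base_url="YOUR_APP_URL/v1", api_key="vllm-key")

response = client.completions.create(
    model="deepseek-ai/DeepSeek-R1",
    prompt="Hello, Deepseek R1! How are you today?",
    max_tokens=200,
)
print(response.choices[0].text)
```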
Although this guide has focused on Deepseek-R1 671B, there are smaller distilled variants available. Each variant changes primarily in:
• Model name in the Dockerfile (--model flag).
• GPU resources in deployment.yaml.
• (Optional) --tensor-parallel-size depending on your hardware.
Below is a table summarizing the key changes for each variant: