đź’ˇ
DeepSeek-R1 is a reasoning-focused language model developed by DeepSeek-AI, designed to tackle complex tasks in mathematics, code generation, and natural language reasoning through advanced reinforcement learning (RL) techniques.

Request

Request address and method

Request address                   Request method
https://t.ruhr/api/deepseek_r1    POST

Description of request parameters

Parameter Name   Type     Description                           Required
access_key       string   The identifier of the API request     true
content          string   The input content sent to the model   true

Response

Description of response parameters

Parameter Name      Type      Description
success             boolean   A normal response returns true
date                string    The time the response was generated
reasoning_content   string    The model's reasoning process (chain of thought)
content             string    The model's final answer
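
The endpoint can also be exercised over plain HTTP. Below is a minimal sketch in Python using the requests library, assuming the Bearer-token authentication and JSON body format used by the cURL examples later in this document:

import os
import requests

# Minimal sketch: call the endpoint over plain HTTP and read the documented
# response fields. Auth and body mirror the cURL examples below.
resp = requests.post(
    "https://t.ruhr/api/deepseek_r1",
    headers={"Authorization": f"Bearer {os.getenv('CONFIG_ACCESS_KEY')}"},
    json={
        "model": "latest",
        "messages": [{"role": "user", "content": "Hello!"}]
    },
)
data = resp.json()
if data["success"]:                              # true on a normal response
    print(data["message"]["reasoning_content"])  # the reasoning process
    print(data["message"]["content"])            # the final answer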

Rapid deployment

Because the reasoning process of DeepSeek-R1-style models can be long, responses may be slow or time out. We therefore recommend calling the model with stream output (see Stream Output below).

Python

import os
from openai import OpenAI

# Initialize the OpenAI client
client = OpenAI(
    # If no environment variable is configured, replace the following line
    # with your access key: api_key = "<YOUR-ACCESS-KEY>"
    api_key = os.getenv("CONFIG_ACCESS_KEY"),
    base_url = "https://t.ruhr/api/deepseek_r1"
)

# Create a chat completion request
completion = client.chat.completions.create(
    model = "latest",
    messages = [
        {'role': 'user', 'content': 'Hello!'}
    ]
)

# Print the reasoning process via the "reasoning_content" field:
print("Process of reasoning:")
print(completion.choices[0].message.reasoning_content)

# Print the final answer via the "content" field:
print("Final answer:")
print(completion.choices[0].message.content)

Please replace <YOUR-ACCESS-KEY> with your access key.

NodeJS

import OpenAI from 'openai';

// Initialize the OpenAI client
const openai = new OpenAI({
    apiKey: process.env.CONFIG_ACCESS_KEY, // Read from environment variable
    baseURL: 'https://t.ruhr/api/deepseek_r1'
});


const completion = await openai.chat.completions.create({
    model: 'latest',
    messages: [
        {role: 'user', content: 'Hello!'}
    ],
});

console.log('Process of reasoning:')
console.log(completion.choices[0].message.reasoning_content)
console.log('Final answer:')
console.log(completion.choices[0].message.content)

Please replace <YOUR-ACCESS-KEY> with your access key.

cURL

đź’ˇ
You can quickly experience the DeepSeek model via the OpenAI SDK or OpenAI-compatible HTTP.
curl -X POST https://t.ruhr/api/deepseek_r1 \
-H "Authorization: Bearer $CONFIG_ACCESS_KEY" \
-H "Content-Type: application/json" \
-d '{
    "version": "latest",
    "messages": [
        {
            "role": "user", 
            "content": "Hello!"
        }
    ]
}'

Sample response (JSON)

{
    "success": true,
    "date": "2025-01-01 08:00:00",
    "message": {
        "reasoning_content": "",
        "content": null,
        "role": "assistant"
    },
    "finish_reason": "stop",
    "index": 0,
    "logprobs": null,
    "id": "00000000-0000-0000-0000-000000000000"
}

This JSON is a demo only and does not represent an actual response.

Multi-round dialogue

The T.ruhr API does not record your conversation history by default. The multi-round dialogue feature gives the model a "memory" for scenarios such as follow-up questions and information gathering that require continuous exchange. When you use a DeepSeek-R1-style model, the response contains a reasoning_content field (the thought process) and a content field (the reply). Add only the content field to the context via {'role': 'assistant', 'content': the content returned by the API}; the reasoning_content field does not need to be added.

Python

import os
from openai import OpenAI

# Initialize the OpenAI client
client = OpenAI(
    # If no environment variable is configured, replace the following line
    # with your access key: api_key = "<YOUR-ACCESS-KEY>"
    api_key = os.getenv("CONFIG_ACCESS_KEY"),
    base_url = "https://t.ruhr/api/deepseek_r1"
)

# Manage context via the messages array
messages = [
    {'role': 'user', 'content': 'Hello!'}
]

# Create the first chat completion request
completion = client.chat.completions.create(
    model = "latest",
    messages = messages
)

print("=" * 20 + "First round of dialogue" + "=" * 20 + "\n")
# Print the reasoning process via the "reasoning_content" field:
print("=" * 20 + "Process of reasoning" + "=" * 20 + "\n")
print(completion.choices[0].message.reasoning_content)
# Print the final answer via the "content" field:
print("=" * 20 + "Final answer" + "=" * 20 + "\n")
print(completion.choices[0].message.content)

# Append only the content field to the context; reasoning_content is not needed
messages.append({'role': 'assistant', 'content': completion.choices[0].message.content})
messages.append({'role': 'user', 'content': 'Who are you?'})

# Create the second chat completion request with the accumulated context
completion = client.chat.completions.create(
    model = "latest",
    messages = messages
)

print("=" * 20 + "Second round of dialogue" + "=" * 20 + "\n")
print("=" * 20 + "Process of reasoning" + "=" * 20 + "\n")
print(completion.choices[0].message.reasoning_content)
print("=" * 20 + "Final answer" + "=" * 20 + "\n")
print(completion.choices[0].message.content)

Please replace <YOUR-ACCESS-KEY> with your access key.

NodeJS

import OpenAI from 'openai';

// Initialize the OpenAI client
const openai = new OpenAI({
    apiKey: process.env.CONFIG_ACCESS_KEY, // Read from environment variable
    baseURL: 'https://t.ruhr/api/deepseek_r1'
});

// Manage context via the messages array
const messages = [
    {role: 'user', content: 'Hello!'}
];

let completion = await openai.chat.completions.create({
    model: 'latest',
    messages: messages,
});

console.log('First round of dialogue:');
console.log('Process of reasoning:');
console.log(completion.choices[0].message.reasoning_content);
console.log('Final answer:');
console.log(completion.choices[0].message.content);

// Append only the content field to the context; reasoning_content is not needed
messages.push({role: 'assistant', content: completion.choices[0].message.content});
messages.push({role: 'user', content: 'Who are you?'});

completion = await openai.chat.completions.create({
    model: 'latest',
    messages: messages,
});

console.log('Second round of dialogue:');
console.log('Process of reasoning:');
console.log(completion.choices[0].message.reasoning_content);
console.log('Final answer:');
console.log(completion.choices[0].message.content);

Please replace <YOUR-ACCESS-KEY> with your access key.

cURL

đź’ˇ
You can use the multi-round dialog feature via the OpenAI SDK or OpenAI-compatible HTTP.
curl -X POST https://t.ruhr/api/deepseek_r1 \
-H "Authorization: Bearer $CONFIG_ACCESS_KEY" \
-H "Content-Type: application/json" \
-d '{
    "version": "latest",
    "messages": [
        {
            "role": "user", 
            "content": "Hello!"
        },
        {
            "role": "assistant",
            "content": ""
        },
        {
            "role": "user",
            "content": "Who are you?"
        }
    ]
}'

Sample response (JSON)

{
    "success": true,
    "date": "2025-01-01 08:00:00",
    "message": {
        "reasoning_content": "",
        "content": null,
        "role": "assistant"
    },
    "finish_reason": "stop",
    "index": 0,
    "logprobs": null,
    "id": "00000000-0000-0000-0000-000000000000"
}

This JSON is a demo only and does not represent an actual response.

Stream Output

DeepSeek-R1-style models may produce long reasoning processes. To reduce the risk of timeouts, we recommend calling DeepSeek-R1-style models with stream output.

Python

from openai import OpenAI
import os

# Initialize the OpenAI client
client = OpenAI(
    # If no environment variable is configured, replace the following line
    # with your access key: api_key = "<YOUR-ACCESS-KEY>"
    api_key = os.getenv("CONFIG_ACCESS_KEY"),
    base_url = "https://t.ruhr/api/deepseek_r1"
)

reasoning_content = ""  # Accumulates the complete reasoning process
answer_content = ""     # Accumulates the full answer
is_answering = False    # Tracks whether reasoning has ended and the answer has started

# Create a streaming chat completion request
completion = client.chat.completions.create(
    model = "latest",
    messages = [
        {"role": "user", "content": "Hello!"}
    ],
    stream = True
)

print("\n" + "=" * 20 + "Process of reasoning" + "=" * 20 + "\n")

for chunk in completion:
    # If chunk.choices is empty, print usage
    if not chunk.choices:
        print("\nUsage:")
        print(chunk.usage)
    else:
        delta = chunk.choices[0].delta
        # Print the reasoning process as it streams in
        if hasattr(delta, 'reasoning_content') and delta.reasoning_content is not None:
            print(delta.reasoning_content, end='', flush=True)
            reasoning_content += delta.reasoning_content
        elif delta.content:
            # Print the header once when the answer starts
            if not is_answering:
                print("\n" + "=" * 20 + "Final answer" + "=" * 20 + "\n")
                is_answering = True
            # Print the answer as it streams in
            print(delta.content, end='', flush=True)
            answer_content += delta.content

# The accumulated fields can also be printed after the stream ends:
# print("=" * 20 + "Process of reasoning" + "=" * 20 + "\n")
# print(reasoning_content)
# print("=" * 20 + "Final answer" + "=" * 20 + "\n")
# print(answer_content)

Please replace <YOUR-ACCESS-KEY> with your access key.

NodeJS

import OpenAI from 'openai';
import process from 'process';

// Initialize the OpenAI client
const openai = new OpenAI({
    apiKey: process.env.CONFIG_ACCESS_KEY, // Read from environment variable
    baseURL: 'https://t.ruhr/api/deepseek_r1'
});

let reasoningContent = '';  // Accumulates the complete reasoning process
let answerContent = '';     // Accumulates the full answer
let isAnswering = false;    // Tracks whether reasoning has ended and the answer has started

async function main() {
    try {
        const stream = await openai.chat.completions.create({
            model: 'latest',
            messages: [{role: 'user', content: 'Hello!'}],
            stream: true
        });

        console.log('\n' + '='.repeat(20) + 'Process of reasoning:' + '='.repeat(20) + '\n');

        for await (const chunk of stream) {
            if (!chunk.choices?.length) {
                console.log('\nUsage:');
                console.log(chunk.usage);
                continue;
            }

            const delta = chunk.choices[0].delta;
            
            // Process of reasoning
            if (delta.reasoning_content) {
                process.stdout.write(delta.reasoning_content);
                reasoningContent += delta.reasoning_content;
            } 
            // Processing of formal answer
            else if (delta.content) {
                if (!isAnswering) {
                    console.log('\n' + '='.repeat(20) + 'Final answer:' + '='.repeat(20) + '\n');
                    isAnswering = true;
                }
                process.stdout.write(delta.content);
                answerContent += delta.content;
            }
        }
    } catch (error) {
        console.error('Error:', error);
    }
}

main();

Please replace <YOUR-ACCESS-KEY> with your access key.

cURL

curl -X POST https://t.ruhr/api/deepseek_r1 \
-H "Authorization: Bearer $CONFIG_ACCESS_KEY" \
-H "Content-Type: application/json" \
-d '{
    "version": "latest",
    "messages": [
        {
            "role": "user", 
            "content": "Who are you?"
        }
    ],
    "stream": true
}'

Sample response (JSON)

data: {"success": true, "date": "2025.01.01 08:00:00", "message": {"reasoning_content": "", "content": null, "role": "assistant"}, "finish_reason": null, "index": 0, "logprobs": null, "id": "00000000-0000-0000-0000-000000000000"}

data: {"success": true, "date": "2025.01.01 08:00:00", "message": {"reasoning_content": "Hello", "content": null, "role": "assistant"}, "finish_reason": null, "index": 0, "logprobs": null, "id": "00000000-0000-0000-0000-000000000000"}

data: {"success": true, "date": "2025.01.01 08:00:00", "message": {"reasoning_content": ",", "content": null, "role": "assistant"}, "finish_reason": null, "index": 0, "logprobs": null, "id": "00000000-0000-0000-0000-000000000000"}

data: {"success": true, "date": "2025.01.01 08:00:00", "message": {"reasoning_content": "i", "content": null, "role": "assistant"}, "finish_reason": null, "index": 0, "logprobs": null, "id": "00000000-0000-0000-0000-000000000000"}

data: {"success": true, "date": "2025.01.01 08:00:00", "message": {"reasoning_content": "am", "content": null, "role": "assistant"}, "finish_reason": null, "index": 0, "logprobs": null, "id": "00000000-0000-0000-0000-000000000000"}

...

data: {"success": true, "date": "2025.01.01 08:00:00", "message": {"reasoning_content": "help", "content": null, "role": "assistant"}, "finish_reason": null, "index": 0, "logprobs": null, "id": "00000000-0000-0000-0000-000000000000"}

data: {"success": true, "date": "2025.01.01 08:00:00", "message": {"reasoning_content": "you", "content": null, "role": "assistant"}, "finish_reason": null, "index": 0, "logprobs": null, "id": "00000000-0000-0000-0000-000000000000"}

data: {"success": true, "date": "2025.01.01 08:00:00", "message": {"reasoning_content": ".", "content": null, "role": "assistant"}, "finish_reason": null, "index": 0, "logprobs": null, "id": "00000000-0000-0000-0000-000000000000"}

data: {"success": true, "date": "2025.01.01 08:00:00", "message": {"reasoning_content": "", "content": null, "role": "assistant"}, "finish_reason": "stop", "index": 0, "logprobs": null, "id": "00000000-0000-0000-0000-000000000000"}

data: [DONE]

This JSON is a demo only and does not represent an actual response.
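
If you consume the stream over plain HTTP rather than through the OpenAI SDK, you must parse these data: lines yourself. Below is a minimal sketch in Python with the requests library, assuming the server emits standard server-sent events terminated by a data: [DONE] sentinel, as in the demo above:

import json
import os
import requests

# Minimal sketch: read the SSE stream line by line and print tokens as they arrive.
resp = requests.post(
    "https://t.ruhr/api/deepseek_r1",
    headers={
        "Authorization": f"Bearer {os.getenv('CONFIG_ACCESS_KEY')}",
        "Content-Type": "application/json"
    },
    json={
        "model": "latest",
        "messages": [{"role": "user", "content": "Hello!"}],
        "stream": True
    },
    stream=True,
)
for line in resp.iter_lines(decode_unicode=True):
    if not line or not line.startswith("data:"):
        continue                      # skip keep-alive blank lines
    payload = line[len("data:"):].strip()
    if payload == "[DONE]":           # end-of-stream sentinel
        break
    chunk = json.loads(payload)
    message = chunk["message"]
    # Reasoning tokens arrive in reasoning_content, the answer in content
    print(message["reasoning_content"] or message["content"] or "", end="", flush=True)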

Caveats

Unsupported features

  • Function Calling
  • JSON Output
  • Chat Prefix Completion
  • Context Caching on Disk

DeepSeek officially recommends against setting system messages: "Avoid adding a system prompt; all instructions should be contained within the user prompt."
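
For example, an instruction that might otherwise go into a system message can be folded into the user message instead. A minimal sketch (the instruction text is illustrative):

# Not recommended for DeepSeek-R1-like models:
# messages = [
#     {'role': 'system', 'content': 'You are a concise math tutor.'},
#     {'role': 'user', 'content': 'Explain why 0.999... equals 1.'}
# ]

# Recommended: fold the instruction into the user prompt.
messages = [
    {'role': 'user', 'content': 'You are a concise math tutor. '
                                'Explain why 0.999... equals 1.'}
]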

Model calls do not currently support internet search.

Deep thinking

Deep thinking is enabled by default when calling a DeepSeek-R1-style model; the reasoning process is returned via the reasoning_content field.

Unsupported parameters: temperature, top_p, presence_penalty, frequency_penalty, logprobs, top_logprobs.

Setting any of these parameters has no effect, even though no error message is returned.
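
Because these parameters are silently ignored, it can be useful to strip them on the client side so the omission is at least visible in your own logs. A minimal sketch; UNSUPPORTED and safe_params are illustrative names, not part of the T.ruhr API:

# Illustrative helper: drop parameters the model silently ignores and warn about them.
UNSUPPORTED = {"temperature", "top_p", "presence_penalty",
               "frequency_penalty", "logprobs", "top_logprobs"}

def safe_params(params: dict) -> dict:
    """Return a copy of params without parameters that DeepSeek-R1-like models ignore."""
    dropped = UNSUPPORTED & params.keys()
    if dropped:
        print(f"Warning: silently ignored by the model: {', '.join(sorted(dropped))}")
    return {k: v for k, v in params.items() if k not in UNSUPPORTED}

# Usage: completion = client.chat.completions.create(**safe_params(request_params))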

Stability

If a call returns no response, times out, or fails with the error "An internal error has occured, please try again later or contact service support", the task may be queued or dropped during peak hours. Please retry later if the call fails.
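
A simple client-side mitigation is to retry with exponential backoff. A minimal sketch; the attempt count and delays are illustrative defaults, not T.ruhr recommendations:

import time

# Illustrative helper: retry a request a few times with exponential backoff.
def call_with_retry(make_request, max_attempts=3, base_delay=2.0):
    for attempt in range(1, max_attempts + 1):
        try:
            return make_request()
        except Exception as err:
            if attempt == max_attempts:
                raise
            delay = base_delay * 2 ** (attempt - 1)
            print(f"Attempt {attempt} failed ({err}); retrying in {delay:.0f}s")
            time.sleep(delay)

# Usage:
# completion = call_with_retry(lambda: client.chat.completions.create(
#     model="latest", messages=[{"role": "user", "content": "Hello!"}]))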