A lightweight async Python client for interacting with DeepSeek chat infrastructure through a local bypass + cookie + proof-of-work pipeline.
This project follows the same conceptual direction as, and also contains elements from:
Warning
This project is built around reverse-engineered infrastructure behavior. Use responsibly and be aware that API changes may break functionality without notice.
Important
Cookie generation is required before using the API client. Run `python run_and_get_cookies.py` first — the client will not work without valid cookies.
dsk++ provides:
- async DeepSeek chat client
- streaming response support with dual format parsing
- session-based conversation handling
- automated cookie acquisition system
- local Cloudflare bypass server
- WASM-based proof-of-work solver with async thread pool
- concurrent file upload support using asyncio.gather()
- automatic Cloudflare detection and cookie refresh
Caution
Do not `cd` (change directory) into the cloned repository — you will import the package from outside it, like `dskpp.api`!
```
git clone https://github.com/fundiman/dskpp
```

Note
The requirements.txt file was generated with pipreqs.
```
pip install -r requirements.txt
```

System dependencies (Linux / server environments):
- google-chrome or chromium
- xvfb (for headless fallback)
- Python 3.10+
Important
Before using the API client, cookies must be generated:
```
python run_and_get_cookies.py
```

This will:
- start local bypass server
- launch Chromium automation
- solve Cloudflare challenges
- store cookies in `dsk/dsk/cookies.json`
Note
The bypass server runs locally on port 8021 by default. Ensure this port is available.
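Once generated, the stored cookies can be read back with a small helper. This is an illustrative sketch, not code from the project: the helper name `load_cookies` is hypothetical, and the assumption is only that `dsk/dsk/cookies.json` (the path given above) contains JSON.

```python
import json
from pathlib import Path

COOKIE_PATH = Path("dsk/dsk/cookies.json")  # storage path from the steps above

def load_cookies(path: Path = COOKIE_PATH) -> dict:
    """Load previously acquired cookies; the exact JSON layout is an assumption."""
    if not path.exists():
        raise FileNotFoundError(
            f"{path} not found - run `python run_and_get_cookies.py` first"
        )
    return json.loads(path.read_text())
```

Failing fast with a pointer back to the bootstrap script makes the "cookies first" requirement explicit at runtime.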
```
dskpp/
│
├── api.py                   # async DeepSeek API client
├── server.py                # FastAPI bypass + cookie server
├── CloudflareBypasser.py    # Chromium-based challenge solver
├── bypass.py                # automation helper logic
├── pow.py                   # WASM proof-of-work solver (async)
├── run_and_get_cookies.py   # bootstrap cookie acquisition
│
├── dsk/                     # runtime cookie storage
├── wasm/                    # WASM binaries for hashing
└── README.md
```
```python
from dskpp.api import DeepSeekAPI
import asyncio

api = DeepSeekAPI(auth_token="your_token")
```

Note
The auth token is obtained after logging into DeepSeek chat. Extract it from browser developer tools (Application → Local Storage → chat.deepseek.com → USER_TOKEN).
```python
session_id = await api.create_chat_session()
result = await api.delete_chat_session(session_id)
print(result)  # "Successfully deleted session: session_id"
```

```python
# Upload multiple files concurrently
file_ids = await api.upload_files([
    "document.pdf",
    "image.png",
    "data.csv"
])
# file_ids returned in same order as input
print(file_ids)  # ['file_id_1', 'file_id_2', 'file_id_3']
```

Note
The system uploads all files simultaneously using asyncio.gather() for maximum efficiency.
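The ordering guarantee comes from `asyncio.gather()`, which returns results in the order the awaitables were passed in, regardless of completion order. A minimal sketch of the pattern — `upload_one` is a hypothetical stand-in for the client's internal per-file upload, not a function from this project:

```python
import asyncio

async def upload_one(path: str) -> str:
    # Stand-in for a real HTTP upload; returns a fake file ID
    await asyncio.sleep(0.01)
    return f"id-for-{path}"

async def upload_files(paths: list[str]) -> list[str]:
    # gather() runs all uploads concurrently and preserves input order,
    # even if individual uploads finish out of order
    return await asyncio.gather(*(upload_one(p) for p in paths))

file_ids = asyncio.run(upload_files(["a.pdf", "b.png", "c.csv"]))
print(file_ids)  # ['id-for-a.pdf', 'id-for-b.png', 'id-for-c.csv']
```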
```python
async for chunk in api.chat_completion(
    chat_session_id=session_id,
    prompt="Analyze these uploaded files",
    ref_file_ids=file_ids,    # List of file IDs from upload
    thinking_enabled=True,
    search_enabled=False      # Must be False when using files
):
    # chunk is a dictionary with 'content' key
    print(chunk.get("content", ""), end="")
```

Warning
File uploads require search_enabled=False. Attempting to use both will raise UploadFilesUnavailable.
```python
async for chunk in api.chat_completion(
    chat_session_id=session_id,
    prompt="What's the latest news about AI?",
    thinking_enabled=True,
    search_enabled=True   # Enables web search for current info
):
    print(chunk.get("content", ""), end="")
```

The client parses DeepSeek's SSE format and returns dictionaries:
```python
{
    'type': 'text',                    # Type of chunk (text, message_ids, etc.)
    'content': 'incremental text...',  # The actual content
    'finish_reason': None              # 'stop' when complete, None otherwise
}
```

Note
The parser automatically handles both full format (with 'p' and 'o' fields) and simplified format (just 'v' field) chunks.
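As a rough illustration of what dual-format handling can look like — the field names `p`, `o`, and `v` are taken from the note above, but `parse_chunk` and the exact field semantics are assumptions, not the project's actual parser:

```python
import json

def parse_chunk(raw: str) -> dict:
    """Normalize a streamed chunk into {'type', 'content', 'finish_reason'}.

    Accepts both the full format ({'p': ..., 'o': ..., 'v': ...}) and the
    simplified format ({'v': ...}); both carry the text in 'v'.
    """
    data = json.loads(raw)
    if isinstance(data, dict) and "v" in data:
        return {"type": "text", "content": data["v"], "finish_reason": None}
    return {"type": "unknown", "content": "", "finish_reason": None}

full = parse_chunk('{"p": "response/content", "o": "APPEND", "v": "Hello"}')
simple = parse_chunk('{"v": " world"}')
print(full["content"] + simple["content"])  # Hello world
```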
A fully buffered response can be constructed manually:

```python
response = ""
async for chunk in api.chat_completion(
    chat_session_id=session_id,
    prompt="Hello world"
):
    response += chunk.get("content", "")
```

Retrieve the conversation history for a session:

```python
history = await api.get_history(session_id)
print(history)
```

Important
Always close the session when done to free resources:
```python
await api.close()
```

Run the bypass server manually:

```
python server.py
```

Default endpoint:
http://localhost:8021
Endpoints:
- `/cookies` → returns validated cookies + user-agent
- `/html` → returns raw HTML + cookies header metadata
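For illustration, a client could query the `/cookies` endpoint like this. The endpoint path and port come from this README; the helper names and the assumption that the response is JSON are mine:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8021"  # default bypass server port

def cookies_url(base: str = BASE_URL) -> str:
    """Build the cookie-endpoint URL for a given server base."""
    return f"{base}/cookies"

def fetch_cookies(base: str = BASE_URL) -> dict:
    """Fetch validated cookies; assumes the bypass server is running
    and that it responds with JSON."""
    with urllib.request.urlopen(cookies_url(base)) as resp:
        return json.loads(resp.read().decode())

print(cookies_url())  # http://localhost:8021/cookies
```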
Enable Docker compatibility:
```
export DOCKERMODE=true
```

This enables:
- headless Chromium adjustments
- sandbox flags
- remote debugging port support
Note
Set DOCKERMODE=true when running inside containers to avoid sandbox-related crashes.
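A sketch of how such a flag might be consumed. The `chromium_flags` helper is hypothetical, and the specific switches below are common Chromium options for containers, not necessarily the exact set this project passes:

```python
import os

def chromium_flags() -> list[str]:
    """Build Chromium launch flags, adding container-safe options
    when the DOCKERMODE environment variable is set to 'true'."""
    flags = ["--disable-gpu"]
    if os.environ.get("DOCKERMODE", "").lower() == "true":
        flags += [
            "--no-sandbox",                  # sandboxing fails in most containers
            "--headless=new",                # headless Chromium adjustment
            "--remote-debugging-port=9222",  # remote debugging support
        ]
    return flags

os.environ["DOCKERMODE"] = "true"
print(chromium_flags())
```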
The system is composed of three core layers:
Async client for session-based chat interaction with:
- concurrent file uploads using asyncio.gather()
- streaming response parsing for DeepSeek SSE format
- automatic retry logic with Cloudflare detection and cookie refresh
- async session management with curl_cffi
FastAPI + Chromium automation for:
- Cloudflare bypass
- cookie extraction
- page validation
WebAssembly-based solver with async wrapper using asyncio.to_thread() to keep event loop responsive during CPU-bound computations.
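The `asyncio.to_thread()` pattern looks roughly like this — `solve_pow` is a pure-Python stand-in for the actual WASM hashing routine, kept here only to show the offloading shape:

```python
import asyncio
import hashlib

def solve_pow(challenge: str, difficulty: int = 2) -> int:
    """CPU-bound stand-in for the WASM solver: find a nonce whose
    SHA-256 hex digest starts with `difficulty` zero digits."""
    nonce = 0
    while not hashlib.sha256(
        f"{challenge}{nonce}".encode()
    ).hexdigest().startswith("0" * difficulty):
        nonce += 1
    return nonce

async def solve_async(challenge: str) -> int:
    # Offload the blocking search to a worker thread so the
    # event loop stays responsive while hashing
    return await asyncio.to_thread(solve_pow, challenge)

nonce = asyncio.run(solve_async("example-challenge"))
print(nonce)
```

Because the loop runs in a thread, other coroutines (streaming chunks, uploads) keep making progress while the proof-of-work is computed.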
The system is designed around non-blocking execution:
- network I/O uses async HTTP sessions from curl_cffi
- streaming responses use async generators that yield control between chunks
- blocking WASM computations are offloaded to threads via asyncio.to_thread()
- browser automation runs in separate processes
- cookie acquisition runs outside event loop control path
- concurrent file uploads using asyncio.gather()
- colored warning outputs for better visibility
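The streaming layer above can be pictured as an async generator that yields each parsed chunk and returns control to the event loop between chunks. `fetch_chunks` is a hypothetical stand-in for the real SSE reader; the chunk dictionaries mirror the format documented earlier:

```python
import asyncio
from typing import AsyncIterator

async def fetch_chunks() -> AsyncIterator[dict]:
    # Stand-in for reading SSE lines from the network
    for piece in ["Hel", "lo"]:
        await asyncio.sleep(0)  # yield control to the event loop between chunks
        yield {"type": "text", "content": piece, "finish_reason": None}
    yield {"type": "text", "content": "", "finish_reason": "stop"}

async def main() -> str:
    text = ""
    async for chunk in fetch_chunks():
        text += chunk.get("content", "")
    return text

print(asyncio.run(main()))  # Hello
```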
The client provides specific exceptions for different failure modes:
```python
from dskpp.api import (
    AuthenticationError,     # Invalid/expired token
    RateLimitError,          # API rate limit exceeded
    NetworkError,            # Network communication failure
    CloudflareError,         # Cloudflare block detected
    UploadFilesUnavailable,  # Search enabled during file upload
    APIError                 # Generic API error with status code
)
```

```python
import asyncio
from dskpp.api import DeepSeekAPI

async def main():
    api = DeepSeekAPI("your_token_here")

    # Create session
    session = await api.create_chat_session()
    print(f"Session created: {session}")

    # Upload files concurrently
    file_ids = await api.upload_files([
        "report.pdf",
        "data.xlsx"
    ])
    print(f"Uploaded files: {file_ids}")

    # Chat with file context
    async for chunk in api.chat_completion(
        session,
        "Analyze these files and summarize key points",
        ref_file_ids=file_ids,
        search_enabled=False  # Required for file uploads
    ):
        print(chunk.get("content", ""), end="")

    # Cleanup
    await api.delete_chat_session(session)
    await api.close()

asyncio.run(main())
```

Caution
This project is experimental and based on reverse-engineered behavior of DeepSeek's infrastructure. DeepSeek may modify their API at any time, which could break this client — while efforts will be made to address issues, full compatibility cannot be guaranteed. Use of this client may also violate DeepSeek's terms of service and could result in account suspension or banning. Please use at your own risk.
Tip
If you encounter Cloudflare blocks, try deleting `dsk/dsk/cookies.json` and re-running `python run_and_get_cookies.py` to refresh your cookies.
Note
The client automatically handles both SSE event lines and data lines, parsing the simplified chunk format (just 'v' field) that appears after the first response chunk.
This project is licensed under Apache 2.0.