Qwen3 Coder 480B A35B on W&B Inference
Price per 1M tokens: $1.00 (input) / $1.50 (output)
Parameters: 35B (active) / 480B (total)
Context window: 262K tokens
Release date: July 2025
Qwen3 Coder 480B A35B inference details
Qwen3-Coder-480B-A35B-Instruct is a Mixture-of-Experts (MoE) code generation model developed by the Qwen team. It is optimized for agentic coding tasks such as function calling, tool use, and long-context reasoning over repositories. The model features 480 billion total parameters, with 35 billion active per forward pass (8 out of 160 experts).
Created by: Alibaba
License: apache-2.0
🤗 model card: Qwen3-Coder-480B-A35B-Instruct
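The parameter figures above follow directly from the MoE layout: each token is routed to 8 of the 160 experts in a layer, so only about 35B of the 480B total parameters participate in any single forward pass. The toy router below (plain NumPy, with a made-up hidden size and random weights) sketches that top-k selection; it illustrates the routing idea only and is not the model's actual gating code. To call the hosted model itself, use the OpenAI-compatible quickstart that follows.

import numpy as np

# Toy top-k router sketch. The expert counts come from the model card above;
# the hidden size and weights are made up purely for illustration.
NUM_EXPERTS = 160  # experts per MoE layer
TOP_K = 8          # experts activated per token
D_MODEL = 16       # toy hidden size, not the real one

rng = np.random.default_rng(0)
router = rng.normal(size=(D_MODEL, NUM_EXPERTS))  # stand-in for learned router weights

def pick_experts(hidden_state):
    # Score every expert for this token, then keep only the TOP_K best.
    scores = hidden_state @ router
    return np.argsort(scores)[-TOP_K:][::-1]

token_hidden = rng.normal(size=D_MODEL)
print(pick_experts(token_hidden))  # only these 8 experts' weights are used for this token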
import openai
import weave
# Weave autopatches OpenAI to log LLM calls to W&B
weave.init("<team>/<project>")
client = openai.OpenAI(
    # The custom base URL points to W&B Inference
    base_url='https://api.inference.wandb.ai/v1',
    # Get your API key from https://wandb.ai/authorize
    # Consider setting it in the environment as OPENAI_API_KEY instead for safety
    api_key="<your-api-key>",
    # Team and project are required for usage tracking
    project="<team>/<project>",
)
response = client.chat.completions.create(
    model="Qwen/Qwen3-Coder-480B-A35B-Instruct",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Tell me a joke."}
    ],
)
print(response.choices[0].message.content)
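Because the model is tuned for function calling and tool use, the same OpenAI-compatible endpoint can be exercised with the standard Chat Completions tools parameter. The sketch below assumes W&B Inference passes that parameter through for this model; the run_tests tool and its schema are hypothetical, defined only to show the shape of a call. The final lines estimate cost from the returned token counts and the per-1M-token prices listed above.

# Reuses the `client` from the quickstart above.
# A hypothetical tool definition, following the standard OpenAI tools schema.
tools = [
    {
        "type": "function",
        "function": {
            "name": "run_tests",  # hypothetical tool name, for illustration only
            "description": "Run the project's test suite and return the failures.",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {"type": "string", "description": "Directory to test"}
                },
                "required": ["path"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="Qwen/Qwen3-Coder-480B-A35B-Instruct",
    messages=[
        {"role": "user", "content": "Run the tests under ./src and summarize any failures."}
    ],
    tools=tools,
)

# If the model decides to call the tool, the call appears here instead of plain text.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)

# Token usage is returned with every response; with the listed prices
# ($1.00 input / $1.50 output per 1M tokens) a rough cost estimate is:
usage = response.usage
cost = usage.prompt_tokens * 1.00 / 1e6 + usage.completion_tokens * 1.50 / 1e6
print(f"estimated cost: ~${cost:.6f}")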