
MCP Server

OATDA Model Context Protocol (MCP) Server

Use OATDA directly from Claude Desktop, Claude Code, and other MCP-compatible applications. No installation required – just an API key and credits!

No Installation Required!

  • An OATDA API key from your dashboard
  • Credits in your account (minimum: €1)
  • Done! Ready to use immediately.

Quick Start with Claude Code

Up and running in 3 minutes

1. Create API Key

Go to your OATDA Dashboard and create an API key:

Go to Dashboard

2. Add Credits

Top up your account with at least €1 in credits – pay flexibly by credit card or PayPal.

3. Configure Claude Code

Add OATDA to your Claude Code configuration (see below).

Configuration

Claude Code & Claude Desktop Setup

Claude Code Configuration

Add OATDA to your Claude configuration file:

Configuration File
{
  "mcpServers": {
    "oatda": {
      "type": "http",
      "url": "https://oatda.com/api/v1/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_OATDA_API_KEY"
      }
    }
  }
}

For Claude Desktop, find the configuration file at:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json
  • Linux: ~/.config/Claude/claude_desktop_config.json

Replace YOUR_OATDA_API_KEY with your actual API key.
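If you already have other MCP servers configured, merge the OATDA entry into the existing file rather than overwriting it. A minimal sketch in Python (the config path shown is the macOS location from the list above; adjust for your platform):

```python
import json
from pathlib import Path

# Claude Desktop config location on macOS; see the platform list above.
config_path = Path.home() / "Library/Application Support/Claude/claude_desktop_config.json"

def add_oatda_server(config: dict, api_key: str) -> dict:
    """Merge the OATDA entry into a config dict without clobbering other servers."""
    servers = config.setdefault("mcpServers", {})
    servers["oatda"] = {
        "type": "http",
        "url": "https://oatda.com/api/v1/mcp",
        "headers": {"Authorization": f"Bearer {api_key}"},
    }
    return config

# Example: start from an empty config and add the entry.
config = add_oatda_server({}, "YOUR_OATDA_API_KEY")
print(json.dumps(config, indent=2))
```

In practice you would read the existing file with `json.loads`, pass the result through `add_oatda_server`, and write it back.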

Available Tools

6 Tools & Resources for Every Use Case

OATDA MCP provides the following tools:

chat_completion

Text generation with 10+ LLM providers

Perfect for: Code generation, writing, analysis, summaries

"Use OATDA MCP chat_completion with model openai/gpt-4o-mini to write TypeScript types for this function"

vision_analysis

Image analysis with vision-capable models

Perfect for: Analyzing screenshots, extracting text from images, document OCR

"Describe the image attached using OATDA MCP with model openai/gpt-4o-mini"

generate_image

Image generation (DALL-E, Imagen, etc.). Use list_models(type="image") first to discover each model's supported_params, then pass model-specific options via the model_params object.

Perfect for: Logos, illustrations, design assets, prototyping

"First list image models with OATDA MCP to see supported parameters, then generate an image using openai/gpt-image-1 with model_params: { quality: \"high\", background: \"transparent\", outputFormat: \"png\" }"
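As an illustration of how model-specific options travel with a request, the arguments behind the prompt above might look like the following sketch (the `prompt` field name and exact schema are assumptions, not a verified tool schema – use list_models to confirm what each model accepts):

```json
{
  "model": "openai/gpt-image-1",
  "prompt": "A minimalist fox logo on a transparent background",
  "model_params": {
    "quality": "high",
    "background": "transparent",
    "outputFormat": "png"
  }
}
```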

generate_video

Video generation (MiniMax, Google Veo, Seedance, etc.). Use list_models(type="video") first to discover each model's supported_params, then pass model-specific options via the model_params object.

Perfect for: Social media content, product demos, creative videos

"First list video models with OATDA MCP to see supported parameters, then generate a video using bytedance/seedance-1-5-pro-251215 with model_params: { ratio: \"16:9\", duration: \"5\", generate_audio: true }"

get_video_status

Check async video generation status

Perfect for: Monitoring video generation and getting the download URL when complete

"Check the status of video task minimax-T2V01-xyz using OATDA MCP"
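Since video generation is asynchronous, a client typically polls the task until it completes. A minimal polling sketch, with `get_video_status` stubbed out (the real call goes through the MCP tool; the status values and `download_url` field here are assumptions for illustration):

```python
import time

# Hypothetical stub standing in for the OATDA MCP get_video_status tool.
# It reports "processing" twice, then "completed".
def get_video_status(task_id: str, _state={"calls": 0}) -> dict:
    _state["calls"] += 1
    if _state["calls"] < 3:
        return {"status": "processing"}
    return {"status": "completed", "download_url": "https://example.com/video.mp4"}

def wait_for_video(task_id: str, interval: float = 0.01, timeout: float = 10.0) -> dict:
    """Poll until the video task completes or the timeout elapses.
    interval is tiny for this demo; poll every few seconds in practice."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = get_video_status(task_id)
        if result["status"] == "completed":
            return result
        time.sleep(interval)
    raise TimeoutError(f"video task {task_id} did not finish in {timeout}s")

result = wait_for_video("minimax-T2V01-xyz")
print(result["download_url"])
```

Note the generate_video rate limit above when choosing a polling interval; get_video_status allows far more calls than generate_video itself.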

list_models

List all available AI models (chat, image, video). For image and video models, returns supported_params with parameter names, types, allowed values, and defaults.

Perfect for: Discovering available models and their supported parameters before calling generate_image or generate_video

"List all image models using OATDA MCP to see their supported parameters" or "Show me the video models and what parameters they support"
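To give a feel for the shape of the response, a single image-model entry might look like this sketch (field names follow the description above; the values shown are illustrative, not the actual catalog):

```json
{
  "id": "openai/gpt-image-1",
  "type": "image",
  "supported_params": {
    "quality": {
      "type": "string",
      "values": ["low", "medium", "high"],
      "default": "medium"
    }
  }
}
```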

Model Format

All models use the provider/model format

Examples (grouped by provider):

openai: gpt-5.2, gpt-4.1, o4-mini, dall-e-3, gpt-image-1, sora-2
anthropic: claude-sonnet-4-6, claude-opus-4-6, claude-haiku-4-5
google: gemini-2.5-pro, gemini-3-pro-preview, imagen-4.0-generate-001, veo-3.1-generate-preview
deepseek: deepseek-chat, deepseek-reasoner
mistral: mistral-large-latest, codestral-latest
xai: grok-4-fast, grok-imagine-image, grok-imagine-video
alibaba: qwen3.5-397b-a17b, qwen-image, wan2.6-t2v
minimax: minimax-m2.5, image-01, MiniMax-Hailuo-2.3
moonshot: kimi-k2.5, kimi-k2-thinking
bytedance: seed-1.8, seedance-1.5-pro, seedream-4.5
zai: glm-5, cogview-4-250304, cogvideox-3
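If you build tooling around these ids, splitting and validating the provider/model format is straightforward. A small sketch:

```python
def parse_model_id(model_id: str) -> tuple[str, str]:
    """Split a 'provider/model' id into its two parts.
    Raises ValueError if either part is missing."""
    provider, sep, model = model_id.partition("/")
    if not sep or not provider or not model:
        raise ValueError(f"expected 'provider/model', got {model_id!r}")
    return provider, model

print(parse_model_id("anthropic/claude-sonnet-4-6"))
```

A bare model name like "gpt-4o" fails this check, which matches the "Provider Not Found" troubleshooting entry below.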

Prompt Examples

Copy & adapt for your use cases

Code Generation

"Use OATDA MCP chat_completion with model anthropic/claude-sonnet-4-6 to write a React component for a user profile card"

Document Analysis

"Use OATDA MCP vision_analysis with model google/gemini-2.5-flash to extract all table data from this spreadsheet screenshot"

Image Generation with model_params

"Use list_models to check openai/gpt-image-1 supported parameters, then generate a transparent PNG logo with OATDA MCP using model_params: { quality: \"high\", background: \"transparent\", outputFormat: \"png\" }"

Video Generation with model_params

"Use list_models to check bytedance/seedance-1-5-pro-251215 supported parameters, then generate a 5-second video with OATDA MCP and model_params: { ratio: \"16:9\", generate_audio: true }: Coffee being poured into a mug, steam rising, cinematic lighting"

Troubleshooting

Session Expired

If you receive 'Invalid or expired session', reinitialize the MCP connection.

Authentication Failed

Verify your API key is valid and has sufficient credits: https://oatda.com/dashboard/credits

Provider Not Found

Check the model format: use 'provider/model' (e.g., 'openai/gpt-4o')

Rate Limits

For fair usage and stability:

  • Init
    10 per minute per IP
  • chat_completion
    100 per minute per user
  • vision_analysis
    50 per minute per user
  • generate_image
    10 per minute per user
  • generate_video
    10 per hour per user
  • get_video_status
    100 per minute per user
  • list_models
    60 per minute per user
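If your client hits one of these limits, retrying with exponential backoff is the usual remedy. A sketch with the rate-limit error and the tool call both stubbed (the real signal would be an HTTP 429 or an MCP error response; the names here are hypothetical):

```python
import time

class RateLimitError(Exception):
    """Hypothetical stand-in for a 429 / rate-limit response."""

def call_with_backoff(call, max_retries=5, base_delay=0.01):
    """Retry a callable on rate-limit errors with exponential backoff.
    base_delay is tiny for this demo; use a second or more in practice."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            time.sleep(base_delay * (2 ** attempt))
    raise RateLimitError("still rate limited after retries")

# Stub that simulates two rate-limited attempts before succeeding.
attempts = {"n": 0}
def flaky_chat_completion():
    attempts["n"] += 1
    if attempts["n"] <= 2:
        raise RateLimitError("429 Too Many Requests")
    return {"text": "ok"}

print(call_with_backoff(flaky_chat_completion))
```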

Cost & Billing

Only pay for what you use. Transparent billing across all 10+ providers.

Pay-per-use: No monthly fees

One bill: All providers on one account

Transparent: Every request with cost info

OATDA MCP Server – Multi-Provider LLM Integration via Model Context Protocol