
Supported Models

Gopher supports multiple LLM providers and models for strategy generation and backtesting.

Providers

| Provider | Description | API Key Required |
|---|---|---|
| Gopher Credits | Hosted inference, easiest setup | Yes (Gopher Key) |
| OpenRouter | Access 100+ models with one key | Yes |
| OpenAI | GPT-4, GPT-4o models | Yes |
| Ollama | Local models | No |
| Basilica | OpenAI-compatible deployments | No (deployment URL) |
| Custom | Any OpenAI-compatible API | Varies |

Gopher Credits

The easiest way to use Gopher: no external API accounts needed.

Setup

  1. Go to gotrader.gopher-ai.com/settings
  2. Create an account or sign in
  3. Purchase credits
  4. Copy your Gopher Key (format: gopher_xxx...)
  5. Enter in Gopher Settings or set as environment variable:
    export BART_GOPHER_CODE='gopher_your-key-here'
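A quick sanity check that the key is set and well-formed can be scripted. The `gopher_` prefix follows the key format shown above; the helper name and the exact character set accepted are illustrative assumptions:

```python
import os
import re

def looks_like_gopher_key(key: str) -> bool:
    """Return True if the key matches the gopher_<token> format shown above."""
    return bool(re.fullmatch(r"gopher_[A-Za-z0-9_-]+", key))

# Read the key from the environment variable used above.
key = os.environ.get("BART_GOPHER_CODE", "")
if not looks_like_gopher_key(key):
    print("BART_GOPHER_CODE is missing or malformed")
```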

Benefits

  • No external accounts: Single account for everything
  • Optimized models: Pre-configured for best results
  • Pay-as-you-go: Only pay for what you use
  • Simple setup: Just one key to configure

When using Gopher Credits, model selection is handled automatically with optimized defaults.

For Strategy Generation (Loop Model)

These models are best suited to the main evolution loop: generating and refining strategies.

| Model | Provider | Quality | Speed | Cost |
|---|---|---|---|---|
| qwen/qwen3-max | OpenRouter | Excellent | Medium | $$ |
| anthropic/claude-3.5-sonnet | OpenRouter | Excellent | Fast | $$$ |
| openai/gpt-4-turbo | OpenRouter/OpenAI | Excellent | Medium | $$$ |
| deepseek/deepseek-chat | OpenRouter | Very Good | Fast | $ |
| meta-llama/llama-3.1-70b-instruct | OpenRouter | Good | Medium | $ |

Recommendation: qwen/qwen3-max offers the best balance of quality and cost.

For Backtesting (Backtest Model)

These models make trade decisions during backtests. Speed and efficiency matter here.

| Model | Provider | Quality | Speed | Cost |
|---|---|---|---|---|
| qwen/qwen3-vl-8b-instruct | OpenRouter | Good | Very Fast | $ |
| openai/gpt-4o-mini | OpenRouter/OpenAI | Good | Fast | $ |
| meta-llama/llama-3.1-8b-instruct | OpenRouter | Good | Very Fast | $ |
| mistral/mistral-7b-instruct | OpenRouter | Good | Very Fast | $ |

Recommendation: qwen/qwen3-vl-8b-instruct is fast and cost-effective.

OpenRouter Models

OpenRouter provides access to models from multiple providers with a single API key.

Getting Started

  1. Create an account at openrouter.ai
  2. Generate an API key at openrouter.ai/keys
  3. Add credits to your account
  4. Enter the key in Gopher Settings
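Once the key is configured, any OpenAI-compatible client can talk to OpenRouter's chat-completions endpoint. A minimal sketch of the request shape follows; the model name comes from the tables below, the key value is a placeholder, and no request is actually sent:

```python
import json

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"
api_key = "your-openrouter-key"  # placeholder; use your real key

# OpenRouter accepts the standard OpenAI chat-completions payload.
headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
}
payload = {
    "model": "qwen/qwen3-max",
    "messages": [{"role": "user", "content": "ping"}],
}
body = json.dumps(payload)
```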

Premium Tier:

  • anthropic/claude-3.5-sonnet - Fast, excellent reasoning
  • anthropic/claude-3-opus - Most capable Claude model
  • openai/gpt-4-turbo - Latest GPT-4
  • openai/gpt-4o - Optimized GPT-4

Mid Tier:

  • qwen/qwen3-max - Excellent quality/cost ratio
  • deepseek/deepseek-chat - Good for technical tasks
  • meta-llama/llama-3.1-70b-instruct - Strong open model

Budget Tier:

  • qwen/qwen3-vl-8b-instruct - Fast and efficient
  • openai/gpt-4o-mini - Affordable GPT-4 variant
  • meta-llama/llama-3.1-8b-instruct - Good performance/cost

OpenAI Models

Use OpenAI directly with an OpenAI API key.

Available Models

| Model ID | Description |
|---|---|
| gpt-4-turbo | Latest GPT-4, 128k context |
| gpt-4o | Optimized GPT-4, faster |
| gpt-4o-mini | Smaller, more affordable |
| gpt-4 | Original GPT-4 |
| gpt-3.5-turbo | Fast, budget option |

Getting Started

  1. Create account at platform.openai.com
  2. Generate API key at platform.openai.com/api-keys
  3. Add credits to your account
  4. Enter the key in Gopher Settings

Local Models (Ollama)

Run models locally without API costs using Ollama.

Setup

  1. Install Ollama from ollama.ai
  2. Pull a model:
    ollama pull llama3.1:8b
  3. Start Ollama (usually runs automatically)
  4. In Gopher Settings, add a custom model:
    • Model ID: llama3.1:8b
    • Provider: Ollama
    • Base URL: http://localhost:11434/v1
    • API Key: (leave empty)

| Model | Size | VRAM Required |
|---|---|---|
| llama3.1:8b | 4.7 GB | 8 GB |
| llama3.1:70b | 40 GB | 48 GB |
| mistral:7b | 4.1 GB | 8 GB |
| codellama:13b | 7.4 GB | 16 GB |
| qwen2:7b | 4.4 GB | 8 GB |
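The VRAM figures above can drive a simple local-model picker: choose the most demanding model that fits in available memory. The numbers are copied from the table; the helper itself is illustrative:

```python
from typing import Optional

# (model id, VRAM required in GB), taken from the table above.
MODELS = [
    ("llama3.1:70b", 48),
    ("codellama:13b", 16),
    ("llama3.1:8b", 8),
    ("mistral:7b", 8),
    ("qwen2:7b", 8),
]

def pick_model(vram_gb: int) -> Optional[str]:
    """Return the most demanding model that fits in the given VRAM, if any."""
    for model, required in sorted(MODELS, key=lambda m: m[1], reverse=True):
        if required <= vram_gb:
            return model
    return None
```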

Note: Local models may produce lower quality results than cloud models, especially for smaller sizes.

Custom Models

Add any OpenAI-compatible API endpoint.

Adding a Custom Model

  1. Go to Settings > Models
  2. Click + Add Custom
  3. Fill in:
    • Model ID: The model identifier
    • Display Name: Friendly name
    • Provider: Select "Custom"
    • Base URL: OpenAI-compatible root URL only (e.g., https://api.example.com/v1)
    • API Key: Your key for this endpoint

Note: Do not include /chat/completions in the base URL. For per-loop overrides, see Configuration → Inference.
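A small helper can guard against the mistake the note describes, trimming an accidentally pasted /chat/completions suffix before saving the base URL (the function name is illustrative):

```python
def normalize_base_url(url: str) -> str:
    """Strip a trailing /chat/completions and trailing slashes from a base URL."""
    url = url.rstrip("/")
    suffix = "/chat/completions"
    if url.endswith(suffix):
        url = url[: -len(suffix)]
    return url.rstrip("/")
```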


Compatible Services

Many services offer OpenAI-compatible APIs and can be added as custom models using the steps above.

Model Selection Tips

Quality vs Cost

| Priority | Loop Model | Backtest Model |
|---|---|---|
| Best Quality | claude-3.5-sonnet | gpt-4o-mini |
| Balanced | qwen3-max | qwen3-vl-8b |
| Budget | deepseek-chat | llama-3.1-8b |
| Free (Local) | llama3.1:70b | llama3.1:8b |

Cost Optimization

  1. Use a capable model for the Loop (strategy generation)
  2. Use a fast/cheap model for Backtest (trade decisions)
  3. Reduce iterations if costs are too high
  4. Monitor usage in your provider dashboard
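The split between a capable Loop model and a cheap Backtest model is easiest to reason about with a back-of-the-envelope estimate. All numbers below (tokens per call, calls per iteration, per-million-token prices) are hypothetical placeholders; check your provider's pricing page for real figures:

```python
def estimate_run_cost(
    iterations: int,
    loop_tokens_per_iter: int,
    loop_price_per_mtok: float,      # USD per million tokens (hypothetical)
    backtest_calls_per_iter: int,
    backtest_tokens_per_call: int,
    backtest_price_per_mtok: float,  # USD per million tokens (hypothetical)
) -> float:
    """Rough total cost of a run: loop generation plus backtest decisions."""
    loop_cost = iterations * loop_tokens_per_iter * loop_price_per_mtok / 1e6
    backtest_cost = (
        iterations * backtest_calls_per_iter * backtest_tokens_per_call
        * backtest_price_per_mtok / 1e6
    )
    return loop_cost + backtest_cost

# Example: 20 iterations, a pricier loop model, a cheap backtest model.
cost = estimate_run_cost(20, 8000, 3.0, 200, 500, 0.1)
```

Even with 200 backtest calls per iteration, the cheap model contributes less than the loop model in this example, which is why the pairing above keeps costs manageable.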

Testing Models

Click Test next to any model in Settings to verify:

  • API key is valid
  • Endpoint is reachable
  • Model responds correctly

Fallback Model

Configure a fallback model in Settings for resilience:

  • Used when primary model fails
  • Should be a reliable, well-supported model
  • Default: anthropic/claude-sonnet-4.5

Next Steps