Built 26/04/16 06:15, commit dacb639

OpenClaw Provider Endpoint DNS Failures


Summary

The error "LLM request failed: DNS lookup for the provider endpoint failed." points to a provider endpoint resolution failure, not a generic model bug. In the observed OpenClaw setup, the durable fix path was to inspect the active provider configuration in ~/.openclaw/openclaw.json and verify which endpoint the default model actually used.

Symptom

OpenClaw surfaced:

```text
LLM request failed: DNS lookup for the provider endpoint failed.
```

This symptom was easy to treat as a generic GPT problem, but the actionable layer was the configured provider endpoint.
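
OpenClaw's internals are not shown here, but the failure mode is easy to reproduce in isolation: DNS resolution of the configured endpoint's hostname fails before any HTTP request is made. A minimal sketch (the helper names are illustrative, not OpenClaw functions):

```python
import socket
from urllib.parse import urlparse

def endpoint_host(base_url: str) -> str:
    """Extract the hostname that DNS must resolve for a provider baseUrl."""
    host = urlparse(base_url).hostname
    if not host:
        raise ValueError(f"no hostname in baseUrl: {base_url!r}")
    return host

def endpoint_resolvable(base_url: str) -> bool:
    """Return True if the endpoint's hostname resolves in this environment."""
    try:
        socket.getaddrinfo(endpoint_host(base_url), 443)
        return True
    except socket.gaierror:
        # This is the layer where "DNS lookup for the provider endpoint
        # failed" style errors originate: resolution, not the model call.
        return False
```

Running `endpoint_resolvable` against the configured baseUrl from the same machine that runs OpenClaw tells you whether the runtime environment can resolve that endpoint at all.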

Root Cause Pattern

When the model alias and the provider configuration drift apart, or when the configured endpoint is wrong, unreachable, or no longer resolvable in the runtime environment, requests fail at DNS resolution before model execution even begins.

In the examined configuration, the GPT-facing model path was controlled through a custom provider block in ~/.openclaw/openclaw.json:

```json
"models": {
  "mode": "replace",
  "providers": {
    "openai-codex": {
      "api": "openai-codex-responses",
      "baseUrl": "https://chatgpt.com/backend-api"
    }
  }
}
```

with the default model pointed at:

```json
"agents": {
  "defaults": {
    "model": {
      "primary": "openai-codex/gpt-5.4"
    }
  }
}
```
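
Taken together: the alias in agents.defaults.model.primary splits on the first "/" into a provider key (which must match a key under models.providers) and a model name. A small sketch of that mapping, with an inline dict mirroring the provider block above (the helper functions are illustrative, not part of OpenClaw):

```python
PROVIDERS = {
    "openai-codex": {
        "api": "openai-codex-responses",
        "baseUrl": "https://chatgpt.com/backend-api",
    }
}

def split_alias(alias: str) -> tuple[str, str]:
    """Split "provider/model" into (provider_key, model_name)."""
    provider_key, _, model = alias.partition("/")
    return provider_key, model

def owning_base_url(alias: str, providers: dict) -> str:
    """Look up the baseUrl of the provider block that owns an alias."""
    provider_key, _ = split_alias(alias)
    return providers[provider_key]["baseUrl"]
```

If the provider key in the alias has no matching entry under models.providers, or the matching entry carries a stale baseUrl, that drift is exactly the root-cause pattern described above.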

How To Confirm

  • Read ~/.openclaw/openclaw.json rather than guessing from the UI.
  • Check which provider block owns the active default model.
  • Inspect the provider baseUrl and confirm that the endpoint is the one the runtime should actually call.
  • Compare the current config against any .bak snapshots to see whether a provider override was introduced as part of the fix.
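
The last check, comparing against a .bak snapshot, can be mechanized. A hedged sketch of a provider-level diff (the function name is illustrative; in practice you would feed it `json.load` of openclaw.json and the snapshot):

```python
def provider_diff(current: dict, backup: dict) -> dict:
    """Report provider keys whose config differs between current and backup.

    Each changed key maps to {"current": ..., "backup": ...}; a value of
    None means the provider block is absent on that side.
    """
    cur = current.get("models", {}).get("providers", {})
    bak = backup.get("models", {}).get("providers", {})
    changed = {}
    for key in sorted(set(cur) | set(bak)):
        if cur.get(key) != bak.get(key):
            changed[key] = {"current": cur.get(key), "backup": bak.get(key)}
    return changed
```

A non-empty result shows at a glance whether a provider override (such as a custom openai-codex block) was introduced between the snapshot and the current config.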

Fix Strategy

Do not treat this as a vague "GPT is broken" issue. Trace it through the configuration chain:

  1. Identify the active default model alias.
  2. Resolve which provider owns that alias.
  3. Inspect the provider endpoint in openclaw.json.
  4. Correct the provider configuration so the default model points at a resolvable endpoint.
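
The four steps above can be chained into one end-to-end trace. A sketch under stated assumptions: `trace_default_model` is a hypothetical helper, OpenClaw exposes nothing like it, and the resolver is injectable so the DNS step can be exercised without network access:

```python
import socket
from urllib.parse import urlparse

def trace_default_model(config: dict, resolver=socket.getaddrinfo) -> dict:
    """Walk steps 1-4: alias -> owning provider -> endpoint -> resolvability."""
    alias = config["agents"]["defaults"]["model"]["primary"]   # step 1
    provider_key = alias.split("/", 1)[0]                      # step 2
    provider = config["models"]["providers"][provider_key]     # step 3
    host = urlparse(provider["baseUrl"]).hostname              # step 4
    try:
        resolver(host, 443)
        resolvable = True
    except socket.gaierror:
        resolvable = False
    return {"alias": alias, "provider": provider_key,
            "baseUrl": provider["baseUrl"], "resolvable": resolvable}
```

If `resolvable` comes back False, the fix belongs in the provider block's baseUrl (or the environment's resolver), not in the model choice.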

In the observed setup, the active fix state used a custom openai-codex provider with baseUrl set to https://chatgpt.com/backend-api and the default model switched to openai-codex/gpt-5.4.

Operational Lesson

For provider-layer failures, the important question is not "which GPT model did we mean to use?" but "which endpoint does the active provider config make the runtime call right now?"