# Models

> **Info:** Most providers can be configured via the `/login` command, otherwise via the `providers` config.

Model capabilities and configurations are retrieved from the models.dev API.

ECA returns to clients the models that are configured, whether via config or via login.
## Built-in providers and capabilities

| provider | tools (MCP) | reasoning / thinking | prompt caching | web_search | image_input |
|---|---|---|---|---|---|
| OpenAI | √ | √ | √ | √ | √ |
| Anthropic (Also subscription) | √ | √ | √ | √ | √ |
| Github Copilot | √ | √ | √ | X | √ |
| Google | √ | √ | √ | X | √ |
| Ollama local models | √ | √ | X | X | |
## Config

Built-in providers already have base initial configs, so you only need to change them to add models or set a key/url.
For more details, check the config schema.

Example:
```json
{
  "providers": {
    "openai": {
      "key": "your-openai-key-here", // configuring a key
      "models": {
        "o1": {}, // adding models to a built-in provider
        "o3": {
          "extraPayload": { // adding to the payload sent to the LLM
            "temperature": 0.5
          }
        }
      }
    }
  }
}
```
Environment Variables: You can also set API keys using environment variables following the pattern `<PROVIDER>_API_KEY`, for example:

- `OPENAI_API_KEY` for OpenAI
- `ANTHROPIC_API_KEY` for Anthropic
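For example, in a POSIX shell you could export a key before starting the server (the value is a placeholder):

```bash
# placeholder credential; replace with your real key
export OPENAI_API_KEY="sk-your-openai-key"
eca server
```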
## Custom providers
ECA allows you to configure custom LLM providers that follow API schemas similar to OpenAI or Anthropic. This is useful when you want to use:
- Self-hosted LLM servers (like LiteLLM)
- Custom company LLM endpoints
- Additional cloud providers not natively supported
You just need to add your provider to `providers` and make sure to add the required fields.

Schema:
| Option | Type | Description | Required |
|---|---|---|---|
| `api` | string | The API schema to use (`"openai-responses"`, `"openai-chat"`, or `"anthropic"`) | Yes |
| `urlEnv` | string | Environment variable name containing the API URL | No* |
| `url` | string | Direct API URL (use instead of `urlEnv`) | No* |
| `keyEnv` | string | Environment variable name containing the API key | No* |
| `keyRc` | string | Lookup specification to read the API key from Unix RC credential files | No* |
| `key` | string | Direct API key (use instead of `keyEnv`) | No* |
| `completionUrlRelativePath` | string | Optional override for the completion endpoint path (see defaults below and examples like Azure) | No |
| `thinkTagStart` | string | Optional override for the think start tag for the `openai-chat` api (Default: `"<think>"`) | No |
| `thinkTagEnd` | string | Optional override for the think end tag for the `openai-chat` api (Default: `"</think>"`) | No |
| `models` | map | Key: model name, value: its config | Yes |
| `models <model> extraPayload` | map | Extra payload sent in the body to the LLM | No |
| `models <model> modelName` | string | Override the model name; useful to have multiple models with different configs and names that use the same LLM model | No |
\* If not set directly, `url` and `key` will be searched as the envs `<PROVIDER>_API_URL` and `<PROVIDER>_API_KEY`; the provider requires either the env to be found or the config option to be set in order to work.
Examples:

```json
{
  "providers": {
    "my-company": {
      "api": "openai-chat",
      "urlEnv": "MY_COMPANY_API_URL", // or "url"
      "keyEnv": "MY_COMPANY_API_KEY", // or "key"
      "models": {
        "gpt-5": {},
        "deepseek-r1": {}
      }
    }
  }
}
```
Using `modelName`, you can configure multiple model entries with different settings that use the same underlying model:
```json
{
  "providers": {
    "openai": {
      "api": "openai-responses",
      "models": {
        "gpt-5": {},
        "gpt-5-high": {
          "modelName": "gpt-5",
          "extraPayload": { "reasoning": {"effort": "high"} }
        }
      }
    }
  }
}
```
This way both entries use the `gpt-5` model, but one overrides the reasoning effort to `high` instead of the default.
### API Types

When configuring custom providers, choose the appropriate API type:

- `anthropic`: Anthropic's native API for Claude models.
- `openai-responses`: OpenAI's newer Responses API endpoint (`/v1/responses`). Best for OpenAI models with enhanced features like reasoning and web search.
- `openai-chat`: Standard OpenAI Chat Completions API (`/v1/chat/completions`). Use this for most third-party providers:
    - OpenRouter
    - DeepSeek
    - Together AI
    - Groq
    - Local LiteLLM servers
    - Any OpenAI-compatible provider
Most third-party providers use the `openai-chat` API for compatibility with existing tools and libraries.
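For instance, a minimal sketch for Groq, assuming its OpenAI-compatible base URL and one of its published model names (verify both against Groq's documentation):

```json
{
  "providers": {
    "groq": {
      "api": "openai-chat",
      "url": "https://api.groq.com/openai/v1", // assumed OpenAI-compatible endpoint
      "keyEnv": "GROQ_API_KEY",
      "models": {
        "llama-3.3-70b-versatile": {}
      }
    }
  }
}
```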
### Endpoint override (completionUrlRelativePath)

Some providers require a non-standard or versioned completion endpoint path. Use `completionUrlRelativePath` to override the default path appended to your provider `url`.
Defaults by API type:

- `openai-responses`: `/v1/responses`
- `openai-chat`: `/v1/chat/completions`
- `anthropic`: `/v1/messages`
Only set this when your provider uses a different path or expects query parameters at the endpoint (e.g., Azure API versioning).
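As an illustration, a hypothetical provider (the id, URL, path, and model below are made up) that serves chat completions under a versioned path:

```json
{
  "providers": {
    "my-proxy": {
      "api": "openai-chat",
      "url": "https://llm.example.com",
      "keyEnv": "MY_PROXY_API_KEY",
      "completionUrlRelativePath": "/v2/chat/completions", // hypothetical non-default path
      "models": {
        "gpt-4o": {}
      }
    }
  }
}
```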
### Credential File Authentication

ECA also supports the standard plain-text .netrc file format for reading credentials.
Use `keyRc` in your provider config to read credentials from `~/.netrc` without storing keys directly in config or env vars.
Example:

```json
{
  "providers": {
    "openai": {"keyRc": "api.openai.com"},
    "anthropic": {"keyRc": "work@api.anthropic.com"}
  }
}
```

`keyRc` lookup specification format: `[login@]machine[:port]` (e.g., `api.openai.com`, `work@api.anthropic.com`, `api.custom.com:8443`).
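For reference, a `~/.netrc` matching the lookups above might look like this (standard netrc tokens; the `login` entry is only needed when the lookup specifies one, and the keys are placeholders):

```
machine api.openai.com
  password sk-your-openai-key

machine api.anthropic.com
  login work
  password sk-ant-your-anthropic-key
```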
By default, ECA searches for the .netrc file in the user's home directory. You can also provide the path to the actual file to use with `netrcFile` in the ECA config.

Tip for those who wish to store their credentials encrypted with tools like gpg or age:

```bash
# via a secure temporary file
gpg --batch -q -d ./netrc.gpg > /tmp/netrc.$$ && chmod 600 /tmp/netrc.$$ && ECA_CONFIG="{\"netrcFile\": \"/tmp/netrc.$$\"}" eca server && shred -u /tmp/netrc.$$
```

Further reading on credential file formats:

- Curl Netrc documentation
- GNU Inetutils .netrc documentation
Notes:

- Authentication priority (short): `key` > `keyRc` files > `keyEnv` > OAuth.
- All providers with API key auth can use credential files.
## Providers examples

### Anthropic

1. Login to Anthropic via the chat command `/login`.
2. Type 'anthropic' and send it.
3. Type the chosen method.
4. Authenticate in your browser and copy the code.
5. Paste and send the code, and done!

### Github Copilot

1. Login to Github Copilot via the chat command `/login`.
2. Type 'github-copilot' and send it.
3. Authenticate in Github in your browser with the given code.
4. Type anything in the chat to continue, and done!

Tip: check Your Copilot plan to enable models for your account.

### Google

1. Login to Google via the chat command `/login`.
2. Type 'google' and send it.
3. Choose 'manual' and type your Google/Gemini API key (you need to create a key in Google AI Studio).
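Alternatively, a minimal sketch setting the key manually via config, assuming the built-in provider id `google` as used in the login step above (the key value is a placeholder):

```json
{
  "providers": {
    "google": {
      "key": "your-gemini-api-key" // placeholder; or use "keyEnv"
    }
  }
}
```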
### LiteLLM

For self-hosted servers like LiteLLM, configure the provider manually:

```json
{
  "providers": {
    "litellm": {
      "api": "openai-responses",
      "url": "https://litellm.my-company.com", // or "urlEnv"
      "key": "your-api-key", // or "keyEnv"
      "models": {
        "gpt-5": {},
        "deepseek-r1": {}
      }
    }
  }
}
```
### OpenRouter

OpenRouter provides access to many models through a unified API:

1. Login via the chat command `/login`.
2. Type 'openrouter' and send it.
3. Specify your OpenRouter API key.
4. Inform at least one model, e.g. `openai/gpt-5`.
5. Done, it should be saved to your global config.
or manually via config:

```json
{
  "providers": {
    "openrouter": {
      "api": "openai-chat",
      "url": "https://openrouter.ai/api/v1", // or "urlEnv"
      "key": "your-api-key", // or "keyEnv"
      "models": {
        "anthropic/claude-3.5-sonnet": {},
        "openai/gpt-4-turbo": {},
        "meta-llama/llama-3.1-405b": {}
      }
    }
  }
}
```
### DeepSeek

DeepSeek offers powerful reasoning and coding models:

1. Login via the chat command `/login`.
2. Type 'deepseek' and send it.
3. Specify your DeepSeek API key.
4. Inform at least one model, e.g. `deepseek-chat`.
5. Done, it should be saved to your global config.
or manually via config:

```json
{
  "providers": {
    "deepseek": {
      "api": "openai-chat",
      "url": "https://api.deepseek.com", // or "urlEnv"
      "key": "your-api-key", // or "keyEnv"
      "models": {
        "deepseek-chat": {},
        "deepseek-coder": {},
        "deepseek-reasoner": {}
      }
    }
  }
}
```
### Azure

1. Login via the chat command `/login`.
2. Type 'azure' and send it.
3. Specify your API key.
4. Specify your API url with your resource, e.g. 'https://your-resource-name.openai.azure.com'.
5. Inform at least one model, e.g. `gpt-5`.
6. Done, it should be saved to your global config.
or manually via config:

```json
{
  "providers": {
    "azure": {
      "api": "openai-responses",
      "url": "https://your-resource-name.openai.azure.com", // or "urlEnv"
      "key": "your-api-key", // or "keyEnv"
      "completionUrlRelativePath": "/openai/responses?api-version=2025-04-01-preview",
      "models": {
        "gpt-5": {}
      }
    }
  }
}
```
### Z.ai

1. Login via the chat command `/login`.
2. Type 'z-ai' and send it.
3. Specify your API key.
4. Inform at least one model, e.g. `GLM-4.5`.
5. Done, it should be saved to your global config.
or manually via config:

```json
{
  "providers": {
    "z-ai": {
      "api": "anthropic",
      "url": "https://api.z.ai/api/anthropic",
      "key": "your-api-key", // or "keyEnv"
      "models": {
        "GLM-4.5": {},
        "GLM-4.5-Air": {}
      }
    }
  }
}
```