Support Models Guide
This guide covers accessing the WIIL Platform Support Model Registry using the TypeScript and Python SDKs. The registry maintains a curated list of LLM models from providers such as OpenAI, Anthropic, Google, and ElevenLabs that are supported by the platform.
Quick Start​
- TypeScript
- Python
import { WiilClient } from 'wiil-js';
const client = new WiilClient({
apiKey: 'your-api-key',
});
// Get the default multi-mode model for agent configurations
const model = await client.supportModels.getDefaultMultiMode();
console.log('Model ID:', model.modelId);
console.log('Name:', model.name);
console.log('Proprietor:', model.proprietor);
import os
from wiil import WiilClient
client = WiilClient(
api_key=os.environ['WIIL_API_KEY']
)
# Get the default multi-mode model for agent configurations
model = client.support_models.get_default_multi_mode()
print('Model ID:', model.model_id)
print('Name:', model.name)
print('Proprietor:', model.proprietor)
Architecture Overview​
The Support Model Registry is a read-only resource that provides:
- Centralized Catalog: All supported LLM models with their capabilities
- Model Types: STS, TTS, STT, TRANSCRIBE, TEXT_PROCESSING, MULTI_MODE (see the LLMType enum below)
- Provider Info: OpenAI, Anthropic, Google, ElevenLabs, Deepgram, Cartesia
- Used By: Agent Configurations reference models via `modelId`

Key Distinction:

- `modelId`: WIIL Platform unique identifier (use this in configurations)
- `provider_model_id`: Original ID from the provider's system (e.g., 'gpt-4-1106-preview')
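To make the distinction concrete, here is a small self-contained sketch using a mocked registry entry (the field names follow the schema table below; the data is illustrative only) that resolves a provider's own model ID to the platform `modelId`:

```typescript
// Minimal sketch of a registry entry; actual SDK types may differ.
interface SupportModel {
  modelId: string;            // WIIL Platform identifier — use this in configurations
  provider_model_id: string;  // the provider's own ID
  proprietor: string;
}

// Mocked registry data for illustration only.
const registry: SupportModel[] = [
  { modelId: 'model_123', provider_model_id: 'gpt-4-1106-preview', proprietor: 'OpenAI' },
];

// Resolve a provider model ID to the platform modelId.
function toPlatformId(proprietor: string, providerModelId: string): string | undefined {
  return registry.find(
    m => m.proprietor === proprietor && m.provider_model_id === providerModelId
  )?.modelId;
}

console.log(toPlatformId('OpenAI', 'gpt-4-1106-preview')); // 'model_123'
```

In the live SDK, the same resolution is what `getByProprietorAndProviderModelId()` performs against the real registry.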
Enums​
SupportedProprietor​
- TypeScript
- Python
enum SupportedProprietor {
OPENAI = 'OpenAI',
GOOGLE = 'Google',
ANTHROPIC = 'Anthropic',
GROQ = 'Groq',
DEEPGRAM = 'Deepgram',
ELEVENLABS = 'ElevenLabs',
CARTESIA = 'Cartesia'
}
from wiil.models.service_mgt import SupportedProprietor
# Available values:
SupportedProprietor.OPENAI # 'OpenAI'
SupportedProprietor.GOOGLE # 'Google'
SupportedProprietor.ANTHROPIC # 'Anthropic'
SupportedProprietor.GROQ # 'Groq'
SupportedProprietor.DEEPGRAM # 'Deepgram'
SupportedProprietor.ELEVENLABS # 'ElevenLabs'
SupportedProprietor.CARTESIA # 'Cartesia'
LLMType​
- TypeScript
- Python
enum LLMType {
STS = 'sts', // Speech-to-speech
TTS = 'tts', // Text-to-speech
STT = 'stt', // Speech-to-text
TRANSCRIBE = 'transcribe', // Transcription
TEXT_PROCESSING = 'text', // Text processing
MULTI_MODE = 'multi_mode' // Multi-modal
}
from wiil.models.service_mgt import LLMType
# Available values:
LLMType.STS # 'sts' - Speech-to-speech
LLMType.TTS # 'tts' - Text-to-speech
LLMType.STT # 'stt' - Speech-to-text
LLMType.TRANSCRIBE # 'transcribe' - Transcription
LLMType.TEXT_PROCESSING # 'text' - Text processing
LLMType.MULTI_MODE # 'multi_mode' - Multi-modal
Support Model Schema​
| Field | Type | Description |
|---|---|---|
| modelId | string | WIIL Platform unique model identifier |
| name | string | Human-readable model name |
| proprietor | SupportedProprietor | Model provider (OpenAI, Anthropic, etc.) |
| provider_model_id | string | Original provider model ID |
| description | string | Model capabilities and use cases |
| type | LLMType | Model functionality type |
| discontinued | boolean | Whether model is discontinued |
| supportedVoices | Voice[] | Available voices (for TTS models) |
| supportLanguages | Language[] | Supported languages |
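The schema above can be expressed as a TypeScript shape. This is a sketch for orientation — the actual SDK type names and the sample values are assumptions, not the SDK's exported types:

```typescript
// Sketch of the Support Model schema; actual SDK types may differ.
interface Voice { voiceId: string; name: string; }
interface Language { code: string; name: string; }

interface SupportModel {
  modelId: string;              // WIIL Platform unique model identifier
  name: string;                 // human-readable name
  proprietor: string;           // SupportedProprietor value, e.g. 'OpenAI'
  provider_model_id: string;    // original provider model ID
  description: string;          // capabilities and use cases
  type: string;                 // LLMType value, e.g. 'multi_mode'
  discontinued: boolean;
  supportedVoices?: Voice[];    // TTS models only
  supportLanguages?: Language[];
}

// Hypothetical example record, for illustration only.
const example: SupportModel = {
  modelId: 'model_123',
  name: 'Example Text Model',
  proprietor: 'OpenAI',
  provider_model_id: 'gpt-4-1106-preview',
  description: 'General-purpose text model',
  type: 'multi_mode',
  discontinued: false,
};
```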
Operations​
List All Models​
- TypeScript
- Python
const models = await client.supportModels.list();
console.log('Available models:', models.length);
models.forEach(model => {
console.log(`- ${model.name} (${model.proprietor}) - ${model.type}`);
});
models = client.support_models.list()
print('Available models:', len(models.data))
for model in models.data:
print(f'- {model.name} ({model.proprietor}) - {model.type}')
Get Model by ID​
- TypeScript
- Python
const model = await client.supportModels.get('model_123');
console.log('Model:', model.name);
console.log('Proprietor:', model.proprietor);
console.log('Provider Model ID:', model.provider_model_id);
console.log('Type:', model.type);
console.log('Discontinued:', model.discontinued);
model = client.support_models.get('model_123')
print('Model:', model.name)
print('Proprietor:', model.proprietor)
print('Provider Model ID:', model.provider_model_id)
print('Type:', model.type)
print('Discontinued:', model.discontinued)
Get Default Models​
The registry provides default models for each capability type:
- TypeScript
- Python
// Default multi-mode model (for agent configurations)
const multiMode = await client.supportModels.getDefaultMultiMode();
console.log('Default multi-mode:', multiMode?.name);
// Default Text-to-Speech model
const tts = await client.supportModels.getDefaultTTS();
console.log('Default TTS:', tts?.name);
console.log('Available voices:', tts?.supportedVoices?.length);
// Default Speech-to-Text model
const stt = await client.supportModels.getDefaultSTT();
console.log('Default STT:', stt?.name);
console.log('Supported languages:', stt?.supportLanguages?.length);
// Default Speech-to-Speech model
const sts = await client.supportModels.getDefaultSTS();
console.log('Default STS:', sts?.name);
// Default Transcription model
const transcribe = await client.supportModels.getDefaultTranscribe();
console.log('Default transcription:', transcribe?.name);
// Default Batch processing model
const batch = await client.supportModels.getDefaultBatch();
console.log('Default batch:', batch?.name);
// Default Translation STT model
const translationStt = await client.supportModels.getDefaultTranslationSTT();
console.log('Default translation STT:', translationStt?.name);
// Default Translation TTS model
const translationTts = await client.supportModels.getDefaultTranslationTTS();
console.log('Default translation TTS:', translationTts?.name);
# Default multi-mode model (for agent configurations)
multi_mode = client.support_models.get_default_multi_mode()
print('Default multi-mode:', multi_mode.name if multi_mode else None)
# Default Text-to-Speech model
tts = client.support_models.get_default_tts()
print('Default TTS:', tts.name if tts else None)
if tts and tts.supported_voices:
print('Available voices:', len(tts.supported_voices))
# Default Speech-to-Text model
stt = client.support_models.get_default_stt()
print('Default STT:', stt.name if stt else None)
if stt and stt.support_languages:
print('Supported languages:', len(stt.support_languages))
# Default Speech-to-Speech model
sts = client.support_models.get_default_sts()
print('Default STS:', sts.name if sts else None)
# Default Transcription model
transcribe = client.support_models.get_default_transcribe()
print('Default transcription:', transcribe.name if transcribe else None)
# Default Batch processing model
batch = client.support_models.get_default_batch()
print('Default batch:', batch.name if batch else None)
# Default Translation STT model
translation_stt = client.support_models.get_default_translation_stt()
print('Default translation STT:', translation_stt.name if translation_stt else None)
# Default Translation TTS model
translation_tts = client.support_models.get_default_translation_tts()
print('Default translation TTS:', translation_tts.name if translation_tts else None)
Lookup Models​
Find models by type and proprietor or by provider model ID:
- TypeScript
- Python
// Get model by type and proprietor
const textModel = await client.supportModels.getByTypeAndProprietor('TEXT', 'OpenAI');
if (textModel) {
console.log('Found:', textModel.name);
}
// Get model by proprietor and provider model ID
const specificModel = await client.supportModels.getByProprietorAndProviderModelId(
'Google',
'gemini-2.0-flash-exp'
);
if (specificModel) {
console.log('Wiil Model ID:', specificModel.modelId);
}
# Get model by type and proprietor
text_model = client.support_models.get_by_type_and_proprietor('TEXT', 'OpenAI')
if text_model:
print('Found:', text_model.name)
# Get model by proprietor and provider model ID
specific_model = client.support_models.get_by_proprietor_and_provider_model_id(
'Google',
'gemini-2.0-flash-exp'
)
if specific_model:
print('Wiil Model ID:', specific_model.model_id)
Check Model Support​
Verify if a specific model is supported:
- TypeScript
- Python
// Check if a model is supported
const isSupported = await client.supportModels.isSupported('OpenAI', 'gpt-4-turbo');
if (isSupported) {
console.log('Model is supported');
} else {
console.log('Model is not supported');
}
// Validate models before configuration
const sttSupported = await client.supportModels.isSupported('Deepgram', 'nova-2');
const ttsSupported = await client.supportModels.isSupported('ElevenLabs', 'eleven_turbo_v2');
console.log('STT supported:', sttSupported);
console.log('TTS supported:', ttsSupported);
# Check if a model is supported
is_supported = client.support_models.is_supported('OpenAI', 'gpt-4-turbo')
if is_supported:
print('Model is supported')
else:
print('Model is not supported')
# Validate models before configuration
stt_supported = client.support_models.is_supported('Deepgram', 'nova-2')
tts_supported = client.support_models.is_supported('ElevenLabs', 'eleven_turbo_v2')
print('STT supported:', stt_supported)
print('TTS supported:', tts_supported)
Complete Example​
Full workflow demonstrating support models usage:
- TypeScript
- Python
import { WiilClient, LLMType, SupportedProprietor, BusinessSupportServices } from 'wiil-js';
const client = new WiilClient({
apiKey: process.env.WIIL_API_KEY!,
});
async function exploreSupportModels() {
// 1. List all available models
const allModels = await client.supportModels.list();
console.log('Total models in registry:', allModels.length);
// 2. Group models by type
const modelsByType = allModels.reduce((acc, model) => {
acc[model.type] = (acc[model.type] || 0) + 1;
return acc;
}, {} as Record<string, number>);
console.log('Models by type:', modelsByType);
// 3. Get default model for agent configuration
const defaultModel = await client.supportModels.getDefaultMultiMode();
if (!defaultModel) {
throw new Error('No default multi-mode model available');
}
console.log('Using model:', defaultModel.name);
console.log('Model ID:', defaultModel.modelId);
// 4. Create instruction configuration
const instruction = await client.instructionConfigs.create({
instructionName: 'Demo Agent Instructions',
role: 'Demo Agent',
introductionMessage: 'Hello!',
instructions: 'You are a helpful assistant.',
guardrails: 'Be professional.',
supportedServices: [BusinessSupportServices.APPOINTMENT_MANAGEMENT],
});
// 5. Create agent configuration with the model
const agent = await client.agentConfigs.create({
name: 'DemoAgent',
modelId: defaultModel.modelId,
instructionConfigurationId: instruction.id,
});
console.log('Agent created with model:', agent.modelId);
// 6. Explore voice models for phone agents
const ttsModel = await client.supportModels.getDefaultTTS();
if (ttsModel?.supportedVoices) {
console.log('Available voices for TTS:');
ttsModel.supportedVoices.slice(0, 5).forEach(voice => {
console.log(` - ${voice.name} (${voice.voiceId})`);
});
}
// 7. Explore STT models for transcription
const sttModel = await client.supportModels.getDefaultSTT();
if (sttModel?.supportLanguages) {
console.log('Supported languages for STT:');
sttModel.supportLanguages.slice(0, 5).forEach(lang => {
console.log(` - ${lang.name} (${lang.code})`);
});
}
// 8. Clean up
await client.agentConfigs.delete(agent.id);
await client.instructionConfigs.delete(instruction.id);
console.log('Cleanup complete');
}
exploreSupportModels().catch(console.error);
import os
from wiil import WiilClient
from wiil.models.service_mgt import (
LLMType,
SupportedProprietor,
BusinessSupportServices,
CreateInstructionConfiguration,
CreateAgentConfiguration
)
client = WiilClient(
api_key=os.environ['WIIL_API_KEY']
)
def explore_support_models():
# 1. List all available models
all_models = client.support_models.list()
print('Total models in registry:', all_models.meta.total_count)
# 2. Group models by type
models_by_type = {}
for model in all_models.data:
model_type = model.type
models_by_type[model_type] = models_by_type.get(model_type, 0) + 1
print('Models by type:', models_by_type)
# 3. Get default model for agent configuration
default_model = client.support_models.get_default_multi_mode()
if not default_model:
raise Exception('No default multi-mode model available')
print('Using model:', default_model.name)
print('Model ID:', default_model.model_id)
# 4. Create instruction configuration
instruction = client.instruction_configs.create(
CreateInstructionConfiguration(
instruction_name='Demo Agent Instructions',
role='Demo Agent',
introduction_message='Hello!',
instructions='You are a helpful assistant.',
guardrails='Be professional.',
supported_services=[BusinessSupportServices.APPOINTMENT_MANAGEMENT]
)
)
# 5. Create agent configuration with the model
agent = client.agent_configs.create(
CreateAgentConfiguration(
name='DemoAgent',
model_id=default_model.model_id,
instruction_configuration_id=instruction.id
)
)
print('Agent created with model:', agent.model_id)
# 6. Explore voice models for phone agents
tts_model = client.support_models.get_default_tts()
if tts_model and tts_model.supported_voices:
print('Available voices for TTS:')
for voice in tts_model.supported_voices[:5]:
print(f' - {voice.name} ({voice.voice_id})')
# 7. Explore STT models for transcription
stt_model = client.support_models.get_default_stt()
if stt_model and stt_model.support_languages:
print('Supported languages for STT:')
for lang in stt_model.support_languages[:5]:
print(f' - {lang.name} ({lang.code})')
# 8. Clean up
client.agent_configs.delete(agent.id)
client.instruction_configs.delete(instruction.id)
print('Cleanup complete')
if __name__ == '__main__':
explore_support_models()
Best Practices​
- Always use `getDefaultMultiMode()` for agent configs - This returns the recommended model for general agent configurations. Avoid hardcoding model IDs.
- Check for null results - Default model methods return `null` if no model is configured. Always handle this case.
- Use `isSupported()` for validation - Before using a specific provider model ID, verify it's supported by the platform.
- Avoid discontinued models - Check the `discontinued` flag before using a model. Discontinued models are only for legacy support.
- Match model type to use case - Use the appropriate model type for your needs:
  - `MULTI_MODE` for general agents
  - `TTS` for voice synthesis
  - `STT` for speech recognition
  - `TRANSCRIBE` for transcription
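These practices can be combined into one small selection helper. The sketch below works over a plain model array with mocked data (adapt the shape to whatever the SDK's `list()` actually returns): it filters out discontinued models and matches on type in a single pass.

```typescript
// Minimal model shape for this sketch; actual SDK types may differ.
interface SupportModel {
  modelId: string;
  name: string;
  type: string;
  discontinued: boolean;
}

// Pick the first active (non-discontinued) model of a given type,
// returning undefined when none is available so callers must handle the gap.
function pickActiveModel(models: SupportModel[], type: string): SupportModel | undefined {
  return models.find(m => m.type === type && !m.discontinued);
}

// Mocked registry contents for illustration only.
const models: SupportModel[] = [
  { modelId: 'model_old', name: 'Legacy Model', type: 'multi_mode', discontinued: true },
  { modelId: 'model_new', name: 'Current Model', type: 'multi_mode', discontinued: false },
];

console.log(pickActiveModel(models, 'multi_mode')?.modelId); // 'model_new'
```

Returning `undefined` rather than throwing mirrors the SDK's default-model methods, which return `null` when no model is configured.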
Troubleshooting​
Model Not Found​
Error:
- TypeScript
- Python
WiilAPIError: Model not found
WiilAPIError: Model not found
Solution: Verify the model ID is correct by listing all available models:
- TypeScript
- Python
const models = await client.supportModels.list();
const model = models.find(m => m.modelId === 'your-model-id');
if (!model) {
console.log('Available models:');
models.forEach(m => console.log(` ${m.modelId}: ${m.name}`));
}
models = client.support_models.list()
model = next((m for m in models.data if m.model_id == 'your-model-id'), None)
if not model:
print('Available models:')
for m in models.data:
print(f' {m.model_id}: {m.name}')
No Default Model Available​
Error:
- TypeScript
- Python
TypeError: Cannot read property 'modelId' of null
AttributeError: 'NoneType' object has no attribute 'model_id'
Solution: Always check if the default model exists:
- TypeScript
- Python
let model = await client.supportModels.getDefaultMultiMode();
if (!model) {
// Fall back to listing and selecting first available
const models = await client.supportModels.list();
if (models.length > 0) {
model = models[0];
} else {
throw new Error('No models available in registry');
}
}
model = client.support_models.get_default_multi_mode()
if not model:
# Fall back to listing and selecting first available
models = client.support_models.list()
if models.data:
model = models.data[0]
else:
raise Exception('No models available in registry')
Model is Discontinued​
Warning: Using a discontinued model may result in degraded service.
Solution: Check the discontinued flag and use an active model:
- TypeScript
- Python
const models = await client.supportModels.list();
const activeModels = models.filter(m => !m.discontinued);
console.log('Active models:', activeModels.length);
activeModels.forEach(m => {
console.log(`- ${m.name} (${m.proprietor})`);
});
models = client.support_models.list()
active_models = [m for m in models.data if not m.discontinued]
print('Active models:', len(active_models))
for m in active_models:
print(f'- {m.name} ({m.proprietor})')