Multi-LLM Orchestrator — OpenAI (ChatGPT), Claude (Anthropic), Gemini (Google), DeepSeek
All providers are served by the same endpoint; select one by setting the provider and model parameters in the request body:
OpenAI:
  curl -X POST "https://skeleton.dev.fastorder.com/api/ai/chat" \
    -H "Content-Type: application/json" \
    -d '{"message":"Hello","purpose":"general","provider":"openai","model":"gpt-4o"}'

Claude:
  curl -X POST "https://skeleton.dev.fastorder.com/api/ai/chat" \
    -H "Content-Type: application/json" \
    -d '{"message":"Hello","purpose":"general","provider":"claude","model":"claude-sonnet-4-20250514"}'

Gemini:
  curl -X POST "https://skeleton.dev.fastorder.com/api/ai/chat" \
    -H "Content-Type: application/json" \
    -d '{"message":"Hello","purpose":"general","provider":"gemini","model":"gemini-2.0-flash"}'

DeepSeek:
  curl -X POST "https://skeleton.dev.fastorder.com/api/ai/chat" \
    -H "Content-Type: application/json" \
    -d '{"message":"Hello","purpose":"general","provider":"deepseek","model":"deepseek-chat"}'
Example response (the shape is the same for every provider):
  {
    "statusCode": 200,
    "data": {
      "trace_id": "...",
      "provider": "openai",
      "model": "gpt-4o",
      "content": "The AI response text",
      "latency_ms": 1234.56,
      "tokens": { "input": 10, "output": 25 }
    }
  }
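Since the response envelope is provider-independent, a single parser covers all four backends. A minimal sketch, assuming only the fields shown in the example above (summarize_response is a hypothetical helper, not part of the API):

```python
def summarize_response(body):
    """Extract the useful fields from a /api/ai/chat response body."""
    data = body["data"]
    # Total token usage = input + output, per the tokens object above.
    total = data["tokens"]["input"] + data["tokens"]["output"]
    return {
        "provider": data["provider"],
        "model": data["model"],
        "content": data["content"],
        "latency_ms": data["latency_ms"],
        "total_tokens": total,
    }


# The example body from the docs above.
example = {
    "statusCode": 200,
    "data": {
        "trace_id": "...",
        "provider": "openai",
        "model": "gpt-4o",
        "content": "The AI response text",
        "latency_ms": 1234.56,
        "tokens": {"input": 10, "output": 25},
    },
}
```

Because the envelope is uniform, the same summary logic works regardless of which provider handled the request, which is useful for cross-provider latency or token comparisons.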