feat: add MiniMax provider support #2207
Conversation
- Add MiniMax chat model provider using OpenAI-compatible API
- Add MINIMAX enum to `AiProviderEnum`
- Register minimax provider in `PROVIDER_MAP`
- Add `MiniMax-M2.7` and `MiniMax-M2.7-highspeed` model support
- Add unit tests for the new provider
Thanks for the contribution, @octo-patch! I can see this issue when testing it in the main chat; you can try a prompt like:

Looks related to generating JSON output from our query router. Could you please check?
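The query-router code itself isn't shown in this thread, so as illustration only: a common failure mode when generating JSON output through an OpenAI-compatible provider is the model wrapping the object in a markdown code fence, which breaks a naive `JSON.parse`. The helper below is hypothetical (not from the PR) and just sketches that failure mode and a tolerant parse:

```typescript
// Hypothetical helper (not from the PR): extract the JSON object a model
// returns for the query router, tolerating the markdown code fences some
// OpenAI-compatible providers wrap around structured output.
function extractRouterJson(reply: string): unknown {
  // Strip an optional ```json ... ``` fence before parsing.
  const fenced = reply.match(/```(?:json)?\s*([\s\S]*?)```/);
  const raw = (fenced ? fenced[1] : reply).trim();
  return JSON.parse(raw); // throws if the model emitted non-JSON text
}

// A plain JSON reply parses fine:
console.log(extractRouterJson('{"route":"chat"}'));
// A fenced reply would fail a naive JSON.parse, but parses here:
console.log(extractRouterJson('```json\n{"route":"code"}\n```'));
```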
This pull request has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in 5 days if no further activity occurs. Please feel free to give a status update by leaving a comment. Thank you for your contributions!
Sorry for the delay here, @alexandrudanpop! Sharing what I found and where I think this should land.

Root cause

Why I do not want to flip

What I think the right fix is, in order of preference:
Approach 1 is the more robust fix and keeps both models usable for both
The plan sounds OK to me, but we would need to make it generic (both 1 and 2). On point 2, for example: we currently don't have a special setting, so the generateObject call happens with the same model. I'm not sure hardcoding the secondary model is a good idea, as it will go stale as new models are released. So I would propose adding something like a secondary model to the AI settings configuration, with a short explanation of what it does (generating the chat name, used for the code-generation block, etc.). @octo-patch, happy to assist if you need more information on how to adapt the PR.
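One possible shape for that proposal, sketched only: an optional secondary model in the AI settings that auxiliary tasks fall back to, so nothing is hardcoded per provider. The interface, field names, and task names below are hypothetical, not the project's actual settings schema:

```typescript
// Sketch of the proposed setting: auxiliary tasks (chat-name generation,
// code-generation block, generateObject calls) use the secondary model when
// configured, and fall back to the primary chat model otherwise.
interface AiSettings {
  provider: string;
  model: string;           // primary chat model
  secondaryModel?: string; // optional; hypothetical field name
}

type AiTask = 'chat' | 'structured-output' | 'chat-name' | 'code-block';

function modelForTask(settings: AiSettings, task: AiTask): string {
  if (task === 'chat') return settings.model;
  return settings.secondaryModel ?? settings.model;
}

const settings: AiSettings = {
  provider: 'MiniMax',
  model: 'MiniMax-M2.7',
  secondaryModel: 'MiniMax-M2.7-highspeed',
};
console.log(modelForTask(settings, 'chat'));              // MiniMax-M2.7
console.log(modelForTask(settings, 'structured-output')); // MiniMax-M2.7-highspeed
```

Because the fallback is the primary model, existing configurations keep working unchanged when no secondary model is set.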



Summary
- Add `MiniMax-M2.7` and `MiniMax-M2.7-highspeed` models to the provider's model list
- Register the provider in `AiProviderEnum` and `PROVIDER_MAP`

Implementation Details
MiniMax exposes an OpenAI-compatible chat API at https://api.minimax.io/v1. This PR implements a dedicated MiniMax provider using `@ai-sdk/openai-compatible` (already a project dependency), following the same pattern as other providers in this codebase.

Users can configure MiniMax by setting:
- Provider: `MiniMax`
- API key: `MINIMAX_API_KEY`
- Model: `MiniMax-M2.7` (flagship) or `MiniMax-M2.7-highspeed` (faster variant)

API Reference
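The enum-plus-map registration pattern described above can be sketched as follows. This is a minimal model of the shape only (the real provider is built with `createOpenAICompatible` from `@ai-sdk/openai-compatible`); the field names and the placeholder OpenAI entry are assumptions, not the project's actual code:

```typescript
// Minimal model of the provider registry: each AiProviderEnum member maps
// to a config entry in PROVIDER_MAP, and MiniMax follows the same pattern
// as the existing providers.
enum AiProviderEnum {
  OPENAI = 'openai',
  MINIMAX = 'minimax',
}

interface ProviderConfig {
  baseURL: string;      // OpenAI-compatible endpoint
  apiKeyEnvVar: string; // env var holding the credential
  models: string[];     // model IDs offered in the UI
}

const PROVIDER_MAP: Record<AiProviderEnum, ProviderConfig> = {
  [AiProviderEnum.OPENAI]: {
    baseURL: 'https://api.openai.com/v1',
    apiKeyEnvVar: 'OPENAI_API_KEY',
    models: ['gpt-4o'], // placeholder entry for illustration
  },
  [AiProviderEnum.MINIMAX]: {
    baseURL: 'https://api.minimax.io/v1',
    apiKeyEnvVar: 'MINIMAX_API_KEY',
    models: ['MiniMax-M2.7', 'MiniMax-M2.7-highspeed'],
  },
};

console.log(PROVIDER_MAP[AiProviderEnum.MINIMAX].models);
```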
Test plan
- Unit tests added (`packages/openops/test/ai/providers.test.ts`)
- Manual check that `api.minimax.io` returns a valid response
- Verified `AiProviderEnum` and `PROVIDER_MAP` include the new provider
- Verified `MiniMax-M2.7` and `MiniMax-M2.7-highspeed` are available
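The manual "returns valid response" check above could be made mechanical with a small shape validator over the OpenAI-compatible chat completion payload. The network call is omitted here and the validator is a hypothetical sketch, not the PR's test code:

```typescript
// Hedged sketch: validate that a chat completion body from an
// OpenAI-compatible endpoint (such as https://api.minimax.io/v1) has the
// expected shape, i.e. choices[0].message.content is a string.
interface ChatCompletionLike {
  choices?: Array<{ message?: { content?: string } }>;
}

function isValidChatResponse(body: unknown): boolean {
  const r = body as ChatCompletionLike;
  return Array.isArray(r?.choices)
    && typeof r.choices[0]?.message?.content === 'string';
}

// Example payload in the OpenAI-compatible format:
const sample = { choices: [{ message: { content: 'Hello from MiniMax' } }] };
console.log(isValidChatResponse(sample));               // true
console.log(isValidChatResponse({ error: 'bad key' })); // false
```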