All Integrations
Groq
Category: LLM Providers

Ultra-fast LPU inference engine delivering sub-second latency for Llama, Mixtral, and Gemma models. Ideal for real-time applications that require instant responses.
How to Connect
1. Go to Settings, then Credentials.
2. Click Add Credential and select Groq.
3. Enter your Groq API key.
4. Use the credential in any agent configuration.
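Once the credential is stored, agent requests are authenticated with the API key as a bearer token against Groq's OpenAI-compatible REST endpoint. A minimal sketch of what happens under the hood, assuming a `GROQ_API_KEY` environment variable and the `llama-3.1-8b-instant` model name (the agent framework normally handles this for you):

```python
import json
import os
import urllib.request

def build_groq_request(prompt: str, model: str = "llama-3.1-8b-instant"):
    """Build an authenticated request for Groq's OpenAI-compatible chat endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        "https://api.groq.com/openai/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            # The credential entered in step 3 is sent as a bearer token.
            "Authorization": f"Bearer {os.environ.get('GROQ_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_groq_request("Reply with one word: hello.")
# With a valid key set, send the request and print the completion:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the endpoint is OpenAI-compatible, existing OpenAI client libraries can also be pointed at it by changing only the base URL and key.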