CrewAI Plugin: LLM
The FlotorchCrewAILLM provides a CrewAI-compatible interface for accessing language models through FloTorch Gateway. It acts as a wrapper that seamlessly integrates FloTorch’s managed model infrastructure with CrewAI’s agent framework, handling complexities such as response parsing, error handling, and structured output generation.
Prerequisites
Before using FlotorchCrewAILLM, ensure you have completed the general prerequisites outlined in the CrewAI Plugin Overview, including installation and environment configuration.
Configuration
Parameters
Configure your LLM instance with the following parameters:
```python
FlotorchCrewAILLM(
    model_id: str,   # Model identifier from FloTorch Console (required)
    api_key: str,    # FloTorch API key for authentication (required)
    base_url: str    # FloTorch Gateway endpoint URL (required)
)
```

Parameter Details:

- `model_id` - The unique identifier of the model configured in FloTorch Console
- `api_key` - Authentication key for accessing FloTorch Gateway (can be set via the environment variable `FLOTORCH_API_KEY`; see the example after this list)
- `base_url` - The FloTorch Gateway endpoint URL (can be set via the environment variable `FLOTORCH_BASE_URL`)
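For example, a minimal sketch of supplying the credentials from environment variables. This passes them explicitly; whether the constructor also falls back to these variables automatically when they are omitted depends on the plugin version, so passing them is the conservative pattern:

```python
import os

from flotorch.crewai.llm import FlotorchCrewAILLM

# Read credentials from the environment instead of hard-coding them.
llm = FlotorchCrewAILLM(
    model_id="your-model-id",                  # model configured in FloTorch Console
    api_key=os.environ["FLOTORCH_API_KEY"],    # raises KeyError if the variable is unset
    base_url=os.environ["FLOTORCH_BASE_URL"],  # e.g. https://gateway.flotorch.cloud
)
```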
Features
CrewAI Integration
Provides seamless integration with CrewAI’s agent framework:
- Agent Compatibility - Works directly with CrewAI’s `Agent` class
- Response Parsing - Automatically parses and formats responses for CrewAI (see the sketch after this list)
- Error Handling - Implements robust error handling with fallback responses
- Streaming Support - Supports streaming responses when needed
- Multi-Agent Support - Works seamlessly with CrewAI’s multi-agent Crew framework
- Task Integration - Integrates with CrewAI tasks for task-based workflows
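As referenced above, a minimal sketch of invoking the LLM directly to see the parsed output. This assumes FlotorchCrewAILLM implements CrewAI’s BaseLLM interface, whose call() method accepts a prompt string or a list of OpenAI-style message dicts and returns the extracted text:

```python
from flotorch.crewai.llm import FlotorchCrewAILLM

llm = FlotorchCrewAILLM(
    model_id="your-model-id",
    api_key="your_api_key",
    base_url="https://gateway.flotorch.cloud",
)

# call() is the method CrewAI agents use internally; the plugin handles
# request translation and content extraction.
reply = llm.call([{"role": "user", "content": "Summarize FloTorch Gateway in one sentence."}])
print(reply)
```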
Gateway Integration
Leverages FloTorch Gateway’s capabilities:
- Unified API - Uses FloTorch Gateway’s `/api/openai/v1/chat/completions` endpoint (illustrated after this list)
- Model Management - Accesses models configured in FloTorch Console
- Cost Tracking - Automatic cost and usage tracking through Gateway
- Provider Abstraction - Works with any model provider configured in Gateway
- Centralized Configuration - Model settings managed in FloTorch Console
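Because the plugin targets the Gateway’s OpenAI-compatible endpoint, the same models are reachable with any OpenAI-style HTTP client. A sketch using requests; the payload follows the standard chat-completions shape, and Bearer-token auth is an assumption here, so confirm the exact scheme in the Gateway docs:

```python
import requests

BASE_URL = "https://gateway.flotorch.cloud"

# The same endpoint FlotorchCrewAILLM calls internally.
resp = requests.post(
    f"{BASE_URL}/api/openai/v1/chat/completions",
    headers={"Authorization": "Bearer your_api_key"},  # assumed auth scheme
    json={
        "model": "your-model-id",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```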
Response Processing
Handles complex response scenarios:
- Structured Outputs - Supports structured response generation when needed (see the sketch after this list)
- Tool Call Support - Integrates with CrewAI’s tool framework
- Format Conversion - Converts between Gateway and CrewAI response formats
- Content Extraction - Automatically extracts relevant content from responses
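As referenced above, structured outputs are typically requested through CrewAI itself by attaching a Pydantic schema to the Task; the model’s answer is then validated into that schema. A sketch, where the TicketSummary model and task text are illustrative rather than part of the plugin API:

```python
from crewai import Agent, Crew, Task
from pydantic import BaseModel

from flotorch.crewai.llm import FlotorchCrewAILLM

class TicketSummary(BaseModel):
    topic: str
    sentiment: str

llm = FlotorchCrewAILLM(
    model_id="your-model-id",
    api_key="your_api_key",
    base_url="https://gateway.flotorch.cloud",
)

analyst = Agent(
    role="Analyst",
    goal="Summarize support tickets",
    backstory="You triage incoming tickets",
    llm=llm,
)

task = Task(
    description="Summarize this ticket: 'My invoice is wrong and I am upset.'",
    expected_output="A JSON object with topic and sentiment",
    output_pydantic=TicketSummary,  # CrewAI validates the response into this model
    agent=analyst,
)

result = Crew(agents=[analyst], tasks=[task]).kickoff()
print(result.pydantic)  # a TicketSummary instance
```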
Error Handling
Implements robust error handling mechanisms:
- Network Resilience - Handles connectivity issues with appropriate retry logic and error messages
- Response Format Issues - Manages unexpected response formats with fallback strategies
- Validation Errors - Provides clear feedback on validation failures
- Graceful Degradation - Falls back to error responses when model calls fail (see the sketch after this list)
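As referenced above, the fallback behavior covers failed model calls, but production code usually adds its own guard around the crew run for failures outside the LLM (auth, configuration, networking). A minimal sketch:

```python
from crewai import Agent, Crew, Task

from flotorch.crewai.llm import FlotorchCrewAILLM

llm = FlotorchCrewAILLM(
    model_id="your-model-id",
    api_key="your_api_key",
    base_url="https://gateway.flotorch.cloud",
)
agent = Agent(role="Assistant", goal="Answer questions", backstory="A helpful assistant", llm=llm)
task = Task(description="Say hello", expected_output="A short greeting", agent=agent)
crew = Crew(agents=[agent], tasks=[task])

try:
    result = crew.kickoff()
except Exception as exc:  # network, auth, or configuration failures
    # Log and fall back rather than crashing the surrounding workflow.
    print(f"Crew run failed: {exc}")
    result = None
```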
Usage Example
Basic LLM Usage
```python
from flotorch.crewai.llm import FlotorchCrewAILLM
from crewai import Agent

# Initialize the FloTorch LLM
llm = FlotorchCrewAILLM(
    model_id="your-model-id",
    api_key="your_api_key",
    base_url="https://gateway.flotorch.cloud"
)

# Create a CrewAI agent with the FloTorch LLM
agent = Agent(
    role="Customer Support Specialist",
    goal="Help customers with their inquiries",
    backstory="You are a helpful customer support agent",
    llm=llm,
    verbose=True
)
```

LLM with Agent Manager
```python
from flotorch.crewai.agent import FlotorchCrewAIAgent
from crewai import Crew, Task

# The agent manager automatically uses the FloTorch LLM from Console configuration
agent_manager = FlotorchCrewAIAgent(
    agent_name="my-agent",  # LLM is configured in FloTorch Console
    base_url="https://gateway.flotorch.cloud",
    api_key="your_api_key"
)

# Get the agent (includes the LLM from Console configuration)
agent = agent_manager.get_agent()

# Define the task the crew will run (illustrative; the original snippet
# assumes an existing task)
task = Task(
    description="Answer the customer's inquiry",
    expected_output="A helpful, accurate response",
    agent=agent
)

# Use with a Crew
crew = Crew(
    agents=[agent],
    tasks=[task],
    verbose=True
)

result = crew.kickoff()
```

Direct LLM Usage in Multi-Agent Setup
```python
from flotorch.crewai.llm import FlotorchCrewAILLM
from crewai import Agent, Crew, Task

# Create one FloTorch LLM shared by multiple agents
llm = FlotorchCrewAILLM(
    model_id="your-model-id",
    api_key="your_api_key",
    base_url="https://gateway.flotorch.cloud"
)

# Create specialized agents
researcher = Agent(
    role="Researcher",
    goal="Research and gather information",
    backstory="You are a thorough researcher",
    llm=llm,
    verbose=True
)

writer = Agent(
    role="Writer",
    goal="Write compelling content",
    backstory="You are a skilled writer",
    llm=llm,
    verbose=True
)

# Define a task per agent (illustrative; the original snippet assumes these exist)
research_task = Task(
    description="Research the assigned topic",
    expected_output="A summary of key findings",
    agent=researcher
)
writing_task = Task(
    description="Write an article based on the research",
    expected_output="A polished article draft",
    agent=writer
)

# Use in a Crew
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    verbose=True
)

result = crew.kickoff()
```

Best Practices
- Environment Variables - Use environment variables for credentials to enhance security
- Model Selection - Choose appropriate models based on your task requirements and configure them in FloTorch Console
- Error Handling - Implement proper error handling for production environments
- Agent Configuration - When using `FlotorchCrewAIAgent`, configure LLM settings in FloTorch Console for centralized management
- Cost Management - Monitor model usage and costs through the FloTorch Gateway dashboard
- Multi-Agent Optimization - Use the same LLM instance across multiple agents in a Crew when appropriate to optimize resource usage
- Model Caching - Leverage FloTorch Gateway’s model caching capabilities for improved performance
Next Steps
- Agent Configuration - Learn how to integrate LLMs with agents
- Memory Integration - Add memory capabilities to your LLM-powered agents
- Session Management - Implement persistent conversations