CrewAI Plugin: LLM

The FlotorchCrewAILLM class provides a CrewAI-compatible interface for accessing language models through FloTorch Gateway. It is a wrapper that integrates FloTorch's managed model infrastructure with CrewAI's agent framework, handling response parsing, error handling, and structured output generation.

Before using FlotorchCrewAILLM, ensure you have completed the general prerequisites outlined in the CrewAI Plugin Overview, including installation and environment configuration.

Configure your LLM instance with the following parameters:

FlotorchCrewAILLM(
    model_id: str,   # Model identifier from FloTorch Console (required)
    api_key: str,    # FloTorch API key for authentication (required)
    base_url: str    # FloTorch Gateway endpoint URL (required)
)

Parameter Details:

  • model_id - The unique identifier of the model configured in FloTorch Console
  • api_key - Authentication key for accessing FloTorch Gateway (can be set via environment variable FLOTORCH_API_KEY)
  • base_url - The FloTorch Gateway endpoint URL (can be set via environment variable FLOTORCH_BASE_URL)
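Since both api_key and base_url can fall back to environment variables, the resolution can be sketched as follows. The helper resolve_flotorch_config is illustrative, not part of the plugin; only the variable names FLOTORCH_API_KEY and FLOTORCH_BASE_URL come from the notes above.

```python
import os

# Hedged sketch of environment-based configuration: explicit arguments
# win, otherwise fall back to FLOTORCH_API_KEY / FLOTORCH_BASE_URL.
# resolve_flotorch_config is an illustrative helper, not the plugin's API.
def resolve_flotorch_config(api_key=None, base_url=None):
    api_key = api_key or os.environ.get("FLOTORCH_API_KEY")
    base_url = base_url or os.environ.get("FLOTORCH_BASE_URL")
    if not api_key or not base_url:
        raise ValueError("FloTorch api_key and base_url must be provided")
    return {"api_key": api_key, "base_url": base_url}

# With the variables set, explicit arguments become optional
os.environ["FLOTORCH_API_KEY"] = "env-key"
os.environ["FLOTORCH_BASE_URL"] = "https://gateway.flotorch.cloud"
config = resolve_flotorch_config()
```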

FlotorchCrewAILLM integrates directly with CrewAI's agent framework:

  • Agent Compatibility - Works directly with CrewAI’s Agent class
  • Response Parsing - Automatically parses and formats responses for CrewAI
  • Error Handling - Implements robust error handling with fallback responses
  • Streaming Support - Supports streaming responses when needed
  • Multi-Agent Support - Works seamlessly with CrewAI’s multi-agent Crew framework
  • Task Integration - Integrates with CrewAI tasks for task-based workflows

It leverages FloTorch Gateway's capabilities:

  • Unified API - Uses FloTorch Gateway’s /api/openai/v1/chat/completions endpoint
  • Model Management - Accesses models configured in FloTorch Console
  • Cost Tracking - Automatic cost and usage tracking through Gateway
  • Provider Abstraction - Works with any model provider configured in Gateway
  • Centralized Configuration - Model settings managed in FloTorch Console
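As a rough sketch, a request to the unified endpoint might be assembled like this. The path comes from the list above; the payload shape follows the OpenAI chat-completions convention the endpoint mirrors, which is an assumption to confirm against the FloTorch Gateway documentation.

```python
import json

# Minimal sketch of a request to FloTorch Gateway's unified API.
# The payload fields follow the OpenAI chat-completions convention
# (an assumption, not a confirmed Gateway schema).
BASE_URL = "https://gateway.flotorch.cloud"  # example Gateway URL
ENDPOINT = BASE_URL + "/api/openai/v1/chat/completions"

payload = {
    "model": "your-model-id",  # model configured in FloTorch Console
    "messages": [{"role": "user", "content": "Hello"}],
}
body = json.dumps(payload)  # what an HTTP client would POST to ENDPOINT
```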

The wrapper handles complex response scenarios:

  • Structured Outputs - Supports structured response generation when needed
  • Tool Call Support - Integrates with CrewAI’s tool framework
  • Format Conversion - Converts between Gateway and CrewAI response formats
  • Content Extraction - Automatically extracts relevant content from responses
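The content-extraction behaviour above can be sketched as follows; the response shape and the extract_content helper are assumptions for illustration, not the plugin's internals.

```python
# Hedged sketch of content extraction: pull the assistant text out of an
# OpenAI-style chat-completions response, with a fallback when the shape
# is unexpected (mirroring the Format Conversion / Content Extraction
# behaviour described above).
def extract_content(response: dict, default: str = "") -> str:
    try:
        return response["choices"][0]["message"]["content"]
    except (KeyError, IndexError, TypeError):
        return default

resp = {"choices": [{"message": {"role": "assistant", "content": "Hi there"}}]}
text = extract_content(resp)
```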

It implements robust error-handling mechanisms:

  • Network Resilience - Handles connectivity issues with appropriate retry logic and error messages
  • Response Format Issues - Manages unexpected response formats with fallback strategies
  • Validation Errors - Provides clear feedback on validation failures
  • Graceful Degradation - Falls back to error responses when model calls fail
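The retry-and-fallback pattern described above can be sketched like this; call_with_retries, the retry count, and the fallback message are illustrative assumptions, not the plugin's actual internals.

```python
import time

# Hedged sketch of retry-with-fallback: retry on connectivity errors,
# then degrade gracefully to an error response instead of raising.
def call_with_retries(call_model, prompt, max_retries=3, delay=0.0):
    last_error = None
    for attempt in range(max_retries):
        try:
            return call_model(prompt)
        except ConnectionError as exc:  # network resilience
            last_error = exc
            time.sleep(delay)
    # graceful degradation: fall back to an error response
    return f"Model call failed after {max_retries} attempts: {last_error}"

# Simulated flaky backend: fails twice, then succeeds
attempts = {"n": 0}
def flaky_model(prompt):
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("gateway unreachable")
    return "ok: " + prompt

result = call_with_retries(flaky_model, "hello")
```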

from flotorch.crewai.llm import FlotorchCrewAILLM
from crewai import Agent

# Initialize FloTorch LLM
llm = FlotorchCrewAILLM(
    model_id="your-model-id",
    api_key="your_api_key",
    base_url="https://gateway.flotorch.cloud"
)

# Create a CrewAI agent with FloTorch LLM
agent = Agent(
    role="Customer Support Specialist",
    goal="Help customers with their inquiries",
    backstory="You are a helpful customer support agent",
    llm=llm,
    verbose=True
)

from flotorch.crewai.agent import FlotorchCrewAIAgent
from crewai import Crew, Task

# Agent manager automatically uses FloTorch LLM from Console configuration
agent_manager = FlotorchCrewAIAgent(
    agent_name="my-agent",  # LLM is configured in FloTorch Console
    base_url="https://gateway.flotorch.cloud",
    api_key="your_api_key"
)

# Get the agent (includes LLM from Console configuration)
agent = agent_manager.get_agent()

# Define a task for the agent to execute
task = Task(
    description="Answer the customer's inquiry",
    expected_output="A helpful, accurate response",
    agent=agent
)

# Use with Crew
crew = Crew(
    agents=[agent],
    tasks=[task],
    verbose=True
)
result = crew.kickoff()

from flotorch.crewai.llm import FlotorchCrewAILLM
from crewai import Agent, Crew, Task

# Create a single FloTorch LLM shared by multiple agents
llm = FlotorchCrewAILLM(
    model_id="your-model-id",
    api_key="your_api_key",
    base_url="https://gateway.flotorch.cloud"
)

# Create specialized agents
researcher = Agent(
    role="Researcher",
    goal="Research and gather information",
    backstory="You are a thorough researcher",
    llm=llm,
    verbose=True
)
writer = Agent(
    role="Writer",
    goal="Write compelling content",
    backstory="You are a skilled writer",
    llm=llm,
    verbose=True
)

# Define a task for each agent
research_task = Task(
    description="Research the topic and gather the key facts",
    expected_output="A concise summary of findings",
    agent=researcher
)
writing_task = Task(
    description="Write an article based on the research findings",
    expected_output="A polished article",
    agent=writer
)

# Use in a Crew
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    verbose=True
)
result = crew.kickoff()
  1. Environment Variables - Use environment variables for credentials to enhance security
  2. Model Selection - Choose appropriate models based on your task requirements and configure them in FloTorch Console
  3. Error Handling - Implement proper error handling for production environments
  4. Agent Configuration - When using FlotorchCrewAIAgent, configure LLM settings in FloTorch Console for centralized management
  5. Cost Management - Monitor model usage and costs through FloTorch Gateway dashboard
  6. Multi-Agent Optimization - Use the same LLM instance across multiple agents in a Crew when appropriate to optimize resource usage
  7. Model Caching - Leverage FloTorch Gateway’s model caching capabilities for improved performance