
CrewAI Plugin: Agent

The FlotorchCrewAIAgent serves as the primary component for integrating FloTorch-managed agent configurations with CrewAI. It enables developers to define agent configurations centrally in the FloTorch Console and instantiate them with minimal code, eliminating the need for complex in-code configuration management.

Before using FlotorchCrewAIAgent, ensure you have completed the general prerequisites outlined in the CrewAI Plugin Overview, including installation and environment configuration.

Configure your agent using the following parameters:

FlotorchCrewAIAgent(
    agent_name: str,              # Agent name from FloTorch Console (required)
    enable_memory: bool = False,  # Enable memory functionality
    custom_tools: list = None,    # List of custom user-defined tools
    base_url: str = None,         # FloTorch Gateway URL
    api_key: str = None,          # FloTorch API key
    tracer_config: dict = None    # Optional tracing configuration
)

Parameter Details:

  • agent_name - Must match an existing agent name in your FloTorch Console
  • enable_memory - When True, automatically includes CrewAI memory tools (preload/search) for the agent
  • custom_tools - List of custom CrewAI tools to add to the agent’s capabilities
  • base_url - FloTorch Gateway endpoint (defaults to environment variable FLOTORCH_BASE_URL)
  • api_key - Authentication key (defaults to environment variable FLOTORCH_API_KEY)
  • tracer_config - Optional dictionary for configuring distributed tracing and observability

Note: If enable_memory=True, ensure a Memory Provider is configured in your FloTorch Console. See the CrewAI Memory documentation for details.
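As a minimal sketch of the credential defaulting described above (an illustration only, not the plugin's actual internals; `resolve_credentials` is a hypothetical helper), the resolution order can be expressed as:

```python
import os

def resolve_credentials(base_url=None, api_key=None):
    """Illustrative helper: explicit arguments win, otherwise fall back to
    the FLOTORCH_BASE_URL / FLOTORCH_API_KEY environment variables."""
    resolved_url = base_url or os.environ.get("FLOTORCH_BASE_URL")
    resolved_key = api_key or os.environ.get("FLOTORCH_API_KEY")
    if not resolved_url or not resolved_key:
        raise ValueError(
            "Provide base_url/api_key or set FLOTORCH_BASE_URL and FLOTORCH_API_KEY"
        )
    return resolved_url, resolved_key
```

This is why the examples below can omit base_url and api_key entirely when the environment variables are set.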

The tracer_config parameter enables distributed tracing for your agent, allowing you to monitor and debug agent execution across your system. This is particularly useful for production deployments and complex multi-agent workflows.

Tracer Configuration Parameters:

tracer_config = {
    "enabled": True,                              # Required: Enable or disable tracing
    "endpoint": "<your_observability_endpoint>",  # Required: OTLP HTTP endpoint for trace export
    "sampling_rate": 1.0,                         # Required: Sampling rate (0.0 to 1.0), where 1.0 captures all traces
    "protocol": "https",                          # Optional: Protocol for the exporter ("https" for HTTP exporter)
    "service_name": "flotorch-crewai-plugin",     # Optional: Service name to distinguish plugin traces from gateway
    "service_version": "1.0.0",                   # Optional: Service version identifier
    # "enable_plugins_llm_tracing": True,         # Optional: Enable LLM tracing in plugins
}

Required Fields:

  • enabled (bool) - Set to True to enable tracing, False to disable
  • endpoint (str) - OTLP HTTP endpoint URL where traces will be exported
  • sampling_rate (float) - Trace sampling rate between 0.0 and 1.0. Use 1.0 to capture all traces, or lower values (e.g., 0.1) to reduce overhead in high-traffic scenarios

Optional Fields:

  • protocol (str) - Protocol for the exporter. Use "https" for HTTP exporter (default)
  • service_name (str) - Service identifier to distinguish plugin traces from gateway traces in observability dashboards
  • service_version (str) - Version identifier for your service
  • enable_plugins_llm_tracing (bool) - Enable detailed LLM tracing within plugins
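The semantics of sampling_rate can be illustrated with a head-based sampling sketch (a hypothetical helper, not the exporter's actual code): each trace is kept with probability equal to the rate.

```python
import random

def should_sample(sampling_rate: float) -> bool:
    """Keep a trace with probability `sampling_rate` (head-based sampling)."""
    return random.random() < sampling_rate

# sampling_rate=1.0 keeps every trace; 0.1 keeps roughly 1 in 10,
# trading completeness for lower overhead in high-traffic scenarios
kept = sum(should_sample(0.1) for _ in range(10_000))
```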

Example Tracing Configuration:

from flotorch.crewai.agent import FlotorchCrewAIAgent

# Configure tracing for production observability
tracer_config = {
    "enabled": True,
    "endpoint": "<your_observability_endpoint>",
    "protocol": "https",
    "service_name": "flotorch-crewai-plugin",
    "service_version": "1.0.0",
    "sampling_rate": 1.0,
}

agent_manager = FlotorchCrewAIAgent(
    agent_name="my-agent",
    tracer_config=tracer_config,
    base_url="https://gateway.flotorch.cloud",
    api_key="your_api_key"
)

Note: Tracing is disabled by default. You must explicitly provide tracer_config with enabled=True to activate tracing functionality.

The agent automatically loads comprehensive configuration from FloTorch Console, including:

  • Agent name and description (goal)
  • System instructions and prompts (backstory)
  • LLM model configuration
  • Input/output schemas
  • MCP tools configuration
  • Synchronization settings

Supports automatic configuration updates based on:

  • syncEnabled - Enables automatic configuration reloading
  • syncInterval - Defines the synchronization interval in seconds

This ensures your agent stays up-to-date with changes made in the FloTorch Console without requiring code redeployment.
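A rough sketch of how such a sync policy behaves (a hypothetical helper for illustration; the actual reload logic lives inside the plugin):

```python
import time

class ConfigSyncPolicy:
    """Decide when a cached agent configuration is due for a reload,
    based on the syncEnabled / syncInterval settings from the Console."""

    def __init__(self, sync_enabled: bool, sync_interval_s: int):
        self.sync_enabled = sync_enabled
        self.sync_interval_s = sync_interval_s
        self.last_sync = time.monotonic()  # time of the last configuration fetch

    def due_for_reload(self, now=None) -> bool:
        # Sync disabled: the initial configuration is used for the process lifetime
        if not self.sync_enabled:
            return False
        now = time.monotonic() if now is None else now
        return (now - self.last_sync) >= self.sync_interval_s
```

With syncEnabled set and a 60-second interval, the plugin would refetch the Console configuration roughly once per minute.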

Automatically integrates tools configured in FloTorch Console:

  • MCP Tools - Loaded directly from agent configuration
  • Memory Tools - Automatically includes memory tools (preload/search) when enable_memory=True
  • Custom Tools - Add your own CrewAI tools via the custom_tools parameter
  • Tool Management - Handles authentication and connection automatically
  • CrewAI Compatibility - Seamlessly integrates with CrewAI’s tool framework

Designed for CrewAI’s multi-agent framework:

  • Crew Integration - Works seamlessly with CrewAI’s Crew class
  • Task Assignment - Supports task-based agent workflows
  • Agent Collaboration - Enables multiple agents to work together on complex tasks
  • State Management - Handles agent state across multi-agent interactions
  • Agent Specialization - Supports creating specialized agents for different roles and tasks
  • Workflow Orchestration - Enables complex multi-agent workflows with task dependencies

FloTorch CrewAI Plugin supports configurable logging to help you monitor and debug your agents. For comprehensive logging configuration details, including environment variables and programmatic setup, refer to the SDK Logging Configuration documentation.

Quick Setup:

# Enable debug logging to console
export FLOTORCH_LOG_DEBUG=true
export FLOTORCH_LOG_PROVIDER="console"

Or configure programmatically:

from flotorch.sdk.logger.global_logger import configure_logger
configure_logger(debug=True, log_provider="console")

For detailed logging configuration options, see the SDK Logging Configuration guide.

Basic Usage:

from flotorch.crewai.agent import FlotorchCrewAIAgent
from crewai import Crew, Task

# Initialize the agent manager
agent_manager = FlotorchCrewAIAgent(
    agent_name="my-agent",  # Must exist in FloTorch Console
    base_url="https://gateway.flotorch.cloud",
    api_key="your_api_key"
    # Note: Agent goal and backstory are configured
    # in the FloTorch Console during agent creation
)

# Get the configured CrewAI agent
agent = agent_manager.get_agent()

# Define a task for the agent (placeholder description)
task = Task(
    description="Answer the user's question",
    expected_output="A helpful answer",
    agent=agent
)

# Use with CrewAI Crew
crew = Crew(
    agents=[agent],
    tasks=[task],
    verbose=True
)
result = crew.kickoff()
Usage with Tracing:

from flotorch.crewai.agent import FlotorchCrewAIAgent
from crewai import Crew, Task

# Configure tracing for observability
tracer_config = {
    "enabled": True,
    "endpoint": "<your_observability_endpoint>",
    "sampling_rate": 1.0,
    "service_name": "flotorch-crewai-plugin",
}

# Initialize the agent manager with tracing
agent_manager = FlotorchCrewAIAgent(
    agent_name="my-agent",
    tracer_config=tracer_config,
    base_url="https://gateway.flotorch.cloud",
    api_key="your_api_key"
)

# Get the configured agent
agent = agent_manager.get_agent()

# Define a task for the agent (placeholder description)
task = Task(
    description="Answer the user's question",
    expected_output="A helpful answer",
    agent=agent
)

# Use with CrewAI Crew
crew = Crew(
    agents=[agent],
    tasks=[task],
    verbose=True
)
result = crew.kickoff()
Usage with Custom Tools:

from flotorch.crewai.agent import FlotorchCrewAIAgent
from crewai import Crew, Task
from crewai.tools import tool

# Define a custom weather tool using CrewAI's tool decorator
@tool
def get_weather(location: str) -> str:
    """
    Get the current weather for a specific location.

    Args:
        location: The city or location name to get weather for

    Returns:
        A string describing the current weather conditions
    """
    # In a real implementation, this would call a weather API
    # For demonstration, we'll return a mock response
    return f"The weather in {location} is sunny with a temperature of 72°F (22°C)."

# Initialize agent with custom weather tool
agent_manager = FlotorchCrewAIAgent(
    agent_name="weather-agent",
    custom_tools=[get_weather],  # Add the custom weather tool
    base_url="https://gateway.flotorch.cloud",
    api_key="your_api_key"
)

# Get the configured agent
agent = agent_manager.get_agent()

# Define a task for the agent (placeholder description)
task = Task(
    description="Report the current weather in Paris",
    expected_output="A short weather report",
    agent=agent
)

# Use with CrewAI Crew
crew = Crew(
    agents=[agent],
    tasks=[task],
    verbose=True
)
result = crew.kickoff()
Usage with Memory:

from flotorch.crewai.agent import FlotorchCrewAIAgent
from crewai import Crew, Task

# Initialize agent with memory capabilities
agent_manager = FlotorchCrewAIAgent(
    agent_name="customer-support",
    enable_memory=True,  # Enable memory tools (requires a Memory Provider in the Console)
    base_url="https://gateway.flotorch.cloud",
    api_key="your_api_key"
)

# Get the configured agent with memory tools
agent = agent_manager.get_agent()

# Define a task for the agent (placeholder description)
task = Task(
    description="Resolve the customer's support request",
    expected_output="A resolution message",
    agent=agent
)

# Use with CrewAI Crew
crew = Crew(
    agents=[agent],
    tasks=[task],
    verbose=True
)
result = crew.kickoff()
Best Practices:

  1. Configuration Management - Define agent configurations in FloTorch Console rather than in code
  2. Environment Variables - Use environment variables for credentials to avoid hardcoding sensitive information
  3. Configuration Sync - Enable synchronization in the Console to receive updates without redeployment
  4. Memory Tools - Enable memory only when agents need to access persistent memory to avoid unnecessary overhead
  5. Custom Tools - Define custom tools with clear descriptions and proper error handling for robust agent behavior
  6. Tracing - Enable tracing in production environments for observability and debugging. Use appropriate sampling rates to balance visibility with performance
  7. Logging - Configure logging based on your environment: use console logging for development and file logging for production. See SDK Logging Configuration for details
  8. Multi-Agent Workflows - Leverage CrewAI’s multi-agent capabilities by creating specialized agents for different tasks
  9. Agent Roles - Clearly define agent roles and goals in FloTorch Console to ensure proper task assignment
  10. Task Design - Design tasks that leverage each agent’s strengths and enable effective collaboration