FloTorch AutoGen Plugin Overview

The FloTorch AutoGen Plugin streamlines the development of Microsoft AutoGen (agentchat) agents by providing managed infrastructure and centralized configuration through the FloTorch Console. Instead of managing complex configurations in code, developers can leverage a powerful set of pre-configured services:

  • Centralized Agent Management - Configure and manage agents through the FloTorch Console
  • Managed LLM Access - Seamless integration with FloTorch Gateway for model inference
  • Persistent Memory Services - Optional memory and session capabilities with provider setup
  • Automated Tool Integration - Support for MCP tools and custom tool implementations
  • Configurable Logging - Flexible logging configuration for debugging and monitoring (see SDK Logging Configuration)

Before getting started with the FloTorch AutoGen Plugin, ensure you have completed the following:

  1. FloTorch Account - Create an account at console.flotorch.cloud
  2. Agent Configuration - Set up your agent following the Gateway Agents documentation
  3. API Credentials - Generate your API key from API Keys settings
  4. Memory Provider (Optional) - Configure if using memory features, as detailed in the Memory documentation

Install the FloTorch AutoGen Plugin using pip:

pip install flotorch[autogen]
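
To confirm the plugin installed correctly, you can run a quick import check; the module path below matches the one used in the examples on this page:

python -c "from flotorch.autogen.agent import FlotorchAutogenAgent"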

Configure your environment variables to avoid hardcoding credentials:

export FLOTORCH_API_KEY="your_api_key"
export FLOTORCH_BASE_URL="https://gateway.flotorch.cloud"

Optional Logging Configuration:

# Enable debug logging (optional)
export FLOTORCH_LOG_DEBUG=true
export FLOTORCH_LOG_PROVIDER="console" # or "file"
export FLOTORCH_LOG_FILE="flotorch_logs.log" # if provider is "file" (default: "flotorch_logs.log")

For comprehensive logging configuration details, see the SDK Logging Configuration guide.

With your credentials in place, load a configured agent in a few lines of code:

from flotorch.autogen.agent import FlotorchAutogenAgent

# Initialize the agent manager with your FloTorch Console configuration
agent_manager = FlotorchAutogenAgent(
    agent_name="your-agent-name",   # Must match the agent name in FloTorch Console
    custom_tools=[your_tool],       # Optional: add custom tools
    base_url="https://gateway.flotorch.cloud",
    api_key="your_api_key",
    # Note: the agent system prompt is configured in the FloTorch Console
    # when creating the agent, not in code
)

# Get the configured agent
agent = agent_manager.get_agent()
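
As a minimal usage sketch, the snippet below reads the credentials from the environment variables configured earlier instead of hardcoding them, then runs a single task. It assumes get_agent() returns a standard autogen_agentchat agent exposing the asynchronous run() method; adapt the final lines to however you orchestrate your AutoGen agents.

import asyncio
import os

from flotorch.autogen.agent import FlotorchAutogenAgent

async def main():
    # Credentials come from FLOTORCH_API_KEY / FLOTORCH_BASE_URL set earlier
    agent_manager = FlotorchAutogenAgent(
        agent_name="your-agent-name",
        base_url=os.getenv("FLOTORCH_BASE_URL", "https://gateway.flotorch.cloud"),
        api_key=os.getenv("FLOTORCH_API_KEY"),
    )
    agent = agent_manager.get_agent()

    # Assumes the returned agent supports AutoGen agentchat's async run() method
    result = await agent.run(task="Introduce yourself in one sentence.")
    print(result)

asyncio.run(main())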

The following comparison illustrates the simplified configuration approach offered by FloTorch AutoGen:

With the FloTorch AutoGen Plugin, the agent definition lives in the Console and the code only references it:

# Configuration managed centrally in FloTorch Console
agent_manager = FlotorchAutogenAgent(
    agent_name="weather-agent",     # Reference agent from Console
    custom_tools=[weather_tool],    # Optional custom tools
    base_url="https://gateway.flotorch.cloud",
    api_key="your_api_key",
    # Note: the agent system prompt is configured
    # in the FloTorch Console during agent creation
)
agent = agent_manager.get_agent()

With standard AutoGen, the equivalent setup is defined entirely in code:

from autogen_agentchat.agents import AssistantAgent

# All configuration defined in code
assistant = AssistantAgent(
    name="weather_agent",
    model_client=model,             # an AutoGen model client you construct yourself
    tools=[weather_tool],
    system_message="You are a weather specialist..."
)

FlotorchAutogenAgent is the main entry point for loading agent configurations from the FloTorch Console. It returns fully configured, AutoGen-compatible agents with minimal setup code.

The plugin also provides an AutoGen-compatible LLM wrapper that integrates with FloTorch Gateway for model inference, giving seamless access to managed language models.

For state and memory management, the plugin provides two components:

  • FlotorchAutogenMemory - Long-term persistent memory storage for external memory
  • FlotorchAutogenSession - Short-term model context storage for conversation context

FlotorchAutogenSession manages persistent session storage through FloTorch Gateway, enabling conversation continuity across multiple interactions.
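
As an illustrative sketch only: the example below attaches a FlotorchAutogenMemory instance to a plain AssistantAgent through AutoGen's standard memory hook. The import path and constructor arguments shown here are assumptions made for the example, not the documented signature; see the Memory documentation for the actual parameters, and make sure a memory provider is configured in the FloTorch Console first.

import os

from autogen_agentchat.agents import AssistantAgent
# Assumed import path for the memory component; adjust to the actual package layout
from flotorch.autogen.memory import FlotorchAutogenMemory

def build_agent_with_memory(model_client):
    # Hypothetical constructor arguments, shown only to illustrate the pattern
    memory = FlotorchAutogenMemory(
        name="user-preferences",
        api_key=os.getenv("FLOTORCH_API_KEY"),
        base_url=os.getenv("FLOTORCH_BASE_URL"),
    )

    # AssistantAgent accepts a list of AutoGen Memory implementations
    return AssistantAgent(
        name="assistant_with_memory",
        model_client=model_client,  # any AutoGen model client, e.g. the FloTorch LLM wrapper
        memory=[memory],
    )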

Explore the detailed documentation for each component.