
FloTorch LangChain Plugin Overview

The FloTorch LangChain Plugin streamlines the development of LangChain agents by providing managed infrastructure and centralized configuration through the FloTorch Console. Instead of managing complex configurations in code, developers can leverage a powerful set of pre-configured services:

  • Centralized Agent Management - Configure and manage agents through the FloTorch Console
  • Managed LLM Access - Seamless integration with FloTorch Gateway for model inference
  • Persistent Memory Services - Optional memory and session capabilities with provider setup
  • Automated Tool Integration - Support for MCP tools and custom tool implementations
  • Configurable Logging - Flexible logging configuration for debugging and monitoring (see SDK Logging Configuration)

Before getting started with the FloTorch LangChain Plugin, ensure you have completed the following:

  1. FloTorch Account - Create an account at console.flotorch.cloud
  2. Agent Configuration - Set up your agent following the Gateway Agents documentation
  3. API Credentials - Generate your API key from API Keys settings
  4. Memory Provider (Optional) - Configure if using memory features, as detailed in the Memory documentation

Install the FloTorch LangChain Plugin using pip:

Terminal window
pip install flotorch[langchain]

Configure your environment variables to avoid hardcoding credentials:

Terminal window
export FLOTORCH_API_KEY="your_api_key"
export FLOTORCH_BASE_URL="https://gateway.flotorch.cloud"

Optional Logging Configuration:

Terminal window
# Enable debug logging (optional)
export FLOTORCH_LOG_DEBUG=true
export FLOTORCH_LOG_PROVIDER="console" # or "file"
export FLOTORCH_LOG_FILE="flotorch_logs.log" # if provider is "file" (default: "flotorch_logs.log")

For comprehensive logging configuration details, see the SDK Logging Configuration guide.

The following example initializes an agent from the FloTorch Console and uses it with a standard LangChain AgentExecutor:

from flotorch.langchain.agent import FlotorchLangChainAgent
from langchain.agents import AgentExecutor

# Initialize the agent manager with your FloTorch Console configuration
agent_manager = FlotorchLangChainAgent(
    agent_name="your-agent-name",   # Must match the agent name in FloTorch Console
    custom_tools=[your_tool],       # Optional: Add custom tools
    base_url="https://gateway.flotorch.cloud",
    api_key="your_api_key"
    # Note: Agent goal and system prompt should be configured
    # in the FloTorch Console when creating the agent
)

# Get the configured agent and tools
agent = agent_manager.get_agent()
tools = agent_manager.get_tools()

# Use with AgentExecutor
executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=False
)
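
With the executor in place, it can be invoked like any other LangChain AgentExecutor. A minimal usage sketch, assuming the Console-configured agent expects the standard {input} variable and that credentials come from the environment variables set earlier:

import os

from flotorch.langchain.agent import FlotorchLangChainAgent
from langchain.agents import AgentExecutor

# Credentials are read from the environment instead of being hardcoded
agent_manager = FlotorchLangChainAgent(
    agent_name="your-agent-name",
    base_url=os.environ["FLOTORCH_BASE_URL"],
    api_key=os.environ["FLOTORCH_API_KEY"],
)

executor = AgentExecutor(
    agent=agent_manager.get_agent(),
    tools=agent_manager.get_tools(),
    verbose=False,
)

# AgentExecutor.invoke takes a dict of prompt input variables
result = executor.invoke({"input": "What can you help me with?"})
print(result["output"])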

The following comparison illustrates the simplified configuration approach offered by FloTorch LangChain:

With the FloTorch plugin, configuration lives in the Console:

# Configuration managed centrally in FloTorch Console
agent_manager = FlotorchLangChainAgent(
    agent_name="weather-agent",     # Reference agent from Console
    custom_tools=[weather_tool],    # Optional custom tools
    base_url="https://gateway.flotorch.cloud",
    api_key="<your_api_key>"
    # Note: Agent goal and system prompt should be configured
    # in the FloTorch Console during agent creation
)
agent = agent_manager.get_agent()
tools = agent_manager.get_tools()

With plain LangChain, the same setup is defined entirely in code:

from langchain.agents import create_openai_functions_agent
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

# All configuration defined in code
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant..."),
    ("human", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])
tools = [weather_tool]
agent = create_openai_functions_agent(model, tools, prompt)  # `model` is an existing chat model instance

FlotorchLangChainAgent is the main entry point for loading agent configurations from the FloTorch Console. It provides fully configured, LangChain-compatible agents with minimal code setup.

The plugin also provides a LangChain-compatible LLM wrapper that integrates with FloTorch Gateway for model inference, giving seamless access to managed language models.
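
As a rough sketch of how the wrapper could be dropped into existing LangChain code; the class name FlotorchLangChainLLM, the flotorch.langchain.llm import path, and the model_id parameter are assumptions here, so check the package for the exact names:

# Assumed import path and class name; verify against the installed package
from flotorch.langchain.llm import FlotorchLangChainLLM

llm = FlotorchLangChainLLM(
    model_id="your-model-id",   # model configured in FloTorch Gateway (assumed parameter name)
    base_url="https://gateway.flotorch.cloud",
    api_key="your_api_key",
)

# Being LangChain-compatible, the wrapper can be used wherever a LangChain
# LLM or chat model is expected, for example invoked directly:
response = llm.invoke("Summarize the FloTorch LangChain plugin in one sentence.")
print(response)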

Two memory components are available:

  • FlotorchLangChainMemory - Long-term persistent memory storage
  • FlotorchLangChainSession - Short-term session memory for conversation context

Session storage is persisted through FloTorch Gateway, enabling conversation continuity across multiple interactions.
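
A minimal sketch of constructing these components; the import paths and constructor arguments below are assumptions modeled on FlotorchLangChainAgent, so consult the Memory documentation for the actual signatures:

# Import paths and parameters are assumptions; see the Memory documentation
from flotorch.langchain.memory import FlotorchLangChainMemory
from flotorch.langchain.sessions import FlotorchLangChainSession

# Long-term memory shared across conversations (assumed constructor arguments)
memory = FlotorchLangChainMemory(
    base_url="https://gateway.flotorch.cloud",
    api_key="your_api_key",
)

# Short-term session context for a single conversation (session_id is hypothetical)
session = FlotorchLangChainSession(
    session_id="user-123",
    base_url="https://gateway.flotorch.cloud",
    api_key="your_api_key",
)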

Explore the detailed documentation for each component to learn more.