# LangGraph Plugin: LLM
`FlotorchLangChainLLM` provides a LangChain-compatible interface for accessing language models through FloTorch Gateway. It implements LangChain's `BaseChatModel` interface, enabling seamless integration with LangGraph workflows while leveraging FloTorch's managed model infrastructure. It handles complexities such as message conversion, tool bindings, structured output generation, and function calling.
## Prerequisites

Before using `FlotorchLangChainLLM`, ensure you have completed the general prerequisites outlined in the LangGraph Plugin Overview, including installation and environment configuration.
## Configuration

### Parameters

Configure your LLM instance with the following parameters:
```python
FlotorchLangChainLLM(
    model_id: str,   # Model identifier from FloTorch Console (required)
    api_key: str,    # FloTorch API key for authentication (required)
    base_url: str,   # FloTorch Gateway endpoint URL (required)
)
```

**Parameter Details:**

- `model_id` - The unique identifier of the model configured in FloTorch Console
- `api_key` - Authentication key for accessing FloTorch Gateway (can be set via an environment variable)
- `base_url` - The FloTorch Gateway endpoint URL (can be set via an environment variable)
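Since `api_key` and `base_url` can come from the environment, a minimal initialization sketch might look like this (the variable names `FLOTORCH_API_KEY` and `FLOTORCH_BASE_URL` are illustrative assumptions, not documented names):

```python
import os

from flotorch.langchain.llm import FlotorchLangChainLLM

# The environment variable names below are hypothetical -- use whatever
# names your deployment defines.
llm = FlotorchLangChainLLM(
    model_id="your-model-id",
    api_key=os.environ["FLOTORCH_API_KEY"],
    base_url=os.environ.get("FLOTORCH_BASE_URL", "https://gateway.flotorch.cloud"),
)
```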
## Features

### BaseChatModel Interface

Fully implements LangChain's `BaseChatModel` interface:

- **Message Conversion** - Seamlessly converts LangChain messages to FloTorch format
- **Tool Bindings** - Supports tool and function bindings via `bind_tools` and `bind`
- **Structured Output** - Supports structured output via `with_structured_output`
- **Streaming Support** - Supports streaming responses
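Streaming goes through the standard `BaseChatModel` API; a minimal sketch, assuming the `llm` instance from the Configuration section:

```python
# Each chunk is a standard LangChain AIMessageChunk carrying a partial
# `content` string; print incrementally as tokens arrive.
for chunk in llm.stream("Summarize FloTorch Gateway in one sentence."):
    print(chunk.content, end="", flush=True)
```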
### Response Processing

Provides comprehensive response handling:

- **Content Extraction** - Extracts text content from model responses
- **Function Calls** - Processes function calls and tool invocations
- **Finish Reasons** - Handles various completion states
- **Token Usage** - Tracks token usage and provides usage statistics
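These pieces surface on the standard LangChain `AIMessage` returned by `invoke`; a sketch (the exact metadata keys FloTorch Gateway populates are an assumption):

```python
from langchain_core.messages import HumanMessage

response = llm.invoke([HumanMessage(content="Hello!")])

print(response.content)            # extracted text content
print(response.tool_calls)         # parsed tool/function calls (empty list if none)
print(response.response_metadata)  # provider metadata, e.g. the finish reason
print(response.usage_metadata)     # token usage statistics, if reported
```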
### Gateway Integration

Seamlessly integrates with FloTorch Gateway:

- **OpenAI-Compatible API** - Uses FloTorch Gateway's `/api/openai/v1/chat/completions` endpoint
- **Model Registry** - Works with models configured in the FloTorch Model Registry
- **Authentication** - Handles API key authentication automatically
- **Error Handling** - Provides robust error handling for network and API issues
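For production calls, wrapping invocations defensively is a reasonable pattern; a sketch (the concrete exception types the client raises are not documented here, so this catches broadly):

```python
try:
    response = llm.invoke("Health check.")
    print(response.content)
except Exception as exc:  # narrow to the client's real exception types if known
    # Log and fall back gracefully rather than crashing the workflow.
    print(f"FloTorch Gateway call failed: {exc}")
```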
### LangGraph Integration

Enables comprehensive LangGraph integration:

- **create_react_agent** - Works seamlessly with LangGraph's `create_react_agent`
- **Tool Bindings** - Supports tool bindings for LangGraph workflows
- **State Management** - Compatible with LangGraph's state management patterns
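On the state-management point, prebuilt LangGraph agents exchange a `messages` state dict; a minimal invocation sketch, assuming the `llm` instance from above:

```python
from langgraph.prebuilt import create_react_agent

agent = create_react_agent(model=llm, tools=[])

# The prebuilt ReAct agent reads and writes a "messages" key in its
# state dict (standard LangGraph convention).
result = agent.invoke({"messages": [("user", "What can you do?")]})
print(result["messages"][-1].content)
```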
## Usage Example

### Basic LLM Usage

```python
from flotorch.langchain.llm import FlotorchLangChainLLM

# Initialize FloTorch LLM
llm = FlotorchLangChainLLM(
    model_id="your-model-id",
    api_key="your_api_key",
    base_url="https://gateway.flotorch.cloud",
)

# Use with LangGraph
from langgraph.prebuilt import create_react_agent

agent = create_react_agent(
    model=llm,
    tools=tools,  # your list of LangChain tools
)
```

### LLM with Tool Bindings
```python
from flotorch.langchain.llm import FlotorchLangChainLLM
from langchain.tools import tool

# Define a tool
@tool
def get_weather(location: str) -> str:
    """Get weather for a location."""
    return f"Weather in {location}: Sunny, 72°F"

# Initialize FloTorch LLM
llm = FlotorchLangChainLLM(
    model_id="your-model-id",
    api_key="your_api_key",
    base_url="https://gateway.flotorch.cloud",
)

# Bind tools
llm_with_tools = llm.bind_tools([get_weather])

# Use with LangGraph
from langgraph.prebuilt import create_react_agent

agent = create_react_agent(
    model=llm_with_tools,
    tools=[get_weather],
)
```

### Structured Output
```python
from flotorch.langchain.llm import FlotorchLangChainLLM
from langchain_core.messages import HumanMessage
from pydantic import BaseModel

# Define schema
class WeatherResponse(BaseModel):
    location: str
    temperature: float
    condition: str

# Initialize FloTorch LLM
llm = FlotorchLangChainLLM(
    model_id="your-model-id",
    api_key="your_api_key",
    base_url="https://gateway.flotorch.cloud",
)

# Use structured output
structured = llm.with_structured_output(WeatherResponse)
result = structured.invoke([HumanMessage(content="What's the weather in New York?")])
print(result)
```
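Because a Pydantic schema is supplied, `with_structured_output` returns a validated `WeatherResponse` instance rather than raw text (standard LangChain behavior), so fields such as `result.temperature` can be used directly.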
## Best Practices

- **Environment Variables** - Use environment variables for credentials to enhance security
- **Model Selection** - Choose appropriate models based on your task requirements and performance needs
- **Error Handling** - Implement proper error handling for production environments
- **Tool Integration** - Define tools with clear descriptions and proper error handling
- **Structured Output** - Use structured output for predictable response formats when needed
- **LangGraph Integration** - Use with `create_react_agent` for seamless agent creation
## Next Steps

- **Agent Configuration** - Learn how to integrate LLMs with agents
- **Memory Integration** - Add memory capabilities to your LLM-powered agents
- **Session Management** - Implement persistent conversations