
LangGraph Plugin: LLM

The FlotorchLangChainLLM provides a LangChain-compatible interface for accessing language models through FloTorch Gateway. It implements LangChain’s BaseChatModel interface, enabling seamless integration with LangGraph workflows while leveraging FloTorch’s managed model infrastructure. It handles complexities such as message conversion, tool bindings, structured output generation, and function calling.

Before using FlotorchLangChainLLM, ensure you have completed the general prerequisites outlined in the LangGraph Plugin Overview, including installation and environment configuration.

Configure your LLM instance with the following parameters:

```python
FlotorchLangChainLLM(
    model_id: str,   # Model identifier from FloTorch Console (required)
    api_key: str,    # FloTorch API key for authentication (required)
    base_url: str,   # FloTorch Gateway endpoint URL (required)
)
```

Parameter Details:

  • model_id - The unique identifier of the model configured in FloTorch Console
  • api_key - Authentication key for accessing FloTorch Gateway (can be set via environment variable)
  • base_url - The FloTorch Gateway endpoint URL (can be set via environment variable)
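
Since both `api_key` and `base_url` can come from the environment, initialization is often driven entirely by environment variables. A minimal sketch, assuming the variable names `FLOTORCH_API_KEY` and `FLOTORCH_BASE_URL` (illustrative choices, not documented names):

```python
import os

def load_flotorch_config(model_id: str) -> dict:
    """Build FlotorchLangChainLLM keyword arguments from the environment.

    FLOTORCH_API_KEY and FLOTORCH_BASE_URL are assumed names; check your
    deployment for the exact variables it uses.
    """
    api_key = os.environ.get("FLOTORCH_API_KEY")
    if not api_key:
        raise RuntimeError("FLOTORCH_API_KEY is not set")
    return {
        "model_id": model_id,
        "api_key": api_key,
        "base_url": os.environ.get("FLOTORCH_BASE_URL", "https://gateway.flotorch.cloud"),
    }
```

The resulting dict can then be splatted into the constructor: `FlotorchLangChainLLM(**load_flotorch_config("your-model-id"))`.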

FlotorchLangChainLLM fully implements LangChain’s BaseChatModel interface:

  • Message Conversion - Seamlessly converts LangChain messages to FloTorch format
  • Tool Bindings - Supports tool and function bindings via bind_tools and bind
  • Structured Output - Supports structured output via with_structured_output
  • Streaming Support - Supports streaming responses

The plugin provides comprehensive response handling:

  • Content Extraction - Extracts text content from model responses
  • Function Calls - Processes function calls and tool invocations
  • Finish Reasons - Handles various completion states
  • Token Usage - Tracks token usage and provides usage statistics
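
The plugin performs this extraction for you, but the shape of the work can be sketched against an OpenAI-style chat-completion payload (the layout below follows the OpenAI API, which the gateway mirrors; you normally never write this yourself):

```python
import json

def summarize_completion(payload: dict) -> dict:
    """Extract text content, tool calls, finish reason, and token usage
    from an OpenAI-style chat-completion response payload."""
    choice = payload["choices"][0]
    message = choice["message"]
    return {
        "content": message.get("content") or "",
        "tool_calls": [
            {
                "name": call["function"]["name"],
                # Arguments arrive as a JSON string and must be decoded.
                "args": json.loads(call["function"]["arguments"]),
            }
            for call in message.get("tool_calls") or []
        ],
        "finish_reason": choice.get("finish_reason"),
        "usage": payload.get("usage", {}),
    }
```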

The plugin integrates directly with FloTorch Gateway:

  • OpenAI-Compatible API - Uses FloTorch Gateway /api/openai/v1/chat/completions endpoint
  • Model Registry - Works with models configured in FloTorch Model Registry
  • Authentication - Handles API key authentication automatically
  • Error Handling - Provides robust error handling for network and API issues
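
For illustration, a request to that endpoint can be assembled as below. Nothing here is sent over the network, and the Bearer authorization scheme is an assumption based on the gateway's OpenAI compatibility; the plugin builds and sends this request for you:

```python
def build_chat_request(base_url: str, api_key: str, model_id: str, messages: list) -> tuple:
    """Assemble (url, headers, body) for the gateway's OpenAI-compatible
    chat-completions endpoint. A sketch of what the plugin does internally."""
    url = base_url.rstrip("/") + "/api/openai/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",  # assumed auth scheme
        "Content-Type": "application/json",
    }
    body = {"model": model_id, "messages": messages}
    return url, headers, body
```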

The plugin enables comprehensive LangGraph integration:

  • create_react_agent - Works seamlessly with LangGraph’s create_react_agent
  • Tool Bindings - Supports tool bindings for LangGraph workflows
  • State Management - Compatible with LangGraph’s state management patterns
Basic usage with create_react_agent:

```python
from flotorch.langchain.llm import FlotorchLangChainLLM
from langgraph.prebuilt import create_react_agent

# Initialize FloTorch LLM
llm = FlotorchLangChainLLM(
    model_id="your-model-id",
    api_key="your_api_key",
    base_url="https://gateway.flotorch.cloud",
)

# Use with LangGraph; `tools` is a list of LangChain tools you have defined
agent = create_react_agent(
    model=llm,
    tools=tools,
)
```
Binding tools:

```python
from flotorch.langchain.llm import FlotorchLangChainLLM
from langchain.tools import tool
from langgraph.prebuilt import create_react_agent

# Define a tool
@tool
def get_weather(location: str) -> str:
    """Get weather for a location."""
    return f"Weather in {location}: Sunny, 72°F"

# Initialize FloTorch LLM
llm = FlotorchLangChainLLM(
    model_id="your-model-id",
    api_key="your_api_key",
    base_url="https://gateway.flotorch.cloud",
)

# Bind tools
llm_with_tools = llm.bind_tools([get_weather])

# Use with LangGraph
agent = create_react_agent(
    model=llm_with_tools,
    tools=[get_weather],
)
```
Structured output with a Pydantic schema:

```python
from flotorch.langchain.llm import FlotorchLangChainLLM
from langchain_core.messages import HumanMessage
from pydantic import BaseModel

# Define schema
class WeatherResponse(BaseModel):
    location: str
    temperature: float
    condition: str

# Initialize FloTorch LLM
llm = FlotorchLangChainLLM(
    model_id="your-model-id",
    api_key="your_api_key",
    base_url="https://gateway.flotorch.cloud",
)

# Use structured output
structured = llm.with_structured_output(WeatherResponse)
result = structured.invoke([HumanMessage(content="What's the weather in New York?")])
print(result)
```
  1. Environment Variables - Use environment variables for credentials to enhance security
  2. Model Selection - Choose appropriate models based on your task requirements and performance needs
  3. Error Handling - Implement proper error handling for production environments
  4. Tool Integration - Define tools with clear descriptions and proper error handling
  5. Structured Output - Use structured output for predictable response formats when needed
  6. LangGraph Integration - Use with create_react_agent for seamless agent creation
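
For practice 3, a generic retry wrapper is one way to harden calls against transient network or API failures. `invoke_with_retries` is a hypothetical helper, not part of the plugin:

```python
import time

def invoke_with_retries(invoke, messages, retries=3, backoff=1.0):
    """Call `invoke` (e.g. llm.invoke), retrying transient failures with
    exponential backoff. In production, catch only the network/API
    exceptions your client actually raises instead of bare Exception."""
    for attempt in range(retries):
        try:
            return invoke(messages)
        except Exception:
            if attempt == retries - 1:
                raise  # out of retries; surface the error to the caller
            time.sleep(backoff * (2 ** attempt))
```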