
LangChain Plugin: LLM

The FlotorchLangChainLLM provides a LangChain-compatible interface for accessing language models through FloTorch Gateway. It implements LangChain’s BaseChatModel interface, enabling seamless integration with LangChain’s agent framework while leveraging FloTorch’s managed model infrastructure. It handles complexities such as message conversion, tool bindings, structured output generation, and function calling.

Before using FlotorchLangChainLLM, ensure you have completed the general prerequisites outlined in the LangChain Plugin Overview, including installation and environment configuration.

Configure your LLM instance with the following parameters:

FlotorchLangChainLLM(
    model_id: str,   # Model identifier from FloTorch Console (required)
    api_key: str,    # FloTorch API key for authentication (required)
    base_url: str    # FloTorch Gateway endpoint URL (required)
)

Parameter Details:

  • model_id - The unique identifier of the model configured in FloTorch Console
  • api_key - Authentication key for accessing FloTorch Gateway (can be set via environment variable)
  • base_url - The FloTorch Gateway endpoint URL (can be set via environment variable; see the sketch below)
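
Since both credentials can come from the environment, a common pattern is to read them yourself and pass them in explicitly. A minimal sketch, assuming hypothetical variable names FLOTORCH_API_KEY and FLOTORCH_BASE_URL (check the plugin's documentation for any names it reads automatically):

import os

from flotorch.langchain.llm import FlotorchLangChainLLM

# FLOTORCH_API_KEY / FLOTORCH_BASE_URL are assumed names for illustration;
# the plugin may read different variables on its own
llm = FlotorchLangChainLLM(
    model_id="your-model-id",
    api_key=os.environ["FLOTORCH_API_KEY"],
    base_url=os.environ["FLOTORCH_BASE_URL"],
)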

FlotorchLangChainLLM fully implements LangChain’s BaseChatModel interface:

  • Message Conversion - Seamlessly converts LangChain messages to FloTorch format
  • Tool Bindings - Supports tool and function bindings via bind_tools and bind
  • Structured Output - Supports structured output via with_structured_output
  • Streaming Support - Supports streaming responses (sketched below)
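
Because the class implements BaseChatModel, LangChain's standard stream() method is available. A minimal sketch (the model ID and credentials are placeholders):

from flotorch.langchain.llm import FlotorchLangChainLLM
from langchain_core.messages import HumanMessage

llm = FlotorchLangChainLLM(
    model_id="your-model-id",
    api_key="your_api_key",
    base_url="https://gateway.flotorch.cloud"
)

# stream() yields AIMessageChunk objects, each carrying a fragment
# of the response text
for chunk in llm.stream([HumanMessage(content="Tell me a short story.")]):
    print(chunk.content, end="", flush=True)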

The plugin provides comprehensive response handling:

  • Content Extraction - Extracts text content from model responses
  • Function Calls - Processes function calls and tool invocations
  • Finish Reasons - Handles various completion states
  • Token Usage - Tracks token usage and provides usage statistics (see the sketch below)
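
A minimal sketch of where to look on the returned message, assuming the gateway populates LangChain's standard metadata fields (whether they are filled in depends on what the gateway returns for your model):

from flotorch.langchain.llm import FlotorchLangChainLLM
from langchain_core.messages import HumanMessage

llm = FlotorchLangChainLLM(
    model_id="your-model-id",
    api_key="your_api_key",
    base_url="https://gateway.flotorch.cloud"
)

response = llm.invoke([HumanMessage(content="Hello!")])

# usage_metadata is LangChain's standard field for token counts, e.g.
# {'input_tokens': ..., 'output_tokens': ..., 'total_tokens': ...}
print(response.usage_metadata)

# Finish reason and other provider details typically land in response_metadata
print(response.response_metadata)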

It integrates seamlessly with FloTorch Gateway:

  • OpenAI-Compatible API - Uses FloTorch Gateway /api/openai/v1/chat/completions endpoint
  • Model Registry - Works with models configured in FloTorch Model Registry
  • Authentication - Handles API key authentication automatically
  • Error Handling - Provides robust error handling for network and API issues (see the sketch below)
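
The specific exception types the plugin raises are not documented here, so a defensive sketch catches broadly; narrow the except clause once you know which errors the plugin surfaces:

from flotorch.langchain.llm import FlotorchLangChainLLM
from langchain_core.messages import HumanMessage

llm = FlotorchLangChainLLM(
    model_id="your-model-id",
    api_key="your_api_key",
    base_url="https://gateway.flotorch.cloud"
)

try:
    response = llm.invoke([HumanMessage(content="Hello!")])
    print(response.content)
except Exception as exc:  # replace with the plugin's specific exceptions once known
    print(f"FloTorch Gateway call failed: {exc}")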

It enables comprehensive tool integration, demonstrated in the examples below:

  • Tool Bindings - Use bind_tools for tool/function bindings
  • Function Bindings - Use bind with functions parameter for OpenAI functions agent
  • Structured Output - Use with_structured_output for schema-based responses
  • LangChain Compatibility - Works seamlessly with create_openai_functions_agent

Basic Usage

from flotorch.langchain.llm import FlotorchLangChainLLM
from langchain_core.messages import HumanMessage

# Initialize the FloTorch LLM
llm = FlotorchLangChainLLM(
    model_id="your-model-id",
    api_key="your_api_key",
    base_url="https://gateway.flotorch.cloud"
)

# Send a LangChain message and print the reply
messages = [HumanMessage(content="Hello, how are you?")]
response = llm.invoke(messages)
print(response.content)

Tool Binding

from flotorch.langchain.llm import FlotorchLangChainLLM
from langchain.agents import create_openai_functions_agent
from langchain.tools import tool
from langchain_core.prompts import ChatPromptTemplate

# Define a tool
@tool
def get_weather(location: str) -> str:
    """Get weather for a location."""
    return f"Weather in {location}: Sunny, 72°F"

# Initialize the FloTorch LLM
llm = FlotorchLangChainLLM(
    model_id="your-model-id",
    api_key="your_api_key",
    base_url="https://gateway.flotorch.cloud"
)

# Bind tools for direct tool-calling invocations
llm_with_tools = llm.bind_tools([get_weather])

# Use with an agent; create_openai_functions_agent binds the
# functions itself, so it receives the unbound LLM
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])
agent = create_openai_functions_agent(llm, [get_weather], prompt)

Function Binding

from flotorch.langchain.llm import FlotorchLangChainLLM

# Initialize the FloTorch LLM
llm = FlotorchLangChainLLM(
    model_id="your-model-id",
    api_key="your_api_key",
    base_url="https://gateway.flotorch.cloud"
)

# Define functions in the OpenAI function-calling schema
functions = [
    {
        "name": "get_weather",
        "description": "Get weather for a location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "The city name"}
            },
            "required": ["location"]
        }
    }
]

# Bind the functions to the LLM
llm_with_functions = llm.bind(functions=functions)

Structured Output

from flotorch.langchain.llm import FlotorchLangChainLLM
from langchain_core.messages import HumanMessage
from pydantic import BaseModel

# Define the response schema
class WeatherResponse(BaseModel):
    location: str
    temperature: float
    condition: str

# Initialize the FloTorch LLM
llm = FlotorchLangChainLLM(
    model_id="your-model-id",
    api_key="your_api_key",
    base_url="https://gateway.flotorch.cloud"
)

# Request a response that conforms to the schema
structured = llm.with_structured_output(WeatherResponse)
result = structured.invoke([HumanMessage(content="What's the weather in New York?")])
print(result)

Best Practices

  1. Environment Variables - Keep credentials in environment variables rather than hard-coding them
  2. Model Selection - Choose models suited to your task requirements and performance needs
  3. Error Handling - Implement proper error handling for production environments
  4. Tool Integration - Define tools with clear descriptions and proper error handling
  5. Structured Output - Use structured output when you need predictable response formats
  6. Agent Integration - Use with create_openai_functions_agent for seamless agent creation