Strands Plugin: LLM

The FlotorchStrandsModel provides a Strands-compatible interface for accessing language models through FloTorch Gateway. It integrates seamlessly with Strands’ agent framework while leveraging FloTorch’s managed model infrastructure, so Strands agents can use FloTorch-hosted models for their reasoning.

Before using FlotorchStrandsModel, ensure you have completed the general prerequisites outlined in the Strands Plugin Overview, including installation and environment configuration.

Configure your model instance with the following parameters:

FlotorchStrandsModel(
    model_id: str,   # Model identifier from FloTorch Console (required)
    api_key: str,    # FloTorch API key for authentication (required)
    base_url: str    # FloTorch Gateway endpoint URL (required)
)

Parameter Details:

  • model_id - The unique identifier of the model configured in FloTorch Console
  • api_key - Authentication key for accessing FloTorch Gateway (can be set via an environment variable; see the sketch after this list)
  • base_url - The FloTorch Gateway endpoint URL (can also be set via an environment variable)
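
To keep credentials out of source code, both values can be read from the environment and passed in explicitly. A minimal sketch, in which the variable names FLOTORCH_API_KEY and FLOTORCH_BASE_URL are assumptions (use whatever names your deployment defines):

import os

from flotorch.strands.llm import FlotorchStrandsModel

# FLOTORCH_API_KEY and FLOTORCH_BASE_URL are assumed variable names;
# the model itself only needs the resolved string values.
model = FlotorchStrandsModel(
    model_id="your-model-id",
    api_key=os.environ["FLOTORCH_API_KEY"],
    base_url=os.environ.get("FLOTORCH_BASE_URL", "https://gateway.flotorch.cloud")
)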

FlotorchStrandsModel fully implements Strands’ model interface:

  • Reasoning Capabilities - Serves as the reasoning model behind Strands agents
  • Message Processing - Handles message conversion and processing
  • Response Generation - Generates responses compatible with Strands framework
  • Tool Support - Supports tool integration for agent workflows

The model integrates seamlessly with FloTorch Gateway:

  • Model Inference - Uses FloTorch Gateway for model inference
  • Model Registry - Works with models configured in FloTorch Model Registry
  • Authentication - Handles API key authentication automatically
  • Error Handling - Provides robust error handling for network and API issues (see the sketch after this list)
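
The gateway client's exception types are not documented here, so the sketch below simply wraps an agent call in a broad handler; treat the exception class and the recovery step as placeholders, and note that it assumes a Strands agent is invoked by calling it directly with a prompt.

from flotorch.strands.llm import FlotorchStrandsModel
from strands.agent.agent import Agent

model = FlotorchStrandsModel(
    model_id="your-model-id",
    api_key="your_api_key",
    base_url="https://gateway.flotorch.cloud"
)
agent = Agent(model=model)

try:
    # Network or gateway failures surface during the agent call.
    response = agent("Summarize today's status report.")
except Exception as exc:  # placeholder: narrow this to the gateway's actual exception types
    print(f"FloTorch Gateway call failed: {exc}")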

The plugin enables comprehensive Strands integration:

  • Agent Class - Works seamlessly with Strands’ Agent class
  • Workflow Support - Compatible with Strands workflows
  • State Management - Integrates with Strands’ state management patterns

The example below shows basic usage with a Strands agent:

from flotorch.strands.llm import FlotorchStrandsModel
from strands.agent.agent import Agent

# Initialize the FloTorch model
model = FlotorchStrandsModel(
    model_id="your-model-id",
    api_key="your_api_key",
    base_url="https://gateway.flotorch.cloud"
)

# Use with a Strands Agent
agent = Agent(model=model)
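
Once constructed, the agent is used like any other Strands agent. A short sketch, assuming the agent is invoked by calling it directly with a prompt:

# The FloTorch-backed model generates the agent's reply.
response = agent("What can you help me with?")
print(response)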

To give the agent tools and a system prompt, pass plain Python functions alongside the model:

from flotorch.strands.llm import FlotorchStrandsModel
from strands.agent.agent import Agent

# Define tools
def get_weather(location: str) -> str:
    """Get weather for a location."""
    return f"Weather in {location}: Sunny, 72°F"

tools = [get_weather]

# Initialize the FloTorch model
model = FlotorchStrandsModel(
    model_id="your-model-id",
    api_key="your_api_key",
    base_url="https://gateway.flotorch.cloud"
)

# Use with a Strands Agent
agent = Agent(
    model=model,
    tools=tools,
    system_prompt="You are a helpful assistant."
)
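
With get_weather registered, a weather question lets the model decide when to call the tool. Again a sketch that assumes direct invocation of the agent:

# The model may call get_weather to answer this prompt.
response = agent("What's the weather in Seattle?")
print(response)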

Best Practices:

  1. Environment Variables - Use environment variables for credentials to enhance security
  2. Model Selection - Choose models that match your task requirements and performance needs
  3. Error Handling - Implement proper error handling for production environments
  4. Tool Integration - Define tools with clear descriptions and their own error handling (see the sketch after this list)
  5. Strands Integration - Use the model with Strands’ Agent class for seamless agent creation
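
Practice 4 is worth a small illustration: a tool with a clear docstring and its own error handling returns a readable message instead of raising, so the agent can recover gracefully. The WEATHER dictionary and the get_weather_safe name below are hypothetical stand-ins for a real data source:

WEATHER = {"San Francisco": "Foggy, 58°F"}  # stand-in for a real weather service

def get_weather_safe(location: str) -> str:
    """Get weather for a location, returning a readable message on failure."""
    try:
        return f"Weather in {location}: {WEATHER[location]}"
    except KeyError:
        # Returning a message rather than raising lets the agent handle the failure.
        return f"No weather data available for {location}"

tools = [get_weather_safe]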