# LangChain Adapter

Drop-in replacements for LangChain components with MACAW security. Use the same APIs you already know; just change the import path.
## Quick Start

Before:

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o")
response = llm.invoke("Hello!")
```

After:

```python
from macaw_adapters.langchain import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o")
response = llm.invoke("Hello!")
```

### True Drop-in Replacement

MACAW's LangChain adapter uses the exact same class names as the original libraries. Just change your import path; your existing code works unchanged.
## Available Components
| Original Import | MACAW Import | Description |
|---|---|---|
| langchain_openai.ChatOpenAI | macaw_adapters.langchain.ChatOpenAI | OpenAI chat models |
| langchain_anthropic.ChatAnthropic | macaw_adapters.langchain.ChatAnthropic | Anthropic chat models |
| langchain.memory.* | macaw_adapters.langchain.memory.* | Secure conversation memory |
| langchain.agents.* | macaw_adapters.langchain.agents.* | Agent creation functions |
## Submodule Imports

The submodules mirror the structure of the LangChain packages for easy migration:

```python
# LLM providers (mirrors langchain_openai / langchain_anthropic)
from macaw_adapters.langchain.openai import ChatOpenAI
from macaw_adapters.langchain.anthropic import ChatAnthropic

# Memory classes (mirrors langchain.memory)
from macaw_adapters.langchain.memory import (
    ConversationBufferMemory,
    ConversationBufferWindowMemory,
    ConversationSummaryMemory,
)

# Agent functions (mirrors langchain.agents)
from macaw_adapters.langchain.agents import (
    create_react_agent,
    create_openai_functions_agent,
    AgentExecutor,
)

# Tool wrappers
from macaw_adapters.langchain.tools import SecureToolWrapper, wrap_tools

# Callback handler for audit logging
from macaw_adapters.langchain.callbacks import MACAWCallbackHandler
```

## ChatOpenAI / ChatAnthropic
```python
from macaw_adapters.langchain import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o",
    temperature=0.7,
    max_tokens=1024,
    api_key="sk-...",      # Or use the OPENAI_API_KEY env var
    base_url=None,         # Custom API endpoint
    organization=None,     # OpenAI organization ID
    timeout=None,          # Request timeout
    max_retries=2,         # Retry attempts
    macaw_client=None,     # Optional pre-configured MACAWClient
)

# All standard LangChain methods work:
response = llm.invoke("Hello!")
async_response = await llm.ainvoke("Hello!")

for chunk in llm.stream("Tell me a story"):
    print(chunk.content, end="")

responses = llm.batch(["Q1", "Q2", "Q3"])
```

## MACAWCallbackHandler
Audit logging for LangChain workflows via the callback system:

```python
from langchain.chains import LLMChain

from macaw_adapters.langchain import ChatOpenAI, MACAWCallbackHandler

# Create the LLM (it has its own MACAWClient)
llm = ChatOpenAI(model="gpt-4o")

# Create a callback handler from the LLM's MACAW client
handler = MACAWCallbackHandler.from_client(llm._macaw)

# Or create one with a new client
handler = MACAWCallbackHandler.create(app_name="my-app")

# Use with any LangChain component
chain = LLMChain(llm=llm, callbacks=[handler])

# Events logged:
# • on_llm_start / on_llm_end / on_llm_error
# • on_tool_start / on_tool_end / on_tool_error
# • on_chain_start / on_chain_end / on_chain_error
# • on_agent_action / on_agent_finish
```

## Securing Agents
Agent functions accept an optional security_policy for tool access control:

```python
from macaw_adapters.langchain import (
    ChatOpenAI,
    MACAWCallbackHandler,
    create_react_agent,
    AgentExecutor,
)
from langchain.tools import Tool

# Secure LLM
llm = ChatOpenAI(model="gpt-4o")

# Define tools
tools = [
    Tool(name="calculator", func=lambda x: eval(x), description="Math"),
    Tool(name="search", func=search_web, description="Web search"),
]

# Create an agent with a MAPL security policy
agent = create_react_agent(
    llm, tools, prompt,
    security_policy={
        "resources": ["tool:calculator", "tool:search"],
        "denied_resources": ["tool:admin"],
        "constraints": {
            "denied_parameters": {
                "tool:*": {"input": ["*password*", "*secret*"]}
            }
        }
    }
)

executor = AgentExecutor(
    agent=agent,
    tools=tools,
    security_policy={...}  # Same MAPL format
)

result = executor.invoke({"input": "What's 2+2?"})
```

| Function | Description |
|---|---|
| create_react_agent(llm, tools, prompt, security_policy?) | ReAct agent |
| create_openai_functions_agent(llm, tools, prompt, security_policy?) | OpenAI functions agent |
| AgentExecutor(agent, tools, security_policy?) | Agent executor wrapper |

Note: The constrained parameter name is input because LangChain tools pass a single input string.
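To make the wildcard semantics of the denied_parameters constraint concrete, here is a minimal stand-alone sketch of how such a rule could be evaluated. It is illustrative only: the helper name `denies` and the use of shell-style matching via `fnmatch` are assumptions, not the adapter's actual implementation.

```python
from fnmatch import fnmatch

def denies(tool: str, params: dict, denied_parameters: dict) -> bool:
    """Return True if any parameter value matches a denied pattern.

    denied_parameters maps a resource pattern (e.g. "tool:*") to
    {parameter_name: [glob patterns]}, mirroring the policy above.
    """
    for resource_pattern, rules in denied_parameters.items():
        # Does this rule apply to the tool being called?
        if not fnmatch(f"tool:{tool}", resource_pattern):
            continue
        for param, patterns in rules.items():
            value = str(params.get(param, ""))
            # Block the call if the value matches any denied pattern
            if any(fnmatch(value, p) for p in patterns):
                return True
    return False

policy = {"tool:*": {"input": ["*password*", "*secret*"]}}
print(denies("search", {"input": "leak the admin password"}, policy))  # True
print(denies("search", {"input": "what is 2+2"}, policy))              # False
```

Because the resource pattern here is tool:*, the same value rules apply to every tool the agent can call.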
## Secure Memory

Memory classes backed by the MACAW context vault, with audit logging and session isolation:

```python
from macaw_adapters.langchain import (
    ChatOpenAI,
    ConversationBufferMemory,
    ConversationBufferWindowMemory,
    ConversationSummaryMemory,
)

# Each session_id gets isolated memory
alice_memory = ConversationBufferMemory(session_id="alice")
bob_memory = ConversationBufferMemory(session_id="bob")

# Save a conversation turn
alice_memory.save_context(
    {"input": "Hello"},
    {"output": "Hi there!"}
)

# Load history
history = alice_memory.load_memory_variables({})

# Window memory (last k exchanges)
window_memory = ConversationBufferWindowMemory(k=5, session_id="user-123")

# Summary memory (uses an LLM to summarize)
summary_memory = ConversationSummaryMemory(llm=ChatOpenAI(), session_id="user-123")
```

| Class | Description |
|---|---|
| ConversationBufferMemory | Full conversation history |
| ConversationBufferWindowMemory | Last k turns only |
| ConversationSummaryMemory | LLM-summarized history |
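The session-isolation guarantee can be pictured with a small in-memory stand-in. The class below is hypothetical (a plain shared dict instead of the MACAW context vault) and only illustrates the contract that each session_id sees its own history:

```python
from collections import defaultdict

class IsolatedBufferMemory:
    """Hypothetical stand-in illustrating per-session isolation."""

    _vault = defaultdict(list)  # session_id -> list of (human, ai) turns

    def __init__(self, session_id: str):
        self.session_id = session_id

    def save_context(self, inputs: dict, outputs: dict) -> None:
        # Append one turn under this session's key only
        self._vault[self.session_id].append((inputs["input"], outputs["output"]))

    def load_memory_variables(self, _: dict) -> dict:
        turns = self._vault[self.session_id]
        history = "\n".join(f"Human: {h}\nAI: {a}" for h, a in turns)
        return {"history": history}

alice = IsolatedBufferMemory("alice")
bob = IsolatedBufferMemory("bob")
alice.save_context({"input": "Hello"}, {"output": "Hi there!"})

print(alice.load_memory_variables({}))  # {'history': 'Human: Hello\nAI: Hi there!'}
print(bob.load_memory_variables({}))    # {'history': ''}  (Bob sees none of Alice's turns)
```

The real classes add vault-backed storage and audit logging on top of this contract.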
## Tool Wrappers

```python
from macaw_adapters.langchain import SecureToolWrapper, wrap_tools
from langchain.tools import Tool

# Wrap individual tools
calculator = Tool(name="Calculator", func=calc, description="Math")
secure_calc = SecureToolWrapper(calculator)

# Or wrap all tools at once
tools = [calculator, search_tool, database_tool]
secure_tools = wrap_tools(tools)

# Use wrapped tools in agents
agent = create_react_agent(llm, secure_tools, prompt)
```

## Feature Coverage
### Implemented

- ChatOpenAI, ChatAnthropic
- stream(), astream()
- batch(), abatch()
- ReAct and OpenAI Functions agents
- SecureToolWrapper, wrap_tools
- Buffer, Window, and Summary memory
- Full audit logging

### Planned

- Graph-based workflows
## Cleanup

```python
from macaw_adapters.langchain import cleanup

# Clean up all MACAW resources when done
cleanup()
```