LangChain Integration
Use Attestd as a LangChain tool to give agents real-time CVE risk and supply chain integrity data. The tool returns structured, deterministic output — no CVSS score interpretation, no ambiguous text for the agent to parse.
The most important correctness concern: `outside_coverage` is not a safety signal. An agent that treats `AttestdUnsupportedProductError` as "no known vulnerabilities" is wrong. Catch it and return an explicit unknown-risk response.
Install
```bash
pip install attestd "langchain-core>=0.3" langchain langchain-openai
```

StructuredTool definition
Use `StructuredTool.from_function` with an explicit Pydantic `args_schema`. This gives the LLM a validated schema and gives you typed inputs. Instantiate `attestd.Client` once at module level and capture it in the closure; never instantiate it inside the tool function.
```python
from langchain_core.tools import StructuredTool
from pydantic import BaseModel

import attestd

# Instantiate once — captured in closure. Never instantiate inside the tool function.
_client = attestd.Client(api_key="YOUR_API_KEY")


class AttestdInput(BaseModel):
    product: str
    version: str


def check_vulnerability(product: str, version: str) -> dict:
    try:
        result = _client.check(product, version)
        return {
            "outside_coverage": False,
            "risk_state": result.risk_state,
            "actively_exploited": result.actively_exploited,
            "patch_available": result.patch_available,
            "fixed_version": result.fixed_version,
            "supply_chain_compromised": (
                result.supply_chain.compromised
                if result.supply_chain is not None
                else False
            ),
        }
    except attestd.AttestdUnsupportedProductError:
        # Product is outside Attestd coverage. This is NOT a safety signal.
        return {
            "outside_coverage": True,
            "risk_state": None,
            "message": f"No Attestd coverage for '{product}'. Treat as unknown risk.",
        }


attestd_tool = StructuredTool.from_function(
    func=check_vulnerability,
    name="check_package_vulnerability",
    description=(
        "Check whether a software package version has known CVE vulnerabilities "
        "or supply chain compromise. Use before deploying or recommending any "
        "software dependency. outside_coverage=true means Attestd has no data — "
        "treat as unknown risk, not safe. "
        "Input: product slug (e.g. 'nginx', 'runc', 'log4j') and exact version string."
    ),
    args_schema=AttestdInput,
)
```
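Before wiring the tool into an agent, you can sanity-check it in isolation: a `StructuredTool` is a Runnable, so it can be called directly with `.invoke`. The commented output below is illustrative only; the field values shown are hypothetical, not real Attestd responses.

```python
# Direct invocation, no LLM involved: verifies the args schema and the
# outside-coverage path before the tool is handed to an agent.
print(attestd_tool.invoke({"product": "runc", "version": "1.0.0"}))
# Illustrative output (values hypothetical):
# {'outside_coverage': False, 'risk_state': 'high', 'actively_exploited': False,
#  'patch_available': True, 'fixed_version': '1.0.1', 'supply_chain_compromised': False}

print(attestd_tool.invoke({"product": "some-internal-lib", "version": "0.1.0"}))
# {'outside_coverage': True, 'risk_state': None,
#  'message': "No Attestd coverage for 'some-internal-lib'. Treat as unknown risk."}
```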
Agent executor pattern
Use `create_tool_calling_agent` (LangChain 0.3+) with any function-calling LLM. The system prompt must state the deployment-blocking policy explicitly; the model cannot infer the correct behaviour from the tool description alone.
```python
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

prompt = ChatPromptTemplate.from_messages([
    (
        "system",
        "You are a security-aware deployment assistant. "
        "Before approving any software dependency, check its vulnerability status "
        "using the check_package_vulnerability tool. "
        "Block deployment if risk_state is 'critical' or 'high', or if "
        "supply_chain_compromised is true. "
        "outside_coverage=true means unknown risk — state this explicitly, "
        "do not treat it as safe.",
    ),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

agent = create_tool_calling_agent(llm, [attestd_tool], prompt)
executor = AgentExecutor(agent=agent, tools=[attestd_tool], verbose=True)

# Example: check two dependencies before a deploy
response = executor.invoke({
    "input": "Is it safe to deploy with runc 1.0.0 and litellm 1.82.7?"
})
print(response["output"])
```

Async support
Use `attestd.AsyncClient` for async agent deployments. Instantiate it at module level, not inside the coroutine. Pass `coroutine=` (not `func=`) to `StructuredTool.from_function`.
```python
import attestd
from langchain_core.tools import StructuredTool
from pydantic import BaseModel

# Instantiate once at module level — NOT inside the coroutine.
_async_client = attestd.AsyncClient(api_key="YOUR_API_KEY")


class AttestdInput(BaseModel):
    product: str
    version: str


async def acheck_vulnerability(product: str, version: str) -> dict:
    try:
        result = await _async_client.check(product, version)
        return {
            "outside_coverage": False,
            "risk_state": result.risk_state,
            "actively_exploited": result.actively_exploited,
            "patch_available": result.patch_available,
            "fixed_version": result.fixed_version,
            "supply_chain_compromised": (
                result.supply_chain.compromised
                if result.supply_chain is not None
                else False
            ),
        }
    except attestd.AttestdUnsupportedProductError:
        return {
            "outside_coverage": True,
            "risk_state": None,
            "message": f"No Attestd coverage for '{product}'. Treat as unknown risk.",
        }


# Pass coroutine= (not func=) for async tools
async_attestd_tool = StructuredTool.from_function(
    coroutine=acheck_vulnerability,
    name="check_package_vulnerability",
    description=(
        "Check whether a software package version has known CVE vulnerabilities "
        "or supply chain compromise. outside_coverage=true means unknown risk, not safe."
    ),
    args_schema=AttestdInput,
)

# Use with an agent executor built around async_attestd_tool; this call
# must run inside a coroutine (see the runnable sketch below).
response = await executor.ainvoke({
    "input": "Check litellm 1.82.7 before we deploy."
})
```
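From a synchronous entry point (a script, a CI step) there is no running event loop, so wrap the call in `asyncio.run`. A minimal runnable sketch, assuming `llm` and `prompt` are defined as in the sync example above:

```python
import asyncio

from langchain.agents import AgentExecutor, create_tool_calling_agent

# Assumes llm and prompt from the agent executor example above.
async_agent = create_tool_calling_agent(llm, [async_attestd_tool], prompt)
async_executor = AgentExecutor(agent=async_agent, tools=[async_attestd_tool])


async def main() -> None:
    response = await async_executor.ainvoke({
        "input": "Check litellm 1.82.7 before we deploy."
    })
    print(response["output"])


asyncio.run(main())
```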
Return fields
The tool returns these fields. Design agent branching logic around them.
| Field | Semantics |
|---|---|
| `outside_coverage` | `true` if Attestd has no data for this product. Not a safety signal; treat as unknown risk. |
| `risk_state` | One of `"critical"`, `"high"`, `"elevated"`, `"low"`, `"none"`, or `null` (when `outside_coverage` is `true`). Block on `critical` or `high`. |
| `actively_exploited` | `true` if in the CISA KEV catalog. Block regardless of `risk_state` if `true`. |
| `patch_available` | `true` if a fixed version is known. Use with `fixed_version` to tell the agent what to recommend. |
| `fixed_version` | The earliest clean version, or `null` if no patch exists yet. |
| `supply_chain_compromised` | `true` if a malicious publish or security yank was detected on PyPI. Block immediately. |
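If you want the blocking policy enforced in code rather than left to the model, the table maps directly onto a deterministic gate. A minimal sketch; the `should_block` helper and its reason strings are illustrative, not part of the Attestd SDK:

```python
def should_block(tool_result: dict) -> tuple[bool, str]:
    """Map the tool's return fields to a (block, reason) decision.

    Illustrative policy helper, not part of the Attestd SDK.
    """
    if tool_result["outside_coverage"]:
        # Unknown risk is not safe: hold for human review, never auto-approve.
        return True, "outside Attestd coverage; unknown risk, needs human review"
    if tool_result["supply_chain_compromised"]:
        return True, "supply chain compromise detected"
    if tool_result["actively_exploited"]:
        # In the CISA KEV catalog: block regardless of risk_state.
        return True, "actively exploited (CISA KEV)"
    if tool_result["risk_state"] in ("critical", "high"):
        fix = tool_result.get("fixed_version")
        reason = f"risk_state={tool_result['risk_state']}"
        return True, reason + (f"; upgrade to {fix}" if fix else "; no patch available yet")
    return False, "no blocking condition"
```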
- → AI Agent Integration (generic function tool, OpenAI function calling, outside-coverage policy)
- → SDK Reference (AsyncClient, error types, attestd.testing mock transports)
- → Response Field Reference (full semantics for every field returned by /v1/check)