# LangChain
semantix integrates with LangChain as a composable Runnable that validates chain outputs against an Intent.
## Install
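The install command did not survive extraction. Assuming the package is published on PyPI as `semantix` (the package name and extra are assumptions, not confirmed by this page), installation would look like:

```shell
# Assumed package name; a "langchain" extra may be required for the integration.
pip install semantix
```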
## Usage
```python
from semantix import Intent
from semantix.integrations.langchain import SemanticValidator

class Polite(Intent):
    """The text must be polite and professional."""

validator = SemanticValidator(Polite)
chain = prompt | llm | StrOutputParser() | validator
```
`SemanticValidator` implements LangChain's Runnable protocol: it supports `invoke()`, `ainvoke()`, `batch()`, and the `|` pipe operator.

On failure, it raises `OutputParserException` (if `langchain-core` is installed) or `ValueError`.
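Because validation failures surface as exceptions, a caller can handle them explicitly. A minimal sketch, assuming the chain from the Usage section above is already defined:

```python
from langchain_core.exceptions import OutputParserException

try:
    result = chain.invoke({"input": "I'm furious about my order!"})
except OutputParserException as err:
    # The judge rejected the model output; err carries the failure details.
    result = "We're sorry — please contact support directly."
```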
## Parameters
| Parameter | Description |
|---|---|
| `intent` | An `Intent` subclass whose docstring defines the requirement |
| `judge` | Judge backend override. Defaults to `QuantizedNLIJudge`. |
## Async support
`SemanticValidator.ainvoke()` works the same as `invoke()`, since the local NLI judge is CPU-bound and doesn't benefit from async I/O. However, it integrates correctly with LangChain's async pipeline.
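For example, inside an async application the same chain can simply be awaited (a sketch, reusing the chain from the Usage section):

```python
import asyncio

async def main() -> None:
    # ainvoke() runs the judge synchronously under the hood,
    # but composes cleanly with other async Runnables.
    result = await chain.ainvoke({"input": "I'm furious about my order!"})
    print(result)

asyncio.run(main())
```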
## Batch validation
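This section's body appears to have been lost in extraction. Since `SemanticValidator` supports `batch()` via the Runnable protocol, batch validation presumably follows LangChain's standard batch API; a hedged sketch, reusing the chain from the Usage section:

```python
# Each input is validated independently; a failing output raises for that item.
results = chain.batch([
    {"input": "Where is my order?"},
    {"input": "My package arrived damaged."},
])
```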
## Custom judge
```python
from semantix import LLMJudge

validator = SemanticValidator(Polite, judge=LLMJudge(model="gpt-4o-mini"))
chain = prompt | llm | StrOutputParser() | validator
```
## Full example
```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

from semantix import Intent
from semantix.integrations.langchain import SemanticValidator

class Polite(Intent):
    """The text must be polite and professional."""

prompt = ChatPromptTemplate.from_template(
    "You are a customer support agent. Respond to: {input}"
)
llm = ChatOpenAI(model="gpt-4o-mini")
validator = SemanticValidator(Polite)

chain = prompt | llm | StrOutputParser() | validator
result = chain.invoke({"input": "I'm furious about my order!"})
```
## Related
- DSPy -- reward functions for DSPy modules
- Pydantic AI -- agent output validation
- Judges -- available judge backends