Legal contracts are foundational documents that define the relationships, obligations, and responsibilities between parties. Whether it's a partnership agreement, an NDA, or a service contract, these documents often contain critical information that drives decision-making, risk management, and compliance. However, navigating and extracting insights from these contracts can be a complex and time-consuming process.
In this post, we'll explore how we can streamline the process of understanding and working with legal contracts by implementing an end-to-end solution using agentic GraphRAG. I see GraphRAG as an umbrella term for any method that retrieves or reasons over information stored in a knowledge graph, enabling more structured and context-aware responses.
By structuring legal contracts into a knowledge graph in Neo4j, we can create a powerful repository of information that's easy to query and analyze. From there, we'll build a LangGraph agent that allows users to ask specific questions about the contracts, making it possible to quickly surface new insights.
The code is available in this GitHub repository.
Why structuring data matters
Some domains work well with naive RAG, but legal contracts present unique challenges.

As shown in the image, relying solely on a vector index to retrieve relevant chunks can introduce risks, such as pulling information from irrelevant contracts. This is because legal language is highly structured, and similar wording across different agreements can lead to incorrect or misleading retrieval. These limitations highlight the need for a more structured approach, such as GraphRAG, to ensure precise and context-aware retrieval.
To implement GraphRAG, we first need to construct a knowledge graph.

To build a knowledge graph for legal contracts, we need a way to extract structured information from documents and store it alongside the raw text. An LLM can help by reading through contracts and identifying key details such as parties, dates, contract types, and important clauses. Instead of treating the contract as just a block of text, we break it down into structured components that reflect its underlying legal meaning. For example, an LLM can recognize that "ACME Inc. agrees to pay $10,000 per month starting January 1, 2024" contains both a payment obligation and a start date, which we can then store in a structured format.
Once we have this structured data, we store it in a knowledge graph, where entities like companies, agreements, and clauses are represented as nodes along with their relationships. The unstructured text remains accessible, but now we can use the structured layer to refine our searches and make retrieval much more precise. Instead of just fetching the most relevant text chunks, we can filter contracts based on their attributes. This means we can answer questions that naive RAG would struggle with, such as how many contracts were signed last month or whether we have any active agreements with a specific company. These questions require aggregation and filtering, which isn't possible with standard vector-based retrieval alone.
By combining structured and unstructured data, we also make retrieval more context-aware. If a user asks about a contract's payment terms, we ensure the search is constrained to the right agreement rather than relying on text similarity, which might pull in terms from unrelated contracts. This hybrid approach overcomes the limitations of naive RAG and allows for much deeper and more reliable analysis of legal documents.
Graph construction
We'll leverage an LLM to extract structured information from legal documents, using the CUAD (Contract Understanding Atticus Dataset), a widely used benchmark dataset for contract analysis licensed under CC BY 4.0. The CUAD dataset contains over 500 contracts, making it an ideal dataset for evaluating our structured extraction pipeline.
The token count distribution for the contracts is visualized below.

Most contracts in this dataset are relatively short, with token counts below 10,000. However, there are some much longer contracts, with a few reaching up to 80,000 tokens. These long contracts are rare, while shorter ones make up the majority. The distribution shows a steep drop-off, meaning long contracts are the exception rather than the rule.
We're using Gemini 2.0 Flash for extraction, which has a 1 million token input limit, so handling these contracts isn't a problem. Even the longest contracts in our dataset (around 80,000 tokens) fit well within the model's capacity. Since most contracts are much shorter, we don't need to worry about truncation or breaking documents into smaller chunks for processing.
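If you want to reproduce a rough version of this distribution without pulling in a tokenizer, a character-based heuristic (roughly four characters per token for English prose) is usually close enough for a sanity check. The contracts list below is a toy stand-in for the actual CUAD documents, and the four-characters-per-token ratio is an assumption, not an exact count:

```python
def approx_token_count(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English prose."""
    return max(1, len(text) // 4)

# Toy stand-in for the CUAD contract texts loaded elsewhere
contracts = ["This Agreement is entered into... " * 100, "Short NDA text."]

counts = [approx_token_count(c) for c in contracts]
print(sorted(counts))
```

For an exact count, you would use the tokenizer of the extraction model itself, but the heuristic is enough to spot documents approaching the context limit.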
Structured data extraction
Most commercial LLMs have the option to use Pydantic objects to define the schema of the output. An example for location:
from typing import Optional

from pydantic import BaseModel, Field


class Location(BaseModel):
    """
    Represents a physical location including address, city, state, and country.
    """

    address: Optional[str] = Field(
        ..., description="The street address of the location. Use None if not provided"
    )
    city: Optional[str] = Field(
        ..., description="The city of the location. Use None if not provided"
    )
    state: Optional[str] = Field(
        ..., description="The state or region of the location. Use None if not provided"
    )
    country: str = Field(
        ...,
        description="The country of the location. Use the two-letter ISO standard.",
    )
When using LLMs for structured output, Pydantic helps define a clear schema by specifying the types of attributes and providing descriptions that guide the model's responses. Each field has a type, such as str or Optional[str], and a description that tells the LLM exactly how to format the output.
For example, in a Location model, we define key attributes like address, city, state, and country, specifying what data is expected and how it should be structured. The country field, for instance, follows the two-letter country code standard like "US", "FR", or "JP", instead of inconsistent variations like "United States" or "USA." This principle applies to other structured data as well; ISO 8601 keeps dates in a standard format (YYYY-MM-DD), and so on.
By defining structured output with Pydantic, we make LLM responses more reliable, machine-readable, and easier to integrate into databases or APIs. Clear field descriptions further help the model generate correctly formatted data, reducing the need for post-processing.
The Pydantic schema models can be more sophisticated, like the Contract model below, which captures key details of a legal agreement, ensuring the extracted data follows a standardized structure.
class Contract(BaseModel):
    """
    Represents the key details of the contract.
    """

    summary: str = Field(
        ...,
        description=(
            "High level summary of the contract with relevant facts and details. "
            "Include all relevant information to provide full picture. "
            "Do not use any pronouns."
        ),
    )
    contract_type: str = Field(
        ...,
        description="The type of contract being entered into.",
        enum=CONTRACT_TYPES,
    )
    parties: List[Organization] = Field(
        ...,
        description="List of parties involved in the contract, with details of each party's role.",
    )
    effective_date: str = Field(
        ...,
        description=(
            "Enter the date when the contract becomes effective in yyyy-MM-dd format. "
            "If only the year (e.g., 2015) is known, use 2015-01-01 as the default date. "
            "Always fill in full date."
        ),
    )
    contract_scope: str = Field(
        ...,
        description="Description of the scope of the contract, including rights, duties, and any limitations.",
    )
    duration: Optional[str] = Field(
        None,
        description=(
            "The duration of the agreement, including provisions for renewal or termination. "
            "Use ISO 8601 durations standard."
        ),
    )
    end_date: Optional[str] = Field(
        None,
        description=(
            "The date when the contract expires. Use yyyy-MM-dd format. "
            "If only the year (e.g., 2015) is known, use 2015-01-01 as the default date. "
            "Always fill in full date."
        ),
    )
    total_amount: Optional[float] = Field(
        None, description="Total value of the contract."
    )
    governing_law: Optional[Location] = Field(
        None, description="The jurisdiction's laws governing the contract."
    )
    clauses: Optional[List[Clause]] = Field(
        None,
        description=f"""Relevant summaries of clause types. Allowed clause types are {CLAUSE_TYPES}""",
    )
This contract schema organizes key details of legal agreements in a structured way, making them easier to analyze with LLMs. It includes different types of clauses, such as confidentiality or termination, each with a short summary. The parties involved are listed with their names, locations, and roles, while contract details cover things like start and end dates, total value, and governing law. Some attributes, such as governing law, can be defined using nested models, enabling more detailed and complex outputs.
The nested object approach works well with some AI models that handle complex data relationships, while others may struggle with deeply nested details.
We can test our approach using the following example. We're using the LangChain framework to orchestrate LLMs.
llm = ChatGoogleGenerativeAI(model="gemini-2.0-flash")
llm.with_structured_output(Contract).invoke(
    "Tomaz works with Neo4j since 2017 and will make a billion dollar until 2030. "
    "The contract was signed in Las Vegas"
)
which outputs
Contract(
    summary="Tomaz works with Neo4j since 2017 and will make a billion dollar until 2030.",
    contract_type="Service",
    parties=[
        Organization(
            name="Tomaz",
            location=Location(
                address=None,
                city="Las Vegas",
                state=None,
                country="US"
            ),
            role="employee"
        ),
        Organization(
            name="Neo4j",
            location=Location(
                address=None,
                city=None,
                state=None,
                country="US"
            ),
            role="employer"
        )
    ],
    effective_date="2017-01-01",
    contract_scope="Tomaz will work with Neo4j",
    duration=None,
    end_date="2030-01-01",
    total_amount=1_000_000_000.0,
    governing_law=None,
    clauses=None
)
Now that our contract information is in a structured format, we can define the Cypher query needed to import it into Neo4j, mapping entities, relationships, and key clauses into a graph structure. This step transforms raw extracted data into a queryable knowledge graph, enabling efficient traversal and retrieval of contract insights.
UNWIND $data AS row
MERGE (c:Contract {file_id: row.file_id})
SET c.summary = row.summary,
    c.contract_type = row.contract_type,
    c.effective_date = date(row.effective_date),
    c.contract_scope = row.contract_scope,
    c.duration = row.duration,
    c.end_date = CASE WHEN row.end_date IS NOT NULL THEN date(row.end_date) ELSE NULL END,
    c.total_amount = row.total_amount
WITH c, row
CALL (c, row) {
    WITH c, row
    WHERE row.governing_law IS NOT NULL
    MERGE (c)-[:HAS_GOVERNING_LAW]->(l:Location)
    SET l += row.governing_law
}
FOREACH (party IN row.parties |
    MERGE (p:Party {name: party.name})
    MERGE (p)-[:HAS_LOCATION]->(pl:Location)
    SET pl += party.location
    MERGE (p)-[pr:PARTY_TO]->(c)
    SET pr.role = party.role
)
FOREACH (clause IN row.clauses |
    MERGE (c)-[:HAS_CLAUSE]->(cl:Clause {type: clause.clause_type})
    SET cl.summary = clause.summary
)
This Cypher query imports structured contract data into Neo4j by creating Contract nodes with attributes such as summary, contract_type, effective_date, duration, and total_amount. If a governing law is specified, it links the contract to a Location node. Parties involved in the contract are stored as Party nodes, with each party connected to a Location and assigned a role in relation to the contract. The query also processes clauses, creating Clause nodes and linking them to the contract while storing their type and summary.
After processing and importing the contracts, the resulting graph follows the graph schema below.

Let's also take a look at a single contract.

This graph represents a contract structure where a contract (orange node) connects to various clauses (purple nodes), parties (blue nodes), and locations (violet nodes). The contract has three clauses: Renewal & Termination, Liability & Indemnification, and Confidentiality & Non-Disclosure. Two parties, Modus Media International and Dragon Systems, Inc., are involved, each linked to their respective locations, Netherlands (NL) and United States (US). The contract is governed by U.S. law. The contract node also contains additional metadata, including dates and other relevant details.
A public read-only instance containing the CUAD legal contracts is available with the following credentials.
URI: neo4j+s://demo.neo4jlabs.com
username: legalcontracts
password: legalcontracts
database: legalcontracts
Entity resolution
Entity resolution in legal contracts is challenging due to variations in how companies, individuals, and locations are referenced. A company might appear as "Acme Inc." in one contract and "Acme Corporation" in another, requiring a process to determine whether they refer to the same entity.
One approach is to generate candidate matches using text embeddings or string distance metrics like Levenshtein distance. Embeddings capture semantic similarity, while string distance measures character-level differences. Once candidates are identified, additional evaluation is needed: comparing metadata such as addresses or tax IDs, analyzing shared relationships in the graph, or incorporating human review for critical cases.
For resolving entities at scale, both open-source solutions like Dedupe and commercial tools like Senzing offer automated methods. Choosing the right approach depends on data quality, accuracy requirements, and whether manual oversight is feasible.
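As a minimal illustration of the candidate-generation step, the sketch below uses Python's standard-library difflib to score character-level similarity between party names; the threshold of 0.8 is an arbitrary assumption, and a production system would combine this with embeddings and metadata checks as described above:

```python
from difflib import SequenceMatcher


def name_similarity(a: str, b: str) -> float:
    """Character-level similarity in [0, 1], ignoring case."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def candidate_matches(name: str, known_names: list[str], threshold: float = 0.8) -> list[str]:
    """Return known names similar enough to plausibly be the same entity."""
    return [n for n in known_names if name_similarity(name, n) >= threshold]


print(candidate_matches("Acme Inc.", ["Acme Inc", "Acme Corporation", "Dragon Systems, Inc."]))
```

Each candidate surfaced this way would then go through the secondary checks (shared addresses, tax IDs, graph neighborhoods, or human review) before two nodes are merged.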
With the legal graph constructed, we can move on to the agentic GraphRAG implementation.
Agentic GraphRAG
Agentic architectures vary widely in complexity, modularity, and reasoning capabilities. At their core, these architectures involve an LLM acting as a central reasoning engine, often supplemented with tools, memory, and orchestration mechanisms. The key differentiator is how much autonomy the LLM has in making decisions and how interactions with external systems are structured.
One of the simplest and most effective designs, particularly for chatbot-like implementations, is a direct LLM-with-tools approach. In this setup, the LLM serves as the decision-maker, dynamically selecting which tools to invoke (if any), retrying operations when necessary, and executing multiple tools in sequence to fulfill complex requests.

The diagram represents a simple LangGraph agent workflow. It begins at __start__, moving to the assistant node, where the LLM processes user input. From there, the assistant can either call tools to fetch relevant information or transition directly to __end__ to complete the interaction. If a tool is used, the assistant processes the response before deciding whether to call another tool or end the session. This structure allows the agent to autonomously determine when external information is needed before responding.
This approach is particularly well-suited to stronger commercial models like Gemini or GPT-4o, which excel at reasoning and self-correction.
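The control flow in that diagram can be sketched without any framework: each turn, the model either requests a tool or produces a final answer. In this library-free sketch, call_llm and the tools dictionary are stand-ins for the real model and the contract search tool, so the stubs below are purely illustrative:

```python
def run_agent(user_input, call_llm, tools, max_turns=5):
    """Minimal assistant/tools loop: the LLM either requests a tool or answers."""
    messages = [{"role": "user", "content": user_input}]
    for _ in range(max_turns):
        # Decision is either {"tool": name, "args": ...} or {"answer": ...}
        decision = call_llm(messages)
        if "answer" in decision:
            return decision["answer"]
        # Execute the requested tool and feed the result back to the model
        result = tools[decision["tool"]](decision["args"])
        messages.append({"role": "tool", "content": result})
    return "Max turns reached"


# Toy stubs: the fake model calls one tool, then answers from the tool output
def fake_llm(messages):
    if any(m["role"] == "tool" for m in messages):
        return {"answer": f"Found: {messages[-1]['content']}"}
    return {"tool": "ContractSearch", "args": {"parties": ["Neo4j"]}}


print(run_agent("Any contracts with Neo4j?", fake_llm, {"ContractSearch": lambda a: "1 contract"}))
```

LangGraph's prebuilt agent implements essentially this loop, with state persistence and streaming on top.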
Tools
LLMs are powerful reasoning engines, but their effectiveness often depends on how well they're equipped with external tools. These tools, whether database queries, APIs, or search functions, extend an LLM's ability to retrieve information, perform calculations, or interact with structured data.

Designing tools that are both general enough to handle diverse queries and precise enough to return meaningful results is more art than science. What we're really building is a semantic layer between the LLM and the underlying data. Rather than requiring the LLM to understand the exact structure of a Neo4j knowledge graph or a database schema, we define tools that abstract away these complexities.
With this approach, the LLM doesn't need to know whether contract information is stored as graph nodes and relationships or as raw text in a document store. It only needs to invoke the right tool to fetch relevant data based on a user's question.
In our case, the contract retrieval tool serves as this semantic interface. When a user asks about contract terms, obligations, or parties, the LLM calls a structured query tool that translates the request into a database query, retrieves relevant information, and presents it in a format the LLM can interpret and summarize. This enables a flexible, model-agnostic system where different LLMs can interact with contract data without needing direct knowledge of its storage or structure.
There's no one-size-fits-all standard for designing an optimal toolset. What works well for one model may fail for another. Some models handle ambiguous tool instructions gracefully, while others struggle with complex parameters or require explicit prompting. The trade-off between generality and task-specific efficiency means tool design requires iteration, testing, and fine-tuning for the LLM in use.
For contract analysis, an effective tool should retrieve contracts and summarize key terms without requiring users to phrase queries rigidly. Achieving this flexibility depends on thoughtful prompt engineering, robust schema design, and adaptation to different LLM capabilities. As models evolve, so do techniques for making tools more intuitive and effective.
In this section, we'll explore different approaches to tool implementation, comparing their flexibility, effectiveness, and compatibility with various LLMs.
My preferred approach is to dynamically and deterministically construct a Cypher query and execute it against the database. This method ensures consistent and predictable query generation while maintaining implementation flexibility. By structuring queries this way, we reinforce the semantic layer, allowing user inputs to be seamlessly translated into database retrievals. This keeps the LLM focused on retrieving relevant information rather than understanding the underlying data model.
Our tool is intended to identify relevant contracts, so we need to provide the LLM with options to search contracts based on various attributes. The input description is again provided as a Pydantic object.
class ContractInput(BaseModel):
    min_effective_date: Optional[str] = Field(
        None, description="Earliest contract effective date (YYYY-MM-DD)"
    )
    max_effective_date: Optional[str] = Field(
        None, description="Latest contract effective date (YYYY-MM-DD)"
    )
    min_end_date: Optional[str] = Field(
        None, description="Earliest contract end date (YYYY-MM-DD)"
    )
    max_end_date: Optional[str] = Field(
        None, description="Latest contract end date (YYYY-MM-DD)"
    )
    contract_type: Optional[str] = Field(
        None, description=f"Contract type; valid types: {CONTRACT_TYPES}"
    )
    parties: Optional[List[str]] = Field(
        None, description="List of parties involved in the contract"
    )
    summary_search: Optional[str] = Field(
        None, description="Inspect summary of the contract"
    )
    country: Optional[str] = Field(
        None, description="Country where the contract applies. Use the two-letter ISO standard."
    )
    active: Optional[bool] = Field(None, description="Whether the contract is active")
    monetary_value: Optional[MonetaryValue] = Field(
        None, description="The total amount or value of a contract"
    )
With LLM tools, attributes can take various forms depending on their purpose. Some fields are simple strings, such as contract_type and country, which store single values. Others, like parties, are lists of strings, allowing multiple entries (e.g., multiple entities involved in a contract).
Beyond basic data types, attributes can also represent complex objects. For example, monetary_value uses a MonetaryValue object, which includes structured data such as the value and the comparison operator. While attributes with nested objects offer a clear and structured representation of data, models tend to struggle to handle them effectively, so we should keep them simple.
As part of this project, we're experimenting with an additional cypher_aggregation attribute, providing the LLM with greater flexibility for scenarios that require specific filtering or aggregation.
cypher_aggregation: Optional[str] = Field(
    None,
    description="""Custom Cypher statement for advanced aggregations and analytics.
This will be appended to the base query:
```
MATCH (c:Contract)
WITH c, summary, contract_type, contract_scope, effective_date, end_date, parties, active, monetary_value, contract_id, countries
```
Examples:
1. Count contracts by type:
```
RETURN contract_type, count(*) AS count ORDER BY count DESC
```
2. Calculate average contract duration by type:
```
WITH contract_type, effective_date, end_date
WHERE effective_date IS NOT NULL AND end_date IS NOT NULL
WITH contract_type, duration.between(effective_date, end_date).days AS duration
RETURN contract_type, avg(duration) AS avg_duration ORDER BY avg_duration DESC
```
3. Calculate contracts per effective date year:
```
RETURN effective_date.year AS year, count(*) AS count ORDER BY year
```
4. Count the party with the highest number of active contracts:
```
UNWIND parties AS party
WITH party.name AS party_name, active, count(*) AS contract_count
WHERE active = true
RETURN party_name, contract_count
ORDER BY contract_count DESC
LIMIT 1
```
""",
)
The cypher_aggregation attribute allows LLMs to define custom Cypher statements for advanced aggregations and analytics. It extends the base query by appending question-specific aggregation logic, enabling flexible filtering and computation.
This feature supports use cases such as counting contracts by type, calculating average contract duration, analyzing contract distributions over time, and identifying key parties based on contract activity. By leveraging this attribute, the LLM can dynamically generate insights tailored to specific analytical needs without requiring predefined query structures.
While this flexibility is valuable, it should be carefully evaluated, as increased adaptability comes at the cost of reduced consistency and robustness due to the added complexity of the operation.
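One way to mitigate that robustness risk is a guardrail that rejects aggregation snippets containing write or procedure clauses before they reach the database. This is an illustrative sketch under the assumption that a token-level blocklist is acceptable, not an exhaustive Cypher sanitizer:

```python
import re

# Clauses we never want an LLM-supplied aggregation fragment to contain
FORBIDDEN = {"CREATE", "MERGE", "DELETE", "SET", "REMOVE", "DROP", "CALL"}


def is_safe_aggregation(snippet: str) -> bool:
    """Reject Cypher fragments that contain write or procedure clauses."""
    tokens = re.findall(r"[A-Za-z_]+", snippet.upper())
    return not any(tok in FORBIDDEN for tok in tokens)


print(is_safe_aggregation("RETURN contract_type, count(*) AS count"))  # True
print(is_safe_aggregation("MATCH (n) DETACH DELETE n"))                # False
```

Running the query under a read-only database user is the stronger complement to any client-side check like this.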
We must clearly define the function's name and description when presenting it to the LLM. A well-structured description helps guide the model in using the function correctly, ensuring it understands its purpose, expected inputs, and outputs. This reduces ambiguity and improves the LLM's ability to generate meaningful and reliable queries.
class ContractSearchTool(BaseTool):
    name: str = "ContractSearch"
    description: str = (
        "useful for when you need to answer questions related to any contracts"
    )
    args_schema: Type[BaseModel] = ContractInput
Finally, we need to implement a function that processes the given inputs, constructs the corresponding Cypher statement, and executes it efficiently.
The core logic of the function centers on constructing the Cypher statement. We begin by matching the contract as the foundation of the query.
cypher_statement = "MATCH (c:Contract) "
Next, we need to implement the function that processes the input parameters. In this example, we primarily use attributes to filter contracts based on the given criteria.
Simple property filtering
For example, the contract_type attribute is used to perform simple node property filtering.
if contract_type:
    filters.append("c.contract_type = $contract_type")
    params["contract_type"] = contract_type
This code adds a Cypher filter for contract_type while using query parameters for values to prevent query injection security issues.
Since the possible contract type values are provided in the attribute description
contract_type: Optional[str] = Field(
    None, description=f"Contract type; valid types: {CONTRACT_TYPES}"
)
we don't have to worry about mapping values from input to valid contract types, as the LLM will handle that.
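Once all inputs are processed, the accumulated filter fragments and parameters are joined into a single parameterized statement. A minimal sketch of that assembly, with a hypothetical helper name and only two of the filters for brevity:

```python
def build_contract_query(contract_type=None, min_effective_date=None):
    """Assemble a parameterized Cypher statement from optional filters."""
    filters, params = [], {}
    if contract_type:
        filters.append("c.contract_type = $contract_type")
        params["contract_type"] = contract_type
    if min_effective_date:
        filters.append("c.effective_date >= date($min_effective_date)")
        params["min_effective_date"] = min_effective_date
    cypher = "MATCH (c:Contract) "
    if filters:
        # All conditions are combined with AND; values travel as parameters
        cypher += "WHERE " + " AND ".join(filters) + " "
    return cypher, params


cypher, params = build_contract_query(contract_type="Service")
print(cypher)
print(params)
```

The statement and parameter dictionary would then be handed to the Neo4j driver together, so no user-supplied value is ever spliced into the query text.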
Inferred property filtering
We're building tools for an LLM to interact with a knowledge graph, where the tools serve as an abstraction layer over structured queries. A key feature is the ability to use inferred properties at runtime, similar to an ontology but dynamically computed.
if active is not None:
    operator = ">=" if active else "<"
    filters.append(f"c.end_date {operator} date()")
Here, active acts as a runtime classification, determining whether a contract is ongoing (>= date()) or expired (< date()). This logic extends structured KG queries by computing properties only when needed, enabling more flexible LLM reasoning. By handling logic like this inside tools, we ensure the LLM interacts with simplified, intuitive operations, keeping it focused on reasoning rather than query formulation.
Neighbor filtering
Sometimes filtering depends on neighboring nodes, such as limiting results to contracts involving specific parties. The parties attribute is an optional list, and when provided, it ensures only contracts linked to those entities are considered:
if parties:
    parties_filter = []
    for i, party in enumerate(parties):
        party_param_name = f"party_{i}"
        parties_filter.append(
            f"""EXISTS {{
                MATCH (c)<-[:PARTY_TO]-(party)
                WHERE toLower(party.name) CONTAINS ${party_param_name}
            }}"""
        )
        params[party_param_name] = party.lower()
This code filters contracts based on their associated parties, treating the logic as AND: all specified conditions must be met for a contract to be included. It iterates through the provided parties list and constructs a query where each party condition must hold.
For each party, a unique parameter name is generated to avoid conflicts. The EXISTS clause ensures that the contract has a PARTY_TO relationship to a party whose name contains the specified value. The name is converted to lowercase to allow case-insensitive matching. Each party condition is added separately, enforcing an implicit AND between them.
If more complex logic were needed, such as supporting OR conditions or allowing different matching criteria, the input would need to change. Instead of a simple list of party names, a structured input format specifying operators would be required.
Additionally, we could implement a party-matching method that tolerates minor typos, improving the user experience by handling variations in spelling and formatting.
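The per-party parameter naming can be isolated into a small helper to make the implicit-AND behavior easy to test; the parties list here is a hypothetical input:

```python
def build_party_filters(parties):
    """One EXISTS sub-clause per party, each with a unique parameter name (implicit AND)."""
    filters, params = [], {}
    for i, party in enumerate(parties):
        param = f"party_{i}"
        filters.append(
            f"EXISTS {{ MATCH (c)<-[:PARTY_TO]-(p) WHERE toLower(p.name) CONTAINS ${param} }}"
        )
        params[param] = party.lower()
    return filters, params


filters, params = build_party_filters(["Neo4j", "Acme"])
print(params)  # {'party_0': 'neo4j', 'party_1': 'acme'}
```

Because every sub-clause gets its own parameter, two parties never collide, and joining the fragments with AND yields the "contract must involve all listed parties" semantics described above.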
Custom operator filtering
To add more flexibility, we can introduce an operator object as a nested attribute, allowing more control over filtering logic. Instead of hardcoding comparisons, we define an enumeration for operators and use it dynamically.
For example, with monetary values, a contract might need to be filtered based on whether its total amount is greater than, less than, or exactly equal to a specified value. Instead of assuming a fixed comparison logic, we define an enum that represents the possible operators:
class NumberOperator(str, Enum):
    EQUALS = "="
    GREATER_THAN = ">"
    LESS_THAN = "<"


class MonetaryValue(BaseModel):
    """The total amount or value of a contract"""

    value: float
    operator: NumberOperator


if monetary_value:
    filters.append(f"c.total_amount {monetary_value.operator.value} $total_value")
    params["total_value"] = monetary_value.value
This approach makes the system more expressive. Instead of rigid filtering rules, the tool interface allows the LLM to specify not just a value but how it should be compared, making it easier to handle a broader range of queries while keeping the LLM's interaction simple and declarative.
Some LLMs struggle with nested objects as inputs, making it harder to handle structured operator-based filtering. Adding a between operator introduces additional complexity, since it requires two separate values, which can lead to ambiguity in parsing and input validation.
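Assuming the model emits one of the enum values, the filter construction reduces to interpolating the operator symbol and parameterizing the value. A self-contained sketch using only the standard library (plain attributes instead of the Pydantic model, for brevity):

```python
from enum import Enum


class NumberOperator(str, Enum):
    EQUALS = "="
    GREATER_THAN = ">"
    LESS_THAN = "<"


def monetary_filter(value: float, operator: NumberOperator):
    """Build the comparison fragment and its query parameter."""
    return f"c.total_amount {operator.value} $total_value", {"total_value": value}


fragment, params = monetary_filter(1_000_000, NumberOperator.GREATER_THAN)
print(fragment)  # c.total_amount > $total_value
```

Only the operator symbol, drawn from a closed enum, lands in the query text; the numeric value still travels as a parameter, so the injection-safety property from earlier is preserved.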
Min and max attributes
To keep things simpler, I tend to gravitate toward using min and max attributes for dates, as this naturally supports range filtering and makes the between logic straightforward.
if min_effective_date:
    filters.append("c.effective_date >= date($min_effective_date)")
    params["min_effective_date"] = min_effective_date
if max_effective_date:
    filters.append("c.effective_date <= date($max_effective_date)")
    params["max_effective_date"] = max_effective_date
This function filters contracts based on an effective date range by adding an optional lower and upper bound condition when min_effective_date and max_effective_date are provided, ensuring that only contracts within the specified date range are included.
Semantic search
An attribute can also be used for semantic search. Instead of relying on a vector index upfront, we first apply structured filters, like date ranges, monetary values, or parties, to narrow down the candidate set, and then perform vector search over this filtered subset to rank results by semantic similarity.
if summary_search:
    cypher_statement += (
        "WITH c, vector.similarity.cosine(c.embedding, $embedding) "
        "AS score ORDER BY score DESC WITH c, score WHERE score > 0.9 "
    )  # Define a threshold limit
    params["embedding"] = embeddings.embed_query(summary_search)
else:  # Else we sort by latest
    cypher_statement += "WITH c ORDER BY c.effective_date DESC "
This code applies semantic search when summary_search is provided by computing cosine similarity between the contract's embedding and the query embedding, ordering results by relevance, and filtering out low-scoring matches with a threshold of 0.9. Otherwise, it defaults to sorting contracts by the most recent effective_date.
Dynamic queries
The cypher aggregation attribute is an experiment I wanted to test that gives the LLM a degree of partial text2cypher capability, allowing it to dynamically generate aggregations after the initial structured filtering. Instead of predefining every possible aggregation, this approach lets the LLM specify calculations like counts, averages, or grouped summaries on demand, making queries more flexible and expressive. However, since this shifts more query logic to the LLM, ensuring all generated queries work correctly becomes challenging, as malformed or incompatible Cypher statements can break execution. This trade-off between flexibility and reliability is a key consideration in designing the system.
```python
if cypher_aggregation:
    cypher_statement += """WITH c, c.summary AS summary, c.contract_type AS contract_type,
    c.contract_scope AS contract_scope, c.effective_date AS effective_date, c.end_date AS end_date,
    [(c)<-[r:PARTY_TO]-(party) | {party: party.name, role: r.role}] AS parties,
    c.end_date >= date() AS active, c.total_amount AS monetary_value, c.file_id AS contract_id,
    apoc.coll.toSet([(c)<-[:PARTY_TO]-(party)-[:LOCATED_IN]->(country) | country.name]) AS countries """
    cypher_statement += cypher_aggregation
```
If no Cypher aggregation is provided, we return the total count of matched contracts along with only five example contracts to avoid overwhelming the prompt. Handling excessive rows is crucial, as an LLM struggling with a massive result set isn't useful. Moreover, an LLM generating answers with 100 contract titles isn't a good user experience either.
cypher_statement += """WITH acquire(c) AS nodes
RETURN {
total_count_of_contracts: dimension(nodes),
example_values: [
el in nodes[..5] |
{abstract:el.abstract, contract_type:el.contract_type,
contract_scope: el.contract_scope, file_id: el.file_id,
effective_date: el.effective_date, end_date: el.end_date,
monetary_value: el.total_amount, contract_id: el.file_id,
events: [(el)<-[r:PARTY_TO]-(occasion) | {title: occasion.title, position: r.position}],
nations: apoc.coll.toSet([(el)<-[:PARTY_TO]-()-[:LOCATED_IN]->(nation) | nation.title])}
]
} AS output"""
This Cypher statement collects all matching contracts into a list, returning the total count and up to five example contracts with key attributes, including summary, type, scope, dates, monetary value, associated parties with roles, and unique country locations.
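To make that return value easy for the LLM to consume, the record can be flattened into short text before it is placed in the prompt. This is a hypothetical formatting helper, not code from the repository:

```python
def format_tool_output(record: dict) -> str:
    """Hypothetical helper: render the tool output (total count plus up to
    five example contracts) as compact text for the LLM prompt."""
    lines = [f"Total contracts matching filters: {record['total_count_of_contracts']}"]
    for ex in record["example_values"]:
        parties = ", ".join(f"{p['name']} ({p['role']})" for p in ex.get("parties", []))
        lines.append(
            f"- {ex['contract_type']} ({ex['effective_date']} to {ex['end_date']}): {parties}"
        )
    return "\n".join(lines)
```

Keeping the total count on the first line makes it hard for the model to confuse "five examples shown" with "five contracts found".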
Now that our contract search tool is built, we hand it off to the LLM and, just like that, we have agentic GraphRAG implemented.
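In the repository this hand-off is done through a LangGraph agent; the stub below is only a framework-agnostic sketch of that control flow, with the model abstracted as a plain callable that either requests a tool call or returns a final answer (all names are illustrative):

```python
# Framework-agnostic sketch of the agent loop. In the real app a LangGraph
# agent plays the role of `model`; here it is any callable over messages.
def run_agent(model, tools: dict, question: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        reply = model(messages)
        if "tool_call" in reply:               # model wants to query the graph
            name, args = reply["tool_call"]
            result = tools[name](**args)       # e.g. the contract search tool
            messages.append({"role": "tool", "name": name, "content": result})
        else:
            return reply["content"]            # final natural-language answer
    return "Step limit reached"
```

The step limit is a cheap safeguard against the model looping on tool calls without ever producing an answer.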
Agent Benchmark
If you're serious about implementing agentic GraphRAG, you need an evaluation dataset, not just as a benchmark but as a foundation for the entire project. A well-constructed dataset helps define the scope of what the system should handle, ensuring that initial development aligns with real-world use cases. Beyond that, it becomes an invaluable tool for evaluating performance, allowing you to measure how well the LLM interacts with the graph, retrieves information, and applies reasoning. It's also essential for prompt engineering optimizations, letting you iteratively refine queries, tool use, and response formatting with clear feedback rather than guesswork. Without a structured dataset, you're flying blind, making improvements harder to quantify and inconsistencies more difficult to catch.
The code for the benchmark is available on GitHub.
I've compiled a list of 22 questions which we'll use to evaluate the system. Additionally, we're going to introduce a new metric called `answer_satisfaction`, for which we will provide a custom prompt.
```python
answer_satisfaction = AspectCritic(
    name="answer_satisfaction",
    definition="""You will evaluate an ANSWER to a legal QUESTION based on a provided SOLUTION.

Rate the answer on a scale from 0 to 1, where:
- 0 = incorrect, substantially incomplete, or misleading
- 1 = correct and sufficiently complete

Consider these evaluation criteria:
1. Factual correctness is paramount - the answer must not contradict the solution
2. The answer must address the core elements of the solution
3. Additional relevant information beyond the solution is acceptable and may enhance the answer
4. Technical legal terminology should be used correctly if present in the solution
5. For quantitative legal analyses, accurate figures must be provided

+ fewshots
""",
)
```
Many questions can return a large amount of information. For example, asking for contracts signed before 2020 might yield hundreds of results. Since the LLM receives both the total count and a few example entries, our evaluation should focus on the total count, rather than on which specific examples the LLM chooses to show.
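One simple way to score such count-style questions is to check only that the expected total appears in the answer. This helper is illustrative, not taken from the benchmark code:

```python
import re

def counts_match(answer: str, expected_count: int) -> bool:
    """Illustrative count-focused check: pass if the answer states the
    expected total anywhere, regardless of which examples were shown."""
    numbers = [int(n.replace(",", "")) for n in re.findall(r"\d[\d,]*", answer)]
    return expected_count in numbers
```

A check like this deliberately ignores which five example contracts the model decided to surface.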

The results indicate that all evaluated models (Gemini 1.5 Pro, Gemini 2.0 Flash, and GPT-4o) perform similarly well for most tool calls, with GPT-4o slightly outperforming the Gemini models (0.82 vs. 0.77). The noticeable difference emerges primarily when partial `text2cypher` is used, particularly for various aggregation operations.
Note that this is only 22 fairly simple questions, so we didn't really explore the reasoning capabilities of LLMs.
Additionally, I've seen projects where accuracy can be improved significantly by leveraging Python for aggregations, as LLMs typically handle Python code generation and execution better than generating complex Cypher queries directly.
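For example, rather than asking the model to emit a Cypher `avg(...)` clause, the already-filtered rows could be handed to generated Python code. A minimal sketch of that pattern, with assumed field names matching the tool output above:

```python
from collections import defaultdict
from statistics import mean

def average_value_by_type(contracts: list) -> dict:
    """Sketch: aggregate filtered contract rows in Python instead of Cypher,
    e.g. average monetary value per contract type (field names assumed)."""
    groups = defaultdict(list)
    for c in contracts:
        if c.get("monetary_value") is not None:  # skip contracts without a value
            groups[c["contract_type"]].append(c["monetary_value"])
    return {ctype: mean(values) for ctype, values in groups.items()}
```

Since the structured filtering already happened in the graph, the Python side only ever sees a small, well-typed result set, which is much harder to break than free-form Cypher.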
Web application
We've also built a simple React web application, powered by LangGraph hosted on FastAPI, which streams responses directly to the frontend. Special thanks to Anej Gorkic for creating the web app.
You can launch the entire stack with the following command:
docker compose up
Then navigate to localhost:5173

Summary
As LLMs gain stronger reasoning capabilities, they, when paired with the right tools, can become powerful agents for navigating complex domains like legal contracts. In this post, we've only scratched the surface, focusing on core contract attributes while barely touching the rich variety of clauses found in real-world agreements. There's significant room for growth, from expanding clause coverage to refining tool design and interaction strategies.
The code is available on GitHub.
Images
All images in this post were created by the author.