LLMs have entered the world of computer science at a record pace. LLMs are powerful models capable of effectively performing a wide variety of tasks. However, LLM outputs are stochastic, making them unreliable. In this article, I discuss how you can ensure reliability in your LLM applications by properly prompting the model and handling the output.
You can also read my articles on Attending NVIDIA GTC Paris 2025 and Creating Powerful Embeddings for Machine Learning.
Motivation
My motivation for this article is that I am constantly developing new applications using LLMs. LLMs are generalized tools that can be applied to most text-dependent tasks such as classification, summarization, information extraction, and much more. Furthermore, the rise of vision language models also lets us handle images similarly to how we handle text.
I often encounter the problem that my LLM applications are inconsistent. Sometimes the LLM does not respond in the desired format, or I am unable to properly parse the LLM response. This is a big problem when you are working in a production setting and are fully dependent on consistency in your application. I will thus discuss the techniques I use to ensure reliability for my applications in a production setting.
Ensuring output consistency
Markup tags
To ensure output consistency, I use a technique where my LLM answers in markup tags. I use a system prompt like:
prompt = f"""
Classify the text into "Cat" or "Dog"
Provide your response in <answer> </answer> tags
"""
And the model will almost always respond with:
<answer>Cat</answer>
or
<answer>Dog</answer>
You can now easily parse out the response using the following code:
def _parse_response(response: str):
    return response.split("<answer>")[1].split("</answer>")[0]
The reason using markup tags works so well is that this is how the models are trained to behave. When OpenAI, Qwen, Google, and others train these models, they use markup tags. The models are thus highly effective at utilizing these tags and will, in almost all cases, adhere to the expected response format.
For example, with reasoning models, which have been on the rise lately, the models first do their thinking enclosed in <think> </think> tags before providing the final answer.
Furthermore, I also try to use as many markup tags as possible elsewhere in my prompts. For example, if I am providing few-shot examples to my model, I will do something like:
prompt = f"""
Classify the text into "Cat" or "Dog"
Provide your response in <answer> </answer> tags

<examples>
This is an image showing a cat -> <answer>Cat</answer>
This is an image showing a dog -> <answer>Dog</answer>
</examples>
"""
I do two things that help the model perform here:
- I provide the examples in <examples> tags.
- In my examples, I adhere to my own expected response format, using the <answer> tags.
Using markup tags, you can thus ensure a high level of output consistency from your LLM.
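As a minimal end-to-end sketch of this approach, assuming the OpenAI Python client, you can combine the prompt with the parsing function from above (the model name and the example input are just placeholders):

from openai import OpenAI

client = OpenAI()

def _parse_response(response: str) -> str:
    # Extract the content between the <answer> tags
    return response.split("<answer>")[1].split("</answer>")[0]

def classify(text: str) -> str:
    prompt = f"""
Classify the text into "Cat" or "Dog"
Provide your response in <answer> </answer> tags

<text>
{text}
</text>
"""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return _parse_response(resp.choices[0].message.content)

print(classify("A small animal that purrs and chases mice"))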
Output validation
Pydantic is a tool you can use to validate the output of your LLMs. You can define types and validate that the output of the model adheres to the type we expect. For example, you can follow the example below, based on this article:
from pydantic import BaseModel
from openai import OpenAI

client = OpenAI()

class Profile(BaseModel):
    name: str
    email: str
    phone: str

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": "Return the `name`, `email`, and `phone` of user {user} in a json object."
        },
    ],
)

Profile.model_validate_json(resp.choices[0].message.content)
As you can see, we prompt GPT to respond with a JSON object, and we then run Pydantic to ensure the response is as we expect.
I would also like to note that sometimes it is easier to simply create your own output validation function. In the last example, the only requirements for the response object are essentially that it contains the keys name, email, and phone, and that all of these are of the string type. You can validate this in Python with a function:
def validate_output(output: dict):
    assert "name" in output and isinstance(output["name"], str)
    assert "email" in output and isinstance(output["email"], str)
    assert "phone" in output and isinstance(output["phone"], str)
With this, you do not have to install any packages, and in a lot of cases, it is easier to set up.
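As a quick usage sketch, assuming the raw LLM response string is held in a hypothetical raw_response variable, you would first parse the JSON and then validate it:

import json

# Example value standing in for the raw LLM response
raw_response = '{"name": "Ada Lovelace", "email": "ada@example.com", "phone": "+44 123 456 789"}'

# Parse the JSON string returned by the LLM into a dict
parsed = json.loads(raw_response)

# Raises an AssertionError if a key is missing or has the wrong type
validate_output(parsed)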
Tweaking the system prompt
You can also make a lot of other tweaks to your system prompt to ensure a more reliable output. I always recommend making your prompt as structured as possible, using:
- Markup tags, as mentioned earlier
- Lists, such as the one I am writing in here
In general, you should also always ensure clear instructions. You can use the following test to assess the quality of your prompt:

If you gave the prompt to another human, who had never seen the task before and had no prior knowledge of it, would that human be able to perform the task effectively?

If you cannot have a human do the task, you usually cannot expect an AI to do it (at least for now).
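For illustration, a structured prompt following these guidelines could look something like the sketch below (the task, the tags, and the few-shot content are purely illustrative, and message is assumed to be a variable holding the input text):

prompt = f"""
<task>
Classify the customer message into one of: "Complaint", "Question", "Feedback".
</task>

<instructions>
- Read the message carefully.
- Choose exactly one category.
- Provide your response in <answer> </answer> tags.
</instructions>

<examples>
My package never arrived -> <answer>Complaint</answer>
Do you ship to Norway? -> <answer>Question</answer>
</examples>

<message>
{message}
</message>
"""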
Handling errors
Errors are inevitable when dealing with LLMs. If you perform enough API calls, it is almost certain that at some point the response will not be in your required format, or some other issue will occur.

In these scenarios, it is important that you have a robust application equipped to handle such errors. I use the following techniques to handle errors:
- Retry mechanism
- Increase the temperature
- Have backup LLMs
Now, let me elaborate on each point.
Exponential backoff retry mechanism
It is important to have a retry mechanism in place, considering that a lot of issues can occur when making an API call. You might encounter issues such as rate limiting, an incorrect output format, or a slow response. In these scenarios, you should make sure to wrap the LLM call in a try-except block and retry. Usually, it is also smart to use an exponential backoff, especially for rate-limiting errors. The reason for this is to ensure you wait long enough to avoid further rate-limiting issues.
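A minimal sketch of such a retry loop, assuming a hypothetical call_llm function that wraps your LLM API call and raises an exception on failure, could look like this:

import time

def call_llm_with_retries(prompt: str, max_retries: int = 5) -> str:
    for attempt in range(max_retries):
        try:
            return call_llm(prompt)  # hypothetical wrapper around your LLM API call
        except Exception:
            if attempt == max_retries - 1:
                raise  # give up after the final attempt
            # Exponential backoff: wait 1s, 2s, 4s, 8s, ... between attempts
            time.sleep(2 ** attempt)

Libraries such as tenacity offer similar retry-with-backoff behavior out of the box if you prefer not to write the loop yourself.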
Temperature increase
I also sometimes recommend increasing the temperature a bit. If you set the temperature to 0, you tell the model to behave deterministically. However, sometimes this can have a negative effect.

For example, say you have an input on which the model failed to respond in the proper output format. If you retry it using a temperature of 0, you are likely to simply experience the same issue. I thus recommend setting the temperature a bit higher, for example 0.1, to ensure some stochasticity in the model, while still keeping its outputs relatively deterministic.
This is the same logic that a lot of agents use: a higher temperature. They need to avoid getting stuck in a loop, and a higher temperature can help them avoid repetitive errors.
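Building on the retry sketch above, one way to combine the two ideas (again assuming a hypothetical call_llm wrapper, now with a temperature parameter) is to start deterministically and add a little randomness on each retry:

import time

def call_llm_with_retries(prompt: str, max_retries: int = 5) -> str:
    for attempt in range(max_retries):
        try:
            # First attempt is deterministic; retries get slightly more randomness
            temperature = min(0.1 * attempt, 1.0)
            return call_llm(prompt, temperature=temperature)  # hypothetical wrapper
        except Exception:
            if attempt == max_retries - 1:
                raise
            time.sleep(2 ** attempt)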
Backup LLMs
Another powerful method to deal with errors is to have backup LLMs. I recommend using a chain of LLM providers for all of your API calls. For example, you first try OpenAI; if that fails, you use Gemini; and if that fails, you can use Claude.
This ensures reliability in the event of provider-specific issues. These could be issues such as:
- The server is down (for example, if OpenAI's API is not accessible for a period of time)
- Filtering (sometimes, an LLM provider will refuse to respond to your request if it believes the request violates its jailbreak policies or content moderation rules)
In general, it is simply good practice not to be fully dependent on one provider.
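A minimal sketch of such a provider fallback chain, assuming hypothetical call_openai, call_gemini, and call_claude wrapper functions that each raise an exception on failure, could look like this:

def call_llm_with_fallback(prompt: str) -> str:
    # Try each provider in order and fall back to the next one on failure
    providers = [call_openai, call_gemini, call_claude]  # hypothetical wrappers
    last_error = None
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as error:
            last_error = error  # remember the error and try the next provider
    raise RuntimeError("All LLM providers failed") from last_error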
Conclusion
In this article, I have discussed how you can ensure reliability in your LLM application. LLM applications are inherently stochastic since you cannot directly control the output of an LLM. It is thus important to make sure you have proper policies in place, both to minimize the errors that occur and to handle the errors when they do occur.

I have discussed the following approaches to minimize and handle errors:
- Markup tags
- Output validation
- Tweaking the system prompt
- Retry mechanism
- Increasing the temperature
- Backup LLMs
If you combine these techniques in your application, you can achieve both a powerful and robust LLM application.
👉 Follow me on socials:
🧑💻 Get in touch
🌐 Personal Blog
🔗 LinkedIn
🐦 X / Twitter
✍️ Medium
🧵 Threads