Sunnyvale, CA – May 8, 2025 – Rafay Systems, a cloud-native and AI infrastructure orchestration and management company, announced general availability of the company's Serverless Inference offering, a token-metered API for running open-source and privately trained or tuned LLMs.
The company said many NVIDIA Cloud Partners (NCPs) and GPU Clouds are already leveraging the Rafay Platform to deliver a multi-tenant, Platform-as-a-Service experience to their customers, complete with self-service consumption of compute and AI applications. These NCPs and GPU Clouds can now deliver Serverless Inference as a turnkey service at no additional cost, enabling their customers to build and scale AI applications fast, without having to deal with the cost and complexity of building automation, governance, and controls for GPU-based infrastructure.
The global AI inference market is expected to grow to $106 billion in 2025, and to $254 billion by 2030. Rafay's Serverless Inference empowers GPU Cloud Providers (GPU Clouds) and NCPs to tap into the booming GenAI market by eliminating key adoption barriers: automated provisioning and segmentation of complex infrastructure, developer self-service, rapidly launching new GenAI models as a service, generating billing data for on-demand usage, and more.
"Having spent the last 12 months experimenting with GenAI, many enterprises are now focused on building agentic AI applications that augment and enhance their business offerings. The ability to rapidly consume GenAI models through inference endpoints is key to faster development of GenAI capabilities. This is where Rafay's NCP and GPU Cloud partners have a material advantage," said Haseeb Budhani, CEO and co-founder of Rafay Systems.
"With our new Serverless Inference offering, available for free to NCPs and GPU Clouds, our customers and partners can now deliver an Amazon Bedrock-like service to their customers, enabling access to the latest GenAI models in a scalable, secure, and cost-effective manner. Developers and enterprises can now integrate GenAI workflows into their applications in minutes, not months, without the pain of infrastructure management. This offering advances our company's vision to help NCPs and GPU Clouds evolve from operating GPU-as-a-Service businesses to AI-as-a-Service businesses."
By offering Serverless Inference as an on-demand capability to downstream customers, Rafay helps NCPs and GPU Clouds address a key gap in the market. Rafay's Serverless Inference offering provides the following key capabilities to NCPs and GPU Clouds:
- Seamless developer integration: OpenAI-compatible APIs require zero code migration for existing applications, with secure RESTful and streaming-ready endpoints that dramatically accelerate time-to-value for end customers.
- Intelligent infrastructure management: Auto-scaling GPU nodes with right-sized model allocation capabilities dynamically optimize resources across multi-tenant and dedicated isolation options, eliminating over-provisioning while maintaining strict performance SLAs.
- Built-in metering and billing: Token-based and time-based usage tracking for both input and output provides granular consumption analytics, while integrating with existing billing platforms through comprehensive metering APIs and enabling transparent, consumption-based pricing models.
- Enterprise-grade security and governance: Comprehensive security through HTTPS-only API endpoints, rotating bearer token authentication, detailed access logging, and configurable token quotas per team, business unit, or application fulfills enterprise compliance requirements.
- Observability, storage, and performance monitoring: End-to-end visibility with logs and metrics archived in the provider's own storage namespace, support for backends like MinIO (a high-performance, AWS S3-compatible object storage system) and Weka (a high-performance, AI-native data platform), as well as centralized credential management, ensures full infrastructure and model performance transparency.
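Because the endpoints are OpenAI-compatible, an existing application typically only swaps its base URL and bearer token. The sketch below, using a hypothetical provider URL, model name, and token (none of which come from this announcement), shows what such a chat-completions request would look like:

```python
import json

# Hypothetical values: a provider running Serverless Inference would
# publish its own endpoint, model catalog, and bearer tokens.
BASE_URL = "https://inference.example-gpucloud.com/v1"
API_TOKEN = "YOUR_BEARER_TOKEN"

def build_chat_request(model: str, prompt: str, stream: bool = False):
    """Assemble the URL, headers, and JSON body for an
    OpenAI-compatible /chat/completions call."""
    url = f"{BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {API_TOKEN}",  # rotating bearer token auth
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,  # streaming-ready endpoints can return tokens incrementally
    }
    return url, headers, json.dumps(body)

url, headers, payload = build_chat_request("llama-3-8b-instruct", "Hello")
```

In practice this request would be sent with any HTTP client, or the official OpenAI SDK pointed at the provider's base URL, which is why no application code needs to be migrated.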
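The token-based metering described above feeds consumption-based pricing. As a minimal sketch, assuming per-1,000-token rates for input and output (the model name and rates below are illustrative, not Rafay's actual pricing), a provider could turn metered counts into a line-item charge like this:

```python
# Illustrative USD rates per 1,000 tokens; a real provider would set
# its own rate card and pull counts from the metering APIs.
RATES = {
    "llama-3-8b-instruct": {"input": 0.05, "output": 0.08},
}

def charge_for_usage(model: str, input_tokens: int, output_tokens: int) -> float:
    """Compute a consumption-based charge from metered token counts."""
    rate = RATES[model]
    cost = (input_tokens / 1000) * rate["input"] + (output_tokens / 1000) * rate["output"]
    return round(cost, 6)

# e.g. 12,000 prompt tokens and 4,000 completion tokens in a billing period
cost = charge_for_usage("llama-3-8b-instruct", 12_000, 4_000)
# 12 x 0.05 + 4 x 0.08 = 0.92
```

Time-based tracking works the same way, with GPU-hours in place of token counts.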
Rafay's Serverless Inference offering is available today to all customers and partners using the Rafay Platform to deliver multi-tenant, GPU- and CPU-based infrastructure. The company is also set to roll out fine-tuning capabilities shortly. These new additions are designed to help NCPs and GPU Clouds rapidly deliver high-margin, production-ready AI services while removing complexity.