
    llama.cpp: Writing A Simple C++ Inference Program for GGUF LLM Models | by Shubham Panchal | Jan, 2025

    Exploring llama.cpp internals and a basic chat program flow

    Towards Data Science


    Photo by Mathew Schwartz on Unsplash

    llama.cpp has revolutionized the space of LLM inference by means of its wide adoption and ease of use. It has enabled enterprises and individual developers to deploy LLMs on devices ranging from SBCs to multi-GPU clusters. Although working with llama.cpp has been made easy by its language bindings, working in C/C++ can be a viable choice for performance-sensitive or resource-constrained scenarios.

    This tutorial aims to give readers a detailed look at how LLM inference is performed using low-level functions coming directly from llama.cpp. We discuss the program flow and llama.cpp constructs, and have a simple chat at the end.

    The C++ code we write in this blog is also used in SmolChat, a native Android application that allows users to interact with LLMs/SLMs in a chat interface, completely on-device. Specifically, the LLMInference class we define ahead is used with a JNI binding to execute GGUF models.

    The code for this tutorial can be found here:

    The code is also derived from the official simple-chat example from llama.cpp.

    1. About llama.cpp
    2. Setup
    3. Loading the Model
    4. Performing Inference
    5. Good Habits: Writing a Destructor
    6. Running the Application

    llama.cpp is a C/C++ framework to infer machine learning models defined in the GGUF format on multiple execution backends. It started as a pure C/C++ implementation of the famous Llama series of LLMs from Meta, which could be inferred on Apple silicon, AVX/AVX-512, CUDA, and Arm Neon-based environments. It also includes llama-cli, a CLI tool to run GGUF LLM models, and llama-server, which executes models via HTTP requests (an OpenAI-compatible server).
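
    As a quick illustration (not something our program needs), here is roughly how these two tools are invoked once llama.cpp is built; the model path and port below are placeholders:

    # run a one-off prompt with the CLI tool
    llama-cli -m smollm2-360m-instruct-q8_0.gguf -p "Explain the GGUF format in one sentence."

    # serve the same model over an OpenAI-compatible HTTP API
    llama-server -m smollm2-360m-instruct-q8_0.gguf --port 8080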

    llama.cpp uses ggml, a low-level framework that provides the primitive functions required by deep learning models and abstracts backend implementation details from the user. Georgi Gerganov is the creator of ggml and llama.cpp.

    The repository’s README also lists wrappers built on top of llama.cpp in other programming languages. Popular tools like Ollama and LM Studio also use bindings over llama.cpp to enhance user friendliness. The project has no dependencies on other third-party libraries.

    How is llama.cpp different from PyTorch/TensorFlow?

    llama.cpp has emphasized inference of ML models since its inception, whereas PyTorch and TensorFlow are end-to-end solutions offering data processing, model training/validation, and efficient inference in a single package.

    PyTorch and TensorFlow also have their own lightweight, inference-only extensions, namely ExecuTorch and TensorFlow Lite.

    Considering only the inference phase of a model, llama.cpp is lightweight in its implementation, as it has no third-party dependencies and does not need to support an extensive set of operators or model formats. Also, as the name suggests, the project started as an efficient library to infer LLMs (the Llama models from Meta) and continues to support a wide range of open-source LLM architectures.

    Analogy: If PyTorch/TensorFlow are luxurious, power-hungry cruise ships, llama.cpp is a small, speedy motorboat. PyTorch/TF and llama.cpp have their own use-cases.

    We start our implementation in a Linux-based environment (local or WSL) with cmake and the GNU/clang toolchain installed. We will compile llama.cpp from source and add it as a shared library to our executable chat program.

    We create our project directory smol_chat with an externals directory to store the cloned llama.cpp repository.

    mkdir smol_chat
    cd smol_chat

    mkdir src
    mkdir externals
    touch CMakeLists.txt

    cd externals
    git clone --depth=1 https://github.com/ggerganov/llama.cpp

    CMakeLists.txt is where we define our build, allowing CMake to compile our C/C++ code using the default toolchain (GNU/clang) by including headers and shared libraries from externals/llama.cpp.

    cmake_minimum_required(VERSION 3.10)
    project(llama_inference)

    set(CMAKE_CXX_STANDARD 17)
    set(LLAMA_BUILD_COMMON On)
    add_subdirectory("${CMAKE_CURRENT_SOURCE_DIR}/externals/llama.cpp")

    add_executable(
        chat
        src/LLMInference.cpp src/main.cpp
    )
    target_link_libraries(
        chat
        PRIVATE
        common llama ggml
    )

    We have now defined how our project should be built by CMake. Next, we create a header file, LLMInference.h, which declares a class containing high-level functions to interact with the LLM. llama.cpp provides a C-style API, thus embedding it within a class helps us abstract/hide the inner working details.

    #ifndef LLMINFERENCE_H
    #define LLMINFERENCE_H

    #include "common.h"
    #include "llama.h"
    #include <string>
    #include <vector>

    class LLMInference {

        // llama.cpp-specific types
        llama_context* _ctx;
        llama_model* _model;
        llama_sampler* _sampler;
        llama_batch _batch;
        llama_token _currToken;

        // container to store user/assistant messages in the chat
        std::vector<llama_chat_message> _messages;
        // stores the string generated after applying
        // the chat-template to all messages in `_messages`
        std::vector<char> _formattedMessages;
        // stores the tokens for the last query
        // appended to `_messages`
        std::vector<llama_token> _promptTokens;
        int _prevLen = 0;

        // stores the complete response for the given query
        std::string _response = "";

    public:

        void loadModel(const std::string& modelPath, float minP, float temperature);

        void addChatMessage(const std::string& message, const std::string& role);

        void startCompletion(const std::string& query);

        std::string completionLoop();

        void stopCompletion();

        ~LLMInference();
    };

    #endif

    The private members declared in the header above will be used in the implementation of the public member functions described in the later sections of the blog. Let us define each of these member functions in LLMInference.cpp.

    #embody "LLMInference.h"
    #embody
    #embody

    void LLMInference::loadModel(const std::string& model_path, float min_p, float temperature) {
    // create an occasion of llama_model
    llama_model_params model_params = llama_model_default_params();
    _model = llama_load_model_from_file(model_path.knowledge(), model_params);

    if (!_model) {
    throw std::runtime_error("load_model() failed");
    }

    // create an occasion of llama_context
    llama_context_params ctx_params = llama_context_default_params();
    ctx_params.n_ctx = 0; // take context dimension from the mannequin GGUF file
    ctx_params.no_perf = true; // disable efficiency metrics
    _ctx = llama_new_context_with_model(_model, ctx_params);

    if (!_ctx) {
    throw std::runtime_error("llama_new_context_with_model() returned null");
    }

    // initialize sampler
    llama_sampler_chain_params sampler_params = llama_sampler_chain_default_params();
    sampler_params.no_perf = true; // disable efficiency metrics
    _sampler = llama_sampler_chain_init(sampler_params);
    llama_sampler_chain_add(_sampler, llama_sampler_init_min_p(min_p, 1));
    llama_sampler_chain_add(_sampler, llama_sampler_init_temp(temperature));
    llama_sampler_chain_add(_sampler, llama_sampler_init_dist(LLAMA_DEFAULT_SEED));

    _formattedMessages = std::vector(llama_n_ctx(_ctx));
    _messages.clear();
    }

    llama_load_model_from_file reads the model from the file using llama_load_model internally and populates the llama_model instance using the given llama_model_params. The user can provide the parameters, but we can get a pre-initialized default struct for them with llama_model_default_params.

    llama_context represents the execution environment for the loaded GGUF model. llama_new_context_with_model instantiates a new llama_context and prepares a backend for execution, either by reading the llama_model_params or by automatically detecting the available backends. It also initializes the K-V cache, which is important in the decoding or inference step. A backend scheduler that manages computations across multiple backends is also initialized.
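
    As a side note, llama_model_params and llama_context_params expose several other knobs. The snippet below is purely illustrative (it is not part of our chat program) and shows a few commonly tuned fields; the values are placeholders:

    // illustrative only: commonly tuned model/context parameters
    llama_model_params model_params = llama_model_default_params();
    model_params.n_gpu_layers = 99;   // offload as many layers as possible to a GPU backend, if one was compiled in

    llama_context_params ctx_params = llama_context_default_params();
    ctx_params.n_ctx = 4096;          // fix the context window instead of reading it from the GGUF file
    ctx_params.n_threads = 8;         // number of CPU threads used for decoding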

    A llama_sampler determines how we sample/choose tokens from the probability distribution derived from the outputs (logits) of the model (specifically the decoder of the LLM). LLMs assign a probability to each token present in the vocabulary, representing the chances of the token appearing next in the sequence. The temperature and min-p that we set with llama_sampler_init_temp and llama_sampler_init_min_p are two parameters controlling the token sampling process.
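
    The sampler chain is composable, so other strategies can be added if needed. A minimal sketch, assuming the greedy, top-k, and top-p initializers provided by llama.cpp's sampler API (check llama.h in your checkout for the exact signatures):

    // greedy decoding: always pick the most probable token (deterministic)
    llama_sampler* greedy = llama_sampler_chain_init(llama_sampler_chain_default_params());
    llama_sampler_chain_add(greedy, llama_sampler_init_greedy());

    // top-k + top-p (nucleus) sampling, followed by the final random draw
    llama_sampler* nucleus = llama_sampler_chain_init(llama_sampler_chain_default_params());
    llama_sampler_chain_add(nucleus, llama_sampler_init_top_k(40));
    llama_sampler_chain_add(nucleus, llama_sampler_init_top_p(0.9f, 1));
    llama_sampler_chain_add(nucleus, llama_sampler_init_temp(0.8f));
    llama_sampler_chain_add(nucleus, llama_sampler_init_dist(LLAMA_DEFAULT_SEED));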

    There are several steps involved in the inference process, which takes a text query from the user as input and returns the LLM’s response.

    1. Applying the chat template to the queries

    For an LLM, incoming messages are classified as belonging to three roles: user, assistant, and system. user and assistant messages are given by the user and the LLM, respectively, whereas system denotes a system-wide prompt that is followed across the entire conversation. Each message consists of a role and content, where content is the actual text and role is any one of the three roles.

    The system prompt is the first message of the conversation. In our code, the messages are stored as a std::vector<llama_chat_message> named _messages, where llama_chat_message is a llama.cpp struct with role and content attributes. We use the llama_chat_apply_template function from llama.cpp to apply the chat template stored in the GGUF file as metadata. We store the string, or std::vector<char>, obtained after applying the chat template in _formattedMessages.
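
    The addChatMessage helper used throughout this post is not shown in this excerpt; a minimal sketch, consistent with the destructor we write later (both strings are copied with strdup() so that _messages owns stable C-string pointers), could look like this:

    void LLMInference::addChatMessage(const std::string& message, const std::string& role) {
        // llama_chat_message stores raw `const char*` pointers, so make
        // heap copies that outlive the std::string arguments
        _messages.push_back({ strdup(role.c_str()), strdup(message.c_str()) });
    }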

    2. Tokenization

    Tokenization is the process of dividing a given text into smaller parts (tokens). Each part/token is assigned a unique integer ID, thus transforming the input text into a sequence of integers that forms the input to the LLM. llama.cpp provides the common_tokenize and llama_tokenize functions to perform tokenization, where common_tokenize returns the sequence of tokens as a std::vector<llama_token>.
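
    To make this concrete, the short snippet below (illustrative only, assuming a loaded _model and _ctx as in our class) tokenizes a string and prints each token ID next to the text piece it maps back to:

    // inspect how a string is split into token IDs
    std::vector<llama_token> tokens = common_tokenize(_model, "Hello, llama.cpp!", true, true);
    for (llama_token id : tokens) {
        std::cout << id << " -> '" << common_token_to_piece(_ctx, id, true) << "'\n";
    }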

    void LLMInference::startCompletion(const std::string& query) {
        addChatMessage(query, "user");

        // apply the chat-template
        int new_len = llama_chat_apply_template(
            _model,
            nullptr,
            _messages.data(),
            _messages.size(),
            true,
            _formattedMessages.data(),
            _formattedMessages.size()
        );
        if (new_len > (int)_formattedMessages.size()) {
            // resize the output buffer `_formattedMessages`
            // and re-apply the chat template
            _formattedMessages.resize(new_len);
            new_len = llama_chat_apply_template(_model, nullptr, _messages.data(), _messages.size(), true, _formattedMessages.data(), _formattedMessages.size());
        }
        if (new_len < 0) {
            throw std::runtime_error("llama_chat_apply_template() in LLMInference::startCompletion() failed");
        }
        std::string prompt(_formattedMessages.begin() + _prevLen, _formattedMessages.begin() + new_len);

        // tokenization
        _promptTokens = common_tokenize(_model, prompt, true, true);

        // create a llama_batch containing a single sequence
        // see llama_batch_init for more details
        _batch.token = _promptTokens.data();
        _batch.n_tokens = _promptTokens.size();
    }

    In the code, we apply the chat template and perform tokenization in the LLMInference::startCompletion method, and then create a llama_batch instance holding the final inputs for the model.
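
    Note that we fill only the token and n_tokens fields of _batch. The upstream simple-chat example builds a comparable single-sequence batch with the llama_batch_get_one helper from llama.h; if your llama.cpp version provides the two-argument form, the equivalent would be:

    // equivalent single-sequence batch built with the llama.h helper
    // (the signature may differ across llama.cpp versions; check llama.h)
    _batch = llama_batch_get_one(_promptTokens.data(), (int32_t)_promptTokens.size());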

    3. Decoding, Sampling and the KV Cache

    As highlighted earlier, LLMs generate a response by successively predicting the next token in the given sequence. LLMs are also trained to predict a special end-of-generation (EOG) token, indicating the end of the sequence of predicted tokens. The completionLoop function returns the next token in the sequence and keeps getting called until the token it returns is the EOG token.

    • Using llama_n_ctx and llama_get_kv_cache_used_cells, we determine the length of the context we have already used for storing the inputs. Currently, we exit with an error message if the length of the tokenized inputs exceeds the context size.
    • llama_decode performs a forward pass of the model, given the inputs in _batch.
    • Using the _sampler initialized in LLMInference::loadModel, we sample or choose a token as our prediction and store it in _currToken. We check if the token is an EOG token and then return "[EOG]", indicating that the text-generation loop calling LLMInference::completionLoop should be terminated. On termination, we append a new message to _messages, which is the complete _response given by the LLM with role assistant.
    • _currToken is still an integer, which is converted to a string token piece by the common_token_to_piece function. This string token is returned from the completionLoop method.
    • We need to reinitialize _batch to ensure it now only contains _currToken and not the entire input sequence, i.e. _promptTokens. This is because the ‘keys’ and ‘values’ for all previous tokens have been cached. This reduces the inference time by avoiding the computation of all ‘keys’ and ‘values’ for all tokens in _promptTokens.
    std::string LLMInference::completionLoop() {
        // check if the length of the inputs to the model
        // has exceeded the context size of the model
        int contextSize = llama_n_ctx(_ctx);
        int nCtxUsed = llama_get_kv_cache_used_cells(_ctx);
        if (nCtxUsed + _batch.n_tokens > contextSize) {
            std::cerr << "context size exceeded" << '\n';
            exit(0);
        }
        // run the model
        if (llama_decode(_ctx, _batch) < 0) {
            throw std::runtime_error("llama_decode() failed");
        }

        // sample a token and check if it is an EOG (end of generation) token
        // convert the integer token to its corresponding word-piece
        _currToken = llama_sampler_sample(_sampler, _ctx, -1);
        if (llama_token_is_eog(_model, _currToken)) {
            addChatMessage(_response, "assistant");
            _response.clear();
            return "[EOG]";
        }
        std::string piece = common_token_to_piece(_ctx, _currToken, true);
        // accumulate the piece so that `_response` holds the full answer
        _response += piece;

        // re-init the batch with the newly predicted token
        // key, value pairs of all previous tokens have been cached
        // in the KV cache
        _batch.token = &_currToken;
        _batch.n_tokens = 1;

        return piece;
    }

    • Also, for each query made by the user, the LLM takes as input the entire tokenized conversation (all messages stored in _messages). If we tokenized the entire conversation every time in the startCompletion method, the preprocessing time, and thus the overall inference time, would increase as the conversation gets longer.
    • To avoid this computation, we only need to tokenize the latest message/query added to _messages. The length up to which messages in _formattedMessages have been tokenized is stored in _prevLen. At the end of response generation, i.e. in LLMInference::stopCompletion, we update the value of _prevLen by appending the LLM’s response to _messages and using the return value of llama_chat_apply_template.
    void LLMInference::stopCompletion() {
        _prevLen = llama_chat_apply_template(
            _model,
            nullptr,
            _messages.data(),
            _messages.size(),
            false,
            nullptr,
            0
        );
        if (_prevLen < 0) {
            throw std::runtime_error("llama_chat_apply_template() in LLMInference::stopCompletion() failed");
        }
    }

    We implement a destructor that deallocates dynamically allocated objects, both in _messages and internally in llama.cpp.

    LLMInference::~LLMInference() {
        // free memory held by the message strings in `_messages`
        // (we used strdup() to create malloc'ed copies of the role and content,
        //  so release them with free())
        for (llama_chat_message& message : _messages) {
            free(const_cast<char*>(message.role));
            free(const_cast<char*>(message.content));
        }
        llama_kv_cache_clear(_ctx);
        llama_sampler_free(_sampler);
        llama_free(_ctx);
        llama_free_model(_model);
    }

    We create a small interface that allows us to have a conversation with the LLM. This includes instantiating the LLMInference class and calling all the methods that we defined in the previous sections.

    #embody "LLMInference.h"
    #embody
    #embody

    int primary(int argc, char* argv[]) {

    std::string modelPath = "smollm2-360m-instruct-q8_0.gguf";
    float temperature = 1.0f;
    float minP = 0.05f;
    std::unique_ptr llmInference = std::make_unique();
    llmInference->loadModel(modelPath, minP, temperature);

    llmInference->addChatMessage("You're a useful assistant", "system");

    whereas (true) {
    std::cout << "Enter question:n";
    std::string question;
    std::getline(std::cin, question);
    if (question == "exit") {
    break;
    }
    llmInference->startCompletion(question);
    std::string predictedToken;
    whereas ((predictedToken = llmInference->completionLoop()) != "[EOG]") {
    std::cout << predictedToken;
    fflush(stdout);
    }
    std::cout << 'n';
    }

    return 0;
    }

    We use the CMakeLists.txt authored in one of the earlier sections to create a Makefile, which will compile the code and create an executable ready for use.

    mkdir build
    cd build
    cmake ..
    make
    ./chat
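
    Before launching ./chat, make sure the GGUF file referenced in main.cpp (smollm2-360m-instruct-q8_0.gguf) is present in the working directory. One way to fetch it is sketched below; the Hugging Face repository name is only an example of where SmolLM2 GGUF builds are commonly published, so adjust it to wherever you source the model:

    # requires the Hugging Face CLI: pip install -U "huggingface_hub[cli]"
    huggingface-cli download HuggingFaceTB/SmolLM2-360M-Instruct-GGUF \
        smollm2-360m-instruct-q8_0.gguf --local-dir .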

    Here’s how the output looks:

    register_backend: registered backend CPU (1 devices)
    register_device: registered device CPU (11th Gen Intel(R) Core(TM) i3-1115G4 @ 3.00GHz)
    llama_model_loader: loaded meta data with 33 key-value pairs and 290 tensors from /home/shubham/CPP_Projects/llama-cpp-inference/models/smollm2-360m-instruct-q8_0.gguf (version GGUF V3 (latest))
    llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
    llama_model_loader: - kv 0: general.architecture str = llama
    llama_model_loader: - kv 1: general.type str = model
    llama_model_loader: - kv 2: general.name str = Smollm2 360M 8k Lc100K Mix1 Ep2
    llama_model_loader: - kv 3: general.organization str = Loubnabnl
    llama_model_loader: - kv 4: general.finetune str = 8k-lc100k-mix1-ep2
    llama_model_loader: - kv 5: general.basename str = smollm2
    llama_model_loader: - kv 6: general.size_label str = 360M
    llama_model_loader: - kv 7: general.license str = apache-2.0
    llama_model_loader: - kv 8: general.languages arr[str,1] = ["en"]
    llama_model_loader: - kv 9: llama.block_count u32 = 32
    llama_model_loader: - kv 10: llama.context_length u32 = 8192
    llama_model_loader: - kv 11: llama.embedding_length u32 = 960
    llama_model_loader: - kv 12: llama.feed_forward_length u32 = 2560
    llama_model_loader: - kv 13: llama.attention.head_count u32 = 15
    llama_model_loader: - kv 14: llama.attention.head_count_kv u32 = 5
    llama_model_loader: - kv 15: llama.rope.freq_base f32 = 100000.000000
    llama_model_loader: - kv 16: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
    llama_model_loader: - kv 17: general.file_type u32 = 7
    llama_model_loader: - kv 18: llama.vocab_size u32 = 49152
    llama_model_loader: - kv 19: llama.rope.dimension_count u32 = 64
    llama_model_loader: - kv 20: tokenizer.ggml.add_space_prefix bool = false
    llama_model_loader: - kv 21: tokenizer.ggml.add_bos_token bool = false
    llama_model_loader: - kv 22: tokenizer.ggml.model str = gpt2
    llama_model_loader: - kv 23: tokenizer.ggml.pre str = smollm
    llama_model_loader: - kv 24: tokenizer.ggml.tokens arr[str,49152] = ["<|endoftext|>", "<|im_start|>", "<|...
    llama_model_loader: - kv 25: tokenizer.ggml.token_type arr[i32,49152] = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
    llama_model_loader: - kv 26: tokenizer.ggml.merges arr[str,48900] = ["Ġ t", "Ġ a", "i n", "h e", "Ġ Ġ...
    llama_model_loader: - kv 27: tokenizer.ggml.bos_token_id u32 = 1
    llama_model_loader: - kv 28: tokenizer.ggml.eos_token_id u32 = 2
    llama_model_loader: - kv 29: tokenizer.ggml.unknown_token_id u32 = 0
    llama_model_loader: - kv 30: tokenizer.ggml.padding_token_id u32 = 2
    llama_model_loader: - kv 31: tokenizer.chat_template str = {% for message in messages %}{% if lo...
    llama_model_loader: - kv 32: general.quantization_version u32 = 2
    llama_model_loader: - type f32: 65 tensors
    llama_model_loader: - type q8_0: 225 tensors
    llm_load_vocab: control token: 7 '' is not marked as EOG
    llm_load_vocab: control token: 13 '' is not marked as EOG
    llm_load_vocab: control token: 16 '' is not marked as EOG
    llm_load_vocab: control token: 11 '' is not marked as EOG
    llm_load_vocab: control token: 10 '' is not marked as EOG
    llm_load_vocab: control token: 6 '' is not marked as EOG
    llm_load_vocab: control token: 8 '' is not marked as EOG
    llm_load_vocab: control token: 3 '' is not marked as EOG
    llm_load_vocab: control token: 12 '' is not marked as EOG
    llm_load_vocab: control token: 15 '' is not marked as EOG
    llm_load_vocab: control token: 4 '' is not marked as EOG
    llm_load_vocab: control token: 1 '<|im_start|>' is not marked as EOG
    llm_load_vocab: control token: 9 '' is not marked as EOG
    llm_load_vocab: control token: 5 '' is not marked as EOG
    llm_load_vocab: control token: 14 '' is not marked as EOG
    llm_load_vocab: special tokens cache size = 17
    llm_load_vocab: token to piece cache size = 0.3170 MB
    llm_load_print_meta: format = GGUF V3 (latest)
    llm_load_print_meta: arch = llama
    llm_load_print_meta: vocab type = BPE
    llm_load_print_meta: n_vocab = 49152
    llm_load_print_meta: n_merges = 48900
    llm_load_print_meta: vocab_only = 0
    llm_load_print_meta: n_ctx_train = 8192
    llm_load_print_meta: n_embd = 960
    llm_load_print_meta: n_layer = 32
    llm_load_print_meta: n_head = 15
    llm_load_print_meta: n_head_kv = 5
    llm_load_print_meta: n_rot = 64
    llm_load_print_meta: n_swa = 0
    llm_load_print_meta: n_embd_head_k = 64
    llm_load_print_meta: n_embd_head_v = 64
    llm_load_print_meta: n_gqa = 3
    llm_load_print_meta: n_embd_k_gqa = 320
    llm_load_print_meta: n_embd_v_gqa = 320
    llm_load_print_meta: f_norm_eps = 0.0e+00
    llm_load_print_meta: f_norm_rms_eps = 1.0e-05
    llm_load_print_meta: f_clamp_kqv = 0.0e+00
    llm_load_print_meta: f_max_alibi_bias = 0.0e+00
    llm_load_print_meta: f_logit_scale = 0.0e+00
    llm_load_print_meta: n_ff = 2560
    llm_load_print_meta: n_expert = 0
    llm_load_print_meta: n_expert_used = 0
    llm_load_print_meta: causal attn = 1
    llm_load_print_meta: pooling type = 0
    llm_load_print_meta: rope type = 0
    llm_load_print_meta: rope scaling = linear
    llm_load_print_meta: freq_base_train = 100000.0
    llm_load_print_meta: freq_scale_train = 1
    llm_load_print_meta: n_ctx_orig_yarn = 8192
    llm_load_print_meta: rope_finetuned = unknown
    llm_load_print_meta: ssm_d_conv = 0
    llm_load_print_meta: ssm_d_inner = 0
    llm_load_print_meta: ssm_d_state = 0
    llm_load_print_meta: ssm_dt_rank = 0
    llm_load_print_meta: ssm_dt_b_c_rms = 0
    llm_load_print_meta: model type = 3B
    llm_load_print_meta: model ftype = Q8_0
    llm_load_print_meta: model params = 361.82 M
    llm_load_print_meta: model size = 366.80 MiB (8.50 BPW)
    llm_load_print_meta: general.name = Smollm2 360M 8k Lc100K Mix1 Ep2
    llm_load_print_meta: BOS token = 1 '<|im_start|>'
    llm_load_print_meta: EOS token = 2 '<|im_end|>'
    llm_load_print_meta: EOT token = 0 '<|endoftext|>'
    llm_load_print_meta: UNK token = 0 '<|endoftext|>'
    llm_load_print_meta: PAD token = 2 '<|im_end|>'
    llm_load_print_meta: LF token = 143 'Ä'
    llm_load_print_meta: EOG token = 0 '<|endoftext|>'
    llm_load_print_meta: EOG token = 2 '<|im_end|>'
    llm_load_print_meta: max token length = 162
    llm_load_tensors: ggml ctx size = 0.14 MiB
    llm_load_tensors: CPU buffer size = 366.80 MiB
    ...............................................................................
    llama_new_context_with_model: n_ctx = 8192
    llama_new_context_with_model: n_batch = 2048
    llama_new_context_with_model: n_ubatch = 512
    llama_new_context_with_model: flash_attn = 0
    llama_new_context_with_model: freq_base = 100000.0
    llama_new_context_with_model: freq_scale = 1
    llama_kv_cache_init: CPU KV buffer size = 320.00 MiB
    llama_new_context_with_model: KV self size = 320.00 MiB, K (f16): 160.00 MiB, V (f16): 160.00 MiB
    llama_new_context_with_model: CPU output buffer size = 0.19 MiB
    ggml_gallocr_reserve_n: reallocating CPU buffer from size 0.00 MiB to 263.51 MiB
    llama_new_context_with_model: CPU compute buffer size = 263.51 MiB
    llama_new_context_with_model: graph nodes = 1030
    llama_new_context_with_model: graph splits = 1
    Enter query:
    How are you?
    I'm a text-based AI assistant. I don't have emotions or personal feelings, but I can understand and respond to your requests accordingly. If you have questions or need help with anything, feel free to ask.
    Enter query:
    Write a one line description on the C++ keyword 'new'
    New C++ keyword represents memory allocation for dynamically allocated memory.
    Enter query:
    exit

    llama.cpp has simplified the deployment of large language models, making them accessible across a wide range of devices and use cases. By understanding its internals and building a simple C++ inference program, we have demonstrated how developers can leverage its low-level functions for performance-critical and resource-constrained applications. This tutorial not only serves as an introduction to llama.cpp’s core constructs but also highlights its practicality in real-world projects, enabling efficient on-device interactions with LLMs.

    For developers interested in pushing the boundaries of LLM deployment or those aiming to build robust applications, mastering tools like llama.cpp opens the door to immense possibilities. As you explore further, remember that this foundational knowledge can be extended to integrate advanced features, optimize performance, and adapt to evolving AI use cases.

    I hope the tutorial was informative and left you fascinated by running LLMs in C++ directly. Do share your suggestions and questions in the comments below; they are always appreciated. Happy learning and have a wonderful day!


