
Llama2.cpp Chat

llama.cpp

This repository is a fork of llama.cpp, customized to facilitate Llama2 inference within Arkane Cloud.

Overview

Llama.cpp is a powerful tool for running Llama2 inference, and this fork is tailored for seamless integration with Arkane Cloud environments.

Features

  • Pre-Configured CI Pipeline: The CI pipeline is set up to automatically fetch a pre-converted and quantized Code Llama Instruct model from TheBloke on Hugging Face (see the download sketch after this list).

  • HTTP Server Example: The repository includes an HTTP server example, allowing for easy deployment and testing. Configuration options can be found in the /examples/server directory.
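For reference, fetching a quantized GGUF model from TheBloke manually looks roughly like the sketch below. The repository and file names are illustrative assumptions, not values confirmed by this template; check the CI pipeline configuration for the exact model the Prepare stage downloads.

```python
# Sketch: manually fetching a quantized GGUF model from TheBloke on Hugging Face.
# The repo_id and filename below are assumptions for illustration only.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="TheBloke/CodeLlama-7B-Instruct-GGUF",  # assumed model repository
    filename="codellama-7b-instruct.Q4_K_M.gguf",   # assumed quantization file
)
print(f"Model downloaded to: {model_path}")
```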

Usage

  1. Clone this repository into a new workspace (at least Pro/GPU).

  2. Start the Prepare stage in the CI pipeline.

  3. Once the Prepare stage has finished, start the Run stage.

  4. Click Open deployment in the top right corner; you can then query the deployed server, as sketched below.
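Once the deployment is open, the llama.cpp example server can be queried over HTTP via its /completion endpoint. A minimal sketch, assuming the placeholder URL below is replaced with the address shown under Open deployment:

```python
# Sketch: querying the llama.cpp example server's /completion endpoint.
# BASE_URL is a hypothetical placeholder -- substitute your deployment URL.
import requests

BASE_URL = "https://your-deployment.example.com"

response = requests.post(
    f"{BASE_URL}/completion",
    json={
        "prompt": "Write a Python function that reverses a string.",
        "n_predict": 128,  # maximum number of tokens to generate
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["content"])  # generated text
```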

Documentation

For detailed configuration options and usage instructions, refer to the README file located in the /examples/server directory.

Note

While this repository provides a convenient setup for running Llama2 inference in Arkane Cloud, further customization may be required to suit specific use cases or preferences.
