# Llama2.cpp Chat
This repository is a fork of llama.cpp customized to facilitate Llama2 inference within Arkane Cloud.
## Overview
Llama.cpp is a powerful tool for running Llama2 inference, and this fork is tailored specifically for seamless integration with Arkane Cloud environments.
## Features
- **Pre-configured CI pipeline:** The CI pipeline automatically fetches a pre-converted, quantized Code Llama Instruct model from TheBloke on Hugging Face.
- **HTTP server example:** The repository includes an HTTP server example for easy deployment and testing. Configuration options can be found in the /examples/server directory.
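Once the server is running, it can be queried over HTTP. The sketch below assumes the default llama.cpp server port (8080) and its `/completion` endpoint; the URL and prompt are placeholders to adjust for your deployment:

```python
import json
import urllib.request

# Hypothetical address -- replace with the URL shown by "Open deployment".
SERVER_URL = "http://localhost:8080"

def complete(prompt: str, n_predict: int = 128, url: str = SERVER_URL) -> str:
    """POST a completion request to the llama.cpp HTTP server."""
    payload = json.dumps({"prompt": prompt, "n_predict": n_predict}).encode()
    req = urllib.request.Request(
        url + "/completion",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The server replies with JSON; the generated text is in "content".
        return json.loads(resp.read())["content"]
```

Calling `complete("Write a C function that reverses a string.")` against a running deployment then returns the model's generated text.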
## Usage
1. Clone this repository into a new workspace (at least Pro/GPU).
2. Start the `Prepare` stage in the CI pipeline.
3. After the `Prepare` stage is done, start the `run` stage.
4. Click on `Open deployment` in the top right corner.
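As a quick sanity check after opening the deployment, the sketch below polls the server's `/health` endpoint (exposed by recent llama.cpp server builds); the URL is a placeholder for the address your deployment reports:

```python
import urllib.request
import urllib.error

# Hypothetical address -- paste the URL shown by "Open deployment" here.
DEPLOYMENT_URL = "http://localhost:8080"

def is_up(url: str = DEPLOYMENT_URL, timeout: float = 5.0) -> bool:
    """Return True if the llama.cpp server answers on /health."""
    try:
        with urllib.request.urlopen(url + "/health", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused, DNS failure, or timeout: server not reachable.
        return False
```

If `is_up()` returns `False`, check that the `run` stage is still active before retrying.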
## Documentation
For detailed configuration options and usage instructions, refer to the README file located in the /examples/server directory.
## Note
While this repository provides a convenient setup for running Llama2 inference on Arkane Cloud, further customization may be required to suit specific use cases or preferences.