Llama2.cpp Chat
Last updated
This repository is a fork of llama.cpp, customized for Llama2 inference within Arkane Cloud.
llama.cpp is a powerful tool for running Llama2 inference; this fork is tailored for seamless integration with Arkane Cloud environments.
Pre-Configured CI Pipeline: The CI pipeline automatically fetches a pre-converted, quantized Code Llama Instruct model from TheBloke on Hugging Face.
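For reference, such a model can also be fetched manually from Hugging Face. The sketch below constructs a direct download URL; the repository and file names are assumptions for illustration, and the pipeline's actual selection may differ:

```shell
# Hypothetical repo/file names - the CI pipeline's actual choice may differ.
REPO="TheBloke/CodeLlama-7B-Instruct-GGUF"
FILE="codellama-7b-instruct.Q4_K_M.gguf"
URL="https://huggingface.co/${REPO}/resolve/main/${FILE}"
# Fetch the model; -L follows Hugging Face's redirect to its CDN:
# curl -L -o "$FILE" "$URL"
echo "$URL"
```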
HTTP Server Example: The repository includes an HTTP server example, allowing for easy deployment and testing. Configuration options can be found in the /examples/server directory.
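As a rough sketch, a server built from examples/server is typically launched with a model path, a context size, and a bind address. The model path below is a placeholder; see the examples/server README for the full flag list:

```shell
MODEL="models/model.gguf"  # placeholder path to the quantized model
CTX=2048                   # context window size
CMD="./server -m ${MODEL} -c ${CTX} --host 0.0.0.0 --port 8080"
# Start the llama.cpp HTTP server (binary built from examples/server):
# eval "$CMD"
echo "$CMD"
```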
1. Clone this repository into a new workspace (at least Pro/GPU).
2. Start the Prepare stage in the CI pipeline.
3. After the Prepare stage is done, start the run stage.
4. Click Open deployment in the top right corner.
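Once the deployment is open, the server can be queried over HTTP. A minimal sketch against the server's /completion endpoint follows; the server address is a placeholder for the URL shown under Open deployment, and the request fields follow the examples/server README:

```shell
# Placeholder address - substitute the URL shown under "Open deployment".
SERVER="http://localhost:8080"
# JSON body for the /completion endpoint.
DATA='{"prompt": "Building a website can be done in 10 simple steps:", "n_predict": 128}'
# Send the request (requires a running deployment):
# curl -s -X POST "${SERVER}/completion" -H "Content-Type: application/json" -d "$DATA"
echo "$DATA"
```

The response is a JSON object whose content field holds the generated text.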
For detailed configuration options and usage instructions, refer to the README file located in the /examples/server directory.
Please note that while this repository provides a convenient setup for running Llama2 inference in Arkane Cloud, further customization may be required to suit specific use cases or preferences.