
I have a Dockerfile here https://github.com/purton-tech/rust-llm-guide/tree/main/llm-... for running MPT-7B:

docker run -it --rm ghcr.io/purton-tech/mpt-7b-chat

It's a big download due to the model size (around 5 GB). The model is quantized and runs via the ggml tensor library: https://ggml.ai/
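For reference, a minimal sketch of what a Dockerfile like this might look like. This is a hypothetical illustration, not the actual file from the linked repo: the build target, binary name, and model handling are all assumptions.

```dockerfile
# Hypothetical sketch -- not the Dockerfile from the linked repo.
# Stage 1: build the ggml examples from source.
FROM ubuntu:22.04 AS build
RUN apt-get update && apt-get install -y git cmake build-essential
RUN git clone https://github.com/ggerganov/ggml /ggml
WORKDIR /ggml/build
RUN cmake .. && cmake --build . -j

# Stage 2: slim runtime image with just the inference binary.
FROM ubuntu:22.04
COPY --from=build /ggml/build/bin /usr/local/bin
# A real image would COPY or download a quantized model here
# (the ~5 GB that dominates the image size); path is an assumption.
# COPY mpt-7b-chat-q4.bin /models/model.bin
ENTRYPOINT ["mpt"]
```

The two-stage build keeps compilers out of the final image; the quantized model weights, not the binary, are what make the download large.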
