▲xguru 2023-05-05 | parent | favorite | on: OpenLLaMA - An open reproduction of LLaMA (github.com/openlm-research)

A commenter on HN posted the commands for "running OpenLLaMA with llama.cpp on 8 GB of RAM":
https://news.ycombinator.com/item?id=35798888

git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && cmake -B build && cmake --build build
python3 -m pip install -r requirements.txt
cd models && git clone https://huggingface.co/openlm-research/open_llama_7b_preview_200bt/ && cd -
python3 convert-pth-to-ggml.py models/open_llama_7b_preview_200bt/open_llama_7b_preview_200bt_transformers_weights 1
./build/bin/quantize models/open_llama_7b_preview_200bt/open_llama_7b_preview_200bt_transformers_weights/ggml-model-f16.bin models/open_llama_7b_preview_200bt_q5_0.ggml q5_0
./build/bin/main -m models/open_llama_7b_preview_200bt_q5_0.ggml --ignore-eos -n 1280 -p "Building a website can be done in 10 simple steps:" --mlock
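The 8 GB RAM figure comes down to quantization arithmetic: the quantize step converts the fp16 weights to q5_0, which in the ggml format packs blocks of 32 weights as 5 bits each plus a 16-bit scale per block, i.e. roughly 5.5 bits per weight. A rough back-of-the-envelope sketch (the 7B parameter count and bits-per-weight figures are approximations, not measured sizes):

```python
# Rough memory estimate: why a q5_0-quantized 7B model fits in 8 GB RAM
# while the fp16 original does not.
# Assumed q5_0 layout: blocks of 32 weights, 5 bits per weight plus one
# 16-bit scale per block -> (32 * 5 + 16) / 32 = 5.5 bits per weight.
params = 7e9  # approximate parameter count of the 7B model

fp16_gib = params * 16 / 8 / 1024**3   # 16 bits per weight
q5_0_gib = params * 5.5 / 8 / 1024**3  # ~5.5 bits per weight

print(f"fp16: {fp16_gib:.1f} GiB")  # ~13.0 GiB, too large for 8 GB RAM
print(f"q5_0: {q5_0_gib:.1f} GiB")  # ~4.5 GiB, fits with headroom
```

This also explains the `--mlock` flag in the final command: with the quantized model small enough to fit in RAM, pinning it there avoids the OS paging weights out mid-generation.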