Playing with llm


pip install llm
llm models … cool! I could try LLaMA with this!

llm install llm-gpt4all
llm -m Meta-Llama-3-8B-Instruct 'Hi!'  # downloads the model
llm models default Meta-Llama-3-8B-Instruct

This does not work in a QEMU VM (managed via virsh):

Error: Unable to instantiate model: CPU does not support AVX
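If the host CPU does support AVX, the likely culprit is the default virtual CPU model hiding host flags from the guest. One possible fix, assuming the VM is managed by libvirt, is to pass the host CPU through in the domain XML (edit it with `virsh edit <domain>`):

```xml
<!-- libvirt domain XML: expose the host CPU model and flags (including AVX)
     to the guest instead of a generic emulated CPU -->
<cpu mode='host-passthrough' check='none'/>
```

A full shutdown and start of the guest (not just a reboot) is needed for the new CPU definition to take effect.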

To test:

lscpu | grep -i avx   # no output means the CPU, as seen by the guest, lacks AVX
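The same check can be scripted. A minimal sketch (Linux only) that parses the x86 `flags` line of `/proc/cpuinfo` directly, rather than going through `lscpu`:

```python
import pathlib

def flags_from_cpuinfo(text: str) -> set[str]:
    """Extract the CPU flag set from /proc/cpuinfo-style text (x86 'flags' line)."""
    for line in text.splitlines():
        if line.startswith("flags"):
            # Line looks like: 'flags\t\t: fpu vme ... avx avx2 ...'
            return set(line.split(":", 1)[1].split())
    return set()

def has_avx(text: str) -> bool:
    return "avx" in flags_from_cpuinfo(text)

cpuinfo = pathlib.Path("/proc/cpuinfo")
if cpuinfo.exists():
    print("AVX visible to this kernel:", has_avx(cpuinfo.read_text()))
```

Inside a misconfigured guest this prints `False` even when the host CPU has AVX, since the guest only sees the flags of the emulated CPU model.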