I'm trying to use convert_hf_to_gguf.py to convert the DeepSeek-R1-0528-FP4 safetensors into GGUF format. I hope llama.cpp can be further improved to fully support converting models like ...
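For reference, a minimal sketch of the conversion attempt is below. The directory and output paths are hypothetical, and the `--outfile`/`--outtype` flags reflect the stock convert_hf_to_gguf.py usage in the llama.cpp repo; the point of this issue is that the FP4-quantized checkpoint is not handled by the current converter.

```python
# Sketch of invoking llama.cpp's convert_hf_to_gguf.py from Python.
# Assumptions: a local llama.cpp checkout on the current path, a local
# copy of the DeepSeek-R1-0528-FP4 safetensors, and the standard
# --outfile/--outtype converter flags.
import subprocess
from pathlib import Path

MODEL_DIR = Path("DeepSeek-R1-0528-FP4")       # hypothetical local safetensors directory
OUT_FILE = Path("deepseek-r1-0528-fp4.gguf")   # hypothetical output path

subprocess.run(
    [
        "python", "convert_hf_to_gguf.py",
        str(MODEL_DIR),
        "--outfile", str(OUT_FILE),
        # FP4 source weights are not directly supported, so an output
        # dtype the converter does understand is assumed here.
        "--outtype", "bf16",
    ],
    check=True,
)
```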