The evolution of open large language models (LLMs) has significantly impacted the AI research community, particularly in developing chatbots and similar applications. Following the release of models ...
The Zephyr-7B model was trained with a three-step strategy. The first step is distilled supervised fine-tuning (dSFT) on the UltraChat dataset. This dataset, comprising 1.47 million ...
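The snippet above covers only the first step; per the Zephyr report, a later step applies distilled direct preference optimization (dDPO) on AI-ranked completions. As a minimal sketch (not the authors' training code), the per-example DPO loss can be computed from summed log-probabilities of the chosen and rejected completions under the policy and a frozen reference model:

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def dpo_loss(policy_chosen: float, policy_rejected: float,
             ref_chosen: float, ref_rejected: float,
             beta: float = 0.1) -> float:
    """Per-example DPO loss.

    The loss shrinks as the policy widens its log-prob margin between the
    chosen and rejected completions relative to the reference model's margin.
    """
    margin = beta * ((policy_chosen - ref_chosen)
                     - (policy_rejected - ref_rejected))
    return -math.log(sigmoid(margin))
```

When the policy's margin matches the reference's, the loss is log 2; it decreases as the policy favors the chosen completion more strongly than the reference does.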
A new language model known as Zephyr has been released. The Zephyr-7B-α large language model is designed to function as a helpful assistant, providing a new level of interaction and utility in ...
This repo contains an API and a chat demo with a Redis cache for the zephyr-7b-alpha model from Hugging Face. It is an unofficial API that can be self-hosted on Docker, or if you wish you can add the ...
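The repo's exact caching scheme isn't shown, but a response cache for a chat API typically keys Redis on a hash of the prompt. A minimal sketch (the function and key names here are illustrative, not from the repo) with the client injected so the logic also works against a stub:

```python
import hashlib
import json

def cache_key(prompt: str) -> str:
    # Hash the prompt so Redis keys stay short and fixed-length.
    return "zephyr:" + hashlib.sha256(prompt.encode("utf-8")).hexdigest()

def cached_generate(prompt, client, generate_fn, ttl_seconds=3600):
    """Return a cached completion if present, else generate and cache it.

    `client` needs only get/setex, matching redis-py's redis.Redis, so a
    dict-backed stub works for local testing without a Redis server.
    """
    key = cache_key(prompt)
    hit = client.get(key)
    if hit is not None:
        return json.loads(hit)
    completion = generate_fn(prompt)  # call the model only on a cache miss
    client.setex(key, ttl_seconds, json.dumps(completion))
    return completion
```

In production you would pass `redis.Redis(host=..., port=...)` as `client`; `setex` gives each entry a TTL so stale completions expire.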
In this project, I fine-tuned zephyr-7B-alpha-GPTQ, the quantized model, using the QLoRA technique with the help of SFTTrainer from trl. The primary goal of this project is to train a support ...
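GPTQ and QLoRA both keep the frozen base weights in 4-bit precision while training only small adapter matrices. As an illustration of the underlying idea only (not the actual GPTQ or NF4 algorithm, and not this project's training code), here is a minimal blockwise symmetric int4 quantize/dequantize round trip:

```python
def quantize_block(weights, bits=4):
    """Symmetric per-block quantization: map floats onto a signed int grid."""
    qmax = 2 ** (bits - 1) - 1            # 7 for 4-bit
    absmax = max(abs(w) for w in weights) or 1.0
    scale = absmax / qmax                 # one scale stored per block
    q = [max(-qmax, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize_block(q, scale):
    # Reconstruction error per weight is at most half the grid step (scale/2).
    return [v * scale for v in q]
```

Real implementations quantize in fixed-size blocks (e.g. 64 or 128 weights) so each block gets its own scale, which bounds the error locally.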