A new version of the Huihui-gemma model shows a lower perplexity than the original, indicating a potential quality improvement. This release may interest engineers looking for better-performing models for their AI systems.
An unexpected result: when tested with llama-perplexity, the ablated version actually has a lower PPL than the original model. A lower PPL value indicates higher model quality.
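As a rough sketch of what the PPL comparison measures: perplexity is the exponential of the mean per-token negative log-likelihood, so a lower PPL means the model assigns higher probability to the evaluation text. The log-probability values below are hypothetical, for illustration only:

```python
import math

def perplexity(token_logprobs):
    """Perplexity from per-token natural-log probabilities:
    PPL = exp(-mean(log p)). Lower is better."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# Hypothetical per-token log-probs for two models on the same text.
original = [-2.1, -1.8, -2.4, -1.9]
ablated  = [-1.9, -1.7, -2.2, -1.8]

# The ablated model assigns higher probability to each token,
# so its perplexity comes out lower.
print(perplexity(original))  # ~7.77
print(perplexity(ablated))   # ~6.69
```

Tools such as llama-perplexity apply the same formula over a full evaluation corpus rather than a handful of tokens.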
We will upload the Huihui-gemma-4-31B-it-abliteratedv2 version, with fewer warnings and