Tencent has released the Hunyuan Embodied AI model on Hugging Face: a 2B parameter vision-language model with a Mixture-of-Transformers architecture that achieves state-of-the-art results on multiple benchmarks. While the model's benchmark performance is noteworthy, its practical application and integration into existing systems remain to be seen.
Tencent just released Hunyuan Embodied on Hugging Face
A 2B parameter vision-language model with a Mixture-of-Transformers architecture that outperforms 4B and 7B competitors on spatial reasoning and embodied understanding benchmarks.
It achieves SOTA results on CV-Bench, DA-2K, and 10+ embodied understanding benchmarks.
MinerU2.5-Pro is a new 1.2B model that achieves state-of-the-art performance on the OmniDocBench v1.6 benchmark for PDF to Markdown parsing, outperforming several existing models. The significant improvement in performance is attributed to a substantial increase in training data, which may interest engineers focused on model training and performance optimization.
MinerU2.5-Pro is here: SOTA on OmniDocBench v1.6 (95.69) for PDF-to-Markdown parsing.
A 1.2B model that outperforms Gemini 3 Pro, Qwen3-VL-235B, GLM-OCR, and PaddleOCR-VL-1.5. The entire leap from 92.98 to 95.69 came from data: 65.5M training pages, up from fewer than 10M.
Gemma 4 has stabilized on llama.cpp after its initial bugs were fixed, and it ships in several configurations. Senior engineers may find the performance benchmarks noteworthy, especially the 31B model's ranking on Arena AI.
Gemma 4 is finally stable on llama.cpp
On April 2nd, Google released Gemma 4 with llama.cpp support on day one, albeit with many bugs. All issues have now been fixed.
E2B, E4B, 26B MoE, 31B Dense
The 31B model ranks #3 on Arena AI and the 26B ranks #6
The strongest tier of open-source models
A new version of the Huihui-gemma model shows improved (lower) perplexity compared to the original, indicating potential quality gains. This release may interest engineers looking for better-performing models for their AI systems.
An absolutely unexpected result: tested with llama-perplexity, the abliterated version actually has a lower PPL than the original model.
The lower the PPL value, the higher the model quality.
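As a concrete illustration of the metric: perplexity is the exponential of the mean per-token negative log-likelihood, so a model that is less "surprised" by the text gets a lower PPL. The per-token NLL values below are made-up numbers for illustration, not measurements from either model:

```python
import math

def perplexity(token_nlls):
    """Perplexity = exp(mean per-token negative log-likelihood, in nats)."""
    return math.exp(sum(token_nlls) / len(token_nlls))

# Hypothetical per-token NLLs; lower average NLL -> lower perplexity.
ppl_original = perplexity([2.1, 1.9, 2.3, 2.0])
ppl_abliterated = perplexity([2.0, 1.8, 2.2, 1.9])

assert ppl_abliterated < ppl_original  # lower PPL = higher quality
```

This is why a lower PPL for the abliterated model is surprising: ablation usually degrades next-token prediction slightly, which would raise the average NLL and hence the perplexity.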
We will upload the Huihui-gemma-4-31B-it-abliteratedv2 version, with fewer warnings and
MiniMax AI has open-sourced its foundation model MiniMax M2.7, providing weights for autonomous coding tasks. Senior engineers may find the state-of-the-art performance claims relevant for evaluating new tools in software engineering.
MiniMax AI open-sourced its latest foundation model, MiniMax M2.7, making the weights immediately available to the global developer community via Hugging Face.
The release claims state-of-the-art (SOTA) performance on highly rigorous, autonomous coding and software engineering tasks.
Tags: open source, AI model, software engineering, MiniMax, Hugging Face
Microsoft has released Skala, a neural network exchange-correlation functional that achieves chemical accuracy comparable to hybrid functionals at a semi-local cost. This could be relevant for engineers working on computational chemistry applications.
Microsoft just released Skala on Hugging Face
A neural network exchange-correlation functional for density functional theory that achieves chemical accuracy on par with hybrid functionals at semi-local cost.
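For context on what "semi-local cost" means: a semi-local functional evaluates the exchange-correlation energy pointwise from the electron density (and possibly its local derivatives), avoiding the non-local exact-exchange integrals that make hybrid functionals expensive. The sketch below evaluates the classic Dirac/LDA exchange energy density, the simplest semi-local functional from textbook DFT; it is an illustration of the cost class only, not Skala's learned functional:

```python
import math

def lda_exchange_energy_density(rho):
    """Dirac/LDA exchange energy per unit volume (Hartree atomic units):
        e_x(rho) = -(3/4) * (3/pi)**(1/3) * rho**(4/3)
    Semi-local: depends only on the density at the point itself."""
    c_x = 0.75 * (3.0 / math.pi) ** (1.0 / 3.0)  # Dirac coefficient, ~0.7386
    return -c_x * rho ** (4.0 / 3.0)

# Exchange energy density for a uniform electron gas at rho = 1.0 a.u.
e_x = lda_exchange_energy_density(1.0)
```

Skala's claim is that a neural functional can stay in this cheap evaluate-from-the-density regime while matching the accuracy of hybrids, which normally require orbital-dependent exchange terms.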