AI Twitter Scanner

High-signal AI posts from X, classified and scored

All Dates  |  Today
Total scanned: 988  |  Above threshold: 987  |  Showing: 10
⭐ Favorites 🔥 Resonated 🚀 Viral 🔖 Most Saved 💬 Discussed 🔁 Shared 💎 Hidden Gems 📉 Dead on Arrival
All | affiliate | automation pipeline | builder tool | content automation | growth hack | infrastructure | learning resource | market signal | model release | monetization | offer it as a service | open source drop | open source gold | passive income stream | platform shift | research
model release @HuggingPapers
7/10
Tencent Hunyuan Embodied AI Model Released
Tencent has released the Hunyuan Embodied AI model on Hugging Face, featuring a 2B parameter vision-language architecture that achieves state-of-the-art results on multiple benchmarks. While the model's performance is noteworthy, its practical application and integration into existing systems remain to be seen.
Tencent just released the Hunyuan Embodied AI model on Hugging Face. A 2B parameter vision-language model with Mixture-of-Transformers architecture. It achieves SOTA results on CV-Bench, DA-2K and 10+ embodied understanding benchmarks.
👁 2,012 views ❤ 16 🔁 5 💬 0 🔖 9 1.0% eng Actionable
AI, Tencent, Hunyuan, vision-language, benchmark
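The "eng" figure on each card appears consistent with (likes + reposts + replies) as a share of views, with bookmarks excluded; this formula is inferred from the numbers on the cards, not documented by the scanner itself. A minimal sketch under that assumption:

```python
def engagement_rate(views, likes, reposts, replies):
    """Engagement rate as a percentage of views.

    Assumed formula: (likes + reposts + replies) / views,
    excluding bookmarks -- inferred from the card figures,
    not confirmed by the scanner.
    """
    if views == 0:
        return 0.0
    return round(100 * (likes + reposts + replies) / views, 1)

# First card above: 2,012 views, 16 likes, 5 reposts, 0 replies
print(engagement_rate(2012, 16, 5, 0))  # matches the card's 1.0% eng
```

The same formula reproduces the other cards' figures (e.g. 5 likes on 322 views gives 1.6%), which is why bookmarks look excluded from the numerator.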
model release @HuggingPapers
7/10
Tencent's Hunyuan Embodied Model Released
Tencent has released Hunyuan Embodied, a 2B parameter vision-language model that reportedly outperforms larger competitors on specific benchmarks. This could be relevant for engineers interested in cutting-edge model performance in spatial reasoning.
Tencent just released Hunyuan Embodied on Hugging Face. A 2B parameter vision-language model that outperforms 4B and 7B competitors on spatial reasoning and embodied understanding benchmarks.
👁 322 views ❤ 5 🔁 0 💬 0 🔖 3 1.6% eng Actionable
AI, model release, vision-language, Tencent, Hunyuan
model release @chrismaconi
7/10
Zhipu AI's GLM 5.1 Beats GPT-5.4 on Benchmarks
Zhipu AI has released GLM 5.1, an open-source model that outperforms GPT-5.4 on coding benchmarks and comes within 5% of Claude. This could indicate a shift in competitive capabilities across the AI model landscape.
Something happened yesterday that every founder building an AI product needs to understand. Even if you're not technical. A Chinese company called Zhipu AI released GLM 5.1. It's an open-source AI model that just beat GPT-5.4 on coding benchmarks. And it gets within 5% of Claude
👁 91 views ❤ 2 🔁 0 💬 0 🔖 0 2.2% eng
AI, open source, model release, Zhipu AI, GLM 5.1
model release @ModelScope2022
7/10
MinerU2.5-Pro Model Launch
MinerU2.5-Pro is a new 1.2B model that achieves state-of-the-art performance on the OmniDocBench v1.6 benchmark for PDF-to-Markdown parsing, outperforming several larger models. The gain is attributed almost entirely to scaling the training data, which may interest engineers focused on model training and performance optimization.
MinerU2.5-Pro is here. SOTA on OmniDocBench v1.6 (95.69) for PDF-to-Markdown parsing. A 1.2B model that outperforms Gemini 3 Pro, Qwen3-VL-235B, GLM-OCR, and PaddleOCR-VL-1.5. The entire leap from 92.98 to 95.69 came from data: 65.5M training pages (up from <10M)…
👁 2,534 views ❤ 49 🔁 5 💬 0 🔖 27 2.1% eng Actionable
AI, model release, benchmark, PDF parsing, training data
model release @off_thetarget
7/10
Gemma 4 Stabilized on llama.cpp
Gemma 4 has been stabilized on llama.cpp after initial bugs, featuring various model configurations. Senior engineers may find the performance benchmarks noteworthy, especially the ranking of the 31B model on Arena AI.
Gemma 4 is finally stable on llama.cpp. On April 2nd, Google released Gemma 4, and it had llama.cpp support on day one, but with lots of bugs. Now all issues have been fixed. E2B, E4B, 26B MoE, 31B Dense. 31B ranks #3 on Arena AI, 26B ranks #6. The strongest tier of open-source…
👁 824 views ❤ 5 🔁 0 💬 3 🔖 4 1.0% eng Actionable
Gemma 4, llama.cpp, AI models, open source, performance benchmarks
model release @support_huihui
7/10
Huihui-gemma-4-31B-it-abliteratedv2 Model Release
A new version of the Huihui-gemma model shows lower perplexity than the original, suggesting a quality improvement. This release may interest engineers looking for better-performing models in their AI systems.
An absolutely unexpected result: tested with llama-perplexity, the ablated version actually has a lower PPL than the original model. The smaller the PPL value, the higher the model quality. We will upload the Huihui-gemma-4-31B-it-abliteratedv2 version, with fewer warnings and…
👁 2,089 views ❤ 48 🔁 3 💬 4 🔖 13 2.6% eng Actionable
AI, model release, perplexity, Huihui-gemma, machine learning
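The card above leans on llama.cpp's perplexity metric. As a reminder of why "smaller PPL means higher quality": perplexity is the exponentiated average negative log-probability a model assigns to the evaluation text, so a model that is more confident about each token scores lower. A minimal sketch (the log-probabilities here are made-up illustrative values, not from any Gemma run):

```python
import math

def perplexity(token_logprobs):
    """Perplexity from per-token natural-log probabilities:
    PPL = exp(-mean(log p)). Lower PPL means the model assigns
    higher probability to the text, i.e. a better fit.
    """
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# A model assigning p = 0.5 to every token (PPL 2) beats one
# assigning p = 0.25 to every token (PPL 4):
confident = [math.log(0.5)] * 4
unsure = [math.log(0.25)] * 4
print(perplexity(confident), perplexity(unsure))
```

This is the same quantity llama-perplexity reports over a test corpus, which is why the ablated model's lower PPL is read as a quality win.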
model release @PrasVector
7/10
MiniMax M2.7 Open Weights Released
MiniMax has released M2.7 as open weights on Hugging Face, achieving notable benchmarks on SWE-Pro and Terminal Bench 2. While the model's focus on agent workflows and self-evolution is interesting, its practical impact and adoption remain to be seen.
Awesome news! MiniMax just dropped M2.7 as open weights on Hugging Face. With 56.22% on SWE-Pro and 57.0% on Terminal Bench 2, plus its focus on agent workflows, tool use, and self-evolution through iteration, this looks like one of the strongest openly available models right…
👁 47 views ❤ 2 🔁 0 💬 0 🔖 0 4.3% eng Actionable
AI, open weights, MiniMax, Hugging Face, model release
model release @WesRoth
7/10
MiniMax AI Releases Open-Source MiniMax M2.7 Model
MiniMax AI has open-sourced its foundation model MiniMax M2.7, providing weights for autonomous coding tasks. Senior engineers may find the state-of-the-art performance claims relevant for evaluating new tools in software engineering.
MiniMax AI open-sourced its latest foundation model, MiniMax M2.7, making the weights immediately available to the global developer community via Hugging Face. The release claims state-of-the-art (SOTA) performance in highly rigorous, autonomous coding and software engineering…
👁 1,424 views ❤ 14 🔁 3 💬 3 🔖 0 1.4% eng Actionable
open source, AI model, software engineering, MiniMax, Hugging Face
model release @achalllll
7/10
LLaMA Model Sizes and Performance Insights
The LLaMA model family includes various sizes, with the 13B model showing competitive performance against larger models. This highlights the potential of smaller models in the evolving landscape of open-source LLMs.
> LLaMA comes in different sizes: 7B, 13B, 33B, 65B; even the 13B model can compete with much larger models
> decoder-based transformer
> sparked the open-source LLM revolution
👁 0 views ❤ 0 🔁 0 💬 0 🔖 0 0.0% eng
LLaMA, open-source, LLM, AI models, Meta
model release @HuggingPapers
7/10
Microsoft's Skala for Density Functional Theory
Microsoft has released Skala, a neural network exchange-correlation functional that achieves chemical accuracy comparable to hybrid functionals at a semi-local cost. This could be relevant for engineers working on computational chemistry applications.
Microsoft just released Skala on Hugging Face. A neural network exchange-correlation functional for density functional theory that achieves chemical accuracy on par with hybrid functionals at semi-local cost.
👁 1,043 views ❤ 15 🔁 4 💬 0 🔖 2 1.8% eng Actionable
AI, Microsoft, Skala, density functional theory, neural networks