AI Twitter Scanner

High-signal AI posts from X, classified and scored

Dates: 2026-04-09 – 2026-04-11 | Total scanned: 16 | Above threshold: 16 | Showing: 2
model release @ModelScope2022
7/10
MinerU2.5-Pro Model Launch
MinerU2.5-Pro is a new 1.2B-parameter model that achieves state-of-the-art performance on the OmniDocBench v1.6 benchmark for PDF-to-Markdown parsing, outperforming several much larger models. The gain is attributed almost entirely to scaling the training data (65.5M pages, up from under 10M), which may interest engineers focused on model training and performance optimization.
MinerU2.5-Pro is here. SOTA on OmniDocBench v1.6 (95.69), PDF to Markdown parsing. A 1.2B model that outperforms Gemini 3 Pro, Qwen3-VL-235B, GLM-OCR, and PaddleOCR-VL-1.5. The entire leap from 92.98 to 95.69 came from data: 65.5M training pages (up from <10M), …
👁 2,534 views ❤ 49 🔁 5 💬 0 🔖 27 2.1% eng Actionable
AI · model release · benchmark · PDF parsing · training data
model release @off_thetarget
7/10
Gemma 4 Stabilized on llama.cpp
Gemma 4 has been stabilized on llama.cpp after initial day-one bugs, across its E2B, E4B, 26B MoE, and 31B Dense configurations. Senior engineers may find the benchmark results noteworthy, especially the 31B model's #3 ranking on Arena AI.
Gemma 4 is finally stable on llama.cpp. On April 2nd, Google released Gemma 4, and it had llama.cpp support on day one, but with lots of bugs. Now all issues have been fixed. E2B, E4B, 26B MoE, 31B Dense. 31B ranks #3 on Arena AI, 26B ranks #6. The strongest tier of open-source …
👁 824 views ❤ 5 🔁 0 💬 3 🔖 4 1.0% eng Actionable
Gemma 4 · llama.cpp · AI models · open source · performance benchmarks
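The "eng" figures on the cards above are consistent with engagement computed as (likes + retweets + replies) ÷ views, with bookmarks excluded. A minimal sketch, assuming that formula (the scanner's actual scoring code is not shown here):

```python
def engagement_rate(views: int, likes: int, retweets: int, replies: int) -> float:
    """Engagement rate as a percentage of views.

    Assumed formula: (likes + retweets + replies) / views * 100,
    excluding bookmarks, since that reproduces both cards' figures.
    """
    if views == 0:
        return 0.0
    return (likes + retweets + replies) / views * 100

# MinerU2.5-Pro card: 2,534 views, 49 likes, 5 retweets, 0 replies
print(round(engagement_rate(2534, 49, 5, 0), 1))  # → 2.1

# Gemma 4 card: 824 views, 5 likes, 0 retweets, 3 replies
print(round(engagement_rate(824, 5, 0, 3), 1))  # → 1.0
```

Both results match the displayed "2.1% eng" and "1.0% eng" values; including the bookmark counts (27 and 4) would not.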