AI Twitter Scanner

High-signal AI posts from X, classified and scored

Dates: 2026-04-09 · 2026-04-10 · 2026-04-11
Total scanned: 16 · Above threshold: 16 · Showing: 4
model release @ModelScope2022
7/10
MinerU2.5-Pro Model Launch
MinerU2.5-Pro is a new 1.2B-parameter model that achieves state-of-the-art performance on the OmniDocBench v1.6 benchmark for PDF-to-Markdown parsing, outperforming several much larger models. The leap is attributed almost entirely to a substantial increase in training data, which should interest engineers focused on model training and performance optimization.
MinerU2.5-Pro is here. SOTA on OmniDocBench v1.6 (95.69), PDF to Markdown parsing. A 1.2B model that outperforms Gemini 3 Pro, Qwen3-VL-235B, GLM-OCR, and PaddleOCR-VL-1.5. The entire leap from 92.98 to 95.69 came from data: 65.5M training pages (up from <10M),
👁 2,534 views ❤ 49 🔁 5 💬 0 🔖 27 2.1% eng Actionable
AI · model release · benchmark · PDF parsing · training data
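The per-card "eng" figure appears to be interactions (likes + reposts + replies) divided by views; the scanner does not document its formula, so the sketch below is an inference checked against the listed numbers:

```python
def engagement_rate(views: int, likes: int, reposts: int, replies: int) -> float:
    """Interactions per view, as a percentage rounded to one decimal place."""
    return round(100 * (likes + reposts + replies) / views, 1)

# Numbers from the MinerU2.5-Pro card above: 2,534 views, 49 likes, 5 reposts, 0 replies
print(engagement_rate(views=2534, likes=49, reposts=5, replies=0))  # → 2.1
```

Notably, bookmarks (🔖) do not seem to be counted: including the 27 bookmarks here would give roughly 3.2% rather than the listed 2.1%.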
infrastructure @elvissun
7/10
Optimizing Vercel Build Minutes
The tweet describes a practical way to cut Vercel build minutes: build locally before pushing so the turbo cache lets Vercel skip the build entirely, yielding significant cost savings. Senior engineers would find this relevant for optimizing CI/CD workflows.
if you have multiple agents opening PRs, each one triggers a full build. that's why I've been paying @vercel $150/mo in build minutes the past 2 months lol. the fix: build locally before push → turbo cache → vercel skips the build entirely. 78% fewer build minutes. 5x
👁 638 views ❤ 7 🔁 0 💬 3 🔖 4 1.6% eng Actionable
Vercel · CI/CD · build optimization · turbo cache · infrastructure
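The savings arithmetic in the post can be sanity-checked; the dollar figure below is inferred from the quoted $150/mo spend and 78% reduction, under the simplifying assumption that spend scales linearly with build minutes:

```python
monthly_build_spend = 150.0  # reported Vercel build-minute bill per month
minute_reduction = 0.78      # reported drop in build minutes after local builds + turbo cache

# Assumes billing scales linearly with build minutes (a simplification, not a Vercel guarantee)
estimated_savings = monthly_build_spend * minute_reduction
print(f"~${estimated_savings:.0f}/mo saved")  # → ~$117/mo saved
```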
model release @off_thetarget
7/10
Gemma 4 Stabilized on llama.cpp
Gemma 4 is now stable on llama.cpp after its day-one support shipped with numerous bugs; the release spans E2B, E4B, 26B MoE, and 31B dense configurations. Senior engineers may find the Arena AI rankings noteworthy, with the 31B model at #3 and the 26B at #6.
Gemma 4 is finally stable on llama.cpp On April 2nd, Google released Gemma 4, and it had llama.cpp support on day one but with lots of bugs. Now all issues have been fixed E2B, E4B, 26B MoE, 31B Dense 31B ranks #3 on Arena AI, 26B ranks #6 The strongest tier of open-source
👁 824 views ❤ 5 🔁 0 💬 3 🔖 4 1.0% eng Actionable
Gemma 4 · llama.cpp · AI models · open source · performance benchmarks
market signal @ai_for_success
7/10
Benchmark for AI Agents in Tax Workflows
A new benchmark tests AI agents on real tax workflows; GPT-5.4 leads at just 28%, underscoring how much all models struggle with high-stakes, multi-step tasks. This insight could inform future model development and evaluation criteria.
We finally have a benchmark that tests AI agents on real tax workflows. GPT-5.4 is leading at 28% but all models still suck on high-stakes, multi-step tasks. New model cards should have benchmarks like this in future.
👁 1,513 views ❤ 12 🔁 0 💬 2 🔖 2 0.9% eng
AI · benchmark · tax workflows · GPT-5.4 · model evaluation