AI Twitter Scanner

High-signal AI posts from X, classified and scored

Date: 2026-04-09 | Total scanned: 16 | Above threshold: 16 | Showing: 2
market signal @petergyang
7/10
Chinese AI Models Gaining Traction in Silicon Valley
The tweet highlights the adoption of Chinese open-source AI models by notable companies such as Cursor and Cognition, signaling a shift in the AI landscape. Senior engineers should note the implications of this trend for competition and innovation in AI infrastructure.
Silicon Valley is quietly running on Chinese open source AI models. Here are the receipts:
→ Cursor confirmed last month that Composer 2 is built on Moonshot's Kimi K2.5
→ Cognition's SWE-1.6 model is likely post-trained on Zhipu's GLM
→ Shopify saved $5M a year by …
👁 9,371 views · ❤ 48 · 🔁 5 · 💬 13 · 🔖 23 · 0.7% engagement
AI · open source · Silicon Valley · Chinese models · market trends
infrastructure @PawelHuryn
7/10
Gemma 4's KV Cache Architecture Explained
The tweet explains Gemma 4's use of shared KV cache layers, which lets the model fit on a laptop but breaks cache reuse in llama.cpp. This architectural trade-off is relevant for engineers designing efficient AI inference systems.
There is a catch nobody is talking about. Gemma 4 uses shared KV cache layers - the last layers reuse K/V tensors from earlier layers instead of computing their own. That is why it fits on a laptop. But that same architecture breaks cache reuse in llama.cpp. Every request …
👁 5,927 views · ❤ 33 · 🔁 9 · 💬 10 · 🔖 39 · 0.9% engagement
AI · infrastructure · cache · Gemma 4 · llama.cpp
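The shared-KV-cache idea in this card can be sketched in a few lines. This is a toy illustration under assumed layer counts and a hypothetical sharing map (later layer i aliasing layer i − N); it is not Gemma 4's actual implementation:

```python
# Toy sketch of "shared KV cache layers": the last layers reuse K/V tensors
# from earlier layers instead of computing and storing their own. All sizes,
# layer counts, and the sharing map below are illustrative assumptions,
# not Gemma 4's real configuration.

NUM_LAYERS = 8     # total transformer layers (toy value)
NUM_OWNERS = 4     # only these early layers own a KV cache entry

# One (K, V) buffer pair per owning layer; later layers alias these,
# so the cache holds half as many tensors as a conventional model.
kv_cache = {i: ([0.0] * 64, [0.0] * 64) for i in range(NUM_OWNERS)}

def kv_for_layer(layer: int):
    """Map a layer index to the cache entry it reads.

    Layers >= NUM_OWNERS reuse the buffers of layer (layer - NUM_OWNERS);
    they never allocate K/V storage of their own.
    """
    owner = layer if layer < NUM_OWNERS else layer - NUM_OWNERS
    return kv_cache[owner]

# Layer 6 reads the exact same buffers as layer 2 - no extra memory.
print(kv_for_layer(6) is kv_for_layer(2))  # True
```

Because the later layers have no K/V tensors of their own, any runtime that keys its prompt cache per layer must account for this aliasing, which is the kind of reuse complication the tweet flags for llama.cpp.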