Meta has released the first model from its Superintelligence Labs, which may signal a shift in the company's AI strategy. Senior engineers should evaluate its capabilities and potential integration into existing systems.
Top stories in AI today:
- Meta Superintelligence Labs ships first model
- HeyGen's Avatar V solves AI's identity drift
- Build an automated ad generator with this tool
- Anthropic simplifies the agent-building system
- 4 new AI tools, community workflows, and more
This tweet presents a comparative benchmark of three AI models on a Raspberry Pi 5, highlighting performance differences in cold-start, sustained throughput, and RAM usage. Senior engineers may find the insights useful for selecting the right model for edge deployment.
72 hours of LiteRT-LM vs Ollama vs llama.cpp on a Pi 5 8GB ($160 board, post-DRAM-hike pricing).
Clean result:
- LiteRT-LM wins cold-start by ~30%
- Ollama wins sustained tok/s
- llama.cpp still holds the RAM headroom past 4k context
No single "best edge runtime" on Pi. Pick …
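A comparison like the one above can be reproduced with a small timing harness. The sketch below is an assumption about how such a benchmark might be run, not the author's actual methodology: it times one cold-start invocation of whatever CLI wraps your runtime (the command list is hypothetical and must be adjusted per tool) and uses whitespace-split words as a rough proxy for tokens.

```python
import subprocess
import time


def bench(cmd: list[str], prompt: str) -> dict:
    """Time one cold-start run of a local LLM CLI and estimate throughput.

    `cmd` is whatever invokes your runtime (hypothetical; adjust per tool,
    e.g. an ollama or llama.cpp command line). Returns wall-clock cold-start
    time and an approximate tokens/sec figure.
    """
    t0 = time.perf_counter()
    result = subprocess.run(cmd + [prompt], capture_output=True, text=True)
    elapsed = time.perf_counter() - t0
    # Rough proxy: whitespace-separated words in stdout per second of wall time.
    tokens = len(result.stdout.split())
    return {
        "cold_start_s": elapsed,
        "approx_tok_s": tokens / elapsed if elapsed > 0 else 0.0,
    }
```

Running this several times per runtime and discarding the first warm run would separate cold-start cost from sustained throughput, which is the distinction the thread draws between LiteRT-LM and Ollama.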
The tweet discusses Aave's transition plan to shift risk management to decentralized infrastructure, highlighting a significant move in DeFi. Senior engineers should note the implications for on-chain finance and risk management systems.
If you believe global finance belongs onchain, you cannot rely on centralized, off-chain risk silos.
@LlamaRisk's transition plan for Aave shifts risk management to neutral, trusted infrastructure.
DeFi will win with @aave V4.
The tweet highlights the adoption of Chinese open source AI models by notable companies like Cursor and Cognition, indicating a shift in the AI landscape. Senior engineers should note the implications of this trend on competition and innovation in AI infrastructure.
Silicon Valley is quietly running on Chinese open source AI models.
Here are the receipts:
- Cursor confirmed last month that Composer 2 is built on Moonshot's Kimi K2.5
- Cognition's SWE-1.6 model is likely post-trained on Zhipu's GLM
- Shopify saved $5M a year by …
Gemma 4 demonstrates impressive efficiency, matching the benchmark performance of Llama 3.1 405B with just 27B parameters. This result highlights the trend toward more efficient models that don't require massive infrastructure.
Gemma 4's a beast hitting Llama 3.1 405B levels on benchmarks with just 27B params.
That's efficiency on steroids, no data center apocalypse required.
Open source winning big.
KellyBench tested frontier AI models in a simulated betting market, revealing that every model lost money, with varying degrees of negative ROI. This highlights the limitations of current AI models in adversarial real-world settings, which is worth noting for engineers deploying them in decision-making roles.
Interesting new benchmark called KellyBench which put frontier models in a simulated Premier League betting market for a full season. Every model lost money.
- Claude Opus 4.6: -11% mean ROI, avoided ruin
- GPT-5.4: -13.6% mean ROI, avoided ruin
- Grok 4.20: -88.2% ROI, went …
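The benchmark's name points at the Kelly criterion, the standard formula for sizing bets under a known edge. As a refresher (this is the textbook formula, not anything from the benchmark itself), the Kelly-optimal fraction of bankroll to stake is f* = (bp - q)/b, where p is the win probability, q = 1 - p, and b is the net decimal odds:

```python
def kelly_fraction(p: float, b: float) -> float:
    """Kelly-optimal fraction of bankroll to bet.

    p: estimated probability of winning
    b: net odds received on a win (e.g. b = 1.0 for an even-money bet)
    A negative result means there is no edge and the bet should be skipped.
    """
    q = 1.0 - p
    return (b * p - q) / b


# Example: a 55% win probability at even odds says to stake 10% of bankroll.
print(kelly_fraction(0.55, 1.0))  # 0.10000000000000003
```

The "avoided ruin" vs "went [bust]" split above is exactly what Kelly sizing is meant to control: a model that overestimates p or ignores the fraction entirely over-bets and ruins itself even when individual calls are reasonable.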
The announcement of MCP as a universal standard for AI agents indicates a significant shift in open-source AI, potentially impacting how AI systems are built and integrated. Senior engineers should monitor this trend as it may influence future infrastructure and development practices.
AI Agents just became the fastest-growing category in all of open-source AI.
Here's why this matters, and why 2026 is the year of the agent:
MCP (Model Context Protocol) changed everything.
Anthropic open-sourced MCP in late 2024 as a universal standard for connecting AI …
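Concretely, MCP messages are JSON-RPC 2.0, and a client asks a server to run a tool via a `tools/call` request. The sketch below builds such a request by hand to show the wire shape; the tool name and arguments are hypothetical, and real clients would use an MCP SDK rather than raw JSON.

```python
import json


def mcp_tool_call(call_id: int, tool: str, arguments: dict) -> str:
    """Build a minimal MCP tools/call request (JSON-RPC 2.0 envelope).

    `tool` and `arguments` are illustrative placeholders, not a real server's
    schema; a production client would discover tools via tools/list first.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })


msg = mcp_tool_call(1, "search_docs", {"query": "edge runtimes"})
print(msg)
```

Because every server speaks this same envelope, one agent can attach to any number of data sources without per-integration glue code, which is why the protocol is driving the agent growth described above.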