Stanford research shows that leading AI models like GPT-5 and Google Gemini maintain high accuracy on visual benchmarks even when the images are removed, exposing a significant flaw in how AI vision is evaluated. The finding could prompt engineers to reassess model reliability in real-world applications.
Holy shit… Stanford University just exposed a massive flaw in AI vision.
GPT-5, Google Gemini, and Claude scored 70–80% accuracy… with no images at all.
They call it the "mirage effect":
→ Researchers removed images from 6 major benchmarks
→ Models kept answering like
932 views · 10 likes · 6 reposts · 3 replies · 2 bookmarks · 2.0% engagement
Tags: AI research · vision systems · Stanford · GPT-5 · Google Gemini
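The ablation described above (remove the images, re-run the benchmark, compare accuracy) can be sketched in a few lines. Everything here is a hypothetical stand-in, not Stanford's actual harness: the `items` format, the `query_model` callable, and the toy text-only model are all assumptions for illustration.

```python
# Sketch of a text-only ablation: score a multimodal benchmark twice,
# once with images and once with them stripped, and compare accuracy.

def accuracy(items, query_model, use_images=True):
    """Fraction of benchmark items the model answers correctly."""
    correct = 0
    for item in items:
        image = item["image"] if use_images else None  # ablation: drop the image
        answer = query_model(item["question"], image)
        correct += answer == item["gold"]
    return correct / len(items)

# Toy stand-in model that ignores the image entirely -- the behavior the
# "mirage effect" finding suggests real models often get away with.
def text_only_model(question, image):
    return "B" if "not" in question else "A"

items = [
    {"question": "What color is the car?", "image": "img1.png", "gold": "A"},
    {"question": "Is the door not open?", "image": "img2.png", "gold": "B"},
]

with_images = accuracy(items, text_only_model, use_images=True)
without_images = accuracy(items, text_only_model, use_images=False)
print(with_images, without_images)  # prints 1.0 1.0
```

A small gap between the two scores is the "mirage": the benchmark can be passed without ever looking at the image, so it is not actually measuring vision.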
Anthropic's reduction of Claude Code's cache TTL from 1 hour to 5 minutes has led to increased quota usage and costs. The change could affect developers who rely on prompt caching for cost management and performance optimization.
It looks like Anthropic changed claude code's cache TTL from 1h to 5m in March, causing significant quota and cost inflation.
8,766 views · 84 likes · 11 reposts · 8 replies · 3 bookmarks · 1.2% engagement
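Why a shorter TTL inflates cost can be shown with back-of-envelope arithmetic. The pricing multipliers below are assumptions for illustration only (cache writes typically cost more than plain input, cache reads a fraction of it); they are not Anthropic's actual rates.

```python
# Back-of-envelope sketch of TTL-driven cost inflation for a cached prompt
# prefix. All multipliers are illustrative assumptions, not real pricing.

base_input_cost = 1.0    # relative cost per token of uncached input
cache_write_mult = 1.25  # assumed surcharge to write the cache
cache_read_mult = 0.10   # assumed discount when the cache hits

def session_cost(prefix_tokens, n_requests, gap_minutes, ttl_minutes):
    """Relative prefix cost for n_requests spaced gap_minutes apart."""
    cost = prefix_tokens * base_input_cost * cache_write_mult  # first write
    for _ in range(n_requests - 1):
        if gap_minutes <= ttl_minutes:
            cost += prefix_tokens * base_input_cost * cache_read_mult  # hit
        else:
            cost += prefix_tokens * base_input_cost * cache_write_mult  # expired: re-write
    return cost

# 10 requests, 10 minutes apart, on a 100k-token cached prefix:
long_ttl = session_cost(100_000, 10, gap_minutes=10, ttl_minutes=60)
short_ttl = session_cost(100_000, 10, gap_minutes=10, ttl_minutes=5)
print(short_ttl / long_ttl)  # inflation factor, ~5.8 under these assumptions
```

The point of the sketch: requests arriving slower than the TTL never hit the cache, so every call pays the full (surcharged) write price again, which is the "quota and cost inflation" the post describes.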
MiniMax AI has open-sourced its foundation model MiniMax M2.7, providing weights for autonomous coding tasks. Senior engineers may find the state-of-the-art performance claims relevant for evaluating new tools in software engineering.
MiniMax AI open-sourced its latest foundation model, MiniMax M2.7, making the weights immediately available to the global developer community via Hugging Face.
The release claims state-of-the-art (SOTA) performance in highly rigorous, autonomous coding and software engineering
Benchmark results indicate that Claude Opus 4.5 is outperforming its successor, 4.6, in terms of hallucination rates. This raises questions about the effectiveness of the latest model and could influence future development decisions.
Claude Opus 4.5 is now OUTPERFORMING Claude Opus 4.6 on BridgeBench Hallucination.
Read that again.
The legacy model is beating the current flagship.
We benchmarked Opus 4.5 this morning to confirm what we saw yesterday.
Claude Opus 4.6 fell from #2 to #10 with a 98%
36,211 views · 599 likes · 69 reposts · 58 replies · 84 bookmarks · 2.0% engagement
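A hallucination leaderboard like the one described above can be sketched minimally: score each model by the fraction of its claims not supported by a reference (lower is better), then rank. The scoring scheme and all numbers here are made up for illustration; BridgeBench's actual methodology is not public in this post.

```python
# Minimal sketch of ranking models by hallucination rate (lower is better).

def hallucination_rate(claims, supported):
    """Share of a model's claims flagged as unsupported (hallucinated)."""
    return sum(not ok for ok in supported) / len(claims)

# Made-up per-claim support judgments for two hypothetical models.
results = {
    "model-a": hallucination_rate(["c1", "c2", "c3", "c4"], [True, True, True, False]),
    "model-b": hallucination_rate(["c1", "c2", "c3", "c4"], [True, False, False, False]),
}

leaderboard = sorted(results, key=results.get)  # lowest rate ranks first
print(leaderboard)  # prints ['model-a', 'model-b']
```

On such a metric a newer model can genuinely rank below an older one, which is what the rank drop in the post is claiming.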
Grok 4.20 has achieved the top position on the BridgeBench Reasoning benchmark, outperforming GPT 5.4 and Claude Opus 4.6. This indicates a significant advancement in reasoning capabilities, which may influence future AI model development.
Grok 4.20 Reasoning just took #1 on the new BridgeBench Reasoning benchmark.
Beating GPT 5.4 and Claude Opus 4.6.
This model keeps climbing every single week.
Hallucination #1.
Now Reasoning #1.
While Anthropic is throwing 500 errors, xAI is quietly building the most
Grok 4.20 has achieved the top ranking on BridgeBench, surpassing other models like GPT-5.4 and Claude Opus 4.6. This benchmark may indicate a shift in competitive performance among AI models, which could influence future development decisions.
Grok 4.20 takes the #1 spot on BridgeBench
Outperforming GPT-5.4, Claude Opus 4.6, and Gemini.
It just keeps climbing
Announcement of a research presentation on AI's role in security, specifically focusing on a project called 'HTTP Terminator.' Senior engineers may find the insights relevant for understanding AI's application in security contexts.
I'm thrilled to announce "Can AI Do Novel Security Research? Meet the HTTP Terminator" will premiere at
@BlackHatEvents
#BHUSA! Check out the abstract:
8,260 views · 181 likes · 32 reposts · 8 replies · 55 bookmarks · 2.7% engagement