Stanford researchers report that leading AI models like GPT-5 and Google Gemini keep high accuracy on vision benchmarks even when the images are removed, exposing a significant flaw in how these benchmarks measure AI vision. This finding could prompt engineers to reassess model reliability in real-world applications.
Holy shit… Stanford University just exposed a massive flaw in AI vision.
GPT-5, Google Gemini, and Claude scored 70–80% accuracy… with no images at all.
They call it the "mirage effect":
→ Researchers removed images from 6 major benchmarks
→ Models kept answering like
932 views · 10 likes · 6 reposts · 3 replies · 2 bookmarks · 2.0% eng
AI research · vision systems · Stanford · GPT-5 · Google Gemini
A researcher has built a tool that reverse-engineers Google's SynthID watermark in Gemini-generated images, detecting it with 90% accuracy and removing it with minimal quality loss. This finding could have implications for the robustness of watermarking in AI-generated content.
One researcher beat Google's watermark with a math trick.
So Google puts an invisible watermark in every image Gemini generates.
They call it SynthID.
And this researcher figured out exactly how it works and built a tool to remove it.
90% detection accuracy. 43+ dB image
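The claim above rests on a general pattern: a watermark embedded as a small, structured perturbation can be stripped once you model it, with almost no visible damage. The sketch below is NOT SynthID's actual scheme (Google has not published its details); it uses a least-significant-bit mark purely to illustrate why removal can leave quality above 43 dB PSNR.

```python
import math

def embed(pixels: list[int], bits: list[int]) -> list[int]:
    """Hide one watermark bit in the least-significant bit of each pixel."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def strip(pixels: list[int]) -> list[int]:
    """Remove the mark by zeroing every LSB -- at most a +/-1 change per pixel."""
    return [p & ~1 for p in pixels]

def psnr(a: list[int], b: list[int], peak: int = 255) -> float:
    """Peak signal-to-noise ratio in dB; higher means closer to the original."""
    mse = sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

original = [120, 121, 119, 200, 54, 37, 88, 90]   # toy 8-pixel "image"
marked = embed(original, [1, 0, 1, 1, 0, 1, 0, 1])
cleaned = strip(marked)
print(round(psnr(original, cleaned), 1))  # well above the 43 dB cited above
```

Against a real perceptual watermark the attacker must first estimate the embedding pattern (the "math trick" the tweet alludes to), but the quality arithmetic is the same: tiny per-pixel edits keep PSNR high.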
Announcement of a research presentation on AI's role in security, specifically focusing on a project called 'HTTP Terminator.' Senior engineers may find the insights relevant for understanding AI's application in security contexts.
I'm thrilled to announce "Can AI Do Novel Security Research? Meet the HTTP Terminator" will premiere at @BlackHatEvents #BHUSA! Check out the abstract:
8,260 views · 181 likes · 32 reposts · 8 replies · 55 bookmarks · 2.7% eng