DeepMind introduces Elastic Looped Transformers, a novel architecture that reuses weights for visual generation, achieving state-of-the-art quality with fewer layers. This could influence future model designs and efficiency in AI systems.
Google DeepMind just dropped Elastic Looped Transformers, a recurrent engine that reuses weights to dominate visual generation.
It forces data through the same parameters over and over to hit SOTA quality with 4x fewer layers. By using self-distillation, this loop achieves any-time inference, trading compute for quality from a single training run.
This tweet discusses a significant achievement claimed for GPT-5.4: proving the Mertens conjecture in 80 minutes using von Mangoldt weights, which yields a clean probabilistic interpretation. Senior engineers may find the novel application of AI to mathematical proofs intriguing.
It took GPT-5.4 80 minutes to prove the conjecture.
It replaces the Mertens product with von Mangoldt weights (Λ(n)).
This allows a very clean, indirect probabilistic interpretation via the fundamental identity:
Simply elegant.
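The original tweet breaks off before showing the identity. As background only (which specific identity the author means is not shown in this excerpt, so this is an assumption), the classical relations that make von Mangoldt weights useful are:

\[
\Lambda(n) =
\begin{cases}
\log p, & n = p^{k},\ p \text{ prime},\ k \ge 1,\\
0, & \text{otherwise},
\end{cases}
\qquad
\sum_{d \mid n} \Lambda(d) = \log n,
\]

which is what lets sums over primes be rewritten as weighted sums over all integers.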
0 views · 0 likes · 0 reposts · 0 replies · 0 bookmarks · 0.0% engagement
GPT-5.4 · Mertens conjecture · von Mangoldt · probabilistic interpretation · AI research
The tweet introduces Elastic Looped Transformers, which utilize recurrent weight-sharing and self-distillation to significantly reduce parameters while enabling dynamic inference. This could be of interest to engineers looking for innovative approaches to model efficiency and inference optimization.
ELT: Elastic Looped Transformers for efficient visual generation
Uses recurrent weight-shared blocks and Intra-Loop Self Distillation to reduce parameters by 4×.
Enables Any-Time inference with dynamic compute-quality trade-offs from a single training run.
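The paper itself isn't excerpted here, so what follows is only a minimal sketch, in PyTorch, of the general idea the tweet describes: one block whose weights are reused across loop iterations, with intermediate outputs exposed so later ones can supervise earlier ones (one plausible reading of Intra-Loop Self Distillation) and so inference can stop after any loop. All names and sizes are illustrative, not from the paper.

import torch
import torch.nn as nn

class LoopedBlock(nn.Module):
    # One transformer block; its weights are reused on every loop iteration.
    def __init__(self, dim: int, heads: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x):
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        return x + self.mlp(self.norm2(x))

class LoopedTransformer(nn.Module):
    # Looping one shared block max_loops times replaces a stack of max_loops
    # distinct layers, which is where the parameter reduction comes from.
    def __init__(self, dim=512, heads=8, max_loops=12):
        super().__init__()
        self.block = LoopedBlock(dim, heads)
        self.readout = nn.LayerNorm(dim)
        self.max_loops = max_loops

    def forward(self, x, num_loops=None):
        # num_loops < max_loops is the any-time knob: fewer loops, less compute, lower quality.
        outputs = []
        for _ in range(num_loops or self.max_loops):
            x = self.block(x)
            # Each intermediate readout can be trained to match the final one
            # (a self-distillation loss between loop iterations).
            outputs.append(self.readout(x))
        return outputs

At inference, calling the model with a smaller num_loops trades quality for compute from the same trained weights, which is how a single training run can serve multiple compute budgets.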
0 views · 0 likes · 0 reposts · 0 replies · 0 bookmarks · 0.0% engagement
transformers · visual generation · model efficiency · self-distillation · AI research
This tweet discusses a new method presented at NLP2026 for resolving notation variations in medical department names using an LLM, achieving a high accuracy rate. Senior engineers may find the approach and results relevant for improving NLP applications in healthcare.
Published a new article on the KAKEHASHI Tech Blog.
We presented at NLP2026 a method that resolves "notation variations" in medical department names using an LLM, achieving a 97.5% accuracy rate with GPT-5. Please take a look.
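The post doesn't show how the method works internally, so this is only a guessed illustration of prompt-based notation normalization with the OpenAI Python client; the model id, canonical list, and prompt are hypothetical and not taken from the KAKEHASHI article.

from openai import OpenAI

# Hypothetical canonical list; the real taxonomy is not given in the post.
CANONICAL_DEPARTMENTS = ["内科", "外科", "整形外科", "耳鼻咽喉科", "皮膚科"]

client = OpenAI()

def normalize_department(raw_name: str) -> str:
    # Map a noisy department-name variant to one canonical entry via an LLM.
    prompt = (
        "Choose the single canonical medical department that matches the input.\n"
        f"Canonical options: {', '.join(CANONICAL_DEPARTMENTS)}\n"
        f"Input: {raw_name}\n"
        "Answer with the canonical name only."
    )
    resp = client.chat.completions.create(
        model="gpt-5",  # placeholder: the post only says GPT-5 was used
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()

For example, normalize_department("耳鼻科") would be expected to return the canonical "耳鼻咽喉科".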
The tweet discusses the need for real protocol security testing in open source telecom innovations, referencing findings from Ella Core. Senior engineers may find the insights valuable for understanding security challenges in telecom infrastructure.
Open source drives telecom innovation. It also needs real protocol security testing. Our latest Ella Core findings are on cve.p1sec.com #TelecomSecurity #OpenSourceSecurity
This analysis reveals that 86% of OpenClaw skills are vulnerable, highlighting a significant gap in secure development practices among developers rather than an influx of malicious actors. Senior engineers should care about the implications for supply chain security and the need for better tooling.
We analyzed 2,354 OpenClaw skills on ClawHub.
86% are vulnerable. 4% are malicious.
The distinction matters. The supply chain isn't overrun with attackers. It's overrun with developers who haven't been given the tools to build securely.
Different problem, different fix.
The tweet discusses a dataset with 24,815 samples and highlights both successes and failures in AI training, emphasizing the importance of failure analysis. Senior engineers may find value in the insights on validation gaps and prompt issues.
6/7
Honestly:
The dataset works: 24,815 samples, proper train/val/test split, published on Hugging Face.
But I also show what failed. Bad prompts, poisoned batches, validation gaps I caught too late. The failure analysis is actually the most valuable part. Iterative failure
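The thread doesn't name the dataset in this excerpt, so the snippet below is only a hedged sketch of the kind of Hugging Face train/val/test split described, with a placeholder repo id and an assumed 80/10/10 ratio.

from datasets import load_dataset

# Placeholder repo id; the actual Hugging Face dataset name is not given in this excerpt.
ds = load_dataset("user/placeholder-dataset", split="train")  # ~24,815 samples per the thread

# 80/10/10 is an assumption; the thread only says "proper train/val/test split".
split = ds.train_test_split(test_size=0.2, seed=42)
holdout = split["test"].train_test_split(test_size=0.5, seed=42)

train, val, test = split["train"], holdout["train"], holdout["test"]
print(len(train), len(val), len(test))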
0 views · 0 likes · 0 reposts · 0 replies · 0 bookmarks · 0.0% engagement
dataset · failure analysis · AI training · validation · Hugging Face
Anthropic's new research explores using a weak AI model to supervise the training of a stronger one, potentially accelerating alignment research. This could have implications for how AI systems are developed and aligned in the future.
New Anthropic Fellows research: developing an Automated Alignment Researcher.
We ran an experiment to learn whether Claude Opus 4.6 could accelerate research on a key alignment problem: using a weak AI model to supervise the training of a stronger one.
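Neither the experiment nor its code is shown in the tweet, so this is a generic sketch of the weak-to-strong setup being described: a weak supervisor labels data and a stronger model is fine-tuned on those labels. Interfaces here are assumed (both models map inputs to class logits); this is not Anthropic's code.

import torch
from torch.utils.data import DataLoader

def weak_to_strong_finetune(weak_model, strong_model, unlabeled_loader: DataLoader,
                            epochs=1, lr=1e-5):
    # Fine-tune strong_model on labels produced by weak_model (weak-to-strong supervision).
    weak_model.eval()
    optimizer = torch.optim.AdamW(strong_model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()

    for _ in range(epochs):
        for inputs in unlabeled_loader:
            with torch.no_grad():
                weak_labels = weak_model(inputs).argmax(dim=-1)  # weak supervisor's guesses
            logits = strong_model(inputs)
            loss = loss_fn(logits, weak_labels)                  # strong model imitates weak labels
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return strong_model

The open question such setups probe is whether the fine-tuned strong model can end up better than its weak supervisor, which is the alignment problem the tweet says Claude Opus 4.6 was asked to accelerate.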
11,980 views · 252 likes · 47 reposts · 21 replies · 88 bookmarks · 2.7% engagement
AI alignment · research · Anthropic · Claude Opus · machine learning