Tencent has introduced DisCa, a method that accelerates video diffusion transformers by 11.8× while maintaining generation quality. This could be relevant for engineers looking to optimize their AI video generation workflows.
Tencent just released DisCa on Hugging Face
A distillation-compatible learnable feature caching method
that accelerates video diffusion transformers by 11.8×
while preserving generation quality.
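The post doesn't describe DisCa's internals, but the general idea behind feature caching in diffusion transformers can be sketched: reuse a block's output from a previous denoising step when its input has barely changed. This is a minimal toy, assuming a fixed similarity threshold where DisCa uses a learned, distillation-compatible gate; all names are illustrative.

```python
# Toy sketch of feature caching across diffusion timesteps.
# DisCa's gating is learned; a fixed cosine-similarity threshold
# stands in for that decision here. Names are hypothetical.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class CachedBlock:
    """Wraps an expensive block; reuses the last output when the input barely changed."""
    def __init__(self, block, threshold=0.999):
        self.block = block
        self.threshold = threshold
        self.last_input = None
        self.last_output = None
        self.calls = 0  # how often the expensive block actually ran

    def __call__(self, x):
        if self.last_input is not None and cosine(x, self.last_input) >= self.threshold:
            return self.last_output  # cache hit: skip the block entirely
        self.calls += 1
        self.last_input = list(x)
        self.last_output = self.block(x)
        return self.last_output

# Toy "transformer block" inside a sampling loop over timesteps.
expensive = lambda x: [v * 0.5 for v in x]
blk = CachedBlock(expensive)
x = [1.0, 2.0, 3.0]
for t in range(10):
    x = [v + 1e-6 * t for v in x]  # inputs drift slowly between steps
    y = blk(x)
```

Because consecutive timestep inputs are nearly identical, the expensive block runs only once across the ten steps; the speedup in a real model comes from skipping exactly this kind of redundant work.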
Tags: Tencent, video diffusion, AI infrastructure, performance optimization, Hugging Face
LLAMMA introduces a novel approach to lending by spreading collateral across price bands, mitigating liquidation risks. This could be of interest to engineers focused on financial infrastructure and risk management in DeFi.
Lending doesn't have to mean "one bad wick = liquidation."
Here's what makes LLAMMA @llamalend different:
Instead of a single liquidation price, collateral is spread across price bands. As price moves down, $ETH is gradually converted into $crvUSD. As it moves back up, the conversion reverses.
This tweet outlines a new approach to video APIs that addresses fragmentation by normalizing parameters and enabling capability discovery. Senior engineers may find the async job-based generation and model-specific passthrough parameters particularly relevant for building robust video processing systems.
Video APIs are fragmented. Providers use different request shapes, parameter names, and billing units. Our approach:
- async job-based generations
- normalized params across models
- capability discovery via /api/v1/videos/models
- passthrough params for model-specific features
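The normalization and passthrough ideas in the list above can be sketched in a few lines. The `/api/v1/videos/models` path comes from the post; the provider names, parameter maps, and job shape below are illustrative assumptions, not the actual API.

```python
# Hypothetical sketch of a normalization layer over fragmented video APIs.
# Provider names and parameter maps are invented for illustration.
PARAM_MAP = {
    # normalized name -> provider-specific name
    "provider_a": {"duration_s": "length", "aspect_ratio": "ar"},
    "provider_b": {"duration_s": "seconds", "aspect_ratio": "ratio"},
}

def build_request(provider, params, passthrough=None):
    """Translate normalized params into a provider payload, async-job style."""
    mapping = PARAM_MAP[provider]
    payload = {mapping.get(k, k): v for k, v in params.items()}
    if passthrough:
        payload.update(passthrough)  # model-specific features pass through untouched
    # async job-based: the caller would POST this and poll the returned job id
    return {"provider": provider, "payload": payload, "status": "queued"}

job = build_request(
    "provider_b",
    {"duration_s": 5, "aspect_ratio": "16:9", "prompt": "a sunset"},
    passthrough={"motion_strength": 0.7},
)
```

Capability discovery (the `/api/v1/videos/models` endpoint) would let a client fetch each model's `PARAM_MAP`-equivalent and supported passthrough keys instead of hardcoding them.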
OpenAI's new Agents SDK allows developers to manage long-running agents with sandbox execution and direct control over memory and state, streamlining what previously required multiple components. This could simplify infrastructure for AI systems, making it relevant for engineers building complex applications.
OpenAI just turned the Agents SDK into a long-running agent runtime with sandbox execution and direct control over memory and state.
Before this, developers often had to stitch together 3 separate pieces themselves: the model loop, the machine where code runs, and the memory or state layer.
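Those three pieces can be made concrete with a minimal stub. This is not the Agents SDK API; every class and function here is hypothetical, with a canned "model" standing in for an LLM call, to show how the loop, sandbox, and memory interlock.

```python
# Minimal stub of the three pieces the post mentions: a model loop,
# an execution sandbox, and a memory/state store. Illustrative only;
# this is NOT the Agents SDK API.
class Memory:
    def __init__(self):
        self.turns = []
    def remember(self, role, content):
        self.turns.append({"role": role, "content": content})

class Sandbox:
    """Stands in for an isolated machine where agent code runs."""
    def execute(self, code):
        scope = {}
        exec(code, scope)  # a real sandbox would isolate this, of course
        return scope.get("result")

def fake_model(history):
    """Canned model: emits code once, then finishes. A real loop calls an LLM."""
    if not any(t["role"] == "tool" for t in history):
        return {"action": "run_code", "code": "result = 6 * 7"}
    return {"action": "finish", "answer": history[-1]["content"]}

def agent_loop(model, sandbox, memory, task):
    memory.remember("user", task)
    while True:
        step = model(memory.turns)
        if step["action"] == "finish":
            return step["answer"]
        out = sandbox.execute(step["code"])
        memory.remember("tool", out)

answer = agent_loop(fake_model, Sandbox(), Memory(), "compute 6*7")
```

The claim in the post is that the runtime now owns all three of these concerns, so developers configure them instead of wiring the loop themselves.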