Today's Summary
Snapchat is rolling out the Imagine Lens for its premium users, putting AI image generation right in your Snaps with just a text prompt. It's a slick move that blends AR and generative AI, but it also stirs up questions about moderation and IP in social media's AI future. Meanwhile, Sierra just scored a hefty $350 million to boost its AI agents for enterprises, showing there's still a big appetite for AI that handles customer service and workflows.
On the legal front, Anthropic's $1.5 billion settlement over pirated training data is a wake-up call for AI firms about using copyrighted material and could change how companies approach data licensing. And if you're into coding, OpenAI's Codex just got a serious upgrade, making it easier to integrate AI help directly into your coding tools, which could be a game-changer for developers looking to speed up their workflow.
Stories
Snapchat's new Imagine Lens brings open-prompt AI image generation to Snaps
Snap introduced Imagine Lens, a new AI-powered Lens that lets Snapchat+ Platinum and Lens+ subscribers generate, edit, and re-create images inside the app using free-form text prompts. The rollout expands Snap's push to combine AR and generative AI on-device and via a mix of in-house and third-party models, making image generation directly accessible inside the camera/Lens experience. Why it matters: the feature deepens Snap's consumer AI product lineup (and its subscription value props), advances mobile-first generative workflows, and raises questions about content moderation, IP, and monetization as social apps bake generative features into core experiences.
Sierra raises $350M at a $10B valuation to scale enterprise AI agents
Sierra, the enterprise AI agent startup co-founded by Bret Taylor and Clay Bavor, closed a $350 million financing round led by Greenoaks that values the company at roughly $10 billion. Sierra says its platform powers custom AI agents for customer service and other workflows and claims rapid enterprise traction. Why it matters: the deal signals continued investor appetite for specialized, agent-focused AI startups, accelerates consolidation of capital behind enterprise agent platforms, and will fuel Sierra's product expansion and international rollout as companies adopt more autonomous, AI-driven customer operations.
DeepConf: Meta & UCSD show how LLMs can "think" far more cheaply by using model confidence to prune bad reasoning traces
Researchers from Meta AI and UC San Diego published DeepConf, a test-time scaling technique that uses internal confidence signals to filter or terminate low-quality reasoning traces during generation. According to the team's arXiv paper and VentureBeat coverage, DeepConf can match or improve reasoning accuracy while dramatically cutting the number of generated tokens (and thus inference cost): in some experiments it reduces token generation by over 70-80% and, in extreme cases, saturates math benchmarks like AIME. Because it's a training-free, plug-in method that works with existing serving stacks and open models, DeepConf could materially lower the cost of deploying high-quality reasoning pipelines and make advanced inference-time scaling practical for enterprises and researchers.
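To make the idea concrete, here is a minimal Python sketch of confidence-filtered voting over parallel samples, assuming the serving stack exposes per-token logprobs. The function names, the sliding-window confidence score, and the keep ratio below are illustrative assumptions rather than the paper's exact formulation, and the online variant that terminates low-confidence traces mid-generation is omitted.

```python
from collections import Counter

def trace_confidence(token_logprobs, window=64):
    # Score a trace by its weakest sliding-window mean logprob, so a single
    # low-confidence stretch of reasoning drags the whole trace down.
    if len(token_logprobs) <= window:
        return sum(token_logprobs) / max(len(token_logprobs), 1)
    means = [
        sum(token_logprobs[i:i + window]) / window
        for i in range(len(token_logprobs) - window + 1)
    ]
    return min(means)

def confidence_filtered_vote(traces, keep_ratio=0.1):
    # traces: list of (final_answer, token_logprobs) pairs produced by
    # ordinary parallel sampling; no retraining or model changes needed.
    scored = sorted(traces, key=lambda t: trace_confidence(t[1]), reverse=True)
    kept = scored[: max(1, int(len(scored) * keep_ratio))]
    votes = Counter(answer for answer, _ in kept)
    return votes.most_common(1)[0][0]
```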
LiquidGEMM: a new W4A8 GEMM kernel for much faster, hardware-aware LLM serving
A fresh arXiv preprint (LiquidGEMM) describes a hardware-aware W4A8 (4-bit weights, 8-bit activations) GEMM kernel and quantization scheme that addresses practical dequantization bottlenecks on GPUs. The authors introduce LiquidQuant and an implicit fine-grained pipeline that overlaps weight loading, dequantization, and MMA to avoid stalls; reported results show up to ~2.9× kernel and ~4.9× end-to-end speedups versus state-of-the-art W4A8 kernels and consistent system-level gains compared to NVIDIA TensorRT-LLM. This kind of co-design advance matters for the research community and industry because efficient, low-precision kernels are a key enabler for cheaper large-model inference and real-time LLM services.
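For readers newer to low-precision serving, the NumPy sketch below is a plain functional reference for what a grouped W4A8 matmul computes: weights quantized to 4-bit per group, activations to 8-bit per row, integer accumulation, then rescaling. The symmetric per-group scheme and group size are assumptions for illustration; the paper's contribution is the GPU kernel and LiquidQuant dequantization pipeline that perform this efficiently, which a NumPy reference cannot capture.

```python
import numpy as np

def quantize_weights_int4(W, group_size=128):
    # Symmetric per-group 4-bit quantization along K; values land in [-8, 7].
    K, N = W.shape
    Wg = W.reshape(K // group_size, group_size, N)
    scales = np.maximum(np.abs(Wg).max(axis=1, keepdims=True) / 7.0, 1e-8)
    Wq = np.clip(np.round(Wg / scales), -8, 7).astype(np.int8)
    return Wq.reshape(K, N), scales  # int4 values stored in int8 for clarity

def quantize_activations_int8(X):
    # Symmetric per-row (per-token) 8-bit quantization.
    scales = np.maximum(np.abs(X).max(axis=1, keepdims=True) / 127.0, 1e-8)
    Xq = np.clip(np.round(X / scales), -127, 127).astype(np.int8)
    return Xq, scales

def w4a8_matmul_reference(X, W, group_size=128):
    # Integer products accumulated in int32 per weight group, then rescaled
    # back to float with the activation and weight-group scales.
    Xq, sx = quantize_activations_int8(X)           # (M, K) int8, (M, 1)
    Wq, sw = quantize_weights_int4(W, group_size)   # (K, N) int8, (K//g, 1, N)
    M, K = X.shape
    out = np.zeros((M, W.shape[1]), dtype=np.float32)
    for g in range(K // group_size):
        ks = slice(g * group_size, (g + 1) * group_size)
        acc = Xq[:, ks].astype(np.int32) @ Wq[ks, :].astype(np.int32)
        out += acc.astype(np.float32) * sw[g, 0, :] * sx
    return out
```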
Salesforce cuts 262 San Francisco roles as Benioff leans into AI automation
Salesforce told California regulators this week it laid off 262 employees in San Francisco as CEO Marc Benioff pushes the company to adopt AI-driven tools. The Washington Post reports the reductions are the latest in a string of workforce trims tied to efficiency and automation efforts; Benioff has publicly framed internal AI tools as enabling substantial productivity gains that reduce headcount needs in certain roles. Why it matters: Salesforce is one of San Francisco's largest employers and a bellwether for enterprise software, so its move underscores a broader industry trend of large tech firms reshaping workforces around generative AI, creating near-term dislocation for certain job classes while increasing demand for senior AI talent. Impact: the layoffs may accelerate local economic ripple effects in the city, influence competitors' workforce planning, and sharpen regulatory and policy debates about AI-driven job displacement.
Read more →
The Washington Post
Anthropic agrees to $1.5 billion settlement with authors over pirated books used to train models
The Associated Press reports Anthropic has agreed to pay about $1.5 billion to settle a class-action suit by authors alleging the company trained its AI on pirated copies of books. The settlement, which includes payments to authors and destruction of the pirated datasets, is a major legal and financial development for one of the leading AI model makers. Why it matters: the payout is among the largest legal costs tied to dataset usage and sets a precedent that could affect other AI firms' risk calculus, training practices, licensing strategies, and balance-sheet planning. Impact: expect increased scrutiny of training data provenance across the industry, potential changes in how vendors license content for model training, and renewed attention from regulators and content owners on accountability and compliance.
Read more →
Associated Press
OpenAI launches Jobs Platform and certification push to train and credential AI-ready workers
OpenAI announced plans for an AI-powered hiring product, the OpenAI Jobs Platform, plus a certification program delivered via its OpenAI Academy. The Jobs Platform (pilot rolling out ahead of a mid-2026 launch) will use AI to match employers with candidates and include dedicated tracks for small businesses and local governments. The certification program (pilot slated for late 2025) aims to validate "AI fluency" across levels and help connect learners with employers; OpenAI says it plans to certify millions of Americans over the coming years. Why it matters: this ties training, credentials, and hiring together, expanding OpenAI from tools into workforce development and potentially competing with incumbent platforms (e.g., LinkedIn), while giving learners new pathways to demonstrate practical AI skills.
OpenAI's Codex rolls out cross-IDE/CLI updates (new release assets surfaced Sep 5, 2025)
OpenAI's Codex, a cloud-based software-engineering agent for writing, testing, and reviewing code, received recent updates and new release artifacts (GitHub release activity dated Sep 5, 2025) alongside product notes describing IDE extensions, a refreshed CLI, seamless local-cloud handoff, and GitHub code-review integrations. Why it matters: these updates bring agentic coding assistance directly into developers' workflows (VS Code, Cursor, terminals, GitHub) and make it easier to delegate tasks, run tests, and generate PRs, potentially accelerating development and changing how engineers learn and use coding assistants in real projects.
Read more →
OpenAI (GitHub release)