Google pledges $9 billion to boost AI and cloud infrastructure in Oklahoma
Alphabet's Google announced a $9 billion investment to build a new data-center campus in Stillwater and expand its Pryor facility, part of a broader push to scale U.S. cloud and AI capacity. The package also ties into Google's $1 billion AI education and training initiative for U.S. universities and nonprofits. Why it matters: the spend signals continued heavy capital allocation by Big Tech into physical AI infrastructure, accelerating competition for compute, talent and regional economic development while reinforcing the trend of AI onshoring in the U.S.
Sam Altman backs Merge Labs to take on Neuralink in brain-computer interfaces
The Financial Times reports that Sam Altman is backing a new venture, Merge Labs, that aims to develop high-bandwidth brain-computer interfaces as a rival to Neuralink. The move brings a major new entrant, and significant funding interest, to the brain-computer interface sector from one of AI's most prominent investors. Why it matters: if pursued at scale, the project could accelerate integration between advanced AI and neural hardware, drawing fresh capital and regulatory attention to an already high-stakes area of tech innovation.
Read more →
Financial Times
Sample More to Think Less: Microsoft researchers publish GFPO to cut RL-induced verbosity in LLM reasoning
What happened: A Microsoft Research / UW team posted an arXiv preprint (Aug 13, 2025) titled "Sample More to Think Less: Group Filtered Policy Optimization (GFPO)." The paper introduces GFPO, a reinforcement-learning method for post-training/online fine-tuning of reasoning models that samples larger groups of candidate answers during training and filters them by length and token efficiency (reward per token). Why it matters: common RL recipes for reasoning (e.g., GRPO) often drive models to inflate answers, trading extra tokens for marginal accuracy gains. GFPO explicitly optimizes for token-efficient reward and reports large reductions in length inflation (46-85% on several reasoning and coding benchmarks) while preserving accuracy. Impact: GFPO points to a practical trade, spending more compute at training time to cut inference cost and latency, that could make reasoning-specialized models far cheaper and faster in deployment and help teams building resource-sensitive LLMs (education, edge, cost-constrained apps). The paper also proposes adaptive difficulty allocation to focus training compute where it most reduces inference burden.
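The core mechanism is easy to picture. Below is a minimal, hypothetical Python sketch of the filtering idea as the abstract describes it (not the authors' implementation): sample a group of responses per prompt, retain the top-k by reward per token, and compute GRPO-style normalized advantages over the survivors only. The metric choice, the value of k, and all names here are illustrative assumptions.

```python
import numpy as np

def gfpo_advantages(rewards, lengths, k):
    """Toy sketch of GFPO's group-filtering step (illustrative, not the
    authors' code). For one prompt, take the rewards and token lengths of
    G sampled responses, keep the k most token-efficient candidates, and
    compute GRPO-style normalized advantages over the survivors only."""
    rewards = np.asarray(rewards, dtype=float)
    lengths = np.asarray(lengths, dtype=float)

    # Rank candidates by reward per token (the token-efficiency metric
    # named in the abstract; filtering by raw length is the alternative).
    efficiency = rewards / np.maximum(lengths, 1.0)
    keep = np.argsort(efficiency)[-k:]  # indices of the top-k candidates

    # Standardize rewards within the retained subset; everything that was
    # filtered out gets zero advantage, so verbose answers are never
    # reinforced even when they happen to be correct.
    subset = rewards[keep]
    adv = np.zeros_like(rewards)
    adv[keep] = (subset - subset.mean()) / (subset.std() + 1e-6)
    return adv  # plug into the usual clipped policy-gradient objective

# Example: six sampled answers, two correct; the short correct answer
# (index 3) earns the highest reward per token.
print(gfpo_advantages(rewards=[1, 0, 0, 1, 0, 0],
                      lengths=[900, 500, 800, 300, 400, 1200], k=3))
```

The "sample more" half of the title is the cost side of the trade: larger groups make it likelier that at least some sampled answers are both correct and short enough to survive the filter.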
NextStep-1 pushes autoregressive image generation with continuous tokens at scale
What happened: A large team (NextStep) released an arXiv preprint (Aug 14, 2025) introducing NextStep-1, a 14B-parameter autoregressive text-to-image model that trains on discrete text tokens and continuous image tokens (with a 157M-parameter flow-matching head). The paper claims state-of-the-art results among autoregressive approaches and stronger image-editing capabilities, and the authors say they will release code and models. Why it matters: diffusion models currently dominate image synthesis, but autoregressive approaches offer clearer conditional likelihoods and different tradeoffs for editing and compositionality. NextStep-1's use of continuous visual tokens at scale (rather than VQ/quantized tokens) is a notable architectural direction that could reopen high-quality autoregressive image modeling at production scale. Impact: if reproducible, NextStep-1 may shift some research and toolchains back toward autoregressive frameworks for tasks that benefit from token-level control (precise editing, sequential multimodal generation), and it will interest researchers exploring alternatives to diffusion and hybrid generation schemes.
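For intuition about what "continuous image tokens with a flow-matching head" could look like, here is a heavily simplified, hypothetical sketch, not the released code: the autoregressive backbone produces a hidden state per position, and a small head learns the velocity field that transports Gaussian noise to the next continuous image token. The rectified-flow parameterization, the dimensions, and all names are assumptions for illustration.

```python
import torch
import torch.nn as nn

class FlowMatchingHead(nn.Module):
    """Tiny stand-in for a flow-matching head: given the backbone's hidden
    state h, a noisy token x_t, and a flow time t, predict the velocity
    that transports noise toward the next continuous image token."""
    def __init__(self, hidden_dim, token_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_dim + token_dim + 1, 4 * token_dim),
            nn.SiLU(),
            nn.Linear(4 * token_dim, token_dim),
        )

    def forward(self, h, x_t, t):
        return self.net(torch.cat([h, x_t, t], dim=-1))

def flow_matching_loss(head, h, x1):
    """Rectified-flow style objective (an assumption here): regress the
    velocity (x1 - x0) along the straight path from noise x0 to the
    target continuous token x1, i.e. x_t = (1 - t) * x0 + t * x1."""
    x0 = torch.randn_like(x1)           # Gaussian noise source
    t = torch.rand(x1.shape[0], 1)      # random flow time in [0, 1]
    x_t = (1 - t) * x0 + t * x1
    v_pred = head(h, x_t, t)
    return ((v_pred - (x1 - x0)) ** 2).mean()

# Example with made-up sizes: the AR backbone would supply one hidden
# state per position; the real model pairs a 14B backbone with a 157M
# head over far larger dimensions.
head = FlowMatchingHead(hidden_dim=64, token_dim=16)
h = torch.randn(8, 64)    # stand-in backbone hidden states
x1 = torch.randn(8, 16)   # stand-in target continuous image tokens
print(flow_matching_loss(head, h, x1))
```

The appeal of this setup is that image tokens never pass through a discrete codebook, so per-token generation keeps full continuous precision while the backbone remains an ordinary next-token predictor.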
Google's Gemini adds Guided Learning, an AI tutor that teaches you step by step
Google rolled out Guided Learning inside Gemini (announced Aug 6, 2025): a study-focused mode that breaks problems into step-by-step lessons, uses images, diagrams and videos, and generates interactive quizzes, flashcards and study guides. Google is positioning Gemini as a learning assistant (not just an answer engine) and is pairing the feature with broader education support, including a free 12-month AI Pro plan for eligible students and tighter integrations with NotebookLM and Gemini's multimodal models. Why it matters: Guided Learning aims to make AI tools useful for actual learning and skill-building (for students, educators, and self-learners), and it signals continued investment by major platforms in shipping AI features tailored to education and skill development.
Perplexity ships built-in AI video generation (Aug 15): short videos with audio for Pro/Max users
Perplexity's Aug 15 changelog announces a built-in video generation feature: Pro and Max subscribers can create 8-second videos with audio (web, iOS, Android), powered by Veo 3 / Veo 3 Fast models and integrated into Perplexity's workflows (Labs, Comet Assistant). The same update also expanded Perplexity Finance for Indian markets. Why it matters: adding native video (with audio) to an AI search/assistant platform turns research and ideation into shareable multimedia outputs, useful for creators, marketers, product teams and learners who want quick visualizations, and continues the shift toward multimodal, all-in-one AI productivity tools.
Read more →
Perplexity (Changelog)