16 Sep 2025

AI News Digest

🤖 AI-curated · 8 stories

Today's Summary

OpenAI is rolling out a teen-specific version of ChatGPT, aiming to reduce safety risks while giving parents more oversight, a timely move as regulators zero in on how chatbots affect young users. Meanwhile, Jack Altman’s $275 million fund for AI startups shows the continued buzz around AI ventures, especially those shaking up enterprise software. On the research front, DeepMind’s new method for cleaning up training data could be a game-changer, potentially unlocking large volumes of otherwise unusable data for future AI models. Lastly, Google’s “Nano Banana” image editor is making waves, proving that slick, in-app generative tools can quickly catch on with creators, though the craze also raises questions about image integrity and attribution.

Stories

OpenAI builds a teen-specific ChatGPT experience to limit safety risks

OpenAI announced it is developing a separate ChatGPT experience tailored for users under 18, using age-prediction tech to direct uncertain cases to the teen version. The offering will let parents link accounts, set blackout hours, toggle features like memory and chat history, and receive alerts if the system detects acute distress. The move comes as regulators and lawmakers intensify scrutiny of chatbots' impact on youth — OpenAI says it will roll out these guardrails by year-end. Why it matters: a teen-focused ChatGPT could set a new industry precedent for age-aware AI UX and compliance, balancing safety, privacy and product adoption while shaping how other AI platforms navigate youth protections and regulatory pressure.
Read more → Axios

Jack Altman closes $275M Alt Capital II to back AI enterprise startups

Venture investor Jack Altman has quickly raised $275 million for Alt Capital II, a fund focused on AI startups aimed at transforming or replacing traditional enterprise software. The fund closed in about a week and will primarily lead Series A rounds, building on Alt Capital’s prior investments in AI-enabled business software. Why it matters: the rapid close and sizable pool reflect continued investor appetite for enterprise AI plays and will accelerate funding for startups building agentic and productivity-focused AI tools — potentially reshaping how enterprises procure core software with more intelligent, AI-first alternatives.
Read more → The Wall Street Journal

Google DeepMind researchers propose 'Generative Data Refinement' to reclaim unusable training data

A Business Insider report (Sept 15, 2025) highlights a new DeepMind-led research method called Generative Data Refinement (GDR) — an approach that uses pretrained generative models to 'purify' mixed-quality text/code documents by rewriting or removing toxic, inaccurate, or personally identifiable fragments so the rest of the content can be reused for model training. The paper (linked in the report) shows GDR outperforming current industry heuristics and synthetic-data alternatives on proof-of-concept tests, suggesting a way to alleviate looming shortages of high-quality training tokens. Why it matters: if robust and safe in practice, GDR could recover large volumes of otherwise-discarded data (text, code and potentially other modalities), extending the usable data pool for foundation models and easing a major scaling bottleneck for future LLM training. Practical impacts include changes in data curation pipelines, new privacy/ethics scrutiny around automated redaction/rewriting, and implications for dataset provenance and evaluation.
Read more → Business Insider
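
For readers who want a concrete picture, here is a minimal sketch of what a GDR-style refinement pass could look like. It assumes a generic generate() text-completion helper standing in for whatever LLM API a pipeline already uses; the prompt wording and the <DISCARD> marker are illustrative assumptions, not details taken from the DeepMind paper.

```python
# Illustrative sketch only: the prompt text, generate() helper, and <DISCARD>
# marker are assumptions for this example, not the paper's actual method.
from typing import Callable, Iterable

REFINE_PROMPT = (
    "Rewrite the document below so it can be safely reused as training data.\n"
    "Remove or paraphrase any personally identifiable information, toxic\n"
    "language, or clearly inaccurate statements, keeping the rest intact.\n"
    "If nothing can be salvaged, reply with only the token <DISCARD>.\n\n"
    "Document:\n{doc}\n\nRewritten document:"
)

def refine_corpus(
    docs: Iterable[str],
    generate: Callable[[str], str],  # any pretrained LLM text-completion call
) -> list[str]:
    """One GDR-style pass: rewrite each document, drop the unrecoverable ones."""
    refined = []
    for doc in docs:
        out = generate(REFINE_PROMPT.format(doc=doc)).strip()
        if out and out != "<DISCARD>":
            refined.append(out)
    return refined
```

In a real pipeline the rewriting prompt, safety filters, and validation of the rewritten text would be far more involved; the sketch only shows the overall rewrite-or-discard shape of the approach.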

VQualA 2025: ICCV workshop challenge shows LMMs improving at open-ended visual quality comparison

An arXiv preprint (submitted Sep 11, 2025) reports the VQualA 2025 Challenge results — a workshop benchmark and competition (ICCV 2025 workshop) that evaluated how instruction-tuned large multimodal models (LMMs) judge and compare visual quality across single images, image pairs, and multi-image groups. With ~100 submissions and five top models demonstrating emerging capabilities on coarse-to-fine quality-comparison tasks, the challenge provides a new, larger benchmark and standardized evaluation protocols (2AFC and MCQ formats) for open-domain visual-quality reasoning. Why it matters: VQualA supplies the community with a focused, reproducible benchmark to measure multimodal models’ nuanced visual judgment and comparison abilities — an important step toward rigorous evaluation of LMM perception, safety (e.g., detecting manipulations or artifacts), and fine-grained alignment with human aesthetic/quality judgments. It should spur follow-up research on evaluation methodology, dataset construction, and model improvements for real-world image-quality and multimodal reasoning tasks.
Read more → arXiv
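
To make the evaluation formats concrete: 2AFC (two-alternative forced choice) asks a model to pick which of two images has higher quality and scores the picks against human preferences, while MCQ items are scored as ordinary multiple-choice accuracy. The snippet below is a minimal, hypothetical 2AFC scoring sketch; the field names and data layout are not the challenge’s actual schema.

```python
# Minimal 2AFC scoring sketch; data format and field names are illustrative.
from dataclasses import dataclass

@dataclass
class PairJudgement:
    model_pick: str   # "A" or "B": which image the LMM rated higher quality
    human_pick: str   # "A" or "B": the human-majority preference for the pair

def two_afc_accuracy(judgements: list[PairJudgement]) -> float:
    """Fraction of image pairs where the model's choice matches the human majority."""
    if not judgements:
        return 0.0
    hits = sum(j.model_pick == j.human_pick for j in judgements)
    return hits / len(judgements)

print(two_afc_accuracy([PairJudgement("A", "A"),
                        PairJudgement("B", "A"),
                        PairJudgement("B", "B")]))   # -> 0.666...
```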

OpenAI hires ex‑xAI finance chief Mike Liberatore to scale compute and business finance

OpenAI has appointed Mike Liberatore — who briefly served as CFO at Elon Musk’s xAI and helped arrange large financing deals there — as its business finance officer to help oversee AI infrastructure spending and scale compute access. The hire highlights intensifying talent competition and financial strategy jockeying among leading AI labs as they race to secure compute and capital; it may also sharpen the rivalry and boardroom/legal tensions that have marked relationships between OpenAI and Musk’s ventures. Impact: reinforces OpenAI’s push to professionalize and expand its finance/compute operations during a capital‑intensive growth phase. ([reuters.com](https://www.reuters.com/business/openai-hires-former-xai-cfo-mike-liberatore-business-finance-officer-2025-09-16/))
Read more → Reuters

Wired: Over 200 AI contractors who worked on Google’s Gemini and AI Overviews were fired amid pay and conditions dispute

WIRED reports that more than 200 contractors employed via outsourcers (not direct Alphabet employees) who rated and refined Google’s AI outputs — including work on Gemini and AI Overviews — were laid off, according to workers. The story highlights frictions over pay, working conditions, alleged retaliation against organizing efforts, and the precarious role of expert contractors in training and evaluating large models. Impact: underscores labor and supply‑chain risks for Big Tech AI projects, potential regulatory and PR fallout, and the broader industry tension as human raters push for better terms while companies explore automation and cost reductions. ([wired.com](https://www.wired.com/story/hundreds-of-google-ai-workers-were-fired-amid-fight-over-working-conditions))
Read more → WIRED

DeepLearning.AI launches hands‑on short course on knowledge graphs for AI agents

DeepLearning.AI released a new short course, "Knowledge Graphs for AI Agent API Discovery" (early September 2025). The course shows how to build knowledge graphs that help agentic systems discover and call the right APIs, with code notebooks and practical examples (topics include embeddings, vector DBs, agent orchestration and API-knowledge-graph construction). This is a useful, up-to-date resource for developers and ML engineers building reliable agentic or retrieval-augmented systems — it fills a practical skills gap (how to connect agents to real-world APIs and business processes) and complements RAG/vector-first workflows.
Read more → DeepLearning.AI
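
As a rough, hypothetical illustration of the core idea (not the course’s actual notebooks), the sketch below builds a tiny knowledge graph linking API operations to the business entities they touch, then lets an agent query it to shortlist candidate endpoints. All node names, relations, and the networkx-based lookup are invented for the example.

```python
# Toy API-discovery graph; every name and relation here is made up.
import networkx as nx

g = nx.DiGraph()
# API operations and the business entities they touch, with a relation label.
g.add_edge("invoices_api.list", "Invoice", relation="returns")
g.add_edge("invoices_api.create", "Invoice", relation="creates")
g.add_edge("customers_api.get", "Customer", relation="returns")
g.add_edge("Invoice", "Customer", relation="belongs_to")

def discover_apis(entity: str) -> list[tuple[str, str]]:
    """List API operations directly linked to an entity, for an agent to choose from."""
    return [(src, data["relation"])
            for src, _, data in g.in_edges(entity, data=True)
            if "_api." in src]   # keep API nodes, skip entity-to-entity edges

# An agent asked to "fetch a customer's invoices" could resolve the Invoice
# entity first, then pick among the operations the graph suggests:
print(discover_apis("Invoice"))
# -> [('invoices_api.list', 'returns'), ('invoices_api.create', 'creates')]
```

In practice such a graph would be generated from API specifications and combined with embeddings and vector search so an agent can match fuzzy task descriptions to graph nodes; the toy version only shows the traversal step.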

Gemini’s ‘Nano Banana’ image editor continues to surge — powerful in-app image edits go viral

Google’s Gemini app update (the Nano Banana image model) has driven a wave of adoption in early September 2025 by delivering fast, consistent image editing (character consistency across edits, multi‑turn inpainting/outpainting and blending). Coverage of the rollout highlights huge usage spikes and viral creator trends — a reminder that high-quality, integrated generative tools inside mainstream apps can rapidly drive user growth and reshape creator workflows. For makers and creators, Nano Banana is now a practical tool for rapid prototyping and social content; for platforms and regulators, the craze raises fresh questions about misuse, watermarking and attribution.
Read more → 9to5Google