Weekly AI Digest: GPT-5’s Bumpy Rollout, Google’s “Parallel Thinking,” and an AI Security Scare
Week 32, 2025
Well, well, well.
We can’t talk about August 2025 in the AI space without talking about OpenAI. The rollout of GPT-5 has been nothing short of a product/PR nightmare, arguably the company’s first true stumble since the meteoric debut of ChatGPT almost three years ago. The magic that once seemed untouchable is starting to show some cracks.
Adding to the drama, OpenAI unveiled what it calls an “““open source””” model. The triple quotes are intentional as in reality, this is open-weight, not open source. We have the model’s weights, so we can see how it makes decisions at inference time, but we still don’t have the training data or the full code pipeline. Without that, it’s transparency-lite.
But what strikes me most is the pace. It’s August, traditionally the tech industry’s “silly season,” when launches slow to a trickle. Not this year. Releases are coming thick and fast, with barely time to digest one announcement before the next lands. The AI cycle has no off-season now, and the industry seems content to sprint until something breaks. Is this a representation of the speed of deliveries of the future?
So here we are: mid-summer, but deep in the storm. Let’s dive into this week’s digest.
🚨 A Single Document Can Hack Your Connected ChatGPT
Security researchers have shown that a malicious “poisoned” file can exploit ChatGPT Connectors (like Google Drive or GitHub integrations) to extract secrets with zero clicks. The exploit, codenamed AgentFlayer, works by embedding hidden prompts and web calls in markdown, triggering as soon as the model processes the document.
👉 Read more on Wired
Why it matters:
Connecting LLMs directly to your work tools is powerful, but it also creates a huge attack surface. This is a live example of why “AI safety” has to include basic cybersecurity hygiene.
⚙️ OpenAI Goes Open-Weight With GPT-OSS
OpenAI released its first open-weight models since GPT-2: gpt-oss-120B and gpt-oss-20B, under Apache 2.0. They’re designed for customization, with long-context windows and mixture-of-experts architecture, making them deployable on anything from high-end clusters to consumer GPUs.
As discussed, these aren’t really open source, but really open weight.
👉 Read more on OpenAI
Why it matters:
This isn’t just “another model drop”. It’s a signal that the open vs. closed AI debate is heating up.
🧠 Google’s Gemini 2.5 Deep Think Introduces “Parallel Thinking”
Gemini 2.5’s new Deep Think mode evaluates multiple reasoning paths in parallel before merging the results. Available to AI Ultra subscribers, it’s designed for coding, scientific analysis, and other high-complexity tasks.
👉 Read more on Google Blog
Why it matters:
Space on the reasoning models are heating up big time!
🤖 GPT-5: Smarter, Unified… and Kind of Messy
OpenAI announced GPT-5, which routes prompts dynamically between fast and reasoning-heavy backends.
It’s better at coding, more natural in conversation, and more multimodal than previous releases. But rollout hiccups: from confusing UX changes to stability issues (such as removing access to GPT4o and legacy models) have prompted OpenAI to restore older models in ChatGPT temporarily.
👉 Read more on Lifehacker, OpenAI Blog
Why it matters:
GPT-5 was sold as the next big leap. What if it’s actually proof we’re plateauing with the transformer-based approach altogether?
🎙 Podcast of the Week: Marc Andreessen on Why Perfect Products Become Obsolete
Marc Andreessen joins The Ben Pod Network for a conversation on the state of AI and tech’s shifting platforms.
He covers why open source is resurging, how foundational AI research is (or isn’t) turning into products, and when perfect products become obsolete.
Thanks for reading The Data Journey! This post is public, so feel free to share it.
As always, stay curious.
—Ivo


