This Week in AI: Open-Weight Models, Video Editing Breakthroughs, and Wild Creative Tools You Can Try
AI innovation has been moving at a breakneck pace, but this week felt especially electric. From jaw-dropping open-weight language models to video editors that can swap babies for sandwiches, the latest wave of tools shows just how far, and how weird, AI can get. Whether you’re into coding, creating visuals, or simply exploring the frontier of technology, this week’s releases are packed with surprises.
GLM 4.5: The Open-Weight Model That Wows
The star of the week is GLM 4.5, an open-weight large language model that’s surprisingly competitive with the best closed systems. Open-weight means you can download the model’s weights and run it on your own hardware or in the cloud, giving hobbyists and developers full freedom to experiment.
What’s shocking is how close it comes to Grok 4 and OpenAI’s o3, even outperforming Claude 4 Opus on coding and reasoning benchmarks. But the numbers tell only part of the story; the magic shows up in real use.
One of GLM 4.5’s most delightful tricks is automatic slide deck creation. Ask it to create a presentation on the satirical “Birds Aren’t Real” movement, and it doesn’t just spit out bullet points; it researches, gathers images, and builds polished slides with humor and context. Compared with older slide-deck outputs, which looked like dull black-on-white rectangles, these are shockingly professional and visually engaging.
Even more impressive? One-shot coding. Give GLM 4.5 a prompt like “Create a Vampire Survivors-style game in JavaScript” and it produces a playable browser game in a single pass. The graphics are basic, but the mechanics (enemy swarms, auto-attacks, XP pickups) just work. For a free, lightning-fast model, this is remarkable, and it hints at a new era of accessible, high-quality AI coding tools.
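To give a sense of what a one-shot result has to get right, here is a minimal, headless sketch of those core mechanics, enemies swarming toward the player, an auto-attack hitting the nearest enemy, and slain enemies dropping XP. All names and numbers here are illustrative assumptions, not GLM 4.5’s actual output, and a real generated game would wrap this loop in canvas rendering and `requestAnimationFrame`.

```javascript
// Distance between two {x, y} points.
const dist = (a, b) => Math.hypot(a.x - b.x, a.y - b.y);

function makeGame() {
  return {
    player: { x: 0, y: 0, xp: 0, attackRange: 50, attackDamage: 10 },
    enemies: [],
    pickups: [],
  };
}

function spawnEnemy(game, x, y) {
  game.enemies.push({ x, y, hp: 20, speed: 2 });
}

// One simulation step: swarm, auto-attack, drop and collect XP.
function tick(game) {
  const p = game.player;

  // Enemies swarm: each one steps straight toward the player.
  for (const e of game.enemies) {
    const d = dist(e, p) || 1; // avoid divide-by-zero on top of the player
    e.x += ((p.x - e.x) / d) * e.speed;
    e.y += ((p.y - e.y) / d) * e.speed;
  }

  // Auto-attack: damage the nearest enemy inside attack range.
  const inRange = game.enemies.filter((e) => dist(e, p) <= p.attackRange);
  if (inRange.length > 0) {
    const target = inRange.reduce((a, b) => (dist(a, p) < dist(b, p) ? a : b));
    target.hp -= p.attackDamage;
  }

  // Dead enemies drop XP pickups where they fell.
  game.pickups.push(
    ...game.enemies.filter((e) => e.hp <= 0).map((e) => ({ x: e.x, y: e.y, xp: 5 }))
  );
  game.enemies = game.enemies.filter((e) => e.hp > 0);

  // Player collects any pickup within reach.
  for (const u of game.pickups.filter((u) => dist(u, p) <= 10)) p.xp += u.xp;
  game.pickups = game.pickups.filter((u) => dist(u, p) > 10);
}
```

Even in this stripped-down form, the loop has to get ordering right (move, then attack, then clean up), which is exactly the kind of detail that makes single-pass code generation impressive.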
Runway Aleph and Luma Labs: Editing Video Like a Sorcerer
Video editing took a giant leap forward with Runway’s Aleph model. This tool lets you selectively edit elements in a video using plain-language instructions. You can:
- Move a fighter jet from the sky to outer space.
- Replace crying babies in The Flash with a giant hoagie sandwich.
- Alter backgrounds or objects without reshooting.
It’s not flawless (complex scene changes can get weird), but for environment swaps and single-object edits, the results are genuinely game-changing.
Meanwhile, Luma Labs’ “Modify with Instructions” offers a similar capability, with the added twist of strength controls. Crank the dial to max and your video morphs into something entirely new; dial it down and you get subtle, realistic tweaks. Both tools unlock creative possibilities for marketing, entertainment, and meme-making alike.
Emergent Video Behavior: Instructions Right in the Frame
Perhaps the most mind-bending experiment of the week came from Veo and other video AI systems showing emergent behavior. Users discovered that if you add written instructions directly onto an image, the model tries to follow them in motion.
Imagine drawing an arrow labeled “on fire” and writing “hits the moon, which ignites” on the frame. The model then animates a flaming arrow flying toward the moon, albeit imperfectly. Early tests are chaotic and hilarious, but they hint at future interfaces where storyboarding and directing happen directly on the frame, no scripting required.
Midjourney Video and Looping Experiments
Midjourney continues to push into video territory with start- and end-frame animations. Give it a human face as the starting frame and a wolf as the end frame, and it attempts a “werewolf morph” transition. Or use the same frame for both to create seamless loops, a clever trick for social media visuals or ambient animations.
In practice, results can be surreal, often ignoring basic physics or narrative logic. But the feature encourages experimentation and offers glimpses of how static AI art can evolve into dynamic storytelling.
Face Swaps, 3D Models, and Beyond
AI fun didn’t stop at video. This week also saw:
- Ideogram Character: Upload a single selfie and instantly swap faces into photos, whether it’s the Oscars selfie or historical portraits. What once required 20+ reference photos now takes one.
- Meshy 5: A text-to-3D and image-to-3D tool that can create game-ready models and textures, from a loaded pizza to a rough Rick and Morty spaceship. Early outputs are playful but surprisingly detailed.
- Hunyuan 3D World Model: Tencent’s experimental platform for navigable AI-generated worlds, offering small-scale roaming in futuristic cities or fantastical landscapes.
These tools hint at a future where generating a game world, animated short, or custom 3D asset could be as casual as typing a sentence.
Quick Hits from the AI Frontier
- Google AI Search expands to the UK, including multimodal queries for images and objects.
- NotebookLM begins rolling out Video Overviews, turning your AI-generated notes into narrated, illustrated explainers.
- Microsoft Edge Copilot Mode leans into agentic browsing, automating web tasks within the browser.
- Amazon invests in Fable, a company pitching “Netflix for AI,” where you might someday prompt entire cartoon episodes.
- Higgsfield’s Halo AI video model is free for a limited time, generating nearly any video concept without heavy restrictions.
- Cursor’s Bugbot launches to catch logic and security flaws in code before production.
- Robotics meets chores as Figure and Unitree showcase humanoid robots folding laundry, doing handstands, and teasing a future where machines help, or at least entertain, at home.
Why This Week Matters
This week’s updates reflect a powerful trend: AI is shifting from novelty to creativity accelerator. Open-weight models like GLM 4.5 put cutting-edge capabilities in anyone’s hands. Video and 3D tools are democratizing effects that once required studios, budgets, and specialized skills. And the line between play and production continues to blur.
If last year’s AI hype was about chatbots and static images, this year is about movement, agency, and immersion. We’re seeing the foundations of a creative ecosystem where anyone can ideate, iterate, and share in ways previously reserved for professional pipelines.
And if nothing else, we now know that sandwiches can replace babies, flaming arrows can aim for the moon, and a robot might soon help with your laundry. The future is weird, fast, and undeniably fun.