Motion Comes to just4o Images
As of April 23, 2026, the Images workspace on just4o.chat is no longer just a place to make and refine stills. It now reaches into motion too.
That means one workspace can now cover a bigger creative loop: generate an image, turn that image into video, edit a clip, extend a clip, or pause on a frame and push that frame back into a new image pass.
For a lot of just4o users, that matters more than it may sound at first.
People come here because they do not want model roulette, hidden swaps, or a workflow that falls apart the second they move from one medium to another. They want direct model choice, persistent context, and a product that feels like it was built by people who actually use it. This update brings that same philosophy into the visual side of the app.
What is live now
In the just4o Images workspace today, the creative stack now looks like this:
- Image generation remains the base layer for stills
- Image-to-video is available with Grok Imagine Video from xAI
- Image-to-video is also available with PixVerse V6 through Fal AI
- Video edits are available with Grok Imagine Video
- Video extensions are available with Grok Imagine Video
- Frame-level still editing is available by pausing a ready video on the frame you want and turning that frame into a new image edit
That last part is especially nice. It means a generated clip is not just an endpoint. It can also become source material for the next still image you want to develop.
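
For the curious, here is roughly what that paused-frame hand-off can look like in a browser, sketched in TypeScript. This is an illustrative sketch against standard DOM APIs, not just4o's actual implementation, and captureCurrentFrame is a hypothetical helper name.

```typescript
// Illustrative sketch, not just4o's real code: capture the currently
// paused frame of a <video> element as a PNG blob that can be fed
// into a new image generation/edit pass.
async function captureCurrentFrame(video: HTMLVideoElement): Promise<Blob> {
  video.pause(); // freeze playback so the visible frame is the one exported

  // Draw the current frame at the video's native resolution.
  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;

  const ctx = canvas.getContext("2d");
  if (!ctx) throw new Error("2D canvas context unavailable");
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);

  // Export as PNG; the resulting blob is ordinary image data.
  return new Promise((resolve, reject) => {
    canvas.toBlob(
      (blob) => (blob ? resolve(blob) : reject(new Error("toBlob failed"))),
      "image/png"
    );
  });
}
```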
The important product detail
This is not being presented as a full traditional video editor, and we should not pretend otherwise.
What just4o has now is a tighter generative media workflow inside the Images product itself. A still can become motion. A motion clip can be revised. A clip can be extended. And a paused frame can become the seed for the next image.
That is a very different proposition from sending users off into three separate tools and hoping they manually keep their context straight.
How the provider split works
The provider story is clear in the current just4o implementation.
Grok Imagine Video is the deeper motion route right now. In just4o, it is the model handling:
- image-to-video
- video edits
- video extensions
PixVerse V6, routed through Fal AI, is currently enabled in just4o for:
- image-to-video
That distinction matters because it keeps the launch honest. Fal's PixVerse materials describe a broader video feature set on their side, but in just4o today the PixVerse path is intentionally scoped to image-to-video. Grok is the path that currently goes further into clip editing and extension inside the product.
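
One way to picture that scoping is as a per-model capability map. The snippet below is a hypothetical sketch rather than just4o's actual registry code, though the model ids and capability names mirror the configuration described in the research notes at the end of this post.

```typescript
// Hypothetical sketch of a per-model capability map; not the actual
// just4o registry, though the ids and capability names mirror the
// configuration described in this post's research notes.
type VideoCapability = "image_to_video" | "edit_video" | "extend_video";

const videoModelCapabilities: Record<string, VideoCapability[]> = {
  // Grok Imagine Video: the deeper motion route today.
  "grok-imagine-video": ["image_to_video", "edit_video", "extend_video"],
  // PixVerse V6 via Fal AI: intentionally scoped to image-to-video.
  "pixverse-v6-image-to-video": ["image_to_video"],
};

// UI gating then becomes a lookup: only surface Edit/Extend actions
// for models that actually support them.
function supports(modelId: string, cap: VideoCapability): boolean {
  return videoModelCapabilities[modelId]?.includes(cap) ?? false;
}
```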
Why this fits the just4o crowd
The strongest just4o users have always cared about continuity.
You can see it all over the product and all over the community feedback: people want the model they picked to stay legible, they want memory and context to persist, and they want a small team that ships meaningful changes quickly instead of hiding behind abstraction.
This update fits that audience well because it does not ask people to leave the workspace mentality behind the moment they start doing visual work. The Images surface is becoming more like the rest of just4o: one place where the thread can keep evolving instead of resetting every time the medium changes.
What stands out to me
The strongest part of this release is not any single button. It is the shape of the loop.
You can start with a still image. Then:
- make a video from it
- revise the video
- extend the result
- pause on the exact frame that feels promising
- turn that frame back into a new still image pass
That is the kind of workflow that feels intuitive to artists, storytellers, moodboard builders, brand people, and obsessive iterators. It is also the kind of workflow that makes an AI tool feel less like a vending machine and more like a studio surface.
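
If it helps to see the loop as code, here is a deliberately abstract sketch. Every function in it (createVideo, editVideo, extendVideo, captureFrame, editImage) is a hypothetical stand-in for a workspace action, not a real just4o API; the point is the shape of the cycle, not the signatures.

```typescript
// Abstract sketch of the iteration loop; every declared function is a
// hypothetical stand-in for a workspace action, not a real just4o API.
declare function createVideo(image: Blob, prompt: string): Promise<Blob>;
declare function editVideo(clip: Blob, prompt: string): Promise<Blob>;
declare function extendVideo(clip: Blob): Promise<Blob>;
declare function captureFrame(clip: Blob, atSeconds: number): Promise<Blob>;
declare function editImage(frame: Blob, prompt: string): Promise<Blob>;

async function iterateOnce(still: Blob): Promise<Blob> {
  let clip = await createVideo(still, "bring this scene to life");
  clip = await editVideo(clip, "slower camera push, warmer light");
  clip = await extendVideo(clip);

  // Pause on the promising moment and fold it back into the image layer.
  const frame = await captureFrame(clip, 3.2);
  return editImage(frame, "refine this frame as the next key visual");
}
```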
Where this likely fits first
If you are trying to get practical about it, this looks especially well-suited to:
- concept development where a still image needs to become a short motion idea
- social and promo ideation where the best frame from a clip might become the next key visual
- creative iteration loops where you want motion without rebuilding the whole prompt context in another app
- users who already treat just4o as a persistent creative environment rather than a single-shot generator
In other words: this is a strong fit for the exact kind of user just4o tends to attract.
This update also includes three new text models
This broader update is not only about media workflows.
It also brings three newly added text models into the just4o lineup:
- Kimi K2.6
- GLM 5.1
- MiniMax M2.7
That matters because the point of just4o has never been one isolated feature. The point is that people can keep one coherent workspace while choosing very different kinds of models for very different jobs.
In the current just4o configuration:
- Kimi K2.6 is positioned as a long-horizon multimodal agentic model with image input and function calling
- GLM 5.1 is the stronger long-horizon text route for sustained engineering and coding work
- MiniMax M2.7 is positioned around complex agent harnesses, productivity tasks, and dynamic tool-oriented workflows
So the broader shape of this release is easy to understand: the Images workspace gets more fluid, and the text-model lineup gets deeper at the same time.
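
For readers who like the registry framing, the entries might look something like the sketch below. The shape and ids are hypothetical; the only capability flags filled in are the ones actually described above (image input and function calling for Kimi K2.6), with the rest deliberately left unset rather than guessed.

```typescript
// Hypothetical registry entries; shape and ids are illustrative.
// Only capabilities described in this post are filled in; the
// unstated flags are left undefined rather than guessed.
interface TextModelEntry {
  id: string;
  positioning: string;
  imageInput?: boolean;      // accepts image input
  functionCalling?: boolean; // supports tool/function calls
}

const newTextModels: TextModelEntry[] = [
  {
    id: "kimi-k2.6",
    positioning: "long-horizon multimodal agentic work",
    imageInput: true,
    functionCalling: true,
  },
  {
    id: "glm-5.1",
    positioning: "sustained engineering and coding work",
  },
  {
    id: "minimax-m2.7",
    positioning: "complex agent harnesses and tool-oriented workflows",
  },
];
```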
What is next
There is also a clear next step to this work.
We are designing this media workflow to integrate cleanly with the upcoming Persona V2 system, and video capabilities are planned to arrive as inline model tools as part of that broader direction.
That is roadmap language, not a claim that Persona V2 integration or inline video tools are fully live today. But it is the direction this update is pointing toward: media generation that feels less like a detached side feature and more like a native part of the larger just4o system.
Why this release matters
There are a lot of AI products that technically "support" images or video now. That is not the interesting part anymore.
The interesting part is whether the workflow feels coherent.
On just4o.chat, coherence has always been the point: direct model choice, stable identity, memory, files, projects, personas, and a product experience that does not quietly shift underneath the user. Bringing motion into Images pushes that same philosophy further.
If you already use just4o because you care about continuity, this is one of those updates that should click immediately. The image editor is no longer just where you make a picture. It is becoming a place where stills and motion can actually talk to each other.
Research Notes
- As of April 23, 2026, the just4o Images workspace code supports Create Video, plus Extend Video, Edit Video, and Edit Image actions on an active video.
- In the just4o repo, grok-imagine-video is configured for image_to_video, edit_video, and extend_video.
- In the just4o repo, pixverse-v6-image-to-video is configured only for image_to_video.
- In the just4o repo, paused-frame image editing captures the current video frame and sends that frame into a new image generation/edit flow.
- In the just4o repo, Kimi K2.6, GLM 5.1, and MiniMax M2.7 are all present in the active model registry.
- xAI's official video docs describe Grok Imagine Video for image-to-video generation, video edits, and video extensions, including an 8.7 second maximum input length for video editing.
- fal.ai's official PixVerse V6 page describes PixVerse V6 as supporting text-to-video, image-to-video, transitions, and extension generally, but just4o's current product integration scopes PixVerse to image-to-video.
- Moonshot's current K2.6 platform page describes K2.6 as its latest flagship model with stronger coding and agent execution, but does not clearly label it open-source on that page.
- Z.AI's official docs position GLM 5.1 as a flagship long-horizon model and describe it as having leading open-source coding capabilities.
- MiniMax's official M2.7 page describes M2.7 as a high-performing model for software engineering and says it leads among open-source models on GDPval-AA.

