Abstract
Everyone can write their stories in freeform text; it is something we all learn in school. Yet telling stories through video requires learning specialized, complicated tools. In this paper, we introduce Doki, a text-native interface for generative video authoring that aligns video creation with the natural process of writing text. In Doki, writing text is the primary interaction: within a single document, users define assets, structure scenes, create shots, refine edits, and add audio. We articulate the design principles of this text-first approach and demonstrate Doki's capabilities through a series of examples. To evaluate its real-world use, we conducted a week-long deployment study with participants of varying expertise in video authoring. This work contributes a fundamental shift in generative video interfaces, demonstrating a powerful and accessible new way to craft visual stories.
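The abstract does not specify Doki's actual document syntax. As a rough illustration of the text-first idea it describes (assets, scenes, shots, and audio declared in a single document), the sketch below parses a hypothetical markup into simple structures. The tag names (`@asset`, `# Scene:`, `- shot:`, `- audio:`), the sample document, and the parser are assumptions made for illustration only, not Doki's format or implementation.

```python
# Hypothetical sketch only: the markup and parser below are illustrative
# assumptions about what a text-first video-authoring document could look
# like; they are NOT Doki's actual syntax or implementation.
from dataclasses import dataclass, field

SAMPLE_DOC = """\
@asset hero: a young astronaut in a bright orange suit
@asset ship: a small weathered cargo shuttle

# Scene: Departure
- shot: wide view of {ship} on a dusty launch pad at dawn
- shot: close-up of {hero} sealing the helmet, determined expression
- audio: low ambient rumble, distant wind
"""

@dataclass
class Scene:
    title: str
    shots: list = field(default_factory=list)   # shot prompts, in order
    audio: list = field(default_factory=list)   # audio cues for the scene

def parse(doc: str):
    """Split a text-first story document into assets, scenes, shots, and audio cues."""
    assets, scenes = {}, []
    for line in doc.splitlines():
        line = line.strip()
        if line.startswith("@asset"):
            name, desc = line[len("@asset"):].split(":", 1)
            assets[name.strip()] = desc.strip()
        elif line.startswith("# Scene:"):
            scenes.append(Scene(title=line[len("# Scene:"):].strip()))
        elif line.startswith("- shot:") and scenes:
            scenes[-1].shots.append(line[len("- shot:"):].strip())
        elif line.startswith("- audio:") and scenes:
            scenes[-1].audio.append(line[len("- audio:"):].strip())
    return assets, scenes

if __name__ == "__main__":
    assets, scenes = parse(SAMPLE_DOC)
    print(assets)
    for scene in scenes:
        print(scene.title, scene.shots, scene.audio)
```

In this hypothetical format, placeholders such as `{hero}` and `{ship}` stand in for reusing a declared asset across shots; how Doki actually handles asset reuse, editing, and audio is described in the paper itself, not here.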
Community
The following papers were recommended by the Semantic Scholar API:
- Vidmento: Creating Video Stories Through Context-Aware Expansion With Generative Video (2026)
- Rewriting Video: Text-Driven Reauthoring of Video Footage (2026)
- PrevizWhiz: Combining Rough 3D Scenes and 2D Video to Guide Generative Video Previsualization (2026)
- StoryComposerAI: Supporting Human-AI Story Co-Creation Through Decomposition and Linking (2026)
- ADCanvas: Accessible and Conversational Audio Description Authoring for Blind and Low Vision Creators (2026)
- VidTune: Creating Video Soundtracks with Generative Music and Contextual Thumbnails (2026)
- SketchDynamics: Exploring Free-Form Sketches for Dynamic Intent Expression in Animation Generation (2026)