The Wan 2.5 Preview Image-to-Video model generates videos of 5 or 10 seconds from an initial frame image and a text prompt. It also adds audio capabilities: automatic dubbing, or a custom audio file that you provide.
This is an asynchronous API; only a task_id is returned. Use the task_id to call the Task Result API and retrieve the video generation results.
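Because only a task_id comes back, a typical client submits the task and then polls the Task Result API. The sketch below assumes a generic JSON-over-HTTP interface; the endpoint URLs, headers, response shape, and field names are placeholders rather than the platform's documented values.

```python
# Minimal sketch of the asynchronous submit-then-poll flow.
# All URLs, headers, and field names below are illustrative assumptions,
# not the provider's actual API surface.
import time
import requests

API_KEY = "YOUR_API_KEY"  # placeholder credential
SUBMIT_URL = "https://api.example.com/wan2.5-preview/image-to-video"  # hypothetical
RESULT_URL = "https://api.example.com/tasks/{task_id}/result"         # hypothetical

payload = {
    "prompt": "A small cat running on the grass.",       # assumed field names
    "image_url": "https://example.com/first-frame.jpg",
    "duration": 5,                                        # 5 or 10 seconds
}
headers = {"Authorization": f"Bearer {API_KEY}"}

# 1) Submit the generation task; the response carries only a task identifier.
task_id = requests.post(SUBMIT_URL, json=payload, headers=headers).json()["task_id"]

# 2) Poll the Task Result API until the task finishes, then inspect the result.
while True:
    result = requests.get(RESULT_URL.format(task_id=task_id), headers=headers).json()
    if result.get("status") in ("succeeded", "failed"):   # assumed status values
        break
    time.sleep(5)

print(result)
```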
Text prompt. Supports both English and Chinese, with a maximum length of 2000 characters; any excess is automatically truncated. Example value: A small cat running on the grass.
Negative prompt, describing content that should be avoided in the video so you can constrain the output. Supports both English and Chinese, with a maximum length of 500 characters; any excess is automatically truncated. Example value: Low resolution, errors, worst quality, low quality, incomplete, extra fingers, disproportionate, etc.
The URL of the initial frame image used for video generation. The URL must be publicly accessible over HTTP or HTTPS. Image restrictions (see the validation sketch after this list):
Image formats: JPEG, JPG, PNG (no support for transparency), BMP, WEBP.
Image resolution: The width and height of the image should be within the range of [360, 2000] pixels.
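As a convenience, the image restrictions above can be checked locally before submitting a task. The helper below is a sketch, not part of the API; it assumes the Pillow and requests packages and that the image is small enough to download in one request.

```python
# Local pre-check against the documented image restrictions:
# allowed formats and width/height within [360, 2000] pixels.
from io import BytesIO

import requests
from PIL import Image

ALLOWED_FORMATS = {"JPEG", "PNG", "BMP", "WEBP"}  # Pillow reports JPG as "JPEG"

def check_first_frame(url: str) -> None:
    img = Image.open(BytesIO(requests.get(url, timeout=30).content))
    if img.format not in ALLOWED_FORMATS:
        raise ValueError(f"Unsupported image format: {img.format}")
    width, height = img.size
    if not (360 <= width <= 2000 and 360 <= height <= 2000):
        raise ValueError(f"Width and height must be within [360, 2000] px, got {img.size}")
```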
URL of the audio file that the model will use to generate the video. See audio settings for usage instructions. Audio restrictions (see the validation sketch below):
Formats: WAV, MP3.
Duration: 3-30s.
File size: No more than 15MB.
Overflow handling: If the audio is longer than the video duration (5 or 10 seconds), only the first 5 or 10 seconds are kept and the rest is discarded. If the audio is shorter than the video duration, the video is silent after the audio ends. For example, if the audio is 3 seconds and the video duration is 5 seconds, the output video has sound for the first 3 seconds and is silent for the last 2 seconds.
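The audio restrictions above can likewise be checked locally before uploading. The helper below is a sketch, not part of the API; it assumes the mutagen package for reading the audio duration.

```python
# Local pre-check against the documented audio restrictions:
# wav/mp3 format, 3-30 s duration, file size no more than 15 MB.
import os

import mutagen

def check_audio(path: str) -> None:
    if not path.lower().endswith((".wav", ".mp3")):
        raise ValueError("Audio must be a wav or mp3 file")
    if os.path.getsize(path) > 15 * 1024 * 1024:
        raise ValueError("Audio file must be no more than 15 MB")
    duration = mutagen.File(path).info.length  # duration in seconds
    if not 3 <= duration <= 30:
        raise ValueError(f"Audio duration must be 3-30 s, got {duration:.1f} s")
```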
Whether to enable intelligent prompt rewriting. When enabled, a large language model rewrites the input prompt, which significantly improves results for shorter prompts but increases processing time.
Random seed, used to control the randomness of the generated content. Value range: [0, 2147483647]. If not provided, a random seed is generated automatically. To keep the generated content relatively stable across runs, reuse the same seed value. Example value: 12345.
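Putting the parameters above together, a request body might look like the sketch below. The field names are assumptions inferred from the descriptions, not the documented parameter names; check the API reference for the exact keys.

```python
# Hypothetical request body combining the parameters described above.
# All field names are assumptions, not confirmed parameter names.
payload = {
    "prompt": "A small cat running on the grass.",
    "negative_prompt": "Low resolution, errors, worst quality, low quality",
    "image_url": "https://example.com/first-frame.jpg",
    "audio_url": "https://example.com/voiceover.mp3",  # optional custom audio
    "duration": 10,                                     # 5 or 10 seconds
    "enable_prompt_rewriting": True,                    # intelligent prompt rewriting
    "seed": 12345,                                      # reuse for stable results
}
```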