Accelerated inference for Wan 2.1 14B Image-to-Video, a comprehensive and open suite of video foundation models that pushes the boundaries of video generation. By default, the API generates a 5-second video.
This is an asynchronous API; only a task_id is returned. Use the task_id to call the Task Result API and retrieve the video generation results.
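The submit-then-poll flow can be sketched in shell. The Task Result endpoint path, query parameter, and JSON field names below are assumptions for illustration; check the Task Result API reference for the exact contract.

```shell
#!/bin/sh
# Sketch of polling an async task until it finishes.
# Endpoint path and "status" field name are assumptions; consult the
# Task Result API reference for the exact contract.
API_KEY="${NOVITA_API_KEY:-your-api-key}"

poll_task() {
  task_id="$1"
  for _ in 1 2 3 4 5 6 7 8 9 10; do
    body=$(curl -s "https://api.novita.ai/v3/async/task-result?task_id=${task_id}" \
      --header "Authorization: Bearer ${API_KEY}")
    # Pull the status field out of the JSON response.
    status=$(printf '%s' "$body" | sed -n 's/.*"status"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p')
    case "$status" in
      TASK_STATUS_SUCCEED) printf '%s\n' "$body"; return 0 ;;
      TASK_STATUS_FAILED)  echo "task failed" >&2; return 1 ;;
    esac
    sleep 5   # still running; wait before the next poll
  done
  echo "timed out" >&2
  return 1
}
```

A real integration would also cap total wait time and handle transient HTTP errors; this sketch only loops a fixed number of times.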
Control the data content of the mock event. When set to TASK_STATUS_SUCCEED, you’ll receive a normal response; when set to TASK_STATUS_FAILED, you’ll receive an error response. Supports: TASK_STATUS_SUCCEED, TASK_STATUS_FAILED.
Width of the output video. Supports: 480, 720, 832, 1280. Default: 832. If the width or height is not specified, the width and the height will be forced to 832 and 480 respectively.
Height of the output video. Default: 480. If the width or height is not specified, the width and the height will be forced to 832 and 480 respectively.
The output video will maintain the input image’s aspect ratio, and the width x height setting only determines the output video’s clarity. For example, a 720p video will be clearer than a 480p video.
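As an illustration of the supported dimensions above, a small client-side helper can check a requested width or height before submitting a request. This is purely a convenience sketch; the API enforces the same constraint server-side.

```shell
#!/bin/sh
# Illustrative client-side check of the supported output dimensions
# (480, 720, 832, 1280); the service enforces the same constraint.
is_supported_dim() {
  case "$1" in
    480|720|832|1280) return 0 ;;
    *) return 1 ;;
  esac
}
```

For example, `is_supported_dim 720` succeeds, while `is_supported_dim 1080` fails.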
The path to the LoRA model. You can specify either a LoRA model name from Hugging Face, for example: Remade-AI/Painting; or a model download URL from Civitai, for example: https://civitai.com/api/download/models/1513385?type=Model&format=SafeTensor.
The LoRA model must be compatible with Wan 2.1 14B I2V; otherwise it will not work. Please check compatibility before using it.
A seed is a number that initializes the noise generation, making the output deterministic. Using the same seed and set of parameters will produce identical content each time. Range: -1 <= x <= 9999999999. Default: -1.
The flow_shift parameter primarily affects the speed and magnitude of object movement in the video. Higher values produce more pronounced and faster movement, while lower values make the motion slower and more subtle. Range: 1 <= x <= 10. Default: 5.0.
The enable_safety_checker parameter controls whether the safety filter is applied to the generated content. When enabled, it helps filter out potentially harmful or inappropriate content from the video output. Default: true.
Here is an example of how to use the Wan 2.1 Image to Video API.
Generate a task_id by sending a POST request to the Wan 2.1 Image to Video API.
Request:
curl --location 'https://api.novita.ai/v3/async/wan-i2v' \
--header 'Authorization: Bearer {{API Key}}' \
--header 'Content-Type: application/json' \
--data '{
  "image_url": "https://pub-f964a1c641c04024bce400ad128c8cd6.r2.dev/wan-i2v-input-image.jpg",
  "height": 1280,
  "width": 720,
  "steps": 25,
  "seed": -1,
  "prompt": "A cute panda is walking in the grassland slowly."
}'
Response:
{
  "task_id": "{Returned Task ID}"
}
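The returned task_id can be captured in a shell variable for the next step. A minimal sed-based extraction is sketched below; the response string is a stand-in, not a real task ID.

```shell
#!/bin/sh
# Stand-in response; a real call returns an actual task_id value.
RESPONSE='{"task_id":"example-task-id"}'
# Pull the task_id field out of the JSON. A JSON-aware tool such as jq
# is more robust; sed keeps the sketch dependency-free.
TASK_ID=$(printf '%s' "$RESPONSE" | sed -n 's/.*"task_id"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p')
```

In practice you would assign `RESPONSE` from the curl command's output instead of a literal string.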
Use task_id to get output videos.
HTTP status codes in the 2xx range indicate that the request has been successfully accepted, while status codes in the 5xx range indicate internal server errors. You can get the video URLs in the videos field of the response. Request: