Text to Video
This API transforms textual descriptions into dynamic videos. By interpreting and visualizing the input text, it creates engaging video content that corresponds to the described scenarios, events, or narratives. This capability is ideal for content creation where the visual representation of text-based information enhances understanding or entertainment value.
This is an asynchronous API; only the task_id will be returned. You should use the task_id to request the Task Result API to retrieve the video generation results.
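Below is a minimal sketch of this asynchronous flow in Python using the requests library. The base URL, the submission path, and the payload field names are assumptions for illustration, not values confirmed by this document; only the task_id field and the /v3/async/task-result path come from the text.

```python
import requests

BASE_URL = "https://api.example.com"     # placeholder: use your provider's base URL
SUBMIT_PATH = "/v3/async/txt2video"      # assumed submission path; check the API reference

headers = {
    "Content-Type": "application/json",  # request header described below
    "Authorization": "Bearer YOUR_API_KEY",  # Bearer authentication format
}

# Submit the generation task; the response only contains a task_id.
payload = {"prompts": [{"prompt": "a rocket launching into space", "frames": 16}]}
resp = requests.post(BASE_URL + SUBMIT_PATH, json=payload, headers=headers)
resp.raise_for_status()
task_id = resp.json()["task_id"]

# The task_id is then passed to the Task Result API (/v3/async/task-result)
# to retrieve the generated video once the task has finished.
print(task_id)
```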
Request Headers
Content-Type: Enum: application/json
Authorization: Bearer authentication format, for example: Bearer {{API Key}}.
Request Body
Optional extra parameters for the request.
Name of the SD 1.x checkpoint to use. Retrieve the corresponding sd_name value by invoking the Query Model API with filter.types=checkpoint as the query parameter.
Height of the video, range [256, 1024]
Width of the video, range [256, 1024]
The number of denoising steps. More steps usually produce higher quality content but take more time to generate. Range [1, 50]
The total number of frames across all prompts must be less than or equal to 128; this total is the cumulative sum of the frames specified for each prompt.
Text input that will not guide the generation, separated by commas. Range [1, 1024].
This setting determines how closely Stable Diffusion will adhere to your prompt. Higher guidance forces the model to better follow the prompt but may result in lower quality output. Range [1, 30].
A seed is a number from which Stable Diffusion generates noise, which makes generation deterministic: using the same seed and set of parameters will produce identical content each time. Minimum -1. Defaults to -1.
LoRA is a fast and lightweight training method that inserts and trains a significantly smaller number of parameters instead of all the model parameters. Currently supports up to 5 LoRAs.
Textual Inversion is a training method for personalizing models by learning new text embeddings from a few example images. Currently supports up to 5 embeddings.
The closed_loop parameter controls the behavior of an animation when it loops. Specifically, it determines whether the last frame of the animation will transition smoothly back to the first frame.
This parameter indicates the number of layers to stop from the bottom during optimization, so a clip_skip of 2 means that in an SD 1.x model, where CLIP has 12 layers, you would stop at the 10th layer. Range [1, 12]. See A Brief Introduction to Clip Skip for reference.
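Putting the parameters above together, a request body might look like the following sketch. Most field names here, such as model_name, height, width, steps, prompts, negative_prompt, guidance_scale, seed, loras, and embeddings, are assumptions inferred from the descriptions; only sd_name, closed_loop, and clip_skip appear verbatim in this document.

```python
# Sketch of a request body based on the parameter descriptions above.
# Field names not mentioned in the document are assumptions.
request_body = {
    "extra": {},                                       # optional extra parameters
    "model_name": "some_sd15_checkpoint.safetensors",  # sd_name from the Query Model API
    "height": 512,                                     # range [256, 1024]
    "width": 512,                                      # range [256, 1024]
    "steps": 20,                                       # denoising steps, range [1, 50]
    # The frames of all prompts must sum to at most 128: 64 + 64 = 128 here.
    "prompts": [
        {"prompt": "a sunrise over the ocean", "frames": 64},
        {"prompt": "a storm rolling in over the ocean", "frames": 64},
    ],
    "negative_prompt": "blurry, low quality",          # comma-separated, range [1, 1024]
    "guidance_scale": 7.5,                             # range [1, 30]
    "seed": -1,                                        # minimum -1, defaults to -1
    "loras": [{"model_name": "example_lora", "strength": 0.8}],  # up to 5 LoRAs
    "embeddings": [{"model_name": "example_embedding"}],         # up to 5 embeddings
    "closed_loop": True,   # last frame transitions smoothly back to the first
    "clip_skip": 2,        # range [1, 12]
}
```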
Response
Use the task_id to request the Task Result API to retrieve the generated outputs.
Example
This API helps generate videos from text. The returned video can be accessed via the /v3/async/task-result API using the task_id.
Try it in playground.
Request:
Response:
Use the task_id to retrieve the generated videos.
HTTP status codes in the 2xx range indicate that the request has been successfully accepted, while status codes in the 5xx range indicate internal server errors.
You can get the video URLs in the videos field of the response.
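As a sketch of this retrieval step, the snippet below polls the Task Result API and downloads the returned files. The base URL, the task_id query-parameter name, and the video_url field inside each videos entry are assumptions; only the /v3/async/task-result path and the videos field come from this document.

```python
import requests

BASE_URL = "https://api.example.com"     # placeholder: use your provider's base URL
headers = {"Authorization": "Bearer YOUR_API_KEY"}

# Query the Task Result API with the task_id returned by the generation request.
result = requests.get(
    BASE_URL + "/v3/async/task-result",
    params={"task_id": "YOUR_TASK_ID"},  # assumed query-parameter name
    headers=headers,
).json()

# The video URLs are expected in the `videos` field of the response.
for i, video in enumerate(result.get("videos", [])):
    url = video.get("video_url") if isinstance(video, dict) else video
    data = requests.get(url).content
    with open(f"video_{i}.mp4", "wb") as f:
        f.write(data)
```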
Request:
Response:
Video files: