POST /v3/async/txt2video

This API transforms textual descriptions into dynamic videos. By interpreting and visualizing the input text, it creates engaging video content that corresponds to the described scenarios, events, or narratives. This capability is ideal for content creation where the visual representation of text-based information enhances understanding or entertainment value.

This is an asynchronous API; only the task_id will be returned. You should use the task_id to request the Task Result API to retrieve the video generation results.

Request Headers

Content-Type
string
required

Enum: application/json

Authorization
string
required

Bearer authentication format, for example: Bearer {{API Key}}.

Request Body

extra
object

Optional extra parameters for the request.

model_name
string
required

Name of the SD 1.x checkpoint model. Retrieve the corresponding sd_name value by invoking the Query Model API with filter.types=checkpoint as the query parameter.

height
integer
required

Height of the video, range [256, 1024]

width
integer
required

Width of the video, range [256, 1024]

steps
integer
required

The number of denoising steps. More steps usually produce higher quality content but take more time to generate. Range [1, 50]

prompts
object[]
required

An array of prompt clips. The cumulative sum of frames across all prompts must be less than or equal to 128.

frames
integer
required

Number of frames in this video clip, range [8, 64].

prompt
string
required

Text prompt that guides the generation, with terms separated by commas. Length range [1, 1024].

negative_prompt
string

Text describing what should not appear in the generation, with terms separated by commas. Length range [1, 1024].

guidance_scale
number

This setting determines how closely Stable Diffusion adheres to your prompt. Higher guidance forces the model to follow the prompt more closely but may result in lower quality output. Range [1, 30].

seed
integer

A seed is the number from which Stable Diffusion generates its initial noise, making generation deterministic: the same seed and set of parameters will produce identical content each time. Minimum -1. Defaults to -1.

loras
object[]

LoRA is a fast and lightweight training method that inserts and trains a significantly smaller number of parameters instead of all the model parameters. Currently supports up to 5 LoRAs.

embeddings
object[]

Textual Inversion is a training method for personalizing models by learning new text embeddings from a few example images, currently supports up to 5 embeddings.

closed_loop
boolean

The closed_loop parameter controls the behavior of an animation when it loops. Specifically, it determines whether the last frame of the animation will transition smoothly back to the first frame.

clip_skip
integer

This parameter sets the number of CLIP layers to skip from the end during optimization. For example, clip_skip of 2 in an SD 1.x model, whose CLIP text encoder has 12 layers, means stopping at the 10th layer. Range [1, 12]. See A brief introduction to Clip Skip for reference.
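As a sketch of how the constraints above fit together, the helper below (hypothetical, not part of the API or its SDKs) builds a request body and checks the documented ranges before sending — in particular, that the cumulative frames across all prompts stay at or below 128:

```python
# Hypothetical client-side validator for the txt2video request body.
# Field names mirror the parameters documented above; the helper itself
# is not part of the Novita API.

def build_txt2video_body(model_name, prompts, height=512, width=512,
                         steps=20, seed=-1, negative_prompt=None):
    """Validate documented ranges and return a request-body dict."""
    if not (256 <= height <= 1024 and 256 <= width <= 1024):
        raise ValueError("height and width must be in [256, 1024]")
    if not 1 <= steps <= 50:
        raise ValueError("steps must be in [1, 50]")
    total_frames = sum(p["frames"] for p in prompts)
    if total_frames > 128:
        raise ValueError(f"total frames {total_frames} exceeds 128")
    for p in prompts:
        if not 8 <= p["frames"] <= 64:
            raise ValueError("each clip's frames must be in [8, 64]")
    body = {
        "model_name": model_name,
        "height": height,
        "width": width,
        "steps": steps,
        "seed": seed,
        "prompts": prompts,
    }
    if negative_prompt is not None:
        body["negative_prompt"] = negative_prompt
    return body
```

The returned dict can be serialized with json.dumps and sent as the request body shown in the curl example below.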

Response

task_id
string

Use the task_id to request the Task Result API to retrieve the generated outputs.

Example

This API generates videos from text. The generated video can be retrieved via the /v3/async/task-result endpoint using the task_id.

Try it in playground.

Request:

curl --location 'https://api.novita.ai/v3/async/txt2video' \
--header 'Authorization: Bearer {{API Key}}' \
--header 'Content-Type: application/json' \
--data '{
    "model_name": "darkSushiMixMix_225D_64380.safetensors",
    "height": 512,
    "width": 512,
    "steps": 20,
    "seed": -1,
    "prompts": [
        {
            "frames": 32,
            "prompt": "In the wintry dusk, a little girl holds matches tightly."
        },
        {
            "frames": 32,
            "prompt": "A little girl, barefoot on the frosty pavement, seeks solace."
        },
        {
            "frames": 32,
            "prompt": "A little girl with each match experiences a fleeting dance of warmth and hope."
        },
        {
            "frames": 32,
            "prompt": "In the quiet night, a little girl's silent story unfolds."
        }
    ],
    "negative_prompt": "nsfw, ng_deepnegative_v1_75t, badhandv4, (worst quality:2), (low quality:2), (normal quality:2), lowres, ((monochrome)), ((grayscale)), watermark"
}'

Response:

{
    "task_id": "fa20dff3-18cb-4417-a7f8-269456a35154"
}

Use task_id to get videos

HTTP status codes in the 2xx range indicate that the request has been successfully accepted, while status codes in the 5xx range indicate internal server errors.

You can get the video URL in the videos field of the response.

Request:

curl --location --request GET 'https://api.novita.ai/v3/async/task-result?task_id=fa20dff3-18cb-4417-a7f8-269456a35154' \
--header 'Authorization: Bearer {{API Key}}'

Response:

{
    "task": {
        "task_id": "fa20dff3-18cb-4417-a7f8-269456a35154",
        "task_type": "TXT_TO_VIDEO",
        "status": "TASK_STATUS_SUCCEED",
        "reason": "",
        "eta": 0,
        "progress_percent": 100
    },
    "images": [],
    "videos": [
        {
            "video_url": "https://faas-output-video.s3.ap-southeast-1.amazonaws.com/test/61bc0452-03a5-4e5b-ba78-2dbd3db6cc7d/99a87dec55c6431189aff4bad39fb4a0.mp4?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIASVPYCN6LRCW3SOUV%2F20231219%2Fap-southeast-1%2Fs3%2Faws4_request&X-Amz-Date=20231219T143829Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&x-id=GetObject&X-Amz-Signature=3b828cd8e72d9e83eb625e5e175defbfbcfc97acf4a605dc83588ae949b698b4",
            "video_url_ttl": "3600",
            "video_type": "mp4"
        }
    ]
}
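A minimal sketch of handling this response, assuming the status values shown in the example above (the extract_video_urls helper is hypothetical, not part of the API):

```python
# Hypothetical helper that pulls video URLs out of a task-result
# response once the task has reached TASK_STATUS_SUCCEED.

def extract_video_urls(result):
    """Return a list of video URLs if the task succeeded, or None if it
    is still queued or processing (poll the Task Result API again)."""
    if result["task"]["status"] == "TASK_STATUS_SUCCEED":
        return [v["video_url"] for v in result.get("videos", [])]
    return None  # not finished yet; retry after a short delay
```

Note that the signed URL in video_url expires after video_url_ttl seconds (3600 in the example), so download the file promptly after the task succeeds.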
