Inpainting
Inpainting is a conservation process in which damaged, deteriorated, or missing parts of an artwork are filled in to present a complete image.
This is an asynchronous API; only the task_id will be returned. You should use the task_id to request the Task Result API to retrieve the image generation results.
Request Headers
Enum: application/json
Bearer authentication format, for example: Bearer {{API Key}}.
Request Body
Optional extra parameters for the request.
The returned image type. Default is png.
Enum: png, webp, jpeg
Webhook settings. More details can be found at Webhook Documentation.
The URL of the webhook endpoint. Novita AI will send the task generated outputs to your specified webhook endpoint.
By specifying Test Mode, a mock event will be sent to the webhook endpoint.
Set to true to enable Test Mode, or false to disable it. The default is false.
Control the data content of the mock event. When set to TASK_STATUS_SUCCEED, you’ll receive a normal response; when set to TASK_STATUS_FAILED, you’ll receive an error response.
Enum: TASK_STATUS_SUCCEED, TASK_STATUS_FAILED
Customer storage settings for saving the generated outputs.
By default, the generated outputs will be saved to Novita AI Storage temporarily and privately.
AWS S3 Bucket settings.
AWS S3 region.
AWS S3 bucket name.
AWS S3 bucket path for saving generated outputs.
Set this option to True to save the generated outputs directly to the specified path without creating any additional directory hierarchy.
If set to False, Novita AI will create an additional directory in the path to save the generated outputs. The default is False.
Dedicated Endpoints settings, which only take effect for users who have already subscribed to Dedicated Endpoints. More details can be found in the Dedicated Endpoints Documentation.
Set to true to schedule this task to use your Dedicated Endpoints' dedicated resources. Default is false.
When set to true, NSFW detection will be enabled, incurring an additional cost of $0.0015 for each generated image.
0: Explicit Nudity, Explicit Sexual Activity, Sex Toys; Hate Symbols.
1: Explicit Nudity, Explicit Sexual Activity, Sex Toys; Hate Symbols; Non-Explicit Nudity, Obstructed Intimate Parts, Kissing on the Lips.
2: Explicit Nudity, Explicit Sexual Activity, Sex Toys; Hate Symbols; Non-Explicit Nudity, Obstructed Intimate Parts, Kissing on the Lips; Female Swimwear or Underwear, Male Swimwear or Underwear.
Enum: 0, 1, 2
This parameter specifies the name of the model checkpoint. Retrieve the corresponding sd_name value by invoking the Query Model API with filter.types=checkpoint&filter.is_inpainting=true as the query parameter.
The base64 representation of the original image, with a maximum resolution of 16 megapixels and a maximum file size of 30 Mb.
The base64 representation of the mask image, with a maximum resolution of 16 megapixels and a maximum file size of 30 Mb. The mask image should have the same resolution as the original image.
Text input required to guide the image generation, divided by commas. Range [1, 1024].
Number of images generated in a single request. Range [1, 8].
The number of denoising steps. More steps usually produce higher-quality images but take more time to generate. Range [1, 100].
This setting controls how closely Stable Diffusion follows your prompt; higher guidance forces the model to adhere to the prompt more strictly, but can result in lower-quality output. Range [1, 30].
This parameter determines the denoising algorithm employed during the sampling phase of Stable Diffusion. Each option represents a distinct method by which the model incrementally generates new images. These algorithms differ significantly in their processing speed, output quality, and the specific characteristics of the images they generate, allowing users to tailor the image generation process to meet precise requirements. Get reference at A brief introduction to Sampler.
Enum: Euler a, Euler, LMS, Heun, DPM2, DPM2 a, DPM++ 2S a, DPM++ 2M, DPM++ SDE, DPM fast, DPM adaptive, LMS Karras, DPM2 Karras, DPM2 a Karras, DPM++ 2S a Karras, DPM++ 2M Karras, DPM++ SDE Karras, DDIM, PLMS, UniPC
Defines the degree of border blurring for the filled area. A lower value results in a sharper border, maintaining clear delineation between masked and unmasked areas. Conversely, a higher value increases the blur effect, creating a smoother, more blended transition at the borders. This adjustment allows for greater control over the visual integration of the mask with the original image. Range [0, 64].
Text input that specifies what to exclude from the generated images, divided by commas. Range [1, 1024].
VAE (Variational Auto Encoder). The sd_vae can be accessed in the API /v3/models with query parameters type=vae, such as sd_name: customVAE.safetensors. Get reference at A brief introduction to VAE.
A seed is the number from which Stable Diffusion generates noise, which makes generation deterministic: using the same seed and set of parameters will produce an identical image each time. Minimum -1. Defaults to -1.
LoRA is a fast and lightweight training method that inserts and trains a significantly smaller number of parameters instead of all the model parameters. Currently supports up to 5 LoRAs.
Name of the LoRA. Retrieve the corresponding sd_name_in_api value by invoking the Get Model API endpoint with filter.types=lora as the query parameter.
The strength of the LoRA. The larger the value, the more the output is biased towards the LoRA. Range [0, 1].
Textual Inversion is a training method for personalizing models by learning new text embeddings from a few example images, currently supports up to 5 embeddings.
Name of the Textual Inversion model. Call the Get Model API endpoint with the parameter filter.types=textualinversion and use the sd_name_in_api field as the model_name.
This parameter indicates the number of CLIP layers to skip from the bottom during optimization. For example, clip_skip of 2 means that in an SD1.x model, where CLIP has 12 layers, processing stops at the 10th layer. Range [1, 12]. Get reference at A brief introduction to Clip Skip.
Conceptually, the strength indicates the degree to which the reference image_base64 should be transformed. Must be between 0 and 1. image_base64 will be used as a starting point, with increasing levels of noise added as the strength value increases. The number of denoising steps depends on the amount of noise initially added. When strength is 1, added noise will be maximal and the denoising process will run for the full number of iterations specified in steps. A value of 1, therefore, essentially ignores image_base64.
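As a rough illustration of how strength scales the work done (a simplification; the exact behavior varies by sampler and implementation), the number of denoising iterations actually executed is approximately strength × steps:

```python
def effective_steps(steps: int, strength: float) -> int:
    """Approximate number of denoising iterations run for img2img at a
    given strength (A1111-style convention: roughly strength * steps)."""
    return min(steps, int(steps * strength))

print(effective_steps(25, 0.85))  # 21 iterations
print(effective_steps(25, 1.0))   # 25: a full run, so image_base64 is essentially ignored
```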
Specifies whether to apply or protect the filled area. When set to 0, the inpainting process considers the entire image, which may result in the mask area failing to present the correct details, but the mask area will look more natural or blend better with the whole image. When set to 1, only the masked area is inpainted, ignoring the unmasked areas, which can produce more detailed and natural results within the mask but may appear strange or incompatible with the original background. Default is 0.
Enum: 0, 1
This setting controls how many additional pixels around the mask are used as context in only-masked mode, i.e. how much margin to add when Only masked is selected. You can increase this amount if you are having trouble producing a proper image; the downside is that it may decrease output quality. Guidance: https://civitai.com/articles/161/basic-inpainting-guide. Range [0, 256]. Default is 8.
Specifies whether to invert the mask. Set to 1 to invert the mask. Default is 0.
Enum: 0, 1
Noise multiplier for img2img settings. This scaling factor is applied to the random latent tensor for img2img. Lowering the value of this multiplier reduces the amount of noise introduced into the image transformation process, which can help reduce flickering or instability in the output image. Range [0, 1.5]. Default is 0.5.
Response
Use the task_id to request the Task Result API to retrieve the generated outputs.
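The retrieval step can be sketched in Python with only the standard library. The Task Result endpoint URL and the task.status field used here are assumptions inferred from the asynchronous flow and the webhook statuses documented above; check the Task Result API reference for the exact path and response shape.

```python
import json
import time
import urllib.request

API_KEY = "{{API Key}}"  # replace with your API key

def task_result_url(task_id: str) -> str:
    # Assumed Task Result endpoint path; adjust if the Task Result
    # API reference specifies a different one.
    return f"https://api.novita.ai/v3/async/task-result?task_id={task_id}"

def poll_task(task_id: str, interval: float = 2.0, timeout: float = 300.0) -> dict:
    """Poll the Task Result API until the task finishes or times out."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        req = urllib.request.Request(
            task_result_url(task_id),
            headers={"Authorization": f"Bearer {API_KEY}"},
        )
        with urllib.request.urlopen(req) as resp:
            data = json.load(resp)
        # Status names follow the webhook statuses documented above.
        status = data.get("task", {}).get("status")
        if status in ("TASK_STATUS_SUCCEED", "TASK_STATUS_FAILED"):
            return data
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} did not finish within {timeout}s")
```

Submit the inpainting request first, read data.task_id from its response, then pass that value to poll_task.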
Example
I have no mask images. How do I generate mask parameters in the body?
You can use our playground to obtain the mask base64 data. Please be aware that mask images must have the same resolution as the input images.
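Because the mask must match the input image's resolution, it can help to verify dimensions before submitting. Below is an illustrative stdlib-only sketch that reads the width and height from a PNG header (it handles PNG input only):

```python
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def png_size(data: bytes) -> tuple[int, int]:
    # After the 8-byte signature, the IHDR chunk starts with a 4-byte
    # length and a 4-byte type; width and height follow as big-endian uint32.
    if data[:8] != PNG_SIGNATURE:
        raise ValueError("not a PNG file")
    width, height = struct.unpack(">II", data[16:24])
    return width, height

# Example with a hand-built PNG header (640 x 480):
header = PNG_SIGNATURE + struct.pack(">I", 13) + b"IHDR" + struct.pack(">II", 640, 480)
print(png_size(header))  # (640, 480)
```

Run png_size over both the original image and the mask bytes and compare the results before encoding them.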
I already have mask images. How do I convert mask images to base64?
You can use the following code to convert mask images to base64.
```python
import base64

# mask file path
filename_input = "mask_edited.png"

# read the mask file and base64-encode it
with open(filename_input, "rb") as f:
    base64_pic = base64.b64encode(f.read()).decode("utf-8")

# write the base64 string to a text file
with open("input.txt", "w") as f:
    f.write(base64_pic)
```
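Putting it together, the sketch below encodes both files and assembles the JSON request body. Field names follow the request-body reference above; the file paths and prompt are placeholders.

```python
import base64
import json

def encode_file(path: str) -> str:
    """Read a file and return its base64 representation."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

def build_inpainting_body(image_path: str, mask_path: str,
                          prompt: str, model_name: str) -> str:
    """Assemble the JSON body for /v3/async/inpainting."""
    body = {
        "extra": {"response_image_type": "jpeg"},
        "request": {
            "model_name": model_name,
            "prompt": prompt,
            "image_num": 1,
            "steps": 25,
            "guidance_scale": 7.5,
            "sampler_name": "Euler a",
            "strength": 0.85,
            "image_base64": encode_file(image_path),
            "mask_image_base64": encode_file(mask_path),
        },
    }
    return json.dumps(body)
```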
Start requesting inpainting.
Please set the Content-Type header to application/json in your HTTP request to indicate that you are sending JSON data. Currently, only JSON format is supported.
"model_name":"realisticVisionV40_v40VAE-inpainting_81543.safetensors"
in body represent inpainting models, which, can be accessed in API /v3/model with sd_name
like %inpainting%.
Request:
curl --location --request POST 'http://api.novita.ai/v3/async/inpainting' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer {{API Key}}' \
--data-raw '{
"extra": {
"response_image_type": "jpeg"
},
"request": {
"model_name": "realisticVisionV40_v40VAE-inpainting_81543.safetensors",
"prompt": "Leonardo DiCaprio",
"negative_prompt": "(deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime), text, cropped, out of frame, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, BadDream, UnrealisticDream",
"image_num": 1,
"steps": 25,
"seed": -1,
"clip_skip": 1,
"guidance_scale": 7.5,
"sampler_name": "Euler a",
"mask_blur": 1,
"inpainting_full_res": 1,
"inpainting_full_res_padding": 32,
"inpainting_mask_invert": 0,
"initial_noise_multiplier": 1,
"strength": 0.85,
"image_base64": "{{base64 encoded image}}",
"mask_image_base64": "{{base64 encoded mask image}}"
}
}'
Response:
{
"code": 0,
"msg": "",
"data": {
"task_id": "270f4fba-2cb0-4a56-8b82-xxxx"
}
}
````"model_name":"realisticVisionV40_v40VAE-inpainting_81543.safetensors"` in body represent inpainting models, which, can be accessed in API /v3/model with `sd_name` like %inpainting%.
`Request:`
```bash
curl --location --request POST 'http://api.novita.ai/v3/async/inpainting' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer {{API Key}}' \
--data-raw '{
"extra": {
"response_image_type": "jpeg"
},
"request": {
"model_name": "realisticVisionV40_v40VAE-inpainting_81543.safetensors",
"prompt": "Leonardo DiCaprio",
"negative_prompt": "(deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime), text, cropped, out of frame, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, BadDream, UnrealisticDream",
"image_num": 1,
"steps": 25,
"seed": -1,
"clip_skip": 1,
"guidance_scale": 7.5,
"sampler_name": "Euler a",
"mask_blur": 1,
"inpainting_full_res": 1,
"inpainting_full_res_padding": 32,
"inpainting_mask_invert": 0,
"initial_noise_multiplier": 1,
"strength": 0.85,
"image_base64": "{{base64 encoded image}}",
"mask_image_base64": "{{base64 encoded mask image}}"
}
}'
Response:
{
"code": 0,
"msg": "",
"data": {
"task_id": "270f4fba-2cb0-4a56-8b82-xxxx"
}
}
Was this page helpful?