# Authentication Source: https://novita.ai/docs/api-reference/basic-authentication The `Novita AI` API uses API keys in the `request headers` to authenticate requests. You can view and manage your API keys on the [settings page](https://novita.ai/settings#key-management?utm_source=getstarted). ```js { "Authorization": "Bearer {{API Key}}" } ``` # Account Info Source: https://novita.ai/docs/api-reference/basic-get-user-info GET https://api.novita.ai/v3/user > This endpoint queries user information, primarily the functions available to the user and their account balance. ## Request Headers Enum: `application/json` Bearer authentication format, for example: Bearer \{\{API Key}}. ## Response The credit balance of the user, calculated as the Balance (shown in the top right corner of the novita.ai website) multiplied by 10,000. # Create Container Registry Authentication Source: https://novita.ai/docs/api-reference/gpu-instance-create-container-registry-auth POST https://api.novita.ai/gpu-instance/openapi/v1/repository/auth/save ## Request Headers Enum: `application/json` Bearer authentication format, for example: Bearer \{\{API Key}}. ## Request Body ## Example Request ```bash curl --request POST \ --url 'https://api.novita.ai/gpu-instance/openapi/v1/repository/auth/save' \ --header 'Authorization: Bearer {{API Key}}' \ --header 'Content-Type: application/json' \ --data '{ "name": "", "username": "", "password": "" }' ``` Response ```json {} ``` # Create Instance Source: https://novita.ai/docs/api-reference/gpu-instance-create-instance POST https://api.novita.ai/gpu-instance/openapi/v1/gpu/instance/create ## Request Headers Enum: `application/json` Bearer authentication format, for example: Bearer \{\{API Key}}. ## Request Body Instance name. ID of the product used to deploy the instance. Number of GPUs allocated to the instance. Number of vCPU cores allocated to the instance. Memory allocated to the instance (GB). 
Disk space allocated to each instance (GB). Root filesystem storage (GB). Docker image URL to initialize the instance. Docker image registry credentials in username:password format. Exposed ports. Environment variables. Environment variable key. Environment variable value. Official Docker image built-in tools. Tool name. Tool description. The port used by the tool. The type of port used by the tool. Startup command for the instance. ID of the cluster where the instance will be deployed. Mount point for the local storage. Cloud mount configuration (supports up to 30 cloud storage mounts). ID of the network storage. Mount point for the network storage. ID of the network storage. Mount point for the network storage. VPC network ID; leave empty if not using a VPC network. Instance type. Enum: `gpu` ## Response ID of the created instance. ## Example Request ```bash curl --request POST \ --url 'https://api.novita.ai/gpu-instance/openapi/v1/gpu/instance/create' \ --header 'Authorization: Bearer {{API Key}}' \ --header 'Content-Type: application/json' \ --data '{ "name": "gpu-test", "productId": "26", "gpuNum": 1, "cpuNum": 0, "memory": 0, "diskSize": 60, "rootfsSize": 30, "imageUrl": "infrai/pytorch:2.2.1", "imageAuth": "", "ports": "8080/tcp,8081/http", "envs": [ { "key": "test", "value": "test" } ], "tools": [ { "name": "Jupyter", "port": "8888", "type": "http" } ], "command": "", "clusterId": "", "localStorageMountPoint": "/workspace", "networkStorages": [ { "Id": "e797ce8f-f4ff-4b3b-b7a8-6972d96f46f2", "mountPoint": "/network_0" }, { "Id": "b924cdf5-82ec-4903-bdb2-74a03f2b0ae7", "mountPoint": "/network_1" } ], "networkId": "18f1ff05-1370-45ce-a4c6-b58d5e8d547e", "kind": "gpu" }' ``` Response ```json { "id": "" } ``` # Create Network Storage Source: https://novita.ai/docs/api-reference/gpu-instance-create-network-storage POST https://api.novita.ai/gpu-instance/openapi/v1/networkstorage/create ## Request Headers Enum: `application/json` Bearer authentication format, for 
example: Bearer \{\{API Key}}. ## Request Body ## Example Request ```bash curl --request POST \ --url 'https://api.novita.ai/gpu-instance/openapi/v1/networkstorage/create' \ --header 'Authorization: Bearer {{API Key}}' \ --header 'Content-Type: application/json' \ --data '{ "clusterId": "string", "storageName": "string", "storageSize": 0 }' ``` Response ```json "" ``` # Create Template Source: https://novita.ai/docs/api-reference/gpu-instance-create-template POST https://api.novita.ai/gpu-instance/openapi/v1/template/create ## Request Headers Enum: `application/json` Bearer authentication format, for example: Bearer \{\{API Key}}. ## Request Body Template settings. Template name. Template README content (in Markdown format). Template type.
Enum: `instance`
Template channel.
Enum: `private`
Docker image address for instance startup. Startup command for the instance. Root filesystem storage (GB). Local volume storage (GB). Local volume mount path. Exposed port settings. Exposed port types.
Enum: `http`, `tcp`
Exposed ports (maximum of 10).
Volume settings. Volume type. Volume size (GB). Volume mount path. Environment variables injected into the instance. Environment variable key. Environment variable value.
## Response ID of the created template. ## Example Request ```bash curl --request POST \ --url 'https://api.novita.ai/gpu-instance/openapi/v1/template/create' \ --header 'Authorization: Bearer {{API Key}}' \ --header 'Content-Type: application/json' \ --data '{ "template": { "name": "test", "readme": "readme", "type": "instance", "channel": "private", "image": "redis", "startCommand": "", "rootfsSize": 10, "localVolumeSize": 20, "localVolumeMount": "/workspace", "ports": [ { "type": "tcp", "ports": [ "8080", "8090" ] }, { "type": "http", "ports": [ "6000", "60001" ] } ], "volumes": [ { "type": "local", "size": 20, "mountPath": "/workspace" } ], "envs": [ { "key": "testkey", "value": "123" } ] } }' ``` Response ```json { "templateId": "1" } ``` # Delete Container Registry Auth Source: https://novita.ai/docs/api-reference/gpu-instance-delete-container-registry-auth POST https://api.novita.ai/gpu-instance/openapi/v1/repository/auth/delete ## Request Headers Enum: `application/json` Bearer authentication format, for example: Bearer \{\{API Key}}. ## Request Body ## Example Request ```bash curl --request POST \ --url 'https://api.novita.ai/gpu-instance/openapi/v1/repository/auth/delete' \ --header 'Authorization: Bearer {{API Key}}' \ --header 'Content-Type: application/json' \ --data '{ "id": "" }' ``` Response ```json {} ``` # Delete Instance Source: https://novita.ai/docs/api-reference/gpu-instance-delete-instance POST https://api.novita.ai/gpu-instance/openapi/v1/gpu/instance/delete ## Request Headers Enum: `application/json` Bearer authentication format, for example: Bearer \{\{API Key}}. ## Request Body ID of the instance to be deleted. 
## Example Request ```bash curl --request POST \ --url 'https://api.novita.ai/gpu-instance/openapi/v1/gpu/instance/delete' \ --header 'Authorization: Bearer {{API Key}}' \ --header 'Content-Type: application/json' \ --data '{ "instanceId": "" }' ``` Response ```json {} ``` # Delete Network Storage Source: https://novita.ai/docs/api-reference/gpu-instance-delete-network-storage POST https://api.novita.ai/gpu-instance/openapi/v1/networkstorage/delete ## Request Headers Enum: `application/json` Bearer authentication format, for example: Bearer \{\{API Key}}. ## Request Body ## Example Request ```bash curl --request POST \ --url 'https://api.novita.ai/gpu-instance/openapi/v1/networkstorage/delete' \ --header 'Authorization: Bearer {{API Key}}' \ --header 'Content-Type: application/json' \ --data '{ "storageId": "string" }' ``` Response ```json {} ``` # Delete Template Source: https://novita.ai/docs/api-reference/gpu-instance-delete-template POST https://api.novita.ai/gpu-instance/openapi/v1/template/delete ## Request Headers Enum: `application/json` Bearer authentication format, for example: Bearer \{\{API Key}}. ## Request Body The ID of the template to be deleted. ## Response The ID of the deleted template. ## Example Request ```bash curl --request POST \ --url 'https://api.novita.ai/gpu-instance/openapi/v1/template/delete' \ --header 'Authorization: Bearer {{API Key}}' \ --header 'Content-Type: application/json' \ --data '{ "templateId": "" }' ``` Response ```json { "templateId": "" } ``` # Edit Network Storage Source: https://novita.ai/docs/api-reference/gpu-instance-edit-network-storage POST https://api.novita.ai/gpu-instance/openapi/v1/networkstorage/update ## Request Headers Enum: `application/json` Bearer authentication format, for example: Bearer \{\{API Key}}. ## Request Body The unique identifier for the storage. The name of the storage. The size of the storage in appropriate units. 
## Example Request ```bash curl --request POST \ --url 'https://api.novita.ai/gpu-instance/openapi/v1/networkstorage/update' \ --header 'Authorization: Bearer {{API Key}}' \ --header 'Content-Type: application/json' \ --data '{ "storageId": "string", "storageName": "string", "storageSize": "string" }' ``` Response ```json {} ``` # Edit Template Source: https://novita.ai/docs/api-reference/gpu-instance-edit-template POST https://api.novita.ai/gpu-instance/openapi/v1/template/update ## Request Headers Enum: `application/json` Bearer authentication format, for example: Bearer \{\{API Key}}. ## Example Request ```bash curl --request POST \ --url 'https://api.novita.ai/gpu-instance/openapi/v1/template/update' \ --header 'Authorization: Bearer {{API Key}}' ``` Response ```json {} ``` # Get Instance Source: https://novita.ai/docs/api-reference/gpu-instance-get-instance GET https://api.novita.ai/gpu-instance/openapi/v1/gpu/instance ## Request Headers Enum: `application/json` Bearer authentication format, for example: Bearer \{\{API Key}}. ## Query Parameters ID of the instance being queried. ## Response Instance ID. Instance name. Cluster ID. Cluster name. Instance status. SSH command for remote login. SSH password for remote login. Docker image URL used to initialize the instance. Image authentication ID. Command to be executed on instance startup. Number of CPUs allocated to the instance. Amount of memory allocated to the instance. Number of GPUs allocated to the instance. Timestamp when the instance was created. Timestamp when the instance was last started. Timestamp when the instance was last stopped. Total time the instance has been in use. Billing mode for the instance. Product ID associated with the instance. Product name associated with the instance. Size of the root filesystem in GB. List of tools installed on the instance. Size of the disk allocated to the instance. Price of the instance. Price for local storage. Price for exited local storage. 
## Example Request ```bash curl --request GET \ --url 'https://api.novita.ai/gpu-instance/openapi/v1/gpu/instance' \ --header 'Authorization: Bearer {{API Key}}' ``` Response ```json { "id": "12948885700ef8e3", "name": "", "clusterId": "China", "clusterName": "", "status": "running", "sshCommand": "", "sshPassword": "", "imageUrl": "nginx:latest", "imageAuthId": "", "command": "", "cpuNum": "8", "memory": "63", "gpuNum": "1", "createdAt": "1713230117", "lastStartedAt": "1713258924", "lastStoppedAt": "0", "useTime": "30286", "portMappings": [ { "port": 6000, "endpoint": "http://12948885700ef8e3-6000-proxy.gpu-instance.novita.ai.com:54080", "type": "http" }, { "port": 60001, "endpoint": "http://12948885700ef8e3-60001-proxy.gpu-instance.novita.ai.com:54080", "type": "http" }, { "port": 8080, "endpoint": "54.193.255.99:42741", "type": "tcp" }, { "port": 8090, "endpoint": "54.193.255.99:34121", "type": "tcp" } ], "billingMode": "instance_afterusage", "productId": "26", "productName": "RTX 4090 24GB", "rootfsSize": 10, "volumeMounts": [ { "type": "local", "size": 20, "Id": "054358de-a6a4-40fa-8b9d-f7eb461b7da8", "mountPath": "/workspace" } ], "tools": [], "statusError": { "state": "", "message": "" }, "envs": [ { "key": "testkey", "value": "123" } ], "diskSize": 0, "instancePrice": "269100", "localStoragePrice": "100", "exitedLocalStoragePrice": "100", "connectComponentSSH": null, "connectComponentWebTerminal": { "port": 0, "address": "54.193.255.99:41219", "isRunning": false, "isShow": true }, "connectComponentJupyter": null, "connectComponentLog": { "port": 0, "address": "http://12948885700ef8e3-2224-proxy.gpu-instance.novita.ai.com:54080/pod", "isRunning": false, "isShow": true } } ``` # Get Template Source: https://novita.ai/docs/api-reference/gpu-instance-get-template GET https://api.novita.ai/gpu-instance/openapi/v1/template ## Request Headers Enum: `application/json` Bearer authentication format, for example: Bearer \{\{API Key}}. 
## Query Parameters ID of the template being queried. ## Response Template settings. Template ID. Template creation time. ID of the user who created the template. List of tools associated with the template. Template name. Template README content (in Markdown format). Template type.Enum: `instance` Template channel.Enum: `private` Docker image address for instance startup. Credentials for the Docker image registry to pull private images. Startup command for the instance. Rootfs storage (GB). Volumes settings. Volume type.Enum: `local` Volume size (GB). Volume mount path. Exposed ports settings. Exposed port types.Enum: `http, tcp` Exposed ports (maximum 10). Environment variables injected into instance. Environment variable key. Environment variable value. ## Example Request ```bash curl --request GET \ --url 'https://api.novita.ai/gpu-instance/openapi/v1/template' \ --header 'Authorization: Bearer {{API Key}}' ``` Response ```json { "template": { "Id": "212", "user": "9141320498493177", "name": "ubuntu", "readme": "readme", "logo": "", "type": "instance", "channel": "private", "image": "ubuntu:latest", "imageAuth": "", "startCommand": "", "rootfsSize": 10, "volumes": [ { "type": "local", "size": 20, "mountPath": "/workspace" } ], "ports": [ { "type": "tcp", "ports": [8080, 8090] }, { "type": "http", "ports": [6000, 60001] } ], "envs": [ { "key": "testkey", "value": "123" } ], "tools": [], "createTime": "1713162260" } } ``` # List Clusters Source: https://novita.ai/docs/api-reference/gpu-instance-list-clusters GET https://api.novita.ai/gpu-instance/openapi/v1/clusters ## Request Headers Enum: `application/json` Bearer authentication format, for example: Bearer \{\{API Key}}. ## Response ID of this cluster. Name of this cluster. Available GPU types in this cluster. Indicates whether network storage is supported. 
## Example Request ```bash curl --request GET \ --location 'https://api.novita.ai/gpu-instance/openapi/v1/clusters' \ --header 'Authorization: Bearer {{API Key}}' ``` Response ```json { "data": [ { "id": "25", "name": "NA-01", "availableGpuType": ["RTX 4090 24GB"], "supportNetworkStorage": false } ] } ``` # List Container Registry Auth Source: https://novita.ai/docs/api-reference/gpu-instance-list-container-registry-auth GET https://api.novita.ai/gpu-instance/openapi/v1/repository/auths ## Request Headers Enum: `application/json` Bearer authentication format, for example: Bearer \{\{API Key}}. ## Response ## Example Request ```bash curl --request GET \ --url 'https://api.novita.ai/gpu-instance/openapi/v1/repository/auths' \ --header 'Authorization: Bearer {{API Key}}' ``` Response ```json { "data": [ { "id": "75irlsvvsehnqdajg2r481mjk54ecodc", "name": "test", "username": "uuuu", "password": "1111" }, { "id": "eih4rdk77avum5lyau291w1i54zd897w", "name": "test", "username": "uuuu", "password": "1111" } ] } ``` # List Instances Source: https://novita.ai/docs/api-reference/gpu-instance-list-instances GET https://api.novita.ai/gpu-instance/openapi/v1/gpu/instances ## Request Headers Enum: `application/json` Bearer authentication format, for example: Bearer \{\{API Key}}. ## Query Parameters Maximum number of entries returned on one page. Current page index. Filter by instance name Filter by product name Instance status 1: Use saving plan; 2: Do not use saving plan ## Response Maximum number of entries returned on one page. Current page index. 
## Example Request ```bash curl --request GET \ --url https://api.novita.ai/gpu-instance/openapi/v1/gpu/instances \ --header 'Authorization: Bearer {{API Key}}' ``` Response ```json { "instances": [ { "id": "12948885700ef8e3", "name": "", "clusterId": "5", "clusterName": "China", "status": "running", "imageUrl": "nginx:latest", "command": "", "cpuNum": "8", "memory": "63", "gpuNum": "1", "createdAt": "1713230117", "lastStartedAt": "1713258924", "lastStoppedAt": "0", "useTime": "30131", "billingMode": "instance_afterusage", "productId": "26", "productName": "RTX 4090 24GB", "rootfsSize": 10, "statusError": { "state": "", "message": "" }, "envs": [ { "key": "testkey", "value": "123" } ], "diskSize": 0, "instancePrice": "269100", "localStoragePrice": "100", "exitedLocalStoragePrice": "100" } ], "pageSize": 0, "pageNum": 0, "total": 2 } ``` # List Network Storage Source: https://novita.ai/docs/api-reference/gpu-instance-list-network-storage GET https://api.novita.ai/gpu-instance/openapi/v1/networkstorages/list ## Request Headers Enum: `application/json` Bearer authentication format, for example: Bearer \{\{API Key}}. ## Query Parameters ## Response ## Example Request ```bash curl --request GET \ --url 'https://api.novita.ai/gpu-instance/openapi/v1/networkstorages/list' \ --header 'Authorization: Bearer {{API Key}}' ``` Response ```json { "data": [ { "storageId": "d4e82677-3f80-4020-a731-d15b1c589aa8", "storageName": "123", "storageSize": 10, "clusterId": "5", "clusterName": "EU-01", "price": "100" }, { "storageId": "082383c6-bc28-4bfa-a3b3-b6d6511bdf64", "storageName": "123", "storageSize": 10, "clusterId": "5", "clusterName": "EU-01", "price": "100" } ], "total": 2 } ``` # List Products Source: https://novita.ai/docs/api-reference/gpu-instance-list-products GET https://api.novita.ai/gpu-instance/openapi/v1/products ## Request Headers Enum: `application/json` Bearer authentication format, for example: Bearer \{\{API Key}}. 
## Query Parameters Filter the products of the cluster with ID = clusterId. Filter the products with Name = productName. ## Response Product ID. Product name. Number of vCPU cores allocated to each instance. Memory allocated to each instance (in GB). Disk space allocated to each instance (in GB). Indicates whether the product is available for deployment. Price per hour for this product, in units of \$0.00001. ## Example Request ```bash curl --request GET \ --url 'https://api.novita.ai/gpu-instance/openapi/v1/products' \ --header 'Authorization: Bearer {{API Key}}' ``` Response ```json { "data": [ { "id": "23", "name": "RTX 3090 24GB", "cpuPerGpu": 8, "memoryPerGpu": 56, "diskPerGpu": 670, "availableDeploy": true, "price": 20000 } ] } ``` # List Templates Source: https://novita.ai/docs/api-reference/gpu-instance-list-templates GET https://api.novita.ai/gpu-instance/openapi/v1/templates ## Request Headers Enum: `application/json` Bearer authentication format, for example: Bearer \{\{API Key}}. ## Query Parameters Maximum number of entries returned on one page. Current page index. Filter the templates by name. Filter the templates by type. Filter the templates by channel. ## Response Template ID. Template creation time. ID of the user who created the template. Template name. Template README content (in Markdown format). Template type.Enum: `instance` Template channel.Enum: `private` Docker image address for instance startup. Credentials for the Docker image registry to pull private images. Startup command for the instance. Rootfs storage (GB). Volumes settings. Volume type.Enum: `local` Volume size (GB). Volume mount path. Exposed ports settings. Exposed port types.Enum: `http, tcp` Exposed ports (maximum 10). Environment variables injected into instance. Environment variable key. Environment variable value. Maximum number of entries returned on one page. Current page index. Total number of templates. 
## Example Request ```bash curl --request GET \ --url 'https://api.novita.ai/gpu-instance/openapi/v1/templates' \ --header 'Authorization: Bearer {{API Key}}' ``` Response ````json { "template": [ { "Id": "100", "user": "1032922317920824", "name": "name", "readme": "```\nreadme\n```\n# uiohiu\n68686", "logo": "", "type": "instance", "channel": "private", "image": "image", "imageAuth": "", "startCommand": "startCommand", "rootfsSize": 10, "volumes": [ { "type": "local", "size": 20, "mountPath": "localVolumeMount" } ], "ports": [ { "type": "tcp", "ports": [8080, 8090] }, { "type": "http", "ports": [6000, 60001] } ], "envs": [ { "key": "testkey", "value": "123" } ], "tools": [], "createTime": "1711995567" } ], "pageSize": 1, "pageNum": 0, "total": 10 } ```` # Restart Instance Source: https://novita.ai/docs/api-reference/gpu-instance-restart-instance POST https://api.novita.ai/gpu-instance/openapi/v1/gpu/instance/restart **This API is used to restart a specific GPU instance.** ## Request Headers Enum: `application/json` Bearer authentication format, for example: Bearer \{\{API Key}}. ## Request Body ID of the instance to be restarted. ## Example Request ```bash curl --request POST \ --url 'https://api.novita.ai/gpu-instance/openapi/v1/gpu/instance/restart' \ --header 'Authorization: Bearer {{API Key}}' \ --header 'Content-Type: application/json' \ --data '{ "instanceId": "" }' ``` Response ```json {} ``` # Start Instance Source: https://novita.ai/docs/api-reference/gpu-instance-start-instance POST https://api.novita.ai/gpu-instance/openapi/v1/gpu/instance/start ## Request Headers Enum: `application/json` Bearer authentication format, for example: Bearer \{\{API Key}}. ## Request Body ID of the instance to be started. 
## Example Request ```bash curl --request POST \ --url 'https://api.novita.ai/gpu-instance/openapi/v1/gpu/instance/start' \ --header 'Authorization: Bearer {{API Key}}' \ --header 'Content-Type: application/json' \ --data '{ "instanceId": "" }' ``` Response ```json {} ``` # Stop Instance Source: https://novita.ai/docs/api-reference/gpu-instance-stop-instance POST https://api.novita.ai/gpu-instance/openapi/v1/gpu/instance/stop ## Request Headers Enum: `application/json` Bearer authentication format, for example: Bearer \{\{API Key}}. ## Request Body ID of the instance to be stopped. ## Example Request ```bash curl --request POST \ --url 'https://api.novita.ai/gpu-instance/openapi/v1/gpu/instance/stop' \ --header 'Authorization: Bearer {{API Key}}' \ --header 'Content-Type: application/json' \ --data '{ "instanceId": "" }' ``` Response ```json {} ``` # Cleanup Source: https://novita.ai/docs/api-reference/model-apis-cleanup POST https://api.novita.ai/v3/cleanup **Easily remove unwanted objects, defects, people, or text from your pictures in just seconds.** ## Request Headers Enum: `application/json` Bearer authentication format, for example: Bearer \{\{API Key}}. ## Request Body Optional extra parameters for the request. The returned image type. Default is png.
Enum: `png, webp, jpeg`
Dedicated Endpoints settings, which only take effect for users who have already subscribed to [Dedicated Endpoints](/guides/model-apis-dedicated-endpoints). Set to true to schedule this task on your Dedicated Endpoints' dedicated resources. Default is false.
The base64-encoded original image, with a maximum resolution of 16 megapixels and a maximum file size of 30 MB. The base64-encoded mask image, with a maximum resolution of 16 megapixels and a maximum file size of 30 MB. ## Response The Base64-encoded content of the returned image. The returned image type.
Enum: `png`, `webp`, `jpeg`
## Example Cleanup allows you to easily remove unwanted objects, defects, people, or text from your pictures in just seconds. **Try it in the [playground](https://novita.ai/playground#cleanup).** `Request:` ```bash curl --location --request POST 'https://api.novita.ai/v3/cleanup' \ --header 'Authorization: Bearer {{API Key}}' \ --header 'Content-Type: application/json' \ --data-raw '{ "image_file":"{{Base64 encoded image}}", "mask_file":"{{Base64 encoded image}}" }' ``` `Response:` ```js { "image_file": "{{Base64 encoded image}}", "image_type": "png" } ``` HTTP status codes in the 2xx range indicate that the request has been successfully accepted. A code of 400 means there is an error with the request parameters, while status codes in the 5xx range indicate internal server errors. The generated image is returned in the `image_file` field of the response, encoded in Base64. # LoRA for Style Training Source: https://novita.ai/docs/api-reference/model-apis-create-style-training **You can train a LoRA model to generate images that emulate a specific artistic style.** ## Create Style Training Task `POST https://api.novita.ai/v3/training/style` **Use this API to start a style training task.** > This is an **asynchronous API**; only the **task\_id** is returned initially. Utilize this **task\_id** to query the **Task Result API** at [Get Style Training Result API](#get-style-training-result) to retrieve the training results. ### Request Headers Enum: `application/json` Bearer authentication format, for example: Bearer \{\{API Key}}. ### Request Body Task name for this model training. Base models used for training.
Enum: `stable-diffusion-xl-base-1.0`, `dreamshaperXL09Alpha_alpha2Xl10_91562`, `protovisionXLHighFidelity3D_release0630Bakedvae_154359`, `v1-5-pruned-emaonly`, `epicrealism_naturalSin_121250`, `chilloutmix_NiPrunedFp32Fix`, `abyssorangemix3AOM3_aom3a3_10864`, `dreamshaper_8_93211`, `WFChild_v1.0`, `majichenmixrealistic_v10`, `realisticVisionV51_v51VAE_94301`, `sdxlUnstableDiffusers_v11_216694`, `realisticVisionV40_v40VAE_81510`, `epicrealismXL_v10_247189`, `somboy_v10_172675`, `yesmixXL_v10_283329`, `animagineXLV31_v31_325600`
Width of training images; must be > 0. Height of training images; must be > 0. Image asset IDs and their captions. Image asset ID; see Upload Images For Training for reference. Image caption; refer to Training Image Caption Guidance for more information. Batch size of training. Range: \[1, 4]. This parameter controls the extent of model parameter updates during each iteration. A higher learning rate results in larger updates, potentially speeding up the learning process but risking overshooting the optimal solution. Conversely, a lower learning rate ensures smaller, more precise adjustments, which may lead to a more stable convergence at the cost of slower training.
Enum: `1e-4, 1e-5, 1e-6, 2e-4, 5e-5`
This parameter specifies the maximum number of training steps to be executed before halting the training process. It sets a limit on the duration of training, ensuring that the model does not continue to train indefinitely. If `max_train_steps` is set to 2000 and the number of images in `image_dataset_items` is 10, the number of training steps per image is 200. Minimum value is 1. A seed is a number from which Stable Diffusion generates noise, which makes training deterministic: using the same seed and set of parameters will produce an identical LoRA each time. Minimum value is 1. This parameter specifies the type of learning rate scheduler to be used during the training process. The scheduler dynamically adjusts the learning rate according to one of the specified strategies. `constant`: Maintains a fixed learning rate throughout training. `linear`: Gradually decreases the learning rate linearly from a higher to a lower value. `cosine`: Adjusts the learning rate following a cosine curve, decreasing it initially and then increasing towards the end. `cosine_with_restarts`: Similar to cosine, but resets the rate periodically to avoid local minima. `polynomial`: Decreases the learning rate according to a polynomial decay. `constant_with_warmup`: Starts with a lower learning rate and warms up to a constant rate after a specified number of steps.
Enum: `constant, linear, cosine, cosine_with_restarts, polynomial, constant_with_warmup`
This parameter determines the number of initial training steps during which the learning rate increases gradually, effective only when the lr\_scheduler is set to one of the following modes: linear, cosine, cosine\_with\_restarts, polynomial, or constant\_with\_warmup. The warmup phase helps in stabilizing the training process before the main learning rate schedule begins. The minimum value is 0, indicating no warmup.
Common parameters configured for training. Type of components. When set to `face_crop_region`, args can be set to args: \[name: ratio, value: 1.0]; a ratio > 1 means more non-facial area will be included. When set to `resize`, args can be set to args: \[name: width, value: 512, name: height, value: 512], which means all the images will be cropped to 512\*512. When set to `face_restore`, args can be set to args: \[name: method, value: gfpgan\_1.4], which means face restoration will be enabled.
Enum: `face_crop_region`, `resize`, `face_restore`
Component detail settings. Name of argument. Argument value.
### Response Utilize this `task_id` to query the Task Result API at Get Style Training Result. ## Get Style Training Result `GET https://api.novita.ai/v3/training/style` **Use this API to get the style training result, including the model.** ### Request Headers In Bearer \{\{API Key}} format. ### Request Body ### Response The task ID of the training task. Represents the current status of a task, particularly useful for monitoring and managing the progress of training tasks. Each status indicates a specific phase in the task's lifecycle.
Enum: `UNKNOWN`, `QUEUING`, `TRAINING`, `SUCCESS`, `CANCELED`, `FAILED`
Trained model type.
Enum: `lora`
Models info. Model file name. Model status.
Enum: `DEPLOYING`, `SERVING`
Extra info. Estimated time of arrival in seconds. The progress percentage, in the range 0 to 100. ## Example **In this document we will explain step by step how to use our API for LoRA model training.** Generally, model training involves the following steps: * Upload the images for model training. * Set training parameters and start the training. * Get the training results and generate images with the trained model. ### 1. Upload images for training * Currently we only support uploading images in `png` / `jpeg` / `webp` format. * Each task supports uploading up to 50 images. To achieve good results, the uploaded images should meet some basic conditions, such as "portrait in the center", "no watermark", and "clear picture". #### 1.1 Get image upload URL * This interface returns the upload URL for a single image and can be called multiple times to upload images for training. ```bash curl --location --request POST 'https://api.novita.ai/v3/assets/training_dataset' \ --header 'Authorization: Bearer {{API Key}}' \ --header 'Content-Type: application/json' \ --data-raw '{ "file_extension": "png" }' ``` `Response:` ```js { "assets_id": "34558688e2f42a0137ca2d5274a8cf43", "upload_url": "https://faas-training-dataset.s3.ap-southeast-1.amazonaws.com/test/******", "method": "PUT", "headers": { "Host": { "values": [ "faas-training-dataset.s3.ap-southeast-1.amazonaws.com" ] } } } ``` * `assets_id`: The unique identifier of the image, which will be used in the training task. * `upload_url`: The URL for image upload. * `method`: The HTTP method for image upload. 
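The same call can be made from Python; a minimal sketch using only the standard library (endpoint and fields are exactly those shown above; error handling beyond the HTTP status check is omitted):

```python
import json
import urllib.request

API_BASE = "https://api.novita.ai"

def build_upload_url_request(api_key: str, file_extension: str = "png") -> urllib.request.Request:
    """Build the POST that requests a presigned upload URL for one training image."""
    return urllib.request.Request(
        f"{API_BASE}/v3/assets/training_dataset",
        method="POST",
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        data=json.dumps({"file_extension": file_extension}).encode(),
    )

def get_upload_url(api_key: str, file_extension: str = "png") -> tuple:
    """Send the request and return (assets_id, upload_url)."""
    with urllib.request.urlopen(build_upload_url_request(api_key, file_extension)) as resp:
        data = json.load(resp)
    return data["assets_id"], data["upload_url"]
```

Call `get_upload_url` once per image; collect the returned `assets_id` values for the training request in step 2.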
#### 1.2 Upload images After obtaining the `upload_url` at step `Get image upload URL`, please refer to the following document to complete the image upload: [https://docs.aws.amazon.com/zh\_cn/AmazonS3/latest/userguide/PresignedUrlUploadObject.html](https://docs.aws.amazon.com/zh_cn/AmazonS3/latest/userguide/PresignedUrlUploadObject.html). `Put images:` ```bash curl -X PUT -T "{{filepath}}" "{{upload_url}}" ``` `or` ```bash curl --location --request PUT '{{upload_url}}' \ --header 'Content-Type: image/png' \ --data-binary '@{{filepath}}' ``` ### 2. Start training task and configure parameters In this step, we will begin the model training process, which is expected to take approximately 10 minutes, depending on server availability. There are four types of parameters for model training: `Model info parameters`, `dataset parameters`, `components parameters`, `expert parameters`; you can set them according to our tables below. Here are some tips to train a good model: * Use at least 10 photos of faces that meet the requirements. * For the parameter `instance_prompt`, we suggest using "a close photo of ohwx \" * For the parameter `base_model`: the value `v1-5-pruned-emaonly` has better generalization ability and can be used in combination with various base models, such as `dreamshaper 2.5D`; the value `epic-realism` produces a strong sense of realism. 
| Type | Parameters | Description |
| :-------------------- | :--------------------------------- | :----------------------------------------------------------------------------------------- |
| Model info parameters | name | Name of your training model |
| Model info parameters | base\_model | base\_model type |
| Model info parameters | width | Target image width |
| Model info parameters | height | Target image height |
| dataset parameters | image\_dataset\_items | Array: consists of `imageUrl` and image `caption` |
| dataset parameters | - image\_dataset\_items.assets\_id | Image `assets_id`, which can be found in step `Get image upload URL` |
| components parameters | components | Array: consists of `name` and `args`; these are common parameters configured for training. |
| components parameters | - components.name | Type of component, Enum: `face_crop_region`, `resize`, `face_restore` |
| components parameters | - components.args | Detail values of components.name |
| expert parameters | expert\_setting | Expert parameters. |
| expert parameters | - instance\_prompt | Caption for all the training images; here is guidance on how to write an effective prompt: [Click Here](/guides/model-apis-training-guidance) |
| expert parameters | - batch\_size | Batch size of training. |
| expert parameters | - max\_train\_steps | Max train steps; 500 is enough for LoRA model training. |
| expert parameters | - ...... | More expert parameters can be accessed in the API reference. |

**Here is an example of how to start training:**

```bash
curl --location --request POST 'https://api.novita.ai/v3/training/style' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer {{API Key}}' \
--data-raw '{
    "name": "test_style_01",
    "base_model": "v1-5-pruned-emaonly",
    "width": 512,
    "height": 512,
    "image_dataset_items": [
        { "assets_id": "34558688e2f42a0137ca2d5274a8cf43" },
        { "assets_id": "1231231243f42a0137ca2d5274a8cf43" }
    ],
    "expert_setting": {
        "instance_prompt": "Xstyle, of a young woman, profile shot, from side,sitting, looking at viewer, smiling, head tilt, eyes open,long black hair, glowing skin,light smile,cinematic lighting,dark environment",
        "class_prompt": "person"
    },
    "components": [
        {
            "name": "face_crop_region",
            "args": [
                { "name": "ratio", "value": "1" }
            ]
        },
        {
            "name": "resize",
            "args": [
                { "name": "width", "value": "512" },
                { "name": "height", "value": "512" }
            ]
        },
        {
            "name": "face_restore",
            "args": [
                { "name": "method", "value": "gfpgan_1.4" },
                { "name": "upscale", "value": "1.0" }
            ]
        }
    ]
}'
```

Response:

```js
{
  "task_id": "d660cdd0-ab9b-4a55-8b78-4bc851051fb0"
}
```

The `task_id` is the unique identifier of the training task, which can be used to query the training status and results.

### 3. Get training status

#### 3.1 Get model training and deployment status

In this step, we will obtain the progress of model training and the status of model deployment after training.

```bash
curl --location --request GET 'https://api.novita.ai/v3/training/style?task_id=d660cdd0-ab9b-4a55-8b78-4bc851051fb0' \
--header 'Authorization: Bearer {{API Key}}'
```

`Response:`

```js
{
  "task_id": "d660cdd0-ab9b-4a55-8b78-4bc851051fb0",
  "task_status": "SUCCESS",
  "model_type": "",
  "models": [
    {
      "model_name": "model_1698904832_F2BB461625.safetensors",
      "model_status": "DEPLOYING"
    }
  ]
}
```

* `task_status`: The status of the training task, Enum: `UNKNOWN`, `QUEUING`, `TRAINING`, `SUCCESS`, `CANCELED`, `FAILED`.
* `model_status`: The status of the model, Enum: `DEPLOYING`, `SERVING`.
* `model_name`: The name of the model, which can be used to generate images in the next step.

When the `task_status` is `SUCCESS` and the `model_status` is `SERVING`, we can start using the LoRA model.

#### 3.2 Start using the trained model

After the model is deployed successfully, we can download the model files or generate images directly.

##### 3.2.1 Use the generated models to create images

To use the trained LoRA models, we need to add the `model_name` into the `request` of the endpoint `/v3/async/txt2img` or `/v3/async/img2img`. **Currently, the trained LoRA model cannot be used in the /v3 endpoint.** Below is an example of how to generate images with the trained model:

Please set the **`Content-Type`** header to **`application/json`** in your HTTP request to indicate that you are sending JSON data. Currently, **only JSON format is supported**.

`Request:`

```bash
curl --location 'https://api.novita.ai/v3/async/txt2img' \
--header 'Authorization: Bearer {{API Key}}' \
--header 'Content-Type: application/json' \
--data '{
    "extra": {
        "response_image_type": "jpeg"
    },
    "request": {
        "model_name": "realisticVisionV51_v51VAE_94301.safetensors",
        "prompt": "a young woman",
        "negative_prompt": "bottle, bad face",
        "sd_vae": "",
        "loras": [
            {
                "model_name": "model_1698904832_F2BB461625.safetensors",
                "strength": 0.7
            }
        ],
        "embeddings": [
            { "model_name": "" }
        ],
        "hires_fix": {
            "target_width": 1024,
            "target_height": 768,
            "strength": 0.5
        },
        "refiner": {
            "switch_at": null
        },
        "width": 512,
        "height": 384,
        "image_num": 2,
        "steps": 20,
        "seed": 123,
        "clip_skip": 1,
        "guidance_scale": 7.5,
        "sampler_name": "Euler a"
    }
}'
```

`Response:`

```js
{
  "code": 0,
  "msg": "",
  "data": {
    "task_id": "bec2bcfe-47c6-4536-af34-f26cfe6fd457"
  }
}
```

**Use `task_id` to get images**

HTTP status codes in the 2xx range indicate that the request has been successfully accepted, while status codes in the 5xx range indicate internal server errors.
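Attaching a trained LoRA to a txt2img request is just a payload edit: the trained model goes into the `loras` array alongside a base checkpoint. A hedged Python sketch of building that body — the helper name and default values are ours; the field names mirror the example request in this section:

```python
import json

def txt2img_payload(base_model: str, prompt: str, lora_name: str,
                    strength: float = 0.7, width: int = 512, height: int = 384,
                    image_num: int = 1) -> str:
    """Build a /v3/async/txt2img body that attaches one trained LoRA by model_name."""
    if not 0.0 <= strength <= 1.0:
        raise ValueError("LoRA strength is typically in [0, 1]")
    body = {
        "extra": {"response_image_type": "jpeg"},
        "request": {
            "model_name": base_model,          # base checkpoint
            "prompt": prompt,
            "loras": [                          # trained LoRA from the training task
                {"model_name": lora_name, "strength": strength}
            ],
            "width": width,
            "height": height,
            "image_num": image_num,
            "steps": 20,
            "guidance_scale": 7.5,
            "sampler_name": "Euler a",
        },
    }
    return json.dumps(body)
```

The returned string is sent as the request body with `Content-Type: application/json`.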
You can get the image URLs in the `images` field of the response.

`Request:`

```bash
curl --location 'https://api.novita.ai/v3/async/task-result?task_id=bec2bcfe-47c6-4536-af34-f26cfe6fd457' \
--header 'Authorization: Bearer {{API Key}}'
```

`Response:`

```js
{
  "task": {
    "task_id": "bec2bcfe-47c6-4536-af34-f26cfe6fd457",
    "status": "TASK_STATUS_SUCCEED",
    "reason": ""
  },
  "images": [
    {
      "image_url": "https://faas-output-image.s3.ap-southeast-1.amazonaws.com/dev/replace_object_a910c8f7-76ce-40bd-b805-f00f3ddd7dc1_0.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIASVPYCN6LRCW3SOUV%2F20231019%2Fap-southeast-1%2Fs3%2Faws4_request&X-Amz-Date=20231019T084537Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&x-id=GetObject&X-Amz-Signature=b9ad40a5cb3aecf89602c15fe72d28be5d8a33e0bfe3656ce968295fde1aab31",
      "image_url_ttl": 3600,
      "image_type": "png"
    }
  ],
  "videos": [
    {
      "video_url": "https://faas-output-image.s3.ap-southeast-1.amazonaws.com/dev/replace_object_a910c8f7-76ce-40bd-b805-f00f3ddd7dc1_0.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIASVPYCN6LRCW3SOUV%2F20231019%2Fap-southeast-1%2Fs3%2Faws4_request&X-Amz-Date=20231019T084537Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&x-id=GetObject&X-Amz-Signature=b9ad40a5cb3aecf89602c15fe72d28be5d8a33e0bfe3656ce968295fde1aab31",
      "video_url_ttl": "3600",
      "video_type": "png"
    }
  ]
}
```

#### 3.3 List training tasks

In this step, we can obtain all the info of the trained models.
```bash
curl --location --request GET 'https://api.novita.ai/v3/training?pagination.limit=10&pagination.cursor=c_0' \
--header 'Authorization: Bearer {{API Key}}'
```

`Response:`

```js
{
  "tasks": [
    {
      "task_name": "test_01",
      "task_id": "a0c4cc90-0296-4972-a1d8-e6e227daf094",
      "task_type": "style",
      "task_status": "SUCCESS",
      "created_at": 1699325415,
      "models": [
        {
          "model_name": "model_1699325939_E83A88DAC5.safetensors",
          "model_status": "SERVING"
        }
      ]
    },
    {
      "task_name": "test_02",
      "task_id": "51e9bf41-8f7a-464d-b5ad-2fa217a1ec93",
      "task_type": "style",
      "task_status": "SUCCESS",
      "created_at": 1699267268,
      "models": [
        {
          "model_name": "model_1699267603_27F0D9C81C.safetensors",
          "model_status": "SERVING"
        }
      ]
    },
    {
      "task_name": "test_03",
      "task_id": "7bd205ab-63e9-452b-9a66-39c597000eaa",
      "task_type": "style",
      "task_status": "FAILED",
      "created_at": 1699264338,
      "models": []
    }
  ],
  "pagination": {
    "next_cursor": "c_10"
  }
}
```

* `task_name`: The name of the training task.
* `task_id`: The unique identifier of the training task, which can be used to query the training status and results.
* `task_type`: The type of the training task.
* `task_status`: The status of the training task, Enum: `UNKNOWN`, `QUEUING`, `TRAINING`, `SUCCESS`, `CANCELED`, `FAILED`.
* `created_at`: The time when the training task was created.
* `models`: The trained models.
* `model_name`: The sd\_name of the model.
* `model_status`: The status of the model, Enum: `DEPLOYING`, `SERVING`.

### 4. Training playground

You can also use our training playground to train models in a user-friendly way: [Click Here](https://huggingface.co/spaces/novita-ai/Face-Stylization-Playground)

#### 4.1 Input your Novita AI API Key and images, and select the training type

#### 4.2 Switch to the inferencing tab and add more detail

#### 4.3 Review the training results

# LoRA for Subject Training

Source: https://novita.ai/docs/api-reference/model-apis-create-subject-training

**You can train a LoRA model to generate images featuring a subject, such as yourself.**

## Create Subject Training Task

`POST https://api.novita.ai/v3/training/subject`

**Use this API to start a subject training task.**

> This is an **asynchronous API**; only the **task\_id** is returned initially. Utilize this **task\_id** to query the **Task Result API** at [Get Subject Training Result API](#get-subject-training-result) to retrieve the results of the image generation.

### Request Headers

In Bearer \{\{API Key}} format. Enum: `application/json`

### Request Body

Task name for this model training. Base models used for training.
Enum: `stable-diffusion-xl-base-1.0`, `dreamshaperXL09Alpha_alpha2Xl10_91562`, `protovisionXLHighFidelity3D_release0630Bakedvae_154359`, `v1-5-pruned-emaonly`, `epicrealism_naturalSin_121250`, `chilloutmix_NiPrunedFp32Fix`, `abyssorangemix3AOM3_aom3a3_10864`, `dreamshaper_8_93211`, `WFChild_v1.0`, `majichenmixrealistic_v10`, `realisticVisionV51_v51VAE_94301`, `sdxlUnstableDiffusers_v11_216694`, `realisticVisionV40_v40VAE_81510`, `epicrealismXL_v10_247189`, `somboy_v10_172675`, `yesmixXL_v10_283329`, `animagineXLV31_v31_325600`
Training image width. Minimum value is 1. Training image height. Minimum value is 1. Image asset IDs and image captions. Image asset ID. Refer to Upload Images For Training for more information. Image caption. Refer to Training Image Caption Guidance. The length must be between 1 and 1024 characters, inclusive. Batch size of training. Range: \[1, 4]. This parameter controls the extent of model parameter updates during each iteration. A higher learning rate results in larger updates, potentially speeding up the learning process but risking overshooting the optimal solution. Conversely, a lower learning rate ensures smaller, more precise adjustments, which may lead to a more stable convergence at the cost of slower training.
Enum: `1e-4`, `1e-5`, `1e-6`, `2e-4`, `5e-5`
This parameter specifies the maximum number of training steps to be executed before halting the training process. It sets a limit on the duration of training, ensuring that the model does not continue to train indefinitely. If `max_train_steps` is set to 2000 and the number of images in the parameter `image_dataset_items` is 10, the number of training steps per image is 200. Minimum value is 1. A seed is a number from which Stable Diffusion generates noise, which makes training deterministic. Using the same seed and set of parameters will produce an identical LoRA each time. Minimum value is 1. This parameter specifies the type of learning rate scheduler to be used during the training process. The scheduler dynamically adjusts the learning rate according to one of the specified strategies.
Enum: `constant`, `linear`, `cosine`, `cosine_with_restarts`, `polynomial`, `constant_with_warmup`
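The steps-per-image relationship for `max_train_steps` described above is simple integer division, worth computing before submitting a task:

```python
def steps_per_image(max_train_steps: int, num_images: int) -> int:
    """Training steps each image receives: max_train_steps spread over the dataset."""
    if max_train_steps < 1 or num_images < 1:
        raise ValueError("max_train_steps and num_images must be >= 1")
    return max_train_steps // num_images

# e.g. 2000 total steps over 10 images -> 200 steps per image
```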
This parameter determines the number of initial training steps during which the learning rate increases gradually; it is effective only when the lr\_scheduler is set to one of the following modes: linear, cosine, cosine\_with\_restarts, polynomial, or constant\_with\_warmup. The warmup phase helps stabilize the training process before the main learning rate schedule begins. The minimum value for this parameter is 0, indicating no warmup. This parameter specifies a prompt that best describes the images associated with an instance. It is essential for accurately conveying the content or theme of the images, facilitating better context or guidance for operations such as classification, tagging, or generation. This parameter is used to specify a prompt that focuses the training process on a specific subject, in this case, a `person`. It guides the model to tailor its learning and output generation towards this defined class, enhancing specificity and relevance in tasks such as image recognition or generation related to human features or activities.
Enum: `person`
This parameter enables the option to preserve prior knowledge or settings in a model. When set to true, it ensures that existing configurations or learned patterns are maintained during updates or further training, enhancing the model's stability and consistency over time. This parameter specifies the weight assigned to the prior loss in the model's loss function. It must be greater than 0 to have an effect. Setting this parameter helps control the influence of prior knowledge on the training process, balancing new data learning with the retention of previously learned information. This parameter determines whether the text encoder component of the model should undergo training. Enabling this setting (true) allows the text encoder to adapt and improve its understanding of textual input based on the specific data and tasks at hand, potentially enhancing overall model performance. This parameter specifies the rank for the LoRA (Low-Rank Adaptation) modification. Valid values range from 4 to 128. Adjusting this parameter allows for tuning the complexity and capacity of the LoRA layers within the model, impacting both performance and computational efficiency. Range \[4 , 128]. This parameter sets the scaling factor (alpha) for the Low-Rank Adaptation (LoRA) layers within the model. It accepts values ranging from 4 to 128. Adjusting lora\_alpha modifies the degree of adaptation applied to the pre-trained layers, influencing the learning capability and the granularity of the adjustments made during training. Range \[4 , 128]. This parameter specifies the rank of the LoRA (Low-Rank Adaptation) modification applied specifically to the text encoder component of the model. Valid values range from 4 to 128. By setting this parameter, you can tune the complexity and impact of the LoRA adjustments on the text encoder, potentially enhancing its performance and adaptability to new textual data. Range \[4 , 128]. 
This parameter defines the scaling factor (alpha) for Low-Rank Adaptation (LoRA) specifically applied to the text encoder component of the model. It accepts values ranging from 4 to 128. The lora\_text\_encoder\_alpha parameter adjusts the degree of adaptation applied, allowing for finer control over how the text encoder processes and learns from textual input, thereby impacting the overall effectiveness and efficiency of the model. Range \[4 , 128].
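The documented ranges for these hyperparameters (batch size [1, 4]; the four LoRA rank/alpha values [4, 128]; warmup steps >= 0) can be validated client-side before submitting a task. In this sketch the exact JSON field names (`lora_r`, `lora_alpha`, etc.) are our own assumptions; substitute whatever keys the request schema actually uses:

```python
# Documented ranges; the dict keys are assumed field names, not confirmed API keys.
RANGES = {
    "batch_size": (1, 4),
    "lora_r": (4, 128),
    "lora_alpha": (4, 128),
    "lora_text_encoder_r": (4, 128),
    "lora_text_encoder_alpha": (4, 128),
}

def validate_expert_setting(setting: dict) -> list:
    """Return a list of human-readable problems; an empty list means the values look valid."""
    problems = []
    for key, (lo, hi) in RANGES.items():
        if key in setting and not lo <= setting[key] <= hi:
            problems.append(f"{key}={setting[key]} outside [{lo}, {hi}]")
    if setting.get("lr_warmup_steps", 0) < 0:
        problems.append("lr_warmup_steps must be >= 0")
    return problems
```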
Common parameters configured for training. Type of components.
Enum: `face_crop_region`, `resize`, `face_restore`
Component detail settings. Name of argument. Argument value.
### Response Utilize this `task_id` to query the Task Result API at Get subject training result. ## Get subject training result `GET https://api.novita.ai/v3/training/subject` **Use this API to get the subject training result, including the model.** ### Request Headers Enum: `application/json` Bearer authentication format, for example: Bearer \{\{API Key}}. ### Request Body ### Response The task id of training. Represents the current status of a task, particularly useful for monitoring and managing the progress of training tasks. Each status indicates a specific phase in the task's lifecycle.
Enum: `UNKNOWN`, `QUEUING`, `TRAINING`, `SUCCESS`, `CANCELED`, `FAILED`
Model trained type.
Enum: `lora`
Models info. Model file name. Model status.
Enum: `DEPLOYING`, `SERVING`
extra info. Estimated time of arrival in seconds. The progress percent, ranging from 0 to 100.

## Example

**In this document we will explain, step by step, how to use our API for LoRA model training.** Generally, model training involves the following steps.

* Upload the images for model training.
* Set training parameters and start the training.
* Get the training results and generate images with the trained model.

### 1. Upload images for training

* Currently we only support uploading images in `png` / `jpeg` / `webp` format.
* Each task supports uploading up to 50 images.

To get good results, the uploaded images should meet some basic conditions, such as "portrait in the center", "no watermark", and "clear picture".

#### 1.1 Get image upload URL

* This interface returns the upload URL for a single image and can be called multiple times to upload images for training.

```bash
curl --location --request POST 'https://api.novita.ai/v3/assets/training_dataset' \
--header 'Authorization: Bearer {{API Key}}' \
--header 'Content-Type: application/json' \
--data-raw '{
    "file_extension": "png"
}'
```

`Response:`

```js
{
  "assets_id": "34558688e2f42a0137ca2d5274a8cf43",
  "upload_url": "https://faas-training-dataset.s3.ap-southeast-1.amazonaws.com/test/******",
  "method": "PUT",
  "headers": {
    "Host": {
      "values": [
        "faas-training-dataset.s3.ap-southeast-1.amazonaws.com"
      ]
    }
  }
}
```

* `assets_id`: The unique identifier of the image, which will be used in the training task.
* `upload_url`: The URL for image upload.
* `method`: The HTTP method for image upload.
#### 1.2 Upload images

After obtaining the `upload_url` in step `Get image upload URL`, refer to the following document to complete the image upload: [https://docs.aws.amazon.com/zh\_cn/AmazonS3/latest/userguide/PresignedUrlUploadObject.html](https://docs.aws.amazon.com/zh_cn/AmazonS3/latest/userguide/PresignedUrlUploadObject.html)

`Put images:`

```bash
curl -X PUT -T "{{filepath}}" "{{upload_url}}"
```

`or`

```bash
curl --location --request PUT '{{upload_url}}' \
--header 'Content-Type: image/png' \
--data-binary '@{{filepath}}'
```

### 2. Start training task and configure parameters

In this step, we will begin the model training process, which is expected to take approximately 10 minutes, depending on the actual server's availability. There are four types of parameters for model training: `Model info parameters`, `dataset parameters`, `components parameters`, and `expert parameters`; you can set them according to the tables below. Here are some tips to train a good model:

* Use at least 10 photos of faces that meet the requirements.
* For the parameter `instance_prompt`, we suggest using "a close photo of ohwx \"
* For the parameter `base_model`, the value `v1-5-pruned-emaonly` has better generalization ability and can be used in combination with various base models, such as `dreamshaper 2.5D`; the value `epic-realism` has a strong sense of reality.
| Type | Parameters | Description |
| :-------------------- | :--------------------------------- | :----------------------------------------------------------------------------------------- |
| Model info parameters | name | Name of your training model |
| Model info parameters | base\_model | base\_model type |
| Model info parameters | width | Target image width |
| Model info parameters | height | Target image height |
| dataset parameters | image\_dataset\_items | Array: consists of `imageUrl` and image `caption` |
| dataset parameters | - image\_dataset\_items.assets\_id | Image `assets_id`, which can be found in step `Get image upload URL` |
| components parameters | components | Array: consists of `name` and `args`; these are common parameters configured for training. |
| components parameters | - components.name | Type of component, Enum: `face_crop_region`, `resize`, `face_restore` |
| components parameters | - components.args | Detail values of components.name |
| expert parameters | expert\_setting | Expert parameters. |
| expert parameters | - instance\_prompt | Caption for all the training images; here is guidance on how to write an effective prompt: [Click Here](/guides/model-apis-training-guidance) |
| expert parameters | - batch\_size | Batch size of training. |
| expert parameters | - max\_train\_steps | Max train steps; 500 is enough for LoRA model training. |
| expert parameters | - ...... | More expert parameters can be accessed in the API reference. |

**Here is an example of how to start training:**

```bash
curl --location --request POST 'https://api.novita.ai/v3/training/subject' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer {{API Key}}' \
--data-raw '{
    "name": "test_subject_01",
    "base_model": "v1-5-pruned-emaonly",
    "width": 512,
    "height": 512,
    "image_dataset_items": [
        { "assets_id": "34558688e2f42a0137ca2d5274a8cf43" },
        { "assets_id": "1231231243f42a0137ca2d5274a8cf43" }
    ],
    "expert_setting": {
        "instance_prompt": "Xsubject, of a young woman, profile shot, from side,sitting, looking at viewer, smiling, head tilt, eyes open,long black hair, glowing skin,light smile,cinematic lighting,dark environment",
        "class_prompt": "person"
    },
    "components": [
        {
            "name": "face_crop_region",
            "args": [
                { "name": "ratio", "value": "1" }
            ]
        },
        {
            "name": "resize",
            "args": [
                { "name": "width", "value": "512" },
                { "name": "height", "value": "512" }
            ]
        },
        {
            "name": "face_restore",
            "args": [
                { "name": "method", "value": "gfpgan_1.4" },
                { "name": "upscale", "value": "1.0" }
            ]
        }
    ]
}'
```

Response:

```js
{
  "task_id": "d660cdd0-ab9b-4a55-8b78-4bc851051fb0"
}
```

The `task_id` is the unique identifier of the training task, which can be used to query the training status and results.

### 3. Get training status

#### 3.1 Get model training and deployment status

In this step, we will obtain the progress of model training and the status of model deployment after training.

```bash
curl --location --request GET 'https://api.novita.ai/v3/training/subject?task_id=d660cdd0-ab9b-4a55-8b78-4bc851051fb0' \
--header 'Authorization: Bearer {{API Key}}'
```

`Response:`

```js
{
  "task_id": "d660cdd0-ab9b-4a55-8b78-4bc851051fb0",
  "task_status": "SUCCESS",
  "model_type": "",
  "models": [
    {
      "model_name": "model_1698904832_F2BB461625.safetensors",
      "model_status": "DEPLOYING"
    }
  ]
}
```

* `task_status`: The status of the training task, Enum: `UNKNOWN`, `QUEUING`, `TRAINING`, `SUCCESS`, `CANCELED`, `FAILED`.
* `model_status`: The status of the model, Enum: `DEPLOYING`, `SERVING`.
* `model_name`: The name of the model, which can be used to generate images in the next step.

When the `task_status` is `SUCCESS` and the `model_status` is `SERVING`, we can start using the LoRA model.

#### 3.2 Start using the trained model

After the model is deployed successfully, we can download the model files or generate images directly.

##### 3.2.1 Use the generated models to create images

To use the trained LoRA models, we need to add the `model_name` into the `request` of the endpoint `/v3/async/txt2img` or `/v3/async/img2img`. **Currently, the trained LoRA model cannot be used in the /v3 endpoint.** Below is an example of how to generate images with the trained model:

Please set the **`Content-Type`** header to **`application/json`** in your HTTP request to indicate that you are sending JSON data. Currently, **only JSON format is supported**.

`Request:`

```bash
curl --location 'https://api.novita.ai/v3/async/txt2img' \
--header 'Authorization: Bearer {{API Key}}' \
--header 'Content-Type: application/json' \
--data '{
    "extra": {
        "response_image_type": "jpeg"
    },
    "request": {
        "model_name": "realisticVisionV51_v51VAE_94301.safetensors",
        "prompt": "a young woman",
        "negative_prompt": "bottle, bad face",
        "sd_vae": "",
        "loras": [
            {
                "model_name": "model_1698904832_F2BB461625.safetensors",
                "strength": 0.7
            }
        ],
        "embeddings": [
            { "model_name": "" }
        ],
        "hires_fix": {
            "target_width": 1024,
            "target_height": 768,
            "strength": 0.5
        },
        "refiner": {
            "switch_at": null
        },
        "width": 512,
        "height": 384,
        "image_num": 2,
        "steps": 20,
        "seed": 123,
        "clip_skip": 1,
        "guidance_scale": 7.5,
        "sampler_name": "Euler a"
    }
}'
```

`Response:`

```js
{
  "code": 0,
  "msg": "",
  "data": {
    "task_id": "bec2bcfe-47c6-4536-af34-f26cfe6fd457"
  }
}
```

**Use `task_id` to get images**

HTTP status codes in the 2xx range indicate that the request has been successfully accepted, while status codes in the 5xx range indicate internal server errors.
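The submit-then-poll flow above can be sketched as a small loop over the task-result endpoint. The fetcher is injected so the sketch stays independent of any HTTP client; the function names are ours, and the `TASK_STATUS_FAILED` value is assumed by analogy with the documented `TASK_STATUS_SUCCEED`:

```python
import time

# Terminal statuses: SUCCEED appears in the sample response; FAILED is assumed.
TERMINAL = {"TASK_STATUS_SUCCEED", "TASK_STATUS_FAILED"}

def wait_for_images(fetch_result, task_id: str,
                    interval_s: float = 5.0, max_polls: int = 120):
    """Poll /v3/async/task-result until the task reaches a terminal status.

    `fetch_result(task_id)` should return the parsed JSON response; on
    success this returns the list of URLs from the `images` field.
    """
    for _ in range(max_polls):
        result = fetch_result(task_id)
        status = result["task"]["status"]
        if status == "TASK_STATUS_SUCCEED":
            return [img["image_url"] for img in result.get("images", [])]
        if status in TERMINAL:
            reason = result["task"].get("reason", "")
            raise RuntimeError(f"task {task_id} ended with {status}: {reason}")
        time.sleep(interval_s)
    raise TimeoutError(f"task {task_id} still running after {max_polls} polls")
```

Remember that the returned URLs are presigned and expire after `image_url_ttl` seconds, so download promptly.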
You can get the image URLs in the `images` field of the response.

`Request:`

```bash
curl --location 'https://api.novita.ai/v3/async/task-result?task_id=bec2bcfe-47c6-4536-af34-f26cfe6fd457' \
--header 'Authorization: Bearer {{API Key}}'
```

`Response:`

```js
{
  "task": {
    "task_id": "bec2bcfe-47c6-4536-af34-f26cfe6fd457",
    "status": "TASK_STATUS_SUCCEED",
    "reason": ""
  },
  "images": [
    {
      "image_url": "https://faas-output-image.s3.ap-southeast-1.amazonaws.com/dev/replace_object_a910c8f7-76ce-40bd-b805-f00f3ddd7dc1_0.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIASVPYCN6LRCW3SOUV%2F20231019%2Fap-southeast-1%2Fs3%2Faws4_request&X-Amz-Date=20231019T084537Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&x-id=GetObject&X-Amz-Signature=b9ad40a5cb3aecf89602c15fe72d28be5d8a33e0bfe3656ce968295fde1aab31",
      "image_url_ttl": 3600,
      "image_type": "png"
    }
  ],
  "videos": [
    {
      "video_url": "https://faas-output-image.s3.ap-southeast-1.amazonaws.com/dev/replace_object_a910c8f7-76ce-40bd-b805-f00f3ddd7dc1_0.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIASVPYCN6LRCW3SOUV%2F20231019%2Fap-southeast-1%2Fs3%2Faws4_request&X-Amz-Date=20231019T084537Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&x-id=GetObject&X-Amz-Signature=b9ad40a5cb3aecf89602c15fe72d28be5d8a33e0bfe3656ce968295fde1aab31",
      "video_url_ttl": "3600",
      "video_type": "png"
    }
  ]
}
```

#### 3.3 List training tasks

In this step, we can obtain all the info of the trained models.
```bash
curl --location --request GET 'https://api.novita.ai/v3/training?pagination.limit=10&pagination.cursor=c_0' \
--header 'Authorization: Bearer {{API Key}}'
```

`Response:`

```js
{
  "tasks": [
    {
      "task_name": "test_01",
      "task_id": "a0c4cc90-0296-4972-a1d8-e6e227daf094",
      "task_type": "subject",
      "task_status": "SUCCESS",
      "created_at": 1699325415,
      "models": [
        {
          "model_name": "model_1699325939_E83A88DAC5.safetensors",
          "model_status": "SERVING"
        }
      ]
    },
    {
      "task_name": "test_02",
      "task_id": "51e9bf41-8f7a-464d-b5ad-2fa217a1ec93",
      "task_type": "subject",
      "task_status": "SUCCESS",
      "created_at": 1699267268,
      "models": [
        {
          "model_name": "model_1699267603_27F0D9C81C.safetensors",
          "model_status": "SERVING"
        }
      ]
    },
    {
      "task_name": "test_03",
      "task_id": "7bd205ab-63e9-452b-9a66-39c597000eaa",
      "task_type": "subject",
      "task_status": "FAILED",
      "created_at": 1699264338,
      "models": []
    }
  ],
  "pagination": {
    "next_cursor": "c_10"
  }
}
```

* `task_name`: The name of the training task.
* `task_id`: The unique identifier of the training task, which can be used to query the training status and results.
* `task_type`: The type of the training task.
* `task_status`: The status of the training task, Enum: `UNKNOWN`, `QUEUING`, `TRAINING`, `SUCCESS`, `CANCELED`, `FAILED`.
* `created_at`: The time when the training task was created.
* `models`: The trained models.
* `model_name`: The sd\_name of the model.
* `model_status`: The status of the model, Enum: `DEPLOYING`, `SERVING`.

### 4. Training playground

You can also use our training playground to train models in a user-friendly way: [Click Here](https://huggingface.co/spaces/novita-ai/Face-Stylization-Playground)

#### 4.1 Input your Novita AI API Key and images, and select the training type

#### 4.2 Switch to the inferencing tab and add more detail

#### 4.3 Review the training results

# FLUX.1 [schnell] Text to Image

Source: https://novita.ai/docs/api-reference/model-apis-flux-1-schnell

POST https://api.novita.ai/v3beta/flux-1-schnell

**Generate images from text prompts using FLUX.1 \[schnell].**

> **Pricing:** \$0.003 \* (Width \* Height \* Steps) / (1024\*1024\*4)

## Request Headers

Enum: `application/json` Bearer authentication format, for example: Bearer \{\{API Key}}.

## Request Body

The returned image type. Default is png.
Enum: `png` `webp` `jpeg`
Text input required to guide the image generation, divided by `,`. Range \[1, 1024]. A seed is a number from which Stable Diffusion generates noise, which makes generation deterministic. Using the same seed and set of parameters will produce an identical image each time. Range \[0, 4294967295]. The number of denoising steps. More steps can usually produce higher-quality images, but take more time to generate. Range \[1, 100]. Width of image. Range \[64, 2048]. Height of image. Range \[64, 2048]. Number of images generated in a single generation. Range \[1, 8].

## Response

Task information. Task ID. Contains information about images associated with image-type tasks. This parameter provides detailed data on each image processed or generated during the task, such as file paths, metadata, or any image-specific attributes. It is returned only for tasks that involve image operations, facilitating enhanced tracking and management of image data. Image URL. Image expiration time in seconds. Image type.
Enum: `jpeg`, `png`, `webp`
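The pricing formula quoted above, $0.003 × (Width × Height × Steps) / (1024 × 1024 × 4), can be estimated ahead of a request. Whether the total scales linearly with `image_num` is our assumption, not stated in the docs:

```python
def flux_schnell_price(width: int, height: int, steps: int, image_num: int = 1) -> float:
    """Estimated cost in USD per the documented formula.

    Linear scaling with image_num is an assumption made by this sketch.
    """
    per_image = 0.003 * (width * height * steps) / (1024 * 1024 * 4)
    return per_image * image_num

# 512x512 at 4 steps costs a quarter of the 1024x1024 / 4-step reference price
```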
## Example request ```bash curl --location 'https://api.novita.ai/v3beta/flux-1-schnell' \ --header 'Authorization: Bearer {{API Key}}' \ --header 'Content-Type: application/json' \ --data '{ "prompt": "Extreme close-up of a single tiger eye, direct frontal view. Detailed iris and pupil. Sharp focus on eye texture and color. Natural lighting to capture authentic eye shine and depth. The word \"Novita AI\" is painted over it in big, white brush strokes with visible texture", "width": 512, "height": 512, "seed": 2024, "steps": 4, "image_num": 1 }' ``` response ```json { "images": [ { "image_url": "https://model-api-output.5e61b0cbce9f453eb9db49fdd85c7cac.r2.cloudflarestorage.com/xxx", "image_url_ttl": 604800, "image_type": "png" } ], "task": { "task_id": "xxx" } } ``` # Get Model Source: https://novita.ai/docs/api-reference/model-apis-get-model GET https://api.novita.ai/v3/model **This API endpoint is designed to retrieve information on both public and private models. It allows users to access details such as model specifications, status, and usage guidelines, ensuring comprehensive insights into the available modeling resources.** ## Request Headers Enum: `application/json` Bearer authentication format, for example: Bearer \{\{API Key}}. ## Query Parameters Model types: `public` or `private`. If not set, the interface will query all types of models. Source of the model.
Enum: `civitai`, `training`, `uploading`
Specifies the types of models to include in the query.
Enum: `checkpoint`, `lora`, `vae`, `controlnet`, `upscaler`, `textualinversion`
Whether the model is SDXL or not. Setting this parameter to `true` includes only SDXL models in the query results, which are typically large-scale, high-performance models designed for extensive data processing tasks. Conversely, setting it to `false` excludes these models from the results. If left unspecified, the filter will not discriminate based on the SDXL classification, including all model types in the search results. Searches the content of sd\_name, name, and tags. If set to true, it will filter out the checkpoints used for inpainting. The default is false. Number of models to query per request, range (0, 100]. pagination.cursor is used to specify which record to start returning from. If it is empty, it means to get it from the beginning. Generally, the content of the next page is obtained by passing in the next\_cursor field value from the response packet. ## Response ID of the model. Model name. Hash of the model file. Model file name. Model type name. Model categories. Model status: 0 for unavailable, 1 for available. Model download URL. Model tags, such as photorealistic, anatomical, base model, CGI, realistic, semi-realistic. Model cover image URL. The source of the model, such as civitai, training, uploading. Base model of the model, such as SD 1.5 or SDXL 1.0. Base model type of the model, such as Inpainting or Standard. The expiration time of the download URL in seconds, default is 1 day. The name users can add in the interface. Next request starting cursor. 
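Cursor pagination on this endpoint (`pagination.limit`, `pagination.cursor`, `next_cursor`) can be walked with a small generator. The fetcher is injected so the sketch stays independent of any HTTP client; treating an empty `next_cursor` as the end of the listing is our assumption:

```python
from urllib.parse import urlencode

BASE = "https://api.novita.ai/v3/model"

def model_pages(fetch_json, limit: int = 100, **filters):
    """Yield each page of /v3/model results, following next_cursor until it is empty.

    `fetch_json(url)` should return the parsed JSON body. Filter kwargs such
    as visibility="public" map to `filter.<name>` query parameters.
    """
    cursor = ""
    while True:
        params = {"pagination.limit": limit}
        if cursor:
            params["pagination.cursor"] = cursor
        for name, value in filters.items():
            params[f"filter.{name}"] = value
        page = fetch_json(f"{BASE}?{urlencode(params)}")
        yield page.get("models", [])
        cursor = page.get("pagination", {}).get("next_cursor", "")
        if not cursor:
            break
```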
## Example request

```bash
curl --location 'https://api.novita.ai/v3/model?filter.visibility=public&pagination.limit=2&pagination.cursor=c_0' \
--header 'Authorization: Bearer {{API Key}}'
```

Response:

```json
{
  "models": [
    {
      "id": 114600,
      "name": "V4.0-inpainting (VAE)",
      "hash_sha256": "1A805277C8",
      "sd_name": "realisticVisionV40_v40VAE-inpainting_81543.safetensors",
      "type": {
        "name": "checkpoint",
        "display_name": "Checkpoint"
      },
      "categories": [],
      "status": 1,
      "tags": [
        "photorealistic",
        "anatomical",
        "base model",
        "cgi",
        "realistic",
        "semi-realistic"
      ],
      "cover_url": "https://next-app-static.s3.amazonaws.com/images-prod/xG1nkqKTMzGDvpLrqFT7WA/f291a219-4a86-45ab-96eb-c53446b3e4df/width=450/1495044.jpeg",
      "base_model": "SD 1.5",
      "base_model_type": "Inpainting",
      "download_url_ttl": 2592000,
      "sd_name_in_api": "realisticVisionV40_v40VAE-inpainting_81543.safetensors",
      "is_sdxl": false
    },
    {
      "id": 55199,
      "name": "beta2",
      "hash_sha256": "BA43B0EFEE",
      "sd_name": "GoodHands-beta2_39807.safetensors",
      "type": {
        "name": "locon",
        "display_name": "locon"
      },
      "categories": [],
      "status": 1,
      "tags": ["photorealistic", "concept", "hands"],
      "cover_url": "https://next-app-static.s3.amazonaws.com/images-prod/xG1nkqKTMzGDvpLrqFT7WA/031a378c-3d66-45da-5d67-966c47cd4800/width=450/599083.jpeg",
      "base_model": "SD 1.5",
      "base_model_type": "Standard",
      "download_url_ttl": 2592000,
      "sd_name_in_api": "GoodHands-beta2_39807",
      "is_sdxl": false
    }
  ],
  "pagination": {
    "next_cursor": "c_WzgwNDY2NiwiNTUxOTkiXQ=="
  }
}
```

# Get Images URL

Source: https://novita.ai/docs/api-reference/model-apis-get-training-images-url

POST https://api.novita.ai/v3/assets/training_dataset

**This API provides an S3 pre-signed uploading URL for training images.**

## Request Headers

Enum: `application/json`

Bearer authentication format, for example: Bearer \{\{API Key}}.

## Request Body

Enum: `png`, `webp`, `jpeg`

## Response

The asset ID.

The S3 pre-signed uploading URL.

The method for uploading.
Enum: `PUT`
The host value.

## Example request

```bash
curl --location 'https://api.novita.ai/v3/assets/training_dataset' \
--header 'Authorization: Bearer {{API Key}}' \
--header 'Content-Type: application/json' \
--data '{
    "file_extension": "png"
}'
```

Response:

```json
{
  "assets_id": 100024,
  "upload_url": "https://faas-training-dataset.s3.ap-southeast-1.amazonaws.com/test/743567e210ff505ce5502cfb46659c8e.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIASVPYCN6LRCW3SOUV%2F20231102%2Fap-southeast-1%2Fs3%2Faws4_request&X-Amz-Date=20231102T060519Z&X-Amz-Expires=120&X-Amz-SignedHeaders=host&x-id=PutObject&X-Amz-Signature=781d2156b707b7cfa87d94fb2836838e114c3afe4588368b9503c618ac125a67",
  "method": "PUT",
  "headers": {
    "Host": {
      "values": ["faas-training-dataset.s3.ap-southeast-1.amazonaws.com"]
    }
  }
}
```

# Hunyuan Video Fast

Source: https://novita.ai/docs/api-reference/model-apis-hunyuan-video-fast

POST https://api.novita.ai/v3/async/hunyuan-video-fast

**Accelerated inference for HunyuanVideo with high resolution, a state-of-the-art text-to-video generation model capable of creating high-quality videos with realistic motion from text descriptions.**

This is an **asynchronous** API; only the **task\_id** will be returned. You should use the **task\_id** to request the [**Task Result API**](/api-reference/model-apis-task-result) to retrieve the video generation results.

## Request Headers

Supports: `application/json`

Bearer authentication format, for example: Bearer \{\{API Key}}.

## Request Body

Name of the model checkpoint. Supports: `hunyuan-video-fast`.

Width of the output video. Supports: `480`, `640`, `720`, `864`, `1280`.

Height of the output video. Supports:

* `480` for `width` of `640`
* `640` for `width` of `480`
* `864` for `width` of `480`
* `480` for `width` of `864`
* `720` for `width` of `1280`
* `1280` for `width` of `720`

A seed is a number used to initialize the generation noise, which makes generation deterministic.
Using the same seed and set of parameters will produce identical content each time. Range: `-1 <= x <= 9999999999`.

The number of denoising steps. More steps usually produce higher quality content but take more time to generate. Range: `2 <= x <= 30`.

Prompt text required to guide the generation. Range: `1 <= x <= 2000`.

The number of frames in the output video. Supports: `85`, `129`.

## Response

Use the task\_id to request the [Task Result API](/api-reference/model-apis-task-result) to retrieve the generated outputs.

## Example

Here is an example of how to use the Hunyuan Video Fast API.

1. Generate a task\_id by sending a POST request to the Hunyuan Video Fast API.

`Request:`

```bash
curl --location 'https://api.novita.ai/v3/async/hunyuan-video-fast' \
--header 'Authorization: Bearer {{API Key}}' \
--header 'Content-Type: application/json' \
--data '{
    "model_name": "hunyuan-video-fast",
    "height": 720,
    "width": 1280,
    "seed": -1,
    "steps": 30,
    "prompt": "A close up view of a glass sphere that has a zen garden within it. There is a small dwarf in the sphere who is raking the zen garden and creating patterns in the sand.",
    "frames": 85
}'
```

`Response:`

```js
{
  "task_id": "{Returned Task ID}"
}
```

2. Use the `task_id` to get the output videos. HTTP status codes in the 2xx range indicate that the request has been successfully accepted, while status codes in the 5xx range indicate internal server errors. You can get the video URLs from the `videos` field of the response.

`Request:`

```bash
curl --location --request GET 'https://api.novita.ai/v3/async/task-result?task_id={Returned Task ID}' \
--header 'Authorization: Bearer {{API Key}}'
```

`Response:`

```js
{
  "task": {
    "task_id": "{Returned Task ID}",
    "task_type": "HUNYUAN_VIDEO_FAST",
    "status": "TASK_STATUS_SUCCEED",
    "reason": "",
    "eta": 0,
    "progress_percent": 100
  },
  "images": [],
  "videos": [
    {
      "video_url": "{The URL of the generated video}",
      "video_url_ttl": "3600",
      "video_type": "mp4"
    }
  ]
}
```

`Video files:`
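The polling half of the asynchronous flow above can be sketched in Python using only the standard library. This is a minimal sketch, assuming an API key in a `NOVITA_API_KEY` environment variable; `TASK_STATUS_SUCCEED` is taken from the example response, while `TASK_STATUS_FAILED` and the injectable `fetch` parameter are assumptions made for illustration:

```python
import json
import os
import time
import urllib.request

API_KEY = os.environ.get("NOVITA_API_KEY", "")
RESULT_URL = "https://api.novita.ai/v3/async/task-result?task_id={}"

def wait_for_videos(task_id, interval=5.0, timeout=600.0, fetch=None):
    """Poll the Task Result API until the task finishes, then return video URLs.

    `fetch` is injectable for testing; by default it performs the real GET.
    """
    if fetch is None:
        def fetch(tid):
            req = urllib.request.Request(
                RESULT_URL.format(tid),
                headers={"Authorization": f"Bearer {API_KEY}"},
            )
            with urllib.request.urlopen(req) as resp:
                return json.loads(resp.read())
    deadline = time.monotonic() + timeout
    while True:
        body = fetch(task_id)
        status = body["task"]["status"]
        if status == "TASK_STATUS_SUCCEED":
            return [v["video_url"] for v in body.get("videos", [])]
        if status == "TASK_STATUS_FAILED":  # assumed failure status name
            raise RuntimeError(body["task"].get("reason", "task failed"))
        if time.monotonic() > deadline:
            raise TimeoutError(f"task {task_id} not finished after {timeout}s")
        time.sleep(interval)  # wait before asking again
```

The loop re-requests the Task Result API at a fixed interval until the task reaches a terminal status or the timeout elapses.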