LoRA for Subject Training
You can train a LoRA model to generate images featuring a subject, such as yourself.
Create Subject Training Task
POST https://api.novita.ai/v3/training/subject
Use this API to start a subject training task.
This is an asynchronous API; only the task_id is returned initially. Use this task_id to query the Task Result API at Get Subject Training Result to retrieve the training results.
Request Headers
Header | Description |
---|---|
Authorization | In `Bearer {{API Key}}` format. |
Content-Type | Enum: `application/json` |
Request Body
Parameter | Description |
---|---|
name | Task name for this model training. |
base_model | Base model used for training. Enum: `stable-diffusion-xl-base-1.0`, `dreamshaperXL09Alpha_alpha2Xl10_91562`, `protovisionXLHighFidelity3D_release0630Bakedvae_154359`, `v1-5-pruned-emaonly`, `epicrealism_naturalSin_121250`, `chilloutmix_NiPrunedFp32Fix`, `abyssorangemix3AOM3_aom3a3_10864`, `dreamshaper_8_93211`, `WFChild_v1.0`, `majichenmixrealistic_v10`, `realisticVisionV51_v51VAE_94301`, `sdxlUnstableDiffusers_v11_216694`, `realisticVisionV40_v40VAE_81510`, `epicrealismXL_v10_247189`, `somboy_v10_172675`, `yesmixXL_v10_283329`, `animagineXLV31_v31_325600` |
width | Training image width. Minimum value is 1. |
height | Training image height. Minimum value is 1. |
image_dataset_items | Image asset IDs and image captions. |
components | Common parameters configured for training. |
Response
Field | Description |
---|---|
task_id | The task ID of this training task. Use this `task_id` to query the Task Result API at Get subject training result. |
Get subject training result
GET https://api.novita.ai/v3/training/subject
Use this API to get the subject training result, including the model.
Request Headers
Header | Description |
---|---|
Content-Type | Enum: `application/json` |
Authorization | Bearer authentication format, for example: `Bearer {{API Key}}`. |
Request Body
Parameter | Description |
---|---|
task_id | The task ID of the training task. |
Response
Field | Description |
---|---|
task_status | Represents the current status of a task, particularly useful for monitoring and managing the progress of training tasks. Each status indicates a specific phase in the task's lifecycle. Enum: `UNKNOWN`, `QUEUING`, `TRAINING`, `SUCCESS`, `CANCELED`, `FAILED` |
model_type | Type of the trained model. Enum: `lora` |
models | Models info. |
extra | Extra info. |
Example
Generally, model training involves the following steps.
- Upload the images for model training.
- Set training parameters and start the training.
- Get the training results and generate images with the trained model.
1. Upload images for training
- Currently we only support uploading images in `png`/`jpeg`/`webp` format.
- Each task supports uploading up to 50 images. To get good final results, the uploaded images should meet some basic conditions, such as "portrait in the center", "no watermark", and "clear picture".
1.1 Get image upload URL
- This interface returns the upload URL for a single image and can be called multiple times to upload all the images for training, as sketched below.
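A minimal sketch of the call. The exact endpoint path and the `file_extension` query parameter are assumptions for illustration; consult the API reference for the authoritative path.

```bash
# Request a presigned upload URL for one training image.
# NOTE: endpoint path and query parameter are assumptions, not confirmed by this guide.
curl -X GET "https://api.novita.ai/v3/assets/training_dataset?file_extension=png" \
  -H "Authorization: Bearer $NOVITA_API_KEY"
```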
Response:
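A representative response body (all values are illustrative):

```json
{
  "assets_id": "34558688e0b0e40d2f6d1f4e9a7b3c21",
  "upload_url": "https://training-dataset.s3.example.amazonaws.com/34558688e0b0e40d2f6d1f4e9a7b3c21",
  "method": "PUT"
}
```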
- `assets_id`: The unique identifier of the image, which will be used in the training task.
- `upload_url`: The URL for image upload.
- `method`: The HTTP method for image upload.
1.2 Upload images
After obtaining the upload_url
at step Get image upload URL
, please refer to the following document to complete the image upload: https://docs.aws.amazon.com/zh_cn/AmazonS3/latest/userguide/PresignedUrlUploadObject.html.
Put images:
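For example, with `$UPLOAD_URL` holding the `upload_url` returned in step 1.1 and `photo1.png` as the local image file:

```bash
# -T uploads the file with an HTTP PUT to the presigned URL.
curl -T "photo1.png" "$UPLOAD_URL"
```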
or
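sending the file as the raw request body explicitly:

```bash
# Equivalent upload, passing the file contents as binary body data.
curl -X PUT --data-binary "@photo1.png" \
  -H "Content-Type: image/png" \
  "$UPLOAD_URL"
```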
2. Start training task and configure parameters
In this step, we will begin the model training process, which is expected to take approximately 10 minutes, depending on server availability.
There are four types of parameters for model training: model info parameters, dataset parameters, components parameters, and expert parameters. You can set them according to the table below.
Here are some tips to train a good model:
- Use at least 10 photos of faces that meet the requirements.
- For the parameter `instance_prompt`, we suggest using "a close photo of ohwx <man|woman>".
- For the parameter `base_model`, the value `v1-5-pruned-emaonly` has better generalization ability and can be used in combination with various base models, such as `dreamshaper 2.5D`, while the value `epic-realism` has a strong sense of reality.
Type | Parameters | Description |
---|---|---|
Model info parameters | name | Name of your training model. |
Model info parameters | base_model | Base model type. |
Model info parameters | width | Target image width. |
Model info parameters | height | Target image height. |
Dataset parameters | image_dataset_items | Array: consists of the image `assets_id` and the image caption. |
Dataset parameters | - image_dataset_items.assets_id | The image `assets_id`, which can be obtained in step 1.1 Get image upload URL. |
Components parameters | components | Array: consists of `name` and `args`; these are common parameters configured for training. |
Components parameters | - components.name | Type of component. Enum: `face_crop_region`, `resize`, `face_restore` |
Components parameters | - components.args | Detailed arguments for the corresponding `components.name`. |
Expert parameters | expert_setting | Expert parameters. |
Expert parameters | - instance_prompt | Captions for all the training images; here is a guide on how to write an effective prompt: Click Here |
Expert parameters | - batch_size | Batch size of training. |
Expert parameters | - max_train_steps | Max train steps; 500 is enough for LoRA model training. |
Expert parameters | - … | More expert parameters can be accessed in the API reference. |
Here is an example of how to start a training task:
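A sketch of the request built from the parameter table above. All values are illustrative, and the exact shape of `components.args` is an assumption; consult the API reference for the full schema.

```bash
curl -X POST "https://api.novita.ai/v3/training/subject" \
  -H "Authorization: Bearer $NOVITA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "my-subject-lora",
    "base_model": "v1-5-pruned-emaonly",
    "width": 512,
    "height": 512,
    "image_dataset_items": [
      {"assets_id": "<assets_id from step 1.1>"},
      {"assets_id": "<assets_id from step 1.1>"}
    ],
    "components": [
      {"name": "face_crop_region", "args": []},
      {"name": "resize", "args": []},
      {"name": "face_restore", "args": []}
    ],
    "expert_setting": {
      "instance_prompt": "a close photo of ohwx man",
      "batch_size": 1,
      "max_train_steps": 500
    }
  }'
```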
Response:
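A representative response (the `task_id` value is illustrative):

```json
{
  "task_id": "d660cdd0-ab9b-4a55-8b78-4bc851051fb0"
}
```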
The `task_id` is the unique identifier of the training task and can be used to query the training status and results.
3. Get training status
3.1 Get model training and deployment status
In this step, we will obtain the progress of model training and the status of model deployment after training.
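A sketch of the status query, assuming the task ID is passed as a `task_id` query parameter (an assumption; verify against the API reference):

```bash
# Poll the training task until task_status is SUCCESS.
curl -X GET "https://api.novita.ai/v3/training/subject?task_id=$TASK_ID" \
  -H "Authorization: Bearer $NOVITA_API_KEY"
```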
Response:
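A representative response once training has finished. The nesting of `model_name` and `model_status` under `models` is an assumption based on the fields documented below, and all values are illustrative:

```json
{
  "task_id": "d660cdd0-ab9b-4a55-8b78-4bc851051fb0",
  "task_status": "SUCCESS",
  "model_type": "lora",
  "models": [
    {
      "model_name": "model_1698904832_d660cdd0.safetensors",
      "model_status": "SERVING"
    }
  ],
  "extra": {}
}
```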
- `task_status`: The status of the training task. Enum: `UNKNOWN`, `QUEUING`, `TRAINING`, `SUCCESS`, `CANCELED`, `FAILED`.
- `model_status`: The status of the model. Enum: `DEPLOYING`, `SERVING`.
- `model_name`: The name of the model, which can be used to generate images in the next step.
When the `task_status` is `SUCCESS` and the `model_status` is `SERVING`, we can start to use the LoRA model.
3.2 Start using the trained model
After the model is deployed successfully, we can download the model files or generate images directly.
3.2.1 Use the generated models to create images
To use the trained LoRA models, we need to add the `model_name` into the request of the `/v3/async/txt2img` or `/v3/async/img2img` endpoint. Currently, trained LoRA models cannot be used with the `/v3` endpoint.
Below is an example of how to generate images with the trained model:
Please set the `Content-Type` header to `application/json` in your HTTP request to indicate that you are sending JSON data. Currently, only JSON format is supported.
Request:
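A sketch of a `/v3/async/txt2img` request. The body shape, in particular the `request` wrapper and attaching the trained model via a `loras` entry with a `strength` weight, is an assumption based on this guide; consult the txt2img API reference for the exact schema.

```bash
curl -X POST "https://api.novita.ai/v3/async/txt2img" \
  -H "Authorization: Bearer $NOVITA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "request": {
      "model_name": "v1-5-pruned-emaonly.safetensors",
      "prompt": "a close photo of ohwx man",
      "negative_prompt": "",
      "width": 512,
      "height": 512,
      "image_num": 1,
      "steps": 20,
      "loras": [
        {"model_name": "<model_name from step 3.1>", "strength": 0.7}
      ]
    }
  }'
```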
Response:
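A representative response (the `task_id` value is illustrative):

```json
{
  "task_id": "b54e2be5-9d62-4f59-9e44-5f6d70f18a99"
}
```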
Use `task_id` to get images
HTTP status codes in the 2xx range indicate that the request has been successfully accepted, while status codes in the 5xx range indicate internal server errors.
You can get the image URLs in the `imgs` field of the response.
Request:
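A sketch of the result query, assuming the common v3 async task-result endpoint (an assumption; verify the path in the API reference):

```bash
curl -X GET "https://api.novita.ai/v3/async/task-result?task_id=$TASK_ID" \
  -H "Authorization: Bearer $NOVITA_API_KEY"
```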
Response:
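An illustrative response shape; only the `imgs` field is documented above, so the remaining fields and all values are assumptions:

```json
{
  "task_id": "b54e2be5-9d62-4f59-9e44-5f6d70f18a99",
  "status": "SUCCESS",
  "imgs": [
    {
      "image_url": "https://example.com/generated/0001.png",
      "image_type": "png"
    }
  ]
}
```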
3.3 List training tasks
In this step, we can obtain the info of all trained models.
Response:
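An illustrative response shape built from the fields documented below; the `tasks` wrapper key and all values are assumptions:

```json
{
  "tasks": [
    {
      "task_name": "my-subject-lora",
      "task_id": "d660cdd0-ab9b-4a55-8b78-4bc851051fb0",
      "task_type": "subject",
      "task_status": "SUCCESS",
      "created_at": 1698904832,
      "model": {
        "model_name": "model_1698904832_d660cdd0.safetensors",
        "model_status": "SERVING"
      }
    }
  ]
}
```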
- `task_name`: The name of the training task.
- `task_id`: The unique identifier of the training task, which can be used to query the training status and results.
- `task_type`: The type of the training task.
- `task_status`: The status of the training task. Enum: `UNKNOWN`, `QUEUING`, `TRAINING`, `SUCCESS`, `CANCELED`, `FAILED`.
- `created_at`: The time when the training task was created.
- `model`: The trained model.
- `model_name`: The SD name of the model.
- `model_status`: The status of the model. Enum: `DEPLOYING`, `SERVING`.
4. Training playground
You can also use our training playground to train models in a user-friendly way at: Click Here
4.1 Input Novita AI API Key, images and select training type
4.2 Switch to the inferencing tab and add more detail
Review the training results