The text-to-image endpoint returns only a task_id. Use that task_id to call the /v2/progress API endpoint and retrieve the image generation results. The V2 endpoints will be phased out gradually; we recommend using the V3 endpoints to generate images.
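The task_id-then-poll flow can be sketched as below. The progress response fields ("status", "imgs") and the base URL are illustrative assumptions, not confirmed by this reference; the fetcher is passed in as a callable so any HTTP client (e.g. requests) can be plugged in.

```python
import time

# Hypothetical base URL; substitute the real API host.
PROGRESS_URL = "https://api.example.com/v2/progress"

def wait_for_images(task_id, fetch, interval=1.0, max_tries=30):
    """Poll the /v2/progress endpoint until the task finishes.

    `fetch` is any callable that takes a URL and returns the decoded
    JSON body as a dict. The "status" and "imgs" field names are
    assumptions for illustration.
    """
    for _ in range(max_tries):
        body = fetch(f"{PROGRESS_URL}?task_id={task_id}")
        if body.get("status") == "SUCCEED":
            return body.get("imgs", [])
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} did not finish in time")
```

In real use, `fetch` would be something like `lambda url: requests.get(url, headers=auth_headers).json()`.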
0 - Explicit Nudity, Explicit Sexual Activity, Sex Toys; Hate Symbols.
1 - Explicit Nudity, Explicit Sexual Activity, Sex Toys; Hate Symbols; Non-Explicit Nudity, Obstructed Intimate Parts, Kissing on the Lips.
2 - Explicit Nudity, Explicit Sexual Activity, Sex Toys; Hate Symbols; Non-Explicit Nudity, Obstructed Intimate Parts, Kissing on the Lips; Female Swimwear or Underwear, Male Swimwear or Underwear.
Enum: 0, 1, 2
Positive prompt words, separated by commas. To use LoRA, call the /v3/model endpoint with the parameter filter.types=lora and use the returned sd_name_in_api field as the model name. Note that LoRA models are referenced in the prompt with the format <lora:$sd_name:$weight>.
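A small helper makes the comma-separated prompt plus <lora:$sd_name:$weight> convention concrete. The LoRA name used in the usage note is hypothetical; real names come from the /v3/model endpoint's sd_name_in_api field.

```python
def lora_tag(sd_name, weight):
    """Format one LoRA reference in the <lora:$sd_name:$weight> syntax."""
    return f"<lora:{sd_name}:{weight}>"

def build_prompt(words, loras=()):
    """Join positive prompt words with commas and append LoRA tags.

    `words` is a sequence of prompt terms; `loras` is a sequence of
    (sd_name, weight) pairs.
    """
    parts = list(words) + [lora_tag(name, weight) for name, weight in loras]
    return ", ".join(parts)
```

For example, `build_prompt(["masterpiece", "1girl"], [("detail_tweaker", 0.8)])` yields `"masterpiece, 1girl, <lora:detail_tweaker:0.8>"` (the LoRA name here is made up for illustration).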
Name of the Stable Diffusion model. Call the /v3/model endpoint with the parameter filter.types=checkpoint and use the returned sd_name_in_api field as the model_name.
VAE (Variational Auto Encoder). Call the /v3/model endpoint with the query parameter filter.types=vae and use the returned sd_name field as the sd_vae.
This parameter indicates the number of final CLIP layers to skip during conditioning. For example, with clip_skip set to 2 on an SD 1.x model, where CLIP has 12 layers, generation stops at the 10th layer.
Model to use on the image passed to this unit before using it for conditioning.
ControlNets for SD 1.5: control_v11e_sd15_ip2p, control_v11e_sd15_shuffle, control_v11f1e_sd15_tile, control_v11f1p_sd15_depth, control_v11p_sd15_canny, control_v11p_sd15_inpaint, control_v11p_sd15_lineart, control_v11p_sd15_mlsd, control_v11p_sd15_normalbae, control_v11p_sd15_openpose, control_v11p_sd15_scribble, control_v11p_sd15_seg, control_v11p_sd15_softedge, control_v11p_sd15s2_lineart_anime, ip-adapter-plus-face_sd15, ip-adapter_sd15_plus, ip-adapter_sd15
ControlNets for SDXL: t2i-adapter_diffusers_xl_canny, t2i-adapter_diffusers_xl_depth_midas, t2i-adapter_diffusers_xl_depth_zoe, t2i-adapter_diffusers_xl_lineart, t2i-adapter_diffusers_xl_openpose, t2i-adapter_diffusers_xl_sketch, t2i-adapter_xl_canny, t2i-adapter_xl_openpose, t2i-adapter_xl_sketch, ip-adapter_xl
How to resize the input image to fit the output resolution of the generation. 0 represents JUST_RESIZE, 1 represents RESIZE_OR_CROP, 2 represents RESIZE_AND_FILL.
Enum: 0, 1, 2