
Text to Image with LCM


10x faster image generation with latent consistency models, synthesizing high-resolution images with few-step inference.

Request header parameters

  • Content-Type (string, Required)

    Must be application/json; only JSON format is currently supported.


  • Authorization (string, Required)

    Bearer authentication format, for example: Bearer {{API Key}}.

Request Body parameters

  • extra (object)

    Optional extra parameters for the request.

  • model_name (string)

    This parameter specifies the name of the model checkpoint. Retrieve the corresponding sd_name_in_api value by invoking the endpoint with type=checkpoint as the query parameter.

  • prompt (string, Required)

    Text input that guides the image generation. Separate multiple terms with `,`. Length range: [1, 1024].

  • negative_prompt (string)

    Text describing what the image generation should avoid. Separate multiple terms with `,`. Length range: [1, 1024].

  • height (integer, Required)

    Height of the image. Range: [128, 2048]

  • width (integer, Required)

    Width of the image. Range: [128, 2048]

  • loras ([object])

    LoRA is a fast and lightweight training method that inserts and trains a significantly smaller number of parameters instead of all the model parameters. Currently supports up to 5 LoRAs.

  • embeddings ([object])

    Textual Inversion is a training method for personalizing models by learning new text embeddings from a few example images. Currently supports up to 5 embeddings.

  • image_num (integer, Required)

    Number of images to generate. Range: [1, 16]

  • steps (integer, Required)

    The number of denoising steps. More steps usually produce higher-quality images but take longer to generate. Range: [1, 8]

  • seed (integer, Required)

    A seed is a number from which Stable Diffusion generates noise, making generation deterministic: the same seed and set of parameters will produce the identical image each time. Minimum: -1.

  • guidance_scale (number, Required)

    Controls how closely Stable Diffusion follows your prompt: higher guidance forces the model to follow the prompt more strictly, but can result in lower-quality output. Range: [0, 3]

  • clip_skip (integer)

    The number of layers to stop from the bottom during optimization. For example, clip_skip of 2 on an SD 1.x model, where CLIP has 12 layers, means stopping at the 10th layer. Range: [1, 12]


Response parameters

  • images ([object])

    The generated images, returned as Base64-encoded data.


1. Text to Image with LCM


Please set the Content-Type header to application/json in your HTTP request to indicate that you are sending JSON data. Currently, only JSON format is supported.
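The request body parameters above can be combined into a payload like the following sketch. All values here are illustrative; model_name is omitted, which (as an assumption) falls back to the service's default checkpoint.

```python
import json

# Sketch of a request body using the documented parameters.
payload = {
    "prompt": "Glowing jellyfish floating through a foggy forest at twilight",
    "negative_prompt": "blurry, low quality",
    "height": 512,          # Range: [128, 2048]
    "width": 512,           # Range: [128, 2048]
    "image_num": 2,         # Range: [1, 16]
    "steps": 8,             # LCM needs few steps; Range: [1, 8]
    "seed": -1,             # minimum -1
    "guidance_scale": 2.0,  # Range: [0, 3]
}

# Serialize to the JSON body that would be sent with Content-Type: application/json.
body = json.dumps(payload, indent=2)
print(body)
```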


curl --location --request POST '' \
--header 'Authorization: Bearer {{key}}' \
--header 'Content-Type: application/json' \
--header "Accept-Encoding: gzip" \
--data-raw '{
   "prompt": "Glowing jellyfish floating through a foggy forest at twilight",
   "height": 512,
   "width": 512,
   "image_num": 2,
   "steps": 8,
   "guidance_scale": 2
}'

HTTP status codes in the 2xx range indicate that the request has been successfully accepted; code 400 means a request parameter error, while status codes in the 5xx range indicate internal server errors.


{
    "images": [
        {
            "image_file": "{{Base64 encoded image}}",
            "image_type": "jpeg"
        },
        {
            "image_file": "{{Base64 encoded image}}",
            "image_type": "jpeg"
        }
    ]
}
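The image_file values in the response are Base64-encoded, so they can be decoded client-side and written to disk. A minimal sketch, using a stand-in dict in the shape of the response above rather than a real API call:

```python
import base64

# Stand-in for a parsed JSON response; a real call would return
# actual Base64-encoded JPEG bytes in image_file.
response = {
    "images": [
        {
            "image_file": base64.b64encode(b"fake-jpeg-bytes").decode("ascii"),
            "image_type": "jpeg",
        },
    ]
}

# Decode each image and write it out, naming files by index and image_type.
for i, img in enumerate(response["images"]):
    data = base64.b64decode(img["image_file"])
    path = f"output_{i}.{img['image_type']}"
    with open(path, "wb") as fh:
        fh.write(data)
    print(f"wrote {len(data)} bytes to {path}")
```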