POST /api/v1/generate
Content-Type: application/json for text-to-image; multipart/form-data for image editing.
Parameter support can differ depending on the model used to generate the response. Check the Model Library for model-specific compatibility.

Authentication

Send your API key in the Authorization header as a Bearer token.
Authorization: Bearer YOUR_API_KEY

Parameters

Common

model (string, required): The model ID used to generate the response, such as flux-1-kontext. Find supported models in the Model Library.
prompt (string, required): Text prompt describing what to generate.

Conditional

The following parameters are not supported by every model. Check the Model Library for model-specific compatibility.

Image Generation (Text Prompt)

Generate an image from a text prompt.
seed (integer, optional): Random seed for reproducible results. If not provided, a random seed is used.
mode (string, optional): Generation mode: "text-to-image" (default) for generating images from text prompts, or "image-editing" for editing source images.
real_time (boolean, optional): Enable real-time web search mode for current references. Text-to-image mode only. Defaults to false.
width (integer, optional): Output width in pixels; text-to-image mode only (default: 1024, range: 512–2048).
height (integer, optional): Output height in pixels; text-to-image mode only (default: 1024, range: 512–2048).
guidance_scale (number, optional): Classifier-free guidance scale (default: 1.0).
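
The optional parameters above can be combined in a single JSON request body. The sketch below builds and validates such a body before sending it; the parameter names come from the table above, and the specific values (seed 42, 1024×1024) are illustrative assumptions, not recommendations.

```shell
# Build a text-to-image request body with optional parameters.
# Values here are illustrative; adjust to your use case.
cat > request.json <<'EOF'
{
  "model": "YOUR_MODEL",
  "prompt": "A fluffy orange tabby cat in a sunlit garden",
  "seed": 42,
  "width": 1024,
  "height": 1024,
  "guidance_scale": 1.0
}
EOF

# Sanity-check the JSON locally before spending an API call on it.
python3 -m json.tool request.json > /dev/null && echo "valid JSON"

# Then send it:
# curl -X POST https://api-web.eigenai.com/api/v1/generate \
#   -H "Authorization: Bearer YOUR_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d @request.json
```

Passing `-d @request.json` keeps the curl command short and lets you reuse or version the request body separately.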

Image Editing (Upload or URL)

Transform the source image using an instruction prompt.
image_file (file, optional): Upload the source image file.
image (file, optional): Some request types use image instead of image_file for uploading source images.
image_path (string, optional): Reference to the source image (often an HTTPS URL or an internal path).
num_inference_steps (number, optional): Number of inference/denoising steps. Defaults to 30.
binary_response (boolean, optional): Return binary image data directly instead of JSON.
output_format (string, optional): Output image format (jpg or png).
downsizing_mp (number, optional): Downsample large images for faster processing.
lora_strength (number, optional): Numerical multiplier controlling the intensity of the applied Low-Rank Adaptation (LoRA) on the base model's weights. Defaults to 0.8.
rank (number, optional): Controls edit complexity/strength. Defaults to 32.
offloading (boolean, optional): Enable CPU offloading in constrained environments.
weight (string, optional): Select an editing profile (lightning or vanilla).
true_cfg_scale (number, optional): Guidance scale controlling how strongly the prompt is applied.
sample_steps (number, optional): Sampling steps.
sample_guide_scale (number, optional): Sampling guidance scale.
negative_prompt (string, optional): What to avoid in the output.
s3_output_path (string, optional): Destination bucket/key for the output image (e.g. s3://chatbot-images-eigenai/banana_example.png).
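
Since image_path accepts a reference instead of an upload, an editing request can also be sent as a JSON body rather than multipart form data. The sketch below builds such a body; the parameter names come from the table above, while the source URL and parameter values are placeholder assumptions.

```shell
# Build an image-editing request body that references the source image by
# URL (image_path) instead of uploading a file. Values are illustrative.
cat > edit.json <<'EOF'
{
  "model": "YOUR_MODEL",
  "prompt": "Replace the bag with a laptop",
  "image_path": "https://example.com/source.png",
  "num_inference_steps": 30,
  "output_format": "png",
  "negative_prompt": "blurry, low quality"
}
EOF

# Validate the JSON locally before sending.
python3 -m json.tool edit.json > /dev/null && echo "valid JSON"

# Then send it:
# curl -X POST https://api-web.eigenai.com/api/v1/generate \
#   -H "Authorization: Bearer YOUR_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d @edit.json
```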

Examples

Image generation (JSON)

Generate an image from a text prompt using a JSON request body.
# Select a model in the Model Library: https://api-web.eigenai.com/model-library

curl -X POST https://api-web.eigenai.com/api/v1/generate \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "YOUR_MODEL",
    "prompt": "A fluffy orange tabby cat in a sunlit garden"
  }'

Image editing (multipart upload)

Upload an image file and apply an edit instruction prompt.
# Select a model in the Model Library: https://api-web.eigenai.com/model-library

curl -X POST https://api-web.eigenai.com/api/v1/generate \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F "model=YOUR_MODEL" \
  -F "prompt=Replace the bag with a laptop" \
  -F "image_file=@/path/to/source.png" \
  -F "num_inference_steps=15" \
  -F "binary_response=true" \
  --output edited.png