# ChainHub API Guide

## Docs

- [How to use ChainHub](https://docs.chainhub.tech/how-to-use-chainhub-1957909m0.md)
- Overview [Quick Start](https://docs.chainhub.tech/-quick-start-1957910m0.md)
- Overview [Important Guidelines](https://docs.chainhub.tech/important-guidelines-1957912m0.md)
- Overview [Examples](https://docs.chainhub.tech/examples-1957913m0.md)
- Overview [Pricing](https://docs.chainhub.tech/pricing-1957911m0.md)
- Overview [Error Codes](https://docs.chainhub.tech/error-codes-1957914m0.md)
- Chat [Chat](https://docs.chainhub.tech/chat-1957915m0.md)
- Chat > Google Gemini API [Gemini Chat](https://docs.chainhub.tech/gemini-chat-1957922m0.md)
- Chat > Anthropic Claude Interface [Anthropic Claude](https://docs.chainhub.tech/anthropic-claude-1957921m0.md)
- Music Suno [Suno API Documentation](https://docs.chainhub.tech/suno-api-documentation-1957948m0.md)
- Kling Platform [Callback Protocol](https://docs.chainhub.tech/callback-protocol-1957950m0.md)
- Fal-ai aggregation platform [Integration Tutorial](https://docs.chainhub.tech/integration-tutorial-1957947m0.md)
- Replicate Aggregation Platform [Access Tutorial](https://docs.chainhub.tech/access-tutorial-1957949m0.md)
- Python Configuration [Python Basics Discussion](https://docs.chainhub.tech/python-basics-discussion-1958277m0.md)
- Python Configuration [Using gpt-4o in Python to recognize images](https://docs.chainhub.tech/using-gpt-4o-in-python-to-recognize-images-1958278m0.md)
- Python Configuration [Using Claude in Python to recognize images](https://docs.chainhub.tech/using-claude-in-python-to-recognize-images-1958279m0.md)
- Python Configuration [Python OpenAI official libraries](https://docs.chainhub.tech/python-openai-official-libraries-1958280m0.md)
- Python Configuration [Python continuous dialogue](https://docs.chainhub.tech/python-continuous-dialogue-1958281m0.md)
- Python Configuration [Using Python to convert speech to text](https://docs.chainhub.tech/using-python-to-convert-speech-to-text-1958282m0.md)
- Python Configuration [Using Python to convert text to speech](https://docs.chainhub.tech/using-python-to-convert-text-to-speech-1958283m0.md)
- Python Configuration [Vectorization using Embeddings in Python](https://docs.chainhub.tech/vectorization-using-embeddings-in-python-1958284m0.md)
- Python Configuration [Python calls DALL·E](https://docs.chainhub.tech/python-calls-dalle-1958285m0.md)
- Python Configuration [Simple Python function calling demo](https://docs.chainhub.tech/simple-python-function-calling-demo-1958286m0.md)
- Python Configuration [Simple Python LangChain calling OpenAI demo](https://docs.chainhub.tech/simple-python-langchain-calling-openai-demo-1958287m0.md)
- Python Configuration [Python LlamaIndex configuration](https://docs.chainhub.tech/python-llamaindex-configuration-1958288m0.md)
- Python Configuration [Using gpt-4o in Python to recognize local images](https://docs.chainhub.tech/using-gpt-4o-in-python-to-recognize-local-images-1958289m0.md)
- Python Configuration [Python library for streaming output](https://docs.chainhub.tech/python-library-for-streaming-output-1958290m0.md)
- Python Configuration [GPT Realtime Model Call](https://docs.chainhub.tech/gpt-realtime-model-call-1958291m0.md)
- Python Configuration [Python requests streaming demo](https://docs.chainhub.tech/python-request-request-streaming-demo-1958293m0.md)
- Python Configuration [Using Python to create and edit images with gpt-image-1](https://docs.chainhub.tech/using-python-to-create-and-edit-images-with-gpt-image-1-1958294m0.md)

## API Docs

- Chat > ChatGPT Interface > ChatGPT Audio [Audio to text conversion gpt-4o-transcribe](https://docs.chainhub.tech/audio-to-text-conversion-gpt-4o-transcribe-27736947e0.md): Official documentation: https://platform.openai.com/docs/guides/speech-to-text
- Chat > ChatGPT Interface > ChatGPT Audio [Creating voice gpt-4o-mini-tts](https://docs.chainhub.tech/creating-voice-gpt-4o-mini-tts-27736948e0.md): Official documentation: https://platform.openai.com/docs/guides/text-to-speech
- Chat > ChatGPT Interface > ChatGPT Audio [Create Translation (Not Supported)](https://docs.chainhub.tech/create-translation-not-supported-27736949e0.md)
- Chat > ChatGPT Interface > ChatGPT Embeddings [Create an embedding](https://docs.chainhub.tech/create-an-embed-27736950e0.md): Creates an embedding vector representing the input text, which machine learning models and algorithms can easily consume. Related guide: [Embeddings](https://platform.openai.com/docs/guides/embeddings)
- Chat > ChatGPT Interface > ChatGPT Auto-Completion [Create completion](https://docs.chainhub.tech/creation-complete-27736951e0.md): Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position.
- Chat > ChatGPT Interface > Chat (Responses) [Create function call](https://docs.chainhub.tech/create-function-call-copy-27736963e0.md): https://platform.openai.com/docs/api-reference/responses/create Some OpenAI models, such as o3-pro and codex-mini-latest, only support the Responses format.
- Chat > ChatGPT Interface > GPTs Related [GPTs Dialogue](https://docs.chainhub.tech/gpts-dialogue-27736990e0.md): The model name format is gpt-4-gizmo-*; the system recognizes it automatically.
- Chat > Google Gemini API > Chat Compatible Format [Gemini-2.5-flash-all](https://docs.chainhub.tech/gemini-2-5-flash-all-27742235e0.md): Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position.
- Chat > Google Gemini API > Chat Compatible Format [Chat interface](https://docs.chainhub.tech/chat-interface-27742395e0.md): Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position.
- Chat > Google Gemini API > Chat Compatible Format [Image recognition interface](https://docs.chainhub.tech/image-recognition-interface-27742377e0.md): Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position.
- Chat > Google Gemini API > Native Format [Embeddings](https://docs.chainhub.tech/embeddings-27736952e0.md): Official documentation: https://ai.google.dev/gemini-api/docs/text-generation?hl=en#multi-turn-conversations
- Chat > Google Gemini API > Native Format [Create text (streaming)](https://docs.chainhub.tech/create-text-flow-27736953e0.md): Official documentation: https://ai.google.dev/gemini-api/docs/text-generation?hl=en#multi-turn-conversations
- Chat > Google Gemini API > Native Format [Text generation + thinking (streaming)](https://docs.chainhub.tech/text-generation-thinking-flow-27736954e0.md): Official documentation: https://ai.google.dev/gemini-api/docs/text-generation?hl=en#multi-turn-conversations
- Chat > Google Gemini API > Native Format [Image generation](https://docs.chainhub.tech/image-generation-27736955e0.md): Official documentation: https://ai.google.dev/gemini-api/docs/image-generation?hl=en
- Chat > Google Gemini API > Native Format [Image generation gemini-2.5-flash-image: controlling aspect ratio](https://docs.chainhub.tech/image-generation-gemini-2-5-flash-image-controlling-aspect-ratio-27736956e0.md): Official documentation: https://ai.google.dev/gemini-api/docs/image-generation?hl=en#gemini-image-editing
- Chat > Google Gemini API > Native Format [Image generation gemini-3-pro-image-preview: controlling aspect ratio and sharpness](https://docs.chainhub.tech/image-generation-gemini-3-pro-image-preview-controls-aspect-ratio-and-sharpness-27736957e0.md): Official documentation: https://ai.google.dev/gemini-api/docs/image-generation?hl=en#gemini-image-editing
- Chat > Google Gemini API > Native Format [Image editing](https://docs.chainhub.tech/image-editing-27736958e0.md): Official documentation: https://ai.google.dev/gemini-api/docs/image-generation?hl=en#gemini-image-editing
- Chat > Google Gemini API > Native Format [Google Search](https://docs.chainhub.tech/google-search-27736959e0.md): Official documentation: https://ai.google.dev/gemini-api/docs/document-processing?hl=en
- Chat > Google Gemini API > Native Format [TTS Text-to-speech](https://docs.chainhub.tech/tts-text-to-speech-27736960e0.md): Official documentation: https://ai.google.dev/gemini-api/docs/image-generation?hl=en#gemini-image-editing
- Chat > Google Gemini API > Native Format [Text generation gemini-3-pro-preview:generateContent](https://docs.chainhub.tech/text-generation-gemini-3-pro-previewgeneratecontent-27736961e0.md): Official documentation: https://ai.google.dev/gemini-api/docs/text-generation?hl=en#multi-turn-conversations
- Chat > Google Gemini API > Native Format [Imagen Generate image](https://docs.chainhub.tech/-imagen-generate-image-27736962e0.md): Official documentation: https://ai.google.dev/gemini-api/docs/document-processing?hl=en
- Chat > Anthropic Claude Interface > Chat Compatible Format [Create Chat Completion (Streaming)](https://docs.chainhub.tech/create-chat-completion-streaming-27746985e0.md): Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position.
- Chat > Anthropic Claude Interface > Chat Compatible Format [Create Extended Thinking Chat](https://docs.chainhub.tech/create-extended-thinking-chat-27747991e0.md): Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position.
- Chat > Anthropic Claude Interface > Chat Compatible Format [Create Chat Vision (Non-Streaming)](https://docs.chainhub.tech/create-chat-vision-non-streaming-27747992e0.md): Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position.
- Chat > Anthropic Claude Interface > Chat Compatible Format [Create Chat Vision (Streaming)](https://docs.chainhub.tech/create-chat-vision-streaming-27747993e0.md): Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position.
- Chat > Anthropic Claude Interface > Chat Compatible Format [Create Chat Completion (Non-Streaming)](https://docs.chainhub.tech/create-chat-completion-non-streaming-27748162e0.md): Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position.
- Chat > Anthropic Claude Interface > Native Format [Create function calls (streaming)](https://docs.chainhub.tech/create-function-calls-streaming-27747352e0.md): Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position.
- Chat > Anthropic Claude Interface > Native Format [Create chat autocomplete (streaming)](https://docs.chainhub.tech/create-chat-autocomplete-streaming-27747136e0.md): Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position.
- Chat > Anthropic Claude Interface > Native Format [Create formatted output](https://docs.chainhub.tech/create-formatted-output-27747747e0.md): Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position.
- Chat > Anthropic Claude Interface > Native Format [Create a Thinking Chat](https://docs.chainhub.tech/create-a-thinking-chat-27747356e0.md): Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position.
- Chat > Anthropic Claude Interface > Native Format [Internet search](https://docs.chainhub.tech/internet-search-27736946e0.md): Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position.
- Chat > Anthropic Claude Interface > Native Format [PDF support](https://docs.chainhub.tech/pdf-support-27747581e0.md): Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position.
- Image Models > Qwen Series [qwen-image-edit-2509](https://docs.chainhub.tech/qwen-image-edit-2509-27737101e0.md): Given a prompt and/or input image, the model will generate a new image.
- Image Models > Qwen Series [qwen-image-max](https://docs.chainhub.tech/qwen-image-max-27748168e0.md)
- Image Models > Qwen Series [z-image-turbo](https://docs.chainhub.tech/z-image-turbo-27748169e0.md)
- Image Models > Tencent AIGC [Create Task](https://docs.chainhub.tech/create-task-27737102e0.md): Official documentation: https://cloud.tencent.com/document/product/266/126240
- Image Models > Tencent AIGC [Get the request result](https://docs.chainhub.tech/get-the-request-result-27748171e0.md)
- Image Models > Midjourney [Upload Image](https://docs.chainhub.tech/upload-image-27737092e0.md): Official documentation: https://docs.midjourney.com/hc/en-us/articles/33329380893325-Managing-Image-Uploads
- Image Models > Midjourney [Submit Imagine Task](https://docs.chainhub.tech/submit-imagine-task-27737093e0.md): Official documentation: https://docs.midjourney.com/hc/en-us/articles/32023408776205-Prompt-Basics
- Image Models > Midjourney [Query task status by task ID](https://docs.chainhub.tech/query-task-status-by-task-id-27737094e0.md)
- Image Models > Midjourney [Search for tasks based on the ID list](https://docs.chainhub.tech/search-for-tasks-based-on-the-id-list-27737095e0.md)
- Image Models > Midjourney [Get the seed for the task image](https://docs.chainhub.tech/get-the-seed-for-the-task-image-27737096e0.md)
- Image Models > Midjourney [Execute Action](https://docs.chainhub.tech/execute-action-27737097e0.md): Official documentation: https://docs.midjourney.com/hc/en-us/articles/32804058614669-Upscalers
- Image Models > Midjourney [Submit Blend task](https://docs.chainhub.tech/submit-blend-task-27737098e0.md): Official documentation: https://docs.midjourney.com/hc/en-us/articles/32635189884557-Blend-Images-on-Discord
- Image Models > Midjourney [Submit Describe task](https://docs.chainhub.tech/submit-describe-task-27737099e0.md): Official documentation: https://docs.midjourney.com/hc/en-us/articles/32497889043981-Describe
- Image Models > Midjourney [Submit Model](https://docs.chainhub.tech/submit-model-27737100e0.md)
- Image Models > Ideogram [Generate 3.0 (Text and Image) Generate](https://docs.chainhub.tech/generate-3-0-text-and-image-generate-27737083e0.md): Uses the Ideogram 3.0 model to generate images synchronously from the given prompt and optional parameters.
- Image Models > Ideogram [Generate 3.0 (Image Editing) Edit](https://docs.chainhub.tech/generate-3-0-image-editing-edit-27737084e0.md): Uses the Ideogram 3.0 model to generate images synchronously from the given prompt and optional parameters.
- Image Models > Ideogram [Generate 3.0 (Image Remix) Remix](https://docs.chainhub.tech/generate-3-0-image-remix-remix-27737085e0.md): Uses the Ideogram 3.0 model to generate images synchronously from the given prompt and optional parameters.
- Image Models > Ideogram [Generate 3.0 (Image Reframe) Reframe](https://docs.chainhub.tech/generate-3-0-image-reframe-reframe-27737086e0.md): Uses the Ideogram 3.0 model to generate images synchronously from the given prompt and optional parameters.
- Image Models > Ideogram [Generate 3.0 (Replace Background) Replace Background](https://docs.chainhub.tech/generate-3-0-replace-background-replace-background-27737087e0.md): Uses the Ideogram 3.0 model to generate images synchronously from the given prompt and optional parameters.
- Image Models > Ideogram [ideogram (Text to Image)](https://docs.chainhub.tech/ideogram-text-to-image-27737088e0.md): Generates images synchronously based on a given prompt and optional parameters.
- Image Models > Ideogram [Remix (Image Remix)](https://docs.chainhub.tech/remix-image-remix-27737089e0.md): Official documentation: https://developer.ideogram.ai/api-reference/api-reference/remix
- Image Models > Ideogram [Upscale](https://docs.chainhub.tech/upscale-upscale-27737090e0.md): Official documentation: https://developer.ideogram.ai/api-reference/api-reference/upscale
- Image Models > Ideogram [Describe](https://docs.chainhub.tech/describe-describe-27737091e0.md): Official documentation: https://developer.ideogram.ai/api-reference/api-reference/describe
- Image Models > Fal.AI Platform [/fal-ai/nano-banana Text-to-Image](https://docs.chainhub.tech/fal-ainano-banana-text-to-image-27737080e0.md): Official documentation: https://fal.ai/models/fal-ai/nano-banana
- Image Models > Fal.AI Platform [/fal-ai/nano-banana/edit Image Editing](https://docs.chainhub.tech/fal-ainano-bananaedit-image-editing-27737081e0.md): Official documentation: https://fal.ai/models/fal-ai/nano-banana/edit
- Image Models > Fal.AI Platform [Get the request result](https://docs.chainhub.tech/get-the-request-result-27748389e0.md)
- Image Models > FLUX Series > OpenAI Compatible Format [Flux Image Editing (OpenAI dall-e-3 format)](https://docs.chainhub.tech/flux-image-editing-openai-dall-e-3-format-27737082e0.md): Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position.
- Image Models > FLUX Series > OpenAI Compatible Format [Flux creation (OpenAI dall-e-3 format)](https://docs.chainhub.tech/flux-creation-openai-dall-e-3-format-27748428e0.md): [Images API reference](https://platform.openai.com/docs/api-reference/images)
- Image Models > FLUX Series > Replicate Official Format [Query task](https://docs.chainhub.tech/query-task-27748446e0.md): Official documentation: https://replicate.com/black-forest-labs/flux-kontext-max
- Image Models > GPT Image Series [Create gpt-image-1](https://docs.chainhub.tech/create-gpt-image-1-27748450e0.md): Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position.
- Image Models > GPT Image Series [Edit gpt-image-1](https://docs.chainhub.tech/edit-gpt-image-1-27748452e0.md): Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position.
- Image Models > GPT Image Series [Mask gpt-image-1](https://docs.chainhub.tech/mask-gpt-image-1-27748453e0.md): Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position.
- Image Models > GPT Image Series [Create gpt-image-1.5](https://docs.chainhub.tech/create-gpt-image-1-5-27748458e0.md): Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position.
- Image Models > GPT Image Series [Edit gpt-image-1.5](https://docs.chainhub.tech/edit-gpt-image-1-5-27748460e0.md): Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position.
- Image Models > GPT Image Series [Mask gpt-image-1.5](https://docs.chainhub.tech/mask-gpt-image-1-5-27748461e0.md): Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position.
- Video Models > grok Video Generation > Video Unified Format [Create video](https://docs.chainhub.tech/create-video-27737103e0.md)
- Video Models > grok Video Generation > Video Unified Format [Query task](https://docs.chainhub.tech/query-task-27737104e0.md): Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position.
- Video Models > luma Video Generation > Official API Format [Submit video generation task](https://docs.chainhub.tech/submit-video-generation-task-27737105e0.md): Official documentation: https://docs.lumalabs.ai/docs/video-generation
- Video Models > luma Video Generation > Official API Format [Extend video](https://docs.chainhub.tech/extend-video-27737106e0.md): Official documentation: https://docs.lumalabs.ai/docs/video-generation
- Video Models > luma Video Generation > Query a single task [Query a single task](https://docs.chainhub.tech/query-a-single-task-27737107e0.md): "state": "completed". Enum values: "pending", "processing", "completed", "failed"
- Video Models > luma Video Generation > Batch retrieval tasks [Batch retrieval tasks](https://docs.chainhub.tech/batch-retrieval-tasks-27737108e0.md): "state": "completed". Enum values: "pending", "processing", "completed", "failed"
- Video Models > Runway Video Generation [Submit video generation task](https://docs.chainhub.tech/submit-video-generation-task-27737109e0.md): Official documentation: https://docs.dev.runwayml.com/api/#tag/Start-generating/paths/~1v1~1image_to_video/post
- Video Models > Runway Video Generation [Query video task (free)](https://docs.chainhub.tech/query-video-task-free-27737110e0.md)
- Video Models > Sora Video Generation > OpenAI Official Video Format [openai Create video (with Character)](https://docs.chainhub.tech/openai-create-video-with-character-27737112e0.md)
- Video Models > Sora Video Generation > OpenAI Official Video Format [openai Query task](https://docs.chainhub.tech/openai-query-task-27737113e0.md): Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position.
- Video Models > Sora Video Generation > OpenAI Official Video Format [openai Download video](https://docs.chainhub.tech/openai-download-video-27737114e0.md): Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position.
- Video Models > Sora Video Generation > OpenAI Official Video Format [openai Edit video](https://docs.chainhub.tech/openai-edit-video-27737115e0.md)
- Video Models > Sora Video Generation > OpenAI Official Video Format [Create videos with images using OpenAI](https://docs.chainhub.tech/create-videos-with-images-using-openai-27757342e0.md)
- Video Models > Sora Video Generation > OpenAI Official Video Format [Create a video with images using OpenAI in private mode](https://docs.chainhub.tech/create-a-video-with-images-using-openai-in-private-mode-27757527e0.md)
- Video Models > Sora Video Generation > OpenAI Official Video Format [Create videos using storyboards](https://docs.chainhub.tech/create-videos-using-storyboards-27757546e0.md)
- Video Models > Sora Video Generation > Chat Format [Create video](https://docs.chainhub.tech/create-video-27757575e0.md): Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position.
- Video Models > Sora Video Generation > Chat Format [Continuous modification to generate video](https://docs.chainhub.tech/continuous-modification-to-generate-video-27757580e0.md): Given a hint, the model will return the completion of one or more predictions, and may also return the probability of an alternative label at each location. - Video Models > Sora Video Generation > Chat Format [Create a video with images](https://docs.chainhub.tech/create-a-video-with-images-27757585e0.md): Given a hint, the model will return the completion of one or more predictions, and may also return the probability of an alternative label at each location. - Video Models > Sora Video Generation > Unified Video Format [Query task ](https://docs.chainhub.tech/query-task-27757762e0.md): Given a hint, the model will return one or more complete predictions, and may also return the probability of an alternative label at each location. - Video Models > Sora Video Generation > Unified Video Format [Create a video with images using sora-2](https://docs.chainhub.tech/create-a-video-with-images-using-sora-2-27757776e0.md): - Video Models > Sora Video Generation > Unified Video Format [Create video sora-2](https://docs.chainhub.tech/create-video-sora-2-27757778e0.md): - Video Models > Sora Video Generation > Unified Video Format [Creating videos with sora-2-pro](https://docs.chainhub.tech/creating-videos-with-sora-2-pro-27757780e0.md): - Video Models > Sora Video Generation > Unified Video Format [Create a video (with a character)](https://docs.chainhub.tech/create-a-video-with-a-character-27757783e0.md): - Video Models > Sora Video Generation [Create character](https://docs.chainhub.tech/create-character-27737111e0.md): - Video Models > Minimax Hailuo Video Generation [First and last frame video](https://docs.chainhub.tech/first-and-last-frame-video-27737116e0.md): Official documentation: 
https://www.minimax.io/platform/document/Model%3Fkey=684261f14c5738213294faa7?key=66d1439376e52fcee2853049&document=video_generation - Video Models > Minimax Hailuo Video Generation [Video task status query](https://docs.chainhub.tech/video-task-status-query-27737117e0.md): - Video Models > Tencent AIGC Video Generation [Get request result](https://docs.chainhub.tech/get-request-result-27737118e0.md): - Video Models > Tencent AIGC Video Generation [Create task](https://docs.chainhub.tech/create-task-27737119e0.md): Official documentation: https://cloud.tencent.com/document/product/266/126240 - Video Models > Doubao Video Generation [seedance-1-5-pro](https://docs.chainhub.tech/seedance-1-5-pro-27737120e0.md): - Video Models > Doubao Video Generation [Query video generation task list - search multiple task IDs](https://docs.chainhub.tech/query-video-generation-task-list-search-multiple-task-ids-27737121e0.md): - Video Models > Doubao Video Generation [Query a single task](https://docs.chainhub.tech/query-a-single-task-27737122e0.md): - Video Models > Doubao Video Generation [Wensheng Video Example](https://docs.chainhub.tech/wensheng-video-example-27758282e0.md): Official documentation:https://www.volcengine.com/docs/82379/1520757 - Video Models > Doubao Video Generation [Image-based video - first frame](https://docs.chainhub.tech/image-based-video-first-frame-27758285e0.md): Official documentation: https://www.volcengine.com/docs/82379/1520757 - Video Models > Doubao Video Generation [seedance-lite-first and last frames](https://docs.chainhub.tech/seedance-lite-first-and-last-frames-27758286e0.md): - Video Models > Doubao Video Generation [Image-based video - base64 encoded](https://docs.chainhub.tech/image-based-video-base64-encoded-27758291e0.md): Official documentation: https://www.volcengine.com/docs/82379/1520757 - Video Models > Doubao Video Generation [Seedance-Lite Reference Image](https://docs.chainhub.tech/seedance-lite-reference-image-27758295e0.md): 
Official documentation: https://www.volcengine.com/docs/82379/1520757' - Video Models > Doubao Video Generation [Query video generation task list - default](https://docs.chainhub.tech/query-video-generation-task-list-default-27758299e0.md): - Video Models > Doubao Video Generation [Query video generation task list - search multiple task ID](https://docs.chainhub.tech/query-video-generation-task-list-search-multiple-task-id-27758310e0.md): - Video Models > Doubao Video Generation [seedance-1-5-pro-first and last frames ](https://docs.chainhub.tech/seedance-1-5-pro-first-and-last-frames-27758329e0.md): - Video Models > Wan Video Generation [Generate video](https://docs.chainhub.tech/generate-video-27737123e0.md): - Video Models > Wan Video Generation [Video query](https://docs.chainhub.tech/video-query-27737124e0.md): - Music Suno > Task Submission [Generate song (concatenate song)](https://docs.chainhub.tech/generate-song-concatenate-song-27737066e0.md): - Music Suno > Task Submission [Generate lyrics](https://docs.chainhub.tech/generate-lyrics-27737067e0.md): - Music Suno > Task Submission [Concatenate songs](https://docs.chainhub.tech/concatenate-songs-27737068e0.md): - Music Suno > Task Submission [Report upload completion](https://docs.chainhub.tech/report-upload-completion-27737069e0.md): Step 1: Request upload authorization │ - Music Suno > Task Submission [Query upload processing status](https://docs.chainhub.tech/query-upload-processing-status-27737070e0.md): - Music Suno > Task Submission [Initialize audio clip](https://docs.chainhub.tech/initialize-audio-clip-27737071e0.md): - Music Suno > Task Submission [Request upload authorization](https://docs.chainhub.tech/request-upload-authorization-27737072e0.md): - Music Suno > Task Submission [s3 upload example](https://docs.chainhub.tech/s3-upload-example-27737073e0.md): - Music Suno > Task Submission [Scenario 3: Pure Music - Custom](https://docs.chainhub.tech/scenario-3-pure-music-custom-27737074e0.md): - 
- Music Suno > Task Submission [Song splicing](https://docs.chainhub.tech/song-splicing-27758495e0.md):
- Music Suno > Task Submission [Generate a song (custom mode)](https://docs.chainhub.tech/generate-a-song-custom-mode-27758963e0.md):
- Music Suno > Task Submission [Generate a song (Inspiration Mode)](https://docs.chainhub.tech/generate-a-song-inspiration-mode-27758966e0.md):
- Music Suno > Task Submission [Generate a song (continuation mode)](https://docs.chainhub.tech/generate-a-song-continuation-mode-27758969e0.md):
- Music Suno > Task Submission [Generate songs (singer style)](https://docs.chainhub.tech/generate-songs-singer-style-27758990e0.md):
- Music Suno > Task Submission [Generate a song (upload a song for secondary creation)](https://docs.chainhub.tech/generate-a-song-upload-a-song-for-secondary-creation-27758996e0.md):
- Music Suno > Task Submission [Generate a song (compose a song)](https://docs.chainhub.tech/generate-a-song-compose-a-song-27759057e0.md):
- Music Suno > Task Submission [Report uploaded](https://docs.chainhub.tech/report-uploaded-27759062e0.md):
- Music Suno > Task Submission [Initialize audio file](https://docs.chainhub.tech/initialize-audio-file-27759065e0.md):
- Music Suno > Task Submission [Scene 1: Inspiration Mode](https://docs.chainhub.tech/scene-1-inspiration-mode-27759123e0.md):
- Music Suno > Task Submission [Scenario 2: Custom lyrics and song title](https://docs.chainhub.tech/scenario-2-custom-lyrics-and-song-title-27759157e0.md):
- Music Suno > Query Interface [Batch fetch tasks](https://docs.chainhub.tech/batch-fetch-tasks-27737075e0.md):
- Music Suno > Query Interface [Query single task](https://docs.chainhub.tech/query-single-task-27737076e0.md):
- Music Suno > Query Interface [Get wav](https://docs.chainhub.tech/get-wav-27737077e0.md):
- Music Suno > Query Interface [Timing: lyrics, audio timeline](https://docs.chainhub.tech/timing-lyrics-audio-timeline-27737078e0.md):
- Music Suno > Query Interface [Feed details retrieval](https://docs.chainhub.tech/feed-details-retrieval-27737079e0.md):
- Kling Platform > Omni-Image [Query Task (Single)](https://docs.chainhub.tech/query-task-single-27737020e0.md):
- Kling Platform > Omni-Image [Omni-Image](https://docs.chainhub.tech/omni-image-27737021e0.md):
- Kling Platform > Omni-Video [Omni-Video](https://docs.chainhub.tech/omni-video-27737022e0.md):
- Kling Platform > Custom Elements [Custom Elements](https://docs.chainhub.tech/custom-elements-27737023e0.md):
- Kling Platform > Motion Control [Motion Control](https://docs.chainhub.tech/motion-control-27737024e0.md):
- Kling Platform > Motion Control [Query Task (Single)](https://docs.chainhub.tech/query-task-single-27737025e0.md):
- Kling Platform > Image Generation [Image Generation](https://docs.chainhub.tech/image-generation-27737026e0.md):
- Kling Platform > Image Generation [Query Task (Single)](https://docs.chainhub.tech/query-task-single-27737027e0.md):
- Kling Platform > Image Recognition [Image Recognition](https://docs.chainhub.tech/image-recognition-27737028e0.md):
- Kling Platform > Image to Video [Image to Video](https://docs.chainhub.tech/image-to-video-27737029e0.md):
- Kling Platform > Image to Video [Query task (single)](https://docs.chainhub.tech/query-task-single-27737030e0.md):
- Kling Platform > Multi-image reference generation [Multi-image reference generation](https://docs.chainhub.tech/multi-image-reference-generation-27737031e0.md):
- Kling Platform > Multi-image reference generation [Query task (single)](https://docs.chainhub.tech/query-task-single-27737032e0.md):
- Kling Platform > Multi-image reference video generation [Multi-image reference video generation](https://docs.chainhub.tech/multi-image-reference-video-generation-27737033e0.md):
- Kling Platform > Multi-image reference video generation [Query task (single)](https://docs.chainhub.tech/query-task-single-27737034e0.md):
- Kling Platform > Multi-modal video editing [Initialize video to be edited](https://docs.chainhub.tech/initialize-video-to-be-edited-27737035e0.md):
- Kling Platform > Multi-modal video editing [Add video selection](https://docs.chainhub.tech/add-video-selection-27737036e0.md):
- Kling Platform > Multi-modal video editing [Delete video selection](https://docs.chainhub.tech/delete-video-selection-27737037e0.md):
- Kling Platform > Multi-modal video editing [Preview selected area video](https://docs.chainhub.tech/preview-selected-area-video-27737038e0.md):
- Kling Platform > Multi-modal video editing [Multi-modal video](https://docs.chainhub.tech/multi-modal-video-27737039e0.md):
- Kling Platform > Multi-modal video editing [Query task (single)](https://docs.chainhub.tech/query-task-single-27737040e0.md):
- Kling Platform > Lip Sync [Face identification](https://docs.chainhub.tech/face-identification-27737041e0.md):
- Kling Platform > Lip Sync [Lip sync](https://docs.chainhub.tech/lip-sync-27737042e0.md):
- Kling Platform > lip-syncing [Query Task (Single)](https://docs.chainhub.tech/query-task-single-27737043e0.md):
- Kling Platform > lip-syncing [Facial recognition](https://docs.chainhub.tech/facial-recognition-27771544e0.md):
- Kling Platform > lip-syncing [Lip-syncing](https://docs.chainhub.tech/lip-syncing-27771967e0.md):
- Kling Platform > image expansion [Image Expansion](https://docs.chainhub.tech/image-expansion-27737044e0.md):
- Kling Platform > image expansion [Query Task (Single)](https://docs.chainhub.tech/query-task-single-27737045e0.md):
- Kling Platform > digital human [Image to Video](https://docs.chainhub.tech/image-to-video-27737046e0.md):
- Kling Platform > digital human [Query Task (Single)](https://docs.chainhub.tech/query-task-single-27737047e0.md):
- Kling Platform > text to video [Text to Video](https://docs.chainhub.tech/text-to-video-27737048e0.md):
- Kling Platform > text to video [Query Task (Single)](https://docs.chainhub.tech/query-task-single-27737049e0.md):
- Kling Platform > text to audio [Text-to-Audio](https://docs.chainhub.tech/text-to-audio-27737050e0.md):
- Kling Platform > text to audio [Query Task (Single)](https://docs.chainhub.tech/query-task-single-27737051e0.md):
- Kling Platform > custom voice [Custom Voice](https://docs.chainhub.tech/custom-voice-27737052e0.md):
- Kling Platform > custom voice [Query Custom Voice (Single)](https://docs.chainhub.tech/query-custom-voice-single-27737053e0.md):
- Kling Platform > custom voice [Query Official Voices](https://docs.chainhub.tech/query-official-voices-27737054e0.md):
- Kling Platform > custom voice [Delete Custom Voice](https://docs.chainhub.tech/delete-custom-voice-27737055e0.md):
- Kling Platform > virtual try-on [Virtual Try-On](https://docs.chainhub.tech/virtual-try-on-27737056e0.md):
- Kling Platform > virtual try-on [Query Task (Single)](https://docs.chainhub.tech/query-task-single-27737057e0.md):
- Kling Platform > video extension [Video Extension](https://docs.chainhub.tech/video-extension-27737058e0.md):
- Kling Platform > video extension [Query Task (Single)](https://docs.chainhub.tech/query-task-single-27737059e0.md):
- Kling Platform > video effects [Video Effects](https://docs.chainhub.tech/video-effects-27737060e0.md):
- Kling Platform > video effects [Query Task (Single)](https://docs.chainhub.tech/query-task-single-27737061e0.md):
- Kling Platform > video to audio [Video to Audio](https://docs.chainhub.tech/video-to-audio-27737062e0.md):
- Kling Platform > video to audio [Query Task (Single)](https://docs.chainhub.tech/query-task-single-27737063e0.md):
- Kling Platform > text-to-speech [Text-to-Speech](https://docs.chainhub.tech/text-to-speech-27737064e0.md):
- Fal-ai aggregation platform > falai-veo3 video generation [/fal-ai/veo3](https://docs.chainhub.tech/fal-aiveo3-27736985e0.md): Official documentation: https://fal.ai/models/fal-ai/veo3
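Many of the generation endpoints in this index come in pairs: one call submits a task and returns an identifier, and the matching "Query Task (Single)" (or request-result) endpoint is polled until the task finishes. A minimal sketch of that polling loop, using a stub in place of the real query call; the `status` field and its `succeeded`/`failed` values are illustrative assumptions, not ChainHub's actual response schema:

```python
import time

def poll_task(fetch_status, interval=2.0, timeout=60.0):
    """Poll a task-status function until it reports a terminal state.

    fetch_status: callable returning a dict such as {"status": ...};
    the field names are assumptions for illustration only.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        task = fetch_status()
        if task.get("status") in ("succeeded", "failed"):
            return task
        time.sleep(interval)
    raise TimeoutError("task did not finish within the timeout")

# Stub standing in for a real "Query Task (Single)" call: two pending
# responses, then a terminal one.
_states = iter([
    {"status": "processing"},
    {"status": "processing"},
    {"status": "succeeded", "video_url": "https://example.com/out.mp4"},
])
result = poll_task(lambda: next(_states), interval=0.01)
```

In real use, `fetch_status` would wrap an authenticated GET against the query endpoint for the task id returned at submission, and the interval should respect any rate limits the platform documents.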
- Fal-ai aggregation platform > falai-veo3 video generation [/fal-ai/veo3/fast/image-to-video](https://docs.chainhub.tech/fal-aiveo3fastimage-to-video-27736986e0.md): Official documentation: https://fal.ai/models/fal-ai/veo3/fast/image-to-video
- Fal-ai aggregation platform > falai-veo3 video generation [/fal-ai/veo3/fast](https://docs.chainhub.tech/fal-aiveo3fast-27736987e0.md): Official documentation: https://fal.ai/models/fal-ai/veo3/fast
- Fal-ai aggregation platform > falai-veo3 video generation [/fal-ai/veo3/requests/{request_id}](https://docs.chainhub.tech/fal-aiveo3requestsrequest-id-27736988e0.md):
- Fal-ai aggregation platform > falai-veo3 video generation [/fal-ai/veo3/image-to-video](https://docs.chainhub.tech/fal-aiveo3image-to-video-27736989e0.md): Official documentation: https://fal.ai/models/fal-ai/veo3/image-to-video
- Fal-ai aggregation platform [Get the request result](https://docs.chainhub.tech/get-the-request-result-27736964e0.md):
- Fal-ai aggregation platform [/fal-ai/flux-1/dev](https://docs.chainhub.tech/fal-aiflux-1dev-27736965e0.md): Official documentation: https://fal.ai/models/fal-ai/flux-1/dev
- Fal-ai aggregation platform [/fal-ai/flux-1/dev/image-to-image](https://docs.chainhub.tech/fal-aiflux-1devimage-to-image-27736966e0.md): Official documentation: https://fal.ai/models/fal-ai/flux-1/dev/image-to-image
- Fal-ai aggregation platform [/fal-ai/flux-1/dev/redux](https://docs.chainhub.tech/fal-aiflux-1devredux-27736967e0.md): Official documentation: https://fal.ai/models/fal-ai/flux-1/dev/redux
- Fal-ai aggregation platform [/fal-ai/flux-1/schnell/redux](https://docs.chainhub.tech/fal-aiflux-1schnellredux-27736968e0.md): Official documentation: https://fal.ai/models/fal-ai/flux-1/schnell/redux
- Fal-ai aggregation platform [/fal-ai/flux-pro/kontext](https://docs.chainhub.tech/fal-aiflux-prokontext-27736969e0.md): Official documentation: https://fal.ai/models/fal-ai/flux-pro/kontext
- Fal-ai aggregation platform [/fal-ai/flux-pro/kontext/text-to-image](https://docs.chainhub.tech/fal-aiflux-prokontexttext-to-image-27736970e0.md): Official documentation: https://fal.ai/models/fal-ai/flux-pro/kontext/text-to-image
- Fal-ai aggregation platform [/fal-ai/flux-pro/kontext/max](https://docs.chainhub.tech/fal-aiflux-prokontextmax-27736971e0.md): Official documentation: https://fal.ai/models/fal-ai/flux-pro/kontext/max
- Fal-ai aggregation platform [/fal-ai/flux-pro/kontext/max/multi](https://docs.chainhub.tech/fal-aiflux-prokontextmaxmulti-27736972e0.md): Official documentation: https://fal.ai/models/fal-ai/flux-pro/kontext/max/multi
- Fal-ai aggregation platform [/fal-ai/wan/v2.2-a14b/image-to-image](https://docs.chainhub.tech/fal-aiwanv2-2-a14bimage-to-image-27736973e0.md): Official documentation: https://fal.ai/models/fal-ai/wan/v2.2-a14b/image-to-image
- Fal-ai aggregation platform [/fal-ai/bytedance/seedream/v4/text-to-image](https://docs.chainhub.tech/fal-aibytedanceseedreamv4text-to-image-27736974e0.md): Official documentation: https://fal.ai/models/fal-ai/bytedance/seedream/v4/text-to-image
- Fal-ai aggregation platform [/fal-ai/bytedance/seedream/v4/edit](https://docs.chainhub.tech/fal-aibytedanceseedreamv4edit-27736975e0.md): Official documentation: https://fal.ai/models/fal-ai/bytedance/seedream/v4/edit
- Fal-ai aggregation platform [/fal-ai/vidu/reference-to-image](https://docs.chainhub.tech/fal-aividureference-to-image-27736976e0.md): Official documentation: https://fal.ai/models/fal-ai/vidu/reference-to-image
- Fal-ai aggregation platform [/fal-ai/imagen4/preview](https://docs.chainhub.tech/fal-aiimagen4preview-27736977e0.md): Official documentation: https://fal.ai/models/fal-ai/imagen4/preview
- Fal-ai aggregation platform [/fal-ai/qwen-image-edit-lora](https://docs.chainhub.tech/fal-aiqwen-image-edit-lora-27736978e0.md):
- Fal-ai aggregation platform [/fal-ai/qwen-image-edit-plus](https://docs.chainhub.tech/fal-aiqwen-image-edit-plus-27736979e0.md): Official documentation: https://fal.ai/models/fal-ai/qwen-image-edit-plus
- Fal-ai aggregation platform [/fal-ai/kling-video/v2.5-turbo/pro/text-to-video](https://docs.chainhub.tech/fal-aikling-videov2-5-turboprotext-to-video-27736980e0.md):
- Fal-ai aggregation platform [/fal-ai/kling-video/v2.5-turbo/pro/image-to-video](https://docs.chainhub.tech/fal-aikling-videov2-5-turboproimage-to-video-27736981e0.md): Official documentation: https://fal.ai/models/fal-ai/kling-video/v2.5-turbo/pro/image-to-video
- Fal-ai aggregation platform [/fal-ai/flux-lora](https://docs.chainhub.tech/fal-aiflux-lora-27736982e0.md): Official documentation: https://fal.ai/models/fal-ai/flux-lora
- Fal-ai aggregation platform [/fal-ai/flux-lora/image-to-image](https://docs.chainhub.tech/fal-aiflux-loraimage-to-image-27736983e0.md): Official documentation: https://fal.ai/models/fal-ai/flux-lora/image-to-image
- Fal-ai aggregation platform [/fal-ai/flux-lora/inpainting](https://docs.chainhub.tech/fal-aiflux-lorainpainting-27736984e0.md): Official documentation: https://fal.ai/models/fal-ai/flux-lora/inpainting
- Replicate Aggregation Platform [Create task black-forest-labs/flux-kontext-dev](https://docs.chainhub.tech/create-task-black-forest-labsflux-kontext-dev-27736991e0.md): Official documentation: https://replicate.com/black-forest-labs/flux-kontext-dev
- Replicate Aggregation Platform [Query task](https://docs.chainhub.tech/query-task-27736992e0.md): Official documentation: https://replicate.com/black-forest-labs/flux-kontext-max
- Replicate Aggregation Platform [Create task lucataco/remove-bg](https://docs.chainhub.tech/create-task-lucatacoremove-bg-27736993e0.md): Official documentation: https://replicate.com/lucataco/remove-bg
- Replicate Aggregation Platform [Create task ideogram-ai/ideogram-v2-turbo](https://docs.chainhub.tech/create-task-ideogram-aiideogram-v2-turbo-27736994e0.md): Official documentation: https://replicate.com/ideogram-ai/ideogram-v2-turbo
- Replicate Aggregation Platform [Create task minimax/video-01-live](https://docs.chainhub.tech/create-task-minimaxvideo-01-live-27736995e0.md): Official documentation: https://replicate.com/minimax/video-01-live
- Replicate Aggregation Platform [Create task minimax/video-01](https://docs.chainhub.tech/create-task-minimaxvideo-01-27736996e0.md): Official documentation: https://replicate.com/minimax/video-01
- Replicate Aggregation Platform [Create task recraft-ai/recraft-v3](https://docs.chainhub.tech/create-task-recraft-airecraft-v3-27736997e0.md): Official documentation: https://replicate.com/recraft-ai/recraft-v3
- Replicate Aggregation Platform [Create task recraft-ai/recraft-v3-svg](https://docs.chainhub.tech/create-task-recraft-airecraft-v3-svg-27736998e0.md): Official documentation: https://replicate.com/recraft-ai/recraft-v3-svg
- Replicate Aggregation Platform [Create task black-forest-labs/flux-1.1-pro-ultra](https://docs.chainhub.tech/create-task-black-forest-labsflux-1-1-pro-ultra-27736999e0.md): Official documentation: https://replicate.com/black-forest-labs/flux-1.1-pro-ultra
- Replicate Aggregation Platform [Create task black-forest-labs/flux-kontext-pro](https://docs.chainhub.tech/create-task-black-forest-labsflux-kontext-pro-27737000e0.md): Official documentation: https://replicate.com/black-forest-labs/flux-kontext-pro
- Replicate Aggregation Platform [Create task black-forest-labs/flux-kontext-max](https://docs.chainhub.tech/create-task-black-forest-labsflux-kontext-max-27737001e0.md): Official documentation: https://replicate.com/black-forest-labs/flux-kontext-max
- Replicate Aggregation Platform [Create task flux-kontext-apps/multi-image-kontext-max](https://docs.chainhub.tech/create-task-flux-kontext-appsmulti-image-kontext-max-27737002e0.md): Official documentation: https://replicate.com/flux-kontext-apps/multi-image-kontext-max
- Replicate Aggregation Platform [Create task flux-kontext-apps/multi-image-kontext-pro](https://docs.chainhub.tech/create-task-flux-kontext-appsmulti-image-kontext-pro-27737003e0.md): Official documentation: https://replicate.com/flux-kontext-apps/multi-image-kontext-pro
- Replicate Aggregation Platform [Create task riffusion/riffusion](https://docs.chainhub.tech/create-task-riffusionriffusion-27737004e0.md): Official documentation: https://replicate.com/riffusion/riffusion
- Replicate Aggregation Platform [Create task black-forest-labs/flux-fill-dev](https://docs.chainhub.tech/create-task-black-forest-labsflux-fill-dev-27737005e0.md): Official documentation: https://replicate.com/black-forest-labs/flux-fill-dev
- Replicate Aggregation Platform [Create task black-forest-labs/flux-fill-pro](https://docs.chainhub.tech/create-task-black-forest-labsflux-fill-pro-27737006e0.md): Official documentation: https://replicate.com/black-forest-labs/flux-fill-pro
- Replicate Aggregation Platform [Create task google/imagen-4-fast](https://docs.chainhub.tech/create-task-googleimagen-4-fast-27737007e0.md): Official documentation: https://replicate.com/google/imagen-4-fast
- Replicate Aggregation Platform [Create task google/imagen-4-ultra](https://docs.chainhub.tech/create-task-googleimagen-4-ultra-27737008e0.md): Official documentation: https://replicate.com/google/imagen-4-ultra
- Replicate Aggregation Platform [Create task google/imagen-4](https://docs.chainhub.tech/create-task-googleimagen-4-27737009e0.md): Official documentation: https://replicate.com/google/imagen-4
- Replicate Aggregation Platform [Create task prunaai/vace-14b](https://docs.chainhub.tech/create-task-prunaaivace-14b-27737010e0.md): Official documentation: https://replicate.com/prunaai/vace-14b
- Replicate Aggregation Platform [Create task bytedance/seedream-4](https://docs.chainhub.tech/create-task-bytedanceseedream-4-27737011e0.md):
- Rerank > Rerank Model [Rerank](https://docs.chainhub.tech/rerank-27737012e0.md): Given a query and a list of candidate documents, the model returns a relevance score for each document so the documents can be re-ordered by relevance to the query.
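The Rerank contract can be illustrated without a network call: the model assigns each candidate document a relevance score for the query, and the caller re-orders by that score. A toy sketch, using word overlap as a stand-in for the model's server-side scoring; the function name and result fields are illustrative, not ChainHub's actual request/response schema:

```python
def rerank(query, documents, score_fn):
    """Return documents ordered by descending relevance score.

    score_fn stands in for the model call; a real Rerank endpoint
    computes these scores server-side.
    """
    scored = [(score_fn(query, doc), i, doc) for i, doc in enumerate(documents)]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [{"index": i, "document": doc, "relevance_score": s}
            for s, i, doc in scored]

# Toy score: fraction of query words appearing in the document.
def overlap(query, doc):
    q = set(query.lower().split())
    return len(q & set(doc.lower().split())) / max(len(q), 1)

docs = ["ChainHub pricing overview",
        "How to generate a song with Suno",
        "Suno song generation modes"]
ranked = rerank("generate a song", docs, overlap)
```

Returning the original `index` alongside each document mirrors the common rerank-API convention of letting callers map scores back to their input list.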