
Comfyanonymous examples. This repo contains examples of what is achievable with ComfyUI, the most powerful and modular stable diffusion GUI, API and backend with a graph/nodes interface. You can load these images in ComfyUI to get the full workflow that was used to create them.

Here is an example of how to use Textual Inversion/Embeddings. If using GIMP, make sure you save the values of the transparent pixels for best results.

SDXL Turbo is an SDXL model that can generate consistent images in a single step. Note that this example uses the DiffControlNetLoader node because the controlnet used is a diff control net. The text box GLIGEN model lets you specify the location and size of multiple objects in the image.

Area Composition: 1 background image and 3 subjects. This image contains 4 different areas: night, evening, day, morning. The first example is a basic merge between two different checkpoints. The denoise controls the amount of noise added to the image.

One user question about the API script: "Using the second example script with the API I can only click once until that image finishes generating. I tried looking at the examples to see if I could spot a pattern in use cases; I noticed the 'simple' sampler type was used in the img2img examples, and 'normal' was used for the initial generation, but I'm not sure if this is the correct way to interpret it."

When installing, ignore the pip errors about protobuf. For SDXL, resolutions such as 896x1152 or 1536x640 work well.

unCLIP models are versions of SD models that are specially tuned to receive image concepts as input in addition to your text prompt. It basically lets you use images in your prompt.
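Resolutions like 896x1152 and 1536x640 keep roughly the same total pixel count as 1024x1024 while varying the aspect ratio. A small sketch of how such resolutions can be derived; the helper name and the round-to-64 rule are illustrative assumptions, not part of ComfyUI:

```python
import math

MEGAPIXEL = 1024 * 1024  # target pixel count for SDXL-class models

def pick_resolution(aspect_ratio: float, multiple: int = 64) -> tuple[int, int]:
    """Pick a (width, height) near one megapixel for the given aspect ratio,
    rounding both sides to a multiple of 64 (a common latent-size constraint)."""
    height = round(math.sqrt(MEGAPIXEL / aspect_ratio) / multiple) * multiple
    width = round(aspect_ratio * height / multiple) * multiple
    return width, height

print(pick_resolution(1.0))         # (1024, 1024)
print(pick_resolution(896 / 1152))  # (896, 1152)
print(pick_resolution(1536 / 640))  # (1536, 640)
```

The same sketch reproduces both resolutions quoted above, which is why they are "good" choices: they stay near the pixel budget the model was trained at.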
Here is the workflow for the stability SDXL edit model; the checkpoint can be downloaded from: here. ComfyUI fully supports SD1.x, SD2.x, SDXL, Stable Video Diffusion, Stable Cascade, SD3 and Stable Audio.

Stable Cascade is a 3 stage process: first a low resolution latent image is generated with the Stage C diffusion model, and this latent is then upscaled using the Stage B diffusion model. Download the example input image and place it in your input folder.

Dec 5, 2023 · This UI will let you design and execute advanced stable diffusion pipelines using a graph/nodes/flowchart based interface.

Feb 13, 2024 · It's inevitably gonna be supported, just be patient.

Area composition with Anything-V3 + second pass with AbyssOrangeMix2_hard.

These are examples demonstrating how to use Loras. Download the model in safetensors format and put it in your ComfyUI/models/checkpoints directory. The SDXL base checkpoint can be used like any regular checkpoint in ComfyUI. StabilityAI have since hired Comfyanonymous to help them work on internal tools.

Flux.1 Pro / Flux.1 Dev / Flux.1 Schnell overview: cutting-edge performance in image generation with top-notch prompt following, visual quality, image detail, and output diversity.

Hunyuan DiT is a diffusion model that understands both English and Chinese.

Hires fix is just creating an image at a lower resolution, upscaling it and then sending it through img2img. As of writing this there are two image to video checkpoints: here are the official checkpoints for the one tuned to generate 14 frame videos and the one for 25 frame videos. If you run out of memory, also try increasing your PC's swap file size.
For some workflow examples and to see what ComfyUI can do, you can check out the ComfyUI Examples page.

ControlNet Inpaint Example: XLab and InstantX + Shakker Labs have released ControlNets for Flux. Download the aura_flow checkpoint in safetensors format and put it in your ComfyUI/models/checkpoints directory. The SD3 checkpoints that contain text encoders are sd3_medium_incl_clips.safetensors (5.5GB) and sd3_medium_incl_clips_t5xxlfp8.safetensors (10.1GB); both can be used like any regular checkpoint in ComfyUI.

Flux is a family of diffusion models by Black Forest Labs. Stable Zero123 is a diffusion model that, given an image with an object and a simple background, can generate images of that object from different angles. LCM models are special models that are meant to be sampled in very few steps.

Aug 9, 2023 · You can run ComfyUI in low-VRAM mode like this: python main.py --lowvram --preview-method auto --use-split-cross-attention

To give you an idea of how powerful it is: StabilityAI, the creators of Stable Diffusion, use ComfyUI to test Stable Diffusion internally. Stable Cascade is a major evolution which beats the crap out of SD1.5 and SDXL.

For the Hunyuan DiT examples, download hunyuan_dit_1.2.safetensors.

Img2Img works by loading an image like this example image, converting it to latent space with the VAE and then sampling on it with a denoise lower than 1.0. All LoRA flavours: Lycoris, loha, lokr, locon, etc. are used this way.
This is what the workflow looks like in ComfyUI. You can find the InstantX Canny model file here (rename it to instantx_flux_canny.safetensors for the example below), the Depth controlnet here and the Union controlnet here.

Note that in ComfyUI txt2img and img2img are the same node. Put the GLIGEN model files in the ComfyUI/models/gligen directory. See extra_model_paths.yaml.example at the root of the ComfyUI repo for configuring extra model search paths.

This example contains 4 images composited together.

Jul 20, 2023 · But I am trying to figure out how to send the requests in Python but allow them to queue.

In ComfyUI the saved checkpoints contain the full workflow used to generate them, so they can be loaded in the UI just like images to get the full workflow that was used to create them. One user report: "I load the appropriate stage C and stage B files (not sure if you are supposed to set up stage A yourself, but I did it both with and without) in the checkpoint loader."

The only important thing is that for optimal performance the resolution should be set to 1024x1024 or other resolutions with the same amount of pixels but a different aspect ratio.

Edit models, also called InstructPix2Pix models, are models that can be used to edit images using a text prompt.
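On the question of sending requests from Python so they queue like the web UI does: the web UI POSTs each workflow to the server's /prompt endpoint and returns immediately, so a script can do the same in a loop. A minimal sketch; the default 127.0.0.1:8188 address is an assumption, and `load_workflow_with_seed` is a hypothetical helper:

```python
import json
import urllib.request

SERVER = "http://127.0.0.1:8188"  # default ComfyUI address (assumption)

def build_payload(workflow: dict, client_id: str = "example") -> bytes:
    """Encode a workflow (API-format JSON graph) the way the web UI does."""
    return json.dumps({"prompt": workflow, "client_id": client_id}).encode("utf-8")

def queue_prompt(workflow: dict) -> None:
    """POST the workflow to /prompt; the server queues it and returns at once,
    so calling this repeatedly queues many generations without waiting."""
    req = urllib.request.Request(SERVER + "/prompt", data=build_payload(workflow),
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)  # returns as soon as the job is queued

# queue several jobs back to back, e.g.:
# for seed in range(4):
#     wf = load_workflow_with_seed(seed)  # hypothetical helper
#     queue_prompt(wf)
```

The key point is that /prompt only enqueues the job; generation happens asynchronously, which is why the web UI lets you keep clicking while a previous image is still rendering.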
Download it, rename it to lcm_lora_sdxl.safetensors and put it in your ComfyUI/models/loras directory. LCM loras are loras that can be used to convert a regular model to a LCM model.

Upscale Model Examples: put upscale models in the models/upscale_models folder, then use the UpscaleModelLoader node to load them and the ImageUpscaleWithModel node to use them. The total steps is 16.

Aug 2, 2024 · Good, I used CFG but it made the image blurry; I used the regular KSampler node.

You can then load up the following image in ComfyUI to get the workflow. These are examples demonstrating the ConditioningSetArea node.

This upscaled latent is then upscaled again and converted to pixel space by the Stage A VAE.

Aug 3, 2024 · comfyanonymous commented: That should be fixed now; try updating (update/update_comfyui.bat on the standalone).

These are examples demonstrating how you can achieve the "Hires Fix" feature. The workflow is the same as the one above but with a different prompt. What it's great for: once you've achieved the artwork you're looking for, it's time to delve deeper and use inpainting.

Feb 23, 2024 · On the official page provided here, I tried the text to image example workflow.

Here is a link to download pruned versions of the supported GLIGEN model files. The DiffControlNetLoader node can also be used to load regular controlnet models.

Nodes/graph/flowchart interface to experiment and create complex Stable Diffusion workflows without needing to code anything, backed by an asynchronous queue system.

Regular Full Version: files to download for the regular version. Video Examples: Image to Video. Git clone the repo and install the requirements. You can load these images in ComfyUI to get the full workflow.
To use it properly you should write your prompt normally, then use the GLIGEN Textbox Apply nodes to specify where you want certain objects/concepts in your prompts to be in the image.

Nov 30, 2023 · I have no idea what's wrong with it.

Put the checkpoints in the ComfyUI/models/checkpoints folder. The proper way to use SDXL Turbo is with the new SDTurboScheduler node, but it might also work with the regular schedulers. The following is an older example for an earlier aura_flow checkpoint.

Like if I do it from the ComfyUI web UI, I can keep queuing as many images to generate as I want.

Jun 18, 2024 · As some of you already know, I have resigned from Stability AI and am starting a new chapter.

All the images in this repo contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image.

The difference between both these checkpoints is that the first contains only 2 text encoders, CLIP-L and CLIP-G, while the other one also bundles the T5-XXL encoder in fp8.

Audio Examples: Stable Audio Open 1.0. Download the T5 model from this page, save it as t5_base.safetensors and put it in your ComfyUI/models/clip/ directory.

T2I-Adapters are much, much more efficient than ControlNets, so I highly recommend them.

The reason you can tune both in ComfyUI is because the CLIP and MODEL/UNET parts of the LoRA will most likely have learned different concepts, so tweaking them separately can give better results.

In this following example the positive text prompt is zeroed out in order for the final output to follow the input image more closely.
We will continue to develop and improve ComfyUI with a lot more resources. I am partnering with mcmonkey4eva, Dr.Lt.Data, pythongossssss, robinken, and yoland68 to start Comfy Org.

Here is an example: you can load this image in ComfyUI to get the workflow.

AuraFlow is one of the only true open source models, with both the code and the weights being under a FOSS license.

Diff controlnets need the weights of a model to be loaded correctly. Here is the input image I used for this workflow.

T2I-Adapter vs ControlNets: here's a simple example of how to use controlnets; this example uses the scribble controlnet and the AnythingV3 model. For easy to use single file versions see below: FP8 Checkpoint Version.

Written by comfyanonymous and other contributors.

Here is an example of how to use upscale models like ESRGAN.

In the above example the first frame will be cfg 1.0 (the min_cfg in the node), the middle frame 1.75 and the last frame 2.5 (the cfg set in the sampler). This way frames further away from the init frame get a gradually higher cfg.

In this example we will be using this image. unCLIP Model Examples.

One user report: "Just installed ComfyUI and put the model in the P:\AI_Tools\ComfyUI_windows_portable\ComfyUI\models\checkpoints\ folder."

You can use more steps to increase the quality. You can then load up the following image in ComfyUI to get the workflow: Inpaint Examples.

Legally the nodes can be shipped under any license because they are packaged separately from the main software, and nothing stops someone from writing their own non-GPL ComfyUI from scratch that is license compatible with those nodes.

Capture UI events: setup() is a good place to do this, since the page has fully loaded. This works just like you'd expect: find the UI element in the DOM and add an eventListener.
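The gradually increasing cfg across video frames is a linear interpolation between min_cfg (first frame) and the sampler cfg (last frame); with min_cfg 1.0 and a sampler cfg of 2.5 the midpoint lands at 1.75. A plain-Python sketch of that schedule (illustrative, not the node's actual code):

```python
def frame_cfg_schedule(min_cfg: float, cfg: float, frames: int) -> list[float]:
    """Linearly interpolate cfg from min_cfg (first frame) to cfg (last frame)."""
    if frames == 1:
        return [cfg]
    step = (cfg - min_cfg) / (frames - 1)
    return [min_cfg + i * step for i in range(frames)]

# 3 frames from 1.0 to 2.5: first 1.0, middle 1.75, last 2.5
print(frame_cfg_schedule(1.0, 2.5, 3))  # [1.0, 1.75, 2.5]
```

With 14 or 25 frames the same rule applies; each frame's cfg is just its position along the line between the two values.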
This image has had part of it erased to alpha with GIMP; the alpha channel is what we will be using as a mask for the inpainting.

These are examples demonstrating how to do img2img. All LoRA flavours: Lycoris, loha, lokr, locon, etc. are used this way.

Regular KSampler is incompatible with FLUX. Instead, you can use the Impact/Inspire Pack's KSampler with Negative Cond Placeholder. The whole point of ComfyUI is AI generation.

Github Repo: https://github.com/comfyanonymous/ComfyUI. Contribute to comfyanonymous/ComfyUI_examples development on GitHub.

In most UIs adjusting the LoRA strength is only one number; setting the lora strength to 0.8, for example, is the same as setting both strength_model and strength_clip to 0.8.

Images are encoded using the CLIPVision these models come with, and then the concepts extracted by it are passed to the main model when sampling.

Dec 19, 2023 · ComfyUI was created in January 2023 by Comfyanonymous, who created the tool to learn how Stable Diffusion works.

Hypernetworks are patches applied on the main MODEL, so to use them put them in the models/hypernetworks directory and use the Hypernetwork Loader node like this. You can apply multiple hypernetworks by chaining multiple Hypernetwork Loader nodes in sequence.

Since general shapes like poses and subjects are denoised in the first sampling steps, this lets us for example position subjects with specific poses anywhere on the image while keeping a great amount of consistency.

Note that you can omit the filename extension, so these two are equivalent: embedding:SDA768.pt and embedding:SDA768. To use an embedding, put the file in the models/embeddings folder then use it in your prompt like I used the SDA768.pt embedding in the previous picture.
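Conceptually the strength number scales how much of the LoRA's low-rank delta is merged into the base weights: W' = W + strength * (up @ down), applied separately to the MODEL and CLIP weights. A toy sketch with plain Python lists; the function and variable names are illustrative, not ComfyUI internals:

```python
def apply_lora(weight, lora_up, lora_down, strength):
    """Merge a LoRA delta into a weight matrix: W' = W + strength * (up @ down).
    Matrices are plain lists of rows; strength 0.0 leaves W untouched."""
    rows, cols, inner = len(lora_up), len(lora_down[0]), len(lora_down)
    merged = [row[:] for row in weight]
    for i in range(rows):
        for j in range(cols):
            delta = sum(lora_up[i][k] * lora_down[k][j] for k in range(inner))
            merged[i][j] += strength * delta
    return merged

W = [[1.0, 0.0], [0.0, 1.0]]
up = [[1.0], [0.0]]   # rank-1 LoRA factors
down = [[0.0, 2.0]]
print(apply_lora(W, up, down, 0.8))  # [[1.0, 1.6], [0.0, 1.0]]
```

A single-strength UI uses the same number for both the MODEL and CLIP merges; exposing strength_model and strength_clip separately just lets the two deltas be scaled independently.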
If you want to use text prompts you can use this example. Note that the strength option can be used to increase the effect each input image has on the final output.