ComfyUI ControlNet preprocessor examples (collected from Reddit)

This is what I have so far (using the custom nodes to reduce the visual clutter). It is used with "depth" models. Done in ComfyUI with the lineart preprocessor, a ControlNet model, and DreamShaper 7.

You just run the preprocessor and then use that image in a "Load Image" node and use that in your generation process. I saw a tutorial a long time ago about the ControlNet preprocessor "reference only".

Mixing ControlNets: at this point, you can use this file as an input to ControlNet using the steps described in "How to Use ControlNet with ComfyUI – Part 1". These two conflict with each other. Maybe it's your settings.

Brief introduction to ControlNet: ControlNet is a condition-controlled generation model based on diffusion models (such as Stable Diffusion), initially proposed by Lvmin Zhang and Maneesh Agrawala.

Can anyone please tell me if this is possible in ComfyUI at all, and where I can find an example workflow or tutorial? However, since a recent ControlNet update, two Inpaint preprocessors have appeared, and I don't really understand how to use them.

Hi all! I recently made the shift to ComfyUI and have been testing a few things. I don't think the generation info in ComfyUI gets saved with the video files. I'm just struggling to get ControlNet to work.

If you click the radio button "all" and then manually select your model from the model popup list, "inverted" will be at the very top of the list of all the preprocessors. The row label shows which of the three types of reference ControlNets was used to generate the image shown in the grid.

My ComfyUI backend is an API that can be used by other apps if they want to do things with Stable Diffusion, so chaiNNer could add support for the ComfyUI backend and nodes if they wanted to (a minimal example of calling that API follows below).

MLSD ControlNet preprocessor. Thanks, I will try this. Sometimes you want to compare how some of them work, and sometimes something new appears. There is now an install.bat you can run to install to the portable build if it is detected. It is good for positioning things, especially positioning things "near" and "far away". But it gave better results than I thought.

Enable ControlNet, set Preprocessor to "None" and Model to "lineart_anime". The workflow: Pose ControlNet. Here is an example of the final image using the OpenPose ControlNet model. While Depth Anything does provide a new ControlNet model that's supposedly better trained for it, the project itself is a depth estimation model. This works fine, as I can use the different preprocessors. Where can they be loaded?

In this example, we will guide you through installing and using ControlNet models in ComfyUI, and complete a sketch-controlled image generation example. It is certainly easier to achieve this than with a prompt alone.

Using multiple ControlNets to emphasize colors: in the WebUI settings, open the ControlNet options and set "Multi ControlNet: Max models amount" to 2 or more. I get a bit better results with xinsir's tile compared to TTPlanet's.
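One comment above mentions using the ComfyUI backend as an API that other apps (such as chaiNNer) could call. As a rough illustration of that idea, here is a minimal sketch that queues a workflow through ComfyUI's HTTP endpoint; it assumes a default local install listening on port 8188 and a workflow exported from the UI with "Save (API Format)", and the file name is a placeholder.

```python
# Minimal sketch: queue a ComfyUI workflow over its HTTP API.
# Assumes a default local ComfyUI server at 127.0.0.1:8188 and a workflow
# exported in API format as workflow_api.json.
import json
import urllib.request

def queue_workflow(path="workflow_api.json", server="http://127.0.0.1:8188"):
    with open(path, "r", encoding="utf-8") as f:
        workflow = json.load(f)  # node-id -> {class_type, inputs} mapping

    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        f"{server}/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())  # includes the id of the queued prompt

if __name__ == "__main__":
    print(queue_workflow())
```

Any frontend (chaiNNer, a Krita plugin, a web app) can reuse the same pattern to drive a ComfyUI instance without going through its UI.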
Try and experiment by also using the tile model without the upscaler. I have great luck generating small (512x640, for instance), then putting the result into img2img with the tile model on and its downsampler set high, and then prompting for more of the sort of detail you want to add while setting the image size incrementally higher.

Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this software to create your AI art. Please keep posted images SFW. A lot of people are just discovering this technology and want to show off what they created. Belittling their efforts will get you banned. And above all, be nice.

Not sure why the OpenPose ControlNet model seems to be slightly less temporally consistent than the DensePose one here.

There are ControlNet preprocessor depth map nodes (MiDaS, Zoe, etc.). Hook one up to VAE decode and preview image nodes and you can see/save the depth map as a PNG or whatever.

I'm trying to implement the reference-only "ControlNet preprocessor". I hope the official one from Stability AI will be more optimised, especially on lower-end hardware.

Segmentation is used to split the image into "chunks" of more or less related elements ("semantic segmentation"). MLSD is good for finding straight lines and edges.

I also automated the split of the diffusion steps between the Base and the Refiner models. All the workflows for Comfy I've found start with a depth map that has already been generated; its creation is not included in the workflow.

I was frustrated by the lack of some ControlNet preprocessors that I wanted to use, so I decided to write my own Python script that adds support for more preprocessors. I tried to collect all the ones I know in one place, but as it turned out, there are quite a lot of them. Only select combinations work moderately alright. (A standalone-preprocessor sketch follows below.)

I do see it in the other two repos though. ComfyUI is hard.

You don't need to downsample the picture; this is only useful if you want to get more detail at the same size. Unfortunately, your examples didn't work.

Upload your desired face image in this ControlNet tab. In this case, I changed the beginning of the prompt to include "standing in flower fields by the ocean, stunning sunset".

Download and install the latest CUDA (12.x, at this time) from the NVIDIA CUDA Toolkit Archive. Install a Python package manager, for example micromamba (follow the installation instructions on the website).

If a preprocessor node doesn't have a version option, it is unchanged in ControlNet 1.1. It is recommended to use the v1.1 version of preprocessors if they have a version option, since results from v1.1 preprocessors are better than v1 and compatible with both ControlNet 1 and ControlNet 1.1.

Once I applied the Face Keypoints Preprocessor and ControlNet after the InstantID node, the results were really good. When the ControlNet was turned OFF, the prompt generated the image shown in the bottom corner. ControlNet can be used with other generation models.

I was having trouble getting anything to look like the input image, but the issue was that I wasn't including the ControlNet (I thought it was only needed for posing, and I was having trouble loading the example workflows). I don't remember if you have to Add or Multiply it with the latent before putting it into the ControlNet node; it's been a while since I messed with Comfy.

Hey everyone! Like many, I like to use ControlNet to condition my inpainting, using different preprocessors and mixing them.

Hi, I hope I am not bugging you too much by asking you this on here. Backup your workflows and pictures. In my Canny edge preprocessor, I don't seem to be able to go into decimals like you or other people I have seen do.
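Several comments above boil down to the same workflow: run a preprocessor once, save the resulting map, and reuse it later through a plain "Load Image" node. One way to do that outside ComfyUI is sketched below, assuming the controlnet_aux package from Hugging Face (pip install controlnet_aux); class names and exact call signatures may differ slightly between versions, and the file names are placeholders.

```python
# Minimal sketch: run ControlNet preprocessors standalone and save the maps,
# so they can be reused later via a "Load Image" node in ComfyUI.
from PIL import Image
from controlnet_aux import CannyDetector, MidasDetector

source = Image.open("reference.png").convert("RGB")

# Edge map (plain Canny, no model weights needed).
canny = CannyDetector()
canny(source, low_threshold=100, high_threshold=200).save("canny_map.png")

# Depth map (downloads the MiDaS annotator weights on first use).
midas = MidasDetector.from_pretrained("lllyasviel/Annotators")
midas(source).save("depth_map.png")
```

The saved PNGs can then be fed straight into an Apply ControlNet node with the preprocessor set to none, since the preprocessing has already been done.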
I found that one of the better combinations is to pick the "canny" preprocessor and use Adapter XL Sketch, or the "t2ia_sketch_pidi" preprocessor and a ControlNet-LLLite model by kohya-ss in its "sdxl fake scribble anime" edition.

All old workflows can still be used. I'm trying to use an OpenPose ControlNet with an OpenPose skeleton image, without preprocessing.

FETCH DATA from: H:\Stable Diffusion Apps\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json
got prompt

Does ComfyUI support preprocessing an image? In Automatic1111 you could put in an image and it would preprocess it to a depth/canny/etc. image to be used. Make sure that you save your workflow by pressing Save in the main menu if you want to use it again.

F:\##_ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\node_wrappers\dwpose.py:24: UserWarning: DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly.

Testing ControlNet with a simple input sketch and prompt. I have used: CheckPoint: RevAnimated v1.2; LoRA: Thicker Lines Anime Style Lora Mix; ControlNet LineArt; ControlNet OpenPose; ControlNet TemporalNet (diffuser). Custom nodes in ComfyUI: ComfyUI Manager.

You input that picture, use the "reference_only" preprocessor on ControlNet, choose "Prompt/ControlNet is more important", and then change the prompt text to describe anything else except the clothes, using maybe a 0.5 denoising value.

Thank you so much! Is there a way to create depth maps from an image inside ComfyUI by using ControlNet, like in AUTO1111? I mean, in AUTO I can use the depth preprocessor, but I can't find anything like that in Comfy.

It's a preprocessor for a ControlNet model, like leres, midas, zoe, or marigold; I think code may be needed to support it. Not as simple as dropping a preprocessor into a folder. If you're running on Linux, or a non-admin account on Windows, you'll want to ensure /ComfyUI/custom_nodes and comfyui_controlnet_aux have write permissions.

The problems with the hands ADetailer are that if you use a masked-only inpaint, the model lacks context for the rest of the body, so you'll end up with stuff like backwards hands, wrong sizes, and other kinds of bad positioning.

I don't know why it didn't grab those on the update. Select the size you want to resize it to. And it's hard to find other people asking this question on here.

The ControlNet part is lineart of the old photo, which tells SD the contours it should draw. The img2img source is the same photo, colorized manually and simply, which shows SD the colors it should approximately paint. When the ControlNet was turned ON, the image used for the ControlNet is shown in the top corner.

I did try it, and it did work quite well with ComfyUI's canny node; however, it's nearly maxing out my 10 GB of VRAM, and speed also took a noticeable hit (went from 2.9 it/s to 1.8 it/s).

Run the WebUI. There is one for a preprocessor and one for loading an image. Workflows are tough to include in Reddit posts.

Then, in other ControlNet-related articles on ComfyUI-Wiki, we will specifically explain how to use individual ControlNet models with relevant examples. You might have to use different settings for his ControlNet.

Rather than remembering all the preprocessor names within ComfyUI ControlNet Aux, this single node contains a long list of preprocessors that you can choose from for your ControlNet.

With ControlNet I can input an image and begin working on it. Edit: nevermind, I think my installation of comfyui_controlnet_aux was somehow botched; I didn't have big parts of the source that I can see in the repo.

It involves supplying a reference image, using a preprocessor to convert the reference image into a usable "guide image", and then using the matching ControlNet model to guide the image generation alongside your prompt and generation model.

I have the "Zoe Depth Map" preprocessor, but not the "Zoe Depth Anything" shown in the screenshot. There's a preprocessor for DWPose in comfyui_controlnet_aux which makes batch processing via DWPose pretty easy. I am about to lose my mind. :<
First time I used it like an img2img process with the lineart ControlNet model, where I used it as an image template; but it's a lot more fun and flexible using it by itself, without other ControlNet models, as well as less time-consuming.

Here is the ControlNet write-up and here is the update discussion.

Also, if you're using Comfy, add an ImageBlur node between your image and the Apply ControlNet node and set both blur radius and sigma to 1.

This is the purpose of a preprocessor: it converts our reference image (such as a photo, line art, a doodle, etc.) into a structured feature map so that the ControlNet model can understand and guide the generated result.

Add Node > ControlNet Preprocessors > Faces and Poses > DW Preprocessor.

When trying to install the ControlNet Auxiliary Preprocessors in the latest version of ComfyUI, I get a note telling me to refrain from using it alongside this installation.

After an entire weekend reviewing the material, I think (I hope!) I got the implementation right. As the title says, I included ControlNet XL OpenPose and FaceDefiner models.

Reference Only is a ControlNet preprocessor that does not need any ControlNet model.

This is the input image that will be used in this example. Here is an example using a first pass with AnythingV3 with the ControlNet, and a second pass without the ControlNet with AOM3A3 (Abyss Orange Mix 3), using their VAE.

In other words, I can do 1 or 0 and nothing in between. Would you have even the beginning of a clue why that is?

RunComfy is the premier ComfyUI platform, offering a ComfyUI online environment and services, along with ComfyUI workflows featuring stunning visuals. RunComfy also provides an AI Playground, enabling artists to harness the latest AI tools to create incredible art.

EDIT: Nevermind, the update of the extension didn't actually work, but now it did. I've installed ComfyUI Manager, through which I installed ComfyUI's ControlNet Auxiliary Preprocessors.

Is there something similar I could use? Thank you. I've been doing some tests in A1111 using the Ultimate Upscaler script together with ControlNet Tile and it works wonderfully; it doesn't matter what tile size or image resolution I throw at it, but in ComfyUI I get an error.
ControlNet 1.1 Shuffle, ControlNet 1.1 Instruct Pix2Pix, ControlNet 1.1 Lineart, ControlNet 1.1 Anime Lineart, ControlNet 1.1 Inpaint (not very sure what exactly this one does), and ControlNet 1.1 Tile (unfinished, which seems very interesting).

Specifically, the padded image is sent to the ControlNet as pixels via the "image" input, and the padded image is also sent, VAE-encoded, to the sampler as the latent image.

Start Stable Diffusion and enable the ControlNet extension.

For those who have problems with the ControlNet preprocessor and have been living with results like the image for some time (like me), check that the ComfyUI/custom_nodes directory doesn't have two similar "comfyui_controlnet_aux" folders. If so, rename the first one (adding a letter, for example) and restart ComfyUI.

You can achieve the same thing in A1111; Comfy is just awesome because you can save the workflow 100% and share it with others.

Since there are currently many ControlNet model versions for ComfyUI, the exact flow may differ; here we use the current ControlNet V1.1 models as the example, and the specific workflows will be covered in later related tutorials.

I selected the "OpenPose" control type, with the "openpose" preprocessor and the "t2i-adapter_xl_openpose" model, set to "ControlNet is more important", and used this image. I received a good OpenPose preprocessing but this blurry mess for a result.

Normal map ControlNet preprocessor: it is used with "normal" models (e.g. control_normal-fp16).

But now I can't find the preprocessors like HED, Canny, etc. in ComfyUI. LATER EDIT: I noticed this myself when I wanted to use ControlNet for scribbling.

Hi guys, do you know where I can find the tile_resample preprocessor for ComfyUI? I've been using it without any problem in A1111, but since I just moved the whole workflow to ComfyUI, I'm having a hard time making ControlNet tile work the same way as ControlNet tile in A1111.

c:\Users\your-username-goes-here\AppData\Roaming\krita\pykrita\ai_diffusion\server\ComfyUI\extra_model_paths.yaml — I renamed it by removing the .example at the end of the filename and put my models path in it like so: d:/sd/models, replacing the one in the file.

Can I ask how you guys get around this? This is a rework of comfyui_controlnet_preprocessors based on ControlNet auxiliary models by 🤗.

Is this possible? In A1111 I can set the preprocessor to none, but the ComfyUI ControlNet node does not have any preprocessor input, so I assume it is always preprocessing the image (i.e. trying to extract the pose).

For example, we can use a simple sketch to guide the image generation process, producing images that closely align with our sketch. It's about colorizing an old picture.

I made a composition workflow, mostly to avoid prompt bleed. The subject and background are rendered separately, blended, and then upscaled together.

You can also right-click, open in the mask editor, and apply a mask on the uploaded original image if it contains multiple people or elements in the background you do not want.

Make sure you set the resolution to match the ratio of the texture you want to synthesize.

Hey there, I'm trying to switch from A1111 to ComfyUI, as I am intrigued by the node-based approach. I am a fairly recent ComfyUI user. That node can be obtained by installing Fannovel16's ComfyUI ControlNet Auxiliary Preprocessors custom node. Is there something like this for ComfyUI, including SDXL?

For those who don't know, reference-only is a technique that works by patching the UNet function so it can make two passes during an inference loop: one to write data from the reference image, another to read it during the normal input-image inference, so the output emulates the reference.
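The write/read description just above can be made concrete with a toy sketch. This is only a conceptual illustration of the two-pass pattern, using a stand-in attention function and made-up tensor shapes; it is not the actual A1111 or ComfyUI implementation.

```python
# Toy sketch of the "reference only" idea: one pass WRITES the reference
# features into a bank, the next pass READS them by concatenating the stored
# keys/values so the generation attends to the reference as well as itself.
import torch

bank = []          # feature bank shared between the two passes
mode = "write"     # "write" during the reference pass, "read" during generation

def attention(q, k, v):
    # plain scaled dot-product attention as the stand-in "original" function
    return torch.softmax(q @ k.transpose(-1, -2) / q.shape[-1] ** 0.5, dim=-1) @ v

def patched_attention(q, k, v):
    if mode == "write":
        bank.append((k, v))                   # remember reference features
        return attention(q, k, v)
    ref_k, ref_v = bank[0]                    # reuse them on the read pass
    k = torch.cat([k, ref_k], dim=1)          # attend to self + reference
    v = torch.cat([v, ref_v], dim=1)
    return attention(q, k, v)

x_ref = torch.randn(1, 16, 64)   # "reference image" features (toy shapes)
x_gen = torch.randn(1, 16, 64)   # features of the image being generated

patched_attention(x_ref, x_ref, x_ref)        # pass 1: write
mode = "read"
out = patched_attention(x_gen, x_gen, x_gen)  # pass 2: read, influenced by the reference
print(out.shape)
```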
Since we already created our own segmentation map, there is no need for a preprocessor. Segmentation ControlNet preprocessor: load your segmentation map as an input for ControlNet, load the noise image into ControlNet, and leave the Preprocessor set to None.

YOU NEED TO REMOVE comfyui_controlnet_preprocessors BEFORE USING THIS REPO.

So if you ever wanted to use the same effect as the OP, all you have to do is load his image and everything is already there for you.

When you generate the image you'd like to upscale, first send it to img2img.

EDIT: I must warn people that some of my settings in several nodes are probably incorrect. Only the layout and connections are, to the best of my knowledge, correct.

The preprocessor for OpenPose makes images like the one you loaded in your example, but from any image, not just OpenPose lines and dots.

Speaking of ControlNet, how do you guys get your line drawings? Use Photoshop's Find Edges filter and then clean up by hand with a brush? It seems like you could use Comfy with ControlNet to make the line art, then use ControlNet again to generate the final image.

When loading the graph, the following node types were not found: CR Batch Process Switch. What do I need to install? (I'm migrating from A1111, so ComfyUI is a bit complex.) I also get these errors when I load a workflow with ControlNet.

I'm struggling to find a workflow that allows image input into ComfyUI and uses SDXL. I found one that doesn't use SDXL but can't find any others. This shows an example of using ControlNet and img2img in a process.

Pidinet ControlNet preprocessor.

The reason we're reinstalling the latest version (12.x) again is that when we installed 11.8, among other things, the installer updated our global CUDA_PATH environment variable to point to 11.8.
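If you want to confirm which toolkit your machine is currently pointed at after an installer has touched it, the variable is easy to check. A trivial sketch, assuming a standard Windows install where CUDA_PATH is set globally:

```python
# Quick check of the point above: see which CUDA toolkit CUDA_PATH refers to
# (e.g. ...\CUDA\v11.8 vs ...\CUDA\v12.x) before reinstalling anything.
import os
print("CUDA_PATH =", os.environ.get("CUDA_PATH", "<not set>"))
```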
Go to the ControlNet tab, activate it, and use "ip-adapter_face_id_plus" as the preprocessor and "ip-adapter-faceid-plus_sd15" as the model. Go to the Lora tab and use the LoRA named "ip-adapter-faceid-plus_sd15_lora" in the positive prompt. Set the ControlNet parameters: Weight 0.5, Starting 0.1, Ending 0.5.

I went for half-resolution here, with 1024x512. In ControlNet, select Tile_Resample as the preprocessor and control_v11f1e_sd15_tile as the model.

Depth models (e.g. control_depth-fp16): in a depth map (which is the actual name of the kind of detectmap image this preprocessor creates), lighter areas are "closer" and darker areas are "further away". Not quite. Example depth map detectmap with the default settings.

Depth_leres is almost identical to regular "Depth", but with more ability to fine-tune the options. It does lose fine, intricate detail though.

Normal maps are good for intricate details and outlines. It is not very useful for organic shapes or soft smooth curves. This makes it particularly useful for architecture like room interiors and isometric buildings. Example normal map detectmap with the default settings.

There are quite a few different preprocessors in ComfyUI, which can be further used with the same ControlNet.

When you click on the radio button for a model type, "inverted" will only appear in the preprocessor popup list for the line-type models, i.e. Canny, Lineart, MLSD, and Scribble.

I was wondering if anyone has a workflow or some guidance on how to get the color model to function? I am guessing I require a preprocessor if I just load an image into the "Apply ControlNet" node.
First I thought it would allow me to add some iterative details to my upscale jobs; for example, if I started with a picture of empty ocean and added a "sailboat" prompt, tile would give me an armada of little sailboats floating out there.

Here are the ControlNet settings, as an example. Step 3: modify your prompt or use a whole new one, and the face will be applied to the new prompt. (Results in the following images.)

For example, in the image below, we used ComfyUI's Canny preprocessor, which extracts the contour edge features of the image.

MLSD is used with "mlsd" models (e.g. control_mlsd-fp16). Example MLSD detectmap with the default settings.

Fake scribble ControlNet preprocessor: fake scribble is just like regular scribble, but ControlNet is used to automatically create the scribble sketch from an uploaded image. Example fake scribble detectmap with the default settings.

Depth_leres preprocessor.

Pidinet ControlNet preprocessor: Pidinet is similar to HED, but it generates outlines that are more solid and less "fuzzy". The current implementation has far less noise than HED, but far fewer fine details. As of 2023-02-26, the Pidinet preprocessor does not have an "official" model that goes with it. Example Pidinet detectmap with the default settings.

I love ComfyUI, but it is difficult to set up a workflow to create animations as easily as it can be done in Automatic1111. But if you saved one of the stills/frames using a Save Image node, or even if you saved a generated ControlNet image using Save Image, it would transport it over.

If the input is manually inverted, though, for some reason the no-preprocessor inverted input seems to be better. It kinda seems like the best option is to have a white background and NOT invert the input while using the scribble preprocessor, OR invert the input in the UI but use no preprocessor.
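The inverted-input point above comes down to what the control image looks like: scribble and lineart ControlNets generally expect white lines on a black background, so a black-on-white drawing can simply be inverted before being fed to the ControlNet instead of going through a preprocessor. A minimal sketch, assuming Pillow and placeholder file names:

```python
# Minimal sketch: invert a black-on-white drawing so it becomes white lines on
# a black background, the form scribble/lineart control images usually take.
from PIL import Image, ImageOps

drawing = Image.open("scribble_black_on_white.png").convert("L")
inverted = ImageOps.invert(drawing)          # white lines on black background
inverted.save("scribble_white_on_black.png")
```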