Upscaling in ComfyUI: a digest of community tips collected from the ComfyUI and StableDiffusion subreddits. ComfyUI is a node-graph Stable Diffusion GUI that gives advanced users precise control over the diffusion process without writing any code, and it now supports ControlNets. Episode 12 of the ComfyUI tutorial series covers upscaling AI-generated images without losing quality.

Common starting points from the threads: if a downloaded workflow complains about a missing upscale model, just use an upscale node, or click the node that loads the upscale model and pick one you actually have installed. A reliable pattern is to go up by 4x with an upscale model, then downscale to your desired resolution using an image upscale node; a minimal sketch of that step follows below. Upscaling with a model at around 0.3 denoise takes a bit longer but gives more consistent results than a latent upscale. For tiled upscaling, open ComfyUI Manager, select Install Models, scroll to the ControlNet models, and download the second ControlNet tile model (its description specifically says it is the one needed for tile upscaling). Typical usage is upscaling 1.5-2x and getting generally nice results.

Harder cases come up too. Screenshots from late-90s animation are 'dirtier' than modern renders, so simply upscaling them neither cleans up the area around the line work nor brightens the dim, dark colors; they need a denoising pass, not just enlargement. One older tutorial approach upscaled by grid areas, letting you specify the desired grid size (rows and columns) on the output. You can also feed a positive prompt like "detailed faces" into the upscale pass so the underlying model renders the image from the prompt and the face is the last thing changed, and there are inpainting workflows that upscale only the masked region, inpaint it, and downscale it back to the original resolution when pasting it in. Whether an upscale pass benefits from LoRAs is a recurring question; after some experimenting, the answer appears to be yes.
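A minimal sketch of the 4x-then-downscale tip, using Pillow; both filenames are placeholders, and the 4x pass itself is assumed to have already been done by an upscale model:

```python
from PIL import Image

# Output of a 4x model pass, e.g. 2048x2048 from a 512x512 source (placeholder name).
img = Image.open("upscaled_4x.png")

# Downscale to the resolution you actually wanted; LANCZOS keeps edges clean.
img.resize((1024, 1024), Image.LANCZOS).save("final_1024.png")
```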
For Stable Video Diffusion output, people look for ways to upscale the frames from the native 1024x576. There is no special img2img "mode" involved; one approach for faces is to do the ReActor swap at the lower resolution, then upscale the whole image in very small steps with very, very small denoise amounts. For masks, upscale the mask by 4x and use the Cut By Mask node from the masquerade node pack. The Ultimate SD Upscale nodes are a wrapper for the script used in the A1111 extension, and one shared workflow aims at input images you want 4x enlarged but not changed too much, while still leaving some leeway.

Wiring model upscaling into a graph is simple: from VAE Decode, add an "Upscale Image (using model)" node and feed it with a "Load Upscale Model" node from the loaders category. Recent ComfyUI weekly updates added DAT upscale model support. Note that the built-in latent upscale is crude, basically a "stretch this image" operation (a small sketch below makes this concrete), and the hi-res upscale fix is entirely optional. A related idea is to use the SDXL refiner as the model for upscaling instead of a 1.5 model, applied after the refined image is upscaled and encoded into a latent. SUPIR is worth a look even just to sharpen images on a first pass, though it will not run on a 4GB card. RGBA input remains an open problem: Ultimate SD Upscale only accepts 3-channel images, and VAE Encode (for inpainting) likewise refuses 4-channel input, so the alpha channel has to be split off and handled separately. Finally, if generations come out weird, try loading someone else's known-good workflow; sometimes the same settings work fine there, pointing to a workflow bug rather than your parameters.
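To make the "stretch this image" point concrete, here is a small sketch of what a plain latent upscale amounts to, assuming an SD-style VAE whose latent is 4 channels at 1/8 of the pixel resolution:

```python
import torch
import torch.nn.functional as F

# Stand-in for a 512x512 image's latent: SD latents are 4 channels at 1/8 resolution.
latent = torch.randn(1, 4, 64, 64)

# A plain latent upscale is just this interpolation -- no model adds any detail.
stretched = F.interpolate(latent, scale_factor=2, mode="bilinear")
print(stretched.shape)  # torch.Size([1, 4, 128, 128]) -> decodes to 1024x1024

# Decoding 'stretched' directly gives a soft, smeared image; a KSampler pass at
# meaningful denoise is what actually re-adds detail after a latent upscale.
```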
A general graph-building tip: most node settings can be converted to inputs (right-click, convert to input), and a single primitive node can then be connected to five or six nodes so you change a value in one place instead of on each node. If a custom node's output throws errors like 'list' object has no attribute 'shape' when passed to nodes such as ImageCrop, the node is emitting a list where an image tensor batch is expected, which is worth reporting to its author.

A typical Ultimate SD Upscale setup generates an SD1.5 image and upscales it to 4x the original resolution (512x512 to 2048x2048) using Upscale with Model, a tile ControlNet, a Tiled KSampler, Tiled VAE Decode, and colour matching; one shared pipeline then reduces the output from 12288 to 3840 px width. Reported settings: ControlNet strength 0.5 with euler and sgm_uniform, or strength 0.9 with euler, plus modest denoise. Denoise really changes the image: around 0.25 blends a swapped face in without altering the rest of the picture much, while higher values rework everything. Bear in mind the 16GB VRAM spikes people see usually belong to the second, latent upscale pass, and that a model with better hands (or SD1.5 plus embeddings/LoRAs) can fix hand problems during the upscale. The usual two workflows are a latent upscale followed by denoising, or "upscaling with model" followed by denoising; "latent upscale" is an operation in latent space, so a pixel-space upscale model cannot be used there directly, and whether upscaling the latent is ever the better choice is a standing question. It also helps to work out how much additional upscale you need to reach a final resolution, whether from a normal upscaler or from a value already 4x-scaled by an upscale model (an example workflow is shared as JSON/PNG); the sketch below does the arithmetic.
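The arithmetic as a tiny Python helper; the function name is mine, not from any shared workflow:

```python
def upscale_by_factor(src_px: int, target_px: int, model_scale: int = 4) -> float:
    """Fractional 'upscale by' value to apply after a fixed-scale model pass."""
    return target_px / (src_px * model_scale)

# A 512px image through a 4x model, aiming at 1024px: 512 * 4 * 0.5 = 1024.
print(upscale_by_factor(512, 1024))   # 0.5
# This is also why a 4x model plus a factor of 0.5 is the usual way to get a 2x result.
```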
Among the stock nodes there is SD_4XUpscale_Conditioning, which adds support for the x4-upscaler model. That means you have two different ways to perform a "Hires Fix" natively in ComfyUI: a latent upscale, or an upscaling model; example workflows for both are available to download. To fetch models, click Install Models in the ComfyUI Manager menu, search for "upscale", and install the ones you want; put them in the models/upscale_models folder and load them with the UpscaleModelLoader node. A denoise around 0.3 usually gives the best results, and outdated custom nodes are a common breakage source (use Fetch Updates and then Update in ComfyUI Manager).

A few more observations from the same threads. SUPIR really changed upscale history: one "1 minute 8K Upscale" style workflow does a first pass with SUPIR and a second with Ultimate SD, and matched the colour of the original brilliantly; setting "ControlNet is more important" helps keep tiles on-model, and these comparisons are done with default node settings and fixed seeds. If you go too low on a latent upscale you get issues, so a common alternative is a 4x-UltraSharp image upscale followed by re-encoding through a KSampler at the higher resolution with low denoise, or a Tiled KSampler; just upscaling 4x with no sampler pass will not do much on its own. One inpaint-based finishing sequence: apply the inpaint mask, run through a KSampler, and send the latent output to a latent upscaler; the "Downsample" value can be changed and has its own documentation inside the workflow itself. Some users find ComfyUI a bit slower than A1111 and stick with A1111 for SD1.5 while using ComfyUI for SDXL, often generating 10-20 images per prompt. There are also far more SDXL variants now, and while most seem to work with A1111, not all work in ComfyUI. SD Ultimate Upscale remains a popular upscaling extension for the AUTOMATIC1111 WebUI, and its logic applies to Automatic easily.
One shared workflow automates converting roughs generated by A1111's txt2img to higher resolutions via img2img. Step 1 is text-to-image; the prompt varies a bit from picture to picture, but the first one reads: high resolution photo of a transparent porcelain android man with glowing backlit panels, closeup on face, anatomical plants, dark swedish forest, night, darkness, grainy, shiny, fashion, intricate plant details, detailed, (composition:1.2), extremely detailed. The official ComfyUI examples include both a latent workflow and a pixel-space ESRGAN workflow. There is also a Krea/Magnific-style clone built in ComfyUI that can upscale video game characters to look like real people; if temporal consistency gets solid enough and generation fast enough, playing or upscaling games and footage to that fidelity in real time becomes imaginable.

On video and VRAM: AnimateDiff runs at pretty reasonable resolutions with 8GB or less, because with less VRAM some ComfyUI optimizations kick in; on a 4090 with no optimizations kicking in, a 512x512 16-frame animation takes around 8GB. Open questions from the threads: how to upscale a real photo via img2img; how to upscale geometric artwork while increasing the line density; and how to upscale clothing without losing the fabric patterns. One user found upscaling kept adding an extra person to already-generated images, which usually traces back to too much denoise for the tile size. A small quality-of-life note to close: rerouting nodes keep the paths of a big layout legible.
For those shifting from Automatic1111 to ComfyUI: yes, Ultimate SD Upscale has been ported as a custom node suite, so the "use an actual SD model to do the upscaling" approach exists in ComfyUI too. A typical iterative pipeline: first sampling at 512x512, upscale with 4x ESRGAN, downscale to 1024x1024, sample again, then upscale with 2x ESRGAN, sample the 2048x2048 result, and finish with another 4x ESRGAN pass. That kind of staged prompt control is what ComfyUI is great at; in the A1111 WebUI you would have to remember to change the prompt every time. Hires fix and Loopback Scaler either change too much about the image (especially faces) or do not increase the details enough, leaving the end result too smooth.

Counterintuitive but true: it is better to use a 4x upscaler for a 2x upscale and an 8x upscaler for a 4x upscale than to run a 4x upscaler at its full factor, then scale the rest of the way down; the image upscale nodes can downscale by setting a direct resolution or a value under 1 on "upscale image by". To upscale only your favorites from a batch: switch the workflow's toggle to upscale, make sure the CFG is right and randomize is off, and press queue; the next queue then creates a new batch of images and also upscales the selected images cached from the previous prompt. The old guidance for the original SD Upscale script was cfg 10 with denoise between 0.3 and 0.4. If you intend to upscale everything at once anyway, you may as well just inpaint on the higher-resolution image. One wiring gotcha: the "Get latent size" node under the Ultimate SD Upscale node should feed from its two INT outputs. And the pace is not slowing: Flux had been out under a week and was already drawing this kind of open-source innovation.
People keep benchmarking these methods against Hires fix, the Loopback Scaler script, and SD Upscale; some dislike Ultimate Upscaler's results outright. One unexplained data point: for a 2x upscale, Automatic1111 is about 4 times quicker than ComfyUI on the same 3090. If you want a specific output size, remember the upscale amount is determined by the upscale model itself, so add an explicit image-scale step afterwards. To see what the crude latent upscale really does, try a VAEDecode immediately after it. And since most upscaling workflows upscale every creation, which is rarely useful, it is better to generate first and then pick one or two images to upscale.

A simple comparison rig can stay minimal; it is a bit messy ("no artistic cell in my body") but it works. One 2x test workflow is just: Load Image, Upscale, Save Image. Concretely, connect Load Upscale Model and the image (from VAE Decode or Load Image) into Upscale Image (using model), then route the result to your preview/save node; a sketch of this exact graph driven through the ComfyUI API follows. Another practical path for newcomers: set up Searge's workflow, then copy the official ComfyUI img2vid workflow into it and pass in whatever image you like.
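A hedged sketch of that minimal Load → Upscale (using model) → Save graph submitted through ComfyUI's HTTP API. Assumptions: a local instance on 127.0.0.1:8188, an image already in ComfyUI's input folder, and a model file in models/upscale_models; both filenames are placeholders:

```python
import json
import urllib.request

# API-format prompt: node ids map to {class_type, inputs}; links are [node_id, output_index].
prompt = {
    "1": {"class_type": "LoadImage",
          "inputs": {"image": "example.png"}},
    "2": {"class_type": "UpscaleModelLoader",
          "inputs": {"model_name": "4x-UltraSharp.pth"}},
    "3": {"class_type": "ImageUpscaleWithModel",
          "inputs": {"upscale_model": ["2", 0], "image": ["1", 0]}},
    "4": {"class_type": "SaveImage",
          "inputs": {"images": ["3", 0], "filename_prefix": "upscaled"}},
}

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": prompt}).encode(),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read())  # queues the job; output lands in ComfyUI's output folder
```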
The ComfyUI Kohya Deep Shrink node is astonishing on a video card with just 8GB. A debugging tip for one popular node pack: if you changed the batch size on the "pipeLoader - Base" node to be greater than 1, change it back to 1 and try again. A face-and-hand-detail plus upscale workflow is shared on comfyworkflows.com, and one popular workflow upscales images to 4K or 8K in three stages. It is easy to feel overwhelmed by the number of ways to do upscaling after only a few weeks with SD and ComfyUI, so asking for the current best workflow is fair.

Other notes: if ComfyUI seems to be running on the CPU, try different start-up parameters, such as disabling smart memory. A model-only upscale usually needs to go through a KSampler again afterwards. ReActor does a decent job at a face swap before the upscale stage. To share a workflow image despite metadata stripping, some people place the image into a zip. And those who switched over to Ultimate SD Upscale report it works much like the A1111 original, only with better results.
In that three-stage workflow, the second stage utilizes SUPIR up to 4K size, and the third stage utilizes SD Ultimate Upscale up to 8K size. For video, applying optical flow to the frame sequence smooths the appearance but costs definition in every frame. A sampler-based second pass does come up with new details, which is fine and even beneficial for the second pass of a t2i process, since the miniature first pass often has issues from model imperfections. One shared setup upscales to 2x and 4x in multiple steps, both with and without a sampler (all intermediate images are saved; see the sketch below), with multiple LoRAs that can easily be toggled on and off, currently up to three, including details and bad-hands LoRAs; it pairs well with dreamshaperXL. Another approach uses a tiled ControlNet with Ultimate Upscale at 3-4x, producing up to 6Kx6K images that are quite crisp.

The core trade-off, stated plainly: either upscale in pixel space first and do a low-denoise second pass, or upscale in latent space and do a high-denoise second pass; latent upscaling needs roughly 0.5 denoise or more to clean up the stretch artifacts, which is also why it changes the image more. Deep-shrink-style approaches are a lot faster than tiling, but the outputs are not as detailed. If you want a specific size, insert an ImageScale node. You will also need an upscale model, with 4x-UltraSharp a common choice, and you can run a regular AI upscale then a downscale (4x times 0.5) to land on 2x. Finally, drag and drop a ComfyUI-generated image into ComfyUI (this does not work with Reddit-hosted copies) and you get the workflow and seed back.
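Here is an example with some math to double an image's resolution in small steps, sizes only, as a Pillow sketch; in a real workflow each step would also get a low-denoise sampler pass, and the filename is a placeholder:

```python
from PIL import Image

img = Image.open("swap_output.png")   # placeholder: e.g. a ReActor result
target_w, step = 2048, 1.25           # grow ~25% per step up to the target width

i = 0
while img.width < target_w:
    w = min(round(img.width * step), target_w)
    h = round(img.height * w / img.width)   # preserve aspect ratio
    img = img.resize((w, h), Image.LANCZOS)
    i += 1
    img.save(f"step_{i:02d}_{w}px.png")     # keep every intermediate, as above
```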
Sytan's SDXL workflow adapts well here: keep it mostly stock, then replace the last part with a two-step upscale using the refiner model via Ultimate SD Upscale. Mediocre upscale quality is the usual complaint with plain upscalers: clearing up blurry images has its practical uses, but most people want what Magnific does, actually fixing the smudges and messy details of SD-generated images while producing a very clean, sharp result. Still, adding an upscaler like 4xUltraSharp to a fast workflow and going from 512x512 to 2048x2048 remains blazingly fast. Typical second-pass settings reported: around 0.6 denoise with ControlNet strength 0.9 and euler, or lower denoise without a ControlNet.

If a downloaded workflow errors with a missing upscale model, the creator has probably renamed the model file and your ComfyUI cannot find it; click the node that calls the upscale model and pick one of your own. Odd persistent noise can simply be a bad model download; re-downloading the checkpoint has made the problem vanish. For the second pass itself: latent upscale it, or model-upscale then VAE encode it again, and run it through the second sampler; the latent route takes longer. ComfyUI's inpainting tooling runs deeper than people expect, including things you cannot do in Automatic1111, and there is a FaceDetailer node (one working chain: FaceDetailer > Ultimate SD Upscale > EyeDetailer > EyeDetailer). Ultimate SD Upscale for ComfyUI is intended to upscale and enhance your input images, and if you want more resolution you can simply add another Ultimate SD Upscale node.
One user built a very good workflow with IP-Adapter, regional masks, and ControlNet that was only missing a good upscale; a related IP-Adapter trick is to crank the weight up but not let the IP-Adapter start until very late in sampling. To downscale after a fixed-scale model, just use the "upscale by" node with the bicubic method and a fractional value (0.5 to divide by 2). You can also upscale in SDXL and then run the image through img2img in Automatic1111 with an SD1.5 model. And remember that Reddit removes the ComfyUI metadata when you upload a picture, so posted images will not reload as workflows.

One reported Ultimate SD Upscale problem was the node seemingly doing the same process again and again. The console output looked like:

Upscaling iteration 1 with scale factor 2
Tile size: 768x768
Tiles amount: 6
Grid: 2x3
Redraw enabled: True
Seams fix mode: NONE
Requested to load AutoencoderKL
Loading 1 new model

The reporter had not raised issues on any of the repos because it was unclear where the problem actually exists: ComfyUI, Ultimate Upscale, or some other custom node entirely (a sanity check on the grid math follows). Related housekeeping: install missing nodes with ComfyUI Manager, and consider componentizing frequently used node groups and going wireless with UE (Use Everywhere) nodes. People have also tested KSampler schedulers during the upscale pass, and with almost 20 upscale models installed and no idea which is best, the question of iterative upscaling versus model choice stays open.
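That log is easy to sanity-check: the grid is just the ceiling of the upscaled dimensions over the tile size. A quick sketch; the 1536x2304 input size is my assumption, chosen to match the log:

```python
import math

def tile_grid(width: int, height: int, tile: int = 768):
    cols, rows = math.ceil(width / tile), math.ceil(height / tile)
    return cols, rows, cols * rows

# A 1536x2304 image (a guessed portrait size after the 2x pass) with 768px tiles:
print(tile_grid(1536, 2304))   # (2, 3, 6) -> "Grid: 2x3", "Tiles amount: 6"
```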
The Ultimate Upscale node slots into a variety of workflows. A model-specific warning: AuraSR v1 is ultra-sensitive to any kind of image compression, so feed it images straight out of SD, prior to any lossy saving, or the output will probably be terrible. ControlNets matter at high denoise: simply upscaling and then de-noising latents produces weird artifacts, like a face in the bottom right where a teddy bear should be. If results are too blurry and lack detail, like upscaling a regular image with traditional methods, the denoise is too low or the sampler pass is missing; nearest-exact in particular is a crude upscaling algorithm, and combined with low denoise strength and a low step count it means you are basically doing nothing to the image, leaving all the jagged edges intact. The higher the denoise, the more the pass tries to change, so it is always a balance.

Many eventually give up on latent upscale entirely. The most recommended path: upscale the image with a model (most upscalers do 4x, which is often too big to process, so downscale afterwards to the size you want), then send it back through VAE Encode and a low-denoise sample; you can repeat the upscale-and-fix cycle multiple times if you wish. There is no stock node to upscale with a model by a specific factor, hence the manual downscale. For seams: a vague grid pattern of squares in Ultimate SD Upscale output is the classic symptom; increasing the mask blur alone lost details in one report, while increasing the tile padding to 64 helped (one working recipe: 2x with UltraSharp, tile resolution 640x640, mask 16; the sketch below shows why this works). Kohya Deep Shrink-type nodes go immediately after the checkpoint loader, before anything else on the model line. Finally, a governance thought from the threads: ComfyUI could display license information per node ("commercial use: yes, no, needs license") and show a red warning when a workflow uses a non-commercial node; that could lead users to put pressure on developers.
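Why padding plus mask blur hides seams: overlapping tiles are blended with weights that ramp toward zero at each tile's edge, so no hard boundary survives. An illustrative numpy sketch (not the node's actual code), using the 640px tile and 64px padding from the recipe above:

```python
import numpy as np

def feather_weights(tile: int = 640, pad: int = 64) -> np.ndarray:
    """2D blend weights that ramp linearly toward 0 over 'pad' pixels at each edge."""
    ramp = np.minimum(np.arange(tile) + 1, pad) / pad   # rise over the padding zone
    ramp = np.minimum(ramp, ramp[::-1])                 # mirror for the far edge
    return np.outer(ramp, ramp)                         # full tile weight window

w = feather_weights()
print(w[0, 0], w[320, 320])   # ~0.000244 at the corner, 1.0 in the tile centre
```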
A pipeline that works for SD1.5: generate at SDXL-scale resolutions, then hires-fix latent upscale another modest step at denoise 0.35 and 10 steps or less; plug the output into a 'latent upscale by' node set to your end size, keeping the factor low (values like 1.5 are usually a better idea than 2+, because latent upscaling compounds its artifacts). In the saved Turbo workflow the sampler sits at 4 of 10 steps, which acts like roughly a 60% denoise. The A1111 equivalent: drop the ReActor output into img2img at the same latent size, add a tile ControlNet, and run the Ultimate SD Upscale script at about 1.25x. Every Sampler node in ComfyUI requires a latent image as an input, which is why pixel-space upscales must be VAE-encoded before resampling, and why the upscale-then-refine step can craft a noticeably different image, sometimes with newly introduced deformities, if the upscale added too much noise; with Iterative Upscale it can be better to add noise deliberately, via noise injection or an unsampler hook. If you want your workflow to generate a low-resolution image and upscale it immediately, the HiRes examples are exactly that; to upscale previously generated images instead, modify the HiRes graph to load them.

Assorted notes: dragging 1111 PNGs into ComfyUI works most of the time for copying the usual PNG-info parameters (a sketch of reading those chunks follows). For outpainting, use CLIPTextEncodeSDXL for the prompts. A standard checkpoint with step counts like 13 or 30 is worth trying for comparison. On models, "I have yet to find an upscaler that can outperform the proteus model." 'NoneType' object has no attribute 'copy' errors usually trace to a stale custom node, and one user even edited ComfyUI-Custom-Scripts' string_function.py to get the Preview Image node to accept their output. Someone asked anyone with Magnific AI access to upscale and post results for 256x384 images at jpg quality 5 and 0 for comparison. There is also recurring friction over people monetizing workflows built from open-source community knowledge without giving back to the source. One organized txt2img-upscale workflow ties most of these ideas together, and with results like these, planning a second 3090 starts to sound reasonable.
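A sketch of recovering the embedded workflow from a ComfyUI PNG with Pillow. ComfyUI writes the graph into PNG text chunks ("prompt" and "workflow"); services that strip metadata, Reddit among them, delete these, which is why re-downloaded Reddit images will not reload a workflow. The filename is a placeholder:

```python
import json
from PIL import Image

img = Image.open("ComfyUI_00001_.png")   # placeholder filename
meta = getattr(img, "text", {})          # PNG text chunks; empty on stripped files

for key in ("workflow", "prompt"):       # the two chunks ComfyUI writes
    if key in meta:
        graph = json.loads(meta[key])
        print(f"{key}: {len(graph)} top-level entries")
```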
ComfyUI's upscale-with-model node does not have an output size option like the other upscale nodes, so you have to manually downscale the image to the appropriate size afterwards. 4x-UltraSharp is the usual pick for realistic material, while the standard ESRGAN 4x is a good jack of all trades that does not come with a crazy performance cost; on low VRAM, expect to use some sort of tiled approach. An upscale pass will generally add details if your noise is set a bit high, but it will not blur, and sharpness holds up. With this toolbox, very detailed 2K images of real people (cosplayers, in one case) render with LoRAs in about 10 minutes on a laptop RTX 3060, and a latent upscale (HighRes fix) works with SDXL as well. One last tip, for SUPIR: create a fresh ComfyUI install just for it, and in the new instance link the model folders with the full paths, at least for the base models folder and the checkpoint folder, via the extra model paths config.