ComfyUI arguments (Reddit)

The aim of this page is to get you up and running with ComfyUI, running your first gen, and providing some suggestions for the next steps to explore.

I am using ComfyUI with its default settings. I am not sure what kind of settings ComfyUI uses to achieve such optimization, but if you are using Auto1111, you could disable live preview and enable xformers (what I did before switching to ComfyUI). Using ComfyUI with my GTX 1650 is simply way better than using Automatic1111. On vacation for a few days, I installed ComfyUI portable on a USB key and plugged it into a laptop that wasn't too powerful (just the minimum 4 gigabytes of VRAM). I stand corrected: I think for me, at least for now with my current laptop, using ComfyUI is the way to go.

I scoured the internet and came across multiple posts saying to add the arguments --xformers --medvram. But these arguments did not work for me: --xformers gave me a minor bump in performance (8s/it vs 11s/it), but it was still taking about 10 minutes per image.

On the other side of the speed debate: I don't find ComfyUI faster. I can make an SDXL image in Automatic 1111 in 4.2 seconds with TensorRT (the same image takes 5.6 seconds in ComfyUI), and I cannot get TensorRT to work in ComfyUI, as the installation is pretty complicated and I don't have 3 hours to burn doing it. I keep hearing that A1111 uses the GPU to feed the noise creation part, while ComfyUI uses the CPU. Anything that works well gets adopted by the larger community and finds its way into other Stable Diffusion software eventually.

For the latest daily release of the new frontend, launch ComfyUI with this command line argument: --front-end-version Comfy-Org/ComfyUI_frontend@latest. For a specific version, replace latest with the desired version number. Only if you want it early; for some reason, it broke my ComfyUI when I did it earlier.

Find your ComfyUI main directory (usually something like C:\ComfyUI_windows_portable) and just put your arguments in the run_nvidia_gpu.bat file. Open the .bat file with notepad, make your changes, then save it. Command line arguments can be put in the bat files used to run ComfyUI, separated by a space after each command. Every time you run the .bat file, it will load the arguments.
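A minimal sketch of what such an edited run_nvidia_gpu.bat can look like. The first line is the stock portable launcher quoted further down this page; --lowvram and --auto-launch are just two of the flags mentioned in this thread, so swap in whatever arguments you actually need:

```
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --lowvram --auto-launch
pause
```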
ComfyUI, an extremely powerful Stable Diffusion GUI with a graph/nodes interface for advanced users, giving you precise control over the diffusion process without coding anything, now supports ControlNets.

Even though I keep hearing people focus the discussion on the time it takes to generate the image (and yes, ComfyUI is faster; I have a 3060), I would like people to be discussing whether the image quality is better in one or the other. These images might not be enough (in numbers) for my argument, so I invite you to try it out yourselves and see if it's any different in your case. Any idea why the quality is much better in Comfy? I like InvokeAI, it's more user-friendly, and although I aspire to master Comfy, it is disheartening to see a much easier UI give sub-par results.

Keep in mind that SD1.5, SD2.1, and SDXL are all trained on different resolutions, and so models for one will not work with the others.

On installation: I downloaded the Windows 7-Zip file and ended up, once unzipped, with a large folder of files. I have read the section of the GitHub page on installing ComfyUI. It said to follow the instructions for manually installing for Windows. I tried installing the dependencies by running the pip install in the terminal window in the ComfyUI folder; that did not work. For a portable install, launch a terminal in the ComfyUI folder and use the embedded Python instead: \python_embeded\python.exe -m pip install [dependency]
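A concrete sketch of that, assuming the portable layout; [dependency] is the placeholder from the comment above, and the custom-node folder name is made up for illustration:

```
:: run these from the portable root, e.g. C:\ComfyUI_windows_portable
.\python_embeded\python.exe -m pip install [dependency]

:: many custom nodes ship a requirements.txt you can point pip at
:: (the node folder name below is just an example)
.\python_embeded\python.exe -m pip install -r .\ComfyUI\custom_nodes\SomeNode\requirements.txt
```

Using the embedded interpreter this way matters because the portable build ignores any system-wide Python you may have installed.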
Hi r/comfyui, we worked with Dr.Lt.Data to create a command line tool to improve the ergonomics of using ComfyUI. Some main features are: automatically install ComfyUI dependencies; launch and run workflows from the command line; install and manage custom nodes via cm-cli (ComfyUI-Manager as a CLI). Options: --install-completion: install completion for the current shell. --show-completion: show completion for the current shell, to copy it or customize the installation.

I just released an open source ComfyUI extension that can translate any native ComfyUI workflow into executable Python code. I originally created it to learn more about the underlying code base powering ComfyUI, but after building it I think it could be useful for anyone in the community who is more comfortable coding than using GUIs. It's very early stage, but I am curious what folks think and excited to update it over time! The goal is to make it easy for semi-casual users (e.g. Discord bot users lightly familiar with the models) to supply prompts that involve custom numeric arguments (like # of diffusion steps, LoRA strength, etc.).

A side note if you end up writing node code yourself: I think a class function (method) must always have "self" as its first argument. I'm not sure why, and I don't know if it's specific to Comfy or if it's a general rule for Python (it is: instance methods in Python always receive self first). Anyway, whenever you define one, never forget the self argument! I have barely scratched the surface, but through personal experience, you will go much further!

Hello, community! I'm happy to announce I have finally finished my ComfyUI SD Krita plugin. This is a plugin that allows users to run their favorite features from ComfyUI while being able to work on a canvas. Supports: basic txt2img, basic img2img, and inpainting (with auto-generated transparency masks).

For Flux: you can use t5xxl_fp8_e4m3fn.safetensors instead for lower memory usage, but the fp16 one is recommended if you have more than 32GB RAM. The VAE can be found here and should go in your ComfyUI/models/vae/ folder.

On Stable Cascade: "The training requirements of our approach consist of 24,602 A100-GPU hours, compared to Stable Diffusion 2.1's 200,000 GPU hours." From the paper, training the entire Würstchen model (the predecessor to Stable Cascade) cost about 1/10th of Stable Diffusion.

Has anyone managed to implement Krea.AI or Magnific AI in ComfyUI? I've seen the web source code for Krea AI, and they use SD 1.5 (+ ControlNet, PatchModel). I don't know what Magnific AI uses. I haven't managed to reproduce this process in ComfyUI yet.

I am trying out using SDXL in ComfyUI. Here are some examples I did generate using ComfyUI + SDXL 1.0 with refiner. Using ComfyUI was a better experience; the images took around 1:50 to 2:25 minutes at 1024x1024 / 1024x768, all with the refiner. I tested with different SDXL models and tested without the LoRA, but the result is always the same. I only have 4GB VRAM, so I'm just trying to get my settings optimized.

Hey all, is there a way to set a command line argument on startup for ComfyUI to use the second GPU in the system? With Auto1111 you add the following to the webui-user.bat file: set CUDA_VISIBLE_DEVICES=1. But with ComfyUI this doesn't seem to work! Thanks!
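For what it's worth, the same environment-variable trick can be applied to the ComfyUI launcher itself. A minimal sketch of a modified .bat, assuming the portable build (recent ComfyUI versions also expose a --cuda-device argument that should achieve the same thing):

```
:: run_nvidia_gpu.bat pinned to the second GPU (device numbering starts at 0)
set CUDA_VISIBLE_DEVICES=1
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build
pause
```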
Hi all, how to ComfyUI with Zluda. All credit goes to the people who did the work: lshqqytiger, LeagueRaINi, Next Tech and AI (YouTuber). I just pieced… Has anyone tried, or is still trying? I somehow got it to magically run with AMD despite the lack of clarity and explanation on the GitHub and literally no video tutorial on it.

I've seen people say ComfyUI is better than A1111 and gave better results, so I wanted to give it a try, but I can't find a good guide or info on how to install it on an AMD GPU. The resources also conflict: the original ComfyUI GitHub page says you need to install DirectML and then somehow run it if you already have A1111, while other places say you need miniconda/anaconda to run it.

With ComfyUI my command line arguments are: --directml --use-split-cross-attention --lowvram. The most important thing is to use tiled VAE for decoding; that ensures no out-of-memory at that step. With this combo it now rarely gives out of memory (unless you try crazy things). Before, I couldn't even generate with SDXL on ComfyUI or anything.

==[Update]== Launching in CPU mode is successful (python main.py --cpu) but of course not ideal. This narrows the problem down to GPU/PyTorch packages. I've ensured both CUDA 11.8 and PyTorch 2.1 are updated and used by ComfyUI. While I primarily utilize PyTorch cross attention (SDP), I also tested xformers to no avail. It appears some other AMD GPU users have similar unsolved issues. The usual advice: update ComfyUI and all your custom nodes, and make sure you are using the correct models; updating to the correct latest version of PyTorch is what is needed. That helped the speed a bit. FETCH DATA from: H:\Stable Diffusion Apps\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json got prompt…

However, I kept getting a black image. Can you let me know how to fix this issue? I have the following arguments: --windows-standalone-build --disable-cuda-malloc --lowvram --fp16-vae --disable-smart-memory. (I did a clean install right now and it works perfectly.)

I use an 8GB GTX 1070 without ComfyUI launch options, and I can see from the console output that it chooses NORMAL_VRAM by default for me. I get 1.53 it/s for SDXL and approximately 4.55 it/s for SD1.5.

Wherever you launch ComfyUI from is where you need to set the launch options, like so: python main.py --windows-standalone-build --normalvram --listen 0.0.0.0. Wherever you are running main.py, whether that be in a .bat file, a .sh file, or in the command line, you can just add the --lowvram option straight after main.py, e.g.: python3 ./main.py --lowvram --auto-launch.

To reach ComfyUI from other machines, what worked for me was to add a simple command line argument to the file: `--listen 0.0.0.0`. Additionally, I've added some firewall rules for TCP/UDP for port 8188. The final line in my run_nvidia_gpu.bat looks like this: `.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --normalvram --listen 0.0.0.0`
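Putting that LAN setup together, a sketch for the Windows portable build. The netsh line is an assumption about what those "firewall rules for port 8188" look like; --port just makes the default port explicit:

```
:: one-time firewall rule (run from an elevated prompt; the rule name is arbitrary)
netsh advfirewall firewall add rule name="ComfyUI" dir=in action=allow protocol=TCP localport=8188

:: launch with --listen so other machines on the LAN can connect
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --normalvram --listen 0.0.0.0 --port 8188
```

Then browse to http://<machine-ip>:8188 from the other device.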
The example pictures do load a workflow, but they don't have a label or text that indicates which version it is. I downloaded one workflow picture and dragged it to ComfyUI, but it doesn't load anything; it looks like the metadata is not complete. Normally you just download it, drag it inside ComfyUI, and you'll have the same workflow you see above. After playing around with it for a while, here are 3 basic workflows that work with older models (here, AbsoluteReality).

Invoke just released 3.0, which adds ControlNet and a node based backend that you can use for plugins etc., so it seems a big team is finally taking node based expansion seriously. I love Comfy, but a bigger team and a really nice UI with node plugin support gives serious potential to them… I wonder if Comfy and Invoke will somehow work together or if things will stay fragmented between all the various UIs.

That said, ComfyUI is much better suited for studio use than other GUIs available now. VFX artists are also typically very familiar with node based UIs, as they are very common in that space. ComfyUI was written with experimentation in mind, so it's easier to do different things in it. It is also trivial to extend with custom nodes, and workflows are much more easily reproducible and versionable.

But inpainting in Comfy is still terrible. I'm not sure if I simply didn't manage to config it right, but when inpainting faces in 1111 I can make it inpaint at a specific resolution, so I do faces at 512x512.

Hello! I've been playing around with ComfyUI for months now and reached a level where I wanna make my own LoRAs, both character and environment. But where do I begin? Anyone know any good tutorials for a LoRA training beginner? Doing it in ComfyUI or any other SD UI doesn't matter to me, only that it's done locally. Thanks in advance for any information.

I used to do it manually, with symlinks and command arguments and the like. But there's an even easier way now: StabilityMatrix. It's the same thing, but they've done most of the work for you. You might still have to add the occasional command line argument for extensions or something.

On Linux with the latest ComfyUI I am getting 3.55 it/s for SD1.5 while creating a 896x1152 image via the Euler-A sampler.
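If you're unsure which launch arguments your ComfyUI build actually supports, the argument parser can print them all. A quick sketch on Linux (the same works with the embedded python.exe on the Windows portable build):

```
cd ComfyUI
python3 main.py --help            # lists every launch argument this build supports
python3 main.py --lowvram --auto-launch
```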