
ComfyUI upscale examples (Reddit)

The Ultimate SD Upscale is one of the nicest things in A1111: it first upscales your image using a GAN or any other old-school upscaler, then cuts it into tiles small enough to be digestible by SD, typically 512x512; the pieces overlap each other and can be bigger. (There is a small sketch of the tile layout at the end of this section.)

Is there a version of Ultimate SD Upscale that has been ported to ComfyUI? I am hoping to find a way to implement image2image in a pipeline that includes multi-ControlNet, set up so that all generations automatically get passed through something like SD Upscale without me having to run the upscaling as a separate step.

Then plug the output from this into a "Latent Upscale By" node set to whatever you want your end image to be (lower values like 1.5 are usually a better idea than going 2+ here, because latent upscale introduces noise, which requires an offset denoise value in the following KSampler), then a second KSampler at 20+ steps set to probably over 0.5 denoise, and run your prompt.

So I was looking through the ComfyUI nodes today and noticed that there is a new one, called SD_4XUpscale_Conditioning, which adds support for x4-upscaler-ema.safetensors (SD 4X Upscale Model). I decided to pit the two head to head; here are the results, workflow pasted below (did not bind to image metadata because I am using a very custom weird one).

I go back and forth between OG SD Upscale and Ultimate. My workflow runs about like this: [KSampler] [VAE Decode] [Resize] [VAE Encode] [KSampler #2 thru #n]. I typically use the same or a closely related prompt for the additional KSamplers, same seed and most other settings, with the only differences among my (for example) four KSamplers in the #2-#n positions.

"Latent upscale" is an operation in latent space, and I don't know any way to use the model mentioned above in latent space.

Hey folks, lately I have been getting into the whole ComfyUI thing and trying different things out. Where can one get such things? It would be nice to use ready-made, elaborate workflows! For example, ones that might do Tile Upscale like we're used to in AUTOMATIC1111, to produce huge images. Thanks. So my question is: is there a way to upscale an already existing image in Comfy, or do I need to do that in A1111?

You can change the initial image size from 1024x1024 to other sizes compatible with SDXL as well.

Thanks! I believe it should work with 8GB VRAM, provided your SDXL model and upscale model are not super huge. E.g. use a X2 upscaler model.

The final node is where ComfyUI takes those images and turns them into a video.

I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part of his workflow with a 2-step upscale using the refiner model via Ultimate SD Upscale like you mentioned. You can find the workflows and more image examples below: ComfyUI SUPIR Upscale Workflow. But I hardly ever use ControlNet for upscaling.

Flux is a family of diffusion models by Black Forest Labs. On my 4090 with no optimizations kicking in, a 512x512 16-frame animation takes around 8GB of VRAM.
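The overlapping-tile layout that Ultimate SD Upscale uses, as described at the top of this section, is easy to picture in code. Here is a minimal sketch, assuming 512px tiles and a hypothetical 64px overlap (both are configurable in the actual node; none of this is taken from its source):

```python
def tile_coords(width, height, tile=512, overlap=64):
    """Yield (x, y) origins of overlapping tile-sized crops covering the image."""
    step = tile - overlap  # advance less than a full tile so neighbours overlap
    xs = list(range(0, max(width - tile, 0) + 1, step))
    ys = list(range(0, max(height - tile, 0) + 1, step))
    if xs[-1] + tile < width:   # make sure the right edge is covered
        xs.append(width - tile)
    if ys[-1] + tile < height:  # ...and the bottom edge
        ys.append(height - tile)
    for y in ys:
        for x in xs:
            yield x, y

# A GAN-upscaled 2048x1536 image splits into 20 overlapping 512px tiles,
# each small enough for an SD img2img pass.
print(len(list(tile_coords(2048, 1536))))  # 20
```

Each tile then goes through img2img and the overlaps are blended back together, which is also why a too-high denoise can make each square drift into its own little picture, as a commenter notes further down.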
I'm still learning, so any input on how I could improve these workflows is appreciated, though keep in mind my goal is to balance the complexity with the ease of use for end users.

You end up with images anyway after KSampling, so you can use those upscale nodes.

For the easy-to-use single-file versions that you can use in ComfyUI, see below: FP8 Checkpoint Version. Feature/Version: Flux.1 Pro / Flux.1 Dev / Flux.1 Schnell. Overview: cutting-edge performance in image generation with top-notch prompt following, visual quality, image detail, and output diversity.

Edit: Also, I wouldn't recommend doing a 4x upscale using a 4x upscaler (such as 4x Siax). Maybe it doesn't seem intuitive, but it's better to go with a 4x upscaler for a 2x upscale and an 8x upscaler for a 4x upscale.

The workflow used is the Default Turbo Postprocessing from this Gdrive folder. Images reduced from 12288 to 3840 px width. An all-in-one workflow would be awesome.

Depending on the noise and strength, it ends up treating each square as an individual image. So instead of one girl in an image, you get 10 tiny girls stitched into one giant upscaled image.

The cape is an img2img upscale after the first 2x upscale: I cropped out that portion as a square, just highres'd that portion, and comped it back in. You should be able to see where the comp ends, where the quality of the cape drops down to the original upscale. The armor is upscaled from the original image without modification.

The workflow has different upscale flows that can upscale up to 4x, and in my recent version I added a more complex flow that is meant to add details to a generated image. ComfyUI Fooocus Inpaint with Segmentation Workflow.

Hi everyone, I've been using SD / ComfyUI for a few weeks now and I find myself overwhelmed with the number of ways to do upscaling.

The good thing is no upscale is needed. You can do the ControlNet/Ultimate SD Upscale combo.

If the workflow is not loaded, drag and drop the image you downloaded earlier.

Latent quality is better, but the final image deviates significantly from the initial generation. If you want more details, latent upscale is better, and of course noise injection will let more details in (you need noise in order to diffuse into details). If you use Iterative Upscale, it might be better to approach it by adding noise using techniques like noise injection or an unsampler hook.

You either upscale in pixel space first and then do a low-denoise second pass, or you upscale in latent space and do a high-denoise second pass. That's it for upscaling. Pixel upscale into a low-denoise second sampler is not as clean as…

Upscaler roundup and comparison: Hello, I did some testing of KSampler schedulers used during an upscale pass in ComfyUI. These comparisons are done using ComfyUI with default node settings and fixed seeds. It upscales the second image up to 4096x4096 (4x UltraSharp) by default for simplicity, but that can be changed to whatever.

Step 2: Download this sample image. Step 3: Update ComfyUI. Step 4: Launch ComfyUI and enable Auto Queue (under Extra Options). Step 5: Drag and drop the sample image into ComfyUI. Step 6: The FUN begins! If the queue didn't start automatically, press Queue Prompt.

This repo contains examples of what is achievable with ComfyUI. All the images in this repo contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image.

The video demonstrates how to integrate a large language model (LLM) for creative image results without adapters or ControlNets.

Here is an example: you can load this image in ComfyUI to get the workflow.
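To make the "pixel space first, low denoise" versus "latent space, high denoise" choice above concrete, here is a rough sketch of both second-pass wirings in ComfyUI's API (JSON) format. The class_type and input names are stock ComfyUI nodes, but the node IDs, the "model"/"pos"/"neg"/"vae"/"latent" placeholders, and every number are made up for illustration, not taken from any workflow in this thread:

```python
# Route A: pixel-space upscale, then a low-denoise second pass.
# "latent", "vae", "model", "pos", "neg" stand in for nodes defined
# elsewhere in the graph (first KSampler, loaders, CLIP encodes).
pixel_route = {
    "10": {"class_type": "VAEDecode",
           "inputs": {"samples": ["latent", 0], "vae": ["vae", 0]}},
    "11": {"class_type": "ImageScaleBy",
           "inputs": {"image": ["10", 0], "upscale_method": "lanczos",
                      "scale_by": 2.0}},
    "12": {"class_type": "VAEEncode",
           "inputs": {"pixels": ["11", 0], "vae": ["vae", 0]}},
    "13": {"class_type": "KSampler",
           "inputs": {"model": ["model", 0], "positive": ["pos", 0],
                      "negative": ["neg", 0], "latent_image": ["12", 0],
                      "seed": 0, "steps": 20, "cfg": 7.0,
                      "sampler_name": "euler", "scheduler": "normal",
                      "denoise": 0.3}},  # low denoise: keep the composition
}

# Route B: latent-space upscale, then a high-denoise second pass.
latent_route = {
    "20": {"class_type": "LatentUpscaleBy",
           "inputs": {"samples": ["latent", 0],
                      "upscale_method": "nearest-exact", "scale_by": 1.5}},
    "21": {"class_type": "KSampler",
           "inputs": {"model": ["model", 0], "positive": ["pos", 0],
                      "negative": ["neg", 0], "latent_image": ["20", 0],
                      "seed": 0, "steps": 20, "cfg": 7.0,
                      "sampler_name": "euler", "scheduler": "normal",
                      "denoise": 0.6}},  # higher denoise: clean up latent noise
}
```

The design trade-off matches the comments above: route A preserves the image but can look soft, while route B regains detail at the cost of the result drifting from the first generation.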
Thanks for your help.

And when purely upscaling, the best upscaler is called LDSR. The downside is that it takes a very long time.

The "Upscale and Add Details" part splits the generated image, upscales each part individually, adds details using a new sampling step, and after that stitches the parts together.

For ComfyUI there should be license information for each node, in my opinion: "Commercial use: yes, no, needs license", and a workflow using a non-commercial node should show some warning in red. This could lead users to increase pressure on developers.

Latent upscale is different from pixel upscale. Try an immediate VAEDecode after a latent upscale to see what I mean.

I haven't been able to replicate this in Comfy. We are just using Ultimate SD Upscale with a few ControlNets and tile sizes of ~1024px.

Does anyone have any suggestions: would it be better to do an iterative upscale, or how about my choice of upscale model? I have almost 20 different upscale models, and I really have no idea which might be best.

SDXL most definitely doesn't work with the old ControlNet.

I just uploaded a simpler example workflow that does a 2x latent upscale in two ways, one of them using the Iterative Mixing KSampler to noise up the 2x latent before passing it to a few steps of refinement in a regular KSampler.

Thanks for the tips on Comfy! I'm enjoying it a lot so far. There are also "face detailer" workflows for faces specifically.

I usually take my first sample result to pixel space, upscale by 4x, downscale by 2x, and sample from step 42 to step 48, then pass it to my third sampler for steps 52 to 58, before going to post with it.

The example pictures do load a workflow, but they don't have a label or text that indicates whether it's version 3.1 or not.

Like 1024, 1280, 2048, 1536. Hands are still bad though.

Upscale x1.5 ~ x2 - no need for a model, it can be a cheap latent upscale. Sample again, denoise=0.5, don't need that many steps. From there you can use a 4x upscale model and run the sampler again at low denoise if you want higher resolution.

I also combined ELLA in the workflow to make it easier to get what I want.

If you are looking for upscale models to use, you can find some on…

So I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; looks like the metadata is not complete.

Sample a 3072 x 1280 image, sample again for more detail, then upscale 4x, and the result is a 12288 x 5120 px image.

In the CR Upscale Image node, select the upscale_model and set the rescale_factor.

From the ComfyUI_examples, there are two different 2-pass ("hires fix") methods: one is latent scaling, the other is non-latent scaling. Now there's also a `PatchModelAddDownscale` node.

Making a bit of progress this week in ComfyUI. This is just a simple node built off what's given and some of the newer nodes that have come out. Like many XL users out there, I'm also new to ComfyUI and very much just a beginner in this regard.

This breaks the composition a little bit, because the mapped face is most of the time too clean or has slightly different lighting, etc. I generally do the ReActor swap at a lower resolution, then upscale the whole image in very small steps with very, very small denoise amounts.

2 options here. If I understand correctly how Ultimate SD Upscale + controlnet_tile works, they make an upscale, divide the upscaled image into tiles, and then img2img through all the tiles. After that, they generate seams and combine everything together.
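The "upscale by 4x, downscale by 2x" pixel-space trick mentioned above can be approximated outside ComfyUI in a few lines. A sketch with Pillow, using Lanczos as a stand-in for a 4x model such as 4x-UltraSharp; the file names are placeholders:

```python
from PIL import Image  # pip install pillow

img = Image.open("first_sample.png")
# 4x up with Lanczos standing in for a 4x upscale model.
up4 = img.resize((img.width * 4, img.height * 4), Image.LANCZOS)
# Halving the 4x result keeps extra detail while smoothing upscaler artifacts,
# for a net 2x before the next sampling pass.
net2x = up4.resize((up4.width // 2, up4.height // 2), Image.LANCZOS)
net2x.save("first_sample_2x.png")
```

This mirrors the earlier "4x upscaler for a 2x upscale" advice: the downscale step hides the upscaler's own artifacts.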
ComfyUI is a powerful and modular GUI for diffusion models with a graph interface; explore its features, templates and examples on GitHub. The equivalent to Ultimate SD Upscale for A1111 is Ultimate SD Upscale for ComfyUI.

Hello, for more consistent faces I sample an image using the IPAdapter node (so that the sampled image has a similar face), then I latent upscale the image and use the ReActor node to map the same face used in the IPAdapter onto the latent-upscaled image.

When I search with quotes it didn't give any results (now it's only giving this reddit post), and without quotes it gave me a bunch of stuff mainly related to SDXL but not Cascade, and the first result is this: Examples of ComfyUI workflows.

This will get to the low-resolution stage and stop. That's because of the model upscale.

Every Sampler node (the step that actually generates the image) in ComfyUI requires a latent image as an input.

Is there a workflow to upscale an entire folder of images, as is easily done in A1111 in the img2img module? Basically I want to choose a folder and process all the images inside it. (See the sketch after this section for one way to script it.)

Adding LoRAs in my next iteration. Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above.

Now I have made a workflow that has an upscaler in it and it works fine; the only thing is that it upscales everything, and that is not worth the wait with most outputs.

Here is an example of how to use upscale models like ESRGAN.

TLDR: In this tutorial, Seth introduces ComfyUI's Flux workflow, a powerful tool for AI image generation that simplifies the process of upscaling images up to 5.4x using consumer-grade hardware.

You guys have been very supportive, so I'm posting here first.

Larger images also look better after refining, but on 4GB we aren't going to get away with anything bigger than maybe 1536 x 1536.

There are also other upscale models that can upscale latents with less distortion; the standard ones are going to be bicubic, bilinear, and bislerp.

In the Load Video node, click on "choose video to upload" and select the video you want. It's nothing spectacular, but it gives good consistent results.

If your image changes drastically on the second sample after upscaling, it's because you are denoising too much.

You can run AnimateDiff at pretty reasonable resolutions with 8GB or less - with less VRAM, some ComfyUI optimizations kick in that decrease the VRAM required. It's so wonderful what the ComfyUI Kohya Deep Shrink node can do on a video card with just 8GB.

It does not work with SDXL for me at the moment.

I just find I'm going to inpaint on my images anyway, so that whole process is just an extra step and time.
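On the folder question above: the stock UI has no folder input, but ComfyUI's HTTP API (POST /prompt on the default 127.0.0.1:8188 server) can queue one job per file. A rough sketch, assuming a workflow exported with "Save (API Format)" whose LoadImage node has id "1", and images already copied into a subfolder of ComfyUI's input directory — the paths and node id are placeholders:

```python
import copy
import json
import pathlib
import urllib.request

API = "http://127.0.0.1:8188/prompt"  # default ComfyUI server address
workflow = json.loads(pathlib.Path("upscale_api.json").read_text())

for img in sorted(pathlib.Path("ComfyUI/input/batch").glob("*.png")):
    wf = copy.deepcopy(workflow)
    # Point the LoadImage node (assumed id "1") at the next file.
    wf["1"]["inputs"]["image"] = f"batch/{img.name}"
    req = urllib.request.Request(
        API,
        data=json.dumps({"prompt": wf}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(img.name, "->", resp.status)
```

Each POST queues one run, so the folder is processed unattended, much like A1111's img2img batch tab.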
I want to upscale my image with a model and then select the final size of it. For example, I can load an image, select a model (4xUltraSharp, for example), and select the final resolution (from 1024 to 1500, for example), applying both a prompt to improve detail and an increase in resolution (indicated as a percentage, for example 200% or 300%). PS: If someone has access to Magnific AI, can you please upscale and post the result for 256x384 (5 jpg quality) and 256x384 (0 jpg quality)?

There isn't a "mode" for img2img.

Upscale to 2x and 4x in multi-steps, both with and without a sampler (all images are saved). Multiple LoRAs can be added and easily turned on/off (currently configured for up to three LoRAs, but it can easily take more). Details and bad-hands LoRAs loaded. I use it with DreamShaperXL mostly and it works like a charm.

Yes, I searched Google before asking.

Still working on the whole thing, but I got the idea down.

A few examples of my ComfyUI workflow to make very detailed 2K images of real people (cosplayers in my case) using LoRAs and with fast renders (10 minutes on a laptop RTX 3060). Workflow included; second pic.

Both of these are of similar speed.

Thanks. I'm not very experienced with ComfyUI, so any ideas on how I can set up a robust workstation utilizing common tools like img2img, txt2img, refiner, model merging, LoRAs, etc.?

No attempts to fix jpg artifacts, etc.

I try to use ComfyUI to upscale (using SDXL 1.0 + Refiner). This is the image I created using ComfyUI, utilizing DreamShaperXL 1.0 Alpha + SDXL Refiner 1.0.

That's exactly what I ended up planning. I'm a newbie to ComfyUI, so I set up Searge's workflow, then copied the official ComfyUI i2v workflow into it, and I pass into the node whatever image I like.

Now change the first sampler's state to 'hold' (from 'sample') and unmute the second sampler, then queue the prompt again - this will now run the upscaler and second pass. Hope someone can advise.

I wanted to know what difference they make, and they do! Credit to Sytan's SDXL workflow, which I reverse engineered, mostly because I'm new to ComfyUI and wanted to figure it all out.

The upscale not being latent creating minor distortion effects and/or artifacts makes so much sense! And latent upscaling takes longer for sure; no wonder my workflow was so fast.

I used 4x-AnimeSharp as the upscale_model and rescaled the video to 2x.

I tried all the possible upscalers in ComfyUI (LDSR, latent upscale, several models such as NMKD, the Ultimate SD Upscale node, "hires fix" (yuck!), the Iterative Latent Upscale via pixel space node (mouthful)), and even bought a license from Topaz to… An example might be using a latent upscale; it works fine, but it adds a ton of noise that can lead your image to change after going through the refining step.

Currently the extension still needs some improvement; for example, you can only do resolutions divisible by 256.

Thanks! I was confused by the fact that I saw in several YouTube videos by Sebastian Kamph and Olivio Sarikas that they simply drop PNGs into the empty ComfyUI.

If I feel I need to add detail, I'll do some image blend stuff and advanced samplers to inject the old face into the process.
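For the multi-step approach above ("Upscale to 2x and 4x in multi-steps", or upscaling "in very small steps" with tiny denoise), splitting an overall factor into equal multiplicative steps is simple arithmetic. A tiny helper with made-up example numbers:

```python
def step_factors(total_scale: float, n_steps: int) -> list[float]:
    """Split an overall upscale factor into n equal multiplicative steps."""
    per_step = total_scale ** (1.0 / n_steps)
    return [round(per_step, 4)] * n_steps

# e.g. 4x overall in 4 small steps -> about 1.4142x per step
print(step_factors(4.0, 4))  # [1.4142, 1.4142, 1.4142, 1.4142]
```

Each small step then gets its own low-denoise sampling pass, which is what keeps the composition from drifting.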
Put them in the models/upscale_models folder, then use the UpscaleModelLoader node to load them and the ImageUpscaleWithModel node to use them. (A minimal graph using these two nodes is sketched below.)

For now I got this: "A gorgeous woman with long light-blonde hair wearing a low-cut tank top, standing in the rain on top of a mountain, highly detailed, artstation, concept art, sharp focus, illustration, art by Artgerm and Alphonse Mucha, trending on Behance, very detailed, by the best painters."

I wonder if there are any workflows for ComfyUI that combine Ultimate SD Upscale + controlnet_tile + IP-Adapter.

I have been generally pleased with the results I get from simply using additional samplers.

If you don't want the distortion, decode the latent, use "Upscale Image By", then encode it for whatever you want to do next; the image upscale is pretty much the only distortion-"free" way to do it. This is done after the refined image is upscaled and encoded into a latent.

AP Workflow 9.0 for ComfyUI: I originally wanted to release 9.0 with support for the new Stable Diffusion 3, but it was way too optimistic. While waiting for it, as always, the amount of new features and changes snowballed to the point that I must release it as is.

Usually I use two of my workflows: "Latent upscale" and then denoising 0.2 and resampling faces, or "Upscaling with model" and then denoising 0.5.

For some context, I am trying to upscale images of an anime village, something like Ghibli style. I upscaled it to a resolution of 10240x6144 px for us to examine the results.

The 16GB usage you saw was for your second, latent upscale pass. I might do an issue in ComfyUI about that.

The workflow is kept very simple for this test: Load Image, Upscale, Save Image. But I probably wouldn't upscale by 4x at all if fidelity is important.

There's "Latent Upscale By", but I don't want to upscale the latent image.
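To make the UpscaleModelLoader / ImageUpscaleWithModel wiring at the top of this section concrete, here is a minimal graph in ComfyUI's API (JSON) format. The node names and inputs are stock ComfyUI; the model file name is a placeholder for whatever sits in models/upscale_models. A graph like this could also serve as the workflow JSON for the batch script sketched earlier:

```python
# LoadImage -> UpscaleModelLoader + ImageUpscaleWithModel -> SaveImage
graph = {
    "1": {"class_type": "LoadImage",
          "inputs": {"image": "example.png"}},          # file in ComfyUI/input
    "2": {"class_type": "UpscaleModelLoader",
          "inputs": {"model_name": "4x-UltraSharp.pth"}},  # placeholder model
    "3": {"class_type": "ImageUpscaleWithModel",
          "inputs": {"upscale_model": ["2", 0], "image": ["1", 0]}},
    "4": {"class_type": "SaveImage",
          "inputs": {"images": ["3", 0], "filename_prefix": "upscaled"}},
}
```

This is the pure "model upscale" path with no sampler involved, i.e. the simple Load Image, Upscale, Save Image test workflow mentioned above.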
