Best SDXL upscaler: a Reddit roundup

Best sdxl upscaler reddit 5 Has 5 parameters which will allow you to easily change the prompt and experiment Toggle if the seed should be included in the file name or not Upscale to 2x and 4x in multi-steps, Step 1 - Text to image: Prompt varies a bit from picture to picture, but here is the first one: high resolution photo of a transparent porcelain android man with glowing backlit panels, closeup on face, anatomical plants, dark swedish 4x-UltraSharp is a decent general purpose upscaler. A recommendation: ddim_u has an issue where the time schedule doesn't start at 999. 5 models in ComfyUI but they're 512x768 and as such too small resolution for my uses. SDXL gives you good results by minimal prompting. 5 needs to get more detail. But, and a big BUT, SD1. The blurred latent mask does its best to prevent ugly seams. 5 and in my experience 0. I'm not at the PC, but I think there is one in the base sampler node, so put it there. 5 based Lora there. I mostly go for realism/people with my gens and for that I really like 4x_NMKD-Siax_200k, it handles skin texture quite well but does some weird things with hair if the upscale factor is too large. Even the best upscaler model, while considerably faster than rendering the image anew, will only increase detail resolution that are already present in the source image. py:357: UserWarning: 1Torch was not compiled with flash attention. Do you have ComfyUI manager. Hope someone can advise. 5 based, so you can use any 1. Step one - Prompt: 80s early 90s aesthetic anime, closeup of the face of a beautiful woman exploding into magical plants and colors, living plants, moebius, highly detailed, sharp attention to detail, extremely detailed, dynamic composition, akira, ghost in the shell Now You Can Full Fine Tune / DreamBooth Stable Diffusion XL (SDXL) with only 10. 2024-03-27 11:00:02. So my favorite so far is ESRGAN_4x but I am willing to try other upscaler good for adding fine detail and sharpeness. I run out of VRAM. However, it seems like the upscalers just add pixels without adding any detail at all. also use 768 or 1024 I wanted to title it "What is the BEST upscaler", but figured the answer may vary and depend on the use of every different person, I have heard about ULTRASHARP, But I am sure it is not the only "good" one, and there might be many other as good or even better? This subreddit is going to be used to consider various apps and themes to be included in both the "Best Android Apps" and "Best Homescreen Setups" series on the YouTube channel called HowToMen. Welcome to the official subreddit of the PC Master Race / PCMR! All PC-related content is welcome, including build help, tech support, and any doubt one might have about PC ownership. I don't suppose you know a good way to get a Latent upscale (HighRes Fix) working in ComfyUI with SDXL?I have been trying for ages with no luck. Yes, I agree with your theory. 5 or XL checkpoints. I didn't have random camera parts in a while :) The prompts you mentioned aren't really needed anymore for example. 0. You have a bunch of custom things in here that arent necessary to demonstrate "TurboSDXL + 1 Step Hires Fix Upscaler", and basically wasting our time trying to find things because you dont even provide links. I am loving playing around with the SDXL Turbo-based models popping out in the past week. My goal is to upscale the images generated by the 1. 0 Hi, I'm shikasensei, also known as u/Boring_Ad_914 on Reddit. 
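A few of the comments above ask how to get a HighRes-Fix-style upscale working with SDXL outside the UI, and others point out that a plain upscaler model only sharpens detail that is already there. Below is a minimal sketch of the two-pass idea using the Hugging Face diffusers library: generate at SDXL's native resolution, do a plain pixel upscale, then run a low-denoise img2img pass to re-add detail. The model IDs, resolution, and strength value are illustrative choices, not settings recommended by any particular comment.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

prompt = "high resolution photo of a lighthouse on a cliff, dramatic sky"

# Pass 1: generate at SDXL's native 1024x1024.
base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
image = base(prompt=prompt, width=1024, height=1024, num_inference_steps=30).images[0]

# Pass 2: plain 2x pixel upscale, then a low-denoise img2img pass to re-add detail.
upscaled = image.resize((2048, 2048), Image.LANCZOS)
img2img = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
final = img2img(
    prompt=prompt,
    image=upscaled,
    strength=0.3,            # low denoise keeps the composition; ~0.5+ starts changing it
    num_inference_steps=30,  # effective steps are roughly steps * strength
).images[0]
final.save("hires_fix_style.png")
```

The same structure works with any SDXL fine-tune in place of the base checkpoint.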
- LDSR 2x scaling is implemented as downsampling to half res then The 4X-NMKD-Superscale-SP_178000_G model has always been my favorite for upscaling SD1. This new comparison now should be more accurate with seeing which is the best realistic model that still retains pony capabilities, and how it compares Overall I think SDXL's AI is more intelligent and more creative than 1. reddit images do not contan PNG files I believe, so in instances like these: Photorealism Overview. 25-0. Looking over the discussion yesterday about the base model suggestions around Cascade and the other models, I am worried there may not be a good understanding in the community over just how powerful the base models are in particularly the Base SDXL Model. EDIT: WALKING BACK MY CLAIM THAT I DON'T NEED NON-LATENT UPSCALES. 0 Refiner Automatic calculation of the steps required for both the Base and the Refiner models Quick selection of image width and height based on the SDXL training set XY Plot ControlNet with the XL OpenPose model (released by Thibaud Zamora) SD Upscaler doesn't just upscale the picture like Photoshop would do (which you also can do in automatic1111 in the "extra" tab), they regenerate the image so further new detail can be added in the new higher resolution which didn't exist in the lower one. I love the use of the rerouting nodes to change the paths. 3 Denoise with normal scheduler, or 0. In the past, when using SD1. 5 version in Automatic1111. This can let you really play around with refiner step much MUCH further than with the standard SDXL refiner model depending on how well your model choices play Is this supposed to be used with a tile upscaler like UltimateSDUpscale, or is it just for img2img upscale and HiRes fix? Edit: I tested it out myself, and it does control the output in tiled upscaling like UltimateSDUpscale. Since I cannot make up my mind which one works best I have landed on a mix: Resize: 4x Upscaler 1: 4x-UltraSharp. I am trying sdxl lightning and upscale. 5 [Workflow Included] 743168596, Size: 4864x3328, Model hash: c9e3e68f89, Denoising strength: 0. 5, now I use it only with SDXL (bigger tiles 1024x1024) and I do it multiple times with decreasing denoise and cfg. Do a basic Nearest-Exact upscale to 1600x900 (no upscaler model). Currently all 4 methods (including multi diffusion and mixture of diffusers) are far from satisfying to me, so I'm constantly improving the algorithm. I upscaled it to a resolution of 10240x6144 px for us to examine the results. I am putting a link to this post in my SDXL intro/summary post šŸ˜ Possible captions: Lois stood me up Snow-white and the eight gnomes The not so little mermaid Plastic surgery clones of South Korea Kiss and Make Up Hong Kong Night Uses base, refiner, and upscale model Meantime: 38 sec Results: workflow v1. 0 LoRa's. litekite_ ā€¢ Additional comment actions SDXL 1. Useful for certain aesthetics but as generalistic I go to SDXL 1. Use the 8 step lora at 0. 6? I tried the old method with Controlnet, ultimate upscaler with 4x-ultrasharp , but it returned errors like ā€mat1 and mat2 shapes cannot be multipliedā€ (SDXL) is mostly subscription based? How long will they last and whats I'm quite new to this and have been using the "Extras" tab on Automatic1111 to upload and upscale images without entering a prompt. The right side uses the Siax upscaler and the above settings. I'm sure it's not the best, but for a beginner, it's fantastic. "The best" is very much still undecided. Keep "From I try to use comfyUI to upscale (use SDXL 1. 
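Several comments name specific upscaler models (4x-UltraSharp, 4x_NMKD-Siax_200k, 4X-NMKD-Superscale) and use them through the Extras tab or a ComfyUI upscale-model node. You can also run those .pth models directly from a script. The sketch below assumes the spandrel package, which is the loader ComfyUI itself uses for these models; the call pattern is from memory of its README, so treat the exact API as an assumption and the file path as a placeholder.

```python
import torch
import numpy as np
from PIL import Image
from spandrel import ModelLoader  # assumption: the loader library ComfyUI uses for .pth upscale models

# Load a community upscaler such as 4x-UltraSharp or 4x_NMKD-Siax_200k (path is a placeholder).
model = ModelLoader().load_from_file("models/upscale_models/4x-UltraSharp.pth")
model.cuda().eval()

img = Image.open("input.png").convert("RGB")
# HWC uint8 -> BCHW float in [0, 1], which is what these models expect.
x = torch.from_numpy(np.array(img)).permute(2, 0, 1).float().div(255).unsqueeze(0).cuda()

with torch.no_grad():
    y = model(x)  # output shape is (1, 3, H*scale, W*scale)

out = (y.squeeze(0).permute(1, 2, 0).clamp(0, 1).cpu().numpy() * 255).astype(np.uint8)
Image.fromarray(out).save("upscaled_4x.png")
```

As the comments stress, this alone only enlarges and sharpens what is already in the image; new detail still has to come from a diffusion pass afterwards.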
But what caught my eye was the list of resolution config that came with the app. I use the defaults and a 1024x1024 tile. 5 checkpoint in combination with a Tiled ControlNet to feed an Ultimate SD Upscale node for a more detailed upscale. 5 model (1st image attached above) using the SDXL model (2nd image attached above) to add realism to the low quality faces while preserving the emotional qualities of the faces + their bone structure etc It's not a new base model, it's simply using SDXL base as jumping off point again, like all other Juggernaut versions (and any other SDXL model really). " Under this reply, user "PsychologicalView605" confirmed this discovery in his/her own experiment. 5 we had control net and tiling etc, which last I checked isn't viable with sdxl. ai. SDXL 1. Iā€™ve been seeing news lately about ControlNetā€™s tile model for upscaling. r/PhotoshopTutorials Posted by u/ptitrainvaloin - 13 votes and no comments I haven't tried LDSR, thanks, I will try it. Please share your tips, tricks, and If you want a fully latent upscale, make sure the second sampler after your latent upscale is above 0. Here's what I typed last time this question was asked: AFAIK for automatic1111 only the "SD upscaler" script uses SD to upscale and its hit or miss. 5 should get me around 4,480 x 3584 however I am getting 3200 x4000 with A1111 sd xl Anyone has this issue? what is the solution? then I need 1. The higher the denoise number the more things it tries to change. This is no longer true. SDXL can indeed generate a nude body, and the model itself doesn't stop you from fine-tuning it towards whatever spicy stuff there is with a dataset, at least by the looks of it. More posts from r/universvideos subscribers . Another thing to try is IMG to imging a higher Rez image; seemed to work really well in my case (I gen images till one I like appears then img-to-img (with upscale at like 1. The upscaler is SD1. I found the strength needs to be pretty high for that to be noticed though. When using Roop (faceswaping extension) on sdxl and even some non xl models, i discovered that the face in the resulting image was always blurry. But when I try upscale with same sample settings, I am getting this kind of noise. The left side is my "control group" - ESRGAN upscaler, denoise 0. 5 denoising and for best results closer to 0. Next time, just ask me before assuming SAI has directly told us to not help individuals who may be using leaked models, which is a bit of a shame (since that is the opposite of true ļø) . Usually 3 out of the 5 are pretty much perfect. Some of my favorite SDXL Turbo models so far: SDXL TURBO PLUS - RED TEAM MODEL ļøšŸ¦ŒšŸŽ… - VRAM is King. Both of them give me errors as "C:\Users\shyay\Desktop\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention. ) But in this post the OP is using the leaked SDXL 0. that should stop it being distorted, you can also switch the upscale method to bilinear as that may work a bit better. Download the kohya_controllllite_xl_blur model from the link I provided in the last post, put it in your Controlnet model directory, then fire up controlnet. 39 votes, 18 comments. 0 if the base resolution is not too high) because it allow you to use greater batch sizes, and has virtually no processing time cost. Here's a sample I made while experimenting with Hires. At 0,3 I got already so much more details without stupid things appearing it seems good, but still doing test In Script, select Ultimate SD Upscale. 
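Ultimate SD Upscale with 1024x1024 tiles, some padding, and mask blur to hide seams comes up repeatedly above. The core mechanism is simple enough to sketch by hand: split the enlarged image into overlapping tiles, run a low-denoise img2img pass on each tile, and blend the results back with feathered weights. The code below is a toy reimplementation of that idea, not the extension's actual code; `img2img_tile` is a placeholder for whatever per-tile sampler you use (for example the diffusers img2img pipeline from the snippet further up).

```python
import numpy as np
from PIL import Image

TILE, PAD = 1024, 128  # tile size and overlap, mirroring typical Ultimate SD Upscale settings

def img2img_tile(tile: Image.Image) -> Image.Image:
    """Placeholder: run your low-denoise img2img pass (strength ~0.2-0.35) on one tile."""
    return tile  # no-op so the sketch runs standalone

def feather_mask(w: int, h: int, pad: int) -> np.ndarray:
    """Weight map that fades toward the tile borders so overlapping tiles blend smoothly."""
    ramp = lambda n: np.minimum(np.arange(n) + 1, pad) / pad
    wx = np.minimum(ramp(w), ramp(w)[::-1])
    wy = np.minimum(ramp(h), ramp(h)[::-1])
    return np.outer(wy, wx)[..., None]  # shape (h, w, 1)

def tiled_upscale_pass(image: Image.Image) -> Image.Image:
    src = np.asarray(image).astype(np.float32)
    acc = np.zeros_like(src)
    weight = np.zeros(src.shape[:2] + (1,), dtype=np.float32)
    step = TILE - 2 * PAD
    for y in range(0, src.shape[0], step):
        for x in range(0, src.shape[1], step):
            y0, x0 = max(0, y - PAD), max(0, x - PAD)
            y1, x1 = min(src.shape[0], y0 + TILE), min(src.shape[1], x0 + TILE)
            tile = Image.fromarray(src[y0:y1, x0:x1].astype(np.uint8))
            out = np.asarray(img2img_tile(tile)).astype(np.float32)
            m = feather_mask(x1 - x0, y1 - y0, PAD)
            acc[y0:y1, x0:x1] += out * m
            weight[y0:y1, x0:x1] += m
    return Image.fromarray((acc / np.maximum(weight, 1e-6)).clip(0, 255).astype(np.uint8))
```

The real extension adds seam-fix passes and mask blur on top; the feathered weights here are the same trick in miniature.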
This effectively allows you to increase denoise without worrying as much about hallucinations. The only difference is that it doesn't continue on from Juggernaut 9's training, it went back to the start. 236 strength and 89 steps, which will take 21 steps total). The Upscaler function of my AP Workflow 8. 33 to get close. Which one is best depends on the image type, BSRGAN I find is the most general purpose, Real ESRGAN is a better choice if you want things smoother or with anime, SwinIR is a good choice for getting more detail, LDSR I would only use on more important images. 8 strength with Hello, I did some testing of KSampler schedulers used during an upscale pass in ComfyUI. I've had very mixed results with SD Upscale, and don't use it much. I'm sure I'm forgetting something :b Welcome to the unofficial ComfyUI subreddit. This can add detail that isn't possible through img2img. SDXL was trained at a base resolution of 1024x1024, but was further fine-tuned on multiple aspect ratios. But it is extremely light as we speak, so much so the Civitai guys probably wouldn't even consider that NSFW at all. 55, Ultimate SD upscale upscaler: 4x-UltraMix_Restore, Ultimate SD upscale tile_width: 1024, Ultimate SD upscale I get great results, use an upscaler like remarci x4 on the settings, dont use latent, denoise about 0. The upscalers always subtley broke the seamless transition. Img2img using SDXL Refiner, DPM++2M 20 steps 0. I have 3070 8gb. due to the limitations of their initial workflow, I decided to create my own, thus Ultra Upscale was born. 1. 5 has better community fine tuning. However, the SDXL refiner obviously doesn't work with SD1. Overall I think portraits look better with SDXL and that the people look less like plastic dolls or photographed by an amateur. SDXL is the best one to get a base image imo, and later I just use Img2Img with other model to hiresfix it. 51 denoising. 5GB vram and swapping refiner too , use --medvram-sdxl flag when starting. Plenty of great SD1. It essentially replaces my old, of much worse quality, LoHa. I get that good vibe, like discovering Stable Diffusion all over again. LDSR might in this family, its SLOW. The asphalt on SD3 was the first thing that I noticed a big improvement upon, but not only that, on SDXL the shadow under the car is too dark, uncanny if anything, and it doesn't feel like the car is placed on the asphalt properly, kinda like in games where the characters feel floaty or like they are weightless due to how they step ont he ground and move, SD3 in comparison looks too Welcome to the unofficial ComfyUI subreddit. 5x which makes it a lot crisper BUT takes way longer to get a final image (maybe there is a way to only upscale twice but get It´s actually possible to add an upscaler like 4xUltrasharp to the workflow and upscale your images from 512x512 to 2048x2048, and it´s still blazingly fast. Then I combine it with a combination of either Depth, Canny and OpenPose ControlNets. 5 based models are often useful for adding detail during upscaling(do a txt2img+ControlNet tile resample+colorfix, or high denoising img2img with tile resample for the most detail). I saw in some post that some people do it iterativelly or mixing many samplers but I don't understand much how to do that. 5 - 2. I was wondering what the best way to upscale sdxl images are now? With 1. right now my workflow testing is ranging around 2x-2. 
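The comments so far quote a mix of samplers and schedules (Karras vs. normal schedules, Euler, various DPM variants) for the upscale pass. If you want to settle it for your own images, the cheapest way is to hold the seed, prompt, and denoise fixed and swap only the scheduler. A rough sketch with diffusers; the three schedulers chosen here are just examples, and the input file, strength, and step count are illustrative.

```python
import torch
from PIL import Image
from diffusers import (
    StableDiffusionXLImg2ImgPipeline,
    DPMSolverMultistepScheduler,
    EulerDiscreteScheduler,
    UniPCMultistepScheduler,
)

pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

schedulers = {
    "dpmpp_2m_karras": DPMSolverMultistepScheduler.from_config(
        pipe.scheduler.config, use_karras_sigmas=True
    ),
    "euler": EulerDiscreteScheduler.from_config(pipe.scheduler.config),
    "unipc": UniPCMultistepScheduler.from_config(pipe.scheduler.config),
}

source = Image.open("base_1024.png").resize((2048, 2048), Image.LANCZOS)
seed = 12345  # fixed seed so only the scheduler changes between runs

for name, sched in schedulers.items():
    pipe.scheduler = sched
    out = pipe(
        prompt="same prompt as the base render",
        image=source,
        strength=0.3,
        num_inference_steps=30,
        generator=torch.Generator("cuda").manual_seed(seed),
    ).images[0]
    out.save(f"upscale_pass_{name}.png")
```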
Best I could achieve was trying to regenerate the similar image from the same seed using a higher starting resolution and hi-res fix, which appeared to make it seamless, just slightly different composition. It´s not perfect, but being able to generate a high-quality picture like this in under a second, or almost instantly, is mind-boggling. 9 , euler Does anyone have any suggestions, would it be better to do an iterative upscale, or how about my choice of upscale model? I have almost 20 different upscale models, and I really have no idea which might be best. Posted by u/Striking-Long-2960 - 102 votes and 24 comments SDXL is the newer base model for stable diffusion; compared to the previous models it generates at a higher resolution and produces much less body-horror, and I find it seems to follow prompts a lot better and provide more consistency for the same prompt. Steps wise: SDXL for example tends to like higher step counts, up to a point, than 1. 5 ? Can I use smaller or bigger tile size ? I added a switch toggle for the group on the right. I can regenerate the image and use latent upscaling if thatā€™s the best way Iā€™m struggling to find If you're using SDXL, you can try experimenting and seeing which LoRAs can achieve a similar effect. ) Please see the original 512px image: Original 512px 4x_UltraSharp upscale to 1536x1536px ESRGAN_4x upscale to 1536x1536px Tried it with SDXL-base and SDXL-Turbo. If it's the best way to install control net because when I tried manually doing it . 3 GB VRAM via OneTrainer - Both U-NET and Text Encoder 1 is trained - Compared 14 GB config vs slower 10. Opinion: Too expensive and sometimes ignores some part of the input SergeSDXL . Itā€™s a very specific list of values. 5. 3 usually gives you the best results. Under 4K: generate base SDXL size with extras like character models or control nets -> face / hand / manual area inpainting with differential diffusion -> Ultrasharp 4x -> unsampler -> second ksampler with a mixture of inpaint and tile controlnet (I For latent upscalers you need at least 0. 5 and a lot better at camera terms (when it comes to later checkpoints anyway). 20K subscribers in the comfyui community. Load the upscaled image to the workflow, use ComfyShop to draw a mask and inpaint. The first image is 1024x1024 from sdxl then 1. The image is probably quite nice now, but it's not huge yet. My current workflow to generate decent pictures at upscale X4, with minor glitches. 5 models. If you have decent amounts of VRAM, before you go to an img2img based upscale like UltimateSDUpscale, you can do a txt2img based upscale by using ControlNet tile/or ControlNet inpaint, and regenerating your image at a higher resolution. So I spent 30 minutes, coming up with a workflow that would fix the faces by upscaling them (Roop in Auto1111 has it by default). 0 + Refiner) This is the image I created using ComfyUI, utilizing Dream ShaperXL 1. 5, I would just use img2img with the model used for the original gen until my GPU crapped out, and then use SD ultimate upscale for one more pass. 25, then up cfg scale 8 to 12 or so, then you will get a lot of micro details without random houses and people or whatever is in your prompt, play around with denoise, it thats too high, you will get floaties, if its too low the upscaling will look grainy. 
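Denoise strength for the upscale pass is the most argued-over number in the comments above, with suggestions ranging from roughly 0.2 up to about 0.5 depending on the method. Rather than guessing, it is cheap to sweep the value on a fixed seed and compare the results side by side. A small sketch; the pipeline, input image, and strength list are illustrative.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionXLImg2ImgPipeline

pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

source = Image.open("upscaled_2048.png")
strengths = [0.2, 0.3, 0.4, 0.5]
seed = 7

results = []
for s in strengths:
    img = pipe(
        prompt="same prompt as the original render",
        image=source,
        strength=s,
        num_inference_steps=30,
        generator=torch.Generator("cuda").manual_seed(seed),
    ).images[0]
    results.append(img)

# Paste the runs into one horizontal strip for side-by-side comparison.
w, h = results[0].size
sheet = Image.new("RGB", (w * len(results), h))
for i, img in enumerate(results):
    sheet.paste(img, (i * w, 0))
sheet.save("denoise_sweep.png")
```

Too high and you get the hallucinated extras people complain about; too low and the enlarged image stays soft and grainy.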
5 checkpoint, this NMKD model tends to lead to oversharpening and bitty textures, in Comparison of using ddim as base sampler and using different schedulers 25 steps on base model (left) and refiner (right) base model I believe the left one has more detail so back to testing comparison grid comparison between 24/30 Denoising strenght is important, you need to test what's the best for you. I haven't needed to. However, when upscaling Flux images with an SD1. Upscaler 2: 4x_foolhardy_Remacri (at 50% visibility) CodeFormer visibility: I originally didn't wanna use a model upscaler, but since the output of C is so small, i had to upscale by ~5. View community ranking In the Top 1% of largest communities on Reddit. SDXL models are always first pass for me now, but 1. 5, sometimes 2. . 8x resize in my upscaler). Of course I also know the sd upscaler and ultimate sd upscaler. An overview for Ultimate SD Upscaler, since upscaling tools are becoming widespead! TLDR: I think the winner is again "GoddessOfRealism Pony Beta", it has the most realistic lighting, and also best anatomy, including the wings, and prompt following. Iā€™ll create images at 1024 size and then will want to upscale them. 1, base SDXL is so well tuned already for coherency that most other fine-tune models are basically only adding a "style" to it. I wanted to know what difference they make, and they do! Credit to Sytan's SDXL workflow, which I reverse engineered, mostly because I'm new to ComfyUI and wanted to figure it all out. SDXL's photographer is more professional IMO. SDXL base was just as bad as SD3. 5x upscale with dreamshaper v8 so i have made it 768x768 in ultimate sd to have 4 uniform tiles but i have used other resolutions and SDXL's refiner and HiResFix are just Img2Img at their core ā€” so you can get this same result by taking the output from SDXL and running it through Img2Img with an SD v1. In SD 1. AutoModerator ā€¢ SDXL works very differently to SD 1. Reborn v2. You can expect ~20 seconds per image, 1024x1024, 30 steps, DPM++ 2M SDE Karras. SDXL - BEST Build + Upscaler + Steps Guide. For some context, I am trying to upscale images of an anime village, something like Ghibli style. Iā€™ve been meaning to make sure my SDXL workflow itself didnā€™t need fine tuning before I post something just like this. Reddit iOS Reddit Android Reddit Premium About Reddit Advertise Blog Careers Press. 5 models and I don't get good results with the upscalers either when using SD1. But i have also changed 2 settings, seam fix mode=none and seam fix denoise =0. 0 results. 0: Options: Can use prompt, positive and negative terms, style, and negative style. for your case, the target is 1920 x 1080, so initial recommended latent is 1344 x 768, then upscale it to 1. My current workflow involves going back and forth between a regional sampler, an upscaler, and Krita (for inpainting to fix errors & fill in the details) to refine the output iteratively. I can upscale up to x16, but it has many other features, such as light correction, denoise, deblur, image generator etc. Indeed SDXL it s better , but it s not yet mature, as models are just appearing for it and as loras the same. It seemed like a smaller tile would add more detail, and a larger tile would add less. This is now the next model in my "Zeitgeist" series of high-quality SDXL 1. 2 and 0. 
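A pattern that keeps coming up above: SDXL (or an SDXL fine-tune) for the first pass and composition, then a favourite SD1.5 checkpoint at low denoise to add fine detail during the upscale, since the refiner and HiRes fix are just img2img at their core. The hand-off is two pipelines; a hedged sketch, with the 1.5 checkpoint path as a placeholder for whatever model you actually use.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionXLPipeline, StableDiffusionImg2ImgPipeline

prompt = "portrait photo of a woman in a rain-soaked street, film grain"

# First pass: SDXL for composition and prompt-following.
sdxl = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
base = sdxl(prompt=prompt, width=1024, height=1024, num_inference_steps=30).images[0]
del sdxl
torch.cuda.empty_cache()  # free VRAM before loading the 1.5 checkpoint

# Second pass: an SD1.5 checkpoint at low denoise to add fine detail while upscaling.
sd15 = StableDiffusionImg2ImgPipeline.from_single_file(
    "path/to/your-sd15-checkpoint.safetensors",  # placeholder: your favourite 1.5 checkpoint
    torch_dtype=torch.float16,
).to("cuda")
detailed = sd15(
    prompt=prompt,
    image=base.resize((1536, 1536), Image.LANCZOS),
    strength=0.35,
    num_inference_steps=30,
).images[0]
detailed.save("sdxl_then_15_detail.png")
```

For bigger jumps you would normally combine this with a tiled pass and/or a tile ControlNet rather than a single full-frame img2img, since SD1.5 starts duplicating subjects well above its native resolution.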
It takes less than 5 minutes with my 8GB VRAM GC: Generate with txt2img, for example: prompt : A beautiful painting of a singular lighthouse, shining its light across a tumultuous sea of blood by greg rutkowski and thomas kinkade, Trending on artstation I've been using Blur with Ultimate SD Upscale to generate images up to 12K using SDXL. /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. This is enough for a SDXL model but I can't use ControlNet nor IP Adapter. So, I'd conclude that hires steps don't make the picture better or worse but allow you to upscale your images to a higher magnification. Best was to use sdxl Question - Help Hey, I use my 7900xtx to generate images with sdxl and now my question is, would it be better for me to use some Linux distro or rather Microsoft olive to get the most out of my GPU? Hi guys, today Stability Inc released their new SDXL Turbo model that can inference an image in as little as 1 step. 21K subscribers in the comfyui community. Upscale (I go for 1848x1848 since this somehow results from a 1. Best method to upscale faces after doing a faceswap with reactor It's a 128px model so the output faces after faceswapping is blurry and low res. I have 2 galleries of SD15 in civitai that you can check out if you want, The reason was that the SD Ultimate Upscale workflow I used had a really good 2k Upscale step but the 2nd 4K Upscale step gave bad results so I replaced the 4K step with SUPIR. 9, end_percent 0. Albedo too. In my opinion the best results I got for image upscale with https://deep-image. 25-1. Ultimate sd upscale is the best for me, you can use it with controlnet tile in SD 1. viralspaceshare. 8125x LSDR upscale picture of a young influencer woman, close up shot!, candid pose!, caught off guard!, unique camera angle, stoic model pose, depth of field, overcast morning sky, natural color I ask because after Kohya's deepshrink fix became available, I haven't done any upscaling at all in A1111 or Comfy. Nice, it seems like a very neat workflow and produces some nice images. It didn't work out. Edit: you could try the workflow to see it for yourself. 9 model to act as an upscaler. If it wasnā€™t for the licensing issues I would still have hope for future SD3 model releases Be respectful and follow Reddit's Content Policy This Subreddit is a place The 4X-NMKD-Superscale-SP_178000_G model has always been my favorite for upscaling SD1. This is the concept: Generate your usual 1024x1024 Image. 4 Denoise with Karras scheduler. 447 downscale from 4x upscale model) for reaching 1600 x 2000 resolution basicaly, from target final resolution, it will gives information: what SDXL ratio and resolution I should choose. comments sorted by Best Top New Controversial Q&A Add a Comment. The need arises from the fact that in the last training I did at 1024x1024, the resolution I obtain is too small, and I can't find any way to upscale without introducing visible artifacts cause is a concept that SDXL never saw before. This is done after the refined image is upscaled and encoded into a latent. There is no best at everything option IMO. 0 IMO is the best one on neutral area. 0 Base SDXL 1. Please keep posted images SFW. One recommendation I saw a long time ago was to use a tile width that matched the width of the upscaled output. 
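Several comments above hit the blurry, low-resolution faces you get after a Roop/Reactor swap (the swap model works at 128px). The usual fix, and roughly what the "face detailer" style nodes do, is: crop the face with some margin, enlarge the crop, run a low-denoise img2img pass on just that crop, then paste it back through a soft mask. A rough sketch of that loop; the face box is assumed to come from whatever detector you already use, and `img2img` stands in for a diffusers-style img2img pipeline.

```python
from PIL import Image, ImageFilter

def refine_face(image, face_box, img2img, prompt, work_size=1024, margin=0.4, feather=16):
    """Crop a face with margin, img2img it at low denoise, and paste it back softly.

    face_box = (left, top, right, bottom) from whatever face detector you already use;
    img2img  = a callable like the diffusers img2img pipelines in the earlier snippets.
    """
    l, t, r, b = face_box
    mw, mh = int((r - l) * margin), int((b - t) * margin)
    box = (max(0, l - mw), max(0, t - mh), min(image.width, r + mw), min(image.height, b + mh))
    crop = image.crop(box)

    # Blow the face up to a resolution the model is comfortable with, then repaint lightly.
    big = crop.resize((work_size, work_size), Image.LANCZOS)
    fixed = img2img(prompt=prompt, image=big, strength=0.3, num_inference_steps=30).images[0]
    fixed = fixed.resize(crop.size, Image.LANCZOS)

    # Feathered mask so the repainted crop blends into the surrounding image.
    mask = Image.new("L", crop.size, 0)
    mask.paste(255, (feather, feather, crop.size[0] - feather, crop.size[1] - feather))
    mask = mask.filter(ImageFilter.GaussianBlur(feather))

    out = image.copy()
    out.paste(fixed, box[:2], mask)
    return out
```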
Reply reply More replies More replies More replies More replies To replicate this, I usually go to extra to upscale scale by 3. Works with SDXL, SDXL Turbo as well as earlier version like SD1. I have switched over to the Ultimate SD Upscale as well and it works the same for the most part, only with better results. That also explain why SDXL Niji SE is so different. 5 still has better fine details. 5 is still more flexible with all resources available but sdxl comes with a cost that most of you people hadnt noticed it yet, the greater the quality of model is the lesser its prompt understanding cpabilities become, albedoxl cannot No problem, I think when ControlNet is compatible with SDXL you will be able to upscale and get it even more water color like, as SDXL is really good at styles like water colors. johnkoubeck ā€¢ The best result I have gotten so far is from the regional sampler from Impact Pack, but it doesn't support SDE or UniPC samplers, unfortunately. 6 denoise and either: Cnet strength 0. 4 for denoise for the original SD Upscale. personally, I won't suggest to use arbitary initial resolution, it's a long topic in itself, but the point is, we should stick to recommended resolution from SDXL training resolution (taken from SDXL paper). My workflow is more: generate images at a smaller size, like 512x384 once you have a good prompt and/or seed, use hires fix to upscale in the txt2img tab (main thing Ultimate SD Upscale works fine with SDXL, but you should probably tweak the settings a little bit. Currently, my focus The rabbit hole is pretty darn deep. You gave good feedback in my post 2 days ago. Thanks If SDXL Could Use ControlNET Tiles, it Would be HUGEā€”Even Now, the Quality Difference is INSANE | Upscaling to 4K in SDXL vs SD 1. Best way to upscale with automatic 1111 1. 0. You can find the model under this link! I highly recommend using this model with an Hi guys, always thank you for the whole community. I'm currently running into certain prompts where latent just looks awful. After 2 days of testing, I found Ultimate SD Upscale to be detrimental here. 429x. Nah, gave up trying to upscale the images I had previously made. In the comfy UI manager select install model and the scroll down to see the control net models download the 2nd control net tile model(it specifically says in the description that you need this for tile upscale). 0 For custom needs Juggernaut is very good. IMHO the best use case for Latent upscalers is in highres fix, with a reasonable upscale (1. Any idea? (Already tried SIAX_200K, good details but adds too much texture/noise to the image. sd 1. It needs to be an XL based Lora if you use it anywhere to the left of the upscaler. Please share your tips, tricks, and Thanks for getting this out, and for clearing everything up. But both of them has stopped updating. 5 or SDXL images using SD1. For training in SDXL and make the inferencing later: SDXL 1. I can't find the original of the one I posted above but this one is a parent/child of the prompt trail I wandered down - bits of the prompt have been ignored, 3 women are named but the intent was to morph their features rather 3 women. It's really cool, but unfortunately really limited currently as it has coherency issues and is "native" at only 512 x 512. Welcome to the unofficial ComfyUI subreddit. Iā€™ve also tried just the Extras - Upscale method which doesnā€™t seem as I have good results with SDXL models, SDXL refiner and most 4x upscalers. 
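The advice above to start from one of SDXL's trained, roughly one-megapixel aspect-ratio resolutions and then work out how much upscale you need for your target is easy to automate. A small helper in that spirit; the bucket list below is a commonly cited set of SDXL training resolutions, and the exact list varies by source, so treat it as illustrative.

```python
# Commonly cited SDXL training buckets (~1 megapixel each); swap in your own list if you prefer.
SDXL_BUCKETS = [
    (1024, 1024), (1152, 896), (896, 1152), (1216, 832), (832, 1216),
    (1344, 768), (768, 1344), (1536, 640), (640, 1536),
]

def pick_bucket(target_w: int, target_h: int):
    """Pick the bucket whose aspect ratio is closest to the target, plus the upscale factor."""
    target_ar = target_w / target_h
    w, h = min(SDXL_BUCKETS, key=lambda wh: abs(wh[0] / wh[1] - target_ar))
    # Scale so the generated image covers the target; crop the overshoot afterwards.
    factor = max(target_w / w, target_h / h)
    return (w, h), round(factor, 3)

print(pick_bucket(1920, 1080))  # ((1344, 768), 1.429) - the example that comes up in these threads
print(pick_bucket(1080, 2800))  # tall portrait target mentioned above -> (640, 1536) plus its factor
```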
Basically if i find the sdxl turbo preview close enough to what i have in mind, i 1click the group toggle node and i use the normal sdxl model to iterate on sdxl turbos result, effectively iterating I appreciate the improvement in prompt understanding that SDXL has, but achieving images with fine details that look spontaneous and natural requires a lot of effort. Bumping the steps up to 35 gives either a catastrophic failure, or absolutely perfect skin. SD 1. 5, euler, sgm_uniform or CNet strength 0. I'm about to downvote it too. With the hype around Magnific AI, I found myself drawn to achieving similar results. Although if you fantasize, you can imagine a system with a star much larger than the Sun, which at the end of its life cycle will not swell into a red giant (as will happen with the Sun), but will begin to collapse before exploding as a supernova, and this is precisely this moment, in an instant before I was always told to use cfg:10 and between 0. comment sorted by Best Top New Controversial Q&A Add a Comment. 5, just I still see posts here claiming SD1. Try different VAEs; I was having a similar issue earlier and the VAE seems to make a huge difference. 5 and then after upscale and facefix, you ll be surprised how much change that was Ultimate SD upscale - what is best tile size ? 1024 for SDXL and 512 for 1. Footnotes: [1] The SD upscale used conservative settings (low CFG scale, low denoising, 20 steps) (dont recall if I used Euler or For example using WebUI, it is best to generate small 512x512 images, then upscale the one you like best. My favorites are #9 and #10. The "Upscaler 2" is if you want a second version of your upscale done at the same time unless it combines the two and I'm unware of that. I like to create images like that one: end result. It is tuning for Anime like images, which TBH is kind of bland for base SDXL because it was tuned mostly for non Basically, I want a simple workflow (with as few custom nodes as possible) that uses an SDXL checkpoint to create an initial image and then passes that to a separate "upscale" section that uses a SD1. A normal plain upscaler can interpolate and kind of guess details, but won't create new Recently other than SDXL, I just use Juggernaut and DreamShaper, Juggernaut is for realistic, but it can handle basically anything, DreamShaper excels in artistic styles, but also can handle anything else well. 5x hiresfix, into img2img sd upscale at 1. Iā€™ve just been using Hires Fix, and am totally unfamiliar with the first two. A while back while testing some LoRAs on these MJ images I made, I noticed that the first 22 votes, 25 comments. I'm creating some cool images with some SD1. And bump the mask blur to 20 to help with seams. 5 model (I set at 0. 25, 1. Use -The main focus of the tutorial is to demonstrate how to achieve amazing results with the SDXL model by using the best settings, upscaling image quality, Best AI Image Upscaler 2024 - Leonardo Vs Krea AI Image Upscaler. I'm revising the workflow below to include a non-latent option. Hello! How are people upscaling SDXL? Iā€™m looking to upscale to 4k and probably 8k even. Not ragging on SD1. I'm mostly just unsure of the best scales to use. So I recommend SDXL. 786 x upscale (or using 0. 5x-2x, and also sometimes img2img sd upscaling a third time at 1. The reason why one might end up thinking that's best for Exactly, why are you testing with loras and ipadapters and other stuff that inherently overwrites/adds to the models capabilities? 
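The workflow described at the top of this block, generating a quick SDXL Turbo preview and only letting a normal SDXL model iterate on it once the composition looks right, maps cleanly onto two diffusers pipelines. A sketch; the prompt, resize, and strength are placeholders to tune.

```python
import torch
from PIL import Image
from diffusers import AutoPipelineForText2Image, StableDiffusionXLImg2ImgPipeline

prompt = "isometric diorama of a tiny harbour town at dusk"

# 1-step Turbo draft: near-instant, good enough to judge the composition.
turbo = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16
).to("cuda")
draft = turbo(prompt=prompt, num_inference_steps=1, guidance_scale=0.0).images[0]
draft.save("turbo_draft.png")

# Once the draft looks right, hand it to a full SDXL checkpoint to refine and enlarge.
refine = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
final = refine(
    prompt=prompt,
    image=draft.resize((1024, 1024), Image.LANCZOS),  # sdxl-turbo outputs 512x512 by default
    strength=0.5,  # high enough to add detail, low enough to keep the draft's layout
    num_inference_steps=30,
).images[0]
final.save("refined_from_turbo.png")
```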
Test with just prompt, or if you need control of the face angle - controlnet, but only one that uses facial landmarks (openpose) and nothing else, not even negative embeds (or if any, one negative one, same for all models). Unlike SD1. Same with SDXL, you can use any two SDXL models as the base model and refiner pair. 5 is a much lighter load on hardware. 4 works best. 4. 5, SDXL base is already "fine-tuned", so training most LoRA on it should not be any harder than training on a specific model. Using that image through unsampler, if i skipped steps like you're supposed to do, blocky artifacts were appearing because the original image was so pixelated. 7-0. Most I had never tried before. A 0 won't change the image at all, and a 1 will replace it completely. I built a ComfyUI workflow that is based on Stable Cascade model which produces very nicely composed images and the added upscaling workflow this will result in larger upsized image using SDXL model. 5/2) then img2img the new image). Also, a few comments were saying it could be integrated into the Img2Img upscale workflow that many already use to make larger images. Okay. 2. It doesn't turn out well with my hands, unlucky. 28 votes, 15 comments. 5 from the usual 1280x1024, upscaling this with 3. 5 and SD2. 2024-03-29 18:40:00. Reply reply SDXL is significantly better at prompt comprehension, and image composition, but 1. My understanding is that SUPIR isn't really an upscaler Thought I'd mention it since I haven't seen it discussed anywhere and googling "SDXL 1024x576" yields 8(!) results. Uses both base and refiner models, 2 upscale models, and the VAE model, 22 votes, 25 comments. 0 and ran the following prompt in a SDXL Lightning - 1. I can't seem to make this new CN Tile work with Ultimate SD Upscaler in ComfyUI. Leonardo AI's Universal Upscaler: How-To Guide. Reactor has built in codeformer and GFPGAN, but all the advice I've read said to avoid them. true. how much upscale it needs to reach my target res. 5 models that will suit a range of use cases. 3, no added noise or other changes. Please share your tips, tricks, and workflows for using this software to create your AI art. Instead, I use Tiled KSampler with 0. Thanks, I thought I was going crazy when I did my own testing. 12gb is enough for a SDXL model, some LoRAs and, most importantly, ControlNet and IP Adapter all used together. 50 steps of DPM++ SDE Karras CFG 7 2. 5 checkpoint, this NMKD TLDR: Best settings for SDXL are as follows. I agree with your comment, but my goal was not to make a scientifically realistic picture. Delete the other 2. What methods are ppl using to say create 4k+ resolutions? I have some questions about the best way to get detailed SDXL upscales that are sharp and crisp without rendering too many artifacts. I understand that I can get the CN Tile to work with a KSampler (non-upscale), but our goal has always been to be able to use it with the Ultimate SD Upscaler like we used the 1. I've seen best result ranging anywhere from 40 to maybe 100 or 120 steps. it should have total (approx) 1M pixel for initial resolution. I just generate my base image at 2048x2048 or higher, and if I need to upscale the image, I run it through Topaz video AI to 4K and up. 
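Several comments above rely on the SD1.5 tile ControlNet to keep an upscale pass faithful to the source, whether through Ultimate SD Upscale or a plain img2img. In diffusers the closest equivalent is the ControlNet img2img pipeline with the tile model; a hedged sketch, where the checkpoint ID is a placeholder for the 1.5 model you actually use and the strength/conditioning values are just starting points.

```python
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11f1e_sd15_tile", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder; substitute the 1.5 checkpoint you actually use
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

source = Image.open("sdxl_render_1024.png")
target = source.resize((2048, 2048), Image.LANCZOS)

result = pipe(
    prompt="same prompt as the original render, highly detailed",
    image=target,          # img2img input
    control_image=target,  # the tile ControlNet sees the same image it is refining
    strength=0.4,          # tile conditioning lets you push denoise higher than usual
    controlnet_conditioning_scale=1.0,
    num_inference_steps=30,
).images[0]
result.save("tile_controlnet_upscale.png")
```

As noted above, there is no equally established tile ControlNet for SDXL, which is why many workflows still drop back to 1.5 for this step.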
Give an upscaler model an image of a person with super smooth skin and it will output a higher resolution picture of smooth skin, but give that image to a ksampler (using a low /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. Blue Pencil the same but more oriented to anime. Set the tiles to 1024x1024 (or your sdxl resolution) and set the tile padding to 128. Unless you going to downscale the final result, do not use a sharp upscaler (ie LDSR, ESRGAN, S) as the final step. I don't know if it works in Automatic1111 or not. How many you need is extremely dependent on your subject matter, how many LORAs you use, the complexity of the prompt, the resolution, and a ton of different It's 100% depending on what you are upscaling and what you want it to look like when done. I made a preview of each step to see how the image changes itself after sdxl to sd1. It already has Ultimate Upscaler but I don't like the results very much. It's hard to suggest feedback without knowing your workflow, image style is a big factor that will determine the best upscaler and workflow. Fooocus borrows ideas from Midjourneyā€™s ease of use with an open source philosophy, backed by SDXL rendering. 0 Alpha + SD XL Refiner 1. I have yet to find an upscaler that can outperform the proteus model. Increase the hires steps to 75, and you can increase the upscale to 2 before you will run out of memory. Is there any workflow prefered when dealing with SDXL/ comfyUI? you can filter civitai for SDXL. So, at least for SDXL, training with base SDXL is the right choice most of the time. Most SDXL fine-tuned are tuned for photo style images anyway, so not that many new concepts added. I use ESRGAN to upscale, while it greatly increase the resolution the final image still looks a bit "noisy". I think youā€™d just pipe the latent image into the sampler node that receives the sdxl model? Comfyui SDXL upscaler / hires fix Sorry for the possibly repetitive question, but I wanted to get an image with a resolution of 1080x2800, while the original image is generated as 832x1216. (Thereā€™s custom nodes for pretty much everything, including ADetailer. Here's a link to The List, I tried lots of them but wasn't looking for anime specific results and haven't really tried upscaling too many anime pics yet. 114 votes, 43 comments. Setup is simple. In fact is the one I end using in 90% time. 0 3x ultimate sd upscaler denoise comparison. Reddit posts that are chosen to be on the channel will be given a shoutout in the YouTube video. Affen_Brot its taking only 7. But in SDXL, The right upscaler will always depend on the model and style of image you are generating; I'm sure this has been done to death, but here is a comparison of the different upscalers for some wants-to-be-photorealistic content. 3 GB Config - More Info In Comments Thereā€™s a custom node that basically acts as Ultimate SD Upscale. But for upscale in my opinion I got best results with Deep-Image. With it, I either can't get rid of visible seams, or the image is too constrained by low denoise and so lacks detail. Models I've tried and can confirm that this works great on: SDXL, JuggernautXL, NightvisionXL, ProtovisionXL IP Adapter, ControlNets, BlockWeighted Loras, AnimateDiff with prompt scheduling, audio analyzer, upscaler and interrpolator, Grid plotter, Upscaler, Detailer, Inpainter, LLM loader, LayeredDiffusion&LayeredComposer & Font2Img. 
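One comment above quotes its settings as "1024x1024, 30 steps, DPM++ 2M SDE Karras". If you are reproducing that outside a UI, the A1111/ComfyUI sampler names map onto diffusers scheduler options; as far as I know the closest configuration is the multistep DPM-Solver with the SDE variant and Karras sigmas, but treat the mapping as an assumption and compare outputs yourself.

```python
import torch
from diffusers import StableDiffusionXLPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Rough diffusers equivalent of "DPM++ 2M SDE Karras" (assumed mapping).
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config,
    algorithm_type="sde-dpmsolver++",  # the SDE variant of DPM-Solver++
    use_karras_sigmas=True,            # Karras noise schedule
)

image = pipe(
    prompt="studio photo of a vintage camera on a wooden desk",
    width=1024, height=1024,
    num_inference_steps=30,
    guidance_scale=6.0,
).images[0]
image.save("dpmpp_2m_sde_karras.png")
```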
5, using one of ESRGAN models usually gives a better result in Hires Fix. Not knowing what the correct way to use this was, I tried this out: Loaded up the new SDXL model 1. 2 to 0. I always get two images in my extras folder when I use both. 7, for non-latent upscalers you will get best results under 0. 0 for ComfyUI, which is free, uses the CCSR node and it can upscale 8x and 10x without even the need for any noise injection (assuming you don't want "creative upscaling"). I just make 5 at a time and pick the best. It's a solid and fast tool, I've been using it for 2 years now. I managed to make a very good workflow with IP-Adapter with regional masks and ControlNet and it's just missing a good upscale. ComfyUI SDXL upscaler / hires fix: Sorry for the possibly repetitive question, but I wanted to get an image with a resolution of 1080x2800, while the original image is generated as 832x1216. There is no question now that if you want the best models SDXL is where it's at. Yes, you can use whatever model you want when running img2img; the trick is how much you denoise. These are definitely some of the best SDXL images I've seen so far. I also see the Automatic1111 Ultimate SD Upscaler extension. "High budget" is from the SDXL style selector.
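Finally, the "make 5 at a time and pick the best" habit mentioned above, together with the earlier note about keeping the seed in the file name so a keeper can be reproduced, is a few lines of scripting:

```python
import random
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

prompt = "cinematic photo of a lighthouse in a storm"
for _ in range(5):
    seed = random.randint(0, 2**32 - 1)
    image = pipe(
        prompt=prompt,
        num_inference_steps=30,
        generator=torch.Generator("cuda").manual_seed(seed),
    ).images[0]
    # Seed in the file name, so the keeper can be regenerated or sent to an upscale pass later.
    image.save(f"batch_{seed}.png")
```

Whichever render survives the cull is the one you then feed into one of the upscale passes sketched above.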