ComfyUI workflow PNG (Reddit)
I spent around 15 hours playing around with Fooocus and ComfyUI, and I can't even say that I've started to understand the basics. (vid2vid made with a ComfyUI AnimateDiff workflow.)

The workflow JSON info is saved with the .png. The PNG files produced by ComfyUI contain all the workflow info. This works on all images generated by ComfyUI, unless the image was converted to a different format like JPG or WebP. Just the workflow, including the wildcard prompt, but not what the random prompt generated.

Flux Schnell is a distilled 4-step model.

I'm trying to do the same as hires fix, with a model and weight below 0.

Then I take another picture with a subject (like your problem), removing the background and making it IPAdapter-compatible (square), then prompting and IPAdapting it into a new one with the background.

Thank you very much! I understand that I have to put the downloaded JSONs into the custom nodes folder and load them from there.

Please share your tips, tricks, and workflows for using this software to create your AI art. A lot of people are just discovering this technology and want to show off what they created. And above all, BE NICE.

If that works out, you can start re-enabling your custom nodes until you find the bad one, or hopefully find that the problem resolved itself.

Here I just use: futuristic robotic iguana, extreme minimalism, white porcelain robot animal, details, build by Tesla, Tesla factory in the background. I'm not using breathtaking, professional, award winning, etc., because that's already handled by "sai-enhance".

I would like to further modify the ComfyUI workflow for the aforementioned "Portal" scene, in a way that lets me use single images in ControlNet the same way that repo does (by frame-labeled filename, etc.).
It'll create the workflow for you.

Thanks, I already have that, but I ran into the same issue I had earlier where the Load Image node is missing the Upload button. I fixed it earlier by doing Update All in Manager and then running the ComfyUI and Python dependencies batch files, but that hasn't worked this time, so I'm only going to be able to do prompts from text until I've figured it out.

A somewhat decent inpainting workflow in ComfyUI can be a pain in the ass to make. It took me hours to get one I'm more or less happy with, where I feather the mask (feather nodes usually don't work how I want them to, so I use mask2image, blur the image, then image2mask) and use 'only masked area' where it also applies to the ControlNet (applying it to the ControlNet was probably the worst part). A transparent PNG in the original size with only the newly inpainted part will be generated. Layer copy & paste this PNG on top of the original in your go-to image editing software.

SDXL 1.0 download links and new workflow PNG files - new updated free-tier Google Colab now auto-downloads SDXL 1.0 and refiner and installs ComfyUI.

Hello fellow ComfyUI users, this is my workflow for testing different methods to improve image resolution. Simply load / drag the PNG into ComfyUI and it will load the workflow. Not sure if my approach is correct or sound, but if you go to my other post - the one on just getting started - and download the PNG and throw it into ComfyUI, you'll see the node setup I sort of cobbled together. If you see a few red boxes, be sure to read the Questions section on the page. All workflows use base + refiner.

You can use remote.it.

I'm not sure which specifics you are asking about, but I use ComfyUI for the GUI and a custom workflow combining ControlNet inputs and multiple hires-fix steps.

You can find the Flux Schnell diffusion model weights here; this file should go in your ComfyUI/models/unet/ folder.

I dump the metadata for a PNG I really like: magick identify -verbose .\ComfyUI_01556_.png
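As a sketch of what `magick identify` is showing there: ComfyUI writes the graph into PNG tEXt chunks (commonly under the keywords "workflow" and "prompt", both JSON text), which a few lines of stdlib Python can extract. The helper below is illustrative, not ComfyUI code; the tiny hand-built PNG only stands in for a real output file.

```python
import json
import struct
import zlib

def png_text_chunks(data: bytes) -> dict:
    """Return the tEXt chunks of a PNG as a {keyword: text} dict.

    ComfyUI's frontend stores the editor graph under the 'workflow'
    keyword and the executed graph under 'prompt', both as JSON text.
    """
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    out = {}
    pos = 8
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, value = body.partition(b"\x00")
            out[key.decode("latin-1")] = value.decode("latin-1")
        pos += 12 + length  # 4 length + 4 type + body + 4 CRC
    return out

def _chunk(ctype: bytes, body: bytes) -> bytes:
    """Assemble one PNG chunk: length, type, body, CRC."""
    return (struct.pack(">I", len(body)) + ctype + body
            + struct.pack(">I", zlib.crc32(ctype + body)))

# Build a tiny 1x1 grayscale PNG with an embedded demo workflow,
# mimicking the chunk layout of a real ComfyUI output.
workflow = json.dumps({"nodes": []})
demo_png = (
    b"\x89PNG\r\n\x1a\n"
    + _chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
    + _chunk(b"tEXt", b"workflow\x00" + workflow.encode("latin-1"))
    + _chunk(b"IDAT", zlib.compress(b"\x00\x00"))
    + _chunk(b"IEND", b""))

print(json.loads(png_text_chunks(demo_png)["workflow"]))  # → {'nodes': []}
```

Converting the image to JPG or WebP discards these chunks, which is why the drag-and-drop trick only works on untouched PNGs.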
Not a specialist, just a knowledgeable beginner. Oh crap. But let me know if you need help replicating some of the concepts in my process. Please keep posted images SFW.

I'm currently running into certain prompts where the latent just looks awful. EDIT: WALKING BACK MY CLAIM THAT I DON'T NEED NON-LATENT UPSCALES.

Update ComfyUI and all your custom nodes first, and if the issue remains, disable all custom nodes except for the ComfyUI Manager and then test a vanilla default workflow.

I tend to agree with NexusStar: as opposed to having some uber-workflow thingie, it's easy enough to load specialised workflows just by dropping a workflow-embedded .PNG into ComfyUI. However, this can be clarified by reloading the workflow or by asking questions.

All posts must be open-source/local AI image generation related. Posts should be related to open-source and/or local AI image generation only.

Just started with ComfyUI and really love the drag and drop workflow feature. You can then load or drag the following image in ComfyUI to get the workflow. I'll do you one better, and send you a PNG you can directly load into Comfy.

First of all, sorry if this has been covered before; I did search and nothing came back. I had to place the image into a zip, because people have told me that Reddit strips .pngs of metadata.

To access your computer you can use Windows Remote Desktop and forward the TCP port using https://remote.it.

ComfyUI encapsulates the difficulties and idiosyncrasies of Python programming by breaking the problem down into units which are represented as nodes.

Download & drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow. So dragging an image made with Comfy onto the UI loads the entire workflow used to make it, which is awesome, but is there a way to make it load just the prompt info and keep my workflow otherwise?
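On the zip trick: a zip archive round-trips the file byte-for-byte, so the embedded workflow survives hosts that re-encode or strip bare images. A small sketch; the file names are made up and the stand-in bytes take the place of a real ComfyUI output:

```python
import zipfile

src = "ComfyUI_output.png"  # hypothetical output file for the demo
with open(src, "wb") as f:
    f.write(b"\x89PNG\r\n\x1a\n" + b"stand-in chunk bytes")

# ZIP_STORED archives the file without recompression; either way the
# extracted PNG is byte-identical, tEXt workflow chunks included.
with zipfile.ZipFile("workflow_share.zip", "w", zipfile.ZIP_STORED) as zf:
    zf.write(src)

with zipfile.ZipFile("workflow_share.zip") as zf:
    with open(src, "rb") as f:
        assert zf.read(src) == f.read()  # lossless round trip
```

The recipient unzips and drags the PNG into ComfyUI as usual; nothing in the archive step touches the metadata.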
/r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site.

The complete workflow you used to create an image is also saved in the file's metadata. This should import the complete workflow you used, even including unused nodes. It makes it really easy if you want to generate an image again with a small tweak, or just check how you generated something.

Second, if you're using ComfyUI, the SDXL invisible watermark is not applied.

It is not much of an inconvenience when I'm at my main PC.

If you mean workflows, they are embedded into the PNG files you generate; simply drag a PNG from your output folder onto the ComfyUI surface to restore the workflow. I noticed that ComfyUI is only able to load workflows saved with the "Save" button and not with the "Save API Format" button.

When I save my final PNG image out of ComfyUI, it automatically includes my ComfyUI data/prompts, etc., so that any image made from it, when dragged back into Comfy, sets ComfyUI back up with all the prompts and data just like the moment I created the original image. Once the final image is produced, I begin working with it in A1111, refining, photobashing in some features I wanted, and re-rendering with a second model, etc.

With remote.it you could port forward ComfyUI the same way.

From the ComfyUI_examples, there are two different 2-pass (hires fix) methods: one is latent scaling, one is non-latent scaling. Now there's also a `PatchModelAddDownscale` node.

Searge SDXL Update v2.1 for ComfyUI | now with LoRA, HiresFix, and better image quality | workflows for txt2img, img2img, and inpainting with SDXL 1.0.

These include Stable Diffusion and other platforms like Flux, AuraFlow, PixArt, etc.

The solution: to tackle this issue, with ChatGPT's help, I developed a Python-based solution that injects the metadata into the Photoshop-saved file (PNG).
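That injection step can be sketched with Pillow (an assumption on my part; the post doesn't name its library). `PngInfo` attaches tEXt entries at save time, here with a hypothetical minimal "workflow" payload and a blank stand-in image:

```python
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

workflow = {"nodes": [], "links": []}  # stand-in for a real ComfyUI graph

img = Image.new("RGB", (8, 8), "white")  # stand-in for the edited image
meta = PngInfo()
meta.add_text("workflow", json.dumps(workflow))
img.save("with_workflow.png", pnginfo=meta)

# Pillow exposes PNG text chunks on the .text mapping after reload,
# so we can confirm the metadata survived the save.
reloaded = Image.open("with_workflow.png")
assert json.loads(reloaded.text["workflow"]) == workflow
```

Re-saving an edited image this way restores the drag-to-load behavior that an export from Photoshop would otherwise lose.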
It is a simple way to compare these methods; it is a bit messy, as I have no artistic cell in my body.

ComfyUI is a completely different conceptual approach to generative art.

- How to upscale your images with ComfyUI
- Merge 2 images together with this ComfyUI workflow
- ControlNet Depth ComfyUI workflow: use ControlNet Depth to enhance your SDXL images
- Animation workflow: a great starting point for using AnimateDiff
- ControlNet workflow: a great starting point

You can then load or drag the following image in ComfyUI to get the workflow: Flux Schnell.

Share, discover, & run thousands of ComfyUI workflows.

Yup, all images generated in the main ComfyUI frontend have the workflow embedded into the image like that (right now anything that uses the ComfyUI API doesn't have that, though).

Insert the new image again into the workflow and inpaint something else; rinse and repeat until you lose interest :-)

The image itself was supposed to be the workflow PNG, but I heard Reddit is stripping the metadata from it. If I drag and drop the image, it is supposed to load the workflow? I also extracted the workflow from its metadata and tried to load it, but it doesn't load.

Please DO NOT post any feral, IRL selfies, self-made art (unless permission is granted), porn links, or random spam.

It works by converting your workflow .json files into an executable Python script that can run without launching the ComfyUI server.

Dragging a generated PNG onto the webpage or loading one will give you the full workflow, including the seeds that were used to create it.

Belittling their efforts will get you banned.

An example of the images you can generate with this workflow:

Hi everyone, I've been using SD / ComfyUI for a few weeks now and I find myself overwhelmed with the number of ways to do upscaling.
To download the workflow, go to the website linked at the top, save the image of the workflow, and drag it into ComfyUI. If necessary, updates of the workflow will be made available on GitHub.

Actually, there is a better way to access your computer and ComfyUI.

I was confused by the fact that I saw in several YouTube videos by Sebastian Kamph and Olivio Sarikas that they simply drop PNGs into the empty ComfyUI. The problem I'm having is that Reddit strips this information out of the PNG files when I try to upload them. My only current issue is as follows.

Welcome to the unofficial ComfyUI subreddit.

Potential use cases include: streamlining the process for creating a lean app or pipeline deployment that uses a ComfyUI workflow, and creating programmatic experiments for various prompt/parameter values.

The workflow JSON info is saved with the .png. If you really want the JSON, you can save it after loading the PNG into ComfyUI. You can save the workflow as a JSON file with the queue control panel's "Save" workflow button.

Comparisons and discussions across different platforms are encouraged.

This topic aims to answer what I believe would be the first questions an A1111 user might have about Comfy.

I generated images from ComfyUI. The test image was a crystal in a glass jar. Here you can see random noise that is concentrated around the edges of the objects in the image. I can load the default and just render that jar again … but it still saves the wrong workflow. There is no version of the generated prompt.

Below is my XL Turbo workflow, which includes a lot of toggles and focuses on latent upscaling.
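Beyond saving the JSON by hand, ComfyUI also exposes an HTTP queue endpoint that accepts the "Save (API Format)" export (the plain "Save" JSON is the editor's graph layout and is not what the endpoint expects). A hedged sketch that only builds the request; the node graph is a made-up stand-in, and the address assumes the default local server on port 8188:

```python
import json
import urllib.request

# Stand-in graph; a real one comes from "Save (API Format)".
graph = {"3": {"class_type": "KSampler", "inputs": {"seed": 42}}}

def queue_prompt_request(graph, host="127.0.0.1", port=8188):
    """Build the POST request for ComfyUI's /prompt queue endpoint."""
    return urllib.request.Request(
        f"http://{host}:{port}/prompt",
        data=json.dumps({"prompt": graph}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = queue_prompt_request(graph)
print(req.get_method(), req.full_url)  # → POST http://127.0.0.1:8188/prompt

# With a local server running, you would actually send it:
#   with urllib.request.urlopen(req) as resp:
#       print(json.load(resp))  # includes a prompt_id on success
```

This is the same mechanism the workflow-to-Python converters lean on when they drive a running server instead of reimplementing the nodes.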
You can also easily upload & share your own ComfyUI workflows, so that others can build on top of them! :) Why I built this: I just started learning ComfyUI, and really like how it saves the workflow info within each image it generates. This makes it potentially very convenient to share workflows with others.

I use a Google Colab VM to run ComfyUI. So every time I reconnect, I have to load a pre-saved workflow to continue where I started. When I'm doing it from a work PC or a tablet, it is an inconvenience to obtain my previous workflow. If you need help, just let me know.

I compared the 0.9 and 1.0 VAEs in ComfyUI.

I would like to use that in tandem with an existing workflow I have that uses QR Code Monster, which animates traversal of the portal.

Just as an experiment, drag and drop one of the PNG files you have outputted into ComfyUI and see what happens. Loading a PNG to see its workflow is a lifesaver to start understanding the workflow GUI, but it's not nearly enough.

Hi all! Was wondering, is there any way to load an image into ComfyUI and read the generation data from it? I know dragging the image into ComfyUI loads the entire workflow, but I was hoping I could load an image and have a node read the generation data like prompts, steps, sampler, etc.

More to come.

You can use () to change the emphasis of a word or phrase, like: (good code:1.2) or (bad code:0.8).

I'm revising the workflow below to include a non-latent option.

I've mostly played around with photorealistic stuff and can make some pretty faces, but whenever I try to put a pretty face on a body in a pose or a situation, I…

I put together a workflow doing something similar, but taking a background and removing the subject, inpainting the area so I got no subject. … 1.5 from 512x512 to 2048x2048.

I tried to find either of those two examples, but I have so many damn images I couldn't find them.

remote.it lets you port forward up to five ports on the free plan.
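To make the (word:weight) emphasis syntax concrete, here is a toy parser for the flat, single-level case; real prompt parsers also handle nesting and escaped parentheses, which this sketch deliberately ignores:

```python
import re

# Matches flat "(phrase:1.2)" groups; nesting and escapes are ignored.
EMPHASIS = re.compile(r"\(([^():]+):([0-9.]+)\)")

def emphasis_weights(prompt: str) -> dict:
    """Return {phrase: weight} for each weighted group in the prompt."""
    return {m.group(1): float(m.group(2)) for m in EMPHASIS.finditer(prompt)}

print(emphasis_weights("(good code:1.2) or (bad code:0.8)"))
# → {'good code': 1.2, 'bad code': 0.8}
```

Weights above 1.0 push the sampler toward a phrase; weights below 1.0 de-emphasize it.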
This is a subreddit for the discussion, and posting, of AI-generated furry content.

You can simply open that image in ComfyUI, or simply drag and drop it onto your workflow canvas. The metadata from PNG files saved from ComfyUI should transfer over to other ComfyUI environments.

My workflow where you can choose an image (or several) from the batch and upscale them.

Image generated with my new, hopefully upcoming "Instantly Transfer Face By Using IP-Adapter-FaceID: Full Tutorial & GUI For Windows, RunPod & Kaggle" tutorial and web app.

This workflow is entirely put together by me, using the ComfyUI interface and various open-source nodes that people have added to it. …and spit it out in some shape or form.

A quick question for people with more experience with ComfyUI than me: I'm getting an issue where whatever I generate, a bogus workflow I used a few days ago is saved … and when I try to load the PNG, it brings up the wrong workflow, and fails to render anything if I hit Queue. I have also experienced that ComfyUI has lost individual cable connections for no comprehensible reason, or nodes have not worked until they were replaced by the same node with the same wiring.

Also, if this is new and exciting to you, feel free to post.

Save one of the images and drag and drop it onto the ComfyUI interface.

Instead, I created a simplified 2048x2048 workflow. Again, I got the difference between the images and increased the contrast. Save the new image.

The one I've been mucking around with includes poses (from OpenPose) now, and I'm going to off-screen all nodes that I don't actually change parameters on.

Searge SDXL Update v2.1.

SDXL 1.0 ComfyUI Tutorial - readme file updated with SDXL 1.0.
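The "got the difference between the images and increased the contrast" comparison step can be sketched with Pillow (an assumption; the poster doesn't name a tool). `ImageChops.difference` gives the per-pixel delta and `autocontrast` stretches it so faint noise becomes visible; the two 4x4 frames here are stand-ins for the original and the upscale:

```python
from PIL import Image, ImageChops, ImageOps

# Stand-in frames; in practice: the original image and the upscale
# resized back to the original's dimensions so they can be compared.
a = Image.new("L", (4, 4), 100)
b = a.copy()
b.putpixel((0, 0), 160)  # one pixel differs by 60

diff = ImageChops.difference(a, b)     # per-pixel absolute difference
boosted = ImageOps.autocontrast(diff)  # stretch range to the full 0..255

print(diff.getpixel((0, 0)), boosted.getpixel((0, 0)))  # → 60 255
```

Bright regions in the boosted image show where the two renders disagree, which is exactly the edge noise described above.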
However, I may be starting to grasp the interface.

My actual workflow file is a little messed up at the moment. I don't like sharing workflow files that people can't understand; my process is a bit particular to my needs, and the whole power of ComfyUI is for you to create something that fits your needs.

Anyone ever deal with this? This missing metadata can include important workflow information, particularly when using Stable Diffusion or ComfyUI.