
Thread: AI image editor?

    Re: AI image editor?

    I use the Automatic1111 webui (run locally) for most things. You can use it in inpaint mode or img2img mode to do substitutions or local edits. I'd also recommend the Tile and Inpaint ControlNets. If you have two images you want to bash together but they'd have an ugly border or mismatched lighting or whatever, you can make the composite by hand and then run that through img2img with denoise strength around 0.5 to 0.75 and the Tile ControlNet set to around 0.25 to 0.5. That has a good shot at removing the stuff that's overtly wrong while basically preserving your image content. Lower the denoise and increase the ControlNet strength to better preserve your image (but risk also preserving the things about it you want to fix).
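    In case it helps, here's a rough sketch of that composite-cleanup pass done through the webui's API instead of the UI. The endpoint is /sdapi/v1/img2img; the ControlNet fields follow the ControlNet extension's API format as I remember it, and the model name is just a typical SD1.5 tile model, so substitute whatever your install actually lists (check your local /docs page):

```python
# Rough sketch (untested against your setup): composite cleanup via the
# Automatic1111 img2img API with the Tile ControlNet.
import base64
import requests

def b64(path):
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

composite = b64("hand_made_composite.png")  # the ugly hand-bashed composite

payload = {
    "init_images": [composite],
    "prompt": "a knight standing in a forest clearing",  # describe the scene you want
    "denoising_strength": 0.6,   # 0.5-0.75: higher repaints more aggressively
    "steps": 30,
    "alwayson_scripts": {
        "controlnet": {
            "args": [{
                "input_image": composite,
                "module": "tile_resample",            # Tile preprocessor
                "model": "control_v11f1e_sd15_tile",  # your installed tile model
                "weight": 0.35,                       # 0.25-0.5: higher sticks closer to the composite
            }]
        }
    },
}

r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
with open("cleaned_composite.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))
```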

    For generating something to substitute for an existing part of the image, use inpaint mode and mask out what you want to change. Use the Inpaint ControlNet at 1.0, denoise at 1.0, and the global-harmonious setting in the preprocessor, and if you can afford it, regenerate the whole image rather than just the masked area. That makes it more likely that lighting etc. stays consistent. You'll probably want to set 'latent noise' rather than 'original' or 'fill' for the masked content, or you might just end up erasing whatever you masked rather than generating something new there. That said, if the thing in the image is already very close to what you want (pose, etc.), go ahead and use 'original' with a weaker denoise, maybe 0.5-0.7. It's also a lot easier if you can work at relatively low resolutions (512 or 1024) and then upscale (using img2img) rather than trying to do all of this at 4k; that way you can iterate more quickly and even generate in batches, so you can get an idea of the variation and see if it's worth just hitting 'generate' a few more times or if you have to change parameters.
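    Same idea through the API, if you prefer scripting it. This reuses the b64() helper and /sdapi/v1/img2img endpoint from the snippet above; the field names and the fill codes are from the webui API as I recall it circa 2023, so again, verify against your own install's /docs:

```python
# Sketch of the inpaint variant: white areas of the mask get regenerated.
payload = {
    "init_images": [b64("scene.png")],
    "mask": b64("mask.png"),       # white = regenerate, black = keep
    "prompt": "a longsword leaning against the wall",
    "denoising_strength": 1.0,
    "inpainting_fill": 2,          # 0 = fill, 1 = original, 2 = latent noise, 3 = latent nothing
    "inpaint_full_res": False,     # False = regenerate the whole image, not just the masked crop
    "steps": 30,
    "alwayson_scripts": {
        "controlnet": {
            "args": [{
                "module": "inpaint_global_harmonious",  # the 'global-harmonious' preprocessor
                "model": "control_v11p_sd15_inpaint",   # your installed inpaint model
                "weight": 1.0,
            }]
        }
    },
}
# POST it exactly like the previous snippet and decode r.json()["images"][0].
```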

    If your original is already very high resolution and you don't want to lose that, what you can do is downsample your image, do the edit, upscale the edited image, then put the two as layers in Photoshop or whatever and erase everything you don't need from the AI-upscaled layer, leaving only the new object and its nearby context.
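    If you'd rather not fire up Photoshop, the same layering trick can be done with Pillow; a minimal sketch (filenames are placeholders):

```python
# Downscale -> edit -> upscale -> composite only the changed region back.
from PIL import Image

original = Image.open("original_4k.png")

# Work copy at a friendlier resolution; run the inpaint/img2img edit on this file.
w = 1024
small = original.resize((w, original.height * w // original.width), Image.LANCZOS)
small.save("work_copy.png")

# Bring the edited version back up to the original size.
edited = Image.open("work_copy_edited.png").resize(original.size, Image.LANCZOS)

# Grayscale mask: white = take pixels from the AI-edited layer (the new object
# plus a little surrounding context), black = keep the original pixels.
mask = Image.open("region_mask.png").convert("L").resize(original.size)

Image.composite(edited, original, mask).save("final_4k.png")
```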

    There are also evidently a couple of different Krita plugins that work in-editor, but I think they largely use the webui as a backend anyhow. Still, they may be worth looking into for a smoother workflow if you have to do this a lot.

    Also, as far as getting character consistency (e.g. 'looks like J-H, not just a bunch of different people'), that's a bit trickier. If you have maybe 4-5 reference photos you can train a textual inversion embedding (this can be done within the Automatic1111 UI), which basically discovers a special token that captures what J-H looks like in terms of things the model already knows. It takes a few hours to train, though, and it can be fiddly. A heavier-duty approach is to train a LoRA that actually modifies the model itself to be better at representing your subject (but in a kind of sparse, composable way). If you were going to do e.g. an animation replacing the character with yourself, it might be worth training a LoRA. For that you need a particular extension called Dreambooth, and the settings for training there are kind of a complex subject.
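    Once either of those is trained, actually using it is just prompt syntax in the webui. The names below are placeholders for whatever you called your own files:

```python
# How a trained embedding / LoRA gets invoked in the prompt (A1111 conventions;
# 'jh_character' and 'jh_character_v1' are made-up placeholder names).

# Textual inversion: the trigger word is just the embedding's filename.
prompt_embedding = "portrait of jh_character wearing plate armor, forest background"

# LoRA: the <lora:filename:weight> tag loads it; weight around 0.6-1.0.
prompt_lora = "portrait of a man wearing plate armor <lora:jh_character_v1:0.8>"
```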

    If you're going the other way around and using yourself as a reference, but you're fine with 'generic armored knight', you don't need any of this. Another thing that might be useful is the OpenPose ControlNet. With that you can capture a pose from a photograph of yourself and generate some other character taking the same pose. You could also use the Depth ControlNet to get more specific 3D detail (hands, etc.), but at the cost of the resulting figure looking more like you and less like the character is supposed to.
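    A sketch of that pose-borrowing step, as a txt2img call this time (no init image), reusing the b64() helper and requests import from the first snippet; preprocessor and model names are whatever your ControlNet install actually lists:

```python
# Grab the pose from a photo of yourself, generate a different character in it.
payload = {
    "prompt": "an armored knight in full plate, fantasy illustration",
    "width": 512,
    "height": 768,
    "steps": 30,
    "alwayson_scripts": {
        "controlnet": {
            "args": [{
                "input_image": b64("photo_of_me.jpg"),
                "module": "openpose_full",               # extracts the stick-figure pose
                "model": "control_v11p_sd15_openpose",   # your installed openpose model
                "weight": 1.0,
                # For stricter 3D detail, swap in a depth preprocessor/model
                # (e.g. "depth_midas"), at the cost of looking more like the reference.
            }]
        }
    },
}
r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
```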

    For restyling, you can just use things like 'oil painting', 'watercolor painting', 'digital artwork by FakeArtistNameHere' (it's more important to indicate that it's 'by someone' than to actually identify a real someone it could have been by), 'trending on Artstation', or stuff like that in the prompt. You can also find a bunch of specific style LoRAs out there; people have made a ton of these. Or, again, if you want to put more work in, make a bunch of images in the style you want, train a LoRA yourself, and apply that.
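    A couple of illustrative restyle prompts, just to make that concrete (the style LoRA name is a placeholder for whichever one you downloaded or trained):

```python
prompt_oil = "portrait of a knight, oil painting, visible brushstrokes, canvas texture"
prompt_digital = "castle on a cliff, digital artwork by FakeArtistNameHere, trending on Artstation"
prompt_lora_style = "portrait of a knight <lora:some_watercolor_style:0.7>"
```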