ControlNet’s Reference Only: Simplify Image Generation with Style Control

2023-07-04 01:00:00

Reference Only is a new function integrated into ControlNet. It needs only a single reference picture as input to control the style of the generated image, which can simplify workflows that would otherwise require training small models such as LoRA.

Imitating the style of a reference picture

According to ControlNet's release notes on GitHub, Reference Only links directly into Stable Diffusion's attention layers and can use any picture as a reference to control the style of the generated image. Its biggest feature is that it does not rely on any separate ControlNet model: it works by simply feeding in a single reference picture, which makes it quite easy to use.

The examples below show the effect of Reference Only. The subject of every generated image is Dr. Takemi from the game "Persona", but with different reference images you can control the style of the characters in the output.

▲ First, borrowing a picture drawn by 生ごミカン as the reference, the doctor in the generated image has the same pronounced smoky makeup as the original.

▲ Switching to a reference picture with a distinctive background style, the background of the generated picture changes accordingly.

▲ Using a picture from the smartphone app "Steins;Gate Clock" as the reference, the characters' expressions and the overall tone of the generated picture are also affected.

▲ Let's see what happens with "Sonic the Hedgehog" as the reference; the result is not very obvious.

▲ What about Shizuka from Doraemon? Mmm, it works!

▲ Finally, trying Feli from the "Puyo Puyo" series. Although the style is a bit similar to the picture above, you can still see the difference if you look closely.

Installation and use of Reference Only

ControlNet integrated Reference Only in the 1.1.153 update, so readers who perform a fresh installation of ControlNet should already get this version.
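Before the installation walkthrough, a quick aside on how the style transfer works. As mentioned above, Reference Only hooks into Stable Diffusion's attention layers; conceptually, the self-attention of the image being generated is also allowed to attend over features of the reference picture. The toy NumPy sketch below is NOT ControlNet's actual code, just an illustration of that idea; the function name, shapes, and the `style_fidelity` blending are simplifications of the real mechanism.

```python
# Toy illustration (NOT ControlNet's real implementation): self-attention that
# also attends over features from a reference image, which is roughly how
# reference_only injects the reference style without any trained model file.
import numpy as np


def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)


def reference_attention(q, kv_self, kv_ref, style_fidelity=0.5):
    """Attend over the generated image's own features concatenated with the
    reference image's features; style_fidelity blends the two results."""
    def attend(q, kv):
        scores = q @ kv.T / np.sqrt(q.shape[-1])
        return softmax(scores) @ kv

    plain = attend(q, kv_self)                            # ordinary self-attention
    mixed = attend(q, np.concatenate([kv_self, kv_ref]))  # reference-aware attention
    return style_fidelity * mixed + (1 - style_fidelity) * plain


rng = np.random.default_rng(0)
q = rng.standard_normal((4, 8))        # 4 query tokens, dim 8
kv_self = rng.standard_normal((4, 8))  # features of the image being generated
kv_ref = rng.standard_normal((6, 8))   # features of the reference picture
out = reference_attention(q, kv_self, kv_ref)
print(out.shape)  # (4, 8)
```

With `style_fidelity` at 0 the reference is ignored entirely; raising it pulls the output toward the reference-aware result, loosely mirroring how a higher ControlNet weight strengthens the style control.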
If ControlNet is already installed, go to the Extensions page of the Stable Diffusion WebUI, click "Check for updates" under the Installed tab to check for extension updates, then click "Apply and restart UI" to install the update and restart.

After the Stable Diffusion WebUI restarts, click the triangle to the right of ControlNet to expand its settings, click "Click to upload" and select the reference image you want to import, tick the "Enable" box, and then choose reference_only as the Preprocessor.

Next, set the Control Mode: "Balanced" balances the weight of ControlNet and the prompt, "My prompt is more important" favors the prompt, and "ControlNet is more important" favors ControlNet. Image generation can then proceed as normal.

Note that, in the author's experience, Reference Only often needs a higher weight for its effect to become obvious. If the style control does not work as expected, increase the Control Weight in ControlNet (the upper limit is 2) and select "ControlNet is more important" in Control Mode, so that Reference Only exerts stronger control over the image style.

▲ To update ControlNet, go to the Extensions page of the Stable Diffusion WebUI, click "Check for updates" under the Installed tab, then click "Apply and restart UI" to install the update and restart.

▲ When using Reference Only, click the triangle to the right of ControlNet to expand its settings, then click "Click to upload" to select the reference picture you want to import.

▲ Tick the "Enable" box, then select reference_only as the Preprocessor. After that, adjust the Control Weight and Control Mode as needed to change the strength of the control.
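The UI steps above can also be driven programmatically through the WebUI's txt2img API with the ControlNet extension's `alwayson_scripts` hook. The sketch below only assembles the request body; the field names follow the sd-webui-controlnet API, but the prompt, the placeholder image bytes, and the default values are assumptions for illustration, and actually posting the payload (e.g. to a local `/sdapi/v1/txt2img` endpoint) is left out.

```python
# A minimal sketch of a Reference Only request for the Stable Diffusion WebUI
# txt2img API via the ControlNet extension's "alwayson_scripts" hook.
# Prompt, image bytes, and defaults here are illustrative assumptions.
import base64
import json


def build_reference_only_payload(reference_image_b64: str,
                                 prompt: str,
                                 weight: float = 1.0,
                                 control_mode: str = "Balanced") -> dict:
    """Assemble a txt2img request body that enables the reference_only preprocessor."""
    return {
        "prompt": prompt,
        "steps": 20,
        "alwayson_scripts": {
            "controlnet": {
                "args": [{
                    "enabled": True,
                    "image": reference_image_b64,  # base64-encoded reference picture
                    "module": "reference_only",    # no ControlNet model file is needed
                    "model": "None",
                    "weight": weight,              # raise toward the limit of 2 for a stronger effect
                    "control_mode": control_mode,  # "Balanced", "My prompt is more important",
                                                   # or "ControlNet is more important"
                }]
            }
        },
    }


# Example: encode a (placeholder) reference picture and build the request body.
fake_image = base64.b64encode(b"\x89PNG...").decode("ascii")
payload = build_reference_only_payload(fake_image,
                                       "1girl, doctor, smoky makeup",
                                       weight=2.0,
                                       control_mode="ControlNet is more important")
print(json.dumps(payload, indent=2)[:120])
```

Note how the two knobs discussed above map onto the request: `weight` corresponds to the Control Weight slider, and `control_mode` to the Control Mode radio buttons.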
▲ Once the settings are complete, operate the WebUI in the usual way and click the "Generate" button to start generating images.

Reference Only offers a convenient way to control the style of generated images with a single picture, without having to train small models such as LoRA yourself, which simplifies the workflow for generating images in a specific style.