Hi, I wanted to share my discovery of how to use any number of LoRAs with Z-Image without image degradation.

For this, you simply load all LoRAs at a strength of 1.0 and then merge them using the "ModelMergeSimple" node (a standard node in ComfyUI). That way, two LoRAs at a time are balanced/weighted against each other, and the combined ratios always sum to 1.0, which lets the KSampler work without any issues.
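
For context, here is a minimal sketch of the math behind such a two-model merge, assuming it blends each weight as out = ratio * model1 + (1 - ratio) * model2 (plain Python for illustration, not the actual node code):

```python
# Minimal sketch of a simple two-model merge for one weight value,
# assuming: out = ratio * model1 + (1 - ratio) * model2 (ratio = share of model1).
def merge_simple(w1: float, w2: float, ratio: float) -> float:
    return ratio * w1 + (1.0 - ratio) * w2

base = 1.0
lora_a = base + 0.4   # a base weight patched by LoRA A at strength 1.0
lora_b = base + 0.6   # the same weight patched by LoRA B at strength 1.0

# A 50/50 merge averages the two deltas instead of adding them,
# so the combined shift stays bounded and the sampler sees a normal-looking model.
print(merge_simple(lora_a, lora_b, 0.5))   # 1.5 (delta 0.5), not 2.0 (delta 1.0)
```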

You can find the workflow here.

  • IIRC this means LoRA no. 1 occupies 20% of the first merge, which itself occupies 20% of the second merge, etc., leaving LoRA 1's weights at something like <5% of the total LoRA collective, so this isn't working the way it should.
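
    A quick back-of-the-envelope check of that, using the hypothetical 20% shares from this comment and assuming each merge is out = ratio * model1 + (1 - ratio) * model2:

    ```python
    # If LoRA 1's model only gets a 0.2 share in the first merge, and that merged
    # result again only gets a 0.2 share in the next merge, LoRA 1 keeps shrinking:
    effective = 1.0
    for share in (0.2, 0.2):    # hypothetical share at each successive merge
        effective *= share
    print(effective)            # 0.04 -> roughly 4% of the final model, i.e. <5%
    ```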

    If you want multiple LoRAs, you need to merge the LoRA differences so that values aren't simply added together, creating large tensor peaks.

    Say LoRA 1, represented as a list of numbers, is [0, 1, 0, 3], and LoRA 2 is [1, 1, 3, 2].

    The merge should be something like [1, 1, 3, 3], versus pure addition, which would give [1, 2, 3, 5] and over-amplify the last value. Now imagine you have five LoRAs that all boost that last value: with pure addition it reaches an absurd value compared to the rest of the weights, so you'd have to normalise it, which either just caps the extremes or shrinks all the existing weights. This is why a difference merge is best.
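
    A small PyTorch sketch of the two behaviours described here, reading the "difference merge" as keeping, per element, whichever value has the larger magnitude (actual merge nodes may implement it differently):

    ```python
    import torch

    lora1 = torch.tensor([0., 1., 0., 3.])
    lora2 = torch.tensor([1., 1., 3., 2.])

    added = lora1 + lora2   # tensor([1., 2., 3., 5.]) -> last value over-amplified

    # Per-element "difference" merge: keep whichever delta is larger in magnitude
    # instead of summing the two deltas.
    diff_merged = torch.where(lora1.abs() >= lora2.abs(), lora1, lora2)
    print(diff_merged)      # tensor([1., 1., 3., 3.])
    ```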

    You can also do other types of merges, such as weighted merging, where the more extreme the difference between the LoRAs, the less impact they have; peaks still remain, though, and aren't fully normalised.
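
    One possible reading of that idea, as a hedged sketch (the exact formula isn't specified in this comment):

    ```python
    import torch

    lora1 = torch.tensor([0., 1., 0., 3.])
    lora2 = torch.tensor([1., 1., 3., 2.])

    # Hypothetical weighted merge: the more the two LoRAs disagree on an element,
    # the more their combined contribution is damped.
    disagreement = (lora1 - lora2).abs()
    damping = 1.0 / (1.0 + disagreement)
    weighted = damping * (lora1 + lora2) / 2.0
    print(weighted)   # peaks are reduced but, as noted, not fully normalised
    ```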

    These are things I was doing with conditioning weights in SD1.5 to merge multiple style conditionings together, getting a clean representation of 4+ art styles in a single style and creating new art styles. There's lots of fun to be had messing around with the model, the conditioning and the encoders!

    And this is why using any additional LoRA alongside a character LoRA almost always impacts the consistency of the character LoRA.

  • What's the real difference between using this approach and simply using the Power Lora Loader, for example?

    I mean, what's the thought process? It's a genuine question; I'm asking to learn more about Comfy and its nodes!

    I use the Power Lora Loader and don't have many issues. The key is to lower the strength of each one drastically from the default. If the LoRA says to use 1, I keep it at 0.2-0.3 when I'm using more than one LoRA.

    OK, I understand, so it's a more aggressive way to lower the strength of a LoRA. I thought the Power Lora Loader also had that feature, but this is a more specific way to do it with multiple LoRAs.

  • Using the Power Lora Loader from rgthree is much easier and will achieve the same effect.

  • I tried using a character LoRA together with an action LoRA, and it didn’t work well.

    You can still try increasing the ratio of the LoRA that isn't working so well; the main purpose of the ModelMergeSimple node is to hard-code a maximum total ratio of 1.0 for the KSampler.

    And for the ModelMergeSimple ratio: a ratio of 1.0 is model1, a ratio of 0 is model2. Try mixing the LoRAs with a ratio between 0 and 1.0.

  • I just use the Power LoRA Loader, and however many LoRAs I have loaded, I divide the strength up so they sum to 1, unless a LoRA needs 1 no matter what. Also, do action LoRAs first, then character/enhancement LoRAs afterwards.
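
    For example, with hypothetical numbers:

    ```python
    n_loras = 4
    strength_each = 1.0 / n_loras   # 0.25 per LoRA, so the stacked strengths sum to 1.0
    ```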

  • Lower the weights to 0.10 each and use the positive prompt (trigger:3.0) instead to call the LoRAs.

    What custom node do you need for that in ComfyUI?

  • Interesting. I'll give it a try and see how it goes. Thanks for your help boss 🫡

  • I was just thinking about this today, since I haven't had any luck stacking LoRAs; it would be pretty nice if it were actually this easy. Need to try it out later.

    It works fine if you stack LoRAs that are all based on the same base model. The issue is that a lot of LoRAs right now are trained on the de-detailed turbo model, so if you mix those with LoRAs trained on the regular turbo model, you get a body-horror mess.

  • Why 0.87/0.84/0.76/0.78?

  • Can this be used for multiple character LoRAs? I tried, with no luck.

  • This is a really fresh discovery for me.. thanks, man