The first-ever model that can turn a single image into a LoRA has been released by DiffSynth-Studio.

https://modelscope.cn/models/DiffSynth-Studio/Qwen-Image-i2L/summary

  • A translation

    The i2L (Image to LoRA) model is an architecture designed around a wild idea of ours. The input to the model is a single image, and the output is a LoRA model trained on that image. We are open-sourcing four models in this release:

    Qwen-Image-i2L-Style
    Introduction: This is our first model that can be considered successfully trained. Its ability to retain details is very weak, but this actually allows it to effectively extract style information from the image. Therefore, this model can be used for style transfer.
    Image Encoders: SigLIP2, DINOv3
    Parameter Count: 2.4B

    Qwen-Image-i2L-Coarse
    Introduction: This model is a scaled-up version of Qwen-Image-i2L-Style. The LoRA it produces can already retain content information from the image, but the details are not perfect. If you use this model for style transfer, you must input more images; otherwise, the model will tend to generate the content of the input images. We do not recommend using this model alone.
    Image Encoders: SigLIP2, DINOv3, Qwen-VL (resolution 224 x 224)
    Parameter Count: 7.9B

    Qwen-Image-i2L-Fine
    Introduction: This model is an incremental update of Qwen-Image-i2L-Coarse and must be used in conjunction with it. It increases the image encoding resolution of Qwen-VL to 1024 x 1024, thereby obtaining more detailed information.
    Image Encoders: SigLIP2, DINOv3, Qwen-VL (resolution 1024 x 1024)
    Parameter Count: 7.6B

    Qwen-Image-i2L-Bias
    Introduction: This model is a static, supplementary LoRA. Because the training data distribution for Coarse and Fine differs from that of the Qwen-Image base model, the images generated by their resulting LoRAs do not align consistently with Qwen-Image's preferences. Using this LoRA model will make the generated images closer to the style of Qwen-Image.
    Image Encoders: None
    Parameter Count: 30M
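    To make the "single image in, LoRA out" idea concrete, here is a rough, unofficial sketch. The encoder below is a toy stand-in for SigLIP2 / DINOv3 / Qwen-VL, and none of the class names, shapes, or ranks come from DiffSynth-Studio's actual code:

    ```python
    # Unofficial sketch only: these names and shapes are my own illustration of
    # the "image -> LoRA" idea, not DiffSynth-Studio's implementation.
    import torch
    import torch.nn as nn


    class DummyImageEncoder(nn.Module):
        """Toy stand-in for the SigLIP2 / DINOv3 / Qwen-VL image encoders."""

        def __init__(self, embed_dim: int = 512):
            super().__init__()
            self.proj = nn.Sequential(nn.Flatten(), nn.LazyLinear(embed_dim))

        def forward(self, image: torch.Tensor) -> torch.Tensor:
            return self.proj(image)  # (batch, embed_dim)


    class LoRAPredictor(nn.Module):
        """Maps an image embedding to low-rank (A, B) factors for one target layer."""

        def __init__(self, embed_dim: int, in_features: int, out_features: int, rank: int = 8):
            super().__init__()
            self.in_features, self.out_features, self.rank = in_features, out_features, rank
            self.to_A = nn.Linear(embed_dim, rank * in_features)
            self.to_B = nn.Linear(embed_dim, out_features * rank)

        def forward(self, emb: torch.Tensor):
            A = self.to_A(emb).view(-1, self.rank, self.in_features)   # (batch, r, in)
            B = self.to_B(emb).view(-1, self.out_features, self.rank)  # (batch, out, r)
            return A, B


    encoder = DummyImageEncoder(embed_dim=512)
    predictor = LoRAPredictor(embed_dim=512, in_features=1024, out_features=1024, rank=8)

    image = torch.randn(1, 3, 224, 224)   # the single reference image
    A, B = predictor(encoder(image))      # predicted LoRA factors, no training step

    # The predicted delta patches a frozen base weight exactly like an ordinary LoRA;
    # a static supplementary LoRA (the "Bias" model above) could be added the same way.
    base_weight = torch.randn(1024, 1024)
    patched_weight = base_weight + 1.0 * (B[0] @ A[0])
    print(patched_weight.shape)  # torch.Size([1024, 1024])
    ```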

    Interesting, sounds like HyperLoRA from ByteDance earlier this year. They trained it by overfitting a LoRA to each image in their dataset, then using those LoRAs as the training targets for the corresponding inputs, making it a model that predicts LoRAs.
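    If that reading is right, the training loop would look roughly like the following. This is only a guess at the recipe described above (regressing predicted LoRA weights against per-image overfit LoRAs), with toy shapes and no relation to ByteDance's or DiffSynth-Studio's actual code:

    ```python
    # Guessed recipe for a "predict-a-LoRA" model: each image comes with a LoRA
    # that was deliberately overfit on it, and the predictor is regressed
    # against those weights. All names and shapes are illustrative only.
    import torch
    import torch.nn as nn

    embed_dim, in_f, out_f, rank = 128, 256, 256, 4

    # Hypothetical dataset: (image embedding, LoRA factors overfit on that image).
    dataset = [
        (torch.randn(embed_dim), (torch.randn(rank, in_f), torch.randn(out_f, rank)))
        for _ in range(8)
    ]

    predictor = nn.Sequential(
        nn.Linear(embed_dim, 256),
        nn.GELU(),
        nn.Linear(256, rank * in_f + out_f * rank),
    )
    opt = torch.optim.AdamW(predictor.parameters(), lr=1e-4)

    for emb, (A_target, B_target) in dataset:
        flat = predictor(emb)
        A_pred = flat[: rank * in_f].view(rank, in_f)
        B_pred = flat[rank * in_f :].view(out_f, rank)
        # Weight-space regression against the pre-computed, per-image LoRA.
        # (An alternative is a denoising loss with the predicted LoRA applied.)
        loss = nn.functional.mse_loss(A_pred, A_target) + nn.functional.mse_loss(B_pred, B_target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    ```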

  • Big if huge

  • Is it suitable for human faces?

    Judging from the use-case descriptions, not yet, and none of the examples would be considered character LoRAs.

  • Could this be wrapped directly as a ComfyUI node?

  • Very good, but I haven't used Qwen-Image since Z-Image was released; I only use Qwen-Image-Edit-2509.