Visual Style Prompting with Swapping Self-Attention

1Yonsei University, 2NAVER AI Lab
* Equal contribution


We tackle Visual Style Prompting, which reflects style elements from reference images and content from text prompts, in a training-free manner.


Problem Statement

Ambiguity of text prompts. (a) The ambiguity of text leads to different results for the same style description. (b) Even a detailed style description does not guarantee images in the same style, since a style has many variants that can hardly be constrained by text prompts alone. (c) Reference images can specify detailed visual elements.



In the evolving domain of text-to-image generation, diffusion models have emerged as powerful tools in content creation. Despite their remarkable capability, existing models still face challenges in achieving controlled generation with a consistent style, requiring costly fine-tuning or often inadequately transferring the visual elements due to content leakage.

To address these challenges, we propose a novel approach, visual style prompting, to produce a diverse range of images while maintaining specific style elements and nuances. During the denoising process, we keep the queries from the original features while swapping the keys and values with those from the reference features in the late self-attention layers. This approach enables visual style prompting without any fine-tuning, ensuring that generated images maintain a faithful style. Through extensive evaluation across various styles and text prompts, our method demonstrates superiority over existing approaches, best reflecting the style of the references while ensuring that the resulting images match the text prompts most accurately.


Visual Style Prompting successfully reflects style elements in reference images while not causing content leakage from the reference images.

Method Overview

Overview of swapping self-attention for visual style prompting. We swap the key and value features of the self-attention blocks in the original denoising process with the ones from a reference denoising process. This procedure is repeated for T steps, resulting in the original content rendered with the style elements from the reference image.
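The core operation can be sketched as follows: a minimal, single-head self-attention in PyTorch where the query comes from the original features but the key and value are swapped in from the reference features. The shapes, projection layers, and function name are illustrative assumptions for exposition, not the actual SDXL implementation.

```python
import torch
import torch.nn.functional as F

def swapping_self_attention(x_orig, x_ref, to_q, to_k, to_v):
    """Self-attention where K and V come from the reference features
    while Q stays with the original features (single head, no mask)."""
    q = to_q(x_orig)            # query from the original denoising path
    k = to_k(x_ref)             # key swapped in from the reference path
    v = to_v(x_ref)             # value swapped in from the reference path
    scale = q.shape[-1] ** -0.5
    attn = F.softmax(q @ k.transpose(-2, -1) * scale, dim=-1)
    return attn @ v

# Toy example: batch of 1, 16 tokens, 8 channels (hypothetical sizes).
torch.manual_seed(0)
dim = 8
to_q = torch.nn.Linear(dim, dim, bias=False)
to_k = torch.nn.Linear(dim, dim, bias=False)
to_v = torch.nn.Linear(dim, dim, bias=False)
x_orig = torch.randn(1, 16, dim)  # features of the image being generated
x_ref = torch.randn(1, 16, dim)   # features from the reference denoising process
out = swapping_self_attention(x_orig, x_ref, to_q, to_k, to_v)
print(out.shape)  # torch.Size([1, 16, 8])
```

In practice this swap would be applied only in the late (upsampling) self-attention layers at every denoising step, so the output keeps the spatial layout of the original content while borrowing style statistics from the reference.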



Competing methods struggle to generate images with diverse layouts and compositions due to content leakage from the reference.


More Comparison

Qualitative comparison with varying styles and fixed content. Visual Style Prompting reflects style elements from various reference images to render “frog” while others struggle.



Visual style prompting with existing techniques. Our method is compatible with ControlNet and Dreambooth-LoRA.


More Results with Visual Style Prompting

More results demonstrating the diversity of our method.


@misc{jeong2024visual,
      title={Visual Style Prompting with Swapping Self-Attention},
      author={Jaeseok Jeong and Junho Kim and Yunjey Choi and Gayoung Lee and Youngjung Uh},
      year={2024}
}