Diffuse to Choose: Enriching Image Conditioned Inpainting in Latent Diffusion Models for Virtual Try-All

1Anon Affiliation 1, 2Anon Affiliation 2
Teaser: Diffuse to Choose (DTC) allows users to virtually place any e-commerce item in any setting, ensuring detailed, semantically coherent blending with realistic lighting and shadows.


Abstract

As online shopping continues to grow, the ability for buyers to virtually visualize products in their own settings, a capability we define as "Virtual Try-All", has become crucial. Recent diffusion models inherently contain a world model, which makes them well suited to this task when it is framed as inpainting. However, traditional image-conditioned diffusion models often fail to capture the fine-grained details of products. In contrast, personalization-driven models such as DreamPaint are good at preserving an item's details, but they are not optimized for real-time applications.

We present Diffuse to Choose, a novel diffusion-based image-conditioned inpainting model that efficiently balances fast inference with the retention of high-fidelity details of a given reference item, while ensuring accurate semantic manipulation of the given scene content. Our approach incorporates fine-grained features from the reference image directly into the latent feature maps of the main diffusion model, along with a perceptual loss that further preserves the reference item's details. We conducted extensive testing on both in-house and publicly available datasets, and show that Diffuse to Choose is superior to existing zero-shot diffusion inpainting methods as well as few-shot diffusion personalization algorithms such as DreamPaint.
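To illustrate how a perceptual term can sit alongside the standard denoising objective, here is a minimal PyTorch sketch assuming a Stable Diffusion style latent-diffusion setup with the Hugging Face diffusers scheduler. The unet call signature (hint=, cond=), the frozen feature extractor feat_extractor, the masked-region weighting, and the weight lambda_perc are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn.functional as F
from diffusers import DDPMScheduler

scheduler = DDPMScheduler(num_train_timesteps=1000)

def training_loss(unet, vae, feat_extractor, x0, hint, mask, ref_cond, lambda_perc=0.1):
    # x0: ground-truth image containing the item; mask is 1 inside the item region.
    # Encode the source image into the VAE latent space (SD scaling factor).
    latents = vae.encode(x0).latent_dist.sample() * 0.18215

    # Standard epsilon-prediction diffusion loss.
    noise = torch.randn_like(latents)
    t = torch.randint(0, scheduler.config.num_train_timesteps,
                      (latents.shape[0],), device=latents.device)
    noisy = scheduler.add_noise(latents, noise, t)
    eps_pred = unet(noisy, t, hint=hint, cond=ref_cond)  # assumed signature
    diffusion_loss = F.mse_loss(eps_pred, noise)

    # One-step estimate of the clean latents, decoded back to pixel space.
    a = scheduler.alphas_cumprod.to(latents.device)[t].view(-1, 1, 1, 1)
    latents_hat = (noisy - (1 - a).sqrt() * eps_pred) / a.sqrt()
    x0_hat = vae.decode(latents_hat / 0.18215).sample

    # Perceptual term: match deep features inside the masked (item) region.
    perceptual_loss = F.mse_loss(feat_extractor(x0_hat * mask),
                                 feat_extractor(x0 * mask))

    return diffusion_loss + lambda_perc * perceptual_loss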

Model Architecture

We utilize a secondary U-Net encoder to inject fine-grained details into the diffusion process. We begin by masking the source image and then pasting the reference image into the masked area. The resulting pixel-level 'hint' is adapted by a shallow CNN that aligns it with the VAE output dimensions of the source image, before it is added element-wise to that output. A U-Net encoder then processes the adapted hint; at each scale of the U-Net, a FiLM module affinely aligns the skip-connected features from the main U-Net encoder with the pixel-level features from the hint U-Net encoder. Finally, these aligned feature maps, combined with the main image conditioning, facilitate the inpainting of the masked region. A small sketch of the two added components follows.
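The sketch below shows, in PyTorch, one plausible form of the shallow hint adapter (mapping the pixel-level hint to the VAE latent resolution) and of a FiLM block that modulates the main encoder's skip features with the hint encoder's features at the same scale. Layer sizes, channel counts, and the exact placement of the modulation are assumptions for illustration, not the released implementation.

import torch
import torch.nn as nn

class HintAdapter(nn.Module):
    # Shallow CNN mapping the pixel-level hint (masked image with the pasted
    # reference) to the VAE latent resolution and channel count, so it can be
    # added element-wise to the source latents. Layer sizes are illustrative.
    def __init__(self, latent_channels=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(64, latent_channels, 3, stride=2, padding=1),
        )

    def forward(self, hint):   # hint: (B, 3, H, W)
        return self.net(hint)  # -> (B, latent_channels, H/8, W/8)

class FiLM(nn.Module):
    # Feature-wise Linear Modulation: the hint-encoder features predict a
    # per-channel scale and shift that affinely align the main encoder's
    # skip-connection features at the same scale.
    def __init__(self, channels):
        super().__init__()
        self.to_scale_shift = nn.Conv2d(channels, 2 * channels, 1)

    def forward(self, main_feat, hint_feat):
        scale, shift = self.to_scale_shift(hint_feat).chunk(2, dim=1)
        return main_feat * (1 + scale) + shift

In this reading, the modulated skip features at each scale would be passed to the main U-Net decoder in place of the plain skip connections; this is one wiring consistent with the description above.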

Masking Strategy

To handle arbitrary mask shapes and in-the-wild examples at inference time, we apply mask augmentation during training: with equal probability we use either a bounding-box mask (derived from the fine-grained mask) or the fine-grained mask itself. When the fine-grained mask is selected, we integrate the reference image within the largest rectangular area inside the mask, as sketched below.
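The following NumPy sketch implements the stated 50/50 augmentation and a standard maximal-rectangle routine for finding the largest axis-aligned rectangle inside a fine-grained mask, where the reference image would be pasted to form the hint. Function names and this particular formulation are ours; the paper does not provide an implementation, and a non-empty mask is assumed.

import numpy as np

def augment_mask(fine_mask: np.ndarray, p_bbox: float = 0.5) -> np.ndarray:
    # With equal probability, return either the bounding-box mask derived
    # from the fine-grained mask or the fine-grained mask itself.
    if np.random.rand() < p_bbox:
        ys, xs = np.nonzero(fine_mask)
        bbox = np.zeros_like(fine_mask)
        bbox[ys.min():ys.max() + 1, xs.min():xs.max() + 1] = 1
        return bbox
    return fine_mask

def largest_inner_rect(mask: np.ndarray):
    # Largest axis-aligned rectangle fully inside the mask (classic
    # "maximal rectangle" dynamic program over per-column heights).
    # Returns (top, left, bottom, right), all inclusive.
    h, w = mask.shape
    heights = np.zeros(w, dtype=int)
    best = (0, 0, 0, 0, 0)  # (area, top, left, bottom, right)
    for r in range(h):
        heights = np.where(mask[r] > 0, heights + 1, 0)
        stack = []  # column indices with increasing heights
        for c in range(w + 1):
            cur = heights[c] if c < w else 0
            while stack and heights[stack[-1]] >= cur:
                top_c = stack.pop()
                height = int(heights[top_c])
                left = stack[-1] + 1 if stack else 0
                area = height * (c - left)
                if area > best[0]:
                    best = (area, r - height + 1, left, r, c - 1)
            stack.append(c)
    return best[1:]

The reference image would then be resized to the returned rectangle and composited into the masked source image to produce the pixel-level hint.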


Qualitative Results

Diffuse to Choose works with both in-the-wild reference images and source images, and with arbitrary or rigid mask shapes.



Some Fun Applications

You can iteratively decorate your room using Diffuse to Choose.

Or try on different combinations of clothes without any constraints.

Adjusting the mask allows for altering the clothing style, such as tucking in clothes or rolling up sleeves.

BibTeX

If you find our work useful, please cite our paper:

@misc{underreview,
      doi = {placeholder},
      url = {https://arxiv.org/placeholder},
      author = {Anonymous},
      title = {Diffuse to Choose: Enriching Image Conditioned Inpainting in Latent Diffusion Models for Virtual Try-All},
      publisher = {will be submitted to arXiv},
      year = {2023},
      primaryClass = {cs.CV}
}