
LooseControl: Revolutionizing 3D Architectural Visualization with AI-Driven Depth Conditioning

By Ricardo Eloy

Researchers from KAUST, University College London, and Adobe have introduced LooseControl, a method for generalized depth conditioning in diffusion-based image generation. Built on ControlNet, a neural network that adds spatial conditioning to diffusion models, it generates 2D images from a loose 3D layout rather than an exact depth map.
Image credit: Shariq Farooq Bhat et al.
As the authors describe it: "Specifically, we allow scene boundary control for loosely specifying scenes with only boundary conditions, and 3D box control for specifying layout locations of the target objects rather than the exact shape and appearance of the objects. Using LooseControl, along with text guidance, users can create complex environments (e.g., rooms, street views, etc.) by specifying only scene boundaries and locations of primary objects."
Image credit: Shariq Farooq Bhat et al.
In practice, users place 3D boxes in a scene, adjust their position and size, and add a text prompt; the model then generates images that respect both the boxes and the scene boundaries. LooseControl also includes editing mechanisms for refining generated images and changing individual aspects of a scene, making it a potentially valuable tool for designing complex environments such as rooms and street views.
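To make the workflow concrete, here is a minimal sketch of the underlying idea, conditioning a diffusion model on a depth map rendered from coarse 3D geometry, using the standard Hugging Face diffusers ControlNet depth pipeline. This is not LooseControl's own API; the model IDs and the depth-map file name are assumptions for illustration.

```python
# Illustrative sketch: depth-conditioned image generation with a
# standard ControlNet depth pipeline (not LooseControl's actual API).
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Depth ControlNet paired with Stable Diffusion 1.5 (assumed model IDs).
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# "boxes_depth.png" stands in for a depth map rendered from the user's
# 3D box layout (hypothetical file).
depth_map = load_image("boxes_depth.png")

# The text prompt and the depth map together steer the generated image.
image = pipe(
    "a cozy modern living room, photorealistic",
    image=depth_map,
    num_inference_steps=30,
).images[0]
image.save("living_room.png")
```

LooseControl's contribution is to relax the exact-depth requirement of such pipelines, so a handful of boxes and scene boundaries is enough to guide generation.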

Learn more about the project here.


About the author

Ricardo Eloy

CGarchitect Editor/3D Specialist at Chaos

São Paulo, BR