Commanding AI in Architecture
SD + FLUX within the Design Process
As generative AI tools become increasingly prevalent among architects and designers, it is essential to understand how to harness their potential consciously and deliberately. When using generic diffusion models, we may well have questioned whether we are truly in control of the design process. Prompts provide a good starting point for exploring ideas, but on their own they are insufficient for a conscious, site-specific design. As your design evolves, you need to command it rather than let the AI “hallucinate” some random design; hence the need for a deliberate, customized methodology.
Tools like Midjourney and DALL-E are great entry points for initial brainstorming and concept exploration, but their utility is limited across the entire design process, and they fall short when you try to convey your own design.
In this course, you will learn a multimodal approach: rendering tools + SD + FLUX + Photoshop AI. You will start with the basics of prompting and brainstorming, then learn an experimental ADE20K segmentation methodology that gives architects more control by letting them define the specific architectural categories to include in a scene. While this approach still has limitations (the category remains heavily influenced by the prompt and context), it proves effective for defining general objects in the scene using tools like Photoshop or Rhino.
You will learn how Fredy seeks to integrate generative AI into various stages of design to enhance, rather than replace, the architect's critical role in the design process. This course is aimed at architects and designers who have experimented with Stable Diffusion and want to improve their control of the tool.
Workshop Overview
Part 1: 50 min – Introductory presentation of MVRDV AI – applications of generative AI in the design process
Part 2: 10 min – General questions
Part 3: 30 min – Understanding the basics
Part 4: 30 min – From 3D model to image using ControlNet
Part 5: 60 min – Segmentation Methodology
Part 6: 45 min – FLUX Denoising Methodology
Part 7: 15 min – Summary / Questions / Discussion
Why it is important:
By now it is evident that every career will shift towards a more generative-AI-inclusive workflow. In this workshop, you will not only learn the basics of using generative diffusion models but also the best tips and tricks for achieving high-quality, controlled results.
What you will learn:
A generative AI workflow being researched and developed at one of the most innovative architecture firms.
Mentors
Fredy Fortich
Architect / engineer focused on computational design, particularly BIM coordination in Revit, performance-based design, generative design, and machine learning methods
Master of Science in Building Technology, TU Delft, Netherlands
Master of Architecture, Universidad de los Andes, Bogotá, Colombia
Technical Architect at MVRDV
BIM Manager at French Studio
AI Researcher on Diffusion Models
Programme
Stage 1: Learning the Potential and Basics
Part 1: 50 min – Introductory presentation of MVRDV AI + basics of using diffusion models
1. AI Brainstorming – Idea Generator
2. AI Super Pinterest – Reference Generator
3. AI Conceptualizer – Unifying various concepts into one image
4. AI Collage Design – Alternative to Modelling
5. AI Massing Optioneering
6. AI Materialization Optioneering
7. AI Rendering
a. Video Rendering
b. Ambiance conversion
c. Upscaling
8. AI Customization – LoRA training and fine-tuning
Part 2: 10 min - General questions
Part 3: 30 min – Understanding the basics:
Prompting basics, models, LoRAs, word weights, XY plot, denoising strength, txt2img vs img2img.
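For those who want to try these concepts outside the web UI, here is a minimal sketch using the diffusers Python library (an assumption of this example; the workshop itself works in Forge). The checkpoint id is the Hugging Face mirror of RealVisXL from the model list below; any SDXL checkpoint will do.

```python
# A minimal sketch of txt2img vs img2img with diffusers (not the Forge UI
# used in the workshop). Swap in any SDXL checkpoint you prefer.
import torch
from diffusers import (
    StableDiffusionXLPipeline,
    StableDiffusionXLImg2ImgPipeline,
)

prompt = "timber community pavilion in a park, overcast light, photorealistic"

# txt2img: generation starts from pure noise, guided only by the prompt.
txt2img = StableDiffusionXLPipeline.from_pretrained(
    "SG161222/RealVisXL_V4.0", torch_dtype=torch.float16
).to("cuda")
base = txt2img(prompt=prompt).images[0]
base.save("txt2img_base.png")

# img2img: your own image seeds the process. `strength` is the denoising
# strength: low values keep the input almost intact, high values let the
# prompt repaint it almost entirely.
img2img = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "SG161222/RealVisXL_V4.0", torch_dtype=torch.float16
).to("cuda")
for strength in (0.3, 0.5, 0.8):
    out = img2img(prompt=prompt, image=base, strength=strength).images[0]
    out.save(f"img2img_strength_{strength:.1f}.png")
```

Comparing the three img2img outputs side by side is the quickest way to build an intuition for denoising strength before moving on to the XY plot tools in the UI.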
Break – 10 min
Stage 2: Learning the Workflows
Part 4: 30 min – From 3D model to generative AI image using ControlNet (Forge)
– The most effective approach to “freeze” your geometry.
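As a rough illustration of the same idea in code, the sketch below (assuming the diffusers library and its published SDXL depth ControlNet; in the workshop this is done through the Forge UI) conditions generation on a depth render exported from your 3D model, so the geometry stays fixed while the prompt drives materials and atmosphere. The file name "massing_depth.png" is a placeholder.

```python
# A minimal sketch: conditioning SDXL on a depth render of your own model so
# the geometry is "frozen". Assumes diffusers and its SDXL depth ControlNet;
# "massing_depth.png" stands in for a depth map exported from Rhino/Enscape.
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-depth-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

depth = load_image("massing_depth.png")
image = pipe(
    prompt="brick housing block, street-level view, golden hour, photorealistic",
    image=depth,
    controlnet_conditioning_scale=0.8,  # higher values hold the geometry more rigidly
).images[0]
image.save("frozen_geometry.png")
```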
Part 5: 60 min – Segmentation Methodology
Exercise one: Brainstorm image / Photoshop editing / ControlNet segmentation
Exercise two: Sketching in Photoshop / ControlNet segmentation
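If you prefer scripting the segmentation map instead of painting it in Photoshop, the sketch below builds one programmatically. The colours follow the commonly published ADE20K palette (verify them against the palette your ControlNet expects), and the scene layout is purely hypothetical.

```python
# A minimal sketch of an ADE20K-style segmentation map built in code rather
# than Photoshop. Each flat colour tells the segmentation ControlNet which
# ADE20K category to place in that region of the scene.
from PIL import Image, ImageDraw

# A few ADE20K palette colours (RGB) from the commonly published palette;
# double-check them, since the ControlNet only understands exact colours.
SKY      = (6, 230, 230)
BUILDING = (180, 120, 120)
TREE     = (4, 200, 3)
ROAD     = (140, 140, 140)

seg = Image.new("RGB", (1024, 1024), SKY)
draw = ImageDraw.Draw(seg)
draw.rectangle([0, 700, 1024, 1024], fill=ROAD)      # ground plane
draw.rectangle([250, 250, 780, 700], fill=BUILDING)  # massing block
draw.ellipse([60, 520, 240, 700], fill=TREE)         # street tree
seg.save("segmentation_map.png")
```

The saved map is then fed to the ADE20K segmentation ControlNet listed under the required downloads, exactly as a Photoshop-painted map would be.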
Part 6: 45 min – FLUX Denoising Methodology (ComfyUI)
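The ComfyUI node graph for this part is provided during the workshop; as a conceptual stand-in, here is a minimal sketch of the same denoising idea using diffusers' FluxImg2ImgPipeline (an assumption of this example): a draft render is pushed back through FLUX at partial denoising strength, so materials and light are refined while the design survives.

```python
# A minimal sketch of the FLUX denoising idea with diffusers (the workshop
# builds the equivalent as a ComfyUI node graph). "draft_render.png" is a
# placeholder for a rough Enscape/Lumion output.
import torch
from diffusers import FluxImg2ImgPipeline
from diffusers.utils import load_image

pipe = FluxImg2ImgPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
).to("cuda")

draft = load_image("draft_render.png")
refined = pipe(
    prompt="architectural photograph, concrete and glass facade, soft daylight",
    image=draft,
    strength=0.45,          # partial denoise: keep the design, repaint the surfaces
    num_inference_steps=8,  # Schnell is distilled, so few steps suffice
    guidance_scale=0.0,     # Schnell runs without classifier-free guidance
).images[0]
refined.save("flux_refined.png")
```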
Part 7: 15 min – Summary / Questions / Discussion
Important info:
To ensure the best experience, please have the following installed before the workshop:
Required Software and Tools
• Stable Diffusion + ControlNet:
• Forge: https://github.com/lllyasviel/stable-diffusion-webui-forge
• ComfyUI: https://github.com/comfyanonymous/ComfyUI
• ComfyUI Manager: https://civitai.com/models/71980/comfyui-manager
• ControlNets:
https://huggingface.co/lllyasviel/sd_control_collection/tree/main
• Rendering Software:
• Enscape or Lumion (recommended)
• Adobe Photoshop with Firefly:
Ensure your Photoshop version includes Generative AI capabilities.
Recommended Models to Download
• SDXL Models:
• Halcyon SDXL: https://civitai.com/models/299933?modelVersionId=709468
• RealVisXL: https://civitai.com/models/139562?modelVersionId=789646
• Flux Schnell: https://huggingface.co/black-forest-labs/FLUX.1-schnell
• LoRAs:
• ElevationXL: https://civitai.com/models/571504/elevationxl
• Nunu XL: https://civitai.com/models/461798/nunu-xl
• Additional ControlNets:
• Segmentation: https://huggingface.co/abovzv/sdxl_segmentation_controlnet_ade20k/blob/main/sdxl_segmentation_ade20k_controlnet.safetensors
Photoshop Library
Access the curated Photoshop Library here.
Additional Notes:
- It is advisable to have active plans for the following tools: Midjourney, Prome AI, Krea AI, and Runway. All of these tools have free versions, except for Midjourney.
- Basic knowledge of interior design and architecture is recommended.