Graph to Plan

1 Brief

Graph theory plays a significant role in the design and analysis of building layouts. Architects have used bubble diagrams, schematic organizational graphs, to represent the spatial relationships of architectural spaces since as early as 1938 (Emmons & Paul, 2017).

Although researchers have proposed varying definitions for such structures (Baglivo & Graver, 1983; Roth & Hashimshony, 1988), a bubble diagram is essentially a relational graph consisting of vertices and edges: a vertex is an abstract representation of a room, and an edge exists wherever two rooms are spatially connected (Fig. 1). The recent development of graph neural networks (GNNs) gives us an opportunity to train models on graph-structured datasets and generate semantically realistic floor plans, letting the design industry produce massive numbers of plan variations in a fraction of the time.


Fig.1 Graph Representation of Floor Plans
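
As a concrete illustration, the graph for a small apartment can be written as plain vertex and edge lists (the room names here are invented for the example):

```python
# A toy bubble diagram: vertices are rooms, edges are pairs of rooms
# that are spatially connected (e.g. through a door or opening).
rooms = ["living_room", "kitchen", "bedroom", "bathroom"]
edges = [(0, 1),  # living room <-> kitchen
         (0, 2),  # living room <-> bedroom
         (2, 3)]  # bedroom <-> bathroom
```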

We propose a combined network consisting of three parts (Fig. 2): a Graph Embedding Network, a Message Passing Network, and a Box & Mask Regression Network. The input to the network is a graph, and the output is the plan layout of the rooms. The input graph is initially parameterized as nodes and edges, each represented as a scalar. Through embedding, we lift each scalar representation to a vector, much like a word-embedding layer in natural language processing. The embedded graph information has three parts: embedded nodes (rooms), embedded connections (between each pair of rooms), and a connection register table. The embedded graph is then passed into the Message Passing Network, whose layers perform convolution across the graph so that each node's embedding vector also carries information about its neighbors. The output of message passing is a convolved node-embedding matrix and a convolved connection-embedding matrix, which feed into two regression decoders. The box regression network is a multilayer perceptron (MLP) that generates a bounding box for each room, represented by its x and y ranges on the canvas; the mask regression network is an upsampling CNN that generates a mask indicating which area of the bounding box is the actual room space. By overlaying all rooms, we obtain one predicted floor plan.


Fig.2 Overall Network Architecture 
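
The sketch below shows how these three parts could fit together in PyTorch. It is a minimal illustration, not our exact implementation: the dimensions (EMBED_DIM, MASK_SIZE), the category vocabularies, and the layer sizes are all assumed, and the message-passing module is left as a stub (see the GCN sketch in Section 3).

```python
# Minimal sketch of the three-part pipeline; all names and dimensions
# are illustrative assumptions, not the exact implementation.
import torch
import torch.nn as nn

NUM_ROOM_TYPES = 10   # assumed vocabulary of room categories
NUM_EDGE_TYPES = 2    # assumed vocabulary of connection categories
EMBED_DIM = 128
MASK_SIZE = 16        # assumed side length of the predicted room mask

class GraphToPlan(nn.Module):
    def __init__(self):
        super().__init__()
        # 1) Graph Embedding: lift each scalar category to a vector,
        #    analogous to a word-embedding layer in NLP.
        self.node_embed = nn.Embedding(NUM_ROOM_TYPES, EMBED_DIM)
        self.edge_embed = nn.Embedding(NUM_EDGE_TYPES, EMBED_DIM)
        # 2) Message Passing: stubbed here; the real module also consumes
        #    the edge embeddings and the register table (see Section 3).
        self.gcn = nn.Identity()
        # 3a) Box regression: an MLP predicting (x0, y0, x1, y1) per room.
        self.box_head = nn.Sequential(
            nn.Linear(EMBED_DIM, 512), nn.ReLU(), nn.Linear(512, 4))
        # 3b) Mask regression: an upsampling CNN predicting which cells of
        #     the bounding box are actual room space.
        self.mask_head = nn.Sequential(
            nn.ConvTranspose2d(EMBED_DIM, 64, 4, stride=4),  # 1x1 -> 4x4
            nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, stride=4),          # 4x4 -> 16x16
            nn.Sigmoid())

    def forward(self, room_types, edge_types, edges):
        # room_types: (R,) ints; edge_types: (E,) ints;
        # edges: (E, 2) index pairs into the room list (the register table).
        h_nodes = self.node_embed(room_types)   # (R, EMBED_DIM)
        h_edges = self.edge_embed(edge_types)   # (E, EMBED_DIM), used by the
        h_nodes = self.gcn(h_nodes)             # real message-passing module
        boxes = self.box_head(h_nodes)          # (R, 4)
        masks = self.mask_head(
            h_nodes.view(-1, EMBED_DIM, 1, 1)).squeeze(1)  # (R, 16, 16)
        return boxes, masks
```

Calling the model with a tensor of room-type indices, a tensor of edge-type indices, and the (E, 2) register table of index pairs yields per-room boxes and masks, which are then overlaid into one plan.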

2 Feature Engineering

Our goal is to extract geometric information from structured floor plan data and generate a graph representation that describes the room relations in the plan. The graph-generating algorithm consists of two steps: 1) machine-parsing the structured floor plan data to retrieve the geometry of the relevant elements (i.e. rooms and doors); 2) testing the relationships between rooms pairwise to construct the graph. A sketch of both steps follows Fig. 3.


Fig.3 Plan representation: SVG to graph structure
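
A hedged sketch of the two steps, assuming the structured plans are SVG files whose room and door elements are axis-aligned rectangles tagged with a class attribute; the attribute names and the door-overlap adjacency test are assumptions about the data format, not a description of every dataset:

```python
# Step 1: parse rooms/doors from an SVG; Step 2: pairwise connectivity test.
import itertools
import xml.etree.ElementTree as ET

def parse_svg(path):
    """Retrieve rooms and doors as (x0, y0, x1, y1) rectangles."""
    rooms, doors = [], []
    for el in ET.parse(path).getroot().iter():
        if el.get("class") not in ("room", "door"):
            continue
        x, y = float(el.get("x")), float(el.get("y"))
        w, h = float(el.get("width")), float(el.get("height"))
        box = (x, y, x + w, y + h)
        (rooms if el.get("class") == "room" else doors).append(box)
    return rooms, doors

def overlaps(a, b, eps=1.0):
    """True if two rectangles intersect (grown by eps to bridge wall gaps)."""
    return (a[0] - eps < b[2] and b[0] - eps < a[2] and
            a[1] - eps < b[3] and b[1] - eps < a[3])

def build_graph(rooms, doors):
    """Two rooms are connected if some door touches both of them;
    vertices are room indices, edges are index pairs."""
    pairs = itertools.combinations(range(len(rooms)), 2)
    edges = [(i, j) for i, j in pairs
             if any(overlaps(d, rooms[i]) and overlaps(d, rooms[j])
                    for d in doors)]
    return list(range(len(rooms))), edges
```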

3 Graph Convolutional Networks

The Message Passing Network (Fig. 4) is the core of the entire setup. For each node, it propagates the information about all relations that the node is involved in, and this process is repeated for every node throughout the graph. Concretely, given an input graph with vectors of dimension D_in at each node and edge after embedding, it computes updated vectors of dimension D_out for those nodes and edges. Output vectors are a function of a neighborhood of their corresponding inputs, so each graph convolution layer propagates information along the edges of the graph. At the core of the GCN, we use a combination of two MLP layers and one average-pooling function to perform the information aggregation. The updated vectors for each node and edge are then fed back into the same GCN layer and recurrently updated N times before moving into the box and mask regression networks.

Fig.4 Architecture of the GCN Network
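
Below is a minimal sketch of one message-passing step, following the two-MLP-plus-average-pooling description above. The triple-based update (concatenating subject, edge, and object vectors) and the hidden size are assumptions; reusing the layer recurrently requires d_in == d_out.

```python
# One graph-convolution step: MLP over (subject, edge, object) triples,
# average pooling per node, then a second MLP to refine node vectors.
import torch
import torch.nn as nn

class GConvLayer(nn.Module):
    def __init__(self, d_in, d_out, d_hidden=512):
        super().__init__()
        # MLP 1: maps each triple to candidate subject/edge/object vectors.
        self.mlp1 = nn.Sequential(
            nn.Linear(3 * d_in, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, 3 * d_out))
        # MLP 2: refines each node vector after neighborhood pooling.
        self.mlp2 = nn.Sequential(
            nn.Linear(d_out, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, d_out))
        self.d_out = d_out

    def forward(self, h_nodes, h_edges, edges):
        s, o = edges[:, 0], edges[:, 1]
        triples = torch.cat([h_nodes[s], h_edges, h_nodes[o]], dim=1)
        cand_s, new_edges, cand_o = self.mlp1(triples).split(self.d_out, dim=1)
        # Average-pool the candidate vectors arriving at each node.
        pooled = h_nodes.new_zeros(h_nodes.size(0), self.d_out)
        count = h_nodes.new_zeros(h_nodes.size(0), 1)
        pooled = pooled.index_add(0, s, cand_s).index_add(0, o, cand_o)
        count = count.index_add(0, s, torch.ones_like(count[s]))
        count = count.index_add(0, o, torch.ones_like(count[o]))
        new_nodes = self.mlp2(pooled / count.clamp(min=1))
        return new_nodes, new_edges
```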

4 Loss Function

Loss functions are applied to both the predicted bounding boxes and the predicted masks. We initially adopted the L2 difference for boxes and element-wise binary cross entropy for masks, computed between the ground truth and the prediction. The total loss is a weighted sum of the box and mask losses, with the weights decided through the experiments shown in Fig. 5. These experiments showed that an L1 box loss with a much larger weight produces the best results.


Fig.5 Testing on L1/L2 Losses
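
In code, the combined objective is only a few lines; the specific weight values below are illustrative, not the tuned values from Fig. 5:

```python
# Weighted sum of an L1 box loss and an element-wise BCE mask loss.
import torch.nn.functional as F

def plan_loss(pred_boxes, gt_boxes, pred_masks, gt_masks,
              box_weight=10.0, mask_weight=1.0):
    box_loss = F.l1_loss(pred_boxes, gt_boxes)                # L1 over coords
    mask_loss = F.binary_cross_entropy(pred_masks, gt_masks)  # per-cell BCE
    return box_weight * box_loss + mask_weight * mask_loss
```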

5 Live Demo

Please follow the link http://gcn.luyueheng.com/ and have fun!