AI in crowdsourced design: sourcing collective design intelligence

Imdat As, Prithwish Basu, and Sergey Burukin

In 1983, French President François Mitterrand launched an international competition for the Opera Bastille in Paris as part of his monumental building program known as the Grands Travaux. The competition received 756 entries, and a 3-km-long stretch of drawing sheets was evaluated by the jury (De Haan, Frampton, Haagsma & Sharp, 1988). While one might question how such a large number of design entries can be fairly evaluated, it is a fact that throughout history, seminal buildings were often acquired through competitions—from the Acropolis in Athens, to cathedrals in the Middle Ages, to the Duomo in the Renaissance. In England alone, 2,500 competitions were held in the 19th century (De Jong & Mattie, 1994). Nowadays, most European countries require the use of competitions in obtaining the design of public buildings. With the advent of the Internet and the World Wide Web, and online crowdsourcing platforms such as Arcbazar.com (Arcbazar), even smaller-scale projects, e.g., bathrooms and living rooms, can use the fair competition protocol in obtaining various design solutions. In this chapter, we briefly talk about the “competitions” model, discuss its translation into online crowdsourcing platforms, explore the integration of artificial intelligence (AI) in crowdsourcing processes, and demonstrate how AI—in particular, deep learning—can be used to produce conceptual designs potentially competing in future crowdsourced projects.

Competitions

Competitions are a popular way to acquire design solutions, because architecture is based on the logic of abductive reasoning (Steinfeld, 2017). That is, the solution space in design is not as clearly delineated as in engineering. Designers do not strive for a single correct answer, but instead try to solve the problem with a unique, original, and idiosyncratic solution. No matter how good a design is, there is always room for a better solution. In other words, design can be iterated upon indefinitely. Therefore, competitions have been used throughout history to generate design options to compare, contrast, and choose from.

Nevertheless, one might find it surprising that architects submit to competitions at all, since the chance of winning and converting an idea into brick-and-mortar through this protocol is often lower than 1%. Thus, Louis Kahn argued that competitions are a free offering by architects to the larger community, because the majority of projects never get built and the architects do not get paid (Lipstadt & Bergdoll, 1989). And not just in architecture: in other fields, such as open-source software development, engineers often work for free on a given problem set for the good of the broader public. They get satisfaction if a user downloads their software, and obtain some fame in the community if it becomes more widely used. More recently, with the advent of big data, a more explicit competition model has permeated software, thanks to well-publicized events such as the Netflix Prize in 2006 and a plethora of Kaggle competitions.

Similarly, for most architects design is a passion, which can be explained by the concept of urges or drives in motivational psychology. Adolf Loos claimed that any creative act serves the sublimation of the creator’s urges, and therefore performs functions beyond its apparent value proposition (Gleiter, 2008). Loos’ argument, which was originally made against ornament in modernist architecture, may also explain why designers participate in design challenges at all: Competitions offer battlegrounds on which “creative acts” can outshine one another.

Indeed, many architects have made major breakthroughs in their careers and established their names through competitions. For example, German architect Günter Behnisch, who won the prestigious competition for the Munich Olympic Park in 1968, participated in more than 800 competitions over his career. Farshid Moussavi, a renowned contemporary architect, has participated in more than 200 competitions, and values competitions for generating “creative leaps” (Moussavi, 2013). However, there are also well-known architects who avoided competitions altogether, such as Frank Lloyd Wright or Louis Kahn. According to Lipstadt and Bergdoll (1989), William Robert Ware in 1899 maintained that:

Every competition ... costs the profession hundreds of thousands of dollars, most of which falls upon men who can ill afford the loss. It is cruel and heartbreaking, when fifty or a hundred sets of drawings are submitted for judgement, to consider that... all but one... have labored in vain, and that out of all the schemes only half a dozen can possibly receive any serious consideration... Thus the profession grows and travails night and day, year in and year out, under the strain of sacrifices it can ill afford to make. No wonder that the system of competitions has come to be regarded as a sort of nightmare, as an incubus or vampire, stifling the breath of professional life, and draining its blood.

(p. 15)

Despite such critique, the competition model is well accepted by the larger designer community and undoubtedly forms a major vehicle for the production of important edifices around the globe. A new embodiment of competitions—in the form of online crowdsourcing models—addresses the demand for even smaller-scale design projects and opens up the fair competition protocol to everyday design challenges.

Crowdsourcing

Architectural practice is in constant transition, absorbing new technologies, e.g., novel graphic communication tools, new generative design tools, and construction software. New methods, techniques, and ideas are constantly tested out. This endless flux of change, coupled with the design talent available throughout the world, makes crowdsourcing an attractive means for design acquisition. In general, crowdsourcing makes use of a swarm of experts around the globe to resolve a particular problem. This can range from aggregating and editing Wikipedia entries to solving wicked science problems (innocentive.com) or generating new logos (99designs.com), and many more. Crowds are solicited to “develop a new product


Figure 19.1 Cover image of Arcbazar, an online crowdsourcing platform for architectural design projects (arcbazar.com).

or service, refine a project, calculate or obtain different algorithms, or assist in providing, organizing, or evaluating significant amounts of information in viable data” (Bujor & Avasilcai, 2018). In architecture, crowdsourcing opens up new opportunities to generate design options, and to facilitate collaboration among designers (Figure 19.1).

One such crowdsourcing platform is Arcbazar, a two-sided marketplace, with clients on the demand side and designers on the supply side. It enables clients to launch competitions, and designers to work on various types and scales of architectural challenges. Clients can onboard projects, i.e., provide a short description, upload images and dimensions, and set deadlines and monetary awards. During the competition process, they interact with designers via an anonymous communication interface. On average, each competition receives about 12–13 design entries. All projects are viewable on the platform and can be rated by fellow designers. The ratings are based on idea, aesthetics, function, buildability, and graphic sophistication. Similar to traditional competitions, projects are executed anonymously; i.e., the client does not know who the designers are and vice versa, in order to keep the evaluation process strictly fair and merit-based. However, every project contains an analytics page, which displays general real-time data about the location of designers, their education, gender, submissions, etc. (Figure 19.2).

Competing vs collaborating

Traditionally, competitions provide a level playing field where teams or individuals compete against one another. The process is competitive and is based on prescribed rules and regulations that ideally facilitate a fair battleground. However, even in the best-case scenario, there is only one clear winner, and all other participants are by definition on the losing end. In other words, all but the winner experience some sense of jealousy, disbelief, or disappointment. Therefore, the question is: Is there a possibility to have multiple winners in crowdsourcing models? Moreover, would it be possible for a designer to take a design from a competitor and iterate it further? What intellectual property protections would be required?

Designers often build up teams to participate in competitions. Every team member brings preferably something complementary to the table. The team works together, shares the


Figure 19.2 Screenshots showing the cover images of design submissions to a competition (left), and the project analytics page (right).

burden, and eventually benefits or suffers equally from the project’s success or failure. Is it possible to turn the competition model into a collaborative one, where participants do not compete but cooperate on projects? At Arcbazar, we explored the use of two promising models: (a) the exchange model and (b) the iterative model (Figure 19.3).


Figure 19.3 Two alternative collaborative models in crowdsourcing: the exchange model (top) and the iterative model (bottom).

Exchange model

This method of team building happens through direct complementary contributions; i.e., designer A produces a base drawing, and designer B uses the base drawing and takes designer A onto the team for an agreed-upon equity share. Complementary work can range from drawings to environmental consulting, engineering input, partial design solutions, or any other project-related help. Designer A, in this way, can become part of multiple teams that agree on the terms put forth. Designers who do not have much time on their hands, or lack expertise in certain areas, can still add a “brick to the wall” and potentially become part of a winning team. This significantly lowers the barrier for designers to enter a competition.

Iterative model

This method is a two-staged crowdsourcing protocol. In the first phase, all designers submit their designs, and the projects are evaluated and ranked by the client. In the second phase, all designs are open for reuse by other designers. The knowledge produced in the first stage is not lost but developed further.

These models may sound counterintuitive, but by comparison, in the field of science, for example, a written article often has multiple authors, and the order of authors reveals the degree of each individual’s contribution to the work. In a similar way, entries in crowdsourcing projects can be “authored” by multiple designers based on their level of contribution. When, and if, the entry wins a competition, the award is shared according to the set equity distribution. If the design gets built, the team of designers gets credited as its authors. The collaborative models put forth aim to facilitate a fair crediting mechanism for designers, in order to develop an objective framework that allows collective design intelligence to be harvested from aggregate design.

Evaluating design entries

One of the most controversial issues in competitions is the evaluation of design entries. According to Moussavi, the evaluation process in competitions has less to do with merit than with the “theatre of unpredictability within which competitions unfold” (Moussavi, 2013). There is no particular framework that could be applied uniformly and objectively to each project. Therefore, evaluations in traditional competitions often follow the tournament model; i.e., designs are judged comparatively and eliminated one by one until a clear winner emerges. Or, projects are judged on an additive basis, i.e., adding up virtues, experimental quality, and innovation, and the one with the highest aggregate value gets selected. In either case, there are a limited number of possible outcomes for entries: (a) The design wins the competition, (b) the design loses, (c) the design gets built by repurposing it for another project, or (d) the design becomes part of a new solution by another architect. All but one project will end up in the latter three categories.

In crowdsourcing projects, on the other hand, evaluations are based on quantitative and qualitative design criteria. Quantitative criteria involve voting procedures among designers, and the family and friends of the client. Altogether, they provide a total score, which the system uses to rank the projects. This type of automation in evaluation is essential, since the number of entries can often become overwhelming, as in the example mentioned above, where 756 entries were submitted to the Opera Bastille competition in Paris. It is practically impossible to have a fair evaluation through traditional juries with such a large number of entries, no matter how good the jury, regulations, intentions, and organization might be. Quasi-automated evaluation mechanisms can offer a solution for this type of problem. Qualitative criteria, on the other hand, are derived from the client’s own critique, written feedback from experts, and opinions from family and friends. In this setting, the client is empowered to make an informed decision based on these data points. The evaluation process overall becomes transparent, and the competition outcome remains merit-based.

Tracking designer performance

Performing well in crowdsourcing projects does not always mean winning a contest. It can be more nuanced. For example, awards can be distributed in a more equitable way. Design entries can be evaluated, scored, and ranked automatically, and an award can be given not only to the top three designers, but distributed among all entrants based on relative scores. In this scenario, every qualified entry could get a share of the award. In other words, if there are, say, ten design submissions, each designer receives a proportional percentage of the total award—based on their final scores. In addition to monetary rewards, designers can also collect points for various acts, e.g., signing up for a competition, submitting their entry, making peer evaluations, sharing their work, and consulting project owners. At Arcbazar, these points define the history and ranking of designers on performance charts. The charts can be filtered by the location or specialty of designers, e.g., charts of top European designers, US designers, and landscape designers (Figure 19.4). Thus, architects can improve their standings by contributing to the larger community. In fact, it has been argued that “peer consumption and feedback are important motivators of participation in crowdsourcing operations and online communities in general” (Keslacy, 2018, p. 311).
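As a simple illustration of the proportional split described above, the following sketch divides a hypothetical award among entrants according to their final scores. The designer names and score values are invented, and this is not Arcbazar's exact formula; it only shows the arithmetic of a score-weighted payout.

```python
def split_award(total_award: float, scores: dict) -> dict:
    """Split a competition award among entrants in proportion to their final scores
    (illustrative only; not Arcbazar's production formula)."""
    total_score = sum(scores.values())
    if total_score == 0:
        # fall back to an even split if no entry received any points
        return {d: total_award / len(scores) for d in scores}
    return {d: total_award * s / total_score for d, s in scores.items()}

# Example: ten submissions sharing a hypothetical $2,000 award
scores = {f"designer_{i + 1}": s for i, s in enumerate([88, 75, 75, 60, 52, 50, 44, 40, 33, 20])}
shares = split_award(2000, scores)
print(round(shares["designer_1"], 2))  # the highest-scoring entrant receives the largest share
```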


Figure 19.4 Screenshot of designer charts, showing the ranking of top designers on Arcbazar.

Arcbazar statistics

As of September 2020, Arcbazar has gathered more than 30,000 projects worldwide and collected more than 300,000 renderings, drawings, and videos, as well as millions of communication threads (see arcbazar.com/map). The types of competitions on the platform are distributed as follows: 40.6% remodeling, 16.4% landscape design, 15.7% interior design, 13.3% new residential, 12.5% commercial, and 1.5% institutional projects. 41.5% of projects were won by designers from Europe, 27.6% from the USA, and 21.2% from Asia, with the rest shared among designers from Africa and Oceania (Figure 19.5).

About 15 million smaller-scale projects are remodeled each year in the USA alone. However, 89% of these projects are executed without an architect. Designs are either drafted by contractors or imagined by project owners themselves. Crowdsourcing lowers the barriers for such projects to benefit from professional design help. It can become an important vehicle to spread competitive design to wider segments of society and enlarge the potential design footprint of architects all around the globe.

Artificial intelligence

Recent developments in AI offer exciting opportunities to improve the overall crowdsourcing experience. In particular, we present three use cases of AI below: first, in recommending award amounts; second, in surveying existing spaces; and third, in generating novel conceptual designs through deep learning.

Price recommendation system

One of the most common issues for clients on Arcbazar has been the question of the competition award. The problem is: What is an optimal award amount that is high enough to attract designers to participate in a competition, yet low enough for clients to launch the project? We developed an AI-based pricing recommendation system that looks into 53 feature dimensions of previously run competitions, such as type, number of submissions, award amounts, honorable mentions issued, bonuses given, and the level of communication between clients and designers. In a nutshell, the system looks at the performance of previous projects in real time and suggests an award to new clients. However, the award suggestion is only a recommendation, and the client is free to set any award amount above the set minimum. Typically, the client’s decision on the award is based on a combination of objectives: budget, mood, time limits, estimations, expectations, etc. The recommendation system has to take these objectives into account and output an award amount that satisfies both clients and designers.

The price recommendation system consists of (a) data analytics and (b) machine learning (ML) model selection and evolution. We looked into the quantitative data of high-performing projects on Arcbazar and discovered how various aspects of competitions are interconnected. The analysis gave us an idea of the type of data sources we should use to train the ML system. The accuracy of predictions, however, showed that the quantitative dataset was not entirely satisfactory in and of itself; therefore, we also looked into qualitative data, i.e., text description fields written by clients that contain latent objectives. At Arcbazar, a client describes their project quantitatively, e.g., scale, type, and size, and qualitatively, e.g., textual descriptions and comments. The full qualitative and quantitative portrayal of a project provided a more comprehensive representation of the competition performance and gave us a better


Figure 19.5 Screenshot of Arcbazar's general analytics page, showing the distribution of type and location of projects, location of designers, submission rates, etc. (arcbazar. com/map).


Figure 19.6 Graph comparing recommendation systems. The relationship between the actual awards issued on Arcbazar and those predicted by the recommendation system is shown in the diagram. The left one was built using a random forest regressor that processed only numeric features from the dataset. The right one was built using a multilayer perceptron (DNN) regressor with the dataset enriched through natural language processing tools.

chance to find an optimal award amount range. Therefore, we applied natural language processing methods to the text fields, which increased the number of data fields used in training the ML system and improved the overall prediction accuracy (Figure 19.6). We went through some trial-and-error stages. First, we used only numeric and numeric-like data and tested several ML algorithms on different splitting ratios for the training and validation datasets, ranging from 50/50 to 90/10, respectively. The random forest regressor gave us the best results. Second, we added preprocessed text values from the competition brief text fields and repeated the tests of several ML algorithms on different splitting ratios. This approach helped reduce overfitting and resulted in an optimal competition award recommendation system for Arcbazar.
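The sketch below compresses this two-stage experiment into scikit-learn code. It assumes a hypothetical competitions.csv with a handful of numeric project features, a free-text brief column, and the issued award as the target; the file, column names, and model settings are illustrative placeholders, not the production system or its 53 features.

```python
import pandas as pd
from scipy.sparse import csr_matrix, hstack
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Hypothetical dataset: numeric project features plus the client's brief text.
competitions = pd.read_csv("competitions.csv")  # assumed file
numeric_cols = ["room_count", "size_sqft", "duration_days", "num_images"]  # illustrative
X_num = competitions[numeric_cols].to_numpy()
y = competitions["award"].to_numpy()

# Stage 1: numeric features only, random forest regressor.
X_tr, X_te, y_tr, y_te = train_test_split(X_num, y, test_size=0.2, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("RF MAE:", mean_absolute_error(y_te, rf.predict(X_te)))

# Stage 2: enrich the dataset with NLP features from the brief, MLP (DNN) regressor.
tfidf = TfidfVectorizer(max_features=500)
X_text = tfidf.fit_transform(competitions["brief"].fillna(""))
X_all = hstack([csr_matrix(X_num), X_text]).toarray()
X_tr, X_te, y_tr, y_te = train_test_split(X_all, y, test_size=0.2, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0).fit(X_tr, y_tr)
print("MLP MAE:", mean_absolute_error(y_te, mlp.predict(X_te)))
```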

Surveying existing spatial conditions

One of the bottlenecks in crowdsourcing is the need for accurate dimensions of the spaces in question. Clients not only write their competition brief, upload images, and decide on a timeline and award amount, but also have to provide accurate dimensions of their existing spaces. Sometimes they have a blueprint of their home, which they can photograph and upload. But often clients do not possess this critical piece of information, which leaves them with two options: Either they sketch the dimensions of the space(s) on a piece of paper and provide it to designers, or they have to hire an architectural surveyor to produce accurate dimensions of the space. This hurdle creates a barrier for many clients to launch a competition.

A simpler solution is to extract measurements straight from images. There are well-known companies working on solutions in photogrammetry to turn two-dimensional images into three-dimensional models, such as Autodesk ReCap or Rhino PhotoModeler. One has to provide a series of images covering the entire object or space and stitch them together manually. However, photogrammetry has been augmented with AI, for example, in aerial drone imagery to reconstruct larger urban areas or architectural heritage sites. Iconem—a French photogrammetry company—in collaboration with Microsoft AI is automatically stitching thousands of drone images together and thereby reconstructing accurate 3D digital models of entire historic heritage sites that are threatened by war—in order to record and


Figure 19.7 Artificial intelligence-based photogrammetry may identify dimensions and labels from single images (top left) and generate an orthographic drawing of the space (top right) (Courtesy: hostalabs.com).

archive edifices; for example, Iconem digitized the ancient city of Palmyra, Syria, before it was destroyed during the ongoing conflicts in the Middle East.

The larger challenge, however, is to construct a 3D model from a single image, or a few images that have been uploaded by a client and do not fully describe a space or object. This is a difficult task, because an image is a projection of a 3D space onto a 2D picture plane, and a lot of spatial data are lost in this compression. An AI system can potentially assist in completing the 3D model with prior knowledge of similar spaces and of furniture, fixtures, and equipment, and predict accurate measurements from a single picture. In an ideal scenario, the client uploads an image, and the system predicts and interpolates major dimensions and generates a 3D model that designers can import into their software tools to jump-start the design process (Figure 19.7). We experimented with various emerging AI technologies, which at this point did not yield accurate enough dimensions to run a competition on Arcbazar. The goal, however, is to bring the accuracy to within about ±2 cm, at which point it will be feasible to onboard projects; or, alternatively, to develop a new interface for clients where they can upload a complete set of images and use current technologies to stitch the images together and generate a 3D model.

Generating conceptual designs

In the late 19th century, the city of Quebec, Canada, organized a competition for its city hall. There were six design entries, none of which satisfied the jury. The city decided to produce a composite design made from bits and pieces of all entries. The final design by Georges-Émile Tanguay became a “Frankenstein” composition, incorporating Romanesque, neoclassical, and neo-Gothic features. Such a process was quite common in historical competitions. Today, stitching together a new design from various competition entries is certainly not considered acceptable or ethical. However, one could argue that decomposing projects and recombining the best aspects of each design entry into a new composition may offer an ideal solution for a given design problem.

In 2017, we worked on a Defense Advanced Research Projects Agency (DARPA)-funded project to use AI to generate conceptual design compositions. The goal of the research was to train deep neural networks (DNNs) with design data from Arcbazar to compose new conceptual designs—by piecing together high-performing building blocks from past projects in the existing design database (Figure 19.8). There have been extensive developments in the field of deep learning over the last decade. Deep neural networks have been successfully used in a wide range of real-world applications. In contrast to rule-based systems, DNNs do not need to be programmed upfront, but can decipher rules by examining large amounts of data (Steinfeld, 2017). For example, one can train a DNN with millions of cat images and use it to label cats in new images. This is especially important for


Figure 19.8 Diagram showing the workflow of discovering latent building blocks from home designs via deep learning.

self-driving cars, where the real-time discrimination of objects in video feeds, such as other cars, trucks, and walking or biking people, makes the difference between a car safely maneuvering through traffic or not.

Graph-based representation of architecture

Traditionally, architecture is represented through drawings, e.g., plans and sections, or through more sophisticated and information-rich building information models (BIMs). However, for this study, we represented architectural design using attributed graphs. We focused on the representation of essential elements of architecture, i.e., spaces (or rooms) of various types and the adjacency relationships that tend to occur in real conceptual design. We collected design data from BIMs and converted them into graphs in the following manner: (a) Nodes represent particular room types, e.g., bedroom and bathroom, with attributes such as area, volume, and perimeter; and (b) edges between nodes represent the connection type between rooms, e.g., a door connection, an open connection, or vertical connections such as stairs, ramps, and elevators. We annotated the type of rooms, the type of relationship between rooms, and evaluation scores based on various functional performance criteria, such as human-provided scores for livability (rating how well the living/family spaces were designed) or sleepability (rating how well the bedroom quarters were designed). Even though we limited our annotations to this narrow set of attributes, graph representations can be easily expanded with additional data, such as type of furniture, lighting fixtures, and color. In order to represent more detailed information, one would need to create auxiliary nodes that capture containment relationships within a subgraph. In short, graph representations can be expanded to contain more detail, if those details are available. We used a novel application of graph-based DNNs, i.e., a supervised graph convolutional neural network, and trained DNNs to dissect home designs (graphs) into essential building blocks (subgraphs) and recompose them into new assemblies. Our early results revealed that DNNs are capable of extracting high-performing, function-driven building blocks from design data (As, Pal & Basu, 2018).
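A minimal sketch of this encoding using networkx attributed graphs is shown below. The room labels, attribute keys, and score values are illustrative stand-ins for the BIM-derived annotations described above, not an excerpt of the actual dataset.

```python
import networkx as nx

# One home design as an attributed graph: nodes are rooms, edges are connections.
home = nx.Graph(livability=67, sleepability=54)  # graph-level performance scores (illustrative)

home.add_node("2_Living_479", room_type="Living", floor=2, area=479)
home.add_node("2_Dining_308", room_type="Dining", floor=2, area=308)
home.add_node("2_Kitchen_82", room_type="Kitchen", floor=2, area=82)
home.add_node("2_Terrace_1951", room_type="Terrace", floor=2, area=1951)
home.add_node("2_Foyer_39", room_type="Foyer", floor=2, area=39)

home.add_edge("2_Living_479", "2_Dining_308", connection="open")
home.add_edge("2_Dining_308", "2_Kitchen_82", connection="door")
home.add_edge("2_Living_479", "2_Terrace_1951", connection="door")
home.add_edge("2_Foyer_39", "2_Living_479", connection="door")
```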

Training DNNs

In order to set up the DNN, we divided the home designs into two datasets, one for training and the other for testing. We trained the DNN with both design data on homes and their corresponding performance scores on livability. We then ran a regression test on the remaining homes, which the DNN had not encountered before. For example, the original livability scores we had assigned to three random homes were 51, 32, and 67 (on a scale of 1–100). The DNN predicted them as 51.2, 24.5, and 67.2 in the test. The original scores were based on subjective evaluations given by reviewers, and therefore, it was astounding to see that the DNN was able to predict them so closely.
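A schematic sketch of such a regression setup is given below, assuming the design graphs have already been converted to row-normalized adjacency matrices and one-hot room-type features. The toy_graph helper, hyperparameters, and network are generic placeholders for a graph convolutional regressor; they are not the exact architecture used in the project.

```python
import torch
import torch.nn as nn

class GCNRegressor(nn.Module):
    """Two graph-convolution layers, mean pooling, and a linear head predicting one score."""
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hidden)
        self.w2 = nn.Linear(hidden, hidden)
        self.head = nn.Linear(hidden, 1)

    def forward(self, adj, feats):
        # adj: (n, n) row-normalized adjacency with self-loops; feats: (n, in_dim) room-type one-hots
        h = torch.relu(self.w1(adj @ feats))
        h = torch.relu(self.w2(adj @ h))
        return self.head(h.mean(dim=0))  # one graph-level score, e.g., livability

def toy_graph(n, score, in_dim=20):
    """Stand-in for a prepared design graph: random symmetric adjacency and dummy features."""
    a = (torch.eye(n) + torch.rand(n, n).round()).clamp(max=1)
    a = (a + a.T).clamp(max=1)
    a = a / a.sum(dim=1, keepdim=True)  # row-normalize
    return a, torch.eye(n, in_dim), float(score)

train_set = [toy_graph(6, 51), toy_graph(8, 32), toy_graph(5, 67)]  # replace with real design graphs

model = GCNRegressor(in_dim=20)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    for adj, feats, score in train_set:
        opt.zero_grad()
        loss = loss_fn(model(adj, feats), torch.tensor([score]))
        loss.backward()
        opt.step()
```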


Figure 19.9 Discovering high-performing subgraphs, i.e., building blocks, in a graph representation of a house.

Identifying high-performing building blocks

Afterward, we used the DNN to identify subgraphs that responded well to a particular functional performance criterion, i.e., to detect essential function-driven building blocks. For example, the system detected the following building block as a high-performing subgraph responding to livability: [“2_Kitchen_82,” “2_Foyer_39,” “2_Pantry_26,” “2_Terrace_1951,” “2_Bath_26,” “2_Living_479,” “2_Dining_308”]. The string “2_Foyer_39” means a foyer on the second floor with an area of 39 sf (Figure 19.9). We assume that the DNN classified this building block as high-performing because it contains a living room that is quite spacious, at 479 sf, is situated next to a dining room, and opens up to a large terrace.
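One naive way to approximate this building-block search, assuming a trained scoring function such as the regressor sketched above, is to enumerate small connected room subsets and keep the highest-scoring ones. The sketch below is only an illustrative proxy; the actual work dissects graphs via the graph convolutional network itself rather than brute-force enumeration.

```python
import itertools
import networkx as nx

def candidate_blocks(home: nx.Graph, max_size: int = 7):
    """Yield small connected room subsets of a home graph as candidate building blocks."""
    nodes = list(home.nodes)
    for k in range(3, max_size + 1):
        for combo in itertools.combinations(nodes, k):
            sub = home.subgraph(combo)
            if nx.is_connected(sub):
                yield sub

def best_blocks(home: nx.Graph, score_fn, top_k: int = 3):
    """Rank candidate subgraphs by a functional score, e.g., predicted livability."""
    scored = [(score_fn(sub), sub) for sub in candidate_blocks(home)]
    scored.sort(key=lambda t: t[0], reverse=True)
    return scored[:top_k]

# Usage (hypothetical): best_blocks(home, score_fn=lambda sub: trained_model_score(sub))
```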

Merging building blocks into larger assemblies

Figure 19.10 shows two building blocks (left column) with high scores discovered for each of the two separate functional targets, i.e., livability and sleepability. To discover these building blocks, DNNs were trained separately on each functional target but with the same set of design samples. Next, we merged discovered building blocks into larger compositions. If, for example, someone wants to compose a new home that performs well on both livability and sleepability, the DNN simply discovers essential building blocks based on these functional targets and merges them along edges via graph-merging algorithms (Ehrig & Kreowski, 1979). If there are nodes or edges that are typical in home designs but are missing in the discovered building blocks, we can add auxiliary ones to fill these gaps.
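A minimal sketch of the merge step with networkx is shown below, assuming the two building blocks share an identically labeled Dining node; nx.compose takes the union of the graphs and fuses nodes with the same label, which is the seam the merge relies on. The room labels and connection attributes here are invented for illustration.

```python
import networkx as nx

# Hypothetical building blocks discovered for two functional targets.
livability_block = nx.Graph()
livability_block.add_edges_from([
    ("Living", "Dining", {"connection": "open"}),
    ("Living", "Terrace", {"connection": "door"}),
])

sleepability_block = nx.Graph()
sleepability_block.add_edges_from([
    ("Bedroom", "Closet", {"connection": "door"}),
    ("Bedroom", "Dining", {"connection": "door"}),  # shared Dining node acts as the seam
])

# Merge along shared node labels (graph union with node identification).
composition = nx.compose(livability_block, sleepability_block)
print(sorted(composition.nodes))  # Bedroom, Closet, Dining, Living, Terrace
```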

Vector embeddings

The process of adding auxiliary edges and nodes to new compositions in a mathematically principled manner is based on a method of embedding rooms in various design samples onto a latent vector space while preserving both the similarity of room types across the design


Figure 19.10 Discovered building blocks specific to the target functions of livability and sleepability, and merging discovered building blocks into larger design assemblies.

samples and the proximity of various types of rooms appearing inside each design. This was performed by a DNN-based method for representation learning on attributed graphs. All design graphs with annotated room attributes (“type” in this case) were merged into a single larger graph, which served as the input to a DNN. In this way, the DNN learned a multi-dimensional vector representation of each node (Figure 19.11). Vector representations of nodes depend on their type as well as their relative proximity to other types of nodes. As Figure 19.11 demonstrates, nodes corresponding to each type of room tend to cluster


Figure 19.11 Deep neural network-based representation learning of types of rooms in a latent vector space while obeying proximities of types of rooms in design samples.

together, since their types are identical. More interesting, however, is that certain clusters of nodes, e.g., Bedrooms, tend to be closer to some clusters, e.g., Closets, Baths, Balconies, and Corridors, and not to others, such as Entrances and Dining rooms. Thus, the latent embedded vectors tend to reflect the average proximity of various types of rooms in the design samples. Also note that the clusters corresponding to Living rooms, Dining rooms, and Terraces are very close to each other and at times overlapping. This is because most of the design samples had these types of rooms adjacent to each other. Vector embedding essentially exposes such latent design rules.
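The sketch below illustrates the general idea with a DeepWalk-style pipeline: random walks over a tiny, invented merged design graph are fed to gensim's Word2Vec so that room types that co-occur along walks receive nearby vectors. The chapter's DNN-based embedding of individual room nodes is analogous in spirit but not identical in method, and for brevity this sketch learns one vector per room type rather than per node.

```python
import random
import networkx as nx
from gensim.models import Word2Vec

def random_walks(G: nx.Graph, walks_per_node: int = 10, walk_len: int = 8):
    """Generate room-to-room random walks, emitted as sequences of room types."""
    walks = []
    for _ in range(walks_per_node):
        for start in G.nodes:
            walk, node = [start], start
            for _ in range(walk_len - 1):
                nbrs = list(G.neighbors(node))
                if not nbrs:
                    break
                node = random.choice(nbrs)
                walk.append(node)
            walks.append([G.nodes[n]["room_type"] for n in walk])
    return walks

# Tiny invented stand-in for the merged graph of all annotated designs.
merged = nx.Graph()
rooms = {
    "A_Living": "Living", "A_Dining": "Dining", "A_Bedroom": "Bedroom", "A_Corridor": "Corridor",
    "B_Living": "Living", "B_Terrace": "Terrace", "B_Bedroom": "Bedroom", "B_Bath": "Bath",
}
merged.add_nodes_from((n, {"room_type": t}) for n, t in rooms.items())
merged.add_edges_from([
    ("A_Living", "A_Dining"), ("A_Living", "A_Corridor"), ("A_Corridor", "A_Bedroom"),
    ("B_Living", "B_Terrace"), ("B_Living", "B_Bedroom"), ("B_Bedroom", "B_Bath"),
])

emb = Word2Vec(random_walks(merged), vector_size=16, window=3, min_count=1, sg=1).wv
print(emb.most_similar("Bedroom", topn=3))  # room types that tend to sit near Bedrooms
```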

We used vector embedding to discover auxiliary edges or nodes that might be missing in new compositions. For example, the building block HL for livability and the building block HS for sleepability shown in Figure 19.10 (left column) have the Dining room node in common. We merged these two subgraphs along the Dining room node to form a larger graph. However, this procedure leaves the Bedroom reachable only through the Terrace, which is not ideal. To fix this problem, we computed the probability of connecting various types of rooms in HL to other types of rooms in HS. Since the vector embedding reveals that the Bedroom cluster is close to the Corridor cluster, our composition algorithm added an auxiliary edge between Bedroom and Corridor with high probability (Figure 19.10, middle).

Note that in case there are no obvious candidate pairs of rooms to be connected by an auxiliary edge, we may need to add new nodes, e.g., rooms, to the composition. The rooms to be added can be determined by examining the vector embedding (Figure 19.11). For example, if the building blocks contain Bedrooms and Living rooms but no “connective” rooms such as Corridors, the latter type of room is needed to connect the former types. This can be seen as the problem of finding a path from the Bedroom cluster to the Living room cluster. Thus, an intermediate room of type “Corridor” can be added to the composed design in an algorithmic fashion. Or, for example, as seen in the composition of HL with HS, the resulting graph has no kitchen; in such a situation, a Kitchen node can be added (Figure 19.10, right) through vector embedding—which places Kitchen spaces close to Dining, Terrace, and Entrance nodes.
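The following sketch shows how embedding proximity can drive this choice, assuming room-type vectors such as those learned above: candidate pairs of room types, one from each building block, are ranked by cosine similarity, and the closest pair receives the auxiliary edge. The helper name and the example room sets are hypothetical, and a real system would convert similarities into calibrated connection probabilities.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def best_auxiliary_edge(types_a, types_b, emb):
    """Pick the pair of room types (one per block) whose embeddings are closest,
    as a proxy for 'rooms of these types are usually adjacent in the design corpus'."""
    pairs = [(cosine(emb[a], emb[b]), a, b) for a in types_a for b in types_b if a != b]
    return max(pairs)  # (similarity, type_in_block_A, type_in_block_B)

# e.g., connecting the sleepability block to the livability block:
# best_auxiliary_edge({"Bedroom", "Bath"}, {"Corridor", "Dining", "Terrace"}, emb)
```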

Validating compositions

Afterward, we inspect whether new compositions break any geometric constraints. The subgraphs may work well in themselves, but when put together, they may form impossible assemblies. Therefore, we applied techniques to determine the fitness of generated designs, such as planarity constraints (Boyer, 2006). Once a solution has been validated, it can be converted into two-dimensional orthographic drawings or three-dimensional massing models through an algorithm that stacks nodes by obeying area and volume attributes, proximities, and connection types. The stacking can occur within a constrained building envelope, for example, as would be necessary if a new design had to fit into an existing building; or it can be looser, if there are no such spatial restrictions.
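As a minimal example of the planarity constraint, networkx's linear-time planarity check can reject room-adjacency graphs that cannot be drawn without edge crossings, a necessary but not sufficient condition for realizing a composition as a floor plan. Real validation would layer area, proximity, and envelope constraints on top of this; the sketch below only shows the planarity gate.

```python
import networkx as nx

def validate_composition(composition: nx.Graph) -> bool:
    """Reject compositions whose room-adjacency graph is non-planar."""
    is_planar, _certificate = nx.check_planarity(composition)
    return is_planar

# K5 (five mutually adjacent rooms) is non-planar, so it would be rejected:
assert not validate_composition(nx.complete_graph(5))
```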

The long-term vision of this research is to develop an AI engine that can automate conceptual design entirely. For example, a client defines a building program and provides the location of the project. The AI engine infers local climate data, lot boundaries, zoning regulations, building codes, etc., and generates a series of design options. If desired, these designs can be wrapped in a particular architectural style, e.g., classical, modern, or the styles of renowned architects, and tailored to a unique massing model (Figure 19.12). The client can then pick one of the designs and hire a local contractor for implementation. Eventually, AI design bots may even participate in real-world crowdsourcing projects and become incorporeal competitors to corporeal designers.


Figure 19.12 Empire State building in New York City, reimagined in different architectural styles (Courtesy: HomeAdvisor).

Conclusion

In this chapter, we discussed the reincarnation of traditional competitions in online crowdsourcing platforms—taking the fair and open competition protocol to smaller-scale projects around the globe. We elaborated on competitive vs collaborative models of design acquisition and discussed the integration of various AI technologies into the crowdsourcing funnel, such as price recommendation and space-surveying systems. Furthermore, we elaborated on how AI—in particular, deep learning—can “read” architecture (through graph representations) and potentially generate conceptual designs.

Architects are increasingly relying on cloud services, advanced software tools, and ambient design knowledge provided by smart apps, which capture and process our surroundings, draft ideas, assist with design direction, and eventually may even design for us. Sooner or later, we will face the existential question: “What is our role as architects?” Anything that can somehow be quantified will eventually be performed better, faster, and more efficiently through automation. In the short term, designers may benefit from AI-driven software tools to jump-start their design process. In the long run, however, there may be consumer tools used directly by clients to generate and implement various design solutions. These developments will inevitably have significant bearings on the architectural profession. New practice models may emerge. In an interview with Crosbie (2018), we speculated that:

... perhaps architectural practice could follow other creative fields, such as the music industry. For example, say, Frank Gehry develops a “style” and whoever uses his language through an AI-driven system pays him a royalty fee. Gehry in that way might “design” millions of structures all around the world...

New media, cutting-edge technologies, and novel forms of executing architecture do not necessarily have to spell the demise of architects; quite the contrary, they can help magnify the potential design footprint of architects on many more projects all around the world.

Acknowledgments

This research was supported in part by the Defense Advanced Research Projects Agency (DARPA) under contract number HR001118C0039. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of Raytheon BBN and DARPA. This document does not contain technology or technical data controlled under either the US International Traffic in Arms Regulations or the US Export Administration Regulations.

References

As, I., Pal, S., & Basu, P. (2018). Artificial intelligence in architecture: Generating conceptual design via deep learning. International Journal of Architectural Computing, 16(4), 306–327.

Boyer, J. M., & Myrvold, W. J. (2006). Simplified O(n) planarity by edge addition. Graph Algorithms and Applications, 5, 241.

Bujor, A., & Avasilcai, S. (2018). Innovative architectural design development: The Arcbazar creative crowdsourcing contests perspective. In MATEC Web of Conferences (Vol. 184, p. 04002). EDP Sciences.

Crosbie, M. (2018, September 17). Doom or Bloom: What Will Artificial Intelligence Mean for Architecture. Retrieved from https://commonedge.org/doom-or-bloom-what-will-artificial-intelligence-mean-for-architecture/

De Haan, H., Frampton, K., Haagsma, I., & Sharp, D. (1988). Architects in Competition: International Architectural Competitions of the Last 200 Years. Thames and Hudson.

De Jong, C., & Mattie, E. (1994). Architectural Competitions, Vols. 1–2. Taschen.

Ehrig, H., & Kreowski, H. J. (1979). Pushout-Properties: An analysis of gluing constructions for graphs. Mathematische Nachrichten, 91(1), 135—149.

Gleiter, J. H. (2008). Das neue Ornament: Zur Genealogie des neuen Ornaments im digitalen Zeitalter [The new ornament: On the genealogy of the new ornament in the digital age]. Arch+, 189, 78–83.

Keslacy, E. (2018). Arcbazar and the Ethics of Crowdsourcing Architecture. Thresholds, 46, 300—317.

Lipstadt, H., & Bergdoll, B. (Eds.). (1989). The Experimental Tradition: Essays on Competitions in Architecture. Princeton Architectural Press.

Moussavi, F. (2013). Creative leaps in the arena of architectural competitions. Architectural Review, 233(1392), 27–28.

Steinfeld, K. (2017). Dreams may come. In ACADIA 2017: Disciplines & Disruption (Proceedings of the 37th Annual Conference of the Association for Computer Aided Design in Architecture) (pp. 590–599). Cambridge, MA.

 