Matching algorithms to assist in designing with reclaimed building elements

Reuse of building components is one of the recommended circular strategies to reduce the environmental impact of new buildings. However, reclaimed building components are more difficult to design with than new products. While new products can be made to match exact needs, salvaged components have predefined dimensions and quality limitations. Following the Design Science Research methodology, we attempt to answer how design with reused elements can be aided by a digital design tool. The developed matching algorithms suggest the optimal assignment of available elements for the desired configuration, considering user-defined constraints and optimisation criteria. In the test cases, we seek to optimise the global warming potential of timber framing elements, defined by life cycle assessment, though the tool is not limited to this objective. The implementation includes greedy algorithms, bipartite graphs, and mixed integer linear programming. The usefulness of the proposed solution is evaluated on simulated sets of building elements in terms of embodied emission reduction and speed of calculation. The paper contributes methodologies, algorithms, and test cases for assessing their performance. Practitioners can apply the proposed solution to reduce the time spent designing with salvaged materials, which can lead to the popularisation of circular design.


Introduction
With the planet at risk from anthropogenic activities, the circular economy (CE) is deemed one of the remedies to lessen this environmental impact [1,2]. CE is defined as an economy that is 'restorative and regenerative by design, and which aims to keep products, components, and materials at their highest utility and value at all times . . .' [3]. CE can be broken into three parts: slowing the material loops, by using things longer; closing, by reusing; and narrowing, by using less [4,5]. To successfully reduce waste and resource demand by shifting towards a global CE, thereby complying with the Paris Agreement [6], the ongoing transition needs to speed up significantly.
In the context of buildings, which are one of the primary sources of greenhouse gas (GHG) emissions [7], CE can address three of the industry's environmental burdens: it helps to deal with the scarcity of materials, it reduces GHG emissions from material production, and it limits the amount of waste.
According to Addis, the reuse of salvaged components is the most challenging type of reuse from a design point of view [8]. This complexity stems mostly from the constraints that the predefined geometry and material properties impose on the design. Moreover, additional factors such as cost, time frames, and end-user perception hinder a broader enactment of designing with reused elements [9].
In 2021, a building was constructed in Oslo, Norway, at Kristian Augusts gate 13 (KA13), with a substantial part of reclaimed building materials. It achieved 70% lower emission of climate gasses compared to an equivalent building from new material [10]. For some of the reused materials, a total of 89%-98%

DSR
According to Simon, DSR is driven by the desire to improve practice by introducing an innovative solution, referred to as the artefact [19]. That is the case for our research; hence, we adopt DSR as our methodology.
Good DSR, apart from its contribution, is characterised by the justification in both relevance and rigour [20]. To conform to these objectives, we follow Hevner's Three Cycles View approach to DSR [21].
The first of the three cycles, the relevance cycle, is about ensuring that the object of study contributes to solving an existing problem in the industry. The relevance is provided by building a solution that fulfils the industry's organisational needs and verifying its performance against a set of usefulness criteria. In our case, the building design practice is our application context, and the problem to solve is the complexity and difficulty of designing with salvaged building components compared to new, tailored products, section 3.1. The need to resolve the issue is motivated by environmental justifications since reuse is considered one of the remedies to limit the construction sector's resource extraction and GHG emissions [1].
The second cycle, the design cycle, is the iterative process of building and evaluating the artefact. Such a feedback loop guides the development of the solution, refining it to best fulfil the set of objectives. In this cycle, design alternatives are generated and compared against requirements [19,21]. Our research comprises multiple iterations, each implementing an algorithm to automate element matching, section 4, followed by an assessment of validity and performance, section 5.
The last cycle, the rigour cycle, is enforced by evaluating the developed solution's performance against clearly set criteria and comparing it to what exists in the scientific literature. That rigour distinguishes the DSR approach from simply building a piece of software. In the case of this research, the developed solution is evaluated against criteria explained in section 5, which can be replicated by other researchers. The results are discussed in relation to the existing body of knowledge summarised in the literature review in section 3. The paper is structured according to the guidance of Gregor and Hevner [20]. Although the development of the artefact and its evaluation are simultaneous processes, they are described separately for the paper's clarity. The results comprise the three following chapters: section 3, the problem exploration, in which a literature review is performed to study the problem, its important aspects, existing solutions, and desired success factors; section 4, the artefact development, where multiple versions of the solution are designed and implemented; and section 5, the evaluation, in which we thoroughly test versions of the artefact against the rigorous criteria defined in section 3.

Knowledge contribution
Despite some solutions to the problem already being present in the literature, the solution domain is not exhausted. The application domain, as portrayed in the literature, is of relatively low maturity in the circularity context. Only a few pioneering attempts have been made to automate and improve reuse design. The matching problem appears in other application domains too; those serve as an inspiration for the development of the artefact and are covered in section 3. Based on that, this work is positioned between the Exaptation and Improvement quarters in the DSR Knowledge Contribution Framework of Gregor and Hevner [20], implying that both extensions to existing solutions and new solutions are presented in this paper, as illustrated in figure 1.
Following the same research guideline, we consider as knowledge contributions the novel constructs, such as multiple matching algorithms and evaluation of their relative performance, the weighted incidence matrix based on the cost function, the constraint definitions, the design principles (pseudocode and descriptions of the algorithms, technological rules), the methodology itself, evaluation criteria together with the input data to test cases, and finally, the instantiated artefact (code implementation).
All the developed code and the test data, i.e. the artefacts, have been published as open source and are publicly available [22].

The literature review
Before coming up with an improvement proposal, we undertook a thorough literature review of the problem and existing solutions. The description is divided into two parts. The first, section 3.1, covers the literature on challenges with reuse design. The second, section 3.2, lists the previous attempts to solve the problem and possible solutions known from other fields. The two domains overlap to an extent because solutions are often preceded by problem analysis. From both, we draw requirements for an artefact and proposals for approaching the problem. Meeting the requirements then serves as success criteria used for the evaluation of the artefact's performance.
The target group for the artefact are primarily designers who want to implement salvaged elements in their designs. We-the authors-also belong to this group, so the selection of criteria is informed by anecdotal and empirical evidence.
The literature was selected from a search query of Scopus' database on 13 December 2022, seeking articles that address reusing building components in architectural design in the context of environmental practices. Listing 1 shows the exact query string. To set the context, we looked for papers related to circularity. Then, the inclusion criteria filtered papers that describe 'matching', 'mapping', or 'reuse', as well as 'design' and 'building'; the '*' in the search string includes words with varying endings of the preceding letters. To refine the search, the terms 'structural' and 'load-bearing' were added to include relevant papers previously known to the authors.
The initial query returned 311 results. To limit the scope, we performed a blind review of titles and keywords, leading to the exclusion of 219 irrelevant items. Another round of review included abstracts and resulted in the elimination of 24 more, narrowing the list to 68 relevant papers that were studied. A selection of them is described in section 3.

Life cycle assessment (LCA)
Throughout the development of this artefact, the LCA is used as a metric for optimisation. In particular, we focus on the reduction of the GWP indicator, which serves as the objective function. GWP is a measure of equivalents of kilograms of carbon dioxide emitted to the atmosphere (kgCO2eq). The artefact is to be designed in a way in which the values and the objective can be easily changed by the user when creating an assignment problem.
A simplified version of the GWP calculation uses a building element's volume multiplied by a GWP factor, k, according to equation (1):

GWP = V · k = A · L · k    (1)

In the case studies, we input a value of k_new equal to 28.9 kgCO2eq for a cubic metre of new products, based on the Environmental Product Declaration of actual sawn dried timber from pine [23]. The value corresponds to the cut-off approach and a system boundary from the beginning of forestry operations to the end of manufacturing in a sawmill (A1-A3), not including the transportation to the actual site. The negative value of emissions resulting from carbon sequestration in timber is not considered, as it would suggest that the more new timber products are used, the better. The reused products draw their value from the same source but reduced by 92.2%, resulting in k_reuse = 2.25 kgCO2eq, according to the study on the reuse of building products by Eberhardt, Birkved and Birgisdottir [24]. This value takes into consideration the processes involved in disassembling, processing and storing reclaimed products, but not the initial forestry, debarking, sawing and drying.
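Since the artefact lets the user redefine these values, equation (1) amounts to a one-line computation. The sketch below reproduces the case-study factors; the function and constant names are ours, not the artefact's:

```python
# Simplified GWP calculation per equation (1): GWP = A * L * k.
# Factor values follow the case studies: 28.9 kgCO2eq/m3 for new sawn
# timber (modules A1-A3) and a 92.2% reduction for reclaimed timber.
K_NEW = 28.9                     # kgCO2eq per m3, new product [23]
K_REUSE = K_NEW * (1 - 0.922)    # ~2.25 kgCO2eq per m3, reused [24]

def gwp(area_m2, length_m, k):
    """Global warming potential of one element, in kgCO2eq."""
    return area_m2 * length_m * k

# A 0.02 m2 x 13 m element, as a new product vs. reclaimed:
print(round(gwp(0.02, 13, K_NEW), 2))    # -> 7.51
print(round(gwp(0.02, 13, K_REUSE), 2))  # -> 0.59
```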

Problem domain
Even though reuse was considered a common practice before the industrial era, it is not that common nowadays. The literature identifies several challenges with reusing elements in new buildings. In the KA13 project report [10], we read that, because of the limited material availability, the design process becomes more complex and often repetitive. Similar conclusions were found in interviews with circularity practitioners [25]. Whereas virgin materials usually come with the necessary documentation, reclaimed materials often lack relevant information [26]. This causes legislative challenges when documenting that the project satisfies contemporary technical requirements. For structural engineers, the material properties must be determined to ensure the structural integrity of the structure, a process that can be time-consuming and challenging. Another identified challenge is the sourcing of materials. In KA13, more than 20 existing buildings were used as donors for materials. Keeping track of what material is available at given stages of a project, where to collect it, and how to store it proved to be challenging [10]. Compared to designing with new material, where the designer can assume an infinite amount of available elements of all types, the inconsistency and uncertainty of available material when using reclaimed elements significantly increases the design process's complexity. Even for small projects, manual assignment of appropriate reclaimed elements quickly becomes time-consuming and costly. The report from KA13 also emphasises this [10]: the extra time spent on manual design with the reclaimed elements was one of the primary sources of the increased cost.
Finally, the disassembly process of existing buildings is far from optimal. Existing buildings seldom contain elements intended for multiple applications, so they are treated as waste after their initial purpose is served. Because of that, extracting and preparing old material for reuse is time-consuming and costly. Although elements not intended for it can still be reused, doing so is much easier if they are designed for disassembly (DfD). Nowadays, DfD is becoming increasingly popular in research, projects, and standards [27,28].
For reuse to be feasible, the project owners must see a profitable investment. In highly industrialised countries, the cost of labour often exceeds the cost of materials. Consequently, for the reuse of materials to be economically sustainable, the price of material has to increase, or the time spent designing must decrease. Digital tools could reduce the cost associated with the design process and support the assignment and evaluation of elements to be reused. An increased focus on sustainability could further encourage and reward the reuse of building materials.

Solution domain
In this part, we cover existing work in the area of reuse matching tools. Replacing new elements with reclaimed ones is described as an assignment or packing problem. Among all feasible substitutions, the combination of assignments reducing the environmental impact of the final structure is sought. In mathematics, several algorithms exist to solve this type of problem.
One approach to this problem is greedy algorithms, which solve a problem by always making the locally optimal choice at every step in the hope of finding the global solution. This guarantees a local optimum, which in some cases also approximates the global solution [29]. It is useful when searching for viable solutions within a reasonable number of iterations.
An example of a greedy search heuristic is seen in the work of Bukauskas et al [18] on a bin-packing definition to help designers match a finite set of diverse demand elements with reclaimed supply elements. By pre-sorting the demand and supply elements in decreasing and increasing orders, respectively, the greedy approach proved to be an efficient tool for helping the designer automate the time-consuming and challenging task of designing with reused elements while ensuring the structural integrity of the system.
A mixed integer problem (MIP) is another approach to solving packing problems. Linear problems contain a vector of decision variables, x, that is sought to be optimised [30], as shown in equation (2):

minimise c^T x  subject to  A x ≤ b    (2)

Each variable has an associated cost, c, and is subjected to some constraints that need to be fulfilled. The values of vector x can be either continuous or discrete. When restricting all the variables to integer values, the problem is called an integer problem (IP). The term MIP is used for problems where only some of the variables are restricted to integer values, as is common in many problems. All solutions that satisfy the constraints are called feasible solutions, and the one achieving the best cost function value is termed the optimal solution. MIPs are well suited for decision problems where the variables x_i can take the value of 0 or 1 depending on whether an option is selected or not, as with the assignment problem presented in this paper. The constraint matrix A holds scalar values associated with each variable of vector x, and vector b holds the maximum values of the constraints.
Variations of a MIP problem are introduced in the works of Brütting et al [15,31,32]. In addition to solving the assignment problem, the structural integrity is ensured by introducing both ultimate and serviceability limit state conditions as constraints in the problem formulation. An additional procedure is proposed which allows for translating the connections in the structure within given boundaries, seeking to increase the level of reuse further. Results show that structures consisting of reclaimed elements often have a higher volume and lower utilisation than an optimised version from virgin material, which is tailored to fit. However, the structures from reused material embody up to 71% less energy.
Huang et al demonstrate how to design parameterised geodesic domes with reclaimed timber elements [16]. They apply a graph representation of the problem, in which the Hungarian Algorithm is applied to find the best substitution of elements. There are many algorithms to solve optimisation problems expressed with graphs, such as the Hopcroft-Karp algorithm and the maximum bipartite matching (MaxBM) algorithm, which seeks to maximise the sum of the weights of edges under the condition that each vertex from one set, V, can have at most one edge connecting it to the other set, U. In the context of this research, the algorithm seeks the most valuable replacements. The limitation of the algorithm is that only one-to-one matching is allowed, meaning that supply elements cannot be cut to fit into more than one location. Moreover, this assignment problem is wrapped in an outer optimisation loop using a genetic algorithm for design space exploration, investigating how the dome's dimensions affect the level of reuse. This is implemented in the visual programming environment Grasshopper.
Another interesting paper from Parigi proposes an algorithm for designing reciprocal frames using cutoff elements in order to increase the direct reuse of reclaimed timber elements [17]. This way, timber stock that normally would have been ground to smaller particles or incinerated can be utilised higher up the value chain.
A common theme for all the above publications is the aim to automate the decision-making and assignment problem when using reclaimed elements in new designs. A task that takes hours, days, or even weeks to solve manually can then be solved in a trivial amount of time. This is an important step towards increased reuse in the construction industry, as the cost of labour significantly impacts how a project is designed.

Problem identification
The designed artefact is an algorithm for assisting users in replacing new elements from the design with reclaimed elements in a more efficient way than manual assignment. Starting by introducing the algorithm's overall workflow, we describe each implemented matching algorithm in detail based on a simple case study.
As described in section 1, the algorithm is intended as a tool to substitute elements in an already designed system, not a tool to suggest reclaimed elements firsthand. The limitation of that approach is that decisions made at this stage do not affect the design's topology, spacing, and element dimensions. The advantage is that such a tool provides useful feedback to the designer, who can customise the design themselves without having to change their design methodology drastically by designing with reclaimed elements from the beginning.
A simplified workflow diagram is depicted in figure 2. The input data for the algorithm are two sets of elements, a list of constraints and an objective function. The two sets are the design intention, Demand, and the available stock of reclaimed elements, Supply. The constraints contain restrictions for disqualifying certain substitutions. The objective function describes what metric the algorithms will optimise. Here, we want to minimise the structure's GWP.
Depending on the application, the input might need to be converted to the desired data structure. During the evaluation stage, the matching problem model is composed. To achieve that, the algorithm eliminates the combinations that do not satisfy the constraint criteria, leaving only the possible scenarios in an incidence matrix format. Then, those are assigned a suitability score calculated using the objective function. Finally, the actual matching is performed, selecting the optimal pairs of elements from the set of possible scenarios. This step is developed using multiple alternative methods: the greedy algorithm (Greedy), the maximum bipartite graph method (MaxBM), and mixed integer programming (MIP).
The result of the algorithm is a list of pairs of Demand and Supply elements. Each demand element can be present in only one pair, while supply elements, for some methods, can be matched into multiple pairs, meaning that the element is divided to serve multiple replacements.

Case study description
To test the functionality of the developed artefact, we define a simple case study. It serves as a diagnostic tool to check if each method produces the expected results.
The first case study is a design consisting of five timber elements, depicted in the upper half of figure 3, labelled D1-D5. The lengths of the elements range from four to thirteen metres, and their cross-section areas are between 0.001 and 0.100 square metres.
We define three constraint criteria. The area, length and moment of inertia of each supply element cannot be smaller than specified in the design. For example, a supply element cannot be shorter than the demand element it intends to substitute. The optimisation objective is the GWP, as described in section 2.4.
Typically, such elements would be purchased at a local warehouse with standardised dimensions. Assuming we use exact lengths, this would result in a GWP of Σ_{i=1..n} (A_i · L_i) · k_new = 1.94 m³ · 28.9 kgCO2eq/m³ = 56.1 kgCO2eq, which will serve as the reference score for a new structure.
The case study assumes a set of five reclaimed elements available, as depicted in the bottom half of figure 3 with labels S1-S5. The objective is to substitute as many new elements in the design as possible with reclaimed elements, to reduce the total environmental score of the final solution. Without loss of generality and for explanatory purposes, only the GWP factor of each element is considered for this demonstration.

Technical implementation of the artefact
The development is done in the programming language Python because of its functionality, simplicity, the amount of existing code implementations, and its large community. Before the algorithm can be initiated, the information necessary to represent the elements is stored in Pandas' DataFrame class [33]. Each parameter (Length, Area, and Moment of Inertia) is placed in a column in the DataFrame. Each element is stored row-wise in the DataFrame and given a unique ID. The demand elements from our original design are labelled D_i, while the supply elements are labelled S_j. Table 1 illustrates the input for the demand and supply elements used in this example.
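The described layout can be reproduced in a few lines of Pandas. Since table 1 is not reproduced here, the element values below are illustrative stand-ins; only the D/S labelling scheme is taken from the paper:

```python
import pandas as pd

# Demand and supply sets in the described layout: one element per row,
# a unique ID as the index, and one column per parameter. The numbers
# are hypothetical placeholders, not the values of table 1.
demand = pd.DataFrame(
    {"Length": [4.0, 6.5, 8.0],
     "Area": [0.010, 0.040, 0.020],
     "Inertia": [2.0e-5, 8.0e-5, 4.0e-5]},
    index=["D1", "D2", "D3"])

supply = pd.DataFrame(
    {"Length": [4.0, 9.0, 13.0],
     "Area": [0.010, 0.050, 0.030],
     "Inertia": [2.0e-5, 9.0e-5, 5.0e-5]},
    index=["S1", "S2", "S3"])

print(demand)
print(supply.loc["S2", "Length"])  # -> 9.0
```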
The constraint input is a Python dictionary where each key is the name of the attribute it applies to, and the value is the condition to be evaluated. Listing 2 describes the case study constraints.
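Listing 2 itself is not reproduced here. Assuming each value encodes a comparison of a supply attribute against the corresponding demand attribute (the exact encoding in the artefact may differ), such a dictionary and its evaluation could look like this:

```python
import operator

# Hypothetical constraint dictionary: key = attribute name, value =
# the comparison the supply element must satisfy relative to the
# demand element (here: the supply value must not be smaller).
constraints = {"Length": ">=", "Area": ">=", "Inertia": ">="}

OPS = {">=": operator.ge, "<=": operator.le, "==": operator.eq}

def satisfies(demand_el, supply_el, constraints):
    """True if supply_el is a feasible substitute for demand_el."""
    return all(OPS[op](supply_el[attr], demand_el[attr])
               for attr, op in constraints.items())

d = {"Length": 4.0, "Area": 0.01, "Inertia": 2e-5}
s = {"Length": 5.0, "Area": 0.02, "Inertia": 3e-5}
print(satisfies(d, s, constraints))  # -> True
```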
Using the DataFrames for demand and supply elements together with the constraints from listing 2 as input, the matching algorithm is ready to perform the matching of elements with the implemented methods. Before the matching algorithms run, some common pre-processing operations are done to identify feasible combinations of demand and supply elements subject to the current constraints. The incidence matrix N, shown in table 2, holds information about feasible substitutions determined by the constraints. Element N_ij is True if demand element D_i can be substituted by supply element S_j. Subsequently, the GWP for each of these substitutions is calculated using equation (1) with the area of the supply element A_j and the length of the demand element L_i; all the weights are stored in the weighted incidence matrix C, table 3. Now, each implemented algorithm can find its optimal matching with available elements under the applied constraints. First, a manual assignment is used as a reference for an initial evaluation of the implemented matching algorithms.
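These pre-processing steps can be sketched in a few lines. The element values below are illustrative (the paper's table 1 data is not reproduced), and the moment-of-inertia check, which works identically, is omitted for brevity:

```python
import pandas as pd

K_REUSE = 2.25  # kgCO2eq/m3 for reclaimed timber, section 2.4

# Illustrative demand/supply sets (not the values of table 1).
demand = pd.DataFrame({"Length": [4.0, 8.0], "Area": [0.01, 0.02]},
                      index=["D1", "D2"])
supply = pd.DataFrame({"Length": [5.0, 3.0], "Area": [0.02, 0.05]},
                      index=["S1", "S2"])

# Incidence matrix N: N_ij is True when supply j is at least as long
# and has at least the cross-section area of demand i.
N = pd.DataFrame(
    [[(s.Length >= d.Length) and (s.Area >= d.Area)
      for _, s in supply.iterrows()]
     for _, d in demand.iterrows()],
    index=demand.index, columns=supply.index)

# Weighted incidence matrix C: GWP of each feasible substitution per
# equation (1), using supply area A_j and demand length L_i.
C = pd.DataFrame(
    [[supply.Area[j] * demand.Length[i] * K_REUSE
      if N.loc[i, j] else float("nan")
      for j in supply.index]
     for i in demand.index],
    index=demand.index, columns=supply.index)
print(N)
print(C)
```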

Manual assignment
By investigating the weighted incidence matrix, C, in table 3, it is clear that D1 is best substituted by S1 due to the perfect matching of all parameters. For D2, there are no valid substitutions. Element D3 can be substituted by all but S2. However, we have already assigned S1 to D1, and S5 is considerably longer; both S3 and S4 are good candidates, but S4 is chosen due to slightly better performance. Finally, both D4 and D5 can be assigned to S5 without violating any constraints. This results in the substitution of four elements. The final GWP is calculated by summing the weights of the selected substitutions and adding the GWP of the remaining original element. Thus, we get the total GWP of the most optimal solution: (0.63 + 0.461 + 3.60 + 1.80) reuse + (13 · 0.02 · 28.9) new = 6.49 + 7.51 = 14.01 kgCO2eq.
The result is 75% lower than the reference with only new elements. This result is used as the benchmark for all the other algorithms in the following subsections.
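These totals can be verified with a few lines of arithmetic:

```python
# Arithmetic check of the manual assignment: four reused substitutions
# (weights from table 3) plus the one remaining new element (D2, the
# only demand element without a valid substitution).
reuse = 0.63 + 0.461 + 3.60 + 1.80    # kgCO2eq, reclaimed elements
new = 13 * 0.02 * 28.9                # kgCO2eq, element kept as new
total = reuse + new                   # ~14.01 kgCO2eq
reference = 56.1                      # all-new benchmark from the text
print(round(100 * (1 - total / reference)))  # -> 75
```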

Greedy algorithm
We began by implementing a greedy algorithm. It starts by sorting the demand and supply sets by each row's GWP value in descending and ascending order, respectively. This ensures that the algorithm always selects the substitution with the greatest GWP reduction in every iteration. The algorithm iterates through all sorted demand and supply elements, implemented as a nested loop. Moreover, when elements are matched with a surplus length of the supply element, the cutoff part can be put back into the supply list for further matching.
Listing 3 describes the procedure of the greedy algorithm. The incidence matrix, N, is used as a conditional to ensure that demand element D_i can be substituted by supply element S_j. The basic version, without plural assignment, creates a matching pair if the incidence N_ij is True before the supply element is removed from the sorted DataFrame, then breaks the inner loop. If no supply elements fit, no matching is made, and the original demand element is kept for the final configuration.
Listing 3 (excerpt):

input:  (DataFrame) sorted_demand, (DataFrame) sorted_supply
output: (DataFrame) pairs

for demand_el in sorted_demand:
    ...

Table 4. Matching pairs for the implemented methods and optimal selection, and the resulting LCA of each procedure.

Table 4 (excerpt):

     Optimal  GreedyS  GreedyP  MaxBM  MIP
D1   S1       S1       S1       S1     S1

When the plural assignment of supply elements is activated, an additional step is performed within the inner loop, as shown in listing 3. After the matching has been created, the remaining length of the supply element is calculated; if the remaining length is longer than the shortest demand element, the element is inserted back into the sorted supply list at the correct location; otherwise, it is dropped. The final matching results are presented along with the other methods in table 4. In this simple case study, the plural assignment allows for one additional substitution compared to single assignment by using element S5 for two substitutions.
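A minimal single-assignment version of this procedure (GreedyS) can be sketched as follows; the dict-based data layout and the `feasible` callback are simplifications of the artefact's DataFrame implementation, and the values are hypothetical:

```python
# Greedy single-assignment matching: demand sorted by GWP descending,
# supply by GWP ascending; the first feasible supply element is taken
# and removed from the pool. Plural assignment would additionally
# re-insert the cutoff remainder of the supply element into the pool.
def greedy_match(demand, supply, feasible):
    """demand/supply map element IDs to GWP; feasible(d, s) -> bool."""
    pairs = {}
    pool = sorted(supply, key=supply.get)              # ascending GWP
    for d in sorted(demand, key=demand.get, reverse=True):
        for s in pool:
            if feasible(d, s):
                pairs[d] = s
                pool.remove(s)
                break
    return pairs

# Toy run with hypothetical GWP values and feasibility pairs.
demand = {"D1": 5.0, "D2": 3.0}
supply = {"S1": 1.0, "S2": 2.0}
ok = {("D1", "S1"), ("D1", "S2"), ("D2", "S2")}
print(greedy_match(demand, supply, lambda d, s: (d, s) in ok))
# -> {'D1': 'S1', 'D2': 'S2'}
```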

The maximum bipartite graph matching
For the next iteration of the artefact, we converted the problem into a graph representation, as visualised in figure 4. A graph G(V, E) is a data structure consisting of vertices (V) and edges (E). In the case of our problem, the vertices are structural elements, and edges represent all the possible substitutions between them. Since we distinguish between two sets, the demand (D) and supply (S), and the only matching possible is between these sets, we have a bipartite graph G(D ∪ S, E). The edges that would correspond to an impossible match, i.e. substitutions that violate a constraint, are not created in the graph. This is achieved by defining edges based on the 'True' occurrences in the incidence matrix from table 2.
Furthermore, because each possible substitution has an associated score representing its environmental impact, each edge of the graph is assigned a weight; such a graph is called a weighted graph. As shown in equation (3), the weights are calculated as the difference between the GWP of the demand element D_i and the GWP acquired by substituting it with the supply element S_j:

w_ij = A_i · L_i · k_new − A_j · L_i · k_reuse    (3)

Consequently, the sum of these weights in figure 4 will not equal the final GWP used for comparison with the other methods in table 4.

The MaxBM algorithm is applied to solve the presented graph problem using the python-igraph library, which contains useful methods for network analysis [34]. The algorithm seeks to maximise the sum of the weights of edges under the condition that each vertex from set D can have at most one edge connecting it to set S. In other words, it seeks the most valuable substitutions, maximising the GWP difference between demand and supply elements. As each vertex can have only one edge attached to it, this method cannot be used for plural assignments. Figure 4 shows the graph representation of the best solution for this matching problem, represented by edges in bold. This result is compared with the other methods in table 4.
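The artefact uses igraph's weighted bipartite matching. As a self-contained illustration of the same one-to-one problem, the sketch below instead uses SciPy's implementation of the Hungarian algorithm (mentioned in the solution domain) with illustrative weights, giving infeasible edges zero weight and discarding zero-saving pairs afterwards:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Edge weights w_ij = GWP saved by substituting demand i with supply j
# (equation (3)); 0 marks an infeasible substitution. The numbers are
# illustrative, not those of the paper's case study.
W = np.array([[0.9, 0.0, 0.4],
              [0.0, 0.0, 0.0],   # D2 has no feasible substitute
              [0.7, 0.6, 0.0]])

rows, cols = linear_sum_assignment(W, maximize=True)
# Keep only the one-to-one pairs that actually save emissions.
pairs = {f"D{i + 1}": f"S{j + 1}"
         for i, j in zip(rows, cols) if W[i, j] > 0}
print(pairs)  # -> {'D1': 'S1', 'D3': 'S2'}
```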

MIP
The last iteration utilises a MIP algorithm as described in section 3.2. Unlike the MaxBM, the MIP allows the plural assignment of supply elements. In general terms, there are n demand and m supply elements, indexed i and j, respectively. A variable matrix X is used to store the element substitutions of a solution. It has the same n × m dimensions as the incidence matrix N, and value X_ij = 1 if demand element i is substituted by supply element j, else 0. Thus, we have a MIP problem with binary values. The problem has two types of constraints. The first ensures that each demand element is assigned to at most one supply element. The second ensures that the sum of demand element lengths does not exceed the total length of the supply element in the case of plural assignments. The entries of the constraint matrix A for each row of the variable matrix are the lengths of the demand elements.
To make this two-dimensional variable matrix compatible with the MIP definition in equation (2), X is rearranged into a one-dimensional vector by flattening it row-wise: x = (X_11, X_12, . . ., X_1m, X_21, . . ., X_nm). The same is done for the cost associated with each decision variable, c_ij. The MIP has the same cost function as used in section 4.6 for the bipartite graph, where the GWP of the substitution ij is subtracted from the GWP of the initial element i in that position, equation (3), with weights calculated using equation (1).
After inserting equation (3), the MIP problem can be described with equation (4):

maximise   Σ_ij c_ij X_ij
subject to Σ_i L_i X_ij ≤ L_j  for all j,
           Σ_j X_ij ≤ 1  for all i,
           X_ij ∈ {0, 1}    (4)

Using the cost function, it seeks to maximise the GWP savings while guaranteeing that no supply element exceeds its capacity. This is provided by the first constraint in equation (4), ensuring that the sum of the lengths of all demand elements matched with supply element j is less than or equal to its total length. Moreover, as shown in the second constraint, we need to ensure that a demand element is not assigned to more than one supply element. Although the remaining constraints are left out of this problem definition, they are enforced by fixing variables where the incidence matrix N_ij equals False. This way, the lower bound lb_ij = 0 for all variables, and the upper bound ub_ij = 1 if N_ij = True, else 0. Thus, the algorithm saves time by not searching through infeasible solutions already identified in the pre-processing stage.
The implementation of this algorithm is done with the SciPy library [35], which uses the HiGHS solver to solve the presented MIP problem [36]. As table 4 shows, the MIP approach finds the optimal solution for this example, just like the greedy algorithm with plural assignment.
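With the row-wise flattening described above, the HiGHS-backed `scipy.optimize.milp` call takes the negated savings as costs (SciPy minimises) and the two constraint families. The small instance below is illustrative rather than the paper's case study:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

n, m = 2, 2                        # demand and supply counts
L_dem = np.array([4.0, 6.0])       # demand element lengths
L_sup = np.array([10.0, 5.0])      # supply element lengths
# GWP savings c_ij per substitution (illustrative values); infeasible
# pairs are excluded by fixing their upper bound to 0 below.
c = np.array([[0.9, 0.5],
              [1.1, 0.0]])
feasible = c > 0

# First constraint family: sum_i L_i * x_ij <= L_j for every supply j
# (this is what allows plural assignment of one supply element).
A_sup = np.zeros((m, n * m))
for j in range(m):
    A_sup[j, j::m] = L_dem
# Second family: sum_j x_ij <= 1 for every demand i.
A_dem = np.kron(np.eye(n), np.ones(m))

res = milp(
    c=-c.ravel(),                  # maximise savings -> minimise -c
    integrality=np.ones(n * m),    # integer variables, binary via bounds
    bounds=Bounds(0, feasible.ravel().astype(float)),
    constraints=[LinearConstraint(A_sup, ub=L_sup),
                 LinearConstraint(A_dem, ub=np.ones(n))])
X = np.round(res.x).reshape(n, m).astype(int)
print(X, -res.fun)  # optimal substitutions and total GWP saving
```

Here both demand elements fit on the first supply element (4 + 6 ≤ 10), so the solver assigns it twice rather than taking the cheaper single match.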

Comparison of methods
Each of the methods has been implemented in the order of presentation in this section, which is concluded by a brief discussion about the different methods before further evaluation and comparison in the following section. Firstly, manual assignment of elements is easily done for small examples like this but quickly becomes unfeasible when the number of available elements increases. Consequently, it is only used here to ensure the validity of the later introduced algorithms.
Regarding ease of implementation, the greedy algorithm was the simplest to develop. No external packages are needed, and it works directly on the demand and supply data without rearranging formats or data types. Additionally, with a simple modification of the inner loop in listing 3, cutoff elements from earlier substitutions can be re-introduced into the assignment problem, thereby utilising a greater share of the material. In this example, this also yields the optimal solution.
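A minimal sketch of such a greedy loop with cutoff reuse is shown below. It is illustrative only, not a reproduction of listing 3: the `(length, area)` tuples and the area check are assumptions based on the constraints described earlier.

```python
# Illustrative greedy matching with cutoff reuse (not the paper's listing 3).
# A supply element can serve a demand element if both its length and its
# cross-sectional area suffice; the leftover length goes back into the pool.
def greedy_match(demand, supply):
    supply = sorted(supply)                 # try the shortest candidates first
    matches = {}
    for i, (d_len, d_area) in enumerate(demand):
        for k, (s_len, s_area) in enumerate(supply):
            if s_len >= d_len and s_area >= d_area:
                matches[i] = (s_len, s_area)
                cutoff = s_len - d_len
                supply.pop(k)
                if cutoff > 0:              # re-introduce the offcut
                    supply.append((cutoff, s_area))
                    supply.sort()
                break
    return matches

demand = [(2.0, 0.01), (3.0, 0.01)]         # assumed lengths [m], areas [m2]
supply = [(6.0, 0.012)]
print(greedy_match(demand, supply))
```

Here a single 6 m supply element serves both demand elements because its 4 m offcut is returned to the pool, which is exactly the plural-assignment effect described above.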
The MaxBM and MIP require more time and external packages to implement. The former is significantly faster, as further discussed in section 5, but lacks the possibility of multiple assignments, resulting in less effective material usage and more cutoff. In this case, the LCA result using MaxBM is 70% higher than the optimal solution. It can still be a valuable method in cases where the variation in demand and supply elements is smaller and the cutoffs are less likely to be usable for subsequent substitutions.
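For the single-assignment case, the weighted matching can equivalently be solved with SciPy's `linear_sum_assignment` on a dense savings matrix. This is a sketch of the idea rather than the paper's graph-based MaxBM implementation, and the savings values are invented:

```python
# Single-assignment weighted matching via scipy.optimize.linear_sum_assignment.
# This dense formulation is equivalent in outcome to a max-weight bipartite
# matching; the savings values below are assumed for illustration.
import numpy as np
from scipy.optimize import linear_sum_assignment

# savings[i, j]: GWP saved by substituting demand i with supply j;
# infeasible pairs get a large negative score so they are never chosen.
savings = np.array([[10.0, 8.0],
                    [14.0, 11.0],
                    [18.0, -1e9]])

rows, cols = linear_sum_assignment(savings, maximize=True)
pairs = [(int(i), int(j)) for i, j in zip(rows, cols) if savings[i, j] > 0]
```

With more demand rows than supply columns, only two of the three demand elements can be matched, which is the single-assignment limitation discussed above.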
The MIP finds the optimal solution but also takes the longest time. For this small problem, the differences are negligible. When the problem size increases, however, this could impede creativity and the exploration of possibilities in the conceptual phase of a project.
Initially, the demand elements have a total volume of 1.94 m³. The optimal solution increases the total volume by 42%. This is an expected consequence of the area constraint, which requires every substitution to have a cross-sectional area greater than or equal to that of the demand element. The consequence is a structure with less efficient material usage but a significantly lower environmental impact compared to a new structure.
Although it is too early to conclude, the choice of algorithm could depend on the type of problem, the available time, or the accuracy needed. Luckily, as long as the matching object is created with demand and supply elements, the user can decide which method to use without changing anything. This also prepares the artefact for additional matching algorithms and extensions in future updates.

Evaluation
After implementing the algorithms and applying them to the simple example in section 4, this section evaluates each method against a case study that mimics realistic conditions. The case study uses typical timber roof trusses for both demand and supply elements. The objective remains: to substitute elements to minimise the final structure's GWP. This time, the demand and supply sets are significantly increased to better evaluate the performance and limitations of the artefact's algorithms.

Generating demand and supply elements
In Norway, the use of timber in housing is a long-standing tradition which remains relevant due to timber's inherently environmentally friendly properties and availability. In 2022, detached houses represented nearly half of all dwelling buildings in Norway [37]. Although style and size vary, prefabricated timber roof trusses are popular due to the possibility of manufacturing off-site and the relatively easy assembly process on-site. Roof trusses are also a good example due to the varying cross-sections and lengths of their elements. Figure 5 depicts four common truss typologies that are used to generate the benchmark case study, together with their typical span lengths and height ranges according to SINTEF Byggforskserien [38, 39]. Based on that, we generated the dataset of trusses, assuming angles between 15° and 35° in intervals of 5°, spans in increments of one metre, and centre distances between the trusses along a roof (which influence the loading per truss) of 30, 45, and 60 cm. Cross-referencing all possible combinations of angles, spans, and centre distances yields 585 unique trusses consisting of 4995 elements in total. Then, appropriate cross-sections are found by a linear elastic analysis in Karamba3D using cross-section optimisation of elements [40]. For the analysis, we assumed the C24 quality class and subjected each truss to a uniformly distributed load of 2 kN m⁻² along the top girders. The obtained cross-sections have dimensions between 36 × 36 and 73 × 223 millimetres. Although not an exhaustive analysis, it provides a probable distribution of elements for demonstrative purposes. Figure 6 shows the distribution of obtained elements with respect to length and cross-section. The higher the bar in the histograms and the darker the colour in the scatter plot, the more elements with those dimensions are present in the dataset.
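The cross-referencing step can be sketched with `itertools.product`. The typologies and span ranges below are placeholders rather than the SINTEF-derived values used in the paper, so the resulting count differs from the 585 trusses reported above:

```python
# Sketch of enumerating truss configurations per typology. The typology names
# and span ranges are assumed placeholders, not the paper's actual ranges.
from itertools import product

angles = range(15, 36, 5)                  # 15..35 degrees in 5-degree steps
centre_distances = [0.30, 0.45, 0.60]      # metres
typology_spans = {
    "fink": range(6, 16),                  # assumed span range [m]
    "double_fink": range(10, 21),          # assumed span range [m]
}

trusses = [
    {"typology": t, "angle": a, "span": s, "cc": cc}
    for t, spans in typology_spans.items()
    for a, s, cc in product(angles, spans, centre_distances)
]
print(len(trusses))
```

Each configuration would then be passed to structural analysis (Karamba3D in the paper) to obtain element cross-sections.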

Analysis of performance at different demand-supply ratios
The ratio of demand to supply elements might vary significantly in real-world conditions. There might be situations where many elements are needed, but not enough are available on the reuse market. In such cases, the missing products must be purchased, resulting in higher emissions. On the other hand, with many available elements, one needs to find the best combination for the specific problem.
To test how ratios affect performance, each method is compared on problems with a total of 1000 elements, with 13 different ratios (table 5) between demand and supply elements. All sets are generated as pseudo-random subsets of the initial material bank from section 5.1.
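One possible way to draw such pseudo-random demand/supply subsets at a given ratio is shown below, assuming a seeded RNG for reproducibility; the paper's exact sampling procedure may differ.

```python
# Sketch: split a fixed number of elements from a material bank into demand
# and supply sets at a chosen ratio. The seed and bank are illustrative.
import random

def split_ratio(bank, n_demand, n_supply, seed=0):
    rng = random.Random(seed)              # seeded for reproducible subsets
    sample = rng.sample(bank, n_demand + n_supply)
    return sample[:n_demand], sample[n_demand:]

bank = list(range(5000))                   # stand-in for the material bank
demand, supply = split_ratio(bank, 941, 59)   # one of the 13 tested ratios
```

Repeating this for each ratio in table 5 yields comparable problem instances of constant total size.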
The results, obtained using a workstation with an Intel Core i9-10920X CPU @ 3.50 GHz and 64 GB of RAM, are shown in table 5 together with figures 7(a)-(c). The values in figures 7(a) and (b) have been normalised with respect to the best-performing method. It can be seen that score performance, in the form of GWP reduction, is almost the same for all four methods. Even though the substitution rate varies significantly, by up to 70%, for cases with more demand than supply elements (figure 7(b)), the difference in score does not exceed 3% for any of the methods. However, a significant difference can be seen in the duration of matching for each method (figure 7(c)). MIP is more than two orders of magnitude slower than the other methods in most cases. It performs much better when the supply outmatches the demand; still, the remaining three methods significantly outperform MIP with respect to time. The MaxBM outdoes the Greedy methods when there are more demand than supply elements, while GreedyS performs slightly better if the ratio is the opposite. Despite a subtle dip in the curves representing the scores expressed as GWP savings in figure 7(a) for ratios between 941:59 and 333:667, we see that the effect of the matching algorithm on the resultant GWP savings is marginal. The largest discrepancy between algorithms occurs when there are twice as many demand as supply elements; yet, the score difference is only 2.5%, or about 12 kgCO₂-eq. So even though the MIP, in this case, returns the best score, the differences are negligible.
Figure 7(b) shows the substitution ratio at a fixed total number of elements and varying ratios between them, i.e. how many elements from the demand set were replaced with elements from the supply set. The chart is normalised using the best result, in this case MIP, as 100%. For scenarios with more demand than supply elements, the MIP algorithm achieves a better substitution rate. GreedyP also performs slightly better than the remaining two, which is not surprising as they are single-assignment algorithms. Together, figures 7(a) and (b) demonstrate that a higher number of substituted elements does not always give a much better score; often the scores are very similar.
Finally, we see from figure 7(c) that the time spent running each algorithm varies between the methods, with the MIP algorithm generally taking considerably longer to find a solution. The MIP algorithm has a constant running time for all ratios with fewer supply than demand elements. This is the runtime limit imposed on the algorithm to prevent it from running for too long; when reaching this limit, it returns the best solution found so far. Thus, with longer runtimes, better solutions could be found. The other three algorithms behave a little differently: the fewer supply elements available, the less time is spent finding a solution, with the maximum runtime occurring when the numbers of demand and supply elements are equal. The GreedyP takes a little longer to run when there is a shortage of supply elements, most likely due to cutoffs being returned to the supply bank, causing more iterations.
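Such a runtime cap can be set directly when calling the solver; with SciPy's HiGHS interface this is the `time_limit` option, after which the incumbent (best-so-far) solution is returned. The toy problem and the 60-second limit below are illustrative assumptions, not the paper's setting:

```python
# Sketch: capping the MIP runtime so the solver returns its incumbent
# solution on timeout instead of running indefinitely. Toy objective:
# maximise 3x + 2y subject to x + y <= 1, x and y binary-like integers.
import numpy as np
from scipy.optimize import milp, LinearConstraint

c = np.array([-3.0, -2.0])                 # milp minimises, so negate
res = milp(
    c,
    integrality=np.ones(2),                # integer decision variables
    constraints=LinearConstraint(np.array([[1.0, 1.0]]), ub=[1.0]),
    options={"time_limit": 60.0},          # seconds; HiGHS stops here
)
```

On a trivial problem like this the limit is never reached, but on the large instances discussed above it bounds the worst-case response time of the tool.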

Analysis of performance at a fixed ratio, varying total amount of elements
The previous section described a situation with a reasonably low number of 1000 elements, equivalent to approximately six small single-family house roofs. In reality, there could be hundreds of buildings ready to be built and demolished within a short period of time, challenging the performance of the artefact. To test that scenario, the next case study uses a constant ratio between demand and supply elements of 1:10 but varies the total number of elements from 11 to 45 056, doubling the number with each case.
The results are portrayed in table 6, with the run time also displayed in figure 7(d). Now, there are 10 times as many demand as supply elements, meaning that all algorithms achieve the same total reduction of GWP; note that for the last two rows, the MIP algorithm failed to produce results due to a shortage of memory on the computer. Consequently, one column in table 6 shows the score for all algorithms. More interesting is the number of supply elements used to achieve this score. As the GreedyS and MaxBM algorithms cannot perform plural assignments, both use as many supply elements as there are matched demand elements. The plural assignment methods, GreedyP and MIP, use 4%-24% fewer supply elements to achieve the same result, depending on the size of the problem. The time spent running the algorithms as the magnitude of the problem grows is also of interest, as it determines to which extent the algorithms can support user interaction when designing with reclaimed elements. In that context, the MIP algorithm already exceeds 1 second of runtime at 352 elements, whereas the other three can handle problems with 2816 elements in the same amount of time. Thus, when used as a design tool with plural assignment, the GreedyP algorithm is the better choice.

Discussion
The test studies focus on optimising the environmental impact of linear structural framing elements made of timber, considering reclaimed elements of various lengths and cross-sections. However, the artefact was generalised so it could be applied in a completely different context. An example could be the matching of reclaimed doors or even floor panels. The only difference is the input data; the algorithm for finding matches is universal. Furthermore, the artefact is not limited to certain materials or only reclaimed elements, since the same procedures could help, for example, when designing with predefined prefabricated elements.
Similarly, the environmental considerations are not bound to the logic of the artefact. The same algorithm could be applied to other objectives, such as cost or utilisation optimisation, or even qualitative constraints, such as quality or aesthetics, as long as the objectives and constraints can be expressed numerically in a tabular form. Extrapolating, the application of the tool could reach outside of the construction context.
Previous work demonstrated how some of the methods could be applied, while in this study, the methods were supplemented and evaluated in practical conditions. Comparing the algorithms shows that MIP, despite sometimes giving a negligibly better result, takes substantially more time. For a design tool, speed is crucial to give immediate feedback to a user. This leads to the question of actual user needs: is it the best possible score, or a fast and satisfactory answer? The tool could instead allow a user to choose a fast mode in the early feasibility phase and a slower, precise mode for finding the best elements when ordering actual products.
Between some methods, there is little difference in score but a large difference in the number of substituted elements. This is caused by assigning one long reclaimed element to many short demand elements. A high number of substitutions could lead to additional processing and transport work, which was not included in the current cost function.
The test cases assessed the complete sets of demand and supply elements. In actual design work, the designer might want to choose some elements manually, based on criteria of their own choice that might be hard to justify numerically. One example could be the visual qualities of the textures. Hence, the tool should only assist the designer by suggesting the best element fit but leave the final choice to the user, for example, by locking some elements in place before rerunning the algorithm. For the same reason, aspects such as direction or rotation might be necessary for designers but are not addressed by the current artefact.
Another aspect is the grouping of elements, which could be driven by the need for uniform aesthetics or stiffness. Having every element with different properties leads to irregularities.
The substitution approach is limited to element properties and does not allow changes to the input topology. In some cases, it would be better if the tool could suggest, for example, adjustments to grid spacing or level elevation that would further optimise the utilisation of available elements, as proposed by Brütting et al [41]. However, because of the complexity involved, this would significantly affect the performance and scale of the application.
Further work could be done on implementing more aspects, such as connection design and logistics, and on extending the test cases to reflect those applications. Other building materials, such as the widely applied steel and reinforced concrete, impose new matching criteria, which should be researched further. The results of this study are limited to elements that can be parametrically described by their geometrical properties. The case is more complex for irregular geometries, such as wall or floor slabs with openings or recesses, which should be analysed further. There are also other possible methods of solving the matching problem, such as evolutionary algorithms or machine learning. These could be compared against a perfect solution obtained with a brute-force approach, which was not performed in this paper due to its high computational demand and low added value in realistic conditions. Evaluating the usefulness to designers would require proper development of the interface and user experience, followed by studies with actual users.
As a part of the contribution, the generated datasets and methods source code were published and can be freely accessed [22]. This enables other researchers to develop alternative solutions and compare them against the same benchmarks.
The study of the artefact led to the question of the role such a tool plays in current design practice. Probably its biggest impact is the reduction of the time needed to evaluate possible savings from reusing what already exists. It also highlights the importance of how designers define their needs and objectives. These often span across domains, requiring close collaboration between architects, engineers, and other specialists.
The aspect of capturing those data requirements is being discussed in the field of Building Information Modelling, with its relevant standards, such as Information Delivery Specifications, data dictionaries, data templates, as well as domain-specific concepts such as material passports and material banks.

Conclusions
The research objective was to design a tool useful for potential users in terms of speed and quality of results. Four methods were implemented for that purpose and assessed on basic elements to verify their correctness. The objective was to replace framing elements in a structure designed from new products with elements reclaimed from demolished buildings. The methods were then tested on larger datasets that mimic actual market conditions.
To answer the research question 'How can a digital tool assist a designer in reusing components?', we analysed solutions existing in the academic literature, implemented similar and additional algorithms in an artefact, and tested it on multiple case studies. The alternative methods showed different time and score performances, allowing the selection of the right algorithm for each task. These findings set up provisions for developing a user-friendly tool to assist designers in reusing reclaimed components.
Even though all methods use the same input data, they operate on different data structures. Greedy algorithms rely on two separate tables (matrices) with supply and demand elements; linear programming relies on a single incidence matrix representing demand and supply as rows and columns; the MaxBM utilises a graph representation of the same data. The data transformation adds complexity and affects the time performance, so ideally, the algorithm should work on the same data structure as it receives.
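The representations mentioned above can be illustrated on a toy incidence matrix; the types and names here are illustrative, not the artefact's actual code:

```python
# Sketch: the same demand/supply compatibility data as an incidence matrix
# and as a bipartite edge list (the graph view used by MaxBM).
import numpy as np

N = np.array([[True, True],
              [False, True]])              # incidence matrix: demand x supply

# Graph view: one edge per feasible (demand, supply) pair
edges = [(int(i), int(j)) for i, j in zip(*np.nonzero(N))]
print(edges)
```

Each conversion like this is cheap for small inputs but adds overhead at the scales tested in section 5, which motivates the remark about keeping a single data structure.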
As discussed previously, manually assigning these elements to create new roofs is a tedious process. By applying the developed algorithm, the available reclaimed elements can quickly be assigned to the new roof trusses. Should the dimensions, layout, or number of roof trusses change at any stage in the design process, the script can simply be rerun to find the best and most up-to-date solution. Depending on the user's needs, the algorithm's flexibility allows for changes in both objective functions and constraints. This is one of the main contributions of this paper, as it allows the same solution to be used for a wide range of problems of varying complexity and size. By automating the design, we can help enable a higher uptake of circularity principles, lowering the embodied emissions of new construction.

Data availability statement
The data that support the findings of this study are openly available at the following URL/DOI: 10.5281/zenodo.7766875.