A: Measuring in Thermal Systems: Reducing Errors and Error Analysis
Measuring the Right Data—Verifying Experimental Boundary Conditions
Christophe T'Joen
Ghent University
Introduction
As the cost of computational research (in terms of both hardware and the time needed to obtain results) has decreased significantly over the past decades, a trend that is set to continue, an increasing volume of thermo-hydraulic research today is conducted primarily through computational means. Numerical tools allow us to study complex geometries and perform parametric studies at a much higher pace than experiments ever could, and at a fraction of the cost, particularly for high-temperature or reactive flows. When combined with advanced stochastic and parametric modeling, e.g., through clustering or neural networks, "computational experiments" can lead to novel designs with improved efficiency (in whichever form efficiency is defined for the considered application). They allow exploring design spaces and geometries to find new (local) optima intended for specific applications. With advances in additive manufacturing (e.g., 3D printing of metals), previously infeasible designs can now become a reality and can in fact also be tested. Despite the advances in and strengths of numerical tools, the evidence of actual performance improvement is obtained only through physical testing. This can be done at various scales, often starting in the laboratory with a scaled model. Additive manufacturing can now help laboratories generate test objects more quickly so that they can be tested; see, for example, [1-2]. This development started in the past decade, opening up opportunities to improve energy efficiency or reduce the capital intensity of energy (heat or mass) transfer processes. This improvement process could bring engineering practice closer to fundamental limits, e.g., theoretical principles such as the constructal theory of Bejan for heat exchangers [3], or drive the application of novel materials such as metal or polymer foams [4-5].
Care should be taken when using numerical tools for design. First, despite these advantages, computational tools have limits; for example, they typically consider simplified boundary conditions such as uniform velocity/temperature fields or adiabatic walls. Second, at their very core, all computational approaches used today to model macroscale engineering problems rely on internal models to capture the flow physics that drives heat transfer and hydraulics, e.g., turbulence modeling (such as k-epsilon or large eddy simulation) and wall transfer functions. These models can be layered deep within the tools, which can make it hard to reveal their details or their impact on the modeled outcome. Moreover, these models are built on large-scale experiments, often performed in very different situations/configurations than those to which the numerical tool is applied in the study at hand. As part of computational model verification, researchers should conduct sensitivity tests on, for example, grid spacing and grid element count, temporal/spatial discretization, and turbulence (sub-)model constants to confirm the predictions. Fundamentally, the applicability of a given numerical (turbulence) model should always be evaluated ahead of the simulation, and the selection of a family of modeling approaches, e.g., k-epsilon or k-omega, should be based on the nature of the studied flow or the flow region of interest in the tested configuration.
Performing an experiment is the only way to fully validate a computational result, but it comes with its own set of challenges, just as numerical modeling does. Therefore, this evaluation step should be planned carefully to ensure the right outcomes are achievable. When using computational tools to optimize a design, "optimum" candidates should be selected and tested at actual or scaled conditions to verify the improved performance. Experiments are always limited in the data they generate, be it in the spatial location of the resulting data (typically point or planar data are measured vs. full-field information from simulations) or in the frequency content (sampling rate and limited experimental duration); as such, the data acquisition requirements should be defined in detail up front, keeping the goal of the experiment in mind while at the same time ensuring the results are sound. This chapter focuses on the latter aspect, looking at the experimental boundary conditions. In the next section, several key principles are covered before a number of case studies are explored as illustration.
Thinking Ahead
During the design of an experiment, ensuring the right boundary conditions are met is a key principle; one that at face value appears trivial, but which can be, and in most cases is, deceptive. Below are some key guiding principles to apply when setting up experiments:
Scale appropriately: Almost all experiments require a level of scaling, based on a set of non-dimensional numbers. By use of dimensional analysis (the Buckingham Π theorem, [6]), a physical problem can be cast in a minimum set of non-dimensional numbers. An example of a non-dimensional number is the Reynolds number, which physically relates to the laminar or turbulent nature of a fluid flow. When designing an experiment, the studied geometry is in most cases scaled (up or down) for ease of access or to allow the experiment to take place within a reasonable space/time. Geometric similitude of the two samples is then a given, resulting in the scaling of other quantities, such as the velocity, to preserve the Reynolds number. Flexibility can be gained by switching fluids, but invariably one will find that not all non-dimensional numbers can be kept constant in the experiment vs. reality. This is not a problem, but researchers should carefully assess it ahead of time and be aware of these choices. The ratios of properties should also be considered, for example the ratio of heat flux to mass flux, as these can be important, for instance when studying heat transfer deterioration or natural convection. It is thus important, ahead of the experimental design, to understand the dimensionless groups which describe the physics of the problem (and in some cases the ratios associated with them and the fluid property gradients/changes at the operating temperature). When scaling an experiment, explore the boundary conditions and the possible impact their non-ideal character has on the intended measurements. For "periodic structures" (illustrated in case study 1), it is key to ensure the tested configuration is indeed acting as a periodic one and to avoid any impact of the wall/channel on the results. This applies at various levels, e.g., the number of tube racks in a heat exchanger, fin rows in a tube-fin bank, unit cells in a cooling tower, or heat sinks cooling a CPU (see Ref. 7). This can be verified with computational tools or through expanded testing (i.e., increasing the count of the tested elements) and is critical to confirm that the result is not affected by boundary effects.
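As an illustration of such a scaling exercise, the minimal sketch below (in Python) computes the velocity needed in a scaled-up model, tested in water instead of air, to preserve the full-scale Reynolds number; the fluid properties, dimensions, and velocities are illustrative assumptions, not values from the cited studies, and the closing comment notes that the Prandtl number is then no longer matched.

```python
# Minimal sketch: preserving the Reynolds number when scaling a test section.
# All numbers are illustrative assumptions, not values from the chapter.

def reynolds(rho, u, L, mu):
    """Reynolds number Re = rho * u * L / mu."""
    return rho * u * L / mu

def velocity_for_target_re(re_target, rho, L, mu):
    """Velocity needed to reach a target Reynolds number at scale L."""
    return re_target * mu / (rho * L)

# Full-scale condition: air at roughly 300 K over a 0.10 m element.
rho_air, mu_air = 1.18, 1.85e-5      # kg/m^3, Pa.s (approximate)
L_full, u_full = 0.10, 3.0           # m, m/s
re_full = reynolds(rho_air, u_full, L_full, mu_air)

# Laboratory model scaled up 2:1 and tested in water instead of air.
rho_w, mu_w = 998.0, 1.0e-3          # kg/m^3, Pa.s (approximate, ~20 C)
L_model = 2.0 * L_full
u_model = velocity_for_target_re(re_full, rho_w, L_model, mu_w)

print(f"Full-scale Re  = {re_full:.0f}")
print(f"Model velocity = {u_model:.3f} m/s to match Re")
# Note: matching Re does not match the Prandtl number (Pr_air ~ 0.7,
# Pr_water ~ 7), so not all dimensionless groups are preserved -- exactly
# the trade-off discussed above.
```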
Verify the global balances: In an experiment, thermal or hydraulic energy is transferred by or to the fluid; for example, in a heat exchanger a wall exchanges heat with a nearby (flowing) fluid. As the thermal measurements are taken, it is important that the experiment is set up to allow for an independent verification of the global heat balance (or of the force balance for a hydraulic study). The term independent refers to using different instruments from those used to gather the main experimental data. With independent sensors, there is no risk of confounding sensor errors, and clarity is obtained on the heat flows or hydraulic losses within the system being studied. When setting up an experiment, ensure the overall balances can be recorded throughout the (transient) experiment with sufficient accuracy, focused on the key control volume being studied. This can be assessed ahead of the actual experiment, considering, for example, the accuracy of the transducers and the method used to determine fluid properties such as enthalpies. Researchers will find that achieving closure of these balances typically requires careful attention to instrument accuracy and to the loss terms discussed next (a minimal sketch of such a closure check is given at the end of this section).
Check your ambient losses and boundary conditions: Adiabatic boundary conditions are often assumed during the initial design; however, these do not exist in actual experiments. Even the thickest insulation layer will still result in heat loss, although a well-designed layer can make that loss small enough to be ignored. During the design of an experiment, it is important to consider losses to ambient and to establish measurement approaches which verify these at conditions representative of those present in the experiment. This can be done through artificial heating of the test sample (e.g., electrically) and recording the heat input required to reach a steady-state surface temperature, thus determining the heat lost to the environment at that condition. When performing such studies, it is important to consider the test object and how the measured losses will be used in the further analyses. Losses can also be generated by the test setup itself; for example, heat ingress not only through the piping feeding fluid to the test setup but also through the instrumentation itself (heat loss/ingress through thermocouple wires and fittings) should be considered if relevant to the tested problem. Ensure sufficient margin is available by design to reach a target temperature, as these heat losses can incrementally stack up to a large wattage lost to the environment if not properly accounted for. Fluid inflow, supplied by pumps or fans, can have a high turbulence intensity, can be swirling, and can be non-uniform if the test sample is located close by, all of which may affect the measurements. Verifying the inflow profile during measurements and putting measures in place to smooth the flow are important to ensure the right boundary conditions are present at the test sample.
Verify the steady or transient state: When extracting data and determining a measured quantity, an inherent assumption of steady state is often used (or, for transient experiments, that the change of the measured quantity follows the assumed approximation). The use of thick insulation or large metal masses in the experiment can result in long transients, and varying ambient conditions (diurnal fluctuations) can make it difficult to achieve the desired experimental steady state. Considering the dynamics early in the design and determining (e.g., through computation) the timescale to reach steady state are helpful to plan the measurement campaign and to set up the data analysis tools that track the experimental quality. It may in fact be better not to insulate a transient experiment, or to condition the insulation temperature such that only limited heat transfer will take place (as illustrated in case study 2).
Real-time monitoring: Many researchers visualize data "quality" on a dashboard during the measurements; the state of the experiment (e.g., measured through time derivatives of pressure or flow) and the calculated heat balance are good quality indicators to show continuously while measuring (a simple steady-state indicator of this kind is sketched at the end of this section). As the instrumentation can nowadays be read continuously, it is straightforward to set up such a real-time visualization, and this should always be done. It also provides the operators with a means to assess the status and safe operation of the test rig.
Prepare to be uncertain: It is good practice to prepare the experimental data reduction ahead of running the experiment, to support the instrumentation design and the subsequent uncertainty analysis. By analyzing the uncertainty propagation, the key factors which impact the outcome will be identified, and this can drive the need to measure these properties more accurately (e.g., with more sensors or with sensors of higher accuracy), as illustrated in, for example, [8]. No fluid property is free of uncertainty: do not underestimate the impact of uncertainty on fluid properties (which can be derived from the equations of state) or on heat transfer coefficients determined from published correlations, as using these in calculations can result in a strong increase in the overall uncertainty. Adding uncertainty data to the real-time monitoring can be helpful to further monitor the data quality, though this is often quite complex for highly derived properties. Quality checking the uncertainty analysis through sensitivity studies is also an effective way to test the data reduction methods for accuracy (a minimal propagation example is given at the end of this section).
The principles described above form the foundation for designing an experimental setup and will be reflected in the various case studies below. The sections below do not focus on the actual data and outcomes (the results can be found in the references) but zoom in on the practices applied in the experimental design and the choices made to control the boundary conditions.
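To make the global balance check concrete, the following minimal sketch compares the heat duty measured independently on the hot and cold sides of a water-to-water heat exchanger test; the flow rates, temperatures, and the constant specific heat are illustrative assumptions only.

```python
# Minimal sketch: independent closure check of the global heat balance in a
# water-to-water heat exchanger test. All readings and accuracies below are
# illustrative assumptions.

CP_WATER = 4180.0  # J/(kg.K), treated as constant here for simplicity

def duty(m_dot, t_in, t_out):
    """Heat duty Q = m_dot * cp * |T_in - T_out| in watts."""
    return m_dot * CP_WATER * abs(t_in - t_out)

# Hot side measured with one set of instruments...
q_hot = duty(m_dot=0.25, t_in=60.2, t_out=45.1)
# ...cold side with an independent flow meter and thermocouples.
q_cold = duty(m_dot=0.40, t_in=20.3, t_out=29.5)

closure = (q_hot - q_cold) / (0.5 * (q_hot + q_cold))
print(f"Q_hot = {q_hot:.0f} W, Q_cold = {q_cold:.0f} W, imbalance = {closure:+.1%}")

# A persistent imbalance beyond the combined instrument uncertainty points to
# unaccounted losses (ambient heat exchange, bypass flows) or sensor bias.
```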
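For the steady-state verification and real-time monitoring principles, a simple indicator of the kind that can be shown on a live dashboard is sketched below; the window length and slope threshold are assumptions that would need to be tuned per experiment.

```python
# Minimal sketch: a simple steady-state indicator for a live dashboard.
# Window length and drift threshold are assumptions to tune per experiment.

from collections import deque

class SteadyStateMonitor:
    def __init__(self, window=120, max_slope=0.01):
        """window: number of samples kept; max_slope: allowed drift per sample."""
        self.samples = deque(maxlen=window)
        self.max_slope = max_slope

    def update(self, value):
        """Append the latest reading (e.g., an outlet temperature)."""
        self.samples.append(value)

    def is_steady(self):
        """Least-squares slope over the window compared against the threshold."""
        n = len(self.samples)
        if n < self.samples.maxlen:
            return False
        x_mean = (n - 1) / 2
        y_mean = sum(self.samples) / n
        num = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(self.samples))
        den = sum((x - x_mean) ** 2 for x in range(n))
        return abs(num / den) < self.max_slope

# Usage: feed a reading each sample interval and only log "valid" data points
# once is_steady() returns True.
```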
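Finally, for the "prepare to be uncertain" principle, the sketch below shows a first-order, root-sum-square propagation of sensor uncertainties through a simple data reduction (the heat duty), evaluated with finite differences so the same routine could be reused for more complex reductions; all measured values and uncertainties are illustrative assumptions.

```python
# Minimal sketch: first-order (root-sum-square) uncertainty propagation for
# Q = m_dot * cp * dT, using central finite differences for the sensitivities.
# Measured values and uncertainties are illustrative assumptions.

def propagate(func, values, uncertainties, rel_step=1e-6):
    """Root-sum-square propagation using numerical sensitivities."""
    total = 0.0
    for i, (v, u) in enumerate(zip(values, uncertainties)):
        h = max(abs(v) * rel_step, 1e-12)
        hi = list(values); hi[i] = v + h
        lo = list(values); lo[i] = v - h
        sens = (func(*hi) - func(*lo)) / (2 * h)   # dQ/dx_i
        total += (sens * u) ** 2
    return total ** 0.5

def heat_duty(m_dot, cp, dT):
    return m_dot * cp * dT

vals = [0.25, 4180.0, 15.1]   # kg/s, J/(kg.K), K
uncs = [0.005, 20.0, 0.3]     # absolute 1-sigma uncertainties (assumed)

q = heat_duty(*vals)
u_q = propagate(heat_duty, vals, uncs)
print(f"Q = {q:.0f} W +/- {u_q:.0f} W ({u_q / q:.1%})")

# Comparing the individual terms (sens * u) reveals which sensor dominates the
# overall uncertainty and therefore where an accuracy upgrade pays off most.
```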