# The HyperSurface Energy Footprint, Cost, and Performance

The results in the previous section have confirmed that large MS with little or no discretization error (unit cell size tending to zero) and phase quantization error (large number of unit cell states) consistently yield the best performance for beam steering. In fact, physical size and resolution appear to be more important than the phase quantization. These remarks are important and provide insight into the optimal design points *if we just care about performance.* Therefore, the results above do not provide a unified design guideline, especially once we consider cost and complexity as aspects preventing unlimited scaling.

While user requirements indicate the acceptable thresholds of the performance metrics, fabrication restrictions and operational limitations (e.g., available power, space and other overheads) bound our design space [194]. Therefore, the design of an HSF should be tackled from a combined performance-cost perspective so as to deliver an effective and efficient platform for electromagnetic manipulation. In this section, we provide a qualitative discussion of how cost metrics could impact the design of an HSF.

To illustrate our case, first, we show a graphical representation of the HSF structure in Figure 8.11. Essentially, the HSF receives external programmatic commands from a gateway controller that are disseminated to the internal control logic at the controller chips via chip-to-chip interconnects and routing logic [137,140]. These commands contain the state (within the discrete set of possible states) that should be applied to each unit cell. The control logic translates the state into an analog value to be applied to the tuning element, e.g., the voltage applied to a varactor to achieve a target capacitance. Additionally, embedded sensors can pick up data from the environment and send it to the control logic or external devices, again via the communications plane.
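As a toy illustration of this state-to-analog translation, the sketch below maps a discrete unit-cell state index to a varactor bias voltage. The state count, the voltage range, and the linear mapping are our own illustrative assumptions, not the actual HSF control logic.

```python
def state_to_voltage(state: int, num_states: int = 4,
                     v_min: float = 0.0, v_max: float = 5.0) -> float:
    """Map a discrete state index in [0, num_states) to a bias voltage.

    Hypothetical linear mapping; a real controller would use a lookup
    table calibrated against the varactor's capacitance-voltage curve.
    """
    if not 0 <= state < num_states:
        raise ValueError("state outside the discrete state set")
    return v_min + state * (v_max - v_min) / (num_states - 1)
```

With 4 states and a 0-5 V range, states 0 through 3 map to evenly spaced voltages (0, 5/3, 10/3, and 5 V).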

## Cost and Power Models

Clearly, the addition of sensors, actuators, and other internal circuitry will impact the power consumption and fabrication cost of the device. For the sake of exemplification, let us assume that we have an HSF of *N* × *M* unit cells serviced by *n* × *m* chips. In a column, each chip of size *D _{c}* gives service to *X = M/m* unit cells of size *D _{u}*. The relation between *M* and *m* should be obtained based on *D _{c}* and *D _{u}*. A good approximation would be that a chip of size *D _{c} = D _{u}* (even smaller, perhaps *D _{u}/2*) can be placed at the center of a cluster of 2 × 2 unit cells, whereas a chip of size *D _{c} = 2D _{u}* (even smaller, perhaps *3D _{u}/2*) can be placed at the center of a cluster of 3 × 3 unit cells. An upper limit for the size of *D _{c}* would be 20-30 mm due to the manufacturing constraints of today's chips related to the die yield [73].

Figure 8.11 Graphical representation of a possible HSF implementation, which includes the metasurface plane with the metallic patches and the substrate, the sensing/actuation plane with the tuning elements and sensors, the computing/control plane containing the controller chips, and the communications plane containing the routing logic and interconnects. A gateway controller interfaces the HSF with the external world. From [155].

The size of *D _{c}*, on the other hand, also has a lower limit based on the technology and on the functionality attributed to it within an HSF. In the prototype case study outlined in this book, the chip should be large enough to host 24 pins to (1) power the chip and (2) provide a connection to neighbouring unit cells. The size of a pin depends on the technology. Thus, in summary, the cost of the HSF has two main components:

■ **Technology:** Designers may want to accommodate as many pins as possible per chip so as to improve the intra-HSF bandwidth, even if tiny chips are required to control high-frequency unit cells. In general, however, newer technologies allowing for smaller form factors are more expensive.

■ **Size:** Bigger chips are more expensive as they occupy a larger fraction of the fabrication die and have the potential to host more transistors/pins. Let us leave this term as a parameter, while assuming that cost increases linearly with chip size.

The cost of the MS and integration will probably depend on *M* or *D _{u}* in the first case, and on *m* in the latter case. In particular, we assume

*C = C _{chips} + C _{meta} + C _{int}*

where *C _{chips}* is the cost of chips, *C _{meta}* is the cost of the metasurface layer, and *C _{int}* is the cost of integrating the different sub-systems together. Within the first term, *C _{chips} = C _{T} · C _{s} · C _{n}*, where *C _{T}* is a technology factor and *C _{s}* is a size factor determining the cost of a single chip, whereas *C _{n}* is the number of chips to be integrated within the HSF. Here, *C _{n} = n · m*, where *n* and *m* are the number of chips in the *x* and *y* directions of the HSF plane.
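The cost model above can be sketched in a few lines of code. The helper names and the example constants are ours; the derivation of the chip counts from the cluster size follows the approximation given earlier (each chip serving a square cluster of unit cells).

```python
import math

def chip_counts(N: int, M: int, cells_per_chip: int) -> tuple:
    """Chips per dimension (n, m), assuming each chip serves a square
    cluster of cells_per_chip x cells_per_chip unit cells."""
    n = math.ceil(N / cells_per_chip)
    m = math.ceil(M / cells_per_chip)
    return n, m

def hsf_cost(n: int, m: int, c_tech: float, c_size: float,
             c_meta: float, c_int: float) -> float:
    """C = C_T * C_s * C_n + C_meta + C_int, with C_n = n * m chips."""
    c_n = n * m                      # number of chips in the HSF plane
    c_chips = c_tech * c_size * c_n  # technology and size factors per chip
    return c_chips + c_meta + c_int
```

For instance, a 100 × 100 HSF with 2 × 2 clusters needs a 50 × 50 grid of chips; the cost then follows directly from the technology, size, metasurface, and integration parameters.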

The energy consumption of the HSF, as in any other circuit or processor, has two components. First, the static power that the controller chips consume just for being connected to a power source due to, for instance, transistor leakage, and that does not depend on the workload. Second, the power that is consumed dynamically as a result of the switching of transistors and the passing of current. Thus, the total power *P* is divided between static power *P _{sta}* and dynamic power *P _{dyn}*:

*P = P _{sta} + P _{dyn}*

On the one hand, the static power mainly depends on the technology node, i.e., smaller transistors in newer technologies suffer from higher leakage currents, as well as on the number of chips. Further, if integrated components such as the varactors continually drain current, the static power will also depend on the number of unit cells. Some models of static power exist in the literature [153], but a deeper analysis and, ideally, measurements from existing prototypes [187] will be required to obtain an accurate approximation of the static power consumption of an HSF.

On the other hand, the dynamic power is basically the number of current-draining actions taken per second multiplied by the energy consumed by each of these actions. An example is the switching of transistors from the OFF state to the ON state, which generates a transient current. We could consider, as an approximation, modeling the energy consumed to change the state of a single unit cell, *E _{dyn}*. Then, the dynamic power would be

*P _{dyn} = α · f · N ^{2} · E _{dyn}*

where *α* ∈ (0, 1] is the probability of a unit cell changing state, *f* is the expected speed of state changes over time, which is application-dependent, and *N ^{2}* stands for the number of unit cells in the HSF. The energy consumed to change the state of a single unit cell, in its turn, depends on (i) the energy consumed to send the message from the gateway to the unit cell, which depends on the number of routers traversed within the HSF as discussed in Section 8.2, and (ii) the energy consumed for the controller to compute/apply the new state. These will in turn depend on the number of chips (each chip has one router). The traffic analysis can help to extract some approximate values of *α*, *f*, and the average number of router traversals from the gateway to the unit cell.
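Putting the two components together, a minimal sketch of the power model could look like the following. The parameter names (including the per-state-change energy term, here `e_dyn`) and any concrete values are illustrative assumptions.

```python
def hsf_power(p_static: float, alpha: float, f: float,
              N: int, e_dyn: float) -> float:
    """Total power P = P_sta + P_dyn of an N x N HSF, where
    P_dyn = alpha * f * N^2 * e_dyn (alpha: state-change probability,
    f: expected speed of state changes, e_dyn: energy per change)."""
    if not 0.0 < alpha <= 1.0:
        raise ValueError("alpha must lie in (0, 1]")
    p_dyn = alpha * f * N**2 * e_dyn  # dynamic switching component
    return p_static + p_dyn
```

As the model suggests, the dynamic term grows quadratically with the HSF side length *N*, so large surfaces with frequent reconfiguration dominate the power budget.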

## Application Specific Figures of Merit

An additional point worth making is that the present analysis does not take the application requirements into consideration. For instance, it is a well-known problem that, although narrow beams provide high efficiency (and may in fact be necessary in scenarios such as terahertz wireless communications [18]), slight target deviations can lead to loss of connectivity. Conversely, wider beams are less efficient, but also less prone to disruption.

For all these reasons, here we propose a figure of merit that attempts to put the different performance metrics together and introduce user requirements as well. The figure of merit is defined as

where *Aperture* is an arbitrary beam width set as a specific requirement by the user/application. To incorporate the multiple performance metrics, we equalize the units using normalized values. *D* and *SLL* are converted to percentages, whereas we model the tradeoff between beam width and accuracy by dividing *TD* by *HPBW.* This way, the importance of the *TD* value increases for narrow beams, which are the most vulnerable to connectivity issues. The last term models how close the MS is to achieving the specific beam width requirement set by the user or application. Note that, with all these considerations, the range of the figure of merit is *FoM* ∈ [0, 1] and a high value is preferred. Finally, we also note that weights can be applied to the different terms to create performance profiles according to the requirements of different applications. However, we leave such analysis for future work.
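Since the figure-of-merit equation itself is not reproduced here, the sketch below only illustrates how such normalized terms *could* be combined; the plain-product combination, the clamping choices, and the aperture-closeness ratio are our own assumptions, not the book's definition. Only the individual terms (directivity and side-lobe level as percentages, *TD/HPBW*, and closeness to the target beam width) follow the text.

```python
def fom_sketch(D_pct: float, SLL_pct: float, TD: float,
               HPBW: float, aperture: float) -> float:
    """Combine normalized beam metrics into a [0, 1] score (higher is better)."""
    directivity_term = D_pct / 100.0           # directivity as a fraction
    sll_term = 1.0 - SLL_pct / 100.0           # low side-lobe level preferred
    accuracy_term = 1.0 - min(TD / HPBW, 1.0)  # TD penalized more for narrow beams
    aperture_term = min(HPBW, aperture) / max(HPBW, aperture)  # closeness to target
    return directivity_term * sll_term * accuracy_term * aperture_term
```

Under this assumed combination, an ideal design (full directivity, no side lobes, zero target deviation, beam width matching the requested aperture) scores 1.0, and any shortfall in a single term drags the score down.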

Figure 8.12 plots the figure of merit as a function of the dimensional parameters with *N _{s} =* 4 (which provides a good balance between performance and cost). We repeat the plots for different values of *Aperture* (10°, 20°, 40°), illustrating cases of high, average, and low directionality requirements. At first glance, the MS size seems to play a bigger role in determining overall performance. For narrow beam applications, the figure of merit points to large and fine-grained MS as a necessity, whereas smaller and coarser-grained designs become affordable as the beam requirements are relaxed.

## Performance-Cost Analysis

To bridge this gap, the methodology presented in this chapter can be combined with parametrized models accounting for the power consumption or cost associated with the integrated controllers and circuitry. This would allow architects to evaluate performance-cost tradeoffs through combined figures of merit and, by adding weights to each metric, to find optimal design spaces.

To exemplify this, let us take the example of Section 8.1.4.2 and assume a primitive model stating that the power or cost of the HSF scales linearly with the number of unit cells per dimension. This assumption is based on the fact that more unit cells imply the use of more controllers and a higher transit of messages within the MS [137,140]. In our particular example, we divided the *FoM* above by the number of unit cells per dimension and normalized the result. The outcome, which we refer to as *FoM _{2}*, is shown in Figure 8.13. Intuitively, the tendency is to favor configurations with fewer unit cells within the range that yields acceptable performance.
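This division-and-normalization step can be sketched as below; the candidate values in the example are made up for illustration, and the normalization to a peak of 1.0 is our reading of "normalized the result".

```python
def fom2(fom_by_N: dict) -> dict:
    """Divide each FoM by its unit cells per dimension N, then normalize
    so the best performance-cost design scores 1.0."""
    raw = {N: f / N for N, f in fom_by_N.items()}
    peak = max(raw.values())
    return {N: v / peak for N, v in raw.items()}
```

For example, a design with *N* = 10 and FoM = 0.5 outranks one with *N* = 20 and FoM = 0.8 once cost is factored in, since the smaller surface achieves its (lower) performance with far fewer unit cells.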