# Orientation

A player’s position changes over time depending on the player’s tactical role. However, as an absolute feature, its relevance is unclear unless properly contextualized, namely by considering whether it contributes to an offensive or defensive action. Therefore, the orientation of a player can provide an additional source of information if associated with the opponent’s goal, contributing towards an understanding of specific actions, such as shots and passes.

Given that $x_G[f] = (x_c[f], y_c[f])$ is the middle line position of the opponent goal, the orientation of player $i$ towards it, $\theta_i^G[f]$, can be calculated as follows:

$$\theta_i^G[f] = \operatorname{atan2}\left(y_c[f] - y_i[f],\; x_c[f] - x_i[f]\right) - \theta_i[f],$$

where $\theta_i[f]$ is the angular position of the player $i$ in the field.
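As a rough sketch, such an orientation can be computed as the bearing from the player to the goal midpoint, taken relative to the player’s angular position in the field. The function name, field dimensions, and coordinates below are illustrative assumptions, not part of the original method:

```python
import math

def orientation_to_goal(player_xy, goal_xy, player_heading):
    """Angle from a player to the opponent goal midpoint, measured
    relative to the player's own angular position (heading)."""
    bearing = math.atan2(goal_xy[1] - player_xy[1],
                         goal_xy[0] - player_xy[0])
    # Wrap the difference to (-pi, pi] so that deviations to either
    # side of the goal direction are symmetric.
    diff = bearing - player_heading
    return math.atan2(math.sin(diff), math.cos(diff))

# Hypothetical frame: player at midfield, facing a goal centred at
# (105, 34) on a 105 x 68 m pitch.
angle = orientation_to_goal((52.5, 34.0), (105.0, 34.0), player_heading=0.0)
print(angle)  # 0.0 -> the player is facing the goal directly
```

An orientation near zero then indicates a player facing the opponent’s goal, which is the contextual cue the text associates with shots and passes.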

# Trajectory Entropy

Studying the variability of football players lays the foundations for a whole series of possible new performance analysis methods (Carling, Bradley, McCall, & Dupont, 2016; Couceiro, Clemente, Martins, & Machado, 2014). Some non-linear methods, such as Lyapunov exponents (Burdet et al., 2006), Shannon’s entropy (Lopes & Tenreiro Machado, 2019), approximate entropy (Fonseca, Milho, Passos, Araujo, & Davids, 2012), and sample entropy (Menayo, Encarnacion, Gea, & Marcos, 2014), have been adopted to study human performance. Contrary to traditional methods for variability analysis, such as the standard deviation and the coefficient of variation, non-linear methods can provide additional information about the structure of the variability as it evolves over time (Couceiro, Clemente, Martins, & Machado, 2014).

Among the many variability analysis methods, the sample entropy is perhaps one of the simplest to compute, while still being one of the most unbiased entropy-related measures (Richman & Moorman, 2000). It is noteworthy that, regardless of the approach, entropy-related metrics can be applied to any spatio-temporal domain beyond kinematical analysis, such as physiological measures (Lake, Richman, Griffin, & Moorman, 2002). By trajectory entropy, however, one means the application of this metric, more specifically the sample entropy (*SampEn*), to the trajectory of the athlete and its inherent uncertainty or variability.

The techniques for estimating the sample entropy can be considered as a process represented by a time series and related statistics (Richman & Moorman, 2000). Since the sample entropy can only be applied to a unidimensional time series, the position of player $i$ in the field, previously identified as $x_i[f]$, is decomposed in its dimensions, herein simplified to the xy-plane of the football field, i.e., $x_i[f] = (x_i[f], y_i[f])$. To avoid writing the same equations for both dimensions, let us consider a generic time series $u_i[f]$ to be representative of either $x_i[f]$ or $y_i[f]$. This leads to a sequence of vectors $u_{m,i}[1], u_{m,i}[2], \ldots, u_{m,i}[M - m + 1] \in \mathbb{R}^{1 \times m}$, each one defined by the array $u_{m,i}[k] = \left[u_i[k] \; u_i[k+1] \; \cdots \; u_i[k+m-1]\right]$, where $m$ and $r$ are constants, with

$M$ being the length of the time series,

$m$ the length of sequences to be compared, and $r$ the tolerance calculated as a percentage of the standard deviation of the data being processed.

A distance between the sequences of vectors $u_{m,i}[k]$ can be generally defined as follows:

$$d\left(u_{m,i}[k_1], u_{m,i}[k_2]\right) = \max_{j = 0, \ldots, m-1} \left| u_i[k_1 + j] - u_i[k_2 + j] \right|.$$
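In the sample-entropy literature this is the Chebyshev (maximum) distance between two template vectors (Richman & Moorman, 2000). A minimal sketch, with illustrative array contents:

```python
def chebyshev(u1, u2):
    """Chebyshev distance: max_j |u1[j] - u2[j]| over the m samples
    of two template vectors of equal length."""
    return max(abs(a - b) for a, b in zip(u1, u2))

print(chebyshev([1.0, 2.0, 3.0], [1.5, 1.0, 3.2]))  # 1.0
```

Two templates are then considered a match whenever this distance does not exceed the tolerance $r$.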

Let $B_{k_1}$ and $A_{k_1}$ be, respectively, the number of vectors $u_{m,i}[k_2]$ within $r$ of $u_{m,i}[k_1]$ and the number of vectors $u_{m+1,i}[k_2]$ within $r$ of $u_{m+1,i}[k_1]$, with $1 \le k_2 \le M - m$ and $k_2 \ne k_1$. With this, one can define:

$$B_i^m[k_1] = \frac{1}{M - m - 1} B_{k_1}, \qquad A_i^m[k_1] = \frac{1}{M - m - 1} A_{k_1},$$

and their average as

$$B_i^m = \frac{1}{M - m} \sum_{k_1 = 1}^{M - m} B_i^m[k_1], \qquad A_i^m = \frac{1}{M - m} \sum_{k_1 = 1}^{M - m} A_i^m[k_1].$$

Considering a finite length of the time series $M$, the sample entropy of player $i$ can then be calculated as follows:

$$SampEn_i(m, r, M) = -\ln \frac{A_i^m}{B_i^m}.$$
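Putting the pieces together, the computation can be sketched in plain Python following the Richman and Moorman (2000) definition. The series, $m$, and $r$ below are illustrative; in practice $r$ is set as a percentage of the standard deviation of the series, and the sketch assumes at least one template match of each length exists:

```python
import math

def sample_entropy(u, m, r):
    """SampEn(m, r, M) of a unidimensional time series u: -ln(A / B),
    where B and A count template matches of length m and m + 1 within
    tolerance r under the Chebyshev distance (Richman & Moorman, 2000)."""
    M = len(u)

    def count_matches(length):
        # Both counts run over the same M - m starting indices.
        templates = [u[k:k + length] for k in range(M - m)]
        matches = 0
        for k1 in range(len(templates)):
            for k2 in range(len(templates)):
                if k1 == k2:
                    continue  # self-matches are excluded
                if max(abs(a - b)
                       for a, b in zip(templates[k1], templates[k2])) <= r:
                    matches += 1
        return matches

    B = count_matches(m)
    A = count_matches(m + 1)
    return -math.log(A / B)

# Illustrative values; r would normally be, e.g., 0.2 * std(u).
series = [2.0, 4.0, 6.0, 4.0, 2.0, 4.0, 6.0, 4.0, 2.0, 4.0]
print(sample_entropy(series, m=2, r=0.5))  # a perfectly repeating series yields zero entropy
```

Applied to the $x_i[f]$ and $y_i[f]$ series of a player, low values indicate regular, predictable trajectories, while higher values indicate greater positional variability.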

Libraries and scripts to calculate the sample entropy of a time series are available to the community, as is the case of Martínez-Cagigal’s SampEn^{1} (Martínez-Cagigal, 2018).